Employees who use smart devices — PCs or mobile devices — for work have expanded their use of technology more than most people realize. How many devices do you think a typical information worker uses for work? If you ask only the IT staff, the answer will be that most use just a PC, some use a smartphone, and a few use a tablet. But our latest Forrsights workforce employee survey asked more than 9,900 information workers in 17 countries about all of the devices they use for work, including personal devices used for work purposes. It turns out that they use an average of about 2.3 devices.
About 74% of the information workers in our survey used two or more devices for work — and 52% used three or more! This means that the typical information worker has to figure out how to manage their information from more than one device. So they’ll be increasingly interested in work systems and personal cloud services that enable easy multidevice access, such as Dropbox, Box, SugarSync, Google Docs/Apps, Windows Live, and Apple iCloud.
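Those shares are enough to roughly reconstruct the 2.3-device average. A minimal sketch in Python (the assumption that the "three or more" group averages exactly three devices is mine, not the survey's):

```python
# Rough reconstruction of the ~2.3 device average from the quoted shares.
# Assumption (not in the source): workers with "three or more" devices
# are counted as having exactly three.
shares = {
    1: 1.00 - 0.74,  # 26% use exactly one device
    2: 0.74 - 0.52,  # 22% use exactly two
    3: 0.52,         # 52% use three or more (treated as three here)
}

average_devices = sum(n * share for n, share in shares.items())
print(round(average_devices, 2))  # about 2.3 under this assumption
```

Even with the conservative "exactly three" assumption, the weighted average lands at roughly 2.3, consistent with the survey's headline figure.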
When you dig into the data, the mix of devices info workers use for work is different from what IT provides: about 25% are mobile devices, not PCs, and 33% run an operating system other than Microsoft Windows.
My blog post Apple Infiltrates The Enterprise: 1/5 Of Global Info Workers Use Apple Products For Work! got lots of visibility because of how hot Apple is right now, but our data is much broader than just Apple. Our Forrsights Workforce and Hardware surveys have lots more data about all types of PCs and smart devices that information workers use for work, including their operating systems — and we even know which devices they use only for personal purposes.
For example, as of the fall of 2011, the top three smartphone OSes have essentially the same share of the installed base of smartphones used for work by information workers across the globe (full-time workers in companies with 20 or more employees who use a PC, tablet, or smartphone for work one hour or more per day). See the chart below and the reference in the Monday, January 30, New York Times article on BlackBerry in Europe.
The proposed acquisitions of SuccessFactors by SAP and of Emptoris by IBM got me thinking about the impact of market consolidation on buyers, particularly the difference between dealing with independent specialists and dealing with technology giants selling a large portfolio of products and services. Sourcing professionals talk about wanting “one throat to choke,” but personally I’ve never met one with hands big enough to get round the neck of a huge vendor such as IBM or Oracle. Moreover, many of the giants organize their sales teams by product line to ensure they fully understand the product they are selling, rather than giving customers one account manager for the whole portfolio who may not understand any of it in sufficient depth. Our clients complain about having to deal with just as many reps as before the acquisitions: the reps all now carry the same logo on their business cards, but they can’t fix problems outside their own area, nor negotiate based on the complete relationship. It seems that buyers end up like Hercules, wrestling either with a Nemean lion or with a Lernaean hydra.
The acquirers’ press releases tend to take it for granted that customers will be better off with the one-stop shop. Bill McDermott, co-CEO of SAP, said, “Together, SAP and SuccessFactors will create tremendous business value for customers,” while Lars Dalgaard, founder and CEO of SuccessFactors, talked about “expanding relationships with SAP’s 176,000 customers.” Craig Hayman, general manager of industry solutions at IBM, said, “Adding Emptoris strengthens the comprehensive capabilities we deliver and enables IBM to meet the specific needs of chief procurement officers.”
Today HP announced a new set of technology programs and future products designed to move x86 server technology, for both Windows and Linux, more fully into the realm of truly mission-critical computing. My interpretation is that these moves are both defensive and offensive: they protect HP as its Itanium/HP-UX portfolio slowly declines, and they offer attractive and potentially unique options for current and future customers who want to deploy increasingly critical services on x86 platforms.
Bearing in mind that the earliest of these elements will not be in place until approximately mid-2012, the key elements that HP is currently disclosing are:
ServiceGuard for Linux – This is a big win for Linux users on HP and removes a major operational and architectural hurdle for HP-UX migrations. ServiceGuard is a highly regarded clustering and high-availability (HA) facility on HP-UX, with many features for local and geographically distributed HA, and its absence is often cited as a risk in HP-UX migrations. Its availability on Linux by mid-2012 will remove yet another barrier to smooth migration and will help ensure that HP retains the business as it moves off HP-UX.
Analysis engine for x86 – The analysis engine is internal software that provides system diagnostics, predictive failure analysis, and self-repair on HP-UX systems. HP will port it to selected x86 servers, although the delivery date is uncommitted. My guess is that since the analysis engine probably requires some level of hardware assist, it will be paired with the next item on the list…
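Conceptually, predictive failure analysis watches component health counters for a worsening trend and flags the component before it actually fails, so self-repair (or a service call) can happen proactively. A toy sketch of the idea in Python (the window, threshold, and counter semantics are illustrative assumptions, not HP's actual implementation):

```python
from collections import deque

def make_failure_predictor(window=5, threshold=3):
    """Return a predictor that flags a component when its correctable
    error count rises across `threshold` adjacent samples within the
    last `window` observations. Purely illustrative; real analysis
    engines use far richer models and hardware telemetry."""
    history = deque(maxlen=window)

    def observe(error_count):
        history.append(error_count)
        samples = list(history)
        # Count adjacent sample pairs where the error counter rose.
        rises = sum(1 for a, b in zip(samples, samples[1:]) if b > a)
        return rises >= threshold  # True => predict failure

    return observe

predict = make_failure_predictor()
samples = [0, 1, 3, 7, 12, 20]   # steadily rising correctable-error counts
flags = [predict(n) for n in samples]
print(flags)  # the rising trend gets flagged once enough evidence accumulates
```

The point is simply that a monotonically climbing correctable-error counter is a leading indicator of a hard failure; the real engine adds hardware assist to collect those counters cheaply and continuously.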
HP made the right decision today to keep the Personal Systems Group. Beyond the reasons cited (supply chain and sales synergies, and the expense of spinning out), it's also crucial for HP to remain in the market for personal devices, which is entering a period of radical transformation and opportunity. The innovations spawned first by RIM with the BlackBerry, followed by the transformative effects of Apple's iPhone and iPad, are beginning to ripple into the PC market. Apple's MacBook Air and Lion operating system, combined with Microsoft's Metro interface for Windows 8, herald the beginning of a transformation of personal computing devices. By keeping PSG, HP has the opportunity to innovate and differentiate in a PC market that will move away from commodity patterns.
For strategists at vendors of all sizes, one of the lessons of HP's decision is that consumer businesses are becoming more relevant to succeeding in commercial products for end users. During the announcement call today, CEO Meg Whitman talked about the importance of "consumerization" in winning business from enterprises. I heartily endorse that view and look forward to sharing a report soon on how consumerization is changing commercial product development.
Do you think consumerization was a part of why HP kept PCs?
What effect do you think consumerization will have in IT markets?
I just spent several days at Dell World and came away with the impression of a company that is really trying to change its image. Old Dell was boxes, discounts, and a low-cost supply chain. New Dell is applications, solutions, cloud (now there’s a surprise!), and investments in software and integration. OK, good image, but what’s the reality? All in all, I think they are telling the truth about their intentions, and their investments continue to be aligned with those intentions.
As I wrote about a year ago, Dell seems intent on climbing up the enterprise food chain. Its investment in several major acquisitions, including Perot Systems for services and a string of advanced storage, network, and virtual infrastructure solution providers, has kept the momentum going, and the products have been following to market. At the same time, I see solid signs of continued investment in underlying hardware, and Dell's status as the #1 x86 server vendor in North America and #2 worldwide remains an indication of its ongoing success in its traditional niches. While Dell is not a household name in vertical solutions, it has competent offerings in health care, education, and trading, and several of the initiatives I mentioned last year are definitely further along and more mature, including continued refinement of the VIS offerings and deep integration of the much-improved DRAC systems management software into mainstream management consoles from VMware and Microsoft.
As soon as you think you understand software companies’ policies on virtualization, a new problem appears that makes you tear your hair out and scratch your now-bald head. This month’s conundrum is whether or not VMware’s ThinApp product breaches your Microsoft Windows license agreement:
However, Microsoft, via its knowledge base, claims that “Running multiple versions of Windows Internet Explorer, or portions of Windows Internet Explorer, on a single instance of Windows is an unlicensed and unsupported solution.” http://support.microsoft.com/kb/2020599/en-us#top
VMware doesn’t warn customers that ThinApp could cause them Microsoft licensing problems, but neither does it claim that it is legal. It merely advises customers to check with Microsoft.
I just attended IDF and I’ve got to say, Intel has certainly gotten the cloud message. Almost everything was centered on clouds, from the high-concept keynotes to the presentations on low-level infrastructure, although if you dug deep enough there was content for good old-fashioned data center and I&O professionals. Some highlights:
Chips and processors and low-level hardware
Intel is, after all, a semiconductor foundry, and despite their expertise in design, their true core competitive advantage is their foundry operations – even their competitors grudgingly acknowledge that they can manufacture semiconductors better than anyone else on the planet. As a consequence, showing off new designs and processes is always front and center at IDF, and this year was no exception. Last year it was Sandy Bridge, the 32nm “tock” that brought a significantly redesigned microarchitecture to the same process as Westmere. This year it was Ivy Bridge, the 22nm “tick” of the Intel “tick-tock” design cycle: a shrink of Sandy Bridge onto Intel’s new 22nm process. Ivy Bridge seems to have inherited Intel’s recent focus on power efficiency, with major improvements beyond the already solid advantages of the 22nm process itself, including deeper P-states and the ability to actually shut down parts of the chip when it is idle. While they did not discuss the server variants in any detail, the desktop versions will get an entirely new integrated graphics processor, which they are obviously hoping will blunt AMD’s resurgence in client systems. On the server side, if I were to guess, I would guess more cores and larger caches, along with increased support for virtualization of I/O beyond what they currently have.
Well, maybe everybody is saying “cloud” these days, but my first impression of Microsoft Windows Server 8 (not the final name) is that Microsoft has been listening very closely to what customers want from an OS that can support both public and private enterprise cloud implementations. And most importantly, the things that they have built into WS8 for “clouds” also look like they make life easier for plain old enterprise IT.
Microsoft appears to have focused its efforts on several key themes, all of which benefit legacy IT architectures as well as emerging clouds:
Management, migration, and recovery of VMs in a multi-system domain – Major improvements in Hyper-V and management capabilities mean that I&O groups can easily build multi-system clusters of WS8 servers and easily migrate VMs across system boundaries. Multiple systems can be clustered with Fibre Channel, making it easier to implement high-performance clusters.
Multi-tenancy – A host of features, primarily around management and role-based delegation, makes it easier and more secure to implement multi-tenant VM clouds.
Recovery and resiliency – Microsoft claims that they can failover VMs from one machine to another in 25 seconds, a very impressive number indeed. While vendor performance claims are always like EPA mileage – you are guaranteed never to exceed this number – this is an impressive claim and a major capability, with major implications for HA architecture in any data center.
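One practical way to sanity-check a failover number like this in your own environment is to probe the VM's service at a fixed interval during a forced failover and measure the longest run of unanswered probes. A sketch of the bookkeeping in Python (the probe log here is synthetic; in practice the booleans would come from timed TCP connects or pings against the VM):

```python
def longest_outage(probes, interval_s=1.0):
    """Given an ordered list of probe results (True = service answered),
    return the longest continuous outage in seconds, assuming one probe
    every `interval_s` seconds. Illustrative measurement bookkeeping only."""
    worst = current = 0
    for ok in probes:
        current = 0 if ok else current + 1  # extend or reset the outage run
        worst = max(worst, current)
    return worst * interval_s

# Synthetic probe log: the service drops for 25 one-second probes mid-failover.
probes = [True] * 10 + [False] * 25 + [True] * 10
print(longest_outage(probes))  # 25.0 seconds of observed downtime
```

Measured this way, your own number will include application restart and network reconvergence time, which is why real-world results typically exceed the vendor's headline figure.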