Windows 8 is a make-or-break product launch for Microsoft. The OS will endure a slow start as traditional PC users delay upgrades, even as those eager for Windows tablets jump in. After that slow start in 2013, Windows 8 will take hold in 2014, keeping Microsoft relevant: still master of the PC market, but merely a contender in tablets and a distant third in smartphones.
Microsoft has long dominated PC units, with more than 95% of unit sales. The incremental gains of Apple’s Mac products over the last five years haven’t really changed that reality. But the tremendous growth of smartphones, and then tablets, has. If you combine the unit sales of all personal devices, Microsoft’s share of units has shrunk drastically, to about 30% in 2012.
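As a rough sanity check on that 30% figure, here is a back-of-the-envelope calculation. The unit volumes below are my own illustrative round numbers for 2012, not the report's forecasts:

```python
# Back-of-the-envelope check of Microsoft's share of all personal devices.
# Unit volumes are illustrative round numbers for 2012, not forecast data.
units_2012 = {
    "pcs": 350,          # millions; nearly all running Windows
    "smartphones": 700,  # millions; Windows Phone share is tiny
    "tablets": 120,      # millions; effectively no Windows share in 2012
}

windows_units = units_2012["pcs"] * 0.95  # ~95% of PC sales run Windows
total_units = sum(units_2012.values())

share = windows_units / total_units
print(f"Windows share of all personal devices: {share:.0%}")  # roughly 28%
```

Even with generous assumptions for the PC side, combining the three categories dilutes Windows to roughly the 30% neighborhood cited above.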
It’s hard to absorb the reality of the shift without a picture, so in the report “Windows: The Next Five Years,” we estimated and forecast the unit sales of PCs, smartphones, and tablets from 2008 to 2016 to create a visual. As you can see below in the chart of unit sales, Microsoft has grown and will continue to grow unit sales of Windows and Windows Phone. But the mobile market grew very fast in the last five years, while Microsoft had tiny share in smartphones and no share in tablets.
If you look at the results by share of all personal devices, below, you can see how big a shift happened over the last five years as smartphone units exploded and the iPad took hold.
Every culture has its coming of age rituals — Confirmation, Bar Mitzvah, being hunted by tribal elders, surviving in the wilderness, driving at high speed while texting — all of which mark the progress from childhood to adulthood. In the high-tech world, one of the rituals marking the maturation of a company is the user group. When a company has a strategy it wants to communicate, a critical mass of customers, and prospects bright enough that it wants to highlight them rather than obscure them, it is time for a user group meeting.
This year, a year after the acquisition of Novell by Attachmate and SUSE’s subsequent instantiation as a standalone division, and on the occasion of its 20th anniversary, SUSE held its first user group meeting. All in all, the portents were good, and SUSE got its core messages across to an audience of about 500 of its users as well as a cadre of the more sophisticated (IMHO) industry analysts.
Among My Key Takeaways:
SUSE is a stable company with rational management — With profitable revenues of over $200M and a publicly stated plan to hit $234M for the next fiscal year, SUSE is a reasonably sized company (technically a division of $1.3B Attachmate, but it looks and acts like an independent company), with growth rates that look to be a couple of points higher than its segment’s.
SUSE’s management has done an excellent job of focusing the company — SUSE, acknowledging its size disadvantage relative to competitor Red Hat, has chosen to focus heavily on enterprise Linux, publicly disavowing desktop and mobile device directions. SUSE claims that its share of the core enterprise segment is larger than its overall share relative to Red Hat. This is a hard number to even begin to tease out, but it feels like a reasonable claim.
Tablets aren’t the most powerful computing gadgets. But they are the most convenient.
They’re bigger than the tiny screen of a smartphone, even the big ones sporting nearly 5-inch screens.
They have longer battery life than any PC and better always-on capabilities — and will continue to beat any ultrathin/book/Air laptop on both counts. That makes them very handy for carrying around and using frequently, casually, and intermittently, even where there isn’t a flat surface or a chair on which to use a laptop.
And tablets are very good for information consumption, an activity that many of us do a lot of. Content creation apps are appearing on tablets. They’ll get a lot better as developers get used to building for touch-first interfaces, taking advantage of voice input, and adding motion gestures.
They’re even better for sharing and working in groups. There’s no barrier of a vertical screen, no distracting keyboard clatter, and it just feels natural to pass over a tablet, like a piece of paper, compared to spinning around a laptop.
Today HP announced a new set of technology programs and future products designed to move x86 server technology for both Windows and Linux more fully into the realm of truly mission-critical computing. My interpretation is that this is a combined defensive and offensive move on HP’s part: it protects HP as its Itanium/HP-UX portfolio slowly declines, and it offers attractive and potentially unique options for current and future customers who want to deploy increasingly critical services on x86 platforms.
Bearing in mind that the earliest of these elements will not be in place until approximately mid-2012, the key elements that HP is currently disclosing are:
ServiceGuard for Linux – This is a big win for Linux users on HP, removing a major operational and architectural hurdle for HP-UX migrations. ServiceGuard is a highly regarded clustering and HA facility on HP-UX, with many features for local and geographically distributed HA, and its absence on Linux is often cited as a risk in HP-UX migrations. Its availability by mid-2012 will remove yet another barrier to smooth migration from HP-UX to Linux, and will help HP retain the business as it moves off HP-UX.
Analysis engine for x86 – Analysis engine is internal software that provides system diagnostics, predictive failure analysis and self-repair on HP-UX systems. With an uncommitted delivery date, HP will port this to selected x86 servers. My guess is that since the analysis engine probably requires some level of hardware assist, the analysis engine will be paired with the next item on the list…
Enterprise laptops are on the shopping list for many I&O professionals I speak with every week, with some asking if Netbooks are the antidote to the MacBook Air for their people. Well, on the menu of enterprise laptops, I think of Netbooks as an appetizer -- inexpensive, but after an hour my stomach is growling again. Garden-variety ultraportables on the other hand are like a turkey sandwich -- everything I need to keep me going, but they make me sleepy halfway through the afternoon.
Ultrabooks are a new class of notebook promoted by Intel and are supposed to be a little more like caviar and champagne -- light and powerful, but served on business-class china with real silverware and espresso. At least that's what I took away after being briefed by Intel on the topic. I had the chance to sample HP's new Ultrabook fare in San Francisco a few weeks ago while they were still in the test kitchen, and it seems they took a little different approach. Not bad, just different.
It struck me that rather than beluga and Dom Perignon, HP has created more of a Happy Meal -- a tasty cheeseburger and small fries with a Diet Coke, in a lightweight, easy-to-carry package for a bargain price. It has everything the road warrior needs to get things done, and like a Happy Meal, they can carry it on the plane and set it on the tray table…even if the clown in front of them reclines. The Folio offers a Core i5-2467M processor, 4GB of RAM, a 13.3" LED display, 128GB of SSD storage, a 9-hour battery, and USB 3.0 and Ethernet ports as highlights, all for $900. It's a true bargain. I think I will call it the McUltrabook.
This was possibly the most important Nokia World event ever. Nokia had to demonstrate that it can deliver against its plans. In February 2011, Nokia communicated its intention to team up with Microsoft to develop its new platform and to “entrust” its Symbian operating system to Accenture. In total, 3,000 visitors from 70 countries attended Nokia World 2011 in London to hear and see what the “new Nokia” looks like.
In essence, it was clear what Nokia World 2011 would be all about before the actual event had even started. Nokia had to produce a device that can take on the iPhone and the Galaxy. At the event Nokia announced the launch of the first “real Windows phone” in the form of the Lumia 800. The result is an impressive device that certainly secured Nokia a seat at the table of the three leading smartphone platforms.
Well, maybe everybody is saying “cloud” these days, but my first impression of Microsoft Windows Server 8 (not the final name) is that Microsoft has been listening very closely to what customers want from an OS that can support both public and private enterprise cloud implementations. And most importantly, the things that they have built into WS8 for “clouds” also look like they make life easier for plain old enterprise IT.
Microsoft appears to have focused its efforts on several key themes, all of which benefit legacy IT architectures as well as emerging clouds:
Management, migration and recovery of VMs in a multi-system domain – Major improvements in Hyper-V and its management capabilities mean that I&O groups can easily build multi-system clusters of WS8 servers and easily migrate VMs across system boundaries. Multiple systems can be clustered over Fibre Channel, making it easier to implement high-performance clusters.
Multi-tenancy – A host of features, primarily around management and role-based delegation, make it easier and more secure to implement multi-tenant VM clouds.
Recovery and resiliency – Microsoft claims that it can fail over VMs from one machine to another in 25 seconds, a very impressive number indeed. While vendor performance claims are always like EPA mileage – you are guaranteed never to exceed this number – this is a major capability, with significant implications for HA architecture in any data center.
We have been repeatedly reminded that the requirements of hyper-scale cloud properties are different from those of the mainstream enterprise, but I am now beginning to suspect that the top strata of the traditional enterprise may be leaning in the same direction. This suspicion has been triggered by the combination of a recent day in NY visiting I&O groups in a handful of very large companies and a number of unrelated client interactions.
The pattern that I see developing is one of “haves” versus “have nots” in terms of their ability to execute on their technology vision with internal resources. The “haves” are the traditional large sophisticated corporations, with a high concentration in financial services. They have sophisticated IT groups, are capable of writing extremely complex systems management and operations software, and typically own and manage 10,000 servers or more. The “have nots” are the ones with more modest skills and abilities, who may own thousands of servers, but tend to be less advanced than the core FSI companies in terms of their ability to integrate and optimize their infrastructure.
The divergence in requirements comes from what they expect and want from their primary system vendors. The “have nots” are companies who understand their limitations and are looking for help from their vendors in the form of converged infrastructures, new virtualization management tools, and deeper integration of management software to automate operational tasks. These are people who buy HP c-Class or Cisco UCS, for example, and then add vendor-supplied and ISV management and automation tools on top of them in an attempt to control complexity and costs. They are willing to accept deeper vendor lock-in in exchange for the benefits of the advanced capabilities.
A project I’m working on for an approximately half-billion dollar company in the health care industry has forced me to revisit Hyper-V versus VMware after a long period of inattention on my part, and it has become apparent that Hyper-V has made significant progress as a viable platform for at least medium enterprises. My key takeaways include:
Hyper-V has come a long way and is now a viable competitor in Microsoft environments up through the mid-size enterprise, as long as DR/HA requirements are not too stringent and the organization is willing to use Microsoft’s System Center, Server Management Suite, and Performance and Resource Optimization, along with other vendor-specific software, as part of its management environment.
Hyper-V still has limitations in VM memory size, total physical system memory size and number of cores per VM compared to VMware, and VMware boasts more flexible memory management and I/O options, but these differences are less significant than they were two years ago.
For large enterprises, and for completely integrated management (particularly storage, HA, DR, and automated workload migration), and for what appears to be close to 100% coverage of workload sizes, VMware is still king of the barnyard. VMware also boasts an incredibly rich partner ecosystem.
For cloud, Microsoft has a plausible story but it is completely wrapped around Azure.
While I have not had the time (or, to be totally honest, the inclination) to develop a very granular comparison, VMware’s recent changes to its legacy licensing structure (and subsequent changes to the new pricing structure) suggest that license cost remains an attraction for Microsoft Hyper-V, especially if the enterprise is using Windows Server Enterprise Edition.
Over the past months server vendors have been announcing benchmark results for systems incorporating Intel’s high-end x86 CPU, the E7, with HP trumping all existing benchmarks with their recently announced numbers (although, as noted in x86 Servers Hit The High Notes, the results are clustered within a few percent of each other). HP recently announced new performance numbers for their ProLiant DL980, their high-end 8-socket x86 server using the newest Intel E7 processors. With up to 10 cores each, these new processors can bring up to 80 cores to bear on large problems such as database, ERP and other enterprise applications.
The performance results on the SAP SD 2-Tier benchmark, for example, at 25,160 SD users, show a performance improvement of 35% over the previous high-water mark of 18,635. The results seem to scale almost exactly with the product of core count x clock speed, indicating that neither the system hardware nor the supporting OS, in this case Windows Server 2008, is at its scalability limits. This gives us confidence that subsequent spins of the CPU will in turn yield further performance increases before hitting system or OS limitations. Results from other benchmarks show similar patterns as well.
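The scaling claim is easy to check arithmetically. The sketch below compares the benchmark improvement to the growth in core count x clock speed, assuming 8-socket configurations with 10-core parts at roughly 2.4 GHz for the new result and 8-core parts at roughly 2.26 GHz for the prior one (the clock figures are my own estimates, not from the announcement):

```python
# Compare the SAP SD 2-Tier improvement to raw (cores x clock) growth.
new_users, old_users = 25160, 18635          # published SD user results
benchmark_gain = new_users / old_users       # ~1.35, i.e. a 35% improvement

# Assumed configurations: 8 sockets each; clock speeds are my estimates.
new_capacity = 8 * 10 * 2.4                  # 80 cores at ~2.4 GHz
old_capacity = 8 * 8 * 2.26                  # 64 cores at ~2.26 GHz
capacity_gain = new_capacity / old_capacity  # ~1.33

print(f"benchmark gain: {benchmark_gain:.2f}x, capacity gain: {capacity_gain:.2f}x")
```

Under these assumptions the two ratios line up within a couple of percent, consistent with the near-linear scaling described above.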
Key takeaways for I&O professionals include:
Expect to see at least 25% to 35% throughput improvements in many workloads with systems based on the latest high-performance CPUs from Intel. Where data center space and cooling resources are constrained, this can be a significant boost for a same-footprint upgrade of a high-end system.
For Unix to Linux migrations, target platform scalability continues to become less of an issue.