We have been repeatedly reminded that the requirements of hyper-scale cloud properties are different from those of the mainstream enterprise, but I am now beginning to suspect that the top strata of the traditional enterprise may be leaning in the same direction. This suspicion has been triggered by the combination of a recent day in NY visiting I&O groups in a handful of very large companies and a number of unrelated client interactions.
The pattern that I see developing is one of “haves” versus “have nots” in terms of their ability to execute on their technology vision with internal resources. The “haves” are the traditional large, sophisticated corporations, with a high concentration in financial services. They have sophisticated IT groups, are capable of writing extremely complex systems management and operations software, and typically own and manage 10,000 servers or more. The “have nots” are the ones with more modest skills and abilities, who may own thousands of servers but tend to be less advanced than the core FSI companies in terms of their ability to integrate and optimize their infrastructure.
The divergence in requirements comes from what they expect and want from their primary system vendors. The have nots are companies who understand their limitations and are looking for help from their vendors in the form of converged infrastructures, new virtualization management tools, and deeper integration of management software to automate operational tasks. These are people who buy HP c-Class or Cisco UCS, for example, and then add vendor-supplied and ISV management and automation tools on top of them in an attempt to control complexity and costs. They are willing to accept deeper vendor lock-in in exchange for the benefits of the advanced capabilities.
A project I’m working on for an approximately half-billion dollar company in the health care industry has forced me to revisit Hyper-V versus VMware after a long period of inattention on my part, and it has become apparent that Hyper-V has made significant progress as a viable platform for at least medium enterprises. My key takeaways include:
Hyper-V has come a long way and is now a viable competitor in Microsoft environments up through the mid-size enterprise, provided that DR/HA requirements are not too stringent and that the organization is willing to use Microsoft’s System Center (including the Server Management Suite and Performance and Resource Optimization) as well as other vendor-specific pieces of software as part of its management environment.
Hyper-V still has limitations in VM memory size, total physical system memory size, and number of cores per VM compared to VMware, and VMware boasts more flexible memory management and I/O options, but these differences are less significant than they were two years ago.
For large enterprises and for complete integrated management, particularly storage, HA, DR and automated workload migration, and for what appears to be close to 100% coverage of workload sizes, VMware is still king of the barnyard. VMware also boasts an incredibly rich partner ecosystem.
For cloud, Microsoft has a plausible story but it is completely wrapped around Azure.
While I have not had the time (or, if I’m being totally honest, the inclination) to develop a very granular comparison, VMware’s recent changes to its legacy licensing structure (and subsequent changes to the new pricing structure) do suggest that license cost remains an attraction for Microsoft Hyper-V, especially if the enterprise is already using Windows Server Enterprise Edition.
Over the past months, server vendors have been announcing benchmark results for systems incorporating Intel’s high-end x86 CPU, the E7, with HP topping all existing benchmarks with its recently announced numbers (although, as noted in x86 Servers Hit The High Notes, the results are clustered within a few percent of each other). HP’s new performance numbers are for the ProLiant DL980, its high-end 8-socket x86 server using the newest Intel E7 processors. With up to 10 cores each, these new processors can bring up to 80 cores to bear on large problems such as database, ERP, and other enterprise applications.
The performance results on the SAP SD 2-Tier benchmark, for example, at 25,160 SD users, show a performance improvement of 35% over the previous high-water mark of 18,635. The results seem to scale almost exactly with the product of core count × clock speed, indicating that neither the system hardware nor the supporting OS, in this case Windows Server 2008, is at its scalability limits. This gives us confidence that subsequent spins of the CPU will in turn yield further performance increases before hitting system or OS limitations. Results from other benchmarks show similar patterns as well.
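As a rough sanity check on the scaling claim, the reported benchmark ratio can be compared against the increase in aggregate compute (core count × clock speed). The socket, core-count, and clock figures for the previous-generation system below are illustrative assumptions for the sketch, not published configuration details:

```python
# Rough sanity check: does the SAP SD gain track core count x clock speed?
# The configurations below are hypothetical illustrations, not published specs.

new_score, old_score = 25160, 18635          # SD users, from the benchmark results
score_ratio = new_score / old_score          # reported throughput improvement (~1.35)

# Hypothetical configurations (8 sockets in both cases):
new_cores, new_clock_ghz = 80, 2.4           # 8 x 10-core E7 (assumed clock)
old_cores, old_clock_ghz = 64, 2.27          # 8 x 8-core prior generation (assumed)

compute_ratio = (new_cores * new_clock_ghz) / (old_cores * old_clock_ghz)

print(f"score ratio:   {score_ratio:.2f}")
print(f"compute ratio: {compute_ratio:.2f}")
```

If the two ratios track closely, as they do under these assumptions, the gain is explained almost entirely by added raw compute, which is the article’s point: neither the platform nor the OS is yet the bottleneck.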
Key takeaways for I&O professionals include:
Expect to see at least 25% to 35% throughput improvements in many workloads with systems based on the latest high-performance CPUs from Intel. In situations where data center space and cooling resources are constrained, this can be a significant boost from a same-footprint upgrade of a high-end system.
For Unix-to-Linux migrations, target platform scalability continues to become less of an issue.
In a coordinated, trans-continental series of presentations at Computex in Taipei and All Things D9 in Rancho Palos Verdes, California, Microsoft revealed key details about the next version of its Windows operating system, code-named “Windows 8.” Windows 8 is a “reimagining” of Windows from top to bottom: new chipsets, new hardware, a new user interface, and a new application model. Microsoft has not yet announced a release date (or year) for Windows 8, but intends for Windows 8 to power everything from tablets to clamshells to desktops and larger surfaces. The next version of Windows will:
Run natively on system-on-a-chip (SoC) designs, including ARM-based processors. The importance of this development is hard to overstate. Windows on ARM means that Windows devices will get online faster and stay online longer. They can take on new form factors, including tablets and hardware that has yet to be invented.
Deliver touch-first experiences, while supporting legacy peripherals and devices. Windows 7 “supported” touch but was not “touch-first,” a distinction apparent to anyone observing the use of a Windows 7 tablet or an HP TouchSmart PC. Windows 8 works with keyboards and mice but is truly touch-first, with a redesigned start screen (no more “start” menu!) and a tile-based UI similar to Windows Phone 7.
A reporter just asked me what I thought HP's earnings meant in the context of the post-PC era and I thought I'd share my response:
HP’s drop in PC shipments is not unique in the industry—Acer and other companies have also reported a drop in their recent quarters. And let me say this loud and clear: Tablet cannibalization is only a minor contributor to soft PC sales. The bigger factor is the Windows release cycle—so many consumers bought new PCs when Windows 7 came out, and without a new version of Windows this year, there isn’t the same catalyst to buy. Forrester’s data shows that 34% of US online consumers report having bought a PC in the past 12 months, and an additional 25% bought one 12-24 months ago. Tablet owners are actually more likely than US online consumers in general to have recently bought a PC: 44% in the past 12 months and 28% in the 12 months before that.
Most of the hype in advance of today’s Apple media event is rightly about a new iPad. Sarah Rotman Epps will post on her blog about the new iPad for consumer product strategists after the announcement. I’m focused on the published reports that Apple’s MobileMe service will be upgraded. I cited MobileMe as an example of emerging personal cloud services in a July 2009 report, and I’m working on a follow-on report now. MobileMe is Apple’s horse in a contest with Facebook, Google, Microsoft, and others to shift personal computing from being device-centric to user-centric, so that you and I don’t need to think about which gadget has the apps or data that we want. The vision of personal cloud is that a combination of local apps, cached data, and cloud-based services will put the right information in the right device at the right time, whether on personal or work devices. The strengths of MobileMe today are:
Synced contacts, calendar, Safari bookmarks, and email account settings, as well as IMAP-based MobileMe email accounts, for Web, Mac, Windows, and iOS devices.
Synced Mac preferences, including app and system preferences.
MobileMe Gallery for easy uploading and sharing of photos and videos.
Today Forrester published its revised US consumer tablet forecast, updating its previous forecast from June 2010. When Apple's iPad debuted, we saw the device as a game-changer but were too conservative with our forecast. Since then, we've fielded additional consumer surveys and an SMB and enterprise survey, conducted additional supply-side research, and seen more sales numbers from Apple. We've had briefings from many companies that will release new tablets at CES. All of these inputs have led us to revise our US consumer tablet forecast for 2010 upward to 10.3 million units, and we expect sales to more than double in 2011 to 24.1 million units. Of those sales, the lion's share will be iPads, and despite many would-be competitors that will be released at CES, we see Apple commanding the vast majority of the tablet market through 2012.
Forrester's US Consumer Tablet Forecast, updated Jan. 4, 2011:
I’ve recently had the opportunity to talk with a small sample of SLES 11 and RHEL 6 Linux users, all developing their own applications. All were long-time Linux users, and two of them, one in travel services and one in financial services, had applications that can be described as both large and mission-critical.
The overall message is encouraging for Linux advocates, both the calm, rational type and those who approach it with near-religious fervor. The latest releases from SUSE and Red Hat, both based on the 2.6.32 Linux kernel, show significant improvements in scalability and modest improvements in iso-configuration performance. One user reported that an application that previously had maxed out at 24 cores with SLES 10 was now nearing production certification with 48 cores under SLES 11. Performance scalability was reported as “not linear, but worth doing the upgrade.”
Overall memory scalability under Linux is still a question mark, since the widely available x86 platforms do not exceed 3 TB of memory, but initial reports from a user familiar with HP’s DL980 verify that the new Linux kernel can reliably manage at least 2 TB of RAM under heavy load.
File system options continue to expand as well. The older Linux FS standard, ext4, which can scale to “only” 16 TB, has been joined by additional options such as XFS (contributed by SGI), which has been implemented in several installations with file systems in excess of 100 TB, relieving a limitation that may have been more psychological than practical for most users.
Yesterday, Apple announced that it had sold 4.19M iPads in its fiscal Q4 2010, up from 3.27M in Q3. That means it sold more iPads than Macs in Q4, even though quarterly Mac sales were the highest they've ever been: 3.89M, a 27% unit sales increase from the year-ago quarter. Given that calendar Q4 sales typically account for 35%-40% of consumer electronics sales, we could be looking at 15M+ iPads sold globally for Apple in its first year on the market, a year that spans only three fiscal quarters. I am not the only analyst saying "Wow" right now.
There were tons of interesting tidbits in Apple's earnings call yesterday, but I want to focus on two points that I know are plaguing product strategists in this area. In particular, Steve Jobs attacked:
iPad mania has reached full tilt: Apple announced that it has sold more than 1 million units, and Apple’s competitors (like RIM and potentially Google) are rushing to get their own products out (or not, as the case may be for HP). But there’s something very significant about the device that has nothing to do with how many units it will sell. What’s revolutionary about the iPad is the experience that it delivers: The iPad is a new kind of PC that ushers in an era of Curated Computing.
Forrester defines “Curated Computing” as:
A mode of computing where choice is constrained to deliver less complex, more relevant experiences.