A project I’m working on for an approximately half-billion-dollar company in the health care industry has forced me to revisit Hyper-V versus VMware after a long period of inattention on my part, and it has become apparent that Hyper-V has made significant progress as a viable platform for at least midsize enterprises. My key takeaways include:
Hyper-V has come a long way and is now a viable competitor in Microsoft environments up through the midsize enterprise, as long as DR/HA requirements are not too stringent and the organization is willing to use Microsoft’s System Center (including the Server Management Suite and Performance and Resource Optimization), as well as other vendor-specific software, as part of its management environment.
Hyper-V still has limitations in VM memory size, total physical system memory size, and number of cores per VM compared to VMware, and VMware boasts more flexible memory management and I/O options, but these differences are less significant than they were two years ago.
For large enterprises, and for complete, integrated management (particularly storage, HA, DR, and automated workload migration) with what appears to be close to 100% coverage of workload sizes, VMware is still king of the barnyard. VMware also boasts an incredibly rich partner ecosystem.
For cloud, Microsoft has a plausible story, but it is completely wrapped around Azure.
While I have not had the time (or, if I’m being totally honest, the inclination) to develop a very granular comparison, VMware’s recent changes to its legacy licensing structure (and subsequent changes to the new pricing structure) suggest that license cost remains an attraction for Microsoft Hyper-V, especially if the enterprise is already using Windows Server Enterprise Edition.
An important prerequisite for a full cloud broker model is the technical capability of cloud bursting:
Cloud bursting is the dynamic relocation of workloads from private environments to cloud providers and vice versa. A workload can represent IT infrastructure or end-to-end business processes.
The initial meaning of cloud bursting was relatively simple. Consider this scenario: An enterprise with traditional, non-cloud infrastructure is running out of capacity and temporarily gets additional compute power from a cloud service provider. Many enterprises have now established private clouds, and cloud bursting fits even better here, with dynamic workload relocation between private clouds, public clouds, and the more private provider models in the middle; Forrester calls these virtual private clouds. At peak times, the private cloud bursts into the next cloud level.
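To make the bursting decision concrete, here is a minimal sketch in Python of the placement logic described above. The 80% threshold and every function name are hypothetical placeholders for illustration, not any provider’s actual API:

    # Illustrative cloud-bursting policy: keep workloads in the private
    # cloud while capacity allows, and "burst" the overflow to a public
    # provider at peak times. All names and thresholds are hypothetical.

    BURST_THRESHOLD = 0.80  # burst once private utilization exceeds 80%

    def private_cloud_utilization() -> float:
        """Return current private-cloud utilization as a fraction (stubbed)."""
        return 0.85  # stub value so the example runs

    def provision_private(workload: str) -> None:
        print(f"Provisioning {workload} in the private cloud")

    def provision_public(workload: str) -> None:
        print(f"Bursting {workload} to the public cloud provider")

    def place_workload(workload: str) -> None:
        if private_cloud_utilization() < BURST_THRESHOLD:
            provision_private(workload)
        else:
            provision_public(workload)

    place_workload("overnight-batch-run")

The reverse path, repatriating workloads once private utilization drops, is the "vice versa" in the definition above.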
An essential step before leveraging cloud bursting is properly classifying workloads: for each workload, determine the most public cloud level it can tolerate, based on technical restrictions and data privacy needs (including compliance concerns). A conservative enterprise could structure its workloads into three classes of cloud (sketched in code after the list):
Productive workloads of back-office data and processes, such as financial applications or customer-related transactions: These need to remain on-premises. An example is the trading system of an investment bank.
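As a rough illustration, here is how such a classification might be encoded in Python. The levels mirror the private / virtual private / public spectrum described earlier; the flags and rules are hypothetical assumptions, since only the first class is spelled out above:

    # Sketch: assign each workload the most public cloud level that its
    # privacy and compliance constraints allow. Flags and rules here are
    # hypothetical; real classifications would come from written policy.

    from enum import Enum

    class CloudLevel(Enum):
        ON_PREMISES = 1      # e.g., an investment bank's trading system
        VIRTUAL_PRIVATE = 2  # the more private provider models in the middle
        PUBLIC = 3

    def classify(workload: dict) -> CloudLevel:
        if workload.get("back_office") or workload.get("regulated_data"):
            return CloudLevel.ON_PREMISES
        if workload.get("internal_only"):
            return CloudLevel.VIRTUAL_PRIVATE
        return CloudLevel.PUBLIC

    print(classify({"back_office": True}))    # CloudLevel.ON_PREMISES
    print(classify({"internal_only": True}))  # CloudLevel.VIRTUAL_PRIVATE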
After considerable speculation and anticipation, VMware has finally announced vSphere 5 as part of a major cloud infrastructure launch, including vCloud Director 1.5, SRM 5 and vShield 5. From our first impressions, it is both well worth the wait and merits immediate serious consideration as an enterprise virtualization platform, particularly for existing VMware customers.
The list of features is voluminous, with at least 100 improvements large and small, but several stand out as particularly significant as I&O professionals continue their efforts to virtualize the data center. They primarily concern support for larger VMs and physical host systems, improved manageability of storage, and improvements to Site Recovery Manager (SRM), the remote-site HA component:
Replication improvements for Site Recovery Manager, allowing replication without SANs
Distributed Resource Scheduling (DRS) for Storage
Support for up to 1 TB of memory per VM
Support for 32 vCPUs per VM
Support for up to 160 logical CPUs and 2 TB of RAM
New GUI to configure multicore vCPUs
Profile-driven storage delivery based on the vSphere Storage APIs for Storage Awareness (VASA)
Improved version of the Cluster File System, VMFS5
Storage APIs – Array Integration: Thin Provisioning, enabling reclamation of blocks on a thin-provisioned LUN on the array when a virtual disk is deleted
Swap to SSD
2TB+ LUN support
Storage vMotion snapshot support
vNetwork Distributed Switch improvements, providing improved visibility into VM traffic
vCenter Server Appliance
vCenter Solutions Manager, providing a consistent interface to configure and monitor vCenter-integrated solutions developed by VMware and third parties
Revamped VMware High Availability (HA) with Fault Domain Manager
Back during the dot-com boom years, existing telcos and dozens of new network operators, especially in Western Europe and North America, laid vast amounts of fiber optic cable in anticipation of rapidly rising Internet usage and traffic. When the expected volumes of Internet traffic failed to materialize, they did not turn on, or “light up,” most of this fiber network capacity (by some estimates 80% and even 90% on many routes). This unused capacity was called “dark fiber,” and only in recent years has this dark fiber been put to use.
I am seeing early signs of something similar in the build-out of infrastructure-as-a-service (IaaS) cloud offerings. Of course, the data centers of servers, storage devices, and networks that IaaS vendors need can scale up in a more linear fashion (add another rack of blade servers as needed to support a new client) than the all-or-nothing build-out of fiber optic networks, so the magnitude of “dark cloud” will never reach that of “dark fiber.” Nonetheless, if current trends continue and accelerate, there is real potential for IaaS wannabes to create a glut of “dark cloud” capacity that exceeds actual demand, with resulting downward pressure on prices and shakeouts of unsuccessful IaaS providers.
HP this week really stirred up the Converged Infrastructure world by introducing three new solution offerings: one an incremental evolution of an existing offering, the other two new options that will put increased pressure on competitors. The trio includes:
HP VirtualSystem - HP’s answer to vStart, FlexPod, and Vblock, VirtualSystem is a pre-integrated stack of servers (blade and rack options), HP network switches, and HP Converged Storage (3PAR and LeftHand Networks iSCSI), along with software, including the relevant OS and virtualization software. Clients can choose from four scalable deployment options that support up to 750, 2,500, or 6,000 virtual servers or up to 3,000 virtual clients. It supports Microsoft and Linux along with VMware and Citrix. Since this product is new, announced within weeks of the publication of this document, we have had limited exposure to it, but HP claims that it has added significant value in terms of optimized infrastructure, automation of VM deployment, management, and security. In addition, HP will be offering a variety of services and hosting options along with VirtualSystem. Forrester expects that VirtualSystem will change the existing competitive dynamics and will result in a general uptick of interest in similar solutions. HP is positioning VirtualSystem as a growth path to CloudSystem, with what it describes as a “streamlined” upgrade path to a hybrid cloud environment.
Recent Forrester inquiries from enterprise infrastructure and operations (I&O) professionals show that there's still significant confusion between infrastructure-as-a-service (IaaS) private clouds and server virtualization environments. As a result, there are a lot of misperceptions about what it takes to get your private cloud investments right and drive adoption by your developers. The answers may surprise you; they may even be the opposite of what you're thinking.
From speaking with Forrester clients who have deployed successful private clouds, we’ve found that your cloud should be smaller than you think, priced more cheaply than the ROI math would justify, and actively marketed internally - no, private clouds are not a Field of Dreams. Our latest report, “Q&A: How to Get Private Cloud Right,” details this unconventional thinking, and you may find that internal clouds are much easier than you think.
First and foremost, if you think the way you operate your server virtualization environment today is good enough to call a cloud, you are probably lying to yourself. Per the Forrester definition of cloud computing, your internal cloud must be:
Highly standardized - meaning that the key operational procedures of your internal IaaS environment (provisioning, placement, patching, migration, parking and destroying) should all be documented and conducted the same way every time.
Highly automated - and to make sure the above standardized procedures are executed the same way every time, you need to take these tasks out of error-prone human hands and turn them over to automation software, as in the sketch below.
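As a minimal sketch of what that hand-off can look like, the standardized provisioning procedure below is captured as Python code so it runs identically every time. Every function, template, and cluster name is a hypothetical stand-in for a call into real automation tooling, not any specific product’s API:

    # Sketch: a documented VM lifecycle procedure codified so that no step
    # depends on a human doing it the same way twice. All names here are
    # hypothetical placeholders.

    def provision(name: str, template: str) -> None:
        print(f"Cloning {name} from approved template {template}")

    def place(name: str, cluster: str) -> None:
        print(f"Placing {name} on {cluster} per the placement policy")

    def patch(name: str) -> None:
        print(f"Applying the current patch baseline to {name}")

    def deploy_vm(name: str) -> None:
        """Run the documented procedure end to end, with no manual steps."""
        provision(name, template="gold-image-v3")
        place(name, cluster="prod-cluster-01")
        patch(name)

    deploy_vm("app-server-17")

The same pattern extends to the migration, parking, and destroy procedures named above.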
Is your cloud strategy centered on saving money or fueling revenue growth? Where you land on this question says a lot about your experience level with cloud services and what guidance you should be giving to your application developers and infrastructure & operations teams. According to our research, the majority of CIOs would vote for the savings, seeing cloud computing as an evolution of outsourcing and hosting that can drive down capital and operations expenses. In some cases this is correct, but in many others the opposite will result: using the cloud wrong may raise your costs.
But this isn’t a debate worth having, because it’s the exploration of the use cases where cloud does save you money that bears the real fruit. And it’s through this experience that you can start shifting your thinking from cost savings to revenue opportunities. Forrester surveys show that the top reason developers (and the empowered non-developers in your business units) tap into cloud services is to rapidly deploy new services and capabilities. And the drivers behind these efforts are new services, better customer experience, and improved productivity. Translation: revenues and profits.
If the cloud is bringing new money in the door, does it really matter whether it’s the cheaper solution? Not at first. But over time, using cloud as a revenue engine doesn’t necessarily mean high margins on that revenue. That’s where your experience with the cost-advantaged uses of cloud comes in.
Having attended the OpenStack Design Summit this week while simultaneously fielding calls from Forrester clients affected by the Amazon Web Services (AWS) outage, I saw an interesting contrast in approaches emerge. You could boil it down to closed versus open, but there’s more to this contrast that should be part of your consideration when selecting your infrastructure-as-a-service (IaaS) providers.
The obvious comparison is that AWS’ architecture and operational procedures are very much its own, and few outside the company know how they work. Not even close partners like RightScale, or those behind the open source derivative Eucalyptus, know them well enough to do more than deduce what happened based on their experience and what they could observe. OpenStack, on the other hand, is fully open source, so if you want to know how it works, you can download the code. At the Design Summit here in Santa Clara, Calif., this week, developers and infrastructure & operations professionals had ample opportunity to dig into the design and to suggest and submit changes right there. And there were plenty of conversations this week about how CloudFiles and other storage services work and how an AWS Elastic Block Store (EBS) mirror storm could be avoided.
It seems that during every major shift in the telecommunications, service provider, or hosting market there is a string of moves like these, as players attempt to capitalize on the change to gain greater market position. And there are plenty of investors caught up in the opportunity who are willing to lend a few bucks. In the dot-com period and through the 2000s, we saw major shifts in the service provider landscape as colo/hosting giants such as Cable & Wireless and Equinix were created.
But what does this mean for infrastructure & operations professionals looking to select a hosting or infrastructure-as-a-service (IaaS) cloud provider? The key is in determining if 1 + 1 actually equals anything greater than 2.
Cloud infrastructure-as-a-service (IaaS) is a hot market. Amazon Web Services, now five years old, drives a lot of attention and customer volume, but the vendor strategists at enterprise-facing providers such as IBM, HP, AT&T, and Verizon have also been building and delivering IaaS offerings. As I’ve studied the market, I’ve heard wildly different types of requirements from buyers and quite a range of offerings from service providers. Yet much of the industry dialogue is about one central idea of what IaaS is - I think that’s wrongheaded. I found that there are really two buyer types: 1) informal buyers outside of the IT operations/data center manager organizations, such as engineers, scientists, marketing executives, and developers; and 2) formal buyers, the IT operations and data center managers responsible for operating applications and maintaining infrastructure.
With this idea in mind, I set out to test the views of IT infrastructure buyers in the Forrsights Hardware Survey, Q3 2010 and learned that:
After 2+ years of cloud hype, only 6% of enterprise IT infrastructure respondents report using IaaS, with another 7% planning to implement by Q3 2012. After flat adoption from 2008 to 2009, this represents an approximate doubling from 2009, off a very small base.
Almost two-thirds of IT infrastructure buyers don’t believe they themselves are the primary buyers of cloud IaaS! We asked them which groups in their company are using or most interested in cloud IaaS. Only 36% of IT infrastructure buyers listed themselves, while 7% didn’t know. The rest, 58%, said that IT developers, website owners, business unit owners of compute-intensive batch apps, and other business unit developers were more interested in using IaaS than they were.