For the past few months, I’ve been heads-down talking to our clients about storage refreshes. Some have been straightforward technology refreshes, driven primarily by products coming up on end of life. For the most part, though, I’ve consistently heard the same pain from I&O professionals: the storage capacity overload brought on by server virtualization. Many are suffering even more today, because the storage growth problems of server virtualization are now compounded by VDI, AND by the overall private cloud initiatives many organizations have in place. Not only has their storage grown by 50% in the last 12 months, but it’s projected to grow another 50% in the next 12. Before another million-dollar-plus investment is made, many are asking (as you should) the question: Is throwing more hardware at the problem really going to solve it?
These three BIG initiatives have a significant impact on how storage architectures must change. But the reality is that storage has long been an afterthought, and today much change has to happen. Features such as thin provisioning, deduplication (for primary environments), and compression have all been available for some time now and must become part of common practice and procedure for managing storage that supports virtualization environments. And this is key: having tools and solutions in place that understand your virtualization environment is critical to the overall success of your private cloud initiative, because storage is one of the integrated foundational building blocks of a private cloud environment in your data center. Today, it’s difficult to manage your storage without understanding what’s happening in the network as well as in your server virtualization environment.
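Thin provisioning can feel abstract, but a rough file-level analogy makes the idea concrete: a sparse file claims a large logical size up front while the filesystem allocates physical blocks only as data is actually written, which is the same allocate-on-write idea a thin-provisioned LUN uses. This is a minimal illustrative sketch, not array behavior; the `disk.img` path is a hypothetical example.

```python
import os

# Thin provisioning, illustrated with a sparse file: the file reports a
# large logical size, but physical blocks are allocated only on write.
# ("disk.img" is a hypothetical path for this sketch.)
path = "disk.img"
with open(path, "wb") as f:
    f.seek(10 * 1024 * 1024 - 1)  # claim a 10 MB logical size
    f.write(b"\0")                # but write only a single byte

st = os.stat(path)
logical = st.st_size                           # 10 MB, as consumers see it
physical = getattr(st, "st_blocks", 0) * 512   # actual allocation (Unix only)
# On filesystems that support sparse files (e.g. ext4, XFS), `physical`
# is far smaller than `logical`; exact accounting is filesystem-dependent.
os.remove(path)
```

The gap between `logical` and `physical` is exactly the capacity a thin-provisioned array defers buying until workloads actually consume it.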
A project I’m working on for an approximately half-billion-dollar company in the health care industry has forced me to revisit Hyper-V versus VMware after a long period of inattention on my part, and it has become apparent that Hyper-V has made significant progress as a viable platform for at least medium enterprises. My key takeaways include:
Hyper-V has come a long way and is now a viable competitor in Microsoft environments up through mid-sized enterprises, as long as their DR/HA requirements are not too stringent and they are willing to use Microsoft’s System Center, Server Management Suite, and Performance and Resource Optimization, as well as other vendor-specific pieces of software, as part of their management environment.
Hyper-V still has limitations in VM memory size, total physical system memory size, and number of cores per VM compared to VMware, and VMware boasts more flexible memory management and I/O options, but these differences are less significant than they were two years ago.
For large enterprises, for completely integrated management (particularly storage, HA, DR, and automated workload migration), and for what appears to be close to 100% coverage of workload sizes, VMware is still king of the barnyard. VMware also boasts an incredibly rich partner ecosystem.
For cloud, Microsoft has a plausible story but it is completely wrapped around Azure.
While I have not had the time (or, if I’m being totally honest, the inclination) to develop a very granular comparison, VMware’s recent changes to its legacy licensing structure (and subsequent changes to the new pricing structure) suggest that license cost remains an attraction for Microsoft Hyper-V, especially if the enterprise is already using Windows Server Enterprise Edition.