The benefits of virtualization are quite obvious, but when you start to really increase the density of virtual machines in order to maximize utilization, suddenly it ain't such a simple proposition. The latest CPUs from AMD and Intel are more than up to the task of running 10-20 or more applications at a time; most servers run out of memory and I/O bandwidth well before processing power. The leading server vendors have recently announced systems that address the memory side by packing more DIMMs onto a single motherboard (including blade server boards), but you can only add so many Ethernet cards and Fibre Channel HBAs. Oh yeah, and then there are the switch ports to go with them (blade systems help a lot here).
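To make the bottleneck concrete, here is a back-of-the-envelope sketch in Python. The host and per-VM figures are illustrative assumptions, not benchmarks; the point is simply that the binding resource is rarely the CPU.

    # Which resource caps VM density on a host? All numbers below are
    # illustrative assumptions, not vendor specs or benchmarks.
    HOST = {"cpu_cores": 16, "ram_gb": 64, "io_mbps": 2000}  # assumed 2 x 1GbE host
    VM = {"cpu_cores": 0.5, "ram_gb": 4.0, "io_mbps": 80}    # assumed per-VM footprint

    def max_vms(host, vm):
        """Return the binding resource and the VM count it allows."""
        limits = {res: host[res] // vm[res] for res in host}
        binding = min(limits, key=limits.get)
        return binding, int(limits[binding])

    binding, count = max_vms(HOST, VM)
    print(f"host tops out at {count} VMs; binding resource: {binding}")
    # With these numbers, memory caps the host at 16 VMs while the CPUs could
    # handle roughly twice that -- exactly the imbalance described above.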
New CEO Paul Maritz announced this week that VMware will drop the price of ESXi (its base server hypervisor) to $0 (from $495). This obviously comes in response to Microsoft's Hyper-V pricing ($28 per server) and as competition to the free open source Xen hypervisor.
Over the past few months a flurry of announcements has begun swirling around the cloud computing space, which remains a nascent market in the overall IT realm. Do these announcements portend a fast maturity for the concept or just the typical "me too" posturing that comes with a hyped market?
In June, RightScale, a cloud management software and consulting company that has become a bit of a poster child as a cloud integrator, announced a partnership with GigaSpaces that integrates the GigaSpaces eXtreme Application Platform (XAP) clustering and caching solution with the RightScale automated cloud management platform for Amazon EC2 clients. The value of this partnership comes from the fact that EC2 simply provides you with a VM you can populate, but no availability or scalability services. XAP is a cluster architecture that delivers those services and can be quickly and easily deployed via the RightScale tool.
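To see why a layer like XAP matters, consider what a raw EC2 request hands back. The sketch below uses the boto3 client and a placeholder AMI ID purely for illustration; the takeaway is that the API returns a bare VM and nothing more.

    # Launch a single EC2 instance. Note what you get back: an instance ID and
    # nothing else -- no clustering, no failover, no scaling. The AMI ID and
    # instance type here are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    resp = ec2.run_instances(
        ImageId="ami-12345678",      # hypothetical image with your app stack
        InstanceType="m1.small",
        MinCount=1,
        MaxCount=1,
    )
    print("launched bare VM:", resp["Instances"][0]["InstanceId"])
    # Availability and scalability are left entirely to you -- the gap that
    # XAP (deployed via RightScale) is meant to fill.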
Next came Elastra, a San Francisco startup building Cloud Server, a middleware layer that turns commodity infrastructure into a cloud (a value proposition similar to what 3Tera provides today). The first iteration deploys similarly to XAP -- as a software layer you load into EC2 VMs that enables scale and availability for the apps you layer on top of it.
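What do these middleware layers actually add? At their core is a control loop that watches the pool, scales it out under load, and replaces failed instances. The sketch below is a toy version of that idea; launch_vm, retire_vm, and monitor are hypothetical stand-ins, not any vendor's actual API.

    import random
    import time

    class VM:
        """Toy stand-in for a cloud instance."""
        def __init__(self):
            self.healthy = True

    def launch_vm():            # stand-in for a provisioning call
        return VM()

    def retire_vm(vm):          # stand-in for a teardown call
        pass

    def monitor(vms):           # stand-in for a pool-wide load metric (0..1)
        return random.random()

    def control_loop(min_vms=2, high_water=0.75, low_water=0.25):
        vms = [launch_vm() for _ in range(min_vms)]   # availability floor
        while True:
            vms = [vm for vm in vms if vm.healthy]    # drop failed instances
            while len(vms) < min_vms:                 # restore the floor
                vms.append(launch_vm())
            load = monitor(vms)
            if load > high_water:                     # scale out under pressure
                vms.append(launch_vm())
            elif load < low_water and len(vms) > min_vms:
                retire_vm(vms.pop())                  # scale back when idle
            time.sleep(30)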
On May 12th, 2008, VMware announced that nine storage replication vendors have tested and certified their technology with VMware's long-awaited Site Recovery Manager (SRM) offering. SRM is an important step forward in disaster recovery (DR) preparedness because it automates the process of restarting virtual machines (VMs) at an alternate data center. Of course, your data and your VM configuration files must be present at the alternate site, hence the necessary integration with replication vendors. SRM not only automates the restart of VMs at an alternate data center, it can automate other aspects of DR. For example, it can shut down some VMs before it recovers others, which is useful if you are using the redundant infrastructure at the alternate data center for other workloads such as application development and testing (a very common scenario). You can also integrate scripts for other tasks and insert checkpoints where a manual procedure is required. When you recover an application to an alternate site, especially if your redundant infrastructure supports other workloads, you have to think about how you will repurpose capacity between secondary and production workloads. You also have to think about the entire ecosystem, such as network and storage settings, not simply about recovering a VM.
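As a rough picture of what such a recovery plan encodes, here is a sketch. To be clear, this is not SRM's actual interface; the step types, group names, and script name are assumptions made for illustration.

    # A recovery plan as an ordered list of steps: shut down workloads that are
    # borrowing DR capacity, restart protected VMs in dependency order, run
    # site-specific scripts, and pause at manual checkpoints.
    RECOVERY_PLAN = [
        ("shutdown",   "dev-test-pool"),          # free repurposed DR capacity first
        ("restart",    "db-tier"),                # recover in dependency order
        ("script",     "remap_storage_luns.sh"),  # network/storage fix-ups
        ("checkpoint", "confirm DNS cutover"),    # required manual procedure
        ("restart",    "app-tier"),
        ("restart",    "web-tier"),
    ]

    def run_plan(plan):
        for step, target in plan:
            if step == "checkpoint":
                input(f"CHECKPOINT: {target} -- press Enter to continue ")
            elif step == "script":
                print(f"running {target}")        # e.g. subprocess.run([target])
            else:
                print(f"{step} VM group: {target}")

    run_plan(RECOVERY_PLAN)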
Essentially, VMware wants you to replace your manual DR runbook with automated recovery plans in SRM. It might not completely replace the runbook, but it can automate enough of it, so much so that DR service providers such as SunGard are productizing new service offerings based on SRM.
The announcement of a partnership between Dell and Egenera has done something unique in the business development world -- increased the credibility of both players, who were lagging in overall market presence in a key technology area: server virtualization. Egenera, a smaller server vendor popular in financial services, the public sector, and among service providers, was the first to bring Unix-class virtualization capabilities to x86 systems, but did so only within its unique blade server frame design. As such, Egenera simply hasn't been able to make much headway in the general enterprise market. A 2005 hardware OEM partnership with Fujitsu-Siemens was a step in the right direction, but one felt only in Europe.
A shift is taking place in the server market that is starting to look very much like a throwback to simpler times. As enterprises gain comfort with x86 server virtualization, they are pushing for higher and higher consolidation ratios, which is driving a return to scale-up server purchases. Where a single-socket server with 8GB of RAM was the most popular choice a few years back, when scaling out was all the rage, we are starting to see beefier configurations become the norm to accommodate server consolidation.
A Forrester survey from just last year showed that while adoption of x86 virtualization was ramping quickly among enterprise infrastructure & operations (I&O) leaders, the ratios of servers consolidated were low, averaging 4:1. But this may have been as much a byproduct of the new technology comfort curve as it was of server buying preferences.
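The arithmetic behind the shift is straightforward. Assuming a modest per-VM footprint (the 2GB figure below is an assumption for illustration, not survey data), raising the consolidation ratio multiplies the memory a single host must carry:

    # How the consolidation ratio drives host sizing. The per-VM RAM figure is
    # an illustrative assumption.
    PER_VM_RAM_GB = 2
    for ratio in (4, 10, 15, 20):
        print(f"{ratio}:1 consolidation -> ~{ratio * PER_VM_RAM_GB} GB RAM per host")
    # At 4:1 an 8GB single-socket box suffices; at 15-20:1 you are shopping
    # for the beefier multi-socket configurations described above.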
Wouldn't it be nice if the enterprise software world were on board with your server virtualization efforts? Imagine downloading the latest version of PeopleSoft or Crystal Reports in a virtual server format that could be loaded onto VMware ESX and would just run -- no installation, no configuration hassles, just instantiate and go.
In today's LinuxWorld session, Simon Crosby, CTO of XenSource and shepherd of the Xen open source project, made the contention that the open source community is holding itself back by not ensuring compatibility between Xen, KVM, and the other open source virtualization efforts. He's right to a degree, in that standards for foundational functions would allow the greater community to enhance virtualization for all, but should we honestly hold out hope of this happening? As is always the case in the open source world, the crowd goes where the excitement is, and popularity wins. It would be a waste of the community's efforts to try to drive standardization where it isn't wanted, and to try to ensure compatibility between competing implementations when everyone expects a winner to emerge.
Enterprise customers want things they can count on, especially if they are pitched for use in production. The fickleness of the open source community runs counter to this desire, which keeps open source technologies on the fringe until a commercial entity hardens them and wraps them in professional support offerings. This commercialization attracts the interest of the part of the community that wants to make a profit and, voila, a winner emerges. It's not the community that holds back open source projects; it's the failure to bridge the desires of commercial customers and ISVs with those of the community enthusiasts. The key to this is collective advancement of the chosen project.