System Management Capabilities To Focus On For Ramping Up Virtualization In 2011

As they plan their 2011 IT initiatives, many firms I've spoken with are continuing to build out their virtual server environments. My colleague Gene Leganza has written an excellent report entitled The Top 15 Technology Trends EAs Should Watch that includes system management as a driver of continued virtualization. It's also becoming apparent that public and private cloud computing architectures are increasingly intertwined, because virtualization platforms and management tools now overlap. From a technology standpoint, I believe management is one of the key enablers of virtualized and cloud infrastructure. So as you make your 2011 plans and virtualization or cloud computing inevitably comes up, think about how the following system management capabilities will influence your strategy:

  1. Integration of public and private cloud management. Tools for managing virtualization platforms will integrate with public cloud services. At first, that integration will center on monitoring and provisioning; later, your system management infrastructure will orchestrate the movement of workloads between internal and external resource pools (the sketch below gives a flavor of such a policy).
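
To give a flavor of what such orchestration might look like, here's a deliberately tiny sketch of a burst-to-cloud placement policy. The pool names, capacity figure, and threshold are all hypothetical, invented for this example:

```python
# Illustrative hybrid cloud placement policy: prefer the internal pool,
# burst to a public provider once internal capacity runs low.
# All names, numbers, and thresholds here are hypothetical.

INTERNAL_CAPACITY_VMS = 200   # assumed size of the internal resource pool
BURST_THRESHOLD = 0.90        # burst once the internal pool is 90% full

def choose_pool(running_vms: int) -> str:
    """Decide where the next workload should be provisioned."""
    utilization = running_vms / INTERNAL_CAPACITY_VMS
    return "public-cloud-pool" if utilization >= BURST_THRESHOLD else "internal-pool"

print(choose_pool(150))  # internal-pool
print(choose_pool(188))  # public-cloud-pool
```

Real orchestration would weigh cost, compliance, and latency as well as raw capacity, but the decision point -- policy-driven placement across pools -- is the same.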
Read more

Can Thousands Of Tiny Processors Compete With Traditional Enterprise-Class Systems?

Well, I guess we're going to find out. Earlier this week I met with Andrew Feldman, a cofounder and the CEO of SeaMicro -- and he's betting that his Atom-based server can beat traditional Xeon-based systems. According to Andrew, the Atom processor is far more efficient per watt than CPUs like the Xeon. Sure, it's not as fast, but it makes up for that by being cheap and power efficient, which lets you put a lot of them to work on tasks like web applications. Basically, SeaMicro puts 512 Atom-based servers into a 10U chassis that also provides virtualized network and storage resources, plus management, for all of those systems. This is not a big SMP box -- it's literally 512 servers sharing common infrastructure.

According to SeaMicro, you would need 1,000 dual-socket, quad-core Xeon systems to match the SPECint_rate benchmark score of 40 of its systems. If my math is right, that's 40 * 512 Atom servers = 20,480 Atom CPUs, compared with 1,000 Xeons * 2 sockets * 4 cores = 8,000 Xeon cores.
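
Here's that arithmetic as a quick sanity check (a minimal sketch using SeaMicro's claimed figures, not independent measurements):

```python
# Sanity check of SeaMicro's claimed SPECint_rate equivalence.
seamicro_systems = 40
atoms_per_chassis = 512                # Atom CPUs per 10U SeaMicro chassis

xeon_systems = 1_000
cores_per_xeon_system = 2 * 4          # dual-socket, quad-core

atom_cpus = seamicro_systems * atoms_per_chassis    # 20,480
xeon_cores = xeon_systems * cores_per_xeon_system   # 8,000

print(f"Atom CPUs:  {atom_cpus:,}")
print(f"Xeon cores: {xeon_cores:,}")
print(f"Atoms per Xeon core: {atom_cpus / xeon_cores:.2f}")  # 2.56
```

By SeaMicro's own numbers, then, it takes roughly 2.5 Atoms to match one Xeon core on throughput -- which is why the pitch rests on price and power per unit of work rather than per-core speed.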

One of the most interesting technical hurdles SeaMicro had to address in building this server was the interconnect for all these processors. Rather than adopting PCIe or another off-the-shelf interconnect, SeaMicro chose an architecture that has more in common with IBM's Blue Gene.

Read more

Big Data Center Operators Keep Getting Bigger

It is becoming very clear that data centers have metamorphosed from small computer rooms into industrialized facilities. Given the cost and complexity of running such a facility, we believe many firms will opt out of that business and instead use data centers built and run by specialists in mission-critical facilities. Most of us are familiar with suppliers like AT&T and Savvis, but very few have noticed the larger data center wholesalers behind the scenes. This is partly because wholesalers are more focused on facilities than on IT: they lease large chunks of capacity or entire data centers, or supply space to popular hosters and outsourcers that resell it to corporate buyers.

However, for firms with large data centers that aren't interested in outsourcing IT, it sometimes makes economic sense to go directly to wholesalers like 365 Main, CoreSite, and Digital Realty Trust. Or maybe just CoreSite and Digital Realty Trust: as of yesterday, Digital Realty Trust had acquired almost 1 million square feet of 365 Main data center space, bringing its portfolio to just shy of 16 million square feet.

From a technical architecture standpoint, we believe that larger, more efficient, industrialized data centers are the future of IT. As servers have multiplied and grown more power-hungry, firms with older facilities have fewer architectural choices. For them, highly utilized racks of blades are simply not an option, because older facilities cannot deliver the power and cooling those racks require.

Read more


Getting Involved In Forrester's EA Research

Not too long ago, I shifted my role here at Forrester from Infrastructure & Operations to Enterprise Architecture. I had been spending a lot of time on technical architecture topics and helping clients assess their infrastructure -- which, it turns out, is exactly the kind of thing EA professionals care about as well. For the past few months, I've been focused on building tools and research to help you assess your infrastructure and get it where it needs to be to support future business demands. To that end, we've also begun research on best practices for making large-scale transformations successful -- moving from a mainframe to distributed systems, rolling out thin clients, or modernizing system management. Why do some organizations succeed at these transformations while others take many years or never reach consensus?


IT transformation is often likened to turning a large ship: the inertia of the status quo is why so many organizations struggle with these changes. I've even seen firms put the project team in charge of the "to be" infrastructure in a physically separate building, away from the influence of business-as-usual thinking.


Come to think of it, that's a good idea. What do you think? If you've successfully completed a large IT transformation -- be it consolidation, migration, or something else -- we'd love to hear from you. 


Free Webinar On Server Virtualization

In my conversations with organizations implementing server virtualization, I've found that a gap seems to be opening between the number of virtual machines firms are willing to run on a server and the maximum number that could reasonably fit there. This gap will only widen as newer servers can host twice as many virtual machines and more mature virtualization platforms like VMware vSphere support up to 1 TB of physical RAM. How long will it be before IT is expected to support 50 or 75 VMs per server?
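
To put rough numbers on that density question, here's a minimal back-of-the-envelope sketch. Every input below (VM footprint, overcommit ratio, headroom) is an illustrative assumption, not a Forrester benchmark:

```python
# Rough VM-density estimate for a virtualization host.
# All inputs are illustrative assumptions for the sake of the example.
HOST_RAM_GB = 1024        # vSphere-class host with 1 TB of physical RAM
HOST_CORES = 32

AVG_VM_RAM_GB = 8         # assumed average VM memory footprint
VCPUS_PER_VM = 2          # assumed vCPUs per VM
VCPU_OVERCOMMIT = 4.0     # assumed vCPU:physical-core overcommit ratio
RAM_HEADROOM = 0.85       # keep ~15% RAM free for the hypervisor and spikes

vms_by_ram = int(HOST_RAM_GB * RAM_HEADROOM / AVG_VM_RAM_GB)
vms_by_cpu = int(HOST_CORES * VCPU_OVERCOMMIT / VCPUS_PER_VM)

print(f"RAM-bound limit: {vms_by_ram} VMs")   # 108
print(f"CPU-bound limit: {vms_by_cpu} VMs")   # 64
print(f"Practical ceiling: {min(vms_by_ram, vms_by_cpu)} VMs")
```

Even with fairly conservative assumptions, such a host is CPU-bound at 64 VMs and RAM-bound well above 100 -- so the 50-or-75-VM question is less about hardware limits than about how much risk operations teams will accept.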

If you want to learn more about this topic, please join my complimentary Webinar, "Forrester’s Top Three Recommendations For Implementing Server Virtualization," on June 11 at 11 a.m. Eastern time. You can register for the session by visiting:

www.forrester.com/virtualizationwebinar

By Galen Schreck

Read more


Are Big Boxes Better For Server Virtualization?

From time to time, someone asks me whether it makes sense to purchase a large (32+ CPU) server as a big virtualization host running a VMware, Microsoft, or Citrix hypervisor.

In a word, I think the answer is "no".

Here are a few reasons why:

Read more


Oracle Enhances VM Management With Virtual Iron Acquisition

The rumors are true. Oracle has finally announced its intent to acquire Virtual Iron for its virtual server management capabilities.

It's a nice fit for Oracle, which has its own Xen-based virtualization platform called Oracle VM. Until now, Virtual Iron has been selling its management tools atop its own Xen-derived hypervisor. The net result is that Oracle gets dynamic resource management, power management, and better virtual server capabilities. And since Virtual Iron was built to manage Xen-based systems, the integration should be done by lunchtime on the day the deal closes.

The acquisition raises some larger questions, however -- such as how Oracle will integrate Oracle VM, Oracle Enterprise Manager, Virtual Iron, and Sun's xVM Server and xVM Ops Center. Now that it will supply hardware as well as virtualization platforms, Oracle will need to build out a stronger system management and automation portfolio.

Read more

Cisco's Big Blade Server Bet

After months of rumors, Cisco officially entered the server business this morning with a modular system it calls the "Unified Computing System" (UCS). This blade server system goes a step beyond its predecessors by starting from a unified network foundation of 10 Gigabit Ethernet (10 GbE) that delivers a true wire-once architecture. We believe this is the next step in blade server technology, because it collapses many components of today's systems: a unified network, direct-path I/O from the virtual machine through the chipset to the network, and system-level optimization for virtualized workloads.

However, no one is clamoring for another server vendor, so despite the strong showing of partners at this launch, Cisco will have to win over enterprise server buyers who so far have had no relationship with the company. We think UCS will succeed mostly in greenfield deployments at companies that already have a strong strategic partnership with Cisco. As those early adopters realize the promised gains, other buyers will start to take UCS seriously.

Just as Cisco reshaped the telephony market with its VoIP push, it could have a similar effect on the server market over time. But it would be a mistake to discount HP and IBM, which see the same vision and are arguably better positioned to take their strategic enterprise clients there.

By James Staten and Galen Schreck

Read more

Emerson And Schneider Get Ready To Compete In Facility Management Software

The IT industry has made huge strides in software for change & configuration management, business service management, and IT process automation. Yet your data center facilities are probably living in the dark ages. For years, IT has been mapping business services to their underlying applications and the infrastructure they depend on -- but the details never seem to go any further than the server level. I'm fairly certain that no one can tell you which racks, power circuits, or generators a particular application depends on. And you can forget about managing applications' power consumption: there's no control past the UPS or PDU level.

The recent acquisition of Aperture by Emerson is noteworthy because it shows that the power and cooling giants are finally stirring in their lairs. Likewise, Emerson's competitor Schneider Electric (through the better-known APC) has been building its own software portfolio focused on data center management for some time. Until recently, power and cooling vendors focused their limited software offerings on controlling their own infrastructure. But that's changing -- now they're shipping tools that can tell you the best location for the next rack of servers, or whether you've already overcommitted your physical infrastructure. There's little competition from systems vendors like IBM and HP today, but it remains to be seen whether Emerson and Schneider will become middleware providers or direct competitors.
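
To make concrete what "going past the server level" could look like, here's a minimal sketch of the kind of dependency chain a facility-aware management tool would need to store. The schema, names, and the power_path helper are all hypothetical, invented for this example -- not any vendor's actual data model:

```python
# Hypothetical facility-aware dependency chain: app -> server -> rack ->
# power circuit -> UPS. Invented for illustration only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    name: str
    kind: str                        # "app", "server", "rack", "circuit", "ups"
    feeds_from: Optional["Node"] = None

ups     = Node("UPS-A", "ups")
circuit = Node("PDU-3/Circuit-12", "circuit", feeds_from=ups)
rack    = Node("Rack-42", "rack", feeds_from=circuit)
server  = Node("web-srv-07", "server", feeds_from=rack)
app     = Node("order-entry", "app", feeds_from=server)

def power_path(node: Optional[Node]) -> list[str]:
    """Walk the chain to answer: which facility components does
    this application ultimately depend on?"""
    path = []
    while node is not None:
        path.append(f"{node.kind}:{node.name}")
        node = node.feeds_from
    return path

print(" -> ".join(power_path(app)))
# app:order-entry -> server:web-srv-07 -> rack:Rack-42 ->
# circuit:PDU-3/Circuit-12 -> ups:UPS-A
```

Today, most CMDBs can walk that chain only as far as the server; the facility hops below it are exactly the gap Emerson's and Schneider's new tools are starting to fill.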

Read more