DCIM And The New Reality Of Infrastructure & Operations

Richard Fichera

I recently published an update on power and cooling in the data center (http://www.forrester.com/go?docid=60817), and as I review it online, I am struck by the combination of old and new. The old: the evolution of semiconductor technology, increasingly elegant attempts to design systems and components that can be incrementally throttled, and ever more sophisticated construction of the data centers themselves, with greater modularity and physical efficiency of power and cooling.

The new is the incredible momentum I see behind Data Center Infrastructure Management (DCIM) software. In a few short years, DCIM solutions have gone from simple aggregated viewing dashboards to complex software that understands tens of thousands of components; collects, filters, and analyzes data from thousands of sensors in a data center (a single CRAC may have in excess of 20 sensors, a server over a dozen, etc.); and understands the relationships between components well enough to proactively raise alarms, model potential workload placement, and make recommendations about prospective changes.
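To make that collect-filter-analyze pipeline concrete, here is a minimal sketch in Python of the pattern such tools implement: pull readings from sensors, discard implausible values, and raise an alarm when a threshold is crossed. The sensor names, threshold, and outlier rule are hypothetical illustrations, not any vendor's actual schema.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Reading:
    sensor_id: str   # hypothetical naming, e.g. "crac-03/supply-air-temp"
    value: float     # degrees Celsius

# Hypothetical alarm threshold; real DCIM products ship libraries of these.
THRESHOLDS = {"supply-air-temp": 27.0}

def filter_outliers(readings, tolerance=5.0):
    """Discard readings that deviate wildly from the group mean (a bad sensor)."""
    avg = mean(r.value for r in readings)
    return [r for r in readings if abs(r.value - avg) <= tolerance]

def alarms(readings):
    """Yield an alarm for each filtered reading that exceeds its threshold."""
    for r in filter_outliers(readings):
        metric = r.sensor_id.split("/")[-1]
        limit = THRESHOLDS.get(metric)
        if limit is not None and r.value > limit:
            yield f"ALARM: {r.sensor_id} at {r.value:.1f} C exceeds {limit:.1f} C"

sample = [Reading("crac-03/supply-air-temp", 28.4),
          Reading("crac-04/supply-air-temp", 24.1)]
for a in alarms(sample):
    print(a)
```

The real products add the relationship model on top of this loop, so an alarm on one CRAC can be correlated with the racks it cools.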

Of all the technologies reviewed in the document, DCIM offers some of the highest potential for improving overall efficiency without sacrificing the reliability or scalability of the enterprise data center. While the various DCIM suppliers are still experimenting with business models, I think it is almost essential for any data center operations group that expects significant change, be it growth, shrinkage, migration, or a major consolidation or cloud project, to invest in DCIM software. DCIM consumers can expect major competitive action among the current suppliers, and there is strong potential for additional consolidation.

Xsigo Expands to a Data Center Fabric: Converged Infrastructure for the Virtual Data Center

Richard Fichera

Last year at VMworld I noted Xsigo Systems, a small privately held company showing its I/O Director technology, which delivered a subset of HP Virtual Connect or Cisco UCS I/O virtualization capability in a fashion that could be consumed by legacy rack-mount servers from any vendor. I/O Director connects to the server with one or more 10 Gb Ethernet links and then splits traffic out into enterprise Ethernet and FC networks. On the server side, the applications, including VMware, see multiple virtual NICs and HBAs courtesy of Xsigo’s proprietary virtual NIC driver.

Via Xsigo’s management console, the server MACs and WWNs can be programmed, and the servers can then connect to multiple external networks with fewer cables and substantially lower costs for NIC and HBA hardware. Virtualized I/O is one of the major transformative developments in emerging data center architecture and will remain a theme in Forrester’s data center research coverage.
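For readers who want a mental model of what programmable MACs and WWNs buy you, here is a toy Python sketch of the bookkeeping involved. This is not Xsigo's actual (proprietary) API; all names and addresses are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VirtualNIC:
    network: str
    mac: str   # programmable, so the server's network identity is software-defined

@dataclass
class VirtualHBA:
    fabric: str
    wwn: str   # programmable World Wide Name, so FC zoning survives a hardware swap

@dataclass
class Server:
    hostname: str
    vnics: List[VirtualNIC] = field(default_factory=list)
    vhbas: List[VirtualHBA] = field(default_factory=list)

def provision(server: Server, networks: List[str], fabrics: List[str]) -> None:
    """Carve virtual NICs and HBAs out of the server's single 10 Gb uplink."""
    for i, net in enumerate(networks):
        # Locally administered MAC (02:...), assigned by the management layer.
        server.vnics.append(VirtualNIC(net, mac=f"02:00:00:00:00:{i:02x}"))
    for i, fab in enumerate(fabrics):
        server.vhbas.append(VirtualHBA(fab, wwn=f"20:00:00:00:00:00:00:{i:02x}"))

srv = Server("rack7-server01")
provision(srv, ["prod-lan", "vmotion"], ["san-a"])
print([v.mac for v in srv.vnics], [h.wwn for h in srv.vhbas])
```

Because the identities live in the management layer rather than on the cards, replacing a failed server is a table update instead of a re-zoning exercise.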

This year at VMworld, Xsigo announced a major expansion of its capabilities – Xsigo Server Fabric, which takes the previous rack-scale, single-switch Xsigo domains and links them into a data-center-scale fabric. Combined with improvements in the software and UI, Xsigo now claims to offer one-click connection of any server resource to any network or storage resource within the domain of Xsigo’s fabric. Most significantly, Xsigo’s interface is optimized to allow connection of VMs to storage and network resources and to allow the creation of private VM-VM links.


Hyper-V Matures As An Enterprise Platform

Richard Fichera

A project I’m working on for an approximately half-billion-dollar company in the health care industry has forced me to revisit Hyper-V versus VMware after a long period of inattention on my part, and it has become apparent that Hyper-V has made significant progress as a viable platform for at least midsize enterprises. My key takeaways include:

  • Hyper-V has come a long way and is now a viable competitor in Microsoft environments up through the midsize enterprise, as long as DR/HA requirements are not too stringent and the organization is willing to use Microsoft’s System Center, Server Management Suite, and Performance Resource Optimization, as well as other vendor-specific software, as part of its management environment.
  • Hyper-V still has limitations in VM memory size, total physical system memory size, and number of cores per VM compared to VMware, and VMware boasts more flexible memory management and I/O options, but these differences are less significant than they were two years ago.
  • For large enterprises and for complete integrated management, particularly storage, HA, DR and automated workload migration, and for what appears to be close to 100% coverage of workload sizes, VMware is still king of the barnyard. VMware also boasts an incredibly rich partner ecosystem.
  • For cloud, Microsoft has a plausible story, but it is completely wrapped around Azure.
  • While I have not had the time (or, if I am being totally honest, the inclination) to develop a very granular comparison, VMware’s recent changes to its legacy licensing structure (and subsequent changes to the new pricing structure) suggest that license cost remains an attraction for Microsoft Hyper-V, especially if the enterprise is using Windows Server Enterprise Edition; the back-of-envelope sketch below illustrates the math.
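The arithmetic behind that attraction is simple enough to sketch. All prices in this Python fragment are placeholders for illustration only; actual list prices changed repeatedly around this period, so substitute your own quotes.

```python
# All prices below are hypothetical placeholders, not vendor list prices.
def per_vm_cost(license_cost_per_host: float, hosts: int, vms: int) -> float:
    """Spread total hypervisor licensing cost across the VMs it supports."""
    return license_cost_per_host * hosts / vms

# Hypothetical scenario: 10 two-socket hosts running 150 VMs.
vmware_cost = per_vm_cost(7000.0, hosts=10, vms=150)
# Hyper-V's hypervisor cost is effectively bundled when the enterprise already
# licenses Windows Server Enterprise/Datacenter, hence the zero placeholder.
hyperv_cost = per_vm_cost(0.0, hosts=10, vms=150)
print(f"VMware: ${vmware_cost:,.2f}/VM  Hyper-V: ${hyperv_cost:,.2f}/VM")
```

The point is not the specific numbers but the structure: when the hypervisor rides along with an OS license you already own, the marginal per-VM cost approaches zero.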

VMware Pushes Hypervisor And Management Features With vSphere 5 Announcement

Richard Fichera

After considerable speculation and anticipation, VMware has finally announced vSphere 5 as part of a major cloud infrastructure launch that also includes vCloud Director 1.5, SRM 5, and vShield 5. From our first impressions, it is well worth the wait and merits immediate, serious consideration as an enterprise virtualization platform, particularly for existing VMware customers.

The list of features is voluminous, with at least 100 improvements large and small, but several stand out as particularly significant as I&O professionals continue their efforts to virtualize the data center: support for larger VMs and physical host systems, improved manageability of storage, and improvements to Site Recovery Manager (SRM), the remote-site HA component. Among them (a quick sizing sketch of the new maximums follows the list):

  • Replication improvements for Site Recovery Manager, allowing replication without SANs
  • Distributed Resource Scheduling (DRS) for Storage
  • Support for up to 1 TB of memory per VM
  • Support for 32 vCPUs per VM
  • Support for up to 160 logical CPUs and 2 TB of RAM per physical host
  • New GUI to configure multicore vCPUs
  • Profile-driven storage delivery based on the vSphere Storage APIs for Storage Awareness (VASA)
  • Improved version of the Cluster File System, VMFS5
  • Storage APIs – Array Integration: Thin Provisioning, enabling reclamation of blocks from a thin-provisioned LUN on the array when a virtual disk is deleted
  • Swap to SSD
  • Support for LUNs larger than 2 TB
  • Storage vMotion snapshot support
  • vNetwork Distributed Switch improvements, providing improved visibility into VM traffic
  • vCenter Server Appliance
  • vCenter Solutions Manager, providing a consistent interface to configure and monitor vCenter-integrated solutions developed by VMware and third parties
  • Revamped VMware High Availability (HA) with Fault Domain Manager
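As promised above, a quick sketch of how the new per-VM maximums translate into sizing headroom. The helper below is illustrative Python, not a VMware API; it simply checks a proposed configuration against the limits quoted in the list.

```python
# Per-VM and per-host maximums quoted in the list above.
VSPHERE5_MAX = {
    "vcpus_per_vm": 32,
    "vm_memory_gb": 1024,     # 1 TB per VM
    "host_logical_cpus": 160,
    "host_memory_gb": 2048,   # 2 TB per host
}

def fits_vsphere5(vcpus: int, memory_gb: int) -> bool:
    """Check a proposed VM against the vSphere 5 per-VM maximums."""
    return (vcpus <= VSPHERE5_MAX["vcpus_per_vm"]
            and memory_gb <= VSPHERE5_MAX["vm_memory_gb"])

# A 24-vCPU, 512 GB database VM now fits without carving up the workload.
print(fits_vsphere5(vcpus=24, memory_gb=512))   # True
print(fits_vsphere5(vcpus=48, memory_gb=512))   # False: exceeds 32 vCPUs
```

The practical consequence is that large database and analytics workloads that previously had to stay physical, or be split across VMs, can now be candidates for virtualization.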

Getting Private Cloud Right Takes Unconventional Thinking

James Staten

Recent Forrester inquiries from enterprise infrastructure and operations (I&O) professionals show that there's still significant confusion between infrastructure-as-a-service (IaaS) private clouds and server virtualization environments. As a result, there are a lot of misperceptions about what it takes to get your private cloud investments right and drive adoption by your developers. The answers may surprise you; they may even be the opposite of what you're thinking.

From speaking with Forrester clients who have deployed successful private clouds, we've found that your cloud should be smaller than you think, priced cheaper than the ROI math would justify, and actively marketed internally; no, private clouds are not a Field of Dreams. Our latest report, "Q&A: How to Get Private Cloud Right," details this unconventional thinking, and you may find that internal clouds are much easier than you think.

First and foremost, if you think the way you operate your server virtualization environment today is good enough to call a cloud, you are probably lying to yourself. Per the Forrester definition of cloud computing, your internal cloud must be:

  1. Highly standardized - meaning that the key operational procedures of your internal IaaS environment (provisioning, placement, patching, migration, parking and destroying) should all be documented and conducted the same way every time.
  2. Highly automated - to make sure the above standardized procedures are performed the same way every time, you need to take these tasks out of human hands and turn them over to automation software; a minimal sketch follows this list.
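To make "standardized and automated" concrete, here is a minimal, generic Python sketch of the idea: encode the documented procedure as the only code path, so every request runs the identical steps. The step names echo the list above; the orchestration call is a stand-in, not a reference to any particular product.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("iaas")

# The documented lifecycle; every request runs these steps, in this order.
STANDARD_STEPS = ["provision", "place", "patch"]

def run_step(vm_name: str, step: str) -> None:
    """Stand-in for a call into your orchestration tooling of choice."""
    log.info("%s: %s (identical procedure every time)", vm_name, step)

def fulfill_request(vm_name: str) -> None:
    """One automated code path, so no step can be skipped or improvised."""
    for step in STANDARD_STEPS:
        run_step(vm_name, step)

fulfill_request("web-frontend-01")
```

The discipline matters more than the tooling: if an administrator can still hand-build a VM outside this path, you have a virtualization environment, not a cloud.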