2013 Server Virtualization Predictions: Driving Value Above And Beyond The Hypervisor

Dave Bartoletti

Now that we’ve been back from the holidays for a month, I’d like to round out the 2013 predictions season with a look at the year ahead in server virtualization. If you’re like me (or this New York Times columnist), you’ll agree that a little procrastination can sometimes be a good thing to help collect and organize your plans for the year ahead. (Did you buy that rationalization?)

We’re now more than a decade into the era of widespread x86 server virtualization. Hypervisors are certainly a mature (if not peaceful) technology category, and the consolidation benefits of virtualization are now incontestable. 77% of you will be using virtualization by the end of this year, and you’re running as many as 6 out of 10 workloads in virtual machines. With such strong penetration, what’s left? In our view: plenty. It’s time to ask your virtual infrastructure, “What have you done for me lately?”

With that question in mind, I asked my colleagues on the I&O team to help me predict what the year ahead will hold. Here are the trends in 2013 you should track closely:

  1. Consolidation savings won’t be enough to justify further virtualization. For most I&O pros, the easy workloads are already virtualized. Looking ahead to 2013, what’s left are the complex business-critical applications the business can’t run without (high-performance databases, ERP, and collaboration top the list). You won’t virtualize these to save on hardware; you’ll do it to make them mobile, so they can be moved, protected, and duplicated easily. You’ll have to explain how virtualizing these apps will make them faster, safer, and more reliable—then prove it.
Read more

IBM STG Is Upbeat On PureSystems And Growth Markets

Manish Bahl


Last month, I attended an IBM Systems and Technology Group (STG) Executive Summit in the US, where IBM outlined its key strategies for accelerating sales in growth markets, including:

  • Aggressively marketing PureSystems. IBM is positioning PureSystems (a pre-integrated, converged system of servers, storage, and networking technology with automated self-management and built-in SmartCloud technology) as an integrated and simplified data center offering to help organizations reduce the money and time they spend on the management and administration of servers.

  • Continuing to expand in “tier two” cities. Over the next 12 months, IBM plans to continue its expansion outside of major metropolitan areas by opening small branches in nearly 100 locations in growth markets, most notably India, China, Brazil, and Russia.

  • Expanding channel capabilities and accelerating new routes to market. IBM plans to certify 2,800 global resellers on PureSystems in 2013 and upgrade the solution and technical expertise of 500 of its partners. The company also plans to drive revenue for managed service providers (MSPs) by working closely with them to develop cloud-based services and solutions on PureSystems.

Considering the vast potential demand from growth markets and the slowdown in developed markets, IBM is among the growing camp of multinational vendors aggressively targeting these markets as an engine for future business. Here are some of my key observations on IBM’s event and recent announcements:

Read more

HP’s Troubles Continue, But Does It Matter?

Richard Fichera

HP seems to be on a tear, bouncing from litigation with one of its historically strongest partners to multiple CEOs in the last few years to continued layoffs and a recent massive write-down of its EDS purchase. And, as we learned last week, the circus has not left town. The latest “oops” is an $8.8 billion write-down for its purchase of Autonomy, made under the brief and ill-fated leadership of Léo Apotheker, combined with allegations of serious fraud on the part of Autonomy during the acquisition process.

The eventual outcome of this latest fiasco will be fun to watch, with many interesting sideshows along the way, including:

  • Whose fault is it? Can they blame it on Léo, or will it spill over onto Meg Whitman, who was on the board and approved it?
  • Was there really fraud involved?
  • If so, how did HP miss it? What about all the internal and external people involved in due diligence on this acquisition? I’ve been on the inside of attempted acquisitions at HP, and there were always many more people around with the power to say “no” than people trying to move the company forward with innovative acquisitions; the most persistent and compulsive of the group were the various finance teams involved. It’s really hard to see how they could have missed a little $5 billion discrepancy in revenues, but that’s just my opinion; I was usually the one trying to get around the finance guys. :)
Read more

AMD Acquires SeaMicro — Big Bet On Architectural Shift For Servers

Richard Fichera

[For some reason this has been unpublished since April — so here it is well after AMD announced its next spin of the SeaMicro product.]

At its recent financial analyst day, AMD indicated that it intends to differentiate itself by creating products with advantages in niche markets, with specific mention of servers among other segments, and to generally shake up the trench warfare that has kept it on the losing side of its lifelong battle with Intel (my interpretation, not AMD management’s words). Today, at least for the server side of the business, it made a move that can potentially offer it visibility and differentiation by acquiring innovative server startup SeaMicro.

SeaMicro has attracted our attention since its appearance (blog post 1, blog post 2) with an innovative architecture that dramatically reduces power and improves density by sharing components such as I/O adapters, disks, and even the BIOS over a proprietary fabric. The irony here is that SeaMicro came to market tightly aligned with Intel, which at one point even introduced a special dual-core packaging of its Atom CPU to let SeaMicro improve its density and power efficiency. Most recently, SeaMicro and Intel announced a new model featuring Xeon CPUs to address the more mainstream segments that were not a part of SeaMicro’s original Atom-based offering.

Read more

Data Center Power And Efficiency – Public Enemy #1 Or The Latest Media Punching Bag?

Richard Fichera

This week, the New York Times ran a series of articles about data center power use (and abuse): “Power, Pollution and the Internet” (http://nyti.ms/Ojd9BV) and “Data Barns in a Farm Town, Gobbling Power and Flexing Muscle” (http://nyti.ms/RQDb0a). Among the claims made in the articles was that data centers were “only using 6 to 12% of the energy powering their servers to deliver useful computation.” As with a lot of media broadsides, the reality is more complex than the dramatic claims made in these articles. Technically, they are correct in claiming that only a very small fraction of the electricity going to a server is used to perform useful work, but this dramatic claim is not a fair representation of the overall efficiency picture. The Times analysis fails to take into consideration that not all of the power in the data center goes to servers, so the claim of 6% efficiency for the servers is not representative of the real operational efficiency of the complete data center.

On the other hand, while I think the Times chooses drama over even-keeled reporting, the actual picture for even a well-run data center is not as good as its proponents would claim. Consider:

  • A new data center with a PUE of 1.2 (very efficient), which means about 83% of total power reaches the IT equipment.
  • Then assume that 60% of that IT power goes to servers (storage and networking get the rest), for a net of almost 50% of total power going into servers. If the servers run at an average utilization of 10%, then only 10% of 50%, or 5%, of the power actually goes to real IT processing. Of course, the real "IT number" is servers plus storage plus network, so depending on how you account for them, IT usage could be as high as 38% (0.83 * 0.4 + 0.05).
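
To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The inputs are the assumptions from the bullets above (PUE of 1.2, 60% of IT power to servers, 10% average utilization), not measured data:

    # Back-of-the-envelope data center efficiency math, using the
    # assumptions from the bullets above rather than measured data.
    PUE = 1.2                  # total facility power / IT power
    it_share = 1 / PUE         # ~83% of total power reaches IT gear
    server_share = 0.60        # fraction of IT power going to servers
    utilization = 0.10         # average server utilization

    power_to_servers = it_share * server_share              # ~50% of total
    useful_server_work = power_to_servers * utilization     # ~5% of total

    # Generous accounting: count storage and network power as "useful" IT usage.
    storage_network = it_share * (1 - server_share)         # ~33% of total
    generous_it_usage = storage_network + useful_server_work  # ~38% of total

    print(f"Power reaching servers:    {power_to_servers:.0%}")
    print(f"Strict useful efficiency:  {useful_server_work:.0%}")
    print(f"Generous IT usage:         {generous_it_usage:.0%}")

Running this prints 50%, 5%, and 38%, matching the figures in the bullets.
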
Read more

Can Dell Be Your Strategic Vendor In Asia Pacific?

Tim Sheedy

I recently had the chance to spend some quality time with Dell in Singapore at their event for Forrester analysts in the Asia Pacific region. As Dell is a company traditionally known for its hardware products, I had low expectations – to date, few of my CIO clients would consider Dell a “strategic” supplier.

However, I was pleasantly surprised – Dell is reinventing itself from a PC and server supplier into an IT solutions provider. The acquisition of Perot Systems and of various software assets in North America and around the globe is starting to pay dividends in Asia Pacific.

As a late entrant into many of the newer markets they play in, they have the rare advantage of being able to do things differently – both from a solution and a pricing standpoint. From data centre transformation through legacy migration and application modernisation to networking solutions, Dell is attempting to be a disruptive player in the market – simplifying processes that were typically human-centric and automating capabilities to reduce the overall burden of owning and running infrastructure.

Their strategy is to stay close to what they know – much of their capability is linked directly to infrastructure – but their open, modular, and somewhat vendor-agnostic approach is in direct opposition to the “vendor lock-in” solutions that many of the other major vendors push.

Read more

Dell Is On A Quest For Software

Glenn O'Donnell


One of the many hilarious scenes in Monty Python and the Holy Grail is the "Bridge of Death" sequence. This week's news that Dell plans to acquire Quest Software makes one think of a slight twist to this scene:

Bridgekeeper:   "What ... is your name?"
Traveler:           "John Swainson of Dell."
Bridgekeeper:   "What ... is your quest?"
Traveler:           "Hey! That's not a bad idea!"

We suspect Dell's process was more methodical than that!

This acquisition was not a surprise, of course. All along, it has been obvious that Dell needed stronger assets in software as it continues on its quest to avoid the Gorge of Eternal Peril that is spanned by the Bridge of Death. When the company announced that John Swainson was joining to lead the newly formed software group, astute industry watchers knew the next steps would include an ambitious acquisition. We predicted such an acquisition would be one of Swainson's first moves, and after only four months on the job, indeed it was.

Read more

DCIM — Updates And Trends

Richard Fichera

Only a few months after I authored Forrester’s "Market Overview: Data Center Infrastructure Management Solutions," significant changes in this market merit some additional commentary.

Vendor Drama

The major vendor drama of the “season” is the continued evolution of Schneider’s and Emerson’s DCIM product rollouts. Since Schneider’s worldwide analyst conference in Paris last week, we now have pretty good visibility into both major vendors' strategies and products. In a nutshell, we have two very large players, both with large installed bases of data center customers, and both selling a vision of an integrated, modular DCIM framework. More importantly, it appears that both vendors can deliver on this promise. That is the good news. The bad news is that their offerings overlap heavily, and for most potential customers the choice will be a difficult one. My working theory is that whichever vendor has the larger footprint of installed equipment will have an advantage, and that a lot depends on the relative execution of their field marketing and sales organizations as both companies rush to turn thousands of salespeople and partners loose on the world with these products. This will be a classic market-share play, with the smart strategy being to sacrifice margin for share, since DCIM solutions have a high probability of pulling through services and usually involve an annuity revenue stream from support and update fees.

How Big Is The Market?

Read more

HP Rolls Out BladeSystem Upgrades – Significant Improvements Aim To Fend Off IBM And Cisco

Richard Fichera

Overview

Earlier this week at its Discover customer event, HP announced a significant set of improvements to its already successful c-Class BladeSystem product line, which, despite continuing competitive pressure from IBM and the entry of Cisco into the market three years ago, still commands approximately 50% of the blade market. The components of this announcement fall into four major functional buckets – improved hardware, simplified and expanded storage features, new interconnects and I/O options, and serviceability enhancements. Among the highlights are:

  • Direct connection of HP 3PAR storage – One of the major drawbacks for block-mode storage with blades has always been the cost of the SAN to connect it to the blade enclosure. With the ability to connect an HP 3PAR storage array directly to the c-Class enclosure without any SAN components, HP has reduced both the cost and the complexity of storage for a wide class of applications that have storage requirements within the scope of a single storage array.
  • New blades – With this announcement, HP fills in the gaps in its blade portfolio, announcing a new Intel Xeon EN based BL-420 for entry requirements, an upgrade to the BL-465 to support the latest AMD 16-core Interlagos CPU, and the BL-660, a new single-width, Xeon E5 based four-socket blade. In addition, HP has expanded the capacity of the sidecar storage blade to 1.5 TB, enabling an 8-server, 12 TB+ chassis configuration.
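
As a quick sanity check on that chassis math, here is a minimal sketch. The 16 half-height bays and the even split between server and storage blades are my assumptions for illustration; the announcement itself only states the 8-server, 12 TB+ result:

    # Rough chassis math for the expanded sidecar storage blade.
    # The 16-bay enclosure and the even server/storage split are
    # assumptions for illustration, not HP's stated configuration.
    half_height_bays = 16
    server_blades = 8
    storage_blades = half_height_bays - server_blades  # 8 sidecar blades
    tb_per_storage_blade = 1.5

    total_tb = storage_blades * tb_per_storage_blade   # 12.0 TB
    print(f"{server_blades} servers, {total_tb:.0f} TB in one chassis")
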
Read more

Dell Joins The ARMs Race, Announces ARM-Based 'Copper' Server

Richard Fichera

Earlier this week, Dell joined arch-competitor HP in endorsing ARM as a potential platform for scale-out workloads by announcing “Copper,” an ARM-based version of its PowerEdge-C dense server product line. Dell’s announcement and positioning, while a little less high-profile than HP’s February announcement, are intended to serve the same purpose: to enable an ARM ecosystem by providing a platform for exploring ARM workloads, and to gain a visible presence in the event that the market begins to take off.

Dell’s platform is based on a four-core Marvell ARM v7 SoC, which Dell claims delivers somewhat higher performance than the Calxeda part, although it draws more power: 15W per node (including RAM and local disk). The server uses the PowerEdge-C form factor of 12 vertically mounted server modules in a 3U enclosure, each module carrying four server nodes, for a total of 48 servers/192 cores in a 3U enclosure. In a departure from other PowerEdge-C products, the Copper server has integrated L2 network connectivity spanning all servers, so the unit can serve as a low-cost test bed for clustered applications without external switches.
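
The density figures are easy to verify with a little arithmetic; here is a minimal sketch using only the numbers quoted above (the total-power figure it derives is my extrapolation from Dell’s 15W-per-node claim, not a Dell spec):

    # Back-of-the-envelope density math for the Copper enclosure,
    # using only the per-node figures quoted above.
    modules_per_3u = 12   # vertically mounted server modules
    nodes_per_module = 4  # server nodes per module
    cores_per_node = 4    # four-core Marvell ARM SoC
    watts_per_node = 15   # per Dell, including RAM and local disk

    nodes = modules_per_3u * nodes_per_module  # 48 servers
    cores = nodes * cores_per_node             # 192 cores
    watts = nodes * watts_per_node             # 720 W (my extrapolation)
    print(f"{nodes} servers, {cores} cores, ~{watts} W per 3U enclosure")
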

Dell is offering this server to selected customers, not as a GA product, along with open source versions of the LAMP stack, Crowbar, and Hadoop. Currently, Canonical is supplying Ubuntu for ARM servers, and Dell is actively working with other partners. Dell expects to see OpenStack available for demos in May, and there is an active Fedora project underway as well.

Read more