For the most part, enterprises understand that virtualization and automation are key components of a private cloud, but at what point does a virtualized environment become a private cloud? What can a private cloud offer that a virtualized environment can’t? How do you sell this idea internally? And how do you deliver a true private cloud in 2011?
This March in London, I am facilitating a meeting of the Forrester Leadership Board Infrastructure & Operations Council, where we will tackle these very questions. If you are considering building a private cloud, there are changes you will need to make in your organization to get it right, and our I&O Council meeting will give you the opportunity to discuss them with other I&O leaders facing the same challenge.
NetApp recently announced that it was acquiring Akorri, a small but highly regarded provider of management solutions for virtualized storage environments. All in all, this is yet another sign of the increasingly strategic importance of virtualized infrastructure and the need for existing players, regardless of how strong their positions are in their respective silos, to acquire additional tools and capabilities for management of an extended virtualized environment.
NetApp, while one of the strongest suppliers in the storage industry, faces not only continued pressure from EMC, which owns VMware and has been on a management software acquisition binge for years, but also renewed pressure from IBM and HP, which are increasingly tying their captive storage offerings into their own integrated virtualized infrastructure offerings. This tighter coupling of proprietary technology, while not explicitly disenfranchising external storage vendors, will still tighten the screws slightly and reduce the number of opportunities for NetApp to partner with them. Even Dell, long regarded as the laggard in high-end enterprise presence, has been ramping up its investment in management software and its ability to deliver integrated infrastructure, including both the purchase of storage technology and a very clear signal, with its run at 3Par and recent investments in companies such as Scalent (see my previous blog on Dell as an enterprise player and my colleague Andrew Reichman’s discussion of the 3Par acquisition), that it wants to go even further as a supplier of integrated infrastructure.
I just spent some time talking to ScaleMP, an interesting niche player that provides a server virtualization solution. What is interesting about ScaleMP is that rather than splitting a single physical server into multiple VMs, it offers the only successful solution (to the best of my knowledge) that allows I&O groups to scale up a collection of smaller servers to work as a larger SMP.
Others have tried and failed to deliver this kind of solution, but ScaleMP seems to have actually succeeded, with a claimed 200 customers and expectations of somewhere between 250 and 300 next year.
Their vSMP product comes in two flavors, one that allows a cluster of machines to look like a single system for purposes of management and maintenance while still running as independent cluster nodes, and one that glues the member systems together to appear as a single monolithic SMP.
Does it work? I haven’t been able to verify their claims with actual customers, but they have been selling for about five years and claim over 200 accounts, a couple of dozen of them publicly referenced. All in all, that is probably too elaborate a front to maintain if there were really nothing there. The background of the principals and the technical details they were willing to share convinced me that they have a deep understanding of the low-level memory management, prefetching, and caching needed to make a collection of systems function effectively as a single system image. Their smaller-scale benchmarks displayed good scalability in the range of 4 – 8 systems, well short of their theoretical limits.
My quick take is that the software works, and bears investigation if you have an application that:
Either is certified to run with ScaleMP (not many are), or whose code you control,
Whose memory reference patterns you understand, and
There has been a lot of press about IBM’s acquisition of BNT (Blade Network Technologies) focusing on the economics and market share of BNT as a competitor to Cisco and HP’s ProCurve/3Com franchise. But at its heart the acquisition is more about defending and expanding a position in the emerging converged server, networking, and storage infrastructure segment than it is about raw switch port market share. It is also a powerful vindication of the proposition that infrastructure convergence is driving major realignment in the vendor industry.
Starting with HP’s success with its c-Class blade servers and Virtual Connect technology, and escalating with Cisco’s entrance into the server market, IBM continued its investment in its Virtual Fabric and Open Fabric Manager technology, heavily leveraging BNT’s switch platforms. At some point it became clear that BNT was a critical element of IBM’s convergence strategy, leaving IBM’s plans heavily dependent on a vendor with which it had an excellent but non-exclusive relationship, and whose acquisition by another player could severely compromise those plans. Hence the acquisition. Now that it owns BNT, IBM can capitalize on BNT’s excellent edge network technology for further development of its converged infrastructure strategy without hesitation.
I recently spent a day with IBM’s x86 team, primarily to get back up to speed on their entire x86 product line, and partially to realign my views of them after spending almost five years as a direct competitor. All in all, time well spent, with some key takeaways:
IBM has fixed some major structural problems with the entire x86 program and its perception within the company – As recently as two years ago, it appeared to the outside world that IBM was not really serious about x86 servers. Between licensing its low-end server designs to Lenovo (although IBM continued to sell its own versions) and an apparent retreat to the upper end of the segment, the x86 business looked like an afterthought. New management, new alignment with sales, and a higher internal profile for x86 seem to have moved the division back into IBM’s mainstream.
Increased investment – It looks like IBM significantly ramped up investments in x86 products about three years ago. The result has been a relatively steady flow of new products into the marketplace, some of which, such as the HS22 blade, significantly reversed deficits versus equivalent HP products. Others followed in high-end servers, virtualization, and systems management, along with an increased velocity of innovation in low-end systems.
Established leadership in new niches such as dense modular server deployments – IBM’s iDataPlex, while representing a small fraction of its total volume, gave it immediate visibility as an innovator in the rapidly growing niche for hyperscale dense deployments. Along the way, IBM has also apparently become the leader in GPU deployments, another low-volume but high-visibility niche.
Despite its networking roots, today’s Interop has evolved to address an expansive range of IT roles, responsibilities, and topics. While networking managers will still feel at home in the networking track, Interop also addresses themes highly relevant to the broader interests of IT Infrastructure & Operations (I&O) professionals, such as cloud computing, virtualization, storage, wireless and mobility, and IT management.
IT professionals responsible for the “I” (or Infrastructure) in I&O will find the event particularly relevant. So much so that Forrester has partnered with Interop to develop track agendas, identify speakers, moderate panels, and even present. For the last two years, I have chaired the Data Center and Green IT tracks at Interop’s Las Vegas and New York events. And I am doing the same this year at Interop New York 2010 from October 18th to 22nd.
Historically, Dell’s positioning versus its two major competitors for high-value enterprise business, particularly where it involved complex services and the ability to deliver deeply integrated infrastructure and management stacks, has been that of an also-ran. Competitors looked at Dell as a price spoiler and a channel for standard storage and networking offerings from its partners, not as a potential threat to the high ground of delivering complex integrated infrastructure solutions.
This comforting image of Dell as a glorified box pusher appears to be coming to an end. When my colleague Andrew Reichman recently wrote about Dell’s attempted acquisition of 3Par, it made me take another look at Dell’s recent pattern of investments and the series of announcements it has made around delivering integrated infrastructure, with a message and solution offering that looks like it is aimed squarely at HP and IBM’s Virtual Fabric.
I’ve been getting a number of inquiries recently regarding benchmarking potential savings from consolidating multiple physical servers onto a smaller number of servers using VMs, usually VMware. The variations in the complexity of the existing versus new infrastructures, operating environments, and applications under consideration make it impossible to come up with consistent rules of thumb, and in most cases, also make it very difficult to predict with any accuracy what the final outcome will be absent a very tedious modeling exercise.
However, the major variables that influence the puzzle remain relatively constant, giving us the ability to at least set out a framework to help analyze potential consolidation projects. This list usually includes:
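Although the specific list of variables will differ by engagement, the shape of such an analysis can be sketched as a toy cost model. Everything in this sketch — the variable names, the 10:1 consolidation ratio, and the dollar figures — is an assumption for illustration, not a benchmark or a rule of thumb:

```python
import math

# Illustrative sketch only: a toy model of annual server-consolidation
# savings. All inputs are hypothetical; a real analysis would add many
# more variables (migration labor, storage, network, downtime risk, etc.).
def consolidation_savings(
    physical_servers: int,       # count of servers to be consolidated
    vms_per_host: float,         # assumed consolidation ratio (e.g., 10:1)
    cost_per_old_server: float,  # annual power/cooling/maintenance per old box
    cost_per_new_host: float,    # annual cost per new virtualization host
    hypervisor_license: float,   # annual license cost per host
) -> float:
    """Return estimated annual savings (can be negative)."""
    hosts_needed = math.ceil(physical_servers / vms_per_host)
    current_cost = physical_servers * cost_per_old_server
    new_cost = hosts_needed * (cost_per_new_host + hypervisor_license)
    return current_cost - new_cost

# Hypothetical example: 100 old servers at $2,000/yr each, consolidated
# 10:1 onto hosts costing $6,000/yr each plus $3,000/yr in licensing.
savings = consolidation_savings(100, 10, 2000.0, 6000.0, 3000.0)
print(savings)  # 200000 - (10 * 9000) = 110000.0
```

Even a crude model like this makes the sensitivity obvious: the outcome swings dramatically with the assumed consolidation ratio and per-host licensing, which is exactly why no single rule of thumb survives contact with a real environment.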
Yesterday I attended Computacenter’s analyst event. Computacenter is a major independent provider of IT infrastructure services in Europe, ranging from reselling hardware and software to managing data centers and providing outsourced desktop management. My main interest was how it manages the potential conflict between properly advising the client and maximizing revenue from selling software. For instance, clients often ask me whether it is dangerous to employ their value-added reseller (VAR) to advise them on license management, in case the reseller tips off its vendors about a potential source of license revenue.
An excellent customer case study at the event provided another example. A UK water company engaged Computacenter to implement a new desktop strategy involving 90% fully virtualized thin clients. Such a project creates major licensing challenges on both the desktop and server sides, because the software companies haven’t enhanced their models to properly cope with this scenario. The VAR’s dilemma is whether to design a solution that will be cheapest for the customer or one that will be most lucrative for itself.
As we said in our recent report “Refresher Course: Hiring VARs,” sourcing managers should decide whether they want their VARs to provide design and integration services like these or merely process orders at a minimum margin.
Computacenter will do either, but it clearly wants to do more of the VA part and less (proportionately) of the R. So, according to its executives, it has no hesitation doing what is best for the customer, even if that reduces its commission in the short term. But they didn’t think many of their competitors would take the same view.
The rise and rise of cloud has dominated the headlines for the past few years, but for CIOs it has become a serious priority only recently. People like cloud computing. Well - at least they like the concept of cloud computing. It is fast to implement, affordable, and scales easily to business requirements. On closer inspection, though, cloud poses many challenges for organizations. For CIOs, the considerable challenge is how to restructure the IT department and IT services to cope with the new demands that cloud computing will place on the business. And often these demands come from the business itself: as business leaders realize that cloud computing lowers costs and hastens time to value, they start to see that they can get many more business cases over the line for new capabilities, products, and services.