Hybrid clouds are especially subject to the law of unintended consequences, says Forrester’s cloud expert James Staten. Many IT organizations don’t even acknowledge that they have a hybrid cloud. The reality: If enterprises are using public cloud software-as-a-service (SaaS) and/or deploying any custom applications in the public cloud, then by definition they have a hybrid cloud, because it almost always connects to the back end.
In this episode of TechnoPolitics, James implores CIOs and IT professionals to get serious about hybrid cloud now to avoid spaghetti clouds in the future.
To publish this post, I must first discredit myself. I'm 42, and while I love what I do for a living, Michael Dell is 47 and his company was already doing $1 million a day in business by the time he was 31. I look at guys like that and think: "What the h*** have I been doing with my time?!?" Nevertheless, Dell is a company I've followed more closely than any other but Apple since the mid-2000s, and in the past two years I've had the opportunity to meet with several Dell executives and employees - from Montpellier, France to Austin, Texas.
Because I cover both PC hardware and client virtualization here at Forrester, I'm in regular contact with Dell customers, who inevitably ask what we as a firm think about Dell's announcement that it will go private, just as they have asked about HP these past several quarters, ever since the circus started over there with Mr. Apotheker. Hopefully what follows is information and analysis that you, as an I&O leader, can rely on to develop a clearer perspective on Dell.
Complexity is Dell's enemy
Dell's complexity as an organization right now is enormous. The company has been on a "Quest" to reinvent itself, going from a PC and server vendor to an end-to-end solutions vendor, in the hope that its chief differentiator can be unique software that drives more repeatable solutions delivery and, in turn, lower solutions cost. I use the word "hope" deliberately, because getting there means focusing most of its efforts on a handful of solutions that no other vendor can provide. It's a massive undertaking: as a public company, Dell has to do this while keeping cash flow going in the lines of business from each acquisition, and growing those, all while it develops the focused solutions. So far, it hasn't.
Now that we’ve been back from the holidays for a month, I’d like to round out the 2013 predictions season with a look at the year ahead in server virtualization. If you’re like me (or this New York Times columnist), you’ll agree that a little procrastination can sometimes be a good thing to help collect and organize your plans for the year ahead. (Did you buy that rationalization?)
We’re now more than a decade into the era of widespread x86 server virtualization. Hypervisors are certainly a mature (if not peaceful) technology category, and the consolidation benefits of virtualization are now incontestable. 77% of you will be using virtualization by the end of this year, and you’re running as many as six out of 10 workloads in virtual machines. With such strong penetration, what’s left? In our view: plenty. It’s time to ask your virtual infrastructure, “What have you done for me lately?”
With that question in mind, I asked my colleagues on the I&O team to help me predict what the year ahead will hold. Here are the trends in 2013 you should track closely:
Consolidation savings won’t be enough to justify further virtualization. For most I&O pros, the easy workloads are already virtualized. Looking ahead at 2013, what’s left are the complex, business-critical applications your firm can’t run without (high-performance databases, ERP, and collaboration top the list). You won’t virtualize these to save on hardware; you’ll do it to make them mobile, so they can be moved, protected, and duplicated easily. You’ll have to explain how virtualizing these apps will make them faster, safer, and more reliable—then prove it.
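The arithmetic behind this shift is easy to sketch. The toy Python model below uses purely hypothetical workload counts and consolidation ratios (not Forrester data) to show why the hardware-savings case weakens: the easy workloads that are already virtualized pack densely onto hosts, while the remaining business-critical apps pack sparsely, so each additional wave of virtualization eliminates fewer physical servers.

```python
# Hypothetical, illustrative numbers only -- not Forrester data.
# Demonstrates diminishing hardware savings as virtualization matures.

def hardware_saved(workloads: int, consolidation_ratio: int) -> int:
    """Physical servers eliminated by packing workloads onto shared hosts."""
    hosts_needed = -(-workloads // consolidation_ratio)  # ceiling division
    return workloads - hosts_needed

# Wave 1: easy workloads (file/print, test/dev) pack densely.
easy = hardware_saved(workloads=100, consolidation_ratio=10)      # 100 -> 10 hosts

# Wave 2: business-critical apps (databases, ERP) pack sparsely.
critical = hardware_saved(workloads=100, consolidation_ratio=2)   # 100 -> 50 hosts

print(easy, critical)  # 90 vs. 50 servers eliminated for the same effort
```

With numbers like these, the second wave saves far less hardware per workload virtualized, which is why the justification has to shift to mobility, protection, and recoverability rather than consolidation alone.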
On Tuesday, September 4, Microsoft officially announced Windows Server 2012, ending what has seemed like an interminable sequence of rumors, beta releases, and speculation about this successor to Windows Server 2008.
So, is it worth the wait and does it live up to its hype? All omens point to a resounding “YES.”
Make no mistake: this is a major restructuring of the OS and a step-function increase in capabilities, aligned with several strategic trends for both Microsoft and the rest of the industry. While Microsoft’s high-level message centers on the cloud, and on the Windows Server 2012 features that make it a productive platform on which both enterprises and service providers can build a cost-effective cloud, its features will be immensely valuable to a wide range of businesses.
What It Does
The reviewer's guide for Windows Server 2012 is over 220 pages long, and the OS has at least 100 features worth noting, so a real exploration of this OS is well beyond what I can do here. Nonetheless, we can look at several buckets of technology to understand its general capabilities. It's also important to note that while Microsoft has positioned this as a very cloud-friendly OS, almost all of these cloud-related features are just as useful in an enterprise IT environment.
New file system — Included in WS2012 is ReFS, a new file system designed to survive failures that would bring down or corrupt its predecessor, NTFS (which is still available). Combined with improvements in cluster management and failover, this is a capability that will play across the entire user spectrum.
Last year, my colleague James Staten and I published evaluations of the (internal) private cloud and public cloud markets — this year we’re going to fill in the remaining gap in the IaaS space by publishing a Forrester Wave evaluation on hosted private cloud solutions. Vendors participating in this report will be evaluated against key criteria, through a demo following a mandatory script, and with customer references to validate the solution. Throughout the research process I’ll share updates and interesting findings before the report goes live in early Q4 2012.
So, what is hosted private cloud? Like almost every product in the cloud space, there’s a lot of ambiguity about what you’ll be getting if you sign on to use a hosted private cloud solution. Today, NIST defines private cloud as:
The cloud infrastructure is provisioned for exclusive use by a single organization comprising multiple consumers (e.g., business units). It may be owned, managed, and operated by the organization, a third party, or some combination of them, and it may exist on or off premises.
Hosted private cloud refers to a variation of this where the solution lives off-premises in a hosted environment while still incorporating NIST's IaaS service definition, particularly where “[t]he consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications.” But there’s a great deal of variation in today’s hosted private cloud arena. Usually solutions differ in the following ways:
The long-rumored changing of the guard at VMware finally took place last week, and with it fell a stubborn strategic stance that was a big client dissatisfier. Out went the ex-Microsoft visionary who dreamed of delivering a new "cloud OS" to replace Windows Server as the corporate standard; in came a pragmatic refocusing on infrastructure transformation that acknowledges the heterogeneous reality of today's data center.
Paul Maritz will move into a technology strategy role at EMC, where he can focus on how the greater EMC company can raise its relevance with developers. Clearly, EMC needs developer influence and application-level expertise, and from a stronger, full-portfolio perspective; his experience can be put to better use here, and we expect Paul to shine in this role. However, I wouldn't look for him to re-emerge as CEO of a new spin-out of these assets. At heart, Paul is a natural technologist, and it's not clear all these assets would move out as one anyway.
Our latest survey on IT budgets and priorities shows that 35 percent of enterprises have a big focus on cloud computing (calling it a high or critical priority), but do we really know how best to apply that investment?
We continue to see a large disconnect between what the business wants from cloud services and what the IT department wants to offer and support. The short story: the business wants public cloud services (or something very, very close to that value proposition) for delivering new services and capabilities to market, while IT wants to offer private cloud solutions that improve operational efficiency and drive down overall IT costs. IT doesn't have its head in the sand about the business's demands; it just has to balance those desires against what IT is measured on — the cost and security of the services it provides. And frankly, IT doesn't trust the public cloud.
Knowing the psychology above, how can an enterprise best set a winning cloud strategy? If it invests purely in the business's care-abouts, it may win time to market but risks investing ahead of its ability to support and protect the business. If it invests in IT's priorities, it risks alienating the business, increasing circumvention, and lagging competitively. The answer lies in striking an appropriate balance between these conflicting priorities: choosing a strategy that encourages collaboration between business and IT and accelerates everyone's experience with these new technologies. That balance will be different for every firm, based on its competitive market, regulatory environment, and geography. But in general, most enterprises are being far more conservative than they should be.
Sound familiar? Executives across the globe feel peer and competitive pressure to “get to yes” on private cloud. The burden falls on IT to provide a cloud solution — oh, and by the way, we need it by the end of the year. With that clock ticking, it’s hard to think about private cloud strategically. In fact, why not just cloudwash your virtual environment and buy your team time? Many enterprises (yes, even those presenting at events) have gone down this road, and some vendors will suggest it as a short-term fix. DON’T DO IT.
You’re selling yourself short on what you could achieve with this environment, and losing credibility with the business and your peers. Sound overdramatic? The consumerization of IT is forcing IT to connect with the business or risk circumvention. For many, the existing relationship isn't great, and each future interaction could either improve or worsen it. Promising the business a cloud delivered within your own data center, and then failing to provide the basic functionality of a cloud, will just make future initiatives and interactions even harder. In the meantime, the business will continue to circumvent your department. If you're going to invest the resources and time to build this environment and rope in rogue cloud users — make sure you actually get to cloud.
Shakespeare wrote in his famous play Hamlet, "Whether 'tis nobler in the mind to suffer the slings and arrows of outrageous fortune, or to take arms against a sea of troubles, and by opposing end them? To die: to sleep; no more." Hamlet, of course, was weighing how to respond to betrayal within his own family, but the quote is just as apt today in the world of cloud computing, because in the minds of many I&O professionals, it is the business that is committing the betrayal.
Over the last couple of years, IBM, despite having a rich internal technology ecosystem and a number of competitive blade and CI offerings, has not had a comprehensive integrated offering to challenge HP’s CloudSystem Matrix and Cisco’s UCS. This past week IBM effectively silenced its critics and jumped to the head of the CI queue with the announcement of two products, PureFlex and PureApplication, the results of a massive multi-year engineering investment in blade hardware, systems management, networking, and storage integration. Based on a new modular blade architecture and new management architecture, the two products are really more of a continuum of a product defined by the level of software rather than two separate technology offerings.
PureFlex is the base product. It consists of the new hardware (which, despite holding the same number of blades as the existing HS blade products, is in fact a totally new design) and integrates BNT-based networking along with a new object-based management architecture that can manage up to four chassis and provides a powerful set of optimization, installation, and self-diagnostic functions for the hardware and software stack, up to and including the OS images and VMs. In addition, IBM appears to have integrated the complete suite of Open Fabric Manager and Virtual Fabric for remapping MAC/WWN UIDs and managing VM networking connections, plus storage integration via the embedded V7000 storage unit, which serves as both a storage pool and an aggregation point for virtualizing external storage. The laundry list of features and functions is too long to itemize here, but PureFlex, especially with its hypervisor neutrality and IBM’s Cloud FastStart option, is a complete platform for an enterprise private cloud or a horizontal VM compute farm, however you choose to label a shared VM utility.