Last year, my colleague, James Staten, and I published evaluations of the (internal) private cloud and public cloud markets — this year we're going to fill in the remaining gap in the IaaS space by publishing a Forrester Wave evaluation on Hosted Private Cloud Solutions. Vendors participating in this report will be evaluated against key criteria, will give a demo following a mandatory script, and will provide customer references to validate their solutions. Throughout the research process I'll be providing updates and interesting findings before the report goes live in early Q4 2012.
So, what is hosted private cloud? Like almost every product in the cloud space, there’s a lot of ambiguity about what you’ll be getting if you sign on to use a hosted private cloud solution. Today, NIST defines private cloud as:
The cloud infrastructure is provisioned for exclusive use by a single organization comprising multiple consumers (e.g., business units). It may be owned, managed, and operated by the organization, a third party, or some combination of them, and it may exist on or off premises.
Hosted private cloud refers to a variation of this where the solution lives off-premises in a hosted environment while still incorporating NIST's IaaS service definition, particularly where “[t]he consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications.” But there’s a great deal of variation in today’s hosted private cloud arena. Usually solutions differ in the following ways:
During a recent global analyst event in Paris, Capgemini presented its strategy to a panel of market and financial analysts. It hinges on two main objectives: improving the resilience of the organization in an uncertain economic environment — especially in Europe — and finding new levers for margin improvements.
From an operations point of view, Capgemini intends to continue leveraging the usual suspects: industrialization, cost cutting, and accelerating the development of its offshore talent pool. It is also aiming to optimize its human resource pool via a pyramid management program designed, among other things, to allocate the right experience level to the right type of work.
More interestingly, the company showcased some of the global offerings it has put together or refined over the past 12 months. Capgemini’s strategic intent is to develop offerings addressing three major client-relevant themes – customer experience, operational processes, and new business models. The offerings will be enabled by a combination of cloud, mobile, analytics, and social technologies. Among the set of offerings managed globally, I found the following of particular interest due to their emerging nature and Capgemini’s interesting approach to developing them:
Services budgets represent 10% of annual IT operating and capital budgets, but Forrester sees considerable evidence that the influence of IT services vendors is proportionally higher — and growing dramatically. While there are several reasons for the rising importance of your services partners, at the most fundamental level Forrester sees that:
Business professionals need immediate access to tech-enabled innovation. Most strategic business initiatives now have an underlying technology component. Service providers come to the table with the tech savvy, vertical market expertise, and best practices to make these initiatives work.
IT professionals can’t keep pace with business demand. The volume and complexity of technology demands from business professionals means that traditional IT organizations have difficulty keeping pace. They too need to work with the best mix of IT service providers to meet the demands of their business. Effective supplier management is quickly becoming the most essential skill in IT organizations.
Our latest survey on IT budgets and priorities shows that 35% of enterprises have a big focus on cloud computing (calling it a high or critical priority), but do we really know how best to apply that investment?
We continue to see a large disconnect between what the business wants from cloud services and what the IT department wants to offer and support. The short story: the business wants public cloud services (or something very, very close to that value proposition) for delivering new services and capabilities to market, while IT wants to offer private cloud solutions that improve operational efficiency and drive down overall IT costs. IT doesn't have its head in the sand about the business's demands; it just has to balance those desires against what IT is measured on — the cost and security of the services it provides. And frankly, IT doesn't trust the public cloud.
Knowing the psychology above, how can an enterprise best set a winning cloud strategy? If it invests purely against the business's care-abouts, it may win time to market but risks investing ahead of its ability to support and protect the business. If it invests against IT's priorities, it risks alienating the business, increasing circumvention, and falling behind competitively. The answer lies in striking an appropriate balance between these conflicting priorities and choosing a strategy that encourages the most collaboration between business and IT and accelerates everyone's experience with these new technologies. That balance will be different for every firm based on its competitive market, regulatory environment, and geography. But in general, most enterprises are being far more conservative than they should be.
On July 11, 2012, SingTel launched its PowerON Compute cloud service in Hong Kong. While certainly interesting on its own, I believe this announcement is particularly noteworthy as a harbinger of things to come.
Some key points to consider:
As a hybrid offering, PowerON Compute is a dynamic infrastructure services solution hosted in SingTel’s data centers in Singapore, Australia, and now Hong Kong. The computing resources (e.g., CPU, memory, storage) can be accessed either via a public Internet connection or a private secured network.
This announcement confirms the findings of my February 2012 report, “Sizing the Cloud Markets in Asia Pacific”: that market demand for cloud-based computing resources in Asia Pacific (AP) will rapidly shift from infrastructure-as-a-service (IaaS) to dynamic infrastructure services.
The poorly kept secret that is the Google Nexus 7 tablet was just announced amid much developer applause and excitement. The device is everything it was rumored to be and the specs — something that only developers care about, of course — were impressive, including the 12 core GPU that will make the Nexus 7 a gaming haven. True, it's just another in a long line of tablets, albeit a $199 one that competes directly with Amazon's Kindle Fire and undercuts the secondary market for the iPad.
But as a competitor to the iPad, Nexus 7 isn't worth the digital ink I'm consuming right now.
But Google isn't just selling a device. Instead, the company wants to create a content platform strategy that ties together all of its ragtag content and app experiences into a single customer relationship. Because the power of the platform is the only power that will matter (see my recent post for more information on platform power). It's unfortunate that consumers barely know what Google Play is because it was originally called Android Market, but the shift to the Google Play name a few months back and the debut of a device that is, according to its designers, "made for Google Play," show that Google understands what will matter in the future. Not connections, not devices. But experiences. The newly announced Nexus 7, as a device, is from its inception subservient to the experiences — some of them truly awesome — that Google's Play platform can provide through it.
In typical Microsoft fashion, the company doesn't catch a new trend right with the first iteration, but it keeps at it, eventually strikes the right tone, and, more often than not, gets good enough. And often good enough wins. That seems to be the pattern playing out with Windows Azure, its cloud platform.
Earlier this week Dell joined arch-competitor HP in endorsing ARM as a potential platform for scale-out workloads by announcing “Copper,” an ARM-based version of its PowerEdge-C dense server product line. Dell’s announcement and positioning, while a little less high-profile than HP’s February announcement, is intended to serve the same purpose — to enable an ARM ecosystem by providing a platform for exploring ARM workloads and to gain a visible presence in the event that it begins to take off.
Dell’s platform is based on a four-core Marvell ARM V7 SOC implementation, which it claims is somewhat higher performance than the Calxeda part, although drawing more power, at 15W per node (including RAM and local disk). The server uses the PowerEdge-C form factor of 12 vertically mounted server modules in a 3U enclosure, each with four server nodes on them for a total of 48 servers/192 cores in a 3U enclosure. In a departure from other PowerEdge-C products, the Copper server has integrated L2 network connectivity spanning all servers, so that the unit will be able to serve as a low-cost test bed for clustered applications without external switches.
Dell is offering this server to selected customers, not as a GA product, along with open source versions of the LAMP stack, Crowbar, and Hadoop. Currently Canonical is supplying Ubuntu for ARM servers, and Dell is actively working with other partners. Dell expects to see OpenStack available for demos in May, and there is an active Fedora project underway as well.
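To put the density and power figures above in context, here is a minimal sketch of the arithmetic, extrapolated to a standard 42U rack. The rack-level numbers are my own illustrative extrapolation, not Dell's published figures:

```python
# Density math for the Copper configuration described above.
# Per-enclosure figures come from the text; the 42U-rack extrapolation
# is an illustrative assumption, not a Dell specification.
MODULES_PER_ENCLOSURE = 12  # vertically mounted server modules per 3U enclosure
NODES_PER_MODULE = 4        # server nodes per module
CORES_PER_NODE = 4          # four-core Marvell ARM V7 SoC per node
WATTS_PER_NODE = 15         # per node, including RAM and local disk
RACK_U = 42                 # assumed standard rack height
ENCLOSURE_U = 3             # height of one PowerEdge-C enclosure

nodes_per_enclosure = MODULES_PER_ENCLOSURE * NODES_PER_MODULE   # 48 servers
cores_per_enclosure = nodes_per_enclosure * CORES_PER_NODE       # 192 cores

enclosures_per_rack = RACK_U // ENCLOSURE_U                      # 14 enclosures
nodes_per_rack = enclosures_per_rack * nodes_per_enclosure       # 672 servers
cores_per_rack = nodes_per_rack * CORES_PER_NODE                 # 2,688 cores
rack_power_kw = nodes_per_rack * WATTS_PER_NODE / 1000           # ~10.1 kW

print(nodes_per_enclosure, cores_per_enclosure)
print(nodes_per_rack, cores_per_rack, rack_power_kw)
```

The per-enclosure totals match the 48 servers/192 cores in 3U that Dell cites; the rack-level extrapolation simply shows why this class of dense, low-wattage ARM node is attractive for scale-out workloads.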
I bet you are thinking, "Oh no, this looks like a typical Friday IT blog post" — and it has all the key ingredients: it's Friday (tick), it has science fiction references (tick), it has a weird title (tick) — but please go with the flow on this one.