The drum continues to beat for converged infrastructure products, and Dell has given it the latest thump with the introduction of vStart, a pre-integrated environment for VMware. Best thought of as a competitor to VCE, the integrated VMware, Cisco and EMC virtualization stack, vStart combines:
A lot of tech vendors – and channel partners – are struggling to figure out what channel partners’ role in the cloud services demand chain is going to be. Technology is decreasingly delivered and consumed in the form of on-premise installation (a function that was the original raison d’être of channel partners) and increasingly delivered as a service by a service provider. In the software sector, that service provider is typically (but not always) the software vendor (think: salesforce.com).
And, in most cases, for good reason. Software has bugs. Early versions of software can be unstable and unpredictable. In the classic channel-partner-sells-and-installs-software model, the product (the software) remains in the control of the software vendor, i.e., the vendor assumes the risk of customers’ unmet expectations. The license is between the vendor and the customer, and the vendor is on the hook for providing bug fixes and tier-2 and -3 support.
As much as many channel partners would like to act as application hosters (and many of them do – approximately 15% of software is delivered via a hosting model today, and 20% of channel partners today have a hosting business [see “Channel Models In The Era Of Cloud”]), when it comes to early-version or mission-critical software, vendors simply can’t risk putting the as-a-service service-level and performance responsibility in the hands of channel partners. Service failures, over which the vendor would have no control, would result in egg (or worse!) on the vendor’s brand, not the channel partner’s. Until tech vendors’ partner programs mature to the point where they can certify partners’ data centers, those vendors are going to be reluctant to hand over the data center reins to partners.
Cloud infrastructure-as-a-service (IaaS) is a hot market. Amazon Web Services, now five years old, drives a lot of attention and customer volume, but the vendor strategists at enterprise-facing providers such as IBM, HP, AT&T and Verizon have been building and delivering IaaS offerings as well. As I’ve studied the market, I’ve heard wildly different types of requirements from buyers and quite a range of offerings from service providers. Yet much of the industry dialogue is about one central idea of what IaaS is – I think that’s wrongheaded. I found that there were really two buyer types: 1) informal buyers outside of the IT operations/data center manager organizations, such as engineers, scientists, marketing executives, and developers, and 2) formal buyers, the IT operations and data center managers responsible for operating applications and maintaining infrastructure.
With this idea in mind, I set out to test the views of IT infrastructure buyers in the Forrsights Hardware Survey, Q3 2010 and learned that:
After 2+ years of cloud hype, only 6% of enterprise IT infrastructure respondents report using IaaS, with another 7% planning to implement by Q3 2012. After flat adoption from 2008 to 2009, this represents an approximate doubling from 2009, off a very small base.
Almost two-thirds of IT infrastructure buyers themselves don’t believe they are the primary buyer of cloud IaaS! We asked them which groups in their company are using or most interested in cloud IaaS. Only 36% of IT infrastructure buyers listed themselves, while 7% didn’t know. The rest, 58%, said that IT developers, Web site owners, business unit owners of batch compute-intensive apps, and other business unit developers were more interested in using IaaS than they were.
Calxeda, one of the most visible stealth mode startups in the industry, has finally given us an initial peek at the first iteration of its server plans, and they both meet our inflated expectations from this ARM server startup and validate some of the initial claims of ARM proponents.
While still holding actual delivery dates and detailed specifications close to its vest, Calxeda did reveal the following cards from its hand:
The first reference design, which will be provided to OEM partners as well as delivered directly to selected end users and developers, will be based on an ARM Cortex A9 quad-core SOC design.
The SOC, as Calxeda will demonstrate with one of its reference designs, will enable OEMs to design servers as dense as 120 ARM quad-core nodes (480 cores) in a 2U enclosure, with an average consumption of about 5 watts per node (1.25 watts per core) including DRAM.
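As a sanity check on those density claims, the arithmetic works out as follows. This is a simple sketch using only the figures quoted above; the per-enclosure power total is my own extrapolation, not a Calxeda figure:

```python
# Density and power arithmetic for the quoted Calxeda reference design.
nodes_per_2u = 120      # quad-core SOC nodes in a 2U enclosure
cores_per_node = 4
watts_per_node = 5.0    # average consumption per node, including DRAM

total_cores = nodes_per_2u * cores_per_node      # 480 cores per 2U
watts_per_core = watts_per_node / cores_per_node  # 1.25 watts per core

# Extrapolated: total node power budget for a fully populated 2U enclosure.
enclosure_watts = nodes_per_2u * watts_per_node   # 600 watts

print(total_cores, watts_per_core, enclosure_watts)
```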
While not forthcoming with details about the performance, topology or protocols, the SOC will contain an embedded fabric for the individual quad-core SOC servers to communicate with each other.
Most significantly for prospective users, Calxeda is claiming – and has some convincing models to back up these claims – that it will provide a 5X to 10X advantage in performance/watt (even higher when price is factored in, for a metric of performance/watt/$) over any products it expects to see in the market when it ships.
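To make those comparison metrics concrete, here is a minimal sketch of how performance/watt and performance/watt/$ would be computed. All of the numbers below are hypothetical placeholders for illustration, not Calxeda or competitor figures:

```python
def perf_per_watt(performance, watts):
    """Performance per watt: higher is better."""
    return performance / watts

def perf_per_watt_per_dollar(performance, watts, price):
    """Folds price in as well, so a cheaper system scores higher."""
    return performance / (watts * price)

# Hypothetical systems (placeholder numbers only):
# a low-power ARM-based node vs. a conventional x86 server.
arm = {"perf": 100.0, "watts": 5.0, "price": 300.0}
x86 = {"perf": 400.0, "watts": 100.0, "price": 2400.0}

ppw_ratio = (perf_per_watt(arm["perf"], arm["watts"])
             / perf_per_watt(x86["perf"], x86["watts"]))
ppwd_ratio = (perf_per_watt_per_dollar(arm["perf"], arm["watts"], arm["price"])
              / perf_per_watt_per_dollar(x86["perf"], x86["watts"], x86["price"]))

print(ppw_ratio)   # performance/watt advantage of the ARM node
print(ppwd_ratio)  # larger still once price is factored in
```

Note how folding price into the denominator amplifies the advantage of the cheaper system, which is why the performance/watt/$ claim can exceed the raw performance/watt claim.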
Intel, despite a popular tendency to associate a dominant market position with indifference to competitive threats, has not been sitting still waiting for the ARM server phenomenon to engulf them in a wave of ultra-low-power servers. Intel is fiercely competitive, and it would be silly for any new entrants to assume that Intel will ignore a threat to the heart of a high-growth segment.
In 2009, Intel released a microserver specification for compact low-power servers, and along with competitor AMD, it has been aggressive in driving down the power envelope of its mainstream multicore x86 server products. Recent momentum behind ARM-based servers has heated this potential competition up, however, and Intel has taken the fight deeper into the low-power realm with the recent introduction of the N570, an existing embedded low-power processor, as a server CPU aimed squarely at emerging ultra-low-power and dense servers. The N570, a dual-core Atom processor, is currently used by a single server partner, ultra-dense server manufacturer SeaMicro (see Little Servers For Big Applications At Intel Developer Forum), and will allow it to deliver its current 512 Atom cores with half the number of CPU components and some power savings.
Technically, the N570 is a dual-core Atom CPU with 64-bit arithmetic (a differentiator against ARM) but the same 32-bit (4 GB) physical memory limitation as current ARM designs, and it should have a power dissipation of between 8 and 10 watts.
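The component-count savings are simple to work out. The core count and per-chip dissipation range come from the text above; the package count and aggregate CPU power figures are my own back-of-envelope estimates, not SeaMicro's:

```python
total_atom_cores = 512   # SeaMicro's current core count
cores_per_n570 = 2       # the N570 is a dual-core Atom

# Half the CPU components: 256 dual-core packages replace 512 single-core chips.
n570_packages = total_atom_cores // cores_per_n570

# Rough aggregate CPU power envelope, from the quoted 8-10 W per-chip range.
low_watts, high_watts = 8, 10
cpu_power_range = (n570_packages * low_watts, n570_packages * high_watts)

print(n570_packages)    # 256 packages
print(cpu_power_range)  # estimated CPU-only power envelope in watts
```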
For the most part, enterprises understand that virtualization and automation are key components of a private cloud, but at what point does a virtualized environment become a private cloud? What can a private cloud offer that a virtualized environment can’t? How do you sell this idea internally? And how do you deliver a true private cloud in 2011?
In London, this March, I am facilitating a meeting of the Forrester Leadership Board Infrastructure & Operations Council, where we will tackle these very questions. If you are considering building a private cloud, there are changes you will need to make in your organization to get it right and our I&O council meeting will give you the opportunity to discuss this with other I&O leaders facing the same challenge.
Another year, and Citrix’s strategy of acquiring interesting companies continues with the announced purchase of EMS-Cortex. This acquisition caught my eye because EMS-Cortex provides a web-based “cloud control panel” that service providers and end users can use to manage provisioning and delegated administration of hosted business applications in a cloud environment such as XenApp, Microsoft Exchange, BlackBerry Enterprise Server, and a number of other critical business applications. In theory, this means that customers and vendors will be able to “spin up” core business services quickly in a multi-tenant environment.
It is an interesting acquisition, as vendors are starting to address the fact that for “cloudonomics” to be achieved by their customers, it is important that they ease the route to cloud adoption. While this acquisition is potentially a good move for Citrix, I think it will be interesting for I&O professionals to see how Citrix plans to integrate this ease of deployment with existing business service management processes, especially if the EMS-Cortex solution is going to be used in a live production environment.
SAP Has Managed A Turnaround After Léo Apotheker’s Departure
In February 2010, after Léo Apotheker resigned as CEO of SAP, I wrote a blog post with 10 predictions for the company for the remainder of the year. Although the new leadership insisted again and again that this step would not influence the company’s strategy, it was clear that further changes would follow: it makes no sense to simply replace the CEO and leave everything else as is when the company’s problems were obviously growing bigger.
I predicted that the SAP leadership change was just the starting point, the visible tip of an iceberg, with further changes to come. Today, one year later, I want to review these predictions and shed some light on 2010, which has become the “Turnaround Year For SAP.”
The 10 SAP Predictions For 2010 And Their Results (7 proved true / 3 proved wrong)
Only a few weeks to go before Forrester’s US EA Forum 2011 in San Francisco in February! I’ll be presenting a number of sessions, including the opening kickoff, where I’ll paint a picture of where I see EA going in the next decade. As Alex Cullen mentioned, I’ll examine three distinct scenarios where EA rises in importance, EA crashes and burns, or EA becomes marginalized.
But the most fun I’ve had preparing for this year’s event is putting together a new track: “Key Technology Trends That Will Change Your Business.” In the past, we’ve focused this conference on the practice of EA and used our big IT Forum conference in the spring to talk about technology strategies, but this year I’ve had the opportunity to put together five sessions that drill down into the technology trends that we think will have significant impact in your environment, with a particular focus on impacting business outcomes. Herewith is a quick summary of the sessions in this track:
The General Services Administration made a bold decision to move its email and collaboration systems to the cloud. In the RFP issued last June, it was easy to see their goals in the statement of objectives:
This Statement of Objectives (SOO) describes the goals that GSA expects to achieve with regard to the
1. modernization of its e-mail system;
2. provision of an effective collaborative working environment;
3. reduction of the government’s in-house system maintenance burden by providing related business, technical, and management functions; and
4. application of appropriate security and privacy safeguards.
GSA announced yesterday that it has chosen Google Apps for email and collaboration and Unisys as the implementation partner.
So what does this mean?
What it means (WIM) #1: GSA employees will be using a next-generation information workplace. And that means mobile, device-agnostic, and location-agile. Gmail on an iPad? No problem. Email from a home computer? Yep. For GSA and for every other agency and most companies, it's important to give employees the tools to be productive and engage from every location on every device. "Work becomes a thing you do and not a place you go." [Thanks to Earl Newsome of Estee Lauder for that quote.]