Recent outages at Amazon and Google have gotten me thinking about resiliency in the cloud. When you use a cloud service, whether you are consuming an application (backup, CRM, email, etc.) or just raw compute or storage, how is that data being protected? A lot of companies assume that the provider is doing regular backups, storing data in geographically redundant locations, or even maintaining a hot site somewhere with a copy of your data. Here's a hint: ASSUME NOTHING. Your cloud provider isn't in charge of your disaster recovery plan. YOU ARE!
Yes, several cloud providers build a fair amount of resiliency into their services, but not all of them do, so it's important to ask. Policies can even differ between services from a single provider: at Amazon Web Services, for example, EC2 users are responsible for their own failover between availability zones, while S3 automatically replicates data between zones within the same geography (see the sketch after the list below). Here is a short list of questions I would ask your provider about resiliency:
Can I audit your BC/DR plans?
Can I review your BC/DR planning documents?
Geographically, where are your recovery centers located?
In the event of a failure at one site, what happens to my data?
Can you guarantee that my data will not be moved outside of my country/region in the event of a disaster?
What service levels can you guarantee during a disaster?
What are my expected/guaranteed recovery time objective (RTO) and recovery point objective (RPO)?
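To make the EC2 point above concrete: because EC2 leaves failover to you, even getting a copy of your data into a second region is your responsibility, not Amazon's. Here is a minimal sketch of a cross-region EBS snapshot copy using the boto3 SDK (which postdates this post); the volume ID and region names are hypothetical placeholders:

```python
# Minimal sketch: copy an EBS snapshot to a recovery region with boto3.
# The volume ID and both regions are hypothetical placeholders.
import boto3

SOURCE_REGION = "us-east-1"              # assumed primary region
DR_REGION = "eu-west-1"                  # assumed recovery region
VOLUME_ID = "vol-0123456789abcdef0"      # hypothetical volume

source = boto3.client("ec2", region_name=SOURCE_REGION)
dest = boto3.client("ec2", region_name=DR_REGION)

# Snapshot the volume in the primary region and wait for it to finish.
snap = source.create_snapshot(VolumeId=VOLUME_ID, Description="DR snapshot")
source.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# Copy the snapshot into the recovery region -- EC2 will not do this for you.
copy = dest.copy_snapshot(
    SourceRegion=SOURCE_REGION,
    SourceSnapshotId=snap["SnapshotId"],
    Description="DR copy",
)
print(f"DR copy {copy['SnapshotId']} created in {DR_REGION}")
```

Whatever tooling you use, the point stands: if a process like this doesn't exist somewhere in your organization, nobody is running it for you.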
The drum continues to beat for converged infrastructure products, and Dell has given it the latest thump with the introduction of vStart, a pre-integrated environment for VMware. Best thought of as a competitor to VCE, the integrated VMware, Cisco, and EMC virtualization stack, vStart combines Dell compute, storage, and networking hardware with VMware virtualization in a single pre-integrated package.
Cloud infrastructure-as-a-service (IaaS) is a hot market. Amazon Web Services, now five years old, drives a lot of attention and customer volume, but the vendor strategists at enterprise-facing providers such as IBM, HP, AT&T, and Verizon have been building and delivering IaaS offerings as well. As I’ve studied the market, I’ve heard wildly different types of requirements from buyers and quite a range of offerings from service providers. Yet much of the industry dialogue centers on one single idea of what IaaS is, and I think that’s wrongheaded. I found that there are really two buyer types: 1) informal buyers outside of the IT operations/data center manager organization, such as engineers, scientists, marketing executives, and developers; and 2) formal buyers, the IT operations and data center managers responsible for operating applications and maintaining infrastructure.
With this idea in mind, I set out to test the views of IT infrastructure buyers in the Forrsights Hardware Survey, Q3 2010 and learned that:
After 2+ years of cloud hype, only 6% of enterprise IT infrastructure respondents report using IaaS, with another 7% planning to implement it by Q3 2012. After flat adoption from 2008 to 2009, this represents an approximate doubling from 2009, off a very small base.
Almost two-thirds of IT infrastructure buyers don’t believe they themselves are the primary buyers of cloud IaaS! We asked them which groups in their company are using or most interested in cloud IaaS. Only 36% of IT infrastructure buyers listed themselves, while 7% didn’t know. The rest, 58%, said that IT developers, website owners, business unit owners of compute-intensive batch applications, and other business unit developers were more interested in using IaaS than they were.
Calxeda, one of the most visible stealth-mode startups in the industry, has finally given us an initial peek at the first iteration of its server plans, one that both meets our inflated expectations of this ARM server startup and validates some of the initial claims of ARM proponents.
While still holding its actual delivery dates and detailed specifications close to the vest, Calxeda did reveal the following cards from its hand:
The first reference design, which will be provided to OEM partners as well as delivered directly to selected end users and developers, will be based on an ARM Cortex A9 quad-core SOC design.
The SOC, as Calxeda will demonstrate with one of its reference designs, will enable OEMs to design servers as dense as 120 ARM quad-core nodes (480 cores) in a 2U enclosure, with an average consumption of about 5 watts per node (1.25 watts per core), including DRAM (see the quick arithmetic after this list).
While Calxeda was not forthcoming with details about performance, topology, or protocols, the SOC will contain an embedded fabric that allows the individual quad-core SOC servers to communicate with one another.
Most significantly for prospective users, Calxeda claims, and has some convincing models to back this up, that it will deliver 5X to 10X the performance/watt of any products it expects to see in the market at launch, and an even higher advantage when price is factored in (performance/watt/$).
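The density and power figures above are easy to sanity-check with back-of-the-envelope arithmetic, using only the numbers Calxeda disclosed:

```python
# Back-of-the-envelope check on Calxeda's disclosed density and power figures.
nodes_per_2u = 120        # quad-core nodes per 2U enclosure
cores_per_node = 4
watts_per_node = 5.0      # average, including DRAM (per Calxeda)

cores_per_2u = nodes_per_2u * cores_per_node       # 480 cores
watts_per_2u = nodes_per_2u * watts_per_node       # 600 W per enclosure
watts_per_core = watts_per_node / cores_per_node   # 1.25 W per core

print(f"{cores_per_2u} cores drawing ~{watts_per_2u:.0f} W "
      f"({watts_per_core} W/core) in a 2U enclosure")
```

Roughly 600 W for 480 cores in 2U is the envelope that underpins the performance/watt claims, assuming, of course, that the target workloads actually scale across that many small nodes.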
Intel, despite a popular tendency to associate a dominant market position with indifference to competitive threats, has not been sitting still waiting for the ARM server phenomenon to engulf them in a wave of ultra-low-power servers. Intel is fiercely competitive, and it would be silly for any new entrants to assume that Intel will ignore a threat to the heart of a high-growth segment.
In 2009, Intel released a microserver specification for compact low-power servers, and along with competitor AMD, it has been aggressive in driving down the power envelope of its mainstream multicore x86 server products. Recent momentum behind ARM-based servers has heated up this potential competition, however, and Intel has taken the fight deeper into the low-power realm with the recent introduction of the N570, an existing embedded low-power processor, as a server CPU aimed squarely at emerging ultra-low-power and dense servers. The N570, a dual-core Atom processor, is currently being used by a single server partner, ultra-dense server manufacturer SeaMicro (see Little Servers For Big Applications At Intel Developer Forum), and will allow SeaMicro to deliver its current 512 Atom cores with half the number of CPU components and some power savings.
Technically, the N570 is a dual-core Atom CPU with 64-bit arithmetic (a differentiator against ARM) and the same 32-bit (4 GB) physical memory limitation as current ARM designs, and it should have a power dissipation of between 8 and 10 watts.
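Putting the two vendors' own numbers side by side gives a rough sense of the gap. It is an apples-to-oranges comparison, since Calxeda's figure includes DRAM while the N570 range covers only the CPU, but it is still illustrative:

```python
# Rough watts-per-core comparison using only the figures quoted in this post.
# Caveat: Calxeda's number includes DRAM; the N570 range covers the CPU alone.
n570_watts = (8 + 10) / 2        # midpoint of the quoted 8-10 W range
n570_cores = 2                   # dual-core Atom
calxeda_watts_per_core = 1.25    # Calxeda's quad-core SOC figure

n570_watts_per_core = n570_watts / n570_cores     # 4.5 W per core
ratio = n570_watts_per_core / calxeda_watts_per_core

print(f"N570: ~{n570_watts_per_core} W/core, "
      f"about {ratio:.1f}x Calxeda's claimed per-core figure")
```

Even granting the caveats, the arithmetic suggests the N570 narrows the per-core power gap rather than closing it.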
For the most part, enterprises understand that virtualization and automation are key components of a private cloud, but at what point does a virtualized environment become a private cloud? What can a private cloud offer that a virtualized environment can’t? How do you sell this idea internally? And how do you deliver a true private cloud in 2011?
In London this March, I am facilitating a meeting of the Forrester Leadership Board Infrastructure & Operations Council, where we will tackle these very questions. If you are considering building a private cloud, there are changes you will need to make in your organization to get it right, and our I&O Council meeting will give you the opportunity to discuss them with other I&O leaders facing the same challenge.
Forrester’s Forrsights Software Survey, Q4 2010, has quantified for the first time how enterprise demand is shifting from traditional licensing models to subscriptions and other licensing models, such as financing and license leasing. The shift to subscriptions for business applications delivered as a service is the major driver of this change. Traditional enterprise licenses are slowly decreasing, and Forrester predicts that subscriptions for SaaS applications will drive alternative license spending up to 29% as early as 2011. This demand-side change goes beyond front-office applications like CRM. In 2011 and 2012, enterprises will opt for as-a-service subscriptions for more back-office applications, such as ERP, instead of licensed, on-premises installations. Detailed data cuts by company size and region are available to clients through our Forrsights service.
Base: 622 (2007), 1,026 (2008), 537 (2009), and 930 (2010) software decision-makers predicting license spending for the coming year.
Source: Enterprise And SMB Software Survey, North America And Europe (Q3 2007, Q4 2008, and Q4 2009); Forrsights Software Survey, Q4 2010.
What does this mean for existing independent software vendors (ISVs) and infrastructure vendors?
Another year, and Citrix’s strategy of acquiring interesting companies continues with its announced purchase of EMS-Cortex. The acquisition caught my eye because EMS-Cortex provides a web-based “cloud control panel” that service providers and end users can use to manage the provisioning and delegated administration of hosted business applications, such as XenApp, Microsoft Exchange, BlackBerry Enterprise Server, and a number of other critical business applications. In theory, this means that customers and vendors will be able to “spin up” core business services quickly in a multi-tenant environment.
It is an interesting acquisition: vendors are starting to address the fact that for their customers to achieve “cloudonomics,” the route to cloud adoption must be made easier. While the acquisition is potentially a good move for Citrix, I think it will be interesting for I&O professionals to see how Citrix plans to integrate this ease of deployment with existing business service management processes, especially if the EMS-Cortex solution is going to be used in a live production environment.
SAP Has Managed A Turnaround After Léo Apotheker’s Departure
In February 2010, after Léo Apotheker resigned as CEO of SAP, I wrote a blog post with 10 predictions for the company for the remainder of the year. Although the new leadership said again and again that this step would not influence the company’s strategy, it was clear that further changes would follow; it makes no sense to simply replace the CEO and leave everything else as is when the company’s problems were obviously growing.
I predicted that the SAP leadership change was just the starting point, the visible tip of an iceberg, with further changes to come. Today, one year later, I want to review these predictions and shed some light on 2010, which has become the “Turnaround Year For SAP.”
The 10 SAP Predictions For 2010 And Their Results (7 proved true / 3 proved wrong)
Only a few weeks to go before Forrester’s US EA Forum 2011 in San Francisco in February! I’ll be presenting a number of sessions, including the opening kickoff, where I’ll paint a picture of where I see EA going in the next decade. As Alex Cullen mentioned, I’ll examine three distinct scenarios: EA rises in importance, EA crashes and burns, or EA becomes marginalized.
But the most fun I’ve had preparing for this year’s event has been putting together a new track: “Key Technology Trends That Will Change Your Business.” In the past, we’ve focused this conference on the practice of EA and used our big IT Forum conference in the spring to talk about technology strategies. This year, however, I’ve had the opportunity to put together five sessions that drill down into the technology trends we think will have a significant impact on your environment, with a particular focus on business outcomes. Here is a quick summary of the sessions in this track: