Having attended the OpenStack Design Summit this week while fielding calls from Forrester clients affected by the Amazon Web Services (AWS) outage, I saw an interesting contrast in approaches emerge. You could boil it down to closed versus open, but there is more to this contrast that should factor into your thinking when selecting your Infrastructure as a Service (IaaS) providers.
The obvious comparison is that AWS’ architecture and operational procedures are very much their own, and few outside the company know how they work. Not even close partners like RightScale, or those behind the open source derivative Eucalyptus, know them well enough to do more than deduce what happened based on their experience and what they could observe. OpenStack, on the other hand, is fully open source, so if you want to know how it works, you can download the code. At the Design Summit here in Santa Clara, Calif., this week, developers and infrastructure & operations professionals had ample opportunity to dig into the design and to suggest and submit changes on the spot. And there were plenty of conversations this week about how CloudFiles and other storage services worked and how to ensure that an AWS Elastic Block Store (EBS) mirror storm could be avoided.
It seems that during every major shift in the telecommunications, service provider, or hosting market there is a string of moves like these, as players attempt to capitalize on the change to gain greater market position. And there are plenty of investors caught up in the opportunity who are willing to lend a few bucks. In the dot-com era and through the 2000s, we saw major shifts in the service provider landscape as colo/hosting giants such as Cable & Wireless and Equinix were created.
But what does this mean for infrastructure & operations professionals looking to select a hosting or Infrastructure as a Service (IaaS) cloud provider? The key is determining whether 1 + 1 actually equals anything greater than 2.
Since Oracle dropped their bombshell on HP and Itanium, I have fielded multiple emails and about a dozen inquiries from HP and Oracle customers wanting to discuss their options and plans. So far, there has been no general sense of panic, and the scenarios seem to be falling into several buckets:
The majority of Oracle DB/HP customers are not on the latest revision of Oracle, so they have a window within which to make any decisions, bounded on the high end by how long it will take them to upgrade their application-plus-DB stack past 11.2, the current release supported on Itanium. For customers still on Oracle release 9, this can be many years; for those already on 11.2, the next upgrade cycle will cause a dislocation. The application that comes up most often in inquiries is SAP, with Oracle’s own apps second.
Customers with other Oracle software, such as Hyperion, PeopleSoft, and Oracle’s E-Business Suite, as well as other ISV software, often face complicated constraints on their upgrades. In some cases, decisions by the ISVs will drive users toward upgrades they do not want to make. Several clients told me they will defer ISV upgrades to avoid being pushed onto an unsupported version of the DB.
Recent outages at Amazon and Google have gotten me thinking about resiliency in the cloud. When you use a cloud service, whether you are consuming an application (backup, CRM, email, etc.) or just using raw compute or storage, how is that data being protected? A lot of companies assume that the provider is doing regular backups, storing data in geographically redundant locations, or even maintaining a hot site somewhere with a copy of their data. Here's a hint: ASSUME NOTHING. Your cloud provider isn't in charge of your disaster recovery plan, YOU ARE!
Yes, several cloud providers offer a fair amount of built-in resiliency, but not all of them do, so it's important to ask. Even within a single provider, policies differ by service: Amazon Web Services, for example, treats EC2 (users are responsible for their own failover between zones) and S3 (data is automatically replicated between zones in the same geography) differently. Here is a short list of questions I would ask your provider about their resiliency, followed by a quick sketch of what taking ownership can look like in practice:
Can I audit your BC/DR plans?
Can I review your BC/DR planning documents?
Geographically, where are your recovery centers located?
In the event of a failure at one site, what happens to my data?
Can you guarantee that my data will not be moved outside of my country/region in the event of a disaster?
What kinds of service-levels can you guarantee during a disaster?
What are my expected/guaranteed recovery time objective (RTO) and recovery point objective (RPO)?
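To make the last two questions concrete, here is a minimal sketch of what owning part of your own recovery can look like. It assumes Python with the boto3 AWS SDK; the region names and snapshot ID are hypothetical placeholders, not details from any provider discussed above. The point is simply that cross-region copies are something you can drive yourself rather than assume the provider handles for you.

```python
# A minimal sketch, assuming Python 3, boto3 installed, and AWS credentials configured.
# Region names and the snapshot ID below are hypothetical placeholders.
import boto3

SOURCE_REGION = "us-east-1"               # primary region (assumption)
DR_REGION = "us-west-2"                   # recovery region (assumption)
SNAPSHOT_ID = "snap-0123456789abcdef0"    # hypothetical EBS snapshot ID

# copy_snapshot is called against the destination region and pulls from the source,
# giving you a copy of your data outside the affected site, on your schedule.
dr_ec2 = boto3.client("ec2", region_name=DR_REGION)
response = dr_ec2.copy_snapshot(
    SourceRegion=SOURCE_REGION,
    SourceSnapshotId=SNAPSHOT_ID,
    Description="Cross-region DR copy owned by us, not the provider",
)
print("DR copy started:", response["SnapshotId"])
```

Even a simple scheduled job along these lines gives you an RPO you control, independent of whatever the provider's internal replication happens to do.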
Product strategists at Mars, Incorporated are experimenting quite a bit with mass-customized offerings. In addition to the build-to-order customized M&Ms offering, its subsidiary Wrigley has just rolled out MyExtra gum, which prints personalized wrappers on Extra gum packs.
Product strategists at Wrigley declined Forrester’s recent request for a research interview, but judging from the myextragum website and their press release, the offering is a really interesting example of a creatively mass customized product strategy. Why? Product strategists at Wrigley have:
Redefined the product using customization. Myextragum isn’t just gum with a customized wrapper. Instead, it’s a greeting card (Mother’s Day, birthday, other holiday) or a business card (to be given to patrons) plus gum. Wrigley is moving into a non-adjacent, previously orthogonal product market in one fell swoop. That’s aggressive and creative.
Justified the higher price point. At $4.99 (the price drops with bulk orders), the product is pretty expensive for a pack of gum. But, again, it’s not a pack of gum; it’s a greeting card or business card that also has gum inside. This pricing makes sense when you think of the price of Hallmark cards or custom business cards.
Mass customization has been the “next big thing” in product strategy for a very long time. Theorists have been talking about it as the future of products since at least 1970, when Alvin Toffler presaged the concept. Important books from 1992 and 2000 further promoted the idea.
Yet for years, mass customization has disappointed. Some failures were due to execution: Levi Strauss, which sold customized jeans from 1993 to 2003, never offered consumers choice over a key product feature: color. In other cases, changing market conditions undermined the business model: Dell, once the most prominent practitioner of mass customization, saw the model fail spectacularly, reporting that it had become “too complex and costly.”
Overall, the “next big thing” has remained an elusive strategy in the real world, keeping product strategists away in droves.
Egenera, arguably THE pioneer in what the industry is now calling converged infrastructure, has had a hard life. Early to market in 2000 with a solution that was roughly a decade ahead of its time, it offered an elegant abstraction of physical servers into what chief architect Maxim Smith described as “fungible and anonymous” resources connected by software-defined virtual networks. Its interface was easy to use, allowing the definition of virtualized networks, NICs, and servers, with optional failover and pools of spare resources, with a fluidity that has taken the rest of the industry almost 10 years to catch up to. Unfortunately, this elegant presentation was chained to a completely proprietary hardware architecture, which encumbered the economics of x86 servers with an obsolete network fabric, an expensive system controller, and a bespoke physical design (though it was the first vendor to include blue lights on its servers). The power of the PanManager software was enough to keep the company alive, but not enough to overcome the economics of the solution and put it on a fast revenue path, especially as emerging competitors began to offer partial equivalents at lower cost. The company is privately held and does not disclose revenues, but Forrester estimates it is still below $100M in annual revenues.
In approximately 2006, Egenera began converting its product to a pure software offering capable of running on commodity server hardware and standard Ethernet switches. In subsequent years, it has announced distribution arrangements with Fujitsu (an existing partner for its earlier products) and an OEM partnership with Dell, which apparently was not successful, since Dell subsequently purchased Scalent, an emerging software competitor. Despite this, Egenera claims that its software business is growing and has been a factor in the company’s first full year of profitability.
Shortly before the IT Service Management Forum's (itSMF) annual Fusion conference in 2009, Forrester and the US chapter of itSMF put the finishing touches on a partnership agreement between the two entities. There are many aspects of this partnership, including Forrester analysts speaking at numerous itSMF events throughout the year. (I had the pleasure of speaking to and spending the day with the Washington, DC area's National Capital LIG just today!) The truly exciting aspect of the partnership, however, is our intent to perform some joint research on the ITSM movement. By combining Forrester's venerable research and analysis capabilities with the wide and diverse membership of itSMF, our hope is to gain unprecedented insight into ITSM trends and sentiments. The beneficiaries will be everyone in the broad ITSM community! What a concept!
The study is open to all itSMF USA members, so we expect a large sample size for the research. That said, we encourage everyone to participate. The results will be tabulated by Forrester, which will perform the analysis and produce the research report on the findings. This report will be free to all itSMF USA members and Forrester clients. If you are neither, that's no problem: if you participate, you are eligible for a free copy, regardless of your affiliation. This is our way of thanking you for your help! Naturally, you will have to provide some contact information so we can send you your copy when it is ready.
I was recently taken to task in the Twittersphere (I post as @reichmanIT) by @Knieriemen, a VP at virtualization and storage reseller Chi Corporation and the host of the Infosmack podcast. Mr. Knieriemen took exception to statements about storage single-sourcing that I made to the press in connection with a recent document about storage choices for virtual server environments, including the following:
RT @markwojtasiak: Forrester says "single source when possible" <-what an incredibly dumb thing for @reichmanIT to say
He followed up his comments with the following clarifications:
@reichmanIT In context to managing data center costs, single sourcing is probably the worst suggestion I've heard from an analyst
@reichmanIT In a virtual server environment, storage is a commodity and should be purchased as such to control costs
I don’t know Mr. Knieriemen personally, and I must admit that I was taken aback by his blunt approach, but I’m from New York and have a thick skin, so I can look past that. The reason I bring this up on Forrester’s blog is solely to take a look at the actual points of view brought out in the exchange. The crux of the argument is whether or not storage is commoditized, and in my opinion, it’s not (yet anyway). There are three reasons why:
A lot has been written about potential threats to Intel’s low-power server hegemony, including discussions of threats not only from its perennial minority rival AMD but also from emerging non-x86 technologies such as ARM-based servers. While these are real threats, with the potential to disrupt Intel’s position in the low-power and small-form-factor server segment if left unanswered, Intel’s management has not been asleep at the wheel. As part of the rollout of the new Sandy Bridge architecture, Intel recently disclosed its platform strategy for what it is defining as “Micro Servers”: small single-socket servers with shared power and cooling to improve density beyond the generally accepted dividing line of one server per RU that separates “standard density” from “high density.” While I think that Intel’s definition is a bit myopic, mostly serving to attach a label to a well-established category, it is a useful tool for segmenting low-end servers and talking about the relevant workloads.
Intel’s strategy revolves around introducing successive generations of its Sandy Bridge and future architectures, embodied in Low Power (LP) and Ultra Low Power (ULP) products, with promises of up to 2.2X performance per watt and 30% less actual power compared to previous-generation equivalent x86 servers, as outlined in a chart Intel shared as part of the rollout.
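Taken together, those two headline numbers imply a rough absolute performance figure. The quick arithmetic below is my own back-of-the-envelope reading, assuming both figures apply to the same part and workload (Intel's chart may pair them differently):

```python
# Back-of-the-envelope reading of Intel's headline claims (assumed to apply to the same part).
perf_per_watt_gain = 2.2    # "up to 2.2X performance per watt"
power_reduction = 0.30      # "30% less actual power"

# Absolute performance = (performance per watt) x (watts), so the implied gain is:
absolute_perf_gain = perf_per_watt_gain * (1 - power_reduction)
print(f"Implied absolute performance vs. the prior generation: ~{absolute_perf_gain:.2f}x")
# -> ~1.54x, i.e., meaningfully faster while drawing less power, if both claims hold at once
```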
So what does this mean for Infrastructure & Operations professionals interested in serving the target workloads for micro servers, such as: