I'm at IDF, a major geekfest for people interested in the guts of today’s computing infrastructure, and will be immersing myself in the flow for a couple of days. Before going completely off the deep end, I wanted to call out the announcement of the new Xeon E5. While I’ve discussed it in more depth in an accompanying Quick Take just published on our main website, I wanted to offer some additional comments on its implications for data center operations, particularly capacity planning and long-term capital budgeting.
For many years, each successive iteration of Intel’s and its partners’ roadmaps has quietly delivered a major benefit that seldom gets top billing – additional capacity within the same power and physical footprint, and the resulting ability for users, from small enterprises to mega-scale service providers, to defer additional data center capital spending.
If you want to be the best in data center operations, you are right to benchmark yourself against the cloud computing leaders – just don’t delude yourself into thinking you can match them.
In our latest research report, Rich Fichera and I updated a 2007 study that looked at what enterprise infrastructure leaders could learn from the best in the cloud and hosting market. We found that while they may have greater buying power, deeper IT R&D and huge security teams, many of their best practices apply to a standard enterprise data center – or at least part of it.
There are several key differences between you and the cloud leaders, many of which are detailed in the table below. Perhaps the starkest, however, is that for the cloud providers, the infrastructure is the product. That means they get budgetary priority and R&D attention that I&O leaders in the enterprise can only dream about.
Earlier this month The Information Technology & Innovation Foundation (ITIF) published a prediction that the U.S. cloud computing industry stands to lose up to $35 billion by 2016 thanks to the National Security Agency (NSA) PRISM program, leaked to the media in June. We think this estimate is too low: the loss could be as high as $180 billion, a 25% hit to overall IT service provider revenues in that same timeframe. That is, if you accept the assumption that concern over government spying outweighs the business benefits of going cloud.
Having read through the thoughtful analysis by Daniel Castro at ITIF, we commend him and the think tank on their reasoning and cost estimates. However, the analysis limits the impact to the actions of non-US corporations: the high-end figure assumes US-based cloud computing providers would lose 20% of the potential revenues available from the foreign market. We believe there are two additional impacts that would be felt from this revelation:
1. US customers would also bypass US cloud providers for their international and overseas business – costing these cloud providers up to 20% of this business as well.
2. Non-US cloud providers will lose as much as 20% of their available overseas and domestic opportunities due to other governments taking similar actions.
Let's examine these two cases in a bit more detail.
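The stacked-loss arithmetic behind our higher estimate can be sketched in a few lines. The only figures taken from the text are ITIF's $35 billion baseline, the 20% loss-rate assumption, and the $180 billion high end; the revenue pools below are hypothetical placeholders chosen purely to illustrate how the layers add up.

```python
LOSS_RATE = 0.20  # assumed share of at-risk revenue lost in each case


def projected_loss(itif_base, us_overseas_business, non_us_at_risk,
                   loss_rate=LOSS_RATE):
    """Total projected loss through 2016 under the three stacked effects.

    itif_base            -- ITIF's estimate (US providers' foreign losses)
    us_overseas_business -- US customers' international/overseas cloud
                            spend routed away from US providers (case 1)
    non_us_at_risk       -- non-US providers' overseas + domestic
                            opportunity exposed to similar government
                            actions elsewhere (case 2)
    """
    return itif_base + loss_rate * (us_overseas_business + non_us_at_risk)


# Placeholder revenue pools, NOT sourced figures -- chosen so the sum
# lands near the $180B high-end scenario described above.
total = projected_loss(itif_base=35e9,
                       us_overseas_business=300e9,
                       non_us_at_risk=425e9)
print(f"${total / 1e9:.0f}B")  # ≈ $180B under these assumed inputs
```

The point of the sketch is that the high-end number is not a single 20% haircut but three separate 20% exposures layered on top of one another.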
Well, if you're going to make a dramatic about-face from total dismissal of cloud computing, this is a relatively credible way to do it. Following up on its announcement of a serious cloud future at Oracle OpenWorld 2011, the company delivered new cloud services with some credibility at last week's show. It's a strategy laser-focused on selling to Oracle's own installed base, with all guns aimed at Salesforce.com. While the promise from last year was a homegrown cloud strategy, most of this year's execution has been bought. The strategy is essentially to deliver enterprise-class applications and middleware any way you want them – on-premises, hosted and managed, or true cloud. A quick look at where they are and how they got here:
It seems that during every major shift in the telecommunications, service provider, or hosting market there is a string of moves like these, as players attempt to capitalize on the change to gain greater market position. And there are plenty of investors caught up in the opportunity who are willing to lend a few bucks. In the dot-com period through the early 2000s, we saw major shifts in the service provider landscape as colo/hosting giants such as Cable & Wireless and Equinix were created.
But what does this mean for infrastructure & operations professionals looking to select a hosting or Infrastructure as a Service (IaaS) cloud provider? The key is in determining if 1 + 1 actually equals anything greater than 2.
Last week IBM and ARM Holdings Plc quietly announced a continuation of their collaboration on advanced process technology, this time with a stated goal of developing ARM IP optimized for IBM physical processes down to a future 14 nm node. The two companies have been collaborating on semiconductors and SoC design since 2007, and this extension has several important ramifications for both companies and their competitors.
It is a clear indication that IBM retains a major interest in low-power and mobile computing, despite its earlier divestiture of its desktop and laptop business to Lenovo, and that it will be in a position to harvest this technology – particularly ARM's modular approach to composing SoC systems – for future productization.
For ARM, the implications are clear. Its latest announced product, the Cortex A15, which will probably appear in system-level products in approximately 2013, will initially be produced at 32 nm with a roadmap to 20 nm. The existence of a roadmap to a potential 14 nm product serves notice that the new ARM architecture will have a process roadmap that keeps it on Intel’s heels for another decade. ARM has parallel alliances with TSMC and Samsung as well, and there is no reason to think these will not be extended, but the IBM alliance is an additional insurance policy. Beyond process technology, IBM has a deep well of systems and CPU IP that certainly cannot hurt ARM.
From nothing more than outlandish speculation, the prospects for a new entrant into the volume Linux and Windows server space have suddenly become much more concrete, culminating in an immense buzz at CES as numerous players, including NVIDIA and Microsoft, stoked the fires with innuendo, announcements, and demos.
Consumers of x86 servers are always on the lookout for faster, cheaper, and more power-efficient servers. When they can’t get all three, the combination of cheaper and more energy-efficient seems attractive to a large enough chunk of the market to have motivated Intel, AMD, and all their system partners to develop low-power chips and servers designed for high-density compute and web/cloud environments. Until now, the debate was Intel versus AMD, and low power meant a CPU with four cores and a power dissipation of 35 – 65 W.
The Promised Land
The performance trajectory of processors that were formerly purely mobile device processors, notably the ARM Cortex, has suddenly introduced a new potential option into the collective industry mindset. But is this even a reasonable proposition, and if so, what does it take for it to become a reality?
Our first item of business is to figure out whether it even makes sense to think of these CPUs as server processors. My quick take is yes, with some caveats. The latest ARM offering is the Cortex A9, with vendors currently offering dual-core products at up to 1.2 GHz (the architecture claims scalability to four cores and 2 GHz). It draws approximately 2 W, much less than any single-core x86 CPU, and a multi-core version should be able to execute any reasonable web workload. Coupled with the promise of embedded GPUs, the notion of a server that consumes much less power than even the lowest-power x86 begins to look attractive. But…
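A quick density calculation shows why that ~2 W figure matters. Only the CPU power numbers (2 W for the Cortex A9, 35 – 65 W for low-power x86) come from the text; the rack power budget and per-node overhead below are assumptions for illustration, not measured figures.

```python
RACK_BUDGET_W = 10_000   # assumed usable power budget per rack
NODE_OVERHEAD_W = 10     # assumed DRAM/NIC/storage overhead per node


def nodes_per_rack(cpu_watts, overhead_w=NODE_OVERHEAD_W,
                   budget_w=RACK_BUDGET_W):
    """How many single-socket nodes fit within the rack power budget."""
    return budget_w // (cpu_watts + overhead_w)


x86_low_power = nodes_per_rack(35)  # best case of the 35-65 W x86 range
arm_a9 = nodes_per_rack(2)          # ~2 W dual-core Cortex A9

print(x86_low_power, arm_a9)  # 222 833 under these assumed figures
```

Even granting the x86 part its best-case 35 W and charging both designs the same fixed node overhead, the ARM node count per rack comes out several times higher – which is exactly the web/cloud density argument, before any per-core performance comparison.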