Intel, despite a popular tendency to associate a dominant market position with indifference to competitive threats, has not been sitting still waiting for the ARM server phenomenon to engulf it in a wave of ultra-low-power servers. Intel is fiercely competitive, and it would be silly for any new entrants to assume that Intel will ignore a threat to the heart of a high-growth segment.
In 2009, Intel released a microserver specification for compact low-power servers, and along with competitor AMD, it has been aggressive in driving down the power envelope of its mainstream multicore x86 server products. Recent momentum behind ARM-based servers has heated up this potential competition, however, and Intel has taken the fight deeper into the low-power realm with the recent introduction of the N570, an existing embedded low-power processor, as a server CPU aimed squarely at emerging ultra-low-power and dense servers. The N570, a dual-core Atom processor, is currently being used by a single server partner, ultra-dense server manufacturer SeaMicro (see Little Servers For Big Applications At Intel Developer Forum), and will allow it to deliver its current 512 Atom cores with half the number of CPU components and some power savings.
Technically, the N570 is a dual-core Atom CPU with 64-bit arithmetic, a differentiator against ARM, but the same 32-bit (4 GB) physical memory limitation as current ARM designs, and it should have a power dissipation of between 8 and 10 watts.
In October, with great fanfare, the Open Data Center Alliance unfurled its banners. The ODCA is a consortium of approximately 50 large IT consumers, including large manufacturing, hosting and telecom providers, with the avowed intent of developing standards for interoperable cloud computing. In addition to the roster of users, the announcement highlighted Intel in an ambiguous role as a technology advisor to the group. The ODCA believes that it will achieve some weight in the industry due to its estimated $50 billion per year of cumulative IT purchasing power, and the trade press was full of praise for influential users driving technology as opposed to allowing rapacious vendors such as HP and IBM to drive users down proprietary paths that lead to vendor lock-in.
Now that we’ve had a month or more to allow the purple prose to settle a bit, let’s look at the underlying claims, potential impact of the ODCA and the shifting roles of vendors and consumers of technology. And let’s not forget about the role of Intel.
First, let me state unambiguously that one of the core intentions of the ODCA is a good idea: developing common use case models that, backed by the economic clout of ODCA members, will in turn drive vendors to develop products that comply with those models (and hopefully there will be a correlation between ODCA member requirements and those of a wider set of consumers). Vendors spend a lot of time talking to users and trying to understand their requirements, and having the ODCA as a proxy for the requirements of a lot of very influential customers will be a benefit to all concerned.
Despite its networking roots, today’s Interop events have evolved to address an expansive range of IT roles, responsibilities and topics. While networking managers will still feel at home in the networking track, Interop addresses a variety of themes very relevant to the broader interests of IT Infrastructure & Operations (I&O) professionals, like cloud computing, virtualization, storage, wireless and mobility, and IT management.
IT professionals responsible for the “I” (or Infrastructure) in I&O will find the event particularly relevant. So much so that Forrester has partnered with Interop to develop track agendas, identify speakers, moderate panels, and even present. For the last two years, I have chaired the Data Center and Green IT tracks at Interop’s Las Vegas and New York events. And I am doing the same this year at Interop New York 2010 from October 18th to 22nd.
In a recent discussion with a group of infrastructure architects, power architecture, especially UPS engineering, was on the table as a topic. There was general agreement that UPS systems are a necessary evil, cumbersome and expensive beasts to put into a DC, and a lot of speculation on alternatives. There was general consensus that the goal was to develop a solution that would be more granular to install and deploy, and thus allow easier and ad-hoc decisions about which resources to protect, and agreement that battery technologies and current UPS architectures were not optimal for this kind of solution.
So what if someone were to suddenly expand battery technology R&D investment by a factor of maybe 100x, expand high-capacity battery production by a giant factor, and drive prices down precipitously? That’s a tall order for today’s UPS industry, but it’s happening now courtesy of the auto industry and the anticipated wave of plug-in hybrid cars. While batteries for cars and batteries for computers certainly have their differences in terms of depth and frequency of charge/discharge cycles, packaging, lifespan, etc., there is little doubt that investments in dense and powerful automotive batteries and power management technology will bleed through into the data center. Throw in recent developments in high-charge capacitors (referred to in the media as “super capacitors”), which address the impedance mismatch between the requirements of spike demands and a chemical battery’s dislike of sudden state changes, and you have all the foundational ingredients for a major transformation in the way we think about supplying backup power to our data center components.