Last week I had the pleasure of visiting the remote and beautiful country of Iceland. After a 5-hour flight and a brief history lesson, I was amazed to learn that in addition to its unique local attractions — geothermal springs, volcanoes, aurora borealis — Iceland possesses a wealth of natural resources.
View of the runoff from Ljósafoss Hydro-Power Station, located on the River Sog by Lake Úlfljótsvatn’s outflow
Straddling the North American and Eurasian tectonic plates, Iceland enjoys geological conditions that supply its inhabitants with an abundance of natural resources ideal for renewable energy generation. Over the last century, locals have learned how to harvest these resources, constructing geothermal and hydroelectric power generation facilities and providing the country with 100% renewable, carbon-free electricity. Because current methods of electrical interconnection remain cost-prohibitive and technologically limited, Iceland’s public utilities have been investigating alternative ways to export their energy surplus in the form of finished products.
I was part of a Forrester team that recently completed a multi-country rollout tour with Emerson Network Power as they formally released their Trellis DCIM product, a comprehensive DCIM environment many years in the making. One of the key takeaways was both an affirmation of our fundamental assertions about DCIM and a set of hints about its popularity and attraction for potential customers that in some ways expand on the original value proposition we envisioned. Our audiences totaled approximately 500 selected data center users, most of them current Emerson customers of some sort, plus various partners.
The audiences uniformly supported the fundamental thesis around DCIM – there exists a strong underlying demand for integrated DCIM products, with a strong proximal emphasis on optimizing power and cooling to save opex and avoid the major disruption and capex of new data center capacity. Additionally, the composition of the audiences supported our contention that these tools would have multiple stakeholders in the enterprise. As expected, the groups were heavy with core Infrastructure & Operations types – the people who have to plan, provision and operate the data center infrastructure to deliver the services needed for their company’s operations. What was heartening was the strong minority presence of facilities people, ranging from 10% to 30% of the attendees, along with a sprinkling of corporate finance and real-estate executives. Informal conversations with a number of these people gave us consistent input that they understood the need, and in some cases had been formally tasked by their executives, to work more closely with the I&O group. All expressed the desire for an integrated tool to help with this.
Data centers, like any other aspect of real estate, follow the age-old adage of “location, location, location.” If you want to build one that is really efficient in terms of energy consumption, as well as possessing all the basics of reliability, you have to be really picky about ambient temperatures, power availability and, if your business is hosting for others rather than just needing one for yourself, potential for expansion. If you want to achieve a seeming impossibility – a zero carbon footprint to satisfy increasingly draconian regulatory pressures – you need to be even pickier. In the end, what you need is:
Low ambient temperature to reduce your power requirements for cooling.
Someplace where you can get cheap “green” energy, and lots of it.
A location with adequate network connectivity, both in terms of latency as well as bandwidth, for global business.
A cooperative regulatory environment in a politically stable venue.
In late 2010 I noted that startup SeaMicro had introduced an ultra-dense server using Intel Atom chips in an innovative fabric-based architecture that allowed them to factor out much of the power overhead from a large multi-CPU server ( http://blogs.forrester.com/richard_fichera/10-09-21-little_servers_big_applications_intel_developer_forum). Along with many observers, I noted that the original SeaMicro server was well-suited to many lightweight edge processing tasks, but that the system would not support more traditional compute-intensive tasks due to the performance of the Atom core. I was, however, quite taken with the basic architecture, which uses a proprietary high-speed (1.28 Tb/s) 3D mesh interconnect to allow the CPU cores to share network, BIOS and disk resources that are normally replicated on a per-server basis in conventional designs, with commensurate reductions in power and an increase in density.
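The power argument for this kind of resource sharing is simple arithmetic. The sketch below illustrates it; every wattage figure here is an invented placeholder chosen for illustration, not a SeaMicro specification.

```python
# Illustrative arithmetic only: estimates the power saved by sharing
# NIC/BIOS/disk resources across nodes behind a fabric, rather than
# replicating them on every server. All wattage figures below are
# hypothetical assumptions, not vendor specifications.

def shared_resource_savings(nodes, per_node_overhead_w, shared_pool_w):
    """Power saved when per-server peripherals become one shared pool."""
    replicated = nodes * per_node_overhead_w  # conventional design
    return replicated - shared_pool_w         # fabric-based design

# Assume 64 nodes with ~10 W of NIC/disk/support logic each in a
# conventional design, versus a single ~80 W shared I/O pool.
saved = shared_resource_savings(64, 10.0, 80.0)
print(f"Estimated savings: {saved:.0f} W")  # prints "Estimated savings: 560 W"
```

Under these assumed numbers, the shared pool eliminates roughly seven-eighths of the peripheral overhead, which is the general shape of the advantage the fabric architecture claims.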
Most of the suppliers of IT-for-sustainability (ITfS) solutions that we work with have one path to finding a buyer in their customer organizations: through the IT organization. Whether giants, such as SAP and HP, or newcomers, such as Hara and ENXSuite, vendors of energy management, carbon reporting and other ITfS products are typically starting their sales motion with customers' traditional buyers of software systems: IT.
Not that there's anything wrong with that. We have long maintained that IT organizations and the CIOs that lead them will increasingly be the owners and operators of environmental systems of record, just as they are for financial, HR, and customer data systems, among others. But ITfS suppliers will want to develop multiple pathways into customer organizations. For most customers, decision-making around sustainability processes and technologies is diffuse, spread across IT, facilities, operations and CSR. Finding the buyer for sustainability is oft-times the proverbial needle in the haystack.
Over the past few weeks, computing giants HP and IBM have made significant new thrusts into the market for sustainability software and services. At first look, both companies are strengthening their commitment to "IT for sustainability (ITfS)" -- the use of information technology to help their customers meet their sustainability goals.
Both are prominently featuring "energy" in their messaging in keeping with the current customer focus on that side of the consumption/emissions coupling. And both are emphasizing a combination of software products and consulting services, the two segments of the market that we at Forrester have been tracking for some time now, as regular readers of this blog know by now.
But under the surface there are more differences than similarities in the approach that these two suppliers are taking to ITfS; differences that illuminate divergent strategies, philosophies, and experiences between them. Let's take a closer look.
HP is going broad; IBM is narrowing its focus. With its initial "Energy and Sustainability Management Services" entry, HP is leveraging its data center design and implementation expertise into buildings and other assets across the enterprise. It is stressing a holistic, top-down approach, starting with assessment workshops and other methods to help customers get their arms around the size and shape of the energy/carbon/resource issues.
The world of hyperscale web properties has been shrouded in secrecy, with major players like Google and Amazon releasing only tantalizing dribbles of information about their infrastructure architecture and facilities, on the presumption that this information represented critical competitive IP. In one bold gesture, Facebook, which has certainly catapulted itself into the ranks of top-tier sites, has reversed that trend. It has simultaneously disclosed a wealth of information about the design of its new data center in rural Oregon and contributed much of the IP involving racks, servers, and power architecture to an open forum, in the hope of generating an ecosystem of suppliers to provide future equipment to Facebook and other growing web companies.
The Data Center
By approaching the design of the data center as an integrated combination of servers for known workloads and the facilities themselves, Facebook has broken some new ground in data center architecture with its facility.
At a high level, a traditional enterprise DC has a utility transformer that feeds power to a centralized UPS, and then power is subsequently distributed through multiple levels of PDUs to the equipment racks. This is a reliable and flexible architecture, and one that has proven its worth in generations of commercial data centers. Unfortunately, in exchange for this flexibility and protection, it exacts a penalty of 6% to 7% of the power even before it reaches the IT equipment.
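Where that 6% to 7% comes from is just cumulative stage losses. A back-of-envelope sketch, using illustrative per-stage efficiencies (assumed values, not measurements from any particular facility) chosen so the total lands in the range the text cites:

```python
# Back-of-envelope sketch of the distribution penalty: multiply the
# efficiency of each stage in the transformer -> centralized UPS ->
# PDU chain. The per-stage efficiencies are illustrative assumptions.

def delivered_fraction(stage_efficiencies):
    """Fraction of utility power that actually reaches the IT gear."""
    fraction = 1.0
    for eff in stage_efficiencies:
        fraction *= eff
    return fraction

chain = [0.99, 0.95, 0.99]  # transformer, double-conversion UPS, PDUs (assumed)
frac = delivered_fraction(chain)
print(f"Delivered: {frac:.1%}, lost before the IT equipment: {1 - frac:.1%}")
```

With these assumed figures, roughly 93% of the power arrives at the racks and about 7% is lost in the chain, which is why designs that collapse or bypass stages of this chain, like Facebook's, can claw back meaningful efficiency.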
Calxeda, one of the most visible stealth-mode startups in the industry, has finally given us an initial peek at the first iteration of its server plans, which both meet our inflated expectations for this ARM server startup and validate some of the initial claims of ARM proponents.
While still holding actual delivery dates and detailed specifications close to its vest, Calxeda did reveal the following cards from its hand:
The first reference design, which will be provided to OEM partners as well as delivered directly to selected end users and developers, will be based on an ARM Cortex A9 quad-core SOC design.
The SOC, as Calxeda will demonstrate with one of its reference designs, will enable OEMs to design servers as dense as 120 ARM quad-core nodes (480 cores) in a 2U enclosure, with an average consumption of about 5 watts per node (1.25 watts per core) including DRAM.
While Calxeda was not forthcoming with details about its performance, topology or protocols, the SOC will contain an embedded fabric that allows the individual quad-core SOC servers to communicate with each other.
Most significantly for prospective users, Calxeda claims, and has some convincing models to back up these claims, that it will provide a 5X to 10X advantage in performance/watt (and an even higher advantage when price is factored in, for a metric of performance/watt/$) over any products it expects to see when it brings the product to market.
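The density figures quoted above are internally consistent, and the arithmetic is worth making explicit. The rack-level extrapolation at the end is my own (assuming a hypothetical 42U rack filled with these 2U enclosures), not a Calxeda claim:

```python
# Sanity check of the quoted Calxeda density figures:
# 120 quad-core nodes per 2U enclosure at ~5 W per node including DRAM.

nodes_per_2u = 120
cores_per_node = 4
watts_per_node = 5.0

cores_per_2u = nodes_per_2u * cores_per_node       # 480 cores per 2U
enclosure_watts = nodes_per_2u * watts_per_node    # ~600 W per 2U
watts_per_core = watts_per_node / cores_per_node   # 1.25 W per core

# Hypothetical extrapolation (my assumption, not a vendor claim):
# a 42U rack holding 21 such enclosures.
rack_cores = 21 * cores_per_2u
rack_kw = 21 * enclosure_watts / 1000.0

print(f"{cores_per_2u} cores/2U at {enclosure_watts:.0f} W "
      f"({watts_per_core} W/core); ~{rack_cores} cores/rack at ~{rack_kw:.1f} kW")
```

The 480 cores and 1.25 W/core fall straight out of the quoted node count and per-node wattage, which is why the claim hangs together arithmetically even before any performance data is available.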
My travels last month took me back to the Bay Area for client meetings and a chance to spend some time at the Autodesk Gallery, a very cool space near the ferry building in San Francisco. Autodesk uses it to show off its customers' design innovations, not coincidentally created using the company's design software. The event in January showcased how customers are using Autodesk visualization software to improve the sustainability of their product designs and implementations. This is tackling sustainability right at its core: making products that are more energy- and resource-efficient, easier to manufacture, easier to reuse and recycle, right from the start. The products we saw at the event included:
A new research facility at NASA Ames down the peninsula. This super-green building is aimed at "beyond" LEED Platinum standards, incorporating a variety of innovative design and engineering elements all captured in building information modeling (BIM) software. The Feds will use it as a laboratory for energy-efficient buildings, spreading its best practices and learnings across the broad portfolio of US government buildings and research facilities. NASA is also working to make the design blueprint a working model for efficient ongoing operation of the building.
From nothing more than an outlandish speculation, the prospects for a new entrant into the volume Linux and Windows server space have suddenly become much more concrete, culminating in an immense buzz at CES as numerous players, including NVIDIA and Microsoft, stoked the fires with innuendo, announcements, and demos.
Consumers of x86 servers are always on the lookout for faster, cheaper, and more power-efficient servers. When they can’t get all three, the combination of cheaper and more energy-efficient seems to be attractive to a large enough chunk of the market to have motivated Intel, AMD, and all their system partners to develop low-power chips and servers designed for high-density compute and web/cloud environments. Up until now the debate was Intel versus AMD, and low power meant a CPU with four cores and a power dissipation of 35 to 65 watts.
The Promised Land
The performance trajectory of processors that were formerly purely mobile device processors, notably the ARM Cortex, has suddenly introduced a new potential option into the collective industry mindset. But is this even a reasonable proposition, and if so, what does it take for it to become a reality?
Our first item of business is to figure out whether or not it even makes sense to think about these CPUs as server processors. My quick take is yes, with some caveats. The latest ARM offering is the Cortex A9, with vendors offering dual core products at up to 1.2 GHz currently (the architecture claims scalability to four cores and 2 GHz). It draws approximately 2W, much less than any single core x86 CPU, and a multi-core version should be able to execute any reasonable web workload. Coupled with the promise of embedded GPUs, the notion of a server that consumes much less power than even the lowest power x86 begins to look attractive. But…