What began as nothing more than outlandish speculation has suddenly become much more concrete: the prospects for a new entrant into the volume Linux and Windows server space culminated in an immense buzz at CES, as numerous players, including NVIDIA and Microsoft, stoked the fires with innuendo, announcements, and demos.

Consumers of x86 servers are always on the lookout for faster, cheaper, and more power-efficient servers. When they can’t get all three, the combination of cheaper and more energy-efficient appears attractive to a large enough chunk of the market to have motivated Intel, AMD, and all their system partners to develop low-power chips and servers designed for high-density compute and web/cloud environments. Until now the debate was Intel versus AMD, and “low power” meant a CPU with four cores and a power dissipation of 35 to 65W.

The Promised Land

The performance trajectory of processors formerly confined to mobile devices, notably the ARM Cortex line, has suddenly introduced a new potential option into the collective industry mindset. But is this even a reasonable proposition, and if so, what does it take for it to become a reality?

Our first item of business is to figure out whether it even makes sense to think about these CPUs as server processors. My quick take is yes, with some caveats. The latest ARM offering is the Cortex-A9, with vendors currently offering dual-core products at up to 1.2 GHz (the architecture claims scalability to four cores and 2 GHz). It draws approximately 2W, much less than any single-core x86 CPU, and a multi-core version should be able to execute any reasonable web workload. Coupled with the promise of embedded GPUs, the notion of a server that consumes much less power than even the lowest-power x86 begins to look attractive. But…

… is a low-power CPU enough? In a conventional x86 server today, the CPU is one-third to one-half of the power budget, with the rest going to memory, fans, disk, and power supply loss. A quick back-of-the-envelope (that’s a thing that used to carry snail mail, now used for calculations) estimate suggests that a low-power server that used to draw 100W with a 35W CPU could easily be reduced to an under-40W server with a conventional design implemented around an ARM CPU. So, 100W down to under 40W? Good but not great. The server will also be smaller and lighter, since it will have fewer fans and much smaller heat sinks, and with the very hot CPU (which dominates all thermal planning in a server) gone, there will be fewer limitations on component placement. Is there an option to squeeze out more?
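To make the envelope arithmetic explicit, here is a minimal sketch in Python. Only the 100W total, the 35W x86 CPU, and the roughly 2W ARM figure come from the discussion above; the rest of the component breakdown is my own illustrative assumption:

```python
# Rough server power budgets in watts. Only the 100W total, the 35W
# x86 CPU, and the ~2W ARM figure come from the text above; the rest
# of the breakdown is an illustrative guess.
x86_server = {"cpu": 35, "memory": 20, "disk": 10, "fans": 20, "psu_loss": 15}

arm_server = dict(x86_server)
arm_server["cpu"] = 2        # dual-core Cortex-A9-class part (assumption)
arm_server["fans"] = 4       # far less heat to move (assumption)
arm_server["psu_loss"] = 3   # conversion loss tracks total draw (assumption)

print(f"x86 server: {sum(x86_server.values())}W")  # 100W
print(f"ARM server: {sum(arm_server.values())}W")  # 39W, i.e. under 40W
```

Change any of the assumed figures and the bottom line moves, but the shape of the argument holds: once the CPU stops dominating the budget, memory and disk set the floor.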

The answer looks like yes, primarily due to ARM Holdings’ business model. ARM does not sell product; rather, it sells licenses to its architecture and collects a royalty on the units that its licensees ship. The licensees, manufacturers like Apple and NVIDIA, are then free to add their own value, such as complete SoC implementations including GPUs, different memory architectures, embedded peripherals, and specialized accelerators. With the freedom to design a custom SoC comes the freedom to repartition the traditional server design, as SeaMicro has done with its Intel Atom-based dense servers (see Little Servers for Big Applications at Intel Developer Forum). Again, this is a very fuzzy calculation with lots of places where I could have gone wrong, but it looks like that 100W server might begin to look like 2 to 5W per core, based on what SeaMicro has done with a nominal 4W Atom core: they manage to get 512 single-core servers at about 5W per server with their optimized design. And a dual-core ARM CPU at 1.2 GHz should outperform a single-core Atom (the few benchmarks that exist comparing low-end x86 CPUs with ARM CPUs seem to indicate higher performance per clock cycle for the ARM processor). Alternatively, rather than an ultimate low-power server, we might see an ARM server with integrated GPUs at the same power as a standard x86 server with no GPUs, a huge advantage for selected applications.
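To show where a figure in that ballpark can come from, here is a sketch in the same hedged spirit. The 512-server and roughly 5W-per-server figures come from the SeaMicro description above; the overhead split and the ~2W ARM SoC draw are my assumptions:

```python
# Figures from the post: SeaMicro packs 512 single-core Atom servers
# at about 5W per server, built around a nominal 4W Atom core.
servers = 512
atom_server_w = 5
atom_core_w = 4
shared_overhead_w = atom_server_w - atom_core_w  # ~1W per server (assumption)

# Hypothetical swap (my assumption): a ~2W dual-core ARM SoC in the
# same repartitioned design.
arm_soc_w = 2
arm_server_w = shared_overhead_w + arm_soc_w     # ~3W per server

print(f"Per server: {atom_server_w}W (Atom) -> ~{arm_server_w}W (ARM)")
print(f"Per chassis: {servers * atom_server_w}W -> ~{servers * arm_server_w}W")
```

The exact figure matters less than the order of magnitude: low single-digit watts per server, before even counting the dual-core A9’s per-clock performance edge over the single-core Atom.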

The Elephant in the Room – Microsoft

Initially, all attention was focused on Linux. OK, Linux. A nice OS, but a fringe market here: the TAM for an ARM-based server, while interesting, looked like a small part of the maybe 20% of servers that run Linux. But at CES, NVIDIA and Microsoft demonstrated an NVIDIA Tegra (dual-core ARM @ 1.3 GHz) SoC product running a prototype of Windows 7, and Microsoft spoke vaguely of a server OS for ARM-based servers.

Microsoft will be the difference between a healthy ripple and a tsunami. If Microsoft delivers a Windows 7 (or whatever they choose to call it) OS for an ARM desktop, that will drive immense volumes of new tablets and other client devices (I would buy a Windows system with the same form factor as my iPad in a heartbeat). If they do a server OS, it will totally change the dynamics of the server CPU market. Note that I said the CPU market. The major system vendors will do just fine; they will simply design and ship these new hardware widgets. Microsoft will also prosper, taking a piece of the action that would otherwise go to Apple and Android-based device vendors. The existence of an OS does not solve all problems, since ISVs must still recompile their software for the new architecture, but that is lighter lifting than rewriting it for a new OS.

Intel and AMD? Well, yes, both might get bruised, but they will react vigorously, with continued evolution of their legacy products and possibly even as licensees of ARM. You’ve got a lot of very smart people, a lot of money, and immense incentives to keep making more, so almost any alignment is possible.

The one thing that is certain is that the pace of innovation and competition in the server world is not about to slow down.

Would you buy an ARM-based server if it ran your software stack?