Today, after two of its largest partners had already announced the system portfolios that will use it, Intel finally announced one of the worst-kept secrets in the industry: the Xeon E5-2600 family of processors.

OK, now that I’ve gotten in my jab at the absurdity of the announcement scheduling, let’s look at the thing itself. In a nutshell, these new processors, built on the same 32 nm production process as the previous-generation Xeon 5600 series but incorporating the new “Sandy Bridge” architecture, are, in fact, a big deal. They introduce several architectural innovations and will bring major improvements in power efficiency and performance to servers. Highlights include:

  • Performance improvements on selected benchmarks of up to 80% over the previous Xeon 5600 CPUs, apparently due to both the improved CPU architecture and the larger memory capacity (up to 24 DIMMs at 32 GB per DIMM, a whopping 768 GB for a two-socket, eight-core/socket server; see the quick arithmetic sketch after this list).
  • Improved I/O architecture, including an on-chip PCIe 3 controller and a special mode that allows I/O controllers to write directly to the CPU cache without a round trip to memory — a feature that only a handful of I/O device developers will use, but one that contributes to improved I/O performance and lowers CPU overhead during PCIe I/O.
  • Significantly improved energy efficiency, with the SPECpower_ssj2008 benchmark showing a 50% improvement in performance per watt over previous models.
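
For the capacity point in the first bullet, the arithmetic is worth spelling out. The quick Python sketch below uses the DIMM count and DIMM size from the announcement; the 42U rack, 2U server form factor, and top-of-rack allowance are my illustrative assumptions, not Intel figures:

    # Quick arithmetic behind the 768 GB memory figure in the list above.
    # DIMM count and DIMM size come from the announcement; the rack and
    # server form factors are illustrative assumptions, not Intel figures.
    dimm_slots_per_server = 24   # two-socket E5-2600 server, 12 DIMM slots per socket
    gb_per_dimm = 32             # largest DIMM size cited

    gb_per_server = dimm_slots_per_server * gb_per_dimm
    print(f"Memory per two-socket server: {gb_per_server} GB")   # 768 GB

    # Illustrative rack-level view: 2U servers in a 42U rack, with 2U
    # reserved for top-of-rack switching (my assumption).
    servers_per_rack = (42 - 2) // 2
    print(f"Memory per rack: {servers_per_rack * gb_per_server / 1024:.1f} TB")   # 15.0 TB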

The list of interesting technical features is long and can be reviewed in detail on Intel’s website, but the real question is whether this upgrade will have a significant impact on users’ data centers and whether it is worth the cost and disruption of a system refresh. Based on the system announcements to date from HP and Dell and my review of the details of the Intel specifications, my opinion is that this combined server and CPU upgrade cycle is one of the most powerful we have seen in several years and that it offers significant advantages for users, particularly in relieving data center capacity constraints:

  • If you are short on data center space, the systems using these new CPUs will give you substantially more throughput per rack unit than previous generations. Actual gains are unlikely to match the Intel claims of 80% (unless your workload happens to be the benchmarks they ran) — but, depending on your workloads, they could easily be in the range of 50%; the rough consolidation math after this list shows what a gain of that size buys you.
  • You will get this improvement in throughput with essentially no additional power bill — a really big benefit if your facility is pushing the limits of its power and cooling capacity. All of the system designs we have seen to date from HP and Dell are clearly oriented toward maximizing energy efficiency, and I expect that the anticipated offerings from Cisco and IBM will have similar characteristics.
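
As a minimal sketch of what those two points mean in practice, the Python below applies the roughly 50% throughput gain and flat per-server power discussed above to a hypothetical 300-server fleet; the fleet size, wattage, and 2U form factor are placeholder assumptions rather than vendor data:

    import math

    # Back-of-the-envelope consolidation math for the two points above. The 1.5x
    # factor reflects the ~50% throughput gain discussed in the text; the fleet
    # size, per-server power, and 2U form factor are hypothetical placeholders.
    old_servers = 300            # existing Xeon 5600-class fleet (assumed)
    throughput_gain = 1.5        # ~50% more throughput per server
    watts_per_server = 350       # assume roughly flat power draw per box
    rack_units_per_server = 2    # assumed 2U form factor

    new_servers = math.ceil(old_servers / throughput_gain)
    freed_rack_units = (old_servers - new_servers) * rack_units_per_server
    freed_power_kw = (old_servers - new_servers) * watts_per_server / 1000

    print(f"Servers needed after refresh: {new_servers}")                     # 200
    print(f"Rack units freed: {freed_rack_units}U (roughly five 42U racks)")  # 200U
    print(f"Power and cooling headroom recovered: {freed_power_kw:.0f} kW")   # 35 kW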

The bottom line: this is a major upgrade cycle, and one that should be in the plans for every I&O group. I recommend that you get out your spreadsheets and look at the ROI for a server refresh, possibly even ahead of your normal depreciation/refresh cycle, because the numbers will be surprisingly good.
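
If you want a starting point for that spreadsheet exercise, a simple payback sketch might look like the following; every input (server price, utility rate, support costs, the value of deferred facility expansion) is a placeholder assumption to be replaced with your own quotes, not a figure from Intel or the system vendors:

    # Skeleton for the refresh-ROI spreadsheet suggested above. Every input is a
    # placeholder assumption (hardware quotes, utility rate, support contracts,
    # facility costs), not a figure from Intel or the system vendors.
    new_servers_purchased = 200          # 1.5:1 consolidation of a 300-server fleet
    price_per_new_server = 7_000         # USD, assumed street price
    capex = new_servers_purchased * price_per_new_server

    # Annual savings from running 100 fewer boxes at roughly flat per-server power.
    watts_per_server = 350               # assumed
    pue, usd_per_kwh = 1.8, 0.10         # assumed facility overhead and utility rate
    power_savings = 100 * watts_per_server / 1000 * 8_760 * pue * usd_per_kwh
    maintenance_savings = 300 * 400      # assumed annual support cost per legacy box

    # If the facility is capacity-constrained, the freed space and power also defer
    # expansion; that avoided cost is often the dominant term (assumed figure).
    deferred_expansion_value = 250_000   # per year of deferred build-out or colo spend

    annual_benefit = power_savings + maintenance_savings + deferred_expansion_value
    print(f"Capex: ${capex:,.0f}")                                   # $1,400,000
    print(f"Annual benefit: ${annual_benefit:,.0f}")                 # $425,188
    print(f"Simple payback: {capex / annual_benefit:.1f} years")     # 3.3 years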

And next year we get to see the 22 nm process shrink of this new architecture …