Intel Bumps up High-End Servers with New Xeon E7 V2 - A Long-Awaited and Timely Leap

Richard Fichera

The long drought at the high end

It’s been a long wait, about four years if memory serves me well, since Intel introduced the Xeon E7, a high-end server CPU targeted at the highest per-socket x86 performance, spanning high-end two-socket servers up to 8-socket systems with tons of memory and lots of I/O. In the ensuing four years (an eternity in a world where annual product cycles are considered the norm), subsequent generations of lesser Xeons, most recently the latest-generation 22 nm Xeon E5 V2 Ivy Bridge server CPUs, have somewhat diluted the value proposition of the original E7.

So what is the poor high-end server user with really demanding single-image workloads to do? The answer was to wait for the Xeon E7 V2, and at first glance, it appears that the wait was worth it. High-end CPUs take longer to develop than lower-end products, and in my opinion Intel made the right decision to skip the previous-generation 32 nm Sandy Bridge architecture and go straight to the 22 nm Ivy Bridge, its successor in Intel’s “Tick-Tock” cadence of alternating new process and new architecture.

What was announced?

The announcement was the formal unveiling of the Xeon E7 V2 CPU, available in multiple performance bins with anywhere from 8 to 15 cores per socket. Critical specifications include:

  • Up to 15 cores per socket
  • 24 DIMM slots, allowing up to 1.5 TB of memory with 64 GB DIMMs (see the capacity arithmetic sketched after this list)
  • Approximately 4X I/O bandwidth improvement
  • New RAS features, including low-level memory controller modes optimized for either high availability or performance (selectable in the BIOS), enhanced error recovery, and soft-error reporting
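For those who like to check the math, a minimal sketch of the capacity arithmetic follows. Treating the 24 slots and 64 GB DIMMs as per-socket figures, and assuming capacity scales linearly with socket count, are my working assumptions rather than announced specifications.

```python
# Back-of-the-envelope check of the memory numbers above.
# Assumptions: 24 DIMM slots and 64 GB DIMMs apply per socket, and
# total capacity scales linearly with socket count.

dimm_slots = 24
dimm_size_gb = 64

per_socket_tb = dimm_slots * dimm_size_gb / 1024   # 1536 GB = 1.5 TB
for sockets in (2, 4, 8):
    print(f"{sockets}-socket system: {sockets * per_socket_tb:.1f} TB maximum memory")
```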
Read more

2014 Server and Data Center Predictions

Richard Fichera

As the new year looms, thoughts turn once again to our annual reading of the tea leaves, in this case focused on what I see coming in server land. We’ve just published the full report, Predictions for 2014: Servers & Data Centers, but as a teaser, here are a few of the major highlights from the report:

1.      Increasing choices in form factor and packaging – I&O pros will have to cope with a proliferation of new form factors, some optimized for dense low-power cloud workloads, some for general purpose legacy IT, and some for horizontal VM clusters (or internal cloud if you prefer). These will continue to appear in an increasing number of variants.

2.      ARM – Make or break time is coming, depending on the success of coming 64-bit ARM CPU/SOC designs with full server feature sets including VM support.

3.      The beat goes on – Major turn of the great wheel coming for server CPUs in early 2014.

4.      Huge potential disruption in flash architecture – Putting flash into main-memory DIMM slots could completely disrupt how flash is used in storage tiers and break the current storage tiering model, initially at the physical level, with effects that could ripple through memory architectures.

Read more

IBM Makes Major Commitment to Flash

Richard Fichera

 

Wisdom from the Past

In his 1956 dystopian sci-fi novel “The City and the Stars”, Arthur C. Clarke puts forth the fundamental design tenet for making eternal machines: “A machine shall have no moving parts.” To someone from the 1950s, current computers would appear to come close to that ideal – the CPUs and memory perform silent magic and can, with some ingenuity, be passively cooled, and invisible electronic signals carry information in and out of them to networks and … oops, to rotating disks, still with us after more than five decades. But, as we all know, salvation has appeared on the horizon in the form of solid-state storage, so-called flash storage (actually an idea of several decades’ standing as well, just not affordable until recently).

The initial substitution of flash for conventional storage yields immediate gratification in the form of lower power, possibly lower cost if used effectively, and higher performance, but the ripple-effect benefits of flash can be even more pervasive. However, implementing the major architectural changes that flash engenders across the whole IT stack is a difficult conceptual challenge for users, and one that most vendors have addressed only piecemeal. Enter IBM and its Flashahead initiative.

What is Happening?

On Friday, April 11, IBM announced a major initiative, backed by a $1B spending commitment, to accelerate the use of flash technology by means of three major programs:

  • Fundamental flash R&D
  • New storage products built on flash-only memory technology

Read more

HP Shows its Next Generation Blade and Converged Infrastructure – No Revolution, but Strong Evolution

Richard Fichera

With the next major spin of Intel server CPUs due later this year, HP’s customers have been waiting for the next iteration of its core c-Class BladeSystem, which has been on the market for almost seven years without any major changes to its basic architecture. IBM has already replaced its BladeCenter architecture with the new Pure Systems, and Cisco’s offering is new enough that it should last at least another three years without a major architectural refresh. That left HP customers wondering when HP would introduce its next blade enclosure, and whether it would be compatible with current products.

At its partner conference this week, HP announced a range of enhancements to its blade product line that in combination represent a strong evolution of the current product while maintaining compatibility with current investments. This positioning is similar to what IBM did with its BladeCenter to BladeCenter-H upgrade, preserving current customer investment and extending the life of the current server and peripheral modules for several more years.

Tech Stuff – What Was Announced

Among the goodies announced on February 19 was an assortment of performance and functionality enhancements, including:

  • Platinum enclosure — The centerpiece of the announcement was the new c7000 Platinum enclosure, which boosts the speed of the midplane signal paths from 10 GHz to 14 GHz, a 40% increase in raw bandwidth of the critical midplane across which all of the enclosure I/O travels (the quick arithmetic is sketched below). In addition to the higher-bandwidth midplane, the new enclosure incorporates location-aware sensors and doubles the available storage bandwidth.
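A quick sanity check on the 40% figure, under my simplifying assumption that raw midplane bandwidth scales directly with the signaling rate of the paths:

```python
# Rough check of the c7000 Platinum midplane claim above, assuming raw
# bandwidth scales directly with the signaling rate of the midplane paths.

old_rate_ghz = 10.0
new_rate_ghz = 14.0

increase = (new_rate_ghz - old_rate_ghz) / old_rate_ghz
print(f"Raw midplane bandwidth increase: {increase:.0%}")   # ~40%
```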
Read more

EMC And VMware Carve Out Pivotal: Good News For I&O Pros And The Virtualization Market

Dave Bartoletti

So what does VMware and EMC’s announcement of the new Pivotal Initiative mean for I&O leaders? Put simply, it means the leading virtualization vendor is staying focused on the data center — and that’s good news. As many wise men have said, the best strategy comes from knowing what NOT to do. In this case, that means NOT shifting focus too fast and too far afield to the cloud.

I think this is a great move, and it makes all kinds of sense: it protects VMware’s relationship with its core buyer, maintains focus on the data center, and lays the foundation for the vendor’s software-defined data center strategy. This move helps to end the cloud-washing that’s confused customers for years: There’s a lot of work left to do to virtualize the entire data center stack, from compute to storage and network and apps, and the easy apps, by now, have mostly been virtualized. The remaining workloads enterprises seek to virtualize are much harder: They don’t naturally benefit from consolidation savings, they are highly performance sensitive, and they are much more complex.

Read more

AMD Acquires SeaMicro — Big Bet On Architectural Shift For Servers

Richard Fichera

[For some reason this has been unpublished since April — so here it is well after AMD announced its next spin of the SeaMicro product.]

At its recent financial analyst day, AMD indicated that it intended to differentiate itself by creating products that were advantaged in niche markets, with specific mention, among other segments, of servers, and to generally shake up the trench warfare that has had it on the losing side of its lifelong battle with Intel (my interpretation, not AMD management’s words). Today, at least for the server side of the business, it made a move that can potentially offer it visibility and differentiation by acquiring innovative server startup SeaMicro.

SeaMicro has attracted our attention since its appearance (blog post 1, blog post 2) with its innovative architecture that dramatically reduces power and improves density by sharing components like I/O adapters, disks, and even BIOS over a proprietary fabric. The irony here is that SeaMicro came to market with a tight alignment with Intel, who at one point even introduced a special dual-core packaging of its Atom CPU to allow SeaMicro to improve its density and power efficiency. Most recently SeaMicro and Intel announced a new model that featured Xeon CPUs to address the more mainstream segments that were not a part of SeaMicro’s original Atom-based offering.

Read more

Data Center Power And Efficiency – Public Enemy #1 Or The Latest Media Punching Bag?

Richard Fichera

This week, the New York Times ran a series of articles about data center power use (and abuse), “Power, Pollution and the Internet” (http://nyti.ms/Ojd9BV) and “Data Barns in a Farm Town, Gobbling Power and Flexing Muscle” (http://nyti.ms/RQDb0a). Among the claims made in the articles was that data centers were “only using 6 to 12% of the energy powering their servers to deliver useful computation.” As with a lot of media broadsides, the reality is more complex than the dramatic claims made in these articles. Technically they are correct in claiming that of the electricity going to a server, only a very small fraction is used to perform useful work, but this dramatic claim is not a fair representation of the overall efficiency picture. The Times analysis fails to take into consideration that not all of the power in the data center goes to servers, so the claim of 6% efficiency for the servers is not representative of the real operational efficiency of the complete data center.

On the other hand, while I think the Times chooses drama over even-keeled reporting, the actual picture for even a well-run data center is not as good as its proponents would claim. Consider:

  • A new data center with a PUE of 1.2 (very efficient), meaning roughly 83% of the power goes to IT loads.
  • Then assume that 60% of that IT power goes to servers (storage and network get the rest), for a net of roughly 50% of facility power going into servers. If the servers run at an average utilization of 10%, then only 10% of 50%, or 5%, of the power is actually doing real IT processing. Of course, the real "IT number" is servers plus storage plus network, so depending on how you account for them, IT usage could be as high as 38% (0.83 × 0.4 + 0.05); the short calculation after this list walks through the numbers.
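Here is a minimal sketch of that calculation. The PUE, the 60/40 server versus storage-and-network split, and the 10% utilization figure come from the bullets above; the rest is arithmetic.

```python
# Reconstruction of the data center efficiency arithmetic above.
pue = 1.2
it_fraction = 1 / pue                 # ~0.83 of facility power reaches IT gear
server_share = 0.6                    # share of IT power going to servers
server_utilization = 0.10             # average server utilization

server_power = it_fraction * server_share                 # ~0.50 of facility power
useful_server_work = server_power * server_utilization    # ~0.05 -> the "5%" figure

# More generous accounting: treat storage and network power as useful as well.
storage_network = it_fraction * (1 - server_share)        # ~0.33
generous_it_usage = storage_network + useful_server_work  # ~0.38 -> the "38%" figure

print(f"Power reaching servers:         {server_power:.0%}")
print(f"Power doing useful server work: {useful_server_work:.0%}")
print(f"Generous IT usage figure:       {generous_it_usage:.0%}")
```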
Read more

2 Big Shifts Taking Us To More Resource-Efficient Computing

Chris Mines

In the last couple of weeks, I finally put a couple of pieces together . . . the tech industry is pushing hard, down two parallel tracks, toward much more resource-efficient computing architectures.

Track 1: Integrated systems. Computer suppliers are putting hardware components (including compute, network, and storage) together with middleware and application software in pre-integrated packages. The manufacturers will do assembly and testing of these systems in their factories, rather than on the customer's site. And they will tailor the system — to a greater or lesser degree, depending on the system — to the characteristics of the workload(s) it will be running.

The idea is to use general-purpose components (microprocessors, memory, network buses, and the like) to create special-purpose systems on a mass-customization basis. This trend has been evident for a while in the Oracle Exadata and Cisco UCS systems; IBM's Pure systems introductions push it even further into pre-configured applications and systems management.

Track 2. Modular data centers. Now, zoom out from individual computing systems to aggregations of those systems into data centers. And again, assemble as much of the componentry as possible in the factory rather than on-site. Vendors like Schneider, Emerson, and the systems shops like IBM and HP are creating a design approach and infrastructure systems that will allow data centers to be designed in modular fashion, with much of the equipment like air handling and power trucked to the customer's site, set up in the parking lot, and quickly turned on.

Read more

Cisco’s Turn At Bat, Introduces Next Generation Of UCS

Richard Fichera

Next up in the 2012 lineup for the Intel E5 refresh cycle of its infrastructure offerings is Cisco, with its announcement last week of what it refers to as its third generation of fabric computing. Cisco announced a combination of tangible improvements to both the servers and the accompanying fabric components, as well as some commitments for additional hardware and a major enhancement of its UCS Manager software immediately and later in 2012. Highlights include:

  • New servers – No surprise here, Cisco is upgrading its servers to the new Intel CPU offerings, leading with its high-volume B200 blade server and two C-Series rack-mount servers, one a general-purpose platform and the other targeted at storage-intensive requirements. On paper, the basic components of these servers sound similar to competitors’ – new E5 CPUs, faster I/O, and more memory. In addition to the servers announced for March availability, Cisco stated that it would be delivering additional models for ultra-dense computing and mission-critical enterprise workloads later in the year.
  • Fabric improvements – Because Cisco has a unique architecture, it also focused on upgrades to the UCS fabric in three areas: server, enclosure, and top-level interconnect. The servers now have an optional improved virtual NIC card with support for up to 128 VLANs per adapter and two 20 Gb ports per adapter. One adapter sits on the motherboard and another can be plugged in as a mezzanine card, giving up to 80 Gb of bandwidth to each server. The Fabric Extender, the component that connects each enclosure to the top-level Fabric Interconnect, has seen its bandwidth doubled to a maximum of 160 Gb. The Fabric Interconnect, the top of the UCS management hierarchy and the interface to the rest of the enterprise network, has been upgraded to a maximum of 96 universal 10 Gb ports (divided between downlinks to the blade enclosures and uplinks to the enterprise fabric). A quick tally of these bandwidth figures follows this list.
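As promised in the fabric bullet, here is a rough tally of the bandwidth numbers Cisco quoted. The assumption that both adapter positions per server (motherboard plus mezzanine) are populated is mine, not Cisco’s.

```python
# Rough tally of the UCS bandwidth figures quoted above (all values in Gb/s).
# Assumption: both adapter positions per server are populated.

ports_per_adapter = 2
port_speed_gb = 20
adapters_per_server = 2           # one on the motherboard, one mezzanine card

per_server_bw = adapters_per_server * ports_per_adapter * port_speed_gb  # 80 Gb/s
fabric_extender_bw = 160          # per enclosure, doubled from the prior generation
fabric_interconnect_bw = 96 * 10  # 96 universal 10 Gb ports

print(f"Per-server bandwidth:          {per_server_bw} Gb/s")
print(f"Per-enclosure Fabric Extender: {fabric_extender_bw} Gb/s")
print(f"Fabric Interconnect total:     {fabric_interconnect_bw} Gb/s")
```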
Read more

Dell’s Turn For Infrastructure Announcements — Common Theme Emerging For 2012?

Richard Fichera

Last week it was Dell’s turn to tout its new wares, as it pulled back the curtain on its 12th-generation servers and associated infrastructure. I’m still digging through all the details, but at first glance it looks like Dell has been listening to a lot of the same customer input as HP, and as a result their messages (and very likely the value delivered) are in many ways similar. Among the highlights of Dell’s messaging are:

  • Faster provisioning with next-gen agentless intelligent controllers — Dell’s version is iDRAC7, and in conjunction with its Lifecycle Controller firmware, Dell makes many of the same claims as HP, including faster time to provision and maintain new servers, automatic firmware updates, and many fewer administrative steps, resulting in opex savings.
  • Intelligent storage tiering and aggressive use of flash memory, under the aegis of Dell’s “Fluid Storage” architecture, introduced last year.
  • A high-profile positioning for its Virtual Network architecture, building on its acquisition of Force10 Networks last year. With HP and now Dell aiming for more of the network budget in the data center, it’s not hard to understand why Cisco was so aggressive in pursuing its piece of the server opportunity — any pretense of civil coexistence in the world of enterprise networks is gone, and the only mutual interest holding the vendors together is their customers’ demand that they continue to play well together.
Read more