Today, after two of its largest partners had already announced the system portfolios that will use it, Intel finally announced one of the worst-kept secrets in the industry: the Xeon E5-2600 family of processors.
OK, now that I’ve got in my jab at the absurdity of the announcement scheduling, let’s look at the thing itself. In a nutshell, these new processors, based on the previous-generation 32 nm production process of the Xeon 5600 series but incorporating the new “Sandy Bridge” architecture, are, in fact, a big deal. They incorporate several architectural innovations and will bring major improvements in power efficiency and performance to servers. Highlights include:
Performance improvements on selected benchmarks of up to 80% above the previous Xeon 5600 CPUs, apparently due to both improved CPU architecture and larger memory capacity (up to 24 DIMMs at 32 GB per DIMM equals a whopping 768 GB capacity for a two-socket, eight-core/socket server).
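The headline memory figure is straightforward arithmetic from the configuration cited above; a quick sketch:

```python
# Maximum memory for a two-socket Xeon E5-2600 server, using the
# DIMM count and module size cited above.
dimm_slots = 24      # DIMM slots across both sockets
gb_per_dimm = 32     # largest supported DIMM size, in GB
total_gb = dimm_slots * gb_per_dimm
print(f"{total_gb} GB")  # 768 GB
```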
Improved I/O architecture, including an on-chip PCIe 3 controller and a special mode that allows I/O controllers to write directly to the CPU cache without a round trip to memory — a feature that only a handful of I/O device developers will use, but one that contributes to improved I/O performance and lowers CPU overhead during PCIe I/O.
Significantly improved energy efficiency, with the SPECpower_ssj2008 benchmark showing a 50% improvement in performance per watt over previous models.
The Dell brand is one of the most recognizable in technology. It was born a hardware company in 1984 and deservedly rocketed to fame, but it has always been about the hardware. In 2009, its big Perot Systems acquisition marked the first real departure from this hardware heritage. While it has made numerous software acquisitions, including some good ones like Scalent, Boomi, and KACE, it remains a marginal player in software. That is about to change.
In late 2010 I noted that startup SeaMicro had introduced an ultra-dense server using Intel Atom chips in an innovative fabric-based architecture that allowed them to factor out much of the power overhead of a large multi-CPU server (http://blogs.forrester.com/richard_fichera/10-09-21-little_servers_big_applications_intel_developer_forum). Along with many observers, I noted that the original SeaMicro server was well-suited to many lightweight edge processing tasks, but that the system would not support more traditional compute-intensive tasks due to the performance of the Atom core. I was, however, quite taken with the basic architecture, which uses a proprietary high-speed (1.28 Tb/s) 3D mesh interconnect to let the CPU cores share network, BIOS, and disk resources that are normally replicated on a per-server basis in conventional designs, with commensurate reductions in power and increases in density.
I just spent several days at Dell World and came away with the impression of a company that is really trying to change its image. Old Dell was boxes, discounts, and a low-cost supply chain. New Dell is applications, solutions, cloud (now there’s a surprise!), and investments in software and integration. OK, good image, but what’s the reality? All in all, I think they are telling the truth about their intentions, and their investments continue to be aligned with these intentions.
As I wrote about a year ago, Dell seems intent on climbing up the enterprise food chain. Its investments in several major acquisitions, including Perot Systems for services and a string of advanced storage, network, and virtual infrastructure solution providers, have kept the momentum going, and the products have been following to market. At the same time, I see solid signs of continued investment in underlying hardware, and Dell’s status as the #1 x86 server vendor in North America and #2 worldwide remains an indication of its ongoing success in its traditional niches. While Dell is not a household name in vertical solutions, it has competent offerings in health care, education, and trading, and several of the initiatives I mentioned last year are definitely further along and more mature, including continued refinement of its VIS offerings and deep integration of its much-improved DRAC systems management software into mainstream management consoles from VMware and Microsoft.
About five months ago, I “broke up” with T-Mobile in favor of AT&T. I was a T-Mobile customer for six years on a very competitive service plan. But none of that mattered; I wanted an iPhone, and T-Mobile couldn’t give it to me. It was a clean but cruel breakup: AT&T cancelled my T-Mobile contract on my behalf, the equivalent of getting dumped by your girlfriend’s new boyfriend.
I bring this up because it reminds me of the saying: “If we don’t take care of our customers, someone else will.” This is particularly important to remember in “The Age Of The Customer” where technology-led disruption is eroding traditional competitive barriers across all industries. Empowered buyers have information at their fingertips to check a price, read a product review, or ask for advice from a friend right from the screen of their smartphone.
This is affecting your IT just as much as your business: As an indicator, Forrester finds that 48% of information workers already buy whatever smartphone they want and use it for work purposes. In the new era, it is easier than ever for empowered employees and App Developers to circumvent traditional IT procurement and provisioning to take advantage of new desktop, mobile, and tablet devices as well as cloud-based software and infrastructure you don’t support. They’re “cheating” on you to get their jobs done better, faster, and cheaper.
To become more desirable to your customer – be it your Application Developers, workforce, or end buyers – IT Infrastructure and Operations leaders must become more customer-obsessed, which I talk about in this video:
I recently published an update on power and cooling in the data center (http://www.forrester.com/go?docid=60817), and as I review it online, I am struck by the combination of old and new. The old – the evolution of semiconductor technology, the increasingly elegant attempts to design systems and components that can be incrementally throttled, and the increasingly sophisticated construction of the actual data centers themselves, with increasing modularity and physical efficiency of power and cooling.
The new is the incredible momentum I see behind Data Center Infrastructure Management software. In a few short years, DCIM solutions have gone from simple aggregated viewing dashboards to complex software that understands tens of thousands of components, collects, filters and analyzes data from thousands of sensors in a data center (a single CRAC may have in excess of 20 sensors, a server over a dozen, etc.) and understands the relationships between components well enough to proactively raise alarms, model potential workload placement and make recommendations about prospective changes.
Of all the technologies reviewed in the document, DCIM offers one of the highest potentials for improving overall efficiency without sacrificing reliability or scalability of the enterprise data center. While the various DCIM suppliers are still experimenting with business models, I think that it is almost essential for any data center operations group that expects significant change, be it growth, shrinkage, migration or a major consolidation or cloud project, to invest in DCIM software. DCIM consumers can expect to see major competitive action among the current suppliers, and there is a strong potential for additional consolidation.
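To make the alarm-raising behavior described above concrete, here is a minimal sketch of the kind of filter-and-alarm logic a DCIM tool applies to its sensor streams. The sensor names, thresholds, and rolling-average approach are hypothetical illustrations, not any vendor's actual implementation:

```python
# Minimal sketch of DCIM-style alarm logic: aggregate readings per
# component/sensor pair and raise an alarm when a rolling average
# crosses a threshold. Sensor names and limits here are hypothetical.
from collections import defaultdict, deque

THRESHOLDS = {"inlet_temp_c": 27.0, "power_w": 450.0}  # hypothetical limits
WINDOW = 5  # readings per rolling average

history = defaultdict(lambda: deque(maxlen=WINDOW))

def ingest(component, sensor, value):
    """Record a reading; return an alarm string if the rolling
    average for this sensor exceeds its threshold, else None."""
    key = (component, sensor)
    history[key].append(value)
    avg = sum(history[key]) / len(history[key])
    limit = THRESHOLDS.get(sensor)
    if limit is not None and avg > limit:
        return f"ALARM {component}/{sensor}: avg {avg:.1f} > {limit}"
    return None

# Example: a CRAC inlet temperature creeping up over successive readings
for t in (25.0, 26.0, 27.5, 28.0, 29.0):
    alarm = ingest("crac-01", "inlet_temp_c", t)
if alarm:
    print(alarm)  # raises an alarm once the average tops 27.0
```

A real DCIM product layers topology awareness on top of this, so an alarm on one CRAC can be correlated with the racks it cools, but the filter-then-alarm pattern is the core of it.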
Security & Risk (S&R) chiefs and Infrastructure & Operations (I&O) leaders have a lot in common, and in great companies, we work in concert to run an efficient, reliable technology infrastructure that keeps critical business assets safe. Much has changed in the world of technology since I pulled my first all-nighter in a data center (falling asleep next to the EMC Symmetrix array was not one of my better ideas – those corners were sharp!), but that partnership is still the same – it takes security engineers and network/server engineers working together to solve really thorny problems.
We have our frictions, of course – I&O pros prioritize operational stability and continuity of service, while S&R pros must occasionally interrupt that continuity to contain security breaches. But when a serious incident (whether security breach or system failure) threatens to sideline our business systems, it falls to us to find and fix the problems – together. We may be organizationally separate now, with I&O reporting into the CIO and the CISO reporting into a COO or Head of Operational Risk, but we share a set of fundamental challenges. We must excel in our own domains (not exactly a cakewalk) but also anticipate and deliver on what our businesses need (much harder).
And what our businesses seek today is growth – in Forrester’s most recent survey of business decision-makers, the top two priorities were growing overall company revenue and acquiring and retaining customers. S&R pros have already worked hard to escape their “Department of No” reputations, and I&O pros have labored tirelessly to get out of the data center and into the business.
I just attended IDF and I’ve got to say, Intel has certainly gotten the cloud message. Almost everything is centered on clouds, from the high-concept keynotes to the presentations on low-level infrastructure, although if you dug deep enough there was content for general old-fashioned data center and I&O professionals. Some highlights:
Chips and processors and low-level hardware
Intel is, after all, a semiconductor foundry, and despite their expertise in design, their true core competitive advantage is their foundry operations – even their competitors grudgingly acknowledge that they can manufacture semiconductors better than anyone else on the planet. As a consequence, showing off new designs and processes is always front and center at IDF, and this year was no exception. Last year it was Sandy Bridge, the 32nm “tock” that delivered a significantly improved architecture on the process introduced with Westmere. This year it was Ivy Bridge, the 22nm “tick” of the Intel “tick-tock” design cycle. Ivy Bridge shrinks the Sandy Bridge architecture to Intel’s new 22nm process and seems to have inherited Intel’s recent focus on power efficiency, with major improvements beyond the already solid advantages of the 22nm process itself, including deeper P-states and the ability to actually shut down parts of the chip when it is idle. While they did not discuss the server variants in any detail, the desktop versions will get an entirely new integrated graphics processor, which they are obviously hoping will blunt AMD’s resurgence in client systems. On the server side, if I were to guess, I would guess more cores and larger caches, along with increased support for virtualization of I/O beyond what they currently have.
Last year at VMworld I noted Xsigo Systems, a small privately held company showing their I/O Director technology, which delivered a subset of HP Virtual Connect or Cisco UCS I/O virtualization capability in a fashion that could be consumed by legacy rack-mount servers from any vendor. I/O Director connects to the server with one or more 10 Gb Ethernet links, and then splits traffic out into enterprise Ethernet and FC networks. On the server side, the applications, including VMware, see multiple virtual NICs and HBAs courtesy of Xsigo’s proprietary virtual NIC driver.
Controlled via Xsigo’s management console, the server MAC and WWNs can be programmed, and the servers can now connect to multiple external networks with fewer cables and substantially lower costs for NIC and HBA hardware. Virtualized I/O is one of the major transformative developments in emerging data center architecture, and will remain a theme in Forrester’s data center research coverage.
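As a rough illustration of the data model implied above (a generic sketch, not Xsigo's actual API or tooling), software-programmable adapters with assignable MACs and WWNs might look like this:

```python
# Illustrative data model for fabric-based I/O virtualization as
# described above: each physical server gets virtual NICs/HBAs whose
# MAC addresses and Fibre Channel WWNs are assigned in software.
# This is a generic sketch, not Xsigo's actual API; names are made up.
from dataclasses import dataclass, field

@dataclass
class VirtualAdapter:
    kind: str        # "vNIC" or "vHBA"
    address: str     # MAC for vNICs, WWN for vHBAs
    network: str     # external Ethernet or FC network name

@dataclass
class Server:
    name: str
    adapters: list = field(default_factory=list)

    def attach(self, kind, address, network):
        """Program a virtual adapter; the fabric maps it onto shared
        uplinks, so no per-network physical card is needed."""
        self.adapters.append(VirtualAdapter(kind, address, network))

srv = Server("rack1-node7")
srv.attach("vNIC", "02:ab:cd:00:00:07", "prod-ethernet")
srv.attach("vHBA", "50:01:43:80:00:00:00:07", "san-fabric-a")
print([a.network for a in srv.adapters])  # ['prod-ethernet', 'san-fabric-a']
```

The point of the model is that identity (MAC/WWN) lives in software rather than on a physical card, which is what lets a management console re-home a server's connectivity without touching cables.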
This year at VMworld, Xsigo announced a major expansion of their capabilities – Xsigo Server Fabric, which takes the previous rack-scale single-Xsigo switch domains and links them into a data-center-scale fabric. Combined with improvements in the software and UI, Xsigo now claims to offer one-click connection of any server resource to any network or storage resource within the domain of Xsigo’s fabric. Most significantly, Xsigo’s interface is optimized to allow connection of VMs to storage and network resources, and to allow the creation of private VM-VM links.
Oracle announced today that it is going to cease development for Itanium across its product line, stating that it believed, after consultation with Intel management, that x86 was Intel’s strategic platform. Intel of course responded with a press release that specifically stated that there were at least two additional Itanium products in active development – Poulson (which has seen its initial specifications, if not availability, announced) and Kittson, of which little is known.
This is a huge move, and one that seems like a kick carefully aimed at the you-know-whats of HP’s Itanium-based server business, which competes directly with Oracle’s SPARC-based Unix servers. If Oracle stays the course in the face of what will certainly be immense pressure from HP, mild censure from Intel, and consternation on the part of many large customers, the consequences are pretty obvious:
Intel loses prestige, credibility for Itanium, and a potential drop-off of business from its only large Itanium customer. Nonetheless, the majority of Intel’s server business is x86, and it will, in the end, suffer only a token loss of revenue. Intel’s response to this move by Oracle will be muted – public defense of Itanium, but no fireworks.