A lot has been written about potential threats to Intel’s low-power server hegemony, not only from its perennial minority rival AMD but also from emerging non-x86 technologies such as ARM servers. These are real threats, with the potential to disrupt Intel’s position in the low-power and small-form-factor server segment if left unanswered, but Intel’s management has not been asleep at the wheel. As part of the rollout of the new Sandy Bridge architecture, Intel recently disclosed its platform strategy for what it is defining as “Micro Servers”: small single-socket servers with shared power and cooling to improve density beyond the generally accepted dividing line of one server per RU that separates “standard density” from “high density.” While I think Intel’s definition is a bit myopic, mostly serving to attach a label to a well-established category, it is a useful tool for segmenting low-end servers and talking about the relevant workloads.
Intel’s strategy revolves around introducing successive generations of its Sandy Bridge and future architectures embodied as Low Power (LP) and Ultra Low Power (ULP) products, with promises of up to 2.2X performance per watt and 30% less actual power compared to equivalent previous-generation x86 servers, as outlined in a chart Intel presented.
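Taken at face value, those two headline numbers also imply an absolute performance gain, since performance equals performance-per-watt times power. A quick back-of-envelope sketch (using only the figures quoted above, not Intel’s own methodology):

```python
# Back-of-envelope check of Intel's LP/ULP claims. The 2.2x perf/watt
# and 30%-less-power figures come from the post; the combination is ours.
def implied_performance(perf_per_watt_ratio: float, power_ratio: float) -> float:
    """Absolute performance ratio implied by a perf/watt ratio and a
    power ratio: perf = (perf/watt) * watts, so the ratios multiply."""
    return perf_per_watt_ratio * power_ratio

# "Up to 2.2X performance per watt" at "30% less actual power" (0.7x power):
ratio = implied_performance(2.2, 0.7)
print(f"Implied absolute performance: {ratio:.2f}x")  # 1.54x
```

In other words, if both "up to" claims held simultaneously, the new part would deliver roughly 1.5x the absolute performance of its predecessor.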
So what does this mean for Infrastructure & Operations professionals interested in serving the target workloads for micro servers?
The drum continues to beat for converged infrastructure products, and Dell has given it the latest thump with the introduction of vStart, a pre-integrated environment for VMware. Best thought of as a competitor to VCE, the integrated VMware, Cisco, and EMC virtualization stack, vStart combines compute, storage, and networking components into a single pre-integrated offering.
Intel today publicly announced its anticipated “Westmere EX” high-end Westmere-architecture server CPU as the E7, now part of a new family nomenclature encompassing entry (E3), midrange (E5), and high-end (E7) server CPUs. At first glance it certainly looks like it delivers on the promise of the Westmere architecture, with enhancements that will appeal to buyers of high-end x86 systems.
The E7 in a nutshell:
32 nm CPU with up to 10 cores, each with Hyper-Threading, for up to 20 threads per socket.
Intel claims that the system-level performance will be up to 40% higher than the prior generation 8-core Nehalem EX. Notice that the per-core performance improvement is modest (although Intel does offer a SKU with 8 cores and a slightly higher clock rate for those desiring ultimate performance per thread).
Improvements in security with Intel Advanced Encryption Standard New Instructions (AES-NI) and Intel Trusted Execution Technology (Intel TXT).
Major improvements in power management, incorporating the power-management capabilities of the Xeon 5600 CPUs: more aggressive P-states, improved idle power operation, and the ability to reduce individual cores’ power settings separately depending on workload. To what extent this is supported on systems that do not incorporate Intel’s Node Manager software is not clear.
What was once nothing more than outlandish speculation has suddenly become much more concrete: the prospect of a new entrant into the volume Linux and Windows server space culminated in an immense buzz at CES as numerous players, including NVIDIA and Microsoft, stoked the fires with innuendo, announcements, and demos.
Consumers of x86 servers are always on the lookout for faster, cheaper, and more power-efficient servers. When they can’t get all three, the combination of cheaper and more energy-efficient seems attractive to a large enough chunk of the market to have motivated Intel, AMD, and all their system partners to develop low-power chips and servers designed for high-density compute and web/cloud environments. Up until now the debate was Intel versus AMD, and low power meant a CPU with four cores and a power dissipation of 35 to 65 watts.
The Promised Land
The performance trajectory of processors that were formerly purely mobile device processors, notably the ARM Cortex, has suddenly introduced a new potential option into the collective industry mindset. But is this even a reasonable proposition, and if so, what does it take for it to become a reality?
Our first item of business is to figure out whether it even makes sense to think about these CPUs as server processors. My quick take is yes, with some caveats. The latest ARM offering is the Cortex A9, with vendors currently offering dual-core products at up to 1.2 GHz (the architecture claims scalability to four cores and 2 GHz). It draws approximately 2W, much less than any single-core x86 CPU, and a multi-core version should be able to execute any reasonable web workload. Coupled with the promise of embedded GPUs, the notion of a server that consumes much less power than even the lowest-power x86 begins to look attractive. But…
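To put that 2W figure in perspective, here is a back-of-envelope comparison against the 35 to 65 watt “low power” x86 CPUs discussed earlier. This is raw power budget only, and deliberately says nothing about per-thread performance:

```python
# How many ~2W ARM Cortex A9 parts fit in the power budget of one
# "low power" x86 CPU? The 2W figure and the 35-65W range come from
# this post; the comparison itself is illustrative only, since it
# ignores the large per-thread performance gap between the two.
ARM_CORTEX_A9_WATTS = 2.0
LOW_POWER_X86_WATTS_MIN = 35.0
LOW_POWER_X86_WATTS_MAX = 65.0

arm_parts_min = LOW_POWER_X86_WATTS_MIN / ARM_CORTEX_A9_WATTS
arm_parts_max = LOW_POWER_X86_WATTS_MAX / ARM_CORTEX_A9_WATTS
print(f"One low-power x86 power budget covers roughly "
      f"{arm_parts_min:.0f} to {arm_parts_max:.0f} ARM CPUs")
```

Even with generous allowances for ARM’s lower per-thread performance, a 17x to 32x power-budget multiple is what makes the idea worth taking seriously for highly parallel, low-intensity workloads.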
In October, with great fanfare, the Open Data Center Alliance unfurled its banners. The ODCA is a consortium of approximately 50 large IT consumers, including large manufacturing, hosting, and telecom providers, with the avowed intent of developing standards for interoperable cloud computing. In addition to the roster of users, the announcement highlighted Intel in an ambiguous role as a technology advisor to the group. The ODCA believes it will achieve some weight in the industry due to its estimated $50 billion per year of cumulative IT purchasing power, and the trade press was full of praise for influential users driving technology, as opposed to allowing rapacious vendors such as HP and IBM to drive users down proprietary paths that lead to vendor lock-in.
Now that we’ve had a month or more to allow the purple prose to settle a bit, let’s look at the underlying claims, potential impact of the ODCA and the shifting roles of vendors and consumers of technology. And let’s not forget about the role of Intel.
First, let me state unambiguously that one of the core intentions of the ODCA is a good idea: developing common use-case models that, backed by the economic clout of ODCA members, will in turn drive vendors to develop products that comply with those models (and hopefully there will be a correlation between ODCA member requirements and those of a wider set of consumers). Vendors spend a lot of time talking to users and trying to understand their requirements, and having the ODCA as a proxy for the requirements of a lot of very influential customers will be a benefit to all concerned.
As an immediate reaction to the recent announcement of Attachmate’s intention to acquire Novell, covered in depth by my colleagues and synthesized by Chris Voce in his recent blog post, I have received a string of inquiries about the probable fate of SUSE LINUX. Should we continue to invest? Will Attachmate kill it? Will it be sold?
Reduced to its essentials, the answer is that we cannot predict the eventual ownership of SUSE Linux, but it is almost certain to remain a viable and widely available Linux distribution. SUSE is one of the crown jewels of Novell’s portfolio: growing steadily, gaining market share, generating increasing revenues, and, from the outside at least, a profitable business.
Attachmate has two choices with SUSE: retain it as a profitable growth engine and attachment point for other Attachmate software and services, or package it up for sale. In either case it has to continue to invest in the product and its marketing. If Attachmate chooses to keep it, SUSE Linux will continue to operate much as it did under Novell; if it sells, the acquirer would be foolish to do anything else. Speculation about potential acquirers has included HP, IBM, Cisco, and Oracle, all of whom could make use of a Linux distribution as an internal product component in addition to the software and service revenues it could engender. But aside from serving as an internal platform, for SUSE to have value as an industry alternative to Red Hat, it would have to remain vendor-agnostic and widely available.
With the inescapable caveat that this is a developing situation, my current take on SUSE Linux is that there is no reason to back away from it or to fear that it will disappear into the maw of some giant IT company.
I met recently with Cisco’s UCS group in San Jose to get a quick update on sales and maybe some hints about future development. The overall picture is one of rapid growth decoupled from whatever pressures Cisco management has cautioned about in other areas of the business.
Overall, according to a recent disclosure by Cisco CEO John Chambers, Cisco’s UCS revenue is growing 550% year over year, with the most recent quarterly revenues indicating a $500M annual run rate (we make that out as about $125M in quarterly revenue). This figure does not seem to include the over 4,000 blades used by Cisco IT, nor units consumed internally by Cisco and subsequently shipped to customers as part of appliances or other Cisco products. Also of note: this is fiscal Q1 for Cisco, traditionally its weakest quarter, although with an annual growth rate in excess of 500% we would expect UCS sequential quarters to march to a totally different drummer than the overall company numbers.
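The arithmetic behind that parenthetical is simple. A back-of-envelope sketch (treating 550% year-over-year growth as revenue at 6.5x the year-ago quarter; Cisco did not break the numbers out this way):

```python
# Sanity check on the UCS figures quoted above (a sketch, not Cisco's
# own accounting): a $500M annual run rate implies ~$125M per quarter.
annual_run_rate_musd = 500.0
quarterly_revenue_musd = annual_run_rate_musd / 4
print(f"Implied quarterly revenue: ${quarterly_revenue_musd:.0f}M")  # $125M

# At 550% year-over-year growth, revenue is 6.5x the year-ago quarter,
# implying a year-ago quarter of roughly:
yoy_growth = 5.50
year_ago_quarter_musd = quarterly_revenue_musd / (1 + yoy_growth)
print(f"Implied year-ago quarter: ~${year_ago_quarter_musd:.0f}M")
```

That implied ~$19M year-ago quarter is a reminder of how small the base was; sustaining anything like this growth rate off a $125M quarter is a very different proposition.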
There has been a lot of press about IBM’s acquisition of BNT (Blade Network Technologies) focusing on the economics and market share of BNT as a competitor to Cisco and HP’s ProCurve/3Com franchise. But at its heart the acquisition is more about defending and expanding a position in the emerging converged server, networking, and storage infrastructure segment than it is about raw switch port market share. It is also a powerful vindication of the proposition that infrastructure convergence is driving major realignment in the vendor industry.
Starting with HP’s success with its c-Class blade servers and Virtual Connect technology, and escalating with Cisco’s entrance into the server market, IBM continued its investment in its Virtual Fabric and Open Fabric Manager technology, heavily leveraging BNT’s switch platforms. At some point it became clear that BNT was a critical element of IBM’s convergence strategy, leaving IBM’s plans heavily dependent on a vendor with which it had an excellent but non-exclusive relationship, and whose acquisition by another player could severely compromise those plans. Hence the acquisition. Now that it owns BNT, IBM can build out its converged infrastructure strategy on BNT’s excellent edge network technology without hesitation.
I recently spent a day with IBM’s x86 team, primarily to get back up to speed on their entire x86 product line, and partially to realign my views of them after spending almost five years as a direct competitor. All in all, time well spent, with some key takeaways:
IBM has fixed some major structural problems with the entire x86 program and its perception within the company – As recently as two years ago, it appeared to the outside world that IBM was not really serious about x86 servers: it had licensed its low-end server designs to Lenovo (although IBM continued to sell its own versions) and had apparently retreated to the upper end of the segment. New management, new alignment with sales, and a higher internal profile for x86 seem to have moved the division back into IBM’s mainstream.
Increased investment – It looks like IBM significantly ramped up investment in x86 products about three years ago. The result has been a relatively steady flow of new products into the marketplace, some of which, such as the HS22 blade, significantly reversed deficits versus equivalent HP products. Other improvements followed in high-end servers, virtualization, and systems management, along with an increased velocity of innovation in low-end systems.
Established leadership in new niches such as dense modular server deployments – IBM’s iDataplex, while representing a small footprint in terms of IBM’s total volume, gave the company immediate visibility as an innovator in the rapidly growing niche for hyper-scale dense deployments. Along the way, IBM has apparently become the leader in GPU deployments, another low-volume but high-visibility niche.
Today, Cisco unveiled its home telepresence solution called Umi (pronounced you-me, get it?). For those of us who aren't familiar with Cisco's use of the term telepresence, it's a term Cisco coined to describe the very impressive (and very expensive) immersive enterprise videoconferencing experience it provides to businesses around the world. In the home, it basically means TV-based videoconferencing.
The home offering is similar to the enterprise version in two key ways -- it is also impressive and expensive. Starting November 14, affluent consumers who really want to connect with family across great distances (and who are either unaware of or uninterested in Skype) can put down $599, sign up for a $24.99 monthly Umi service fee, and become HD videoconferencers. I tried the system in a real home and I'll admit the quality is eye-opening. As is the price.

Read more of the details here in this post from CNET, but some of the less obvious points include video voicemail, video greetings, and the ability to record video messages when not connected to someone else. The camera rests above your TV screen and makes for one of the most believable videoconference setups I've seen (the person you speak to actually appears to be looking at you, imagine that). The whole experience rides on top of the existing video input, so while you watch TV you can see a message indicating a call is coming in. Choose not to take it and it will go to video voicemail. There are nice touches like a privacy-minded sliding shutter over the camera (complete with a "shooshing" noise when the shutter closes) that lets you know, by both sight and sound, that your camera is not on. So go ahead and give the missus a kiss while on the couch; no one is looking.