Every culture has its coming-of-age rituals — Confirmation, Bar Mitzvah, being hunted by tribal elders, surviving in the wilderness, driving at high speed while texting — all of which mark the passage from childhood to adulthood. In the high-tech world, one of the rituals marking the maturation of a company is the user group. When a company has a strategy it wants to communicate, a critical mass of customers, and prospects bright enough that it wants to highlight them rather than obscure them, it is time for a user group meeting.
This year, a year after the acquisition of Novell by Attachmate and SUSE's subsequent instantiation as a standalone division, and also the company's 20th anniversary, SUSE held its first user group meeting. All in all, the portents were good, and SUSE got its core messages across to an audience of about 500 of its users as well as a cadre of the more sophisticated (IMHO) industry analysts.
Among my key takeaways:
SUSE is a stable company with rational management — With profitable revenues of over $200M and a publicly stated plan to hit $234M for the next fiscal year, SUSE is a reasonably sized company (technically a division of $1.3B Attachmate, but it looks and acts like an independent company), with growth rates that look to be a couple of points higher than its segment's.
SUSE's management has done an excellent job of focusing the company — SUSE, acknowledging its size disadvantage relative to competitor Red Hat, has chosen to focus heavily on enterprise Linux, publicly disavowing desktop and mobile device directions. SUSE's claim is that its share of the core enterprise segment, relative to Red Hat, is larger than its overall relative market share. This is a hard number to even begin to tweeze out, but it feels like a reasonable claim.
This week the California courts handed HP a nice present — a verdict confirming that Oracle was required to continue to deliver its software on HP's Itanium-based Integrity servers. This was a major victory for HP, on the face of it giving the company the prize it sought — continued availability of Oracle's eponymous database on its high-end systems.
However, HP's customers should not immediately assume that everything has returned to the status quo ante. Once Humpty Dumpty has fallen off the wall, it is very difficult to put the pieces back together again. As I see it, there are still three major elephants in the room that HP users must acknowledge before they make any decisions:
Oracle will appeal, and there is no guarantee of the outcome. The verdict could be upheld or it could be reversed. Even if it is upheld, the appeal process itself delays the start date from which Oracle's compliance with the court-ordered development will be measured. Oracle will also continue to press its counterclaims against HP, but those do not directly relate to the continued development of Oracle software on Itanium.
Itanium is still nearing the end of its road map. A reasonable interpretation of the road map tea leaves that have been exposed puts the final Itanium release at about 2015 unless Intel decides to artificially split Kittson into two separate releases. Integrity customers must take this into account as they buy into the architecture in the last few years of Itanium’s life, although HP can be depended on to offer high-quality support for a decade after the last Itanium CPU rolls off Intel’s fab lines. HP has declared its intention to produce Integrity-level x86 systems, but OS support intentions are currently stated as Linux and Windows, not HP-UX.
Earlier this week at its Discover customer event, HP announced a significant set of improvements to its already successful c-Class BladeSystem product line, which, despite continuing competitive pressure from IBM and the entry of Cisco into the market three years ago, still commands approximately 50% of the blade market. The major components of this announcement fall into four functional buckets – improved hardware, simplified and expanded storage features, new interconnects and I/O options, and serviceability enhancements. Among the highlights are:
Direct connection of HP 3PAR storage – One of the major drawbacks for block-mode storage with blades has always been the cost of the SAN to connect it to the blade enclosure. With the ability to connect an HP 3PAR storage array directly to the c-Class enclosure without any SAN components, HP has reduced both the cost and the complexity of storage for a wide class of applications that have storage requirements within the scope of a single storage array.
New blades – With this announcement, HP fills in the gaps in its blade portfolio, announcing a new Intel Xeon EN-based BL-420 for entry requirements, an upgrade to the BL-465 to support the latest 16-core AMD Interlagos CPUs, and the BL-660, a new single-width, Xeon E5-based 4-socket blade. In addition, HP has expanded the capacity of the sidecar storage blade to 1.5 TB, enabling an 8-server, 12+ TB chassis configuration (the math is sketched below).
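The 8-server, 12+ TB figure is easy to reconstruct. Here is a minimal back-of-the-envelope sketch, assuming a 16-bay c-Class enclosure split evenly between half-height server blades and the new 1.5 TB storage blades; the bay count and the even pairing are my assumptions, not a configuration HP spelled out in the announcement.

```python
# Rough sizing math for the chassis configuration mentioned above.
# Assumptions (mine, not HP's): 16 half-height bays per enclosure, split
# evenly between server blades and 1.5 TB sidecar storage blades.
ENCLOSURE_BAYS = 16
STORAGE_BLADE_TB = 1.5

server_blades = ENCLOSURE_BAYS // 2                  # 8 server blades
storage_blades = ENCLOSURE_BAYS - server_blades      # 8 storage blades
total_storage_tb = storage_blades * STORAGE_BLADE_TB

print(f"{server_blades} servers, {total_storage_tb:.0f} TB of in-chassis storage")
# -> 8 servers, 12 TB of in-chassis storage
```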
Earlier this week Dell joined arch-competitor HP in endorsing ARM as a potential platform for scale-out workloads by announcing “Copper,” an ARM-based version of its PowerEdge-C dense server product line. Dell's announcement and positioning, while a little less high-profile than HP's February announcement, are intended to serve the same purpose — to enable an ARM ecosystem by providing a platform for exploring ARM workloads and to gain a visible presence in the event that the market begins to take off.
Dell's platform is based on a four-core Marvell ARMv7 SoC, which Dell claims delivers somewhat higher performance than the Calxeda part, although it draws more power, at 15 W per node (including RAM and local disk). The server uses the PowerEdge-C form factor of 12 vertically mounted server modules in a 3U enclosure, each carrying four server nodes, for a total of 48 servers and 192 cores per enclosure. In a departure from other PowerEdge-C products, the Copper server has integrated Layer 2 network connectivity spanning all servers, so the unit can serve as a low-cost test bed for clustered applications without external switches.
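For those keeping score, the density and power numbers above roll up as follows; the per-enclosure wattage is my extrapolation from the quoted per-node figure, not a number Dell has published.

```python
# Totals implied by the Copper configuration described above: 12 sleds per
# 3U enclosure, 4 server nodes per sled, 4 ARM cores per node, and roughly
# 15 W per node (including RAM and local disk).
SLEDS_PER_ENCLOSURE = 12
NODES_PER_SLED = 4
CORES_PER_NODE = 4
WATTS_PER_NODE = 15

nodes = SLEDS_PER_ENCLOSURE * NODES_PER_SLED      # 48 servers per 3U
cores = nodes * CORES_PER_NODE                    # 192 cores per 3U
enclosure_watts = nodes * WATTS_PER_NODE          # my estimate: ~720 W per 3U

print(f"{nodes} servers, {cores} cores, roughly {enclosure_watts} W per 3U")
```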
Dell is offering this server to selected customers, not as a GA product, along with open source versions of the LAMP stack, Crowbar, and Hadoop. Currently Canonical is supplying Ubuntu for ARM servers, and Dell is actively working with other partners. Dell expects to see OpenStack available for demos in May, and there is an active Fedora project underway as well.
Driving in the snow is an experience normally reserved for those of us denizens of the northern climes who haven't yet figured out how to make a paycheck mixing Mai Tais in the Caymans. Behind the wheel in the snow, everything happens a little slower. Turn the wheel above 30 on the speedo and it could be a second or two before the car responds, and you'll overshoot the turn and take out the neighbor's shrubs.
Hosted Virtual Desktops are a bit like driving in the snow. Every link in the chain between the data on a hard drive in the datacenter and the pixels on the user's screen introduces a delay that the user perceives as lag, and the laws of physics apply. Too much lag or too much snow and it's hard to get anywhere, as the citizens of Anchorage, Alaska, after this year's record snowfalls, or anyone trying to use a hosted virtual desktop half a world away from the server, will testify.
NVIDIA Brings Gaming Know-How to HVD
Last week I spent a day with NVIDIA's soft-spoken, enthusiastic CEO, Jensen Huang, who put the whole latency issue for VDI into a practical perspective (thanks, Jensen). These days, he says, home game consoles run about 100-150 milliseconds from the time a player hits the fire button to the time they see their plasma cannon blast away an opponent on the screen. For comparison, the blink of an eye is 200-400 milliseconds, and the best gamers can react to things they see on screen in as little as 50 milliseconds.
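To see why "half a world away" is so punishing, a rough propagation-delay sketch helps. The fiber-path length and signal speed below are my assumptions (not NVIDIA's numbers), but they illustrate why the laws of physics set a hard floor under HVD latency.

```python
# Rough propagation-delay sketch for a server half a world away.
# Assumptions (mine): ~20,000 km of fiber path and a signal speed of
# ~200,000 km/s (roughly two-thirds the speed of light in vacuum).
FIBER_PATH_KM = 20_000
KM_PER_MS = 200            # 200,000 km/s == 200 km per millisecond

one_way_ms = FIBER_PATH_KM / KM_PER_MS      # ~100 ms
round_trip_ms = 2 * one_way_ms              # ~200 ms before any rendering or encoding

print(f"Wire-only round trip: ~{round_trip_ms:.0f} ms")
# Against the numbers above -- 100-150 ms for a console shot and ~50 ms for a
# top gamer's reaction -- the wire alone can consume the entire budget.
```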
I said last year that this would happen sometime in the first half of this year, but for some reason my colleagues and clients have kept asking me exactly when we would see a real ARM server running a real OS. How about now?
To copy from Calxeda’s most recent blog post:
“This week, Calxeda is showing a live Calxeda cluster running Ubuntu 12.04 LTS on real EnergyCore hardware at the Ubuntu Developer and Cloud Summit events in Oakland, CA. … This is the real deal; quad-core, w/ 4MB cache, secure management engine, and Calxeda’s fabric all up and running.”
This is a significant milestone for many reasons. It proves that Calxeda can indeed deliver a working server based on its scalable fabric architecture; having HP sign up as a partner meant that this was essentially a non-issue, but still, proof is good. It also establishes that at least one Linux distribution provider, in this case Canonical with Ubuntu, is willing to provide a real supported distribution. My guess is that Red Hat and CentOS will jump on the bus fairly soon as well.
Most importantly, we can get on with the important work of characterizing real benchmarks on real systems with real OS support. HP's discovery centers will certainly play a part in this process as well, and I am willing to bet that by the end of the summer we will have some compelling data on whether the ARM server will deliver on its performance and energy efficiency promises. It's not a slam-dunk guaranteed win – Intel has been steadily ratcheting up its energy efficiency, and the latest generation of x86 servers from HP, IBM, Dell, and others shows promise of much better throughput per watt than their predecessors. Add to that the demonstration by SeaMicro (ironically, now owned by AMD) of a Xeon-based system that delivered Xeon CPUs at a power overhead of 10 W per CPU, an unheard-of level of efficiency.
Next up in the 2012 lineup of Intel Xeon E5 refreshes of vendor infrastructure offerings is Cisco, with its announcement last week of what it refers to as its third generation of fabric computing. Cisco announced a combination of tangible improvements to both the servers and the accompanying fabric components, as well as commitments for additional hardware and a major enhancement of its UCS Manager software, arriving both immediately and later in 2012. Highlights include:
New servers – No surprise here: Cisco is upgrading its servers to the new Intel CPU offerings, leading with its high-volume B200 blade server and two C-Series rack-mount servers, one a general-purpose platform and the other targeted at storage-intensive requirements. On paper, the basic components of these servers sound similar to competitors' – new E5 CPUs, faster I/O, and more memory. In addition to the servers announced for March availability, Cisco stated that it would deliver additional models for ultra-dense computing and mission-critical enterprise workloads later in the year.
Fabric improvements – Because Cisco has a relatively unique architecture, it also focused on upgrades to the UCS fabric in three areas: server, enclosure, and top-level interconnect. The servers now have an optional improved virtual NIC card with support for up to 128 VLANs per adapter and two 20 Gb ports per adapter. One sits on the motherboard and another can be plugged in as a mezzanine card, giving each server up to 80 Gb of bandwidth (the per-server math is sketched below). The Fabric Extender, the component that connects each enclosure to the top-level Fabric Interconnect, has seen its bandwidth doubled to a maximum of 160 Gb. The Fabric Interconnect, the top of the UCS management hierarchy and the interface to the rest of the enterprise network, has been upgraded to a maximum of 96 universal 10 Gb ports (divided between downlinks to the blade enclosures and uplinks to the enterprise fabric).
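The 80 Gb-per-server figure follows directly from the port counts in the bullet above; here is a quick sketch of that arithmetic, with no assumptions beyond the numbers Cisco quoted.

```python
# Per-server bandwidth implied by the VIC configuration described above:
# two adapters per server (one on the motherboard, one mezzanine), each
# with two 20 Gb ports.
ADAPTERS_PER_SERVER = 2
PORTS_PER_ADAPTER = 2
GBPS_PER_PORT = 20

per_server_gbps = ADAPTERS_PER_SERVER * PORTS_PER_ADAPTER * GBPS_PER_PORT
print(f"Up to {per_server_gbps} Gb/s per server")    # -> 80 Gb/s
```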
Today, after two of its largest partners had already announced the systems portfolios that will use it, Intel finally announced one of the worst-kept secrets in the industry: the Xeon E5-2600 family of processors.
OK, now that I've gotten my jab in at the absurdity of the announcement scheduling, let's look at the thing itself. In a nutshell, these new processors, built on the same 32 nm production process as the previous-generation Xeon 5600 series but incorporating the new “Sandy Bridge” architecture, are, in fact, a big deal. They incorporate several architectural innovations and will bring major improvements in power efficiency and performance to servers. Highlights include:
Performance improvements on selected benchmarks of up to 80% over the previous Xeon 5600 CPUs, apparently due to both improved CPU architecture and larger memory capacity (up to 24 DIMMs at 32 GB per DIMM equals a whopping 768 GB for a two-socket, eight-core-per-socket server; the DIMM math is sketched after this list).
Improved I/O architecture, including an on-chip PCIe 3 controller and a special mode that allows I/O controllers to write directly to the CPU cache without a round trip to memory — a feature that only a handful of I/O device developers will use, but one that contributes to improved I/O performance and lowers CPU overhead during PCIe I/O.
Significantly improved energy efficiency, with the SPECpower_ssj2008 benchmark showing a 50% improvement in performance per watt over previous models.
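For completeness, the 768 GB maximum cited above is straightforward DIMM math; the 12-DIMMs-per-socket layout is a typical two-socket board configuration rather than something spelled out in Intel's announcement.

```python
# DIMM math behind the 768 GB maximum memory for a two-socket E5-2600 server.
SOCKETS = 2
DIMMS_PER_SOCKET = 12      # typical board layout assumption, 24 DIMMs total
GB_PER_DIMM = 32

max_memory_gb = SOCKETS * DIMMS_PER_SOCKET * GB_PER_DIMM
print(f"Maximum memory: {max_memory_gb} GB")    # -> 768 GB
```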
Last week it was Dell's turn to tout its new wares, as it pulled back the curtain on its 12th-generation servers and associated infrastructure. I'm still digging through all the details, but at first glance it looks like Dell has been listening to a lot of the same customer input as HP, and as a result its messages (and very likely the value delivered) are in many ways similar. Among the highlights of Dell's messaging are:
Faster provisioning with next-gen agentless intelligent controllers — Dell's version is iDRAC7, and in conjunction with its Lifecycle Controller firmware, Dell makes many of the same claims as HP, including faster time to provision and maintain new servers, automatic firmware updates, and many fewer administrative steps, resulting in opex savings.
Intelligent storage tiering and aggressive use of flash memory, under the aegis of Dell’s “Fluid Storage” architecture, introduced last year.
A high-profile positioning for its Virtual Network architecture, building on its acquisition of Force10 Networks last year. With HP and now Dell aiming for more of the network budget in the data center, it’s not hard to understand why Cisco was so aggressive in pursuing its piece of the server opportunity — any pretense of civil coexistence in the world of enterprise networks is gone, and the only mutual interest holding the vendors together is their customers’ demand that they continue to play well together.
At its recent financial analyst day, AMD indicated that it intended to differentiate itself by creating products that were advantaged in niche markets, with specific mention, among other segments, of servers, and to generally shake up the trench warfare that has had it on the losing side of its lifelong battle with Intel (my interpretation, not AMD management's words). Today, at least for the server side of the business, AMD made a move that can potentially offer it visibility and differentiation by acquiring innovative server startup SeaMicro.
SeaMicro has attracted our attention since its appearance (blog post 1, blog post 2), with its innovative architecture that dramatically reduces power and improves density by sharing components like I/O adapters, disks, and even the BIOS over a proprietary fabric. The irony here is that SeaMicro came to market tightly aligned with Intel, which at one point even introduced a special dual-core packaging of its Atom CPU to allow SeaMicro to improve its density and power efficiency. Most recently, SeaMicro and Intel announced a new model featuring Xeon CPUs to address the more mainstream segments that were not a good fit for SeaMicro's original Atom-based offering.