Cisco’s Turn At Bat, Introduces Next Generation Of UCS

Richard Fichera

Next up in the 2012 lineup for the Intel E5 refresh cycle of its infrastructure offerings is Cisco, with its announcement last week of what it refers to as its third generation of fabric computing. Cisco announced tangible improvements to both the servers and the accompanying fabric components, available immediately, as well as commitments to additional hardware and a major enhancement of its UCS Manager software later in 2012. Highlights include:

  • New servers – No surprise here, Cisco is upgrading its servers to the new Intel CPU offerings, leading with its high-volume B200 blade server and two C-Series rack-mount servers, one a general-purpose platform and the other targeted at storage-intensive requirements. On paper, the basic components of these servers sound similar to those of competitors – new E5 CPUs, faster I/O, and more memory. In addition to the servers announced for March availability, Cisco stated that it would be delivering additional models for ultra-dense computing and mission-critical enterprise workloads later in the year.
  • Fabric improvements – Because Cisco has a relatively unique architecture, it also focused on upgrades to the UCS fabric in three areas: server, enclosure, and top-level interconnect. The servers now have an optional improved virtual NIC card with support for up to 128 VLANs per adapter and two 20 Gb ports per adapter. One is on the motherboard, and another can be plugged in as a mezzanine card, giving up to 80 Gb of bandwidth to each server. The Fabric Extender, the component that connects each enclosure to the top-level Fabric Interconnect, has seen its bandwidth doubled to a maximum of 160 Gb. The Fabric Interconnect, the top of the UCS management hierarchy and the interface to the rest of the enterprise network, has been upgraded to a maximum of 96 universal 10 Gb ports (divided between downlinks to the blade enclosures and uplinks to the enterprise fabric).
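The per-server bandwidth figure above is easy to sanity-check: it is just port count times port speed across both adapters. The sketch below is purely illustrative (the function name and structure are ours, not a Cisco API); the port counts and speeds come from the announcement:

```python
# Sanity check of the UCS per-server bandwidth figure cited above.
# Port counts and speeds come from the announcement; the helper itself
# is illustrative only, not a Cisco API.

def server_bandwidth_gb(adapters: int, ports_per_adapter: int, port_speed_gb: int) -> int:
    """Aggregate raw bandwidth (in Gb) available to one blade server."""
    return adapters * ports_per_adapter * port_speed_gb

# One VIC on the motherboard plus one mezzanine card, each with two 20 Gb ports.
print(server_bandwidth_gb(adapters=2, ports_per_adapter=2, port_speed_gb=20))  # 80
```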
Read more

Intel (Finally) Announces Its Latest Server Processors — Better, Faster, Cooler

Richard Fichera

Today, after two of its largest partners have already announced their systems portfolios that will use it, Intel finally announced one of the worst-kept secrets in the industry: the Xeon E5-2600 family of processors.

OK, now that I’ve got in my jab at the absurdity of the announcement scheduling, let’s look at the thing itself. In a nutshell, these new processors, based on the previous-generation 32 nm production process of the Xeon 5600 series but incorporating the new “Sandy Bridge” architecture, are, in fact, a big deal. They incorporate several architectural innovations and will bring major improvements in power efficiency and performance to servers. Highlights include:

  • Performance improvements on selected benchmarks of up to 80% above the previous Xeon 5600 CPUs, apparently due to both improved CPU architecture and larger memory capacity (up to 24 DIMMs at 32 GB per DIMM equals a whopping 768 GB capacity for a two-socket, eight-core/socket server).
  • Improved I/O architecture, including an on-chip PCIe 3 controller and a special mode that allows I/O controllers to write directly to the CPU cache without a round trip to memory — a feature that only a handful of I/O device developers will use, but one that contributes to improved I/O performance and lowers CPU overhead during PCIe I/O.
  • Significantly improved energy efficiency, with the SPECpower_ssj2008 benchmark showing a 50% improvement in performance per watt over previous models.
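The capacity and efficiency claims in the bullets above reduce to simple arithmetic. A quick sketch, using our own helper names and only the round numbers quoted in the bullets:

```python
# Checking the headline numbers from the E5-2600 bullets above.

def max_memory_gb(dimm_slots: int, dimm_size_gb: int) -> int:
    """Maximum memory for a server given slot count and largest DIMM size."""
    return dimm_slots * dimm_size_gb

def perf_per_watt_gain(new_ratio: float, old_ratio: float) -> float:
    """Fractional improvement in performance per watt between two results."""
    return new_ratio / old_ratio - 1.0

print(max_memory_gb(24, 32))         # 768 GB for the two-socket configuration
print(perf_per_watt_gain(1.5, 1.0))  # 0.5, i.e. the quoted 50% improvement
```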
Read more

Dell’s Turn For Infrastructure Announcements — Common Theme Emerging For 2012?

Richard Fichera

Last week it was Dell’s turn to tout its new wares, as it pulled back the curtain on its 12th-generation servers and associated infrastructure. I’m still digging through all the details, but at first glance it looks like Dell has been listening to a lot of the same customer input as HP, and as a result their messages (and very likely the value delivered) are in many ways similar. Among the highlights of Dell’s messaging are:

  • Faster provisioning with next-gen agentless intelligent controllers — Dell’s version is iDRAC7, and in conjunction with its Lifecycle Controller firmware, Dell makes many of the same claims as HP, including faster time to provision and maintain new servers, automatic firmware updates, and many fewer administrative steps, resulting in opex savings.
  • Intelligent storage tiering and aggressive use of flash memory, under the aegis of Dell’s “Fluid Storage” architecture, introduced last year.
  • A high-profile positioning for its Virtual Network architecture, building on its acquisition of Force10 Networks last year. With HP and now Dell aiming for more of the network budget in the data center, it’s not hard to understand why Cisco was so aggressive in pursuing its piece of the server opportunity — any pretense of civil coexistence in the world of enterprise networks is gone, and the only mutual interest holding the vendors together is their customers’ demand that they continue to play well together.
Read more

How Telcos Will Play A Larger Role In Cloud Computing

Dan Bieler

Corporate CIOs should not ignore the network-centric nature of cloud-based solutions when developing their cloud strategies and choosing their cloud providers. And end users should understand what role(s) telcos are likely to play in the evolution of the wider cloud marketplace.

Like many IT suppliers, telcos view cloud computing as a big opportunity to grow their business. Cloud computing will dramatically affect telcos — but not by generating significant additional revenues. Instead, cloud computing will alter the role of telcos in the value chain irreversibly, putting their control over usage metering and billing at risk. Alarm bells should ring for telcos as Google, Amazon, et al. put their own billing and payment relationships with customers in place.

Telcos must defend their revenue collection role at all costs; failure to do so will accelerate their decline to invisible utility status. At the same time, cloud computing offers telcos a chance to become more than bitpipe providers. Cloud solutions will increasingly be delivered by ecosystems of providers that include telcos; software, hardware, and network equipment vendors; and OTT providers.

Telcos have a chance to leverage their network and financial assets to grow into the role of ecosystem manager. To start down this path, telcos will provide cloud-based solutions adjacent to communication services they already provide, such as home area networking and machine-to-machine offerings like connected healthcare and smart grid solutions. Expanding from this beachhead into a broader role in cloud solutions markets is a tricky path that only some telcos will navigate successfully.

We are analyzing the potential role of telcos in cloud computing markets in the research report Telcos as Cloud Rainmakers.

AMD Acquires SeaMicro — Big Bet On Architectural Shift For Servers

Richard Fichera

At its recent financial analyst day, AMD indicated that it intended to differentiate itself by creating products with advantages in niche markets, with specific mention of servers among other segments, and to generally shake up the trench warfare that has left it on the losing side of its lifelong battle with Intel (my interpretation, not AMD management’s words). Today, at least for the server side of the business, AMD made a move that can potentially offer it visibility and differentiation by acquiring innovative server startup SeaMicro.

SeaMicro has attracted our attention since its appearance (blog post 1, blog post 2) with its innovative architecture, which dramatically reduces power and improves density by sharing components like I/O adapters, disks, and even BIOS over a proprietary fabric. The irony here is that SeaMicro came to market in tight alignment with Intel, which at one point even introduced a special dual-core packaging of its Atom CPU to allow SeaMicro to improve its density and power efficiency. Most recently, SeaMicro and Intel announced a new model featuring Xeon CPUs to address more mainstream segments that were not a good fit for SeaMicro’s original Atom-based offering.

Read more

HP Announces Gen8 Servers – Focus On Opex And Improving SLAs Sets A High Bar For Competitors

Richard Fichera

On Monday, February 13, HP took its next turn of the great wheel for servers with the announcement of its Gen8 family of servers. Interestingly, since the announcement was ahead of Intel’s official announcement of the supporting E5 server CPUs, HP had absolutely nothing to say about the CPUs or performance of these systems. But even if the CPU information had been available, it would have been a sideshow to the main thrust of the Gen8 launch — improving the overall TCO (particularly Opex) of servers by making them more automated, more manageable, and easier to remediate when there is a problem, along with enhancements to storage, data center infrastructure management (DCIM) capabilities, and a fundamental change in the way that services and support are delivered.

With a little more granularity, the major components of the Gen8 server technology announcement included:

  • Onboard Automation – A suite of capabilities and tools that provide improved agentless local intelligence to allow quicker, lower-labor-cost provisioning, including faster boot cycles, “one click” firmware updates of single or multiple systems, intelligent and greatly improved boot-time diagnostics, and run-time diagnostics. This is apparently implemented by more powerful onboard management controllers and by pre-provisioning a lot of software on built-in flash memory, which is used by the onboard controller. HP claims that the combination of these tools can increase operator productivity by up to 65%. One of the eye-catching features is an iPhone app that will scan a code printed on the server, go back through the Insight Management Environment stack, and trigger the appropriate script to provision the server. Possibly a bit of a gimmick, but a cool-looking one.
Read more

Pushing The Envelope - SeaMicro Introduces Low-Power Xeon Servers

Richard Fichera

In late 2010 I noted that startup SeaMicro had introduced an ultra-dense server using Intel Atom chips in an innovative fabric-based architecture that allowed them to factor out much of the power overhead from a large multi-CPU server (http://blogs.forrester.com/richard_fichera/10-09-21-little_servers_big_applications_intel_developer_forum). Along with many observers, I noted that the original SeaMicro server was well-suited to many lightweight edge processing tasks, but that the system would not support more traditional compute-intensive tasks due to the performance of the Atom core. I was, however, quite taken with the basic architecture, which uses a proprietary high-speed (1.28 Tb/s) 3D mesh interconnect to allow the CPU cores to share network, BIOS, and disk resources that are normally replicated on a per-server basis in conventional designs, with commensurate reductions in power and an increase in density.
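To see why sharing components over a fabric cuts power, consider a toy model: in a conventional design every node carries its own NIC, disk controller, and related overhead, while a shared fabric amortizes that overhead across all nodes. All wattage figures below are invented for illustration only, not SeaMicro measurements:

```python
# Toy model of system power with and without component sharing.
# Every number here is hypothetical, chosen only to show the shape of the
# savings, not an actual SeaMicro figure.

def total_power_w(nodes: int, cpu_w: float, io_w: float, shared_fabric: bool) -> float:
    """Total power for a system of `nodes` servers.

    With a shared fabric, I/O power is paid once for the whole system;
    otherwise every node replicates its own I/O components.
    """
    if shared_fabric:
        return nodes * cpu_w + io_w
    return nodes * (cpu_w + io_w)

conventional = total_power_w(64, cpu_w=10.0, io_w=25.0, shared_fabric=False)
shared = total_power_w(64, cpu_w=10.0, io_w=25.0, shared_fabric=True)
print(conventional, shared)  # 2240.0 vs 665.0 in this made-up example
```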

Read more

2011 Retrospective – The Best And The Worst Of The Technology World

Richard Fichera

OK, it’s time to stretch the 2012 writing muscles, and what better way to do it than with the time-honored “retrospective” format. But rather than try to itemize all the news and come up with a list of maybe a dozen or more interesting things, I decided instead to pick the best and the worst – events and developments that show the amazing range of the technology business, its potentials and its daily frustrations. So, drum roll, please. My personal nominations for the best and worst of the year (along with a special extra bonus category) are:

The Best – IBM Watson stomps the world’s best human players in Jeopardy. In early 2011, IBM put its latest deep computing project, Watson, up against some of the best players in the world in a game of Jeopardy. Watson, consisting of hundreds of IBM Power CPUs, gazillions of bytes of memory and storage, and arguably the most sophisticated rules engine and natural language recognition capability ever developed, won hands down. If you haven’t seen the videos of this event, you should – seeing the IBM system fluidly answer very tricky questions is amazing. There is no sense that it is parsing the question and then sorting through 200 to 300 million pages of data per second in the background as it assembles its answers. This is truly the computer industry at its best. IBM lived up to its brand image as the oldest and strongest technology company and showed us the potential for integrating computers into entirely new classes of solutions. Since the Jeopardy event, IBM has been working on commercializing Watson with an eye toward delivering domain-specific expert advisors. I recently listened to a presentation by a doctor participating in the trials of a Watson medical assistant, and the results were startling in terms of the potential to assist medical professionals in diagnostic procedures.

Read more

HP Expands Its x86 Options With Mission-Critical Program – Defense And Offense Combined

Richard Fichera

Today HP announced a new set of technology programs and future products designed to move x86 server technology for both Windows and Linux more fully into the realm of truly mission-critical computing. My interpretation of these moves is that they are a combined defensive and proactive offensive action on HP’s part, one that will both protect HP as its Itanium/HP-UX portfolio slowly declines and offer attractive and potentially unique options for current and future customers who want to deploy increasingly critical services on x86 platforms.

What’s Coming?

Bearing in mind that the earliest of these elements will not be in place until approximately mid-2012, the key elements that HP is currently disclosing are:

ServiceGuard for Linux – This is a big win for Linux users on HP, and it removes a major operational and architectural hurdle for HP-UX migrations. ServiceGuard is a highly regarded clustering and HA facility on HP-UX that includes many features for local and geographically distributed HA, and its absence is often cited as a risk in HP-UX migrations. The availability of ServiceGuard on Linux by mid-2012 will remove yet another barrier to smooth migration from HP-UX and will help ensure that HP retains the business as it migrates.

Analysis engine for x86 – Analysis engine is internal software that provides system diagnostics, predictive failure analysis and self-repair on HP-UX systems. With an uncommitted delivery date, HP will port this to selected x86 servers. My guess is that since the analysis engine probably requires some level of hardware assist, the analysis engine will be paired with the next item on the list…

Read more

A Letter To Meg Whitman From The Market

Glenn O'Donnell

Dear Meg,

Now that you’ve settled into your latest position as the head of Hewlett-Packard, we wish to make a request of you. That request is, “Please take HP back to the greatness it once represented.” The culture once known as “The HP Way” has gone astray and the people have suffered as a result. Those people are of course the vast collection of incredible HP employees, but also its even vaster collection of customers. They (ahem, we) once believed in the venerable enterprise that Bill Hewlett and David Packard conceived and built through the latter half of the 20th century.

HP became renowned for its innovation and the quality of its products. While they tended to be pricey, we bought HP products because we knew they would perform well and perform long. We could count on HP to not only sell us technology, but to guide us in our journey to use this technology for the betterment of our own lives. We yearn for the old HP that inspired Steve Jobs to change the world – and he did!

We need not remind you of what transpired over the past decade or so, but we do have some suggestions for what you should address to restore the luster of HP’s golden age:

  • Commit to a mission. HP needs an audacious mission that articulates a purpose for every employee, from you and the HP board all the way down to the lowest levels. Borrow a page from IBM’s Smarter Planet mission. While it sometimes seems over the top, that’s the whole point. It is over the top and speaks to a bold mission to create a new world. Slowly but surely, IBM is making the planet smarter. Steve Jobs got Apple to convince us to Think Different, and we did. What is HP’s mission?
Read more