Cisco’s Turn At Bat, Introduces Next Generation Of UCS

Richard Fichera

Next up in the 2012 lineup of Intel E5 infrastructure refreshes is Cisco, with its announcement last week of what it refers to as its third generation of fabric computing. Cisco announced tangible improvements to both the servers and the accompanying fabric components, available immediately, as well as commitments for additional hardware and a major enhancement of its UCS Manager software later in 2012. Highlights include:

  • New servers – No surprise here: Cisco is upgrading its servers to the new Intel CPU offerings, leading with its high-volume B200 blade server and two C-Series rack-mount servers, one a general-purpose platform and the other targeted at storage-intensive requirements. On paper, the basic components of these servers sound similar to competitors’ – new E5 CPUs, faster I/O, and more memory. In addition to the servers announced for March availability, Cisco stated that it would deliver additional models for ultra-dense computing and mission-critical enterprise workloads later in the year.
  • Fabric improvements – Because Cisco’s architecture is distinctive, it also focused on upgrades to the UCS fabric at three levels: server, enclosure, and top-level interconnect. The servers now have an optional improved virtual NIC card with support for up to 128 VLANs per adapter and two 20 Gb ports per adapter. One adapter sits on the motherboard and another can be plugged in as a mezzanine card, giving up to 80 Gb of bandwidth to each server. The Fabric Extender, the component that connects each enclosure to the top-level Fabric Interconnect, has seen its bandwidth doubled to a maximum of 160 Gb. The Fabric Interconnect itself, the top of the UCS management hierarchy and the interface to the rest of the enterprise network, has been upgraded to a maximum of 96 universal 10 Gb ports (divided between downlinks to the blade enclosures and uplinks to the enterprise fabric).
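To keep the numbers straight, here is a quick back-of-the-envelope tally in Python; it is simple arithmetic on the figures quoted above, not a statement about how the fabric actually schedules bandwidth:

```python
# Tally of the per-server bandwidth figures quoted above.
PORT_GBPS = 20            # each port on the new virtual NIC adapter
PORTS_PER_ADAPTER = 2     # two 20 Gb ports per adapter
ADAPTERS_PER_BLADE = 2    # one on the motherboard plus one mezzanine card

per_blade = PORT_GBPS * PORTS_PER_ADAPTER * ADAPTERS_PER_BLADE
print(f"Per-blade bandwidth: {per_blade} Gb/s")       # 80 Gb/s

# Enclosure level: the Fabric Extender's doubled 160 Gb is the
# equivalent of sixteen 10 Gb links per chassis.
FEX_GBPS = 160
print(f"10 Gb links per chassis: {FEX_GBPS // 10}")   # 16
```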
Read more

Windows 8: Think You Can Skip It? Think Again.

David Johnson

My colleague Benjamin Gray and I have been looking closely at Windows 8 for the past several months to make sure we have a clear understanding of what it means for I&O organizations, leaders, and professionals. We have been briefed in depth by Microsoft executives, program managers, and engineers. We have downloaded, installed, and used the Windows 8 Consumer Preview, and we have had hundreds of conversations with I&O professionals in the past year on Windows 7 (and now Windows 8) adoption — from those looking for guidance, as well as those with strong opinions already formed. As you might expect, we have formed some opinions of our own.

For those who haven't talked with Ben Gray, he is a fantastic authority on Windows adoption trends with complete mastery of the data. He has closely watched Windows Vista, Windows 7, and now Windows 8 go through the cycles of preparation, migration, adoption, and operation. Ben was the first at Forrester to point out that Windows 8 is an "off-cycle release," coming too soon on the heels of Windows 7 for companies to be ready to adopt it. He and I authored a document on Windows adoption trends for 2012, which will be published shortly and provides additional data and context. Ben has also dissected the Forrsights Workforce Employee survey data in dozens of ways, and he delivers a fantastic presentation for Forrester customers on what he's learned.

Read more

Intel (Finally) Announces Its Latest Server Processors — Better, Faster, Cooler

Richard Fichera

Today, after two of its largest partners have already announced their systems portfolios that will use it, Intel finally announced one of the worst-kept secrets in the industry: the Xeon E5-2600 family of processors.

OK, now that I’ve gotten in my jab at the absurdity of the announcement scheduling, let’s look at the thing itself. In a nutshell, these new processors, based on the same 32 nm production process as the previous-generation Xeon 5600 series but incorporating the new “Sandy Bridge” architecture, are, in fact, a big deal. They incorporate several architectural innovations and will bring major improvements in power efficiency and performance to servers. Highlights include:

  • Performance improvements on selected benchmarks of up to 80% over the previous Xeon 5600 CPUs, apparently due both to the improved CPU architecture and to larger memory capacity (up to 24 DIMMs at 32 GB per DIMM, a whopping 768 GB for a two-socket, eight-core/socket server; see the quick arithmetic after this list).
  • Improved I/O architecture, including an on-chip PCIe 3 controller and a special mode that allows I/O controllers to write directly to the CPU cache without a round trip to memory — a feature that only a handful of I/O device developers will use, but one that contributes to improved I/O performance and lowers CPU overhead during PCIe I/O.
  • Significantly improved energy efficiency, with the SPECpower_ssj2008 benchmark showing a 50% improvement in performance per watt over previous models.
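The capacity and efficiency claims are easy to sanity-check; here is a quick worked example in Python using only the figures quoted above (my arithmetic, not Intel’s):

```python
# Sanity check of the E5-2600 capacity and efficiency claims above.

# Memory: 24 DIMM slots at 32 GB per DIMM in a two-socket server.
dimm_slots, gb_per_dimm = 24, 32
capacity_gb = dimm_slots * gb_per_dimm
print(f"Maximum memory: {capacity_gb} GB")   # 768 GB

# Per-core share in a two-socket, eight-core/socket configuration.
sockets, cores_per_socket = 2, 8
print(f"GB per core: {capacity_gb // (sockets * cores_per_socket)}")  # 48

# A 50% gain in performance per watt means the same work consumes
# roughly two-thirds of the energy (1 / 1.5), all else being equal.
print(f"Relative energy per unit of work: {1 / 1.5:.2f}")  # 0.67
```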
Read more

Dell’s Turn For Infrastructure Announcements — Common Theme Emerging For 2012?

Richard Fichera

Last week it was Dell’s turn to tout its new wares, as it pulled back the curtain on its 12th-generation servers and associated infrastructure. I’m still digging through all the details, but at first glance it looks like Dell has been listening to a lot of the same customer input as HP, and as a result the two vendors’ messages (and very likely the value delivered) are in many ways similar. Among the highlights of Dell’s messaging are:

  • Faster provisioning with next-gen agentless intelligent controllers — Dell’s version is iDRAC7, and in conjunction with its Lifecycle Controller firmware, Dell makes many of the same claims as HP, including faster time to provision and maintain new servers, automatic firmware updates, and many fewer administrative steps, resulting in opex savings.
  • Intelligent storage tiering and aggressive use of flash memory, under the aegis of Dell’s “Fluid Storage” architecture, introduced last year.
  • A high-profile positioning for its Virtual Network architecture, building on its acquisition of Force10 Networks last year. With HP and now Dell aiming for more of the network budget in the data center, it’s not hard to understand why Cisco was so aggressive in pursuing its piece of the server opportunity — any pretense of civil coexistence in the world of enterprise networks is gone, and the only mutual interest holding the vendors together is their customers’ demand that they continue to play well together.
Read more

AMD Acquires SeaMicro — Big Bet On Architectural Shift For Servers

Richard Fichera

At its recent financial analyst day, AMD indicated that it intended to differentiate itself by creating products with advantages in niche markets, with specific mention of servers among other segments, and to generally shake up the trench warfare that has had it on the losing side of its lifelong battle with Intel (my interpretation, not AMD management’s words). Today, at least for the server side of the business, AMD made a move that can potentially offer it visibility and differentiation by acquiring innovative server startup SeaMicro.

SeaMicro has attracted our attention since its appearance (blog post 1, blog post 2), with an innovative architecture that dramatically reduces power and improves density by sharing components such as I/O adapters, disks, and even BIOS over a proprietary fabric. The irony here is that SeaMicro came to market in tight alignment with Intel, which at one point even introduced a special dual-core packaging of its Atom CPU to allow SeaMicro to improve its density and power efficiency. Most recently, SeaMicro and Intel announced a new model featuring Xeon CPUs to address the more mainstream segments that SeaMicro’s original Atom-based offering could not serve.

Read more

HP Announces Gen8 Servers – Focus On Opex And Improving SLAs Sets A High Bar For Competitors

Richard Fichera

On Monday, February 13, HP took its next turn of the great wheel for servers with the announcement of its Gen8 family. Interestingly, since the announcement came ahead of Intel’s official announcement of the supporting E5 server CPUs, HP had absolutely nothing to say about the CPUs or the performance of these systems. But even if the CPU information had been available, it would have been a sideshow to the main thrust of the Gen8 launch — improving the overall TCO (particularly opex) of servers by making them more automated, more manageable, and easier to remediate when there is a problem, along with enhancements to storage, data center infrastructure management (DCIM) capabilities, and a fundamental change in the way that services and support are delivered.

With a little more granularity, the major components of the Gen8 server technology announcement included:

  • Onboard Automation – A suite of capabilities and tools that provide improved agentless local intelligence to allow quicker, lower-labor-cost provisioning, including faster boot cycles, “one click” firmware updates of single or multiple systems, intelligent and greatly improved boot-time diagnostics, and run-time diagnostics. This is apparently implemented with more powerful onboard management controllers and by pre-provisioning a lot of software on built-in flash memory, which is used by the onboard controller. HP claims that the combination of these tools can increase operator productivity by up to 65%. One of the eye-catching features is an iPhone app that can scan a code printed on the server, go back through the Insight Management Environment stack, and trigger the appropriate script to provision the server. Possibly a bit of a gimmick, but a cool-looking one.
Read more

Suddenly, Dell Is A Software Company!

Glenn O'Donnell

The Dell brand is one of the most recognizable in technology. It was born a hardware company in 1984 and deservedly rocketed to fame, but it has always been about the hardware. In 2009, its big Perot Systems acquisition marked the first real departure from this hardware heritage. While it made numerous software acquisitions, including some good ones like Scalent, Boomi, and KACE, it remains a marginal player in software. That is about to change.

Read more

Verne Global And Colt Technology Show A Zero Carbon Data Center – It’s Real, Running, And Impressive In Iceland

Richard Fichera

Data centers, like any other aspect of real estate, follow the age-old adage of “location, location, location.” If you want to build one that is really efficient in terms of energy consumption, as well as possessing all the basics of reliability, you have to be really picky about ambient temperatures, power availability, and, if your business is hosting for others rather than just needing one data center for yourself, room for expansion. If you want to achieve a seeming impossibility – a zero carbon footprint to satisfy increasingly draconian regulatory pressures – you need to be even pickier. In the end, what you need is:

  • Low ambient temperature to reduce your power requirements for cooling.
  • Someplace where you can get cheap “green” energy, and lots of it.
  • A location with adequate network connectivity, both in terms of latency as well as bandwidth, for global business.
  • A cooperative regulatory environment in a politically stable venue.
Read more

Pushing The Envelope – SeaMicro Introduces Low-Power Xeon Servers

Richard Fichera

In late 2010 I noted that startup SeaMicro had introduced an ultra-dense server using Intel Atom chips in an innovative fabric-based architecture that allowed it to factor out much of the power overhead of a large multi-CPU server (http://blogs.forrester.com/richard_fichera/10-09-21-little_servers_big_applications_intel_developer_forum). Along with many observers, I noted that the original SeaMicro server was well suited to many lightweight edge-processing tasks but that the system would not support more traditional compute-intensive tasks due to the performance of the Atom core. I was, however, quite taken with the basic architecture, which uses a proprietary high-speed (1.28 Tb/s) 3D mesh interconnect to allow the CPU cores to share network, BIOS, and disk resources that are normally replicated on a per-server basis in conventional designs, with commensurate reductions in power and an increase in density.
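To see why factoring out replicated components matters, here is a minimal sketch of the underlying arithmetic; every wattage in it is a hypothetical placeholder chosen only to show the shape of the saving, not a SeaMicro or Intel specification:

```python
# Illustrative-only model of sharing I/O, disk, and BIOS/management
# over a fabric. All wattages are hypothetical placeholders.

NODES = 64                 # hypothetical node count
CPU_W = 10                 # hypothetical CPU draw per node
PER_NODE_OVERHEAD_W = 15   # hypothetical NIC + disk + management per server
SHARED_OVERHEAD_W = 200    # hypothetical pooled I/O for the whole fabric

conventional = NODES * (CPU_W + PER_NODE_OVERHEAD_W)   # overhead replicated
fabric_shared = NODES * CPU_W + SHARED_OVERHEAD_W      # overhead factored out

print(f"Conventional design: {conventional} W")        # 1600 W
print(f"Fabric-shared design: {fabric_shared} W")      # 840 W
print(f"Power saved: {1 - fabric_shared / conventional:.0%}")  # 48%
```

The point is structural rather than numerical: the replicated overhead term grows with node count, while the shared term does not, which is where the power and density gains come from.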

Read more

BMC Acquires Numara Software In A Mid-Market Makeover: What It Means For Customers

David Johnson

BMC has a golden opportunity to take a different track with Numara than it has with past mid-market acquisitions (see Magic Solutions), and it must do so if it hopes to build on this one and drive new revenue for the long haul. Numara enjoys a massive installed base of customers with its Track-It! and FootPrints product lines in the small and mid-market. It has been hard at work rounding out its portfolio to include client management (software management, systems management, and OS management), among other areas. Numara has been on a journey to reinvent itself and has been succeeding. Further, we believe that Numara’s culture and BMC’s will align well, as long as Numara is given the autonomy and investment it needs to grow its portfolio and momentum in the field.

BMC Will Need Time To Work
Numara customers should expect relatively little change in daily operations for the first few months as BMC aligns the organizations. If history is a reliable guide, BMC will give a larger acquisition such as this the opportunity to remain mostly intact, injecting key people and processes to help align the acquired organization with BMC’s culture and ways of doing business. If this holds true for Numara, customers should see it as a positive step.

Get Clear Direction From BMC In Areas Of Overlap

Read more