IBM is First Mover with Disruptive Flash Memory Technology on New x6 Servers

Richard Fichera

This week, IBM announced its new line of x86 servers, and included among the usual incremental product improvements is a performance game-changer called eXFlash. eXFlash is the first commercially available implementation of the MCS architecture announced last year by Diablo Technologies. The MCS architecture, and IBM’s eXFlash offering in particular, allows flash memory to be embedded in the system as close to the CPU as main memory, with latencies substantially lower than any other available flash option, offering better performance at a lower solution cost than other embedded flash solutions. Key aspects of the announcement include:

■  Flash DIMMs offer scalable high performance. Write latency (a critical metric) for IBM eXFlash will be in the 5 to 10 microsecond range, whereas best-of-breed competing mezzanine card and PCIe flash can only offer 15 to 20 microseconds (and external flash storage is slower still). Additionally, since the DIMMs are directly attached to the memory controller, flash I/O does not compete with other I/O on the system I/O hub and PCIe subsystem, improving overall system performance for heavily loaded systems. Additional benefits include linear performance scalability as the number of DIMMs increases, and optional built-in hardware mirroring of DIMM pairs.

■  eXFlash DIMMs are compatible with current software. Part of the magic of MCS flash is that it appears to the OS as a standard block-mode device, so all existing block-mode software will work, including applications, caching and tiering software, and general storage management software (the sketch below illustrates the point). For IBM users, compatibility with IBM’s storage management and FlashCache Storage Accelerator solutions is guaranteed. Other vendors should face little to no effort in qualifying their solutions.
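
Because an eXFlash DIMM shows up as just another block device, standard block I/O code exercises it unchanged. As a purely illustrative sketch (not IBM’s tooling), here is one way to measure synchronous write latency on Linux in Python; "/dev/sdX" is a placeholder for a scratch device, and running this overwrites whatever is on it:

    import os
    import mmap
    import time

    DEV = "/dev/sdX"   # placeholder: point at a *scratch* block device only
    BLOCK = 4096       # one logical block
    SAMPLES = 1000

    # O_DIRECT requires a page-aligned buffer; an anonymous mmap is page-aligned
    buf = mmap.mmap(-1, BLOCK)
    buf.write(b"\xab" * BLOCK)

    # O_DIRECT | O_SYNC bypasses the page cache, so we time the device itself
    fd = os.open(DEV, os.O_WRONLY | os.O_DIRECT | os.O_SYNC)
    latencies_us = []
    for i in range(SAMPLES):
        t0 = time.perf_counter()
        os.pwrite(fd, buf, i * BLOCK)  # synchronous write to block i
        latencies_us.append((time.perf_counter() - t0) * 1e6)
    os.close(fd)

    latencies_us.sort()
    print(f"median write latency: {latencies_us[SAMPLES // 2]:.1f} us")

The same script, unmodified, would run against a SATA SSD, a PCIe flash card, or an MCS flash DIMM, which is exactly the compatibility point above.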

Read more

Intel Lays Out Future Data Center Strategy - Serious Focus on Emerging Opportunities

Richard Fichera

Yesterday, Intel held a major press and analyst event in San Francisco to talk about its vision for the future of the data center, anchored on what has become, in many eyes, the virtuous cycle of future infrastructure demand: mobile devices and “the Internet of Things” drive cloud resource consumption, which in turn spews out big data, which in turn spawns storage and the requirement for yet more computing to analyze it. As usual with these kinds of events from Intel, it was long on serious vision and strong on strategic positioning, but a bit parsimonious with actual future product information, with a couple of interesting exceptions.

Content and Core Topics:

No major surprises on the underlying demand-side drivers. The proliferation of mobile devices, the impending Internet of Things, and the mountains of big data they generate will combine to keep increasing demand for cloud-resident infrastructure, particularly servers and storage, both of which present Intel with an opportunity to sell semiconductors. Needless to say, Intel laced its presentations with frequent reminders about who is the king of semiconductor manufacturing. :)

Read more

Dell Grabs Enstratius in Cloud Management Land Grab

Dave Bartoletti

Dell just picked up Enstratius today for an undisclosed amount, making it the latest well-known cloud management vendor to get snapped up by a big infrastructure or OS vendor. Dell will add Enstratius’ cloud management capabilities to its existing management suite for converged and cloudy infrastructure, which includes element manager and configuration automator Active System Manager (ASM, the renamed assets acquired with Gale Technologies in November), Quest Foglight performance monitoring, and (maybe) what’s still around from Scalent and DynamicOps.

This is a good move for Dell, but it doesn’t exactly clarify where all these management capabilities will shake out. The current ASM product seems to be a combo of code from the original Scalent acquisition upgraded with the GaleForce product; regardless of what’s in it, though, what it does is discover, configure, and deploy physical and virtual converged infrastructure components. A private cloud automation platform, basically. Like all private cloud management stacks, it does rapid template-based provisioning and workflow orchestration. But it doesn’t provision apps, or provision to public or open-source cloud stacks. That’s where Enstratius comes in.

Read more

Is IBM Selling Its Server Business To Lenovo?

Richard Fichera

The industry is abuzz with speculation that IBM will sell its x86 server business to Lenovo. As usual, neither party is talking publicly, but at this point I’d give it a better than even chance, since these kinds of rumors usually tend to be based on leaks of real discussions rather than on completely delusional fantasies. Usually.

So the obvious question then becomes “Huh?”, or, slightly more eloquently stated, “Why would they do something like that?”. Aside from the possibility that this might all be fantasy, two explanations come to mind:

1. IBM is crazy.

2. IBM is not crazy.

Of the two explanations, I’ll have to lean toward the latter, although we might be dealing with a bit of the “Hey, I’m the new CEO and I’m going to do something really dramatic today” syndrome. IBM sold its PC business to Lenovo amid popular disbelief and dire predictions, and it's doing very well today because it transferred its investments and focus to higher-margin businesses, like servers and services. Lenovo today makes low-end servers that it bootstrapped with IBM-licensed technology, and IBM is finding it very hard to compete with Lenovo and other low-cost providers. Maybe the margins on its commodity server business have sunk below some critical internal benchmark for return on investment, and IBM believes it can get a better return on its money elsewhere.

Read more

The Tablet Market Is Fragmenting Into Subcategories

JP Gownder

In recent research, I have laid out some similarities and differences between tablets and laptops. But the tablet market is growing ever more fragmented, yielding subtleties that aren’t always captured by a simple “PC vs. tablet” dichotomy. As Infrastructure & Operations (I&O) professionals try to determine the composition of their hardware portfolios, the product offerings themselves are growing more protean. Just describing the “tablet” space is much harder than it used to be. Today, we’re looking at multiple OSes (iOS, Android, Windows, BlackBerry, forked Android), form factors (eReader, tablet, hybrid, convertible, touchscreen laptop), and screen sizes (from 5” phablets to giant 27” furniture tablets), not to mention a variety of brands, price points, and applications. If, as rumored, Microsoft were to enter the 7” to 8” space, competing with the Google Nexus, Apple iPad Mini, and Kindle Fire HD, we would see even more permutations. Enterprise-specific devices, some tailored to particular verticals, are proliferating alongside increased BYO choices for workers.

Read more

HP Shows its Next Generation Blade and Converged Infrastructure – No Revolution, but Strong Evolution

Richard Fichera

With the next major spin of Intel server CPUs due later this year, HP’s customers have been waiting for the next iteration of its core c-Class BladeSystem, which has been on the market for almost seven years without any major changes to its basic architecture. IBM has already made a major change to its BladeCenter architecture, replacing it with the new Pure Systems, and Cisco’s offering is new enough that it should last at least another three years without a major architectural refresh. That left HP customers wondering when HP would introduce its next blade enclosure, and whether it would be compatible with current products.

At its partner conference this week, HP announced a range of enhancements to its blade product line that in combination represent a strong evolution of the current product while maintaining compatibility with current investments. This positioning is similar to what IBM did with its BladeCenter to BladeCenter-H upgrade, preserving current customer investment and extending the life of the current server and peripheral modules for several more years.

Tech Stuff – What Was Announced

Among the goodies announced on February 19 was an assortment of performance and functionality enhancements, including:

  • Platinum enclosure — The centerpiece of the announcement was the new c7000 Platinum enclosure, which boosts the speed of the midplane signal paths from 10 GHz to 14 GHz, for an increase of 40% in raw bandwidth of the critical midplane across which all of the enclosure I/O travels (the quick check below confirms the arithmetic). In addition to the higher-bandwidth midplane, the new enclosure incorporates location-aware sensors and also doubles the available storage bandwidth.
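
As a quick Python sanity check on that 40% figure, using only the clock rates from the announcement:

    # midplane signal rate went from 10 GHz to 14 GHz per the announcement
    old_ghz, new_ghz = 10.0, 14.0
    print(f"raw bandwidth increase: {(new_ghz - old_ghz) / old_ghz:.0%}")  # prints 40%
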
Read more

IBM Embraces Emerson for DCIM – Major Change in DCIM Market Dynamics

Richard Fichera

Emerson Network Power today announced that it is entering into a significant partnership with IBM, both to integrate Emerson’s new Trellis DCIM suite into IBM’s ITSM products and to jointly sell Trellis to IBM customers. This partnership has the potential to reshape the DCIM market segment for several reasons:

  • Connection to enterprise IT — Emerson has sold a lot of chillers, UPS and PDU equipment and has tremendous cachet with facilities types, but they don’t have a lot of people who know how to talk IT. IBM has these people in spades.
  • IBM can use a DCIM offering — IBM, despite being a huge player in the IT infrastructure and data center space, does not have a DCIM product. Its Maximo product seems to be more of a dressed-up asset management product, and this partnership is an acknowledgment that building a full-fledged DCIM product would have been both expensive and time-consuming.
  • IBM adds sales bandwidth — My belief is that the development of the DCIM market has been constrained by delivery bandwidth. Market leaders Nlyte, Emerson, and Schneider do not have enough people to address the emerging total demand, and the host of smaller players is even further behind. IBM has the potential to massively multiply Emerson’s ability to deliver to the market.
Read more

Why Dell Going Private Is Less Risk For Customers Than Their Current Path

David Johnson

To publish this post, I must first discredit myself. I'm 42, and while I love what I do for a living, Michael Dell is 47 and his company was already doing $1 million a day in business by the time he was 31. I look at guys like that and think: "What the h*** have I been doing with my time?!?" Nevertheless, Dell is a company I've followed more closely than any other but Apple since the mid-2000s, and in the past two years I've had the opportunity to meet with several Dell executives and employees - from Montpellier, France to Austin, Texas.

Because I cover both PC hardware and client virtualization here at Forrester, I'm in regular contact with Dell customers, who inevitably ask what we as a firm think about Dell's latest announcement to go private, just as they have asked about HP these past several quarters since the circus started over there with Mr. Apotheker. Hopefully what follows here is information and analysis that you as an I&O leader can rely on to develop your own perspective on Dell with more clarity.

Complexity is Dell's enemy
The complexity of Dell as an organization right now is enormous. They have been on a "Quest" to reinvent themselves and go from PC and server vendor to end-to-end solutions vendor, with the hope that their chief differentiator could be unique software that drives more repeatable solutions delivery and, in turn, lower solutions cost. I use the word 'hope' deliberately, because doing that means focusing most of their efforts on a handful of solutions that no other vendor could provide. It's a massive undertaking, because as a public company they have to do this while keeping cash flow going in the lines of business from each acquisition, and growing those lines, while they develop the focused solutions. So far, they haven't.
Read more

2013 Server Virtualization Predictions: Driving Value Above And Beyond The Hypervisor

Dave Bartoletti

Now that we’ve been back from the holidays for a month, I’d like to round out the 2013 predictions season with a look at the year ahead in server virtualization. If you’re like me (or this New York Times columnist), you’ll agree that a little procrastination can sometimes be a good thing to help collect and organize your plans for the year ahead. (Did you buy that rationalization?)

We’re now more than a decade into the era of widespread x86 server virtualization. Hypervisors are certainly a mature (if not peaceful) technology category, and the consolidation benefits of virtualization are now incontestable. 77% of you will be using virtualization by the end of this year, and you’re running as many as 6 out of 10 workloads in virtual machines. With such strong penetration, what’s left? In our view: plenty. It’s time to ask your virtual infrastructure, “What have you done for me lately?”

With that question in mind, I asked my colleagues on the I&O team to help me predict what the year ahead will hold. Here are the trends in 2013 you should track closely:

  1. Consolidation savings won’t be enough to justify further virtualization. For most I&O pros, the easy workloads are already virtualized. Looking ahead at 2013, what’s left are the complex business-critical applications the business can’t run without (high-performance databases, ERP, and collaboration top the list). You won’t virtualize these to save on hardware; you’ll do it to make them mobile, so they can be moved, protected, and duplicated easily. You’ll have to explain how virtualizing these apps will make them faster, safer, and more reliable—then prove it.
Read more

HP’s Troubles Continue, But Does It Matter?

Richard Fichera

HP seems to be on a tear, bouncing from litigation with one of its historically strongest partners, to multiple CEOs in the last few years, to continued layoffs, and to a recent massive write-down of its EDS purchase. And, as we learned last week, the circus has not left town. The latest “oops” is an $8.8 billion write-down of its Autonomy purchase, made under the brief and ill-fated leadership of Léo Apotheker, combined with allegations of serious fraud on the part of Autonomy during the acquisition process.

The eventual outcome of this latest fiasco will be fun to watch, with many interesting sideshows along the way, including:

  • Whose fault is it? Can they blame it on Léo, or will it spill over onto Meg Whitman, who was on the board and approved it?
  • Was there really fraud involved?
  • If so, how did HP miss it? What about all the internal and external people involved in due diligence for this acquisition? I’ve been on the inside of attempted acquisitions at HP, and there were always many more people around with the power to say “no” than people trying to move the company forward with innovative acquisitions. The most persistent and compulsive of the group were the various finance teams involved. It’s really hard to see how they could have missed a little $5 billion discrepancy in revenues, but that’s just my opinion — I was usually the one trying to get around the finance guys. :)
Read more