IBM – Ramping Up x86 Investment

I recently spent a day with IBM’s x86 team, primarily to get back up to speed on their entire x86 product line, and partially to realign my views of them after spending almost five years as a direct competitor. All in all, time well spent, with some key takeaways:

  • IBM has fixed some major structural problems with its x86 program and its perception within the company – As recently as two years ago, IBM did not appear to the outside world to be serious about x86 servers. Between licensing its low-end server designs to Lenovo (although IBM continued to sell its own versions) and an apparent retreat to the upper end of the segment, the signals were easy to read as a slow withdrawal. New management, a new alignment with sales, and a higher internal profile for x86 appear to have moved the division back into IBM’s mainstream.
  • Increased investment – It looks like IBM significantly ramped up investment in x86 products about three years ago. The result has been a relatively steady flow of new products into the marketplace, some of which, such as the HS22 blade, significantly reversed deficits versus equivalent HP products. Other advances followed in high-end servers, virtualization, and systems management, along with an increased velocity of innovation in low-end systems.
  • Established leadership in new niches such as dense modular server deployments – IBM’s iDataPlex, while a small fraction of its total volume, gave IBM immediate visibility as an innovator in the rapidly growing niche of hyper-scale dense deployments. Along the way, IBM has also apparently become the leader in GPU deployments, another low-volume but high-visibility niche.
Read more

Infrastructure & Operations Communities Are Coming

Look for the new "Community" tab on the Forrester site. This is your access to a community of like-minded peers. You can use the community to start and participate in discussions, share ideas and experiences, and help guide Forrester Research for your role. The success or failure of this community effort depends largely on you. The analysts will participate, but in this forum they carry less weight than you, the Forrester I&O user. So help us, help your peers, and help yourself make this an active and thriving online community. Some thoughts to get you going: Have you had any particularly good or bad experiences with products, solutions, or technology? What key enablers are you looking at as you transform your data centers and operations? What does "cloud" mean to you? Any thoughts on vendor management and negotiations? This is just a stream-of-consciousness selection. Make the community yours by adding your own topics.

Little Servers For Big Applications At Intel Developer Forum

I attended Intel Developer Forum (IDF) in San Francisco last week, one of the premier events for anyone interested in microprocessors, system technology, and of course, Intel itself. Among the many wonders on display, including high-end servers, desktops and laptops, and presentations related to everything Cloud, my attention was caught by a pair of small wonders – very compact, low-power servers paradoxically targeted at some of the largest hyper-scale web-facing workloads. Despite being nominally aimed at an overlapping set of users and workloads, the two servers, the Dell “Viking” and the SeaMicro SM10000, represent a study in opposite design philosophies for scaling infrastructure to handle high-throughput web workloads. In this case, the two ends of the spectrum are adherence to an emerging standardized design, using Intel’s reference architectures as a starting point, versus a complete refactoring of the constituent parts of the server to maximize performance per watt and physical density.

Read more

Interesting Hardware Products At VMworld: NEC FT x86 Servers And Xsigo I/O Virtualization

Not a big surprise, but VMworld was overwhelmingly focused on software. I was concerned that great hardware might get lost in the software noise, so I looked for hardware standouts using two criteria:

  1. They had to have some relevance to virtualization. Most offerings could check this box – we were at VMworld, after all.
  2. They had to have some potential impact on Infrastructure & Operations (I&O) for a wide range of companies.
Read more

Oh No! Not Another VMworld Blog Post!

It was reported that sometime over the past weekend the number of tweets and blogs about VMworld exceeded Plankk’s limit (postulated by blogger Marvin Plankk, now confined to an obscure institution in an unidentified state with more moose than people), and quietly coalesced into an undifferentiated blob of digital entropy as a result of too many semantically identical postings online at the same time. So this leaves the field clear for me to write the first VMworld post in the new cycle.

This year was my first time at VMworld, and it left a profound impression – while the energy and activity among the 17,000 attendees, exhibitors and VMware itself would have been impressive in any context, the underlying evidence of a fundamental transformation of the IT landscape was even more so. The theme this year was “clouds,” but to some extent I think themes of major shows like this are largely irrelevant. The theme serves as an organizing principle for the communications and promotion of the show, but the technology content of the show, particularly as embodied by its exhibitors and attendees, is based on what is actually being done in the real world. If the technology was not already there, the show might have to find another label. Keeping the cart firmly behind the horse, this activity is being driven by real IT problems, real investments in solutions, and real technology being brought to market. So to me the revelation of the show was not in the fact that VMware called it “cloud,” but that the world is really thinking “cloud.”

Read more

Wall Street Pilgrimage – Infrastructure & Operations At The Top

A Wall Street infrastructure & operations day

I recently had an opportunity to spend a day in three separate meetings with infrastructure & operations professionals from three of the top six financial services firms in the country, discussing topics ranging from long-term business and infrastructure strategy to specific likes and dislikes regarding their Tier-1 vendors and their challengers. The day’s meetings were neither classic consulting nor classic briefings, but rather free-form discussions, guided only loosely by an agenda and, despite possible Federal regulations to the contrary, completely devoid of PowerPoint presentations. As in the past, these in-depth meetings provided a wealth of food for thought, along with interesting and sometimes contradictory indicators from the three groups. There was a lot of material to ponder, but I’ll try to summarize some of the high-level takeaways in this post.

Servers and Vendors

Between them, these companies own in the neighborhood of 180,000 servers and probably purchase 30,000 - 50,000 servers per year across various procurement cycles. In short, these are heavyweight users. One thing that struck me in the course of the conversations was the Machiavellian view they take of their Tier-1 server vendors. While they regard these vendors as key partners, the majority of the group also devotes a substantial amount of time to keeping them at arm’s length through aggressive vendor management techniques such as deliberately splitting procurements between competitors. They understand their suppliers' margins and cost structures well, and they are committed to driving hardware supplier margins to “as close to zero as we can,” in the words of one participant.

Read more

Oracle Cancels OpenSolaris – What’s The Big Deal?

There has been turmoil and angst in the open source community of late over Oracle’s decision to cancel OpenSolaris. Since this community can be expected to react violently anytime something is taken out of open source, the real question is whether this action has any impact on real-world IT and operations professionals. The short answer is no.

Enterprise Solaris users, be they small, medium, or large, are running critical applications on it, and as far as we can tell, the uptake of OpenSolaris, as opposed to Solaris supplied and sold by Sun, was very low in commercial accounts, other than possibly a surge in test and dev environments. The decision to take Solaris into the open source arena was, in my opinion, fundamentally flawed, and Oracle’s subsequent decision to reverse it is eminently rational. Oracle’s customers are almost certainly not going to run their companies on an OS built and maintained by an open source community (even the vast majority of corporate Linux use is via a distribution supported by a major vendor under a paid subscription model), and Oracle cannot continue to develop Solaris unless it has absolute control over it, just as is the case with every other enterprise OS. In the same vein, unless Oracle can also expect to be compensated for its investments in future Solaris development, it has little motivation to continue investing heavily in Solaris.

Dell – Ready To Challenge HP And IBM For The High Ground?

Historically, Dell’s positioning against its two major competitors for high-value enterprise business, particularly where it involved complex services and the ability to deliver deeply integrated infrastructure and management stacks, has been that of an also-ran. Competitors looked at Dell as a price spoiler and a channel for standard storage and networking offerings from its partners, not as a potential threat to the high ground of delivering complex integrated infrastructure solutions.

This comforting image of Dell as a glorified box pusher appears to be coming to an end. When my colleague Andrew Reichman recently wrote about Dell’s attempted acquisition of 3Par, it made me take another look at Dell’s recent pattern of investments and the series of announcements it has made around delivering integrated infrastructure, with a message and solution offering that looks like it is aimed squarely at HP and IBM's Virtual Fabric.

Consider the overall pattern of investments:

Read more

Will Plug-In Hybrids Change The Data Center?

In a recent discussion with a group of infrastructure architects, power architecture, especially UPS engineering, was on the table as a topic. There was general agreement that UPS systems are a necessary evil, cumbersome and expensive beasts to put into a data center, and there was a lot of speculation about alternatives. The consensus was that the goal is a solution that is more granular to install and deploy, allowing easier, ad hoc decisions about which resources to protect, and that current battery technologies and UPS architectures are not optimal for this kind of solution.

So what if someone were to suddenly expand battery technology R&D investment by a factor of maybe 100x, expand high-capacity battery production by a giant factor, and drive prices down precipitously? That’s a tall order for today’s UPS industry, but it’s happening now courtesy of the auto industry and the anticipated wave of plug-in hybrid cars. While batteries for cars and batteries for computers certainly have their differences in terms of depth and frequency of charge/discharge cycles, packaging, lifespan, etc., there is little doubt that investments in dense and powerful automotive batteries and power management technology will bleed through into the data center. Throw in recent developments in high-charge capacitors (referred to in the media as “supercapacitors”), which provide the impedance match between the requirements of spike demands and a chemical battery’s dislike of sudden state changes, and you have all the foundational ingredients for a major transformation in the way we think about supplying backup power to our data center components.
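To get a feel for why automotive-scale battery economics matter here, consider a rough back-of-the-envelope sizing sketch. The rack load, ride-through time, and usable-capacity figures below are purely illustrative assumptions of mine, not numbers from the discussion above:

    # Back-of-the-envelope sizing: how much automotive-class battery capacity
    # a hypothetical per-rack backup unit might need. All figures are assumptions.

    RACK_LOAD_KW = 10.0        # assumed power draw of one rack
    RIDE_THROUGH_SEC = 60.0    # assumed hold-up time until generators take over
    USABLE_FRACTION = 0.8      # assume only part of the pack is usable per cycle

    energy_needed_kwh = RACK_LOAD_KW * (RIDE_THROUGH_SEC / 3600.0) / USABLE_FRACTION
    print(f"Battery capacity needed per rack: {energy_needed_kwh:.2f} kWh")
    # Roughly 0.2 kWh, a small slice of a typical plug-in hybrid pack (several kWh),
    # which is why cheap, dense automotive cells could make granular, per-rack
    # backup far more practical than today's monolithic UPS rooms.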

Read more

The Need For Speed – GPUs Emerge As Mainstream Accelerators

It’s probably fair to say that the computer community is obsessed with speed. After all, people buy computers to solve problems, and generally the faster the computer, the faster the problem gets solved. The earliest benchmark that I have seen was published in “High-Speed Computing Devices,” Engineering Research Associates, McGraw-Hill, 1950. It cites the Marchant desktop calculator as achieving a best-in-class result of 1,350 digits per minute for addition, and the threshold problems of the day were figuring out how to break down Newton-Raphson equation solvers for maximum computational efficiency. And so the race begins…

Not much has changed since 1950. While our appetites are now expressed in GFLOPS per CPU and TFLOPS per system, users continue to push for escalating performance on numerically intensive problems. Just as we settled into a relatively predictable performance model, with standard CPUs and cores glued into servers and aggregated into distributed computing architectures of various flavors, along came the notion of attached processors. First appearing in the 1960s and 1970s as attached mainframe vector processors and attached floating-point array processors for minicomputers, attached processors have always had a devoted and vocal minority of supporters within the industry. My own brush with them was as a developer using a Floating Point Systems array processor attached to a 32-bit minicomputer to speed up a nuclear reactor core power monitoring application. When all was said and done, the 50X performance advantage of the FPS box had shrunk to about 3.5X for the total application. Not bad, but a defeat of expectations. Subsequent brushes with attempts to integrate DSPs with workstations left me a bit jaundiced about the future of attached processors as general-purpose accelerators.
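That 50X-to-3.5X shrinkage is Amdahl’s Law at work: whole-application speedup is limited by the fraction of runtime that can actually be offloaded. The quick sketch below works backward from the two figures above (everything else is generic arithmetic, not data from the original project) to show that only about three quarters of the runtime was accelerable:

    # Amdahl's Law: overall = 1 / ((1 - f) + f / s), where f is the fraction of
    # runtime that can be offloaded and s is the accelerator's speedup on that part.

    def overall_speedup(f: float, s: float) -> float:
        """Whole-application speedup when a fraction f of runtime is sped up by s."""
        return 1.0 / ((1.0 - f) + f / s)

    s = 50.0         # raw speedup of the attached processor
    observed = 3.5   # speedup actually seen for the total application

    # Solve Amdahl's Law for f given s and the observed overall speedup.
    f = (1.0 - 1.0 / observed) / (1.0 - 1.0 / s)
    print(f"Accelerable fraction of runtime: {f:.0%}")                 # ~73%
    print(f"Check: overall speedup = {overall_speedup(f, s):.1f}X")    # ~3.5X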

Read more