In October, with great fanfare, the Open Data Center Alliance unfurled its banners. The ODCA is a consortium of approximately 50 large IT consumers, including major manufacturing, hosting, and telecom providers, with the avowed intent of developing standards for interoperable cloud computing. In addition to the roster of users, the announcement highlighted Intel in an ambiguous role as technology advisor to the group. The ODCA believes it will carry weight in the industry because of its estimated $50 billion per year of cumulative IT purchasing power, and the trade press was full of praise for influential users driving technology, as opposed to allowing rapacious vendors such as HP and IBM to drive users down proprietary paths that lead to vendor lock-in.
Now that we’ve had a month or more to allow the purple prose to settle a bit, let’s look at the underlying claims, potential impact of the ODCA and the shifting roles of vendors and consumers of technology. And let’s not forget about the role of Intel.
First, let me state unambiguously that one of the core intentions of the ODCA is a good idea: developing common use case models that, backed by the economic clout of ODCA members, will in turn drive vendors to develop products that comply with those models (and hopefully there will be a correlation between ODCA member requirements and those of a wider set of consumers). Vendors spend a lot of time talking to users and trying to understand their requirements, and having the ODCA as a proxy for the requirements of many very influential customers will benefit all concerned.
I have been working on a research document, to be published this quarter, on the impact of 8-socket x86 servers based on Intel’s new Xeon 7500 CPU. In a nutshell, these systems have the performance of the best-of-breed RISC/UNIX systems of three years ago, at a substantially better price, and their overall performance improvement trajectory has been steeper than competing technologies for the past decade.
This is probably not shocking news and is not the subject of this post, although I would encourage you to read the document when it is finally published. During the course of researching it, I spent time trying to prove or disprove, using available benchmark results, my thesis that x86 system performance now solidly overlaps that of RISC/UNIX. The process highlighted for me the limitations of using standardized benchmarks for performance comparisons. There are now so many benchmarks available that system vendors run each one only on selected subsets of their product lines, if at all. Additionally, most benchmarks suffer from several common flaws:
They are results from high-end configurations, in many cases far beyond any normal use case, and the results cannot be interpolated to smaller, more realistic configurations.
They are often the result of teams of very smart experts tuning the system configurations, application and system software parameters for optimal results. For a large benchmark such as SAP or TPC, it is probably reasonable to assume that there are over 1,000 variables involved in the tuning effort. This makes the results very much like EPA mileage figures — the consumer is guaranteed not to exceed these numbers.
I recently spent a day with IBM’s x86 team, primarily to get back up to speed on their entire x86 product line, and partially to realign my views of them after spending almost five years as a direct competitor. All in all, time well spent, with some key takeaways:
IBM has fixed some major structural problems with the entire x86 program and its perception within the company – As recently as two years ago, it appeared to the outside world that IBM was not really serious about x86 servers, between licensing its low-end server designs to Lenovo (although IBM continued to sell its own versions) and an apparent retreat to the upper end of the segment. New management, new alignment with sales, and a higher internal profile for x86 seem to have moved the division back into IBM's mainstream.
Increased investment – It looks like IBM significantly ramped up investment in x86 products about three years ago. The result has been a relatively steady flow of new products into the marketplace, some of which, such as the HS22 blade, significantly reversed deficits versus equivalent HP products. Other advances followed in high-end servers, virtualization, and systems management, along with an increased velocity of innovation in low-end systems.
Established leadership in new niches such as dense modular server deployments – IBM's iDataPlex, while representing a small footprint in terms of total volume, gave the company immediate visibility as an innovator in the rapidly growing niche of hyperscale dense deployments. Along the way, IBM has also apparently become the leader in GPU deployments, another low-volume but high-visibility niche.
My work at Forrester is focused on helping strategists at IT suppliers (vendors) align their development, positioning, and messaging with the big trends and disruptions in the industry. Mobility, cloud computing, globalization … trends at that high altitude. Over the last three years or so, that has included sustainability as it has appeared on, and risen higher on, the strategy agendas of companies around the world.
When I meet with strategists at tech suppliers large and small, we talk sustainability both in terms of how the companies are cleaning up their own practices and processes, and what they are doing to help their customers do the same. SAP’s “exemplar and enabler” language captures these parallel efforts nicely. But it’s still a limited perspective, one that I characterize as the IT industry playing defense. “We are improving our energy efficiency!” says the collective industry voice, as if trying to deflect public criticism of energy-hog data centers, or mountains of e-waste, or PCs left running 7 x 24. And yes, absolutely, the IT industry and its customers have more work to do to make IT infrastructure and processes less wasteful and more responsible.
Yesterday I got a sneak peek at the new Intel Classmate PC, both the clamshell and convertible models. These new models are significant upgrades from the previous version. While I never really wanted my own mini-laptop, I now have Classmate envy.
Highlights that mattered to me included (drum roll please):
10.1 inch screen to replace the tiny 8.9 inch screen – as a result the keyboard is bigger, accommodating adolescent and even adult fingers. Honestly, the previous design was just too dang small. My fingers were all over each other.
Ruggedization (is that a word?) – now designed to withstand accidental drops from desk height, with a water resistant LCD, keyboard, touch pad, HDD shock protection using the accelerometer to detect falls, and a really nice rubberized surface making it easier to hold onto.
Retractable handle – while I’m on the topic of holding it… may I say that I really don’t understand why other PC vendors don’t put handles on their laptops. My Panasonic Toughbook has one and I love it.
eReader interface – while I’ve never used my Kindle (really, not once), I do think I’d take advantage of the eReader capability of the Classmate. The accelerometer flips the content to portrait and the touchscreen allows you to “flick to scroll.” You can also highlight and take notes directly on the page. The eReader feature was what Wired magazine picked up on in their Classmate product review.
Many of the case studies you've seen me write about are B2C. But in the report on ROI of Social Media, I gathered data on B2B companies too. Here's a list of B2B communities.
Many people know Intel by its catchy slogan, "Intel Inside." And what's inside are the most amazing microprocessors, which let us do great things that 25 years ago people could only imagine. The key to having been an innovator is always innovating. When Intel first came out with a new chip (think back to the 286 processor and the transition to the 386), it met with some resistance in getting computer manufacturers interested in the chip. Why would you need more computing power?
So instead of staying stuck or ditching the product, Intel brought together a multidisciplinary team of individuals to tackle the problem. The net-net is that the team realized that it's the end user who is really their customer! When they went into computer shops and talked to the customers, they asked, "Would you like to be able to have many files open at once? Would you like to be able to run graphics programs, play games, etc.?" The customers responded positively with, "Of course we would!" That drove the computer store operators to tell the computer manufacturers to get those Intel chips in their computers. Ah... I love that "voice of the customer" story.
But what I love more is that Intel innovated. Why? Because they listened. That's a skill most companies don't have. And with social media, Intel has put its listening on dual-processor, turbocharged power. They know that their ability to innovate and lead the market is based on harnessing the power, knowledge, and collaboration among customers, resellers, and Intel itself.