Lies, Damned Lies, And Statistics . . . And Benchmarks

Richard Fichera

I have been working on a research document, to be published this quarter, on the impact of 8-socket x86 servers based on Intel’s new Xeon 7500 CPU. In a nutshell, these systems have the performance of the best-of-breed RISC/UNIX systems of three years ago, at a substantially better price, and their overall performance improvement trajectory has been steeper than that of competing technologies for the past decade.

This is probably not shocking news and is not the subject of this current post, although I would encourage you to read the document when it is finally published. During the course of researching it, I spent time trying to prove or disprove my thesis that x86 system performance solidly overlapped that of RISC/UNIX using available benchmark results. The process highlighted for me the limitations of using standardized benchmarks for performance comparisons. There are now so many benchmarks available that system vendors run each one on only selected subsets of their product lines, if at all. Additionally, most benchmarks suffer from several common flaws:

  • They are results from high-end configurations, in many cases far beyond any normal use case, and the results cannot be interpolated to smaller, more realistic configurations (a quick sketch after this list illustrates why).
  • They are often the result of teams of very smart experts tuning the system configurations and application and system software parameters for optimal results. For a large benchmark such as SAP or TPC, it is probably reasonable to assume that over 1,000 variables are involved in the tuning effort. This makes the results very much like EPA mileage figures: the consumer is guaranteed not to exceed these numbers.
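
To make the interpolation problem concrete, here is a minimal sketch in Python using Amdahl's law as a crude stand-in for real scaling behavior. The socket counts, serial fraction, and scores are all hypothetical and are not drawn from any published result:

```python
# Hypothetical illustration: why benchmark results don't interpolate linearly.
# Assumes Amdahl's law with a 5% serial fraction -- real workloads are messier.

def amdahl_speedup(n_sockets: int, serial_fraction: float = 0.05) -> float:
    """Speedup over a single socket under Amdahl's law."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_sockets)

big_config = 64   # sockets in the published, heavily tuned result
small_config = 8  # sockets in a realistic configuration

big_score = 1_000_000 * amdahl_speedup(big_config)      # published result (made-up units)
naive_estimate = big_score * small_config / big_config  # linear interpolation
modeled_score = 1_000_000 * amdahl_speedup(small_config)

print(f"Naive 8-socket estimate:  {naive_estimate:,.0f}")
print(f"Modeled 8-socket result:  {modeled_score:,.0f}")
# The naive estimate understates the small system by roughly 3x here, because
# per-socket efficiency is higher at low socket counts; with tuning effects
# the error can run in either direction.
```

Real systems deviate from Amdahl's law in both directions (memory bandwidth, NUMA effects, tuning), which is precisely why published big-configuration numbers tell you so little about the configuration you would actually buy.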
Read more

Fujitsu – Ready To Play In North America?

Richard Fichera

Fujitsu? Who? I recently attended Fujitsu’s global analyst conference in Boston, which gave me an opportunity to check in with the best-kept secret in the North American market. Even Fujitsu execs admit that many people in this largest of IT markets think that Fujitsu has something to do with film, and few of us have ever seen a Fujitsu system installed in the US unless it was a point-of-sale (POS) system.

So what is the management of this global $50 billion information and communications technology company, with a competitive portfolio of client, server, and storage products and a global service and integration capability, going to do about its lack of presence in the world’s largest IT market? In a word, invest. Fujitsu’s management, judging from its history and what it has disclosed of its plans, intends to invest in the US over the next three to four years, consolidating an estimated $3 billion in North American business into a more manageable (simpler) set of operating companies and doubling down on hiring and selling into the North American market. The fact that they have given themselves multiple years to do so is very indicative of what I have always thought of as both Fujitsu’s greatest strength and one of its major weaknesses: they operate on Japanese time, so to speak. For an American company to undertake building a presence over multiple years with seeming disregard for quarterly earnings would be almost unheard of, so Fujitsu’s management gets major kudos for that. On the other hand, years of observing them from a distance also leads me to believe that their approach to solving problems inherently lacks the sense of urgency of some of their competitors.

Read more

Walking The Walk – Mobile Devices And The Infrastructure & Operations Group

Richard Fichera

Recently I’ve been living a double life. By day, a mild-mannered functionary for Forrester Research, helping I&O professionals cope with the hurly-burly of our rapid-paced world. By night, I have been equipping myself with an iPhone and an iPad and trying out any other mobile devices I can get my hands on, including the Dell Streak, Android phones, and the incredibly appealing new Apple MacBook Air. While my colleague Ted Schadler has been writing about these devices from a more strategic perspective, I wanted to see what the daily experience felt like and simultaneously get a perspective from our I&O customers about their experiences.

So, the first question: Is the mobile phenomenon real? The answer is absolutely yes. While the rise of mobile devices is a staple of every vendor’s strategic pitch, it is also a real trend. In conversations with I&O groups, I have been polling them on mobile devices in their companies, and the feedback has been largely the same: employees are buying their own consumer devices and using them for work, forcing I&O, security, and email/collaboration application owners, often well outside of plan, to support them. Why can’t IT groups “just say no”? Because IT in rational companies is fundamentally in the business of enabling the business, and the value and productivity unlocked by these devices are too great to pass up.

Read more

IBM Acquires BNT – Nuclear War In The Converged Infrastructure World?

Richard Fichera

There has been a lot of press about IBM’s acquisition of BNT (Blade Network Technologies) focusing on the economics and market share of BNT as a competitor to Cisco and HP’s ProCurve/3Com franchise. But at its heart the acquisition is more about defending and expanding a position in the emerging converged server, networking, and storage infrastructure segment than it is about raw switch port market share. It is also a powerful vindication of the proposition that infrastructure convergence is driving major realignment in the vendor industry.

Starting with HP’s success with its c-Class blade servers and Virtual Connect technology, and escalating with Cisco’s entrance into the server market, IBM continued its investment in its Virtual Fabric and Open Fabric Manager technology, heavily leveraging BNT’s switch platforms. At some point it became clear that BNT was a critical element of IBM’s convergence strategy, leaving IBM’s plans heavily dependent on a vendor with which it had an excellent but non-exclusive relationship, and whose acquisition by another player could severely compromise those plans. Hence the acquisition. Now that it owns BNT, IBM can build on this excellent edge network technology to further develop its converged infrastructure strategy without hesitation.

Read more

IBM – Ramping Up x86 Investment

Richard Fichera

I recently spent a day with IBM’s x86 team, primarily to get back up to speed on their entire x86 product line, and partly to realign my views of them after spending almost five years as a direct competitor. All in all, time well spent, with some key takeaways:

  • IBM has fixed some major structural problems with the entire x86 program and its perception within the company – As recently as two years ago, it appeared to the outside world that IBM was not really serious about x86 servers, between licensing its low-end server designs to Lenovo (although IBM continued to sell its own versions) and an apparent retreat to the upper end of the segment. New management, new alignment with sales, and a higher internal profile for x86 seem to have moved the division back into IBM’s mainstream.
  • Increased investment – It looks like IBM significantly ramped up investment in x86 products about three years ago. The result has been a relatively steady flow of new products into the marketplace, some of which, such as the HS22 blade, significantly reversed deficits versus equivalent HP products. Other advances followed in high-end servers, virtualization, and systems management, along with an increased velocity of innovation in low-end systems.
  • Established leadership in new niches such as dense modular server deployments – IBM’s iDataPlex, while representing a small footprint in terms of total volume, gave the company immediate visibility as an innovator in the rapidly growing niche for hyperscale dense deployments. Along the way, IBM has also apparently become the leader in GPU deployments, another low-volume but high-visibility niche.
Read more

Join Forrester's New Community For Consumer Product Strategists!

JP Gownder

I'd like to invite you to participate in an exciting new forum for discussion: our community for Consumer Product Strategy professionals!

The community is a place for product strategists to exchange ideas, opinions, and real-world solutions with each other. Forrester analysts will also be part of the community, helping facilitate the discussions and sharing their views.

Right now, we already have discussions going on the topics of product co-creation, creating video content for your product or brand, and the effects of disruptive technologies like the iPad on product strategies. These conversations are just getting started, but they're already lively.

In general, here's what you’ll find:

  • A simple platform on which you can pose your questions and get advice from peers who face the same business challenges.
  • Insight from our analysts, who weigh in frequently on the issues. 
  • Fresh perspective from peers, who share their real-world success stories and best practices.
  • Content on the latest technologies and trends affecting your business, from Forrester and other thought leaders.

I encourage you to become part of the community:

  • Ask a question about a complex business problem.
  • Start a discussion on an emerging trend that’s having an impact on your work.
  • Contribute to an existing discussion thread from a community member.
  • Suggest topics for upcoming Forrester research reports.
  • Create a community profile.
  • Share your perspective with others.

The community is open to both Forrester clients AND non-clients. Why not visit today?

http://community.forrester.com/community/cps

Part 2: Three FUD Statements I&O Managers Use To Avoid Implementing Standards-Based Networking

Andre Kindness

Carrying on from my thoughts in Part 1: It’s time to start deploying purely standards-based infrastructure outside the data center; data center protocols are only just beginning to be created for a converged and virtualized world. With the number of tested and deployed standard protocols available, there’s no excuse for networks to be locked into a single vendor’s proprietary protocols when standards-based network solutions provide access to compelling volume economics, the flexibility to adopt a much wider array of solutions, and relief from hiring specialized talent to run a science project. Although many organizations understand that standards-based networking provides them with the flexibility to choose from the best available solutions at a lower cost of ownership, they still feel trapped. Listed below are the three top shackles and the keys to unlock them:

Read more

Infrastructure & Operations Communities Are Coming

Richard Fichera

Look for the new "Community" tab on the Forrester site. This is your access to a community of like-minded peers. You can use the community to start and participate in discussions, share ideas and experiences, and help guide Forrester Research for your role. The success or failure of this community effort depends largely on you. The analysts will participate, but in this forum they carry less weight than you, the Forrester I&O user. So help us, help your peers, and help yourself make this an active and thriving online community. Some thoughts to get you going: Have any particularly good or bad experiences with products, solutions, or technology? What key enablers are you looking at as you transform your data centers and operations? What does "cloud" mean to you? Any thoughts on vendor management and negotiations? This is just a stream-of-consciousness selection. Make the community yours by adding your own topics.

Part 1: Standards And Proprietary Technology: A Time And Place For Both

Andre Kindness

I was listening to a briefing the other day and got swept up in a Western melodrama, set against the backdrop of Calamity Jane’s saloon in Deadwood Gulch, South Dakota, revolving around three major characters: the helpless heroine (the customer); the valiant hero (vendor A, riding a standards-based white horse); and the scoundrel villain (a competitor, riding the proprietary black stallion; insert boos and hisses). Vendor A tries to evoke sympathy at his plight of not offering the latest features because he doesn’t have the same powers as the villain and has chosen to follow the morally correct path, filled with the prolonged and undeserved suffering of supporting standards-based functions. What poppycock! There is no such thing as good and evil in networking. If the vendors’ positions were reversed, vendor A would be doing the same thing as its competitor. Every vendor has some type of special sauce to differentiate itself. Anyway, it’s business, plain and simple; networking fundamentally needs both proprietary and standards-based features. However, there’s a time and place for each.

With that in mind, I want to let you know that I’m a big proponent of standards-based networking. The use of open standards improves the choices that help you reduce risk, implement durable solutions, obtain flexibility, and benefit from quality. Ninety-plus percent of networking should leverage standard protocols, but to get to that point, features need to go through three stages:

Read more

Little Servers For Big Applications At Intel Developer Forum

Richard Fichera

I attended the Intel Developer Forum (IDF) in San Francisco last week, one of the premier events for anyone interested in microprocessors, system technology, and, of course, Intel itself. Among the many wonders on display, including high-end servers, desktops and laptops, and presentations related to everything cloud, my attention was caught by a pair of small wonders: very compact, low-power servers paradoxically targeted at some of the largest hyperscale web-facing workloads. Despite being nominally targeted at an overlapping set of users and workloads, the two servers, the Dell “Viking” and the SeaMicro SM10000, represent a study in opposite design philosophies for scaling infrastructure to handle high-throughput web workloads. One end of the spectrum adheres to an emerging standardized design, using Intel’s reference architectures as a starting point; the other completely refactors the constituent parts of a server to maximize performance per watt and physical density.
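
To make the density-versus-refactoring tradeoff concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it is hypothetical and illustrative only, not a specification for either machine:

```python
# Hypothetical back-of-the-envelope comparison of the two design philosophies.
# None of these figures are vendor specifications; they only illustrate the math.

def per_rack(metrics: dict, rack_units: int = 42) -> dict:
    """Scale per-chassis figures up to a full 42U rack."""
    chassis = rack_units // metrics["rack_units"]
    return {
        "servers": chassis * metrics["servers"],
        "watts": chassis * metrics["watts"],
        "throughput": chassis * metrics["servers"] * metrics["throughput_per_server"],
    }

standard_dense = {  # conventional design built from reference parts
    "rack_units": 2, "servers": 4, "watts": 1200, "throughput_per_server": 100,
}
refactored = {      # ground-up refactor: many low-power nodes, shared fabric
    "rack_units": 10, "servers": 256, "watts": 2500, "throughput_per_server": 12,
}

for name, m in [("standard-dense", standard_dense), ("refactored", refactored)]:
    r = per_rack(m)
    print(f"{name}: {r['servers']} servers/rack, "
          f"{r['throughput'] / r['watts']:.2f} units of work per watt")
```

Under these made-up numbers, the refactored design wins on work per watt for embarrassingly parallel web workloads, but the arithmetic flips quickly if per-node performance matters more than node count, which is exactly the design argument the two approaches embody.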

Read more