I have been working on a research document, to be published this quarter, on the impact of 8-socket x86 servers based on Intel’s new Xeon 7500 CPU. In a nutshell, these systems have the performance of the best-of-breed RISC/UNIX systems of three years ago, at a substantially better price, and their overall performance improvement trajectory has been steeper than competing technologies for the past decade.
This is probably not shocking news, and it is not the subject of this post, although I would encourage you to read the document when it is finally published. During the course of researching it, I spent time trying to prove or disprove, using available benchmark results, my thesis that x86 system performance now solidly overlaps that of RISC/UNIX. The process highlighted for me the limitations of using standardized benchmarks for performance comparisons. There are now so many benchmarks available that system vendors run each one on only selected subsets of their product lines, if at all. Additionally, most benchmarks suffer from several common flaws:
They are results from high-end configurations, in many cases far beyond any normal use case, and the results cannot be interpolated down to smaller, more realistic configurations.
They are often the result of teams of very smart experts tuning system configurations and application and system software parameters for optimal results. For a large benchmark such as SAP or TPC, it is probably reasonable to assume that more than 1,000 variables are involved in the tuning effort. This makes the results very much like EPA mileage figures: the consumer is guaranteed not to exceed these numbers.
Fujitsu? Who? I recently attended Fujitsu’s global analyst conference in Boston, which gave me an opportunity to check in with the best-kept secret in the North American market. Even Fujitsu execs admit that many people in this largest of IT markets think that Fujitsu has something to do with film, and few of us have ever seen a Fujitsu system installed in the US unless it was a point-of-sale (POS) system.
So what is the management of this global $50 billion information and communications technology company, with a competitive portfolio of client, server, and storage products and a global service and integration capability, going to do about its lack of presence in the world’s largest IT market? In a word, invest. Fujitsu’s management, judging from its history and what it has disclosed of its plans, intends to invest in the US over the next three to four years to consolidate its estimated $3 billion in North American business into a more manageable (simpler) set of operating companies, and to double down on hiring and selling into the North American market. The fact that they have given themselves multiple years to do so is indicative of what I have always regarded as both Fujitsu’s greatest strength and one of its major weaknesses: they operate on Japanese time, so to speak. For an American company to undertake to build a presence over multiple years with seeming disregard for quarterly earnings would be almost unheard of, so Fujitsu’s management gets major kudos for that. On the other hand, years of observing them from a distance also leads me to believe that their approach to solving problems inherently lacks the sense of urgency of some of their competitors.
Today, IBM announced the acquisition of privately-held Clarity Systems for an undisclosed sum. The acquisition bolsters IBM’s solution set for the CFO, and complements its recent acquisition of OpenPages, a governance, risk, and compliance (GRC) vendor. Clarity, based in Toronto, had approximately 390 employees and 600 customers at the time of this deal.
Clarity Systems is a Strong Performer in "The Forrester Wave™: Business Performance Solutions, Q4 2009", offering a very good planning, budgeting, and forecasting solution as part of its flagship product, Clarity 7, along with an improved financial consolidations component. During the past few years, Clarity developed a market-leading regulatory reporting solution, Clarity FSR, which supports the process of creating full SEC filings and also embeds technology for XBRL reporting. IBM Cognos is ranked as a Leader in the same comparative evaluation.
The success of FSR alone during the past two years made the large BPS vendors (IBM, SAP, and Oracle) envious. Oracle mounted a competitive response early this year with the release of Oracle Hyperion Disclosure Management. It seemed to this observer that SAP would make the next move by acquiring Clarity, but IBM beat them to the punch.
On Sunday I will be participating in IBM’s Middle East and North Africa CIO Conference 2010, where I will present my research on Smart Cities. I’m looking forward to speaking with practitioners from the region to hear about their experiences in making their cities, organizations, and businesses more efficient through innovative technology-based initiatives. My presentation is entitled “Taking Lessons from Smart Cities,” because the real smarts lie in how these “cities” – whatever form they take – have overcome obstacles from budget battles to stakeholder standoffs.
One aspect of those smarts lies in the business models that have enabled smart cities. With talk of municipal bankruptcy and public sector debt, it is not surprising that public sector IT decision-makers are not all that optimistic about their industry outlook. In Forrester’s Forrsights Budgets And Priorities Tracker Survey, Q2 2010, only 26% of public sector IT decision-makers considered their industry outlook to be good, while 70% – the vast majority – expected a bad year. The public sector came in next to last among other industry verticals.
That same survey, however, also revealed expectations of IT spending increases in the public sector: 37% of public sector IT decision-makers expected IT budgets to grow by at least 5%; 11% expected increases of more than 10%. Some of that spending is creatively financed.
Several new business models have emerged to enable technology investment.
There has been a lot of press about IBM’s acquisition of BNT (Blade Network Technologies) focusing on the economics and market share of BNT as a competitor to Cisco and HP’s ProCurve/3Com franchise. But at its heart the acquisition is more about defending and expanding a position in the emerging converged server, networking, and storage infrastructure segment than it is about raw switch port market share. It is also a powerful vindication of the proposition that infrastructure convergence is driving major realignment in the vendor industry.
Starting with HP’s success with its c-Class blade servers and Virtual Connect technology, and escalating with Cisco’s entrance into the server market, IBM continued its investment in its Virtual Fabric and Open Fabric Manager technology, heavily leveraging BNT’s switch platforms. At some point it became clear that BNT was a critical element of IBM’s convergence strategy: IBM’s plans were heavily dependent on a vendor with which it had an excellent but non-exclusive relationship, and whose acquisition by another player could severely compromise those plans. Hence the acquisition. Now that it owns BNT, IBM can capitalize on BNT’s excellent edge network technology for further development of its converged infrastructure strategy without hesitation.
I recently spent a day with IBM’s x86 team, primarily to get back up to speed on their entire x86 product line, and partially to realign my views of them after spending almost five years as a direct competitor. All in all, time well spent, with some key takeaways:
IBM has fixed some major structural problems with the entire x86 program and its perception within the company – As recently as two years ago, it appeared to the outside world that IBM was not really serious about x86 servers. Between licensing its low-end server designs to Lenovo (although IBM continued to sell its own versions) and an apparent retreat to the upper end of the segment, the company sent mixed signals at best. New management, new alignment with sales, and a higher internal profile for x86 seem to have moved the division back into IBM’s mainstream.
Increased investment – It looks like IBM significantly ramped up investment in x86 products about three years ago. The result has been a relatively steady flow of new products into the marketplace, some of which, such as the HS22 blade, significantly reversed deficits versus equivalent HP products. Other improvements followed in high-end servers, virtualization, and systems management, along with an increased velocity of innovation in low-end systems.
Established leadership in new niches such as dense modular server deployments – IBM’s iDataPlex, while representing a small footprint in terms of IBM’s total volume, gave the company immediate visibility as an innovator in the rapidly growing niche for hyperscale dense deployments. Along the way, IBM has also apparently become the leader in GPU deployments, another low-volume but high-visibility niche.
CityOne, IBM's new Smarter City Simulation game, is interesting. But who will really play?
IBM introduced a new Smarter City Simulation game yesterday. I took a few minutes to play around with it. I love the idea. It is SimCity meets Smarter City, and together they make CityOne. Players are presented with challenges faced by decision-makers in the retail, banking, energy, and water industries within a city. They start with a budget for each industry, and for each challenge they are given a list of recommended actions and must choose among them. Each action has a cost and associated benefits. Some are more “right” than others, earning bonus credits and improving customer satisfaction and other key performance indicators, as well as earning special awards. A player likely knows not to pick the "Ignore the problem" option; when in doubt, players can also query a consultant for additional advice.
My sense was that the “right” answers seemed pretty obvious. That said, I certainly didn’t get a high score. And when I got to the end of my ten turns, I was feeling pretty overwhelmed by the issues across these industries.
Everyone’s using the term “sustainability.” And I’ll admit I’m a little jaded. But given that it’s here to stay for a while, let’s take a look at the term. What are the primary objectives of “sustainability” initiatives? Are they “green” – with an eye toward protecting the environment by reducing the effects of climate change? Are they economic – cost cutting, increasing efficiency? “Sustain” seems static, as in maintaining the current state. But some are thinking about “sustainability” as a means of generating growth. A few weeks ago, I started an interesting discussion about “operational sustainability” with Rich Lechner, IBM Vice President for Energy and Environment. (I say started because it actually continued this week, and will likely continue further.)
“Sustain to grow” may seem like an oxymoron, but it’s not. First, let’s think about efficiency. What does it mean to be more efficient? Efficiency, to me, is the goal of “doing more with less,” that is, improving the ratio of output to input. So you cut inputs and improve productivity ratios that way. But what if you’ve cut as much as you can and you still want to do more, to improve those ratios? How can you grow within the limits of the resources you have? Sustain resources while increasing productivity or capacity, in whatever terms or measures of capacity you use. This is the objective behind “operational sustainability”: how do you improve operations or processes in order to improve outcomes, within the limits of available resources?
Historically, Dell has been positioned as something of an also-ran versus its two major competitors for high-value enterprise business, particularly where that business involved complex services and the ability to deliver deeply integrated infrastructure and management stacks. Competitors looked at Dell as a price spoiler and a channel for standard storage and networking offerings from its partners, not as a potential threat to the high ground of delivering complex integrated infrastructure solutions.
This comforting image of Dell as a glorified box pusher appears to be coming to an end. When my colleague Andrew Reichman recently wrote about Dell’s attempted acquisition of 3Par, it made me take another look at Dell’s recent pattern of investments and the series of announcements it has made around delivering integrated infrastructure, with a message and solution offering that looks like it is aimed squarely at HP and IBM's Virtual Fabric.
To paraphrase Charles Dickens, Q2 2010 seemed like both the best of times and the worst of times for the big software vendors. For Microsoft, it was the best of times; for IBM, it was (comparatively) the worst of times; and for SAP, it was somewhere in between. IBM on July 19, 2010, reported total revenue growth of just 2% in the fiscal quarter ending June 30, 2010, with its software unit also reporting 2% growth (6% when the revenues of its divested product lifecycle management group are excluded from the Q2 2009 base). Those growth rates were down from 5% growth for IBM overall in Q1 2010 and 11% for the software group. In comparison, Microsoft on July 22, 2010, reported 22% growth in its revenues, with Windows revenues up 44%, Server and Tools revenues up 14%, and Microsoft Business Division (Office and Dynamics) revenues up 15%. And SAP on July 27, 2010, posted 12% growth in its revenues in euros, 5% growth on a constant currency basis, and 5% growth when its revenues were converted into dollars.
What do these divergent results for revenue growth say about the state of the enterprise software market?