On Monday, February 13, HP took its next turn of the great wheel for servers with the announcement of its Gen8 family of servers. Interestingly, because the announcement preceded Intel’s official launch of the supporting E5 server CPUs, HP had nothing to say about the CPUs or the performance of these systems. But even if the CPU information had been available, it would have been a sideshow to the main thrust of the Gen8 launch — improving the overall TCO (particularly opex) of servers by making them more automated, more manageable, and easier to remediate when there is a problem, along with enhancements to storage, data center infrastructure management (DCIM) capabilities, and a fundamental change in the way that services and support are delivered.
With a little more granularity, the major components of the Gen8 server technology announcement included:
Onboard Automation – A suite of capabilities and tools that provide improved agentless local intelligence, allowing quicker, lower-labor-cost provisioning, including faster boot cycles, “one click” firmware updates of single or multiple systems, intelligent and greatly improved boot-time diagnostics, and run-time diagnostics. This is apparently implemented with more powerful onboard management controllers plus a large amount of software pre-provisioned on built-in flash memory for the controllers’ use. HP claims that the combination of these tools can increase operator productivity by up to 65%. One of the eye-catching features is an iPhone app that scans a code printed on the server, goes back through the Insight Management Environment stack, and triggers the appropriate script to provision the server. Possibly a bit of a gimmick, but a cool-looking one.
In late 2010 I noted that startup SeaMicro had introduced an ultra-dense server using Intel Atom chips in an innovative fabric-based architecture that allowed it to factor out much of the power overhead of a large multi-CPU server (http://blogs.forrester.com/richard_fichera/10-09-21-little_servers_big_applications_intel_developer_forum). Along with many observers, I noted that the original SeaMicro server was well suited to many lightweight edge processing tasks, but that the system would not support more traditional compute-intensive tasks due to the performance of the Atom core. I was, however, quite taken with the basic architecture, which uses a proprietary high-speed (1.28 Tb/s) 3D mesh interconnect to allow the CPU cores to share network, BIOS, and disk resources that are normally replicated on a per-server basis in conventional designs, with commensurate reductions in power and an increase in density.
So I made the trek from Singapore to Orlando for Lotusphere the week of January 15th and it proved well worth the time and effort. It was actually one of the best events of its kind I’ve attended in years — and I’ve attended loads. IBM expanded the focus well beyond the “legacy” Lotus brand. In fact, this was a social business event from start to finish, with IBM linking its much broader social computing portfolio to business process improvement and value creation.
The focus and scope have clearly grown beyond the current event branding. But putting event naming issues aside for the moment, below are some key takeaways:
Evolving into a social business applies to all organizations — any process that relies on people will fundamentally change. IBM made a solid case that business transformation is not only possible but mandatory. A social business excels at discovering and sharing new ideas — fundamentally changing how people work and therefore how companies operate. Companies not embracing this change will get left behind.
IBM’s vision for social business — business process disruption is inevitable. Focusing heavily on a process-centric view, IBM downplayed tools and technology. Per IBM, social business is the intersection of social technologies and front-office business processes — as significant to top-line revenue growth over the next decade as SOA has been to back-office business processes and bottom-line cost savings over the last decade.
I’ve been with Forrester for just over a month now. It’s great to be involved with our clients and communities and to be helping businesses across the world evaluate the quality of software suppliers' proposals from a commercial perspective (e.g., is this a great deal, or can the supplier do better?). One of the best parts of being at Forrester is continuing the work I did before joining — advising businesses on software contract and pricing negotiations. One thing I noticed then, and continue to hear about now, is the reluctance of software suppliers like IBM, BMC, CA, and Compuware to publish meaningful list prices, to explain how their price books work, or to show how discounts were determined. Time and again I had to ask suppliers to unbundle prices and confirm the basis for the net prices they were proposing. Does anyone else agree with me that pricing should be clear and transparent, not a black art?
Here’s an example of an “art” that should be a science: list pricing. While it’s logical to assume that list pricing is the common foundation upon which all bids are built, that’s actually not the case. I often found that my clients were being quoted different “list pricing” for the same products. Isn’t list pricing, by definition, supposed to be the same for everyone? That’s why you may, with good reason, doubt the validity of a list price or the competitiveness of a discount that a software supplier offers you. It’s why I love my work, and why you should make sure you get third-party validation of your deals.
How do you validate your software vendors’ list pricing and proposed discounts?
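As a rough illustration of why validating the list price matters as much as validating the discount, consider this sketch (all figures, prices, and the benchmark source are hypothetical): a supplier’s claimed discount can look generous against its own inflated “list,” yet far less impressive against an independently benchmarked list price for the same product.

```python
# Hypothetical sanity check on a software quote.
# All figures below are illustrative, not real vendor pricing.

def effective_discount(list_price: float, net_price: float) -> float:
    """Return the discount implied by a net price, as a fraction of list."""
    return (list_price - net_price) / list_price

# Supplier's quote: "list" of $500 per user, net of $350 (a claimed 30% discount).
quoted_list = 500.0
net = 350.0

# Benchmark list price for the same SKU from a hypothetical third-party source.
benchmark_list = 450.0

print(f"Discount vs. supplier's list: {effective_discount(quoted_list, net):.0%}")   # → 30%
print(f"Discount vs. benchmark list:  {effective_discount(benchmark_list, net):.0%}")  # → 22%
```

The same net price that looks like a 30% concession against the supplier’s own list works out to only 22% against the benchmark, which is exactly the kind of gap that third-party validation exposes.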
OK, it’s time to stretch the 2012 writing muscles, and what better way to do it than with the time-honored “retrospective” format? But rather than try to itemize all the news and come up with a list of a dozen or more interesting things, I decided instead to pick the best and the worst – events and developments that show the amazing range of the technology business, its potential and its daily frustrations. So, drum roll, please. My personal nominations for the best and worst of the year (along with a special extra bonus category) are:
The Best – IBM Watson stomps the world’s best human players in Jeopardy. In early 2011, IBM put its latest deep computing project, Watson, up against some of the best players in the world in a game of Jeopardy. Watson, consisting of hundreds of IBM Power CPUs, gazillions of bytes of memory and storage, and arguably the most sophisticated rules engine and natural language recognition capability ever developed, won hands down. If you haven’t seen the videos of this event, you should – watching the IBM system fluidly answer very tricky questions is amazing. There is no outward sign that it is parsing the question and then sorting through 200 to 300 million pages of data per second in the background as it assembles its answers. This is truly the computer industry at its best. IBM lived up to its brand image as the oldest and strongest technology company and showed us the potential for integrating computers into entirely new classes of solutions. Since the Jeopardy event, IBM has been working on commercializing Watson with an eye toward delivering domain-specific expert advisors. I recently listened to a presentation by a doctor participating in the trials of a Watson medical assistant, and the results were startling in terms of the potential to assist medical professionals in diagnostic procedures.
The proposed acquisitions of SuccessFactors by SAP and of Emptoris by IBM got me thinking about the impact of market consolidation on buyers, specifically the difference between dealing with independent specialists and dealing with technology giants selling a large portfolio of products and services. Sourcing professionals talk about wanting “one throat to choke,” but personally I’ve never met one with hands big enough to get round the neck of a huge vendor such as IBM or Oracle. Moreover, many of the giants organize their sales teams by product line, to ensure reps fully understand the product they are selling, rather than giving customers one account manager for the whole portfolio who may not understand any of it in sufficient depth. Our clients complain about having to deal with just as many reps as before the acquisitions. The reps all now have the same logo on their business cards, but they can’t fix problems outside their own area, nor negotiate based on the complete relationship. It seems that buyers end up like Hercules, wrestling either with a Nemean lion or with a Lernaean hydra.
The acquirers' press releases tend to take it for granted that customers will be better off with the one-stop shop. Bill McDermott, co-CEO of SAP, said, “Together, SAP and SuccessFactors will create tremendous business value for customers,” while Lars Dalgaard, founder and CEO of SuccessFactors, talked about “expanding relationships with SAP’s 176,000 customers.” Craig Hayman, general manager of industry solutions at IBM, said, “Adding Emptoris strengthens the comprehensive capabilities we deliver and enables IBM to meet the specific needs of chief procurement officers."
Just over a week after SAP published its intention to buy SuccessFactors, IBM announced yesterday that it will acquire Emptoris, one of the leading ePurchasing suite vendors. My colleague Andrew Bartels has described in his blog some of the implications for other vendors in the ePurchasing market:
My interest is in what the acquisition means for sourcing professionals, not just the CPOs who might be Emptoris customers, but the IT sourcing professionals setting strategies for dealing with major suppliers such as IBM and SAP.
· Emptoris customers should give IBM the benefit of the doubt, for now. Craig Hayman, general manager of IBM’s Industry Solutions division, assured me that he will take great care not to damage the strengths that attracted him to Emptoris, the same strengths that attracted you, its customers. Emptoris consistently does well in Forrester Wave™ evaluations, not only for its functionality but also for its focus on sourcing and procurement, its emphasis on ensuring customer success, and its consistent record of innovation. The good news is that Hayman doesn’t underestimate the challenges of integrating Emptoris into IBM, and he is confident he can overcome them. It will take a couple of years before we can judge his success.
IBM today announced that it will acquire Emptoris, a leading vendor of ePurchasing software products, with strengths in eSourcing, spend analysis, contract lifecycle management, services procurement, and supplier risk and performance management (see December 15, 2011, “IBM Acquisition of Emptoris Bolsters Smarter Commerce Initiative, Helps Reduce Procurement Costs and Risks”). That IBM made an acquisition of this kind was not a surprise to me, given that the heads of IBM's Smarter Commerce software team at the IBM Software Analyst Connect 2011 event on November 30 had laid out a vision of providing solutions for the buying activities of commerce as well as the sales, marketing, and services activities. Indeed, in the breakout session in which Craig Hayman, general manager of industry solutions at IBM, laid out the Smarter Commerce software strategy and showed the vendors that IBM had acquired in the sales, marketing, and services arenas, he said in response to my comment about the obvious gaps that IBM had in the buying area that we should expect to see IBM acquisitions in that area.
What was a surprise to me was that IBM acquired Emptoris. My prediction would have been that IBM would buy Ariba, because of the long relationship that has existed between these companies. In contrast, Emptoris has generally worked more with Accenture, and not as much with IBM.
Today HP announced a new set of technology programs and future products designed to move x86 server technology, for both Windows and Linux, more fully into the realm of truly mission-critical computing. My interpretation is that this is a combined defensive and offensive move on HP’s part: it protects HP as its Itanium/HP-UX portfolio slowly declines, and it offers attractive and potentially unique options for current and future customers who want to deploy increasingly critical services on x86 platforms.
Bearing in mind that the earliest of these elements will not be in place until approximately mid-2012, the key elements that HP is currently disclosing are:
ServiceGuard for Linux – This is a big win for Linux users on HP and removes a major operational and architectural hurdle for HP-UX migrations. ServiceGuard is a highly regarded clustering and HA facility on HP-UX, with many features for local and geographically distributed HA, and its absence on Linux is often cited as a risk in HP-UX migrations. Its availability by mid-2012 will remove yet another barrier to smooth migration from HP-UX to Linux and will help ensure that HP retains the business as it migrates off HP-UX.
Analysis engine for x86 – The analysis engine is internal software that provides system diagnostics, predictive failure analysis, and self-repair on HP-UX systems. HP will port it to selected x86 servers, although the delivery date is uncommitted. My guess is that since the analysis engine probably requires some level of hardware assist, it will be paired with the next item on the list…