Just over a week after SAP announced its intention to buy SuccessFactors, IBM announced yesterday that it will acquire Emptoris, one of the leading ePurchasing suite vendors. My colleague Andrew Bartels has described in his blog some of the implications for other vendors in the ePurchasing market:
My interest is in what the acquisition means for sourcing professionals, not just the CPOs who might be Emptoris customers, but the IT sourcing professionals setting strategies for dealing with major suppliers such as IBM and SAP.
· Emptoris customers should give IBM the benefit of the doubt, for now. Craig Hayman, general manager of IBM’s Industry Solutions division, assured me that he would take great care not to damage the strengths that attracted him to Emptoris, the same ones that attracted you, its customers. Emptoris consistently does well in Forrester Wave™ evaluations, not only for its functionality but also for its focus on sourcing and procurement, its emphasis on ensuring customer success, and its consistent record of innovation. The good news is that Hayman doesn’t underestimate the challenges of integrating Emptoris into IBM, yet he is confident he can overcome them. It will take a couple of years before we can judge his success.
IBM today announced that it will acquire Emptoris, a leading vendor of ePurchasing software products, with strengths in eSourcing, spend analysis, contract lifecycle management, services procurement, and supplier risk and performance management (see December 15, 2011, “IBM Acquisition of Emptoris Bolsters Smarter Commerce Initiative, Helps Reduce Procurement Costs and Risks”). That IBM made an acquisition of this kind did not surprise me: at the IBM Software Analyst Connect 2011 event on November 30, the heads of IBM's Smarter Commerce software team laid out a vision of providing solutions for the buying activities of commerce as well as for the sales, marketing, and services activities. Indeed, in the breakout session in which Craig Hayman, general manager of industry solutions at IBM, presented the Smarter Commerce software strategy and listed the vendors IBM had acquired in the sales, marketing, and services arenas, I pointed out the obvious gaps that IBM had in the buying area, and he responded that we should expect to see IBM acquisitions there.
What did surprise me was that IBM acquired Emptoris. My prediction would have been that IBM would buy Ariba, given the long relationship between those two companies. Emptoris, in contrast, has generally worked more with Accenture and less with IBM.
Today HP announced a new set of technology programs and future products designed to move x86 server technology, for both Windows and Linux, more fully into the realm of truly mission-critical computing. My interpretation is that these moves are both defensive and offensive on HP’s part: they will protect HP as its Itanium/HP-UX portfolio slowly declines, while offering attractive and potentially unique options for current and future customers who want to deploy increasingly critical services on x86 platforms.
Bearing in mind that the earliest of these elements will not be in place until approximately mid-2012, the key elements that HP is currently disclosing are:
ServiceGuard for Linux – This is a big win for Linux users on HP and removes a major operational and architectural hurdle for HP-UX migrations. ServiceGuard is a highly regarded clustering and high-availability (HA) facility on HP-UX that includes many features for local and geographically distributed HA, and its absence on Linux is often cited as a risk in HP-UX migrations. Its availability by mid-2012 will remove yet another barrier to smooth migration from HP-UX to Linux and will help HP retain the business as it moves off HP-UX.
Analysis engine for x86 – The analysis engine is internal software that provides system diagnostics, predictive failure analysis, and self-repair on HP-UX systems. HP will port it to selected x86 servers, although the delivery date is uncommitted. My guess is that, since the analysis engine probably requires some level of hardware assist, it will be paired with the next item on the list…
I love the idea of Edmonton’s Planning Academy, which offers planning courses to anyone in the city. What a great way to get citizens involved in the complex challenges of city planning! It made me want to live in Edmonton. OK, so maybe I’m kind of addicted to school and to taking classes (corporate learning programs, continuing studies programs, and even the Red Cross have seen me in their classrooms in recent years). But really, this one looks so cool I had to write about it.
The City of Edmonton’s Planning Academy aims to “provide a better understanding of the planning and development process in Edmonton,” and it grants a Certificate of Participation upon completion of three core courses and one elective. The three core courses are:
Emerging ARM server vendor Calxeda has been hinting for some time that it had a significant partnership announcement in the works, and while we didn’t necessarily disbelieve it, we hear a lot of claims from startups telling us to “stay tuned” for something big. Sometimes they pan out; sometimes they simply go away. But this morning Calxeda surpassed our expectations by unveiling just one major systems partner – it just happens to be Hewlett-Packard, which dominates the worldwide market for x86 servers.
At its core (an unintended but not bad pun), HP’s Hyperscale business unit’s Project Moonshot and Calxeda’s server technology are about improving the efficiency of web and cloud workloads, promising improvements in excess of 90% in power efficiency and similar gains in physical density compared with current x86 solutions. As I noted in my first post on ARM servers and in other documents, even if these estimates turn out to be exaggerated, there is still a generous window within which to do much, much better than current technologies. And workloads (such as memcached, Hadoop, and static web serving) will be selected for their fit to this new platform, so the workloads that run on it will potentially come close to the cases quoted by HP and Calxeda.
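To make a “greater than 90% power efficiency improvement” claim concrete, here is a back-of-envelope sketch. All of the per-node wattage and throughput figures below are purely hypothetical illustrations, not HP or Calxeda data; the point is only to show how such a percentage is computed when each low-power node does a fraction of a big node's work.

```python
# Back-of-envelope check of a ">90% power efficiency" style claim.
# All numbers are hypothetical assumptions for illustration only.
x86_node_watts = 250.0    # assumed draw of a conventional x86 web server
arm_node_watts = 5.0      # assumed draw of one low-power ARM server node
x86_throughput = 1.0      # normalized requests/sec served by the x86 node
arm_throughput = 0.25     # assume one ARM node serves a quarter of that load

# Energy cost per unit of work (watts per normalized request/sec)
x86_energy = x86_node_watts / x86_throughput   # 250.0
arm_energy = arm_node_watts / arm_throughput   # 20.0

# Fractional improvement: how much less energy per unit of work
improvement = 1 - arm_energy / x86_energy
print(f"{improvement:.0%}")  # prints: 92%
```

Under these assumed numbers, even though a single ARM node serves only a quarter of the load, its energy per unit of work is an order of magnitude lower, which is how vendors arrive at headline figures above 90%.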
There has been a lot of ill-considered press coverage about the “death” of UNIX, along with coverage of the wholesale migration of UNIX workloads to Linux, some of which (the latter, not the former) I have contributed to. But to set the record straight: the extinction of UNIX is not going to happen in our lifetime.
While UNIX revenues are not growing at any major clip, they appear to have ticked up slightly over the past year, probably due to a surge by IBM, and seem to be nicely stuck in the $18B–$20B annual range. But what is important is the “why,” not the exact dollar figure.
UNIX on proprietary RISC architectures will stay around for several reasons, which primarily revolve around its being the only close alternative to mainframes with regard to specific high-end operational characteristics:
Performance – If you need the biggest single-system SMP OS image, UNIX is still the only realistic commercial alternative other than mainframes.
Isolated, bulletproof partitionability – If you want to run workloads on dynamically scalable and electrically isolated partitions, with the option to move workloads between them while running, then UNIX is your answer.
Near-ultimate availability – If you are looking for the highest levels of reliability and availability short of mainframes and custom fault-tolerant (FT) systems, UNIX is the answer. It still possesses slight availability advantages, especially if you factor in the more robust online maintenance capabilities of the leading UNIX OS variants.
I just spent several days at Dell World and came away with the impression of a company that is really trying to change its image. Old Dell was boxes, discounts, and a low-cost supply chain. New Dell is applications, solutions, cloud (now there’s a surprise!), and investments in software and integration. OK, good image, but what’s the reality? All in all, I think they are telling the truth about their intentions, and their investments continue to be aligned with those intentions.
As I wrote about a year ago, Dell seems intent on climbing up the enterprise food chain. Its investment in several major acquisitions, including Perot Systems for services and a string of advanced storage, network, and virtual infrastructure solution providers, has kept the momentum going, and the products have been following to market. At the same time, I see solid signs of continued investment in underlying hardware, and Dell’s status as the #1 x86 server vendor in North America and #2 worldwide remains an indication of its ongoing success in its traditional niches. While Dell is not a household name in vertical solutions, it has competent offerings in health care, education, and trading, and several of the initiatives I mentioned last year are definitely further along and more mature, including continued refinement of its VIS offerings and deep integration of its much-improved DRAC systems management software into mainstream management consoles from VMware and Microsoft.
OK, out of respect for your time, now that I’ve caught you with a title that promises some drama, I’ll cut to the chase and tell you that I definitely lean toward the former. Having spent a couple of days here at Oracle OpenWorld poking around the various flavors of Engineered Systems, including the established Exadata and Exalogic along with the new SPARC SuperCluster (all of a week old) and the newly announced Exalytics system for big data analytics, I am pretty convinced that they represent an intelligent and modular set of optimized platforms for specific workloads. Beyond being modular, they give me the strong impression of a “composable” architecture: the various elements of processing nodes, Oracle storage nodes, ZFS file nodes, and other components can clearly be recombined over time as customer requirements dictate, either as standard products or as custom configurations.
In the good old days, computer industry trade shows were bigger-than-life events – booths with barkers and actors, ice cream and espresso bars, in-booth games, magic acts, and surging crowds gawking at technology. In recent years, they have for the most part become sad shadows of their former selves. The great SHOWS are gone, replaced with button-down vertical and regional events where you are lucky to get a pen or a miniature candy bar for your troubles.
Enter Oracle OpenWorld. Mix 45,000 people, hundreds of exhibitors, and one of the world’s largest software and systems companies looking to make an impression, and you have the new generation of technology extravaganza. The scale is extravagant: the event takes up the entire Moscone Center complex (North, South, and West) along with a couple of hotel venues, closes off a block of a major San Francisco street for a week, and throws a little evening party for 20,000 or 30,000 people.
But mixed in with the hoopla – which included wheel-of-fortune giveaways that had hundreds of people snaking around the already crowded exhibition floor in serpentine lines, mini golf and whack-a-mole games in the exhibit booths, and the aforementioned espresso and ice cream stands – there was genuine content and the public face of some significant trends. So far, after 24 hours, some major messages come through loud and clear:
Smart cities come in all shapes and sizes. There is not one definition of smart. Think about the terms “street smart” and “book smart.” When I think about the initiatives or reforms that we’re seeing across cities, I’ve started categorizing them along these lines. New initiatives like sensor-based parking and traffic optimization fall into street smart, while streamlining of back office processes and applications tend to be more book smart. And as we know, it takes all kinds.
The hype of smart cities, however, has focused on the sexy new kid on the block. Everything sensor-based and “intelligent” has gotten top billing from vendors. However, many cities need to start cracking the books first.
Here are a few ways to start:
Rationalization of back office applications. Sprawling, or at least siloed, IT infrastructure and business apps can be upgraded and consolidated. Several CIOs I’ve spoken with have mentioned that this is a big challenge: department heads don’t want to give up control over what they see as their domain. Big cities find themselves with multiple enterprise resource planning (ERP) systems running across different departments: Parks and Recreation licenses ERP from one vendor; Public Works subscribes to ERP services from another; Transportation manages its fleet with yet another.