In October, with great fanfare, the Open Data Center Alliance unfurled its banners. The ODCA is a consortium of approximately 50 large IT consumers, including large manufacturing, hosting and telecom providers, with the avowed intent of developing standards for interoperable cloud computing. In addition to the roster of users, the announcement highlighted Intel in an ambiguous role as a technology advisor to the group. The ODCA believes that it will achieve some weight in the industry due to its estimated $50 billion per year of cumulative IT purchasing power, and the trade press was full of praise for influential users driving technology, as opposed to allowing rapacious vendors such as HP and IBM to drive users down proprietary paths that lead to vendor lock-in.
Now that we’ve had a month or more to allow the purple prose to settle a bit, let’s look at the underlying claims, potential impact of the ODCA and the shifting roles of vendors and consumers of technology. And let’s not forget about the role of Intel.
First, let me state unambiguously that one of the core intentions of the ODCA is a good idea: the desire to develop common use case models that will, in turn, drive vendors to develop products that comply with those models, backed by the economic clout of the ODCA members (and, hopefully, a correlation between ODCA member requirements and those of a wider set of consumers). Vendors spend a lot of time talking to users and trying to understand their requirements, and having the ODCA as a proxy for the requirements of a lot of very influential customers will benefit all concerned.
As an immediate reaction to the recent announcement of Attachmate’s intention to acquire Novell, covered in depth by my colleagues and synthesized by Chris Voce in his recent blog post, I have received a string of inquiries about the probable fate of SUSE LINUX. Should we continue to invest? Will Attachmate kill it? Will it be sold?
Reduced to its essentials, the answer is that we cannot predict the eventual ownership of SUSE Linux, but it is almost certain to remain a viable and widely available Linux distribution. SUSE is one of the crown jewels of Novell’s portfolio: it is growing steadily, gaining market share, generating increasing revenues, and, from the outside at least, it appears to be a profitable business.
Attachmate has two choices with SUSE: retain it as a profitable growth engine and attachment point for other Attachmate software and services, or package it up for sale. In either case, it has to continue to invest in the product and its marketing. If Attachmate chooses to keep it, SUSE Linux will behave much as it did under Novell. If Attachmate sells it, the acquirer would be foolish to do anything else. Speculation about potential acquirers has included HP, IBM, Cisco and Oracle, all of whom could use a Linux distribution as an internal product component in addition to the software and service revenues it could generate. But aside from serving as an internal platform, for SUSE to have value as an industry alternative to Red Hat, it would have to remain vendor-agnostic and widely available.
With the inescapable caveat that this is a developing situation, my current take on SUSE Linux is that there is no reason to back away from it or to fear that it will disappear into the maw of some giant IT company.
SAP customers shouldn't worry about the financial hit. SAP can pay the damages without having to rein back R&D. The pain may also stimulate it to greater competition with Oracle, both commercially and technologically, which will be beneficial for IT buyers.
Was the award fair? Well, IANAL, so I can't answer that. But my question is, if the basis of the award was "if you take something from someone and you use it, you have to pay", as the juror said, does that mean SAP gets to keep the licenses for which the court is forcing it to pay?
The $1.3 billion verdict in the Oracle v. SAP case is surprising, given that the third-party support subsidiary of SAP, TomorrowNow, was fixing glitches and making compliance updates, not trying to resell the software. The jury decided that the appropriate damage award was the fair market value of the software that was illegally downloaded, rather than Oracle’s lost support revenues.
A news article by Bloomberg provides further insight into the jury’s thinking and the legal process. Quoting juror Joe Bangay, an auto body technician: “If you take something from someone and you use it, you have to pay.” Perhaps SAP should have made its case more in layman’s terms.
SAP is in a very difficult position, in that it faces the same threat of revenue loss from third-party support. It could not convincingly defend its entry into the third-party support business without legitimizing a business model that poses the same threat to its own lucrative maintenance revenues as it does to Oracle’s.
What happens to the third-party support business going forward? The size of the award potentially dampens customer interest in moving to third-party support, particularly with another case, Oracle v. Rimini Street, still pending. The SAP case, however, does not invalidate third-party support as a business. Third-party support, if carried out properly, offers an important option for enterprise application customers that are looking for relief from costly vendor maintenance contracts.
For SAP, the verdict is not only painful, but it prolongs the agony, because the company is compelled to appeal. SAP certainly has the financial wherewithal to pay the damages but was hoping to put this embarrassing debacle behind it.
Oracle recently announced the availability of Solaris 11 Express, the first iteration of its Solaris 11 product cycle. The feature set of this release is along the lines promised by Oracle at its August analyst event this year, including:
Scalability enhancements to prepare it for future systems with higher core counts and the need to schedule very large numbers of threads.
Improvements to ZFS, Oracle’s highly scalable file system.
Reduction of boot times to the range of 10 seconds — a truly impressive accomplishment.
Optimizations to support Oracle Exadata and Exalogic integrated solutions. While some of these changes may be very specific to Oracle’s stack, most of them are almost certain to benefit any application that requires some combination of high thread counts, large memory and low-latency communications over either 10G Ethernet or InfiniBand.
Improvements in availability due to reductions in the number of reboot scenarios, improvements in patching and improved error recovery. This is hard to measure, but Oracle claims it is close to an OS that does not need to come down for normal maintenance, a goal of all of the major UNIX vendors and long a signature of mainframe environments.
I have been working on a research document, to be published this quarter, on the impact of 8-socket x86 servers based on Intel’s new Xeon 7500 CPU. In a nutshell, these systems have the performance of the best-of-breed RISC/UNIX systems of three years ago, at a substantially better price, and their overall performance improvement trajectory has been steeper than competing technologies for the past decade.
This is probably not shocking news and is not the subject of this current post, although I would encourage you to read the document when it is finally published. During the course of researching it, I spent time trying to prove or disprove, with available benchmark results, my thesis that x86 system performance solidly overlapped that of RISC/UNIX. The process highlighted for me the limitations of using standardized benchmarks for performance comparisons. There are now so many benchmarks available that system vendors run each one only on selected subsets of their product lines, if at all. Additionally, most benchmarks suffer from several common flaws:
They report results from high-end configurations, in many cases far beyond any normal use case, and those results cannot be extrapolated to smaller, more realistic configurations.
They are often the result of teams of very smart experts tuning the system configuration, application and system software parameters for optimal results. For a large benchmark such as SAP or TPC, it is probably reasonable to assume that there are over 1,000 variables involved in the tuning effort. This makes the results very much like EPA mileage figures: the consumer is guaranteed not to exceed these numbers.
With about 41,000 attendees, 1,800 sessions, and a whopping 63,000-plus slides, Oracle OpenWorld 2010 (September 19-23) in San Francisco was certainly a mega event with more information than one could possibly digest or even collect in a week. While the main takeaway for every attendee depends, of course, on the individual’s area of interest, there was a strong focus this year on hardware due to the Sun Microsystems acquisition. I’m a strong believer in the integration story of “Hardware and Software. Engineered to Work Together.” and really liked the Iron Man 2 tie-in on display all around the event; but, because I’m an application guy, the biggest part of the story, including the launch of Oracle Exalogic Elastic Cloud, was a bit lost on me. And the fact that Larry Ellison basically repeated the same story in his two keynotes didn’t really resonate with me, until he came to what I was most interested in: Oracle Fusion Applications!
Fujitsu? Who? I recently attended Fujitsu’s global analyst conference in Boston, which gave me an opportunity to check in with the best-kept secret in the North American market. Even Fujitsu execs admit that many people in this largest of IT markets think that Fujitsu has something to do with film, and few of us have ever seen a Fujitsu system installed in the US unless it was a POS system.
So what is the management of this global $50 billion information and communications technology company, with a competitive portfolio of client, server and storage products and a global service and integration capability, going to do about its lack of presence in the world’s largest IT market? In a word, invest. Fujitsu’s management, judging from its history and what it has disclosed of its plans, intends to invest in the US over the next three to four years to consolidate its estimated $3 billion in North American business into a more manageable (simpler) set of operating companies, and to double down on hiring and selling into the North American market. The fact that Fujitsu has given itself multiple years to do so is very indicative of what I have always thought of as its greatest strength and one of its major weaknesses: it operates on Japanese time, so to speak. For an American company to undertake to build a presence over multiple years with seeming disregard for quarterly earnings would be almost unheard of, so Fujitsu’s management gets major kudos for that. On the other hand, years of observing the company from a distance also leads me to believe that its approach to solving problems inherently lacks the sense of urgency of some of its competitors.