Oracle Releases Solaris 11 — Game Changer Or Place Keeper?

Oracle recently announced the availability of Solaris 11 Express, the first iteration of its Solaris 11 product cycle. The feature set of this release tracks what Oracle promised at its August analyst event this year, including:

  • Scalability enhancements that prepare it for future systems with higher core counts and the need to schedule very large numbers of threads.
  • Improvements to ZFS, Oracle’s highly scalable file system.
  • Reduction of boot times to roughly 10 seconds, a truly impressive accomplishment.
  • Optimizations to support Oracle Exadata and Exalogic integrated solutions. While some of these changes may be very specific to Oracle’s stack, most are almost certain to improve any application that requires some combination of high thread counts, large memory, and low-latency communications over either 10G Ethernet or InfiniBand.
  • Improvements in availability due to reductions in the number of reboot scenarios, improved patching, and better error recovery. This is hard to measure, but Oracle claims it is close to an OS that does not need to come down for normal maintenance, a goal of all the major UNIX vendors and long a signature of mainframe environments.

Lies, Damned Lies, And Statistics . . . And Benchmarks

I have been working on a research document, to be published this quarter, on the impact of 8-socket x86 servers based on Intel’s new Xeon 7500 CPU. In a nutshell, these systems deliver the performance of the best-of-breed RISC/UNIX systems of three years ago at a substantially better price, and their performance improvement trajectory has been steeper than that of competing technologies for the past decade.

This is probably not shocking news, and it is not the subject of this post, although I would encourage you to read the document when it is finally published. During my research I spent time trying to prove or disprove my thesis that x86 system performance solidly overlaps that of RISC/UNIX, using available benchmark results. The process highlighted for me the limitations of standardized benchmarks for performance comparisons. There are now so many benchmarks that system vendors run each one on only selected subsets of their product lines, if at all. Additionally, most benchmarks suffer from several common flaws:

  • They are results from high-end configurations, in many cases far beyond any realistic use case, and the results cannot reliably be interpolated down to smaller, more realistic configurations (see the sketch after this list).
  • They are often the result of teams of very smart experts tuning the system configuration, application, and system software parameters for optimal results. For a large benchmark such as SAP or TPC, it is probably reasonable to assume that over 1,000 variables are involved in the tuning effort. This makes the results very much like EPA mileage figures: the consumer is guaranteed not to exceed them.
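
To make the interpolation problem concrete, here is a minimal sketch in Python. Every number in it is hypothetical, and Amdahl’s law stands in for whatever scaling curve a real workload actually follows; the point is only that performance scales nowhere near linearly with core count, so a score from a large configuration tells you little about a small one.

    # A minimal sketch (hypothetical numbers, not vendor data) of why
    # benchmark results from large configurations cannot simply be
    # interpolated down. Amdahl's law stands in for real scaling behavior.

    def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
        """Ideal speedup over one core for a given parallel fraction."""
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

    PARALLEL_FRACTION = 0.95  # assumed workload characteristic
    BIG_CORES = 64            # hypothetical published configuration
    BIG_SCORE = 40000.0       # hypothetical benchmark score at 64 cores

    # Naive linear interpolation down to an 8-core configuration:
    linear_estimate = BIG_SCORE * 8 / BIG_CORES

    # Estimate derived from the assumed scaling curve instead:
    single_core = BIG_SCORE / amdahl_speedup(PARALLEL_FRACTION, BIG_CORES)
    curve_estimate = single_core * amdahl_speedup(PARALLEL_FRACTION, 8)

    print(f"linear estimate at 8 cores: {linear_estimate:,.0f}")  # 5,000
    print(f"curve estimate at 8 cores:  {curve_estimate:,.0f}")   # ~15,370
    # The naive estimate is off by roughly 3x here, and the error runs
    # the other way when extrapolating small results upward.

And this understates the problem: real benchmark disclosures change memory, storage, and software tuning along with core count, so no single curve connects the published configurations at all.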