On Tuesday November 8, after more than a year of pre-announcement disclosures that eventually left very little to the imagination, Intel finally announced the Itanium 9500, formerly known as Poulson. Added to this was the big surprise of HP announcing a refresh of its current line of Integrity servers, from blades to the large Superdome servers, with the new Itanium 9500.
As noted in an earlier post, the Itanium 9500 offers considerable performance improvements over its predecessors, and as instantiated in HP’s new Integrity line it is positioned as delivering between 2X and 3X the per-socket performance of previous Itanium 9300 (Tukwila) systems at approximately the same price. For those remaining committed to Itanium and its attendant OS platforms, notably HP-UX, this is unmitigated good news. The fly in the ointment (I have never seen a fly in any ointment, but it does sound gross), of course, is HP’s dispute with Oracle. Despite the initial judgment in HP’s favor, a) the trial is not over yet, and b) Oracle has already filed for an early appeal of the initial verdict, a step that would ordinarily have to wait until the second phase of the trial, scheduled for next year, concludes. The net takeaway is that the future availability of Oracle software on Itanium and HP-UX is not yet assured, so we really cannot advise the large number of Oracle users who will require Oracle 12 and later versions to relax yet.
Nathan Bedford Forrest, a Confederate general of despicable ideology and consummate tactics, spoke of “keepin up the skeer,” applying continued pressure to opponents to prevent them from regrouping and counterattacking. POWER7+, the most recent version of IBM’s POWER architecture, anticipated as a follow-up to the POWER7 for almost a year, was finally announced this week, and appears to be “keepin up the skeer” in terms of its competitive potential for IBM POWER-based systems. In short, it is a hot piece of technology that will keep existing IBM users happy and should help IBM maintain its impressive momentum in the Unix systems segment.
For the chip heads, the CPU is implemented in a 32 nm process, the same as Intel’s upcoming Poulson, and embodies some interesting evolutions in high-end chip design, including:
Use of DRAM instead of SRAM — IBM has pioneered the use of embedded DRAM (eDRAM) as L3 cache in place of the more standard and faster SRAM. In exchange for the loss of speed, eDRAM requires fewer transistors and less power, allowing IBM to pack a total of 80 MB (a lot) of shared L3 cache onto the chip, far more than any other product has ever sported.
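The transistor savings behind that tradeoff can be sketched with back-of-the-envelope arithmetic. This assumes the classic 6-transistor SRAM cell and a 1-transistor/1-capacitor eDRAM cell, and counts only the data-array transistors (tags, ECC, and peripheral logic are ignored), so treat it as an illustration rather than a die-level accounting:

```python
# Rough comparison of storage-transistor counts for an 80 MB L3 cache,
# assuming a 6T SRAM cell vs. a 1T1C eDRAM cell (data array only).

CACHE_MB = 80
BITS = CACHE_MB * 1024 * 1024 * 8          # data bits in the cache

sram_transistors = BITS * 6                # 6 transistors per SRAM bit
edram_transistors = BITS * 1               # 1 transistor per eDRAM bit

savings = sram_transistors - edram_transistors
print(f"SRAM data array : {sram_transistors / 1e9:.2f}B transistors")
print(f"eDRAM data array: {edram_transistors / 1e9:.2f}B transistors")
print(f"eDRAM saves ~{savings / sram_transistors:.0%} of the array transistors")
```

Even this crude model shows why an 80 MB SRAM cache would be impractical: the array alone would cost roughly 4 billion transistors, while eDRAM cuts that by about five-sixths.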
This week the California courts handed down a nice present for HP — a verdict confirming that Oracle was required to continue to deliver its software on HP’s Itanium-based Integrity servers. This was a major victory for HP, on the face of it giving them the prize they sought — continued availability of Oracle’s eponymous database on their high-end systems.
However, HP’s customers should not immediately assume that everything has returned to a “status quo ante.” Once Humpty Dumpty has fallen off the wall it is very difficult to put the pieces together again. As I see it, there are still three major elephants in the room that HP users must acknowledge before they make any decisions:
Oracle will appeal, and there is no guarantee of the outcome. The verdict could be upheld or it could be reversed. Even if it is upheld, the appeal represents a further delay in the start date from which Oracle will be measured for its compliance with the court-ordered development. Oracle will also continue to press its counterclaims against HP, but those do not directly relate to the continued development of Oracle software on Itanium.
Itanium is still nearing the end of its road map. A reasonable interpretation of the road map tea leaves that have been exposed puts the final Itanium release at about 2015 unless Intel decides to artificially split Kittson into two separate releases. Integrity customers must take this into account as they buy into the architecture in the last few years of Itanium’s life, although HP can be depended on to offer high-quality support for a decade after the last Itanium CPU rolls off Intel’s fab lines. HP has declared its intention to produce Integrity-level x86 systems, but OS support intentions are currently stated as Linux and Windows, not HP-UX.
OK, it’s time to stretch the 2012 writing muscles, and what better way to do it than with the time-honored “retrospective” format. But rather than try to itemize all the news and come up with a list of maybe a dozen or more interesting things, I decided instead to pick the best and the worst – events and developments that show the amazing range of the technology business, its potentials and its daily frustrations. So, drum roll, please. My personal nominations for the best and worst of the year (along with a special extra bonus category) are:
The Best – IBM Watson stomps the world’s best human players in Jeopardy. In early 2011, IBM put its latest deep computing project, Watson, up against some of the best players in the world in a game of Jeopardy. Watson, consisting of hundreds of IBM Power CPUs, gazillions of bytes of memory and storage, and arguably the most sophisticated rules engine and natural language recognition capability ever developed, won hands down. If you haven’t seen the videos of this event, you should – seeing the IBM system fluidly answer very tricky questions is amazing. There is no sense that it is parsing the question and then sorting through 200 to 300 million pages of data per second in the background as it assembles its answers. This is truly the computer industry at its best. IBM lived up to its brand image as the oldest and strongest technology company and showed us the potential for integrating computers into entirely new classes of solutions. Since the Jeopardy event, IBM has been working on commercializing Watson with an eye toward delivering domain-specific expert advisors. I recently listened to a presentation by a doctor participating in the trials of a Watson medical assistant, and the results were startling in terms of the potential to assist medical professionals in diagnostic procedures.
Today HP announced a new set of technology programs and future products designed to move x86 server technology for both Windows and Linux more fully into the realm of truly mission-critical computing. My interpretation is that these moves are both defensive and offensive: they protect HP as its Itanium/HP-UX portfolio slowly declines, and they offer attractive and potentially unique options for current and future customers who want to deploy increasingly critical services on x86 platforms.
Bearing in mind that the earliest of these elements will not be in place until approximately mid-2012, the key elements that HP is currently disclosing are:
ServiceGuard for Linux – This is a big win for Linux users on HP, and removes a major operational and architectural hurdle for HP-UX migrations. ServiceGuard is a highly regarded clustering and HA facility on HP-UX, and includes many features for local and geographically distributed HA; its absence is often cited as a risk in HP-UX migrations. The availability of ServiceGuard by mid-2012 will remove yet another barrier to smooth migration, and will help ensure that HP retains the business as it moves from HP-UX to Linux.
Analysis engine for x86 – The analysis engine is internal software that provides system diagnostics, predictive failure analysis and self-repair on HP-UX systems. HP will port it to selected x86 servers, although the delivery date is uncommitted. My guess is that since the analysis engine probably requires some level of hardware assist, it will be paired with the next item on the list…
There has been a lot of ill-considered press coverage about the “death” of UNIX and coverage of the wholesale migration of UNIX workloads to Linux, some of which (the latter, not the former) I have contributed to. But to set the record straight, the extinction of UNIX is not going to happen in our lifetime.
While UNIX revenues are not growing at any major clip, it appears as if they have actually had a slight uptick over the past year, probably due to a surge by IBM, and seem to be nicely stuck in the $18B to $20B annual range. But what is important is the “why,” not the exact dollar figure.
UNIX on proprietary RISC architectures will stay around for several reasons, which primarily revolve around its being the only close alternative to mainframes with regard to specific high-end operational characteristics:
Performance – If you need the biggest single-system SMP OS image, UNIX is still the only realistic commercial alternative other than mainframes.
Isolated bulletproof partitionability – If you want to run workloads on dynamically scalable and electrically isolated partitions with the option to move workloads between them while running, then UNIX is your answer.
Near-ultimate availability – If you are looking for the highest levels of reliability and availability excluding mainframes and custom fault-tolerant (FT) systems, UNIX is the answer. It still possesses slight availability advantages, especially if you factor in the more robust online maintenance capabilities of the leading UNIX OS variants.
Last year I wrote about Oracle’s new plans for SPARC, anchored by a new line of SPARC CPUs engineered in conjunction with Fujitsu (Does SPARC have a Future?), and commented that the first deliveries of this new technology would probably be in early 2012, and until we saw this tangible evidence of Oracle’s actual execution of this road map we could not predict with any confidence the future viability of SPARC.
The T4 CPU
Fast forward a year and Oracle has delivered the first of the new CPUs, ahead of schedule and with impressive gains in performance that make it look like SPARC will remain a viable platform for years. Specifically, Oracle has introduced the T4 CPU and systems based on it. The T4, an evolution of Oracle’s highly threaded T-Series architecture, is implemented with an entirely new core that will form the basis, with variations in number of threads versus cores and cache designs, of the future M and T series systems. The M series will have fewer threads and more performance per thread, while the T CPUs will, like their predecessors, emphasize throughput for highly threaded workloads. The new T4 will have 8 cores, and each core will have 8 threads. While the T4 emphasizes highly threaded workload performance, it is important to note that Oracle has also radically improved single-thread performance, claiming a 5X per-thread improvement over its predecessors, which greatly improves the T4’s utility as a CPU for less thread-intensive workloads as well.
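The thread arithmetic above can be sketched in a few lines. This is a rough model using only the figures quoted in this post (8 cores, 8 threads per core, and Oracle’s claimed 5X per-thread gain, which is a vendor claim rather than an independent benchmark):

```python
# Rough model of T4 thread capacity, using only the figures quoted
# above. The 5X per-thread speedup is Oracle's claim, not a benchmark.

CORES = 8
THREADS_PER_CORE = 8
PER_THREAD_SPEEDUP = 5.0

total_threads = CORES * THREADS_PER_CORE
print(f"Hardware threads per socket: {total_threads}")

# A single-threaded job sees (per the claim) roughly the per-thread gain;
# a fully threaded job's aggregate gain depends on memory, I/O, and
# scaling behavior, so no aggregate multiplier is inferred here.
print(f"Claimed per-thread uplift: {PER_THREAD_SPEEDUP:.0f}X")
```

The point of the arithmetic is that a single T4 socket presents 64 hardware threads to the OS, while the claimed per-thread gain addresses the T-Series’ historical weakness on lightly threaded work.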
At the Hot Chips conference last week, Intel disclosed additional details about the upcoming Poulson Itanium CPU due for shipment early next year. For Itanium loyalists (essentially committed HP-UX customers) the disclosures are a ray of sunshine among the gloomy news that has been the lot of Itanium devotees recently.
Poulson will bring several significant improvements to Itanium in both performance and reliability. On the performance side, we have significant improvements on several fronts:
Process – Poulson will be manufactured with the same 32 nm semiconductor process that will (at least for a while) be driving the high-end Xeon processors. This is goodness all around – performance will improve and Intel now can load its latest production lines more efficiently.
More cores and parallelism – Poulson will be an 8-core processor with a whopping 54 MB of on-chip cache, and Intel has doubled the width of the multi-issue instruction pipeline, from 6 to 12 instructions. Combined with improved hyperthreading, the combination of 2X cores and 2X the total number of potential instructions executed per clock cycle by each core hints at impressive performance gains.
Architecture and instruction tweaks – Intel has added additional instructions based on analysis of workloads. This kind of tuning of processor architectures seldom results in major gains in performance, but every small increment helps.
On June 15, HP announced that it had filed suit against Oracle, saying in a statement:
“HP is seeking the court’s assistance to compel Oracle to:
Reverse its decision to discontinue all software development on the Itanium platform
Reaffirm its commitment to offer its product suite on HP platforms, including Itanium;
Immediately reset the Itanium core processor licensing factor consistent with the model prior to December 1, 2010 for RISC/EPIC systems
HP also seeks:
Injunctive relief, including an order prohibiting Oracle from making false and misleading statements regarding the Itanium microprocessor or HP’s Itanium-based servers and remedying the harm caused by Oracle’s conduct.
Damages and fees and other standard remedies available in cases of this nature.”
A recent RFP for consulting services regarding strategic platforms for SAP, from a major European company, got me thinking about the whole subject of the use and abuse of market share histories and forecasts. Among other things, the RFP requested historical and forecast data for all the relevant platforms, broken down by region and a couple of other factors.
The merry crew of I&O elves here at Forrester do a lot of consulting for companies all over the world on major strategic technology platform decisions – management software, DR and HA, server platforms for major applications, OS and data center migrations, etc. As you can imagine, these are serious decisions for the client companies, and we always approach these projects with an awareness of the fact that real people will make real decisions and spend real money based on our recommendations.
The client companies themselves usually approach these as serious diligences, and usually have very specific items they want us to consider, almost always very much centered on things that matter to them and are germane to their decision.
The one exception is market share history and forecasts for the relevant vendors under consideration. For some reason, some companies (my probably not statistically defensible impression is that it is primarily European and Japanese companies) think that there is some magic implied by these numbers. As you can probably guess from this elaborate lead-in, I have a very different take on their utility.