Intel Discloses Details on “Poulson,” Next-Generation Itanium

Richard Fichera

This week at ISSCC, Intel made its first detailed public disclosures about its upcoming “Poulson” next-generation Itanium CPU. While by no means complete, the details it did disclose paint a picture of a competent product that will continue to keep the heat on in the high-end UNIX systems market. Highlights include:

  • Process — Poulson will be produced in a 32 nm process, skipping the intermediate 45 nm step that many observers expected to see as a step down from the current 65 nm Itanium process. This is a plus for Itanium consumers: jumping two process nodes at once roughly quadruples achievable transistor density, allowing denser circuits and cheaper chips. With an industry-record 3.1 billion transistors, Poulson needs all the help it can get keeping size and power down, and the new process also promises major improvements in power efficiency.
  • Cores and cache — Poulson will have 8 cores and 54 MB of on-chip cache, a huge amount even for a cache-sensitive architecture like Itanium. It will also move from the current 6-issue pipeline to a 12-issue pipeline, promising to extract more performance from existing code without any recompilation.
  • Compatibility — Poulson is socket- and pin-compatible with the current Itanium 9300 CPU, which means HP can move into production shipments more quickly once the new CPU is available.
Read more

The Global Software Market In Transformation: Findings From The Forrsights Software Survey, Q4 2010

Holger Kisker

Two months ago, we announced our upcoming Forrester Forrsights Software Survey, Q4 2010. Now the data is back from more than 2,400 respondents in North America and Europe, providing deep and sometimes surprising insights into software market dynamics today and over the next 24 months.

We’d like to give you a sneak preview of interesting results around some of the most important trends in the software market: cloud computing, integrated information technology, business intelligence, mobile strategy, and overall software budgets and buying preferences.

Companies Start To Invest More Into Innovation In 2011

After the recent recession, companies are starting to invest more in 2011: 12% of companies plan to increase their software budgets by more than 10%, and another 22% plan increases of between 5% and 10%. At the same time, companies will invest a significant part of the additional budget in new solutions. While 50% of total software budgets still goes to software operations and maintenance (Figure 1), that share has dropped significantly from 55% in 2010; spending on new software licenses will accordingly increase from 23% to 26%, and custom-development budgets from 23% to 24%, in 2011. The quick arithmetic sketch below makes the reallocation concrete.
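
As a back-of-envelope illustration only: the percentage splits come from the survey, but the $10 million total budget in this sketch is a hypothetical figure, not a survey data point.

    # Back-of-envelope view of the 2010 -> 2011 software budget shift.
    # The percentage splits are from the survey; the $10M total budget is
    # a hypothetical figure used purely for illustration.

    TOTAL_BUDGET = 10_000_000  # hypothetical company software budget, in dollars

    splits = {
        # category: (2010 share, 2011 share)
        "operations & maintenance": (0.55, 0.50),
        "new software licenses": (0.23, 0.26),
        "custom development": (0.23, 0.24),
    }

    for category, (share_2010, share_2011) in splits.items():
        shift = (share_2011 - share_2010) * TOTAL_BUDGET
        print(f"{category:26s} {share_2010:.0%} -> {share_2011:.0%} "
              f"(shift: {shift:+,.0f} dollars)")

For a company of that size, the five-point drop in the maintenance share frees roughly $500,000 for new licenses and custom development.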

Cloud Computing Is Getting Serious

In this year’s survey, we took a much deeper look at companies’ strategies and plans around cloud computing, beyond simple adoption numbers. We tested the extent to which cloud computing is making its way from complementary services into business-critical processes, replacing core applications and moving sensitive data into public clouds.

Read more

Oracle Rolls Out Private Cloud Architecture And World-Record Transaction Performance

Richard Fichera

On Dec. 2, Oracle announced the next move in its program to integrate its hardware and software assets: the introduction of Oracle Private Cloud Architecture, an integrated infrastructure stack combining InfiniBand and/or 10G Ethernet fabric, integrated virtualization, management, and servers with software content, both Oracle’s and customer-supplied. Oracle has rolled out the architecture as a general platform for a variety of cloud environments, along with three specific implementations (Exadata, Exalogic, and the new Sunrise Supercluster) as proof points for the architecture.

Exadata has been dealt with extensively in other venues, both inside Forrester and externally, and appears to deliver the goods for I&O groups who require efficient consolidation and maximum performance from an Oracle database environment.

Exalogic is a middleware-targeted companion to the Exadata hardware architecture (or another instantiation of Oracle’s private cloud architecture, depending on how you look at it), presenting an integrated infrastructure stack ready to run either Oracle or third-party apps, although Oracle is positioning it as a Java middleware platform. It consists of the following major components integrated into a single rack:

  1. Oracle x86 or T3-based servers and storage.
  2. Oracle quad data rate (QDR) InfiniBand switches and the Oracle Solaris gateway, which makes the InfiniBand network look like an extension of the enterprise 10G Ethernet environment.
  3. Oracle Linux or Solaris.
  4. Oracle Enterprise Manager Ops Center for management.
Read more

Open Data Center Alliance – Lap Dog Or Watch Dog?

Richard Fichera

In October, with great fanfare, the Open Data Center Alliance unfurled its banners. The ODCA is a consortium of approximately 50 large IT consumers, including large manufacturing, hosting, and telecom providers, with the avowed intent of developing standards for interoperable cloud computing. In addition to the roster of users, the announcement highlighted Intel in an ambiguous role as technology advisor to the group. The ODCA believes it will carry weight in the industry due to its estimated $50 billion per year of cumulative IT purchasing power, and the trade press was full of praise for influential users driving technology, as opposed to allowing rapacious vendors such as HP and IBM to drive users down proprietary paths that lead to vendor lock-in.

Now that we’ve had a month or more to allow the purple prose to settle a bit, let’s look at the underlying claims, the potential impact of the ODCA, and the shifting roles of vendors and consumers of technology. And let’s not forget about the role of Intel.

First, let me state unambiguously that one of the core intentions of the ODCA is a good idea: developing common use case models that, backed by the economic clout of ODCA members, will in turn drive vendors to develop products that comply with those models (and hopefully ODCA member requirements will correlate with those of a wider set of consumers). Vendors spend a lot of time talking to users and trying to understand their requirements, and having the ODCA as a proxy for the requirements of a lot of very influential customers will be a benefit to all concerned.

Read more

What Will Be The Fate Of SUSE Linux?

Richard Fichera

As an immediate reaction to the recent announcement of Attachmate’s intention to acquire Novell, covered in depth by my colleagues and synthesized by Chris Voce in his recent blog post, I have received a string of inquiries about the probable fate of SUSE Linux. Should we continue to invest? Will Attachmate kill it? Will it be sold?

Reduced to its essentials, the answer is that we cannot predict the eventual ownership of SUSE Linux, but it is almost certain to remain a viable and widely available Linux distribution. SUSE is one of the crown jewels of Novell’s portfolio: steadily growing, gaining market share, generating increasing revenues, and, from the outside at least, a profitable business.

Attachmate has two choices with SUSE: retain it as a profitable growth engine and attachment point for other Attachmate software and services, or package it up for sale. In either case, it has to continue to invest in the product and its marketing. If Attachmate chooses to keep it, SUSE Linux will likely operate much as it did under Novell. If Attachmate sells it, the acquirer would be foolish to do anything else. Speculation about potential acquirers has included HP, IBM, Cisco, and Oracle, all of whom could make use of a Linux distribution as an internal product component in addition to the software and service revenues it could engender. But beyond any internal platform role, for SUSE to have value as an industry alternative to Red Hat, it would have to remain vendor-agnostic and widely available.

With the inescapable caveat that this is a developing situation, my current take on SUSE Linux is that there is no reason to back away from it or to fear that it will disappear into the maw of some giant IT company.

SUSE users, please weigh in.

What Does Oracle's Court Victory Mean For IT Sourcing Professionals? Not Much, Actually.

Duncan Jones

Yesterday, Oracle won a surprisingly large award from an Oakland jury in its case against SAP over SAP's now-defunct TomorrowNow subsidiary.

[Photo: Oakland Raiders fans] The Oakland jury, pictured after the verdict.

As my colleague Paul Hamerman blogs (see his post below), SAP wasn't able to test the validity of the third-party support model, so this case has no bearing on the separate case between Oracle and Rimini Street. I've stated previously that IT sourcing managers should not be put off by that dispute: Don't Let Oracle's Lawsuit Dissuade You From Considering 3SPs, But Recognize The Risks.

SAP customers shouldn't worry about the financial hit: SAP can pay the damages without having to rein in R&D. The pain may also spur it to compete harder with Oracle, both commercially and technologically, which will be beneficial for IT buyers.

Was the award fair? Well, IANAL, so I can't answer that. But here's my question: if the basis of the award was "if you take something from someone and you use it, you have to pay," as the juror said, does that mean SAP gets to keep the licenses the court is forcing it to pay for?

Oracle Wins $1.3 Billion Award Over SAP

Paul Hamerman

The $1.3 billion verdict in the Oracle v. SAP case is surprising, given that TomorrowNow, the third-party support subsidiary of SAP, was fixing glitches and making compliance updates, not trying to resell the software. The jury felt that the appropriate damage award was the fair market value of the software that was illegally downloaded, rather than Oracle’s lost support revenues.

A news article by Bloomberg provides further insight into the jury’s thinking and the legal process. Quoting juror Joe Bangay, an auto body technician: “If you take something from someone and you use it, you have to pay.” Perhaps SAP should have made its case more in layman’s terms.

SAP is in a very difficult position, in that it faces the same threat of revenue loss from third-party support. It was unable to convincingly defend its entry into the third-party support business for fear of legitimizing a model that threatens its own lucrative maintenance business just as it threatens Oracle’s.

What happens to the third-party support business going forward? The size of the award potentially dampens customer interest in moving to third-party support, particularly with the Oracle v. Rimini Street case still pending. The SAP case, however, does not invalidate third-party support as a business. Third-party support, if carried out properly, offers an important option for enterprise application customers looking for relief from costly vendor maintenance contracts.

For SAP, the verdict is not only painful but also prolongs the agony, because the company is compelled to appeal. SAP certainly has the financial wherewithal to pay the damages but was hoping to put this embarrassing debacle behind it.

Oracle Releases Solaris 11 — Game Changer Or Place Keeper?

Richard Fichera

Oracle recently announced the availability of Solaris 11 Express, the first iteration of its Solaris 11 product cycle. The feature set of this release is along the lines promised by Oracle at its August analyst event this year, including:

  • Scalability enhancements to set it up for future systems with higher core counts and requirements to schedule large numbers of threads.
  • Improvements to ZFS, Oracle’s highly scalable file system.
  • Reduction of boot times to the range of 10 seconds — a truly impressive accomplishment.
  • Optimizations to support Oracle Exadata and Exalogic integrated solutions. While some of these changes may be very specific to Oracle’s stack, most of them are almost certain to improve any application that requires some combination of high thread counts, large memory and low-latency communications with either 10G Ethernet or Infiniband.
  • Improvements in availability due to reductions in the number of reboot scenarios, improvements in patching, and improved error recovery. This is hard to measure, but Oracle claims it is close to an OS that does not need to come down for normal maintenance, a goal of all the major UNIX vendors and long a signature of mainframe environments.
Read more

Lies, Damned Lies, And Statistics . . . And Benchmarks

Richard Fichera

I have been working on a research document, to be published this quarter, on the impact of 8-socket x86 servers based on Intel’s new Xeon 7500 CPU. In a nutshell, these systems have the performance of the best-of-breed RISC/UNIX systems of three years ago at a substantially better price, and their overall performance improvement trajectory has been steeper than that of competing technologies for the past decade.

This is probably not shocking news and is not the subject of this current post, although I would encourage you to read the document when it is finally published. During the course of researching it, I spent time trying to prove or disprove, with available benchmark results, my thesis that x86 system performance solidly overlaps that of RISC/UNIX. The process highlighted for me the limitations of using standardized benchmarks for performance comparisons. There are now so many benchmarks available that system vendors run each benchmark only on selected subsets of their product lines, if at all. Additionally, most benchmarks suffer from several common flaws:

  • They are results from high-end configurations, in many cases far beyond any normal use case, and the results cannot be interpolated to smaller, more realistic configurations (a quick sketch after this list illustrates why).
  • They are often the result of teams of very smart experts tuning the system configurations, application, and system software parameters for optimal results. For a large benchmark such as SAP or TPC, it is probably reasonable to assume that there are over 1,000 variables involved in the tuning effort. This makes the results very much like EPA mileage figures: the consumer is guaranteed not to exceed these numbers.
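
To see why interpolation misleads, here is a minimal sketch assuming a simple Amdahl's-law scaling model; the 5% serial fraction, the 64-socket configuration, and the benchmark score are all hypothetical numbers chosen for illustration, not vendor data.

    # Minimal sketch of why a published big-configuration benchmark result
    # cannot simply be scaled linearly down to smaller configurations.
    # Assumes a plain Amdahl's-law model; all numbers are hypothetical.

    SERIAL_FRACTION = 0.05  # assumed non-parallelizable share of the workload

    def amdahl_speedup(sockets: int, serial: float = SERIAL_FRACTION) -> float:
        """Speedup over a 1-socket system under Amdahl's law."""
        return 1.0 / (serial + (1.0 - serial) / sockets)

    # Suppose a vendor publishes one heavily tuned 64-socket result.
    published_sockets = 64
    published_score = 1_000_000  # hypothetical benchmark score

    per_socket = published_score / published_sockets  # naive linear basis

    for sockets in (8, 16, 32, 64):
        linear_estimate = per_socket * sockets
        modeled = published_score * amdahl_speedup(sockets) / amdahl_speedup(published_sockets)
        print(f"{sockets:2d} sockets: linear estimate {linear_estimate:>9,.0f} "
              f"vs. Amdahl model {modeled:>9,.0f}")

In this toy model, the per-socket efficiency of the 64-socket result is depressed by the serial fraction, so naive linear scaling misstates what an 8-socket configuration would actually deliver by roughly a factor of three.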
Read more

One Code To Rule Them All: Reflections On Oracle Fusion Applications From Oracle OpenWorld 2010

Holger Kisker

With about 41,000 attendees, 1,800 sessions, and a whopping 63,000-plus slides, Oracle OpenWorld 2010 (September 19-23) in San Francisco was certainly a mega event with more information than one could possibly digest or even collect in a week. While the main takeaway for every attendee depends, of course, on the individual’s area of interest, there was a strong focus this year on hardware due to the Sun Microsystems acquisition. I’m a strong believer in the integration story of “Hardware and Software. Engineered to Work Together.” and really liked the Iron Man 2 theme all around the event; but, because I’m an application guy, the biggest part of the story, including the launch of Oracle Exalogic Elastic Cloud, was a bit lost on me. And the fact that Larry Ellison basically repeated the same story in his two keynotes didn’t really resonate with me — until he came to what I was most interested in: Oracle Fusion Applications!

Read more