I have been working on a research document, to be published this quarter, on the impact of 8-socket x86 servers based on Intel’s new Xeon 7500 CPU. In a nutshell, these systems have the performance of the best-of-breed RISC/UNIX systems of three years ago, at a substantially better price, and their overall performance improvement trajectory has been steeper than competing technologies for the past decade.
This is probably not shocking news and is not the subject of this current post, although I would encourage you to read the document when it is finally published. During the course of researching it, I spent time trying to use available benchmark results to prove or disprove my thesis that x86 system performance now solidly overlaps that of RISC/UNIX. The process highlighted for me the limitations of using standardized benchmarks for performance comparisons. There are now so many benchmarks available that system vendors run each one only on selected subsets of their product lines, if at all. Additionally, most benchmarks suffer from several common flaws:
They are results from high-end configurations, in many cases far beyond any normal use case, and the results cannot be interpolated to smaller, more realistic configurations.
They are often the result of teams of very smart experts tuning the system configuration, application, and system software parameters for optimal results. For a large benchmark such as SAP or TPC, it is probably reasonable to assume that more than 1,000 variables are involved in the tuning effort. This makes the results very much like EPA mileage figures — the consumer is guaranteed not to exceed these numbers.
With about 41,000 attendees, 1,800 sessions, and a whopping 63,000-plus slides, Oracle OpenWorld 2010 (September 19-23) in San Francisco was certainly a mega event with more information than one could possibly digest or even collect in a week. While the main takeaway for every attendee depends, of course, on the individual’s area of interest, there was a strong focus this year on hardware due to the Sun Microsystems acquisition. I’m a strong believer in the integration story of “Hardware and Software. Engineered to Work Together.” and really liked the Iron Man 2 show-off all around the event; but, because I’m an application guy, the biggest part of the story, including the launch of Oracle Exalogic Elastic Cloud, was a bit lost on me. And the fact that Larry Ellison basically repeated the same story in his two keynotes didn’t really resonate with me — until he came to what I was most interested in: Oracle Fusion Applications!
Fujitsu? Who? I recently attended Fujitsu’s global analyst conference in Boston, which gave me an opportunity to check in with the best kept secret in the North American market. Even Fujitsu execs admit that many people in this largest of IT markets think that Fujitsu has something to do with film, and few of us have ever seen a Fujitsu system installed in the US unless it was a POS system.
So what is the management of this global $50 billion information and communications technology company, with a competitive portfolio of client, server and storage products and a global service and integration capability, going to do about its lack of presence in the world’s largest IT market? In a word, invest. Fujitsu’s management, judging from their history and what they have disclosed of their plans, intends to invest in the US over the next three to four years to consolidate their estimated $3 billion in N. American business into a more manageable (simpler) set of operating companies, and to double down on hiring and selling into the N. American market. The fact that they have given themselves multiple years to do so is very indicative of what I have always thought of as Fujitsu’s greatest strength and one of their major weaknesses — they operate on Japanese time, so to speak. For an American company to commit to building a presence over multiple years with seeming disregard for quarterly earnings would be almost unheard of, so Fujitsu’s management gets major kudos for that. On the other hand, years of observing them from a distance also leads me to believe that their approach to solving problems inherently lacks the sense of urgency of some of their competitors.
Vendor managers in companies with Oracle applications may have heard a lot of talk about Oracle’s next-generation applications over the last five years. Well, the news from Oracle’s customer event in San Francisco is that Fusion is almost here. Oracle is extensively demonstrating the product here at the event, early adopter customers are already in the implementation process, and Oracle intends to generally release it in the first quarter of next year.
Oracle hasn’t announced final pricing yet, but Steve Miranda, SVP of Oracle Application Development, confirmed that customers on maintenance will get a 1:1 exchange when they swap the product they own now for the Fusion equivalent. That is good news, although to be fair, my Oracle contacts had indicated this, off the record, all along.
The packaging into SKUs will mimic that of the current product set, to make the swap easier. I.e., the price list for HR will look like the PeopleSoft price list, CRM like Siebel, and so on. That makes some sense, but I wish Oracle had taken the opportunity to simplify the pricing so that there are fewer SKUs. For instance, Siebel's price list is over 20 pages long, and there's no clear link between the items in the price list and the functionality you want to use. As a result, some customers buy modules by mistake, while others fail to buy ones they really need. Hopefully Fusion will provide a clearer audit trail between functionality and SKU.
As someone who has worked in development teams, I take it for granted that not everyone on the team has the same needs and interests. A twenty-something Java developer, fresh out of college, is interested in questions like, "Which emerging framework might be worth learning?" The architect on the team may be interested in the same frameworks, but for entirely different reasons. Unlike the rank-and-file developer, the architect has decision-making power over which framework to adopt. The architect bears responsibility for the long-term consequences of this decision, while the rank-and-file developer is primarily concerned about delivering components written for whatever framework the team selects. Meanwhile, the development manager has to oversee the work of both the developer and architect, ensuring that, collectively, the developers and architects and testers and everyone else deliver their work product on time, at an acceptable level of quality.
Different roles, different questions -- not a hard principle to understand. When applied to developer support, it means that developer conferences, discussion forums, and other resources must tailor their content to a specific audience. Not surprisingly, the material interesting to developers might not be as interesting for architects, and vice versa. (And if you're still not convinced that the two roles have different needs, take a look at the chart in this earlier blog post, which shows the sources of information and advice to which developers and architects turn.)
There has been turmoil and angst in the open source community of late over Oracle’s decision to cancel OpenSolaris. Since this community can be expected to react violently anytime something is taken out of open source, the real question is whether this action has any impact on real-world IT and operations professionals. The short answer is no.
Enterprise Solaris users, be they small, medium or large, are using it to run critical applications; and as far as we can tell, uptake of OpenSolaris, as opposed to Solaris supplied and sold by Sun, was very low in commercial accounts, other than possibly a surge in test and dev environments. The decision to take Solaris into the open source arena was, in my opinion, fundamentally flawed, and Oracle’s subsequent decision to reverse it is eminently rational. Oracle’s customers almost certainly are not going to run their companies on an OS that is built and maintained by an open source community — even the vast majority of corporate Linux use is via a distribution supported by a major vendor under a paid subscription model — and Oracle cannot continue to develop Solaris unless it has absolute control over it, just as is the case with every other enterprise OS. In the same vein, unless Oracle can also expect to be compensated for its investments in future Solaris development, there is little motivation for it to continue to invest heavily in Solaris.
We’ve all heard software reps blame “revenue recognition” and “Sarbanes-Oxley” as an excuse for not giving an extra discount or contractual concession. IT sourcing professionals may now hear “GSA Rules” and the “False Claims Act” cited as similar justification: “We didn’t give that concession to the government, so we can’t give it to you.” Could that be the worrying unintended consequence of the Justice Department’s action against Oracle: http://searchoracle.techtarget.com/news/2240019712/US-government-sues-Oracle-for-tens-of-millions-of-dollars?
I can’t comment on the details of the Oracle case, but I’m sure it is complex and two-sided. For instance, I’ve helped clients negotiate reasonable compromises with Oracle to handle special circumstances that won’t apply to many other organizations. These may have involved an extra discretionary discount, if Oracle didn’t have a programmatic way to handle the exception. I wouldn’t expect to get the same concession or discount for another client to whom those special circumstances didn’t apply. For example, this report describes one issue that is particularly important to public sector agencies, but whose impact varies widely: Do Your Software Contracts Permit External Use?
I have received a number of inquiries on the future of SPARC and Solaris. Sun’s installed base was already getting somewhat nervous as Sun continued to self-destruct with a series of bad calls by management, marginal financial performance, and the cancellation of its much-touted “Rock” CPU architecture. Coming on top of this long series of negative events, the acquisition by Oracle had much the same effect as throwing a cat into the middle of the Westminster dog show, and Oracle’s public responses were vague enough that they apparently increased rather than decreased customer angst (to be fair, Oracle does not agree with this assessment of customer reaction, and has provided a public list of customers who endorsed the acquisition at http://www.oracle.com/us/sun/030019.htm).
Fast forward to last week at Oracle’s first analyst meeting focused on integrated systems. While much of the content was focused on integrating the software stack and discussions of the new organization, there were some significant nuggets for existing and prospective Solaris and SPARC customers:
This week, I was at the Microsoft Worldwide Partner Conference in Washington, D.C., and it was all about THE CLOUD. Now, many colleagues argue that Microsoft will be the second-to-last major vendor to show a 100% cloud commitment, saying that “it’s too embedded in its traditional software business,” “it doesn’t understand the new world,” and “it’d be scared of cannibalizing existing and predictable maintenance revenues.” But I remember Stephen Elop, president of Microsoft Business Systems, telling me with a mischievous grin that he’ll probably earn more money from Exchange Online than from the on-premise version — “firstly, it’s mainly new business from other platforms like Lotus Notes, and second, I even generate revenues by charging for things like the data center buildings, the infrastructure, even the electricity I use.” That was in Berlin last November. I suspected then that Microsoft did get it but was just getting its platform ready. This week, I am convinced — Microsoft is “all in,” as they say.
And at the Microsoft Worldwide Partner Conference, it was driving its partners to the cloud as aggressively as any vendor has ever talked to its partners at such an event. All of the Microsoft executives preached a consistent mantra: “MOVE to the cloud, or you may not be around in five years.”
Microsoft’s cloud-based Business Productivity Online Suite (BPOS) is already being promoted by 16,000 partners that either get referral incentives for Microsoft-billed BPOS fees or bundle it into their own offerings (mainly telcos). There are nearly 5,000 certified Azure-ready partners. This week, Microsoft turned up the heat with these announcements:
Informatica is one of the traditional leaders in data quality and data integration. More than 4,000 customers globally trust Informatica's software products, driving more than half a billion dollars in revenue. Informatica solves many of the traditional data integration challenges, for example, between custom-developed apps and packaged ERP solutions. As a result, IT operations professionals and enterprise architects are well aware of Informatica’s solutions. However, what has gone under the radar so far is Informatica's cloud computing approach. For about two years now, Informatica has offered customers www.informaticacloud.com, a cloud-based integration service. Informatica recently announced a new version of this service, and Forrester had the chance to talk to the vendor prior to the launch. The new solution offers an improved service for data quality, B2B data transformations, and a number of continuous improvements. But what really caught my attention is Informatica's well-kept secret of a sophisticated agent technology.
Back-office managers and European customers have ignored the message — until now