SAP launched its HANA in-memory computing platform, a converged analytics appliance, in 2010. Three years later, SAP officially launched Business Suite on HANA: globally in January and in China on March 19. SAP clients can now run mission-critical applications on the converged infrastructure for optimized performance. Personally, I would suggest calling this an example of converged applications: in short, business applications that are architected around converged infrastructure for performance and simplicity.
I had several conversations about these converged applications with architects from the retail, logistics, and manufacturing industries, as well as with Tom Kindermans, SAP’s senior vice president of applications for APJ. I tend to believe that this is the next wave of application architecture, after mainframe, client/server, and browser/server. As these converged infrastructure offerings are deployed and the applications that run on top of them evolve, they could change technical architectures across infrastructure, information, and applications, as well as IT organizational structures, enterprise architecture, and partner ecosystems. My assessment:
The definition of converged applications is blurry, and adopting them can mean quite different things. Sometimes it means migrating an application from one server to another; sometimes it means refactoring your networking and storage design for load balancing and disaster recovery; and sometimes eliminating a long-standing performance bottleneck exposes business challenges that had been lurking under the surface, which you must then resolve. It depends entirely on your business goals.
In the Business Apps Casino, change is afoot. For a long time, one table – software-as-a-service ERP – attracted a limited number of players and fans. However, over the past 12 months, an increasing number of ERP vendors have lined up to place sizeable SaaS bets, while more potential customers are paying close attention to the gambles those vendors are making.
In Forrester ERP inquiries, it’s now the norm for clients to ask us about SaaS ERP. In fact, it’s unusual to field a call where SaaS isn’t mentioned. Firms may be actively considering a future change in deployment model or may simply want to kick the tires on SaaS ERP adoption, its pros and cons, and comparisons with on-premises ERP. They also seek more information about SaaS ERP market players and likely future entrants. In general, what’s changed since a year ago is that companies want to include SaaS ERP options in their assessments.
Each ERP vendor’s SaaS bet differs somewhat from those of its peers, determined both by the type of customers it’s aiming at and architectural concerns. However, there are some shared themes:
Repurposing existing apps. Some ERP vendors began their SaaS endeavors with apps targeted at small and midsize businesses. They’re now working to deepen the functionality of those apps to appeal to a broader, more enterprise audience. There are two key approaches: 1) expand the scope of an existing SMB product and aim it up market; or 2) carve off functionality from a SaaS midmarket apps suite (while retaining that suite) and create a new enterprise app.
If you want to see the full spectrum of cloud choices coming to market today, you only have to look at these two efforts as they start to evolve. They represent the extremes. And ironically, both held analyst events this week.
OpenStack is clearly an effort by a vendor (Rackspace) to launch a community that advances the technology and drives innovation around a framework that multiple vendors can use to bring myriad cloud services to market and deliver differentiated value. Oracle, by contrast, which gave analysts a brief look inside its public cloud efforts this week, is taking a completely closed, self-built approach that aims to deliver all cloud value from top to bottom.
Oracle Corporation announced its purchase of Taleo for $1.9 billion on Feb. 9, 2012, signaling a major shift in its stance on software-as-a-service (SaaS) and talent management applications. The transaction is expected to close midyear 2012, subject to regulatory and stockholder approvals.
Oracle has long held a “we can build it better” position on talent management, learning, and recruitment applications but struggled to compete with best-of-breed talent management vendors like SuccessFactors (recently acquired by rival SAP), Taleo, Kenexa, Cornerstone, and SumTotal Systems. Oracle has been reluctant to offer these (or any other) applications via SaaS, preferring a licensed/on-premises business model that provides early revenue recognition versus the deferred revenue model of SaaS.
In fact, Oracle CEO Larry Ellison has been outspoken in his anti-SaaS stance in recent years, changing his posture somewhat with the Oracle Public Cloud announcement at last October’s Oracle OpenWorld conference. Meanwhile, the HR apps market shifted overwhelmingly to the SaaS (subscription-based) deployment model, which has become virtually ubiquitous in recruitment, learning, and talent management and is also growing in core HRMS via ADP, Ultimate Software, and Workday.
By acquiring Taleo, Oracle puts itself back in the game for SaaS recruiting and talent management. Taleo is a market leader in recruitment automation and has a competitive portfolio of products across performance, compensation, and learning management. The $1.9 billion deal price is more than six times Taleo’s 2011 annual revenues of $309 million, a high premium but substantially less than the $3.4 billion and 11-times revenues that SAP recently paid for SuccessFactors.
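The deal multiple cited above is easy to verify from the figures in the text; a quick sanity check of Oracle's $1.9 billion offer against Taleo's stated 2011 revenues of $309 million:

```python
# Quick check of the deal multiple cited above: Oracle's $1.9B offer
# against Taleo's 2011 revenues of $309M (both figures from the text).

price = 1.9e9
revenue_2011 = 309e6

multiple = price / revenue_2011
print(f"{multiple:.1f}x")  # ~6.1x, consistent with "more than six times"
```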
OK, out of respect for your time, now that I’ve caught you with a title that promises some drama, I’ll cut to the chase and tell you that I definitely lean toward the former. Having spent a couple of days here at Oracle OpenWorld poking around the various flavors of Engineered Systems, including the established Exadata and Exalogic along with the new SPARC SuperCluster (all of a week old) and the newly announced Exalytics system for big data analytics, I am pretty convinced that they represent an intelligent and modular set of optimized platforms for specific workloads. In addition to being modular, they give me the strong impression of a “composable” architecture: the various elements of processing nodes, Oracle storage nodes, ZFS file nodes, and other components can clearly be recombined over time as customer requirements dictate, either as standard products or as custom configurations.
At the Hot Chips conference last week, Intel disclosed additional details about the upcoming Poulson Itanium CPU due for shipment early next year. For Itanium loyalists (essentially committed HP-UX customers) the disclosures are a ray of sunshine among the gloomy news that has been the lot of Itanium devotees recently.
Poulson will bring several significant improvements to Itanium in both performance and reliability. On the performance side, we have significant improvements on several fronts:
Process – Poulson will be manufactured with the same 32 nm semiconductor process that will (at least for a while) be driving the high-end Xeon processors. This is goodness all around: performance will improve, and Intel can now load its latest production lines more efficiently.
More cores and parallelism – Poulson will be an 8-core processor with a whopping 54 MB of on-chip cache, and Intel has doubled the width of the multi-issue instruction pipeline from 6 to 12 instructions. Combined with improved hyperthreading, 2X the cores and 2X the potential instructions executed per clock cycle by each core hint at impressive performance gains.
Architecture and instruction tweaks – Intel has added additional instructions based on analysis of workloads. This kind of tuning of processor architectures seldom results in major gains in performance, but every small increment helps.
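To put the "2X cores, 2X issue width" point above in concrete terms, here is a back-of-envelope calculation of the theoretical peak issue rate. The 4-core, 6-wide figures for the prior generation are my assumption based on the published Tukwila specs, not from Intel's Hot Chips disclosure:

```python
# Back-of-envelope theoretical chip-wide peak issue rate implied by the
# figures above. The 4-core / 6-wide numbers for the prior generation
# (Tukwila) are an assumption based on its published specs.

tukwila_peak = 4 * 6    # cores x issue width = instructions per clock, chip-wide
poulson_peak = 8 * 12

print(tukwila_peak, poulson_peak)             # 24 96
print(f"{poulson_peak / tukwila_peak:.0f}x")  # 4x theoretical peak issue rate
```

Real workloads won't come close to a 4x gain, of course; sustained IPC is gated by memory latency and instruction-level parallelism, which is exactly why the cache and hyperthreading improvements matter.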
Over the past months, server vendors have been announcing benchmark results for systems incorporating Intel’s high-end x86 CPU, the E7, with HP trumping all existing benchmarks with its recently announced numbers (although, as noted in x86 Servers Hit The High Notes, the results are clustered within a few percent of each other). HP recently announced new performance numbers for the ProLiant DL980, its high-end 8-socket x86 server using the newest Intel E7 processors. With up to 10 cores per socket, these new processors can bring up to 80 cores to bear on large problems such as database, ERP, and other enterprise applications.
The performance results on the SAP SD 2-Tier benchmark, for example, at 25,160 SD users, show a performance improvement of 35% over the previous high-water mark of 18,635. The results seem to scale almost exactly with the product of core count × clock speed, indicating that neither the system hardware nor the supporting OS, in this case Windows Server 2008, is at its scalability limits. This gives us confidence that subsequent spins of the CPU will in turn yield further performance increases before hitting system or OS limitations. Results from other benchmarks show similar patterns as well.
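The "scales with core count × clock speed" observation can be sanity-checked with a little arithmetic. The core counts and clock speeds below are illustrative assumptions on my part (plausible 8-socket configurations for the prior and current E7 generations), not the certified benchmark configurations:

```python
# Illustrative sanity check of the "performance scales with cores x clock"
# observation. The core counts and clock speeds are assumptions for
# illustration, not the actual certified benchmark configurations.

def scaling_gain(cores_new, clock_new, cores_old, clock_old):
    """Predicted throughput gain if performance scales with cores x clock."""
    return (cores_new * clock_new) / (cores_old * clock_old) - 1

measured = 25160 / 18635 - 1                    # SD users, new vs. prior record
predicted = scaling_gain(cores_new=80, clock_new=2.4,
                         cores_old=64, clock_old=2.26)

print(f"measured  gain: {measured:.0%}")    # 35%
print(f"predicted gain: {predicted:.0%}")   # 33% under the assumed configs
```

That the measured gain lands at or slightly above the naive cores × clock prediction is what suggests neither the hardware nor the OS is near a scaling wall.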
Key takeaways for I&O professionals include:
Expect to see at least 25% to 35% throughput improvements in many workloads with systems based on the latest high-performance CPUs from Intel. In situations where data center space and cooling resources are constrained, this can be a significant boost for a same-footprint upgrade of a high-end system.
For Unix-to-Linux migrations, target platform scalability continues to become less of an issue.
On June 15, HP announced that it had filed suit against Oracle, saying in a statement:
“HP is seeking the court’s assistance to compel Oracle to:
Reverse its decision to discontinue all software development on the Itanium platform
Reaffirm its commitment to offer its product suite on HP platforms, including Itanium;
Immediately reset the Itanium core processor licensing factor consistent with the model prior to December 1, 2010 for RISC/EPIC systems
HP also seeks:
Injunctive relief, including an order prohibiting Oracle from making false and misleading statements regarding the Itanium microprocessor or HP’s Itanium-based servers and remedying the harm caused by Oracle’s conduct.
Damages and fees and other standard remedies available in cases of this nature.”
A recent RFP from a major European company for consulting services regarding strategic SAP platforms got me thinking about the whole subject of the use and abuse of market share histories and forecasts. Among other things, the RFP requested historical and forecast data for all the relevant platforms, broken down by region and a couple of other factors.
The merry crew of I&O elves here at Forrester do a lot of consulting for companies all over the world on major strategic technology platform decisions – management software, DR and HA, server platforms for major applications, OS and data center migrations, etc. As you can imagine, these are serious decisions for the client companies, and we always approach these projects with an awareness of the fact that real people will make real decisions and spend real money based on our recommendations.
The client companies themselves usually approach these as serious due-diligence exercises, and they usually have very specific items they want us to consider, almost always centered on things that matter to them and are germane to their decision.
The one exception is market share history and forecasts for the relevant vendors under consideration. For some reason, some companies (my probably not statistically defensible impression is that it is primarily European and Japanese companies) think that there is some magic implied by these numbers. As you can probably guess from this elaborate lead-in, I have a very different take on their utility.