This week Microsoft announced a new offering (available in the fall): Microsoft Dynamics 365. Sound familiar? It should. Office 365, Microsoft Dynamics CRM, and Microsoft Dynamics AX all come to mind, and that was no accident. Microsoft is bringing together the capabilities of these products, its intelligence tools, and third-party or internally built apps from its newly launched AppSource. Microsoft will use Dynamics 365 to provide disaggregated applications that serve the functional needs formerly delivered through CRM and ERP suites (e.g., sales, service, marketing, and operations) atop a common application platform and data model.
So what is Microsoft looking to achieve with these changes? Well, business doesn’t end with a customer interaction, and delivering superior customer experiences doesn’t end at the front office. Front office and back office apps need to talk to one another to make sure companies are able to win, serve, and retain customers. Microsoft aims to:
Give employees access to the right data and tools to perform their jobs. By utilizing a common data model, Dynamics 365 will show a consolidated view of the customer, inclusive of transactional data. This consolidated view delivered in the context of business apps will provide marketing, sales, and service professionals the appropriate context and functionality to serve their customers.
I recently had the opportunity to spend some quality time with NetSuite in San Jose at its customer forum, SuiteWorld. The event gave me a long-overdue deep dive into the company's current strategy and the chance to speak with many of its customers one-on-one.
The big announcement from the event was the availability of its manufacturing solution. The evening before the event started, I had a good conversation with our sourcing analyst Liz Herbert, who spends a lot of her time focused on SaaS providers, and asked her why NetSuite was not growing more quickly. Her response was that its lack of a manufacturing solution was partly to blame. So when CEO Zach Nelson announced one the next morning, it certainly boosted my confidence in the company's future.
SAP launched its HANA in-memory computing platform in 2010. HANA is a converged analytics appliance. Three years later, SAP has officially launched Business Suite on HANA: globally in January and in China on March 19. SAP clients can now run mission-critical applications on the converged infrastructure for optimized performance. Personally, I would call this an example of converged applications: business applications architected around converged infrastructure for performance and simplicity.
I had several conversations with architects from the retail, logistics, and manufacturing industries, as well as Tom Kindermans, SAP's senior vice president of applications for APJ, about these converged applications. I tend to believe that this is the next wave of application architecture, after mainframe, client/server, and browser/server. The deployment of these converged infrastructure offerings, and the evolution of the applications that run on top of them, could change technical architectures across infrastructure, information, and applications, as well as the organizational structure of IT, enterprise architecture, and partner ecosystems. My assessment:
The definition of converged applications is blurry. What adopting converged applications actually involves can vary quite a bit. Sometimes it means migrating an application from one server to another; sometimes it means refactoring your networking and storage design for load balancing and disaster recovery; and sometimes eliminating an original performance bottleneck exposes business challenges that had been lurking under the surface for you to resolve. It all depends on your business goals.
Oracle Corporation announced its purchase of Taleo for $1.9 billion on Feb. 9, 2012, signaling a major shift in its stance on software-as-a-service (SaaS) and talent management applications. The transaction is expected to close midyear 2012, subject to regulatory and stockholder approvals.
Oracle has long held a “we can build it better” position on talent management, learning, and recruitment applications but struggled to compete with best-of-breed talent management vendors like SuccessFactors (recently acquired by rival SAP), Taleo, Kenexa, Cornerstone, and SumTotal Systems. Oracle has been reticent to offer these (or any other) applications via SaaS, preferring a licensed/on-premises business model that provides early revenue recognition versus the deferred revenue model of SaaS.
In fact, Oracle CEO Larry Ellison has been outspoken in his anti-SaaS stance in recent years, changing his posture somewhat with the Oracle Public Cloud announcement at last October’s Oracle OpenWorld conference. Meanwhile, the HR apps market shifted overwhelmingly to the SaaS (subscription-based) deployment model, which has become virtually ubiquitous in recruitment, learning, and talent management and is also growing in core HRMS via ADP, Ultimate Software, and Workday.
By acquiring Taleo, Oracle puts itself back in the game for SaaS recruiting and talent management. Taleo is a market leader in recruitment automation and has a competitive portfolio of products across performance, compensation, and learning management. The $1.9 billion deal price is more than six times Taleo’s 2011 annual revenues of $309 million, a high premium but substantially less than the $3.4 billion and 11-times revenues that SAP recently paid for SuccessFactors.
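As a quick sanity check on those multiples, using only the figures quoted above:

```python
# Deal-multiple arithmetic from the figures quoted in the text.
taleo_price = 1.9e9      # Oracle's offer for Taleo
taleo_revenue = 309e6    # Taleo's 2011 annual revenue

taleo_multiple = taleo_price / taleo_revenue
print(f"Taleo deal: {taleo_multiple:.1f}x revenue")  # prints "Taleo deal: 6.1x revenue"

# SAP/SuccessFactors, for comparison (the text quotes $3.4B at roughly 11x):
sf_multiple = 11.0
print(f"Premium gap: SAP paid ~{sf_multiple / taleo_multiple:.1f}x the multiple Oracle did")
```

So "more than six times revenues" checks out, and the Taleo multiple is a little over half of what SAP paid per revenue dollar for SuccessFactors.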
OK, out of respect for your time, now that I've caught you with a title that promises some drama, I'll cut to the chase and tell you that I definitely lean toward the former. I have spent a couple of days here at Oracle OpenWorld poking around the various flavors of Engineered Systems, including the established Exadata and Exalogic, the new SPARC SuperCluster (all of a week old), and the newly announced Exalytics system for big data analytics. I am pretty convinced that they represent an intelligent and modular set of optimized platforms for specific workloads. In addition to being modular, they give me the strong impression of a "composable" architecture: the various elements (processing nodes, Oracle storage nodes, ZFS file nodes, and other components) can clearly be recombined over time as customer requirements dictate, either as standard products or as custom configurations.
At the Hot Chips conference last week, Intel disclosed additional details about the upcoming Poulson Itanium CPU due for shipment early next year. For Itanium loyalists (essentially committed HP-UX customers) the disclosures are a ray of sunshine among the gloomy news that has been the lot of Itanium devotees recently.
Poulson will bring several significant improvements to Itanium in both performance and reliability. On the performance side, we have significant improvements on several fronts:
Process – Poulson will be manufactured with the same 32 nm semiconductor process that will (at least for a while) be driving the high-end Xeon processors. This is goodness all around – performance will improve and Intel now can load its latest production lines more efficiently.
More cores and parallelism – Poulson will be an 8-core processor with a whopping 54 MB of on-chip cache, and Intel has doubled the width of the multi-issue instruction pipeline from 6 to 12 instructions. Combined with improved hyperthreading, 2X the cores and 2X the potential instructions executed per clock cycle by each core hint at impressive performance gains.
Architecture and instruction tweaks – Intel has added additional instructions based on analysis of workloads. This kind of tuning of processor architectures seldom results in major gains in performance, but every small increment helps.
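The headline parallelism numbers are easy to put in perspective with some back-of-the-envelope arithmetic. This sketch assumes the prior-generation part (Tukwila) at 4 cores with a 6-wide issue pipeline, per the doublings described above; it computes theoretical peak issue rates only, which real workloads never approach:

```python
# Peak instruction-issue arithmetic for Poulson vs. its predecessor.
# Assumption: prior generation (Tukwila) = 4 cores, 6-wide issue.
old_cores, old_issue_width = 4, 6
new_cores, new_issue_width = 8, 12   # per the Hot Chips disclosures above

old_peak = old_cores * old_issue_width   # 24 instructions/cycle, whole chip
new_peak = new_cores * new_issue_width   # 96 instructions/cycle, whole chip

print(f"Theoretical peak gain: {new_peak / old_peak:.0f}x")  # prints "Theoretical peak gain: 4x"
```

A 4x theoretical peak is exactly what two independent doublings imply; actual gains depend on how well compilers and hyperthreading keep that wider pipeline fed.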
Over the past months, server vendors have been announcing benchmark results for systems incorporating Intel's high-end x86 CPU, the E7, with HP trumping all existing benchmarks with its recently announced numbers (although, as noted in "x86 Servers Hit The High Notes," the results are clustered within a few percent of each other). HP's new performance numbers are for the ProLiant DL980, its high-end 8-socket x86 server using the newest Intel E7 processors. With up to 10 cores each, these processors can bring up to 80 cores to bear on large problems such as database, ERP, and other enterprise applications.
The performance results on the SAP SD 2-tier benchmark, for example, at 25,160 SD users, show a performance improvement of 35% over the previous high-water mark of 18,635. The results seem to scale almost exactly with the product of core count x clock speed, indicating that neither the system hardware nor the supporting OS, in this case Windows Server 2008, is at its scalability limit. This gives us confidence that subsequent spins of the CPU will in turn yield further performance increases before hitting system or OS limitations. Results from other benchmarks show similar patterns as well.
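The 35% figure follows directly from the benchmark numbers quoted, and the "scales with cores x clock" observation can be framed as a simple predictive model (a sketch only; the published SAP SD results have the actual configurations):

```python
# Verify the SAP SD 2-tier improvement quoted above.
new_users, old_users = 25160, 18635
gain_pct = (new_users / old_users - 1) * 100
print(f"{gain_pct:.0f}% improvement")  # prints "35% improvement"

# The observed scaling suggests a naive linear model: throughput is
# proportional to (core count x clock speed). Illustrative only.
def predicted_ratio(new_cores, new_ghz, old_cores, old_ghz):
    """Predicted throughput ratio under ideal cores-x-clock scaling."""
    return (new_cores * new_ghz) / (old_cores * old_ghz)
```

When measured results track such a model this closely, it is good evidence that neither the interconnect nor the OS scheduler is yet the bottleneck, which is the basis for the confidence expressed above about future CPU spins.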
Key takeaways for I&O professionals include:
Expect to see at least 25% to 35% throughput improvements in many workloads with systems based on the latest high-performance CPUs from Intel. In situations where data center space and cooling resources are constrained, this can be a significant boost for a same-footprint upgrade of a high-end system.
For Unix-to-Linux migrations, target platform scalability continues to become less of an issue.
On June 15, HP announced that it had filed suit against Oracle, saying in a statement:
“HP is seeking the court’s assistance to compel Oracle to:
Reverse its decision to discontinue all software development on the Itanium platform;
Reaffirm its commitment to offer its product suite on HP platforms, including Itanium;
Immediately reset the Itanium core processor licensing factor consistent with the model prior to December 1, 2010, for RISC/EPIC systems.
HP also seeks:
Injunctive relief, including an order prohibiting Oracle from making false and misleading statements regarding the Itanium microprocessor or HP’s Itanium-based servers and remedying the harm caused by Oracle’s conduct.
Damages and fees and other standard remedies available in cases of this nature.”
A recent RFP from a major European company for consulting services on strategic platforms for SAP got me thinking about the whole subject of the use and abuse of market share histories and forecasts. Among other things, the RFP requested historical and forecast data for all the relevant platforms, broken down by region and a couple of other factors.
The merry crew of I&O elves here at Forrester do a lot of consulting for companies all over the world on major strategic technology platform decisions – management software, DR and HA, server platforms for major applications, OS and data center migrations, etc. As you can imagine, these are serious decisions for the client companies, and we always approach these projects with an awareness of the fact that real people will make real decisions and spend real money based on our recommendations.
The client companies themselves usually approach these as serious due-diligence exercises, and they typically have very specific items they want us to consider, almost always centered on things that matter to them and are germane to their decision.
The one exception is market share history and forecasts for the relevant vendors under consideration. For some reason, some companies (my probably not statistically defensible impression is that it is primarily European and Japanese companies) think that there is some magic implied by these numbers. As you can probably guess from this elaborate lead-in, I have a very different take on their utility.