ERP vendors are showing strong interest in the HRM SaaS market. They are either attempting to build a solution (as Oracle is doing with Fusion) or looking to acquire HRM functionality (as SAP is about to do with SuccessFactors). Talent applications — including offerings like performance, succession, and learning — are not easy to build. The niche players have been laser-focused for years on building these solutions or integrating acquisitions, and generally they have done a good job. Now we see other vendors that want this functionality buying up these niche players to offer complete end-to-end HRM solutions. The HRM market is hot! My colleague Paul Hamerman and I have authored research showing that performance management is growing faster — at 16.5% — than any other HRM segment (HRM Solutions: Traditional Models Clash With Next Generation Processes And Technology). Executives know that having highly skilled employees who know the business and can execute well on strategy is critical to business growth.
Some Reflections On The Deal For Competitors, Partners, and Customers
On December 3, SAP announced the acquisition of SuccessFactors, a leading vendor of human capital management (HCM) cloud solutions. SAP will pay $3.5 billion (a 52% premium over the Dec. 2 closing price) out of its war chest and take on a $1 billion loan. SuccessFactors brings about 1,500 employees, more than 3,500 customers, and about 15 million users to the table. In 2010, the company reported revenues of $206 million and a net loss of $12.5 million. A price of $3.5 billion is certainly a big premium, but the acquisition catapults SAP into the ranks of leading software-as-a-service (SaaS) solution providers — a business that will grow from $21.3 billion in 2011 to $78.4 billion by 2015 (for more information, check out our report “Sizing The Cloud”). The deal will certainly help SAP achieve its 2015 target of $20 billion in revenue and 1 billion users, as it mainly targets the 500,000 employees that SAP’s existing customers have. The deal is expected to close in Q1 next year. However, because the stock is widely held, shareholders might hold back for now, waiting for possible counterbids from competitors.
SAP is paying a substantial premium to acquire SuccessFactors, a leading SaaS performance and talent management vendor. The press release of December 3, 2011, states that the deal price of $40 per share is a 52% premium over the Dec. 2 closing stock price. Even more startling is that SuccessFactors has a revenue run rate of roughly $300 million to $330 million for 2011, and the acquisition price of $3.4 billion is more than 10 times revenue! Why, then, did SAP make this move?
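For readers who want to sanity-check those figures, here is the back-of-the-envelope math (the revenue midpoint is my own assumption within the stated $300 million to $330 million range):

```python
# Quick check of the deal arithmetic cited above.
offer_per_share = 40.00
premium = 0.52
implied_prior_close = offer_per_share / (1 + premium)   # ~ $26.32

deal_value = 3.4e9
revenue_run_rate = 315e6  # my assumed midpoint of the $300M-$330M range
multiple = deal_value / revenue_run_rate                # ~ 10.8x revenue

print(round(implied_prior_close, 2), round(multiple, 1))
```

A 10x-plus revenue multiple is the number that makes the premium so startling; typical enterprise software deals of the era closed at a fraction of that.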
SAP’s cloud strategy has been struggling with time-to-market issues, and its core on-premises HR management software has been at a competitive disadvantage with best-of-breed solutions in areas such as employee performance, succession planning, and learning management. By acquiring SuccessFactors, SAP puts itself into a much stronger competitive position in human resources applications and reaffirms its commitment to software-as-a-service as a key business model.
In my recent research for a soon-to-be-published Forrester Wave™ on human resource management systems (HRMS), I noted that SAP has more than 13,000 customers using its HCM suite. Yet adoption of SAP’s learning and talent management products is much lower (a few thousand customers, perhaps), as noted in my colleague Claire Schooley’s “The Forrester Wave™: Talent Management, Q2 2011.” The talent management Forrester Wave also clearly shows that SAP’s embedded talent management offerings lag well behind the best-of-breed specialists in learning and performance management. The bottom line here is that SAP HCM customers predominantly run best-of-breed talent management solutions alongside their SAP core HRMS (i.e., the transactional employee system of record).
This week AMD finally released its Opteron 6200 and 4200 series CPUs. These are the long-awaited server-oriented Interlagos and Valencia CPUs, based on AMD’s new “Bulldozer” core, offering up to 16 x86 cores in a single socket. The announcement was targeted at (drum roll, one guess per customer only) … “The Cloud.” AMD appears to be positioning its new architectures as the platform of choice for cloud-oriented workloads, focusing on highly threaded, throughput-oriented benchmarks that take full advantage of its high core count and unique floating-point architecture, along with what look like excellent throughput-per-watt metrics.
At the same time it is pushing the now seemingly mandatory “cloud” message, AMD is not ignoring the meat-and-potatoes enterprise workloads that have been the mainstay of server CPU sales – virtualization, database, and HPC – where the combination of many cores, excellent memory bandwidth, and large memory configurations should yield excellent results. In its competitive comparisons, AMD targets Intel’s 5640 CPU, which it claims represents Intel’s most widely used Xeon CPU, and shows very favorable comparisons in regard to performance, price, and power consumption. Among the features that AMD cites as contributing to these results are:
Advanced power and thermal management, including the ability to power off inactive cores, contributing to an idle power of less than 4.4 W per core. Interlagos offers a unique capability called TDP Power Cap, which allows I&O groups to set the total power threshold of the CPU in 1 W increments, enabling fine-grained tailoring of power across server racks.
Turbo CORE, which allows boosting the clock speed of cores by up to 1 GHz for half the cores or 500 MHz for all the cores, depending on workload.
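To make the rack-level implications of that 1 W-increment power cap concrete, here is a hypothetical back-of-the-envelope sketch; all figures (rack budget, overhead, server counts) are illustrative assumptions of mine, not AMD specifications:

```python
# Hypothetical sketch: given a per-rack power budget, compute the largest
# whole-watt per-CPU power cap (mirroring the 1 W increments AMD describes)
# that keeps a given server population under budget.

def max_tdp_cap(rack_budget_w: int, servers: int, sockets_per_server: int,
                non_cpu_overhead_w: int) -> int:
    """Largest whole-watt CPU cap that fits the rack power budget."""
    cpu_budget = rack_budget_w - servers * non_cpu_overhead_w
    total_sockets = servers * sockets_per_server
    return cpu_budget // total_sockets  # floor to a whole-watt setting

# Example: a 12 kW rack of 40 dual-socket servers with ~120 W of non-CPU
# power per server leaves (12000 - 4800) // 80 = 90 W per CPU.
print(max_tdp_cap(12000, 40, 2, 120))
```

The point of the 1 W granularity is exactly this kind of calculation: I&O groups can dial every socket to the largest cap the rack budget allows rather than rounding down to a coarse preset.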
Emerging ARM server vendor Calxeda has been hinting for some time that it had a significant partnership announcement in the works, and while we didn’t necessarily disbelieve it, we hear a lot of claims from startups telling us to “stay tuned” for something big. Sometimes they pan out; sometimes they simply go away. But this morning Calxeda surpassed our expectations by unveiling just one major systems partner – and it happens to be Hewlett-Packard, which dominates the worldwide market for x86 servers.
At its core (unintended but not bad pun), the combination of HP’s Hyperscale business unit’s Project Moonshot and Calxeda’s server technology is about improving the efficiency of web and cloud workloads, and it promises improvements in excess of 90% in power efficiency and similar gains in physical density compared with current x86 solutions. As I noted in my first post on ARM servers and other documents, even if these estimates turn out to be exaggerated, there is still a generous window within which to do much, much better than current technologies. And workloads (such as memcache, Hadoop, and static web servers) will be selected for their fit to this new platform, so the workloads that run on it will potentially come close to the improvements quoted by HP and Calxeda.
“… and they lived happily ever after.” This is the typical ending of most Hollywood movies, which is why I am not a big fan. I much prefer European or independent movies that leave it up to the viewer to draw their own conclusions. It’s just so much more realistic. Keep this in mind, please, as you read this blog, because its only purpose is to present my point of view on what’s happening in the cloud BI market, not to predict where it’s going. I’ll leave that up to your comments — just like your own thoughts and feelings after a good, thoughtful European or indie movie.
First of all, let’s define the market. Unfortunately, the terms SaaS and cloud are often used synonymously and therefore, alas, incorrectly.
SaaS is just a licensing structure. Many vendors (open source vendors, for example) offer SaaS-style software subscription models, which have nothing to do with cloud-based hosting.
Cloud, in my humble opinion, is all about multitenant software hosted on public or private clouds. It’s not about cloud hosting of traditional software innately architected for single tenancy.
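To make that distinction concrete, here is a minimal sketch of the multitenant pattern (all names hypothetical): one shared schema serves every customer, and every access path is scoped by a tenant identifier.

```python
# Minimal multitenancy sketch: one shared store, tenant-scoped access.
# Hypothetical names for illustration only.

class MultiTenantStore:
    def __init__(self):
        self._rows = []  # one shared "table" for all tenants

    def insert(self, tenant_id: str, record: dict) -> None:
        self._rows.append({"tenant_id": tenant_id, **record})

    def query(self, tenant_id: str):
        # Every query is filtered by tenant -- the crux of multitenancy.
        return [r for r in self._rows if r["tenant_id"] == tenant_id]

store = MultiTenantStore()
store.insert("acme", {"report": "Q3 revenue"})
store.insert("globex", {"report": "headcount"})
assert len(store.query("acme")) == 1  # tenants never see each other's rows
```

By contrast, hosting one dedicated copy of traditional single-tenant software per customer on cloud infrastructure changes where the software runs, not how it is architected – which is exactly the distinction I am drawing above.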
My colleague James Staten recently wrote about AutoDesk Cloud as an exemplar of the move toward App Internet, the concept of implementing applications that are distributed between local and cloud resources in a fashion that is transparent to the user except for the improved experience. His analysis is 100% correct, and AutoDesk Cloud represents a major leap in CAD functionality, intelligently offloading the inherently parallel and intensive rendering tasks and facilitating some aspects of collaboration.
But (and there’s always a “but”), having been involved in graphics technology on and off since the '80s, I would say that “cloud” implementation of rendering and analysis has been incrementally evolving for decades. There are hundreds of well-documented distributed environments in which desktops fluidly shipped their renderings to local rendering and analysis farms (what we would today call private clouds), with the results shipped back to the originating workstations. This work was largely developed and paid for by universities and by media companies as part of major movie production projects. Some installations were of significant scale, such as “Massive,” the rendering and animation farm for "Lord of the Rings" with approximately 1,500 compute nodes, and a subsequent installation at Weta that may have up to 7,000 nodes. In my admittedly arguable opinion, AutoDesk Cloud, while representing a major jump in capabilities by making the cloud accessible to a huge number of users, is an incremental step rather than a major architectural innovation.
I just attended IDF and I’ve got to say, Intel has certainly gotten the cloud message. Almost everything is centered on clouds, from the high-concept keynotes to the presentations on low-level infrastructure, although if you dug deep enough there was content for general old-fashioned data center and I&O professionals. Some highlights:
Chips and processors and low-level hardware
Intel is, after all, a semiconductor foundry, and despite its expertise in design, its true core competitive advantage is its foundry operations – even its competitors grudgingly acknowledge that Intel can manufacture semiconductors better than anyone else on the planet. As a consequence, showing off new designs and processes is always front and center at IDF, and this year was no exception. Last year the star was Sandy Bridge, the 32nm “tock” that introduced a new microarchitecture on the process first used by Westmere. This year it was Ivy Bridge, the 22nm “tick” of the Intel “tick-tock” design cycle: a shrink of Sandy Bridge to Intel’s new 22nm process. Ivy Bridge also seems to have inherited Intel’s recent focus on power efficiency, with major improvements beyond the already solid advantages of the 22nm process itself, including deeper P-states and the ability to actually shut down parts of the chip when it is idle. While Intel did not discuss the server variants in any detail, the desktop versions will get an entirely new integrated graphics processor, which Intel is obviously hoping will blunt AMD’s resurgence in client systems. On the server side, if I were to guess, I would expect more cores and larger caches, along with increased support for I/O virtualization beyond what Intel currently offers.
Well, maybe everybody is saying “cloud” these days, but my first impression of Microsoft Windows Server 8 (not the final name) is that Microsoft has been listening very closely to what customers want from an OS that can support both public and private enterprise cloud implementations. And most importantly, the things that they have built into WS8 for “clouds” also look like they make life easier for plain old enterprise IT.
Microsoft appears to have focused its efforts on several key themes, all of which benefit legacy IT architectures as well as emerging clouds:
Management, migration, and recovery of VMs in a multi-system domain – Major improvements in Hyper-V and management capabilities mean that I&O groups can easily build multi-system clusters of WS8 servers and easily migrate VMs across system boundaries. Multiple systems can be clustered with Fibre Channel, making it easier to implement high-performance clusters.
Multi-tenancy – A host of features, primarily around management and role-based delegation, that make it easier and more secure to implement multi-tenant VM clouds.
Recovery and resiliency – Microsoft claims that it can fail over VMs from one machine to another in 25 seconds, a very impressive number indeed. While vendor performance claims are always like EPA mileage – you are guaranteed never to exceed this number – this is an impressive claim and a major capability, with major implications for HA architecture in any data center.
Last year at VMworld I noted Xsigo Systems, a small privately held company showing its I/O Director technology, which delivered a subset of HP Virtual Connect or Cisco UCS I/O virtualization capability in a fashion that could be consumed by legacy rack-mount servers from any vendor. I/O Director connects to the server with one or more 10 Gb Ethernet links and then splits traffic out into enterprise Ethernet and FC networks. On the server side, the applications, including VMware, see multiple virtual NICs and HBAs, courtesy of Xsigo’s proprietary virtual NIC driver.
Through Xsigo’s management console, server MACs and WWNs can be programmed, and the servers can then connect to multiple external networks with fewer cables and substantially lower costs for NIC and HBA hardware. Virtualized I/O is one of the major transformative developments in emerging data center architecture and will remain a theme in Forrester’s data center research coverage.
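Conceptually, what an I/O director does can be modeled like this; to be clear, this is my own hypothetical sketch of the idea, not Xsigo’s actual API, and all names and addresses are invented:

```python
# Conceptual model of I/O virtualization: a server's identity (virtual NICs
# and HBAs, with programmable MACs and WWNs) lives in a centrally managed
# profile rather than in physical adapter hardware.

from dataclasses import dataclass, field

@dataclass
class VirtualAdapter:
    name: str       # e.g., "vnic0" or "vhba0"
    address: str    # MAC for a vNIC, WWN for a vHBA
    network: str    # external Ethernet or FC network it maps to

@dataclass
class ServerProfile:
    server: str
    adapters: list = field(default_factory=list)

class FabricDirector:
    """Central console where adapter addresses are programmed."""
    def __init__(self):
        self.profiles = {}

    def connect(self, server: str, name: str, address: str, network: str):
        profile = self.profiles.setdefault(server, ServerProfile(server))
        profile.adapters.append(VirtualAdapter(name, address, network))
        return profile

director = FabricDirector()
director.connect("rack1-node07", "vnic0", "02:00:00:aa:bb:01", "prod-ethernet")
director.connect("rack1-node07", "vhba0", "50:01:43:80:aa:bb:cc:01", "san-a")
print(len(director.profiles["rack1-node07"].adapters))  # 2 virtual adapters
```

Because the addresses live in the profile, not the hardware, a failed server can be replaced and its whole I/O identity reprogrammed onto the new box – the property that makes this style of I/O virtualization so attractive.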
This year at VMworld, Xsigo announced a major expansion of its capabilities – Xsigo Server Fabric, which takes the previous rack-scale, single-switch Xsigo domains and links them into a data-center-scale fabric. Combined with improvements in the software and UI, Xsigo now claims to offer one-click connection of any server resource to any network or storage resource within the domain of Xsigo’s fabric. Most significantly, Xsigo’s interface is optimized to allow connection of VMs to storage and network resources and to allow the creation of private VM-to-VM links.