SAP’s Acquisition Of SuccessFactors Re-energizes Its HCM And SaaS Strategy

Paul Hamerman

SAP is paying a substantial premium to acquire SuccessFactors, a leading SaaS performance and talent management vendor. The December 3, 2011 press release states that the deal price of $40 per share is a 52% premium over the Dec. 2 closing stock price. Even more startling, SuccessFactors has a revenue run rate of roughly $300 million to $330 million for 2011, so the acquisition price of $3.4 billion is more than 10 times revenue! Why then did SAP make this move?
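As a rough check on the deal math above, here is a minimal sketch; the $315 million figure is my own assumed midpoint of the quoted run-rate range, not a SuccessFactors disclosure.

```python
# Back-of-the-envelope check on the deal figures quoted above.
# Assumption: revenue run rate midpoint of $315M (the post cites $300M-$330M).

offer_per_share = 40.00      # SAP's offer price per share
premium = 0.52               # 52% premium over the Dec. 2 close
deal_value = 3.4e9           # total acquisition price
revenue_run_rate = 315e6     # assumed midpoint of the 2011 run rate

implied_prior_close = offer_per_share / (1 + premium)
revenue_multiple = deal_value / revenue_run_rate

print(f"Implied Dec. 2 closing price: ${implied_prior_close:.2f}")   # ~ $26.32
print(f"Deal value / revenue run rate: {revenue_multiple:.1f}x")     # ~ 10.8x
```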

SAP’s cloud strategy has been struggling with time-to-market issues, and its core on-premises HR management software has been at a competitive disadvantage with best-of-breed solutions in areas such as employee performance, succession planning, and learning management. By acquiring SuccessFactors, SAP puts itself into a much stronger competitive position in human resources applications and reaffirms its commitment to software-as-a-service as a key business model.

In my recent research for a soon-to-be-published Forrester Wave™ on human resource management systems (HRMS), I noted that SAP has more than 13,000 customers using its HCM suite. Yet adoption of SAP’s learning and talent management products is far lower (perhaps a few thousand customers), as noted in my colleague Claire Schooley’s “The Forrester Wave™: Talent Management, Q2 2011.” That talent management Forrester Wave also clearly shows that SAP’s embedded talent management offerings lag well behind the best-of-breed specialists in learning and performance management. The bottom line: SAP HCM customers predominantly run best-of-breed talent management solutions alongside their SAP core HRMS (i.e., the transactional employee system of record).

Read more

AMD Releases Interlagos And Valencia – Bulldozers In The Cloud

Richard Fichera

This week AMD finally released its Opteron 6200 and 4200 series CPUs. These are the long-awaited server-oriented Interlagos and Valencia parts, based on the new “Bulldozer” core and offering up to 16 x86 cores in a single socket. The announcement was targeted at (drum roll, one guess per customer only) … “The Cloud.” AMD appears to be positioning the new architecture as the platform of choice for cloud-oriented workloads, focusing on highly threaded, throughput-oriented benchmarks that take full advantage of its high core count and unique floating-point architecture, along with what look like excellent throughput-per-watt metrics.

At the same time it is pushing the now seemingly mandatory “cloud” message, AMD is not ignoring the meat-and-potatoes enterprise workloads that have been the mainstay of server CPU sales – virtualization, database, and HPC – where the combination of many cores, excellent memory bandwidth, and large memory configurations should yield excellent results. In its competitive comparisons, AMD targets Intel’s Xeon 5640, which it claims is Intel’s most widely used Xeon CPU, and shows very favorable comparisons for performance, price, and power consumption. Among the features that AMD cites as contributing to these results are:

  • Advanced power and thermal management, including the ability to power off inactive cores, contributing to an idle power of less than 4.4 W per core. Interlagos also offers a capability called TDP Power Cap, which allows I&O groups to set the total power threshold of the CPU in 1 W increments for fine-grained tailoring of power across server racks.
  • Turbo CORE, which can boost core clock speeds by up to 1 GHz when only half the cores are active, or by 500 MHz across all the cores, depending on workload (a rough model of this behavior is sketched below).
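To make the Turbo CORE bullet concrete, here is a minimal, illustrative sketch of the boost behavior described above. The 2.3 GHz base clock and the simple half-the-cores threshold are assumptions for the example, not AMD specifications.

```python
def turbo_core_clock(base_ghz: float, active_cores: int, total_cores: int = 16) -> float:
    """Rough model of the Turbo CORE behavior described above.

    - Half the cores (or fewer) active: those cores may boost by up to 1 GHz.
    - More than half the cores active: all cores may boost by up to 500 MHz.
    The base clock is an assumed example value, not an AMD spec.
    """
    if active_cores <= total_cores // 2:
        return base_ghz + 1.0   # up to +1 GHz on half the cores
    return base_ghz + 0.5       # up to +500 MHz on all cores

# Example: a hypothetical 2.3 GHz, 16-core Interlagos part
for active in (4, 8, 12, 16):
    print(active, "active cores ->", turbo_core_clock(2.3, active), "GHz max boost")
```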
Read more

HP Embraces Calxeda ARM Architecture With "Project Moonshot" - New Hyperscale Business Unit Program

Richard Fichera

What's the Big Deal?

Emerging ARM server vendor Calxeda has been hinting for some time that it had a significant partnership announcement in the works, and while we didn’t necessarily disbelieve the company, we hear a lot of claims from startups telling us to “stay tuned” for something big. Sometimes they pan out; sometimes they simply go away. But this morning Calxeda surpassed our expectations by unveiling a single major systems partner – and that partner happens to be Hewlett-Packard, which dominates the worldwide market for x86 servers.

At its core (unintended but not bad pun), the HP Hyperscale business unit’s Project Moonshot and Calxeda’s server technology are about improving the efficiency of web and cloud workloads, and they promise improvements in excess of 90% in power efficiency and similar improvements in physical density compared with current x86 solutions. As I noted in my first post on ARM servers and other documents, even if these estimates turn out to be exaggerated, there is still a generous window within which to do much, much better than current technologies. And because workloads (such as memcache, Hadoop, and static web servers) will be selected for their fit to this new platform, the workloads that actually run on it could come close to the figures quoted by HP and Calxeda.
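As a rough illustration of what a “90% improvement in power efficiency” could mean at fleet scale, here is a minimal sketch. The server wattage and fleet size are invented for the example, not HP or Calxeda figures, and the two readings reflect the ambiguity in how such a claim can be interpreted.

```python
# Illustrative only: two readings of a "90% power efficiency improvement" for a
# hypothetical web-serving fleet. Baseline numbers are invented for the example.

baseline_watts_per_server = 250   # hypothetical x86 server under load
servers_needed = 400              # hypothetical fleet serving a fixed workload
baseline_power_kw = baseline_watts_per_server * servers_needed / 1000

# Reading 1: the same work done with 90% less power.
power_if_90pct_less = baseline_power_kw * (1 - 0.90)

# Reading 2: 90% more work per watt (1.9x efficiency) for the same total work.
power_if_1_9x_efficiency = baseline_power_kw / 1.9

print(f"Baseline fleet power:          {baseline_power_kw:.0f} kW")   # 100 kW
print(f"'90% less power' reading:      {power_if_90pct_less:.0f} kW") # 10 kW
print(f"'1.9x work per watt' reading:  {power_if_1_9x_efficiency:.0f} kW")  # ~53 kW
```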

The Program And New HP Business Unit

Read more

BI In The Cloud: Separating Facts From Fiction

Boris Evelson

“… and they lived happily ever after.” This is the typical ending of most Hollywood movies, which is why I am not a big fan. I much prefer European or independent movies that leave it up to the viewer to draw their own conclusions. It’s just so much more realistic. Keep this in mind, please, as you read this blog, because its only purpose is to present my point of view on what’s happening in the cloud BI market, not to predict where it’s going. I’ll leave that up to your comments — just like your own thoughts and feelings after a good, thoughtful European or indie movie.

Market definition

First of all, let’s define the market. Unfortunately, the terms SaaS and cloud are often used synonymously and therefore, alas, incorrectly.

  • SaaS is just a licensing model. Many vendors (open source vendors, for example) offer SaaS subscription models that have nothing to do with cloud-based hosting.
  • Cloud, in my humble opinion, is all about multitenant software hosted on public or private clouds. It’s not about cloud hosting of traditional software innately architected for single tenancy.
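To make the multitenancy distinction concrete, here is a minimal, hypothetical sketch; the table, column, and tenant names are invented for illustration and do not reflect any particular BI vendor’s schema. In a multitenant cloud service, all customers share one schema and every query is scoped by a tenant identifier, whereas traditional single-tenant software gives each customer its own database or instance.

```python
# Minimal, hypothetical sketch of single-tenant vs. multitenant BI data storage.
# All names are invented for illustration; no specific BI product is implied.
import sqlite3

conn = sqlite3.connect(":memory:")

# Multitenant: one shared schema, every row tagged with a tenant_id, and every
# query scoped to the calling tenant.
conn.execute("""
    CREATE TABLE sales_facts (
        tenant_id  TEXT NOT NULL,   -- which customer owns this row
        region     TEXT,
        revenue    REAL
    )
""")
conn.executemany(
    "INSERT INTO sales_facts VALUES (?, ?, ?)",
    [("acme", "EMEA", 120.0), ("acme", "APAC", 80.0), ("globex", "EMEA", 210.0)],
)

def tenant_revenue(tenant_id: str) -> float:
    # The tenant filter is what makes shared (multitenant) hosting safe.
    row = conn.execute(
        "SELECT SUM(revenue) FROM sales_facts WHERE tenant_id = ?", (tenant_id,)
    ).fetchone()
    return row[0] or 0.0

print(tenant_revenue("acme"))    # 200.0 -- only acme's rows
print(tenant_revenue("globex"))  # 210.0

# Single-tenant ("traditional software hosted in the cloud") would instead give
# each customer its own database or instance -- no tenant_id column, no sharing.
```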
Read more

Compliance And Cloud – Responsible Or Accountable?

Andrew Rose

It’s interesting how many threads there are on the Internet that still debate the difference between these two words: “responsible” and “accountable.” Oddly enough, today I stumbled across two definitions, from seemingly respectable sources, that hold diametrically opposite views! To me, the answer is simple – you can delegate responsibility, but accountability remains fixed.

This is a key point for the extended enterprises in which we now function. Firms are now built from a myriad of offshore and outsourced services, running on systems that are similarly fragmented and distributed across vendors. This complex tangle of people and data represents a huge challenge for the CISO, who remains accountable for the security, and often the compliance, of the employer yet is no longer responsible for providing either.

With a methodical and comprehensive process and a surfeit of resources (please stop laughing at the back!), the CISO does, however, have the ability to follow the data trails and manage the risk down. Unfortunately, with the advent of cloud, things are taking a turn for the worse. Cloud vendors are reluctant to be scrutinized, and the security and compliance demands of the CISO often go unanswered. If cloud really is to be a mainstay of computing in the future, something has to give – we need to find a balance where compliance and security assurance requirements are met without fatally undermining the cloud model. This is a key topic for 2012 and something we’ll be following with interest.

As security professionals, we remain accountable for resolving these issues, no matter how much responsibility has been pushed to third parties and cloud vendors. So, how do you minimize the workload involved in managing the third parties that make up your extended enterprise, and how do you gain assurance around cloud vendors?

Read more


The Empire Strikes Back — But Who’s The Target?

Stefan Ried

Source: Philips http://news6.designinterviews.com/tumblr_kzdg88og1l1qzel9oo1_500.jpg

It was only about a year ago that Larry Ellison was confusing the OpenWorld audience with the “cloud in a box” approach, and only a very few CIOs managed to turn a large Oracle landscape into a real private cloud offered to their business units on an opex model. But a lot has changed since last year.

Read more

Silk Browser, The BIG Leap For Amazon’s Fire, Shows Innovative Use Of App Internet

Richard Fichera

My colleague James Staten recently wrote about AutoDesk Cloud as an exemplar of the move toward App Internet, the concept of implementing applications that are distributed between local and cloud resources in a fashion that is transparent to the user except for the improved experience. His analysis is 100% correct, and AutoDesk Cloud represents a major leap in CAD functionality, intelligently offloading the inherently parallel and intensive rendering tasks and facilitating some aspects of collaboration.

But (and there’s always a “but”), having been involved in graphics technology on and off since the '80s, I would say that “cloud” implementation of rendering and analysis has been incrementally evolving for decades. There are hundreds of well-documented distributed environments in which desktops fluidly ship their renderings to local rendering and analysis farms that would today be called private clouds, with the results shipped back to the creating workstations. This work was largely developed and paid for by universities and by media companies as part of major movie production projects. Some of these environments were of significant scale, such as “Massive,” the rendering and animation farm for "Lord of the Rings" that had approximately 1,500 compute nodes, and a subsequent installation at Weta that may have up to 7,000 nodes. In my, admittedly arguable, opinion, the move to AutoDesk Cloud, while representing a major jump in capabilities by making the cloud accessible to a huge number of users, does not represent a major architectural innovation but rather an incremental step.

Read more

Intel Developer Forum (IDF) - Cloud. And Cloud, Cloud, Cloud. Oh, Yes, Did I Mention “Cloud”?

Richard Fichera

I just attended IDF and I’ve got to say, Intel has certainly gotten the cloud message. Almost everything is centered on clouds, from the high-concept keynotes to the presentations on low-level infrastructure, although if you dug deep enough there was content for general old-fashioned data center and I&O professionals. Some highlights:

Chips, processors, and low-level hardware

Intel is, after all, a semiconductor manufacturer, and despite its expertise in design, its true core competitive advantage is its manufacturing operations – even its competitors grudgingly acknowledge that Intel can manufacture semiconductors better than anyone else on the planet. As a consequence, showing off new designs and processes is always front and center at IDF, and this year was no exception. Last year it was Sandy Bridge, the new 32nm microarchitecture that followed the 32nm Westmere shrink. This year it was Ivy Bridge, the 22nm “tick” of the Intel “tick-tock” design cycle – a shrink of the Sandy Bridge architecture that also seems to have inherited Intel’s recent focus on power efficiency, with major improvements beyond the already solid advantages of the 22nm process, including deeper P-states and the ability to shut down parts of the chip when they are idle. While Intel did not discuss the server variants in any detail, the desktop versions will get an entirely new integrated graphics processor, which Intel is obviously hoping will blunt AMD’s resurgence in client systems. On the server side, if I had to guess: more cores and larger caches, along with increased support for I/O virtualization beyond what the current parts offer.
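For I&O readers who want to see what P-states look like in practice, here is a minimal sketch that lists the frequency steps Linux exposes through the cpufreq sysfs interface. It assumes a Linux host whose cpufreq driver (e.g., acpi-cpufreq) publishes scaling_available_frequencies; not all drivers do, and nothing here is Intel- or Ivy Bridge-specific.

```python
# Minimal sketch: show the CPU frequency steps (roughly, the P-states) that the
# Linux cpufreq subsystem exposes for cpu0. Assumes a Linux host; files that a
# given cpufreq driver does not publish are reported as "n/a".
from pathlib import Path

cpufreq = Path("/sys/devices/system/cpu/cpu0/cpufreq")

def read(name: str) -> str:
    path = cpufreq / name
    return path.read_text().strip() if path.exists() else "n/a"

print("governor:          ", read("scaling_governor"))
print("current freq (kHz):", read("scaling_cur_freq"))
print("available freqs:   ", read("scaling_available_frequencies"))
```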

Read more

An Early Look at Windows Server 8 – Can You Say Cloud?

Richard Fichera

Well, maybe everybody is saying “cloud” these days, but my first impression of Microsoft Windows Server 8 (not the final name) is that Microsoft has been listening very closely to what customers want from an OS that can support both public and private enterprise cloud implementations. And most importantly, the things that they have built into WS8 for “clouds” also look like they make life easier for plain old enterprise IT.

Microsoft appears to have focused its efforts on several key themes, all of which benefit legacy IT architectures as well as emerging clouds:

  • Management, migration, and recovery of VMs in a multi-system domain – Major improvements in Hyper-V and its management capabilities mean that I&O groups can easily build multi-system clusters of WS8 servers and migrate VMs across system boundaries. Multiple systems can be clustered with Fibre Channel, making it easier to implement high-performance clusters.
  • Multi-tenancy – A host of features, primarily around management and role-based delegation, that make it easier and more secure to implement multi-tenant VM clouds.
  • Recovery and resiliency – Microsoft claims that it can fail over VMs from one machine to another in 25 seconds, a very impressive number indeed. While vendor performance claims are always like EPA mileage – you are guaranteed never to exceed this number – this is an impressive claim and a significant capability, with major implications for HA architecture in any data center.
Read more

Xsigo Expands to a Data Center Fabric: Converged Infrastructure for the Virtual Data Center

Richard Fichera

Last year at VMworld I noted Xsigo Systems, a small privately held company showing its I/O Director technology, which delivered a subset of HP Virtual Connect or Cisco UCS I/O virtualization capability in a fashion that could be consumed by legacy rack-mount servers from any vendor. I/O Director connects to the server with one or more 10 Gb Ethernet links and then splits traffic out into enterprise Ethernet and FC networks. On the server side, the applications, including VMware, see multiple virtual NICs and HBAs courtesy of Xsigo’s proprietary virtual NIC driver.

Via Xsigo’s management console, server MACs and WWNs can be programmed, and servers can connect to multiple external networks with fewer cables and substantially lower costs for NIC and HBA hardware. Virtualized I/O is one of the major transformative developments in emerging data center architecture and will remain a theme in Forrester’s data center research coverage.

This year at VMworld, Xsigo announced a major expansion of their capabilities – Xsigo Server Fabric, which takes the previous rack-scale single-Xsigo switch domains and links them into a data-center-scale fabric. Combined with improvements in the software and UI, Xsigo now claims to offer one-click connection of any server resource to any network or storage resource within the domain of Xsigo’s fabric. Most significantly, Xsigo’s interface is optimized to allow connection of VMs to storage and network resources, and to allow the creation of private VM-VM links.
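To illustrate the kind of any-to-any mapping a converged fabric like this manages, here is a minimal conceptual sketch. It is emphatically not Xsigo’s API or data model; the class, endpoint, and resource names are invented for illustration.

```python
# Conceptual sketch only: a toy model of a fabric connection table that maps
# server/VM-side virtual NICs and HBAs onto external networks and storage.
# This is NOT Xsigo's API or data model; all names here are invented.
from dataclasses import dataclass, field

@dataclass
class FabricDomain:
    # (server_or_vm, virtual_adapter) -> external resource (network or storage)
    connections: dict = field(default_factory=dict)

    def connect(self, endpoint: str, vadapter: str, resource: str) -> None:
        """The 'one-click' operation: bind a virtual NIC/HBA to a resource."""
        self.connections[(endpoint, vadapter)] = resource

    def resources_for(self, endpoint: str) -> list:
        return [res for (ep, _), res in self.connections.items() if ep == endpoint]

fabric = FabricDomain()
fabric.connect("esx-host-01", "vnic0", "prod-ethernet-vlan-100")
fabric.connect("esx-host-01", "vhba0", "fc-san-array-A")
fabric.connect("vm-web-07",   "vnic0", "dmz-ethernet-vlan-200")

print(fabric.resources_for("esx-host-01"))
# ['prod-ethernet-vlan-100', 'fc-san-array-A']
```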

Read more