I bet you are thinking, “Oh no, this looks like a typical Friday IT blog post,” and it does have all the key ingredients: it’s Friday (tick), it has science fiction references (tick), it has a weird title (tick). But please go with the flow on this one.
Just three months after SAP acquired SuccessFactors, a cloud leader in human capital management solutions, for $3.4 billion, it has announced the acquisition of Ariba, a cloud leader in eProcurement solutions, for another $4.3 billion. Now, $7.7 billion is a lot of money to spend in a short amount of time on two companies that hardly make any profit. But it's all for the cloud, which is to say for the future business opportunity in cloud computing services. So far, so good: SAP has invested in and acquired quite a number of cloud companies over the past years, including Frictionless, Clear Standards, and Crossgate. What sets this most recent acquisition apart is the significant overlap with existing solutions and internal R&D.
Following the first wave of cloud acquisitions, SAP was sitting amid a zoo of cloud solutions, all built on different platforms: ePurchasing, CRM OnDemand, BI OnDemand, Carbon Impact, ByDesign, StreamWork, and more. They all used very different technologies, creating big integration and scaling challenges behind the scenes. The market therefore welcomed with open arms SAP's announcement 18 months ago that it would consolidate its cloud strategy on the new NetWeaver platform for both ABAP- and Java-based cloud solutions.
As developers, we often ask the infrastructure & operations (I&O) teams for more resources than we really need, so that we don't have to go back later and ask again; that's too painful and time-consuming. We also often don't know how many resources our code will need, so we might as well take as much as we can get. But do we ever give resources back when we learn we have more than we need?
On the other hand, I&O often isn't any better. The first rule we learned about capacity planning is that underestimating resource needs is more expensive than overestimating them, and we always seem to consume the extra resources eventually.
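That first rule is really an expected-cost argument, and a toy calculation makes the asymmetry concrete. All of the dollar figures and probabilities below are hypothetical placeholders, purely for illustration:

```python
# Toy model of the capacity-planning cost asymmetry.
# All figures are hypothetical illustrations, not real pricing data.

def expected_cost(extra_capacity_cost, shortfall_cost, shortfall_probability):
    """Expected cost = guaranteed spend on spare capacity
    plus the probability-weighted cost of running out."""
    return extra_capacity_cost + shortfall_probability * shortfall_cost

# Overestimating: pay for idle headroom, almost never run out.
over = expected_cost(extra_capacity_cost=10_000,
                     shortfall_cost=500_000,
                     shortfall_probability=0.01)

# Underestimating: spend nothing extra, but risk a capacity shortfall.
under = expected_cost(extra_capacity_cost=0,
                      shortfall_cost=500_000,
                      shortfall_probability=0.25)

print(f"over-provisioning:  ${over:,.0f}")   # $15,000
print(f"under-provisioning: ${under:,.0f}")  # $125,000
```

Under these made-up numbers, the cautious overestimate is far cheaper in expectation, which is exactly why I&O habitually pads its estimates.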
Through a combination of analyst briefings and customer events, Cisco has ramped up outbound communication and marketing of its collaboration strategy in Asia Pacific over the past several months. The foundation remains video (TelePresence), web conferencing (WebEx), and IP telephony, areas where Cisco is a leader. But Cisco understands that to drive growth and expand its customer footprint within enterprise accounts, it must move further up the stack and increasingly compete with both traditional collaboration vendors like Microsoft and IBM and cloud-based alternatives like Google and salesforce.com.
While the strategy still plays to the company’s core networking strength, I question whether Cisco can position itself as a “go-to” vendor in the traditional collaboration space. As our research shows, senior IT and business decision-makers in Asia Pacific don’t currently equate Cisco with collaboration.
To address this challenge, Cisco is pursuing several initiatives:
Leveraging its core strengths. Cisco is focused on expanding from existing unified communications (UC) initiatives within customer accounts by leveraging the combination of networking and video to drive value. Cisco is pushing “control” via intelligent networking capabilities (e.g., security, identity management, authentication, access), all delivered through Cisco networking hardware. Simultaneously, Cisco is pushing “flexibility” via device- and platform-independent collaboration capabilities like content, video, instant messaging, and social computing.
Sound familiar? Executives across the globe feel peer and competitive pressure to “get to yes” on private cloud. The burden falls on IT to provide a cloud solution — oh, and by the way, we need it by the end of the year. With that clock ticking, it’s hard to think about private cloud strategically. In fact, why not just cloudwash your virtual environment and buy your team some time? Many enterprises (yes, even those presenting at events) have gone down this road. And some vendors will suggest it as a short-term fix. DON’T DO IT.
You’re selling yourself short on what you could achieve with this environment while losing credibility with the business and your peers. Sound overdramatic? The consumerization of IT is forcing IT to connect with the business or risk being circumvented. For many, the existing relationship isn't great, and each future interaction can either improve or worsen it. Promising the business a cloud delivered within your own data center, and then failing to provide the basic functionality of a cloud, will only make future initiatives and interactions harder. In the meantime, the business will continue to circumvent your department. If you're going to invest the resources and time to build this environment and rope in rogue cloud users, make sure you actually get to cloud.
Shakespeare wrote in his famous play Hamlet, "Whether 'tis nobler in the mind to suffer the slings and arrows of outrageous fortune, or to take arms against a sea of troubles, and by opposing end them? To die: to sleep; No more." Hamlet was, of course, talking about betrayal within his family, but the quote is just as apt today in the world of cloud computing, because in the minds of many I&O professionals, it is the business that is committing the betrayal.
SaaS vendors must collect customer insights for innovation and compliance.
As of the end of last year, about 30% of companies in our Forrsights Software Survey, Q4 2011, were using some software-as-a-service (SaaS) solution; that number will grow to 45% by the end of 2012 and 60% by the end of 2013. The public cloud market for SaaS is the biggest and fastest-growing of all the cloud markets ($33 billion in 2012, growing to $78 billion by the end of 2015).
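Those two market-size endpoints imply a steep compound growth rate, which a quick back-of-the-envelope check confirms (using only the $33 billion and $78 billion figures cited above):

```python
# Implied compound annual growth rate (CAGR) of the SaaS market,
# from $33B at the end of 2012 to $78B at the end of 2015 (3 years).
start_billion, end_billion, years = 33.0, 78.0, 3

cagr = (end_billion / start_billion) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 33% per year
```

Growth in the low-thirties percent per year is why vendors describe SaaS as the fastest-growing of the cloud markets.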
However, most of this growth comes from cannibalizing the on-premises software market; software companies need to build a cloud strategy or risk getting stuck in the much slower-growing traditional application market and falling behind the competition. That is no easy task: implementing a cloud strategy means a lot of change for a software company in terms of products, processes, and people.
A successful SaaS strategy requires an open architecture (note: by definition, multitenancy is not a prerequisite for a SaaS solution, but vendors should strongly consider it for better scale) and a flexible business model, including the appropriate sales incentive structure to bring momentum to the street. For the purposes of this post, I'd like to highlight one challenge that software vendors must solve for sustainable growth in the SaaS market: maintaining and increasing customer insight.
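To make the multitenancy point concrete, here is a minimal sketch of the shared-schema pattern: one stack serves every customer, and each query is scoped by a tenant identifier. The table and column names are invented for illustration; real SaaS platforms layer far more isolation (and usually per-tenant encryption and quotas) on top of this idea.

```python
import sqlite3

# Minimal multitenant sketch: one shared table, rows scoped by tenant_id.
# Every read filters on the caller's tenant so customers never see each
# other's data, while the vendor operates a single shared stack.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE invoices (tenant_id TEXT, amount REAL)")
db.executemany("INSERT INTO invoices VALUES (?, ?)",
               [("acme", 100.0), ("acme", 250.0), ("globex", 75.0)])

def invoices_for(tenant_id):
    """Return only the invoice amounts belonging to one tenant."""
    rows = db.execute(
        "SELECT amount FROM invoices WHERE tenant_id = ? ORDER BY amount",
        (tenant_id,))
    return [amount for (amount,) in rows]

print(invoices_for("acme"))    # [100.0, 250.0]
print(invoices_for("globex"))  # [75.0]
```

The operational payoff is that one schema upgrade or one capacity expansion serves every customer at once, which is exactly the scale advantage the parenthetical note above refers to.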
The other day I visited Colt's London HQ and saw how the telco is revamping its approach to developing more customer-centric and Agile solutions (Colt consciously avoids the "cloud" terminology). By now, most telcos have managed to jump on the cloud bandwagon by launching cloud-based services. The challenge, from an end user's perspective, is that these solutions all look very similar. Customers can get storage, server capacity, unified communications, etc., from most telcos. All telcos underline the value-added nature of the end-to-end network QoS and security that they can ensure (check out our report, "Telcos As Cloud Rainmakers"). Indeed, telcos have some right to feel that they have made progress with their cloud offerings, although it took Amazon to show them the opportunity.
But most telco cloud offerings suffer from the fact that telcos develop cloud solutions in the traditional way, through their traditional product factories. This approach tends to follow rather slow product innovation cycles. Moreover, it produces products that, once developed, are pushed to the customer as standard offerings; all customisation costs extra.
The reality of cloud demand is that every customer is different. Most customers want some form of customisation. Most want some form of hybrid cloud: a private part for core apps, plus access to the open Internet to, for instance, exchange views and information with end customers via Twitter or to crowdsource with suppliers. Similarly, most customers want a mix of fixed and virtual assets and a blend of self-service and managed-service solutions, as the chart indicates.
I said last year that this would happen sometime in the first half of this year, but for some reason my colleagues and clients have kept asking me exactly when we would see a real ARM server running a real OS. How about now?
To copy from Calxeda’s most recent blog post:
“This week, Calxeda is showing a live Calxeda cluster running Ubuntu 12.04 LTS on real EnergyCore hardware at the Ubuntu Developer and Cloud Summit events in Oakland, CA. … This is the real deal; quad-core, w/ 4MB cache, secure management engine, and Calxeda’s fabric all up and running.”
This is a significant milestone for several reasons. It proves that Calxeda can indeed deliver a working server based on its scalable fabric architecture (having HP sign up as a partner meant this was essentially a non-issue, but proof is still good). It also establishes that at least one Linux distribution provider, in this case Ubuntu, is willing to provide a real, supported distribution. My guess is that Red Hat and CentOS will jump on the bus fairly soon as well.
Most importantly, we can now get on with the important work of characterizing real benchmarks on real systems with real OS support. HP's discovery centers will certainly play a part in this process, and I am willing to bet that by the end of the summer we will have compelling data on whether ARM servers will deliver on their performance and energy-efficiency promises. It's not a guaranteed slam dunk: Intel has been steadily ratcheting up its energy efficiency, and the latest generation of x86 servers from HP, IBM, Dell, and others shows promise of much better throughput per watt than its predecessors. Add to that the demonstration by SeaMicro (ironically, now owned by AMD) of a Xeon-based system that delivered Xeon CPUs at a 10 W per-CPU power overhead, an unheard-of level of efficiency.
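Whether ARM wins will ultimately come down to simple performance-per-watt arithmetic. A sketch of the comparison, with the caveat that every throughput and power figure below is a hypothetical placeholder until real benchmark data arrives:

```python
# Performance-per-watt comparison sketch. The throughput and power
# figures are hypothetical placeholders, NOT benchmark results; the
# point is only the metric: work done per watt consumed.

def perf_per_watt(throughput, watts):
    """Throughput units delivered per watt of power consumed."""
    return throughput / watts

# Hypothetical: a low-power ARM node vs. a conventional x86 node.
arm = perf_per_watt(throughput=400, watts=5)     # 80.0 units/W
x86 = perf_per_watt(throughput=4000, watts=95)   # ~42.1 units/W

print(f"ARM node: {arm:.1f} units/W, x86 node: {x86:.1f} units/W")
```

Note that a node can win on this metric while losing badly on absolute throughput, which is why the per-node numbers alone never settle the argument; the real benchmarks this paragraph calls for will have to measure both.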
In the latest evolution of its Linux push, IBM has added to its non-x86 Linux server line with the introduction of new dedicated Power 7 rack and blade servers that run only Linux. "Hah!" you say. "Power already runs Linux, and quite well, according to IBM." This is indeed true, but when you look at the price/performance of Linux on standard Power, the picture is less advantageous: the higher cost of Power servers compared with x86 servers offsets much, if not all, of the performance advantage.
Enter the new Flex System p24L (Linux) Compute Node blade for the new PureFlex system and the IBM PowerLinux™ 7R2 rack server. Both are dedicated Linux-only systems with two Power 7 processors (6 or 8 cores each, 4 threads per core), and both ship with unlimited licenses for IBM's PowerVM hypervisor. Most importantly, in exchange for the limitation that they run only Linux, these systems are priced competitively with similarly configured x86 systems from major competitors; IBM is betting that the performance improvement shown by IBM-supplied benchmarks will overcome any resistance to running Linux on a non-x86 system. Note that this is a different proposition from Linux running on an IFL in a zSeries, since the mainframe is usually not the customer's point of entry: IBM typically sells IFLs to customers with existing mainframes, whereas with Power Linux it will be selling to net-new customers as well as established accounts.