Shakespeare wrote in his famous play Hamlet, "Whether 'tis nobler in the mind to suffer the slings and arrows of outrageous fortune, or to take arms against a sea of troubles, and by opposing end them? To die: to sleep; No more." Hamlet, of course, was contemplating the betrayal in his family, but the quote is just as apt today in the world of cloud computing, because in the minds of many I&O professionals, it is the business that is conducting the betrayal.
SaaS vendors must collect customer insights for innovation and compliance.
As of the end of last year, about 30% of companies from our Forrsights Software Survey, Q4 2011, were using some software-as-a-service (SaaS) solution; that number will grow to 45% by the end of 2012 and 60% by the end of 2013. The public cloud market for SaaS is the biggest and fastest-growing of all of the cloud markets ($33 billion in 2012, growing to $78 billion by the end of 2015).
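For readers who want the implied growth rate, the market-size figures above work out to roughly a 33% compound annual growth rate. A quick sketch (the CAGR is derived here from the quoted figures, not itself a survey number):

```python
# Implied growth rate of the public SaaS market, derived from the
# figures cited above (USD billions, 2012 -> 2015).
start, end, years = 33.0, 78.0, 3
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # roughly 33% per year
```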
However, most of this growth is based on the cannibalization of the on-premises software market; software companies need to build their cloud strategy or risk getting stuck in the much slower-growing traditional application market and falling behind the competition. But this is no easy task: implementing a cloud strategy involves significant changes for a software company in terms of products, processes, and people.
A successful SaaS strategy requires an open architecture (note: multitenancy is not, by definition, a prerequisite for a SaaS solution, but vendors should strongly consider it for better scale) and a flexible business model, including a sales incentive structure that builds momentum in the field. For the purposes of this post, I'd like to highlight one challenge that software vendors need to solve for sustainable growth in the SaaS market: maintaining and increasing customer insight.
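The scale argument for multitenancy can be sketched with a toy shared-schema example: every row carries a tenant identifier, so one database and one application instance serve many customers instead of one stack per customer. The table and tenant names below are hypothetical, not from any vendor's product.

```python
import sqlite3

# Shared-schema multitenancy sketch: all tenants share one table,
# and every row is tagged with the tenant it belongs to.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (tenant_id TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO invoices VALUES (?, ?)",
    [("acme", 100.0), ("acme", 50.0), ("globex", 75.0)],
)

def invoices_for(tenant_id):
    # Every query is scoped by tenant_id -- the core discipline that
    # lets one set of hardware serve all customers safely.
    rows = conn.execute(
        "SELECT amount FROM invoices WHERE tenant_id = ? ORDER BY rowid",
        (tenant_id,),
    )
    return [amount for (amount,) in rows]

print(invoices_for("acme"))    # [100.0, 50.0]
print(invoices_for("globex"))  # [75.0]
```

The same pattern is also where the operational leverage comes from: one schema migration, one upgrade, and one monitoring stack cover every customer at once.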
The other day I visited Colt's London HQ and saw how the telco is revamping its approach to developing more customer-centric and Agile solutions (Colt consciously avoids the "cloud" terminology). By now, most telcos have managed to jump onto the cloud bandwagon by launching cloud-based services. The challenge, from an end user perspective, is that these solutions all seem very similar. Customers can get storage, server capacity, unified communications, etc., from most telcos. All telcos underline the value-added nature of the end-to-end network QoS and security that they can ensure (check out our report, "Telcos As Cloud Rainmakers"). Indeed, telcos have some right to feel that they have made progress with their cloud offerings, although it took Amazon to show them the opportunity.
But most telco cloud offerings suffer from the fact that telcos develop cloud solutions in the traditional sense, through their traditional product factories. This approach tends to follow rather slow product innovation cycles. Moreover, it produces products that, once developed, are pushed to the customer as a standard offering. All customisation costs extra.
The reality of cloud demand is that each customer is different. Most customers want some form of customisation. Most want some form of hybrid cloud: a private part for core apps, plus access to the open Internet to, for instance, exchange views and information with end customers via Twitter or to crowdsource with suppliers. Similarly, most customers want a mix of fixed and virtual assets and a blend of self-service and managed-service solutions, as the chart indicates.
I said last year that this would happen sometime in the first half of this year, but for some reason my colleagues and clients have kept asking me exactly when we would see a real ARM server running a real OS. How about now?
To copy from Calxeda’s most recent blog post:
“This week, Calxeda is showing a live Calxeda cluster running Ubuntu 12.04 LTS on real EnergyCore hardware at the Ubuntu Developer and Cloud Summit events in Oakland, CA. … This is the real deal; quad-core, w/ 4MB cache, secure management engine, and Calxeda’s fabric all up and running.”
This is a significant milestone for many reasons. It proves that Calxeda can indeed deliver a working server based on its scalable fabric architecture (HP's signing up as a partner meant this was essentially a non-issue, but proof is still good). It also establishes that at least one Linux distribution provider, in this case Ubuntu, is willing to provide a real supported distribution. My guess is that Red Hat and CentOS will jump on the bus fairly soon as well.
Most importantly, we can get on with the important work of characterizing real benchmarks on real systems with real OS support. HP's discovery centers will certainly play a part in this process, and I am willing to bet that by the end of the summer we will have some compelling data on whether the ARM server will deliver on its performance and energy-efficiency promises. It's not a guaranteed win: Intel has been steadily ratcheting up its energy efficiency, and the latest generation of x86 servers from HP, IBM, Dell, and others shows promise of much better throughput per watt than its predecessors. Add to that SeaMicro's demonstration (ironically, SeaMicro is now owned by AMD) of a Xeon-based system that delivered Xeon CPUs at a 10 W per-CPU power overhead, an unheard-of efficiency.
In the latest evolution of its Linux push, IBM has added to its non-x86 Linux server line with the introduction of new dedicated Power7 rack and blade servers that run only Linux. "Hah!" you say. "Power already runs Linux, and quite well, according to IBM." This is indeed true, but when you look at the price/performance of Linux on standard Power, the picture is not quite as advantageous, with the higher cost of Power servers compared to x86 servers offsetting much, if not all, of the performance advantage.
Enter the new Flex System p24L (Linux) Compute Node blade for the new PureFlex system and the IBM PowerLinux™ 7R2 rack server. Both are dedicated Linux-only systems with two Power7 processors (six or eight cores each, four threads per core), and both ship with unlimited licenses for IBM's PowerVM hypervisor. Most importantly, in exchange for the limitation that they will run only Linux, these systems are priced competitively with similarly configured x86 systems from major competitors, and IBM is betting that the performance improvement shown by IBM-supplied benchmarks will overcome any resistance to running Linux on a non-x86 system. Note that this is a different proposition from Linux running on an IFL in a zSeries, since the mainframe is usually not the entry point for the customer: IBM typically sells IFLs to customers with existing mainframes, whereas with Power Linux it will be attempting to sell to net-new customers as well as established accounts.
In the last couple of weeks, I finally put a couple of pieces together: the tech industry is pushing hard, down two parallel tracks, toward much more resource-efficient computing architectures.
Track 1: Integrated systems. Computer suppliers are putting hardware components (including compute, network, and storage) together with middleware and application software in pre-integrated packages. The manufacturers will do assembly and testing of these systems in their factories, rather than on the customer's site. And they will tailor the system — to a greater or lesser degree, depending on the system — to the characteristics of the workload(s) it will be running.
The idea is to use general-purpose components (microprocessors, memory, network buses, and the like) to create special-purpose systems on a mass-customization basis. This trend has been evident for a while in the Oracle Exadata and Cisco UCS systems; IBM's Pure systems introductions push it even further into pre-configured applications and systems management.
Track 2: Modular data centers. Now, zoom out from individual computing systems to aggregations of those systems into data centers. Again, the goal is to assemble as much of the componentry as possible in the factory rather than on-site. Vendors like Schneider and Emerson, along with systems shops like IBM and HP, are creating design approaches and infrastructure systems that allow data centers to be built in modular fashion, with much of the equipment, such as air handling and power, trucked to the customer's site, set up in the parking lot, and quickly turned on.
Vodafone has agreed to acquire Cable & Wireless Worldwide (CWW) for GBP1.04 billion in cash, valuing CWW at three times EBITDA. The deal propels Vodafone to become the second-largest telco in the UK, with revenues of GBP6.97 billion, behind BT with revenues of GBP15.6 billion. From a financial perspective, the deal has a limited impact, with CWW contributing only about 3% of Vodafone's 2011 EBITDA. However, given BT's lack of a mobile division, Vodafone becomes the leading integrated telco in the UK, offering both fixed and mobile operations. The deal is expected to complete in Q3 2012.
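A back-of-the-envelope check of the deal metrics cited above (figures in GBP billions; the derived numbers are my approximations, not reported results):

```python
# Implied CWW EBITDA from the deal price and the 3x multiple.
price = 1.04                      # cash offer for CWW, GBP billions
ebitda_multiple = 3               # deal valued at 3x CWW EBITDA
cww_ebitda = price / ebitda_multiple
print(round(cww_ebitda, 2))       # ~0.35, i.e. roughly GBP 350M

# CWW's EBITDA is about 3% of Vodafone's 2011 EBITDA, which implies
# Vodafone EBITDA of roughly:
vodafone_ebitda = cww_ebitda / 0.03
print(round(vodafone_ebitda, 1))  # ~11.6
```

Both implied figures are consistent with the "limited financial impact" framing: CWW is a small addition to Vodafone's earnings base, even though it is strategically significant.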
The main focus of the deal is on CWW's UK fixed-line network and its business customer base, both of which Vodafone aims to combine with its UK mobile network. CWW provides managed voice, data, hosting, and IP-based services and applications. The deal boosts Vodafone's enterprise offering, both in terms of access and transport infrastructure and in terms of customer base. CWW is a major global infrastructure player: Its international cable network spans 425,000 km and covers 150 countries. In the UK, CWW operates a 20,500 km fiber network. Moreover, CWW has about 6,000 business customers. The future of CWW's non-UK assets remains uncertain; in our view, they provide real value for Vodafone by strengthening its global network infrastructure. Vodafone will provide further details regarding these non-UK assets later in the year.
Over the last couple of years, IBM, despite having a rich internal technology ecosystem and a number of competitive blade and converged infrastructure (CI) offerings, has not had a comprehensive integrated offering to challenge HP's CloudSystem Matrix and Cisco's UCS. This past week, IBM effectively silenced its critics and jumped to the head of the CI queue with the announcement of two products, PureFlex and PureApplication, the results of a massive multi-year engineering investment in blade hardware, systems management, networking, and storage integration. Based on a new modular blade architecture and a new management architecture, the two products are really a continuum of a single product, differentiated by the level of software, rather than two separate technology offerings.
PureFlex is the base product, consisting of the new hardware (which, despite having the same number of blades as the existing HS blade products, is in fact a totally new piece of hardware). It integrates BNT-based networking as well as a new object-based management architecture that can manage up to four chassis and provides a powerful set of optimization, installation, and self-diagnostic functions for the hardware and software stack, up to and including the OS images and VMs. In addition, IBM appears to have integrated the complete suite of Open Fabric Manager and Virtual Fabric for remapping MAC/WWN UIDs and managing VM networking connections, plus storage integration via the embedded V7000 storage unit, which serves as both a storage pool and an aggregation point for virtualizing external storage. The laundry list of features and functions is too long to itemize here, but PureFlex, especially with its hypervisor neutrality and IBM's Cloud FastStart option, is a complete platform for an enterprise private cloud or a horizontal VM compute farm, however you choose to label a shared VM utility.
If you want to see the full spectrum of cloud choices coming to market today, you need only look at these two efforts as they evolve. They represent the extremes. And, ironically, both held analyst events this week.
OpenStack is clearly an effort by a vendor (Rackspace) to launch a community that advances technology and drives innovation around a framework that multiple vendors can use to bring myriad cloud services to market and deliver differentiated value. Oracle, by contrast, which gave analysts a brief look inside its public cloud efforts this week, is taking a completely closed, self-built approach that aims to fulfill all cloud values from top to bottom.
While the bulk of the enterprise IT market grumbles about the maturity and security of cloud computing services, it looks like the media & entertainment segment is just doing it. At the annual conference for the National Association of Broadcasters (NAB) in Las Vegas, myriad technology vendors are showing off their solutions that are transforming the way video content gets to us and behind the scenes there appears to be a lot of cloud computing making this happen. And there is a strong fit between these two industries because their business and economic models are evolving in complementary ways.
Sure, we all know that video streaming to your phone, tablet, and TV is the new normal, but how this is accomplished is changing under the covers, and cloud computing brings an economic model that maps better to the business of media and entertainment. You see, while broadcasting is a steady-state business, the production process and eventual popularity of any particular video segment or show are not. The workflow behind the scenes is evolving rapidly, or, more appropriately, devolving.