As of late 2011, more than half the organizations we surveyed in Asia Pacific excluding Japan (APEJ) are either currently using or actively planning cloud initiatives — 52% in fact. This number has nearly tripled since 2009.
But adoption rates alone don’t tell the whole story. Vendor strategists should also be closely tracking how organizations evolve from ad hoc, disjointed cloud projects to well-defined, effectively managed cloud procurement. Our recent survey results indicate a surprising degree of maturity across the region — along with some clear areas for growth.
Centralized IT procurement of cloud services varies widely across the region. Australia (82%) and India (83%) currently lead in driving centralized procurement and management of cloud services through IT; both markets are well above the regional average of 74%. This is no surprise for Australia, the region's most mature cloud computing market. But the results for India are surprising, and they point to the potential for a sharp increase in demand for cloud services over the next six to 12 months as early projects begin delivering positive returns. Only 66% of respondents in China are currently centralizing cloud procurement and management — not unexpected, given China's lag in cloud adoption relative to other APEJ markets.
Organizations in China are the least likely to have a formal cloud strategy in place. Fifty-six percent of respondents in China currently see unsanctioned buying by the business outside of IT. This is by far the highest rate in APEJ, where the average is 35%, with lows of 23% in Australia and 25% in Singapore.
In late 2010 I noted that startup SeaMicro had introduced an ultra-dense server using Intel Atom chips in an innovative fabric-based architecture that allowed it to factor out much of the power overhead of a large multi-CPU server (http://blogs.forrester.com/richard_fichera/10-09-21-little_servers_big_applications_intel_developer_forum). Along with many observers, I noted that the original SeaMicro server was well suited to many lightweight edge-processing tasks, but that the system would not support more traditional compute-intensive tasks due to the performance of the Atom core. I was, however, quite taken with the basic architecture, which uses a proprietary high-speed (1.28 Tb/s) 3D mesh interconnect to let the CPU cores share the network, BIOS, and disk resources that are normally replicated on a per-server basis in conventional designs, with commensurate reductions in power and an increase in density.
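To see why amortizing per-server overhead across a shared fabric saves so much power, here is a back-of-the-envelope sketch. Every wattage figure below is hypothetical, chosen only to illustrate the mechanism, not taken from SeaMicro's specifications:

```python
# Hypothetical comparison: conventional servers vs. a shared-fabric design.
# In a conventional design, every server duplicates NICs, BIOS/management,
# and disk controllers; a fabric design amortizes those across many CPU cards.

CPU_WATTS = 8            # illustrative low-power (Atom-class) CPU
OVERHEAD_WATTS = 40      # hypothetical per-server NIC/disk/management overhead
NODES = 256

# Conventional: every node pays the full overhead.
conventional = NODES * (CPU_WATTS + OVERHEAD_WATTS)

# Shared fabric: overhead paid once per chassis, not once per node.
SHARED_OVERHEAD = 400    # hypothetical chassis-wide fabric + I/O power
fabric = NODES * CPU_WATTS + SHARED_OVERHEAD

savings = 1 - fabric / conventional
print(f"conventional: {conventional} W, fabric: {fabric} W, "
      f"savings: {savings:.0%}")  # → savings: 80%
```

The exact percentage depends entirely on the assumed overhead ratio, but the structure of the saving is the point: the fixed per-server cost is paid once instead of N times.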
I attended Trend Forum 2012 in Bonn last week, effectively an analyst day at which Deutsche Telekom presented its innovation strategy. There was no focus on overall group strategy, but innovation matters greatly as part of telcos' repositioning efforts. As the role of telcos in the value chain weakens, largely due to increasing competition from over-the-top providers (OTTPs), telcos increasingly need to differentiate themselves via service provision and their ability to innovate quickly and prolifically. Failure to do so will cement their status as transport utilities for OTTPs.
Deutsche Telekom’s Core Beliefs focus on: a) building its platform business by partnering with software firms; b) leveraging the cloud by providing high QoS and secure connectivity; and c) leveraging differentiated terminals through device management and customer experience provision. These Core Beliefs form the basis for pursuing its focus growth segments of digital media distribution, cloud storage, cross-device digital advertising, classified marketplaces, and mobile payment, in addition to the core telco business. These targets match up well against our evaluation of the best cloud markets for telcos.
A defining characteristic of next-generation network (NGN) infrastructure and the move toward cloud-based business models is openness. As a consequence, OTTPs increasingly deal directly with end customers across the network, and relationships between telcos and the other members of the value chain are becoming more complex. Telcos' emerging cloud services need to become network agnostic in order to deliver cross-network solutions and ensure cloud interoperability. Deutsche Telekom has made significant progress in the recent past in adapting its strategy to these new telco realities.
OK, it’s time to stretch the 2012 writing muscles, and what better way to do it than with the time-honored “retrospective” format. But rather than try to itemize all the news and come up with a list of a dozen or more interesting things, I decided instead to pick the best and the worst – events and developments that show the amazing range of the technology business, its potential and its daily frustrations. So, drum roll, please. My personal nominations for the best and worst of the year (along with a special extra bonus category) are:
The Best – IBM Watson stomps the world’s best human players in Jeopardy. In early 2011, IBM put its latest deep-computing project, Watson, up against some of the best Jeopardy players in the world. Watson, consisting of hundreds of IBM Power CPUs, gazillions of bytes of memory and storage, and arguably the most sophisticated rules engine and natural-language recognition capability ever developed, won hands down. If you haven’t seen the videos of this event, you should – watching the IBM system fluidly answer very tricky questions is amazing. There is no sense that it is parsing the question and then sorting through 200 to 300 million pages of data per second in the background as it assembles its answers. This is truly the computer industry at its best. IBM lived up to its brand image as one of the oldest and strongest technology companies and showed us the potential for integrating computers into entirely new classes of solutions. Since the Jeopardy event, IBM has been working on commercializing Watson with an eye toward delivering domain-specific expert advisors. I recently listened to a presentation by a doctor participating in the trials of a Watson medical assistant, and the results were startling in terms of the potential to assist medical professionals with diagnostic procedures.
We have just published Forrester's current forecast for the global market for information technology goods and services purchased by businesses and governments (see January 6, 2012, "Global Tech Market Outlook For 2012 And 2013"), and it shows growth of 5.4% in 2012 in US dollars and 5.3% in local-currency terms. Those growth rates are a bit lower than our prior forecast from September 2011 (see September 16, 2011, “Global Tech Market Outlook For 2011 And 2012 — Economic And Financial Turmoil Dims 2012 Prospects"), in which we projected 2012 growth of 5.5% in US dollars and 6.5% in local-currency terms. Note that these numbers include business and government purchases of computer and communications equipment, software, and IT consulting and outsourcing services equal to $2.1 trillion in 2012, but do not include telecommunications services.
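The gap between the US-dollar and local-currency growth figures comes from exchange-rate movement. A minimal sketch of that mechanic for a single hypothetical market (only the 5.3% local-currency growth rate comes from the forecast; the market size and exchange rates are illustrative):

```python
# Why US-dollar and local-currency growth rates differ: currency drift.

local_2011 = 100.0     # market size in local currency, 2011 (hypothetical)
local_growth = 0.053   # 5.3% local-currency growth (from the forecast)
fx_2011 = 1.000        # USD per unit of local currency, 2011 (hypothetical)
fx_2012 = 1.001        # slight local-currency appreciation (hypothetical)

local_2012 = local_2011 * (1 + local_growth)
usd_growth = (local_2012 * fx_2012) / (local_2011 * fx_2011) - 1

# A 0.1% currency appreciation turns 5.3% local growth into ~5.4% USD growth.
print(f"{usd_growth:.2%}")
```

When local currencies depreciate against the dollar instead, the effect reverses, which is how a forecast can show higher local-currency growth than dollar growth, as in the September numbers.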
Based on the very high interest in this blog and its cloud predictions, we are planning to host a Forrester teleconference entitled "2012 — The Year The Cloud Matures: A Deeper Dive Into 10 Cloud Predictions For The Upcoming Year" on February 28, 1-2 p.m. EST/6-7 p.m. UK time, where we will go through the 10 predictions below one by one. For more details and registration, please follow the link to the teleconference web page.
1. Multicloud becomes the norm
As companies quickly adopt a variety of cloud resources, they’ll increasingly have to address working with several different cloud solutions, often from different providers. By the end of 2012, cloud customers will already be using more than 10 different cloud apps on average. Cloud orchestration will become a big topic and an opportunity for service providers.
2. The Wild West of cloud procurement is over
While 2011 still saw many cloud deals brokered by different stakeholders within a company (sometimes unsanctioned by IT), most companies will have established their formal cloud strategy by the end of 2012, including the business models between IT and lines of business for their own private cloud resources.
Cloud – people can’t agree on exactly what it is, but everyone agrees that they want some piece of it. I have not talked to a single client who isn’t proactively pursuing cloud in some form or fashion. This cloud obsession was equally evident in our 2011 technology tweet jam, which is why this year’s business technology and technology trends reports cover cloud extensively. Our research further supports this: for example, 29% of infrastructure and operations executives surveyed stated that building a private cloud was a critical priority for 2011, while 28% planned to use public offerings, and these numbers are rising every year.
So what should EAs think about cloud? My suggestion is that you think about how your current IT strategy supports taking advantage of what cloud is offering (and what it’s not). Here are our cloud-related technology trends along with some food for thought:
The next phase of IT industrialization begins. This trend points out how unprepared our current IT delivery model is for the coming pace of technology change, which is why cloud is appealing. It offers potentially faster ways to acquire technology services. Ask yourself – is my firm’s current IT model and strategy good enough to meet technology demands of the future?
Today HP announced a new set of technology programs and future products designed to move x86 server technology, for both Windows and Linux, more fully into the realm of truly mission-critical computing. My interpretation is that these moves are both defensive and offensive on HP’s part: they will protect HP as its Itanium/HP-UX portfolio slowly declines, and they will offer attractive and potentially unique options for current and future customers who want to deploy increasingly critical services on x86 platforms.
Bearing in mind that the earliest of these elements will not be in place until approximately mid-2012, the key elements that HP is currently disclosing are:
ServiceGuard for Linux – This is a big win for Linux users on HP and removes a major operational and architectural hurdle for HP-UX migrations. ServiceGuard is a highly regarded clustering and HA facility on HP-UX that includes many features for local and geographically distributed HA, and its absence is often cited as a risk in HP-UX migrations. The availability of ServiceGuard on Linux by mid-2012 will remove yet another barrier to smooth migration from HP-UX to Linux and will help HP retain the business as customers migrate off HP-UX.
Analysis engine for x86 – Analysis engine is internal software that provides system diagnostics, predictive failure analysis and self-repair on HP-UX systems. With an uncommitted delivery date, HP will port this to selected x86 servers. My guess is that since the analysis engine probably requires some level of hardware assist, the analysis engine will be paired with the next item on the list…
This week AMD finally released its Opteron 6200 and 4200 series CPUs. These are the long-awaited server-oriented Interlagos and Valencia parts, based on the new “Bulldozer” core and offering up to 16 x86 cores in a single socket. The announcement was targeted at (drum roll, one guess per customer only) … “The Cloud.” AMD appears to be positioning its new architecture as the platform of choice for cloud-oriented workloads, focusing on highly threaded, throughput-oriented benchmarks that take full advantage of its high core count and unique floating-point architecture, along with what look like excellent throughput-per-watt metrics.
At the same time that it is pushing the now seemingly mandatory “cloud” message, AMD is not ignoring the meat-and-potatoes enterprise workloads that have been the mainstay of server CPU sales: virtualization, database, and HPC, where the combination of many cores, excellent memory bandwidth, and large memory configurations should yield excellent results. In its competitive comparisons, AMD targets Intel’s 5640, which it claims is Intel’s most widely used Xeon CPU, and shows very favorable comparisons in performance, price, and power consumption. Among the features that AMD cites as contributing to these results are:
Advanced power and thermal management, including the ability to power off inactive cores, contributing to an idle power of less than 4.4 W per core. Interlagos also offers a capability called TDP Power Cap, which allows I&O groups to set the total power threshold of the CPU in 1 W increments, enabling fine-grained tailoring of power across server racks.
Turbo CORE, which allows boosting the clock speed of cores by up to 1 GHz for half the cores or 500 MHz for all the cores, depending on workload.
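The practical appeal of power capping is rack planning: a lower, firm per-socket ceiling lets you fit more servers under a fixed power budget. A minimal sketch of that arithmetic; only the 1 W cap granularity comes from the text above, while the rack budget and wattages below are hypothetical:

```python
# Sketch: fitting dual-socket servers into a fixed rack power budget
# when the per-CPU power ceiling can be capped in 1 W increments.

RACK_BUDGET_W = 8000      # hypothetical rack power budget
CPUS_PER_SERVER = 2
DEFAULT_TDP_W = 115       # hypothetical default per-socket power ceiling
CAPPED_TDP_W = 85         # operator-chosen cap (settable in 1 W steps)

def servers_fitting(budget_w: int, per_cpu_w: int,
                    cpus: int = CPUS_PER_SERVER) -> int:
    """Servers that fit if every CPU is budgeted at its power ceiling."""
    return budget_w // (per_cpu_w * cpus)

print(servers_fitting(RACK_BUDGET_W, DEFAULT_TDP_W))  # → 34 servers
print(servers_fitting(RACK_BUDGET_W, CAPPED_TDP_W))   # → 47 servers
```

Capped CPUs run slower under sustained load, of course; the trade works best for throughput-oriented workloads where more, slower nodes beat fewer, faster ones.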
Emerging ARM server vendor Calxeda has been hinting for some time that it had a significant partnership announcement in the works, and while we didn’t necessarily disbelieve it, we hear a lot of claims from startups telling us to “stay tuned” for something big. Sometimes they pan out; sometimes they simply go away. But this morning Calxeda surpassed our expectations by unveiling a single major systems partner – one that happens to be Hewlett-Packard, which dominates the worldwide market for x86 servers.
At its core (an unintended but not bad pun), Project Moonshot from HP’s Hyperscale business unit and Calxeda’s server technology are about improving the efficiency of web and cloud workloads, and they promise improvements in excess of 90% in power efficiency and similar gains in physical density compared with current x86 solutions. As I noted in my first post on ARM servers and other documents, even if these estimates turn out to be exaggerated, there is still a generous window within which to do much, much better than current technologies. And workloads (such as memcache, Hadoop, and static web serving) will be selected for their fit to this new platform, so the workloads that run on it will potentially come close to the cases quoted by HP and Calxeda.
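To put the ">90% power efficiency" claim in perspective, here is what a 90% power reduction at equal throughput implies in requests-per-watt terms. All node figures are hypothetical, chosen only to illustrate the arithmetic, not vendor specifications:

```python
# What a ~90% power-efficiency improvement means for serving capacity.
# Assumption: both node types deliver the same throughput on a workload
# that fits the ARM platform well (e.g., static web serving).

X86_WATTS_PER_NODE = 250    # hypothetical conventional x86 web node
ARM_WATTS_PER_NODE = 25     # ~90% less power at comparable throughput
REQUESTS_PER_SEC = 10_000   # same workload on both (assumed)

x86_req_per_watt = REQUESTS_PER_SEC / X86_WATTS_PER_NODE
arm_req_per_watt = REQUESTS_PER_SEC / ARM_WATTS_PER_NODE

# A 90% power cut at equal throughput is a 10x gain in work per watt.
print(f"x86: {x86_req_per_watt:.0f} req/s/W, ARM: {arm_req_per_watt:.0f} req/s/W")
```

This is also why workload selection matters so much: on compute-bound tasks where the ARM nodes cannot match x86 throughput, the per-watt advantage shrinks or disappears.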