I get a lot of questions about the best way for developers to move to the cloud. That’s a good thing, because trying to forklift your existing applications as-is isn’t a recipe for success. Building elastic applications requires a focus on statelessness, atomicity, idempotence, and parallelism — qualities that are not often built into traditional “scale-up” applications. But I also get questions that I think are a bit beside the point, like “Which is better: infrastructure-as-a-service (IaaS) or platform-as-a-service (PaaS)?” My answer: “It depends on what you’re trying to accomplish, your teams’ skills, and how you like to consume software from ISVs.” That first question is often followed by a second: “Who’s the leader in the public cloud space?” It’s like asking, “Who’s the leading carmaker?” There’s a volume answer and there’s a performance answer; it’s one answer if you like pickups and a different answer if you want an EV. You have to look at your individual needs and match the capabilities of the car and its “ilities” to those needs. That’s how I think developer adoption of cloud services is starting to evolve — around the capabilities of individual services, not the *aaS taxonomy that we pundits and vendors apply to what’s out there. This service-based pattern of adoption is reflected in data from our Forrsights Developer Survey, Q1 2013, so I’ve chosen to publish some of it today to illustrate the adoption differences we see from service to service.
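To make one of those qualities concrete, here is a minimal, hypothetical sketch of idempotence (all names and the in-memory store are my own invention, not from the survey): an operation designed so that the retries an elastic platform routinely issues cannot apply the same change twice.

```python
# Hypothetical sketch: an idempotent charge operation. A retried request with
# the same idempotency key is a no-op, so platform-level retries are safe.
class PaymentProcessor:
    def __init__(self):
        self.balances = {}      # account -> balance
        self.seen_keys = set()  # idempotency keys already applied

    def charge(self, account, amount, idempotency_key):
        """Apply a charge exactly once per idempotency key."""
        if idempotency_key in self.seen_keys:
            return self.balances[account]  # replayed request: no effect
        self.seen_keys.add(idempotency_key)
        self.balances[account] = self.balances.get(account, 0) - amount
        return self.balances[account]

processor = PaymentProcessor()
processor.charge("acct-1", 25, "req-001")
processor.charge("acct-1", 25, "req-001")  # retry of the same request
print(processor.balances["acct-1"])  # -25, not -50
```

In a real service the key store would live in a shared database or cache rather than in process memory, but the contract is the same: calling the operation N times has the effect of calling it once.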
Peter O'Neill here with some observations about cloud computing and channel partners. While cloud computing has been a boon for the tech industry in general, for channel partners the story is different. Channel partners have to deal with shrinking product margins, skills shortages, and new competitor types (including tech vendors themselves!).
And the funny thing is: many vendors still haven’t internalized the predicament their partners are in. How else can you explain Microsoft executives berating their partners at their recent Worldwide Partner Conference that “only 2% of you are in the cloud business” — and then adding insult to injury by calmly suggesting that partners could host future customer visits in Microsoft Stores, where customers can see the MS cloud products (I count the Surface tablet in that list) that those partners cannot even sell!
Forrester Principal Analyst Tim Harmon and I discuss these issues almost every day with technology vendors — in fact, with B2B vendors in general, because cloud computing is now affecting every sector (including insurance, health care, etc.). Channel partners are changing their business-model stripes — in myriad directions, and often as ungrounded “experiments.”
In our new Forrester report, “The Shape-Shifting Tech Industry Channel Ecosystem”, we write about how the successful channel partners of the future will be those that operate under a hybrid business model umbrella, combining on-premises and cloud delivery, and IT and business value.
Yesterday, Intel held a major press and analyst event in San Francisco to talk about its vision for the future of the data center, anchored on what has become, in many eyes, the virtuous cycle of future infrastructure demand: mobile devices and “the Internet of Things” driving cloud resource consumption, which in turn spews out big data, which spawns storage and the requirement for yet more computing to analyze it. As usual with these kinds of events from Intel, it was long on serious vision and strong on strategic positioning, but a bit parsimonious on actual future product information, with a couple of interesting exceptions.
Content and Core Topics:
No major surprises on the underlying demand-side drivers. The proliferation of mobile devices, the impending Internet of Things, and the mountains of big data they generate will combine to keep increasing demand for cloud-resident infrastructure, particularly servers and storage, both of which present Intel with an opportunity to sell semiconductors. Needless to say, Intel laced its presentations with frequent reminders about who is the king of semiconductor manufacturing.
Google is officially serious about the enterprise space. I recently met with Google Enterprise execs, who were hosting their very first analyst day in Singapore, and was introduced to their enterprise suite of services — which was, unsurprisingly, similar to their consumer suite.
However, while they took their starting point from the consumer end, providing enterprise-ready solutions requires a different level of product calibration. To that end, Google cites spending of approximately US$3 billion annually on building/improving its data center infrastructure, investing in undersea cable systems, and laying fiber networks in the US specifically. In Asia Pacific (AP) last year, they spent approximately US$700 million building three data centers in Singapore, Hong Kong, and Taiwan.
In addition to infrastructure investments, Google has also acquired companies like Quickoffice to enhance its appeal to enterprises weaned on Microsoft Office, while also expanding existing offerings in areas like communications and collaboration (Gmail, Google Plus), contextualized services (Maps, Compute Engine, BigQuery), access devices (the Nexus range, Chromebook), application development (App Engine), and discovery and archiving (Search, Vault).
My Forrester colleagues Ted Schadler and John McCarthy have written about the differences between Systems of Record (SoR) and Systems of Engagement (SoE) in the context of customer-facing systems and mobility, but after further conversations with some very smart people at IBM, I think there are also important reasons for infrastructure architects to understand this dichotomy. Scalable and flexible systems of engagement, built with the latest in dynamic web technology, and back-end systems of record — highly stateful, usually transactional systems designed to keep track of the “true” state of corporate assets — are very different animals from an infrastructure standpoint in two fundamental areas:
Suitability to cloud (private or public) deployment — SoE environments, by their nature, are generally constructed from horizontally scalable technologies based on some level of standards, including web standards, a Linux or Windows OS, and scalable middleware that hides the messy details of horizontally scaling a complex application. In addition, the workloads are generally highly parallel, with each individual interaction being of low value. These characteristics lead to very different demands for consistency and resiliency.
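The statelessness behind that horizontal scalability can be sketched in a few lines. This is a hypothetical illustration (the names and the dict standing in for a shared store are mine, not IBM’s or Forrester’s): because the handler keeps no local session state, any replica behind a load balancer can serve any request.

```python
# Hypothetical sketch: session state lives in an external store (here a dict
# standing in for something like Redis), so handler replicas are interchangeable.
SESSION_STORE = {}  # stand-in for a shared store such as Redis or memcached

def handle_request(session_id, item):
    """Stateless handler: reads and writes session data externally, keeps none locally."""
    cart = SESSION_STORE.get(session_id, [])
    cart.append(item)
    SESSION_STORE[session_id] = cart
    return {"session": session_id, "cart_size": len(cart)}

# Any "replica" of this handler produces the same result for the same store state.
print(handle_request("s1", "book"))  # {'session': 's1', 'cart_size': 1}
print(handle_request("s1", "lamp"))  # {'session': 's1', 'cart_size': 2}
```

A stateful SoR, by contrast, pins transactional state to specific nodes, which is precisely why it resists this kind of cheap horizontal scale-out.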
At the halfway mark of 2013, both the global and the European tech markets have pockets of strength and pockets of weakness, by product and by geography. Forrester's mid-2013 global tech market update (July 12, 2013, “A Mixed Outlook For The Global Tech Market In 2013 And 2014 – The US Market And Software Buying Will Be The Drivers Of 2.3% Growth This Year And 5.4% Growth Next Year”) shows the US market for business and government purchases of information technology goods and services doing relatively well, along with tech markets in Latin America, Eastern Europe/the Middle East/Africa, and parts of Asia Pacific. However, the tech market in Western and Central Europe will post negative growth, and those in Japan, Canada, Australia, and India will grow at a moderate pace. Measured in US dollars, growth will be subdued at 2.3% in 2013, thanks to the strong dollar, and revenues of US tech vendors will suffer as a result. In local currency terms, however, growth will be more respectable, at 4.6%. Software -- especially analytical and collaborative applications and software-as-a-service products -- continues to be a bright spot, with 3.3% growth in dollars and 5.7% in local currencies. Apart from enterprise purchases of tablets, hardware -- both computer equipment and communications equipment -- will be weak. IT services will be mixed, with slightly stronger demand for IT consulting and systems integration services than for IT outsourcing and hardware maintenance.
Ten days ago, three of us traveled to Japan for a Fujitsu analyst day held in conjunction with the firm’s huge customer event – the Fujitsu Forum. The analyst day was a follow-on from the firm’s European event last fall. At the two events, the management team, led by Masami Yamamoto, president and representative director, and Rod Vawdrey, the president of Fujitsu’s International Business, talked about the organization’s vision and key imperatives:
Creating a common vision around “Human-Centric Intelligent Society.” Management highlighted the publication of the firm’s global vision document. Speakers repeatedly pointed toward Fujitsu’s new “human-centric” vision for how information technology improves business, personal, and societal outcomes. Fujitsu is positioning itself as a provider of solutions aimed at facilitating the activities of consumers and businesses, combining elements of its hardware, software, and services portfolio.
Back in October 2011, Microsoft code-named its initiative to introduce the Windows Azure cloud platform into the Chinese market “Moon Cake,” which represents harmony and happiness in Chinese culture. On May 23, 2013, Microsoft announced in Shanghai that Windows Azure will be available in the Chinese market starting on June 6 — almost half a year after it agreed with the Shanghai government and 21ViaNet last November to operate Windows Azure together. Chinese customers will finally be able to “taste” this foreign moon cake.
I believe that a new chapter of cloud is going to be written by a new ecosystem in the Chinese market, and that Microsoft will lead this disruption. My reasons:
The cloud market in China will be further disrupted. Due to regulatory limitations on data center operations and related telecom value-added services for foreign players, the cloud market for both infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS) has been an easy battlefield for local players such as Alibaba/HiChina. Microsoft’s innovative way of working with both the government and local service partners to break through this “great wall” shows all of the major global giants, such as Amazon.com, the opportunity this approach to the Chinese market offers. We can anticipate that they, too, will enter the Chinese market in the coming six to 18 months.
Background — High-Performance Attached Processors Handicapped By Architecture
The application of high-performance accelerators — notably GPUs and GPGPUs (APUs in AMD terminology) — to a variety of computing problems has blossomed over the last decade, resulting in ever more affordable compute power for both exotic and mundane problems, along with growing revenue streams for a growing industry ecosystem. Adding heat to an already active mix, Intel’s Xeon Phi accelerators, the most recent addition to this ecosystem, have the potential to speed adoption even further due to hoped-for synergies from the immense universe of x86 code that could potentially run on the Xeon Phi cores.
However, despite any potential synergies, GPUs (I will use this term generically to refer to all forms of these attached accelerators as they currently exist in the market) suffer from a fundamental architectural problem — they are very distant, in terms of latency, from the main scalar system memory and are not part of the coherent memory domain. This in turn has major impacts on performance, cost, design of the GPUs, and the structure of the algorithms:
Performance — The latency of memory accesses is generally dictated by PCIe latencies, which, while much improved over previous generations, are a factor of 100 or more greater than latencies from coherent cache or local scalar CPU memory. Clever design and programming, such as overlapping and buffering multiple transfers, can hide the latency within a series of transfers, but it is difficult to hide the latency of the initial block of data. Even AMD’s integrated APUs, in which the GPU elements are on a common die, do not share a common memory space, and explicit transfers are made in and out of the APU memory.
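The overlapping-transfers idea can be sketched generically. This is an illustrative double-buffering pipeline in plain Python rather than real GPU code (the simulated `transfer` and `compute` functions and their timings are my own stand-ins, not any vendor API): while block N is being computed, block N+1 is already in flight, so only the first transfer’s latency is fully exposed.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def transfer(block):
    """Stand-in for a host-to-device copy over PCIe."""
    time.sleep(0.01)  # simulated transfer latency
    return block

def compute(block):
    """Stand-in for kernel execution on an already-transferred block."""
    return sum(block)

def pipelined(blocks):
    """Double-buffer: start transferring block N+1 while computing block N."""
    results = []
    with ThreadPoolExecutor(max_workers=1) as copier:
        inflight = copier.submit(transfer, blocks[0])  # first transfer's latency is exposed
        for nxt in blocks[1:]:
            ready = inflight.result()                  # wait for the current block
            inflight = copier.submit(transfer, nxt)    # overlap the next transfer...
            results.append(compute(ready))             # ...with this block's compute
        results.append(compute(inflight.result()))     # drain the last block
    return results

print(pipelined([[1, 2], [3, 4], [5, 6]]))  # [3, 7, 11]
```

In real GPU code the same pattern is expressed with asynchronous copies and multiple streams/queues, but the structural point is identical: steady-state latency hides behind compute, while the first transfer cannot.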