Google is officially serious about the enterprise space. I met with Google Enterprise execs hosting their very first analyst day in Singapore recently, and was introduced to their enterprise suite of services, which was, unsurprisingly, similar to their consumer suite of services.
However, while they took their starting point from the consumer end, providing enterprise-ready solutions requires a different level of product calibration. To that end, Google cites spending of approximately US$3 billion annually on building/improving its data center infrastructure, investing in undersea cable systems, and laying fiber networks in the US specifically. In Asia Pacific (AP) last year, they spent approximately US$700 million building three data centers in Singapore, Hong Kong, and Taiwan.
In addition to infrastructure investments, Google has also acquired companies like Quickoffice to enhance its appeal to enterprises weaned on Microsoft Office, while also expanding existing offerings in areas like communications and collaboration (Gmail, Google+), contextualized services (Maps, Compute Engine, BigQuery), access devices (Nexus range, Chromebook), application development (App Engine), and discovery and archiving (Search, Vault).
My Forrester colleagues Ted Schadler and John McCarthy have written about the differences between Systems of Record (SoR) and Systems of Engagement (SoE) in the context of customer-facing systems and mobility, but after further conversations with some very smart people at IBM, I think there are also important reasons for infrastructure architects to understand this dichotomy. Scalable and flexible systems of engagement, built with the latest in dynamic web technology, are very different animals from back-end systems of record (highly stateful, usually transactional systems designed to keep track of the "true" state of corporate assets) from an infrastructure standpoint in two fundamental areas:
Suitability to cloud (private or public) deployment – SoE environments, by their nature, are generally constructed using horizontally scalable technologies, generally based on some level of standards including web standards, Linux or Windows OS, and some scalable middleware that hides the messy details of horizontally scaling a complex application. In addition, the workloads are generally highly parallel, with each individual interaction being of low value. These characteristics lead to very different requirements for consistency and resiliency.
At the halfway mark of 2013, both the global and the European tech markets have pockets of strength and pockets of weakness, both by product and by geography. Forrester's mid-2013 global tech market update (July 12, 2013, "A Mixed Outlook For The Global Tech Market In 2013 And 2014 – The US Market And Software Buying Will Be The Drivers Of 2.3% Growth This Year And 5.4% Growth Next Year") shows the US market for business and government purchases of information technology goods and services doing relatively well, along with tech markets in Latin America, Eastern Europe/Middle East/Africa, and parts of Asia Pacific. However, the tech market in Western and Central Europe will post negative growth, and those in Japan, Canada, Australia, and India will grow at a moderate pace. Measured in US dollars, growth will be subdued at 2.3% in 2013, thanks to the strong dollar, and revenues of US tech vendors will suffer as a result. However, in local currency terms, growth will be more respectable, at 4.6%. Software -- especially for analytical and collaborative applications and for software-as-a-service products -- continues to be a bright spot, with 3.3% dollar growth and 5.7% growth in local currency terms. Apart from enterprise purchases of tablets, hardware -- both computer equipment and communications equipment -- will be weak. IT services will be mixed, with slightly stronger demand for IT consulting and systems integration services than for IT outsourcing and hardware maintenance.
Forrester Research frequently compares the capabilities of software vendors and cloud platforms in our Forrester Waves™. John Rymer and James Staten recently published a comparison of enterprise public cloud platforms. I am currently researching a Forrester Wave™ around Hybrid² Integration capabilities: integration between cloud and on-premises systems, and across different traditional integration tools.
However, all these Forrester Waves™ have one significant gap: they all start with a product or service offering from a vendor or (cloud) service provider. Forrester analysts do verify whether vendor statements are real, watch demos, and try out products on our own before we write about them. One part of the Forrester Wave™ process is also customer interviews, which validate that product features work in reality. All criteria, scores, and notes are published to our clients, not just a final PDF.
So where's the gap in such a thorough process? Well, the starting point is always the capabilities that product vendors and service providers claim, never the actual challenge within the enterprise! I am not aware of any assessment or competition that really starts there: with a real client challenge and project. Maybe one enterprise ends up using a unique combination of different products and cloud services.
Ten days ago, three of us traveled to Japan for a Fujitsu analyst day held in conjunction with the firm’s huge customer event – the Fujitsu Forum. The analyst day was a follow-on from the firm’s European event last fall. At the two events, the management team, led by Masami Yamamoto, president and representative director, and Rod Vawdrey, the president of Fujitsu’s International Business, talked about the organization’s vision and key imperatives:
Creating a common vision around “Human-Centric Intelligent Society.” Management highlighted publishing the firm’s global vision document. Speakers repeatedly pointed toward Fujitsu’s new “human-centric” vision for how information technology improves business, personal, and societal outcomes. Fujitsu is positioning itself as a provider of solutions aimed at facilitating the activities of consumers and businesses, combining elements of its hardware, software, and services portfolio.
IBM didn't just pick up a hosting company with its acquisition of SoftLayer this week; it picked up a sophisticated data center operations team -- one that could teach IBM Global Technology Services (GTS) a thing or two about efficiency when it comes to next-generation cloud data centers. Here's hoping IBM will listen.
Back in October 2011, Microsoft named its initiative to introduce the Windows Azure cloud platform into the Chinese market "Moon Cake," which represents harmony and happiness in Chinese culture. On May 23, 2013, Microsoft announced in Shanghai that Windows Azure will be available in the Chinese market starting on June 6 — almost half a year after its agreement with the Shanghai government and 21ViaNet last November to operate Windows Azure together. Chinese customers will finally be able to "taste" this foreign moon cake.
I believe that a new chapter of cloud is going to be written by a new ecosystem in the Chinese market, and Microsoft will be the leader of this disruption. My reasons:
The cloud market in China will be further disrupted. Due to the regulatory limitations on data center and related telecom value-added services operations for foreign players, the cloud market for both infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS) has been an easy battlefield for local players, such as Alibaba/HiChina. Microsoft's innovative way of working with both the government and local service partners to break through this "great wall" shows all of the major global giants, such as Amazon.com, the great opportunity this approach offers in the Chinese market. We can anticipate that they will also enter the Chinese market in the coming six to 18 months.
Background — High Performance Attached Processors Handicapped By Architecture
The application of high-performance accelerators, notably GPUs and GPGPUs (APUs in AMD terminology), to a variety of computing problems has blossomed over the last decade, resulting in ever more affordable compute power for both leading-edge and mundane problems, along with growing revenue streams for a growing industry ecosystem. Adding heat to an already active mix, Intel's Xeon Phi accelerators, the most recent addition to the GPU ecosystem, have the potential to speed adoption even further due to hoped-for synergies generated by the immense universe of x86 code that could potentially run on the Xeon Phi cores.
However, despite any potential synergies, GPUs (I will use this term generically to refer to all forms of these attached accelerators as they currently exist in the market) suffer from a fundamental architectural problem — they are very distant, in terms of latency, from the main scalar system memory and are not part of the coherent memory domain. This in turn has major impacts on performance, cost, design of the GPUs, and the structure of the algorithms:
Performance — The latency for memory accesses is generally dictated by PCIe latencies, which, while much improved over previous generations, are a factor of 100 or more longer than latency from coherent cache or local scalar CPU memory. While clever design and programming, such as overlapping and buffering multiple transfers, can hide the latency in a series of transfers, it is difficult to hide the latency for an initial block of data. Even AMD's integrated APUs, in which the GPU elements are on a common die, do not share a common memory space, and explicit transfers are made in and out of the APU memory.
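The overlapping-and-buffering technique mentioned above can be illustrated with a simple back-of-the-envelope model (a sketch with made-up timings, not vendor benchmarks): with double buffering, the transfer of the next chunk proceeds while the accelerator computes on the current one, so after the pipeline fills, only the first transfer's latency remains fully exposed.

```python
def sequential_time(n_chunks, t_transfer, t_compute):
    # Naive approach: transfer a chunk over PCIe, then compute on it, repeat.
    return n_chunks * (t_transfer + t_compute)

def pipelined_time(n_chunks, t_transfer, t_compute):
    # Double-buffered: the transfer of chunk i+1 overlaps the compute on chunk i.
    # Only the first transfer is fully exposed; each middle step costs
    # max(t_transfer, t_compute), and the final compute drains the pipeline.
    if n_chunks == 0:
        return 0.0
    return t_transfer + (n_chunks - 1) * max(t_transfer, t_compute) + t_compute

# Hypothetical numbers: 100 chunks, 10 us per transfer, 10 us per compute step.
seq = sequential_time(100, 10.0, 10.0)   # 100 * 20 = 2000 us
pipe = pipelined_time(100, 10.0, 10.0)   # 10 + 99 * 10 + 10 = 1010 us
print(seq, pipe)
```

The model also shows the limit of the technique: no matter how many chunks are pipelined, the initial transfer latency is always paid, which is exactly the "initial block of data" problem noted above.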
Software AG today announced its cloud strategy. It is based on services that are already available, one that will soon be available (H2 2013), and a service planned for Q1 2014.
Journalists have already been in touch with me, asking the following question: Is this an overdue “coming out” after many competitors have already announced or offered extensive cloud strategies — or is this a courageous act from a leading technology firm demonstrating its strength in innovation?
I've known Software AG quite well for many years and believe that today’s announcement marks the next stage in a 10-year corporate turnaround strategy. I well remember the time before Karl-Heinz Streibich took over a nearly bankrupt software vendor 10 years ago. Since then, the firm has been through a financial stabilization phase, which saw both a spending and innovation freeze in many areas. Then, Software AG started to renovate its existing products to stabilize its market share, innovating both carefully and cost-effectively. The third phase saw its acquisition of webMethods and IDS Scheer, which brought the firm sufficient scale in both current products and consulting services. For more details, see my earlier blog post.
Dell just picked up Enstratius for an undisclosed amount today, making the cloud management vendor the latest well-known cloud controller to get snapped up by a big infrastructure or OS vendor. Dell will add Enstratius cloud management capabilities to its existing management suite for converged and cloudy infrastructure, which includes element manager and configuration automator Active System Manager (ASM, the re-named assets acquired with Gale Technologies in November), Quest Foglight performance monitoring, and (maybe) what’s still around from Scalent and DynamicOps.
This is a good move for Dell, but it doesn't exactly clarify where all these management capabilities will shake out. The current ASM product seems to be a combo of code from the original Scalent acquisition upgraded with the GaleForce product; regardless of what's in it, though, what it does is discover, configure, and deploy physical and virtual converged infrastructure components. A private cloud automation platform, basically. Like all private cloud management stacks, it does rapid template-based provisioning and workflow orchestration. But it doesn't provision apps or provision to public or open-source cloud stacks. That's where Enstratius comes in.