Hello from the newest analyst serving Forrester Research’s CIO role. My name is Paul Miller, and I joined Forrester at the beginning of August. I am attached to Forrester’s London office, but it’s already clear that I’ll be working with clients across many time zones.
As my analyst bio describes, my primary focus is on cloud computing, with a particular interest in the way that cloud-based approaches enable (or even require) organizations to embrace digital transformation of themselves and their customer relationships. Before joining Forrester, I spent six years as an independent analyst and consultant. My work spanned cloud computing and big data, and I am sure that this broader portfolio of interests will continue into my Forrester research, particularly where I can explore the demonstrable value that these approaches bring to those who embrace them.
I am still working on the best way to capture and explain my research coverage, talking with many of my new colleagues, and learning about potential synergies between what they already do and what I could or should be doing. I know that the first document to appear with my name on it will be a CIO-friendly look at OpenStack, as the genesis of this new Brief lies in a report that I had to write as part of Forrester’s recruitment process. I have a long (long, long) list of further reports I am keen to get started on, and these should begin to appear online as upcoming titles in the very near future. I shall also be blogging here, and look forward to using this as a way to get shorter thoughts and perspectives online relatively quickly. I’ve been regularly blogging for work since early 2004, although too many of the blogs I used to write for are now only preserved in the vaults of Brewster Kahle’s wonderful Internet Archive.
To successfully grow in Asia Pacific (AP), you must excel at understanding customers’ needs, wants, and behaviors and have the capabilities necessary to transform this insight into improved customer engagement. But that’s true everywhere. What sets the AP region apart are the continued vast differences between markets. Appreciating these market differences, and the impact they have on customers’ expectations, is critical when sourcing enterprise marketing capabilities.
In the world of CMOS semiconductor processes, the fundamental heartbeat that drives the continuing evolution of all the devices and computers we use, and that governs at a fundamental level the services we can layer on top of them, is the continual shrinkage of the transistors we build upon. We have grown used to the regular cadence of miniaturization, generally led by Intel, as we progress from one generation to the next. 32 nm logic is now old-fashioned, 22 nm parts are in volume production across the entire CPU spectrum, 14 nm parts have started to appear, and the rumor mill is active with reports of initial shipments of 10 nm parts in mid-2016. But there is a collective nervousness about the transition to 7 nm, the next step in the industry process roadmap, with industry leader Intel commenting at the recent 2015 International Solid-State Circuits Conference that it may have to move away from conventional silicon materials for the transition to 7 nm parts, and that there are many obstacles to mass production beyond the 10 nm threshold.
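To make that cadence concrete: under idealized scaling, where every dimension shrinks with the quoted feature size, transistor density grows with the square of the shrink ratio. Modern node names have drifted from physical dimensions, so treat this as a rough upper-bound sketch rather than a measured figure:

```python
# Idealized node-scaling arithmetic: density scales as the square of the
# ratio of feature sizes. Node names only loosely track real dimensions
# on modern processes, so this is an illustrative upper bound.

def ideal_density_gain(old_nm: float, new_nm: float) -> float:
    """Factor by which transistor density grows in an ideal full shrink."""
    return (old_nm / new_nm) ** 2

print(ideal_density_gain(22, 14))  # the 22 nm -> 14 nm step
print(ideal_density_gain(14, 10))  # the 14 nm -> 10 nm step
print(ideal_density_gain(14, 7))   # a full 14 nm -> 7 nm shrink: 4x
```

Even in this idealized model, halving the feature size quadruples the available transistor budget, which is why a stall on the road to 7 nm matters so much to the whole industry.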
But there are other players in the game, and some of them are eager to demonstrate that Intel may not have the commanding lead that many observers assume. In a surprise move that hints at the future of some of its own products, and that will certainly galvanize both partners and competitors, IBM, discounted by many as a spent force in the semiconductor world after the recent divestiture of its manufacturing business, has just made a real jaw-dropper of an announcement: the existence of working 7 nm semiconductors.
In a world where OS and low-level platform software are considered unfashionable, it was refreshing to see the Linux glitterati and cognoscenti descend on Boston for the last three days, 5,000 strong and genuinely passionate about Linux. I spent a day there mingling with the crowds in the exhibit halls, attending sessions, and meeting with Red Hat management. Overall, the breadth of Red Hat’s offerings is overwhelming, far too much to comprehend in a single day or even a handful of days, so I focused my attention on two big issues for the emerging software-defined data center: containers and the inexorable march of OpenStack.
Containers are all the rage, and Red Hat is firmly behind them, with its currently shipping RHEL Atomic release optimized to support them. The news at the Summit was the release of RHEL Atomic Enterprise, which extends the ability to execute and manage containers across a cluster as opposed to a single system. In conjunction with a tool stack such as Docker and Kubernetes, this paves the way for very powerful distributed deployments that take advantage of the failure isolation and performance potential of clusters in the enterprise. While all the IP in RHEL Atomic, Docker, and Kubernetes is available to the community and competitors, it appears that Red Hat has stolen at least a temporary early lead in bolstering the usability of this increasingly central virtualization abstraction for the next-generation data center.
As companies get serious about digital transformation, we see investments shifting toward extensible software platforms used to build and manage a differentiated customer experience. My colleague John McCarthy has an excellent slide describing what's happening:
Before, tech management spent most of its time and budget managing a set of monolithic enterprise applications and databases. With an addressable market of a finite number of networked PCs, spending on the front end was largely an afterthought.
Today, applications must scale to millions, if not billions of connected devices while retaining a rich and seamless user experience. Infrastructure, in turn, must flex to meet these new specs. Since complete overhauls of the back end are a nonstarter for large enterprises with 30-plus years of investments in mainframes and legacy server systems, new investments gear toward the intermediary software platforms that connect digital touchpoints with enterprise applications and transaction systems.
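A minimal sketch of the intermediary-platform idea: a thin API layer that parses a record from a legacy back-end system and re-exposes it as JSON for connected devices. The record format and every name here are invented for illustration, not any vendor’s actual API:

```python
# Hypothetical illustration of an intermediary software platform sitting
# between digital touchpoints and a legacy system of record. All names
# and the fixed-width record format are assumptions made for the sketch.

import json

# Stand-in for a decades-old transaction system returning fixed-width text.
LEGACY_RECORDS = {
    "42": "42".ljust(10) + "ACME CORP".ljust(20) + "0001999",
}

def fetch_account_record(account_id: str) -> str:
    """Simulate a call into the legacy back end."""
    return LEGACY_RECORDS[account_id]

def get_account(account_id: str) -> str:
    """API-layer handler: translate the legacy format into device-friendly JSON."""
    raw = fetch_account_record(account_id)
    record = {
        "id": raw[0:10].strip(),
        "name": raw[10:30].strip(),
        "balance_cents": int(raw[30:]),
    }
    return json.dumps(record)

print(get_account("42"))
```

The point of the pattern is that new investment flows into this translation layer, while the 30-year-old record system behind it stays untouched.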
At Forrester, we’ve been working to quantify some of the most viable software categories that exemplify this shift. A shortlist below:
· API management solutions: US CAGR 2015-2020: 22%.
· Public cloud platforms: Global CAGR 2015-2020: 30%. (Note: We have a forecast update in the works that segments the market into subcategories.)
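As a quick sanity check on what those growth rates imply, a CAGR of g compounded over n years multiplies a market by (1 + g)^n:

```python
# Compound annual growth: a CAGR of g over n years scales the market by
# (1 + g) ** n, applied here to the 2015-2020 window quoted above.

def compound_growth(cagr: float, years: int) -> float:
    return (1 + cagr) ** years

print(f"{compound_growth(0.22, 5):.2f}x")  # API management at 22% CAGR: 2.70x
print(f"{compound_growth(0.30, 5):.2f}x")  # public cloud platforms at 30% CAGR: 3.71x
```

In other words, the forecasts imply these categories will roughly triple in five years, while most legacy categories are flat.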
Recently we’ve had a chance to look again at two sharply conflicting views from HP and Facebook on how to do web-scale and cloud computing, both presented at the recent OCP annual event in California.
From HP come its new CloudLine systems, the public face of its joint venture with Foxconn. Early details released by HP show a line of cost-optimized servers descended from a conventional engineering lineage and incorporating selected bits of OCP technology to reduce costs. These are minimalist rack servers designed, once you strip away all the announcement verbiage, to compete with white-box vendors such as Quanta, Supermicro, and a host of others. Available in five models, ranging from the minimally featured CL1100 up through larger nodes designed for high-I/O, big data, and compute-intensive workloads, these systems will allow large installations to add capacity at costs ranging from 10% to 25% less than the equivalent capacity in HP’s standard ProLiant product line. While the strategic implications of HP having to share IP and market presence with Foxconn are still unclear, it is a measure of HP’s adaptability that it was willing to execute on this arrangement to protect against inroads from emerging competition in the most rapidly growing segment of the server market, and one where it has probably been under immense margin pressure.
Intel has made no secret of its development of the Xeon D, an SoC product designed to take Xeon processing close to the power levels and product niches currently occupied by its lower-power, lower-performance Atom line, where emerging competition from ARM is more viable.
The new Xeon D-1500 is clear evidence that Intel “gets it” as far as platforms for hyperscale computing and other throughput-per-watt- and density-sensitive workloads, both in the enterprise and in the cloud, are concerned. The D-1500 breaks new ground in several areas:
· It is the first Xeon SoC, combining four or eight Xeon cores with embedded I/O including SATA, PCIe, and multiple 10 Gb and 1 Gb Ethernet ports.
· It is the first of the 14 nm server chips that Intel is expected to introduce this year. This process shrink should also deliver a further boost in performance and performance per watt across the entire line of entry-level through midrange server parts.
Why is this significant?
With the D-1500, Intel effectively draws a very deep line in the sand for emerging ARM technology as well as for AMD. The D-1500, with a 20 W to 45 W power envelope, delivers the lower end of Xeon performance at power and density levels previously associated with Atom, and close enough to what is expected from the newer generation of higher-performance ARM chips to once again call into question the viability of ARM on a pure performance and efficiency basis. While ARM implementations with embedded accelerators such as DSPs may still be attractive for selected workloads, the availability of a mainstream x86 option at these power levels may blunt the pace of ARM design wins both for general-purpose servers and for embedded designs, notably in storage systems.
We have been watching many variants on efficient packaging of servers for highly scalable workloads for years, including blades, modular servers, and dense HPC rack offerings from multiple vendors, most of them highly effective and all highly proprietary. With the advent of Facebook’s Open Compute Project, the table was set for a wave of standardized rack servers and the prospect of very cost-effective rack-scale deployments of very standardized servers. But the IP for intelligently shared and managed power and cooling at the rack level needed a serious R&D effort that the OCP community, by and large, was unwilling to make. Into this opportunity stepped Intel, which has been quietly working on its internal Rack Scale Architecture (RSA) program for the last couple of years, and whose first product wave was officially outed recently as part of an announcement by Intel and Ericsson.
While not officially announcing Intel’s product nomenclature, Ericsson announced its “HDS 8000” based on Intel’s RSA, and Intel representatives then went on to explain the fundamentals of RSA, including a view of the enhancements coming this year.
RSA is a combination of very standardized x86 servers, a specialized rack enclosure with shared Ethernet switching and power/cooling, and layers of firmware to accomplish a set of tasks common to managing a rack of servers, including:
· Asset discovery
· Switch setup and management
· Power and cooling management across the servers within the rack
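As a purely illustrative sketch of those tasks (Intel’s actual RSA firmware interfaces are not described here, so every name and structure below is invented for the example), the discovery and power-management pieces might be modeled like this:

```python
# Hypothetical model of rack-level management tasks: asset discovery and
# aggregate power tracking. All names and structures are assumptions for
# illustration, not Intel's RSA interfaces.

from dataclasses import dataclass, field

@dataclass
class Server:
    slot: int
    power_watts: float

@dataclass
class Rack:
    servers: list = field(default_factory=list)

    def discover_assets(self) -> list:
        """Asset discovery: enumerate the populated slots in the rack."""
        return sorted(s.slot for s in self.servers)

    def total_power(self) -> float:
        """Input to power/cooling management: aggregate draw across the rack."""
        return sum(s.power_watts for s in self.servers)

rack = Rack(servers=[Server(1, 180.0), Server(2, 210.0), Server(4, 175.0)])
print(rack.discover_assets())  # [1, 2, 4]
print(rack.total_power())      # 565.0
```

The value of doing this in shared rack firmware, rather than per server, is that decisions such as cooling allocation can be made against the whole rack’s state at once.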
I recently joined Forrester as a senior analyst on the infrastructure and operations (I&O) team based out of New Delhi, India. I’m delighted to be a part of Forrester and have begun work on my first report, which will focus on cloud trends in Asia Pacific and put a regional spin on a report my colleague Lauren E. Nelson published in February titled Adoption Profile: Private Cloud in North America, Q3 2014. My new role will enable me to continue pursuing my passion for next-generation solutions like cloud computing, automation, and customer experience management and their ability to support business objectives.
As I reviewed data from Forrester’s Business Technographics® Global Infrastructure Survey, 2014 of business technology decision-makers in Australia, China, and India, I found cause for concern about the private cloud initiatives of the region’s large enterprises. A finer-grained analysis of the responses of the most senior executives from large enterprises (companies with 1,000 or more employees) found that nearly half (43%) of private cloud deployments in AP will fail to meet business objectives, for the following reasons:
· 26% of private clouds will not offer self-service to developers.
· 17% of firms will discourage their developers from using public cloud.
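Taken together, the two reasons account for the headline figure, assuming (my reading, not an explicit statement in the survey) that the two groups of respondents do not overlap:

```python
# Recombining the survey figures quoted above. The assumption that the
# two respondent groups are disjoint is mine, not the survey's.
no_self_service = 0.26    # private clouds offering no developer self-service
discourage_public = 0.17  # firms discouraging developer use of public cloud

at_risk = no_self_service + discourage_public
print(f"{at_risk:.0%}")  # 43%
```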
On one level, IBM’s new z13, announced last Wednesday in New York, is exactly what the mainframe world has been expecting for the last two and a half years: more capacity (a big boost this time around, with triple the main memory, more and faster cores, more I/O ports, etc.), a modest boost in price performance, and a very sexy cabinet design (I know it’s not really a major evaluation factor, but I think IBM’s industrial design for its system enclosures for Flex System, Power, and the z System is absolutely gorgeous and should be in the MoMA*). IBM indeed delivered against these expectations, plus more. In this case, a lot more.
In addition to the required upgrades to fuel the normal mainframe upgrade cycle and its reasonably predictable revenue, IBM has made a bold but rational repositioning of the mainframe as a core platform for the workloads generated by mobile transactions, the most rapidly growing workload across all sectors of the global economy. What makes this positioning rational, as opposed to a pipe dream, is an underlying pattern common to many of these transactions: at some point, they access data generated by and stored on a mainframe. By enhancing the economics of the increasingly Linux-centric processing chain that runs before the call for mainframe data, IBM hopes to foster the migration of these workloads to the mainframe, where their access to the resident data will be more efficient, benefiting from inherently lower latency as well as from embedded high-value functions such as accelerators for inline analytics. In essence, IBM hopes to shift the center of gravity for mobile processing toward the mainframe and away from the distributed x86 Linux systems that it no longer manufactures.