My colleague Henry Baltazar and I have been watching the development of new systems and storage technology for years, and each of us has been trumpeting, in his own way, the future potential of new non-volatile memory (NVM) technology. NVM promises not only a major leap beyond current flash-based storage, but a major transformation in how servers and storage are architected and deployed, and eventually in how software treats persistent versus non-persistent storage.
All well and good, but until very recently we were limited to vague prognostications about which flavor of NVM would finally belly up to the bar for mass production, and how the resultant systems could be architected. In the last 30 days, two major technology developments have allowed us to sharpen the focus on the potential outcomes and routes to market for this next wave of infrastructure transformation: Intel's further disclosure of its future joint-venture NVM technology, now known as 3D XPoint™ Technology, and Diablo Technologies' introduction of Memory1.
Intel has made no secret of its development of the Xeon D, an SoC product designed to take Xeon processing close to the power levels and product niches currently occupied by its lower-power, lower-performance Atom line, niches where emerging competition from ARM is more viable.
The new Xeon D-1500 is clear evidence that Intel "gets it" as far as platforms for hyperscale computing and other throughput-per-watt- and density-sensitive workloads are concerned, both in the enterprise and in the cloud. The D-1500 breaks new ground in several areas:
· It is the first Xeon SoC, combining four or eight Xeon cores with embedded I/O including SATA, PCIe, and multiple 10 Gb and 1 Gb Ethernet ports.
· It is the first of Intel’s 14 nm server chips expected to be introduced this year. This process shrink should also deliver a further performance and performance-per-watt improvement across the entire line of entry-level through midrange server parts this year.
Why is this significant?
With the D-1500, Intel effectively draws a very deep line in the sand for emerging ARM technology as well as for AMD. The D-1500, with a 20 W – 45 W power envelope, delivers the lower end of Xeon performance at power and density levels previously associated with Atom, and close enough to what is expected from the newer generation of higher-performance ARM chips to once again call into question the viability of ARM on a pure performance and efficiency basis. While ARM implementations with embedded accelerators such as DSPs may still be attractive for selected workloads, the availability of a mainstream x86 option at these power levels may blunt the pace of ARM design wins both for general-purpose servers and for embedded designs, notably storage systems.
We have been watching many variants of efficient packaging for servers running highly scalable workloads for years, including blades, modular servers, and dense HPC rack offerings from multiple vendors, most of them highly effective, and all highly proprietary. With the advent of Facebook's Open Compute Project, the table was set for a wave of standardized rack servers and the prospect of very cost-effective rack-scale deployments of very standardized servers. But the IP for intelligently shared and managed power and cooling at the rack level needed a serious R&D effort that the OCP community, by and large, was unwilling to make. Into this opportunity stepped Intel, which has been quietly working on its internal Rack Scale Architecture (RSA) program for the last couple of years, and whose first product wave was officially outed recently as part of an announcement by Intel and Ericsson.
While not officially announcing Intel’s product nomenclature, Ericsson announced its “HDS 8000,” based on Intel’s RSA, and Intel representatives then went on to explain the fundamentals of RSA, including a view of the enhancements coming this year.
RSA is a combination of very standardized x86 servers, a specialized rack enclosure with shared Ethernet switching and power/cooling, and layers of firmware to accomplish a set of tasks common to managing a rack of servers, including:
· Asset discovery
· Switch setup and management
· Power and cooling management across the servers within the rack
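To make the flavor of these rack-level tasks concrete, here is a minimal sketch of what a consumer of such management firmware might do with a rack inventory feed: enumerate assets and roll up power and thermal data across the rack. The JSON shape, field names, and thresholds below are hypothetical illustrations, not Intel's actual RSA interface.

```python
import json

# Hypothetical rack inventory payload; the schema is illustrative only,
# not Intel's actual RSA firmware API.
SAMPLE_INVENTORY = """
{
  "rack_id": "rack-07",
  "servers": [
    {"id": "node-01", "model": "Xeon D-1540", "power_watts": 38, "inlet_temp_c": 24},
    {"id": "node-02", "model": "Xeon D-1540", "power_watts": 41, "inlet_temp_c": 27},
    {"id": "node-03", "model": "Xeon D-1520", "power_watts": 22, "inlet_temp_c": 31}
  ]
}
"""

def discover_assets(payload: str) -> list[str]:
    """Asset discovery: list the IDs of every server the rack reports."""
    rack = json.loads(payload)
    return [server["id"] for server in rack["servers"]]

def rack_power_report(payload: str, temp_limit_c: float = 30.0) -> tuple[int, list[str]]:
    """Power/cooling view: total rack draw, plus nodes over a thermal limit."""
    rack = json.loads(payload)
    total_watts = sum(server["power_watts"] for server in rack["servers"])
    hot_nodes = [server["id"] for server in rack["servers"]
                 if server["inlet_temp_c"] > temp_limit_c]
    return total_watts, hot_nodes

if __name__ == "__main__":
    print(discover_assets(SAMPLE_INVENTORY))        # ['node-01', 'node-02', 'node-03']
    print(rack_power_report(SAMPLE_INVENTORY))      # (101, ['node-03'])
```

The point of the sketch is simply that once discovery, switching, and power/cooling are exposed as rack-wide data rather than per-box consoles, rack-level policy (capping, rebalancing, alerting) becomes a small software problem.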
There is always a tendency to regard the major players in large markets as a static background against which the froth of smaller companies and the rapid dance of customer innovation plays out. But if we turn our lens toward the major server vendors (who are now also storage, networking, and software vendors), we see that the relatively flat industry revenues hide almost continuous churn. Turn back the clock slightly more than five years, and the market was dominated by three vendors: HP, Dell, and IBM. In slightly more than five years, IBM has divested itself of the highest-velocity portion of its server business, Dell is no longer a public company, Lenovo is now a major player in servers, Cisco has come out of nowhere to mount a serious challenge in the x86 server segment, and HP has announced that it intends to split itself into two companies.
And it hasn’t stopped. Two recent events, the fracturing of the VCE consortium and the formerly unthinkable hook-up of IBM and Cisco, illustrate the urgency with which existing players are seeking differential advantage, and reinforce our contention that converged and integrated infrastructure remains one of the most active and profitable segments of the industry.
EMC’s recent acquisition of Cisco’s interest in VCE effectively acknowledged what most customers have been telling us for a long time: that VCE had become essentially an EMC-driven sales vehicle to sell storage, supported by VMware (owned by EMC) and by Cisco as a systems platform. EMC’s purchase of Cisco’s interest also tacitly acknowledges two underlying tensions in the converged infrastructure space:
In this playbook, we do not predict the future of technology; rather, we try to understand how, in the age of the customer, I&O must transform to support businesses by accelerating the speed of service delivery, enabling capacity when and where it is needed, and improving customer and employee experience.
All industries mature toward commoditization and abstraction of the underlying technology, because knowledge and expertise are cumulative. Our industry will follow the same trajectory, resulting in technology that is ubiquitous and easier to implement, manage, and change.
Forrester’s Infrastructure and Operations research team has been on the leading edge of infrastructure technology and the operational practices around it for years. We pushed the industry on both the supply side (vendors) and the demand side (enterprises) toward new models, and we pushed hard. I’m proud to say we’ve been instrumental in changing the world of infrastructure, and we’re about to change it again!
As the entire technology management profession evolves into the Age of the Customer, the whole notion of infrastructure is morphing in dramatic ways. The long-criticized silos are finally collapsing, cloud computing has quickly become mainstream, and you now face a dizzying variety of infrastructure options. Some are outside your traditional borders, such as new outsourcing, hosting, and colocation services, as well as too many cloud forms to count. Some remain inside and will for years to come. More of these options will come from the outside, though, and even the “legacy” technologies remaining inside will be created and managed differently.
Your future lies not in managing pockets of infrastructure, but in how you assemble the many options into the services your customers need. Our profession has been locally brilliant, but globally stupid. We’re now helping you become globally brilliant. We call this service design, a much broader design philosophy rooted in systems thinking. The new approach packages technology into a finished “product” that is much more relevant and useful than any of the parts alone.
A group of us just published an analysis of VMworld (Breaking Down VMworld), and I thought I’d take this opportunity to add some additional color. The report is an excellent synthesis, the work of a talented team of collaborators with my two cents thrown in, but I wanted to emphasize a few additional impressions, primarily around storage, converged infrastructure, and the overall tone of the show.
First, storage. If they ever need a new name for the show, they might consider “StorageWorld” – it seemed to me that just about every other booth on the show floor was about storage. Cloud storage, flash storage, hybrid storage, cheap storage, smart storage, object storage … you get the picture. Reading about the hyper-growth of storage and the criticality of storage management to the overall operation of a virtualized environment does not drive the concept home in quite the same way as seeing thousands of show attendees thronging the booths of the storage vendors, large and small, for days on end. Another leading indicator, IMHO, was the “edge of the show” booths, the cheaper booths on the edge of the floor where smaller startups congregate, which were also well populated with new and small storage vendors. There is certainly no shortage of ambition and vision in the storage technology pipeline for the next few years.
Several events over the past few months in China will affect both the IT procurement strategy of Chinese organizations and the market position and development of local and foreign IT vendors, including:
A government-led push away from foreign IT vendors. Amid security concerns, the Chinese government has issued policies to discourage the use of technology from foreign IT vendors. As a result, many IT and business decision-makers at state-owned enterprises (SOEs) and government agencies have put their IT infrastructure plans — most of which involved products and solutions from foreign IT vendors — on hold. They’ve also begun to consider replacing some of their existing technology, such as servers and storage, with equivalents from domestic vendors. This is significant given that government agencies and SOEs are the key IT spenders in China.
A trend to get rid of IBM, Oracle, and EMC. Alibaba was an early mover, replacing its IBM Unix servers, Oracle databases, and EMC storage with x86 servers, open source databases like MySQL and MongoDB, and PCIe flash storage. This has evolved into replacing these foreign products and solutions with ones from local Chinese vendors. For example, Inspur launched its I2I project to encourage customers to drop IBM Unix servers in favor of Inspur Linux servers to support business development. The Postal Savings Bank of China, China Construction Bank, and many city commercial banks have started deploying Inspur servers in their data centers. However, this only affects the x86 server and storage market: while domestic vendors can provide x86 servers and storage, they still have no databases to replace Oracle’s.
Last month I attended Huawei’s annual Global Analyst Summit, for the requisite several days of mass presentations, executive meetings, and tours that typically constitute such an event. Underneath my veneer of blasé cynicism, I was actually quite intrigued, since I really knew very little about Huawei. And what I did know was tainted by popular and persistent negatives – they were the ones who supposedly copied Cisco’s IP to get into the network business, and, until we got better acquainted with our own Federal Government’s little shenanigans, Huawei was the big bad bogeyman who was going to spy on us with every piece of network equipment they installed.
Reality was quite a bit different. Ancient disputes about IP aside, I found a $40B technology powerhouse that is probably the least known and least understood company of its size in the world, and one that appears poised to pose major challenges to incumbents in several areas, including mainstream enterprise IT.
So you don’t know Huawei
First, some basics. Huawei’s 2013 revenue was $39.5 billion, which puts it right up there with some much better-known names such as Lenovo, Oracle, Dell, and Cisco.