I’ve written and commented in the past about the inevitability of a new class of infrastructure called “composable”, i.e. integrated server, storage and network infrastructure that allowed its users to “compose”, that is to say configure, a physical server out of a collection of pooled server nodes, storage devices and shared network connections.[i]
The early exemplars of this class were pioneering efforts from Egenera and blade systems from Cisco, HP, IBM and others, which allowed some level of abstraction (a necessary precursor to composability) of server UIDs, including network addresses and storage bindings, and introduced the notion of templates for server configuration. More recently, the Dell FX and the Cisco UCS M-Series servers introduced the notion of composing servers from pools of resources within the bounds of a single chassis.[ii] While innovative, these were early efforts, and they lacked a number of software and hardware features required for deployment against a wide spectrum of enterprise workloads.
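To make the idea concrete, here is a minimal, purely illustrative Python sketch of composition: a "server" is assembled from pooled compute, storage and network identities according to a template. All names and fields are hypothetical, invented for illustration, and do not reflect any vendor's actual API.

```python
from dataclasses import dataclass

@dataclass
class Pools:
    """Shared pools a composable system draws from."""
    compute_nodes: list
    storage_volumes: list
    mac_addresses: list

def compose(template: dict, pools: Pools) -> dict:
    """Bind pooled resources into a logical 'physical server' per a template."""
    return {
        "name": template["name"],
        "node": pools.compute_nodes.pop(0),
        "volumes": [pools.storage_volumes.pop(0) for _ in range(template["volumes"])],
        # The MAC is an abstracted UID: it belongs to the profile, not the
        # hardware, so it survives if the server is re-composed elsewhere.
        "mac": pools.mac_addresses.pop(0),
    }

pools = Pools(["node-1", "node-2"], ["vol-a", "vol-b", "vol-c"], ["00:11:22:33:44:01"])
web01 = compose({"name": "web01", "volumes": 2}, pools)
print(web01)
```

The key point the sketch captures is that identity (name, MAC, storage bindings) lives in the template, while the underlying hardware is fungible pool inventory.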
This morning, HPE put a major marker down in the realm of composable infrastructure with the announcement of Synergy, its new composable infrastructure system. HPE Synergy represents a major step-function in capabilities for core enterprise infrastructure as it delivers cloud-like semantics to core physical infrastructure. Among its key capabilities:
Looking at Oracle’s latest iteration of its SPARC processor technology, the new M7 CPU, it is at first blush an excellent implementation of SPARC, with 32 cores with 8 threads each implemented in an aggressive 20 nm process and promising a well-deserved performance bump for legacy SPARC/Solaris users. But the impact of the M7 goes beyond simple comparisons to previous generations of SPARC and competing products such as Intel’s Xeon E7 and IBM POWER 8. The M7 is Oracle’s first tangible delivery of its “Software on Silicon” promise, with significant acceleration of key software operations enabled in the M7 hardware.[i]
Oracle took aim at selected performance bottlenecks and security exposures, some specific to Oracle software, and some generic in nature but of great importance. Among the major enhancements in the M7 are:[ii]
Cryptography – While many CPUs now include some form of acceleration for cryptography, Oracle claims the M7 includes a wider variety of algorithms and deeper support, resulting in almost indistinguishable performance across a range of benchmarks with SSL and other cryptographic protocols enabled. Oracle claims that the M7 is the first CPU architecture that does not force users to choose between secure and fast, but allows both simultaneously.
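The M7's SPARC-specific crypto units can't be demonstrated here, but the kind of overhead they target is easy to measure. A small Python sketch (stdlib only; figures are entirely machine-dependent) times a cryptographic hash over a buffer, the sort of per-byte cost that hardware acceleration aims to make negligible:

```python
import hashlib
import time

def throughput_mb_s(digest_name: str, payload_mb: int = 64) -> float:
    """Measure single-threaded hash throughput in MB/s on this machine."""
    buf = b"\x00" * (1024 * 1024)  # 1 MB buffer
    h = hashlib.new(digest_name)
    start = time.perf_counter()
    for _ in range(payload_mb):
        h.update(buf)
    h.digest()
    elapsed = time.perf_counter() - start
    return payload_mb / elapsed

if __name__ == "__main__":
    for name in ("sha256", "sha512"):
        print(f"{name}: {throughput_mb_s(name):.0f} MB/s")
```

Run the same measurement on CPUs with and without hardware support for a given primitive and the gap is what a "secure or fast" trade-off looks like; Oracle's claim is that on the M7 that gap effectively disappears.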
The acquisition of EMC by Dell is generating an immense amount of hype and prose, much of it looking forward at how the merged entity will try to compete in cloud, how it will integrate and rationalize its new product line, and how Dell will pay for it (see the Forrester report “Quick Take: Dell Buys EMC, Creating a New Legacy Vendor”). Interestingly, not a lot has been written about the changes in the fundamental competitive faceoff between Dell and HP, both newly transformed by divestiture and by acquisition.
Yesterday the competition was straightforward and relatively easy to characterize. HP was the dominant enterprise server vendor and Dell a strong challenger; both sold PCs, and both had storage IP that was good but in no sense dominant. Both had competent data center practices and embryonic cloud strategies that were still works in progress. Post transformation, we have a totally different picture, with two very transformed companies:
A slimmer HP. HP is smaller (although $50B is in no sense a small company) and bereft of its historical profit engine, the margins on its printer supplies. Free to focus on its core mandate of enterprise systems, software and services, HP Enterprise is positioning itself as a giant startup, focused and agile. Color me slightly skeptical, but willing to believe that it can’t be any less agile than its precursor at twice the size. Certainly, along with the margin contribution, it sheds the internal fights over budget allocation between enterprise and print/PC priorities.
In discussions with Asia Pacific marketers, I often hear that they struggle to find and recruit the right social marketing skills, including data analysts. While staffing is important as far as tactics go, having a proper team structure to execute on those tactics is, in my view, even more crucial.
In fact, marketers can mitigate some of these HR challenges with a properly structured social team. My report on building a usable social team structure addresses how organizational models will evolve as social marketing matures. These models include the a) hub, b) hub and spoke, and c) distributed hub and spoke.
The Hub, for example, is meant to help firms that are starting out in social marketing. This could be a firm that is beginning to get more serious about using social strategically to drive business outcomes, or one that operates in a highly regulated industry like banking and finance. The centralized hub model puts all of the responsibility (and money) for social marketing in the hands of one small team. This model provides training wheels for social marketing — especially in learning how to coordinate or test social campaigns in the early phases of social maturity. A centralized hub acts as an incubator for social marketing experimentation and allows other teams to focus on their own objectives until the social program can be implemented at scale with minimal risk. Execution can be in-house, but some marketers partner with an external agency for additional dedicated resources.
Recently we’ve had a chance to look again at two starkly conflicting views from HP and Facebook on how to do web-scale and cloud computing, both announced at the recent OCP annual event in California.
From HP come its new CloudLine systems, the public face of its joint venture with Foxconn. Early details released by HP show a line of cost-optimized servers descended from a conventional engineering lineage and incorporating selected bits of OCP technology to reduce costs. These are minimalist rack servers designed, after stripping away all the announcement verbiage, to compete with white-box vendors such as Quanta, SuperMicro and a host of others. Available in five models ranging from the minimally featured CL1100 up through larger nodes designed for high-I/O, big data and compute-intensive workloads, these systems will allow large installations to install capacity at costs 10% to 25% less than the equivalent capacity in HP’s standard ProLiant product line. While the strategic implications of HP having to share IP and market presence with Foxconn are still unclear, it is a measure of HP’s adaptability that it was willing to execute on this arrangement to protect against inroads from emerging competition in the most rapidly growing segment of the server market, and one where it has probably been under immense margin pressure.
Intel has made no secret of its development of the Xeon D, an SOC product designed to take Xeon processing close to the power levels and product niches currently occupied by its lower-power, lower-performance Atom line, and where emerging competition from ARM is more viable.
The new Xeon D-1500 is clear evidence that Intel “gets it” as far as platforms for hyperscale computing and other throughput-per-Watt- and density-sensitive workloads are concerned, both in the enterprise and in the cloud. The D-1500 breaks new ground in several areas:
It is the first Xeon SOC, combining four or eight Xeon cores with embedded I/O including SATA, PCIe and multiple 10 Gb and 1 Gb Ethernet ports.
It is the first of Intel’s 14 nm server chips expected to be introduced this year. This process shrink should also deliver a further performance and performance-per-Watt improvement across the entire line of entry-level through midrange server parts this year.
Why is this significant?
With the D-1500, Intel effectively draws a very deep line in the sand for emerging ARM technology as well as for AMD. The D-1500, with a power envelope of 20 W to 45 W, delivers the lower end of Xeon performance at power and density levels previously associated with Atom, and close enough to what is expected from the newer generation of higher-performance ARM chips to once again call into question the viability of ARM on a pure performance and efficiency basis. While ARM implementations with embedded accelerators such as DSPs may still be attractive for selected workloads, the availability of a mainstream x86 option at these power levels may blunt the pace of ARM design wins for general-purpose servers as well as embedded designs, notably for storage systems.
I remember the first time I attended 3GSM in Cannes: It was primarily a B2B telecoms trade show and centered on DVB-H, WiMAX, and other technology-centric acronyms. Fast-forward 11 years, and Mobile World Congress (MWC) will be the center of the business world for a couple of days (March 2 to 5). Some things don’t change: We will continue to hear too much about technology. Simply ignore the hype, especially around 5G; it will have no impact at all on your marketing strategy for the next five years!
However, the list of keynote speakers is a good indication of what MWC has become: a priority event for leaders willing to transform their businesses. The CEOs of Facebook, Renault-Nissan, SAP, MasterCard, and BBVA will be speaking, and more than 4,500 CEOs will be among the 85,000 attendees (only 25% of whom are from operators). It is fascinating to see how mobile has changed the world in the past 10 years — not just in the way that we live and communicate but also in terms of disrupting every business. I strongly believe that mobile will have a bigger impact than the PC or Web revolutions. Why?
First, mobile is the fastest and most ubiquitous technology ever to spread globally. People in Asia and Africa are skipping the PC Internet and going direct to mobile phones; they’re the ultimate convergent device and often the only way to reach people in rural areas. As Andreessen Horowitz's Benedict Evans put it, mobile is “eating the world”. It has already cannibalized several markets, such as cameras, video recorders, and GPS, and is now disrupting entire industries, changing the game for payments, health, and education, especially in emerging countries. Second, mobile is the bridge to the physical world. It is not just another “subdigital” channel. This alone has a huge impact on business models. Last, mobile is a catalyst for business transformation.
Last year I published a reasonably well-received research document on Hadoop infrastructure, “Building the Foundations for Customer Insight: Hadoop Infrastructure Architecture”. Now, less than a year later, it’s looking obsolete, not so much because it was wrong for traditional Hadoop (and yes, it does seem funny to use a word like “traditional” for a technology that is itself still rapidly evolving and has been in mainstream use for only a handful of years), but because the universe of analytics technology and tools has been evolving at light speed.
If your analytics are anchored by Hadoop and its underlying MapReduce processing, then the mainstream architecture described in the document, clusters of servers each with their own compute and storage, may still be appropriate. On the other hand, if, like many enterprises, you are adding analysis tools such as NoSQL databases, SQL on Hadoop (Impala, Stinger, Vertica) and particularly Spark, an in-memory analytics technology well suited to real-time and streaming data, it may be necessary to begin reassessing the supporting infrastructure to build something that can continue to support Hadoop while catering to the differing access patterns of these other tool sets. This need to rethink the underlying analytics plumbing was brought home by a recent demonstration by HP of a reference architecture for analytics, publicly referred to as the HP Big Data Reference Architecture.
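The access-pattern difference driving that rethink can be caricatured in a few lines of plain Python (no Spark required; the file path and data sizes are arbitrary): iterative analytics that re-read the data from disk on every pass, MapReduce-style, versus loading the working set once and iterating in memory, Spark-style.

```python
import os
import tempfile
import time

# Build a stand-in dataset: one integer per line, as a MapReduce
# job would read it from HDFS.
path = os.path.join(tempfile.mkdtemp(), "values.txt")
with open(path, "w") as f:
    for i in range(200_000):
        f.write(f"{i}\n")

ITERATIONS = 10  # e.g. passes of an iterative machine-learning algorithm

# MapReduce-style: every iteration re-reads the data from disk.
start = time.perf_counter()
disk_totals = []
for _ in range(ITERATIONS):
    with open(path) as f:
        disk_totals.append(sum(int(line) for line in f))
disk_time = time.perf_counter() - start

# Spark-style: load once, keep the working set in memory.
start = time.perf_counter()
with open(path) as f:
    data = [int(line) for line in f]
mem_totals = [sum(data) for _ in range(ITERATIONS)]
mem_time = time.perf_counter() - start

print(f"per-pass disk reads: {disk_time:.3f}s, in-memory reuse: {mem_time:.3f}s")
```

The first pattern rewards lots of local spindles per node; the second rewards large memory and fast interconnects, which is exactly why an infrastructure sized for classic Hadoop can be the wrong shape for Spark.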
There is always a tendency to regard the major players in large markets as a static background against which the froth of smaller companies and the rapid dance of customer innovation plays out. But if we turn our lens toward the major server vendors (who are now also storage, networking and software vendors), we see that the relatively flat industry revenues hide almost continuous churn. Turn back the clock slightly more than five years, and the market was dominated by three vendors: HP, Dell and IBM. Since then, IBM has divested itself of the highest-velocity portion of its server business, Dell is no longer a public company, Lenovo is now a major player in servers, Cisco has come out of nowhere to mount a serious challenge in the x86 server segment, and HP has announced that it intends to split itself into two companies.
And it hasn’t stopped. Two recent events, the fracturing of the VCE consortium and the formerly unthinkable hook-up of IBM and Cisco, illustrate the urgency with which existing players are seeking differential advantage, and reinforce our contention that the whole segment of converged and integrated infrastructure remains one of the most active and profitable segments of the industry.
EMC’s recent acquisition of Cisco’s interest in VCE effectively acknowledged what most customers have been telling us for a long time – that VCE had become essentially an EMC-driven sales vehicle to sell storage, supported by VMware (owned by EMC) and Cisco as a systems platform. EMC’s purchase of Cisco’s interest also tacitly acknowledges two underlying tensions in the converged infrastructure space:
I recently attended VMware’s vForum 2014 event in Beijing. The vendor has established a local ecosystem for the three pillars of its business: the software-defined data center (SDDC), cloud services, and end user computing. VMware is working with:
Huawei to refine SDDC technologies. VMware is leveraging Huawei’s technology capabilities to improve its product features. VMware integrated Huawei’s Agile Controller into NSX and vCenter to operate and manage network automation and quickly migrate virtual machines online. Huawei provides the technology to unify the management of virtual and physical networks on top of VMware’s virtualization platform. This partnership can help VMware optimize its existing software features and improve the customer experience.