Red Hat Summit – Can you say OpenStack and Containers?

In a world where OS and low-level platform software are considered unfashionable, it was refreshing to see the Linux glitterati and cognoscenti descend on Boston for the last three days, 5,000 strong and genuinely passionate about Linux. I spent a day there mingling with the crowds in the exhibit halls, attending some sessions and meeting with Red Hat management. Overall, the breadth of Red Hat’s offerings is overwhelming and far too much to comprehend in a single day or even a handful of days, but I focused my attention on two big issues for the emerging software-defined data center – containers and the inexorable march of OpenStack.

Containers are all the rage, and Red Hat is firmly behind them, with its currently shipping RHEL Atomic release optimized to support them. The news at the Summit was the release of RHEL Atomic Enterprise, which extends the ability to execute and manage containers over a cluster as opposed to a single system. In conjunction with a tool stack such as Docker and Kubernetes, this paves the way for very powerful distributed deployments that take advantage of the failure isolation and performance potential of clusters in the enterprise. While all the IP in RHEL Atomic, Docker and Kubernetes is available to the community and competitors, it appears that Red Hat has stolen at least a temporary early lead in bolstering the usability of this increasingly central virtualization abstraction for the next-generation data center.
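To make the cluster-level container story concrete, the short sketch below uses the official Kubernetes Python client to declare a small replicated workload and let the cluster scheduler spread it across nodes. This is an illustration under stated assumptions rather than anything specific to RHEL Atomic Enterprise: the client library and Deployment API post-date the tooling discussed here, and the image name, replica count and namespace are arbitrary.

    from kubernetes import client, config

    # Load credentials from the local kubeconfig; assumes a reachable cluster.
    config.load_kube_config()
    apps = client.AppsV1Api()

    # Declare a three-replica web workload; the cluster scheduler places the
    # pods across nodes and restarts them if a node or container fails.
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="web-demo"),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "web-demo"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web-demo"}),
                spec=client.V1PodSpec(
                    containers=[
                        client.V1Container(
                            name="web",
                            image="nginx:latest",
                            ports=[client.V1ContainerPort(container_port=80)],
                        )
                    ]
                ),
            ),
        ),
    )

    apps.create_namespaced_deployment(namespace="default", body=deployment)

The operator declares desired state and the cluster manager handles placement, restarts and scaling across the cluster, which is exactly the kind of usability Red Hat is trying to bolster with cluster-wide container management.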

Read more

Thoughts on Huawei 2015 – The Juggernaut Continues to Build

In late April I once again attended Huawei’s annual analyst meeting in Shenzhen, China. As with my last trip to this event, I approached it with a mix of dread and curiosity – dread because it is a long, tiring trip and doing business in China when you are dependent on Google services is at best a delicate juggling act, and curiosity because Huawei is one of the most interesting and least understood of the large technology companies in the world, especially here in North America.

I came away with my previous impressions reinforced: Huawei is an unapologetically Chinese company. Not a global company that happens to be Chinese, as Lenovo presents itself, but a Chinese company that is intent upon, and is making progress toward, becoming a major global competitor in multiple arenas where it is not dominant now, while continuing to maximize its success in its strong domestic market. A year later, all the programs that were in motion at the end of 2014 are still in operation, and Y/Y results indicate that momentum in the areas where Huawei is building its franchise, particularly mobile and enterprise IT, is, if anything, even stronger than promised.

Read more

Facebook and HP Show Different Visions for Web-scale

Recently we’ve had a chance to look again at two sharply conflicting views from HP and Facebook on how to do web-scale and cloud computing, both announced at the recent OCP annual event in California.

From HP comes its new CloudLine systems, the public face of its joint venture with Foxconn. Early details released by HP show a line of cost-optimized servers descended from a conventional engineering lineage and incorporating selected bits of OCP technology to reduce costs. These are minimalist rack servers designed, after stripping away all the announcement verbiage, to compete with white-box vendors such as Quanta, Supermicro and a host of others. Available in five models ranging from the minimally featured CL1100 up through larger nodes designed for high-I/O, big-data and compute-intensive workloads, these systems will allow large installations to add capacity at costs ranging from 10 – 25% less than the equivalent capacity in HP’s standard ProLiant product line. While the strategic implications of HP having to share IP and market presence with Foxconn are still unclear, it is a measure of HP’s adaptability that it was willing to execute on this arrangement to protect against inroads from emerging competition in the most rapidly growing segment of the server market, one where it has probably been under immense margin pressure.

Read more

Intel Announces Xeon SOC – Seriously Raising the Bar for AMD and ARM Competition

Intel has made no secret of its development of the Xeon D, an SOC product designed to take Xeon processing close to the power levels and product niches currently occupied by its lower-power, lower-performance Atom line, where emerging competition from ARM is more viable.

The new Xeon D-1500 is clear evidence that Intel “gets it” as far as platforms for hyperscale computing and other throughput-per-Watt- and density-sensitive workloads are concerned, both in the enterprise and in the cloud. The D-1500 breaks new ground in several areas:

  • It is the first Xeon SOC, combining 4 or 8 Xeon cores with embedded I/O including SATA, PCIe, and multiple 10 Gb and 1 Gb Ethernet ports.
  • It is the first of Intel’s 14 nm server chips expected to be introduced this year. This process shrink should also deliver a further boost in performance and performance per Watt across the entire line of entry-level through mid-range server parts this year.

Why is this significant?

With the D-1500, Intel effectively draws a very deep line in the sand for emerging ARM technology as well as for AMD. The D-1500, with a 20 W – 45 W power envelope, delivers the lower end of Xeon performance at power and density levels previously associated with Atom, and close enough to what is expected from the newer generation of higher-performance ARM chips to once again call into question the viability of ARM on a pure performance and efficiency basis. While ARM implementations with embedded accelerators such as DSPs may still be attractive for selected workloads, the availability of a mainstream x86 option at these power levels may blunt the pace of ARM design wins both for general-purpose servers and for embedded designs, notably storage systems.

Read more

Rack-Scale Architectures get Real with Intel RSA Introduction

What Is It?

We have been watching many variants on efficient packaging of servers for highly scalable workloads for years, including blades, modular servers, and dense HPC rack offerings from multiple vendors, most of them highly effective, and all highly proprietary. With the advent of Facebook’s Open Compute Project, the table was set for a wave of standardized rack servers and the prospect of very cost-effective rack-scale deployments of very standardized servers. But the IP for intelligently shared and managed power and cooling at the rack level needed a serious R&D effort that the OCP community, by and large, was unwilling to make. Into this opportunity stepped Intel, which has been quietly working on its internal Rack Scale Architecture (RSA) program for the last couple of years, and whose first product wave was officially outed recently as part of an announcement by Intel and Ericsson.

While Intel has not officially announced its own product nomenclature, Ericsson announced its “HDS 8000” based on Intel’s RSA, and Intel representatives then went on to explain the fundamentals of RSA, including a view of the enhancements coming this year.

RSA is a combination of very standardized x86 servers, a specialized rack enclosure with shared Ethernet switching and power/cooling, and layers of firmware to accomplish a set of tasks common to managing a rack of servers, including the following (a rough sketch of how such a layer might be driven programmatically follows the list):

  • Asset discovery
  • Switch setup and management
  • Power and cooling management across the servers within the rack
  • Server node management
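To give a feel for what those tasks might look like when driven programmatically, here is a deliberately hypothetical Python sketch that asks a rack-level manager for its discovered nodes and aggregates per-node power telemetry into a rack-level figure. The base URL, endpoint paths and field names are invented for illustration; they are not Intel’s actual RSA interfaces.

    import requests

    # Hypothetical illustration only: the base URL, paths and field names are
    # invented for this sketch and are not Intel's actual RSA interfaces.
    RACK_MANAGER = "https://rack-mgr.example.com/api/v1"

    def discover_assets():
        """Enumerate the compute nodes the rack manager has discovered."""
        nodes = requests.get(f"{RACK_MANAGER}/nodes", timeout=10).json()
        for node in nodes:
            print(node["id"], node["model"], node["state"])
        return nodes

    def rack_power_report(nodes):
        """Sum per-node power draw into a rack-level figure."""
        total_watts = 0
        for node in nodes:
            telemetry = requests.get(
                f"{RACK_MANAGER}/nodes/{node['id']}/telemetry", timeout=10
            ).json()
            total_watts += telemetry["power_watts"]
        print(f"Rack power draw: {total_watts} W")

    if __name__ == "__main__":
        rack_power_report(discover_assets())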

Read more

Rethinking Analytics Infrastructure

Last year I published a reasonably well-received research document on Hadoop infrastructure, “Building the Foundations for Customer Insight: Hadoop Infrastructure Architecture”. Now, less than a year later, it’s looking obsolete, not so much because it was wrong for traditional Hadoop (and yes, it does seem funny to use a word like “traditional” to describe a technology that is itself still rapidly evolving and has been in mainstream use for only a handful of years), but because the universe of analytics technology and tools has been evolving at light speed.

If your analytics are anchored by Hadoop and its underlying MapReduce processing, then the mainstream architecture described in the document, that of clusters of servers each with their own compute and storage, may still be appropriate. On the other hand, if, like many enterprises, you are adding analysis tools such as NoSQL databases, SQL on Hadoop (Impala, Stinger, Vertica) and particularly Spark, an in-memory analytics technology that is well suited to real-time and streaming data, it may be necessary to begin reassessing the supporting infrastructure in order to build something that can continue to support Hadoop as well as cater to the differing access patterns of other tool sets. This need to rethink the underlying analytics plumbing was brought home by a recent demonstration by HP of a reference architecture for analytics, publicly referred to as the HP Big Data Reference Architecture.
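To make the contrast in access patterns concrete, here is a minimal PySpark sketch, with a placeholder data path and column names. Where a classic MapReduce job writes intermediate results to disk between stages, Spark can pin a working set in cluster memory and reuse it across queries, which is what pushes the infrastructure question toward memory capacity and fast, possibly shared, storage.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("analytics-sketch").getOrCreate()

    # Read once, then pin the working set in cluster memory so repeated
    # queries do not go back to disk (the path and columns are placeholders).
    events = spark.read.json("hdfs:///data/events/*.json")
    events.cache()

    # Two different aggregations reuse the cached data without re-reading storage.
    events.groupBy("region").count().show()
    events.groupBy("product").agg({"amount": "sum"}).show()

    spark.stop()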

Read more

IBM Amps up the Mainframe and Aggressively Targets Mobile Workloads with new z13 Announcement

On one level, IBM’s new z13, announced last Wednesday in New York, is exactly what the mainframe world has been expecting for the last two and a half years – more capacity (a big boost this time around – triple the main memory, more and faster cores, more I/O ports, etc.), a modest boost in price/performance, and a very sexy cabinet design (I know it’s not really a major evaluation factor, but I think IBM’s industrial design for its system enclosures for Flex System, Power and the z System is absolutely gorgeous and should be in the MOMA*). IBM indeed delivered against these expectations, plus more. In this case, a lot more.

In addition to the required upgrades to fuel the normal mainframe upgrade cycle and its reasonably predictable revenue, IBM has made a bold but rational repositioning of the mainframe as a core platform for the workloads generated by mobile transactions, the most rapidly growing workload across all sectors of the global economy. What makes this positioning rational as opposed to a pipe-dream for IBM is an underlying pattern common to many of these transactions – at some point they access data generated by and stored on a mainframe. By enhancing the economics of the increasingly Linux-centric processing chain that occurs before the call for the mainframe data, IBM hopes to foster the migration of these workloads to the mainframe, where access to the resident data will be more efficient, benefitting from inherently lower latency for data access as well as from embedded high-value functions such as accelerators for inline analytics. In essence, IBM hopes to shift the center of gravity for mobile processing toward the mainframe and away from the distributed x86 Linux systems that it no longer manufactures.

Read more

Mainframe Futures – Reading the Tea Leaves for Future Investments

I’ve been getting a steady trickle of inquiries this year about the future of the mainframe from our enterprise clients. Most of them are more or less in the form of “I have a lot of stuff running on mainframes. Is this a viable platform for the next decade, or is IBM going to abandon it?” I think the answer is that the platform is secure, and in the majority of cases the large business-critical workloads that are currently on the mainframe probably should remain there. In the interests of transparency, I’ve tried to lay out my reasoning below so that you can see if it applies to your own situation.

How Big is the Mainframe LOB?

It’s hard to get exact figures for the mainframe contribution to IBM’s STG (Systems & Technology Group) total revenues, but the data they have shared shows that mainframe revenues seem to have recovered from the declines of previous quarters and have at worst flattened. Because the business is inherently somewhat cyclical, I would expect that the next cycle of mainframes, rumored to be arriving next year, should give them a boost similar to the last major cycle, allowing them to show positive revenue growth next year.

Read more

Bare Metal Clouds – Performance and Isolation Drive Consideration

I’ve been talking to a number of users and providers of bare-metal cloud services, and the common threads among the high-profile use cases are both interesting individually and starting to connect some dots. These providers offer the ability to provision and use dedicated physical servers with semantics very similar to a conventional VM IaaS cloud – servers that can be instantiated at will, provisioned with a variety of OS images, connected to storage, and used to run applications. The differentiation for customers is in the behavior of the resulting images:

  • Deterministic performance – Your workload is running on a dedicated resource, so there is no question of any “noisy neighbor” problem, or even of sharing resources with otherwise well-behaved neighbors.
  • Extreme low latency – Like it or not, VMs, even lightweight ones, impose some level of additional latency compared to bare-metal OS images. Where this latency is a factor, bare-metal clouds offer a differentiated alternative.
  • Raw performance – Under the right conditions, a single bare-metal server can process more work than a collection of VMs, even when their nominal aggregate performance is similar. Benchmarking is always tricky, but several of the bare metal cloud vendors can show some impressive comparative benchmarks to prospective customers.

Read more

Shifting Sands – Changing Alliances Underscore the Dynamism of the Infrastructure Systems Market

There is always a tendency to regard the major players in large markets as a static background against which the froth of smaller companies and the rapid dance of customer innovation plays out. But if we turn our lens toward the major server vendors (who are now also storage and networking as well as software vendors), we see that relatively flat industry revenues hide almost continuous churn. Turn back the clock slightly more than five years, and the market was dominated by three vendors: HP, Dell and IBM. In slightly more than five years, IBM has divested itself of the highest-velocity portion of its server business, Dell is no longer a public company, Lenovo is now a major player in servers, Cisco has come out of nowhere to mount a serious challenge in the x86 server segment, and HP has announced that it intends to split itself into two companies.

And it hasn’t stopped. Two recent events, the fracturing of the VCE consortium and the formerly unthinkable hook-up of IBM and Cisco, illustrate the urgency with which existing players are seeking differential advantage, and reinforce our contention that converged and integrated infrastructure remains one of the more active and profitable segments of the industry.

EMC’s recent acquisition of Cisco’s interest in VCE effectively acknowledged what most customers have been telling us for a long time – that VCE had become essentially an EMC-driven sales vehicle to sell storage, supported by VMware (owned by EMC) and Cisco as a systems platform. EMC’s purchase of Cisco’s interest also tacitly acknowledges two underlying tensions in the converged infrastructure space:

Read more