Linux vs Unix Hot Patching – Have We Reached The Tipping Point?

The Background – Linux as a Fast Follower and the Need for Hot Patching

No doubt about it, Linux has made impressive strides in the last 15 years, gaining many features previously associated with high-end proprietary Unix as it made the transition from small system plaything to core enterprise processing resource and the engine of the extended web as we know it. Along the way it gained reliable and highly scalable schedulers, a multiplicity of efficient and scalable file systems, advanced RAS features, its own embedded virtualization and efficient thread support.

As Linux grew, so did supporting hardware, particularly the capabilities of the ubiquitous x86 CPU upon which the vast majority of Linux runs today. But the debate has always been about how close Linux could get to “the real OS”, the core proprietary Unix variants that for two decades defined the limits of non-mainframe scalability and reliability. Now “the times they are a-changin’”, and the new narrative may be “when will Unix catch up to Linux on critical RAS features like hot patching?”

Hot patching, the ability to apply updates to the OS kernel while it is running, is a long sought-after but elusive feature of a production OS. Long sought after because both developers and operations teams recognize that bringing down an OS instance that is doing critical high-volume work is at best disruptive and at worst a logistical nightmare, and elusive because it is incredibly difficult to implement correctly. There have been several failed attempts, and several implementations that “almost worked” but were so fraught with exceptions that they were not really useful in production.[i]
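
On the Linux side, this capability has now landed in the mainline kernel as the livepatch framework, a convergence of Red Hat’s kpatch and SUSE’s kGraft efforts, which redirects callers of a buggy kernel function to a fixed replacement loaded as an ordinary module. For the curious, here is a minimal sketch of such a patch module, closely modeled on the sample shipped in the kernel source tree; the patched function and its replacement behavior are illustrative only, and the exact API has shifted across kernel versions:

    /* A minimal sketch of a Linux livepatch module, modeled on the upstream
     * samples/livepatch example. Here the replacement makes /proc/cmdline
     * report a canned string -- purely illustrative. */
    #include <linux/module.h>
    #include <linux/kernel.h>
    #include <linux/livepatch.h>
    #include <linux/seq_file.h>

    /* While the patch is enabled, the kernel redirects callers of the
     * original cmdline_proc_show() to this replacement. */
    static int livepatch_cmdline_proc_show(struct seq_file *m, void *v)
    {
            seq_printf(m, "%s\n", "this has been live patched");
            return 0;
    }

    /* Map each function to be replaced to its replacement. */
    static struct klp_func funcs[] = {
            {
                    .old_name = "cmdline_proc_show",
                    .new_func = livepatch_cmdline_proc_show,
            }, { }
    };

    /* A NULL object name means the target function lives in vmlinux
     * itself rather than in a loadable module. */
    static struct klp_object objs[] = {
            {
                    .funcs = funcs,
            }, { }
    };

    static struct klp_patch patch = {
            .mod = THIS_MODULE,
            .objs = objs,
    };

    static int livepatch_init(void)
    {
            return klp_enable_patch(&patch);
    }

    static void livepatch_exit(void)
    {
    }

    module_init(livepatch_init);
    module_exit(livepatch_exit);
    MODULE_LICENSE("GPL");
    MODULE_INFO(livepatch, "Y");

Once built and loaded with insmod, the patch can be toggled through /sys/kernel/livepatch/<patch-name>/enabled. The genuinely hard part, and the reason earlier attempts foundered, is not the redirection mechanics but deciding when it is safe to switch each running task from the old function to the new one.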

Read more

Big Iron Lives — Huawei Shows Off KunLun 32S x86 Server

I was recently at an event that Huawei hosted in Latin America for its telecom carrier community, showing off an impressive range of carrier-related technology, including distributed data center management, advanced analytics and a heavy emphasis on compute and storage in addition to its traditionally strong core carrier technology. Interestingly, Huawei chose this venue for the Latin America unveiling of the KunLun server, an impressive bit of engineering which clearly shows that innovation in big-iron x86 servers is not dead. There is some confusion about whether the March announcement at CeBIT constituted the official unveiling of the actual machine, but Huawei had a real system on the floor at this event and claimed it was the first public showing of the actual system.

The KunLun server, named after a mountain range in Qinghai Province, places Huawei squarely up against the highest-end servers from HPE, IBM, Oracle, NEC and Fujitsu, with a list of very advanced RAS features, including memory migration, hot memory and CPU swap, predictive failure diagnostics and a host of others, some enabled by the underlying Xeon E7 technology and others added by Huawei through its custom node controller architecture (node controllers being essentially a standard feature of all large x86 servers). Partitionable into smaller logical servers, the KunLun can serve as a core transaction processor for extreme workloads or as a collection of tightly coupled but electrically and logically isolated servers.

So why unveil this high-end engine at a telecom carrier show? My read is that the carriers will be at the center of much of the IoT action, and the data streams they process will need an ever-expanding inventory of processing capacity, so this is a pretty good venue. Plus it reinforces the emerging primacy of analytics, especially in-memory analytics, which the KunLun can address extremely well with its current 24 TB (using 32 GB DIMMs) of DRAM.

Read more

Azure Stack Preview – Microsoft’s End-Game for On-Premise IT?

What’s Happening?

In 2014 I wrote about Microsoft and Dell’s joint Cloud Platform System offering, Microsoft’s initial foray into an “Azure-like” experience in the enterprise data center. While not a complete or totally transparent Azure experience, it was a definite stake in the ground marking Microsoft’s intention to provide an enterprise Azure with hybrid interoperability between on-premise deployments and the public Azure cloud.

I got it wrong about other partners – as far as I know, Dell is the only hardware partner to offer Microsoft CPS – but it looks like my idiot-proof guess that CPS was a stepping stone toward a true on-premise Azure was correct, with Microsoft today announcing a technology preview of Azure Stack, the first iteration of a true enterprise Azure offering with hybrid on-prem and public cloud interoperability.

Azure Stack is in some ways a parallel offering to the existing Windows Server/System Center and Azure Pack combination, and I believe it represents Microsoft’s long-term vision for enterprise IT, although Microsoft will do nothing to compromise the millions of legacy environments that want to incrementally enhance their Windows estates. But for those looking to embrace a more complete cloud experience, Azure Stack is just what the doctor ordered – an Azure environment that can run in the enterprise, with seamless access to the immense Azure public cloud.

On the partner front, this time Microsoft will be introducing Azure Stack as pure software that can run on one or more standard x86 servers, with no special integration required, although I’m sure there will be many bundled offerings of Azure Stack plus integration services from partners.

Read more

HPE Transforms Infrastructure Management with Synergy Composable Infrastructure Announcement

Background

I’ve written and commented in the past about the inevitability of a new class of infrastructure called “composable”, i.e. integrated server, storage and network infrastructure that allows its users to “compose”, that is to say configure, a physical server out of a collection of pooled server nodes, storage devices and shared network connections.[i]

The early exemplars of this class were pioneering efforts from Egenera and blade systems from Cisco, HP, IBM and others, which allowed some level of abstraction (a necessary precursor to composability) of server UIDs, including network addresses and storage bindings, and introduced the notion of templates for server configuration. More recently the Dell FX and the Cisco UCS M-Series servers introduced the notion of composing servers from pools of resources within the bounds of a single chassis.[ii] While innovative, these were early efforts, and they lacked a number of software and hardware features required for deployment against a wide spectrum of enterprise workloads.
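
To make the template notion concrete, here is a deliberately toy sketch of the core idea, namely that server identity lives in a profile rather than in the hardware. Every type, field and name below is hypothetical, invented for illustration rather than drawn from any vendor’s actual API:

    /* A toy model of composable infrastructure's central abstraction: a
     * "server profile template" that carries identity (MAC, boot target)
     * independently of the physical node it lands on. Everything here is
     * hypothetical and purely illustrative; real products such as HPE
     * OneView or Cisco UCS define their own APIs. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    struct profile_template {
        const char *name;        /* e.g. "web-tier" */
        const char *mac_addr;    /* identity travels with the profile... */
        const char *boot_volume; /* ...as does the storage binding */
        int         cpu_cores;   /* minimum compute to claim from the pool */
    };

    struct compute_node {
        const char *node_id;
        int         cpu_cores;
        bool        in_use;
        char        assigned_mac[18];
        char        assigned_boot[64];
    };

    /* "Compose" a server: claim the first free node that satisfies the
     * template and stamp the template's identities onto it. */
    static struct compute_node *compose(const struct profile_template *t,
                                        struct compute_node *pool, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            if (!pool[i].in_use && pool[i].cpu_cores >= t->cpu_cores) {
                pool[i].in_use = true;
                snprintf(pool[i].assigned_mac, sizeof pool[i].assigned_mac,
                         "%s", t->mac_addr);
                snprintf(pool[i].assigned_boot, sizeof pool[i].assigned_boot,
                         "%s", t->boot_volume);
                return &pool[i];
            }
        }
        return NULL; /* no free capacity in the pool */
    }

    int main(void)
    {
        struct compute_node pool[] = {
            { "node-01", 16, false, "", "" },
            { "node-02", 32, false, "", "" },
        };
        const struct profile_template web = {
            "web-tier", "02:00:00:aa:bb:01", "san:/vol/web-boot", 24
        };

        struct compute_node *srv = compose(&web, pool, 2);
        if (srv != NULL)
            printf("profile %s -> %s (mac %s, boot %s)\n", web.name,
                   srv->node_id, srv->assigned_mac, srv->assigned_boot);
        return 0;
    }

The payoff is in the last step: because the MAC address and boot-volume binding belong to the profile, re-applying the same profile to a different pooled node recreates “the same server” on new hardware, which is what makes failover and repurposing cheap.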

What’s New?

This morning, HPE put a major marker down in the realm of composable infrastructure with the announcement of Synergy, its new composable infrastructure system. HPE Synergy represents a step-function improvement in capabilities for core enterprise infrastructure, delivering cloud-like semantics on top of physical infrastructure. Among its key capabilities:

Read more

Oracle Delivers “Software on Silicon” – Doubles Down on Optimizing its Own Software with Latest Hardware

What’s new?

Looking at Oracle’s latest iteration of its SPARC processor technology, the new M7 CPU is at first blush an excellent implementation of SPARC, with 32 cores of 8 threads each, implemented in an aggressive 20 nm process and promising a well-deserved performance bump for legacy SPARC/Solaris users. But the impact of the M7 goes beyond simple comparisons to previous generations of SPARC and competing products such as Intel’s Xeon E7 and IBM’s POWER8. The M7 is Oracle’s first tangible delivery on its “Software on Silicon” promise, with significant acceleration of key software operations enabled in the M7 hardware.[i]

Oracle took aim at selected performance bottlenecks and security exposures, some specific to Oracle software, and some generic in nature but of great importance. Among the major enhancements in the M7 are:[ii]

  • Cryptography – While many CPUs now include some form of acceleration for cryptography, Oracle claims the M7 includes a wider variety of algorithms and deeper support, resulting in almost indistinguishable performance across a range of benchmarks with SSL and other cryptographic protocols enabled. In Oracle’s telling, the M7 is the first CPU architecture that does not force users to choose between secure and fast, but allows both simultaneously.
Read more

Sea Changes in the Industry – A New HP and a New Dell Face Off

The acquisition of EMC by Dell is generating an immense amount of hype and prose, much of it looking ahead to how the merged entity will try to compete in the cloud, how it will integrate and rationalize its new product line, and how Dell will pay for it (see the Forrester report “Quick Take: Dell Buys EMC, Creating a New Legacy Vendor”). Interestingly, not a lot has been written about the changes in the fundamental competitive faceoff between Dell and HP, both newly transformed by divestiture and by acquisition.

Yesterday the competition was straightforward and relatively easy to characterize. HP was the dominant enterprise server vendor and Dell a strong challenger, both sold PCs, and both had some storage IP that was good but in no sense dominant. Both had competent data center practices and embryonic cloud strategies that were still works in progress. Post-transformation we have a totally different picture, with two thoroughly transformed companies:

  • A slimmer HP. HP is smaller (although a $50B company is not in any sense small), and bereft of its historical profit engine, the margins on its printer supplies. Free to focus on its core mandate of enterprise systems, software and services, HP Enterprise is positioning itself as a giant startup, focused and agile. Color me slightly skeptical, but willing to believe that it can’t be any less agile than its precursor at twice the size. Certainly, along with the margin contribution, it sheds the internal fights over budget allocation between enterprise and print/PC priorities.
Read more

New Announcements Foreshadow Fundamental Changes in Server and Storage Architectures

My colleague Henry Baltazar and I have been watching the development of new systems and storage technology for years now, and each of us has been trumpeting, in his own way, the future potential of new non-volatile memory (NVM) technology: not only to provide a major leap beyond current flash-based storage, but to trigger a major transformation in how servers and storage are architected and deployed, and eventually in how software looks at persistent versus non-persistent storage.

All well and good, but until very recently we were limited to vague prognostications about which flavor of NVM would finally belly up to the bar for mass production, and how the resultant systems might be architected. In the last 30 days, two major technology developments – Intel’s further disclosure of its future joint-venture NVM technology, now known as 3D XPoint™ technology, and Diablo Technologies’ introduction of Memory1 – have allowed us to sharpen the focus on the potential outcomes and routes to market for this next wave of infrastructure transformation.
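
The software-side transformation is the easiest piece to make concrete. Below is a minimal sketch, assuming byte-addressable NVM exposed as a file on a DAX-style filesystem mount; the path /mnt/pmem/log and the region size are hypothetical. The application maps persistent memory directly into its address space and updates it with ordinary stores, with no block I/O in the data path:

    /* Minimal sketch: treating persistent memory as memory, not block
     * storage. Assumes an NVM region exposed as a file on a DAX-capable
     * filesystem; the path /mnt/pmem/log is hypothetical. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define REGION_SIZE (4 * 1024 * 1024)

    int main(void)
    {
        int fd = open("/mnt/pmem/log", O_CREAT | O_RDWR, 0644);
        if (fd < 0) { perror("open"); return 1; }
        if (ftruncate(fd, REGION_SIZE) != 0) { perror("ftruncate"); return 1; }

        /* Map the persistent region directly into our address space. */
        char *pmem = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                          MAP_SHARED, fd, 0);
        if (pmem == MAP_FAILED) { perror("mmap"); return 1; }

        /* Update durable state with an ordinary store -- no read()/write(),
         * no buffer cache, no block I/O path. */
        strcpy(pmem, "record 42: committed");

        /* Force the update to durable media; on true NVM with DAX this is
         * closer to a cache flush than a disk write. */
        if (msync(pmem, REGION_SIZE, MS_SYNC) != 0) perror("msync");

        munmap(pmem, REGION_SIZE);
        close(fd);
        return 0;
    }

Note what is absent: no read()/write() calls, no buffer cache, no storage stack between the program and its durable state. Libraries such as the pmem.io NVM Library are emerging to formalize exactly this pattern.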

Intel/Micron Technology 3D XPoint Technology

Read more

IBM Pushes Chip Technology with Stunning 7 nm Chip Demonstration

In the world of CMOS semiconductor process, the fundamental heartbeat that drives the continuing evolution of all the devices and computers we use, and that governs at a fundamental level the services we can layer on top of them, is the continual shrinkage of the transistors we build upon. We are used to the regular cadence of miniaturization, generally led by Intel, as we progress from one generation to the next: 32 nm logic is old-fashioned, 22 nm parts are in volume production across the entire CPU spectrum, 14 nm parts have started to appear, and the rumor mill is active with reports of initial shipments of 10 nm parts in mid-2016. But there is a collective nervousness about the transition to 7 nm, the next step on the industry process roadmap, with industry leader Intel commenting at the 2015 International Solid-State Circuits Conference that it may have to move away from conventional silicon materials for the transition to 7 nm parts, and that there are many obstacles to mass production beyond the 10 nm threshold.

But there are other players in the game, and some of them are anxious to demonstrate that Intel may not have the commanding lead that many observers assume. In a surprise move that hints at the future of some of its own products, and that will certainly galvanize both partners and competitors, IBM, discounted by many as a spent force in the semiconductor world after its recent divestiture of its manufacturing business, has just made a real jaw-dropper of an announcement: the existence of working 7 nm semiconductors.

What was announced?

Read more

Red Hat Summit – Can you say OpenStack and Containers?

In a world where OS and low-level platform software is considered unfashionable, it was refreshing to see the Linux glitterati and cognoscenti descend on Boston for the last three days, 5,000 strong and genuinely passionate about Linux. I spent a day there mingling with the crowds in the exhibit halls, attending sessions and meeting with Red Hat management. Overall, the breadth of Red Hat’s offerings is overwhelming, way too much to comprehend in a single day or even a handful of days, but I focused my attention on two big issues for the emerging software-defined data center: containers and the inexorable march of OpenStack.

Containers are all the rage, and Red Hat is firmly behind them, with its currently shipping RHEL Atomic release optimized to support them. The news at the Summit was the release of RHEL Atomic Enterprise, which extends the ability to execute and manage containers across a cluster as opposed to a single system. In conjunction with a tool stack such as Docker and Kubernetes, this paves the way for very powerful distributed deployments that take advantage of the failure isolation and performance potential of clusters in the enterprise. While all the IP in RHEL Atomic, Docker and Kubernetes is available to the community and competitors, it appears that Red Hat has stolen at least a temporary early lead in bolstering the usability of this increasingly central virtualization abstraction for the next-generation data center.

Read more

Thoughts on Huawei 2015 – The Juggernaut Continues to Build

In late April I once again attended Huawei’s annual analyst meeting in Shenzhen, China. As with my last trip to this event, I approached it with a mix of dread and curiosity – dread because it is a long, tiring trip and doing business in China when you are dependent on Google services is at best a delicate juggling act, and curiosity because Huawei is one of the most interesting and most poorly understood large technology companies in the world, especially here in North America.

I came away with my previous impression reinforced: Huawei is an unapologetically Chinese company. Not a global company that happens to be Chinese, as Lenovo presents itself, but a Chinese company that is intent upon, and making progress toward, becoming a major global competitor in multiple arenas where it is not dominant today, while continuing to maximize its success in its strong domestic market. A year later, all the programs that were in motion at the end of 2014 are still in operation, and Y/Y results indicate that momentum in the areas where Huawei is building its franchise, particularly mobile and enterprise IT, is, if anything, even better than promised.

Read more