The Background – Linux as a Fast Follower and the Need for Hot Patching
No doubt about it, Linux has made impressive strides in the last 15 years, gaining many features previously associated with high-end proprietary Unix as it made the transition from small-system plaything to core enterprise processing resource and the engine of the extended web as we know it. Along the way it gained reliable and highly scalable schedulers, a multiplicity of efficient and scalable file systems, advanced RAS features, its own embedded virtualization, and efficient thread support.
As Linux grew, so did supporting hardware, particularly the capabilities of the ubiquitous x86 CPU on which the vast majority of Linux runs today. The debate, though, has always been about how close Linux could get to "the real OS", the core proprietary Unix variants that for two decades defined the limits of non-mainframe scalability and reliability. But "the times they are a-changin'", and the new narrative may be "when will Unix catch up to Linux on critical RAS features like hot patching?"
Hot patching, the ability to apply updates to the OS kernel while it is running, is a long-sought-after but elusive feature of a production OS. Long sought after because both developers and operations teams recognize that bringing down an OS instance that is doing critical high-volume work is at best disruptive and at worst a logistical nightmare, and elusive because it is incredibly difficult. There have been several failed attempts, and several implementations that "almost worked" but were so fraught with exceptions that they were not really useful in production.[i]
Last year I published a reasonably well-received research document on Hadoop infrastructure, “Building the Foundations for Customer Insight: Hadoop Infrastructure Architecture”. Now, less than a year later it’s looking obsolete, not so much because it was wrong for traditional (and yes, it does seem funny to use a word like “traditional” to describe a technology that itself is still rapidly evolving and only in mainstream use for a handful of years) Hadoop, but because the universe of analytics technology and tools has been evolving at light-speed.
If your analytics are anchored by Hadoop and its underlying MapReduce processing, then the mainstream architecture described in the document, that of clusters of servers each with their own compute and storage, may still be appropriate. On the other hand, if, like many enterprises, you are adding analysis tools such as NoSQL databases, SQL on Hadoop (Impala, Stinger, Vertica), and particularly Spark, an in-memory analytics technology well suited to real-time and streaming data, it may be necessary to reassess the supporting infrastructure in order to build something that can continue to support Hadoop while also catering to the differing access patterns of these other tool sets. This need to rethink the underlying analytics plumbing was brought home by a recent demonstration by HP of a reference architecture for analytics, publicly referred to as the HP Big Data Reference Architecture.
On one level, IBM’s new z13, announced last Wednesday in New York, is exactly what the mainframe world has been expecting for the last two and a half years – more capacity (a big boost this time around – triple the main memory, more and faster cores, more I/O ports, etc.), a modest boost in price performance, and a very sexy cabinet design. (I know it’s not really a major evaluation factor, but I think IBM’s industrial design for its system enclosures for Flex System, Power, and z Systems is absolutely gorgeous and belongs in MoMA*.) IBM indeed delivered against these expectations, plus more – in this case, a lot more.
In addition to the required upgrades to fuel the normal mainframe upgrade cycle and its reasonably predictable revenue, IBM has made a bold but rational repositioning of the mainframe as a core platform for the workloads generated by mobile transactions, the most rapidly growing workload across all sectors of the global economy. What makes this positioning rational, as opposed to a pipe dream, is an underlying pattern common to many of these transactions: at some point they access data generated by and stored on a mainframe. By enhancing the economics of the increasingly Linux-centric processing chain that precedes the call for the mainframe data, IBM hopes to foster the migration of these workloads to the mainframe, where access to the resident data will be more efficient, benefiting from inherently lower latency as well as from embedded high-value functions such as accelerators for inline analytics. In essence, IBM hopes to shift the center of gravity for mobile processing toward the mainframe and away from the distributed x86 Linux systems that it no longer manufactures.
Lenovo recently announced record results for the third quarter of the 2013/14 fiscal year: the first time that the firm has exceeded US$10 billion in revenue in a single quarter. Lenovo has continued to prioritize maintaining or increasing its share of the PC market — the majority of its business. This strategy has paid off: Lenovo’s PC business (laptops plus desktops) grew by 8% year on year — in stark contrast to its slumping rivals. Lenovo can attribute its success to a strategy that sacrifices profit to keep prices competitive, maintains a direct local sales team, and retains channel partners after acquisitions.
Forrester believes that the mobile mind shift is one of four key market imperatives that enterprises can use to win in the age of the customer. Lenovo has gotten a good start on this journey with its efforts to enhance its mobile-related capabilities. Although the coming Motorola deal may have a negative impact on Lenovo’s performance over the next three to five quarters, the firm believes that mobile can change its business — and not just its digital business. In the next two to three years, Lenovo’s key strategy will be to provide customers with mobile devices and related infrastructure that address their mobile mind shift. In particular:
Dane Anderson, Dan Bieler, Charlie Kun Dai, Chris Mines, Nupur Singh Andley, Tirthankar Sen, Christopher Voce, Bryan Wang
Huawei is one of the most intriguing companies in the ICT industry, but its overall strategy remains largely unchanged: imitate established products and services, adjust and enhance them, and make them available at an attractive price point. To be fair, though, Huawei is pushing more and more innovative products.
In 2012, Huawei’s annual revenue growth slowed to 8%, reaching CNY 220 billion (about US$35 billion). During the same period, its EBIT margin remained flat at 9%, despite a changing revenue mix driven by the growth of its consumer and enterprise businesses. Unlike last year’s event, which was dominated by the announcement of Huawei’s push into the enterprise space, this year’s Global Analyst Summit in Shenzhen saw little groundbreaking news. It was more of a progress report: