The Background – Linux as a Fast Follower and the Need for Hot Patching
No doubt about it, Linux has made impressive strides in the last 15 years, gaining many features previously associated with high-end proprietary Unix as it made the transition from small system plaything to core enterprise processing resource and the engine of the extended web as we know it. Along the way it gained reliable and highly scalable schedulers, a multiplicity of efficient and scalable file systems, advanced RAS features, its own embedded virtualization and efficient thread support.
As Linux grew, so did supporting hardware, particularly the capabilities of the ubiquitous x86 CPU on which the vast majority of Linux runs today. The debate has always been about how close Linux could get to “the real OS”, the core proprietary Unix variants that for two decades defined the limits of non-mainframe scalability and reliability. But “the times they are a-changin’”, and the new narrative may be “when will Unix catch up to Linux on critical RAS features like hot patching?”
Hot patching, the ability to apply updates to the OS kernel while it is running, is a long sought-after but elusive feature of a production OS. Long sought after because both developers and operations teams recognize that bringing down an OS instance that is doing critical high-volume work is at best disruptive and at worst a logistical nightmare, and elusive because it is incredibly difficult. There have been several failed attempts, and several implementations that “almost worked” but were so fraught with exceptions that they were not really useful in production.[i]
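The core idea behind hot patching is call redirection: future invocations of a faulty routine are steered to a corrected replacement while the system keeps running. The sketch below is only an analogy in Python, not the kernel mechanism (which operates on compiled functions via mechanisms like ftrace); the function names are hypothetical.

```python
import sys

# Analogy only: kernel hot patching redirects calls from an old function
# to its patched replacement without stopping the system. Python lets us
# mimic that redirection by rebinding a name in the module namespace.

def handle_request(data):
    # Original "buggy" routine: fails to strip surrounding whitespace.
    return data.upper()

def handle_request_patched(data):
    # Patched routine: same interface, corrected behavior.
    return data.strip().upper()

# Long-running callers resolve the name through the module at call time,
# so rebinding it redirects all future calls -- no restart needed.
module = sys.modules[__name__]
setattr(module, "handle_request", handle_request_patched)

print(handle_request("  hello "))  # patched behavior takes effect immediately
```

The hard part in a real kernel is everything this analogy hides: finding a moment when no CPU is executing the old function, patching in-flight state, and handling functions whose semantics (not just code) changed.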
Over the past several years, Forrester has written extensively about the age of the customer. Forrester believes that only the enterprises that are obsessed with winning, serving, and retaining customers will thrive in this highly competitive, customer-centric economy. But in order to get a full view of customer behavior, sentiment, emotion, and intentions, Information Management professionals must help enterprises leverage all the data at their disposal, not just structured but also unstructured. Alas, that's still an elusive goal, as most enterprises leverage only 40% of structured data and 31% of unstructured data for business and customer insights and decision-making.
So what do you need to do to start enriching your customer insights with unstructured data? First, get your text analytics terminology straight. For Information Management pros, the process of text mining and text analytics should not be a black box, where unstructured text goes in and structured information comes out. But today there is a lot of market confusion about the terminology and process of text analytics. The market, both vendors and users, often uses the terms text mining and text analytics interchangeably; Forrester makes a distinction and recommends that Information Management pros working on text mining/text analytics initiatives adopt the following terminology:
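To make the "unstructured text in, structured information out" pipeline concrete, here is a deliberately minimal sketch. The feedback string, field names, and keyword lists are hypothetical stand-ins for real extraction and sentiment models, not any vendor's method.

```python
import re

# Minimal illustration of turning raw feedback text into a structured
# record that can be joined with transactional customer data.
feedback = "The checkout flow on 2024-03-12 was slow, but support replied fast."

record = {
    # Extract any ISO-style dates mentioned in the text.
    "dates": re.findall(r"\d{4}-\d{2}-\d{2}", feedback),
    # Naive keyword cues standing in for real sentiment analysis.
    "negative_cues": [w for w in ("slow", "broken") if w in feedback],
    "positive_cues": [w for w in ("fast", "great") if w in feedback],
}

print(record)
```

Real text analytics replaces each of these steps (entity extraction, sentiment scoring) with trained models, but the input/output shape is the same, which is why the process need not be a black box.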
IBM opened its global Watson Internet of Things (IoT) headquarters in Munich this week. It is hardly unusual for this quintessential global business to open research centers on a global scale. But the decision to move the HQ for one of the most dynamic areas of the digital transformation arena to Munich is noteworthy for several reasons. The move underlines that:
IoT has a very strong B2B component. Yes, IoT will play a role in consumer segments such as the connected home. But connectivity limitations and costs, compliance, and security will put many IoT ambitions in the consumer space to rest. The real action will be in the B2B space, where IoT will be elemental to drive activities like predictive maintenance, fleet management, traffic management, supply chain management, and order processing. Forrester expects the market size for B2B eCommerce, of which IoT is a subset, to be about twice that of B2C by 2020.
IoT and big data are closely intertwined. The real value of IoT solutions will not come from the hardware components of connected assets but from the data they generate and consume. In order to manage and make sense of the data that connected assets generate, cognitive systems and machine learning will play a fundamental role for the evolution of IoT. “Employing” Watson in the IoT context elevates IBM’s role in the IoT market significantly.
Predictive analytics has become the key to helping businesses — especially those in the highly dynamic Chinese market — create differentiated, individualized customer experiences and make better decisions. Enterprise architecture professionals must take a customer-oriented approach to developing their predictive analytics strategy and architecture.
I’ve recently published two reports focusing on how to architect predictive analytics capability. These reports analyze the trends around predictive analytics adoption in China and discuss four key areas that EA pros must focus on to accelerate digital transformation. They also show EA pros how to unleash the power of digital business by analyzing the predictive analytics practices of visionary Chinese firms. Some of the key takeaways:
Predictive analytics must cover the full customer life cycle and leverage business insights. Organizations require predictive insights into customer behaviors and business operations. You must implement predictive analytics solutions and deliver value to customers throughout their life cycle to differentiate your customer experience and sustain business growth. You should also recognize the importance of business stakeholders and define effective mechanisms for translating their business knowledge into predictive algorithm inputs to optimize predictive models faster and generate deeper customer insights.
I’ve written and commented in the past about the inevitability of a new class of infrastructure called “composable”, i.e. integrated server, storage and network infrastructure that allowed its users to “compose”, that is to say configure, a physical server out of a collection of pooled server nodes, storage devices and shared network connections.[i]
The early exemplars of this class were pioneering efforts from Egenera and blade systems from Cisco, HP, IBM and others, which allowed some level of abstraction (a necessary precursor to composability) of server UIDs, including network addresses and storage bindings, and introduced the notion of templates for server configuration. More recently, the Dell FX and the Cisco UCS M-Series servers introduced the notion of composing servers from pools of resources within the bounds of a single chassis.[ii] While innovative, they were early efforts, and lacked a number of software and hardware features required for deployment against a wide spectrum of enterprise workloads.
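The essence of composability described above, binding a logical server to compute, storage, and network identity drawn from shared pools at provision time, can be sketched as a toy model. All the names here (pools, template fields) are illustrative inventions, not any vendor's API.

```python
from dataclasses import dataclass, field

# Toy model of composable infrastructure: shared resource pools plus a
# template describing what a logical server should claim from them.

@dataclass
class ResourcePools:
    cpu_nodes: list = field(default_factory=lambda: ["node-1", "node-2"])
    disks: list = field(default_factory=lambda: ["ssd-a", "ssd-b", "ssd-c"])
    macs: list = field(default_factory=lambda: ["02:00:00:00:00:01"])

def compose_server(pools, template):
    """Claim resources from the shared pools according to a template."""
    return {
        "node": pools.cpu_nodes.pop(0),
        "disks": [pools.disks.pop(0) for _ in range(template["disk_count"])],
        # Abstracted identity: the MAC belongs to the logical server, not
        # the physical node, so it can be rebound if the node is replaced.
        "mac": pools.macs.pop(0),
    }

pools = ResourcePools()
server = compose_server(pools, {"disk_count": 2})
print(server)
```

The key property the sketch illustrates is that identity and resources outlive any particular piece of hardware, which is what lets a template re-provision an equivalent server elsewhere.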
This morning, HPE put a major marker down in the realm of composable infrastructure with the announcement of Synergy, its new composable infrastructure system. HPE Synergy represents a major step-function in capabilities for core enterprise infrastructure as it delivers cloud-like semantics to core physical infrastructure. Among its key capabilities:
Looking at Oracle’s latest iteration of its SPARC processor technology, the new M7 CPU, it is at first blush an excellent implementation of SPARC: 32 cores, each with eight threads, implemented in an aggressive 20nm process and promising a well-deserved performance bump for legacy SPARC/Solaris users. But the impact of the M7 goes beyond simple comparisons to previous generations of SPARC and competing products such as Intel’s Xeon E7 and IBM’s POWER8. The M7 is Oracle’s first tangible delivery on its “Software on Silicon” promise, with significant acceleration of key software operations enabled in the M7 hardware.[i]
Oracle took aim at selected performance bottlenecks and security exposures, some specific to Oracle software, and some generic in nature but of great importance. Among the major enhancements in the M7 are:[ii]
Cryptography – While many CPUs now include some form of acceleration for cryptography, Oracle claims the M7 includes a wider variety of algorithms and deeper support, resulting in performance across a range of benchmarks that is almost indistinguishable whether SSL and other cryptographic protocols are enabled or not. Oracle claims that the M7 is the first CPU architecture that does not force users to choose between secure and fast, but allows both simultaneously.
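The "secure or fast" trade-off is easy to quantify on any platform: time the cryptographic step a workload would add. A minimal sketch, using Python's `hashlib` (which transparently uses whatever accelerated primitives the underlying platform provides); the payload size and iteration count are arbitrary choices for illustration.

```python
import hashlib
import time

# Measure SHA-256 throughput over a fixed payload. On CPUs with deeper
# hardware crypto support, this per-operation cost is what shrinks toward
# zero, narrowing the gap between "secure" and "fast" configurations.
payload = b"x" * (1 << 20)  # 1 MiB of data
iterations = 50

start = time.perf_counter()
for _ in range(iterations):
    digest = hashlib.sha256(payload).hexdigest()
elapsed = time.perf_counter() - start

mib_per_s = iterations / elapsed  # 1 MiB hashed per iteration
print(f"SHA-256 throughput: {mib_per_s:.0f} MiB/s")
```

Running the same measurement with the protocol's cipher suite enabled and disabled is how claims like Oracle's "almost indistinguishable" would be verified in practice.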
The acquisition of EMC by Dell is generating an immense amount of hype and prose, much of it looking forward at how the merged entity will try to compete in cloud, integrate and rationalize its new product line, and pay for it all (see Forrester report “Quick Take: Dell Buys EMC, Creating a New Legacy Vendor”). Interestingly, not a lot has been written about the changes in the fundamental competitive faceoff between Dell and HP, both newly transformed by divestiture and by acquisition.
Yesterday the competition was straightforward and relatively easy to characterize. HP was the dominant enterprise server vendor and Dell a strong challenger; both sold PCs, and both had storage IP that was good but in no sense dominant. Both had competent data center practices and embryonic cloud strategies that were still works in progress. Post-transformation we have a totally different picture, with two very transformed companies:
A slimmer HP. HP is smaller (although a $50B company is not in any sense small), and bereft of its historical profit engine, the margins on its printer supplies. Free to focus on its core mandate of enterprise systems, software and services, HP Enterprise is positioning itself as a giant startup, focused and agile. Color me slightly skeptical but willing to believe that it can’t be any less agile than its precursor at twice the size. Certainly, along with the margin contribution, it loses the option to fight about budget allocations between enterprise and print/PC priorities.
In the world of CMOS semiconductor process, the fundamental heartbeat that drives the continuing evolution of all the devices and computers we use, and that governs at a fundamental level the services we can layer on top of them, is the continual shrinkage of the transistors we build upon. We are used to the regular cadence of miniaturization, generally led by Intel, as we progress from one generation to the next: 32nm logic is now old-fashioned, 22nm parts are in volume production across the entire CPU spectrum, 14nm parts have started to appear, and the rumor mill is active with reports of initial shipments of 10nm parts in mid-2016. But there is a collective nervousness about the transition to 7nm, the next step in the industry process roadmap, with industry leader Intel commenting at the 2015 International Solid-State Circuits Conference that it may have to move away from conventional silicon materials for the transition to 7nm parts, and that there were many obstacles to mass production beyond the 10nm threshold.
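The reason each step in that cadence matters can be shown with first-order arithmetic: ideal transistor density scales with the square of the feature-size ratio. (Node names are marketing labels rather than literal dimensions, and real density gains run below this ideal, so treat these numbers as the upper bound the naming implies.)

```python
# First-order density arithmetic for the process nodes named in the text.
# Ideal area per transistor shrinks with the square of the linear
# feature-size ratio, so each full node step roughly doubles density.
nodes = [32, 22, 14, 10, 7]  # nm

for prev, nxt in zip(nodes, nodes[1:]):
    gain = (prev / nxt) ** 2
    print(f"{prev}nm -> {nxt}nm: ~{gain:.1f}x ideal density gain")
```

Compounded across the whole list, the ideal gain from 32nm to 7nm is about 21x, which is why the industry is so anxious about any node where the cadence might stall.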
But there are other players in the game, and some of them are anxious to demonstrate that Intel may not have the commanding lead that many observers assume they have. In a surprise move that hints at the future of some of its own products and that will certainly galvanize both partners and competitors, IBM, discounted by many as a spent force in the semiconductor world with its recent divestiture of its manufacturing business, has just made a real jaw-dropper of an announcement – the existence of working 7nm semiconductors.
Today, IBM and Box announced a partnership and integration strategy to “transform work in the cloud.” This is an interesting move that further validates Forrester’s view that the ECM market is transforming — largely due to new, often customer-activated, use cases. We also see that the current horizontal collaboration market is shifting to better target specific work output, as opposed to more general-purpose knowledge-dissemination use cases.
What does this partnership mean for IBM, Box, and their partners and customers?
For Box, the partnership brings important access to the extensive IBM ecosystem: Global Services, developer communities via IBM’s Bluemix platform, and the IBM-Apple MobileFirst relationship, as well as engineering acceleration to fill gaps in its content collaboration offering in areas such as capture, case management, governance, and analytics, including Watson.
Unfortunately, visa issues prevented me from attending the OpenStack summit in Vancouver last week — despite submitting my application to the Canadian embassy in Beijing 40 days in advance! However, after following extensive online discussions of the event and discussing it with vendors and peers, I would say that OpenStack is moving into a new phase, for two reasons:
The rise of containers is laying the foundation for the next level of enterprise readiness. Docker’s container technology has become a major factor in the evolution of OpenStack components. Docker drivers have been implemented for the key components Nova and Heat, extending computing and orchestration capabilities, respectively. The Magnum project, aimed at container services, allows OpenStack to create clusters with Google’s Kubernetes (k8s) and Docker’s Swarm. The Murano project, contributed by Mirantis and aimed at application catalog services, is also integrated with k8s.