Customers today are hyper-connected, and their connectivity is rewriting the rules of business. Mobile access, social networks, wearable devices, connected cars, and even hotels managed by robots are rapidly changing how customers engage and purchase. Think about how you watch a film, shop, or order a taxi.
The disruptive power in the hands of newly tech-savvy customers is forcing every business to evolve into a digital business or perish.
Infrastructure is at the center of the Digital Transformation
The digital transformation requires that organizations evolve their underlying technology infrastructure investments to fuel a business technology (BT) agenda, with technology designed to win, serve, and retain customers. Infrastructure – whether it is managed internally or hidden behind some cloud service – is a big part of the digital in digital business. I&O leaders can no longer simply focus on the same old approach to infrastructure. Internal business operations, or systems of record, will remain important, but the emphasis must shift to the systems powering the newer digital customer experience.
We are all aware that software is at the center of every successful business transition today. This software focus fueled a rapid expansion of cloud services, and many argue that there is no longer any necessity to own hardware. This has turned the infrastructure world upside down. Hardware speeds and feeds no longer dominate infrastructure and operations (I&O) professionals' criteria. In some use cases, qualities like the fastest packet-processing chip or the largest disk capacity are critical, but they matter less to many of the systems of engagement in the BT agenda. As you design your BT services, be aware of which solution is right for optimizing the customer experience.
Containers. One of those nasty terms, like metadata (ok - maybe you had to move in the odd circles I did for that one to resonate), cloud, or big data. To some, the solution to every problem. To others, yet another unforgivable explosion of over-exuberant hype that should be ignored at all costs. And, like so many things, the truth lies somewhere in the middle.
Containers are an important component in broader efforts to transform the way in which an enterprise builds, tests, deploys, and scales its applications. Particularly, today, its customer-facing systems of engagement. But they're not the answer to every problem, and they don't replace all your virtual machines, mainframes, and other infrastructure.
Most enterprise CIOs, today, have probably heard of containers... or Docker. And, for most of you, there will be a group or individual inside your organisation loudly singing containers' praises. There will be an equally vocal group or individual, pointing to every factoid supporting their view that the container emperor has nothing on.
My latest Brief takes a look at some of the ways containers are being used, and argues that CIOs need to pay attention - now. That's not to say you should wholeheartedly embrace containers in everything you do. But you do need to ensure you're aware of their strengths, and track the rapid evolution in the underlying technologies. Some pieces are even beginning to be standardised between competing companies.
And, just to see if the metadata crowd are still reading... Z39.50!
Whether it’s the growth of service providers transitioning to cloud offerings, the emergence of containers within hyperconverged solutions, or the potential of Google succeeding in public cloud, the public cloud is set for a year of “hyper-growth”! That said, we have to sort through the FUD (fear, uncertainty, and doubt), especially around security, to determine the appropriateness of public cloud for your organization.
Is the low-hanging cloud fruit eaten?
The rush to cloud to date has clearly been within “systems of innovation,” applications geared mostly to customer engagement (so-called “systems of engagement”). Enterprises leveraging public cloud are looking to get new, innovative applications and services rapidly to market. These applications have primarily driven customer acquisition and then fostered customer loyalty. These initiatives represent just the tip of the iceberg; the real opportunity is in moving “systems of record,” or everyday workloads, to the public cloud.
Over the past year, containers such as Docker have generated tremendous interest and uptake among well-known cloud providers, who use them to deliver some of the largest and most popular cloud services and applications. Container adoption is being driven by the promise that containers deliver the ability to “build once and run anywhere,” allowing increased server efficiency and scalability for technology managers.
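To make that “build once and run anywhere” promise concrete, here is a minimal, illustrative Dockerfile sketch. The application, file names, and base image are assumptions for illustration only, not drawn from any vendor example:

```dockerfile
# Illustrative sketch: package a hypothetical Python web app as a container image.
FROM python:3-slim              # start from a public base image
WORKDIR /app
COPY requirements.txt .         # declare dependencies inside the image
RUN pip install -r requirements.txt
COPY . .                        # bundle the application code into the image
CMD ["python", "app.py"]        # same entry point wherever the image runs
```

Built once with `docker build -t myapp .`, the resulting image runs unchanged on a laptop, an on-premises server, or a cloud VM with a container runtime, which is the portability driving the adoption described above.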
Hyperconvergence growing in adoption
A second trend developing at a similar rate is the adoption of hyperconverged platforms. Hyperconverged platforms architect compute, storage, and network together as a complete system (whether physical or virtual), blending ease of use, scalability, and integration into easily consumable webscale building blocks. This allows infrastructure and operations (I&O) leaders to spend less time engineering and tuning fundamental infrastructure and more time putting capabilities in the hands of their firms' customers.
Hyperconvergence leveraging Containers, the perfect Cloud match
The pairing of containers with hyperconverged solutions is emerging now and in 2016 will become commonplace. This combination will yield the most flexible application packaging yet. AWS, CoreOS, Docker, Google, Mesosphere, Red Hat, VMware, and the various OpenStack players will lead the way. Hyperconverged infrastructure will be the foundation because it provides great flexibility with the underlying resources in the pool for cloud services.
In a world where OS and low-level platform software is considered unfashionable, it was refreshing to see the Linux glitterati and cognoscenti descend on Boston for the last three days, 5,000 strong and genuinely passionate about Linux. I spent a day there mingling with the crowds in the exhibit halls, attending some sessions, and meeting with Red Hat management. Overall, the breadth of Red Hat’s offerings is overwhelming and far too much to comprehend in a single day or a handful of days, but I focused my attention on two big issues for the emerging software-defined data center – containers and the inexorable march of OpenStack.
Containers are all the rage, and Red Hat is firmly behind them, with its currently shipping RHEL Atomic release optimized to support them. The news at the Summit was the release of RHEL Atomic Enterprise, which extends the ability to execute and manage containers over a cluster as opposed to a single system. In conjunction with a tool stack such as Docker and Kubernetes, this paves the way for very powerful distributed deployments that take advantage of the failure isolation and performance potential of clusters in the enterprise. While all the IP in RHEL Atomic, Docker, and Kubernetes is available to the community and competitors, it appears that Red Hat has stolen at least a temporary early lead in bolstering the usability of this increasingly central virtualization abstraction for the next-generation data center.
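In a Kubernetes-style tool stack, that cluster-wide container management is expressed declaratively. The sketch below is a hypothetical Deployment manifest (the names, image, and replica count are illustrative assumptions, and the API version shown is from later Kubernetes releases than the one current at the time of this Summit):

```yaml
# Illustrative sketch: ask the cluster to keep three copies of a container running.
# The scheduler spreads them across nodes and replaces failed instances,
# which is the failure isolation and scale-out potential described above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                      # hypothetical workload name
spec:
  replicas: 3                    # desired count across the cluster
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example/web:1.0   # assumed image name
        ports:
        - containerPort: 8080
```

The operator declares the desired state and the cluster converges on it, which is what distinguishes cluster-level container management from running containers on a single host.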