How often have you been told you can't use a mainstream public cloud provider? Quite often, probably, especially if you happen to work in a regulated industry like banking or healthcare. And what justifications are you given? The regulator "won't let you," no doubt? That's a good one. And "it's not secure" is often pretty close behind. Either that, or the argument that generic public cloud infrastructure can't possibly meet your very special, very unique, very carefully crafted mix of requirements?
Sadly, despite the frequency with which they're trotted out, these attempts at justification stand a pretty good chance of being either hearsay, or just complete nonsense.
It's easy not to change, and to justify your inertia with reference to the scary, punitive, hopelessly luddite regulator. It's easy to continue lovingly polishing the hideously complex snowflake your internal computing environment has become. It's far harder to look at the truth behind the hearsay, and to work out when doing something different might — or might not — be the better approach for your business, and its effort to win, serve, and retain customers.
(Confusing messages. Image by Wikimedia Commons user 'Melburnian')
Again and again, we hear examples of companies struggling as they try to realise the benefits of moving to cloud. They know what they want to achieve as a business, they know that cloud can help, but they cannot translate that understanding into the way they specify, procure, and run the technology.
There are plenty of organisations willing to help, offering everything from design and migration services through to management of infrastructure and applications on an ongoing basis. Even in the public cloud world, it's easy to find companies eager to take your money, and then start and stop workloads on your behalf.
"Cloud computing changes the way that applications are designed, built, and run. It is often part of a broader organizational change, as enterprises move to embrace digital opportunities. Providers of managed cloud solutions need to recognize this shift: They must do more than simply run a customer’s computers. But CIOs seeking a trusted partner to assume this broader role find that too many managed cloud offerings fail to rise above basic management of infrastructure."
As soon as the news of the Brexit vote in the UK came out, the Forrester team began revising our UK and European tech market forecast to take into account the economic implications and uncertainties of the voters' decision that the UK should leave the EU. Based on this revised analysis, we predict the UK tech market will grow by just 1% (pounds sterling) in 2016 with zero growth in 2017, compared with our prior forecast of 5% in both years.
Europe as a whole will post no growth in 2016 (euros), and just 1% growth in 2017 — two percentage points slower than our earlier forecast. With the plummeting pound and enervated euro, the European tech market, measured in US dollars, will be similarly weak, with 0.2% growth in 2016 and 1.1% in 2017.
The slowing of UK and European tech market growth results from multiple uncertainties created by the Brexit vote coming on top of what was already a weak and shaky European economy. As a result:
The UK economy, which had been outperforming most of the Eurozone countries, will take a hit. The Belgian, Dutch, French, German, Italian, and Swiss economies, which are growing by 1.5% or less, are vulnerable to declines, with Italy especially exposed due to a looming banking crisis.
Greece and Portugal are struggling once again, with threats of renewed recessions leading to declines in tech spending.
The only countries with decent economic growth and above average tech market growth are Ireland and Spain in the Eurozone, and Sweden, Poland, and other Central European countries outside it.
Two weeks on, the result of the UK referendum on membership of the European Union (EU) continues to reverberate around the world. Forrester provided advice for clients needing to understand the business implications. Looking at the specific impact on public cloud deployments in Europe introduces a number of additional points. These are best considered in three separate contexts:
that of companies wishing to serve customers in the UK
that of companies wishing to serve customers in the remaining 27 EU member states (the EU27)
that of companies wishing to serve customers in the EU27 from a base in the UK.
When we think about the public cloud, the list of credible providers can sometimes seem rather short.
(The Great Wall of China. Source: Paul Miller)
In North America, Europe, and elsewhere, the same few names tend to dominate. But not in China. There, big local brands continue to command impressive market share. And now they're looking to expand into new territories, including Europe.
Huawei hardware and Huawei's distribution of the OpenStack open source cloud platform power T-Systems' Open Telekom Cloud. This was launched, with some fanfare, at CeBIT in Hannover.
Alibaba Cloud, which leads the Chinese public cloud market, is also coming to Europe this year.
In my latest report, I take a look at what both Alibaba and Huawei bring to Europe's public cloud market, and ask whether they can repeat their domestic success in this market.
TL;DR - it would be unwise to discount either of them.
Today's customers, products, business operations, and competitors are fundamentally digital. Succeeding in this new era demands that everyone constantly reinvent their businesses as fundamentally digital. You have two choices:
· become a digital predator; or
· become digital prey.
To compete in this new digital market norm, software applications and products must deliver new sources of customer value while also enabling new levels of operational agility. I&O pros need to move from the previous model of releasing large software products and services at sporadic intervals to continuous deployment, and all must adopt key automation technologies to make continuous deployment a reality.
Over the past 25 years, many organizations have modelled their support – and in some cases their delivery organization – after the ITIL framework and its processes. For many, ITIL has been helpful in establishing the rigor and governance that they needed to bring their infrastructure under control in an era where quality and consistency of service were critical and technology was sometimes fragile.
Today, we are five years into "the age of the customer" – an era in which customer obsession is driving technology, and which demands a culture of speed and collaboration to differentiate and deliver the extraordinary customer experiences that drive business growth. In this era, the rise of mobility and the race to deliver differentiated business processes are critical to success. Your development teams are driving velocity and elasticity with increased quality and availability, leveraging DevOps practices and often pushing change directly to production.
This transition has led some organizations to experience friction between the competing priorities of velocity and control, especially those that continue to execute on the traditional ITIL model.
ITIL is starting to show signs of age. That does not mean it is on the verge of demise, but it must adapt. To understand the relevance of ITIL and IT service management practices in this era of modern service delivery, Eveline Oehrlich, Elinor Klavens, and I have embarked on a review of ITIL and the use of IT service management practices supporting today's BT agenda.
As we enter the era in which "cloud first" is business as usual for operations, one of the acronyms flying around the industry is SDDC, or the Software Defined Data Center. The term, which has become very familiar to me since I started with Forrester less than six months ago, is an increasingly common topic of conversation with Forrester clients and vendors alike. It is germane to my first Forrester report, "Infrastructure as Code, The Missing Element In The I&O Agenda", where I discuss the changing role of I&O pros from building and managing physical hardware to abstracting configurations as code. The natural extension of this is the SDDC.
We believe that the SDDC is an evolving architectural and operational philosophy rather than simply a product that you purchase. It is rooted in a series of fundamental architectural constructs: modular, standards-based infrastructure; virtualization of, and at, all layers; and complete orchestration and automation.
The Forrester definition of the SDDC is:
An SDDC is an integrated abstraction model that defines a complete data center by means of a layer of software that presents the resources of the data center as pools of virtual and physical resources, and allows them to be composed into arbitrary user-defined services.
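To make that definition concrete, here is a minimal sketch of the two ideas at its core – resources presented as pools, and services composed from those pools. All class and function names are our own illustration, not any product's API.

```python
# Illustrative sketch of the SDDC abstraction: physical and virtual
# resources appear as pools, and user-defined services are composed
# by drawing on those pools. Names are hypothetical.

class ResourcePool:
    def __init__(self, kind, capacity):
        self.kind = kind          # e.g. "compute", "storage", "network"
        self.capacity = capacity  # abstract units still available

    def allocate(self, units):
        """Draw units from the pool, enforcing its remaining capacity."""
        if units > self.capacity:
            raise ValueError(f"not enough {self.kind} capacity")
        self.capacity -= units
        return {"kind": self.kind, "units": units}

def compose_service(name, pools, demands):
    """Compose an arbitrary user-defined service from resource pools."""
    return {"service": name,
            "parts": [pools[k].allocate(v) for k, v in demands.items()]}

pools = {"compute": ResourcePool("compute", 64),
         "storage": ResourcePool("storage", 500)}
svc = compose_service("web-tier", pools, {"compute": 8, "storage": 100})
```

The essential point the sketch captures is that the consumer of the service never touches specific hardware: it declares demands against pools, and the software layer decides how they are satisfied.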
For many years, infrastructure and operations (I&O) professionals have been dedicated to delivering services at lower costs and ever greater efficiency, but the business technology (BT) agenda requires innovation that delivers top-line growth.
The evolution and success of digital business models is leading I&O organizations to disrupt their traditional infrastructure models to pursue cloud strategies and new infrastructure architectures and mindsets that closely resemble cloud models.
Such a cloud-first strategy supports the business agenda for agility, rapid innovation, and delivery of solutions. This drives customer acquisition and retention, and extends the focus beyond ad hoc projects to the complete technology stack. Going cloud-first requires that infrastructure delivery, management, and maintenance support its consumption as a reusable software component. Such infrastructure can be virtual or physical and consumed as required, without lengthy build and deployment cycles.
Growing cloud maturity, the move of systems of record to the cloud (see my blog "Driving Systems of Records to the Cloud, your focus for 2016!"), container growth, extensive automation, and the availability of "infrastructure as code" change the roles within I&O, as far less traditional administration is needed. I&O must transition from investing in traditional administration to the design, selection, and management of the tooling it needs for composable infrastructure.
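"Infrastructure as code" is easiest to grasp through its central pattern: desired state is declared as data, and an idempotent reconciliation step works out what must change. The sketch below illustrates that pattern in miniature; the host names, specs, and `apply` function are illustrative assumptions, not a real tool's interface.

```python
# Minimal illustration of "infrastructure as code": the desired state of
# the infrastructure is declared as data, and a reconciliation step
# computes the actions needed to reach it. Names are hypothetical.

desired = {
    "web-01": {"cpus": 2, "memory_gb": 4},
    "web-02": {"cpus": 2, "memory_gb": 4},
}

def apply(current, desired):
    """Return the actions needed to move `current` state to `desired`."""
    actions = []
    for host, spec in desired.items():
        if current.get(host) != spec:      # missing or mis-configured
            actions.append(("provision", host, spec))
    for host in current:
        if host not in desired:            # no longer declared
            actions.append(("decommission", host))
    return actions

# web-01 already matches its declared spec, so only web-02 needs work.
current = {"web-01": {"cpus": 2, "memory_gb": 4}}
plan = apply(current, desired)
```

Because `apply` compares declared state with actual state rather than replaying a script of imperative steps, running it repeatedly is safe – which is precisely what lets configuration live in version control and be reviewed like any other code.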
The Internet of Things, or IoT, finds its way into a lot of conversations these days. CES in Las Vegas last week was awash with internet-connected doo-dahs, including cars, fridges, televisions, and more. Moving away from the home and into the world of business, the IoT furore continues unabated. Instead of connecting cars to Netflix or a teen-tracking insurance company, we connect entire fleets of trucks to warehouses, delivery locations, and driver monitoring systems. Instead of connecting the domestic fridge to Carrefour or Tesco or Walmart in order to automatically order another litre of milk, we connect entire banks of chiller units to stock control systems, backup generators, and municipal environmental health officers. And then we connect the really big things; a locomotive, a jet engine, a mountainside covered in wind turbines, a valley bursting with crops, a city teeming with people.
Wind turbines in Ayrshire. (Source: Paul Miller)
The IoT hype is compelling, pervasive, and full of bold promises and eye-watering valuations. And yet, despite talking about connected cars or smarter cities for decades, the all-encompassing vision remains distant. The reality, mostly, is one in which incompatible standards, immature implementations, and patchy network connectivity ensure that each project or procurement delivers an isolated little bubble of partially connected intelligence. Stitching these together, to deliver meaningful views — and control — across all of the supposedly connected systems within a factory, a company, a power network, a city, or a watershed often remains more hope than dependable reality.