I’ve been getting a steady trickle of inquiries this year from our enterprise clients about the future of the mainframe. Most of them run more or less along the lines of “I have a lot of stuff running on mainframes. Is this a viable platform for the next decade, or is IBM going to abandon it?” I think the answer is that the platform is secure, and that in the majority of cases the large business-critical workloads currently on the mainframe probably should remain there. In the interest of transparency, I’ve laid out my reasoning below so that you can see whether it applies to your own situation.
How Big is the Mainframe LOB?
It's hard to get exact figures for the mainframe's contribution to the total revenues of IBM's Systems & Technology Group (STG), but the data IBM has shared suggests that mainframe revenues have recovered from the declines of previous quarters and, at worst, flattened. Because the business is inherently somewhat cyclical, I would expect the next generation of mainframes, rumored to be arriving next year, to give IBM a boost similar to the last major cycle, allowing it to show positive revenue growth next year.
SaaS has been around for 20 years, cloud platforms for nearly a decade. That must mean they’ve worked all the kinks out, right? You know better. There are wide variances in the maturity, stability, and enterprise-readiness of the many categories of cloud services. There are certainly differences between vendors within the same category, as demonstrated in each of our Forrester Waves, but the differences between classes of cloud services are even more significant. For example, internal private clouds are far less mature than their public counterparts, and the desktop-as-a-service category continues to struggle to find its place in the market.
This is the reason Forrester created the Tech Radar. This class of report helps enterprise clients distinguish between categories of services based on their maturity and adoption by other enterprise clients, and helps them gauge the likely return on investment each category can deliver at its current level of maturity. The latest Cloud Computing Tech Radar, published this month on Forrester.com, plots each major category of cloud services along two axes:
I’ve been talking to a number of users and providers of bare-metal cloud services, and I’m finding the common threads among the high-profile use cases interesting individually; they’re also starting to connect some dots about why customers choose these providers. Bare-metal providers offer the ability to provision and use dedicated physical servers with semantics very similar to those of the common VM IaaS cloud: servers that can be instantiated at will, provisioned with a variety of OS images, connected to storage, and used to run applications. The differentiation for customers is in the behavior of the resulting images:
Deterministic performance – Your workload is running on a dedicated resource, so there is no question of any “noisy neighbor” problem, or even of sharing resources with otherwise well-behaved neighbors.
Extreme low latency – Like it or not, VMs, even lightweight ones, impose some level of additional latency compared to bare-metal OS images. Where this latency is a factor, bare-metal clouds offer a differentiated alternative.
Raw performance – Under the right conditions, a single bare-metal server can process more work than a collection of VMs, even when their nominal aggregate performance is similar. Benchmarking is always tricky, but several of the bare-metal cloud vendors can show prospective customers some impressive comparative benchmarks.
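The provisioning semantics described above can be sketched as a toy model. To be clear, this is a hypothetical illustration, not any vendor's actual API; names like `BareMetalCloud` and `provision` are invented for the sketch:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of bare-metal provisioning semantics -- not any
# vendor's real API. The lifecycle verbs mirror a VM IaaS cloud, but each
# "server" maps to a dedicated physical machine, not a hypervisor slice.

@dataclass
class BareMetalServer:
    server_id: str
    os_image: str
    attached_volumes: list = field(default_factory=list)
    state: str = "provisioning"

class BareMetalCloud:
    """Toy in-memory model of a bare-metal provisioning service."""

    def __init__(self, inventory):
        self._free = list(inventory)   # pool of dedicated physical machines
        self._servers = {}

    def provision(self, os_image):
        # Instantiated at will -- but only while physical inventory remains,
        # a constraint a VM cloud can hide behind overcommitment.
        if not self._free:
            raise RuntimeError("no physical servers available")
        server = BareMetalServer(self._free.pop(0), os_image)
        server.state = "running"
        self._servers[server.server_id] = server
        return server

    def attach_storage(self, server_id, volume):
        self._servers[server_id].attached_volumes.append(volume)

    def release(self, server_id):
        # Releasing returns the physical machine to the free pool.
        server = self._servers.pop(server_id)
        self._free.append(server.server_id)

cloud = BareMetalCloud(["rack1-node1", "rack1-node2"])
db = cloud.provision("ubuntu-14.04")
cloud.attach_storage(db.server_id, "vol-data-01")
```

The point of the sketch is that the customer-facing semantics are VM-like; the differences listed above (deterministic performance, low latency, raw throughput) come entirely from what sits underneath.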
There is always a tendency to regard the major players in large markets as a static background against which the froth of smaller companies and the rapid dance of customer innovation plays out. But if we turn our lens toward the major server vendors (who are now also storage, networking, and software vendors), we see that the relatively flat industry revenues hide almost continuous churn. Turn back the clock slightly more than five years, and the market was dominated by three vendors: HP, Dell, and IBM. Since then, IBM has divested itself of the highest-velocity portion of its server business, Dell is no longer a public company, Lenovo is now a major player in servers, Cisco has come out of nowhere to mount a serious challenge in the x86 server segment, and HP has announced that it intends to split itself into two companies.
And it hasn’t stopped. Two recent events, the fracturing of the VCE consortium and the formerly unthinkable hook-up of IBM and Cisco, illustrate the urgency with which existing players are seeking differential advantage, and reinforce our contention that converged and integrated infrastructure remains one of the most active and profitable segments of the industry.
EMC’s recent acquisition of Cisco’s interest in VCE effectively acknowledged what most customers have been telling us for a long time: that VCE had become essentially an EMC-driven vehicle to sell storage, supported by VMware (owned by EMC) and Cisco as a systems platform. The purchase also tacitly acknowledges two underlying tensions in the converged infrastructure space:
We’ve been seeing for years in our surveys that business users and application developers are the primary consumers of cloud services. SaaS and cloud platforms are not infrastructure or alternatives to the corporate data center; they are application services your organization leverages to create new user experiences, drive greater efficiencies that maximize profitability, and surface trends that yield business insights.
In 2015, this realization will motivate vendors and enterprise CIOs to focus their cloud strategies on empowering business and developers first, putting aside their own concerns and priorities. Cloud adoption will accelerate, and technology management groups must adapt to this reality by learning how to add value to their company’s use of these services through facilitation, adaptation, and evangelism. The days of fighting the cloud are over. This means major changes are ahead for you, your application architecture, your portfolio, and your vendor relationships.
In this playbook, we do not predict the future of technology; rather, we try to understand how, in the age of the customer, I&O must transform to support businesses by accelerating the speed of service delivery, enabling capacity when and where it is needed, and improving customer and employee experience.
All industries mature toward commoditization and abstraction of the underlying technology, because knowledge and expertise are cumulative. Our industry will follow the same trajectory, resulting in technology that is ubiquitous and easier to implement, manage, and change.
Dell today announced its new FX system architecture, and I am decidedly impressed.
Dell FX is a 2U flexible infrastructure building block that allows infrastructure architects to compose an application-appropriate server and storage infrastructure out of the following set of resources:
Multiple choices of server nodes, ranging from multi-core Atom to the new Xeon E5 v3. With configurations ranging from 2 to 16 server nodes per enclosure, there is a configuration point for most mainstream applications.
A novel, flexible method of mapping disks from up to three optional disk modules, each with 16 drives. The mapping, controlled by the onboard management, allows each server to see its assigned drives as locally attached DASD, so no changes are needed in software that thinks it is accessing local storage. A very slick evolution in storage provisioning.
A set of I/O aggregators for consolidating Ethernet and FC I/O from the enclosure.
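The disk-mapping scheme described above can be sketched as a toy model. This is purely illustrative, not Dell's actual management interface; only the figures given above (up to three disk modules of 16 drives each) are taken from the announcement:

```python
# Toy sketch of FX-style disk mapping -- illustrative only, not Dell's
# actual management interface. Enclosure management assigns drive slots
# from shared disk modules to server nodes; each node then sees its
# assigned drives as plain local disks.

MODULES = 3            # up to three optional disk modules (per the text)
DRIVES_PER_MODULE = 16 # 16 drives each (per the text)

class Enclosure:
    def __init__(self):
        # every (module, drive) slot starts unassigned
        self.free = [(m, d) for m in range(MODULES)
                     for d in range(DRIVES_PER_MODULE)]
        self.assignments = {}   # server name -> list of (module, drive)

    def map_drives(self, server, count):
        """Assign `count` free drive slots to a server node."""
        if count > len(self.free):
            raise ValueError("not enough free drives in the enclosure")
        drives = [self.free.pop(0) for _ in range(count)]
        self.assignments.setdefault(server, []).extend(drives)
        return drives

    def local_disks(self, server):
        # What the server's OS would see: a simple list of local disks,
        # with no hint that a shared enclosure sits underneath.
        return [f"disk{i}" for i, _ in
                enumerate(self.assignments.get(server, []))]

enc = Enclosure()
enc.map_drives("node1", 8)    # small node gets 8 drives
enc.map_drives("node2", 24)   # storage-heavy node gets 24
```

The design point the sketch illustrates is that the flexibility lives entirely in the management layer: software on each node needs no changes because it only ever sees local disks.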
All in all, an attractive and flexible packaging scheme for infrastructure that needs to be tailored to specific combinations of server, storage, and network configurations. Probably an ideal platform to support the Nutanix software suite that Dell is reselling as well. My guess is that other system design groups are thinking along these lines, but this is currently a unique package, and it merits attention from infrastructure architects.
Forrester’s Infrastructure and Operations research team has been on the leading edge of infrastructure technology and its operational best practices for years. We pushed the industry on both the supply side (vendors) and the demand side (enterprises) toward new models, and we pushed hard. I’m proud to say we’ve been instrumental in changing the world of infrastructure, and we’re about to change it again!
As the entire technology management profession evolves into the Age of the Customer, the whole notion of infrastructure is morphing in dramatic ways. The long-criticized silos are finally collapsing, cloud computing has quickly become mainstream, and you now face a dizzying variety of infrastructure options. Some are outside your traditional borders – like new outsourcing, hosting, and colocation services, as well as too many cloud forms to count. Some remain inside and will for years to come. More of these options will come from the outside, though, and even those “legacy” technologies remaining inside will be created and managed differently.
Your future lies not in managing pockets of infrastructure, but in how you assemble the many options into the services your customers need. Our profession has been locally brilliant but globally stupid. We’re now helping you become globally brilliant. We call this service design, a much broader design philosophy rooted in systems thinking. The new approach packages technology into a finished “product” that is much more relevant and useful than any of the parts alone.
On Monday Microsoft officially announced the launch of two Azure Data Centers in Australia. This is big news for the many Australia-based organizations concerned about data sovereignty, as well as those who simply equate on-shore data residency with increased security and control.
In the announcement, made as part of TechEd 2014 in Sydney, Microsoft specifically called out Amazon Web Services (AWS) and Google as its key competition. In fact, Microsoft has gone to great lengths for more than a year to consistently position these two companies as the only other viable long-term cloud providers. This is based on three cloud provider capabilities identified by Microsoft as critical: hyper-scale, enterprise-grade, and hybrid.
Overall it’s a good angle for Microsoft. All three players operate at hyper-scale as public cloud providers. All three also offer enterprise-grade services (although the definition varies by workload). Most importantly for Microsoft, neither AWS nor Google has a primary focus on enabling hybrid cloud services.
In contrast, all traditional large infrastructure vendors (Fujitsu, HP, IBM, VMware, etc.), system integrators (Dimension Data, NTT, etc.), and telcos (Telstra) focus squarely on enterprise-grade services and hybrid cloud enablement. Rackspace, IBM, and HP also have Australia-based data centers. But all these providers lack hyper-scale.
This time last year, we published our predictions of what would be the major events and changes in enterprise cloud adoption in 2014. In this post, we look back on these prognostications to see which came true, which are still pending and which missed the mark. Look for our 2015 Cloud Predictions in the next few weeks. Thanks to Dave Bartoletti, Ed Ferrara and the rest of the Cloud Playbook team for their contributions.