I’ve been getting a steady trickle of inquiries this year from our enterprise clients about the future of the mainframe. Most of them boil down to: “I have a lot of stuff running on mainframes. Is this a viable platform for the next decade, or is IBM going to abandon it?” I think the answer is that the platform is secure, and in the majority of cases the large business-critical workloads currently on the mainframe probably should remain there. In the interest of transparency, I’ve laid out my reasoning below so that you can see whether it applies to your own situation.
How Big Is the Mainframe LOB?
It’s hard to get exact figures for the mainframe’s contribution to IBM’s STG (Systems & Technology Group) total revenues, but the data IBM has shared shows that mainframe revenues have recovered from the declines of previous quarters and, at worst, flattened. Because the business is inherently somewhat cyclical, I would expect the next generation of mainframes, rumored to be arriving next year, to give IBM a boost similar to the last major cycle, allowing it to show revenue growth next year.
I’ve been talking to a number of users and providers of bare-metal cloud services, and the common threads among the high-profile use cases are both interesting individually and starting to connect some dots. These providers offer the ability to provision and use dedicated physical servers with semantics very similar to common VM IaaS clouds: servers that can be instantiated at will, provisioned with a variety of OS images, connected to storage, and used to run applications (a provisioning sketch follows the list below). The differentiation for customers lies in the behavior of the resulting images:
Deterministic performance – Your workload is running on a dedicated resource, so there is no question of any “noisy neighbor” problem, or even of sharing resources with otherwise well-behaved neighbors.
Extreme low latency – Like it or not, VMs, even lightweight ones, impose some level of additional latency compared to bare-metal OS images. Where this latency is a factor, bare-metal clouds offer a differentiated alternative.
Raw performance – Under the right conditions, a single bare-metal server can process more work than a collection of VMs, even when their nominal aggregate performance is similar. Benchmarking is always tricky, but several of the bare-metal cloud vendors can show prospective customers some impressive comparative benchmarks.
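To make those VM-like semantics concrete, here is a minimal sketch of what provisioning a dedicated physical server through such a service might look like. The endpoint, request fields, and status values are hypothetical placeholders, not any specific vendor’s API:

```python
# Hypothetical bare-metal provisioning call -- endpoint, payload fields,
# and status values are illustrative assumptions only.
import time
import requests

API = "https://api.example-baremetal-cloud.com/v1"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}

def provision_server(os_image: str, flavor: str) -> dict:
    """Request a dedicated physical server, much as you would request a VM."""
    resp = requests.post(
        f"{API}/servers",
        headers=HEADERS,
        json={"image": os_image, "flavor": flavor, "network": "default"},
    )
    resp.raise_for_status()
    server = resp.json()

    # Poll until the physical node is imaged and booted; bare-metal
    # provisioning takes minutes rather than the seconds typical of VMs.
    while server["status"] != "active":
        time.sleep(30)
        server = requests.get(f"{API}/servers/{server['id']}", headers=HEADERS).json()
    return server

server = provision_server(os_image="ubuntu-22.04", flavor="dedicated-2socket-xeon")
print(f"Bare-metal server ready at {server['ip_address']}")
```

The workflow is deliberately identical to a VM request; the main operational difference a customer sees is the longer imaging time on dedicated hardware.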
There is always a tendency to regard the major players in large markets as a static background against which the froth of smaller companies and the rapid dance of customer innovation plays out. But if we turn our lens toward the major server vendors (which are now also storage, networking, and software vendors), we see that relatively flat industry revenues hide almost continuous churn. Turn back the clock slightly more than five years, and the market was dominated by three vendors: HP, Dell, and IBM. Since then, IBM has divested itself of the highest-velocity portion of its server business, Dell is no longer a public company, Lenovo is now a major player in servers, Cisco has come out of nowhere to mount a serious challenge in the x86 server segment, and HP has announced that it intends to split itself into two companies.
And it hasn’t stopped. Two recent events, the fracturing of the VCE consortium and the formerly unthinkable hook-up of IBM and Cisco, illustrate the urgency with which existing players are seeking differential advantage, and reinforce our contention that converged and integrated infrastructure remains one of the most active and profitable segments of the industry.
EMC’s recent acquisition of Cisco’s interest in VCE effectively acknowledged what most customers have been telling us for a long time – that VCE had become essentially an EMC-driven sales vehicle for selling storage, supported by VMware (owned by EMC) and Cisco as a systems platform. EMC’s purchase of Cisco’s interest also tacitly acknowledges the underlying tensions in the converged infrastructure space.
In this playbook, we do not predict the future of technology; rather, we try to understand how, in the age of the customer, I&O must transform to support businesses by accelerating the speed of service delivery, enabling capacity when and where it is needed, and improving customer and employee experience.
All industries mature toward commoditization and abstraction of the underlying technology because knowledge and expertise are cumulative. Our industry will follow the same trajectory, resulting in technology that is ubiquitous and easier to implement, manage, and change.
Dell today announced its new FX system architecture, and I am decidedly impressed.
Dell FX is a 2U flexible infrastructure building block that allows infrastructure architects to compose an application-appropriate server and storage infrastructure out of the following set of resources:
Multiple choices of server nodes, ranging from multi-core Atom to new Xeon E5 V3 servers. With configurations ranging from 2 to 16 server nodes per enclosure, there is pretty much a configuration point for most mainstream applications.
A novel, flexible method of mapping disks from up to three optional disk modules, each holding 16 drives. The mapping, controlled by the onboard management, makes each assigned disk appear to its server as locally attached DASD, so no changes are needed in software that thinks it is accessing local storage (see the sketch after this list). A very slick evolution in storage provisioning.
A set of I/O aggregators for consolidating Ethernet and FC I/O from the enclosure.
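As a thought experiment, here is a toy model of that disk-mapping idea: the management layer assigns drives from the enclosure’s disk modules to individual server nodes, each of which simply sees local disks. The class and method names are my own invention for illustration, not Dell’s actual management interface:

```python
# Toy model of FX-style disk mapping: drives from the enclosure's disk
# modules are assigned to server nodes, which see them as local storage.
from dataclasses import dataclass, field

@dataclass
class Drive:
    module: int   # which of up to 3 disk modules (0-2)
    slot: int     # slot within the module (0-15)

@dataclass
class ServerNode:
    name: str
    drives: list = field(default_factory=list)  # appears to the OS as local disks

class FXEnclosure:
    def __init__(self, num_modules: int = 3, drives_per_module: int = 16):
        self.free_drives = [
            Drive(m, s) for m in range(num_modules) for s in range(drives_per_module)
        ]
        self.nodes = {}

    def add_node(self, name: str) -> ServerNode:
        self.nodes[name] = ServerNode(name)
        return self.nodes[name]

    def map_drives(self, node_name: str, count: int) -> None:
        """Assign free drives to a node; its OS sees them as local disks."""
        if count > len(self.free_drives):
            raise ValueError("not enough free drives in the enclosure")
        node = self.nodes[node_name]
        for _ in range(count):
            node.drives.append(self.free_drives.pop(0))

# Example: a 4-node enclosure where one node gets most of the spindles.
enc = FXEnclosure()
for n in ("node1", "node2", "node3", "node4"):
    enc.add_node(n)
enc.map_drives("node1", 24)   # e.g. a storage-heavy node
enc.map_drives("node2", 8)
print({name: len(node.drives) for name, node in enc.nodes.items()})
```

The point of the model is the asymmetry it allows: storage-heavy and compute-heavy nodes can share one enclosure, with the mapping changed in software rather than by re-cabling.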
All in all, an attractive and flexible packaging scheme for infrastructure that needs to be tailored to specific combinations of server, storage and network configurations. Probably an ideal platform to support the Nutanix software suite that Dell is reselling as well. My guess is that other system design groups are thinking along these lines, but this is now a pretty unique package, and merits attention from infrastructure architects.
Forrester’s Infrastructure and Operations research team has been on the leading edge of infrastructure technology and its operational best practices for years. We pushed the industry on both the supply side (vendors) and the demand side (enterprises) toward new models, and we pushed hard. I’m proud to say we’ve been instrumental in changing the world of infrastructure, and we’re about to change it again!
As the entire technology management profession evolves into the Age of the Customer, the whole notion of infrastructure is morphing in dramatic ways. The long-criticized silos are finally collapsing, cloud computing has quickly become mainstream, and you now face a dizzying variety of infrastructure options. Some are outside your traditional borders – new outsourcing, hosting, and colocation services, as well as too many cloud forms to count. Some remain inside and will for years to come. More of these options will come from the outside, though, and even the “legacy” technologies remaining inside will be created and managed differently.
Your future lies not in managing pockets of infrastructure, but in how you assemble the many options into the services your customers need. Our profession has been locally brilliant but globally stupid; we’re now helping you become globally brilliant. We call this service design, a much broader design philosophy rooted in systems thinking. The new approach packages technology into a finished “product” that is far more relevant and useful than any of the parts alone.
On Monday Microsoft officially announced the launch of two Azure Data Centers in Australia. This is big news for the many Australia-based organizations concerned about data sovereignty, as well as those who simply equate on-shore data residency with increased security and control.
Announced as part of TechEd 2014 in Sydney, Microsoft specifically called out Amazon Web Services (AWS) and Google as its key competition. In fact, Microsoft has gone to great lengths for more than a year to consistently position these two companies as the only other viable long-term cloud providers, based on three cloud provider capabilities Microsoft identifies as critical: hyper-scale, enterprise-grade, and hybrid.
Overall, it’s a good angle for Microsoft. All three players operate at hyper-scale as public cloud providers. All three also offer enterprise-grade services (although the definition of enterprise-grade varies by workload). Most importantly for Microsoft, neither AWS nor Google has a primary focus on enabling hybrid cloud services.
In contrast, all the traditional large infrastructure vendors (Fujitsu, HP, IBM, VMware, etc.), system integrators (Dimension Data, NTT, etc.), and telcos (Telstra) focus squarely on enterprise-grade services and hybrid cloud enablement. Rackspace, IBM, and HP also have Australia-based data centers. But all of these providers lack hyper-scale.
Early next month, Forrester will publish a report on the dynamics of China’s private cloud market. This research demonstrates that Chinese I&O pros have started to leverage the benefits of private cloud — including highly standardized and automated virtual resource pooling and metered pay-per-use chargeback (a chargeback sketch follows the list below) — to support the digital transformation of traditional businesses. By using private cloud, Chinese I&O pros not only support their business units’ digital transformation, but also provide the cost transparency that the CFO’s office demands. In practical business terms, Chinese organizations use private cloud to:
Improve business agility. Fierce market competition is giving Chinese consumers more choices. To compete, Chinese organizations must expand their product portfolios to win new customers and provide a better customer experience to serve and retain existing ones. Chinese I&O pros need to provide a cloud platform that also supports business units’ requirement to lower capital and operating expenditures.
Avoid disruption by Internet companies. Chinese web-based companies have started to use high-quality services to disrupt traditional businesses. Chinese I&O pros need to provide more flexible computing to help application development teams shorten development cycles and respond to customers more quickly, flexibly, and effectively.
Develop new business without adding redundancy. Chinese organizations want to scale up new businesses to offset declines in revenue. However, the existing IT infrastructure at these firms often cannot support new business models — and can even hold them back. Chinese I&O pros need to find a new way — such as private cloud — to support business development while reusing existing infrastructure.
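As a rough illustration of the metered pay-per-use chargeback mentioned above, here is a minimal sketch that turns metered per-business-unit usage into a bill. The metrics, rates, and usage figures are invented for illustration, not drawn from the report:

```python
# Minimal pay-per-use chargeback sketch: metered usage * unit rates
# yields the per-business-unit cost transparency the CFO's office wants.
UNIT_RATES = {            # assumed internal rates
    "vcpu_hours": 0.03,   # per vCPU-hour
    "ram_gb_hours": 0.004,
    "storage_gb_months": 0.05,
}

metered_usage = {  # what the private cloud's metering layer would report
    "marketing": {"vcpu_hours": 12_000, "ram_gb_hours": 48_000, "storage_gb_months": 2_000},
    "ecommerce": {"vcpu_hours": 55_000, "ram_gb_hours": 220_000, "storage_gb_months": 9_500},
}

def chargeback(usage: dict) -> dict:
    """Return a per-business-unit bill from metered usage and unit rates."""
    return {
        bu: round(sum(amount * UNIT_RATES[metric] for metric, amount in metrics.items()), 2)
        for bu, metrics in usage.items()
    }

print(chargeback(metered_usage))
# {'marketing': 652.0, 'ecommerce': 3005.0}
```

The mechanics are trivial; the hard part in practice is the metering layer that produces trustworthy usage numbers in the first place.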
I’ve recently been thinking a lot about application-specific workloads and architectures (see “Optimize Scalable Workload-Specific Infrastructure for Customer Experiences”), which got me thinking about the extremes of the x86 server spectrum – the very small and the very large. The range, and the variation in intended workloads, is pretty spectacular as we diverge from the mean, which for the enterprise means a 2-socket Xeon server, usually in a 1U or 2U form factor.
At the bottom, we find really tiny embedded servers, some with very non-traditional packaging. My favorite is probably the technology from Arnouse Digital Technology, a small boutique that produces computers primarily for ruggedized military and industrial environments.
Slightly bigger than a credit card, its BioDigital server is a rugged embedded server with up to 8 GB of RAM, a 128 GB SSD, and a very low power footprint. Based on an Atom-class CPU, this is clearly not the choice for most workloads, but it is an exemplar of what happens when the workload lives in a hostile environment and the computer needs to be part of a man-carried or vehicle-mounted portable tactical or field system. While its creators are testing the waters for acceptance as a compute cluster, with up to 4,000 of them mounted in a standard rack, these will likely remain a niche product for applications requiring the intersection of small size, extreme ruggedness, and complete x86 compatibility – a range that runs from military systems to portable desktop modules.
The last few days have been eventful in the cloud gateway space and should give I&O organizations more incentive to start evaluating gateways. Yesterday, EMC announced its acquisition of cloud gateway startup TwinStrata, which will allow EMC customers to move on-premises data from EMC arrays to public cloud storage providers. Today, Panzura launched a free cloud gateway, and its partner Google is adding 2 TB of free cloud storage for a year to entice companies to kick the tires on a gateway. Innovation and investment in this area do not appear to be slowing down: CTERA locked in an additional $25 million in VC funding last week to accelerate sales and marketing efforts for its cloud gateway and file sync & share products.
Though the cloud gateway market has grown slowly so far, this technology category is about to become mainstream. Cloud gateways are disruptive because they can facilitate data migration from on-premises infrastructure to a public cloud storage service, creating a true hybrid cloud storage environment. Basically, a cloud gateway is a virtual or physical storage appliance that looks like a NAS or block storage device to users and applications on-premises, but writes data back to a public cloud storage service using the native APIs of that cloud, as the sketch below illustrates.
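Here is a deliberately simplified sketch of that core write/read path, assuming Amazon S3 (via boto3) as the backing cloud. A real gateway layers on caching policy, deduplication, encryption, and a NAS or block front end; this shows only the essential idea:

```python
# Simplified cloud gateway: a local cache in front of a public cloud
# object store, written through using the cloud's native API (S3 here).
from pathlib import Path
import boto3

class CloudGateway:
    def __init__(self, cache_dir: str, bucket: str):
        self.cache = Path(cache_dir)
        self.cache.mkdir(parents=True, exist_ok=True)
        self.bucket = bucket
        self.s3 = boto3.client("s3")

    def write(self, name: str, data: bytes) -> None:
        """Write locally (fast path for on-premises readers), then to the cloud."""
        (self.cache / name).parent.mkdir(parents=True, exist_ok=True)
        (self.cache / name).write_bytes(data)              # local cache copy
        self.s3.put_object(Bucket=self.bucket, Key=name, Body=data)

    def read(self, name: str) -> bytes:
        """Serve from the local cache if present; otherwise fetch from the cloud."""
        local = self.cache / name
        if local.exists():
            return local.read_bytes()
        obj = self.s3.get_object(Bucket=self.bucket, Key=name)
        data = obj["Body"].read()
        local.parent.mkdir(parents=True, exist_ok=True)
        local.write_bytes(data)                            # populate the cache
        return data

gw = CloudGateway(cache_dir="./gateway-cache", bucket="my-gateway-bucket")
gw.write("reports/q3.csv", b"region,revenue\napac,42\n")
print(gw.read("reports/q3.csv"))
```

Because applications only ever see the file interface, the cloud behind the gateway is invisible to them – which is exactly why gateways make hybrid storage adoption so low-friction.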
A number of use cases have emerged for cloud gateways, including: