Mainframe Futures – Reading the Tea Leaves for Future Investments

I’ve been getting a steady trickle of inquiries this year from our enterprise clients about the future of the mainframe. Most of them run more or less along the lines of “I have a lot of stuff running on mainframes. Is this a viable platform for the next decade, or is IBM going to abandon it?” I think the answer is that the platform is secure, and that in the majority of cases the large business-critical workloads currently on the mainframe probably should remain there. In the interest of transparency I’ve laid out my reasoning below so that you can see whether it applies to your own situation.

How Big is the Mainframe LOB?

It's hard to get exact figures for the mainframe's contribution to the total revenues of IBM's STG (Systems & Technology Group), but the data IBM has shared shows that its mainframe revenues seem to have recovered from the declines of previous quarters and have at worst flattened. Because the business is inherently somewhat cyclical, I would expect the next cycle of mainframes, rumored to be arriving next year, to give IBM a boost similar to the last major cycle, allowing it to show revenue growth next year.

Read more

Bare Metal Clouds – Performance and Isolation Drive Consideration

I’ve been talking to a number of users and providers of bare-metal cloud services, and I am finding the common threads among the high-profile use cases interesting individually while also starting to connect some dots across them. These providers offer the ability to provision and use dedicated physical servers with semantics very similar to the common VM IaaS cloud: servers that can be instantiated at will, provisioned with a variety of OS images, connected to storage, and used to run applications (a minimal provisioning sketch follows the list below). The differentiation for customers is in the behavior of the resulting images:

  • Deterministic performance – Your workload is running on a dedicated resource, so there is no question of any “noisy neighbor” problem, or even of sharing resources with otherwise well-behaved neighbors.
  • Extreme low latency – Like it or not, VMs, even lightweight ones, impose some level of additional latency compared to bare-metal OS images. Where this latency is a factor, bare-metal clouds offer a differentiated alternative.
  • Raw performance – Under the right conditions, a single bare-metal server can process more work than a collection of VMs, even when their nominal aggregate performance is similar. Benchmarking is always tricky, but several of the bare metal cloud vendors can show some impressive comparative benchmarks to prospective customers.
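
To make those provisioning semantics concrete, below is a minimal sketch of what a bare-metal provisioning request might look like against a hypothetical REST API. The endpoint, field names, and token are illustrative assumptions, not any particular provider's interface – the point is simply that the workflow looks like ordinary IaaS, even though a dedicated physical machine comes back.

```python
import requests

# Hypothetical bare-metal provisioning API -- the endpoint, fields, and token
# are illustrative only; real providers (and their parameter names) will differ.
API = "https://api.example-baremetal-cloud.com/v1"
TOKEN = "YOUR_API_TOKEN"

def provision_server(hostname, os_image, flavor, storage_gb):
    """Request a dedicated physical server, much as you would request a VM."""
    payload = {
        "hostname": hostname,
        "image": os_image,          # e.g. a stock Linux or Windows image
        "flavor": flavor,           # a physical server configuration, not a VM size
        "block_storage_gb": storage_gb,
    }
    resp = requests.post(
        f"{API}/servers",
        json=payload,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # typically a server ID to poll until the machine is active

if __name__ == "__main__":
    server = provision_server("db-node-01", "ubuntu-14.04", "2x10core-128gb", 2000)
    print(server)
```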
Read more

Shifting Sands – Changing Alliances Underscore the Dynamism of the Infrastructure Systems Market

There is always a tendency to regard the major players in large markets as a static background against which the froth of smaller companies and the rapid dance of customer innovation plays out. But if we turn our lens toward the major server vendors (which now sell storage, networking, and software as well), we see that the relatively flat industry revenues hide almost continuous churn. Turn back the clock slightly more than five years, and the market was dominated by three vendors: HP, Dell, and IBM. Since then, IBM has divested itself of the highest-velocity portion of its server business, Dell is no longer a public company, Lenovo has become a major player in servers, Cisco has come out of nowhere to mount a serious challenge in the x86 server segment, and HP has announced that it intends to split itself into two companies.

And it hasn’t stopped. Two recent events, the fracturing of the VCE consortium and the formerly unthinkable hook-up of IBM and Cisco, illustrate the urgency with which existing players are seeking differential advantage, and reinforce our contention that converged and integrated infrastructure remains one of the most active and profitable segments of the industry.

EMC’s recent acquisition of Cisco’s interest in VCE effectively acknowledged what most customers have been telling us for a long time: that VCE had become essentially an EMC-driven vehicle for selling storage, supported by VMware (owned by EMC) for virtualization and Cisco as the systems platform. EMC’s purchase of Cisco’s interest also tacitly acknowledges two underlying tensions in the converged infrastructure space:

Read more

Dell Introduces FX system - the Shape of Infrastructure to Come?

Dell today announced its new FX system architecture, and I am decidedly impressed.

Dell FX is a 2U flexible infrastructure building block that allows infrastructure architects to compose an application-appropriate server and storage infrastructure out of the following set of resources:

  • Multiple choices of server nodes, ranging from multi-core Atom to the new Xeon E5 v3. With configurations ranging from 2 to 16 server nodes per enclosure, there is pretty much a configuration point for most mainstream applications.
  • A novel, flexible method of mapping disks from up to three optional disk modules, each with 16 drives. The mapping, controlled by the onboard management, makes the assigned disks appear to each server as locally attached DASD, so no changes are needed in software that thinks it is accessing local storage – a very slick evolution in storage provisioning (a small illustrative sketch follows the list).
  • A set of I/O aggregators for consolidating Ethernet and FC I/O from the enclosure.
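
To visualize the disk-mapping idea (and only as a conceptual sketch – this is not Dell's management interface), here is a small model of an enclosure-level mapping table that assigns drives from the shared disk modules to individual server nodes, which then treat them as local storage. All names and structures are invented for illustration.

```python
# Illustrative model of an enclosure-level drive mapping -- NOT Dell's actual
# management API, just a sketch of the concept: drives in shared disk modules
# are assigned to server nodes, which then see them as locally attached storage.
from collections import defaultdict

# Three optional disk modules, 16 drives each, addressed as (module, slot).
DRIVES = [(module, slot) for module in range(3) for slot in range(16)]

def assign_drives(nodes, drives_per_node):
    """Greedy assignment of physical drives to server nodes."""
    mapping = defaultdict(list)
    pool = list(DRIVES)
    for node in nodes:
        for _ in range(drives_per_node):
            if not pool:
                raise RuntimeError("not enough drives in the enclosure")
            mapping[node].append(pool.pop(0))
    return mapping

if __name__ == "__main__":
    layout = assign_drives(["node-1", "node-2", "node-3", "node-4"], drives_per_node=4)
    for node, drives in layout.items():
        print(node, "sees local drives:", drives)
```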

All in all, an attractive and flexible packaging scheme for infrastructure that needs to be tailored to specific combinations of server, storage, and network configurations – and probably an ideal platform for the Nutanix software suite that Dell is reselling as well. My guess is that other system design groups are thinking along these lines, but for now this is a unique package, and it merits attention from infrastructure architects.

Forrester clients: I've published a Quick Take report on this, “Quick Take: Dell's FX Architecture Holds Promise To Power Modern Services.”

Microsoft And Dell Change The Private/Hybrid Cloud Game With On-Premise Azure

What was announced?

On October 20 at TechEd, Microsoft quietly slipped in what looks like a potentially game-changing announcement in the private/hybrid cloud world when it rolled out the Microsoft Cloud Platform System (CPS), an integrated hardware/software system that combines an Azure-consistent on-premises cloud with an optimized hardware stack from Dell.

Why does it matter?

Read more

IBM Sheds Yet Another Hardware Business - Pays To Get Rid Of Semiconductor Fabrication

While the timing of the event comes as a surprise, the fact that IBM has decided to unload its technically excellent but unprofitable semiconductor manufacturing operation does not, nor does its choice of GlobalFoundries, with which it has had a longstanding relationship.
 
Read more

New ARM-based Moonshot Servers from HP Exemplify Workload-Specific Computing

One of the developing trends in computing, relevant to enterprises and service providers alike, is the notion of workload-specific or application-centric computing architectures. These architectures, optimized for specific workloads, promise improved efficiencies for running their targeted workloads and, by extension, the services that they support. Earlier this year we covered the basics of this concept in “Optimize Scalable Workload-Specific Infrastructure for Customer Experiences.” This week HP announced a pair of server cartridges for its Moonshot system that exemplify the concept, and that are also representative of the next wave of ARM products that will emerge during the remainder of 2014 and into 2015 to tilt once more at the x86 windmill that currently dominates the computing landscape.

Specifically, HP has announced the ProLiant m400 Server Cartridge (m400) and the ProLiant m800 Server Cartridge (m800), both ARM-based servers packaged as cartridges for the HP Moonshot system, which can hold up to 45 of these cartridges in its approximately 4U enclosure. These servers are interesting from two perspectives: both are ARM-based products, one of them being the first 64-bit ARM CPU offering from a tier-1 vendor, and both are being introduced with a specific workload target in mind for which they have been optimized.

Read more

Windows Server 2003 – A Very Unglamorous But Really Important Problem, Waiting To Bite

Very much in the shadows of all the press coverage and hysteria attendant on emerging cloud architectures and customer-facing systems of engagement are the nitty-gritty operational details that lurk like monsters in the swamp of legacy infrastructure, and some of them have teeth. And sometimes these teeth can really take a bite out of the posterior of an unprepared organization.

One of those toothy animals that I&O groups are increasingly encountering in their landscapes is the problem of what to do with Windows Server 2003 (WS2003). It turns out there are still approximately 11 million WS2003 systems running today, with another 10+ million instances running as VM guests – overall, well over 20 million OS images and a ton of hardware that will need replacing or upgrading. And increasing numbers of organizations have finally begun to take seriously the fact that Microsoft really is going to end support and updates as of July 2015.

Based on the conversations I have been having with our clients, the typical I&O group now scrambling to come up with a plan has not been willfully negligent, nor is it stupid. Usually WS2003 servers are legacy servers quietly running some mature piece of code, often in satellite locations or in the shops of acquired companies. The workloads are a mix of ISV and bespoke code, but it is often an LOB-specific application, the run-of-the-mill collaboration and infrastructure servers having long since migrated to newer platforms. A surprising number of clients have told me that they have identified the servers, but not always the applications or the business owners – often a complex task for an old resource in a large company.
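
As a purely illustrative sketch of the cross-referencing exercise this implies – assuming CSV exports from a discovery scan and a CMDB, with file and column names invented for the example – something like the following can flag WS2003 hosts that have no application or business owner on record:

```python
import csv

# A sketch of the discovery gap described above: we know the servers, but not
# always the application or the business owner. File names and column names
# are assumptions for illustration, not any specific tool's format.

def load_csv(path, key_field):
    with open(path, newline="") as f:
        return {row[key_field]: row for row in csv.DictReader(f)}

def find_orphans(inventory_path, cmdb_path):
    """Return WS2003 hosts with no application or owner recorded in the CMDB."""
    servers = load_csv(inventory_path, "hostname")   # e.g. from a network/OS scan
    cmdb = load_csv(cmdb_path, "hostname")           # e.g. a CMDB export
    orphans = []
    for hostname, row in servers.items():
        if "2003" not in row.get("os_version", ""):
            continue
        record = cmdb.get(hostname, {})
        if not record.get("application") or not record.get("business_owner"):
            orphans.append(hostname)
    return orphans

if __name__ == "__main__":
    for host in find_orphans("scan_results.csv", "cmdb_export.csv"):
        print("WS2003 host with no application/owner on record:", host)
```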

Read more

Taking Stock Of Linux – Maturation Continues

[Apologies to all who have just read this post with a sense of déjà vu. I saw a typo, corrected it, and republished the blog, which reset the publication date. This post was originally published several months ago.]

Having been away from the Linux scene for a while, I recently took a look at a newer version of Linux, SUSE Linux Enterprise Server 11 SP3, which is representative of the latest feature set from the Linux 3.0 (et seq.) kernel available to the entire Linux community, including SUSE, Red Hat, Canonical, and others. It is apparent, both from the details of SUSE 11 SP3 and from perusing the documentation of other distribution providers, that Linux has continued to mature nicely, both as a foundation for large scale-out clouds and as a strong contender for the kind of enterprise workloads that were previously only comfortable on either RISC/UNIX systems or large Microsoft Server systems. In effect, Linux has matured to the point where its feature set and scalability begin to look like those of a top-tier UNIX from only a couple of years ago.

Among the enterprise technologies that caught my eye:

  • Scalability – The Linux kernel now scales to 4096 x86 CPUs and up to 16 TB of memory, well into high-end UNIX server territory, and will support the largest x86 servers currently shipping.
  • I/O – The Linux kernel now includes btrfs (a geeky contraction of “Better File System”), an open source file system that promises much of the scalability and feature set of Oracle’s popular ZFS file system, including checksums, copy-on-write, snapshots, and advanced logical volume management with thin provisioning. The latest releases also include advanced features like geo-clustering and remote data replication to support advanced HA topologies (a brief snapshot example follows the list).
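
For readers who have not yet touched btrfs, here is a minimal sketch of its copy-on-write snapshot workflow, driven from Python to keep a single language across the examples in this digest. The mount point and subvolume names are placeholders, and the commands assume root privileges on a filesystem already formatted as btrfs.

```python
import subprocess

# Minimal sketch of btrfs copy-on-write snapshots, driven via subprocess.
# The mount point and subvolume names are placeholders; the commands assume
# root privileges on a filesystem already formatted as btrfs.
MOUNT = "/mnt/btrfs"

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def snapshot_workflow():
    # Create a subvolume to hold application data.
    run(["btrfs", "subvolume", "create", f"{MOUNT}/appdata"])
    # Take a read-only, copy-on-write snapshot -- nearly instant, and it shares
    # unchanged blocks with the original subvolume.
    run(["btrfs", "subvolume", "snapshot", "-r",
         f"{MOUNT}/appdata", f"{MOUNT}/appdata-snap-2014-10-01"])
    # List subvolumes to confirm the snapshot exists.
    run(["btrfs", "subvolume", "list", MOUNT])

if __name__ == "__main__":
    snapshot_workflow()
```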
Read more

From Intel Developer Forum – New Xeon E5 v3 Promises A Respite For Capacity Planners

I'm at IDF, a major geekfest for people interested in the guts of today’s computing infrastructure, and will be immersing myself in the flow for a couple of days. Before going completely off the deep end, I wanted to call out the announcement of the new Xeon E5 v3. While I’ve discussed it in more depth in an accompanying Quick Take just published on our main website, I wanted to add some comments on its implications for data center operations, particularly in the areas of capacity planning and long-term capital budgeting.

For many years, each successive iteration of Intel’s and its partners’ roadmaps has quietly delivered a major benefit that seldom gets top billing – additional capacity within the same power and physical footprint, and the resulting ability for users, from small enterprises to mega-scale service providers, to defer additional data center capital spending.
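
A back-of-the-envelope illustration of why this matters to capacity planners follows; the growth rate and the generation-over-generation uplift below are placeholder assumptions, not benchmark results.

```python
import math

# Back-of-the-envelope capacity planning: if each server generation delivers
# more work per server in the same power and space envelope, workload growth
# can be absorbed by refresh instead of new racks. The uplift and growth
# figures are placeholder assumptions, not benchmark results.

def servers_needed(total_workload_units, units_per_server):
    # Round up: you can't buy a fraction of a server.
    return math.ceil(total_workload_units / units_per_server)

current_servers = 200          # installed base
units_per_old_server = 100     # work delivered by one current-generation server
annual_growth = 0.20           # assumed 20% yearly workload growth
gen_over_gen_uplift = 1.5      # assumed 50% more work per server, same footprint

workload = current_servers * units_per_old_server
next_year_workload = workload * (1 + annual_growth)

same_gen = servers_needed(next_year_workload, units_per_old_server)
new_gen = servers_needed(next_year_workload, units_per_old_server * gen_over_gen_uplift)

print(f"Next year's workload: {next_year_workload:.0f} units")
print(f"Servers needed on the old generation: {same_gen}")
print(f"Servers needed after a refresh to the new generation: {new_gen}")
# With these assumptions the refreshed estate fits in fewer servers than are
# installed today, so growth is absorbed without new racks, power, or floor space.
```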

Read more