Red Hat Summit – Can you say OpenStack and Containers?

Richard Fichera

In a world where OS and low-level platform software is considered unfashionable, it was refreshing to see the Linux glitterati and cognoscenti descend on Boston for the last three days, 5,000 strong and genuinely passionate about Linux. I spent a day there mingling with the crowds in the exhibit halls, attending some sessions and meeting with Red Hat management. Overall, the breadth of Red Hat’s offerings is overwhelming and way too much to comprehend in a single day or even a handful of days, but I focused my attention on two big issues for the emerging software-defined data center – containers and the inexorable march of OpenStack.

Containers are all the rage, and Red Hat is firmly behind them, with its currently shipping RHEL Atomic release optimized to support them. The news at the Summit was the release of RHEL Atomic Enterprise, which extends the ability to execute and manage containers over a cluster as opposed to a single system. In conjunction with a tool stack such as Docker and Kubernetes, this paves the way for very powerful distributed deployments that take advantage of the failure isolation and performance potential of clusters in the enterprise. While all the IP in RHEL Atomic, Docker and Kubernetes is available to the community and competitors, it appears that Red Hat has stolen at least a temporary early lead in bolstering the usability of this increasingly central virtualization abstraction for the next-generation data center.
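To make the cluster abstraction concrete, here is a minimal sketch, using the official Kubernetes Python client, of what “executing and managing containers over a cluster” looks like in practice; the image, names and replica count are illustrative assumptions, not details from Red Hat’s announcement:

```python
# Minimal sketch: declare a three-replica containerized service and hand it
# to a Kubernetes cluster, which schedules the containers across its nodes.
# Assumes a reachable cluster and a local kubeconfig; the image and labels
# are hypothetical, not taken from the Red Hat announcement.
from kubernetes import client, config

config.load_kube_config()  # read cluster credentials from ~/.kube/config

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web-demo"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # keep three copies running, restarting any that fail
        selector=client.V1LabelSelector(match_labels={"app": "web-demo"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web-demo"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:latest")]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

The point is the division of labor: the operator declares the desired state, and the cluster scheduler decides which nodes actually run the containers, restarting them elsewhere on failure – the failure isolation the announcement is aimed at.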

Read more

IBM Amps up the Mainframe and Aggressively Targets Mobile Workloads with new z13 Announcement

Richard Fichera

On one level, IBM’s new z13, announced last Wednesday in New York, is exactly what the mainframe world has been expecting for the last two and a half years – more capacity (a big boost this time around: triple the main memory, more and faster cores, more I/O ports, etc.), a modest boost in price-performance, and a very sexy cabinet design (I know it’s not really a major evaluation factor, but I think IBM’s industrial design for its system enclosures for Flex System, Power and the z System is absolutely gorgeous and should be in the MoMA). IBM indeed delivered against these expectations, plus more. In this case, a lot more.

In addition to the required upgrades to fuel the normal mainframe upgrade cycle and its reasonably predictable revenue, IBM has made a bold but rational repositioning of the mainframe as a core platform for the workloads generated by mobile transactions, the most rapidly growing workload across all sectors of the global economy. What makes this positioning rational rather than a pipe dream is an underlying pattern common to many of these transactions – at some point they access data generated by and stored on a mainframe. By enhancing the economics of the increasingly Linux-centric processing chain that occurs before the call for the mainframe data, IBM hopes to foster the migration of these workloads to the mainframe, where access to the resident data will be more efficient, benefiting from inherently lower latency as well as from embedded high-value functions such as accelerators for inline analytics. In essence, IBM hopes to shift the center of gravity for mobile processing toward the mainframe and away from the distributed x86 Linux systems that it no longer manufactures.

Read more

Mainframe Futures – Reading the Tea Leaves for Future Investments

Richard Fichera

I’ve been getting a steady trickle of inquiries this year from our enterprise clients about the future of the mainframe. Most are more or less of the form: “I have a lot of stuff running on mainframes. Is this a viable platform for the next decade, or is IBM going to abandon it?” I think the answer is that the platform is secure, and in the majority of cases the large business-critical workloads currently on the mainframe probably should remain there. In the interests of transparency, I’ve laid out my reasoning below so that you can see whether it applies to your own situation.

How Big is the Mainframe LOB?

It's hard to get exact figures for the mainframe's contribution to the total revenues of IBM's Systems & Technology Group (STG), but the data IBM has shared shows that its mainframe revenues seem to have recovered from the declines of previous quarters and at worst flattened. Because the business is inherently somewhat cyclical, I would expect the next generation of mainframes, rumored to be arriving next year, to give IBM a boost similar to the last major cycle, allowing it to show positive revenue next year.

Read more

Bare Metal Clouds – Performance and Isolation Drive Consideration

Richard Fichera

I’ve been talking to a number of users and providers of bare-metal cloud services, and the common threads among the high-profile use cases are both interesting individually and starting to connect some dots. These providers offer the ability to provision and use dedicated physical servers with semantics very similar to the common VM IaaS cloud – servers that can be instantiated at will, provisioned with a variety of OS images, connected to storage, and used to run applications. The differentiation for customers is in the behavior of the resulting images:

  • Deterministic performance – Your workload is running on a dedicated resource, so there is no question of any “noisy neighbor” problem, or even of sharing resources with otherwise well-behaved neighbors.
  • Extreme low latency – Like it or not, VMs, even lightweight ones, impose some level of additional latency compared to bare-metal OS images. Where this latency is a factor, bare-metal clouds offer a differentiated alternative.
  • Raw performance – Under the right conditions, a single bare-metal server can process more work than a collection of VMs, even when their nominal aggregate performance is similar. Benchmarking is always tricky, but several of the bare-metal cloud vendors can show some impressive comparative benchmarks to prospective customers.

Read more

New ARM-based Moonshot Servers from HP Exemplify Workload-Specific Computing

Richard Fichera

One of the developing trends in computing, relevant to enterprises and service providers alike, is the notion of workload-specific or application-centric computing architectures. These architectures, optimized for specific workloads, promise improved efficiency for running their targeted workloads and, by extension, the services they support. Earlier this year we covered the basics of this concept in “Optimize Scalable Workload-Specific Infrastructure for Customer Experiences”, and this week HP announced a pair of server cartridges for its Moonshot system that exemplify the concept, as well as being representative of the next wave of ARM products that will emerge through the remainder of 2014 and into 2015 to tilt once more at the x86 windmill that currently dominates the computing landscape.

Specifically, HP has announced the ProLiant m400 Server Cartridge (m400) and the ProLiant m800 Server Cartridge (m800), both ARM-based servers packaged as cartridges for the HP Moonshot system, which can hold up to 45 of them in its approximately 4U enclosure. These servers are interesting from two perspectives: both are ARM-based products, one of them being the first 64-bit ARM CPU offering from a tier-1 vendor, and both are being introduced with a specific workload target in mind for which they have been specifically optimized.

Read more

Taking Stock Of Linux – Maturation Continues

Richard Fichera

[Apologies to all who have just read this post with a sense of déjà vu. I saw a typo, corrected it, and republished the blog, which reset the publication date. This post was originally published several months ago.]

Having been away from the Linux scene for a while, I recently took a look at a newer version of Linux, SUSE Linux Enterprise 11 SP3, which is representative of the latest feature set from the Linux 3.0 (et seq.) kernel available to the entire Linux community, including SUSE, Red Hat, Canonical and others. It is apparent, both from the details of SUSE’s release and from perusing the documentation of other distribution providers, that Linux has continued to mature nicely, both as a foundation for large scale-out clouds and as a strong contender for the kinds of enterprise workloads that were previously only comfortable on RISC/UNIX systems or large Microsoft Server systems. In effect, Linux has matured to the point where its feature set and scalability begin to look like those of a top-tier UNIX from only a couple of years ago.

Among the enterprise technology that caught my eye:

  • Scalability – The Linux kernel now scales to 4096 x86 CPUs and up to 16 TB of memory, well into high-end UNIX server territory, and will support the largest x86 servers currently shipping.
  • I/O – The Linux kernel now includes btrfs (a geeky contraction of “Better File System”), an open source file system that promises much of the scalability and feature set of Oracle’s popular ZFS file system, including checksums, copy-on-write (CoW), snapshots, and advanced logical volume management with thin provisioning (see the short snapshot sketch below). The latest releases also include advanced features like geo-clustering and remote data replication to support advanced HA topologies.
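As a small illustration of why the CoW design matters, here is a minimal sketch of creating a btrfs snapshot; the paths are hypothetical, and it assumes a btrfs filesystem mounted at /mnt/data and root privileges:

```python
# Sketch: create a near-instant, copy-on-write snapshot of a btrfs subvolume.
# Paths are hypothetical. Because btrfs shares extents between the source and
# the snapshot until either side changes, no data blocks are copied here.
import subprocess

src = "/mnt/data/projects"            # existing btrfs subvolume
dst = "/mnt/data/projects-snapshot"   # snapshot to create

subprocess.run(["btrfs", "subvolume", "snapshot", src, dst], check=True)
```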

Read more

VMworld – Reflections on a Transformational Event

Richard Fichera

A group of us just published an analysis of VMworld (Breaking Down VMworld), and I thought I’d take this opportunity to add some additional color. The report is an excellent synthesis of the work of a talented team of collaborators, with my two cents thrown in as well, but I wanted to emphasize a few additional impressions, primarily around storage, converged infrastructure, and the overall tone of the show.

First, storage. If they ever need a new name for the show, they might consider “StorageWorld” – it seemed to me that just about every other booth on the show floor was about storage. Cloud storage, flash storage, hybrid storage, cheap storage, smart storage, object storage … you get the picture. Reading about the hyper-growth of storage and the criticality of storage management to the overall operation of a virtualized environment does not drive the concept home in quite the same way as seeing thousands of show attendees thronging the booths of storage vendors, large and small, for days on end. Another leading indicator, IMHO, was the “edge of the show” area, the cheaper booths on the edge of the floor where smaller startups congregate, which was also well populated with new and small storage vendors – there is certainly no shortage of ambition and vision in the storage technology pipeline for the next few years.

Read more

IBM Announces Next Generation POWER Systems – Big Win for AIX Users, New Option for Linux

Richard Fichera

On April 23, IBM rolled out the long-awaited POWER8 CPU, the successor to POWER7+, and given the extensive pre-announcement speculation, the hardware itself was no big surprise (the details are fascinating, but not suitable for this venue). It offers an estimated 30% to 50% improvement in application performance over the latest POWER7+, with the potential for order-of-magnitude improvements on selected big data and analytics workloads. While the technology is interesting, we are pretty numb to the “bigger, better, faster” messaging that inevitably accompanies new hardware announcements, and the real impact of this announcement lies in its utility for current AIX users and in IBM’s increased focus on Linux and its support of the OpenPOWER initiative.

Technology

OK, so we’re numb, but it’s still interesting. POWER8 is an entirely new processor generation implemented in 22 nm CMOS (the same geometry as Intel’s high-end CPUs). The processor features up to 12 cores, each with up to 8 threads, and focuses not only on throughput but on high performance per thread and per core for low-thread-count applications. Added to the mix are up to 1 TB of memory per socket, massive PCIe Gen 3 I/O connectivity, and the Coherent Accelerator Processor Interface (CAPI), IBM’s technology for memory-controller-based access to accelerators and flash memory in POWER systems. CAPI figures prominently in IBM’s positioning of POWER as the ultimate analytics engine, with the announcement profiling the performance of a configuration that uses 40 TB of CAPI-attached flash for huge in-memory analytics at a fraction of the cost of a non-CAPI configuration.

A Slam-dunk for AIX users and a new play for Linux

Read more

IBM is First Mover with Disruptive Flash Memory Technology on New x6 Servers

Richard Fichera

This week, IBM announced its new line of x86 servers, and included among the usual incremental product improvements is a performance game-changer called eXFlash. eXFlash is the first commercially available implementation of the MCS architecture announced last year by Diablo Technologies. The MCS architecture, and IBM’s eXFlash offering in particular, allows flash memory to be embedded in the system as close to the CPU as main memory, with latencies substantially lower than any other available flash option, offering better performance at a lower solution cost than other embedded flash solutions. Key aspects of the announcement include:

■  Flash DIMMs offer scalable high performance. Write latency (a critical metric) for IBM eXFlash will be in the 5 to 10 microsecond range, whereas best-of-breed competing mezzanine-card and PCIe flash can only offer 15 to 20 microseconds (and external flash storage is slower still). Additionally, since the DIMMs are directly attached to the memory controller, flash I/O does not compete with other I/O on the system I/O hub and PCIe subsystem, improving overall performance for heavily loaded systems. Additional benefits include linear performance scalability as the number of DIMMs increases and optional built-in hardware mirroring of DIMM pairs.

■  eXFlash DIMMs are compatible with current software. Part of the magic of MCS flash is that it appears to the OS as a standard block-mode device, so all existing block-mode software will work, including applications, caching, tiering, and general storage management software. For IBM users, compatibility with IBM’s storage management and FlashCache Storage Accelerator solutions is guaranteed. Other vendors will face little to no effort in qualifying their solutions.
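To underline what “appears to the OS as a standard block-mode device” buys you, here is a minimal sketch showing that ordinary OS-level block I/O, with no vendor-specific API, is all it takes to read from such a device; the device path is a hypothetical stand-in:

```python
# Sketch: MCS flash presents as a standard block device, so plain OS read
# calls work unchanged. The device path is hypothetical; requires root.
# (A rigorous latency measurement would use O_DIRECT with aligned buffers
# to bypass the page cache; this only demonstrates the unmodified interface.)
import os
import time

DEV = "/dev/sdb"   # hypothetical block device backed by flash DIMMs
BLOCK = 4096       # one 4 KB block

fd = os.open(DEV, os.O_RDONLY)
t0 = time.perf_counter()
data = os.pread(fd, BLOCK, 0)  # read the first 4 KB block
elapsed_us = (time.perf_counter() - t0) * 1e6
os.close(fd)

print(f"read {len(data)} bytes in {elapsed_us:.1f} microseconds")
```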

Read more

Systems of Engagement vs Systems of Reference – Core Concept for Infrastructure Architecture

Richard Fichera

My Forrester colleagues Ted Schadler and John McCarthy have written about the differences between Systems of Reference (SoR) and Systems of Engagement (SoE) in the context of customer-facing systems and mobility, but after further conversations with some very smart people at IBM, I think there are also important reasons for infrastructure architects to understand this dichotomy. Scalable and flexible systems of engagement, built with the latest in dynamic web technology, and back-end systems of record (highly stateful, usually transactional systems designed to keep track of the “true” state of corporate assets) are very different animals from an infrastructure standpoint in two fundamental areas:

Suitability to cloud (private or public) deployment – SoE environments, by their nature, are generally constructed using horizontally scalable technologies, based on some level of standards including web standards, a Linux or Windows OS, and scalable middleware that hides the messy details of horizontally scaling a complex application. In addition, the workloads are highly parallel, with each individual interaction being of low value. These characteristics lead to very different demands for consistency and resiliency.

Read more