Windows Server 2003 – A Very Unglamorous but Really Important Problem, Waiting to Bite

Richard Fichera

Very much in the shadows of all the press coverage and hysteria attendant on emerging cloud architectures and customer-facing systems of engagement are the nitty-gritty operational details that lurk like monsters in the swamp of legacy infrastructure, and some of them have teeth. And sometimes these teeth can really take a bite out of the posterior of an unprepared organization.

One of those toothy animals that I&O groups are increasingly encountering in their landscapes is the problem of what to do with Windows Server 2003 (WS2003). It turns out there are still approximately 9 million WS2003 systems running today, with another 2+ million instances running as VM guests. That is around 11 million OS images overall, plus a ton of hardware that will need replacing or upgrading. And increasing numbers of organizations have finally begun to take seriously the fact that Microsoft really is going to end support and updates as of July 2015.

Based on the conversations I have been having with our clients, the typical I&O group that is now scrambling to come up with a plan has not been willfully negligent, nor are they stupid. Usually WS2003 servers are legacy servers, quietly running some mature piece of code, often in satellite locations or in the shops of acquired companies. The workloads are a mix of ISV and bespoke code, but it is often a LOB-specific application, with the run-of-the-mill collaboration and infrastructure servers having long since migrated to newer platforms. A surprising number of clients have told me that they have identified the servers, but not always the applications or the business owners – often a complex task for an old resource in a large company.
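For teams starting from a bare server list, even a crude cross-reference of the inventory against whatever application and ownership records exist can shrink the discovery problem quickly. The sketch below is a minimal Python illustration of that idea; the CSV file name and column names are my own assumptions for illustration, not any particular CMDB's schema.

```python
# Hypothetical sketch: flag Windows Server 2003 hosts that lack an identified
# application or business owner, given a CMDB/discovery export in CSV form.
# The file name and the "hostname", "os", "application", "owner" columns are
# illustrative assumptions, not a real tool's schema.
import csv

def ws2003_gaps(inventory_csv="server_inventory.csv"):
    orphans = []
    with open(inventory_csv, newline="") as f:
        for row in csv.DictReader(f):
            if "2003" not in row.get("os", ""):
                continue  # only interested in WS2003-era hosts
            if not row.get("application") or not row.get("owner"):
                orphans.append(row["hostname"])
    return orphans

if __name__ == "__main__":
    for host in ws2003_gaps():
        print(f"WS2003 host with no known application or owner: {host}")
```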

So what are the options and best practices for organizations facing the WS2003 “monster”? Conversations with clients and vendors providing migration assistance point to several good practices and options:

Read more

Taking Stock of Linux – Maturation Continues

Richard Fichera

[Apologies to all who have just read this post with a sense of déjà vu. I saw a typo, corrected it and then republished the blog, and it reset the publication date. This post was originally published several months ago.]

Having been away from the Linux scene for a while, I recently took a look at a newer version of Linux, SUSE Enterprise Linux Version 11.3, which is representative of the latest feature sets from the Linux 3.0 et seq. kernel available to the entire Linux community, including SUSE, Red Hat, Canonical and others. It is apparent, both from the details on SUSE 11.3 and from perusing the documentation of other distribution providers, that Linux has continued to mature nicely, both as a foundation for large scale-out clouds and as a strong contender for the kind of enterprise workloads that previously were only comfortable on either RISC/UNIX systems or large Microsoft Server systems. In effect, Linux has continued its maturation to the point where its feature set and scalability begin to look like those of a top-tier UNIX from only a couple of years ago.

Among the enterprise technologies that caught my eye:

  • Scalability – The Linux kernel now scales to 4096 x86 CPUs and up to 16 TB of memory, well into high-end UNIX server territory, and will support the largest x86 servers currently shipping.
  • I/O – The Linux kernel now includes btrfs (a geeky contraction of “Better File System”), an open source file system that promises much of the scalability and feature set of Oracle’s popular ZFS file system, including checksums, copy-on-write (CoW), snapshotting, and advanced logical volume management with thin provisioning, among others. The latest releases also include advanced features like geoclustering and remote data replication to support advanced HA topologies. (A minimal snapshotting sketch follows this list.)
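To make the snapshotting claim concrete, here is a minimal sketch that drives the standard btrfs-progs command line from Python to create a subvolume and take a read-only snapshot before a risky change. The paths are hypothetical, and the commands require root and a filesystem already formatted as btrfs; this is an illustration of the feature, not anything taken from the SUSE documentation.

```python
# Minimal sketch: create a btrfs subvolume and a read-only snapshot of it
# before making a risky change. Requires root, btrfs-progs, and a filesystem
# already mounted as btrfs; the paths below are hypothetical.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

DATA = "/mnt/btrfs/appdata"             # hypothetical subvolume path
SNAP = "/mnt/btrfs/appdata-pre-change"  # hypothetical snapshot path

run(["btrfs", "subvolume", "create", DATA])
# Copy-on-write means the snapshot is nearly instant and consumes extra space
# only as the original subvolume diverges from it.
run(["btrfs", "subvolume", "snapshot", "-r", DATA, SNAP])
```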
Read more

From Intel Developer Forum – New Xeon E5 v3 Promises A Respite For Capacity Planners

Richard Fichera

I'm at IDF, a major geekfest for the people interested in the guts of today’s computing infrastructure, and will be immersing myself in the flow for a couple of days. Before going completely off the deep end, I wanted to call out the announcement of the new Xeon E5. While I’ve discussed it in more depth in an accompanying Quick Take just published on our main website, I wanted to add some additional comments on its implications for data center operations, particularly in the areas of capacity planning and long-term capital budgeting.

For many years, each successive iteration of Intel’s and its partners’ roadmaps has quietly delivered a major benefit that seldom gets top billing – additional capacity within the same power and physical footprint, and the resulting ability for users, from small enterprises to mega-scale service providers, to defer additional data center capital spending.
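The capacity-planning consequence is easy to see with a toy calculation (my illustrative numbers, not Intel's): if a refresh delivers some multiple of today's throughput in the same power and space envelope, and demand grows at a steady annual rate, the same footprint absorbs growth for a calculable number of years before new capital is needed.

```python
# Toy capacity-planning arithmetic (illustrative assumptions, not Intel data):
# if a refresh delivers `gen_gain` times the throughput in the same footprint
# and demand grows `annual_growth` per year, how many years until the footprint
# is exhausted again?
import math

def years_of_headroom(gen_gain=1.5, annual_growth=0.25):
    # Capacity after the refresh is gen_gain * current demand, and demand
    # after t years is (1 + annual_growth) ** t, so solve for t.
    return math.log(gen_gain) / math.log(1 + annual_growth)

print(f"{years_of_headroom():.1f} years before the same footprint fills up again")
```

With a 1.5x generational gain and 25% annual demand growth, for example, the sketch yields a bit under two years of headroom per refresh cycle.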

Read more

VMworld – Reflections on a Transformational Event

Richard Fichera

A group of us just published an analysis of VMworld (Breaking Down VMworld), and I thought I’d take this opportunity to add some additional color to the analysis. The report is an excellent synthesis of our analysis, the work of a talented team of collaborators with my two cents thrown in as well, but I wanted to emphasize a few additional impressions, primarily around storage, converged infrastructure, and the overall tone of the show.

First, storage. If they ever need a new name for the show, they might consider “StorageWorld” – it seemed to me that just about every other booth on the show floor was about storage. Cloud storage, flash storage, hybrid storage, cheap storage, smart storage, object storage … you get the picture. Reading about the hyper-growth of storage and the criticality of storage management to the overall operation of a virtualized environment does not drive the concept home in quite the same way as seeing thousands of show attendees thronging the booths of the storage vendors, large and small, for days on end. Another leading indicator, IMHO, was the “edge of the show” booths – the cheaper booths on the edge of the floor where smaller startups congregate – which were also well populated with new and small storage vendors. There is certainly no shortage of ambition and vision in the storage technology pipeline for the next few years.

Read more

Extremes of x86 Servers Illustrate the Depth of the Ecosystem and the Diversity of Workloads

Richard Fichera

I’ve recently been thinking a lot about application-specific workloads and architectures (Optimize Scalable Workload-Specific Infrastructure for Customer Experiences), and it got me thinking about the extremes of the server spectrum – the very small and the very large as they apply to x86 servers. The range, and the variation in intended workloads, is pretty spectacular as we diverge from the mean, which for the enterprise means a 2-socket Xeon server, usually in a 1U or 2U form factor.

At the bottom, we find really tiny embedded servers, some with very non-traditional packaging. My favorite is probably the technology from Arnouse Digital Technology, a small boutique that produces computers primarily for ruggedized military and industrial environments.

Slightly bigger than a credit card, their BioDigital server is a rugged embedded server with up to 8 GB of RAM, a 128 GB SSD and a very low power footprint. Based on an Atom-class CPU, this is clearly not the choice for most workloads, but it is an exemplar of what happens when the workload is in a hostile environment and the computer may need to be part of a man-carried or vehicle-mounted portable tactical or field system. While its creators are testing the waters for acceptance as a compute cluster, with up to 4,000 of them mounted in a standard rack, it’s likely that these will remain a niche product for applications requiring the intersection of small size, extreme ruggedness and complete x86 compatibility, which includes a wide range of applications from military systems to portable desktop modules.
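As pure back-of-the-envelope arithmetic (my extrapolation from the per-card figures above, not a vendor specification), the aggregate numbers for such a 4,000-card rack are worth a quick look:

```python
# Back-of-the-envelope aggregates for a hypothetical 4,000-card rack, using
# the per-card figures quoted above (8 GB RAM, 128 GB SSD). Illustration only.
cards_per_rack = 4000
ram_gb_per_card, ssd_gb_per_card = 8, 128

print(f"RAM per rack: {cards_per_rack * ram_gb_per_card / 1024:.0f} TB")   # ~31 TB
print(f"SSD per rack: {cards_per_rack * ssd_gb_per_card / 1024:.0f} TB")   # ~500 TB
```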

Read more

HP Hooks Up With Foxconn for Volume Servers

Richard Fichera

Yesterday HP announced that it will be entering into a “non-equity joint venture” (think big strategic contract of some kind, with a lot of details still in flight) to address large-scale web services providers. Under the agreement, Foxconn will design and manufacture the new servers, targeted at hyperscale web service providers, and HP will be the primary sales channel. The new servers will be branded HP but will not be part of the current ProLiant line of enterprise servers, and HP will deliver additional services along with hardware sales.

Why?

The motivation is simple underneath all the rhetoric. HP has been hard-pressed to make decent margins selling high-volume, low-cost, no-frills servers to web service providers, and has been increasingly pressured by low-cost providers. Add to that the issue of customization, which these high-volume customers can easily get from smaller and more agile Asian ODMs, and you have a strategic problem. Having worked at HP for four years, I can testify to the fact that HP, a company maniacal about quality but encumbered with an effective but rigid set of processes for bringing new products to market, has difficulty rapidly turning around a custom design, and has a cost structure that makes it difficult to profitably compete for deals with margins that are probably in the mid-teens.

Enter Hon Hai Precision Industry Co., more commonly known as Foxconn. A longtime HP partner and widely acknowledged as one of the most efficient and agile manufacturing companies in the world, Foxconn brings complementary strengths to the table – agile design tightly integrated with its manufacturing capabilities.

Who does what?

Read more

IBM Announces Next Generation POWER Systems – Big Win for AIX Users, New Option for Linux

Richard Fichera

On April 23, IBM rolled out the long-awaited POWER8 CPU, the successor to POWER7+, and given the extensive pre-announcement speculation, the hardware itself was no big surprise (the details are fascinating, but not suitable for this venue), offering an estimated 30% to 50% improvement in application performance over the latest POWER7+, with potential for order-of-magnitude improvements with selected big data and analytics workloads. While the technology is interesting, we are pretty numb to the “bigger, better, faster” messaging that inevitably accompanies new hardware announcements, and the real impact of this announcement lies in its utility for current AIX users and IBM’s increased focus on Linux and its support of the OpenPOWER initiative.

Technology

OK, so we’re numb, but it’s still interesting. POWER8 is an entirely new processor generation implemented in 22 nm CMOS (the same geometry as Intel’s high-end CPUs). The processor features up to 12 cores, each with up to 8 threads, and a focus on not only throughput but high performance per thread and per core for low-thread-count applications. Added to the mix is up to 1 TB of memory per socket, massive PCIe 3 I/O connectivity and Coherent Accelerator Processor Interface (CAPI), IBM’s technology to deliver memory-controller-based access for accelerators and flash memory in POWER systems. CAPI figures prominently in IBM’s positioning of POWER as the ultimate analytics engine, with the announcement profiling the performance of a configuration using 40 TB of CAPI-attached flash for huge in-memory analytics at a fraction of the cost of a non-CAPI configuration.
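A quick bit of arithmetic puts the per-socket figures in perspective; the 4-socket configuration below is hypothetical, chosen only to show the scale, not a specific IBM model:

```python
# Quick arithmetic on the POWER8 figures quoted above; the 4-socket system
# is a hypothetical configuration used only to illustrate scale.
cores_per_socket  = 12
threads_per_core  = 8   # SMT8
memory_tb_per_socket = 1
sockets           = 4

print(f"Hardware threads per socket: {cores_per_socket * threads_per_core}")            # 96
print(f"Hardware threads, 4 sockets: {sockets * cores_per_socket * threads_per_core}")  # 384
print(f"Memory, 4 sockets: {sockets * memory_tb_per_socket} TB")
```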

A Slam-dunk for AIX users and a new play for Linux

Read more

Cisco UCS at Five Years – Successful Disruption and a New Status-Quo

Richard Fichera

March Madness – Five Years Ago

It was five years ago, March 2009, when Cisco formally announced “Project California,” its (possibly intentionally) worst-kept secret, as the Cisco Unified Computing System. At the time, I was working at Hewlett-Packard, and our collective feelings as we realized that Cisco really did intend to challenge us in the server market were a mixed bag. Some of us were amused at their presumption; others were concerned that there might be something there, since we had odd bits and pieces of intelligence about the former Nuova, the Cisco spin-out/spin-in that developed UCS. Most of us were convinced that they would have trouble running a server business at margins we knew would be substantially lower than their margins in their core switch business. Sitting on top of our shiny, still relatively new HP c-Class BladeSystem, which had overtaken IBM’s BladeCenter as the leading blade product, we were collectively unconcerned, as well as puzzled about Cisco’s decision to upset a nice stable arrangement in which IBM, HP and Dell sold possibly a billion dollars’ worth of Cisco gear between them.

Fast Forward

Five years later, HP is still number one in blade server units and revenue, but Cisco now appears to be number two in blades, and closing in on number three worldwide in overall server sales as well. The numbers are impressive:

  • 32,000 net new customers in five years, with 14,000 repeat customers
  • Claimed $2 billion+ annual run-rate
  • Order growth rate claimed in the “mid-30s” range, probably about three times the growth rate of any competing product line

Lessons Learned

Read more

Intel Bumps up High-End Servers with New Xeon E7 V2 – A Long-Awaited and Timely Leap

Richard Fichera

The long drought at the high end

It’s been a long wait, about four years if memory serves me well, since Intel introduced the Xeon E7, a high-end server CPU targeted at the highest per-socket x86 performance, from high-end two-socket servers to 8-socket servers with tons of memory and lots of I/O. In the ensuing four years (an eternity in a world where annual product cycles are considered the norm), subsequent generations of lesser Xeons, most recently culminating in the latest-generation 22 nm Xeon E5 V2 Ivy Bridge server CPUs, have somewhat diluted the value proposition of the original E7.

So what is the poor high-end server user with really demanding single-image workloads to do? The answer was to wait for the Xeon E7 V2, and at first glance, it appears that the wait was worth it. High-end CPUs take longer to develop than lower-end products, and in my opinion Intel made the right decision to skip the previous-generation Sandy Bridge architecture and go directly to Ivy Bridge, its successor in the Intel “Tick-Tock” cycle of new process, then new architecture.

What was announced?

The announcement was the formal unveiling of the Xeon E7 V2 CPU, available in multiple performance bins with anywhere from 8 to 15 cores per socket. Critical specifications include:

  • Up to 15 cores per socket
  • 24 DIMM slots, allowing up to 1.5 TB of memory with 64 GB DIMMs (see the quick arithmetic after this list)
  • Approximately 4X I/O bandwidth improvement
  • New RAS features, including low-level memory controller modes optimized for either high availability or performance (selectable via BIOS), enhanced error recovery and soft-error reporting
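The 1.5 TB memory figure is simple arithmetic on the 24 DIMM slots; the quick sketch below runs the numbers for a few common DIMM sizes (illustrative only, not Intel configuration guidance).

```python
# Memory capacity across the E7 V2's 24 DIMM slots at a few common DIMM sizes
# (illustrative arithmetic, not an Intel configuration guide).
DIMM_SLOTS = 24
for dimm_gb in (16, 32, 64):
    total_gb = DIMM_SLOTS * dimm_gb
    print(f"{dimm_gb:>2} GB DIMMs -> {total_gb} GB ({total_gb / 1024:.2f} TB)")
```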
Read more

IBM is First Mover with Disruptive Flash Memory Technology on New x6 Servers

Richard Fichera

This week, IBM announced its new line of x86 servers, and included among the usual incremental product improvements is a performance game-changer called eXFlash. eXFlash is the first commercially available implementation of the Memory Channel Storage (MCS) architecture announced last year by Diablo Technologies. The MCS architecture, and IBM’s eXFlash offering in particular, allows flash memory to be embedded in the system as close to the CPU as main memory, with latencies substantially lower than any other available flash option, offering better performance at a lower solution cost than other embedded flash solutions. Key aspects of the announcement include:

■  Flash DIMMs offer scalable high performance. Write latency (a critical metric) for IBM eXFlash will be in the 5 to 10 microsecond range, whereas best-of-breed competing mezzanine-card and PCIe flash can only offer 15 to 20 microseconds (and external flash storage is slower still). Additionally, since the DIMMs are directly attached to the memory controller, flash I/O does not compete with other I/O on the system I/O hub and PCIe subsystem, improving overall system performance for heavily loaded systems. Additional benefits include linear performance scalability as the number of DIMMs increases and optional built-in hardware mirroring of DIMM pairs.

■  eXFlash DIMMs are compatible with current software. Part of the magic of MCS flash is that it appears to the OS as a standard block-mode device, so all existing block-mode software will work, including applications, caching, tiering and general storage management software, as the sketch below illustrates. For IBM users, compatibility with IBM’s storage management and FlashCache Storage Accelerator solutions is guaranteed. Other vendors should face little to no effort in qualifying their solutions.
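To illustrate the block-mode point, here is a minimal sketch of ordinary block-device access in Python; the device path is hypothetical, reading raw devices requires root, and nothing in the code changes whether the device is backed by an eXFlash DIMM, a PCIe flash card or a SATA SSD.

```python
# Minimal sketch of the compatibility point above: block-mode software does
# not care what physical media sits behind the device node. The device path
# is hypothetical, and reading raw block devices requires root.
import os

DEVICE = "/dev/sdb"   # could be an eXFlash DIMM, PCIe flash card, or SATA SSD
BLOCK = 4096

fd = os.open(DEVICE, os.O_RDONLY)
try:
    data = os.read(fd, BLOCK)   # same read path regardless of backing media
    print(f"Read {len(data)} bytes from {DEVICE}")
finally:
    os.close(fd)
```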

Read more