Windows Server 2003 – A Very Unglamorous but Really Important Problem, Waiting to Bite

Richard Fichera

Very much in the shadows of all the press coverage and hysteria attendant on emerging cloud architectures and customer-facing systems of engagement are the nitty-gritty operational details that lurk like monsters in the swamp of legacy infrastructure, and some of them have teeth. And sometimes these teeth can really take a bite out of the posterior of an unprepared organization.

One of those toothy animals that I&O groups are increasingly encountering in their landscapes is the problem of what to do with Windows Server 2003 (WS2003). It turns out there are still approximately 11 million physical WS2003 systems running today, with another 10+ million instances running as VM guests – all told, possibly more than 22 million OS images, plus a ton of hardware that will need replacing or upgrading. And increasing numbers of organizations have finally begun to take seriously the fact that Microsoft really is going to end support and updates as of July 2015.

Based on the conversations I have been having with our clients, the typical I&O group that is now scrambling to come up with a plan has not been willfully negligent, nor are they stupid. Usually the WS2003 servers are legacy machines, quietly running some mature piece of code, often in satellite locations or in the shops of acquired companies. The workloads are a mix of ISV and bespoke code, often a LOB-specific application, with the run-of-the-mill collaboration and infrastructure servers having long since migrated to newer platforms. A surprising number of clients have told me that they have identified the servers, but not always the applications or the business owners – often a complex task for an old resource in a large company.
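As a rough illustration of the discovery step, a short Python sketch using the ldap3 library can pull a first-cut list of WS2003 machines out of Active Directory. The domain controller, service account, and base DN below are placeholders, and tying each machine back to its application and business owner remains the manual work described above.

    # Rough inventory sketch: ask Active Directory for computer objects whose
    # operatingSystem attribute starts with "Windows Server 2003".
    # The server name, account, and base DN are placeholders.
    from ldap3 import Server, Connection, SUBTREE

    server = Server("dc01.example.com")
    conn = Connection(server, user="EXAMPLE\\svc_inventory",
                      password="change-me", auto_bind=True)

    conn.search(
        search_base="DC=example,DC=com",
        search_filter="(&(objectCategory=computer)"
                      "(operatingSystem=Windows Server 2003*))",
        search_scope=SUBTREE,
        attributes=["name", "operatingSystem", "operatingSystemServicePack"],
    )

    # The easy part: the list of hosts. The hard part - mapping each host to an
    # application and a business owner - still has to be done by hand.
    for entry in conn.entries:
        print(entry.name, entry.operatingSystem, entry.operatingSystemServicePack)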

Read more

Taking Stock of Linux – Maturation Continues

Richard Fichera

[Apologies to all who have just read this post with a sense of déjà vu. I saw a typo, corrected it, and then republished the blog, which reset the publication date. This post was originally published several months ago.]

Having been away from the Linux scene for a while, I recently took a look at a newer version of Linux, SUSE Enterprise Linux Version 11.3, which is representative of the latest feature sets from the Linux 3.0 et seq kernels available to the entire Linux community, including SUSE, Red Hat, Canonical and others. It is apparent, both from the details on SUSE 11.3 and from perusing the documentation of other distribution providers, that Linux has continued to mature nicely, both as a foundation for large scale-out clouds and as a strong contender for the kind of enterprise workloads that previously were only comfortable on either RISC/UNIX systems or large Microsoft Server systems. In effect, Linux has matured to the point where its feature set and scalability begin to look like those of a top-tier UNIX from only a couple of years ago.

Among the enterprise technology that caught my eye:

  • Scalability – The Linux kernel now scales to 4096 x86 CPUs and up to 16 TB of memory, well into high-end UNIX server territory, and will support the largest x86 servers currently shipping.
  • I/O – The Linux kernel now includes btrfs (a geeky contraction of “Better File System”), an open source file system that promises much of the scalability and feature set of Oracle’s popular ZFS, including checksums, copy-on-write (CoW), snapshotting, and advanced logical volume management with thin provisioning (a brief snapshot example follows this list). The latest releases also include advanced features like geo-clustering and remote data replication to support advanced HA topologies.
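To give a flavor of the CoW snapshot capability mentioned above, here is a minimal Python sketch that simply drives the standard btrfs command-line tools. It assumes a btrfs filesystem is already mounted at /data (a placeholder path) and that the script runs with root privileges.

    # Minimal btrfs snapshot sketch: create a subvolume, then take a read-only,
    # copy-on-write snapshot of it. Assumes /data is a mounted btrfs filesystem
    # and the script runs as root; both are assumptions for illustration only.
    import os
    import subprocess
    from datetime import datetime

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    subvol = "/data/appdata"
    os.makedirs("/data/.snapshots", exist_ok=True)
    snapshot = f"/data/.snapshots/appdata-{datetime.now():%Y%m%d-%H%M%S}"

    run(["btrfs", "subvolume", "create", subvol])              # one-time setup
    run(["btrfs", "subvolume", "snapshot", "-r", subvol, snapshot])
    run(["btrfs", "subvolume", "list", "/data"])               # confirm both exist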
Read more

From Intel Developer Forum – New Xeon E5 v3 Promises A Respite For Capacity Planners

Richard Fichera

I'm at IDF, a major geekfest for people interested in the guts of today’s computing infrastructure, and will be immersing myself in the flow for a couple of days. Before going completely off the deep end, I wanted to call out the announcement of the new Xeon E5 v3. While I’ve discussed it in more depth in an accompanying Quick Take just published on our main website, I wanted to add some additional comments on its implications for data center operations, particularly in the areas of capacity planning and long-term capital budgeting.

For many years, each successive iteration of Intel’s and its partners’ roadmaps has been quietly delivering a major benefit that seldom gets top billing – additional capacity within the same power and physical footprint, and the resulting ability for users, from small enterprises to mega-scale service providers, to defer additional data center capital spending.
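A back-of-the-envelope sketch makes the point. The numbers below are illustrative assumptions rather than Intel's published figures, but they show how a per-server performance uplift inside a fixed power and space envelope translates directly into deferred capital spending:

    # Back-of-the-envelope capacity-deferral sketch. Every figure here is an
    # illustrative assumption, not a vendor specification.
    import math

    rack_power_budget_w = 10_000   # usable power per rack (assumed)
    server_power_w = 500           # per-server draw, assumed flat across generations
    servers_per_rack = rack_power_budget_w // server_power_w   # 20 servers

    perf_uplift = 1.5              # assumed new-generation throughput vs. old
    demand_growth = 1.30           # assumed 30% annual workload growth

    # If a rack of older servers is full today, refreshing it in place with the
    # new generation adds headroom in the same power/space envelope, pushing out
    # the date when the next rack (and its capital expense) is needed.
    years_deferred = math.log(perf_uplift) / math.log(demand_growth)

    print(f"{servers_per_rack} servers per rack, {perf_uplift:.1f}x capacity in place")
    print(f"Next rack of capacity deferred by roughly {years_deferred:.1f} years")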

Read more

VMworld – Reflections on a Transformational Event

Richard Fichera

A group of us just published an analysis of VMworld (Breaking Down VMworld), and I thought I’d take this opportunity to add some additional color to the analysis. The report is an excellent synthesis of our analysis – the work of a talented team of collaborators, with my two cents thrown in as well – but I wanted to emphasize a few additional impressions, primarily around storage, converged infrastructure, and the overall tone of the show.

First, storage. If they ever need a new name for the show, they might consider “StorageWorld” – it seemed to me that just about every other booth on the show floor was about storage. Cloud storage, flash storage, hybrid storage, cheap storage, smart storage, object storage … you get the picture. Reading about the hyper-growth of storage and the criticality of storage management to the overall operation of a virtualized environment does not drive the concept home in quite the same way as seeing thousands of show attendees thronging the booths of the storage vendors, large and small, for days on end. Another leading indicator, IMHO, was the “edge of the show” area – the cheaper booths on the edge of the floor where smaller startups congregate – which was also well populated with new and small storage vendors. There is certainly no shortage of ambition and vision in the storage technology pipeline for the next few years.

Read more

Extremes of x86 Servers Illustrate the Depth of the Ecosystem and the Diversity of Workloads

Richard Fichera

I’ve recently been thinking a lot about application-specific workloads and architectures (Optimize Scalable Workload-Specific Infrastructure for Customer Experiences), and it got me thinking about the extremes of the server spectrum – the very small and the very large as they apply to x86 servers. The range and variation in intended workloads are pretty spectacular as we diverge from the mean, which for the enterprise is a two-socket Xeon server, usually in a 1U or 2U form factor.

At the bottom, we find really tiny embedded servers, some with very non-traditional packaging. My favorite is probably the technology from Arnouse Digital Technology, a small boutique that produces ruggedized computers primarily for military and industrial environments.

Slightly bigger than a credit card, their BioDigital server is a rugged embedded server with up to 8 GB of RAM, a 128 GB SSD, and a very low power footprint. Based on an Atom-class CPU, this is clearly not the choice for most workloads, but it is an exemplar of what happens when the workload lives in a hostile environment and the computer may need to be part of a man-carried or vehicle-mounted portable tactical or field system. While its creators are testing the waters for acceptance as a compute cluster, with up to 4,000 of them mounted in a standard rack, it’s likely that these will remain a niche product for applications requiring the intersection of small size, extreme ruggedness, and complete x86 compatibility – a range that runs from military systems to portable desktop modules.

Read more

Some Friendly Advice For Dell Customer Service

Harley Manning

Right before school started last year I bought my son a new Dell laptop, a Windows 8 machine with a touchscreen. He loves it.

Fast forward to a month ago when our family rented a vacation house. My son brought his laptop along so he could play DVDs on it – online gaming was right out because we had purposefully rented a house with no Internet connection so we could unplug from work.

The first time my son tried to log on he found that Windows did not want to accept his password because he was not online. I’m going to skip the lengthy explanation of why this is not supposed to happen, why it happened anyway, all the things we tried to do to fix the problem ourselves, etc. (Maybe they’ll end up in a different post – who knows?)

Suffice it to say that since the laptop was still under warranty, and the problem seemed simple enough, I decided to call Dell. I assumed they’d encountered this situation a million times and could tell me a fix in their sleep. Well, I was wrong. After talking to five different people (could have been four, could have been six – I lost count after a while), I realized that I had made a mistake and hung up on the hold music.

Since I hate to let an interesting customer experience go to waste, though, I’d like to offer some hopefully helpful advice to the Dell customer service people – because, in fact, we do like that machine we bought from them and would love them to be around for our next laptop purchase. With that in mind, here are my top suggestions for the people who tried to help me as well as anyone else who runs a customer service operation.

Read more

Decoding Huawei – Emergence as a Major IT Player Looms

Richard Fichera

Last month I attended Huawei’s annual Global Analyst Summit, for the requisite several days of mass presentations, executive meetings and tours that typically constitute such an event. Underneath my veneer of blasé cynicism, I was actually quite intrigued, since I really knew very little about Huawei. And what I did know was tainted by popular and persistent negatives – they were the ones who supposedly copied Cisco’s IP to get into the network business, and, until we got better acquainted with our own Federal Government’s little shenanigans, Huawei was the big bad boogeyman who was going to spy on us with every piece of network equipment they installed.

Reality was quite a bit different. Ancient disputes about IP aside, I found a $40B technology powerhouse that is probably the least known and least understood company of its size in the world, and one that appears poised to mount major challenges to incumbents in several areas, including mainstream enterprise IT.

So you don’t know Huawei

First, some basics. Huawei’s 2013 revenue was $39.5 Billion, which puts it right up there with some much better-known names such as Lenovo, Oracle, Dell and Cisco.

 

Business segment                                     % of revenue / $ revenue (billions)   Annual growth rate
Telco & network equipment                            70 / $27.7                            7%
Consumer (mobile devices)                            24 / $9.5                             18%
Enterprise business (servers, storage, software)
Read more

HP Hooks Up With Foxconn for Volume Servers

Richard Fichera

Yesterday HP announced that it will be entering into a “non-equity joint venture” (think big strategic contract of some kind, with a lot of details still in flight) aimed at large-scale web service providers. Under the agreement, Foxconn will design and manufacture, and HP will be the primary sales channel for, new servers targeted at hyperscale web service providers. The new servers will be branded HP but will not be part of the current ProLiant line of enterprise servers, and HP will deliver additional services along with the hardware.

Why?

Underneath all the rhetoric, the motivation is simple. HP has been hard-pressed to make decent margins selling high-volume, low-cost, no-frills servers to web service providers, and has been increasingly pressured by low-cost competitors. Add to that the issue of customization, which these high-volume customers can easily get from smaller and more agile Asian ODMs, and you have a strategic problem. Having worked at HP for four years, I can testify to the fact that HP – a company maniacal about quality but encumbered with an effective yet rigid set of processes for bringing new products to market – has difficulty rapidly turning around a custom design, and has a cost structure that makes it difficult to compete profitably for deals with margins that are probably in the mid-teens.

Enter Hon Hai Precision Industry Co., more commonly known as Foxconn. A longtime HP partner, and widely acknowledged as one of the most efficient and agile manufacturing companies in the world, Foxconn brings to the table strengths that complement HP’s – agile design tightly integrated with its manufacturing capabilities.

Who does what?

Read more

Cisco UCS at Five Years – Successful Disruption and a New Status-Quo

Richard Fichera

March Madness – Five Years Ago

It was five years ago, March 2009, when Cisco formally announced  “Project California,” its (possibly intentionally) worst-kept secret, as Cisco Unified Computing System. At the time, I was working at Hewlett Packard, and our collective feelings as we realized that Cisco really did intend to challenge us in the server market were a mixed bag. Some of us were amused at their presumption, others were concerned that there might be something there, since we had odd bits and pieces of intelligence about the former Nuova, the Cisco spin-out/spin-in that developed UCS. Most of us were convinced that they would have trouble running a server business at margins we knew would be substantially lower than their margins in their core switch business. Sitting on top of our shiny, still relatively new HP c-Class BladeSystem, which had overtaken IBM’s BladeCenter as the leading blade product, we were collectively unconcerned, as well as puzzled about Cisco’s decision to upset a nice stable arrangement where IBM, HP and Dell sold possibly a Billion dollars’ worth of Cisco gear between them.

Fast Forward

Five years later, HP is still number one in blade server units and revenue, but Cisco now appears to be number two in blades, and is closing in on number three worldwide in overall server sales as well. The numbers are impressive:

  • 32,000 net new customers in five years, with 14,000 repeat customers
  • Claimed $2 Billion+ annual run-rate
  • Order growth rate claimed in “mid-30s” range, probably about three times the growth rate of any competing product line.

Lessons Learned

Read more

Intel Bumps up High-End Servers with New Xeon E7 V2 - A Long Awaited and Timely Leap

Richard Fichera

The long drought at the high end

It’s been a long wait – about four years, if memory serves me well – since Intel introduced the Xeon E7, a high-end server CPU targeted at the highest per-socket x86 performance, from high-end two-socket servers to 8-socket servers with tons of memory and lots of I/O. In the ensuing four years (an eternity in a world where annual product cycles are considered the norm), subsequent generations of lesser Xeons, most recently culminating in the latest-generation 22 nm Xeon E5 V2 Ivy Bridge server CPUs, have somewhat diluted the value proposition of the original E7.

So what is the poor high-end server user with really demanding single-image workloads to do? The answer was to wait for the Xeon E7 V2, and at first glance, it appears that the wait was worth it. High-end CPUs take longer to develop than lower-end products, and in my opinion Intel made the right decision to skip the previous-generation 32 nm Sandy Bridge architecture and go straight to Ivy Bridge, its architectural successor in the Intel “Tick-Tock” cycle of new process, then new architecture.

What was announced?

The announcement was the formal unveiling of the Xeon E7 V2 CPU, available in multiple performance bins with anywhere from 8 to 15 cores per socket. Critical specifications include:

  • Up to 15 cores per socket
  • 24 DIMM slots, allowing up to 1.5 TB of memory with 64 GB DIMMs (see the quick arithmetic check after this list)
  • Approximately 4X I/O bandwidth improvement
  • New RAS features, including low-level memory controller modes optimized for either high availability or performance (selectable as a BIOS option), enhanced error recovery, and soft-error reporting
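The quick arithmetic check promised above, as a short Python sketch. Treating the 24-DIMM figure as a per-socket number and extending it to multi-socket configurations are my own assumptions for illustration, not text from the announcement.

    # Sanity check on the memory bullet. Treating "24 DIMM slots / 1.5 TB" as a
    # per-socket figure, and scaling to 2-, 4-, and 8-socket systems, are
    # assumptions made here for illustration.
    DIMM_SLOTS_PER_SOCKET = 24
    DIMM_SIZE_GB = 64

    per_socket_gb = DIMM_SLOTS_PER_SOCKET * DIMM_SIZE_GB   # 1536 GB, i.e. 1.5 TB
    for sockets in (2, 4, 8):
        print(f"{sockets}-socket system: up to {sockets * per_socket_gb / 1024:.1f} TB of memory")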
Read more