Lenovo Buys IBM x86 Server Business

Wow, wake up and it’s a whole new world – a central concept of many contemplative belief systems and a daily reality in the computer industry. I woke up this morning to a pleasant New England day with low single-digit temperatures under a brilliant blue sky, and lo and behold, by the time I got to work, along came the news that Lenovo had acquired IBM’s x86 server business, essentially lock, stock and barrel. For IBM the deal is compelling: having decided to move away from the volume hardware manufacturing business, it gains a long-term source for the hardware components it still needs, much as it did with PCs and other volume hardware in the past. Lenovo gains a world-class server product line for its existing channel organization that vastly expands its enterprise reach, along with about 7,500 engineering, sales and marketing employees who understand the enterprise server business.

What’s Included

The rumors have been circulating for about a year, but the reality is still pretty impressive – for $2.3 billion in cash and stock, Lenovo acquired IBM’s entire x86 systems line, including the rack and blade servers, Flex System, blade networking, and the newer NeXtScale and iDataPlex offerings. In addition, Lenovo will have licensed access to many of the surrounding software and hardware components, including SmartCloud Entry, Storwize, Director, Platform Computing, GPFS and others.

IBM will purchase hardware from Lenovo on an OEM basis to continue delivering value-added integrated systems such as its PureApplication and PureData systems.

What IBM Keeps

IBM will keep its mainframe and Power Systems lines (including the Power-based Flex System nodes) and its storage business, and it will both retain and expand its services and integration business, as well as provide support for the new Lenovo server offerings.

What Does it Mean for IBM Customers?

Read more

IBM is First Mover with Disruptive Flash Memory Technology on New x6 Servers

This week, IBM announced its new line of x86 servers, and included among the usual incremental product improvements is a performance game-changer called eXFlash. eXFlash is the first commercially available implementation of the MCS architecture announced last year by Diablo Technologies. MCS, and IBM’s eXFlash offering in particular, allows flash memory to sit as close to the CPU as main memory, with latencies substantially lower than any other available flash option, offering better performance at a lower solution cost than other embedded flash solutions. Key aspects of the announcement include:

■  Flash DIMMs offer scalable high performance. Write latency (a critical metric) for IBM eXFlash will be in the 5 to 10 microsecond range, whereas best-of-breed competing mezzanine card and PCIe flash can only offer 15 to 20 microseconds (and external flash storage is slower still). Additionally, since the DIMMs are attached directly to the memory controller, flash I/O does not compete with other I/O on the system I/O hub and PCIe subsystem, improving overall system performance for heavily loaded systems. Additional benefits include linear performance scaling as the number of DIMMs increases, and optional built-in hardware mirroring of DIMM pairs.

■  eXFlash DIMMs are compatible with current software. Part of the magic of MCS flash is that it appears to the OS as a standard block-mode device, so all existing block-mode software will work, including applications, caching and tiering tools, and general storage management software (the short sketch below illustrates the point). For IBM users, compatibility with IBM’s storage management and FlashCache Storage Accelerator solutions is guaranteed. Other vendors should face little to no effort in qualifying their solutions.
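
Since block-mode compatibility is the crux of that claim, here is a minimal sketch (Linux, C++) of what “just works” means in practice: plain POSIX block I/O, with no flash-specific API anywhere. The device path /dev/exflash0 is hypothetical; actual naming will depend on the shipped driver.

```cpp
// Minimal sketch: an MCS flash DIMM that presents as a standard block
// device can be read with ordinary POSIX calls, so existing block-mode
// software needs no changes. Error handling kept to a minimum.
#ifndef _GNU_SOURCE
#define _GNU_SOURCE   // for O_DIRECT on Linux
#endif
#include <fcntl.h>
#include <unistd.h>
#include <cstdlib>
#include <cstdio>
#include <chrono>

int main() {
    const char *dev = "/dev/exflash0";         // hypothetical device node
    int fd = open(dev, O_RDONLY | O_DIRECT);   // O_DIRECT bypasses the page cache
    if (fd < 0) { perror("open"); return 1; }

    void *buf = nullptr;
    // O_DIRECT requires sector-aligned buffers; 4 KiB covers typical devices
    if (posix_memalign(&buf, 4096, 4096) != 0) { close(fd); return 1; }

    auto t0 = std::chrono::steady_clock::now();
    ssize_t n = pread(fd, buf, 4096, 0);       // one ordinary block read at offset 0
    auto t1 = std::chrono::steady_clock::now();

    printf("read %zd bytes in %.1f us\n", n,
           std::chrono::duration<double, std::micro>(t1 - t0).count());

    free(buf);
    close(fd);
    return 0;
}
```

The same binary would run unmodified against a SATA SSD or a PCIe flash card, which is the point: the qualification burden for existing storage software should be small.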

Read more

2014 Server and Data Center Predictions

As the new year looms, thoughts turn once again to our annual reading of the tea leaves, in this case focused on what I see coming in server land. We’ve just published the full report, Predictions for 2014: Servers & Data Centers, but as a teaser, here are a few of the major highlights from the report:

1. Increasing choices in form factor and packaging – I&O pros will have to cope with a proliferation of new form factors, some optimized for dense low-power cloud workloads, some for general-purpose legacy IT, and some for horizontal VM clusters (or internal clouds, if you prefer). These will continue to appear in an increasing number of variants.

2. ARM – Make-or-break time is coming, depending on the success of upcoming 64-bit ARM CPU/SOC designs with full server feature sets, including VM support.

3. The beat goes on – A major turn of the great wheel is coming for server CPUs in early 2014.

4. Huge potential disruption in flash architecture – The introduction of flash in main-memory DIMM slots has the potential to completely disrupt how flash is used in storage tiers and to break the current storage tiering model, initially at the physical level, with ripple effects through memory architectures.

Read more

Meeting with Tech Mahindra – Insights and Reality Check on IT Automation

I recently had a meeting with executives from Tech Mahindra, an India-based IT services company, which was refreshing both for the candor with which they discussed the overall mechanics of a support and integration model with significant components located half a world away, and for their insights on the realities and limitations of automation, one of the hottest topics in IT operations today.

On the subject of the mechanics and process behind their global integration engagements, the eye-opener for me was the depth of internal process behind them. The common mindset (possibly common only in my mind, since I have had less exposure to these companies than some of my peers) of “develop the specs, send them off and receive code back” is no longer even remotely realistic. A successful complex integration project takes a reliable set of processes linking the roughly 20 – 40% of the staff on-site with the client to the supporting teams back in India, plus a massive investment in project management, development frameworks and collaboration tools – a hallmark of all the successful Indian service providers.

From the client I&O group’s perspective, the relationship with the outsourcer is no longer an arm’s-length process but rather a tightly integrated team, in which the main visible differentiator is who pays whose salary rather than any strict team, task or function boundary. For the integrator this is a strong positive, since it makes it difficult for the client to disengage and gives its teams early knowledge of changes and new project opportunities. For the client there are both drawbacks and benefits – disengagement is difficult, but knowledge transfer is tightly integrated and efficient.

Read more

A Ray of Hope for HP NonStop Users – HP Announces x86 NonStop Plans

Lost in the excess of press coverage and collective angst over the fate of HP’s HP-UX servers, and the widely accepted premise that Itanium is nearing the end of its less-than-stellar run, has been the fate of HP’s NonStop users. These customers, some dating back to the original Tandem customer roster, almost universally use HP NonStop systems as mission-critical hubs for their business, in industries as diverse as securities trading, public safety and retail sales. NonStop is far more difficult to engineer out of an organization than HP-UX, since there are few viable alternatives, at any reasonable cost, to replace the combination of scalable processing power and fault tolerance that the NonStop environment provides.

NonStop users can now breathe a collective sigh of relief – on November 4 HP announced that it was undertaking to migrate NonStop to an x86 system platform. Despite the lack of any specifics on system details, timing or pretty much anything else, I think NonStop users can take this to the bank, figuratively and literally, for a couple of reasons:

  • HP has a pretty good track record of actually delivering on major initiatives it commits to. Its major stumbles with the Itanium-based HP-UX program have been in not communicating rather than in missing commitments. Technically, given another cycle of server CPUs and HP’s collective expertise in systems design, including the already underway high-end x86 systems programs, there is little doubt that it can deliver a platform suitable for supporting NonStop.
Read more

HP Rolls Out HP OneView – Systems Management Done Right

In The Beginning

I was perusing one of my favorite trade pubs, The Register, and noticed an article about the new HP OneView systems management platform, which reminded me that I was going to write a blog on it at some point. Reading further gave me even more incentive to get down to penning this post, since I really think that this is one of the rare occasions where the usually excellent staff of “El Reg” allowed themselves to get carried away with their enviable witticisms and just plain missed the point.

The Register article seemed to dismiss HP OneView as some sort of cosmetic trick, with references to things like “dressing up software in easy to use user interfaces”. My perception is completely the opposite — dressing up software in easy-to-use interfaces is exactly what is needed in a world drowning in IT complexity, and I believe HP OneView is a significant development in systems management tools, both useful to HP customers today and likely setting a significant bar for competitive offerings as well.

What It Is

Read more

Intel Lays Out Future Data Center Strategy - Serious Focus on Emerging Opportunities

Yesterday Intel held a major press and analyst event in San Francisco to talk about its vision for the future of the data center, anchored on what has become, in many eyes, the virtuous cycle of future infrastructure demand – mobile devices and “the Internet of Things” driving cloud resource consumption, which in turn spews out big data, which spawns storage and the requirement for yet more computing to analyze it. As usual with these kinds of events from Intel, it was long on serious vision and strong on strategic positioning, but a bit parsimonious with actual future product information, with a couple of interesting exceptions.

Content and Core Topics:

No major surprises on the underlying demand-side drivers. The proliferation of mobile devices, the impending Internet of Things and the mountains of big data they generate will combine to continue to increase demand for cloud-resident infrastructure, particularly servers and storage, both of which present Intel with an opportunity to sell semiconductors. Needless to say, Intel laced its presentations with frequent reminders about who is the king of semiconductor manufacturing.

Read more

Systems of Engagement vs Systems of Record – Core Concept for Infrastructure Architecture

My Forrester colleagues Ted Schadler and John McCarthy have written about the differences between Systems of Record (SoR) and Systems of Engagement (SoE) in the context of customer-facing systems and mobility, but after further conversations with some very smart people at IBM, I think there are also important reasons for infrastructure architects to understand this dichotomy. Systems of engagement – scalable and flexible, built with the latest in dynamic web technology – and back-end systems of record – highly stateful, usually transactional systems designed to keep track of the “true” state of corporate assets – are very different animals from an infrastructure standpoint in two fundamental areas:

Suitability to cloud (private or public) deployment – SoE environments, by their nature, are generally constructed using horizontally scalable technologies, generally based on some level of standards, including web standards, a Linux or Windows OS, and scalable middleware that hides the messy details of horizontally scaling a complex application. In addition, the workloads are generally highly parallel, with each individual interaction being of low value. This leads to very different requirements for consistency and resiliency.

Read more

AMD Quietly Rolls Out hUMA – Potential Game-Changer for Parallel Computing

Background – High-Performance Attached Processors Handicapped by Architecture

The application of high-performance accelerators, notably GPUs and GPGPUs (APUs in AMD terminology), to a variety of computing problems has blossomed over the last decade, resulting in ever more affordable compute power for both exotic and mundane problems, along with growing revenue streams for a growing industry ecosystem. Adding heat to an already active mix, Intel’s Xeon Phi accelerators, the most recent addition to the GPU ecosystem, have the potential to speed adoption even further due to hoped-for synergies generated by the immense universe of x86 code that could potentially run on the Xeon Phi cores.

However, despite any potential synergies, GPUs (I will use this term generically to refer to all forms of these attached accelerators as they currently exist in the market) suffer from a fundamental architectural problem — they are very distant, in terms of latency, from the main scalar system memory and are not part of the coherent memory domain. This in turn has major impacts on performance, cost, design of the GPUs, and the structure of the algorithms:

  • Performance — The latency for memory accesses is generally dictated by PCIe latencies, which, while much improved over previous generations, are a factor of 100 or more longer than latencies from coherent cache or local scalar CPU memory. While clever design and programming, such as overlapping and buffering multiple transfers, can hide the latency within a series of transfers (see the sketch below), it is difficult to hide the latency of the initial block of data. Even AMD’s integrated APUs, in which the GPU elements are on a common die, do not share a common memory space, and explicit transfers are made in and out of the APU memory.
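
For readers curious what “overlapping and buffering multiple transfers” looks like in code, here is a minimal CUDA sketch of the double-buffering idiom: the host-to-device copy of one chunk overlaps kernel execution on the previous chunk by alternating two streams. The kernel is a placeholder of my own invention, not anything from AMD’s announcement, and note that nothing in this pattern can hide the PCIe latency of the first chunk.

```cpp
// Minimal CUDA double-buffering sketch: overlap PCIe transfers with
// compute by alternating chunks across two streams. Error checks omitted.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void process(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;               // placeholder "work"
}

int main() {
    const int N = 1 << 24;                    // total elements (~64 MB of floats)
    const int CHUNK = 1 << 20;                // elements per chunk
    float *host, *dev;
    // Pinned host memory is required for truly asynchronous copies
    cudaHostAlloc((void **)&host, N * sizeof(float), cudaHostAllocDefault);
    cudaMalloc((void **)&dev, N * sizeof(float));

    cudaStream_t s[2];
    cudaStreamCreate(&s[0]);
    cudaStreamCreate(&s[1]);

    for (int off = 0, k = 0; off < N; off += CHUNK, ++k) {
        cudaStream_t st = s[k % 2];
        // While this chunk is copying, the other stream's kernel can run,
        // hiding transfer latency for every chunk except the first.
        cudaMemcpyAsync(dev + off, host + off, CHUNK * sizeof(float),
                        cudaMemcpyHostToDevice, st);
        process<<<(CHUNK + 255) / 256, 256, 0, st>>>(dev + off, CHUNK);
    }
    cudaDeviceSynchronize();

    cudaFreeHost(host);
    cudaFree(dev);
    cudaStreamDestroy(s[0]);
    cudaStreamDestroy(s[1]);
    printf("done\n");
    return 0;
}
```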
Read more