Having been away from the Linux scene for a while, I recently took a look at a newer version of Linux, SUSE Linux Enterprise Server 11 SP3, which is representative of the latest feature sets from the Linux 3.0 et seq. kernels available to the entire Linux community, including SUSE, Red Hat, Canonical, and others. It is apparent, both from the details on SLES 11 SP3 and from perusing the documentation of other distribution providers, that Linux has continued to mature nicely, both as a foundation for large scale-out clouds and as a strong contender for the kinds of enterprise workloads that previously were comfortable only on RISC/UNIX systems or large Microsoft Server systems. In effect, Linux has continued its maturation to the point where its feature set and scalability begin to look like those of a top-tier UNIX from only a couple of years ago.
Among the enterprise technology that caught my eye:
Scalability – The Linux kernel now scales to 4096 x86 CPUs and up to 16 TB of memory, well into high-end UNIX server territory, and will support the largest x86 servers currently shipping.
I/O – The Linux kernel now includes btrfs (a geeky contraction of “Better File System”), an open source file system that promises much of the scalability and feature set of Oracle’s popular ZFS file system, including checksums, copy-on-write (CoW), snapshots, and advanced logical volume management with thin provisioning. The latest releases also include advanced features like geoclustering and remote data replication to support advanced HA topologies.
Ten years ago, open source software (OSS) was more like a toy for independent software vendors (ISVs) in China: Only the geeks in R&D played around with it. However, the software industry has been developing quickly in China throughout the past decade, and technology trends such as service-oriented architecture (SOA), business process management (BPM), cloud computing, the mobile Internet, and big data are driving much broader adoption of OSS.
OSS has become a widely used element of firms’ enterprise architecture. For front-end application architecture on the client side, various open source frameworks, such as jQuery and ExtJS, have been incorporated into many ISVs’ front-end frameworks. On the server side, OSS like Node.js is becoming popular among ISVs in China for its high Web throughput capabilities. From an infrastructure and information architecture perspective, open source offerings like OpenStack, CloudStack, and Eucalyptus have been piloted by major telecom carriers including China Telecom and China Unicom, as well as information and communication solution providers like Huawei and IT service providers like CIeNET. To round this out, many startup companies are developing solutions based on MongoDB, an open source NoSQL database.
Familiarity with OSS is becoming a necessary qualification for software developers and product strategy professionals. Because OSS is so widely used among both vendors and end users, working experience and extensive knowledge of OSS is becoming not only a necessary qualification for software engineers but also an important factor for product strategy professionals in establishing appropriate product road maps and supporting their business initiatives.
The industry is abuzz with speculation that IBM will sell its x86 server business to Lenovo. As usual, neither party is talking publicly, but at this point I’d give it a better than even chance, since these kinds of rumors usually tend to be based on leaks of real discussions as opposed to being completely delusional fantasies. Usually.
So the obvious question then becomes “Huh?”, or, slightly more eloquently stated, “Why would they do something like that?”. Aside from the possibility that this might all be fantasy, two explanations come to mind:
1. IBM is crazy.
2. IBM is not crazy.
Of the two explanations, I’ll have to lean toward the latter, although we might be dealing with a bit of the “Hey, I’m the new CEO and I’m going to do something really dramatic today” syndrome. IBM sold its PC business to Lenovo amid popular disbelief and dire predictions, and it's doing very well today because it transferred its investment and focus to higher-margin businesses, like servers and services. Lenovo today makes low-end servers that it bootstrapped with licensed IBM technology, and IBM is finding it very hard to compete with Lenovo and other low-cost providers. Maybe the margins on its commodity server business have sunk below some critical internal benchmark for return on investment, and it believes that it can get a better return on its money elsewhere.
In recent research, I have laid out some similarities and differences between tablets and laptops. But the tablet market is growing ever more fragmented, yielding subtleties that aren’t always captured with a simple “PC vs. tablet” dichotomy. As Infrastructure & Operations (I&O) professionals try to determine the composition of their hardware portfolios, the product offerings themselves are more protean. Just describing the “tablet” space is much harder than it used to be. Today, we’re looking at multiple OSes (iOS, Android, Windows, BlackBerry, forked Android), form factors (eReader, tablet, hybrid, convertible, touchscreen laptop), and screen sizes (from 5” phablets to giant 27” furniture tablets) – not to mention a variety of brands, price points, and applications. If, as rumored, Microsoft were to enter the 7” to 8” space – competing with Google Nexus, Apple iPad Mini, and Kindle Fire HD – we would see even more permutations. Enterprise-specific – some vertically specific – devices are proliferating alongside increased BYO choices for workers.
HP today announced the Moonshot 1500 server, its first official volume product in the Project Moonshot server family (the initial Redstone, a Calxeda ARM-based server, was available only in limited quantities as a development system). It represents both a significant product today and a major stake in the ground for future products, from HP and eventually from competitors. Its initial attraction – an extremely dense, low-power x86 server platform for a variety of low-to-midrange CPU workloads – hides the fact that it is probably a blueprint for a family of future HP products as well as for similar products from other vendors.
Geek Stuff – What was Announced
The Moonshot 1500 is a 4.3U enclosure that can contain up to 45 plug-in server cartridges, each one a complete server node with a dual-core Intel Atom 1200 CPU, up to 8 GB of memory, and a single disk or SSD device of up to 1 TB; the servers share common power supplies and cooling. But beyond the density, the real attraction of the MS1500 (my acronym, not an official HP label) is its scalable fabric and CPU-agnostic architecture. Embedded in the chassis are multiple fabrics for storage, management, and network, giving the MS1500 some of the advantages of a blade server without the advanced management capabilities. At initial shipment, only the network and management fabrics will be enabled by the system firmware, with each chassis having up to two Gb Ethernet switches (technically they can be configured with one, but nobody will do so), allowing the 45 servers to share uplinks to the enterprise network.
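To put that density in perspective, a quick back-of-envelope calculation is useful. The 45-cartridge, 4.3U figures come from the announcement above; the 42U rack height and the assumption of no additional space overhead are mine, not HP's:

```python
# Back-of-envelope rack density for the Moonshot 1500.
# HP figures (from the announcement): 45 server cartridges per 4.3U enclosure.
# Assumptions (mine): a standard 42U rack, no extra U consumed by switches/PDUs.
ENCLOSURE_U = 4.3
NODES_PER_ENCLOSURE = 45
RACK_U = 42  # assumed standard rack height

enclosures_per_rack = int(RACK_U // ENCLOSURE_U)
nodes_per_rack = enclosures_per_rack * NODES_PER_ENCLOSURE

print(enclosures_per_rack, nodes_per_rack)  # 9 enclosures, 405 server nodes
```

Even under these idealized assumptions, roughly 400 discrete server nodes per rack is far beyond what conventional 1U or blade form factors deliver, which is the point of the design.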
With the next major spin of Intel server CPUs due later this year, HP’s customers have been waiting for HP’s next iteration of its core c-Class BladeSystem, which has been on the market for almost seven years without any major changes to its basic architecture. IBM has already replaced its BladeCenter architecture with the new PureSystems line, and Cisco’s offering is new enough that it should last for at least another three years without a major architectural refresh, leaving HP customers to wonder when HP would introduce its next blade enclosure, and whether it would be compatible with current products.
At its partner conference this week, HP announced a range of enhancements to its blade product line that in combination represent a strong evolution of the current product while maintaining compatibility with current investments. This positioning is similar to what IBM did with its BladeCenter to BladeCenter-H upgrade, preserving current customer investment and extending the life of the current server and peripheral modules for several more years.
Tech Stuff – What Was Announced
Among the goodies announced on February 19 was an assortment of performance and functionality enhancements, including:
Platinum enclosure — The centerpiece of the announcement was the new c7000 Platinum enclosure, which boosts the speed of the midplane signal paths from 10 GHz to 14 GHz, a 40% increase in the raw bandwidth of the critical midplane, across which all of the enclosure's I/O travels. In addition to the higher-bandwidth midplane, the new enclosure incorporates location-aware sensors and doubles the available storage bandwidth.
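The 40% figure follows directly from the signaling-rate bump; since HP did not publish lane counts in this announcement, the sketch below works in per-lane ratios only:

```python
# Midplane signaling rate before and after the c7000 Platinum refresh
# (rates from HP's announcement; per-lane ratio only, lane count unknown).
old_ghz = 10.0
new_ghz = 14.0

increase = (new_ghz - old_ghz) / old_ghz  # fractional bandwidth gain per lane
print(f"{increase:.0%}")  # 40%
```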
Emerson Network Power today announced that it is entering into a significant partnership with IBM to both integrate Emerson’s new Trellis DCIM suite into IBM’s ITSM products as well as to jointly sell Trellis to IBM customers. This partnership has the potential to reshape the DCIM market segment for several reasons:
Connection to enterprise IT — Emerson has sold a lot of chillers, UPS, and PDU equipment and has tremendous cachet with facilities types, but it doesn't have a lot of people who know how to talk IT. IBM has these people in spades.
IBM can use a DCIM offering — IBM, despite being a huge player in the IT infrastructure and data center space, does not have a DCIM product. Its Maximo product seems to be more of a dressed up asset management product, and this partnership is an acknowledgement of the fact that to build a full-fledged DCIM product would have been both expensive and time-consuming.
IBM adds sales bandwidth — My belief is that the development of the DCIM market has been delivery bandwidth constrained. Market leaders Nlyte, Emerson and Schneider do not have enough people to address the emerging total demand, and the host of smaller players are even further behind. IBM has the potential to massively multiply Emerson’s ability to deliver to the market.
As businesses get larger, and the need for effective alignment of the business with technology capabilities grows, enterprise architecture becomes an essential competency. But in China, many CIOs are struggling with setting up a high-performance enterprise architecture program to support their business strategies in a disruptive market landscape. This seems equally true for state-owned enterprises (SOEs) and multinational companies (MNCs).
To gain a better understanding of the problem, I had an interesting conversation with Le Yao, general secretary of the Center for Informatization and Information Management (CIIM) and director of the CIO program at Peking University. Le Yao is one of the first pioneers to introduce The Open Group Architecture Framework (TOGAF) into China to help address the above challenges. I believe that the five-year journey of TOGAF in China is just an early beginning for EA, and companies in the China market need relevant EA insights to help them support their business:
Taking an EA course is one thing; practicing EA is something else. Companies taking TOGAF courses in China seem to be aiming more at sales enablement than practicing EA internally. MNCs like IBM, Accenture, and HP are more likely to try to infuse the essence of the methodology into their PowerPoint slides for marketing and/or bidding purposes; IBM has also invited channel partners such as Neusoft, Digital China, CS&S, and Asiainfo to take the training.
TOGAF is too high-level to be relevant. End user trainees learning the enterprise architecture framework that Yao’s team introduced in China in 2007 found it to be too high-level and conceptual. Also, the trainers only went through what was written in the textbook without using industry-specific cases or practice-related information — making the training less relevant and difficult to apply.
To publish this post, I must first discredit myself. I'm 42, and while I love what I do for a living, Michael Dell is 47 and his company was already doing $1 million a day in business by the time he was 31. I look at guys like that and think: "What the h*** have I been doing with my time?!?" Nevertheless, Dell is a company I've followed more closely than any other but Apple since the mid-2000s, and in the past two years I've had the opportunity to meet with several Dell executives and employees - from Montpellier, France to Austin, Texas.
Because I cover both PC hardware and client virtualization here at Forrester, I am in regular contact with Dell customers, who inevitably ask what we as a firm think about Dell's announcement that it will go private, just as they have asked about HP these past several quarters since the circus started over there with Mr. Apotheker. Hopefully what follows here is information and analysis that you as an I&O leader can rely on to develop your own perspective on Dell with more clarity.
Complexity is Dell's enemy
The complexity of Dell as an organization right now is enormous. They have been on a "Quest" to reinvent themselves, going from a PC and server vendor to an end-to-end solutions vendor, with the hope that their chief differentiator could be unique software that drives more repeatable solutions delivery and, in turn, lower solutions cost. I use the word 'hope' deliberately, because doing that means focusing most of their efforts on a handful of solutions that no other vendor can provide. It's a massive undertaking because, as a public company, they have to do this while keeping cash flow going in the lines of business from each acquisition, and growing those, while they develop the focused solutions. So far, they haven't.