Yesterday Intel held a major press and analyst event in San Francisco to talk about its vision for the future of the data center, anchored on what has become in many eyes the virtuous cycle of future infrastructure demand: mobile devices and “the Internet of Things” driving cloud resource consumption, which in turn spews out big data, which spawns storage and the requirement for yet more computing to analyze it. As usual with these kinds of events from Intel, it was long on serious vision and strong on strategic positioning, but a bit parsimonious on actual future product information, with a couple of interesting exceptions.
Content and Core Topics:
No major surprises on the underlying demand-side drivers. The proliferation of mobile devices, the impending Internet of Things and the mountains of big data they generate will combine to continue to increase demand for cloud-resident infrastructure, particularly servers and storage, both of which present Intel with an opportunity to sell semiconductors. Needless to say, Intel laced its presentations with frequent reminders about who is the king of semiconductor manufacturing.
Having been away from the Linux scene for a while, I recently took a look at a newer version of Linux, SUSE Enterprise Linux Version 11.3, which is representative of the latest feature sets from the Linux 3.0 et seq kernel available to the entire Linux community, including SUSE, Red Hat, Canonical and others. It is apparent, both from the details on SUSE 11.3 and from perusing the documentation of other distribution providers, that Linux has continued to mature nicely, both as a foundation for large scale-out clouds and as a strong contender for the kind of enterprise workloads that previously were only comfortable on either RISC/UNIX systems or large Microsoft Server systems. In effect, Linux has continued its maturation to the point where its feature set and scalability begin to look like a top-tier UNIX from only a couple of years ago.
Among the enterprise technologies that caught my eye:
Scalability – The Linux kernel now scales to 4096 x86 CPUs and up to 16 TB of memory, well into high-end UNIX server territory, and will support the largest x86 servers currently shipping.
I/O – The Linux kernel now includes btrfs (a geeky contraction of “Better File System”), an open source file system that promises much of the scalability and feature set of Oracle’s popular ZFS file system, including checksums, copy-on-write (CoW), snapshots, and advanced logical volume management with thin provisioning. The latest releases also include advanced features like geo-clustering and remote data replication to support advanced HA topologies.
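As a rough illustration of the snapshot and thin-provisioning behavior mentioned above, here is a minimal btrfs workflow sketch. The device and mount-point names are hypothetical, and the commands require root on a system with the btrfs tools installed:

```shell
# Create a btrfs file system (checksumming and copy-on-write are on by default)
mkfs.btrfs /dev/sdb1
mount /dev/sdb1 /mnt/data

# Subvolumes behave like thin-provisioned volumes: space is consumed only as data is written
btrfs subvolume create /mnt/data/vol1

# Snapshots are near-instant CoW copies that share unchanged blocks with the original
btrfs subvolume snapshot /mnt/data/vol1 /mnt/data/vol1-snap
```

Because snapshots share blocks with their source until either side is modified, they cost almost nothing to create, which is what makes btrfs attractive for the kind of frequent point-in-time protection that used to require ZFS or a high-end array.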
Dell just picked up Enstratius for an undisclosed amount today, making the cloud management vendor the latest well-known cloud controller to get snapped up by a big infrastructure or OS vendor. Dell will add Enstratius cloud management capabilities to its existing management suite for converged and cloudy infrastructure, which includes element manager and configuration automator Active System Manager (ASM, the re-named assets acquired with Gale Technologies in November), Quest Foglight performance monitoring, and (maybe) what’s still around from Scalent and DynamicOps.
This is a good move for Dell, but it doesn’t exactly clarify where all these management capabilities will fall out. The current ASM product seems to be a combo of code from the original Scalent acquisition upgraded with the GaleForce product; regardless of what’s in it, though, what it does is discover, configure and deploy physical and virtual converged infrastructure components. A private cloud automation platform, basically. Like all private cloud management stacks, it does rapid template-based provisioning and workflow orchestration. But it doesn’t provision apps or provision to public or open-source cloud stacks. That’s where Enstratius comes in.
The industry is abuzz with speculation that IBM will sell its x86 server business to Lenovo. As usual, neither party is talking publicly, but at this point I’d give it a better than even chance, since these kinds of rumors tend to be based on leaks of real discussions as opposed to being completely delusional fantasies. Usually.
So the obvious question then becomes “Huh?”, or, slightly more eloquently stated, “Why would they do something like that?”. Aside from the possibility that this might all be fantasy, two explanations come to mind:
1. IBM is crazy.
2. IBM is not crazy.
Of the two explanations, I’ll have to lean toward the latter, although we might be dealing with a bit of the “Hey, I’m the new CEO and I’m going to do something really dramatic today” syndrome. IBM sold its PC business to Lenovo amid popular disbelief and dire predictions, and IBM is doing very well today because it transferred its investments and focus to higher-margin businesses, like servers and services. Lenovo makes low-end servers today that it bootstrapped with licensed IBM technology, and IBM is finding it very hard to compete with Lenovo and other low-cost providers. Maybe the margins on its commodity server business have sunk below some critical internal benchmark for return on investment, and it believes that it can get a better return on its money elsewhere.
In recent research, I have laid out some similarities and differences between tablets and laptops. But the tablet market is growing ever more fragmented, yielding subtleties that aren’t always captured with a simple “PC vs. tablet” dichotomy. As Infrastructure & Operations (I&O) professionals try to determine the composition of their hardware portfolios, the product offerings themselves are more protean. Just describing the “tablet” space is much harder than it used to be. Today, we’re looking at multiple OSes (iOS, Android, Windows, Blackberry, forked Android), form factors (eReader, tablet, hybrid, convertible, touchscreen laptop), and screen sizes (from 5” phablets to giant 27” furniture tablets) – not to mention a variety of brands, price points, and applications. If, as rumored, Microsoft were to enter the 7” to 8” space – competing with Google Nexus, Apple iPad Mini, and Kindle Fire HD – we would see even more permutations. Enterprise-specific – some vertically specific – devices are proliferating alongside increased BYO choices for workers.
With the next major spin of Intel server CPUs due later this year, HP’s customers have been waiting for the next iteration of its core c-Class BladeSystem, which has been on the market for almost seven years without any major changes to its basic architecture. IBM made a major enhancement to its BladeCenter architecture, replacing it with the new PureSystems, and Cisco’s offering is new enough that it should last for at least another three years without a major architectural refresh. That left HP customers wondering when HP would introduce its next blade enclosure, and whether it would be compatible with current products.
At its partner conference this week, HP announced a range of enhancements to its blade product line that in combination represent a strong evolution of the current product while maintaining compatibility with current investments. This positioning is similar to what IBM did with its BladeCenter to BladeCenter-H upgrade, preserving current customer investment and extending the life of the current server and peripheral modules for several more years.
Tech Stuff – What Was Announced
Among the goodies announced on February 19 was an assortment of performance and functionality enhancements, including:
Platinum enclosure — The centerpiece of the announcement was the new c7000 Platinum enclosure, which boosts the speed of the midplane signal paths from 10 GHz to 14 GHz, a 40% increase in raw bandwidth of the critical midplane, across which all of the enclosure I/O travels. In addition to the higher-bandwidth midplane, the new enclosure incorporates location-aware sensors and also doubles the available storage bandwidth.
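The 40% figure follows directly from the signal-rate change, since raw midplane bandwidth scales linearly with the signaling rate; a one-line check:

```shell
# Percent increase in raw midplane bandwidth when signaling goes from 10 GHz to 14 GHz
echo $(( (14 - 10) * 100 / 10 ))   # prints 40
```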
Emerson Network Power today announced that it is entering into a significant partnership with IBM, both to integrate Emerson’s new Trellis DCIM suite into IBM’s ITSM products and to jointly sell Trellis to IBM customers. This partnership has the potential to reshape the DCIM market segment for several reasons:
Connection to enterprise IT — Emerson has sold a lot of chillers, UPS and PDU equipment and has tremendous cachet with facilities types, but they don’t have a lot of people who know how to talk IT. IBM has these people in spades.
IBM can use a DCIM offering — IBM, despite being a huge player in the IT infrastructure and data center space, does not have a DCIM product. Its Maximo product seems to be more of a dressed-up asset management product, and this partnership is an acknowledgement that building a full-fledged DCIM product would have been both expensive and time-consuming.
IBM adds sales bandwidth — My belief is that the development of the DCIM market has been delivery bandwidth constrained. Market leaders Nlyte, Emerson and Schneider do not have enough people to address the emerging total demand, and the host of smaller players are even further behind. IBM has the potential to massively multiply Emerson’s ability to deliver to the market.
To publish this post, I must first discredit myself. I'm 42, and while I love what I do for a living, Michael Dell is 47 and his company was already doing $1 million a day in business by the time he was 31. I look at guys like that and think: "What the h*** have I been doing with my time?!?" Nevertheless, Dell is a company I've followed more closely than any other but Apple since the mid-2000s, and in the past two years I've had the opportunity to meet with several Dell executives and employees - from Montpellier, France to Austin, Texas.
Because I cover both PC hardware and client virtualization here at Forrester, I am in regular contact with Dell customers, who inevitably ask what we as a firm think about Dell's announcement that it will go private, just as they have asked about HP these past several quarters since the circus started over there with Mr. Apotheker. Hopefully what follows here is information and analysis that you as an I&O leader can rely on to develop your own perspective on Dell with more clarity.
Complexity is Dell's enemy
The complexity of Dell as an organization right now is enormous. They have been on a "Quest" to re-invent themselves and go from a PC and server vendor to an end-to-end solutions vendor, with the hope that their chief differentiator could be unique software that drives more repeatable solutions delivery and, in turn, lowers solutions cost. I use the word 'hope' deliberately, because doing that means focusing most of their efforts on a handful of solutions that no other vendor could provide. It's a massive undertaking because, as a public company, they have to do this while keeping cash flow going in the lines of business from each acquisition, and growing those, while they develop the focused solutions. So far, they haven't.
Yesterday I had the pleasure of attending Dell’s Technology Camp in Amsterdam. It was a full-on day, starting at 7:30am, and I finally got back home at 11pm, but it was a fascinating event. Dell is currently heavily in the news, and various sources are reporting that over the coming weekend it is likely to go private. Going from public back to private is not an easy decision to take, and Microsoft’s reported interest in Dell certainly makes this situation all the more interesting. This will be a big change and I am sure will be the subject of detailed analysis and commentary next week.
For now, I would rather concentrate on an interesting conversation that I had with Sam Greenblatt, Chief Architect for Dell’s Enterprise Solutions Group. Sam needs no introduction, as his career and successes are very impressive. As many of you may know, before Dell he worked for HP as their CTO for webOS, but he has also worked with Steve Jobs and many of the other founders of the modern IT market. As an analyst, I am lucky that I get to speak with many senior executives, and so I thought I would record this session for you. I apologize if the sound quality is not crystal clear, but I am no Bill Talbott (famous Hollywood sound engineer), and we actually had to do this recording standing up in a kitchen area, as the venue was one big open space. I was also fairly restrained in my questioning so that I could share the content a bit more quickly with you.
It’s 15 minutes in length and here are the questions I asked:
(1) So what’s your role at Dell?
(2) What does success look like in this role?
(3) What would you say are three key strengths for Dell?
(4) What is the main challenge that Dell faces today?
Now that we’ve been back from the holidays for a month, I’d like to round out the 2013 predictions season with a look at the year ahead in server virtualization. If you’re like me (or this New York Times columnist), you’ll agree that a little procrastination can sometimes be a good thing to help collect and organize your plans for the year ahead. (Did you buy that rationalization?)
We’re now more than a decade into the era of widespread x86 server virtualization. Hypervisors are certainly a mature (if not peaceful) technology category, and the consolidation benefits of virtualization are now incontestable. 77% of you will be using virtualization by the end of this year, and you’re running as many as 6 out of 10 workloads in virtual machines. With such strong penetration, what’s left? In our view: plenty. It’s time to ask your virtual infrastructure, “What have you done for me lately?”
With that question in mind, I asked my colleagues on the I&O team to help me predict what the year ahead will hold. Here are the trends in 2013 you should track closely:
Consolidation savings won’t be enough to justify further virtualization. For most I&O pros, the easy workloads are already virtualized. Looking ahead at 2013, what’s left are the complex business-critical applications the business can’t run without (high-performance databases, ERP, and collaboration top the list). You won’t virtualize these to save on hardware; you’ll do it to make them mobile, so they can be moved, protected, and duplicated easily. You’ll have to explain how virtualizing these apps will make them faster, safer, and more reliable—then prove it.