The Tablet Market Is Fragmenting Into Subcategories

JP Gownder

In recent research, I have laid out some similarities and differences between tablets and laptops. But the tablet market is growing ever more fragmented, yielding subtleties that aren’t always captured by a simple “PC vs. tablet” dichotomy. As Infrastructure & Operations (I&O) professionals try to determine the composition of their hardware portfolios, the product offerings themselves keep shifting shape. Just describing the “tablet” space is much harder than it used to be. Today, we’re looking at multiple OSes (iOS, Android, Windows, BlackBerry, forked Android), form factors (eReader, tablet, hybrid, convertible, touchscreen laptop), and screen sizes (from 5” phablets to giant 27” furniture tablets) – not to mention a variety of brands, price points, and applications. If, as rumored, Microsoft were to enter the 7” to 8” space – competing with the Google Nexus, Apple iPad Mini, and Kindle Fire HD – we would see even more permutations. Enterprise-specific – some vertically specific – devices are proliferating alongside increased BYO choices for workers.

Read more

HP Shows its Next Generation Blade and Converged Infrastructure – No Revolution, but Strong Evolution

Richard Fichera

With the next major spin of Intel server CPUs due later this year, HP’s customers have been waiting for the next iteration of its core c-Class BladeSystem, which has been on the market for almost seven years without any major changes to its basic architecture. IBM has already made a major enhancement to its BladeCenter architecture, replacing it with the new PureSystems, and Cisco’s offering is new enough that it should last for at least another three years without a major architectural refresh. That left HP customers wondering when HP would introduce its next blade enclosure, and whether it would be compatible with current products.

At its partner conference this week, HP announced a range of enhancements to its blade product line that in combination represent a strong evolution of the current product while maintaining compatibility with current investments. This positioning is similar to what IBM did with its BladeCenter to BladeCenter-H upgrade, which preserved current customer investment and extended the life of the then-current server and peripheral modules for several more years.

Tech Stuff – What Was Announced

Among the goodies announced on February 19 was an assortment of performance and functionality enhancements, including:

  • Platinum enclosure — The centerpiece of the announcement was the new c7000 Platinum enclosure, which boosts the speed of the midplane signal paths from 10 GHz to 14 GHz, a 40% increase in the raw bandwidth of the critical midplane, across which all of the enclosure’s I/O travels. In addition to the higher-bandwidth midplane, the new enclosure incorporates location-aware sensors and doubles the available storage bandwidth.
Read more

IBM Embraces Emerson for DCIM – Major Change in DCIM Market Dynamics

Richard Fichera

Emerson Network Power today announced a significant partnership with IBM, both to integrate Emerson’s new Trellis DCIM suite into IBM’s ITSM products and to jointly sell Trellis to IBM customers. This partnership has the potential to reshape the DCIM market segment for several reasons:

  • Connection to enterprise IT — Emerson has sold a lot of chillers, UPSes, and PDUs and has tremendous cachet with facilities types, but it doesn’t have a lot of people who know how to talk IT. IBM has these people in spades.
  • IBM can use a DCIM offering — IBM, despite being a huge player in the IT infrastructure and data center space, does not have a DCIM product. Its Maximo product is more of a dressed-up asset management product, and this partnership is an acknowledgement that building a full-fledged DCIM product would have been both expensive and time-consuming.
  • IBM adds sales bandwidth — My belief is that the development of the DCIM market has been constrained by delivery bandwidth. Market leaders Nlyte, Emerson, and Schneider do not have enough people to address the emerging total demand, and the host of smaller players are even further behind. IBM has the potential to massively multiply Emerson’s ability to deliver to the market.
Read more

Why Dell Going Private Is Less Risky For Customers Than Its Current Path

David Johnson

To publish this post, I must first discredit myself. I'm 42, and while I love what I do for a living, Michael Dell is 47 and his company was already doing $1 million a day in business by the time he was 31. I look at guys like that and think: "What the h*** have I been doing with my time?!?" Nevertheless, Dell is a company I've followed more closely than any other but Apple since the mid-2000s, and in the past two years I've had the opportunity to meet with several Dell executives and employees – from Montpellier, France, to Austin, Texas.

Because I cover both PC hardware and client virtualization here at Forrester, I'm in regular contact with Dell customers, who inevitably ask what we as a firm think about Dell's announcement that it intends to go private, just as they have asked about HP these past several quarters since the circus started over there with Mr. Apotheker. Hopefully what follows is information and analysis that you as an I&O leader can rely on to develop your own perspective on Dell with more clarity.

Complexity is Dell's enemy
The complexity of Dell as an organization right now is enormous. They have been on a "Quest" to reinvent themselves and go from PC and server vendor to end-to-end solutions vendor, with the hope that their chief differentiator could be unique software that drives more repeatable solutions delivery and, in turn, lower solutions cost. I use the word "hope" deliberately, because doing that means focusing most of their efforts on a handful of solutions that no other vendor can provide. It's a massive undertaking because, as a public company, they have to do this while keeping cash flow going in the lines of business from each acquisition and growing those lines, all while they develop the focused solutions. So far, they haven't.
Read more

Vendor Podcast: Sam Greenblatt, Chief Architect @Dell

John Rakowski


Yesterday I had the pleasure of attending Dell’s Technology Camp in Amsterdam. It was a full-on day – starting at 7:30 a.m., and I finally got back home at 11 p.m. – but it was a fascinating event. Dell is heavily in the news at the moment, and various sources are reporting that over the coming weekend the company is likely to go private. Going from public back to private is not an easy decision to make, and Microsoft’s reported interest in Dell certainly makes the situation all the more interesting. This will be a big change, and I am sure it will be the subject of detailed analysis and commentary next week.

For now, I would rather concentrate on an interesting conversation I had with Sam Greenblatt, Chief Architect for Dell’s Enterprise Solutions Group. Sam needs little introduction: his career and successes are impressive. As many of you may know, before Dell he was HP’s CTO for webOS, and he has also worked with Steve Jobs and many of the other founders of the modern IT market. As an analyst, I am lucky to speak with many senior executives, so I thought I would record this session for you. I apologize if the sound quality is not crystal clear – I am no Bill Talbott (the famous Hollywood sound engineer), and we actually had to do this recording standing up in a kitchen area, as the venue was one big open space. I was also fairly restrained in my questioning so that I could share the content with you a bit more quickly.

It’s 15 minutes in length, and here are the questions I asked:

(1) So what’s your role at Dell?

(2) What does success look like in this role?

(3) What would you say are three key strengths for Dell?

(4) What is the main challenge that Dell faces today?

Read more

2013 Server Virtualization Predictions: Driving Value Above And Beyond The Hypervisor

Dave Bartoletti

Now that we’ve been back from the holidays for a month, I’d like to round out the 2013 predictions season with a look at the year ahead in server virtualization. If you’re like me (or this New York Times columnist), you’ll agree that a little procrastination can sometimes be a good thing to help collect and organize your plans for the year ahead. (Did you buy that rationalization?)

We’re now more than a decade into the era of widespread x86 server virtualization. Hypervisors are certainly a mature (if not peaceful) technology category, and the consolidation benefits of virtualization are now incontestable. Seventy-seven percent of you will be using virtualization by the end of this year, and you’re running as many as six out of 10 workloads in virtual machines. With such strong penetration, what’s left? In our view: plenty. It’s time to ask your virtual infrastructure, “What have you done for me lately?”

With that question in mind, I asked my colleagues on the I&O team to help me predict what the year ahead will hold. Here are the trends in 2013 you should track closely:

  1. Consolidation savings won’t be enough to justify further virtualization. For most I&O pros, the easy workloads are already virtualized. Looking ahead at 2013, what’s left are the complex business-critical applications the business can’t run without (high-performance databases, ERP, and collaboration top the list). You won’t virtualize these to save on hardware; you’ll do it to make them mobile, so they can be moved, protected, and duplicated easily. You’ll have to explain how virtualizing these apps will make them faster, safer, and more reliable—then prove it.
Read more

IBM STG Is Upbeat On PureSystems And Growth Markets

Manish Bahl


Last month, I attended an IBM Systems and Technology Group (STG) Executive Summit in the US, where IBM outlined its key strategies for accelerating sales in growth markets, including:

  • Aggressively marketing PureSystems. IBM is positioning PureSystems (a pre-integrated, converged system of servers, storage, and networking technology with automated self-management and built-in SmartCloud technology) as an integrated and simplified data center offering to help organizations reduce the money and time they spend on the management and administration of servers.

  • Continuing to expand in “tier two” cities. Over the next 12 months, IBM plans to continue its expansion outside of major metropolitan areas by opening small branches in nearly 100 locations in growth markets, most notably India, China, Brazil, and Russia.

  • Expanding channel capabilities and accelerating new routes to market. IBM plans to certify 2,800 global resellers on PureSystems in 2013 and upgrade the solution and technical expertise of 500 of its partners. The company also plans to drive the revenue of managed service providers (MSPs) by working closely with them to develop cloud-based services and solutions on PureSystems.

Considering the vast potential demand from growth markets and the slowdown in developed markets, IBM is among the growing camp of multinational vendors aggressively targeting growth markets as an engine for future business. Some of my key observations on IBM’s event and recent announcements:

Read more

HP’s Troubles Continue, But Does It Matter?

Richard Fichera

HP seems to be on a tear, bouncing from litigation with one of its historically strongest partners to multiple CEOs in the last few years, continued layoffs, and a recent massive write-down of its EDS purchase. And, as we learned last week, the circus has not left town. The latest “oops” is an $8.8 billion write-down for its purchase of Autonomy, under the brief and ill-fated leadership of Léo Apotheker, combined with allegations of serious fraud on the part of Autonomy during the acquisition process.

The eventual outcome of this latest fiasco will be fun to watch, with many interesting sideshows along the way, including:

  • Whose fault is it? Can they blame it on Léo, or will it spill over onto Meg Whitman, who was on the board and approved it?
  • Was there really fraud involved?
  • If so, how did HP miss it? What about all the internal and external people involved in due diligence on this acquisition? I’ve been on the inside of attempted acquisitions at HP, and there were always many more people around with the power to say “no” than there were people trying to move the company forward with innovative acquisitions; the most persistent and compulsive of the group were the various finance teams involved. It’s really hard to see how they could have missed a little $5 billion discrepancy in revenues, but that’s just my opinion — I was usually the one trying to get around the finance guys. :)
Read more

AMD Acquires SeaMicro — Big Bet On Architectural Shift For Servers

Richard Fichera

[For some reason this has been unpublished since April — so here it is well after AMD announced its next spin of the SeaMicro product.]

At its recent financial analyst day, AMD indicated that it intended to differentiate itself by creating products with advantages in niche markets – with specific mention, among other segments, of servers – and to generally shake up the trench warfare that has kept it on the losing side of its lifelong battle with Intel (my interpretation, not AMD management’s words). Today, at least for the server side of the business, it made a move that can potentially offer it visibility and differentiation: acquiring innovative server startup SeaMicro.

SeaMicro has attracted our attention since its appearance (blog post 1, blog post 2) with an innovative architecture that dramatically reduces power and improves density by sharing components like I/O adapters, disks, and even the BIOS over a proprietary fabric. The irony here is that SeaMicro came to market tightly aligned with Intel, which at one point even introduced a special dual-core packaging of its Atom CPU to let SeaMicro improve its density and power efficiency. Most recently, SeaMicro and Intel announced a new model featuring Xeon CPUs to address the more mainstream segments that were not part of SeaMicro’s original Atom-based offering.

Read more

Data Center Power And Efficiency – Public Enemy #1 Or The Latest Media Punching Bag?

Richard Fichera

This week, the New York Times ran a series of articles about data center power use (and abuse): “Power, Pollution and the Internet” (http://nyti.ms/Ojd9BV) and “Data Barns in a Farm Town, Gobbling Power and Flexing Muscle” (http://nyti.ms/RQDb0a). Among the claims made in the articles was that data centers were “only using 6 to 12% of the energy powering their servers to deliver useful computation.” Like a lot of media broadsides, the reality is more complex than the dramatic claims. Technically, the articles are correct in claiming that only a very small fraction of the electricity going to a server is used to perform useful work, but this dramatic claim is not a fair representation of the overall efficiency picture. The Times analysis fails to take into consideration that not all of the power in a data center goes to servers, so the claim of 6% efficiency for the servers is not representative of the real operational efficiency of the complete data center.

On the other hand, while I think the Times chooses drama over even-keeled reporting, the actual picture for even a well-run data center is not as good as its proponents would claim. Consider:

  • Start with a new data center with a PUE of 1.2 (very efficient), meaning about 83% of the facility’s power goes to IT workloads.
  • Then assume that 60% of that IT power goes to servers (storage and network get the rest), for a net of almost 50% of the total power going into servers. If the servers run at an average utilization of 10%, then only 10% of 50%, or 5%, of the power is actually doing real IT processing. Of course, the real "IT number" is servers plus storage plus network, so depending on how you account for them, the IT usage could be as high as 38% (.83 × .4 + .05).
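The arithmetic in these bullets can be sketched in a few lines of Python. The numbers (PUE of 1.2, a 60% server share of IT power, 10% average utilization) come from the example above; the variable names are mine.

```python
# Data center efficiency arithmetic from the example above.
pue = 1.2           # power usage effectiveness: total facility power / IT power
server_share = 0.60 # servers' share of IT power (storage and network get the rest)
utilization = 0.10  # average server utilization

it_fraction = 1 / pue                            # ~0.83 of facility power reaches IT gear
server_power = it_fraction * server_share        # ~0.50 of total power goes to servers
useful_server_work = server_power * utilization  # ~0.05 -> the "5%" figure

# If storage and network (the other 40% of IT power) are counted as fully
# useful, the optimistic "IT usage" number is:
optimistic = it_fraction * (1 - server_share) + useful_server_work  # ~0.38

print(f"power reaching servers: {server_power:.0%}")
print(f"useful server work:     {useful_server_work:.0%}")
print(f"optimistic IT usage:    {optimistic:.0%}")
```

The spread between 5% and 38% shows how much the headline number depends on whether idle-but-powered storage and network gear counts as "useful" — which is exactly the accounting choice the Times articles glossed over.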
Read more