How Telcos Will Play A Larger Role In Cloud Computing

Dan Bieler

Corporate CIOs should not ignore the network-centric nature of cloud-based solutions when developing their cloud strategies and choosing their cloud providers. And end users should understand what role(s) telcos are likely to play in the evolution of the wider cloud marketplace.

Like many IT suppliers, telcos view cloud computing as a big opportunity to grow their business. Cloud computing will dramatically affect telcos — but not by generating significant additional revenues. Instead, cloud computing will alter the role of telcos in the value chain irreversibly, putting their control over usage metering and billing at risk. Alarm bells should ring for telcos as Google, Amazon, et al. put their own billing and payment relationships with customers in place.

Telcos must defend their revenue collection role at all costs; failure to do so will accelerate their decline to invisible utility status. At the same time, cloud computing offers telcos a chance to become more than bit-pipe providers. Cloud solutions will increasingly be delivered by ecosystems of providers that include telcos; software, hardware, and network equipment vendors; and over-the-top (OTT) providers.

Telcos have a chance to leverage their network and financial assets to grow into the role of ecosystem manager. To start down this path, telcos will provide cloud-based solutions, such as connected healthcare and smart grid offerings, that are adjacent to communication services they already provide (like home area networking and machine-to-machine connectivity). Expanding from this beachhead into a broader role in cloud solutions markets is a tricky path that only some telcos will successfully navigate.

We are analyzing the potential role of telcos in cloud computing markets in the research report Telcos as Cloud Rainmakers.

AMD Acquires SeaMicro — Big Bet On Architectural Shift For Servers

Richard Fichera

At its recent financial analyst day, AMD indicated that it intends to differentiate itself by creating products with advantages in niche markets, servers among them, and to shake up the trench warfare that has kept it on the losing side of its long battle with Intel (my interpretation, not AMD management’s words). Today, at least on the server side of the business, AMD made a move that can potentially give it visibility and differentiation by acquiring innovative server startup SeaMicro.

SeaMicro has attracted our attention since its appearance (blog post 1, blog post 2) with its innovative architecture, which dramatically reduces power and improves density by sharing components like I/O adapters, disks, and even BIOS over a proprietary fabric. The irony here is that SeaMicro came to market tightly aligned with Intel, which at one point even introduced a special dual-core packaging of its Atom CPU to allow SeaMicro to improve its density and power efficiency. Most recently, SeaMicro and Intel announced a new model featuring Xeon CPUs to address the more mainstream segments that were not a good fit for SeaMicro’s original Atom-based offering.

Read more

HP Announces Gen8 Servers – Focus On Opex And Improving SLAs Sets A High Bar For Competitors

Richard Fichera

On Monday, February 13, HP announced its next turn of the great wheel for servers: the Gen8 family. Interestingly, since the announcement came ahead of Intel’s official launch of the supporting E5 server CPUs, HP had nothing to say about the CPUs or the performance of these systems. But even if the CPU information had been available, it would have been a sideshow to the main thrust of the Gen8 launch: improving the overall TCO (particularly Opex) of servers by making them more automated, more manageable, and easier to remediate when there is a problem, along with enhancements to storage, data center infrastructure management (DCIM) capabilities, and a fundamental change in the way that services and support are delivered.

With a little more granularity, the major components of the Gen8 server technology announcement included:

  • Onboard Automation – A suite of capabilities and tools that provides improved agentless local intelligence for quicker, lower-labor-cost provisioning, including faster boot cycles, “one click” firmware updates of single or multiple systems, intelligent and greatly improved boot-time diagnostics, and run-time diagnostics. This is apparently implemented by more powerful onboard management controllers and by pre-provisioning a lot of software on built-in flash memory, which is used by the onboard controller. HP claims that the combination of these tools can increase operator productivity by up to 65%. One of the eye-catching features is an iPhone app that will scan a code printed on the server, go back through the Insight Management Environment stack, and trigger the appropriate script to provision the server.[i] Possibly a bit of a gimmick, but a cool-looking one (a rough, hypothetical sketch of such a scan-to-provision flow follows below).
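
To make the “scan a code, trigger a script” idea concrete, here is a minimal, purely hypothetical sketch of such a flow in Python. The management endpoint, tag format, and provisioning profiles below are my own illustrative assumptions for the sketch, not HP’s actual Insight Management or iLO interfaces.

    # Hypothetical sketch only: the endpoint, tag format, and profiles are
    # illustrative assumptions, not HP's published API.
    import requests

    PROVISIONING_PROFILES = {
        # Maps a (made-up) model tag to an illustrative deployment profile.
        "DL380-GEN8": {"os_image": "rhel6-web", "firmware": "latest-stable"},
        "BL460-GEN8": {"os_image": "esxi5-host", "firmware": "latest-stable"},
    }

    def provision_from_scanned_tag(tag_payload, management_url="https://mgmt.example.local/provision"):
        """Decode a scanned server tag, look up a profile, and submit a provisioning job."""
        model, _, serial = tag_payload.partition(":")  # e.g. "DL380-GEN8:SGH123ABC"
        profile = PROVISIONING_PROFILES.get(model)
        if profile is None:
            raise ValueError("no provisioning profile for model %r" % model)
        job = {"serial": serial}
        job.update(profile)
        response = requests.post(management_url, json=job, timeout=10)  # hypothetical endpoint
        response.raise_for_status()
        return response.json()

    # Example call, with a payload as it might be encoded on the server's printed tag:
    # provision_from_scanned_tag("DL380-GEN8:SGH123ABC")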
Read more

Pushing The Envelope – SeaMicro Introduces Low-Power Xeon Servers

Richard Fichera

In late 2010, I noted that startup SeaMicro had introduced an ultra-dense server using Intel Atom chips in an innovative fabric-based architecture that allowed it to factor out much of the power overhead of a large multi-CPU server (http://blogs.forrester.com/richard_fichera/10-09-21-little_servers_big_applications_intel_developer_forum). Along with many observers, I noted that the original SeaMicro server was well suited to many lightweight edge processing tasks but that the system would not support more traditional compute-intensive tasks due to the performance of the Atom core. I was, however, quite taken with the basic architecture, which uses a proprietary high-speed (1.28 Tb/s) 3D mesh interconnect to allow the CPU cores to share network, BIOS, and disk resources that are normally replicated on a per-server basis in conventional designs, with commensurate reductions in power and an increase in density.

Read more

2011 Retrospective – The Best And The Worst Of The Technology World

Richard Fichera

OK, it’s time to stretch the 2012 writing muscles, and what better way to do it than with the time-honored “retrospective” format? But rather than try to itemize all the news and come up with a list of a dozen or more interesting things, I decided instead to pick the best and the worst – events and developments that show the amazing range of the technology business, its potential and its daily frustrations. So, drum roll, please. My personal nominations for the best and worst of the year (along with a special extra bonus category) are:

The Best – IBM Watson stomps the world’s best human players in Jeopardy. In early 2011, IBM put its latest deep computing project, Watson, up against some of the best players in the world in a game of Jeopardy. Watson, consisting of hundreds of IBM Power CPUs, gazillions of bytes of memory and storage, and arguably the most sophisticated rules engine and natural language recognition capability ever developed, won hands down. If you haven’t seen the videos of this event, you should – seeing the IBM system fluidly answer very tricky questions is amazing. There is no sense that it is parsing the question and then sorting through 200 – 300 million pages of data per second in the background as it assembles its answers. This is truly the computer industry at its best. IBM lived up to its brand image as the oldest and strongest technology company and showed us the potential for integrating computers into untapped new solutions. Since the Jeopardy event, IBM has been working on commercializing Watson with an eye toward delivering domain-specific expert advisors. I recently listened to a presentation by a doctor participating in the trials of a Watson medical assistant, and the results were startling in terms of the potential to assist medical professionals in diagnostic procedures.

Read more

HP Expands Its x86 Options With Mission-Critical Program – Defense And Offense Combined

Richard Fichera

Today HP announced a new set of technology programs and future products designed to move x86 server technology for both Windows and Linux more fully into the realm of truly mission-critical computing. My interpretation is that this is a combined defensive and offensive move on HP’s part, one that will both protect HP as its Itanium/HP-UX portfolio slowly declines and offer attractive, potentially unique options for current and future customers who want to deploy increasingly critical services on x86 platforms.

What’s Coming?

Bearing in mind that the earliest of these elements will not be in place until approximately mid-2012, the key elements that HP is currently disclosing are:

ServiceGuard for Linux – This is a big win for Linux users on HP and removes a major operational and architectural hurdle for HP-UX migrations. ServiceGuard is a highly regarded clustering and HA facility on HP-UX and includes many features for local and geographically distributed HA. The lack of ServiceGuard is often cited as a risk in HP-UX migrations. Its availability by mid-2012 will remove yet another barrier to smooth migration from HP-UX to Linux and will help ensure that HP retains the business as customers migrate off HP-UX.

Analysis engine for x86 – The analysis engine is internal software that provides system diagnostics, predictive failure analysis, and self-repair on HP-UX systems. HP will port it to selected x86 servers, although the delivery date is uncommitted. My guess is that since the analysis engine probably requires some level of hardware assist, it will be paired with the next item on the list…

Read more

A Letter To Meg Whitman From The Market

Glenn O'Donnell

Dear Meg,

Now that you’ve settled into your latest position as the head of Hewlett-Packard, we wish to make a request of you. That request is, “Please take HP back to the greatness it once represented.” The culture once known as “The HP Way” has gone astray and the people have suffered as a result. Those people are of course the vast collection of incredible HP employees, but also its even vaster collection of customers. They (ahem, we) once believed in the venerable enterprise that Bill Hewlett and David Packard conceived and built through the latter half of the 20th century.

HP became renowned for its innovation and the quality of its products. While they tended to be pricey, we bought HP products because we knew they would perform well and perform long. We could count on HP to not only sell us technology, but to guide us in our journey to use this technology for the betterment of our own lives. We yearn for the old HP that inspired Steve Jobs to change the world – and he did!

We need not remind you of what transpired over the past decade or so, but we do have some suggestions for what you should address to restore the luster of HP’s golden age:

  • Commit to a mission. HP needs an audacious mission that articulates a purpose for every employee, from you and the HP board all the way down to the lowest levels. Borrow a page from IBM’s Smarter Planet mission. While it sometimes seems over the top, that’s the whole point. It is over the top and speaks to a bold mission to create a new world. Slowly but surely, IBM is making the planet smarter. Steve Jobs got Apple to convince us to Think Different, and we did. What is HP’s mission?
Read more

HP Embraces Calxeda ARM Architecture With "Project Moonshot" – New Hyperscale Business Unit Program

Richard Fichera

What's the Big Deal?

Emerging ARM server vendor Calxeda has been hinting for some time that it had a significant partnership announcement in the works, and while we didn’t necessarily not believe them, we hear a lot of claims from startups telling us to “stay tuned” for something big. Sometimes they pan out; sometimes they simply go away. But this morning Calxeda surpassed our expectations by unveiling a single major systems partner – and that partner happens to be Hewlett-Packard, which dominates the worldwide market for x86 servers.

At its core (an unintended but not bad pun), the HP Hyperscale business unit’s Project Moonshot and Calxeda’s server technology are about improving the efficiency of web and cloud workloads, and they promise improvements in excess of 90% in power efficiency, and similar improvements in physical density, compared with current x86 solutions. As I noted in my first post on ARM servers and other documents, even if these estimates turn out to be exaggerated, there is still a generous window within which to do much, much better than current technologies. And workloads (such as memcached, Hadoop, and static web servers) will be selected for their fit to this new platform, so the workloads that run on these new platforms will potentially come close to the cases quoted by HP and Calxeda.
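
As a rough sanity check on those numbers, here is a back-of-envelope translation of the claimed power-efficiency gain into a work-per-joule multiplier; a minimal sketch in Python, assuming (my reading, not HP’s or Calxeda’s wording) that a 90% improvement means roughly 90% less energy for the same unit of work:

    # Back-of-envelope only: the 90% figure is the HP/Calxeda claim quoted above;
    # treating it as "90% less energy per unit of work" is my own assumption.
    def work_per_joule_multiplier(power_reduction):
        """Multiplier on work done per joule implied by a fractional power reduction."""
        return 1.0 / (1.0 - power_reduction)

    print(work_per_joule_multiplier(0.90))  # claim at face value: ~10x
    print(work_per_joule_multiplier(0.75))  # a discounted 75% reduction: 4x
    print(work_per_joule_multiplier(0.50))  # even a 50% reduction: 2x

Even heavily discounted, the multiplier stays comfortably above 1x, which is the “generous window” referred to above.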

The Program And New HP Business Unit

Read more

The Strategy Of Consumerization Was One Factor In HP's Decision To Keep PSG

Frank Gillett

HP made the right decision today to keep the Personal Systems Group. Beyond the reasons cited (supply chain and sales synergies and the expense of spinning out), it's also crucial for HP to remain in the market for personal devices, which is entering a period of radical transformation and opportunity. The innovations spawned first by RIM with the BlackBerry, followed by the transformative effects of Apple's iPhone and iPad, are beginning to ripple into the PC market. Apple's MacBook Air and Lion operating system, combined with Microsoft's Metro interface for Windows 8, herald the beginning of a transformation of personal computing devices. By keeping PSG, HP has the opportunity to innovate and differentiate in a PC market that will move away from commodity patterns.

For vendor strategists at companies of all sizes, one of the lessons of HP's decision is that consumer businesses are becoming more relevant to succeeding in commercial products for end users. During the announcement call today, CEO Meg Whitman talked about the importance of "consumerization" in winning business from enterprises. I heartily endorse that view and look forward to sharing a report soon on how consumerization is changing commercial product development.

Do you think consumerization was a part of why HP kept PCs?

What effect do you think consumerization will have in IT markets?

Respond here: http://community.forrester.com/thread/5789

UNIX – Dead Or Alive?

Richard Fichera

There has been a lot of ill-considered press coverage about the “death” of UNIX, and coverage of the wholesale migration of UNIX workloads to Linux, some of which (the latter, not the former) I have contributed to. But to set the record straight, the extinction of UNIX is not going to happen in our lifetime.

While UNIX revenues are not growing at any major clip, they appear to have had a slight uptick over the past year, probably due to a surge by IBM, and seem to be nicely stuck in the $18 billion to $20 billion annual range. But what is important is the “why,” not the exact dollar figure.

UNIX on proprietary RISC architectures will stay around for several reasons, which primarily revolve around its being the only close alternative to mainframes with regard to specific high-end operational characteristics:

  • Performance – If you need the biggest single-system SMP OS image, UNIX is still the only realistic commercial alternative other than mainframes.
  • Isolated bulletproof partitionability – If you want to run workloads on dynamically scalable and electrically isolated partitions, with the option to move workloads between them while running, then UNIX is your answer.
  • Near-ultimate availability – If you are looking for the highest levels of reliability and availability short of mainframes and custom fault-tolerant systems, UNIX is the answer. It still possesses slight availability advantages, especially if you factor in the more robust online maintenance capabilities of the leading UNIX OS variants.
Read more