Cisco UCS at Five Years – Successful Disruption and a New Status Quo

Richard Fichera

March Madness – Five Years Ago

It was five years ago, in March 2009, that Cisco formally announced “Project California,” its (possibly intentionally) worst-kept secret, as the Cisco Unified Computing System. At the time, I was working at Hewlett-Packard, and our collective feelings as we realized that Cisco really did intend to challenge us in the server market were a mixed bag. Some of us were amused at the presumption; others were concerned that there might be something there, since we had odd bits and pieces of intelligence about the former Nuova, the Cisco spin-out/spin-in that developed UCS. Most of us were convinced that Cisco would have trouble running a server business at margins we knew would be substantially lower than those of its core switch business. Sitting on top of our shiny, still relatively new HP c-Class BladeSystem, which had overtaken IBM’s BladeCenter as the leading blade product, we were collectively unconcerned, as well as puzzled by Cisco’s decision to upset a nice, stable arrangement in which IBM, HP, and Dell sold possibly a billion dollars’ worth of Cisco gear between them.

Fast Forward

Five years later, HP is still number one in blade server units and revenue, but Cisco now appears to be number two in blades and is closing in on number three worldwide in overall server sales. The numbers are impressive:

  • 32,000 net new customers in five years, with 14,000 repeat customers.
  • A claimed annual run rate of more than $2 billion.
  • A claimed order growth rate in the “mid-30s” percent range, probably about three times the growth rate of any competing product line.

Lessons Learned

Read more

The Recent Ruling In Oracle vs Rimini Street Has Significant Implications For The Wider Outsourcing Industry

Duncan Jones

I've just published a Quick Take report that explains why the Nevada District Court’s recent decision on some of the issues in the four-year-old Oracle versus Rimini Street case has significant implications for sourcing professionals — and, indeed, the entire technology services industry — beyond its impact on the growing third-party support (3SP) market.

http://www.forrester.com/Quick+Take+The+Rimini+Street+Ruling+Has+Serious+Implications+For+Oracle+Customers/fulltext/-/E-RES115572

Read more

Case Study: News UK Transformed Its Data Center To Become More Agile

Sudhanshu Bhandari

Data center procurement approaches have significantly changed in the past five years. While many CIOs are following a cloud-first approach to commissioning new services, most enterprises struggle to move the majority of their infrastructure to public clouds due to application interdependencies and legacy infrastructure silos.

As profiled in my recently published case study, in 2008 News UK was one of a few news media companies embarking on infrastructure transformation. The firm’s data center transformation delivered a modern, agile, lean, and resilient infrastructure in a colocated data center with automated disaster recovery and business continuity. The case study highlights the significance of migration and consolidation as a step towards colocating your data center or migrating services to the cloud. Below are some highlights from the report:

  • Transformation areas: virtualization, compute, storage, and network. News UK had an aggressive timetable to review public cloud offerings and make strategic investments to help it smoothly transition to delivering IT infrastructure via the public cloud. The firm considered all aspects of IT infrastructure delivery and implemented the latest technologies to achieve its transformation goals. Key areas of focus included virtualization, compute and operating systems, and storage and networking.
Read more

Globalizing Tencent Puts Data Centers Where Its New Customers Are

Gene Cao

Now that WeChat has more than 100 million overseas subscribers, Tencent, China’s leading web content provider, faces a new challenge: improving the experience of its customers outside of China. Steep rises in content consumption — largely driven by the increasing use of mobile devices to access services and information — represent a significant opportunity for content providers like Tencent to go global. To achieve this, Tencent has taken positive steps, boosting its investment in data centers and networking outside of China.

To improve its user experience in the rest of Asia, Tencent recently announced that it will colocate one data center in Hong Kong and has chosen Equinix to operate it. This is already the second node that Tencent has built outside of mainland China; the first was implemented in Canada to serve North American users.

As an Internet company that operates its own large data centers in mainland China, Tencent has deep experience in data center construction and management and has leveraged this experience to develop best practices and key criteria for data center provider selection. These include:

  • Networking and interconnection options. As Tencent intends to rapidly expand its business into more countries, it needs carrier-neutral data center providers to offer the necessary connectivity options. For its Hong Kong implementation, Tencent used Equinix to optimize transit routes to achieve lower latency and better connect users inside and outside of mainland China; the data center provider can access multiple networks and peer with members of the Equinix Internet Exchange.
Read more

Build Or Colocate? The ROI Of Data Center Facilities In India

Manish Bahl

Many Indian CIOs and their infrastructure and operations (I&O) teams are in the market for a new data center as their existing data centers are running low on space, power, and cooling capacity. Forrester finds that data growth, virtualization, and consolidation are the main culprits behind these capacity challenges in India. For instance:

  • Data growth increases data center storage investments. Forrester estimates that storage consumes somewhere between 5% and 15% of the total power consumed in the data center and that the volume of data is growing by 30% to 50% per year (a rough, illustrative projection of what these rates imply follows this list).
  • Virtualization drives higher-density infrastructure architecture. Organizations face pressure to support more extreme compute densities and experiment with new infrastructure architectures.
  • Data center consolidation puts more pressure on centralized facilities. Per Forrester’s Forrsights Budgets And Priorities Survey, Q4 2012, consolidating IT infrastructure was a critical or high priority for nearly 70% of Indian IT decision-makers. This concentrates power, cooling, and space demands at centralized sites.
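
To make those rates concrete, here is a minimal back-of-the-envelope sketch (in Python) of how a storage footprint, and its share of data center power, could compound over a few years. The starting capacity, total facility power draw, and the midpoint rates used below are purely hypothetical illustrations, not figures from the Forrester data.

    def project_storage(capacity_tb, annual_growth, years):
        """Compound a starting storage footprint forward at a fixed annual growth rate."""
        return [capacity_tb * (1 + annual_growth) ** y for y in range(years + 1)]

    start_tb = 500        # hypothetical current storage footprint, in TB
    dc_power_kw = 400     # hypothetical total data center power draw, in kW
    storage_share = 0.10  # storage at ~10% of total power (midpoint of the 5%-15% estimate)
    growth = 0.40         # 40% annual data growth (midpoint of the 30%-50% estimate)

    for year, tb in enumerate(project_storage(start_tb, growth, 3)):
        # Rough simplifying assumption: storage power scales linearly with capacity.
        kw = dc_power_kw * storage_share * (tb / start_tb)
        print(f"Year {year}: ~{tb:,.0f} TB of data, ~{kw:.0f} kW drawn by storage")

Even at these midpoint rates, the storage footprint roughly triples in three years, which is why existing facilities run out of space, power, and cooling capacity well before the end of their planned life.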
Read more

You can learn from the clouds but you can’t compete

James Staten

If you want to be the best in data center operations, you are right to benchmark yourself against the cloud computing leaders – just don’t delude yourself into thinking you can match them.

In our latest research report, Rich Fichera and I updated a 2007 study that looked at what enterprise infrastructure leaders could learn from the best in the cloud and hosting market. We found that while the cloud leaders may have greater buying power, deeper IT R&D, and huge security teams, many of their best practices apply to a standard enterprise data center – or at least part of it.

There are several key differences between you and the cloud leaders, many of which are detailed in the table below. Perhaps the starkest, however, is that for the cloud providers, the infrastructure is the product. That means it gets budgetary priority and R&D attention that enterprise I&O leaders can only dream about.

Some key differences between clouds, hosters, and you

Read more

Open Compute Project – Rising Relevance And More Stakeholders

Richard Fichera

Background

Today’s announcements at the Open Compute Project (OCP) 2013 Summit can be considered tangible markers of the OCP crossing the line into real relevance: an important influence on emerging hyper-scale and cloud computing, with potential bleed-through into the world of enterprise data centers and computing. This is obviously a subjective viewpoint – there is no objective standard for relevance, only post-facto recognition that something was important or not. But in this case I’m going to stick my neck out and predict that OCP will have some influence and will be a sticky presence in the industry for many years.

Even if its specs (which generally look quite good) are not picked up verbatim, they will influence major vendors, which will, much like the auto industry in the 1970s, get the message that there is a market for economical, “low-frills” alternatives.

Major OCP Initiatives

To date, OCP has announced a number of useful hardware specifications, including:

Read more

HP’s Troubles Continue, But Does It Matter?

Richard Fichera

HP seems to be on a tear, bouncing from litigation with one of its historically strongest partners to a succession of CEOs in the last few years, continued layoffs, and a recent massive write-down of its EDS purchase. And, as we learned last week, the circus has not left town. The latest “oops” is an $8.8 billion write-down of its Autonomy purchase, made under the brief and ill-fated leadership of Léo Apotheker, combined with allegations of serious fraud on the part of Autonomy during the acquisition process.

The eventual outcome of this latest fiasco will be fun to watch, with many interesting sideshows along the way, including:

  • Whose fault is it? Can they blame it on Léo, or will it spill over onto Meg Whitman, who was on the board and approved it?
  • Was there really fraud involved?
  • If so, how did HP miss it? What about all the internal and external people involved in due diligence on this acquisition? I’ve been on the inside of attempted acquisitions at HP, and there were always many more people around with the power to say “no” than people trying to move the company forward with innovative acquisitions; the most persistent and compulsive of the group were the various finance groups involved. It’s really hard to see how they could have missed a little $5 billion discrepancy in revenues, but that’s just my opinion — I was usually the one trying to get around the finance guys. :)
Read more

Tectonic Shift In The ARM Ecosystem — AMD Announces ARM Intentions

Richard Fichera

Earlier this week, in conjunction with ARM Holdings plc’s announcement of the upcoming Cortex-A53 and Cortex-A57, full 64-bit CPU implementations based on the ARMv8 specification, AMD also announced that it would be designing and selling SOC (system on a chip) products based on this technology in 2014, roughly coinciding with the availability of 64-bit parts from ARM and other partners.

This is a major event in the ARM ecosystem. AMD, while much smaller than Intel, is still a multi-billion-dollar enterprise, and for the second largest vendor of x86 chips to also throw its hat into the ARM ecosystem and potentially compete with its own mainstream server and desktop CPU business is an aggressive move on the part of AMD management that carries some risk and much potential advantage.

Reduced to its essentials, what AMD announced (and in some cases hinted at):

  • Intention to produce A53/A57 SOC modules for multiple server segments. There was no formal statement of intentions regarding tablet/mobile devices, but it doesn’t take a rocket scientist to figure out that AMD wants a piece of this market, and ARM is a way to participate.
  • The announcement is wider than just the SOC silicon. AMD also hinted at making a range of IP, including the fabric architecture from its SeaMicro acquisition, available in the form of “reusable IP blocks.” My interpretation is that it intends to make the fabric, reference architectures, and various SOCs available to its hardware system partners.
Read more

Intel (Finally) Announces Its Latest Server Processors — Better, Faster, Cooler

Richard Fichera

Today, after two of its largest partners had already announced the system portfolios that will use it, Intel finally announced one of the worst-kept secrets in the industry: the Xeon E5-2600 family of processors.

OK, now that I’ve gotten my jab in at the absurdity of the announcement scheduling, let’s look at the thing itself. In a nutshell, these new processors, built on the same 32 nm production process as the previous-generation Xeon 5600 series but incorporating the new “Sandy Bridge” architecture, are, in fact, a big deal. They incorporate several architectural innovations and will bring major improvements in power efficiency and performance to servers. Highlights include:

  • Performance improvements on selected benchmarks of up to 80% above the previous Xeon 5600 CPUs, apparently due to both improved CPU architecture and larger memory capacity (up to 24 DIMMs at 32 GB per DIMM equals a whopping 768 GB capacity for a two-socket, eight-core/socket server).
  • Improved I/O architecture, including an on-chip PCIe 3 controller and a special mode that allows I/O controllers to write directly to the CPU cache without a round trip to memory — a feature that only a handful of I/O device developers will use, but one that contributes to improved I/O performance and lowers CPU overhead during PCIe I/O.
  • Significantly improved energy efficiency, with the SPECpower_ssj2008 benchmark showing a 50% improvement in performance per watt over previous models.
Read more