Facebook Opens New Data Center – And Shares Its Technology

Richard Fichera

A Peek Behind The Wizard's Curtain

The world of hyperscale web properties has been shrouded in secrecy, with major players like Google and Amazon releasing only tantalizing dribbles of information about their infrastructure architecture and facilities, on the presumption that this information represents critical competitive IP. In one bold gesture, Facebook, which has certainly catapulted itself into the ranks of top-tier sites, has reversed that trend: it has simultaneously disclosed a wealth of information about the design of its new data center in rural Oregon and contributed much of the IP involving racks, servers, and power architecture to an open forum, in the hope of fostering an ecosystem of suppliers that can provide future equipment to Facebook and other growing web companies.

The Data Center

By approaching the data center as an integrated combination of servers for known workloads and the facility itself, Facebook has broken new ground in data center architecture.

At a high level, a traditional enterprise DC has a utility transformer that feeds power to a centralized UPS; power is then distributed through multiple levels of PDUs to the equipment racks. This is a reliable and flexible architecture, one that has proven its worth in generations of commercial data centers. Unfortunately, in exchange for this flexibility and protection, it exacts a penalty of 6% to 7% of the power before it even reaches the IT equipment.
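To put that penalty in concrete terms, here is a minimal back-of-envelope sketch in Python. The 6% to 7% total overhead comes from the text; the per-stage efficiencies are hypothetical illustrative values, not measured figures from any particular facility.

```python
# Back-of-envelope model of power lost before reaching the IT
# equipment in a traditional enterprise data center. The 6-7% total
# overhead is from the text; the per-stage efficiencies below are
# hypothetical illustrative values.

UTILITY_FEED_KW = 1000.0  # power arriving from the utility transformer

STAGE_EFFICIENCY = {
    "centralized UPS (double conversion)": 0.955,  # assumed
    "multi-level PDU distribution": 0.985,         # assumed
}

power_kw = UTILITY_FEED_KW
for stage, efficiency in STAGE_EFFICIENCY.items():
    power_kw *= efficiency
    print(f"after {stage}: {power_kw:.1f} kW")

loss_pct = 100 * (1 - power_kw / UTILITY_FEED_KW)
print(f"lost before the racks: {loss_pct:.1f}%")  # ~6%, matching the text
```

On a 1,000 kW feed, that is roughly 60 kW of electricity paid for but never delivered to a server.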

Read more

Cisco Buys A Credible Automation Entry Point With NewScale

Glenn O'Donnell

Cisco announced today its intent to acquire NewScale, a small but well-respected automation software vendor. The financial terms were not disclosed, but it is a small deal in terms of money spent. It is big in the sense that Cisco needed the kind of capabilities NewScale offers, and NewScale has proven to be one of the most innovative and visible players in its market segment.

The market segment in question is what has been described as “the tip of the iceberg” for the advanced automation suites needed to create and operate cloud computing services. The “tip” refers to the part of the overall suite that is exposed to customers, while the majority of the “magic” of cloud automation stays hidden from view – as it should be. NewScale’s main capabilities deal with building and managing the service catalog and providing a self-service front end that lets cloud consumers request their own services from that catalog. Forrester has been bullish on these capabilities because they are the customer-facing side of cloud – the most important aspect – whereas most of the cloud focus has been directed at “back end” technologies such as virtual server deployment and workload migration. Those are certainly important, but a cloud is not a cloud unless the consumers of its services can trigger their deployment on their own. This is the true power of NewScale, one of the best in this sub-segment.
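To make the “tip of the iceberg” idea concrete, here is a minimal hypothetical sketch of a service catalog with a self-service front end. The offerings, fields, and function names are invented for illustration; this is not NewScale’s actual product or API.

```python
from dataclasses import dataclass

# Hypothetical sketch of a service catalog with a self-service front
# end. Offerings and fields are invented for illustration; this is
# not NewScale's actual data model.

@dataclass
class CatalogItem:
    name: str
    description: str
    approval_required: bool

CATALOG = [
    CatalogItem("small-linux-vm", "1 vCPU / 2 GB virtual server", False),
    CatalogItem("mysql-instance", "managed MySQL database", True),
]

def request_service(item_name: str, requester: str) -> str:
    """The customer-facing 'tip': validate a request against the
    catalog, then hand off to the hidden back-end automation
    (deployment, workload placement) the consumer never sees."""
    item = next(i for i in CATALOG if i.name == item_name)
    if item.approval_required:
        return f"{requester}: '{item.name}' queued for approval"
    return f"{requester}: '{item.name}' dispatched to provisioning"

print(request_service("small-linux-vm", "app-team-7"))
```

The point of the sketch is the separation of concerns: the consumer only ever touches the catalog and the request call; everything behind the hand-off is the “magic” that stays hidden.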

Read more

Oracle Says No To Itanium – Embarrassment For Intel, Big Problem For HP

Richard Fichera

Oracle announced today that it will cease development for Itanium across its product line, stating that it believed, after consultation with Intel management, that x86 was Intel’s strategic platform. Intel, of course, responded with a press release specifically stating that at least two additional Itanium products are in active development – Poulson (whose initial specifications, if not availability, have been announced) and Kittson, about which little is known.

This is a huge move, and one that seems like a kick carefully aimed at the you-know-whats of HP’s Itanium-based server business, which competes directly with Oracle’s SPARC-based Unix servers. If Oracle stays the course in the face of what will certainly be immense pressure from HP, mild censure from Intel, and consternation on the part of many large customers, the consequences are pretty obvious:

  • Intel loses prestige and credibility for Itanium, and faces a potential drop-off of business from its only large Itanium customer. Nonetheless, the majority of Intel’s server business is x86, and it will, in the end, suffer only a token loss of revenue. Intel’s response to this move by Oracle will be muted – a public defense of Itanium, but no fireworks.
Read more

ARM Servers – Calxeda Opens The Kimono For A Tantalizing Tease

Richard Fichera

Calxeda, one of the most visible stealth-mode startups in the industry, has finally given us an initial peek at the first iteration of its server plans – plans that both meet our inflated expectations for this ARM server startup and validate some of the initial claims of ARM proponents.

While still holding its actual delivery dates and detailed specifications close to the vest, Calxeda did reveal the following cards from its hand:

  • The first reference design, which will be provided to OEM partners as well as delivered directly to selected end users and developers, will be based on an ARM Cortex A9 quad-core SOC design.
  • The SOC, as Calxeda will demonstrate with one of its reference designs, will enable OEMs to design servers as dense as 120 ARM quad-core nodes (480 cores) in a 2U enclosure, with an average consumption of about 5 watts per node (1.25 watts per core) including DRAM; the arithmetic is sketched after this list.
  • While Calxeda was not forthcoming with details about performance, topology, or protocols, it did say the SOC will contain an embedded fabric that lets the individual quad-core SOC servers communicate with one another.
  • Most significantly for prospective users, Calxeda claims – and has some convincing models to back this up – that its products will deliver 5X to 10X the performance/watt (and an even better ratio of performance/watt/$ once price is factored in) of any products it expects to see in the market when it ships.
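Taking the disclosed figures at face value, the density arithmetic checks out; the sketch below uses only the numbers above, plus a hypothetical 42U-rack extrapolation of our own.

```python
# Density arithmetic from Calxeda's disclosed figures.
nodes_per_2u = 120      # quad-core SOC nodes per 2U enclosure
cores_per_node = 4
watts_per_node = 5.0    # average, including DRAM (per Calxeda)

print(nodes_per_2u * cores_per_node)      # 480 cores per 2U, as stated
print(watts_per_node / cores_per_node)    # 1.25 W per core, as stated
print(nodes_per_2u * watts_per_node)      # 600 W per 2U enclosure

# Hypothetical extrapolation (ours): 21 such enclosures in a 42U rack.
enclosures = 21
print(enclosures * nodes_per_2u * cores_per_node)         # 10,080 cores
print(enclosures * nodes_per_2u * watts_per_node / 1000)  # ~12.6 kW per rack
```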
Read more

Intel Fires The First Shot Across The Bows Of ARM

Richard Fichera

Intel, despite a popular tendency to equate a dominant market position with indifference to competitive threats, has not been sitting still waiting for the ARM server phenomenon to engulf it in a wave of ultra-low-power servers. Intel is fiercely competitive, and it would be silly for any new entrant to assume that Intel will ignore a threat to the heart of a high-growth segment.

In 2009, Intel released a microserver specification for compact low-power servers, and along with competitor AMD, it has been aggressive in driving down the power envelope of its mainstream multicore x86 server products. Recent momentum behind ARM-based servers has heated up this potential competition, however, and Intel has taken the fight deeper into the low-power realm with the recent introduction of the N570, an existing embedded low-power processor, as a server CPU aimed squarely at emerging ultra-low-power and dense servers. The N570, a dual-core Atom processor, is currently being used by a single server partner, ultra-dense server manufacturer SeaMicro (see Little Servers For Big Applications At Intel Developer Forum), and will allow it to deliver its current 512 Atom cores with half the number of CPU components and some power savings.

Technically, the N570 is a dual-core Atom CPU with 64-bit arithmetic (a differentiator against ARM) and the same 32-bit (4 GB) physical memory limitation as current ARM designs, and it should dissipate between 8 and 10 watts.
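The component-count claim is simple arithmetic; here is a sketch based on the figures in the text, with the caveat that the power numbers cover only the CPU packages, not SeaMicro’s fabric, memory, or board power.

```python
# Moving SeaMicro's 512-core configuration from single-core Atoms to
# the dual-core N570 halves the CPU package count.
total_cores = 512
cores_per_n570 = 2

packages = total_cores // cores_per_n570
print(packages)  # 256 CPU packages instead of 512

# CPU-only power envelope using the 8-10 W dissipation range cited
# above (fabric, memory, and board power excluded).
for watts in (8, 10):
    print(f"{watts} W/package -> {packages * watts / 1000:.2f} kW of CPU power")
```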

Read more

Mobile App Internet: Making Sense Of The 2011 Mobile Hysteria

John McCarthy

Starting with CES in early January and running through Mobile World Congress last week in Barcelona, the mobile industry has been in a feeding frenzy of announcement activity. At CES, the activity centered on Android-powered tablets. At Mobile World Congress, it was about the big Microsoft/Nokia deal and vendors scrambling to differentiate their Android handsets.

But behind all these announcements, a broader shift is under way toward what Forrester calls the mobile app Internet and the accompanying wave of app development and management. We have just published a report that explores the different vectors of innovation and sizes the mobile app Internet as an app sales and services opportunity.

The report looks at the three factors beyond hardware that will drive the market:

  1. Even at $2.43 per app, the app market will grow to $38B by 2015 as more tablets and smartphones are sold and the number of paid-for apps per device increases, thanks to improvements in the app store experience (a back-of-envelope check follows this list).
  2. A perfect storm of innovation is being unleashed by the merger of mobile, cloud, and smart computing. I see innovation coming from the combination of apps and smart devices like appliances and cars, from improved user experience as apps better leverage the context supplied by device sensors, and from apps taking advantage of new capabilities like near field communications (NFC) for things such as mobile payments.
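For scale, the report’s own numbers imply an enormous unit volume; here is a back-of-envelope check using only the figures above:

```python
# Back-of-envelope check on the app market sizing above.
avg_price_per_app = 2.43   # USD, from the report
market_size_2015 = 38e9    # USD, projected

paid_sales = market_size_2015 / avg_price_per_app
print(f"~{paid_sales / 1e9:.1f} billion paid app sales by 2015")  # ~15.6B
```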
Read more

AMD Bumps Its Specs, Waits For Interlagos And Bulldozer

Richard Fichera

Since introducing its Core 2 architecture, Intel has reversed much of the damage done to it by AMD in the server space, with attendant publicity. AMD, however, has been quietly reclaiming some ground with its 12-core 6100 series CPUs, showing strength in benchmarks that emphasize high throughput in process-rich environments as opposed to maximum performance per core. Manufacturers have also told us that several of their AMD-based systems are enjoying very strong customer acceptance due to the throughput of the 12-core CPUs combined with attractive pricing. As a fillip to this success, AMD this past week announced speed bumps for the 6100-series products to give a slight performance boost as they continue to compete with Intel’s Xeon 5600 and 7500 products (Intel’s Sandy Bridge server products have not yet been announced).

But the real news last week was the quiet subtext that the anticipated 16-core Interlagos products based on the new Bulldozer core appear to be on schedule for Q2 ’11 shipments to system partners, who should be able to ship systems during Q3, and that AMD is still certifying the parts as compatible with the current sockets used for the 12-core 6100 CPUs. This implies that system partners will be able to deliver products based on the new parts very rapidly.

Actual performance of these systems will obviously depend on the workloads being run, but our gut feeling is that while they will not rival the per-core performance of Intel’s Xeon 7500 CPUs, in large throughput-oriented environments with high process counts – a description that fits a great many web and middleware environments – these CPUs, with up to a 50% per-core performance advantage over current AMD parts, may deliver some impressive benchmarks and keep competition in the server space at a boil, which in the end is always helpful to customers.
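If the vendor’s “up to” numbers hold, the socket-level throughput math is striking; here is a sketch that takes the per-core claim at face value (a best case, not a measured result):

```python
# Socket-level throughput implied by the figures above, taking the
# "up to 50% per core" claim at face value (best case, not measured).
current_cores = 12      # 6100-series
interlagos_cores = 16   # Bulldozer-based Interlagos
per_core_gain = 1.5     # "up to a 50% performance advantage per core"

aggregate = (interlagos_cores / current_cores) * per_core_gain
print(f"up to {aggregate:.1f}x throughput per socket")  # 2.0x
```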

Verizon Steps Into IaaS Cloud Leadership Ranks

James Staten

Pop quiz: What’s the fastest way to build a credible, enterprise-relevant, and highly profitable cloud computing services practice? Buy one that already is. That’s exactly what Verizon did last week when it pushed $1.4B across the table to Terremark. Despite its internal efforts to build an infrastructure-as-a-service (IaaS) business over the last two years, Verizon simply couldn’t learn best practices fast enough to match the market gains this acquisition delivers. Terremark has one of the strongest IaaS hosting businesses in the market and perhaps the best enterprise mix among the top-tier providers’ customer bases. It also has a significant presence with government clients, including the United States General Services Administration (GSA), which has production systems running in a hybrid mode spanning Terremark’s IaaS and traditional managed hosting services.

Confidential Forrester client inquiries have shown Verizon struggling to win competitive IaaS bids with its computing-as-a-service (CaaS) offering, often losing to Terremark. This led Verizon to resell the Terremark solution (as its CaaS for SMB) so it could try before buying.

Read more


IBM And ARM Continue Their Collaboration – Major Win For ARM

Richard Fichera

Last week IBM and ARM Holdings Plc quietly announced a continuation of their collaboration on advanced process technology, this time with a stated goal of developing ARM IP optimized for IBM physical processes down to a future 14 nm size. The two companies have been collaborating on semiconductors and SOC design since 2007, and this extension has several important ramifications for both companies and their competitors.

It is a clear indication that IBM retains a major interest in low-power and mobile computing, despite having divested its desktop and laptop computer business to Lenovo, and that it will be in a position to harvest this technology, particularly ARM's modular approach to composing SOC systems, for future productization.

For ARM, the implications are clear. Its latest announced product, the Cortex A15, which will probably appear in system-level products in approximately 2013, will initially be produced at 32 nm with a roadmap to 20 nm. The existence of a path to a potential 14 nm product serves notice that the new ARM architecture will have a process roadmap that keeps it on Intel’s heels for another decade. ARM has parallel alliances with TSMC and Samsung as well, and there is no reason to think these will not be extended, but the IBM alliance is an additional insurance policy. Beyond semiconductor technology, IBM also has a deep well of systems and CPU IP that certainly cannot hurt ARM.

Read more

Is The IaaS/PaaS Line Beginning To Blur?

James Staten

Forrester’s survey and inquiry research shows that, when it comes to cloud computing choices, our enterprise customers are more interested in infrastructure-as-a-service (IaaS) than platform-as-a-service (PaaS), despite the fact that PaaS is simpler to use. Well, this line is beginning to blur, thanks to new offerings from Amazon Web Services LLC and upstart Standing Cloud.

The concern about PaaS lies around lock-in: developers and infrastructure and operations professionals fear that by writing to the PaaS layer’s services, their applications will lose portability (a concern that has long applied to middleware generally, PaaS or otherwise). As a result, IaaS platforms that let you control the deployment model down to the middleware, OS, and VM resource choices are more open and portable. The tradeoff, though, is that developer autonomy comes with a degree of complexity. As the figure below shows, there is a direct correlation between the degree of abstraction a cloud service provides and the skill set required of the customer. If your development skills are limited to scripting, web page design, and form creation, most SaaS platforms provide the right abstraction for you to be productive. If you are a true coder with skills in Java, C#, or other languages, PaaS offerings let you build more complex applications and integrations without having to manage middleware, OS, or infrastructure configuration; the PaaS services take care of this. IaaS, however, requires you to know this stuff. As a result, cloud services have an inverse pyramid of potential customers. Despite being more appealing to enterprise customers, IaaS is the hardest to use.
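The correlation the figure describes can be paraphrased roughly as follows; this is our illustrative summary, not Forrester’s actual figure:

```python
# Rough paraphrase of the abstraction-vs-skills correlation described
# above (illustrative; not the actual figure from the research).
LAYERS = [
    # (service model, what the customer manages, skills required)
    ("SaaS", "application settings only", "scripting, web design, forms"),
    ("PaaS", "application code",          "Java, C#, or similar languages"),
    ("IaaS", "middleware, OS, VM sizing", "full infrastructure skills"),
]

for model, manages, skills in LAYERS:
    print(f"{model:4s} | manages: {manages:26s} | needs: {skills}")
```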

Read more