Intel Shows the Way Forward, Demos 22 nm Parts with Breakthrough Semiconductor Design

Richard Fichera

What Intel said and showed

Intel has been publishing research for about a decade on what it calls “3D Tri-Gate” transistors, which held out the hope of both improved performance and better power efficiency. Today Intel revealed details of the commercialization of this research in its upcoming 22 nm process and demonstrated actual systems based on 22 nm CPU parts.

The new products, under the internal name “Ivy Bridge,” are the process shrink of the recently announced Sandy Bridge architecture in the next “Tick” cycle of Intel’s famous “Tick-Tock” design methodology, where the “Tock” is a new optimized architecture and the “Tick” is the shrinking of that architecture onto the next-generation semiconductor process.

What makes these Tri-Gate transistors so innovative is that they change the fundamental geometry of the semiconductor from a basically flat “planar” design to one with more vertical structure, earning them the description “3D.” For users, the concept is simpler to understand – this new transistor design, which will become the standard across all of Intel’s products moving forward, delivers some fundamental benefits to CPUs implemented with it:

  • Leakage current is reduced to near zero, resulting in very efficient operation for systems in an idle state.
  • Power consumption at equivalent performance is reduced by approximately 50% from Sandy Bridge’s already improved results on its 32 nm process (see the illustrative calculation below).
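
To see where a number like 50% can come from, here is a minimal sketch using the classic CMOS dynamic-power relation P ≈ C·V²·f. This illustrates the physics Intel is exploiting, not Intel’s published methodology; the capacitance and voltage figures below are hypothetical.

```python
# Illustrative only: classic CMOS dynamic-power model, P = C * V^2 * f.
# The capacitance and voltage values are hypothetical, not Intel data.

def dynamic_power(c_eff: float, v_dd: float, freq_hz: float) -> float:
    """Approximate switching power in watts: P = C * V^2 * f."""
    return c_eff * v_dd**2 * freq_hz

C_EFF = 1.0e-9   # effective switched capacitance (farads), assumed
FREQ = 3.0e9     # 3 GHz: same performance point for both parts

planar_32nm = dynamic_power(C_EFF, v_dd=1.00, freq_hz=FREQ)   # planar, 32 nm
trigate_22nm = dynamic_power(C_EFF, v_dd=0.70, freq_hz=FREQ)  # tri-gate, 22 nm

print(f"planar 32 nm  : {planar_32nm:5.2f} W")
print(f"tri-gate 22 nm: {trigate_22nm:5.2f} W")
print(f"reduction     : {1 - trigate_22nm / planar_32nm:.0%}")  # ~51%
```

Because switching power scales with the square of supply voltage, a transistor that switches reliably at a lower voltage at the same clock rate yields an outsized power saving.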
Read more

Good Proactive Marketing Can’t Fix Problems Like Amazon’s EC2 Outage . . .

Tim Harmon

. . . but bad reactive marketing can make the problem much worse.

[co-authored by Zachary Reiss-Davis]

As has been widely reported, in sources broad and narrow, Amazon.com’s cloud service EC2 went down for an extended period of time yesterday, bringing many of the hottest high-tech startups down with it, ranging from the well known (Foursquare, Quora) to the esoteric (About.me, EveryTrail). For a partial list of smaller startups affected, see http://ec2disabled.com/.

While this is clearly a blow both to Amazon.com and to the cloud hosting market in general, it also serves as an example of how technology companies must quickly respond publicly and engage with their customers when problems arise. Amazon.com let its customers control the narrative by not participating in any social media response to the problem; its only communication was through its online dashboard, with vague platitudes. Instead, it allowed angry heads of product management and CEOs who are used to communicating with their customers on blogs and Twitter to unequivocally blame Amazon.com for the problem.

Many startups, including Quora, EveryTrail, eCairn, and MobyPicture, blame Amazon.com for their downtime.

Read more

Marketing In “Cloud-Time”

Tim Harmon

What is it that you think makes one tech company stand out from another? “My product is better than your product”? Not anymore. “My salespeople are better than your salespeople”? Possibly. “My channel is better than your channel”? You’re getting warmer. How about, “My marketing machine is better than your marketing machine”?

For example, 41% of customers identified “the vendor’s (not including its salespeople’s) ability to understand our business problem” as the most important vendor action factor when selecting a tech vendor, compared with only 21% who identified “the vendor’s salesperson’s ability to understand our business problem.” Marketing is clearly the difference-maker.

But cloud computing changes everything. The implications of cloud computing go far beyond its technology delivery/consumption model. It seems I get questions from tech marketers about all things cloud these days. A few examples:

  • “How can I use the cloud more effectively to market our solutions?” (Answer: It’s not what you read in USA Today about Facebook and Twitter. According to the results of our 2011 B2B Social Technographics® survey, discussion forums and professional social networking sites (read: not consumer social sites) outpace Facebook and Twitter ten-fold as sources informing businesses’ technology purchase decisions.)
Read more

Software License Models Are Changing — Participate in Forrester’s Online Survey

Holger Kisker

The lines are blurring between software and services — with the rise of cloud computing, that trend has accelerated faster than ever. But customers aren’t just looking at cloud business models, such as software-as-a-service (SaaS), when they want more flexibility in the way they license and use software. While in 2008 upfront perpetual software licenses (capex) made up more than 80% of a company’s software license spending, this percentage will drop to about 70% in 2011. The other 30% will consist of different, more flexible licensing models, including financing, subscription services, dynamic pricing, risk sharing, and used-license models.

Forrester is currently digging deeper into the different software licensing models, their current status in the market, and their benefits and challenges. We kindly ask companies that sell software and/or software-related services to participate in our ~20-minute online Forrester Research Software Licensing Survey and let us know about their current and future licensing strategies. Of course, all answers are optional and will be kept strictly confidential. We will only use anonymous, aggregated data in our upcoming research report, and interested participants can get a consolidated advance summary of the survey results if they choose to enter an optional email address in the survey.

Read more

The Empire Strikes Back – Intel Reveals An Effective Low-Power And Micro Server Strategy

Richard Fichera

A lot has been written about potential threats to Intel’s low-power server hegemony, including discussions of threats not only from its perennial minority rival AMD but also from emerging non-x86 technologies such as ARM servers. While these are real threats, with the potential to disrupt Intel’s position in the low-power and small-form-factor server segment if left unanswered, Intel’s management has not been asleep at the wheel. As part of the rollout of the new Sandy Bridge architecture, Intel recently disclosed its platform strategy for what it is defining as “Micro Servers” – small single-socket servers with shared power and cooling to improve density beyond the generally accepted dividing line of one server per rack unit (RU) that separates “standard density” from “high density.” While I think that Intel’s definition is a bit myopic, mostly serving to attach a label to a well-established category, it is a useful tool for segmenting low-end servers and talking about the relevant workloads.

Intel’s strategy revolves around introducing successive generations of its Sandy Bridge and future architectures as Low Power (LP) and Ultra Low Power (ULP) products, with promises of up to 2.2X the performance per watt and 30% less actual power compared with equivalent previous-generation x86 servers, as outlined in an Intel roadmap chart accompanying the announcement.
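
To make those multipliers concrete, here is a rough sizing sketch. Only the 2.2X performance-per-watt and 30% power figures come from Intel’s claims; the baseline server’s wattage and performance numbers are assumptions for illustration.

```python
# Back-of-the-envelope view of Intel's micro server claims: up to 2.2x
# performance per watt and 30% less power than an equivalent
# previous-generation x86 server. Baseline numbers are hypothetical.

BASELINE_WATTS = 250.0   # previous-generation 1U server, assumed
BASELINE_PERF = 100.0    # arbitrary performance units for that server

ulp_watts = BASELINE_WATTS * (1 - 0.30)                        # 30% less power
ulp_perf = 2.2 * (BASELINE_PERF / BASELINE_WATTS) * ulp_watts  # 2.2x perf/watt

print(f"ULP node power: {ulp_watts:.0f} W")
print(f"ULP node perf : {ulp_perf:.0f} units "
      f"({ulp_perf / BASELINE_PERF:.2f}x the baseline box)")
```

Read together, the two claims imply a node that is both meaningfully faster and meaningfully cheaper to power, which is exactly the trade the target workloads below care about.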

So what does this mean for Infrastructure & Operations professionals interested in serving the target loads for micro servers, such as:

  • Basic content delivery and web servers
  • Low-end dedicated server hosting
  • Email and basic SaaS delivery
Read more

Dell Delivers vStart – Ready To Run Virtual Infrastructure

Richard Fichera

Another Tier-1 Converged Infrastructure Option

The drum continues to beat for converged infrastructure products, and Dell has given it the latest thump with the introduction of vStart, a pre-integrated virtualization environment for VMware. Best thought of as a competitor to VCE’s Vblock, the integrated VMware, Cisco, and EMC virtualization stack, vStart combines:

  • Dell PowerEdge R610 and R710 rack servers
  • Dell EqualLogic PS6000XV storage
  • Dell PowerConnect Ethernet switches
  • Preinstalled VMware (trial) software & Dell management extensions
  • Dell factory and onsite services
Read more

Facebook Opens New Data Center – And Shares Its Technology

Richard Fichera

A Peek Behind The Wizard's Curtain

The world of hyperscale web properties has been shrouded in secrecy, with major players like Google and Amazon releasing only tantalizing dribbles of information about their infrastructure architecture and facilities, on the presumption that this information represents critical competitive IP. In one bold gesture, Facebook, which has certainly catapulted itself into the ranks of top-tier sites, has reversed that trend. It has simultaneously disclosed a wealth of information about the design of its new data center in rural Oregon and contributed much of the IP involving racks, servers, and power architecture to an open forum, in the hope of generating an ecosystem of suppliers to provide future equipment to Facebook and other growing web companies.

The Data Center

By approaching the design of the data center as an integrated combination of servers for known workloads and the facility itself, Facebook has broken new ground in data center architecture.

At a high level, a traditional enterprise DC has a utility transformer that feeds power to a centralized UPS, and power is subsequently distributed through multiple levels of PDUs to the equipment racks. This is a reliable and flexible architecture, one that has proven its worth in generations of commercial data centers. Unfortunately, in exchange for this flexibility and protection, it exacts a penalty of 6% to 7% of the power before it ever reaches the IT equipment.
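
As a minimal sketch of where a 6% to 7% figure can come from, the snippet below chains hypothetical per-stage efficiencies for the transformer, UPS, and PDU levels; the individual values are plausible assumptions, not measurements from any specific facility.

```python
# Chained distribution losses: each stage passes along only a fraction
# of its input power. Per-stage efficiencies below are assumed values.

stages = {
    "utility transformer": 0.995,
    "centralized UPS":     0.955,
    "PDU level 1":         0.990,
    "PDU level 2":         0.990,
}

delivered = 1.0
for stage, efficiency in stages.items():
    delivered *= efficiency
    print(f"after {stage:20s}: {delivered:.1%} of utility power remains")

print(f"distribution penalty before IT equipment: {1 - delivered:.1%}")  # ~6.9%
```

Because the losses multiply, even individually efficient stages compound into the mid-single-digit penalty the paragraph describes, which is the overhead Facebook's design attacks.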

Read more

Cisco Buys A Credible Automation Entry Point With NewScale

Glenn O'Donnell

Cisco announced today its intent to acquire NewScale, a small but well-respected automation software vendor. The financial terms were not disclosed, but it is a small deal in terms of money spent. It is big in the sense that Cisco needed the kind of capabilities offered by NewScale, which has proven to be one of the most innovative and visible players in its market segment.

The market segment in question is what has been described as “the tip of the iceberg” for the advanced automation suites needed to create and operate cloud computing services. The “tip” refers to the part of the overall suite that is exposed to customers, while the majority of the “magic” of cloud automation is hidden from view – as it should be. The main capabilities offered by NewScale deal with building and managing the service catalog and providing a self-service front end that allows cloud consumers to request their own services based on this catalog of available services. Forrester has been bullish on these capabilities because they are the customer-facing side of cloud – the most important aspect – whereas most of the cloud focus has been directed at the “back end” technologies such as virtual server deployment and workload migration. These are certainly important, but a cloud is not a cloud unless the consumers of those services can trigger their deployment on their own. This is the true power of NewScale, one of the best in this sub-segment.
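
NewScale’s actual software is proprietary, so as a conceptual sketch only, the toy model below (all class and service names are hypothetical) captures the two pieces described above: a published catalog of offerable services and a self-service request path that hands off to hidden back-end automation.

```python
# Toy model of a cloud service catalog with a self-service front end.
# Names are illustrative inventions, not NewScale's API.

from dataclasses import dataclass, field

@dataclass
class CatalogItem:
    name: str
    cpu: int
    ram_gb: int
    monthly_cost: float

@dataclass
class ServiceCatalog:
    items: dict[str, CatalogItem] = field(default_factory=dict)

    def publish(self, item: CatalogItem) -> None:
        """Add a service to the catalog exposed to cloud consumers."""
        self.items[item.name] = item

    def request(self, name: str, requester: str) -> str:
        """Self-service entry point: consumers pick from the catalog;
        fulfillment is handed to back-end automation they never see."""
        item = self.items[name]  # only published services can be requested
        # ... hand off to provisioning/orchestration (the hidden "magic") ...
        return f"{requester}: '{item.name}' queued for automated deployment"

catalog = ServiceCatalog()
catalog.publish(CatalogItem("small-web-vm", cpu=2, ram_gb=4, monthly_cost=40.0))
print(catalog.request("small-web-vm", requester="app-team"))
```

The design point worth noting is the separation: consumers interact only with the catalog and request path, while deployment machinery stays out of view, which is exactly the “tip of the iceberg” framing above.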

Read more

Oracle Says No To Itanium – Embarrassment For Intel, Big Problem For HP

Richard Fichera

Oracle announced today that it will cease development for Itanium across its product line, stating that it believed, after consultation with Intel management, that x86 was Intel’s strategic platform. Intel, of course, responded with a press release specifically stating that there are at least two additional Itanium products in active development – Poulson (which has seen its initial specifications, if not its availability, announced) and Kittson, of which little is known.

This is a huge move, and one that seems like a kick carefully aimed at the you-know-whats of HP’s Itanium-based server business, which competes directly with Oracle’s SPARC-based Unix servers. If Oracle stays the course in the face of what will certainly be immense pressure from HP, mild censure from Intel, and consternation on the part of many large customers, the consequences are pretty obvious:

  • Intel loses prestige and credibility for Itanium and faces a potential drop-off of business from its only large Itanium customer. Nonetheless, the majority of Intel’s server business is x86, and it will, in the end, suffer only a token loss of revenue. Intel’s response to this move by Oracle will be muted – a public defense of Itanium, but no fireworks.
Read more

ARM Servers - Calxeda Opens The Kimono For A Tantalizing Tease

Richard Fichera

Calxeda, one of the most visible stealth-mode startups in the industry, has finally given us an initial peek at the first iteration of its server plans, and they both meet our inflated expectations of this ARM server startup and validate some of the initial claims of ARM proponents.

While still holding its actual delivery dates and detailed specifications close to the vest, Calxeda did reveal the following cards from its hand:

  • The first reference design, which will be provided to OEM partners as well as delivered directly to selected end users and developers, will be based on an ARM Cortex-A9 quad-core SOC design.
  • The SOC, as Calxeda will demonstrate with one of its reference designs, will enable OEMs to design servers as dense as 120 ARM quad-core nodes (480 cores) in a 2U enclosure, with an average consumption of about 5 watts per node (1.25 watts per core) including DRAM.
  • While Calxeda was not forthcoming with details about its performance, topology, or protocols, the SOC will contain an embedded fabric that lets the individual quad-core SOC servers communicate with each other.
  • Most significantly for prospective users, Calxeda claims – and has some convincing models to back this up – that it will deliver 5X to 10X the performance per watt of any products it expects to see when it brings its product to market (and an even larger advantage when price is factored in, for a metric of performance/watt/$). A quick sketch of the implied density arithmetic follows below.
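
Taking the disclosed figures at face value, here is a back-of-the-envelope sketch of the density arithmetic; the 42U rack build-out is an assumption for illustration, not part of Calxeda’s disclosure.

```python
# Density arithmetic from Calxeda's disclosed figures: 120 quad-core
# nodes per 2U at ~5 W per node including DRAM. The 42U rack build-out
# below is a hypothetical assumption.

NODES_PER_2U = 120
CORES_PER_NODE = 4
WATTS_PER_NODE = 5.0           # includes DRAM, per Calxeda's claim

enclosures_per_rack = 42 // 2  # 21 enclosures in a 42U rack, assumed
nodes = NODES_PER_2U * enclosures_per_rack
cores = nodes * CORES_PER_NODE
watts = nodes * WATTS_PER_NODE

print(f"per 2U  : {NODES_PER_2U * CORES_PER_NODE} cores, "
      f"{NODES_PER_2U * WATTS_PER_NODE:.0f} W")
print(f"per rack: {nodes} nodes, {cores} cores, {watts / 1000:.1f} kW")
```

Six hundred watts for 480 cores in 2U is the claim that makes the performance-per-watt comparison with conventional x86 servers so provocative.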
Read more