ARM Servers - Calxeda Opens The Kimono For A Tantalizing Tease

Richard Fichera

Calxeda, one of the most visible stealth-mode startups in the industry, has finally given us an initial peek at the first iteration of its server plans, and they both meet our inflated expectations of this ARM server startup and validate some of the initial claims of ARM proponents.

While still holding its actual delivery dates and detailed specifications close to the vest, Calxeda did reveal the following cards from its hand:

  • The first reference design, which will be provided to OEM partners as well as delivered directly to selected end users and developers, will be based on an ARM Cortex A9 quad-core SOC design.
  • The SOC, as Calxeda will demonstrate with one of its reference designs, will enable OEMs to design servers as dense as 120 ARM quad-core nodes (480 cores) in a 2U enclosure, with an average consumption of about 5 watts per node (1.25 watts per core) including DRAM (see the quick arithmetic after this list).
  • While Calxeda was not forthcoming with details about its performance, topology, or protocols, the SOC will contain an embedded fabric that allows the individual quad-core SOC servers to communicate with each other.
  • Most significantly for prospective users, Calxeda claims (and has some convincing models to back this up) that it will deliver 5X to 10X the performance/watt of any competing products it expects to see when it brings its product to market, and an even higher multiple when price is factored in, for a metric of performance/watt/$.
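Those density numbers are easy to sanity-check. The quick Python sketch below reproduces the arithmetic behind the claims; the node count and wattages are Calxeda's figures from the list above, while the full-rack extrapolation is our own illustrative assumption.

```python
# Back-of-the-envelope check of Calxeda's density and power claims.
# Node count and per-node wattage are Calxeda's figures; the 42U rack
# extrapolation is our own assumption for illustration.

nodes_per_2u = 120     # quad-core ARM SOC nodes per 2U enclosure
cores_per_node = 4
watts_per_node = 5.0   # average, including DRAM (Calxeda's claim)

cores_per_2u = nodes_per_2u * cores_per_node
print(f"Cores per 2U: {cores_per_2u}")                        # 480, matching the claim
print(f"Watts per core: {watts_per_node / cores_per_node}")   # 1.25 W/core

# Compute power for the whole enclosure (excluding fans, disks, and
# networking, which Calxeda has not broken out).
print(f"Compute watts per 2U: {nodes_per_2u * watts_per_node:.0f}")  # 600 W

# Hypothetical full-rack extrapolation: 21 such 2U enclosures in 42U.
enclosures_per_rack = 42 // 2
print(f"Cores per 42U rack: {cores_per_2u * enclosures_per_rack:,}")  # 10,080
```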
Read more

HP And Microsoft Ride The Converged Infrastructure Wave With Integrated Application Appliances

Richard Fichera

In another sign that the movement toward converged infrastructure and vertically integrated solutions is becoming ever more mainstream, HP and Microsoft recently announced a line of specialized appliances that combine integrated hardware and pre-packaged software targeting Exchange email, business analytics with Microsoft SharePoint and PowerPivot, and data warehousing with SQL Server. The offerings include:

  • HP E5000 Messaging System – Microsoft Exchange in standard configurations of 500 to 3,000 mailboxes. This product incorporates a pair of servers derived from HP's blade family in a new 3U rack enclosure, plus storage and Microsoft Exchange software. The product is installed as a turnkey system by HP.
  • HP Business Decision Appliance – Integrated servers and SQL Server PowerPivot software targeting analytics in midmarket and enterprise groups, tuned for 80 concurrent users. This offering is based on standard HP rack servers and integrated Microsoft software.
  • HP Enterprise Data Warehouse Appliance – Intended to compete with Oracle Exadata, at least for data warehouse applications, this is targeted at enterprise data warehouses in the hundreds-of-terabytes range. Like Exadata, it is a massive stack of integrated servers and software: 13 HP rack servers, 10 HP MSA storage units, integrated Ethernet, InfiniBand, and FC networking, and Microsoft SQL Server 2008 R2 Parallel Data Warehouse software.
Read more

Intel Fires The First Shot Across The Bows Of ARM

Richard Fichera

Intel, despite a popular tendency to associate a dominant market position with indifference to competitive threats, has not been sitting still waiting for the ARM server phenomenon to engulf it in a wave of ultra-low-power servers. Intel is fiercely competitive, and it would be silly for any new entrant to assume that Intel will ignore a threat to the heart of a high-growth segment.

In 2009, Intel released a microserver specification for compact low-power servers, and along with competitor AMD, it has been aggressive in driving down the power envelope of its mainstream multicore x86 server products. Recent momentum behind ARM-based servers has heated up this potential competition, however, and Intel has taken the fight deeper into the low-power realm with the recent introduction of the N570, an existing embedded low-power processor, as a server CPU aimed squarely at emerging ultra-low-power and dense servers. The N570, a dual-core Atom processor, is currently being used by a single server partner, ultra-dense server manufacturer SeaMicro (see Little Servers For Big Applications At Intel Developer Forum), and will allow it to deliver its current 512 Atom cores with half the number of CPU components, along with some power savings.

Technically, the N570 is a dual-core Atom CPU with 64-bit arithmetic (a differentiator against ARM) but the same 32-bit (4 GB) physical memory limitation as current ARM designs, and it should have a power dissipation of between 8 and 10 watts.
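The arithmetic behind the SeaMicro claim is straightforward; here is a quick sketch. The 512-core count and the 8 to 10 watt envelope come from the text above; the per-core comparison against Calxeda's claimed 1.25 watts is our own.

```python
# SeaMicro's current design reaches 512 Atom cores with single-core parts,
# i.e., 512 CPU packages. The dual-core N570 halves that package count.
total_cores = 512
cores_per_n570 = 2

n570_packages = total_cores // cores_per_n570
print(f"CPU packages: {total_cores} -> {n570_packages}")  # 512 -> 256

# At the quoted 8-10 W dissipation per dual-core N570, per-core power
# lands at 4-5 W -- still well above Calxeda's claimed 1.25 W per core.
for package_watts in (8, 10):
    print(f"{package_watts} W per N570 = "
          f"{package_watts / cores_per_n570:.1f} W per core")
```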

Read more

Who Are Your Anchor Vendors?

Glenn O'Donnell

Every day we read about technology vendors making acquisitions and merging with their competitors. Some recent examples: Verizon acquired Terremark for $1.4B to take a leadership role in IaaS; NetApp acquired Akorri to move up the virtualization stack; and Dell and HP waged a highly publicized "storage shootout" for 3PAR in late 2010 (ending with HP’s winning bid of $2.4B). Since there is no evidence to suggest a decrease in the pace of these acquisitions, it’s important for infrastructure and operations (I&O) professionals to keep a keen eye on these proceedings.

Read more

If You Don’t Manage Everything, You Don’t Manage Anything

Jean-Pierre Garbani

I’m always surprised to see that the Citroën 2CV (CV: Cheval Vapeur, hence the name Deux Chevaux) has such a strong following, even in the United States. Granted, this car was the epitome of efficiency: It used minimal gas (60 miles to the gallon), was eminently practical, and its interior could be cleaned with a garden hose. Because the car was minimalist in the extreme, the gas gauge on the early models was a simple dipstick with marks to show how many liters of gas were left in the tank. For someone like me, who constantly forgot to consult the dipstick before leaving home, it meant that I would run out of gas somewhere far from a station almost every month. A great means of transportation failed regularly for lack of instrumentation. (Later models had a gas gauge.)

This shows how failure to monitor one element leads to the failure of the complete system — and that if you don’t manage everything, you don’t manage anything, since the next important issue can develop in blissful ignorance.

The point is that we often approach application performance management from the same angle that Citroën used to create the 2CV: monitor only the most critical elements in the name of cost-cutting. This has proved time and again to be fraught with risk. Complex, multitier applications are composed of myriad hardware and software components, any of which can fail.

In application performance management, I see a number of IT operations teams focus their tools on a few critical elements and ignore the others. But even though many of the critical hardware and software components have become extremely reliable, that doesn’t mean they are impervious to failure: There is simply no way to guarantee the life of a specific electronic component.
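The argument can be made concrete with a little availability arithmetic: a multitier application is effectively a series system that works only if every component works, so overall availability is the product of the component availabilities, and an unmonitored component degrades the total invisibly. A minimal sketch follows; the component names and availability figures are illustrative assumptions, not measured data.

```python
# A multitier application behaves like a series system: it is up only
# if every component is up, so availability multiplies across the tiers.
# The figures below are illustrative assumptions, not measured data.

monitored = {"web server": 0.9999, "app server": 0.9995, "database": 0.9999}
unmonitored = {"load balancer": 0.999, "dns": 0.9995, "san switch": 0.999}

def system_availability(components: dict[str, float]) -> float:
    """Availability of a series system: the product of all its parts."""
    result = 1.0
    for availability in components.values():
        result *= availability
    return result

visible = system_availability(monitored)
actual = system_availability({**monitored, **unmonitored})

print(f"Availability as seen by the tools: {visible:.4%}")
print(f"Actual end-to-end availability:    {actual:.4%}")
# The gap is downtime that develops in "blissful ignorance" -- failures
# in components nobody instrumented, exactly the 2CV dipstick problem.
```

Even with every monitored tier at three or four nines, the unmonitored components dominate the gap between what the tools report and what users actually experience.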

Read more

You Must Go Further To Get Private Cloud Right . . . But How Much Further?

James Staten

Lately it's starting to seem like private clouds are a lot like beauty – in the eye of the beholder. Or more accurately, in the eye of the builder. Sadly, unlike art and beauty, the value that comes from your private cloud isn’t as fluid, and the closer you get in your design to a public cloud, the greater the value. While it may be tempting to paint your VMware environment as a cloud or to automate a few tasks such as provisioning and then declare “cloud,” organizations that fall short of achieving true cloud value may find their investments miss the mark. But how do you get your private cloud right?

For the most part, enterprises understand that virtualization and automation are key components of a private cloud, but at what point does a virtualized environment become a private cloud? What can a private cloud offer that a virtualized environment can’t? How do you sell this idea internally? And how do you deliver a true private cloud in 2011?

In London this March, I am facilitating a meeting of the Forrester Leadership Board Infrastructure & Operations Council, where we will tackle these very questions. If you are considering building a private cloud, there are changes you will need to make in your organization to get it right, and our I&O Council meeting will give you the opportunity to discuss this with other I&O leaders facing the same challenge.

Read more

Cisco Sends A Recall On Its Cloud Email Strategy

Christopher Voce

Infrastructure & Operations executives have shown tremendous interest in opportunities to take advantage of the cloud to provision email and collaboration services to their employees – in fact, in a recent Forrester survey, nearly half of IT execs reported that they are either interested in or planning a move to the cloud for email. Why? It can be more cost-effective, increase your flexibility, and help control the historical business and technical challenges of deploying these tools yourself.

To date, we’ve talked about four core players in the market: Cisco, Google, IBM, and Microsoft. According to a recent blog post, Cisco has chosen to no longer invest in Cisco Mail. Cisco Mail was formerly known as WebEx Mail – and before that, the email platform was the property of PostPath, which Cisco acquired in 2008 with the intention of providing a more complete collaboration stack alongside its successful WebEx and voice services. I've gathered feedback and worked with my colleagues Ted Schadler, TJ Keitt, and Art Schoeller to synthesize what this means for Infrastructure & Operations pros coordinating with their Content & Collaboration colleagues.

So what happened, and what does it mean for I&O professionals? Here’s our take:

Read more

Citrix Acquires EMS-Cortex

John Rakowski

Another year, and Citrix’s strategy of acquiring interesting companies continues with its announced purchase of EMS-Cortex. This acquisition caught my eye because EMS-Cortex provides a web-based “cloud control panel” that service providers and end users can use to manage the provisioning and delegated administration of hosted business applications such as XenApp, Microsoft Exchange, BlackBerry Enterprise Server, and a number of other critical business applications in a cloud environment. In theory this means that customers and vendors will be able to “spin up” core business services quickly in a multi-tenant environment.

It is an interesting acquisition, as vendors are starting to address the fact that for “cloudonomics” to be achieved by their customers, they must ease the route to cloud adoption. While this acquisition is potentially a good move for Citrix, I think it will be interesting for I&O professionals to see how Citrix plans to integrate this ease of deployment with existing business service management processes, especially if the EMS-Cortex solution is going to be used in a live production environment.
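To make the “cloud control panel” idea concrete, here is a rough sketch of what delegated, self-service provisioning of a hosted mailbox could look like through such a panel's API. To be clear, the host, endpoint, payload, and token below are hypothetical illustrations; this is not the actual EMS-Cortex interface, which has not been publicly documented here.

```python
# Illustrative only: a hypothetical REST call showing the kind of
# self-service provisioning a "cloud control panel" enables. The host,
# endpoint, fields, and token below are invented for this sketch and
# are NOT the real EMS-Cortex API.
import requests

CONTROL_PANEL = "https://panel.example-hoster.com/api/v1"  # hypothetical
TOKEN = "tenant-admin-token"                               # hypothetical

# A delegated tenant administrator spins up an Exchange mailbox for a
# new employee without touching the underlying servers.
response = requests.post(
    f"{CONTROL_PANEL}/tenants/acme-corp/exchange/mailboxes",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"user": "jdoe", "display_name": "Jane Doe", "quota_mb": 2048},
    timeout=30,
)
response.raise_for_status()
print(response.json())  # e.g., the provisioning job's status and ID
```

The point of the sketch is the delegation model: the hosting provider exposes a narrow, tenant-scoped interface, so "spinning up" a business service becomes an API call rather than a ticket to the I&O team.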

Read more

Juniper’s QFabric: The Dark Horse In The Datacenter Fabric Race?

Andre Kindness

It’s been a few years since I was a disciple of and evangelist for HP ProCurve’s Adaptive EDGE Architecture (AEA). Plain and simple, before the 3Com acquisition, it was HP ProCurve’s networking vision: the architecture philosophy created by John McHugh (once HP ProCurve’s VP/GM, currently the CMO of Brocade), Brice Clark (HP ProCurve Director of Strategy), and Paul Congdon (CTO of HP Networking) during a late-night brainstorming session. The trio conceived that network intelligence would move from the traditional enterprise core to the edge and be controlled by centralized policies. Policies based on company strategy and values would come from a policy manager, and the edge devices would be tied together by a high-speed, resilient interconnect, much like a carrier backbone (see Figure 1). As soon as users connected to the network, the edge would take control of them and deliver a customized set of advanced applications and services based on user identity, device, operating system, business needs, location, time, and business policies. This architecture would allow infrastructure and operations professionals to create an automated and dynamic platform with the agility businesses need to remain relevant and competitive.
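The core AEA idea, centrally authored policy evaluated at the edge the moment a user connects, can be sketched in a few lines. The attributes (identity, device, location, time) come straight from the description above; the specific rules, VLANs, and services are invented for illustration and do not reflect any real ProCurve implementation.

```python
# A toy illustration of "policy at the edge": a centrally authored
# policy function, evaluated by the edge switch when a user connects.
# The rules, VLANs, and services below are invented for this sketch.
from dataclasses import dataclass
from datetime import time

@dataclass
class Session:
    user: str
    role: str        # from the identity store, e.g. RADIUS/LDAP
    device: str      # e.g. "managed-laptop", "byod-phone"
    location: str    # e.g. "hq", "branch", "remote"
    login_time: time

def edge_policy(s: Session) -> dict:
    """Return the services/VLAN the edge should deliver to this session."""
    if s.location == "remote":
        return {"vlan": 30, "qos": "normal", "services": ["vpn", "email"]}
    if s.device.startswith("byod"):
        return {"vlan": 99, "qos": "best-effort", "services": ["internet"]}
    if not time(7, 0) <= s.login_time <= time(19, 0):
        return {"vlan": 99, "qos": "best-effort", "services": []}  # off-hours
    if s.role == "engineering" and s.device == "managed-laptop":
        return {"vlan": 10, "qos": "high", "services": ["ci", "git", "voip"]}
    return {"vlan": 20, "qos": "normal", "services": ["email", "intranet"]}

print(edge_policy(Session("anika", "engineering", "managed-laptop", "hq", time(9, 30))))
```

The design point is that the policy function is authored once, centrally, but executed at every edge port, so enforcement scales with the edge rather than funneling through the core.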

As the HP white paper introducing the EDGE said, “Ultimately, the ProCurve EDGE Architecture will enable highly available meshed networks, a grid of functionally uniform switching devices, to scale out to virtually unlimited dimensions and performance thanks to the distributed decision making of control to the edge.” Sadly, after John McHugh’s departure, HP buried the strategy in favor of its converged infrastructure slogan: Change.

Read more

Intel Discloses Details on “Poulson,” Next-Generation Itanium

Richard Fichera

This week at ISSCC, Intel made its first detailed public disclosures about its upcoming “Poulson” next-generation Itanium CPU. While not complete in any sense, the details Intel did disclose paint a picture of a competent product that will continue to keep the heat on in the high-end UNIX systems market. Highlights include:

  • Process — Poulson will be produced in a 32 nm process, skipping the intermediate 45 nm step that many observers expected to see as a step down from the current 65 nm Itanium process. This is a plus for Itanium consumers, since it allows for denser circuits and cheaper chips. With an industry record 3.1 billion transistors, Poulson needs all the help it can get keeping size and power down. The new process also promises major improvements in power efficiency.
  • Cores and cache — Poulson will have 8 cores and 54 MB of on-chip cache, a huge amount, even for a cache-sensitive architecture like Itanium. Poulson will have a 12-issue pipeline instead of the current 6-issue pipeline, promising to extract more performance from existing code without any recompilation (see the quick arithmetic after this list).
  • Compatibility — Poulson is socket- and pin-compatible with the current Itanium 9300 CPU, which will mean that HP can move more quickly into production shipments when it's available.
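For context, a quick back-of-the-envelope pass over the disclosed numbers, using only the figures from the list above:

```python
# Quick arithmetic on Intel's disclosed Poulson figures.
cores = 8
on_chip_cache_mb = 54
issue_width_poulson = 12
issue_width_current = 6
transistors = 3.1e9

print(f"Cache per core: {on_chip_cache_mb / cores:.2f} MB")  # 6.75 MB/core
print(f"Issue-width increase: {issue_width_poulson / issue_width_current:.0f}x")
print(f"Transistors: {transistors:,.0f}")
# Note: doubled issue width is a peak figure; how much of it sustained
# workloads can exploit depends on the instruction-level parallelism the
# compiler already exposed, so real-world gains will vary by workload.
```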
Read more