Dell Joins The ARMs Race, Announces ARM-Based 'Copper' Server

Richard Fichera

Earlier this week Dell joined arch-competitor HP in endorsing ARM as a potential platform for scale-out workloads by announcing “Copper,” an ARM-based version of its PowerEdge-C dense server product line. Dell’s announcement and positioning, while a little less high-profile than HP’s February announcement, are intended to serve the same purpose — to enable an ARM ecosystem by providing a platform for exploring ARM workloads and to gain a visible presence in the event that the ARM market begins to take off.

Dell’s platform is based on a four-core Marvell ARMv7 SoC, which Dell claims delivers somewhat higher performance than the Calxeda part, although it draws more power: 15 W per node (including RAM and local disk). The server uses the PowerEdge-C form factor of 12 vertically mounted server modules in a 3U enclosure, each carrying four server nodes, for a total of 48 servers/192 cores per enclosure. In a departure from other PowerEdge-C products, the Copper server has integrated L2 network connectivity spanning all servers, so the unit can serve as a low-cost test bed for clustered applications without external switches.
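
For perspective, here is a quick back-of-the-envelope calculation from the figures above; the 42U rack extrapolation is my own assumption, not part of Dell's announcement.

```python
# Quick density math for the Copper enclosure, from the figures above:
# 12 modules x 4 nodes per module, 4 cores and ~15 W per node.
# The 42U rack extrapolation is an assumption, not a Dell spec.

MODULES_PER_ENCLOSURE = 12
NODES_PER_MODULE = 4
CORES_PER_NODE = 4
WATTS_PER_NODE = 15  # includes RAM and local disk, per Dell

nodes = MODULES_PER_ENCLOSURE * NODES_PER_MODULE  # 48 servers
cores = nodes * CORES_PER_NODE                    # 192 cores
watts = nodes * WATTS_PER_NODE                    # 720 W per 3U

enclosures_per_rack = 42 // 3                     # assumed 42U rack
print(f"Per 3U enclosure: {nodes} nodes, {cores} cores, {watts} W")
print(f"Per 42U rack: {nodes * enclosures_per_rack} nodes, "
      f"{cores * enclosures_per_rack} cores, "
      f"~{watts * enclosures_per_rack / 1000:.1f} kW")
```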

Dell is offering this server to selected customers, not as a GA product, along with open source versions of the LAMP stack, Crowbar, and Hadoop. Canonical is currently supplying Ubuntu for ARM servers, and Dell is actively working with other partners. Dell expects to see OpenStack available for demos in May, and there is an active Fedora project underway as well.

Read more

C3PO Essential To Solve Hard IT Issues?

John Rakowski

I bet you are thinking, “Oh no, this looks like a typical Friday IT blog post,” and it has all the key ingredients: it’s Friday (tick), it has science fiction references (tick), and it has a weird title (tick). But please go with the flow on this one.

Read more

SAP Restocks Its Cloud-Zoo With Ariba

Holger Kisker

SAP Turns To Acquisitions For Cloud Innovations

Just three months after SAP acquired SuccessFactors, a cloud leader for human capital management solutions, for $3.4 billion, it has now announced the acquisition of Ariba, a cloud leader for eProcurement solutions, for another $4.3 billion. Now, $7.7 billion is a lot of money to spend in a short amount of time on two companies that hardly make any profit. But it’s all for the cloud, which means it’s for the future business opportunity in cloud computing services. So far, so good; SAP has invested in and acquired quite a number of cloud companies over the past few years: Frictionless, Clear Standards, Crossgate, etc. The difference in this most recent acquisition is the big overlap with existing solutions and internal R&D.

Following the first wave of cloud acquisitions, SAP was sitting amid a zoo of cloud solutions, all based on different platforms: ePurchasing, CRM-OnDemand, BI-OnDemand, Carbon Impact, ByDesign, StreamWork... They all used very different technology, resulting in big integration and scale challenges behind the scenes. SAP’s announcement 1.5 years ago that it would consolidate its cloud strategy on the new NetWeaver platform for both ABAP- and Java-based cloud solutions was therefore welcomed by the market with open arms.

Read more

Cloud Inefficiency - Bad Habits Are Hard To Break

James Staten

We all have habits we would like to (and should) break, such as leaving the lights on in rooms we are no longer in, and good habits we want to encourage, such as recycling plastic bottles and driving our cars more efficiently. We often don't because habits are hard to change, and often the impact isn't immediate or all that meaningful to us. The same has long been true in IT. But keep up these bad habits in the cloud, and it will cost you, sometimes a lot.

As developers, we often ask for more resources from the infrastructure & operations (I&O) team than we really need so that we don't have to go back later and ask for more, which is too painful and time-consuming. We also often don't know how many resources our code might need, so we might as well take as much as we can get. But do we ever give any of it back when we learn it is more than we need?

I&O, on the other hand, often isn't any better. The first rule we learned about capacity planning was that it's more expensive to underestimate resource needs than to overestimate them, and we always seem to consume more resources eventually.
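
To make the "give it back" point concrete, here is a minimal sketch of a right-sizing pass that flags instances whose observed peaks sit far below what was requested. The instance data and thresholds are hypothetical placeholders; a real version would pull metrics from your cloud provider's monitoring API.

```python
# Hypothetical right-sizing sketch: flag instances whose observed peak
# utilization sits well below what was provisioned. The data and
# thresholds are placeholders; a real pass would pull metrics from a
# monitoring API.

PEAK_CPU_THRESHOLD = 0.40  # never exceeds 40% of provisioned CPU
PEAK_MEM_THRESHOLD = 0.50  # never exceeds 50% of provisioned memory

observed_peaks = [  # (instance, peak CPU fraction, peak memory fraction)
    ("web-01",   0.22, 0.31),
    ("batch-07", 0.85, 0.72),
    ("db-02",    0.35, 0.44),
]

for name, peak_cpu, peak_mem in observed_peaks:
    if peak_cpu < PEAK_CPU_THRESHOLD and peak_mem < PEAK_MEM_THRESHOLD:
        print(f"{name}: peaks at {peak_cpu:.0%} CPU, {peak_mem:.0%} RAM "
              "- candidate to downsize and give resources back")
```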

Read more

Evaluating Cisco's Collaboration Strategy

Michael Barnes

Through a combination of analyst briefings and customer events, Cisco has ramped up outbound communication and marketing of its collaboration strategy in Asia Pacific over the past several months. The foundation remains video (TelePresence), web conferencing (WebEx), and IP telephony, areas where Cisco is a leader. But Cisco understands that to drive growth and expand its customer footprint within enterprise accounts, it must move further up the stack and increasingly compete with both traditional collaboration vendors like Microsoft and IBM and cloud-based alternatives like Google and salesforce.com.

While the strategy still plays to the company’s core networking strength, I question whether Cisco can position itself as a “go-to” vendor in the traditional collaboration space. As our research shows, senior IT and business decision-makers in Asia Pacific don’t currently equate Cisco with collaboration.

To address this challenge, Cisco is pursuing multiple initiatives/approaches:

  • Leveraging its core strengths. Cisco is focused on expanding from existing unified communications (UC) initiatives within customer accounts by leveraging the combination of networking and video to drive value. Cisco is pushing “control” via intelligent networking capabilities (e.g., security, identity management, authentication, access), all delivered through Cisco networking hardware. Simultaneously, Cisco is pushing “flexibility” via device- and platform-independent collaboration capabilities like content, video, instant messaging, and social computing.
Read more

Private Cloud: 'Everyone’s Got One. Where’s Yours?'

Lauren Nelson

Sound familiar? Executives across the globe feel peer and competitive pressure to “get to yes” on private cloud. This burden falls on IT to provide a cloud solution — oh, and by the way, we need it by the end of the year. With this clock ticking, it’s hard to think about private cloud strategically. In fact, why not just cloudwash your virtual environment and buy your team time? Many enterprises (yes, even those presenting at events) have gone down this road. And some vendors will suggest this as a short-term fix. DON’T DO IT.

You’re selling short what you could achieve with this environment while losing credibility with the business and your peers. Sound overdramatic? The consumerization of IT is forcing IT to connect with the business or risk circumvention. For many, the existing relationship isn't great, and each future interaction could either improve or worsen it. Promising the business a cloud delivered within your own data center and then failing to provide the basic functionality of a cloud will just make future initiatives and interactions even harder. In the meantime, the business will continue to circumvent your department. If you're going to invest the resources and time to build this environment and rope in rogue cloud users — make sure you get to cloud.

Read more

To Be Private Cloud, Or Be Public Cloud: Is That Really The Question?

James Staten

Shakespeare wrote in his famous play Hamlet, "Whether 'tis nobler in the mind to suffer the slings and arrows of outrageous fortune, or to take arms against a sea of troubles, and by opposing end them? To die: to sleep; No more." He was, of course, talking about the betrayal in Hamlet's family, but the quote is just as appropriate today in the world of cloud computing, because in the minds of many I&O professionals, the business is conducting the betrayal.

Read more

Looking Through The Cloud

Holger Kisker

SaaS vendors must collect customer insights for innovation and compliance.

As of the end of last year, about 30% of companies in our Forrsights Software Survey, Q4 2011, were using some software-as-a-service (SaaS) solution; that number will grow to 45% by the end of 2012 and 60% by the end of 2013. The public cloud market for SaaS is the biggest and fastest-growing of all the cloud markets ($33 billion in 2012, growing to $78 billion by the end of 2015).
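
For context, a quick calculation of the growth rate implied by those two market figures:

```python
# Implied compound annual growth rate of the SaaS market, using the
# figures above: $33 billion in 2012 growing to $78 billion by 2015.
start, end, years = 33e9, 78e9, 3
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 33% per year
```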

However, most of this growth is based on the cannibalization of the on-premises software market; software companies need to build their cloud strategy or risk getting stuck in the much slower-growing traditional application market and falling behind the competition. This is no easy task, however. Implementing a cloud strategy involves a lot of changes for a software company in terms of products, processes, and people.

A successful SaaS strategy requires an open architecture (note: multitenancy is not a prerequisite for a SaaS solution from a definition point of view, but it is highly recommended for vendors for better scale) and a flexible business model that includes an appropriate sales incentive structure to bring momentum to the street. For the purposes of this post, I’d like to highlight one challenge that software vendors need to solve for sustainable growth in the SaaS market: maintaining and increasing customer insights.
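
To illustrate the multitenancy point, here is a generic sketch of the common shared-schema approach, in which every row is tagged by tenant; the table and tenant names are hypothetical, and this is not a description of SAP's architecture. The same tagging that isolates tenants is also what makes per-customer usage insights cheap for the vendor to collect.

```python
# Generic shared-schema multitenancy sketch (hypothetical names, not
# any particular vendor's design): one database, every row tagged with
# a tenant_id. Tenant scoping and vendor-side insight use the same tag.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE usage_events (tenant_id TEXT, feature TEXT)")
db.executemany(
    "INSERT INTO usage_events VALUES (?, ?)",
    [("acme", "reporting"), ("acme", "invoicing"), ("globex", "reporting")],
)

# Tenant isolation: every application query is scoped by tenant_id.
def feature_usage(tenant):
    return db.execute(
        "SELECT feature, COUNT(*) FROM usage_events "
        "WHERE tenant_id = ? GROUP BY feature",
        (tenant,),
    ).fetchall()

# Vendor insight: aggregate the same events across all tenants.
adoption = db.execute(
    "SELECT feature, COUNT(DISTINCT tenant_id) FROM usage_events "
    "GROUP BY feature"
).fetchall()

print(feature_usage("acme"))  # e.g. [('invoicing', 1), ('reporting', 1)]
print(adoption)               # e.g. [('invoicing', 1), ('reporting', 2)]
```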

Read more

Colt Revamps The Way It Develops Agile Solutions For Its Customers

Dan Bieler

The other day I visited Colt’s London HQ and saw how the telco is revamping its approach to developing more customer-centric and Agile solutions (Colt consciously avoids the “cloud” terminology). By now, most telcos have managed to jump onto the cloud bandwagon by launching cloud-based services. The challenge, from an end-user perspective, is that these solutions all seem very similar. Customers can get storage, server capacity, unified communications, etc., from most telcos. All telcos underline the value-added nature of the end-to-end network QoS and security that they can ensure (check out our report, "Telcos As Cloud Rainmakers"). Indeed, telcos have some right to feel that they have made some progress regarding their cloud offerings — although it took Amazon to show them the opportunity.

But most telco cloud offerings suffer from the fact that telcos develop cloud solutions in the traditional way, through their traditional product factories. This approach tends to follow rather slow product innovation cycles. Moreover, it produces products that, once developed, are pushed to the customer as a standard offering; all customisation costs extra.

The reality of cloud demand is that each customer is different. Most customers want some form of customisation. Most customers want some form of hybrid cloud: a private part for core apps, as well as access to the open Internet to, for instance, exchange views and information with end customers via Twitter or for crowdsourcing with suppliers. Similarly, most customers want a mix of fixed and virtual assets and a blend of self-service and managed service solutions.

Read more

ARM Arrives – Calxeda Shows Real Hardware Running Linux

Richard Fichera

I said last year that this would happen sometime in the first half of this year, but for some reason my colleagues and clients have kept asking me exactly when we would see a real ARM server running a real OS. How about now?

To quote from Calxeda’s most recent blog post:

“This week, Calxeda is showing a live Calxeda cluster running Ubuntu 12.04 LTS on real EnergyCore hardware at the Ubuntu Developer and Cloud Summit events in Oakland, CA. … This is the real deal; quad-core, w/ 4MB cache, secure management engine, and Calxeda’s fabric all up and running.”

This is a significant milestone for many reasons. It proves that Calxeda can indeed deliver a working server based on its scalable fabric architecture; having HP sign up as a partner meant that this was essentially a non-issue, but still, proof is good. It also establishes that at least one Linux distribution provider, in this case Canonical with Ubuntu, is willing to provide a real supported distribution. My guess is that Red Hat and CentOS will jump on the bus fairly soon as well.

Most importantly, we can get on with the important work of characterizing real benchmarks on real systems with real OS support. HP’s discovery centers will certainly play a part in this process, and I am willing to bet that by the end of the summer we will have some compelling data on whether the ARM server will deliver on its performance and energy efficiency promises. It’s not a slam-dunk guaranteed win: Intel has been steadily ratcheting up its energy efficiency, and the latest generation of x86 servers from HP, IBM, Dell, and others shows promise of much better throughput per watt than their predecessors. Add to that the demonstration by SeaMicro (ironically, now owned by AMD) of a Xeon-based system that delivered Xeon CPUs at a 10 W per-CPU power overhead, an unheard-of efficiency.
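
As a sketch of what that characterization boils down to, the deciding metric is throughput per watt. All numbers below are hypothetical placeholders pending real benchmark data, not measured results.

```python
# Sketch of the throughput-per-watt comparison that real benchmarks
# will have to settle. ALL numbers here are hypothetical placeholders,
# not measured results.

systems = {
    # name: (throughput on some fixed workload in req/sec, watts per node)
    "ARM quad-core node": (4_000, 15),    # placeholder values
    "Xeon-based node":    (20_000, 90),   # placeholder values
}

for name, (throughput, watts) in systems.items():
    print(f"{name}: {throughput / watts:,.0f} req/sec per watt")
```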

Read more