Calxeda, one of the most visible stealth mode startups in the industry, has finally given us an initial peek at the first iteration of its server plans, and they both meet our inflated expectations from this ARM server startup and validate some of the initial claims of ARM proponents.
While still holding its actual delivery dates and detailed specifications close to the vest, Calxeda did reveal the following cards from its hand:
The first reference design, which will be provided to OEM partners as well as delivered directly to selected end users and developers, will be based on an ARM Cortex A9 quad-core SOC design.
The SOC, as Calxeda will demonstrate with one of its reference designs, will enable OEMs to design servers as dense as 120 ARM quad-core nodes (480 cores) in a 2U enclosure, with an average consumption of about 5 watts per node (1.25 watts per core) including DRAM.
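The density and power figures above are easy to sanity-check. The following sketch just reruns that arithmetic, plus a hypothetical extrapolation to a standard 42U rack (ignoring space for switches and power distribution, which is our assumption, not Calxeda's claim):

```python
# Illustrative arithmetic based on the figures Calxeda cites:
# 120 quad-core nodes in a 2U enclosure at ~5 W per node (DRAM included).
nodes_per_2u = 120
cores_per_node = 4
watts_per_node = 5.0

total_cores = nodes_per_2u * cores_per_node       # 480 cores per 2U enclosure
watts_per_core = watts_per_node / cores_per_node  # 1.25 W per core
enclosure_watts = nodes_per_2u * watts_per_node   # ~600 W per 2U enclosure

# Hypothetical: a 42U rack holds 21 such 2U enclosures
# (ignoring top-of-rack switches, PDUs, etc.).
rack_cores = 21 * total_cores                     # 10,080 cores per rack
```

At roughly 600 W per enclosure, a full rack of these would draw on the order of 12-13 kW, which is aggressive but within reach of a modern high-density data center row.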
While Calxeda was not forthcoming with details about performance, topology, or protocols, the SOC will contain an embedded fabric that allows the individual quad-core SOC servers to communicate with each other.
Most significantly for prospective users, Calxeda is claiming, with some convincing models to back up these claims, that it will deliver 5X to 10X the performance per watt of any products it expects to face when it brings the product to market, with an even greater advantage when price is factored in (performance per watt per dollar).
Intel, despite a popular tendency to associate a dominant market position with indifference to competitive threats, has not been sitting still waiting for the ARM server phenomenon to engulf them in a wave of ultra-low-power servers. Intel is fiercely competitive, and it would be silly for any new entrants to assume that Intel will ignore a threat to the heart of a high-growth segment.
In 2009, Intel released a microserver specification for compact low-power servers, and along with competitor AMD, it has been aggressive in driving down the power envelope of its mainstream multicore x86 server products. Recent momentum behind ARM-based servers has heated this potential competition up, however, and Intel has taken the fight deeper into the low-power realm with the recent introduction of the N570, an existing embedded low-power processor, as a server CPU aimed squarely at emerging ultra-low-power and dense servers. The N570, a dual-core Atom processor, is currently being used by a single server partner, ultra-dense server manufacturer SeaMicro (see Little Servers For Big Applications At Intel Developer Forum), and will allow it to deliver its current 512 Atom cores with half the number of CPU components and some power savings.
Technically, the N570 is a dual-core Atom CPU with 64-bit arithmetic, a differentiator against ARM, and the same 32-bit (4 GB) physical memory limitation as current ARM designs; it should have a power dissipation of between 8 and 10 watts.
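Putting the two sets of figures side by side gives a rough sense of the gap. The sketch below only reuses the numbers quoted above; note the comparison is not strictly apples-to-apples, since Calxeda's per-node figure includes DRAM while the N570 range is CPU dissipation only:

```python
# Rough per-core power comparison from the figures cited in the text.
# Caveat: Calxeda's 5 W/node includes DRAM; the N570's 8-10 W does not.
n570_cores = 2
n570_watts_low, n570_watts_high = 8.0, 10.0
n570_w_per_core = (n570_watts_low / n570_cores,
                   n570_watts_high / n570_cores)   # 4.0-5.0 W per core

calxeda_w_per_core = 5.0 / 4                       # 1.25 W per core, DRAM included

# SeaMicro's 512 Atom cores now need half as many CPU packages:
seamicro_chips = 512 // n570_cores                 # 256 dual-core N570 packages
```

Even with the DRAM caveat, the N570 lands at roughly 3X-4X the per-core power of the figures Calxeda is quoting, which is consistent with the performance-per-watt claims discussed earlier.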
For the most part, enterprises understand that virtualization and automation are key components of a private cloud, but at what point does a virtualized environment become a private cloud? What can a private cloud offer that a virtualized environment can’t? How do you sell this idea internally? And how do you deliver a true private cloud in 2011?
In London, this March, I am facilitating a meeting of the Forrester Leadership Board Infrastructure & Operations Council, where we will tackle these very questions. If you are considering building a private cloud, there are changes you will need to make in your organization to get it right and our I&O council meeting will give you the opportunity to discuss this with other I&O leaders facing the same challenge.
Forrester’s Forrsights Software Survey, Q4 2010 has quantified for the first time how enterprise demand is shifting from traditional licensing models to subscriptions and other licensing models, such as financing and license leasing. However, the shift to subscriptions for business-applications-as-a-service is the major driver of this change. Traditional enterprise licenses are slowly decreasing, and Forrester predicts that subscriptions for SaaS applications will drive alternative license spending up to 29% — as early as 2011. This demand-side change goes beyond front-office applications like CRM. In 2011 and 2012, enterprises will opt for “as-a-service” subscriptions for more back-office applications, such as ERP, instead of licensed and on-premise installations. Detailed data cuts by company size and region are available to clients from our Forrsights service.
Base: 622 (2007), 1,026 (2008), 537 (2009), and 930 (2010) software decision-makers predicting license spending for the coming year
Source: Enterprise And SMB Software Survey, North America And Europe, Q3 2007; Enterprise And SMB Software Survey, North America And Europe, Q4 2008; Enterprise And SMB Software Survey, North America And Europe, Q4 2009; Forrsights Software Survey, Q4 2010
What does this mean for existing independent software vendors (ISVs) and infrastructure vendors?
Another year begins, and Citrix’s strategy of acquiring interesting companies continues with its announced purchase of EMS-Cortex. This acquisition has caught my eye because EMS-Cortex provides a web-based “cloud control panel” that service providers and end users can use to manage the provisioning and delegated administration of hosted business applications in a cloud environment, such as XenApp, Microsoft Exchange, BlackBerry Enterprise Server, and a number of other critical business applications. In theory, this means that customers and vendors will be able to “spin up” core business services quickly in a multi-tenant environment.
It is an interesting acquisition, as vendors are starting to address the fact that for their customers to achieve “cloudonomics,” they must ease the route to cloud adoption. While this acquisition is potentially a good move for Citrix, I think it will be interesting for I&O professionals to see how Citrix plans to integrate this ease of deployment with existing business service management processes, especially if the EMS-Cortex solution is going to be used in a live production environment.
SAP Has Managed A Turnaround After Léo Apotheker’s Departure
In February 2010, after Léo Apotheker resigned as CEO of SAP, I wrote a blog post with 10 predictions for the company for the remainder of the year. Although the new leadership insisted again and again that this step would not have any influence on the company’s strategy, it was clear that further changes would follow, as it makes no sense to simply replace the CEO and leave everything else as is when problems were obviously growing bigger for the company.
I predicted that the SAP leadership change was just the starting point, the visible tip of an iceberg, with further changes to come. Today, one year later, I want to review these predictions and shed some light on 2010, which has become the “Turnaround Year For SAP.”
The 10 SAP Predictions For 2010 And Their Results (7 proved true / 3 proved wrong)
Only a few weeks to go before Forrester’s US EA Forum 2011 in San Francisco in February! I’ll be presenting a number of sessions, including the opening kickoff, where I’ll paint a picture of where I see EA going in the next decade. As Alex Cullen mentioned, I’ll examine three distinct scenarios where EA rises in importance, EA crashes and burns, or EA becomes marginalized.
But the most fun I’ve had preparing for this year’s event is putting together a new track: “Key Technology Trends That Will Change Your Business.” In the past, we’ve focused this conference on the practice of EA and used our big IT Forum conference in the spring to talk about technology strategies, but this year I’ve had the opportunity to put together five sessions that drill down into the technology trends that we think will have significant impact in your environment, with a particular focus on impacting business outcomes. Herewith is a quick summary of the sessions in this track:
The General Services Administration made a bold decision to move its email and collaboration systems to the cloud. In the RFP issued last June, it was easy to see the agency’s goals in the statement of objectives:
This Statement of Objectives (SOO) describes the goals that GSA expects to achieve with regard to the
1. modernization of its e-mail system;
2. provision of an effective collaborative working environment;
3. reduction of the government’s in-house system maintenance burden by providing related business, technical, and management functions; and
4. application of appropriate security and privacy safeguards.
GSA announced yesterday that it chose Google Apps for email and collaboration and Unisys as the implementation partner.
So what does this mean?
What it means (WIM) #1: GSA employees will be using a next-generation information workplace. And that means mobile, device-agnostic, and location-agile. Gmail on an iPad? No problem. Email from a home computer? Yep. For GSA and for every other agency and most companies, it's important to give employees the tools to be productive and engage from every location on every device. "Work becomes a thing you do and not a place you go." [Thanks to Earl Newsome of Estee Lauder for that quote.]
With its latest public cloud offering, T-Systems not only comes close to Amazon’s EC2 pricing, it might even be cheaper than Amazon. The €4 billion, German-headquartered IT services firm announced today a public beta running from November 2010 to February 2011.
Although Amazon recently made a time-limited version of EC2 available for free, a real, unlimited service still costs in the range of $0.095 per hour for a small server with one core and 1.7 GB RAM in Europe. Last week, Forrester had the chance to look at a beta version of T-Systems’ public cloud offering. Although no pricing has been announced officially, the beta showed the price for a virtual machine of a similar size to the aforementioned Amazon machine starting at €0.2/hour. T-Systems indicated that it would even like to go below Amazon’s pricing! T-Systems has been working for more than a year with cloud provisioning tools from Zimory to manage the virtualization of larger-scale server and landscape compositions. Leveraging this experience, T-Systems manages to drive efficiency even further than the current economies of scale, which makes this aggressive move possible.
Is T-Systems planning to seriously compete with Amazon in the future, and does it make sense for a traditional large enterprise IT services and hosting firm to compete with low-price public cloud offerings?
T-Systems’ public cloud beta shows a continuous memory sizing in a state-of-the-art self-service portal.
With about 41,000 attendees, 1,800 sessions, and a whopping 63,000-plus slides, Oracle OpenWorld 2010 (September 19-23) in San Francisco was certainly a mega event with more information than one could possibly digest or even collect in a week. While the main takeaway for every attendee depends, of course, on the individual’s area of interest, there was a strong focus this year on hardware due to the Sun Microsystems acquisition. I’m a strong believer in the integration story of “Hardware and Software. Engineered to Work Together.” and really liked the Iron Man 2 showcase all around the event; but, because I’m an application guy, the biggest part of the story, including the launch of Oracle Exalogic Elastic Cloud, was a bit lost on me. And the fact that Larry Ellison basically repeated the same story in his two keynotes didn’t really resonate with me — until he came to what I was most interested in: Oracle Fusion Applications!