Intel, despite a popular tendency to equate a dominant market position with indifference to competitive threats, has not been sitting still waiting for the ARM server phenomenon to engulf it in a wave of ultra-low-power servers. Intel is fiercely competitive, and it would be silly for any new entrant to assume that Intel will ignore a threat to the heart of a high-growth segment.
In 2009, Intel released a microserver specification for compact low-power servers, and along with competitor AMD, it has been aggressive in driving down the power envelope of its mainstream multicore x86 server products. Recent momentum behind ARM-based servers has heated up this potential competition, however, and Intel has taken the fight deeper into the low-power realm with the recent introduction of the N570, an existing embedded low-power processor, as a server CPU aimed squarely at emerging ultra-low-power and dense servers. The N570, a dual-core Atom processor, is currently being used by a single server partner, ultra-dense server manufacturer SeaMicro (see Little Servers For Big Applications At Intel Developer Forum), and will allow it to deliver its current 512 Atom cores with half the number of physical CPUs and some power savings.
Technically, the N570 is a dual-core Atom CPU with 64-bit arithmetic (a differentiator against ARM) but the same 4 GB physical memory limitation as current 32-bit ARM designs, and it should have a power dissipation of between 8 and 10 watts.
Since the introduction of its Core 2 architecture, Intel has reversed much of the damage done to it by AMD in the server space, with attendant publicity. AMD, however, has been quietly reclaiming some ground with its 12-core 6100 series CPUs, showing strength in benchmarks that emphasize high throughput in process-rich environments as opposed to maximum performance per core. Several manufacturers have also told us that their AMD-based systems are enjoying very strong customer acceptance, thanks to the throughput of the 12-core CPUs combined with attractive pricing. As a fillip to this success, AMD this past week announced speed bumps for the 6100-series products to give a slight performance boost as they continue to compete with Intel’s Xeon 5600 and 7500 products (Intel’s Sandy Bridge server products have not yet been announced).
But the real news last week was the quiet subtext: the anticipated 16-core Interlagos products based on the new Bulldozer core appear to be on schedule for Q2 ’11 shipments to system partners, who should be able to ship systems during Q3, and AMD is still certifying them as compatible with the current sockets used for the 12-core 6000-series CPUs. This implies that system partners will be able to deliver products based on the new parts very rapidly.
Actual performance of these systems will obviously depend on the workloads being run. Our gut feeling is that while they will not rival the per-core performance of Intel’s Xeon 7500 CPUs, in large throughput-oriented environments with high numbers of processes — a description that fits a large number of web and middleware environments — these CPUs, each with up to a 50% per-core performance advantage over current AMD CPUs, may deliver some impressive benchmarks and keep competition in the server space at a boil, which in the end is always helpful to customers.
Last week IBM and ARM Holdings Plc quietly announced a continuation of their collaboration on advanced process technology, this time with a stated goal of developing ARM IP optimized for IBM processes down to a future 14 nm node. The two companies have been collaborating on semiconductors and SOC design since 2007, and this extension has several important ramifications for both companies and their competitors.
It is a clear indication that IBM retains a major interest in low-power and mobile computing, despite its earlier divestiture of its desktop and laptop computer business to Lenovo, and that it will be in a position to harvest this technology, particularly ARM's modular approach to composing SOC systems, for future productization.
For ARM, the implications are clear. Its latest announced product, the Cortex A15, which will probably appear in system-level products in approximately 2013, will initially be produced at 32 nm with a roadmap to 20 nm. The existence of a roadmap to a potential 14 nm product serves notice that the new ARM architecture will have a process roadmap that keeps it on Intel’s heels for another decade. ARM has parallel alliances with TSMC and Samsung as well, and there is no reason to think that these will not be extended, but the IBM alliance is an additional insurance policy. Beyond semiconductor process technology, IBM also has a deep well of systems and CPU IP, and access to it certainly cannot hurt ARM.
From nothing more than outlandish speculation, the prospects for a new entrant into the volume Linux and Windows server space have suddenly become much more concrete, culminating in an immense buzz at CES as numerous players, including NVIDIA and Microsoft, stoked the fires with innuendo, announcements, and demos.
Consumers of x86 servers are always on the lookout for faster, cheaper, and more power-efficient servers. In the event that they can’t get all three, the combination of cheaper and more energy-efficient seems to be attractive to a large enough chunk of the market to have motivated Intel, AMD, and all their system partners to develop low-power chips and servers designed for high-density compute and web/cloud environments. Up until now, the debate was Intel versus AMD, and low power meant a CPU with four cores and a power dissipation of 35 to 65 watts.
The Promised Land
The performance trajectory of processors that were formerly purely mobile device processors, notably the ARM Cortex, has suddenly introduced a new potential option into the collective industry mindset. But is this even a reasonable proposition, and if so, what does it take for it to become a reality?
Our first item of business is to figure out whether or not it even makes sense to think about these CPUs as server processors. My quick take is yes, with some caveats. The latest ARM offering is the Cortex A9, with vendors currently offering dual-core products at up to 1.2 GHz (the architecture claims scalability to four cores and 2 GHz). It draws approximately 2 W, much less than any single-core x86 CPU, and a multi-core version should be able to execute any reasonable web workload. Coupled with the promise of embedded GPUs, the notion of a server that consumes much less power than even the lowest-power x86 begins to look attractive. But…
Two months ago, we announced our upcoming Forrester Forrsights Software Survey, Q4 2010. Now the data is back from more than 2,400 respondents in North America and Europe and provides us with deep and sometimes surprising insights into the software market dynamics of today and the next 24 months.
We’d like to give you a sneak preview of interesting results around some of the most important trends in the software market: cloud computing, integrated information technology, business intelligence, mobile strategy, and overall software budgets and buying preferences.
Companies Start To Invest More Into Innovation In 2011
After the recent recession, companies are starting to invest more in 2011: 12% of companies plan to increase their software budgets by more than 10%, and another 22% plan increases of between 5% and 10%. At the same time, companies will invest a significant part of the additional budget in new solutions. While 50% of total software budgets still go to software operations and maintenance (Figure 1), this share has dropped significantly from 55% in 2010; spending on new software licenses will accordingly increase from 23% to 26%, and custom-development budgets from 23% to 24%, in 2011.
Cloud Computing Is Getting Serious
In this year’s survey, we took a much deeper look into companies’ strategies and plans around cloud computing, beyond simple adoption numbers. We tested the extent to which cloud computing is making its way from complementary services into business-critical processes, replacing core applications and moving sensitive data into public clouds.
On Dec. 2, Oracle announced the next move in its program to integrate its hardware and software assets with the introduction of Oracle Private Cloud Architecture, an integrated infrastructure stack with InfiniBand and/or 10G Ethernet fabric, integrated virtualization, management, and servers, along with software content, both Oracle’s and customer-supplied. Oracle has rolled out the architecture as a general platform for a variety of cloud environments, along with three specific implementations, Exadata, Exalogic, and the new Sunrise Supercluster, as proof points for the architecture.
Exadata has been dealt with extensively in other venues, both inside Forrester and externally, and appears to deliver the goods for I&O groups who require efficient consolidation and maximum performance from an Oracle database environment.
Exalogic is a middleware-targeted companion to the Exadata hardware architecture (or another instantiation of Oracle’s private cloud architecture, depending on how you look at it), presenting an integrated infrastructure stack ready to run either Oracle or third-party apps, although Oracle is positioning it as a Java middleware platform. It consists of the following major components integrated into a single rack:
Oracle x86 or T3-based servers and storage.
Oracle quad data rate (QDR) InfiniBand switches and the Oracle Solaris gateway, which makes the InfiniBand network look like an extension of the enterprise 10G Ethernet environment.
Oracle Linux or Solaris.
Oracle Enterprise Manager Ops Center for management.
In October, with great fanfare, the Open Data Center Alliance unfurled its banners. The ODCA is a consortium of approximately 50 large IT consumers, including large manufacturing, hosting, and telecom providers, with the avowed intent of developing standards for interoperable cloud computing. In addition to the roster of users, the announcement highlighted Intel in an ambiguous role as technology advisor to the group. The ODCA believes that it will carry some weight in the industry due to its estimated $50 billion per year of cumulative IT purchasing power, and the trade press was full of praise for influential users driving technology, as opposed to allowing rapacious vendors such as HP and IBM to drive users down proprietary paths that lead to vendor lock-in.
Now that we’ve had a month or more to let the purple prose settle a bit, let’s look at the underlying claims, the potential impact of the ODCA, and the shifting roles of vendors and consumers of technology. And let’s not forget about the role of Intel.
First, let me state unambiguously that one of the ODCA's core intentions is a good idea: developing common use-case models that, backed by the economic clout of ODCA members, will in turn drive vendors to develop products that comply with those models (and hopefully ODCA member requirements will correlate with those of a wider set of consumers). Vendors spend a lot of time talking to users and trying to understand their requirements, and having the ODCA as a proxy for the requirements of a lot of very influential customers will benefit all concerned.
This past weekend, my wife wanted desperately to attend Jon Stewart’s “Rally to Restore Sanity and/or Fear” to support its message of civility and moderation. An injured foot and problems with travel logistics kept her from attending, but we watched it on Comedy Central. It was, of course, a counterpoint to the “Restoring Honor” rally that Fox News’ Glenn Beck held in August. However, there were two striking commonalities between the two rallies:
First, the ability of cable show hosts to draw hundreds of thousands of people (estimates seem to be around 100,000 for the Beck rally and 200,000 for the Stewart rally) to Washington for a rally. We’re not talking about rallies organized by a major political leader like President Obama or a media giant like Walter Cronkite with a TV audience of tens of millions of people. Instead, the TV personalities who hosted these events have cable audiences that on a good night may reach 3 to 5 million people.
Second, the absence of attention to substantive economic issues facing this country, such as persistent high unemployment, economic recovery strategies, education and competitiveness, global warming, or budget deficits and priorities. Instead, the rallies focused on culture, tone, and attitudes, with the Beck rally resembling a college homecoming event where the returning alumni complain about how the place has gone downhill since they left, while current seniors crack jokes and make fun of the old geezers wandering around the campus.
Sourcing executives are winding down 2010 and gearing up for 2011. Most of the sourcing executives we have spoken with recently are bullish about the year ahead, despite some looming uncertainty about the economy, particularly in Europe. Spend is opening up again, and buyers are investing in more strategic initiatives. But sourcing groups still struggle to balance low cost and high value.
Many of the sourcing groups currently working with Forrester are asking about cloud as a viable alternative to traditional deployment models. Cloud promises rapid deployment, potentially significant cost savings, and variable pricing in line with how buyers want to pay in the current economy. And cloud offerings continue to mature in areas where buyers previously had concerns (vendor viability, security, architecture, location of data). Cloud adoption is already over 25% in North America and continues to grow in Europe (led by the UK, but also growing in markets like Germany, France, and the Nordics).
Most sourcing strategies around cloud consist of five key phases:
1. Understanding the evolving supplier landscape and market maturity across cloud offerings.
2. Educating business (and potentially IT) about the advantages and disadvantages of cloud.
3. Building decision frameworks to support cloud purchases.
4. Creating a contract negotiation and pricing strategy for cloud; building contract templates.
5. Working with business, vendor management, and IT to routinely evaluate ROI and decide whether to renew relationships or find alternatives (potentially cloud, hosted, on-premise, or hybrid).
My research team at Forrester helps tech vendor strategists anticipate and navigate rapidly changing market environments. We can't take a siloed view of specific product and service categories; instead, we broaden our perspective to trace cross-market dynamics around key issues such as sustainability, globalization, collaboration, mobility, and cloud. This puts tech vendors in a better position to act on emerging demand signals and competitive scenarios, and to select best-fit partner, channel, or acquisition targets.
The market for dedicated solutions and services that help companies manage their energy, emissions, and overall sustainability strategy is still nascent. The evolution of sustainability at larger software and IT service vendors started with their internal efforts, which is an important factor framing their portfolio and go-to-market strategies.
As a result, a significant number of vendors are still converting their internal capabilities to analyze, define, and implement sustainability into customer-facing software and/or service portfolios. These service portfolios go well beyond green IT, increasingly focusing on clients wrestling with hot topics such as enterprise carbon and energy management (ECEM), green supply chain, and sustainability performance management.
My colleague, Daniel Krauss, and I have recently completed a comprehensive round of interviews and analysis with many different players offering sustainability consulting services, including software, IT and business services, hardware, and even industrial companies (see Figure 1).