AMD Bumps Its Specs, Waits For Interlagos And Bulldozer

Richard Fichera

Since introducing its Core 2 architecture, Intel has reversed much of the damage done to it by AMD in the server space, with attendant publicity. AMD, however, has been quietly reclaiming some ground with its 12-core 6100 series CPUs, showing strength in benchmarks that emphasize high throughput in process-rich environments as opposed to maximum performance per core. Several manufacturers have also told us that their AMD-based systems are enjoying very strong customer acceptance due to the throughput of the 12-core CPUs combined with their attractive pricing. Adding to this success, AMD this past week announced speed bumps for the 6100-series products that give a slight performance boost as they continue to compete with Intel’s Xeon 5600 and 7500 products (Intel’s Sandy Bridge server products have not yet been announced).

But the real news last week was the quiet subtext: the anticipated 16-core Interlagos products based on the new Bulldozer core appear to be on schedule for Q2 ’11 shipments to system partners, who should be able to ship systems during Q3, and AMD is still certifying them as compatible with the current sockets used for the 12-core 6100 CPUs. This means that system partners will be able to deliver products based on the new parts very rapidly.

Actual performance of these systems will obviously depend on the workloads being run, but our gut feeling is that while they will not rival the per-core performance of Intel’s Xeon 7500 CPUs, for large throughput-oriented environments with high numbers of processes (a description that fits a great many web and middleware environments), these CPUs, each with up to a 50% per-core performance advantage over the current AMD parts, may deliver some impressive benchmarks and keep the competition in the server space at a boil, which in the end is always helpful to customers.
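As a rough illustration of the throughput math, here is a back-of-envelope sketch in Python; the 50% uplift is the top end of AMD’s claimed range, so the result is an upper bound, not a prediction:

    # Back-of-envelope: potential per-socket throughput of a 16-core
    # Interlagos part vs. a current 12-core part, taking the "up to 50%"
    # per-core claim at face value. Illustrative only, not a benchmark.
    current_cores = 12
    interlagos_cores = 16
    per_core_uplift = 1.5  # upper bound of the claimed range

    gain = (interlagos_cores / current_cores) * per_core_uplift
    print(f"Potential per-socket throughput gain: {gain:.1f}x")  # -> 2.0x

In other words, if the per-core claim holds at the high end, the extra four cores compound it into roughly double the per-socket throughput.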

Verizon Steps Into IaaS Cloud Leadership Ranks

James Staten

Pop Quiz: What’s the fastest way to build a credible, enterprise-relevant, and highly profitable cloud computing services practice? Buy one that already is. That’s exactly what Verizon did last week when it pushed $1.4B across the table to Terremark. Despite its internal efforts to build an infrastructure-as-a-service (IaaS) business over the last two years, Verizon simply couldn’t learn best practices fast enough to match the gains in the market it received through this move. Terremark has one of the strongest IaaS hosting businesses in the market and perhaps the best enterprise mix in its customer base of any top-tier provider. It also has a significant presence with government clients, including the US General Services Administration (GSA), which has production systems running in a hybrid mode between Terremark’s IaaS and traditional managed hosting services.

Confidential Forrester client inquiries have shown that Verizon struggled to win competitive IaaS bids with its computing-as-a-service (CaaS) offering, often losing to Terremark. This led Verizon to resell the Terremark solution (as its CaaS for SMB), in effect trying the business before buying it.

Read more

IBM And ARM Continue Their Collaboration – Major Win For ARM

Richard Fichera

Last week IBM and ARM Holdings plc quietly announced a continuation of their collaboration on advanced process technology, this time with the stated goal of developing ARM IP optimized for IBM physical processes down to a future 14 nm node. The two companies have been collaborating on semiconductors and SOC design since 2007, and this extension has several important ramifications for both companies and their competitors.

It is a clear indication that IBM retains a major interest in low-power and mobile computing despite its earlier divestiture of its desktop and laptop business to Lenovo, and that it will be in a position to harvest this technology, particularly ARM’s modular approach to composing SOC systems, for future productization.

For ARM, the implications are clear. Its latest announced product, the Cortex A15, which will probably appear in system-level products around 2013, will initially be produced at 32 nm with a roadmap to 20 nm. The prospect of a 14 nm part serves notice that the new ARM architecture will have a process roadmap that keeps it on Intel’s heels for another decade. ARM has parallel alliances with TSMC and Samsung as well, and there is no reason to think these will not be extended, but the IBM alliance is an additional insurance policy. Beyond process technology, IBM has a deep well of systems and CPU IP that certainly cannot hurt ARM.

Read more

Is The IaaS/PaaS Line Beginning To Blur?

James Staten

Forrester’s survey and inquiry research shows that, when it comes to cloud computing choices, our enterprise customers are more interested in infrastructure-as-a-service (IaaS) than platform-as-a-service (PaaS) despite the fact that PaaS is simpler to use. Well, this line is beginning to blur thanks to new offerings from Amazon Web Services LLC and upstart Standing Cloud.

The concern about PaaS centers on lock-in: developers and infrastructure and operations professionals fear that by writing to the PaaS layer’s services, their applications will lose portability (a concern that has long applied to middleware, PaaS or otherwise). As a result, IaaS platforms that let you control the deployment model down to the middleware, OS, and VM resource choices are more open and portable. The tradeoff, though, is that this autonomy comes with a degree of complexity. As the figure below shows, there is a direct correlation between the degree of abstraction a cloud service provides and the skill set required of the customer. If your development skills are limited to scripting, web page design, and form creation, most SaaS platforms provide the right abstraction for you to be productive. If you are a true coder with skills in Java, C#, or other languages, PaaS offerings let you build more complex applications and integrations without having to manage middleware, OS, or infrastructure configuration; the PaaS service takes care of all that. IaaS, however, requires you to know this stuff. As a result, cloud services have an inverse pyramid of potential customers: although IaaS is the most appealing to enterprise customers, it is the hardest to use.
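To make the abstraction gap concrete, here is a minimal sketch, using the boto Python library against Amazon EC2, of what even a trivial IaaS deployment asks of you (the AMI ID and key pair name are placeholders, not real values). Each argument below is a decision a PaaS would make on your behalf:

    # Minimal EC2 provisioning sketch with the boto library. Even this
    # "hello world" forces OS, VM size, and firewall choices on you.
    # The AMI ID and key pair name are placeholders.
    import boto

    conn = boto.connect_ec2()  # credentials come from the environment

    # Open port 80 ourselves; on a PaaS, this simply isn't our problem.
    web = conn.create_security_group('web', 'Basic web tier')
    web.authorize('tcp', 80, 80, '0.0.0.0/0')

    reservation = conn.run_instances(
        'ami-xxxxxxxx',            # choice of OS image
        instance_type='m1.small',  # choice of VM resources
        key_name='my-keypair',     # SSH access we must manage ourselves
        security_groups=['web'])
    print(reservation.instances[0].id)

On a PaaS, the equivalent step is typically a single deploy command against an application artifact, with none of these infrastructure choices exposed.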

Read more

ARM-Based Servers – Looming Tsunami Or Just A Ripple In The Industry Pond?

Richard Fichera

What began as nothing more than outlandish speculation, the prospect of a new entrant into the volume Linux and Windows server space, has suddenly become much more concrete, culminating in an immense buzz at CES as numerous players, including NVIDIA and Microsoft, stoked the fires with innuendo, announcements, and demos.

Consumers of x86 servers are always on the lookout for faster, cheaper, and more power-efficient servers. In the event that they can’t get all three, the combination of cheaper and more energy-efficient seems attractive to a large enough chunk of the market to have motivated Intel, AMD, and all their system partners to develop low-power chips and servers designed for high-density compute and web/cloud environments. Until now, the debate was Intel versus AMD, and low power meant a CPU with four cores and a power dissipation of 35 to 65 watts.

The Promised Land

The performance trajectory of processors that were formerly purely mobile device processors, notably the ARM Cortex, has suddenly introduced a new potential option into the collective industry mindset. But is this even a reasonable proposition, and if so, what does it take for it to become a reality?

Our first item of business is to figure out whether it even makes sense to think about these CPUs as server processors. My quick take is yes, with some caveats. The latest ARM offering is the Cortex A9, with vendors currently offering dual-core products at up to 1.2 GHz (the architecture claims scalability to four cores and 2 GHz). It draws approximately 2W, much less than any single-core x86 CPU, and a multi-core version should be able to execute any reasonable web workload. Coupled with the promise of embedded GPUs, the notion of a server that consumes much less power than even the lowest-power x86 begins to look attractive.
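A crude power budget shows why. Assuming roughly 2W for the ARM SoC, 35W for the lowest-power x86 CPU in the range cited above, and a notional 15W of per-node overhead for memory, network, and storage (our assumption, not vendor data), the density math works out as follows:

    # Crude rack-density comparison. All figures are illustrative
    # assumptions, not measured server data.
    RACK_BUDGET_W = 10000    # notional per-rack power budget
    ARM_SOC_W = 2            # dual-core Cortex A9 class SoC (approx.)
    X86_CPU_W = 35           # low end of the 35 to 65 watt x86 range
    NODE_OVERHEAD_W = 15     # assumed memory/NIC/storage overhead

    arm_nodes = RACK_BUDGET_W // (ARM_SOC_W + NODE_OVERHEAD_W)   # 588
    x86_nodes = RACK_BUDGET_W // (X86_CPU_W + NODE_OVERHEAD_W)   # 200

    print(f"ARM nodes per rack: {arm_nodes}, x86 nodes per rack: {x86_nodes}")

Even if each ARM node delivers only a fraction of the per-node performance, a roughly 3x density advantage is what makes the proposition interesting for process-rich web workloads. But…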

Read more

The Global Software Market In Transformation: Findings From The Forrsights Software Survey, Q4 2010

Holger Kisker

Two months ago, we announced our upcoming Forrester Forrsights Software Survey, Q4 2010. Now the data is back from more than 2,400 respondents in North America and Europe and provides us with deep and sometimes surprising insights into the software market dynamics of today and the next 24 months.

We’d like to give you a sneak preview of interesting results around some of the most important trends in the software market: cloud computing, integrated information technology, business intelligence, mobile strategy, and overall software budgets and buying preferences.

Companies Start To Invest More Into Innovation In 2011

After the recent recession, companies are starting to invest more in 2011: 12% of companies plan to increase their software budgets by more than 10%, and a further 22% by between 5% and 10%. At the same time, companies will invest a significant part of the additional budget in new solutions. While 50% of total software budgets still goes to software operations and maintenance (Figure 1), this share has dropped significantly from 55% in 2010; spending on new software licenses will accordingly increase from 23% to 26% of budgets in 2011, and custom-development budgets from 23% to 24%.
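Because total budgets are growing at the same time as the shares shift, new-license spending grows considerably faster than the three-point share gain suggests. A quick sketch, assuming a 7% overall budget increase (an illustrative midpoint, not a survey figure):

    # Dollar-terms effect of the share shift, assuming a 7% overall
    # budget increase (an illustrative midpoint, not a survey figure).
    budget_2010, budget_2011 = 100.0, 107.0

    licenses_2010 = budget_2010 * 0.23   # 23% share in 2010
    licenses_2011 = budget_2011 * 0.26   # 26% share in 2011

    growth = licenses_2011 / licenses_2010 - 1
    print(f"New-license spending growth: {growth:.0%}")  # about 21%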

Cloud Computing Is Getting Serious

In this year’s survey, we took a much deeper look into companies’ strategies and plans around cloud computing, beyond simple adoption numbers. We tested the extent to which cloud computing is making its way from complementary services into business-critical processes, replacing core applications and moving sensitive data into public clouds.

Read more

Oracle Rolls Out Private Cloud Architecture And World-Record Transaction Performance

Richard Fichera

On Dec. 2, Oracle announced the next move in its program to integrate its hardware and software assets: the Oracle Private Cloud Architecture, an integrated infrastructure stack with InfiniBand and/or 10G Ethernet fabric, integrated virtualization, management, and servers, along with software content, both Oracle’s and customer-supplied. Oracle has rolled out the architecture as a general platform for a variety of cloud environments, along with three specific implementations, Exadata, Exalogic, and the new Sunrise Supercluster, as proof points for the architecture.

Exadata has been dealt with extensively in other venues, both inside Forrester and externally, and appears to deliver the goods for I&O groups who require efficient consolidation and maximum performance from an Oracle database environment.

Exalogic is a middleware-targeted companion to the Exadata hardware architecture (or another instantiation of Oracle’s private cloud architecture, depending on how you look at it), presenting an integrated infrastructure stack ready to run either Oracle or third-party apps, although Oracle is positioning it as a Java middleware platform. It consists of the following major components integrated into a single rack:

  1. Oracle x86 or T3-based servers and storage.
  2. Oracle quad data rate (QDR) InfiniBand switches and the Oracle Solaris gateway, which makes the InfiniBand network look like an extension of the enterprise 10G Ethernet environment.
  3. Oracle Linux or Solaris.
  4. Oracle Enterprise Manager Ops Center for management.

Read more

Open Data Center Alliance – Lap Dog Or Watch Dog?

Richard Fichera

In October, with great fanfare, the Open Data Center Alliance unfurled its banners. The ODCA is a consortium of approximately 50 large IT consumers, including large manufacturing, hosting, and telecom providers, with the avowed intent of developing standards for interoperable cloud computing. In addition to the roster of users, the announcement highlighted Intel in an ambiguous role as technology advisor to the group. The ODCA believes that it will achieve some weight in the industry due to its estimated $50 billion per year of cumulative IT purchasing power, and the trade press was full of praise for influential users driving technology, as opposed to allowing rapacious vendors such as HP and IBM to drive users down proprietary paths that lead to vendor lock-in.

Now that we’ve had a month or more to allow the purple prose to settle a bit, let’s look at the underlying claims, potential impact of the ODCA and the shifting roles of vendors and consumers of technology. And let’s not forget about the role of Intel.

First, let me state unambiguously that one of the core intentions of the ODCA is a good idea: developing common use case models that, backed by the economic clout of ODCA members, will in turn drive vendors to develop products that comply with those models (and hopefully there will be a correlation between ODCA member requirements and those of a wider set of consumers). Vendors spend a lot of time talking to users and trying to understand their requirements, and having the ODCA as a proxy for the requirements of a lot of very influential customers will be a benefit to all concerned.

Read more

Cloud Predictions For 2011: Gains From Early Experiences Come Alive

James Staten

The second half of 2010 laid a foundation in the infrastructure-as-a-service (IaaS) market that looks to make 2011 a landmark year. Moves by a variety of players may just turn this into a vibrant, steady market rather than today’s picture of Amazon Web Services and a distant race for second. VMware vCloud Director finally shipped after much delay, a break from VMware’s rather steady on-time execution to date, and will power both ISP public clouds and enterprise private efforts in 2011. VMOps changed its name and landed a passel of service providers; we’ll see if they live up to being the “.com” in Cloud.com. OpenStack came out of the gate with strong ISV support and small-ISP momentum; 2011 may prove a make-or-break year for the open source upstart. And nearly every enter…

Read more

Changes In The Media Explain Why The Smart Computing Revolution Is Not Yet Running On Internet Time

Andrew Bartels

This past weekend, my wife wanted desperately to attend Jon Stewart’s “Rally to Restore Sanity and/or Fear” to support its message of civility and moderation. An injured foot and problems with travel logistics kept her from attending, but we watched it on the Comedy Central network. It was, of course, a counterpoint to the “Restoring Honor” rally that Fox News’ Glenn Beck held in August. However, there were two striking commonalities between the two rallies:

  • First, the ability of cable show hosts to gather hundreds of thousands of people (estimates seem to be around 100,000 for the Beck rally and 200,000 for the Stewart rally) to travel to Washington for a rally. We’re not talking about rallies organized by a major political leader like President Obama or a media giant like Walter Cronkite with a TV audience of tens of millions of people. Instead, the TV personalities who hosted these events have cable audiences that on a good night may reach 3 to 5 million people.
  • Second, the absence of attention to substantive economic issues facing this country, such as persistent high unemployment, economic recovery strategies, education and competitiveness, global warming, or budget deficits and priorities. Instead, the rallies focused on culture, tone, and attitudes, with the Beck rally resembling a college homecoming event where the returning alumni complain about how the place has gone downhill since they left, while current seniors crack jokes and make fun of the old geezers wandering around the campus.
Read more