ARM-Based Servers – Looming Tsunami Or Just A Ripple In The Industry Pond?

Richard Fichera

From nothing more than outlandish speculation, the prospects for a new entrant into the volume Linux and Windows server space have suddenly become much more concrete, culminating in an immense buzz at CES as numerous players, including NVIDIA and Microsoft, stoked the fires with innuendo, announcements, and demos.

Consumers of x86 servers are always on the lookout for faster, cheaper, and more power-efficient servers. When they can’t get all three, the combination of cheaper and more power-efficient appears attractive to a large enough chunk of the market to have motivated Intel, AMD, and all their system partners to develop low-power chips and servers designed for high-density compute and web/cloud environments. Up until now the debate was Intel versus AMD, and “low power” meant a CPU with four cores and a power dissipation of 35 to 65 watts.

The Promised Land

The performance trajectory of processors that were formerly purely mobile device processors, notably the ARM Cortex, has suddenly introduced a new potential option into the collective industry mindset. But is this even a reasonable proposition, and if so, what does it take for it to become a reality?

Our first item of business is to figure out whether or not it even makes sense to think about these CPUs as server processors. My quick take is yes, with some caveats. The latest ARM offering is the Cortex A9, with vendors currently offering dual-core products at up to 1.2 GHz (the architecture claims scalability to four cores and 2 GHz). It draws approximately 2 W, far less than any single-core x86 CPU, and a multi-core version should be able to execute any reasonable web workload. Coupled with the promise of embedded GPUs, the notion of a server that consumes much less power than even the lowest-power x86 begins to look attractive. But…
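
To make the power argument concrete, here is a rough back-of-the-envelope sketch using the figures cited above (a roughly 2 W dual-core Cortex A9 versus a low-power x86 part at the bottom of the 35 to 65 watt range). The rack power budget and the per-node overhead for memory, network, and storage are my own illustrative assumptions, not vendor numbers, and real designs will differ.

    # Back-of-the-envelope node density for a fixed rack power budget.
    # The budget and per-node overhead below are illustrative assumptions.
    RACK_BUDGET_W = 10_000    # assumed usable power per rack
    NODE_OVERHEAD_W = 10      # assumed non-CPU power per node (memory, NIC, storage)

    def nodes_per_rack(cpu_watts):
        """How many single-socket nodes fit within the rack power budget."""
        return RACK_BUDGET_W // (cpu_watts + NODE_OVERHEAD_W)

    print("ARM (2 W) nodes per rack:", nodes_per_rack(2))    # ~833
    print("x86 (35 W) nodes per rack:", nodes_per_rack(35))  # ~222

Even with generous overhead assumptions, the CPU’s share of node power drives a large gap in achievable density, which is exactly the attraction sketched above.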

Read more

Intel Announces Sandy Bridge. A Big Deal? You Bet!

Richard Fichera

Intel today officially announced the first products based on the much-discussed Sandy Bridge CPU architecture, and first impressions are highly favorable, with my take being that Sandy Bridge represents the first step in a very aggressive product road map for Intel in 2011.

Sandy Bridge is the next architectural spin after Intel’s Westmere shrink of the predecessor Nehalem architecture (Westmere was the “tick,” or process shrink, in Intel’s famous “tick-tock” cadence of alternating new microarchitectures and process shrinks; Sandy Bridge is the following “tock”) and incorporates some major innovations compared to the previous architecture:

  • Minor but collectively significant changes to many aspects of the low-level microarchitecture – more registers, better prefetch, and changes to the way instructions and operands are decoded, cached, and written back to registers and cache.
  • Major changes in integration of functions on the CPU die – Almost all major subsystems, including the CPU cores, memory controller, graphics controller, and PCIe controller, are now integrated onto the same die and can share data with much lower latency than in previous generations. In addition to more efficient data sharing, this level of integration allows for better power efficiency.
  • Improvements to media processing – A dedicated video transcoding engine and an extended vector instruction set (AVX) for media and floating-point calculations improve Sandy Bridge’s capabilities in several major application domains (a quick capability check follows this list).
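
For readers who want to verify whether a given Linux machine exposes the extended vector instruction set (AVX) referenced above, a minimal probe of the CPU feature flags looks something like the sketch below. It assumes a Linux-style /proc/cpuinfo and is only a quick capability check, nothing more.

    # Minimal, Linux-specific probe of CPU feature flags from /proc/cpuinfo.
    def cpu_flags(path="/proc/cpuinfo"):
        """Return the feature flags reported for the first logical CPU."""
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
        return set()

    print("AVX supported:", "avx" in cpu_flags())
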
Read more

Checking In On Linux – Latest Linux Releases Show Continued Progress

Richard Fichera

I’ve recently had the opportunity to talk with a small sample of SLES 11 and RHEL 6 Linux users, all developing their own applications. All were long-time Linux users, and two of them, one in travel services and one in financial services, had applications that can be described as both large and mission-critical.

The overall message is encouraging for Linux advocates, both the calm, rational type and those who approach it with near-religious fervor. The latest releases from SUSE and Red Hat, both based on the 2.6.32 Linux kernel, show significant improvements in scalability and modest improvements in iso-configuration performance. One user reported that an application that previously had maxed out at 24 cores with SLES 10 was now nearing production certification with 48 cores under SLES 11. Performance scalability was reported as “not linear, but worth doing the upgrade.”

Overall memory scalability under Linux is still a question mark, since widely available x86 platforms do not exceed 3 TB of memory, but initial reports from a user familiar with HP’s DL980 verify that the new Linux kernel can reliably manage at least 2 TB of RAM under heavy load.

File system options continue to expand as well. The older Linux file system standard, ext4, which can scale to “only” 16 TB, has been joined by additional options such as XFS (contributed by SGI), which has been implemented in several installations with file systems in excess of 100 TB, relieving a limitation that may have been more psychological than practical for most users.
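
As a quick sanity check, the type and raw capacity of a mounted volume can be read with nothing more than the Python standard library, which is handy when confirming that a large XFS volume really is in the multi-terabyte range. This is a minimal sketch assuming a Linux-style /proc/mounts; the /data mount point is hypothetical.

    # Report filesystem type and capacity for a mount point (Linux).
    import os

    def fs_info(mount_point):
        """Return (fs_type, total_bytes) for the filesystem at mount_point."""
        fs_type = "unknown"
        with open("/proc/mounts") as f:
            for line in f:
                device, mnt, fstype, *rest = line.split()
                if mnt == mount_point:
                    fs_type = fstype
        st = os.statvfs(mount_point)
        return fs_type, st.f_frsize * st.f_blocks

    fstype, total_bytes = fs_info("/data")   # hypothetical mount point
    print(f"{fstype}: {total_bytes / 1e12:.1f} TB")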

Read more

ScaleMP – Interesting Twist On Systems Scalability And Virtualization

Richard Fichera

I just spent some time talking to ScaleMP, an interesting niche player that provides a server virtualization solution. What is interesting about ScaleMP is that rather than splitting a single physical server into multiple VMs, they are the only successful offering (to the best of my knowledge) that allows I&O groups to scale up a collection of smaller servers to work as a larger SMP.

Others have tried and failed to deliver this kind of solution, but ScaleMP seems to have actually succeeded, with a claimed 200 customers and expectations of somewhere between 250 and 300 next year.

Their vSMP product comes in two flavors, one that allows a cluster of machines to look like a single system for purposes of management and maintenance while still running as independent cluster nodes, and one that glues the member systems together to appear as a single monolithic SMP.

Does it work? I haven’t been able to verify their claims with actual customers, but they have been selling for about five years and claim over 200 accounts, with a couple of dozen publicly referenced. All in all, that is probably too elaborate a front to maintain if there were really nothing there. The background of the principals and the technical details they were willing to share convinced me that they have a deep understanding of the low-level memory management, prefetching, and caching that would be needed to make a collection of systems function effectively as a single system image. Their smaller-scale benchmarks displayed good scalability in the range of 4 to 8 systems, well short of their theoretical limits.
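
The reason memory reference patterns matter so much for an aggregated SMP of this kind is that touching a page resident on another node costs far more than local DRAM. The latencies below are illustrative assumptions, not ScaleMP measurements, but they show how quickly the blended access time degrades as locality drops.

    # Illustrative average memory access time as locality drops.
    # Both latencies are assumptions for this sketch, not measured vSMP figures.
    LOCAL_NS = 100       # assumed local DRAM access latency
    REMOTE_NS = 2_000    # assumed cross-node access latency over the interconnect

    def avg_access_ns(local_fraction):
        """Blended access time for a given fraction of local references."""
        return local_fraction * LOCAL_NS + (1 - local_fraction) * REMOTE_NS

    for frac in (0.99, 0.95, 0.80):
        print(f"{frac:.0%} local references: {avg_access_ns(frac):.0f} ns average")
    # 99% local -> ~119 ns, 95% -> ~195 ns, 80% -> ~480 ns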

My quick take is that the software works, and bears investigation if you have an application that:

  1. Either is certified to run with ScaleMP (not many are), or is one whose code you control.
  2. You understand the memory reference patterns of the application, and
Read more

Oracle Rolls Out Private Cloud Architecture And World-Record Transaction Performance

Richard Fichera

On Dec. 2, Oracle announced the next move in its program to integrate its hardware and software assets, with the introduction of Oracle Private Cloud Architecture, an integrated infrastructure stack with Infiniband and/or 10G Ethernet fabric, integrated virtualization, management and servers along with software content, both Oracle’s and customer-supplied. Oracle has rolled out the architecture as a general platform for a variety of cloud environments, along with three specific implementations, Exadata, Exalogic and the new Sunrise Supercluster, as proof points for the architecture.

Exadata has been dealt with extensively in other venues, both inside Forrester and externally, and appears to deliver the goods for I&O groups who require efficient consolidation and maximum performance from an Oracle database environment.

Exalogic is a middleware-targeted companion to the Exadata hardware architecture (or another instantiation of Oracle’s private cloud architecture, depending on how you look at it), presenting an integrated infrastructure stack ready to run either Oracle or third-party apps, although Oracle is positioning it as a Java middleware platform. It consists of the following major components integrated into a single rack:

  1. Oracle x86 or T3-based servers and storage.
  2. Oracle quad data rate (QDR) InfiniBand switches and the Oracle Solaris gateway, which makes the InfiniBand network look like an extension of the enterprise 10G Ethernet environment.
  3. Oracle Linux or Solaris.
  4. Oracle Enterprise Manager Ops Center for management.
Read more

Checking In With Cisco UCS – Continued Momentum, Decoupled From Corporate Malaise

Richard Fichera

I met recently with Cisco’s UCS group in San Jose to get a quick update on sales and maybe some hints about future development. The overall picture is one of rapid growth decoupled from whatever pressures Cisco management has cautioned about in other areas of the business.

Overall, according to a recent disclosure by Cisco CEO John Chambers, Cisco’s UCS revenue is growing at 550% year over year, with the most recent quarter indicating a $500M annual run rate (we make that out as about $125M in quarterly revenue). This figure does not seem to include the over 4,000 blades used by Cisco IT, nor does it include units consumed internally by Cisco and subsequently shipped to customers as part of appliances or other Cisco products. Also of note is that this is fiscal Q1 for Cisco, traditionally its weakest quarter, although with an annual growth rate in excess of 500% we would expect UCS sequential quarters to march to a totally different drummer than the overall company numbers.

Read more

Lies, Damned Lies, And Statistics . . . And Benchmarks

Richard Fichera

I have been working on a research document, to be published this quarter, on the impact of 8-socket x86 servers based on Intel’s new Xeon 7500 CPU. In a nutshell, these systems have the performance of the best-of-breed RISC/UNIX systems of three years ago, at a substantially better price, and their overall performance improvement trajectory has been steeper than that of competing technologies for the past decade.

This is probably not shocking news and is not the subject of this current post, although I would encourage you to read the document when it is finally published. During the course of researching it I spent time trying to prove or disprove, with available benchmark results, my thesis that x86 system performance now solidly overlaps that of RISC/UNIX. The process highlighted for me the limitations of using standardized benchmarks for performance comparisons. There are now so many benchmarks available that system vendors run each benchmark on only selected subsets of their product lines, if at all. Additionally, most benchmarks suffer from several common flaws:

  • They are results from high-end configurations, in many cases far beyond the norm for realistic use cases, and the results cannot be interpolated to smaller, more typical configurations (the sketch after this list illustrates why).
  • They are often the result of teams of very smart experts tuning the system configurations, application and system software parameters for optimal results. For a large benchmark such as SAP or TPC, it is probably reasonable to assume that there are over 1,000 variables involved in the tuning effort. This makes the results very much like EPA mileage figures — the consumer is guaranteed not to exceed these numbers.
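
To illustrate the interpolation problem flagged in the first bullet, consider a simple Amdahl’s-law sketch with an assumed 5% serial fraction (the fraction is purely illustrative; real workloads vary widely). A record score on a very large configuration says much less about a 4-socket system than simple division would suggest.

    # Why benchmark results do not scale (or interpolate) linearly:
    # Amdahl's law with an assumed serial fraction.
    SERIAL_FRACTION = 0.05   # illustrative assumption; real workloads vary

    def speedup(n, serial=SERIAL_FRACTION):
        """Amdahl's-law speedup over a single processor."""
        return 1.0 / (serial + (1.0 - serial) / n)

    for n in (4, 8, 16, 64):
        print(f"{n:>3} processors: {speedup(n):5.2f}x  (linear scaling would be {n}x)")
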
Read more

Fujitsu – Ready To Play In North America?

Richard Fichera

Fujitsu? Who? I recently attended Fujitsu’s global analyst conference in Boston, which gave me an opportunity to check in with the best-kept secret in the North American market. Even Fujitsu execs admit that many people in this largest of IT markets think Fujitsu has something to do with film, and few of us have ever seen a Fujitsu system installed in the US unless it was a point-of-sale system.

So what is the management of this global $50 billion information and communications technology company, with a competitive portfolio of client, server, and storage products and a global service and integration capability, going to do about its lack of presence in the world’s largest IT market? In a word, invest. Fujitsu’s management, judging from its history and what it has disclosed of its plans, intends to invest in the US over the next three to four years to consolidate its estimated $3 billion in North American business into a more manageable (simpler) set of operating companies, and to double down on hiring and selling into the North American market. The fact that they have given themselves multiple years to do so is very indicative of what I have always thought of as Fujitsu’s greatest strength and one of its major weaknesses – they operate on Japanese time, so to speak. For an American company to undertake to build a presence over multiple years with seeming disregard for quarterly earnings would be almost unheard of, so Fujitsu’s management gets major kudos for that. On the other hand, years of observing them from a distance also leads me to believe that their approach to solving problems inherently lacks the sense of urgency of some of their competitors.

Read more

IBM Acquires BNT – Nuclear War In The Converged Infrastructure World?

Richard Fichera

There has been a lot of press about IBM’s acquisition of BNT (Blade Network Technologies) focusing on the economics and market share of BNT as a competitor to Cisco and HP’s ProCurve/3Com franchise. But at its heart the acquisition is more about defending and expanding a position in the emerging converged server, networking, and storage infrastructure segment than it is about raw switch port market share. It is also a powerful vindication of the proposition that infrastructure convergence is driving major realignment in the vendor industry.

Starting with HP’s success with its c-Class blade servers and Virtual Connect technology, and escalating with Cisco’s entrance into the server market, IBM continued its investment in its Virtual Fabric and Open Fabric Manager technology, heavily leveraging BNT’s switch platforms. At some point it became clear that BNT was a critical element of IBM’s convergence strategy, with IBM’s plans now heavily dependent on a vendor with whom it had an excellent but non-exclusive relationship, and one whose acquisition by another player could have severely compromised IBM’s product plans. Hence the acquisition. Now that it owns BNT, IBM can capitalize on BNT’s excellent edge network technology for the continued development of its converged infrastructure strategy without any hesitation about leveraging it further.

Read more

IBM – Ramping Up x86 Investment

Richard Fichera

I recently spent a day with IBM’s x86 team, primarily to get back up to speed on their entire x86 product line, and partially to realign my views of them after spending almost five years as a direct competitor. All in all, time well spent, with some key takeaways:

  • IBM has fixed some major structural problems with the entire x86 program and its perception within the company – As recently as two years ago, it appeared to the outside world that IBM was not really serious about x86 servers. Between licensing its low-end server designs to Lenovo (although IBM continued to sell its own versions) and an apparent retreat to the upper end of the segment, the message was mixed at best. New management, new alignment with sales, and a higher internal profile for x86 seem to have moved the division back into IBM’s mainstream.
  • Increased investment – It looks like IBM significantly ramped up investment in x86 products about three years ago. The result has been a relatively steady flow of new products into the marketplace, some of which, such as the HS22 blade, significantly reversed deficits versus equivalent HP products. Others followed in high-end servers, virtualization, and systems management, along with an increased velocity of innovation in low-end systems.
  • Established leadership in new niches such as dense modular server deployments – IBM’s iDataPlex, while representing a small footprint in terms of total volume, gave IBM immediate visibility as an innovator in the rapidly growing niche for hyperscale dense deployments. Along the way, IBM has also apparently become the leader in GPU deployments, another low-volume but high-visibility niche.
Read more