Oracle Says No To Itanium – Embarrassment For Intel, Big Problem For HP

Richard Fichera

Oracle announced today that it is going to cease development for Itanium across its product line, stating that it believed, after consultation with Intel management, that x86 was Intel's strategic platform. Intel, of course, responded with a press release specifically stating that at least two additional Itanium products are in active development – Poulson (whose initial specifications, if not availability, have been announced) and Kittson, about which little is known.

This is a huge move, and one that seems like a kick carefully aimed at the you-know-whats of HP's Itanium-based server business, which competes directly with Oracle's SPARC-based Unix servers. If Oracle stays the course in the face of what will certainly be immense pressure from HP, mild censure from Intel, and consternation on the part of many large customers, the consequences are pretty obvious:

  • Intel loses prestige and credibility for Itanium, and faces a potential drop-off of business from its only large Itanium customer. Nonetheless, the majority of Intel's server business is x86, and it will, in the end, suffer only a token loss of revenue. Intel's response to this move by Oracle will be muted – public defense of Itanium, but no fireworks.
Read more

Solving The Duplicate Address Book Problem Is One Of The Drivers For Development Of Personal Cloud

Frank Gillett

At yesterday's HP Summit 2011, CEO Leo Apotheker made a public case for personal cloud — online services that work together to orchestrate and deliver work and personal information across personal digital devices (such as PCs, smartphones, and tablets). For people planning strategy at vendors, what are the implications of personal cloud? End users will need help getting seamless access to their information across their devices.

One type of information ripe for help from personal cloud services is contacts or address books. Every person using a mobile phone (251 million in the US, most of which can do email) confronts the issue of how to get all their work and personal contacts into a new mobile phone. Can they simply sync with an existing source? Do they have to export? Or <shudder> re-key them?
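For flavor, here is a minimal sketch (in Python, with illustrative field names and a deliberately simple matching rule – assumptions for this example, not any vendor's actual API) of the normalize-and-merge logic a contact sync service has to get right:

```python
import re

def normalize_phone(phone):
    """Reduce a phone number to digits only so formatting differences don't create duplicates."""
    return re.sub(r"\D", "", phone or "")

def contact_key(contact):
    """Build a matching key from normalized email and phone.

    Real services use fuzzier matching (name similarity, multiple
    emails per person); this illustrates only the basic idea.
    """
    email = (contact.get("email") or "").strip().lower()
    phone = normalize_phone(contact.get("phone"))
    return (email, phone)

def merge_address_books(*books):
    """Merge several address books, folding entries that share a key."""
    merged = {}
    for book in books:
        for contact in book:
            key = contact_key(contact)
            if key in merged:
                # Keep existing values; fill in fields the duplicate adds.
                for field, value in contact.items():
                    merged[key].setdefault(field, value)
            else:
                merged[key] = dict(contact)
    return list(merged.values())

work = [{"name": "Ann Lee", "email": "ann@corp.com", "phone": "(617) 555-0100"}]
personal = [{"name": "Ann", "email": "ann@corp.com", "phone": "617-555-0100",
             "birthday": "May 2"}]
print(merge_address_books(work, personal))
# One merged entry: formatting differences in the phone number no longer
# produce a duplicate, and the birthday from the personal book is kept.
```

Even this toy version shows why duplicates happen: any service that skips the normalization step treats "(617) 555-0100" and "617-555-0100" as two different people.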

We’ve been researching how many people are actually using a sync service or would be interested in using one. The market for contact or calendar sync is vastly underserved today: Only 4% of North American and European information worker respondents (those using a computer 1 hour or more per day) report that they used a website or Internet service that required a login for contact and calendar synchronization, integration, or enhancement for work (Source: Forrsights Workforce Employee Survey, Q3 2010).

Yet, when Forrester asked US consumers whether they identified with the statement, “I have several electronic address books and can't always find the contact I want when I want it,” only 4% chose that as a frustration or concern that they experience with the information they’ve stored in their PCs, devices, online services, or mobile phones (Source: North American Technographics® Omnibus Online Survey, Q4 2010 [US]).

Read more

HP And Microsoft Ride The Converged Infrastructure Wave With Integrated Application Appliances

Richard Fichera

In another sign that the movement toward converged infrastructure and vertically integrated solutions is becoming ever more mainstream, HP and Microsoft recently announced a line of specialized appliances that combine integrated hardware and pre-packaged Microsoft software, targeting Exchange email, business analytics with Microsoft SharePoint and PowerPivot, and data warehousing with SQL Server. The offerings include:

  • HP E5000 Messaging System – Microsoft Exchange in standard configurations of 500 to 3,000 mailboxes. This product incorporates a pair of servers derived from HP's blade family in a new 3U rack enclosure, plus storage and Microsoft Exchange software, and is installed by HP as a turnkey system.
  • HP Business Decision Appliance – Integrated servers and SQL Server PowerPivot software targeting analytics in midmarket and enterprise groups, tuned for 80 concurrent users. This offering is based on standard HP rack servers and integrated Microsoft software.
  • HP Enterprise Data Warehouse Appliance – Intended to compete with Oracle Exadata, at least for data warehouse applications, this is targeted at enterprise data warehouses in the hundreds-of-terabytes range. Like Exadata, it is a massive stack of integrated servers and software, including 13 HP rack servers, 10 of HP's MSA storage units, and integrated Ethernet, InfiniBand, and Fibre Channel networking, along with Microsoft SQL Server 2008 R2 Parallel Data Warehouse software.
Read more

How Can Apple Improve Mobile Me To Fulfill More Of The Vision Of Personal Cloud? Plus, Mozy To Add File Sync.

Frank Gillett

Most of the hype in advance of today's Apple media event is rightly about a new iPad. Sarah Rotman Epps will post on her blog about the new iPad for consumer product strategists after the announcement. I'm focused on the published reports that Apple's Mobile Me service will be upgraded. I cited Mobile Me as an example of emerging personal cloud services in a July 2009 report, and I'm working on a follow-on report now. Mobile Me is Apple's horse in a contest with Facebook, Google, Microsoft, and others to shift personal computing from being device-centric to user-centric, so that you and I don't need to think about which gadget has the apps or data we want. The vision of personal cloud is that a combination of local apps, cached data, and cloud-based services will put the right information on the right device at the right time, whether on personal or work devices. The strengths of Mobile Me today are:

  • Synced contacts, calendar, Safari bookmarks, and email account settings, as well as IMAP-based Mobile Me email accounts, for Web, Mac, Windows, and iOS devices.
  • Synced Mac preferences, including app and system preferences.
  • Mobile Me Gallery for easy uploading and sharing of photos and videos.
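Under the hood, even the simplest reconciliation policy such a service could adopt – last-writer-wins by modification time – is easy to sketch. A minimal illustration in Python, with hypothetical record fields (this is not Apple's actual sync protocol):

```python
from datetime import datetime, timezone

def sync(cloud_copy, device_copy):
    """Return the winning version of one record, comparing modification times."""
    if device_copy["modified"] > cloud_copy["modified"]:
        return dict(device_copy)   # device edit is newer; push it to the cloud
    return dict(cloud_copy)        # cloud copy wins; pull it down to the device

cloud = {"name": "Ann Lee", "phone": "617-555-0100",
         "modified": datetime(2011, 3, 1, 9, 0, tzinfo=timezone.utc)}
phone_edit = {"name": "Ann Lee", "phone": "617-555-0199",
              "modified": datetime(2011, 3, 2, 8, 30, tzinfo=timezone.utc)}

print(sync(cloud, phone_edit)["phone"])  # 617-555-0199 -- the later edit wins
```

Last-writer-wins silently discards the losing edit, which is exactly why concurrent changes made on two devices still produce glitches; services that want to fulfill the personal cloud vision need field-level merging or conflict prompts.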
Read more

Intel Discloses Details on “Poulson,” Next-Generation Itanium

Richard Fichera

This week at ISSCC, Intel made its first detailed public disclosures about its upcoming “Poulson” next-generation Itanium CPU. While not in any sense complete, the details they did disclose paint a picture of a competent product that will continue to keep the heat on in the high-end UNIX systems market. Highlights include:

  • Process — Poulson will be produced in a 32 nm process, skipping the intermediate 45 nm step that many observers expected to see as a step down from the current 65 nm Itanium process. This is a plus for Itanium consumers, since it allows for denser circuits and cheaper chips (see the back-of-the-envelope arithmetic after this list). With an industry-record 3.1 billion transistors, Poulson needs all the help it can get keeping size and power down. The new process also promises major improvements in power efficiency.
  • Cores and cache — Poulson will have 8 cores and 54 MB of on-chip cache, a huge amount, even for a cache-sensitive architecture like Itanium. Poulson will have a 12-issue pipeline instead of the current 6-issue pipeline, promising to extract more performance from existing code without any recompilation.
  • Compatibility — Poulson is socket- and pin-compatible with the current Itanium 9300 CPU, which will mean that HP can move more quickly into production shipments when it's available.
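For a sense of why skipping the 45 nm node matters, here is that back-of-the-envelope density arithmetic, assuming ideal area scaling with the square of the feature size (real designs scale less than ideally, so treat these as upper bounds):

```python
# Ideal transistor density scales with the inverse square of the
# feature size; these figures are upper bounds, not Intel's numbers.
old_nm, skipped_nm, new_nm = 65, 45, 32

print(f"65 nm -> 32 nm ideal density gain: {(old_nm / new_nm) ** 2:.1f}x")      # ~4.1x
print(f"65 nm -> 45 nm would have given:   {(old_nm / skipped_nm) ** 2:.1f}x")  # ~2.1x
```

Roughly 4x the ideal density in one jump, versus about 2x for the skipped node – which is what makes a 3.1 billion-transistor die with 54 MB of cache plausible at a manufacturable size and power level.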
Read more

AMD Bumps Its Specs, Waits For Interlagos And Bulldozer

Richard Fichera

Since introducing its Core 2 architecture, Intel has reversed much of the damage done to it by AMD in the server space, with attendant publicity. AMD, however, has been quietly reclaiming some ground with its 12-core 6100-series CPUs, showing strength in benchmarks that emphasize high throughput in process-rich environments as opposed to maximum performance per core. Several AMD-based system products have also been cited to us by their manufacturers as enjoying very strong customer acceptance due to the throughput of the 12-core CPUs combined with their attractive pricing. As a fillip to this success, AMD this past week announced speed bumps for the 6100-series products that give a slight performance boost as they continue to compete with Intel's Xeon 5600 and 7500 products (Intel's Sandy Bridge server products have not yet been announced).

But the real news last week was the quiet subtext that the anticipated 16-core Interlagos products based on the new Bulldozer core appear to be on schedule for Q2 '11 shipments to system partners, who should be able to ship systems during Q3, and that AMD is still certifying them as compatible with the current sockets used for the 12-core 6100 CPUs. This implies that system partners will be able to deliver products based on the new parts very rapidly.

Actual performance of these systems will obviously depend on the workloads being run, but our gut feeling is that while they will not rival the per-core performance of Intel's Xeon 7500 CPUs, they may deliver some impressive benchmarks in large throughput-oriented environments with high numbers of processes – a description that fits a great many web and middleware environments. With up to a 50% performance advantage per core over the current AMD CPUs, these parts should keep competition in the server space at a boil, which in the end is always helpful to customers.
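Taking the figures above at face value – 16 cores versus 12, and up to 50% more performance per core, both treated here as assumed upper bounds rather than benchmark results – the per-socket throughput arithmetic is simple:

```python
# Best-case per-socket throughput estimate for Interlagos vs. the
# 12-core 6100 series; inputs are claims from the text, not benchmarks.
current_cores, interlagos_cores = 12, 16
per_core_gain = 1.5  # "up to a 50% performance advantage per core"

socket_gain = (interlagos_cores / current_cores) * per_core_gain
print(f"Best-case per-socket throughput gain: {socket_gain:.1f}x")  # 2.0x
```

A potential doubling of per-socket throughput in the same socket is exactly the kind of number that keeps the competitive pot boiling.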

The Passing Of A Giant – Digital Equipment Founder Ken Olsen Dead At 84

Richard Fichera

One evening in 1972 I was hanging out in the computer science department at UC Berkeley with a couple of equally socially backward friends, waiting for our batch programs to run, and to kill some time we dropped in on a nearby physics lab that was analyzing photographs of particle tracks from one of the various accelerators that littered the Lawrence Radiation Laboratory. Analyzing these tracks was real scut work – the overworked grad student had to measure angles between tracks and lengths of tracks, and apply a number of calculations to them to determine if they were of interest. To our surprise, this lab had something we had never seen before – a computer-assisted screening device that scanned the photos and in a matter of seconds determined whether they contained any formations of interest. It had a big light table, a fancy scanner, whirring arms and levers and gears, and off in the corner, the computer, “a PDP from Digital Equipment.” It was a 19” rack-mount box with an impressive array of lights and switches on the front. As a programmer of the immense 1 MFLOP CDC 6400 in the Rad Lab computer center, I was properly dismissive…

This was a snapshot of the dawn of the personal computer era, almost a decade before IBM introduced the PC and blew it wide open. The PDP (Programmed Data Processor) systems from Ken Olsen, the MIT Lincoln Laboratory engineer who founded Digital Equipment, were the beginning of a fundamental change in the relationship between man and computer, putting a person in the computing loop instead of keeping them standing outside the temple.

Read more

ARM-Based Servers – Looming Tsunami Or Just A Ripple In The Industry Pond?

Richard Fichera

What was once nothing more than outlandish speculation – the prospect of a new entrant into the volume Linux and Windows server space – has suddenly become much more concrete, culminating in an immense buzz at CES as numerous players, including NVIDIA and Microsoft, stoked the fires with innuendo, announcements, and demos.

Consumers of x86 servers are always on the lookout for faster, cheaper, and more power-efficient servers. If they can't get all three, the combination of cheaper and more energy-efficient appears attractive to a large enough chunk of the market to have motivated Intel, AMD, and all their system partners to develop low-power chips and servers designed for high-density compute and web/cloud environments. Until now the debate was Intel versus AMD, and "low power" meant a CPU with four cores and a power dissipation of 35 – 65 Watts.

The Promised Land

The performance trajectory of processors that were formerly purely mobile device processors, notably the ARM Cortex, has suddenly introduced a new potential option into the collective industry mindset. But is this even a reasonable proposition, and if so, what does it take for it to become a reality?

Our first item of business is to figure out whether or not it even makes sense to think about these CPUs as server processors. My quick take is yes, with some caveats. The latest ARM offering is the Cortex-A9, with vendors currently offering dual-core products at up to 1.2 GHz (the architecture claims scalability to four cores and 2 GHz). It draws approximately 2 W, much less than any single-core x86 CPU, and a multi-core version should be able to execute any reasonable web workload. Coupled with the promise of embedded GPUs, the notion of a server that consumes much less power than even the lowest-power x86 begins to look attractive – the crude power-budget arithmetic below shows why. But…
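Here is that arithmetic as a quick sketch; every input is either a figure quoted above or a loudly flagged assumption (a hypothetical 10 kW rack budget, counting CPU power only and ignoring memory, I/O, and system overhead):

```python
# Crude CPU-count comparison at a fixed power budget. The rack budget
# is a hypothetical assumption; the CPU figures come from the text.
rack_budget_w = 10_000   # assumed per-rack power budget in watts
x86_low_power_w = 35     # low end of the 35-65 W x86 parts cited above
arm_a9_w = 2             # approximate draw of a dual-core Cortex-A9

print(f"x86 CPUs per 10 kW: {rack_budget_w // x86_low_power_w}")  # 285
print(f"ARM CPUs per 10 kW: {rack_budget_w // arm_a9_w}")         # 5000
```

Roughly 17x the sockets per rack, counting CPU power alone – the open question is whether each ARM socket does enough useful work per watt to make that density pay off.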

Read more

Checking In On Linux – Latest Linux Releases Show Continued Progress

Richard Fichera

I've recently had the opportunity to talk with a small sample of SLES 11 and RHEL 6 Linux users, all developing their own applications. All were long-time Linux users, and two of them, one in travel services and one in financial services, had applications that can be described as both large and mission-critical.

The overall message is encouraging for Linux advocates, both the calm rational type as well as those who approach it with near-religious fervor. The latest releases from SUSE and Red Hat, both based on the 2.6.32 Linux kernel, show significant improvements in scalability and modest improvements in iso-configuration performance. One user reported that an application that previously had maxed out at 24 cores with SLES 10 was now nearing production certification with 48 cores under SLES 11. Performance scalability was reported as “not linear, but worth doing the upgrade.”
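That "not linear, but worth doing" result is roughly what Amdahl's law predicts. A quick illustration in Python – the 2% serial fraction is purely an assumption for illustration, not a measured value from the user's application:

```python
# Amdahl's law: speedup over one core given a non-parallelizable fraction.
def speedup(cores, serial_fraction):
    return 1 / (serial_fraction + (1 - serial_fraction) / cores)

serial = 0.02  # assumed: 2% of the work cannot be parallelized
s24, s48 = speedup(24, serial), speedup(48, serial)
print(f"24 cores: {s24:.1f}x  48 cores: {s48:.1f}x  "
      f"gain from doubling: {s48 / s24:.2f}x")  # 16.4x, 24.7x, ~1.5x
```

Doubling the cores buys about 1.5x under this assumption – sublinear, but, as the user put it, "worth doing the upgrade."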

Overall memory scalability under Linux is still a question mark, since the widely available x86 platforms do not exceed 3 TB of memory, but initial reports from a user familiar with HP's DL 980 verify that the new Linux kernel can reliably manage at least 2 TB of RAM under heavy load.

File system options continue to expand as well. The older Linux FS standard, EXT4, which can scale to “only” 16 TB, has been joined by additional options such as XFS (contributed by SGI), which has been implemented in several installations with file systems in excess of 100 TB, relieving a limitation that may have been more psychological than practical for most users.

Read more

Open Data Center Alliance – Lap Dog Or Watch Dog?

Richard Fichera

In October, with great fanfare, the Open Data Center Alliance unfurled its banners. The ODCA is a consortium of approximately 50 large IT consumers, including large manufacturing, hosting, and telecom providers, with the avowed intent of developing standards for interoperable cloud computing. In addition to the roster of users, the announcement highlighted Intel in an ambiguous role as a technology advisor to the group. The ODCA believes that it will achieve some weight in the industry due to its estimated $50 billion per year of cumulative IT purchasing power, and the trade press was full of praise for influential users driving technology as opposed to allowing rapacious vendors such as HP and IBM to drive users down proprietary paths that lead to vendor lock-in.

Now that we’ve had a month or more to allow the purple prose to settle a bit, let’s look at the underlying claims, potential impact of the ODCA and the shifting roles of vendors and consumers of technology. And let’s not forget about the role of Intel.

First, let me state unambiguously that one of the core intentions of the ODCA is a good idea: developing common use-case models that, backed by the economic clout of ODCA members, will in turn drive vendors to develop products that comply with those models (and hopefully there will be a correlation between ODCA member requirements and those of a wider set of consumers). Vendors spend a lot of time talking to users and trying to understand their requirements, and having the ODCA as a proxy for the requirements of a lot of very influential customers will be a benefit to all concerned.

Read more