HP Expands Its x86 Options With Mission-Critical Program – Defense And Offense Combined

Richard Fichera

Today HP announced a new set of technology programs and future products designed to move x86 server technology for both Windows and Linux more fully into the realm of truly mission-critical computing. My interpretation is that these moves combine defense and offense on HP’s part: they protect HP as its Itanium/HP-UX portfolio slowly declines, and they offer attractive and potentially unique options for current and future customers who want to deploy increasingly critical services on x86 platforms.

What’s Coming?

Bearing in mind that the earliest of these elements will not be in place until approximately mid-2012, the key elements that HP is currently disclosing are:

ServiceGuard for Linux – This is a big win for Linux users on HP, and removes a major operational and architectural hurdle for HP-UX migrations. ServiceGuard is a highly regarded clustering and HA facility on HP-UX, and includes many features for local and geographically distributed HA. The lack of ServiceGuard is often cited as a risk in HP-UX migrations, so its availability on Linux by mid-2012 removes yet another barrier to a smooth migration and helps ensure that HP retains the business as it moves off HP-UX.

Analysis engine for x86 – The analysis engine is internal software that provides system diagnostics, predictive failure analysis, and self-repair on HP-UX systems. HP will port it to selected x86 servers, although it has not yet committed to a delivery date. My guess is that since the analysis engine probably requires some level of hardware assist, it will be paired with the next item on the list…

Read more

HP Folio Ultrabook: A Happy Meal For The Road Warrior

David Johnson

Enterprise laptops are on the shopping list for many of the I&O professionals I speak with every week, and some ask whether Netbooks are the antidote to the MacBook Air for their people. Well, on the menu of enterprise laptops, I think of Netbooks as an appetizer -- inexpensive, but after an hour my stomach is growling again. Garden-variety ultraportables, on the other hand, are like a turkey sandwich -- everything I need to keep me going, but they make me sleepy halfway through the afternoon.

Ultrabooks are a new class of notebook promoted by Intel and are supposed to be a little more like caviar and champagne -- light and powerful, but served on business-class china with real silverware and espresso. At least that's what I took away after being briefed by Intel on the topic. I had the chance to sample HP's new Ultrabook fare in San Francisco a few weeks ago while they were still in the test kitchen, and it seems they took a little different approach. Not bad, just different.

It struck me that rather than beluga and Dom Perignon, HP has created more of a Happy Meal -- a tasty cheeseburger and small fries with a Diet Coke, in a lightweight, easy-to-carry package for a bargain price. It has everything the road warrior needs to get things done, and like a Happy Meal, they can carry it on the plane and set it on the tray table…even if the clown in front of them reclines. The Folio's highlights include a Core i5-2467M processor, 4GB of RAM, a 13.3" LED display, a 128GB SSD, a 9-hour battery, and USB 3.0 and Ethernet ports, all for $900. It's a true bargain. I think I will call it the McUltrabook.

Read more

Nokia World 2011: Back From The Brink But Not Yet Fully Out Of The Woods

Katyayan Gupta and Dan Bieler

This was possibly the most important Nokia World event ever. Nokia had to demonstrate that it can deliver against its plans. In February 2011, Nokia communicated its intention to team up with Microsoft to develop its new platform and to “entrust” its Symbian operating system to Accenture. In total, 3,000 visitors from 70 countries attended Nokia World 2011 in London to hear and see what the “new Nokia” looks like.

In essence, it was clear what Nokia World 2011 would be all about before the actual event had even started. Nokia had to produce a device that can take on the iPhone and the Galaxy. At the event, Nokia announced the launch of the first “real Windows phone” in the form of the Lumia 800. The result is an impressive device that certainly secured Nokia a seat at the table among the three leading smartphone platforms.

Read more

An Early Look at Windows Server 8 – Can You Say Cloud?

Richard Fichera

Well, maybe everybody is saying “cloud” these days, but my first impression of Microsoft Windows Server 8 (not the final name) is that Microsoft has been listening very closely to what customers want from an OS that can support both public and private enterprise cloud implementations. And most importantly, the things that they have built into WS8 for “clouds” also look like they make life easier for plain old enterprise IT.

Microsoft appears to have focused its efforts on several key themes, all of which benefit legacy IT architectures as well as emerging clouds:

  • Management, migration and recovery of VMs in a multi-system domain – Major improvements in Hyper-V and management capabilities mean that I&O groups can easily build multi-system clusters of WS8 servers and easily migrate VMs across system boundaries. Multiple systems can be clustered with Fibre Channel, making it easier to implement high-performance clusters.
  • Multi-tenancy – A host of features, primarily around management and role-based delegation that make it easier and more secure to implement multi-tenant VM clouds.
  • Recovery and resiliency – Microsoft claims that it can fail over VMs from one machine to another in 25 seconds. While vendor performance claims are always like EPA mileage – you are guaranteed never to exceed the number – this is an impressive claim and a major capability, with significant implications for HA architecture in any data center.
Read more

A Rift At The High-End For Server Requirements?

Richard Fichera

We have been repeatedly reminded that the requirements of hyper-scale cloud properties are different from those of the mainstream enterprise, but I am now beginning to suspect that the top strata of the traditional enterprise may be leaning in the same direction. This suspicion has been triggered by the combination of a recent day in NY visiting I&O groups in a handful of very large companies and a number of unrelated client interactions.

The pattern that I see developing is one of “haves” versus “have nots” in terms of their ability to execute on their technology vision with internal resources. The “haves” are the traditional large, sophisticated corporations, with a high concentration in financial services. They have sophisticated IT groups, are capable of writing extremely complex systems management and operations software, and typically own and manage 10,000 servers or more. The “have nots” are the ones with more modest skills and abilities, who may own thousands of servers but tend to be less advanced than the core FSI companies in terms of their ability to integrate and optimize their infrastructure.

The divergence in requirements comes from what they expect and want from their primary system vendors. The “have nots” are companies that understand their limitations and are looking for help from their vendors in the form of converged infrastructures, new virtualization management tools, and deeper integration of management software to automate operational tasks. These are the people who buy HP c-Class or Cisco UCS, for example, and then add vendor-supplied and ISV management and automation tools on top of them in an attempt to control complexity and costs. They are willing to accept deeper vendor lock-in in exchange for the benefits of the advanced capabilities.

Read more

Hyper-V Matures As An Enterprise Platform

Richard Fichera

A project I’m working on for an approximately half-billion dollar company in the health care industry has forced me to revisit Hyper-V versus VMware after a long period of inattention on my part, and it has become apparent that Hyper-V has made significant progress as a viable platform for at least medium enterprises. My key takeaways include:

  • Hyper-V has come a long way and is now a viable competitor in Microsoft environments up through the mid-size enterprise, as long as their DR/HA requirements are not too stringent and as long as they are willing to use Microsoft’s System Center, Server Management Suite and Performance Resource Optimization, as well as other vendor-specific pieces of software, as part of their management environment.
  • Hyper-V still has limitations in VM memory size, total physical system memory size and number of cores per VM compared to VMware, and VMware boasts more flexible memory management and I/O options, but these differences are less significant than they were two years ago.
  • For large enterprises and for complete integrated management, particularly storage, HA, DR and automated workload migration, and for what appears to be close to 100% coverage of workload sizes, VMware is still king of the barnyard. VMware also boasts an incredibly rich partner ecosystem.
  • For cloud, Microsoft has a plausible story but it is completely wrapped around Azure.
  • While I have not had the time (or the inclination, to be totally honest) to develop a very granular comparison, VMware’s recent changes to its legacy licensing structure (and subsequent changes to the new pricing structure) suggest that license cost remains an attraction for Microsoft Hyper-V, especially if the enterprise is using Windows Server Enterprise Edition.
Read more

Recent Benchmarks Reinforce Scalability Of x86 Servers

Richard Fichera

Over the past months, server vendors have been announcing benchmark results for systems incorporating Intel’s high-end x86 CPU, the E7, with HP trumping all existing benchmarks with its recently announced numbers (although, as noted in x86 Servers Hit The High Notes, the results are clustered within a few percent of each other). HP recently announced new performance numbers for the ProLiant DL980, its high-end 8-socket x86 server using the newest Intel E7 processors. With up to 10 cores per socket, these new processors can bring up to 80 cores to bear on large problems such as database, ERP and other enterprise applications.

The performance results on the SAP SD 2-Tier benchmark, for example, at 25160 SD users, show a performance improvement of 35% over the previous high-water mark of 18635. The results seem to scale almost exactly with the product of core count x clock speed, indicating that both the system hardware and the supporting OS, in this case Windows Server 2008, are not at their scalability limits. This gives us confidence that subsequent spins of the CPU will in turn yield further performance increases before hitting system or OS limitations. Results from other benchmarks show similar patterns as well.
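As a rough sanity check on that scaling observation, the arithmetic is simple enough to script. The minimal Python sketch below computes the benchmark improvement from the published SD-user figures and compares it with a core-count-times-clock-speed ratio; the core counts and clock speeds used here are illustrative assumptions, not the actual benchmark configurations.

```python
# Rough scaling check for the SAP SD 2-tier results discussed above.
# The SD-user figures are the published numbers cited in the text; the
# core counts and clock speeds below are illustrative assumptions only.

def improvement(new: float, old: float) -> float:
    """Fractional improvement of the new result over the old one."""
    return new / old - 1.0

def capacity_ratio(new_cores: int, new_ghz: float,
                   old_cores: int, old_ghz: float) -> float:
    """Ratio of (core count x clock speed) between two systems."""
    return (new_cores * new_ghz) / (old_cores * old_ghz)

# Published SAP SD 2-tier results cited above.
sd_users_new = 25160   # latest ProLiant DL980 result
sd_users_old = 18635   # previous high-water mark

# Hypothetical system configurations, for illustration only.
new_cores, new_ghz = 80, 2.4    # 8 sockets x 10 cores, assumed clock
old_cores, old_ghz = 64, 2.27   # 8 sockets x 8 cores, assumed clock

print(f"Benchmark improvement: {improvement(sd_users_new, sd_users_old):.1%}")
print(f"Core x clock increase: {capacity_ratio(new_cores, new_ghz, old_cores, old_ghz) - 1:.1%}")
# If the two percentages are close, throughput is tracking raw compute
# capacity, i.e., neither the hardware nor the OS has hit a scaling wall.
```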

Key takeaways for I&O professionals include:

  • Expect to see at least 25% to 35% throughput improvements in many workloads with systems based on the latest high-performance CPUs from Intel. In situations where data center space and cooling resources are constrained, this can be a significant boost for a same-footprint upgrade of a high-end system.
  • For Unix to Linux migrations, target platform scalability continues to become less of an issue.

How Can Apple Improve Mobile Me To Fulfill More Of The Vision Of Personal Cloud? Plus, Mozy To Add File Sync.

Frank Gillett

Most of the hype in advance of today’s Apple media event is rightly about a new iPad. Sarah Rotman Epps will post on her blog about the new iPad for consumer product strategists after the announcement. I’m focused on the published reports that Apple’s Mobile Me service will be upgraded. I cited Mobile Me as an example of emerging personal cloud services in a July 2009 report, and I’m working on a follow-on report now. Mobile Me is Apple’s horse in a contest with Facebook, Google, Microsoft, and others, to shift personal computing from being device-centric to user-centric, so that you and I don’t need to think about which gadget has the apps or data that we want. The vision of personal cloud is that a combination of local apps, cached data, and cloud-based services will put the right information in the right device at the right time, whether on personal or work devices. The strengths of Mobile Me today are:

  • Synced contacts, calendar, Safari bookmarks, and email account settings, as well as IMAP-based Mobile Me email accounts, for Web, Mac, Windows, and iOS devices.
  • Synced Mac preferences, including app and system preferences.
  • Mobile Me Gallery for easy uploading and sharing of photos and videos.
Read more

Checking In On Linux – Latest Linux Releases Show Continued Progress

Richard Fichera

I’ve recently had the opportunity to talk with a small sample of SLES 11 and RH 6 Linux users, all developing their own applications. All were long-time Linux users, and two of them, one in travel services and one in financial services, had applications that can be described as both large and mission-critical.

The overall message is encouraging for Linux advocates, both the calm, rational type and those who approach it with near-religious fervor. The latest releases from SUSE and Red Hat, both based on the 2.6.32 Linux kernel, show significant improvements in scalability and modest improvements in iso-configuration performance. One user reported that an application that previously had maxed out at 24 cores with SLES 10 was now nearing production certification with 48 cores under SLES 11. Performance scalability was reported as “not linear, but worth doing the upgrade.”
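For readers who want to put a number on “not linear, but worth doing the upgrade,” a simple scaling-efficiency calculation makes the trade-off concrete. The Python sketch below is a minimal illustration with hypothetical throughput figures; it is not data from the installation described above.

```python
# Parallel scaling efficiency: how much of the ideal (linear) speedup a
# workload actually achieves when the core count increases.
# All throughput numbers below are hypothetical, for illustration only --
# they are not measurements from the user installation described above.

def scaling_efficiency(old_cores: int, old_throughput: float,
                       new_cores: int, new_throughput: float) -> float:
    """Observed speedup divided by ideal (linear) speedup."""
    observed = new_throughput / old_throughput
    ideal = new_cores / old_cores
    return observed / ideal

# Example: moving from 24 cores (SLES 10) to 48 cores (SLES 11), with
# throughput assumed to rise from 1,000 to 1,700 transactions per second.
eff = scaling_efficiency(24, 1000.0, 48, 1700.0)
print(f"Scaling efficiency: {eff:.0%}")   # 85% -- not linear, but worth it
```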

Overall memory scalability under Linux is still a question mark, since the widely available x86 platforms do not exceed 3 TB of memory, but initial reports from a user familiar with HP’s DL 980 verify that the new Linux Kernel can reliably manage at least 2TB of RAM under heavy load.

File system options continue to expand as well. The older Linux file system standard, EXT4, which can scale to “only” 16 TB, has been joined by additional options such as XFS (contributed by SGI), which has been implemented in several installations with file systems in excess of 100 TB, relieving a limitation that may have been more psychological than practical for most users.

Read more

Windows 7 Early Adopters Were Satisfied Upgraders

JP Gownder

We've just published two new reports concerning Windows 7 adoption and satisfaction, leveraging Forrester's Consumer Technographics(R) data. 

The reports show that Windows 7 penetrated the consciousness of the market by the end of 2009, with a strong majority of US consumers aware of the product.  We also found that consumers who adopted Windows 7 in Q4 were generally very satisfied with their Windows 7 PCs. 

Perhaps the most interesting finding of the reports involves upgrade behaviors. Historically, most consumers have not upgraded their PCs with new OSes -- though Mac users and some technophile consumers have been an exception on this count.  Instead, the majority of consumers have acquired new OSes when they purchase their new PC.  These are known as "replacement cycle upgrades." 

With Windows 7, however, upgrade behavior was much stronger.  Why?  In short, Windows 7 is a thinner client program than was Windows Vista, meaning that it works well on older hardware configurations.  In the past, OSes were designed with Moore's Law as an underlying assumption -- that is, that newer PC hardware would be significantly faster and more powerful than the previous generation's hardware. Windows 7, however, is a less burdensome OS than Windows Vista.  The rise of Netbooks, the physical assets of multi-PC households, and an attachment by many consumers to their Windows XP machines all contributed to the need for a sleeker, thinner Windows OS, which Windows 7 delivered. 

Among early adopters of Windows 7, in Q4, for the first time upgrading behavior matched replacement cycle purchasing, as this Figure shows:

[Figure: Windows 7 upgrades vs. replacement-cycle purchases among Q4 early adopters]

Read more