Hyper-V Matures As An Enterprise Platform

Richard Fichera

A project I’m working on for an approximately half-billion dollar company in the health care industry has forced me to revisit Hyper-V versus VMware after a long period of inattention on my part, and it has become apparent that Hyper-V has made significant progress as a viable platform for at least medium enterprises. My key takeaways include:

  • Hyper-V has come a long way and is now a viable competitor in Microsoft environments up through the midsize enterprise, as long as DR/HA requirements are not too stringent and as long as the organization is willing to use Microsoft’s System Center, Server Management Suite, and Performance and Resource Optimization, along with other vendor-specific software, as part of its management environment.
  • Hyper-V still has limitations in VM memory size, total physical system memory size, and number of cores per VM compared with VMware, and VMware boasts more flexible memory management and I/O options, but these differences are less significant than they were two years ago.
  • For large enterprises and for complete integrated management, particularly storage, HA, DR and automated workload migration, and for what appears to be close to 100% coverage of workload sizes, VMware is still king of the barnyard. VMware also boasts an incredibly rich partner ecosystem.
  • For cloud, Microsoft has a plausible story but it is completely wrapped around Azure.
  • While I have not had the time (or, to be totally honest, the inclination) to develop a very granular comparison, VMware’s recent changes to its legacy licensing structure (and subsequent changes to the new pricing structure) suggest that license cost remains an attraction for Microsoft Hyper-V, especially if the enterprise is using Windows Server Enterprise Edition.

Catching Up With SUSE -- The Attachmate Group Clarifies Branding And Role For SUSE

Richard Fichera

I recently had an opportunity to spend some time with SUSE management, including President and General Manager Nils Brauckmann, and came away with what I think is a reasonably clear picture of The Attachmate Group’s (TAG) intentions and of SUSE’s overall condition these days. Overall, impressions were positive, with some key takeaways:

  • TAG has clarified its intentions regarding SUSE. TAG has organized its computer holdings as four independent business units, Novell, NetIQ, Attachmate and SUSE, each one with its own independent sales, development, marketing, etc. resources. The advantages and disadvantages of this approach are pretty straightforward, with the lost opportunity to share R&D and marketing/sales resources among the business units balanced by crystal-clear accountability and the attendant focus it brings. SUSE management agrees that it has undercommunicated in the past, and says that now that the corporate structure has been nailed down it will be very aggressive in communicating its new structure and goals.
  • SUSE’s market presence has shifted to a more balanced posture. Over the last several years SUSE has moved away from a European-centric focus: roughly 50% of revenues now come from North America and less than half from EMEA, and the company claims to be the No. 1 Linux vendor in China, where it has expanded its development staffing. SUSE also claims to have gained market share overall, laying claim to approximately 30% of worldwide Linux market share by revenue.
  • Focus on enterprise and cloud. Given its modest revenues of under $200 million, SUSE realizes that it cannot be all things to all people, and states that it will be focusing heavily on enterprise business servers and cloud technology, with less emphasis on desktops and on projects that do not have strong financial returns, such as its investment in Mono, whose continued development it has handed off to a partnership with Xamarin.

GPU Case Study Highlights Financial Application Acceleration

Richard Fichera

NVIDIA recently shared a case study involving risk calculations at JP Morgan Chase that I think is significant both for the extreme levels of acceleration gained by integrating GPUs with conventional CPUs and as an illustration of a mainstream financial application of GPU technology.

JP Morgan Chase’s Equity Derivatives Group began evaluating GPUs as computational accelerators in 2009, and now runs over half of their risk calculations on hybrid systems containing x86 CPUs and NVIDIA Tesla GPUs, and claims a 40x improvement in calculation times combined with a 75% cost savings. The cost savings appear to be derived from a combination of lower capital costs to deliver an equivalent throughput of calculations along with improved energy efficiency per calculation.

Implicit in the 40x speedup, from multiple hours down to several minutes, is that these calculations can become part of a near real-time, business-critical analysis process instead of an overnight or daily batch process. Given the intensely competitive nature of derivatives trading, it is highly likely that JPMC will expand its use of GPUs as traders demand an ever-increasing number of these calculations. And of course, the competition has been using the same technology as well, based on numerous conversations I have had with Wall Street infrastructure architects over the past year.
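The turnaround-time arithmetic is worth making explicit. The case study cites only "multiple hours" and a 40x factor, so the 8-hour baseline below is an assumption for illustration:

```python
# Back-of-the-envelope: what a 40x speedup means for turnaround time.
# The 8-hour baseline is assumed for illustration; the case study says
# only "multiple hours" down to "several minutes".
baseline_hours = 8.0
speedup = 40.0

accelerated_minutes = baseline_hours * 60 / speedup
print(f"{baseline_hours:.0f} h batch run -> {accelerated_minutes:.0f} min per run")
# prints "8 h batch run -> 12 min per run"
```

At roughly 12 minutes per run, the calculation crosses the threshold from overnight batch to something a trading desk can request intraday, which is the real significance of the number.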

My net take on this is that we will see a succession of similar announcements as GPUs become a fully mainstream acceleration technology as opposed to an experimental fringe. If you are an I&O professional whose users are demanding extreme computational performance on a constrained space, power and capital budget, you owe it to yourself and your company to evaluate the newest accelerator technology. Your competitors are almost certainly doing so.

How Are You Reacting When New, Disruptive Products Come Out?

JP Gownder

We talk to product strategists in a wide variety of industries. Regardless of the vertical industry of their companies, they tell us that the release of new, disruptive products -- like Apple's iPad -- changes their relationships with their customers. Oftentimes, nearly overnight.

Whether their product comes in the form of “bits” (content, like media, software, or games) or “atoms” (physical products, like shoes, consumer packaged goods, or hardware), consumer product strategists must navigate a world filled with a dizzying array of new devices (like mobile phones, tablet computers, connected TVs, game consoles, eBook readers, and of course PCs). We call this proliferation of devices the Splinternet, a world in which consumers access the digital world across a diverse and growing number of devices and platforms. And product strategists have to react by developing new apps, crafting digital product experiences, and rethinking their product marketing.

Delivering digital products across the Splinternet isn’t easy: It requires understanding -- and acting upon -- an ever-changing landscape of consumer preferences and behaviors. It also requires reapportioning scarce resources -- for example, from web development to iPad or Android development. Yet product strategists who fail to contend with newly disruptive devices (like the iPad or Xbox Kinect) will find themselves in danger of being left behind -- no matter what industry they’re in.

We'd like to invite product strategists to take our super-quick, two-minute survey to help us better understand how you are reacting to disruptions caused by the Splinternet: 


Thank you!

Recent Benchmarks Reinforce Scalability Of x86 Servers

Richard Fichera

Over the past few months, server vendors have been announcing benchmark results for systems incorporating Intel’s high-end x86 CPU, the E7, with HP trumping all existing benchmarks with their recently announced numbers (although, as noted in x86 Servers Hit The High Notes, the results are clustered within a few percent of each other). HP recently announced new performance numbers for their ProLiant DL980, their high-end 8-socket x86 server using the newest Intel E7 processors. With up to 10 cores, these new processors can bring up to 80 cores to bear on large problems such as database, ERP and other enterprise applications.

The performance results on the SAP SD 2-tier benchmark, for example, at 25,160 SD users, show a performance improvement of 35% over the previous high-water mark of 18,635. The results seem to scale almost exactly with the product of core count x clock speed, indicating that neither the system hardware nor the supporting OS, in this case Windows Server 2008, is at its scalability limits. This gives us confidence that subsequent spins of the CPU will in turn yield further performance increases before hitting system or OS limitations. Results from other benchmarks show similar patterns as well.
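The scaling claim is easy to sanity-check with the published figures. The SD user counts come from the results above; the core counts follow from 8 sockets of 10 versus 8 cores, but the clock speeds used below are assumptions for illustration:

```python
# SAP SD 2-tier results cited above.
new_users, old_users = 25160, 18635
improvement = new_users / old_users - 1
print(f"throughput improvement: {improvement:.1%}")  # ~35.0%

# Scaling with core count x clock speed: 8 sockets x 10 cores vs.
# 8 x 8 cores. The clock speeds (2.40 vs. 2.26 GHz) are assumed here
# for illustration, not taken from the benchmark disclosures.
capacity_ratio = (80 * 2.40) / (64 * 2.26)
print(f"cores x clock ratio: {capacity_ratio:.2f}")  # ~1.33
```

A cores-times-clock ratio in the low 1.3x range against an observed 1.35x gain is what "scales almost exactly" means in practice: the extra silicon is being converted into throughput with very little lost to system or OS overhead.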

Key takeaways for I&O professionals include:

  • Expect to see at least 25% to 35% throughput improvements in many workloads with systems based on the latest high-performance CPUs from Intel. In situations where data center space and cooling resources are constrained, this can be a significant boost for a same-footprint upgrade of a high-end system.
  • For Unix-to-Linux migrations, target platform scalability continues to become less of an issue.

Cisco Tweaks UCS - New Interfaces, Management Software Expand Capabilities

Richard Fichera

Not to be left out of the announcement fever that has gripped vendors recently, Cisco today announced several updates to their UCS product line aimed at easing potential system bottlenecks by improving the whole I/O chain between the network and the servers, and improving management, including:

  • Improved Fabric Interconnect (FI) – The FI is the top of the UCS hardware hierarchy, a thinly disguised Nexus 5xxx series switch that connects the UCS hierarchy to the enterprise network and runs the UCS Manager (UCSM) software. Previously, the highest-end FI had 40 ports, each of which had to be specifically configured as Ethernet, FCoE, or FC. The new FI, the model 6248UP, has 48 ports, each of which can be flexibly assigned as a 10G port for any of the supported protocols. In addition to modestly raising the bandwidth, the 6248UP brings increased flexibility and a claimed 40% reduction in latency.
  • New Fabric Extender (FEX) – The FEX connects the individual UCS chassis with the FI. With the new 2208 FEX, Cisco doubles the bandwidth between the chassis and the FI.
  • VIC1280 Virtual Interface Card (VIC) – At the bottom of the management hierarchy, the new VIC1280 quadruples the bandwidth to each individual server, to a total of 80 Gb. The 80 Gb can be presented as up to eight 10 Gb physical NICs or teamed into a pair of 40 Gb NICs, with up to 256 virtual devices (vNICs, vHBAs, etc.) presented to the software running on the servers.

VMware Pushes Hypervisor And Management Features With vSphere 5 Announcement

Richard Fichera

After considerable speculation and anticipation, VMware has finally announced vSphere 5 as part of a major cloud infrastructure launch, including vCloud Director 1.5, SRM 5 and vShield 5. From our first impressions, it is both well worth the wait and merits immediate serious consideration as an enterprise virtualization platform, particularly for existing VMware customers.

The list of features is voluminous, with at least 100 improvements, large and small, but several stand out as particularly significant as I&O professionals continue their efforts to virtualize the data center: support for larger VMs and physical host systems, improved manageability of storage, and improvements to Site Recovery Manager (SRM), the remote-site HA component:

  • Replication improvements for Site Recovery Manager, allowing replication without SANs
  • Distributed Resource Scheduling (DRS) for Storage
  • Support for up to 1 TB of memory per VM
  • Support for 32 vCPUs per VM
  • Support for up to 160 logical CPUs and 2 TB of RAM
  • New GUI to configure multicore vCPUs
  • Profile-driven storage delivery based on the vSphere Storage APIs for Storage Awareness
  • Improved version of the Cluster File System, VMFS5
  • Storage APIs – Array Integration: thin provisioning support that reclaims blocks of a thin-provisioned LUN on the array when a virtual disk is deleted
  • Swap to SSD
  • 2TB+ LUN support
  • Storage vMotion snapshot support
  • vNetwork Distributed Switch improvements, providing improved visibility into VM traffic
  • vCenter Server Appliance
  • vCenter Solutions Manager, providing a consistent interface to configure and monitor vCenter-integrated solutions developed by VMware and third parties
  • Revamped VMware High Availability (HA) with Fault Domain Manager

Join Us July 27th In San Francisco For An iPad App Strategy Workshop!

JP Gownder

More than 90,000 iPad-only apps are available today. Forrester clients in a wide range of industries — media, software, retail, travel, consumer packaged goods, financial services, pharmaceuticals, utilities, and more — are scrambling to determine how to develop their own iPad app strategies (or browser-based iPad strategies).

Clients are asking us to help them address both challenges and opportunities associated with the iPad: How do I develop an app product strategy for the iPad? Does the browser matter, too? What will make my app or browser experience stand out from the competition? How will an iPad app complement my smartphone and Web properties?

If you are navigating these sorts of decisions, I'd like to invite you to a very exciting event being hosted by an analyst on my team, Sarah Rotman Epps. Sarah's holding a Workshop on July 27 (in San Francisco) to help clients like you separate the hype from the reality and take concrete steps toward developing a winning iPad app and browser strategy. 

The Workshop: POST — Refining Your Strategy For iPads And Tablets
This Workshop focuses on refining your strategy for reaching and supporting your key constituencies through iPads and other tablets. We'll take you through the POST (people, objectives, strategy, and technology) process, helping you to:

  • Understand where the tablet market is going based on Forrester's latest data and insights.
  • Apply what other companies have done to your own tablet strategy.

Tips For Your ITIL Journey

Eveline Oehrlich

Embarking on your ITIL initiative can be daunting. Often, the breadth and scope of ITIL can leave I&O departments struggling to create a solid road map -- Where do I start? Can I pick and choose ITIL principles? Do I even need ITIL? Without answers, any one of these questions can put up a roadblock on your journey to smooth service management, so here are some tips to put you on the right track:

Pre-Race Checklist

  • Make sure you take the time to define and understand exactly what problem you're trying to solve -- many companies that skip this step end up regretting it.
  • Before you can decide where you want to go, you need to know where you’re coming from. Measure your ITSM maturity level, and then define where you want to go and how much you want to improve.
  • Once you know your ITSM baseline and the problem that you want to solve, you can figure out the best place to start in the ITIL v3 framework.

Start Your Engines

  • Keep in mind that technology or domain silos don’t work, and process silos don’t work either. Switch to a hybrid model for best results.
  • When determining who your process heads should be, incident and problem management should NOT be rolled together under one person. Incident management is about fire-fighting, and problem management is about root cause analysis -- two very different competencies. 

Intel Steps On The Accelerator, Reveals Many Independent Core Road Map

Richard Fichera

While NVIDIA, and to a lesser extent AMD (via its ATI-branded product line), have effectively monopolized the rapidly growing and hyperbole-generating market for GPGPUs (highly parallel application accelerators), Intel has teased the industry for several years, starting with its 80-core Polaris Research Processor demonstration in 2008. Intel’s strategy was pretty transparent – it had nothing in this space, and needed to serve notice that it was actively pursuing it without showing its hand prematurely. This situation of deliberate ambiguity came to an end last month when Intel finally disclosed more details on its line of Many Independent Core (MIC) accelerators.

Intel’s approach to attached parallel processing is radically different from its competitors’ and appears to make excellent use of its core IP assets: fabrication expertise and the x86 instruction set. While competing products from NVIDIA and AMD are based on graphics-processing architectures employing hundreds of parallel non-x86 cores, Intel’s products will feature a smaller number of simplified x86 cores (32 to 64 in the disclosed products), on the theory that developers will be able to harvest large portions of code that already runs on 4- to 10-core x86 CPUs and easily port it to these new parallel engines.
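Intel's porting argument comes down to loop structure: the workloads these accelerators target are typically embarrassingly parallel loops over independent work items, so in principle only the loop driver changes, not the loop body. Here is a minimal sketch of that pattern (in Python rather than native x86 code, with the workload and all names invented for illustration):

```python
import random
from concurrent.futures import ThreadPoolExecutor

def simulate_path(seed: int) -> float:
    """One independent scenario -- a stand-in for the kind of per-path
    risk or simulation kernel that parallel accelerators target."""
    rng = random.Random(seed)
    return sum(rng.gauss(0, 1) for _ in range(1_000))

def run_serial(n: int) -> list[float]:
    # The existing code: a plain loop over independent work items.
    return [simulate_path(i) for i in range(n)]

def run_parallel(n: int) -> list[float]:
    # The loop body is untouched; only the driver changes -- the essence
    # of the "keep the existing code, widen the loop" porting argument.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(simulate_path, range(n)))
```

Because each work item is seeded independently, the serial and parallel drivers produce identical results; the value of keeping the x86 instruction set is that, unlike a GPU port, the kernel itself needs no rewrite into a different programming model.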
