A Rift At The High-End For Server Requirements?

Richard Fichera

We have been repeatedly reminded that the requirements of hyper-scale cloud properties are different from those of the mainstream enterprise, but I am now beginning to suspect that the top strata of the traditional enterprise may be leaning in the same direction. This suspicion has been triggered by the combination of a recent day in NY visiting I&O groups in a handful of very large companies and a number of unrelated client interactions.

The pattern that I see developing is one of “haves” versus “have nots” in terms of their ability to execute on their technology vision with internal resources. The “haves” are the traditional large, sophisticated corporations, with a high concentration in financial services. They have sophisticated IT groups, are capable of writing extremely complex systems management and operations software, and typically own and manage 10,000 servers or more. The “have nots” are the ones with more modest skills and abilities, who may own thousands of servers but tend to be less advanced than the core FSI companies in terms of their ability to integrate and optimize their infrastructure.

The divergence in requirements comes from what they expect and want from their primary system vendors. The “have nots” are companies that understand their limitations and are looking for help from their vendors in the form of converged infrastructures, new virtualization management tools, and deeper integration of management software to automate operational tasks. These are people who buy HP c-Class or Cisco UCS, for example, and then add vendor-supplied and ISV management and automation tools on top in an attempt to control complexity and costs. They are willing to accept deeper vendor lock-in in exchange for the benefits of the advanced capabilities.

Read more

What Does Google's Acquisition Of Motorola Mobility Mean To I&O Professionals?

Christian Kane

Google sent shock waves through the mobile world this morning as it announced a planned acquisition of Motorola Mobility for $12.5 billion in cash. The initial commentary has largely focused on Motorola’s patent portfolio, how this will affect the other Android manufacturers, and what Google will do with the rest of Moto’s hardware business, which my colleague John McCarthy summed up nicely in his blog post.

So what kind of an impact does this have on infrastructure and operations (I&O) professionals? For the most part, not much of one. I&O professionals are working to make their organizations platform-agnostic by deploying mobile device management (MDM) solutions. For them, Android is only one entrant in an increasingly crowded space of platforms that includes iOS, BlackBerry, and Windows Phone 7.

Still, there is one interesting implication in this deal that I&O pros should take note of: Google gets 3LM. Back in February, Motorola Mobility acquired 3LM, a startup founded by former Google employees who worked on Android that specializes in enterprise security and management software. Rumors had already been flying that some of 3LM’s functionality, like storage encryption and anti-malware, would be included in the next version of Android (Ice Cream Sandwich). With 3LM now part of Google, firms might finally get the management and security capabilities that I&O and security pros have been asking for in Android.

Read more

Hyper-V Matures As An Enterprise Platform

Richard Fichera

A project I’m working on for an approximately half-billion-dollar company in the health care industry has forced me to revisit Hyper-V versus VMware after a long period of inattention on my part, and it has become apparent that Hyper-V has made significant progress as a viable platform for at least medium-size enterprises. My key takeaways include:

  • Hyper-V has come a long way and is now a viable competitor in Microsoft environments up through the mid-size enterprise, as long as DR/HA requirements are not too stringent and customers are willing to use Microsoft’s System Center, Server Management Suite, and Performance Resource Optimization, along with other vendor-specific pieces of software, as part of their management environment.
  • Hyper-V still has limitations in VM memory size, total physical system memory size, and number of cores per VM compared to VMware, and VMware boasts more flexible memory management and I/O options, but these differences are less significant than they were two years ago.
  • For large enterprises, for completely integrated management (particularly storage, HA, DR, and automated workload migration), and for what appears to be close to 100% coverage of workload sizes, VMware is still king of the barnyard. VMware also boasts an incredibly rich partner ecosystem.
  • For cloud, Microsoft has a plausible story but it is completely wrapped around Azure.
  • While I have not had the time (or, if I am being totally honest, the inclination) to develop a very granular comparison, VMware’s recent changes to its legacy licensing structure (and subsequent changes to the new pricing structure) do make it look like license cost remains an attraction for Microsoft Hyper-V, especially if the enterprise is using Windows Server Enterprise Edition.
Read more

Catching Up With SUSE -- The Attachmate Group Clarifies Branding And Role For SUSE

Richard Fichera

I recently had an opportunity to spend some time with SUSE management, including President and General Manager Nils Brauckmann, and came away with what I think is a reasonably clear picture of The Attachmate Group’s (TAG) intentions and of SUSE’s overall condition these days. Overall, impressions were positive, with some key takeaways:

  • TAG has clarified its intentions regarding SUSE. TAG has organized its computer holdings as four independent business units (Novell, NetIQ, Attachmate, and SUSE), each with its own independent sales, development, and marketing resources. The advantages and disadvantages of this approach are pretty straightforward: the lost opportunity to share R&D and marketing/sales resources among the business units is balanced by crystal-clear accountability and the attendant focus it brings. SUSE management agrees that it has undercommunicated in the past, and says that now that the corporate structure has been nailed down, it will be very aggressive in communicating its new structure and goals.
  • SUSE’s market presence has shifted to a more balanced posture. Over the last several years SUSE has moved away from a European-centric focus, with 50% of revenues now coming from North America and less than 50% from EMEA, and it claims to be the No. 1 Linux vendor in China, where it has expanded its development staffing. SUSE claims to have gained market share overall, laying claim to approximately 30% of worldwide Linux market share by revenue.
  • Focus on enterprise and cloud. Given its modest revenues of under $200 million, SUSE realizes that it cannot be all things to all people, and states that it will focus heavily on enterprise business servers and cloud technology, with less emphasis on desktops and on projects that do not have strong financial returns, such as its investment in Mono, whose continued development it has handed off to a partnership with Xamarin.
Read more

GPU Case Study Highlights Financial Application Acceleration

Richard Fichera

NVIDIA recently shared a case study involving risk calculations at JP Morgan Chase that I think is significant both for the extreme levels of acceleration gained by integrating GPUs with conventional CPUs and as an illustration of a mainstream financial application of GPU technology.

JP Morgan Chase’s Equity Derivatives Group began evaluating GPUs as computational accelerators in 2009 and now runs over half of its risk calculations on hybrid systems containing x86 CPUs and NVIDIA Tesla GPUs, claiming a 40x improvement in calculation times combined with a 75% cost savings. The cost savings appear to derive from a combination of lower capital costs to deliver an equivalent throughput of calculations and improved energy efficiency per calculation.

Implicit in the 40x speedup, from multiple hours to several minutes, is that these calculations can become part of a near real-time, business-critical analysis process instead of an overnight or daily batch process. Given the intensely competitive nature of derivatives trading, it is highly likely that JPMC will expand its use of GPUs as traders demand an ever-increasing number of these calculations. And of course, based on numerous conversations I have had with Wall Street infrastructure architects over the past year, their competition has been using the same technology as well.
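
To make the batch-window arithmetic concrete, here is a minimal Python sketch. The 40x figure is from the case study; the baseline runtimes are hypothetical stand-ins for the "multiple hours" described above, not JPMC's actual numbers:

```python
# A minimal sketch (not JPMC's actual code) of the batch-window
# arithmetic: the 40x figure is from the case study, while the
# baseline runtimes below are hypothetical stand-ins for the
# "multiple hours" the post describes.

def accelerated_minutes(baseline_hours: float, speedup: float) -> float:
    """Runtime in minutes after applying a given speedup factor."""
    return baseline_hours * 60.0 / speedup

for hours in (2, 4, 8):
    print(f"{hours}h batch -> {accelerated_minutes(hours, 40):.0f} min at 40x")
# 2h -> 3 min, 4h -> 6 min, 8h -> 12 min: short enough to rerun
# on demand during the trading day instead of overnight.
```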

My net take on this is that we will see a succession of similar announcements as GPUs become a fully mainstream acceleration technology as opposed to an experimental fringe. If you are an I&O professional whose users are demanding extreme computational performance on a constrained space, power and capital budget, you owe it to yourself and your company to evaluate the newest accelerator technology. Your competitors are almost certainly doing so.

How Are You Reacting When New, Disruptive Products Come Out?

JP Gownder

We talk to product strategists in a wide variety of industries. Regardless of their companies' vertical industries, they tell us that the release of new, disruptive products -- like Apple's iPad -- changes their relationships with their customers. Oftentimes, nearly overnight.

Whether their product comes in the form of “bits” (content, like media, software, or games) or “atoms” (physical products, like shoes, consumer packaged goods, or hardware), consumer product strategists must navigate a world filled with a dizzying array of new devices (like mobile phones, tablet computers, connected TVs, game consoles, eBook readers, and of course PCs). We call this proliferation of devices the Splinternet, a world in which consumers access the digital world across a diverse and growing number of devices and platforms. And product strategists have to react by developing new apps, by crafting digital product experiences, and by rethinking their product marketing.

Delivering digital products across the Splinternet isn’t easy: It requires understanding -- and acting upon -- an ever-changing landscape of consumer preferences and behaviors. It also requires reapportioning scarce resources -- for example, from web development to iPad or Android development. Yet product strategists who fail to contend with newly disruptive devices (like the iPad or Xbox Kinect) will find themselves in danger of being left behind -- no matter what industry they’re in.

We'd like to invite product strategists to take our super-quick, two-minute survey to help us better understand how you are reacting to disruptions caused by the Splinternet: 

UPDATED: THE SURVEY IS NOW CLOSED

Thank you!

Recent Benchmarks Reinforce Scalability Of x86 Servers

Richard Fichera

Over the past months, server vendors have been announcing benchmark results for systems incorporating Intel’s high-end x86 CPU, the E7, with HP trumping all existing benchmarks with its recently announced numbers (although, as noted in x86 Servers Hit The High Notes, the results are clustered within a few percent of each other). HP recently announced new performance numbers for the ProLiant DL980, its high-end 8-socket x86 server using the newest Intel E7 processors. With up to 10 cores per socket, these new processors can bring up to 80 cores to bear on large problems such as database, ERP, and other enterprise applications.

The performance results on the SAP SD 2-tier benchmark, for example, at 25,160 SD users, show a performance improvement of 35% over the previous high-water mark of 18,635. The results seem to scale almost exactly with the product of core count and clock speed, indicating that neither the system hardware nor the supporting OS, in this case Windows Server 2008, is at its scalability limits. This gives us confidence that subsequent spins of the CPU will in turn yield further performance increases before hitting system or OS limitations. Results from other benchmarks show similar patterns as well.
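
As a back-of-the-envelope check on that scaling claim, the short Python sketch below compares the published SD-user results with a cores-times-clock prediction. The CPU configurations (64 Xeon X7560 cores at 2.26 GHz for the prior result, 80 E7-4870 cores at 2.40 GHz for the new one) are my assumptions, not figures from the benchmark disclosures:

```python
# A back-of-the-envelope check of the cores-x-clock scaling claim.
# SD user counts are from the post; the CPU configurations are my
# assumptions (prior result: 64 Xeon X7560 cores at 2.26 GHz; new
# result: 80 E7-4870 cores at 2.40 GHz), not benchmark disclosures.

prev_users, new_users = 18635, 25160   # SAP SD 2-tier benchmark users

prev_capacity = 64 * 2.26              # cores x GHz, assumed prior system
new_capacity = 80 * 2.40               # cores x GHz, assumed new DL980

predicted = prev_users * new_capacity / prev_capacity
print(f"predicted {predicted:.0f} SD users vs. actual {new_users}")
# Predicted ~24,737 vs. actual 25,160, within about 2%: consistent
# with neither the hardware nor the OS being at its scaling limit.
```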

Key takeaways for I&O professionals include:

  • Expect to see at least 25% to 35% throughput improvements in many workloads with systems based on the latest high-performance CPUs from Intel. In situations where data center space and cooling resources are constrained, this can be a significant boost for a same-footprint upgrade of a high-end system.
  • For Unix-to-Linux migrations, target platform scalability continues to become less of an issue.

Cisco Tweaks UCS - New Interfaces, Management Software Expand Capabilities

Richard Fichera

Not to be left out of the announcement fever that has gripped vendors recently, Cisco today announced several updates to its UCS product line aimed at easing potential system bottlenecks by improving the whole I/O chain between the network and the servers and at improving management, including:

  • Improved Fabric Interconnect (FI) – The FI is the top of the UCS hardware hierarchy, a thinly disguised Nexus 5xxx-series switch that connects the UCS hierarchy to the enterprise network and runs the UCS Manager (UCSM) software. Previously, the highest-end FI had 40 ports, each of which had to be specifically configured as Ethernet, FCoE, or FC. The new FI, the model 6248UP, has 48 ports, each of which can be flexibly assigned as a 10G port for any of the supported protocols. In addition to modestly raising the bandwidth, the 6248UP brings increased flexibility and a claimed 40% reduction in latency.
  • New Fabric Extender (FEX) – The FEX connects the individual UCS chassis with the FI. With the new 2208 FEX, Cisco doubles the bandwidth between the chassis and the FI.
  • VIC1280 Virtual Interface Card (VIC) – At the bottom of the management hierarchy, the new VIC1280 quadruples the bandwidth to each individual server, to a total of 80 Gb. The 80 Gb can be presented as up to eight 10 Gb physical NICs or teamed into a pair of 40 Gb NICs, with up to 256 virtual devices (vNICs, vHBAs, etc.) presented to the software running on the servers; a small sketch of these presentation options follows the list.
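
Here is a small Python sketch of that arithmetic, using only the figures quoted above (80 Gb total per server, a 256-virtual-device ceiling); the helper function is illustrative and not part of any Cisco API:

```python
# An illustrative check of the VIC1280 presentation options; the
# limits (80 Gb per server, 256 virtual devices) are from the post,
# and the helper below is a sketch, not any Cisco API.

TOTAL_GBPS = 80
MAX_VIRTUAL_DEVICES = 256

def valid_layout(nics: int, gbps_each: int, virtual_devices: int) -> bool:
    """True if a proposed NIC layout fits within the VIC1280 limits."""
    return (nics * gbps_each <= TOTAL_GBPS
            and virtual_devices <= MAX_VIRTUAL_DEVICES)

print(valid_layout(8, 10, 256))    # True:  eight 10 Gb NICs
print(valid_layout(2, 40, 128))    # True:  a teamed pair of 40 Gb NICs
print(valid_layout(10, 10, 64))    # False: exceeds the 80 Gb total
```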
Read more

VMware Pushes Hypervisor And Management Features With vSphere 5 Announcement

Richard Fichera

After considerable speculation and anticipation, VMware has finally announced vSphere 5 as part of a major cloud infrastructure launch, including vCloud Director 1.5, SRM 5, and vShield 5. From our first impressions, it is well worth the wait and merits immediate serious consideration as an enterprise virtualization platform, particularly for existing VMware customers.

The list of features is voluminous, with at least 100 improvements large and small, but several stand out as particularly significant as I&O professionals continue their efforts to virtualize the data center: support for larger VMs and physical host systems, improved manageability of storage, and improvements to Site Recovery Manager (SRM), the remote-site HA component (a quick configuration-check sketch follows the list):

  • Replication improvements for Site Recovery Manager, allowing replication without SANs
  • Distributed Resource Scheduling (DRS) for Storage
  • Support for up to 1 TB of memory per VM
  • Support for 32 vCPUs per VM
  • Support for up to 160 logical CPUs and 2 TB of RAM
  • New GUI to configure multicore vCPUs
  • Profile-driven storage delivery based on the VMware-aware storage APIs
  • Improved version of the Cluster File System, VMFS5
  • Storage APIs – Array Integration: thin provisioning, enabling reclamation of blocks from a thin-provisioned LUN on the array when a virtual disk is deleted
  • Swap to SSD
  • 2TB+ LUN support
  • Storage vMotion snapshot support
  • vNetwork Distributed Switch improvements providing improved visibility into VM traffic
  • vCenter Server Appliance
  • vCenter Solutions Manager, providing a consistent interface to configure and monitor vCenter-integrated solutions developed by VMware and third parties
  • Revamped VMware High Availability (HA) with Fault Domain Manager
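
As a quick way to see how these maximums fit together, the Python sketch below checks a proposed VM and host configuration against the per-VM and per-host limits listed above; the helper is purely illustrative, not a VMware API:

```python
# An illustrative sanity check against the vSphere 5 maximums listed
# above (1 TB RAM and 32 vCPUs per VM; 160 logical CPUs and 2 TB RAM
# per host). The helper is a sketch, not a VMware API.

VSPHERE5_LIMITS = {
    "vm_ram_gb": 1024, "vm_vcpus": 32,
    "host_logical_cpus": 160, "host_ram_gb": 2048,
}

def fits_vsphere5(vm_ram_gb, vm_vcpus, host_cpus, host_ram_gb):
    lim = VSPHERE5_LIMITS
    return (vm_ram_gb <= lim["vm_ram_gb"]
            and vm_vcpus <= lim["vm_vcpus"]
            and host_cpus <= lim["host_logical_cpus"]
            and host_ram_gb <= lim["host_ram_gb"])

# A hypothetical large database VM on a fully loaded 8-socket host:
print(fits_vsphere5(vm_ram_gb=768, vm_vcpus=32,
                    host_cpus=160, host_ram_gb=2048))  # True
```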
Read more

Join Us July 27th In San Francisco For An iPad App Strategy Workshop!

JP Gownder

More than 90,000 iPad-only apps are available today. Forrester clients in a wide range of industries — media, software, retail, travel, consumer packaged goods, financial services, pharmaceuticals, utilities, and more — are scrambling to determine how to develop their own iPad app strategies (or browser-based iPad strategies).

Clients are asking us to help them address both challenges and opportunities associated with the iPad: How do I develop an app product strategy for the iPad? Does the browser matter, too? What will make my app or browser experience stand out from the competition? How will an iPad app complement my smartphone and Web properties?

If you are navigating these sorts of decisions, I'd like to invite you to a very exciting event being hosted by an analyst on my team, Sarah Rotman Epps. Sarah's holding a Workshop on July 27 (in San Francisco) to help clients like you separate the hype from the reality and take concrete steps toward developing a winning iPad app and browser strategy. 

The Workshop: POST — Refining Your Strategy For iPads And Tablets
This Workshop focuses on refining your strategy for reaching and supporting your key constituencies through iPads and other tablets. We'll take you through the POST (people, objectives, strategy, and technology) process, helping you to:

  • Understand where the tablet market is going based on Forrester's latest data and insights.
  • Apply what other companies have done to your own tablet strategy.
Read more