Keep An Eye On Software-Defined Data Centers In China

Frank Liu

Although emerging markets like China tend to lag developed markets by 18 to 24 months in technology deployment, Chinese organizations should start embracing new concepts like the software-defined data center (SDDC). The SDDC is an evolving architectural and operational philosophy, not a product you can buy with a demonstrable ROI. Even so, Chinese organizations can't afford to ignore SDDC and fall behind global companies, so they need to pay attention to it for a few reasons:

Read more

Windows Server 2003 – A Very Unglamorous but Really Important Problem, Waiting to Bite

Richard Fichera

Very much in the shadows of all the press coverage and hysteria attendant on emerging cloud architectures and customer-facing systems of engagement are the nitty-gritty operational details that lurk like monsters in the swamp of legacy infrastructure, and some of them have teeth. And sometimes these teeth can really take a bite out of the posterior of an unprepared organization.

One of those toothy animals that I&O groups are increasingly encountering in their landscapes is the problem of what to do with Windows Server 2003 (WS2003). It turns out there are still approximately 11 million WS2003 systems running today, with another 10+ million instances running as VM guests. Overall, that is possibly more than 22 million OS images, plus a ton of hardware that will need replacing or upgrading. And increasing numbers of organizations have finally begun to take seriously the fact that Microsoft is really going to end support and updates as of July 2015.

Based on the conversations I have been having with our clients, the typical I&O group that is now scrambling to come up with a plan has been neither willfully negligent nor stupid. Usually WS2003 servers are legacy servers, quietly running some mature piece of code, often in satellite locations or in the shops of acquired companies. The workloads are a mix of ISV and bespoke code, often LOB-specific applications, with run-of-the-mill collaboration and infrastructure servers having long since migrated to newer platforms. A surprising number of clients have told me that they have identified the servers, but not always the applications or the business owners – often a complex task for an old resource in a large company.
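
Even the first step, building the server inventory, can be partially scripted. Here is a minimal sketch that queries Active Directory for WS2003 computer accounts using the open-source ldap3 Python library; the domain controller, credentials, and search base are placeholder assumptions, and mapping each hit to an application and a business owner remains the hard manual work described above:

```python
# Minimal WS2003 inventory sketch using the ldap3 library (pip install ldap3).
# Domain controller, credentials, and search base below are placeholders.
from ldap3 import Server, Connection, SUBTREE

server = Server("dc01.example.com")
conn = Connection(server, user="auditor@example.com", password="********",
                  auto_bind=True)

# The operatingSystem attribute on AD computer objects identifies WS2003 hosts.
conn.search(
    search_base="DC=example,DC=com",
    search_filter="(&(objectCategory=computer)(operatingSystem=Windows Server 2003*))",
    search_scope=SUBTREE,
    attributes=["dNSHostName", "operatingSystem", "operatingSystemServicePack"],
)

for entry in conn.entries:
    # Knowing the host is the easy part; the application and business owner
    # behind each name still have to be tracked down by hand.
    print(entry.dNSHostName, entry.operatingSystem,
          entry.operatingSystemServicePack)
```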

Read more

Understanding Virtualized Videoconferencing

Philipp Karcher

Server virtualization has been and continues to be a top IT priority for good reasons: improving infrastructure manageability, lowering TCO, and strengthening business continuity and disaster recovery capabilities. In IT's quest to virtualize more workloads, however, videoconferencing has remained on its own island of specialized hardware due to its reliance on DSPs (digital signal processors) for transcoding, an incredibly compute-intensive type of work. Transcoding is necessary for interoperability between unlike videoconferencing systems, and the performance of that specialized hardware has been difficult to match with software running on standard servers.
 
That is, unless you turn to a model that doesn't use transcoding. Enter Vidyo, whose virtual edition infrastructure delivers performance comparable to its physical appliances, since it doesn't have to transcode calls between Vidyo endpoints. Desktop videoconferencing solutions are, for the most part, available in virtualized models. Transcoding-based videoconferencing is also becoming available in virtualized form, however, with LifeSize offering its platform in this model. In the cloud, Blue Jeans — the poster child for videoconferencing as a service — has a virtualized platform based on transcoding that also provides a high-quality experience. It will be interesting to see how the performance of virtualized transcoding workloads compares to traditional infrastructure.
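
The architectural difference is easy to see in a back-of-the-envelope model. This sketch contrasts a transcoding MCU, which decodes and re-encodes a stream for every endpoint, with a Vidyo-style router that only forwards layers of scalable (SVC) streams; the cost constants are purely illustrative assumptions, not vendor measurements:

```python
# Toy cost model contrasting a transcoding MCU with an SVC stream router.
# All cost constants are illustrative assumptions, not vendor measurements.

DECODE_COST = 1.0    # relative CPU units to decode one video stream
ENCODE_COST = 3.0    # encoding typically costs several times more than decoding
FORWARD_COST = 0.05  # routing a stream copy is cheap: no codec work at all

def transcoding_mcu_cost(participants: int) -> float:
    """An MCU decodes every inbound stream and re-encodes a tailored
    outbound stream for every endpoint, hence the need for DSP hardware."""
    return participants * (DECODE_COST + ENCODE_COST)

def svc_router_cost(participants: int) -> float:
    """A routing architecture never touches the codec; it selects and
    forwards layers of each scalable stream to the other endpoints."""
    return participants * (participants - 1) * FORWARD_COST

for n in (4, 10, 25):
    print(f"{n:3d} endpoints: transcoding={transcoding_mcu_cost(n):6.1f} "
          f"routing={svc_router_cost(n):6.1f}")
```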
 
Innovation in videoconferencing today is about making this historically cost-prohibitive technology cheaper and easier to deploy. Server virtualization is key to that goal. In conversations with end-user companies considering their videoconferencing strategy, virtualization is something they express interest in and would consider the next time they refresh their technology. Here's what vendors in the upcoming Forrester Wave on desktop videoconferencing are doing with virtualization today:

Read more

VMware Takes the Cover Off Its Public Cloud

James Staten

Sometimes you can only coax a reluctant partner and I&O customer community for so long before you feel you have to take matters into your own hands. That is exactly what VMware has decided to do to become relevant in the cloud platforms space. The hypervisor pioneer unveiled vCloud Hybrid Service to investors today in what is more a statement of intention than a true unveiling.

VMware's public cloud service — yep, a full public IaaS cloud meant to compete with Amazon Web Services, IBM SmartCloud Enterprise, HP Cloud, Rackspace, and others — won't be fully unveiled until Q2 2013, so many of the details about the service remain under wraps. VMware hired the former president of Savvis Cloud, Bill Fathers, to run this new offering and said it was a top-three initiative for the company and thus would be getting "the level of investment appropriate to that priority and to capitalize on a $14B market opportunity," according to Matthew Lodge, VP of Cloud Services Product Marketing and Management for VMware, who spoke to us Tuesday about the pending announcement.

Read more

The VMware Community Has the Innovator’s Dilemma

James Staten

This week at the VMware Partner Exchange, CEO Pat Gelsinger and his executive staff decided to demonize Amazon Web Services and its public cloud brethren in a very short-sighted defensive move that frankly betrays the fact that they don’t understand the disruption they are facing. Pat, you and your market have the Innovator’s Dilemma, and the enemy isn’t public cloud but private clouds.

According to CRN’s article on the event, Gelsinger was quoted as saying, “We want to own corporate workloads. We all lose if they end up in these commodity public clouds. We want to extend our franchise from the private cloud into the public cloud and uniquely enable our customers with the benefits of both. Own the corporate workload now and forever.”

Forgive my frankness, Mr. Gelsinger, but you just don’t get it. Public clouds are not your enemy, and the disruption they are causing to your forward revenues is not their capture of enterprise workloads. The battle lines you should be focusing on are between advanced virtualization and true cloud services, and the future placement of Systems of Engagement versus Systems of Record.

Read more

Why Your Enterprise Private Cloud is Failing

James Staten

You've told your ITOps team to make it happen and approved the purchase of cloud-in-a-box solutions, but your developers aren't using the result. Why?

Forrester analyst Lauren Nelson and I get this question often in our inquiries with enterprise customers, and we've found the answer and published a new report specifically on this topic.
Its core finding: Your approach is wrong.

You're asking the wrong people to build the solution. You aren't giving them clear enough direction on what they should build. You aren't helping them understand how this new service should operate or how it will affect their career and value to the organization. And more often than not, you are building the private cloud without engaging the buyers who will consume it.

And yet your approach is perfectly logical. For many of us in IT, a private cloud looks like an extension of our investments in virtualization. It's simply virtualization with some standardization, automation, a portal, and an image library, isn't it? Yep. And a Porsche is just a Volkswagen with a better engine, tires, suspension, and seats. That's the fallacy in this thinking.

To get private cloud right, you have to step away from the guts of the solution and start with the value proposition from the point of view of the consumers of this service — your internal developers and business users.

I&O Looks Up at Cloud; Developers Look Down Into It

Read more

2013 Server Virtualization Predictions: Driving Value Above And Beyond The Hypervisor

Dave Bartoletti

Now that we’ve been back from the holidays for a month, I’d like to round out the 2013 predictions season with a look at the year ahead in server virtualization. If you’re like me (or this New York Times columnist), you’ll agree that a little procrastination can sometimes be a good thing to help collect and organize your plans for the year ahead. (Did you buy that rationalization?)

We’re now more than a decade into the era of widespread x86 server virtualization. Hypervisors are certainly a mature (if not peaceful) technology category, and the consolidation benefits of virtualization are now incontestable. 77% of you will be using virtualization by the end of this year, and you’re running as many as 6 out of 10 workloads in virtual machines. With such strong penetration, what’s left? In our view: plenty. It’s time to ask your virtual infrastructure, “What have you done for me lately?”

With that question in mind, I asked my colleagues on the I&O team to help me predict what the year ahead will hold. Here are the trends in 2013 you should track closely:

  1. Consolidation savings won’t be enough to justify further virtualization. For most I&O pros, the easy workloads are already virtualized. Looking ahead at 2013, what’s left are the complex business-critical applications the business can’t run without (high-performance databases, ERP, and collaboration top the list). You won’t virtualize these to save on hardware; you’ll do it to make them mobile, so they can be moved, protected, and duplicated easily (see the sketch below). You’ll have to explain how virtualizing these apps will make them faster, safer, and more reliable—then prove it.
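
That mobility boils down to live migration. Here is a minimal sketch using the libvirt-python bindings, assuming a KVM/QEMU environment; the host URIs and the VM name are illustrative placeholders, and a VMware shop would accomplish the same thing with vMotion through VMware's own APIs:

```python
# Minimal live-migration sketch with libvirt-python (KVM/QEMU assumed).
# Host URIs and the domain name below are illustrative placeholders.
import libvirt

SRC_URI = "qemu+ssh://host-a.example.com/system"
DST_URI = "qemu+ssh://host-b.example.com/system"

src = libvirt.open(SRC_URI)
dst = libvirt.open(DST_URI)

# A virtualized business-critical app, e.g. a database VM.
dom = src.lookupByName("erp-db-01")

# VIR_MIGRATE_LIVE copies memory while the guest keeps running, so the
# application moves between hosts with near-zero downtime.
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

src.close()
dst.close()
```
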
Read more

Microsoft Announces Windows Server 2012

Richard Fichera

The Event

On Tuesday, September 4, Microsoft made the official announcement of Windows Server 2012, ending what has seemed like an interminable sequence of rumors, beta releases, and speculation about this successor to Windows Server 2008.

So, is it worth the wait and does it live up to its hype? All omens point to a resounding “YES.”

Make no mistake, this is a major restructuring of the OS and a major step-function in capabilities, aligned with several strategic trends for both Microsoft and the rest of the industry. While Microsoft’s high-level message is centered on the cloud, and on the Windows Server 2012 features that make it a productive platform upon which both enterprises and service providers can build a cost-effective cloud, its features will be immensely valuable to a wide range of businesses.

What It Does

The reviewers guide for Windows Server 2012 is over 220 pages long, and the OS has at least 100 features worth noting, so a real exploration of this OS is way beyond what I can do here. Nonetheless, we can look at several buckets of technology to get an understanding of the general capabilities. It is also important to note that while Microsoft has positioned this as a very cloud-friendly OS, almost all of its cloud-related features are also very useful in an enterprise IT environment.

  • New file system — WS2012 includes ReFS, a new file system designed to survive failures that would bring down or corrupt the previous NTFS file system (which is still available). Combined with improvements in cluster management and failover, this is a capability that will play across the entire user spectrum; the sketch below illustrates the core idea.
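
ReFS’s resilience rests on checksummed metadata (and, optionally, file data) that is verified on every read, so corruption is detected rather than silently returned. Here is a conceptual Python sketch of that idea only; it does not reflect ReFS’s actual on-disk format:

```python
# Conceptual sketch of checksum-on-read, the core idea behind ReFS
# integrity streams. Illustrative only; not ReFS's actual on-disk format.
import zlib

def write_block(data: bytes) -> tuple[bytes, int]:
    """Store a block alongside its checksum."""
    return data, zlib.crc32(data)

def read_block(data: bytes, checksum: int) -> bytes:
    """Verify on every read; surface corruption instead of returning it."""
    if zlib.crc32(data) != checksum:
        raise IOError("checksum mismatch: block corrupt, recover a good copy")
    return data

block, crc = write_block(b"important metadata")
assert read_block(block, crc) == b"important metadata"

# Simulated bit rot: a single flipped byte is detected, not silently served.
try:
    read_block(b"imp0rtant metadata", crc)
except IOError as err:
    print(err)
```
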
Read more

IBM Rounds Out Its Linux Offerings With Power Linux

Richard Fichera

In the latest evolution of its Linux push, IBM has added to its non-x86 Linux server line with the introduction of new dedicated Power 7 rack and blade servers that only run Linux. “Hah!” you say. “Power already runs Linux, and quite well according to IBM.” This is indeed true, but when you look at the price/performance of Linux on standard Power, the picture is not quite as advantageous, with the higher cost of Power servers compared to x86 servers offsetting much if not all of the performance advantage.

Enter the new Flex System p24L (Linux) Compute Node blade for the new PureFlex system and the IBM PowerLinux 7R2 rack server. Both are dedicated Linux-only systems with two Power 7 processors (six or eight cores each, four threads per core), and both ship with unlimited licenses for IBM’s PowerVM hypervisor. Most importantly, these systems, in exchange for the limitation that they will run only Linux, are priced competitively with similarly configured x86 systems from major competitors, and IBM is betting on the improvement in performance, shown by IBM-supplied benchmarks, to overcome any resistance to running Linux on a non-x86 system. Note that this is a different proposition than Linux running on an IFL in a zSeries, since the mainframe is usually not the entry point for the customer: IBM typically sells IFLs to customers with an existing mainframe, whereas with PowerLinux it will be attempting to sell to net-new customers as well as established accounts.

Read more

Cisco’s Turn At Bat, Introduces Next Generation Of UCS

Richard Fichera

Next up in the 2012 lineup for the Intel E5 refresh cycle of its infrastructure offerings is Cisco, with its announcement last week of what it refers to as its third generation of fabric computing. Cisco announced a combination of tangible improvements to both the servers and the accompanying fabric components, as well as commitments to additional hardware and a major enhancement of its UCS Manager software, some available immediately and some later in 2012. Highlights include:

  • New servers – No surprise here: Cisco is upgrading its servers to the new Intel CPU offerings, leading with its high-volume B200 blade server and two C-Series rack-mount servers, one a general-purpose platform and the other targeted at storage-intensive requirements. On paper, the basic components of these servers sound similar to competitors’ – new E5 CPUs, faster I/O, and more memory. In addition to the servers announced for March availability, Cisco stated that it would be delivering additional models for ultra-dense computing and mission-critical enterprise workloads later in the year.
  • Fabric improvements – Because Cisco has a distinctive architecture, it also focused on upgrades to the UCS fabric in three areas: server, enclosure, and top-level interconnect. The servers now have an optional improved virtual NIC card with support for up to 128 VLANs per adapter and two 20 Gb ports per adapter. One is on the motherboard, and another can be plugged in as a mezzanine card, giving up to 80 Gb of bandwidth to each server. The Fabric Extender, the component that connects each enclosure to the top-level interconnect, has seen its bandwidth doubled to a maximum of 160 Gb. The Fabric Interconnect, the top of the UCS management hierarchy and the interface to the rest of the enterprise network, has been upgraded to a maximum of 96 universal 10 Gb ports (divided between downlinks to the blade enclosures and uplinks to the enterprise fabric).
Read more