Server virtualization has been, and continues to be, a top IT priority for good reasons: improving infrastructure manageability, lowering TCO, and strengthening business continuity and disaster recovery capabilities. In IT's quest to virtualize more workloads, however, videoconferencing has remained on its own island of specialized hardware due to its reliance on transcoding DSPs (digital signal processors), an extremely compute-intensive workload. Transcoding is necessary for interoperability between unlike videoconferencing systems, and the performance of that specialized hardware has been difficult to match with software running on standard servers.
That is, unless you turn to a model that doesn't use transcoding. Enter Vidyo, whose virtual edition infrastructure delivers performance comparable to its physical appliances because it doesn't have to transcode calls between Vidyo endpoints. Desktop videoconferencing solutions are, for the most part, available in virtualized models. However, transcoding-based videoconferencing is also becoming available in virtualized form, with LifeSize offering its platform in this model. In the cloud, Blue Jeans -- the poster child for videoconferencing as a service -- has a virtualized platform based on transcoding that also provides a high-quality experience. It will be interesting to see how the performance of virtualized transcoding workloads compares to traditional infrastructure.
Innovation in videoconferencing today is about making this historically cost-prohibitive technology cheaper and easier to deploy. Server virtualization is key to that goal. In conversations with end user companies considering their videoconferencing strategy, virtualization is something they express interest in and would consider the next time they refresh their technology. Here's what vendors in the upcoming Forrester Wave on desktop videoconferencing are doing with virtualization today:
Sometimes you can coax a reluctant partner and I&O customer community for only so long before you feel you have to take matters into your own hands. That is exactly what VMware has decided to do to become relevant in the cloud platforms space. The hypervisor pioneer unveiled vCloud Hybrid Service to investors today in what is more a statement of intent than a true unveiling.
VMware's public cloud service — yep, a full public IaaS cloud meant to compete with Amazon Web Services, IBM SmartCloud Enterprise, HP Cloud, Rackspace, and others — won't be fully unveiled until Q2 2013, so many of the details about the service remain under wraps. VMware hired the former president of Savvis Cloud, Bill Fathers, to run this new offering and said it was a top-three initiative for the company and thus would be getting "the level of investment appropriate to that priority and to capitalize on a $14B market opportunity," according to Matthew Lodge, VP of Cloud Services Product Marketing and Management for VMware, who spoke to us Tuesday about the pending announcement.
According to CRN’s article on the event, Gelsinger was quoted as saying, “We want to own corporate workloads. We all lose if they end up in these commodity public clouds. We want to extend our franchise from the private cloud into the public cloud and uniquely enable our customers with the benefits of both. Own the corporate workload now and forever.”
Forgive my frankness, Mr. Gelsinger, but you just don’t get it. Public clouds are not your enemy. And the disruption they are causing to your forward revenues does not stem from their capture of enterprise workloads. The battle lines you should be focusing on are between advanced virtualization and true cloud services, and the future placement of Systems of Engagement versus Systems of Record.
You've told your ITOps team to make it happen, you've approved the purchase of cloud-in-a-box solutions, but your developers aren't using it. Why?
Forrester analyst Lauren Nelson and I get this question often in our inquiries with enterprise customers. We've found the answer and published a new report specifically on this topic.
Its core finding: Your approach is wrong.
You're asking the wrong people to build the solution. You aren't giving them clear enough direction on what they should build. You aren't helping them understand how this new service should operate or how it will affect their career and value to the organization. And more often than not you are building the private cloud without engaging the buyers who will consume this cloud.
And your approach is perfectly logical. For many of us in IT, we see a private cloud as an extension of our investments in virtualization. It's simply virtualization with some standardization, automation, a portal, and an image library, isn't it? Yep. And a Porsche is just a Volkswagen with a better engine, tires, suspension, and seats. That's the fallacy in this thinking.
To get private cloud right, you have to step away from the guts of the solution and start with the value proposition, viewed from the perspective of the consumers of this service: your internal developers and business users.
Now that we’ve been back from the holidays for a month, I’d like to round out the 2013 predictions season with a look at the year ahead in server virtualization. If you’re like me (or this New York Times columnist), you’ll agree that a little procrastination can sometimes be a good thing to help collect and organize your plans for the year ahead. (Did you buy that rationalization?)
We’re now more than a decade into the era of widespread x86 server virtualization. Hypervisors are certainly a mature (if not peaceful) technology category, and the consolidation benefits of virtualization are now incontestable. 77% of you will be using virtualization by the end of this year, and you’re running as many as 6 out of 10 workloads in virtual machines. With such strong penetration, what’s left? In our view: plenty. It’s time to ask your virtual infrastructure, “What have you done for me lately?”
With that question in mind, I asked my colleagues on the I&O team to help me predict what the year ahead will hold. Here are the trends in 2013 you should track closely:
Consolidation savings won’t be enough to justify further virtualization. For most I&O pros, the easy workloads are already virtualized. Looking ahead at 2013, what’s left are the complex business-critical applications the business can’t run without (high-performance databases, ERP, and collaboration top the list). You won’t virtualize these to save on hardware; you’ll do it to make them mobile, so they can be moved, protected, and duplicated easily. You’ll have to explain how virtualizing these apps will make them faster, safer, and more reliable—then prove it.
On Tuesday, September 4, Microsoft made the official announcement of Windows Server 2012, ending what has seemed like an interminable sequence of rumors, Beta releases, and endless speculation about this successor to Windows Server 2008.
So, is it worth the wait and does it live up to its hype? All omens point to a resounding “YES.”
Make no mistake, this is a really major restructuring of the OS, and a major step-function in capabilities aligned with several major strategic trends for both Microsoft and the rest of the industry. While Microsoft’s high level message is centered on the cloud, and on the Windows Server 2012 features that make it a productive platform upon which both enterprises and service providers can build a cost-effective cloud, its features will be immensely valuable to a wide range of businesses.
What It Does
The reviewer’s guide for Windows Server 2012 is over 220 pages long, and the OS has at least 100 features that are worth noting, so a real exploration of the features of this OS is way beyond what I can do here. Nonetheless, we can look at several buckets of technology to get an understanding of the general capabilities. Also important to note is that while Microsoft has positioned this as a very cloud-friendly OS, almost all of these cloud-related features are also very useful in an enterprise IT environment.
New file system — Included in WS2012 is ReFS, a new file system designed to survive failures that would bring down or corrupt the previous NTFS file system (which is still available). Combined with improvements in cluster management and failover, this is a capability that will play across the entire user spectrum.
I recently went for coffee with a very interesting gentleman who had previously been responsible for threat and vulnerability management in a global bank – our conversation roamed far and wide but kept on circling back to one or two core messages – the real fundamental principles of information security. One of these principles was “know your assets.”
Asset management is something that many CISOs tend to skip over, often in the belief that information assets are managed by the business owners and hardware assets are closely managed by IT. Unfortunately, I’m not convinced that either of these beliefs is true to any great extent.
Take, for example, Anonymous’ recent hack of a forgotten VM server within AAPT’s outsourced infrastructure. VM "sprawl" is one of the key risks that Forrester discusses, and this appears to be a classic example – a virtual server created in haste and soon forgotten about. Commonly, as these devices fall off asset lists, they get neglected – malware and patching updates are skipped and backups are overlooked – yet they still exist on the network. It’s the perfect place for an attacker to sit unnoticed and, if the device exists in a hosted environment, it can also have the negative economic impact of monthly cost and license fees. One anecdote I heard was of a system administrator who, very cautiously and very successfully, disabled around 200 orphaned virtual servers in his organisation – with no negative business impact whatsoever.
In the latest evolution of its Linux push, IBM has added to its non-x86 Linux server line with the introduction of new dedicated Power 7 rack and blade servers that only run Linux. “Hah!” you say. “Power already runs Linux, and quite well according to IBM.” This is indeed true, but when you look at the price/performance of Linux on standard Power, the picture is not quite as advantageous, with the higher cost of Power servers compared to x86 servers offsetting much if not all of the performance advantage.
Enter the new Flex System p24L (Linux) Compute Node blade for the new PureFlex system and the IBM PowerLinux™ 7R2 rack server. Both are dedicated Linux-only systems with two Power 7 processors (6 or 8 cores each, four threads per core), and both ship with unlimited licenses for IBM’s PowerVM hypervisor. Most importantly, these systems, in exchange for the limitation that they will run only Linux, are priced competitively with similarly configured x86 systems from major competitors, and IBM is betting on the performance improvement, shown by IBM-supplied benchmarks, to overcome any resistance to running Linux on a non-x86 system. Note that this is a different proposition from Linux running on an IFL in a zSeries, since the mainframe is usually not the entry point for the customer — IBM typically sells IFLs to customers with an existing mainframe, whereas with Power Linux it will be attempting to sell to net-new customers as well as established accounts.
Next up in the 2012 lineup for the Intel E5 refresh cycle of its infrastructure offerings is Cisco, with its announcement last week of what it refers to as its third generation of fabric computing. Cisco announced a combination of tangible improvements to both the servers and the accompanying fabric components, as well as some commitments for additional hardware and a major enhancement of its UCS Manager software immediately and later in 2012. Highlights include:
New servers – No surprise here: Cisco is upgrading its servers to the new Intel CPU offerings, leading with its high-volume B200 blade server and two C-Series rack-mount servers, one a general-purpose platform and the other targeted at storage-intensive requirements. On paper, the basic components of these servers sound similar to competitors’ – new E5 CPUs, faster I/O, and more memory. In addition to the servers announced for March availability, Cisco stated that it would be delivering additional models for ultra-dense computing and mission-critical enterprise workloads later in the year.
Fabric improvements – Because Cisco has a unique architecture, it also focused on upgrades to the UCS fabric in three areas: server, enclosure, and top-level interconnect. The servers now have an optional improved virtual NIC card with support for up to 128 VLANs per adapter and two 20 Gb ports per adapter. One is on the motherboard and another can be plugged in as a mezzanine card, giving up to 80 Gb of bandwidth to each server. The Fabric Extender, the component that connects each enclosure to the top-level Fabric Interconnect, has seen its bandwidth doubled to a maximum of 160 Gb. The Fabric Interconnect itself, the top of the UCS management hierarchy and the interface to the rest of the enterprise network, has been upgraded to a maximum of 96 universal 10 Gb ports (divided between downlinks to the blade enclosures and uplinks to the enterprise fabric).
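As a sanity check on the per-server figure quoted above, the aggregation is straightforward; the port counts and speeds come from the announcement, and the arithmetic here is ours.

```python
# Per-server bandwidth on the upgraded UCS virtual NIC configuration:
# each adapter has two 20 Gb ports, and a server can carry two adapters
# (one on the motherboard, one as a mezzanine card).
ports_per_adapter = 2
port_speed_gb = 20
adapters_per_server = 2

per_adapter_gb = ports_per_adapter * port_speed_gb       # 40 Gb per adapter
per_server_gb = per_adapter_gb * adapters_per_server     # 80 Gb per server

print(per_server_gb)  # → 80
```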
I recently published an update on power and cooling in the data center (http://www.forrester.com/go?docid=60817), and as I review it online, I am struck by the combination of old and new. The old – the evolution of semiconductor technology, the increasingly elegant attempts to design systems and components that can be incrementally throttled, and the increasingly sophisticated construction of the actual data centers themselves, with increasing modularity and physical efficiency of power and cooling.
The new is the incredible momentum I see behind Data Center Infrastructure Management software. In a few short years, DCIM solutions have gone from simple aggregated viewing dashboards to complex software that understands tens of thousands of components, collects, filters and analyzes data from thousands of sensors in a data center (a single CRAC may have in excess of 20 sensors, a server over a dozen, etc.) and understands the relationships between components well enough to proactively raise alarms, model potential workload placement and make recommendations about prospective changes.
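At its core, that sensor filtering and alarming is threshold logic applied at scale. Here's an illustrative sketch of the kind of check a DCIM tool runs continuously across thousands of readings; the sensor names and limits are invented for the example.

```python
# Hypothetical readings: each entry pairs a sensor's current value with
# its configured limit, as a DCIM system's collection layer might report.
readings = [
    {"sensor": "crac1.supply_temp_c", "value": 18.2, "limit": 27.0},
    {"sensor": "rack12.inlet_temp_c", "value": 29.5, "limit": 27.0},
    {"sensor": "pdu3.load_pct", "value": 92.0, "limit": 80.0},
]

def raise_alarms(readings):
    """Return an alarm string for every reading that exceeds its limit."""
    return [
        f"ALARM: {r['sensor']} at {r['value']} exceeds limit {r['limit']}"
        for r in readings
        if r["value"] > r["limit"]
    ]

for alarm in raise_alarms(readings):
    print(alarm)
```

Real DCIM suites layer much more on top of this, such as trending, component relationships, and workload-placement modeling, but simple limit checks like these are the foundation the proactive alarms rest on.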
Of all the technologies reviewed in the document, DCIM offers one of the highest potentials for improving overall efficiency without sacrificing reliability or scalability of the enterprise data center. While the various DCIM suppliers are still experimenting with business models, I think that it is almost essential for any data center operations group that expects significant change, be it growth, shrinkage, migration or a major consolidation or cloud project, to invest in DCIM software. DCIM consumers can expect to see major competitive action among the current suppliers, and there is a strong potential for additional consolidation.