With server-hosted virtual desktops (VDI), you take something that used to be a few centimeters from someone's fingertips - their Windows desktop - and move it sometimes thousands of miles away, and you expect them to be okay with that. It’s possible, but choose your technology vendor wisely, because the project’s success will hinge on the end user experience.
It’s not easy to give users an equal or better Windows desktop experience with VDI than they have with their local PC. If they rely on videoconferencing to collaborate with their colleagues, the VDI system has to work with their local webcam and it has to handle the video stream properly so they don’t get choppy voice and video. If they use a tablet, your VDI vendor’s tablet client has to be good, with intuitive touch gestures. There may need to be a way for them to install software, and they may need to use the system over a 4G/LTE network link while traveling.
To do all of these things and more across a wide range of work styles, devices, applications and networks requires sophisticated, expensive capabilities. If you choose your vendor primarily on cost, the solution you get may not have what you need to deliver an acceptable user experience - especially if your business needs change.
In a world where OS and low-level platform software is considered unfashionable, it was refreshing to see the Linux glitterati and cognoscenti descend on Boston for the last three days, 5,000 strong and genuinely passionate about Linux. I spent a day there mingling with the crowds in the exhibit halls, attending some sessions and meeting with Red Hat management. Overall, the breadth of Red Hat’s offerings is overwhelming, far too much to comprehend in a single day or even a handful of days, but I focused my attention on two big issues for the emerging software-defined data center – containers and the inexorable march of OpenStack.
Containers are all the rage, and Red Hat is firmly behind them, with its currently shipping RHEL Atomic release optimized to support them. The news at the Summit was the release of RHEL Atomic Enterprise, which extends the ability to execute and manage containers across a cluster rather than a single system. In conjunction with a tool stack such as Docker and Kubernetes, this paves the way for very powerful distributed deployments that take advantage of the failure isolation and performance potential of clusters in the enterprise. While all the IP in RHEL Atomic, Docker and Kubernetes is available to the community and to competitors, it appears that Red Hat has stolen at least a temporary early lead in bolstering the usability of this increasingly central virtualization abstraction for the next-generation data center.
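The core idea behind managing containers "over a cluster as opposed to a single system" is a scheduler that places each container on a suitable node. The toy sketch below illustrates that placement logic in plain Python; the node names, capacities, and greedy most-free-capacity policy are all hypothetical simplifications – a real system like Kubernetes also handles health checks, rescheduling on node failure, and much more.

```python
# Toy sketch of cluster-level container placement (the idea behind
# Kubernetes-style scheduling); all names and numbers are hypothetical.

def schedule(containers, nodes):
    """Place each container on the node with the most free capacity."""
    placement = {}
    for name, demand in containers:
        # Greedy policy: pick the node with the most headroom,
        # which spreads load and limits the blast radius of a node failure.
        node = max(nodes, key=lambda n: nodes[n])
        if nodes[node] < demand:
            raise RuntimeError(f"cluster full, cannot place {name}")
        nodes[node] -= demand   # reserve capacity on the chosen node
        placement[name] = node
    return placement

# A hypothetical 3-node cluster with 4 CPU cores free on each node.
nodes = {"node-a": 4, "node-b": 4, "node-c": 4}
containers = [("web", 2), ("db", 2), ("cache", 1), ("worker", 3)]
print(schedule(containers, nodes))
```

Even this crude sketch shows the payoff the clustered model offers over a single host: the workload is spread across machines, so one node failing takes down only the containers placed on it.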
Unfortunately, visa issues prevented me from attending the OpenStack summit in Vancouver last week — despite submitting my application to the Canadian embassy in Beijing 40 days in advance! However, after following extensive online discussions of the event and discussing it with vendors and peers, I would say that OpenStack is moving into a new phase, for two reasons:
The rise of containers is laying the foundation for the next level of enterprise readiness. Docker’s container technology has become a major factor in the evolution of OpenStack components. Docker drivers have been implemented for the key Nova and Heat components, extending their computing and orchestration capabilities, respectively. The Magnum project, which aims to provide container services, allows OpenStack to create clusters with Google’s Kubernetes (k8s) and Docker’s Swarm. The Murano project, contributed by Mirantis to provide application catalog services, is also integrated with k8s.
I’ve been talking to a number of users and providers of bare-metal cloud services, and I’m finding the common threads among the high-profile use cases interesting both individually and as a way to connect some dots across this class of provider. These services offer the ability to provision and use dedicated physical servers with semantics very similar to the common VM IaaS cloud – servers that can be instantiated at will in the cloud, provisioned with a variety of OS images, connected to storage and used to run applications. The differentiation for customers is in the behavior of the resulting images:
Deterministic performance – Your workload is running on a dedicated resource, so there is no question of any “noisy neighbor” problem, or even of sharing resources with otherwise well-behaved neighbors.
Extreme low latency – Like it or not, VMs, even lightweight ones, impose some level of additional latency compared to bare-metal OS images. Where this latency is a factor, bare-metal clouds offer a differentiated alternative.
Raw performance – Under the right conditions, a single bare-metal server can process more work than a collection of VMs, even when their nominal aggregate performance is similar. Benchmarking is always tricky, but several of the bare metal cloud vendors can show some impressive comparative benchmarks to prospective customers.
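The arithmetic behind the raw-performance claim above can be made concrete with a back-of-the-envelope model. The overhead percentages below are purely illustrative assumptions, not benchmark results – the point is only that "nominal aggregate performance" of a set of VMs can overstate the work actually delivered.

```python
# Illustrative (not benchmarked) arithmetic: why the nominal aggregate
# performance of several VMs can overstate delivered throughput.

cores = 16                    # physical cores in one server
per_core_throughput = 1000.0  # requests/sec per core (hypothetical unit)

# Bare metal: one OS image drives all cores directly.
bare_metal = cores * per_core_throughput

# Virtualized: same cores split across several VMs, with an assumed 10%
# hypervisor tax plus an assumed 5% lost to duplicated per-VM OS overhead.
hypervisor_overhead = 0.10
per_vm_overhead = 0.05
virtualized = bare_metal * (1 - hypervisor_overhead) * (1 - per_vm_overhead)

print(f"bare metal: {bare_metal:.0f} req/s")
print(f"VMs total:  {virtualized:.0f} req/s")
```

With these assumed overheads the virtualized configuration delivers roughly 14.5% less work from the same hardware, which is the gap the bare-metal vendors' comparative benchmarks aim to showcase.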
The rise of the DevOps role in the enterprise and the increasing requirements of agility beyond infrastructure and applications make the platform-as-a-service (PaaS) market one to watch for both CIOs and enterprise architecture professionals. On December 9, the membership of Cloud Foundry, a major PaaS open source project, announced the formation of the Cloud Foundry Foundation.
In my view, this is as important as the establishment of the OpenStack Foundation in 2012, which was a game-changing move for the cloud industry. Here’s why:
PaaS is becoming an important alternative to middleware stacks. Forrester defines PaaS as a complete application platform for multitenant cloud environments that includes development tools, runtime, and administration and management tools and services. (See our Forrester Wave evaluation for more detail on the space and its vendors.) In the cloud era, it’s a transformational alternative to established middleware stacks for the development, deployment, and administration of custom applications in a modern application platform, serving as a strategic layer between infrastructure-as-a-service (IaaS) and software-as-a-service (SaaS) with innovative tools.
Cloud Foundry is one of the major open source PaaS projects. The technology was designed and architected by Derek Collison and built in the Ruby and Go programming languages by Collison and Vadim Spivak (the wiki is wrong!). VMware released it as open source in 2011 after Collison joined the company. Early adopters of Cloud Foundry include large multinationals like Verizon, SAP, NTT, and SAS, as well as Chinese Internet giants like Baidu.
There is always a tendency to regard the major players in large markets as a static background against which the froth of smaller companies and the rapid dance of customer innovation plays out. But if we turn our lens toward the major server vendors (who are now storage, networking and software vendors as well), we see that the relatively flat industry revenues hide almost continuous churn. Turn back the clock slightly more than five years, and the market was dominated by three vendors: HP, Dell and IBM. Since then, IBM has divested itself of the highest-velocity portion of its server business, Dell is no longer a public company, Lenovo is now a major player in servers, Cisco has come out of nowhere to mount a serious challenge in the x86 server segment, and HP has announced that it intends to split itself into two companies.
And it hasn’t stopped. Two recent events – the fracturing of the VCE consortium and the formerly unthinkable hook-up of IBM and Cisco – illustrate the urgency with which existing players are seeking differential advantage, and reinforce our contention that converged and integrated infrastructure remains one of the most active and profitable segments of the industry.
EMC’s recent acquisition of Cisco’s interest in VCE effectively acknowledged what most customers have been telling us for a long time – that VCE had become essentially an EMC-driven sales vehicle to sell storage, supported by VMware (owned by EMC) and Cisco as a systems platform. EMC’s purchase of Cisco’s interest also tacitly acknowledges two underlying tensions in the converged infrastructure space:
I recently attended VMware’s vForum 2014 event in Beijing. The vendor has established a local ecosystem for the three pillars of its business: the software-defined data center (SDDC), cloud services, and end user computing. VMware is working with:
Huawei to refine SDDC technologies. VMware is leveraging Huawei’s technology capabilities to improve its product features. VMware has integrated Huawei’s Agile Controller with NSX and vCenter to operate and manage network automation and to quickly migrate virtual machines online. Huawei provides the technology to unify the management of virtual and physical networks on top of VMware’s virtualization platform. This partnership can help VMware optimize its existing software features and improve the customer experience.
A group of us just published an analysis of VMworld (Breaking Down VMworld), and I thought I’d take this opportunity to add some additional color to the analysis. The report is an excellent synthesis of our analysis, the work of a talented team of collaborators with my two cents thrown in as well, but I wanted to emphasize a few additional impressions, primarily around storage, converged infrastructure, and the overall tone of the show.
First, storage. If they ever need a new name for the show, they might consider “StorageWorld” – it seemed to me that just about every other booth on the show floor was about storage. Cloud storage, flash storage, hybrid storage, cheap storage, smart storage, object storage … you get the picture. Reading about the hyper-growth of storage and the criticality of storage management to the overall operation of a virtualized environment does not drive the concept home in quite the same way as seeing thousands of show attendees thronging the booths of storage vendors, large and small, for days on end. Another leading indicator, IMHO, was the “edge of the show” booths – the cheaper booths on the edge of the floor where smaller startups congregate – which were also well populated with new and small storage vendors. There is certainly no shortage of ambition and vision in the storage technology pipeline for the next few years.
Bill Gates said, “People everywhere love Windows.” Whether or not you agree, the fact that Microsoft Windows remains the de facto standard for business productivity after nearly three decades suggests that many still do. But as the sales figures of Microsoft’s competitors suggest, people everywhere love lots of other things too. And one of the reasons they love them so much is that they like to get things done, and sometimes that means getting away from the office to a quiet place, or using a technology that isn’t constrained by corporate policies and controls, so they can be freer to experiment, grow their skills and develop their ideas uninhibited.
Technology managers I speak with are aware of this, but they’re justifiably paranoid about security, costs, and complexity. So the result of these conflicting forces coming together is inspiring rapid innovation in a mosaic of technologies that Forrester collectively calls digital workspace delivery systems. It involves many vendors, including Microsoft, Citrix, VMware, Dell, nComputing, Amazon Web Services, Fujitsu, AppSense, Moka5, and more. The goal of our work is to help companies develop their capabilities for delivering satisfying Microsoft Windows desktop and application experiences to a wide range of users, devices, and locations.
It's hard to believe that a company could burn through $225 million in 11 months, but it looks like that may be exactly what AirWatch did. According to data released by AirWatch and analyses written by financial analysts (links to all data sources at bottom of post), AirWatch had likely burned through nearly all of its available cash in record time. Based on an assumed burn of $120K per employee (fully loaded) per year and an assumed removal of $50M in equity at the time of the venture round, AirWatch would have had somewhere between 5 and 6 months of runway left as of January 2014. These assumptions are corroborated by the fact that VMware has contractually extended AirWatch an offer of a bridge loan if the acquisition deal does not close in the next 6 months.
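The runway arithmetic above is simple enough to sketch. The headcount and cash-on-hand figures below are hypothetical placeholders, not reported AirWatch numbers; they are chosen only to show how the $120K-per-employee assumption turns into a 5-to-6-month runway estimate.

```python
# Back-of-the-envelope runway math; the headcount and cash figures are
# hypothetical placeholders, not reported AirWatch data.

burn_per_employee = 120_000   # fully loaded, per year (assumption from the post)
employees = 1_700             # hypothetical headcount
remaining_cash = 90_000_000   # hypothetical cash on hand, Jan 2014

monthly_burn = employees * burn_per_employee / 12   # dollars per month
runway_months = remaining_cash / monthly_burn

print(f"monthly burn: ${monthly_burn / 1e6:.1f}M")
print(f"runway: {runway_months:.1f} months")
```

With these placeholder inputs the monthly burn comes to $17M and the runway to roughly 5.3 months – squarely inside the 5-to-6-month window the analysts' assumptions imply.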
What did AirWatch do wrong? It sounds like they may have made some overly optimistic assumptions about their growth rates for 2013. It could have been adoption rates in countries outside North America. It may have just been bad luck. Or it could even be a cooling of interest in mobile device management technologies based on containerization. We won't know exactly why they were nearing the end of the runway, but what we can say is that VMware may have overpaid by a wide multiple. Based on the 2013 AirWatch bookings data VMware provided, VMware paid somewhere around 16x bookings for AirWatch. Man, that's a lot of bread!