At Forrester, each analyst keeps in regular contact with clients and the industry through a process known as Inquiry. For workforce computing, this includes Benjamin Gray, Christian Kane, Michele Pelino, Onica King, and Chris Voce. Any Forrester client with Inquiry access can arrange 1:1 time with an analyst to ask questions and seek advice, or simply request a response by e-mail. Most analysts also take the opportunity to ask a few well-considered questions of our own. Taken together with data, vendor briefings, ongoing research, and client advisory work, the inquiry process keeps our eyes and ears focused on what matters to I&O professionals and provides critical insight into their pain points and needs. In this blog, I'll share my unvarnished responses to a client inquiry I received just last week:
1. What do you see as the most important trends in End User Computing for the next 3-4 years?
2. What will be the role of each type of device in an organization such as ours (financial services)?
3. What's the best way to find out what our employees need? What do other firms offer different types of workers?
4. Do you have any economic numbers for those devices (e.g., TCO per year)?
5. Do you have any data or examples from other firms like ours?
I bet in your head you just sang “What’s Going On” to yourself – I hope that you did, it’s a classic. Anyway, it’s that time again … my Forrester colleague Glenn O’Donnell and the itSMF USA are set to launch their annual itSMF USA/Forrester IT service management (ITSM) survey, and I can’t help thinking that, because we are in a radically different ITSM world from the one in which they ran the last survey, the results will be significantly different – showing that we have upped our collective ITSM game.
What do I mean by “radically different ITSM world”?
Is it me, or does the network industry remind you of Revenge of the Nerds? Networking was cast aside in the cloud revolution, but now companies are learning, the painful way, what a mistake that was. Don’t kid yourself for one minute that VMware’s acquisition of Nicira was mostly about developing heterogeneous hypervisor data centers or reducing networking hardware costs. If you do think that, you’re probably an application developer, hypervisor administrator, or data center architect. You’ve been strutting your newly virtualized self through rows of server racks over the last five years, casually brushing aside the networking administrators. You certainly had some outside support for your views: Google, VMware, and even the OpenFlow community have messaged that networking organizations aren’t cool anymore and need to be circumvented – by coding around the network, flattening it into a Layer 2 network, or taking over the control plane.
To be fair, though, networking vendors and networking teams helped to create this friction, too, since they built their networks on:
40 years of outdated networking reliability principles. The current state of networking can in many ways be traced back to ARPANET’s founding principle: a single, reliable method of communication carrying a host of different flows, traffic types, and workloads. Basically, voice, video, and all applications traverse the same rigid and static set of links, which change only when a failure occurs. The payload didn’t matter.
Here’s the hard truth: IT infrastructure and operations (I&O) teams are becoming less relevant. This will only accelerate now that we are in what Forrester calls “the age of the customer,” where bring-your-own-technology policies and “as-a-service” software and infrastructure proliferate.
In this new world, developers still need compute and storage to keep up with growth. And workers need some sort of PC or mobile device to get their jobs done. But they don’t necessarily need you in corporate IT to give it to them. Case in point: employees pay for 70% of the tablets they use for work.
At the end of the day, if you can’t deliver what your workforce and developers care about, they will use whatever, and whomever, it takes to get their jobs done better, faster, and cheaper.
Much of this comes down to customer experience: how your customers perceive every interaction with the IT organization, from your help desk staff to the corporate applications they access every day. Here’s a proof point on how much customer experience matters, from Forrester’s soon-to-be-published book Outside In: over a recent five-year period during which the S&P 500 was flat, a stock portfolio of customer experience leaders grew 22%.
Last year, my colleague James Staten and I published evaluations of the (internal) private cloud and public cloud markets — this year we’re going to fill in the remaining gap in the IaaS space by publishing a Forrester Wave evaluation on Hosted Private Cloud Solutions. Vendors participating in this report will be evaluated on key criteria, a demo following a mandatory script, and customer references that validate the solution. Throughout the research process, I’ll be providing updates and interesting findings before the report goes live in early Q4 2012.
So, what is hosted private cloud? Like almost every product in the cloud space, there’s a lot of ambiguity about what you’ll be getting if you sign on to use a hosted private cloud solution. Today, NIST defines private cloud as:
The cloud infrastructure is provisioned for exclusive use by a single organization comprising multiple consumers (e.g., business units). It may be owned, managed, and operated by the organization, a third party, or some combination of them, and it may exist on or off premises.
Hosted private cloud refers to a variation of this where the solution lives off-premises in a hosted environment while still incorporating NIST's IaaS service definition, particularly where “[t]he consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications.” But there’s a great deal of variation in today’s hosted private cloud arena. Usually solutions differ in the following ways:
At the core of the software-defined datacenter is an abstracted and pooled set of shared resources. But the secret sauce is in the automation that slices up and allocates those shared resources on-demand, without manual tinkering. This is how the largest public clouds work today, but it’s not how the bulk of large enterprise datacenters work. VMware recognizes that and has been extending its reach beyond the compute stack for a while.
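The pooled-resources-plus-automation pattern described above can be sketched in a few lines. This is a minimal, illustrative sketch only; all names here are hypothetical, not any vendor's actual API. The idea: capacity from many physical hosts is abstracted into one shared pool, and an automation layer slices it up and allocates it on demand, with no manual tinkering.

```python
from dataclasses import dataclass


@dataclass
class Pool:
    """Abstracted, shared capacity aggregated from many physical hosts."""
    cpus: int
    ram_gb: int


class Allocator:
    """Automation layer: carves slices out of the pool on demand."""

    def __init__(self, pool: Pool):
        self.pool = pool
        self.allocations = {}

    def provision(self, name: str, cpus: int, ram_gb: int) -> None:
        # Reject requests the pool cannot satisfy, rather than oversubscribe.
        if cpus > self.pool.cpus or ram_gb > self.pool.ram_gb:
            raise RuntimeError("insufficient pooled capacity")
        self.pool.cpus -= cpus
        self.pool.ram_gb -= ram_gb
        self.allocations[name] = (cpus, ram_gb)

    def release(self, name: str) -> None:
        # Returning a slice to the pool makes it reusable immediately.
        cpus, ram_gb = self.allocations.pop(name)
        self.pool.cpus += cpus
        self.pool.ram_gb += ram_gb


alloc = Allocator(Pool(cpus=128, ram_gb=512))
alloc.provision("web-tier", cpus=8, ram_gb=32)
print(alloc.pool.cpus)  # 120 CPUs remain in the shared pool
```

The point of the sketch is the division of labor: consumers ask for capacity by size, never by physical host, and the automation layer does the bookkeeping. Scaling that bookkeeping beyond compute, into storage and networking, is where VMware has been extending its reach.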
The long-rumored changing of the guard at VMware finally took place last week, and with it, down came a stubborn strategic stance that had been a big client dissatisfier. Out went the ex-Microsoft visionary who dreamed of delivering a new "cloud OS" that would replace Windows Server as the corporate standard, and in came a pragmatic refocusing on infrastructure transformation that acknowledges the heterogeneous reality of today's data center.
Paul Maritz will move into a technology strategy role at EMC, where he can focus on how the greater EMC company can raise its relevance with developers. Clearly, EMC needs developer influence and application-level expertise, and from a stronger, full-portfolio perspective. Here, his experience can be applied to greater effect -- and we expect Paul to shine in this role. However, I wouldn't look for him to re-emerge as CEO of a new spin-out of these assets. At heart, Paul is a natural technologist, and it's not clear all these assets would move out as one anyway.
I’ve been meaning to write about service catalogs for a year now, but I’ve just not had the bandwidth. It’s a common subject for Forrester client inquiries, mainly for my colleague Eveline Oehrlich, who has several formal service catalog management deliverables scheduled for 2012. Delivering a recent service catalog webinar with ServiceNow, however, made me realize that I had already created the content for a quick service catalog blog. Hopefully it’s a blog that will help many learn from the service catalog mistakes of others.
What’s the big issue with service catalogs?
Service catalogs (or, more important, service catalog management) really hit the mainstream with ITIL v3 (introduced in June 2007), which codified real-world use of early service catalogs. So they are nothing new. However, many organizations struggle to start (and finish) service catalog initiatives AND to realize the anticipated benefits. The answer for many lies in that last sentence: they need more than “service catalog initiatives.”
As an aside, I often ask attendees of my presentations: “who has a service catalog?”, “who is planning a service catalog?”, and “who feels they have realized the anticipated benefits from deploying a service catalog?” While the answers to the first two questions can vary, the answer to the third is pretty consistent – organizations are consistently failing to realize the expected benefits from their service catalog initiatives.
So what goes wrong?
In my experience, there are four key issues:
It’s often seen as a technology project: “let’s buy a service catalog tool” rather than introducing service catalog management along with enabling technology.
Bridgekeeper: "What ... is your name?"
Traveler: "John Swainson of Dell."
Bridgekeeper: "What ... is your quest?"
Traveler: "Hey! That's not a bad idea!"
We suspect Dell's process was more methodical than that!
This acquisition was not a surprise, of course. All along, it has been obvious that Dell needed stronger assets in software as it continues on its quest to avoid the Gorge of Eternal Peril that is spanned by the Bridge of Death. When the company announced that John Swainson was joining to lead the newly formed software group, astute industry watchers knew the next steps would include an ambitious acquisition. We predicted such an acquisition would be one of Swainson's first moves, and after only four months on the job, indeed it was.