My most popular blog of 2012 wasn’t written by me … but I guess you might have expected this if you’ve already read a few. That blog’s author, an end user (or is that a customer of an internal IT organization?), now returns to look at the IT service desk through a customer and customer experience lens. I’ll let them continue in their own words …
So how is your customer experience?
It’s never been more important to build strong customer relationships, regardless of what type of service you’re offering. Long gone are the days when the customer purchasing path was straightforward and the only route of post-sales contact was the phone. In 2013, we need to be proactive and embrace consumer-driven change, harnessing the power of new technologies as well as improving older methods of contact.
Whether your interactions with customers are face-to-face, over the phone, or via the internet (including social media), and whether they involve physical or virtual products, they now need to generate a good “experience” for customers. In the age of the “empowered customer,” failure to manage these “experiences” can lead to missed opportunities and/or customer loss. And not just with the affected customer(s).
So what is “customer experience” and could it apply to IT service desks?
Forrester’s definition is simple: “How customers perceive their interactions with your company.” So for an IT service desk, could it be: “How end users perceive their interactions with your service desk”? And if so, how do you deliver this increasingly critical “customer experience”?
I was at an industry conference recently, standing in the booth of a large PC maker while being indoctrinated with the latest word: "You can manage it with existing tools!" - a marketing director beamed, as he waved a new Windows 8 tablet under my nose. He seemed so happy I thought for a second he might grab my hand and drag me skipping through the tradeshow floor followed by a troupe of merry singing penguins, like a sort of demented convention center edition of Mary Poppins.
Oracle makes itself an easy target for the ire of the cloud community when it makes dumb, cloudwashed announcements like last week's supposed IaaS offering. But then again, Oracle is just doing what it thinks it takes to be in the cloud discussion and is frankly reflecting what a lot of its I&O customers are defining as cloud efforts.
Forrester Forrsights surveys continue to show that enterprise IT infrastructure and operations (I&O) professionals are more apt to call their static virtualized server environments clouds than to recognize that true cloud computing environments are dynamic, cost-optimized, and automated. These same enterprise buyers are also more likely to say that the use of public cloud services lies in the future rather than already taking place today. Which fallacy is more dangerous?
The latter is definitely more harmful: the first is simply cloudwashing of your own efforts, while the second turns a blind eye to activities that are growing steadily, and increasingly without your involvement or control. Both clearly place I&O outside the innovation wave at their companies and reinforce the belief that IT manages the past and is not the engine for the future. But having your head in the sand about your company's use of public cloud services such as SaaS and cloud platforms could put you more at risk.
I was part of a Forrester team that recently completed a multi-country rollout tour with Emerson Network Power as they formally released their Trellis DCIM product, a comprehensive DCIM environment many years in the making. One of the key takeaways was both an affirmation of our fundamental assertions about DCIM and hints about its popularity and attraction for potential customers that in some ways expand on the original value proposition we envisioned. Our audiences totaled approximately 500 selected data center users, most current Emerson customers of some sort, plus various partners.
The audiences uniformly supported the fundamental thesis around DCIM – there exists a strong underlying demand for integrated DCIM products, with a strong proximal emphasis on optimizing power and cooling to save opex and avoid the major disruption and capex of new data center capacity. Additionally, the composition of the audiences supported our contention that these tools would have multiple stakeholders in the enterprise. As expected, the groups were heavy with core Infrastructure & Operations types – the people who have to plan, provision, and operate the data center infrastructure to deliver the services needed for their company’s operations. What was heartening was the strong minority presence of facilities people, ranging from 10% to 30% of the attendees, along with a sprinkling of corporate finance and real-estate executives. Informal conversations with a number of these people gave us consistent input that they understood the need, and in some cases had been formally tasked by their executives, to work more closely with the I&O group. All expressed the desire for an integrated tool to help with this.
Today’s announcements at the Open Compute Project (OCP) 2013 Summit could be considered tangible markers of the OCP crossing the line into real relevance as an important influence on emerging hyper-scale and cloud computing, with a potential bleed-through into the world of enterprise data centers and computing. This is obviously a subjective viewpoint – there is no objective standard for relevance, only post facto recognition that something was important or not. But in this case I’m going to stick my neck out and predict that OCP will have some influence and will be a sticky presence in the industry for many years.
Even if its specs (which look generally quite good) are not picked up verbatim, they will act as an influence on major vendors who will, much like the auto industry in the 1970s, get the message that there is a market for economical “low-frills” alternatives.
Major OCP Initiatives
To date, OCP has announced a number of useful hardware specifications, including:
With a couple of months' perspective, I’m pretty convinced that Intel has made a potentially disruptive entry in the market for programmable computational accelerators, often referred to as GPGPUs (General Purpose Graphics Processing Units) in deference to the fact that the market leaders, NVIDIA and AMD, have dominated the segment with parallel computational units derived from high-end GPUs. In late 2012, Intel, referring to the architecture as MIC (Many Integrated Core), introduced the Xeon Phi product, the long-awaited productization of the development project that was known internally (and to the rest of the world as well) as Knights Ferry – a MIC coprocessor with up to 61 modified x86 cores implemented in Intel's latest 22 nm process.
When I returned to Forrester in mid-2010, one of the first blog posts I wrote was about Oracle’s new roadmap for SPARC and Solaris, catalyzed by numerous client inquiries and other interactions in which Oracle’s real level of commitment to future SPARC hardware was the topic of discussion. In most cases I could describe the customer mood as skeptical at best, and panicked and committed to migration off of SPARC and Solaris at worst. Nonetheless, after some time spent with Oracle management, I expressed my improved confidence in the new hardware team that Oracle had assembled and their new roadmap for SPARC processors after the successive debacles of the UltraSPARC-5 and Rock processors under Sun’s stewardship.
Two and a half years later, it is obvious that Oracle has delivered on its commitments regarding SPARC and is continuing its investments in SPARC CPU and system design as well as its Solaris OS technology. The latest evolution of SPARC technology, the SPARC T5 and the soon-to-be-announced M5, continue the evolution and design practices set forth by Oracle’s Rick Hetherington in 2010 — incremental evolution of a common set of SPARC cores, differentiation by variation of core count, threads and cache as opposed to fundamental architecture, and a reliable multi-year performance progression of cores and system scalability.
Forrester’s Asia Pacific (AP) team has just published its 2013 predictions report, focused on regional IT spending, technology adoption, and vendor dynamics. The predictions that will most affect the Chinese market:
Transformation imperatives will drive IT spending growth. China’s top government priorities for 2013 are ensuring economic stability during the ongoing political transition and counteracting the negative external market factors that have led to an economic slowdown. For 2013, Forrester expects the government to continue economic reform and invest in specific areas: infrastructure, education, and new technologies. We expect these initiatives to positively affect IT-related spending, which will grow approximately 11% in 2013 in local currency versus 9% in 2012.
Many device manufacturers will struggle despite surging demand. We expect that sub-$100 and even sub-$50 Android devices will hit the market. With rapid standardization and commoditization of smartphones in AP, tier-two device manufacturers will further struggle to differentiate their products and maintain their margins. White-label and original design manufacturers (ODMs) from mainland China are leveraging the opportunity to build their own brands and sales channels to gain share from tier-two device makers from Japan and Taiwan. Forrester believes that 2013 will be a tough year for vendors like Acer and Asus in the smart mobile device and tablet space.
It's a little-known fact that both Southwest Airlines and the (soon-to-be) famous Yee-Haw Pickle Company began life on a cocktail napkin. What better medium to illustrate why Windows Intune should be on your radar as an I&O leader or professional?
In the late 1990s, no one could have imagined what PC management would eventually entail in an always-on, always-connected world. Those of you who know me know that I've either managed or marketed three different client management product lines in my career. All of the vendors in the space, including Microsoft, have spent the last 15 years trying to make it easier to manage Windows PCs on an enterprise scale, for utility, security, business continuity, and performance.
A mess? I'd say! I spoke with a mid-sized oil company a few weeks ago about their client management tools, processes, and maturity. They use only a fraction of System Center Configuration Manager (SCCM) 2007's capabilities. The weekly patch cycle and packaging alone are a full-time job for one person, and endpoint protection and remediation are still wishlist items. Half of their assets sit at the end of satellite links 50 miles from the nearest towns, and they have a fleet of trucks manned by a small army of techs dedicated to just fixing PC problems across five big western US states. Expensive? You bet. Ineffective? Absolutely.
While attempting to clear my desk before the Christmas break I stumbled upon a bright-pink USB memory stick that contained the collected presentations from the 2009 itSMF UK annual conference. Having satisfied my curiosity as to the size of the memory stick (I’d forgotten that USB sticks were ever that small), I then wondered:
What were the IT service management (ITSM) hot topics in November 2009?
Which industry luminaries were presenting on them?
How many presentations would still make it to the 2013 itSMF UK conference?