As a follow-up to his presentation at the 2013 itSMF Norway conference, Stuart Rance of HP has kindly contributed some practical advice for those struggling with availability.
Many IT organizations define availability for IT services using a percentage (e.g. 99.999% or “five 9s”) without any clear understanding of what the number means, or how it could be measured. This often leads to dissatisfaction, with IT reporting that they have met their goals even though the customer is not satisfied.
A simple calculation of availability is based on agreed service time (AST) and downtime (DT):

Availability (%) = ((AST − DT) / AST) × 100

If AST is 100 hours and downtime is 2 hours, then availability would be ((100 − 2) / 100) × 100 = 98%.
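As a minimal sketch of the same calculation (the function name and the five-9s aside are my own illustration, not from Stuart's post):

```python
def availability_percent(ast_hours, downtime_hours):
    """Availability as a percentage of agreed service time (AST)."""
    return (ast_hours - downtime_hours) / ast_hours * 100

print(availability_percent(100, 2))  # 98.0

# For context, "five 9s" (99.999%) across an 8,760-hour year leaves room
# for only 8760 * (1 - 0.99999) = 0.0876 hours of downtime.
print(round(8760 * (1 - 0.99999) * 60, 2))  # ~5.26 minutes per year
```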
Customers are interested in their ability to use IT Services to support business processes. Availability reports will only be meaningful if they describe things the customer cares about, for example the ability to send and receive emails, or to withdraw cash from ATMs.
Number and duration of outages
A service that should be available for 100 hours and has 98% availability has 2 hours of downtime. This could be a single 2-hour incident or many shorter incidents. The relative impact of a single long incident versus many shorter ones differs between business processes. For example, a billing run that has to be restarted and takes 2 days to complete will be seriously impacted by each outage, but the outage duration may not be important. A web-based shopping site may not be impacted by a 2-minute outage, but after 2 hours the loss of customers could be significant. Table 1 shows some examples of how an SLA might be documented to show this varying impact.
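To make the point concrete, here is a short Python sketch (the incident durations are hypothetical, not taken from the article) showing how two very different outage profiles produce the same 98% figure:

```python
# Two hypothetical incident logs, both adding up to 2 hours of downtime
# against 100 hours of agreed service time, i.e. 98% availability.
single_outage = [120]       # one 2-hour incident (durations in minutes)
many_outages = [10] * 12    # twelve 10-minute incidents

def summarize(outage_minutes, ast_hours=100):
    total_downtime_hours = sum(outage_minutes) / 60
    return {
        "availability_%": (ast_hours - total_downtime_hours) / ast_hours * 100,
        "outage_count": len(outage_minutes),
        "longest_outage_min": max(outage_minutes),
    }

print(summarize(single_outage))  # 98.0% available: 1 outage, longest 120 min
print(summarize(many_outages))   # 98.0% available: 12 outages, longest 10 min
```

The customer-facing SLA would then set targets on outage count and duration, not just the headline percentage.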
Cross-channel attribution. For customer insights and marketing practitioners, attribution is a white-hot measurement topic. It’s viewed as the best way to measure the effectiveness of marketing and media campaigns; a way for firms to assess… truly assess… the value of the customer journey. For the past 18 months, I have been living and breathing this topic, and today I am happy… no, I’m elated… to announce the official publication of the Cross-Channel Attribution Playbook.
What’s a playbook, you ask? Well, a playbook is a framework to help organizations develop expertise around a specific business topic. The Cross-Channel Attribution Playbook helps marketers and customer insights professionals take strategic steps in building an attribution strategy within their organization. It includes 12 chapters, including an executive overview, covering different aspects of developing and managing a cross-channel attribution measurement framework. The four “chapters” specifically help organizations:
I guess I should have expected this (but alas I didn’t): the Capita joint venture with the UK government over ITIL, the IT service management best practice framework, wasn’t big news. If anything, the story made ripples rather than waves, and from a UK government finances perspective rather than an IT service management (ITSM) best practice one.
It’s interesting to consider why, particularly when enterprises are so adamant about requiring ITIL alignment in ITSM tool selection RFPs. But first, a few links:
In a previous blog post, SysAid – a provider of IT management solutions – was kind enough to share some metrics/performance snapshots collected from its customers. As a quick recap, SysAid captures service desk benchmarking information through its customers’ use of its software (on an opt-in basis of course) for the benefit of all.
At some point we should sit down and compare the SysAid stats to those provided by HDI – a great independent source of service desk benchmarks – that’s a challenge to you, Roy Atkinson … BTW, I hope the HDI 2013 event is going well in Las Vegas this week (the Twitter hashtag is #hdiconf13 for people, like me, who aren’t there). Anyway, back to those SysAid stats …
A selection of community-based service desk stats …
There are two points to note here: not all SysAid customers participate (according to its website, SysAid now has over 100,000 customer organizations); and I have cherry-picked a handful of the available stats from March 2013. There is also one caveat from me – these stats don’t differentiate by organization size, so we would need to drill down further to account for any small or very large organization bias.
Percentage of incident tickets originating from the End User Portal: average 60.31%
You can guess where I stand on this; otherwise I wouldn’t be writing this blog and others like it …
Yesterday I was a guest speaker in an Axios webinar, called “Using ITSM to Increase Business User Satisfaction and the Perception of IT,” during which we ran four audience polls. I thought it would be great to share the poll results and my thoughts.
The webinar story arc …
I set the scene using many of my favorite graphics, including the following, which shows the gulf between the business’s and IT’s own opinions of how well the average internal IT organization is doing …
… before starting to look at how what we do and measure either improves or degrades the customer experience – including the fact that we often seem to be too focused on what we do in IT rather than what we achieve through what we do in IT (and IT service management (ITSM)). I also included a section on common metrics issues, which I’ve previously blogged about here and here, and the customer experience work of my Forrester colleagues and its applicability to internal IT.
The poll results and my thoughts …
1. Do you consider the people who consume your IT services to be:
Are you trying to take your current customer experience measurement to the next level?
Many of the customer experience professionals we talk to regularly are working on improving their customer experience measurement. You are probably one of them. You might be working on picking the right metrics, on connecting customer experience to business outcomes or to operational variables, on using data to improve the customer experience, or on getting traction for CX measurement in your organization. To conquer any or all of these challenges, you need a solid and well-founded customer experience measurement framework.
A Forrester client inquiry call last night and the creation of some slides for a webinar with Axios really got me thinking about how we measure our success in IT. It just seemed so easy to take the IT version of success (and the associated measures) and create a snide customer retort. It’s a little tongue-in-cheek, but please take a read of one of my Axios slides:
I'm sure there are many more to play with.
If you read my blogs on a regular basis, you will have seen:
It is that dreaded time of year again when we have to report via the performance management system (PMS) on our individual performance and the value we bring to the organization. I say dreaded because we all know that in reality the goals and objectives were set some time ago, maybe a year ago, and a lot has happened since then. The person you report to may have changed, you were redirected to other tasks, and so on. Everything seemed possible at the time of the objective setting, but now the reality hits that you may have been far too optimistic about your own capability. The self-assessment is difficult because you are not sure whether your manager has the same view as you. You believe you met the objective, but does their expectation match your actual delivery? If good performance translates into more money, the pressure and stress build.
So whilst I was preparing for my Orlando Business Architecture Forum presentation, I started to think about how business architecture teams measure and manage their performance. One of my next reports for Forrester’s business architecture playbook addresses BA performance. It was also a hot topic for the EA Council members in Orlando. I had a number of 1-on-1s with clients who specifically asked about BA metrics and performance – in particular, “What do other business architecture teams do?”
I started listing the questions that, when answered by clients, would lead to a very valuable report for all BA leaders:
Do you measure your BA team’s performance? Clients often tell me that they have fairly mature BA practices. However, very few can articulate how they measure their performance, and they often comment that the business asks them to demonstrate how BA adds value. So, it would be useful to understand whether BA leaders measure their team’s performance and why they do or don’t.
I recently finished reading Moneyball, the Michael Lewis bestseller and slightly above-average Hollywood movie. It struck me how great baseball minds could be so off in their focus on the right metrics to win baseball games. By now you know the story: paying too much for high batting averages, with insufficient focus where it counts, on metrics that correlate with scoring runs, like on-base percentage. Not nearly as dramatic, but business is having its own “Moneyball” experience, with far too much focus on traditional metrics like productivity and quality and not enough on customer experience and, most importantly, agility.
Agility is the ability to execute change without sacrificing customer experience, quality, or productivity. It is “the” struggle for mature enterprises and what makes them most vulnerable to digital disruption. Enterprises routinely cite the incredible length of time it takes to get almost any change made. I’ve worked at large companies, and it’s just assumed that things move slowly, bureaucratically, and inefficiently. But why do so many just accept this? For one thing, poor agility undermines the value of other collected BPM metrics. Strong customer experience metrics are useless if you can’t respond to them in a timely manner, and so is enhanced productivity if it only results in producing out-of-date products or services faster.
In a recent Forrester report, Develop Your Service Management And Automation Balanced Scorecard, I highlight some of the common mistakes made when designing and implementing infrastructure & operations (I&O) metrics. This metric “inappropriateness” is a common issue, but many I&O organizations still don’t realize that they potentially have the wrong set of metrics. So, consider the following:
When it comes to metrics, I&O is not always entirely sure what it’s doing or why. We often create metrics because we feel that we “should” rather than because we have definite reasons to capture and analyze data and consider performance against targets. Ask yourself: “Why do we want or need metrics?” Do your metrics deliver against this? You won’t be alone if they don’t.
Metrics are commonly viewed as an end in themselves. Far too many I&O organizations see metrics as the final output rather than as an input into something else, such as business conversations about services or improvement activity. The metrics become a “corporate game” in which all that matters is that you’ve met or exceeded your targets. Metrics reporting should serve the bigger picture and drive improvement.