Where IT Metrics Go Wrong: 13 Issues To Avoid

In a recent Forrester report — Develop Your Service Management And Automation Balanced Scorecard — I highlight some of the common mistakes made when designing and implementing infrastructure & operations (I&O) metrics. This metric “inappropriateness” is a common issue, but there are still many I&O organizations that don’t realize that they potentially have the wrong set of metrics. So, consider the following:

  1. When it comes to metrics, I&O is not always entirely sure what it’s doing or why. We often create metrics because we feel that we “should” rather than because we have definite reasons to capture and analyze data and consider performance against targets. Ask yourself: “Why do we want or need metrics?” Do your metrics deliver against this? You won’t be alone if they don’t.
  2. Metrics are commonly viewed as an output in their own right. Far too many I&O organizations see metrics as the final output rather than as an input into something else, such as business conversations about services or improvement activity. The metrics become a “corporate game” where all that matters is that you’ve met or exceeded your targets. Metrics reporting should see the bigger picture and drive improvement.
  3. I&O organizations have too many metrics. IT service management (ITSM) tools provide large numbers of reports and metrics, which encourages well-meaning I&O staff to go for quantity over quality. Just because we can measure something doesn’t mean that we should — and even if we should measure it, we don’t always need to report on it. The metrics we choose to disseminate should directly contribute to understanding whether we’ve achieved desired performance and outcomes.
  4. We measure things because they’re easy to measure, not because they’re important. I&O organizations shouldn’t spend more effort collecting and reporting metrics than the value those metrics return, but that still isn’t an excuse to just measure the easy stuff. The availability of system reports and metrics again comes into play, with little or no effort needed to suck performance-related information out of the ITSM tool or tools. Consider why you report each and every metric in your current reporting pack and assess the value they provide versus the effort required to report them. Not only will you find metrics that you report on just because you can (“they were there already”), you will also find metrics that are “expensive” to provide but provide little or no value (“they seemed like a good idea at the time”).
  5. I&O can easily fall into the trap of focusing on IT rather than business metrics. There is often a disconnect between IT activity and performance and business objectives, demands, and drivers. So consider your existing metrics from a business perspective: What does the fact that there are 4,000 incidents per month actually mean? From an ITSM perspective, it might mean that we’ve been busy or that it’s 10% lower (or higher) than the previous month. But is the business actually interested in incident volumes? If it is, does it interpret that as “you make a lot of mistakes in IT” or as “you’ve prevented the business working 4,000 times this month”?
  6. There is no structure for or context between metrics. Metrics can be stuck in silos rather than being end-to-end. There is also a lack of correlation and context between different metrics. A good example is the excitement over the fact that the cost-per-incident has dropped — but closer inspection of other metrics shows that the cost has gone down not because we’ve become more efficient but because we had more incidents during the reporting period than normal.
  7. We take a one-dimensional view of metrics. I&O organizations can limit themselves to looking at performance in monthly silos — they don’t look at the month-on-month, quarter-on-quarter, or even year-on-year trends. So while the I&O organization might hit its targets, there might be a failure just around the corner as performance degrades over time.
  8. The metric hierarchy isn’t clear. Many don’t appreciate that: 1) not all metrics are born equal — there are differences between metrics, key performance indicators (KPIs), critical success factors (CSFs), and strategic objectives; and 2) metrics need to differentiate between a number of factors, such as hierarchy level, recipients, and their ultimate use. Different people will have different uses for different metrics, so one-size reporting definitely doesn’t fit all. As with all reporting, tell people what they need to know, when they need to know it, and in a format that’s easy for them to consume. If your metrics don’t support decision-making, then you’re suffering from one or more of these listed issues.
  9. We place too much emphasis on I&O benchmarks. The ability to compare yourself with other I&O organizations can help show how fantastic your organization is or justify spending on improvements. However, benchmark data is often misleading; one might be comparing apples with oranges. Two great examples are cost-per-incident and incidents handled per-service-desk-agent per-hour. In cost-per-incident, how do you know which costs have been included and which haven’t? The volume, types, and occurrence patterns of incidents will also skew the statistics, and the same incident-profile differences distort incidents handled per-service-desk-agent per-hour.
  10. Metric reporting is poorly designed and delivered. I&O professionals can spend more time collecting metric data than understanding the best way for it to be delivered and consumed — it’s similar to communication in general, where a message sent doesn’t always equate to a message received and understood. You can also make metrics and reporting more interesting.
  11. We overlook the behavioral aspects of metrics. At a higher level, we aim for, and then reward, failure — we set targets such as 99.9% availability rather than saying, “We will aim for 100% availability, and we will never go below 99.9%.” At a team or individual level, metrics can drive the wrong behaviors, with particular metrics making individuals act for personal rather than corporate success. Metrics can also conflict and pull I&O staff in different directions. A good example is the tension between two common service desk metrics — average call-handling time and first-contact resolution. Scoring highly against one metric will adversely affect the other, so for I&O professionals to use one in isolation for team or individual performance measurement is potentially dangerous to operations and IT service delivery.
  12. I&O can become blinkered by the existing metrics. When your organization consistently makes its targets, the standard response is to increase the number or scope of targets. But this is not necessarily the right approach. Instead, I&O execs need to consider whether the metrics are still worthwhile — whether they still add value. Sometimes, the right answer is to abolish a particular metric and replace it with one that better reflects your current business needs and any improvement or degradation in performance that you’ve experienced.
  13. Metrics and performance can be easily misunderstood. A good example is incident volumes — a reduction in incident volumes is a good thing, right? Not necessarily. Consider this: A service desk providing a poor level of service might see incident volumes drop as internal customers decide that calling or emailing is futile and start seeking resolution elsewhere or struggling on with workarounds. Conversely, a service desk doing a fantastic job at resolving incidents might see an increase in volumes as more users reach out for help. Thus, I&O leaders need to view customer satisfaction scores in parallel with incident volume metrics to accurately gauge the effectiveness of a service desk.
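The context trap in point 6 is easy to show with a few lines of arithmetic. This is a hypothetical sketch with made-up figures, not data from the report:

```python
# Hypothetical figures: cost-per-incident "improves" even though the
# business spent more, purely because incident volume grew.
last_month = {"total_cost": 100_000, "incidents": 4_000}
this_month = {"total_cost": 110_000, "incidents": 5_000}

cpi_last = last_month["total_cost"] / last_month["incidents"]
cpi_this = this_month["total_cost"] / this_month["incidents"]

print(f"Cost per incident: {cpi_last:.2f} -> {cpi_this:.2f}")  # 25.00 -> 22.00
print(f"Total cost: {last_month['total_cost']} -> {this_month['total_cost']}")
# The per-unit metric dropped 12% while total support cost rose 10% —
# read in isolation, it tells exactly the wrong story.
```

Reported alone, cost-per-incident looks like an efficiency win; reported alongside total cost and incident volume, it flags a problem.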
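The one-dimensional view in point 7 can also be sketched. Here, six months of invented availability figures all beat the target, yet a simple month-on-month check exposes the degradation that monthly silos hide:

```python
# Hypothetical monthly availability figures, all above a 99.0% target,
# yet the trend shows steady degradation toward a future breach.
target = 99.0
availability = [99.8, 99.7, 99.5, 99.4, 99.2, 99.1]  # last six months

# Every month in isolation looks fine.
hits_target = all(month >= target for month in availability)

# Trend check: difference between each pair of consecutive months.
deltas = [later - earlier for earlier, later in zip(availability, availability[1:])]
degrading = all(d < 0 for d in deltas)

print(f"Every month met its target: {hits_target}")            # True
print(f"Performance degrading month-on-month: {degrading}")    # True
```

A monthly report would show six green ticks; the trend view shows a service heading for a red one.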
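Finally, the pairing argument in point 13 — that incident volumes only make sense alongside customer satisfaction — can be expressed as a toy rule of thumb. The `interpret` function and its thresholds are illustrative inventions, not a Forrester model:

```python
def interpret(volume_change_pct: float, csat: float) -> str:
    """Toy rule of thumb: falling incident volumes are only good news
    when satisfaction is healthy; paired with low satisfaction, they
    suggest users have given up on the service desk.

    volume_change_pct: month-on-month change in incident volume (%).
    csat: average satisfaction score on a hypothetical 1-5 scale.
    """
    if volume_change_pct < 0 and csat < 3.0:
        return "warning: users may be bypassing the desk"
    if volume_change_pct < 0:
        return "likely genuine improvement"
    return "volumes rising: check staffing and problem management"

print(interpret(-15, 2.4))  # warning: users may be bypassing the desk
print(interpret(-15, 4.2))  # likely genuine improvement
```

The same 15% drop in volume yields opposite conclusions depending on the companion metric — which is exactly why neither should be reported alone.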

Finally, consider this: In the wise words of Ivor McFarlane of IBM: “If we use the wrong metrics, do we not get better at the wrong things?”

Does any of this sound far too familiar? What did I miss? As always, your comments and opinions are appreciated.

Related blog: http://blogs.forrester.com/stephen_mann/11-10-14-it_service_management_metrics_advice_and_10_top_tips


________________________________________________________

If you enjoyed this, please read my latest blog: http://blogs.forrester.com/stephen_mann  

Comments

#14 - Metrics are used negatively to beat people up

When it comes to carrots and sticks, I have seen organizations use metrics for the stick. If this metric isn't maintained, it will have negative consequences. I think this affects attitude more than behavior, which results in a negative culture of submission rather than a focus on improvement.

Hi Stephen thanks for this

Hi Stephen, thanks for this elaborate list of don’ts and do’s.

I am especially grabbed by your point eight (The metric hierarchy isn’t clear). I do agree that organizations should build a metric tree. What you mention in a way, but not so clearly, is that the metric tree should be related to the governance framework and thus the business architecture.

In other words, companies first need to think about what they are required to do: what are the objectives of the business, and how does IT contribute to that? For example, I currently work for a large bank and a large telco, and both have the objective to “respond with agility to ever-changing business needs”. This obviously requires the existence of two value chains: 1) from Strategy to (service) Portfolio and 2) from Demand to Production. These value chains are built upon competencies, which I see as the CSFs.

I am building a metric tree to measure the effectiveness and efficiency of those value chains and competencies, using a few KPIs (no more than two) per competency. From there, I have designed the metrics I need to measure those KPIs. Thus I have built a tree that supports the measurement of value chains and competencies, which directly builds a bridge to business outcomes and thus value.

Another thing you didn’t mention specifically is that many times IT organizations measure because they want “proof”: the measures are not designed to improve but to “prove that we didn’t do anything wrong”. IMHO a very negative approach to building the metric tree. Maybe this is don’t number 14.

Last, I would like to state that we should report through real-time dashboards rather than static reports. You didn’t suggest paper reports, I know, but I want to mention it specifically: reports are a bit out of date in my opinion. Nobody reads them, and it isn’t green either. It’s much better to have a real-time dashboard on your iPad or Samsung Galaxy showing that you haven’t responded fast enough to a changing business need, with the possibility to go to the heart of that issue immediately (because of the properly designed metric tree) and deal with it almost immediately, contacting the accountable people via SMS, text, phone, or email, using the same device, acting as fast as possible to recover customer satisfaction.

It all depends on your definition of "up"

An old blog of mine shared my own experiences of managing a data center on false metrics. It’s here, with the names changed to protect the innocent and the guilty:

http://www.servicemanagement101.com/index.php/easyblog/entry/it-all-depe...