Of late I’ve been considering a more mundane version of the ultimate question — what is the ideal metric to use when evaluating business technology strategies? The challenge is that we already have a diverse set of investment metrics from which to choose. There’s Return On Investment (ROI), Net Present Value (NPV), Internal Rate Of Return (IRR), and Payback Period, to name a few of the most common. Yet I can’t help feeling they all lack a little something — the ability to connect the project with the desired business outcome, which for a strategy is the attainment of the goal.
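To make the common investment metrics concrete, here is a minimal sketch of how ROI, NPV, and Payback Period are typically computed. The cash-flow figures are invented for illustration and are not from any client example.

```python
# Illustrative sketch of the common investment metrics; the
# project cash flows below are hypothetical.

def roi(gain, cost):
    """Return On Investment as a fraction of cost."""
    return (gain - cost) / cost

def npv(rate, cashflows):
    """Net Present Value; cashflows[0] is the upfront (usually negative) outlay."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def payback_period(cashflows):
    """Periods until cumulative cash flow turns non-negative (None if never)."""
    total = 0.0
    for t, cf in enumerate(cashflows):
        total += cf
        if total >= 0:
            return t
    return None

flows = [-100_000, 40_000, 40_000, 40_000, 40_000]  # hypothetical project
print(roi(sum(flows[1:]), -flows[0]))  # 0.6 (60% return on the outlay)
print(round(npv(0.10, flows), 2))      # positive, so worth doing at a 10% discount rate
print(payback_period(flows))           # 3 (breaks even in period 3)
```

Note that none of these numbers says anything about whether the strategy's goal was attained — which is exactly the gap described above.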
Recently I’ve been working with clients to apply a different measure — the T2BI ratio:
“Is the IT industry unique in its obsession with its own possible future demise? The sky is always falling in. #ITRapture”
IMO the average IT organization does appear to be somewhat Chicken Little-like, and my response of “I think it is because IT is obsessed with itself :)” started me off …
While we have not necessarily fallen in love with our own reflection, it is difficult to argue that we are not overly obsessed with what WE are doing rather than what the business is doing – as per yesterday’s blog “Why Is IT Operations Like Pizza Delivery?”
Consider this exaggerated story:
You meet two people at a soiree (that’s a posh cocktail party BTW). The first introduces themselves: “Hi, I’m Ian. I work for LANDesk. I do all sorts of product marketing nonsense.” The second does the same. Well, I say the same; there’s a big difference – “Hi, I’m Stephen. I work in IT.”
Whilst with a software vendor yesterday, I reused a favorite IT service delivery analogy that was inspired by (or was it borrowed from?) James Finister at least two years ago. At the Forrester I&O Forum in Las Vegas this Thursday, I will use it again when Glenn O'Donnell and I present on "A Mindset Change Is Needed: Support The People, Not The Technology."
To me the analogy is indicative of the fact that, despite all of the investments organizations have made in increasing IT service management maturity and IT service delivery, we still seem to measure our relative success in terms of IT rather than business outcomes.
So consider this somewhat frivolous analogy: comparing IT operations to pizza delivery operations.
The pizza company has a palatial store and has invested in the best catering equipment (read state-of-the-art data center). It employs highly-qualified chefs who take pride in creating culinary masterpieces. When the pizza leaves the store it scores ten out of ten on the internal measurement system. This is, however, measuring at the point of creation rather than the point of consumption.
Now consider the customer view of the pizza when it arrives: it is late, cold, has too much cheese, the wrong toppings (even toppings that are unrecognizable to the customer), and it costs more than the customer expected (and wanted) to pay.
How much of this example can be applied to IT delivery?
My colleague and friend Mike Gualtieri wrote a really interesting blog the other day titled "Agile Software Is A Cop-Out; Here's What's Next." While I am not going to discuss the great conclusions and "next practices" of software (SW) development Mike suggests in that blog, I do want to focus on the assumption he makes about using working SW as a measurement of Agile.
I am currently researching that area and investigating how organizations actually measure the value of Agile SW development (business and IT value). And I am finding that, while organizations aim to deliver working SW, they also define value metrics to measure progress and much more:
Cycle time (e.g., from concept to production);
Business value (from number of times a feature is used by clients to impact on sales revenue, etc.);
Productivity metrics (such as burndown velocity, number of features deployed versus estimated); and last but not least
Quality metrics (such as defects per sprint/release, etc.).
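The value metrics listed above can be sketched in code. This is a hypothetical illustration only — the field names, sample stories, and dates are invented, not drawn from the research:

```python
# Hypothetical sketch of the Agile value metrics listed above;
# the sample data is invented for illustration.
from datetime import date

stories = [
    {"concept": date(2012, 1, 2),  "production": date(2012, 1, 20), "defects": 1},
    {"concept": date(2012, 1, 9),  "production": date(2012, 2, 3),  "defects": 0},
    {"concept": date(2012, 1, 16), "production": date(2012, 2, 6),  "defects": 2},
]

# Cycle time: elapsed days from concept to production, averaged per story.
cycle_times = [(s["production"] - s["concept"]).days for s in stories]
avg_cycle_time = sum(cycle_times) / len(cycle_times)

# Quality: defects per release (here, per batch of delivered stories).
defects_per_release = sum(s["defects"] for s in stories)

# Productivity: features deployed versus estimated for the sprint.
estimated, deployed = 4, len(stories)
deployment_ratio = deployed / estimated

print(cycle_times)           # [18, 25, 21] days
print(defects_per_release)   # 3
print(deployment_ratio)      # 0.75
```

The point is that each of these is a trend to track sprint over sprint, not a one-off number — and the business-value metrics (feature usage, sales impact) would come from outside the development tooling entirely.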
Is it a blog? Is it a musing (that’s not “amusing”)? Or is it just a cheap attempt to pick the brains of others smarter than myself? Does it matter? Can I do anything other than ask questions?
My point (or at least my line of thinking while I plan a couple of ITIL-related Forrester reports) is that we spend a lot of time talking about what to do (or more likely what not to do) when "adopting ITIL," but how often do we talk about whether we have been successful in applying the concepts of ITIL, the processes, and the enabling technology for business benefit?
Maybe it is because we quote the mantra that “ITIL is a journey” and we can’t see a point in time where we can stop and reflect on our achievements (or lack thereof)? Maybe we segue too quickly from the ITIL-technology adoption project into the firefighting realities of real-world IT service management? Whatever the potential barriers to taking stock, where is the statement that describes what we have achieved and our relative level of success?
Looking at this logically (fatal mistake, I know), and assuming (potentially a big assumption) that there was a business case for the “ITIL adoption project,” where is the post-implementation review (PIR)? Where can we look to see the realization of business benefits (I deliberately didn’t say “IT benefits” BTW)? I’m trying not to be cynical but, even if we forget the formalities of a PIR, how many I&O organizations can quantify the benefits achieved through ITIL adoption? More importantly, what has been achieved relative to the potential for achievement? How far did we get toward our desired future state?
Earlier this week, I attended the Hornbill User Group (or "HUG" as it is affectionately known) to listen to Malcolm Fry, IT service management (ITSM) legend and author of "ITIL Lite," talk about ITSM metrics in the context of ITIL 2011.
There is no doubt that metrics have long been a topic of interest, concern, and debate for ITSM practitioners (I wrote a piece a few years ago that is still the most popular item on my old blog site by a huge margin), and IMO I&O organizations struggle in this area for a number of reasons:
I&O is not entirely sure what it is doing (in terms of metrics) and why.
We often measure what is easy to measure rather than what we should measure.
I&O can easily fall into the trap of focusing on IT metrics rather than business-focused metrics.
I&O organizations often have too many metrics as opposed to a select few (often led by the abundance of reports and metrics provided by the ITSM tool or tools of choice).
There is no structure or context between metrics (these can be stuck in silos rather than being “end-to-end”).
Metrics are commonly viewed as an output in their own right rather than as an input into business conversations about services or improvement activity.
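The “silos rather than end-to-end” problem has a simple arithmetic illustration: when components are serially dependent, an end-to-end service metric is the product of the component metrics, so silo numbers that each look healthy can still compose into a weak business-facing number. The availability figures below are assumed for the sketch:

```python
# Illustrative sketch with assumed figures: siloed component metrics
# that each look fine can still compose into a weak end-to-end
# service metric, assuming the components are serially dependent.
component_availability = {
    "network": 0.999,
    "servers": 0.998,
    "database": 0.997,
    "application": 0.995,
}

end_to_end = 1.0
for availability in component_availability.values():
    end_to_end *= availability

# Every silo reports >= 99.5%, but the service the business sees is ~98.9%.
print(f"End-to-end availability: {end_to_end:.2%}")
```

This is exactly why a stack of green siloed dashboards can coexist with an unhappy business customer.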
We live in a time when customers expect services to be delivered non-stop, without interruption, 24x7x365. Need proof? Just look at the outrage this week stemming from RIM's 3+ day BlackBerry service outage. Yes, this was an unusually long and widespread disruption, but it seems like every week there is a new example of a service disruption whipping social networks and blogs into a frenzy, whether it's Bank of America, Target, or Amazon. I'm not criticizing those who use social media outlets to voice their dissatisfaction over service levels (I've even taken part in it, complaining on Twitter about Netflix streaming being down on a Friday night when I wanted to stream a movie), but pointing out that now more than ever infrastructure and operations professionals need to rethink how they deliver services to both their internal and external customers.
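The “24x7x365” expectation can be made concrete by translating an availability target into a yearly downtime budget — a quick sketch (the targets chosen are just common examples):

```python
# Sketch: translating an availability target into a yearly downtime budget.
HOURS_PER_YEAR = 24 * 365  # 8,760 (ignoring leap years)

def downtime_hours(target):
    """Hours of downtime per year allowed by an availability target."""
    return (1 - target) * HOURS_PER_YEAR

for target in (0.99, 0.999, 0.9999):
    print(f"{target:.2%} availability -> {downtime_hours(target):.2f} h/year")
```

By this math, a three-day outage (~72 hours) consumes most of even a modest 99% yearly budget (87.6 hours), and blows through a 99.9% budget (8.76 hours) many times over.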
What are the right metrics to track the success of a CRM initiative? I just updated my report on this topic for 2011. The report illustrates over 70 different metrics and describes how to link them to business strategies and tactics.
What’s new in the report? My clients are incorporating new measures into their portfolio. In addition to traditional operational metrics, they are adding externally focused customer perception metrics. In particular, I see a rise in adoption of voice of the customer (VoC) metrics and “social metrics”:
One thing that I’ve found in common across infrastructure and operations groups of all shapes and sizes is that they are continually searching for the ideal set of key performance indicators: a set of metrics that perfectly measures their infrastructure and demonstrates the excellence of their operations, but is still simple and cheap to collect. At least once a week I speak with a client searching for the holy grail of metrics, hopeful that I hold that coveted knowledge. They’re inevitably disappointed to find out that I don’t know what the best set of metrics is, and that I truly think it doesn’t exist! Sorry if I’m bursting your bubble, but there is no essential set of metrics for all infrastructure and operations organizations. What makes sense for one organization to measure may be completely useless for another. What may be very simple to collect at one company is nearly impossible at another.
While I don’t believe in the myth of a single set of perfect metrics for all organizations, I do think it is valuable to learn what other organizations are measuring so you can compare their metrics to your own (and maybe steal some of theirs). That is why I am gathering a list of metrics from infrastructure and operations groups globally to form a database of metrics. Once we have a good number of metrics on this list, I will work to consolidate them down to the most commonly cited ones and collect a benchmark on them. We’re calling this project “Forrester's Consensus Metrics For Infrastructure & Operations,” and I really hope you’ll consider contributing to it, because we can’t do this without your input.