At some level, I see dysfunction in almost every client I work with. This isn't something new. There probably isn't an organization on the planet without some level of dysfunction. Perhaps a degree of dysfunction is acceptable or even desirable. But eventually organizational dysfunction reaches a point where it begins to impede the ability of the enterprise to function. One area where this appears to occur with great frequency is between IT and the rest of the business. In far too many organizations IT is seen as out of alignment with the business, or worse, as an impediment to business units. So why is this?
It's been my opinion for some time now that there is a root cause for almost all the dysfunction in organizations. The cause is metrics. Specifically, the metrics we use to measure employee performance. Sometimes we suffer from the unintended consequences of what appear to be sound metrics.
Take for example a conversation I recently had with a client in marketing with responsibility for e-commerce. He wanted to gain a better understanding of IT because it appeared to him that IT was making bad decisions. On investigation it turned out "IT" had taken the website offline in the middle of the trading day, much to the consternation of the e-commerce team. To understand why IT might do this, you need to understand metrics. It turns out the help desk had received a call about a problem with SAP. To fix it, the database engineer decided the fastest repair would require restarting SAP. Unfortunately the website was tied to SAP, so when SAP went down, so too did the website. Had the help desk and the database engineer not been measured on how long it takes to repair a problem, they might have made a different decision.
"Let's just say I'm not lost when it comes to data . . . but I could be more found . . ." – (eBusiness team member at a top 50 US bank)
Digital teams are surrounded by data and metrics — from KPIs to customer analytics. Yet I often hear from clients who wish they were just a little more comfortable knowing what the data is really saying, or which metrics are most important.
We just published a brand new report, The Mobile Banking Metrics That Matter, which outlines how mobile strategists at banks can put the right metrics in place and work with their analytics teams to get data outputs that guide them toward smart business decisions.
Writing this report got me thinking about which books, blogs, and articles I’ve found most useful when it comes to really getting data and metrics. Here are five I think might help you too:
The Tiger That Isn’t. Probably my personal favorite book about stats and measurement. Written for a mainstream audience, the book works as a guide to thinking through what a given stat or data point really means — and when to trust or doubt such data. It’s also a great read, full of interesting nuggets and statistical oddities (like how the vast majority of people have an above-average number of legs). The book’s thesis is that people who consume data should be skeptical but not cynical about statistics. From there, it helps the reader more easily contemplate and act on the data and metrics they encounter.
Last week, I had the pleasure of attending Forrester's Forum For Marketing Leaders in London and met some members of the Forrester Leadership Board (FLB) for Customer Insights (CI) professionals. I was eager to share my research on attribution measurement and (selfishly) get their point of view on measurement successes and challenges in Europe. Here are a few key takeaways from our CI colleagues across the pond:
Attribution measurement is a growing topic among European firms. When I met with the FLB members, I was delighted to learn that attribution is being widely adopted across their organizations, with the same challenges that we face in America. In fact, it seems that the firms I spoke with have been using attribution for quite a while, and they're really looking to advance their attribution approach in the near future. Overall, they are making significant investments in the right data, resources, and tools to build a more sophisticated measurement approach.
Cross-channel attribution. For customer insights and marketing practitioners, attribution is a white-hot measurement topic. It’s viewed as the best way to measure the effectiveness of marketing and media campaigns; a way for firms to assess…truly assess…the value of the customer journey. For the past 18 months, I have been living and breathing this topic, and today I am happy….no, I’m elated…to announce the official publication of the Cross-Channel Attribution Playbook.
What’s a playbook, you ask? Well, a playbook is a framework to help organizations develop expertise around a specific business topic. The Cross-Channel Attribution Playbook helps marketers and customer insights professionals take strategic steps toward building an attribution strategy within their organization. It includes 12 chapters, including an executive overview, that cover different aspects of developing and managing a cross-channel attribution measurement framework. The four “chapters” specifically help organizations:
Are you trying to take your current customer experience measurement to the next level?
Many of the customer experience professionals we talk to regularly are working on improving their customer experience measurement. You are probably one of them. You might be working on picking the right metrics, on connecting customer experience to business outcomes or to operational variables, on using data to improve the customer experience, or on getting traction for CX measurement in your organization. To conquer any or all of these challenges, you need a solid and well-founded customer experience measurement framework.
It is that dreaded time of year again when we have to report via the performance management system (PMS) on our individual performance and the value we bring to the organization. I say dreaded because we all know that in reality the goals and objectives were set some time ago, perhaps a year, and a lot has happened since. The person you report to may have changed, you were redirected to other tasks, and so on. Everything seemed possible at the time of objective setting, but now reality hits: you may have been far too optimistic about your own capability. The self-assessment is difficult because you are not sure whether your manager shares your view. You believe you met the objective, but does your actual delivery meet their expectation? If good performance translates into more money, the pressure and stress build.
So whilst I was preparing for my Orlando Business Architecture Forum presentation, I started to think about how business architecture teams measure and manage their performance. One of my next reports for Forrester’s business architecture playbook addresses BA performance. It was also a hot topic for the EA Council members in Orlando. I had a number of one-on-ones with clients who specifically asked about BA metrics and performance — in particular, “What do other business architecture teams do?”
I started listing the questions that, when answered by clients, would lead to a very valuable report for all BA leaders:
Do you measure your BA team’s performance? Clients often tell me that they have fairly mature BA practices. However, very few can articulate how they measure their performance, and they often comment that the business asks them to demonstrate how BA adds value. So it would be useful to understand whether BA leaders measure their team’s performance, and why they do or don’t.
I recently finished reading Moneyball, the Michael Lewis bestseller and slightly above-average Hollywood movie. It struck me how the great minds of baseball could be so wrong about which metrics win baseball games. By now you know the story: teams paid too much for high batting averages and focused too little where it counts, on metrics that correlate with scoring runs, like on-base percentage. Not nearly as dramatic, but business is having its own “Moneyball” experience, with far too much focus on traditional metrics like productivity and quality and not enough on customer experience and, most importantly, agility.
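To make the Moneyball point concrete, here is a minimal sketch of the two stats with a hypothetical season line (all numbers invented). Batting average ignores walks and hit-by-pitch; on-base percentage credits them, which is why the two metrics can rank the same hitter very differently:

```python
def batting_average(hits, at_bats):
    """BA = hits / at-bats."""
    return hits / at_bats

def on_base_percentage(hits, walks, hbp, at_bats, sac_flies):
    """OBP also credits walks and hit-by-pitch, which BA ignores."""
    return (hits + walks + hbp) / (at_bats + walks + hbp + sac_flies)

# A patient hitter: a modest average, but lots of walks.
ba = batting_average(hits=140, at_bats=520)
obp = on_base_percentage(hits=140, walks=90, hbp=5, at_bats=520, sac_flies=5)
print(f"BA={ba:.3f}  OBP={obp:.3f}")  # a .269 average but a .379 on-base rate
```

Judged by batting average alone this hitter looks ordinary; judged by on-base percentage, he is an asset — the same gap Lewis describes between traditional scouting metrics and the ones that actually correlate with runs.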
Agility is the ability to execute change without sacrificing customer experience, quality, or productivity. It is “the” struggle for mature enterprises and what makes them most vulnerable to digital disruption. Enterprises routinely cite the incredible length of time it takes to get almost any change made. I’ve worked at large companies, and it’s just assumed that things move slowly, bureaucratically, and inefficiently. But why do so many just accept this? For one thing, poor agility undermines the value of other collected BPM metrics. Strong customer experience metrics are useless if you can’t respond to them in a timely manner, and so is enhanced productivity if it only results in producing out-of-date products or services faster.
Of late I’ve been considering a more mundane version of the ultimate question — what is the ideal metric to use when evaluating business technology strategies? The challenge is that we already have a diverse set of investment metrics from which to choose. There’s Return On Investment (ROI), Net Present Value (NPV), Internal Rate Of Return (IRR), and Payback Period, to name a few of the most common. Yet I can’t help feeling they all lack a little something — the ability to connect the project with the desired business outcome, which for a strategy is the attainment of the goal.
Recently I’ve been working with clients to apply a different measure — the T2BI ratio:
My colleague and friend Mike Gualtieri wrote a really interesting blog the other day titled "Agile Software Is A Cop-Out; Here's What's Next." While I am not going to discuss the great conclusions and "next practices" of software (SW) development Mike suggests in that blog, I do want to focus on the assumption he makes about using working SW as a measurement of Agile.
I am currently researching that area and investigating how organizations actually measure the value of Agile SW development (business and IT value). And I am finding that, while organizations aim to deliver working SW, they also define value metrics to measure progress and much more:
Cycle time (e.g., from concept to production);
Business value (from number of times a feature is used by clients to impact on sales revenue, etc.);
Productivity metrics (such as burndown velocity, number of features deployed versus estimated); and last but not least
Quality metrics (such as defects per sprint/release, etc.).
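The value metrics listed above are straightforward to compute once teams capture a few dates and counts per feature. Here is a minimal sketch with entirely hypothetical data, showing cycle time, a feature-usage proxy for business value, and defects per release:

```python
from datetime import date
from statistics import mean

features = [
    # (concept date, production date, times used by clients) -- invented data
    (date(2013, 1, 7),  date(2013, 2, 18), 420),
    (date(2013, 1, 21), date(2013, 3, 4),  95),
    (date(2013, 2, 4),  date(2013, 3, 11), 1310),
]

# Cycle time: elapsed days from concept to production, per feature.
cycle_times = [(done - start).days for start, done, _ in features]
print("avg cycle time (days):", round(mean(cycle_times), 1))

# Business value proxy: how often each delivered feature is actually used.
print("total feature uses:", sum(uses for _, _, uses in features))

# Quality: defects reported per release.
defects_by_release = {"1.0": 12, "1.1": 7, "1.2": 4}
print("defects per release:", defects_by_release)
```

Even a toy report like this goes beyond “working software”: it shows whether delivery is speeding up, whether clients actually use what ships, and whether quality is trending in the right direction.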