Last week, I had the pleasure of attending Forrester's Forum For Marketing Leaders in London and met some members of the Forrester Leadership Board (FLB) for Customer Insights (CI) professionals. I was eager to share my research on attribution measurement and (selfishly) get their point of view on measurement successes and challenges in Europe. Here are a few key takeaways from our CI colleagues across the pond:
Attribution measurement is a growing topic among European firms. When I met with the FLB members, I was delighted to learn that attribution is being widely adopted across their organizations, with the same challenges that we face in America. In fact, it seems that the firms I spoke with have been using attribution for quite a while, and they’re really looking to advance their attribution approach in the near future. Overall, they are making significant investments in the right data, resources, and tools to build a more sophisticated measurement approach.
Cross-channel attribution. For customer insights and marketing practitioners, attribution is a white-hot measurement topic. It’s viewed as the best way to measure the effectiveness of marketing and media campaigns; a way for firms to assess… truly assess… the value of the customer journey. For the past 18 months, I have been living and breathing this topic, and today I am happy… no, I’m elated… to announce the official publication of the Cross-Channel Attribution Playbook.
What’s a playbook, you ask? Well, a playbook is a framework to help organizations develop expertise around a specific business topic. The Cross-Channel Attribution Playbook helps marketers and customer insights professionals take strategic steps toward building an attribution strategy within their organization. It comprises 12 chapters, among them an executive overview, that cover different aspects of developing and managing a cross-channel attribution measurement framework. The four “chapters” specifically help organizations:
Are you trying to take your current customer experience measurement to the next level?
Many of the customer experience professionals we talk to regularly are working on improving their customer experience measurement. You are probably one of them. You might be working on picking the right metrics, on connecting customer experience to business outcomes or to operational variables, on using data to improve the customer experience, or on getting traction for CX measurement in your organization. To conquer any or all of these challenges, you need a solid and well-founded customer experience measurement framework.
It is that dreaded time of year again when we have to report via the performance management system (PMS) on our individual performance and the value we bring to the organization. I say dreaded because we all know that in reality the goals and objectives were set some time ago, maybe a year, and a lot has happened since then. The person you report to may have changed, you were redirected to other tasks, and so on. Everything seemed possible at the time of the objective setting, but now reality hits: you may have been far too optimistic about your own capability. The self-assessment is difficult, as you are not sure whether your manager shares your view. You believe you met the objective, but does their expectation match your actual delivery? If good performance is tied to more money, the pressure and stress build.
So whilst I was preparing for my Orlando Business Architecture Forum presentation, I started to think about how business architecture teams measure and manage their performance. One of my next reports for Forrester’s business architecture playbook addresses BA performance. It was also a hot topic for the EA Council members in Orlando. I had a number of 1-on-1s with clients who specifically asked about BA metrics and performance — in particular, “What do other business architecture teams do?”
I started listing the questions that, when answered by clients, would lead to a very valuable report for all BA leaders:
Do you measure your BA team’s performance? Clients often tell me that they have fairly mature BA practices. However, very few can articulate how they measure their performance, and many comment that the business asks them to demonstrate how BA adds value. So, it would be useful to understand whether BA leaders measure their team’s performance and why they do or don’t.
I recently finished reading Moneyball, the Michael Lewis bestseller and slightly above-average Hollywood movie. It struck me how great baseball minds could be so off in their focus on the right metrics to win baseball games. By now you know the story: teams paid too much for high batting averages, with insufficient focus where it counts — on metrics that correlate with scoring runs, like on-base percentage. Not nearly as dramatic, but business is having its own “Moneyball” experience, with far too much focus on traditional metrics like productivity and quality and not enough on customer experience and, most importantly, agility.
Agility is the ability to execute change without sacrificing customer experience, quality, and productivity. It is “the” struggle for mature enterprises and what makes them most vulnerable to digital disruption. Enterprises routinely cite the incredible length of time it takes to get almost any change made. I’ve worked at large companies, and it’s just assumed that things move slowly, bureaucratically, and inefficiently. But why do so many just accept this? For one thing, poor agility undermines the value of other collected BPM metrics. Strong customer experience metrics are useless if you can’t respond to them in a timely manner, and so is enhanced productivity if it only results in producing out-of-date products or services faster.
Of late I’ve been considering a more mundane version of the ultimate question — what is the ideal metric to use when evaluating business technology strategies? The challenge is that we already have a diverse set of investment metrics from which to choose. There’s Return On Investment (ROI), Net Present Value (NPV), Internal Rate Of Return (IRR), and Payback Period, to name a few of the most common. Yet I can’t help feeling they all lack a little something — the ability to connect the project with the desired business outcome, which for a strategy is the attainment of the goal.
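To make two of these standard metrics concrete, here is a minimal sketch of the textbook formulas behind NPV and payback period. The cash-flow figures are purely hypothetical, invented for illustration:

```python
def npv(rate, cash_flows):
    """Net Present Value: discount each period's cash flow back to today.

    cash_flows[0] is the upfront investment (negative); cash_flows[t]
    is the net inflow in period t.
    """
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))


def payback_period(cash_flows):
    """Payback Period: number of periods until cumulative cash flow
    turns non-negative, or None if it never does within the horizon."""
    cumulative = 0.0
    for t, cf in enumerate(cash_flows):
        cumulative += cf
        if cumulative >= 0:
            return t
    return None


# Hypothetical project: 100k upfront, inflows over four periods.
flows = [-100_000, 30_000, 40_000, 50_000, 40_000]
print(round(npv(0.10, flows), 2))   # NPV at a 10% discount rate
print(payback_period(flows))        # periods until break-even
```

Both numbers are easy to compute, yet neither says anything about whether the strategy actually attained its goal — which is exactly the gap noted above.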
Recently I’ve been working with clients to apply a different measure — the T2BI ratio:
My colleague and friend Mike Gualtieri wrote a really interesting blog the other day titled "Agile Software Is A Cop-Out; Here's What's Next." While I am not going to discuss the great conclusions and "next practices" of software (SW) development Mike suggests in that blog, I do want to focus on the assumption he makes about using working SW as a measurement of Agile.
I am currently researching that area and investigating how organizations actually measure the value of Agile SW development (business and IT value). And I am finding that, while organizations aim to deliver working SW, they also define value metrics to measure progress and much more:
Cycle time (e.g., from concept to production);
Business value (from number of times a feature is used by clients to impact on sales revenue, etc.);
Productivity metrics (such as burndown velocity, number of features deployed versus estimated); and last but not least
Quality metrics (such as defects per sprint/release, etc.).
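As a rough illustration, the list above can be sketched in a few lines of Python. All field names and numbers here are invented for the example; business value metrics (feature usage, sales impact) depend on external data and are omitted:

```python
from datetime import date
from statistics import mean

# Hypothetical finished work items: concept date and production date.
items = [
    {"concept": date(2013, 2, 1), "production": date(2013, 3, 20)},
    {"concept": date(2013, 2, 15), "production": date(2013, 3, 25)},
]

# Hypothetical sprint records.
sprints = [
    {"points_done": 34, "features_deployed": 8, "features_estimated": 10, "defects": 4},
    {"points_done": 40, "features_deployed": 9, "features_estimated": 9, "defects": 2},
]

# Cycle time: elapsed days from concept to production, per item.
cycle_times = [(i["production"] - i["concept"]).days for i in items]

# Productivity: average velocity and deployed-versus-estimated ratio.
velocity = mean(s["points_done"] for s in sprints)
deploy_ratio = (sum(s["features_deployed"] for s in sprints)
                / sum(s["features_estimated"] for s in sprints))

# Quality: average defects per sprint.
defects_per_sprint = mean(s["defects"] for s in sprints)

print(cycle_times, velocity, round(deploy_ratio, 2), defects_per_sprint)
```

The point of the sketch is that these metrics are cheap to compute once the underlying events (concept dates, deployments, defects) are captured consistently.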