Customer experience horror stories are not quite as inevitable as death and taxes, but they are close cousins, and we all have a large back catalogue of screw-ups to rant about operatically: the crappy cheese sandwich, the misleading advice about product features, or being ushered into an avoidable, gargantuan queue by a staff drone. Some of my own frustration exotica include harmoniums couriered from India that were good only for firewood (or modern art) on arrival in Edinburgh*. Yes, all the world is a stage, but some brands can look like The Three Stooges on it.
Marketing Manager: “Net Promoter Score is the one number we need to grow!”
Customer Intelligence Manager: “Nonsense! ‘Satisfaction’ predicts customer loyalty better than ‘likelihood to recommend’ – it says so in the wonky business journals I read!”
Marketing Manager: “You don’t understand how business works!”
Customer Intelligence Manager: “You don’t understand how math works!”
The sad thing is that in a micro sense they’re both right, but in a macro sense they’re both wrong. The reason? They’re each taking an inside-out point of view based on their own specialties.
Where NPS Fits In A Customer Experience Measurement Framework
In our research into customer experience measurement, we see many organizations that use Net Promoter Score. Some use it poorly because – like the fictional marketing manager above – they don’t understand the limitations of what NPS can do.
Here’s how they should think of it: Customer experience is how customers perceive their interactions with a company along each step of a customer journey, from discovery, to purchase and use, to getting service. NPS measures what customers say they’ll do as a result of one or more of those interactions. It’s what Forrester calls an “outcome metric.”
But outcome metrics are just one out of three types of metrics captured by effective customer experience measurement programs. The best programs gather and analyze:
The ever-insightful Mike Glantz has picked up on something strange in the water for video (TV and online) advertising these days. After conducting a great panel at the Forrester Marketing Leadership Forum in Los Angeles last week, here's his take:
Online video is certainly rising fast as a medium and an ad vehicle. Just this week, comScore announced that Americans viewed more than 8 billion video ad impressions in March alone, setting an all-time record. Audiences in the US are embracing online video across a wide variety of devices and show no signs of slowing down. To capitalize on this explosive growth, many of the big online publishers like AOL, Hulu, and Yahoo are hosting their own "NewFronts," hoping to emulate TV by attracting bigger advertisers with deeper pockets and larger commitments to purchase the more valuable online ad space in advance.
I've noticed a disturbing trend in one of the markets I study. Thirty percent of marketers say their top social media goal is creating brand impact, but only 10% tell us they measure brand impact — a gap of 20 percentage points. Meanwhile, just 4% say sentiment or engagement is their top goal, yet a whopping 26% measure these numbers — a nearly identical gap of 22 percentage points, but in the other direction. It's clear what's happening here: Marketers are using sentiment and engagement numbers as a proxy for brand impact surveys.
Deep down I love the idea of measurement proxies. A properly constructed and proven proxy could be a cheap, quick, and effective stand-in for direct measurement of things that are quite frankly hard to measure — like brand impact.
But there’s a big problem here: I've been looking pretty hard for good measurement proxies for a while now, and I’ve found very few that could be described as "properly constructed and proven." And I'm pretty sure none of the marketers in our survey have proven their proxies — because if they'd tried, they'd have almost certainly failed.
The Oil And Gas Information Technology Innovation Dilemma
The hydrocarbon logistics chain for natural gas and crude oil connects globally distributed exploration and production sites with industrial and private consumers via pipelines, tankers, rail cars, and trucks, with massive intermediate buffering, storage, and conversion facilities (tank farms, refineries, and gas plants) along the way. It is the lifeblood of our energy supply chain today and for the coming decades.
More than 75 million barrels of oil and 300 billion cubic feet of natural gas are produced, transported, and consumed all over the globe — every day. Along the complex transportation chain, these special bulk products, both liquids and gases, are transferred between the different modes of transportation, resulting in a number of challenges based on complex measurements of product volumes and masses:
Measurement accuracy. In an ideal world, we would always determine the mass of crude oil and natural gas at each measurement point; however, due to the large quantities involved, weighing is possible only at the very end of the logistics chain. Consequently, we have to live with measurement data that typically carries an uncertainty of 0.1% to 0.5%, depending on the measurement devices’ intrinsic accuracy.
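To make those percentages concrete, here is a minimal sketch (my own illustration, not from the original post) of what a 0.1% to 0.5% relative uncertainty means in absolute terms at the daily production volume quoted above, plus the standard root-sum-of-squares rule for combining independent measurement points along the chain:

```python
import math

DAILY_OIL_BARRELS = 75_000_000  # "more than 75 million barrels ... every day"

def absolute_uncertainty(volume: float, relative_uncertainty: float) -> float:
    """Absolute uncertainty, in the same unit as volume, for one measurement."""
    return volume * relative_uncertainty

def combined_relative_uncertainty(uncertainties: list[float]) -> float:
    """Combine independent relative uncertainties via root-sum-of-squares."""
    return math.sqrt(sum(u * u for u in uncertainties))

# 0.1% of daily output is 75,000 barrels; 0.5% is 375,000 barrels.
low = absolute_uncertainty(DAILY_OIL_BARRELS, 0.001)
high = absolute_uncertainty(DAILY_OIL_BARRELS, 0.005)

# A product crossing three custody-transfer points, each measured at
# 0.3% uncertainty, accumulates roughly 0.52% overall (0.003 * sqrt(3)).
chain = combined_relative_uncertainty([0.003, 0.003, 0.003])
```

Even at the optimistic end of the range, the unavoidable measurement band spans tens of thousands of barrels per day, which is why reconciling volumes across transfer points is such a persistent challenge.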
But saying that raises the question: If the number of fans or followers you have doesn’t tell you whether you’ve succeeded as a company, then what does it tell you? And if your CEO shouldn’t be worried about the number of wall posts you’ve generated, then who should be paying attention to this number?
Since last summer, I’ve been using a structured model to help my clients focus on delivering the right social media marketing data to various stakeholders inside their organization. Social media programs throw off so much data that the key to measuring and managing your programs well is focusing each stakeholder on just the pieces of data that are relevant to helping them do their jobs. If part of your job is measuring the success of your social media marketing programs, then you need to start segmenting the stakeholder groups you’re providing that data to and tailoring the type of metrics, the volume of metrics, and the frequency of reporting you provide them.
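The segmentation idea above can be sketched as a simple mapping from stakeholder group to the metric types, volume, and reporting frequency that group actually needs. All the group names and metrics below are hypothetical illustrations, not the specific model described here:

```python
# Hypothetical stakeholder-to-reporting plan: each group sees only the
# handful of metrics relevant to its job, at its own cadence.
REPORTING_PLAN = {
    "executives": {
        "metrics": ["revenue influenced", "cost per acquisition"],
        "frequency": "quarterly",
    },
    "marketing_managers": {
        "metrics": ["conversion rate", "share of voice", "traffic referred"],
        "frequency": "monthly",
    },
    "community_managers": {
        "metrics": ["engagement rate", "response time", "post volume"],
        "frequency": "daily",
    },
}

def report_for(stakeholder: str) -> dict:
    """Return only the slice of social media data a stakeholder should see."""
    return REPORTING_PLAN[stakeholder]
```

The point of the structure is the deliberate narrowing: an executive dashboard built this way carries two numbers a quarter, not the full firehose of daily engagement data.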