We all share this sentiment that we want to protect our resources — our planet — for generations to come, so that our children and their children can live happily ever after. It’s that warm and fuzzy feeling we get when we see a little girl holding a flower in her hand. I’m reminded that we all share this sentiment every time the press reacts with irate reports criticizing the extent of pollution in China — or when “Reduce, Reuse, Recycle” became part of pop culture with Jack Johnson’s song “The 3 R’s” (sorry if you have that song playing in your head now). Protecting the environment is the right thing to do. But how many times have you used disposable dishes or cutlery when there were other options that were just less convenient? And why do you do that? It’s easy: Life gets in the way.
As a customer experience (CX) professional, you’ll have noticed the parallels by now. You regularly try to share insights from CX measurement or the voice of the customer (VoC) program with colleagues across the organization to tell them what customers think about their experiences with the company and what their pain points are. Using these insights is the right thing to do. But how many times have you met polite but superficial interest? And why is that? Life gets in the way. Your colleagues are busy, don’t know why they should care, or have other priorities. It’s no wonder then that 72% of the CX pros we asked in our recent survey on the state of CX maturity said that their organizations have been only somewhat effective, or not effective at all, at improving customer experience.
I looked at ways that CX pros have managed to rally their organizations around CX metrics and found 10 tactics that companies like Avaya, Elsevier, Hampton Inn & Suites, Sage Software North America, and Verizon have proven to work in the real world.
OK, it is certainly a cliché and clearly suffers from an incomplete view of the world, but many contact center executives would still nod their heads in agreement with the statement, “You can’t manage what you can’t measure.” Contact centers generate a huge volume of data, and everyone from agents on the floor to CEOs in their corner offices would benefit from actionable analytics based on that data. However, consistently turning that data into knowledge that is useful for improving performance remains challenging. The key questions for contact center professionals around this data are:
What do you measure?
How do you present the data from those measurements?
Every so often I check my blog stats to see what you, the reader, find most interesting - my goal is to continue to bring you great content in both my blog and my research. While I was looking back over my blog stats I thought you might like to see the top ten blog posts in case you missed any of them. But just how should I assess the top ten? Like all outcome metrics, this one is open to interpretation.
I could take the simple route and just count which posts have the most reads (Table 1a). But that would fail to take into account how many days it has been since each post was published - it stands to reason that older posts will garner more reads. A ranking based on the number of reads divided by the number of days the post has been online yields a more accurate picture of the most-read posts (See Table 1b - Top ten most read posts)*.
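A reads-per-day ranking like the one behind Table 1b is straightforward to compute. The post titles, read counts, and dates below are made up purely for illustration:

```python
from datetime import date

# Hypothetical post data: (title, total_reads, publish_date)
posts = [
    ("Post A", 1200, date(2013, 1, 15)),
    ("Post B", 900, date(2013, 6, 1)),
    ("Post C", 300, date(2013, 9, 20)),
]

today = date(2013, 10, 1)

# Rank by reads per day online rather than raw read counts,
# so older posts don't dominate simply by having been up longer.
ranked = sorted(
    posts,
    key=lambda p: p[1] / (today - p[2]).days,
    reverse=True,
)

for title, reads, published in ranked:
    days_online = (today - published).days
    print(f"{title}: {reads / days_online:.2f} reads/day")
```

Note how the ordering flips: the newest post wins on reads per day even though it has the fewest total reads.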
The wild west of mobile in insurance is getting tamed. Mobile is no longer just a fun experiment—it’s now a crucial element in the customer and agent experience. We first published our mobile insurance metrics report in August of 2013. At the time, we were struck by how dependent insurers were on a single metric to prove their mobile success: Application downloads.
With 15 more months of mobile development chops under their belts, mobile insurance strategists had had time to mature, so in November we took a look at how much more sophisticated their mobile performance measurement strategies had become. The answer? Unlike other industries where mobile metrics have grown up, insurers remain stuck in mobile adolescence. How do we know? Because topping the mobile insurance metrics list in 2014 are web traffic and app downloads. Fewer insurers are tracking metrics that measure real business outcomes like conversions and mobile revenue transactions.
Blogged in collaboration with Rebecca McAdams, Research Associate, serving Customer Insights professionals.
Consumers are connected, constantly influenced by marketing messages, their friends’ social posts, blog posts, reviews, mobile messages, and tweets. In fact, US adults have an average of three connected devices. Consumers are leaving breadcrumbs of information behind, across multiple channels and devices. Marketers are jumping at the chance to connect with their customers through proactive marketing campaigns and even through non-marketing interactions. But which interactions actually drive impact? Which interactions are responsible for sales conversions, and which merely “assist” conversions? CI pros and marketers are stumped; they must measure these complex interactions to guide future marketing and media investments and to truly gauge their marketing efforts.
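A simple first cut at separating the interactions that drive conversions from those that merely assist them is to compare, per channel, how often it was the final touch before a sale versus an earlier touch in a converting path. The path data below is a hypothetical illustration:

```python
from collections import Counter

# Hypothetical converting paths: ordered channel touchpoints,
# with the last touch before the sale listed last.
converting_paths = [
    ["display", "email", "search"],
    ["search"],
    ["email", "search"],
    ["display", "email"],
]

drove = Counter()     # channel was the final touch before the conversion
assisted = Counter()  # channel appeared earlier in a converting path

for path in converting_paths:
    drove[path[-1]] += 1
    for channel in set(path[:-1]):
        assisted[channel] += 1

for channel in sorted(set(drove) | set(assisted)):
    print(channel, "drove:", drove[channel], "assisted:", assisted[channel])
```

A real attribution model would weight touches rather than just count them, but even this crude split starts to show which channels close conversions and which seed them.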
On the morning of May 6, 2014, Google announced its intent to acquire Adometry, a leader in the attribution technology space. Later the same day, AOL announced its intent to acquire Convertro, another top-performing attribution technology vendor. The Adometry acquisition is not surprising, as Google needed to make major investments in its existing attribution offering with some enhanced analytics and insights services, which Adometry can provide. AOL’s acquisition of Convertro was a move to further build out its ad technology stack, hoping to obtain a strong attribution algorithm and a stellar engineering staff through this acquisition.
Both companies stand to benefit from acquiring these small but extremely knowledgeable experts in marketing and media measurement. The two biggest benefits for each include:
A strong services staff with deep knowledge of all media and marketing data and, more importantly, the expertise in driving actionable insights in a complicated media-buying world.
An innovative ability to stitch data sources together — online, offline, and mobile — across the buyer’s journey.
More news from Mountain View on Tuesday, where Internet powerhouse Google released the much-anticipated Data Driven Attribution (DDA) feature for its Premium users. The release of Google’s DDA approach comes as no surprise to the analytics and measurement community. The world of attribution measurement is constantly evolving and new attribution approaches, new players, and new tools regularly enter the market, enabling marketers to select the right attribution tool for their business needs. It was only a matter of time before Google released a persuasive, more advanced measurement offering.
First, the Data Driven Attribution feature is only available for Google Analytics Premium users. It has several notable features worth highlighting:
Google’s DDA approach is a statistically driven methodology. It is a huge improvement over Google’s rules-based Attribution Modeling tool (which is available free to all Google Analytics users). The DDA approach uses probability modeling to estimate the value of each interaction. The approach itself is transparent and understandable, and Google is extremely open about how it calculates the value parameters.
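Google hasn’t published DDA’s exact algorithm, but removal-effect analysis is one common statistically driven attribution technique in the same family: estimate each channel’s value by how much the overall conversion rate drops when that channel’s touchpoints are removed from the observed paths. The path data and the simplification below (a path with no remaining touches can’t convert) are illustrative assumptions, not Google’s implementation:

```python
# Hypothetical paths: (channels touched, whether the path converted)
paths = [
    (("search", "display"), True),
    (("display",), True),
    (("search",), True),
    (("email",), False),
    (("email", "display"), True),
]

def conversion_rate(paths, excluded=None):
    """Conversion rate over all paths, pretending touches on the
    excluded channel never happened. Simplification: if removing
    the channel empties a path, count that path as non-converting."""
    converted = 0
    for channels, conv in paths:
        remaining = [c for c in channels if c != excluded]
        if remaining and conv:
            converted += 1
    return converted / len(paths)

base = conversion_rate(paths)
channels = {c for chans, _ in paths for c in chans}

# Removal effect: how much does conversion drop without each channel?
removal = {c: base - conversion_rate(paths, excluded=c) for c in channels}

# Normalize removal effects into fractional credit per channel.
norm = sum(removal.values())
credit = {c: removal[c] / norm for c in removal}
```

In this toy data set, search and display each split the credit evenly while email, which never appears on a path that needed it to convert, gets none; a production model would also weight position, recency, and path frequency.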
At some level, I see dysfunction in almost every client I work with. This isn't something new. There probably isn't an organization on the planet without some level of dysfunction. Perhaps a degree of dysfunction is acceptable or even desirable. But eventually organizational dysfunction reaches a point where it begins to impede the ability of the enterprise to function. One area where this appears to occur with great frequency is between IT and the rest of the business. In far too many organizations IT is seen as out of alignment with the business, or worse, as an impediment to business units. So why is this?
It's been my opinion for some time now that there is a root cause for almost all the dysfunction in organizations. The cause is metrics. Specifically, the metrics we use to measure employee performance. Sometimes we suffer from the unintended consequences of what appear to be sound metrics.
Take for example a conversation I recently had with a client in marketing with responsibility for e-commerce. He wanted to gain a better understanding of IT because it appeared to him they were making bad decisions. On investigation, it turned out “IT” had taken the website offline in the middle of the trading day, much to the consternation of the e-commerce team. To understand why IT might do this, you need to understand metrics. It turns out the help desk had received a call about a problem with SAP. To fix the problem, the database engineer decided the fastest repair would be restarting SAP. Unfortunately, the website was tied to SAP, so when SAP went down, so too did the website. Had the help desk and the database engineer not been measured on how long it takes to repair a problem, they might have made a different decision.
"Let's just say I'm not lost when it comes to data . . . but I could be more found . . ." – (eBusiness team member at a top 50 US bank)
Digital teams are surrounded by data and metrics — from KPIs to customer analytics. Yet I often hear from clients who wish they were just a little more comfortable knowing what the data is really saying, or which metrics are most important.
We just published a brand new report, The Mobile Banking Metrics That Matter, which outlines how mobile strategists at banks can put the right metrics in place and work with their analytics teams to get data outputs that guide them toward smart business decisions.
Writing this report got me thinking about which books, blogs, and articles I’ve found most useful when it comes to really getting data and metrics. Here are five I think might help you too:
The Tiger That Isn’t. Probably my personal favorite book about stats and measurement. Written for a mainstream audience, the book works as a guide to thinking through what a given stat or data point really means — and when to trust or doubt such data. It’s also a great read, full of interesting nuggets and statistical oddities (like how the vast majority of people have an above-average number of legs). The book’s thesis is that people who consume data should be skeptical but not cynical about statistics. From there, it helps the reader more easily contemplate and act on the data and metrics they encounter.
Last week, I had the pleasure of attending Forrester's Forum For Marketing Leaders in London and met some members of the Forrester Leadership Board (FLB) for Customer Insights (CI) professionals. I was eager to share my research on attribution measurement and (selfishly) get their point of view on measurement successes and challenges in Europe. Here are a few key takeaways from our CI colleagues across the pond:
Attribution measurement is a growing topic among European firms. When I met with the FLB members, I was delighted to learn that attribution is being widely adopted in most organizations, with the same challenges that we face in America. In fact, it seems that the firms I spoke with have used attribution for quite a while, and they’re looking to advance their attribution approaches in the near future. Overall, they are making significant investments in the right data, resources, and tools to build a more sophisticated measurement approach.