Every so often I check my blog stats to see what you, the reader, find most interesting - my goal is to continue to bring you great content in both my blog and my research. While I was looking back over my blog stats I thought you might like to see the top ten blog posts in case you missed any of them. But just how should I assess the top ten? Like all outcome metrics, this one is open to interpretation.
I could take the simple route and just count which posts have the most reads. But that would fail to take into account how many days each post has been online - it stands to reason that older blog posts might garner more reads simply because they have had more time to do so. A ranking based on the number of reads divided by the number of days the post has been online yields a more accurate picture of the most-read posts (See Table 1 - Top ten most read posts).
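As a rough sketch of that reads-per-day ranking (with made-up titles, read counts, and dates - none of these figures come from my actual stats):

```python
from datetime import date

# Hypothetical post stats: (title, total reads, publish date).
posts = [
    ("Post A", 12000, date(2013, 8, 1)),
    ("Post B", 9500, date(2014, 3, 15)),
    ("Post C", 4000, date(2014, 10, 1)),
]

today = date(2014, 12, 1)

# Rank by reads per day online rather than raw reads, so older posts
# do not dominate simply by having been up longer.
ranked = sorted(
    posts,
    key=lambda p: p[1] / max((today - p[2]).days, 1),
    reverse=True,
)

for title, reads, published in ranked:
    days = (today - published).days
    print(f"{title}: {reads / days:.1f} reads/day")
```

Note how the newest post can top this ranking despite having the fewest raw reads - exactly the effect a raw read count would hide.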
The wild west of mobile in insurance is getting tamed. Mobile is no longer just a fun experiment—it’s now a crucial element in the customer and agent experience. We first published our mobile insurance metrics report in August of 2013. At the time, we were struck by how dependent insurers were on a single metric to prove their mobile success: application downloads.
With 15 more months of mobile development chops under their belts, we decided in November to take a look at how much more sophisticated mobile insurance strategists had become in their mobile performance measurement strategies. The answer? Unlike their peers in other industries, where mobile metrics have grown up, insurers remain stuck in mobile adolescence. How do we know? Because topping the mobile insurance metrics list in 2014 are still web traffic and app downloads. Far fewer insurers are tracking metrics that measure real business outcomes, like conversions and mobile revenue transactions.
Blogged in collaboration with Rebecca McAdams, Research Associate, serving Customer Insights professionals.
Consumers are connected, constantly influenced by marketing messages, their friends’ social posts, blog posts, reviews, mobile messages, and Twitter posts. In fact, US adults have an average of three connected devices. Consumers are leaving breadcrumbs of information behind, across multiple channels and devices. Marketers are jumping at the chance to connect with their customers through proactive marketing campaigns and even through non-marketing interactions. But which interactions actually drive impact? Which interactions are responsible for sales conversions, and which merely "assist" conversions? CI pros and marketers are stumped; they must measure these complex interactions to guide future marketing and media investments and to prove the value of their marketing efforts.
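The simplest version of that assist-versus-conversion split treats the last touch in a converting path as the "converter" and every earlier touch as an "assist." A minimal sketch, using entirely hypothetical channel paths:

```python
# Hypothetical converting paths (illustrative only): each tuple is the
# ordered sequence of channels a customer touched before converting.
paths = [
    ("email", "display", "search"),
    ("display", "search"),
    ("search",),
    ("search", "email"),
]

last_click = {}  # channel -> conversions it directly closed
assists = {}     # channel -> converting paths it appeared in earlier

for path in paths:
    *assisting, converter = path
    last_click[converter] = last_click.get(converter, 0) + 1
    for channel in set(assisting):
        assists[channel] = assists.get(channel, 0) + 1
```

Even this toy example shows the problem: a channel like display may close nothing on its own yet appear as an assist in many converting paths, which is exactly why last-click counts alone mislead.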
On the morning of May 6, 2014, Google announced its intent to acquire Adometry, a leader in the attribution technology space. Later that same day, AOL announced its intent to acquire Convertro, another top-performing attribution technology vendor. The Adometry acquisition is not surprising, as Google needed to make major investments in its existing attribution offering with some enhanced analytics and insights services, which Adometry can provide. AOL’s acquisition of Convertro was a move to further build out its ad technology stack, hoping to obtain a strong attribution algorithm and a stellar engineering staff through this acquisition.
Both companies stand to benefit from acquiring these small but extremely knowledgeable teams of marketing and media measurement experts. Two of the biggest benefits for each include:
A strong services staff with deep knowledge of all media and marketing data and, more importantly, the expertise in driving actionable insights in a complicated media-buying world.
An innovative ability to stitch data sources together — online, offline, and mobile — across the buyer’s journey.
Perhaps you’ve heard him in meetings — he is the one questioning your results. Perhaps you’ve seen him at his desk surrounded by tomes and tables in an effort to lower incremental sales calculations — he calls it reducing bias. Perhaps you’ve hoped he will not be assigned to your project — he delivers lower lift estimates than his peers. He is the measurement curmudgeon.
How do you detect if a measurement curmudgeon resides in your office? Listen for the following clues/questions:
Is that control group really comparable to the experimental group? Isn’t it biased toward less engaged customers, inflating your measured lift?
Wasn’t that concurrent with our fall promotion? Isn’t that event likely accounting for most of your positive results?
Haven’t sales been trending up? Did you incorporate that trend into your analysis?
More news from Mountain View on Tuesday, where Internet powerhouse Google released the much-anticipated Data Driven Attribution (DDA) feature for its Premium users. The release of Google’s DDA approach comes as no surprise to the analytics and measurement community. The world of attribution measurement is constantly evolving and new attribution approaches, new players, and new tools regularly enter the market, enabling marketers to select the right attribution tool for their business needs. It was only a matter of time before Google released a persuasive, more advanced measurement offering.
First, note that Data Driven Attribution is only available to Google Analytics Premium users. It has several notable aspects worth highlighting:
Google’s DDA approach is a statistically driven methodology. It is a huge improvement over the rules-based Attribution Modeling tool (which is available for FREE to Google Analytics users). The DDA approach uses probability modeling to estimate the value of each interaction. The approach itself is transparent and understandable, and Google is extremely open about how it calculates the value parameters.
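To make "probability modeling to estimate the value of each interaction" concrete, here is a minimal sketch of one common data-driven technique, the removal effect — this is an illustration of the general idea, not Google's actual DDA algorithm, and the paths are invented:

```python
# Illustrative removal-effect attribution. Each converting path is the
# set of channels the customer touched (hypothetical data).
paths = [
    ("display", "search"),
    ("search",),
    ("email", "display", "search"),
    ("email", "search"),
]
total_conversions = len(paths)

channels = {c for path in paths for c in path}

# Removal effect: how many conversions are "at risk" if a channel
# disappears? Here, removing a channel breaks any path containing it.
credit = {}
for channel in channels:
    surviving = sum(1 for path in paths if channel not in path)
    credit[channel] = total_conversions - surviving

# Normalize removal effects into fractional credit per channel.
total_effect = sum(credit.values())
weights = {c: credit[c] / total_effect for c in credit}
```

Unlike a fixed rule (first touch, last touch, even split), the weights here are driven entirely by the observed paths, which is the essential difference a statistically driven model offers.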
At some level, I see dysfunction in almost every client I work with. This isn't something new. There probably isn't an organization on the planet without some level of dysfunction. Perhaps a degree of dysfunction is acceptable or even desirable. But eventually organizational dysfunction reaches a point where it begins to impede the ability of the enterprise to function. One area where this appears to occur with great frequency is between IT and the rest of the business. In far too many organizations IT is seen as out of alignment with the business, or worse, as an impediment to business units. So why is this?
It's been my opinion for some time now that there is a root cause for almost all the dysfunction in organizations. The cause is metrics. Specifically, the metrics we use to measure employee performance. Sometimes we suffer from the unintended consequences of what appear to be sound metrics.
Take for example a conversation I recently had with a client in marketing with responsibility for e-commerce. He wanted to gain a better understanding of IT because it appeared to him they were making bad decisions. On investigation it turned out "IT" had taken the website offline in the middle of the trading day, much to the consternation of the e-commerce team. To understand why IT might do this you need to understand metrics. It turns out the help desk had received a call about a problem with SAP. In order to fix the problem, the database engineer decided the fastest repair would require restarting SAP. Unfortunately the website was tied to SAP, so when it went down, so too did the website. Had the help desk and the database engineer not been measured on how long it takes to repair a problem, they might have made a different decision.
"Let's just say I'm not lost when it comes to data . . . but I could be more found . . ." – (eBusiness team member at a top 50 US bank)
Digital teams are surrounded by data and metrics — from KPIs to customer analytics. Yet I often hear from clients who wish they were just a little more comfortable knowing what the data is really saying, or which metrics are most important.
We just published a brand-new report, The Mobile Banking Metrics That Matter, which outlines how mobile strategists at banks can put the right metrics in place and work with their analytics teams to get data outputs that guide them toward smart business decisions.
Writing this report got me thinking about which books, blogs, and articles I’ve found most useful when it comes to really getting data and metrics. Here are five I think might help you too:
The Tiger That Isn’t. Probably my personal favorite book about stats and measurement. Written for a mainstream audience, the book works as a guide to thinking through what a given stat or data point really means — and when to trust or doubt such data. It’s also a great read, full of interesting nuggets and statistical oddities (like how the vast majority of people have an above-average number of legs). The book’s thesis is that people who consume data should be skeptical but not cynical about statistics. From there, it helps the reader more easily contemplate and act on the data and metrics they encounter.
Last week, I had the pleasure of attending Forrester's Forum For Marketing Leaders in London and met some members of the Forrester Leadership Board (FLB) for Customer Insights (CI) professionals. I was eager to share my research on attribution measurement and (selfishly) get their point of view on measurement successes and challenges in Europe. Here are a few key takeaways from our CI colleagues across the pond:
Attribution measurement is a growing topic among European firms. When I met with the FLB members, I was delighted to learn that attribution is being widely adopted in most organizations, with the same challenges that we face in America. In fact, it seems that the firms I spoke with adopted attribution quite a while ago, and they’re really looking to advance their attribution approach in the near future. Overall, they are making significant investments in the right data, resources, and tools to have a more sophisticated measurement approach.
Cross-channel attribution. For customer insights and marketing practitioners, attribution is a white-hot measurement topic. It’s viewed as the best way to measure the effectiveness of marketing and media campaigns; a way for firms to assess… truly assess… the value of the customer journey. For the past 18 months, I have been living and breathing this topic, and today I am happy… no, I’m elated… to announce the official publication of the Cross-Channel Attribution Playbook.
What’s a playbook, you ask? Well, a playbook is a framework to help organizations develop expertise around a specific business topic. The Cross-Channel Attribution Playbook helps marketers and customer insights professionals take strategic steps in building an attribution strategy within their organization. It contains 12 chapters, including an executive overview, that cover different aspects of developing and managing a cross-channel attribution measurement framework. The four “chapters” specifically help organizations: