A Little Doubt Wouldn't Hurt Research

I’m not sure what it is. Maybe it’s the time of year and the fact that my upcoming holiday makes me a bit introspective. Maybe it’s the weather, as it’s been a horrendous summer so far in the Netherlands. Or maybe it’s just me, being inundated with tweets, blog posts, articles, white papers, vendor briefings, etc., about market research. Whatever the reason, the outcome is the same: I’m currently struggling a bit with the pervasive authoritative voice in the industry. Don’t get me wrong; I’m well aware that I’m as guilty as everyone else. But we all seem so certain about what’s going on in research, what needs to happen, what’s wrong, and what’s right; about who’s in and who’s out. I feel we’re losing an important skill that distinguishes good market researchers from great ones: the ability to doubt.

With market research, there is no absolute truth. Research is about interpretation of results, placing numbers into context, finding the story behind the numbers. Any data set can have multiple stories; it’s the market researcher who uncovers and shares the story that he or she believes to be most powerful for the business. In the end, however, it’s just one perception of the truth. Great researchers know this, and they always challenge themselves, trying to pick holes in their story, finding examples that prove the opposite. The problem with today’s business environment is that it doesn’t leave much room for doubt or uncertainty. In fact, doubt and uncertainty are seen as weaknesses. So, what do we do? We cover up and only show our best side.

For example, I’ve been to quite a number of conferences in the past year, and I’ve only seen one (1!) case study that showed where research got it wrong. We seem to be falling collectively into the “file drawer problem.” When something doesn’t work or doesn’t look good, we store it safely in a drawer, never daring to look at it again, rather than really taking stock of what went right and what went wrong, learning from it, and sharing it with others to help market research grow. It doesn’t look good to show that we struggled — and that is both human nature and a sign of the times. So we end up with case studies that only show the shiny side of the new toy, and that’s a bad thing. The overall industry could benefit at least as much from stories that show the issues and challenges researchers face as from stories that share their successes.

But who dares to go first?

Comments

Reineke, thanks for this

Reineke, thanks for this candid view. Researchers do indeed have a great responsibility, which can feel onerous at times. A professional is constantly gauging the market outside of models, tools, templates, and frameworks as well. She is talking to clients, consumers, business managers. She is even picking up signals about a city’s economy from a chat with her cab driver. An analyst may not have the full picture, but he or she has a pretty good picture — better than most — and in that sharing everyone benefits. I agree that it is a good idea to also express uncertainty where there is uncertainty. It is all part of the process, and it makes the reader or consumer of research a part of it too. In the US an analyst cannot be compelled to express what he or she does not believe to be true. I sure hope it is the same the world over, because research is conducted worldwide. If there are any expectations to the contrary, the burden of finding your own voice and making it heard is also the analyst’s or researcher’s to bear, and for that reason alone, thank you for sharing your views here.

Very well put

Excellent reflection. Thanks for sharing it.

Agree

Very well put Reineke! I think the 'learning from our mistakes' piece is extremely important. Will you be facilitating that at a Forum later this year? : )

Sharing our experiences

Alveena, Alex, and Jackie,
thanks for your comments. Yes, we definitely plan on sharing what we learn.
In our research we always cover both the opportunities and the challenges of research methodologies. We sometimes do that here on the blog, like my post about global research challenges, or the posts about mobile research. And we sometimes share at conferences. We will, for example, talk about our experiences with combining tracking, qualitative, and quantitative research methodologies at the ESOMAR conference in Atlanta. Hope others will follow.
Thanks, Reineke

Hi Reineke, This is a great

Hi Reineke,

This is a great piece and I hope people step up to answer your call. It's often difficult to define research getting it wrong. As you say, it's a matter of interpretation. I once warned that movement on brand attribute metrics was being driven more by sampling than by advertising impact. The message was heard without incident, but direction wasn't changed. Was I wrong? Was the brand leader wrong?

It may not matter. What matters — and the feedback I received at the time — is that my analysis added value. Right or wrong, there was a well-supported story which hadn't been heard, and it likely improved business outcomes down the road, if not within the next 12 months.