About six weeks ago, I attended the Mobile Research Conference 2011 in London, where a variety of vendors and clients talked about their experiences with mobile as a research methodology. They shared a range of mobile research methodologies, like using text messages in emerging markets, mobile ethnographic studies, geolocation tracking, and mobile behavioral tracking data. You can find most of the presentations here, and if you want to see me in action as a roving reporter, you can click here.
Throughout the conference, a clear line was drawn between the benefits and challenges of online research versus mobile research, and how the two can strengthen each other. Then at the end of the second day, someone asked the audience the following question: “Do you consider tablets a PC or a mobile device?” The answer was almost unanimous: a mobile device.
This got me thinking about the whole concept of mobile research in more detail. In fact, I was wondering if something like a mobile research conference would still exist in a couple of years, because the rapid technological developments of smartphones and tablets will blur the line between mobile and online research. Can we, as researchers, continue to define the research methodology in the future, or are the respondents going to do that? This line of thinking led me to ask this question to our community members: Should research be device-agnostic?
At the end of January, I spoke at the ESOMAR Shopper Insights Conference, and part of my speech was about how technology makes the market insights professional’s role more challenging in some ways. For example, technology has made the world flat: The Internet makes it possible for information to travel fast, and it feels like we know everything about anything (or at least we could). But my point was that knowing doesn’t equal understanding.
And in the past weeks, with the world on fire, this thought has been nibbling at the back of my mind. It was there when I watched television and followed the latest developments in Egypt or Morocco. When I read the news or watched the videos and pictures from the earthquake in Japan, or more recently when Britain, France and the US decided to intervene in Libya. I can follow the news minute by minute via Facebook or Twitter (and I do), but I feel I lack the context and local background to really understand what’s going on — like most of us. How will the intervention in Libya change the relationships in that part of the world? How will the earthquake and the issues with the Fukushima Daiichi nuclear power plant affect the Japanese economy? The world is flat, but we are still limited by our own horizons.
Yesterday I attended the first day of the ESOMAR Shopper Insights Conference 2011 in Brussels, and I was pleasantly surprised by the innovative thinking by the presenters, both in the methodologies used and in the way they look at the Market Insights profession.
There were a number of presentations on innovative methodologies, such as eye-tracking. All of them had cool videos to share and gave insights into how these methodologies can be used to better understand shopper behaviors. The presentation that really stuck with me, however, was from Stephanie Grootenhuis, from Kraft Foods International, who talked about the “Incite to Action” initiative.
She came on stage, and said: "All the presentations until now have talked about understanding shoppers better and the difficulties you encounter when doing (global) research. But to be honest, that's not my biggest challenge. What my team struggles with is HOW to share our knowledge and communicate our findings effectively into the organization."
It’s that time of year again, when we tend to look back at what has been and look forward to what will happen. Looking at this from a professional angle, 2010 was a very interesting year for the industry: research vendors bounced back from the recession, there was an increased focus on added value, and we saw a lot of innovation happening. In our report Predictions 2011: What Will Happen In Market Research, my team and I have identified a number of trends that we expect to shape market research in 2011.
Organization, technology, and social are defining the research agenda in 2011. In fact, in 2011 market researchers need to embrace social media as an information source, recognize technology as a driver of change while understanding how to implement it effectively, and continue to identify and integrate innovative methodologies to prepare for the future. This will drive, for example, the following trends:
Last week I was at Forrester's Consumer Forum in Chicago, where I gave a presentation with the title “If The Company Only Knew What The Company Knows: Introduction Of A Knowledge Center Can Empower Market Research Professionals.” For this presentation I did quite a lot of research and talked to many market researchers who have implemented some kind of knowledge management system. Knowledge management systems come in all kinds of flavors and with varying degrees of success, but the market researchers who managed to build a successful, engaging, and widely used system all agreed that it had changed their role.
In fact, the companies we spoke to all saw their knowledge management as a competitive advantage. Although we found a number of market researchers willing to participate in our research, none of them wanted to share all the ins and outs. In keeping with the theme, they said, "We don’t want others to know what we know."
But how can market researchers introduce knowledge management to their organizations? Based on our research, we see three different levels:
1. Build a research center of excellence within the department.
2. Implement a system for sharing and distributing (research) information with the organization.
3. Develop a companywide knowledge management system.
You might be wondering why this post has nothing to do with Latin American consumers. Well, in addition to my Latin American research, enterprise feedback management (EFM) is a new and exciting coverage area that I will be addressing to help market research (MR) professionals. My goal is to assist you in finding the right tools and processes to make sense of the copious amounts of consumer information collected across your company and synthesize it into coherent, actionable solutions.
What is EFM? Right now it means several things. From the viewpoint of a customer experience (CXP) professional, it is a tool that can be used to assist in developing a systematic approach for incorporating the needs of one’s customers into the design of better customer experiences, or what we call at Forrester voice of the customer (VoC) programs. My colleague Andrew McInnes will be covering EFM, as well, but from the perspective of how CXP professionals can utilize these tools.
For a market research professional, it is also a tool, but one that is not limited solely to collecting customer experience feedback. I see it as an advantage in two main ways.
As mentioned in some earlier posts, in recent quarters I have been looking into the role that market research professionals play (and can play) with regard to information management. I’ve had many enlightening conversations about this topic with both vendors and client-side market researchers.
Technology developments result in more and more information becoming available internally, in different parts of the organization. Just think about all the data an average company collects or buys — media measurement data, advertising awareness, advertising spend, retail data, sales data, competitive intelligence, Web-tracking data (from listening tools), Web site tracking, marketing data (e.g., Nielsen Claritas), customer satisfaction surveys, brand trackers, and other primary research data, to name just a few. One vendor estimated that the average research department handles around 50 different research sources!
When I spoke with vendors about their relationship with clients, each and every one of them was looking for ways to increase the level of engagement. For one thing, they are working on best-in-class reporting tools to make it easier for clients to process their data and make it visually more interesting — and hopefully easier to use. However, not many vendors think further than their own set of data. When questioned, they mention that their systems don’t allow for third-party data. Yes, it’s possible to link to internal CRM systems, but that’s about as far as things go.
On two occasions in the past few months, I’ve given a speech to members of Forrester’s Market Research Forrester Leadership Board about vendor management best practices, a topic I’m writing a report on. With market research budgets shrinking and research expectations growing, we see that market researchers need to select, manage, and measure their vendors more efficiently.
The key to success here is to develop partnerships with your key vendors. Why? Because conversations with market research professionals at a variety of organizations show that partnering with research vendors leads to better projects, deeper insights, and lower costs. As one of my interviewees said: “It’s about intellectual ROI: You need to invest less time for each project. You build a lot of equity. You also get more of a team thing going — to me, this is very important. You work with these people on a daily basis, so finding the right vendor and contact is critical, as we see them as colleagues.”
To understand how Market Research professionals currently collaborate with their research vendors, we surveyed our Market Research Panel earlier this year. The majority of our panelists feel that they already have established partnerships with most vendors, and two-thirds state that price is less important than quality.
For a track session at Forrester's Marketing Forum at the end of April, I dived into the topic of customer satisfaction. For market researchers looking to set up a customer satisfaction (CSAT) study, much guidance is available. However, it also became clear to me why, despite all this advice, many customer satisfaction projects fail.
Most of the information I found -- or the conversations I had, for that matter -- was around the ‘science’ part of CSAT studies: the methodology and set-up. There are many discussions online about questions like which scale to use, which questions to ask (or not), whether a company should focus on relational versus transactional measurement, or whether it's better to conduct a customized CSAT project or use an established method like Net Promoter.
However, in my conversations with market researchers, I found that the success of CSAT projects isn't based as much on science -- although a sound and repeatable set-up doesn't hurt -- as much as it is on ‘art.’ The art lies in understanding the company’s business issues; translating these into a well-structured questionnaire; finding the drivers for success; and later, when the results are in, presenting the results in an actionable format.
Any customer satisfaction project that focuses solely on numbers misses out on the 'art' element of CSAT. Of course, using a standardized methodology helps the company benchmark itself against its competitors. But what does it mean when 80% of your clients are satisfied? The organization will look at this number and want to drive it up, without any understanding of what the impact on the bottom line will be when the percentage of satisfied customers increases from 80% to 82%.
Last week, I was in LA, hosting a session on online panel quality at Forrester’s Marketing Forum. I discussed the past, present, and future of online panel quality with Steve Schwartz from Microsoft, Maria Cristina Gomez from Procter & Gamble, and Frank Findley from ARS Group.
Online panel quality is still a major issue in the industry. The whole discussion started in 2006 with a speech by Kim Dedeker -- at that time, the VP of global consumer and market knowledge at Procter & Gamble. In it, she publicly expressed her concerns about online panel quality, how it affected their research results, and, as a result, the credibility of market research. In her speech, she stressed that, in her opinion, the industry – both research suppliers and clients – needed to focus on how to improve the overall quality of research. Her appeal to the industry was very successful. Many other research buyers weighed in with their stories, and the research providers took up the challenge. Since then, many initiatives have started, such as the ARF’s Foundation of Quality and ESOMAR’s 26 questions, as well as more technology-driven approaches like Peanut Labs’ Optimus and MarketTools’ TrueSample.