In most cases, life’s more complex questions have surprisingly simple answers. In today’s selling environment it’s often hard to determine who exactly is “the buyer.” Your salespeople are given a lot of inputs:
Your executive leadership wants them calling on “business people” or “executives.”
The sales training courses they have been to instruct them to find “champions,” “decision-makers,” and “influencers.”
Marketers produce information about “personas.”
Business unit leaders and other subject matter experts talk about “users” or “doers.”
Sales managers tend to be more interested in understanding the opportunity (Access to power? Is it qualified? Is there budget allocated? When is the account going to make a decision?).
Their contacts within a given account give them different people or process steps to follow, or kick them over to procurement.
With all of the different voices – “You should do this,” “You should say that,” “You need to present this way” – echoing in the heads of your salespeople, things can get very confusing.
A Tale Of Two Sales
The thing is, the buying environment for most of us has changed, leaving us with two distinctly different buying patterns:
On the one hand, the customer knows what they want and has developed fairly sophisticated procurement steps to acquire what they need at the best possible price.
On the other hand, the customer is looking for the expertise to help them get value from their investment and solve a problem.
I regularly hear CIOs and IT suppliers discussing the “four pillars” of cloud, social, mobile, and big data as if they’re an end in themselves, creating plenty of buzz around all four. But really, they’re just a means to an end: Cloud, social, mobile, and big data are the tools we use to reach the ultimate goal of providing a great customer experience. Most CIOs in Australia do understand that digital disruption and customer obsession are the factors that are changing their world, and that the only way to succeed is to embrace this change.
Last weekend I used my AAdvantage miles on a plane ticket for my husband. I went to AA.com, where it was easy to trade off options based on the number of miles used and the flight schedule. When I went to book, my name and AAdvantage number were pre-populated into the form. I changed the name and number to his but got an error: “The AAdvantage number for Passenger 1 does not match the name entered. Please verify and re-enter.”*
Problem #1: A design problem stopped me from booking the ticket myself on the site.
Problem #2: An unhelpful error message didn’t help me fix the first problem.
Without any other choice, I called for help. Before I could reach a person, or even a menu, I got this message:
“With the refreshed and redesigned AA.com it’s easy to book, explore, and plan all of your travel needs in one place because we’ve organized things better, made it more intuitive, smarter, simpler, cleaner, all to help bring your next trip closer to reality. This is the first step of more exciting changes we have planned for AA.com. Whether you are looking or booking, a better travel experience awaits with the new, easy to navigate AA.com. Book a trip now and see for yourself. To expedite your call, please have your Advantage number ready.”
Problem #3: I had to spend a full minute hearing about how American’s new site could help me — the same site that had already failed to help me.
When I finally reached an agent and explained my problem, she said: “Well, you just had to think on it harder. You needed to leave the Advantage number blank.”
Problem #4: The agent told me I’m stupid. Who likes that?
Armed with new instructions, I tried to book the ticket. But instead I got an error message saying the site had timed out.
How should you measure customer experience? Is it even possible to measure something that feels as squishy as customer experience?
As it turns out, you can measure it, you should measure it, and you even have some decent options for measuring it. Your alternatives range from monitoring the real-world interactions your customers have with your firm (like clicks on a site or the length of a call) to asking your customers for their perceptions of those interactions (the real customer experience) to tracking what your customers do as a result of the experience (like making another purchase or recommending you).
At Forrester, we have our own direct measure of customer experience that we’ve been using since 2007: the Customer Experience Index (CxPi). Today we published the results for 2011, which are based on research conducted at the end of 2010.
To help understand those results, let me explain how the CxPi works. We ask more than 7,000 consumers to identify companies they do business with in 13 different industries. We then ask respondents to tell us how well each firm met their needs, how easy the firm was to work with, and how enjoyable it was to work with (questions that correspond to the three levels of the classic customer experience pyramid). Then for all three questions, we calculate each firm’s CxPi score by subtracting the percentage of its customers who reported a bad experience from the percentage who reported a good experience. The overall CxPi is an average of those three results.
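The arithmetic behind those scores is straightforward. Here is a minimal sketch of the net-score calculation described above, assuming each respondent's answer to a question has been coded simply as "good," "neutral," or "bad" (the coding labels and sample data are hypothetical, not Forrester's actual survey instrument):

```python
def question_score(responses):
    """Net score for one question: % reporting a good experience
    minus % reporting a bad experience, on a -100..100 scale."""
    n = len(responses)
    good = sum(1 for r in responses if r == "good") / n
    bad = sum(1 for r in responses if r == "bad") / n
    return (good - bad) * 100

def cxpi(needs, ease, enjoy):
    """Overall CxPi: the average of the three per-question net scores
    (meeting needs, ease of working with, enjoyability)."""
    return (question_score(needs) + question_score(ease) + question_score(enjoy)) / 3

# Hypothetical firm with 10 respondents per question.
needs = ["good"] * 7 + ["neutral"] * 2 + ["bad"] * 1   # net 60
ease  = ["good"] * 6 + ["neutral"] * 3 + ["bad"] * 1   # net 50
enjoy = ["good"] * 5 + ["neutral"] * 3 + ["bad"] * 2   # net 30

print(round(cxpi(needs, ease, enjoy), 1))  # → 46.7
```

Note that because the score subtracts bad experiences from good ones, a firm with many indifferent ("neutral") customers scores lower than one that delights most customers, even if neither generates many complaints.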
Quantifies the correlation between a rise in a company’s Customer Experience Index score and the corresponding increase in three loyalty metrics that every company cares about: purchase intent, likelihood to switch business to a competitor, and likelihood to recommend.
Makes conservative but realistic assumptions about the business fundamentals of companies in 13 different industries.
Produces eye-popping projections of increased annual revenue as a result of realistically attainable improvements in customer experience — by industry.