Customer Effort Score (CES) is one of the big three drivers of modern customer service management, along with a company’s Net Promoter Score (NPS) and Customer Satisfaction (CSAT) metric.
CES is supported by a fair amount of research showing a strong long-term correlation between perceived customer effort and customer loyalty.
No surprise, but after a lifetime of being marketed to, consumers have learned to demand that the things they buy, and the experience of buying itself, make their complicated lives easier.
In this guide I'll walk you through CES basics, what you can gain from it, and risks to watch out for.
The definition of Customer Effort Score is pretty straightforward.
It's the metric businesses use to measure how much work customers have to do in order to get their needs met.
To calculate CES in this framing, take the share of a customer's interactions with your business that were positive and subtract it from 1.
The higher the CES, the more effort customers are having to put in to get what they need from your company.
Today, Customer Experience (or CX) professionals tend to get the most value from measuring CES. But product managers, customer success teams, and marketers commonly measure it as well.
CES is an important metric because it gives those teams insight into how much work customers are putting in towards their needs - and what you can do about it (if anything).
If Customer Effort Score runs high for certain interactions, this might be indicative of how difficult it is for your customers to communicate with your staff by phone or email - or, even worse, that there's no easy way for them to contact your business at all.
This means that customers could potentially give up trying to get their issues resolved, which would mean lost sales and increased customer churn.
Let's say your business has 100 customers, and you log 100 total interactions with them each month (both positive and negative).
Of those 100 customers, 50 are generally very satisfied with your product or service.
They rarely need assistance, and end up having a positive experience every time they interact with your company - say they account for 20 of the month's interactions, all positive.
The other half of your customers aren't happy at all.
Each interaction they have on the phone is difficult. They often wait for extended periods on hold, get passed around to different people, deal with unhelpful staff members, etc. Because their issues rarely get resolved the first time, they keep coming back - the remaining 80 interactions, all negative.
From these interactions, let's calculate how many 'pushes' - repeat contacts - the average unhappy customer had to make in order to get their problems solved.
Number of Pushes = Number of Negative Interactions / Number of Unhappy Customers = 80 / 50 = 1.6 contacts per unhappy customer
That's right - the average person unhappy with your company has had to push your business 1.6 times a month in order to get their needs met.
So you can probably guess how they feel about using your service again.
Customer Effort Score is calculated by taking the number of positive interactions, dividing it by the total number of interactions, and then subtracting that fraction from 1.
So, 20/100 gives us 0.2 for the positive share, and 1 - 0.2 = 0.8.
This results in a CES of 80%. So, your company's CES score is 80 out of 100 - a score indicating that customers are experiencing fairly high effort when getting their needs met.
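A minimal Python sketch of this ratio-based calculation (the function name and interaction counts are illustrative, not from any particular tool):

```python
def customer_effort_score(positive_interactions: int, total_interactions: int) -> float:
    """CES in the ratio framing: the share of interactions that were NOT positive."""
    if total_interactions <= 0:
        raise ValueError("need at least one interaction")
    return 1 - positive_interactions / total_interactions

# 20 positive interactions out of 100 total gives a CES of 80%.
ces = customer_effort_score(20, 100)
print(f"CES: {ces:.0%}")  # prints "CES: 80%"
```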
Getting a Customer Effort Score is powerfully simple: you ask the customer on a scale of 1-5 how easy [whatever you just made her do] was.
Ask her in as close to real-time as possible, so the experience is still salient in her mind, and voilà, an accurate subjective data-point is generated.
Aggregate it, benchmark it, improve it. Measure, ergo manage.
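In this survey framing, the aggregation step is just an average over responses. A minimal sketch, with made-up response data for illustration:

```python
from statistics import mean

# Hypothetical 1-5 "how easy was that?" responses, gathered right after interactions.
responses = [5, 4, 2, 5, 3, 4, 1, 5]

# Average CES on the survey scale: higher means customers found things easier.
avg_ces = mean(responses)
low_effort_share = sum(r >= 4 for r in responses) / len(responses)
print(f"Average CES: {avg_ces} / 5; {low_effort_share:.0%} of responses rated 4 or 5")
```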
Of course the risk of simplicity is oversimplification.
Obviously one number is not the whole story, even about a single service experience.
A qualitative study consisting of deep interviews, follow-up surveys, diaries, social media engagement, etc. would generate orders of magnitude more information about the same event than a single Customer Effort Score.
Who has the budget for that much research? No one.
These days, the research conducts itself: there’s customer experience data relevant to virtually every company in the US lying in the street (virtually) online.
At Apple/Google/Amazon, at Facebook/Twitter/Instagram, customers review and comment on their experiences with thousands of brands per second, not to mention the surrounding ecosystem of specialized review sites and services.
Yet hardly anyone in most companies is paying any attention to this tsunami of experience-flavored "exhaust data".
And it is seemingly the opposite of simple and actionable.
But with recent advances in Natural Language Processing, Machine Learning, and Sentiment Analysis, this sea of unstructured but relevant data is actually navigable, and open for fishing.
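As a sketch of the idea only: even a toy lexicon-based scorer shows how unstructured review text can become a structured effort signal. The word lists and reviews below are made up, and real pipelines would use trained NLP models rather than keyword matching:

```python
# Toy lexicon-based sentiment scorer - a stand-in for trained NLP models,
# just to show review text being turned into a number you can aggregate.
POSITIVE = {"easy", "quick", "helpful", "great", "painless"}
NEGATIVE = {"hard", "slow", "unhelpful", "frustrating", "hold"}

def sentiment(review: str) -> int:
    """Positive-word count minus negative-word count for one review."""
    words = [w.strip(".,!?") for w in review.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

reviews = [
    "Setup was quick and painless",
    "Spent an hour on hold, totally frustrating",
]
for r in reviews:
    print(f"{sentiment(r):+d}  {r}")
```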
Another risk of relying on your customers to tell you how they feel is that they could tell you the wrong thing.
For instance, if they’re still waiting on hold with your business, or at their wit’s end about an unresolved issue with your product, it’s unlikely that they want to take another call with your representative.
Even if the rep promises that this time around the experience will be “balloons and rainbows”.
One way to avoid getting false positives in this vein is by asking for a CES when the customer is expecting one (i.e., after having experienced something positive) — rather than during an interaction where the company has managed somehow to botch things up yet again.
If you want your CES to carry as much meaning as possible, make sure it isn't asked too early.
After all, there's bound to be the occasional genuinely good moment with your company (say, a customer service rep resolving an issue before it blew up into a pesky problem).
Ask about that moment in isolation, right away, and it can skew your numbers toward lower effort scores than the overall journey deserves.
Attitude vs. Ability
Another way for Customer Effort Score to run off the rails is by mixing up customers' willingness with their ability.
If they're willing, but can't go through with something (for lack of time or money), then don't expect a good CES score - though you should still consider asking them why they couldn't follow through.
James Taylor, who literally wrote the book on Customer Effort Score ("Making the Case for Customer Effort Score"), provides an example of this pitfall.
He writes "I would argue that if you ask people about their willingness to sign up for your product at 8 am when they are rushing off to work, you're likely asking them about their ability - not their attitude!"
Well said, Mr. Taylor.
The point being, CES measures the effort customers are willing to make, not their overall ability to follow through.
Ask them after they've finished using your product or service - at that point, ability is no longer part of the equation.
Think before you ask.
Resist Turning CES Into a Vanity Metric
Improving CES might make you feel like you're making "progress", but don't let it go to your head.
Customer Effort Score is not the be-all-end-all of the customer experience, and should be used as an indicator of progress (or regression).
Not a goal unto itself.
It's an experience flavor additive more than a magical elixir for all that ails your business.
Don't forget CSAT and Customer Effort Score have their limits, just like any other satisfaction metric. And customers are only one part of the puzzle.
Your next step is getting managers involved in giving feedback on experiences they witness first hand.
Customer Effort Score is one of the most popular metrics in customer experience analytics.
You can read all about it on the web in even greater detail than here.
Which makes sense. Companies that get closer to the customer usually end up winning.
But Customer Experience Analytics are still just that - numbers on an Excel sheet someone has to look at and act upon.
And CES scores without action are useless.
I hope this article has helped you understand Customer Effort Scores better, how to benefit from them, and what you should watch out for.