Your customer feedback pile needs a makeover: A system for sorting so you’ll always work on the right things
All signal, no noise: A practical method for sorting through high-volume feedback and finding customer gold.
In the past few years, all this continuous discovery and better friendships between teams have created a monster.
UXR, product, and customer success are working together, and sharing a veritable mountain of customer feedback.
Don’t get me wrong. I’m a big fan of teams finally working together and making un-siloed work actually work.
But all that customer feedback we wanted to collect from all sides of the org? Now it’s stored in scattered drives that are messier than a college dorm.
And nobody has time to sort through it.
Likewise, a product team running their own research makes me happy and makes me cringe at the same time. I’ve never met a product team that knew what to do with its own insights after talking to a truckload of customers.
They tend to want to use all the insights for the upcoming project, and the overwhelm builds fast, because some insights matter, some don’t, and the team doesn’t know which is which.
The problem with piles
Whether it’s a pile from a single round of discovery, or a whole organization’s worth of triangulated feedback, it faces the same challenge.
We’ve saved all those bits and pieces of feedback in a wasteland that, if we’re honest, we’ll never find the hours to sort through to finally find what we need.
All the teams I’ve worked with can search their pile for the keywords customers might have mentioned, but it still isn’t enough.
You still need to decide what to prioritize now.
“How do we know which things we’ve heard from users we should actually prioritize?”
- 90% of my clients
Important and impactful insights only
I want to give you the fastest and most effective shortcut I’ve ever found to make sense of that pile. It doesn’t take days or a full-on project on the roadmap to sort it out.
The problem with piles of feedback is that we usually don’t have a plan for sorting the relevant information from the rest.
These days, AI tools that say they’ll help with this are popping up left and right, and I think that’s great. But we still need to know what makes an “important stuff” pile different from items that don’t make the cut, AI or no.
What you want at the end of a day of feedback-collecting is a stash of just the good stuff that you can work with, right now.
Posts all over the Internet will tell you that we must “quantify the impact and importance” of any feedback item before deciding if it should be a priority.
That sounds good, just like having a huge pile of feedback. But in practice, it leaves a lot up to fate and guesswork. It won’t help unless we know precisely what importance and impact mean.
How do we decide that something is, in fact, important?
What kind of impact should a single piece of feedback from Customer Support chats or Sales calls have?
I don’t like prescriptions that leave success or failure to guesswork by a single inexperienced person. I like systems that make it easy for both the most senior and the most junior person on a team to get a solid result.
To get there, we need to define importance and impact in concrete terms.
Regardless of your team’s current focus, from big vision to small optimization projects, this system can work.
Because no matter what you’re working on, you very likely have two specific needs.
The two needs of every product team
Nine times out of ten, the teams I work with carry too much customer feedback through their analysis and prioritization process.
All the insights in the pile might be interesting. But they’re irrelevant unless they help you do one of two jobs that every product team needs done:
Add proof for or against an existing hypothesis to make an upcoming decision.
Highlight new opportunities, now or soon.
Need to revamp the whole pricing model? Look for proof addressing the hypotheses you have about what’s happening with pricing today. Then, look for new opportunities to price differently from competitors.
Want to optimize onboarding for new users? Look for proof addressing the hypotheses you have about what’s happening with onboarding today. Then, look for new opportunities to make onboarding more engaging.
Planning to develop a new product for a different segment? You get it.
A simple two-point checklist that has worked for startups, scale-ups, and more established teams
Most product teams are (or should be) looking for these two things all the time, and putting feedback into a pile for each:
Customer input that helps us prove or disprove an existing hypothesis
Customer input that highlights new opportunities
Once you start labeling customer feedback, you’ll notice that a lot of it isn’t either of those things.
So there’s a third category. Everything else goes into a “slush pile”.
Borrowed from the literary world, a slush pile is a place where currently irrelevant things live. Book publishers put unsolicited manuscripts sent by unknown authors into this pile until they have time to read them later.
If feedback doesn’t do one of those two core jobs above—addressing a hypothesis or highlighting an opportunity—it should go to a backlog. It should not go into the select list of observations and insights we are using to inform our current bets.
However, we don’t want to trash this feedback completely. It might be important later. We only want to clarify that, for the moment, it’s not what we need.
When we put every piece of feedback into one of those piles, we get a clear picture: this feedback is important now, and we can ignore the rest.
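By the way, you don’t need special tooling to run this check. If your feedback lives somewhere you can export, a rough sketch of the triage loop in Python could look like the one below; the judgment calls stay human, and the script only records which pile each item lands in. The file name and the “text” column are placeholders for whatever your export actually contains.

```python
# triage.py: a minimal sketch of the two-point check as a manual triage loop.
# Placeholder assumptions: feedback is exported to feedback.csv with a "text"
# column, and a reviewer answers y/n for each item at the command line.
import csv
from collections import defaultdict

QUESTIONS = {
    "hypothesis": "Does this help prove or disprove an existing hypothesis? [y/N] ",
    "opportunity": "Does this highlight a new opportunity, now or soon? [y/N] ",
}

def triage(path="feedback.csv"):
    piles = defaultdict(list)  # "hypothesis", "opportunity", or "slush"
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            print("\n" + row["text"])
            label = "slush"  # everything else goes to the slush pile
            for pile, question in QUESTIONS.items():
                if input(question).strip().lower() == "y":
                    label = pile
                    break
            piles[label].append(row["text"])
    return piles

if __name__ == "__main__":
    for pile, items in triage().items():
        print(f"{pile}: {len(items)} items")
```

Whatever never earns a “yes” stays in the slush pile, and the other two piles are the shortlist you actually work with.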
An example for the naysayers
I’ll assume you’re still skeptical and don’t want this to work. Let me give you an example.
My client’s team was working on their onboarding. The product manager in charge was tasked with an onboarding iteration that would deliver higher conversion and activation. Typical, right?
But they knew they should collect proof points before the team dived in.
They checked their collection of customer feedback in various tools.
Per usual, they weren’t sure how to pull out what mattered.
So they searched their feedback pile for everything around onboarding: whatever the CS team had tagged, comments from new users in the first seven days, and so on.
Now they had a longer list than they expected. No one really wanted to read 257 individual feedback items about onboarding, especially without knowing what they were looking for.
But the product manager could tell me that they were assuming a few things about onboarding. One, that asking the new user ten questions in the onboarding process was causing some drop-off. Two, that asking for less information would probably increase conversion through all onboarding steps. They had a hypothesis.
So for each piece of feedback, I told them to ask:
Does this address our belief that we are asking too much of the user in the onboarding today?
Does this present a new opportunity for us within onboarding/the first X days of use?
It took just half an hour to skim through most onboarding feedback items, because they knew what to look for. In the end, only 50 pieces of feedback played the two roles they needed—addressing an onboarding hypothesis, or presenting an opportunity.
They quickly saw a few trends. They quickly disproved part of the hypothesis. They had a feedback shortlist that helped them make a decision. Plus, they had a shortlist of opportunities to discuss and test as a team.
The bigger pile left over was their slush pile. Those items aren’t forgotten, but they’re not important, impactful, or relevant right now, and they don’t help make the decision on the table about how to improve onboarding.
...But don’t forget about segmenting
This wouldn’t be complete if I left out the importance of listening to the right customer.
In using this two-point check system, I assume you know who your ICP (ideal customer profile) is, and how to recognize them in the feedback or its identifying information.
There is no magic pill that solves bad customer identification. You can use this two-point system for all eternity and still not prioritize the right things if you don’t know which customer types to listen to.
If you know that your team gets feedback from multiple different customer segments, it’s best to add a third check:
Is this feedback coming from the specific audience we are building the product for?
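If your exported feedback already records which segment each customer belongs to, that third check can sit in front of the other two. Here is a tiny, hypothetical extension of the sketch above (the “segment” column and the ICP values are placeholders, not something every tool provides):

```python
# Hypothetical extension of the triage sketch: skip feedback that doesn't come
# from the segment you're building for, before applying the two-point check.
ICP_SEGMENTS = {"mid-market", "enterprise"}  # placeholder values; use your own

def is_from_icp(row):
    # Assumes the export has a "segment" column identifying the customer type.
    return row.get("segment", "").strip().lower() in ICP_SEGMENTS
```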
Conclusion
Sorting the mass of qualitative feedback doesn’t have to be complicated. When it feels that way, it’s usually because we’re missing a clear definition of what is important and impactful for us.
Since most product teams today know what they’re assuming and can usually articulate it, this system of checking feedback items against current beliefs can quickly tell us whether a piece of feedback is important and impactful.
When feedback doesn’t help prove or disprove what we believe to be true about the experience at hand, it might still be the next great opportunity. When it isn’t either of those things, get comfortable with putting it out of mind, for now—and keeping the feedback list short and focused.