5 ways to quickly understand customer issues – and avoid the ‘feedback fallacy’
I call it the feedback fallacy – the illusion that with just enough customer data, organised just the right way, product decisions will become obvious and irrefutable.
In a VP Product role, I inherited what I thought was a mess. Customer feedback scattered across Slack, Jira, email, and our support platform. No unified tracking system, no clear prioritisation framework. “Amateur hour,” I thought.
Within months, I had architected what I considered a significant improvement.
Our unified feedback engine pulled from seven data sources, including several manual forms from our teams. Every item categorised. We weighted feedback by customer segment, ARR, renewal upside, and NPS score. Our scoring algorithm incorporated 10+ factors, including business impact, technical complexity, and strategic alignment.
Every month, this analysis would generate a pristine priority ranking that, in theory, represented objective truth about what we should build next. The product team called it “a game-changer.”
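A minimal sketch of what that kind of monthly ranking looks like in practice – the factor names, weights, and example items here are illustrative assumptions, not the real algorithm:

```python
# Hypothetical weighted-scoring sketch. Factors, weights, and items are
# made up for illustration; the real system used 10+ factors.
WEIGHTS = {
    "arr": 0.30,                 # annual recurring revenue of the requester
    "renewal_upside": 0.25,
    "nps": 0.15,
    "business_impact": 0.20,
    "strategic_alignment": 0.10,
}

def priority_score(item: dict) -> float:
    """Weighted sum of normalised factor scores (each factor in [0, 1])."""
    return sum(WEIGHTS[factor] * item.get(factor, 0.0) for factor in WEIGHTS)

def rank_feedback(items: list[dict]) -> list[dict]:
    """The monthly 'pristine priority ranking': highest score first."""
    return sorted(items, key=priority_score, reverse=True)

feedback = [
    {"title": "Competitor-parity feature", "arr": 0.9, "renewal_upside": 0.8,
     "nps": 0.4, "business_impact": 0.5, "strategic_alignment": 0.3},
    {"title": "Workflow fix raised by churned users", "arr": 0.2,
     "renewal_upside": 0.3, "nps": 0.7, "business_impact": 0.9,
     "strategic_alignment": 0.8},
]

for item in rank_feedback(feedback):
    print(f"{item['title']}: {priority_score(item):.2f}")
```

Note what a toy ranking like this does by construction: the high-ARR, competitor-driven request outscores the lower-revenue workflow fix, because nothing in the weighted sum can represent *why* customers are asking.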
There was just one problem: it was a colossal waste of time.
Six months later, despite our perfect system, we released features that flopped. How? Our sophisticated system had prioritised a feature that represented 24 per cent of the feedback. But the scoring algorithm missed crucial context that couldn’t be quantified: those customers were asking because a competitor had it, but none were actually using it after implementation.
This painful, expensive failure forced me to confront an uncomfortable truth. I had spent more time perfecting the feedback system than understanding the underlying customer problems.
The true cost of my elaborate system wasn’t just my team’s time – it was the false confidence it created.
When our executive team questioned the failed feature, I pointed to the dashboard:
“The data told us to build this.”
This wasn’t just deflection; I genuinely believed our system was ‘correct’.
The more sophisticated our weighting algorithms became, the more I trusted them over human judgment – and the more wrong I was.
I’ve since recognised this pattern across dozens of product organisations.
The biggest irony? Products built to solve actual customer problems often came from insights our system never captured – a passing comment in a customer call, an offhand remark from a churned user, a pattern noticed by an attentive CSM.
After my expensive lesson, I studied how the most successful product organisations I knew approached customer feedback.
One company had no formal feedback tracking system at all. Instead, they required every product team member to speak with at least five customers per week. Their CPO told me: “We value living, ongoing relationships over internal process.”
Another consistently outperforming team used a simple two-page document for each potential initiative, structured into three sections.
When I asked about prioritisation frameworks, their Head of Product said: “We hire for judgment, not spreadsheet skills.”
I’ve rebuilt my approach from the ground up, and five practices have served me well.
Dismantling my elaborate system wasn’t easy. I’d invested my ego in it. My team had spent hundreds of hours building it. Leaders had praised it.
But the evidence was undeniable: teams moving faster, building better products, and creating more customer value were spending less time on feedback systems, not more.
The first step to avoiding this is always the same: accepting that customer understanding is a continuous human practice, not a data problem to be solved.
Your customers don’t need you to classify their feedback perfectly. They need you to understand their problems deeply and solve them quickly. Everything else is just busy work.