Part 2: Different ways to gather evaluative feedback from users

A truly customer-centric process is about understanding key user needs, creating a solution to meet them, and then validating the designs to ensure they're successful. This post looks at methods of gathering direct user feedback and how that feedback will shape your design.

The validation phase, typically known as 'Evaluative Research', is where we start to gather direct feedback from users representing the key personas (identified during the initial 'Formative Research' stage) to ensure the solution meets their needs. If the service doesn't meet these needs, it will fail. The sooner we identify this, the quicker and cheaper it is to improve the solution.

There are two main feedback types within the validation phase: quantitative and qualitative.

Quantitative feedback captures the 'what': the stats. It involves gathering a large number of responses from a particular persona type (a minimum of 40) so we can identify trends in user behaviour and determine whether a task was completed successfully.

Typical quant validation methods include:

  • Tree-testing
  • Card sorting
  • Click testing
  • Preference testing
  • Feedback surveys
  • True intent surveys
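
To make the quantitative side concrete, here's a minimal sketch in Python (using an illustrative `completion_stats` helper of our own, not part of any testing tool) of why that minimum of 40 responses matters: a task completion rate measured with only a handful of participants carries a very wide confidence interval, while 40 or more responses narrow it to something you can act on.

```python
import math

def completion_stats(successes, total, z=1.96):
    """Task completion rate with a Wilson score interval (95% by default).

    Returns (rate, lower_bound, upper_bound).
    """
    p = successes / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = (z * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))) / denom
    return p, max(0.0, centre - half), min(1.0, centre + half)

# Same 75% completion rate, very different certainty:
# 6 of 8 participants succeeding gives an interval roughly 50 points wide,
small = completion_stats(6, 8)
# while 30 of 40 narrows it to roughly 26 points, enough to spot a real trend.
large = completion_stats(30, 40)
```

The Wilson interval is used here (rather than the simpler normal approximation) because it behaves sensibly at the small sample sizes typical of early usability rounds.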

Qualitative feedback captures the 'why'. It involves gathering a smaller number of responses from a particular persona type (typically 5-10) so we can understand, through conversation, why users failed at, struggled with, or successfully completed a task.

Typical qual validation methods include:

  • Face-to-face testing
  • Remote video testing
  • Lab testing

Did you know, here at Answer Digital we have a dedicated Experience Design (XD) service? Called 'XD by AD', we work with companies of all sizes offering dedicated UX, CX & Service Design and Design Sprint expertise to help design great experiences and moments across digital and non-digital touch-points.

Experience great experiences. Read more about XD by AD >

We're not believers in gathering feedback on half-baked ideas or rough sketches because this can lead to unreliable results. Of course we want to learn fast and gain insight quickly, but we also need to learn smartly and follow the right procedures to understand success and deliver confidence that the design direction is the right one.

"Forrester estimates that for every $1 to fix a problem during design, it would cost $5 to fix the same problem during development. Even worse, if a problem is not spotted until after release that price rockets to $30."

However, learning smart doesn't mean we should spend months perfecting the wireframe solution before putting it in front of users. It requires a fine balance: a clear understanding of how to embed customer-centric disciplines into an Agile process, and knowing exactly which validation techniques to use (and when to use them). We've mastered these traits through years of experience creating engaging, intuitive and relevant solutions for the retail, finance and health sectors within Agile-only methodologies.

Other related UX blogs...

Here is the first installment of our lifting-the-lid-on-UX-practices series, Part 1: The role of a UX designer.

And here is the third installment, Part 3: Measuring success through quantitative insights.

Want to learn more?

If you want to hear more about our UX research, design and evaluation services and expertise, then contact our Principal UX Consultant, Andy Wilby, by email or on 07595 878876.

About the Author...

Andy has been an Experience Design professional for over 13 years, solving problems in UX, Service Design and Design Sprint roles. Companies he's worked with include HSBC, Aviva, NHS, Bupa, Co-op and government agencies, to name a few. Prior to joining Answer Digital, he was a Lead UX for a 45-strong team of designers at Aviva. His passion is working with clients to solve complicated problems through beautifully simple experiences that marry business and customer needs.