Part 3: Measuring success through quantitative insights

The key question everyone should ask when a product or service goes live is 'can we prove it's been successful?'. This post looks at how to gather metrics that demonstrate successful project delivery. Working in digital means almost everything we create can be measured, because where there's technology, there's data. It's this data that gives us the insight to understand success or failure.

To fully understand the success of a new product and justify its spend, we must first benchmark as-is data so we have something to measure against during the design phase and after launch. Measuring success is not an afterthought; it's a process that starts with the business case (and should ultimately help shape the reasoning for the business case in the first place, based on KPI and ROI predictions).

Typically defined metrics for our B2B clients

For our B2B clients, as-is data is typically captured manually through ethnographic studies, observing and recording dependent variables of key tasks such as time on task, success rate, error rate and efficiency wastage. This is usually a manual process because little analytical data is recorded for the common tasks being completed.

Other methods include running pre- and post-task-completion surveys with internal staff. Whatever method(s) are used, it's important that a minimum of 40-50 responses is captured for each task, because measuring success can only be reliably achieved through quantitative analysis.

Typically defined metrics for our B2C clients

For our B2C clients, where live sites, systems or applications already exist, defining the as-is state is typically achieved by recording existing analytical data (such as time, clicks, completion rates and repeat visits). Other ways of capturing the as-is state include interception surveys on existing live sites and SUS (System Usability Scale) scores to record a user's attitude towards the existing site. Other options include NPS (Net Promoter Score) and CSUQ (Computer System Usability Questionnaire).
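To make the SUS figure concrete, here's a minimal sketch (in Python, not taken from any particular analytics tool) of the standard SUS scoring formula, which turns one respondent's ten Likert ratings into a 0-100 score:

```python
def sus_score(responses):
    """Convert ten SUS item ratings (1-5 Likert scale) into a 0-100 score.

    Odd-numbered items are positively worded (contribution = rating - 1);
    even-numbered items are negatively worded (contribution = 5 - rating).
    The summed contributions are multiplied by 2.5 to give a 0-100 score.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten ratings, each between 1 and 5")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # items 1,3,5,7,9 sit at even indices
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5


# One hypothetical respondent's ten ratings
print(sus_score([4, 2, 4, 1, 3, 2, 5, 3, 4, 2]))  # → 75.0
```

A single score means little on its own; the average across all respondents (ideally 40-50, as above) is what becomes the benchmark.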

The recorded scores, along with dependent findings, become the benchmark data that all future improvements are measured against. Whatever measurement techniques or tools are used, reliable trends and statistics can only be identified from large volumes of data, gathered through quantitative studies.

Did you know, here at Answer Digital we have a dedicated Experience Design (XD) service? Called 'XD by AD', we work with companies of all sizes offering dedicated UX, CX & Service Design and Design Sprint expertise to help design great experiences and moments across digital and non-digital touch-points.

Experience great experiences. Read more about XD by AD >

Measuring success on proposed concepts

Once concepts (represented by wireframes/prototypes) are ready to receive feedback within the evaluation research phase, the benchmark as-is data is used to help shape the test script, ensuring we're asking the right questions. We then use a range of quantitative analysis tools to directly compare the proposed concepts and gather reliable results, so we're making future design decisions based on fact, not subjective opinion. This feedback and analysis process all sits within a typical agile project structure.
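As an illustration of that kind of direct comparison, here's a small sketch (the function and the completion counts are hypothetical, not from a real project) of a two-proportion z-test checking whether a new concept's task-completion rate is genuinely better than the as-is benchmark, rather than noise:

```python
from math import sqrt, erf


def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided two-proportion z-test: did the completion rate really change?

    Returns the z statistic and an approximate two-sided p-value
    based on the standard normal distribution.
    """
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value


# Hypothetical data: as-is benchmark 28/50 completed; new concept 41/50
z, p = two_proportion_z(28, 50, 41, 50)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With roughly 50 responses per condition (the sample size suggested earlier), a difference of this size comes out statistically significant; with only a handful of responses it would not, which is exactly why quantitative volume matters.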

Forrester estimates that for every $1 spent fixing a problem during design, it costs $5 to fix the same problem during development. Even worse, if a problem is not spotted until after release, that price rockets to $30.

The benefits to measuring success

There are huge benefits to measuring success throughout the design process. It builds confidence during the design phase that the new proposed experience is suitable. And if it isn't, we know to pivot the design direction without committing to code being cut (which is five times more costly).

Gathering user feedback and measuring success ensures we reduce subjectiveness where possible and increase our focus on making decisions based on fact. This can certainly be a culture shift in some organisations (both large and small), but it's one that will benefit the business: we can start to reduce unnecessary distractions about design, reduce ambiguity in the design direction and ultimately ensure we're delivering a truly customer-centric solution.

Once a product, website or application goes live, key business stakeholders will want proof that there is a tangible benefit to the business, through greater conversions, efficiencies, satisfaction and completion rates... which leads to a more profitable business. If you can prove success and business benefit, then why wouldn't future funding be approved?

Quickly prove something will fail!

Big tech businesses like Google pay bonuses for people to kill projects. The quicker you can prove something is going to fail, the more money you'll save and the less customer negativity you'll create. Decisions like this are based on reliable insights and stats.

Other related UX blogs...

Here is the first instalment of lifting the lid on UX practices, Part 1: The role of a UX designer.

And here is the second instalment, Part 2: Understanding the needs of the customer.

Want to learn more?

If you want to hear more about our UX research, design and evaluation services and expertise, then contact our Principal UX Consultant, Andy Wilby, on 07595 878876.

About the Author...

Andy has been an Experience Design professional for over 13 years, solving problems in UX, Service Design and Design Sprint roles. Companies he's worked for include HSBC, Aviva, NHS, Bupa, Co-op and various Government agencies. Before joining Answer Digital, he was a Lead UX designer at Aviva within a 45-strong team of designers. His passion is working with clients to solve complicated problems through beautifully simple experiences that marry business and customer needs.