
A scalable UX and UI audit method

The essentials

The project

Generate a repeatable process for auditing user experiences and interfaces in a product where UX hasn't always been at the front of the queue.


My role

As team leader, I worked with the researchers and designers to identify the best-performing elements of several processes already in use across the business, then codified these into a 'best practice' workflow, with templates for key sections, adding steps where necessary.


The impact
  • Easier to deliver audits as a team, rather than each individual using a bespoke process

  • Anyone from the team (or outside) could understand and engage with the process and its artefacts

  • Easier to hand over to development squads for implementation


Challenges

The key issue was delivering a repeatable system which could be picked up by any future UXer, within the culture and constraints of the business. The method needed to work in a way which would lead to impactful outcomes, as well as scaling easily between the "let's check for problems" basic audit and the JTBD-focused, research-dependent extended audit. Both had to feed into the software development lifecycle in a consistent way, with outcomes clearly linked back to evidence.


We managed this by working closely as a team to understand what was and wasn't working with existing audits, and then testing and evolving our combined approach in real-world situations, constantly returning to incorporate new insights. 

Audit scope

Our audit method needed to test for the following...

The method

High-level overview - scroll down for details and examples at each stage

The method in detail

1 / Type and scope

The method was designed to allow us to perform two types of review, with one always performed and the other optional:

  • Basic UX/UI audit: check for adherence to standards and expectations (see step 4 for a definition of these). The majority of issues arising could be fixed with front end changes only.

  • Extended usability review: extend the audit with a detailed review of the usability. Check for fitness of the system to deliver the desired outcomes, and look for opportunities for improvement, some of which would require structural (back-end) changes.


We defined scope by one of two methods, depending on the type of audit required:

  • Basic: define a module or page scope, for an audit which was focused purely on UX and UI standards and expert opinion.

  • Extended: the review would use a specific set of JTBD and the user journeys required to support these. In limited cases, a simple 'task' could be used in place of a full journey (usually if the scope of the JTBD was limited to a single page).

2 / SUS baseline (usability review only)

If the audit was designed to assess and improve usability, the baseline SUS score was assessed in the standard way:

  • Define a journey to follow, with a script to support the user

    • This was usually composed of sections of multiple separate user journeys from our research database​

  • Ask a number of users to attempt to follow the journey and deliver the desired outcome

  • Assess SUS with the standard survey, and score responses
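As a sketch of the scoring step above: SUS uses ten alternating 1-5 Likert items, with odd-numbered (positive) items contributing their response minus 1, even-numbered (negative) items contributing 5 minus their response, and the sum scaled by 2.5 to give a 0-100 score. The function names here are illustrative, not from the project's tooling:

```python
def sus_score(responses):
    """Score one participant's ten SUS responses (each 1-5) on the 0-100 scale.

    Odd-numbered items are positively worded: contribute (response - 1).
    Even-numbered items are negatively worded: contribute (5 - response).
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd item
        for i, r in enumerate(responses)
    )
    return total * 2.5


def average_sus(sessions):
    """Average SUS score across several participants' response sets."""
    return sum(sus_score(s) for s in sessions) / len(sessions)
```

A participant answering every item neutrally (all 3s) scores 50, which is a useful sanity check when wiring up a scoring sheet.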

3 / Detailed user review (usability review only)

Taking recordings of the SUS scoring sessions from stage 2, we annotated the user journey with user insights and screenshots to give context. Where possible, we used direct quotes from users to highlight frustrations, missing elements, areas which were over-complicated for the majority of use cases, and any other perceived opportunities to improve the flow and/or function.

4 / Core UX/UI audit (UX specialist)

For any audit, whether JTBD/journey-based, or 'simply' a generalised UX review, this stage was critical. We used three key bases to perform a 'hygiene' test of the UX and the UI:

  • For general UX, we used the lens of the Nielsen-Molich usability heuristics.

  • For the UI, we compared what was on screen with our internal design system, checking that the right components were being used in the right way.

  • Accessibility standards were already baked into the design system, but we performed a manual audit and ran Lighthouse checks to ensure that nothing had been missed or incorrectly implemented.

5 / Reporting

In all cases where we identified the need for improvement, we used a bug reporting mechanism based on 4 levels. These bugs were derived from both the user and the specialist reviews, and were categorised as follows:

  • Opportunities: there was nothing specifically 'wrong' with the experience, but an opportunity was found to improve it; for example, we might re-align items on a page to make it clearer to the user how to complete their task

  • Low severity bug: a minor update is required - for example, a page might hold two primary CTAs, one of which should become secondary. Or, an out-of-date variant of a component might be in use. Or perhaps an image is missing alt text for screen readers. 

  • Medium severity bug: the user is still able to carry out their task, but has to stop and think about how to do it. Perhaps it takes some time reading docs and a bit of experimentation to achieve their goal.

  • High severity bug: a show-stopper. An issue which good UX could solve, but is highly disruptive or even catastrophic. A good example is a delete action which cannot be undone, and doesn't have a warning and confirmation option. Or, a button hidden on certain browsers due to the way the page renders. 

Bugs were gathered in a 'usability tracker' table, which allowed for suggested solutions and connection to a backlog system (in this case, trackers were stored in Confluence with a direct connection to tickets in Jira).
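To illustrate the four-level categorisation and tracker described above, here is a minimal sketch in Python. The class and field names are hypothetical; the real tracker was a Confluence table linked to Jira tickets, not code:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Severity(Enum):
    OPPORTUNITY = "opportunity"  # nothing wrong, but improvement possible
    LOW = "low"                  # minor update needed
    MEDIUM = "medium"            # task possible, but user had to stop and think
    HIGH = "high"                # show-stopper


@dataclass
class TrackerEntry:
    summary: str
    severity: Severity
    evidence: str                    # quote, recording timestamp, or report link
    suggested_fix: str = ""
    ticket: Optional[str] = None     # backlog reference once picked up


def triage(entries):
    """Sort tracker entries so the most severe issues surface first."""
    order = [Severity.HIGH, Severity.MEDIUM, Severity.LOW, Severity.OPPORTUNITY]
    return sorted(entries, key=lambda e: order.index(e.severity))
```

The `evidence` field mirrors the requirement that every outcome be clearly linked back to evidence, and the optional `ticket` field mirrors the tracker-to-backlog connection.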

6 / SUS review

For audits where we had carried out an initial system usability scoring, we went back after upgrades were delivered and re-ran the scoring to assess the impact. In one case, we saw the average score jump from the low 40s to over 80. 


© 2024 Tom Rowson
