Working with users in early-stage development to design a better product

Test and repeat

How do you go about designing and building the right product for your target audience? Two things are key to a good design and development approach:

  • Establishing a product goal and identifying what success looks like, so that you can continuously work towards meeting that goal

    A product goal is essentially a clear definition of the long-term objectives for your product. It answers the “what does success look like” question so that you can set targets against these objectives.

  • Establishing a process of regular testing with your target audience, so that you can continuously improve the UX in accordance with the product goal.

In this post I will talk about the latter – specifically, how, while I was working with Substrakt as product manager on Viadukt, we worked closely with users in the initial stages of design to build and improve a product that makes it quicker and easier to book tickets to live performances online.

That’s because I believe that groundbreaking products can only ever be created with feedback from the people who will end up using them.

You will find this useful if you want to learn: 

  • How to work with users early on in the design process to test and validate assumptions and get usability insights ahead of building the product

  • How to turn user behaviour insights into actionable design improvements

  • How to make the most of a testing tool to save time, make better design choices and build a better product 

Heads up – strap in, this post is rather long!

Viadukt was built with and for users to create the easiest, most accessible, mobile-first checkout experience – mock-ups by Substrakt

Mapping out the users 

Based on some initial desk research (which involved looking at the full ticketing user journey, what influences how users make decisions around performances and tickets, and how this affects their perception of quality and value for money), we created three user groups and mapped out our assumptions about their needs, priorities and purchase behaviour. We named these groups:

  • Planners

  • Savourers

  • On-the-goers

We saw ‘Planners’ as the audience group who cared about where they were seated in the auditorium. The assumptions we made were: 

  • They’d want to have control over the seat they were choosing and the amount of money they were willing to spend. 

  • They’d trust themselves to choose a seat and assess value for money, and would therefore be more resistant to recommendations. 

  • They’d put time and effort into booking the right seat, and were therefore more likely to use a desktop device.

In a nutshell, ‘the planner’ was the typical ‘select your own seat’ user. 

We saw ‘Savourers’ as the customers who cared most about having the best experience. Our assumptions for this group:

  • They’d have more trust in the venue to suggest the best seats for them (particularly if they’d visited the venue before).

  • They’d be interested in adding a meal, snacks or a drink to their basket, which they could redeem and collect from a bar near their seats on the night.

  • They’d have less time and/or patience to select their own seats. 

  • They were not restricted by budget.

We defined ‘On-the-goers’ as users who were driven (for various reasons) by a sense of urgency when it came to booking tickets. Assumptions made:

  • They could be travelling to or from work, meaning they’d be using their mobile device and have little time or patience. 

  • They might be booking too close to the performance date, meaning they’d care less about the seats and more about booking tickets quickly.  

  • If they have a limited or set budget, they’d want to choose the best seat available within their budget. 

  • They’d be more open to the venue recommending the right seats for them.

  • In an effort to save time, they might even be willing to skip the entire seat map section and seat selection process. 

Assumptions are dangerous

Of course, these were still just assumptions about our user groups. We needed to validate or dispute them to really understand our users’ needs and get closer to meeting the product goal. How? 

Using the results from an initial survey we shared with our Twitter community (and the online communities of our partner organisations), we recruited 16 testers with a variety of user behaviours and presented them with clickable prototypes of our initial designs. These behaviours covered both booking and attendance:

  • How frequently they visited cultural venues (one-off attendees vs members)

  • How they usually book their tickets (e.g. on mobile or desktop, what time of the day, how much urgency they might be dealing with when booking)

  • The kind of experience they were looking to have on the night of the performance (a regular evening vs a special event)

  • How digitally native they were (this was largely determined by age) 

To keep development work as light touch as possible, we worked with a tool called Maze that enabled us to turn designs into clickable prototypes quickly.

This meant we could send our testers on a predetermined journey (one that we were looking to assess).

Maze also allowed us to add instructions to each step of the journey, so we could share it as a link to a ‘self-test’ that testers could complete without our facilitation. 

Testing the user journey

The journey was made up of a series of ‘missions’ and written instructions, briefing the tester to choose a performance, find a seat, select a ticket, add it to basket, and check out.

We made every relevant call to action button clickable, so that Maze could map out the ideal user journey and track whether users were completing it as we’d predicted.

Example ‘mission’ from testing in Maze – users could complete the test remotely by either writing feedback or clicking on certain calls to action in the mock-ups.

Testers could choose the tickets via two routes in the designs – either ‘choose your own seat’ or ‘choose by price’. This proved to be a crucial element of our testing (more on this later).

Part of completing each mission involved asking testers to submit answers to more open-ended questions around what they were seeing.

The purpose of this was to gather qualitative data alongside the click data, for example:

  • Can you describe what you think you can do on this page? 

  • What information is most important to you on this page? 

  • Is this page what you expected to see next? If not, what did you expect to see instead? 

We also presented each user with two versions of the test that were identical in terms of the missions, but differed slightly in navigation style (see the sketch further below):

  1. An ‘accordion style’, which stacked the progress into neatly ordered steps down the page (very much mobile-first)

  2. A ‘tabbed style’ that showed the user a progress bar at the top of the page as they advanced through the user journey 

Example of an accordion-style vs a tabbed-style navigation. Source: component.gallery
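To make that contrast concrete, here’s a minimal sketch of the accordion pattern – written in React/TypeScript purely for illustration, not Viadukt’s actual code – where completed steps collapse down the page but stay editable via an ‘Edit’ button (the affordance testers went on to praise):

```tsx
import { useState, type ReactNode } from "react";

// Hypothetical step shape - Viadukt's real checkout steps may differ.
type Step = { id: string; title: string; body: ReactNode };

export function AccordionCheckout({ steps }: { steps: Step[] }) {
  // One step is open at a time; everything above it is complete.
  const [active, setActive] = useState(0);

  return (
    <ol>
      {steps.map((step, i) => (
        <li key={step.id}>
          <h2>{step.title}</h2>
          {/* Completed steps collapse but stay editable - the 'Edit'
              affordance testers said made changes feel safe. */}
          {i < active && <button onClick={() => setActive(i)}>Edit</button>}
          {/* Only the active step shows its full content. */}
          {i === active && (
            <section>
              {step.body}
              <button onClick={() => setActive(i + 1)}>Continue</button>
            </section>
          )}
          {/* Future steps stay collapsed until the user reaches them. */}
        </li>
      ))}
    </ol>
  );
}
```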

Lastly, we asked the testers to answer a few questions around how they found the experience - what worked well and what they would improve. 

Finding the right tools

To understand both what users were clicking on and why, we split the testing sessions into:

  • 12 self-tests: These consisted of a link we sent to testers to complete on their own. 

  • 4 observed sessions: These were facilitated individually on Zoom by Substrakt. We asked the user to share their screen and ‘narrate’ their experience. Maze collected and analysed all of the insights in the background, while we noted any moments that caused confusion, frustration or misunderstanding. 

Maze analysed the insights from the self-tests and the observed sessions and calculated a ‘usability score’ for the clickable prototype. This gave us a clear sense of where the friction points were and where the journey was straightforward for the user. The usability score was based on four key questions (see the sketch after this list):

  1. Was the mission completed? 

  2. How long did it take the user to complete each mission? 

  3. Where was the user clicking? (This created a ‘heatmap’ around the CTAs) 

  4. What did the user do vs what we expected them to do?
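Maze calculates this score with its own formula, so as a rough illustration only, here’s how a score built from those four signals might be computed – a hypothetical TypeScript sketch with invented weightings, not Maze’s actual calculation:

```ts
// A hypothetical usability score - Maze's real formula is its own,
// and the weightings below are invented for illustration.
type MissionResult = {
  completed: boolean;            // 1. Was the mission completed?
  durationSeconds: number;       // 2. How long did it take?
  misclicks: number;             // 3. Clicks outside the expected CTAs
  followedExpectedPath: boolean; // 4. Did behaviour match our prediction?
};

function missionScore(r: MissionResult, expectedSeconds: number): number {
  if (!r.completed) return 0; // an 'I give up' scores nothing
  let score = 100;
  // Penalise completions slower than the time we expected.
  score -= Math.max(0, (r.durationSeconds - expectedSeconds) * 2);
  // Penalise stray clicks away from the intended calls to action.
  score -= r.misclicks * 5;
  // Penalise deviating from the journey we mapped out.
  if (!r.followedExpectedPath) score -= 15;
  return Math.max(0, Math.min(100, score));
}

// Prototype-level score: the average across all testers' missions.
const usabilityScore = (results: MissionResult[], expectedSeconds: number) =>
  results.reduce((sum, r) => sum + missionScore(r, expectedSeconds), 0) /
  results.length;
```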

Assessing usability 

These insights gave us specific areas of improvement for the designs and for product development moving forward.

Some results validated our assumptions, and others surprised us: 

  • We learned that the flow from the basket page to the order confirmation page was clear and straightforward - the majority of missions here got an 85–100% usability score.

  • The average time to complete each mission from the basket page to the order confirmation page was between 1.7 and 5.5 seconds. These missions included ‘go back’ actions such as removing a donation, updating the items in the basket or going back to update a delivery method.

    • As these were more complex tasks, a five-second completion time was not necessarily a bad result.

  • When it came to navigation and the design style, we had a high usability score across both, but qualitative feedback revealed a strong preference for the accordion style: “It’s quick and easy to use”, “I was in control of exactly what I was buying”, “it’s easy to navigate.” 

  • Conversely, here’s some of the feedback about the tabbed style:

“I liked this style of navigation less. It felt as though you were going back a page or in danger of losing something. The previous version showed ‘Edit’ buttons, which made it feel ok to make changes.”

"It was not clear that you can click on the different journey points.”

"You can't see your basket contents easily throughout.”

By asking users to spare a few minutes to share what they enjoyed and what they’d suggest we improve, we were able to identify which navigation style enabled the quickest, easiest checkout process.

This also meant we could define how the style needed adapting for desktop designs (we were deliberately testing mobile-first, as we knew this would be the more complex design challenge).

So through qualitative testing – and looking at the usability score in Maze – we were able to understand that the user journey, from adding items to the basket through to checkout, worked well.

We confirmed the user journey we’d mapped out around ‘choose your own seat’ was clear and straightforward.

The logic behind the ‘best available seat’

As mentioned before, one of our most important discoveries was the confusion caused when presenting users with the two ticket selection options - ‘Choose by seat’ and ‘Choose by price’. 

Based on the initial survey we circulated to recruit our testers, we suspected that ‘Choose by seat’ would be the more popular option, but we still wanted to find an alternative route that supported our promise of Viadukt being a ‘fast way to find tickets’. 

‘Choose by seat’ was fairly straightforward for users to understand – the tester was presented with a seating overview of the venue, showing the levels available. Upon selecting one, the user landed on the seat map and was asked to choose a seat by colour (each colour representing a price point), then to confirm a ticket type (e.g. a student ticket).

‘Choose by price’ was trickier. Many live performance venues typically present this path as ‘Best available’ (which can be configured in Spektrix or Tessitura), meaning that the user is assigned recommended seats based on the number of tickets chosen.

But what are the recommended seats based on?

Ticket availability within a certain price zone, or within a venue level (which can include many price zones)? ‘Best’ seats also mean different things to different people, as everyone has different needs and preferences. For example, a seat near an aisle close to an exit might be preferable for one audience member, while a middle seat with a better view suits another.

Each tester we spoke to had a different understanding of what ‘Best available’ meant, validating our assumption that the term was ambiguous:

  • to some, the best available seats were the most expensive in the house, on a given venue level. 

  • to others, they were the best available seats in a particular price category, across the venue. 

  • and others expected ‘best available’ to represent the best value for money (i.e. if I have a small budget, what are the best seats I can get?)

‘Best available’, we determined, was not a useful term going forward. So we needed to:

a) decide how the logic actually worked, and

b) give users an accurate and concise description of what it meant.

We therefore decided to simply call it ‘Choose by price’, giving users the ability to set their own budget and choose a number of tickets, with a view to giving them the best value for money.
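Under the hood, that kind of selection logic might look something like the sketch below. It is purely illustrative – the seat model, field names and ‘quality’ ranking are all assumptions, not Viadukt’s (or Spektrix’s or Tessitura’s) implementation:

```ts
// Hypothetical seat model - the fields (especially `quality`) are
// assumptions for illustration, not Viadukt's actual data model.
type Seat = {
  id: string;
  price: number;         // the price of this seat's zone
  quality: number;       // a venue-configured desirability ranking
  adjacentGroup: string; // identifies a contiguous run of seats
};

// Given a ticket count and a per-ticket budget, return the best-value
// seats: the highest-quality seats that fit within the budget.
function chooseByPrice(
  seats: Seat[],
  ticketCount: number,
  maxPricePerTicket: number
): Seat[] | undefined {
  const affordable = seats.filter((s) => s.price <= maxPricePerTicket);

  // Group affordable seats so a party can be seated together.
  const groups = new Map<string, Seat[]>();
  for (const seat of affordable) {
    const group = groups.get(seat.adjacentGroup) ?? [];
    group.push(seat);
    groups.set(seat.adjacentGroup, group);
  }

  // Pick the run with the best average quality that fits the party.
  // (A real implementation would also verify the chosen seats are
  // directly adjacent, handle accessibility needs, and so on.)
  let best: Seat[] | undefined;
  let bestQuality = -Infinity;
  for (const group of groups.values()) {
    if (group.length < ticketCount) continue;
    const picked = [...group]
      .sort((a, b) => b.quality - a.quality)
      .slice(0, ticketCount);
    const avg = picked.reduce((s, x) => s + x.quality, 0) / ticketCount;
    if (avg > bestQuality) {
      bestQuality = avg;
      best = picked;
    }
  }
  return best; // undefined if nothing fits the budget and count
}
```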

Shockingly, none of the users were able to complete this mission, which resulted in a 0% usability score.

This meant that when landing on the ‘Choose by price’ page, the testers didn’t know how to choose a number of tickets, set a ticket price, choose their ticket type and add them to the basket.

Instead, they got stuck on the page not knowing what to click on, until eventually pressing the dreaded ‘I give up’ button in Maze.

So where did we go wrong?

Firstly, the header ‘Choose by price’ was confusing to testers (despite us moving away from the ‘Best available’ wording):

  • Some argued that they could choose by price in the ‘Choose by seat’ option anyway (which was true). 

  • Others still expected to see the seat map under ‘Choose by price’ but be able to set their budget parameters first, and then find the seats within their price range highlighted on the seat map. 

Secondly, the ‘Choose by price’ page was confusing to the testers.

We presented too much information on the page at once, and the navigation around the call to action buttons was unclear.

When booking any ticket other than a full price one, testers were also unsure whether to ‘apply a concession’ to a chosen full price ticket or to ‘choose a concession price ticket’.

When it came to setting a limit on the user’s budget – which we thought was the actual genius of the page – some testers were unsure whether they were expected to set a total budget for the number of tickets they were after, or to specify the maximum price they were willing to pay per ticket.

This resulted in some key learnings:

  • On the ‘Choose by seat’ or ‘Choose by price’ selection page, we needed to signal to the user that the seat map would be skipped. 

  • We needed to reconsider the logic of the ‘Choose by price’ selection criteria page (see the sketch after this list), by:

    • Making it clear that the user needed to choose their number of tickets first, followed by setting a price for the individual ticket (some users thought this figure was the total budget).

    • Only then should we reveal the ticket types available to them and show the ‘Buy now’ call to action button in an active state, for the user to add to their basket. 

    • Ensuring that the way users understood and chose concession tickets was consistent across both ‘Choose by seat’ and ‘Choose by price’, discarding the idea of ‘applying a concession to a full price ticket’. To avoid overloading the user with too much information at once, we decided to hide the concession tickets rather than display them in full.
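As a minimal sketch of that revised flow (with hypothetical state and field names, not Viadukt’s implementation), the page reveals each control only once the previous choice has been made, and ‘Buy now’ only activates at the end:

```ts
// Hypothetical state for the revised 'Choose by price' page - all
// names are illustrative.
type ChooseByPriceState = {
  ticketCount?: number;       // step 1: number of tickets comes first
  maxPricePerTicket?: number; // step 2: then a per-ticket price cap
  ticketType?: string;        // step 3: ticket type, revealed last
};

// Derive what the page should show: each control is revealed only
// once the previous choice is made, and 'Buy now' activates last.
function visibleControls(s: ChooseByPriceState) {
  const hasCount = s.ticketCount !== undefined;
  const hasPrice = hasCount && s.maxPricePerTicket !== undefined;
  return {
    showPriceSelector: hasCount,
    showTicketTypes: hasPrice,
    buyNowEnabled: hasPrice && s.ticketType !== undefined,
  };
}

// For example:
// visibleControls({ ticketCount: 2 })
//   -> { showPriceSelector: true, showTicketTypes: false, buyNowEnabled: false }
```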

With these immediately actionable design enhancements, we were one step closer to a better product. 

User-centric, helpful copy

What also became very clear was how crucial content – especially helpful copy – was going to be to the success of the UX.

Starting with getting the ‘Choose by price’ header and messaging right (which we tweaked to ‘OR skip the seat map and choose by price’ in the next round of development), we knew that we needed to implement a more holistic approach to building the right product – with thoroughly considered design and content choices built into each iteration. 

This is where content strategist Lauren Pope came in and became an integral part of the product team (to read more about content and Viadukt, read this UX-focussed case study).

Validating the user groups and their needs  

So how did we fare with our assumptions about our users? Was there such a thing as Planners, Savourers and On-the-goers? Yes and no. 

We realised that rather than validating specific user groups, there was more value in asking how we could better meet our product goal:

  • How can we quickly enable users to find and choose the best seat for them?

    • The most important insight gained was that the majority of the testers (14 out of 16) preferred choosing their own seats – particularly as the seat map was easy to understand and navigate. 

  • How can we best enable users to skip the seat map and check out faster? 

    • The ‘on-the-go’ behaviour was still important to consider, as it represented the need for a quicker way of finding tickets and checking out (rather than navigating the seat map).

The next round of testing to validate our designs and product development had a two-fold purpose:

1) To test the usability of the actual product in progress using a test link 

2) To assess the accessibility of the product and note exactly what needed improving for a variety of access needs, while staying aligned with our product goal.

Summary & recommendations

If you are in the early stage of product development and are looking to test your assumptions by working with users, here are my main recommendations:

Map out your assumptions

  • Be very honest about what you don’t know (even desk research needs validating) as this will become the basis of what you want to test.

Build variety into your pool of testers

  • Make sure you have a variety of user behaviours in your pool of testers – ask yourself how many testers, and how many different types of tester, you need to dispute or validate your assumptions.

Build testing into your timeline

  • Allow the right amount of time in your product timeline for testing – dedicate a period of product development to prototype testing, but try to keep that period brief (and intense).

Understand the what and the why

  • Gather quantitative and qualitative data – no matter what testing tool you use, or whether you just decide to do observed sessions, make sure that you can track what testers are clicking on and get an understanding of why they are selecting those calls to action.

Open-ended questions only

  • Ask open-ended questions and avoid introducing bias when facilitating observed sessions – especially when you see a user struggling, don’t be tempted to help them find the right answer when completing a task. Instead, ask them to narrate what they are experiencing, or ask open-ended questions about what they are finding frustrating.

User-centred copy

  • Work with a content strategist as soon as you can – the right copy forms such a huge part of usability and how a user understands the UX of a product. If you can, work with a content strategist before you present your early product prototypes to the testers. 

If you have any questions, or would like to find out more, get in touch with me or the Viadukt team.