Summary

Brief
- To improve the state of our Information Architecture

For who
- Premier Inn

Things I learnt
- Creating a project plan brings clear benefits: better focus, more buy-in, and an easier route to getting tasks prioritised
- How to conduct card sorts and tree tests and then analyse their results
- Collaboration across different teams and departments is necessary to create diverse ideas

Team responsibilities
- Me and a colleague
Created the roadmap; led on guerrilla testing, card sorts, and the tree test; set up tests in UserZoom; iterated on our research method strategy; analysed and synthesised quant data; ran workshops and ideation sessions with SEO, Content, Brand, Copy, Front and Back End Dev, UI, and UX
- Colleague
Gave the initial presentation to senior staff to get buy-in for the project

Results
- Discovered that our current taxonomy was incredibly confusing for our users; this was previously unknown, as no IA work had been done before
- Proved that our new, data-driven taxonomy was much more intuitive for our users (see ‘Tree test’ section)

Planning

Deciding early on how we were to work and what we needed to do gave us a path to follow and equipped us with the tools to confidently carry this project out.

Although it seemed uncommon within IA, we chose to borrow methods from Lean, Agile, and iterative ways of working as a framework for this project, because the 'build, measure, learn' loop had already served us well in other areas of UX.

With IA being so large, we broke it down into three buckets of work: top-down, bottom-up, and feedback systems (the last of these would let us gather insights from users and improve over time). First, we tackled our top-down architecture.

Knowing what we would focus on first, we formed a RASCI (Responsible, Accountable, Supportive, Consulted, Informed) of stakeholders. Outlining everyone who would be affected by this work and who needed to have input made it clear who should be included in meetings and workshops, who needed to be informed of our findings, and who could help us when we faced blockers or got stuck.

We then created a project plan and kept it publicly viewable. This kept us accountable to timescales and helped us get tasks prioritised. It also made it easier to get a budget approved for testing, as our approvers knew from the outset what would be involved.

Finally, we worked with Analytics to define KPIs that we could track and hoped to improve.

Research

Card sort

To understand the mental models our customers had of our content, we began with an open card sort.

We conducted desk research and learnt several things, refining our approach over several rounds of guerrilla testing:

  • The words used to label cards can introduce bias that skews the card sort's results, e.g. if two cards are similarly named, a participant may group them together for that reason alone, even though they represent very different pieces of content

    • We reviewed all of our labels several times to rule this out, and watched for any instances of it during guerrilla testing

  • There are only so many cards you can include before a participant gets overwhelmed or fatigued, which can cause them to rush and give poor results

    • Through guerrilla testing and desk research, we settled on 35 cards

      • These were chosen after working with Analytics to identify our top pieces of content by search volume and page visits; we also worked with Content to include upcoming items

  • Instructions frame the study for a participant, so it's important to be clear and concise

    • During guerrilla testing, some participants believed they were designing a website with their groups rather than communicating their own thoughts on the relationships between the items, so we started seeing categories such as 'header' and 'footer'. Being clearer about what we were asking solved this

Once we had done this and iterated on our test, we brought in six participants for a moderated version so that we could iterate further and start to form early hypotheses, specifically around what categories might emerge.

All of this was building towards releasing the test remotely to a large number of participants to get robust data back. Before we did, our final step was to run it as a pilot using UserZoom. This gave us further insight into how to better structure the study; one area we fixed was the instructions.

We then sent out the study, again using UserZoom, and got back a dendrogram that let us see what relationships participants believed existed between our content.

This is a dendrogram of our results with the labels redacted

This gave us v0.1 of our taxonomy.
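
As an aside, the clustering behind a chart like this is straightforward to sketch. Below is a minimal, hypothetical Python example of the standard approach (UserZoom generated the real chart for us): count how often each pair of cards was grouped together, convert that to a distance, and run agglomerative clustering. The card names and sort data are made up.

```python
import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage
import matplotlib.pyplot as plt

# Hypothetical cards and sorts; the real study used 35 cards.
cards = ["Card A", "Card B", "Card C", "Card D"]
sorts = [  # each participant's groupings, as lists of card indices
    [[0, 1], [2, 3]],
    [[0, 1, 2], [3]],
    [[0], [1, 2, 3]],
]

# Co-occurrence: how often each pair of cards shared a group.
n = len(cards)
co = np.zeros((n, n))
for sort in sorts:
    for group in sort:
        for i in group:
            for j in group:
                co[i, j] += 1

# Similarity -> distance, then condense to the upper triangle for scipy.
dist = 1 - co / len(sorts)
condensed = dist[np.triu_indices(n, k=1)]

# Average-linkage agglomerative clustering, rendered as a dendrogram.
tree = linkage(condensed, method="average")
dendrogram(tree, labels=cards)
plt.show()
```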

Workshops

We wanted to collaborate and iterate on this, so we held four workshops, all including SEO, Brand, and Copy, to rework the taxonomy while staying conscious of what the data had shown us. The biggest change was to the labelling of the content: SEO made sure labels had high search volume, Brand kept them aligned with our personality, and Copy made them succinct and understandable. We also created subcategories where we thought necessary.

Tree test

With a taxonomy we were happy to test, we built a tree test in the same way as the card sort: conducting desk research first, then several rounds of guerrilla testing and a pilot to make sure the method was robust enough to produce trustworthy data. We learnt that 20 tasks was a good number to complete before participant fatigue set in, so we chose 20 individual tasks whose answers were high-scoring pages, as identified by Analytics earlier in the project. These 20 tasks were asked in the context of our legacy taxonomy and then our newly proposed one, with the order randomised for each participant to reduce extraneous variables.
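
UserZoom handled the randomisation for us, but as a rough sketch of the idea, assuming we had to shuffle the task order ourselves (the task names here are placeholders):

```python
import random

tasks = [f"Task {i}" for i in range(1, 21)]  # stand-ins for the real 20 tasks

def task_order(participant_id: int) -> list[str]:
    # Seed per participant: random order, but reproducible for auditing.
    rng = random.Random(participant_id)
    order = tasks.copy()
    rng.shuffle(order)
    return order

print(task_order(1)[:3])  # first three tasks shown to participant 1
```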

We used Jeff Sauro's method of analysing tree tests. An SEQ was asked after every task to gauge how confident the participant was of having got it correct, and we measured this against actual task success. We found that with the legacy taxonomy participants were generally confident they had completed tasks correctly when they had not, whereas with our new taxonomy they were both confident and actually successful. This was a great finding, as it was proof that what we had delivered could benefit the site's Information Architecture.

The findings from our tree test
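
To illustrate the comparison (with made-up numbers, not our real data), a minimal pandas sketch of measuring confidence against success per taxonomy might look like this:

```python
import pandas as pd

# Hypothetical per-task responses; real data came from UserZoom exports.
results = pd.DataFrame({
    "taxonomy":   ["legacy"] * 4 + ["new"] * 4,
    "success":    [0, 0, 1, 0, 1, 1, 1, 1],   # 1 = found the right page
    "confidence": [6, 7, 5, 6, 7, 6, 6, 5],   # 1-7 post-task rating
})

# Success rate and mean confidence, side by side per taxonomy.
summary = results.groupby("taxonomy").agg(
    success_rate=("success", "mean"),
    mean_confidence=("confidence", "mean"),
)
print(summary)

# "Confident failures": high confidence on tasks that were actually failed.
# A cluster of these under the legacy taxonomy was the red flag we found.
confident_failures = results[(results["success"] == 0) & (results["confidence"] >= 6)]
print(confident_failures.groupby("taxonomy").size())
```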

Ideation

With a taxonomy backed by research, we sought to design a navigation that would suit it. This led us to hold a design workshop.

For this, we conducted desk research to create several principles for navigation design. These centred on four main themes:

  • Information scent

  • Visibility

  • Communicating current location

  • Coordinating menus with user tasks

We then brought in representatives from SEO, Content, Brand, Copy, Front and Back End Dev, UI, and UX so that we had a room full of people from different backgrounds and areas of expertise who could tell us which ideas they liked and what they knew to be feasible.

To help, we provided them with some inspiration and high-level information on our principles.

One of the examples we used, inspiration can come from anywhere

They each briefly presented navigations from websites they liked, to inject a few ideas into the room. We then gave them our taxonomy and two minutes to sketch something, followed by a round of constructive criticism for anyone who wanted to present. We repeated this several times, with each round introducing a new rule, e.g. the navigation must have two levels, or it has to fit on mobile.

Using the learnings from that session, our UX and UI teams designed a new header and footer, which will be released alongside further new responsive designs.