What’s the most important thing you can do for your site in a week?
The answer is almost certainly this: conduct a usability evaluation. By doing so, we set the foundations for future success. We discover what works and what doesn’t. We learn a huge amount about what our objectives really are, and what stops us from meeting them. We see first-hand where users have difficulty. And if we do it right, we might even learn something about our customers!
Without a usability evaluation, in fact, we are effectively ‘designing in the dark’. Whatever we create, without a clear understanding of our UX challenges it may well represent money and time wasted.
But what is a usability evaluation? There’s a huge amount of confusion about the subject. This is my brief attempt to shed some light and set you on the right track.
A multi-disciplinary approach
Before we go any further, we need to understand one thing. A good usability evaluation integrates and contextualises data and input from multiple sources and approaches.
We’ll talk about those approaches in detail below, but this general note should serve as a warning: it is foolish to rely on any one source of data, or one approach, to tell you the full story about how ‘usable’ your site is.
You would be surprised how common a mistake this is, usually made by someone who knows one method to measure or analyze ‘usability’ and is therefore comfortable sticking to it. These days you’ll also find types who eschew any form of expert input, having fallen victim to the cult of ‘data and data alone’.
I love data as much as the next man, but it will never tell you absolutely everything, and when used without understanding it can lead you down many a blind alley.
Avoid also the temptation to apply arbitrary ‘rules’ to the site in question. I realise that most usability professionals understand this point, but usability professionals are not always the sole source of input during the evaluation process.
The correct response to “every piece of content should be no more than 2 clicks from the homepage” or “no menu should have more than four items” is to reach for the company revolver. Failing that, remind everyone involved that what matters is whether people can perform the jobs they want to, not whether we’ve met our own artificial usability criteria.
Which brings us rather neatly to your checklist for a successful usability evaluation.
Elements of a usability evaluation
This is not necessarily gospel, but what follows is a summary of how we would go about conducting a brief usability evaluation. You don’t have to follow this exact order, but it probably helps if you do.
- Get Some Users. Five or six will do. But it is most important these are the right users. As with anything else, in usability evaluation: garbage in, garbage out. Never, ever cut corners. Find a user panel that genuinely represents the target audience for the site or product in question.
- Establish Relevant Objectives. Speak to your user panel. Ask them what they want from the site. What do they wish to achieve? Write all of this down. Then speak to business stakeholders. What does the business want to achieve? What particular actions or conversions are desired? When we have answers to these questions, we have a relevant framework through which the current reality can be judged.
- Examine Google Analytics And Performance Data. Data relating to load times, page visits, bounce rates, common paths and so on can be enormously useful. But it must always be understood in context. Is a page popular because users are interested, or because the site makes it the most obvious destination? GA cannot tell you. But data can show, for example, whether some paths and pages are favoured by one demographic or another. The times of day and devices used to access the site can also suggest potential use cases to build around.
- Run A Hotjar Analysis. At Kooba, Hotjar is our tool of choice for click, scroll and heatmap analysis. These can be extremely useful. The first indicates where users want to click and cannot, and where they can click but don’t. The second is usually an alarming reminder of how much content gets missed by users who don’t or won’t scroll. And the last indicates where the eye and user interest are drawn. It’s a great way to find low-hanging fruit and suggest avenues for further exploration.
- Test With User Panel. Now the fun begins. Invite your users to play with the site, and in addition give them some specific tasks to complete. Watch them and record them. Encourage them to talk out loud as they navigate the experience. You will learn a huge amount from simply observing the challenges and listening to feedback - including issues you could never have imagined or observed any other way.
- Conduct An Expert Evaluation. Expert opinion is unfashionable, but it can still count for a lot. Have a usability expert (a real one) take a long look at the site in the context of business objectives. Independently of other modes of analysis, this expert should then note down areas where the site deviates from established best practice, and any other potential issues.
- Competitor Analysis. For every challenge, a very clever person elsewhere has usually found a solution. Take a look at competitors, or simply alternative sites the client admires. Reverse engineer elegant solutions and provide real world examples of alternative approaches to areas where there are difficulties today. This stage is important, as it is immensely helpful to see what is possible, rather than just itemise problems.
- Write The Report. By far the most important element of the piece. The report is everything. It communicates the findings of the entire process, and should do so in a clear, compelling manner. DO NOT simply amalgamate the results of each aspect of the evaluation. Instead, DO itemise specific conclusions, bringing together relevant data, expert opinion, and possible approaches under each.
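To make the analytics step above a little more concrete - the idea that raw data only becomes useful when segmented and read in context - here is a minimal sketch of the kind of question you might ask of exported session data. The field names (`device`, `landing_page`, `bounced`) are illustrative assumptions for the example, not a real Google Analytics export schema:

```python
from collections import defaultdict

# Hypothetical session rows, as they might appear in an analytics export.
# The field names here are assumptions for illustration only.
sessions = [
    {"device": "mobile",  "landing_page": "/pricing", "bounced": True},
    {"device": "mobile",  "landing_page": "/pricing", "bounced": True},
    {"device": "mobile",  "landing_page": "/",        "bounced": False},
    {"device": "desktop", "landing_page": "/pricing", "bounced": False},
    {"device": "desktop", "landing_page": "/",        "bounced": False},
    {"device": "desktop", "landing_page": "/",        "bounced": True},
]

# Tally visits and bounces per (device, landing page) segment.
totals = defaultdict(int)
bounces = defaultdict(int)
for s in sessions:
    key = (s["device"], s["landing_page"])
    totals[key] += 1
    if s["bounced"]:
        bounces[key] += 1

# Bounce rate per segment. A high mobile-only rate on one page is a
# prompt for further investigation, not a conclusion in itself.
for key in sorted(totals):
    rate = bounces[key] / totals[key]
    print(f"{key[0]:<8} {key[1]:<10} bounce rate: {rate:.0%}")
```

The point of the exercise is the caveat in the step itself: a segment with a striking bounce rate tells you where to look with your user panel and expert review, not what the problem is.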
Following these steps might in truth take a little longer than a week. But it will certainly be time well spent. Oh, and don’t forget mobile!