Heuristic Evaluations: Getting to Usable

Heuristic evaluations are structured usability inspections that surface red-flag issues in interfaces. They provide direction, improve usability, ensure completeness, maintain objectivity, and help prioritize problems. Although limited in scope and in need of updated heuristics, they remain a valuable tool.

We’ve all fumbled through a poorly designed, difficult-to-use product, satisficing along the way to get done what we needed to do, but unless you’re a usability expert you probably haven’t systematically determined why the product was difficult to use. A heuristic evaluation is a time-tested, structured way for usability experts to tease out the big, red-flag usability issues in an interface. Usually, 3–5 evaluators inspect the interface for compliance with 10 fundamental usability principles. If the interface is complex, each person tackles a section of it. I recently had the opportunity to perform such an evaluation, with a twist, on a client’s product. Armed with Jakob Nielsen’s ten heuristics and a set of frequent user tasks, I set out to run the product through heuristics such as ‘recognition rather than recall’ and ‘consistency and standards’. Structuring the evaluation around tasks gave it an extra boost beyond the method’s classic execution. In doing this exercise, I was reminded of the tremendous value of performing such an evaluation.
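To make the task-structured setup concrete, here is a minimal sketch in Python of how such a review checklist could be organized. The task names and the helper function are hypothetical illustrations, not part of the actual evaluation; only the ten heuristic labels come from Nielsen.

```python
from itertools import product

# Jakob Nielsen's ten usability heuristics.
HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
    "Recognition rather than recall",
    "Flexibility and efficiency of use",
    "Aesthetic and minimalist design",
    "Help users recognize, diagnose, and recover from errors",
    "Help and documentation",
]

# Hypothetical frequent user tasks for the product under review.
TASKS = [
    "Create a new report",
    "Share a report with a teammate",
    "Export results to CSV",
]

def build_checklist(tasks, heuristics):
    """Pair every task with every heuristic so each walkthrough step
    is inspected against the full set of principles."""
    return [
        {"task": task, "heuristic": heuristic, "issues": []}
        for task, heuristic in product(tasks, heuristics)
    ]

checklist = build_checklist(TASKS, HEURISTICS)
print(f"{len(checklist)} task/heuristic pairs to review")
```

Each pair becomes one row in the evaluation worksheet, so every step of a frequent task gets checked against every heuristic rather than reviewed ad hoc.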

Direction

Since you’re running the interface through a script of sorts, and structuring it around tasks, there’s a clearly marked path with a beginning and an end. You aren’t aimlessly bouncing around the interface like a billiard ball in hopes of hitting upon the errant problem. Nor are you limited to the quantitative output of some other user-testing methods, which brings me to the next benefit.

Usability

A heuristic evaluation is, at its core, a usability inspection method. Performing one and fixing the issues it surfaces will get your interface into a minimally viable, usable state.

Completeness

I think of performing a heuristic evaluation as casting a wide, sturdy net. If the heuristics are robust and relevant, you can be quite confident that you will catch the most important issues, and a good number of smaller ones as well. Combining the heuristics with the tasks helps ensure that your users will have a usable experience when they are deeply entrenched in their workflow.

Objectivity

With heuristics guiding you along the way, it’s difficult to justify making subjective statements such as ‘It needs to pop more’ or ‘The button should be red instead of green’ because each issue you find needs to be backed up with a heuristic (not to mention, the issue should be derived from the heuristic, not the other way around). The evaluation keeps your opinions in check.

Deja Vu

In the course of compiling your stack of issues, certain ones will crop up over and over again. This is a good indicator that they are worth investigating further, and spotting and prioritizing these repeat offenders will give you a good bang for your buck. You may not have the time or resources to fix everything, but prioritizing by severity and frequency will help you funnel the full list into a more manageable handful.
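To make that prioritization concrete, here is a small Python sketch that ranks issues by severity times frequency. The issue list and the 1–4 severity scale are illustrative assumptions, not findings from the evaluation described above.

```python
from collections import Counter

# Illustrative issues logged during the walkthrough:
# (description, severity on an assumed 1-4 scale, heuristic violated)
issues = [
    ("No confirmation after saving", 3, "Visibility of system status"),
    ("No confirmation after saving", 3, "Visibility of system status"),
    ("Delete has no undo", 4, "User control and freedom"),
    ("Inconsistent button labels", 2, "Consistency and standards"),
    ("Inconsistent button labels", 2, "Consistency and standards"),
    ("Inconsistent button labels", 2, "Consistency and standards"),
]

# How often each distinct issue was logged across tasks and evaluators.
frequency = Counter(description for description, _, _ in issues)

# Rank unique issues by severity * frequency, highest first.
ranked = sorted(set(issues), key=lambda i: i[1] * frequency[i[0]], reverse=True)

for description, severity, heuristic in ranked:
    score = severity * frequency[description]
    print(f"{score:>2}  {description} "
          f"(severity {severity}, logged {frequency[description]}x, violates '{heuristic}')")
```

The exact scoring formula matters less than having one: the point is to turn a pile of findings into an ordered shortlist you can actually act on.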

Final Thoughts

As a quick-and-dirty discount usability method, heuristic evaluations produce a great deal of usability goodness for the effort. This isn’t to say that they are the end-all, be-all of user research methods. For one, they do not address the larger issue of user experience; they deal primarily with usability, not with the ‘stickiness’ of an interface or how engaging the experience may be. For another, the user experience community would benefit from revisiting and updating the heuristics for modern-day interfaces. Mobile interfaces, web applications, tablet interfaces, and information visualizations each present their own unique challenges and deserve their own sets of heuristics. With that said, heuristic evaluations are quite a powerful and handy tool to have in your back pocket, whether your mission is to test out a scrappy new interface or put an established one under the microscope.