Learn about "Event Style Usability Testing," a new user experience testing method that combines natural workflow observation with quantitative data collection to bridge the gap between traditional usability testing and more contextual/ethnographic methods, promoting innovation, efficiency, and user-centeredness in UX design.
During a recent project, a client asked us to evaluate the initial user experience for setting up a telephony product. They were not interested in doing a more traditional usability test as they felt that this “out of the box” experience could not be easily captured or replicated by forcing a user to perform a set of pre-defined tasks. They wanted feedback that was more contextual and ethnographic in nature, but they still wanted the quantitative data such as times, errors, assists, and usability ratings that you get from standard usability testing.
The more we thought about this, the more we came to realize that there is a gap in methods from where lab testing ends and contextual/ethnographic studies begin. Usability testing is often used for getting feedback or benchmarking the usability of key task flows of a product or service. These key tasks are discrete and isolated to make it easier to get more quantitative data, but this comes at the cost of losing what the natural workflow may be since a structure or order is enforced. The natural workflow is something that is researched through more contextual methods.
What we wanted for this project was to be able to allow a natural workflow that would allow users to use multiple tools and do things that were not expected by the product team while being able to get metrics such as time, errors, assists, and perceived usability. With this in mind, we created a method that we’ve dubbed internally as Event Style Usability Testing to help bridge this gap between usability testing and contextual/ethnographic studies.
Components of Event Style Usability Testing
There are a few basic components to Event Style Usability Testing:
A set of pre-defined “events” are identified that users are asked to perform.
Users are allowed to use a system (or software) to do the events in any way and any order that they deem appropriate. There is no forced order.
The test facilitator records events as they take place, creating a workflow record and capturing metrics such as event start and stop times, number of errors and assists per event, and perceived usability of events.
New events that were not pre-defined or expected are added ad hoc as they are observed during sessions.
So what exactly do I mean by an “event”? Events are very similar to the tasks used in more standard usability testing, with a couple of distinctions. Like tasks, events are things users are asked to perform or experience. For example, an event might be to send an online payment to a person from an online bank account. If this were simply a task in a usability study, users might be told specifics about who to send the payment to, for how much, and when it should arrive.
For an event, the task is turned into a scenario as much as possible. Users are told what they are trying to accomplish, not given specifics. In the case of the event for sending an online payment, users would choose who to send it to, for how much, from what account, and so on. Allowing them to shape the event using their own data and their own interests yields interesting insights and observations about what users might do in real situations.
Once the events have been pre-defined, an event list is created for participants to use during the session. Prior to beginning the session, users are asked to read the entire list of events that they will be asked to do. They are then instructed to complete the events in whatever order makes the most sense to them. Having them read through the entire list this way allows them to simulate more closely what they would do out in the real world. It’s assumed in most systems and software that users generally know what they want to do (or else why are they using the system or software in the first place) but may not know specifically how to do it. Event testing aims to replicate this more closely.
The rest of the process relies on careful observation by the testing facilitator. Since there is no specified order in which participants complete events, the facilitator needs to monitor which event the participant is attempting or in the process of doing. This includes capturing time stamps for when an event is started, completed, stopped without being completed, or resumed after being stopped previously. While observing each event, other usability metrics can be captured as well, such as observed errors, assists needed, and other usability scores.
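The facilitator’s record can be thought of as a running event log. As a minimal sketch (the class names, fields, and event names here are hypothetical, not part of any tool we used), the bookkeeping might look like this in Python:

```python
from dataclasses import dataclass
from time import time
from typing import List, Optional

@dataclass
class EventRecord:
    """One observed attempt at an event (pre-defined or added ad hoc)."""
    name: str
    started_at: float
    ended_at: Optional[float] = None
    completed: bool = False
    errors: int = 0
    assists: int = 0

class SessionLog:
    """A facilitator's running record of one participant session."""
    def __init__(self) -> None:
        self.records: List[EventRecord] = []

    def start(self, name: str) -> EventRecord:
        """Time-stamp the start of an event; append order preserves the workflow."""
        rec = EventRecord(name=name, started_at=time())
        self.records.append(rec)
        return rec

    def stop(self, rec: EventRecord, completed: bool) -> None:
        """Time-stamp the end of an event, noting whether it was completed."""
        rec.ended_at = time()
        rec.completed = completed

    def workflow(self) -> List[str]:
        """The sequence of event names, in the order the participant chose."""
        return [r.name for r in self.records]
```

Because every start and stop is stamped, the same log yields both the order of events (for workflow analysis) and per-event durations, errors, and assists (for the quantitative comparison across users).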
The last piece of the Event Style process is being flexible enough to add new events to record and capture in the middle of a session. When trying to replicate real-world workflows, it can be nearly impossible to predict and plan for all the paths users may take to work through events. Unforeseen or unanticipated events may also occur in context that would not occur in a highly structured usability test. Event testing allows these newly observed events to be captured mid-test so that they can be reported on later.
Benefits of Event Style Testing
Event Style Testing provides a number of benefits and deliverables that cannot normally be obtained in a usability test. These include:
Workflow Analysis
Since the order in which events take place is recorded during the study, a full task flow can be created and analyzed. This is particularly useful for systems or software where very little is known about how users set up or experience them, especially in complex systems where multiple tools or software packages may be needed. The resulting task flows can show where various tools and software are frequently used, in what order, and for what purpose. In the case of the telephony product we tested, the task flows highlighted at what points different software products were used for configurations and at what points users stopped to verify correct setups and troubleshoot problems.
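One simple way to summarize task flows across participants (a sketch, assuming each session reduces to an ordered list of event names; the event names below are made up) is to count how often one event directly follows another:

```python
from collections import Counter
from typing import List, Tuple

def transition_counts(sessions: List[List[str]]) -> "Counter[Tuple[str, str]]":
    """Count how often each event directly follows another across all sessions."""
    counts: Counter = Counter()
    for workflow in sessions:
        # Pair each event with the one that immediately follows it
        for prev, nxt in zip(workflow, workflow[1:]):
            counts[(prev, nxt)] += 1
    return counts

# Hypothetical workflows from two participants
sessions = [
    ["install software", "configure phone", "test call"],
    ["configure phone", "install software", "configure phone", "test call"],
]

# The most common transitions hint at the dominant natural workflow
common = transition_counts(sessions).most_common(3)
```

Transitions that nearly every participant shares suggest a natural ordering worth supporting in the product, while rare or backtracking transitions can flag points where users stopped to verify or troubleshoot.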
Quantitative data collection
While the natural workflow analysis can be observed, quantitative data such as errors, assists, and usability ratings can still be collected and compared across users. This creates a more holistic view of the experience being evaluated from a qualitative and quantitative standpoint, the best of both worlds.
--
Event Style Usability Testing has become a powerful method in our user experience toolbox for bridging the divide between usability testing and ethnographic research. We are always looking for new takes on old methods and hope others may find this useful as well!