Usability Reflections: Analyzing Remote Usability Test Results

This week, we compile the results from the Loop11 test and generate a high-level report of the findings.

The Takeaway

Poring over all of the raw data was a bit of a chore. Quantitative data is great, but without context I had to make a lot of assumptions about what was going through a participant’s mind at any given point. That said, this was a pretty fun assignment. I may have gotten carried away with making the graphics for it.

Some challenges to the remote approach

Small Sample Size

With only five participants in this test, a single person’s results carry far more weight and are much more noticeable. One participant train-wrecked through the entire process and left vague negative feedback, dragging down the average score for every task. Since piecing together their sentiment from raw data alone is impossible, I can only assume that this person simply did not like the site.
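
To put rough numbers on how much one person can skew a five-person average, here is a quick sketch with invented task scores (none of these values come from the actual test). It also shows why the median holds up better than the mean against a single outlier:

```python
import statistics

# Hypothetical task scores (0-100) for five participants; p5 stands in
# for the participant who struggled with everything. These numbers are
# invented for illustration, not the real test data.
scores = {
    "p1": [85, 90, 80],
    "p2": [75, 88, 82],
    "p3": [90, 95, 85],
    "p4": [80, 85, 78],
    "p5": [20, 15, 25],  # the outlier
}

# Transpose to per-task score tuples and compare mean vs. median.
for i, task in enumerate(zip(*scores.values()), start=1):
    print(f"Task {i}: mean={statistics.mean(task):.1f}, "
          f"mean w/o outlier={statistics.mean(task[:-1]):.1f}, "
          f"median={statistics.median(task):.1f}")
```

For Task 1 the mean drops from 82.5 to 70.0 because of one participant, while the median stays at 80.0. With five people, reporting the median alongside the mean would make the outlier's pull obvious at a glance.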

Loss of context can lead to unanswered questions

Looking at the navigation paths, I noticed participants backtracking to the home page even after they reached the “success” URL. Did they get lost? Were they just procrastinating before clicking Complete? Co-location with the participant during the test would provide a simple answer, but that context is lost when looking at raw results after the fact.
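
At least spotting the pattern can be mechanical. Here is a minimal sketch, assuming the navigation paths can be exported as an ordered list of visited URLs per participant; the URLs and the `SUCCESS_URL` below are placeholders, not the actual test pages:

```python
SUCCESS_URL = "https://example.com/order/confirmation"  # placeholder
HOME_URL = "https://example.com/"                       # placeholder

def backtracked_after_success(path):
    """True if the participant revisited the home page after first
    reaching the success URL."""
    try:
        reached = path.index(SUCCESS_URL)
    except ValueError:
        return False  # never reached the success page at all
    return HOME_URL in path[reached + 1:]

# Invented path: success is reached, then the participant bounces home.
path = [HOME_URL, "https://example.com/products", SUCCESS_URL, HOME_URL]
print(backtracked_after_success(path))  # True
```

That tells you who backtracked, but not why; answering that would still require talking to the participant.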

Participants clicked on… ads?

A number of failures were due to users clicking on URLs belonging to Google’s AdSense network. I’m not sure whether this is an actual phenomenon (people in 2015 clicking on banner advertisements) or a technical issue related to the proxying that Loop11 does. Undoubtedly, this will go down as the biggest mystery of the 21st century.
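
One way to sanity-check this would be to bucket the failure URLs by host before reading too much into them. A sketch, assuming the failure URLs can be pulled out of the export; the host list covers Google’s usual ad-serving domains, and the sample URLs are invented:

```python
from urllib.parse import urlparse

# Hosts commonly used by Google's ad network.
AD_HOST_SUFFIXES = (
    "doubleclick.net",
    "googlesyndication.com",
    "googleadservices.com",
)

def is_ad_click(url):
    host = urlparse(url).hostname or ""
    return host.endswith(AD_HOST_SUFFIXES)

# Invented failure URLs for illustration.
failures = [
    "https://googleads.g.doubleclick.net/pagead/ads?client=ca-pub-0000",
    "https://example.com/search?q=wrong+page",
]
ad_clicks = [u for u in failures if is_ad_click(u)]
print(f"{len(ad_clicks)} of {len(failures)} failures were ad clicks")
```

If most of the “failures” turn out to be ad-domain clicks, that points at the proxy rather than the participants.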

