Usability Reflections: Analyzing Eye Tracking Data
The final post in the Usability Reflections series.
This week’s assignment was to watch a series of usability test recordings made by previous classes with an eye tracking setup. It’s one thing to look at static heatmaps of gaze patterns, but quite another to watch gaze data unfold in real time.
Tough to build a narrative
I may just be out of it this week, but I had to watch the videos several times and go back to the readings before I could construct any kind of meaningful explanation of the results, even with the participants thinking aloud. Even at the end, I’m not sure how accurately my writeup captured their performance. This is probably another one of those things that just takes practice.
The task determines the value of collecting eye tracking data
Part of the assignment was to analyze the task that the users were asked to perform and suggest ways to improve upon it. The task itself—the digital equivalent of asking someone to search for a needle in a stack of needles—was not well suited to getting good value from watching the user’s eye movements.
One good way to get the most out of an eye tracking session is to send the user on an exploration rather than a scavenger hunt. That way, you get to see what they like rather than what does or doesn’t fit a narrow definition of correctness.