Fauxvea: Crowdsourcing Gaze Location Estimates for Visualization Analysis Tasks

IEEE Trans Vis Comput Graph. 2017 Feb;23(2):1042-1055. doi: 10.1109/TVCG.2016.2532331. Epub 2016 Feb 19.

Abstract

We present the design and evaluation of a method for estimating gaze locations during the analysis of static visualizations using crowdsourcing. Understanding gaze patterns is helpful for evaluating visualizations and user behaviors, but traditional eye-tracking studies require specialized hardware and local users. To avoid these constraints, we developed a method called Fauxvea, which crowdsources visualization tasks on the Web and estimates gaze fixations through cursor interactions without eye-tracking hardware. We ran experiments to evaluate how gaze estimates from our method compare with eye-tracking data. First, we evaluated crowdsourced estimates for three common types of information visualizations and basic visualization tasks using Amazon Mechanical Turk (MTurk). Second, we reproduced findings from a previous eye-tracking study on tree layouts using our method on MTurk. Results from these experiments show that fixation estimates using Fauxvea are qualitatively and quantitatively similar to eye-tracking data on the same stimulus-task pairs. These findings suggest that crowdsourcing visual analysis tasks with static information visualizations could be a viable alternative to traditional eye-tracking studies for visualization research and design.
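To give a flavor of how cursor traces can stand in for gaze data, the sketch below clusters timestamped cursor samples into fixation-like estimates using a simple dispersion-threshold (I-DT-style) rule. This is an illustrative assumption, not the Fauxvea algorithm itself; the function name, thresholds, and sample format are all hypothetical.

```python
# Hypothetical sketch: dispersion-threshold (I-DT-style) clustering of cursor
# samples into fixation estimates. Illustrates the general idea of deriving
# fixation-like clusters from cursor data; NOT the actual Fauxvea method.

def detect_fixations(samples, max_dispersion=25.0, min_duration=100.0):
    """Group (t_ms, x, y) cursor samples into fixation estimates.

    A fixation is reported when consecutive samples stay within
    `max_dispersion` pixels (bounding-box width + height) for at least
    `min_duration` milliseconds. Returns (centroid_x, centroid_y,
    duration_ms) tuples.
    """
    fixations = []
    i, n = 0, len(samples)
    while i < n:
        j = i
        # Grow the window while bounding-box dispersion stays small.
        while j + 1 < n:
            xs = [s[1] for s in samples[i:j + 2]]
            ys = [s[2] for s in samples[i:j + 2]]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                break
            j += 1
        duration = samples[j][0] - samples[i][0]
        if duration >= min_duration:
            window = samples[i:j + 1]
            cx = sum(s[1] for s in window) / len(window)
            cy = sum(s[2] for s in window) / len(window)
            fixations.append((cx, cy, duration))
            i = j + 1
        else:
            i += 1
    return fixations

# Synthetic trace: a dwell near (100, 200), then a jump and dwell at (400, 50),
# sampled every 20 ms.
samples = [(t * 20, 100 + (t % 3), 200 + (t % 2)) for t in range(10)]
samples += [(200 + t * 20, 400.0, 50.0) for t in range(10)]
fixations = detect_fixations(samples)
# → two fixations: one near (100.9, 200.5), one at (400.0, 50.0)
```

In a cursor-based setting such as the one the abstract describes, the dispersion and duration thresholds would need to be tuned to cursor dynamics rather than eye movements, since cursor dwells are slower and coarser than saccade-fixation patterns.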

Publication types

  • Research Support, U.S. Gov't, Non-P.H.S.

MeSH terms

  • Adult
  • Attention
  • Crowdsourcing / methods*
  • Eye Movement Measurements*
  • Female
  • Fixation, Ocular / physiology*
  • Humans
  • Internet*
  • Male
  • Task Performance and Analysis