Toward theoretical techniques for measuring the use of human effort in visual analytic systems.
Published in IEEE Transactions on Visualization and Computer Graphics, 2017
Abstract: Visual analytic systems have long relied on user studies and standard datasets to demonstrate advances to the state of the art, as well as to illustrate the efficiency of solutions to domain-specific challenges. This approach has enabled some important comparisons between systems, but unfortunately the narrow scope required to facilitate these comparisons has prevented many of these lessons from being generalized to new areas. At the same time, advanced visual analytic systems have made increasing use of human-machine collaboration to solve problems not tractable by machine computation alone. To continue to make progress in modeling user tasks in these hybrid visual analytic systems, we must strive to gain insight into what makes certain tasks more complex than others. This will require the development of mechanisms for describing the balance to be struck between machine and human strengths with respect to analytical tasks and workload. In this paper, we argue for the necessity of theoretical tools for reasoning about such balance in visual analytic systems and demonstrate the utility of the Human Oracle Model for this purpose in the context of sensemaking in visual analytics. Additionally, we make use of the Human Oracle Model to guide the development of a new system through a case study in the domain of cybersecurity.
Recommended citation: R. Jordan Crouser, Lyndsey Franklin, Alex Endert, and Kris Cook. Toward theoretical techniques for measuring the use of human effort in visual analytic systems. IEEE Transactions on Visualization and Computer Graphics, 23(1):121–130, 2017.