Scientific Visualization, 2021, volume 13, number 2, pages 50 - 66, DOI: 10.26583/sv.13.2.04
Visual Analytics of Gaze Tracks in Virtual Reality Environment
Authors: K.V. Ryabinin 1,A,B, K.I. Belousov 2,A,B
A Saint Petersburg State University, Saint Petersburg, Russia
B Perm State University, Perm, Russia
1 ORCID: 0000-0002-8353-7641, kostya.ryabinin@gmail.com
2 ORCID: 0000-0003-4447-1288, belousovki@gmail.com
Abstract
The paper is devoted to the development of software tools that support eye-tracking-based research in an immersive virtual reality environment. Eye tracking is a popular technology for studying human behavior because it provides objective metrics for estimating human perception strategies. The corresponding hardware evolves rapidly, and nowadays its ergonomics and accessibility make it suitable for a wide range of research. Recently, eye tracking devices have been combined with head-mounted virtual reality displays, which makes it possible to detect which virtual objects the user is looking at. This, in turn, opens three new development directions. First, new interaction methods emerge, in which the user can select objects with a gaze. Second, new ways of presenting virtual reality become possible, such as foveated rendering (a graphics rendering optimization that uses the eye tracker to locate the zone the user is looking at, increases the image quality in that zone, and decreases the image quality in the peripheral vision). Third, new opportunities emerge for carrying out eye-tracking-based research on human behavior, where the spectrum of possible experiments expands dramatically compared to what is achievable in the real world. In this paper, we focus on the third direction.
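For illustration, the following minimal Python sketch captures the foveated rendering idea outlined above: the rendering quality of a fragment falls off with its angular distance from the tracked gaze direction. The function name and the angular thresholds are hypothetical and are not part of the actual rendering setup described in the paper.

```python
import math

def foveation_level(gaze_dir, pixel_dir, inner_deg=5.0, outer_deg=20.0):
    """Return a rendering quality factor in [0, 1] for a pixel (or tile),
    based on its angular distance from the tracked gaze direction.
    Both directions are unit 3D vectors; the thresholds are illustrative."""
    # Angle between the gaze direction and the direction to the pixel.
    cos_angle = max(-1.0, min(1.0, sum(g * p for g, p in zip(gaze_dir, pixel_dir))))
    angle_deg = math.degrees(math.acos(cos_angle))
    if angle_deg <= inner_deg:
        return 1.0   # foveal zone: full quality
    if angle_deg >= outer_deg:
        return 0.25  # periphery: strongly reduced quality
    # Linear falloff between the foveal zone and the periphery.
    t = (angle_deg - inner_deg) / (outer_deg - inner_deg)
    return 1.0 - 0.75 * t
```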
While there is a lot of mature software to support traditional eye-tracking-based experiments, virtual reality brings new challenges not yet tackled by existing tools. The main challenge is the seamless integration of eye tracking analytics tools with virtual reality engines. In the present work, we address this challenge by proposing a flexible data mining and visual analytics pipeline based on the ontology-driven platform SciVi that deeply integrates with the virtual scene rendered by Unreal Engine and displayed on the HTC Vive Pro Eye head-mounted display.
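To make the integration idea more concrete, the sketch below shows the kind of gaze record that could flow from the virtual reality engine to the analytics pipeline: each eye tracking sample is mapped onto the scene object hit by the gaze ray and serialized for streaming. The data structure and field names are assumptions for illustration; they do not reproduce the actual SciVi or Unreal Engine interfaces.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class GazeSample:
    """One eye tracking sample mapped onto the virtual scene.
    Field names are illustrative, not the actual SciVi/Unreal protocol."""
    timestamp: float  # time of the sample, seconds
    object_id: str    # scene object hit by the gaze ray, if any
    hit_point: tuple  # intersection point in world coordinates

def to_analytics_message(sample: GazeSample) -> str:
    """Serialize a sample for streaming to the analytics pipeline
    (e.g., over a network connection)."""
    return json.dumps(asdict(sample))

# Example: a gaze ray that hit a particular word in the virtual scene.
msg = to_analytics_message(
    GazeSample(timestamp=time.time(), object_id="word_017",
               hit_point=(1.2, 0.4, 2.0)))
```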
We are interested in using eye tracking to study the reading process in immersive virtual reality. While the reading process under normal conditions has been studied quite well, there is a lack of corresponding research related to the virtual reality environment. To the best of our knowledge, only one such attempt has been reported in the literature so far, and it considers the reading of short phrases. In contrast, we plan to examine the reading of complete texts. The aim of the present work is to develop the software tools needed to support eye-tracking-based reading experiments in virtual reality and to obtain preliminary results.
To enable the visual mining of eye tracking data obtained in the reading experiments, we propose a new modification of the well-known radial transition graph that allows visually inspecting the scanpaths (sequences of eye fixations, i.e. moments when the eyes are stationary, and alternating saccades, i.e. moments when the eyes rapidly move between viewing positions). Our modification is based on the SciVi::CGraph visualization module, which performs well in handling large graphs and provides advanced search and filtering capabilities. The distinctiveness of our modification is the efficient tackling of the so-called “hairball problem” (visual clutter in the image that appears due to the large amount of data displayed at once) when inspecting fixations on a large number of interest areas (areas within which the gaze is tracked). This is achieved by two main features. The first one is the concise yet comprehensive representation of interest areas as graph nodes color-coded according to the fixation count, while the dwell time is depicted by a radial histogram on top of the nodes. The second one is advanced filtering with the re-tracing function, which removes short intermediate fixations and merges the corresponding saccades, thereby enabling analysts to focus on the most significant parts of the scanpath under study.
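The re-tracing idea can be illustrated by the following Python sketch: fixations shorter than a duration threshold are dropped, and consecutive fixations that then fall on the same interest area are merged, which corresponds to merging the saccades around the removed fixations. The class, function name, and threshold value are assumptions for illustration, not the actual SciVi::CGraph implementation.

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    """A single eye fixation on an interest area (names are illustrative)."""
    area: str          # interest area identifier, e.g. a word in the text
    duration_ms: float

def retrace(scanpath, min_duration_ms=100.0):
    """Drop fixations shorter than the threshold and merge the saccades
    around them, so only the most significant transitions of the
    scanpath remain. The threshold is an assumed parameter."""
    filtered = [f for f in scanpath if f.duration_ms >= min_duration_ms]
    # Collapse consecutive fixations on the same interest area that may
    # appear after the removal, summing up their dwell time.
    merged = []
    for f in filtered:
        if merged and merged[-1].area == f.area:
            merged[-1].duration_ms += f.duration_ms
        else:
            merged.append(Fixation(f.area, f.duration_ms))
    return merged
```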
The above features make it possible to study the reading process at the word level (when each word is an individual area of interest). So far, we have implemented and tested the experimental setup, which includes an appropriate virtual reality scene and a dedicated visual analytics pipeline. Next, we plan to extend our pipeline with other eye tracking metrics and conduct the reading experiments.
Keywords: Visual Analytics, Data Mining, Eye Tracking, Virtual Reality, Circular Graph, Reading, Ontology Engineering, HTC Vive Pro Eye, Unreal Engine.