Intention-Aware Gaze-Based Assistance on Maps (IGAMaps)

Adaptive human-computer interfaces can significantly facilitate interaction with digital maps in application scenarios such as navigation and information search, and for users with physical limitations. A map system that recognizes its user's information requirements quickly and accurately could adapt the map content, thus increasing the effectiveness and efficiency of the interaction. Since maps are primarily perceived and explored visually, it is plausible to assume that a user's gaze on a map can be used to recognize intentions and activities, which can then trigger map adaptation.

The main goal of this research project is to investigate methods for recognizing activities and intentions from visual attention during the interaction with maps. The project addresses the following questions:
- Can we infer a map user's activity and intention from the visual attention they devote to the map (e.g., route planning, searching for a restaurant)?
- How should a map adaptation be designed to be helpful?
- What is the general user acceptance of gaze-based intention recognition on maps?
Diploma Thesis: "A Concept to Leverage the Visual Periphery for Parallel Information"

Over the last years, display screens have improved drastically, not only in resolution but also in size. Many applications use this space to present more information and to support multiple tasks, while still applying conventional UI paradigms. Since people's visual focus of attention is limited by human factors, intelligent UIs have to be developed to take better advantage of the gained space. This work investigates using the visual periphery to display information while the user focuses on a primary task. Based on an understanding of the human visual system, aspects vital to perceiving information within the field of view were tested in a lab study. The results indicate that secondary information can be perceived quickly and reliably in the periphery without significantly affecting a primary task. Carefully considering the study results and the participants' feedback, a concept for a novel visually attentive UI is elaborated. It combines eye tracking with intelligent UI adaptations to make more information perceivable without affecting primary task performance and thus use large displays more efficiently.
Gaze-supported Foot Interaction

When working with zoomable information spaces, complex tasks can be divided into primary and secondary tasks (e.g., pan and zoom). In this context, a multimodal combination of gaze and foot input is highly promising for supporting interactions that are otherwise performed manually, for example, with mouse and keyboard. Motivated by this, we present several alternatives for multimodal gaze-supported foot interaction for pan and zoom in a desktop computer setup. While eye gaze is ideal for indicating a user's current point of interest and where to zoom in, foot interaction is well suited for parallel input control, for example, to specify the zooming speed. Our investigation focuses on various foot input devices differing in their degrees of freedom (e.g., one- and two-directional foot pedals) that can be seamlessly combined with gaze input.
DepthTouch

DepthTouch is an installation that explores future interactive surfaces and features elastic feedback, allowing the user to go deeper than with regular multi-touch surfaces. DepthTouch's elastic display allows the user to create valleys and ascending slopes by depressing or grabbing its textile surface. We describe the experimental approach for eliciting appropriate interaction metaphors from interaction with real materials, and the resulting digital prototype.
|2018||Göbel F., Kiefer P., Giannopoulos I. and Raubal M. (2018).
Gaze Sequences and Map Task Complexity. In Proceedings of the 10th International Conference on Geographic Information Science (GIScience 2018), Melbourne, Australia.
|2018||Göbel F. and Martin H. (2018).
Unsupervised Clustering of Eye Tracking Data. In Spatial Big Data and Machine Learning in GIScience, Workshop at GIScience 2018, Melbourne, Australia.
|2018||Göbel F., Kiefer P., Giannopoulos I., Duchowski, A.T. and Raubal M. (2018).
Improving Map Reading with Gaze-Adaptive Legends. In ETRA ’18: 2018 Symposium on Eye Tracking Research and Applications.
|2018||Göbel F., Giannopoulos I., Kiefer P., Raubal M. and Duchowski, A.T. (2018).
ET4S – Eye Tracking for Spatial Research, Proceedings of the 3rd International Workshop.
|2018||Göbel F., Bakogiannis N., Henggeler K., Tschümperlin R., Xu Y., Kiefer P. and Raubal M. (2018).
A Public Gaze-Controlled Campus Map. 3rd International Workshop on Eye Tracking for Spatial Research.
|2017||Göbel F., Kiefer P. and Raubal M. (2017).
FeaturEyeTrack: A Vector Tile-Based Eye Tracking Framework for Interactive Maps.
In Societal Geo-Innovation: Short Papers, Posters and Poster Abstracts of the 20th AGILE Conference on Geographic Information Science, Editors: A. Bregt, T. Sarjakoski, R. van Lammeren, F. Rip.
|2016||Göbel F., Giannopoulos I. and Raubal M. (2016).
The Importance of Visual Attention for Adaptive Interfaces.
In Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct (MobileHCI '16), Florence, Italy
|2015||Klamka K., Siegel A., Vogt S.,
Göbel F., Stellmach S. and Dachselt R. (2015). Look & Pedal: Hands-free Navigation in Zoomable Information
Spaces through Gaze-supported Foot Input. In Proceedings of the 2015 ACM International Conference on Multimodal
Interaction (ICMI '15). ACM, New York, NY, USA, 123-130.
|2013||Göbel F., Klamka K., Siegel A., Vogt S., Stellmach S. and Dachselt R. (2013). Gaze-supported Foot Interaction in
Zoomable Information Spaces. Interactivity at CHI '13. In Proc. of CHI '13 Extended Abstracts on Human
Factors in Computing Systems (CHI EA '13). ACM, New York, NY, USA, 3059-3062.
|2013||Göbel F., Klamka K., Siegel A., Vogt S., Stellmach S. and Dachselt R. (2013). Gaze-supported Foot Interaction
in Zoomable Information Spaces. In Proc. of the CHI '13 Workshop on "Gaze Interaction in the Post-WIMP
World". Paris, France, April 27, 2013.
|2012||Göbel F. and Groh R. (2012). DepthTouch: Elastische Membran zwischen virtuellem und realem Raum
[DepthTouch: An Elastic Membrane between Virtual and Real Space]. In: Reiterer, H. & Deussen, O. (Eds.),
Mensch & Computer 2012 – Workshopband. München: Oldenbourg Verlag, pp. 493-496.
|2012||Göbel F., Gründer T., Keck M., Kammer D. and Groh R. (2012). DepthTouch: An Elastic Surface for Tangible
Computing. In Proceedings of the International Working Conference on Advanced Visual Interfaces (AVI '12),
Genny Tortora, Stefano Levialdi, and Maurizio Tucci (Eds.). ACM, New York, NY, USA, 770-771.
Academic Honors and Achievements
|2014||Visiting student (1 week) at Microsoft Research Cambridge, United Kingdom|
|2013||Presentation "Gaze-supported Foot Interaction in Zoomable Information Spaces" at the workshop "Gaze Interaction in the Post-WIMP World", ACM CHI Conference on Human Factors in Computing Systems|
|2012||Mensch und Computer 2012 "Best Demo" award for the project "DepthTouch"|
Other Professional Activities
|2013||Demonstration of "Gaze-supported Foot Interaction in Zoomable Information Spaces" as an Interactivity exhibit at the ACM Conference on Human Factors in Computing Systems (CHI '13), Paris, France|
|2013||Presentation of "Gaze-supported Foot Interaction in Zoomable Information Spaces" at the CHI '13 Workshop on "Gaze Interaction in the Post-WIMP World", Paris, France|
|2013||Demonstration of “Gaze-supported Foot Interaction in Zoomable Information Spaces“ at "OUTPUT.DD", Dresden, Germany|
|2012||Demonstration of "DepthTouch" at "OUTPUT.DD", Dresden, Germany|
|2012||Demonstration of "DepthTouch" at the "MB21-Festival 2012", a multimedia festival for children, Medienkulturzentrum Dresden e.V., Dresden, Germany|