Gaze-based Interactions in Geovisualisations

One article can describe multiple GBIs. Each GBI is identified by a unique identifier: the first number is the ID of the article in which the GBI was found, and the second number is the ID of the GBI within that article. Color represents the tracker type.
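For illustration, each plotted GBI can be thought of as a small record like the one below. This is a minimal sketch: the field names and the "." separator in the identifier are assumptions, not taken from the underlying dataset.

```typescript
// Hypothetical shape of one GBI record; field names are illustrative only.
type TrackerType = "Remote" | "Head-mounted" | "Inferred" | "Mouse" | "Integrated";

interface GBI {
  articleId: number;    // first number: ID of the article where the GBI was found
  gbiId: number;        // second number: ID of the GBI within that article
  tracker: TrackerType; // drives the marker color in the map
}

// The unique identifier joins both numbers, e.g. article 22, GBI 1 -> "22.1".
const gbiKey = (g: GBI): string => `${g.articleId}.${g.gbiId}`;
```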

Tracker type

Remote (n=36)
Head-mounted (n=14)
Inferred (n=2)
Mouse (n=1)
Integrated (n=1)

Evidence Map

The map is a matrix: each row is a carto-type of GBI, the columns are the reported benefits and limits, and Σ gives the total number of GBIs of that type. Cell shading encodes how many GBIs in a row report the column's property (see the intensity legend below).

Benefit columns: Hands-Free Usability, Higher Engagement, Quicker Task Completion, Improved Contextualization, Reduced Cognitive Load, Other.

Limit columns: Tracker Accuracy, Undesired Activations, User Fatigue, Learning Curve, Visual Distraction, Incomplete Evaluation.

Carto-types (rows):

Blink-Only Gesturing
Gaze-Only Dwell Activation
Gaze-Only Dwell Feature Selection
Gaze-Only Minimap Navigation
Gaze-Pivot Central Zooming
Gaze-Pivot Localised Zooming
Continuous Gesturing
Gaze-Continuous Enabling
Gaze-Directed Continuous Panning
Gaze-Contingent Layer Fusion
Gaze-Contingent Magnification
Gaze-Only Bookmarking
Gaze-Only Navigation Assistance
Off-Map Gaze-Activated Context
On-Map Gaze-Activated Context
On-Map Gaze-Locked Context
Gaze Point Only Sharing
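Under an assumed record shape like the one sketched above, each cell of the matrix is a count: how many GBIs of the row's carto-type report the column's benefit or limit, and the row's Σ is the number of such GBIs. A minimal sketch, with hypothetical field names:

```typescript
// Hypothetical record of one GBI with its evidence-map dimensions.
interface MappedGBI {
  cartoType: string;  // row, e.g. "Gaze-Only Dwell Activation"
  benefits: string[]; // reported benefits, e.g. ["Hands-Free Usability"]
  limits: string[];   // reported limits, e.g. ["Tracker Accuracy"]
}

// For one carto-type, count how many GBIs report each benefit/limit.
function cellCounts(gbis: MappedGBI[], cartoType: string): Map<string, number> {
  const rows = gbis.filter((g) => g.cartoType === cartoType);
  const counts = new Map<string, number>();
  for (const g of rows) {
    for (const property of [...g.benefits, ...g.limits]) {
      counts.set(property, (counts.get(property) ?? 0) + 1);
    }
  }
  return counts; // the row's Σ is simply rows.length
}
```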

Color intensity indicates the count of GBIs with that property, on a scale from 1 to 9+ (stops at 1, 3, 5, 7, 9+), with separate scales for Benefits and Limits.
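One plausible reading of this legend is a step function with discrete buckets at the labelled stops; the sketch below encodes that assumed binning.

```typescript
// Map a cell count to its legend bucket (assumed discrete stops: 1, 3, 5, 7, 9+).
function intensityBucket(count: number): "1" | "3" | "5" | "7" | "9+" {
  if (count >= 9) return "9+";
  if (count >= 7) return "7";
  if (count >= 5) return "5";
  if (count >= 3) return "3";
  return "1";
}
```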

Intent Type and Modality of the GBIs:

Intent Types:

Active Discrete Command
Active Continuous Command
Passive Gaze-Contingent Rendering
Passive Gaze-Informed Adaptation
Passive Gaze Sharing

Modality:

Single Modality (Sole Gaze)
Combined Modality
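These two classification axes map naturally onto union types. A hypothetical encoding, using the literal spellings from the lists above:

```typescript
// Hypothetical encoding of the two classification axes of a GBI.
type IntentType =
  | "Active Discrete Command"
  | "Active Continuous Command"
  | "Passive Gaze-Contingent Rendering"
  | "Passive Gaze-Informed Adaptation"
  | "Passive Gaze Sharing";

// Single = sole gaze; Combined = gaze plus another input modality.
type Modality = "Single" | "Combined";
```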

Notes

  • [a] Reference IDs are assigned in order of publication year, providing an additional layer of information.
  • [b] Data last updated: 2024/11.
  • [c] Created by ANONYMIZED.

References

  • [1] Nikolov, S.G., Bull, D.R., Canagarajah, C.N., Jones, M.G., Gilchrist, I.D. (2002). Multi-modality gaze-contingent displays for image fusion. DOI: 10.1109/ICIF.2002.1020951
  • [2] Nikolov, S.G., Bull, D.R., Gilchrist, I.D. (2003). Gaze-Contingent Multi-modality Displays of Multi-layered Geographical Maps. DOI: 10.1007/3-540-36487-0_36
  • [3] Eaddy, M., Blasko, G., Babcock, J., Feiner, S. (2004). My own private kiosk: Privacy-preserving public displays. DOI: 10.1109/ISWC.2004.32
  • [4] Gepner, D., Simonin, J., Carbonell, N. (2007). Gaze as a Supplementary Modality for Interacting with Ambient Intelligence Environments. DOI: 10.1007/978-3-540-73281-5_93
  • [5] Nétek, R. (2011). Possibilities of contactless control of web map applications by sight. DOI: 10.14311/gi.7.5
  • [6] Bektaş, K., Çöltekin, A. (2011). An Approach to Modeling Spatial Perception for Geovisualization. DOI: 10.1016/j.sbspro.2011.07.027
  • [7] Stellmach, S., Dachselt, R. (2012). Investigating Gaze-supported Multimodal Pan and Zoom. DOI: 10.1145/2168556.2168636
  • [8] Giannopoulos, I., Kiefer, P., Raubal, M. (2012). GeoGazemarks: providing gaze history for the orientation on small display maps. DOI: 10.1145/2388676.2388711
  • [9] Giannopoulos, I., Kiefer, P., Raubal, M. (2013). The influence of gaze history visualization on map interaction sequences and cognitive maps. DOI: 10.1145/2534931.2534940
  • [10] Pfeuffer, K., Zhang, Y., Gellersen, H. (2015). A collaborative gaze aware information display. DOI: 10.1145/2800835.2800922
  • [11] Klamka, K., Siegel, A., Vogt, S., Göbel, F., Stellmach, S., Dachselt, R. (2015). Look & pedal: Hands-free navigation in zoomable information spaces through gaze-supported foot input. DOI: 10.1145/2818346.2820751
  • [12] Bektaş, K., Çöltekin, A., Krüger, J., Duchowski, A.T. (2015). A Testbed Combining Visual Perception Models for Geographic Gaze Contingent Displays. DOI: 10.2312/eurovisshort.20151127
  • [13] Çöltekin, A., Hempel, J., Brychtova, A., Giannopoulos, I., Stellmach, S., Dachselt, R. (2016). Gaze and Feet as Additional Input Modalities for Interacting with Geospatial Interfaces. DOI: 10.5194/isprs-annals-III-2-113-2016
  • [14] Tateosian, L.G., Glatz, M., Shukunobe, M., Chopra, P. (2017). GazeGIS: A Gaze-Based Reading and Dynamic Geographic Information System. DOI: 10.1007/978-3-319-47024-5_8
  • [15] Göbel, F., Kiefer, P., Giannopoulos, I., Duchowski, A.T., Raubal, M. (2018). Improving map reading with gaze-adaptive legends. DOI: 10.1145/3204493.3204544
  • [16] Bektaş, K., Çöltekin, A., Krüger, J., Duchowski, A.T., Fabrikant, S.I. (2019). GeoGCD: improved visual search via gaze-contingent display. DOI: 10.1145/3317959.3321488
  • [17] Göbel, F., Kiefer, P. (2019). POITrack: improving map-based planning with implicit POI tracking. DOI: 10.1145/3317959.3321491
  • [18] Göbel, F., Kurzhals, K., Schinazi, V.R., Kiefer, P., Raubal, M. (2020). Gaze-adaptive lenses for feature-rich information spaces. DOI: 10.1145/3379155.3391323
  • [19] Xie, Y., Wang, H., Luo, C., Yang, Z., Zhan, Y. (2021). GazeMetro: A Gaze-Based Interactive System for Metro Map. DOI: 10.1145/3441852.3476569
  • [20] Pfeuffer, K., Alexander, J., Gellersen, H. (2021). Multi-user Gaze-based Interaction Techniques on Collaborative Touchscreens. DOI: 10.1145/3448018.3458016
  • [21] Putra, H.F., Ogata, K. (2022). Navigating through Google Maps Using an Eye-Gaze Interface System. DOI: 10.24507/ijicic.18.02.417
  • [22] Liao, H., Zhang, C., Zhao, W., Dong, W. (2022). Toward Gaze-Based Map Interactions: Determining the Dwell Time and Buffer Size for the Gaze-Based Selection of Map Features. DOI: 10.3390/ijgi11020127
  • [23] Zhang, H., Hu, Y., Zhu, J., Fu, L., Xu, B., Li, W. (2022). A gaze-based interaction method for large-scale and large-space disaster scenes within mobile virtual reality. DOI: 10.1111/tgis.12914
  • [24] Zhang, C., Liao, H., Huang, Y., Dong, W. (2023). Evaluating the Usability of a Gaze-Adaptive Approach for Identifying and Comparing Raster Values between Multilayers. DOI: 10.3390/ijgi12100412
  • [25] Chalimas, T., Mania, K. (2023). Cross-Device Augmented Reality Systems for Fire and Rescue based on Thermal Imaging and Live Tracking. DOI: 10.1109/ISMAR-Adjunct60411.2023.00018
  • [26] Zhang, C., Liao, H., Meng, J. (2024). Evaluating the performance of gaze interaction for map target selection. DOI: 10.1080/15230406.2024.2335331