Confidential Interaction using Eye-Contact (CUE)

In a crowded room you cannot escape being observed, but you may want to discreetly direct your partner’s attention towards an object of interest. To do this, you cannot use language, and you cannot point. What you can do, though, is use your eyes to lead your partner to the object and thus establish joint attention. In a complex environment containing many different objects, however, you have to avoid misunderstandings. This project is concerned with how we avoid or correct misunderstandings when establishing joint attention through the exchange of gaze signals, supplemented by quiet vocal feedback such as ‘mmhm’. Depending on the strategies used, outside observers may have difficulty following the interaction. We therefore also address to what extent joint attention interactions are exclusive and which mechanisms enhance this exclusivity, both when participants are not explicitly hiding their attentional focus and when they are attempting to maintain secrecy.

We record eye gaze and vocalization in three-party face-to-face interaction with mobile eye-tracking glasses and synchronized high-quality audio and video recordings. Participants sit around a table on which an assortment of objects is placed. One participant (the ‘Sender’) has to convey the position of certain target objects to another participant (the ‘Receiver’) using gaze and vocal feedback, gaze alone, or vocal feedback alone. The third participant (the ‘Observer’) watches the interaction and tries to guess the target objects. We assess whether the two interaction partners can influence whether the Observer is able to follow the interaction and thus intentionally increase the confidentiality of joint attention. This study will elucidate the role and potential of joint attention in adult everyday life.
In addition, the results will show how gaze signals are integrated with other nonverbal modalities and whether nonverbal communication channels can substitute for one another. The setup used in our investigation, including a comprehensive description and documentation, will be made available to benefit other research. To this end, we will keep the barriers to using the setup low by building it from scratch from comparatively affordable components and using open-source software solutions. Scientifically and methodologically, this study will further the field of gaze research, integrating it with multimodal communication research and supporting interdisciplinary collaborations.

Deutsche Forschungsgemeinschaft (DFG) - Project number 563096772

Prof. Dr. Martine Grice

Institute for Linguistics – Phonetics, University of Cologne

Prof. Dr. Martine Grice is Professor of Phonetics at the Institute for Linguistics at the University of Cologne. She is Principal Investigator of the Key Profile Area ‘Skills and Structures in Language and Cognition’ and a board member of the Collaborative Research Centre SFB1252 ‘Prominence in Language’. Her research focuses on intonation. This includes intonational phonology, the multidimensionality of prosodic categories, how intonational tones are associated with metrical structure, and how these two levels of representation interact. It also includes the intentions conveyed by prosody, especially when asking questions, the prosodic signalling of the points at which common ground is established, and the role of prosody in attention orienting. In her work on multimodal interaction, she also investigates turn-taking and vocal and visual feedback signals (backchannels such as ‘mmhm’, eye gaze and head nods) in unscripted conversation and task-oriented dialogue.

Dr. Mathis Jording

Department of Psychiatry and Psychotherapy, University Hospital Cologne

Institute of Neuroscience & Medicine (INM-3), Research Centre Jülich

Dr. Mathis Jording is a researcher at University Hospital Cologne and a visiting scientist at the Institute of Neuroscience & Medicine (INM-3) at the Research Centre Jülich. In his dissertation, he investigated which gaze dynamics enable, or lead, persons to ascribe intentions in gaze encounters. He then moved to the Research Centre Jülich to work on a project on the experience of the passage of time in Virtual Reality across different psychiatric pathologies, while maintaining his enthusiasm and fascination for the unfolding dynamics of nonverbal interactions. In 2024, he moved back to the University Hospital and now focuses on research in social cognition, gaze processing and visual attention. He is especially interested in understanding how persons gain information in gaze interactions and how the emergence of mutual understanding is deliberately facilitated.

Team members

Svea Bönsch

Svea Bönsch is a PhD student working on CUE.

Solveigh Janzen

Solveigh Janzen is a PhD student working on CUE.

Matteo Schmelzer

Matteo Schmelzer is a student assistant working on CUE.

Extended team members

Theodor Klinker

Malin Spaniol

Kai Vogeley