Project Details
The role of multimodal turn-completion cues in facilitating precision-timed turn transition
Applicants
Professor Dr. Peter Auer; Professor Stefan Th. Gries, Ph.D.; Privatdozent Dr. Christoph Rühlemann; Professor Dr. Marion Schulte
Subject Area
Individual Linguistics, Historical Linguistics
Term
Since 2022
Project identifier
Deutsche Forschungsgemeinschaft (DFG) - Project number 497779797
Turn-taking in human face-to-face interaction is remarkably precision-timed: turn transitions happen within a gap of just 200 ms across languages and cultures. This precision carries semiotic significance: longer gaps are taken as harbingers of dispreferred responses. How is this precision-timing brought about? Psycholinguists agree that it is facilitated by a synergy of early predictive comprehension by the recipient and the use of turn-completion cues by the speaker, including, for example, address forms, turn-final drawling, or the return of the speaker’s gaze to the recipient, which gives the recipient the ultimate ‘go-signal’ to launch the response. A large number of verbal, gestural, and prosodic turn-completion cues have already been identified, albeit based on very small and specialized data sets and/or in experimental settings, and largely in isolation from one another. We therefore do not know how these cues interact, which of them have an actual effect on the precision-timing of turn transitions, how large that effect is, and under what conditions it holds. This project addresses this gap. It comprehensively covers not just isolated cues but the whole repertoire of turn-completion cues used in face-to-face interaction. Its objectives are to establish which turn-completion cues are used and under which conditions, to detect their potential interactions in context, and to determine which turn-completion cues have an effect on the precision-timing of turn transition observed in prior research. The analyses will be based on the Freiburg Multimodal Interaction Corpus (FreMIC), currently under construction with funding from the DFG (project number 414153266; cf. 
https://gepris.dfg.de/gepris/projekt/414153266) and decidedly multimodal: once completed in fall 2022, FreMIC will comprise more than 70 hours of video recordings supplemented by eye-tracking videos for all participants and complete conversation-analytic transcriptions; this rich multimodal data will be analyzed with predictive modeling techniques. This comprehensive approach also calls for a collaboration of researchers with different specializations, each contributing their specific expertise, namely Peter Auer (Interactional Linguistics, with a focus on gaze research), Christoph Rühlemann (Interactional Corpus Linguistics, with a focus on storytelling research), Marion Schulte (Phonetics, with a focus on Sociophonetics), Stefan Th. Gries (Corpus Linguistics, with a focus on statistical modeling), and Judith Holler as a closely integrated Mercator Fellow (Interactional Linguistics and Psycholinguistics, with a focus on gesture research). The comprehensive approach, the innovative database, and the close collaboration of experts in the field promise to substantially deepen our understanding of the precision timing of human multimodal communication.
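The kind of predictive modeling envisaged here can be illustrated with a minimal sketch: regressing transition-gap duration on the presence of a single binary turn-completion cue (say, the return of the speaker's gaze). All numbers below are invented for illustration only; the project's real analyses would use FreMIC annotations and far richer multi-cue models.

```python
# Minimal, purely hypothetical sketch: does a binary turn-completion cue
# predict shorter transition gaps? The data are invented for illustration.

def ols_fit(x, y):
    """Ordinary least squares fit of y = intercept + slope * x
    for a single predictor, computed from first principles."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept, slope

# Invented gap durations (ms), cue present (1) vs. absent (0)
cue = [1, 1, 1, 1, 0, 0, 0, 0]
gap = [150, 180, 120, 200, 300, 420, 350, 280]

intercept, slope = ols_fit(cue, gap)
print(f"mean gap without cue: {intercept:.1f} ms")   # 337.5 ms
print(f"estimated cue effect: {slope:+.1f} ms")      # -175.0 ms
```

With one binary predictor, the intercept is simply the mean gap in the cue-absent turns and the slope is the difference between the two group means; a negative slope would indicate that the cue co-occurs with faster transitions. The actual project would of course need models that handle many interacting cues and speaker-level variation, not this two-group comparison.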
DFG Programme
Research Grants