
DINCO - Detektion von Interaktions-Kompetenzen und -Hindernissen (Detection of Interaction Competencies and Obstacles)

Applicant Dr. Felix Putze
Subject Areas Human Factors, Ergonomics, Human-Machine Systems
Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Funding Funded from 2016 to 2021
Project Identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 316930318

Year of Creation 2020

Summary of Project Results

In summary, the DINCO project has reached all of its envisioned goals and provides the components for adaptive end-to-end systems that use multimodal data streams to detect multiple interaction obstacles, together with learning algorithms to adapt to them. This marks an important milestone in the development of systems that take individual user needs into account. Many of the developed components and findings will shape future research in this direction: an end-to-end system comprising application, obstacle detection, and automatic interface adaptation; multiple annotated multimodal data sets for further analysis and for bootstrapping the training of future models; an extensible cognitive user simulation; and a set of evaluated adaptation mechanisms for several interaction obstacles.

Some limitations of the investigation remain after the conclusion of the DINCO project. First, while we purposefully avoided unnatural restrictions on the experiment participants and chose a motivating task, the experiments were still performed under controlled conditions, i.e. with a substantial level of control and supervision. This may prevent fully natural user behavior. Follow-up research must investigate experiments that take the application outside the lab and use applications that provide more intrinsic motivation for the users. Such in-the-wild evaluation would also involve a more long-term perspective; while DINCO already explored adaptation across multiple interaction sessions, it does not take familiarization and co-adaptation effects into account. Second, the way interaction obstacles are modeled in the system does not capture the full complexity of the real world, where obstacles can appear or disappear dynamically, where multiple interaction obstacles can occur at the same time, and where applications are used not only within multiple trials during one experiment session but over extended periods of time. Future models should be expanded along this axis to cover such situations. Third, the possible interaction obstacles are hardwired into the architecture, and adding a new obstacle requires full retraining of the model, including a corresponding data collection. This limits the generalization capabilities of the system, as substantial effort has to be invested for each new domain. Future research should investigate how existing large data corpora or cognitive models can be exploited to extract knowledge on how interaction obstacles influence specific applications, how these obstacles can be countered by targeted adaptation mechanisms derived from a formal task description, and how user behavior in such tasks can be simulated.
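The closed loop described above, mapping multimodal observations to a detected obstacle and then to an interface adaptation, can be illustrated with a minimal sketch. All names, features, and the stand-in detection rule below are hypothetical illustrations, not the DINCO implementation; a real system would replace the placeholder rule with a classifier trained on annotated multimodal recordings.

```python
# Minimal sketch of a detection -> adaptation loop over multimodal
# features. Hypothetical names throughout; not the DINCO code base.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Observation:
    eeg: List[float]      # e.g. band-power features
    gaze: List[float]     # e.g. fixation statistics
    context: List[float]  # e.g. application/task state

# Hypothetical mapping from a detected obstacle label to an
# interface adaptation mechanism.
ADAPTATIONS: Dict[str, Callable[[], str]] = {
    "none": lambda: "keep current interface",
    "input_error": lambda: "offer undo of the last auto-correction",
}

def detect_obstacle(obs: Observation) -> str:
    """Placeholder detector: fuses the multimodal features and maps
    them to an obstacle label via a stand-in threshold rule."""
    features = obs.eeg + obs.gaze + obs.context
    return "input_error" if sum(features) > 1.0 else "none"

def adaptation_step(obs: Observation) -> str:
    """One cycle of the loop: detect, then apply the adaptation."""
    return ADAPTATIONS[detect_obstacle(obs)]()

print(adaptation_step(Observation(eeg=[0.9], gaze=[0.4], context=[0.2])))
# prints: offer undo of the last auto-correction
```

Note that this sketch keeps the obstacle labels hardwired in a lookup table, which mirrors the third limitation discussed above: extending the set of obstacles requires changing the detector and the adaptation mapping together.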

Project-Related Publications (Selection)

  • Putze, F., Schünemann, M., Schultz, T., & Stuerzlinger, W. (2017). Automatic classification of auto-correction errors in predictive text entry based on EEG and context information. In Proceedings of the 19th ACM International Conference on Multimodal Interaction
    (Available online at https://doi.org/10.1145/3136755.3136784)
  • Alharbi, O., Putze, F., & Stuerzlinger, W. (2020). Observing the Effects of Predictive Features on Mobile Keyboards. Proceedings of the ACM on Human-Computer Interaction ISS, 1(1)
    (Available online at https://doi.org/10.1145/3427311)
  • Putze, F., Ihrig, T., Schultz, T., & Stuerzlinger, W. (2020). Platform for Studying Self-Repairing Auto-Corrections in Mobile Text Entry based on Brain Activity, Gaze, and Context. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems
    (Available online at https://doi.org/10.1145/3313831.3376815)