Project Details
PIPE: Probabilistic Models of Instructions, Perception and Experience - Representation, Learning and Reasoning
Applicant
Professor Michael Beetz, Ph.D.
Subject Area
Automation, Mechatronics, Control Systems, Intelligent Technical Systems, Robotics
Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Term
from 2017 to 2023
Project identifier
Deutsche Forschungsgemeinschaft (DFG) - Project number 322037152
We propose the development of joint Bayesian models for representation, learning and reasoning in sophisticated tasks of autonomous intelligent robotics. These tasks include: (1) the interpretation and completion of vaguely formulated high-level task instructions given in natural language, (2) the detection, localization, categorization and reconstruction of perceived objects, and (3) task execution with action-specific executive reasoning, at runtime, about task parameterization and about possible failures or undesired effects, so that the robot improves its own behavior with every new situation.

Probabilistic models are currently used with promising results in all three of these subproblems. However, they are usually tailored so strongly to the respective subproblem that the overarching use of knowledge becomes difficult, if not impossible.

The aim of this project is to equip autonomous robotic systems with the ability to build probabilistic knowledge bases from experiences of object perception, natural language interpretation and the physical execution of tasks, and to use the gained knowledge across domains. To this end, we will develop unifying data structures and algorithms that support representation, acquisition and reasoning in a comprehensive framework. In this way, the system gains knowledge about the relations between actions, their effects, the objects involved and their perceptual characteristics, as well as about how to perform a task in different contexts. Our preliminary work in the respective subareas demonstrates that the synergistic interaction of previously independent components in robotic control routines substantially boosts performance with respect to the autonomy, generality and versatility of robots.
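To make the idea of using one probabilistic model across domains more concrete, the following minimal Python sketch is purely illustrative: the variables (instruction verb, perceived object category, tool choice) and all probability values are invented assumptions for this example and do not describe the project's actual models. It only shows how a single joint distribution can be queried in either direction, from language and perception towards action parameterization and back.

# Illustrative only: a tiny joint distribution over an instruction verb,
# a perceived object category and an action parameter (the tool to use).
# All variable names and probabilities are invented for this sketch.

# P(instruction, object, tool) as a normalized table (hypothetical numbers).
joint = {
    ("cut",  "bread",    "bread_knife"):   0.30,
    ("cut",  "bread",    "kitchen_knife"): 0.05,
    ("cut",  "cucumber", "bread_knife"):   0.05,
    ("cut",  "cucumber", "kitchen_knife"): 0.30,
    ("peel", "cucumber", "peeler"):        0.20,
    ("peel", "cucumber", "kitchen_knife"): 0.10,
}

def query(target, **evidence):
    """Return P(target | evidence) by marginalizing the joint table."""
    names = ("instruction", "object", "tool")
    # Keep only entries consistent with the observed evidence.
    consistent = {
        assignment: p for assignment, p in joint.items()
        if all(assignment[names.index(var)] == val for var, val in evidence.items())
    }
    z = sum(consistent.values())
    # Sum out all remaining variables except the query target.
    result = {}
    for assignment, p in consistent.items():
        value = assignment[names.index(target)]
        result[value] = result.get(value, 0.0) + p / z
    return result

# The same model answers queries from different directions:
# which tool fits a vague "cut" instruction, given the perceived object ...
print(query("tool", instruction="cut", object="bread"))
# ... and which object the robot most likely acted on, given the tool it used.
print(query("object", tool="kitchen_knife"))

In this toy setting the same table serves instruction completion, perceptual disambiguation and action parameterization; the project's knowledge bases are intended to play this unifying role at scale, learned from the robot's own experience.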
DFG Programme
Research Grants