Project Details
In-time Virtual Reality Simulation Patient Models: Machine Learning and immersive-interactive Modeling of Virtual Patient Bodies
Applicant
Professor Dr. Andre Mastmeyer
Subject Area
Medical Informatics and Medical Bioinformatics
Term
since 2018
Project identifier
Deutsche Forschungsgemeinschaft (DFG) - Project number 401393200
Providing high-quality body models from medical imaging data by segmenting and annotating the relevant structures without significant time loss is highly relevant. Fast modeling matters both in everyday clinical-radiological routine and in currently published VR simulators for ad hoc training, planning and navigation of minimally invasive interventions. After the modeling steps, it is attractive to enable interactive, immersive quality assurance of the automatically generated model proposals directly in a VR simulator, which can then also be used for the actual training, planning and navigation. For this purpose, visuo-haptic VR methods shall be used to correct segmentation errors. This inspection phase serves as a plausibility check of the patient model and is indispensable, since the proposed fully automatic procedures often cannot deliver flawless results due to the variability of normal and pathological anatomies.
In the automatic modeling phase, current machine learning methods and atlas-based methods are to be compared with respect to their segmentation proposals and combined according to their strengths. A major goal of this project is the development of novel, fully automatic, group-oriented deep-learning and multi-atlas segmentation estimators for highly efficient multi-organ and simultaneous tumor segmentation in medical 3D-CT image data sets; the liver, spleen, pancreas and kidneys are considered as examples. In the deep-learning methods, discriminating features are learned and automatically grouped. In multi-atlas segmentation, shape prior knowledge from already segmented similar cases (atlases) is used for the automatic segmentation of patient image data. With regard to the clinical-diagnostic applicability of the possibly unsupervised individual learning methods, the quality and efficiency of the segmentation estimators and annotators are determined within the project by the interplay of semantic and numerical levels, i.e. by spatio-ontological and statistical-numeric characteristics that differentiate multi-organ and tumor regions from normal tissue.
The deep-learning and registration framework, developed and trained with quality assurance directly in the targeted Virtual Reality (VR) environment, provides new method building blocks for a clinically applicable system. A new standard is to be set in quality, robustness and registration-free efficiency through the combination of innovative methodologies (ontology and atlas foundations, deep learning, VR interaction) and with special consideration of pathologies, the targets of the interventions. The project is based at the Institute of Medical Informatics at the University of Lübeck and can rely on its hardware and methodological infrastructure as well as the expertise of colleagues and medical partners.
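To make the multi-atlas idea described above more concrete (propagating labels from already segmented cases into a new patient image, fusing them into a segmentation proposal, and checking its overlap quality), the following minimal Python sketch may help. It is an illustration under stated assumptions, not the project's actual pipeline: it relies on SimpleITK and NumPy, restricts registration to a single affine stage, fuses labels by simple majority voting, and uses a hypothetical label coding for liver, spleen, pancreas and kidneys.

# Minimal sketch of multi-atlas label propagation, majority-vote fusion and a
# Dice-based quality check. File names and the organ label coding are
# hypothetical placeholders, not taken from the project itself.
import numpy as np
import SimpleITK as sitk

ORGAN_LABELS = {1: "liver", 2: "spleen", 3: "pancreas", 4: "kidneys"}  # assumed coding


def register_atlas(atlas_img, patient_img):
    """Affine-register an atlas CT to the patient CT (deformable refinement omitted)."""
    fixed = sitk.Cast(patient_img, sitk.sitkFloat32)
    moving = sitk.Cast(atlas_img, sitk.sitkFloat32)
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0,
                                                 minStep=1e-4,
                                                 numberOfIterations=200)
    reg.SetInitialTransform(
        sitk.CenteredTransformInitializer(fixed, moving, sitk.AffineTransform(3)))
    reg.SetInterpolator(sitk.sitkLinear)
    return reg.Execute(fixed, moving)


def propagate_labels(atlas_labels, patient_img, transform):
    """Warp an atlas label map into patient space with nearest-neighbour interpolation."""
    return sitk.Resample(atlas_labels, patient_img, transform,
                         sitk.sitkNearestNeighbor, 0, sitk.sitkUInt8)


def majority_vote(warped_label_maps):
    """Fuse several warped label maps by per-voxel majority voting (background wins ties)."""
    stack = np.stack([sitk.GetArrayFromImage(lm) for lm in warped_label_maps])
    fused = np.zeros_like(stack[0])
    for label in ORGAN_LABELS:
        votes = (stack == label).sum(axis=0)
        fused[votes > stack.shape[0] // 2] = label
    out = sitk.GetImageFromArray(fused)
    out.CopyInformation(warped_label_maps[0])
    return out


def dice(seg_a, seg_b, label):
    """Dice overlap of one organ label between two segmentations."""
    a = sitk.GetArrayFromImage(seg_a) == label
    b = sitk.GetArrayFromImage(seg_b) == label
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0


# Example use (hypothetical file names):
# atlases = [(sitk.ReadImage(f"atlas{i}_ct.nii.gz"),
#             sitk.ReadImage(f"atlas{i}_labels.nii.gz")) for i in range(3)]
# patient = sitk.ReadImage("patient_ct.nii.gz")
# warped = [propagate_labels(lab, patient, register_atlas(img, patient))
#           for img, lab in atlases]
# proposal = majority_vote(warped)

In the project itself, deep-learning estimators and the visuo-haptic VR correction step would complement or replace such a simple fusion and overlap check; the sketch only shows where a segmentation proposal and its quality score come from in the multi-atlas setting.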
DFG Programme
Research Grants