Project Details

Uncertainty Assessment and Contrastive Explanations for Instance Segmentation

Subject Area: Medical Informatics and Medical Bioinformatics; Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Term: since 2022
Project identifier: Deutsche Forschungsgemeinschaft (DFG) - Project number 459422098
 
This project aims to establish concepts and methods for uncertainty modeling and explainability in the context of structured predictions on images. Compared to, e.g., image-level predictions, the predictions we study are of complex structure in that they involve many interacting output variables, usually one per pixel. Our use case is image segmentation, with a particular focus on instance segmentation in microscopy images from the biomedical domain. Instance segmentation is the task of separately delineating each instance of an object class of interest, e.g., each cell nucleus in a microscopy image. Instance segmentation outputs are commonly obtained from dense, pixel-wise predictions via inference schemes such as maximum a posteriori inference in a probabilistic graphical model on a pixel grid.

Despite advances in deep learning for automated instance segmentation, manual proofreading of segmentation results remains a tedious reality in many biomedical applications, e.g., for the abundant task of cell nuclei segmentation as well as for current large-scale efforts in segmenting neuronal structures. Existing instance segmentation methods commonly yield a single output in a deterministic fashion, which neither allows a proofreader to be guided towards potential errors nor offers the capability of suggesting alternatives once a proofreader identifies an error. To this end, this project aims at an instance segmentation method that produces samples from a distribution of results. This will allow for both (1) guidance towards areas of high uncertainty in the output distribution and (2) suggestions of alternative results in case of observed errors. To further facilitate proofreading, we aim to provide explanations that highlight which features in an input image are specifically decisive for one type of result versus another, i.e., contrastive explanations.

Methods for capturing uncertainty in structured predictions on images are in their infancy, as is explainability for pixel-wise predictors. We aim to establish Bayesian neural networks for instance segmentation as well as a framework for explainable instance segmentation, and will combine both to yield contrastive explanations for the modes of predictive instance segmentation distributions. Instance segmentation is our use case, yet our methods will apply to other structured prediction problems on images, such as semantic segmentation and object tracking.
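The abstract does not commit to a specific mechanism for drawing samples from a predictive segmentation distribution. As one common illustration of the Bayesian-neural-network idea, the following sketch uses Monte Carlo dropout over a toy pixel-wise classifier; the PyTorch framework, the TinySegNet architecture, and all hyperparameters are assumptions for demonstration only, not the project's actual method.

```python
# Illustrative sketch: drawing samples from an approximate predictive
# distribution of a pixel-wise segmentation network via Monte Carlo dropout.
# The architecture and class count are placeholders, not the project's model.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Toy fully convolutional network with dropout kept active at test time."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(p=0.5),                      # source of stochasticity
            nn.Conv2d(16, num_classes, 1),
        )

    def forward(self, x):
        return self.body(x)

def mc_dropout_samples(model, image, n_samples=20):
    """Per-pixel class probabilities from n_samples stochastic forward passes."""
    model.train()  # keep dropout active; in practice only dropout layers need train mode
    with torch.no_grad():
        probs = torch.stack([model(image).softmax(dim=1) for _ in range(n_samples)])
    return probs  # shape: (n_samples, batch, classes, H, W)

image = torch.randn(1, 1, 64, 64)          # placeholder grayscale microscopy patch
samples = mc_dropout_samples(TinySegNet(), image)
mean_probs = samples.mean(dim=0)
# Per-pixel predictive entropy as a simple uncertainty map to guide proofreading.
entropy = -(mean_probs * mean_probs.clamp_min(1e-8).log()).sum(dim=1)
print(entropy.shape)  # torch.Size([1, 64, 64])
```

In a proofreading workflow along the lines described above, such a per-pixel uncertainty map could be used to rank image regions for review, and the individual samples could serve as candidate alternative results.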
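Likewise, the abstract leaves the concrete form of contrastive explanations open. A minimal, hypothetical illustration of the idea is a gradient-based attribution of the score difference between two alternative outcomes within a region of interest; here the two outcomes are simply two class labels, standing in for alternative instance segmentations, and the toy model and region_mask are placeholders.

```python
# Illustrative sketch of a contrastive, gradient-based input attribution:
# which input pixels push the network towards outcome A rather than outcome B?
import torch
import torch.nn as nn

def contrastive_saliency(model, image, region_mask, class_a, class_b):
    """Gradient of (score for class_a minus score for class_b) over a region."""
    image = image.clone().requires_grad_(True)
    logits = model(image)                              # (batch, classes, H, W)
    score_a = (logits[:, class_a] * region_mask).sum()
    score_b = (logits[:, class_b] * region_mask).sum()
    (score_a - score_b).backward()
    # Positive values: input evidence for outcome A over B; negative: the opposite.
    return image.grad.detach()

# Toy two-class pixel-wise predictor and a hypothetical ambiguous region.
model = nn.Conv2d(1, 2, kernel_size=3, padding=1)
image = torch.randn(1, 1, 64, 64)
region_mask = torch.zeros(1, 64, 64)
region_mask[:, 20:40, 20:40] = 1.0
saliency = contrastive_saliency(model, image, region_mask, class_a=1, class_b=0)
print(saliency.shape)  # torch.Size([1, 1, 64, 64])
```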
DFG Programme: Research Units
 
 
