Project Details
ComPLetely Unsupervised Multimodal Character identification On TV series and movies
Applicant
Professor Dr.-Ing. Rainer Stiefelhagen
Subject Area
Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Term
from 2016 to 2021
Project identifier
Deutsche Forschungsgemeinschaft (DFG) - Project number 316692988
Automatic character identification in multimedia videos is a broad and challenging problem. Person identification serves as a foundation and building block for many higher-level video analysis tasks, for example semantic indexing, search and retrieval, interaction analysis, and video summarization.

The goal of this project is to exploit textual, audio and video information to automatically identify characters in TV series and movies, without requiring any manual annotation for training character models. A fully automatic and unsupervised approach is especially appealing given the huge amount and rapid growth of available multimedia data. Text, audio and video provide complementary cues to a person's identity and thus allow a person to be identified more reliably than from any single modality alone.

To this end, we will address three main research questions: unsupervised clustering of speech turns (i.e. speaker diarization) and face tracks, in order to group similar tracks of the same person without prior labels or models; unsupervised identification by propagation of automatically generated weak labels from various sources of information (such as subtitles and speech transcripts); and multimodal fusion of acoustic, visual and textual cues at various levels of the identification pipeline.

While many generic approaches to unsupervised clustering exist, they are not adapted to heterogeneous audiovisual data (face tracks vs. speech turns) and do not perform as well on challenging TV series and movie content as they do on more controlled data. Our general approach is therefore to first over-cluster the data, making sure that clusters remain pure, before assigning names to these clusters in a second step (both steps are sketched below). On top of domain-specific improvements within each modality, we expect joint multimodal clustering to take advantage of all three modalities and improve clustering performance over each modality alone.

Unsupervised identification then aims at assigning character names to clusters in a completely automatic manner, i.e. using only information already present in the speech and video. In TV series and movies, character names are usually introduced and reiterated throughout the video. We will detect and exploit addresser-addressee relationships in both speech transcripts (using named entity detection techniques) and video (using mouth movements, viewing direction and focus of attention of faces). This allows us to assign names to some clusters, learn discriminative models from them, and then assign names to the remaining clusters.

For evaluation, we will extend and further annotate a corpus of four TV series (57 episodes) and one movie series (8 movies), a total of about 50 hours of video. This diverse material covers different filming styles, types of stories, and challenges present in both video and audio. We will evaluate the different steps of this project on this corpus and make our annotations publicly available to other researchers in the field.
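To make the over-clustering step concrete, here is a minimal sketch, assuming each face track or speech turn has already been embedded as a fixed-size vector; the cosine metric, average linkage and the conservative distance threshold are illustrative choices, not the project's actual components:

```python
# Hedged sketch: conservative agglomerative over-clustering of track
# embeddings. Merging stops early (low threshold), so we get more clusters
# than characters, but each cluster stays pure.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def over_cluster(embeddings: np.ndarray, distance_threshold: float = 0.5):
    """Cluster track embeddings, cutting the dendrogram at a low distance
    threshold so that only very similar tracks are merged."""
    condensed = pdist(embeddings, metric="cosine")  # pairwise cosine distances
    tree = linkage(condensed, method="average")     # average-linkage dendrogram
    # Better many pure clusters than one impure cluster mixing two characters.
    return fcluster(tree, t=distance_threshold, criterion="distance")

# Toy usage with 200 hypothetical 128-dimensional track embeddings.
rng = np.random.default_rng(0)
tracks = rng.normal(size=(200, 128))
labels = over_cluster(tracks)
print(f"{labels.max()} pure clusters for {len(tracks)} tracks")
```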
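One simple realization of multimodal fusion at the clustering level is to combine per-modality distance matrices before clustering. The sketch below assumes face tracks and speech turns have been aligned to a common set of person appearances; the fixed weights are hypothetical and would in practice be tuned or learned:

```python
# Hedged sketch: late fusion of face and voice distance matrices over the
# same set of person appearances (face tracks aligned with speech turns).
import numpy as np

def fuse_distances(face_dist: np.ndarray, voice_dist: np.ndarray,
                   w_face: float = 0.6, w_voice: float = 0.4) -> np.ndarray:
    """Weighted sum of two min-max normalized square distance matrices."""
    def norm(d: np.ndarray) -> np.ndarray:
        return (d - d.min()) / (d.max() - d.min() + 1e-9)
    return w_face * norm(face_dist) + w_voice * norm(voice_dist)

# The fused square matrix can be condensed with
# scipy.spatial.distance.squareform and fed to the same agglomerative
# over-clustering step sketched above.
```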
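The name-assignment step can be sketched in the same spirit: weak labels (character names detected in subtitles or transcripts near a speech turn, e.g. via addresser-addressee analysis) vote for the cluster containing that turn, and the named clusters then provide training data for a discriminative model that labels the remaining clusters. The data structures, the confidence cutoff and the logistic-regression classifier below are illustrative placeholders:

```python
# Hedged sketch: propagate weak name labels to clusters, then train a
# discriminative model on the named clusters to label the rest.
from collections import Counter, defaultdict
import numpy as np
from sklearn.linear_model import LogisticRegression

def propagate_names(cluster_ids, weak_labels, min_purity=0.6):
    """cluster_ids[i]: cluster of track i; weak_labels[i]: detected name or None.
    A cluster receives a name only if the weak evidence is unambiguous enough."""
    votes = defaultdict(Counter)
    for cid, name in zip(cluster_ids, weak_labels):
        if name is not None:
            votes[cid][name] += 1
    names = {}
    for cid, counter in votes.items():
        (best, count), = counter.most_common(1)
        if count / sum(counter.values()) >= min_purity:  # illustrative cutoff
            names[cid] = best
    return names

def label_remaining(embeddings: np.ndarray, cluster_ids, names):
    """Train on tracks from named clusters, predict names for all tracks."""
    labeled = [i for i, cid in enumerate(cluster_ids) if cid in names]
    clf = LogisticRegression(max_iter=1000)
    clf.fit(embeddings[labeled], [names[cluster_ids[i]] for i in labeled])
    return clf.predict(embeddings)
```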
DFG Programme
Research Grants
International Connection
France
Partner Organisation
Agence Nationale de la Recherche / The French National Research Agency
Cooperation Partners
Professor Dr.-Ing. Claude Barras; Hervé Bredin, Ph.D.; Professor Camille Guinaudeau, Ph.D.