Project Details
Surveillance Videos Meet Wearable Cameras on the Cloud
Applicant
Professor Dr.-Ing. Gerhard Rigoll
Subject Area
Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Human Factors, Ergonomics, Human-Machine Systems
Term
from 2015 to 2023
Project identifier
Deutsche Forschungsgemeinschaft (DFG) - Project number 243643046
This is an application for an extension of the research project Augmented Synopsis of Surveillance Videos in Adaptive Camera Networks (AZ: RI 658/25-1) under the trilateral research projects, which will end in September 2017 (after 2 years of funding). The extension improves the existing system and adds new capabilities: (1) integration of videos recorded by mixed camera types (static and moving), such as handheld mobile phones and wearable cameras, including those worn by police officers; (2) re-identification and segmentation of the same object across non-overlapping camera views; and (3) a new cloud computing platform with access and version control.

Static video cameras are being installed everywhere as part of comprehensive security systems, in the hope of crime prevention, traffic control, and malfunction detection. Mobile cameras are in everyone's mobile phones, and wearable cameras are becoming more common among police officers, firefighters, soldiers, and people wearing GoPros. In this proposal we expand the capabilities of stationary surveillance cameras and incorporate videos from moving cameras to gain a more holistic understanding of the scene. In addition, the entire system is geared to work in the cloud.

In the new phase of the project, video streams from the mixed cameras (fixed and moving) will be stored on a shared cloud server. Access to the server will be shared by all partners: they can query for raw videos and video-related metadata (XML files), run their video processing algorithms locally, and upload the results back to the cloud. The cloud server will provide enhanced services to support the dynamics of this integration, such as version and access control of the video metadata.

Since the field of view of each individual camera is limited, it is essential to integrate information from multiple cameras in order to cover a large target area and to understand a more complete scenario holistically. This will include tracking objects across cameras, creating a global view from multiple cameras, and performing joint analysis of actions over multiple cameras. A major innovation in this project will be the integration of fixed and moving cameras and the re-identification of tracked objects based on a cloud infrastructure.

The three partners complement each other in providing the expertise necessary to carry out the proposed research. TUM has expertise in object detection, segmentation, and tracking of people. HUJI has image processing and computer vision expertise, in particular calibration, synopsis, and the analysis of wearable camera streams. AQU has expertise in distributed application development and cloud computing.
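To make the cloud workflow described above concrete, the following is a minimal sketch of how a partner site might query the shared server for a raw video and its XML metadata, process it locally, and upload the results under version control. The server URL, endpoint paths, the ETag/If-Match versioning scheme, and the credentials are illustrative assumptions; the proposal itself only specifies that partners fetch raw videos and XML metadata, run their algorithms locally, and upload results to a server that enforces access and version control.

```python
import requests

BASE = "https://cloud.example.org/api"  # hypothetical server URL (assumption)
AUTH = ("partner_tum", "secret")        # per-partner credentials (access control)

def fetch_video_and_metadata(video_id):
    """Download a raw video and its XML metadata from the shared cloud server."""
    video = requests.get(f"{BASE}/videos/{video_id}", auth=AUTH)
    meta = requests.get(f"{BASE}/videos/{video_id}/metadata.xml", auth=AUTH)
    video.raise_for_status()
    meta.raise_for_status()
    # The ETag header stands in for the metadata version here (assumption).
    return video.content, meta.text, meta.headers.get("ETag")

def upload_results(video_id, result_xml, version):
    """Upload locally computed metadata. If-Match implements optimistic
    version control, so concurrent partner updates cannot silently
    overwrite each other."""
    resp = requests.put(
        f"{BASE}/videos/{video_id}/metadata.xml",
        data=result_xml.encode("utf-8"),
        headers={"Content-Type": "application/xml", "If-Match": version},
        auth=AUTH,
    )
    if resp.status_code == 412:  # precondition failed: metadata changed upstream
        raise RuntimeError("Metadata was updated on the server; re-fetch and merge.")
    resp.raise_for_status()
```

In this sketch the version-control service the abstract mentions is modeled with standard HTTP conditional requests (ETag/If-Match); the actual project platform could equally use explicit revision numbers or a repository-style interface.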
DFG Programme
Research Grants
International Connection
Israel, Palestine
Cooperation Partner
Professor Dr. Michael Werman
International Co-Applicants
Professor Radwan Kasrawi, since 9/2020; Professor Dr.-Ing. Shmuel Peleg; Professor Dr. Raid Zaghal, from 5/2018 until 8/2020