Exploration for Micro Aerial Vehicles
Final Report Abstract
The goal of the project is the development of an autonomous exploration algorithm for flying vehicles. A central focus is the exploitation of known background information and previously acquired data to improve the exploration strategy. We developed an information-driven autonomous exploration algorithm for micro aerial vehicles (MAVs) that iteratively computes the next best viewpoint the robot should visit to maximize the information gain of its sensor. Our approach also takes into account the cost of reaching a new viewpoint in terms of distance and the predictability of the flight path for a human observer. Furthermore, our method selects a path that reduces the risk of crashes when the expected battery life comes to an end, while still maximizing the information gain in the process. Finally, the algorithm is fast enough to be executed online on an exploring MAV.

If a previously acquired model is available, it can be updated without performing the entire exploration process again. We developed an approach for quickly finding structural changes between the current state of the world and a given 3D model using a small number of images. Our approach finds inconsistencies between pairs of images by re-projecting one image onto another through the given 3D model. This process leads to ambiguities, which we resolve by combining multiple images so that the 3D location of the change can be estimated. A key property of our approach is that it runs fast enough to operate on an exploring MAV, which can thus detect online which areas have changed and plan an exploration strategy to update the model accordingly.

Finally, we explored the use of consumer RGB-D sensors for the 3D reconstruction of environments. We developed an approach that is able to consistently map scenes containing multiple dynamic elements. For localization and mapping, we employ efficient direct tracking on the truncated signed distance function (TSDF) and leverage the color information encoded in the TSDF to estimate the pose of the sensor. The TSDF is represented efficiently using voxel hashing. For detecting dynamics, we exploit the residuals obtained after an initial registration.
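A central step of the exploration strategy described above is scoring candidate viewpoints by the expected information gain of the sensor against the cost of reaching them. The following minimal sketch illustrates one common way to compute such a score on a probabilistic voxel map; the function names, the binary-entropy formulation, and the linear distance penalty are assumptions made for illustration, not the exact formulation used in the project.

```python
# Illustrative next-best-view scoring on a probabilistic voxel grid.
# All names and parameters are assumptions for this sketch.
import numpy as np

def voxel_entropy(p):
    """Shannon entropy of an occupancy probability (0 < p < 1)."""
    p = np.clip(p, 1e-6, 1.0 - 1e-6)
    return -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))

def expected_information_gain(occupancy, visible_idx):
    """Sum of entropies of the voxels expected to be observed
    from a candidate viewpoint (higher = more left to learn)."""
    return voxel_entropy(occupancy[visible_idx]).sum()

def score_viewpoint(occupancy, visible_idx, path_length, lam=0.1):
    """Utility = expected information gain minus a travel-cost penalty."""
    return expected_information_gain(occupancy, visible_idx) - lam * path_length

# Toy usage: pick the better of two candidate viewpoints.
occ = np.full(1000, 0.5)               # fully unknown map
occ[:200] = 0.05                       # some voxels already observed as free
cand_a = (np.arange(100, 300), 4.0)    # (visible voxel indices, path length)
cand_b = (np.arange(600, 900), 9.0)
best = max([cand_a, cand_b], key=lambda c: score_viewpoint(occ, c[0], c[1]))
```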
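The change-detection idea above, re-projecting one image onto another through the given 3D model and flagging inconsistent pixels, can be sketched as follows. The pinhole back-projection, the nearest-neighbour resampling, and the simple intensity threshold are assumptions made for this illustration; the actual method combines multiple image pairs to resolve ambiguities and to estimate the 3D location of each change.

```python
# Illustrative sketch of finding inconsistencies by re-projecting image A
# onto image B through a given 3D model. Assumes grayscale 8-bit images
# and valid positive depth rendered from the model for every pixel of B.
import numpy as np

def backproject(depth_b, K):
    """Lift every pixel of view B to a 3D point using depth rendered
    from the 3D model (pinhole camera model)."""
    h, w = depth_b.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_b
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=-1)            # (h, w, 3)

def reproject_a_into_b(img_a, depth_b, K, T_b_to_a):
    """Warp image A into view B by passing through the model geometry."""
    pts_b = backproject(depth_b, K)
    pts_h = np.concatenate([pts_b, np.ones_like(pts_b[..., :1])], axis=-1)
    pts_a = pts_h @ T_b_to_a.T                     # points in camera A frame
    u = K[0, 0] * pts_a[..., 0] / pts_a[..., 2] + K[0, 2]
    v = K[1, 1] * pts_a[..., 1] / pts_a[..., 2] + K[1, 2]
    h, w = depth_b.shape
    ui = np.clip(u.round().astype(int), 0, w - 1)
    vi = np.clip(v.round().astype(int), 0, h - 1)
    return img_a[vi, ui]                           # A resampled on B's grid

def inconsistency_mask(img_a, img_b, depth_b, K, T_b_to_a, thresh=30.0):
    """Pixels where the warped image disagrees with the real one are
    candidate changes; combining several pairs disambiguates them."""
    warped = reproject_a_into_b(img_a, depth_b, K, T_b_to_a)
    return np.abs(warped.astype(float) - img_b.astype(float)) > thresh
```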
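Dynamics detection from registration residuals, as used in the RGB-D mapping approach above, can be illustrated with a short sketch: after aligning a frame to the TSDF, points that still exhibit a large signed-distance value are likely to lie on moving objects and are excluded from integration. The interpolation callback and the fixed threshold here are assumptions of the sketch, not the exact pipeline of ReFusion.

```python
# Illustrative masking of dynamic points from TSDF registration residuals.
import numpy as np

def tsdf_residuals(points_world, tsdf_lookup):
    """Residual of each 3D point = |TSDF value| at that location.
    `tsdf_lookup` is any callable interpolating the (voxel-hashed) TSDF."""
    return np.abs(np.array([tsdf_lookup(p) for p in points_world]))

def dynamic_mask(points_world, tsdf_lookup, thresh=0.05):
    """Points whose residual exceeds the threshold are treated as dynamic
    and excluded from TSDF integration for this frame."""
    return tsdf_residuals(points_world, tsdf_lookup) > thresh

# Toy usage with a spherical "map": TSDF = distance to a sphere of radius 1.
pts = np.array([[1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
lookup = lambda p: np.linalg.norm(p) - 1.0
mask = dynamic_mask(pts, lookup)   # -> [False, True]
```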
Publications
- Palazzolo, E. and Stachniss, C. (2017). Change Detection in 3D Models Based on Camera Images. In 9th Workshop on Planning, Perception and Navigation for Intelligent Vehicles at the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS).
- Palazzolo, E. and Stachniss, C. (2017). Information-Driven Autonomous Exploration for a Vision-Based MAV. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, IV-2/W3:59–66. (See online at https://doi.org/10.5194/isprs-annals-IV-2-W3-59-2017)
- Palazzolo, E. and Stachniss, C. (2018). Effective Exploration for MAVs Based on the Expected Information Gain. Drones, 2(1). (See online at https://doi.org/10.3390/drones2010009)
- Palazzolo, E. and Stachniss, C. (2018). Fast Image-Based Geometric Change Detection Given a 3D Model. In Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA). (See online at https://doi.org/10.1109/ICRA.2018.8461019)
- Palazzolo, E., Behley, J., Lottes, P., Giguère, P., and Stachniss, C. (2019). ReFusion: 3D Reconstruction in Dynamic Environments for RGB-D Cameras Exploiting Residuals. In Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS). (See online at https://doi.org/10.1109/IROS40897.2019.8967590)