The research investigates a probabilistic, multiple-resolution approach to vision-based aerial search in dense urban environments. Several new ideas are explored: a coverage-planning algorithm that operates under energy or time constraints, an iterative search process conducted at different altitudes, and the use of semantic segmentation as a detection “sensor” for the search problem.
To validate the overall approach and test the new algorithms, a new, comprehensive dataset was built and made available to the research community. Please follow this link to the dataset download page.
Terrain-aided navigation (TAN) was developed before the GPS era to bound the error growth of inertial navigation. TAN algorithms originally exploited ground-clearance measurements from a radar altimeter in combination with a Digital Terrain Map (DTM). After almost two decades of dormancy, the availability of inexpensive cameras and computational power, together with the need for efficient GPS-denied positioning solutions, has prompted renewed interest in this approach. Vision-based TAN, however, is in many respects more challenging than its classical counterpart: visual observables provide range only up to an unknown scale, which prevents a straightforward extension of classical TAN techniques.
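To make the classical (radar-altimeter) TAN idea concrete, the sketch below runs a minimal particle filter on a synthetic 1-D terrain profile: inertial navigation propagates position hypotheses, and each clearance measurement (absolute altitude minus terrain height from the DTM) reweights them. The terrain function, noise levels, and all parameters are made up for illustration; this is not the vision-based algorithm introduced in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

def terrain(x):
    """Hypothetical 1-D terrain profile standing in for a DTM lookup."""
    return 50.0 * np.sin(0.01 * x) + 20.0 * np.sin(0.037 * x)

def tan_particle_filter(x0_true, v_true, n_steps=60, dt=1.0,
                        n_particles=2000, meas_sigma=2.0):
    """Estimate along-track position from clearance measurements + DTM."""
    baro_alt = 500.0                                   # known absolute altitude
    particles = rng.uniform(0.0, 2000.0, n_particles)  # position hypotheses
    weights = np.full(n_particles, 1.0 / n_particles)
    x_true = x0_true
    for _ in range(n_steps):
        # Propagate: velocity from inertial nav, plus process noise.
        x_true += v_true * dt
        particles += v_true * dt + rng.normal(0.0, 1.0, n_particles)
        # Radar altimeter measures clearance = altitude - terrain height.
        z = baro_alt - terrain(x_true) + rng.normal(0.0, meas_sigma)
        pred = baro_alt - terrain(particles)
        weights *= np.exp(-0.5 * ((z - pred) / meas_sigma) ** 2)
        weights += 1e-300                              # avoid total underflow
        weights /= weights.sum()
        # Resample when the effective sample size collapses.
        if 1.0 / np.sum(weights ** 2) < n_particles / 2:
            idx = rng.choice(n_particles, n_particles, p=weights)
            particles = particles[idx]
            weights = np.full(n_particles, 1.0 / n_particles)
    return float(np.sum(weights * particles)), float(x_true)

est, true_x = tan_particle_filter(x0_true=300.0, v_true=10.0)
```

Because the terrain here mixes two incommensurate sinusoids, a sequence of clearance measurements along the track is usually enough to resolve the initial position ambiguity. In vision-based TAN the measurement model is harder, since image features constrain range only up to scale.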
The main contribution of this work is a new, more flexible, and efficient algorithm for vision-aided TAN. The algorithm solves the problem in two fast stages, with a new outlier-rejection step between them that makes it robust and suitable for real-world data.
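The text does not detail the outlier-rejection step itself. As a generic illustration of the kind of residual gating commonly used between estimation stages, here is a median-absolute-deviation (MAD) gate; the function name and threshold are our own, not the paper's.

```python
import numpy as np

def mad_gate(residuals, k=3.0):
    """Flag inliers whose residual lies within k robust standard deviations.

    Uses the median absolute deviation (MAD), scaled by 1.4826 so it is
    consistent with the standard deviation under Gaussian noise. Assumes
    the residuals are not all identical (otherwise the MAD is zero).
    """
    r = np.asarray(residuals, dtype=float)
    med = np.median(r)
    sigma = 1.4826 * np.median(np.abs(r - med))
    return np.abs(r - med) <= k * sigma

# A single gross outlier among small residuals is rejected.
mask = mad_gate([0.1, -0.2, 0.0, 0.15, 50.0])
```

Unlike a gate based on the mean and standard deviation, the MAD is barely affected by the outliers it is meant to reject, which is why robust statistics of this kind are a natural fit between two estimation stages.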