EDITORIAL article
Front. Robot. AI
Sec. Field Robotics
Volume 11 - 2024
doi: 10.3389/frobt.2024.1509637
This article is part of the Research Topic Localization and Scene Understanding in Urban Environments
Localization and Scene Understanding in Urban Environments
- 1 Polytechnic School, University of Alcalá, Alcalá de Henares, Madrid, Spain
- 2 Albert-Ludwigs-Universität Freiburg, Freiburg, Germany
- 3 Department of Informatics, Systems and Communication (DISCO), Università degli Studi di Milano - Bicocca, Milan, Italy
Regarding the reference map, different proposals from mapping companies such as TomTom [1] and HERE [2], as well as self-driving companies like Waymo [3], are nowadays paving the way toward reliable sources of information. At the same time, open-source, community-created geospatial projects such as OpenStreetMap are a valuable source of information that has been used extensively for research purposes over the last decade. Localization algorithms are responsible for accurately localizing the vehicle within the map. This is achieved using different on-board sensors, starting with the most obvious: GNSS. Although these systems can provide very good localization accuracy, down to the order of 0.01 m, their reliability issues make them risky to depend on, especially in urban areas. They require a clear line of sight to a significant part of the sky, which is often obstructed by trees, buildings, urban canyons, or even thick clouds.

Matteo Frosi et al. investigated this aspect in their first contribution, focusing on evaluating the positioning accuracy and precision of localization systems that are not based on GNSS sensors. Besides presenting a novel Simultaneous Localization and Mapping (SLAM) system that achieves real-time performance, the authors benchmarked a set of algorithms on manually collected datasets, demonstrating the active interest of the scientific community in the topic. They compared algorithms that couple LiDAR data with inertial measurement units, a technique that has been extensively investigated over the last decade. Results from both outdoor and indoor scenarios are very promising for real-world applications such as autonomous driving.

In a second contribution, a different group, also led by Matteo Frosi, proposed an algorithm that connects directly with the HD-mapping concept described above. Among all the features that HD maps integrate, they interestingly exploited one of the major sources of localization problems: the buildings themselves. They extended a SLAM system to match single LiDAR scans against buildings mapped in OpenStreetMap. Their contribution also demonstrated the re-localization capabilities of their system. This approach is interesting and paves the way for further investigation, as similar techniques could also be applied to other elements of mapped environments.

Along the same lines, the work of the group led by Simone Mentasti proposes a pipeline to enrich HD maps with traffic lights, enhancing their detection with specific features such as shape, orientation, and pictogram. Unlike the previous two contributions, their focus is on leveraging image data from cameras on board surveying vehicles, together with state-of-the-art deep neural networks, for traffic light detection. Furthermore, to accommodate GNSS errors during the mapping process, the authors proposed Kalman filtering techniques to provide the most accurate traffic light positions possible.
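To make this map-enrichment step concrete, here is a minimal sketch, assuming a static-landmark (constant-position) state model with an identity measurement matrix, of how a Kalman filter can fuse repeated, GNSS-referenced detections of the same traffic light into a refined 2D position. This is not the authors' pipeline; all names and noise values are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (illustrative, not the paper's implementation): refine a
# traffic light's 2D map position by fusing noisy, GNSS-referenced
# detections with a Kalman filter. A static landmark has a constant-
# position model, so the prediction step leaves state and covariance
# unchanged and only the update step is needed.

def refine_landmark_position(detections, meas_var=4.0, prior_var=100.0):
    """detections: list of (x, y) positions in a metric map frame.
    meas_var / prior_var: assumed measurement and prior variances (m^2)."""
    x = np.array(detections[0], dtype=float)   # state: landmark [x, y]
    P = np.eye(2) * prior_var                  # state covariance
    R = np.eye(2) * meas_var                   # measurement noise
    for z in detections[1:]:
        # Update step with H = I (we observe the state directly).
        K = P @ np.linalg.inv(P + R)           # Kalman gain
        x = x + K @ (np.asarray(z, dtype=float) - x)
        P = (np.eye(2) - K) @ P
    return x, P

# Five noisy sightings of the same traffic light from different passes.
sightings = [(10.2, 5.1), (9.8, 4.7), (10.4, 5.3), (9.9, 5.0), (10.1, 4.9)]
pos, cov = refine_landmark_position(sightings)
print("refined position:", pos)
print("residual std (m):", np.sqrt(np.diag(cov)))
```

In the actual contribution the filter must also cope with data association and the surveying vehicle's own pose uncertainty; the sketch only conveys how repeated observations progressively shrink a landmark's position covariance.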
Even though localization is, in a sense, less complex than full SLAM, since it exploits already available maps, all the algorithms presented so far rely on techniques that demand substantial computational resources. Much of the time, these algorithms include expensive subprocesses consisting of massive, repetitive sets of operations on the sensed data. This is the case both for LiDAR input, with its thousands of 3D points, and for imagery data processed by deep neural networks.

While specialized hardware is available to speed up the inference and classification of images, the same hardware can be used to parallelize well-established filtering techniques. This is demonstrated in the last work of this collection, in which Lesia Mochurad proposed a parallel implementation of the Kalman filter to speed up vehicle localization based on LiDAR data. Using the CUDA programming model and NVIDIA hardware, her proposal achieved a 3.8x speedup over the original, unparallelized implementation of the Kalman filter, which is remarkable as it allows easy scaling with the number of 3D LiDAR points used for localization.
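Mochurad's implementation targets CUDA on NVIDIA GPUs; the following Python/NumPy sketch is only a language-neutral illustration of the data parallelism involved: the per-point part of the update is independent across points, so an entire scan can be processed in one batch, which is precisely the pattern a CUDA kernel realizes with one thread per point. The toy gain and all array shapes are assumptions for illustration.

```python
import numpy as np

# Illustrative sketch of the parallelism a GPU-based Kalman filter
# exploits: the per-point correction is independent across points, so a
# whole LiDAR scan can be processed in one vectorized pass. On a GPU,
# each point would map to one CUDA thread; here NumPy's batch dimension
# plays that role.

def batched_point_update(predicted, measured, meas_var=0.05):
    """predicted, measured: (N, 3) point sets in the map frame.
    Toy per-axis scalar gain, assuming unit prior variance per axis."""
    gain = 1.0 / (1.0 + meas_var)                 # scalar Kalman gain
    return predicted + gain * (measured - predicted)

rng = np.random.default_rng(0)
pred = rng.uniform(-50.0, 50.0, size=(100_000, 3))     # predicted points
meas = pred + rng.normal(scale=0.1, size=pred.shape)   # noisy measurements
corrected = batched_point_update(pred, meas)
print(corrected.shape)  # (100000, 3): the entire scan in one parallel pass
```

The reported speedup stems from this kind of independence: because each point's arithmetic does not depend on the others, the work scales out across GPU threads instead of running through a sequential loop over points.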
Keywords: GNSS-denied environments, OpenStreetMap, localization, HD maps, traffic lights, extended Kalman filter, CUDA, self-driving
Received: 11 Oct 2024; Accepted: 28 Oct 2024.
Copyright: © 2024 Ballardini, Cattaneo, Sorrenti and Parra Alonso. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Augusto Luis Ballardini, Polytechnic School, University of Alcalá, Alcalá de Henares, 28805, Madrid, Spain
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.