EDITORIAL article

Front. Robot. AI, 24 November 2022
Sec. Field Robotics
This article is part of the Research Topic "AI Processing of UAV Acquired Images for Pattern Monitoring in Natural and Urban Environments".

Editorial: AI processing of UAV acquired images for pattern monitoring in natural and urban environments

Yago Diez Donoso1*, Nuno Gracias2, Mariano Cabezas3, Carsten Juergens4 and Maximo Larry Lopez Caceres5

  • 1Faculty of Science, Yamagata University, Yamagata, Japan
  • 2Underwater Robotics Research Center, Computer Vision and Robotics Institute, University of Girona, Girona, Spain
  • 3Brain and Mind Centre, University of Sydney, Sydney, NSW, Australia
  • 4Geomatics Group, Geography Department, Ruhr-University, Bochum, Germany
  • 5Faculty of Agriculture, Yamagata University, Tsuruoka, Japan

Remote sensing data, and especially data collected by Unmanned Aerial Vehicles (UAVs), are currently being used in an increasing number of research areas. These data, together with the new image analysis possibilities that Deep Learning provides, are pushing forward significant advances in research areas as diverse as traffic surveillance (Bozcan and Kayacan, 2020), forestry (Diez et al., 2021), agriculture (Kamilaris and Prenafeta Boldu, 2018) or cultural heritage inspection (Themistocleous, 2020). Our aim in this Research Topic is to explore how the combination of remote sensing data and AI (and, very particularly, Deep Learning) can be exploited to further widen their field of application and to obtain better results in existing applications.

We believe that the four articles gathered in this Research Topic fulfill our goals and provide a very interesting perspective on how much this research area has advanced in the last few years and how much further it can go. The contributions range from novel takes on existing technology, such as micro-drones used to obtain 3D reconstructions of industrial buildings, to developments in how data are collected (for example, using semantic segmentation to influence future data collection missions), and, finally, to results in established application areas, such as SLAM for agriculture or underwater natural environment monitoring, that transcend UAV data collection but use the same types of information and analysis approaches. We encourage the readers of this Research Topic to carefully examine the contributions presented, but also to consider the future development possibilities that they open for these rapidly developing technologies.

The first contribution, Weißmann et al., presents a novel approach to constructing 3D point clouds, using data from post-industrial buildings in the Rhine-Ruhr Metropolitan Region in Germany. By using inexpensive micro-drones together with widely available open-source software, the authors demonstrate how drones can act as new data sources that change the way we visualize and relate to our environment. In particular, the authors show how the generated photorealistic 3D models can achieve accuracy similar to that of publicly available airborne laser scanning (ALS) data sets. Both the intelligent use of an inexpensive type of drone and the readiness of the technology convincingly support the argument that this type of application will continue to improve and enhance the perception of our environment. In their conclusions, the authors argue that this line of research should be further developed in other practical applications and single out the need to include cognitive feedback in the processing of the data.
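To make the kind of accuracy comparison mentioned above tangible, the sketch below measures nearest-neighbour cloud-to-cloud distances between a drone-derived photogrammetric point cloud and an ALS reference. This is a minimal sketch, assuming Open3D and clouds already georeferenced in the same coordinate system; the library choice and file names are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch, assuming Open3D and pre-registered clouds; not the authors' pipeline.
import numpy as np
import open3d as o3d

# Hypothetical inputs: a UAV structure-from-motion cloud and an ALS reference,
# both assumed to be expressed in the same coordinate reference system.
sfm_cloud = o3d.io.read_point_cloud("uav_sfm_building.ply")
als_cloud = o3d.io.read_point_cloud("als_reference.ply")

# For each photogrammetric point, the distance to its nearest ALS neighbour.
errors = np.asarray(sfm_cloud.compute_point_cloud_distance(als_cloud))

print(f"mean error:   {errors.mean():.3f} m")
print(f"median error: {np.median(errors):.3f} m")
print(f"95th pctile:  {np.percentile(errors, 95):.3f} m")
```

Summary statistics of this kind (mean, median, upper percentiles) are a common way to report how closely a photogrammetric reconstruction tracks a laser-scanned reference.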

As an example of how research on fundamentally distinct practical applications in this field frequently uses similar techniques and creates interesting synergies, the second contribution, Pimentel de Figueiredo et al., already answers some of the questions posed by the authors of Weißmann et al. It presents a new framework for autonomous mapping that uses geometric and semantic segmentation information, computed with Deep Convolutional Neural Networks, to improve the map representations used in Next-Best-View planning techniques. This paper is a very interesting example of how to build on existing technologies, such as the off-the-shelf visual-inertial SLAM system used by the authors, and go beyond their capabilities by exploiting the multi-sensor capacities of drones (in this case multiple cameras, Inertial Measurement Units (IMUs), and an altimeter) and processing the collected data with state-of-the-art deep learning networks (in this case BiSeNet (Yu et al., 2018) for semantic segmentation).
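For readers unfamiliar with this step, the sketch below labels a single camera frame with a pretrained semantic segmentation network. BiSeNet is not bundled with torchvision, so DeepLabV3 stands in for it here purely for availability; the image path and the fusion comment are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch: per-pixel semantic labels for one UAV frame.
# DeepLabV3 is an illustrative stand-in for the BiSeNet model used in the paper.
import torch
from PIL import Image
from torchvision.models.segmentation import (
    deeplabv3_resnet50, DeepLabV3_ResNet50_Weights,
)

weights = DeepLabV3_ResNet50_Weights.DEFAULT
model = deeplabv3_resnet50(weights=weights).eval()
preprocess = weights.transforms()

frame = Image.open("uav_frame.png").convert("RGB")  # hypothetical input frame
batch = preprocess(frame).unsqueeze(0)

with torch.no_grad():
    logits = model(batch)["out"]          # shape: (1, num_classes, H, W)
labels = logits.argmax(dim=1).squeeze(0)  # per-pixel class ids

# In a mapping pipeline, labels like these would be fused into the 3D map so a
# Next-Best-View planner can, e.g., prioritise viewpoints over under-observed classes.
print(labels.shape, labels.unique())
```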

Although the remaining two contributions, Silva Aguiar et al. and Runyan et al., do not use remote sensing data collected by UAVs, they not only significantly advance aspects of the algorithms used when processing UAV data, but also represent interesting applications that showcase the current practical impact of Deep Learning networks. Silva Aguiar et al. present a SLAM algorithm aimed at vineyard environments. Their processing of the point data obtained with a LIDAR sensor mounted on a land robot, and in particular the combined use of point-based and semi-plane-based environment mapping techniques, has the potential to improve existing point cloud analysis techniques used for other sources of data, such as the detection of individual trees or plants in forestry or agriculture applications. The authors in particular mention the possibility of extracting features with semantic importance, such as trunks, and note the need to integrate these semantic features into the automatic processing of the data. The future research directions outlined by the authors include some of the advances presented in the other papers in this Research Topic, showing how even long-established research areas such as SLAM can benefit from the current developments in drone technology and image analysis.
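As a small illustration of planar feature extraction from LIDAR data, the sketch below greedily peels planar segments off a point cloud with RANSAC. Open3D's segment_plane is an illustrative stand-in, not the semi-plane mapping implementation of the paper; the input file and thresholds are assumptions.

```python
# Minimal sketch, assuming Open3D; illustrates plane extraction, not the paper's method.
import open3d as o3d

cloud = o3d.io.read_point_cloud("vineyard_scan.pcd")  # hypothetical LIDAR scan

remaining = cloud
for _ in range(3):  # greedily extract the three best-supported planes
    model, inliers = remaining.segment_plane(distance_threshold=0.05,
                                             ransac_n=3,
                                             num_iterations=1000)
    a, b, c, d = model  # plane equation: ax + by + cz + d = 0
    print(f"plane {a:.2f}x + {b:.2f}y + {c:.2f}z + {d:.2f} = 0, "
          f"{len(inliers)} inliers")
    # Remove the inliers and continue on the rest of the cloud.
    remaining = remaining.select_by_index(inliers, invert=True)
```

Planar segments extracted this way give a compact map representation, while the leftover points can still carry semantically important features such as trunks.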

The final contribution, Runyan et al., shows both the potential and the current limitations of Deep Learning-based image analysis techniques. The authors present a study in which underwater image mosaics and point clouds were used to study the evolution of a coral reef environment over several years. Remarkably, the data processing pipeline described in this paper is the same as that used in UAV image analysis, down to the software used for mosaic and point cloud construction. Similarly, Deep Learning networks such as ResNet (He et al., 2016) are used, as is common in many UAV-based applications. As a major point of interest in this contribution, several deep learning networks with different characteristics, operating on data of different dimensionality (2D, 2.5D, and 3D), are compared, and the authors identify the results obtained as one of the limitations of the study.
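For context on how a network such as ResNet is typically adapted to a new monitoring task, the sketch below shows a common fine-tuning recipe on 2D image patches. The class count, input data, and head replacement are assumptions for illustration; the study's own training setup is not reproduced here.

```python
# Minimal sketch: adapting an ImageNet-pretrained ResNet to classify image
# patches (e.g., benthic substrate classes); a common recipe, not the paper's setup.
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

NUM_CLASSES = 5  # hypothetical number of benthic classes

model = resnet18(weights=ResNet18_Weights.DEFAULT)
# Replace the ImageNet classification head with one sized for the new task.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Freeze the backbone and train only the new head first, a standard first stage
# of fine-tuning when labelled data is scarce.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("fc")
```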

We believe that the four contributions presented, when taken together, show the rapid pace of development of this research area and its many accomplishments, and also highlight some of its current limitations and evolution possibilities. We hope the high-quality research presented in them is of interest to readers and helps spark future research developments.

Author contributions

All the authors provided their expertise in the definition of the Research Topic, in assessing the importance of each of the contributions, and in placing them in proper context. YD was in charge of drafting the editorial; NG and MC provided corrections and checked the language and style, with ML and CJ providing guidance throughout the writing process.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Bozcan, I., and Kayacan, E. (2020). "AU-AIR: A multi-modal unmanned aerial vehicle dataset for low altitude traffic surveillance," in Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, September 2020 (IEEE), 8504–8510.

Diez, Y., Kentsch, S., Fukuda, M., Caceres, M. L. L., Moritake, K., and Cabezas, M. (2021). Deep learning in forestry using UAV-acquired RGB data: A practical review. Remote Sens. 13, 2837. doi:10.3390/rs13142837

He, K., Zhang, X., Ren, S., and Sun, J. (2016). "Deep residual learning for image recognition," in Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016 (Los Alamitos, CA, USA: IEEE Computer Society), 770–778.

Kamilaris, A., and Prenafeta Boldu, F. (2018). A review of the use of convolutional neural networks in agriculture. J. Agric. Sci. 156, 312–322. doi:10.1017/s0021859618000436

Themistocleous, K. (2020). The use of UAVs for cultural heritage and archaeology. Springer International Publishing, 241–269.

Yu, C., Wang, J., Peng, C., Gao, C., Yu, G., and Sang, N. (2018). "BiSeNet: Bilateral segmentation network for real-time semantic segmentation," in Proceedings of the European Conference on Computer Vision (ECCV), October 2018, 325–341.

Keywords: editorial, deep learning, UAV imaging, natural environments, urban environments, SLAM

Citation: Diez Donoso Y, Gracias N, Cabezas M, Juergens C and Lopez Caceres ML (2022) Editorial: AI processing of UAV acquired images for pattern monitoring in natural and urban environments. Front. Robot. AI 9:1053063. doi: 10.3389/frobt.2022.1053063

Received: 25 September 2022; Accepted: 10 November 2022;
Published: 24 November 2022.

Edited and reviewed by:

Dongbing Gu, University of Essex, United Kingdom

Copyright © 2022 Diez Donoso, Gracias, Cabezas, Juergens and Lopez Caceres. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Yago Diez Donoso, yago@sci.kj.yamagata-u.ac.jp
