ORIGINAL RESEARCH article

Front. Robot. AI
Sec. Field Robotics
Volume 12 - 2025 | doi: 10.3389/frobt.2025.1548143
This article is part of the Research Topic Revolutionizing Agriculture: The Role of Robotics and AI in Smart Farming

Unsupervised Semantic Label Generation in Agricultural Fields

Provisionally accepted
Gianmarco Roggiolani 1*, Julius Rückin 1, Marija Popović 2, Jens Behley 1, Cyrill Stachniss 1
  • 1 University of Bonn, Bonn, Germany
  • 2 Delft University of Technology, Delft, Netherlands

The final, formatted version of the article will be published soon.

Robust perception systems allow farm robots to recognize weeds and vegetation, enabling the selective application of fertilizers and herbicides to mitigate the environmental impact of traditional agricultural practices. Today's perception systems typically rely on deep learning to interpret sensor data for tasks such as distinguishing soil, crops, and weeds. These approaches usually require substantial amounts of manually labeled training data, which is time-consuming to obtain and requires domain expertise. This paper aims to reduce this limitation and proposes an automated labeling pipeline for crop-weed semantic image segmentation in managed agricultural fields. It allows the training of deep learning models with little or no manual labeling of images.

Our system uses RGB images recorded with unmanned aerial or ground robots operating in the field and exploits the field's row structure to produce spatially consistent semantic labels. We use the previously detected rows to identify multiple crop rows, reducing labeling errors and improving consistency. We further reduce labeling errors by assigning an "unknown" class to vegetation that is challenging to segment. We employ evidential deep learning because it provides per-prediction uncertainty estimates that we use to refine and improve our predictions. The evidential model assigns high uncertainty to the weed class, as it is often underrepresented in the training data, allowing us to use this uncertainty to correct the semantic predictions.

Experimental results suggest that our approach outperforms general-purpose labeling methods applied to crop fields by a large margin, as well as domain-specific approaches on multiple fields and crop species. Using our generated labels to train deep learning models boosts prediction performance on previously unseen fields, generalizing to unseen crop species, growth stages, and lighting conditions. We obtain an IoU of 88.6% on crops and 22.7% on weeds for a managed sugar beet field, where fully supervised methods achieve 83.4% on crops and 33.5% on weeds, and other unsupervised domain-specific methods achieve 54.6% on crops and 11.2% on weeds. Finally, our method allows fine-tuning models trained in a fully supervised fashion to improve their performance in unseen field conditions by up to +17.6% in mean IoU without additional manual labeling.
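To make the row-based labeling idea concrete, below is a minimal sketch of how a field's row structure can drive automatic label generation. It is not the authors' exact pipeline: the excess-green threshold, Hough parameters, band widths, and class indices are illustrative assumptions.

    # Hedged sketch: row-structure-based label generation for a top-down
    # RGB image of a row crop field. Not the paper's exact method; all
    # thresholds below are illustrative assumptions.
    import cv2
    import numpy as np

    SOIL, CROP, WEED, UNKNOWN = 0, 1, 2, 3

    def generate_labels(rgb, row_band_px=40, unknown_band_px=15):
        # Vegetation mask from the excess-green index ExG = 2g - r - b.
        img = rgb.astype(np.float32) / 255.0
        r, g, b = img[..., 0], img[..., 1], img[..., 2]
        veg = ((2.0 * g - r - b) > 0.1).astype(np.uint8)

        # Detect dominant straight lines (candidate crop rows) in the mask.
        lines = cv2.HoughLines(veg * 255, 1, np.pi / 180.0, threshold=300)
        labels = np.full(veg.shape, SOIL, dtype=np.uint8)
        if lines is None:
            labels[veg > 0] = UNKNOWN   # no rows found: do not guess
            return labels

        # Distance of every pixel to the nearest detected row line
        # (Hough line: x*cos(theta) + y*sin(theta) = rho).
        ys, xs = np.mgrid[0:veg.shape[0], 0:veg.shape[1]]
        dist = np.full(veg.shape, np.inf, dtype=np.float32)
        for rho, theta in lines[:, 0]:
            d = np.abs(xs * np.cos(theta) + ys * np.sin(theta) - rho)
            dist = np.minimum(dist, d)

        # Vegetation near a row -> crop; far from all rows -> weed;
        # the in-between band -> "unknown", keeping noise out of training.
        labels[(veg > 0) & (dist <= row_band_px)] = CROP
        labels[(veg > 0) & (dist > row_band_px + unknown_band_px)] = WEED
        labels[(veg > 0) & (dist > row_band_px)
                         & (dist <= row_band_px + unknown_band_px)] = UNKNOWN
        return labels

The uncertainty-based refinement can likewise be sketched using the standard evidential deep learning formulation (Dirichlet evidence with vacuity uncertainty u = K / S). The softplus evidence head, the weed class index, and the threshold are assumptions for illustration, not the paper's reported values.

    # Hedged sketch: evidential uncertainty and uncertainty-based relabeling.
    import torch
    import torch.nn.functional as F

    def evidential_predict(logits):
        """logits: (B, K, H, W) raw network outputs for K classes."""
        evidence = F.softplus(logits)           # e_k >= 0
        alpha = evidence + 1.0                  # Dirichlet parameters
        strength = alpha.sum(dim=1, keepdim=True)
        probs = alpha / strength                # expected class probabilities
        K = logits.shape[1]
        uncertainty = K / strength.squeeze(1)   # vacuity u = K / S, in (0, 1]
        return probs, uncertainty

    def refine(probs, uncertainty, weed_idx=2, u_thresh=0.6):
        # Underrepresented classes (here: weed) tend to receive high
        # uncertainty, so relabel highly uncertain pixels as weed.
        pred = probs.argmax(dim=1)
        pred[uncertainty > u_thresh] = weed_idx
        return pred

In this formulation, vacuity is maximal (u = 1) when the network produces no evidence for any class, which is why pixels of a class rarely seen during training can be recovered by the relabeling step.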

Keywords: agricultural automation, robotic crop monitoring, deep learning for agricultural robots, semantic scene understanding, automatic labeling, unsupervised learning

    Received: 19 Dec 2024; Accepted: 31 Jan 2025.

    Copyright: © 2025 Roggiolani, Rückin, Popović, Behley and Stachniss. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence: Gianmarco Roggiolani, University of Bonn, Bonn, Germany

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.