The final, formatted version of the article will be published soon.
ORIGINAL RESEARCH article
Front. Plant Sci.
Sec. Technical Advances in Plant Science
Volume 15 - 2024
doi: 10.3389/fpls.2024.1512632
This article is part of the Research Topic Agricultural Innovation in the Age of Climate Change: A 4.0 Approach
Efficient detection of eyes on potato tubers using deep-learning for robotic high-throughput sampling
Provisionally accepted
- 1 Center for Precision and Automated Agricultural Systems, Washington State University, Prosser, United States
- 2 Department of Biological Systems Engineering, College of Agricultural, Human, and Natural Resource Sciences, Washington State University, Pullman, Washington, United States
- 3 Department of Plant Pathology, College of Agricultural, Human, and Natural Resource Sciences, Washington State University, Pullman, Washington, United States
Molecular-based detection of pathogens from potato tubers holds promise, but the initial sample extraction process is labor-intensive. Developing a robotic tuber sampling system equipped with a fast and precise machine vision technique to identify optimal sampling locations on a potato tuber offers a viable solution. However, detecting sampling locations such as eyes and stolon scars is challenging due to variability in their appearance, size, and shape, along with soil adhering to the tubers. In this study, we addressed these challenges by evaluating various deep-learning-based object detectors, encompassing the You Only Look Once (YOLO) variants YOLOv5, YOLOv6, YOLOv7, YOLOv8, YOLOv9, YOLOv10, and YOLO11, for detecting eyes and stolon scars across diverse potato cultivars. A robust image dataset obtained from tubers of five potato cultivars (three russet-skinned, one red-skinned, and one purple-skinned) was developed as a benchmark for detection of these sampling locations. The mean average precision at an intersection over union threshold of 0.5 (mAP@0.5) ranged from 0.832 and 0.854 with YOLOv5n to 0.903 and 0.914 with YOLOv10l. Among all the tested models, YOLOv10m showed the optimal trade-off between detection accuracy (mAP@0.5 of 0.911) and inference time (92 ms), along with satisfactory generalization performance when cross-validated among the cultivars used in this study. The model benchmarking and inferences of this study provide insights for advancing the development of a robotic potato tuber sampling device.
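The mAP@0.5 metric reported above counts a detection as correct only when its predicted bounding box overlaps a ground-truth eye or stolon-scar box with an intersection over union (IoU) of at least 0.5. A minimal sketch of that matching step follows; this is an illustration of the standard IoU-threshold matching convention, not the authors' implementation, and the box format `(x1, y1, x2, y2)` in pixels is an assumption.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def match_at_iou50(detections, ground_truth):
    """Greedy matching: each detection, taken in descending confidence order,
    claims the unmatched ground-truth box with the highest IoU >= 0.5.
    `detections` is a list of (box, confidence) pairs; returns (TP, FP)."""
    used = set()
    tp = fp = 0
    for box, _conf in sorted(detections, key=lambda d: -d[1]):
        best, best_iou = None, 0.5  # require at least the 0.5 threshold
        for i, gt in enumerate(ground_truth):
            if i in used:
                continue
            v = iou(box, gt)
            if v >= best_iou:
                best, best_iou = i, v
        if best is None:
            fp += 1  # no sufficiently overlapping ground truth left
        else:
            used.add(best)
            tp += 1
    return tp, fp
```

From these true/false positive counts (accumulated over confidence thresholds and classes), precision-recall curves are built and averaged to obtain mAP@0.5.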
Keywords: Tissue sampling robot, Machine Vision, Molecular diagnostics, Potato pathogens, FTA card, YOLO
Received: 17 Oct 2024; Accepted: 29 Nov 2024.
Copyright: © 2024 Loganathan Girija, Khanal, Paudel, Mattupalli and Karkee. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Divyanth Loganathan Girija, Center for Precision and Automated Agricultural Systems, Washington State University, Prosser, United States
Salik Ram Khanal, Center for Precision and Automated Agricultural Systems, Washington State University, Prosser, United States
Manoj Karkee, Center for Precision and Automated Agricultural Systems, Washington State University, Prosser, United States
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.