
EDITORIAL article

Front. Plant Sci., 18 September 2023
Sec. Plant Bioinformatics
This article is part of the Research Topic Synthetic Data for Computer Vision in Agriculture.

Editorial: Synthetic data for computer vision in agriculture

  • 1Biometris, Wageningen Plant Research, Wageningen University and Research, Wageningen, Netherlands
  • 2School of Computing, Edinburgh Napier University, Edinburgh, United Kingdom

Availability of good-quality data is often the deciding factor in the application of deep learning to computer vision problems in agriculture, as in other domains. Synthetic data, if realistic enough, can offer a solution. Because the data are created or simulated, the corresponding ground truth, whether image-level labels, bounding boxes, or instance masks, can be prepared automatically. In fact, synthesis can be thought of as the inverse of segmentation, and therefore lends itself to generative models such as generative adversarial networks (GANs; Goodfellow et al., 2020). It has previously been shown that synthetic images of sweet pepper plants, generated using 3D modelling of the plant structure and GANs to improve the realism of the projected 2D images (Barth et al., 2018a; 2018b), led to improved overall detection performance across different datasets when used as the initial training set for sweet pepper detection models (Afonso, 2019). This suggests that training on synthetic data allows models to learn some latent characteristics of the target domain, which can then be refined by training on the corresponding empirical data.
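
The pretrain-then-fine-tune recipe implied here is straightforward in practice. The following is a minimal PyTorch-style sketch of our own, not code from any of the cited works: it uses a plain classification loss for brevity (a detection loss would take its place), and the datasets, model, and hyperparameters are placeholders.

```python
# Sketch: pretrain on automatically labelled synthetic data, then fine-tune
# the same weights on scarcer empirical data, typically with a lower
# learning rate. Datasets and model are hypothetical placeholders.
import torch
from torch import nn, optim
from torch.utils.data import DataLoader

def train(model: nn.Module, loader: DataLoader, epochs: int, lr: float) -> nn.Module:
    opt = optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()  # stand-in for a task-specific loss
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()
    return model

# Stage 1: synthetic data with free ground truth; Stage 2: empirical data.
# model = train(model, synthetic_loader, epochs=20, lr=1e-3)
# model = train(model, empirical_loader, epochs=10, lr=1e-4)
```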

The aim of this Research Topic is to showcase some of the recent applications of synthetic data to computer vision problems in agriculture. Here, we highlight the main themes of the following contributions.

In Wang et al., a dataset for training a YOLO-based (Redmon et al., 2016) object detector for pear flowers was developed by synthesizing extracted flowers onto backgrounds from the target use case. The modified model, called YOLO-PEFL, differs from YOLOv4 in that its backbone is ShuffleNetv2 (Ma et al., 2018) combined with the SENet model. It obtained an average precision of 96.71%, with all metrics better than those obtained by training only on natural data.
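
The compositing step in this kind of pipeline can be illustrated with a short sketch. This is our own illustration, not the pipeline of Wang et al.: an extracted object with transparency is pasted onto a background at a random position, and the paste location directly yields the bounding-box ground truth.

```python
# Illustrative cut-and-paste synthesis (not the authors' exact pipeline).
# Pastes an RGBA object crop onto a background and returns the composite
# together with its automatically derived bounding box.
import random
from PIL import Image

def composite(background: Image.Image, obj: Image.Image):
    bg = background.convert("RGB").copy()
    ow, oh = obj.size
    assert ow <= bg.width and oh <= bg.height, "object must fit in background"
    x = random.randint(0, bg.width - ow)
    y = random.randint(0, bg.height - oh)
    # The object's alpha channel acts as the paste mask, so only the flower
    # pixels overwrite the background.
    bg.paste(obj, (x, y), obj.convert("RGBA"))
    bbox = (x, y, x + ow, y + oh)  # ground truth comes for free
    return bg, bbox

# Usage (file names are hypothetical):
# bg = Image.open("orchard_background.jpg")
# flower = Image.open("extracted_flower.png")  # RGBA cut-out
# image, bbox = composite(bg, flower)
```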

Image enhancement transforms were used in Yu et al. to generate a training dataset for the detection of brown leaf spot, frog-eye leaf spot, and phyllosticta leaf spot in soybeans. A modified ResNet18 (He et al., 2016) model including an attention layer was developed which, when trained on this dataset, achieved an average disease recognition accuracy of 98.49%.
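
Enhancement-based dataset expansion of this kind typically amounts to applying label-preserving photometric and geometric transforms to the source images. A minimal torchvision sketch follows; the specific transforms and parameters are illustrative assumptions, not those reported by Yu et al.

```python
# Hypothetical enhancement pipeline for expanding a leaf-disease dataset;
# the transforms and their parameters are illustrative only.
from torchvision import transforms

enhance = transforms.Compose([
    transforms.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3),
    transforms.RandomRotation(degrees=15),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Each pass over a PIL source image yields a new, label-preserving variant:
# augmented = enhance(Image.open("leaf_with_brown_spot.jpg"))
```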

Another work on leaf disease detection, Sapoukhina et al., generated synthetic fluorescent images of Arabidopsis plants by statistically quantifying the pixel values corresponding to healthy and diseased leaves, then using the obtained parameters to generate appropriate pixel values given an input mask indicating the healthy regions and leaf lesions. A U-Net model for segmenting the lesions, trained on this dataset, achieved a recall of 0.793 and an average precision of 0.723 on an empirical fluorescent test dataset.
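
Mask-conditioned pixel synthesis of this sort can be sketched with a simple per-class Gaussian model, which we use here purely as an illustration of the idea; the actual statistical model of Sapoukhina et al. may differ.

```python
# Sketch: fit per-class pixel statistics on a real image, then sample new
# pixel values for an arbitrary mask. Assumes single-channel images and a
# Gaussian model per class; both are simplifying assumptions.
import numpy as np

def fit_class_stats(image: np.ndarray, mask: np.ndarray) -> dict:
    """Per-class (mean, std) of pixel values; mask: 0 = healthy, 1 = lesion."""
    return {c: (image[mask == c].mean(), image[mask == c].std())
            for c in (0, 1)}

def synthesize(mask: np.ndarray, stats: dict) -> np.ndarray:
    out = np.zeros(mask.shape, dtype=np.float32)
    for c, (mu, sigma) in stats.items():
        idx = mask == c
        out[idx] = np.random.normal(mu, sigma, size=int(idx.sum()))
    return out

# stats = fit_class_stats(real_image, real_mask)
# fake = synthesize(new_mask, stats)  # paired image + ground-truth mask
```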

Two works dealt with the problem of segmentation of roots. In Li et al., a model for 3D semantic segmentation of roots was proposed, based on U-Net with modified layers to reduce the training volume, improve feature utilization, and improve the segmentation accuracy of the object through multi-scale features and attention fusion. In addition, the dice-focal loss function was modified to avoid data imbalance problems between foreground and background, such as between the root system and the soil. Experimental results on a test set of peanut root images gave a root segmentation accuracy of 0.9917, an intersection over union of 0.9548, and an F1-score of 95.10. Transfer learning for the segmentation of corn roots was also reported.
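
The dice-focal combination referred to above pairs a region-overlap (Dice) term with a focal term that down-weights easy, typically background, pixels. A generic sketch of the standard combination follows; the specific modification introduced by Li et al. is not reproduced here.

```python
# Standard dice + focal loss for binary segmentation (a generic sketch,
# not Li et al.'s modified version).
import torch

def dice_focal_loss(logits, target, alpha=0.25, gamma=2.0, eps=1e-6):
    """logits and target have the same shape; target is binary {0, 1}."""
    prob = torch.sigmoid(logits)
    # Dice term: 1 - 2|P.T| / (|P| + |T|), a soft overlap measure.
    inter = (prob * target).sum()
    dice = 1 - (2 * inter + eps) / (prob.sum() + target.sum() + eps)
    # Focal term: (1 - p_t)^gamma down-weights well-classified pixels,
    # which counters the foreground/background imbalance.
    pt = prob * target + (1 - prob) * (1 - target)
    alpha_t = alpha * target + (1 - alpha) * (1 - target)
    focal = (-alpha_t * (1 - pt) ** gamma * torch.log(pt.clamp(min=eps))).mean()
    return dice + focal
```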

A method for 3D segmentation of cassava roots, Alle et al., utilized a custom CNN architecture with a spatial pyramid pooling layer to deal with the variance in root structures, along with a dataset obtained by recursively refining ground-truth segmentations produced by the RootForce algorithm (Gerth et al., 2021), following the weakly supervised framework of Khoreva et al. (2017). Experimental results showed that finer root structures could be segmented using this approach coupled with a locally adapted field-of-view.
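
The recursive refinement loop, train on the current labels, predict, filter, and reuse the predictions as the next round's labels, can be sketched generically. The fit/predict interface, the confidence filter, and the round count below are assumptions for illustration, not the procedure of Alle et al.

```python
# Generic recursive label refinement in the weakly supervised spirit of
# Khoreva et al. (2017). The model interface (fit/predict) and the
# confidence filter are illustrative assumptions.
import numpy as np

def clean(pred: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    # Keep only confident foreground; everything else reverts to background.
    return (np.asarray(pred) > threshold).astype(np.uint8)

def refine_labels(model_factory, images, initial_labels, rounds: int = 3):
    """Train, predict, filter, repeat; returns the final model and labels."""
    labels = initial_labels
    model = None
    for _ in range(rounds):
        model = model_factory()                # fresh model each round
        model.fit(images, labels)              # train on current (noisy) labels
        labels = clean(model.predict(images))  # refined labels for next round
    return model, labels
```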

Final comments

The articles presented in this Research Topic show different approaches to generating synthetic data: relatively simple pixel colorspace transformations, overlaying objects of interest onto target backgrounds, statistical modeling of pixel values, and recursively refining baseline segmentations. The deep learning problems included segmentation and object detection, with disease detection also formulated as a segmentation problem for the diseased or damaged regions. The reported results show that improvements in the metrics of interest could be obtained with models trained on synthetic data, making a strong case for using synthetic data as a starting point for introducing deep learning to these kinds of computer vision problems in settings that previously lacked annotated data.

Author contributions

MA: Writing – original draft, Writing – review & editing. VG: Writing – review & editing.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Afonso, M. (2019). “Deep learning based plant part detection in greenhouse settings,” in 12th EFITA international conference (EFITA), 48–53.

Barth, R., Hemming, J., van Henten, E. J. (2018a). Optimising realism of synthetic images using cycle generative adversarial networks for improved part segmentation. Comput. Electron. Agric. 173, 105378. doi: 10.1016/j.compag.2020.105378

Barth, R., IJsselmuiden, J., Hemming, J., Van Henten, E. J. (2018b). Data synthesis methods for semantic segmentation in agriculture: A Capsicum annuum dataset. Comput. Electron. Agric. 144, 284–296. doi: 10.1016/j.compag.2017.12.001

Gerth, S., Claußen, J., Eggert, A., Wörlein, N., Waininger, M., Wittenberg, T., et al. (2021). Semiautomated 3D root segmentation and evaluation based on X-ray CT imagery. Plant Phenomics 2021, 8747930. doi: 10.34133/2021/8747930

Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., et al. (2020). Generative adversarial networks. Commun. ACM 63, 139–144.

He, K., Zhang, X., Ren, S., Sun, J. (2016). “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, Las Vegas, USA, 770–778.

Khoreva, A., Benenson, R., Hosang, J., Hein, M., Schiele, B. (2017). “Simple does it: Weakly supervised instance and semantic segmentation,” in Proceedings of the IEEE conference on computer vision and pattern recognition, Honolulu, USA, 876–885.

Ma, N., Zhang, X., Zheng, H.-T., Sun, J. (2018). “ShuffleNet V2: Practical guidelines for efficient CNN architecture design,” in Proceedings of the European conference on computer vision (ECCV), Munich, Germany, 116–131.

Redmon, J., Divvala, S., Girshick, R., Farhadi, A. (2016). “You only look once: Unified, real-time object detection,” in 2016 IEEE conference on computer vision and pattern recognition (CVPR), Las Vegas, USA, 779–788. doi: 10.1109/CVPR.2016.91

Keywords: synthetic data, computer vision in agriculture, segmentation, deep learning, object detection

Citation: Afonso M and Giufrida V (2023) Editorial: Synthetic data for computer vision in agriculture. Front. Plant Sci. 14:1277073. doi: 10.3389/fpls.2023.1277073

Received: 13 August 2023; Accepted: 28 August 2023;
Published: 18 September 2023.

Edited and Reviewed by:

Lida Zhang, Shanghai Jiao Tong University, China

Copyright © 2023 Afonso and Giufrida. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Manya Afonso, manya.afonso@wur.nl
