ORIGINAL RESEARCH article

Front. Plant Sci.
Sec. Plant Bioinformatics
Volume 15 - 2024 | doi: 10.3389/fpls.2024.1463113
This article is part of the Research Topic Recent Advances in Big Data, Machine, and Deep Learning for Precision Agriculture, Volume II.

Chrysanthemum Classification Method Integrating Deep Visual Features from both the Front and Back Sides

Provisionally accepted
Yifan Chen 1, Xichen Yang 1*, Hui Yan 2, Jia Liu 2, Jian Jiang 1, Zhongyuan Mao 1, Tianshu Wang 2
  • 1 Nanjing Normal University, Nanjing, China
  • 2 Nanjing University of Chinese Medicine, Nanjing, Jiangsu Province, China

The final, formatted version of the article will be published soon.

    Chrysanthemum morifolium Ramat (hereinafter referred to as Chrysanthemum) is one of the most beloved and economically valuable Chinese herbal crops; it contains abundant medicinal ingredients and has wide application prospects. Therefore, identifying the classification and origin of Chrysanthemum is important for producers, consumers, and market regulators. Existing Chrysanthemum classification methods mostly rely on subjective visual identification, are time-consuming, and often require expensive equipment. A novel method is proposed to accurately identify the Chrysanthemum classification in a swift, non-invasive, and non-contact way. The proposed method is based on the fusion of deep visual features from both the front and back sides. Firstly, Chrysanthemum images are collected and labeled with origins and classifications. Secondly, the background area, which carries little useful information, is removed by image preprocessing. Thirdly, a two-stream feature extraction network is designed that takes the preprocessed front and back Chrysanthemum images as its two inputs. Meanwhile, single-stream residual connections and cross-stream residual connections are incorporated to extend the receptive field of the network and fully fuse the features from both the front and back sides. Experimental results demonstrate that the proposed method achieves an accuracy of 93.8%, outperforming existing methods and exhibiting superior stability. The proposed method provides an effective and dependable solution for identifying Chrysanthemum classification and origin while offering practical benefits for quality assurance in production, consumer markets, and regulatory processes. Code and data are available at https://github.com/dart-into/CCMIFB.
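    The abstract's two-stream fusion idea can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (see the linked repository for that): the feature extractor here is a single dense layer standing in for a deep backbone, and all shapes, weights, and function names are illustrative assumptions. It shows only the connection pattern described above: each stream adds its own input back (single-stream residual) and also receives the other stream's features (cross-stream residual).

```python
import numpy as np

rng = np.random.default_rng(0)

def extract(x, w):
    """Stand-in for a deep feature extractor: one dense layer + ReLU."""
    return np.maximum(x @ w, 0.0)

def two_stream_stage(front, back, w_f, w_b):
    """One illustrative fusion stage.

    Single-stream residual: each output adds its own input back.
    Cross-stream residual: each output also adds the other stream's
    features, fusing front-side and back-side information.
    """
    f_feat = extract(front, w_f)
    b_feat = extract(back, w_b)
    f_out = front + f_feat + b_feat  # own residual + cross-stream term
    b_out = back + b_feat + f_feat
    return f_out, b_out

d = 8                                    # illustrative feature dimension
w_f = rng.standard_normal((d, d)) * 0.1  # front-stream weights
w_b = rng.standard_normal((d, d)) * 0.1  # back-stream weights
front = rng.standard_normal((1, d))      # flattened "front side" features
back = rng.standard_normal((1, d))       # flattened "back side" features

f_out, b_out = two_stream_stage(front, back, w_f, w_b)
```

    Because the residual sums require matching shapes, each stage here preserves the feature dimension; a real network would interleave such fusion stages with convolutional blocks before a final classifier.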

    Keywords: Chrysanthemum classification, Two-stream network, Visual information, Feature fusion, deep learning

    Received: 11 Jul 2024; Accepted: 26 Dec 2024.

    Copyright: © 2024 Chen, Yang, Yan, Liu, Jiang, Mao and Wang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence: Xichen Yang, Nanjing Normal University, Nanjing, China

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.