ORIGINAL RESEARCH article
Front. Environ. Sci.
Sec. Big Data, AI, and the Environment
Volume 12 - 2024 | doi: 10.3389/fenvs.2024.1515752
This article is part of the Research Topic "Advanced Applications of Artificial Intelligence and Big Data Analytics for Integrated Water and Agricultural Resource Management: Emerging Paradigms and Methodologies".
A Vision-Language Model for Predicting Potential Distribution Land of Soybean Double Cropping
Provisionally accepted
Gao Bei, Shaanxi Meteorological Service Center of Agricultural Remote Sensing and Economic Crops, Xi'an, China
Accurately predicting suitable areas for double-cropped soybeans under changing climatic conditions is critical for ensuring food security and optimizing land use. Traditional methods, which rely on single-modal inputs such as remote sensing imagery or climate data in isolation, often fail to capture the complex interactions among environmental factors, leading to suboptimal predictions. Moreover, these approaches cannot integrate multi-scale data and contextual information, limiting their applicability in diverse and dynamic environments. To address these challenges, we propose AgriCLIP, a novel remote sensing vision-language model that integrates remote sensing imagery with textual data, such as climate reports and agricultural practices, to predict potential distribution areas of double-cropped soybeans under climate change. AgriCLIP employs multi-scale data processing, self-supervised learning, and cross-modality feature fusion, enabling a comprehensive analysis of the factors that influence crop suitability. Extensive evaluations on four diverse remote sensing datasets (RSICap, RSIEval, MillionAID, and HRSID) demonstrate AgriCLIP's superior performance over state-of-the-art models. Notably, AgriCLIP achieves 97.54% accuracy on the RSICap dataset and outperforms competitors on recall, F1 score, and AUC. Its efficiency is further highlighted by reduced computational demands compared with baseline methods. By seamlessly integrating visual and contextual information, AgriCLIP not only improves prediction accuracy but also provides interpretable insights for agricultural planning and climate adaptation strategies, offering a robust and scalable solution to the food security challenges posed by global climate change.
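For readers unfamiliar with the cross-modality feature fusion mentioned above, the following is a minimal, hypothetical sketch of the general CLIP-style idea: image features from remote sensing tiles and text features from documents such as climate reports are projected into a shared embedding space and aligned with a contrastive objective. The module names, dimensions, and dummy inputs are illustrative assumptions, not the authors' AgriCLIP implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalFusion(nn.Module):
    """Illustrative CLIP-style projection of two modalities into one space."""
    def __init__(self, img_dim=2048, txt_dim=768, embed_dim=512):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, embed_dim)   # image-feature projection
        self.txt_proj = nn.Linear(txt_dim, embed_dim)   # text-feature projection
        self.logit_scale = nn.Parameter(torch.log(torch.tensor(1 / 0.07)))

    def forward(self, img_feats, txt_feats):
        # L2-normalise both modalities so the dot product is cosine similarity
        img_emb = F.normalize(self.img_proj(img_feats), dim=-1)
        txt_emb = F.normalize(self.txt_proj(txt_feats), dim=-1)
        return self.logit_scale.exp() * img_emb @ txt_emb.t()

def contrastive_loss(logits):
    # Symmetric InfoNCE: matching image/text pairs lie on the diagonal
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Dummy features standing in for encoder outputs (hypothetical shapes)
img_feats = torch.randn(8, 2048)   # e.g., CNN/ViT features of remote sensing tiles
txt_feats = torch.randn(8, 768)    # e.g., transformer features of climate text
model = CrossModalFusion()
loss = contrastive_loss(model(img_feats, txt_feats))

In such a setup, the shared embedding space is what allows textual context (climate conditions, cropping practices) to inform predictions made from imagery; AgriCLIP's actual fusion strategy may differ in detail.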
Keywords: AgriCLIP, remote sensing, vision-language model, climate change, double-cropped soybeans, predicting distribution areas
Received: 24 Oct 2024; Accepted: 23 Dec 2024.
Copyright: © 2024 Bei. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Gao Bei, Shaanxi Meteorological Service Center of Agricultural Remote Sensing and Economic Crops, Xi'an, China
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.