ORIGINAL RESEARCH article
Front. Remote Sens.
Sec. Multi- and Hyper-Spectral Imaging
Volume 6 - 2025 | doi: 10.3389/frsen.2025.1545983
GLN-LRF: Global Learning Network based on Large Receptive Fields for Hyperspectral Image Classification
Provisionally accepted
- 1 Fujian Agriculture and Forestry University, Fuzhou, China
- 2 Fujian Police College, Fuzhou, Fujian Province, China
- 3 Newford Research Institute of Advanced Technology, Wenzhou, China
Deep learning has been widely applied to high-dimensional hyperspectral image classification and has achieved significant improvements in classification accuracy. However, most current hyperspectral image classification networks follow a patch-based learning framework, which divides the entire image into multiple overlapping patches and uses each patch as input to the network. Such locality-based methods are limited in capturing global contextual information and incur high computational costs due to patch overlap. To alleviate this problem, we propose a global learning network based on large receptive fields (GLNet) to capture more comprehensive and accurate global contextual information and to enrich the underlying feature representation for hyperspectral image classification. The proposed GLNet is an encoder-decoder architecture with skip connections. In the encoder phase, a large receptive field context exploration (LRFC) block is proposed to extract multi-scale contextual features; the LRFC block enlarges the receptive field and captures richer spectral-spatial information. In the decoder phase, a multi-scale simple attention (MSA) block is proposed to further extract rich semantic information: it extracts deep semantic features with multi-scale convolution kernels and fuses the resulting features using SimAM. Specifically, GLNet achieved an overall accuracy (OA) of 98.72%, an average accuracy (AA) of 98.63%, and a Kappa coefficient of 98.3% on the IP dataset; similar improvements were observed on the PU and HOS18 datasets, confirming its superior performance compared to the baseline models.
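The decoder's combination of multi-scale convolution and SimAM can be illustrated with a minimal sketch. The snippet below assumes a simple additive fusion of parallel 3x3/5x5/7x7 convolution branches followed by the standard parameter-free SimAM weighting; the kernel sizes, fusion rule, and class names (MSABlockSketch) are illustrative assumptions, not the authors' exact MSA design.

```python
import torch
import torch.nn as nn


class SimAM(nn.Module):
    """Parameter-free SimAM attention: weights each position by its energy."""
    def __init__(self, eps: float = 1e-4):
        super().__init__()
        self.eps = eps

    def forward(self, x):
        # x: (B, C, H, W); statistics are computed per channel over spatial positions
        n = x.shape[2] * x.shape[3] - 1
        d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)
        v = d.sum(dim=(2, 3), keepdim=True) / n
        e_inv = d / (4 * (v + self.eps)) + 0.5
        return x * torch.sigmoid(e_inv)


class MSABlockSketch(nn.Module):
    """Hypothetical multi-scale simple attention block: parallel convolutions
    with different kernel sizes, summed and refined with SimAM."""
    def __init__(self, channels: int, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, k, padding=k // 2) for k in kernel_sizes
        )
        self.attn = SimAM()

    def forward(self, x):
        fused = sum(branch(x) for branch in self.branches)  # multi-scale fusion
        return self.attn(fused)


# Usage example on a dummy feature map
feats = torch.randn(2, 64, 32, 32)
out = MSABlockSketch(64)(feats)
print(out.shape)  # torch.Size([2, 64, 32, 32])
```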
Keywords: hyperspectral image classification, multi-scale fusion, spatially separable convolution, large receptive fields, global contextual information
Received: 16 Dec 2024; Accepted: 23 Apr 2025.
Copyright: © 2025 Dai, Liu, Lin, Wang, Lin, Yang and Chen. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Yaohai Lin, Fujian Agriculture and Forestry University, Fuzhou, China
Changcai Yang, Fujian Agriculture and Forestry University, Fuzhou, China
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.