BRIEF RESEARCH REPORT article

Front. Phys., 08 March 2023

Sec. Optics and Photonics

Volume 11 - 2023 | https://doi.org/10.3389/fphy.2023.1159266

A hybrid neural architecture search for hyperspectral image classification

  • 1. Heilongjiang Province Key Laboratory of Laser Spectroscopy Technology and Application, Harbin University of Science and Technology, Harbin, China

  • 2. Department of Computer Science, Chubu University, Kasugai, Aichi, Japan


Abstract

Convolutional neural networks (CNNs) are widely used in hyperspectral image (HSI) classification. However, the architecture of a CNN is usually designed manually and requires careful fine-tuning. Recently, many neural architecture search (NAS) techniques have been proposed to design networks automatically, raising the accuracy of HSI classification to a new level. This paper proposes a circular kernel convolution-β-decay regularization NAS-confident learning rate (CK-βNAS-CLR) framework to automatically design the neural network structure for HSI classification. First, we construct a hybrid search space with 12 kinds of operations, which exploits the difference between enhanced circular kernel convolution and square kernel convolution in feature acquisition so as to improve the sensitivity of the network to hyperspectral information features. Then, the β-decay regularization scheme is introduced to enhance the robustness of differentiable architecture search (DARTS) and reduce the discretization discrepancy in architecture search. Finally, we combine a confident learning rate strategy to alleviate the problem of performance collapse. Experimental results on public HSI datasets (Indian Pines, Pavia University) show that the proposed NAS method achieves impressive classification performance and effectively improves classification accuracy.

1 Introduction

Hyperspectral images (HSIs) collect rich spatial–spectral information in hundreds of spectral bands, which can be used to distinguish ground cover effectively. HSI classification is performed at the pixel level, and many traditional machine learning methods have been applied, such as the K-nearest neighbor (KNN) [1] and support vector machine (SVM) [2]. HSI classification methods based on deep learning can extract robust features and thus obtain better classification performance [3-5].

The cost of computing resources and the workload of manual parameter adjustment have inevitably driven the development of techniques for automatically designing efficient neural networks [6]. The goal of neural architecture search (NAS) is to select and combine different neural operations from a predefined search space and to automate the construction of high-performance network structures. Traditional NAS work uses reinforcement learning (RL) [7], evolutionary algorithms (EA) [8], and gradient-based methods to conduct the architecture search.

In order to reduce resource consumption, one-shot NAS methods based on a supernet have been developed [9]. DARTS is a one-shot NAS method with a differentiable search strategy [10]. By introducing the Softmax function, it relaxes the discrete search space into a continuous optimization process. In this way, it reduces the workload of network architecture design and avoids a large number of verification experiments [9].
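The continuous relaxation at the heart of DARTS can be illustrated with a minimal sketch. The candidate operations below are simplified stand-ins, not the paper's actual search space; each edge of the cell computes a Softmax-weighted mixture of all candidates, and after the search the strongest candidate is kept.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Hypothetical candidate operations on one edge (stand-ins for conv/pool/skip).
ops = [
    lambda x: x,                 # skip connection
    lambda x: np.maximum(x, 0),  # stand-in for a conv + ReLU operation
    lambda x: x * 0.0,           # "none" operation
]

def mixed_op(x, alpha):
    """DARTS relaxation: weight every candidate op by softmax(alpha)."""
    weights = softmax(alpha)
    return sum(w * op(x) for w, op in zip(weights, ops))

x = np.array([-1.0, 2.0, 3.0])
alpha = np.array([0.5, 1.0, -2.0])  # learnable architecture parameters
y = mixed_op(x, alpha)

# After the search, the edge keeps the op with the largest architecture weight.
best = int(np.argmax(softmax(alpha)))
```

Because the mixture is differentiable in `alpha`, the architecture parameters can be optimized by gradient descent alongside the network weights.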

CNAS, a method for the automatic design of convolutional neural networks for hyperspectral image classification, introduced DARTS into the HSI classification task for the first time. It uses point-wise convolution to compress the spectral dimension of the HSI to a few dozen bands and then applies DARTS to search for a neural architecture suited to the HSI dataset [11]. Subsequently, 3D asymmetric neural architecture search (3D-ANAS) designed a pixel-to-pixel classification framework and alleviated redundant operations with a 3D asymmetric CNN, which significantly improved the computational speed of the model [12].

Traditional CNN design uses square kernels to extract image features, which poses significant challenges to the computing system because the number of arithmetic operations grows rapidly with network size. The features acquired by a square kernel are usually unevenly distributed [13], because the weights at the central intersections are usually large. Inspired by circular kernel (CK) convolution, this paper studies a new NAS paradigm that classifies HSI data by automatically designing over a hybrid search space. The main contributions of this paper are as follows:

  • 1) An effective NAS framework, called CK-βNAS-CLR, is proposed. It is built on a hybrid search space of 12 operations that combines circular kernel convolutions of different types and scales with an attention mechanism to effectively improve feature acquisition.

  • 2) β-decay regularization is introduced to stabilize the search process and make the searched network architecture transferable among multiple HSI datasets.

  • 3) A confident learning rate strategy is introduced to account for the confidence of the architecture-gradient update and to prevent over-parameterization.

2 Materials and methods

As shown in Figure 1, the proposed NAS framework for HSI classification, called CK-βNAS-CLR, aims to alleviate the shortcomings of traditional micro NAS methods in three aspects, namely search space, search strategy, and architecture resource optimization, and to effectively improve classification accuracy.

FIGURE 1. Overall framework of the proposed CK-βNAS-CLR model.

DARTS is the basic framework; it adopts weight sharing and combines supernet training with the search for the best candidate architecture to effectively reduce wasted computing resources. First, the hyperspectral image is clipped into patches by a sliding window and used as input. Then, a hybrid search space of CK convolution and attention mechanisms is constructed, and the operation search between nodes is carried out in this space to effectively improve the feature acquisition ability of the receptive field. At the same time, the architecture parameter set β, which represents the importance of each operator, is decayed and regularized, strengthening the robustness of DARTS and reducing the discretization discrepancy in the search process. After the search is completed, the algorithm stacks multiple normal cells and reduction cells to form the optimal neural architecture, and the classification results are obtained through a Softmax operation. In addition, CLR is combined with the decay regularization to alleviate the performance collapse of DARTS, improve memory efficiency, and reduce architecture search time.

2.1 The proposed NAS framework for HSI classification

2.1.1 Integrating circular kernel to convolution

The circular kernel is isotropic and treats all directions equally. In addition, a symmetric circular kernel ensures rotation invariance; bilinear interpolation is used to approximate the traditional square convolution kernel with a circular one, and a matrix transformation re-parameterizes the weight matrix, replacing the original matrix with the transformed one to realize the shift of the receptive field. Ignoring boundary effects, the receptive field H of a standard 3 × 3 square convolution kernel with a dilation of 1 is written as follows:

H = {(−1, −1), (−1, 0), (−1, 1), (0, −1), (0, 0), (0, 1), (1, −1), (1, 0), (1, 1)}, (1)

where H represents the set of offsets of the neighborhood convolved around the center pixel. Convolving the input feature map X with the kernel w yields the output feature map Y, whose value at each position p0 is shown in formula (2):

Y(p0) = Σ_{pn ∈ H} w(pn) · X(p0 + pn). (2)

So, we get Y = X ⊛ w, where ⊛ represents the classical convolution operation used by the CNN. Therefore, the change in the receptive field of the circular 3 × 3 kernel is shown in formula (3).

For the sampling problem of circular convolution kernels, we select the offsets for the different discrete kernel positions and resample the input at these offsets to obtain the circular receptive field. Because the sampled positions of the circular kernel have fractional coordinates, we use bilinear interpolation to approximate the sampled values of the receptive field,

where p represents a grid position in the circular receptive field and q ranges over all grid positions in the square receptive field; g(q, p) is the kernel of two-dimensional bilinear interpolation. According to bilinear interpolation, g can be divided into two one-dimensional kernels:

g(q, p) = g(q_x, p_x) · g(q_y, p_y), with g(a, b) = max(0, 1 − |a − b|).

Therefore, g(q_x, p_x) and g(q_y, p_y) are non-zero only for the grid positions q adjacent to the sampling location p. We then let the adjusted receptive field centered at a position and the adjusted kernel be defined accordingly. In general, the standard convolution is defined in formula (8), and after substituting the circular kernel, the circular convolution is defined in formula (9),

where T is a fixed sparse coefficient matrix; letting X, Y, and w be the input feature map, output feature map, and kernel, respectively, formula (9) can be rewritten as formula (10),

where the operation denotes convolution with the square receptive field changed into a circular one. Thus, we can compute the transformed kernel weights T · w directly, which avoids computing per-convolution offsets and reduces the cost of the kernel operation. Next, we analyze the actual effect of the transformation matrix: the effect of a change in the kernel weights on the output is shown in formula (11), and its squared value is shown in formula (12).

In contrast, the corresponding quantity of a traditional convolution layer is defined directly by the kernel weights. It can therefore be concluded that the transformation matrix introduced by the circular kernel provides a better choice of gradient descent path for DARTS.
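The re-parameterization above can be sketched in code. Assuming (as an illustration) that the circular receptive field consists of the center plus eight points on the unit circle, the fixed sparse coefficient matrix mentioned in the text (called `T` here) collects the bilinear-interpolation weights that express each fractional circular sampling point in terms of the nine square grid points:

```python
import numpy as np

# Square 3x3 receptive-field offsets (dilation 1), in (y, x) order.
square = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]

# Assumed circular receptive field: center plus 8 points on the unit circle.
angles = np.arange(8) * np.pi / 4
circle = [(0.0, 0.0)] + [(np.sin(a), np.cos(a)) for a in angles]

def bilinear_weight(q, p):
    """2-D bilinear kernel, separable into two 1-D kernels g(a, b) = max(0, 1 - |a - b|)."""
    return max(0.0, 1 - abs(q[0] - p[0])) * max(0.0, 1 - abs(q[1] - p[1]))

# Fixed sparse matrix T: row i holds the interpolation weights that express
# circular sampling point i in terms of the 9 square grid points.
T = np.array([[bilinear_weight(q, p) for q in square] for p in circle])

# Each row sums to 1, so T @ w re-parameterizes a square kernel w into
# weights sampled on the circular receptive field, with no per-call offsets.
w_square = np.random.randn(9)
w_circle = T @ w_square
```

Because `T` is fixed, the transformation can be folded into the kernel weights once, rather than recomputing fractional offsets at every convolution.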

2.1.2 β-decay regularization scheme

In order to alleviate unfair competition in DARTS, we introduce the β-decay regularization scheme [14] to improve its robustness and generalization ability and to effectively reduce the search memory and search cost of finding the best architecture, as shown in Figure 2.

FIGURE 2. β-decay regularization scheme.

The Softmax function converts the discrete set of candidate operations in the search space into an operable continuous set. After the Softmax operation, the architecture parameter set β is obtained and is then decayed and regularized,

where β encodes the combination of architecture parameters between node m and node n over the set of candidate operations. Each cell can have up to N nodes, and α represents the raw architecture parameters. A separate coefficient is defined for each candidate operation.

Starting from the default setting of weight-decay regularization, consider the one-step update of the architecture parameters α, where η represents the learning rate of the architecture parameters.

For the particular gradient descent algorithm of DARTS, these regularization gradients need to be normalized by their summed magnitude; without normalization, the total gradient would be spread evenly across operations.

In the DARTS search process, the architecture parameter set β expresses the importance of all operators. Explicit regularization of β standardizes the optimization of the architecture parameters more directly and thus improves the robustness and architecture generality of DARTS. We use a function with β as the independent variable to express the total impact of the decay regularization,

where the function F (with β as its independent variable) represents the overall influence of β-decay regularization and f is the mapping function. The single-step update can then be iterated as a weighted sum of the parameter value and its decayed value.

It can be seen that the mapping function f determines the impact of the decay on β. To avoid excessive regularization and optimization difficulty, Softmax is used to normalize f, from which the impact and effect of our method can be obtained.
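As an illustration of how a decay on β can be folded into the architecture-parameter update, the sketch below regularizes α with a log-sum-exp term, whose gradient is exactly softmax(α) = β, so the extra term decays the operator-importance weights directly. This is an assumed instantiation for illustration, not necessarily the paper's exact scheme.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def beta_decay_step(alpha, grad_val, lr=0.01, lam=0.5):
    """One architecture-parameter update with an extra beta-decay term.

    Illustrative assumption: regularizing with log-sum-exp(alpha) gives a
    regularization gradient equal to softmax(alpha) = beta, so the decay
    acts directly on the operator-importance weights.
    """
    beta = softmax(alpha)
    return alpha - lr * (grad_val + lam * beta)

alpha = np.array([2.0, 0.0, -1.0])
grad_val = np.zeros(3)  # pretend the task gradient is zero for this step
new_alpha = beta_decay_step(alpha, grad_val)

# The decay pulls the dominant operator down, flattening beta slightly and
# discouraging premature, unfair dominance of one operation.
assert softmax(new_alpha)[0] < softmax(alpha)[0]
```

The flattening effect is what counteracts the unfair competition between parametric and non-parametric operations during the search.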

2.1.3 Confident learning rate strategy

When the NAS method is used to classify hyperspectral datasets, a large number of parameters will be generated. When the training samples are limited, the performance of the network may be reduced due to the over-fitting phenomenon, which will lead to low memory utilization during the training process. CLR is used to alleviate these two problems [15].

After applying the Softmax operation, the architecture is relaxed. The gradient descent algorithm is used to optimize the architecture matrix, and the original network weights are denoted w. The cross-entropy loss is then computed in the training stage, and the parameters α and w are updated.

To optimize both at the same time, the architecture parameters are fixed while the network weights are updated on the training set by gradient descent; the network weights are then fixed while the architecture parameters are updated on the validation set; these two steps are repeated to obtain the best parameter values. The optimization stops once the best neural architecture is found, minimizing the validation loss.

The NAS architecture parameters become over-parameterized as training time increases. Therefore, the confidence in the gradients obtained from the parameterized DARTS should grow with the training time of the architecture weight updates,

where t represents the number of epochs trained so far, T represents the preset total number of epochs, and λ is the confidence factor of CLR. Through the confident learning rate update, the network obtains the scaled learning rate and uses it for the gradient update; in this way, the confident learning rate is embedded in the architecture gradient update process.
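The idea of scaling the architecture learning rate with training progress can be sketched as follows. The functional form (progress raised to a confidence factor) is an assumption for illustration; the text only specifies that confidence should grow with t/T.

```python
def confident_lr(base_lr, epoch, total_epochs, factor=1.0):
    """Confidence-scaled learning rate for architecture-gradient updates.

    Assumed form: confidence grows with training progress t/T, raised to a
    confidence factor. Early epochs update architecture weights cautiously;
    later epochs use (at most) the full base learning rate.
    """
    progress = min(epoch / total_epochs, 1.0)
    return base_lr * (progress ** factor)

# Example schedule over 200 epochs, sampled every 50 epochs.
schedule = [confident_lr(3e-4, t, 200) for t in range(0, 201, 50)]
```

Under this schedule the architecture gradient has little influence at the start of training, when the supernet weights are still unreliable, which is the mechanism the text credits with preventing over-parameterization.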

3 Results

Our experiments are conducted using an Intel(R) Xeon(R) 4208 processor and an Nvidia GeForce RTX 2080Ti graphics card. We report the average of 10 runs for the overall accuracy (OA), average accuracy (AA), and Kappa coefficient (K).
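The three reported metrics can all be computed from a confusion matrix; a minimal sketch (the toy matrix is illustrative, not from the experiments):

```python
import numpy as np

def oa_aa_kappa(confusion):
    """Compute overall accuracy, average accuracy, and Cohen's kappa
    from a confusion matrix (rows: true class, columns: predicted class)."""
    c = np.asarray(confusion, dtype=float)
    n = c.sum()
    oa = np.trace(c) / n                               # fraction correct
    aa = np.mean(np.diag(c) / c.sum(axis=1))           # mean per-class recall
    pe = (c.sum(axis=0) * c.sum(axis=1)).sum() / n**2  # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa

# Toy 2-class example.
cm = [[45, 5],
      [10, 40]]
oa, aa, k = oa_aa_kappa(cm)  # oa = 0.85, aa = 0.85, k = 0.7
```

Note that the tables report K × 100, i.e., the kappa value scaled to the same range as the percentage accuracies.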

3.1 Comparison with state-of-the-art methods

In this section, we select several state-of-the-art methods for comparison to evaluate classification performance: the extended morphological profile with support vector machine (EMP-SVM) [16], spectral–spatial residual network (SSRN) [17], residual network (ResNet) [18], pyramidal residual network (PyResNet) [19], multi-layer perceptron mixer (MLP-Mixer) [20], CNAS [11], and efficient convolutional neural architecture search (ANAS-CPA-LS) [21]. All experimental results are shown in Tables 1 and 2. Samples are clipped using a sliding window of size 32 × 32, and the overlap rate is set to 50%. We randomly selected 30 samples as the training dataset and 20 samples as the validation dataset. The number of training epochs is set to 200, and the learning rate is set to 0.001 for the datasets.
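The sliding-window clipping described above can be sketched as follows (the toy cube dimensions are illustrative; with a 32 × 32 window, a 50% overlap rate gives a stride of 16):

```python
import numpy as np

def extract_patches(img, size=32, overlap=0.5):
    """Clip an HSI cube (H, W, bands) into square patches with a sliding
    window; the stride follows from the overlap rate (50% -> stride 16)."""
    stride = int(size * (1 - overlap))
    h, w, _ = img.shape
    patches = []
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            patches.append(img[y:y + size, x:x + size, :])
    return np.stack(patches)

# Toy cube standing in for an HSI scene (real scenes are larger,
# e.g., Indian Pines is 145 x 145 with 200 bands).
cube = np.zeros((64, 64, 8))
p = extract_patches(cube)  # windows start at y, x in {0, 16, 32}
```

Overlapping windows increase the number of training patches and ensure that pixels near window borders also appear near the center of some patch.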

TABLE 1

| Class | EMP-SVM | SSRN | ResNet | PyResNet | CNAS | MLP-Mixer | ANAS-CPA-LS | CK-βNAS-CLR |
| Alfalfa | 51.16 ± 17.51 | 100 ± 0.00 | 95.36 ± 0.28 | 97.43 ± 0.36 | 95.14 ± 4.27 | 100 ± 0.00 | 99.39 ± 0.32 | 100.0 ± 0.75 |
| Corn-no till | 66.26 ± 3.37 | 88.95 ± 1.19 | 93.36 ± 3.76 | 98.87 ± 1.04 | 96.19 ± 1.21 | 96.56 ± 0.09 | 96.43 ± 0.79 | 96.93 ± 0.31 |
| Corn-min till | 70.40 ± 3.74 | 92.87 ± 0.54 | 95.64 ± 0.37 | 89.44 ± 0.34 | 92.86 ± 2.64 | 92.10 ± 0.03 | 94.25 ± 0.97 | 96.10 ± 0.34 |
| Corn | 56.63 ± 6.68 | 96.56 ± 0.91 | 94.99 ± 4.32 | 97.47 ± 0.57 | 96.56 ± 3.94 | 98.15 ± 0.68 | 99.07 ± 0.74 | 99.95 ± 0.00 |
| Grass-pasture | 87.04 ± 3.52 | 90.52 ± 7.56 | 97.69 ± 2.30 | 97.38 ± 0.95 | 97.13 ± 0.27 | 97.87 ± 1.16 | 99.56 ± 0.55 | 94.59 ± 1.16 |
| Grass-trees | 87.30 ± 1.82 | 96.91 ± 1.16 | 97.27 ± 1.80 | 98.40 ± 0.12 | 94.02 ± 0.22 | 96.98 ± 0.86 | 96.44 ± 0.97 | 98.39 ± 0.79 |
| Grass-pasture-mowed | 82.78 ± 8.11 | 86.60 ± 0.89 | 99.01 ± 0.00 | 100 ± 0.00 | 92.31 ± 0.88 | 100 ± 0.00 | 90.68 ± 2.66 | 95.21 ± 0.05 |
| Hay-windrowed | 89.45 ± 1.75 | 96.28 ± 3.71 | 93.64 ± 1.17 | 95.81 ± 1.68 | 99.44 ± 4.66 | 95.80 ± 0.60 | 97.91 ± 0.67 | 95.73 ± 1.95 |
| Oats | 64.83 ± 16.3 | 84.37 ± 0.06 | 96.66 ± 1.66 | 94.67 ± 1.11 | 98.96 ± 0.56 | 100 ± 0.00 | 99.51 ± 2.02 | 93.15 ± 2.87 |
| Soybean-no till | 71.90 ± 2.12 | 92.46 ± 0.07 | 92.88 ± 1.90 | 88.20 ± 7.13 | 92.00 ± 0.56 | 93.38 ± 0.34 | 95.02 ± 0.43 | 98.69 ± 1.04 |
| Soybean-min till | 73.01 ± 1.62 | 95.33 ± 0.06 | 94.27 ± 0.11 | 95.26 ± 1.87 | 93.90 ± 0.92 | 95.34 ± 0.37 | 98.36 ± 0.65 | 99.04 ± 0.79 |
| Soybean-clean | 66.49 ± 4.56 | 94.69 ± 2.77 | 96.13 ± 1.43 | 95.42 ± 1.79 | 94.88 ± 0.19 | 94.34 ± 1.32 | 99.07 ± 0.83 | 98.38 ± 3.19 |
| Wheat | 88.62 ± 4.35 | 97.84 ± 1.14 | 99.02 ± 0.01 | 99.55 ± 0.62 | 94.65 ± 0.45 | 100 ± 0.00 | 95.28 ± 0.57 | 99.41 ± 0.12 |
| Woods | 90.44 ± 1.19 | 94.82 ± 2.31 | 93.97 ± 1.67 | 96.84 ± 1.14 | 98.33 ± 0.29 | 95.64 ± 0.06 | 95.57 ± 0.99 | 98.77 ± 0.62 |
| Buildings-grass-trees-drives | 71.35 ± 7.39 | 91.29 ± 2.52 | 93.66 ± 0.11 | 93.56 ± 3.90 | 94.99 ± 0.45 | 95.55 ± 0.15 | 95.14 ± 1.66 | 98.65 ± 1.24 |
| Stone-steel-towers | 98.10 ± 1.82 | 86.70 ± 1.22 | 94.47 ± 1.43 | 96.34 ± 1.72 | 89.57 ± 2.66 | 88.12 ± 2.12 | 88.28 ± 0.96 | 85.30 ± 0.85 |
| OA (%) | 81.64 ± 0.02 | 93.58 ± 1.12 | 94.53 ± 0.59 | 94.95 ± 1.07 | 95.00 ± 0.56 | 95.95 ± 0.17 | 96.57 ± 0.56 | 97.90 ± 0.31 |
| AA (%) | 75.98 ± 5.36 | 92.88 ± 1.64 | 95.50 ± 1.39 | 95.91 ± 1.52 | 95.05 ± 1.51 | 96.23 ± 0.48 | 96.24 ± 0.98 | 96.76 ± 1.00 |
| K × 100 | 71.92 ± 2.82 | 92.67 ± 1.29 | 93.74 ± 0.67 | 94.31 ± 0.19 | 94.88 ± 0.87 | 95.38 ± 0.20 | 95.69 ± 0.39 | 96.97 ± 0.90 |

Performance comparison of different methods of the Indian Pines dataset.

TABLE 2

| Class | EMP-SVM | SSRN | ResNet | PyResNet | CNAS | MLP-Mixer | ANAS-CPA-LS | CK-βNAS-CLR |
| Asphalt | 89.06 ± 1.33 | 90.59 ± 0.62 | 95.59 ± 3.43 | 94.01 ± 0.54 | 95.55 ± 0.25 | 95.56 ± 1.26 | 97.56 ± 1.34 | 98.75 ± 0.69 |
| Meadows | 88.12 ± 0.23 | 89.26 ± 1.05 | 97.10 ± 2.06 | 99.40 ± 0.70 | 98.91 ± 0.71 | 99.45 ± 0.15 | 100 ± 0.00 | 100 ± 0.00 |
| Gravel | 78.65 ± 3.05 | 78.89 ± 2.65 | 87.53 ± 5.27 | 98.26 ± 1.69 | 93.81 ± 4.60 | 93.83 ± 1.03 | 99.78 ± 0.03 | 98.73 ± 0.28 |
| Trees | 88.95 ± 0.53 | 89.05 ± 1.45 | 99.03 ± 0.33 | 98.73 ± 0.61 | 99.35 ± 0.28 | 98.78 ± 0.45 | 93.73 ± 1.64 | 97.12 ± 0.02 |
| Metal | 93.23 ± 1.29 | 94.55 ± 0.67 | 98.56 ± 1.36 | 99.64 ± 0.27 | 99.67 ± 0.17 | 99.81 ± 0.81 | 99.24 ± 0.01 | 98.93 ± 0.36 |
| Bare soil | 90.13 ± 0.54 | 90.23 ± 1.23 | 98.35 ± 1.06 | 99.28 ± 0.04 | 98.39 ± 0.18 | 98.97 ± 0.97 | 100 ± 0.00 | 100 ± 0.00 |
| Bitumen | 81.66 ± 3.31 | 83.69 ± 2.82 | 99.29 ± 0.51 | 96.35 ± 0.19 | 92.31 ± 0.88 | 99.22 ± 0.19 | 94.63 ± 0.24 | 97.61 ± 0.48 |
| Bricks | 83.05 ± 1.61 | 83.57 ± 2.91 | 94.61 ± 0.50 | 84.58 ± 1.34 | 89.44 ± 4.66 | 88.88 ± 1.92 | 92.89 ± 0.42 | 98.23 ± 0.69 |
| Shadows | 95.26 ± 0.56 | 94.68 ± 0.46 | 99.39 ± 0.58 | 99.74 ± 0.05 | 98.96 ± 0.56 | 98.92 ± 1.46 | 96.33 ± 0.32 | 97.16 ± 0.91 |
| OA (%) | 91.07 ± 0.85 | 91.32 ± 1.22 | 96.49 ± 1.78 | 96.97 ± 1.32 | 97.05 ± 0.45 | 97.55 ± 0.13 | 97.61 ± 0.87 | 98.46 ± 0.57 |
| AA (%) | 87.57 ± 1.38 | 88.28 ± 1.54 | 96.61 ± 1.68 | 96.67 ± 0.13 | 96.27 ± 1.36 | 97.05 ± 0.91 | 97.12 ± 0.44 | 98.50 ± 0.34 |
| K × 100 | 88.72 ± 1.44 | 89.43 ± 1.09 | 95.31 ± 2.41 | 96.07 ± 0.41 | 96.22 ± 0.11 | 96.76 ± 0.18 | 96.86 ± 0.74 | 98.06 ± 0.67 |

Performance comparison of different methods of the Pavia University dataset.

In Table 1, compared with EMP-SVM, SSRN, ResNet, PyResNet, CNAS, MLP-Mixer, and ANAS-CPA-LS, the OA of our proposed method on the Indian Pines dataset is higher by 16.26%, 4.32%, 3.37%, 2.95%, 2.90%, 1.95%, and 1.33%, respectively. Figures 3 and 4 show the classification maps for visual comparison, from which we can conclude that our algorithm achieves better performance. Compared with CNAS, our method uses a hybrid search space, which effectively expands the receptive field of each pixel, increases the flexibility of different convolution kernels in processing spectral and spatial information, and achieves higher classification accuracy.

FIGURE 3. Classification results of the Indian Pines dataset. (A) Ground-truth map, (B) EMP-SVM, (C) SSRN, (D) ResNet, (E) PyResNet, (F) CNAS, (G) MLP-Mixer, (H) ANAS-CPA-LS, and (I) CK-βNAS-CLR.

FIGURE 4. Classification results of the Pavia University dataset. (A) Ground-truth map, (B) EMP-SVM, (C) SSRN, (D) ResNet, (E) PyResNet, (F) CNAS, (G) MLP-Mixer, (H) ANAS-CPA-LS, and (I) CK-βNAS-CLR.

4 Discussion

The ablation study results are provided in Table 3. When CNAS is combined with the hybrid search space, OA increases by 0.70%, 0.35%, and 0.54%, which shows that the hybrid search space improves the sensitivity of the network to hyperspectral information features and modestly improves the classification performance of the model. Compared with CNAS, CK-NAS achieves better classification accuracy with almost no change in search time. The full CK-βNAS-CLR search obtains better results with fewer parameters and lower computational complexity.

TABLE 3

| Dataset | Index | CNAS | CK + NAS | CK + β + NAS | CK + β + NAS + CLR |
| Indian Pines | OA (%) | 95.00 ± 0.56 | 95.70 ± 0.87 | 96.80 ± 0.31 | 97.82 ± 0.31 |
| | AA (%) | 95.05 ± 1.51 | 93.12 ± 0.42 | 94.60 ± 0.42 | 96.76 ± 1.00 |
| | K × 100 | 94.88 ± 0.87 | 95.11 ± 0.17 | 96.95 ± 0.17 | 96.87 ± 0.90 |
| | Search cost (h) | 2.702 | 2.687 | 2.562 | 2.473 |
| | Params (M) | 0.082 | 0.075 | 0.075 | 0.070 |
| Pavia University | OA (%) | 97.05 ± 0.45 | 97.40 ± 0.29 | 97.82 ± 0.20 | 98.46 ± 0.57 |
| | AA (%) | 96.27 ± 1.36 | 97.14 ± 1.36 | 97.51 ± 1.83 | 98.50 ± 0.86 |
| | K × 100 | 96.22 ± 0.11 | 96.65 ± 0.79 | 97.25 ± 2.51 | 98.06 ± 0.67 |
| | Search cost (h) | 3.054 | 3.013 | 2.867 | 2.733 |
| | Params (M) | 0.176 | 0.172 | 0.172 | 0.168 |

Ablation results on the two datasets.

5 Conclusion

In this paper, the neural architecture search framework CK-βNAS-CLR is proposed. First, we introduce a hybrid search space with circular kernel convolution, which enhances the robustness of the model and its ability to acquire receptive fields while also providing a better optimization path. Second, we adopt the β-decay regularization scheme, which reduces the discretization discrepancy and the search time. Finally, the confident learning rate strategy is introduced to improve classification accuracy and reduce computational complexity. Experiments were conducted on two HSI datasets, comparing CK-βNAS-CLR with seven methods; the results show that our method achieves state-of-the-art performance while using fewer computing resources. In the future, we will use an adaptive subset of the data even when training the final architecture, which may lead to faster runtimes and a smaller regularization burden.

Statements

Data availability statement

Publicly available datasets were analyzed in this study. These data can be found here: https://www.ehu.eus/ccwintco/index.php?%20title=Hyperspectral-Remote-Sensing-Scenes.

Author contributions

All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.

Funding

This work was funded by the Reserved Leaders of Heilongjiang Provincial Leading Talent Echelon of 2021, high and foreign expert’s introduction program (G2022012010L), and the Key Research and Development Program Guidance Project (GZ20220123).

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors, and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

  • 1. Chandra B, Sharma RK. On improving recurrent neural network for image classification. In: Proceedings of the International Joint Conference on Neural Networks (IJCNN). Alaska, USA (2017). p. 1904-7. doi: 10.1109/IJCNN.2017.7966083

  • 2. Samadzadegan F, Hasani H, Schenk T. Simultaneous feature selection and SVM parameter determination in classification of hyperspectral imagery using Ant Colony Optimization. Remote Sens (2012) 38:139-56. doi: 10.5589/m12-022

  • 3. Hu W, Huang Y, Wei L, Zhang F, Li H. Deep convolutional neural networks for hyperspectral image classification. J Sensors (2015) 2015:1-12. doi: 10.1155/2015/258619

  • 4. Makantasis K, Karantzalos K, Doulamis A, Doulamis N. Deep supervised learning for hyperspectral data classification through convolutional neural networks. In: 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS). Milan, Italy (2015). p. 4959-62.

  • 5. Li W, Wu G, Zhang F, Du Q. Hyperspectral image classification using deep pixel-pair features. IEEE Trans Geosci Remote Sensing (2017) 55(2):844-53. doi: 10.1109/tgrs.2016.2616355

  • 6. Kaifeng B, Lingxi X, Chen X, Longhui W, Tian Q. GOLD-NAS: Gradual, one-level, differentiable (2020). arXiv:2007.03331.

  • 7. Real E, Aggarwal A, Huang Y, Le QV. Regularized evolution for image classifier architecture search. In: Proc AAAI Conf Artif Intell (2019) 33:4780-9. doi: 10.1609/aaai.v33i01.33014780

  • 8. Tan M, Chen B, Pang R, Vasudevan V, Sandler M, Howard A, et al. MnasNet: Platform-aware neural architecture search for mobile. In: CVPR. Long Beach, CA, USA (2019). p. 2820-8.

  • 9. Liang H. DARTS+: Improved differentiable architecture search with early stopping (2020). arXiv:1909.06035. Available: https://arxiv.org/abs/1909.06035 (Accessed October 20, 2020).

  • 10. Guo Z. Single path one-shot neural architecture search with uniform sampling. In: Proc Eur Conf Comput Vis (2020). p. 544-60.

  • 11. Chen Y, Zhu K, Zhu L, He X, Ghamisi P, Benediktsson JA. Automatic design of convolutional neural network for hyperspectral image classification. IEEE Trans Geosci Remote Sensing (2019) 57(9):7048-66. doi: 10.1109/tgrs.2019.2910603

  • 12. Zhang H, Gong C, Bai Y, Bai Z, Li Y. 3D-ANAS: 3D asymmetric neural architecture search for fast hyperspectral image classification (2021). arXiv:2101.04287. Available: https://arxiv.org/abs/2101.04287 (Accessed January 12, 2021).

  • 13. Li G, Qian G, Delgadillo IC, Muller M, Thabet A, Ghanem B. SGAS: Sequential greedy architecture search. In: Proc IEEE Conf Comput Vis Pattern Recognit (2020). p. 1620-30.

  • 14. Ye P, Li B, Li Y, Chen T, Fan J, Ouyang W, et al. β-DARTS: Beta-decay regularization for differentiable architecture search. In: Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, LA, USA (2022). p. 10864-73.

  • 15. Ding Z, Chen Y, Li N, Zhao D. BNAS-v2: Memory-efficient and performance-collapse-prevented broad neural architecture search. IEEE Trans Syst Man Cybern Syst (2022) 52(10):6259-72. doi: 10.1109/TSMC.2022.3143201

  • 16. Melgani F, Bruzzone L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans Geosci Remote Sensing (2004) 42(8):1778-90. doi: 10.1109/tgrs.2004.831865

  • 17. Zhong Z, Li J, Luo Z, Chapman M. Spectral–spatial residual network for hyperspectral image classification: A 3-D deep learning framework. IEEE Trans Geosci Remote Sensing (2018) 56(2):847-58. doi: 10.1109/tgrs.2017.2755542

  • 18. Liu X, Meng Y, Fu M. Classification research based on residual network for hyperspectral image. In: Proceedings of the 2019 IEEE 4th International Conference on Signal and Image Processing (ICSIP) (2019). p. 911-5.

  • 19. Paoletti ME, Haut JM, Fernandez-Beltran R, Plaza J, Plaza AJ, Pla F. Deep pyramidal residual networks for spectral–spatial hyperspectral image classification. IEEE Trans Geosci Remote Sensing (2019) 57(2):740-54. doi: 10.1109/TGRS.2018.2860125

  • 20. He X, Chen Y. Modifications of the multi-layer perceptron for hyperspectral image classification. Remote Sensing (2021) 13(17):3547. doi: 10.3390/rs13173547

  • 21. Wang A, Xue D, Wu H, Gu Y. Efficient convolutional neural architecture search for LiDAR DSM classification. IEEE Trans Geosci Remote Sensing (2022) 60:1-17. Art no. 5703317. doi: 10.1109/TGRS.2022.3171520


Keywords

hyperspectral image classification, neural architecture search, differentiable architecture search (DARTS), circular kernel convolution, convolution neural network

Citation

Wang A, Song Y, Wu H, Liu C and Iwahori Y (2023) A hybrid neural architecture search for hyperspectral image classification. Front. Phys. 11:1159266. doi: 10.3389/fphy.2023.1159266

Received

05 February 2023

Accepted

16 February 2023

Published

08 March 2023

Volume

11 - 2023

Edited by

Zhenxu Bai, Hebei University of Technology, China

Reviewed by

Liguo Wang, Dalian Nationalities University, China

Xiaobin Hong, South China Normal University, China


Copyright

*Correspondence: Haibin Wu,

This article was submitted to Optics and Photonics, a section of the journal Frontiers in Physics

