ORIGINAL RESEARCH article

Front. Neurosci.
Sec. Visual Neuroscience
Volume 18 - 2024 | doi: 10.3389/fnins.2024.1322623

Multi-Gate Weighted Fusion Network for Neuronal Morphology Classification

Provisionally accepted
  • University of Science and Technology of China, Hefei, Anhui Province, China

The final, formatted version of the article will be published soon.

Analyzing neuron types based on morphological characteristics is pivotal for understanding brain function and human development. Existing analysis approaches based on 2D view images make full use of complementary information across images. However, during the fusion process these methods ignore both the redundant information introduced by similar images and the differing influence of individual views on the analysis results. Considering these factors, this paper proposes a Multi-Gate Weighted Fusion Network (MWFNet) to characterize neuronal morphology in a hierarchical manner. MWFNet mainly consists of a Gated View Enhancement Module (GVEM) and a Gated View Measurement Module (GVMM). GVEM enhances view-level descriptors and eliminates redundant information by mining the relationships among different views. GVMM calculates the weight of each view image based on its salient activated regions to assess its influence on the analysis results. The enhanced view-level features are then fused differentially according to the view weights to generate a more discriminative instance-level descriptor. In this way, the proposed MWFNet not only eliminates unnecessary features but also maps the representational differences among views into the decision-making process, improving the accuracy and robustness of neuron type identification. Experimental results show that our method achieves accuracies of 91.73% and 98.18% when classifying 10 and 5 neuron types, respectively, outperforming other state-of-the-art methods.
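The weighted fusion step described in the abstract can be illustrated with a minimal sketch: each view-level descriptor receives a weight derived from a per-view saliency score, and the weighted descriptors are summed into a single instance-level descriptor. The saliency scores and softmax weighting below are illustrative assumptions, not the paper's exact GVMM formulation.

```python
import numpy as np

def weighted_view_fusion(view_features, view_scores):
    """Fuse per-view descriptors into one instance-level descriptor.

    view_features: (n_views, dim) array of enhanced view-level descriptors.
    view_scores:   (n_views,) saliency score per view; a hypothetical stand-in
                   for the activation-based view weighting in the paper.
    """
    # Normalize scores into weights that sum to 1 (numerically stable softmax).
    exp = np.exp(view_scores - view_scores.max())
    weights = exp / exp.sum()
    # Weighted sum over views yields the instance-level descriptor.
    return weights @ view_features, weights

# Toy example: 3 views with 4-dimensional descriptors.
feats = np.array([[1., 0., 0., 0.],
                  [0., 1., 0., 0.],
                  [0., 0., 1., 0.]])
scores = np.array([2.0, 1.0, 0.5])   # view 0 is most salient
descriptor, w = weighted_view_fusion(feats, scores)
```

In this toy setup the most salient view dominates the fused descriptor, mirroring the idea that views contribute to the decision in proportion to their estimated importance.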

    Keywords: Weighted fusion, Hierarchical descriptors, Morphological representation, Multiple Views, Neuronal morphology analysis

    Received: 16 Oct 2023; Accepted: 21 Oct 2024.

    Copyright: © 2024 Sun and Zhao. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence: Feng Zhao, University of Science and Technology of China, Hefei, 230026, Anhui Province, China

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.