ORIGINAL RESEARCH article

Front. Comput. Neurosci.

Volume 19 - 2025 | doi: 10.3389/fncom.2025.1545971

NeuroFusionNet: Cross-modal Modeling from Brain Activity to Visual Understanding

Provisionally accepted
Kehan Lang 1*, Jianwei Fang 2, Guangyao Su 2
  • 1 Nankai University, Tianjin, China
  • 2 China Unicom Software Research Institute, Beijing, China

The final, formatted version of the article will be published soon.

    In recent years, the integration of machine vision and neuroscience has provided a new perspective for deeply understanding visual information. This paper proposes an innovative deep learning model, NeuroFusionNet, designed to enhance the understanding of visual information by integrating fMRI signals with image features. Specifically, images are processed by a visual model to extract region-of-interest (ROI) features and contextual information, which are then encoded through fully connected layers. The fMRI signals are passed through 1D convolutional layers to extract features, effectively preserving spatial information and improving computational efficiency. Subsequently, the fMRI features are embedded into a 3D voxel representation to capture the brain's activity patterns in both spatial and temporal dimensions. To accurately model the brain's response to visual stimuli, this paper introduces a Multi-scale fMRI Timeformer module, which processes fMRI signals at different scales to extract both fine details and global responses. To further optimize the model's performance, we introduce a novel loss function called the fMRI-guided loss. Experimental results show that NeuroFusionNet effectively integrates image and brain activity information, providing more precise and richer visual representations for machine vision systems, with broad potential applications.
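    The abstract outlines the pipeline but the formatted article is not yet available. The following is a minimal PyTorch-style sketch of the described flow (image features encoded by fully connected layers, fMRI encoded by 1D convolutions, multi-scale Timeformer, cross-modal fusion). All module names, dimensions, the subsampling-based multi-scale scheme, and the concatenation fusion are assumptions made for illustration, not the authors' implementation.

    # Hypothetical sketch of the NeuroFusionNet pipeline described in the abstract.
    # Dimensions, layer choices, and the fusion strategy are illustrative assumptions.
    import torch
    import torch.nn as nn

    class FMRIEncoder(nn.Module):
        """1D-convolutional encoder for a flattened fMRI frame (assumed layout)."""
        def __init__(self, in_channels=1, hidden=64, out_dim=256):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv1d(in_channels, hidden, kernel_size=7, stride=2, padding=3),
                nn.ReLU(),
                nn.Conv1d(hidden, hidden, kernel_size=5, stride=2, padding=2),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
            self.proj = nn.Linear(hidden, out_dim)

        def forward(self, x):                 # x: (batch, 1, n_voxels)
            h = self.conv(x).squeeze(-1)      # (batch, hidden)
            return self.proj(h)               # (batch, out_dim)

    class MultiScaleTimeformer(nn.Module):
        """Transformer encoders over temporally subsampled copies of the fMRI sequence."""
        def __init__(self, dim=256, scales=(1, 2, 4), heads=4):
            super().__init__()
            self.scales = scales
            self.blocks = nn.ModuleList([
                nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
                for _ in scales
            ])

        def forward(self, seq):               # seq: (batch, time, dim)
            outs = []
            for s, block in zip(self.scales, self.blocks):
                pooled = seq[:, ::s, :]       # coarser temporal scale by subsampling
                outs.append(block(pooled).mean(dim=1))
            return torch.stack(outs, dim=0).mean(dim=0)   # fused multi-scale feature

    class NeuroFusionNet(nn.Module):
        """Fuses image ROI/context features with fMRI features (concatenation fusion here)."""
        def __init__(self, img_feat_dim=2048, fmri_dim=256, fused_dim=512):
            super().__init__()
            self.img_fc = nn.Linear(img_feat_dim, fused_dim)     # FC encoding of image features
            self.fmri_encoder = FMRIEncoder(out_dim=fmri_dim)
            self.timeformer = MultiScaleTimeformer(dim=fmri_dim)
            self.head = nn.Linear(fused_dim + fmri_dim, fused_dim)

        def forward(self, img_feats, fmri_seq):
            # img_feats: (batch, img_feat_dim) pooled ROI/context features from a visual backbone
            # fmri_seq:  (batch, time, 1, n_voxels) fMRI frames
            b, t = fmri_seq.shape[:2]
            frames = self.fmri_encoder(fmri_seq.flatten(0, 1)).view(b, t, -1)
            fmri_feat = self.timeformer(frames)
            img_feat = self.img_fc(img_feats)
            return self.head(torch.cat([img_feat, fmri_feat], dim=-1))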

    Keywords: cognitive computing, neuroscience, deep learning, machine vision, cross-modal fusion

    Received: 16 Dec 2024; Accepted: 05 Mar 2025.

    Copyright: © 2025 Lang, Fang and Su. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence: Kehan Lang, Nankai University, Tianjin, China

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
