
EDITORIAL article
Front. Comput. Neurosci., 04 March 2025
Volume 19 - 2025 | https://doi.org/10.3389/fncom.2025.1553207
This article is part of the Research Topic Brain-Inspired Intelligence: the Deep Integration of Brain Science and Artificial Intelligence.
Editorial on the Research Topic
Brain-inspired intelligence: the deep integration of brain science and artificial intelligence
The emergence of consciousness and biological behavior from neural activity represents one of the most profound and challenging questions in neuroscience (Bullmore and Sporns, 2009; Latora et al., 2017). Answering it is a cornerstone of understanding brain function, and it also holds transformative potential for advancing the diagnosis and treatment of mental disorders and for the development of brain-inspired artificial general intelligence. The brain, composed of an extraordinary number of neurons with diverse morphologies and functions, forms a labyrinth of intricate structural and functional connections (Yuan et al., 2019). Deciphering the neural circuit principles that underpin cognitive functions remains a formidable scientific challenge. Thus far, extensive efforts have been devoted to unraveling how neural activity orchestrates the emergence of consciousness and governs behavior, and how the brain's structural architecture supports its extraordinary complexity—spanning scales from brain regions to individual neurons and synapses.
The medial prefrontal cortex (mPFC) plays a crucial role in behaviors involving working memory, such as planning and decision-making, but the complexity of its neural processes remains difficult to capture with current experimental designs. Studies using rodent and primate models, particularly in T-maze tasks, have highlighted the statistical limitations of existing methods, including the inability to fully leverage neuronal spike sequences and local field potentials (LFPs) for understanding neural synchrony and its behavioral relevance. Unlike the evolutionarily older visual cortex, which benefits from spatially organized and robust electrical signals, the mPFC lacks such spatial regularity, resulting in weaker signals and necessitating invasive and highly sensitive electrophysiological techniques that remain limited in scale. Recent advances, such as the use of dynamic time warping, offer potential for capturing neural synchrony, a key feature of mPFC function, but are constrained by the inadequacy of current datasets and tools. Future progress will require larger, higher-resolution datasets, innovative experimental approaches, and interdisciplinary integration of computational modeling to address these challenges and advance our understanding of how the mPFC supports complex cognitive and behavioral processes.
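Dynamic time warping, mentioned above as a promising tool for capturing neural synchrony, aligns two signals by warping their time axes so that similar shapes match even when shifted or stretched. As a minimal illustration (the toy firing-rate traces and variable names below are hypothetical, not taken from any study in this Research Topic):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D signals.

    Aligns the signals by warping the time axis, so a temporal lag
    between otherwise similar traces incurs little extra cost.
    """
    n, m = len(a), len(b)
    # cost[i, j] = minimal cumulative cost of aligning a[:i] with b[:j]
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # stretch a
                                 cost[i, j - 1],      # stretch b
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Two firing-rate traces with the same shape but a temporal lag
t = np.linspace(0, 2 * np.pi, 50)
rate_a = np.sin(t)
rate_b = np.sin(t - 0.5)  # lagged copy of rate_a
print(dtw_distance(rate_a, rate_b))
```

Because the warping path may follow the diagonal, the DTW cost is never larger than the point-by-point L1 mismatch, which is why it is better suited to detecting synchrony between lagged spike-rate traces than a rigid sample-by-sample comparison.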
The visual cortex, one of the earliest areas of the brain to be studied, has played a key role in understanding visual processing mechanisms, particularly in the application of visual tasks in deep learning (Grill-Spector and Malach, 2004; Yuan et al., 2023). Through the study of simple and complex cells in the visual cortex, scientists have revealed how the brain processes different levels of visual information, which has driven the development of deep learning models such as convolutional neural networks (CNNs) (Górriz et al., 2023). By simulating the processing methods of the visual cortex, these models have achieved breakthrough advances in tasks such as image classification and object recognition. Research on the visual cortex not only provides deep insights into neuroscience but also offers valuable inspiration for the design of visual systems in artificial intelligence. Despite significant progress in artificial vision, notable gaps remain between biological and artificial systems. First, CNNs reduce input data dimensionality through pooling, which contrasts with the increase in neurons and synapses seen in the brain's visual hierarchy (e.g., V1–V4). This discrepancy led Barlow to revise his redundancy reduction hypothesis into the redundancy exploitation hypothesis. Second, CNNs lack generalizability, especially beyond direct experiences, a limitation tied to the binding problem and consciousness; achieving human-level generalization requires a compositional approach to AI. Finally, the brain's ability to manage learning and memory with trillions of synapses while consuming very little power remains an unresolved mystery.
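The pooling contrast can be made concrete with a minimal NumPy sketch of 2×2 max pooling (the toy feature map is hypothetical): each pooling stage keeps only the strongest response per patch and discards 75% of the units, the opposite of the neuron-count expansion seen along V1–V4.

```python
import numpy as np

def max_pool2d(x, k=2):
    """k-by-k max pooling: keep the largest response in each patch."""
    h, w = x.shape
    h2, w2 = h // k, w // k
    # Reshape so each k-by-k patch occupies its own axes, then reduce
    return x[:h2 * k, :w2 * k].reshape(h2, k, w2, k).max(axis=(1, 3))

fmap = np.arange(16.0).reshape(4, 4)   # a toy 4x4 feature map
pooled = max_pool2d(fmap)
print(fmap.shape, "->", pooled.shape)  # (4, 4) -> (2, 2)
```

The 4×4 input collapses to 2×2: dimensionality shrinks at every stage, whereas the cortical hierarchy recodes the signal into progressively *more* units, which is what motivated Barlow's shift from redundancy reduction to redundancy exploitation.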
Traditional computational neuroscience relies on hypothesis-driven, hand-engineered models, which struggle to capture the complexity of closed-loop perception-action systems. Conversely, goal-driven deep learning autonomously learns complex tasks, providing new insights into neurocomputational mechanisms. AngoraPy addresses challenges in training recurrent convolutional neural networks (RCNNs) on sensorimotor tasks, leveraging reinforcement learning (RL) to bypass the need for costly labeled data. It supports customizable architectures and tasks, enabling efficient, large-scale training with anthropomorphic sensory inputs and motor outputs. Benchmarks showcase its flexibility across diverse robotic and control tasks, making it a valuable tool for advancing sensorimotor neuroscience.
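AngoraPy's own API is not reproduced here; as a generic illustration of the goal-driven idea it builds on — learning from a scalar reward rather than costly labeled data — the following hypothetical REINFORCE sketch trains a softmax policy on a toy two-armed bandit (all names and parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Goal-driven learning on a toy two-armed bandit: the agent receives
# only a scalar reward, never a labeled target.
true_means = np.array([0.2, 0.8])   # arm 1 pays more on average

logits = np.zeros(2)                # policy parameters
baseline, lr = 0.0, 0.1
for _ in range(2000):
    p = np.exp(logits) / np.exp(logits).sum()   # softmax policy
    a = rng.choice(2, p=p)                      # sample an action
    r = rng.normal(true_means[a], 0.1)          # reward signal only
    baseline += 0.01 * (r - baseline)           # running reward average
    grad = -p
    grad[a] += 1.0                              # d log p(a) / d logits
    logits += lr * (r - baseline) * grad        # REINFORCE update
print(np.argmax(logits))
```

The policy discovers the better arm from reward alone; scaling this principle to recurrent convolutional networks with anthropomorphic sensory inputs and motor outputs is precisely the engineering challenge AngoraPy addresses.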
A deeper understanding of neural mechanisms has significantly advanced the development of brain-inspired artificial intelligence technologies. For example, by integrating neuroscience insights, the researchers developed a spatial cognition model inspired by the hippocampus's structure and function, particularly the cornu ammonis (CA) subregions. Leveraging brain reference architecture (BRA)-driven development, the model combines Monte Carlo localization (MCL) for allocentric mapping and a recurrent state-space model (RSSM) for learning egocentric state representations from sensory inputs. Simulations demonstrated improved self-localization during teleportation, with sparse neural activity in CA3-mimicking latent variables supporting adaptability. This approach highlights the potential of brain-inspired architectures to enhance robotic self-localization, emphasizing structural-functional consistency in adapting to unanticipated changes.
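The Monte Carlo localization component mentioned above is a particle filter over robot pose. A minimal 1-D sketch (the world size, noise levels, and variable names are hypothetical, not those of the model described) shows the predict-update-resample cycle that lets a uniform prior collapse onto the true position:

```python
import numpy as np

rng = np.random.default_rng(0)

def mcl_step(particles, control, measurement, world_len=10.0, noise=0.5):
    """One predict-update-resample cycle of 1-D Monte Carlo localization."""
    # Predict: move each particle by the commanded displacement plus motion noise
    particles = (particles + control
                 + rng.normal(0, 0.1, particles.size)) % world_len
    # Update: weight each particle by how well it explains the measurement
    weights = np.exp(-0.5 * ((particles - measurement) / noise) ** 2)
    weights /= weights.sum()
    # Resample: concentrate particles where the posterior weight is high
    return particles[rng.choice(particles.size, particles.size, p=weights)]

particles = rng.uniform(0, 10.0, 500)   # uniform prior over position
true_pos = 2.0
for _ in range(10):                     # robot moves 0.3 units per step
    true_pos = (true_pos + 0.3) % 10.0
    measurement = true_pos + rng.normal(0, 0.2)
    particles = mcl_step(particles, 0.3, measurement)
print(round(particles.mean(), 2))       # particle cloud near true_pos
```

In the hippocampus-inspired model, this allocentric estimate is combined with egocentric state representations learned by an RSSM, so that abrupt relocations ("teleportation") can still be resolved from sensory input.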
It is foreseeable that neuroscience and artificial intelligence will become increasingly intertwined, with their mutual reinforcement emerging as a central theme of brain science, both now and in the future.
YY: Writing – original draft, Writing – review & editing. XC: Project administration, Writing – review & editing. JL: Project administration, Supervision, Writing – review & editing.
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Bullmore, E., and Sporns, O. (2009). Complex brain networks: graph theoretical analysis of structural and functional systems. Nat. Rev. Neurosci. 10, 186–198. doi: 10.1038/nrn2575
Górriz, J. M., Álvarez-Illán, I., Álvarez-Marquina, A., Arco, J. E., Atzmueller, M., Ballarini, F., et al. (2023). Computational approaches to explainable artificial intelligence: advances in theory, applications and trends. Inf. Fusion 100:101945. doi: 10.1016/j.inffus.2023.101945
Grill-Spector, K., and Malach, R. (2004). The human visual cortex. Annu. Rev. Neurosci. 27, 649–677. doi: 10.1146/annurev.neuro.27.070203.144220
Latora, V., Nicosia, V., and Russo, G. (2017). Complex Networks: Principles, Methods and Applications. Cambridge: Cambridge University Press.
Yuan, Y., Liu, J., Zhao, P., Xing, F., Huo, H., Fang, T., et al. (2019). Structural insights into the dynamic evolution of neuronal networks as synaptic density decreases. Front. Neurosci. 13:892. doi: 10.3389/fnins.2019.00892
Keywords: brain-inspired intelligence, brain science, artificial intelligence, brain cortex, neural mechanism
Citation: Yuan Y, Chen X and Liu J (2025) Editorial: Brain-inspired intelligence: the deep integration of brain science and artificial intelligence. Front. Comput. Neurosci. 19:1553207. doi: 10.3389/fncom.2025.1553207
Received: 30 December 2024; Accepted: 19 February 2025;
Published: 04 March 2025.
Edited and reviewed by: Si Wu, Peking University, China
Copyright © 2025 Yuan, Chen and Liu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Jian Liu, liujian92@usst.edu.cn