
ORIGINAL RESEARCH article

Front. Neural Circuits

Volume 19 - 2025 | doi: 10.3389/fncir.2025.1528309

This article is part of the Research Topic Neuro-inspired computation.

Sparse connectivity enables efficient information processing in cortex-like artificial neural networks

Provisionally accepted
  • 1 In Silico Brain Sciences Group, Max Planck Institute for Neurobiology of Behaviour - caesar, Bonn, Germany
  • 2 International Max Planck Research School for Brain and Behavior, Bonn, North Rhine-Westphalia, Germany
  • 3 Department of Integrative Neurophysiology, Center for Neurogenomics and Cognitive Research, Faculty of Science, VU Amsterdam, Amsterdam, Netherlands

The final, formatted version of the article will be published soon.

    Neurons in cortical networks are very sparsely connected; even neurons whose axons and dendrites overlap are highly unlikely to form a synaptic connection. What is the relevance of such sparse connectivity for a network's function? Surprisingly, it has been shown that sparse connectivity impairs information processing in artificial neural networks (ANNs). Does this imply that sparse connectivity also impairs information processing in biological neural networks? Although ANNs were originally inspired by the brain, conventional ANNs differ substantially in their structural network architecture from cortical networks. To disentangle the relevance of these structural properties for information processing in networks, we systematically constructed ANNs constrained by interpretable features of cortical networks. We find that in large and recurrently connected networks, as are found in the cortex, sparse connectivity facilitates time- and data-efficient information processing. We explore the origins of these surprising findings and show that conventional dense ANNs distribute information across only a very small fraction of nodes, whereas sparse ANNs distribute information across more nodes. We show that sparsity is most critical in networks with fixed excitatory and inhibitory nodes, mirroring neuronal cell types in cortex. This constraint causes a large learning delay in densely connected networks, which is eliminated by sparse connectivity. Taken together, our findings show that sparse connectivity enables efficient information processing given key constraints from cortical networks, setting the stage for further investigation into higher-order features of cortical connectivity.
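    The two cortical constraints described above — sparse connectivity and fixed excitatory/inhibitory node identities — can be illustrated with a minimal sketch. The code below is an assumption-laden toy (the function name, parameter values, and weight distribution are illustrative, not taken from the article): it builds a recurrent weight matrix in which each possible connection exists with a fixed probability, self-connections are excluded, and every node is assigned a fixed sign, so that all outgoing weights of an excitatory node are non-negative and all outgoing weights of an inhibitory node are non-positive.

    ```python
    import numpy as np

    def make_sparse_ei_weights(n=100, sparsity=0.1, frac_exc=0.8, seed=0):
        """Recurrent weight matrix with fixed connection probability and
        fixed excitatory/inhibitory node identities.

        Convention: W[i, j] is the weight from presynaptic node j to
        postsynaptic node i, so node j's sign is applied to column j.
        """
        rng = np.random.default_rng(seed)
        # Binary connectivity mask: each connection exists with prob. `sparsity`.
        mask = rng.random((n, n)) < sparsity
        np.fill_diagonal(mask, False)  # no self-connections
        # Positive magnitudes for the connections that exist.
        w = rng.random((n, n)) * mask
        # Fix node types: the first frac_exc*n nodes are excitatory (+1),
        # the remainder inhibitory (-1); signs never change during training.
        n_exc = int(frac_exc * n)
        signs = np.ones(n)
        signs[n_exc:] = -1.0
        return w * signs  # broadcasts signs over columns (presynaptic nodes)
    ```

    Enforcing the sign per presynaptic node (rather than per connection) is what mirrors neuronal cell types: a node is either excitatory or inhibitory for all of its targets, and training may change weight magnitudes but not signs.
    
    
    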

    Keywords: connectivity, structure-function, cortex, artificial neural networks, recurrent, sparse

    Received: 14 Nov 2024; Accepted: 03 Mar 2025.

    Copyright: © 2025 Fruengel and Oberlaender. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence:
    Rieke Fruengel, In Silico Brain Sciences Group, Max Planck Institute for Neurobiology of Behaviour - caesar, Bonn, Germany
    Marcel Oberlaender, In Silico Brain Sciences Group, Max Planck Institute for Neurobiology of Behaviour - caesar, Bonn, Germany

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
