
EDITORIAL article

Front. Comput. Neurosci.
Volume 18 - 2024 | doi: 10.3389/fncom.2024.1527725
This article is part of the Research Topic Symmetry as a Guiding Principle in Artificial and Brain Neural Networks, Volume II

Editorial: "Symmetry as a Guiding Principle in Artificial and Brain Neural Networks Volume II"

Fabio Anselmi 1* and Ankit B. Patel 2
  • 1 Department of Mathematics and Geosciences, University of Trieste, Trieste, Italy
  • 2 Baylor College of Medicine, Houston, Texas, United States


    The first contribution, Time as a Supervisor: Temporal Regularity and Auditory Object Learning, delves into how the brain may use time as a supervisory signal to learn auditory features. Exploring the natural regularities and symmetries of the auditory domain, the authors propose that temporal consistency is key to learning auditory object representations, particularly in cluttered environments. The work demonstrates that models capturing these temporal regularities outperform conventional feature-selection algorithms such as principal component analysis (PCA) and independent component analysis (ICA) in auditory discrimination tasks. The implications for both neuroscience and machine learning are profound, suggesting that the temporal structure of stimuli provides an essential basis for efficient sensory processing and generalization.

    Vision is another sensory modality in which symmetry plays a pivotal role. The article Covariance Properties under Natural Image Transformations for the Generalized Gaussian Derivative Model for Visual Receptive Fields presents a theoretical framework for understanding the geometric properties of visual receptive fields in the brain. Covariance, or equivariance, ensures that a transformation of the sensory input results in a corresponding transformation of the neural representation. The study of these properties reveals how visual receptive fields in the primary visual cortex (V1) are tuned to transformations such as spatial and temporal scaling. The authors argue that these symmetry properties provide the biological visual system with robustness to the variations encountered in natural environments, opening new avenues for biologically inspired machine vision systems that mimic these neural mechanisms.

    Symmetry also governs higher-level perceptual tasks, such as three-dimensional (3D) shape reconstruction from visual input. Monocular Reconstruction of Shapes of Natural Objects from Orthographic and Perspective Images explores how the human visual system uses symmetry, specifically mirror symmetry and 3D compactness, to reconstruct 3D shapes from two-dimensional images. The study's computational models, validated against human performance in psychophysical tasks, suggest that mirror symmetry plays a crucial role in enabling accurate 3D perception. This work highlights the importance of symmetry as a computational principle in both biological and artificial vision systems and contributes to our understanding of the neural mechanisms underlying 3D object recognition.

    The last two papers delve into how symmetries relate to artificial neural network representations and their robustness. Specifically, the article A Topological Model for Partial Equivariance in Deep Learning and Data Analysis introduces a novel class of operators, called P-GENEOs, that enable the encoding of partial equivariance in neural networks. This approach allows networks to respect specific symmetry transformations without requiring full equivariance. By formalizing these operators using topological spaces and pseudo-metrics, the authors provide a new perspective on how neural networks can achieve flexible and efficient learning of symmetries. This theoretical advance has potential applications in a variety of domains, from computer vision to the analysis of complex biological data.
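    In schematic terms (the notation below is ours and is not taken verbatim from the contributed papers), a representation Φ is equivariant with respect to a group G of input transformations when each g in G has a corresponding transformation g′ acting on the output such that

        Φ(g · x) = g′ · Φ(x)   for every input x and every g in G,

    with invariance as the special case in which g′ is the identity. The partial equivariance encoded by P-GENEOs can, roughly speaking, be read as requiring this relation only for a designated subset of transformations rather than for the full group.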
The final contribution in this collection, Translational Symmetry in Convolutions with Localized Kernels Causes an Implicit Bias toward High-Frequency Adversarial Examples, explores the relationship between symmetry in convolutional operations and adversarial vulnerability in neural networks. The authors propose that the translational symmetry of convolutional layers biases networks toward learning high-frequency features, making them more susceptible to adversarial attacks. By analyzing the impact of convolutional kernel size and architectural choices on adversarial robustness, the study demonstrates that either modifying this symmetry with larger kernels or breaking it by adopting non-convolutional architectures such as vision transformers reduces this vulnerability. The work underscores the critical role of implicit architectural biases in the generalization and robustness of neural networks.

Altogether, the contributions in this Research Topic highlight that symmetry not only facilitates efficient information processing in natural and artificial networks by reducing the complexity of sensory input, but also plays a crucial role in robustness to adversarial perturbations. Thus, by understanding how neural networks encode, learn, and exploit symmetries, we can gain deeper insights into the principles governing intelligence. We hope that this collection will inspire further research into the interplay between symmetry, learning, and neural representations, ultimately advancing our understanding of both natural and artificial intelligence.
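As a side note for readers who want a concrete handle on the translational symmetry examined in the final contribution, the short NumPy sketch below (our illustration, not code from the cited paper; the helper circular_conv and all parameter values are ours) checks numerically that a circular one-dimensional convolution commutes with translations of its input:

    # Minimal, self-contained sketch: a circular 1-D convolution commutes with
    # circular translations of its input (translation equivariance).
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(64)   # input signal
    k = rng.standard_normal(5)    # localized convolution kernel
    shift = 7                     # translation amount (circular)

    def circular_conv(signal, kernel):
        """Circular convolution computed via the FFT (periodic boundaries)."""
        n = len(signal)
        kernel_padded = np.zeros(n)
        kernel_padded[:len(kernel)] = kernel
        return np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel_padded)))

    # Translate-then-convolve equals convolve-then-translate:
    out_a = circular_conv(np.roll(x, shift), k)
    out_b = np.roll(circular_conv(x, k), shift)
    print(np.allclose(out_a, out_b))   # prints True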

    Keywords: Symmetry, machine learning (ML), computational neuroscience, implicit bias, neural networks

    Received: 13 Nov 2024; Accepted: 25 Nov 2024.

    Copyright: © 2024 Anselmi and Patel. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence: Fabio Anselmi, Department of Mathematics and Geosciences, University of Trieste, Trieste, Italy

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.