HYPOTHESIS AND THEORY article

Front. Netw. Physiol.

Sec. Networks of Dynamical Systems

Volume 5 - 2025 | doi: 10.3389/fnetp.2025.1521963

This article is part of the Research Topic Self-Organization of Complex Physiological Networks: Synergetic Principles and Applications — In Memory of Hermann Haken

Renormalising generative models

Provisionally accepted
Karl Friston 1,2*, Conor Heins 2, Tim Verbelen 2, Lancelot Da Costa 2, Tommaso Salvatori 2, Dimitrije Marković 3, Alexander Tschantz 2, Magnus Koudahl 2, Christopher Buckley 2,4, Thomas Parr 5
  • 1 University College London, London, United Kingdom
  • 2 VERSES Research Lab, Los Angeles, United States
  • 3 Technical University Dresden, Dresden, Saxony, Germany
  • 4 University of Sussex, Brighton, West Sussex, United Kingdom
  • 5 Nuffield Department of Clinical Neurosciences, Medical Sciences Division, University of Oxford, Oxford, England, United Kingdom

The final, formatted version of the article will be published soon.

    This paper describes a discrete state-space model, and accompanying methods, for generative modelling. This model generalises partially observed Markov decision processes to include paths as latent variables, rendering it suitable for active inference and learning in a dynamic setting. Specifically, we consider deep or hierarchical forms based upon the renormalisation group. The ensuing renormalising generative models (RGMs) can be regarded as discrete homologues of deep convolutional neural networks, or of continuous state-space models in generalised coordinates of motion. By construction, these scale-invariant models can be used to learn compositionality over space and time, furnishing models of paths or orbits; i.e., events of increasing temporal depth and itinerancy. This technical note illustrates the automatic discovery, learning and deployment of RGMs using a series of applications. We start with image classification, and then consider the compression and generation of movies and music. Finally, we apply the same variational principles to the learning of Atari-like games.
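    To make the two key ideas in the abstract concrete, the sketch below is a minimal, hypothetical illustration (not the authors' implementation) of (i) a discrete generative model in which a latent path variable selects among transition dynamics, and (ii) a renormalising (coarse-graining) step that blocks pairs of consecutive lower-level states into single higher-level states, halving temporal resolution. All names (`A`, `B`, `rollout`, `coarse_grain`) and the identity likelihood are illustrative assumptions.

    ```python
    import numpy as np

    # Hypothetical discrete model: 4 hidden states, 4 outcomes, 2 paths.
    n_states, n_obs, n_paths = 4, 4, 2

    # Likelihood A[o, s]: an identity mapping, for simplicity.
    A = np.eye(n_obs, n_states)

    # Transitions B[u][s', s]: each path u indexes a deterministic orbit.
    B = np.stack([
        np.roll(np.eye(n_states), 1, axis=0),   # path 0: cycle forward
        np.roll(np.eye(n_states), -1, axis=0),  # path 1: cycle backward
    ])

    def rollout(s0, u, T):
        """Generate T outcomes under path u, starting from state s0."""
        s = np.eye(n_states)[s0]
        outcomes = []
        for _ in range(T):
            outcomes.append(int(np.argmax(A @ s)))
            s = B[u] @ s
        return outcomes

    def coarse_grain(seq):
        """Renormalising step: block consecutive pairs of lower-level
        states into single higher-level states (coarse-graining in time)."""
        pairs = list(zip(seq[::2], seq[1::2]))
        alphabet = sorted(set(pairs))
        return [alphabet.index(p) for p in pairs], alphabet

    low = rollout(s0=0, u=0, T=8)          # [0, 1, 2, 3, 0, 1, 2, 3]
    high, alphabet = coarse_grain(low)     # [0, 1, 0, 1]
    ```

    Note that the coarse-grained sequence is itself a (shorter) orbit over a (smaller) alphabet, so the same blocking operator can be applied recursively; this scale invariance is what licenses the analogy with the renormalisation group and with the pooling hierarchy of convolutional networks.
    
    
    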

    Keywords: active inference, active learning, Bayesian model selection, renormalisation group, compression, structure learning, network physiology

    Received: 03 Nov 2024; Accepted: 02 Apr 2025.

    Copyright: © 2025 Friston, Heins, Verbelen, Da Costa, Salvatori, Marković, Tschantz, Koudahl, Buckley and Parr. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence: Karl Friston, University College London, London, United Kingdom

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
