
ORIGINAL RESEARCH article
Front. Netw. Physiol., 21 March 2025
Sec. Networks of Dynamical Systems
Volume 5 - 2025 | https://doi.org/10.3389/fnetp.2025.1539166
This article is part of the Research Topic Self-Organization of Complex Physiological Networks: Synergetic Principles and Applications — In Memory of Hermann Haken
A version of the speed-gradient evolution models for systems obeying the maximum information entropy principle, developed by H. Haken in his 1988 book, is proposed in this article. An explicit relation specifying the system dynamics under general linear constraints is established. Two versions of a model of detailed balance breaking for human brain entropy are proposed. In addition, the contours of a new scientific field called cybernetical neuroscience, dedicated to the control of neural systems, are outlined.
In Haken (1988), the eminent scientist Hermann Haken explored the interplay between the concepts of information and self-organization. He took a significant step toward broadening the applicability of the Gibbs–Jaynes principle of maximum entropy (Jaynes, 1957). Specifically, Haken incorporated functions that act as order parameters in nonequilibrium phase transitions into the set of additional constraints. In Chapter 3 of the book, he presents a modified version of the maximum entropy principle. Haken’s adaptation of this principle involves seeking a new future state of the system that maximizes information while adhering to physical conditions describing the system’s physical properties.
Let the elements of some system, for example, molecules of an ideal gas, stay in one of $N$ possible states with probabilities

$p_i \ge 0, \quad i = 1, \dots, N,$

where $p_i$ is the probability of finding an element in the $i$-th state. The physical conditions act as constraints; for example, the position of the center of masses may be given:

$\sum_{i=1}^{N} p_i x_i = \bar{x},$

where $x_i$ is the coordinate corresponding to the $i$-th state and $\bar{x}$ is the prescribed center-of-mass position, and so on. In many important cases, all the constraints specify the values of averaged quantities:

$\sum_{i=1}^{N} p_i f_k(i) = F_k, \quad k = 1, \dots, m,$

where $f_k$ are given functions of the state and $F_k$ are the prescribed values. Additionally, the normalization constraint for the distribution should be added:

$\sum_{i=1}^{N} p_i = 1.$
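To illustrate the principle numerically, the following sketch (our illustration, not code from the original article) computes the maximum-entropy distribution for Jaynes' classical dice example: six states with one linear constraint fixing the mean. It assumes NumPy and SciPy; all names and numbers are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Jaynes' dice example: N = 6 states, one linear constraint fixing the mean.
N = 6
f = np.arange(1, N + 1, dtype=float)   # constraint function f(i) = i
F = 4.5                                # prescribed mean value sum_i p_i f(i)

def neg_entropy(p):
    return np.sum(p * np.log(p))       # minimize -S(p) = sum_i p_i ln p_i

constraints = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},  # normalization
    {"type": "eq", "fun": lambda p: p @ f - F},      # linear mean constraint
]
res = minimize(neg_entropy, x0=np.full(N, 1.0 / N), method="SLSQP",
               bounds=[(1e-9, 1.0)] * N, constraints=constraints)
print(np.round(res.x, 4))  # Gibbs form p_i ~ exp(-lam * f(i)), biased to high faces
```

The optimizer recovers the exponential-family solution that the Gibbs–Jaynes Lagrange-multiplier analysis predicts.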
Several examples in Haken (1988) illustrate the form of the system’s state distribution that achieves maximum information, as well as the state corresponding to the self-organization of the system. However, neither in Haken’s book nor in any other known works does the question arise regarding how a system evolves to attain the state of maximum information entropy (or self-organized state).
The question “How does a system evolve when striving for a state of maximum entropy?” was first addressed in the works by Fradkov (2007) and Fradkov (2008) and further investigated in a series of subsequent publications. It was hypothesized that this evolution aligns with the speed-gradient principle, originally developed within control systems theory, as seen in Fradkov (1979), Fradkov (1991), and Fradkov et al. (1999). According to this hypothesis, the evolution process can be understood if the system aims to maximize a certain functional. If the system seeks to achieve the optimal state, it should logically strive to do so in the most efficient manner.
Indeed, if the objective is to increase the value of a given target functional, the quickest path to achieving this goal would be to follow the direction of the speed gradient: the gradient of the rate at which this functional changes. Subsequent studies have examined the application of the speed-gradient principle to different types of entropies, including Shannon, Rényi, Tsallis, relative entropy, and Kullback–Leibler divergence (Fradkov, 2008; Fradkov and Shalymov, 2014; Fradkov and Shalymov, 2015; Shalymov et al., 2017). The evolution of distributed systems governed by the law of maximum differential entropy was also analyzed by Fradkov and Shalymov (2015a). In each instance, it was demonstrated and mathematically verified that the trajectories of systems evolving according to the speed-gradient principle converge to a state of maximum entropy that is asymptotically stable.
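As a minimal illustration of this idea (a sketch under our own assumptions, not code from the cited works), consider Shannon entropy with only the normalization constraint. The speed gradient of $\dot{S}$ with respect to the rates $u_i$ is $-(\ln p_i + 1)$; projecting it onto the hyperplane $\sum_i u_i = 0$ conserves total probability, and the trajectory converges to the uniform, maximum-entropy distribution:

```python
import numpy as np

def speed_gradient_step(p, gamma=1.0, dt=1e-3):
    """One Euler step of the speed-gradient law for Shannon entropy."""
    g = -(np.log(p) + 1.0)         # speed gradient of dS/dt with respect to u
    u = gamma * (g - g.mean())     # projection so that sum(u) = 0 (mass conserved)
    return p + dt * u

rng = np.random.default_rng(0)
p = rng.random(5)
p /= p.sum()                       # random initial distribution
for _ in range(5000):
    p = speed_gradient_step(p)
print(np.round(p, 4))              # approaches the uniform distribution [0.2 ... 0.2]
```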
Herein, we show that analogous principles and laws govern the transition to the state of maximum information entropy described in Hermann Haken’s works. We propose an explicit form for the speed-gradient evolution of systems characterizing the dynamics of information entropy in the presence of multiple linear constraints, as discussed in Section 3.2 of Haken (1988). This case generalizes previous considerations involving constraints related to the conservation of mass and energy.
Consider the problem of finding the system dynamics equations in the form

$\dot{p}(t) = u(t),$

where $p(t) = \mathrm{col}(p_1(t), \dots, p_N(t))$ is the vector of state probabilities1 and $u(t)$ is the vector of their rates of change, which is to be determined. Consider the formalism developed in Chapter 3 of Haken’s book as an interpretation of Jaynes’ maximum entropy principle for discrete systems possessing information entropy

$S(p) = -\sum_{i=1}^{N} p_i \ln p_i,$

where $p_i$ is the probability of observing the $i$-th state. Haken writes that the main task to which the book is devoted is to find ways to determine the relative frequencies $p_i$ of the states observed in the system.
According to the principle of maximum entropy, the distribution that carries the greatest information is realized with the greatest probability. This also happens in other cases when the maximum entropy principle is applicable.
Let us pose the question: how and in what way does the system move to a state with maximum information? In order to find an affirmative answer, it is necessary to formulate the problem more formally, as finding the system dynamics in the form

$\dot{p}(t) = u(t),$

where the vector $u(t)$ is interpreted as an input (control) to be determined. Assume that the following linear constraints hold during system evolution:

$\sum_{i=1}^{N} a_{ki} p_i(t) = b_k, \quad k = 1, \dots, m,$

where $a_{ki}$ and $b_k$ are given numbers specifying the conserved quantities. Such a statement allows one to determine the input vector-function $u(t)$ according to the speed-gradient principle:

$u = \Gamma \nabla_u \dot{Q}(p, u),$

where $\Gamma = \Gamma^{\mathrm{T}} > 0$ is a gain matrix and $\dot{Q}(p, u)$ is the rate of change of the goal function $Q(p)$ along trajectories of the system. Recall that in our case, the goal function is entropy,

$Q(p) = S(p) = -\sum_{i=1}^{N} p_i \ln p_i,$

so that $\dot{S} = -\sum_{i=1}^{N} (\ln p_i + 1) u_i$. Taking the constraints into account by means of Lagrange multipliers $\lambda_k$ yields the explicit evolution law

$\dot{p}_i = \gamma \Big( -\ln p_i - 1 - \sum_{k=1}^{m} \lambda_k a_{ki} \Big), \quad i = 1, \dots, N, \quad (8)$

where $\gamma > 0$ is a scalar gain and the multipliers $\lambda_k$ are found from the linear equations $\sum_{i=1}^{N} a_{ki} \dot{p}_i = 0$, $k = 1, \dots, m$, which guarantee that the constraints hold along the trajectories.
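The following sketch (our illustration, not the article’s code) integrates Equation 8 numerically; instead of solving for the Lagrange multipliers explicitly, it applies the equivalent orthogonal projection of the speed gradient onto the null space of the constraint matrix $A$, whose rows include normalization. The matrix and numbers are illustrative.

```python
import numpy as np

def sg_entropy_flow(p, A, gamma=1.0, dt=1e-3, steps=5000):
    """Speed-gradient entropy dynamics under linear constraints A p = b.

    Projecting the raw speed gradient -(ln p + 1) onto the null space of A
    is equivalent to eliminating the Lagrange multipliers in Equation 8,
    so every constraint encoded in A is preserved along the trajectory.
    """
    P = np.eye(len(p)) - A.T @ np.linalg.pinv(A @ A.T) @ A  # null-space projector
    for _ in range(steps):
        g = -(np.log(p) + 1.0)
        p = p + dt * gamma * (P @ g)
    return p

A = np.vstack([np.ones(5),                   # normalization: sum_i p_i = 1
               np.arange(5, dtype=float)])   # illustrative constraint: fixed mean index
p0 = np.array([0.4, 0.1, 0.2, 0.1, 0.2])
p = sg_entropy_flow(p0.copy(), A)
print(np.round(p, 4), np.round(A @ p - A @ p0, 6))  # Gibbs form; residuals ~ 0
```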
The above results can be extended to take into account the topology of the network describing interactions of the nodes (Fradkov et al., 2016).
Recently, a new multi-disciplinary research field on the border between system science and biology entitled network physiology has emerged. It is devoted to the study of biological and physiological systems possessing network structures (Ivanov and Bartsch, 2014; Ivanov, 2021). It is well known that both animal and human organisms are integrated networks, where multi-component physiological systems, each with its own regulatory mechanism, continuously interact to coordinate their functions. However, we still do not know the principles and mechanisms through which diverse systems and sub-systems in the human body dynamically interact as a network and integrate their functions to generate physiological states in health and disease. Network physiology aims to address these fundamental questions.
Among the tasks of network physiology are those similar to the problems of information dynamics and self-organization considered in the works of Hermann Haken, for example, Haken (1988). For instance, it is known that the entropy of a working brain can be measured using fMRI equipment (Lynn et al., 2021). It is therefore technically possible to use the concept of entropy in the analysis of the working human brain. Indeed, the neurons and neuron ensembles of the human brain can stay in different states and change their states over time. This uncertainty can be described by probabilities, so the entropy of the brain state at each time instant, and hence the entropy production, can be evaluated. Thus, the results of the previous section, which propose a principle for estimating the dynamics of information entropy changes, can be used to analyze the state and dynamics of the real brain.
Indeed, analysis of whole-brain imaging data has demonstrated that the human brain breaks detailed balance at large scales and that the brain’s entropy production (that is, its distance from detailed balance) varies critically with the specific function being performed, increasing with both physical and cognitive demands (Lynn et al., 2021). To analyze the mutual dynamics of the regions of this spatially distributed system, a network-adapted version of the speed-gradient principle (Fradkov et al., 2016) can be employed.
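As a schematic illustration of how such an analysis can start from data (a sketch in the spirit of, but not identical to, the pipeline of Lynn et al. (2021)), one can count transitions between coarse-grained states and estimate entropy production as the Kullback–Leibler divergence between forward and time-reversed transition flows; the toy state sequences below are illustrative.

```python
import numpy as np

def entropy_production(states, n_states):
    """KL divergence between forward and time-reversed transition flows."""
    C = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        C[a, b] += 1                          # count observed transitions a -> b
    P = C / C.sum()                           # joint probabilities of (state, next state)
    mask = (P > 0) & (P.T > 0)                # keep pairs observed in both directions
    return float(np.sum(P[mask] * np.log(P[mask] / P.T[mask])))

rng = np.random.default_rng(1)
balanced = list(rng.integers(0, 3, 10_000))   # i.i.d. states: detailed balance holds
driven = [0]
for _ in range(10_000):                       # biased walk on a ring breaks balance
    driven.append((driven[-1] + (1 if rng.random() < 0.8 else -1)) % 3)
print(entropy_production(balanced, 3))        # close to 0
print(entropy_production(driven, 3))          # clearly positive
```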
Among other examples related to network physiology problems, one can mention analysis of the interactions among brain and cardiac networks (González et al., 2022), spike-timing-dependent plasticity and its role in Parkinson’s disease pathophysiology (Madadi Asl et al., 2022), and criticality in the healthy brain (Shi et al., 2022).
As an example, let us consider the process of breaking and restoring detailed balance between regions in the human brain (Lynn et al., 2021). This process is interesting because, as was noted in the celebrated work by Schrödinger (1944), see also Gnesotto et al. (2018), the brain, as well as a living being as a whole, tends to increase its entropy. At first glance, the number of neurons in the brain is overwhelmingly large, and the structure of connections between them is overwhelmingly complex, making comprehensive analysis impractical. However, recent achievements of the international “Connectome” project have demonstrated that for many purposes, describing the brain as a network with a finite and relatively small number of nodes (approximately 100) suffices; see Van Essen et al. (2013). Hence, coarse-grained models of the human brain can be constructed that maintain manageable complexity. While at rest, the brain sustains a detailed balance of transitions between states. When engaged in physical or cognitive tasks, however, this detailed balance breaks down. Given that information entropy serves as a measure of uncertainty, it is natural to model brain dynamics on its basis.
Based on the hypothesis that the brain, and perhaps the entire living organism, strives to break the detailed balance and increase information entropy, it is interesting to find the law or model according to which the brain increases its entropy. It seems plausible that the brain (or organism) endeavors to maximize its entropy in an optimal way. How might this be achieved? Drawing upon prior discussions in Section 2, we propose addressing this issue through the speed-gradient principle.
Consider a system with the state vector taking values in a finite set of $N$ coarse-grained brain states, and let $p_{ij}$ denote the probability of the transition from state $i$ to state $j$. Following Lynn et al. (2021), the entropy production is evaluated as

$\dot{S}_p = \sum_{i,j=1}^{N} p_{ij} \ln \frac{p_{ij}}{p_{ji}}. \quad (10)$

Evidently, the right-hand side of Equation 10 corresponds to the Kullback–Leibler divergence measuring the distance between two distributions (forward and backward movements). If the system is in the state of the detailed balance, then $p_{ij} = p_{ji}$ for all $i, j$, and the entropy production (Equation 10) vanishes. Assume for simplicity that the reverse probabilities $p_{ji}$ vary slowly and may be treated as constants. The gradients of the entropy production should be taken over the controlling (input) variables. As such, it is natural to take those probabilities that determine the next coarse-grained state of the brain numbered $j$, that is, the transition probabilities $p_{ij}$, whose rates of change $u_{ij} = \dot{p}_{ij}$ play the role of inputs. Note that the transition probabilities must remain normalized during the evolution:

$\sum_{j=1}^{N} p_{ij} = 1, \quad i = 1, \dots, N. \quad (11)$

To take into account constraints (Equation 11), introduce Lagrange multipliers $\lambda_i$, $i = 1, \dots, N$, chosen so that the resulting trajectories satisfy constraints (Equation 11). Finally, the law of the fastest transition probabilities evolution is as follows:

$\dot{p}_{ij} = \gamma \Big( \ln \frac{p_{ij}}{p_{ji}} + 1 - \lambda_i \Big), \qquad \lambda_i = \frac{1}{N} \sum_{j=1}^{N} \Big( \ln \frac{p_{ij}}{p_{ji}} + 1 \Big), \quad (16)$

where $\gamma > 0$ is a scalar gain. Replacing the derivative $\dot{p}_{ij}$ with the difference $p_{ij}(t+1) - p_{ij}(t)$ yields the corresponding discrete-time model (Equation 17).
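A minimal simulation of the continuous-time law (Equation 16) under the simplifications above (frozen reverse probabilities, Euler integration; sizes and gains are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
N, gamma, dt = 4, 0.5, 1e-3
P = rng.random((N, N)); P /= P.sum(axis=1, keepdims=True)  # transition probabilities
Q = P.T.copy()                                 # frozen "reverse" probabilities p_ji

for k in range(3001):
    G = np.log(P / Q) + 1.0                    # speed gradient of entropy production
    U = gamma * (G - G.mean(axis=1, keepdims=True))  # row means act as multipliers
    P = np.clip(P + dt * U, 1e-9, None)        # Euler step, guard positivity
    P /= P.sum(axis=1, keepdims=True)          # numerical guard for Equation 11
    if k % 1000 == 0:
        print(np.sum(P * np.log(P / Q)))       # entropy production grows monotonically
```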
Let us consider a modified version of the breaking detailed balance model based on the same speed-gradient principle. Once again, we start with the assumption that entropy production grows in the optimal manner. However, let us now measure the entropy production by its deviation from the uniform distribution corresponding to the maximum system entropy. It means that the uniform distribution is chosen as the base level for the entropy production evaluation, and the following model is used instead of Equation 10:
$\tilde{S}_p = \sum_{i,j=1}^{N} p_{ij} \ln \frac{p_{ij}}{1/N} = \sum_{i,j=1}^{N} p_{ij} \ln (N p_{ij}), \quad (18)$

where probability $1/N$ corresponds to the uniform distribution over the $N$ coarse-grained states. The Kullback–Leibler divergence (Equation 18) can also serve as a measure of the current state deviation from the detailed balance state. Taking Equation 18 as the model of the goal function for the speed-gradient method and repeating the calculations, we obtain the following expressions instead of Equation 16:

$\dot{p}_{ij} = \gamma \Big( \ln (N p_{ij}) + 1 - \lambda_i \Big), \qquad \lambda_i = \frac{1}{N} \sum_{j=1}^{N} \big( \ln (N p_{ij}) + 1 \big), \quad (19)$

where $\gamma > 0$ is a scalar gain and the multipliers $\lambda_i$ again enforce the normalization constraints (Equation 11); the discrete-time counterpart (Equation 20) is obtained in the same way as Equation 17.
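The same simulation skeleton covers the modified model (Equation 19); only the base distribution changes from the frozen reverse probabilities to the uniform value $1/N$ (again an illustrative sketch):

```python
import numpy as np

rng = np.random.default_rng(3)
N, gamma, dt = 4, 0.5, 1e-3
P = rng.random((N, N)); P /= P.sum(axis=1, keepdims=True)

for _ in range(3000):
    G = np.log(N * P) + 1.0                          # gradient of sum p_ij ln(N p_ij)
    U = gamma * (G - G.mean(axis=1, keepdims=True))  # rows of P stay normalized
    P = np.clip(P + dt * U, 1e-9, None)
    P /= P.sum(axis=1, keepdims=True)

print(round(float(np.sum(P * np.log(N * P))), 4))    # divergence from uniform grows
```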
Equations 16, 19 and Equations 17, 20 are, respectively, the continuous-time and discrete-time models of human brain entropy dynamics proposed via the speed-gradient method. Which model is closer to reality? A series of experiments with real data is needed to answer this question.
Over the past two decades, system theory and cybernetics have yielded many new results and approaches that enable researchers to study various properties of complex networks. These findings can be applied to network physiology. For instance, numerous stability and synchronization criteria for complex networks are relevant to network models composed of interconnected mathematical models of neurons. The first results on the control of neuron and neural network models were obtained in the 1990s, focusing on chaos control and synchronization. Carroll (1995) proposed an algorithm for pulse synchronization control of two FitzHugh–Nagumo (FHN) neuron models, drawing parallels between neuronal and electrical processes. Dragoi and Grosu (1998) designed an algorithm to control a chain of FHN neurons, aiming to synchronize each neuron’s oscillations with those of a “reference” neuron. The stability of the synchronization process was established within a certain range of initial conditions using a linear approximation.
Plotnikov et al. (2016a) proposed algorithms for synchronizing a heterogeneous network of diffusion-coupled models of FHN neurons with a hierarchical architecture based on the speed-gradient method. Synchronization conditions were obtained based on the Lyapunov function method. Similar results were obtained for adaptive control algorithms that do not require precise knowledge of the neuron model parameters (Plotnikov et al., 2016b).
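To give a flavor of such schemes (a simplified sketch, not the algorithm of Plotnikov et al. (2016a) itself), the snippet below synchronizes two FitzHugh–Nagumo neurons with mismatched drive currents. The control $u = -\gamma (v_2 - v_1)$ applied to the second neuron is the speed-gradient law for the goal function $Q = (v_2 - v_1)^2 / 2$; all parameter values are illustrative.

```python
import numpy as np

eps, a, b = 0.08, 0.7, 0.8        # standard FitzHugh-Nagumo parameters
gamma, dt = 5.0, 1e-3             # speed-gradient gain and Euler step

def fhn(v, w, I, u=0.0):
    """FitzHugh-Nagumo right-hand side with drive current I and control u."""
    return v - v**3 / 3 - w + I + u, eps * (v + a - b * w)

v1, w1, v2, w2 = -1.0, 1.0, 1.5, -0.5
for _ in range(200_000):
    u = -gamma * (v2 - v1)                 # speed-gradient feedback on the error
    dv1, dw1 = fhn(v1, w1, I=0.50)
    dv2, dw2 = fhn(v2, w2, I=0.45, u=u)    # mismatched current, controlled neuron
    v1, w1 = v1 + dt * dv1, w1 + dt * dw1
    v2, w2 = v2 + dt * dv2, w2 + dt * dw2
print(abs(v2 - v1))                        # synchronization error stays small
```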
The Lyapunov function method and the speed-gradient method have also been effectively utilized in designing and analyzing control algorithms for synchronization and chaos control problems involving Hindmarsh–Rose models and their networks (Plotnikov, 2021; Semenov et al., 2022).
Currently, a growing body of research focuses not only on studying the properties of neural networks with neurophysiological interpretations but also delves deeply into the challenges associated with intentionally creating or eliminating these properties, that is, controlling networks. Other cybernetics-related tasks concerning networks of neurons or their models are being explored as well, such as state and parameter estimation, pattern recognition, and machine learning. In summary, there is a notable trend leading to the establishment of a substantial and significant new domain within computational neuroscience, which can naturally be called cybernetical neuroscience.
The main directions of the research in cybernetical neuroscience are as follows (Fradkov, 2024):
1. Analysis of the conditions for the models of neural ensembles to possess some special behaviors observed in the brain, such as synchronization, desynchronization, spiking, bursting, solitons, chaos, and chimeras.
2. Synthesis of external (control) inputs that create the special behaviors in the brain models.
3. Estimation of the state and parameters of the brain models based on the results of measuring input and output variables.
4. Classification of brain states and human intentions (using adaptation and machine learning methods) based on real brain state measurements (invasive or noninvasive).
5. Design of control algorithms that provide specified properties of a closed-loop system consisting of a controlled neural system and a controlling device interacting via a brain-computer interface.
The approach to searching for how a system should evolve to reach the state of maximum information entropy presented in this article also originated in the area of control science or cybernetics. Its applications to neural systems belong to the area of cybernetical neuroscience.
This article proposes a version of the speed-gradient evolution model for systems following the maximum information entropy principle developed by H. Haken in his seminal book of 1988. An explicit relationship (Equation 8) defining the system dynamics under general linear constraints is established, and two versions of a model of detailed balance breaking for human brain entropy are proposed.
Furthermore, the contours of a novel scientific area termed cybernetical neuroscience, focused on controlling neural systems, are delineated.
Future research might focus on examining the diverse dynamic issues in network physiology. For example, the methodology presented here could be applied to recent findings on utilizing entropy to analyze brain dynamics, as reported by Antonopoulos et al. (2015), Jirsa and Sheheitli (2022), Keshmiri (2020), and Yufik (2019).
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.
AF: writing–original draft and writing–review and editing.
The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.
The author is grateful to Professor Eckehard Schöll for his invitation to contribute to this valuable issue dedicated to the memory of the outstanding scientist Hermann Haken.
The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
The author(s) declare that no Generative AI was used in the creation of this manuscript.
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
1 Notation col stands for a column vector hereafter.
Antonopoulos, C. G., Srivastava, S., Pinto, S. E. d. S., and Baptista, M. S. (2015). Do brain networks evolve by maximizing their information flow capacity? PLOS Comput. Biol. 11 (8), e1004372. doi:10.1371/journal.pcbi.1004372
Carroll, T. (1995). Synchronization and complex dynamics in pulse-coupled circuit models of neurons. Biol. Cybern. 73, 553–559. doi:10.1007/BF00199547
Dragoi, V., and Grosu, I. (1998). Synchronization of locally coupled neural oscillators. Neural Process. Lett. 7, 199–210. doi:10.1023/A:1009618318908
Fradkov, A. L. (1979). Speed-Gradient scheme and its application in adaptive-control problems. Automation Remote Control 40 (9), 1333–1342.
Fradkov, A. L. (1991). “Speed-gradient laws of control and evolution,” in Proc. 1st Eur. Control Conf. (ECC), 1991, 1861–1865.
Fradkov, A. L. (2007). Cybernetical physics: from control of chaos to quantum control. Springer-Verlag. doi:10.1007/978-3-540-46277-4
Fradkov, A. L. (2008). Speed-gradient entropy principle for nonstationary processes. Entropy 10 (4), 757–764. doi:10.3390/e10040757
Fradkov, A. L., Miroshnik, I. V., and Nikiforov, V. O. (1999). Nonlinear and adaptive control of complex systems. Dordrecht: Springer (former Kluwer Acad. Publisher). doi:10.1007/978-94-015-9261-1
Fradkov, A. L., and Shalymov, D. S. (2014). Information entropy dynamics and maximum entropy production principle. arXiv:1401.2921v1
Fradkov, A. L., and Shalymov, D. S. (2015). Speed gradient and MaxEnt principles for Shannon and Tsallis entropies. Entropy 17 (3), 1090–1102. doi:10.3390/e17031090
Fradkov, A. L., and Shalymov, D. S. (2015a). Dynamics of non-stationary nonlinear processes that follow the maximum of differential entropy principle. Commun. Nonlinear Sci. Numer. Simul. 29, 488–498. doi:10.1109/MED.2015.7158787
Fradkov, A. L., Shalymov, D. S., and Proskurnikov, A. V. (2016). “Speed-gradient entropy maximization in networks,” in 2016 IEEE Conference on Norbert Wiener in the 21st Century (21CW), July 13-16, 2016. IEEE, 62–66.
Gnesotto, F. S., Mura, F., Gladrow, J., and Broedersz, C. P. (2018). Broken detailed balance and non-equilibrium dynamics in living systems: a review. Rep. Prog. Phys. 81, 066601. doi:10.1088/1361-6633/aab3ed
González, C., Garcia-Hernando, G., Jensen, E. W., and Vallverdú-Ferrer, M. (2022). Assessing rheoencephalography dynamics through analysis of the interactions among brain and cardiac networks during general anesthesia. Front. Netw. Physiol. 2, 912733. doi:10.3389/fnetp.2022.912733
Haken, H. (1988). Information and Self-Organization. A macroscopic approach to complex systems. Berlin, Heidelberg: Springer-Verlag.
Ivanov, P. C. (2021). The new field of network physiology: building the human physiolome. Front. Netw. Physiol. 1, 711778. doi:10.3389/fnetp.2021.711778
Ivanov, P. C., and Bartsch, R. P. (2014). “Network physiology: mapping interactions between networks of physiologic networks,” in Networks of networks: the last frontier of complexity. Editors G. D. Agostino, and A. Scala (Cham: Springer International Publishing), 203–222.
Jaynes, E. T. (1957). Information theory and statistical mechanics. Phys. Rev. 106, 620–630. doi:10.1103/PhysRev.106.620
Jirsa, V., and Sheheitli, H. (2022). Entropy, free energy, symmetry and dynamics in the brain. J. Phys. Complex. 3. doi:10.1088/2632-072X/ac4bec
Lynn, C. W., Cornblath, E. J., Papadopoulos, L., Bertolero, M. A., and Bassett, D. S. (2021). Broken detailed balance and entropy production in the human brain. PNAS 118 (47), e2109889118. doi:10.1073/pnas.2109889118
Madadi Asl, M., Vahabie, A.-H., Valizadeh, A., and Tass, P. A. (2022). Spike-timing-dependent plasticity mediated by dopamine and its role in Parkinson’s disease pathophysiology. Front. Netw. Physiol. 2, 817524. doi:10.3389/fnetp.2022.817524
Plotnikov, S. A. (2021). Synchronization conditions in networks of Hindmarsh–Rose systems. Cybern. Phys. 10, 254–259. doi:10.35470/2226-4116-2021-10-4-254-259
Plotnikov, S. A., Lehnert, J., Fradkov, A. L., and Schöll, E. (2016a). Synchronization in heterogeneous FitzHugh-Nagumo networks with hierarchical architecture. Phys. Rev. E 94, 012203. doi:10.1103/PhysRevE.94.012203
Plotnikov, S. A., Lehnert, J., Fradkov, A. L., and Schöll, E. (2016b). Adaptive control of synchronization in delay-coupled heterogeneous networks of FitzHugh-Nagumo nodes. Int. J. Bifurcation Chaos 26 (4), 1650058. doi:10.1142/S0218127416500589
Schrödinger, E. (1944). What is life? The physical aspect of the living cell and mind. Cambridge, UK: Cambridge University Press.
Semenov, D. M., Plotnikov, S. A., and Fradkov, A. L. (2022). “Controlled synchronization in regular delay-coupled networks of Hindmarsh-Rose neurons,” in 2022 6th Scientific School on Dynamics of Complex Networks and their Applications (DCNA), Kaliningrad, Russia, 2022. IEEE. doi:10.1109/dcna56428.2022.9923218
Shalymov, D., Fradkov, A., Liubchich, S., and Sokolov, B. (2017). Dynamics of the relative entropy minimization processes. Cybern. Phys. 6 (2), 80–87.
Shi, J., Kirihara, K., Tada, M., Fujioka, M., Usui, K., Koshiyama, D., et al. (2022). Criticality in the healthy brain. Front. Netw. Physiol. 1, 755685. doi:10.3389/fnetp.2021.755685
Van Essen, D. C., Smith, S. M., Barch, D. M., Behrens, T. E. J., Yacoub, E., Ugurbil, K., et al. (2013). The WU-Minn Human Connectome Project: an overview. Neuroimage 80, 62–79. doi:10.1016/j.neuroimage.2013.05.041
Keywords: information, entropy, network, speed-gradient, evolution, self-organization, control, network physiology
Citation: Fradkov A (2025) Information entropy dynamics, self-organization, and cybernetical neuroscience. Front. Netw. Physiol. 5:1539166. doi: 10.3389/fnetp.2025.1539166
Received: 03 December 2024; Accepted: 19 February 2025;
Published: 21 March 2025.
Edited by: Eckehard Schöll, Technical University of Berlin, Germany
Reviewed by: Alexander E. Hramov, Immanuel Kant Baltic Federal University, Russia
Copyright © 2025 Fradkov. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Alexander Fradkov, alf@ipme.ru