
ORIGINAL RESEARCH article

Front. Netw. Physiol., 21 March 2025

Sec. Networks of Dynamical Systems

Volume 5 - 2025 | https://doi.org/10.3389/fnetp.2025.1539166

This article is part of the Research Topic Self-Organization of Complex Physiological Networks: Synergetic Principles and Applications — In Memory of Hermann Haken

Information entropy dynamics, self-organization, and cybernetical neuroscience

  • Control of Complex Systems Lab, Institute for Problems of Mechanical Engineering, Saint Petersburg, Russia

A version of the speed-gradient evolution model is proposed for systems obeying the maximum information entropy principle developed by H. Haken in his 1988 book. An explicit relation specifying the system dynamics under general linear constraints is established. Two versions of a human brain entropy detailed balance-breaking model are proposed. In addition, the contours of a new scientific field called cybernetical neuroscience, dedicated to the control of neural systems, are outlined.

1 Introduction

In Haken (1988), the eminent scientist Hermann Haken explored the interplay between the concepts of information and self-organization. He took a significant step toward broadening the applicability of the Gibbs–Jaynes principle of maximum entropy (Jaynes, 1957). Specifically, Haken incorporated functions that act as order parameters in nonequilibrium phase transitions into the set of additional constraints. In Chapter 3 of the book, he presents a modified version of the maximum entropy principle. Haken’s adaptation of this principle involves seeking a new future state of the system that maximizes information while adhering to physical conditions describing the system’s physical properties.

Let the elements of some system, for example, the molecules of an ideal gas, occupy n cells. Suppose we are looking for the distribution of molecules over the possible states (cells); that is, we need to find the probabilities p_1, p_2, ..., p_n, where p_i is the probability of finding a molecule in cell i. Then, the information entropy S of the system is defined as

S = -K \sum_{i=1}^{n} p_i \ln p_i, \qquad (1)

where K is the Boltzmann constant.

The physical conditions act as constraints; for example, the position of the center of mass may be given:

\sum_{i=1}^{n} p_i q_i = M, \qquad (2)

where q_i is the position of the i-th cell, the value M is the coordinate of the center of mass, and N is the total number of particles. Alternatively, let f_i be the kinetic energy of the i-th particle. Then, the average value of the kinetic energy of the system may be specified by

\sum_{i=1}^{n} p_i f_i = E,

and so on. In many important cases, all the constraints specify the values L_k of linear combinations of certain characteristics f_i^{(k)}, k = 1, ..., L, of the system:

\sum_{i=1}^{n} p_i f_i^{(k)} = L_k, \qquad k = 1, \ldots, L.

Additionally, the normalization constraint for the distribution should be added:

\sum_{i=1}^{n} p_i = 1, \qquad p_i \ge 0. \qquad (3)
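As a purely illustrative numerical sketch (not taken from Haken's book; the cell positions, the value of M, and the use of scipy are assumptions made only for this example), the distribution maximizing (1) under the constraints (2) and (3) can be computed directly:

```python
# Illustrative sketch (assumed example): maximize the entropy (1) with K = 1
# under the linear constraint (2) and the normalization (3), using scipy.
import numpy as np
from scipy.optimize import minimize

n = 5
q = np.arange(1.0, n + 1.0)           # assumed cell positions q_i = 1, ..., 5
M = 2.0                               # assumed center-of-mass value

def neg_entropy(p):                   # minimize -S  <=>  maximize S
    return float(np.sum(p * np.log(p)))

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},   # normalization (3)
    {"type": "eq", "fun": lambda p: p @ q - M},         # center of mass (2)
]
res = minimize(neg_entropy, np.full(n, 1.0 / n), method="SLSQP",
               bounds=[(1e-9, 1.0)] * n, constraints=constraints)

print("MaxEnt distribution:", np.round(res.x, 4))
# For linear constraints the maximizer has the exponential (Gibbs) form
# p_i proportional to exp(-lambda * q_i), which the numbers above reproduce.
```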

Several examples in Haken (1988) illustrate the form of the system’s state distribution that achieves maximum information, as well as the state corresponding to the self-organization of the system. However, neither Haken’s book nor other known works raise the question of how a system evolves to attain the state of maximum information entropy (or the self-organized state).

The question “How does a system evolve when striving for a state of maximum entropy?” was first addressed in the works by Fradkov (2007) and Fradkov (2008) and further investigated in a series of subsequent publications. It was hypothesized that this evolution aligns with the speed-gradient principle, originally developed within control systems theory, as seen in Fradkov (1979), Fradkov (1991), and Fradkov et al. (1999). According to this hypothesis, the evolution process can be understood if the system aims to maximize a certain functional. If the system seeks to achieve the optimal state, it should logically strive to do so in the most efficient manner.

Indeed, if the objective is to increase the value of a given target functional, the quickest path to achieving this goal would be to follow the direction of the speed gradient: the gradient of the rate at which this functional changes. Subsequent studies have examined the application of the speed-gradient principle to different types of entropies, including Shannon, Rényi, Tsallis, relative entropy, and Kullback–Leibler divergence (Fradkov, 2008; Fradkov and Shalymov, 2014; Fradkov and Shalymov, 2015; Shalymov et al., 2017). The evolution of distributed systems governed by the law of maximum differential entropy was also analyzed by Fradkov and Shalymov (2015a). In each instance, it was demonstrated and mathematically verified that the trajectories of systems evolving according to the speed-gradient principle converge to a state of maximum entropy that is asymptotically stable.

Herein, we show that analogous principles and laws govern the transition to the state of maximum information entropy described in Hermann Haken’s works. We propose an explicit form for the speed-gradient evolution of systems characterizing the dynamics of information entropy in the presence of multiple linear constraints, as discussed in Section 3.2 of Haken (1988). This case generalizes previous considerations involving constraints related to the conservation of mass and energy.

2 Principle of maximum entropy and speed-gradient formalism

Consider the problem of finding the system dynamics equations in the form

dx/dt = F(x, u, t), \qquad t \ge 0, \qquad (4)

where x is the system state vector, and u is the input vector. The above-mentioned speed-gradient principle is formulated as follows (Fradkov, 1991; Fradkov, 2007). Among all possible motions of (Equation 4), only those are realized for which the input vector u(t) changes proportionally to the gradient, with respect to u, of the speed of change in t of an appropriate goal functional Q_t. If constraints are imposed on the system motion, then the speed-gradient vector should be projected onto the set of admissible directions (those compatible with the constraints).
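To make the formulation concrete, consider a simple worked example (ours, not from the cited works) for a quadratic goal functional to be decreased:

```latex
% Illustrative worked example: goal Q_t = \tfrac12 |x(t)-x_*|^2 to be decreased,
% with dynamics dx/dt = u as in (4).
\[
  \dot{Q} = (x - x_*)^{\mathsf T} u, \qquad
  \nabla_u \dot{Q} = x - x_* , \qquad
  u = -\gamma \nabla_u \dot{Q}
  \;\Rightarrow\;
  \frac{dx}{dt} = -\gamma\,(x - x_*), \quad \gamma > 0,
\]
% i.e., the state moves along the direction in which Q_t decreases fastest.
```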

Consider the formalism developed in Chapter 3 of Haken’s book as an interpretation of Jaynes’ maximum entropy principle for the discrete systems possessing information entropy

S(p) = -K \sum_{i=1}^{n} p_i \ln p_i, \qquad (5)

where p_i, i = 1, 2, ..., n, are the probabilities or relative frequencies of a particle staying in the i-th state, n is the total number of states, K is the Boltzmann constant, and S(p) is the information or information entropy of the state.

Haken writes that the main task to which the book is devoted is to find ways of determining the frequencies p_i, taking into account the constraints and the available additional information. For example, when considering an ideal one-dimensional gas, one can measure the position of the center of mass, which has the form (Equation 2), where q_i is the i-th cell position. In addition, the normalization condition (Equation 3) should be valid.

According to the principle of maximum entropy, the distribution that carries the greatest information is realized with the greatest probability. This also happens in other cases when the maximum entropy principle is applicable.

Let us pose the question: how does the system move to a state with maximum information? To answer it, the problem needs to be formulated more formally, as finding the system dynamics in the form

dp/dt = u, \qquad (6)

where p = col(p_1, p_2, ..., p_n) is the column1 vector of the system state distribution, and u is the external input vector-function to be determined.

Assume that the following constraints hold during system evolution:

Ap = b, \qquad (7)

where A is an m × n matrix, b is an m-vector, and m is the number of constraints. Assume that the matrix A has full rank; that is, the constraints are linearly independent. Let the constraints (Equation 7) be valid at the initial time instant: Ap(0) = b. Choose the entropy S(p) defined in Equation 5 as the goal function. According to the speed-gradient principle, among the whole set of directions (ways) of evolution satisfying the constraints (Equation 7), the one that is realized is the movement along the trajectory of the fastest growth of the entropy S(p(t)). In other words, according to the speed-gradient principle, the system takes the path of maximum entropy production.

Such a statement allows one to determine the input vector-function u(p) explicitly. To this end, evaluate the projection, onto the set of points satisfying the constraints (Equation 7), of the gradient in u of the speed of change of the goal function (the entropy, Equation 5) along trajectories of Equation 6. The speed of change of the goal function along Equation 6 is Ṡ(p, u) = ∇_p S(p)^T u, and the gradient of this speed in u is ∇S(p), where ∇ denotes the gradient (the vector of partial derivatives of a function) and the upper index T means transposition. However, one needs to take into account the constraints (Equation 7), that is, to project onto the set P = {p : Ap = b}. Introducing and evaluating the m-vector of Lagrange multipliers λ_1, ..., λ_m and taking into account the initial condition Ap(0) = b, the following expression is obtained after some algebra:

u = \gamma \left[ I_n - A^{T} (A A^{T})^{-1} A \right] \nabla_p S(p), \qquad (8)

where γ > 0 is the gain parameter, and I_n is the n × n identity matrix. The matrix in the square brackets in (Equation 8) is nothing but the matrix P_A of the projection onto the subspace determined by the condition Ap = 0.

Recall that in our case, the goal function is the entropy S(p) in Equation 5. Its partial derivatives (with K = 1 for simplicity) are ∂S/∂p_i = −(ln p_i + 1). Therefore, the system will evolve according to the rule

dp/dt = -\gamma P_A (\ln p + \mathbf{1}), \qquad (9)

where \mathbf{1} is the n-vector with all components equal to 1 and ln p is understood componentwise. Note that the conditions p_i > 0, which are necessary for keeping the system well posed, will be valid automatically for solutions of (Equation 9), because they are valid at the initial time instant, and the corresponding component of the entropy gradient, −(ln p_i + 1), grows to +∞ when p_i tends to 0.
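For illustration, the law (8)–(9) can be simulated directly. The following sketch (our example, not code from the cited works; the constraint matrix, initial distribution, gain, and step size are arbitrary choices, and K = 1) integrates (9) by the explicit Euler method and checks that the constraints (7) are preserved while the entropy grows:

```python
# Illustrative sketch (assumed example, K = 1): integrate the speed-gradient law (9),
# dp/dt = -gamma * P_A (ln p + 1), with the projector P_A = I_n - A^T (A A^T)^(-1) A
# from (8), and verify that the constraints (7) hold while the entropy increases.
import numpy as np

n = 4
A = np.array([[1.0, 1.0, 1.0, 1.0],          # normalization: sum_i p_i = 1
              [1.0, 2.0, 3.0, 4.0]])         # assumed linear constraint (mean position)
p = np.array([0.40, 0.30, 0.20, 0.10])       # initial distribution; b = A p(0)
b = A @ p

P_A = np.eye(n) - A.T @ np.linalg.inv(A @ A.T) @ A   # projector onto {A p = 0}
gamma, dt = 0.5, 1e-3

def entropy(p):
    return float(-np.sum(p * np.log(p)))

S0 = entropy(p)
for _ in range(20000):                        # explicit Euler integration of (9)
    p = p - dt * gamma * P_A @ (np.log(p) + 1.0)

print("constraint residual:", float(np.linalg.norm(A @ p - b)))   # stays near 0
print("entropy:", round(S0, 4), "->", round(entropy(p), 4))       # increases
```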

The above results can be extended to take into account the topology of the network describing interactions of the nodes (Fradkov et al., 2016).

3 Application to network physiology problems

Recently, a new multidisciplinary research field called network physiology has emerged on the border between systems science and biology. It is devoted to the study of biological and physiological systems possessing network structures (Ivanov and Bartsch, 2014; Ivanov, 2021). It is well known that both animal and human organisms are integrated networks, in which multi-component physiological systems, each with its own regulatory mechanism, continuously interact to coordinate their functions. However, we still do not know the principles and mechanisms through which diverse systems and sub-systems in the human body dynamically interact as a network and integrate their functions to generate physiological states in health and disease. Network physiology aims to address these fundamental questions.

Among the tasks of network physiology are those similar to the problems of information dynamics and self-organization considered in the works of Hermann Haken, for example, Haken (1988). As is known, the entropy of a working brain can be measured using fMRI equipment (Lynn et al., 2021). Therefore, it is technically possible to use the concept of entropy in the analysis of the working human brain. Indeed, the neurons and neuron ensembles of the human brain can stay in different states and change their states in time. Such uncertainty can be described by probabilities, and then the entropy of the brain state at each time instant can be evaluated; hence, the entropy production can also be evaluated. Therefore, the results of the previous section, which propose a principle for estimating the dynamics of information entropy changes, can be used to analyze the state and dynamics of the real brain.

Indeed, the analysis of whole-brain imaging data has demonstrated that the human brain breaks detailed balance at large scales and that the brain’s entropy production (that is, its distance from detailed balance) varies critically with the specific function being performed, increasing with both physical and cognitive demands (Lynn et al., 2021). To analyze the mutual dynamics of the regions of such a spatially distributed system, a network-adapted version of the speed-gradient principle (Fradkov et al., 2016) can be employed.

Among other examples related to network physiology problems, one can mention analysis of the interactions among brain and cardiac networks (González et al., 2022), spike-timing-dependent plasticity and its role in Parkinson’s disease pathophysiology (Madadi Asl et al., 2022), and criticality in the healthy brain (Shi et al., 2022).

4 Dynamics of human brain entropy

As an example, let us consider the process of breaking and restoring detailed balance between regions in the human brain (Lynn et al., 2021). This process is interesting because, as was noted in the celebrated work by Schrödinger (1944) (see also Gnesotto et al., 2018), the brain, as well as a living being as a whole, tends to increase its entropy. At first glance, the number of neurons in the brain is overwhelmingly large, and the structure of connections between them is overwhelmingly complex, making comprehensive analysis impractical. However, recent achievements of the international “Connectome” project have demonstrated that for many purposes, describing the brain as a network with a finite and relatively small number of nodes (approximately 100) suffices; see Van Essen et al. (2013). Hence, coarse-grained models of the human brain can be constructed that maintain manageable complexity. While at rest, the brain sustains a detailed balance of transitions between states. When engaged in physical or cognitive tasks, however, this detailed balance breaks down. Given that information entropy serves as a measure of uncertainty, it is natural to base a model of brain dynamics on it.

Based on the hypothesis that the brain, and perhaps the entire living organism, strives to break the detailed balance and increase information entropy, it is interesting to find the law or model according to which the brain increases its entropy. It seems plausible that the brain (or organism) endeavors to maximize its entropy in an optimal way. How might this be achieved? Drawing upon prior discussions in Section 2, we propose addressing this issue through the speed-gradient principle.

Consider a system with state vector x_t at time t, which can take N possible values, denoted {1, 2, ..., N}. Suppose that the dynamics of the system are stochastic, and let P_ij(t) be the probability of the event {x_{t-1} = i, x_t = j}. In other words, P_ij(t) are forward transition probabilities, and P_ji(t) are backward transition probabilities. If the system has Markovian dynamics [e.g., the Ising model; see Lynn et al. (2021)], then the rate of entropy change (entropy production) is given by (Lynn et al., 2021):

\dot S(t) = \sum_{i,j=1}^{N} P_{ij}(t) \log \frac{P_{ij}(t)}{P_{ji}(t)}. \qquad (10)

Evidently, the right-hand side of Equation 10 is the Kullback–Leibler divergence measuring the distance between the two distributions (forward and backward movements). If the system is in the state of detailed balance, then P_ij = P_ji and the entropy production vanishes, and vice versa; that is, Ṡ(t) is a measure of broken detailed balance. Therefore, the problem is to find the law of change of P_ij, P_ji such that the entropy grows as fast as possible under the normalization constraints

\sum_{j=1}^{N} P_{ij} = 1, \qquad i = 1, \ldots, N, \qquad (11)
P_{ij} \ge 0, \qquad i, j = 1, \ldots, N. \qquad (12)
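As a small illustration (our example, not code from Lynn et al.), the entropy production (10) can be computed directly from a matrix of joint probabilities P_ij; it vanishes for a symmetric (detailed-balance) matrix and is positive once the balance is broken:

```python
# Small illustrative helper (assumed example): entropy production (10) for a matrix
# of joint probabilities P_ij = Prob{x_{t-1} = i, x_t = j}. It is zero exactly under
# detailed balance (P symmetric) and positive otherwise.
import numpy as np

def entropy_production(P):
    return float(np.sum(P * np.log(P / P.T)))

N = 3
P_balanced = np.full((N, N), 1.0 / N**2)           # symmetric: detailed balance holds
P_broken = np.array([[0.10, 0.15, 0.05],           # assumed asymmetric example
                     [0.05, 0.10, 0.20],
                     [0.15, 0.10, 0.10]])

print(entropy_production(P_balanced))   # 0.0
print(entropy_production(P_broken))     # > 0
```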

Assume for simplicity that the backward probabilities P_ji are not changed, P_ji = const, and evaluate the gradients of the entropy production according to the approach of Section 2.

The gradients of the entropy production should be taken with respect to the controlling (input) variables. It is natural to take as such the probabilities that determine the next coarse-grained state of the brain, numbered j, and to follow the direction of the fastest growth of Ṡ(t), that is, the gradient of Ṡ(t) with respect to those probabilities. The current state is fixed, and i is the index corresponding to it. This means that the gradient should be taken with respect to the next-state probabilities, which play the role of a control action: the brain goes into a new state, and we want to know how this choice is made. Therefore, let us evaluate the gradient of Ṡ with respect to P_ij, assuming that P_ji are fixed. To avoid notational confusion, replace the summation indices (i, j) with (k, l). Then, the derivative of Ṡ reads

\frac{\partial \dot S(t)}{\partial P_{ij}} = \frac{\partial}{\partial P_{ij}} \sum_{k,l=1}^{N} P_{kl} \log \frac{P_{kl}}{P_{lk}}. \qquad (13)

Note that k, l in (Equation 13) are running indices, and for any fixed pair (i, j), only one term in the sum (Equation 13) depends on P_ij. Hence

\frac{\partial \dot S(t)}{\partial P_{ij}} = 1 \cdot \log \frac{P_{ij}}{P_{ji}} + P_{ij} \frac{\partial}{\partial P_{ij}} \left( \log P_{ij} - \log P_{ji} \right) = \log \frac{P_{ij}}{P_{ji}} + 1. \qquad (14)

To take into account the constraints (Equation 11), introduce Lagrange multipliers λ_i, i = 1, ..., N, and choose them in such a way that the equations

\dot P_{ij} = \log \frac{P_{ij}}{P_{ji}} + 1 - \lambda_i \qquad (15)

satisfy the constraints \sum_{l=1}^{N} \dot P_{il} = 0, i = 1, ..., N. Then, the constraints (Equation 11) will be valid for all t ≥ 0, provided that they are valid for t = 0. As for the constraints (Equation 12), they will be fulfilled automatically for all t ≥ 0 if they are strictly fulfilled for t = 0 (P_ij(0) > 0), because P_ij appears in Equation 15 under the log operation and cannot approach zero. It is easy to see that such λ_i may be chosen as follows:

\lambda_i = 1 + \frac{1}{N} \sum_{l=1}^{N} \log \frac{P_{il}}{P_{li}}.

Finally, the law of the fastest evolution of the transition probabilities is as follows:

\dot P_{ij}(t) = \gamma \left[ \log \frac{P_{ij}(t)}{P_{ji}} - \frac{1}{N} \sum_{l=1}^{N} \log \frac{P_{il}(t)}{P_{li}} \right], \qquad i, j = 1, \ldots, N, \qquad (16)

where γ > 0 is the gain (activity) coefficient. Because, in reality, only measurements at discrete (sampled) time instants are possible, we arrive at the following final relation:

P_{ij}(t) - P_{ij}(t-1) = \gamma \left[ \log \frac{P_{ij}(t-1)}{P_{ji}} - \frac{1}{N} \sum_{l=1}^{N} \log \frac{P_{il}(t-1)}{P_{li}} \right], \qquad i, j = 1, \ldots, N. \qquad (17)
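A minimal numerical sketch of the discrete-time law (17) is given below (our example; the initial matrix, gain, and number of steps are arbitrary, the backward probabilities P_ji are frozen at their initial values, and the natural logarithm is used). It illustrates that the row sums (11) are preserved while the entropy production (10) grows:

```python
# Sketch of the discrete-time law (17) under illustrative assumptions (arbitrary initial
# matrix, small gain, fixed backward probabilities, natural logarithm). The Lagrange-
# multiplier term keeps the row sums (11) equal to 1, and the entropy production grows.
import numpy as np

P = np.array([[0.30, 0.35, 0.35],
              [0.40, 0.30, 0.30],
              [0.30, 0.35, 0.35]])      # forward probabilities, rows sum to 1
P_back = P.T.copy()                     # backward probabilities P_ji, held fixed
gamma, steps = 0.02, 20

def entropy_production(P):
    return float(np.sum(P * np.log(P / P_back)))

print("initial entropy production:", round(entropy_production(P), 4))
for _ in range(steps):
    G = np.log(P / P_back)                                   # log(P_ij / P_ji)
    P = P + gamma * (G - G.mean(axis=1, keepdims=True))      # update (17)

print("row sums:", np.round(P.sum(axis=1), 6))               # still [1, 1, 1]
print("final entropy production:", round(entropy_production(P), 4))
```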

Let us consider a modified version of the detailed balance-breaking model based on the same speed-gradient principle. Once again, we start with the assumption that the entropy production grows in the optimal manner. However, let us now measure the entropy production by its deviation from the uniform distribution corresponding to the maximum system entropy. This means that the uniform distribution is chosen as the base level for evaluating the entropy production, and the following model is used instead of Equation 10:

\dot S(t) = \sum_{k,l=1}^{N} P_{kl}(t) \log \frac{P_{kl}(t)}{P^{*}}, \qquad (18)

where the probability P^{*} = 1/N^{2} defines the uniform distribution.

The Kullback–Leibler divergence (Equation 18) can also serve as a measure of the current state deviation from the detailed balance state. Taking Equation 18 as the model of the goal function for the speed-gradient method and repeating the calculations, we obtain the following expressions instead of Equation 16:

\dot P_{ij}(t) = \gamma \left[ \log \frac{P_{ij}(t)}{P^{*}} - \frac{1}{N} \sum_{l=1}^{N} \log \frac{P_{il}(t)}{P^{*}} \right], \qquad i, j = 1, \ldots, N, \qquad (19)

where γ > 0 is the gain (activity) coefficient. Because, in reality, only measurements at discrete (sampled) time instants are possible, we arrive at the following final relation instead of Equation 17:

P_{ij}(t) - P_{ij}(t-1) = \gamma \left[ \log \frac{P_{ij}(t-1)}{P^{*}} - \frac{1}{N} \sum_{l=1}^{N} \log \frac{P_{il}(t-1)}{P^{*}} \right], \qquad i, j = 1, \ldots, N. \qquad (20)

Equations 16, 19 and Equations 17, 20 are, respectively, continuous-time and discrete-time models of human brain entropy dynamics proposed via the speed-gradient method. Which model is closer to reality? Answering this question requires a series of experiments with real data.
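For a preliminary comparison, both discrete-time models can be run from the same initial matrix; the sketch below (our example, with the same arbitrary settings as above) does exactly that. Deciding which trajectory better matches real brain data would require fitting both models to empirical transition probabilities:

```python
# Side-by-side run of the two discrete-time models (17) and (20) from the same initial
# matrix (illustrative example; gain, initial matrix, and number of steps are arbitrary).
import numpy as np

def sg_step(P, reference, gamma):
    # One update of the form (17)/(20): gradient term minus its row average.
    G = np.log(P / reference)
    return P + gamma * (G - G.mean(axis=1, keepdims=True))

N = 3
P0 = np.array([[0.30, 0.35, 0.35],
               [0.40, 0.30, 0.30],
               [0.30, 0.35, 0.35]])
P_a, P_b = P0.copy(), P0.copy()
P_back, P_star, gamma = P0.T.copy(), 1.0 / N**2, 0.02    # fixed P_ji and uniform P*

for _ in range(20):
    P_a = sg_step(P_a, P_back, gamma)    # model (17)
    P_b = sg_step(P_b, P_star, gamma)    # model (20)

print("model (17):\n", np.round(P_a, 3))
print("model (20):\n", np.round(P_b, 3))
```

Note that both laws share the same "gradient minus row average" structure; they differ only in the reference against which the forward probabilities are compared.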

5 Networks and cybernetical neuroscience

Over the past two decades, system theory and cybernetics have yielded many new results and approaches that enable researchers to study various properties of complex networks. These findings can be applied to network physiology. For instance, numerous stability and synchronization criteria for complex networks are relevant to network models composed of interconnected mathematical models of neurons. The first results on the control of neuron and neural network models were obtained in the 1990s, focusing on chaos control and synchronization. Carroll (1995) proposed an algorithm for pulse synchronization control of two FitzHugh–Nagumo (FHN) neuron models, drawing parallels between neuronal and electrical processes. In Dragoi and Grosu (1998), an algorithm was designed to control a chain of FHN neurons, aiming to synchronize each neuron's oscillations with those of a “reference” neuron. The stability of the synchronization process was established within a certain range of initial conditions using a linear approximation.

Plotnikov et al. (2016a) proposed algorithms for synchronizing a heterogeneous network of diffusion-coupled models of FHN neurons with a hierarchical architecture based on the speed-gradient method. Synchronization conditions were obtained based on the Lyapunov function method. Similar results were obtained for adaptive control algorithms that do not require precise knowledge of the neuron model parameters (Plotnikov et al., 2016b).
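To convey the flavor of such speed-gradient synchronization laws, here is a deliberately simplified sketch (ours, not the algorithm of the cited works, which treat heterogeneous delay-coupled networks): two identical diffusively coupled FHN neurons whose coupling gain k is adapted by the speed-gradient method for the quadratic goal Q = (v1 − v2)²/2, which makes dk/dt proportional to (v1 − v2)²:

```python
# Minimal illustrative sketch (not the algorithm of Plotnikov et al.): two identical
# diffusively coupled FitzHugh-Nagumo neurons with a coupling gain k adapted by the
# speed-gradient method for Q = 0.5*(v1 - v2)^2, so that dk/dt ~ (v1 - v2)^2.

def fhn(v, w, I=0.5, eps=0.08, a=0.7, b=0.8):
    # FitzHugh-Nagumo right-hand side (assumed standard parameter values).
    return v - v**3 / 3.0 - w + I, eps * (v + a - b * w)

dt, T, gamma = 0.01, 60.0, 0.5
v1, w1, v2, w2, k = -1.0, 1.0, 1.5, 0.0, 0.0      # mismatched initial states, zero gain

for _ in range(int(T / dt)):
    e = v1 - v2
    dv1, dw1 = fhn(v1, w1)
    dv2, dw2 = fhn(v2, w2)
    v1 += dt * (dv1 + k * (v2 - v1));  w1 += dt * dw1
    v2 += dt * (dv2 + k * (v1 - v2));  w2 += dt * dw2
    k  += dt * gamma * e**2                        # speed-gradient adaptation of the gain

print("final |v1 - v2|:", round(abs(v1 - v2), 4), "  adapted gain k:", round(k, 3))
```

The adaptation law simply raises the coupling gain as long as the synchronization error persists, which is the basic mechanism behind the speed-gradient designs discussed above.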

The Lyapunov function method and the speed-gradient method have also been effectively utilized in designing and analyzing control algorithms for synchronization and chaos control problems involving Hindmarsh–Rose models and their networks (Plotnikov, 2021; Semenov et al., 2022).

Currently, a growing body of research focuses not only on studying the properties of neural networks with neurophysiological interpretations but also delves deeply into the challenges associated with intentionally creating or eliminating these properties, that is, controlling networks. Other cybernetics-related tasks concerning networks of neurons or their models are being explored as well, such as state and parameter estimation, pattern recognition, and machine learning. In summary, there is a notable trend leading to the establishment of a substantial and significant new domain within computational neuroscience, which can naturally be called cybernetical neuroscience.

The main directions of the research in cybernetical neuroscience are as follows (Fradkov, 2024):

1. Analysis of the conditions for the models of neural ensembles to possess some special behaviors observed in the brain, such as synchronization, desynchronization, spiking, bursting, solitons, chaos, and chimeras.

2. Synthesis of external (control) inputs that create the special behaviors in the brain models.

3. Estimation of the state and parameters of the brain models based on the results of measuring input and output variables.

4. Classification of brain states and human intentions (using adaptation and machine learning methods) based on real brain state measurements (invasive or noninvasive).

5. Design of control algorithms that provide specified properties of a closed loop system consisting of a controlled neural system and a controlling device, interacting via brain-computer interface.

The approach to searching for how a system should evolve to reach the state of maximum information entropy presented in this article also originated in the area of control science or cybernetics. Its applications to neural systems belong to the area of cybernetical neuroscience.

6 Conclusion

This article proposes a version of the speed-gradient evolution model for systems following the maximum information entropy principle developed by H. Haken in his seminal 1988 book. An explicit relationship (Equation 8) defining the system dynamics under general linear constraints Ap − b = 0 is derived. Analogous results can be formulated for the spatially continuous case, where the discrete information entropy is replaced by the differential information entropy, in line with Fradkov and Shalymov (2015a). The approach is also extended to living systems. Two versions of a human brain entropy detailed balance-breaking model are proposed (Equations 16, 17, 19, 20).

Furthermore, the contours of a novel scientific area termed cybernetical neuroscience, focused on controlling neural systems, are delineated.

Future research might focus on examining the diverse dynamic issues in network physiology. For example, the methodology presented here could be applied to recent findings on utilizing entropy to analyze brain dynamics, as reported by Antonopoulos et al. (2015), Jirsa and Sheheitli (2022), Keshmiri (2020), and Yufik (2019).

Data availability statement

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.

Author contributions

AF: writing–original draft and writing–review and editing.

Funding

The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.

Acknowledgments

The author is grateful to Professor Eckehard Schöll for his invitation to contribute to this valuable issue dedicated to the memory of the outstanding scientist Hermann Haken.

Conflict of interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declare that no Generative AI was used in the creation of this manuscript.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Footnotes

1Notation col stands for column vector hereafter.

References

Antonopoulos, C. G., Srivastava, S., Pinto, S. E. d. S., and Baptista, M. S. (2015). Do brain networks evolve by maximizing their information flow capacity? PLOS Comput. Biol. 11 (8), e1004372. doi:10.1371/journal.pcbi.1004372

Carroll, T. (1995). Synchronization and complex dynamics in pulse-coupled circuit models of neurons. Biol. Cybern. 73, 553–559. doi:10.1007/BF00199547

Dragoi, V., and Grosu, I. (1998). Synchronization of locally coupled neural oscillators. Neural Process. Lett. 7, 199–210. doi:10.1023/A:1009618318908

Fradkov, A. L. (1979). Speed-gradient scheme and its application in adaptive-control problems. Automation Remote Control 40 (9), 1333–1342.

Fradkov, A. L. (1991). "Speed-gradient laws of control and evolution," in Proc. 1st Eur. Control Conf. (ECC), 1991, 1861–1865.

Fradkov, A. L. (2007). Cybernetical physics: from control of chaos to quantum control. Springer-Verlag. doi:10.1007/978-3-540-46277-4

Fradkov, A. L. (2008). Speed-gradient entropy principle for nonstationary processes. Entropy 10 (4), 757–764. doi:10.3390/e10040757

Fradkov, A. L. (2024). Definition of cybernetical neuroscience. doi:10.48550/arXiv.2409.16314

Fradkov, A. L., Miroshnik, I. V., and Nikiforov, V. O. (1999). Nonlinear and adaptive control of complex systems. Dordrecht: Springer (formerly Kluwer Acad. Publishers). doi:10.1007/978-94-015-9261-1

Fradkov, A. L., and Shalymov, D. S. (2014). Information entropy dynamics and maximum entropy production principle. arXiv:1401.2921v1

Fradkov, A. L., and Shalymov, D. S. (2015). Speed gradient and MaxEnt principles for Shannon and Tsallis entropies. Entropy 17 (3), 1090–1102. doi:10.3390/e17031090

Fradkov, A. L., and Shalymov, D. S. (2015a). Dynamics of non-stationary nonlinear processes that follow the maximum of differential entropy principle. Commun. Nonlinear Sci. Numer. Simul. 29, 488–498. doi:10.1109/MED.2015.7158787

Fradkov, A. L., Shalymov, D. S., and Proskurnikov, A. V. (2016). "Speed-gradient entropy maximization in networks," in 2016 IEEE Conference on Norbert Wiener in the 21st Century (21CW), July 13–16, 2016. IEEE, 62–66.

Gnesotto, F. S., Mura, F., Gladrow, J., and Broedersz, C. P. (2018). Broken detailed balance and non-equilibrium dynamics in living systems: a review. Rep. Prog. Phys. 81, 066601. doi:10.1088/1361-6633/aab3ed

González, C., Garcia-Hernando, G., Jensen, E. W., and Vallverdú-Ferrer, M. (2022). Assessing rheoencephalography dynamics through analysis of the interactions among brain and cardiac networks during general anesthesia. Front. Netw. Physiol. 2, 912733. doi:10.3389/fnetp.2022.912733

Haken, H. (1988). Information and self-organization: a macroscopic approach to complex systems. Berlin, Heidelberg: Springer-Verlag.

Ivanov, P. C. (2021). The new field of network physiology: building the human physiolome. Front. Netw. Physiol. 1, 711778. doi:10.3389/fnetp.2021.711778

Ivanov, P. C., and Bartsch, R. P. (2014). "Network physiology: mapping interactions between networks of physiologic networks," in Networks of networks: the last frontier of complexity. Editors G. D. Agostino, and A. Scala (Cham: Springer International Publishing), 203–222.

Jaynes, E. T. (1957). Information theory and statistical mechanics. Phys. Rev. 106, 620–630. doi:10.1103/PhysRev.106.620

Jirsa, V., and Sheheitli, H. (2022). Entropy, free energy, symmetry and dynamics in the brain. J. Phys. Complex. 3. doi:10.1088/2632-072X/ac4bec

Keshmiri, S. (2020). Entropy and the brain: an overview. Entropy 22, 917. doi:10.3390/e22090917

Lynn, C. W., Cornblath, E. J., Papadopoulos, L., and Bassett, D. S. (2021). Broken detailed balance and entropy production in the human brain. PNAS 118 (47), e2109889118. doi:10.1073/pnas.2109889118

Madadi Asl, M., Vahabie, A.-H., Valizadeh, A., and Tass, P. A. (2022). Spike-timing-dependent plasticity mediated by dopamine and its role in Parkinson’s disease pathophysiology. Front. Netw. Physiol. 2, 817524. doi:10.3389/fnetp.2022.817524

Plotnikov, S. A. (2021). Synchronization conditions in networks of Hindmarsh–Rose systems. Cybern. Phys. 10, 254–259. doi:10.35470/2226-4116-2021-10-4-254-259

Plotnikov, S. A., Lehnert, J., Fradkov, A. L., and Schöll, E. (2016a). Synchronization in heterogeneous FitzHugh-Nagumo networks with hierarchical architecture. Phys. Rev. E 94, 012203. doi:10.1103/PhysRevE.94.012203

Plotnikov, S. A., Lehnert, J., Fradkov, A. L., and Schöll, E. (2016b). Adaptive control of synchronization in delay-coupled heterogeneous networks of FitzHugh-Nagumo nodes. Int. J. Bifurcation Chaos 26 (4), 1650058. doi:10.1142/S0218127416500589

Schrödinger, E. (1944). What is life? The physical aspect of the living cell and mind. Cambridge, UK: Cambridge University Press.

Semenov, D. M., Plotnikov, S. A., and Fradkov, A. L. (2022). "Controlled synchronization in regular delay-coupled networks of Hindmarsh-Rose neurons," in 2022 6th Scientific School Dynamics of Complex Networks and their Applications (DCNA), Kaliningrad: IEEE. doi:10.1109/dcna56428.2022.9923218

Shalymov, D., Fradkov, A., Liubchich, S., and Sokolov, B. (2017). Dynamics of the relative entropy minimization processes. Cybern. Phys. 6 (2), 80–87.

Shi, J., Kirihara, K., Tada, M., Fujioka, M., Usui, K., Koshiyama, D., et al. (2022). Criticality in the healthy brain. Front. Netw. Physiol. 1, 755685. doi:10.3389/fnetp.2021.755685

Van Essen, D. C., Smith, S. M., Barch, D. M., Behrens, T. E. J., Yacoub, E., Ugurbil, K., et al. (2013). The WU-Minn Human Connectome Project: an overview. Neuroimage 80, 62–79. doi:10.1016/j.neuroimage.2013.05.041

Yufik, Y. M. (2019). The understanding capacity and information dynamics in the human brain. Entropy 21, 308. doi:10.3390/e21030308

Keywords: information, entropy, network, speed-gradient, evolution, self-organization, control, network physiology

Citation: Fradkov A (2025) Information entropy dynamics, self-organization, and cybernetical neuroscience. Front. Netw. Physiol. 5:1539166. doi: 10.3389/fnetp.2025.1539166

Received: 03 December 2024; Accepted: 19 February 2025;
Published: 21 March 2025.

Edited by:

Eckehard Schöll, Technical University of Berlin, Germany

Reviewed by:

Alexander E. Hramov, Immanuel Kant Baltic Federal University, Russia
Riccardo Meucci, National Research Council (CNR), Italy

Copyright © 2025 Fradkov. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Alexander Fradkov, alf@ipme.ru

