- 1Facultad de Ciencias, Universidad Nacional Autónoma de México, Mexico City, Mexico
- 2Centro de Ciencias de la Complejidad, Universidad Nacional Autónoma de México, Mexico City, Mexico
- 3Institute for Cross Disciplinary Physics and Complex Systems, IFISC (CSIC-UIB), Palma de Mallorca, Spain
- 4Azure Core Security Services, Microsoft, Redmond, WA, United States
- 5Department of Network and Data Science, Central European University, Vienna, Austria
- 6Faculty of Information Technology and Communication Sciences, Tampere University, Tampere, Finland
- 7Department of Computer Science, Aalto University School of Science, Aalto, Finland
- 8Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas, Universidad Nacional Autónoma de México, Mexico City, Mexico
- 9Lakeside Labs GmbH, Klagenfurt, Carinthia, Austria
- 10Santa Fe Institute, Santa Fe, NM, United States
Criticality has been proposed as a mechanism for the emergence of complexity, life, and computation, as it exhibits a balance between order and chaos. In classic models of complex systems where structure and dynamics are considered homogeneous, criticality is restricted to phase transitions, leading either to robust (ordered) or fragile (chaotic) phases for most of the parameter space. Many real-world complex systems, however, are not homogeneous. Some elements change in time faster than others, with slower elements (usually the most relevant) providing robustness, and faster ones being adaptive. Structural patterns of connectivity are also typically heterogeneous, characterized by few elements with many interactions and most elements with only a few. Here we take a few traditionally homogeneous dynamical models and explore their heterogeneous versions, finding evidence that heterogeneity extends criticality. Thus, parameter fine-tuning is not necessary to reach a phase transition and obtain the benefits of (homogeneous) criticality. Simply adding heterogeneity can extend criticality, making the search/evolution of complex systems faster and more reliable. Our results add theoretical support for the ubiquitous presence of heterogeneity in physical, biological, social, and technological systems, as natural selection can exploit heterogeneity to evolve complexity “for free”. In artificial systems and biological design, heterogeneity may also be used to extend the parameter range that allows for criticality.
1 Introduction
Phase transitions have been studied extensively to describe changes in states of physical matter (Stanley, 1987), and are typically characterized by symmetry breaking (Anderson, 1972). They have also been studied more generally in dynamical systems, such as vehicular traffic (Chowdhury et al., 2000; Helbing, 2001). Near phase transitions, critical dynamics are known to occur (Mora and Bialek, 2011). These are also associated with scale invariance and complexity (Christensen and Moloney, 2005). There are several examples of criticality in biological systems (Muñoz, 2018), including neural dynamics (Beggs, 2008; Chialvo, 2010), genetic regulatory networks (Shmulevich et al., 2005; Balleza et al., 2008), and collective motion (Vicsek and Zafeiris, 2012). These are already serving as inspiration for building artificial critical systems, such as robots (Braccini et al., 2022).
It is often argued that critical dynamics are prevalent or desirable in a broad variety of systems because they offer a balance between robustness and adaptability (Monod, 1970; Langton, 1990; Kauffman, 1993; Hidalgo et al., 2016). If dynamics are too ordered, then information and functionality can be preserved, but it is difficult to adapt. The opposite occurs with chaotic dynamics: change allows for adaptability, but it also leads to fragility, as small changes percolate through the system and useful information tends to be lost. Thus, for phenomena such as life, computation, and complex systems in general, critical dynamics should be favored by evolutionary processes (Gershenson, 2012; Torres-Sosa et al., 2012; Roli et al., 2018).
There are different ways in which one can measure criticality (Langton, 1990; Wuensche, 1999; Luque and Solé, 2000; Pascual and Guichard, 2005; Mastromatteo and Marsili, 2011), some of which are related to entropies. For example, Fisher information is maximized at phase transitions (Prokopenko et al., 2011; Wang et al., 2011). Still, it decreases rapidly away from the transition, which makes it difficult to evaluate how far a system is from criticality. In this work, we use a measure of complexity (Fernández et al., 2014) based on Shannon information that is also maximized at phase transitions, but decreases more gradually and is straightforward to calculate: Fisher information requires the observer to measure the effects of controlled perturbations on the system, which is not possible in many cases, whereas the measure we use can be applied simply and efficiently to any time series. There are several definitions and measures of complexity (Lloyd, 2001), but, crucially, the one we use here is highly correlated with criticality.
If criticality is found only near a phase transition, then most of the parameter space would yield non-viable solutions or, at best, suboptimal ones. How, then, can a search procedure find the right parameters for criticality? Self-organized criticality (Bak et al., 1987; Adami, 1995; Hesse and Gross, 2014; Vidiella et al., 2020) has been proposed as an answer. Although interesting and useful for specific cases, it is not present in many critical phenomena. Criticality can also be realized through a mixture of excitatory and inhibitory interactions (Usefie Mafahim et al., 2015). In general, one can think of different mechanisms that will find or adjust parameters so that criticality is achieved. And yet, could criticality be more prevalent than previously thought?
In previous work studying rank dynamics in a variety of systems (Cocho et al., 2015; Iñiguez et al., 2022), we observed that the most relevant elements change more slowly than less relevant ones. We hypothesized that this heterogeneous temporality equips systems with robustness and adaptability at the same time. Here we explore the role of heterogeneity in different dynamical systems. We show that different types of heterogeneity extend the parameter region where critical dynamics are observed (Bailly and Longo, 2008). Thus, we can say that heterogeneity yields “criticality for free”, reducing the problem of fine-tuning parameters.
2 Measuring complexity
Following Lopez-Ruiz et al. (1995), we have proposed a measure of complexity (Fernández et al., 2014) based on Shannon’s information (Shannon, 1948),
where K is a positive constant and b is the length of the alphabet (for all the cases considered in this paper, b = 2). This measure is mathematically equivalent to the Boltzmann-Gibbs entropy. To normalize I to [0,1], we use
I is maximal when the probabilities are homogeneous, i.e., there is the same probability of observing any symbol along a string. I is minimal when only one symbol is found in a string (so it has a probability of one, and all the rest have a probability of zero). Chaotic dynamics are characterized by a high I, while ordered (static) dynamics are characterized by a low I. Inspired by Lopez-Ruiz et al. (1995), we define complexity C as the balance between ordered and chaotic dynamics,
where the constant 4 is added to normalize the measure to [0,1].
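As a practical illustration, the following minimal Python sketch (not the authors' code) computes both quantities for a binary string, assuming the standard forms implied above: I as the base-2 Shannon entropy of the symbol frequencies, normalized to [0, 1], and a complexity of the form C = 4I(1 − I), which the normalization constant 4 suggests.

```python
# Minimal sketch (assumed forms, not the authors' code):
# I = normalized Shannon information of a symbol string (b = 2 here),
# C = 4 * I * (1 - I): zero for fully ordered or fully random strings,
# maximal (1) when I = 0.5.
from collections import Counter
from math import log2

def normalized_information(series, b=2):
    """Shannon entropy of the symbol frequencies, normalized to [0, 1]."""
    counts = Counter(series)
    n = len(series)
    probs = [c / n for c in counts.values()]
    return -sum(p * log2(p) for p in probs) / log2(b)

def complexity(series, b=2):
    """Complexity as the balance between order (low I) and chaos (high I)."""
    i = normalized_information(series, b)
    return 4 * i * (1 - i)

print(complexity("0000000000"))  # ordered: I = 0, so C = 0
print(complexity("0110100101"))  # balanced (p1 = 0.5): I = 1, so C = 0
print(complexity("0000000001"))  # biased (p1 = 0.1): I ≈ 0.47, C ≈ 0.996
```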
3 Results
We first present results of a heterogeneous version of the Ising model, where elements have different temperatures. We then explore structural and temporal heterogeneity in random Boolean networks. Afterwards, we abstract the specific dynamics of a system and investigate the general conditions under which heterogeneity promotes criticality. Finally, we provide a general solution, independent of any measure, using Jensen’s inequality.
3.1 Value heterogeneity: the Ising model
We consider a system of interacting atoms arranged in a two-dimensional square lattice with periodic boundary conditions (Figure 1A). The state of an atom is defined by its magnetic dipole moment: a two-valued spin representing the orientation of the magnetic field produced by the atom. Intuitively, neighboring atoms with the same spin value contribute less to the total energy of the system than atoms with different spin values. Systems of this kind evolve preferentially to states with the lowest possible energy. When the temperature of the environment is increased, the system heats up, and we observe a sudden change in a global property of the system, namely, loss of magnetization. A simple theoretical model of such a system is the Ising model (Ising, 1925; Glauber, 1963).
FIGURE 1. (A) Two-dimensional Ising model over a square lattice. The graph is wrapped into a torus, implementing periodic boundary conditions. (B) The correlation function is lower at low and high temperatures than at the critical temperature T ≈ 2.27, where it is maximal. (C) Correlation as a function of complexity illustrates that complexity is a good proxy for criticality. (D) Average complexity with error bars for the Ising model at different temperatures, considering homogeneous (blue), heterogeneous Poisson-distributed (orange), and heterogeneous exponentially distributed (gray) temperatures. The black dotted vertical line marks the theoretical phase transition at T ≈ 2.27 (in practice smaller due to finite-size effects).
The Ising model is usually homogeneous: all atoms are subjected to the same temperature, and one explores different properties as the temperature T varies. This is a good assumption when all atoms can be considered to behave in a similar way. However, if we are modeling an Ising-like biological system (Hopfield, 1982), then each element might have slightly different properties. In the proposed heterogeneous case, each atom has a temperature taken from a Poisson or exponential distribution with a mean equal to the temperature of the homogeneous case, for comparison (see Section 5.1 for details).
Figure 1B shows the standard correlation function of the Ising model for varying temperature. This is maximal at the phase transition at T ≈ 2.27, i.e., at criticality. Figure 1C shows that there is a correspondence between the correlation function and the complexity measure in Eq. 3. Figure 1D shows the average complexity C as T increases. Complexity is maximal near the phase transition for the homogeneous case. Heterogeneity shifts the maximum of the complexity curve (which reflects criticality), but it also broadens it, in the sense that the area under the curve increases. In other words, critical-like dynamics are found for a broader range of T values (defined here, as an example, as complexity larger than an arbitrary threshold of 0.8).
Figure 2A explores the role of finite-size effects in the Ising model for homogeneous and heterogeneously distributed temperatures, by increasing the length L of the underlying square lattice. In all cases, the width of the complexity curves increases with the length of the lattice, but the region where criticality is extended by heterogeneity does not depend greatly on the lattice size. The exponential distribution shows the largest effect of all. We also performed a standard finite-size scaling analysis (see Section 5.1.4). As seen in Figure 2B, the data for different lattice sizes collapse onto a single scaled curve. This shows that criticality is extended by heterogeneity independently of the complexity measure used (Eq. 3).
FIGURE 2. (A) Complexity as a function of temperature in the Ising model for varying length L of the underlying finite square lattice. The extended criticality seen in the heterogeneous cases (Poisson and exponentially distributed temperatures, as opposed to the homogeneous case) does not change greatly with the size of the lattice. (B) Plot of the scaling function leading to data collapse. Plotting L^(−ζ/ν) A(L, T) against L^(1/ν)(T − Tc) should let the experimental data collapse onto the single curve f(x) (see Section 5.1.4). We also show the values of the critical exponents ν and ζ, calculated numerically with the pyfssa (Sorge, 2015) Python library. Extended criticality is also exhibited by the scaling function in the heterogeneous versions of the Ising model, independently of the complexity measure used elsewhere.
Table 1 contains an overview of the critical exponents calculated via finite-size scaling. For the homogeneous distribution, the critical temperature and exponents roughly agree with known values for the Ising model, as expected. In all cases, the critical exponent ζ is such that data from all lattice sizes collapse onto a single curve, indicating that the scaling function is adequate. For the heterogeneous distributions, we notice slightly lower critical temperatures than for the homogeneous Ising model, but higher values of the critical exponent ν. This is yet another signal of the increase in the width of the complexity curves around the transition point, further indicating that heterogeneity extends criticality in the Ising model. Similar effects due to spatial heterogeneity have already been noticed by Griffiths (1969) and others (Bray, 1987; Vojta, 2006; Munoz et al., 2010; Ódor and Hartmann, 2018), and also used to study neural systems (Haimovici et al., 2013; Moretti and Muñoz, 2013). Less has been explored about temporal heterogeneity, with a notable exception by Vazquez et al. (2011).
TABLE 1. Critical temperature Tc and critical exponents ν and ζ, calculated via finite-size scaling (see Section 5.1.4) for the Ising model with both homogeneous and heterogeneously distributed temperatures.
3.2 Temporal and structural heterogeneity: random Boolean networks
We now turn to the role of temporal heterogeneity in the critical behaviour exhibited by random Boolean networks. A gene is a part of the genomic sequence that encodes how to produce (synthesize) either a protein or some RNA (gene products). Gene product synthesis is called gene expression. Because not all gene products are synthesized at the same time, the regulation of gene expression is constantly taking place within a cell. In fact, the expression of each gene is regulated (among many things) by the expression of other genes in the genome. This gives rise to an interaction structure known as a genetic regulatory network. Boolean networks are a theoretical model of genetic regulatory networks. In random Boolean networks (RBNs) (Kauffman, 1969; 1993), traditionally there is homogeneous topology and updating. In this case, critical dynamics are found close to a phase transition between ordered and chaotic phases (Derrida and Pomeau, 1986; Luque and Solé, 1997; Wang et al., 2010).
Figure 3A shows an example of the topology of an RBN with seven nodes (N = 7) and two inputs each (K = 2). Each node has a lookup table where all possible combinations of its inputs are specified (Figure 3B). Using an ensemble approach, for each parameter combination, we randomly generate topologies (structure) and lookup tables (function), and then evaluate them in simulations. Depending on different parameters, the dynamics of RBNs can be classified as ordered, critical (near a phase transition), or chaotic. Figure 3C shows examples of these dynamics for different K values.
FIGURE 3. (A) Example of a k-in regular directed graph with set of nodes V = {1, 2, … , 7} (N = 7) and K = 2. (B) Truth table of the functions comprising a Boolean network with 7 nodes and K = 2. (C) Examples of the three regimes of CRBNs and their measures of complexity, using 40 nodes (N = 40) with 100 steps each (time flows downwards). For K = 1, C = 0.0558. For K = 2, C = 0.9951. For K = 5, C = 0.4714. (D) Average complexity of RBNs as the average connectivity K is increased. Combinations of “homogeneous” structure (Poisson), heterogeneous structure (Exponential), homogeneous temporality (CRBN), and heterogeneous temporality (DGARBN). ΔK = 0.2, N = 100, with 1,000 iterations for each K.
One can have heterogeneous topology in different ways (Oosawa and Savageau, 2002; Aldana, 2003), as genetic regulatory networks are not homogeneous: few genes affect many genes, and many genes affect few genes. Just like in the previous case of the Ising model, here we use Poisson and exponential distributions. Strictly speaking, both are heterogeneous, but exponential is more heterogeneous than Poisson, which here we consider as “homogeneous”. The technical reason for using a Poisson distribution is that it allows us to explore non-integer average connectivity values in the network.
We can also have heterogeneous updating schemes (Gershenson, 2002), as it can be argued that not all genes in a network “march in step” (Harvey and Bossomaier, 1997). Classical RBNs (CRBNs) have synchronous, homogeneous temporality, while here we use Deterministic Generalized Asynchronous RBNs (DGARBNs) for heterogeneous temporality. In particular, the updating period of each node equals its out-degree, so the more nodes a node affects, the more slowly it is updated (see Section 5.2 for details).
Figure 3D compares the average complexity C as the average connectivity K is increased. Structural and temporal homogeneity (CRBN-Poisson) has a classical complexity profile, maximizing near the phase transition (K = 2 for the thermodynamical limit, i.e., N → ∞). It can be seen that structural heterogeneity (CRBN-Exponential) extends criticality more than temporal heterogeneity (DGARBN-Poisson), which basically shifts the curve to the right. Still, having both structural and temporal heterogeneity (DGARBN-Exponential) extends criticality even more than having structural heterogeneity only.
3.3 Arbitrary complexity
Abstracting from the results of the previous examples, and without relying on any particular model, here we explore the measure of complexity (Eq. 3) exhaustively in homogeneous and heterogeneous settings, to observe when each case yields a higher average complexity. To do so, we directly vary the probability p1 of having ones in a binary string (Figure 4A).
FIGURE 4. (A) Average complexity C for collections of strings with average probability of ones p1, in homogeneous (blue circles) and heterogeneous (red triangles) cases. The latter yields higher average complexity in the central region, where the homogeneous complexity is low. (B) Illustration of Jensen’s inequality. The function of the average f(⟨x⟩) of a variable with a distribution with average ⟨x⟩ is lower than the average of the function ⟨f(x)⟩ for convex functions. The opposite is the case for concave functions.
In the homogeneous case, we calculate directly the complexity C as a function of p1 using Eq. 3, assuming that we are averaging the complexities of several elements with the same p1. For the heterogeneous case, we generate a collection of probabilities with mean p1 and standard deviation 0.2 (truncating negative values to 0 and values greater than one to 1), calculate their complexities, and then average them. Heterogeneity achieves higher complexities for roughly 0.25 < p1 < 0.75. One might wonder why all heterogeneous complexities avoid extreme values, even when heterogeneous RBNs can have complexities close to zero and one. This is because of the standard deviation of the distributions from which the probabilities are generated. Smaller standard deviations yield curves closer to the homogeneous case.
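The comparison can be sketched as follows (an assumed setup consistent with the description above, with hypothetical function names; it is not the authors' code):

```python
# Sketch of the Figure 4A comparison: homogeneous -> C evaluated at p1;
# heterogeneous -> C averaged over probabilities drawn around p1 with
# standard deviation 0.2, clipped to [0, 1].
import numpy as np

def complexity_of_p(p):
    """C = 4 * I * (1 - I) for a binary source with probability p of ones."""
    if p in (0.0, 1.0):
        return 0.0
    i = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    return 4 * i * (1 - i)

def heterogeneous_complexity(p1, std=0.2, samples=10_000, rng=None):
    """Average C over probabilities drawn from a normal with mean p1, clipped to [0, 1]."""
    rng = rng or np.random.default_rng(0)
    ps = np.clip(rng.normal(p1, std, samples), 0.0, 1.0)
    return np.mean([complexity_of_p(p) for p in ps])

for p1 in np.linspace(0.05, 0.95, 10):
    print(f"p1={p1:.2f}  homogeneous={complexity_of_p(p1):.3f}  "
          f"heterogeneous={heterogeneous_complexity(p1):.3f}")
```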
Heterogeneity will sometimes be better than homogeneity and vice versa; by asking when each is the case, we can further generalize our results to be independent of any measure or function. If we have homogeneity of a variable x, all elements will have the same value of x, and thus the mean ⟨x⟩ will be equal to any xi. Consequently, the average of any function ⟨f(x)⟩ will be equal to any f(xi). If we have heterogeneity, then the mean ⟨x⟩ will be given by some distribution of different values of x, and similarly for ⟨f(x)⟩.
We can then say that heterogeneity is preferred when the average of the function is greater than the function of the average, i.e., ⟨f(x)⟩ > f(⟨x⟩).
Jensen’s inequality (McShane, 1937) tells us that heterogeneity will be “better” than homogeneity for convex functions (Figure 4B). If we have a heterogeneous distribution with mean ⟨x⟩, a convex function will fulfill that the average of the function ⟨f(x)⟩ (heterogeneity) is greater than the function of the average f(⟨x⟩) (homogeneity). For more complex functions, their convex parts will benefit from heterogeneity and their concave parts will benefit from homogeneity (as can be seen for C in Figure 4A).
For linear functions, it can be shown that there is no difference between homogeneity and heterogeneity, as f(⟨x⟩) will always be equal to ⟨f(x)⟩ (see proof in Section 5.3). Thus, we conclude that the difference between homogeneity and heterogeneity is relevant only for nonlinear functions.
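A quick numerical check of this argument, with example functions chosen here purely for illustration (they are not from the paper):

```python
# <f(x)> vs f(<x>) for convex, concave, and linear example functions.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 100_000)   # a heterogeneous distribution of x
mean_x = x.mean()

examples = [("convex  x**2   ", np.square),
            ("concave sqrt(x)", np.sqrt),
            ("linear  3x + 1 ", lambda v: 3 * v + 1)]
for name, f in examples:
    print(f"{name}: <f(x)> = {f(x).mean():.4f}   f(<x>) = {f(mean_x):.4f}")
# Convex: <f(x)> > f(<x>), so heterogeneity is preferred;
# concave: the reverse; linear: both coincide.
```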
4 Discussion
There are several recent examples of heterogeneity offering advantages when compared to homogeneous systems. And yet, it seems that a general treatment of the role of heterogeneity in criticality and other collective phenomena has remained elusive. In public transportation systems, for example, theory tells us that passengers are served optimally (wait at stations for a minimum time) if headways are equal, i.e., homogeneous. However, equal headways are unstable (Gershenson and Pineda, 2009; Quek et al., 2021). In turn, adaptive heterogeneous headways can deliver supraoptimal performance through self-organization (Gershenson, 2011), due to the slower-is-faster effect (Gershenson and Helbing, 2015): passengers do wait more time at stations, but once they board a vehicle, on average they will reach their destination faster, as the idling required to maintain equal headways is avoided.
There are other examples where heterogeneity promotes synchronization (see Zhang et al. (2021) and references therein). Zhang et al. (2021) showed that random parameter heterogeneity among oscillators can consistently rescue the system from losing synchrony. In related work, Molnar et al. (2021) found that heterogeneous generators improve stability in power grids. Recently, Ratnayake et al. (2021) explored complex networks with heterogeneous nodes, observing that they have a greater robustness as compared to networks with homogeneous nodes. In social networks, Zhou et al. (2020) have found that heterogeneity of social status may drive network evolution towards self-optimization. Structural heterogeneity has also been shown to favor the evolution of cooperation (Santos et al., 2006; 2008).
These examples suggest that heterogeneous networks improve information processing. With heterogeneity, elements can in principle process information differently, potentially increasing the computing power of a heterogeneous system over a homogeneous one with similar characteristics. This is related to Ashby’s law of requisite variety (Ashby, 1956; Gershenson, 2015), which states that an active controller should have at least the same variety (number of states) as the system it controls. It is straightforward to see with random Boolean networks that temporal heterogeneity increases the variety of the system: the state space (of size 2^N for homogeneous temporality) can explode once we have to include the precise periods and phases of all nodes (in heterogeneous temporality), as different combinations of the temporal substates may take the same node substate to different node substates. Higher K also implies more possible networks. Even if there are evolutionary pressures for efficiency (smaller networks), if heterogeneity shifts criticality to higher K, then it will be easier for an evolutionary search to find critical dynamics in larger spaces. In recent work (López-Díaz et al., 2023), we have found that functional heterogeneity (having a distribution of bias in lookup tables, rather than the same value for all nodes) also extends criticality, as well as antifragility (Taleb, 2012).
Shannon’s information (Shannon, 1948), mathematically equivalent to Boltzmann-Gibbs entropy, is maximal when the probability of every symbol or state is the same, i.e., homogeneous. Thus, one can measure heterogeneity as an inverse of entropy (one minus the normalized Shannon’s information) (Fernández et al., 2014). It is clear that maximum heterogeneity has its limitations (as measured here, it would occur when only one symbol or state has probability one, and all the rest probability zero). Thus, we can assume that there will be an “optimal” balance between minimum and maximum heterogeneities. The precise balance will probably depend on the system, its context, and may even change in time. If we want heterogeneity to take the dynamics towards criticality (or somewhere else), then the precise “optimal” heterogeneity will depend on how far we are from criticality (Gershenson, 2012; Pineda et al., 2019). In this sense, a potential relationship with no-free-lunch theorems (Wolpert and Macready, 1995; 1997) seems an interesting area of further research.
When homogeneous systems are analyzed in terms of their symmetries, heterogeneity is a type of symmetry breaking. In converse symmetry breaking (Nishikawa and Motter, 2016), only heterogeneity leads to stability, i.e., system symmetry is broken to preserve state symmetry. This idea can be used to control the stability of complex systems using heterogeneity (Nicolaou et al., 2021). A further avenue of research is the relationship between heterogeneity and Lévy flights (Iñiguez et al., 2022). Lévy flights are heterogeneous, since they consist of many short jumps and a few large ones. They offer a balance between exploration and exploitation, and seem advantageous for foraging (Ramos-Fernández et al., 2004), extinction prevention (Dannemann et al., 2018), and search algorithms (Martínez-Arévalo et al., 2020). Another interesting relationship to study is the one between heterogeneity and non-reciprocal systems (Fruchart et al., 2021). The exploration of heterogeneity within self-organized criticality may also prove useful.
Network science (Albert and Barabasi, 2002; Newman, 2003; Barabási, 2016) has demonstrated the relevance of structural heterogeneity. This should be complemented with a systematic exploration of temporal (Barabási, 2005) and other types of heterogeneity. It would be interesting to study heterogeneous adaptive (Gross and Sayama, 2009) and temporal (Holme and Saramäki, 2012; Holme, 2015) networks, where each node has a different speed for its dynamics. Temporal heterogeneity enables a system to match the requisite variety of its environment at different timescales. If systems can adapt at the scales at which their environments change, then they will do so better if they have a variety of timescales, i.e., heterogeneous temporality. Recently, Sormunen et al. (2022) have shown that adaptive networks have critical manifolds that can be navigated as parameters change. In other words, criticality is not restricted to a single value, but can be associated with a manifold in a multidimensional system.
In ecology, there is a global tendency towards increased homogenization (fewer species of plants and animals), i.e., reduced biodiversity due to agricultural expansion and invasive species (Ruckelshaus et al., 2020). There is also an increase in the intensity of disturbances, such as fire (Bowman et al., 2020), that are predicted to lead to critical transitions (Abades et al., 2014; Scheffer, 2020) with global consequences (Barnosky et al., 2012). Thus, increasing ecosystem heterogeneity (e.g., diversity) might be a way of reducing the effects of climate change, by promoting criticality.
Further research is required to better understand the role of heterogeneity in the criticality of complex systems. The present work is limited and many open questions remain. We encourage the reader to experiment with a heterogeneous version of their favorite homogeneous complex-system model, be the heterogeneity structural, temporal, or of another type. We could learn more from heterogeneous models of collective motion (Arenas et al., 2008), opinion formation (Peralta et al., 2022), epidemic spreading (Sander et al., 2002; Pastor-Satorras et al., 2015), financial markets, urban growth, ecosystems (Roy et al., 2003; Pascual et al., 2011), supply chains, brains (Balasubramanian, 2015; Effenberger et al., 2022), and more. This could contribute to a broader understanding of heterogeneity and its relationship with criticality.
5 Methods
A graph G consists of a set of vertices V and a set of edges E, where an edge is an unordered pair of distinct vertices of G. We write u ∼ v to denote that {u, v} is an edge, and in this case we say that u and v are adjacent. If H is a graph with vertex set W ⊂ V and edge set F ⊂ E, we say that H is a subgraph of G. A graph is said to be connected if for every pair of distinct vertices u and v, there is a finite sequence of distinct vertices a0, a1, … , an such that a0 = u, an = v, and ai−1 ∼ ai for each i = 1, … , n. A connected component of G is a maximal connected subgraph of G. A graph is said to be finite just in case its vertex set is finite. A graph is called d-regular if every vertex is adjacent to exactly d ≥ 1 distinct vertices.
A directed graph D consists of a set V of elements a, b, c, … called the nodes of D and a set A of ordered pairs of nodes (a, b), (b, c), … called the arcs of D. We use the symbol ab to represent the arc (a, b). If ab is in the arc set A of D, then we say that a is an incoming neighbour (or in-neighbour) of b, and also that b is an outgoing neighbour (or out-neighbour) of a. We say that D is k-in regular (k ≥ 1) if every node has exactly k in-neighbours: for every node a there are distinct nodes a1, … , ak, such that aja ∈ A for j = 1, … , k. In other words, D is k-in regular just in case the set of in-neighbours of any node has exactly k elements, all distinct, and possibly including the node itself. The out-degree of a node a is the number of nodes b such that the arc ab is in the arc set of D. Thus the out-degree of a is the number of out-neighbours of a. Similarly, the in-degree of a node a is the number of nodes c such that ca ∈ A. Thus the in-degree of a is the number of in-neighbours of a.
5.1 The Ising model with individual temperatures
It is quite common to study the Ising model on a finite, connected 4-regular graph where the number of edges is twice the number of vertices. This graph is usually introduced as a finite two-dimensional square lattice wrapped onto the surface of a torus. An example of such a graph with 25 vertices and 50 edges is shown in Figure 1A.
5.1.1 The Ising model
We start with a finite graph G = (V, E). We identify the vertex set of G with a system of interacting atoms. Each vertex u ∈ V is assigned a spin σu which can take the value +1 or −1. The energy of a configuration of spins σ = (σu: u ∈ V) is H(σ) = −∑u∼v σu σv, where the sum runs over all edges {u, v} ∈ E.
The energy increases with the number of pairs of adjacent vertices having different spins. The Ising model is a way to assign probabilities to the system configurations. The probability of a configuration σ is proportional to exp (−βH(σ)), where β ≥ 0 is a variable inversely proportional to the temperature.
More precisely, the Ising model with inverse temperature β is the probability measure μ on the set of configurations X = {+1,−1}V defined by μ(σ) = exp(−βH(σ))/Z,
where Z = Z (G, β) is a normalizing constant. This constant can be computed explicitly as
where |A| denotes the cardinality of a finite set A, and k⟨F⟩ the number of connected components of the (spanning) subgraph ⟨F⟩ = (V, F) of G. Then
where C = ∑F⊂E 2^{k⟨F⟩} and so, for any configuration σ, we have that
As the temperature increases (and hence β → 0), μ converges to the uniform measure over the space of configurations. When the temperature decreases, β > 0 increases, and μ assigns greater probability to configurations that have a large number of pairs of adjacent vertices with the same spin.
5.1.2 Simulation
Most simulations of the Ising model use either the Glauber dynamics or the Metropolis algorithm for constructing a Markov chain with stationary measure μ. Here we only describe the Metropolis chain for the Ising model.
Given two configurations σ, σ′ ∈ X, let P(σ, σ′) denote the probability that the Metropolis chain for the Ising model moves from σ to σ′. For every a ∈ V, we write σa to denote the configuration obtained from σ by flipping the sign of the value that σ assigns to a and leaving all the other spins the same. In other words, σa ∈ X is the unique configuration which agrees everywhere with σ except for the spin assigned to vertex a: for every u ∈ V, σa(u) = σ(u) if u ≠ a, and σa(a) = −σ(a). At each step, the chain picks a vertex a uniformly at random and moves from σ to σa with probability P(σ, σa) = (1/|V|) [1 ∧ μ(σa)/μ(σ)],
where x ∧ y denotes the minimum of the quantities x and y. The probability that the chain stays at the same configuration σ is then P(σ, σ) = 1 − ∑a∈V P(σ, σa).
A key property about these transition probabilities is that they only depend on the ratios μ(σa)/μ(σ). Therefore, to simulate the Metropolis chain it is not necessary to compute the normalizing constant Z of the Ising measure μ.
To summarize, we have constructed a transition matrix P that defines a reversible Markov chain with stationary measure μ.
Proposition 1. The Metropolis chain for the Ising model has stationary measure μ.
Proof. It is sufficient to prove that the probability measure μ and the transition matrix P satisfy the detailed balance equations
for all σ ≠ σ′. To show this, it suffices to verify that Eq. 5 holds when σ′ = σa for some a ∈ V. After cancellation of 1/|V| and distributing μ(σ) and μ(σa) accordingly, it suffices to check
or equivalently
which is obvious.
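Written out, this step rests on the elementary Metropolis identity below (a restatement in the notation above, not an addition to the proof):

```latex
% Cancelling the common factor 1/|V| in the detailed balance equations,
% it remains to verify
\begin{align*}
\mu(\sigma)\left[1 \wedge \tfrac{\mu(\sigma^{a})}{\mu(\sigma)}\right]
  = \mu(\sigma) \wedge \mu(\sigma^{a})
  = \mu(\sigma^{a}) \wedge \mu(\sigma)
  = \mu(\sigma^{a})\left[1 \wedge \tfrac{\mu(\sigma)}{\mu(\sigma^{a})}\right],
\end{align*}
% which holds because x [1 \wedge (y/x)] = x \wedge y for any positive x, y.
```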
5.1.3 Individual temperatures
In the previous section, we described how to construct a transition matrix P that defines a reversible Markov chain with stationary measure μ. Starting at a configuration σ, the probability that the chain moves to a new configuration σa, for any a ∈ V, is given by P(σ, σa) = (1/|V|) [1 ∧ exp(−βΔHa(σ))], where ΔHa(σ) = H(σa) − H(σ) is the change in energy due to flipping the spin at a. Thus, the transition probability from σ to σa of the Metropolis chain P for the Ising model with parameter β ≥ 0 is determined by the quantity exp(−βΔHa(σ)).
We now turn to study a situation where each vertex a has its own parameter βa. In other words, we shall describe a Markov chain Pind that moves from σ to σa with probability depending on exp(−βaΔHa(σ)), where βa ≥ 0 is an individual (possibly distinct) parameter for each a ∈ V. More precisely, the probability that the new chain moves from σ to σa is defined as Pind(σ, σa) = (1/|V|) [1 ∧ exp(−βaΔHa(σ))].
The probability that the chain stays at the same configuration is Pind(σ, σ) = 1 − ∑a∈V Pind(σ, σa).
Hence, all the configurations σ′ that differ from σ in at least two vertices are not reachable from σ. That is to say, Pind(σ, σ′) = 0 whenever σ′ ≠ σ and σ′ ≠ σa for every a ∈ V.
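A minimal simulation sketch of this chain, under the assumptions of a periodic L × L square lattice, unit coupling, and one individual temperature per site drawn as in Section 3.1 (names and parameter values here are illustrative, not the authors'):

```python
# One Metropolis "sweep" of the chain P_ind with a site-dependent beta_a.
# Delta H_a(sigma) = 2 * sigma_a * (sum of the four neighbouring spins).
import numpy as np

def metropolis_sweep(spins, betas, rng):
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)          # pick a vertex uniformly
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dH = 2 * spins[i, j] * nb
        # accept the flip with probability 1 ∧ exp(-beta_a * Delta H_a)
        if dH <= 0 or rng.random() < np.exp(-betas[i, j] * dH):
            spins[i, j] *= -1

L, T = 32, 2.27
rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(L, L))
# heterogeneous case: one temperature per site with mean T (Poisson here;
# an exponential distribution works the same way), beta_a = 1 / T_a
site_T = np.maximum(rng.poisson(T, size=(L, L)), 1e-6)  # avoid division by zero
betas = 1.0 / site_T
for _ in range(1_000):
    metropolis_sweep(spins, betas, rng)
```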
Definition 1. (Ising measure with individual temperatures). Let G = (V, E) be a finite, connected graph and (βu: u ∈ V) a collection of non-negative real numbers. The probability measure μind on X = {+1,−1}V is defined by
where Zind is a normalizing constant (the sum over all σ ∈ X of the corresponding unnormalized weights).
Remark 1. We can think of μind as a heterogeneous Ising model, as opposed to the homogeneous version μ defined in Section 5.1.1 by
Remark 2. It is clear that the probability measure μ is a stationary measure of the Markov chain defined by the transition matrix Pind just in case we have βa = β for all a ∈ V. In other words, μind = μ if and only if the individual parameters βa in the definition of Pind are all equal to the single parameter β of the homogeneous Ising model.
Proposition 2. The probability measure μind is the stationary measure of the Markov chain defined by the transition matrix Pind.
Proof. In order to satisfy the detailed balance equations
we must have
for all σ and σa, because
Now, if ΔHa(σ) ≥ 0 then βaΔHa(σ) ≥ 0, and hence exp (βaΔHa(σ)) ≥ 1, so
Otherwise, if ΔHa(σ) < 0 then −βaΔHa(σ) ≥ 0, and so exp (−βaΔHa(σ)) ≥ 1, hence
In both cases, we arrive at the conclusion that in order for μind to be the stationary measure of the chain defined by Pind, we must have
for every σ ∈ X and a ∈ V. Now we proceed to prove that Eq. 6 holds. After cancellation of 1/Zind and using properties of the exponential function, it suffices to check
By inspection,
Therefore, the probability measure μind and the transition matrix Pind satisfy the detailed balance equations and the result follows. □
5.1.4 Finite-size scaling analysis
Finite-size scaling (FSS) analysis explores the observables of critical phenomena in a finite-size system. A phase transition is an abrupt change in an infinite volume system at some values of control parameters like temperature and magnetic field. The values of the control parameters where this happens are known as critical points. Following these ideas, we characterize the temperature-driven phase transition of the two-dimensional Ising model (at zero external magnetic field) on both homogeneous and heterogeneous systems.
Consider a system with some temperature T, which experiences a phase transition at a critical temperature Tc. In the critical region, a diverging quantity A∞(T) scales as |T − Tc|^−ζ with some critical exponent ζ. This critical behavior should hold in systems of finite length L at scales much larger than the characteristic length scale ξ. The characteristic length scale ξ is the correlation length in the infinite system (L → ∞).
As the correlation length diverges, i.e., ξ ∼ |T − Tc|^−ν for T → Tc, in large systems we have
For smaller systems, we also have the cutoff
that is,
The scaling function f(x) depends on the ratio L/ξ between the length of the finite system and the correlation length of the infinite system. This ratio controls finite-size effects. According to Newman and Barkema (1999) and Binder and Heermann (2010), the conventional scaling function can be written, in terms of temperature, as A(L, T) = L^(ζ/ν) f(L^(1/ν)(T − Tc)).
In the case of the two-dimensional Ising model in a square lattice with nearest neighbor interactions and no external magnetic field, the critical temperature is Tc = 2/ln(1 + √2) ≈ 2.269.
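In outline, the data collapse of Figure 2B amounts to rescaling the measured curves as follows (an illustrative sketch with placeholder data and placeholder exponents, not the fitted values of Table 1):

```python
# Rescale A(L, T) so that curves for different lattice sizes collapse onto f(x):
#   x = L**(1/nu) * (T - Tc),   y = L**(-zeta/nu) * A(L, T)
import numpy as np

def collapse(T, A, L, Tc, nu, zeta):
    x = L ** (1.0 / nu) * (np.asarray(T) - Tc)
    y = L ** (-zeta / nu) * np.asarray(A)
    return x, y

Tc, nu, zeta = 2.269, 1.0, 1.75              # placeholder exponents
temperatures = np.linspace(1.5, 3.5, 41)
rng = np.random.default_rng(0)
curves = {16: rng.random(41), 32: rng.random(41), 64: rng.random(41)}  # placeholder A(L, T)
collapsed = {L: collapse(temperatures, A, L, Tc, nu, zeta) for L, A in curves.items()}
```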
5.2 Random Boolean networks
5.2.1 Homogeneous random Boolean networks
Let D = (V, A) be a directed graph. We identify the nodes of D with the genes in a gene regulatory network. Suppose D is a k-in regular directed graph. Figure 3A is an example of a 2-in regular digraph with 7 nodes, i.e., N = 7, K = 2.
A family {fa : a ∈ V} of Boolean functions, one for each node, where each fa takes as arguments the states of the K in-neighbours of a, defines a Boolean network on D. A state of the network is an assignment σ of a Boolean value (0 or 1) to every node, and the updating function F maps a state σ to the state F(σ) whose value at a node a is fa evaluated on the states that σ assigns to the in-neighbours of a. The lookup tables of Figure 3B specify such a family of functions.
For every σ, we have a sequence of states σ, σ′, σ″, … such that each state is obtained by applying the updating function to the previous state in the sequence: σ′ = F(σ), σ″ = F(σ′), and so on. This sequence of states is the trajectory of σ; since the number of states is finite, every trajectory eventually falls into an attractor, either a fixed point or a cycle.
5.2.2 Heterogeneous random Boolean networks
The description given in 5.2.1 corresponds to the case where the structure and the updating scheme of the random Boolean network are homogeneous. Here we describe the two versions of heterogeneous random Boolean networks that were used in the simulations. The first of these heterogeneous descriptions is structural, while the second gives rise to some sort of asynchronous dynamics.
The definition of Boolean network above makes the assumption that every node in the directed graph has the same in-degree. Now we consider Boolean networks over arbitrary directed graphs (not necessarily k-in regular). A generalized Boolean network on a directed graph D consists of a family {fa : a ∈ V} of Boolean functions in which each fa takes as arguments the states of the in-neighbours of a, so different nodes may have different numbers of inputs. In our simulations, the in-degrees are drawn from a Poisson or an exponential distribution (see Section 3.2).
For talking about temporal heterogeneity we need to introduce asynchronous updating schemes (Gershenson, 2002). The heterogeneous updating function of a state σ of a random heterogeneous Boolean network on D is the function that, at discrete time-step t, applies fa only to those nodes a that are scheduled to update at t, leaving the state of every other node unchanged. In our simulations, each node a is updated once every Pa time steps, where the period Pa equals the out-degree of a, so the nodes that affect more nodes are updated more slowly (see Section 3.2).
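A minimal sketch of such a network with heterogeneous temporality, assuming (as described in Section 3.2) that each node's updating period equals its out-degree with zero phase; this is our illustration, not the authors' implementation:

```python
# Boolean network with K random inputs per node and deterministic asynchronous
# updating: node a is recomputed only when t is a multiple of its out-degree.
import numpy as np

rng = np.random.default_rng(0)
N, K, STEPS = 20, 2, 50

inputs = [rng.choice(N, size=K, replace=False) for _ in range(N)]  # in-neighbours
out_degree = np.zeros(N, dtype=int)
for a in range(N):
    for b in inputs[a]:
        out_degree[b] += 1
periods = np.maximum(out_degree, 1)      # nodes that affect nothing update every step

tables = rng.integers(0, 2, size=(N, 2 ** K))   # one random lookup table per node
state = rng.integers(0, 2, size=N)

for t in range(STEPS):
    new_state = state.copy()
    for a in range(N):
        if t % periods[a] == 0:                  # heterogeneous temporality
            idx = int("".join(str(state[b]) for b in inputs[a]), 2)
            new_state[a] = tables[a, idx]
    state = new_state
```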
5.3 Linear functions
Here we observe that for linear functions, there is no difference between homogeneity and heterogeneity. Indeed, a function f is linear (affine) if f(x) = ax + c for some constants a and c.
For such a function, by linearity of the average, ⟨f(x)⟩ = ⟨ax + c⟩ = a⟨x⟩ + c = f(⟨x⟩).
Thus, in the context of linear functions, average value (heterogeneity) is the same as value of the average (homogeneity).
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Materials, further inquiries can be directed to the corresponding author.
Author contributions
All authors conceived and designed the study. FS-P, OZ, and OP performed numerical simulations and derived mathematical results. All authors wrote the paper.
Funding
OZ acknowledges support from CONACyT-SNI (Grant No. 620178). GI acknowledges support from AFOSR (Grant No. FA8655-20-1-7020), project EU H2020 Humane AI-net (Grant No. 952026), and CHIST-ERA project SAI (Grant No. FWF I 5205-N). CG acknowledges support from UNAM-PAPIIT (IN107919, IV100120, IN105122) and from the PASPA program from UNAM-DGAPA.
Acknowledgments
We appreciate useful comments from Dante Chialvo, János Kertész, Hyobin Kim, Pablo A. Marquet, Adilson Motter, Géza Ódor, Mercedes Pascual, Pedro Rivera, and reviewers of this manuscript.
Conflict of interest
Author CG is an honorary member of the company Lakeside Labs GmbH.
The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Abades, S. R., Gaxiola, A., and Marquet, P. A. (2014). Fire, percolation thresholds and the savanna forest transition: A neutral model approach. J. Ecol. 102, 1386–1393. doi:10.1111/1365-2745.12321
Adami, C. (1995). Self-organized criticality in living systems. Phys. Lett. A 203, 29–32. doi:10.1016/0375-9601(95)00372-a
Albert, R., and Barabasi, A.-L. (2002). Statistical mechanics of complex networks. Rev. Mod. Phys. 74, 47–97. doi:10.1103/revmodphys.74.47
Aldana, M. (2003). Boolean dynamics of networks with scale-free topology. Phys. D. Nonlinear Phenom. 185, 45–66. doi:10.1016/s0167-2789(03)00174-x
Arenas, A., Díaz-Guilera, A., Kurths, J., Moreno, Y., and Zhou, C. (2008). Synchronization in complex networks. Phys. Rep. 469, 93–153. doi:10.1016/j.physrep.2008.09.002
Ramos-Fernández, G., Mateos, J. L., Miramontes, O., Cocho, G., Larralde, H., and Ayala-Orozco, B. (2004). Lévy walk patterns in the foraging movements of spider monkeys (Ateles geoffroyi). Behav. Ecol. Sociobiol. 55, 223–230. doi:10.1007/s00265-003-0700-6
Bailly, F., and Longo, G. (2008). Extended critical situations: The physical singularity of life phenomena. J. Biol. Syst. 16, 309–336. doi:10.1142/s0218339008002514
Bak, P., Tang, C., and Wiesenfeld, K. (1987). Self-organized criticality: An explanation of the 1/f noise. Phys. Rev. Lett. 59, 381–384. doi:10.1103/physrevlett.59.381
Balasubramanian, V. (2015). Heterogeneity and efficiency in the brain. Proc. IEEE 103, 1346–1358. doi:10.1109/JPROC.2015.2447016
Balleza, E., Alvarez-Buylla, E. R., Chaos, A., Kauffman, S., Shmulevich, I., and Aldana, M. (2008). Critical dynamics in genetic regulatory networks: Examples from four kingdoms. PLoS ONE 3, e2456. doi:10.1371/journal.pone.0002456
Barabási, A.-L. (2005). The origin of bursts and heavy tails in human dynamics. Nature 435, 207–211. doi:10.1038/nature03459
Barnosky, A. D., Hadly, E. A., Bascompte, J., Berlow, E. L., Brown, J. H., Fortelius, M., et al. (2012). Approaching a state shift in earth’s biosphere. Nature 486, 52–58. doi:10.1038/nature11018
Beggs, J. M. (2008). The criticality hypothesis: How local cortical networks might optimize information processing. Philosophical Trans. R. Soc. A Math. Phys. Eng. Sci. 366, 329–343. doi:10.1098/rsta.2007.2092
Binder, K., and Heermann, D. W. (2010). Monte Carlo simulation in statistical physics. Berlin, Heidelberg: Springer.
Bowman, D. M. J. S., Kolden, C. A., Abatzoglou, J. T., Johnston, F. H., van der Werf, G. R., and Flannigan, M. (2020). Vegetation fires in the anthropocene. Nat. Rev. Earth Environ. 1, 500–515. doi:10.1038/s43017-020-0085-3
Braccini, M., Roli, A., Barbieri, E., and Kauffman, S. A. (2022). On the criticality of adaptive boolean network robots. Entropy 24, 1368. doi:10.3390/e24101368
Bray, A. (1987). Nature of the griffiths phase. Phys. Rev. Lett. 59, 586–589. doi:10.1103/PhysRevLett.59.586
Chialvo, D. R. (2010). Emergent complex neural dynamics. Nat. Phys. 6, 744–750. doi:10.1038/nphys1803
Chowdhury, D., Santen, L., and Schadschneider, A. (2000). Statistical physics of vehicular traffic and some related systems. Phys. Rep. 329, 199–329. doi:10.1016/S0370-1573(99)00117-9
Christensen, K., and Moloney, N. R. (2005). Complexity and criticality. Singapore: World Scientific.
Cocho, G., Flores, J., Gershenson, C., Pineda, C., and Sánchez, S. (2015). Rank diversity of languages: Generic behavior in computational linguistics. PLoS ONE 10, e0121898. doi:10.1371/journal.pone.0121898
Dannemann, T., Boyer, D., and Miramontes, O. (2018). Lévy flight movements prevent extinctions and maximize population abundances in fragile lotka–volterra systems. Proc. Natl. Acad. Sci. 115, 3794–3799. doi:10.1073/pnas.1719889115
[Dataset] Sorge, A. (2015). pyfssa. https://zenodo.org/badge/latestdoi/6089/andsor/pyfssa.
Derrida, B., and Pomeau, Y. (1986). Random networks of automata: A simple annealed approximation. Europhys. Lett. 1, 45–49. doi:10.1209/0295-5075/1/2/001
Effenberger, F., Carvalho, P., Dubinin, I., and Singer, W. (2022). A biology-inspired recurrent oscillator network for computations in high-dimensional state space. bioRxiv. doi:10.1101/2022.11.29.518360
Fernández, N., Maldonado, C., and Gershenson, C. (2014). “Information measures of complexity, emergence, self-organization, homeostasis, and autopoiesis,” in Guided self-organization: Inception. Editor M. Prokopenko (Berlin Heidelberg: Springer), 9, 19–51. Emergence, Complexity and Computation. doi:10.1007/978-3-642-53734-9_2
Fruchart, M., Hanai, R., Littlewood, P. B., and Vitelli, V. (2021). Non-reciprocal phase transitions. Nature 592, 363–369. doi:10.1038/s41586-021-03375-9
Gershenson, C. (2002). “Classification of random Boolean networks,” in Artificial life VIII: Proceedings of the eight international conference on artificial life. Editors R. K. Standish, M. A. Bedau, and H. A. Abbass (Cambridge, MA, USA: MIT Press), 1–8.
Gershenson, C. (2012). Guiding the self-organization of random Boolean networks. Theory Biosci. 131, 181–191. doi:10.1007/s12064-011-0144-x
Gershenson, C., and Helbing, D. (2015). When slower is faster. Complexity 21, 9–15. doi:10.1002/cplx.21736
Gershenson, C., and Pineda, L. A. (2009). Why does public transport not arrive on time? The pervasiveness of equal headway instability. PLoS ONE 4, e7292. doi:10.1371/journal.pone.0007292
Gershenson, C. (2015). Requisite variety, autopoiesis, and self-organization. Kybernetes 44, 866–873. doi:10.1108/k-01-2015-0001
Gershenson, C. (2011). Self-organization leads to supraoptimal performance in public transportation systems. PLoS ONE 6, e21469. doi:10.1371/journal.pone.0021469
Glauber, R. J. (1963). Time-dependent statistics of the Ising model. J. Math. Phys. 4, 294–307. doi:10.1063/1.1703954
Griffiths, R. B. (1969). Nonanalytic behavior above the critical point in a random ising ferromagnet. Phys. Rev. Lett. 23, 17–19. doi:10.1103/PhysRevLett.23.17
T. Gross, and H. Sayama (Editors) (2009). Adaptive networks: Theory, models and applications. Understanding complex systems (Berlin Heidelberg: Springer). doi:10.1007/978-3-642-01284-6
Haimovici, A., Tagliazucchi, E., Balenzuela, P., and Chialvo, D. R. (2013). Brain organization into resting state networks emerges at criticality on a model of the human connectome. Phys. Rev. Lett. 110, 178101. doi:10.1103/PhysRevLett.110.178101
Harvey, I., and Bossomaier, T. (1997). “Time out of joint: Attractors in asynchronous random Boolean networks,” in Proceedings of the fourth European conference on artificial life (ECAL97). Editors P. Husbands, and I. Harvey (MIT Press), 67–75.
Helbing, D. (2001). Traffic and related self-driven many-particle systems. Rev. Mod. Phys. 73, 1067–1141. doi:10.1103/revmodphys.73.1067
Hesse, J., and Gross, T. (2014). Self-organized criticality as a fundamental property of neural systems. Front. Syst. Neurosci. 8, 166. doi:10.3389/fnsys.2014.00166
Hidalgo, J., Grilli, J., Suweis, S., Maritan, A., and Muñoz, M. A. (2016). Cooperation, competition and the emergence of criticality in communities of adaptive systems. J. Stat. Mech. Theory Exp. 2016, 033203. doi:10.1088/1742-5468/2016/03/033203
Holme, P. (2015). Modern temporal network theory: A colloquium. Eur. Phys. J. B 88, 1–30. doi:10.1140/epjb/e2015-60657-4
Holme, P., and Saramäki, J. (2012). Temporal networks. Phys. Rep. 519, 97–125. doi:10.1016/j.physrep.2012.03.001
Hopfield, J. J. (1982). Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. 79, 2554–2558. doi:10.1073/pnas.79.8.2554
Iñiguez, G., Pineda, C., Gershenson, C., and Barabási, A.-L. (2022). Dynamics of ranking. Nat. Commun. 13, 1646. doi:10.1038/s41467-022-29256-x
Ising, E. (1925). Beitrag zur theorie des ferromagnetismus. Z. für Phys. 31, 253–258. doi:10.1007/bf02980577
Kauffman, S. A. (1969). Metabolic stability and epigenesis in randomly constructed genetic nets. J. Theor. Biol. 22, 437–467. doi:10.1016/0022-5193(69)90015-0
Langton, C. G. (1990). Computation at the edge of chaos: Phase transitions and emergent computation. Phys. D. Nonlinear Phenom. 42, 12–37. doi:10.1016/0167-2789(90)90064-v
Lloyd, S. (2001). Measures of complexity: A non-exhaustive list. Cambridge, MA: Department of Mechanical Engineering, Massachusetts Institute of Technology.
López-Díaz, A. J., Sánchez-Puig, F., and Gershenson, C. (2023). Temporal, structural, and functional heterogeneities extend criticality and antifragility in random boolean networks. Entropy 25, 254. doi:10.3390/e25020254
Lopez-Ruiz, R., Mancini, H. L., and Calbet, X. (1995). A statistical measure of complexity. Phys. Lett. A 209, 321–326. doi:10.1016/0375-9601(95)00867-5
Luque, B., and Solé, R. V. (2000). Lyapunov exponents in random Boolean networks. Phys. A Stat. Mech. its Appl. 284, 33–45. doi:10.1016/s0378-4371(00)00184-9
Luque, B., and Solé, R. V. (1997). Phase transitions in random networks: Simple analytic determination of critical points. Phys. Rev. E 55, 257–260. doi:10.1103/physreve.55.257
Martínez-Arévalo, Y. I., Rodríguez-Vazquez, K., and Gershenson, C. (2020). Temporal heterogeneity improves speed and convergence in genetic algorithms. ArXiv:2203.13194.
Mastromatteo, I., and Marsili, M. (2011). On the criticality of inferred models. J. Stat. Mech. Theory Exp. 2011, P10012. doi:10.1088/1742-5468/2011/10/P10012
McShane, E. J. (1937). Jensen’s inequality. Bull. Am. Math. Soc. 43, 521–527. doi:10.1090/s0002-9904-1937-06588-8
Molnar, F., Nishikawa, T., and Motter, A. E. (2021). Asymmetry underlies stability in power grids. Nat. Commun. 12, 1457. doi:10.1038/s41467-021-21290-5
Monod, J. (1970). Le hasard et la nécessité. Essai sur la philosophie naturelle de la biologie moderne (Paris, France: Éditions du Seuil).
Mora, T., and Bialek, W. (2011). Are biological systems poised at criticality? J. Stat. Phys. 144, 268–302. doi:10.1007/s10955-011-0229-4
Moretti, P., and Muñoz, M. A. (2013). Griffiths phases and the stretching of criticality in brain networks. Nat. Commun. 4, 1–10. doi:10.1038/ncomms3521
Muñoz, M. A. (2018). Colloquium: Criticality and dynamical scaling in living systems. Rev. Mod. Phys. 90, 031001. doi:10.1103/RevModPhys.90.031001
Munoz, M. A., Juhász, R., Castellano, C., and Ódor, G. (2010). Griffiths phases on complex networks. Phys. Rev. Lett. 105, 128701. doi:10.1103/PhysRevLett.105.128701
Newman, M. E. J., and Barkema, G. T. (1999). Monte Carlo methods in statistical physics. Oxford University Press.
Newman, M. E. J. (2003). The structure and function of complex networks. SIAM Rev. 45, 167–256. doi:10.1137/s003614450342480
Nicolaou, Z. G., Case, D. J., Wee, E. B. v. d., Driscoll, M. M., and Motter, A. E. (2021). Heterogeneity-stabilized homogeneous states in driven media. Nat. Commun. 12, 4486. doi:10.1038/s41467-021-24459-0
Nishikawa, T., and Motter, A. E. (2016). Symmetric states requiring system asymmetry. Phys. Rev. Lett. 117, 114101. doi:10.1103/PhysRevLett.117.114101
Ódor, G., and Hartmann, B. (2018). Heterogeneity effects in power grid network models. Phys. Rev. E 98, 022305. doi:10.1103/PhysRevE.98.022305
Oosawa, C., and Savageau, M. A. (2002). Effects of alternative connectivity on behavior of randomly constructed Boolean networks. Phys. D. Nonlinear Phenom. 170, 143–161. doi:10.1016/s0167-2789(02)00530-4
Pascual, M., and Guichard, F. (2005). Criticality and disturbance in spatial ecological systems. Trends Ecol. Evol. 20, 88–95. doi:10.1016/j.tree.2004.11.012
Pascual, M., Roy, M., and Laneri, K. (2011). Simple models for complex systems: Exploiting the relationship between local and global densities. Theor. Ecol. 4, 211–222. doi:10.1007/s12080-011-0116-2
Pastor-Satorras, R., Castellano, C., Van Mieghem, P., and Vespignani, A. (2015). Epidemic processes in complex networks. Rev. Mod. Phys. 87, 925–979. doi:10.1103/revmodphys.87.925
Peralta, A. F., Kertész, J., and Iñiguez, G. (2022). Opinion dynamics in social networks: From models to data. Eprint arXiv:2201.01322.
Pineda, O. K., Kim, H., and Gershenson, C. (2019). A novel antifragility measure based on satisfaction and its application to random and biological Boolean networks. Complexity 2019, 1–10. doi:10.1155/2019/3728621
Prokopenko, M., Lizier, J. T., Obst, O., and Wang, X. R. (2011). Relating Fisher information to order parameters. Phys. Rev. E 84, 041116. doi:10.1103/PhysRevE.84.041116
Quek, W. L., Chung, N. N., Saw, V. L., Chew, L. Y., and Chew, L. Y. (2021). Analysis and simulation of intervention strategies against bus bunching by means of an empirical agent-based model. Complexity 2021, 1–24. doi:10.1155/2021/2606191
Ratnayake, P., Weragoda, S., Wansapura, J., Kasthurirathna, D., and Piraveenan, M. (2021). Quantifying the robustness of complex networks with heterogeneous nodes. Mathematics 9, 2769. doi:10.3390/math9212769
Roli, A., Villani, M., Filisetti, A., and Serra, R. (2018). Dynamical criticality: Overview and open questions. J. Syst. Sci. Complex. 31, 647–663. doi:10.1007/s11424-017-6117-5
Roy, M., Pascual, M., and Franc, A. (2003). Broad scaling region in a spatial ecological system. Complexity 8, 19–27. doi:10.1002/cplx.10096
Ruckelshaus, M. H., Jackson, S. T., Mooney, H. A., Jacobs, K. L., Kassam, K.-A. S., Arroyo, M. T., et al. (2020). The ipbes global assessment: Pathways to action. Trends Ecol. Evol. 35, 407–414. doi:10.1016/j.tree.2020.01.009
Sander, L., Warren, C., Sokolov, I., Simon, C., and Koopman, J. (2002). Percolation on heterogeneous networks as a model for epidemics. Math. Biosci. 180, 293–305. doi:10.1016/S0025-5564(02)00117-7
Santos, F. C., Pacheco, J. M., and Lenaerts, T. (2006). Evolutionary dynamics of social dilemmas in structured heterogeneous populations. Proc. Natl. Acad. Sci. U. S. A. 103, 3490–3494. doi:10.1073/pnas.0508201103
Santos, F. C., Santos, M. D., and Pacheco, J. M. (2008). Social diversity promotes the emergence of cooperation in public goods games. Nature 454, 213–216. doi:10.1038/nature06940
Shannon, C. E. (1948). A mathematical theory of communication. Bell Syst. Tech. J. 27, 379–423. doi:10.1002/j.1538-7305.1948.tb01338.x
Shmulevich, I., Kauffman, S. A., and Aldana, M. (2005). Eukaryotic cells are dynamically ordered or critical but not chaotic. Proc. Natl. Acad. Sci. 102, 13439–13444. doi:10.1073/pnas.0506771102
Sormunen, S., Gross, T., and Saramäki, J. (2022). Critical drift in a neuro-inspired adaptive network. ArXiv:2206.10315v1.
Stanley, H. E. (1987). Introduction to phase transitions and critical phenomena. Oxford, UK: Oxford University Press.
Torres-Sosa, C., Huang, S., and Aldana, M. (2012). Criticality is an emergent property of genetic networks that exhibit evolvability. PLoS Comput. Biol. 8, e1002669. doi:10.1371/journal.pcbi.1002669
Usefie Mafahim, J., Lambert, D., Zare, M., and Grigolini, P. (2015). Complexity matching in neural networks. New J. Phys. 17, 1–18. doi:10.1088/1367-2630/17/1/015003
Vazquez, F., Bonachela, J. A., López, C., and Munoz, M. A. (2011). Temporal griffiths phases. Phys. Rev. Lett. 106, 235702. doi:10.1103/PhysRevLett.106.235702
Vicsek, T., and Zafeiris, A. (2012). Collective motion. Phys. Rep. 517, 71–140. doi:10.1016/j.physrep.2012.03.004
Vidiella, B., Guillamon, A., Sardanyés, J., Maull, V., Conde-Pueyo, N., and Solé, R. (2020). Engineering self-organized criticality in living cells. bioRxiv 11, 385385. doi:10.1101/2020.11.16.385385
Vojta, T. (2006). Rare region effects at classical, quantum and nonequilibrium phase transitions. J. Phys. A Math. General 39, R143–R205. doi:10.1088/0305-4470/39/22/r01
Wang, X., Lizier, J., and Prokopenko, M. (2011). Fisher information at the edge of chaos in random Boolean networks. Artif. Life 17, 315–329. doi:10.1162/artl_a_00041
Wang, X. R., Lizier, J., and Prokopenko, M. (2010). “A Fisher information study of phase transitions in random Boolean networks,” in Artificial life XII proceedings of the twelfth international conference on the synthesis and simulation of living systems. Editors H. Fellermann, M. Dörr, M. M. Hanczyc, L. L. Laursen, S. Maurer, D. Merkleet al. (Odense, Denmark: MIT Press), 305–312.
Wolpert, D. H., and Macready, W. G. (1997). No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1, 67–82. doi:10.1109/4235.585893
Wolpert, D. H., and Macready, W. G. (1995). No free lunch theorems for search. Tech. Rep. SFI-WP-95-02-010. Santa Fe, NM, USA: Santa Fe Institute.
Wuensche, A. (1999). Classifying cellular automata automatically: Finding gliders, filtering, and relating space-time patterns, attractor basins, and the Z parameter. Complexity 4, 47–66. doi:10.1002/(sici)1099-0526(199901/02)4:3<47::aid-cplx9>3.0.co;2-v
Zhang, Y., Ocampo-Espindola, J. L., Kiss, I. Z., and Motter, A. E. (2021). Random heterogeneity outperforms design in network synchronization. Proc. Natl. Acad. Sci. U. S. A. 118, e2024299118. doi:10.1073/pnas.2024299118
Keywords: complexity, phase transitions, criticality, Ising model, random Boolean networks
Citation: Sánchez-Puig F, Zapata O, Pineda OK, Iñiguez G and Gershenson C (2023) Heterogeneity extends criticality. Front. Complex Syst. 1:1111486. doi: 10.3389/fcpxs.2023.1111486
Received: 29 November 2022; Accepted: 20 April 2023;
Published: 03 May 2023.
Edited by:
Panos Argyrakis, Aristotle University of Thessaloniki, Greece
Reviewed by:
Roberto Murcio, Birkbeck, Geography, United Kingdom
David Sidney Byrne, Durham University, United Kingdom
Andrea Roli, University of Bologna, Italy
Paolo Grigolini, University of North Texas, United States
Copyright © 2023 Sánchez-Puig, Zapata, Pineda, Iñiguez and Gershenson. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Carlos Gershenson, cgg@unam.mx