
REVIEW article

Front. Phys., 23 December 2020
Sec. Interdisciplinary Physics
This article is part of the Research Topic Self-Organized Criticality, Three Decades Later.

Mechanisms of Self-Organized Quasicriticality in Neuronal Network Models

  • 1Laboratório de Física Estatística e Biologia Computacional, Faculdade de Filosofia, Ciências e Letras de Ribeirão Preto, Departamento de Física, Universidade de São Paulo, Ribeirão Preto, Brazil
  • 2Departamento de Física, Universidade Federal de Pernambuco, Recife, Brazil

The critical brain hypothesis states that there are information processing advantages for neuronal networks working close to the critical region of a phase transition. If this is true, we must ask how the networks achieve and maintain this critical state. Here, we review several proposed biological mechanisms that turn the critical region into an attractor of a dynamics in network parameters like synapses, neuronal gains, and firing thresholds. Since neuronal networks (biological and models) are not conservative but dissipative, we expect not exact criticality but self-organized quasicriticality, where the system hovers around the critical point.

1 Introduction

Thirty-three years after the initial formulation of the self-organized criticality (SOC) concept [1] (and 37 years after the self-organizing extremal invasion percolation model [2]), one of the most active areas that employ these ideas is theoretical neuroscience. However, neuronal networks, similar to earthquakes and forest fires, are nonconservative systems, in contrast to canonical SOC systems like sandpile models [3, 4]. To model such systems, one uses nonconservative networks of elements represented by cellular automata, discrete time maps, or differential equations. Such models have distinct features from conservative systems. A large fraction of them, in particular neuronal networks, have been described as displaying self-organized quasi-criticality (SOqC) [5–7] or weak criticality [8, 9], which is the subject of this review.

The first person to draw an analogy between brain activity and a critical branching process was probably Alan Turing, in his memorable paper Computing machinery and intelligence [10]. Decades later, the idea that SOC models could be important to describe the activity of neuronal networks was in the air as early as 1995 [11–16], eight years before the fundamental 2003 experimental article of Beggs and Plenz [17] reporting neuronal avalanches. This occurred because several authors, working with models for earthquakes and pulse-coupled threshold elements, noticed the formal analogy between such systems and networks of integrate-and-fire neurons. Critical learning was also conjectured by Chialvo and Bak [18–20]. However, in the absence of experimental support, these works, although prescient, were basically theoretical conjectures. A historical question would be to determine to what extent this early literature motivated Beggs and Plenz to perform their experiments.

Since 2003, however, the study of criticality in neuronal networks developed into a research paradigm, with a large literature, diverse experimental approaches, and several problems addressed theoretically and computationally (some reviews include Refs. [7, 21–27]). One of the main results is that information processing seems to be optimized at a second-order absorbing phase transition [28–42]. This transition occurs between no activity (the absorbing phase) and nonzero steady-state activity (the active phase). Such a transition is familiar from the SOC literature and pertains to the directed percolation (DP) or the conservative-DP (C-DP or Manna) universality classes [7, 42–45].

An important question is how neuronal networks self-organize toward the critical region. The question arises because, like earthquake and forest-fire models, neuronal networks are not conservative systems, which means that in principle they cannot be exactly critical [5, 6, 45, 46]. In these networks, we can vary control parameters like the strength of synapses and obtain subcritical, critical, and supercritical behavior. The critical point is therefore achieved only by fine-tuning.

Over time, several authors proposed different biological mechanisms that could eliminate the fine-tuning and make the critical region a self-organized attractor. The obtained criticality is not perfect, but it is sufficient to account for the experimental data. Also, the mechanisms (mainly based on dynamic synapses but also on dynamic neuronal gains and adaptive firing thresholds) are biologically plausible and should be viewed as a research topic per se.

The literature about these homeostatic mechanisms is vast, and we do not intend to present an exhaustive review. However, we discuss here some prototypical mechanisms and try to connect them to self-organized quasicriticality (SOqC), a concept developed to account for nonconservative systems that hover around but do not exactly sit on the critical point [5–7].

For a better comparison between the models, we will not rely on the original notation of the reviewed articles, but will try to use a universal notation instead. For example, the synaptic strength between a presynaptic neuron j and a postsynaptic neuron i will always be denoted by Wij (notice the convention in the order of the indexes), the membrane potential is Vi, the binary firing state is si ∈ {0,1}, the gain of the firing function is Γi, and the firing threshold is θi. To prevent an excess of index subscripts, as is usual in dynamical systems (like Wij,t), we use the convention Wij(t) for continuous time and Wij[t] for discrete time.

Last, before we begin, a few words about the fine-tuning problem. Even perfect SOC systems are in a sense fine-tuned: they must be conservative and require an infinite separation of time scales, with driving rate 1/τ → 0+ and dissipation rate u → 0+ with 1/(τu) → 0 [3, 4, 7, 43, 45]. For homeostatic systems, we turn a control parameter like the coupling W into a time-dependent slow variable W[t] = ⟨Wij[t]⟩ by imposing a local dynamics on the individual Wij. This dynamics could depend on new parameters (here called hyperparameters) which need some tuning (in some cases, this tuning can be very coarse in the large-τ case). Have we exchanged the fine-tuning of W for several tuning operations on the homeostatic hyperparameters? Not exactly, as nicely discussed by Hernandez-Urbina and Herrmann [47]:

To Tune or Not to Tune

In this article, we have shown how systems self-organize into a critical state through [homeostasis]. Thus, we became relieved from the task of fine-tuning the control parameter W, but instead, we acquire a new task: that of estimating the appropriate values for parameters A,B,C, and D. Is there no way to be relieved from tuning any parameter in the system?

The issue of tuning or not tuning depends mainly on what we understand by control parameter. (…) a control parameter can be thought of a knob or dial that when turned the system exhibits some quantifiable change. We say that the system self-organizes if nobody turns that knob but the system itself. In order to achieve this, the elements comprising the system require a feedback mechanism to be able to change their inner dynamics in response to their surroundings. (…) The latter does not require an external entity to turn the dial for the system to exhibit critical dynamics. However, its internal dynamics are configured in a particular way in order to allow feedback mechanisms at the level of individual elements.

Did we fine-tune their configuration? Yes. Otherwise, we would have not achieved what was desired, as nothing comes out of nothing. Did we change control parameter from W to A,B,C, and D? No, the control parameter is still intact, and now it is “in the hands” of the system. (…) Last and most importantly, the new configuration stresses the difference between global and local mechanisms. The control parameter W (the dial) is an external quantity that observes and governs the global (i.e., the collective), whereas [homeostasis] provides the system with local mechanisms that have an effect over the collective. This is the main feature of a complex system.

2 Plastic Synapses

Consider an absorbing-state second-order phase transition where the activity is ρ=0 below a critical point Ec and

\rho \simeq C \left( \frac{E - E_c}{E_c} \right)^{\beta}, \qquad (1)

for E ≥ Ec, where E is a generic control parameter (see Figures 1A,B). For topologies such as random and complete graphs, one typically obtains β = 1, which is consistent with a transition in the mean-field directed percolation (DP) class (or perhaps the C-DP (Manna) class usual in SOC models, which has the same mean-field exponents but different ones below the upper critical dimension; see Refs. 3, 7, 42, 48).


FIGURE 1. Example of homeostatic mechanisms in a stochastic neuron with firing probability P(si=1). (A) Scheme of the loci for homeostatic mechanisms: synapses Wij, neuronal gain Γi, and firing threshold θi. Inset: firing probability with homeostatic variables. (B) Bifurcation diagram for the activity ρ as a function of a generic control parameter E. The critical point is Ec, but the homeostatic fixed point (a focus) is slightly supercritical. (C) Self-organization of the generic "control" parameter E(t), where the standard deviation of the stochastic oscillations around the fixed point depends on the system size as s.d. ∼ N^(−a).

The basic idea underlying most of the proposed mechanisms for homeostatic self-organization is to define a slow dynamics on the individual links Ei(t) (i = 1, …, N) such that, if the network is in the subcritical state, their average value E(t) = ⟨Ei(t)⟩ grows toward Ec, but if the network is in the supercritical state, E(t) decreases toward Ec (see Figure 1C). Ideally, these mechanisms should be local, that is, they should not have access to global network information such as the density of active sites ρ (the order parameter) but rather only to the local firing of the neurons connected by Ei. In the following, we give several examples from the literature.

2.1 Short-Term Synaptic Plasticity

Markram and Tsodyks [49, 50] proposed a short-term synaptic model that inspired several authors in the area of self-organization to criticality. The Markram–Tsodyks (MT) dynamics is

\frac{dJ_{ij}(t)}{dt} = \frac{1}{\tau}\left[\frac{A}{u(t)} - J_{ij}(t)\right] - u(t)\, J_{ij}(t)\, \delta(t - \hat{t}_j), \qquad (2)
\frac{du(t)}{dt} = \frac{1}{\tau_u}\left[U - u(t)\right] + U\left[1 - u(t)\right] \delta(t - \hat{t}_j), \qquad (3)

where Jij represents the available neurotransmitter resources, u is the fraction used after the presynaptic firing at time t̂j (so that the effective synaptic efficacy is Wij(t) = u(t)Jij(t)), A and U are baseline constants (hyperparameters), and τ and τu are recovery time constants.

In an influential article, Levina, Herrmann, and Geisel (LHG) [51] proposed to use depressing–recovering synapses. In their model, we have leaky integrate-and-fire (LIF) neurons in a complete-graph topology. As a self-organizing mechanism, they used a simplified version of the MT dynamics with constant u, that is, only Eq. 2. They studied the system varying A and found that, although some tuning of the hyperparameter A is needed, any initial distribution of synapses P(Wij(t=0)) converges to a stationary distribution P*(Wij) with ⟨W*ij⟩ ≈ Wc. We will refer to Eq. 2 with constant u as the LHG dynamics. These authors found quasicriticality for 1.7 < A < 2.3, u ∈ ]0,1], and τ ∝ N. Levina et al. also studied synapses with the full MT model in Refs. 52, 53.

Bonachela et al. [6] studied the LHG model in depth and found that, like forest-fire models, it is an instance of SOqC. The system presents the characteristic hovering around the critical point in the form of stochastic sawtooth oscillations of the average synaptic weight W(t) that do not disappear in the thermodynamic limit. Using the same model, Wang and Zhou [54] showed that the LHG dynamics also works in hierarchical modular networks, with an apparent improvement in SOqC robustness in this topology.

Note that the LHG dynamics can be written in terms of the synaptic efficacy Wij=uJij by multiplying Eq. 2 by u, leading to

\frac{dW_{ij}(t)}{dt} = \frac{1}{\tau}\left[A - W_{ij}(t)\right] - u\, W_{ij}(t)\, \delta(t - \hat{t}_j). \qquad (4)

Brochini et al. [55] studied a complete graph of stochastic discrete time LIFs [56, 57] and proposed a discrete time LHG dynamics:

W_{ij}[t+1] = W_{ij}[t] + \frac{1}{\tau}\left(A - W_{ij}[t]\right) - u\, W_{ij}[t]\, s_j[t], \qquad (5)

where the firing index sj[t] ∈ {0,1} denotes spikes. Kinouchi et al. [58], in the same system, studied the stability of the fixed points of the joint neuronal-LHG dynamics. They found that, for the average synaptic value W, the fixed point is W* = Wc + O((A−1)/(τu)), meaning that for large τu the system approaches the critical point Wc if A > 1. However, since it is not biologically plausible to assume an infinite recovery time τ, one always obtains a system which is slightly supercritical. This work also showed that the fixed point is a barely stable focus, around which the system is excited by finite-size (demographic) noise, leading to the characteristic sawtooth oscillations of SOqC. A similar scenario was already found by Grassberger and Kantz for forest-fire models [59].
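
To make this concrete, the sketch below iterates the discrete-time LHG rule of Eq. 5 on a complete graph of stochastic neurons with a linear saturating firing function (Γ = 1, θ = 0) and a one-step refractory period. It is not the original implementation of Refs. 51, 55, 58: N, A, u, τ, and the reseeding of activity are illustrative choices. In this toy setup the critical coupling is Wc = 1, and the homeostatic fixed point is ⟨W⟩* = 1 + (A − 1)/(1 + τu).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (not paper) values: complete graph of N stochastic neurons,
# linear saturating firing function with Gamma = 1, theta = 0, one-step refractoriness.
N, A, u, tau, T = 400, 1.2, 0.2, 500.0, 5000
W = rng.uniform(0.0, 1.0, size=(N, N))       # W[i, j]: synapse from j to i (starts subcritical)
np.fill_diagonal(W, 0.0)
s = np.zeros(N)
s[rng.integers(N)] = 1.0                     # one initial spike

W_mean = np.empty(T)
for t in range(T):
    V = (1.0 - s) * (W @ s) / N              # refractory factor mimics the voltage reset
    s = (rng.random(N) < np.clip(V, 0.0, 1.0)).astype(float)
    if not s.any():                          # reseed one spike when the absorbing state is reached
        s[rng.integers(N)] = 1.0
    W += (A - W) / tau - u * W * s[None, :]  # discrete-time LHG rule, Eq. 5
    np.fill_diagonal(W, 0.0)
    W_mean[t] = W.mean()

# <W> climbs from the subcritical initial condition and then hovers around Wc = 1,
# near the mean-field fixed point 1 + (A - 1)/(1 + tau*u), with sawtooth-like fluctuations.
print(W_mean[-500:].mean(), 1.0 + (A - 1.0) / (1.0 + tau * u))
```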

The discrete time LHG dynamics was also studied for cellular automata neurons in random networks with an average of K neighbors connected by probabilistic synapses Pij[0,1] (Costa et al. [60], Campos et al. [61] and Kinouchi et al. [58]):

P_{ij}[t+1] = P_{ij}[t] + \frac{1}{\tau}\left(\frac{A}{K} - P_{ij}[t]\right) - u\, P_{ij}[t]\, s_j[t], \qquad (6)

with an upper limit Pmax=1. Multiplying by K and summing over i, we get an equation for the local branching ratio:

\sigma_j[t+1] = \sigma_j[t] + \frac{1}{\tau}\left(A - \sigma_j[t]\right) - u\, \sigma_j[t]\, s_j[t]. \qquad (7)

It has been found that such depressing synapses induce correlations inside the synaptic matrix, affecting the global branching ratio σ[t] = ⟨σj[t]⟩, so that criticality does not occur at the branching ratio σc = 1 but rather when the largest eigenvalue of the synaptic matrix is λc = 1, with σ* = K⟨P*ij⟩ ≈ 1.1 [61].

After examining this diverse literature, it seems that any homeostatic dynamics of the form

W_{ij}[t+1] = W_{ij}[t] + R(W_{ij}[t]) - D(W_{ij}, s_j[t]) \qquad (8)

can self-organize the networks, where R and D are the recovery and depressing processes, for example:

W_{ij}[t+1] = W_{ij}[t] + \frac{1}{\tau} W_{ij}[t] - u\, W_{ij}[t]\, s_j[t]. \qquad (9)

In particular, the simplest mechanism would be

W_{ij}[t+1] = W_{ij}[t] + \frac{1}{\tau} - u\, s_j[t], \qquad (10)

a usual dynamics in SOC models [5, 7]. This means that the full LHG dynamics, and also the full MT dynamics, is a sufficient but not a necessary condition for SOqC.

The average W = ⟨Wij⟩ for this dynamics is

W[t+1] = W[t] + \frac{1}{\tau} - u\, \rho[t], \qquad (11)

where ρ[t] = ⟨si[t]⟩ is the time-dependent network activity. The stationary state is ρ* = 1/(τu), and if τu is large, this means that ρ* = O(1/(τu)) → ρc = 0+. Also, if we use Eq. 1, we get W* = Wc + O(1/(τu)). The dissipative term u should also be small, meaning that, if we desire an absolute separation of time scales, we need 1/τ → 0+, u → 0+, and 1/(τu) → 0, as is usual in other SOC systems [3, 5, 7, 43, 45].

Here, for biological plausibility, it is better to assume a large but finite recovery time, say τ ∈ [100, 10,000] ms, compared with about 1 ms for spikes. Also, to obtain SOqC, u need not be small. We must have A > 1 because A < 1 produces subcritical activity [6, 51, 58]. So, moderate A ∈ [1, 2], u ∈ ]0, 1], and large τ > 1,000 seem to be the coarse tuning conditions for homeostasis. This produces the hovering of the average value W[t] = ⟨Wij[t]⟩ around the critical point Wc, with the characteristic sawtooth oscillations of SOqC and power-law avalanches over some decades.
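
The averaged rule (Eq. 11) can be checked in a few lines at the mean-field level. The activity map used below, ρ[t+1] = (1 − ρ[t]) min(1, W[t]ρ[t]) + h with a small drive h, is an assumed toy form (any map with a transcritical bifurcation at Wc = 1 would serve); the only point is that the homeostatic rule pins the stationary activity at ρ* ≈ 1/(τu) and keeps W close to Wc.

```python
tau, u, h = 1000.0, 0.1, 1e-4     # placeholder hyperparameters and a small external drive
W, rho = 0.5, 1e-3                # start subcritical

for t in range(100000):
    rho = (1.0 - rho) * min(1.0, W * rho) + h   # toy mean-field activity map (assumed form)
    W += 1.0 / tau - u * rho                    # simplest homeostatic rule, Eq. 10 averaged (Eq. 11)

print(rho, 1.0 / (tau * u))   # stationary activity vs the predicted 1/(tau*u)
print(W)                      # settles close to Wc = 1
```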

We observe that the original LHG model [6, 51] had τ ∝ N to produce the infinite separation of time scales in the large-N limit. This, however, did not prevent the SOqC hovering stochastic oscillations in the thermodynamic limit. Moreover, a recovery time proportional to N is a very unrealistic feature for biological synapses. Curiously, if we use a finite τu instead, the oscillations are damped in the thermodynamic limit because the fixed point ρ* = O(1/(τu)), W* = Wc + O(1/(τu)) continues to be an attractive focus, but the demographic noise vanishes. On the other hand, when we use τ ∝ N, the fixed point loses its stability and continues to be perturbed even by the vanishing fluctuations as N → ∞ [58].

As early as 1998, Kinouchi [62] proposed the synaptic dynamics:

W_{ij}[t+1] = W_{ij}[t] + \frac{1}{\tau} W_{ij}[t] - u\, s_j[t], \qquad (12)

with small but finite τ and u. The difference from the former mechanisms is that, as in Eq. 10, depression is not proportional to Wij (but, unlike in Eq. 10, recovery is). He also discussed the several SOC concepts in use at the time and called this kind of homeostatic system self-tuned criticality, which is equivalent to an SOqC system with finite separation of time scales.

Hsu and Beggs [63] studied a model for the activity Ai(t) of the local field potential at electrode i:

A_i[t+1] = H_i[t] + \sum_j P_{ij}[t]\, s_j[t], \qquad (13)

where Hi(t) is a spontaneous activity used to prevent the freezing of the system in the absorbing state (this is similar to a time-dependent SOC drive term h). The probabilistic coupling is Pij ∈ [0,1]. Firing-rate homeostasis and critical homeostasis are achieved by increasing or decreasing H and P if the firing rate is too low or too high compared with a target firing rate s0 = 1/τ0:

H_i[t+1] = \exp\left[-k_S\left(\langle s_i[t]\rangle - s_0\right)\right] H_i[t], \qquad (14)
P_{ij}[t+1] = \exp\left[-k_P\left(\langle s_i[t]\rangle - s_0\right)\right] P_{ij}[t], \qquad (15)

where ⟨·⟩ represents a moving average over a memory window τm.

Hsu and Beggs found that, for kS/kP ≈ 0.5, this dynamics leads to a critical branching ratio σ = 1. They also found that the target firing rate s0 can be maintained by this homeostasis. Equation 15 reminds us of the depressing–recovering synaptic rule of Eq. 9. Indeed, if we examine the small-kP limit (as used by the authors), we have

P_{ij}[t+1] \simeq P_{ij}[t] + \frac{1}{\tau} P_{ij}[t] - u\, P_{ij}[t]\, \langle s_i[t]\rangle, \qquad (16)

where now τ = 1/(kP s0) and u = kP. A similar reasoning applies to the equation for H[t], which could be identified with the homeostatic threshold dynamics of Eq. 60 discussed in Section 4, with H[t] playing the role of θ[t].
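
The identification above is just the first-order expansion of the exponential in Eq. 15, which can be verified numerically (the numbers below are illustrative, not taken from Ref. 63):

```python
import math

k_P, s0 = 0.01, 0.1                 # illustrative hyperparameters
tau, u = 1.0 / (k_P * s0), k_P      # identification used in Eq. 16
P, s_avg = 0.5, 0.3                 # a synaptic strength and a moving-average activity

exact = math.exp(-k_P * (s_avg - s0)) * P      # exponential rule, Eq. 15
linear = P + P / tau - u * P * s_avg           # linearized rule, Eq. 16
print(exact, linear)                           # the two agree to O(k_P**2)
```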

In another article, Hsu et al. [64] extended the model to include distance-dependent connectivity and Hebbian learning. Changing the homeostasis equations to our standard notation, we have

\frac{dH_i(t)}{dt} = \frac{1}{\tau_S}\left(1 - \eta_i(t)\right) H_i(t) - u_S\, H_i(t)\left(\langle s_i\rangle - s_0\right),
\frac{dP_{ij}(t)}{dt} = \frac{1}{\tau}\left(1 - \eta_i(t)\right) P_{ij}(t) - u\, P_{ij}(t)\left(\langle s_i\rangle - s_0\right) - u_D\, D_{ij}\, P_{ij}(t),

where Hi ∈ [0,1] is now a probability of spontaneous firing, s0 is a target average activity, and Dij is the distance between electrodes i and j. The input ratio is ηi(t) = Σj Pij(t). Remember that, for a critical branching process, ηi = 1. These values were chosen as homeostatic targets.

Shew et al. [65] studied experimentally the visual cortex of the turtle and proposed a (complete graph) self-organizing model for the input synapses Ωi and the cortical synapses Wij. The stochastic neurons fire with a linear saturating function:

\mathrm{Prob}(s_i[t+1]=1) = \begin{cases} V_i[t] & \text{if } V_i[t] < 1, \\ 1 & \text{if } V_i[t] \geq 1, \end{cases}
V_i[t] = \Omega_i[t]\, H_i[t] + \frac{1}{N}\sum_j W_{ij}[t]\, s_j[t],

where, like in Eq. 13, Hi accounts for external stimuli. For both types of synapses, they used the discrete time LHG dynamics, Eq. 5, and concluded that the computational model accounts very well for the experimental data.

Hernandez-Urbina and Herrmann [47] studied a discrete time IF model where they define a local measure called node success:

\phi_j[t] = \frac{\sum_i A_{ij}\, s_i[t+1]}{\sum_i A_{ij}},

where A is the adjacency matrix of the network, with Aij=1 if j projects onto i (Aij=0 otherwise). Note that we reversed the indices as compared with the original notation [47]. Observe that ϕj measures how many postsynaptic neurons are excited by the presynaptic neuron j.

The authors then define the node success–driven plasticity (NSDP):

W_{ij}[t+1] = W_{ij}[t] + \frac{1}{\tau}\exp\left(-\phi_j[t]/B\right) - u\, \exp\left(-\Delta t_j/D\right),

where Δtj = t − t̂j is the time difference between the spike of node j occurring at the current time step t and its previous spike, which occurred at t̂j (the last spike), while B and D are constants. Notice that the drive term is larger if the node success is small and the dissipation term is larger if the firing rate (inferred locally as ρ̂ = 1/Δtj) is large [compare with Eq. 8].

They analyzed the relation among the avalanche critical exponents, the largest eigenvalue Λ of the weight matrix, and the data collapse of the avalanche shapes for several network topologies. All results are compatible with (quasi-)criticality. They also found that if NSDP is stopped and STDP is introduced, criticality vanishes, but if the two dynamics act together, criticality survives.
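
A minimal sketch of a single NSDP update is given below. It assumes dense adjacency and weight arrays and binary state vectors; the masking by the adjacency matrix and all parameter values are illustrative choices, not details from Ref. 47.

```python
import numpy as np

def nsdp_step(W, A, s_next, last_spike, t, tau=100.0, u=0.05, B=1.0, D=10.0):
    """One node-success-driven plasticity update (sketch, placeholder parameters).

    W[i, j]: weight from j to i;  A[i, j]: adjacency (1 if j projects onto i).
    s_next:  firing states at time t+1;  last_spike[j]: time of the last spike of j.
    """
    out_degree = A.sum(axis=0)                                           # number of targets of j
    phi = (A * s_next[:, None]).sum(axis=0) / np.maximum(out_degree, 1)  # node success of each j
    dt_since_spike = t - last_spike                                      # Delta t_j
    drive = np.exp(-phi / B) / tau                # larger when the node success is small
    dissipation = u * np.exp(-dt_since_spike / D) # larger when j fires often (small Delta t_j)
    return W + A * (drive - dissipation)[None, :] # update only the existing links from each j
```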

Levina et al. [66] proposed a model in a complete graph in which the branching ratio σ is estimated from the local branching σi of the neuron that initiates an avalanche. The homeostatic rule is to increase the synapses if σi < 1 and to decrease them if σi > 1. The network converges, with SOqC oscillations, to σ* ≈ σc = 1.

2.2 Meta-Plasticity

Peng and Beggs [67] studied a square lattice (K=4) of IF neurons with open boundary conditions. A random neuron receives a small increment of voltage (slow drive). If the voltage of presynaptic neuron j is above a threshold θ=1, we have

V_j[t+1] = V_j[t] - 1,
s_j[t+1] = \Theta\left(V_j[t+1] - \theta\right),
V_i[t+1] = V_i[t] + \frac{1}{K} W_{ij}[t]\, s_j[t],

where Θ is the Heaviside function. The self-organization is made by a LHG dynamics plus a meta-plasticity term:

W_{ij}[t+1] = W_{ij}[t] + \frac{1}{\tau}\left(A - W_{ij}[t]\right) - u\, W_{ij}[t]\, s_j[t],
u_{a+1} = u_a - \left(1 - X_a\right)/N,

where Xa is the fraction of neurons at the boundary that fired during the a-th avalanche and u_{a+1} is the updated value of u after that avalanche. Notice that this meta-plasticity term differs from the MT model of Eq. 3 because the hyperparameter u is updated on a much slower time scale. Peng and Beggs show that this dynamics converges automatically to good values of the parameter u; that is, we no longer need to set the value of u in advance. We observe, however, that Xa is a nonlocal variable.

2.3 Hebbian Synapses

Ever since Donald Hebb's proposal that neurons that fire together wire together [68–70], several attempts have been made to implement this idea in models of self-organization. However, a pure Hebbian mechanism can lead to diverging synapses, so some kind of normalization or decay also needs to be included in Hebbian plasticity.

In 2006, de Arcangelis, Perrone-Capano, and Herrmann introduced a neuronal network with Hebbian synaptic dynamics [71] that we call the APH model. There are several small variations in the models proposed by de Arcangelis et al., but perhaps the simplest one [72] is given by the following neuronal dynamics on a square lattice of L×L neurons: If at time t a presynaptic neuron j has a membrane potential above a firing threshold, Vj[t]>θ, it fires, sending neurotransmitters to all its (nonrefractory) neighbors:

V_i[t+1] = V_i[t] + \bar{W}_{ij}\, V_j[t], \qquad (28)

where W̄ij = Wij/Σl Wlj, with the sum running over the nearest neighbors l of j. Then, neuron j enters a refractory period of one time step. The synaptic self-organizing dynamics is given by

W_{ij}[t+1] = W_{ij}[t] + \frac{1}{\theta}\, \bar{W}_{ij}\, V_j[t] \quad \text{(active synapses)}, \qquad (29)
W_{ij} \to W_{ij} - \frac{1}{N_B}\sum_{ij} \delta W_{ij} \quad \text{(inactive synapses, after an avalanche)}, \qquad (30)

where NB is the total number of bonds and active (inactive) synapses are those used (not used) in Eq. 28. The sum in Eq. 30 runs over all synaptic modifications δWij[t+1] = Wij[t+1] − Wij[t], a step which involves nonlocal information and amounts to a kind of synaptic rescaling. If the synaptic strength falls below some threshold, the synapse is deleted (pruning), so this mechanism sculpts the network architecture. Since co-activation of pre- and postsynaptic neurons makes the synapse grow and inactive synapses are depressed, this is a Hebbian process. Several authors explored the APH model in different contexts, including learning phenomena [72–80].

Çiftçi [81] studied a neuronal SIRS model on the C. elegans neuronal network topology. The spontaneous activation rate (the drive) is h = 1/τ → 0+, and the recovery rate to the susceptible state is q. The author studied the system as a function of q/h (separation of time scales, q ≫ h). The probability that a neuron j activates its neighbor i is Pij (gij = 1 − Pij is the probability of synaptic failure in the author's notation). The synaptic update occurs after an avalanche (of size S) and affects two neighbors that are active at the same time (Hebbian term):

P_{ij}[t+1] = \begin{cases} P_{ij}[t] + \frac{1}{\tau}\frac{1}{S}\left(1 - P_{ij}[t]\right) & \text{if the synapse was not used}, \\ P_{ij}[t] - u\left(1 - \frac{1}{S}\right) P_{ij}[t] & \text{if the synapse was used}. \end{cases}

Çiftçi found robust self-organization to quasicriticality. The author notes, however, that S is nonlocal information.

Uhlig et al. [82] considered the effect of LHG synapses in the presence of an associative Hebbian synaptic matrix. They found that, although the two processes are not irreconcilable, the critical state has detrimental effects on attractor recovery. They interpret this as suggesting that the standard paradigm of memories as fixed-point attractors should be replaced by more general approaches like transient dynamics [83].

2.4 Spike Time–Dependent Plasticity

Rubinov et al. [84] studied a hierarchical modular network of LIF neurons with STDP plasticity. The synapses are modeled by double exponentials:

\frac{dV_i(t)}{dt} = -\left(V_i(t) - E\right) + I + I_i^{\mathrm{syn}}(t),
I_i^{\mathrm{syn}}(t) = \sum_j W_{ij}\, V_0 \sum_{\hat{t}_j}\left[\exp\left(-\frac{t - \hat{t}_j}{\tau_1}\right) - \exp\left(-\frac{t - \hat{t}_j}{\tau_2}\right)\right],

where {t^j} are the presynaptic firing times. Synaptic weight changes at every spike of a presynaptic neuron, following the STDP rule:

\Delta W_{ij} = \begin{cases} A_+(W_{ij}) \exp\left(\frac{\hat{t}_j - \hat{t}_i}{\tau_+}\right) & \text{if } \hat{t}_j < \hat{t}_i, \\ -A_-(W_{ij}) \exp\left(-\frac{\hat{t}_j - \hat{t}_i}{\tau_-}\right) & \text{if } \hat{t}_j \geq \hat{t}_i, \end{cases}

where A+(Wij) and A(Wij) are weight-dependent functions (see Ref. 84 for details). The authors show an association among modularity, low cost of wiring, STDP, and self-organized criticality in a neurobiologically realistic model of neuronal activity.

Del Papa et al. [85] explored the interaction between criticality and learning in the context of self-organized recurrent networks (SORN). The ratio of inhibitory to excitatory neurons is N^I/N^E = 0.2. These neurons interact via W^EE, W^IE, and W^EI synapses (there is no inhibitory self-coupling). The synapses are dynamic, and so are the excitatory thresholds θiE. The neurons evolve as

s_i^E[t+1] = \Theta\left(\sum_j^{N^E} W_{ij}^{EE}[t]\, s_j^E[t] - \sum_k^{N^I} W_{ik}^{EI}\, s_k^I[t] - \theta_i^E[t] + I_i[t] + \eta_i^E[t]\right), \qquad (35)
s_i^I[t+1] = \Theta\left(\sum_j^{N^E} W_{ij}^{IE}\, s_j^E[t] - \theta_i^I + \eta_i^I[t]\right), \qquad (36)

where ηi[t] represents membrane noise. Synapses and thresholds evolve following five combined dynamics:

W_{ij}^{EE}[t+1] = W_{ij}^{EE}[t] + \frac{1}{\tau_{\mathrm{STDP}}}\left[s_i^E[t+1]\, s_j^E[t] - s_j^E[t+1]\, s_i^E[t]\right] \quad \text{(excitatory STDP)}, \qquad (37)
W_{ij}^{EI}[t+1] = W_{ij}^{EI}[t] - \frac{1}{\tau_{\mathrm{iSTDP}}}\, s_j^I[t]\left[1 - s_i^E[t+1]\left(1 + 1/\mu_{\mathrm{IP}}\right)\right] \quad \text{(inhibitory STDP)}, \qquad (38)
W_{ij}[t+1] \to \frac{W_{ij}[t+1]}{\sum_j W_{ij}[t+1]} \quad \text{(synaptic normalization, SN)}, \qquad (39)
p(N^E) = \frac{N^E\left(N^E - 1\right)}{N\left(N - 1\right)}\, p(N) \quad \text{(structural plasticity, SP)}, \qquad (40)
\theta_i^E[t+1] = \theta_i^E[t] + \frac{1}{\tau_{\mathrm{IP}}}\left[s_i^E[t] - \mu_{\mathrm{IP}}\right] \quad \text{(intrinsic plasticity, IP)}, \qquad (41)

where μIP is the desired activity level. In the structural plasticity process, excitatory synapses are added with probability p(N^E). The authors found that this SORN model presents well-behaved power-law avalanche statistics and that the plastic mechanisms are necessary to drive the network to criticality, but not to maintain it there; that is, the plasticity can be turned off after the network reaches the critical region. They also found that noise is essential to produce the avalanches but degrades the learning performance. From this, they conclude that the relation between criticality and learning is more complex than often assumed, and it is not obvious that criticality optimizes learning.
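
As a compact illustration of how such rules look in practice, the sketch below implements three of the five SORN updates (excitatory STDP, synaptic normalization, and intrinsic plasticity) for dense weight matrices and binary state vectors. Inhibitory STDP and structural plasticity are omitted, the clipping of weights is an extra assumption, and all parameter values are placeholders.

```python
import numpy as np

def sorn_plasticity(W_EE, theta_E, sE_prev, sE_next, tau_stdp=1000.0, tau_ip=1000.0, mu_ip=0.1):
    """Sketch of three SORN rules (placeholder parameter values, not those of Ref. 85)."""
    # Excitatory STDP, Eq. 37: potentiate j -> i if j fired at t and i at t+1, depress the reverse
    W_EE = W_EE + (np.outer(sE_next, sE_prev) - np.outer(sE_prev, sE_next)) / tau_stdp
    W_EE = np.clip(W_EE, 0.0, None)                  # keep excitatory weights non-negative (assumption)
    # Synaptic normalization, Eq. 39: the incoming excitatory weights of each neuron sum to one
    W_EE = W_EE / np.maximum(W_EE.sum(axis=1, keepdims=True), 1e-12)
    # Intrinsic plasticity, Eq. 41: the threshold tracks the target activity mu_ip
    theta_E = theta_E + (sE_prev - mu_ip) / tau_ip
    return W_EE, theta_E
```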

Levina et al. [86] studied the combined effect of LHG synapses, homeostatic branching parameter Wh, and STDP:

W_{ij}(t) = u\, J_{ij}(t)\, W_h(t)\, W_{\mathrm{STDP}}(t).

They found that these mechanisms cooperate in extending the robustness of the critical state to variations of the hyperparameter A (see Eq. 2).

Stepp et al. [87] examined a LIF neuronal network with both Markram–Tsodyks dynamics and spike-timing-dependent plasticity (STDP), both excitatory and inhibitory. They found that, although the MT dynamics produces some self-organization, the STDP mechanism increases the robustness of the network criticality.

Delattre et al. [88] included in the STDP synaptic change ΔW+ a resource depletion term:

\Delta W'_+ = \gamma\left(\eta(t)\right) \Delta W_+, \qquad (43)
\gamma\left(\eta(t)\right) = \frac{1 - \exp\left(\frac{\eta^* - \eta(t)}{m}\right)}{1 + \exp\left(\frac{\eta^* - \eta(t)}{m}\right)},

where resource availability η(t) evolves as

\frac{d\eta(t)}{dt} = \frac{1}{\tau_\eta} - \frac{\eta(t)}{\eta_0\left(\alpha(t)\right)\tau_\eta}.

Here, α(t) is a continuous estimator of the network firing rate, τη is the recovery time of the resource availability, and the term η0(α(t)) = (1 + α/k)^(−1) in the denominator ensures that depletion is fast and recovery is slow (k = 20 Hz). They called this mechanism network spiking–dependent plasticity and showed that, in contrast to pure STDP, it leads to power-law avalanches with a branching ratio around one.

2.5 Homeostatic Neurite Growth

Kossio et al. [89] studied IF neurons randomly distributed in a plane, with neurites distributed within circles of radii Ri that evolved according to

\frac{dR_i}{dt} = \frac{1}{\tau} - u \sum_{t_i} \delta(t - t_i), \qquad (46)

where {ti} are the spike times of neuron i, with τ and u constants. Since the connections are given by Wij=gOij, where g is a constant and Oij are the overlapping areas of the synaptic discs, Eq. 46 is not much different from the simple synaptic dynamics of Eq. 10, with constant drive and decay due to spikes.

Tetzlaff et al. [90] studied experimentally neuronal avalanches during the maturation of cell cultures, finding that criticality is achieved in a third stage of the dendrites/axons growth process. They modeled the system using neurons with membrane potential Vi(t)<1 and calcium dynamics Ci(t):

\frac{dV_i(t)}{dt} = -\frac{V_i(t) - V_0}{\tau_V} + \sum_j k_j^{\pm}\, W_{ij}(t)\, \Theta\left(V_j(t) - \eta_j(t)\right), \qquad (47)
\frac{dC_i(t)}{dt} = -\frac{1}{\tau_C} C_i(t) + \beta\, \Theta\left(V_i(t) - \eta_i(t)\right), \qquad (48)

where k+ > 0 (k− < 0) defines excitatory (inhibitory) neurons, and ηj(t) ∈ [0,1] is a random number. Dendritic and axonal spatial distributions are again represented by their radii Ri and Ai, whose dynamics are governed by the calcium dynamics as

\frac{dR_i(t)}{dt} = \frac{1}{\tau_R}\left(C_i(t) - C_{\mathrm{target}}\right),
\frac{dA_i(t)}{dt} = \frac{1}{\tau_A}\left(C_i(t) - C_{\mathrm{target}}\right).

Finally, the effective connection is defined as

W_{ij}(t) = \left[\gamma_1(t) - \tfrac{1}{2}\sin\left(2\gamma_1(t)\right)\right] A_j^2(t) + \left[\gamma_2(t) - \tfrac{1}{2}\sin\left(2\gamma_2(t)\right)\right] R_i^2(t),
\gamma_1(t) = \arccos\left(\frac{A_j^2(t) + D_{ij}^2 - R_i^2(t)}{2 A_j(t) D_{ij}}\right), \quad \gamma_2(t) = \arccos\left(\frac{R_i^2(t) + D_{ij}^2 - A_j^2(t)}{2 R_i(t) D_{ij}}\right),

where Dij is the distance between the neurons. This essentially represents the overlap of the axonal and dendritic zones, which can be understood as an abstract representation for the probability of synapse formation.

3 Dynamic Neuronal Gains

For all-to-all topologies as used in Refs. 6, 51, 53, 55, the number of synapses is N(N1), which means that simulations become impractical for large N. Brochini et al. [55] discovered that, in their model with stochastic neurons, adaptation in a single parameter per neuron (the dynamic gain) is sufficient to self-organize the network. This reduces the number of dynamic equations from O(N2) to O(N), enabling large-scale simulations.

The stochastic neuron has a probabilistic firing function, say, a linear saturating function or a rational function:

P(s=1|V) = \Phi(V) = \Gamma\left(V - \theta\right)\Theta\left(V - \theta\right)\Theta\left(1 - \Gamma(V - \theta)\right) + \Theta\left(\Gamma(V - \theta) - 1\right), \qquad (53)
P(s=1|V) = \Phi(V) = \frac{\Gamma\left(V - \theta\right)}{1 + \Gamma\left(V - \theta\right)}\, \Theta\left(V - \theta\right), \qquad (54)

where s=1 means a spike, V is the membrane potential, θ is the threshold, and Γ is the neuronal gain.

Now, let us assume that each neuron i has its neuronal gain Γi. Several adaptive dynamics work, similar to LHG and even simpler:

\Gamma_i[t+1] = \Gamma_i[t] + \frac{1}{\tau}\left[A - \Gamma_i[t]\right] - u\, \Gamma_i[t]\, s_i[t], \qquad (55)
\Gamma_i[t+1] = \Gamma_i[t] + \frac{1}{\tau}\Gamma_i[t] - u\, \Gamma_i[t]\, s_i[t], \qquad (56)
\Gamma_i[t+1] = \Gamma_i[t] + \frac{1}{\tau} - u\, s_i[t]. \qquad (57)

Costa et al. [91] and Kinouchi et al. [58] studied the stability of the fixed points of the mechanisms given by Eqs. 55 and 56 and concluded that the fixed-point solution (ρ*, Γ*) is of the form ρ* = 0+ + O(1/τ), Γ* = Γc + O(1/τ). The fixed point is a barely stable focus for large τ, which means that demographic noise creates the hovering around the critical point (the sawtooth SOqC stochastic oscillations). The peaks of these oscillations correspond to large excursions into the supercritical region, producing the so-called dragon king avalanches [77].
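
To illustrate the O(N) character of this mechanism, the sketch below keeps a fixed uniform synaptic coupling W and adapts only the N gains through the simplest rule, Eq. 57. It is a toy version (not the implementation of Refs. 55, 58, 91): the firing function is the linear saturating one with θ = 0, the reseeding of activity and all parameter values are arbitrary, and the critical gain of this setup is Γc = 1/W.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: complete graph with fixed uniform coupling W, so the critical gain is Gamma_c = 1/W.
N, W, tau, u, T = 10000, 0.5, 1000.0, 0.1, 20000   # placeholder values
Gamma = rng.uniform(0.5, 1.5, size=N)              # start subcritical (mean gain 1 < Gamma_c = 2)
s = (rng.random(N) < 0.01).astype(float)

Gamma_mean = np.empty(T)
for t in range(T):
    rho = s.mean()
    V = (1.0 - s) * W * rho                         # field seen by each neuron; reset after firing
    prob = np.clip(Gamma * V, 0.0, 1.0)             # linear saturating firing function (theta = 0)
    s = (rng.random(N) < prob).astype(float)
    if not s.any():                                 # reseed one spike if the absorbing state is reached
        s[rng.integers(N)] = 1.0
    Gamma += 1.0 / tau - u * s                      # simplest gain dynamics, Eq. 57
    Gamma_mean[t] = Gamma.mean()

print(Gamma_mean[-2000:].mean(), 1.0 / W)           # <Gamma> hovers slightly above Gamma_c
```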

Zierenberg et al. [92] considered a cellular automaton neuronal model with binary states si and probabilistic synapses Pij[t]=αi[t]Wij, where αi[t] is a homeostatic scaling factor. The homeostasis is given by a negative feedback:

\alpha_i[t+1] = \alpha_i[t] + \frac{1}{\tau_{\mathrm{hp}}}\left(r^* - s_i[t]\right), \qquad (58)

where τhp is the time constant of the homeostatic process and r* is a target activity level. Notice that this mechanism depends only on the activity of the postsynaptic neuron i, not on the presynaptic neuron j as in the LHG model. So, αi[t] plays the same role as the neuronal gain Γi[t] discussed above.

Indeed, for a cellular automaton model similar to those of Refs. 60, 61, a probabilistic synapse with neuronal gains can be written as Pij[t] = Γi[t] Wij. In order to compare with the neuronal gain dynamics, we rewrite Eq. 58 as

\Gamma_i[t+1] = \Gamma_i[t] + \frac{1}{\tau} - u\, s_i[t], \qquad (59)

where τ = τhp/r* and u = 1/τhp. So, in Zierenberg et al., we have a neuronal gain dynamics similar to Eq. 10, with hovering around the critical point and the ubiquitous sawtooth oscillations in α[t] = ⟨αi[t]⟩.

4 Adaptive Firing Thresholds

Girardi-Schappo et al. [93] examined a network with NE = pN = 0.8N excitatory and NI = qN = 0.2N inhibitory stochastic LIF neurons. They found a phase diagram very similar to that of the Brunel model [94], with synchronous regular (SR), asynchronous regular (AR), synchronous irregular (SI), and asynchronous irregular (AI) states. Close to the balanced state g = WII/WEE = p/q = 4, they found an absorbing-active second-order phase transition with a critical point gc = p/q − 1/(qΓWEE). The self-organization of the WII and WEI inhibitory synapses was accomplished by an LHG dynamics.

They noticed, however, that for these stochastic LIF systems the critical point also requires a zero field h = I − (1 − μ)θ, where I is the external input and μ is the leakage parameter. While setting h = 0 at the critical point of spin systems is natural, obtaining a zero field in this case demands self-organization, which is done by an adaptive firing threshold:

\theta_i[t+1] = \theta_i[t] - \frac{1}{\tau_\theta}\theta_i[t] + u_\theta\, \theta_i[t]\, s_i[t]. \qquad (60)

Notice the plus sign in the last term: if the postsynaptic neuron fires (si = 1), then the threshold must increase to hinder new firings. This mechanism is biologically plausible and also explains classical firing rate adaptation. Remembering that ρ = ⟨si⟩ ∼ h^(1/δh) at the critical point, where δh is the field critical exponent, from Eq. 60 we have h ∼ 1/(τθuθ)^δh → 0 for large τθuθ.
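
The underlying fixed-point statement is that Eq. 60 pins the firing rate at ⟨si⟩* = 1/(τθuθ) (set the increments to zero and cancel θi). The toy check below couples Eq. 60 to an assumed, arbitrary firing probability min(1, max(0, I − θ)); only the threshold rule comes from the text, and the result is approximate because correlations between θ and s are neglected.

```python
import random

random.seed(2)
tau_th, u_th, I = 2000.0, 0.05, 0.8   # placeholder values; expected rate 1/(tau_th*u_th) = 0.01
theta, spikes, T = 0.5, 0, 200000

for t in range(T):
    p = min(max(I - theta, 0.0), 1.0)               # toy firing probability (assumed form)
    s = 1.0 if random.random() < p else 0.0
    theta += -theta / tau_th + u_th * theta * s     # adaptive threshold, Eq. 60
    spikes += s

print(spikes / T, 1.0 / (tau_th * u_th))   # firing rate ~ 1/(tau_theta*u_theta), regardless of I
```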

As already seen, Del Papa et al. [85] considered a similar threshold dynamics, Eq. 41. Bienenstock and Lehmann [95] also studied, at the mean-field level, the joint evolution of firing thresholds and dynamic synapses (see Section 6.2).

5 Topological Self-Organization

Consider a cellular automaton model [29, 32, 60, 61] in a network with average degree K and average probabilistic synaptic weight P = ⟨Pij⟩. The critical branching ratio is σ = PK = 1; that is, the critical average weight is Pc = 1/K. Notice that we can study networks with any K, even the complete graph, where Pc = 1/(N − 1). In this network, what is critical is the activity, which does not depend on the topology (the degree K).

In another sense, we call a network topology critical if there is a barely infinite percolating cluster, which for a random network occurs for Kc = 2. Several authors, starting in 2000 with Bornholdt and Rohlf [96], explored the self-organization toward this type of topological criticality [22, 97–104].

So, we can have a critical network with a Wc and any K, or a topologically critical network with a well-defined Kc. The two concepts (activity criticality and topological criticality) are different, but sometimes topological criticality also presents a phase transition with power-law avalanches and critical phenomena. The topological phase transition is continuous and has a critical point, which can be related to the formation of a percolating cluster of nodes, but in the Bornholdt and Rohlf (BR) model it corresponds to an order-chaos phase transition, not to an absorbing-state phase transition.

We present here a more advanced version of the BR model [97]. It follows the idea of adding synapses to neurons with correlated activity and deleting synapses between uncorrelated neurons (see the rule below). The correlation over a time window T is calculated as

C_{ij}[T] = \frac{1}{T+1}\sum_{t=t_0}^{t_0+T} s_i[t]\, s_j[t], \qquad (61)

where the stochastic neurons evolve as

V_i[t+1] = \sum_j W_{ij}\, s_j[t], \qquad (62)
\mathrm{Prob}\left(s_i[t+1] = +1\right) = \Phi(V_i), \qquad (63)
\mathrm{Prob}\left(s_i[t+1] = -1\right) = 1 - \Phi(V_i), \qquad (64)
\Phi(V_i) = \frac{1}{1 + \exp\left(-2\Gamma\left(V_i - \theta_i\right)\right)}. \qquad (65)

The self-organization procedure is as follows:

Choose at random a pair (i,j) of neurons.

Calculate the correlation Cij(T).

Define a threshold α. If Cij[T] > α, neuron i receives a new link Wij from site j, randomly drawn from a uniform distribution on [−1, 1]; if Cij[T] < α, the link is deleted (a minimal sketch of this rewiring step is given after the list).

Then, continue updating the network state {si} and self-organizing the network.
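
The sketch below implements one such rewiring step, following the rule stated in the list above. The storage of the state history as a (T+1) × N array of ±1 entries, the dense weight matrix, and the value of α are assumptions made for the illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def rewire_step(W, states, alpha=0.2):
    """One BR-type rewiring step (sketch). W[i, j] is the link from j to i (0 if absent);
    states is a (T+1) x N array with entries +1/-1 recorded from the neuronal dynamics."""
    N = W.shape[0]
    i, j = rng.choice(N, size=2, replace=False)
    C_ij = np.mean(states[:, i] * states[:, j])     # time-averaged correlation, Eq. 61
    if C_ij > alpha:
        W[i, j] = rng.uniform(-1.0, 1.0)            # i receives a new link from j
    else:
        W[i, j] = 0.0                               # the link is deleted
    return W
```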

Interesting analytic results for this class of topological models were obtained by Droste et al. [105]. The self-organized connectivity is about K ≈ Kc ≈ 2, where the order-chaos transition occurs. We must notice, however, that K ≈ 2 seems to be a very low degree for biological neuronal networks. Kuehn [106] studied how the topological dynamics time scale τ and the noise level D affect the BR model, finding that optimal convergence to the critical point occurs for finite values τ_opt and D_opt.

Zeng et al. [107] combined the rewiring rules of the BR model with the neuronal dynamics of the APH model. They obtained an interesting result: the final topology is a small-world network with a large number of neighbors, say K ≈ 100. This avoids the criticism made above about the low degree K ≈ 2 of the BR model.

6 Self-Organization to Other Phase Transitions

6.1 First-Order Transition

Mejias et al. [108] studied a neuronal population model with firing rate ν(t), which can be written in terms of the firing density ρ=ν/νmax:

\tau_\rho \frac{d\rho}{dt} = -\rho + S\left(W(t)\rho - \theta\right) + D_\eta\, \eta(t),

where S(z)=(1/2)[1+tanh(z)] is a (deterministic) firing function, η(t) is a zero-mean Gaussian noise, and Dη is a noise amplitude. They used a depressing average synaptic weight inspired by a noisy LHG model:

\frac{dW(t)}{dt} = \frac{1}{\tau}\left[1 - W(t)\right] - u\, W(t)\rho(t) + D_W\, \eta(t),

where DW is the synaptic noise amplitude. Within a certain range of noise, they observed up-down states with irregular intervals, leading to a distribution of permanence times T in the up state P(T) ∼ T^(−3/2). Notice that this model already starts from the mean-field equations; it is not a microscopic model (although a microscopic model perhaps could be constructed from it).

Millman et al. [109] obtained similar results at a first-order phase transition, but now in a random network of LIF neurons with average of K neighbors and chemical synapses. The synapses follow the LHG mechanism:

\frac{dW_{ij}(t)}{dt} = \frac{1}{\tau}\left[A - W_{ij}(t)\right] - u\, W_{ij}(t)\, s_j(t),

where Wij(t) = pr Uij(t) in the authors' notation (pr is the probability of vesicle release and Uij(t) the synaptic resources) and A = pr. They found that the branching ratio is close to one in the up state, with power-law avalanches with size exponent 3/2 and lifetime exponent 2.

Di Santo et al. [110, 111] and Buendía et al. [7, 46] studied the self-organization toward a first-order phase transition (called self-organized bistability or SOB). The simplest self-organizing dynamics was used in a two-dimensional model:

\frac{\partial\rho(x,t)}{\partial t} = \left[a + \omega E(x,t)\right]\rho(x,t) - b\rho^2(x,t) - \rho^3(x,t) + D\nabla^2\rho(x,t) + \eta(x,t), \qquad (69)
\frac{\partial E(x,t)}{\partial t} = \nabla^2\rho(x,t) + \frac{1}{\tau}\left[A - E(x,t)\right] - u\,\rho(x,t), \qquad (70)

where ω, a > 0 and b < 0 are constants, A is the maximum level of charging, D is the diffusion constant, and η(x,t) is a zero-mean Gaussian (demographic) noise with amplitude proportional to √ρ. The authors' original notation is h = 1/τ, ϵ = u, and E is a (former) control parameter. In the limit 1/τ → 0+, u → 0+, 1/(τu) → 0, this self-organization is conservative and can produce a tuning to the Maxwell point, with power-law avalanches (with mean-field exponents) and dragon-king quasi-periodic events.

Relaxing the conditions of infinite separation of time scales and bulk conservation, the authors studied the model with an LHG dynamics [7, 46, 111]:

\frac{\partial\rho(x,t)}{\partial t} = \left[a + W(x,t)\right]\rho(x,t) - b\rho^2(x,t) - \rho^3(x,t) + I + D\nabla^2\rho(x,t) + \eta(x,t),
\frac{\partial W(x,t)}{\partial t} = \frac{1}{\tau}\left[A - W(x,t)\right] - u\, W(x,t)\,\rho(x,t),

where W is the synaptic weight and I is a small input. They found that this is the equivalent SOqC version for first-order phase transitions, obtaining hysteretic up-down activity, which has been called self-organized collective oscillations (SOCOs) [7, 46, 111]. They also observed bistability phenomena.

Cowan et al. [112] also found hysteresis cycles due to bistability in an IF model, arising from the combination of an excitatory feedback loop with anti-Hebbian synapses in its input pathway. This leads to avalanches both in the up state and in the down state, each with power-law statistics (size exponents close to 3/2). The hysteresis loop leads to a sawtooth oscillation in the average synaptic weight. This is similar to the SOCO scenario.

6.2 Hopf Bifurcation

Absorbing-active phase transitions are associated with transcritical bifurcations in the low-dimensional mean-field description of the order parameter. Other bifurcations (say, between fixed points and periodic orbits) can also appear in the low-dimensional reduction of systems exhibiting other phase transitions, such as those between steady states and collective oscillations. They are critical in the sense that they present phenomena like critical slowing down (power-law relaxation to the stationary state) and critical exponents. Some authors explored homeostatic self-organization toward such bifurcation lines.

In what can be considered a precursor in this field, Bienenstock and Lehmann [95] proposed to apply a Hebbian-like dynamics at the level of rate dynamics to the Wilson–Cowan equations, having shown that the model self-organizes near a Hopf bifurcation to/from oscillatory dynamics.

The model has excitatory and inhibitory stochastic neurons. The neuronal equations are

V_i^E(t) = \sum_j W_{ij}^{EE}\, s_j^E(t) + \sum_j W_{ij}^{EI}\, s_j^I(t) - \theta_i^E,
V_i^I(t) = \sum_j W_{ij}^{IE}\, s_j^E(t) + \sum_j W_{ij}^{EI}\, s_j^I(t) - \theta_i^I,

where, as before, the binary variable s ∈ {0,1} denotes the firing of the neuron. The update process is an asynchronous (Glauber) dynamics:

P(s=1|V) = \frac{1}{2}\left[1 + \tanh\left(\Gamma V(t)\right)\right],

where Γ is the neuronal gain.

The authors proposed a covariance-based regulation for the synapses WEE and WIE and a homeostatic process for the firing thresholds θE(t),θI(t). The homeostatic mechanisms are

\frac{dW^{EE}(t)}{dt} = \frac{1}{\tau_{EE}}\left(c_{EE}(t) - \Theta_{EE}\right), \qquad \frac{dW^{IE}(t)}{dt} = \frac{1}{\tau_{IE}}\left(c_{IE}(t) - \Theta_{IE}\right), \qquad (76)
\frac{d\theta^{E}(t)}{dt} = \frac{1}{\tau_{E}}\left(\rho^E(t) - \Theta_{E}\right), \qquad \frac{d\theta^{I}(t)}{dt} = \frac{1}{\tau_{I}}\left(\rho^E(t) - \Theta_{I}\right), \qquad (77)

where cEE ≡ ⟨(ρE(t) − ⟨ρE(t)⟩)²⟩ is the variance of the excitatory activity ρE(t), cIE ≡ ⟨(ρE(t) − ⟨ρE⟩)(ρI(t) − ⟨ρI⟩)⟩ is the excitatory–inhibitory covariance, τEE, τIE, τE, τI are time constants, and ΘEE, ΘIE, ΘE, ΘI are target constants.

The authors show that there are Hopf and saddle-node bifurcation lines in this system and that the regulated system self-organizes at the crossing of these lines. So, the system stays very close to the oscillatory bifurcation, showing great sensitivity to external inputs.

As commented, this 1998 article pioneered the search for homeostatic self-organization toward a phase transition in a neuronal network, well before the work of Beggs and Plenz [17]. However, we must recognize some deficiencies that later models tried to avoid. First, all the synapses and thresholds share the same value, instead of having an individual dynamics for each one, as we saw in the preceding sections. Most importantly, the network activities ρE and ρI are nonlocal quantities, not locally accessible to Eqs. 76 and 77.

Magnasco et al. [113] examined a very stylized model of neural activity with time-dependent anti-Hebbian synapses:

\frac{dV_i(t)}{dt} = \sum_j W_{ij}(t)\, V_j(t),
\frac{dW_{ij}(t)}{dt} = \frac{1}{\tau}\left(\delta_{ij} - V_i(t)\, V_j(t)\right),

where δij is the Kronecker delta. They found that the system self-organizes around a Hopf bifurcation, showing power-law avalanches and hovering phenomena similar to SOqC.
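
Because the model is so stylized, it is easy to integrate directly. The sketch below uses a plain Euler scheme with arbitrary N, τ, and time step; it only illustrates the reported behavior, namely that the largest real part of the spectrum of W is driven toward zero (a marginal, Hopf-like regime) and then hovers around it.

```python
import numpy as np

rng = np.random.default_rng(5)

N, tau, dt, T = 20, 100.0, 0.01, 200000          # arbitrary size, time scale, and step
V = rng.standard_normal(N)
W = rng.standard_normal((N, N)) / np.sqrt(N)

max_re = []
for t in range(T):
    V = V + dt * (W @ V)                                     # fast activity dynamics
    W = W + (dt / tau) * (np.eye(N) - np.outer(V, V))        # slow anti-Hebbian plasticity
    if t % 1000 == 0:
        max_re.append(np.linalg.eigvals(W).real.max())       # track the spectral edge

print(np.mean(max_re[len(max_re) // 2:]))   # hovers near zero: marginal (Hopf-like) regime
```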

6.3 Edge of Synchronization

Khoshkhou and Montakhab [114] studied a random network with K = ⟨Ki⟩ neighbors. The cells are Izhikevich neurons described by

\frac{dV_i(t)}{dt} = 0.04 V_i^2(t) + 5 V_i(t) + 140 - u_i(t) + I + I_i^{\mathrm{syn}}(t), \qquad (80)
\frac{du_i(t)}{dt} = a\left(b V_i(t) - u_i(t)\right),
\text{if } V_i \geq 30 \text{, then } V_i \to c \text{ and } u_i \to u_i + d.

The parameters a,b,c, and d are chosen to have regular spiking excitatory neurons and fast spiking inhibitory neurons. The synaptic input is composed of chemical double-exponential pulses with time constants τs and τf:

I_i^{\mathrm{syn}} = \frac{V_0 - V_i}{K_i\left(\tau_s - \tau_f\right)}\sum_j W_{ij}\left[\exp\left(-\frac{t - (t_j + \tau_{ij})}{\tau_s}\right) - \exp\left(-\frac{t - (t_j + \tau_{ij})}{\tau_f}\right)\right],

where τij are axonal delays from j to i, V0 is the reversal potential of the synapses, and Ki is the in-degree of node i.

The inhibitory synapses are fixed, but the excitatory ones evolve with an STDP dynamics. If the firing time difference is Δt = t_post − t_pre, then, when the postsynaptic neuron i fires, the synapses change by

\Delta W_{ij} = \begin{cases} A_+\left(W_{\max} - W_{ij}\right)\exp\left(-\frac{\Delta t - \tau_{ij}}{\tau_+}\right) & \text{if } \Delta t > \tau_{ij}, \\ -A_-\left(W_{\max} - W_{ij}\right)\exp\left(\frac{\Delta t - \tau_{ij}}{\tau_-}\right) & \text{if } \Delta t \leq \tau_{ij}. \end{cases}

This system presents a transition from out-of-phase to synchronized spiking. The authors show that the STDP dynamics robustly self-organizes the system to the border of this transition, where critical features like avalanches (coexisting with oscillations) appear.
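
For reference, the delay-corrected STDP window above can be written as a small function (a sketch only: the soft-bound factor follows the equation as reconstructed here, and all constants are placeholders):

```python
import math

def stdp_update(W, delta_t, tau_axonal, W_max=1.0, A_plus=0.01, A_minus=0.012,
                tau_plus=20.0, tau_minus=20.0):
    """Delay-corrected STDP window of Section 6.3 (placeholder constants, times in ms)."""
    if delta_t > tau_axonal:   # presynaptic spike arrives before the postsynaptic one: potentiation
        return W + A_plus * (W_max - W) * math.exp(-(delta_t - tau_axonal) / tau_plus)
    # otherwise it arrives after the postsynaptic spike: depression
    return W - A_minus * (W_max - W) * math.exp((delta_t - tau_axonal) / tau_minus)
```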

7 Concluding Remarks

In this review, we described several examples of self-organization mechanisms that drive neuronal networks to the border of a phase transition (mostly a second-order absorbing phase transition, but also to first-order, synchronization, Hopf, and order-chaos transitions). Surprisingly, for all cases, it is possible to detect neuronal avalanches with mean-field exponents similar to those obtained in the experiments of Beggs and Plenz [17].

By using a standardized notation, we recognized several common features between the proposed homeostatic mechanisms. Most of them are variants of the fundamental drive-dissipation dynamics of SOC and SOqC and can be grouped into a few classes.

Following Hernandez-Urbina and Herrmann [47], we stress that the coarse tuning on hyperparameters of homeostatic SOqC is not equivalent to the fine-tuning of the original control parameter. This homeostasis is a bona-fide self-organization, in the same sense that the regulation of body temperature is self-organized (although presumably there are hyperparameters in that regulation). The advantage of these explicit homeostatic mechanisms is that they are biologically inspired and could be studied in future experiments to determine which are more relevant to cortical activity.

Due to nonconservative dynamics and the lack of an infinite separation of time scales, all these mechanisms lead to SOqC [5–7], not SOC. In particular, conservative sandpile models should not be used to model neuronal avalanches because neurons are not conservative. The presence of SOqC is revealed by stochastic sawtooth oscillations in the former control parameter, leading to large excursions into the supercritical and subcritical phases. However, hovering around the critical point seems to be sufficient to account for the current experimental data. Also, perhaps the omnipresent stochastic oscillations could be detected experimentally (some authors conjecture that they are the basis for brain rhythms [91]).

One suggestion for further research is to eliminate nonlocal variables in the homeostatic mechanisms. Another is to study how the branching ratio σ, or better, the largest eigenvalue Λ of the synaptic matrix, depends on the self-organization hyperparameters (as done in Ref. 61). As several results in this review have shown, the dependence of criticality on the hyperparameters is always weaker than the dependence on the original control parameter. Finally, one could devise local metaplasticity rules for the hyperparameters, similarly to Peng and Beggs [67] (whose rule, however, is unfortunately nonlocal). An intuitive possibility is that, at each level of metaplasticity, the need for coarse tuning of the hyperparameters decreases and criticality becomes more robust.

Author Contributions

OK and MC contributed to conception and design of the study; RP organized the database of revised articles and made Figure 1; OK and MC wrote the manuscript. All authors contributed to manuscript revision, and read and approved the submitted version.

Funding

This article was produced as part of the activities of FAPESP Research, Innovation, and Dissemination Center for Neuromathematics (Grant No. 2013/07699-0, São Paulo Research Foundation). We acknowledge the financial support from CNPq (Grant Nos. 425329/2018-6, 301744/2018-1 and 2018/20277-0), FACEPE (Grant No. APQ-0642-1.05/18), and Center for Natural and Artificial Information Processing Systems (CNAIPS)-USP. Support from CAPES (Grant Nos. 88882.378804/2019-01 and 88882.347522/2010-01) and FAPESP (Grant Nos. 2018/20277-0 and 2019/12746-3) is also gratefully acknowledged.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

The authors thank Miguel Muñoz for discussions and advice.

References

1. Bak P, Tang C, Wiesenfeld K. Self-organized criticality: an explanation of the 1/f noise. Phys Rev Lett (1987). 59:381. doi:10.1103/physrevlett.59.381


2. Wilkinson D, Willemsen JF. Invasion percolation: a new form of percolation theory. J Phys A Math Gen (1983). 16:3365. doi:10.1088/0305-4470/16/14/028


3. Jensen HJ. Self-organized criticality: emergent complex behavior in physical and biological systems. Vol. 10. Cambridge: Cambridge University Press (1998).


4. Pruessner G. Self-organised criticality: theory, models and characterization. Cambridge: Cambridge University Press (2012).


5. Bonachela JA, Muñoz MA. Self-organization without conservation: true or just apparent scale-invariance? J Stat Mech (2009) 2009:P09009. doi:10.1088/1742-5468/2009/09/P09009


6. Bonachela JA, De Franciscis S, Torres JJ, Muñoz MA. Self-organization without conservation: are neuronal avalanches generically critical? J Stat Mech 2010 (2010). P02015. doi:10.1088/1742-5468/2010/02/P02015


7. Buendía V, di Santo S, Bonachela JA, Muñoz MA. Feedback mechanisms for self-organization to the edge of a phase transition. Front Phys 8 (2020). 333. doi:10.3389/fphy.2020.00333


8. Palmieri L, Jensen HJ. The emergence of weak criticality in soc systems. Epl 123 (2018). 20002. doi:10.1209/0295-5075/123/20002


9. Palmieri L, Jensen HJ. The forest fire model: the subtleties of criticality and scale invariance. Front Phys 8 (2020). 257. doi:10.3389/fphy.2020.00257


10. Turing AM. Computing machinery and intelligence. Mind (1950). LIX:433. doi:10.1093/mind/lix.236.433


11. Usher M, Stemmler M, Olami Z. Dynamic pattern formation leads to 1/f noise in neural populations. Phys Rev Lett (1995). 74:326. doi:10.1103/PhysRevLett.74.326


12. Corral Á, Pérez CJ, Díaz-Guilera A, Arenas A. Synchronization in a lattice model of pulse-coupled oscillators. Phys Rev Lett (1995). 75:3697. doi:10.1103/PhysRevLett.75.3697


13. Bottani S. Pulse-coupled relaxation oscillators: from biological synchronization to self-organized criticality. Phys Rev Lett (1995). 74:4189. doi:10.1103/PhysRevLett.74.4189


14. Chen D-M, Wu S, Guo A, Yang ZR. Self-organized criticality in a cellular automaton model of pulse-coupled integrate-and-fire neurons. J Phys A Math Gen (1995). 28:5177. doi:10.1088/0305-4470/28/18/009


15. Herz AVM, Hopfield JJ. Earthquake cycles and neural reverberations: collective oscillations in systems with pulse-coupled threshold elements. Phys Rev Lett (1995). 75:1222. doi:10.1103/PhysRevLett.75.1222


16. Middleton AA, Tang C. Self-organized criticality in nonconserved systems. Phys Rev Lett (1995). 74:742. doi:10.1103/PhysRevLett.74.742


17. Beggs JM, Plenz D. Neuronal avalanches in neocortical circuits. J Neurosci 23 (2003). 11167–77. doi:10.1523/JNEUROSCI.23-35-11167.2003


18. Stassinopoulos D, Bak P. Democratic reinforcement: a principle for brain function. Phys Rev E (1995). 51:5033. doi:10.1103/physreve.51.5033


19. Chialvo DR, Bak P. Learning from mistakes. Neuroscience (1999). 90:1137–48. doi:10.1016/S0306-4522(98)00472-2


20. Bak P, Chialvo DR. Adaptive learning by extremal dynamics and negative feedback. Phys Rev E (2001). 63:031912. doi:10.1103/PhysRevE.63.031912


21. Chialvo DR. Emergent complex neural dynamics. Nat Phys (2010). 6:744–50. doi:10.1038/nphys1803


22. Hesse J, Gross T. Self-organized criticality as a fundamental property of neural systems. Front Syst Neurosci (2014). 8:166. doi:10.3389/fnsys.2014.00166


23. Plenz D, Niebur E, editors. Criticality in neural systems. Hoboken: John Wiley & Sons (2014).


24. Cocchi L, Gollo LL, Zalesky A, Breakspear M. Criticality in the brain: a synthesis of neurobiology, models and cognition. Prog Neurobiol (2017). 158:132–52. doi:10.1016/j.pneurobio.2017.07.002


25. Muñoz MA. Colloquium: criticality and dynamical scaling in living systems. Rev Mod Phys (2018). 90:031001. doi:10.1103/RevModPhys.90.031001


26. Wilting J, Priesemann V. 25 years of criticality in neuroscience—established results, open controversies, novel concepts. Curr Opin Neurobiol (2019). 58:105–11. doi:10.1016/j.conb.2019.08.002


27. Zeraati R, Priesemann V, Levina A. Self-organization toward criticality by synaptic plasticity. arXiv (2020) 2010.07888.


28. Haldeman C, Beggs JM. Critical branching captures activity in living neural networks and maximizes the number of metastable states. Phys Rev Lett (2005). 94:058101. doi:10.1103/PhysRevLett.94.058101


29. Kinouchi O, Copelli M. Optimal dynamical range of excitable networks at criticality. Nat Phys (2006). 2:348–351. doi:10.1038/nphys289


30. Copelli M, Campos PRA. Excitable scale free networks. Eur Phys J B (2007). 56:273–78. doi:10.1140/epjb/e2007-00114-7


31. Wu A-C, Xu X-J, Wang Y-H. Excitable Greenberg-Hastings cellular automaton model on scale-free networks. Phys Rev E (2007). 75:032901. doi:10.1103/PhysRevE.75.032901


32. Assis VRV, Copelli M. Dynamic range of hypercubic stochastic excitable media. Phys Rev E (2008). 77:011923. doi:10.1103/PhysRevE.77.011923


33. Beggs JM. The criticality hypothesis: how local cortical networks might optimize information processing. Phil Trans R Soc A (2008). 366:329–43. doi:10.1098/rsta.2007.2092


34. Ribeiro TL, Copelli M. Deterministic excitable media under Poisson drive: Power law responses, spiral waves, and dynamic range. Phys Rev E (2008). 77:051911. doi:10.1103/PhysRevE.77.051911


35. Shew WL, Yang H, Petermann T, Roy R, Plenz D. Neuronal avalanches imply maximum dynamic range in cortical networks at criticality. J Neurosci (2009). 29:15595–600. doi:10.1523/JNEUROSCI.3864-09.2009


36. Larremore DB, Shew WL, Restrepo JG. Predicting criticality and dynamic range in complex networks: effects of topology. Phys Rev Lett (2018). 106:058101. doi:10.1103/PhysRevLett.106.058101


37. Shew WL, Yang H, Yu S, Roy R, Plenz D. Information capacity and transmission are maximized in balanced cortical networks with neuronal avalanches. J Neurosci (2011). 31:55–63. doi:10.1523/JNEUROSCI.4637-10.2011


38. Shew WL, Plenz D. The functional benefits of criticality in the cortex. Neuroscientist (2013). 19:88–100. doi:10.1177/1073858412445487


39. Mosqueiro TS, Maia LP. Optimal channel efficiency in a sensory network. Phys Rev E (2013). 88:012712. doi:10.1103/PhysRevE.88.012712


40. Wang C-Y, Wu ZX, Chen MZQ. Approximate-master-equation approach for the Kinouchi-Copelli neural model on networks. Phys Rev E (2017). 95:012310. doi:10.1103/PhysRevE.95.012310


41. Zierenberg J, Wilting J, Priesemann V, Levina A. Tailored ensembles of neural networks optimize sensitivity to stimulus statistics. Phys Rev Res (2020). 2:013115. doi:10.1103/physrevresearch.2.013115


42. Galera EF, Kinouchi O. Physics of psychophysics: two coupled square lattices of spiking neurons have huge dynamic range at criticality. arXiv (2020). 11254.


43. Dickman R., Vespignani A, Zapperi S. Self-organized criticality as an absorbing-state phase transition. Phys Rev E (1998). 57:5095. doi:10.1103/PhysRevE.57.5095


44. Muñoz MA, Dickman R, Vespignani A, Zapperi S. Avalanche and spreading exponents in systems with absorbing states. Phys Rev E (1999). 59:6175. doi:10.1103/PhysRevE.59.6175


45. Dickman R, Muñoz MA, Vespignani A, Zapperi S. Paths to self-organized criticality. Braz J Phys (2000). 30:27–41. doi:10.1590/S0103-97332000000100004

46. Buendía V, di Santo S, Villegas P, Burioni R, Muñoz MA. Self-organized bistability and its possible relevance for brain dynamics. Phys Rev Res (2020). 2:013318. doi:10.1103/PhysRevResearch.2.013318

47. Hernandez-Urbina V, Herrmann JM. Self-organized criticality via retro-synaptic signals. Front Phys (2017). 4:54. doi:10.3389/fphy.2016.00054

48. Lübeck S. Universal scaling behavior of non-equilibrium phase transitions. Int J Mod Phys B (2004). 18:3977–4118. doi:10.1142/s0217979204027748

49. Markram H, Tsodyks M. Redistribution of synaptic efficacy between neocortical pyramidal neurons. Nature (1996). 382:807–10. doi:10.1038/382807a0

50. Tsodyks M, Pawelzik K, Markram H. Neural networks with dynamic synapses. Neural Comput (1998). 10:821–35. doi:10.1162/089976698300017502

51. Levina A, Herrmann JM, Geisel T. Dynamical synapses causing self-organized criticality in neural networks. Nat Phys (2007a). 3:857–860. doi:10.1038/nphys758

52. Levina A, Herrmann JM. Dynamical synapses give rise to a power-law distribution of neuronal avalanches. Adv Neural Inf Process Syst (2006). 771–8.

53. Levina A, Herrmann JM, Geisel T. Phase transitions towards criticality in a neural system with adaptive interactions. Phys Rev Lett (2009). 102:118110. doi:10.1103/PhysRevLett.102.118110

54. Wang SJ, Zhou C. Hierarchical modular structure enhances the robustness of self-organized criticality in neural networks. New J Phys (2012). 14:023005. doi:10.1088/1367-2630/14/2/023005

55. Brochini L, de Andrade Costa A, Abadi M, Roque AC, Stolfi J, Kinouchi O. Phase transitions and self-organized criticality in networks of stochastic spiking neurons. Sci Rep (2016). 6:35831. doi:10.1038/srep35831

56. Gerstner W, van Hemmen JL. Associative memory in a network of 'spiking' neurons. Netw Comput Neural Syst (1992). 3:139–64. doi:10.1088/0954-898X_3_2_004

57. Galves A, Löcherbach E. Infinite systems of interacting chains with memory of variable length - a stochastic model for biological neural nets. J Stat Phys (2013). 151:896–921. doi:10.1007/s10955-013-0733-9

58. Kinouchi O, Brochini L, Costa AA, Campos JGF, Copelli M. Stochastic oscillations and dragon king avalanches in self-organized quasi-critical systems. Sci Rep (2019). 9:1–12. doi:10.1038/s41598-019-40473-1

59. Grassberger P, Kantz H. On a forest fire model with supposed self-organized criticality. J Stat Phys (1991). 63:685–700. doi:10.1007/BF01029205

60. Costa AdA, Copelli M, Kinouchi O. Can dynamical synapses produce true self-organized criticality? J Stat Mech (2015). 2015:P06004. doi:10.1088/1742-5468/2015/06/P06004

61. Campos JGF, Costa AdA, Copelli M, Kinouchi O. Correlations induced by depressing synapses in critically self-organized networks with quenched dynamics. Phys Rev E (2017). 95:042303. doi:10.1103/PhysRevE.95.042303

62. Kinouchi O. Self-organized (quasi-)criticality: the extremal Feder and Feder model. arXiv (1998). 9802311.

63. Hsu D, Beggs JM. Neuronal avalanches and criticality: a dynamical model for homeostasis. Neurocomputing (2006). 69:1134–36. doi:10.1016/j.neucom.2005.12.060

64. Hsu D, Tang A, Hsu M, Beggs JM. Simple spontaneously active Hebbian learning model: homeostasis of activity and connectivity, and consequences for learning and epileptogenesis. Phys Rev E (2007). 76:041909. doi:10.1103/PhysRevE.76.041909

65. Shew WL, Clawson WP, Pobst J, Karimipanah Y, Wright NC, Wessel R. Adaptation to sensory input tunes visual cortex to criticality. Nat Phys (2015). 11:659–663. doi:10.1038/nphys3370

66. Levina A, Ernst U, Herrmann JM. Criticality of avalanche dynamics in adaptive recurrent networks. Neurocomputing (2007b). 70:1877–1881. doi:10.1016/j.neucom.2006.10.056

67. Peng J, Beggs JM. Attaining and maintaining criticality in a neuronal network model. Physica A Stat Mech Appl (2013). 392:1611–20. doi:10.1016/j.physa.2012.11.013

68. Hebb DO. The organization of behavior: a neuropsychological theory. Hoboken: J. Wiley; Chapman & Hall (1949).

69. Turrigiano GG, Nelson SB. Hebb and homeostasis in neuronal plasticity. Curr Opin Neurobiol (2000). 10:358–64. doi:10.1016/S0959-4388(00)00091-X

70. Kuriscak E, Marsalek P, Stroffek J, Toth PG. Biological context of Hebb learning in artificial neural networks, a review. Neurocomputing (2015). 152:27–35. doi:10.1016/j.neucom.2014.11.022

71. de Arcangelis L, Perrone-Capano C, Herrmann HJ. Self-organized criticality model for brain plasticity. Phys Rev Lett (2006). 96:028107. doi:10.1103/PhysRevLett.96.028107

72. Lombardi F, Herrmann HJ, de Arcangelis L. Balance of excitation and inhibition determines 1/f power spectrum in neuronal networks. Chaos (2017). 27:047402. doi:10.1063/1.4979043

73. Pellegrini GL, de Arcangelis L, Herrmann HJ, Perrone-Capano C. Activity-dependent neural network model on scale-free networks. Phys Rev E (2007). 76:016107. doi:10.1103/PhysRevE.76.016107

74. de Arcangelis L. Are dragon-king neuronal avalanches dungeons for self-organized brain activity? Eur Phys J Spec Top (2012). 205:243–57. doi:10.1140/epjst/e2012-01574-6

75. de Arcangelis L, Herrmann HJ. Activity-dependent neuronal model on complex networks. Front Physiol (2012). 3:62. doi:10.3389/fphys.2012.00062

76. Lombardi F, Herrmann HJ, Perrone-Capano C, Plenz D, De Arcangelis L. Balance between excitation and inhibition controls the temporal organization of neuronal avalanches. Phys Rev Lett (2012). 108:228703. doi:10.1103/PhysRevLett.108.228703

77. de Arcangelis L, Lombardi F, Herrmann HJ. Criticality in the brain. J Stat Mech (2014). 2014:P03026. doi:10.1088/1742-5468/2014/03/P03026

78. Lombardi F, Herrmann HJ, Plenz D, De Arcangelis L. On the temporal organization of neuronal avalanches. Front Syst Neurosci (2014). 8:204. doi:10.3389/fnsys.2014.00204

79. Lombardi F, de Arcangelis L. Temporal organization of ongoing brain activity. Eur Phys J Spec Top (2014). 223:2119–2130. doi:10.1140/epjst/e2014-02253-4

80. Van Kessenich LM, De Arcangelis L, Herrmann HJ. Synaptic plasticity and neuronal refractory time cause scaling behaviour of neuronal avalanches. Sci Rep (2016). 6:32071. doi:10.1038/srep32071

81. Çiftçi K. Synaptic noise facilitates the emergence of self-organized criticality in the Caenorhabditis elegans neuronal network. Netw Comput Neural Syst (2018). 29:1–19. doi:10.1080/0954898X.2018.1535721

82. Uhlig M, Levina A, Geisel T, Herrmann JM. Critical dynamics in associative memory networks. Front Comput Neurosci (2013). 7:87. doi:10.3389/fncom.2013.00087

83. Rabinovich M, Huerta R, Laurent G. Neuroscience: transient dynamics for neural processing. Science (2008). 321:48–50. doi:10.1126/science.1155564

84. Rubinov M, Sporns O, Thivierge JP, Breakspear M. Neurobiologically realistic determinants of self-organized criticality in networks of spiking neurons. PLoS Comput Biol (2011). 7:e1002038. doi:10.1371/journal.pcbi.1002038

85. Del Papa B, Priesemann V, Triesch J. Criticality meets learning: criticality signatures in a self-organizing recurrent neural network. PLoS One (2017). 12:e0178683. doi:10.1371/journal.pone.0178683

86. Levina A, Herrmann JM, Geisel T. Theoretical neuroscience of self-organized criticality: from formal approaches to realistic models. In D Plenz, and E Niebur, editors. Criticality in neural systems. Hoboken: Wiley Online Library (2014). 417–36. doi:10.1002/9783527651009.ch20

87. Stepp N, Plenz D, Srinivasa N. Synaptic plasticity enables adaptive self-tuning critical networks. PLoS Comput Biol (2015). 11:e1004043. doi:10.1371/journal.pcbi.1004043

88. Delattre V, Keller D, Perich M, Markram H, Muller EB. Network-timing-dependent plasticity. Front Cell Neurosci (2015). 9:220. doi:10.3389/fncel.2015.00220

89. Kossio FYK, Goedeke S, van den Akker B, Ibarz B, Memmesheimer RM. Growing critical: self-organized criticality in a developing neural system. Phys Rev Lett (2018). 121:058301. doi:10.1103/PhysRevLett.121.058301

90. Tetzlaff C, Okujeni S, Egert U, Wörgötter F, Butz M. Self-organized criticality in developing neuronal networks. PLoS Comput Biol (2010). 6:e1001013. doi:10.1371/journal.pcbi.1001013

91. Costa AdA, Brochini L, Kinouchi O. Self-organized supercriticality and oscillations in networks of stochastic spiking neurons. Entropy (2017). 19:399. doi:10.3390/e19080399

92. Zierenberg J, Wilting J, Priesemann V. Homeostatic plasticity and external input shape neural network dynamics. Phys Rev X (2018). 8:031018. doi:10.1103/PhysRevX.8.031018

93. Girardi-Schappo M, Brochini L, Costa AA, Carvalho TT, Kinouchi O. Synaptic balance due to homeostatically self-organized quasicritical dynamics. Phys Rev Res (2020). 2:012042. doi:10.1103/PhysRevResearch.2.012042

94. Brunel N. Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. J Comput Neurosci (2000). 8:183–208. doi:10.1023/A:1008925309027

95. Bienenstock E, Lehmann D. Regulated criticality in the brain? Advs Complex Syst (1998). 01:361–84. doi:10.1142/S0219525998000223

96. Bornholdt S, Rohlf T. Topological evolution of dynamical networks: global criticality from local dynamics. Phys Rev Lett (2000). 84:6114. doi:10.1103/PhysRevLett.84.6114

97. Bornholdt S, Röhl T. Self-organized critical neural networks. Phys Rev E (2003). 67:066118. doi:10.1103/PhysRevE.67.066118

98. Rohlf T. Self-organization of heterogeneous topology and symmetry breaking in networks with adaptive thresholds and rewiring. Europhys Lett (2008). 84:10004. doi:10.1209/0295-5075/84/10004

99. Gross T, Sayama H. Adaptive networks. Berlin: Springer (2009). 1–8. doi:10.1007/978-3-642-01284-6_1

100. Rohlf T, Bornholdt S. Self-organized criticality and adaptation in discrete dynamical networks. Adaptive Networks. Berlin: Springer (2009). 73–106.

101. Meisel C, Gross T. Adaptive self-organization in a realistic neural network model. Phys Rev E (2009). 80:061917. doi:10.1103/PhysRevE.80.061917

102. Min L, Gang Z, Tian-Lun C. Influence of selective edge removal and refractory period in a self-organized critical neuron model. Commun Theor Phys (2009). 52:351. doi:10.1088/0253-6102/52/2/31

103. Rybarsch M, Bornholdt S. Avalanches in self-organized critical neural networks: a minimal model for the neural SOC universality class. PLoS One (2014). 9:e93090. doi:10.1371/journal.pone.0093090

104. Cramer B, Stöckel D, Kreft M, Wibral M, Schemmel J, Meier K, et al. Control of criticality and computation in spiking neuromorphic networks with plasticity. Nat Commun (2020). 11:1–11. doi:10.1038/s41467-020-16548-3

105. Droste F, Do AL, Gross T. Analytical investigation of self-organized criticality in neural networks. J R Soc Interface (2013). 10:20120558. doi:10.1098/rsif.2012.0558

106. Kuehn C. Time-scale and noise optimality in self-organized critical adaptive networks. Phys Rev E (2012). 85:026103. doi:10.1103/PhysRevE.85.026103

107. Zeng H-L, Zhu C-P, Guo Y-D, Teng A, Jia J, Kong H, et al. Power-law spectrum and small-world structure emerge from coupled evolution of neuronal activity and synaptic dynamics. J Phys: Conf Ser (2015). 604:012023. doi:10.1088/1742-6596/604/1/012023

108. Mejias JF, Kappen HJ, Torres JJ. Irregular dynamics in up and down cortical states. PLoS One (2010). 5:e13651. doi:10.1371/journal.pone.0013651

109. Millman D, Mihalas S, Kirkwood A, Niebur E. Self-organized criticality occurs in non-conservative neuronal networks during ‘up’ states. Nat Phys (2010). 6:801–05. doi:10.1038/nphys1757

110. di Santo S, Burioni R, Vezzani A, Muñoz MA. Self-organized bistability associated with first-order phase transitions. Phys Rev Lett (2016). 116:240601. doi:10.1103/PhysRevLett.116.240601

111. Di Santo S, Villegas P, Burioni R, Muñoz MA. Landau–Ginzburg theory of cortex dynamics: scale-free avalanches emerge at the edge of synchronization. Proc Natl Acad Sci USA (2018). 115:E1356–65. doi:10.1073/pnas.1712989115

112. Cowan JD, Neuman J, Kiewiet B, Van Drongelen W. Self-organized criticality in a network of interacting neurons. J Stat Mech (2013). 2013:P04030. doi:10.1088/1742-5468/2013/04/p04030

113. Magnasco MO, Piro O, Cecchi GA. Self-tuned critical anti-Hebbian networks. Phys Rev Lett (2009). 102:258102. doi:10.1103/PhysRevLett.102.258102

114. Khoshkhou M, Montakhab A. Spike-timing-dependent plasticity with axonal delay tunes networks of Izhikevich neurons to the edge of synchronization transition with scale-free avalanches. Front Syst Neurosci (2019). 13:73. doi:10.3389/fnsys.2019.00073

Keywords: self-organized criticality, neuronal avalanches, self-organization, neuronal networks, adaptive networks, homeostasis, synaptic depression, learning

Citation: Kinouchi O, Pazzini R and Copelli M (2020) Mechanisms of Self-Organized Quasicriticality in Neuronal Network Models. Front. Phys. 8:583213. doi: 10.3389/fphy.2020.583213

Received: 14 July 2020; Accepted: 19 October 2020;
Published: 23 December 2020.

Edited by:

Attilio L. Stella, University of Padua, Italy

Reviewed by:

Srutarshi Pradhan, Norwegian University of Science and Technology, Norway
Ignazio Licata, Institute for Scientific Methodology (ISEM), Italy

Copyright © 2020 Kinouchi, Pazzini and Copelli. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Osame Kinouchi, osame@ffclrp.usp.br
