
ORIGINAL RESEARCH article

Front. Syst. Neurosci., 26 November 2020
This article is part of the Research Topic: Deciphering Population Neuronal Dynamics: From Theories to Experiments

Hopf Bifurcation in Mean Field Explains Critical Avalanches in Excitation-Inhibition Balanced Neuronal Networks: A Mechanism for Multiscale Variability

Junhao Liang1,2, Tianshou Zhou2, Changsong Zhou1,3*
  • 1Department of Physics, Centre for Nonlinear Studies, Beijing-Hong Kong-Singapore Joint Centre for Nonlinear and Complex Systems, Institute of Computational and Theoretical Studies, Hong Kong Baptist University, Kowloon Tong, Hong Kong
  • 2Key Laboratory of Computational Mathematics, Guangdong Province, and School of Mathematics, Sun Yat-sen University, Guangzhou, China
  • 3Department of Physics, Zhejiang University, Hangzhou, China

Cortical neural circuits display highly irregular spiking in individual neurons but variably sized collective firing, oscillations and critical avalanches at the population level, all of which have functional importance for information processing. Theoretically, the balance of excitation and inhibition inputs is thought to account for spiking irregularity, while critical avalanches may originate from an underlying phase transition. However, the theoretical reconciliation of these multilevel dynamic aspects in neural circuits remains an open question. Herein, we study an excitation-inhibition (E-I) balanced neuronal network with biologically realistic synaptic kinetics. It can maintain irregular spiking dynamics with different levels of synchrony, and critical avalanches emerge near the synchronous transition point. We propose a novel semi-analytical mean-field theory to derive the field equations governing the network macroscopic dynamics. It reveals that the E-I balanced state of the network, manifesting irregular individual spiking, is characterized by a macroscopic stable state, which can be either a fixed point or a periodic motion, and that the transition is predicted by a Hopf bifurcation in the macroscopic field. Furthermore, by analyzing public data, we find the coexistence of irregular spiking and critical avalanches in the spontaneous spiking activities of mouse cortical slices in vitro, indicating the universality of the observed phenomena. Our theory unveils the mechanism that permits complex neural activities at different spatiotemporal scales to coexist and elucidates a possible origin of the criticality of neural systems. It also provides a novel tool for analyzing the macroscopic dynamics of E-I balanced networks and their relationship to the microscopic counterparts, which can be useful for large-scale modeling and computation of cortical dynamics.

Introduction

The mammalian brain consists of tens of billions of neurons, which process information and communicate through electrophysiological action potentials, also known as spikes. This large number of neurons exhibits diverse spiking behaviors across broad ranges of spatial and temporal scales. Understanding the origin and dynamic mechanism of this complexity is crucial for the advancement of neurobiology, the development of therapies for brain diseases and the future design of brain-inspired intelligent systems.

Two striking features can be simultaneously observed at different levels of cortical neuronal systems: 1) irregularity in spiking times, indicated by the seemingly random spiking times of individual neurons that resemble a Poisson process (Softky and Koch, 1993; Holt et al., 1996); 2) variability in population firing rates, manifested in widely observed collective neural activities such as population oscillations (Brunel and Wang, 2003; Herrmann et al., 2004) and critical neural avalanches (Beggs and Plenz, 2003; Gireesh and Plenz, 2008; Bellay et al., 2015; Shew et al., 2015; Ma et al., 2019). Biologically, the spiking irregularity has been proposed to originate from the balance between excitation (E) and inhibition (I) inputs, so that spiking of neurons is driven by fluctuations (Shu et al., 2003; Okun and Lampl, 2008; Xue et al., 2014), and has been associated with functional advantages in efficient coding and information processing (Denève and Machens, 2016). The emergence of collective cortical activities originates in the fact that neurons interact through recurrent networks (Abeles, 1991), in which dynamic activities can reverberate. As a result, dynamic correlations arise from structural correlations. In particular, even weak pairwise correlation is sufficient to induce strongly correlated collective network activities (Schneidman et al., 2006). Collective neural activities can emerge with different amplitudes and are often organized as critical avalanches of various sizes. These avalanches are cascades of activity bursts in neuronal networks. At criticality, the size and duration of avalanches are approximately distributed according to power laws, with critical exponents satisfying the scaling relation (Friedman et al., 2012; Fontenele et al., 2019). Avalanches in the critical state can maximize informational complexity and variability, and are thought to confer functional advantages in information processing (Kinouchi and Copelli, 2006; Shew et al., 2009, 2011).

The traditional mean-field theory of E-I balanced networks (Van Vreeswijk and Sompolinsky, 1996, 1998; Renart et al., 2010), with binary neurons and instantaneous synapses, explains the spiking irregularity of individual neurons. However, it fails to account for collective neural activities, because it predicts an asynchronous dynamic state with vanishing correlation in unstructured (i.e., random topology) networks. Such vanishing correlation arises from sparse network connectivity (Van Vreeswijk and Sompolinsky, 1998) or from shared excitatory and inhibitory inputs cancelling correlation in dense recurrent networks (Renart et al., 2010). In terms of rate coding, the asynchronous state is not efficient for information processing, as the population firing rate only exhibits a linear response to the input rate (Van Vreeswijk and Sompolinsky, 1996) without firing rate variability on faster time scales. In this case, the whole network acts as a single rate unit for computation. The traditional E-I balanced theory can be generalized in two biologically more plausible directions: structured networks and synaptic kinetics. First, heterogeneous neural network structures (Landau et al., 2016) can induce firing rate variability. For example, clustered network structures (Litwin-Kumar and Doiron, 2012) can induce slow firing rate oscillations and show stimulus-induced variability reductions. Hierarchical modular networks (Wang et al., 2011) can support self-sustained firing rate oscillations across different levels. Spatial networks with a distance-dependent coupling rule can reproduce the distance-tuned correlation relation (Rosenbaum et al., 2017; Darshan et al., 2018) and the emergence of propagating waves (Keane and Gong, 2015; Keane et al., 2018; Gu et al., 2019; Huang et al., 2019) observed in experiments. Second, even in unstructured networks, network firing rate oscillations can be induced by realistic synaptic filtering kinetics (Brunel and Wang, 2003; Yang et al., 2017). Such oscillations typically occur when the synaptic decay time scale of inhibition is slower than that of excitation, which is biologically plausible if the synaptic receptors under consideration are AMPA for excitation and GABA for inhibition (Salin and Prince, 1996; Zhou and Hablitz, 1998). More importantly, only sparse subgroups of neurons may participate in each cycle of the network oscillation, thus preserving the irregular spiking of individual neurons (Brunel, 2000; Brunel and Hakim, 2008). Note, however, that theoretical analysis of the macroscopic dynamics of E-I neural networks with synaptic kinetics is very difficult. Existing theory is very limited, e.g., it requires very special assumptions, such as a Lorentzian distribution of certain parameters (Dumont and Gutkin, 2019).

Critical avalanches can also arise from neural dynamics on unstructured network topologies (Beggs and Plenz, 2003; Kinouchi and Copelli, 2006), but their dynamic origin is still controversial. Previous theories have suggested that critical neural avalanches may arise at the edge of a phase transition. Early studies indicated that it may occur between a quiescent and an active phase, as in critical branching processes (Beggs and Plenz, 2003; Haldeman and Beggs, 2005), while later experimental (Fontenele et al., 2019) and theoretical (di Santo et al., 2018; Dalla Porta and Copelli, 2019) studies proposed that it may occur near the onset of synchrony. Mechanisms other than criticality that generate avalanches with power-law distributions have also been proposed (Martinello et al., 2017; Touboul and Destexhe, 2017; Wilting and Priesemann, 2019b). The emergence of critical avalanches has also been proposed to be closely related to the maintenance of E-I balance (Lombardi et al., 2012; Poil et al., 2012; Yang et al., 2012; Gautam et al., 2015). Nevertheless, the exact relationship between neural criticality and E-I balance remains poorly understood, as previous modeling studies of critical avalanches often did not consider the properties of individual neuronal spiking.

Here, we try to address these important open questions. In particular, how does E-I balance-induced irregular spiking reconcile with collective neural activities? E-I neural networks can organize into a 'sparse synchrony' state (Brunel, 2000; Brunel and Hakim, 2008) in which neurons remain fluctuation-driven and spike irregularly while firing rate oscillations emerge at the population level. However, the relationship between E-I balance and sparse synchrony is less clear. Most importantly, the mechanism by which E-I balance-induced irregular spiking can coexist with critical neural activities (Bellay et al., 2015) in recurrent neural circuits remains unclear. In this work, we first re-examine the dynamics of an integrate-and-fire (IF) E-I neuronal network with realistic synaptic kinetics, which can manifest individually irregular spiking activities at different levels of synchrony. The network firing rate dynamics can be effectively captured by a set of macroscopic field equations derived by a novel semi-analytical mean-field theory. An advantage of our theory is that it is simple and does not require special properties of the model. The synchronous transition point where network firing rate oscillation emerges is predicted by a Hopf bifurcation in the field equations. We find that critical microscopic avalanche dynamics emerges near the onset of synchronization, with critical exponents approximately satisfying the scaling relations, which is a hallmark of criticality (Sethna et al., 2001). The mechanism of critical avalanches can be understood as demographic noise-driven random walks near a macroscopic bifurcation point. On this basis, we propose that the E-I balanced state in the microscopic spiking network corresponds to a stable macroscopic state in the field equations. The asynchronous state, consistent with the traditional theory, corresponds to a stable fixed point, which can be destabilized through a Hopf bifurcation, giving rise to a stable limit cycle corresponding to network firing rate oscillation. As such, the E-I balanced state can incorporate network oscillation at different levels of synchrony, which accounts for the coexistence of variability at both the individual and population scales. To show that the coexistence of irregular spiking and collective critical avalanches is a widespread phenomenon, we empirically verify this coexistence in up-states in the public experimental data of spontaneous spiking activities recorded in mouse somatosensory cortex in vitro (Ito et al., 2016). Scaling relations similar to those in the network model are also found to hold in these critical data sets, further indicating the universality of the phenomena observed in model networks. The theory proposed here explains how collective neural activities coexist with irregular neuron spiking and reveals a possible origin of criticality in neural systems. Compared with previous studies (di Santo et al., 2018; Dalla Porta and Copelli, 2019) showing that critical avalanches exist around the synchronous transition, our model incorporating synaptic kinetics, E-I balance, and irregular spiking of individual neurons provides a more biologically plausible explanation of the nature of this transition. The theory also serves as a novel tool to study the dynamics of IF networks with biologically realistic synaptic filtering kinetics, and thus has useful applications in large-scale modeling of brain networks.

Materials and Methods

Spiking Neuronal Network

We study a leaky IF spiking neuronal circuit. Neurons are coupled by a random network with connection density p and size N = N_E + N_I, which consists of N_E excitatory (E) neurons and N_I inhibitory (I) neurons. Thus, each neuron in the network has on average n_E = pN_E E neighbors and n_I = pN_I I neighbors. Each neuron also receives n_o excitatory inputs modelled by independent Poisson processes with frequency Q_o, mimicking external inputs to the circuit under consideration. We set p = 0.2, N_E:N_I = 4:1, n_o = n_E and the network size N = 10^4, unless otherwise specified. The sub-threshold membrane potential of neuron i at time t, denoted as V_i(t), is governed by

\frac{dV_i}{dt} = f_\alpha(V_i) + J_{\alpha O} \sum_{j \in i_O} F_E * s_j(t - \tau_l^E) + J_{\alpha E} \sum_{j \in i_E} F_E * s_j(t - \tau_l^E) + J_{\alpha I} \sum_{j \in i_I} F_I * s_j(t - \tau_l^I), \qquad (1)

where V_i(t) is the membrane potential of neuron i (belonging to type α = E, I) at time t, and i_α denotes the α-type neighbors of neuron i. The input sources for a neuron include excitatory inputs from external neurons (population O), inputs from recurrent excitatory neurons (population E) and inputs from recurrent inhibitory neurons (population I). The first term of Equation (1) describes the leaky current f_α(V_i) = (V_α^rest − V_i)/τ_α, which drives the membrane potential back to the resting potential, set to V_E^rest = V_I^rest = −70 mV. The membrane time constants are set as τ_E = 20 ms and τ_I = 10 ms for E and I neurons, respectively. The second to fourth terms of Equation (1) are the external, recurrent excitatory and recurrent inhibitory currents, respectively. Input currents are summations of filtered pulse trains. Here, s_j(t) = Σ_n δ(t − t_j^n) denotes the spike train of the j-th neuron. The excitatory and inhibitory synapses have latency periods (delays) τ_l^E and τ_l^I, respectively. For the numerical results presented, we consider τ_l^E = τ_l^I = 0 for simplicity (i.e., no transmission delay), which is a reasonable approximation for local circuits. The synaptic filter is modelled as a bi-exponential function, Equation (2):

F_\alpha(t) = \frac{1}{\tau_d^\alpha - \tau_r} \left[ \exp\left(-\frac{t}{\tau_d^\alpha}\right) - \exp\left(-\frac{t}{\tau_r}\right) \right], \quad t \ge 0. \qquad (2)

In Equation (2), we set the synaptic rise time τ_r = 0.5 ms for both E and I synapses, while the synaptic decay times τ_d^E, τ_d^I depend on the type of presynaptic neuron. We set τ_d^E = 2 ms and let τ_d^I vary from 1 to 4.5 ms to study the effect of different E and I synaptic filtering time scales. Hence, in our study, τ_d^I serves as the control parameter that induces the dynamical transition. Biologically, the inhibition decay time τ_d^I depends on the constitution of synaptic receptors (Salin and Prince, 1996; Zhou and Hablitz, 1998), and can also be changed by chemicals such as narcotics (Brown et al., 2010). We point out that suitable changes of other model parameters, such as synaptic strength or network connection density, may also induce dynamical transitions similar to those we study below. The integration dynamics is as follows. When the membrane potential reaches the threshold V_th = −50 mV, a spike is emitted and the membrane potential is reset to V_reset = −60 mV. Then, synaptic integration is halted for 2 ms for E neurons and 1 ms for I neurons, modelling the refractory periods of real neurons. Synaptic weights are set as J_EO = 0.45 mV, J_IO = 0.72 mV, J_EE = 0.36 mV, J_IE = 0.72 mV, J_EI = −0.81 mV and J_II = −1.44 mV, which satisfy the balanced condition. Network dynamics are simulated by a modified second-order Runge-Kutta scheme (Shelley and Tao, 2001) with a time step of dt = 0.05 ms. For each parameter setting, the network is simulated for 16 s with the first 1 s discarded to avoid transient effects. The statistical indexes are then computed by averaging the results of 15 trials with randomly distributed initial membrane potentials. The dynamics considered here are current based. The case of conductance-based dynamics, where the input of a neuron depends on its membrane potential, is further studied in Supplementary Material: Appendix 1. Model Extensions.
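
To make the model concrete, the following is a minimal simulation sketch of the network defined by Equations (1)-(2). It is not the implementation used in the paper: it uses plain Euler integration instead of the modified second-order Runge-Kutta scheme, a reduced network of N = 1000 neurons with synaptic weights rescaled by sqrt(10^4/N) to keep J_αβ ~ O(N^(−1/2)), and a short 2 s run; the bi-exponential filter of Equation (2) is implemented as a two-stage linear cascade.

```python
# Minimal sketch of the E-I spiking network of Equations (1)-(2). Assumptions not
# taken from the paper: plain Euler integration, reduced size N = 1000 with
# synaptic weights rescaled by sqrt(10^4 / N), and a short 2 s run.
import numpy as np

rng = np.random.default_rng(0)
N, p = 1000, 0.2
NE = int(0.8 * N); NI = N - NE
scale = np.sqrt(1e4 / N)
J = {k: v * scale for k, v in
     dict(EO=0.45, IO=0.72, EE=0.36, IE=0.72, EI=-0.81, II=-1.44).items()}
is_E = np.arange(N) < NE
tau_m = np.where(is_E, 20.0, 10.0)            # membrane time constants (ms)
t_ref = np.where(is_E, 2.0, 1.0)              # refractory periods (ms)
J_O = np.where(is_E, J['EO'], J['IO'])        # weight row used by each neuron
J_E = np.where(is_E, J['EE'], J['IE'])
J_I = np.where(is_E, J['EI'], J['II'])
V_rest, V_th, V_reset = -70.0, -50.0, -60.0   # mV
tau_r, tau_dE, tau_dI = 0.5, 2.0, 3.5         # ms (slow-inhibition case)
Qo, n_o = 5e-3, p * NE                        # external rate per ms, n_o = n_E
dt, T = 0.05, 2000.0                          # ms

C = rng.random((N, N)) < p                    # C[i, j]: connection j -> i
np.fill_diagonal(C, False)
CE, CI = C[:, :NE].astype(float), C[:, NE:].astype(float)

V = rng.uniform(V_reset, V_th, N)
hO, xO, hE, xE, hI, xI = (np.zeros(N) for _ in range(6))
ref_until = np.zeros(N)
spikes = []

for step in range(int(T / dt)):
    t = step * dt
    # external drive: n_o independent Poisson sources of rate Qo per neuron
    hO += rng.poisson(n_o * Qo * dt, N) / tau_r
    # bi-exponential filters as two-stage cascades (rise tau_r, decay tau_d)
    for h, x, tau_d in ((hO, xO, tau_dE), (hE, xE, tau_dE), (hI, xI, tau_dI)):
        x += dt * (h - x) / tau_d
        h -= dt * h / tau_r
    # membrane dynamics, Equation (1), current based
    dV = (V_rest - V) / tau_m + J_O * xO + J_E * xE + J_I * xI
    active = t >= ref_until                   # integration halted in refractory
    V[active] += dt * dV[active]
    fired = active & (V >= V_th)
    if fired.any():
        idx = np.flatnonzero(fired)
        spikes.extend((t, i) for i in idx)
        V[fired] = V_reset
        ref_until[fired] = t + t_ref[fired]
        # each delivered spike adds 1/tau_r to the target's filter input
        hE += CE[:, idx[idx < NE]].sum(axis=1) / tau_r
        hI += CI[:, idx[idx >= NE] - NE].sum(axis=1) / tau_r

rate_E = sum(i < NE for _, i in spikes) / (NE * T / 1000.0)
print(f"mean excitatory firing rate ~ {rate_E:.2f} Hz")
```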

Mean-Field Theory of Network Dynamics

Derivation of the Macroscopic Field Equations

We develop a semi-analytical mean-field theory to approximate the average dynamics of the network, exploiting the facts that 1) the network topology is homogeneous and 2) the number of neurons N is large. The following mean-field derivation holds for large enough N, and the network size enters the field equations explicitly (see Equations (11), (12) below).

We denote the average membrane potential of the network as V_α = ⟨V_i⟩_{i∈α} := ⟨V_i⟩_α, α = E, I. The goal is to derive a set of field equations governing the temporal dynamics of V_α.

First, by noting that the convolution F_α * s_j(t − τ_l^α) obeys the equation

\left( \tau_d^\alpha \frac{d}{dt} + 1 \right) \left( \tau_r \frac{d}{dt} + 1 \right) \left[ F_\alpha * s_j(t - \tau_l^\alpha) \right] = \sum_n \delta(t - t_j^n - \tau_l^\alpha), \qquad (3)

then we have

\left( \tau_d^\alpha \frac{d}{dt} + 1 \right) \left( \tau_r \frac{d}{dt} + 1 \right) \left\langle \sum_{j \in i_\alpha} F_\alpha * s_j(t - \tau_l^\alpha) \right\rangle_{i \in E, I} = \left\langle \sum_{j \in i_\alpha} \sum_n \delta(t - t_j^n - \tau_l^\alpha) \right\rangle_{i \in E, I}. \qquad (4)

Under the mean-field approximation, each neuron is statistically equivalent in terms of its neighbors, so that

\left\langle \sum_{j \in i_\alpha} \sum_n \delta(t - t_j^n - \tau_l^\alpha) \right\rangle_{i \in E, I} \approx n_\alpha Q_\alpha(t - \tau_l^\alpha), \qquad (5)

where nα is the average number of α neighbors of a neuron in the network and Qα(t) is the mean firing rate of α type neurons, defined as

Q_\alpha(t) = \lim_{\Delta t \to 0} \frac{1}{\Delta t} \int_t^{t + \Delta t} \left\langle \sum_n \delta(s - t_j^n) \right\rangle_{j \in \alpha} ds. \qquad (6)

In the standard definition of the firing rate of a neuron (Dayan and Abbott, 2001), the average in Equation (6) is taken over different simulation trials. Owing to ergodicity and network homogeneity, neurons within a population should have the same firing rate, and it can be computed through Equation (6) (i.e., population averages can be regarded as sample averages) when the network is large enough and in the stationary state. In practice, for measuring the firing rate from data, the time interval Δt in Equation (6) has to be finite. Here we choose Δt = 1 ms (shorter than the refractory period) so that a neuron can have at most one spike between t and t + Δt. Then, by the definition of the δ function, Q_α(t) represents the proportion of α-type neurons that spike between t and t + Δt, as well as the mean firing rate of α-type neurons at time t in units of per ms.

In the previous analysis framework of IF neurons based on the theory of continuous stochastic processes (Burkitt, 2006), the membrane potential V_i of neuron i cannot cross the spiking threshold (V_i is restricted to (−∞, V_th), with V_th being an absorbing boundary). This is a theoretical artefact, contrary to the true neurophysiology. Furthermore, in numerical integration, the resetting is achieved by finding those neurons whose membrane potential has increased over the spiking threshold in each numerical step (Shelley and Tao, 2001). This inspires us to consider that a neuron j has a spike at time t if V_j(t) > V_th. Formally, if we take V_j(t) as the membrane potential of neuron j at time t before the resetting rule is applied in each numerical step, then

Q_\alpha(t) = \left\langle H(V_j - V_{th}) \right\rangle_{j \in \alpha}, \qquad (7)

where H is the Heaviside function, H(x) = 1 for x ≥ 0 and H(x) = 0 for x < 0. Equation (7) explicitly builds the link that the population firing rate is the proportion of neurons whose membrane potential is above the spiking threshold. As a preliminary approximation, we assume that the V_j(t) obey a Gaussian distribution P_α(V) with time-dependent mean V_α(t) and time-independent variance σ_α². We will verify that this assumption is plausible in the network and dynamic regimes we studied (see Figure 1D later). Then,


Figure 1. Synchronization and network oscillation induced by slow inhibition in balanced networks. Fast inhibition (B,E,H) induces strict balance with asynchronous spiking and an almost steady network firing rate. Slow inhibition (C,F,I) results in loose balance, with synchronous grouped spiking and network oscillation. Individual neurons spike irregularly in both cases. (A) Normalized E/I post-synaptic response to a pre-synaptic spike. (B,C) Raster plots of spiking times; each blue/red point corresponds to a spike of an E/I neuron. (D) The distribution of membrane potentials. (E,F) The corresponding firing rate of the E population (blue) and the firing rate predicted by the field equations (green). (G) The probability density of the CV of ISIs of E neurons. (H,I) The blue/red curves represent the average E/I input current of a neuron in the network and the black curves represent the difference between them. Parameters are Q_o = 5 Hz; τ_d^I = 1 ms in the fast inhibition case (B,E,H and left panels of D,G) and τ_d^I = 3.5 ms in the slow inhibition case (C,F,I and right panels of D,G).

Q_\alpha(t) = \left\langle H(V_j - V_{th}) \right\rangle_{j \in \alpha} = \int_{V_{th}}^{\infty} P_\alpha(V)\, dV = \frac{1}{2} - \frac{1}{2}\, \mathrm{erf}\left( \frac{V_{th} - V_\alpha}{\sqrt{2}\, \sigma_\alpha} \right), \qquad (8)

where erf is the error function, erf(x) = (2/√π) ∫_0^x e^{−t²} dt. Although the error function has no elementary expression, it can be approximated by elementary functions. For example, a good approximation that preserves the first and second moments is erf(x) ≈ tanh(πx/√6). Under this approximation, we have

Q_\alpha(t) = \frac{1}{2} - \frac{1}{2} \tanh\left( \frac{\pi}{\sqrt{6}} \cdot \frac{V_{th} - V_\alpha}{\sqrt{2}\, \sigma_\alpha} \right) = \frac{1}{1 + \exp\left( \frac{V_{th} - V_\alpha}{\sigma_\alpha} \cdot \frac{\pi}{\sqrt{3}} \right)}. \qquad (9)

Here, the standard deviation of the voltage, σ_α, acts as an effective parameter in constructing the voltage-dependent temporal firing rate. Note that this approximation scheme, based only on first-order statistics, neglects several factors that affect the accurate firing rate, including higher-order statistics, noise correlation and refractory time. Thus, σ_α does not have an analytical form and is estimated numerically from the steady-state mean voltage V_α^ss and mean firing rate Q_α^ss in the asynchronous state through Equation (9), giving

\sigma_\alpha = \frac{V_{th} - V_\alpha^{ss}}{\ln\left[ (Q_\alpha^{ss})^{-1} - 1 \right]} \cdot \frac{\pi}{\sqrt{3}}. \qquad (10)

The sigmoid transfer function, Equation (9), is the intrinsic non-linear property that induces the oscillation transition in the field model. Note that neural field models of the Wilson-Cowan type (Wilson and Cowan, 1972) also contain a presumed sigmoid transfer function, and field models of this type can qualitatively reproduce some dynamic features of E-I neuronal networks. Here we explicitly construct the sigmoid function from the microscopic spiking network. The quality of the scheme therefore depends on suitable choices of the effective parameters σ_α; once σ_α are chosen suitably, our field equations can predict the dynamics of the E-I network quantitatively (see Supplementary Material: Appendix 2. Sensitivity of the Critical Points on the Effective Parameters, where we study the sensitivity of the oscillation transition with respect to the effective parameters σ_α), whereas traditional neural field models (such as the Wilson-Cowan model) are in general not quantitatively related to microscopic spiking networks.
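
As an illustration of Equations (9)-(10), the following sketch inverts the sigmoid to obtain the effective width σ_α from a steady-state mean voltage and firing rate measured in the asynchronous state; the numerical values of V_ss and Q_ss below are placeholders, not values reported in the paper.

```python
# Sketch of Equations (9)-(10): estimate the effective width sigma_alpha from
# asynchronous steady-state statistics. V_ss and Q_ss are illustrative placeholders.
import numpy as np

V_th = -50.0          # mV
V_ss = -58.0          # mV, steady-state mean voltage (placeholder)
Q_ss = 0.003          # spikes per ms per neuron, i.e. 3 Hz (placeholder)

# Equation (10): sigma = (V_th - V_ss) / ln(1/Q_ss - 1) * pi / sqrt(3)
sigma = (V_th - V_ss) / np.log(1.0 / Q_ss - 1.0) * np.pi / np.sqrt(3.0)

def Q(V, sigma=sigma, V_th=V_th):
    """Equation (9): sigmoidal voltage-to-rate transfer (per ms)."""
    return 1.0 / (1.0 + np.exp((V_th - V) / sigma * np.pi / np.sqrt(3.0)))

print(f"sigma = {sigma:.3f} mV, Q(V_ss) = {Q(V_ss):.4f} per ms (recovers Q_ss)")
```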

Denote by Φ_α(t) = ⟨Σ_{j∈i_α} F_α * s_j(t − τ_l^α)⟩_{i∈E,I} the averaged synaptic time course of α inputs received by a neuron. From Equations (4), (5), (9), it obeys

\left( \tau_d^\alpha \frac{d}{dt} + 1 \right) \left( \tau_r \frac{d}{dt} + 1 \right) \Phi_\alpha = \frac{n_\alpha}{1 + \exp\left( \frac{V_{th} - V_\alpha(t - \tau_l^\alpha)}{\sigma_\alpha} \cdot \frac{\pi}{\sqrt{3}} \right)}, \quad \alpha = E, I. \qquad (11)

Note that each neuron receives n_o independent external Poisson spike trains, each with rate Q_o; the total external input to a neuron thus has mean and variance n_o Q_o. Neglecting the synaptic filtering of this input (justified by the fast decay time of excitatory synapses), the diffusion approximation gives that the external input Σ_{j∈i_O} F_E * s_j(t − τ_l^E) to each neuron can be approximated by n_o Q_o + √(n_o Q_o) ξ_i(t), where ξ_i(t) is a standard Gaussian white noise (GWN) with zero mean and unit variance. Since the {ξ_i(t)}_i are independent GWNs, ⟨√(n_o Q_o) ξ_i(t)⟩_α can be equivalently approximated by √(n_o Q_o/N_α) ξ_α(t), where ξ_α(t) is another standard GWN. Thus, taking the average ⟨·⟩_α of the original Equation (1) and noting that in the leaky IF model f_α is linear, we arrive at

\frac{dV_\alpha}{dt} = f_\alpha(V_\alpha) + J_{\alpha O} \left( n_o Q_o + \sqrt{\frac{n_o Q_o}{N_\alpha}}\, \xi_\alpha(t) \right) + J_{\alpha E} \Phi_E + J_{\alpha I} \Phi_I, \quad \alpha = E, I. \qquad (12)

In the field Equation (12), ξ_E(t) and ξ_I(t) are two independent GWNs. We find that this approximation is insensitive to the nature of the noise in the spiking network model. In the network model Equation (1), the noise from external inputs is synaptically filtered Poisson shot noise. We further examine the case where the external inputs in Equation (1) are GWNs, and the case of constant external input (i.e., no noise), and find that in these cases Equation (12) can still well approximate the macroscopic dynamics of the network [for constant external input the network dynamics is not stochastic, but the spiking activity still appears irregular due to the chaotic nature of the network (Van Vreeswijk and Sompolinsky, 1996, 1998)]. Generally speaking, mean-field theory only holds when the system size is infinite. The incorporation of noise into the field model can smooth out systematic errors, compensate for the finite-size effect and bring the model statistically closer to the true rate dynamics. Thus, for numerical simulation of the field equations we keep the noise terms in Equation (12), whereas the deterministic counterpart is used for stability analysis.

In summary, we have proposed a novel technique to derive the field Equations (11), (12), which approximate the average dynamics of the original spiking network, Equations (1), (2).
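
The following is a sketch of how the field Equations (11)-(12) can be integrated numerically, rewriting each second-order synaptic equation as two first-order equations and using an Euler-Maruyama step for the noise term. The effective widths σ_E, σ_I are placeholders here (in the paper they are estimated from the asynchronous state via Equation (10)); the remaining parameters follow the Materials and Methods section.

```python
# Sketch: Euler-Maruyama integration of the field Equations (11)-(12).
# sigma_E, sigma_I are placeholder effective parameters.
import numpy as np

rng = np.random.default_rng(1)
p, NE, NI = 0.2, 8000, 2000
nE, nI, no = p * NE, p * NI, p * NE
J = {'EO': 0.45, 'IO': 0.72, 'EE': 0.36, 'IE': 0.72, 'EI': -0.81, 'II': -1.44}
tau = {'E': 20.0, 'I': 10.0}                  # membrane time constants (ms)
tau_r, tau_d = 0.5, {'E': 2.0, 'I': 3.5}      # synaptic times (ms)
V_rest, V_th = -70.0, -50.0
Qo = 5e-3                                     # external rate per ms (5 Hz)
sigma = {'E': 2.5, 'I': 2.5}                  # placeholder widths (mV)
dt, T = 0.01, 1000.0

def Qrate(V, s):
    """Equation (9), with the exponent clipped to avoid overflow."""
    z = np.clip((V_th - V) / s * np.pi / np.sqrt(3.0), -60.0, 60.0)
    return 1.0 / (1.0 + np.exp(z))

V = {'E': -60.0, 'I': -60.0}
Phi = {'E': 0.0, 'I': 0.0}
dPhi = {'E': 0.0, 'I': 0.0}
n_a = {'E': nE, 'I': nI}
trace = []
for step in range(int(T / dt)):
    Q_a = {'E': Qrate(V['E'], sigma['E']), 'I': Qrate(V['I'], sigma['I'])}
    for a in ('E', 'I'):
        # (tau_d d/dt + 1)(tau_r d/dt + 1) Phi = n_a * Q_a   (Equation (11))
        ddPhi = (n_a[a] * Q_a[a] - Phi[a]
                 - (tau_d[a] + tau_r) * dPhi[a]) / (tau_d[a] * tau_r)
        Phi[a] += dt * dPhi[a]
        dPhi[a] += dt * ddPhi
    for a, Na in (('E', NE), ('I', NI)):
        # Euler-Maruyama: the GWN term contributes sqrt(dt) per step overall
        noise = np.sqrt(no * Qo / Na) * rng.normal() / np.sqrt(dt)
        drift = ((V_rest - V[a]) / tau[a]
                 + J[a + 'O'] * (no * Qo + noise)
                 + J[a + 'E'] * Phi['E'] + J[a + 'I'] * Phi['I'])
        V[a] += dt * drift                    # Equation (12)
    trace.append(Qrate(V['E'], sigma['E']))

print("mean E rate ~ %.2f Hz" % (1000 * np.mean(trace[len(trace) // 2:])))
```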

Analysis of the Steady-State Dynamics

The deterministic steady state (fixed point) of the field equations can be found from Equations (11), (12) by setting d/dt = 0 and ξ_α(t) = 0, resulting in the algebraic equations

f_\alpha(V_\alpha) + J_{\alpha O} n_o Q_o + J_{\alpha E} n_E Q_E + J_{\alpha I} n_I Q_I = 0, \quad \alpha = E, I, \qquad (13)

where Q_E, Q_I are given by Equation (9). In our case, the synaptic strengths and external input rates are the main parameters determining the location of the fixed point, while the synaptic decay times affect its stability. This is because the value of the steady state does not depend on τ_d^α, τ_r, which can also be seen from Equation (2): the synaptic filter is normalized (∫_0^∞ F_α(t) dt = 1, independent of τ_d^α, τ_r), so the synaptic rise and decay times do not affect the time-averaged firing rate. Thus, the scheme here cannot capture the nontrivial effect of synaptic filtering on the firing rate itself [e.g., see the formula given by Fourcaud and Brunel (2002)]. Note that in general settings of balanced networks with dense and strong coupling (Renart et al., 2010), the quantities n_o, n_E, n_I are of order O(N) but the synaptic weights are of order O(N^{−1/2}). When N is large enough, the first term in Equation (13) can be neglected and it reduces to

J_{\alpha O} n_o Q_o + J_{\alpha E} n_E Q_E + J_{\alpha I} n_I Q_I = 0, \quad \alpha = E, I, \qquad (14)

which is a set of linear equations for the steady firing rates Q_E, Q_I. To guarantee a unique positive solution in this case, the ratios {J_EO/J_IO, J_EI/J_II, J_EE/J_IE} should be in ascending or descending order, which is the so-called balanced condition in the traditional theory (Van Vreeswijk and Sompolinsky, 1996, 1998; Renart et al., 2010). Thus, the theory here is a generalization of the traditional theory of balanced networks.
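
For the parameters given above, the large-N balanced condition of Equation (14) reduces to a 2x2 linear system; the following sketch solves it directly (rates are in spikes per ms, converted to Hz for printing).

```python
# Sketch of the large-N balanced condition, Equation (14): solve the 2x2 linear
# system for the steady firing rates Q_E, Q_I, using the Methods parameters.
import numpy as np

p, NE, NI = 0.2, 8000, 2000
nE, nI, no = p * NE, p * NI, p * NE
Qo = 5e-3                                     # 5 Hz external rate per source
J = {'EO': 0.45, 'IO': 0.72, 'EE': 0.36, 'IE': 0.72, 'EI': -0.81, 'II': -1.44}

# J_{aE} nE Q_E + J_{aI} nI Q_I = -J_{aO} no Qo,  a = E, I
A = np.array([[J['EE'] * nE, J['EI'] * nI],
              [J['IE'] * nE, J['II'] * nI]])
b = -np.array([J['EO'] * no * Qo, J['IO'] * no * Qo])
QE, QI = np.linalg.solve(A, b)
print(f"Q_E = {1000 * QE:.1f} Hz, Q_I = {1000 * QI:.1f} Hz")
```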

In the traditional theory of asynchronous dynamics, the E-I balanced state can be regarded as the existence of a stable fixed point of Equations (11), (12). We can now consider how the stability of this fixed point changes with the aid of this dynamical form. For the case without synaptic delay (i.e., τ_l^E = τ_l^I = 0), the field equations are ordinary differential equations. Taking X = (V_E, V_I, Φ_E, dΦ_E/dt, Φ_I, dΦ_I/dt)^T, the field model Equations (11), (12) can be written in the first-order form dX/dt = F(X) when the noise is not considered. The Jacobian matrix at the steady state is

J = \begin{pmatrix}
-\frac{1}{\tau_E} & 0 & J_{EE} & 0 & J_{EI} & 0 \\
0 & -\frac{1}{\tau_I} & J_{IE} & 0 & J_{II} & 0 \\
0 & 0 & 0 & 1 & 0 & 0 \\
\frac{n_E Q'_E(V_E)}{\tau_d^E \tau_r} & 0 & -\frac{1}{\tau_d^E \tau_r} & -\frac{1}{\tau_d^E} - \frac{1}{\tau_r} & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 \\
0 & \frac{n_I Q'_I(V_I)}{\tau_d^I \tau_r} & 0 & 0 & -\frac{1}{\tau_d^I \tau_r} & -\frac{1}{\tau_d^I} - \frac{1}{\tau_r}
\end{pmatrix}, \qquad (15)

with Q′_α(V_α) = π exp[(V_th − V_α)π/(√3 σ_α)] / {√3 σ_α (1 + exp[(V_th − V_α)π/(√3 σ_α)])²} evaluated at the steady-state value of V_α given by Equation (13). The eigenvalues of J determine the stability of the steady state. Note that many models use a single exponential function as the synaptic filter, i.e., τ_r = 0; in this case the dynamical form becomes 4-dimensional, with X = (V_E, V_I, Φ_E, Φ_I)^T. For models without synaptic filtering (the case of instantaneous synapses, where τ_r = τ_d^α = 0), the dynamical form becomes 2-dimensional, which can be regarded as the dynamical form of the traditional theory without synaptic kinetics. In these cases, the stability analysis can be performed in a similar way. In the presence of synaptic transmission delay, the field equations become delay differential equations, and stability analysis in general becomes more difficult.
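
The stability analysis can be sketched numerically as follows: build the 6x6 Jacobian of Equation (15) and scan the inhibitory decay time τ_d^I for the sign change of the leading eigenvalue's real part (the Hopf bifurcation). The fixed-point voltages and effective widths used below are placeholders; in the full analysis they come from Equations (13) and (10).

```python
# Sketch of the linear stability analysis around Equation (15): assemble the
# Jacobian and scan tau_dI for a Hopf bifurcation. V_fp and sigma are placeholders.
import numpy as np

p, NE, NI = 0.2, 8000, 2000
nE, nI = p * NE, p * NI
J = {'EE': 0.36, 'IE': 0.72, 'EI': -0.81, 'II': -1.44}
tauE, tauI, tau_r, tau_dE = 20.0, 10.0, 0.5, 2.0
V_th = -50.0
V_fp = {'E': -55.0, 'I': -54.0}       # placeholder fixed-point voltages (mV)
sigma = {'E': 2.5, 'I': 2.5}          # placeholder effective widths (mV)

def dQdV(V, s):
    """Derivative of the sigmoid transfer of Equation (9)."""
    z = np.exp((V_th - V) * np.pi / (np.sqrt(3) * s))
    return np.pi * z / (np.sqrt(3) * s * (1 + z) ** 2)

def jacobian(tau_dI):
    gE = nE * dQdV(V_fp['E'], sigma['E'])
    gI = nI * dQdV(V_fp['I'], sigma['I'])
    Jm = np.zeros((6, 6))
    Jm[0] = [-1 / tauE, 0, J['EE'], 0, J['EI'], 0]
    Jm[1] = [0, -1 / tauI, J['IE'], 0, J['II'], 0]
    Jm[2] = [0, 0, 0, 1, 0, 0]
    Jm[3] = [gE / (tau_dE * tau_r), 0, -1 / (tau_dE * tau_r),
             -1 / tau_dE - 1 / tau_r, 0, 0]
    Jm[4] = [0, 0, 0, 0, 0, 1]
    Jm[5] = [0, gI / (tau_dI * tau_r), 0, 0, -1 / (tau_dI * tau_r),
             -1 / tau_dI - 1 / tau_r]
    return Jm

for tau_dI in np.arange(1.0, 4.6, 0.5):
    lam = np.linalg.eigvals(jacobian(tau_dI))
    lead = lam[np.argmax(lam.real)]
    print(f"tau_dI = {tau_dI:.1f} ms: Re = {lead.real:+.3f}, "
          f"f = {abs(lead.imag) / (2 * np.pi) * 1000:.0f} Hz")
```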

Statistical Analysis of Model and Experiment Data

Spike Count Series

For statistical analysis of the neural dynamics, neuron spike train series have to be constructed from the model simulation data or the experimental data as follows. The time axis is first divided into consecutive time windows of size Δt ms. The number of spikes of neuron i is then counted in each window to obtain a discrete sequence N_i(t), designated as the spike count series of neuron i with time window Δt. Alternatively, the number of spikes of the whole neuronal population can be counted in each window. This constructs the population spike count series N_α(t) for the E and I populations, respectively. Furthermore, q_α(t) = N_α(t)/(n_α Δt) is the population-averaged firing rate series, and its power spectrum indicates the collective oscillation property. For computing certain quantities, such as the correlation between neurons, the spike count series is filtered by a square kernel K_T(t) = 1/T for t ∈ [−T, 0] and 0 otherwise, with length T, and the resulting filtered spike train is defined as Ñ_i(t) = K_T * N_i = (1/T) Σ_{s=0}^{T−1} N_i(t − s).
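
A sketch of this construction is given below; `spikes` is assumed to be a list of (time in ms, neuron index) pairs, for example as produced by the network sketch in the previous section.

```python
# Sketch of the spike-count construction described above.
import numpy as np

def spike_count_series(spikes, n_neurons, t_max, dt_bin=1.0):
    """Per-neuron spike count series N_i(t) with window dt_bin (ms)."""
    n_bins = int(np.ceil(t_max / dt_bin))
    counts = np.zeros((n_neurons, n_bins))
    for t, i in spikes:
        counts[i, min(int(t / dt_bin), n_bins - 1)] += 1
    return counts

def square_filter(counts, T_bins=50):
    """Causal moving average: tilde{N}(t) = (1/T) * sum_{s=0}^{T-1} N(t - s)."""
    kernel = np.ones(T_bins) / T_bins
    return np.array([np.convolve(row, kernel)[:len(row)]
                     for row in np.atleast_2d(counts)])

# Example (commented): population count and population-averaged rate series
# N_E_t = spike_count_series(spikes, N, T)[:NE].sum(axis=0)
# q_E_t = N_E_t / (n_E * dt_bin)     # normalization as defined in the text
```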

Quantifying Spiking Irregularity and Firing Rate Variability

The spiking time irregularity of a neuron can be quantified by the coefficient of variation (CV), defined as the standard deviation of the neuron's ISIs (inter-spike intervals) divided by their mean. Totally regular activities have CV values of 0, while Poisson processes have CV values of 1; a higher CV value indicates larger irregularity. We measure the firing rate variability over time, at the individual-neuron level and the population level, by the corresponding Fano factor (FF). The FF of neuron i is defined as FF_i = var(N_i(t))/⟨N_i(t)⟩, where the average and the variance are taken across the spike count series. Similarly, the population FF is defined as FF_α = var(N_α(t))/⟨N_α(t)⟩. Note that FF depends on the time window size Δt used to construct the spike count series (we use Δt = 50 ms). The population FF can also be computed from the macroscopic field equations, as the field equations predict the population firing rate, which can be converted to population spike counts (by multiplying by the number of neurons in the population).
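
A minimal sketch of these two measures, checked on a synthetic Poisson train (for which both CV and FF should be close to 1):

```python
# Sketch of the CV and Fano factor measures described above.
import numpy as np

def cv_isi(spike_times):
    """Coefficient of variation of the inter-spike intervals."""
    isi = np.diff(np.sort(spike_times))
    return isi.std() / isi.mean() if len(isi) > 1 and isi.mean() > 0 else np.nan

def fano_factor(counts):
    """FF = var(N(t)) / mean(N(t)) of a spike-count series."""
    counts = np.asarray(counts, dtype=float)
    return counts.var() / counts.mean() if counts.mean() > 0 else np.nan

# A Poisson spike train gives CV ~ 1 and FF ~ 1:
rng = np.random.default_rng(2)
poisson_train = np.cumsum(rng.exponential(scale=200.0, size=1000))   # ~5 Hz, ms
print(cv_isi(poisson_train))          # ~1
counts = np.histogram(poisson_train,
                      bins=np.arange(0, poisson_train[-1], 50))[0]   # 50 ms bins
print(fano_factor(counts))            # ~1
```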

Quantifying Network Synchrony

The synchrony of the network can be characterized from two aspects: the cross-correlation of spiking times and the coherence of the membrane potentials (Golomb, 2007). The former quantifies the coherence of threshold events, whereas the latter quantifies the coherence of the subthreshold dynamics.

We employ the commonly used Pearson correlation coefficient (PCC) to quantify the synchrony of spiking times. The spike count series of neuron i is first constructed with time window Δt = 1 ms and then filtered by a square kernel of length T = 50 ms. The PCC between neurons i and j is defined as c_ij = cov(n_i(t), n_j(t)) / √(var(n_i(t)) var(n_j(t))). Details such as the filtering used in calculating the PCC affect its exact value (Cohen and Kohn, 2011), but not the qualitative trend. The index ⟨c_ij⟩_{i,j∈E}, the PCC averaged over all excitatory neuron pairs, is used to quantify the degree of network synchrony of threshold events.

The voltage series of each neuron is sampled every millisecond, V_i(k) := V_i(t)|_{t = k ms}. Voltage coherence is defined as σ_α²/⟨σ_i²⟩_α, where σ_α² = ⟨V_α²⟩_t − ⟨V_α⟩_t² is the variance of the mean voltage V_α = ⟨V_i⟩_{i∈α} and σ_i² = ⟨V_i²⟩_t − ⟨V_i⟩_t² is the variance of the voltage of neuron i. The voltage coherence of the excitatory population is used to quantify the coherence of the subthreshold dynamics.

We further use the CV of the excitatory population firing rate series, constructed with Δt = 1 ms, to quantify the temporal firing rate variability at short timescales. This provides another indicator of network synchrony, as stronger synchrony implies larger population firing rate variability at short timescales.

Neuronal Avalanche Analysis

We measure the neuronal avalanches of the excitatory population from its population spike count series N_E(t), constructed with window (bin) size Δt. An avalanche is defined as a sequence of consecutive non-empty bins, separated by empty bins (with no spikes inside). The size S of an avalanche is the total number of spikes within this period and the duration T is the number of time bins it contains. To compare different data sets by a unified standard, the window size is chosen as the average ISI of the merged spike train (constructed by merging the spike trains of all neurons), denoted T_m; it therefore depends on the mean firing rate of the neurons in each data set. This choice has been described as the 'optimal' window size for measuring avalanches (Beggs and Plenz, 2003), as excessively small or large windows lead to systematic bias. A further advantage of this choice is that the measured sizes and durations lie in approximately the same ranges for different data sets, allowing better comparison. The effect of different choices of time window for measuring avalanches is further considered in Supplementary Figure 3; the results are robust for time windows around T_m. Note that the avalanches defined here are time-binned (non-causal) avalanches, which correspond to experimental measurements (Beggs and Plenz, 2003; Bellay et al., 2015), and differ from the causal avalanches (Williams-Garcia et al., 2017) studied in many physical models rooted in critical branching processes.
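
The avalanche definition above can be sketched as follows: bin the merged spike train with window T_m (its mean ISI) and cut the binned series into avalanches at empty bins.

```python
# Sketch of the time-binned avalanche definition. `all_spike_times` is a 1-D
# array of the pooled spike times (ms) of all E neurons.
import numpy as np

def avalanches(all_spike_times):
    t = np.sort(np.asarray(all_spike_times, dtype=float))
    T_m = np.mean(np.diff(t))                    # 'optimal' bin size (mean ISI)
    bins = np.arange(t[0], t[-1] + T_m, T_m)
    counts = np.histogram(t, bins=bins)[0]
    sizes, durations = [], []
    s = d = 0
    for c in counts:
        if c > 0:
            s += c; d += 1
        elif d > 0:                              # an empty bin closes an avalanche
            sizes.append(s); durations.append(d)
            s = d = 0
    if d > 0:
        sizes.append(s); durations.append(d)
    return np.array(sizes), np.array(durations), T_m

# sizes, durations, T_m = avalanches(pooled_E_spike_times)
```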

Empirically judging the criticality of avalanche systems is technically difficult. Motivated by the statistical property that avalanche sizes should obey a power-law distribution under scale invariance, we adopt the following two-step paradigm.

First, we use a simple index to roughly compare under which parameters the model is closer to criticality. The avalanche size frequency distribution histogram (S, P(S)) is first obtained with 80 plotting bins spanning the minimum to maximum size (points where P(S) = 0 are excluded from the fit, since their logarithm is undefined). The least squares method is then used to find the best-fit line in doubly logarithmic coordinates, i.e., to minimize Σ_S [lg P(S) − (b_0 + b_1 lg S)]². After obtaining the best-fit coefficients (b_0, b_1), the fitted frequency distribution values are estimated as P_fit(S) = 10^{b_0 + b_1 lg S}. Finally, the normalized distance, defined as D = Σ_S S|P(S) − P_fit(S)| / Σ_S S P(S), i.e., the average size difference per avalanche between the actual and fitted frequency distributions normalized by the mean avalanche size, is used to measure the distance to the best-fitting power-law distribution. The approach here is essentially the same as the widely used κ index for assessing the proximity to criticality in experiments (Shew et al., 2009). However, the κ index requires a pre-defined slope of the power law (since it requires a form of the distribution function), which is not clear from data and was typically assumed to be −1.5 in some of the literature. The index we use is more convenient in that it does not require this pre-defined slope. This distance acts as an index for a rough comparison of which parameter set is closer to criticality; however, it does not provide sufficient statistical evidence to claim criticality even when D is very small.
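
A rough re-implementation of this distance index is sketched below. It follows the recipe above (80 linear bins, log-log least-squares fit, size-weighted normalized distance), but uses a density-normalized histogram and should be taken only as an illustration of the procedure, not as the exact code behind Figure 3C.

```python
# Sketch of the distance index D between the avalanche size distribution and its
# best-fitting power law.
import numpy as np

def power_law_distance(sizes, n_bins=80):
    sizes = np.asarray(sizes, dtype=float)
    edges = np.linspace(sizes.min(), sizes.max(), n_bins + 1)
    P, _ = np.histogram(sizes, bins=edges, density=True)
    S = 0.5 * (edges[:-1] + edges[1:])               # bin centers
    keep = P > 0                                     # log undefined at P = 0
    b1, b0 = np.polyfit(np.log10(S[keep]), np.log10(P[keep]), deg=1)
    P_fit = 10 ** (b0 + b1 * np.log10(S))
    D = np.sum(S * np.abs(P - P_fit)) / np.sum(S * P)
    return D, b1                                     # distance and fitted slope

# D, slope = power_law_distance(sizes)
```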

Second, we use standard statistical methods to examine the critical properties when D is relatively small. The maximum likelihood estimation (MLE) method of the NCC toolbox (Marshall et al., 2016) is used to estimate the critical exponents. The toolbox provides a doubly truncated, MLE-based algorithm to find the range that passes the truncation-based KS statistics test (Marshall et al., 2016). This truncation scheme avoids contamination of the exponent estimates by noise (in the small avalanche size range) and finite-size effects (in the large avalanche size range). We find the largest truncated range that can pass the KS-based test with a p-value larger than 0.1 and that is broader than one-third of the whole range on the logarithmic scale. This means that the data produce a KS-statistic value that is smaller than the values generated by at least 10% of the power-law models in the truncated range. The estimated slopes within the truncated ranges of the avalanche size and duration distributions define the critical exponents P(S) ∼ S^{−τ} and P(T) ∼ T^{−α}. A third exponent is defined by ⟨S⟩(T) ∼ T^{1/συz}, where ⟨S⟩(T) is the average size of avalanches with the same duration T. This exponent is fitted using a weighted least squares method (Marshall et al., 2016) to those avalanches that fall into the truncated duration range used for estimating α. Finally, we examine the scaling relation (Equation (16) below) between these three exponents, which is a hallmark of criticality in the theory of statistical physics.
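
The scaling-relation check of Equation (16) can be illustrated as follows. The exponents τ and α are assumed to have been estimated already (the paper uses the MLE-based NCC toolbox, which is not reproduced here); the sketch only fits ⟨S⟩(T) in log-log coordinates by ordinary least squares to obtain 1/(συz) and compares it with (α − 1)/(τ − 1).

```python
# Sketch of the scaling-relation check of Equation (16). tau and alpha are
# assumed to be pre-estimated; the <S>(T) fit here is a plain least-squares
# illustration, not the weighted MLE-based procedure of the NCC toolbox.
import numpy as np

def scaling_relation(sizes, durations, tau, alpha):
    sizes, durations = np.asarray(sizes, float), np.asarray(durations, float)
    T_vals = np.unique(durations)
    mean_S = np.array([sizes[durations == T].mean() for T in T_vals])
    # fit <S>(T) ~ T^{1/(sigma*upsilon*z)} in log-log coordinates
    slope, _ = np.polyfit(np.log10(T_vals), np.log10(mean_S), deg=1)
    lhs = (alpha - 1.0) / (tau - 1.0)
    print(f"(alpha-1)/(tau-1) = {lhs:.2f}  vs  1/(sigma*upsilon*z) = {slope:.2f}")
    return lhs, slope

# scaling_relation(sizes, durations, tau=1.7, alpha=2.0)   # illustrative exponents
```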

Experimental Data Analysis

We used public experimental data measuring neuronal spiking activity in mouse somatosensory cortex cultures in vitro (Ito et al., 2016). A total of 25 data sets were used; each recording is 1 h long, except Set 19, which is 48 min long. The recordings were performed on organotypic slice cultures after 2 to 4 weeks in vitro, without stimulation (Ito et al., 2014). Spikes were sorted with a PCA-based algorithm (Litke et al., 2004) applied to signals recorded with a large and dense multi-electrode array of 512 electrodes.

Under these in vitro conditions, spiking in the culture clearly shows up-down state transitions: active spiking periods (up-states) and silent periods (down-states) alternate slowly with a frequency of circa 0.1 Hz. We focus on analyzing the up-states, defined as follows. For each data set, the population firing rate series is first constructed with a time window of Δt = 10 ms and then filtered by a square kernel of length T = 100 ms. The up-states are then defined as the time periods during which the population firing rate is higher than 30% of the maximum rate of the given data set and that last longer than 1 s. We further examine the power spectral density of the population firing rate series (before filtering) in the up-states, the distribution of neuronal firing rates, the CV of ISIs and the avalanches in the up-states. The time bin for measuring avalanches is the mean ISI of the merged spike train, averaged over the up-states. To test critical properties, we first determine whether the size and duration distributions are close to power laws and then estimate the critical exponents. As in our analysis of the model data, the doubly truncated fitting and statistical test algorithm from the NCC toolbox (Marshall et al., 2016) is used. We accept a power-law distribution for a data set if the following two conditions are jointly satisfied: 1) the truncated range is broader than one-third of the whole range on the logarithmic scale; and 2) the data in the truncated range pass the KS-based test with a p-value larger than 0.1. For the cases deemed to follow power-law distributions, the truncated ranges, estimated exponents and p-values of the KS-based test are reported. For data sets with power-law avalanche distributions in both size and duration, we further compare the exponent 1/συz with the value (α − 1)/(τ − 1) to see whether the scaling relation (Equation (16) below) holds. To test whether the power-law distribution is an intrinsic structure of the data, we also analyze surrogate data constructed by randomly shuffling the ISIs of neurons within each up-state period; this random shuffling destroys the intrinsic temporal correlation structure of the avalanches.
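
The up-state detection described above can be sketched as follows (10 ms bins, a 100 ms square filter, a 30% threshold and a 1 s minimum duration); the centered moving average used here is a simplification of the causal square-kernel filtering defined above.

```python
# Sketch of the up-state detection. `all_spike_times` are the pooled spike times
# (ms) of one recording.
import numpy as np

def detect_up_states(all_spike_times, t_max, dt_bin=10.0, T_filt=100.0,
                     frac=0.3, min_len=1000.0):
    edges = np.arange(0.0, t_max + dt_bin, dt_bin)
    rate = np.histogram(all_spike_times, bins=edges)[0] / dt_bin
    k = int(T_filt / dt_bin)
    smooth = np.convolve(rate, np.ones(k) / k, mode='same')   # square filter
    above = smooth > frac * smooth.max()
    up_states, start = [], None
    for i, a in enumerate(above):
        if a and start is None:
            start = i
        elif not a and start is not None:
            if (i - start) * dt_bin >= min_len:
                up_states.append((edges[start], edges[i]))
            start = None
    if start is not None and (len(above) - start) * dt_bin >= min_len:
        up_states.append((edges[start], edges[-1]))
    return up_states   # list of (t_start, t_end) in ms

# up_states = detect_up_states(pooled_spike_times, t_max=3.6e6)   # 1 h recording
```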

Results

Network Synchrony Arises From Loose Temporal Balance

We first intuitively illustrate the non-trivial effect of synaptic filtering (Figure 1A) in shaping the network dynamics. For a fast inhibition decay time, the network spiking dynamics is asynchronous (Renart et al., 2010), as shown in Figure 1B. Such 'strict balance' is the case of the traditional E-I balance theory (Van Vreeswijk and Sompolinsky, 1996, 1998) with inhibition domination, where the network inhibitory feedback can cancel the spill of excitatory current on a fast time scale (Figure 1H).

This scenario of strict balance breaks down when inhibition becomes slow, which introduces an effective delay in the inhibitory cancellation (Figure 1I). Such a delay before cancellation opens a window during which excitatory current spills over the network, resulting in the collective spiking shown in Figure 1C. The strength of the collective activity depends on the length of this excitation-dominant window. As inhibition becomes dominant again after the delay, a temporary quiescent episode follows the collective spiking. Thus, slow synaptic inhibition induces network oscillation, as was also shown in previous studies (Brunel and Wang, 2003; Yang et al., 2017; Huang et al., 2019). In this 'loose balance' scenario, the network maintains balance on a slower time scale; as excitation and inhibition dominate alternately, strict balance is transiently broken on fast time scales. Note that in the loose balance state, the delay in the inhibitory cancellation is merely a few milliseconds (Figure 1I), consistent with the inhibition tracking delays observed in experiments (Okun and Lampl, 2008).

Alternatively, these different collective behaviors can be seen from the population firing rate dynamics of the network. The network firing rate in the strict balance, asynchronous state is almost steady (Figure 1E), whereas loose balance induces firing rate oscillation (Figure 1F), with fast firing rate variability at the population level. The emergence of oscillation implies a regularity of the population dynamics of the network. However, it is important to stress that the spiking of individual neurons is still highly irregular. There are at least two reasons for this irregularity. First, the population oscillation is quasi-periodic rather than periodic due to the stochastic nature of the network dynamics. Second, and more importantly, each collective burst in the network involves only a small, randomly drawn portion (∼10% on average) of the neurons. These neurons spike with a firing rate much smaller than the network oscillation frequency (see Figure 2F). This can also be seen from the fact that the firing rate is still low at the peaks of spiking activity corresponding to spiking synchrony (Figure 1F). As such, the dynamical phenomenon here with loose balance has also been referred to as 'sparse synchrony' (Brunel and Hakim, 2008). The histogram counts of the coefficient of variation (CV) of the inter-spike intervals (ISIs) of neurons (see Materials and Methods) in the network are shown in Figure 1G. In both the asynchronous and synchronous states, the overall value of the CV is around 1, indicating that the spiking irregularity approximately resembles a Poisson process (Holt et al., 1996). As our mean-field theory assumes that the membrane potentials of the neurons in the network are approximately Gaussian, we further analyze the distribution of the membrane potentials. As shown in Figure 1D, in both the fast and slow inhibition cases the distribution is not perfectly Gaussian but skewed, a feature of the finite-threshold IF dynamics (Brunel, 2000; Keane and Gong, 2015). However, the skewness (defined as the third central moment divided by the cube of the standard deviation) and kurtosis (defined as the fourth central moment divided by the fourth power of the standard deviation) measuring the deviation from Gaussianity are (−0.67, 3.12) and (−0.66, 3.3) in the fast and slow inhibition cases, respectively, which are not far from the values (0, 3) of a Gaussian distribution. Thus, our theory assuming a Gaussian distribution remains effective, as will be shown later.


Figure 2. Mean-field theory prediction of the dynamic transition of the spiking network. (A) Schematic diagram of the mean-field reduction from the spiking neuronal network to the macroscopic neural field. (B) The excitatory firing rate in the asynchronous steady state (τ_d^I = 1 ms) under different strengths of external input and different network sizes. Blue symbols, obtained from the field equations with numerically estimated σ_α under different input rates Q_o, fit the network simulations (blue dashed lines) well. Green symbol-lines are obtained from the field equations with fixed σ_α = J_αO √(n_o Q_o τ_α/2) for Q_o = 5 Hz. Synaptic strengths J_αβ are multiplied by 1.41 and 1.15 in the cases of N = 5000 and N = 7500, respectively, to maintain the usual scaling J_αβ ∼ O(N^{−1/2}). (C) The deterministic field equations predict that a Hopf bifurcation occurs as the inhibitory decay time τ_d^I increases, at a critical value around τ_d^I ≈ 3 ms (vertical dashed line). The real part and the imaginary part divided by 2π of the dominant eigenvalue are denoted by the solid and dashed lines, respectively. Blue circles are peak frequencies of the excitatory firing rate oscillation from network simulations. (D) The PCC and coherence index show the emergence of network oscillation as τ_d^I increases. (E) The CV of the population firing rate increases with τ_d^I. (F) The average CV of ISIs and the ratio between the mean firing rate and the peak frequency of the network oscillation. (G) The average FF of individual neurons (left axis), and the FF of the population spike counts in the network and in the field model (right axis). Time windows for measuring FFs are 50 ms. Averages are taken across the excitatory population for the measurements in panels (D–G). The external input is set as Q_o = 5 Hz in panels (C–G).

Mean-Field Theory Predicts the Synchronous Transition

The network dynamic transition induced by a looser E-I balance can be characterized by the dynamics of population firing rate (Figures 1E,F). However, it is theoretically challenging to analyze the population dynamics of IF networks with synaptic kinetics. Here, we propose a novel mean-field approximation theory to derive the macroscopic dynamic equations of an IF network, i.e., Equations (11), (12), referring to a schematic diagram in Figure 2A. This technique for deriving the macroscopic field equations is highly generalizable. Extensions to the cases of time-varying inputs and conductance-based dynamics are presented in Supplementary Material: Appendix 1. Model Extensions.

For numerically estimated σ_α (i.e., using Equation (10) to compute σ_α under different external input strengths Q_o), the mean firing rate of the network is correctly estimated by the field Equations (11), (12) (Figure 2B, blue). We further consider whether the field equations can predict the firing rate with fixed parameters σ_α. Following the derivation of Equation (12), if we simply assume dV_i/dt ≈ const − V_i/τ_α + J_αO √(n_o Q_o) ξ_i(t), then V_i is Gaussian distributed with standard deviation σ_α = J_αO √(n_o Q_o τ_α/2). Figure 2B (green) shows the corresponding results for fixed σ_α = J_αO √(n_o Q_o τ_α/2) with Q_o = 5 Hz. This choice cannot predict the exact firing rate but correctly predicts the linear response to external input, a property of the asynchronous balanced network (Van Vreeswijk and Sompolinsky, 1996).

The dynamic difference between the asynchronous and synchronous states can already be predicted by the field equations in terms of the population firing rate dynamics, as shown in Figures 1E,F. The mechanism is explained by a Hopf bifurcation in the field equations through stability analysis (see Materials and Methods), as shown in Figure 2C. In the case of fast inhibition, the fixed point of the field model is generally a stable focus, whose Jacobian has complex eigenvalues with negative real parts. In this case, the network firing rate only fluctuates mildly due to noise perturbations. When τ_d^I increases, the fixed point loses its stability through a supercritical Hopf bifurcation, as indicated by a pair of dominant complex conjugate eigenvalues λ = α ± iω crossing the imaginary axis. The Hopf bifurcation predicts that the stable fixed point gives way to a stable periodic solution whose amplitude grows from zero. The frequency of the periodic motion can be estimated as ω/2π to linear order. This prediction is approximately equal to the numerically measured peak frequency of the network excitatory firing rate oscillation near the critical bifurcation point, as shown by the blue circles in Figure 2C. Previous studies have shown, through perturbation analysis assuming the form of the network steady-state firing rate solution, that E-I networks can undergo a transition to oscillation (Brunel and Wang, 2003; Brunel and Hakim, 2008), and the phenomenon has been associated with a Hopf bifurcation of rate equations with an effective transmission delay (Brunel and Hakim, 2008). However, there is no direct connection between the spiking networks and the heuristic rate equations in that theory. Here, our mean-field approach, with macroscopic field equations derived directly from the microscopic neuronal network, straightforwardly reveals the Hopf bifurcation mechanism underlying this transition to the sparse synchrony state. Note, however, that sparse synchrony states can also be achieved in purely inhibitory neuronal networks without synaptic kinetics (Brunel and Hakim, 1999).

As shown in Figure 2D, the network synchrony increases dramatically after τ_d^I crosses the Hopf bifurcation point. Figure 2E shows that the temporal firing rate variability of the network (see Materials and Methods) increases conspicuously during this transition, which is also qualitatively predicted by the field equations. The CV of ISIs averaged over the excitatory population is shown in Figure 2F. During the onset of collective oscillation, the averaged CV of ISIs first slightly decreases and then increases (as strong bursting oscillation activity develops around τ_d^I ≈ 4 ms; see Figure 3A, bottom panel), while its overall value stays around 1. The coexistence of irregular spiking and collective oscillation can be understood from the ratio between the mean firing rate of neurons and the peak frequency of the network oscillation. If every neuron participated in every oscillation cycle, this ratio would approach one. However, as can be seen from Figure 2F, the synchrony is sparse (Brunel and Hakim, 2008): the ratio is nearly 0.1 after oscillation onset, implying that each oscillation cycle recruits only a random ∼10% of the neurons. Equivalently, each neuron spikes sparsely and randomly participates in only about 10% of the collective oscillations, giving rise to high variability of the ISIs. As τ_d^I further increases toward bursting onset, this ratio rapidly approaches one, implying that almost all neurons participate in each oscillation in the bursting state, where the inhibitory feedback is so slow that the excitatory current can spill over the whole network during the excitation-dominant period.


Figure 3. Critical dynamics near the onset of collective oscillation. (A) Raster plots of the spiking times of the excitatory neurons at τ_d^I = 1.2 ms (upper panel), τ_d^I = 3 ms (middle panel), and τ_d^I = 4.3 ms (lower panel). (B) An example of the time course of an avalanche. (C) The distance between the avalanche size distribution and its best-fitting power-law distribution. (D) The PSD of the network excitatory firing rate. (E) The probability density distribution of avalanche sizes. (F) The probability density distribution of avalanche durations. (G) The mean avalanche size as a function of avalanche duration. For panels (D–G), the blue, green and red curves correspond to τ_d^I = 1.2, 3, 4.3 ms, respectively. (H) The scaling relation Equation (16) approximately holds for critical states (τ_d^I = 3 ms) with different input strengths Q_o; the dashed line represents Equation (16) holding exactly. The inset shows an approximately linear relationship between the exponents τ and α. Here, Q_o = 5 Hz is used in panels (A–D) and the example in panels (E–G) uses Q_o = 8 Hz.

In addition to spiking time variability, we further examine the fluctuations of the firing rate on a short time scale (time window 50 ms) using the Fano factor (FF) (Teich et al., 1997). As shown in Figure 2G, the change of the FF of individual neurons (see Materials and Methods) is similar to that of the CV (Figure 2F), indicating that the individual firing rate fluctuation does not increase with the onset of collective synchrony. In contrast, the change of the population FF (Figure 2G; see Materials and Methods for details) is similar to the spiking correlation (Figure 2D) and the temporal variability of the firing rate (Figure 2E). This trend is further qualitatively confirmed by the field model prediction, as shown in Figure 2G. Thus, the observed collective oscillation appears to be a network property rather than an individual-neuron property. Hence, the collective network oscillation is compatible with irregular spiking of individual neurons, the latter being the prominent feature of E-I balance (Okun and Lampl, 2009).

In summary, strict balance with asynchronous network spiking is predicted by a stable focus, whereas loose balance with sparsely synchronous network spiking is predicted by a stable limit cycle in the field equations. Both the strict and loose balance conditions support irregular spiking of individual neurons.

Scale-Free Neuronal Avalanches Near the Critical Transition Point

For our mean-field theory of balanced networks, the Hopf bifurcation predicts that a global oscillation emerges in the field equations with oscillation amplitude growing continuously from zero. This picture is similar to a second-order phase transition in statistical physics when the oscillation amplitude is taken as the order parameter. Thus, we measure the spiking avalanches (see Materials and Methods) to examine whether this kind of criticality can result in scale-invariant spiking behavior, i.e., critical avalanches, as observed in experiments (Beggs and Plenz, 2003; Friedman et al., 2012; Bellay et al., 2015; Fontenele et al., 2019). Figure 3B illustrates the time course of an avalanche.

To more clearly compare the features of the avalanche dynamics, raster plots of the spiking times of excitatory neurons in three typical cases are shown in Figure 3A. These plots illustrate the asynchronous state (upper panel), the onset of synchrony and oscillation (middle panel) and the developed collective oscillation (lower panel). Power spectral density (PSD) analysis of the network firing rate is shown in Figure 3D. The asynchronous state shows no collective oscillation (no frequency peak) and the avalanche size and duration distributions are exponential-like, as shown by the blue curves in Figures 3E,F; it is thus subcritical. Alternatively, the asynchronous state can be regarded as very noisy population oscillations in which neurons participate only very sparsely. The onset of synchrony is usually accompanied by a frequency peak of fast oscillation situated within the gamma band, which is thought to have functional importance in various cognitive processes (Herrmann et al., 2004). This fast oscillation is temporally organized as scale-invariant avalanches, with power-law-like size and duration distributions, as shown by the green curves in Figures 3E,F; it is thus a critical state. It can be seen from Figure 2D that the average pairwise correlation is still low in the critical state, since the avalanches at this stage are participated in by only a small, random portion of neurons (Figure 2F), which reconciles the coexistence of weak pairwise correlation and strongly clustered spiking patterns (Schneidman et al., 2006). The collective oscillation state has more slow-frequency peaks and its avalanche size and duration distributions have heavier tails (red curves in Figures 3E,F) compared with the critical state, which are features of a supercritical state. This is because avalanches from collective oscillations with specific harmonics produce characteristic scales, giving rise to heavy tails in the avalanche size and duration distributions. From Figure 3D, we see that the PSD in the critical state shows a peak at around 100 Hz, accompanied by its harmonics, on top of a power-law decay P(f) ∼ f^{−β}. Such 'scale-free' behavior of the PSD is observed in different local field potential data (He, 2014). By linear regression, we estimate the decay exponent β ≈ 3.5 in the critical state; in the supercritical state the decay has a smaller exponent β ≈ 2. Overall, these exponents lie in the typical range observed in local field potential data (such as EEG and ECoG), where β ranges from 1 to 4 (He et al., 2010), and the value of β has been shown to be related to physiological and cognitive states (He et al., 2010).

To compare the critical properties of different dynamic states, we first examine the distance D between the avalanche size distribution and its fitted power-law distribution (see Materials and Methods), an efficient way to roughly judge which dynamic state is closest to criticality, since a power-law distribution is its most striking feature. Figure 3C shows that the distance D is smallest when the network is poised near the Hopf bifurcation point predicted by the mean-field theory. Next, we confirmed the statistical significance of the power-law distributions through a truncated K-S test (see Materials and Methods) for dynamic states sufficiently close to the minimum of D. The existence of power-law distributions is only partial evidence of criticality, as other mechanisms can also generate power laws (Yadav et al., 2017). Finally, we further examined the scaling relation (Sethna et al., 2001) in the critical state (see Materials and Methods). The estimated slopes within the truncated ranges of the avalanche size and duration distributions define the critical exponents via P(S) ∼ S^{−τ} and P(T) ∼ T^{−α}. A third exponent is defined by ⟨S⟩(T) ∼ T^{1/(σνz)}, where ⟨S⟩(T) is the average size of avalanches with the same duration T. This third scaling feature can be found in both the subcritical and critical cases (Figure 3G), in accordance with previous experimental findings (Friedman et al., 2012). We find that the scaling relation (Sethna et al., 2001)

\frac{\alpha - 1}{\tau - 1} = \frac{1}{\sigma \nu z} \qquad (16)

approximately holds in the critical state. The exact values of the critical exponents may depend on the details of the system. To demonstrate this, we slightly vary the input strength Q_o from 4 to 8 Hz; the network still shows significant critical properties at the critical value τ_d^I = 3 ms. As can be seen in Figure 3H, different exponents satisfying the scaling relation Equation (16) can be found by varying the input strength Q_o. This phenomenon, together with the approximately linear relationship between the exponents τ and α (inset of Figure 3H), is in accordance with results measured in vivo in the primary visual cortex (V1) of various animals (Fontenele et al., 2019) across a wide range of neural activity states, as well as with our experimental data analysis below. The scaling relation in Equation (16) provides additional evidence that the avalanches in the microscopic network occurring near the bifurcation point of the mean-field equations possess the properties of criticality. Interestingly, a recent theoretical study (Zierenberg et al., 2018) suggested that changing the externally imposed input in vitro can tune the criticality of neural tissue (i.e., make it approach or deviate from criticality), which was later confirmed in neuromorphic chips (Cramer et al., 2020). In line with this idea, our modeling study predicts that around the critical state, slightly tuning the strength of the external input maintains the criticality of the network but modulates the critical exponents of the scale-free avalanches, which may be an interesting testable prediction for in vitro experiments.
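The scaling-relation check can be sketched as follows. This simplified version estimates τ, α, and 1/(σνz) with least-squares fits on log-binned histograms rather than the maximum-likelihood fits and truncated K-S test of Materials and Methods, so it should be read as an assumption-laden illustration of Equation (16), not as our exact pipeline.

```python
import numpy as np

def loglog_slope(x, y):
    """Least-squares slope of log10(y) versus log10(x) (positive entries only)."""
    m = (x > 0) & (y > 0)
    return np.polyfit(np.log10(x[m]), np.log10(y[m]), deg=1)[0]

def scaling_relation_check(sizes, durations, n_bins=30):
    """Estimate tau, alpha and 1/(sigma nu z) from avalanche sizes/durations,
    then return the mismatch |(alpha - 1)/(tau - 1) - 1/(sigma nu z)| of Eq. (16)."""
    def hist_exponent(values):
        # Exponent of P(v) ~ v**(-x) from a log-binned, normalized histogram.
        edges = np.logspace(np.log10(values.min()), np.log10(values.max()), n_bins)
        counts, edges = np.histogram(values, bins=edges, density=True)
        centers = np.sqrt(edges[:-1] * edges[1:])
        return -loglog_slope(centers, counts)

    tau = hist_exponent(sizes)
    alpha = hist_exponent(durations)

    # 1/(sigma nu z) from the mean size conditioned on duration, <S>(T) ~ T**(1/(sigma nu z)).
    uniq_T = np.unique(durations)
    mean_S = np.array([sizes[durations == T].mean() for T in uniq_T])
    inv_snz = loglog_slope(uniq_T, mean_S)

    error = abs((alpha - 1.0) / (tau - 1.0) - inv_snz)
    return tau, alpha, inv_snz, error
```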

Note that an avalanche can be understood as a temporal propagation of spiking activity in the network. These propagations occur when the excitatory current temporarily spills over the network, and the avalanche size depends on the strength of the propagation (see Figures 1H,I). In the strictly balanced state, where excitation is cancelled by inhibition very quickly, only small avalanches can occur and the avalanche distribution is thus subcritical. On the contrary, in the loosely balanced state with sparse network oscillations, large avalanches with typical scales can be induced by the temporary domination of excitation, and the avalanche distribution can become supercritical. Only near the transition point, where the macroscopic dynamics is also noise-driven, do avalanches occur at all scales, so that the avalanche distribution can be scale-free. More specifically, on the macroscopic scale, the dynamical process of an avalanche corresponds to a noise-induced excursion of the population firing rate. Our measurement of avalanches in the network from individual spiking times requires a fine time scale, and the information at this fine scale is averaged out in the field model; thus, information describing small avalanches vanishes in the field model, which only predicts the global firing-rate dynamics. Although it is still difficult to directly link the microscopic avalanche dynamics to the macroscopic firing-rate dynamics, the scale-invariance property of criticality suggests that the properties of burst activities (avalanches) on the macroscopic scale can shed light on the origin of power-law scaling in the microscopic network.

To consider avalanches on the macroscopic scale, one should inspect the fluctuation behavior of a macroscopic signal x(t) of the network, such as the mean firing rate. An avalanche is a process that starts growing at x(0) = x_th + ε (ε→0+) and ends the first time the signal returns to x(T) = x_th at time t = T. Here, x_th is a threshold above which the avalanche is defined. The quantity T is the first-passage time (FPT) (back to x_th) of this process and defines the duration of the avalanche. The area S = ∫_0^T x(t) dt of this excursion defines the size of the avalanche. The scale-free behavior at the synchronous transition point may be understood as a general feature of dynamical systems near a bifurcation point when subjected to demographic noise of the form dx/dt ∼ √x ξ(t) (di Santo et al., 2017). This is because, in the critical state of the E-I network, neurons are subjected to Poisson-like noise input with very weak correlation. Thus, by the Gaussian approximation given by the central limit theorem, the overall fluctuation of the population activity density scales with the square root of the activity (Benayoun et al., 2010; di Santo et al., 2018). Noise-driven random-walker theory predicts that a power-law distribution of avalanches is a general feature of a dynamical system subjected to such noise near a Hopf bifurcation (in fact, near general bifurcations). Specifically, although the dynamic form of this macroscopic signal may not be explicit in the field equations, we can heuristically consider X(t) = x − x_th to obey the normal form of the amplitude dynamics of a Hopf bifurcation, driven by a noisy force modeled as GWN. The Langevin equation it obeys is

\frac{dX}{dt} = aX - X^{3} + \eta(X, t). \qquad (19)

The first part of Equation (19) is the normal form of the oscillation-amplitude dynamics of a supercritical Hopf bifurcation (Marsden and McCracken, 2012), where periodic motion arises when a increases across the bifurcation point a = 0. The second part, η(X,t), is the noisy driving force, whose fluctuation scales with the square root of the activity. Thus, it has the form η(X,t) = h + √X ξ(t), where h is the mean bias, including the effects of recurrent excitatory, recurrent inhibitory, and external inputs, and ξ(t) is a standard GWN. The avalanche dynamics given by the first-passage process of Equation (19) can be mapped to a random walk in a logarithmic potential through a scaling analysis (details in di Santo et al., 2017). Under this approach, the FPT distribution of the avalanche process can be solved analytically via an absorbing-boundary approach, resulting in

P(T = t) = \frac{(2\varepsilon)^{1-4h}}{\Gamma\!\left(\frac{1}{2}-h\right)}\, t^{\frac{4h-3}{2}} \exp\!\left(-\frac{2\varepsilon^{2}}{t}\right). \qquad (20)

This explains the power-law distribution P(T) ∼ T^{−α} with α = 3/2 − 2h, and the other exponents can be obtained by scaling arguments (di Santo et al., 2017), although the relation between the avalanche critical exponents on the macroscopic and microscopic scales needs further exploration. In all, irregular microscopic spiking leads to macroscopic fluctuation, which becomes the dominant effect shaping the dynamics near the macroscopic bifurcation point. The scale-free avalanche dynamics of the microscopic spiking network at the critical state may thus be understood through the scaling features of the first-passage dynamics near the Hopf bifurcation point in the macroscopic field model.
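A minimal numerical sketch of this heuristic argument is given below. It integrates Equation (19) with an Euler-Maruyama scheme, using η(X,t) = h + √max(X,0) ξ(t) and clipping X at zero to keep the square root defined (a regularization added for the simulation, not part of the analytical treatment), and then reads off avalanche durations as first-passage times above a small threshold x_th. All parameter values are arbitrary illustrations; at a = 0, the duration histogram is expected to be close to a power law, in line with Equation (20).

```python
import numpy as np

def simulate_hopf_normal_form(a=0.0, h=0.05, dt=1e-3, n_steps=200_000, seed=0):
    """Euler-Maruyama integration of dX/dt = a*X - X**3 + h + sqrt(X)*xi(t).
    X is clipped at zero so the multiplicative square-root noise stays defined
    (a numerical regularization only)."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = 0.0
    for i in range(1, n_steps):
        drift = a * x[i - 1] - x[i - 1] ** 3 + h
        noise = np.sqrt(max(x[i - 1], 0.0)) * rng.normal(0.0, np.sqrt(dt))
        x[i] = max(x[i - 1] + drift * dt + noise, 0.0)
    return x

def first_passage_durations(x, x_th=1e-3, dt=1e-3):
    """Durations of excursions above x_th: each starts when x crosses x_th upward
    and ends at the first return below x_th (the avalanche duration T in the text)."""
    above = x > x_th
    starts = np.where(~above[:-1] & above[1:])[0] + 1   # upward crossings
    ends = np.where(above[:-1] & ~above[1:])[0] + 1     # downward crossings
    if starts.size:
        ends = ends[ends > starts[0]]
    n = min(starts.size, ends.size)
    return (ends[:n] - starts[:n]) * dt

x = simulate_hopf_normal_form(a=0.0)       # poised at the bifurcation point
durations = first_passage_durations(x)     # expected to be power-law-like at a = 0
```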

Coexistence of Irregular Spiking and Critical Avalanches in Experimental Data

In the following, we further analyze public experimental data measuring the in vitro neuronal spiking activity of mouse somatosensory cortex cultures (Ito et al., 2016) (25 data sets in total), to examine whether the coexistence of irregular spiking and collective critical avalanches is a widespread phenomenon. In these in vitro data, spiking in the culture clearly shows up-down state transitions (Sanchez-Vives and McCormick, 2000; Luczak et al., 2007): active spiking periods (up-states) and silent periods (down-states) alternate slowly with a frequency of about 0.1 Hz (Figure 4A). We focus on analyzing the up-states (see the definition in Materials and Methods) for the following reasons. First, neurons in up-states exhibit more spiking and stronger oscillation trends, as seen from the power spectral density (PSD) in Figure 4C; in contrast, spikes are too few in the down-states, so up-states are more likely to represent normal neural activity and are closer to the dynamic regime studied in our model. Furthermore, the recorded neurons exhibit heterogeneity (Ito et al., 2014), i.e., a broad distribution of individual firing rates, which differs from our modeling results based on a homogeneous random network topology. However, the spiking of neurons in up-states is more homogeneous, as seen from the more even distribution of firing rates across neurons (Figure 4B). Taken together, we expect that our modeling results may explain some properties of the experimental data in up-states.

Figure 4. Coexistence of irregular spiking and critical avalanches in experimental data sets. (A) The raster plot of the spiking times of data Set 1 during one period. Each point corresponds to a spike of a neuron. Indexes are sorted according to the total number of spikes during the whole recording time. The red/yellow segments at the top indicate the up/down states. (B) The distribution of neuronal firing rates in up- and down-states. (C) The power spectral density of the population firing rate in up- and down-states. In panels (B,C), the examples are taken from one up period and one down period of data Set 1. (D) Boxplots of the CV distribution across neurons in different data sets. (E–G) Critical avalanche properties of data Set 1. (E) Probability density distribution of the avalanche size. (F) Probability density distribution of the avalanche duration. (G) Mean avalanche size with respect to the given avalanche duration. The power-law distributions of size and duration are destroyed in surrogate data with shuffled ISIs (blue curves). (H) Scaling relations between critical exponents of different data sets, similar to Figure 3H.

While most of the data sets display heavy tails in the distributions of avalanche size and duration (Supplementary Figures 4, 5), we find that nine of the 25 data sets exhibit both size and duration distributions that correspond to power-laws according to our criteria (see Materials and Methods). Among them, seven sets have critical exponents that approximately satisfy the scaling relation Equation (16) with errors < 0.1. Further details of the analysis, including the estimated critical exponents of each data set, are presented in Supplementary Table 1. These estimated critical exponents α, τ, and 1/(σνz) are also close to the ranges found in our model. The statistical results of these seven data sets are shown in Figures 4D,H. Boxplots of the distribution of the CV of the ISI across neurons in each set are shown in Figure 4D. Typical values of the CV of neuronal ISIs are around 1 ∼ 1.5, a range similar to the model prediction, indicating irregular spiking. Figures 4E–G illustrate the avalanche size and duration statistics of Set 1, where power-law distributions of avalanche size and duration can be observed. For surrogate data, in which the inter-spike intervals of neurons were randomly shuffled, the power-law distributions cannot be maintained, indicating that the critical properties are intrinsic to the data. As shown in Figure 4H, the scaling relationship between critical exponents (i.e., Equation (16)) approximately holds for these data sets. For these critical data sets, we also find a linear relationship between the exponents τ and α (inner panel of Figure 4H), namely α = 1.36τ − 0.37, whose slope is slightly different from the value in our model, α = 1.21τ − 0.154 (inner panel, Figure 3H), and from a previous study (Fontenele et al., 2019), α = 1.28τ − 0.28. These experimental results indicate that the phenomenon found in the model, namely that irregular neuronal spiking can collectively organize as scale-invariant critical avalanches, is also widespread in vitro, complementing the finding of this widespread phenomenon across different cortical states in vivo (Fontenele et al., 2019). Note that the model we study here does not capture the slow transition between up- and down-states, which in general involves additional adaptation mechanisms (Jercog et al., 2017; Tartaglia and Brunel, 2017).
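The two single-neuron statistics used above, the CV of the ISI and the ISI-shuffled surrogate, can be computed along the following lines. The surrogate shown here keeps each neuron's first spike time and its multiset of ISIs while permuting their order, which is our reading of the shuffling procedure and should be checked against Materials and Methods.

```python
import numpy as np

def cv_of_isi(spike_times):
    """Coefficient of variation of the inter-spike intervals of one neuron."""
    isi = np.diff(np.sort(spike_times))
    return isi.std() / isi.mean() if isi.size > 1 else np.nan

def isi_shuffled_surrogate(spike_times, rng):
    """Surrogate spike train: same first spike and same set of ISIs, but with
    the ISI order randomly permuted (destroys temporal structure across
    neurons while preserving each neuron's rate and CV)."""
    st = np.sort(spike_times)
    isi = np.diff(st)
    return st[0] + np.concatenate(([0.0], np.cumsum(rng.permutation(isi))))

# Illustrative Poisson-like train; in the analysis this would be one recorded neuron.
rng = np.random.default_rng(2)
train = np.cumsum(rng.exponential(0.1, size=500))
cv = cv_of_isi(train)                        # close to 1 for a Poisson process
surrogate = isi_shuffled_surrogate(train, rng)
```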

Discussion

In summary, we have shown that E-I balanced IF networks with synaptic filtering kinetics can reconcile the coexistence of irregular spiking and collective critical avalanches near a synchronous transition point. The mechanism is unveiled by the macroscopic field equations, derived with a novel mean-field theory, which effectively capture the network dynamics. Finally, we show that the phenomenon can be widely observed in the up-states of spontaneous spiking activity recorded in vitro in mouse somatosensory cortex cultures.

The E-I Balanced State Reconciles With Different Levels of Synchrony

Traditionally, E-I balance (Okun and Lampl, 2009) has been deemed the origin of the irregularity of neuronal spiking. Classical mean-field theory (Van Vreeswijk and Sompolinsky, 1996, 1998; Renart et al., 2010) predicts that irregular spiking corresponds to an asynchronous state with low spiking-time correlation. However, input synchrony (Stevens and Zador, 1998) has also been shown to contribute to irregular spiking. This varied understanding raises the question of how to effectively characterize the dynamics of the E-I balanced state.

Here, we study E-I networks that incorporate synaptic kinetics and the critical transition from strict to loose balance, where individual neurons continue to spike irregularly across these different dynamic states. Thus, individual irregular spiking is compatible with both the asynchronous state (Renart et al., 2010) and the sparsely synchronous state (Brunel and Hakim, 2008) (i.e., synchronous inputs). Our theory shows that the balanced state can be a stable fixed point or a stable limit cycle on the macroscopic scale. The former, induced by strict balance, corresponds to the asynchronous state in accordance with the traditional theory, while the latter, induced by loose balance, corresponds to collective network oscillation, with critical dynamics during the transition between them. The critical dynamics, in which neurons are weakly synchronous, also provides an explanation of how weak pairwise correlation can induce abundant collective behavior (Schneidman et al., 2006). Our study thus provides an effective characterization in which the E-I balanced state can be a macroscopically stable state of different characters (fixed point or limit cycle).

Possible Origin of Crackling Noise in Neural Systems Around Synchronous Transition Point

The criticality of neural systems has been long debated (Wilting and Priesemann, 2019a). Previous experimental and theoretical studies (Beggs and Plenz, 2003; Lombardi et al., 2012; Poil et al., 2012; Yang et al., 2012) have suggested that critical avalanches exist in the E-I balanced state, but many simplified models use unrealistic neural dynamics (such as spreading processes on networks) and the definition of E-I balance is usually ambiguous. Our study of an IF network with realistic synaptic kinetics considers realistic irregular asynchronous or synchronous spiking in the E-I balanced state, and reveals that critical avalanches exist near the synchronous transition point between these states, offering a more biologically plausible explanation of the origin of scale-free dynamics in neural systems.

Our theory is thus consistent with the understanding that criticality occurs at the edge of a synchronous transition (di Santo et al., 2018; Dalla Porta and Copelli, 2019; Fontenele et al., 2019) with intermediate levels of spiking variability (Fontenele et al., 2019), and that critical avalanches can temporally organize as collective oscillations (Gireesh and Plenz, 2008). It may appear contradictory that scale-free avalanches co-exist with scale-dependent network oscillations. This can be understood from the accumulation-release process of large avalanches: the system takes a typical period of time to accumulate before another large avalanche is released, leading to a rhythm of large avalanches (Wang et al., 2016). Furthermore, the large avalanches dominate the modulation of the network oscillation, whereas the avalanches as a whole remain scale-free distributed. Conversely, how these avalanches are modulated by the oscillations of individual neurons may be another interesting problem for further exploration.

Some previous modeling studies also indicate that critical avalanches emerge near synchronous transitions. For example, di Santo et al. (2018) studied spatially coupled units modeled by cubic normal-form rate equations; their physical model facilitates theoretical treatment but omits many biological components. Dalla Porta and Copelli (2019) studied a probabilistic spiking model without theoretically characterizing the transition dynamics; they focused on the analysis of population activity without considering the irregular spiking and E-I balanced input of individual units. An oscillation component can also be included in a physical model of the branching process (Pausch et al., 2020), but in a branching process being away from criticality means either explosion or complete silence, which is not biologically desirable. Thus, the biological realism of our model, with synaptic filtering kinetics, E-I balance and irregular spiking times of individual neurons, goes well beyond these previous studies. Importantly, in our model the synaptic transmission is not instantaneous due to the filtering effect, which makes it difficult to distinguish the spatially causal relation (Martinello et al., 2017) between successive spikes. The time-binned avalanches we measure here are temporally causal (as in experimental measurements) but not necessarily spatially causal in the network. Thus, our estimated critical exponents do not agree with those of spatially causal avalanches produced by critical branching processes (Haldeman and Beggs, 2005) (i.e., the directed percolation (DP) class), which predicts the classical exponents τ = 1.5, α = 2, 1/(σνz) = 2. Indeed, critical exponents different from the DP class have been found in previously reported experiments (Palva et al., 2013; Fontenele et al., 2019). Our own analysis of the experimental data (Figure 4H) also confirmed the variation of the exponents while the scaling relationship Equation (16) is maintained. Furthermore, the linear relationship between the exponents τ and α, also found in recent studies (Dalla Porta and Copelli, 2019; Fontenele et al., 2019), may depend on detailed properties of the underlying circuit, and its dynamic origin remains to be explored.

Another prediction of critical theory is that different avalanches can be collapsed onto a single scaling function (Sethna et al., 2001) at criticality. Such data collapse is a stricter criterion for probing criticality than the scaling relation Equation (16) used here. It would be interesting to further explore whether there is a unique scaling function onto which different avalanches can be collapsed and how different oscillation components in the network can modulate its shape (Miller et al., 2019).

A Macroscopic Description of IF Neuronal Networks With Synaptic Kinetics

Traditional theories of E-I IF neuronal networks make use of the diffusion approximation (Amit and Brunel, 1997). For current-based dynamics without synaptic kinetics, a Fokker-Planck equation for the membrane-potential distribution can be constructed and the firing rate can be derived by first-passage-time theory (Amit and Brunel, 1997). Furthermore, the stability of the stationary membrane-potential distribution can be analyzed through linear perturbation (Brunel, 2000), which predicts the transition from asynchronous spiking to the sparsely synchronous state through a Hopf bifurcation. In contrast, our theory analyzes the stability of the dynamics of the mean membrane potential itself. Importantly, when synaptic kinetics is included, a complete analytical treatment does not exist. For example, the firing rate could only be derived asymptotically through a singular perturbation method assuming fast-decaying synapses (Fourcaud and Brunel, 2002), more ad hoc assumptions have to be imposed for conductance-based dynamics (Brunel and Wang, 2001), and the oscillation properties of the network could only be analyzed by assuming a heuristic form of the network steady-state firing-rate solution (Brunel and Wang, 2003). In contrast, our semi-analytical theory treats networks with current-based or conductance-based dynamics (see Supplementary Material: Appendix 1. Model Extensions), with or without synaptic kinetics, in a unified way.

A more challenging but also more useful issue, which has attracted much recent attention, is to find macroscopic transient dynamic descriptions, e.g., rate equations, of neuronal networks. Schaffer et al. (2013) derived complex-valued firing-rate equations of IF networks through an eigenfunction expansion of the Fokker-Planck equation under the diffusion approximation. Rate equations of adaptive non-linear IF networks have also been derived (Augustin et al., 2017) under effective approximations of the Fokker-Planck equation. Montbrió et al. (2015) derived rate equations for quadratic IF networks using the Lorentzian ansatz, an approach that has been generalized to include gap junctions (Laing, 2015) and synaptic filtering kinetics (Dumont and Gutkin, 2019). Schwalger et al. (2017) developed a method to derive stochastic rate equations of adaptive IF networks based on a mean-field approximation of the renewal equation. In general, each analytical theory works under specific conditions and is therefore highly specialized and complex. Furthermore, most proposed theories fail to capture the synchronous transition induced by the synaptic filtering effect studied in our model.

Instead of aiming for theoretical perfection, here we seek simplicity and effectiveness. The key novel feature of our framework (see detailed derivations in Materials and Methods) is that the mean-field equations of the macroscopic dynamical variables can be closed by the voltage-dependent mean firing rate (i.e., Equation (9)). While many previous theoretical analyses of E-I networks require the assumption of low input correlation (Van Vreeswijk and Sompolinsky, 1998; Brunel, 2000), the formula here is constructed by counting the number of neurons spiking in a small time window, without explicitly considering the correlations among neurons, which allows it to capture the essential sub- and supra-threshold microscopic dynamics of a spiking network. Our scheme for deriving the dynamic equations governing the first-order statistics of the neural network is not completely analytical, since the derivation neglects several factors that shape the network dynamics, including higher-order statistics, noise correlation and refractory time. All these neglected factors are incorporated into the effective parameters σα given by Equation (10). Nonetheless, this semi-analytical theory provides a simple yet useful method for analyzing dynamic features directly related to the network firing rate, such as synchrony, criticality and responses to dynamic stimuli. The semi-analytical nature of the theory results in the simple form of the macroscopic field equations, which facilitates further generalization. Previous work (Wong and Wang, 2006) proposed another semi-analytical theory for deriving rate equations from spiking neural networks. That method is a direct generalization of the classical Wilson-Cowan model (Wilson and Cowan, 1972) with presumed f-I curves (the relationship between input current and output rate). The derived field equations include the synaptic gating variables and can thus capture the synaptic filtering effect. Yet, unlike our approach, that analysis cannot directly predict the dynamical properties of the spiking network (e.g., criticality, network oscillations) from the derived field equations.
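To make the kind of macroscopic analysis discussed here concrete, the toy sketch below implements a generic Wilson-Cowan-style E-I rate model with first-order synaptic filtering and locates its fixed point and leading Jacobian eigenvalue numerically. It is emphatically not the field equations derived in Materials and Methods; the sigmoidal transfer function and all parameter values are arbitrary assumptions. It only illustrates the generic recipe of detecting a Hopf bifurcation of a field model by linearization around its fixed point, the same type of analysis applied to our derived equations.

```python
import numpy as np
from scipy.optimize import fsolve

def f(x):
    """Bounded sigmoidal transfer function (an illustrative choice)."""
    return 1.0 / (1.0 + np.exp(-x))

def rhs(state, p):
    """Generic E-I rate model with synaptic filtering: rates rE, rI relax toward
    f(synaptic input); gating variables sE, sI low-pass filter the rates with
    synaptic time constants tau_sE, tau_sI."""
    rE, rI, sE, sI = state
    drE = (-rE + f(p["wEE"] * sE - p["wEI"] * sI + p["hE"])) / p["tauE"]
    drI = (-rI + f(p["wIE"] * sE - p["wII"] * sI + p["hI"])) / p["tauI"]
    dsE = (-sE + rE) / p["tau_sE"]
    dsI = (-sI + rI) / p["tau_sI"]
    return np.array([drE, drI, dsE, dsI])

def leading_eigenvalue(p, eps=1e-6):
    """Find the fixed point, build the Jacobian by central finite differences,
    and return the eigenvalue with the largest real part; its real part crossing
    zero with nonzero imaginary part under a parameter sweep (e.g. of tau_sI)
    would indicate a Hopf bifurcation of this toy model."""
    fp = fsolve(lambda x: rhs(x, p), np.full(4, 0.5))
    J = np.column_stack([(rhs(fp + eps * e, p) - rhs(fp - eps * e, p)) / (2 * eps)
                         for e in np.eye(4)])
    eig = np.linalg.eigvals(J)
    return eig[np.argmax(eig.real)]

params = dict(wEE=10.0, wEI=12.0, wIE=10.0, wII=8.0, hE=1.0, hI=0.5,
              tauE=0.02, tauI=0.01, tau_sE=0.002, tau_sI=0.003)
lam = leading_eigenvalue(params)   # sweep tau_sI and track the sign of Re(lam)
```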

Outlook

The mean-field analysis introduced here can readily be used in the study of neural dynamics across a diverse range of topics. It can be easily generalized to include other factors by assuming different macroscopic variables in the field model. Possible extensions of the analysis include studying the effects of multiple neuronal populations and synaptic receptor types, clustered or spatially extended network connections, adaptive behaviors such as short-term plasticity, etc. Thus, our work allows further exploration of the mechanisms that determine the role of synaptic kinetics in working memory retrieval (Mongillo et al., 2008), self-organized critical phenomena due to plasticity (Levina et al., 2007; Millman et al., 2010), and spatially causal avalanches or waves (Keane and Gong, 2015; Keane et al., 2018; Gu et al., 2019) arising from the competition between Hopf and Turing instabilities (Huang et al., 2019). As a theory that links microscopic neuronal spiking and macroscopic collective activity, consistent in several aspects with real neural dynamics with regard to E-I balance and neural criticality, our theory also establishes a basis for modeling large-scale brain connectomes (Haimovici et al., 2013; Chaudhuri et al., 2015; Wang et al., 2019), and for studying large-scale brain networks and information processing with realistic multiscale complex dynamics. In the experimental data analysis, we have confirmed the wide coexistence of irregular spiking and critical avalanches in the up-states of in vitro neuronal cultures, but our model could not reproduce some features of the experimental data sets: the uneven distribution of firing rates across neurons may require a heterogeneous network topology, and the up-down state transition may require adaptation mechanisms on a slower time scale (Jercog et al., 2017; Tartaglia and Brunel, 2017). It would be interesting to further generalize our theory to models with such features. The potential applications of our theory listed here will receive the further exploration they deserve in due course.

Data Availability Statement

Publicly available datasets were analyzed in this study. This data can be found here: http://crcns.org/data-sets/ssc/ssc-3.

Author Contributions

JL and CZ conceived the study. TZ and CZ supervised the project. JL performed the theoretical analysis, numerical simulation, and data analysis. JL, TZ, and CZ wrote the manuscript. All authors contributed to the article and approved the submitted version.

Funding

This work was supported by the Hong Kong Baptist University (HKBU) Strategic Development Fund, the Hong Kong Research Grant Council (GRF12200217), the HKBU Research Committee and Interdisciplinary Research Clusters Matching Scheme 2018/19 (RC-IRCMs/18-19/SCI01), the National Natural Science Foundation of China (Grants Nos. 11775314, 91530320, and 11975194), and Key-Area Research and Development Program of Guangzhou (Grants No. 202007030004). This research was conducted using the resources of the High-Performance Computing Cluster Centre at HKBU, which receives funding from the RGC and the HKBU.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

This manuscript has been released as a pre-print at arXiv.org (Liang et al., 2020). The authors would like to thank John M. Beggs for helpful discussion and pointing us to the public data sets (Ito et al., 2016). We also thank all reviewers for their comments and suggestions.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fnsys.2020.580011/full#supplementary-material

References

Abeles, M. (1991). Corticonics: Neural Circuits of the Cerebral Cortex. Cambridge: Cambridge University Press. doi: 10.1017/CBO9780511574566

Amit, D. J., and Brunel, N. (1997). Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex. Cereb. Cortex 7, 237–252. doi: 10.1093/cercor/7.3.237

Augustin, M., Ladenbauer, J., Baumann, F., and Obermayer, K. (2017). Low-dimensional spike rate models derived from networks of adaptive integrate-and-fire neurons: comparison and implementation. PLoS Comput. Biol. 13:e1005545. doi: 10.1371/journal.pcbi.1005545

Beggs, J. M., and Plenz, D. (2003). Neuronal avalanches in neocortical circuits. J. Neurosci. 23, 11167–11177. doi: 10.1523/JNEUROSCI.23-35-11167.2003

Bellay, T., Klaus, A., Seshadri, S., and Plenz, D. (2015). Irregular spiking of pyramidal neurons organizes as scale-invariant neuronal avalanches in the awake state. eLife 4:e07224. doi: 10.7554/eLife.07224.019

Benayoun, M., Cowan, J. D., van Drongelen, W., and Wallace, E. (2010). Avalanches in a stochastic model of spiking neurons. PLoS Comput. Biol. 6:e1000846. doi: 10.1371/journal.pcbi.1000846

Brown, E. N., Lydic, R., and Schiff, N. D. (2010). General anesthesia, sleep, and coma. N. Engl. J. Med. 363, 2638–2650. doi: 10.1056/NEJMra0808281

Brunel, N. (2000). Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. J. Comput. Neurosci. 8, 183–208. doi: 10.1023/A:1008925309027

Brunel, N., and Hakim, V. (1999). Fast global oscillations in networks of integrate-and-fire neurons with low firing rates. Neural Comput. 11, 1621–1671. doi: 10.1162/089976699300016179

Brunel, N., and Hakim, V. (2008). Sparsely synchronized neuronal oscillations. Chaos An Interdiscip. J. Nonlinear Sci. 18:15113. doi: 10.1063/1.2779858

Brunel, N., and Wang, X.-J. (2001). Effects of neuromodulation in a cortical network model of object working memory dominated by recurrent inhibition. J. Comput. Neurosci. 11, 63–85. doi: 10.1023/A:1011204814320

Brunel, N., and Wang, X.-J. (2003). What determines the frequency of fast network oscillations with irregular neural discharges? I. Synaptic dynamics and excitation-inhibition balance. J. Neurophysiol. 90, 415–430. doi: 10.1152/jn.01095.2002

Burkitt, A. N. (2006). A review of the integrate-and-fire neuron model: I. Homogeneous synaptic input. Biol. Cybern. 95, 1–19. doi: 10.1007/s00422-006-0068-6

Chaudhuri, R., Knoblauch, K., Gariel, M.-A., Kennedy, H., and Wang, X.-J. (2015). A large-scale circuit mechanism for hierarchical dynamical processing in the primate cortex. Neuron 88, 419–431. doi: 10.1016/j.neuron.2015.09.008

Cohen, M. R., and Kohn, A. (2011). Measuring and interpreting neuronal correlations. Nat. Neurosci. 14:811. doi: 10.1038/nn.2842

Cramer, B., Stöckel, D., Kreft, M., Wibral, M., Schemmel, J., Meier, K., et al. (2020). Control of criticality and computation in spiking neuromorphic networks with plasticity. Nat. Commun. 11, 1–11. doi: 10.1038/s41467-020-16548-3

Dalla Porta, L., and Copelli, M. (2019). Modeling neuronal avalanches and long-range temporal correlations at the emergence of collective oscillations: continuously varying exponents mimic M/EEG results. PLoS Comput. Biol. 15:e1006924. doi: 10.1371/journal.pcbi.1006924

Darshan, R., van Vreeswijk, C., and Hansel, D. (2018). Strength of correlations in strongly recurrent neuronal networks. Phys. Rev. X 8:31072. doi: 10.1103/PhysRevX.8.031072

Dayan, P., and Abbott, L. F. (2001). Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. Cambridge, MA: MIT press.

Denève, S., and Machens, C. K. (2016). Efficient codes and balanced networks. Nat. Neurosci. 19:375. doi: 10.1038/nn.4243

di Santo, S., Villegas, P., Burioni, R., and Muñoz, M. A. (2017). Simple unified view of branching process statistics: random walks in balanced logarithmic potentials. Phys. Rev. E 95:32115. doi: 10.1103/PhysRevE.95.032115

di Santo, S., Villegas, P., Burioni, R., and Muñoz, M. A. (2018). Landau–Ginzburg theory of cortex dynamics: scale-free avalanches emerge at the edge of synchronization. Proc. Natl. Acad. Sci. U.S.A. 115, E1356—-E1365. doi: 10.1073/pnas.1712989115

Dumont, G., and Gutkin, B. (2019). Macroscopic phase resetting-curves determine oscillatory coherence and signal transfer in inter-coupled neural circuits. PLoS Comput. Biol. 15:e1007019. doi: 10.1371/journal.pcbi.1007019

Fontenele, A. J., de Vasconcelos, N. A. P., Feliciano, T., Aguiar, L. A. A., Soares-Cunha, C., Coimbra, B., et al. (2019). Criticality between cortical states. Phys. Rev. Lett. 122:208101. doi: 10.1103/PhysRevLett.122.208101

Fourcaud, N., and Brunel, N. (2002). Dynamics of the firing probability of noisy integrate-and-fire neurons. Neural Comput. 14, 2057–2110. doi: 10.1162/089976602320264015

Friedman, N., Ito, S., Brinkman, B. A. W., Shimono, M., DeVille, R. E. L., Dahmen, K. A., et al. (2012). Universal critical dynamics in high resolution neuronal avalanche data. Phys. Rev. Lett. 108:208102. doi: 10.1103/PhysRevLett.108.208102

Gautam, S. H., Hoang, T. T., McClanahan, K., Grady, S. K., and Shew, W. L. (2015). Maximizing sensory dynamic range by tuning the cortical state to criticality. PLoS Comput. Biol. 11:e1004576. doi: 10.1371/journal.pcbi.1004576

Gireesh, E. D., and Plenz, D. (2008). Neuronal avalanches organize as nested theta-and beta/gamma-oscillations during development of cortical layer 2/3. Proc. Natl. Acad. Sci. U.S.A. 105, 7576–7581. doi: 10.1073/pnas.0800537105

Golomb, D. (2007). Neuronal synchrony measures. Scholarpedia 2:1347. doi: 10.4249/scholarpedia.1347

Gu, Y., Qi, Y., and Gong, P. (2019). Rich-club connectivity, diverse population coupling, and dynamical activity patterns emerging from local cortical circuits. PLoS Comput. Biol. 15:e1006902. doi: 10.1371/journal.pcbi.1006902

Haimovici, A., Tagliazucchi, E., Balenzuela, P., and Chialvo, D. R. (2013). Brain organization into resting state networks emerges at criticality on a model of the human connectome. Phys. Rev. Lett. 110:178101. doi: 10.1103/PhysRevLett.110.178101

Haldeman, C., and Beggs, J. M. (2005). Critical branching captures activity in living neural networks and maximizes the number of metastable states. Phys. Rev. Lett. 94:58101. doi: 10.1103/PhysRevLett.94.058101

He, B. J. (2014). Scale-free brain activity: past, present, and future. Trends Cogn. Sci. 18, 480–487. doi: 10.1016/j.tics.2014.04.003

He, B. J., Zempel, J. M., Snyder, A. Z., and Raichle, M. E. (2010). The temporal structures and functional significance of scale-free brain activity. Neuron 66, 353–369. doi: 10.1016/j.neuron.2010.04.020

Herrmann, C. S., Munk, M. H. J., and Engel, A. K. (2004). Cognitive functions of gamma-band activity: memory match and utilization. Trends Cogn. Sci. 8, 347–355. doi: 10.1016/j.tics.2004.06.006

Holt, G. R., Softky, W. R., Koch, C., and Douglas, R. J. (1996). Comparison of discharge variability in vitro and in vivo in cat visual cortex neurons. J. Neurophysiol. 75, 1806–1814. doi: 10.1152/jn.1996.75.5.1806

Huang, C., Ruff, D. A., Pyle, R., Rosenbaum, R., Cohen, M. R., and Doiron, B. (2019). Circuit models of low-dimensional shared variability in cortical networks. Neuron 101, 337–348. doi: 10.1016/j.neuron.2018.11.034

Ito, S., Yeh, F.-C., Hiolski, E., Rydygier, P., Gunning, D. E., Hottowy, P., et al. (2014). Large-scale, high-resolution multielectrode-array recording depicts functional network differences of cortical and hippocampal cultures. PLoS One 9:e105324. doi: 10.1371/journal.pone.0105324

Ito, S., Yeh, F.-C., Timme, N. M., Hottowy, P., Litke, A. M., and Beggs, J. M. (2016). Spontaneous Spiking Activity of Hundreds of Neurons in Mouse Somatosensory Cortex Slice Cultures Recorded Using a dense 512 Electrode Array.

Jercog, D., Roxin, A., Bartho, P., Luczak, A., Compte, A., and De La Rocha, J. (2017). UP-DOWN cortical dynamics reflect state transitions in a bistable network. eLife 6:e22425. doi: 10.7554/eLife.22425.018

Keane, A., and Gong, P. (2015). Propagating waves can explain irregular neural dynamics. J. Neurosci. 35, 1591–1605. doi: 10.1523/JNEUROSCI.1669-14.2015

Keane, A., Henderson, J. A., and Gong, P. (2018). Dynamical patterns underlying response properties of cortical circuits. J. R. Soc. Interface 15:20170960. doi: 10.1098/rsif.2017.0960

Kinouchi, O., and Copelli, M. (2006). Optimal dynamical range of excitable networks at criticality. Nat. Phys. 2:348. doi: 10.1038/nphys289

Laing, C. R. (2015). Exact neural fields incorporating gap junctions. SIAM J. Appl. Dyn. Syst. 14, 1899–1929. doi: 10.1137/15M1011287

Landau, I. D., Egger, R., Dercksen, V. J., Oberlaender, M., and Sompolinsky, H. (2016). The impact of structural heterogeneity on excitation-inhibition balance in cortical networks. Neuron 92, 1106–1121. doi: 10.1016/j.neuron.2016.10.027

Levina, A., Herrmann, J. M., and Geisel, T. (2007). Dynamical synapses causing self-organized criticality in neural networks. Nat. Phys. 3:857. doi: 10.1038/nphys758

Liang, J., Zhou, T., and Zhou, C. (2020). Hopf bifurcation in mean field explains critical avalanches in excitation-inhibition balanced neuronal networks: a mechanism for multiscale variability. arXiv [Preprint]. Available online at: https://arxiv.org/abs/2001.05626 (accessed July 4, 2020).

Litke, A. M., Bezayiff, N., Chichilnisky, E. J., Cunningham, W., Dabrowski, W., Grillo, A. A., et al. (2004). What does the eye tell the brain?: development of a system for the large-scale recording of retinal output activity. IEEE Trans. Nucl. Sci. 51, 1434–1440. doi: 10.1109/TNS.2004.832706

Litwin-Kumar, A., and Doiron, B. (2012). Slow dynamics and high variability in balanced cortical networks with clustered connections. Nat. Neurosci. 15:1498. doi: 10.1038/nn.3220

Lombardi, F., Herrmann, H. J., Perrone-Capano, C., Plenz, D., and De Arcangelis, L. (2012). Balance between excitation and inhibition controls the temporal organization of neuronal avalanches. Phys. Rev. Lett. 108:228703. doi: 10.1103/PhysRevLett.108.228703

Luczak, A., Barthó, P., Marguet, S. L., Buzsáki, G., and Harris, K. D. (2007). Sequential structure of neocortical spontaneous activity in vivo. Proc. Natl. Acad. Sci. U.S.A. 104, 347–352. doi: 10.1073/pnas.0605643104

Ma, Z., Turrigiano, G. G., Wessel, R., and Hengen, K. B. (2019). Cortical circuit dynamics are homeostatically tuned to criticality in vivo. Neuron 104, 655–664. doi: 10.1016/j.neuron.2019.08.031

Marsden, J. E., and McCracken, M. (2012). The Hopf Bifurcation and Its Applications. Berlin: Springer Science & Business Media.

Marshall, N., Timme, N. M., Bennett, N., Ripp, M., Lautzenhiser, E., and Beggs, J. M. (2016). Analysis of power laws, shape collapses, and neural complexity: new techniques and matlab support via the NCC toolbox. Front. Physiol. 7:250. doi: 10.3389/fphys.2016.00250

Martinello, M., Hidalgo, J., Maritan, A., Di Santo, S., Plenz, D., and Muñoz, M. A. (2017). Neutral theory and scale-free neural dynamics. Phys. Rev. X 7:41071. doi: 10.1103/PhysRevX.7.041071

Miller, S. R., Yu, S., and Plenz, D. (2019). The scale-invariant, temporal profile of neuronal avalanches in relation to cortical γ–oscillations. Sci. Rep. 9, 1–14. doi: 10.1038/s41598-019-52326-y

Millman, D., Mihalas, S., Kirkwood, A., and Niebur, E. (2010). Self-organized criticality occurs in non-conservative neuronal networks during up states. Nat. Phys. 6:801. doi: 10.1038/nphys1757

Mongillo, G., Barak, O., and Tsodyks, M. (2008). Synaptic theory of working memory. Science 319, 1543–1546. doi: 10.1126/science.1150769

Montbrió, E., Pazó, D., and Roxin, A. (2015). Macroscopic description for networks of spiking neurons. Phys. Rev. X 5:21028. doi: 10.1103/PhysRevX.5.021028

Okun, M., and Lampl, I. (2008). Instantaneous correlation of excitation and inhibition during ongoing and sensory-evoked activities. Nat. Neurosci. 11:535. doi: 10.1038/nn.2105

Okun, M., and Lampl, I. (2009). Balance of excitation and inhibition. Scholarpedia 4:7467. doi: 10.4249/scholarpedia.7467

Palva, J. M., Zhigalov, A., Hirvonen, J., Korhonen, O., Linkenkaer-Hansen, K., and Palva, S. (2013). Neuronal long-range temporal correlations and avalanche dynamics are correlated with behavioral scaling laws. Proc. Natl. Acad. Sci. U.S.A. 110, 3585–3590. doi: 10.1073/pnas.1216855110

Pausch, J., Garcia-Millan, R., and Pruessner, G. (2020). Time-dependent branching processes: a model of oscillating neuronal avalanches. Sci. Rep. 10, 1–17. doi: 10.1038/s41598-020-69705-5

Poil, S.-S., Hardstone, R., Mansvelder, H. D., and Linkenkaer-Hansen, K. (2012). Critical-state dynamics of avalanches and oscillations jointly emerge from balanced excitation/inhibition in neuronal networks. J. Neurosci. 32, 9817–9823. doi: 10.1523/JNEUROSCI.5990-11.2012

Renart, A., De La Rocha, J., Bartho, P., Hollender, L., Parga, N., Reyes, A., et al. (2010). The asynchronous state in cortical circuits. Science 327, 587–590. doi: 10.1126/science.1179850

Rosenbaum, R., Smith, M. A., Kohn, A., Rubin, J. E., and Doiron, B. (2017). The spatial structure of correlated neuronal variability. Nat. Neurosci. 20:107. doi: 10.1038/nn.4433

Salin, P. A., and Prince, D. A. (1996). Spontaneous GABAA receptor-mediated inhibitory currents in adult rat somatosensory cortex. J. Neurophysiol. 75, 1573–1588. doi: 10.1152/jn.1996.75.4.1573

Sanchez-Vives, M. V., and McCormick, D. A. (2000). Cellular and network mechanisms of rhythmic recurrent activity in neocortex. Nat. Neurosci. 3:1027. doi: 10.1038/79848

Schaffer, E. S., Ostojic, S., and Abbott, L. F. (2013). A complex-valued firing-rate model that approximates the dynamics of spiking networks. PLoS Comput. Biol. 9:e1003301. doi: 10.1371/journal.pcbi.1003301

Schneidman, E., Berry, M. J. II, Segev, R., and Bialek, W. (2006). Weak pairwise correlations imply strongly correlated network states in a neural population. Nature 440:1007. doi: 10.1038/nature04701

Schwalger, T., Deger, M., and Gerstner, W. (2017). Towards a theory of cortical columns: From spiking neurons to interacting neural populations of finite size. PLoS Comput. Biol. 13:e1005507. doi: 10.1371/journal.pcbi.1005507

Sethna, J. P., Dahmen, K. A., and Myers, C. R. (2001). Crackling noise. Nature 410:242. doi: 10.1038/35065675

Shelley, M. J., and Tao, L. (2001). Efficient and accurate time-stepping schemes for integrate-and-fire neuronal networks. J. Comput. Neurosci. 11, 111–119. doi: 10.1023/A:1012885314187

Shew, W. L., Clawson, W. P., Pobst, J., Karimipanah, Y., Wright, N. C., and Wessel, R. (2015). Adaptation to sensory input tunes visual cortex to criticality. Nat. Phys. 11:659. doi: 10.1038/nphys3370

Shew, W. L., Yang, H., Petermann, T., Roy, R., and Plenz, D. (2009). Neuronal avalanches imply maximum dynamic range in cortical networks at criticality. J. Neurosci. 29, 15595–15600. doi: 10.1523/JNEUROSCI.3864-09.2009

Shew, W. L., Yang, H., Yu, S., Roy, R., and Plenz, D. (2011). Information capacity and transmission are maximized in balanced cortical networks with neuronal avalanches. J. Neurosci. 31, 55–63. doi: 10.1523/JNEUROSCI.4637-10.2011

Shu, Y., Hasenstaub, A., and McCormick, D. A. (2003). Turning on and off recurrent balanced cortical activity. Nature 423:288. doi: 10.1038/nature01616

Softky, W. R., and Koch, C. (1993). The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSPs. J. Neurosci. 13, 334–350. doi: 10.1523/JNEUROSCI.13-01-00334.1993

Stevens, C. F., and Zador, A. M. (1998). Input synchrony and the irregular firing of cortical neurons. Nat. Neurosci. 1:210. doi: 10.1038/659

Tartaglia, E. M., and Brunel, N. (2017). Bistability and up/down state alternations in inhibition-dominated randomly connected networks of LIF neurons. Sci. Rep. 7:11916. doi: 10.1038/s41598-017-12033-y

Teich, M. C., Heneghan, C., Lowen, S. B., Ozaki, T., and Kaplan, E. (1997). Fractal character of the neural spike train in the visual system of the cat. J. Opt. Soc. Am. A Opt. Image Sci. Vis. 14, 529–546. doi: 10.1364/JOSAA.14.000529

Touboul, J., and Destexhe, A. (2017). Power-law statistics and universal scaling in the absence of criticality. Phys. Rev. E 95:12413. doi: 10.1103/PhysRevE.95.012413

Van Vreeswijk, C., and Sompolinsky, H. (1996). Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science 274, 1724–1726. doi: 10.1126/science.274.5293.1724

Van Vreeswijk, C., and Sompolinsky, H. (1998). Chaotic balanced state in a model of cortical circuits. Neural Comput. 10, 1321–1371. doi: 10.1162/089976698300017214

Wang, R., Lin, P., Liu, M., Wu, Y., Zhou, T., and Zhou, C. (2019). Hierarchical connectome modes and critical state jointly maximize human brain functional diversity. Phys. Rev. Lett. 123:38301. doi: 10.1103/PhysRevLett.123.038301

Wang, S.-J., Hilgetag, C., and Zhou, C. (2011). Sustained activity in hierarchical modular neural networks: self-organized criticality and oscillations. Front. Comput. Neurosci. 5:30. doi: 10.3389/fncom.2011.00030

Wang, S.-J., Ouyang, G., Guang, J., Zhang, M., Wong, K. Y. M., and Zhou, C. (2016). Stochastic oscillation in self-organized critical states of small systems: sensitive resting state in neural systems. Phys. Rev. Lett. 116:18101. doi: 10.1103/PhysRevLett.116.018101

Williams-Garcia, R. V., Beggs, J. M., and Ortiz, G. (2017). Unveiling causal activity of complex networks. Europhys. Lett. 119:18003. doi: 10.1209/0295-5075/119/18003

Wilson, H. R., and Cowan, J. D. (1972). Excitatory and inhibitory interactions in localized populations of model neurons. Biophys. J. 12, 1–24. doi: 10.1016/S0006-3495(72)86068-5

Wilting, J., and Priesemann, V. (2019a). 25 years of criticality in neuroscience: established results, open controversies, novel concepts. Curr. Opin. Neurobiol. 58, 105–111. doi: 10.1016/j.conb.2019.08.002

Wilting, J., and Priesemann, V. (2019b). Between perfectly critical and fully irregular: a reverberating model captures and predicts cortical spike propagation. Cereb. Cortex 29, 2759–2770. doi: 10.1093/cercor/bhz049

Wong, K.-F., and Wang, X.-J. (2006). A recurrent network mechanism of time integration in perceptual decisions. J. Neurosci. 26, 1314–1328. doi: 10.1523/JNEUROSCI.3733-05.2006

Xue, M., Atallah, B. V., and Scanziani, M. (2014). Equalizing excitation–inhibition ratios across visual cortical neurons. Nature 511, 596. doi: 10.1038/nature13321

Yadav, A. C., Ramaswamy, R., and Dhar, D. (2017). General mechanism for the 1/f noise. Phys. Rev. E 96:22215. doi: 10.1103/PhysRevE.96.022215

Yang, D.-P., Zhou, H.-J., and Zhou, C. (2017). Co-emergence of multi-scale cortical activities of irregular firing, oscillations and avalanches achieves cost-efficient information capacity. PLoS Comput. Biol. 13:e1005384. doi: 10.1371/journal.pcbi.1005384

Yang, H., Shew, W. L., Roy, R., and Plenz, D. (2012). Maximal variability of phase synchrony in cortical networks with neuronal avalanches. J. Neurosci. 32, 1061–1072. doi: 10.1523/JNEUROSCI.2771-11.2012

Zhou, F.-M., and Hablitz, J. J. (1998). AMPA receptor-mediated EPSCs in rat neocortical layer II/III interneurons have rapid kinetics. Brain Res. 780, 166–169. doi: 10.1016/S0006-8993(97)01311-5

Zierenberg, J., Wilting, J., and Priesemann, V. (2018). Homeostatic plasticity and external input shape neural network dynamics. Phys. Rev. X 8:31018.

Keywords: excitation-inhibition balance, neuronal avalanches, criticality, cortical dynamics, mean-field theory

Citation: Liang J, Zhou T and Zhou C (2020) Hopf Bifurcation in Mean Field Explains Critical Avalanches in Excitation-Inhibition Balanced Neuronal Networks: A Mechanism for Multiscale Variability. Front. Syst. Neurosci. 14:580011. doi: 10.3389/fnsys.2020.580011

Received: 04 July 2020; Accepted: 02 November 2020;
Published: 26 November 2020.

Edited by:

Shan Yu, Institute of Automation, Chinese Academy of Sciences, China

Reviewed by:

Junsong Wang, Tianjin Medical University, China
Woodrow Shew, University of Arkansas, United States
Pulin Gong, The University of Sydney, Australia

Copyright © 2020 Liang, Zhou and Zhou. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Changsong Zhou, cszhou@hkbu.edu.hk

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.