
ORIGINAL RESEARCH article

Front. Appl. Math. Stat., 15 July 2021
Sec. Dynamical Systems
This article is part of the Research Topic "From Structure to Function in Neuronal Networks: Effects of Adaptation, Time-Delays, and Noise".

Coherence Resonance in Random Erdös-Rényi Neural Networks: Mean-Field Theory

A. Hutt1*, T. Wahl1, N. Voges2, J. Hausmann3, J. Lefebvre4
  • 1Team MIMESIS, INRIA Nancy Grand Est, Strasbourg, France
  • 2ILCB and INT UMR 7289, Aix Marseille Université, Marseille, France
  • 3R&D Department, Hyland Switzerland Sarl, Geneva, Switzerland
  • 4Krembil Research Institute, University Health Network, Toronto, ON, Canada

Additive noise is known to tune the stability of nonlinear systems. Using a network of two randomly connected interacting excitatory and inhibitory neural populations driven by additive noise, we derive a closed mean-field representation that captures the global network dynamics. Building on the spectral properties of Erdös-Rényi networks, mean-field dynamics are obtained via a projection of the network dynamics onto the random network's principal eigenmode. We consider Gaussian zero-mean and Poisson-like noise stimuli to excitatory neurons and show that these noise types induce coherence resonance. Specifically, the stochastic stimulation induces coherent stochastic oscillations in the γ-frequency range at intermediate noise intensity. We further show that this holds for both global stimulation and partial stimulation, i.e. when only a subset of excitatory neurons is stimulated. The mean-field dynamics expose the coherence resonance dynamics in the γ-range by a transition from a stable non-oscillatory equilibrium to an oscillatory equilibrium via a saddle-node bifurcation. We evaluate the transition between the non-coherent and coherent states by power spectra, Spike Field Coherence and information-theoretic measures.

1 Introduction

Synchronization is a well-characterized phenomenon in natural systems [1]. A confluence of experimental studies indicates that synchronization may be a hallmark pattern of self-organization [2–4]. While various mechanisms are possible, synchronization may emerge notably through an enhancement of internal interactions or via changes in external stimulus statistics. A specific type of synchronization can occur due to random external perturbations, leading to noise-induced coherent activity. Such a phenomenon is called coherence resonance (CR) and has been found experimentally in solid-state systems [5], nanotubes [6] and in neural systems [7, 8]. Theoretical descriptions of CR have been developed for single excitable elements [9, 10], for excitable populations [11] and for clustered networks [12].

In general, stimulus-induced synchronization is well known in neural systems [2]. Synchronization has been observed intracranially in the presence of noise between single neurons in specific brain areas [13, 14] and between brain areas [15–17]. The source of these random perturbations is still under debate. In this context, it is interesting to mention that [18] found that the ascending reticular arousal system (ARAS) affects synchronization in the visual cortex. The ARAS provides dynamic inputs to many brain areas [19–21]. It has thus been hypothesized that synchronization in the visual system represents a CR effect triggered by ARAS-mediated drive. This hypothesis has been supported recently by [22], showing in numerical simulations that an intermediate intensity of noise maximizes the interaction in a neural network of Hodgkin-Huxley neurons. Furthermore, recent theoretical work [21] has provided key insights on how human occipital electrocorticographic γ-activity (40–120 Hz), commonly observed with open eyes [21], is closely linked to CR. Coherence resonance has further been associated with states of elevated information processing and transfer [22], which are difficult to assess in the absence of mean-field descriptions. For illustration, Figure 1 (upper panel) shows average network activity for increasing noise intensity D1; one observes a jump from non-oscillatory to oscillatory activity. Moreover, the figure presents very low coherence in the network under study for weak and strong noise intensities D1, whereas high coherence emerges for intermediate noise intensities (bottom panel). In the present work, we explain this noise-induced coherence by a mean-field description.


FIGURE 1. Synchronization dependent on noise intensity as a marker of coherence resonance. The top panel shows the network average of V in Eq. 1 and the bottom panel provides the Spike Field Coherence (SFC) in the Θ (red), α (green), β (orange) and γ (blue) frequency range. For low noise intensity D1/τe there is no SFC, intermediate noise intensity yields strong SFC, while large noise intensities diminish SFC again. To obtain the SFC values, we have integrated the model system over 10^4 time steps while increasing the noise variance according to (Eq. 7). For illustration purposes, the SFC values have been averaged by a sliding window of length Δ(D1/τe)=0.004. Definitions are given in section 2 and parameters are the same as in Figure 4 with q=1.

To better understand the mechanisms underlying CR and its impact on information processing, we consider a simple two-population Erdös-Rényi network of interconnected McCulloch-Pitts neurons. Our goal is to use this model to provide some insight into the emergence of stimulus-induced synchronization in neural systems and its influence on the neural network's information content. The neural network under study has random connections, a simplification inspired by the apparent lack of structure in neural circuits at microscopic scales. Previous studies [23] have shown that such systems are capable of noise-induced CR. Building on these results, we here provide a rigorous derivation of a mean-field equation based on an appropriate eigenmode decomposition to highlight the role of the network's connectivity, more specifically the Erdös-Rényi eigenspectrum, in supporting accurate mean-field representations. We extend previous results by further considering both global (all neurons are stimulated) and partial (some neurons are stimulated) stochastic stimulation and their impact on CR, similar to some previous studies [24–26]. Partial stimulation is both more general and more realistic than the global stimulation considered in most previous studies [11, 23, 27]. We apply our results to both zero-mean Gaussian and Poisson-like stochastic stimuli, and derive the resulting mean-field description. It is demonstrated rigorously that partial stochastic stimulation shifts the system's dynamic topology and promotes CR, compared to global stimulation. We confirm and explore the presence of CR using various statistical measures.

2 Materials and Methods

We first introduce the network model under study, motivate the mean-field description, outline the nonlinear analysis employed and provide details on the statistical evaluation.

2.1 The Network Model

Generically, biological neuronal networks are composed of randomly connected excitatory and inhibitory neurons, which interact through synapses with opposite influence on post-synaptic cells. We assume neural populations of excitatory and inhibitory neurons with N neurons in each population. Excitatory neurons excite each other through the connectivity matrix F and excite inhibitory neurons through the connectivity matrix M. Similarly, inhibitory neurons inhibit each other through F and inhibit excitatory neurons through M. Hence, F and M represent the intra-population and inter-population synaptic connections, respectively. Mathematically, such neural population interactions are described by a 2N-dimensional non-linear dynamical system governing the evolution of the state variable vectors $V, W \in \mathbb{R}^N$,

$$\tau_e\frac{dV}{dt} = -V + F\,S_1[V] - M\,S_2[W] + \mathbf{e}\,I_e + \xi^e(t)$$
$$\tau_i\frac{dW}{dt} = -W + M\,S_1[V] - F\,S_2[W] + \mathbf{e}\,I_i + \xi^i(t). \tag{1}$$

This formulation is reminiscent of many rate-based models discussed previously [28], where it is assumed that neuronal activity is asynchronous and synaptic response functions are of first order. The state variables V and W represent excitatory and inhibitory dendritic currents, respectively. The terms $\xi^{e,i}$ represent the respective stochastic inputs from various sources, such as ion-channel fluctuations, stochastic input from other brain areas or external stimuli not directly accounted for in the model [29]. More specifically, we assume noise $\xi^{e,i} \in \mathbb{R}^N$ and constant inputs $I_{e,i}$ with $\mathbf{e} = (1,\ldots,1)^t$. The connectivity matrices are defined by $F, M \in \mathbb{R}^{N\times N}$, while the nonlinear transfer functions $S_{1,2}[u] \in \mathbb{R}^N$ are given by $(S_1[u])_n = H_0 S(u_n)$ and $(S_2[u])_n = S(u_n)$ with $H_0 > 0$ and a scalar transfer function $S(u) > 0\ \forall u$. Specifically, we will consider the transfer function $S(u) = \Theta(u)$ with the Heaviside function $\Theta(u) = 0$ for $u < 0$ and $\Theta(u) = 1$ for $u \ge 0$. In addition, the synaptic time scales are $\tau_{e,i}$.

The present work considers directed Erdös-Rényi networks (ERN) with connection probability $c = 0.95$, i.e. both neuron populations exhibit intra-population and inter-population non-sparse random connections. Let us assume $F = A F_0$, $M = A M_0$, where A is the non-symmetric adjacency matrix of the ERN for which $(A)_{nm} = 0$ with probability $1-c$ and $(A)_{nm} = 1/(cN)$ with probability c. At first, let $A = S + U$ with the symmetric matrix $S = (A + A^t)/2$, the antisymmetric matrix $U = (A - A^t)/2$ and the eigenvalues $\lambda_A$ and $\lambda_S$ of the matrices A and S, respectively. Then $\Re(\lambda_A) = \lambda_S$, i.e. the real parts of the eigenvalue spectra of the directed (i.e. non-symmetric) and non-directed (i.e. symmetric) random matrices A and S are identical. Moreover, for non-directed ERNs with symmetric adjacency matrix and $N \to \infty$, the edge spectrum contains the maximum eigenvalue $\lambda_1 = 1$ with eigenvector $v_1 = (1,1,\ldots,1)^t$ [30–33] and the bulk spectrum has the maximum eigenvalue

$$\lambda_2 = \frac{2\sigma\sqrt{N}}{cN} = 2\sqrt{\frac{1-c}{cN}}, \tag{2}$$

with the corresponding Bernoulli distribution variance $\sigma^2 = c(1-c)$. It is obvious that $\lambda_2 \ll \lambda_1$ and $\lambda_2 \to 0$ for large mean degree cN. Since $\Re(\lambda_A) = \lambda_S$, the finite-size non-symmetric connectivity matrix F (M) has a maximum eigenvalue $\lambda_1 \approx F_0$ and $\lambda_{n>1} \approx 0$ ($\lambda_1 \approx M_0$, $\lambda_{n>1} \approx 0$). If c decreases, then $\lambda_2$ increases, i.e. the spectral gap decreases, and this approximation does not hold anymore. The Supplementary Appendix illustrates the limits of this approximation in numerical simulations. Figure 2A shows the single maximum eigenvalue $\lambda_1$ of A representing the edge spectrum and the other, very small, eigenvalues of the bulk spectrum. Hence, the matrix F has maximum eigenvalue $F_0$ and the other eigenvalues vanish. The same holds for the matrix $M = M_0 A$ with maximum eigenvalue $M_0$. Figure 2B shows the real and imaginary parts of the eigenvectors. The eigenvectors of the bulk spectrum ($i > 1$) have uniformly distributed elements, in good accordance with the theory of symmetric ER networks [34]. The eigenvector of the edge spectrum is $\Phi_1 = (1,\ldots,1)^t$, see Figure 2C.
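This spectral gap can be checked numerically. The following minimal Python sketch (our illustration, not the authors' code; network size, seed and the printed comparison are illustrative) builds a dense directed ERN adjacency matrix and compares its edge eigenvalue and largest bulk magnitude with the order of magnitude predicted by Eq. 2:

```python
import numpy as np

# Illustrative check of the ERN spectral gap (assumed N and seed).
rng = np.random.default_rng(0)
N, c = 200, 0.95                       # network size and connection probability

# (A)_nm = 1/(cN) with probability c, 0 otherwise (directed, non-symmetric)
A = (rng.random((N, N)) < c) / (c * N)

lam = np.linalg.eigvals(A)
order = np.argsort(-lam.real)          # descending real part
lam1 = lam[order[0]]                   # edge eigenvalue, close to 1
bulk = np.abs(lam[order[1:]]).max()    # largest bulk magnitude, close to 0

print(f"lambda_1 ~ {lam1.real:.3f}, max bulk |lambda| ~ {bulk:.3f}")
print(f"Eq. 2 order of magnitude: {2 * np.sqrt((1 - c) / (c * N)):.3f}")
```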


FIGURE 2. Eigenvalue spectrum of an Erdös-Rényi adjacency matrix A under study and its eigenbasis. (A) The plot shows the eigenvalues in the complex plane, demonstrating a clear spectral gap between the first eigenvalue $\lambda_1$ and the other eigenvalues $\lambda_{n>1}$. (B) The panels show the real (top) and imaginary (bottom) parts of all unit-normalized eigenvectors for illustration. They appear to be random, reflecting the random network topology. (C) The normalized eigenvector $\Phi_1 \approx (1,\ldots,1)/\sqrt{N}$ with maximum eigenvalue $\lambda_1 \approx 1$ plotted in the complex plane together with the eigenvector $\Phi_2$ of the second largest eigenvalue $\lambda_2 = 0.015 + i\,0.0006$. Each dot corresponds to a complex-valued vector entry in the complex plane. This result confirms the choice $\Phi_1 \propto (1,\ldots,1)$ in Eq. 9.

Moreover, we assume that each noise process $(\xi^i)_n = \xi^i_n$ at inhibitory network node n is Gaussian distributed with zero mean and noise intensity $D_2$, and uncorrelated in time,

$$\langle \xi^i_n(t)\,\xi^i_m(\tau)\rangle = 2 D_2\,\delta_{nm}\,\delta(t-\tau).$$

Conversely, each noise process $\xi^e_n$ at excitatory neurons belongs to a certain class $G_m$, $m = 1,\ldots,M$, of M classes [23]. Noise processes in a specific class $G_m$, i.e. $n \in G_m$, share their mean $\bar\xi^e_m$ and variance $D_{1m}$, i.e.

$$\langle \xi^e_k(t)\,\xi^e_l(\tau)\rangle = 2 D_{1m}\,\delta_{kl}\,\delta(t-\tau), \quad k,l \in G_m.$$

In the following, we assume two classes, $M = 2$, with $\bar\xi^e_1 \ge 0$, $D_{11} = D_1$ and $\bar\xi^e_2 = 0$, $D_{12} = 0$, i.e. only a subset of nodes $n \in G_1$ is stimulated. Hence we consider a partial stimulation of $N_1 = |G_1|$ nodes.

In biological neural systems, the input to a neural population is well described by incoming spike trains that induce dendritic currents at synaptic receptors. According to renewal theory, neurons emit Poissonian spike trains, i.e. spike trains whose interspike intervals are exponentially distributed and whose spike counts obey a Poisson distribution [35]. Then incoming spike trains at mean spike rate r induce random responses at excitatory synapses with time constant $\tau_{in}$. This random process $I_{in}(t)$ has the ensemble mean $E[I_{in}] = w_{in} r \tau_{in}$ and ensemble variance $\mathrm{Var}[I_{in}] = w_{in}^2 r \tau_{in}/2$ [36], assuming the synaptic coupling weight $w_{in}$. Since a Poisson distribution converges to a Gaussian distribution for large enough mean, we implement this input current as a Gaussian random process with mean $E[I_{in}]$ and variance $\mathrm{Var}[I_{in}]$, while ensuring the validity of this approximation by a large enough input firing rate. It is important to point out that for Poisson noise, in contrast to the zero-mean Gaussian noise, both mean and variance are proportional to the input firing rate.
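As a small illustration of this approximation, the following sketch draws the Gaussian surrogate of the Poisson-driven synaptic input; weight, rate and time constant are assumed demonstration values, not those of Table 1:

```python
import numpy as np

# Gaussian surrogate of Poisson synaptic input (assumed parameter values).
w_in, r, tau_in = 0.1, 400.0, 0.005      # weight, input rate (Hz), synaptic time (s)

mean_I = w_in * r * tau_in               # E[I_in]   = w_in * r * tau_in
var_I = w_in**2 * r * tau_in / 2.0       # Var[I_in] = w_in^2 * r * tau_in / 2

rng = np.random.default_rng(1)
I_in = mean_I + np.sqrt(var_I) * rng.standard_normal(100_000)
print(I_in.mean(), I_in.var())           # close to 0.2 and 0.01
```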

2.2 Conventional Mean-Field Analysis

To compare mesoscopic neural population dynamics to macroscopic experimental findings, it is commonplace to describe the network activity by the mean population response, i.e. the mean-field dynamics [37–39]. A naive mean-field approach was employed in early neuroscience studies [40–42], in which one blindly computes the mean network activity to obtain

$$\tau_e\frac{dE[V]}{dt} = -E[V] + \mathbf{f}\,S_1[V] - \mathbf{m}\,S_2[W] + I_e$$
$$\tau_i\frac{dE[W]}{dt} = -E[W] + \mathbf{m}\,S_1[V] - \mathbf{f}\,S_2[W] + I_i, \tag{3}$$

with the network average $E[x] = \sum_k x_k/N$ and $(\mathbf{f})_k = \sum_l F_{lk}/N$, $(\mathbf{m})_k = \sum_l M_{lk}/N$, assuming zero-mean external noise with $\sum_k (\xi^{e,i})_k = 0$. In addition, one may assume identical network interactions with $(\mathbf{f})_k = f_0/N = \mathrm{const}$, $(\mathbf{m})_k = m_0/N = \mathrm{const}$ and the simplifying but questionable linearity assumption

$$E[S_{1,2}(x)] = S_{1,2}(E[x]). \tag{4}$$

Combined, these assumptions lead to the mean-field equations

$$\tau_e\frac{dE[V]}{dt} = -E[V] + f_0 S_1[E[V]] - m_0 S_2[E[W]] + I_e$$
$$\tau_i\frac{dE[W]}{dt} = -E[W] + m_0 S_1[E[V]] - f_0 S_2[E[W]] + I_i. \tag{5}$$

In this approximate description, additive noise does not affect the system dynamics. The assumption (Eq. 4) is very strong and typically not valid. A more reasonable ansatz is

$$E[S_{1,2}(x)] = E\left[S_{1,2}(x_0) + \sum_{n=1}^{\infty}\frac{1}{n!}S^{(n)}_{1,2}\,(x-x_0)^n\right] = S_{1,2}(x_0) + \sum_{n=1}^{\infty}\frac{1}{n!}S^{(n)}_{1,2}\,E[(x-x_0)^n] = \mathcal{F}_{1,2}\big(E[x], E[x^2], E[x^3], \ldots\big), \tag{6}$$

with $S^{(n)}_{1,2} = \partial^n S_{1,2}(x)/\partial x^n$ computed at an arbitrary point $x = x_0$ and a function $\mathcal{F}_{1,2}$. Hence the dynamics of the mean field E[V] depends on the higher-order moments $E[V^n]$ via the nonlinear function $E[S_{1,2}(V)]$. This is called the closure problem, which is solvable in specific cases only [43].

Motivated by previous studies on stochastic bifurcations [44–53], in which additive noise may tune the stability close to the bifurcation point, the present work shows how additive noise strongly impacts the nonlinear dynamics of the system for arbitrary noise intensity and away from the bifurcation. Previous ad-hoc studies have already used mean-field approaches [23, 54, 55] that circumvent the closure problem (Eq. 6) through a different mean-field ansatz. These motivating studies left open a more rigorous derivation. This derivation is given in the present work, presenting in more detail its power and its limits of validity.

2.3 Equilibria, Stability and Quasi-Cycles

The dynamic topology of a model differential equation system may be described partially by the number and characteristics of its equilibria. In general, for the non-autonomous differential equation system

$$\dot{z} = Az + N(z) + I(t),$$

with state variable $z \in \mathbb{R}^N$, the driving force $I \in \mathbb{R}^N$, the nonlinear vector $N \in \mathbb{R}^N$ and the matrix $A \in \mathbb{R}^{N\times N}$, it is insightful to consider the equilibria $z_0$ of the corresponding autonomous system with $\dot{z} = 0$, yielding the implicit condition

$$A z_0 = -N(z_0).$$

The stability of an equilibrium z0 is given by the eigenvalue spectrum of the corresponding Jacobian

$$J = A + N_0,$$

where $(N_0)_{ij} = \partial N_i(z)/\partial z_j$ is computed at $z_0$. The eigenvalues $\{\lambda_k\}$ of J can be written as $\lambda_k = a_k + i 2\pi\nu_k$ with the damping $a_k$ and the eigenfrequency $\nu_k$. Asymptotically stable equilibria have $\Re(\lambda_k) < 0$; e.g., stable foci have $a_k < 0$, $\nu_k \ne 0$. Linear response theory tells us that noise-driven linear systems whose deterministic dynamics exhibit a stable focus show quasi-cycles with a spectral power peak close to the eigenfrequency, see e.g. [51, 56, 57]. The smaller the noise intensity, the closer the spectral peak frequency is to the eigenfrequency. Hence, the eigenfrequency $\nu_k$ provides a reasonable estimate of the quasi-cycle spectral peak.
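In practice, damping and eigenfrequency follow directly from the eigenvalues of the Jacobian; the following minimal sketch uses an assumed 2×2 Jacobian that exhibits a stable focus:

```python
import numpy as np

# Damping a_k and eigenfrequency nu_k from an assumed Jacobian J.
J = np.array([[-1.0, -2.0],
              [ 3.0, -1.5]])        # illustrative stable focus

lam = np.linalg.eigvals(J)
a = lam.real                        # damping a_k (negative: asymptotically stable)
nu = lam.imag / (2.0 * np.pi)       # eigenfrequency nu_k (Hz, if time is in seconds)
print(a, nu)                        # stable focus: a_k < 0 and nu_k != 0
```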

2.4 Numerical Simulations

The Langevin equations (Eq. 1) have been integrated over time utilizing the Euler-Maruyama scheme [58]. Table 1 presents the parameters used. In certain cases, the noise variance has been changed over time t according to

$$D_1(t) = D_{\min} + \frac{D_{\max} - D_{\min}}{T}\,t, \tag{7}$$

with the maximum integration time T and the maximum and minimum noise variance values Dmax and Dmin, respectively.
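A minimal Euler-Maruyama sketch of Eq. 1 with the noise ramp of Eq. 7 could look as follows; all parameter values are placeholders rather than the set of Table 1, and global stimulation of the excitatory population (q = 1) is assumed:

```python
import numpy as np

# Hedged Euler-Maruyama sketch of Eq. 1 with the ramp of Eq. 7 (assumed parameters).
rng = np.random.default_rng(2)
N, c = 200, 0.95
F0, M0, H0 = 1.0, 1.5, 1.0
tau_e, tau_i = 0.005, 0.020              # synaptic time scales (s)
Ie, Ii = 0.5, 0.0
dt, T = 1e-4, 1.0
Dmin, Dmax = 0.0, 1e-3                   # ramp limits of D_1(t), Eq. 7
D2 = 1e-4                                # noise intensity at inhibitory neurons

A = (rng.random((N, N)) < c) / (c * N)   # ERN adjacency matrix
F, M = F0 * A, M0 * A

S1 = lambda u: H0 * (u >= 0.0)           # (S1[u])_n = H0 * Theta(u_n)
S2 = lambda u: (u >= 0.0).astype(float)  # (S2[u])_n = Theta(u_n)

V, W = np.full(N, 0.5), np.zeros(N)
for k in range(int(T / dt)):
    D1 = Dmin + (Dmax - Dmin) * (k * dt) / T           # Eq. 7
    xi_e = np.sqrt(2.0 * D1 * dt) * rng.standard_normal(N)
    xi_i = np.sqrt(2.0 * D2 * dt) * rng.standard_normal(N)
    V = V + (dt / tau_e) * (-V + F @ S1(V) - M @ S2(W) + Ie) + xi_e / tau_e
    W = W + (dt / tau_i) * (-W + M @ S1(V) - F @ S2(W) + Ii) + xi_i / tau_i
```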


TABLE 1. Parameter set of model (1).

2.5 Numerical Spectral Data Analysis

Since prominent oscillations of the network mean activity indicate synchronized activity in the population, we have computed the power spectrum of the network mean activity $\bar V(t) = \sum_{n=1}^N V_n(t)/N$ employing the Bartlett-Welch method with an overlap rate of 0.8. To obtain a power spectrum with frequency resolution Δf, the Bartlett-Welch segments were chosen of length 1/Δf, and the time series had a duration of 5 s for the zero-mean Gaussian noise and 8 s for the Poisson noise stimulation.
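A minimal sketch of this spectral estimate with scipy (sampling rate and frequency resolution are assumed values, and a random signal stands in for the network mean):

```python
import numpy as np
from scipy.signal import welch

fs, df = 10_000.0, 1.0                     # assumed sampling rate (Hz) and resolution (Hz)
rng = np.random.default_rng(3)
V_bar = rng.standard_normal(int(5 * fs))   # stand-in for 5 s of the network mean

nperseg = int(fs / df)                     # segment length 1/df
f, Pxx = welch(V_bar, fs=fs, nperseg=nperseg,
               noverlap=int(0.8 * nperseg))  # overlap rate 0.8, as in the text
```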

In addition to the power spectrum, the synchronization between single-neuron spike activity and the dendritic current reflects the degree of coherence in the system. To this end, we have computed the Spike Field Coherence (SFC) [59]. To estimate the SFC, we have chosen a time window of 5 s for zero-mean Gaussian stimulation and 8 s for Poisson stimulation and computed the spike-triggered average and power spectra in these time windows to compute the SFC for each frequency. Then we have averaged the SFC in the Θ (4–8 Hz), α (8–12 Hz), β (12–20 Hz) and γ (25–60 Hz) frequency bands to obtain an average SFC in the corresponding band. This standard measure estimates the coherence between spikes, which occur if $\Theta(V_n(t)) = 1$, and the corresponding dendritic currents $V_n(t)$ at the same cell, averaged over all cells in the excitatory population. Significant differences of SFC at different noise intensities are evaluated by an unpaired Welch t-test with α=0.05.
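One common way to compute the SFC, sketched below with our own helper (a hedged reading of [59], not its reference implementation), is the ratio between the power spectrum of the spike-triggered average and the averaged power spectra of the individual spike-triggered snippets:

```python
import numpy as np
from scipy.signal import periodogram

def spike_field_coherence(lfp, spike_idx, half_win, fs):
    """SFC(f): PSD of the spike-triggered average over the mean snippet PSD."""
    snippets = np.array([lfp[i - half_win:i + half_win]
                         for i in spike_idx
                         if half_win <= i < len(lfp) - half_win])
    sta = snippets.mean(axis=0)                       # spike-triggered average
    f, p_sta = periodogram(sta, fs=fs)
    p_snip = np.mean([periodogram(s, fs=fs)[1] for s in snippets], axis=0)
    return f, p_sta / p_snip                          # values lie in [0, 1]
```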

2.6 Information Measures

Coherence quantifies the degree of mutual behavior of different elements. Interestingly, recent studies of biological neural systems have shown that synchronization and information content are related [60, 61]. For instance, under general anesthesia, asynchronous cortical activity in conscious patients is accompanied by less stored information and more available information, whereas synchronous cortical activity in unconscious patients exhibits more stored information and less available information [19, 20, 62–64]. We are curious how much information is stored and available in the coherence resonance described in the present work. The result may indicate a strong link between coherence and information content. To this end, we compute the amount of stored information in the excitatory population as the predictable information and the amount of available information as the population's entropy, cf. [64].

The predictable information in the excitatory population is computed as the Active Information Storage (AIS) [65, 66] utilizing the Gaussian Copula Mutual Information (GCMI) estimator [67]. Assuming a single time series $V_i(t)$,

$$\mathrm{AIS}_i = MI\big(V_i(t); V^{(k)}\big), \qquad V^{(k)} = \big(V_i(t-\Delta), V_i(t-2\Delta), \ldots, V_i(t-k\Delta)\big), \tag{8}$$

where MI is the mutual information [64, 68], k is the embedding dimension and Δ is the embedding delay. The value $\mathrm{AIS}_i$ describes how much the dendritic current $V_i(t)$ in excitatory neuron i is influenced by its own past. To obtain an estimate of the stored information in the excitatory population, we evaluate the average stored information in the population and its variance

$$\langle\mathrm{AIS}\rangle = \frac{1}{N}\sum_{i=1}^N \mathrm{AIS}_i$$
$$\sigma^2_{\mathrm{AIS}} = \frac{1}{N-1}\sum_{i=1}^N\big(\mathrm{AIS}_i - \langle\mathrm{AIS}\rangle\big)^2,$$

with N=200. Significant AIS differences at different noise intensities are evaluated by an unpaired Welch t-test with α=0.05.

Moreover, we compute the available information in the excitatory population as the entropy $H_i$ of the dendritic current $V_i(t)$ at excitatory neuron i, utilizing the GCMI estimation. Its population average and variance read

$$\langle H\rangle = \frac{1}{N}\sum_{i=1}^N H_i$$
$$\sigma^2_H = \frac{1}{N}\sum_{i=1}^N\big(H_i - \langle H\rangle\big)^2,$$

and entropy differences at different noise intensities are evaluated statistically by an unpaired Welch t-test with α=0.05.

In subsequent sections, we have computed AIS and H for embedding dimensions $k \in [1;60]$ and $\Delta \in \{\Delta t, 2\Delta t, 5\Delta t\}$ with $k\Delta = 60$ and find consistent significance-test results. Specifically, we have chosen $\Delta = \Delta t$ and $k = 1$ in the shown results.
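For the shown case k = 1, the Gaussian-copula estimate reduces to a rank transform to standard-normal marginals followed by the Gaussian mutual-information formula. The following is our own minimal sketch of this idea; it mimics, but is not, the GCMI implementation of [67]:

```python
import numpy as np
from scipy.special import ndtri   # inverse CDF of the standard normal

def copnorm(x):
    """Rank-transform a 1D array to standard-normal marginals."""
    ranks = np.argsort(np.argsort(x)) + 1.0
    return ndtri(ranks / (len(x) + 1.0))

def gc_mi(x, y):
    """Gaussian mutual information (bits) after copula normalization."""
    rho = np.corrcoef(copnorm(x), copnorm(y))[0, 1]
    return -0.5 * np.log2(1.0 - rho**2)

def ais(v, delta=1):
    """AIS_i = MI(V_i(t); V_i(t - delta)) for embedding dimension k = 1 (Eq. 8)."""
    return gc_mi(v[delta:], v[:-delta])
```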

3 Results

This section presents the derivation of the mean-field equations before they are applied to describe the network dynamics for two types of partial stimulation.

3.1 Mean-Field Description

To derive the final equations, we first introduce the idea of a mode projection before deriving the mean-field equations as a projection onto the principal mode. The extension to partial stimuli completes the description.

Mode Decomposition

In the model (1), the system activity $V \in U$ in the space U may be expanded into a mode basis $\{\Phi^e_n\}$, $n = 1,\ldots,N$, $\Phi^e_n \in \mathbb{C}^N$,

$$V = \sum_{n=1}^N a_n \Phi^e_n,$$

with complex mode amplitudes $a_n$ and a biorthogonal basis $\{\Psi^e_n\}$, $\Psi^e_n \in \mathbb{C}^N$, with

$$\Psi^{e\dagger}_k \Phi^e_n = \delta_{kn}, \quad k,n = 1,\ldots,N.$$

Here, $\dagger$ denotes the transposed complex conjugate. The same holds for W with the basis $\{\Phi^i_n\}$, $n = 1,\ldots,N$, $\Phi^i_n \in \mathbb{C}^N$,

$$W = \sum_{n=1}^N b_n \Phi^i_n,$$

with the complex mode amplitudes $b_n$ and the biorthogonal basis $\{\Psi^i_n\}$, $\Psi^i_n \in \mathbb{C}^N$, with

$$\Psi^{i\dagger}_k \Phi^i_n = \delta_{kn}, \quad k,n = 1,\ldots,N.$$

Projecting V, W onto the respective bases $\{\Psi^e_k\}$ and $\{\Psi^i_k\}$, we obtain the amplitude equations

$$\tau_e\frac{da_k}{dt} = -a_k + \Psi^{e\dagger}_k F S_1[V] - \Psi^{e\dagger}_k M S_2[W] + I_e + \Psi^{e\dagger}_k \xi^e(t)$$
$$\tau_i\frac{db_k}{dt} = -b_k + \Psi^{i\dagger}_k M S_1[V] - \Psi^{i\dagger}_k F S_2[W] + I_i + \Psi^{i\dagger}_k \xi^i(t).$$

Now let us assume that $\Psi^e_k, \Phi^e_k$ are left and right eigenvectors of F with eigenvalue $\lambda^e_k$,

$$F\Phi^e_k = \lambda^e_k \Phi^e_k, \qquad \Psi^{e\dagger}_k F = \lambda^e_k \Psi^{e\dagger}_k,$$

and $\Psi^i_k, \Phi^i_k$ are left and right eigenvectors of M with eigenvalue $\lambda^i_k$,

$$M\Phi^i_k = \lambda^i_k \Phi^i_k, \qquad \Psi^{i\dagger}_k M = \lambda^i_k \Psi^{i\dagger}_k.$$

Then

$$\lambda^e_1 = F_0, \quad \Phi^e_1 = \mathbf{e}, \quad \Psi^e_1 = \frac{\mathbf{e}}{N}, \qquad \lambda^e_n \approx 0,\ n = 2,\ldots,N, \tag{9}$$

cf. section 2.1, where we have utilized the bi-orthogonality of the basis. Equivalently,

$$\lambda^i_1 = M_0, \quad \Phi^i_1 = \mathbf{e}, \quad \Psi^i_1 = \frac{\mathbf{e}}{N}, \qquad \lambda^i_n \approx 0,\ n = 2,\ldots,N.$$

We observe that $\Psi^i_1 = \Psi^e_1$ and $\Phi^e_1 = \Phi^i_1$. The vector space U can be decomposed into the complementary subspaces Ƶ, Ƶ⊥ with U = Ƶ ⊕ Ƶ⊥ and $\Psi^e_1, \Psi^i_1 \in$ Ƶ. Then $\Psi^e_{k>1}, \Psi^i_{k>1} \in$ Ƶ⊥. Each vector $\Psi^i_{k>1}$ can be expanded in the basis $\{\Psi^e_{k>1}\}$ of Ƶ⊥ and one obtains

$$\Psi^{i\dagger}_{k>1} F = \sum_{n=2}^N c_n \Psi^{e\dagger}_n F = \sum_{n=2}^N c_n \lambda^e_n \Psi^{e\dagger}_n \approx 0,$$

due to (Eq. 9) and equivalently

$$\Psi^{e\dagger}_{k>1} M = \sum_{n=2}^N c'_n \Psi^{i\dagger}_n M = \sum_{n=2}^N c'_n \lambda^i_n \Psi^{i\dagger}_n \approx 0,$$

with some coefficients $c_n$, $c'_n$. This yields

$$\tau_e\frac{da_1}{dt} = -a_1 + \frac{\lambda^e_1}{N}\mathbf{e}^t S_1[V] - \frac{\lambda^i_1}{N}\mathbf{e}^t S_2[W] + I_e + m^e(t) \tag{10}$$
$$\tau_i\frac{db_1}{dt} = -b_1 + \frac{\lambda^i_1}{N}\mathbf{e}^t S_1[V] - \frac{\lambda^e_1}{N}\mathbf{e}^t S_2[W] + I_i + m^i(t) \tag{11}$$
$$\tau_e\frac{da_k}{dt} = -a_k + \Psi^{e\dagger}_k \xi^e(t), \quad k = 2,\ldots,N \tag{12}$$
$$\tau_i\frac{db_k}{dt} = -b_k + \Psi^{i\dagger}_k \xi^i(t), \quad k = 2,\ldots,N, \tag{13}$$

with $m^{e,i}(t) = \mathbf{e}^t\xi^{e,i}(t)/N$.

The Mean-Field Equations

Equations 12, 13 describe Ornstein-Uhlenbeck processes with solutions

$$a_k(t) = \int_{-\infty}^t e^{-(t-\tau)/\tau_e}\,\Psi^{e\dagger}_k\xi^e(\tau)\,d\tau, \qquad b_k(t) = \int_{-\infty}^t e^{-(t-\tau)/\tau_i}\,\Psi^{i\dagger}_k\xi^i(\tau)\,d\tau, \tag{14}$$

for large times t. In Eqs 10, 11 the terms V, W can be written as

$$V = \sum_{n=1}^N a_n(t)\Phi^e_n = a_1\Phi^e_1 + \sum_{n=2}^N a_n(t)\Phi^e_n, \qquad W = \sum_{n=1}^N b_n(t)\Phi^i_n = b_1\Phi^i_1 + \sum_{n=2}^N b_n(t)\Phi^i_n. \tag{15}$$

Inserting the solutions in Eq. 14 into these expansions leads to

$$\sum_{n=2}^N a_n(t)\Phi^e_n = \int_{-\infty}^t e^{-(t-\tau)/\tau_e}\sum_{n=2}^N \Phi^e_n\Psi^{e\dagger}_n\,\xi^e(\tau)\,d\tau, \qquad \sum_{n=2}^N b_n(t)\Phi^i_n = \int_{-\infty}^t e^{-(t-\tau)/\tau_i}\sum_{n=2}^N \Phi^i_n\Psi^{i\dagger}_n\,\xi^i(\tau)\,d\tau. \tag{16}$$

By virtue of the completeness of the basis, we have

$$\sum_{n=2}^N \Phi^e_n\Psi^{e\dagger}_n = I - \Phi^e_1\Psi^{e\dagger}_1, \qquad \sum_{n=2}^N \Phi^i_n\Psi^{i\dagger}_n = I - \Phi^i_1\Psi^{i\dagger}_1,$$

with the unity matrix $I \in \mathbb{R}^{N\times N}$. Inserting these identities into (Eq. 16) yields

$$\sum_{n=2}^N a_n(t)\Phi^e_n = \int_{-\infty}^t e^{-(t-\tau)/\tau_e}\,\xi^e(\tau)\,d\tau - \int_{-\infty}^t e^{-(t-\tau)/\tau_e}\,\Phi^e_1 m^e(\tau)\,d\tau,$$
$$\sum_{n=2}^N b_n(t)\Phi^i_n = \int_{-\infty}^t e^{-(t-\tau)/\tau_i}\,\xi^i(\tau)\,d\tau - \int_{-\infty}^t e^{-(t-\tau)/\tau_i}\,\Phi^i_1 m^i(\tau)\,d\tau. \tag{17}$$

We define $\eta^{e,i}(t) = \xi^{e,i}(t) - \xi^{e,i}_0$ and $\mathbf{e}^t\eta^{e,i}(t) = N\rho^{e,i}(t)$ with $\rho^{e,i} \sim \mathcal{N}(0, D_{1,2}/N)$ and temporally constant $\xi^{e,i}_0$, i.e. $\rho^{e,i}$ are finite-size fluctuations with variance $D_{1,2}/N$ and $\rho^{e,i} \to 0$ for $N \to \infty$. With the definitions

$$w^{e,i}(t) = \int_{-\infty}^t e^{-(t-\tau)/\tau_{e,i}}\,\eta^{e,i}(\tau)\,d\tau \tag{18}$$
$$s^{e,i}(t) = \tau_{e,i}\big(\xi^{e,i}_0 - \mathbf{e}\,\bar\xi^{e,i}_0\big) - \mathbf{e}\int_{-\infty}^t e^{-(t-\tau)/\tau_{e,i}}\,\rho^{e,i}(\tau)\,d\tau, \tag{19}$$

with $\bar\xi^{e,i}_0 = \sum_{n=1}^N \xi^{e,i}_{0,n}/N$, and inserting Eq. 17 into Eq. 15, we obtain

$$V(t) = a_1(t)\,\mathbf{e} + s^e(t) + w^e(t), \qquad W(t) = b_1(t)\,\mathbf{e} + s^i(t) + w^i(t), \tag{20}$$

and the mean-field equations can then be written as

$$\tau_e\frac{da_1}{dt} = -a_1 + \frac{F_0}{N}\mathbf{e}^t S_1[a_1(t)\mathbf{e} + s^e(t) + w^e(t)] - \frac{M_0}{N}\mathbf{e}^t S_2[b_1(t)\mathbf{e} + s^i(t) + w^i(t)] + I_e + \bar\xi^e_0 + \rho^e(t)$$
$$\tau_i\frac{db_1}{dt} = -b_1 + \frac{M_0}{N}\mathbf{e}^t S_1[a_1(t)\mathbf{e} + s^e(t) + w^e(t)] - \frac{F_0}{N}\mathbf{e}^t S_2[b_1(t)\mathbf{e} + s^i(t) + w^i(t)] + I_i + \bar\xi^i_0 + \rho^i(t). \tag{21}$$

By virtue of the finite-size fluctuations $\rho^{e,i}(t)$, the system's mean field obeys stochastic dynamics.

Equation 14 describes an Ornstein-Uhlenbeck process for each mode k, and thus $w^{e,i}(t)$ describes a multivariate Ornstein-Uhlenbeck process over time. In addition, $w^{e,i}(t)$ is stationary over time and, since all modes k share identical properties, it is stationary over the network. Consequently, the process is ergodic and the stationary probability density function $p(w^{e,i})$ of $w^{e,i}$ can be computed over the network, yielding

$$\frac{1}{N}\mathbf{e}^t S_1[x\mathbf{e} + w] = \frac{1}{N}\sum_{n=1}^N S_1(x + w_n) \approx \int_{-\infty}^{\infty} S_1(x + w)\,p^e(w)\,dw = G_1(x), \tag{22}$$

where the approximation is good for large N. Specifically, for Gaussian zero-mean uncorrelated noise ξe with variance D [69]

$$p^e(w) = \frac{1}{\sqrt{2\pi}\,\sigma}\,e^{-w^2/2\sigma^2}, \qquad \sigma^2 = \frac{D}{\tau_e}.$$

Similarly,

$$\frac{1}{N}\mathbf{e}^t S_2[x\mathbf{e} + w] \approx \int_{-\infty}^{\infty} S_2(x + w)\,p^i(w)\,dw = G_2(x). \tag{23}$$

Moreover, if the mean input is $\xi^{e,i}_0 = \alpha_{e,i}\,\mathbf{e}$ and $N \to \infty$, then $s^{e,i} = 0$ and $\rho^{e,i} = 0$, and consequently the mean-field equations

$$\tau_e\frac{da_1}{dt} = -a_1 + F_0 G_1(a_1) - M_0 G_2(b_1) + I_e + \alpha_e$$
$$\tau_i\frac{db_1}{dt} = -b_1 + M_0 G_1(a_1) - F_0 G_2(b_1) + I_i + \alpha_i \tag{24}$$

obey deterministic dynamics. However, the above formulation still depends implicitly on the additive noise through the convolution of the transfer function with the noise distribution.

Partial Stimuli

Each noise baseline stimulus $(\xi^i)_n = \xi^i_n$ at inhibitory network node n is Gaussian distributed with zero mean and variance $D_2$ (cf. section 2.1). Then $\bar\xi^i_0 = 0$, $s^i(t) = \bar\rho^i(t) \sim \mathcal{N}(0, D_2/\tau_i N)$ and, considering Eq. 18, the corresponding probability density function in Eq. 23 is $p^i(w) = \mathcal{N}(0, D_2/\tau_i)$. Here $\mathcal{N}(0, \sigma^2)$ denotes a normal distribution with zero mean and variance $\sigma^2$. Additionally, stochastic stimuli driving excitatory neurons in class $G_1$ are ergodic (cf. section 2.1). Then the mean and variance of class $G_1$ are

$$\bar\xi^e_1 = \frac{1}{N_1}\sum_{n\in G_1}\xi^e_n, \qquad D_1 = \frac{1}{N_1}\sum_{n\in G_1}\big(\xi^e_n\big)^2. \tag{25}$$

Using Eq. 18 and Eq. 19 and assuming $N \to \infty$, we obtain

$$w^e(t) + s^e(t) = \int_{-\infty}^t e^{-(t-\tau)/\tau_e}\big(\eta^e(\tau) + \Delta\xi\big)\,d\tau,$$

whose probability density function pe(w) is [23].

$$p^e(w) = \sum_{m=1}^2 q_m\,\mathcal{N}\big(\bar\xi^e_m, D_{1m}/\tau_e\big)[w] = q\,\mathcal{N}\big(\bar\xi^e_1, D_1/\tau_e\big)[w] + (1-q)\,\delta(w), \tag{26}$$

with $q = N_1/N$, $q_1 = q$, $q_2 = 1-q$. Here, $\Delta\xi = (1-q, \ldots, 1-q, -q, \ldots, -q)^t\,\bar\xi^e_1$ with $N_1$ entries equal to $1-q$, assuming that the nodes $n = 1,\ldots,N_1$ receive stochastic input. In addition, the constant input in the mean-field equation is $\bar\xi^e_0 = q\,\bar\xi^e_1$.

Then, utilizing Eqs 22, 23 and specifying S as a step function (cf. section 2.1), the mean-field transfer functions in Eq. 24 read

$$G_1(a_1) = H_0\left\{\frac{q}{2}\left[1 - \mathrm{erf}\left(-\frac{a_1}{\sqrt{2 D_1/\tau_e}}\right)\right] + (1-q)\,\Theta(a_1)\right\}, \qquad G_2(b_1) = \frac{1}{2}\left[1 - \mathrm{erf}\left(-\frac{b_1}{\sqrt{2 D_2/\tau_i}}\right)\right]. \tag{27}$$

Here, Θ(·) denotes the Heaviside step function. Figure 3 shows examples of $p^e$ and $G_1$.


FIGURE 3. The probability density function $p^e$ (Eq. 26) and the resulting transfer function $G_1$ (Eq. 27). For $q = 1.0$, $D_1/\tau_e = 0.15$; for $q = 0.5$, $D_1/\tau_e = 0.5$.

Essentially, the mean-field obeys

$$\tau_e\frac{da_1}{dt} = -a_1 + F_0 G_1(a_1) - M_0 G_2(b_1) + I_e + \rho^e(t)$$
$$\tau_i\frac{db_1}{dt} = -b_1 + M_0 G_1(a_1) - F_0 G_2(b_1) + I_i + \rho^i(t), \tag{28}$$

utilizing (Eq. 27).
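To make the structure of Eqs 27, 28 concrete, the following hedged sketch implements the two transfer functions and one explicit Euler step of the deterministic mean-field flow (finite-size fluctuations dropped; all parameter values are assumptions of ours):

```python
import numpy as np
from scipy.special import erf

def G1(a, q, D1, tau_e, H0=1.0):
    """Mean-field transfer function of Eq. 27 for the excitatory population."""
    gauss = 0.5 * (1.0 - erf(-a / np.sqrt(2.0 * D1 / tau_e)))
    return H0 * (q * gauss + (1.0 - q) * (a >= 0.0))

def G2(b, D2, tau_i):
    """Mean-field transfer function of Eq. 27 for the inhibitory population."""
    return 0.5 * (1.0 - erf(-b / np.sqrt(2.0 * D2 / tau_i)))

def mean_field_step(a, b, p, dt):
    """One Euler step of Eq. 28 with rho_e = rho_i = 0 (assumed parameters in p)."""
    da = (-a + p["F0"] * G1(a, p["q"], p["D1"], p["tau_e"])
              - p["M0"] * G2(b, p["D2"], p["tau_i"]) + p["Ie"]) / p["tau_e"]
    db = (-b + p["M0"] * G1(a, p["q"], p["D1"], p["tau_e"])
              - p["F0"] * G2(b, p["D2"], p["tau_i"]) + p["Ii"]) / p["tau_i"]
    return a + dt * da, b + dt * db
```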

3.2 Zero-Mean Gaussian Partial Stimulation

At first, we consider the case of partial noise stimulation with zero network mean, i.e. $\mathbf{e}^t\bar\xi^e = 0$, $s^e(t) \sim \mathcal{N}(0, D_1/\tau_e N_1)$ and $\bar\xi^e_1 = \bar\xi^e_0 = 0$. Then $D_1$ parametrizes the noise intensity only. Figure 4 shows the network evolution of V(t) for increasing noise intensity, cf. Eq. 7. Starting from a high-activity state, increasing the noise intensity yields a phase transition of the system to a network state of lower activity. This occurs for global (q=1.0) and partial stimulation (q=0.8, q=0.6 and q=0.5). Please recall that, for instance, q=0.5 reflects a stimulation where 50% of the network nodes are stimulated. These stimulated network nodes have been chosen randomly from a uniform distribution.


FIGURE 4. Enhanced zero-mean Gaussian noise induces phase transitions in spatiotemporal dynamics. The panels show the network activity V(t) according to Eq. 1 with temporally increasing noise variance $D_1/\tau_e$ for different stimulus ratios q.

Figure 5 shows the respective power spectra of the network mean $\bar V(t)$, which provide insight into the system's synchronization at low and high noise intensity. High noise intensity induces strong oscillations in the γ-frequency band, whereas the low-noise state does not; instead, it shows a decaying low-pass power spectral density, as expected from a non-oscillatory stochastic process.


FIGURE 5. Enhanced noise yields strong power of the global mode V¯(t) in the γ-frequency range. The panels show the power spectra of V¯ for the stimulus ratios q=0.5 (D1/τe=0.35 (black) and D1/τe=0.55 (red)), q=0.6 (D1/τe=0.25 (black) and D1/τe=0.33 (red)), q=0.8 (D1/τe=0.20 (black) and D1/τe=0.25 (red)) and q=1.0 (D1/τe=0.15 (black) and D1/τe=0.20 (red)). Power spectra at lower noise intensities are computed on the respective upper branch of the bistable system.

Stronger power spectral density at a given frequency is the signature of a coherent network, as seen in Figure 5. Since the neurons in our network model emit spikes and exhibit synaptic input currents, noise-induced coherence may be visible in the coherence between spiking and synaptic activity as well. In fact, in Figure 6A one observes a significantly enhanced Spike Field Coherence at high noise intensities for both global and partial stimulation. Hence, in sum, the system exhibits coherence resonance in the sense that strong noise induces coherent oscillations that are absent at low noise intensities.


FIGURE 6. High zero-mean Gaussian noise enhances the Spike Field Coherence in all frequency bands and affects Active Information Storage (AIS) and differential entropy (H) heterogeneously. (A) The differences between high noise intensity (grey) and low noise intensity (black) are significant (p<0.001) for both global and partial stimulation. (B) For global stimulation (q=1.0), high noise intensity induces states of significantly enhanced stored active information (AIS) and available information (H), whereas partial stimulation with q=0.8 diminishes both AIS and H significantly. Results for q=0.6 are not consistent and difficult to interpret. In all panels, vertical bars denote the standard deviation, p<0.001, and parameters are identical to those used in Figure 5.

Coherence resonance is supposed to be linked to information processing in neural systems. Thus we investigate the relationship between stimulus noise intensity and the information in the system. Figure 6B shows how much information is stored in the network (AIS) and how much information is available (H). We observe that significantly more information is stored (AIS) and available (H) at high noise intensities for global stimulation q=1.0, whereas high-noise partial stimulation with q=0.8 diminishes the stored active information and the available information significantly. For sparser stimulation with q=0.6 the findings in the information measures are heterogeneous and no interpretation consistent with the results for larger q is possible.

To understand this noise-induced coherence, we take a closer look at the dynamic topology of the mean-field Eq. 28. Its equilibria (cf. section 2.3) for negligible finite-size fluctuations $\rho^{e,i}(t) \ll 1$ are shown in Figure 7, together with simulated mean-field activity $\bar V(t)$ for illustrative purposes. Low noise intensity induces a bistable regime with a stable node as the upper equilibrium and a focus as the lower equilibrium. For global stimulation (q=1.0), this lower focus is unstable at very low noise intensity and stable at larger noise intensities. Moreover, the lower equilibrium is a stable focus at all noise intensities for partial (q<1.0) stimulation. The center branch is always a saddle. For larger noise intensity, the upper equilibrium branch merges with the center branch via a saddle-node bifurcation and the lower stable focus is preserved as noise is further increased. This finding remains valid for both global (q=1.0) and partial (q<1.0) stimulation, as shown in Figure 7 for q ranging within the interval 0.5≤q≤1.0. One can see that for smaller q (i.e. fewer excitatory neurons are stimulated) the bifurcation point moves to larger noise intensities. Hence, thinning out the stimulation of excitatory neurons increases the noise intensity interval in which bistability occurs. Moreover, we point out that the bifurcation points predicted by the mean-field description and shown in Figure 7 agree very well with the values of $D_1/\tau_e$ in Figure 4 at which the system transitions from the upper to the lower state.
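Such equilibrium branches can be traced numerically. The sketch below sweeps $D_1/\tau_e$ and collects fixed points of Eq. 28 from several initial guesses; all parameter values are assumed, and since the Heaviside term renders the flow non-smooth at $a_1 = 0$, roots near zero should be treated with care:

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.special import erf

# Assumed parameters for tracing the equilibria of Eq. 28.
F0, M0, H0, q = 1.0, 1.5, 1.0, 0.6
tau_e, tau_i, D2, Ie, Ii = 0.005, 0.020, 1e-4, 0.5, 0.0

def rhs(z, D1):
    a, b = z
    g1 = H0 * (q * 0.5 * (1 - erf(-a / np.sqrt(2 * D1 / tau_e))) + (1 - q) * (a >= 0))
    g2 = 0.5 * (1 - erf(-b / np.sqrt(2 * D2 / tau_i)))
    return [-a + F0 * g1 - M0 * g2 + Ie, -b + M0 * g1 - F0 * g2 + Ii]

for D1_tau in np.linspace(0.05, 0.6, 12):          # sweep of D1/tau_e
    roots = set()
    for guess in [(-1.0, -1.0), (0.0, 0.0), (1.0, 1.0)]:
        z, _, ok, _ = fsolve(rhs, guess, args=(D1_tau * tau_e,), full_output=True)
        if ok == 1 and np.allclose(rhs(z, D1_tau * tau_e), 0.0, atol=1e-8):
            roots.add((round(z[0], 3), round(z[1], 3)))
    print(f"D1/tau_e = {D1_tau:.2f}: equilibria {sorted(roots)}")
```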


FIGURE 7. Equilibria and representative time series of the global mode $\bar V(t)$ for the zero-mean partial stimulation. There is bistability and a saddle-node bifurcation from a stable node to a stable focus at enhanced noise intensity. The numbers denote the values of the stimulus ratio q. Solid (dashed) lines mark stable (unstable) states. The time series $\bar V$ results from the time-varying noise intensity according to Eq. 7.

The mean-field solution involves finite-size fluctuations that affect the solution's principal oscillation frequency and magnitude. By construction, these mean-field solutions converge to the network average for increasing network size N. Figure 8 compares the time series of mean-field solutions and network averages for increasing network sizes and affirms the convergence, and thus the validity, of the mean-field description. It is interesting to note that, besides the mean-field dynamics, the network's dynamical properties change with increasing N as well. Figure 8 provides the principal oscillation frequencies of both solutions for the given network sizes: the network speeds up with increasing size and its frequency converges to the mean-field principal frequency, which remains at about the same value. However, we point out that the mean-field solution remains slightly different even for very large N, since it relies on the approximation of negligible connectivity-matrix bulk spectra. Figure 9 affirms this finding by comparing simulation trials of the transitions from the non-oscillatory to the oscillatory coherent state. We observe that the transition values of $D_1/\tau_e$ of the network mean and the mean field are closer to each other for larger network size. The mean-field description (Eq. 28) with (Eq. 27) assumes vanishing finite-size fluctuations, and these are reduced for larger network size N; i.e., the effective noise level (the finite-size fluctuations) is reduced and thus the deterministic mean-field and stochastic network-activity transitions lie closer to each other.


FIGURE 8. Comparison of network average and mean-field solution for different network sizes. The network average (black) and the mean-field solution (red) resemble each other more closely the larger the network size N. This holds for the magnitude and frequency (provided in the panels) of both solutions. The initial value of the mean-field activity has been chosen as the initial value of the network average. Simulations consider zero-mean Gaussian stimulation with q=1 and $D_1/\tau_e = 0.2$.


FIGURE 9. Comparison of transitions in network and mean field for different network sizes. The network average (dashed line) and the mean-field solution (solid line) resemble each other more closely for larger network size N. This is explained by reduced finite-size fluctuations for larger networks. The initial value of the mean-field activity has been chosen as the initial value of the network average. Simulations consider zero-mean Gaussian stimulation with q=1.

The frequency range of oscillations observed for steady states located on the lower branch (see Figure 7) is a consequence of both network connectivity and neuronal properties and is further tuned by additive noise. Figure 10 shows the maximum eigenvalue real part for the upper branch (A) and the lower branch (B, top panel) and the eigenfrequency (cf. section 2.3) of the equilibrium on the lower branch (B, lower panel). We observe that increasing noise intensity slightly decreases the eigenfrequency in the γ frequency range and renders the negative maximum eigenvalue real part more negative. This means that additive noise increases the damping of the system's response to perturbations, including the noise itself. This increased noise-induced damping leads to magnitude changes in the quasi-cycle solutions, which is manifested in the power spectral density distribution. Indeed, the power spectral density distribution widens as noise intensity increases, leading to the spectra seen in Figure 10C. This broad spectral power distribution is the signature of suppressed coherence. As a corollary, our analysis demonstrates that coherent band-limited oscillations emerge for intermediate noise intensities only. This is a known feature of coherence resonance. For additional illustration, Figure 1 shows the typical bell shape of coherence (here Spike Field Coherence) in different frequency bands. We observe that the coherence effect is strongest in the γ frequency range.


FIGURE 10. Eigenvalues at the top and bottom branches in Figure 7 and corresponding power spectra. (A) Maximum eigenvalue of equilibria on the top branch in Figure 7. (B) Maximum real part r of the eigenvalue $r + i2\pi\nu$ (top panel) and the corresponding eigenfrequency ν (bottom panel). The numbers denote the values of the stimulus ratio q in all panels. (C) Power spectra of $\bar V(t)$ about the lower branch for q=0.6 for different noise intensities $D_1/\tau_e$.

3.3 Poisson Partial Stimulation

Synaptic receptors respond to afferent Poisson-distributed input spike trains, whose properties differ substantially from the Gaussian noise processes we considered so far. To generalize our results to more physiological stimulus statistics, we considered a partial Poisson noise stimulation with interdependent mean and variance. Specifically, afferent spike trains at spike rate $r_{in}$ induce random responses at excitatory synapses with time constant $\tau_{in}$ and synaptic weight $w_{in}$. Then

$$s^e(t) = \tau_e\,\Delta\xi + \bar\rho^e(t)$$
$$\bar\xi^e_1 = w_{in}\, r_{in}\, \tau_{in}$$
$$D_1 = w_{in}\,\bar\xi^e_1/2$$
$$\bar\xi^e_0 = q\,\bar\xi^e_1,$$

and finite-size fluctuations $\bar\rho^e(t) \sim \mathcal{N}(0, D_1/N_1)$. Figure 11A illustrates the temporal network activity for low and high stimulus firing rates $r_{in}$. Increasing $r_{in}$ induces a transition from a high-activity to a low-activity state for both global and partial stimulation, similarly to the Gaussian noise case. The high-activity state is non-oscillatory while the low-activity state is oscillatory, with a frequency found in the γ frequency range (Figure 11B). In addition, the low-activity state induced by a high Poisson input rate exhibits a strong Spike Field Coherence, in contrast to the high-activity state (Figure 11C). Moreover, high stimulation noise increases the stored information and the available information for global stimulation with q=1.0, cf. Figure 11D. Information measures for partial stimulation (q=0.6) are heterogeneous and an interpretation of the results for AIS and H is difficult.


FIGURE 11. Poisson noise induces transitions from a non-oscillatory to an oscillatory state for both global and partial stimulation. (A) Network activity $V_n(t)$ for low input firing rate (r=0.04 for q=1.0 and r=0.09 for q=0.6) and high input firing rate (r=0.14 for q=1.0 and r=0.19 for q=0.6). For the low (high) input rate the system evolves about an upper (lower) state. (B) Power spectra of the network mean $\bar V(t)$ showing γ activity for the large input rate. (C) The high input firing rate (grey) induces a state of large Spike Field Coherence compared to the state for low input firing rate (black) for both global and partial stimulation (p<0.01). (D) For global stimulation (q=1.0), high input firing induces a state of significantly enhanced stored active information (AIS) and available information (H). This is not consistent with the results for partial stimulation (q=0.6). Here, p<0.01.

These results can be understood by taking a closer look at the dynamic topology of the system. Figure 12 reveals that, for global stimulation (q=1.0), the system has two unstable equilibria and one stable equilibrium at low input firing rates. The top branch is a stable node, the center branch a saddle and the lower branch an unstable focus. There is a very small noise intensity interval in which the top and bottom branches are both stable. Increasing the Poisson stimulus firing rate leads to a sudden suppression of the high-activity equilibrium through a saddle-node bifurcation. Consequently, the transition observed in Figure 11A is a jump from the stable node on the top bifurcation branch to the stable focus on the bottom branch, similar to the effect shown in Figure 4. For partial stimulation (q=0.6), the lower branch exhibits a stable focus already at much lower input firing rates. The saddle-node bifurcation is delayed, leading to an increased noise intensity interval of bistability. Hence, the system exhibits coherence resonance for Poisson noise as well.


FIGURE 12. Equilibria of the mean field $\bar V(t)$ for the Poisson partial stimulation. For global stimulation q=1.0, the system is always monostable, with three equilibria at low input firing rates and a single equilibrium at large input firing rates. Increasing the input firing rate from low to large values, the system jumps from the upper stationary state (stable node) to a stable focus on the lower stationary state via a saddle-node bifurcation. For partial stimulation q=0.6, the system is monostable with three equilibria at low input firing rates. For larger input rates, the system is bistable and passes a saddle-node bifurcation, inducing a transition from a stable node to a stable focus at enhanced input firing rate r. Solid (dashed) lines mark stable (unstable) states; black and blue lines denote equilibria for global and partial stimulation, respectively. The bifurcation diagram of the mean field $\bar W(t)$ is equivalent.

4 Discussion

This study presents a rigorous derivation of mean-field equations for two nonlinearly coupled non-sparse Erdös-Rényi networks (ERN) that are stimulated by additive noise. This mean-field representation is made possible by spectral separation: the eigenspectrum of ERNs exhibits a large spectral gap between the eigenvalue with the largest real part and the rest of the spectrum. We show that the projection of the network dynamics onto the leading eigenmode represents the mean field. Its dynamics are shaped by the eigenmodes located in the complementary subspace spanned by the non-leading eigenmodes. In our model, the subspace dynamics are governed by the additive noise statistics and obey an Ornstein-Uhlenbeck process.

We extended the mean-field derivation to various types of additive noise, such as global and partial noise stimuli (i.e. when only a fraction of the excitatory neurons is stimulated) and both zero-mean Gaussian and Poisson-like noise. Collectively, our analysis shows that additive noise induces a phase transition from a non-oscillatory state to an oscillatory coherent state. Such noise-induced coherence is known as coherence resonance (CR). This phase transition has been shown to occur not only for Gaussian zero-mean noise but also for Poisson-like noise. To the best of our knowledge, CR has not been reported before for such Poisson-like noise. The general underlying mechanism is a noise-induced multiplicative impact of additive stimulation via the nonlinear coupling of different modes. This multiplicative effect modifies the net transfer function of the network and thus enlarges its dynamical repertoire. This resembles the impact of additive noise in stochastic bifurcations [51, 52, 70, 71].

Embedding into Literature

Our results build on previous studies by the authors [23, 54, 55] to provide a rigorous derivation of the mean-field description, whereas previous work motivated the mean-field reduction heuristically and, e.g., failed to show in detail whether the mean-field equation is the only solution for any given additive stimulus. Several other previous studies have presented mean-field descriptions of stochastically driven systems. For instance, Bressloff [28] has rigorously derived mean-field equations for stochastic neural fields considering, inter alia, finite-size fluctuations by utilizing a master equation and van Kampen's system-size expansion approach. We note here that we also took into account finite-size fluctuations resulting from a non-negligible variance of statistical mean values. Moreover, [28] does not specify the network type, which results in a rather opaque description, whereas we assume an ERN and thus exploit its unique eigenspectrum structure. This leads directly to a mean-field description whose dependence on the stochastic forces is explicit, and it avoids the implicit closure problem known from mean-field theories [43]. This is possible since the ERNs considered share many properties with Ising models, which are known to permit an analytically tractable solution of the closure problem, see e.g. [72].

Moreover, several technical analysis steps in the present work have been applied in previous studies in a similar context. In a work on stochastic neural mean-field theory, Faugeras and colleagues [27] assumed that the system activity fluctuations obey a normal probability distribution and derived an effective nonlinear interaction in their Proposition 2.1, similar to our Eq. 22. Further, the authors showed how the fluctuation correlation function, i.e. the second moment of the system activity, determines the mean-field dynamics. This is in line with our result (Eq. 22) showing how the mean and variance of the additive noise tune the system's stability. However, the authors did not consider in detail the random nature of the system connectivity, whereas we have worked out the interaction of the external stimulation and the ERN. This interaction directly yields the mean field and its dependence on the external stimulus, which is not present in [27]. Moreover, the present work also shows how the mean-field fluctuations affect the mean-field dynamics by deriving the fluctuations' probability density function, which describes all higher moments.

Noise-induced synchronization has been found recently in a system of stochastically driven, linearly coupled FitzHugh-Nagumo neurons by Touboul and colleagues [73]. The authors found a minimum ratio of activated neurons necessary to induce global oscillatory synchronization, i.e. CR in the sense presented in our work. This question has been considered in the present work as well, by asking how the mean-field dynamics, and thus noise-induced synchronization, change when modifying the ratio q of stimulated excitatory neurons while retaining the stimulation of inhibitory neurons. We find that global stimulation, i.e. stimulation of all excitatory neurons, yields a finite critical noise intensity below which the system is bistable and exhibits CR. Partial stimulation shifts this critical noise intensity to larger values and enlarges the bistable parameter space, and thus promotes CR.

Several previous studies of mean-field dynamics in neural systems have applied the master equation formalism [74–76]. This works nicely for completely irregular networks in the asynchronous activity regime and has been applied successfully to neural populations considering biological neuron models [77–80]. However, the analysis of more regular networks would be very difficult to develop with the master equation, since the implicit integration over system states would be more complex. Conversely, our presented approach may accommodate regular structures by a corresponding matrix eigenvalue decomposition.

Finally, we mention the relation to the master stability function [81, 82]. This function describes the stability of identical synchronization of complex networks on a synchronization manifold, and this manifold corresponds to the mean field in our study. Although the master stability function has proven to be powerful, to the best of our knowledge it does not allow one to reveal coherence resonance as the current work does.

Limits and Outlook

The present work proposes to describe mean-field dynamics in a topological network by projection onto the network's eigenmodes. This works well for non-sparse random ERNs with large connection probability. Such a network does not exhibit spatial structure. However, less densely connected ERNs show different dynamics, cf. the Supplementary Appendix. Moreover, biological networks are not purely random but may exhibit distance-dependent synaptic weights [83] or spatial clusters [84]. Our specific analysis applies to networks with a large spectral gap in their eigenspectrum, and it might fail for biological networks with smaller spectral gaps (as shown in the Supplementary Appendix). Future work will attempt to utilize the presented approach to derive mean-field dynamics for heterogeneous networks that exhibit a smaller spectral gap, such as scale-free networks [84].

Moreover, the single-neuron model in the present work assumes simple static threshold firing dynamics (McCulloch-Pitts neuron) while neglecting somatic dynamics as described by Hodgkin-Huxley-type models or the widely used FitzHugh-Nagumo model [11, 73]. Future work will aim at reinforcing the biological relevance of neurons coupled through ERNs. This will be possible by extending the simple transfer function from a step function to sigmoidal shapes for type I or type II neurons [76, 85, 86].

Our results show that noise-induced CR emerges in the γ frequency range. This frequency band is thought to play an important role in visual information processing [13–17]. Experimental studies have shown that the degree of this γ-synchronization in primary cortical areas may be modulated by attention [59, 87–89]. Attention is known to affect ARAS activity [90], and specifically the brain stem as part of the ARAS [91], and the ARAS, in turn, provides input to the cortex [92]. We therefore conclude that it is possible that attention modulates the cortical input activity, i.e. the Poisson firing rate in our model. In this picture, attention-modulated enhanced ARAS activity induces γ coherence and may enhance stored information [93], as shown in Figures 6, 11. Future, more detailed brain models including cortico-thalamic feedback and cortical interactions [21, 57] will provide further evidence on whether coherence resonance is present in visual processing.

Data Availability Statement

The raw data supporting the conclusion of this article will be made available by the authors, without undue reservation.

Author Contributions

AH conceived the work structure, motivated by intensive discussions with JL; AH, TW, NV, and JH contributed different parts of the work and all authors contributed to writing the manuscript.

Conflict of Interest

JH was employed by Hyland Switzerland Sarl.

The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fams.2021.697904/full#supplementary-material

References

1. Pikovsky, A, Rosenblum, M, and Kurths, J. Synchronization: A Universal Concept in Nonlinear Sciences. Cambridge University Press (2001).

2. Singer, W. The Brain as a Self-Organizing System. Eur Arch Psychiatr Neurol Sci (1986) 236:4–9. doi:10.1007/bf00641050


3. Witthaut, D, Wimberger, S, Burioni, R, and Timme, M. Classical Synchronization Indicates Persistent Entanglement in Isolated Quantum Systems. Nat Commun (2017) 8:14829. doi:10.1038/ncomms14829


4. Hutt, A, and Haken, H. Synergetics. New York: Springer-Verlag (2020).

5. Mompo, E, Ruiz-Garcia, M, Carretero, M, Grahn, HT, Zhang, Y, and Bonilla, LL. Coherence Resonance and Stochastic Resonance in an Excitable Semiconductor Superlattice. Phys Rev Lett (2018) 121:086805. doi:10.1103/PhysRevLett.121.086805


6. Lee, CY, Choi, W, Han, J-H, and Strano, MS. Coherence Resonance in a Single-Walled Carbon Nanotube Ion Channel. Science (2010) 329:1320–4. doi:10.1126/science.1193383


7. Gu, H, Yang, M, Li, L, Liu, Z, and Ren, W. Experimental Observation of the Stochastic Bursting Caused by Coherence Resonance in a Neural Pacemaker. Neuroreport (2002) 13:1657–60. doi:10.1097/00001756-200209160-00018


8. Ratas, I, and Pyragas, K. Noise-induced Macroscopic Oscillations in a Network of Synaptically Coupled Quadratic Integrate-And-Fire Neurons. Phys Rev E (2019) 100:052211. doi:10.1103/PhysRevE.100.052211


9. Pikovsky, AS, and Kurths, J. Coherence Resonance in a Noise-Driven Excitable System. Phys Rev Lett (1997) 78:775–8. doi:10.1103/physrevlett.78.775


10. Gang, H, Ditzinger, T, Ning, CZ, and Haken, H. Stochastic Resonance without External Periodic Force. Phys Rev Lett (1993) 71:807–10. doi:10.1103/physrevlett.71.807


11. Baspinar, E, Schüler, L, Olmi, S, and Zakharova, A. Coherence Resonance in Neuronal Populations: Mean-Field versus Network Model. Submitted (2020).

12. Tönjes, R, Fiore, CE, and Pereira, T. Coherence Resonance in Influencer Networks. Nat Commun (2021) 12:72. doi:10.1038/s41467-020-20441-4


13. Singer, W, and Gray, CM. Visual Feature Integration and the Temporal Correlation Hypothesis. Annu Rev Neurosci (1995) 18:555–86. doi:10.1146/annurev.ne.18.030195.003011


14. Eckhorn, R, Bauer, R, Jordan, W, Brosch, M, Kruse, W, Munk, M, et al. Coherent Oscillations: A Mechanism of Feature Linking in the Visual Cortex? Biol Cybern (1988) 60:121–30. doi:10.1007/bf00202899


15. Castelo-Branco, M, Neuenschwander, S, and Singer, W. Synchronization of Visual Responses between the Cortex, Lateral Geniculate Nucleus, and Retina in the Anesthetized Cat. J Neurosci (1998) 18:6395–410. doi:10.1523/jneurosci.18-16-06395.1998


16. Nelson, JI, Salin, PA, Munk, MH-J, Arzi, M, and Bullier, J. Spatial and Temporal Coherence in Cortico-Cortical Connections: a Cross-Correlation Study in Areas 17 and 18 in the Cat. Vis Neurosci (1992) 9:21–37. doi:10.1017/s0952523800006349


17. Bressler, SL. Interareal Synchronization in the Visual Cortex. Behav Brain Res (1996) 76:37–49. doi:10.1016/0166-4328(95)00187-5


18. Munk, MHJ, Roelfsema, PR, Konig, P, Engel, AK, and Singer, W. Role of Reticular Activation in the Modulation of Intracortical Synchronization. Science (1996) 272:271–4. doi:10.1126/science.272.5259.271


19. Hutt, A, Lefebvre, J, Hight, D, and Sleigh, J. Suppression of Underlying Neuronal Fluctuations Mediates EEG Slowing during General Anaesthesia. Neuroimage (2018) 179:414–28. doi:10.1016/j.neuroimage.2018.06.043


20. Hutt, A. Cortico-thalamic Circuit Model for Bottom-Up and Top-Down Mechanisms in General Anesthesia Involving the Reticular Activating System. Arch Neurosci (2019) 6:e95498. doi:10.5812/ans.95498


21. Hutt, A, and Lefebvre, J. Arousal Fluctuations Govern Oscillatory Transitions between Dominant γ- and α Occipital Activity during Eyes Open/Closed Conditions. Brain Topography (2021), in press.


22. Pisarchik, AN, Maksimenko, VA, Andreev, AV, Frolov, NS, Makarov, VV, Zhuravlev, MO, et al. Coherent Resonance in the Distributed Cortical Network during Sensory Information Processing. Sci Rep (2019) 9:18325. doi:10.1038/s41598-019-54577-1


23. Hutt, A, Lefebvre, J, Hight, D, and Kaiser, HA. Phase Coherence Induced by Additive Gaussian and Non-gaussian Noise in Excitable Networks with Application to Burst Suppression-like Brain Signals. Front Appl Math Stat (2020) 5:69. doi:10.3389/fams.2019.00069


24. Chacron, MJ, Longtin, A, and Maler, L. The Effects of Spontaneous Activity, Background Noise, and the Stimulus Ensemble on Information Transfer in Neurons. Netw Comput Neural Syst (2003) 14:803–24. doi:10.1088/0954-898x_14_4_010

25. Chacron, MJ, Lindner, B, and Longtin, A. Noise Shaping by Interval Correlations Increases Information Transfer. Phys Rev Lett (2004) 93:059904. doi:10.1103/physrevlett.93.059904

26. Chacron, MJ, Doiron, B, Maler, L, Longtin, A, and Bastian, J. Non-classical Receptive Field Mediates Switch in a Sensory Neuron's Frequency Tuning. Nature (2003) 423:77–81. doi:10.1038/nature01590

27. Faugeras, OD, Touboul, JD, and Cessac, B. A Constructive Mean-Field Analysis of Multi-Population Neural Networks with Random Synaptic Weights and Stochastic Inputs. Front Comput Neurosci (2009) 3:1. doi:10.3389/neuro.10.001.2009

28. Bressloff, PC. Stochastic Neural Field Theory and the System Size Expansion. SIAM J Appl Math (2009) 70:1488–521.

29. Terney, D, Chaieb, L, Moliadze, V, Antal, A, and Paulus, W. Increasing Human Brain Excitability by Transcranial High-Frequency Random Noise Stimulation. J Neurosci (2008) 28:14147–55. doi:10.1523/jneurosci.4248-08.2008

30. Erdős, L, Knowles, A, Yau, H-T, and Yin, J. Spectral Statistics of Erdős-Rényi Graphs I: Local Semicircle Law. Ann Probab (2013) 41:2279–375. doi:10.1214/11-AOP734

31. Ding, X, and Jiang, T. Spectral Distributions of Adjacency and Laplacian Matrices of Random Graphs. Ann Appl Probab (2010) 20:2086–117. doi:10.1214/10-aap677

32. Kadavankandy, A. Spectral Analysis of Random Graphs with Application to Clustering and Sampling. Ph.D. Thesis. Nice, France: Université Côte d’Azur (2017). NNT: 2017AZUR4059.

33. Füredi, Z, and Komlós, J. The Eigenvalues of Random Symmetric Matrices. Combinatorica (1981) 1:233–41. doi:10.1007/bf02579329

34. O'Rourke, S, Vu, V, and Wang, K. Eigenvectors of Random Matrices: A Survey. J Comb Theor Ser A (2016) 144:361–442. doi:10.1016/j.jcta.2016.06.008

35. Koch, C. Biophysics of Computation. Oxford: Oxford University Press (1999).

36. Ross, S. Stochastic Processes (Probability and Mathematical Statistics). Wiley (1982).

37. Wright, JJ, and Kydd, RR. The Electroencephalogram and Cortical Neural Networks. Netw Comput Neural Syst (1992) 3:341–62. doi:10.1088/0954-898x_3_3_006

38. Nunez, PL. Toward a Quantitative Description of Large-Scale Neocortical Dynamic Function and EEG. Behav Brain Sci (2000) 23:371–98. doi:10.1017/s0140525x00003253

39. Nunez, P, and Srinivasan, R. Electric Fields of the Brain: The Neurophysics of EEG. New York - Oxford: Oxford University Press (2006).

40. Wilson, HR, and Cowan, JD. Excitatory and Inhibitory Interactions in Localized Populations of Model Neurons. Biophysical J (1972) 12:1–24. doi:10.1016/s0006-3495(72)86068-5

41. Gerstner, W, and Kistler, W. Spiking Neuron Models. Cambridge: Cambridge University Press (2002).

42. Bressloff, PC, and Coombes, S. Physics of the Extended Neuron. Int J Mod Phys B (1997) 11:2343–92. doi:10.1142/s0217979297001209

43. Kuehn, C. Moment Closure-A Brief Review. In: Schöll E, Klapp S, and Hövel P, editors. Control of Self-Organizing Nonlinear Systems. Heidelberg: Springer (2016). p. 253–71. doi:10.1007/978-3-319-28028-8_13

44. Sri Namachchivaya, N. Stochastic Bifurcation. Appl Math Comput (1990) 39:37s–95s. doi:10.1016/0096-3003(90)90003-L

45. Berglund, N, and Gentz, B. Geometric Singular Perturbation Theory for Stochastic Differential Equations. J Differential Equations (2003) 191:1–54. doi:10.1016/s0022-0396(03)00020-2

46. Blömker, D, Hairer, M, and Pavliotis, GA. Modulation Equations: Stochastic Bifurcation in Large Domains. Commun Math Phys (2005) 258:479–512. doi:10.1007/s00220-005-1368-8

47. Boxler, P. A Stochastic Version of Center Manifold Theory. Probab Th Rel Fields (1989) 83:509–45. doi:10.1007/bf01845701

48. Hutt, A, and Lefebvre, J. Stochastic Center Manifold Analysis in Scalar Nonlinear Systems Involving Distributed Delays and Additive Noise. Markov Proc Rel Fields (2016) 22:555–72.

49. Lefebvre, J, Hutt, A, LeBlanc, VG, and Longtin, A. Reduced Dynamics for Delayed Systems with Harmonic or Stochastic Forcing. Chaos (2012) 22:043121. doi:10.1063/1.4760250

50. Hutt, A. Additive Noise May Change the Stability of Nonlinear Systems. Europhys Lett (2008) 84:34003. doi:10.1209/0295-5075/84/34003

51. Hutt, A, Longtin, A, and Schimansky-Geier, L. Additive Noise-Induced Turing Transitions in Spatial Systems with Application to Neural fields and the Swift-Hohenberg Equation. Physica D: Nonlinear Phenomena (2008) 237:755–73. doi:10.1016/j.physd.2007.10.013

52. Hutt, A, Longtin, A, and Schimansky-Geier, L. Additive Global Noise Delays Turing Bifurcations. Phys Rev Lett (2007) 98:230601. doi:10.1103/physrevlett.98.230601

53. Hutt, A, and Lefebvre, J. Additive Noise Tunes the Self-Organization in Complex Systems. In: Hutt A, and Haken H, editors. Synergetics, Encyclopedia of Complexity and Systems Science Series. New York: Springer (2020). p. 183–95. doi:10.1007/978-1-0716-0421-2_696

54. Lefebvre, J, Hutt, A, Knebel, J-F, Whittingstall, K, and Murray, MM. Stimulus Statistics Shape Oscillations in Nonlinear Recurrent Neural Networks. J Neurosci (2015) 35:2895–903. doi:10.1523/jneurosci.3609-14.2015

55. Hutt, A, Mierau, A, and Lefebvre, J. Dynamic Control of Synchronous Activity in Networks of Spiking Neurons. PLoS One (2016) 11:e0161488. doi:10.1371/journal.pone.0161488

56. Hutt, A, Sutherland, C, and Longtin, A. Driving Neural Oscillations with Correlated Spatial Input and Topographic Feedback. Phys Rev E (2008) 78:021911. doi:10.1103/physreve.78.021911

57. Hashemi, M, Hutt, A, and Sleigh, J. How the Cortico-Thalamic Feedback Affects the EEG Power Spectrum over Frontal and Occipital Regions during Propofol-Induced Sedation. J Comput Neurosci (2015) 39:155–79. doi:10.1007/s10827-015-0569-1

58. Kloeden, PE, and Platen, E. Numerical Solution of Stochastic Differential Equations. Heidelberg: Springer-Verlag (1992).

59. Fries, P, Reynolds, J, Rorie, A, and Desimone, R. Modulation of Oscillatory Neuronal Synchronization by Selective Visual Attention. Science (2001) 291:1560–3. doi:10.1126/science.1055465

60. Tononi, G. An Information Integration Theory of Consciousness. BMC Neurosci (2004) 5:42. doi:10.1186/1471-2202-5-42

61. Alkire, MT, Hudetz, AG, and Tononi, G. Consciousness and Anesthesia. Science (2008) 322:876–80. doi:10.1126/science.1149213

62. Lee, M, Sanders, RD, Yeom, S-K, Won, D-O, Seo, K-S, Kim, HJ, et al. Network Properties in Transitions of Consciousness during Propofol-Induced Sedation. Sci Rep (2017) 7:16791. doi:10.1038/s41598-017-15082-5

63. Massimini, M, Ferrarelli, F, Huber, R, Esser, SK, Singh, H, and Tononi, G. Breakdown of Cortical Effective Connectivity during Sleep. Science (2005) 309:2228–32. doi:10.1126/science.1117256

64. Wollstadt, P, Sellers, KK, Rudelt, L, Priesemann, V, Hutt, A, Fröhlich, F, et al. Breakdown of Local Information Processing May Underlie Isoflurane Anesthesia Effects. Plos Comput Biol (2017) 13:e1005511. doi:10.1371/journal.pcbi.1005511

65. Lizier, JT, Prokopenko, M, and Zomaya, AY. Local Measures of Information Storage in Complex Distributed Computation. Inf Sci (2012) 208:39–54. doi:10.1016/j.ins.2012.04.016

66. Wibral, M, Lizier, JT, Vögler, S, Priesemann, V, and Galuske, R. Local Active Information Storage as a Tool to Understand Distributed Neural Information Processing. Front Neuroinform (2014) 8:1. doi:10.3389/fninf.2014.00001

67. Ince, RAA, Giordano, BL, Kayser, C, Rousselet, GA, Gross, J, and Schyns, PG. A Statistical Framework for Neuroimaging Data Analysis Based on Mutual Information Estimated via a Gaussian Copula. Hum Brain Mapp (2017) 38:1541–73. doi:10.1002/hbm.23471

68. Wibral, M, Pampu, N, Priesemann, V, Siebenhühner, F, Seiwert, H, Lindner, RV, et al. Measuring Information-Transfer Delays. PLoS One (2013) 8:e55809. doi:10.1371/journal.pone.0055809

69. Risken, H. The Fokker-Planck Equation — Methods of Solution and Applications. Berlin: Springer (1989).

70. Arnold, L. Random Dynamical Systems. Berlin: Springer-Verlag (1998).

71. Xu, C, and Roberts, AJ. On the Low-Dimensional Modelling of Stratonovich Stochastic Differential Equations. Physica A: Statistical Mechanics and its Applications (1996) 225:62–80. doi:10.1016/0378-4371(95)00387-8

72. Derrida, B, Gardner, E, and Zippelius, A. An Exactly Solvable Asymmetric Neural Network Model. Europhys Lett (1987) 4:187. doi:10.1209/0295-5075/4/2/007

73. Touboul, JD, Piette, C, Venance, L, and Ermentrout, GB. Noise-Induced Synchronization and Antiresonance in Interacting Excitable Systems: Applications to Deep Brain Stimulation in Parkinson's Disease. Phys Rev X (2020) 10:011073. doi:10.1103/PhysRevX.10.011073

74. El Boustani, S, and Destexhe, A. A Master Equation Formalism for Macroscopic Modeling of Asynchronous Irregular Activity States. Neural Comput (2009) 21:46–100. doi:10.1162/neco.2009.02-08-710

75. Soula, H, and Chow, CC. Stochastic Dynamics of a Finite-Size Spiking Neural Network. Neural Comput (2007) 19:3262–92. doi:10.1162/neco.2007.19.12.3262

76. Montbrió, E, Pazó, D, and Roxin, A. Macroscopic Description for Networks of Spiking Neurons. Phys Rev X (2015) 5:021028. doi:10.1103/physrevx.5.021028

77. Brunel, N, and Hakim, V. Fast Global Oscillations in Networks of Integrate-And-Fire Neurons with Low Firing Rates. Neural Comput (1999) 11:1621–71. doi:10.1162/089976699300016179

78. Roxin, A, Brunel, N, and Hansel, D. Rate Models with Delays and the Dynamics of Large Networks of Spiking Neurons. Prog Theor Phys Suppl (2006) 161:68–85. doi:10.1143/ptps.161.68

79. Fourcaud, N, and Brunel, N. Dynamics of the Firing Probability of Noisy Integrate-And-Fire Neurons. Neural Comput (2002) 14:2057–110. doi:10.1162/089976602320264015

80. di Volo, M, and Torcini, A. Transition from Asynchronous to Oscillatory Dynamics in Balanced Spiking Networks with Instantaneous Synapses. Phys Rev Lett (2018) 121:128301. doi:10.1103/physrevlett.121.128301

81. Arenas, A, Díaz-Guilera, A, Kurths, J, Moreno, Y, and Zhou, C. Synchronization in Complex Networks. Phys Rep (2008) 469:93–153. doi:10.1016/j.physrep.2008.09.002

82. Della Rossa, F, and DeLellis, P. Stochastic Master Stability Function for Noisy Complex Networks. Phys Rev E (2020) 101:052211. doi:10.1103/PhysRevE.101.052211

83. Hellwig, B. A Quantitative Analysis of the Local Connectivity between Pyramidal Neurons in Layers 2/3 of the Rat Visual Cortex. Biol Cybern (2000) 82:111–21. doi:10.1007/pl00007964

84. Yan, G, Martinez, ND, and Liu, Y-Y. Degree Heterogeneity and Stability of Ecological Networks. J R Soc Interf (2017) 14:20170189. doi:10.1098/rsif.2017.0189

85. Hutt, A, and Buhry, L. Study of GABAergic Extra-synaptic Tonic Inhibition in Single Neurons and Neural Populations by Traversing Neural Scales: Application to Propofol-Induced Anaesthesia. J Comput Neurosci (2014) 37:417–37. doi:10.1007/s10827-014-0512-x

86. Brunel, N. Dynamics of Sparsely Connected Networks of Excitatory and Inhibitory Spiking Neurons. J Comput Neurosci (2000) 8:183–208. doi:10.1023/a:1008925309027

87. Steinmetz, PN, Roy, A, Fitzgerald, PJ, Hsiao, SS, Johnson, KO, and Niebur, E. Attention Modulates Synchronized Neuronal Firing in Primate Somatosensory Cortex. Nature (2000) 404:187–90. doi:10.1038/35004588

88. Coull, JT. Neural Correlates of Attention and Arousal: Insights from Electrophysiology, Functional Neuroimaging and Psychopharmacology. Prog Neurobiol (1998) 55:343–61. doi:10.1016/s0301-0082(98)00011-2

89. Lakatos, P, Szilágyi, N, Pincze, Z, Rajkai, C, Ulbert, I, and Karmos, G. Attention and Arousal Related Modulation of Spontaneous Gamma-Activity in the Auditory Cortex of the Cat. Cogn Brain Res (2004) 19:1–9. doi:10.1016/j.cogbrainres.2003.10.023

90. Kinomura, S, Larsson, J, Gulyás, B, and Roland, PE. Activation by Attention of the Human Reticular Formation and Thalamic Intralaminar Nuclei. Science (1996) 271:512–5. doi:10.1126/science.271.5248.512

91. Galbraith, GC, Olfman, DM, and Huffman, TM. Selective Attention Affects Human Brain Stem Frequency-Following Response. Neuroreport (2003) 14:735–8. doi:10.1097/00001756-200304150-00015

92. Koval’zon, V. Ascending Reticular Activating System of the Brain. Transl Neurosci Clin (2016) 2:275–85. doi:10.18679/CN11-6030/R.2016.034

93. Serences, JT. Neural Mechanisms of Information Storage in Visual Short-Term Memory. Vis Res (2016) 128:53–67. doi:10.1016/j.visres.2016.09.010

Keywords: coherence resonance, phase transition, stochastic process, excitable system, mean-field, random networks

Citation: Hutt A, Wahl T, Voges N, Hausmann J and Lefebvre J (2021) Coherence Resonance in Random Erdös-Rényi Neural Networks: Mean-Field Theory. Front. Appl. Math. Stat. 7:697904. doi: 10.3389/fams.2021.697904

Received: 20 April 2021; Accepted: 17 June 2021;
Published: 15 July 2021.

Edited by:

Alessandro Torcini, Université de Cergy-Pontoise, France

Reviewed by:

Matteo Di Volo, Université de Cergy-Pontoise, France
Miguel Pineda, University College London, United Kingdom

Copyright © 2021 Hutt, Wahl, Voges, Hausmann and Lefebvre. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: A. Hutt, axel.hutt@inria.fr

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.