ORIGINAL RESEARCH article

Front. Comput. Neurosci., 05 January 2021
This article is part of the Research Topic "Synchronization, Swarming and Emergent Behaviors in Complex Networks and Neuroscience".

On the Validity of Neural Mass Models

Nicolás Deschle1,2*, Juan Ignacio Gossn3, Prejaas Tewarie4,5, Björn Schelter2 and Andreas Daffertshofer1*
  • 1Faculty of Behavioural and Movement Sciences, Amsterdam Movement Sciences & Institute for Brain and Behavior Amsterdam, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
  • 2Institute for Complex Systems and Mathematical Biology, University of Aberdeen, King's College, Aberdeen, United Kingdom
  • 3Consejo Nacional de Investigaciones Científicas y Técnicas, Instituto de Astronomía y Física del Espacio, Universidad de Buenos Aires, Ciudad Universitaria, Buenos Aires, Argentina
  • 4Sir Peter Mansfield Imaging Centre, School of Physics and Astronomy, University of Nottingham, Nottingham, United Kingdom
  • 5Department of Clinical Neurophysiology and MEG Center, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam Neuroscience, Amsterdam, Netherlands

Modeling the dynamics of neural masses is a common approach in the study of neural populations. Various models have proven useful to describe a plenitude of empirical observations including self-sustained local oscillations and patterns of distant synchronization. We discuss the extent to which mass models really resemble the mean dynamics of a neural population. In particular, we question the validity of neural mass models if the population under study comprises a mixture of excitatory and inhibitory neurons that are densely (inter-)connected. Starting from a network of noisy leaky integrate-and-fire neurons, we formulated two different population dynamics that both fall into the category of seminal Freeman neural mass models. The derivations contained several mean-field assumptions and time scale separation(s) between membrane and synapse dynamics. Our comparison of these neural mass models with the averaged dynamics of the population reveals bounds in the fraction of excitatory/inhibitory neurons as well as in the overall network degree for a mass model to provide adequate estimates. For substantial parameter ranges, our models fail to properly mimic the neural network's dynamics, be that in de-synchronized or in (high-frequency) synchronized states. Only around the onset of low-frequency synchronization do our models provide proper estimates of the mean potential dynamics. While this shows their potential for, e.g., studying resting-state dynamics obtained by encephalography with focus on the transition region, we must accept that predicting the more general dynamic outcome of a neural network via its mass dynamics requires great care.

Introduction

Over the years, neural mass models have profoundly contributed to our understanding of the meso- and macroscopic dynamics of populations of neurons. This is particularly true when it comes to the oscillatory behavior of mean post-synaptic potentials and firing rates. Central there is the notion of brain rhythms arguably resembling (episodic) local and distant synchronization of neural oscillators. Corresponding theoretical studies date back as far as the middle of the last century (Beurle, 1956; Griffith, 1963, 1965), though it was Walter Freeman who coined the notion of neural masses (Freeman, 1975). Building neural mass models typically relies on phenomenological insights, and one "prescribes" the evolution of the neural activity over time (Deco et al., 2008). Yet, as said, these models strongly contributed to advancing our understanding of brain rhythms as they succeeded in mimicking signals recorded especially via magneto- and electroencephalography (Freeman, 1987; Kozma and Freeman, 2016).

Freeman's K-sets are based on a hierarchy of interacting sets of neural populations or masses (Freeman, 1975). These masses are composed of non-interacting, identical neurons. A mass without internal interactions is called a K0 set, for which the original Freeman model applies. In fact, the absence of interactions within the mass allows for great algebraic simplicity: the mass dynamics can be cast in the form of a linear second-order ordinary differential equation,

\[ \left[\frac{d}{dt}+\alpha\right]\left[\frac{d}{dt}+\beta\right]V = \alpha\beta\, J, \tag{1} \]

where V = V(t) denotes the mean potential of all the somas over the neural population, J = J(t) represents some (common) input into the population, and α and β are (inverse) time constants specifying the rise and decay of the mean potential. The input J is usually a continuous function of time t, but if J is meant to resemble point processes, e.g., given via microscopic action potentials or spikes, continuity may be introduced (albeit heuristically) using sigmoidal activation functions (Marreiros et al., 2008).
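
To make the model concrete, here is a minimal numerical sketch of (1): expanding the operator product gives V″ + (α + β)V′ + αβV = αβJ, which forward Euler integrates directly. The values of α, β, and the sigmoid gain are arbitrary placeholders, not fitted parameters.

```python
import numpy as np

def simulate_freeman(J, alpha=100.0, beta=50.0, dt=1e-4):
    """Integrate [d/dt + alpha][d/dt + beta] V = alpha*beta*J, i.e., (1),
    in its expanded form V'' + (alpha+beta) V' + alpha*beta V = alpha*beta J."""
    V, dV = 0.0, 0.0
    out = np.empty_like(J)
    for i, j in enumerate(J):
        ddV = alpha * beta * (j - V) - (alpha + beta) * dV
        dV += ddV * dt
        V += dV * dt
        out[i] = V
    return out

def sigmoid(x, gain=1.0):
    """Heuristic sigmoidal activation turning spike-like input into a
    continuous, bounded drive (cf. Marreiros et al., 2008)."""
    return 1.0 / (1.0 + np.exp(-gain * x))

t = np.arange(0.0, 0.5, 1e-4)                   # 0.5 s at dt = 0.1 ms
J = sigmoid(5.0 * np.sin(2 * np.pi * 10 * t))   # smoothed 10-Hz drive
V = simulate_freeman(J)
```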

Obviously, the dynamics (1) is a gross approximation of most (non-linear) neural dynamics at the microscopic level. Nonetheless, the model may provide reasonable approximations of selected features of a neural population's mean activity. However, apart from the weakness (if not absence) of internal interactions, this requires the population to contain sufficiently many neurons1 and/or symmetries. Symmetries typically imply an adequate amount of homogeneity. Even under these assumptions, detailed derivations of mass models like (1) from microscopic neural dynamics are rare (Stefanescu and Jirsa, 2008a; Byrne et al., 2017). As a consequence, it remains difficult to judge the degree to which the outcome of neural mass models really agrees with the mean activity of a "real" neural network. This is unfortunate because, given their mathematical ease, neural mass models appear to be ideal candidates for estimating parameter dependencies of network activation and predicting its dynamical outcome. For the Freeman model (1), Rodrigues and co-workers recently presented a mapping between a microscopic conductance-based model and the macroscopic mass dynamics (Rodrigues et al., 2010). They imposed strong homogeneity assumptions on a population of interconnected leaky integrate-and-fire (LIF) neurons. We adopted this approach but complemented it with an alternative, since some steps in Rodrigues et al. (2010) lack rigor.

To test the quality of these two mass models, we simulated an externally driven, finite-size LIF network. We used its overall spiking activity and the external drive as input to the mass models and compared the resulting time series with those of the average LIF potentials using different measures. Following previous studies, we set these neural masses to contain neurons of mixed types, i.e., inhibitory and excitatory ones (Lopes da Silva et al., 1974; Jansen and Rit, 1995; Wendling et al., 2000; David and Friston, 2003; Stefanescu and Jirsa, 2008b; Ponten et al., 2010). Given the relevance of neural mass models in the study of brain rhythms, we first investigated the spectral distributions of the potentials and complemented this by correlating the average network activity with that of the two neural mass models. This procedure allowed for a systematic assessment of the influence of population parameters on the quality of the mass model approximation(s). Here, we particularly focus on the (im-)balance between inhibitory and excitatory neurons and the degree of their connectivity. As will be shown, the quality of the approximation(s) is limited, except for parameter ranges defining the onset of synchrony and/or the range in which LIF neurons are synchronized at low spiking frequencies.

Results

Our two neural mass models obey the generic form

\[ \left[\tau^{(\text{mem})}\frac{d}{dt}+1\right]\left[\tau^{(\text{syn})}\frac{d}{dt}+1\right]V = J^{(\text{net})}(V;A,\ldots) + J^{(\text{ext})}(V;\ldots) \tag{2} \]

where J(net)(V; A, …) is the sum of all spike-related currents in the LIF network, i.e., the population mean of the expectation values of the spike trains generated by the network times the corresponding synapse conductances. Its value may depend on the mean membrane potential V and the network's adjacency matrix A (amongst other parameters). J(ext) summarizes the external drive, here always realized as a Poisson train. If the V-dependency of J is absent, we refer to (2) as the "conventional" Freeman model (CFM); in the presence of V-dependent currents, we refer to it as a slightly modified Freeman model (MFM); the explicit forms are given in (20) and (24), respectively.

The linearities in (2) stem from the facts that: (i) apart from the spike-related reset, the LIF dynamics is linear with membrane time constant τ(mem); and (ii) we connected the neurons via exponential synapses with a linear conductance dynamics and time constant τ(syn). In relation to (1), we have α = 1/τ(mem) and β = 1/τ(syn).

In the absence of J(net), the impulse response of (the left-hand side of) the dynamics (2) equals that of a second-order linear system with rise and decay times given by τ(mem) and τ(syn). The corresponding frequency response function is that of a second-order low-pass filter (cf. Appendix). As soon as J(net) is included, i.e., once the LIF neurons start to fire, the response functions in the time- and frequency-domains become less trivial and quantitative assessments require numerical approaches. For this, we simulated a network composed of N = 10,000 LIF neurons and used the simulated network's spiking activity as input to the mass dynamics, i.e., as J(net) in (2).
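
The low-pass characteristics can be made explicit by evaluating the frequency response of the left-hand side of (2). In the sketch below, τ(syn) = 5 ms (the value used later in the Numerical Methods) while τ(mem) = 20 ms is an arbitrary illustrative choice:

```python
import numpy as np

tau_mem, tau_syn = 20e-3, 5e-3       # seconds; tau_mem is a placeholder
f = np.logspace(0, 3, 400)           # 1 Hz ... 1 kHz
w = 2 * np.pi * f

# response of [tau_mem d/dt + 1][tau_syn d/dt + 1] V = J to e^{iwt} input
H = 1.0 / ((1 + 1j * w * tau_mem) * (1 + 1j * w * tau_syn))
gain_db = 20 * np.log10(np.abs(H))   # rolls off at -40 dB/decade (2nd order)
```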

To study the influence of different adjacencies A on the quality of agreement between the average network potential and that modeled via the neural mass dynamics (2), we modified A in two ways. (i) We considered the adjacency matrix of an Erdős-Rényi random graph and changed the network's overall degree p from 0 to 1, where A represents connections between excitatory and inhibitory neurons alike. (ii) By the same token, we varied the relation between excitatory vs. inhibitory units, which we quantified via

\[ \lambda := \frac{\#\text{excitatory}}{\#\text{inhibitory}+\#\text{excitatory}}. \tag{3} \]

Details on the implementation of the different kinds of neurons can be found in the Methods section below; all parameters are summarized in Table 1.


Table 1. Parameter values used when simulating the network of LIF neurons.

Before investigating the extent of agreement in more detail, we first verified that the chosen parameter range covered different dynamical regimes, including phase transitions from de-synchronized to synchronized states. Figure 1 illustrates typical examples of the network with and without synchronization, while a more complete picture of the network's synchronization characteristics over the {p, λ} parameter space is given in Figure 2. There, we quantified the degree of synchrony using a spike train measure called Spike-contrast (Ciba et al., 2018). In brief, one contrasts activity vs. non-activity (spike vs. no-spike) in temporal bins and varies the bin size to obtain a time scale independent result; this allows for unraveling the time scales of synchrony, over which we averaged here (see the toy sketch below). Once the overall network degree exceeds a minimal value, increasing it further has little to no influence on the state of synchronization in the network. In contrast, when altering λ, e.g., increasing the relative amount of excitatory units, one can observe a spontaneous switch from de-synchronized to synchronized states.
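
To convey the binning idea, the following toy index pools all spikes, bins them at several time scales, and scores how strongly the counts deviate from Poisson-like (asynchronous) statistics. It is emphatically not the published Spike-contrast algorithm of Ciba et al. (2018), merely a simplified stand-in:

```python
import numpy as np

def synchrony_proxy(spike_trains, t_max, n_scales=10):
    """Toy multi-scale synchrony index (0 = asynchronous, -> 1 = synchronous).

    spike_trains: list of per-neuron arrays of spike times; t_max: duration.
    """
    pooled = np.concatenate([np.asarray(s) for s in spike_trains])
    if pooled.size == 0:
        return 0.0
    scores = []
    for width in np.geomspace(t_max / 2, t_max / 1000, n_scales):
        counts, _ = np.histogram(pooled, bins=np.arange(0.0, t_max + width, width))
        mean = counts.mean()
        fano = counts.var() / mean if mean > 0 else 0.0
        # Poisson-like pooled counts have a Fano factor near 1 (score ~ 0);
        # synchronous volleys inflate the variance (score -> 1).
        scores.append(0.0 if fano <= 1.0 else 1.0 - 1.0 / fano)
    return float(np.mean(scores))
```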


Figure 1. Typical behavior of the LIF network for different fractions of excitatory and inhibitory units. Top panels (a-c) contain raster plots for 10,000 units, excitatory in blue and inhibitory in green. The bottom panels (d-f) show the corresponding LIF mean-field potential, V(t); (a,d) λ = 0.6, (b,e) λ = 0.75, and (c,f) λ = 0.8; in all cases the overall network degree was set to p = 0.2.


Figure 2. The synchronization degree in the {p, λ} space computed using the Spike-contrast measure (Ciba et al., 2018). A value close to 1 indicates strong synchronization, here particularly pronounced for large λ, whereas a value close to 0 indicates de-synchronized states. This is the case if p is very small, or if the number of inhibitory units exceeds that of the excitatory ones, i.e., if λ is small.

The degree of synchronization in the network plays a crucial role for the appropriateness of the neural mass models in relation to the average network dynamics. As said, there is particular interest in the spectral content of the dynamics of neural populations (Buzsaki, 2006; Başar, 2012). Hence, we first summarize our comparisons in the frequency domain.

Spectral Characteristics

In what follows, we denote the average LIF network potential and the neural mass potentials by VLIF, VCFM, and VMFM, respectively, and refer to the corresponding spectra as PLIF, PCFM, and PMFM. Figure 3 shows the median frequency of VLIF, which, in combination with Figure 2, provides a more encompassing view of the LIF network dynamics: when passing from small to larger values of λ, the network starts to synchronize. At the onset of synchrony, the network's median frequency remains very low (this transition can be observed in Figure 1). When further increasing λ, this first transition from the de-synchronized to a synchronous state is followed by a second one, at which the network enters the synchronous regular regime with a high spiking rate (Brunel and Hakim, 1999; Yger et al., 2011). The first transition appears smoother, which may be attributed to an effect of the chosen measure, i.e., the median frequency.


Figure 3. Median frequency of the LIF network's average potential VLIF; several contour lines were added to highlight the increase of the median frequency when increasing λ. When looking also at Figure 2, one can identify two transitions, one from the de-synchronized to a synchronized state (at low frequencies), followed by a second one from low- to high-frequency synchronization.

As expected, in the de-synchronized region both neural mass models display significantly lower median frequencies than their network counterpart due to their low-pass filter characteristics. In all other dynamical regimes, the median frequencies seem to agree, at least at first glance. In general, one can expect that around a transition to or from synchrony, a spectral distribution may change qualitatively, rendering a comparison based solely on the median frequency incomplete. We therefore supplemented our analysis by a χ2-statistic (25) between the network's and the neural masses' power spectral densities, shown in Figure 4 for both CFM and MFM; see the Methods section for details.


Figure 4. χ2-statistic computed between the power spectra (χ2(·, ·)) in the {p, λ} space (10^5 values). (a) χ2 between the LIF network and the CFM, χ2(PLIF, PCFM). (b) χ2 between the LIF network and the MFM, χ2(PLIF, PMFM). We added several contour lines in white to improve legibility. The dashed red line indicates the boundary of the significance region with α = 0.01 (according to the χ2 distribution): inside the small region encircled by the dashed red line, the CFM/MFM spectra were not significantly different from the LIF network spectrum.

In the region of de-synchronization, the χ2-values between the LIF network and the MFM spectra are clearly larger than their CFM counterparts. That is, there, the CFM provides the better representation of the network's average potential; cf. Figure 4. In the synchronous network state, both models generate similar χ2-values. In the region of regular high-frequency synchronization, both spectra substantially disagree with the VLIF spectrum; there, the quality-of-fit is poor. By contrast, in the low-frequency synchronous region, especially close to the transition points from de-synchronization to synchronization, the spectra of both models agree to a level at which our χ2-statistic does not identify any statistically significant differences; see the areas encircled by the red dashed lines in Figure 4.

Temporal Characteristics

We analyzed the VCFM and VMFM time series by determining the cross-correlation function between them and VLIF. In view of the aforementioned response characteristics, we expected the extrema of this correlation to be located at finite, non-vanishing time lags τ. Therefore, we first estimated these time lags and, subsequently, determined the corresponding correlation coefficients ρk(τmax) with

\[ \rho_k(\tau) = \int \hat{V}_{\text{LIF}}(t)\,\hat{V}_k(t+\tau)\,dt \tag{4} \]

and τmax = argmaxτ |ρk(τ)|; in (4), the hat (·^) indicates the computation of z-scores and k ∈ {CFM, MFM}. The resulting time lags and maximum correlation values are shown in Figure 5. As expected from the close relationship between the neural mass models and the linear response, the time lags switch from zero to positive values around the points of transition between de-synchronized and synchronized states; the fact that in the de-synchronized regime the MFM displays a negative time lag shows its deviation from the mere low-pass characteristics of the CFM. The drop in time lag at the transition from low-frequency to high-frequency synchronization is unexpected, and we return to it in the Discussion section below.
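
A minimal implementation of this lag-and-correlation estimate, assuming uniformly sampled series and the convention of (4), may look as follows:

```python
import numpy as np

def optimal_lag_correlation(v_lif, v_model, dt):
    """Estimate tau_max = argmax_tau |rho_k(tau)| and rho_k(tau_max), cf. (4).

    Both series are z-scored; np.correlate(b, a, 'full')[i] then equals
    sum_t a[t] * b[t + i - (len(a) - 1)], matching rho_k(tau) with a = V_LIF
    and b = V_k. A positive tau_max means the mass model trails the network.
    """
    a = (v_lif - v_lif.mean()) / v_lif.std()
    b = (v_model - v_model.mean()) / v_model.std()
    rho = np.correlate(b, a, mode="full") / len(a)
    lags = (np.arange(rho.size) - (len(a) - 1)) * dt
    i = np.argmax(np.abs(rho))
    return lags[i], rho[i]
```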


Figure 5. Time lags and correlation coefficients. (a,b) depict the optimal time lags τmax for (a) the CFM and (b) the MFM. We added contour lines (in white) to improve legibility. In (b) there is a change in the time lags once p is sufficiently large for the LIF network to generate spikes. (c,d) show the corresponding correlation coefficients ρk(τmax) between the LIF model and (c) the CFM and (d) the MFM. The red-dashed lines in panels (c,d) indicate boundaries of significance; α = 0.01, obtained by applying the Fisher transformation to the correlation values (Fisher, 1915). Inside the area defined by the red-dashed line in the synchronized region and the small area in the asynchronous region where p → 0, the time series of the two neural mass models were not significantly different from the LIF mean field.

For both CFM and MFM, the correlation followed a {p, λ}-dependent pattern similar to that of the synchronization degree combined with the median frequency (Figures 2, 3). Apart from the regions with very small p, i.e., where the network was set to be very sparsely connected, the region with pronounced low-frequency synchronization (λ ∈ [0.70, 0.85]) is accompanied by the largest correlation values. Similar to the χ2-based approach for the spectra, we determined a significance interval for the ρk(τmax), now after Fisher transform (Fisher, 1915). The red-dashed significance boundaries indicate regions of proper approximations. In line with the results for the power spectral densities, also here the agreement between the neural mass models and the average LIF network dynamics becomes arbitrarily bad in the region of high-frequency synchronization (larger values of λ). While both models largely agreed there, they differ when the network is de-synchronized (λ < 0.7), where the CFM displayed larger correlation values than the MFM.

Discussion

We compared two neural mass models with the average potential of a network of LIF neurons. Both models provide limited approximations of the average network potential for large regions of the {p, λ} parameter space spanning network degree and the relation between excitatory and inhibitory neurons. We found arbitrary discrepancies between the neural mass models and the average potential of the LIF network, manifested in both the cross-correlation between model and network potentials and in the corresponding spectral densities that we assessed via χ2-statistics. These were not just minor quantitative deviations but qualitative and largely unpredictable ones. Only in a very confined region around the transition from the de-synchronized state to low-frequency synchronization in the LIF network did both models perform well. There, our simulations did not reveal any significant differences between the "real" average network potential and the outcome of the two neural mass dynamics. Although this finding is troublesome, as it implies a quite limited accuracy of neural mass approximations, at least in some parameter regimes, we want to emphasize that our models appear suitable for studying neural activity at the transition between synchronized and de-synchronized states. In fact, transition regimes have recently gained much interest as they seem to particularly characterize neural dynamics during resting state (Ito et al., 2005, 2007) and may also describe more general metastability in ongoing whole-brain activity (Tognoli and Kelso, 2014; Deco et al., 2017; Beim Graben et al., 2019).

We incorporated two neural mass models derived from microscopic conductance-based neural models. For the first one, we followed the procedure described by Rodrigues et al. (2010). As already mentioned in the Introduction, some parts of their derivation arguably lack rigor. For small fluctuations of the driving forces, they assumed the latter to be constant across channels, i.e., the membrane potential was considered constant and identical for all neurons in the synaptic terms; see Equation (17). While Rodrigues et al. did mention that this may not be a valid assumption, they also claimed this approximation to be required for deriving their neural mass model. Being constant is a very strong constraint for the membrane potential, which has, to the best of our knowledge, not been supported by previous research. Moreover, it remains unclear to us why this approximation has only been applied to the synaptic terms but nowhere else in Equation (16). For the second neural mass model, we conducted an alternative derivation with the same starting point (16). To compute the population mean, we separated the time scales of the neural dynamics. This led to a slightly more complicated dynamics, and reducing it further required a second approximation: the expectation values of the spike trains have to be identical among the different units. While this may be true for homogeneous cases, it may not hold in general. Strikingly, however, our results for both CFM and MFM largely agree. Both represent the results of the LIF network mean-field fairly accurately around the onset of low-frequency synchronization, while in other regions of the {p, λ}-space they both performed arbitrarily badly. Given these poor performances, we hope for future work to focus on (even more) rigorous alternatives.

In the population modeled in this work, heterogeneity has been included via a mixture of excitatory and inhibitory units, while the external and internal connectivities are uniformly distributed and units of the same type are identical. This simplified the implementation of the models and the interpretation of our results. Yet, we have to admit that, when it comes to biological plausibility, this choice might be considered unrealistic: the homogeneity between neurons of the same type can be challenged (Reyes et al., 1998; Jinno et al., 2007; Ávila-Åkerberg et al., 2010) and the uniform distribution of connectivity might be replaced by, e.g., small-world topologies (Bettencourt et al., 2007; van den Heuvel et al., 2016; Bassett and Bullmore, 2017). Here we would like to add that cell-to-cell heterogeneity, including its role in neural coding, has been explored elsewhere using comparable modeling approaches (Boustani and Destexhe, 2009; Mejias and Longtin, 2012, 2014; Carlu et al., 2020), while the modeling of small-world, modular, and more realistic topologies remains future work.

As a final remark, we would like to point out that discrepancies between microscopic and macroscopic descriptions of the same neural network are problematic when seeking inferences from one level to the other. However, the ability to make such inferences is fundamental, since a model only has value when it allows for predictions. Studying the network at the macroscopic neural mass level should allow one to forecast dynamics that can be verified at the microscopic, i.e., full network level. Of course, this requires a proper modeling of the latter. Since this cannot be guaranteed under all circumstances, the litmus test remains to forecast experimental data. This, however, will come with further challenges, as one has to answer, e.g., "what is the microscopic level?" or "what defines the full network?". For instance, in encephalography the likelihood that the recorded potentials of some cortical region contain large contributions of (tangentially oriented) inhibitory units is arguably small. That is, although the full network does contain inhibitory units, the recorded mean values of the underlying neural population may not cover them. In the Appendix, we sought to mimic this case by repeating our comparisons after selecting only excitatory units from the simulated LIF network. Also in this case our results stay intact, rendering our conclusions valid and possibly transferable to this type of experimental data.

Conclusion

Neural masses are common tools to model neural population dynamics. They are believed to mimic selected brain activity patterns with great accuracy. We questioned the relation between these models and the underlying spiking neural network. For populations with both excitatory and inhibitory neurons and random connectivity, we found that approximations via the corresponding mean-field dynamics may deviate arbitrarily from the network's average potential. Deviations may be particularly large when the network is either de-synchronized or fully synchronized and spikes at high rate, while mass models can fit well around the onset of low-frequency synchronization. Neural mass models covering several dynamical regimes require more than mere mean-field approximations because they typically average out the (synchrony-defining) spiking behavior.

Methods

We will derive two neural mass models from a network of spiking neurons and compare them against the mean outcome of that network. The first model represents the CFM, and the second one contains a (slight) modification by means of a weakly non-linear response, i.e., the MFM. Our approach to test the models is the following: We simulate the spiking network for different values of two major topological parameters, i.e., the fraction of excitatory/inhibitory units and the mean degree of the (random) network quantifying the general connectivity. We choose the parameters such that the dynamics undergoes a phase transition from the de-synchronized to a synchronized state (Yger et al., 2011). Throughout the simulations, we "record" both the output spiking activity and the mean membrane potential. While the latter is considered the reference, i.e., the "real" mean network activity, the former serves as input to the two neural mass models. Finally, we compare the outcome of the neural mass models with the real mean network activity in both the time and the frequency domain.

Below, we will specify the microscopic neuron and synapse dynamics and put them on a homogeneous network before deriving the two versions of the macroscopic Freeman model. Finally, we will provide all details about how we altered the network structure when probing model validity.

Microscopic Dynamics

We consider a population or network of n = 1, …, N neurons where neuron n is described in terms of the dynamics of its membrane potential vn = vn(t) and voltage- and time-dependent conductances. If cn and gn denote the membrane's capacitance and leak conductance, respectively, then the dynamics can be cast in the form

\[ \tau_n\,dv_n + f_n(v_n)\,dt = g_n^{-1} j_n(t)\,dt + dw_n. \tag{5} \]

The function fn(·) is, as of yet, generic and describes the voltage-dependent decay; jn(t) is the total current applied to neuron n. The membrane's time constant is given by its capacitance and leak conductance in terms of τn := cn/gn. Finally, wn denotes a stochastic force summarizing random voltage fluctuations of the membrane; here, wn will always reflect zero-centered, δ-correlated (white) Gaussian noise with variance Q. In what follows, we will specify both fn(·) and jn(t) and estimate the expectation values of the population average for finite N.

We first notice that the input current jn(t) can be a combination of an internal current generated within the network and an external one stemming from outside the network. We denote them as jn(net) and jn(ext), respectively, and assume that they superimpose like jn = jn(net) + jn(ext). Without loss of generality, the internal current will be given as

\[ j_n^{(\text{net})} = -\sum_{\sigma\in\{E,I\}}\sum_{m=1}^{N} g_{nm}^{(\sigma)}\left(v_n - \tilde{v}_{nm}^{(\sigma)}\right) \tag{6a} \]

where ṽnm(σ) is the reversal potential of a synapse between neurons n and m. The synapse can be excitatory or inhibitory, which we indicate by σ = E or σ = I, respectively. The synaptic activity is further quantified by a time-dependent conductance gnm(σ) that depends on incoming spikes. We consider the corresponding response to be cast into a first-order, linear dynamics, i.e., we include so-called exponential synapses whose conductance dynamics reads

\[ \tau_{nm}^{(\sigma)}\,dg_{nm}^{(\sigma)} = -\left(g_{nm}^{(\sigma)} - \hat{g}_{nm}^{(\sigma)}\phi_{nm}^{(\sigma)}\right)dt. \tag{6b} \]

The parameter ĝnm(σ) relates to the maximum conductance, τnm(σ) is the characteristic time of the type-σ synapse between neurons n and m, and ϕnm(σ) is the input that neuron n receives from neuron m. If that input is composed of spikes, it can be cast into the form

\[ \phi_{nm}^{(\sigma)} = A_{nm}\sum_{k}\delta\!\left(t - t_{m,k}^{(\sigma)}\right) \tag{6c} \]

where Anm denotes the elements of the network's adjacency matrix, i.e., Anm = 1 if neuron m targets neuron n and 0 otherwise, and Σk δ(t − tm,k(σ)) is a spike train emitted by neuron m with spikes at times tm,k(σ). Similarly, the external current may be expressed as

\[ j_n^{(\text{ext})} = -\sum_{m=1}^{M} g_{nm}^{(\text{ext})}\left(v_n - \tilde{v}_{nm}^{(\text{ext})}\right) \tag{7a} \]

given M external units that project into the network with external synaptic conductivity and external inputs of the form

\[ \tau_{nm}^{(\text{ext})}\,dg_{nm}^{(\text{ext})} = -\left(g_{nm}^{(\text{ext})} - \hat{g}_{nm}^{(\text{ext})}\phi_{nm}^{(\text{ext})}\right)dt \tag{7b} \]

and

\[ \phi_{nm}^{(\text{ext})} = B_{nm}\sum_{k}\delta\!\left(t - t_{m,k}^{(\text{ext})}\right) \tag{7c} \]

respectively. The parameter ĝnm(ext) is again related to the maximum conductance, τnm(ext) denotes the characteristic time of the corresponding synapse and ϕnm(ext) is an external spike train that enters according to the adjacency matrix between the external and internal neurons (Bnm = 1 if the external neuron m targets internal neuron n and 0 otherwise).

We would like to note that, thus far, we did not detail the dynamics of the individual neuron n, i.e., the function fn(·) can still be arbitrary (except that it has to be integrable). Put differently, the system (5) and (6a-c) covers a very general case of a conductance-based, stochastic spiking network model under the impact of an external drive (7a-c).

Next, in order to make this system tractable, we consider the case in which all synapses of type σ are identical for every neuron. This means that

\[ \forall\, m = 1,\ldots,M:\ \left\{\tau_{nm}^{(\sigma)} =: \tau_{n}^{(\sigma)},\quad \hat{g}_{nm}^{(\sigma)} =: \hat{g}_{n}^{(\sigma)},\quad \tilde{v}_{nm}^{(\sigma)} =: \tilde{v}_{n}^{(\sigma)}\right\}, \tag{8} \]

i.e., all synapses have identical characteristic times, maximum conductances, and reversal potentials. For the sake of legibility, we further introduce two abbreviations, namely

\[ g_n^{(\sigma)} := \sum_{m=1}^{N} g_{nm}^{(\sigma)} \quad\text{and}\quad \phi_n^{(\sigma)} := \sum_{m=1}^{N} \phi_{nm}^{(\sigma)} \tag{9} \]

which represent the total conductivity of type-σ synapses in neuron n and the total spike input via type-σ synapses received by neuron n, respectively. Then, substituting (8) and computing the sum over m in (6b) yields

\[ \tau_n^{(\sigma)}\,dg_n^{(\sigma)} = -\left(g_n^{(\sigma)} - \hat{g}_n^{(\sigma)}\phi_n^{(\sigma)}\right)dt. \tag{10} \]

Network of LIF Neurons

The arguably simplest case of spiking neurons are LIF neurons. To model them, we constrain fn(·) to be linear in vn. In more detail, we define fn(vn) = (vn − ṽn), where ṽn denotes the membrane reversal potential. We add a further homogeneity assumption by considering identical fn as well as identical membrane characteristics for all neurons, i.e., cn =: c, gn =: g, hence τn =: τ(mem), and ṽn =: ṽ. Likewise, we assume homogeneity of the synapses by setting τn(σ) =: τ(σ), ṽn(σ) =: ṽ(σ), and ĝn(σ) =: ĝ(σ), i.e., all synapses of the same type σ are identical across the population2. Using (10) and the homogeneity, we can simplify the system (5) and (6) to

\[ \tau^{(\text{mem})}\,dv_n = -\left[(v_n - \tilde{v}) + \frac{1}{g}\sum_{\sigma\in\{E,I\}} g_n^{(\sigma)}\left(v_n - \tilde{v}^{(\sigma)}\right)\right]dt + dw_n \tag{11a} \]

and

\[ \tau^{(\sigma)}\,dg_n^{(\sigma)} = -\left(g_n^{(\sigma)} - \hat{g}^{(\sigma)}\phi_n^{(\sigma)}\right)dt \tag{11b} \]

with

\[ \phi_n^{(\sigma)} = \sum_{m=1}^{N} A_{nm}\sum_{k}\delta\!\left(t - t_{m,k}^{(\sigma)}\right). \tag{11c} \]

Finally, the membrane dynamics is supplemented by the reset rule that reads

if vn reaches v(thres), then neuron n emits a spike, its membrane potential vn resets to v(reset), and it stays there for a refractory period τ(ref).    (11d)

The set of equations (11a-d) defines our microscopic dynamics. This dynamics can be readily completed by adding external input as defined in (7a-c) much in line with the formulation of (11b) and (11c).
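
For illustration, one Euler-Maruyama step of (11a-d) could look like the sketch below. The parameter names are placeholders (not the values of Table 1), the spike input ϕn(σ) is assumed to be pre-computed from the adjacency matrix via (11c), and the external drive (7a-c) can be added as one more entry in the synapse dictionaries:

```python
import numpy as np

def step_lif_network(v, g_syn, spikes_in, refrac, P, dt=1e-4):
    """One Euler-Maruyama step of the LIF system (11a-d); a bare-bones sketch.

    v         : (N,) membrane potentials
    g_syn     : dict sigma -> (N,) total conductances g_n^(sigma), cf. (9)
    spikes_in : dict sigma -> (N,) spike input phi_n^(sigma) this step,
                with the adjacency matrix already applied, cf. (11c)
    refrac    : (N,) remaining refractory times
    P         : parameter dict; its keys and values are placeholders here
    """
    I = np.zeros_like(v)
    for s in g_syn:
        # exponential synapse (11b)
        g_syn[s] += dt / P["tau_syn"][s] * (-g_syn[s] + P["g_hat"][s] * spikes_in[s])
        I -= g_syn[s] / P["g_leak"] * (v - P["v_rev"][s])
    # membrane update (11a): the noise increment dw_n has variance Q*dt
    noise = np.sqrt(P["Q"] * dt) * np.random.randn(v.size)
    dv = ((-(v - P["v_tilde"]) + I) * dt + noise) / P["tau_mem"]
    active = refrac <= 0.0
    v[active] += dv[active]
    refrac[~active] -= dt
    # reset rule (11d)
    fired = v >= P["v_thres"]
    v[fired] = P["v_reset"]
    refrac[fired] = P["tau_ref"]
    return fired
```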

Macroscopic Dynamics

In the following, we will estimate the population mean of the membrane potential's expectation value; recall that the dynamics (11a) contains noise, which we "eliminate" by first determining the dynamics' first moment Vn := ⟨vn⟩. Hence, the task is to approximate

\[ V := \frac{1}{N}\sum_{n=1}^{N} V_n := \frac{1}{N}\sum_{n=1}^{N}\langle v_n\rangle. \tag{12} \]

Before doing so, however, we recast (11b) in the form

\[ \left[\tau^{(\sigma)}\frac{d}{dt}+1\right] g_n^{(\sigma)} = \hat{g}^{(\sigma)}\Phi_n^{(\sigma)} \tag{13} \]

where we introduced the first moment of the spike trains, i.e.,

\[ \Phi_n^{(\sigma)} := \left\langle\phi_n^{(\sigma)}\right\rangle. \tag{14} \]

By construction, 〈wn〉 = 0 holds, with which we find

\[ \left[\tau^{(\text{mem})}\frac{d}{dt}+1\right] V = \tilde{v} - \frac{1}{g}\sum_{\sigma\in\{E,I\}}\frac{1}{N}\sum_{n=1}^{N}\left(\left\langle g_n^{(\sigma)} v_n\right\rangle - \left\langle g_n^{(\sigma)}\right\rangle\tilde{v}^{(\sigma)}\right). \tag{15} \]

We can combine (13) and (15), in particular, when assuming identical time constants across synapse types σ, i.e., ∀σ: τ(σ) =: τ(syn). Then, we find

\[ \left[\tau^{(\text{mem})}\frac{d}{dt}+1\right]\left[\tau^{(\text{syn})}\frac{d}{dt}+1\right]V = \tilde{v} + \sum_{\sigma\in\{E,I\}}\frac{\hat{g}^{(\sigma)}}{g}\tilde{v}^{(\sigma)}\Phi^{(\sigma)} - \frac{1}{g}\sum_{\sigma\in\{E,I\}}\frac{1}{N}\sum_{n=1}^{N}\left[\tau^{(\text{syn})}\frac{d}{dt}+1\right]\left\langle g_n^{(\sigma)} v_n\right\rangle \tag{16} \]

with Φ(σ) := N−1 Σn=1..N Φn(σ). The last term on the right-hand side of (16) needs to be approximated, and the way this is done discriminates our two models. We first adopt the line of reasoning of Rodrigues et al. (2010), leading to the CFM, before presenting a slight adjustment culminating in the MFM (cf. Tewarie, 2014, chap. 2).

The Conventional Freeman Model (CFM)

Approximating the term ⟨gn(σ)vn⟩ in (15) can be difficult because smallness arguments may not hold in view of the stochastic nature of the dynamics. Rodrigues et al. (2010) introduced an admittedly gross step by considering

\[ \left\langle g_n^{(\sigma)} v_n\right\rangle \approx \left\langle g_n^{(\sigma)} \bar{V}\right\rangle = \left\langle g_n^{(\sigma)}\right\rangle \bar{V} \tag{17} \]

where V̄ denotes the constant mean membrane potential of the population, averaged over time. This approximation implies that the individual membrane potentials vn are arbitrarily close to the population mean V̄. Note that when applying this approximation, one selectively ignores the dynamics of the individual potentials on the right-hand side of (16) but not on the left-hand side; cf. the Discussion section. Presuming this is acceptable, the last term on the right-hand side of (16) simplifies drastically because of

\[ \left[\tau^{(\text{syn})}\frac{d}{dt}+1\right]\left\langle g_n^{(\sigma)} v_n\right\rangle \approx \left[\tau^{(\text{syn})}\frac{d}{dt}+1\right]\left\langle g_n^{(\sigma)}\right\rangle \bar{V} \overset{(13)}{=} \hat{g}^{(\sigma)} \bar{V}\,\Phi_n^{(\sigma)} \tag{18} \]

Substituting (18) into (16) yields

\[ \left[\tau^{(\text{mem})}\frac{d}{dt}+1\right]\left[\tau^{(\text{syn})}\frac{d}{dt}+1\right]V = -\sum_{\sigma\in\{E,I\}}\frac{\hat{g}^{(\sigma)}}{g}\left(\bar{V} - \tilde{v}^{(\sigma)}\right)\Phi^{(\sigma)}. \tag{19} \]

In the presence of external input, as given in (7a-c), the full dynamics finally reads

\[ \left[\tau^{(\text{mem})}\frac{d}{dt}+1\right]\left[\tau^{(\text{syn})}\frac{d}{dt}+1\right]V = -\sum_{\sigma\in\{E,I\}}\frac{\hat{g}^{(\sigma)}}{g}\left(\bar{V} - \tilde{v}^{(\sigma)}\right)\Phi^{(\sigma)} - \frac{\hat{g}^{(\text{ext})}}{g}\left(\bar{V} - \tilde{v}^{(\text{ext})}\right)\Phi^{(\text{ext})}. \tag{20} \]

In our study, the function Φ(ext) consists of Poisson spike trains as specified in Equation (7c).

Both forms, (19) and (20), agree entirely with the Freeman model (1) when identifying α = 1/τ(mem), β = 1/τ(syn), and J with the right-hand side of (19) or (20), respectively.
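
The following sketch integrates the mass dynamics by forward Euler, given population spike inputs; it covers (20) and, by substituting the running potential for the constant V̄, also the MFM (24) derived below. The parameter keys are placeholders, not taken from Table 1:

```python
import numpy as np

def run_freeman(Phi, Phi_ext, P, V_bar=None, dt=1e-4):
    """Forward-Euler sketch of the mass dynamics (20)/(24).

    Phi     : dict sigma -> (T,) population-mean spike input Phi^(sigma)
    Phi_ext : (T,) external Poisson drive Phi^(ext)
    P       : parameter dict with placeholder keys (not Table 1)
    V_bar   : a constant -> CFM (20); None -> use the running potential
              itself, i.e., the parametrically forced MFM (24)
    """
    out = np.zeros(Phi_ext.size)
    v, dv = 0.0, 0.0
    for t in range(Phi_ext.size):
        u = v if V_bar is None else V_bar
        J = -sum(P["g_hat"][s] / P["g_leak"] * (u - P["v_rev"][s]) * Phi[s][t]
                 for s in Phi)
        J -= P["g_hat_ext"] / P["g_leak"] * (u - P["v_rev_ext"]) * Phi_ext[t]
        # [tau_mem d/dt + 1][tau_syn d/dt + 1] v = J, expanded:
        # tau_mem*tau_syn*v'' + (tau_mem + tau_syn)*v' + v = J
        ddv = (J - v - (P["tau_mem"] + P["tau_syn"]) * dv) / (P["tau_mem"] * P["tau_syn"])
        dv += ddv * dt
        v += dv * dt
        out[t] = v
    return out
```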

The Modified Freeman Model (MFM)

For an alternative approximation of the term ⟨gn(σ)vn⟩ in (16), let us detail the time scales at which the membrane potentials and the synapses evolve. Synaptic time constants can be as small as 1.7 ms (Häusser and Roth, 1997), much in the range of typical time scales of the membrane dynamics. Yet, changes in most chemical synapses are much slower than the changes of the membrane potential, in particular, the generation/emission of action potentials. Then, one may assume that the membrane potential instantly follows changes at the synapse, and its dynamics can be eliminated adiabatically, i.e., we can use

\[ \left|\frac{dv_n}{dt}\Big/ v_n\right| \gg \left|\frac{dg_n^{(\sigma)}}{dt}\Big/ g_n^{(\sigma)}\right| \tag{21} \]

to rewrite

\[ \left[\tau^{(\text{syn})}\frac{d}{dt}+1\right]\left\langle g_n^{(\sigma)} v_n\right\rangle \approx \left\langle\left(\left[\tau^{(\text{syn})}\frac{d}{dt}+1\right] g_n^{(\sigma)}\right) v_n\right\rangle \overset{(13)}{=} \hat{g}^{(\sigma)}\Phi_n^{(\sigma)}\left\langle v_n\right\rangle. \tag{22} \]

While this approximation contains sufficient rigor under the proviso of a proper time scale separation, we also require that

\[ \frac{1}{N}\sum_{n=1}^{N}\hat{g}^{(\sigma)}\Phi_n^{(\sigma)}\left\langle v_n\right\rangle \approx \hat{g}^{(\sigma)}\Phi^{(\sigma)}V \tag{23} \]

which is true for Φ being the external spike train but may be arbitrarily inaccurate for the internal one, Φ(σ); again, we refer to the Discussion section for a critical review. If this approximation turns out adequate, the MFM becomes

\[ \left[\tau^{(\text{mem})}\frac{d}{dt}+1\right]\left[\tau^{(\text{syn})}\frac{d}{dt}+1\right]V = -\sum_{\sigma\in\{E,I\}}\frac{\hat{g}^{(\sigma)}}{g}\left(V - \tilde{v}^{(\sigma)}\right)\Phi^{(\sigma)} - \frac{\hat{g}^{(\text{ext})}}{g}\left(V - \tilde{v}^{(\text{ext})}\right)\Phi^{(\text{ext})}. \tag{24} \]

In contrast to (20), the dynamics (24) contains a parametric forcing since, on the right-hand side, the constant V̄ is replaced by the time-dependent mean potential V. Note, however, that despite this difference our simulation results revealed that, for the chosen parameter values (over-damped second-order response), the outcomes of both models (20) and (24) largely agree.
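
Using run_freeman from the sketch above (with hypothetical inputs Phi and Phi_ext, parameters P, and a pre-computed average V_bar), the two models thus differ in a single argument:

```python
V_cfm = run_freeman(Phi, Phi_ext, P, V_bar=V_bar)  # CFM (20): constant drive term
V_mfm = run_freeman(Phi, Phi_ext, P, V_bar=None)   # MFM (24): parametric forcing
```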

Numerical Methods

Simulations

We simulated N = 10,000 LIF neurons with three types of synapses each. The network equations were integrated using an Euler-Maruyama scheme with a time step of Δt = 0.1 ms and noise variance Q = 5·10^{-4} for a total duration of T = 3·10^4 ms, i.e., for 3·10^5 time steps. We discarded a transient regime of T0 = 3·10^3 ms. The network was stimulated by 10,000 independent Poisson trains, each of them connected to each neuron in the network with probability p(ext). The population average of the total spike input of each synaptic type σ received by each neuron at each time step t, ϕ(σ), was stored, as it subsequently served as input to the Freeman models. The temporal average of the population mean, (T − T0)^{−1} ∫ from T0 to T of V dt, served as proxy for V̄. For the neural masses, we employed a simple forward Euler scheme with the same time parameters as used for the network model. The time constant τ(syn) was set to 5 ms, i.e., the average of the synaptic time constants in the network; see Table 1.

To generate external input as Poisson spike trains, we drew random numbers from an exponential distribution. Since we drew the numbers at every time step for all M = 10,000 external units, we minimized the computational load by following Zenke and Gerstner (2014) and used that the superposition of independent Poisson trains is again a Poisson train. The mean frequency ν(ext) of the external input was set to 5 Hz. The Erdős-Rényi adjacency {Anm} was constructed using the Gilbert model as published in Batagelj and Brandes (2005), adjusted for directed graphs. For the connection probability, we used a range of p = 0…1, implying a range of mean degrees k = Np = 0…N. The distribution of excitatory vis-à-vis inhibitory neurons was quantified by the ratio given by (3), i.e.,

\[ \lambda := \frac{\#\text{excitatory}}{\#\text{inhibitory}+\#\text{excitatory}} \]

with #inhibitory + #excitatory = N = 10,000. This network structure is similar to that in Brunel and Wang (2003) and Mazzoni et al. (2015) and has been considered a good estimator of cortical activity (Mazzoni et al., 2015). Note, however, that it differs from other LIF networks, such as those used in Brunel (2000) and Wong and Wang (2006), in its external drive: in the current work, only excitatory neurons receive external input. The internal network connectivity is given by a directed Erdős-Rényi network without discriminating between excitatory and inhibitory units. The connectivity between the external Poisson trains and the network of LIF neurons was also given by a directed Erdős-Rényi network with mean out-degree Mp(ext); cf. Figure 6.
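
Both ingredients of this setup, the directed Erdős-Rényi adjacency and the pooled Poisson drive, can be sketched as follows. The dense Bernoulli draw stands in for the efficient generator of Batagelj and Brandes (2005) purely for clarity, and all defaults are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

def build_adjacency(N=10_000, p=0.2, lam=0.75):
    """Directed Erdos-Renyi adjacency with excitatory fraction lam, cf. (3).

    The naive dense draw below only makes the construction explicit; for
    N = 10,000 the geometric-skipping generator of Batagelj and Brandes
    (2005) and sparse storage are preferable.
    """
    A = rng.random((N, N)) < p            # A[n, m] = 1 iff m targets n
    np.fill_diagonal(A, False)            # no self-loops (an assumption)
    is_excitatory = np.arange(N) < int(lam * N)
    return A, is_excitatory

def pooled_poisson_drive(M=10_000, rate=5.0, t_max=30.0):
    """External drive: M independent Poisson trains drawn as one pooled train.

    The superposition of independent Poisson processes is again Poisson with
    rate M*rate, so a single stream of exponential inter-spike intervals
    replaces M separate draws; each spike is then assigned to a random unit.
    """
    isi = rng.exponential(1.0 / (M * rate), size=int(1.5 * M * rate * t_max))
    times = np.cumsum(isi)
    times = times[times < t_max]
    units = rng.integers(0, M, size=times.size)
    return times, units
```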


Figure 6. Diagram of the network of LIF units (left) together with the external de-correlated input (right). Blue denotes excitatory, green inhibitory, and black excitatory external neurons. The units of the LIF network are connected to each other with probability p, independently of their type. The external units are modeled as independent Poisson trains; they are connected with probability p(ext) to excitatory LIF units only and not to inhibitory ones.

Parameter Values

The major parameters are summarized in Table 1. They largely agree with the settings in Yger et al. (2011) and resemble bio-physically plausible values.

Data Analysis

The network was simulated for each point {p, λ} in the parameter space. We first verified that the chosen parameter range indeed covered the regime at which phase transitions from the de-synchronized to a synchronized state may occur, using a recently introduced, time-scale independent spike train synchrony measure coined Spike-contrast (Ciba et al., 2018). This measure yields results comparable to those of the well-established Spike-distance (Kreuz et al., 2013) but had our preference for its computational efficiency, which was necessary for our fairly large number of neurons. Subsequently, the recorded internal and external spike trains served as input to the Freeman models. From the time series of the network's mean membrane potential and of the Freeman models' outcomes, we estimated power spectra via a discrete Fourier transform after boundary correction using a Hamming window. This procedure was repeated 10 times, yielding average discrete power spectra Pω as sample mean approximations of the power spectral densities. The corresponding median frequency ϖ served as a first, albeit very gross outcome measure to compare the spectra of the original network (i.e., its average potential) vis-à-vis the spectra of our models, CFM and MFM.
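
A minimal version of this spectral pipeline, implemented as a Welch-style average of Hamming-windowed periodograms with deliberately simplified boundary handling, might read:

```python
import numpy as np

def median_frequency(v, dt, n_segments=10):
    """Hamming-windowed, segment-averaged power spectrum and its median frequency."""
    seg_len = len(v) // n_segments
    win = np.hamming(seg_len)
    P = np.zeros(seg_len // 2 + 1)
    for k in range(n_segments):
        seg = v[k * seg_len:(k + 1) * seg_len]
        P += np.abs(np.fft.rfft((seg - seg.mean()) * win)) ** 2
    P /= n_segments
    freqs = np.fft.rfftfreq(seg_len, d=dt)
    # the median frequency splits the spectral mass in half
    median = freqs[np.searchsorted(np.cumsum(P), 0.5 * P.sum())]
    return median, freqs, P
```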

To quantify the agreement between spectra, we used a χ2-statistic: given two discrete spectra P = (P1, P2, …, PL) and Q = (Q1, Q2, …, QL), their χ2-statistic can be given as

\[ \chi^2(P,Q) = \sum_{l=1}^{L}\frac{(P_l - Q_l)^2}{P_l + Q_l} \tag{25} \]

where the sum covers all L frequency components of the spectra (Press et al., 1989). Prior to using (25), the spectra were normalized to resemble histograms rather than probabilities.
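
The statistic (25) itself is a one-liner. The normalization below, scaling each spectrum to sum to its number of bins so that the entries resemble histogram counts rather than probabilities, is our reading of that remark and thus an assumption:

```python
import numpy as np

def chi2_spectra(P, Q, eps=1e-12):
    """Chi-square statistic (25) between two discrete spectra."""
    P = np.asarray(P, dtype=float)
    Q = np.asarray(Q, dtype=float)
    P = P / P.sum() * P.size   # histogram-style normalization (an assumption)
    Q = Q / Q.sum() * Q.size
    return float(np.sum((P - Q) ** 2 / (P + Q + eps)))
```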

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Author Contributions

ND and AD conceived and designed the study. AD derived the neural mass models. ND conducted the numerical simulations and all the data analysis. The first draft of the manuscript was written by ND and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.

Funding

This study received funding from the European Union's Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement #642563 (COSMOS).

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

ND and AD want to thank Rok Cestnik and Bastian Pietras for fruitful discussions.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fncom.2020.581040/full#supplementary-material

Footnotes

1. ^Ideally one can consider the thermodynamic limit, i.e., infinitely many neurons, though symmetries may allow for exact mean-field descriptions even for finite population sizes. Yet, the Freeman model is often considered appropriate for describing the "coordinated activity of cell assemblies of ~10^4 neurons with ~10^8 synapses", Walter J. Freeman and Harry Erwin (2008), Scholarpedia, 3(2):3238.

2. ^This appears a reasonable assumption given that the variability between neurons of the same type might be lost in the presence of noise.

References

Ávila-Åkerberg, O., Krahe, R., and Chacron, M. J. (2010). Neural heterogeneities and stimulus properties affect burst coding in vivo. Neuroscience 168, 300–313. doi: 10.1016/j.neuroscience.2010.03.012

Başar, E. (2012). Brain Function and Oscillations: Volume I: Brain Oscillations. Principles and Approaches. Berlin; Heidelberg: Springer Science & Business Media.

Bassett, D. S., and Bullmore, E. T. (2017). Small-world brain networks revisited. Neuroscientist 23, 499–516. doi: 10.1177/1073858416667720

Batagelj, V., and Brandes, U. (2005). Efficient generation of large random networks. Phys. Rev. E 71:036113. doi: 10.1103/PhysRevE.71.036113

Beim Graben, P., Jimenez-Marin, A., Diez, I., Cortes, J. M., Desroches, M., and Rodrigues, S. (2019). Metastable resting state brain dynamics. Front. Comput. Neurosci. 13:62. doi: 10.3389/fncom.2019.00062

Bettencourt, L. M. A., Stephens, G. J., Ham, M. I., and Gross, G. W. (2007). Functional structure of cortical neuronal networks grown in vitro. Phys. Rev. E 75:021915. doi: 10.1103/PhysRevE.75.021915

Beurle, R. L. (1956). Properties of a mass of cells capable of regenerating pulses. Philos. Trans. R. Soc. Lond. B 240, 55–94. doi: 10.1098/rstb.1956.0012

Boustani, S. E., and Destexhe, A. (2009). A master equation formalism for macroscopic modeling of asynchronous irregular activity states. Neural Comput. 21, 46–100. doi: 10.1162/neco.2009.02-08-710

Brunel, N. (2000). Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. J. Comput. Neurosci. 8, 183–208. doi: 10.1023/a:1008925309027

Brunel, N., and Hakim, V. (1999). Fast global oscillations in networks of integrate-and-fire neurons with low firing rates. Neural Comput. 11, 1621–1671. doi: 10.1162/089976699300016179

Brunel, N., and Wang, X.-J. (2003). What determines the frequency of fast network oscillations with irregular neural discharges? I. Synaptic dynamics and excitation-inhibition balance. J. Neurophysiol. 90, 415–430. doi: 10.1152/jn.01095.2002

Buzsaki, G. (2006). Rhythms of the Brain. New York, NY: Oxford University Press.

Byrne, A., Brookes, M. J., and Coombes, S. (2017). A mean field model for movement induced changes in the beta rhythm. J. Comput. Neurosci. 43, 143–158. doi: 10.1007/s10827-017-0655-7

Carlu, M., Chehab, O., Dalla Porta, L., Depannemaecker, D., Héricé, C., Jedynak, M., et al. (2020). A mean-field approach to the dynamics of networks of complex neurons, from nonlinear Integrate-and-Fire to Hodgkin–Huxley models. J. Neurophysiol. 123, 1042–1051. doi: 10.1152/jn.00399.2019

Ciba, M., Isomura, T., Jimbo, Y., Bahmer, A., and Thielemann, C. (2018). Spike-contrast: a novel time scale independent and multivariate measure of spike train synchrony. J. Neurosci. Methods 293, 136–143. doi: 10.1016/j.jneumeth.2017.09.008

David, O., and Friston, K. J. (2003). A neural mass model for MEG/EEG: coupling and neuronal dynamics. Neuroimage 20, 1743–1755. doi: 10.1016/j.neuroimage.2003.07.015

Deco, G., Jirsa, V. K., Robinson, P. A., Breakspear, M., and Friston, K. (2008). The dynamic brain: from spiking neurons to neural masses and cortical fields. PLoS Comput. Biol. 4:e1000092. doi: 10.1371/journal.pcbi.1000092

Deco, G., Kringelbach, M. L., Jirsa, V. K., and Ritter, P. (2017). The dynamics of resting fluctuations in the brain: metastability and its dynamical cortical core. Sci. Rep. 7, 1–14. doi: 10.1038/s41598-017-03073-5

Fisher, R. A. (1915). Frequency distribution of the values of the correlation coefficient in samples from an indefinitely large population. Biometrika 10, 507–521. doi: 10.1093/biomet/10.4.507

Freeman, W. J. (1975). Mass Action in the Nervous System: Examination of the Neurophysiological Basis of Adaptive Behavior Through the EEG. New York, NY: Academic Press.

Freeman, W. J. (1987). Simulation of chaotic EEG patterns with a dynamic model of the olfactory system. Biol. Cybern. 56, 139–150.

Griffith, J. S. (1963). A field theory of neural nets: I: derivation of field equations. Bull. Math. Biophys. 25, 111–120.

Griffith, J. S. (1965). A field theory of neural nets: II. properties of the field equations. Bull. Math. Biophys. 27:187.

Häusser, M., and Roth, A. (1997). Estimating the time course of the excitatory synaptic conductance in neocortical pyramidal cells using a novel voltage jump method. J. Neurosci. 17, 7606–7625.

Ito, J., Nikolaev, A. R., and van Leeuwen, C. (2005). Spatial and temporal structure of phase synchronization of spontaneous alpha EEG activity. Biol. Cybern. 92, 54–60. doi: 10.1007/s00422-004-0533-z

Ito, J., Nikolaev, A. R., and van Leeuwen, C. (2007). Dynamics of spontaneous transitions between global brain states. Hum. Brain Mapp. 28, 904–913. doi: 10.1002/hbm.20316

Jansen, B. H., and Rit, V. G. (1995). Electroencephalogram and visual evoked potential generation in a mathematical model of coupled cortical columns. Biol. Cybern. 73, 357–366.

Jinno, S., Klausberger, T., Marton, L. F., Dalezios, Y., Roberts, J. D. B., Fuentealba, P., et al. (2007). Neuronal diversity in gabaergic long-range projections from the hippocampus. J. Neurosci. 27, 8790–8804. doi: 10.1523/JNEUROSCI.1847-07.2007

Kozma, R., and Freeman, W. J. (2016). Cognitive Phase Transitions in the Cerebral Cortex-Enhancing the Neuron Doctrine by Modeling Neural Fields. Switzerland: Springer International Publishing.

Kreuz, T., Chicharro, D., Houghton, C., Andrzejak, R. G., and Mormann, F. (2013). Monitoring spike train synchrony. J. Neurophysiol. 109, 1457–1472. doi: 10.1152/jn.00873.2012

Lopes da Silva, F. H., Hoeks, A., Smits, H., and Zetterberg, L. H. (1974). Model of brain rhythmic activity. Kybernetik 15, 27–37.

Marreiros, A. C., Daunizeau, J., Kiebel, S. J., and Friston, K. J. (2008). Population dynamics: variance and the sigmoid activation function. Neuroimage 42, 147–157. doi: 10.1016/j.neuroimage.2008.04.239

Mazzoni, A., Lindén, H., Cuntz, H., Lansner, A., Panzeri, S., and Einevoll, G. T. (2015). Computing the local field potential (LFP) from integrate-and-fire network models. PLoS Comput. Biol. 11:e1004584. doi: 10.1371/journal.pcbi.1004584

Mejias, J. F., and Longtin, A. (2012). Optimal heterogeneity for coding in spiking neural networks. Phys. Rev. Lett. 108:228102. doi: 10.1103/PhysRevLett.108.228102

Mejias, J. F., and Longtin, A. (2014). Differential effects of excitatory and inhibitory heterogeneity on the gain and asynchronous state of sparse cortical networks. Front. Comput. Neurosci. 8:107. doi: 10.3389/fncom.2014.00107

Ponten, S. C., Daffertshofer, A., Hillebrand, A., and Stam, C. J. (2010). The relationship between structural and functional connectivity: graph theoretical analysis of an EEG neural mass model. Neuroimage 52, 985–994. doi: 10.1016/j.neuroimage.2009.10.049

Press, W. H., Flannery, B. P., Teukolsky, S. A., and Vetterling, W. T., et al. (1989). Numerical Recipes, Vol. 3. Cambridge: Cambridge University Press.

Reyes, A., Lujan, R., Rozov, A., Burnashev, N., Somogyi, P., and Sakmann, B. (1998). Target-cell-specific facilitation and depression in neocortical circuits. Nat. Neurosci. 1, 279–285. doi: 10.1038/1092

Rodrigues, S., Chizhov, A. V., Marten, F., and Terry, J. R. (2010). Mappings between a macroscopic neural-mass model and a reduced conductance-based model. Biol. Cybern. 102, 361–371. doi: 10.1007/s00422-010-0372-z

Stefanescu, R. A., and Jirsa, V. K. (2008a). A low dimensional description of globally coupled heterogeneous neural networks of excitatory and inhibitory neurons. PLoS Comput. Biol. 4:e1000219. doi: 10.1371/journal.pcbi.1000219

Stefanescu, R. A., and Jirsa, V. K. (2008b). A low dimensional description of globally coupled heterogeneous neural networks of excitatory and inhibitory neurons. PLoS Comput. Biol. 4:e1000219.

Tewarie, P. K. B. (2014). Functional brain networks in multiple sclerosis: linking structural pathology to clinical disability (Ph.D. thesis). Vrije Universiteit Amsterdam, Amsterdam, Netherlands.

Tognoli, E., and Kelso, J. (2014). The metastable brain. Neuron 81, 35–48. doi: 10.1016/j.neuron.2013.12.022

van den Heuvel, M. P., Bullmore, E. T., and Sporns, O. (2016). Comparative connectomics. Trends Cogn. Sci. 20, 345–361. doi: 10.1016/j.tics.2016.03.001

Wendling, F., Bellanger, J.-J., Bartolomei, F., and Chauvel, P. (2000). Relevance of nonlinear lumped-parameter models in the analysis of depth-EEG epileptic signals. Biol. Cybern. 83, 367–378. doi: 10.1007/s004220000160

Wong, K.-F., and Wang, X.-J. (2006). A recurrent network mechanism of time integration in perceptual decisions. J. Neurosci. 26, 1314–1328. doi: 10.1523/JNEUROSCI.3733-05.2006

Yger, P., El Boustani, S., Destexhe, A., and Frégnac, Y. (2011). Topologically invariant macroscopic statistics in balanced networks of conductance-based integrate-and-fire neurons. J. Comput. Neurosci. 31, 229–245. doi: 10.1007/s10827-010-0310-z

Zenke, F., and Gerstner, W. (2014). Limits to high-speed simulations of spiking neural networks using general-purpose computers. Front. Neuroinformatics 8:76. doi: 10.3389/fninf.2014.00076

Keywords: neural mass model, leaky integrate and fire, random graph, mean field approximation, Freeman model

Citation: Deschle N, Ignacio Gossn J, Tewarie P, Schelter B and Daffertshofer A (2021) On the Validity of Neural Mass Models. Front. Comput. Neurosci. 14:581040. doi: 10.3389/fncom.2020.581040

Received: 07 July 2020; Accepted: 01 December 2020;
Published: 05 January 2021.

Edited by:

Spase Petkoski, INSERM U1106 Institut de Neurosciences des Systèmes, France

Reviewed by:

Jorge F. Mejias, University of Amsterdam, Netherlands
Cees van Leeuwen, KU Leuven, Belgium

Copyright © 2021 Deschle, Ignacio Gossn, Tewarie, Schelter and Daffertshofer. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Andreas Daffertshofer, a.daffertshofer@vu.nl; Nicolás Deschle, n.deschle@vu.nl
