- 1 School of Mathematical Sciences, MOE-LSC, and Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai, China
- 2 Department of Physics and Astronomy, and Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai, China
- 3 Courant Institute of Mathematical Sciences and Center for Neural Science, New York University, New York, NY, United States
- 4 NYUAD Institute, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
It is hypothesized that cortical neuronal circuits operate in a global balanced state, i.e., the majority of neurons fire irregularly by receiving balanced inputs of excitation and inhibition. Meanwhile, it has been observed in experiments that sensory information is often sparsely encoded by only a small set of firing neurons, while neurons in the rest of the network are silent. The phenomenon of sparse coding challenges the hypothesis of a global balanced state in the brain. To reconcile this, here we address the issue of whether a balanced state can exist in a small number of firing neurons by taking account of the heterogeneity of network structure such as scale-free and small-world networks. We propose necessary conditions and show that, under these conditions, for sparsely but strongly connected heterogeneous networks with various types of single-neuron dynamics, despite the fact that the whole network receives external inputs, there is a small active subnetwork (active core) inherently embedded within it. The neurons in this active core have relatively high firing rates while the neurons in the rest of the network are quiescent. Surprisingly, although the whole network is heterogeneous and unbalanced, the active core possesses a balanced state and its connectivity structure is close to a homogeneous Erdös-Rényi network. The dynamics of the active core can be well-predicted using the Fokker-Planck equation. Our results suggest that the balanced state may be maintained by a small group of spiking neurons embedded in a large heterogeneous network in the brain. The existence of the small active core reconciles the balanced state and the sparse coding, and also provides a potential dynamical scenario underlying sparse coding in neuronal networks.
1. Introduction
Neuronal firing activity in the cortex can be highly irregular (Britten et al., 1993; Shadlen and Newsome, 1998; Compte et al., 2003; London et al., 2010). Because the precise timing of spikes may contain substantial information about the external stimuli, irregular activity may serve as a rich encoding and processing space for neural computation (Hertz and Prügel-Bennett, 1996; Gütig and Sompolinsky, 2006; Sussillo and Abbott, 2009; Monteforte and Wolf, 2012). To understand how the brain processes information, it is important to investigate how such irregularity emerges in the brain.
Some studies conclude that irregular firing may be regarded as noise, thus conveying little information (Shadlen and Newsome, 1994; Han et al., 2015). Meanwhile, other studies show that timing of spikes and the temporal activity patterns of irregular neuronal firings in vivo are able to convey specific information (Richmond and Optican, 1990; Pillow et al., 2005; Whalley, 2013). A seminal mechanism underlying irregular activity was proposed in the balanced network theory (van Vreeswijk and Sompolinsky, 1996; Troyer and Miller, 1997; Vreeswijk and Sompolinsky, 1998; Vogels et al., 2005; Miura et al., 2007). In a balanced network, sparsely-connected neurons possess strong architectural coupling but weak pair-correlations in their activity. The excitatory and inhibitory inputs into each neuron, on average, dynamically balance, suppressing the mean of the total input. Consequently, fluctuations of the input become dynamically dominant, giving rise to irregular firing events of each neuron. The hallmarks of a balanced network include a broad and heterogeneous distribution of the single-neuron firing rate and a linear response of the mean population firing rate to the external input (Vreeswijk and Sompolinsky, 1998; Mehring et al., 2003; Renart et al., 2010). Consistent with theoretically predicted scenarios, certain experimental observations have been interpreted as consequences of balanced networks. For example, in vitro, the sustained irregular activity of neurons in slices of the ferret prefrontal and occipital cortex was shown to be driven by the balance of proportional excitation and inhibition (Shu et al., 2003). In vivo, the excitatory and inhibitory inputs to a neuron in ferret's prefrontal cortex were also found to be dynamically balanced (Haider et al., 2006).
As shown in recent experimental data, the structure of developing hippocampal networks in rats and mice conforms to a scale-free (SF) topology, with the number of connections per neuron following a power-law distribution (Bonifazi et al., 2009). Bidirectional and clustered three-neuron connection motifs were experimentally observed to occur with a frequency significantly above chance in the visual system (Song et al., 2005), thus strongly deviating from statistically homogeneous networks. The network in the somatosensory cortex of neonatal animals was found to be a small-world (SW) network (Perin et al., 2011), that is, its connectivity has properties of high clustering and short average path lengths (Newman, 2003b). These experimental observations show that the neuronal cortical connectivity is rather heterogeneous. Therefore, in this work, we investigate the influence of the wide distribution of the recurrent connectivity on neuronal network dynamics.
In general, it is theoretically challenging to understand the dynamical consequences of these complex network architectures (Boccaletti et al., 2006). Several studies have explored the dynamics of networks with heterogeneous connections. For instance, in Roxin (2011), the role of the broad degree distribution on the correlation of synaptic currents has been investigated. In addition, it has been observed that, in a heterogeneous network, a neuron with more presynaptic connections tends to fire less (Pyle and Rosenbaum, 2016). In a recent study (Landau et al., 2016), it has been found that a heterogeneous network is unbalanced in general because some neurons either never fire or fire fairly regularly in the network. The balanced state of the entire network (the global balanced state) can be achieved by setting strong correlations among the presynaptic excitatory, presynaptic inhibitory, and external inputs for each neuron, or through incorporating adaptation and plasticity into the dynamics of each neuron (Landau et al., 2016).
Theoretical and computational works so far have mainly focused on the global balanced state, i.e., each neuron in the network fires irregularly by receiving balanced excitation and inhibition. However, experimental studies have shown that information is often encoded by the firing of a relatively small set of neurons in the population, whereas other neurons in the network do not fire at all. This phenomenon is often referred to as sparse coding and has been observed in many cortical regions. For instance, sparse firing activity has been observed in the barrel cortex of mice (O'Connor et al., 2010), the auditory cortex of rats (Hromádka et al., 2008), and the primary olfactory cortex of rats (Poo and Isaacson, 2009), which is elicited by a variety of stimuli. Because a large proportion of neurons is silent during information processing, it is suggested that the global balanced state may not commonly exist in cortical regions.
Based on all the above observations, there are several important issues that remain to be clarified: whether the small group of those active neurons embedded in large heterogeneous neuronal networks can be in a balanced state and, if so, how such a balanced state in the active subnetwork of heterogeneous networks differs from a global balanced state in homogeneous networks; whether the existence of a balanced active subnetwork sensitively depends on the topology of complex networks; what dynamical characteristics those active neurons have in order to maintain a balanced state in heterogeneous networks; how a balanced active subnetwork emerges from heterogeneous network dynamics; and what dynamical implications a balanced active subnetwork has for general complex networks. Below we will address these issues by investigating both the SF networks and SW networks with various types of single-node dynamics. Note that the definition of the SF network in our simulations deviates from the exact definition in which a network is called scale-free if its degree distribution exhibits power-law behavior, at least in its upper tail, i.e., P(k) ∝ k−γ as k → +∞ (Reed, 2006). In numerical simulations, the power-law degree distribution we use takes the form P(k) ∝ k−γ for k ∈ [K0, K1]. Here, the degree of each neuron has a lower bound K0 determined as K0 ≈ 0.95%N according to an experimental observation (Bonifazi et al., 2009), where N is the network size. In addition, the degree of each neuron also has an upper bound K1 determined by Equations (8–10) to ensure that the network is sparsely connected. Because the mean connectivity of a sparse network is much smaller than the network size, the value of K1 is smaller than N.
2. Results
To contrast with networks of heterogeneous topologies below, we first recapitulate the balanced state in a homogeneous network, i.e., an Erdös-Rényi (ER) network of binary neurons (Vreeswijk and Sompolinsky, 1998). In this balanced network, an important feature of its connectivity structure is that neurons are sparsely connected with strong synaptic strength. As discussed in section 4 specifically, the average number of connections K to each neuron from both presynaptic excitatory and presynaptic inhibitory populations is much smaller than the total number of neurons in the network, and the coupling strength is of the order 1/√K. This scaling ensures persistent fluctuations of inputs in the large-K limit.
As shown in Figure S1, the hallmarks of the balanced state in a homogeneous neuronal network are summarized as follows: balanced net input, irregular activity, stationary population-averaged activity, heterogeneity of firing rate, and linear response. A detailed description of the properties can be found in Figure S1 (Supplementary Material). All the balanced phenomena in the binary model can be demonstrated analytically from the standpoint of the classical balanced network theory (Vreeswijk and Sompolinsky, 1998). Note that both the theory and simulations are based on the assumptions that the network is homogeneous, i.e., of the ER type, and that the neuron is of the binary type. These assumptions are strong simplifications of the biological reality. Biological neuronal networks tend not to be homogeneous, e.g., the connections can be of SF (Scannell et al., 1999; Sporns et al., 2004, 2007; Kaiser et al., 2007) or SW type (Sporns and Zwi, 2004; Sporns, 2006; Perin et al., 2011). In general, it is expected that the topology could strongly influence the dynamics of neuronal networks (Shkarayev et al., 2009). A natural and important extension of the theory is to examine the existence of a balanced state in heterogeneous networks. In the following, we first investigate the SF neuronal network, then discuss the case of the SW network. As an extension to the binary neuron model, we resort to the integrate-and-fire (I&F) model in our simulations (Carandini et al., 1996; Rauch et al., 2003; Cai et al., 2005; Rangan et al., 2005; Zhou et al., 2009, 2013).
2.1. Uncorrelated SF Network With I&F Neurons
In this section, we address the question of whether there exists balanced-network dynamics in an uncorrelated SF network using the current-based I&F neuronal model coupled with delta-pulse synaptic currents. This model is computationally simple but biologically more realistic than the binary model (the model details can be found in section 4).
Here, we focus on the SF topology with uncorrelated in-degree between neighboring neurons, and generate the SF networks with a given mean connectivity 2K (each neuron on average has K presynaptic excitatory neurons and K presynaptic inhibitory neurons). A network is called scale-free if its degree distribution exhibits power-law behavior, at least in its upper tail, i.e., P(k) ∝ k−γ as k → +∞ (Reed, 2006). It should be pointed out that the mean connectivity 2K and the decay exponent γ of the power-law distribution are the two main factors that determine the SF network connectivity structure (details can be seen in section 4). We again invoke the coupling strength of order 1/√K to ensure that the network is fluctuation-driven when K is large. For each neuron in the network, the number of its presynaptic excitatory neurons is set to be highly correlated with that of presynaptic inhibitory neurons, consistent with the setting in the classic ER network as well as the experimental observation (Liu, 2004). The external input to each neuron is allowed to be uncorrelated with the cortical input, which will lead to the breakdown of the global balanced state. However, it remains unclear whether a balanced state can exist in the subgroup consisting of the active neurons in such a network.
Our simulation results lead to the conclusion that only a group of neurons in this SF network can have firing activity and that their dynamics follow a balanced state with all its hallmarks. In Figures 1A,B, we illustrate an example of the balance between the excitatory and inhibitory synaptic inputs to the firing neurons. We report the synaptic input at each moment by its time average within a small time window; we select a time bin of 2.5 ms. As shown in Figure S2, we can observe that the net input of each firing neuron can have a relatively small amplitude due to the cancellation of its excitatory and inhibitory parts. In addition, the firing rate of each individual active neuron is linearly correlated with its time-averaged net input, which is consistent with a recent study (Argaman and Golomb, 2018). Just as for neurons in the homogeneous balanced network, the coefficient of variation (CV) of the interspike intervals (ISIs) of each spiking neuron in the SF network is broadly distributed, as shown in Figure 1C. This is consistent with the irregular activity of these neurons with heterogeneous connectivity. As shown in Figure 1D, the population activity is asynchronous and stationary as the percentage of firing neurons fluctuates in time around a constant with a small amplitude. In Figure 1E, we show that strong heterogeneity is captured by the bimodal distribution of the single-neuron firing rate. Compared with the distribution of firing rate in the homogeneous system (Figure S1E), the firing rate distribution in the SF case manifests a sharp peak near the origin (blue bar). Our result shows that there exists a group of neurons with no firing activity (we will further discuss the significance of this phenomenon below). We point out that a group of neurons with no firing activity has been previously found in other heterogeneous networks, e.g., the network with broad Gaussian distributions of degrees (Landau et al., 2016; Argaman and Golomb, 2018). Finally, in Figure 1F, we show the linear response of both the excitatory and inhibitory populations to the external rate. These features still exist asymptotically as the size of the network increases. In particular, the fluctuations of the synaptic currents received by active neurons do not vanish but remain of order one, even for very large network size (Figure S3). To summarize, by the above hallmarks of the balanced state, the stationary state of those neurons with firing activity in the SF I&F neuronal network with delta-pulse synaptic currents can be readily identified as a balanced state.
Figure 1. Properties of an SF balanced network with pulse-current-based I&F neurons. (A) The balanced excitatory and inhibitory inputs into a sample neuron (transient dynamics have been removed). The magnitudes of the excitatory (red) and inhibitory (blue) inputs (scaled by the leakage conductance gL) stay far away from the firing threshold (green), whereas the total input (black) (scaled by gL) crosses the threshold stochastically with its mean (magenta, the value is 0.29) remaining below the threshold; (B) The probability density functions of the excitatory (red), inhibitory (blue) and total (black) inputs (scaled by gL) for the sample neuron in (A). The green line is the threshold; (C) The distribution of the CV value. Here, CV is calculated from the ISIs of each neuron; (D) The upper panel is the raster plot of a partial network (100 sample neurons selected at random from the network, with a time evolution of 300 ms), which exhibits asynchronous neuronal activity; the lower panel shows the percentage of the firing neurons over the network in each time window, where the time window is 2.5 ms. The transient dynamics have been removed; (E) The log-histogram of neuronal firing rates (normalized by the mean firing rate averaged across the entire network). The blue bar encodes quiescent neurons, and the red bars encode neurons with non-zero firing rates; (F) The mean firing rate of the excitatory (red) and inhibitory (blue) populations as a linear function of the external input. Here, K = 400. In panels (A–E), ν0 = 15 Hz. Other parameters are specified in section 4.
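To make the hallmarks above concrete, the following is a minimal analysis sketch (not the authors' code; variable names such as spike_times are hypothetical, and the toy data are synthetic) showing how the ISI-based CV of Figure 1C and the active/quiescent split of Figure 1E can be computed from simulated spike trains.

```python
import numpy as np

def isi_cv(spikes):
    """Coefficient of variation (CV) of the interspike intervals of one spike train."""
    isis = np.diff(np.sort(spikes))
    if len(isis) < 2:
        return np.nan                      # too few spikes to define a CV
    return isis.std() / isis.mean()

def split_active_quiescent(spike_times, t_total):
    """Firing rates, CVs, and indices of active/quiescent neurons.

    spike_times : list of 1-D arrays of spike times (s), one array per neuron
    t_total     : recording duration (s), transients already removed
    """
    rates = np.array([len(s) / t_total for s in spike_times])
    cvs = np.array([isi_cv(s) for s in spike_times])
    active = np.where(rates > 0)[0]        # candidate "active core" neurons
    quiescent = np.where(rates == 0)[0]    # silent neurons
    return rates, cvs, active, quiescent

# Toy example with synthetic spike trains for three neurons
rng = np.random.default_rng(0)
demo = [np.cumsum(rng.exponential(0.1, 50)), np.array([]), np.cumsum(rng.exponential(0.05, 100))]
rates, cvs, active, quiescent = split_active_quiescent(demo, t_total=10.0)
print(rates, cvs, active, quiescent)
```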
2.2. Quiescent and Active Groups in the SF Network
As shown in Figure 1E, we find that the neuronal dynamics of SF networks separate the neuronal population into two subnetworks in our simulations: one consisting of neurons that fire no spikes (blue bar), which will be referred to as the quiescent group; the other consisting of firing neurons (red bar), referred to as the active group (core). Subsequently, we investigate the mechanism underlying how the SF network system evolves into these two different groups and what are the characteristics of dynamics for neurons in these two groups.
In a balanced network, the excitatory and inhibitory inputs to each neuron need to approximately cancel each other. Therefore, the mean-field balanced conditions in the large-K limit shall hold (Vreeswijk and Sompolinsky, 1998) as follows:

fα να + K (JαE mE − JαI mI) ≈ 0,  for α = E, I,     (1)
where mα is the mean firing rate of the αth population, Jαβ is the coupling strength from the βth population to the αth population, and fα and να are the strength and the rate of the external Poisson input to the αth population for α, β = E, I. As shown in Figure 2A, the excitatory and inhibitory inputs are indeed proportional to each other in both the active and quiescent groups. This is consistent with a recent experimental observation (Xue et al., 2014). In addition, it can be clearly observed that the quiescent group is strongly inhibited because the inhibitory input in the quiescent group is more than twice that in the active group given the same excitatory input. By calculating the time-averaged total input to a neuron normalized by its standard deviation and denoting it as ϑ, it is clear from Figure 2B that the distribution of ϑ has a long negative tail for the quiescent group. Consequently, rarely can fluctuations drive their membrane potentials across the threshold. Note that the distribution of ϑ is concentrated around zero for the active group, thus indicating that the neurons in the active group have fluctuation-dominated inputs. In addition, the fact that the quiescent group is strongly inhibited can also be reflected in the cross-correlation structure between the excitatory and inhibitory synaptic inputs to each neuron in the quiescent group (Roxin, 2011). As shown in Figure S4A, the average cross-correlation is higher for neurons in the quiescent group than for those in the active group, indicating that the increase of the excitatory input is quickly followed and canceled by the inhibitory input such that a neuron in the quiescent group is more inhibited than a neuron in the active group at each moment. Therefore, neurons in the quiescent group are not in the balanced state.
Figure 2. The subgroups in the SF network. (A) The excitatory and inhibitory inputs (normalized by gL) into active neurons (red dots) and quiescent neurons (blue dots). The upper panel is for the excitatory population with slopes of −1 (red line) and −2.35 (blue line). The lower panel is for the inhibitory population with slopes of −1 (red line) and −2.11 (blue line). Red and blue lines are linear fitting of the red and blue dots respectively. Here, we select 1,000 active and 1,000 inactive neurons randomly for the plot; (B) The distribution of ϑ as the time average of the total input into each neuron normalized by its standard deviation. Blue line is for the quiescent subgroup, and red line is for the active subgroup; (C) The degree distributions of the entire network (black solid line) and that of neurons in the quiescent group for different coupling strength ratio ϕ = JEI/JEE. The insert is the log-log plot for the same distributions. In our simulations, we fix JII/JEI = 0.9. Here, ϕ = 3 for the blue solid line, ϕ = 2 for the red solid line, ϕ = 1.5 for the green solid line, and ϕ = 1.2 for the magenta solid line. The distributions agree with one another in the region of large degrees. Data in (A,B) are from the case in Figure 1.
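As a small illustration of the ϑ statistic used above (a sketch with synthetic placeholder data, not the paper's recorded inputs), ϑ is simply the time average of a neuron's binned total synaptic input divided by its standard deviation; values near zero indicate fluctuation-dominated, balanced input, while a long negative tail indicates strongly inhibited, quiescent neurons.

```python
import numpy as np

def theta_statistic(total_input):
    """theta = time-averaged total synaptic input of one neuron divided by its
    standard deviation, computed from a binned input trace (1-D array)."""
    sd = total_input.std()
    return total_input.mean() / sd if sd > 0 else np.nan

# Hypothetical (n_neurons, n_bins) array of net inputs in 2.5 ms bins
rng = np.random.default_rng(1)
total_inputs = rng.normal(loc=-0.1, scale=1.0, size=(5, 4000))
theta = np.array([theta_statistic(x) for x in total_inputs])
print(theta)   # near zero: fluctuation-dominated; strongly negative: inhibited
```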
Next, we investigate the issue of how the coupling strength of the network affects the emergence of the active group. In particular, we focus on the competition between the excitatory and inhibitory coupling strength quantified by the ratio ϕ = JEI/JEE with fixed JII/JEI. In our simulation, we fix the network topology while varying the value of ϕ. Note that the degree distribution of the entire network is given by an SF network construction, thus independent of ϕ; while the degree distribution of the active group depends on ϕ — different coupling strengths give rise to different dynamics, which in turn generate different active subnetworks dynamically. Figure 2C displays the degree distribution of the entire network and those of the quiescent groups with different values of ϕ. It is important to observe that these degree distributions agree with one another in the region of large degrees, that is, the quiescent group tends to be composed of the neurons with a large degree. This is consistent with a recent observation (Pyle and Rosenbaum, 2016) that a neuron with a larger degree tends to fire less. Because the neurons in the quiescent group have large degrees, each pair of them tend to share a large amount of common inputs. As shown in Figures S4B,C, by calculating the cross-correlation between the excitatory inputs or inhibitory inputs received by two neurons from the same group, we find that the average cross-correlation value between two inputs of the same type (excitatory or inhibitory) across all pairs of neurons in the quiescent group is indeed significantly greater than that in the active group.
Next, we deploy the coarse-grained approach to further deepen our understanding of the dynamics in this SF system.
2.3. Fokker-Planck Analysis of the SF Network Dynamics
From the mean-field balanced conditions (1), one can obtain the relationship between the population-averaged mean firing rate and the external drive:

mE = (JII fE νE − JEI fI νI) / [K (JEI JIE − JEE JII)],  mI = (JIE fE νE − JEE fI νI) / [K (JEI JIE − JEE JII)].     (2)
As shown in Figure 3A, the predictions of the balanced conditions Equation (2) cannot adequately capture the linear response of the population-averaged mean firing rates to the external inputs obtained in the simulation. To understand quantitatively the influence of the degree heterogeneity of the SF network, we perform the analysis of the Fokker-Planck (FP) equations corresponding to the network dynamics below.
Figure 3. Theoretical analysis of the SF network. (A) Gain curves. The black solid line is obtained from the balanced condition. The theoretical gain curves for the excitatory and inhibitory populations overlap. The red dots (excitatory population) and blue dots (inhibitory population) are obtained from the simulation; (B) The distribution of the cross-correlation coefficient between spike trains of all pairs of neurons in the entire network. It is narrowly centered around zero; (C) The mean firing rate of each neuron ensemble as a function of its degree. Red dots and blue dots are from the simulation. The red and blue lines are obtained from the FP approximation by Equation (22). Red and blue colors encode excitatory and inhibitory populations, respectively. Data in panels (A–C) are from the case in Figure 1.
As shown in Figure 3B, we first note that the firing events between neurons are extremely weakly correlated in the SF network. Therefore, the input into each neuron in the system can be regarded as three Poisson trains (Cinlar, 1972): the external, the excitatory, and the inhibitory synaptic inputs. Accordingly, we can derive the FP equation to describe the dynamics of an I&F neuron with Poisson inputs (Brunel, 2000; Cai et al., 2006). To derive the FP equation for a group of coupled neurons, we need to take the structure of the SF network into account (see section 4 for details). By treating all the neurons that possess the same number of presynaptic neurons as one ensemble, we then derive the FP equation for each ensemble and further obtain its stationary-state solution. We find that the mean firing rate mk for the kth ensemble decays exponentially with the neuronal degree k in that ensemble. Consequently, neurons with a sufficiently large degree will not fire or have extremely low firing rates that can barely be detected in numerical results with a finite simulation time, thus they will be classified into the quiescent group. The exponential decay of the firing rate from the FP analysis has been further verified in numerical simulations as shown in Figure 3C.
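The exponential dependence of the ensemble firing rate on degree can be checked directly from simulation output. Below is a minimal data-analysis sketch (the per-neuron in-degrees and rates are synthetic placeholders, not the paper's data): neurons are grouped by in-degree and a line is fitted to the logarithm of the nonzero ensemble-averaged rates.

```python
import numpy as np

def rate_vs_degree(degrees, rates):
    """Ensemble-average the firing rate over neurons sharing the same in-degree."""
    ks = np.unique(degrees)
    mean_rates = np.array([rates[degrees == k].mean() for k in ks])
    return ks, mean_rates

def fit_exponential_decay(ks, mean_rates):
    """Fit mean_rate ~ exp(intercept - decay*k) over ensembles with nonzero rate."""
    mask = mean_rates > 0
    slope, intercept = np.polyfit(ks[mask], np.log(mean_rates[mask]), 1)
    return intercept, -slope               # decay constant is minus the slope

# Synthetic example: rates that decay exponentially with degree, plus noise
rng = np.random.default_rng(2)
degrees = rng.integers(380, 1500, size=2000)
rates = np.exp(3.0 - 0.004 * degrees) * rng.uniform(0.5, 1.5, size=2000)
ks, mk = rate_vs_degree(degrees, rates)
intercept, decay = fit_exponential_decay(ks, mk)
print(f"fitted decay constant per unit degree: {decay:.4f}")   # ~0.004 here
```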
2.4. Balanced Active Core and Conditions for Its Existence
From the above discussion, it can be clearly seen that the entire SF network is unbalanced, whereas the active subnetwork is balanced. We now focus on the balanced subnetwork that contains only the active neurons and the connectivity structure of these neurons. We will refer to this subnetwork as an active core, which captures the spiking activity and the effective communication of the entire neuronal network.
We first investigate the issue of how to quantitatively characterize the features of the active core. From Figure 4A, it is important to note that the degree distribution of the neurons in the active core is sharply peaked, resembling that of neurons in homogeneous networks. Why does the degree distribution of the active core in the heterogeneous SF network possess the characteristics of a homogeneous ER network? For each neuron, we first examine the fraction of its active presynaptic neurons amongst all its presynaptic neurons, which will be denoted as p below. The distribution of p, as shown in the inset of Figure 4A, is sufficiently narrow to be approximated as a constant. In general, the value p will be affected by the E-I input strength ratio ϕ and the decay exponent of the degree distribution γ, as shown in Figure S5. Note that, for each neuron, p can also be viewed as the probability of finding one of its presynaptic neurons to be active. The probability of finding a neuron with w active presynaptic neurons can then be derived from the law of total probability, P(w) = Σk P(w|k) P(k), where P(k) is the probability of finding a neuron having k presynaptic neurons, which, as in the case here, follows a power law, P(k) = ck−γ. By ignoring the correlation between the degree distribution of the active core and the formation of the active core, the conditional probability P(w|k) can be approximated by a binomial distribution, P(w|k) ≈ C(k, w) p^w (1 − p)^(k − w), where C(k, w) denotes the binomial coefficient. Further approximating the binomial distribution by a Gaussian, we can derive an approximation for P(w):

P(w) ≈ Σk c k^(−γ) [2πkp(1 − p)]^(−1/2) exp[−(w − kp)^2 / (2kp(1 − p))].     (3)
Figure 4. Properties of the active core. (A) The degree distribution in the active core. Numerical results (blue bars) can be well fitted by our prediction (Equation 3, red line). The insert is the distribution of p from the numerical simulation. For any single neuron, p is the fraction of the number of its active presynaptic neurons over the number of its total presynaptic neurons. The distribution is narrowly centered around a constant; (B) Relationship between the active core size and the network size. In the simulations, the sparsity K/N = 0.025 is fixed, while N and K vary in different cases. In each network of different size, we choose K0 ≈ 0.95%N, and the value of K1 according to Equation (8). The size (upper) and the mean connectivity (lower) of the active core both grow linearly with those of the entire network. The black solid line is a linear fitting of the simulation results (blue dots), with R2 = 0.993 for the upper panel and R2 = 0.990 for the lower panel; (C) The linear population response to the external drive in the active core. Black solid line is the prediction from the mean-field balanced conditions in the active core. Red (excitatory population) and blue (inhibitory population) dots are obtained from the simulation results. Data in (A,C) is from the case shown in Figure 1.
The probability P(w) is a sum of a series of Gaussian terms with the coefficient of each term weighted by k−γ. Therefore, a larger value of k has a smaller contribution to the sum. In particular, for sufficiently large γ, the dominant term is essentially a single Gaussian. When γ is O(1) as set in our simulations, the degree distribution of the active core still resembles a Gaussian and can be captured by Equation (3). As shown in Figure 4A and Figures S5B–D, the prediction by Equation (3) is in very good agreement with the measured degree distribution of the active core for various values of γ.
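A minimal numerical sketch of evaluating Equation (3) follows; the values of γ, p, K0, and K1 are placeholders for illustration, not the fitted values underlying Figure 4A.

```python
import numpy as np

def active_core_degree_pdf(w, gamma, p, k_min, k_max):
    """P(w) ~ sum over k of c*k^(-gamma) * Normal(w; mean=k*p, var=k*p*(1-p)),
    i.e., the Gaussian-mixture approximation of the active-core in-degree distribution."""
    k = np.arange(k_min, k_max + 1)
    weights = k ** (-float(gamma))
    weights /= weights.sum()                   # c*k^(-gamma), normalized on [k_min, k_max]
    var = k * p * (1.0 - p)
    gauss = np.exp(-(w[:, None] - k * p) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)
    return gauss @ weights

# Placeholder parameters for illustration only
gamma, p, k_min, k_max = 2.6, 0.6, 380, 2000
w = np.arange(0, int(1.5 * k_max * p), dtype=float)
pdf = active_core_degree_pdf(w, gamma, p, k_min, k_max)
print("total probability mass:", pdf.sum())    # close to 1 on a unit-spaced grid
```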
Denoting the size and mean connectivity (in-degree) in the active core as Nactive and Kactive respectively, we next examine the relationship between Nactive and N as well as Kactive and K. Recall that K is the average presynaptic connectivity of the original SF network. Numerically, as shown in Figure 4B, Nactive and Kactive increase linearly with N and K respectively. As a consequence, when K → +∞, N → +∞, we also have Kactive → +∞, Nactive → +∞. Therefore, the dynamics of the active core possess the same asymptotic behaviors as those of an ER network in the large-K limit.
By considering the active core as a homogeneous network, we can numerically solve its population-averaged mean firing rate from the following equations derived from the balanced condition in the large-K (Kactive) limit

fα να + Kactive (JαE mE − JαI mI) ≈ 0,  for α = E, I,     (4)
where the averaged connectivity of the active core Kactive is read out from the simulation. As shown in Figure 4C, the linear response property of the active core can be well-captured by the predictions from Equation (4). The successful prediction also suggests the validity of the assumption in the analysis that the active core can be viewed as a balanced homogeneous network.
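Since Equation (4) is linear in the two population rates, it can be solved directly. The sketch below does this with placeholder coupling and input strengths (not the paper's values) and shows the rates growing linearly with the external rate ν0, as in Figure 4C.

```python
import numpy as np

def balanced_core_rates(nu0, K_active, J, f, nu_E_per_nu0, nu_I_per_nu0):
    """Solve f_a*nu_a + K_active*(J_aE*mE - J_aI*mI) = 0 for (mE, mI), a = E, I."""
    A = K_active * np.array([[J['EE'], -J['EI']],
                             [J['IE'], -J['II']]])
    b = -np.array([f['E'] * nu_E_per_nu0 * nu0,
                   f['I'] * nu_I_per_nu0 * nu0])
    mE, mI = np.linalg.solve(A, b)
    return mE, mI

# Placeholder parameters (illustrative only); nu_E = nu0*K, nu_I = 0.8*nu0*K as in the text
K = 400
J = {'EE': 1 / np.sqrt(K), 'EI': 2 / np.sqrt(K), 'IE': 1 / np.sqrt(K), 'II': 1.8 / np.sqrt(K)}
f = {'E': 1 / np.sqrt(K), 'I': 1 / np.sqrt(K)}
for nu0 in (5.0, 10.0, 15.0):                  # external rate (Hz)
    mE, mI = balanced_core_rates(nu0, K_active=250, J=J, f=f,
                                 nu_E_per_nu0=K, nu_I_per_nu0=0.8 * K)
    print(nu0, round(mE, 2), round(mI, 2))     # both rates scale linearly with nu0
```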
Note that the active core encompasses all spike events in the SF neuronal network, with connectivity similar to that of an ER network. The characteristics of the balanced state persist in the active core, that is, the properties of balanced net input, irregular activity, stationary population-averaged activity, heterogeneity of firing rate, and linear response all hold. Clearly, our results demonstrate that there exists a balanced active core in the SF neuronal network.
Next, through theoretical analysis and numerical simulations, we have found that, in order to obtain the balanced active core, the following three conditions shall hold:
(1) The cortical and external input strengths shall satisfy the following relation

fE νE / (fI νI) > JEI / JII > 1,     (5)
which can be derived from the balanced condition (Equation 4) by requiring the firing rates to be positive values and setting JIE = JEE for simplicity. Note that Equation (5) is consistent with the conditions derived from a homogeneous network (Vreeswijk and Sompolinsky, 1998).
(2) The excitatory and inhibitory in-degrees for each neuron shall be highly correlated. This condition is consistent with the experimental observation that a conserved ratio of the numbers of excitatory and inhibitory synapses has been observed throughout the dendrites of cultured hippocampal neurons (Liu, 2004).
(3) The smallest degree K0 in the network is required to be of the same order as the population-averaged degree K. In fact, by analyzing the FP equation of the neuronal ensemble with the smallest degree K0, the total input to each neuron in this ensemble shall be close to zero in order to achieve the balance between the excitatory and inhibitory inputs, i.e.,

K f ν0 + K0 [Γ/(1 + Γ)] JαE rE − K0 [1/(1 + Γ)] JαI rI ≈ 0,     (6)
where f is the external input strength; ν0, rE, and rI are the average firing rates of the presynaptic external, cortical excitatory, and cortical inhibitory neuronal populations, respectively; K is the number of presynaptic external neurons, identical to the mean connectivity of the SF network; and Γ is the ratio of the number of each neuron's presynaptic excitatory neurons to the number of its presynaptic inhibitory neurons. Because f, JαE, and JαI are of order 1/√K, while ν0, rE, rI, and Γ are of order one, K0 is required to be of the same order as K in order for the total input to cancel out (a numerical check of this scaling argument is sketched below).
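The scaling argument can be checked with a short numerical sketch: with f, JαE, and JαI of order 1/√K, the external drive K·f·ν0 is of order √K, so the recurrent term can cancel it only when K0 is comparable to K. All rates and strengths below are placeholders; they are chosen so that the cancellation is exact at K0 = K.

```python
import numpy as np

def net_mean_input(K, K0, Gamma=1.0, nu0=15.0, rE=10.0, rI=20.0):
    """Mean total input to a neuron of the smallest-degree (K0) ensemble:
    K*f*nu0 + K0*Gamma/(1+Gamma)*J_E*rE - K0/(1+Gamma)*J_I*rI,
    with f, J_E, J_I of order 1/sqrt(K) (illustrative magnitudes)."""
    f, J_E, J_I = 1.0 / np.sqrt(K), 1.0 / np.sqrt(K), 2.0 / np.sqrt(K)
    external = K * f * nu0
    recurrent = K0 * (Gamma / (1 + Gamma) * J_E * rE - 1.0 / (1 + Gamma) * J_I * rI)
    return external + recurrent

K = 400
for K0 in (40, 200, 400):                  # K0 << K versus K0 of the same order as K
    print(K0, net_mean_input(K, K0))       # 270.0, 150.0, 0.0: only K0 ~ K allows cancellation
```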
As one example, shown in Figure 5A, breaking the first condition results in a synchronous state of the network. In addition, as shown in Figure 5B, breaking the second or third condition causes the ratio of excitatory to inhibitory current input to neurons in the active group to deviate from unity, which breaks the balanced input condition and, as a consequence, the balanced state of the subnetwork. Keeping excitation and inhibition balanced in the active neurons requires a sufficiently high correlation between the numbers of excitatory and inhibitory presynaptic connections, as shown in Figure 5C. Note that these conditions for the existence of the balanced active core are different from those proposed by previous works for the existence of the global balanced state (Landau et al., 2016), in which the number of cortical inhibitory inputs is required to be correlated with both the number of cortical excitatory inputs and the number of external inputs. In contrast, the existence of a balanced active core only requires that the number of cortical inhibitory inputs be correlated with that of cortical excitatory inputs but not with that of external inputs.
Figure 5. Conditions for the existence of the balanced active core. (A) Synchronized network dynamics induced by breaking condition (1). In the simulation, K = 400. The raster plot (upper panel) and the percentage of firing neurons over time indicate the synchronous dynamics in the system. Red and blue colors encode the excitatory and inhibitory populations, respectively; (B) The distributions of E-I current ratio in SF networks satisfying all the three conditions (black), after breaking condition (2) (blue), and after breaking condition (3) (red). The blue curve is plotted by setting the cross-correlation coefficient of the presynaptic excitatory degree and inhibitory degree as 0.2; and the red curve is plotted by setting K0 = 80, K = 400; (C) The ratio distribution corresponding to the SF network with different cross-correlation coefficients of the presynaptic excitatory degree and inhibitory degree. The cross-correlation coefficient equals 0.99 (black), 0.8 (red), 0.6 (green), 0.4 (magenta), and 0.2 (blue).
2.5. Correlated SF Networks With I&F Neurons
Because the architectural degree-correlation may play an important role in the dynamics of a system (Shkarayev et al., 2009), we generate SF networks with degree-correlation between neighboring nodes using a reshuffling strategy (Xulvi-Brunet and Sokolov, 2005). A balanced active core can still arise in correlated SF neuronal networks. An example is shown in Figure S6, in which the five hallmarks of balanced net input, irregular activity, stationary population-averaged activity, heterogeneity of firing rate, and linear response again persist robustly. The degree distribution exponent γ of the SF network used here is γ = 2.6, which is the same as that of the SF network for the uncorrelated case reported above (Figure 1). The degree correlation coefficient for the SF network in Figure S6 is ρ = 0.03. Similar to an SF network without degree correlation, the distribution of single neuron firing rates also possesses a high peak at zero. The SF network with degree correlation can also be decomposed into two subnetworks of distinct dynamics characterized by their firing rates. Figure S7B demonstrates that the structure of the corresponding active core also resembles that of a homogeneous network.
By generating SF networks with different correlation coefficients and with γ = 2.6, all these SF systems exhibit dynamics with a balanced active core. The degree distribution of the active core can be successfully described by Equation (3) for all values of ρ ranging from −0.3 to 0.31 as shown in Figure S7. The properties of the dynamics in these active cores are again similar to those of an ER balanced network.
In summary, our results show that the degree correlation between different nodes does not affect the properties of the balanced active core in SF networks. For SF neuronal networks with degree correlations, the existence of the active core persists with the structure similar to that of an ER network and the active core possesses all the characteristics of the balanced state.
3. Discussion
In this work, we have shown that a sparsely but strongly connected SF network of I&F neurons can reach the balanced state in the active subgroup if the network satisfies three conditions: the inequality relating the ratio of external input strengths to the ratio of coupling strengths between excitation and inhibition (Equation 5), a high correlation between the numbers of presynaptic cortical excitatory and presynaptic cortical inhibitory neurons for each neuron, and the smallest degree K0 being of the same order as the population-averaged degree K. Despite the fact that all neurons in the SF network receive external inputs, the network is naturally separated into two subnetworks: one is the quiescent group consisting of silent neurons and the other is the active group consisting of neurons with non-zero firing rates. The separation of active and quiescent subgroups has also been observed in other heterogeneous networks with broad Gaussian degree distributions (Landau et al., 2016; Argaman and Golomb, 2018). The subnetwork consisting of all the active neurons with the connections between these neurons is then defined as the active core here. From our simulation, this active core possesses a degree-distribution characteristic of a homogeneous ER network, which can be described well by our theoretical analysis (Equation 3). In addition, the active core displays dynamical properties similar to those of the balanced state of an ER network (Figures 1, 4).
In addition, our results suggest that the balanced active core can be found in various heterogeneous networks, as shown in the results below. We also find that the silent neurons always possess larger degrees than the active neurons. This can be understood intuitively as follows: if the number of external input connections is fixed, and the ratio of the numbers of excitatory and inhibitory synapses is maintained, neurons in the heterogeneous network that receive a large number of recurrent connections receive effectively more inhibition and are therefore silent.
3.1. Balanced Active Core in Other Networks
In addition to the pulse-coupled I&F neurons, for the SF network of either binary neurons or smooth-current-based I&F neurons, as shown in Figures S8–S10, the balanced active core can also be found. These results imply that the existence of the balanced active core is robust with respect to detailed single-neuron dynamics.
It has been shown that different architectural degree-correlations can induce different dynamical properties in SF networks (Krapivsky and Redner, 2001). These correlations can strongly influence the dynamics of the system (Shkarayev et al., 2009). However, as far as the balanced state is concerned, there still exists a balanced active core in SF neuronal networks with degree correlations as shown in Figures S6, S7, in which an ER-like active core controls its dynamics. In addition, the properties of the balanced state have been studied for various SF networks with a different decay exponent of degree distribution γ. As shown in Figures S5, S11–S13, the value of γ does not affect the existence of the balanced active core, but affects the size of the active core.
As certain neuronal networks in the brain have been shown to exhibit small-world (SW) characteristics (Perin et al., 2011), we have also conducted simulations with SW connectivity. An active core of the balanced dynamics is again observed in the SW neuronal network with different rewiring probabilities (Figure S14). The degree distribution in the active core is still close to that of an ER network. Our results suggest that the balanced state embedded in the active core may broadly exist for various heterogeneous networks (Figure 4 and Figures S5, S7, S10, S14).
3.2. Heterogeneity in External Input
Accounting for the fact that the external inputs may vary from neuron to neuron, we have also examined the case of heterogeneous inputs in the simulation. Here, we choose the rate of the external input to the ith neuron in the αth population from a Gaussian probability distribution with its mean να and standard deviation CV·να for α = E, I, where CV is the coefficient of variation. As shown in Figure S15, for CV ranging from 0.1 to 0.4, we can still observe the existence of the active core, in which neurons receive balanced excitatory and inhibitory inputs. This indicates that the broadly-distributed external input may not affect the existence of the active core (Figure S16).
In addition, Figures S17A,B provide an example of the heterogeneous strength of the external input following a log-normal distribution (Song et al., 2005) with a uniformly-distributed rate for different neurons. For this case, the dynamics still manifest a balanced active core whose in-degree distribution is again in excellent agreement with the prediction of Equation (3), as shown in Figure S17C. These results may suggest that the active core can exist for various external inputs.
3.3. Biological Relevance of the Balanced Active Core
Many neuronal networks in the brain exhibit statistically heterogeneous connectivity structures. It has been observed that the connections of the neurons in layer 5 of the rat visual cortex display various highly clustered three-neuron connectivity patterns (Song et al., 2005). In addition, neuronal connectivity has been found to possess SF properties in rat hippocampal networks (Bonifazi et al., 2009). The network connectivity between neurons in the somatosensory cortex of neonatal animals possesses the attributes of a SW network (Perin et al., 2011).
In addition, experimental studies have shown that there often exists a small subnetwork of highly active neurons along with a large proportion of neurons being silent in the neocortex of the brain. For example, during a head-fixed object localization task, only about half of all the neurons in a barrel column have been found to fire (O'Connor et al., 2010). Experimental recordings in the primary auditory cortex of unanesthetized rats have shown that 50% of the neural population failed to respond to any of the simple stimuli (Hromádka et al., 2008). Furthermore, in vivo, each odor can evoke the activity of only about 10% of neurons in layer 2/3 of the anterior piriform cortex (Poo and Isaacson, 2009).
Our results show that, starting from heterogeneous network connectivity, the emergent network dynamics naturally captures the phenomenon of sparse coding and balanced inputs in a group of neurons. In contrast, in the traditional theory of a balanced network, the majority of neurons are balanced and thus fire actively. Therefore, in such a network, information can hardly be encoded by only a few active neurons with the other neurons being quiescent. Note that, in order to achieve a balanced active core, our model assumes that the numbers of cortical excitatory and inhibitory inputs should be highly correlated, which has been supported by experimental observation (Liu, 2004).
3.4. Comparison With Previous Studies
Several studies have explored the dynamics of networks with heterogeneous connections. For example, in a recent study (Landau et al., 2016), a large fraction of silent neurons has been found in networks with sufficiently broad degree distributions. Moreover, it has been demonstrated that a heterogeneous network cannot reach a global balanced state in general when the cortical excitatory, cortical inhibitory and external in-degrees are uncorrelated, because some neurons either never fire or fire fairly regularly in the network (Landau et al., 2016). To achieve the global balanced state, the authors introduced adaptation, plasticity, or degree correlation into the network. In particular, by setting the number of cortical inhibitory inputs to be correlated with that of cortical excitatory inputs and with that of external excitatory inputs, the whole network will stay in the global balanced state (Landau et al., 2016), in which all neurons in the network receive balanced input and fire irregularly. Thus the balanced active group in such a case is the entire network. Different from their settings, here we set the correlation only between the numbers of cortical excitatory and inhibitory inputs, leaving the number of external inputs uncorrelated, and clearly show that the mean firing rate of each neuron decays exponentially with its in-degree, giving rise to the emergence of the active core in the network consisting of small-degree neurons. In addition, we further investigate the property of the active core, i.e., the subnetwork composed of the active neurons, and find that the balanced state does exist in the active core of the whole network. Our simulation shows that the balanced active core exists in a variety of networks with different degree distributions (scale-free, small-world, and broad Gaussian), as long as the cortical excitatory inputs are correlated with the cortical inhibitory inputs. Therefore, the existence of the active core seems not to depend on the degree distribution, but rather on the correlation of cortical excitatory and inhibitory inputs. Moreover, by increasing the correlation between the number of external inputs and the number of cortical inputs to each neuron, the size of the active core will increase accordingly. Note that the increased correlation level has different effects on the active and quiescent groups. Intuitively, neurons in the active core are already balanced by definition, thus the increase of correlation can only modulate the firing rates of these active neurons but has little effect on the size of the active core. However, for neurons in the quiescent group, such as neurons with degree k, the increase of correlation will drive these neurons toward the balanced state by satisfying the balanced condition, i.e., kfν0 + kJαErE − kJαIrI ≈ 0. Therefore, as the correlation level increases, the size of the active core in the network will increase by recruiting more and more neurons that used to be in the quiescent group. Eventually, the size of the active core can become the same as the network size. We note that the balanced active core can also be found in networks with broad Gaussian degree distributions if the numbers of cortical excitatory and cortical inhibitory inputs to each neuron are correlated while both are uncorrelated with the number of external inputs.
Another study (Argaman and Golomb, 2018) investigated a network of 150 inhibitory neurons in the barrel cortex with heterogeneous connections among them, in which neurons receive heterogeneous excitatory inputs from thalamic neurons. That network is neither strongly coupled nor sparsely connected. In addition, the number of inhibitory cortical inputs is set to be uncorrelated with the number of excitatory inputs. In this type of modestly-sized network, it has been found that the fraction of silent neurons is very small, and most neurons appear to be in the balanced state. Here we have investigated strongly coupled but sparsely connected networks consisting of a large number of both excitatory and inhibitory neurons. With these different settings, we have shown that there is a large fraction of silent neurons in the network, and the balanced state only exists for a small fraction of neurons in the entire network, i.e., the active core.
Moreover, we have found that the degree distribution of the active core is close to the homogeneous connectivity structure. We note in passing that the emergence of the balanced active core does not naturally result from the high correlation between the numbers of the cortical excitatory inputs and the cortical inhibitory inputs. This can be illustrated by the following facts: first, large-degree neurons with such high correlation structure fail to reach the balanced state in general; second, small-degree neurons with such high correlation structure also fail to reach the balanced state if the network topology does not satisfy the third condition as discussed in section 4 (also shown in Figure 5). Another study has used a similar theoretical framework to investigate the effective gain in a heterogeneous network (Roxin, 2011). They also treated neurons with the same in-degree as one ensemble. However, they directly used the firing-rate-based neuron model, rather than deriving the FP equation as in our case. Moreover, the balanced property of the network has not been investigated in their work. In summary, the finding of the existence of the balanced active core embedded in a heterogeneous network distinguishes our work from several studies exploring the dynamics of networks with heterogeneous connections. Our work provides a potential dynamical scenario for the emergence of a balanced active core in a heterogeneous network in the brain.
4. Materials and Methods
4.1. Degree Distribution and Degree Correlation
In the study of networks, the degree of a node in a network is the number of connections it has to other nodes. For a directed network, nodes have two different degrees, the in-degree, which is the number of incoming edges to a node, and the out-degree, which is the number of outgoing edges from a node. In this work, we mainly focus on the in-degree distribution, and just use degree instead of in-degree in this work for ease of discussion. The degree distribution P(k) of a network is the probability of finding a k-node, where the k-node is a node of degree k. The degree distribution of a directed ER network follows the Poisson distribution, P(k) = λ^k e^(−λ)/k!, which can be approximated by a Gaussian distribution for large λ (λ ≫ 1), λ being the average degree of the network. The degree distribution of an SF network, by definition, follows a power-law distribution P(k) ∝ k−γ, γ being the decay exponent (Barabási et al., 1999).
Beyond the degree distribution, it is also important to characterize the degree-correlation between neighboring nodes for large networks of complex structures (Pastor-Satorras et al., 2001; Newman, 2003a). In general, a network may display degree-correlations if the wiring probability between the high- and low-degree nodes statistically significantly differs from the independent random wirings between nodes. In our work, the degree-correlation is quantified by the Pearson correlation coefficient between the in-degrees for pairs of nodes linked by a directed edge.
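For concreteness, the following is a small sketch of this degree-correlation measure (an edge-list representation and synthetic data are assumed, not the paper's networks): the Pearson correlation between the in-degrees of the source and target nodes over all directed edges.

```python
import numpy as np

def edge_indegree_correlation(edges, in_degree):
    """Pearson correlation of in-degrees across directed edges (source j -> target i).

    edges     : integer array of shape (n_edges, 2), rows are (source, target) node ids
    in_degree : array of in-degrees indexed by node id
    """
    src_k = in_degree[edges[:, 0]]
    tgt_k = in_degree[edges[:, 1]]
    return np.corrcoef(src_k, tgt_k)[0, 1]

# Tiny synthetic example: random wiring gives a coefficient close to zero
rng = np.random.default_rng(3)
n = 200
in_degree = rng.integers(5, 50, size=n)
edges = rng.integers(0, n, size=(2000, 2))
print(edge_indegree_correlation(edges, in_degree))
```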
4.2. The Generation of Scale-Free Neuronal Networks
To generate an SF neuronal network, we first generate the in-degree (out-degree) of each neuron in the αth population based on the power-law distribution Pα, in (Pα, out) for α = E, I. For ease of discussion, we set the first NE nodes in the N-node network to be excitatory neurons, and those remaining are inhibitory. Then we generate N in/out-degree pairs (ki, li), one for each neuron, and calculate the sums Σi ki and Σi li. We use the method in Newman et al. (2001) to force the conservation of in-degree and out-degree, i.e., Σi ki = Σi li. To be specific, when Σi ki ≠ Σi li, we randomly select a neuron i and regenerate a new pair of degrees (ki, li) from the corresponding degree distributions. We repeat the procedure until Σi ki = Σi li. Then, we further define Γi as the ratio of the number of presynaptic excitatory neurons to that of presynaptic inhibitory neurons for the ith neuron, so that ki, E = kiΓi/(1 + Γi) is the number of the excitatory incoming connections, and ki, I = ki/(1 + Γi) is the number of the inhibitory incoming connections for the ith neuron. Various levels of cross-correlations between the number of excitatory cortical inputs {ki, E} and the number of inhibitory cortical inputs {ki, I} can be obtained by choosing different values of {Γi}. Finally, we make directed connections in the network according to {(ki, E, ki, I), li} with the configuration model (Newman et al., 2001; Newman, 2003b). Note that the degrees of the connected nodes in such an SF network are uncorrelated (Aiello et al., 2000; Newman et al., 2001). To generate an SF network with degree correlation, we use a simple edge-node reshuffling strategy, which is a simplified version of the algorithm in Xulvi-Brunet and Sokolov (2005). In our simulations, unless otherwise specified, the decay exponent is chosen to be γ = 2.6, which is within the normal range of γ for real-world SF networks according to Barabási et al. (1999).
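The following is a simplified sketch of this construction (a "soft" configuration model in which presynaptic partners are drawn with probability proportional to out-degree; unlike the exact procedure above, it neither enforces equality of the total in- and out-degree nor removes multi-edges and self-loops). All names and parameter values are illustrative.

```python
import numpy as np

def sample_powerlaw(n, gamma, k0, k1, rng):
    """Draw n degrees from P(k) proportional to k^(-gamma) on the integers [k0, k1]."""
    k = np.arange(k0, k1 + 1)
    p = k ** (-float(gamma))
    return rng.choice(k, size=n, p=p / p.sum())

def build_sf_network(n, gamma, k0, k1, gamma_ratio=1.0, seed=0):
    """Soft configuration-model wiring with correlated E/I in-degrees.
    Excitatory neurons are indices [0, n//2); inhibitory are [n//2, n)."""
    rng = np.random.default_rng(seed)
    n_exc = n // 2
    k_in = sample_powerlaw(n, gamma, k0, k1, rng)
    k_out = sample_powerlaw(n, gamma, k0, k1, rng).astype(float)
    # split each in-degree into excitatory and inhibitory parts via Gamma_i (here constant)
    k_in_E = np.round(k_in * gamma_ratio / (1.0 + gamma_ratio)).astype(int)
    k_in_I = k_in - k_in_E
    pE = k_out[:n_exc] / k_out[:n_exc].sum()       # excitatory sources, weight ~ out-degree
    pI = k_out[n_exc:] / k_out[n_exc:].sum()       # inhibitory sources, weight ~ out-degree
    edges = []
    for i in range(n):
        src_E = rng.choice(n_exc, size=k_in_E[i], p=pE)
        src_I = rng.choice(np.arange(n_exc, n), size=k_in_I[i], p=pI)
        edges += [(int(j), i) for j in np.concatenate([src_E, src_I])]
    return np.array(edges), k_in_E, k_in_I

edges, kE, kI = build_sf_network(n=2000, gamma=2.6, k0=19, k1=200, gamma_ratio=1.0, seed=1)
print(edges.shape, kE.mean(), kI.mean())
```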
We denote by Aαβij the element of the adjacency matrix, with Aαβij = 1 if there is a directed edge from the jth neuron in the βth population to the ith neuron in the αth population and Aαβij = 0 otherwise, where α, β = E, I. Each neuron is connected, on average, to K presynaptic excitatory neurons and K presynaptic inhibitory neurons. Because each neuron is connected to a large number of presynaptic neurons in the cortex (Braitenberg and Schüz, 1998), the value of K should be chosen sufficiently large to reflect this fact of connectivity. In addition, electrophysiological recordings from cortical neurons show that the probability of connection is often rather low, thus yielding a sparse network (Holmgren et al., 2003). Therefore, the value of K should be chosen to be much smaller than the size of the population. Cells in the primary visual cortex of adult cats were found experimentally to fire much more irregularly in vivo than in vitro when the same stimulus was used (passing the same current through the electrode), indicating that fluctuations of the synaptic inputs are particularly important for irregular spiking (Holt et al., 1996). In light of this, we choose the scaling of the coupling strength to be of order 1/√K, so that fluctuations of order one persist in the large-K limit in the total synaptic input to a neuron (van Vreeswijk and Sompolinsky, 1996; Vreeswijk and Sompolinsky, 1998; Vogels et al., 2005). We adopt this scaling for all the neuron models used in this work.
Next we explain how to find a power-law degree distribution with the decay exponent γ and the mean connectivity 2K for the generation of an SF network. Because the network size is always finite in numerical simulations, the degree of each neuron varies and has a lower bound denoted as K0 and an upper bound denoted as K1. Therefore, the power-law distribution takes the form

P(k) = C k^(−γ),  K0 ≤ k ≤ K1,
with a normalization constant C. By the definition of probability and its mean, we have

Σ from k = K0 to K1 of C k^(−γ) = 1,   Σ from k = K0 to K1 of C k^(1−γ) = 2K.     (7)
Intuitively, for fixed K and γ in an SF network, two parameters K0 and K1 cannot be simultaneously determined by Equation (7) since there are three unknowns, C, K0, and K1 and only two equations in Equation (7). This can be shown as follows.
(1) For γ > 0 and γ ≠ 1, 2, approximating the sums by integrals, Equation (7) can be approximately reformulated as

C (K1^(1−γ) − K0^(1−γ)) / (1 − γ) ≈ 1,   C (K1^(2−γ) − K0^(2−γ)) / (2 − γ) ≈ 2K.
Subsequently, we can obtain the following relationship

2K ≈ [(1 − γ)/(2 − γ)] · (K1^(2−γ) − K0^(2−γ)) / (K1^(1−γ) − K0^(1−γ)).     (8)
(2) For γ = 1, Equation (7) can be approximated as

C ln(K1/K0) ≈ 1,   C (K1 − K0) ≈ 2K.
Then, we can obtain

2K ≈ (K1 − K0) / ln(K1/K0).     (9)
(3) For γ = 2, it can similarly be approximated that

C (1/K0 − 1/K1) ≈ 1,   C ln(K1/K0) ≈ 2K.
Similarly, we have

2K ≈ K0 K1 ln(K1/K0) / (K1 − K0).     (10)
Then, given the values of K and γ, we can choose proper K0 and K1 following one of Equations (8)–(10) to ensure that condition (3) in section 2.4 holds. Since the starting point of the power-law degree distribution, normalized by the network size, is about 0.95% according to an experimental observation (Bonifazi et al., 2009), we choose K0 = 380 ≈ 0.95% × 4 × 10^4 accordingly in many of our simulations, where 4 × 10^4 is the network size. The value of K0 is set to be different from 380 only when we investigate the effect of the network size in Figure 4B. Note that K1 cannot be larger than the network size.
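A short sketch of this choice follows, assuming the continuum (integral) approximation of Equation (7) used in Equations (8)–(10): given K, γ, and K0, the upper cutoff K1 is found by root bracketing on the mean-degree relation. The parameter values follow the text; scipy provides the root finder.

```python
import numpy as np
from scipy.optimize import brentq

def mean_degree(k0, k1, gamma):
    """Continuum (integral) approximation of the mean of P(k) ~ k^(-gamma) on [k0, k1]."""
    if np.isclose(gamma, 1.0):
        return (k1 - k0) / np.log(k1 / k0)                   # Equation (9)
    if np.isclose(gamma, 2.0):
        return k0 * k1 * np.log(k1 / k0) / (k1 - k0)         # Equation (10)
    a, b = 1.0 - gamma, 2.0 - gamma                          # Equation (8)
    return (a / b) * (k1**b - k0**b) / (k1**a - k0**a)

def solve_k1(mean_target, k0, gamma, k1_max=1e7):
    """Find K1 such that the mean degree equals 2K (if a solution exists below k1_max)."""
    return brentq(lambda k1: mean_degree(k0, k1, gamma) - mean_target, k0 * (1 + 1e-6), k1_max)

K, K0, gamma = 400, 380, 2.6
K1 = solve_k1(2 * K, K0, gamma)
print(f"K1 = {K1:.1f}")
```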
4.3. The Current-Based I&F Model With Delta-Pulse Coupling
In our work, the sub-threshold membrane potential of an I&F neuron in a population obeys the following dynamics (Dayan et al., 2001; Newhall et al., 2010; Zhou et al., 2010)

dvαi/dt = −gL (vαi − ϵR) + Iαi(t),     (11)

where vαi is the membrane potential of the ith neuron in the αth population (α = E, I), gL is the leakage conductance, ϵR is the resting voltage, and Iαi is the driving current. The voltage vαi evolves according to Equation (11) while it remains below the firing threshold ϵT. When vαi reaches ϵT, the ith neuron is said to fire a spike, and vαi is set to the value of the reset voltage ϵR. Upon resetting, vαi is governed by Equation (11) again. At the same time, appropriate currents induced by the spike are injected into all other postsynaptic neurons. We use physiological values for the parameters gL = 50 s−1, ϵR = −70 mV and ϵT = −55 mV. Upon non-dimensionalization, we have normalized ϵT = 1.0 and ϵR = 0.0.
The instantaneous current injected into the ith neuron of the αth population has the form Iiα(t) = IiαE(t) − IiαI(t), where IiαI(t) is the inhibitory input and IiαE(t) is the excitatory input,
IiαE(t) = fα Σs δ(t − si,sα) + JαE Σj AijαE Σs δ(t − τj,sE),
IiαI(t) = JαI Σj AijαI Σs δ(t − τj,sI).
Here δ(·) is the Dirac delta function, Jαβ is the coupling strength from the βth population to the αth population (α, β = E, I), and fα is the strength of the external Poisson input to the αth population. The first term in IiαE(t) corresponds to the current from the external input: the external input to the ith neuron in the αth population is modeled by a Poisson process with rate να, and at the time si,sα of the sth input spike to this neuron, its voltage jumps by the amount fα. The second term in IiαE(t) and the term in IiαI(t) correspond to the currents induced by the coupled neurons in the excitatory and inhibitory populations of the network, in which Σs δ(t − τj,sE) is the spike train of the jth neuron in the excitatory population, Σs δ(t − τj,sI) is the spike train of the jth neuron in the inhibitory population, and s denotes the sth spike in the train.
In the simulation, the values of the parameters in the model are set as follows: , , , and νE = ν0K, νI = 0.8ν0K. We vary the value of ν0 to control the rate of the external input. To perform the numerical simulation of this I&F model, we use an event-driven scheme (Brette et al., 2007), with which the dynamics can be computed up to machine accuracy.
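The sketch below illustrates, in the spirit of such an event-driven scheme, why delta-pulse coupling admits machine-accurate integration: between input events each voltage relaxes exponentially toward ϵR in closed form, so the state is only updated at event times. It is a minimal illustration under our own simplifications, not the authors' implementation: a single external strength f and rate ν are used for all neurons, adjacency is passed as dense 0/1 matrices (A_E for excitatory sources, A_I for inhibitory sources), and same-time spike cascades are handled recursively.

```python
import heapq
import numpy as np

def simulate_event_driven(A_E, A_I, J_E, J_I, f, nu, gL=50.0, eps_T=1.0, eps_R=0.0,
                          T=1.0, seed=0):
    """Event-driven simulation of a current-based I&F network with delta-pulse coupling."""
    rng = np.random.default_rng(seed)
    n = A_E.shape[0]           # total number of neurons (targets)
    n_exc = A_E.shape[1]       # neurons 0..n_exc-1 are excitatory, the rest inhibitory
    v = np.full(n, eps_R, dtype=float)
    t_last = np.zeros(n)       # time of each neuron's last voltage update
    spikes = []                # recorded (time, neuron) pairs

    # Seed the event queue with the first external Poisson arrival of every neuron.
    events = [(rng.exponential(1.0 / nu), i) for i in range(n)]
    heapq.heapify(events)

    def decay_to(i, t):
        # Exact exponential relaxation toward eps_R between delta-pulse events.
        v[i] = eps_R + (v[i] - eps_R) * np.exp(-gL * (t - t_last[i]))
        t_last[i] = t

    def deliver(i, t, jump):
        decay_to(i, t)
        v[i] += jump
        if v[i] >= eps_T:                  # threshold crossing: fire and reset
            v[i] = eps_R
            spikes.append((t, i))
            if i < n_exc:                  # excitatory spike: +J_E pulses to its targets
                for k in np.nonzero(A_E[:, i])[0]:
                    deliver(k, t, J_E)
            else:                          # inhibitory spike: -J_I pulses to its targets
                for k in np.nonzero(A_I[:, i - n_exc])[0]:
                    deliver(k, t, -J_I)

    while events:
        t, i = heapq.heappop(events)
        if t > T:
            break
        deliver(i, t, f)                   # external Poisson kick of strength f
        heapq.heappush(events, (t + rng.exponential(1.0 / nu), i))
    return spikes
```

Because the only sources of error are the floating-point exponential and the event ordering, the trajectory carries no time-discretization error, which is the property the event-driven scheme is used for here.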
4.4. Fokker-Planck Equation for a Single Neuron
Under a Poisson external input, the spiking events of a neuron in the network are, in general, not Poissonian, i.e., the spike trains Σs δ(t − τj,sE) and Σs δ(t − τj,sI) in the current Iiα(t) are not Poisson processes for a fixed neuron j. However, the input to the ith neuron is a spike train summed over the output spike trains of many other neurons in the network. If the firing events of the neurons were statistically independent of one another, then the spike train obtained by summing over a large number of output spike trains would asymptotically tend to a Poisson process (Cinlar, 1972). In a balanced network, the firing event of each neuron is only very weakly correlated with, and thus nearly independent of, those of other neurons (Vreeswijk and Sompolinsky, 1998). Therefore, for each neuron, the summed incoming spike train from its presynaptic neurons can be approximated by a Poisson process. Under this Poisson approximation, we can obtain the Fokker-Planck (FP) equation corresponding to Equation (11) for each neuron in the population (Cai et al., 2006). For the ith neuron in the αth population, we have
∂ρiα(v, t)/∂t = ∂/∂v {[gL(v − ϵR) − μiα(t)] ρiα(v, t)} + [(σiα)2(t)/2] ∂2ρiα(v, t)/∂v2,     (12)
where ρiα(v, t) is the probability density at time t of finding the membrane potential of the ith neuron in the αth population at the value v. Here μiα is the mean total input,
μiα(t) = fανα + JαE ηiE(t) − JαI ηiI(t),
and (σiα)2 is the strength of the fluctuations of the total input,
(σiα)2(t) = fα2να + JαE2 ηiE(t) + JαI2 ηiI(t).
Note that ηiE and ηiI are the rates of the summed excitatory and inhibitory inputs, respectively, from the other neurons in the network, and fα and να are the strength and rate of the external Poisson input to the αth population, respectively.
Equation (12) can be cast into the conservation form ∂ρiα/∂t + ∂Jiα/∂v = 0, with
Jiα(v, t) = −[gL(v − ϵR) − μiα(t)] ρiα(v, t) − [(σiα)2(t)/2] ∂ρiα(v, t)/∂v
being the probability density flux through v at time t. For Equation (12), we need to specify boundary conditions at v = −∞, at the reset potential ϵR, and at the threshold ϵT. The probability flux through ϵT gives the instantaneous firing rate at time t, miα(t) = Jiα(ϵT, t). For the I&F neuron, the membrane potential cannot exceed the threshold; therefore, ρiα(v, t) = 0 for v ≥ ϵT. At the reset potential v = ϵR, there is a probability flux coming from the neurons that have just crossed the threshold: what goes out at time t at the threshold must come back at time t at the reset potential, thus Jiα(ϵR+, t) − Jiα(ϵR−, t) = Jiα(ϵT, t). The natural boundary condition at v = −∞ is that ρiα(v, t) tends toward zero sufficiently rapidly to be integrable, with Jiα(−∞, t) = 0. By definition, ρiα satisfies the normalization condition ∫−∞ϵT ρiα(v, t) dv = 1.
The stationary solution of Equation (12) can be obtained as in Brunel (2000). Furthermore, by using the normalization condition, the stationary firing rate miα can be obtained in closed form in terms of the error function erf(x) (Brunel, 2000).
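In standard treatments (e.g., Brunel, 2000), the closed form referred to above is the mean-first-passage-time integral involving exp(u2)(1 + erf(u)). The sketch below evaluates that standard formula for given μ and σ, assuming no refractory period and the non-dimensionalized thresholds ϵT = 1, ϵR = 0 used in the text, with the drift convention of Equation (12) (effective mean voltage ϵR + μ/gL, effective noise amplitude σ/√gL). It is an illustrative computation consistent with the construction above, not a transcription of the paper's exact expression, and the example parameter values are arbitrary.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

def stationary_rate(mu, sigma, gL=50.0, eps_T=1.0, eps_R=0.0):
    """Stationary firing rate of a leaky I&F neuron driven by white-noise input.

    Standard Ricciardi/Brunel formula with effective mean voltage eps_R + mu/gL
    and voltage-scale noise amplitude sigma/sqrt(gL); no refractory period.
    """
    v_bar = eps_R + mu / gL
    s = sigma / np.sqrt(gL)
    lower = (eps_R - v_bar) / s
    upper = (eps_T - v_bar) / s
    integrand = lambda u: np.exp(u ** 2) * (1.0 + erf(u))
    integral, _ = quad(integrand, lower, upper)
    mean_first_passage_time = (np.sqrt(np.pi) / gL) * integral
    return 1.0 / mean_first_passage_time   # firing rate in 1/s

if __name__ == "__main__":
    # Example: subthreshold mean drive (v_bar = 0.8) with sizable fluctuations.
    print(stationary_rate(mu=40.0, sigma=3.0))
```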
4.5. Fokker-Planck Equation for a Homogeneous Network
For the balanced state in a homogeneous neuronal network, one can obtain a probabilistic characterization of the network that goes beyond the dynamics of a single neuron. Because each neuron in a particular population can be regarded as nearly statistically identical in the balanced state of a homogeneous network, the spike train that each neuron receives from the βth population, summed over all of its presynaptic neurons in that population, is Poisson with rate Kβmβ(t), noting that each neuron has on average KE presynaptic excitatory neurons and KI presynaptic inhibitory neurons. Here mα(t) is the population-averaged firing rate of a neuron in the αth population, α = E, I. Then, one can obtain
∂ρα(v, t)/∂t = ∂/∂v {[gL(v − ϵR) − μα(t)] ρα(v, t)} + [σα2(t)/2] ∂2ρα(v, t)/∂v2,     (17)
where ρα(v, t) is the probability density of finding a neuron in the αth population whose membrane potential is v at time t (Brunel, 2000), and the probability density flux is Jα(v, t) = −[gL(v − ϵR) − μα(t)] ρα(v, t) − [σα2(t)/2] ∂ρα(v, t)/∂v, where the input is characterized by μα = fανα + JαEKEmE − JαIKImI and σα2 = fα2να + JαE2KEmE + JαI2KImI. By the same argument as for Equation (12), the boundary conditions for Equation (17) can be obtained similarly.
Similar to the single-neuron case, the mean firing rate mα over the neuronal population can be obtained in a self-consistent way: substituting μα and σα, which themselves depend on mE and mI, into the closed-form firing-rate relation yields a pair of equations that determine mE and mI.
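One simple way to solve such self-consistency conditions numerically is fixed-point iteration on (mE, mI), re-evaluating the single-population rate formula at each step. The sketch below does this with a damped update to help convergence; the data-structure choices (dicts keyed by population), the function name, and the rate_fn argument (for example, the stationary_rate function from the previous sketch) are our own, and parameter values supplied by the caller are placeholders rather than the paper's values.

```python
import numpy as np

def self_consistent_rates(f, nu, J, K, rate_fn, damping=0.5, tol=1e-8, max_iter=2000):
    """Fixed-point iteration for the population-averaged rates (m_E, m_I).

    f, nu   : dicts of external input strength and rate per population, e.g. f['E'].
    J       : dict of coupling strengths, e.g. J[('E', 'I')] for the I -> E coupling.
    K       : dict of mean in-degrees per presynaptic population, K['E'], K['I'].
    rate_fn : callable (mu, sigma) -> stationary single-neuron firing rate.
    """
    m = {'E': 1.0, 'I': 1.0}                      # initial guess (spikes/s)
    for _ in range(max_iter):
        new_m = {}
        for a in ('E', 'I'):
            mu = (f[a] * nu[a] + J[(a, 'E')] * K['E'] * m['E']
                  - J[(a, 'I')] * K['I'] * m['I'])
            sigma = np.sqrt(f[a] ** 2 * nu[a]
                            + J[(a, 'E')] ** 2 * K['E'] * m['E']
                            + J[(a, 'I')] ** 2 * K['I'] * m['I'])
            new_m[a] = rate_fn(mu, sigma)
        if all(abs(new_m[a] - m[a]) < tol for a in ('E', 'I')):
            return new_m
        # Damped update: plain fixed-point iteration is not guaranteed to converge.
        m = {a: (1.0 - damping) * m[a] + damping * new_m[a] for a in ('E', 'I')}
    return m
```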
4.6. Fokker-Planck Equation for a Scale-Free Network
One can further derive the FP equations for an SF network with the following structural property: for each neuron in the network, the ratio of the number of its presynaptic excitatory neurons to the number of its presynaptic inhibitory neurons is nearly constant across the population. Denoting this constant ratio by Γ (Γ = 1 for the SF networks in our simulations), and treating all neurons with the same number of presynaptic neurons as one ensemble, we can derive the FP equation for the kth ensemble (the ensemble of neurons with k presynaptic neurons) in the αth population,
∂ρkα(v, t)/∂t = ∂/∂v {[gL(v − ϵR) − μkα(t)] ρkα(v, t)} + [(σkα)2(t)/2] ∂2ρkα(v, t)/∂v2,
where μkα is the average total input,
μkα(t) = fανα + JαE [kΓ/(1 + Γ)] m̄kE(t) − JαI [k/(1 + Γ)] m̄kI(t),
and (σkα)2 describes the strength of the fluctuations of the total input,
(σkα)2(t) = fα2να + JαE2 [kΓ/(1 + Γ)] m̄kE(t) + JαI2 [k/(1 + Γ)] m̄kI(t).
Here m̄kE(t) = Σn=K0K1 T(n|k) mnE(t) and m̄kI(t) = Σn=K0K1 T(n|k) mnI(t) are the mean firing rates of the presynaptic excitatory and inhibitory neurons of the kth ensemble, respectively; T(n|k) is the conditional probability of finding a directed connection that originates from an n-node (a neuron with n presynaptic neurons) given that it ends at a k-node (a neuron with k presynaptic neurons); and K0 (K1) denotes the smallest (largest) degree. In the stationary state, the firing rate mkα of neurons in the kth ensemble is then obtained from the same closed-form relation as in the single-neuron case, with μkα and (σkα)2 in place of μiα and (σiα)2.
When there is no degree correlation between nodes, the conditional probability reduces to T(n|k) = nP(n)/(2K), which is independent of k, where P(n) is the power-law degree distribution. In this case m̄kE and m̄kI are also independent of k, so, according to Equations (19–20), μkα decreases linearly with the degree k because the network is inhibition dominated, while (σkα)2 increases linearly with the degree k. Moreover, for neurons with a large number of presynaptic connections, i.e., large k, one finds that μkα ∝ −k and (σkα)2 ∝ k. Therefore, the effective mean voltage ϵR + μkα/gL lies far below the threshold ϵT, and the stationary firing rate is dominated by the Gaussian tail of the membrane-potential distribution. The mean firing rate can thus be further approximated, up to an algebraic prefactor, as
mkα ∼ exp[−gL(ϵT − ϵR − μkα/gL)2/(σkα)2].     (22)
Because (ϵT − ϵR − μkα/gL)2 grows quadratically with k while (σkα)2 grows only linearly with k, according to Equation (22) the firing rate of a neuron in the kth ensemble decays exponentially with k. Consequently, neurons with a sufficiently large degree possess a very low firing rate that can barely be detected in numerical simulations of finite duration, and they are thus classified into the quiescent group.
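The short sketch below evaluates the degree-resolved rates mk for the uncorrelated case, in which the presynaptic mean rates are independent of k, and makes the exponential decay with k visible numerically. It is an illustration of the argument above under our own parameter and naming choices (mbar_E, mbar_I, rate_fn such as the stationary_rate function from the earlier sketch), not a reproduction of the paper's computation.

```python
import numpy as np

def ensemble_rates(ks, mbar_E, mbar_I, f, nu, J_E, J_I, rate_fn, Gamma=1.0):
    """Degree-resolved firing rates m_k for an uncorrelated SF network.

    A neuron with in-degree k has k*Gamma/(1+Gamma) excitatory and k/(1+Gamma)
    inhibitory presynaptic neurons; mbar_E and mbar_I are the (k-independent)
    mean presynaptic rates when T(n|k) = nP(n)/(2K).
    """
    rates = []
    for k in ks:
        kE, kI = k * Gamma / (1.0 + Gamma), k / (1.0 + Gamma)
        mu = f * nu + J_E * kE * mbar_E - J_I * kI * mbar_I
        sigma = np.sqrt(f ** 2 * nu + J_E ** 2 * kE * mbar_E + J_I ** 2 * kI * mbar_I)
        rates.append(rate_fn(mu, sigma))
    return np.array(rates)
```

Plotting log(mk) against k for an inhibition-dominated parameter set produces an approximately straight line for large k, in agreement with the exponential decay argued above.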
We comment that, for a homogeneous-type network with a broad degree distribution, a group of quiescent neurons also exists. As the degree distribution becomes broader, the number of quiescent neurons increases. This phenomenon can also be explained by the corresponding FP analysis in Roxin et al. (2011).
Author Contributions
QG, SL, WD, DZ, and DC conceived and designed the research, performed experiments and analyzed data, and wrote the paper.
Funding
This work is supported by NYU Abu Dhabi Institute G1301 (QG, SL, WD, DZ, and DC); NSFC-11671259, NSFC-11722107, NSFC-91630208, and Shanghai Rising-Star Program-15QA1402600 (DZ); NSFC-31571071 (DC); Shanghai 14JC1403800, 15JC1400104 and SJTU-UM Collaborative Research Program (DZ, DC).
Conflict of Interest Statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Acknowledgments
The authors dedicate this paper to their late co-author and mentor DC. We thank the editor and reviewers for helpful comments on our manuscript.
Supplementary Material
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fncom.2018.00109/full#supplementary-material
References
Aiello, W., Chung, F., and Lu, L. (2000). “A random graph model for massive graphs,” in Proceedings of the Thirty-Second Annual ACM Symposium on Theory of Computing (Portland, OR: ACM), 171–180. doi: 10.1145/335305.335326
Argaman, T., and Golomb, D. (2018). Does layer 4 in the barrel cortex function as a balanced circuit when responding to whisker movements? Neuroscience 368, 29–45. doi: 10.1016/j.neuroscience.2017.07.054
Barabási, A.-L., Albert, R., and Jeong, H. (1999). Mean-field theory for scale-free random networks. Phys. A Stat. Mech. Appl. 272, 173–187. doi: 10.1016/S0378-4371(99)00291-5
Boccaletti, S., Latora, V., Moreno, Y., Chavez, M., and Hwang, D.-U. (2006). Complex networks: structure and dynamics. Phys. Rep. 424, 175–308. doi: 10.1016/j.physrep.2005.10.009
Bonifazi, P., Goldin, M., Picardo, M. A., Jorquera, I., Cattani, A., Bianconi, G., et al. (2009). Gabaergic hub neurons orchestrate synchrony in developing hippocampal networks. Science 326, 1419–1424. doi: 10.1126/science.1175509
Braitenberg, V., and Schüz, A. (1998). “Comparison between synaptic and neuronal density,” in Cortex: Statistics and Geometry of Neuronal Connectivity (Berlin; Heidelberg: Springer), 37–38.
Brette, R., Rudolph, M., Carnevale, T., Hines, M., Beeman, D., Bower, J. M., et al. (2007). Simulation of networks of spiking neurons: a review of tools and strategies. J. Comput. Neurosci. 23, 349–398. doi: 10.1007/s10827-007-0038-6
Britten, K. H., Shadlen, M. N., Newsome, W. T., and Movshon, J. A. (1993). Responses of neurons in macaque MT to stochastic motion signals. Vis. Neurosci. 10, 1157–1169. doi: 10.1017/S0952523800010269
Brunel, N. (2000). Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. J. Comput. Neurosci. 8, 183–208. doi: 10.1023/A:1008925309027
Cai, D., Rangan, A. V., and McLaughlin, D. W. (2005). Architectural and synaptic mechanisms underlying coherent spontaneous activity in V1. Proc. Natl. Acad. Sci. U.S.A. 102, 5868–5873. doi: 10.1073/pnas.0501913102
Cai, D., Tao, L., Rangan, A. V., and McLaughlin, D. W. (2006). Kinetic theory for neuronal network dynamics. Commun. Math. Sci. 4, 97–127. doi: 10.4310/CMS.2006.v4.n1.a4
Carandini, M., Mechler, F., Leonard, C. S., and Movshon, J. A. (1996). Spike train encoding by regular-spiking cells of the visual cortex. J. Neurophysiol. 76, 3425–3441. doi: 10.1152/jn.1996.76.5.3425
Cinlar, E. (1972). “Superposition of point processes,” in Stochastic Point Processes: Statistical Analysis, Theory, and Applications, ed P. A. W. Lewis (New York, NY: John Wiley), 549–606.
Compte, A., Constantinidis, C., Tegner, J., Raghavachari, S., Chafee, M. V., Goldman-Rakic, P. S., et al. (2003). Temporally irregular mnemonic persistent activity in prefrontal neurons of monkeys during a delayed response task. J. Neurophysiol. 90, 3441–3454. doi: 10.1152/jn.00949.2002
Dayan, P., and Abbott, L. F. (2001). Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. Cambridge, MA: MIT Press.
Gütig, R., and Sompolinsky, H. (2006). The tempotron: a neuron that learns spike timing–based decisions. Nat. Neurosci. 9, 420–428. doi: 10.1038/nn1643
Haider, B., Duque, A., Hasenstaub, A. R., and McCormick, D. A. (2006). Neocortical network activity in vivo is generated through a dynamic balance of excitation and inhibition. J. Neurosci. 26, 4535–4545. doi: 10.1523/JNEUROSCI.5297-05.2006
Han, F., Wang, Z., Fan, H., and Sun, X. (2015). Optimum neural tuning curves for information efficiency with rate coding and finite-time window. Front. Comput. Neurosci. 9:62. doi: 10.3389/fncom.2015.00067
Hertz, J., and Prügel-Bennett, A. (1996). Learning short synfire chains by self-organization. Network 7, 357–363. doi: 10.1088/0954-898X_7_2_017
Holmgren, C., Harkany, T., Svennenfors, B., and Zilberter, Y. (2003). Pyramidal cell communication within local networks in layer 2/3 of rat neocortex. J. Physiol. 551, 139–153. doi: 10.1113/jphysiol.2003.044784
Holt, G. R., Softky, W. R., Koch, C., and Douglas, R. J. (1996). Comparison of discharge variability in vitro and in vivo in cat visual cortex neurons. J. Neurophysiol. 75, 1806–1814. doi: 10.1152/jn.1996.75.5.1806
Hromádka, T., DeWeese, M. R., and Zador, A. M. (2008). Sparse representation of sounds in the unanesthetized auditory cortex. PLoS Biol. 6:e16. doi: 10.1371/journal.pbio.0060016
Kaiser, M., Martin, R., Andras, P., and Young, M. P. (2007). Simulation of robustness against lesions of cortical networks. Eur. J. Neurosci. 25, 3185–3192. doi: 10.1111/j.1460-9568.2007.05574.x
Krapivsky, P. L., and Redner, S. (2001). Organization of growing random networks. Phys. Rev. E 63:066123. doi: 10.1103/PhysRevE.63.066123
Landau, I. D., Egger, R., Dercksen, V. J., Oberlaender, M., and Sompolinsky, H. (2016). The impact of structural heterogeneity on excitation-inhibition balance in cortical networks. Neuron 92, 1106–1121. doi: 10.1016/j.neuron.2016.10.027
Liu, G. (2004). Local structural balance and functional interaction of excitatory and inhibitory synapses in hippocampal dendrites. Nat. Neurosci. 7, 373–379. doi: 10.1038/nn1206
London, M., Roth, A., Beeren, L., Häusser, M., and Latham, P. E. (2010). Sensitivity to perturbations in vivo implies high noise and suggests rate coding in cortex. Nature 466, 123–127. doi: 10.1038/nature09086
Mehring, C., Hehl, U., Kubo, M., Diesmann, M., and Aertsen, A. (2003). Activity dynamics and propagation of synchronous spiking in locally connected random networks. Biol. Cybernet. 88, 395–408. doi: 10.1007/s00422-002-0384-4
Miura, K., Tsubo, Y., Okada, M., and Fukai, T. (2007). Balanced excitatory and inhibitory inputs to cortical neurons decouple firing irregularity from rate modulations. J. Neurosci. 27, 13802–13812. doi: 10.1523/JNEUROSCI.2452-07.2007
Monteforte, M., and Wolf, F. (2012). Dynamic flux tubes form reservoirs of stability in neuronal circuits. Phys. Rev. X 2:041007. doi: 10.1103/PhysRevX.2.041007
Newhall, K. A., Kovacic, G., Kramer, P. R., Zhou, D., Rangan, A. V., Cai, D., et al. (2010). Dynamics of current-based, poisson driven, integrate-and-fire neuronal networks. Commun. Math. Sci. 8, 541–600. doi: 10.4310/CMS.2010.v8.n2.a12
Newman, M. E. (2003a). Mixing patterns in networks. Phys. Rev. E 67:026126. doi: 10.1103/PhysRevE.67.026126
Newman, M. E. (2003b). The structure and function of complex networks. SIAM Rev. 45, 167–256. doi: 10.1137/S003614450342480
Newman, M. E., Strogatz, S. H., and Watts, D. J. (2001). Random graphs with arbitrary degree distributions and their applications. Phys. Rev. E 64:026118. doi: 10.1103/PhysRevE.64.026118
O'Connor, D. H., Peron, S. P., Huber, D., and Svoboda, K. (2010). Neural activity in barrel cortex underlying vibrissa-based object localization in mice. Neuron 67, 1048–1061. doi: 10.1016/j.neuron.2010.08.026
Pastor-Satorras, R., Vázquez, A., and Vespignani, A. (2001). Dynamical and correlation properties of the internet. Phys. Rev. Lett. 87:258701. doi: 10.1103/PhysRevLett.87.258701
Perin, R., Berger, T. K., and Markram, H. (2011). A synaptic organizing principle for cortical neuronal groups. Proc. Natl. Acad. Sci. U.S.A. 108, 5419–5424. doi: 10.1073/pnas.1016051108
Pillow, J. W., Paninski, L., Uzzell, V. J., Simoncelli, E. P., and Chichilnisky, E. (2005). Prediction and decoding of retinal ganglion cell responses with a probabilistic spiking model. J. Neurosci. 25, 11003–11013. doi: 10.1523/JNEUROSCI.3305-05.2005
Poo, C., and Isaacson, J. S. (2009). Odor representations in olfactory cortex:“sparse” coding, global inhibition, and oscillations. Neuron 62, 850–861. doi: 10.1016/j.neuron.2009.05.022
Pyle, R., and Rosenbaum, R. (2016). Highly connected neurons spike less frequently in balanced networks. Phys. Rev. E 93:040302. doi: 10.1103/PhysRevE.93.040302
Rangan, A. V., Cai, D., and McLaughlin, D. W. (2005). Modeling the spatiotemporal cortical activity associated with the line-motion illusion in primary visual cortex. Proc. Natl. Acad. Sci. U.S.A. 102, 18793–18800. doi: 10.1073/pnas.0509481102
Rauch, A., La Camera, G., Lüscher, H.-R., Senn, W., and Fusi, S. (2003). Neocortical pyramidal cells respond as integrate-and-fire neurons to in vivo–like input currents. J. Neurophysiol. 90, 1598–1612. doi: 10.1152/jn.00293.2003
Reed, W. J. (2006). A brief introduction to scale-free networks. Nat. Resour. Model. 19, 3–14. doi: 10.1111/j.1939-7445.2006.tb00173.x
Renart, A., De La Rocha, J., Bartho, P., Hollender, L., Parga, N., Reyes, A., et al. (2010). The asynchronous state in cortical circuits. Science 327, 587–590. doi: 10.1126/science.1179850
Richmond, B. J., and Optican, L. M. (1990). Temporal encoding of two-dimensional patterns by single units in primate primary visual cortex. II. information transmission. J. Neurophysiol. 64, 370–380. doi: 10.1152/jn.1990.64.2.370
Roxin, A. (2011). The role of degree distribution in shaping the dynamics in networks of sparsely connected spiking neurons. Front. Comput. Neurosci. 5:8. doi: 10.3389/fncom.2011.00008
Roxin, A., Brunel, N., Hansel, D., Mongillo, G., and van Vreeswijk, C. (2011). On the distribution of firing rates in networks of cortical neurons. J. Neurosci. 31, 16217–16226. doi: 10.1523/JNEUROSCI.1677-11.2011
Scannell, J., Burns, G. A., Hilgetag, C. C., O'Neil, M. A., and Young, M. P. (1999). The connectional organization of the cortico-thalamic system of the cat. Cereb. Cortex 9, 277–299. doi: 10.1093/cercor/9.3.277
Shadlen, M. N. and Newsome, W. T. (1994). Noise, neural codes and cortical organization. Curr. Opin. Neurobiol. 4, 569–579. doi: 10.1016/0959-4388(94)90059-0
Shadlen, M. N., and Newsome, W. T. (1998). The variable discharge of cortical neurons: implications for connectivity, computation, and information coding. J. Neurosci. 18, 3870–3896. doi: 10.1523/JNEUROSCI.18-10-03870.1998
Shkarayev, M. S., Kovačič, G., Rangan, A. V., and Cai, D. (2009). Architectural and functional connectivity in scale-free integrate-and-fire networks. EPL 88:50001. doi: 10.1209/0295-5075/88/50001
Shu, Y., Hasenstaub, A., and McCormick, D. A. (2003). Turning on and off recurrent balanced cortical activity. Nature 423, 288–293. doi: 10.1038/nature01616
Song, S., Sjöström, P. J., Reigl, M., Nelson, S., and Chklovskii, D. B. (2005). Highly nonrandom features of synaptic connectivity in local cortical circuits. PLoS Biol. 3:e68. doi: 10.1371/journal.pbio.0030068
Sporns, O. (2006). Small-world connectivity, motif composition, and complexity of fractal neuronal connections. Biosystems 85, 55–64. doi: 10.1016/j.biosystems.2006.02.008
Sporns, O., Chialvo, D. R., Kaiser, M., and Hilgetag, C. C. (2004). Organization, development and function of complex brain networks. Trends Cogn. Sci. 8, 418–425. doi: 10.1016/j.tics.2004.07.008
Sporns, O., Honey, C. J., and Kötter, R. (2007). Identification and classification of hubs in brain networks. PLoS ONE 2:e1049. doi: 10.1371/journal.pone.0001049
Sporns, O., and Zwi, J. D. (2004). The small world of the cerebral cortex. Neuroinformatics 2, 145–162. doi: 10.1385/NI:2:2:145
Sussillo, D., and Abbott, L. F. (2009). Generating coherent patterns of activity from chaotic neural networks. Neuron 63, 544–557. doi: 10.1016/j.neuron.2009.07.018
Troyer, T. W., and Miller, K. D. (1997). Physiological gain leads to high ISI variability in a simple model of a cortical regular spiking cell. Neural Comput. 9, 971–983. doi: 10.1162/neco.1997.9.5.971
van Vreeswijk, C., and Sompolinsky, H. (1996). Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science 274, 1724–1726. doi: 10.1126/science.274.5293.1724
Vogels, T. P., Rajan, K., and Abbott, L. (2005). Neural network dynamics. Annu. Rev. Neurosci. 28, 357–376. doi: 10.1146/annurev.neuro.28.061604.135637
Vreeswijk, C. V., and Sompolinsky, H. (1998). Chaotic balanced state in a model of cortical circuits. Neural Comput. 10, 1321–1371. doi: 10.1162/089976698300017214
Whalley, K. (2013). Neural coding: timing is key in the olfactory system. Nat. Rev. Neurosci. 14, 458–458. doi: 10.1038/nrn3532
Xue, M., Atallah, B. V., and Scanziani, M. (2014). Equalizing excitation-inhibition ratios across visual cortical neurons. Nature 511, 596–600. doi: 10.1038/nature13321
Xulvi-Brunet, R., and Sokolov, I. M. (2005). Changing correlations in networks: assortativity and dissortativity. Acta Phys. Pol. B 36:1431.
Zhou, D., Rangan, A. V., McLaughlin, D. W., and Cai, D. (2013). Spatiotemporal dynamics of neuronal population response in the primary visual cortex. Proc. Natl. Acad. Sci. U.S.A. 110, 9517–9522. doi: 10.1073/pnas.1308167110
Zhou, D., Rangan, A. V., Sun, Y., and Cai, D. (2009). Network-induced chaos in integrate-and-fire neuronal ensembles. Phys. Rev. E 80:031918. doi: 10.1103/PhysRevE.80.031918
Keywords: balanced state, homogeneous, heterogeneous, active core, sparse coding, Fokker-Planck equation
Citation: Gu QL, Li S, Dai WP, Zhou D and Cai D (2019) Balanced Active Core in Heterogeneous Neuronal Networks. Front. Comput. Neurosci. 12:109. doi: 10.3389/fncom.2018.00109
Received: 14 November 2018; Accepted: 21 December 2018;
Published: 29 January 2019.
Edited by:
David Hansel, Université Paris Descartes, France
Reviewed by:
Germán Mato, Bariloche Atomic Centre, Argentina
David Golomb, Ben-Gurion University of the Negev, Israel
Copyright © 2019 Gu, Li, Dai, Zhou and Cai. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Songting Li, songting@sjtu.edu.cn
Douglas Zhou, zdz@sjtu.edu.cn
†Dedicated to David Cai