ORIGINAL RESEARCH article

Front. Comput. Neurosci., 20 May 2021
This article is part of the Research Topic “From Structure to Function in Neuronal Networks: Effects of Adaptation, Time-Delays, and Noise”.

A Non-spiking Neuron Model With Dynamic Leak to Avoid Instability in Recurrent Networks

  • 1Department of Experimental Medical Science, Faculty of Medicine, Lund University, Lund, Sweden
  • 2Department of Informatics, Technical University of Munich, Munich, Germany
  • 3Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States

Recurrent circuitry components are distributed widely within the brain, including both excitatory and inhibitory synaptic connections. Recurrent neuronal networks have potential stability problems, which may manifest as a predisposition to epilepsy. More generally, instability risks making internal representations of information unreliable. To assess the inherent stability properties of such recurrent networks, we tested a linear summation, non-spiking neuron model with and without a “dynamic leak”, corresponding to the low-pass filtering of synaptic input current by the RC circuit of the biological membrane. We first show that the output of this neuron model, in either of its two forms, follows its input at a higher fidelity than a wide range of spiking neuron models across a range of input frequencies. Then we constructed fully connected recurrent networks with equal numbers of excitatory and inhibitory neurons and randomly distributed weights across all synapses. When the networks were driven by pseudorandom sensory inputs with varying frequency, the recurrent network activity tended to induce high frequency self-amplifying components, sometimes evident as distinct transients, which were not present in the input data. The addition of a dynamic leak based on known membrane properties consistently removed such spurious high frequency noise across all networks. Furthermore, we found that the neuron model with dynamic leak imparts a network stability that seamlessly scales with the size of the network, conduction delays, the input density of the sensory signal and a wide range of synaptic weight distributions. Our findings suggest that neuronal dynamic leak serves the beneficial function of protecting recurrent neuronal circuitry from the self-induction of spurious high frequency signals, thereby permitting the brain to utilize this architectural circuitry component regardless of network size or recurrency.

Introduction

Recurrent excitatory loops are a common feature in the central nervous system, such as in neocortical circuits (Binzegger et al., 2004; Song et al., 2005; Koestinger et al., 2018; Kar and DiCarlo, 2020), thalamocortical loops (Steriade, 1997; Hooks et al., 2013), and cerebrocerebellar and spinocerebellar loops (Allen and Tsukahara, 1974; Jörntell, 2017). Inhibitory interneurons have been described to provide lateral inhibition (Zhu and Lo, 2000; Douglas and Martin, 2009; Obermayer et al., 2018; Rongala et al., 2018) and feed-forward inhibition (Swadlow, 2003; Isaacson and Scanziani, 2011), but they also make synapses on other inhibitory neurons, thereby potentially forming recurrent disinhibitory loops as well (Jörntell and Ekerot, 2003; Pi et al., 2013; Sultan and Shi, 2018). Furthermore, such excitatory and inhibitory connectivity has been reported to be balanced (Anderson et al., 2000; Wehr and Zador, 2003; Okun and Lampl, 2008). Functionally, recurrent connections enable a network to use preceding states to impact the processing of the present state. Such state memory can, for example, improve learning performance (Sutskever et al., 2014). However, due to the many potential positive feedback loops in larger networks with extensive recurrent connections, imbalances in excitatory (E) and inhibitory (I) synaptic activity could lead to activity saturation (Brunel, 2000; Vogels and Abbott, 2005), such as observed in epilepsy (Chakravarthy et al., 2009; Liou et al., 2020), or, in milder cases, a noise-like perturbation of the information content of internal signals, which would be disadvantageous for learning.

We explored potential noise and stability issues that could arise in recurrent neuronal networks. In order to focus on the network architecture aspect of this problem, we used a non-spiking neuron model designed to be simple and computationally efficient, while embodying fundamental properties of Hodgkin-Huxley conductance-based models. The relevance of a non-spiking neuron model stems from the stochasticity inherent in neuronal spike generation (Naundorf et al., 2006; Saarinen et al., 2008; Spanne et al., 2014; Nilsson and Jörntell, 2021), which renders the spiking output of the individual neuron to some degree unreliable in terms of information content. To compensate for such unreliability, the brain could encode each representation across a population of neurons (below referred to as an ensemble of redundant neurons), as has been observed in the brain in vivo (Spanne and Jörntell, 2015). The input-output relationships across a range of neuron types in the central nervous system in vivo indicate that overall, each neuron’s spike output is a probability density function (PDF) of the underlying membrane potential of the neuron (Spanne et al., 2014). That PDF thereby approximates the membrane potential and could be considered to correspond to the spike output of an ensemble of neurons with similar inputs. Thus, simulating a non-spiking neuron and providing the PDF of the neuron as its output avoids the extreme resource demands of both simulating the highly complex spike generation stochasticity (Saarinen et al., 2008) and compensating for that stochasticity by simulating large populations of redundant neurons. Synaptic input creates modulation of the neuronal membrane potential, hence its PDF, by temporarily activating conductances that are added to the static leak conductances. The synaptic conductances and currents can modulate very rapidly but the membrane capacitance together with the static leak channels forms an RC circuit that constitutes a low-pass filter (herein, dynamic leak) for the resultant membrane potential. We hypothesized that this dynamic leak would improve network stability without compromising information transfer.

To test this hypothesis, we constructed a highly recurrent, two-layer neuronal network, with five excitatory and five inhibitory neurons in the first layer and four excitatory and four inhibitory neurons in the second layer. All neurons in both layers were reciprocally connected with randomized gains. All first layer neurons were provided with six randomized and broadly distributed input signals. A striking finding was that, for all tested network configurations, synaptic weight distributions, conduction delays, and sensory input densities, recurrent networks tended to generate high frequency components that were not present in the sensory input data. In all cases these components were eliminated by incorporating a dynamic leak in the neuron models, without compromising the representation of the input signals.

We note that the fully reciprocal connectivity employed in the networks described herein encompasses the wide range of connectivity that has been identified experimentally in cortical and other central neural structures (see above). The strictly layered connectivity of many popular neural network models for deep learning reflects only a small subset of the known complexity of biological networks. Attempts to add limited recurrency to such models have encountered stability problems (Brunel, 2000; Vogels and Abbott, 2005), for which dynamic leak appears to offer substantial mitigation. Chowdhury et al. (2020) demonstrated that a leaky component (a low-pass filter effect) in a spiking neuron model (the Leaky Integrate-and-Fire, LIF, model) eliminates high-frequency components from the input, which improved robustness against random noise in a multi-layer feed-forward network trained with back-propagation.

Materials and Methods

Neuron Model

Linear Summation Neuron Model (LSM)

The neuron model implemented for this study was a non-spiking, linear input summation model with an additional dynamic leak component. For the version without dynamic leak, the activity (ANoDyn) was given by the total weighted input activity (w*a), where a is the activity of each individual input and w is the weight of that input, across all individual synapses (i) (Eq. 1). Electrotonic compactness in the neuron is assumed, so that all synapses have equal impact on the activity of the neuron. This simplified model of synaptic input integration can be shown to be closely related to a Hodgkin-Huxley (H-H) model (see Appendix 1), for example resulting in the preservation of two key dependencies of EPSPs and IPSPs on membrane biophysics: (i) input response magnitude depends on the difference between the membrane potential and the reversal potentials for the relevant corresponding “ion channels” (i.e., depending on whether the input is provided through an excitatory or an inhibitory synapse); (ii) input response magnitude depends on the relative shunting of synaptic currents by conductances resulting from the background of synaptic input activity (Eq. 1). The responsive properties of the LSM and the H-H neuron model are shown to be highly similar in Supplementary Figure 1.

The LSM implemented a degree of activity normalization (denominator of Eq. 1) by introducing a static leak, which was calculated as the product of a constant (kstatic) multiplied by the number of synapses on the neuron, plus a term reflecting the total number of open channels, which is activity dependent.

To mimic the effect of the RC circuit created by the ion channels and the capacitance of the membrane, we added a dynamic leak function to the neuron. To test the impact of the dynamic leak on network dynamics, we compared networks composed of neurons with the dynamic leak against the same networks built with the neuron model lacking this dynamic leak. The neuron activity for the model variant with dynamic leak (ADyn) is given by the linear summation model with an additional leak time constant (τDyn). Larger neurons with more synapses tend to have longer time constants (Zhang, 2004), so we tried various ways of scaling τDyn with the number of synapses (i). The dynamic leak thereby integrates the function of the capacitance in the RC circuit of the biological neuron into the LSM (Eq. 2). The neuron activity of this model is given by the following equations,

$$A_{NoDyn} = \frac{\sum_{i}(w_i \cdot a_i)}{(k_{static} \cdot i) + \sum_{i}\lvert w_i \cdot a_i \rvert} \tag{1}$$

$$\tau_{Dyn} \cdot \frac{dA_{Dyn}}{dt} = -A_{Dyn}(t) + A_{NoDyn}(t) \tag{2}$$

$$0 \leq A_{NoDyn} \quad \text{and} \quad 0 \leq A_{Dyn} \tag{3}$$
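
For concreteness, a minimal MATLAB sketch of one simulation step implementing Eqs. 1–3 follows; the function name, its signature, and the forward-Euler discretization with step size dt are our assumptions for illustration, not code from the study.

```matlab
% Minimal sketch of one LSM update step (Eqs. 1-3); names and the
% forward-Euler step size dt are illustrative assumptions.
function A_dyn = lsm_step(w, a, A_prev, k_static, tau_dyn, dt)
    n = numel(w);                                  % number of synapses (i)
    A_nodyn = sum(w .* a) / ...
              (k_static * n + sum(abs(w .* a)));   % Eq. 1, static leak
    A_nodyn = max(A_nodyn, 0);                     % Eq. 3, threshold at zero
    A_dyn = A_prev + (dt / tau_dyn) * ...
            (A_nodyn - A_prev);                    % Eq. 2, dynamic leak
    A_dyn = max(A_dyn, 0);                         % Eq. 3
end
```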

Figure 1 illustrates the output activity of individual LSM neurons (Eqs. 1–3), which were isolated in the sense that they were not connected to any neuronal network beyond the provided inputs, for different input combinations (from left to right in Figure 1) of emulated excitatory and inhibitory synaptic inputs (Figures 1A,B). The input spike trains were convolved with a kernel function to emulate post-synaptic-potential inputs (detailed below, Eq. 6), which were fed to the LSM neuron (Figures 1C–E). The LSM activity without dynamic leak (ANoDyn, Figure 1F) shows the activity normalization resulting from the static leak constant (kstatic = 1 for this illustration), along with the effect of the neuron output activity threshold at zero (Eq. 3). The activity of the LSM neuron would also be expected to fall back toward this zero level of activity without any external or internal input. This level hence corresponds to a threshold for spike initiation among the population of similarly connected neurons that is typically represented by the one modeled neuron. The output activity of the LSM neuron with dynamic leak (ADyn, Figure 1G) exhibits a low-pass filtering effect, reflecting the RC component integrated into the LSM neuron model through the dynamic leak.

Figure 1. LSM responses to emulated synaptic inputs. (A) The activation times of three different excitatory synaptic inputs are indicated as spike trains. (B) The activation times of one inhibitory synaptic input. (C,D) The excitatory and inhibitory sensory input spike trains were convolved with a kernel function (see section “Materials and Methods”) to create input that resembles post-synaptic potentials. Note that the input to the LSM neuron can exceed 1 a.u., while the output of the LSM neuron cannot. Calibration applies to (C–E) (traces in the shaded region). (E) The input resulting from summation of the excitatory and inhibitory inputs. (F) LSM (without dynamic leak, kstatic = 1) output activity for the given PSP inputs. Calibration applies to (F,G). (G) LSM (with dynamic leak, kstatic = 1, τdyn = 1/100 s) output activity for the given PSP inputs.

Figure 2 illustrates the impact of various static and dynamic leak settings. As indicated in Figure 2A, the static leak constant acts as a normalization factor for the total neuron activity, without diminishing the underlying dynamics of that activity (Supplementary Figure 2). At very low values of the static leak constant, the mean activity reached sufficiently high levels for the reversal potential to start having a significant dampening effect on the activity dynamics (see uppermost trace in Figure 2A), substantially reducing the coefficient of variation (CV in Figure 2B). Figures 2C,D illustrate the additional impact of various values of the dynamic leak constant. Figure 2C and Supplementary Figure 3 demonstrate the filtering effect of the dynamic leak constant on the total neuron activity. A high value of this constant substantially smooths the activity dynamics, which was reflected in the resulting low CV value (Figure 2D). The dynamic leak constant (τDyn) was set to 1/100 s for the rest of this study, unless otherwise specified.

Figure 2. Impact of the static leak constant (kstatic) and the dynamic leak constant (τdyn) in the LSM. (A) Impact of the value of kstatic in the LSM (for τdyn = 1/100 s) for a given pseudo-random sensory input at 50 Hz for each of six sensors (see Figure 3). (B) The persistence of dynamics in the neuron activity (A) for varying kstatic values, as assessed by the coefficient of variation [CV = σ(A)/Ā]. A higher CV indicates a higher activity variance relative to the mean activity. (C) Impact of the value of τdyn in the LSM (for kstatic = 1) for the same pseudo-random sensory input as in (A). (D) The CV as a function of the value of the dynamic leak constant (τdyn) in the LSM. The arrow indicates the value of τdyn used in the rest of this paper unless otherwise specified.

In Appendix 1, we show that the LSM neuron model can be derived from a simplified H-H type conductance-based neuron model. In the H-H model, the leak is proportional to the membrane voltage and the synaptic currents are scaled depending on the membrane voltage, so that the voltage is limited to a fixed range. The differential equation describing this model suffers from numerical instability under explicit integration, so we solved it with the implicit (backward) Euler method; the model is simple enough that an analytical solution can also be obtained. Key H-H model features captured by the LSM neuron are: (i) the response to a given input scales with the difference between the current activity level (membrane potential, V) and the reversal potentials of the excitatory/inhibitory inputs (which have been normalized to +1 and -1, respectively); and (ii) the impact of a given input is scaled by the degree of shunting caused by the total synaptic activity the neuron receives at that time.
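
As an illustration of this integration scheme, a backward (implicit) Euler step for the analogous leak dynamic of Eq. 2 can be written in closed form; the step size and the placeholder activity values below are our assumptions:

```matlab
% Implicit (backward) Euler step for Eq. 2:
%   tau_Dyn * (A(t+dt) - A(t)) / dt = -A(t+dt) + A_NoDyn(t+dt)
% Solving for A(t+dt) gives an unconditionally stable update.
tau_dyn = 1/100;                 % s
dt = 1e-3;                       % s, step size (assumption)
A = 0.2; A_nodyn_next = 0.5;     % placeholder activities
A_next = (A + (dt/tau_dyn) * A_nodyn_next) / (1 + dt/tau_dyn);
```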

Izhikevich Neuron Model (IZ)

For the Izhikevich neuron model (Izhikevich, 2003), the membrane potential (IZv) and the adaptation variable (IZu) were updated via the following nonlinear differential equations discretized using Euler’s method.

$$\dot{IZ}_v = IZ_A \cdot IZ_v^2 + IZ_B \cdot IZ_v + IZ_C - IZ_u + (IZ_{input} \cdot IZ_k) \tag{4}$$
$$\dot{IZ}_u = IZ_a(IZ_b \cdot IZ_v - IZ_u)$$

When the membrane potential reached the spike depolarization threshold of 30 mV, a spike was produced, followed by a reset:

$$\text{if } IZ_v \geq 30\ \text{mV, then } \begin{cases} IZ_v \leftarrow IZ_c \\ IZ_u \leftarrow IZ_u + IZ_d \end{cases} \tag{5}$$

The IZA, IZB, and IZC parameters and the spiking threshold were the standard ones of the Izhikevich artificial neuron model, whereas the parameters IZa, IZb, IZc, and IZd were selected (Table 1 and Figures 3E,F) to mimic regular spiking behavior (Izhikevich, 2003, 2004). IZinput was the input current to the neuron model, which in this article was the weighted synaptic activity (w*a), and IZk is the input gain factor.
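
A hedged MATLAB sketch of the Euler-discretized IZ update of Eqs. 4, 5 follows; the standard polynomial coefficients and the regular-spiking values of IZa–IZd follow Izhikevich (2003), while the input gain value, step size, and constant drive used here are illustrative assumptions:

```matlab
% Euler-discretized Izhikevich model (Eqs. 4-5), regular spiking.
dt = 0.1;                            % ms
nSteps = 5000;
IZ_A = 0.04; IZ_B = 5; IZ_C = 140;   % standard IZ polynomial coefficients
IZ_a = 0.02; IZ_b = 0.2; IZ_c = -65; IZ_d = 8;   % regular spiking
IZ_k = 10;                           % input gain factor (assumption)
IZ_input = 0.5 * ones(1, nSteps);    % placeholder weighted input (w*a)
v = -65; u = IZ_b * v;               % initial potential and adaptation
spikes = zeros(1, nSteps);
for s = 1:nSteps
    v = v + dt * (IZ_A*v^2 + IZ_B*v + IZ_C - u + IZ_input(s)*IZ_k); % Eq. 4
    u = u + dt * IZ_a * (IZ_b*v - u);
    if v >= 30                       % Eq. 5: spike, then reset
        spikes(s) = 1;
        v = IZ_c;
        u = u + IZ_d;
    end
end
```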

Table 1. Izhikevich neuron model parameters used in the evaluation of this study (for the IZ model responses presented in Figures 3E,F and Supplementary Figure 4).

Figure 3. Comparing the properties of different neuron models in isolation. (A) Pseudo-random input spike trains (six spike trains corresponding to the six sensory inputs, with an average spike frequency of 50 Hz in each sensor). (B) The sensory input spike trains were convolved with a kernel function (see section “Materials and Methods”). The convolved input responses were fed as weighted (randomly generated, mu = 0.4) EPSP inputs to the neuron model. (C) Output responses of the LSM without dynamic leak. The red line is the mean across 50 presentations (each presentation made different by adding Gaussian noise, black lines). In this and all panels below, tests were made for the neuron in isolation, without network connections. (D) Similar display as in (C), but for the LSM with dynamic leak. (E) Similar display as in (C), but for the output responses of the Izhikevich neuron model. The spike outputs of the Izhikevich neuron model were convolved with a kernel function (same kernel parameter settings as in (B)). (F) Cross-correlations between the sensory input and the output responses of the neuron models (illustrated in C–E). (G) Cross-correlation between different sensory input frequencies and neuron model outputs across a range of IZ model settings, compared to the LSM with dynamic leak. Thin blue lines indicate the cross-correlation with the sensory input for the IZ neuron model responses for each of the 405 IZ model parameter settings (IZa, IZb, IZc, IZd, and IZk; see section “Materials and Methods”) tested. The thick blue line indicates the mean of those cross-correlations. The red line indicates the cross-correlation between the sensory inputs and the LSM outputs. The asterisk indicates the cross-correlation measure for the parameters chosen in Figure 3F.

Further, to analyze the IZ model behavior across different spiking and bursting behaviors, we explored the parametric space (Table 2 and Figure 3G) of IZa, IZb, IZc, IZd, and IZk (parameters in Eqs. 4, 5) within the boundaries identified in Izhikevich (2003). We investigated the IZ neuron model responses (Figure 3G) across 405 different parameter settings for each given input spike frequency. The parameter space was defined by the possible combinations of parameters listed in Table 2.

Table 2. Izhikevich neuron model parametric space explored in the evaluation of this study (for the IZ model presented in Figure 3G).

Network Connectivity

Our network was a two-layer, fully connected neuronal network comprising both inhibitory neurons (IN) and excitatory neurons (EN) (Figure 4A). This network configuration provides a simple system that includes the critical element of recurrency. The network architecture was defined by the following two rules: (a) the sensory inputs projected as excitatory synapses to all neurons in layer 1 only; (b) all excitatory and inhibitory neurons were fully reciprocally connected, both within and between layers. Most of the analysis reported here utilized a “5 × 4” network architecture (five ENs and five INs in layer 1, four ENs and four INs in layer 2), as sketched below. In the analysis of Figure 8, where different network sizes were explored, we simply scaled up the number of neurons in each layer using the same connectivity rules.
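
A minimal sketch of how such a “5 × 4” network's weight matrices could be instantiated is shown below; the variable names, the W(post, pre) sign convention, and the clipping of negative Gaussian draws are our assumptions, not published code:

```matlab
% Fully connected "5 x 4" recurrent network (cf. Figure 4A).
nL1 = 5; nL2 = 4;                        % E (and I) counts in layers 1 and 2
types = [ones(1,nL1), -ones(1,nL1), ...  % +1 = excitatory, -1 = inhibitory
         ones(1,nL2), -ones(1,nL2)];
nNeurons = numel(types);
mu = 0.4; cv = 0.2;                      % Gaussian weights, CV = 20%
W = max(mu + (cv*mu)*randn(nNeurons), 0);    % all-to-all recurrent gains
W(1:nNeurons+1:end) = 0;                 % no autapses in the default network
W = W .* repmat(types, nNeurons, 1);     % presynaptic type sets the sign
nSensors = 6;                            % sensory inputs to layer 1 only
Wsens = zeros(nNeurons, nSensors);
Wsens(1:2*nL1, :) = max(mu + (cv*mu)*randn(2*nL1, nSensors), 0);
```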

Figure 4. Activity dynamics in a sample recurrent network. (A) Principles of the connectivity structure in the recurrent network studied. Note that the default network, and the one used in panels (B–I) of this figure, contained excitatory neurons (five in the input layer, four in the output layer) and inhibitory neurons (same numbers) with the same connectivity, whereas only the neurons in layer 1 also received sensory inputs (the same six sensory inputs to each neuron) with all synapses having randomly generated Gaussian weights (mu = 0.4). (B) Frequency plot of the activity in an excitatory neuron. (C) Similar plot for an inhibitory neuron. (D) Raw data plots for sample signals in the excitatory neuron generated at the indicated presentation #. (E) Similar plot for the inhibitory neuron. (F–I) Similar plots as in (B–E) but when all the neurons were modeled to include the neuronal dynamic leak.

A two-layer, fully reciprocally connected neuronal network architecture with self-recurrent connections (autapses) was also investigated. In this specific network architecture, in addition to the network connectivity defined above, the excitatory neurons projected excitatory synaptic connections onto themselves, and inhibitory neurons projected inhibitory synaptic connections onto themselves (Supplementary Figure 7A).

Sensory Inputs

In this article, we investigated the individual neuron responses (Figures 2, 3 and Supplementary Figures 1–4) and network dynamics (Figures 4–8 and Supplementary Figures 6, 7, except Figures 7C,D) based on six sensory inputs. These sensory inputs were pseudo-randomly generated (see below) and provided as excitatory input to both excitatory and inhibitory neurons. We also tested our recurrent networks with higher input sensor densities (#sensors = 6, 15, 30, and 50; Figures 7C,D), the inputs of which were also pseudo-randomly generated.

Figure 5. High frequency components and the effect of dynamic leak across different synaptic weight distributions. (A–C) The three types of synaptic weight distributions explored (Gaussian, log-normal, and binary) and the average weight distributions for each mean weight value (five weight distributions were generated for each mean weight). (D) The frequency power distributions across all the above synaptic weight distributions. Color keys for the different average synaptic weights are in (A–C). (E) Similar display as in (D), for the same networks but with the neuron model with dynamic leak. Dashed orange traces in (D,E) show the corresponding frequency power distribution for the sensory inputs at 50 Hz, averaged across the six sensory inputs, for comparison.

Figure 6. Impact of conduction delays on the frequency distribution. (A) Frequency distributions for the same network (network settings as in Figure 4) with different average conduction delays between neurons. The vertical dashed lines indicate the supplementary frequency peaks (frequency peaks that were not present in the input sensory data or in the network dynamics without conduction delays) of the high-frequency component after the addition of a mean conduction delay to the network. (B) Data for the same networks and delays when the neuron model included dynamic leak.

Figure 7. Network activity dynamics increased when sensor input density decreased. (A) The high frequency components became more prominent at lower average spike frequencies in the sensor input. (B) Neuronal dynamic leak resulted in the disappearance of the high-frequency components across all input spike frequencies. (C) The high frequency components also became more prominent with lower numbers of sensory inputs, while a very high number of sensor inputs substantially reduced the overall neuron activity dynamics. (D) Introduction of neuronal dynamic leak resulted in the disappearance of the high-frequency components irrespective of the number of sensory inputs. The network settings for this analysis were the same as in Figure 4, except for the sensory input.

Figure 8. Activity frequency distributions altered with the scale of the networks. (A,E) For networks with Gaussian synaptic weight distributions of mean weight mu = 0.3 and 0.4, respectively, the high frequency components could appear without dynamic leak, regardless of network size. (B,F) Introduction of the neuronal dynamic leak (τdyn = 1/100) “rescued” the networks from these high-frequency components. (C,G) The dynamic leak constant was scaled with the square root of the number of synapses (i). (D,H) The dynamic leak constant was scaled with the total number of synapses (i). Dashed orange traces in all plots show the corresponding frequency power distribution for the sensory inputs at 50 Hz, averaged across the six sensory inputs, for comparison.

Pseudo-Random Inputs

For the sensory inputs to the LSM and the IZ neurons, we generated pseudorandom spike trains at several different average frequencies (50, 100, 150, and 200 Hz; Figure 3A and Supplementary Figure 4) with uniformly distributed spike times. We used the inbuilt MATLAB function “randi” to generate the spike time distributions in these spike trains. Furthermore, these spikes were convolved to resemble post-synaptic potentials (time-continuous activity) using the following kernel equation (Mazzoni et al., 2008),

$$a_i = \frac{\tau_{km}}{\tau_{kd}-\tau_{kr}} \cdot \left[\exp\!\left(-\frac{t-\tau_{kl}-t^*}{\tau_{kd}}\right) - \exp\!\left(-\frac{t-\tau_{kl}-t^*}{\tau_{kr}}\right)\right] \tag{6}$$

where t* is the input spike time, τkd is the decay time (4 ms), τkr is the rise time (12.5 ms), τkm is a constant accounting for the ratio between rise time and decay time (21.3 ms), and τkl is the latency, which is zero in this case. These values were chosen based on previous work (Rongala et al., 2018). The convolved sensor signal was then provided as synaptic input to the neuronal network.
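
A hedged sketch of the kernel of Eq. 6 applied by discrete convolution follows; the sampling step, the kernel support, and all variable names are our assumptions, with the time constants taken from the text above:

```matlab
% PSP kernel (Eq. 6) and convolution of a pseudo-random spike train.
tau_kd = 4e-3; tau_kr = 12.5e-3;         % decay and rise times (s)
tau_km = 21.3e-3; tau_kl = 0;            % scaling constant and latency (s)
dt = 1e-3;
tK = 0:dt:0.2;                           % 200 ms kernel support (assumption)
kernel = tau_km/(tau_kd - tau_kr) * ...
         (exp(-(tK - tau_kl)/tau_kd) - exp(-(tK - tau_kl)/tau_kr));
spikeTrain = zeros(1, 1000);             % 1 s at 1 ms resolution
spikeTrain(randi(1000, 1, 50)) = 1;      % ~50 Hz pseudo-random spikes
a = conv(spikeTrain, kernel);            % time-continuous PSP-like signal
a = a(1:numel(spikeTrain));              % truncate to the input length
```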

In order to analyze the network dynamics, we provided 50 presentations of the same pseudorandom spike trains (for each given average frequency). Each presentation differed from the reference pseudorandom spike train (Figure 3A, for a spike frequency of 50 Hz) by the addition of random noise of ±10 ms to individual spike times (Figures 3C–E, black lines). These presentations were concatenated without pause or reset between them, so the input subdivided into 50 presentations was in effect one long presentation lasting 50,000 ms.
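
A short sketch of constructing one jittered presentation; the uniform distribution of the ±10 ms jitter and all names are our assumptions:

```matlab
% One jittered presentation derived from the reference spike train.
refSpikeTimes = sort(rand(1, 50));                  % placeholder, ~50 Hz over 1 s
jitter = (2*rand(size(refSpikeTimes)) - 1) * 10e-3; % +/-10 ms noise
presentation = sort(refSpikeTimes + jitter);        % one presentation's spike times
```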

To allow a comparison with the output of the LSM, we also convolved the output spike trains of the spiking neuron model (IZ). The convolution emulated the post-synaptic response that would have been generated in a receiving neuron, whereas the LSM output itself directly corresponded to such a signal.

Synaptic Weights

All excitatory and inhibitory synaptic weights in the network were randomly distributed, including those of the excitatory sensory inputs to the layer 1 neurons. The synaptic weight distributions were either normal, log-normal, or binary, as sketched below. The normal and log-normal distributions were generated for different mean weights (μ, values between 0.1 and 0.5), each with a fixed coefficient of variation (CV) of 20% [where σ = (CV/100)*μ]. For binary distributions, we tested different probabilities of high-weight synapses (w = 1; the probability varied between 10 and 50%), whereas the remainder of the synapses were set to zero weight (Figures 5A–C).
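
The three weight distributions could be drawn as sketched below; the moment-matching parameterization of the log-normal case and the clipping of negative normal draws are our assumptions:

```matlab
% The three synaptic weight distributions of Figures 5A-C.
nSynapses = 100; mu = 0.4; cv = 20;
sigma = (cv/100) * mu;                       % sigma = (CV/100)*mu
wNormal = max(mu + sigma*randn(nSynapses, 1), 0);
m = log(mu^2 / sqrt(sigma^2 + mu^2));        % log-normal parameters matching
s = sqrt(log(1 + sigma^2/mu^2));             % the same mean and sigma
wLogNormal = exp(m + s*randn(nSynapses, 1));
p = 0.3;                                     % binary: probability of w = 1
wBinary = double(rand(nSynapses, 1) < p);
```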

Statistical Analysis

Cross-Correlation

The correlation index measure was used to compute the similarity of the responses of the neuron models (Figures 3F,G and Supplementary Figure 4E). The correlation between two signals was computed with the inbuilt MATLAB function “xcorr” (with zero lag), normalized to produce values from 0 (uncorrelated) to 1 (identical).
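
A sketch of this measure, assuming the 'coeff' normalization option of MATLAB's xcorr; the placeholder signals are our assumptions:

```matlab
% Zero-lag normalized cross-correlation between two signals.
x = rand(1, 1000);             % e.g., convolved sensory input (placeholder)
y = x + 0.1*randn(1, 1000);    % e.g., neuron model output (placeholder)
c = xcorr(x, y, 0, 'coeff');   % 1 = identical; lower = less similar
```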

Frequency Analysis

We performed a continuous wavelet transform (using the inbuilt MATLAB function “cwt”) in order to define the frequency composition of the input signal over time. The wavelet transform was used to extract the power of each frequency band as a function of time for the continuous neuron activity signal. Here, we report (Figures 4–8 and Supplementary Figures 5–7), for each frequency band, the maximum power of the signal within each input presentation time window (1 s).
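
A hedged sketch of this analysis, assuming MATLAB's cwt (Wavelet Toolbox) with its default wavelet; the sampling rate and the placeholder signal are our assumptions:

```matlab
% Maximum wavelet power per frequency band and 1 s presentation window.
fs = 1000;                          % Hz, sampling rate (assumption)
neuronActivity = randn(1, 50*fs);   % placeholder: 50 concatenated presentations
[wt, f] = cwt(neuronActivity, fs);  % continuous wavelet transform
power = abs(wt).^2;
winLen = fs;                        % 1 s presentation window
nWin = floor(size(power, 2) / winLen);
maxPower = zeros(numel(f), nWin);
for k = 1:nWin
    idx = (k-1)*winLen + (1:winLen);
    maxPower(:, k) = max(power(:, idx), [], 2);   % max power per band
end
```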

In Supplementary Figure 5, the frequency analysis was performed on the sensory input signals (the convolved signal for each given average spiking frequency) across all six sensory inputs for all 50 presentations (see section “Materials and Methods”). The maximum power was computed for each sensory input and each presentation, and the average across all six input sensors is reported in this figure.

In Figure 4, the frequency analysis was performed on the activity of one excitatory and one inhibitory neuron in layer 1 (Figures 4B,C,F,G) across all frequencies and presentations. Supplementary Figure 6 displays the frequency analysis performed on the activity of all excitatory and inhibitory neurons in the network of Figure 4A. A similar frequency analysis was carried out in Figures 5–8 and Supplementary Figure 5, which show the average maximum power calculated across all neurons in all layers and across all 50 presentations.

Results

Comparison to Spiking Neuron Models

We first characterized the input-output relationship of the neuron model in isolation (Figure 3) for a standardized sensor input, consisting of randomized spike times in six sensor neurons that were convolved into time-continuous input signals. These were synaptically integrated by a single modeled neuron (Figures 3A,B). The activity of the non-spiking LSM (linear summation neuron model) was compared with that of a spiking neuron model (Izhikevich, IZ), in terms of how well their outputs (Figures 3C–E) correlated with the input (Figure 3F). The IZ neuron model was chosen for this comparison because it was designed to reproduce rich neuronal response dynamics with computational efficiency (Izhikevich, 2003). The spikes generated by the IZ neuron model were convolved (see section “Materials and Methods”) into a time-continuous signal (Figure 3E) to make it comparable with the output of the LSM.

Both neuron models (LSM and IZ) were provided with the same pseudo-random sensory inputs (average firing frequency of 50 Hz in each of six sensors, see section “Materials and Methods”) connected via six different synapses (Figure 3B). The IZ neuron model parameters for this particular comparison (Figures 3E,F) were chosen to mimic regular spiking behavior (hypothesized to be a common neuron behavior in cortex; Izhikevich, 2003, 2004). The LSM neuron without dynamic leak on average closely reproduced the convolved input signal, but the individual traces were considerably noisier without dynamic leak (Figure 3C) than with dynamic leak (Figure 3D). The main difference between the LSM and the IZ neuron model responses was that the IZ neuron model tended to create output dynamics that were not present in the input signal (Figures 3B,E), a consequence of the binary nature of the spike output. A cross-correlation analysis between the neurons’ responses (Figure 3F) showed that the IZ neuron model reflected the input signal less faithfully than the LSM. Note that the cross-correlation is slightly poorer for the LSM with dynamic leak than without, because some of the fine timing details in the high-frequency components of the underlying convolved signal are slightly filtered out by the dynamic leak.

We tested whether this observation depended on the frequency of the spiking in the sensory inputs. The LSM consistently showed a higher correlation with the input signal than the IZ neuron model across a range of input spike frequencies (Supplementary Figure 4). Next, we tested whether the specific parameters chosen for the IZ neuron model (Figure 3E, also indicated by an asterisk in Figure 3G) were responsible for these results (Figure 3F). We therefore tested a range of parameter settings (405 different parameter combinations) known to reproduce specific output dynamics (bursting, for example) observed in a variety of neuron types in vitro (Izhikevich, 2003). The correlation analysis showed that the LSM was more consistent than the IZ model in maintaining a high correlation with the sensory inputs across the full range of sensory input spike frequencies (Figure 3G). The exception was the highest sensory input frequencies, which can be explained by the dynamics of the sensory input diminishing as the density of the inputs increases (Supplementary Figures 4A–D), as previously described also for neurons in vivo (Bengtsson et al., 2011). This effect, which we will refer to as the input density problem, is also evident in Figure 3G.

Network Dynamics (With and Without Neuronal Dynamic Leak)

We next investigated the activity dynamics of a standardized recurrent neuronal network implemented using the LSM (Figure 4A). The sensory input was fed as excitatory input to both the excitatory and inhibitory neurons of the first layer for 50 presentations, where sequential presentations differed by Gaussian noise added to the sensory signal (see section “Materials and Methods”). In the network with the neuron model without dynamic leak, there was initially a gradual increase in the power across the higher frequency components of the activity in both excitatory and inhibitory neurons (Figures 4B,C) (more extensively illustrated in Supplementary Figure 6, where the first few presentations of sensory input evoked a lower power response). These high frequency components were not present in the sensory input (Supplementary Figure 5A) and were therefore generated by the network, most likely as a consequence of the parallel excitatory and inhibitory connections, which would be expected to lead to some degree of signal derivation (Wehr and Zador, 2003). Interestingly, in the illustrated IN1 the high frequency components gradually built up (until approximately presentation #10) and then faded away (after approximately presentation #20), even though the average intensity of the sensory input did not vary over time, which suggests a relatively rich internal dynamics in this type of recurrent network, despite its limited size. In neurons of the second layer, high frequency components typically faded away more slowly (Supplementary Figure 6). The appearance of these high frequency components was sometimes associated with the appearance of transients in the neuron activity (Figures 4D,E). In contrast, in the same network but with the neuron model including the dynamic leak, the transients and the high-frequency components of the neuron activity disappeared (Figures 4F–I). Hence, the low-pass filtering effect of the dynamic leak “rescued” the recurrent network from generating spurious high-frequency components.

The recurrent connections of the network likely contributed strongly to these high-frequency components. An extreme case of recurrent connectivity is when a neuron makes a synapse onto itself (an autapse). It is not clear to what extent autapses exist in adult neuronal circuitry, but they have been shown to be present in early development (Lübke et al., 1996; Tamás et al., 1997), and self-recurrent connections are widely used in the fields of RNNs and computational neuroscience (Graves, 2012). To explore the impact of autapses, we used the exact same network architecture as in Figure 4 but added autapses to all neurons (Supplementary Figure 7A). In this scenario, the high-frequency components were strongly amplified (Supplementary Figures 7B–E). However, in the same network with the neuron model with the dynamic leak, the transients and the high-frequency components of the neuron activity were again effectively removed (Supplementary Figures 7F–I). We did not explore networks with autapses any further.

We next compared the frequency power distributions of the neuronal activity in this recurrent network across a range of different synaptic weight distributions (Figure 5). We studied three different types of synaptic weight distributions (Gaussian, log-normal, and binary). For each type of distribution, we tested five different mean synaptic weights (Figures 5A–C). Moreover, for each given synaptic weight distribution and mean weight, we generated five random weight distributions. The average signal of these five random weight distributions was used to calculate each frequency power distribution illustrated in Figures 5D,E, where each line represents the average activity across all the neurons of the network. In the network with the neuron model without dynamic leak (Figure 5D), the relative power of the high-frequency components was amplified for synaptic weight distributions with mean synaptic weights of 0.3–0.4 or above (μ ≥ 0.4 for Gaussian and μ ≥ 0.3 for log-normal distributions) and for p > 10% for binary distributions, compared to the sensory input (Supplementary Figure 5). For other synaptic weight distributions (for example, μ = 0.1 for Gaussian and log-normal distributions and p = 10% for binary distributions), there was much lower overall activity in the network, which could be the reason why the high frequency components were not induced in these networks. In the network with the neuron model with the dynamic leak component, the transients and the high-frequency components of the neuron activity disappeared for all settings (Figure 5E), though the setting of the dynamic leak component used also appeared to over-dampen the sensory input dynamics between 20 and 200 Hz. Note that each curve in Figures 5D,E represents the mean across five randomized repetitions. Supplementary Figure 8 instead shows the frequency power distribution of the neuronal activity for each individual network, sorted by synaptic weight distribution. The overlap between these frequency power distribution curves across randomized weights and different distribution means implies that there was no simple linear relationship between the network structure and the spurious high frequency components.

To further explore whether the high-frequency components observed were induced by the recurrent network, we tested whether we could affect the “center of gravity” of the high-frequency components by introducing different conduction delays in signal transmission between the neurons (Figure 6). In the brain in vivo, these would correspond to the axonal conduction delays and synaptic delays combined. The delays were randomized between all the neurons, and several different mean delays were tested in different simulations. Interestingly, supplementary frequency peaks (frequency peaks that were not present in the input sensory data or in the network dynamics without conduction delays, indicated with dashed vertical lines in Figure 6A) were observed without dynamic leak. The frequencies of these supplementary peaks were approximately inversely proportional to the mean conduction delay. From previous studies (Jirsa and Stefanescu, 2011; Petkoski et al., 2018) we know that conduction delays can introduce such additional dynamics into recurrent networks. These peaks were removed by adding dynamic leak (Figure 6B).

The high frequency components also appeared for lower average sensor input spike frequencies (50 and 100 Hz, Figure 7A) and for lower numbers of sensory inputs per neuron (nSensors = 6 and 15, Figure 7C). In contrast, for higher input spike frequencies (150 and 200 Hz) and higher numbers of sensory inputs (nSensors = 30 and 50), the increased density of the inputs resulted in a paradoxical decrease in the power of the neuron activity across all frequencies analyzed (i.e., as shown in Supplementary Figure 1), most likely due to the large number of randomized inputs regressing toward the constant mean frequency of each sensory signal. In each case, in the network with the neuron model with the dynamic leak component, the high-frequency components of the neuron activity disappeared for the sensor input configurations where they had been present (Figures 7B,D).

We also explored whether the size of the network could be a factor in the appearance of the high frequency components. We found that these high frequency components appeared for different network sizes and that in those cases the network activity was “rescued” when the LSM was implemented with the dynamic leak (Figure 8). Depending on the specific synaptic weight distribution, the high frequency components became unequally dominant for different network sizes (Figures 8A,E), according to unclear relationships. The largest network as a rule had the weakest overall dynamics, which could be due to the same input density problem discussed above: the larger networks have a higher number of recurrent synaptic inputs per neuron, and the resulting increase in synaptic input density reduced the dynamics of the neuron activity. As there is a tendency for membrane time constants to grow with the size of the neuron (Zhang, 2004), we scaled τDyn with the network size, as the neurons of the larger networks had a higher number of synapses (Figures 8C,D,G,H, for two different weight distributions; see the sketch below). A moderate scaling of τDyn (with the square root of the number of synapses, Figures 8C,G) actually increased the dynamics of some network sizes while eliminating high frequency components. In contrast, a linear scaling (Figures 8D,H) appeared to dampen such dynamics and, unsurprisingly, also low-pass filtered signals well below 100 Hz for the largest networks.
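
A sketch of the two τDyn scalings, normalized to the default network's synapse count; the variable names, the normalization, and the placeholder counts are our assumptions:

```matlab
% Two candidate scalings of tau_Dyn with the synapse count (Figure 8).
tau0 = 1/100;                  % baseline dynamic leak constant (s)
nSynRef = 20; nSyn = 40;       % placeholder synapse counts per neuron
tauSqrt   = tau0 * sqrt(nSyn / nSynRef);   % square-root scaling (Figures 8C,G)
tauLinear = tau0 * (nSyn / nSynRef);       % linear scaling (Figures 8D,H)
```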

Discussion

We explored the properties of a non-spiking neuronal model derived from the differential conductance-based H-H model when deployed in various recurrent neuronal networks. We found that in these recurrent networks, many different factors would tend to trigger network induction of high frequency signal components of a somewhat unpredictable magnitude and distribution (i.e., Figures 5–8 and Supplementary Figure 8). These signal components were not present in the input data (Supplementary Figure 5) and sometimes peaked to create overt spurious transients (Figures 4B,C). The dynamic leak in our neuron model invariably “rescued” the recurrent networks from their tendency to self-generate these high-frequency signal components (Figures 4–8 and Supplementary Figure 7). Corresponding to the capacitive component and the ion channels of the membrane circuit, dynamic leak is an inevitable feature of real neurons. Furthermore, this low-pass filter component made the behavior of recurrent networks more predictable for networks of different sizes.

We worked under the scenario that neuronal networks in the brain are recurrent and that excitatory and inhibitory connections are both pervasive, without any a priori assumed structure. Our network architecture contained the circuitry elements of previously reported “classical” network connectivity patterns (feedback and feedforward inhibition, for example). Feed-forward and feedback inhibition running in parallel with excitatory connections was likely the main network feature that caused the signal derivation effects/the high frequency components in the networks without the dynamic leak. The inclusion of autapses in the recurrent network strongly amplified these high frequency components (Supplementary Figure 7), presumably primarily through self-amplification of excitatory neurons. Note, however, that in a recurrent network any local circuitry feature will, at the global level, automatically give rise to other functional network features as well. Hence, in contrast to a non-recurrent, feed-forward neuronal network, in a recurrent network these circuitry features become less clear-cut from a functional point of view, which could cause additional dynamic network effects that could, for example, explain our observations of a gradual build-up of high frequency power components (Figure 4 and Supplementary Figure 5) while a steady sensory input level kept the network activity up. Understanding such network dynamics at a deeper level was outside the scope of this paper, but would need to be addressed if such networks are to be used in a functional setting.

In our recurrent networks, apparently spurious high frequency components could be induced for different types of synaptic weight distributions, delays between neurons, sensory input densities and network sizes. It was hard to predict under what exact conditions such high frequency components would become more or less dominant (i.e., Figure 8 and Supplementary Figure 8), but in each case the dynamic leak effectively canceled them out. From the point of view of the functionality of a processing recurrent network, the fact that the frequency distribution of any given network did not match that of the sensory input is not automatically to be considered a disadvantage because the goal of a processing network would not be to perfectly replicate the sensory input. However, the fact that these high frequency components sometimes took the shape of clear-cut transients with no obvious counterpart in the sensor signal suggests that, at least in part, they should be considered spurious, i.e., noise injected into the signal due to the dynamics of the specific network.

In some cases, the activity of the network became highly suppressed relative to the sensory input (i.e., for low mean weights in Figure 5 and for the largest network in Figure 8). This effect can be ascribed to the input density problem, i.e., when too many unrelated but continuously active synaptic inputs converge on the same neuron, their signal dynamics would tend to cancel out, leaving the neuron with very little signal dynamics (Bengtsson et al., 2011). As these signals, due to the network structure, are paralleled by inhibitory connections, when the signal dynamics is lost, inhibition and excitation cancel each other out and the activity dynamics is lost in the network as a whole.

How would spiking neuron networks fare with respect to rescuing a recurrent network from spurious high frequency components? The phasic nature of discrete spike output would be expected to worsen the problem, whereas refractoriness would tend to dampen it. Refractoriness could certainly rescue the system from the extreme transients observed in networks that included autapses. Refractoriness, however, would not rescue the system from high frequency components generated through longer range recurrent excitatory loops.

Global stability has long been a concern in recurrent neural networks (RNNs) owing to the non-linear dynamics that can arise from recurrency and the resulting feedback loops (Shen and Wang, 2011; Zhu and Shen, 2013). Periodic oscillations and stability issues in RNNs can arise, for example, as a consequence of input noise (Pham et al., 1998) and neuron activation delays (Gopalsamy and Leung, 1996). Moreover, such oscillations are inherent to any dynamical system with recurrency and amplification, such as parasitic oscillations in electronics and steady-state error in control theory. Apparently, the central nervous system, with its prodigious recurrent loops (see section “Introduction”) and intrinsic noise, has found a way to avoid such oscillations. In this study we explored the possibility that neuronal leak dampens such oscillations in a recurrent neuronal network.

Recurrent neuronal networks with balanced excitatory and inhibitory synaptic connections have been extensively studied previously (Brunel, 2000; Vogels and Abbott, 2005; Vogels et al., 2011; Rubin et al., 2017), using spiking neuron models (employing integrate-and-fire or related mechanisms for the spike generation). In these studies, the recurrent connections were sparsely distributed with an overall connection probability of 1–2%, and a ratio of 4:1 excitatory to inhibitory interneurons. These studies point out that factors such as high connection probability and unbalanced excitation-inhibition tend to produce network instability (Rubin et al., 2017) and in some cases failure in signal propagation across the layers of those neuronal networks (Vogels and Abbott, 2005). From the stability we observed across a wide range of recurrent network configurations, always at 100% connection probability (though weighted), it would seem that the LSM with dynamic leak would be beneficial for ensuring stable recurrent neuronal network behavior across a range of network sizes and density of connectivity.

The present findings suggest that the biological feature of neuronal dynamic leak, which causes the polarization (i.e., the activity) of the neuron to settle toward resting potential with a time constant, is an important functional feature. It allows brain networks to fully utilize recurrent neuronal network architectures with variable numbers of participating neurons without risking self-generated noise embodied as high frequency components and spurious transients.

Data Availability Statement

The original contributions presented in the study are publicly available. This data can be found here: https://figshare.com/projects/Non-Spiking_Neuron_Model_With_Dynamic_Leak_Rongala_et_al_2021_/112854. Further inquiries can be directed to the corresponding author/s.

Author Contributions

UR: design of study, design of model, data analysis, and writing the manuscript. JE and MK: design of model. GL: design of model and writing the manuscript. HJ: design of study, design of model, and writing the manuscript. All authors contributed to the article and approved the submitted version.

Funding

This work was supported by the EU Grant FET 829186 ph-coding (Predictive Haptic COding Devices In Next Generation interfaces) and the Swedish Research Council (Project Grant No. K2014-63X-14780-12-3).

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fncom.2021.656401/full#supplementary-material

Supplementary Figure 1 | Signal similarity between the LSM and H-H model. A comparison between the output responses for LSM (green line is the mean across 50 presentations) and the H-H (derived using backward Euler method, blue line is the mean across 50 presentations), for a given pseudo-random sensory input at 50 Hz for each of six sensors (see Figure 3). The responses of the LSM output were offset by 0.1 activity (a.u.) in order to visualize the coherence between the responses of both neuron models. The cross correlation (with zero lag) was 0.99.

Supplementary Figure 2 | Impact of the value of kstatic on the internal activity of the LSM for a given sensory input.

Supplementary Figure 3 | Impact of the value of τdyn on the internal activity of the LSM for a given sensory input.

Supplementary Figure 4 | Comparison of the non-spiking and the spiking neuron model outputs for different sensory input frequencies. (A–D) Neuron outputs in response to different sensory input frequencies. (E) Cross-correlation between sensory inputs and the neuron model outputs.

Supplementary Figure 5 | Frequency analysis of the sensory inputs. (A) Time-continuous frequency power analysis for each of the six sensory inputs (spike frequency = 50 Hz) across the 50 presentations used in the analysis of the network activity. (B) Frequency power analysis (using continuous wavelet transform, see section “Materials and Methods”), of sensory inputs. The plots show the average power of the activity across all the six sensors, for each of the four mean sensor firing frequencies, across all 50 presentations used in the analysis of the network activity.

Supplementary Figure 6 | Frequency analysis plots of the activity in all excitatory neurons (EN1–EN9) and inhibitory neurons (IN1–IN9) for the network shown in Figure 4A.

Supplementary Figure 7 | Activity in recurrent networks with autapses. (A) Principles of the connectivity structure in the recurrent network studied. The network presented here is a fully connected network as in Figure 4, with the addition of self-recurrent excitatory and inhibitory synapses (in excitatory and inhibitory neurons, respectively). (B) Frequency plot of the activity in an excitatory neuron. (C) Similar plot for an inhibitory neuron. (D) Raw data plots for sample signals in the excitatory neuron generated at the indicated presentation #. (E) Similar plot for the inhibitory neuron. (F–I) Similar plots as in (B–E) but when all the neurons were modeled with the dynamic leak.

Supplementary Figure 8 | High frequency components and the effect of dynamic leak across different specific synaptic weight distributions. The synaptic weight distributions used were as shown in Figures 5A–C, but instead of representing the five random simulations for each setting as an average, we here show them individually. (A) The frequency power distributions across all indicated synaptic weight distributions for five randomized repetitions each. Color keys for the different average synaptic weights are the same as in Figures 5A–C. (B) Similar display as in (A), for the same networks but with the neuron model with dynamic leak.

Supplementary Table 1 | H-H Model variable definitions (for the H-H neuron model derivation, presented in Appendix 1).

References

Allen, G. I., and Tsukahara, N. (1974). Cerebrocerebellar communication systems. Physiol. Rev. 54, 957–1006. doi: 10.1152/physrev.1974.54.4.957

Anderson, J. S., Carandini, M., and Ferster, D. (2000). Orientation tuning of input conductance, excitation, and inhibition in cat primary visual cortex. J. Neurophysiol. 84, 909–926. doi: 10.1152/jn.2000.84.2.909

Bengtsson, F., Ekerot, C. F., and Jörntell, H. (2011). In vivo analysis of inhibitory synaptic inputs and rebounds in deep cerebellar nuclear neurons. PLoS One 6:e18822. doi: 10.1371/journal.pone.0018822

Binzegger, T., Douglas, R. J., and Martin, K. A. C. (2004). A quantitative map of the circuit of cat primary visual cortex. J. Neurosci. 24, 8441–8453. doi: 10.1523/JNEUROSCI.1400-04.2004

Brunel, N. (2000). Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. J. Comput. Neurosci. 8, 183–208. doi: 10.1023/A:1008925309027

Chakravarthy, N., Tsakalis, K., Sabesan, S., and Iasemidis, L. (2009). Homeostasis of brain dynamics in epilepsy: a feedback control systems perspective of seizures. Ann. Biomed. Eng. 37, 565–585. doi: 10.1007/s10439-008-9625-6

Chowdhury, S. S., Lee, C., and Roy, K. (2020). Towards understanding the effect of leak in spiking neural networks. arXiv [Preprint]. arXiv:2006.08761.

Douglas, R. J., and Martin, K. A. C. (2009). Inhibition in cortical circuits. Curr. Biol. 19, R398–R402. doi: 10.1016/j.cub.2009.03.003

Gopalsamy, K., and Leung, I. (1996). Delay induced periodicity in a neural netlet of excitation and inhibition. Phys. D Nonlinear Phenom. 89, 395–426.

Graves, A. (2012). “Supervised sequence labelling,” in Supervised Sequence Labelling with Recurrent Neural Networks (Berlin, Heidelberg: Springer), 5–13.

Hodgkin, A. L., and Huxley, A. F. (1952). A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. 117, 500–544. doi: 10.1016/S0092-8240(05)80004-7

Hooks, B. M., Mao, T., Gutnisky, D. A., Yamawaki, N., Svoboda, K., and Shepherd, G. M. G. (2013). Organization of cortical and thalamic input to pyramidal neurons in mouse motor cortex. J. Neurosci. 33, 748–760. doi: 10.1523/JNEUROSCI.4338-12.2013

Isaacson, J. S., and Scanziani, M. (2011). How inhibition shapes cortical activity. Neuron 72, 231–243. doi: 10.1016/j.neuron.2011.09.027

Izhikevich, E. M. (2003). Simple model of spiking neurons. IEEE Trans. Neural Netw. 14, 1569–1572. doi: 10.1109/TNN.2003.820440

Izhikevich, E. M. (2004). Which model to use for cortical spiking neurons? IEEE Trans. Neural Netw. 15, 1063–1070. doi: 10.1109/TNN.2004.832719

Jirsa, V. K., and Stefanescu, R. A. (2011). Neural population modes capture biologically realistic large scale network dynamics. Bull. Math. Biol. 73, 325–343.

Jörntell, H. (2017). Cerebellar physiology: links between microcircuitry properties and sensorimotor functions. J. Physiol. 595, 11–27. doi: 10.1113/JP272769

Jörntell, H., and Ekerot, C. F. (2003). Receptive field plasticity profoundly alters the cutaneous parallel fiber synaptic input to cerebellar interneurons in vivo. J. Neurosci. 23, 9620–9631. doi: 10.1523/jneurosci.23-29-09620.2003

Kar, K., and DiCarlo, J. J. (2020). Fast recurrent processing via ventral prefrontal cortex is needed by the primate ventral stream for robust core visual object recognition. bioRxiv [Preprint]. doi: 10.1101/2020.05.10.086959

Koestinger, G., Martin, K. A. C., and Rusch, E. S. (2018). Translaminar circuits formed by the pyramidal cells in the superficial layers of cat visual cortex. Brain Struct. Funct. 223, 1811–1828. doi: 10.1007/s00429-017-1588-7

Lindner, B. (2014). “Low-pass filtering of information in the leaky integrate-and-fire neuron driven by white noise,” in Understanding Complex Systems, eds V. A. Palacios and P. Longhini (Cham: Springer). doi: 10.1007/978-3-319-02925-2_22

Liou, J. Y., Smith, E. H., Bateman, L. M., Bruce, S. L., McKhann, G. M., Goodman, R. R., et al. (2020). A model for focal seizure onset, propagation, evolution, and progression. Elife 9:e50927. doi: 10.7554/eLife.50927

Lübke, J., Markram, H., Frotscher, M., and Sakmann, B. (1996). Frequency and dendritic distribution of autapses established by layer 5 pyramidal neurons in the developing rat neocortex: Comparison with synaptic innervation of adjacent neurons of the same class. J. Neurosci. 16, 3209–3218. doi: 10.1523/jneurosci.16-10-03209.1996

Mazzoni, A., Panzeri, S., Logothetis, N. K., and Brunel, N. (2008). Encoding of naturalistic stimuli by local field potential spectra in networks of excitatory and inhibitory neurons. PLoS Comput. Biol. 4:e1000239. doi: 10.1371/journal.pcbi.1000239

Naundorf, B., Wolf, F., and Volgushev, M. (2006). Unique features of action potential initiation in cortical neurons. Nature 440, 1060–1063. doi: 10.1038/nature04610

Nilsson, M. N. P., and Jörntell, H. (2021). Channel current fluctuations conclusively explain neuronal encoding of internal potential into spike trains. Phys. Rev. E 103:022407.

Obermayer, J., Heistek, T. S., Kerkhofs, A., Goriounova, N. A., Kroon, T., Baayen, J. C., et al. (2018). Lateral inhibition by Martinotti interneurons is facilitated by cholinergic inputs in human and mouse neocortex. Nat. Commun. 9:4101. doi: 10.1038/s41467-018-06628-w

Okun, M., and Lampl, I. (2008). Instantaneous correlation of excitation and inhibition during ongoing and sensory-evoked activities. Nat. Neurosci. 11, 535–537. doi: 10.1038/nn.2105

Petkoski, S., Palva, J. M., and Jirsa, V. K. (2018). Phase-lags in large scale brain synchronization: Methodological considerations and in-silico analysis. PLoS Comput. Biol. 14:e1006160.

Pham, J., Pakdaman, K., and Vibert, J.-F. (1998). Noise-induced coherent oscillations in randomly connected neural networks. Phys. Rev. E 58:3610.

Pi, H. J., Hangya, B., Kvitsiani, D., Sanders, J. I., Huang, Z. J., and Kepecs, A. (2013). Cortical interneurons that specialize in disinhibitory control. Nature 503, 521–524. doi: 10.1038/nature12676

Rongala, U. B., Spanne, A., Mazzoni, A., Bengtsson, F., Oddo, C. M., and Jörntell, H. (2018). Intracellular dynamics in cuneate nucleus neurons support self-stabilizing learning of generalizable tactile representations. Front. Cell. Neurosci. 12:210. doi: 10.3389/fncel.2018.00210

Rubin, R., Abbott, L. F., and Sompolinsky, H. (2017). Balanced excitation and inhibition are required for high-capacity, noise-robust neuronal selectivity. Proc. Natl. Acad. Sci. U.S.A. 114, E9366–E9375. doi: 10.1073/pnas.1705841114

Saarinen, A., Linne, M. L., and Yli-Harja, O. (2008). Stochastic differential equation model for cerebellar granule cell excitability. PLoS Comput. Biol. 4:e1000004. doi: 10.1371/journal.pcbi.1000004

Shen, Y., and Wang, J. (2011). Robustness analysis of global exponential stability of recurrent neural networks in the presence of time delays and random disturbances. IEEE Trans. Neural Netw. Learn. Syst. 23, 87–96.

Song, S., Sjöström, P. J., Reigl, M., Nelson, S., and Chklovskii, D. B. (2005). Highly nonrandom features of synaptic connectivity in local cortical circuits. PLoS Biol. 3:e68. doi: 10.1371/journal.pbio.0030068

Spanne, A., and Jörntell, H. (2015). Questioning the role of sparse coding in the brain. Trends Neurosci. 38, 417–427. doi: 10.1016/j.tins.2015.05.005

Spanne, A., Geborek, P., Bengtsson, F., and Jörntell, H. (2014). Spike generation estimated from stationary spike trains in a variety of neurons in vivo. Front. Cell. Neurosci. 8:199. doi: 10.3389/fncel.2014.00199

Steriade, M. (1997). Synchronized activities of coupled oscillators in the cerebral cortex and thalamus at different levels of vigilance. Cereb. Cortex 7, 583–604. doi: 10.1093/cercor/7.6.583

Sultan, K. T., and Shi, S. H. (2018). Generation of diverse cortical inhibitory interneurons. Wiley Interdiscip. Rev. Dev. Biol. 7:e306. doi: 10.1002/wdev.306

Sutskever, I., Vinyals, O., and Le, Q. V. (2014). Sequence to sequence learning with neural networks. Adv. Neural Inf. Process. Syst.

Swadlow, H. A. (2003). Fast-spike interneurons and feedforward inhibition in awake sensory neocortex. Cereb. Cortex 13, 25–32. doi: 10.1093/cercor/13.1.25

Tamás, G., Buhl, E. H., and Somogyi, P. (1997). Massive autaptic self-innervation of GABAergic neurons in cat visual cortex. J. Neurosci. 17, 6352–6364. doi: 10.1523/jneurosci.17-16-06352.1997

Tougaard, J. (2002). Signal detection theory, detectability and stochastic resonance effects. Biol. Cybern. 87, 79–90. doi: 10.1007/s00422-002-0327-0

Vogels, T. P., and Abbott, L. F. (2005). Signal propagation and logic gating in networks of integrate-and-fire neurons. J. Neurosci. 25, 10786–10795. doi: 10.1523/JNEUROSCI.3508-05.2005

Vogels, T. P., Sprekeler, H., Zenke, F., Clopath, C., and Gerstner, W. (2011). Inhibitory plasticity balances excitation and inhibition in sensory pathways and memory networks. Science 334, 1569–1573. doi: 10.1126/science.1211095

Wehr, M., and Zador, A. M. (2003). Balanced inhibition underlies tuning and sharpens spike timing in auditory cortex. Nature 426, 442–446. doi: 10.1038/nature02116

Zhang, Z. W. (2004). Maturation of layer V pyramidal neurons in the rat prefrontal cortex: intrinsic properties and synaptic function. J. Neurophysiol. 91, 1171–1182. doi: 10.1152/jn.00855.2003

Zhu, J. J., and Lo, F. S. (2000). Recurrent inhibitory circuitry in the deep layers of the rabbit superior colliculus. J. Physiol. 523(Pt 3), 731–740. doi: 10.1111/j.1469-7793.2000.00731.x

Zhu, S., and Shen, Y. (2013). Robustness analysis for connection weight matrices of global exponential stability of stochastic recurrent neural networks. Neural Netw. 38, 17–22.

Appendix 1

Neuron Model Derivation From H-H Model

We first describe the rationale for our Linear Summation neuron model (LSM). In brief, the LSM aims to provide a simple and computationally efficient neuron model while capturing important characteristics of H-H conductance models (Hodgkin and Huxley, 1952; Rongala et al., 2018). The membrane potential in the LSM is normalized between +1 and -1, with a resting potential of zero. The output of an LSM neuron is a continuous, non-spiking signal that reflects the portion of the membrane potential exceeding some threshold, which we assumed to be the zero resting potential. This is suitable for representing one neuron, or a population of similarly connected neurons, that background activity biases to sit at or near its spontaneous activity level (as hypothesized for stochastic resonance to prevent dead bands; Tougaard, 2002). This continuous output signal is intended to reflect the mean spike rate that a population of similarly connected neurons would transmit to other centers in the nervous system.

In H-H models, the various ion channels associated with ionic pumps and leaks define a resting membrane potential at which there is no net current. Any deviation of the membrane potential from this resting potential will settle back according to the combined conductance of all of these ion channels, which is called the static leak. Synaptic activation leads to the opening of specific ion channels, which the H-H models represent as a change of synaptic conductance. The ion(s) to which the synapse makes the membrane permeable have a reversal potential that differs from the resting membrane potential. When the ion channels of a particular synapse are open, the membrane potential is driven toward that reversal potential, with a strength that depends on the synaptic conductance relative to the static leak conductance.

The synaptic currents charge the neuron, which is modeled as a single capacitor (assuming that the electrotonic distances between different parts of the neuron are negligible). Various synaptic signals are thus integrated and converted into a dynamically changing membrane potential. The static leak is in parallel with this capacitor, thereby defining a time-constant τ for these dynamic changes. The effect is that of low-pass filtering of the integrated synaptic currents (Lindner, 2014) to produce the membrane potential that defines the output state of the neuron. In an H-H model, the output state is created by converting the membrane voltage into patterns of spike output, with the help of a threshold for spike generation. Spike generation is omitted in the LSM model and the output of the neuron is instead the part of the membrane potential that exceeds some threshold (herein equal to the resting membrane potential).
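To make this low-pass filtering concrete, the following minimal Python sketch (our illustration, not code from the study; the function name, the time constant, and the input trace are assumptions) applies a first-order RC stage with time constant τ = C/g_leak to a synaptic current trace:

```python
import numpy as np

def membrane_lowpass(i_syn, tau=0.005, dt=0.0001):
    """First-order low-pass filtering of a synaptic current trace,
    mimicking the RC membrane with time constant tau = C / g_leak."""
    v = np.zeros_like(i_syn)
    for t in range(1, len(i_syn)):
        # Forward Euler on tau * dV/dt = -V + I_syn
        v[t] = v[t - 1] + (dt / tau) * (-v[t - 1] + i_syn[t])
    return v

# With tau = 5 ms, input components above ~1 / (2 * pi * tau) ≈ 32 Hz
# are strongly attenuated in the resulting membrane potential.
rng = np.random.default_rng(0)
v = membrane_lowpass(rng.standard_normal(10_000))
```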

First, we describe a conductance-based model similar to previously presented models (Hodgkin and Huxley, 1952; Rongala et al., 2018) and then show how a model similar to the LSM can be derived. In this category of ion channel conductance-based models, the dynamics of each type of ion channel are lumped into a single conductance. Compared to other conductance-based models, no spike generation is modeled and the neuron's resting potential is set to zero. The neuron is modeled as a capacitor with capacitance (C), charged by excitatory (Isyn,exc) and inhibitory (Isyn,inh) synaptic currents and discharged by a leak current (Ileak). The neuron's membrane potential (V), measured across the capacitor, therefore obeys

$$C \frac{dV}{dt} = I_{leak} + I_{syn,exc} + I_{syn,inh} \qquad (A1)$$

The leak current is proportional to the membrane potential through a constant conductance gL, according to Ohm's law:

$$I_{leak} = -g_L V \qquad (A2)$$

If the synaptic currents are zero, the neuron's membrane potential will decay to zero; this model neuron's resting membrane potential is therefore zero. At each excitatory synapse i, the firing rate of the presynaptic neuron (vi+) produces a current. The synapse has a baseline conductance (wi+), called the synaptic weight, which is scaled by the presynaptic neuron's firing rate. The direction and magnitude of the current are determined by the difference between the constant reversal potential (Eexc) and the neuron's membrane potential. For an excitatory synapse, the current would reverse if the membrane potential rose above the positive reversal voltage, so synaptic activity can never drive the membrane potential above the reversal potential; once the neuron has reached this voltage, its voltage cannot increase further. The current contributed by an individual synapse can be modeled as

$$I_{syn,exc}^{i} = -(V - E_{exc})\, w_i^+ v_i^+ \qquad (A3)$$

The same model is applied to the current generated by an inhibitory synapse i, with reversal potential (Einh), synaptic weight (wi-) and presynaptic firing rate (vi-). The reversal potential of an inhibitory synapse is set so that the neuron's potential cannot decrease below this potential.

$$I_{syn,inh}^{i} = -(V - E_{inh})\, w_i^- v_i^- \qquad (A4)$$

Summing all synaptic currents and substituting the current equations into the membrane potential equation (Eq. A1), we obtain

$$C \frac{dV}{dt} = -g_L V - \sum_i (V - E_{exc})\, w_i^+ v_i^+ - \sum_i (V - E_{inh})\, w_i^- v_i^- \qquad (A5)$$
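For illustration, here is a minimal sketch of one integration step of Eq. A5 (not the scheme used in the paper, which applies the implicit Euler method introduced below; the variable names and parameter values are assumptions):

```python
import numpy as np

def conductance_step(v, w_exc, r_exc, w_inh, r_inh,
                     g_l=1.0, e_exc=1.0, e_inh=-1.0, c=1.0, dt=0.001):
    """One forward-Euler step of Eq. A5. w_exc and w_inh are positive
    synaptic conductances; r_exc and r_inh are presynaptic firing rates."""
    i_leak = -g_l * v
    i_exc = -np.sum((v - e_exc) * w_exc * r_exc)  # pulls v toward e_exc
    i_inh = -np.sum((v - e_inh) * w_inh * r_inh)  # pulls v toward e_inh
    return v + (dt / c) * (i_leak + i_exc + i_inh)

# Example: one excitatory and one inhibitory synapse.
v_next = conductance_step(0.0, np.array([0.5]), np.array([0.8]),
                          np.array([0.3]), np.array([0.2]))
```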

where the first sum is over all excitatory synapses and the second sum over all inhibitory synapses. In order to show the relationship of this equation to the LSM, we set the leak conductance to one and replace the leak conductance and membrane capacitance by a time constant τ = C/gL. The reversal potentials are set to +1 and -1 for the excitatory and the inhibitory synapses, respectively. Hence the possible membrane potentials range between +1 and -1, assuming that the initial voltage also lies in that range.

The resulting differential equation is

$$\tau \frac{dV}{dt} = -V + (1 - V) \sum_j w_j^+ v_j^+ - (1 + V) \sum_k w_k^- v_k^- \qquad (A6)$$

This equation can be integrated numerically with the implicit Euler method. Let V_t be the membrane voltage at timestep t and h the step size. The following equation must then be solved for V_{t+1}:

$$V_{t+1} = V_t + \frac{h}{\tau} \left( -V_{t+1} + (1 - V_{t+1}) \sum_j w_j^+ v_j^+ - (1 + V_{t+1}) \sum_k w_k^- v_k^- \right) \qquad (A7)$$

Because the right-hand side is linear in V_{t+1}, an analytic solution is possible:

$$V_{t+1} = \frac{V_t + \frac{h}{\tau} \sum_i w_i v_i}{1 + \frac{h}{\tau} + \frac{h}{\tau} \sum_i |w_i| v_i} \qquad (A8)$$

where both sums run over all synapses, with the weights w_i taken as signed quantities (positive for excitatory, negative for inhibitory synapses), so that |w_i| recovers the conductance magnitude.

The parameters of this model are also listed in Supplementary Table 1. This system has a feature not commonly found in neuron models: whereas the differential equation of a typical model describes only the instantaneous effect of the inputs and the neuron's state, here the effect of the input on how input is processed by the neuron in a future time step is already taken into account.
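A minimal sketch of the closed-form update in Eq. A8 follows (our illustration; the step size and time constant values are assumptions). Because |Σ w_i v_i| ≤ Σ |w_i| v_i, the numerator can never outgrow the denominator, so the voltage remains bounded within [-1, +1]:

```python
import numpy as np

def lsm_implicit_step(v_t, w, rates, h=0.001, tau=0.01):
    """Implicit-Euler update of Eq. A8. `w` holds signed weights
    (positive excitatory, negative inhibitory); `rates` holds the
    nonnegative presynaptic firing rates."""
    drive = (h / tau) * np.sum(w * rates)
    shunt = (h / tau) * np.sum(np.abs(w) * rates)
    # |drive| <= shunt, so |v| stays <= 1 whenever |v_t| <= 1.
    return (v_t + drive) / (1.0 + h / tau + shunt)
```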

From the above derivation (Eq. A8), we can observe that the excitatory (Isyn,exc) and inhibitory (Isyn,inh) synaptic currents of the conductance-based neuron model (Eq. A1) reduce to a normalized summation of signed synaptic weights, Σ_i w_i v_i / (1 + Σ_i |w_i| v_i), and that the leak current (Ileak, Eq. A1), which is a dynamic leak because it acts across the capacitor, reduces to the constant h/τ (Eq. A8); as long as this constant lies between zero and one, it can be disregarded in this expression. Based on this derivation, we propose the simplified linear summation neuron model (LSM, Eqs. A9 and A10) to capture the essential dynamics of the original H-H conductance-based model (Supplementary Figure 1). The LSM is given by two equations: the LSM without dynamic leak (Eq. A9) and the LSM with dynamic leak (Eq. A10).

$$A_{NoDyn} = \frac{\sum_i w_i a_i}{1 + \sum_i |w_i a_i|} \qquad (A9)$$

$$\tau_{Dyn} \frac{dA_{Dyn}}{dt} = -A_{Dyn}(t) + A_{NoDyn}(t) \qquad (A10)$$
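For concreteness, here is a minimal sketch of Eqs. A9 and A10 (our illustration; the function names, the forward-Euler integration of Eq. A10, and the parameter values are assumptions):

```python
import numpy as np

def lsm_no_dyn(w, a):
    """Eq. A9: normalized linear summation, bounded to (-1, +1)."""
    return np.sum(w * a) / (1.0 + np.sum(np.abs(w * a)))

def lsm_dyn(w, a_series, dt=0.001, tau_dyn=0.005):
    """Eq. A10: the A9 output passed through a first-order dynamic-leak
    filter, integrated with forward Euler. `a_series` has shape
    (timesteps, n_synapses)."""
    a_dyn = np.zeros(len(a_series))
    for t in range(1, len(a_series)):
        a_nodyn = lsm_no_dyn(w, a_series[t])
        a_dyn[t] = a_dyn[t - 1] + (dt / tau_dyn) * (-a_dyn[t - 1] + a_nodyn)
    return a_dyn
```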

Keywords: neuron model, recurrent networks, dynamic leak, spurious high frequency signals, non-spiking, excitation, inhibition

Citation: Rongala UB, Enander JMD, Kohler M, Loeb GE and Jörntell H (2021) A Non-spiking Neuron Model With Dynamic Leak to Avoid Instability in Recurrent Networks. Front. Comput. Neurosci. 15:656401. doi: 10.3389/fncom.2021.656401

Received: 20 January 2021; Accepted: 19 April 2021;
Published: 20 May 2021.

Edited by:

Viktor Jirsa, Aix-Marseille Université, France

Reviewed by:

Qing Yun Wang, Beihang University, China
Spase Petkoski, INSERM U1106 Institut de Neurosciences des Systèmes, France

Copyright © 2021 Rongala, Enander, Kohler, Loeb and Jörntell. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Udaya B. Rongala, udaya_bhaskar.rongala@med.lu.se
