
ORIGINAL RESEARCH article

Front. Comput. Neurosci., 23 February 2024
This article is part of the Research Topic Advancing our Understanding of the Impact of Dynamics at Different Spatiotemporal Scales and Structure on Brain Synchronous Activity, Volume II

Topological features of spike trains in recurrent spiking neural networks that are trained to generate spatiotemporal patterns

  • 1Federal Research Center A.V. Gaponov-Grekhov Institute of Applied Physics of the Russian Academy of Sciences, Nizhny Novgorod, Russia
  • 2Faculty of Natural Sciences and Mathematics, University of Maribor, Maribor, Slovenia
  • 3Department of Medical Research, China Medical University Hospital, China Medical University, Taichung City, Taiwan
  • 4Complexity Science Hub Vienna, Vienna, Austria
  • 5Department of Physics, Kyung Hee University, Seoul, Republic of Korea

In this study, we focus on training recurrent spiking neural networks to generate spatiotemporal patterns in the form of closed two-dimensional trajectories. Spike trains in the trained networks are examined in terms of their dissimilarity using the Victor–Purpura distance. We apply algebraic topology methods to the matrices obtained by rank-ordering the entries of the distance matrices, specifically calculating the persistence barcodes and Betti curves. By comparing the features of different types of output patterns, we uncover the complex relations between low-dimensional target signals and the underlying multidimensional spike trains.

1 Introduction

The challenge of understanding how spatiotemporal patterns of neural activity give rise to various sensory, cognitive, and motor phenomena in nervous systems is a significant task in computational and cognitive neuroscience. A prominent paradigm for proposing hypotheses about potential mechanisms involves training recurrent neural networks on target functions, considering biological constraints and relating dynamic and structural features in the obtained networks to characteristics of inputs and outputs (Sussillo, 2014; Barak, 2017; Yang and Wang, 2020; Amunts et al., 2022; Maslennikov et al., 2022). This approach is in line with a more traditional research domain of finding dynamical mechanisms underlying various spatiotemporal patterns observed in the brain (traveling waves, oscillatory rhythms in different frequency domains, chaotic or disordered spike firing etc.) (Liu et al., 2022a,b; Yu et al., 2023a,b), which is highly interdisciplinary and borrows different approaches from physics and mathematics.

An interdisciplinary approach has emerged at the intersection of computational neuroscience, machine learning, and non-linear dynamics. This approach considers similarities in time-dependent processes in biological brains and artificial neural networks as consequences of computations through population dynamics (Marblestone et al., 2016; Hassabis et al., 2017; Cichy and Kaiser, 2019; Vyas et al., 2020; Dubreuil et al., 2022; Ramezanian-Panahi et al., 2022). Works in this direction focus on training networks of rate neurons on cognitive-like and sensorimotor neuroscience-based tasks, revealing computational principles for completing target tasks in terms of dynamics, functional specialization of individual neurons, and coupling structure (Sussillo and Abbott, 2009; Mante et al., 2013; Sussillo and Barak, 2013; Abbott et al., 2016; Chaisangmongkon et al., 2017; Maslennikov and Nekorkin, 2019, 2020; Maslennikov, 2021).

Real neural networks differ from rate-based models primarily in that they produce sequences of action potentials, or spikes. To account for this important aspect, another class of neural networks, spiking ones, has been developed. On the one hand, they are more biologically realistic in producing firing patterns of a similar structure, enabling a more thorough comparison between artificial and biological spiking networks in their dynamics and structural mechanisms of functioning (Eliasmith et al., 2012; Gilra and Gerstner, 2017; Kim et al., 2019; Lobo et al., 2020; Pugavko et al., 2020, 2023; Amunts et al., 2022). On the other hand, spiking networks are a next-generation class of neural networks capable of energy-efficient computations when performed on specialized neuromorphic chips (Schuman et al., 2022). Although they can be obtained from conventional neural networks using conversion techniques, taking full advantage of them requires specific training algorithms (Demin and Nekhaev, 2018; Neftci et al., 2019; Tavanaei et al., 2019; Bellec et al., 2020; Dora and Kasabov, 2021). Spiking neural networks have demonstrated their capabilities in various applications, including processing signals of different modalities (Bing et al., 2018; Auge et al., 2021; Yamazaki et al., 2022), robotics (Lobov et al., 2020, 2021; Angelidis et al., 2021), and, more generally, brain-inspired artificial intelligence tasks and brain dynamics simulations (Zeng et al., 2023).

As in the case of biological neural systems, artificial spiking networks are hard to interpret when they perform complex motor or cognitive-like tasks. While rate-based neural networks organize their dynamics along smooth manifolds, which can often be studied as projections onto low-dimensional subspaces, for spiking networks such a procedure is generally not possible (Muratore et al., 2021; Cimeša et al., 2023; DePasquale et al., 2023). One promising approach to characterizing spiking patterns comes from algebraic topology. Tools such as persistent homology analysis have been used to relate spike patterns to functions of both biological and artificial neural networks (Dabaghian et al., 2012; Petri et al., 2014; Curto, 2017; Bardin et al., 2019; Santos et al., 2019; Sizemore et al., 2019; Naitzat et al., 2020; Billings et al., 2021; Guidolin et al., 2022), and more widely for studying topological aspects of dynamical systems (Maletić et al., 2016; Stolz et al., 2017; Salnikov et al., 2018; Myers et al., 2019).

In this study, we explore topological features of spiking neural networks trained to generate low-dimensional target patterns. We study recurrent networks in the class of reservoir computers (Maass et al., 2002; Lukoševičius and Jaeger, 2009; Sussillo, 2014), where training occurs only at the output connections. After training, the networks produce spiking dynamics which underlie the generation of output patterns, and our goal is to study how topological features of the spike trains carry information about the output patterns in terms of persistence barcodes and Betti curves. In Section 2, we present the system under study and the key findings of our study. Section 3 summarizes the results, and Section 4 gives particular details of the model and methods.

2 Results

2.1 Training recurrent spiking neural networks to generate target outputs

We consider recurrent networks of spiking neurons trained to generate two-dimensional spatiotemporal signals and study how topological signatures of their spike patterns relate to the readout activity. The pipeline of our study is schematically presented in Figure 1. The neurons are randomly and sparsely connected, with weights drawn from a Gaussian distribution and kept fixed. The structure of links is determined by the adjacency matrix A. Two scalar outputs (which can be considered as one vector output) x̂(t) and ŷ(t) linearly read out the filtered spiking activity of the recurrent network via output weight vectors w1 and w2. The output signals also send feedback to the recurrent network through connections given in matrix U. Like the recurrent links, the feedback links are initialized and kept fixed, while the output links are changed during training to minimize the error e(t) between the target pattern [x(t), y(t)] and the actual output signals x̂(t), ŷ(t), see Figure 1A. This training setup is a particular case of the reservoir computing paradigm, in which only the weights of the last layer are trained. In this study, training is performed with the FORCE method (see details in Section 4).


Figure 1. Flowchart of the study. (A) Training a spiking neural network to generate a target trajectory at the output by the FORCE method. (B) After training, the self-sustained spiking patterns support the generation of the pattern of interest. (C) Spike trains are analyzed in several steps. First, the matrix D of the Victor–Purpura distances is calculated. Second, the matrix M is obtained by rank ordering the entries of D. Finally, we apply methods of algebraic topology, namely, we compute the persistent homology of the rank-ordered matrix, obtaining persistence barcodes and Betti curves which give topological signatures of the spiking patterns.

The networks we study consist of leaky integrate-and-fire neurons with an absolute refractory period, and the output trajectories are chosen as closed polar curves, see details in Section 4. After training, the networks are capable of producing these two-dimensional signals, which can be treated as target motor patterns produced by spiking activity, see Figure 1B. Our purpose is to relate the spiking patterns of the trained neural networks to the target trajectories. The output signals are produced as weighted sums of the firing-rate activity, but the question is to what extent the detailed spike trains, not the averaged rates, are responsible for producing the target patterns. To answer this question, we measure how dissimilar individual spike trains are from each other. There are many correlation-based characteristics that quantify similarity between signals produced by neurons, but they do not capture the fine structure of spike timing. Here, we adopt the method proposed by Victor and Purpura to compute a special quantity, the Victor–Purpura (VP) distance, which treats a spike sequence as a point in a metric space. We calculate a matrix of VP distances whose entry (i, j) quantifies how dissimilar, or distant, the spike trains produced by the i-th and j-th neurons are. After that, we transform the obtained distance matrices by rank ordering their entries. Then, we apply a method of algebraic topology, persistent homology, to the latter matrix and obtain the so-called persistence barcodes and Betti curves, see Figure 1C. These topological signatures are detailed characteristics of the spike patterns responsible for generating the outputs under study, so we examine how the spiking topology features relate to the low-dimensional output signals.

We applied the proposed pipeline to different types of teacher signals, but in order to conveniently visualize and easily interpret the topological features of the target patterns themselves, we show the results for four closed polar curves with different numbers of holes, as shown in Figure 2. The figures in fact show the actual outputs, which closely match the target signals apart from a small noisy component resulting from the spiking nature of the network.


Figure 2. Examples of output trajectories [x(t), y(t)] produced by the trained recurrent spiking neural network: (from left to right) a circle, two-petal, three-petal, and four-petal polar roses.

In terms of individual spiking activity, different neurons in the trained networks fire at various rates. Namely, within particular segments of the target pattern, some neurons actively generate action potentials while others are silent and start to fire in later segments of the pattern. The overall network activity can be characterized by the mean firing rate, i.e., the average number of spikes per second per neuron. Figures 3A, C, E, G show the evolving mean firing rates in the networks producing the four corresponding target patterns shown in Figure 2. Notably, in all cases the firing rate changes within the interval of 20–80 Hz, except for the simplest circle target pattern, where the firing rate varies within the narrow interval of 72–84 Hz. For target signals in the form of multipetal roses, the firing rate increases and decreases following each petal, see Figures 3C, E, G. The corresponding spike rasterograms shown in Figures 3B, D, F, H indicate that the rises and falls of the mean firing rate are supported by the activity of different neurons. Therefore, although the output patterns are produced by filtered spikes, i.e., instantaneous firing rates of neurons, one cannot directly relate the rate activity to the properties of the output pattern. Moreover, the temporal structure of spikes, not only their rates, is responsible for generating output patterns of different forms.
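For concreteness, the windowed rate estimate can be sketched as follows; this is a minimal illustration rather than the authors' code, and the variable names are placeholders:

```python
import numpy as np

def mean_firing_rate(spike_trains, t_max, window=0.02):
    """Network mean firing rate (spikes/s per neuron) in non-overlapping
    windows; window=0.02 corresponds to the 20 ms averaging of Figure 3."""
    edges = np.arange(0.0, t_max + window, window)
    all_spikes = np.concatenate([np.asarray(s) for s in spike_trains])
    counts, _ = np.histogram(all_spikes, bins=edges)
    return edges[:-1], counts / (len(spike_trains) * window)

# Example: two toy spike trains over a 1 s trial.
times, rate = mean_firing_rate([[0.01, 0.5, 0.9], [0.4, 0.45]], t_max=1.0)
```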


Figure 3. Instantaneous firing rate of the full network averaged over 20 ms for the four different target outputs shown in Figure 2 (left column). Corresponding spike trains of 100 randomly chosen neurons (right column). For the circle target output, the firing rate changes within the interval of 72–84 Hz (A), and the corresponding spike train does not exhibit any distinct phases (B). For the target patterns in the form of polar roses (C, E, G), the network firing rate varies within the band of 20–80 Hz, making discernible rise-fall excursions for each petal of the output trajectory. The corresponding spike trains (D, F, H) contain the same number of distinct phases as the number of petals in the target pattern.

2.2 Distance matrices for spike trains in the trained neural network

To compare different neurons in terms of dissimilarities between their spike trains, we apply the method proposed by Victor and Purpura (see Guidolin et al., 2022 and works cited therein). This method endows a pair of spike trains with a notion of distance, in contrast with the frequently used approach of quantifying pairs of neuronal responses by rate-based correlations. Spike trains of some finite length are considered as points in an abstract space equipped with a metric rule that assigns a non-negative number Dij to each pair of points i, j. The Victor–Purpura (VP) distance has the basic properties required of a true metric: it vanishes only for a pair of identical spike trains (Dii = 0) and is positive otherwise (Dij > 0, i ≠ j), it is symmetric (Dij = Dji), and it fulfills the triangle inequality (Dik ≤ Dij + Djk). The VP distance between spike trains is defined as the minimum cost of transforming one spike train into the other via the addition or deletion of spikes, shifts of spike times, or changes in the neuron of origin of the spikes. Each modifying move is characterized by a cost q which controls the timescale for shifts of spikes. In general, there is a family of distances defined in this way which can capture the sensitivity to the neuron of origin of each spike. Here, we use the basic VP metric, which assigns cost q = 1 per unit time to move a spike (see details in Section 4).

For each of the four target patterns illustrated here, we collect spike trains S^(i) = [t_1^(i), t_2^(i), …, t_{s_i}^(i)], i = 1, …, N, for the period of 1 s (the duration of the target generation). Then, we calculate the VP distance for each pair of spike trains and obtain matrices D = [Dij], as shown in Figure 4. These matrices are symmetric and reflect the intricate temporal structure of the spike patterns supporting the corresponding output trajectories. Notably, even at this stage, one can draw several qualitative conclusions about differences between the spike patterns related to different target outputs. Despite the similar ranges of firing-rate variation for all targets, as shown in Figures 3A, C, E, G, the matrices of VP distances for their spike trains differ markedly. The simplest circle target corresponds to a matrix where most entries take similar values in the middle of the range of possible distances (see Figure 4A). For multi-petal closed trajectories, the maximum distance becomes smaller as the number of holes in the output pattern increases, cf. Figures 4B–D. Moreover, polar roses with fewer petals recruit more neurons that produce less distant spike trains.


Figure 4. Matrices of the Victor–Purpura distance D = [Dij] obtained for spike trains underlying the generation of the four different target outputs in Figure 2: (A) a circle, (B) two-petal, (C) three-petal, and (D) four-petal polar roses. More distant neurons, shown by red entries, fire the most dissimilar spike trains, while less distant ones, given by blue entries, generate comparable spike patterns. The matrices show that different target patterns require a special organization of spike trains.

To gain more insight into the intricate structure of the spike trains, following Giusti et al. (2015) and Guidolin et al. (2022), we transform the obtained matrices by rank ordering their entries. Namely, given a matrix of VP distances Dij with zeros on its main diagonal, we consider the entries of its above-diagonal part and replace them with natural numbers 0, 1, …  in ascending order of their value. The below-diagonal part of the rank-ordered matrix is completed symmetrically, resulting in the rank-ordered matrix M = [Mij]. Thus, the larger the VP distance Dij, the smaller the corresponding entry Mij. Finally, we normalize the entries of the latter matrix by the maximum N(N − 1)/2 and reindex the neurons in descending order of their individual firing rates, thus obtaining the matrix M = [Mij] for the four target patterns of interest, as shown in Figure 5.


Figure 5. Matrices M = [Mij] produced by rank ordering and normalization of the entries of the corresponding VP matrices D = [Dij] shown in Figure 4 for the four different target outputs: (A) a circle, (B) two-petal, (C) three-petal, and (D) four-petal polar roses. The smaller entries (blue) of the matrices M correspond to the most dissimilar spike trains, while the larger ones (red) indicate the closest neurons in terms of VP distance. The neurons are reordered by their individual average firing rates, so units with a smaller index produce more spikes during trials than those with larger indices. The form of the matrices emphasizes that neurons with close firing rates are closer to each other than to neurons producing a greatly different number of spikes. However, different target patterns are characterized by individual signatures.

The smallest values of Mij correspond to pairs of spike trains that are the most dissimilar, and the highest entries indicate the closest neurons in the Victor–Purpura sense. Notably, the less active neurons with the largest indices are the most similar to each other (see the upper-right parts of the matrices in Figure 5) and far away from the most active neurons. The lower-left part of the rank-ordered matrices corresponds to neurons that fire most actively during the task and thus contribute most to the output patterns. Their spike trains show a highly complicated structure that depends on the target pattern. To characterize the structure of the relations between these core neurons, we take the 100 most active ones and study the topological features of the graph of their rank-ordered VP distances.

2.3 Persistent homology of rank-ordered matrix

The most frequently used tool in topological data analysis is persistent homology. While this framework was initially developed for static data sets, many of its ideas have been adapted to the study of time-varying dynamic data (Petri et al., 2014; Curto, 2017; Stolz et al., 2017; Myers et al., 2019; Santos et al., 2019). Homology refers to certain topological properties of data, whereas persistence reflects the properties that are maintained across multiple scales of the data.

Our set of neurons and the corresponding spike trains form a point cloud of vertices for which a notion of distance, collected in M, is determined. This set of vertices forms zero-dimensional simplices, while one-dimensional simplices are the edges between them. Imagine each vertex is surrounded by a circle of radius ρ, and this value gradually increases starting from zero. If the circle centered at vertex i has a radius ρ larger than the distance Mij to vertex j, the pair of nodes i and j are considered coupled and form a one-dimensional simplex. Initially (ρ = 0), all the vertices are isolated; hence, they form a set of zero-dimensional simplices, and there are no one-dimensional simplices. When the radius becomes large enough that some pair of vertices becomes coupled, a new one-dimensional simplex appears while the zero-dimensional simplices corresponding to these vertices disappear. Such a gradual increase of the radius is called a filtration and can be presented in the form of persistence barcodes, as shown in Figure 6. Here, the parameter ρ indicates the radius of the circles surrounding each vertex, and the bars show at which values of ρ zero- and one-dimensional simplices appear and disappear.
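As a concrete illustration of this construction, the following sketch computes such barcodes from a distance matrix. It assumes the Python package ripser (the paper does not name its persistent homology software), and a random symmetric matrix stands in for the rank-ordered matrix of the 100 most active neurons:

```python
import numpy as np
from ripser import ripser  # scikit-tda ecosystem; an assumed tool choice

# Stand-in for the normalized rank-ordered matrix M of the 100 most
# active neurons: symmetric, zero diagonal, entries in [0, 1].
rng = np.random.default_rng(0)
T = np.triu(rng.random((100, 100)), k=1)
M = T + T.T

# Vietoris-Rips filtration over the distance matrix; the H0 and H1
# barcodes correspond to the top and bottom rows of Figure 6.
dgms = ripser(M, maxdim=1, distance_matrix=True)['dgms']
h0_bars, h1_bars = dgms[0], dgms[1]   # arrays of (birth, death) pairs
```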


Figure 6. Barcodes showing the persistence of zero-dimensional and one-dimensional simplices for the 100 most active neurons taken from the matrices shown in Figure 5 for four different target outputs: (A) a circle, (B) two-petal, (C) three-petal, and (D) four-petal polar roses. Bars in the top subfigures show the existence of zero-dimensional simplices, and those in the bottom subfigures indicate the birth and death of one-dimensional simplices. These persistence barcodes reflect the complex topological structure of spike trains produced by the neural networks generating particular output trajectories.

In the top subfigures, the number of initially existing zero-dimensional simplices equals the chosen number of most active neurons (100); with increasing filtration parameter ρ, the number of bars gradually decreases, finally leaving one remaining simplex corresponding to the connected component that contains all the vertices. In the bottom subfigures, the barcodes show the birth and death of one-dimensional simplices with increasing filtration parameter. Altogether, these persistence barcodes describe topological signatures of the spike trains of the most active neurons, which contribute most to generating particular target outputs.

To summarize the filtration process, the number of persisting topological invariants at particular values of ρ is plotted in the form of Betti curves, shown in Figure 7. These summarizing curves show the course of emergence and disappearance of zero-dimensional (left column) and one-dimensional (right column) simplices in the point clouds formed by the spike trains of the most active neurons. The number of zero-dimensional simplices shows a distinct, monotonically decreasing dependence on the filtration parameter. The sharpest drop is observed for the two-petal target pattern (Figure 7C), while the one-, three-, and four-hole trajectories result in a smoother decrease; the threshold value of ρ at which the drop completes, around ρ = 0.4, in turn slightly increases with the number of holes, cf. Figures 7A, E, G. The number of one-dimensional simplices shows a more intricate structure, where the maximum depends in a complex way on the features of the target output. The largest maximum relates to the two-petal polar rose and the smallest to the three-petal one, while the remaining targets lead to plots with similar maxima.


Figure 7. Betti curves for zero-dimensional (left column) and one-dimensional (right column) simplices associated with increasing filtration of the persistence barcodes shown in Figure 6 for four different target outputs: (A, B) a circle, (C, D) two-petal, (E, F) three-petal, and (G, H) four-petal polar roses. Each curve indicates the number of simplices of particular dimension with varying filtration parameter ρ.

Comparing these figures with the corresponding barcodes in Figure 6 and matrices in Figure 5, one concludes that the chosen target patterns, which have easily explainable forms in terms of topology, require spike trains characterized by topologically complex signatures. We found no direct correspondence between the Betti numbers of the generated trajectories and the simplicial complexes built upon the spike trains. However, topological analysis according to the proposed pipeline allows us to extract valuable information about the coding principles of spikes at the level of precise firing timing and about the topological relations between the spike trains of different neurons.

3 Discussion

We applied algebraic topology methods, specifically persistent homology, to characterize the geometry of spike trains produced by recurrent neural networks trained to generate two-dimensional target trajectories. We considered several easily interpreted two-dimensional closed trajectories as the target patterns for training recurrent spiking networks. Supervised learning was performed with the FORCE method, a particular framework of reservoir computing in which weight modification occurs at the output layer while recurrent connections are randomly initialized and kept fixed. In addition, the random feedback connections from the output provide an indirect low-rank perturbation to the recurrent matrix, thus creating a modified effective coupling architecture capable of producing the target patterns. The neural spike trains in the trained networks were considered as points in a metric space, where the distances between them were calculated as cost-based Victor–Purpura quantities. We rank-ordered the measured distances and chose the one hundred most active neurons, for which we performed persistent homology analysis. We plotted persistence barcodes and Betti curves, which characterize how specific topological objects in the spiking data are preserved under continuous transformation. We found a complicated relation between the topological characteristics of spike trains and those of the target patterns. The novelty of our study is that we apply persistent homology methods to spiking networks trained to autonomously generate planar output trajectories. Previously, such methods were mostly applied to neural networks performing navigation tasks and consisting of neurons that fire preferentially in particular locations of the environment (place fields), so that a one-to-one correspondence could be found between topological features of the environment and those of the spiking patterns. Our study is an attempt to establish regularities in a more general case, where the generated trajectories do not carry navigation information yet have a clear topological interpretation.

4 Methods

4.1 Spiking neural network and target outputs

We consider a recurrent spiking neural network consisting of N leaky integrate-and-fire neurons whose activity is projected onto M scalar outputs, see Figure 1. The autonomous dynamics of the spiking network are described by the following system (Nicola and Clopath, 2017):

\tau_m \frac{dv_i}{dt} = v_{\mathrm{rest}} - v_i + I_{\mathrm{bias}} + \sum_{j=1}^{N} a_{ij} r_j, \qquad (1)

where vi is the membrane potential (voltage) of the i-th neuron, τm is the time constant of the voltage relaxation, vrest is the resting voltage, Ibias is an input bias current (the default value in our numerical experiments is Ibias = 0), and aij are the weights describing the strength of the recurrent links. When the membrane potential reaches the threshold vth, the neuron generates a spike and the voltage resets to v0. During the absolute refractory period τr after spike generation, the voltage remains constant at v0, i.e., during this interval the neuron is unaffected by external stimulation.

The coupling in (1) is implemented via the double exponential synaptic filter given by the dynamics of variables ri and hi for the i-th neuron:

\frac{dr_i}{dt} = -\frac{r_i}{\tau_d} + h_i, \qquad \frac{dh_i}{dt} = -\frac{h_i}{\tau_r} + \frac{1}{\tau_d \tau_r} \sum_{t_k^{(i)} < t} \delta\!\left(t - t_k^{(i)}\right), \qquad (2)

where τr and τd are the synaptic rise and decay time constants, respectively, and t_k^(i) is the moment of generation of the k-th spike by the i-th neuron.
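For illustration, Equations (1), (2) can be integrated with a simple Euler scheme as sketched below; all parameter values here are placeholders rather than the values used in the paper:

```python
import numpy as np

# Minimal Euler-integration sketch of Equations (1), (2): leaky
# integrate-and-fire neurons with double-exponential synapses.
N, dt, steps = 500, 5e-5, 20000
tau_m = 0.01                       # membrane time constant (s)
tau_r, tau_d = 0.002, 0.02         # synaptic rise/decay constants (s)
v_rest, v_th, v0 = -65.0, -40.0, -65.0
I_bias, t_ref = 0.0, 0.002         # bias current, absolute refractory period

p, g = 0.1, 1.0                    # connection probability, coupling strength
A = (np.random.rand(N, N) < p) * np.random.randn(N, N) * g / np.sqrt(p * N)

v = v_rest + (v_th - v_rest) * np.random.rand(N)
r, h = np.zeros(N), np.zeros(N)
last_spike = np.full(N, -np.inf)

for step in range(steps):
    t = step * dt
    # Equation (1): membrane dynamics with recurrent input A @ r
    v += dt * (v_rest - v + I_bias + A @ r) / tau_m
    v[t - last_spike < t_ref] = v0           # clamp during refractoriness
    spiked = v >= v_th
    v[spiked] = v0                           # reset after a spike
    last_spike[spiked] = t
    # Equation (2): double-exponential filter; a spike kicks h by 1/(tau_r*tau_d)
    r += dt * (-r / tau_d + h)
    h += dt * (-h / tau_r) + spiked / (tau_r * tau_d)
```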

The coupling structure of the recurrent connections is described by the weight matrix A = [aij], whose elements are drawn from a Gaussian distribution with zero mean and standard deviation g(pN)^(−1/2), where p is the fraction of non-zero elements and g is the global coupling strength. The output is given by M readout units whose dynamics are determined as follows:

\hat{z}_k(t) = \sum_{i=1}^{N} w_{ki}\, r_i(t), \qquad k = 1, \ldots, M, \qquad (3)

where wki is the weight coefficient between the i-th neuron and the k-th output (resulting in the output matrix W = [wki]), and ri(t) is the neural firing rate filtered according to Equation (2).

The FORCE method requires that the output units send feedback links to the spiking neurons. Their weights are stored in the N × M matrix U composed of concatenated vectors uk (k = 1, …, M), whose elements are drawn from a uniform distribution on [−q, q], where q is the feedback coupling strength. Therefore, the complete system taking into account the recurrent and feedback links is as follows:

\tau_m \frac{dv_i}{dt} = v_{\mathrm{rest}} - v_i + I_{\mathrm{bias}} + \sum_{j=1}^{N} \Bigl(a_{ij} + \sum_{k=1}^{M} u_{ik} w_{kj}\Bigr) r_j = v_{\mathrm{rest}} - v_i + I_{\mathrm{bias}} + \sum_{j=1}^{N} \omega_{ij} r_j, \qquad (4)

where the matrix Ω = A + UW^T = [ωij] determines the effective topology shaped by the fixed recurrent and feedback links and the trained output weights.
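In code, under one consistent shape convention (an assumption, since the paper leaves the storage order of W implicit), the effective matrix is a rank-M perturbation of A:

```python
import numpy as np

# Shapes for Equation (4): feedback U is N x M and the readout weights
# are stored as an N x M matrix W, so Omega = A + U W^T is N x N.
N, M, q = 500, 2, 1.0
A = np.random.randn(N, N) / np.sqrt(N)
U = np.random.uniform(-q, q, size=(N, M))
W = np.zeros((N, M))               # trained output weights (zero before FORCE)

Omega = A + U @ W.T                # effective coupling, Equation (4)
```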

The goal of training is to modify the output weights wki in such a way that the linear readout (3) approximates the target signal: ẑk(t) ≈ zk(t). In this study, we use two-dimensional target signals in the form of closed polar figures which have a clear geometric interpretation, in order to study whether distinct features of the output geometry can be related to the hidden geometry of the spike patterns produced by (4). Namely, we illustrate our results with four target curves (5–8) whose governing equations in the (x, y)-plane are as follows: (a) a circle

x_t = R\cos(2\pi f_1 t), \qquad y_t = R\sin(2\pi f_1 t), \qquad (5)

(b) two-petal

x_t = R\sin(4\pi f_1 t)\cos(2\pi f_1 t), \qquad y_t = R\sin(4\pi f_1 t)\sin(2\pi f_1 t), \qquad \phi = 2\pi f_1 t \in [0, \pi/2] \cup [\pi, 3\pi/2], \qquad (6)

(c) three-petal,

x_t = R\sin(3\pi f_1 t)\cos(2\pi f_1 t), \qquad y_t = R\sin(3\pi f_1 t)\sin(2\pi f_1 t), \qquad (7)

and (d) four-petal

x_t = R\sin(4\pi f_1 t)\cos(2\pi f_1 t), \qquad y_t = R\sin(4\pi f_1 t)\sin(2\pi f_1 t), \qquad (8)

polar roses.
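For illustration, the four curves can be generated as in the following sketch; R, f1, and the time span are placeholder values, not those used in the experiments:

```python
import numpy as np

# Sketch of the target curves, Equations (5)-(8). The time span is
# chosen so that all curves close.
R, f1 = 1.0, 1.0
t = np.linspace(0.0, 2.0 / f1, 2000)

def circle(t):                          # Equation (5)
    return R * np.cos(2*np.pi*f1*t), R * np.sin(2*np.pi*f1*t)

def four_petal(t):                      # Equation (8)
    rho = R * np.sin(4*np.pi*f1*t)
    return rho * np.cos(2*np.pi*f1*t), rho * np.sin(2*np.pi*f1*t)

def two_petal(t):                       # Equation (6): the four-petal rose
    phi = (2*np.pi*f1*t) % (2*np.pi)    # restricted to [0, pi/2] U [pi, 3pi/2]
    keep = (phi <= np.pi/2) | ((phi >= np.pi) & (phi <= 3*np.pi/2))
    x, y = four_petal(t)
    return np.where(keep, x, np.nan), np.where(keep, y, np.nan)

def three_petal(t):                     # Equation (7): closes after two
    rho = R * np.sin(3*np.pi*f1*t)      # revolutions of the polar angle
    return rho * np.cos(2*np.pi*f1*t), rho * np.sin(2*np.pi*f1*t)
```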

4.2 FORCE training

The output weights in the matrix W are trained according to the first-order reduced and controlled error (FORCE) learning algorithm adapted to spiking neural networks, see Sussillo and Abbott (2009) and Nicola and Clopath (2017). The error e(t) between the teaching signal and the actual output is computed after each time interval Δt:

e(t) = \hat{z}(t) - z(t) = W^T(t)\, r(t) - z(t). \qquad (9)

In addition, a running estimate P of the inverse correlation matrix of the network rates is computed as follows:

P(t) = P(t - \Delta t) - \frac{P(t - \Delta t)\, r(t)\, r(t)^T P(t - \Delta t)}{1 + r(t)^T P(t - \Delta t)\, r(t)}, \qquad (10)

where the matrix P is initialized as I/α, in which I is the identity matrix and α is a learning rate parameter. Moreover, after each time period Δt, the matrix W is updated according to the following rule based on Equations (9), (10):

W(t) = W(t - \Delta t) - P(t)\, r(t)\, e(t)^T. \qquad (11)

Initially, the elements of the output matrix W are equal to zero, and after each interval Δt they change according to the adaptation rule in Equation (11). Gradually, the weight vectors wk approach stationary values. After that, the learning procedure stops, and we have a multidimensional dynamical system in the form of a complex network with fixed weights. The system trained in this supervised manner is able to autonomously generate the target closed output trajectories. The core structure of the network defined by the adjacency matrix A remains the same after learning as before learning. The trained vectors wk multiplied by the feedback vectors uk introduce a low-rank perturbation to the coupling topology, and the corresponding network activity changes dramatically. Such a structural perturbation leads to a global disturbance in the phase space of the recurrent network.
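A minimal sketch of one such update step, assuming the weights are stored as an N × M matrix so that the readout is W^T r as in Equation (9), reads:

```python
import numpy as np

# One FORCE/RLS update, Equations (9)-(11); illustrative sizes.
N, M, alpha = 500, 2, 1.0
W = np.zeros((N, M))                   # output weights, zero at the start
P = np.eye(N) / alpha                  # running inverse rate-correlation estimate

def force_step(W, P, r, z):
    """One update, applied every Delta t, given rates r (N,) and target z (M,)."""
    e = W.T @ r - z                                  # Equation (9)
    Pr = P @ r
    P = P - np.outer(Pr, Pr) / (1.0 + r @ Pr)        # Equation (10)
    W = W - np.outer(P @ r, e)                       # Equation (11): P r e^T
    return W, P

# Example step with random rates and a two-dimensional target sample:
W, P = force_step(W, P, np.random.rand(N), np.array([1.0, 0.0]))
```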

4.3 Victor–Purpura distance for spike trains

Each spike train is considered to be a point in an abstract topological space. A spike train metric is defined according to a special rule which assigns a non-negative number to pairs of spike trains and expresses how dissimilar they are (Guidolin et al., 2022). We use the variant of the spike-time VP distance parametrized by the cost quantity q in units of inverse time. To compute the VP distance, the spike trains are compared in terms of allowed elementary steps which transform one sequence of spike timings into another. The allowed steps and their associated costs are: (a) insertion of a spike with a cost of one, (b) deletion of a spike with a cost of one, and (c) shifting a spike by an amount of time t with a cost of qt. If q is very small, the metric reduces to the simple spike count distance. If q is very large, all spike trains are far apart from each other, unless they are nearly identical. For intermediate values of q, the distance between two spike trains is small if they have a similar number of spikes occurring at similar times. The motivation for this construction is that neurons acting as coincidence detectors are sensitive to exactly such a metric, with the value of q corresponding to the temporal precision 1/q of the coincidence detector. We calculate the VP distance of the described type using the scripts provided by the authors of this metric: http://www-users.med.cornell.edu/~jdvicto/metricdf.html, http://www-users.med.cornell.edu/~jdvicto/spkdm.html.
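For reference, the minimum transformation cost can be computed with the standard dynamic-programming recursion; the Python re-implementation below is for illustration only, since the original scripts linked above were used for the analysis:

```python
import numpy as np

def vp_distance(s1, s2, q=1.0):
    """Victor-Purpura spike-time distance between two spike trains
    (sorted sequences of spike times): minimum total cost of turning s1
    into s2 using insertions (cost 1), deletions (cost 1), and time
    shifts (cost q per unit time)."""
    n, m = len(s1), len(s2)
    G = np.zeros((n + 1, m + 1))
    G[:, 0] = np.arange(n + 1)         # delete all spikes of s1
    G[0, :] = np.arange(m + 1)         # insert all spikes of s2
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            G[i, j] = min(G[i - 1, j] + 1.0,
                          G[i, j - 1] + 1.0,
                          G[i - 1, j - 1] + q * abs(s1[i - 1] - s2[j - 1]))
    return G[n, m]

# Example: identical trains give 0; with q = 1, shifting one spike by
# 0.1 s would cost 0.1.
assert vp_distance([0.1, 0.5], [0.1, 0.5]) == 0.0
```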

Each matrix of VP distances D = [Dij] is transformed by rank-ordering its entries, i.e., we replace the original entries in the above-diagonal part with natural numbers 0, 1, …  in ascending order of their value (Giusti et al., 2015). The below-diagonal part of the rank-ordered matrix is obtained by the symmetric transformation of the above-diagonal part. After that, the entries are normalized by N(N − 1)/2 and the neurons are reindexed in descending order of their firing rates. Finally, we obtain the normalized rank-ordered matrix M = [Mij], which contains non-linearly transformed VP distances while leaving their relative order unchanged.
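A minimal sketch of this transformation is given below; note that it follows the convention evident in Figure 5 and Section 2.2, in which larger VP distances map to smaller normalized entries:

```python
import numpy as np

def rank_order(D):
    """Rank-order a symmetric VP distance matrix D: the largest
    above-diagonal distance receives rank 0 (so small entries mark the
    most dissimilar spike trains, as in Figure 5); ranks are normalized
    by N(N-1)/2 and the matrix is symmetrized."""
    N = D.shape[0]
    iu = np.triu_indices(N, k=1)
    order = np.argsort(-D[iu])              # largest distance first
    ranks = np.empty_like(order)
    ranks[order] = np.arange(order.size)
    M = np.zeros_like(D, dtype=float)
    M[iu] = ranks / (N * (N - 1) / 2)
    return M + M.T

# After reindexing neurons by firing rate, the persistent homology
# analysis uses the submatrix of the 100 most active neurons:
# M_core = rank_order(D)[:100, :100]
```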

4.4 Persistence barcodes and Betti curves

For the normalized rank-ordered matrices, we perform persistent homology analysis of the following form. The 100 most active neurons with their spike trains are considered as vertices in an abstract space, where the distance between the i-th and j-th neurons is given by the entry Mij.

These vertices form zero-dimensional simplices (0-simplices), and we introduce a filtration parameter ρ which defines the radius of abstract circles centered at the vertices. With increasing ρ, two vertices i and j are considered coupled if Mij ≤ ρ. The edge resulting from such a construction is a one-dimensional simplex (1-simplex). With increasing filtration parameter ρ, the numbers of 0-simplices and 1-simplices change but may persist unchanged over some intervals. This property is quantified by the so-called Betti numbers, which count the number of corresponding topological invariants at the current filtration scale. For example, the 0-th Betti number β0(ρ) gives the number of connected components, and the 1-st Betti number β1(ρ) counts the number of one-dimensional simplices (edges). How a particular simplex k emerges and disappears is reflected in the persistence barcode, which consists of bars [ρ_b^(k), ρ_d^(k)] indicating the birth ρb and death ρd values of the filtration parameter for that simplex. The Betti curves β0(ρ) and β1(ρ) summarize this information, showing how the number of simplices of the corresponding dimension varies with increasing filtration parameter.
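Given a barcode, a Betti curve simply counts the bars alive at each filtration value; a minimal sketch:

```python
import numpy as np

def betti_curve(bars, rhos):
    """Betti number at each filtration value rho: the number of bars
    [rho_b, rho_d) born at or before rho that die after it."""
    bars = np.asarray(bars, dtype=float)
    if bars.size == 0:
        return np.zeros(len(rhos), dtype=int)
    return np.array([np.sum((bars[:, 0] <= r) & (bars[:, 1] > r))
                     for r in rhos])

# Example: two H0 bars, one of which never dies (death = inf).
rhos = np.linspace(0.0, 1.0, 200)
beta0 = betti_curve([[0.0, 0.3], [0.0, np.inf]], rhos)
```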

Data availability statement

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.

Author contributions

OM: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Software, Visualization, Writing – original draft, Writing – review & editing. MP: Funding acquisition, Methodology, Project administration, Supervision, Writing – review & editing. VN: Conceptualization, Investigation, Supervision, Writing – review & editing.

Funding

The author(s) declare financial support was received for the research, authorship, and/or publication of this article. The results described in Sections 1, 2.1, 3, 4.1, 4.2 were supported by the Slovenian Research and Innovation Agency (Javna agencija za znanstvenoraziskovalno in inovacijsko dejavnost Republike Slovenije) (Grant No. P1-0403). The results reported in Sections 2.2, 2.3, 4.3, 4.4 were supported by the Russian Science Foundation, project 23-72-10088, https://rscf.ru/en/project/23-72-10088/.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

The author(s) declared that they were an editorial board member of Frontiers at the time of submission. This had no impact on the peer review process and the final decision.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Abbott, L. F., DePasquale, B., and Memmesheimer, R.-M. (2016). Building functional networks of spiking model neurons. Nat. Neurosci. 19, 350. doi: 10.1038/nn.4241


Amunts, K., DeFelipe, J., Pennartz, C., Destexhe, A., Migliore, M., Ryvlin, P., et al. (2022). Linking brain structure, activity, and cognitive function through computation. eNeuro 9, ENEURO.0316-21.2022. doi: 10.1523/ENEURO.0316-21.2022


Angelidis, E., Buchholz, E., Arreguit, J., Rougé, A., Stewart, T., von Arnim, A., et al. (2021). A spiking central pattern generator for the control of a simulated lamprey robot running on spinnaker and loihi neuromorphic boards. Neuromor. Comp. Eng. 1, 014005. doi: 10.1088/2634-4386/ac1b76


Auge, D., Hille, J., Mueller, E., and Knoll, A. (2021). A survey of encoding techniques for signal processing in spiking neural networks. Neural Proc. Lett. 53, 4693–4710. doi: 10.1007/s11063-021-10562-2


Barak, O. (2017). Recurrent neural networks as versatile tools of neuroscience research. Curr. Opin. Neurobiol. 46:1–6. doi: 10.1016/j.conb.2017.06.003


Bardin, J.-B., Spreemann, G., and Hess, K. (2019). Topological exploration of artificial neuronal network dynamics. Netw. Neurosci. 3, 725–743. doi: 10.1162/netn_a_00080


Bellec, G., Scherr, F., Subramoney, A., Hajek, E., Salaj, D., Legenstein, R., et al. (2020). A solution to the learning dilemma for recurrent networks of spiking neurons. Nat. Commun. 11, 3625. doi: 10.1038/s41467-020-17236-y


Billings, J., Saggar, M., Hlinka, J., Keilholz, S., and Petri, G. (2021). Simplicial and topological descriptions of human brain dynamics. Netw. Neurosci. 5, 549–568. doi: 10.1101/2020.09.06.285130


Bing, Z., Meschede, C., Röhrbein, F., Huang, K., and Knoll, A. C. (2018). A survey of robotics control based on learning-inspired spiking neural networks. Front. Neurorobot. 12, 35. doi: 10.3389/fnbot.2018.00035


Chaisangmongkon, W., Swaminathan, S. K., Freedman, D. J., and Wang, X.-J. (2017). Computing by robust transience: how the fronto-parietal network performs sequential, category-based decisions. Neuron 93, 1504–1517. doi: 10.1016/j.neuron.2017.03.002


Cichy, R. M., and Kaiser, D. (2019). Deep neural networks as scientific models. Trends Cogn. Sci. 23, 305–317. doi: 10.1016/j.tics.2019.01.009


Cimeša, L., Ciric, L., and Ostojic, S. (2023). Geometry of population activity in spiking networks with low-rank structure. PLoS Comput. Biol. 19, e1011315. doi: 10.1371/journal.pcbi.1011315


Curto, C. (2017). What can topology tell us about the neural code? Bull. New Ser. Am. Math. Soc. 54, 63–78. doi: 10.1090/bull/1554


Dabaghian, Y., Mémoli, F., Frank, L., and Carlsson, G. (2012). A topological paradigm for hippocampal spatial map formation using persistent homology. PLoS Comput. Biol. 8, e1002581. doi: 10.1371/journal.pcbi.1002581


Demin, V., and Nekhaev, D. (2018). Recurrent spiking neural network learning based on a competitive maximization of neuronal activity. Front. Neuroinform. 12, 79. doi: 10.3389/fninf.2018.00079


DePasquale, B., Sussillo, D., Abbott, L., and Churchland, M. M. (2023). The centrality of population-level factors to network computation is demonstrated by a versatile approach for training spiking networks. Neuron 111, 631–649. doi: 10.1016/j.neuron.2022.12.007


Dora, S., and Kasabov, N. (2021). Spiking neural networks for computational intelligence: an overview. Big Data Cognit. Comp. 5, 67. doi: 10.3390/bdcc5040067


Dubreuil, A., Valente, A., Beiran, M., Mastrogiuseppe, F., and Ostojic, S. (2022). The role of population structure in computations through neural dynamics. Nat. Neurosci. 25, 783–794. doi: 10.1038/s41593-022-01088-4


Eliasmith, C., Stewart, T. C., Choo, X., Bekolay, T., DeWolf, T., Tang, Y., et al. (2012). A large-scale model of the functioning brain. Science 338, 1202–1205. doi: 10.1126/science.1225266


Gilra, A., and Gerstner, W. (2017). Predicting non-linear dynamics by stable local learning in a recurrent spiking neural network. Elife 6, e28295. doi: 10.7554/eLife.28295


Giusti, C., Pastalkova, E., Curto, C., and Itskov, V. (2015). Clique topology reveals intrinsic geometric structure in neural correlations. Proc. Nat. Acad. Sci. 112, 13455–13460. doi: 10.1073/pnas.1506407112


Guidolin, A., Desroches, M., Victor, J. D., Purpura, K. P., and Rodrigues, S. (2022). Geometry of spiking patterns in early visual cortex: a topological data analytic approach. J. Royal Soc. Interf. 19, 20220677. doi: 10.1098/rsif.2022.0677


Hassabis, D., Kumaran, D., Summerfield, C., and Botvinick, M. (2017). Neuroscience-inspired artificial intelligence. Neuron 95, 245–258. doi: 10.1016/j.neuron.2017.06.011


Kim, R., Li, Y., and Sejnowski, T. J. (2019). Simple framework for constructing functional spiking recurrent neural networks. Proc. Nat. Acad. Sci. 116, 22811–22820. doi: 10.1073/pnas.1905926116


Liu, Z., Han, F., and Wang, Q. (2022a). A review of computational models for gamma oscillation dynamics: from spiking neurons to neural masses. Nonlinear Dyn. 108, 1849–1866. doi: 10.1007/s11071-022-07298-6


Liu, Z., Yu, Y., and Wang, Q. (2022b). Functional modular organization unfolded by chimera-like dynamics in a large-scale brain network model. Science China Technol. Sci. 65, 1435–1444. doi: 10.1007/s11431-022-2025-0


Lobo, J. L., Del Ser, J., Bifet, A., and Kasabov, N. (2020). Spiking neural networks and online learning: an overview and perspectives. Neural Netw. 121, 88–100. doi: 10.1016/j.neunet.2019.09.004


Lobov, S. A., Mikhaylov, A. N., Shamshin, M., Makarov, V. A., and Kazantsev, V. B. (2020). Spatial properties of stdp in a self-learning spiking neural network enable controlling a mobile robot. Front. Neurosci. 14, 88. doi: 10.3389/fnins.2020.00088


Lobov, S. A., Zharinov, A. I., Makarov, V. A., and Kazantsev, V. B. (2021). Spatial memory in a spiking neural network with robot embodiment. Sensors 21, 2678. doi: 10.3390/s21082678


Lukoševičius, M., and Jaeger, H. (2009). Reservoir computing approaches to recurrent neural network training. Comp. Sci. Rev. 3, 127–149. doi: 10.1016/j.cosrev.2009.03.005


Maass, W., Natschläger, T., and Markram, H. (2002). Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Comput. 14, 2531–2560. doi: 10.1162/089976602760407955


Maletić, S., Zhao, Y., and Rajković, M. (2016). Persistent topological features of dynamical systems. Chaos 26, 053105. doi: 10.1063/1.4949472


Mante, V., Sussillo, D., Shenoy, K. V., and Newsome, W. T. (2013). Context-dependent computation by recurrent dynamics in prefrontal cortex. Nature 503, 78–84. doi: 10.1038/nature12742


Marblestone, A. H., Wayne, G., and Kording, K. P. (2016). Toward an integration of deep learning and neuroscience. Front. Comput. Neurosci. 10, 94. doi: 10.3389/fncom.2016.00094


Maslennikov, O. V. (2021). Dynamics of an artificial recurrent neural network for the problem of modeling a cognitive function. Izvestiya VUZ. Appl. Nonlin. Dynam. 29, 799–811. doi: 10.18500/0869-6632-2021-29-5-799-811


Maslennikov, O. V., and Nekorkin, V. I. (2019). Collective dynamics of rate neurons for supervised learning in a reservoir computing system. Chaos 29, 103126. doi: 10.1063/1.5119895


Maslennikov, O. V., and Nekorkin, V. I. (2020). Stimulus-induced sequential activity in supervisely trained recurrent networks of firing rate neurons. Nonlinear Dyn. 101, 1093–1103. doi: 10.1007/s11071-020-05787-0


Maslennikov, O. V., Pugavko, M. M., Shchapin, D. S., Nekorkin, V. I., et al. (2022). Nonlinear dynamics and machine learning of recurrent spiking neural networks. Physics-Uspekhi 65, 10. doi: 10.3367/UFNe.2021.08.039042


Muratore, P., Capone, C., and Paolucci, P. S. (2021). Target spike patterns enable efficient and biologically plausible learning for complex temporal tasks. PLoS ONE 16, e0247014. doi: 10.1371/journal.pone.0247014


Myers, A., Munch, E., and Khasawneh, F. A. (2019). Persistent homology of complex networks for dynamic state detection. Phys. Rev. E. 100, 022314. doi: 10.1103/PhysRevE.100.022314


Naitzat, G., Zhitnikov, A., and Lim, L.-H. (2020). Topology of deep neural networks. J. Mach. Learn. Res. 21, 1–40. doi: 10.5555/3455716.3455900


Neftci, E. O., Mostafa, H., and Zenke, F. (2019). Surrogate gradient learning in spiking neural networks: Bringing the power of gradient-based optimization to spiking neural networks. IEEE Signal Process. Mag. 36, 51–63. doi: 10.1109/MSP.2019.2931595


Nicola, W., and Clopath, C. (2017). Supervised learning in spiking neural networks with force training. Nat. Commun. 8, 2208. doi: 10.1038/s41467-017-01827-3


Petri, G., Expert, P., Turkheimer, F., Carhart-Harris, R., Nutt, D., Hellyer, P. J., et al. (2014). Homological scaffolds of brain functional networks. J. Royal Soc. Interf. 11, 20140873. doi: 10.1098/rsif.2014.0873


Pugavko, M. M., Maslennikov, O. V., and Nekorkin, V. I. (2020). Dynamics of spiking map-based neural networks in problems of supervised learning. Commun. Nonlinear Sci. Numer. Simulat. 90, 105399. doi: 10.1016/j.cnsns.2020.105399


Pugavko, M. M., Maslennikov, O. V., and Nekorkin, V. I. (2023). Multitask computation through dynamics in recurrent spiking neural networks. Sci. Rep. 13, 3997. doi: 10.1038/s41598-023-31110-z


Ramezanian-Panahi, M., Abrevaya, G., Gagnon-Audet, J.-C., Voleti, V., Rish, I., and Dumas, G. (2022). Generative models of brain dynamics. Front. Artif. Intellig. 147, 807406. doi: 10.3389/frai.2022.807406


Salnikov, V., Cassese, D., and Lambiotte, R. (2018). Simplicial complexes and complex systems. Eur. J. Phys. 40, 014001. doi: 10.1088/1361-6404/aae790


Santos, F. A., Raposo, E. P., Coutinho-Filho, M. D., Copelli, M., Stam, C. J., and Douw, L. (2019). Topological phase transitions in functional brain networks. Phys. Rev. E 100, 032414. doi: 10.1103/PhysRevE.100.032414


Schuman, C. D., Kulkarni, S. R., Parsa, M., Mitchell, J. P., Kay, B., et al. (2022). Opportunities for neuromorphic computing algorithms and applications. Nat. Comp. Sci. 2, 10–19. doi: 10.1038/s43588-021-00184-y


Sizemore, A. E., Phillips-Cremins, J. E., Ghrist, R., and Bassett, D. S. (2019). The importance of the whole: topological data analysis for the network neuroscientist. Netw. Neurosci. 3, 656–673. doi: 10.1162/netn_a_00073


Stolz, B. J., Harrington, H. A., and Porter, M. A. (2017). Persistent homology of time-dependent functional networks constructed from coupled time series. Chaos 27, 047410. doi: 10.1063/1.4978997


Sussillo, D. (2014). Neural circuits as computational dynamical systems. Curr. Opin. Neurobiol. 25, 156–163. doi: 10.1016/j.conb.2014.01.008


Sussillo, D., and Abbott, L. F. (2009). Generating coherent patterns of activity from chaotic neural networks. Neuron 63, 544–557. doi: 10.1016/j.neuron.2009.07.018


Sussillo, D., and Barak, O. (2013). Opening the black box: low-dimensional dynamics in high-dimensional recurrent neural networks. Neural Comput. 25, 626–649. doi: 10.1162/NECO_a_00409


Tavanaei, A., Ghodrati, M., Kheradpisheh, S. R., Masquelier, T., and Maida, A. (2019). Deep learning in spiking neural networks. Neural Netw. 111, 47–63. doi: 10.1016/j.neunet.2018.12.002


Vyas, S., Golub, M. D., Sussillo, D., and Shenoy, K. V. (2020). Computation through neural population dynamics. Annu. Rev. Neurosci. 43, 249–275. doi: 10.1146/annurev-neuro-092619-094115


Yamazaki, K., Vo-Ho, V.-K., Bulsara, D., and Le, N. (2022). Spiking neural networks and their applications: a review. Brain Sci. 12, 863. doi: 10.3390/brainsci12070863


Yang, G. R., and Wang, X.-J. (2020). Artificial neural networks for neuroscientists: a primer. Neuron 107, 1048–1070. doi: 10.1016/j.neuron.2020.09.005


Yu, Y., Fan, Y., Han, F., Luan, G., and Wang, Q. (2023a). Transcranial direct current stimulation inhibits epileptic activity propagation in a large-scale brain network model. Sci. China Technol. Sci. 66, 3628–3638. doi: 10.1007/s11431-022-2341-x


Yu, Y., Han, F., and Wang, Q. (2023b). A hippocampal-entorhinal cortex neuronal network for dynamical mechanisms of epileptic seizure. IEEE Trans. Neural Syst. Rehabil. Eng. 31, 1986–1996. doi: 10.1109/TNSRE.2023.3265581


Zeng, Y., Zhao, D., Zhao, F., Shen, G., Dong, Y., Lu, E., et al. (2023). Braincog: a spiking neural network based, brain-inspired cognitive intelligence engine for brain-inspired ai and brain simulation. Patterns 4, 100789. doi: 10.1016/j.patter.2023.100789


Keywords: spiking neural network, target spatiotemporal pattern, supervised learning, reservoir computing, spike metrics, persistent homology

Citation: Maslennikov O, Perc M and Nekorkin V (2024) Topological features of spike trains in recurrent spiking neural networks that are trained to generate spatiotemporal patterns. Front. Comput. Neurosci. 18:1363514. doi: 10.3389/fncom.2024.1363514

Received: 30 December 2023; Accepted: 06 February 2024;
Published: 23 February 2024.

Edited by:

Antonio Batista, Universidade Estadual de Ponta Grossa, Brazil

Reviewed by:

Qingyun Wang, Beihang University, China
Hung Nguyen-Xuan, Ho Chi Minh City University of Technology, Vietnam

Copyright © 2024 Maslennikov, Perc and Nekorkin. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Oleg Maslennikov, olmaov@ipfran.ru
