Computational neuroscience is an interdisciplinary field in which modelling and analysis tools derived from mathematics, physics and computer science are used to investigate how the nervous system processes information. It relies mostly on the development, simulation, and analysis of multi-scale models of brain function, from the level of molecules through single neurons and neuronal networks up to cognition and behaviour. The analysis of real electrophysiological data recorded from various brain regions on different temporal and spatial scales is used to validate the models.
Our computational neuroscience work in Florence follows two main lines of research: theoretical analysis of neuronal networks and data analysis. The first line relies mostly on the modelling and simulation of neuronal networks using simple models (such as integrate-and-fire and Hodgkin-Huxley models for single neurons, or Wilson-Cowan models at the population level). Currently, particular attention is devoted to research on hub cells, i.e. neurons able to strongly impact and control the network dynamics. The second line deals with the analysis of electrophysiological recordings such as EEG and neuronal spike trains. Here a recent focus of interest is neuronal population coding, i.e., the study of how the sensory world is represented in the action potentials of neuronal networks in the brain.
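As a minimal illustration of the kind of simple single-neuron model mentioned above, the following sketch integrates a leaky integrate-and-fire neuron driven by a constant suprathreshold current. All parameter values are illustrative (dimensionless units) and not taken from any specific study.

```python
import numpy as np

def simulate_lif(i_ext=1.5, tau=20.0, v_th=1.0, v_reset=0.0,
                 dt=0.1, t_max=200.0):
    """Euler integration of a leaky integrate-and-fire neuron.
    Parameters are illustrative; i_ext > v_th gives tonic firing."""
    n_steps = int(t_max / dt)
    v = v_reset
    spike_times = []
    for step in range(n_steps):
        # membrane equation: tau * dv/dt = -v + I_ext
        v += dt / tau * (-v + i_ext)
        if v >= v_th:                  # threshold crossing: spike and reset
            spike_times.append(step * dt)
            v = v_reset
    return np.array(spike_times)

spikes = simulate_lif()
print(f"{len(spikes)} spikes, mean ISI = {np.diff(spikes).mean():.2f} ms")
```

For a suprathreshold constant current the interspike interval is constant and close to the analytic value tau * ln(I / (I - v_th)), up to the Euler discretization error.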
Moritz Gerster, Halgurd Taher, Antonín Škoch, Jaroslav Hlinka, Maxime Guye, Fabrice Bartolomei, Viktor Jirsa, Anna Zakharova, and Simona Olmi
Frontiers in Systems Neuroscience 79 (2021)
Dynamics underlying epileptic seizures span multiple scales in space and time; therefore, understanding seizure mechanisms requires identifying the relations between seizure components within and across these scales, together with the analysis of their dynamical repertoire. In this view, mathematical models have been developed, ranging from the single-neuron to the neural-population level. In this study, we consider a neural mass model able to exactly reproduce the dynamics of heterogeneous spiking neural networks. We combine mathematical modeling with structural information from non-invasive brain imaging, thus building large-scale brain network models to explore emergent dynamics and test clinical hypotheses. We provide a comprehensive study of the effect of external drives on neuronal networks exhibiting multistability, in order to investigate the role played by the neuroanatomical connectivity matrices in shaping the emergent dynamics. In particular, we systematically investigate the conditions under which the network displays a transition from a low-activity regime to a high-activity state, which we identify with a seizure-like event. This approach allows us to study the biophysical parameters and variables leading to multiple recruitment events at the network level. We further exploit topological network measures in order to explain the differences and the analogies among the subjects, and among their brain regions, in showing recruitment events at different parameter values. We demonstrate, using diffusion-weighted magnetic resonance imaging (dMRI) connectomes of 20 healthy subjects and 15 epileptic patients as an example, that individual variations in structural connectivity, when linked with mathematical dynamic models, have the capacity to explain changes in the spatiotemporal organization of brain dynamics, as observed in network-based brain disorders.
In particular, for epileptic patients, by integrating the clinical hypotheses on the epileptogenic zone (EZ), i.e., the local network where highly synchronous seizures originate, we have identified the sequence of recruitment events and discussed their links with the topological properties of the specific connectomes. The predictions made on the basis of the implemented set of exact mean-field equations turn out to be in line with the clinical pre-surgical evaluation of the recruited secondary networks.
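The drive-induced transition from a low- to a high-activity state described above can be illustrated with a toy bistable rate unit. This is not the neural mass model of the paper, only a minimal Wilson-Cowan-type stand-in with illustrative parameters: with no external drive the unit settles into the low-activity state, while a sufficiently strong drive recruits it into the high-activity state.

```python
import numpy as np

def rate_model(i_ext, w=8.0, theta=3.0, r0=0.0, dt=0.01, t_max=50.0):
    """Single bistable rate unit: dr/dt = -r + f(w*r + I - theta),
    a toy stand-in for the low-/high-activity multistability."""
    f = lambda x: 1.0 / (1.0 + np.exp(-x))   # sigmoid transfer function
    r = r0
    for _ in range(int(t_max / dt)):
        r += dt * (-r + f(w * r + i_ext - theta))
    return r

# weak drive leaves the unit in the low-activity state,
# a sufficiently strong drive recruits it into the high state
low = rate_model(i_ext=0.0)
high = rate_model(i_ext=2.0)
print(f"low-drive fixed point  r = {low:.3f}")
print(f"high-drive fixed point r = {high:.3f}")
```

In the full brain network model the effective drive a region receives depends on the connectome, which is why recruitment thresholds differ across subjects and regions.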
Cortical propagation tracks functional recovery after stroke
Gloria Cecchini, Alessandro Scaglione, Anna Letizia Allegra Mascaro, Curzio Checcucci, Emilia Conti, Ihusan Adam, Duccio Fanelli, Roberto Livi, Francesco Saverio Pavone, Thomas Kreuz
Stroke is a debilitating condition affecting millions of people worldwide. The development of improved rehabilitation therapies rests on finding biomarkers suitable for tracking functional damage and recovery. To achieve this goal, we perform a spatiotemporal analysis of cortical activity obtained by wide-field calcium images in mice before and after stroke. We compare spontaneous recovery with three different post-stroke rehabilitation paradigms: motor training alone, pharmacological contralesional inactivation, and both combined. We identify three novel indicators that are able to track how movement-evoked global activation patterns are impaired by stroke and evolve during rehabilitation: the duration, the smoothness, and the angle of individual propagation events. Results show that, compared to pre-stroke conditions, propagation of cortical activity in the subacute phase right after stroke is slowed down and more irregular. When comparing rehabilitation paradigms, we find that mice treated with both motor training and pharmacological intervention, the only group associated with generalized recovery, manifest new propagation patterns that are even faster and smoother than before the stroke. In conclusion, our new spatiotemporal propagation indicators could represent promising biomarkers that are able to uncover neural correlates not only of motor deficits caused by stroke but also of functional recovery during rehabilitation. In turn, these insights could pave the way towards more targeted post-stroke therapies.
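To make the three indicators concrete, here is a simplified sketch computing duration, smoothness and angle from the trajectory of an activation centroid. The actual analysis in the paper operates on wide-field calcium imaging movies; the trajectory below is a hypothetical toy event.

```python
import numpy as np

def propagation_indicators(times, xs, ys):
    """Duration, smoothness and angle of one propagation event, given the
    trajectory of its activation centroid (a simplified reading of the
    three indicators: smoothness = net displacement / path length)."""
    dx, dy = np.diff(xs), np.diff(ys)
    path_length = np.sum(np.hypot(dx, dy))
    net = np.hypot(xs[-1] - xs[0], ys[-1] - ys[0])
    duration = times[-1] - times[0]
    smoothness = net / path_length        # 1 = perfectly straight propagation
    angle = np.degrees(np.arctan2(ys[-1] - ys[0], xs[-1] - xs[0]))
    return duration, smoothness, angle

# toy event: centroid moves 1 mm along a straight line in 100 ms
t = np.linspace(0.0, 0.1, 11)
x = np.linspace(0.0, 1.0, 11)
y = np.zeros(11)
d, s, a = propagation_indicators(t, x, y)
print(f"duration = {d*1e3:.0f} ms, smoothness = {s:.2f}, angle = {a:.0f} deg")
```

A slowed, irregular post-stroke event would show a longer duration and a smoothness well below 1.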
Emre Baspinar, Leonhard Schülen, Simona Olmi, and Anna Zakharova
Phys. Rev. E 103, 032308 (2021)
The counterintuitive phenomenon of coherence resonance describes a nonmonotonic behavior of the regularity of noise-induced oscillations in the excitable regime, leading to an optimal response in terms of regularity of the excited oscillations for an intermediate noise intensity. We study this phenomenon in populations of FitzHugh-Nagumo (FHN) neurons with different coupling architectures. For networks of FHN systems in an excitable regime, coherence resonance has been previously analyzed numerically. Here we focus on an analytical approach studying the mean-field limits of the globally and locally coupled populations. The mean-field limit refers to an averaged behavior of a complex network as the number of elements goes to infinity. We apply the mean-field approach to the globally coupled FHN network. Further, we derive a mean-field limit approximating the locally coupled FHN network with low noise intensities. We study the effects of the coupling strength and noise intensity on coherence resonance for both the network and the mean-field models. We compare the results of the mean-field and network frameworks and find good agreement in the globally coupled case, where the correspondence between the two approaches is sufficiently good to capture the emergence of coherence resonance, as well as of anticoherence resonance.
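A common way to quantify coherence resonance numerically is the ISI jitter R = std(ISI)/mean(ISI), which dips at an intermediate noise intensity. The sketch below simulates a single excitable FitzHugh-Nagumo unit with noise on the slow variable and computes R for one noise level; scanning over noise values would reveal the nonmonotonic behavior. Parameter values are illustrative, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def fhn_isis(noise, a=1.05, eps=0.05, dt=1e-3, t_max=300.0):
    """Euler-Maruyama simulation of one excitable FitzHugh-Nagumo unit
    with noise on the slow variable; returns the interspike intervals
    of the noise-induced spikes."""
    n = int(t_max / dt)
    xi = np.sqrt(2.0 * noise * dt) * rng.standard_normal(n)
    u, v = -a, -a + a**3 / 3.0            # deterministic rest state
    spikes, above = [], False
    for k in range(n):
        u += dt * (u - u**3 / 3.0 - v) / eps
        v += dt * (u + a) + xi[k]
        if u > 0.5 and not above:          # upward threshold crossing = spike
            spikes.append(k * dt)
            above = True
        elif u < 0.0:
            above = False
    return np.diff(spikes)

isi = fhn_isis(noise=0.05)
jitter = isi.std() / isi.mean()            # R: small = coherent oscillations
print(f"{len(isi) + 1} spikes, ISI jitter R = {jitter:.2f}")
```

In the mean-field limit studied in the paper, the same regularity measure is evaluated on the averaged population dynamics instead of a single unit.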
Marco Segneri, Hongjie Bi, Simona Olmi, Alessandro Torcini
Front. Comput. Neurosci. 14:47 (2020)
Theta-nested gamma oscillations have been reported in many areas of the brain and are believed to represent a fundamental mechanism for transferring information across spatial and temporal scales. In a series of recent experiments in vitro it has been possible to replicate, with an optogenetic theta-frequency stimulation, several features of the cross-frequency coupling (CFC) between theta and gamma rhythms observed in behaving animals. In order to reproduce the main findings of these experiments, we have considered a new class of neural mass models able to reproduce exactly the macroscopic dynamics of spiking neural networks. In this framework, we have examined two set-ups able to support collective gamma oscillations: namely, the pyramidal interneuronal network gamma (PING) and the interneuronal network gamma (ING) set-ups. In both cases we observe the emergence of theta-nested gamma oscillations by driving the system with a sinusoidal theta-forcing in proximity to a Hopf bifurcation. These mixed rhythms always display phase-amplitude coupling. However, two different types of nested oscillations can be identified: one characterized by a perfect phase locking between theta and gamma rhythms, corresponding to an overall periodic behavior; another one where the locking is imperfect and the dynamics is quasi-periodic or even chaotic. From our analysis it emerges that the locked states are more frequent in the ING set-up. In agreement with the experiments, we find theta-nested gamma oscillations for forcing frequencies in the range [1, 10] Hz, whose amplitudes grow proportionally to the forcing intensity and which are clearly modulated by the theta phase. Furthermore, analogously to the experiments, the gamma power and the frequency of the gamma-power peak increase with the forcing amplitude. At variance with the experimental findings, the gamma-power peak does not shift to higher frequencies when increasing the theta frequency.
In our model, this effect can be obtained only by simultaneously increasing the stimulation power, which is achieved by increasing the amplitude of either the noise or the forcing term proportionally to the theta frequency. On the basis of our analysis, both the PING and the ING mechanisms give rise to theta-nested gamma oscillations with almost identical features.
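The phase-amplitude coupling discussed above is commonly quantified by the mean-vector-length (MVL) index: the gamma envelope is averaged as a complex vector over the theta phase. The sketch below applies this measure to a synthetic theta-nested gamma signal (frequencies and modulation depth are illustrative), using an FFT-based analytic signal so that only NumPy is needed.

```python
import numpy as np

fs = 1000.0                              # sampling rate (Hz)
t = np.arange(0.0, 2.0, 1.0 / fs)        # 2 s = 10 full theta cycles
f_theta, f_gamma = 5.0, 50.0

def analytic(x):
    """Analytic signal via FFT (same construction as scipy.signal.hilbert)."""
    n, X = len(x), np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(X * h)

def pac_mvl(theta_sig, gamma_sig):
    """Mean-vector-length phase-amplitude coupling:
    |<A_gamma * exp(i*phi_theta)>| / <A_gamma>."""
    phi = np.angle(analytic(theta_sig))
    amp = np.abs(analytic(gamma_sig))
    return np.abs(np.mean(amp * np.exp(1j * phi))) / np.mean(amp)

theta = np.sin(2 * np.pi * f_theta * t)
envelope = 1.0 + 0.8 * np.cos(2 * np.pi * f_theta * t)  # gamma amplitude locked to theta
gamma_nested = envelope * np.sin(2 * np.pi * f_gamma * t)
gamma_flat = np.sin(2 * np.pi * f_gamma * t)

pac_coupled = pac_mvl(theta, gamma_nested)
pac_flat = pac_mvl(theta, gamma_flat)
print(f"PAC (theta-nested gamma): {pac_coupled:.3f}")
print(f"PAC (unmodulated gamma):  {pac_flat:.3f}")
```

For real recordings the two signals would first be band-pass filtered around the theta and gamma bands before extracting phase and envelope.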
Inferring network structure and local dynamics from neuronal patterns with quenched disorder
Ihusan Adam, Gloria Cecchini, Duccio Fanelli, Thomas Kreuz, Roberto Livi, Matteo di Volo, Anna Letizia Allegra Mascaro, Emilia Conti, Alessandro Scaglione, Ludovico Silvestri, Francesco Saverio Pavone
An inverse procedure is proposed and tested which aims at recovering the a priori unknown functional and structural information from global signals of living brain activity. To this end, we consider a leaky integrate-and-fire (LIF) model with short-term plasticity, with neurons coupled via a directed network. Neurons are assigned a specific current value, which is heterogeneous across the sample and sets the firing regime in which the neuron operates. The aim of the method is to recover the distribution of incoming network degrees, as well as the distribution of the assigned currents, from global field measurements. The proposed approach to the inverse problem implements the reductionist heterogeneous mean-field approximation. This amounts in turn to organizing the neurons into different classes, depending on their associated degree and current. When tested against synthetic data, the method returns accurate estimates of the sought distributions, while managing to reproduce and interpolate almost exactly the time series of the supplied global field. Finally, we also applied the proposed technique to longitudinal wide-field fluorescence microscopy data of cortical functionality in awake Thy1-GCaMP6f mice. Mice underwent a photothrombotic stroke induced in the primary motor cortex, and their recovery was monitored over time. An all-to-all LIF model which accommodates current heterogeneity adequately explains the recorded patterns of activation. In particular, altered distributions of neuron excitability are detected, compatible with the phenomenon of hyperexcitability in the penumbra region after stroke.
A. Ceni, S. Olmi, A. Torcini, D. Angulo-Garcia
Chaos 30, 053121 (2020)
Coupling among neural rhythms is one of the most important mechanisms at the basis of cognitive processes in the brain. In this study, we consider a neural mass model, rigorously obtained from the microscopic dynamics of an inhibitory spiking network with exponential synapses, able to autonomously generate collective oscillations (COs). These oscillations emerge via a super-critical Hopf bifurcation, and their frequencies are controlled by the synaptic time scale, the synaptic coupling, and the excitability of the neural population. Furthermore, we show that two inhibitory populations in a master–slave configuration with different synaptic time scales can display various collective dynamical regimes: damped oscillations toward a stable focus, periodic and quasi-periodic oscillations, and chaos. Finally, when bidirectionally coupled, the two inhibitory populations can exhibit different types of θ–γ cross-frequency couplings (CFCs): phase-phase and phase-amplitude CFC. The coupling between θ and γ COs is enhanced in the presence of an external θ forcing, reminiscent of the type of modulation induced in hippocampal and cortex circuits via optogenetic drive.
Under healthy conditions, the brain’s activity consists of a series of intermingled oscillations, generated by large ensembles of neurons, which provide a functional substrate for information processing. Understanding how single neuron properties influence neuronal population dynamics could help in the comprehension of the collective behaviors emerging during cognitive processes. Here, we consider a neural mass model, which reproduces exactly the macroscopic activity of a network of spiking Quadratic Integrate-and-Fire (QIF) neurons. This mean-field model is employed to shed some light on an important and pervasive neural mechanism underlying information processing in the brain: the θ–γ cross-frequency coupling. In particular, we will explore in detail the conditions under which two coupled inhibitory neural populations, characterized by slow and fast synaptic kinetics, can generate these functionally relevant coupled rhythms.
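The exact QIF mean-field reduction mentioned above leads to closed equations for the population firing rate r and mean membrane potential v (Montbrió-Pazó-Roxin form), here coupled to an exponential synapse as in the inhibitory set-up. The sketch below integrates these equations with illustrative parameter values that are not taken from the paper, so the trajectory may show damped rather than sustained oscillations depending on the chosen constants.

```python
import numpy as np

# illustrative parameters: Lorentzian width, mean excitability,
# inhibitory coupling strength, synaptic time constant
delta, eta, J, tau_s = 0.3, 5.0, -15.0, 1.0
dt, t_max = 5e-4, 50.0
n = int(t_max / dt)

r, v, s = 0.1, -1.0, 0.0
r_trace = np.empty(n)
for k in range(n):
    # exact mean field of a heterogeneous QIF population (MPR-type equations)
    dr = delta / np.pi + 2.0 * r * v
    dv = v * v + eta + J * s - (np.pi * r) ** 2
    ds = (-s + r) / tau_s          # exponential synapse filtering the rate
    r += dt * dr
    v += dt * dv
    s += dt * ds
    r_trace[k] = r

print(f"final rate r = {r:.3f}, membrane potential v = {v:.3f}")
```

Note that r stays strictly positive along the trajectory, as guaranteed by the Delta/pi term in the rate equation.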
Halgurd Taher, Alessandro Torcini, Simona Olmi
PLoS Comput Biol 16(12): e1008533 (2020)
A synaptic theory of Working Memory (WM) has been developed in the last decade as a possible alternative to the persistent spiking paradigm. In this context, we have developed a neural mass model able to reproduce exactly the dynamics of heterogeneous spiking neural networks encompassing realistic cellular mechanisms for short-term synaptic plasticity. This population model reproduces the macroscopic dynamics of the network in terms of the firing rate and the mean membrane potential. The latter quantity allows us to gain insight into the local field potential and electroencephalographic signals measured during WM tasks to characterize the brain activity. More specifically, synaptic facilitation and depression complement each other to efficiently mimic WM operations via either synaptic reactivation or persistent activity. Memory access and loading are related to stimulus-locked transient oscillations followed by a steady-state activity in the β-γ band, thus resembling what is observed in the cortex during vibrotactile stimuli in humans and object recognition in monkeys. Memory juggling and competition emerge already when loading only two items. However, more items can be stored in WM by considering neural architectures composed of multiple excitatory populations and a common inhibitory pool. Memory capacity depends strongly on the presentation rate of the items and is maximal within an optimal frequency range. In particular, we provide an analytic expression for the maximal memory capacity. Furthermore, the mean membrane potential turns out to be a suitable proxy for the memory load, analogously to event-related potentials in experiments on humans. Finally, we show that the γ power increases with the number of loaded items, as reported in many experiments, while θ and β power reveal non-monotonic behaviours. In particular, β and γ rhythms are crucially sustained by the inhibitory activity, while the θ rhythm is controlled by excitatory synapses.
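The short-term plasticity mechanism underlying this synaptic WM theory can be sketched with the standard Tsodyks-Markram synapse model: presenting an item as a spike burst leaves the facilitation variable u elevated above baseline long after the stimulus ends, a "silent" memory trace carried by the synapse rather than by persistent spiking. Parameter values below are typical textbook values, not necessarily those used in the paper.

```python
import numpy as np

def stp_after_item(rate=50.0, burst_dur=0.5, delay=1.0,
                   U=0.2, tau_f=1.5, tau_d=0.2):
    """Tsodyks-Markram synapse driven by one item presentation (a spike
    burst), read out after a silent delay. u = facilitation, x = available
    resources (depression); event-driven exact update between spikes."""
    spike_times = np.arange(0.0, burst_dur, 1.0 / rate)
    u, x, t_prev = U, 1.0, 0.0
    for t in spike_times:
        dt = t - t_prev
        u = U + (u - U) * np.exp(-dt / tau_f)      # u relaxes back to U
        x = 1.0 + (x - 1.0) * np.exp(-dt / tau_d)  # x recovers to 1
        u += U * (1.0 - u)                         # spike: facilitation jump
        x *= 1.0 - u                               # spike: resource depletion
        t_prev = t
    # silent delay after the burst: exponential relaxation only
    u_delay = U + (u - U) * np.exp(-delay / tau_f)
    x_delay = 1.0 + (x - 1.0) * np.exp(-delay / tau_d)
    return u_delay, x_delay

u_mem, x_mem = stp_after_item()
print(f"facilitation after 1 s delay: u = {u_mem:.3f} (baseline U = 0.2)")
print(f"resources after 1 s delay:   x = {x_mem:.3f}")
```

Because tau_f is much longer than tau_d, the depression variable x has almost fully recovered after the delay while u remains well above baseline; a brief nonspecific input can then reactivate the stored item.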
Olmi S, Petkoski S, Guye M, Bartolomei F, Jirsa V:
PLoS computational biology 15(2): e1006805 (2019)
Epilepsy is characterized by perturbed dynamics that originate in a local network before spreading to other brain regions. In this paper we studied patient-specific brain network models of epilepsy patients, comprising 88 nodes equipped with region-specific neural mass models capable of demonstrating epileptiform discharges. Applying stability analysis led to a seizure control strategy that is significantly less invasive than the traditional surgery, which typically resects the epileptogenic regions. The invasiveness of the procedure correlates with the graph-theoretical importance of the nodes. The novel method subsequently removes the most unstable links, a procedure made possible by the advent of novel surgery techniques. Our approach is entirely based on structural data, allowing the creation of a brain model based on purely non-invasive data prior to any surgery.
We study a network of spiking neurons with heterogeneous excitabilities connected via inhibitory delayed pulses. For globally coupled systems, increasing the inhibitory coupling reduces the number of firing neurons, following a winner-takes-all mechanism. For sufficiently large transmission delays we observe the emergence of collective oscillations in the system beyond a critical coupling value. Heterogeneity promotes neural inactivation and asynchronous dynamics, and its effect can be counteracted by considering longer time delays. In sparse networks, inhibition has the counterintuitive effect of promoting neural reactivation of silent neurons for sufficiently large coupling. In this regime, current fluctuations are on the one hand responsible for the neural firing of subthreshold neurons and on the other hand for their desynchronization. Therefore, collective oscillations are present only in a limited range of coupling values, which remains finite in the thermodynamic limit. Outside this range the dynamics is asynchronous, and for very large inhibition neurons display a bursting behavior, alternating periods of silence with periods where they fire freely in the absence of any inhibition.
Simona Olmi, Alessandro Torcini
In "Nonlinear Dynamics in Computational Neuroscience", PoliTO Springer Series, p. 65-79 (2019)
Satuvuori E, Mulansky M, Daffertshofer A, Kreuz T:
J. Neurosci. Methods 308, 354 (2018) and arXiv [PDF]
In this article we simulate how neuronal populations in the brain work together to distinguish different sensory inputs from the real world (e.g. visual images). More specifically, we propose two new algorithms (one where each neuron acts on its own and one where they all work together) to find, among a large neuronal population, the one subpopulation that discriminates different stimuli best.
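The two selection strategies can be contrasted on toy data. The sketch below is only an illustration of the general idea, not the algorithms of the paper: it ranks neurons by an individual d'-like discriminability (each neuron on its own) and, alternatively, greedily grows a subpopulation by adding whichever neuron most improves the group's discriminability. The responses are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy data: 20 neurons, 30 trials per stimulus; only the first 5 neurons
# carry stimulus information (hypothetical responses, illustration only)
n_neurons, n_trials, k = 20, 30, 5
signal = np.zeros(n_neurons)
signal[:5] = rng.uniform(0.5, 1.5, 5)
resp_a = rng.normal(0.0, 1.0, (n_trials, n_neurons))
resp_b = rng.normal(signal, 1.0, (n_trials, n_neurons))

def discriminability(cols):
    """d'-like separation of the two stimuli from the summed response
    of the chosen subpopulation."""
    sa, sb = resp_a[:, cols].sum(axis=1), resp_b[:, cols].sum(axis=1)
    return abs(sa.mean() - sb.mean()) / np.sqrt(0.5 * (sa.var() + sb.var()))

# strategy 1: each neuron on its own -- rank by individual discriminability
solo = np.array([discriminability([i]) for i in range(n_neurons)])
best_solo = np.argsort(-solo)[:k]

# strategy 2: greedy ensemble -- add the neuron that helps the group most
chosen = []
for _ in range(k):
    rest = [i for i in range(n_neurons) if i not in chosen]
    gains = [discriminability(chosen + [i]) for i in rest]
    chosen.append(rest[int(np.argmax(gains))])

print("individual ranking picks:", sorted(best_solo.tolist()))
print("greedy ensemble picks:   ", sorted(chosen))
```

With correlated noise the two strategies can disagree substantially, which is precisely why ensemble-aware selection matters.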
PLoS Comput. Biol. 14(11) 2018 [PDF]
In this article we studied the mechanisms underlying the presence of synchronous activity in developing neural circuits. In particular, we developed a neural network model which reproduces recent experimental findings in the neocortex, i.e. the existence of peculiar neurons (called "drivers") which, under stimulation, have the capability to change the frequency of the synchronization events of the overall neural population.
Inhibition is a key aspect of neural dynamics, playing a fundamental role in the emergence of neural rhythms and the implementation of various information coding strategies. Inhibitory populations are present in several brain structures, and the comprehension of their dynamics is strategic for the understanding of neural processing. In this paper, we clarify the mechanisms underlying a general phenomenon present in pulse-coupled heterogeneous inhibitory networks: inhibition can induce not only suppression of neural activity, as expected, but can also promote neural re-activation. In particular, for globally coupled systems, the number of firing neurons monotonically decreases upon increasing the strength of inhibition (neuronal death). However, the random pruning of connections is able to reverse the action of inhibition, i.e. in a random sparse network a sufficiently strong synaptic strength can surprisingly promote, rather than depress, the activity of neurons (neuronal rebirth). Thus, the number of firing neurons reaches a minimum value at some intermediate synaptic strength. We show that this minimum signals a transition from a regime dominated by neurons with a higher firing activity to a phase where all neurons are effectively sub-threshold and their irregular firing is driven by current fluctuations. We explain the origin of the transition by deriving a mean-field formulation of the problem able to provide the fraction of active neurons as well as the first two moments of their firing statistics. The introduction of a synaptic time scale does not modify the main aspects of the reported phenomenon. However, for sufficiently slow synapses the transition becomes dramatic, and the system passes from a perfectly regular evolution to irregular bursting dynamics. In this latter regime the model provides predictions consistent with experimental findings for a specific class of neurons, namely the medium spiny neurons in the striatum.
Repetitive spatio-temporal propagation patterns are encountered in fields as wide-ranging as climatology, social communication and network science. In neuroscience, perfectly consistent repetitions of the same global propagation pattern are called a synfire pattern. For any recording of sequences of discrete events (in neuroscience terminology: sets of spike trains) the questions arise of how closely it resembles such a synfire pattern and which spike trains lead and which follow. Here we address these questions and introduce an algorithm built on two new indicators, termed SPIKE-order and spike train order, that define the synfire indicator value, which allows one to sort multiple spike trains from leader to follower and to quantify the consistency of the temporal leader-follower relationships for both the original and the optimized sorting. We demonstrate our new approach using artificially generated datasets before we apply it to analyze the consistency of propagation patterns in two real datasets from neuroscience (giant depolarized potentials in mouse slices) and climatology (El Niño sea surface temperature recordings). The new algorithm is distinguished by conceptual and practical simplicity, low computational cost, as well as flexibility and universality.
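The leader-follower sorting can be illustrated with a drastically simplified version of the SPIKE-order idea: assuming each train contributes exactly one spike per propagation event, each pair of trains is scored by which fires earlier, and trains are sorted by their mean pairwise score. The real SPIKE-order indicator matches spikes adaptively and needs no such one-spike-per-event assumption.

```python
import numpy as np

def leader_scores(event_times):
    """event_times: array (n_events, n_trains), one spike per train per
    event. Returns the mean pairwise leader score per train, in [-1, 1]
    (+1 = always fires first), a simplified stand-in for SPIKE-order."""
    n_events, n_trains = event_times.shape
    scores = np.zeros(n_trains)
    for i in range(n_trains):
        for j in range(n_trains):
            if i == j:
                continue
            # +1 per event where train i spikes before j, -1 where after
            scores[i] += np.mean(np.sign(event_times[:, j] - event_times[:, i]))
    return scores / (n_trains - 1)

# perfect synfire pattern: same order (0 -> 1 -> 2) in every event
events = np.array([[0.0, 1.0, 2.0],
                   [10.0, 11.5, 12.1],
                   [20.2, 21.0, 23.0]])
scores = leader_scores(events)
order = np.argsort(-scores)     # sort from leader to follower
print(scores, order)
```

For a perfectly consistent pattern the scores reach the extremes +1 and -1; inconsistent propagation pulls them toward 0, which is what the synfire indicator quantifies after the optimal sorting.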
Understanding how the brain functions is one of the biggest challenges of our time. The analysis of experimentally recorded neural firing patterns (spike trains) plays a crucial role in addressing this problem. Here, the PySpike library is introduced, a Python package for spike train analysis providing parameter-free and time-scale-independent measures of spike train synchrony. It allows the computation of similarity and dissimilarity profiles, averaged values and distance matrices. Although mainly focused on neuroscience, PySpike can also be applied in other contexts like climate research or the social sciences. The package is available as open source on GitHub and PyPI.
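To give a flavor of what such a measure looks like, here is a self-contained, grid-sampled approximation of the ISI-distance between two spike trains: at each time the current interspike intervals of the two trains are compared, and the normalized difference is averaged over time. PySpike itself computes the piecewise profile exactly rather than on a grid.

```python
import numpy as np

def isi_distance(st1, st2, t_start, t_end, n_samples=10000):
    """Grid-sampled approximation of the ISI-distance between two spike
    trains; 0 for identical trains, approaching 1 for very different
    firing rates. Edges are treated as auxiliary spikes."""
    ts = np.linspace(t_start, t_end, n_samples, endpoint=False)

    def current_isi(st, t):
        spikes = np.concatenate(([t_start], st, [t_end]))
        idx = np.searchsorted(spikes, t, side='right')
        return spikes[idx] - spikes[idx - 1]

    prof = np.empty(n_samples)
    for k, t in enumerate(ts):
        x1, x2 = current_isi(st1, t), current_isi(st2, t)
        prof[k] = abs(x1 - x2) / max(x1, x2)
    return prof.mean()

a = np.arange(0.5, 10.0, 1.0)            # regular train, period 1
b = np.arange(0.5, 10.0, 2.0)            # regular train, period 2
d_same = isi_distance(a, a, 0.0, 10.0)   # identical trains -> 0
d_diff = isi_distance(a, b, 0.0, 10.0)
print(f"ISI-distance(a, a) = {d_same:.3f}, ISI-distance(a, b) = {d_diff:.3f}")
```

With PySpike installed, the corresponding exact quantity is available directly (e.g. via `pyspike.isi_distance` on `pyspike.SpikeTrain` objects), together with the SPIKE-distance and SPIKE-synchronization.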
Techniques for recording large-scale neuronal spiking activity are developing very fast. This leads to an increasing demand for algorithms capable of analyzing large amounts of experimental spike train data. One of the most crucial and demanding tasks is the identification of similarity patterns with a very high temporal resolution and across different spatial scales. To address this task, in recent years three time-resolved measures of spike train synchrony have been proposed: the ISI-distance, the SPIKE-distance, and event synchronization. The Matlab source codes for calculating and visualizing these measures have been made publicly available. However, due to the many different possible representations of the results, the use of these codes is rather complicated and their application requires some basic knowledge of Matlab. Thus, it became desirable to provide a more user-friendly and interactive interface. Here we address this need and present SPIKY, a graphical user interface that facilitates the application of time-resolved measures of spike train synchrony to both simulated and real data. SPIKY includes implementations of the ISI-distance, the SPIKE-distance, and SPIKE-synchronization (an improved and simplified extension of event synchronization) that have been optimized with respect to computation speed and memory demand. It also comprises a spike train generator and an event detector that make it capable of analyzing continuous data. Finally, the SPIKY package includes additional complementary programs aimed at the analysis of large numbers of datasets and the estimation of significance levels.
In a first step toward the comprehension of neural activity, one should focus on the stability of the possible dynamical states. Even the characterization of an idealized regime, such as that of a perfectly periodic spiking activity, reveals unexpected difficulties. In this paper we discuss a general approach to linear stability of pulse-coupled neural networks for generic phase-response curves and post-synaptic response functions. In particular, we present: (1) a mean-field approach developed under the hypothesis of an infinite network and small synaptic conductances; (2) a “microscopic” approach which applies to finite but large networks. As a result, we find that there exist two classes of perturbations: those which are perfectly described by the mean-field approach and those which are subject to finite-size corrections, irrespective of the network size. The analysis of perfectly regular, asynchronous, states reveals that their stability depends crucially on the smoothness of both the phase-response curve and the transmitted post-synaptic pulse. Numerical simulations suggest that this scenario extends to systems that are not covered by the perturbative approach. Altogether, we have described a series of tools for the stability analysis of various dynamical regimes of generic pulse-coupled oscillators, going beyond those that are currently invoked in the literature.
The detection of a nonrandom structure from experimental data can be crucial for the classification, understanding, and interpretation of the generating process. We here introduce a rank-based nonlinear predictability score to detect determinism from point process data. Thanks to its modular nature, this approach can be adapted to whatever signature in the data one considers indicative of deterministic structure. After validating our approach using point process signals from deterministic and stochastic model dynamics, we show an application to neuronal spike trains recorded in the brain of an epilepsy patient. While we illustrate our approach in the context of temporal point processes, it can be readily applied to spatial point processes as well.
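The modular, rank-based surrogate idea described above can be sketched as follows. This is not the authors' exact score, only an illustration of the general recipe: choose a statistic sensitive to deterministic structure (here, the mean absolute difference of successive interspike intervals, which is small for smoothly varying trains), evaluate it on the data and on ISI-shuffled surrogates, and use the rank of the data among the surrogates as the predictability score.

```python
import numpy as np

rng = np.random.default_rng(42)

def predictability_rank(isis, n_surr=200):
    """Rank-based determinism test on interspike intervals: fraction of
    ISI-shuffled surrogates scoring worse (larger statistic) than the
    data; values near 1 indicate deterministic structure."""
    stat = lambda x: np.mean(np.abs(np.diff(x)))
    s0 = stat(isis)
    surr = [stat(rng.permutation(isis)) for _ in range(n_surr)]
    return np.mean([s > s0 for s in surr])

# deterministic point process: slowly drifting ISIs
det = 1.0 + 0.5 * np.sin(np.linspace(0, 4 * np.pi, 200))
# stochastic control: the same ISI distribution in random order
sto = rng.permutation(det)

p_det = predictability_rank(det)
p_sto = predictability_rank(sto)
print(f"score (deterministic) = {p_det:.2f}, score (shuffled) = {p_sto:.2f}")
```

Because only the statistic needs to be swapped out, the same skeleton accommodates whatever signature of determinism one considers relevant, which is the modularity the paper emphasizes.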