# Neuroscience

## Overview

Computational neuroscience is an interdisciplinary field in which modelling and analysis tools derived from mathematics, physics and computer science are used to investigate how the nervous system processes information. It relies mostly on the development, simulation, and analysis of multi-scale models of brain function, from the level of molecules through single neurons and neuronal networks up to cognition and behaviour. The analysis of real electrophysiological data, recorded from various brain regions on different temporal and spatial scales, is used to validate the models.

Our computational neuroscience work in Florence follows two main lines of research: theoretical analysis of neuronal networks and data analysis. The first line relies mostly on the modelling and simulation of neuronal networks using simple models (such as integrate-and-fire and Hodgkin-Huxley for single neurons, or Wilson-Cowan at the population level). Currently, particular attention is devoted to research on hub cells, i.e. neurons able to strongly impact and control the network dynamics. The second line deals with the analysis of electrophysiological recordings such as EEG and neuronal spike trains. Here a recent focus of interest is neuronal population coding, i.e., the study of how the sensory world is represented in the action potentials of neuronal networks in the brain.
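To illustrate the simplest of the single-neuron models mentioned above, here is a minimal sketch of a leaky integrate-and-fire neuron driven by a constant suprathreshold current (all parameter values are illustrative, not taken from our studies):

```python
import numpy as np

def simulate_lif(i_ext=1.5, v_thresh=1.0, v_reset=0.0, tau=20.0,
                 dt=0.1, t_max=200.0):
    """Euler integration of a leaky integrate-and-fire neuron:
    dv/dt = (-v + i_ext) / tau; whenever v crosses v_thresh a spike
    is emitted and v is reset. Units are arbitrary (ms-like)."""
    n_steps = int(t_max / dt)
    v = v_reset
    spike_times = []
    for step in range(n_steps):
        v += dt * (-v + i_ext) / tau
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_reset
    return np.array(spike_times)

spikes = simulate_lif()
# for i_ext > v_thresh the neuron fires periodically, with an
# inter-spike interval close to tau * ln(i_ext / (i_ext - v_thresh))
print(len(spikes), np.mean(np.diff(spikes)))
```

Network studies couple many such units via inhibitory or excitatory pulses; the single-unit sketch only fixes the notation.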

**Further information:** https://firenze-neuro.complexworld.net/home

## People involved

## Recent publications

Moritz Gerster, Halgurd Taher, Antonín Škoch, Jaroslav Hlinka, Maxime Guye, Fabrice Bartolomei, Viktor Jirsa, Anna Zakharova, and **Simona Olmi**

*Frontiers in Systems Neuroscience* 79 (2021)

Dynamics underlying epileptic seizures span multiple scales in space and time; therefore, understanding seizure mechanisms requires identifying the relations between seizure components within and across these scales, together with the analysis of their dynamical repertoire. In this view, mathematical models have been developed, ranging from single neurons to neural populations. In this study, we consider a neural mass model able to exactly reproduce the dynamics of heterogeneous spiking neural networks. We combine mathematical modeling with structural information from non-invasive brain imaging, thus building large-scale brain network models to explore emergent dynamics and test clinical hypotheses. We provide a comprehensive study of the effect of external drives on neuronal networks exhibiting multistability, in order to investigate the role played by the neuroanatomical connectivity matrices in shaping the emergent dynamics. In particular, we systematically investigate the conditions under which the network displays a transition from a low-activity regime to a high-activity state, which we identify with a seizure-like event. This approach allows us to study the biophysical parameters and variables leading to multiple recruitment events at the network level. We further exploit topological network measures in order to explain why different subjects and brain regions show recruitment events at different parameter values. Using diffusion-weighted magnetic resonance imaging (dMRI) connectomes of 20 healthy subjects and 15 epileptic patients, we demonstrate that individual variations in structural connectivity, when linked with mathematical dynamic models, have the capacity to explain changes in the spatiotemporal organization of brain dynamics, as observed in network-based brain disorders. In particular, for epileptic patients, by integrating the clinical hypotheses on the epileptogenic zone (EZ), i.e., the local network where highly synchronous seizures originate, we have identified the sequence of recruitment events and discussed their links with the topological properties of the specific connectomes. The predictions made on the basis of the implemented set of exact mean-field equations turn out to be in line with the clinical pre-surgical evaluation of recruited secondary networks.

### Cortical propagation tracks functional recovery after stroke

Gloria Cecchini, Alessandro Scaglione, Anna Letizia Allegra Mascaro, Curzio Checcucci, Emilia Conti, Ihusan Adam, Duccio Fanelli, Roberto Livi, Francesco Saverio Pavone, **Thomas Kreuz**

*PLoS Comput Biol* 17(5): e1008963 (2021), also on bioRxiv

Stroke is a debilitating condition affecting millions of people worldwide. The development of improved rehabilitation therapies rests on finding biomarkers suitable for tracking functional damage and recovery. To achieve this goal, we perform a spatiotemporal analysis of cortical activity obtained by wide-field calcium imaging in mice before and after stroke. We compare spontaneous recovery with three different post-stroke rehabilitation paradigms: motor training alone, pharmacological contralesional inactivation, and both combined. We identify three novel indicators that are able to track how movement-evoked global activation patterns are impaired by stroke and evolve during rehabilitation: the duration, the smoothness, and the angle of individual propagation events. Results show that, compared to pre-stroke conditions, propagation of cortical activity in the subacute phase right after stroke is slowed down and more irregular. When comparing rehabilitation paradigms, we find that mice treated with both motor training and pharmacological intervention, the only group associated with generalized recovery, manifest new propagation patterns that are even faster and smoother than before the stroke. In conclusion, our new spatiotemporal propagation indicators could represent promising biomarkers able to uncover neural correlates not only of the motor deficits caused by stroke but also of functional recovery during rehabilitation. In turn, these insights could pave the way towards more targeted post-stroke therapies.

Emre Baspinar, Leonhard Schülen, **Simona Olmi**, and Anna Zakharova

*Phys. Rev. E* 103, 032308 (2021)

The counterintuitive phenomenon of coherence resonance describes a nonmonotonic behavior of the regularity of noise-induced oscillations in the excitable regime, leading to an optimal response in terms of regularity of the excited oscillations for an intermediate noise intensity. We study this phenomenon in populations of FitzHugh-Nagumo (FHN) neurons with different coupling architectures. For networks of FHN systems in an excitable regime, coherence resonance has been previously analyzed numerically. Here we focus on an analytical approach studying the mean-field limits of the globally and locally coupled populations. The mean-field limit refers to an averaged behavior of a complex network as the number of elements goes to infinity. We apply the mean-field approach to the globally coupled FHN network. Further, we derive a mean-field limit approximating the locally coupled FHN network with low noise intensities. We study the effects of the coupling strength and noise intensity on coherence resonance for both the network and the mean-field models. We compare the results of the mean-field and network frameworks and find good agreement in the globally coupled case, where the correspondence between the two approaches is sufficiently good to capture the emergence of coherence resonance, as well as of anticoherence resonance.
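The excitable dynamics at the heart of this study can be sketched in a few lines. The following Euler-Maruyama simulation of a single noise-driven FHN unit (with illustrative parameters, not those of the paper) shows the basic ingredient of coherence resonance: without noise the unit is silent, while moderate noise elicits spikes.

```python
import numpy as np

def fhn_spike_count(sigma, a=1.05, eps=0.05, dt=0.005, t_max=200.0, seed=42):
    """Euler-Maruyama integration of one excitable FitzHugh-Nagumo unit:
        eps * du = (u - u**3/3 - v) dt
        dv      = (u + a) dt + sigma dW
    For |a| > 1 the unit is excitable; noise on the slow variable can
    trigger large excursions (spikes), detected as upward crossings of u = 1.
    """
    rng = np.random.default_rng(seed)
    u, v = -a, -a + a**3 / 3          # deterministic fixed point
    n_steps = int(t_max / dt)
    spikes, above = 0, False
    for _ in range(n_steps):
        du = (u - u**3 / 3 - v) * dt / eps
        dv = (u + a) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        u, v = u + du, v + dv
        if u > 1.0 and not above:
            spikes += 1
        above = u > 1.0
    return spikes

print(fhn_spike_count(0.0), fhn_spike_count(0.3))  # no noise vs. moderate noise
```

Coherence resonance concerns the *regularity* of these noise-induced spikes as a function of sigma; the sketch above only reproduces their emergence.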

Marco Segneri, Hongjie Bi, **Simona Olmi**, Alessandro Torcini

*Front. Comput. Neurosci.* 14:47 (2020)

Theta-nested gamma oscillations have been reported in many areas of the brain and are believed to represent a fundamental mechanism for transferring information across spatial and temporal scales. In a series of recent experiments in vitro it has been possible to replicate, with an optogenetic theta-frequency stimulation, several features of the cross-frequency coupling (CFC) between theta and gamma rhythms observed in behaving animals. In order to reproduce the main findings of these experiments we have considered a new class of neural mass models able to reproduce exactly the macroscopic dynamics of spiking neural networks. In this framework, we have examined two set-ups able to support collective gamma oscillations: the pyramidal interneuronal network gamma (PING) and the interneuronal network gamma (ING). In both set-ups we observe the emergence of theta-nested gamma oscillations by driving the system with a sinusoidal theta forcing in the proximity of a Hopf bifurcation. These mixed rhythms always display phase-amplitude coupling. However, two different types of nested oscillations can be identified: one characterized by a perfect phase locking between theta and gamma rhythms, corresponding to an overall periodic behavior; another one where the locking is imperfect and the dynamics is quasi-periodic or even chaotic. From our analysis it emerges that the locked states are more frequent in the ING set-up. In agreement with the experiments, we find theta-nested gamma oscillations for forcing frequencies in the range 1-10 Hz, whose amplitudes grow proportionally to the forcing intensity and which are clearly modulated by the theta phase. Furthermore, analogously to the experiments, the gamma power and the frequency of the gamma-power peak increase with the forcing amplitude. At variance with the experimental findings, the gamma-power peak does not shift to higher frequencies when the theta frequency is increased. In our model, this effect can be obtained only by simultaneously increasing the stimulation power, i.e., by increasing the amplitude of either the noise or the forcing term proportionally to the theta frequency. On the basis of our analysis, both the PING and the ING mechanisms give rise to theta-nested gamma oscillations with almost identical features.

### Inferring network structure and local dynamics from neuronal patterns with quenched disorder

Ihusan Adam, Gloria Cecchini, Duccio Fanelli, **Thomas Kreuz**, Roberto Livi, Matteo di Volo, Anna Letizia Allegra Mascaro, Emilia Conti, Alessandro Scaglione, Ludovico Silvestri, Francesco Saverio Pavone

*Chaos, Solitons and Fractals* 140, 110235 (2020), also on arXiv

An inverse procedure is proposed and tested which aims at recovering a priori unknown functional and structural information from the global activity signals of living brains. To this end, we consider a Leaky Integrate-and-Fire (LIF) neuron model with short-term plasticity, coupled via a directed network. Neurons are assigned a specific current value, which is heterogeneous across the sample and sets the firing regime in which the neuron operates. The aim of the method is to recover the distribution of incoming network degrees, as well as the distribution of the assigned currents, from global field measurements. The proposed approach to the inverse problem implements the reductionist Heterogeneous Mean-Field approximation. This amounts in turn to organizing the neurons in different classes, depending on their associated degree and current. When tested against synthetic data, the method returns accurate estimates of the sought distributions, while managing to reproduce and interpolate almost exactly the time series of the supplied global field. Finally, we also applied the proposed technique to longitudinal wide-field fluorescence microscopy data of cortical functionality in awake Thy1-GCaMP6f mice. A photothrombotic stroke is induced in the primary motor cortex of the mice, and their recovery is monitored over time. An all-to-all LIF model which accommodates current heterogeneity adequately explains the recorded patterns of activation. In particular, altered distributions of neuron excitability are detected, compatible with the phenomenon of hyperexcitability in the penumbra region after stroke.

A. Ceni, **S. Olmi**, A. Torcini, D. Angulo-Garcia

*Chaos* 30, 053121 (2020)

Coupling among neural rhythms is one of the most important mechanisms at the basis of cognitive processes in the brain. In this study, we consider a neural mass model, rigorously obtained from the microscopic dynamics of an inhibitory spiking network with exponential synapses, able to autonomously generate collective oscillations (COs). These oscillations emerge via a super-critical Hopf bifurcation, and their frequencies are controlled by the synaptic time scale, the synaptic coupling, and the excitability of the neural population. Furthermore, we show that two inhibitory populations in a master–slave configuration with different synaptic time scales can display various collective dynamical regimes: damped oscillations toward a stable focus, periodic and quasi-periodic oscillations, and chaos. Finally, when bidirectionally coupled, the two inhibitory populations can exhibit different types of *θ*–*γ* cross-frequency couplings (CFCs): phase-phase and phase-amplitude CFC. The coupling between *θ* and *γ* COs is enhanced in the presence of an external *θ* forcing, reminiscent of the type of modulation induced in hippocampal and cortex circuits via optogenetic drive.

Under healthy conditions, the brain’s activity consists of a series of intermingled oscillations, generated by large ensembles of neurons, which provide a functional substrate for information processing. Understanding how single-neuron properties influence neuronal population dynamics could help in the comprehension of the collective behaviors emerging during cognitive processes. Here, we consider a neural mass model which reproduces exactly the macroscopic activity of a network of spiking Quadratic Integrate-and-Fire (QIF) neurons. This mean-field model is employed to shed some light on an important and pervasive neural mechanism underlying information processing in the brain: the *θ*–*γ* cross-frequency coupling. In particular, we explore in detail the conditions under which two coupled inhibitory neural populations, characterized by slow and fast synaptic kinetics, can generate these functionally relevant coupled rhythms.
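For orientation, exact QIF mean-field models of this kind go back to Montbrió, Pazó and Roxin (Phys. Rev. X, 2015); their two macroscopic variables are the population firing rate r and the mean membrane potential v. A minimal sketch of a single inhibitory population with instantaneous synapses (parameter values are illustrative, not those of the paper):

```python
import numpy as np

def integrate_neural_mass(eta=1.0, J=-5.0, delta=1.0, dt=1e-3, t_max=50.0):
    """Euler integration of the exact QIF mean-field model
    (Montbrio, Pazo & Roxin type; membrane time constant set to 1):
        dr/dt = delta / pi + 2*r*v
        dv/dt = v**2 + eta + J*r - (pi*r)**2
    r: population firing rate, v: mean membrane potential.
    eta: median excitability, delta: excitability spread, J: coupling."""
    r, v = 0.1, 0.0
    n_steps = int(t_max / dt)
    trace = np.empty((n_steps, 2))
    for i in range(n_steps):
        r += dt * (delta / np.pi + 2.0 * r * v)
        v += dt * (v**2 + eta + J * r - (np.pi * r) ** 2)
        trace[i] = r, v
    return trace

trace = integrate_neural_mass()
print("final (r, v):", trace[-1])  # relaxes toward a stable focus
```

With a single population and instantaneous synapses the dynamics relaxes to a stable focus; the collective oscillations and *θ*–*γ* couplings discussed above require the finite synaptic time scales and the two-population coupling studied in the paper.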

Halgurd Taher, Alessandro Torcini, **Simona Olmi**

*PLoS Comput Biol* 16(12): e1008533 (2020)

A synaptic theory of Working Memory (WM) has been developed in the last decade as a possible alternative to the persistent-spiking paradigm. In this context, we have developed a neural mass model able to reproduce exactly the dynamics of heterogeneous spiking neural networks encompassing realistic cellular mechanisms for short-term synaptic plasticity. This population model reproduces the macroscopic dynamics of the network in terms of the firing rate and the mean membrane potential. The latter quantity allows us to gain insight into the Local Field Potential and electroencephalographic signals measured during WM tasks to characterize the brain activity. More specifically, synaptic facilitation and depression complement each other to efficiently mimic WM operations via either synaptic reactivation or persistent activity. Memory access and loading are related to stimulus-locked transient oscillations followed by a steady-state activity in the *β*-*γ* band, thus resembling what is observed in the cortex during vibrotactile stimuli in humans and object recognition in monkeys. Memory juggling and competition emerge already when only two items are loaded. However, more items can be stored in WM by considering neural architectures composed of multiple excitatory populations and a common inhibitory pool. Memory capacity depends strongly on the presentation rate of the items and is maximal in an optimal frequency range. In particular, we provide an analytic expression for the maximal memory capacity. Furthermore, the mean membrane potential turns out to be a suitable proxy to measure the memory load, analogously to event-driven potentials in experiments on humans. Finally, we show that the *γ* power increases with the number of loaded items, as reported in many experiments, while the *θ* and *β* power reveal non-monotonic behaviours. In particular, the *β* and *γ* rhythms are crucially sustained by the inhibitory activity, while the *θ* rhythm is controlled by excitatory synapses.

**Thomas Kreuz**, Conor Houghton, Jonathan Victor

*Encyclopedia of Computational Neuroscience* (2019), doi.org/10.1007/978-1-4614-7320-6_409-2

A “spike train distance” is a means for comparing two sequences of stereotyped events. The term “spike metric” refers to a spike train distance that, additionally, has the formal mathematical properties of a metric. Spike train distances have broad application in neuroscience, since the action potentials emitted by a neuron or set of neurons can be regarded as a sequence of stereotyped events; we briefly survey these applications here.
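As a concrete example of a spike metric, here is a minimal sketch of the classic Victor-Purpura distance: the minimal cost of transforming one spike train into the other, where inserting or deleting a spike costs 1 and shifting a spike by dt costs q·|dt| (parameter values below are illustrative).

```python
def victor_purpura(t1, t2, q):
    """Victor-Purpura spike train metric via dynamic programming.
    t1, t2: sorted spike times; q: cost per unit time of moving a spike.
    Shifting a spike is effectively capped at cost 2 by the
    delete-plus-insert alternative."""
    n, m = len(t1), len(t2)
    # G[i][j] = distance between the first i spikes of t1 and first j of t2
    G = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        G[i][0] = float(i)
    for j in range(1, m + 1):
        G[0][j] = float(j)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            G[i][j] = min(G[i - 1][j] + 1.0,          # delete spike i of t1
                          G[i][j - 1] + 1.0,          # insert spike j of t2
                          G[i - 1][j - 1] + q * abs(t1[i - 1] - t2[j - 1]))
    return G[n][m]

print(victor_purpura([0.1, 0.5], [0.1, 0.5], q=1.0))   # identical trains -> 0.0
print(victor_purpura([0.1, 0.5, 0.9], [0.1], q=0.0))   # q=0: count difference -> 2.0
```

The parameter q sets the time scale: for q → 0 the metric only compares spike counts (rate coding), while large q makes precise spike timing matter.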

Epilepsy is characterized by perturbed dynamics that originate in a local network before spreading to other brain regions. In this paper we study patient-specific brain network models of epilepsy patients, comprising 88 nodes equipped with region-specific neural mass models capable of demonstrating epileptiform discharges. Applying stability analysis leads to a seizure control strategy that is significantly less invasive than traditional surgery, which typically resects the epileptogenic regions. The invasiveness of the procedure correlates with the graph-theoretical importance of the nodes. The novel method subsequently removes the most unstable links, a procedure made possible by the advent of novel surgical techniques. Our approach is entirely based on structural data, allowing the creation of a brain model from purely non-invasive data prior to any surgery.

**Stefano Luccioli**, David Angulo-Garcia, **Alessandro Torcini**

*Phys. Rev. E* 99, 052412 (2019)

We study a network of spiking neurons with heterogeneous excitabilities connected via inhibitory delayed pulses. For globally coupled systems, increasing the inhibitory coupling reduces the number of firing neurons via a winner-takes-all mechanism. For a sufficiently large transmission delay we observe the emergence of collective oscillations in the system beyond a critical coupling value. Heterogeneity promotes neural inactivation and asynchronous dynamics, and its effect can be counteracted by considering longer time delays. In sparse networks, inhibition has the counterintuitive effect of promoting the reactivation of silent neurons at sufficiently large coupling. In this regime, current fluctuations are responsible on the one hand for the firing of subthreshold neurons and on the other hand for their desynchronization. Therefore, collective oscillations are present only in a limited range of coupling values, which remains finite in the thermodynamic limit. Outside this range the dynamics is asynchronous, and for very large inhibition neurons display a bursting behavior, alternating periods of silence with periods in which they fire freely in the absence of any inhibition.

**S. Luccioli**, E. Ben-Jacob, A. Barzilai, P. Bonifazi, A. Torcini

In "Nonlinear Dynamics in Computational Neuroscience", PoliTO Springer Series, p. 53-64 (2019)

**Simona Olmi**, Alessandro Torcini

In "Nonlinear Dynamics in Computational Neuroscience", PoliTO Springer Series, p. 65-79 (2019)

We analyse the possible dynamical states emerging for two symmetrically pulse-coupled populations of leaky integrate-and-fire neurons. In particular, we observe broken-symmetry states in this set-up: namely, breathing chimeras, where one population is fully synchronized and the other is in a state of partial synchronization (PS), as well as generalized chimera states, where both populations are in PS but with different levels of synchronization. Symmetric macroscopic states are also present, ranging from quasi-periodic motions to collective chaos, and from splay states to population anti-phase partial synchronization. We then investigate the influence of disorder, such as random link removal or noise, on the dynamics of the collective solutions in this model. As a result, we observe that broken-symmetry chimera-like states, with both populations partially synchronized, persist up to 80% of broken links and up to noise amplitudes of ≃8% of the threshold-reset distance. Furthermore, the introduction of disorder on a symmetric chaotic state has a constructive effect, namely to induce the emergence of chimera-like states at intermediate dilution or noise levels.

This article simulates how neuronal populations in the brain work together to distinguish different sensory inputs from the real world (e.g. visual images). More specifically, it proposes two new algorithms (one where each neuron acts on its own and one where they all work together) to find among a large neuronal population the one subpopulation that discriminates different stimuli best.

**S. Luccioli**, D. Angulo-Garcia, R. Cossart, A. Malvache, L. Módol-Vidal, V. H. Sousa, P. Bonifazi, **A. Torcini**:

*PLoS Comput Biol* 14(11) (2018)

In this article we studied the mechanisms underlying the presence of synchronous activity in developing neural circuits. In particular, we developed a neural network model which reproduces recent experimental findings in the neo-cortex, i.e. the existence of peculiar neurons (called "drivers") which, under stimulation, have the capability to change the frequency of the synchronization events of the overall neural population.

Satuvuori E, Mulansky M, Daffertshofer A, **Kreuz T**

**Background**

Spike trains of multiple neurons can be analyzed following the summed population (SP) or the labeled line (LL) hypothesis: responses to external stimuli are either generated by the neuronal population as a whole, or the individual neurons have encoding capacities of their own. The SPIKE-distance, estimated either for a single spike train pooled over the population or for each neuron separately, can serve to quantify these responses.

**New method**

For the SP case we compare three algorithms that search for the most discriminative subpopulation over all stimulus pairs. For the LL case we introduce a new algorithm that combines neurons that individually separate different pairs of stimuli best.

**Results**

The best approach for SP is a brute force search over all possible subpopulations. However, it is only feasible for small populations. For more realistic settings, simulated annealing clearly outperforms gradient algorithms with only a limited increase in computational load. Our novel LL approach can handle very involved coding scenarios despite its computational ease.

**Comparison with existing methods**

Spike train distances have been extended to the analysis of neural populations by interpolating between SP and LL coding. This includes parametrizing the importance of distinguishing spikes fired in different neurons. Yet, these approaches only consider the population as a whole. The explicit focus on subpopulations renders our algorithms complementary.

**Conclusions**

The spectrum of encoding possibilities in neural populations is broad. The SP and LL cases are two extremes for which our algorithms provide correct identification results.

Eero Satuvuori, **Thomas Kreuz**

**Background:** It is commonly assumed in neuronal coding that repeated presentations of a stimulus to a coding neuron elicit similar responses. One common way to assess similarity is via spike train distances. These can be divided into spike-resolved, such as the Victor-Purpura and the van Rossum distance, and time-resolved, e.g. the ISI-, the SPIKE- and the RI-SPIKE-distance.

**New method:** We use independent steady-rate Poisson processes as surrogates for spike trains with fixed rate and no timing information to address two basic questions: How does the sensitivity of the different spike train distances to temporal coding depend on the rates of the two processes and how do the distances deal with very low rates?

**Results:** Spike-resolved distances always contain rate information even for parameters indicating time coding. This is an issue for reasonably high rates but beneficial for very low rates. In contrast, the operational range for detecting time coding of time-resolved distances is superior at normal rates, but these measures produce artefacts at very low rates. The RI-SPIKE-distance is the only measure that is sensitive to timing information only.

**Comparison with existing methods:** While our results on rate-dependent expectation values for the spike-resolved distances agree with Chicharro et al. (2011), we here go one step further and specifically investigate applicability for very low rates.

**Conclusions:** The most appropriate measure depends on the rates of the data being analysed. Accordingly, we summarize our results in one table that allows an easy selection of the preferred measure for any kind of data.

The detection of directional couplings between dynamics based on measured spike trains is a crucial problem in the understanding of many different systems. In particular, in neuroscience it is important to assess the connectivity between neurons. One of the approaches that can estimate directional coupling from the analysis of point processes is the nonlinear interdependence measure L. Although its efficacy has already been demonstrated, it still needs to be tested under more challenging and realistic conditions prior to an application to real data. Thus, in this paper we use the Hindmarsh-Rose model system to test the method in the presence of noise and for different spiking regimes. We also examine the influence of different parameters and spike train distances. Our results show that the measure L is versatile and robust to various types of noise, and thus suitable for application to experimental data.

**Simona Olmi**, David Angulo-Garcia, Alberto Imparato, Alessandro Torcini

*Scientific Reports* 7, 1577 (2017)

Neurons in the intact brain receive a continuous and irregular synaptic bombardment from excitatory and inhibitory pre-synaptic neurons, which determines the firing activity of the stimulated neuron. In order to investigate the influence of inhibitory stimulation on the firing time statistics, we consider Leaky Integrate-and-Fire neurons subject to inhibitory instantaneous post-synaptic potentials. In particular, we report exact results for the firing rate, the coefficient of variation and the spike train spectrum for various synaptic weight distributions. Our results are not limited to stimulations of infinitesimal amplitude, but they apply as well to finite amplitude post-synaptic potentials, thus being able to capture the effect of rare and large spikes. The developed methods are able to reproduce also the average firing properties of heterogeneous neuronal populations.

Satuvuori E, Mulansky M, Bozanic N, Malvestio I, Zeldenrust F, Lenk K, **Kreuz T**

**Background:** Measures of spike train synchrony are widely used in both experimental and computational neuroscience. Time-scale independent and parameter-free measures, such as the ISI-distance, the SPIKE-distance and SPIKE-synchronization, are preferable to time-scale parametric measures, since by adapting to the local firing rate they take into account all the time scales of a given dataset.

**New method:** In data containing multiple time scales (e.g. regular spiking and bursts) one is typically less interested in the smallest time scales, and a more adaptive approach is needed. Here we propose the A-ISI-distance, the A-SPIKE-distance and A-SPIKE-synchronization, which generalize the original measures by considering the local time scales relative to the global ones. For the A-SPIKE-distance we also introduce a rate-independent extension called the RIA-SPIKE-distance, which focuses specifically on spike timing.

**Results:** The adaptive generalizations A-ISI-distance and A-SPIKE-distance allow one to disregard spike time differences that are not relevant on a more global scale. A-SPIKE-synchronization no longer demands an unreasonably high accuracy for spike doublets and coinciding bursts. Finally, the RIA-SPIKE-distance proves to be independent of the rate ratios between spike trains.

**Comparison with existing methods:** We find that, compared to the original versions, the A-ISI-distance and the A-SPIKE-distance yield improvements for spike trains containing different time scales, without exhibiting any unwanted side effects in other examples. A-SPIKE-synchronization matches spikes more efficiently than SPIKE-synchronization.

**Conclusions:** With these proposals we have completed the picture, since we now provide adaptive generalized measures that are sensitive to firing rate only (A-ISI-distance), to timing only (RIA-SPIKE-distance), and to both at the same time (A-SPIKE-distance).

D. Angulo-Garcia, **S. Luccioli**, **S. Olmi**, A. Torcini

*New Journal of Physics* 19, 053011 (2017)

Inhibition is a key aspect of neural dynamics, playing a fundamental role in the emergence of neural rhythms and the implementation of various information coding strategies. Inhibitory populations are present in several brain structures, and the comprehension of their dynamics is strategic for the understanding of neural processing. In this paper, we clarify the mechanisms underlying a general phenomenon present in pulse-coupled heterogeneous inhibitory networks: inhibition can induce not only suppression of neural activity, as expected, but can also promote neural re-activation. In particular, for globally coupled systems, the number of firing neurons monotonically reduces upon increasing the strength of inhibition (neuronal death). However, the random pruning of connections is able to reverse the action of inhibition, i.e. in a random sparse network a sufficiently strong synaptic strength can surprisingly promote, rather than depress, the activity of neurons (neuronal rebirth). Thus, the number of firing neurons reaches a minimum value at some intermediate synaptic strength. We show that this minimum signals a transition from a regime dominated by neurons with a higher firing activity to a phase where all neurons are effectively sub-threshold and their irregular firing is driven by current fluctuations. We explain the origin of the transition by deriving a mean-field formulation of the problem able to provide the fraction of active neurons as well as the first two moments of their firing statistics. The introduction of a synaptic time scale does not modify the main aspects of the reported phenomenon. However, for sufficiently slow synapses the transition becomes dramatic, and the system passes from a perfectly regular evolution to irregular bursting dynamics. In this latter regime the model provides predictions consistent with experimental findings for a specific class of neurons, namely the medium spiny neurons in the striatum.

Repetitive spatio-temporal propagation patterns are encountered in fields as wide-ranging as climatology, social communication and network science. In neuroscience, perfectly consistent repetitions of the same global propagation pattern are called a *synfire pattern*. For any recording of sequences of discrete events (in neuroscience terminology: sets of spike trains) the questions arise how closely it resembles such a synfire pattern and which are the spike trains that lead/follow. Here we address these questions and introduce an algorithm built on two new indicators, termed *SPIKE-order* and *spike train order*, that define the *synfire indicator* value, which allows one to sort multiple spike trains from leader to follower and to quantify the consistency of the temporal leader-follower relationships for both the original and the optimized sorting. We demonstrate our new approach using artificially generated datasets before we apply it to analyze the consistency of propagation patterns in two real datasets from neuroscience (giant depolarized potentials in mice slices) and climatology (El Niño sea surface temperature recordings). The new algorithm is distinguished by conceptual and practical simplicity, low computational cost, as well as flexibility and universality.

Understanding how the brain functions is one of the biggest challenges of our time. The analysis of experimentally recorded neural firing patterns (spike trains) plays a crucial role in addressing this problem. Here, the PySpike library is introduced, a Python package for spike train analysis providing parameter-free and time-scale independent measures of spike train synchrony. It allows the computation of similarity and dissimilarity profiles, averaged values and distance matrices. Although mainly focusing on neuroscience, PySpike can also be applied in other contexts like climate research or social sciences. The package is available as open source on GitHub and PyPI.
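PySpike itself ships optimized implementations of these measures. Purely as an illustration of the simplest one, here is a minimal plain-Python sketch of a time-averaged ISI-distance (a simplified re-implementation, not the library's API; trains are assumed sorted and padded with auxiliary edge spikes that bracket the observation interval):

```python
import bisect

def isi_at(train, t):
    """Length of the interspike interval containing time t
    (train must be sorted and must bracket t)."""
    i = bisect.bisect_right(train, t)
    return train[i] - train[i - 1]

def isi_distance(x, y, t0, t1, samples=10000):
    """Time-averaged ISI-distance, estimated on a dense sampling grid:
    0 for identical instantaneous rates, approaching 1 for very
    different ones."""
    total = 0.0
    for k in range(samples):
        t = t0 + (t1 - t0) * (k + 0.5) / samples
        a, b = isi_at(x, t), isi_at(y, t)
        total += abs(a - b) / max(a, b)
    return total / samples
```

The library additionally provides the full time-resolved profiles and pairwise distance matrices; the dense-grid average above only reproduces the scalar summary value.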

Mulansky M, Bozanic N, Sburlea A, **Kreuz T**

*IEEE Proceedings on Event-based Control, Communication, and Signal Processing (EBCCSP)*, 1-8 and arXiv (2015) **[PDF]**

Measures of spike train synchrony have proven a valuable tool in both experimental and computational neuroscience. Particularly useful are time-resolved methods such as the ISI- and the SPIKE-distance, which have already been applied in various bivariate and multivariate contexts. Recently, SPIKE-Synchronization was proposed as another time-resolved synchronization measure. It is based on Event-Synchronization and has a very intuitive interpretation. Here, we present a detailed analysis of the mathematical properties of these three synchronization measures. For example, we were able to obtain analytic expressions for the expectation values of the ISI-distance and SPIKE-Synchronization for Poisson spike trains. For the SPIKE-distance we present an empirical formula deduced from numerical evaluations. These expectation values are crucial for interpreting the synchronization of spike trains measured in experiments or numerical simulations, as they represent the point of reference for fully randomized spike trains.
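The coincidence-counting idea behind SPIKE-Synchronization can be sketched as follows. This is a simplified variant with per-spike nearest-neighbor matching (not the exact published definition, which uses mutual coincidence criteria), illustrating the adaptive coincidence window of half the minimum surrounding interspike interval:

```python
def spike_sync(x, y):
    """Simplified SPIKE-Synchronization sketch for two non-empty, sorted
    spike trains: a spike counts as coincident if the nearest spike in
    the other train falls within an adaptive window of half the minimum
    surrounding interspike interval. Returns the fraction of coincident
    spikes in both trains together (1: identical, 0: no coincidences)."""
    def local_isi(train, i):
        isis = []
        if i > 0:
            isis.append(train[i] - train[i - 1])
        if i < len(train) - 1:
            isis.append(train[i + 1] - train[i])
        return min(isis) if isis else float("inf")

    def coincidences(a, b):
        c = 0
        for i, t in enumerate(a):
            j = min(range(len(b)), key=lambda m: abs(b[m] - t))
            tau = 0.5 * min(local_isi(a, i), local_isi(b, j))
            if abs(b[j] - t) < tau:
                c += 1
        return c

    total = len(x) + len(y)
    return (coincidences(x, y) + coincidences(y, x)) / total if total else 1.0
```

Because the window adapts to the local firing rate, the measure stays parameter-free and time-scale independent, which is what makes analytic expectation values for Poisson spike trains tractable.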

Techniques for recording large-scale neuronal spiking activity are developing very fast. This leads to an increasing demand for algorithms capable of analyzing large amounts of experimental spike train data. One of the most crucial and demanding tasks is the identification of similarity patterns with a very high temporal resolution and across different spatial scales. To address this task, in recent years three time-resolved measures of spike train synchrony have been proposed, the ISI-distance, the SPIKE-distance, and event synchronization. The Matlab source codes for calculating and visualizing these measures have been made publicly available. However, due to the many different possible representations of the results the use of these codes is rather complicated and their application requires some basic knowledge of Matlab. Thus it became desirable to provide a more user-friendly and interactive interface. Here we address this need and present SPIKY, a graphical user interface that facilitates the application of time-resolved measures of spike train synchrony to both simulated and real data. SPIKY includes implementations of the ISI-distance, the SPIKE-distance, and the SPIKE-synchronization (an improved and simplified extension of event synchronization) that have been optimized with respect to computation speed and memory demand. It also comprises a spike train generator and an event detector that makes it capable of analyzing continuous data. Finally, the SPIKY package includes additional complementary programs aimed at the analysis of large numbers of datasets and the estimation of significance levels.

**Simona Olmi**, Antonio Politi, Alessandro Torcini

*Front. Comput. Neurosci.* 8(8) (2014)

In a first step toward the comprehension of neural activity, one should focus on the stability of the possible dynamical states. Even the characterization of an idealized regime, such as that of a perfectly periodic spiking activity, reveals unexpected difficulties. In this paper we discuss a general approach to linear stability of pulse-coupled neural networks for generic phase-response curves and post-synaptic response functions. In particular, we present: (1) a mean-field approach developed under the hypothesis of an infinite network and small synaptic conductances; (2) a “microscopic” approach which applies to finite but large networks. As a result, we find that there exist two classes of perturbations: those which are perfectly described by the mean-field approach and those which are subject to finite-size corrections, irrespective of the network size. The analysis of perfectly regular, asynchronous, states reveals that their stability depends crucially on the smoothness of both the phase-response curve and the transmitted post-synaptic pulse. Numerical simulations suggest that this scenario extends to systems that are not covered by the perturbative approach. Altogether, we have described a series of tools for the stability analysis of various dynamical regimes of generic pulse-coupled oscillators, going beyond those that are currently invoked in the literature.

**SPIKY** (Kreuz et al., 2015) is a graphical user interface written in Matlab that facilitates the application of time-resolved measures of spike train synchrony to both simulated and real data. It contains several approaches to analyze spike train synchrony: the standard Peri-Stimulus Time Histogram (PSTH), the ISI-distance, the SPIKE-distance, and SPIKE-synchronization.

For a given data set SPIKY calculates the measures of choice and allows the user to switch between many different visualizations such as dissimilarity profiles, pairwise dissimilarity matrices, or hierarchical cluster trees. SPIKY also includes a spike train generator and an event detector which makes it capable of analyzing continuous data. Finally, the SPIKY package includes complementary programs for the automated analysis of a large number of datasets and for the evaluation of the statistical significance of the results.

The detection of a nonrandom structure from experimental data can be crucial for the classification, understanding, and interpretation of the generating process. We here introduce a rank-based nonlinear predictability score to detect determinism from point process data. Thanks to its modular nature, this approach can be adapted to whatever signature in the data one considers indicative of deterministic structure. After validating our approach using point process signals from deterministic and stochastic model dynamics, we show an application to neuronal spike trains recorded in the brain of an epilepsy patient. While we illustrate our approach in the context of temporal point processes, it can be readily applied to spatial point processes as well.

Recently, the SPIKE-distance has been proposed as a parameter-free and timescale-independent measure of spike train synchrony. This measure is time resolved since it relies on instantaneous estimates of spike train dissimilarity. However, its original definition led to spuriously high instantaneous values for event-like firing patterns. Here we present a substantial improvement of this measure that eliminates this shortcoming. The reliability gained allows us to track changes in instantaneous clustering, i.e., time-localized patterns of (dis)similarity among multiple spike trains. Additional new features include selective and triggered temporal averaging as well as the instantaneous comparison of spike train groups. In a second step, a causal SPIKE-distance is defined such that the instantaneous values of dissimilarity rely on past information only so that time-resolved spike train synchrony can be estimated in real time. We demonstrate that these methods are capable of extracting valuable information from field data by monitoring the synchrony between neuronal spike trains during an epileptic seizure. Finally, the applicability of both the regular and the real-time SPIKE-distance to continuous data is illustrated on model electroencephalographic (EEG) recordings.

The **SPIKE-distance** is an estimator of the dissimilarity between two (or more) spike trains. In contrast to most other spike train distances (such as the Victor-Purpura distance) it is time-resolved and is able to track changes in instantaneous clustering, i.e., time-localized patterns of (dis)similarity among two or more spike trains. Additional features include selective and triggered temporal averaging as well as the instantaneous comparison of spike train groups. The SPIKE-distance can also be formulated as a causal measure which is defined such that the instantaneous values of dissimilarity rely on past information only so that time-resolved spike train synchrony can be estimated in real-time.

The final definition presented here is the one introduced in Kreuz et al., 2013, which improves considerably on the original proposal (Kreuz et al., 2011). Mathematical properties and expectation values for Poisson spike trains can be found in Mulansky et al., 2015.

The van Rossum metric measures the distance between two spike trains. Measuring a single van Rossum distance between one pair of spike trains is not a computationally expensive task; however, many applications require a matrix of distances between all the spike trains in a set or the calculation of a multi-neuron distance between two populations of spike trains. Moreover, often these calculations need to be repeated for many different parameter values. An algorithm is presented here to render these calculations less computationally expensive, making the complexity linear in the number of spikes rather than quadratic.
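The linear-time idea rests on the closed form of the distance for exponential kernels: for a causal kernel e^(-t/τ), the squared distance reduces to pairwise sums of e^(-|Δt|/τ), which running sums along the sorted spike times evaluate in one pass instead of quadratically. A sketch of this trick (assuming sorted input trains and a normalization in which the distance between an empty train and a single spike equals 1; not the authors' code):

```python
import math

def _self_term(t, tau):
    # sum over all ordered pairs of t: n + 2 * sum_{i<j} exp(-(t_j - t_i)/tau),
    # via the recursion m_i = (m_{i-1} + 1) * exp(-(t_i - t_{i-1})/tau)
    m, s = 0.0, 0.0
    for i in range(1, len(t)):
        m = (m + 1.0) * math.exp(-(t[i] - t[i - 1]) / tau)
        s += m
    return len(t) + 2.0 * s

def _one_sided(a, b, strict, tau):
    # sum of exp(-(b_j - a_i)/tau) over pairs with a_i <= b_j (or a_i < b_j),
    # using one exponential running sum along the merge of both trains
    s, run, i, last = 0.0, 0.0, 0, None
    for bj in b:
        while i < len(a) and (a[i] < bj if strict else a[i] <= bj):
            if last is not None:
                run *= math.exp(-(a[i] - last) / tau)
            run += 1.0
            last = a[i]
            i += 1
        if last is not None:
            s += run * math.exp(-(bj - last) / tau)
    return s

def van_rossum(t, u, tau):
    """Linear-time van Rossum distance between sorted spike trains t and u
    (causal exponential kernel)."""
    a = _self_term(t, tau)
    b = _self_term(u, tau)
    # strict/non-strict split covers every cross pair exactly once
    c = _one_sided(t, u, False, tau) + _one_sided(u, t, True, tau)
    return math.sqrt(max(a + b - 2.0 * c, 0.0))
```

Sweeping τ then only repeats these O(N) passes, which is what makes distance matrices over many trains and many parameter values affordable.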

**Measures of neuronal signal synchrony** are estimators of the synchrony between two or sometimes more continuous time series of brain activity which yield low values for independent time series and high values for correlated time series. A complementary class of approaches comprises measures of spike train synchrony which quantify the degree of synchrony between discrete signals.

Synchronization of continuous time series can manifest itself in many different ways. The simplest case of complete synchronization (Fujisaka and Yamada, 1983) can be attained if identical systems are coupled sufficiently strongly so that their states coincide after transients have died out. The concept of generalized synchronization (Afraimovich et al., 1986) introduced for uni-directionally coupled systems, describes the presence of some functional relation between the states of the two systems. Finally, phase synchronization, first described for chaotic oscillators (Rosenblum et al., 1996), is defined as the global entrainment of the phases while the amplitudes may remain uncorrelated.

Following this variety of concepts many different approaches to quantify the degree of synchronization between two continuous signals have been proposed. These approaches comprise linear ones like the cross correlation or the spectral coherence function as well as nonlinear measures like mutual information (Gray, 1990), transfer entropy (Schreiber, 2000), Granger causality (Granger, 1969), or the nonlinear interdependence (Arnhold et al., 1999; Quian Quiroga et al., 2002; Andrzejak et al., 2003). Furthermore, different indices of phase synchronization such as the mean phase coherence (Kuramoto, 1984; Mormann et al., 2000) have been introduced.
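Of the phase synchronization indices mentioned, the mean phase coherence has the simplest form: the modulus of the average phase-difference phasor. Given two instantaneous phase series (e.g., obtained from a Hilbert transform, assumed available), it can be computed as:

```python
import cmath

def mean_phase_coherence(phi1, phi2):
    """Mean phase coherence R = |<exp(i*(phi1 - phi2))>| of two equally
    long phase time series (in radians): R = 1 for perfect phase locking,
    R near 0 for independent phases."""
    z = sum(cmath.exp(1j * (a - b)) for a, b in zip(phi1, phi2))
    return abs(z) / len(phi1)
```

A constant phase lag still yields R = 1, which is exactly the point of phase synchronization: the phases are entrained even when the amplitudes are uncorrelated.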

**Measures of spike train synchrony** (or inversely spike train distances) are estimators of the (dis)similarity between two or sometimes more spike trains. Here spike train refers to a sequence of neuronal action potentials. Under the assumption that neither the shape of the action potential nor the background activity carry relevant information, neuronal responses are reduced to a spike train where the only information maintained is the timing of the individual spikes. A complementary class of approaches comprises measures of neuronal signal synchrony.

Measures that estimate the degree of synchrony between spike trains are important tools for many applications. Among others, they can be used to quantify the reliability of neuronal responses upon repeated presentations of a stimulus (Mainen and Sejnowski, 1995) or to test the performance of neuronal models (Jolivet et al., 2008).

Experimental data comprising both time-continuous flows and point processes are recorded in many scientific disciplines. The characterization of causal interactions from such signals is key to an advanced understanding of the underlying dynamics. We therefore introduce a unified approach to characterize unidirectional couplings between point processes, between flows, as well as between point processes and flows. For this purpose we show and exploit the generality of the asymmetric state similarity conditioning principle. We use Hindmarsh-Rose neuron models and Lorenz oscillators to illustrate the high sensitivity and specificity of our approach.

Time scale parametric spike train distances like the Victor and the van Rossum distances are often applied to study the neural code based on neural stimuli discrimination. Different neural coding hypotheses, such as rate or coincidence coding, can be assessed by combining a time scale parametric spike train distance with a classifier in order to obtain the optimal discrimination performance. The time scale for which the responses to different stimuli are distinguished best is assumed to be the discriminative precision of the neural code. The relevance of temporal coding is evaluated by comparing the optimal discrimination performance with the one achieved when assuming a rate code. We here characterize the measures quantifying the discrimination performance, the discriminative precision, and the relevance of temporal coding. Furthermore, we evaluate the information these quantities provide about the neural code. We show that the discriminative precision is too unspecific to be interpreted in terms of the time scales relevant for encoding. Accordingly, the time scale parametric nature of the distances is mainly an advantage because it allows maximizing the discrimination performance across a whole set of measures with different sensitivities determined by the time scale parameter, but not due to the possibility to examine the temporal properties of the neural code.
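The Victor distance discussed above is the classic time scale parametric metric; its cost-based definition maps onto a standard edit-distance dynamic program. A textbook sketch (not the authors' code):

```python
def victor_purpura(x, y, q):
    """Victor-Purpura spike train metric: minimal cost of transforming
    sorted train x into sorted train y using spike insertions/deletions
    (cost 1 each) and spike shifts (cost q*|dt|). For q -> 0 it reduces
    to the rate-code distance |n_x - n_y|; for large q it counts
    non-coincident spikes."""
    n, m = len(x), len(y)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = float(i)          # delete all remaining spikes of x
    for j in range(m + 1):
        d[0][j] = float(j)          # insert all remaining spikes of y
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(d[i - 1][j] + 1.0,                       # delete
                          d[i][j - 1] + 1.0,                       # insert
                          d[i - 1][j - 1] + q * abs(x[i - 1] - y[j - 1]))  # shift
    return d[n][m]
```

Scanning q and feeding the resulting distances into a classifier is precisely the discrimination procedure the abstract analyzes; the q at which performance peaks is what is conventionally, and as argued above too loosely, read as the "discriminative precision".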

A wide variety of approaches to estimate the degree of synchrony between two or more spike trains have been proposed. One of the most recent methods is the ISI-distance which extracts information from the interspike intervals (ISIs) by evaluating the ratio of the instantaneous firing rates. In contrast to most previously proposed measures it is parameter free and time-scale independent. However, it is not well suited to track changes in synchrony that are based on spike coincidences. Here we propose the SPIKE-distance, a complementary measure which is sensitive to spike coincidences but still shares the fundamental advantages of the ISI-distance. In particular, it is easy to visualize in a time-resolved manner and can be extended to a method that is also applicable to larger sets of spike trains. We show the merit of the SPIKE-distance using both simulated and real data.

Throughout the brain, neurons encode information in fundamental units of spikes. Each spike represents the combined thresholding of synaptic inputs and intrinsic neuronal dynamics. Here, we address a basic question of spike train formation: how do perithreshold synaptic inputs perturb the output of a spiking neuron? We recorded from single entorhinal principal cells in vitro and drove them to spike steadily at 5 Hz (theta range) with direct current injection, then used a dynamic-clamp to superimpose strong excitatory conductance inputs at varying rates. Neurons spiked most reliably when the input rate matched the intrinsic neuronal firing rate. We also found a striking tendency of neurons to preserve their rates and coefficients of variation, independently of input rates. As mechanisms for this rate maintenance, we show that the efficacy of the conductance inputs varied with the relationship of input rate to neuronal firing rate, and with the arrival time of the input within the natural period. Using a novel method of spike classification, we developed a minimal Markov model that reproduced the measured statistics of the output spike trains and thus allowed us to identify and compare contributions to the rate maintenance and resonance. We suggest that the strength of rate maintenance may be used as a new categorization scheme for neuronal response and note that individual intrinsic spiking mechanisms may play a significant role in forming the rhythmic spike trains of activated neurons; in the entorhinal cortex, individual pacemakers may dominate production of the regional theta rhythm.

Measures of multiple spike train synchrony are essential in order to study issues such as spike timing reliability, network synchronization, and neuronal coding. These measures can broadly be divided in multivariate measures and averages over bivariate measures. One of the most recent bivariate approaches, the ISI-distance, employs the ratio of instantaneous interspike intervals (ISIs). In this study we propose two extensions of the ISI-distance, the straightforward averaged bivariate ISI-distance and the multivariate ISI-diversity based on the coefficient of variation. Like the original measure these extensions combine many properties desirable in applications to real data. In particular, they are parameter-free, time scale independent, and easy to visualize in a time-resolved manner, as we illustrate with in vitro recordings from a cortical neuron. Using a simulated network of Hindmarsh–Rose neurons as a controlled configuration we compare the performance of our methods in distinguishing different levels of multi-neuron spike train synchrony to the performance of six other previously published measures. We show and explain why the averaged bivariate measures perform better than the multivariate ones and why the multivariate ISI-diversity is the best performer among the multivariate methods. Finally, in a comparison against standard methods that rely on moving window estimates, we use single-unit monkey data to demonstrate the advantages of the instantaneous nature of our methods.

Estimating the degree of synchrony or reliability between two or more spike trains is a frequent task in both experimental and computational neuroscience. In recent years, many different methods have been proposed that typically compare the timing of spikes on a certain time scale to be optimized by the analyst. Here, we propose the ISI-distance, a simple complementary approach that extracts information from the interspike intervals by evaluating the ratio of the instantaneous firing rates. The method is parameter free, time scale independent and easy to visualize as illustrated by an application to real neuronal spike trains obtained in vitro from rat slices. In a comparison with existing approaches on spike trains extracted from a simulated Hindmarsh–Rose network, the ISI-distance performs as well as the best time-scale-optimized measure based on spike timing.

We study the regularity of noise-induced excitations in the FitzHugh–Nagumo (FHN) neuronal model subject to excitatory and inhibitory high-frequency input with and without correlations. For each value of the correlation a relative maximum of spike coherence can be observed for intermediate noise strengths (coherence resonance). Moreover, the FHN system exhibits an absolute maximum of coherent spiking for intermediate values of both the noise amplitude and the strength of correlation (double coherence resonance). The underlying mechanisms can be explained by means of the discrete input statistics.
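A minimal stochastic simulation illustrates the setting. The sketch below is an Euler-Maruyama integration of the excitable FHN model with additive white noise; parameter values are illustrative choices, not those of the paper, and the "double" resonance over correlated inputs would require the discrete input statistics discussed in the abstract:

```python
import math
import random

def fhn_spike_times(noise, t_max=2000.0, dt=0.05, seed=1):
    """Euler-Maruyama simulation of the excitable FitzHugh-Nagumo model
    dv = (v - v^3/3 - w) dt + noise*dW,  dw = eps*(v + a) dt,
    with a > 1 (excitable regime), so all spikes are noise-induced.
    Returns the upward threshold-crossing times of v."""
    rng = random.Random(seed)
    eps, a = 0.01, 1.05
    v, w = -a, -a + a**3 / 3.0          # deterministic rest state
    spikes, above = [], False
    sq = noise * math.sqrt(dt)
    t = 0.0
    while t < t_max:
        dv = (v - v**3 / 3.0 - w) * dt + sq * rng.gauss(0.0, 1.0)
        dw = eps * (v + a) * dt
        v, w = v + dv, w + dw
        if v > 1.0 and not above:       # spike onset
            spikes.append(t)
            above = True
        elif v < 0.0:                   # excursion over, re-arm detector
            above = False
        t += dt
    return spikes

def cv_of_isis(spikes):
    """Coefficient of variation of the interspike intervals; a smaller
    CV means more regular (coherent) spiking."""
    isis = [b - a for a, b in zip(spikes, spikes[1:])]
    mu = sum(isis) / len(isis)
    var = sum((x - mu) ** 2 for x in isis) / len(isis)
    return math.sqrt(var) / mu
```

Plotting `cv_of_isis(fhn_spike_times(sigma))` against the noise strength sigma traces out the coherence resonance curve: the CV passes through a minimum at intermediate noise.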

We analyze the response of the Hodgkin–Huxley neuron to a large number of uncorrelated stochastic inhibitory and excitatory postsynaptic spike trains. In order to clarify the various mechanisms responsible for noise-induced spike triggering we examine the model in its silent regime. We report the coexistence of two distinct coherence resonances: the first one at low noise is due to the stimulation of correlated subthreshold oscillations; the second one at intermediate noise variances is instead related to the regularization of the emitted spike trains.

The investigation of synchronization phenomena on measured experimental data such as biological time series has recently become an increasing focus of interest. Different approaches for measuring synchronization have been proposed that rely on certain characteristic features of the dynamical system under investigation. For experimental data the underlying dynamics are usually not completely known, therefore it is difficult to decide a priori which synchronization measure is most suitable for an analysis. In this study we use three different coupled model systems to create a ‘controlled’ setting for a comparison of six different measures of synchronization. All measures are compared to each other with respect to their ability to distinguish between different levels of coupling and their robustness against noise. Results show that the measure to be applied to a certain task cannot be chosen according to a fixed criterion but rather pragmatically as the measure which most reliably yields plausible information in test applications, although certain dynamical features of a system under investigation (e.g., power spectra, dimension) may render certain measures more suitable than others.

We study the influence of correlations among discrete stochastic excitatory or inhibitory inputs on the response of the FitzHugh-Nagumo neuron model. For any level of correlation, the emitted signal exhibits at some finite noise intensity a maximal degree of regularity, i.e., a coherence resonance. Furthermore, for either inhibitory or excitatory correlated stimuli, a double coherence resonance is observable. Double coherence resonance refers to a (absolute) maximum coherence in the output occurring for an optimal combination of noise variance and correlation. All of these effects can be explained by taking advantage of the discrete nature of the correlated inputs.

The response of the Hodgkin-Huxley neuronal model subjected to stochastic uncorrelated spike trains originating from a large number of inhibitory and excitatory post-synaptic potentials is analyzed in detail. The model is examined in its three fundamental dynamical regimes: silence, bistability, and repetitive firing. Its response is characterized in terms of statistical indicators (interspike-interval distributions and their first moments) as well as of dynamical indicators (autocorrelation functions and conditional entropies). In the silent regime, the coexistence of two different coherence resonances is revealed: one occurs at quite low noise and is related to the stimulation of subthreshold oscillations around the rest state; the second one at intermediate noise variance is associated with the regularization of the sequence of spikes emitted by the neuron. Bistability in the low noise limit can be interpreted in terms of jumping processes across barriers activated by stochastic fluctuations. In the repetitive firing regime a maximization of incoherence is observed at finite noise variance. Finally, the mechanisms responsible for the different features appearing in the interspike-interval distributions like multimodality and exponential tails are clearly identified in the various regimes.

Andrzejak RG, Mormann F, Widman G, **Kreuz T**, Elger CE, Lehnertz K

*Epilepsy Research* **69**, 30 (2006) **[PDF]**

An advanced characterization of the complicated dynamical system brain is one of science's biggest challenges. Nonlinear time series analysis allows characterizing nonlinear dynamical systems in which low-dimensional nonlinearity gives rise to complex and irregular behavior. While several studies indicate that nonlinear methods can extract valuable information from neuronal dynamics, others doubt their necessity and conjecture that the same information can be obtained using classical linear techniques. To address this issue, we compared these two concepts, but included furthermore a combination of nonlinear measures with surrogates, an approach that has been designed to specifically focus on nonlinearity. As a benchmark we used the discriminative power to detect the seizure-generating hemisphere in medically intractable mesial temporal lobe epilepsy. We analyzed intracranial electroencephalographic recordings from the seizure-free interval of 29 patients. While the performance of both linear and nonlinear measures was weak, if not insignificant, a very high performance was obtained by the use of surrogate-corrected measures. Focusing on nonlinearity by using a combination of nonlinear measures with surrogates appears as the key to a successful characterization of the spatial distribution of the epileptic process.

Mormann F, **Kreuz T**, Rieke C, Andrzejak RG, Kraskov A, David P, Elger CE, Lehnertz K

*Clin Neurophysiol* **116**, 569 (2005) **[PDF]**

**Objective:** An important issue in epileptology is the question whether information extracted from the EEG of epilepsy patients can be used for the prediction of seizures. Several studies have claimed evidence for the existence of a pre-seizure state that can be detected using different characterizing measures. In this paper, we evaluate the predictability of seizures by comparing the predictive performance of a variety of univariate and bivariate measures comprising both linear and non-linear approaches.

**Methods:** We compared 30 measures in terms of their ability to distinguish between the interictal period and the pre-seizure period. After completely analyzing continuous intracranial multi-channel recordings from five patients lasting over days, we used ROC curves to distinguish between the amplitude distributions of interictal and preictal time profiles calculated for the respective measures. We compared different evaluation schemes including channelwise and seizurewise analysis plus constant and adaptive reference levels. Particular emphasis was placed on statistical validity and significance.

**Results:** Univariate measures showed statistically significant performance only in a channelwise, seizurewise analysis using an adaptive baseline. Preictal changes for these measures occurred 5–30 min before seizures. Bivariate measures exhibited high performance values reaching statistical significance for a channelwise analysis using a constant baseline. Preictal changes were found at least 240 min before seizures. Linear measures were found to perform similar or better than non-linear measures.

**Conclusions:** Results provide statistically significant evidence for the existence of a preictal state. Based on our findings, the most promising approach for prospective seizure anticipation could be a combination of bivariate and univariate measures.

**Significance:** Many measures reported capable of seizure prediction in earlier studies are found to be insignificant in performance, which underlines the need for statistical validation in this field.

**Kreuz T**, Andrzejak RG, Mormann F, Kraskov A, Stögbauer H, Elger CE, Lehnertz K, Grassberger P

*Phys Rev E* **69**, 061915 (2004) **[PDF]**

In a growing number of publications it is claimed that epileptic seizures can be predicted by analyzing the electroencephalogram (EEG) with different characterizing measures. However, many of these studies suffer from a severe lack of statistical validation. Only rarely are results passed to a statistical test and verified against some null hypothesis H0 in order to quantify their significance. In this paper we propose a method to statistically validate the performance of measures used to predict epileptic seizures. From measure profiles rendered by applying a moving-window technique to the electroencephalogram we first generate an ensemble of surrogates by a constrained randomization using simulated annealing. Subsequently the seizure prediction algorithm is applied to the original measure profile and to the surrogates. If detectable changes before seizure onset exist, highest performance values should be obtained for the original measure profiles, and the null hypothesis “The measure is not suited for seizure prediction” can be rejected. We demonstrate our method by applying two measures of synchronization to a quasicontinuous EEG recording and by evaluating their predictive performance using a straightforward seizure prediction statistics. We would like to stress that the proposed method is rather universal and can be applied to many other prediction and detection problems.

We demonstrate by means of a simple example that the arbitrariness of defining a phase from an aperiodic signal is not just an academic problem, but is more serious and fundamental. Decomposition of the signal into components with positive phase velocities is proposed as a solution to this problem.

The concept of surrogates allows testing results from time series analysis against specified null hypotheses. In application to bivariate model dynamics we here compare different types of surrogates, each designed to test against a different null hypothesis, e.g., an underlying bivariate linear stochastic process. Two measures that aim at a characterization of interdependence between nonlinear deterministic dynamics were used as discriminating statistics. We analyze eight different stochastic and deterministic models not only to demonstrate the power of the surrogates, but also to reveal some pitfalls and limitations.

We agree with the Comment by Duckrow and Albano [Phys. Rev. E 67, 063901 (2003)] that mutual information, estimated with an optimized algorithm, can be a useful tool for studying synchronization in real data. However, we point out that the improvement they found is mainly due to an interesting but nonstandard embedding technique used, and not so much due to the algorithm used for the estimation of mutual information itself. We also address the issue of stationarity of electroencephalographic (EEG) data.

Rieke C, Mormann F, Andrzejak RG, Kreuz T, David P, Elger CE, Lehnertz K

*IEEE Trans Biomed Eng* **50**, 634 (2003) **[PDF]**

A number of recent studies indicate that nonlinear electroencephalogram (EEG) analyses make it possible to define a state predictive of an impending epileptic seizure. In this paper, we combine a method for detecting nonlinear determinism with a novel test for stationarity to characterize EEG recordings from both the seizure-free interval and the preseizure phase. We discuss differences between these periods, particularly an increased occurrence of stationary, nonlinear segments prior to seizures. These differences seem most prominent for recording sites within the seizure-generating area and for EEG segments of less than one minute in length.

The exact mechanisms leading to the occurrence of epileptic seizures in humans are still poorly understood. It is widely accepted, however, that the process of seizure generation is closely associated with an abnormal synchronization of neurons. In order to investigate this process, we here measure phase synchronization between different regions of the brain using intracranial EEG recordings. Based on our preliminary finding of a preictal drop in synchronization, we investigate whether this phenomenon can be used as a sensitive and specific criterion to characterize a preseizure state and to distinguish this state from the interictal interval.

Applying an automated technique for detecting decreased synchronization to EEG recordings from a group of 18 patients with focal epilepsy comprising a total of 117 h, we observe a characteristic decrease in synchronization prior to 26 out of 32 analyzed seizures at a very high specificity as tested on interictal recordings. The duration of this preictal state is found to range from several minutes up to a few hours. Investigation of the spatial distribution of preictal desynchronization indicates that the process of seizure generation in focal epilepsy is not necessarily confined to the focus itself but may instead involve more distant, even contralateral areas of the brain. Finally, we demonstrate an intrahemispheric asymmetry in the spatial dynamics of preictal desynchronization that is found in the majority of seizures and appears to be an immanent part of the mechanisms underlying the initiation of seizures in humans.

Mormann F, Andrzejak RG, **Kreuz T**, Rieke C, David P, Elger CE, Lehnertz K

*Phys. Rev. E* **67**, 021912 (2003) **[PDF]**

The question whether information extracted from the electroencephalogram (EEG) of epilepsy patients can be used for the prediction of seizures has recently attracted much attention. Several studies have reported evidence for the existence of a preseizure state that can be detected using different measures derived from the theory of dynamical systems. Most of these studies, however, have neglected to sufficiently investigate the specificity of the observed effects or suffer from other methodological shortcomings. In this paper we present an automated technique for the detection of a preseizure state from EEG recordings using two different measures of synchronization between recording sites, namely, the mean phase coherence as a measure of phase synchronization and the maximum linear cross correlation as a measure of lag synchronization. Based on the observation of characteristic drops in synchronization prior to seizure onset, we used this phenomenon for the characterization of a preseizure state and its distinction from the remaining seizure-free interval. After optimizing our technique on a group of 10 patients with temporal lobe epilepsy we obtained a successful detection of a preseizure state prior to 12 out of 14 analyzed seizures for both measures at a very high specificity as tested on recordings from the seizure-free interval. After checking for in-sample overtraining via cross validation, we applied a surrogate test to validate the observed predictability. Based on our results, we discuss the differences between the two synchronization measures in terms of the dynamics underlying seizure generation in focal epilepsies.
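The two synchronization measures named in the abstract can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation; the function names are ours, and the use of the analytic-signal (Hilbert-transform) phase for the mean phase coherence is the standard choice, assumed here:

```python
import numpy as np
from scipy.signal import correlate, hilbert

def mean_phase_coherence(x, y):
    """Phase synchronization: modulus of the time average of
    exp(i * (phi_x - phi_y)), with phases from the analytic signal."""
    phi_x = np.angle(hilbert(x))
    phi_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phi_x - phi_y))))

def max_linear_cross_correlation(x, y):
    """Lag synchronization: maximum modulus of the normalized linear
    cross-correlation function, taken over all lags."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    c = correlate(x, y, mode="full") / len(x)
    return np.abs(c).max()
```

Both measures are close to 1 for two signals that differ only by a constant phase shift or time lag, and close to 0 for independent noise, which is what makes a preictal *drop* in either quantity interpretable as desynchronization.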

Lehnertz K, Mormann F, **Kreuz T**, Andrzejak RG, Rieke C, David P, Elger CE

*IEEE Trans Biomed Eng* (Special Issue), **22**, 57 (2003) **[PDF]**

Attempting to Increase Insight into the Spatio-Temporal Dynamics of the Epileptogenic Process to Address One of the Greatest Challenges in Epileptology

Andrzejak RG, Mormann F, **Kreuz T**, Rieke C, Kraskov A, Elger CE, Lehnertz K

*Phys Rev E* **67**, 010901 (2003) **[PDF]**

A rapidly growing number of studies deal with the prediction of epileptic seizures. For this purpose, various techniques derived from linear and nonlinear time series analysis have been applied to the electroencephalogram of epilepsy patients. In none of these works, however, is the performance of the seizure prediction statistics tested against a null hypothesis, an otherwise ubiquitous concept in science. In consequence, the evaluation of the reported performance values is problematic. Here, we propose the technique of seizure time surrogates based on a Monte Carlo simulation to remedy this deficit.
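The seizure-time-surrogate idea can be sketched as follows. This is a hypothetical minimal version: all names are illustrative, and the surrogate onsets are simply drawn uniformly over the recording, whereas the published method may impose further constraints (e.g. on interseizure intervals):

```python
import numpy as np

def surrogate_test(alarm_times, seizure_times, horizon, duration,
                   n_surrogates=999, seed=0):
    """Monte Carlo test of a seizure prediction statistic: compare the
    fraction of seizures preceded by an alarm within `horizon` against
    the same statistic for randomly drawn surrogate seizure times."""
    rng = np.random.default_rng(seed)

    def hit_rate(onsets):
        hits = sum(any(0 < s - a <= horizon for a in alarm_times)
                   for s in onsets)
        return hits / len(onsets)

    observed = hit_rate(seizure_times)
    # Surrogate onsets keep the number of seizures but destroy any
    # relation between alarms and seizures (the null hypothesis).
    surrogates = [hit_rate(rng.uniform(0, duration, len(seizure_times)))
                  for _ in range(n_surrogates)]
    # One-sided Monte Carlo p-value
    p = (1 + sum(s >= observed for s in surrogates)) / (1 + n_surrogates)
    return observed, p
```

A small p-value means the observed prediction performance is unlikely under randomly placed seizure times, which is exactly the null-hypothesis test the abstract argues is missing from earlier studies.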

We propose a simple method to measure synchronization and time-delay patterns between signals. It is based on the relative timings of events in the time series, defined, e.g., as local maxima. The degree of synchronization is obtained from the number of quasisimultaneous appearances of events, and the delay is calculated from the precedence of events in one signal with respect to the other. Moreover, we can easily visualize the time evolution of the delay and synchronization level with an excellent resolution. We apply the algorithm to short rat electroencephalogram (EEG) signals, some of them containing spikes. We also apply it to an intracranial human EEG recording containing an epileptic seizure, and we propose that the method might be useful for the detection of epileptic foci. It can be easily extended to other types of data and it is very simple and fast, thus being suitable for on-line implementations.
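The event-based measure described above can be sketched as follows, assuming a fixed coincidence window `tau` (in practice chosen smaller than the typical inter-event interval so that an event is paired with at most one partner); the names and the normalization shown are illustrative:

```python
import numpy as np

def event_synchronization(tx, ty, tau):
    """Return (Q, q) from two arrays of event times: Q in [0, 1] measures
    synchronization, q in [-1, 1] measures which signal leads."""
    def coincidences(a, b):
        # count events in a that occur within (0, tau] after an event
        # in b; exactly simultaneous events count one half
        n = 0.0
        for ta in a:
            for tb in b:
                d = ta - tb
                if 0 < d <= tau:
                    n += 1.0
                elif d == 0:
                    n += 0.5
        return n
    norm = np.sqrt(len(tx) * len(ty))
    Q = (coincidences(tx, ty) + coincidences(ty, tx)) / norm
    q = (coincidences(ty, tx) - coincidences(tx, ty)) / norm  # >0: x leads
    return Q, q
```

Because only event times (e.g. the positions of local maxima or spikes) enter the computation, the measure is cheap enough for the on-line use mentioned in the abstract, and `q` evaluated in a sliding window yields the time-resolved delay pattern.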

We study the synchronization between left and right hemisphere rat electroencephalographic (EEG) channels by using various synchronization measures, namely nonlinear interdependences, phase synchronizations, mutual information, cross correlation, and the coherence function. In passing we show a close relation between two recently proposed phase synchronization measures and we extend the definition of one of them. In three typical examples we observe that, with the exception of mutual information, all these measures give a useful quantification that would be hard to guess beforehand from the raw data. Despite their differences, the results are qualitatively the same. Therefore, we claim that the applied measures are valuable for the study of synchronization in real data. Moreover, in the particular case of EEG signals their use as complementary variables could be of clinical relevance.
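Of the measures compared, mutual information is perhaps the simplest to sketch. Below is a basic histogram estimator (the bin count and function name are illustrative, and such plug-in estimators are positively biased for finite data, one reason this measure can perform worse than the others on short EEG segments):

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram (plug-in) estimate of the mutual information
    I(X;Y) = sum p(x,y) log2[ p(x,y) / (p(x) p(y)) ], in bits."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                 # joint probabilities
    px = pxy.sum(axis=1)             # marginal of x
    py = pxy.sum(axis=0)             # marginal of y
    mask = pxy > 0                   # 0 * log 0 := 0
    return float(np.sum(pxy[mask] *
                        np.log2(pxy[mask] / np.outer(px, py)[mask])))
```

For identical signals the estimate approaches the entropy of the binned signal, while for independent signals it is close to zero (up to the finite-sample bias).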

Lehnertz K, Andrzejak RG, Arnhold J, **Kreuz T**, Mormann F, Rieke C, Widman G, Elger CE

*J Clin Neurophysiol* **18**, 209-222 (2001) **[PDF]**

Several recent studies emphasize the high value of nonlinear EEG analysis particularly for improved characterization of epileptic brain states. In this review the authors report their work to increase insight into the spatial and temporal dynamics of the epileptogenic process. Specifically, they discuss possibilities for seizure anticipation, which is one of the most challenging aspects of epileptology. Although there are numerous studies exploring basic neuronal mechanisms that are likely to be associated with seizures, to date no definite information is available regarding how, when, or why a seizure occurs. Nonlinear EEG analysis now provides strong evidence that the interictal–ictal state transition is not an abrupt phenomenon. Rather, findings indicate that it is indeed possible to detect a preseizure phase. The unequivocal definition of such a state with a sufficient length would enable investigations of basic mechanisms leading to seizure initiation in humans, and development of adequate seizure prevention strategies.