Monday, 07 February 2022

Kazuyuki Aihara
Title: DNB (Dynamical Network Biomarkers) Analysis and its Applications to Neuroscience
Abstract:

In this talk, I will first review the method of our DNB (Dynamical Network Biomarkers) analysis, which detects early warning signals of critical transitions in complex systems. I will then explain possible applications of DNB analysis to neuroscience.
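The quantitative idea behind such early-warning analysis can be caricatured in a toy sketch (my own construction, not the speaker's method; the composite index and the synthetic data are assumptions): as a transition approaches, the standard deviation and the mutual correlation within a dominant group of variables rise while their correlation with the rest of the network falls, so a composite index spikes.

```python
import numpy as np

def dnb_index(window, group, rest):
    """Composite early-warning index on one observation window.

    window: (T, N) array of observations; group/rest: column indices.
    Returns sd_in * corr_in / corr_out, which is expected to rise
    sharply as a critical transition approaches.
    """
    g = window[:, group]
    sd_in = g.std(axis=0).mean()                      # mean SD within the group
    corr_g = np.abs(np.corrcoef(g.T))
    corr_in = corr_g[np.triu_indices(len(group), 1)].mean()
    corr_out = np.mean([abs(np.corrcoef(window[:, i], window[:, j])[0, 1])
                        for i in group for j in rest])
    return sd_in * corr_in / corr_out

# Synthetic data: variables 0-2 acquire a growing shared fluctuation,
# mimicking a dominant group approaching a transition
rng = np.random.default_rng(0)
T, N = 200, 6
data = rng.normal(size=(T, N))
ramp = np.linspace(0.1, 3.0, T)
data[:, :3] += ramp[:, None] * rng.normal(size=(T, 1))

early = dnb_index(data[:50], [0, 1, 2], [3, 4, 5])
late = dnb_index(data[-50:], [0, 1, 2], [3, 4, 5])    # rises toward the transition
```

In this synthetic example the index computed on the last window is much larger than on the first, which is the signature the DNB criterion looks for.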

Sarthak Chandra
Title: Reservoir computing in noisy real-world systems: network inference and dynamical skeletons
Abstract:

A reservoir computer (RC) is a machine learning (ML) architecture that uses the high-dimensional internal dynamics of the ML device to perform time-series analysis of dynamical systems. One particularly exciting application of reservoir computing has been the inference of the network structure underlying dynamical systems. However, previous work in this direction (using RCs and otherwise) has largely considered relatively simplistic problems, where most inference techniques yield a clear separation between true and false network edges. Here I will talk about the challenges associated with applying RCs to neural data obtained from C. elegans. In particular, we show a novel surrogate-data construction method that allows for accurate link assessment even when the data do not lead to clear indications of connectivity.

When working with such real-world data, an important issue that arises is the presence of dynamical noise, i.e., continual stochastic perturbations to system dynamics. I will discuss how RCs can be used as effective tools to filter out dynamical noise, allowing for the reconstruction of a "dynamical skeleton", i.e., the underlying deterministic system that governs the dynamics of the data. We demonstrate that RCs can perform effective filtering without any access to the clean signal and by training solely on the stochastically perturbed dynamical trajectories, even when the dynamical noise causes significant distortions to the system attractor.
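The filtering idea can be illustrated with a minimal echo-state-network sketch (my own assumptions; the sine-wave system, reservoir size, and ridge parameter are illustrative, not the speakers' setup): a fixed random reservoir is driven by the noisy signal alone, and a linear readout trained to predict the next noisy observation ends up tracking the underlying clean dynamics, because the per-step noise is unpredictable and averages out.

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy observations of a simple deterministic signal: a sine wave
# with independent stochastic perturbations added at every step
T = 1000
clean = np.sin(0.1 * np.arange(T))
noisy = clean + 0.1 * rng.normal(size=T)

# Fixed random reservoir (echo-state network), spectral radius < 1
N = 200
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
w_in = rng.normal(size=N)

# Drive the reservoir with the noisy signal only (no clean signal used)
r = np.zeros(N)
states = np.zeros((T, N))
for k in range(T):
    r = np.tanh(W @ r + w_in * noisy[k])
    states[k] = r

# Ridge-regression readout trained to predict the *next* noisy value;
# since the per-step noise is unpredictable, the readout tracks the
# underlying deterministic signal instead
X, y = states[100:-1], noisy[101:]                # discard the transient
beta = np.linalg.solve(X.T @ X + 0.1 * np.eye(N), X.T @ y)
pred = X @ beta

err_pred = np.mean((pred - clean[101:]) ** 2)             # filtered error
err_noisy = np.mean((noisy[101:] - clean[101:]) ** 2)     # raw noise level
```

In this toy setting the one-step predictions lie closer to the clean signal than the raw noisy observations do, even though the clean signal is never seen during training.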

Klaus Lehnertz
Title: From nonlinear dynamics to complex networks: improving understanding of epilepsy
Abstract:

Epilepsy is the most common chronic brain disease, affecting approximately 50 million people worldwide. Epileptic seizures are the cardinal symptom of this multifaceted disease and are usually characterized by overly synchronized firing of neurons. In about 25% of individuals, seizures cannot be controlled by any available therapy. Although epilepsy is probably the oldest disease known to mankind, knowledge about the mechanisms underlying the generation, spread, and termination of the extreme event seizure in humans is still fragmentary. Over the last decades, an improved characterization of the spatio-temporal dynamics of the epileptic process has been achieved with concepts and methods from nonlinear dynamics, statistical physics, synchronization, and network theory. In the first part of my talk, I shall provide a brief overview of the progress that has been made in the field: from preliminary descriptions of pre-seizure phenomena to implantable seizure prediction and prevention systems. The second part of my talk is devoted to more recent developments that promise an advanced characterization of complex brain dynamics, and we shall discuss the extensions necessary to further advance the field.

Bernardo Gabriel Mindlin
Title: Birdsong as a model for learned, complex behavior
Abstract:

Birdsong is an exquisite, complex behavior which, for approximately 40% of known bird species, requires some degree of learning. The neural architecture needed for generating the instructions involved in birdsong generation is relatively well known, although the actual dynamics emerging from it are still being investigated. In this talk I will review experimental results as well as the main theoretical models currently under discussion. I will focus on the interaction between the nervous system and the rich, complex nonlinear biomechanics involved in birdsong production.

Tuesday, 08 February 2022

Jaideep Pathak
Title: Data-driven and data-augmented modeling of chaotic spatiotemporal dynamical systems with applications in geophysics and computational fluid dynamics
Abstract:

Advances in machine learning have the potential to be important in developing tools for climate modeling, weather prediction, and computational fluid dynamics. In this talk I will consider some aspects of purely data-driven models for forecasting chaotic dynamical systems and discuss extensions of the work for emulating large-scale circulation models of the atmosphere. I will also present some recent results on scaling data-driven forecasting techniques for forecasting atmospheric flows at resolutions comparable to operational numerical weather forecasting models using modern deep learning architectures. Finally, I will briefly discuss potential techniques of combining machine learning with numerical models for applications in computational fluid dynamics and geophysics.

Collins Assisi
Title: Neural encoding of odours in a turbulent environment
Abstract:

Olfaction, along with the other chemical senses, is phylogenetically the oldest sense. Therefore, understanding the mechanisms underlying olfactory perception will provide insights into a successful, and perhaps optimal, biological algorithm for processing complex information. Olfaction, like other sensory systems, uses the timing of spikes to encode incoming sensory inputs. The identity of an odorant arriving in a steady stream can be reliably ascertained by examining the spatiotemporal sequence of spikes generated by the brain networks involved in olfaction. However, odor inputs come not as a steady stream but riding upon chaotically pulsed plumes of air. Each time an input arrives, it follows a unique temporal pattern. As a consequence, the spatiotemporal spike sequences that presumably encode the odor must also differ every time an animal encounters a temporally varying odor input. It is a wonder, then, that animals are capable of decoding sensory cues and navigating highly complex and turbulent ‘odorscapes’. Why, despite varying inputs, are odor percepts invariant (alternatively, why does a rose always smell like a rose)? I will address this question using a computational model of the insect olfactory system.

Wednesday, 09 February 2022

Kazuyuki Aihara
Title: DNB (Dynamical Network Biomarkers) Analysis and its Applications to Neuroscience
Abstract:

In this talk, I will first review the method of our DNB (Dynamical Network Biomarkers) analysis, which detects early warning signals of critical transitions in complex systems. I will then explain possible applications of DNB analysis to neuroscience.

Klaus Lehnertz
Title: From nonlinear dynamics to complex networks: improving understanding of epilepsy
Abstract:

Epilepsy is the most common chronic brain disease, affecting approximately 50 million people worldwide. Epileptic seizures are the cardinal symptom of this multifaceted disease and are usually characterized by overly synchronized firing of neurons. In about 25% of individuals, seizures cannot be controlled by any available therapy. Although epilepsy is probably the oldest disease known to mankind, knowledge about the mechanisms underlying the generation, spread, and termination of the extreme event seizure in humans is still fragmentary. Over the last decades, an improved characterization of the spatio-temporal dynamics of the epileptic process has been achieved with concepts and methods from nonlinear dynamics, statistical physics, synchronization, and network theory. In the first part of my talk, I shall provide a brief overview of the progress that has been made in the field: from preliminary descriptions of pre-seizure phenomena to implantable seizure prediction and prevention systems. The second part of my talk is devoted to more recent developments that promise an advanced characterization of complex brain dynamics, and we shall discuss the extensions necessary to further advance the field.

Thursday, 10 February 2022

Manish Shrimali
Title: Reservoir Computing with a Pendulum
Abstract:

The study of the natural information-processing capacity of dynamical systems has attracted the attention of researchers over the last few decades. Reservoir Computing (RC) provides a computational framework to exploit this capacity: complex machine learning tasks are performed using a dynamical system as the computational substrate. There are several examples of dynamical systems successfully used as reservoirs, chosen mainly on the basis of a few usual criteria: high dimensionality, rich nonlinearity, and fading memory. In this talk we will discuss the performance of a low-dimensional dynamical system, namely a single pendulum, as a reservoir. In conventional neural network models there is likewise a notion of a single neuron being enough to perform complex tasks. Our objective is to exploit a strikingly simple system like a single pendulum to solve intelligent computational tasks. We also discuss remarkable results from a proof-of-principle experimental setup.
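As a toy version of the idea (my own sketch; the pendulum parameters, the delay embedding, and the two-step-recall task are assumptions, not the speaker's setup), one can drive a damped pendulum with an input torque and train a linear readout on its delay-embedded trajectory to recall past inputs, a standard memory benchmark for reservoirs:

```python
import numpy as np

rng = np.random.default_rng(2)

# Random input sequence to be processed by the pendulum "reservoir"
T = 2000
u = rng.uniform(-0.5, 0.5, size=T)

# Damped pendulum driven by the input as an external torque
# (simple semi-implicit Euler integration; parameters are illustrative)
dt, gamma, a = 0.1, 0.5, 0.5
theta, omega = 0.0, 0.0
state = np.zeros((T, 2))
for k in range(T):
    omega += dt * (-gamma * omega - np.sin(theta) + a * u[k])
    theta += dt * omega
    state[k] = theta, omega

# Delay-embed the pendulum trajectory to form readout features
d = 10
X = np.array([state[k - d:k].ravel() for k in range(d, T)])
X = np.c_[X, np.ones(len(X))]          # bias term

# Memory task: recall the input from two steps in the past
y = u[d - 2:T - 2]

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
nmse = np.mean((X @ beta - y) ** 2) / np.var(y)   # normalized error
```

Because the pendulum's angular velocity integrates the input history, the linear readout can recover recent inputs from the embedded trajectory with small normalized error, which is the essence of the fading-memory criterion mentioned above.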

Sarthak Chandra
Title: Self-organized formation of topologically robust grid cell modules from smooth gradients
Abstract:

Modular structures in myriad forms (genetic, structural, functional) are ubiquitous in the brain. While modularization may be shaped by genetic instruction or extensive learning, the mechanisms of module emergence are poorly understood. Here, we explore complementary mechanisms in the form of bottom-up dynamics that push systems spontaneously toward modularization. As a paradigmatic example of modularity in the brain, we focus on the grid cell system. Grid cells of the mammalian medial entorhinal cortex (mEC) exhibit periodic lattice-like tuning curves in their encoding of space as animals navigate the world. Nearby grid cells have identical lattice periods, but at larger separations along the long axis of the mEC the period jumps in discrete steps, so that the full set of periods clusters into 5-7 discrete modules. These modules endow the grid code with many striking properties, such as an exponential capacity to represent space and unprecedented robustness to noise. However, the formation of discrete modules is puzzling, given that the biophysical properties of mEC stellate cells (including inhibitory inputs from PV interneurons, time constants of EPSPs, intrinsic resonance frequency, and differences in gene expression) vary smoothly in continuous topographic gradients along the mEC.

How does discreteness in grid modules arise from continuous gradients? We show that spatially and functionally discrete grid modules can spontaneously emerge from lateral interactions that are continuously graded in scale, when combined with a fixed-scale interaction. We derive a comprehensive analytic theory to reveal that modularization is highly generic and robust, arising from a topological process. The theory generates universal predictions for the sequence of grid period ratios that strikingly do not depend on any microscopic details and furnish the most accurate explanation of grid cell data to date. Altogether, this work reveals novel principles by which simple local interactions and smooth gradients lead to macroscopic modular organization in cortical circuits.