On driven neural assemblies: synchrony, chaos and entropy.
In this dissertation, I address mathematical problems arising from theoretical neuroscience that share a common theme: temporally driven neural networks treated as perturbed, coupled nonlinear dynamical systems. Guided by this theme, I present projects in two main categories.

The first concerns synchrony, or its absence, in neural populations receiving a common input. This is intimately linked to studies of neural pathologies such as Parkinson's disease and their treatment via neuroprosthetics: synchronous dynamics within certain neural nuclei disrupt normal brain function, and electrical stimulation can restore it. How do these networks synchronize? How can we break this synchrony with (global) input stimulation? I approach these questions with a variety of tools, including geometric perturbation theory, discrete dynamical systems, and numerical simulations.

The second set of projects is motivated by neural coding: the ability of neural networks to encode information from a stimulus that drives their dynamics. How well a dynamical system encodes a signal depends strongly on its reliability, that is, its ability to reproduce similar output, given the same stimulus, across many trials with varying initial conditions. Many large networks, such as those found in cortex, are believed to be chaotic and hence highly sensitive to small changes in initial state. How does such chaos influence the reliability of spikes in large driven networks? What are the implications of the resulting unreliability for the neural coding of stimuli? I address these questions for large networks of spiking neurons using mathematical techniques from random dynamical systems and information theory, along with large-scale numerical simulations.
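As an illustration of the first theme, the onset of synchrony in a globally coupled population can be sketched with the standard Kuramoto phase model. This is not the dissertation's model, only a common stand-in: the function name, parameter values, and frequency distribution below are illustrative assumptions. Above a critical coupling strength, heterogeneous oscillators lock and the order parameter r approaches 1; below it, phases stay incoherent and r stays small.

```python
import numpy as np

def kuramoto_order_parameter(K, n=100, steps=4000, dt=0.01, seed=0):
    """Simulate n globally coupled Kuramoto phase oscillators
    (illustrative parameters) and return the synchrony measure
    r = |mean(exp(i*theta))| averaged over the last quarter of the run."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(1.0, 0.1, n)       # heterogeneous natural frequencies
    theta = rng.uniform(0, 2 * np.pi, n)  # random initial phases
    rs = []
    for t in range(steps):
        z = np.mean(np.exp(1j * theta))   # complex order parameter
        r, psi = np.abs(z), np.angle(z)
        # mean-field form of the all-to-all coupling term
        theta += dt * (omega + K * r * np.sin(psi - theta))
        if t >= 3 * steps // 4:
            rs.append(r)
    return float(np.mean(rs))

r_weak = kuramoto_order_parameter(K=0.05)   # below critical coupling
r_strong = kuramoto_order_parameter(K=2.0)  # well above critical coupling
print(r_weak, r_strong)
```

In this sketch, weak coupling leaves the population incoherent (r of order 1/sqrt(n)), while strong coupling synchronizes it (r near 1), which is the regime relevant to pathological synchrony and its disruption by stimulation.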
- Applied mathematics