CNS2015 Workshop on

Methods of Information Theory in Computational Neuroscience

July 22-23, 2015, Prague, Czech Republic






Overview

Methods originally developed in information theory have found wide applicability in computational neuroscience. Beyond these original methods, there is a need to develop novel tools and approaches driven by problems arising in neuroscience.

A number of researchers in computational/systems neuroscience and in information/communication theory are investigating problems of information representation and processing. While the goals are often the same, these researchers bring different perspectives to a common set of neuroscience problems. They often participate in different fora, however, and their interaction is limited.

The goal of the workshop is to bring some of these researchers together to discuss challenges posed by neuroscience, to exchange ideas, and to present their latest work.

The workshop is targeted towards computational and systems neuroscientists with an interest in methods of information theory, as well as information/communication theorists with an interest in neuroscience. For the programmes of past workshops in this series, see the Bionet page at Columbia University.


Organizers:
Alexander G. Dimitrov, Washington State University
Michael C. Gastpar, EPFL
Lubomir Kostal, Institute of Physiology CAS
Tatyana O. Sharpee, The Salk Institute
Simon R. Schultz, Imperial College London


Programme and Abstracts

The workshop (July 22-23) will follow the main CNS2015 meeting; see the CNS2015 schedule for registration and other information.

The workshop will take place in Room NB C at the meeting venue.

Wednesday, July 22 (9:00 AM - 3:40 PM)


Morning Session I (9:00 AM - 10:20 AM)

9:00 AM - 9:40 AM: Efficient information transmission and stimulus coding in neuronal models
Lubomir Kostal, Institute of Physiology CAS (CZ)

We investigate the limits of neuronal information transmission from the perspective of Shannon's information theory. The Shannon bounds, however, are achievable only asymptotically, as the coding complexity grows without bound. For sources equipped with a distortion measure, and for neuronal models that account for the metabolic cost, it might be possible to match the source and channel statistics in such a way that uncoded transmission is optimal in Shannon's sense. Since uncoded transmission is completely analog (avoiding source discretization and block coding), we hypothesize that it might represent a viable strategy in real neural systems. We analyze the possibility of source-channel matching in Hodgkin-Huxley-type neuronal models and compare the theoretical predictions with in-vivo experimental recordings of excitatory neurons in the sensorimotor cortex of a rat. Our results show that the post-synaptic firing rate histograms of real neurons match the theoretical prediction when the model balances information transmission and metabolic workload optimally.
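
As a concrete illustration of the Shannon bounds the abstract refers to (a minimal sketch, not material from the talk), the capacity of a discrete memoryless channel can be computed with the classical Blahut-Arimoto iteration. The two-state "firing rate" channel below is a hypothetical example chosen for brevity.

```python
import numpy as np

def blahut_arimoto(P, n_iter=200):
    """Mutual information (bits) at the optimized input distribution;
    converges to the capacity of the channel P[x, y] = p(y|x)."""
    p = np.full(P.shape[0], 1.0 / P.shape[0])   # start from a uniform input
    for _ in range(n_iter):
        q = p @ P                               # output marginal q(y)
        # D( p(y|x) || q(y) ) for every input symbol x, in bits
        d = np.sum(P * np.log2(P / q, where=P > 0, out=np.zeros_like(P)), axis=1)
        p *= 2.0 ** d                           # Blahut-Arimoto update
        p /= p.sum()
    q = p @ P
    d = np.sum(P * np.log2(P / q, where=P > 0, out=np.zeros_like(P)), axis=1)
    return float(p @ d)

# hypothetical noisy two-level "rate" channel: rows = stimuli, cols = responses
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
print(f"capacity ~ {blahut_arimoto(P):.3f} bits per channel use")
```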

9:40 AM - 10:20 AM: Variational Inference for Graphical Models of Multivariate Piecewise-Stationary Time Series
Hang Yu and Justin Dauwels, Nanyang Technological University (SG)

Graphical models provide a powerful formalism for the statistical modeling of complex systems. Sparse graphical models in particular have seen wide application recently, as they allow us to infer network structure from multiple time series (e.g., functional brain networks from multichannel electroencephalograms). So far, most of the literature deals with stationary time series, whereas real-life time series often exhibit non-stationarity. In this paper, we focus on multivariate piecewise-stationary time series and propose novel Bayesian techniques to infer the change points and the graphical models of the stationary time segments. Concretely, we model the time series as a hidden Markov model whose hidden states correspond to different Gaussian graphical models. As such, a transition between different states represents a change point. We further impose a stick-breaking process prior on the hidden states and shrinkage priors on the inverse covariance matrices of the different states. We then derive an efficient stochastic variational inference algorithm to learn the model with sublinear time complexity. As an important advantage of the proposed approach, the number and positions of the change points as well as the graphical model structures are inferred automatically, without tuning any parameters. The proposed method is validated through numerical experiments on synthetic data and on multichannel recordings of epileptic seizures.
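
A much-simplified stand-in for the ingredients involved (not the authors' variational algorithm): if the change points were already known, a sparse Gaussian graphical model could be fit to each stationary segment with the graphical lasso. The synthetic data and the assumed-known change points below are illustrative only.

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(0)
T, d = 600, 5
X = rng.standard_normal((T, d))
X[300:, 1] += 0.8 * X[300:, 0]     # a dependency appears mid-series

change_points = [0, 300, T]        # assumed known for this sketch
for a, b in zip(change_points[:-1], change_points[1:]):
    K = GraphicalLassoCV().fit(X[a:b]).precision_
    edges = [(i, j) for i in range(d) for j in range(i + 1, d)
             if abs(K[i, j]) > 1e-3]
    print(f"segment [{a}, {b}): inferred edges {edges}")
```

The full method replaces the known change points with a hidden Markov model over segment labels, so that segmentation and structure estimation are inferred jointly.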

10:20 AM - 11:00 AM: COFFEE BREAK

Morning Session II (11:00 AM - 12:10 PM)

11:00 AM - 11:40 AM: Correlations and the propagation of information through neural circuits
Joel Zylberberg, University of Washington (US)

Sensory neurons give noisy responses to stimulation. These trial-to-trial fluctuations in the neural responses (over repeats of the same stimulus) are not independent, but rather are correlated from cell-to-cell. To decipher neural population codes, it is critical to understand how these "noise correlations" affect the encoding of information by neural population responses. This question is almost invariably addressed using information theoretic measures that ask how well idealized decoders can recover the stimulus after observing the (noisy) neural responses. In the sensory periphery, however, the informativeness of the neural responses themselves is potentially less important than the issue of how well that information propagates to downstream neural structures: not all information is "equally useable" by the nervous system. In my talk, I will discuss analytical and computational studies that identify the patterns of correlation that allow neural responses to most robustly propagate information through (possibly noisy) neural circuits. These studies yield the surprising conclusion that the same types of correlations that are currently thought to be the worst for neural population coding might actually allow the encoded information to most robustly propagate.
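
One standard measure in this literature (a minimal sketch, not code from the talk) is the linear Fisher information I(s) = f'(s)^T C^{-1} f'(s), where f'(s) holds the tuning-curve slopes and C is the noise covariance; comparing independent against uniformly correlated noise shows how the correlation structure changes the encoded information. All values below are illustrative.

```python
import numpy as np

def linear_fisher(fprime, C):
    """Linear Fisher information I(s) = f'(s)^T C^{-1} f'(s)."""
    return float(fprime @ np.linalg.solve(C, fprime))

n = 50
fprime = np.random.default_rng(1).standard_normal(n)   # tuning-curve slopes

C_indep = np.eye(n)                         # independent, unit-variance noise
rho = 0.1                                   # uniform pairwise noise correlation
C_corr = (1 - rho) * np.eye(n) + rho * np.ones((n, n))

print("independent noise:", linear_fisher(fprime, C_indep))
print("correlated noise :", linear_fisher(fprime, C_corr))
```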

12:20 PM - 1:40 PM: LUNCH

Afternoon Session I (1:40 PM - 3:00 PM)

1:40 PM - 2:20 PM: Measuring neuronal information transfer during task performance via the directed information
Adria Tauste, Universitat Pompeu Fabra (ES)

For a pair of random processes, the directed information can be regarded as a refinement of the mutual information that quantifies the information that the past and present of one process have about the present of the other, given the other's past. Since its formulation by Massey in the early 1990s, this measure has found applications in information theory (e.g., the capacity of certain types of channels with feedback) and beyond, including hypothesis testing and sequential prediction. In this talk I will discuss the application of this measure to inferring neuronal information transfer patterns from single-cell activity recorded during task performance. In particular, I will present a recent study in which we estimated the directed information between neuronal spike trains simultaneously recorded in five cortical areas (two somatosensory, two pre-motor and one motor) of two monkeys while they performed a tactile discrimination task and a (control) passive stimulation task. Our main results show that correlated neuronal activity is highly differentiated across the two tasks, suggesting the existence of a task-specific network of feed-forward and feedback interactions that transfers stimulus and response information across sensory, pre-motor and motor areas during tactile discrimination.
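
A crude plug-in sketch of the per-step quantity (not the estimator used in the study): with order-1 histories and binned binary spike trains, the directed information rate reduces to the conditional mutual information I((X_{t-1}, X_t); Y_t | Y_{t-1}), which, as Massey's definition requires, includes the "present" of X. Serious estimators need longer histories and bias correction; the toy trains below only illustrate the asymmetry of the measure.

```python
import numpy as np
from collections import Counter

def di_rate(x, y):
    """Plug-in estimate of I((X_{t-1}, X_t); Y_t | Y_{t-1}) in bits/bin."""
    quads = list(zip(x[:-1], x[1:], y[:-1], y[1:]))
    n = len(quads)
    p_joint = Counter(quads)                            # (x pair, y_{t-1}, y_t)
    p_bc = Counter((q[2], q[3]) for q in quads)         # (y_{t-1}, y_t)
    p_ac = Counter((q[0], q[1], q[2]) for q in quads)   # (x pair, y_{t-1})
    p_c = Counter(q[2] for q in quads)                  # y_{t-1}
    di = 0.0
    for q, cnt in p_joint.items():
        di += (cnt / n) * np.log2(cnt * p_c[q[2]]
                                  / (p_ac[(q[0], q[1], q[2])]
                                     * p_bc[(q[2], q[3])]))
    return di

rng = np.random.default_rng(2)
x = rng.integers(0, 2, 5000)                 # "presynaptic" binary train
prev = np.roll(x, 1)
y = np.where(rng.random(5000) < 0.2, 1 - prev, prev)   # y follows x, 20% flips
print(f"estimated DI rate X -> Y: {di_rate(x, y):.3f} bits/bin")
print(f"estimated DI rate Y -> X: {di_rate(y, x):.3f} bits/bin")
```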

2:20 PM - 3:00 PM: Information and decision-making
Dari Trendafilov and Daniel Polani, University of Hertfordshire (GB)

3:00 PM - 3:40 PM: COFFEE BREAK


Thursday, July 23 (9:00 AM - 3:40 PM)


Morning Session I (9:00 AM - 10:20 AM)

9:00 AM - 9:40 AM: Network Information Theory and Neural Coding
Michael Gastpar, Ecole polytechnique federale de Lausanne (CH)

9:40 AM - 10:20 AM: Temporal precision and information in the awake cortex
Dan Butts, University of Maryland (US)

10:20 AM - 11:00 AM: COFFEE BREAK

Morning Session II (11:00 AM - 12:20 PM)

11:00 AM - 11:40 AM: Bayesian learning through stochastic synaptic plasticity
David Kappel, Graz University of Technology (AT)

General results from statistical learning theory suggest that not only brain computations, but also learning in biological neural systems, should be understood as probabilistic inference; a model of how this could be implemented, however, has been missing. We propose that inherently stochastic features of synaptic plasticity and spine motility enable cortical networks of neurons to carry out probabilistic inference by sampling from a posterior distribution of network parameters. This model provides a viable alternative to existing models that propose convergence of parameters to maximum likelihood values. It explains how priors on weight distributions and connection probabilities can be merged optimally with learned experience, how cortical networks can generalize learned information to novel experiences, and how they can compensate continuously for unforeseen disturbances of the network. The resulting new theory of network plasticity explains, from a functional perspective, a number of experimental observations on stochastic aspects of synaptic plasticity that previously appeared quite puzzling.
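
A minimal sketch of the core idea (not the speaker's network model): noisy Langevin dynamics on a single "synaptic parameter" theta sample from the posterior p(theta | data) rather than converging to a point estimate. The Gaussian prior/likelihood and step size below are illustrative assumptions chosen so the posterior is known in closed form.

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(1.5, 1.0, 50)        # synthetic "experience"

def grad_log_posterior(theta):
    # Gaussian prior N(0, 1) plus Gaussian likelihood N(theta, 1) per sample
    return -theta + np.sum(data - theta)

eta = 1e-3                              # integration step
theta, samples = 0.0, []
for step in range(20000):
    theta += eta * grad_log_posterior(theta) \
             + np.sqrt(2 * eta) * rng.standard_normal()
    if step > 2000:                     # discard burn-in
        samples.append(theta)

# for this conjugate model the posterior mean is sum(data) / (n + 1)
print(f"sampled mean {np.mean(samples):.3f}, "
      f"analytic {data.sum() / (len(data) + 1):.3f}")
```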

11:40 AM - 12:20 PM: Beyond sensory bottleneck: Efficient coding of elements of visual form
Gasper Tkacik, Institute of Science and Technology (AT)

It has long been appreciated that the statistical properties of natural stimuli shape neural processing mechanisms in the sensory periphery, but the extent to which such a principle can be formulated for and applied to central processing is unclear. The periphery faces a transmission bottleneck, so efficiency implies compression of signal components with a predictably wider range. Cortex faces a different challenge - it must use limited samples to make inferences to guide decisions. In this regime, efficient coding predicts the opposite from the periphery: that greater resources are allocated to the signal components with a wider range. To test this hypothesis, we carry out two parallel studies. In one, we measure the joint distribution of local two-, three-, and four-point spatial correlations in an ensemble of natural images. In the other, we measure human perceptual sensitivity to these correlations and their combinations via psychophysical experiments that use synthetic visual textures. We show that psychophysical performance, described by dozens of independent parameters, can be predicted with surprising accuracy from the distribution of spatial correlations found in the natural images. Thus, the efficient coding principle extends beyond the sensory periphery to the central nervous system, where it applies in a very different guise and accounts for the sensitivity to higher-order elements of visual form.
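
To make the kind of statistic involved concrete (a minimal sketch, not the study's pipeline): after binarizing an image to s in {-1, +1}, local two-point correlations are averages of pixel products over neighbouring sites, and higher-order statistics are products over larger pixel cliques. The random "image" below stands in for the natural-image ensemble.

```python
import numpy as np

rng = np.random.default_rng(4)
img = rng.random((128, 128))                  # stand-in for a natural image
s = np.where(img > np.median(img), 1, -1)     # binarize around the median

horiz = np.mean(s[:, :-1] * s[:, 1:])         # two-point, horizontal neighbours
vert = np.mean(s[:-1, :] * s[1:, :])          # two-point, vertical neighbours
three = np.mean(s[:-1, :-1] * s[:-1, 1:] * s[1:, :-1])   # one three-point term

print(f"horizontal {horiz:+.3f}  vertical {vert:+.3f}  three-point {three:+.3f}")
```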

12:20 PM - 1:40 PM: LUNCH

Afternoon Session I (1:40 PM - 3:00 PM)

1:40 PM - 2:20 PM: Local active information storage as a tool to understand distributed neural information processing
Joseph Lizier, University of Sydney (AU)

Every act of information processing can in principle be decomposed into the component operations of information storage, transfer, and modification. Yet, while this decomposition is easily done for today's digital computers, its application to neural information processing has been hampered by the lack of proper mathematical definitions of these operations on information. Recently, definitions were given for the dynamics of these operations on a local scale in space and time in a distributed system, and the specific concept of local active information storage (LAIS) was successfully applied to the analysis and optimization of artificial neural systems. However, no attempt has been made to date to measure the space-time dynamics of LAIS in neural data. Here we measure LAIS on a local scale in time and space in voltage-sensitive dye imaging data from area 18 of the cat. We show that storage reflects neural properties such as stimulus preferences and surprise upon unexpected stimulus change, and that in area 18 it reflects the abstract concept of an ongoing stimulus despite the locally random nature of that stimulus. We suggest that LAIS will be a useful quantity for testing theories of cortical function, such as predictive coding.
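
A minimal plug-in sketch of the quantity for a discrete series (dedicated toolkits such as the speaker's JIDT implement bias-corrected estimators for serious use): the local active information storage at time t is lais(t) = log2 [ p(x_t | x_{t-k}, ..., x_{t-1}) / p(x_t) ] for history length k.

```python
import numpy as np
from collections import Counter

def local_ais(x, k=2):
    """lais(t) = log2 [ p(x_t | x_{t-k..t-1}) / p(x_t) ], plug-in estimate."""
    x = list(x)
    past = [tuple(x[t - k:t]) for t in range(k, len(x))]
    joint = Counter(zip(past, x[k:]))
    p_past = Counter(past)
    p_x = Counter(x[k:])
    m = len(past)
    return np.array([np.log2((joint[(pt, xt)] / p_past[pt]) / (p_x[xt] / m))
                     for pt, xt in zip(past, x[k:])])

x = [0, 1] * 500                              # perfectly predictable series
print(f"mean local AIS: {local_ais(x).mean():.3f} bits")   # ~1 bit stored
```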

2:20 PM - 3:00 PM: Developing Neural Networks - Theory and Experiments
Viola Priesemann, Max Planck Institute for Dynamics and Self-Organization (DE)

Human brains possess sophisticated information processing capabilities, which rely on the coordinated interplay of billions of neurons. Despite recent advances in characterizing the collective neuronal dynamics, it remains a major challenge to understand how functional neuronal networks develop and maintain these processing capabilities. A popular hypothesis is that neuronal networks self-organize to a critical state, because in models criticality maximizes information processing capacity. This predicts that biological networks should develop towards a critical state during maturation, and that processing capabilities should increase at the same time. We tested this hypothesis using multi-electrode spike recordings in mouse hippocampal and cortical neurons over the first four weeks in vitro. We showed that developing neuronal networks indeed increase their information processing capacities, as quantified by transfer entropy and active information storage. The increase in processing capacities was tightly linked to a decreasing distance to criticality (correlation r = 0.68, p < 10^-9 and r = 0.55, p < 10^-6 for transfer and storage, respectively). Our results thereby demonstrate experimentally, for the first time, that approaching criticality during maturation goes hand in hand with diverging processing capacities.
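
A minimal sketch of one common "distance to criticality" proxy (not necessarily the estimator used in the talk): the branching ratio m, with m = 1 critical, can be read off as the slope of the autoregression of population activity A_{t+1} on A_t in a driven branching process. Real analyses require subsampling-aware estimators; the toy process below is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(5)
m_true, h = 0.9, 5.0                   # subcritical process with external drive
A = np.zeros(10000)
for t in range(1, len(A)):
    A[t] = rng.poisson(m_true * A[t - 1] + h)   # E[A_t | A_{t-1}] = m A + h

# conventional estimator: slope of the autoregression of A_{t+1} on A_t
m_hat = np.cov(A[:-1], A[1:])[0, 1] / np.var(A[:-1], ddof=1)
print(f"estimated branching ratio m ~ {m_hat:.3f} (true value {m_true})")
```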

3:00 PM - 3:40 PM: COFFEE BREAK



Speakers

Butts, Dan University of Maryland (US)
Dauwels, Justin Nanyang Technological University (SG)
Dimitrov, Alexander Washington State University, Vancouver (US)
Gastpar, Michael Ecole polytechnique federale de Lausanne (CH)
Kappel, David Graz University of Technology (AT)
Kostal, Lubomir Institute of Physiology CAS (CZ)
Lizier, Joseph University of Sydney (AU)
Priesemann, Viola Max Planck Institute for Dynamics and Self-Organization (DE)
Tauste, Adria Universitat Pompeu Fabra (ES)
Tkacik, Gasper Institute of Science and Technology (AT)
Trendafilov, Dari University of Hertfordshire (GB)
Zylberberg, Joel University of Washington (US)
