I am interested in how neuronal networks learn: which collective dynamical structures do networks use to encode new information, and how do these structures evolve as environmental conditions or computational requirements change? Neuronal networks present unique analytical challenges. They are large, heterogeneous, and sparsely connected through complex topologies. As such, they can exhibit a vast number of spike activity patterns that change constantly on rapid time scales. There is no consensus on which aspects of this high-dimensional, noisy activity are computationally relevant; indeed, the answer likely varies considerably with species, brain region, and the computation being performed. Much of my research therefore involves developing bespoke methodologies that move beyond single-neuron descriptions and address, in specific experimental settings, how neuronal networks encode and compute. The methods I develop tend to be based upon modern regression methods (such as Generalized Linear Models) and machine learning techniques (such as clustering algorithms, regression trees, and support vector machines).
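To make the flavor of these methods concrete, the following is a minimal, self-contained sketch of a Poisson Generalized Linear Model relating a neuron's spike counts to a stimulus covariate and its own spike history. The data are simulated and the covariates are purely illustrative; this is not the analysis pipeline from any particular study.

```python
# Illustrative sketch: a Poisson GLM relating a neuron's spike counts to a
# stimulus covariate and its own spike history. All data are simulated;
# real analyses use recorded spike trains and richer covariate sets.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 5000                        # number of 1 ms time bins (illustrative)
stim = rng.normal(size=T)       # simulated stimulus time series

# Simulate spike counts whose rate depends on the stimulus.
rate = np.exp(-3.0 + 0.8 * stim)
spikes = rng.poisson(rate)

# Design matrix: stimulus plus spike history (two lagged bins).
X = np.column_stack([
    stim,
    np.concatenate([[0], spikes[:-1]]),    # history, lag 1
    np.concatenate([[0, 0], spikes[:-2]])  # history, lag 2
])
X = sm.add_constant(X)

# Fit the Poisson GLM (log link) by maximum likelihood.
fit = sm.GLM(spikes, X, family=sm.families.Poisson()).fit()
print(fit.params)  # baseline, stimulus, and history coefficients
```

In practice the fitted history coefficients capture effects such as refractoriness and burstiness, while the stimulus coefficients quantify the encoding of external drive.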
In collaboration with co-PIs who perform animal experiments, I have applied these methods to a wide variety of neural systems to determine how real neuronal networks compute. For example, in rodent somatosensory cortex I demonstrated that the coding of touch involves an interplay between external tactile stimuli and internally generated cortical rhythms (Haslinger, J. Neurophysiol., 2006). In macaque primary visual cortex I demonstrated that more “complex” visual stimuli, such as natural scenes, are coded with greater communication between neurons than the simpler “laboratory-type” stimuli that have historically been used to probe the visual system (Haslinger, PLoS One, 2012). Most recently, I have been investigating how higher cognitive functions, such as movement planning, are coded by the dynamics of frontal networks. Using multi-electrode recordings in macaque frontal cortex and computational techniques I developed, I have shown that communication between frontal neurons is key to movement planning. This is one of the first demonstrations that higher cognitive processing is carried out collectively by large distributed networks.
In addition to experimentally motivated studies, I also research theoretical models of information processing and computation by complexly structured neural activity. Previously, I considered this question in the context of the computational capacity of single-neuron spike trains (Haslinger, Neural Computation, 2010). However, computation is a collective process requiring a continuous stream of information to be integrated with information already stored in a network’s ongoing dynamics (working memory). A theoretical framework that can capture this is reservoir computing, in which large recurrent spiking networks self-organize their dynamics, via synaptic plasticity, into a critical regime that can support complex information processing. I am currently collaborating with Gordon Pipa to develop experimentally testable predictions for the reservoir computing model. This work aims to provide a theoretical framework that identifies and explains common computational principles underlying apparently diverse cortical areas and functions.
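The sketch below illustrates the reservoir idea in its simplest rate-based form, an echo state network with a fixed random recurrent layer and a trained linear readout, rather than the spiking, plasticity-driven networks described above. The network sizes, weight scalings, and delayed-recall task are all illustrative assumptions.

```python
# Schematic echo state network: a fixed random recurrent "reservoir" whose
# state integrates an input stream; only a linear readout is trained.
# A rate-based caricature of reservoir computing, for illustration only.
import numpy as np

rng = np.random.default_rng(1)
N, T = 200, 1000                      # reservoir size, time steps

# Fixed random recurrent weights, rescaled so the spectral radius is
# below 1 (a standard heuristic for the "echo state" property).
W = rng.normal(size=(N, N)) / np.sqrt(N)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.normal(size=N)             # fixed input weights

u = rng.uniform(-1, 1, size=T)        # input stream
x = np.zeros(N)                       # reservoir state
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])  # state carries a fading memory of inputs
    states[t] = x

# Train a linear readout (ridge regression) to recall the input from
# 5 steps earlier: a simple working-memory task.
delay = 5
A = states[delay:]
y = np.roll(u, delay)[delay:]         # target: u[t - delay]
w_out = np.linalg.solve(A.T @ A + 1e-4 * np.eye(N), A.T @ y)
print("training MSE:", np.mean((A @ w_out - y) ** 2))
```

The key design point is that the recurrent weights are never trained: the reservoir's ongoing dynamics hold recent inputs in a distributed, fading memory, and all task-specific learning happens in the readout.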
I was originally trained as a statistical and condensed matter physicist at the University of Wisconsin-Madison. For my dissertation I researched strongly coupled electron systems, specifically high-temperature superconductors, with Robert Joynt and Andrey Chubukov. Most of my early papers study superconductors using either Ginzburg-Landau mean field theory or many-body quantum field theory, although I also did some work in percolation. During my final year of graduate school I received an NSF fellowship to study complex systems at the Santa Fe Institute, where I worked on pattern discovery algorithms with Jim Crutchfield and Cosma Shalizi. Cosma and I wrote several papers together on the use of non-parametric clustering methods to deduce hidden Markov models from both time series and spatio-temporal data. The most cited of these was a Physical Review Letter on a theoretically rigorous way to quantify the rather hazy notion of "self-organization" using the hidden (causal) states of a dynamical system. After defending my dissertation, I was a postdoc in the Center for Nonlinear Studies at Los Alamos National Laboratory, where I worked with David Pines on complex adaptive matter. I transitioned into computational neuroscience through a postdoc with Emery Brown at the Martinos Center for Biomedical Imaging at Massachusetts General Hospital, where I learned modern statistics and machine learning. I am currently an Instructor at Harvard Medical School and Massachusetts General Hospital and a Research Affiliate in the Department of Brain and Cognitive Sciences at MIT.