I am interested in the information processing and computational properties of complex, distributed systems. Currently I am pursuing this interest within the field of computational neuroscience, but I originally trained as a statistical and condensed matter physicist at the University of Wisconsin-Madison (PhD, 2001). For my dissertation I researched strongly coupled electron systems, specifically high-temperature superconductors, with Robert Joynt and Andrey Chubukov. Most of my early papers study superconductors using either Ginzburg-Landau mean-field theory or many-body quantum field theory, although I also did some work on percolation. During my final year of graduate school I received an NSF fellowship to study complex systems at the Santa Fe Institute, where I worked on pattern discovery algorithms with Jim Crutchfield and Cosma Shalizi. Cosma and I wrote several papers together on the use of non-parametric clustering methods to deduce hidden Markov models from both time series and spatio-temporal data. The most cited of these was a Physical Review Letter on a theoretically rigorous way to quantify the rather hazy notion of "self-organization" using the hidden (causal) states of a dynamical system. After defending my dissertation, I was a postdoc at the Center for Nonlinear Studies at Los Alamos National Laboratory, where I worked with David Pines on complex adaptive matter.
I transitioned into computational neuroscience through a postdoc with Emery Brown at the Martinos Center for Biomedical Imaging at Massachusetts General Hospital, where I learned modern statistics and machine learning. I am currently an Instructor at Harvard Medical School and Massachusetts General Hospital and a Research Affiliate in the Department of Brain and Cognitive Sciences at MIT. Much of my current work involves developing algorithms, based on statistical physics and machine learning methods, to detect lower-dimensional collective structures relevant for coding and processing information in multi-scale, high-dimensional neurobiological data. Some specific techniques I've used include generalized linear models, Markov random fields, regularization, clustering and regression trees, hidden Markov models, and nonlinear time series analysis. I try to develop simple, practical algorithms that can be applied to real data and tell us something scientific about how neural systems compute. I work closely with experimentalists, and many of my projects were motivated by specific experimental questions.
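To give a concrete flavor of the generalized linear model approach mentioned above, here is a minimal sketch of fitting a Poisson GLM that predicts a neuron's spike counts from a stimulus covariate and its own spiking history. The synthetic data, variable names, and parameter values are illustrative assumptions, not code or results from any of my papers.

```python
# Minimal Poisson GLM sketch for spike-train data (illustrative only).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 5000                              # number of time bins
stimulus = rng.standard_normal(T)     # a toy stimulus covariate

# Simulate spike counts whose rate depends log-linearly on the stimulus.
true_rate = np.exp(-2.0 + 0.8 * stimulus)
spikes = rng.poisson(true_rate)

# Design matrix: stimulus plus a few spike-history lags.
n_lags = 3
X = np.column_stack(
    [stimulus] + [np.roll(spikes, lag) for lag in range(1, n_lags + 1)]
)
# Drop the first bins, which are contaminated by np.roll wraparound.
X, y = X[n_lags:], spikes[n_lags:]
X = sm.add_constant(X)

# Fit the Poisson GLM (log link is the statsmodels default for Poisson).
result = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(result.params)  # intercept, stimulus gain, history weights
```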
Fundamentally, I believe that complex computation requires complex dynamics, rather than ordered or disordered dynamics. What we don't know is exactly which types of structure and dynamics are required, or what specific forms computationally relevant structure takes in neural systems. For many researchers, inferring structure from neural data means inferring a functional network defined by statistical dependencies between neurons, and I've worked on that. However, I'm not convinced that this is the best way to understand neural systems, since the dynamics of the individual graph nodes (neurons) are so noisy. Motivated by my prior work on pattern discovery, I'm starting to work more on "collective" representations of neural activity, which identify and cluster patterns of activity that code the same information. Inspired by the reservoir computing literature, I'm also beginning to work on phase-space representations of recurrent network activity, which to me seem to get more at the "collective" nature of neural activity; this is work in progress.
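For readers unfamiliar with the reservoir computing perspective, here is a minimal echo state network sketch: a fixed random recurrent network whose phase-space trajectory is read out by simple linear regression. The network sizes, weight scalings, and toy one-step-ahead prediction task are illustrative assumptions, not an implementation from my own work.

```python
# Minimal echo state network sketch (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
n_res, T = 200, 2000

# Random recurrent weights, rescaled so the spectral radius is below 1
# (a standard condition for the echo state property).
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=n_res)

# Drive the reservoir with a noisy sinusoid and record its state trajectory.
u = np.sin(0.1 * np.arange(T)) + 0.1 * rng.standard_normal(T)
states = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# Train a ridge-regression readout to predict the input one step ahead.
S, target = states[:-1], u[1:]
ridge = 1e-4
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ target)
pred = S @ W_out
print("training MSE:", np.mean((pred - target) ** 2))
```

The key design choice is that only the linear readout W_out is trained; the recurrent dynamics are fixed, so the "computation" lives in the collective trajectory of the reservoir states.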
You can get a copy of my current CV here, and PDFs of all my publications are here.
The links below contain more detailed information about some of the projects I've worked on.
Data Science for Neural Systems
Understanding Complex Computation
Back to main page