Machine Learning Thesis Defense
Modern electrophysiological and optical recording techniques allow for the simultaneous monitoring of large populations of neurons. However, current technologies remain limited in the total number of neurons they can monitor simultaneously, the brain areas in which they can be deployed, and their ability to record stably from a given set of neurons. These limitations have important implications for both the clinic and basic science.
In this thesis, we present novel machine learning algorithms and theory that leverage structure in neural data to combine neural recordings across space and time. We first present a brain-computer interface (BCI) classifier that takes advantage of clustering in the neural data to autonomously retrain itself each day. The resulting classifier maintains stable performance over 31 days of prerecorded data.
We do not expect to find clustering in general neural recordings, so in the remainder of the thesis we seek to exploit another property of neural data, low-dimensional structure, for combining neural recordings in more general settings. We describe how a single factor analysis (FA) model can be fit to the joint activity of populations of neurons recorded at separate times. This problem turns out to be closely related to that of completing a low-rank covariance matrix, a type of symmetric positive semidefinite (SPSD) matrix, and we present basic theory and an algorithm for SPSD matrix completion when entries of the matrix are missing in a deterministic fashion.
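To make the completion idea concrete, here is a minimal numpy sketch (our own toy example, not code from the thesis). The sizes, index sets, and use of the identity C12 = C1s Css⁺ Cs2 are illustrative assumptions: when the full covariance has rank d and the block for the shared neurons also has rank d, the never-observed cross-block is determined by the observed blocks.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 12, 3                      # 12 "neurons", latent dimensionality 3
L = rng.normal(size=(n, d))
C = L @ L.T                       # rank-d covariance matrix (SPSD)

# Deterministic missingness: neurons 0-3 and 8-11 are never recorded
# together, so their cross-covariance block is missing; neurons 4-7
# are shared across both recording sessions.
g1 = np.arange(0, 4)              # only in session 1
s = np.arange(4, 8)               # recorded in both sessions
g2 = np.arange(8, 12)             # only in session 2

C_1s = C[np.ix_(g1, s)]
C_ss = C[np.ix_(s, s)]
C_s2 = C[np.ix_(s, g2)]

# With rank(C_ss) == d, the missing cross-block is uniquely determined:
C_12_hat = C_1s @ np.linalg.pinv(C_ss) @ C_s2
err = np.max(np.abs(C_12_hat - C[np.ix_(g1, g2)]))
```

Here `err` is at numerical-precision level, since the shared block (4 neurons, rank 3) satisfies the rank condition; with fewer shared neurons than latent dimensions the completion would no longer be unique.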
We then validate our theory and algorithm for combining neural recordings with low-dimensional structure in three scenarios. First, we develop a self-recalibrating decoder for regression in the BCI setting, which is capable of maintaining steady performance in the face of simulated electrode instabilities. We then use our methods to infer low-dimensional representations of neural population activity with learning when we cannot record from the same set of neural units over the entire period of learning.
Finally, we propose to apply our methods for studying communication between populations of neurons in two brain areas, when it is not possible to record from a large number of neurons simultaneously in one of the areas.
We conclude the thesis by returning to the basic problem of SPSD matrix completion, proving that in our scenarios of interest, a common technique for matrix recovery, nuclear norm minimization, can also be applied to the recovery of SPSD matrices with entries missing in a deterministic fashion.
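As a rough numerical illustration of why a nuclear-norm approach is plausible in this setting (our own toy sketch, not the thesis's proof or algorithm), the following runs soft-impute, a standard proximal scheme for nuclear-norm-regularized matrix completion, on a low-rank SPSD matrix whose off-diagonal block is missing in a deterministic fashion. All sizes and the regularization weight are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 15, 2
L = rng.normal(size=(n, d))
M = L @ L.T                       # rank-d SPSD matrix

# Deterministic missingness: the cross-block between neurons 0-4 and
# 10-14 (and its transpose) is never observed; neurons 5-9 are shared.
mask = np.ones((n, n), dtype=bool)
mask[np.ix_(range(0, 5), range(10, 15))] = False
mask[np.ix_(range(10, 15), range(0, 5))] = False

# Soft-impute: repeatedly fill the missing entries with the current
# estimate, then soft-threshold the singular values. Its fixed point
# solves the nuclear-norm-regularized completion problem.
lam = 0.1
X = np.zeros((n, n))
for _ in range(3000):
    Z = np.where(mask, M, X)      # observed entries from M, rest from X
    U, s, Vt = np.linalg.svd(Z)
    X = (U * np.maximum(s - lam, 0.0)) @ Vt

rel_err = np.linalg.norm((X - M)[~mask]) / np.linalg.norm(M[~mask])
```

With a small regularization weight, the recovered missing block closely matches the true one here, consistent with the claim that nuclear-norm-based recovery can handle deterministic missingness patterns of this kind when the appropriate rank conditions hold.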
Rob Kass (Co-chair)
Byron Yu (Co-chair)
Maneesh Sahani (University College London)