There are good theoretical reasons to believe that neurons learn to predict their input, but comparatively few experimental tests of this hypothesis. The mentor and co-mentor on this project have developed new methods for assessing the predictive capabilities of large numbers of neurons from data, using both an information-theoretic and a Bayesian framework. These methods have not yet been optimized and integrated into an existing, ever-growing codebase that will, upon release, allow other groups to easily assess the predictive capabilities of their own neural populations.
One of the metrics for assessing stimulus prediction is based on fitting a Maximum Entropy model that probabilistically describes the stimulus and the corresponding neural response. The student on this project is expected to be familiar with Python and Matlab, and either familiar with or eager to learn new techniques for Maximum Entropy model fitting. The student will improve the neural Maximum Entropy model fit to data in two steps:
- functions for Maximum Entropy model validation using pre-written Monte Carlo methods will be integrated into the codebase
- hyperparameters of Minimum Probability Flow, a powerful and relatively new algorithm for Maximum Entropy model fitting, will be optimized so that the model validation improves
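To give a flavor of the second step: Minimum Probability Flow (MPF) fits a Maximum Entropy model without ever computing the partition function, by minimizing the "probability flow" from observed states to their unobserved neighbors. The project's actual model and code are not specified here, so the sketch below is purely illustrative: it assumes a pairwise (Ising-style) model over binary {0,1} responses with symmetric couplings `J` (zero diagonal) and biases `b`, one-bit flips as neighbors, and plain gradient descent in place of whatever optimizer the codebase uses.

```python
import numpy as np

def mpf_objective_and_grad(J, b, X):
    """MPF objective and gradient for a pairwise (Ising-style) MaxEnt model.

    X is (N, n) binary {0,1} data; J is symmetric with zero diagonal; b is a
    bias vector.  With E(x) = -0.5 x^T J x - b^T x and one-bit flips as
    neighbours, each flow term is exp((E(x) - E(x')) / 2), so no partition
    function is ever needed.
    """
    N = len(X)
    delta = 1.0 - 2.0 * X                # +1 flips a 0 on, -1 flips a 1 off
    F = X @ J + b                        # local fields
    W = np.exp(0.5 * delta * F)          # flow to each one-flip neighbour
    G = 0.5 * delta * W                  # dK/dF
    K = W.sum() / N
    gJ = (X.T @ G + G.T @ X) / N         # gradient for symmetric couplings
    np.fill_diagonal(gJ, 0.0)
    return K, gJ, G.mean(axis=0)

def fit_mpf(X, n_steps=1000, lr=0.1):
    """Plain gradient descent on the MPF objective (a sketch, not production)."""
    n = X.shape[1]
    J, b = np.zeros((n, n)), np.zeros(n)
    for _ in range(n_steps):
        _, gJ, gb = mpf_objective_and_grad(J, b, X)
        J -= lr * gJ
        b -= lr * gb
    return J, b, mpf_objective_and_grad(J, b, X)[0]

# Synthetic check: draw data from a known 4-unit model by exact enumeration
# (feasible only for tiny n), then fit it back with MPF.
rng = np.random.default_rng(0)
n = 4
J_true = np.zeros((n, n))
iu = np.triu_indices(n, 1)
J_true[iu] = rng.normal(0.0, 1.0, size=len(iu[0]))
J_true = J_true + J_true.T
b_true = rng.normal(0.0, 0.5, size=n)
states = np.array([[(s >> i) & 1 for i in range(n)] for s in range(2 ** n)], float)
energies = -0.5 * np.einsum("si,ij,sj->s", states, J_true, states) - states @ b_true
probs = np.exp(-energies)
probs /= probs.sum()
X = states[rng.choice(len(states), size=2000, p=probs)]
J_fit, b_fit, K_final = fit_mpf(X)
```

Because the MPF objective is convex in the parameters for a fully observed model like this one, even this naive descent converges; the hyperparameters the student would tune (step size, number of steps, regularization, neighborhood choice) are exactly what this sketch hard-codes.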
The result of this project will be a state-of-the-art, computationally efficient approach to parameter fitting of Maximum Entropy models that will be useful to neuroscientists in its own right. It may therefore be integrated into HDNet (https://github.com/team-hdnet/hdnet) in addition to being released as part of the stimulus prediction package.
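For the Monte Carlo validation step, one standard check is to draw samples from the fitted model with a Gibbs sampler and compare their first and second moments against the recorded responses. The project's pre-written validation routines are not shown here; the sketch below is an illustrative stand-in, again assuming a pairwise model over binary {0,1} vectors with symmetric couplings `J` (zero diagonal) and biases `b`, and hypothetical function names.

```python
import numpy as np

def gibbs_sample(J, b, n_samples, n_burn=500, thin=10, seed=0):
    """Gibbs sampler for a pairwise MaxEnt model P(x) ~ exp(0.5 x^T J x + b^T x)
    over binary vectors x in {0,1}^n.  J is symmetric with zero diagonal."""
    rng = np.random.default_rng(seed)
    n = len(b)
    x = rng.integers(0, 2, size=n).astype(float)
    samples = []
    for t in range(n_burn + n_samples * thin):
        for i in range(n):
            field = J[i] @ x + b[i]   # conditional log-odds of x_i = 1
            x[i] = float(rng.random() < 1.0 / (1.0 + np.exp(-field)))
        if t >= n_burn and (t - n_burn) % thin == 0:
            samples.append(x.copy())
    return np.array(samples)

def moment_mismatch(X_data, X_model):
    """Largest absolute gap in means and second moments, data vs. model."""
    mean_gap = np.abs(X_data.mean(axis=0) - X_model.mean(axis=0)).max()
    second = lambda X: X.T @ X / len(X)
    return mean_gap, np.abs(second(X_data) - second(X_model)).max()

# Sanity check on an independent model (J = 0): unit i fires with probability
# sigmoid(b_i), so the sampled means should land close to that.
b_demo = np.array([1.0, -1.0, 0.0])
S = gibbs_sample(np.zeros((3, 3)), b_demo, n_samples=1000)
```

A fitted model whose Gibbs samples reproduce the data's means and pairwise correlations (small values from `moment_mismatch`) passes this check; systematic gaps indicate that the fitting hyperparameters need further tuning.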
Mentor: Sarah Marzen
Co-mentor: Joost le Feber