There are good theoretical reasons to believe that neurons learn to predict their input, but few experimental tests of this hypothesis. The mentor and co-mentor on this project have developed new methods for assessing the predictive capabilities of large numbers of neurons from data, using both information-theoretic and Bayesian frameworks. These methods have not yet been optimized and integrated into an existing, ever-growing codebase that will, upon release, allow other groups to easily assess the predictive capabilities of their neural populations.
One of the metrics for assessing stimulus prediction is based on the predictive information: the mutual information between the present neural response and the future stimulus. The student on this project is expected to be familiar with Python, MATLAB, and TensorFlow/Keras, and either familiar with or eager to learn new techniques for mutual information estimation in the undersampled limit. They will implement existing state-of-the-art algorithms for such estimation and add them to the existing codebase.
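As a toy sketch of the predictive-information metric described above (the function names here are illustrative, not hdnet's API): a plugin estimate with the Miller-Madow bias correction, one of the simpler corrections for the undersampled regime.

```python
import numpy as np
from collections import Counter

def entropy_mm(samples):
    """Plugin entropy in bits with the Miller-Madow bias correction.

    The plugin estimate is biased downward when samples are scarce
    relative to the alphabet; Miller-Madow adds (K - 1) / (2 N ln 2),
    where K is the number of distinct symbols actually observed.
    """
    n = len(samples)
    counts = np.array(list(Counter(samples).values()), dtype=float)
    p = counts / n
    return -np.sum(p * np.log2(p)) + (len(counts) - 1) / (2.0 * n * np.log(2))

def predictive_information(responses, future_stimuli):
    """I(R; S_future) = H(R) + H(S_future) - H(R, S_future),
    with each entropy term Miller-Madow corrected."""
    joint = list(zip(responses, future_stimuli))
    return (entropy_mm(responses) + entropy_mm(future_stimuli)
            - entropy_mm(joint))
```

On a few thousand fair-coin samples this recovers roughly 1 bit of entropy, and the mutual information between independent streams comes out near zero; in the genuinely undersampled regime one would reach for Bayesian estimators such as NSB instead.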
The result of this project will be a state-of-the-art compilation of predictive information estimation methods.
Mentor: Sarah Marzen @smarzen Co-mentor: Joost le Feber
Thanks for reaching out. We are in touch with the mentors of this project and they will get back to you shortly. In the meantime, please continue exploring the project idea and share further queries here so that the mentors can help you out.
Thanks @arnab1896 !
I think I got it though!
I think the HDNet documentation needs some edits, both in the installation guidelines and in the Math Requirements section, which currently just says ‘TODO’!
I’d love to contribute to these, if any of you could guide me on how to get started.
@smarzen I am excited about this project and have already emailed you as well. Could you also please share the articles describing the methods planned here? It would help us (the students) understand the difficulty level and write an appropriate proposal with adequate timelines.
Here are some articles that this project is based on. https://www.pnas.org/content/112/22/6908 gave us the idea for this project. The difference is that we’re using neurons in a dish, to evaluate whether synaptic learning rules are enough by themselves to guarantee an increase in prediction. We’re also using different stimuli.
We have to evaluate predictive accuracy somehow, though. One method is an estimate of predictive information from https://papers.nips.cc/paper/2001/file/d46e1fcf4c07ce4a69ee07e4134bcef1-Paper.pdf.
Another is an estimate of the accuracy of prediction from a neural model, via the pairwise maximum entropy model of “Weak pairwise correlations imply strongly correlated network states in a neural population” fit using https://arxiv.org/pdf/0906.4779.pdf. We might also use GLMs as neural models (“Statistical models for neural encoding, decoding, and optimal stimulus design”), or dichotomized Gaussians.
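For intuition about the pairwise maximum entropy model mentioned above, here is a minimal sketch (illustrative names, not hdnet's API) that fits fields and couplings by exact gradient ascent on the log-likelihood, i.e. by matching the model's means and pairwise moments to the data. It enumerates all 2^n codewords, so it is only viable for a handful of neurons; the minimum probability flow method in the linked paper is what makes fitting scale.

```python
import itertools
import numpy as np

def maxent_probs(h, J):
    """Exact codeword probabilities of a pairwise maximum entropy model:
    log P(x) = h.x + sum_{i<j} J_ij x_i x_j - log Z, for binary words x."""
    words = np.array(list(itertools.product([0, 1], repeat=len(h))), dtype=float)
    log_p = words @ h + np.einsum('ki,ij,kj->k', words, np.triu(J, 1), words)
    p = np.exp(log_p - log_p.max())          # subtract max for stability
    return words, p / p.sum()

def fit_pairwise_maxent(data, lr=0.4, steps=4000):
    """Gradient ascent on the (concave) average log-likelihood: the gradient
    w.r.t. h is (data means - model means), and w.r.t. J_ij (i < j) it is
    (data pairwise moments - model pairwise moments)."""
    n = data.shape[1]
    mu = data.mean(0)                        # empirical firing rates <x_i>
    C = data.T @ data / len(data)            # empirical second moments <x_i x_j>
    h, J = np.zeros(n), np.zeros((n, n))
    for _ in range(steps):
        words, p = maxent_probs(h, J)
        h += lr * (mu - p @ words)
        J += lr * np.triu(C - np.einsum('k,ki,kj->ij', p, words, words), 1)
    return h, J
```

After fitting, the model's first and second moments agree with the data's; that moment-matching check is itself a natural validation of the fit.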
Chris, Joost, and I need to add code to hdnet to validate the model fit, estimate the predictive accuracy, and estimate the predictive information. Most of the algorithms are already written in some form and just need some cleaning up, but not all.
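As a sketch of what the validation step can look like (illustrative helpers, not hdnet's actual API): score held-out spike words by their model log-likelihood, and compare the most common observed codewords against the model's most probable ones.

```python
import numpy as np
from collections import Counter

def heldout_bits_per_word(model_p, words, floor=1e-12):
    """Mean negative log2-likelihood (bits/word) of held-out spike words.

    model_p maps a codeword tuple to its model probability; codewords the
    model never saw are floored to avoid infinite surprise. Lower is
    better, and the value can be compared against the data's entropy.
    """
    return -np.mean([np.log2(max(model_p.get(tuple(w), 0.0), floor))
                     for w in words])

def most_common_codewords(words, k=3):
    """The k most frequent codewords in the data, for comparison against
    the model's k most probable codewords."""
    return [w for w, _ in Counter(map(tuple, words)).most_common(k)]
```

For example, a two-neuron model with P(0,0) = 0.5 and P(0,1) = P(1,0) = 0.25 assigns 1.5 bits/word on average to held-out data drawn from those same frequencies.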
PS: I can’t make more than 3 consecutive replies so I’ll be merging reports to previous replies
Weekly Report 3
June 20 - June 27
What I’ve been doing
a. Completed code for validation using log-likelihood and most common codewords; started on higher-order interactions
What I’ll do next week
a. Complete code for higher-order interactions; find other missing docs in HDNet I could fill
Blockers
a. None
Weekly Report 4
June 28 - July 4
What I’ve been doing
a. Completed code for validation using higher-order interactions
b. Testing code for pushing to source
What I’ll do next week
a. Complete documentation and testing to merge code with source
Blockers
a. None
Weekly Report 5
July 5 - July 12
What I’ve been doing
a. Completed all validation methods for maximum entropy models
b. Restructured codebase after code review from mentors
c. Started reading on information estimation methods and spike-train samplers, for implementation next week
What I’ll do next week
a. Implement a better sampler for spike-train data and/or start coding information estimation methods
b. Finish writing tests to merge with origin
Blockers
a. None
Weekly Report 6
July 13 - July 20
What I’ve been doing
a. Added a Metropolis-Hastings sampler for spike trains, and modified it to work with validations like the Gibbs sampler
b. Added detailed demos and examples for how someone could use the code without having to know too much about it
c. Started implementing the CDM Entropy estimate in HDNet, after which I’ll pick up mutual information estimation
What I’ll do next week
a. Complete CDM Entropy validation
b. Continue with NSB and Miller-Madow information estimation
Weekly Report 7
July 21 - July 28
What I’ve been doing
a. Started working on integrating legacy MATLAB codebases into HDNet
b. Had discussions on research directions the project could take, and added the task of building an interface for .m files to use HDNet from Python
What I’ll do next week
a. Test CDMEntropy interface, add MI methods
Blockers
a. None
Weekly Report 8
July 29 - August 5
What I’ve been doing
a. Completed interface for CDME with examples
b. Started an extension, HDNet Contrib, to contain additional features for HDNet
What I’ll do next week
a. Complete MI using CDME
Blockers
a. None
Weekly Report 9
August 6 - August 13
What I’ve been doing
a. Completed MI using CDME, tested on data
b. Completed a data loader for raw stimulus data, and started testing NSB for MI estimates
What I’ll do next week
a. Complete documentation and final work report
Blockers
a. None
Weekly Report 10
August 13 - August 20
What I’ve been doing
a. Wrote documentation and examples for usage
b. Completed final evaluation report