GSoC 2021 Project Idea 21.2: HDNet projects - Developing a Python-based codebase for analyzing stimulus prediction capabilities of neurons: improve stimulus prediction

There are good theoretical reasons to believe that neurons learn to predict their input, but few experimental tests of this hypothesis. The mentor and co-mentor on this project have developed new methods for assessing the predictive capabilities of large numbers of neurons from data, using both information-theoretic and Bayesian frameworks. These methods have not yet been optimized and integrated into an existing, ever-growing codebase that will, upon release, allow other groups to easily assess the predictive capabilities of their neural populations.

One of the metrics for assessing stimulus prediction is based on the predictive information: the shared information between the present neural response and the future stimulus. The student on this project is expected to be familiar with Python, MATLAB, and TensorFlow/Keras, and to be either familiar with or eager to learn new techniques for mutual information estimation in the undersampled limit. He or she will implement existing state-of-the-art algorithms for such estimation and add them to the existing codebase.
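As a rough illustration of the quantity involved, here is a minimal sketch of a predictive information estimate for binned binary spike words and a discrete stimulus. It uses a plug-in estimator with the Miller-Madow correction only as a baseline (the project targets better-behaved estimators for the undersampled limit), and the function names and toy data are illustrative, not part of the existing codebase:

```python
import numpy as np
from collections import Counter

def miller_madow_entropy(samples):
    """Plug-in entropy (in bits) with the Miller-Madow bias correction."""
    counts = np.array(list(Counter(samples).values()), dtype=float)
    n = counts.sum()
    p = counts / n
    # correction term: (K - 1) / (2 N ln 2), with K observed symbols
    return -np.sum(p * np.log2(p)) + (len(counts) - 1) / (2.0 * n * np.log(2))

def predictive_information(responses, stimuli, lag=1):
    """I(R_t; S_{t+lag}): shared information between present response words and future stimuli."""
    r_words = ["".join(map(str, row)) for row in responses[:-lag]]
    s_future = [str(s) for s in stimuli[lag:]]
    joint = [r + "|" + s for r, s in zip(r_words, s_future)]
    return (miller_madow_entropy(r_words) + miller_madow_entropy(s_future)
            - miller_madow_entropy(joint))

# Toy usage: a slowly switching binary stimulus driving 4 neurons over 5000 bins
rng = np.random.default_rng(0)
stim = np.zeros(5000, dtype=int)
for t in range(1, 5000):
    stim[t] = stim[t - 1] if rng.random() < 0.95 else 1 - stim[t - 1]
resp = (rng.random((5000, 4)) < (0.05 + 0.4 * stim[:, None])).astype(int)
print("predictive information (bits):", predictive_information(resp, stim, lag=1))
```

For severely undersampled response words this plug-in baseline is biased, which is exactly why estimators designed for the undersampled limit are of interest here.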

The result of this project will be a state-of-the-art compilation of predictive information estimation methods.

Mentor: Sarah Marzen @smarzen
Co-mentor: Joost le Feber


I'm getting this when running the setup file, and maybe (maybe!) because of it, I'm later getting this when I finally try to run it.
I followed the installation guidelines given at: Installing hdnet — hdnet v0.1 documentation

Hi @shubh ,

Thanks for reaching out. We are in touch with the mentors of this project and they will get back to you shortly. In the meantime, please keep exploring the project idea as you have been, and share further queries here so that the mentors can help you out.

Cheers

Thanks @arnab1896 !
I think I got it though!
I think the HDNet documentation needs some edits, both in the installation guidelines and in the Math Requirements section, which is still missing and left as 'TODO'!
I'd love to contribute to these, if any of you could guide me on how to do so.

Hello, I'm Debopjyoti. I'm interested in this project; how can I start contributing to it or developing the idea, @arnab1896?

Hi Debopjyoti, please email cav@awecom.com and smarzen@kecksci.claremont.edu and let’s set up a time to talk.

@smarzen please check your email; I've sent it.

@smarzen I am excited about this project and have already emailed you as well. Could you also please share the articles describing the methods planned here? It would help us (the students) understand the difficulty level and write an appropriate proposal with adequate timelines.

Thanks in advance!

Hi all,

Here are some articles that this project is based on.
https://www.pnas.org/content/112/22/6908 gave us the idea for this project. The difference is that we're using neurons in a dish to evaluate whether synaptic learning rules are enough by themselves to guarantee an increase in prediction. We're also using different stimuli.
We have to evaluate predictive accuracy somehow, though. One method is an estimate of predictive information from https://papers.nips.cc/paper/2001/file/d46e1fcf4c07ce4a69ee07e4134bcef1-Paper.pdf.
Another is an estimate of the accuracy of prediction from a neural model, e.g. the model of "Weak pairwise correlations imply strongly correlated network states in a neural population", fit using https://arxiv.org/pdf/0906.4779.pdf. We might also use these GLMs as neural models: "Statistical models for neural encoding, decoding, and optimal stimulus design"; we might also use dichotomized Gaussians.
Chris, Joost, and I need to add code to hdnet to validate the model fit, estimate the predictive accuracy, and estimate the predictive information. Most of the algorithms are already written in some form and just need some cleaning up, but not all.
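As one concrete example of the neural models mentioned above, here is a hedged sketch of a Poisson GLM encoding model that predicts a neuron's binned spike count from its recent stimulus history. It uses scikit-learn purely as a stand-in for whatever fitting code ends up in hdnet, and the toy data and variable names are illustrative:

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(1)

# Toy data: a scalar stimulus and one neuron's binned spike counts
T, n_lags = 5000, 10
stimulus = rng.normal(size=T)
true_filter = np.exp(-np.arange(n_lags) / 3.0)          # decaying stimulus filter
drive = np.convolve(stimulus, true_filter, mode="full")[:T]
spike_counts = rng.poisson(np.exp(0.2 * drive - 1.0))

# Design matrix of lagged stimulus values (stimulus history for each bin)
X = np.column_stack([np.roll(stimulus, k) for k in range(n_lags)])
X[:n_lags, :] = 0.0  # zero out wrap-around introduced by np.roll

# Fit the GLM (log link, Poisson noise) and inspect the fitted stimulus filter
glm = PoissonRegressor(alpha=1e-3, max_iter=1000)
glm.fit(X, spike_counts)
print("fitted stimulus filter:", np.round(glm.coef_, 3))
```

The same fitted model can then be asked to predict responses to held-out or future stimuli, which is where the predictive-accuracy evaluation described above comes in.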


Weekly Report 1

June 4-June 11

  1. What I’ve been doing
    a. Literature Review: Stimulus-dependent Maximum Entropy Models of Neural Population Codes, Weak pairwise correlations imply strongly correlated network states in a neural population, The simplest maximum entropy model for collective behavior in a neural network

    b. From these papers, identified the validation methods in use and finalized them in discussion with the mentors

    c. Discussed implementation strategies and tested the existing codebase using demo files and a real dataset

  2. What I’ll do next week
    a. Continue with coding validation methods

  3. Blockers
    a. Was down with fever for a couple of days so could not make much progress on code this week

Weekly Report 2
June 12-June 19

  1. What I’ve been doing
    a. Validation using log-likelihood for spike train data (a stand-in sketch follows this report)
    b. Working with HDNet to fit spike train data
  2. What I’ll do next week
    a. Complete log-likelihood validation and discuss the next set of methods for estimation of predictive accuracy
  3. Blockers
    a. Had some difficulty working with the pre-existing code in HDNet, as not all of it had complete examples, so I discussed that with the mentors
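A minimal sketch of the kind of held-out log-likelihood check referred to above, using an independent-Bernoulli model as a stand-in for an HDNet maximum entropy fit; the data and names are toy and illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
spikes = (rng.random((8000, 5)) < 0.15).astype(int)   # toy (time_bins, neurons) binary data
train, test = spikes[:6000], spikes[6000:]

# Stand-in model: independent Bernoulli firing rates fitted on the training half.
# A pairwise maximum entropy model fitted with HDNet would be validated the same way.
rates = train.mean(axis=0).clip(1e-6, 1 - 1e-6)

def log_likelihood(data, rates):
    """Mean per-bin log-likelihood (nats) of binary spike words under independent rates."""
    ll = data * np.log(rates) + (1 - data) * np.log(1 - rates)
    return ll.sum(axis=1).mean()

print("train LL/bin:", log_likelihood(train, rates))
print("test  LL/bin:", log_likelihood(test, rates))
```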

PS: I can't make more than 3 consecutive replies, so I'll be merging reports into previous replies

Weekly Report 3
June 20 -June 27

  1. What I’ve been doing
    a. Completed code for validation using log-likelihood and most common codewords (a codeword-frequency sketch follows this report); started on higher-order interactions
  2. What I’ll do next week
    a. Complete code for higher-order interactions; find other missing docs in HDNet I could fill
  3. Blockers
    a. None
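A sketch of the most-common-codewords check mentioned above: compare the empirical frequencies of the most frequent spike words with the probabilities a fitted model assigns to them. An independent model stands in for the actual HDNet fit, and all numbers are toy:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(3)
spikes = (rng.random((10000, 4)) < 0.2).astype(int)   # toy binary spike words
rates = spikes.mean(axis=0)

# Empirical frequencies of the most common codewords
words = ["".join(map(str, row)) for row in spikes]
top = Counter(words).most_common(8)

# Probability of each codeword under a stand-in independent model
# (an HDNet maximum entropy fit would replace this function)
def model_prob(word, rates):
    bits = np.array([int(c) for c in word])
    return np.prod(np.where(bits == 1, rates, 1 - rates))

for word, count in top:
    print(word, "empirical:", count / len(words), "model:", round(model_prob(word, rates), 4))
```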

Weekly Report 4
June 28 - July 4

  1. What I’ve been doing
    a. Completed code for validation using higher-order interactions (a triplet-correlation sketch follows this report)
    b. Testing code for pushing to source

  2. What I’ll do next week
    a. Complete documentation and testing to merge code with source

  3. Blockers
    a. None
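A sketch of one way such a higher-order check can look: compare third-order (triplet) correlations in the recorded data against those in samples drawn from the fitted model. Here samples from an independent model stand in for the model samples, and the data are toy:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
data = (rng.random((10000, 5)) < 0.2).astype(int)                     # recorded spike words
model_samples = (rng.random((10000, 5)) < data.mean(axis=0)).astype(int)  # stand-in model samples

def triplet_correlations(x):
    """Third-order central moments over all neuron triplets."""
    z = x - x.mean(axis=0)
    return np.array([(z[:, i] * z[:, j] * z[:, k]).mean()
                     for i, j, k in combinations(range(x.shape[1]), 3)])

emp = triplet_correlations(data)
mod = triplet_correlations(model_samples)
print("max |empirical - model| triplet correlation:", np.abs(emp - mod).max())
```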

Weekly Report 5
July 5 - July 12

  1. What I’ve been doing
    a. Completed all validation methods for maximum entropy models
    b. Restructured the codebase after a code review from the mentors
    c. Started reading about information estimation methods and spike train samplers, for implementation next week
  2. What I’ll do next week
    a. Implement a better sampler for spike train data and/or start coding the information estimation methods
    b. Finish writing tests to merge with origin
  3. Blockers
    a. None

Weekly Report 6
July 13 - July 20

  1. What I’ve been doing
    a. Added a Metropolis-Hastings sampler for spike trains and modified it to work with the validations, like the Gibbs sampler (a minimal sketch follows this report)
    b. Added detailed demos and examples showing how someone can use the code without having to know too much about its internals
    c. Started implementing the CDM entropy estimator in HDNet, after which I'll pick up mutual information estimation
  2. What I’ll do next week
    a. Complete CDM Entropy validation
    b. Continue with NSB and Miller-Madow information estimation
  3. Blockers
    a. None
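A minimal sketch of a single-bit-flip Metropolis-Hastings sampler over binary spike words for a pairwise (Ising-like) energy model; the couplings are random toy values and nothing here reflects HDNet's actual sampler API:

```python
import numpy as np

rng = np.random.default_rng(5)
N = 6
J = rng.normal(scale=0.1, size=(N, N)); J = (J + J.T) / 2; np.fill_diagonal(J, 0.0)
h = rng.normal(scale=0.5, size=N)

def energy(x):
    """Energy of a binary word under a pairwise model: -h.x - x.J.x/2 (target p(x) ~ exp(-E))."""
    return -h @ x - 0.5 * x @ J @ x

def metropolis_hastings(n_samples, burn_in=1000):
    """Single-bit-flip Metropolis-Hastings over binary spike words."""
    x = rng.integers(0, 2, size=N).astype(float)
    samples = []
    for step in range(burn_in + n_samples):
        i = rng.integers(N)
        proposal = x.copy()
        proposal[i] = 1 - proposal[i]
        # accept with probability min(1, exp(-(E_proposal - E_current)))
        if rng.random() < np.exp(energy(x) - energy(proposal)):
            x = proposal
        if step >= burn_in:
            samples.append(x.copy())
    return np.array(samples, dtype=int)

samples = metropolis_hastings(5000)
print("mean firing rates of sampled words:", samples.mean(axis=0).round(3))
```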

Weekly Report 7
July 21 - July 28

  1. What I’ve been doing
    a. Started working on integrating legacy MATLAB codebases into HDNet
    b. Had discussions on research directions the project could take, and added the task of building an interface for .m files to use HDNet in Python (a data-loading sketch follows this report)
  2. What I’ll do next week
    a. Test CDMEntropy interface, add MI methods
  3. Blockers
    a. None
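On the MATLAB integration point, a common Python-side first step is pulling recorded spike matrices out of legacy .mat files. Here is a tiny hedged sketch; the file name and the 'spikes' variable are hypothetical, and the .m-side wrapper that would call HDNet from MATLAB is not shown:

```python
import numpy as np
from scipy.io import loadmat, savemat

# Write a toy legacy-style file so the snippet is self-contained
# (in practice 'spikes' would come from the existing MATLAB pipeline)
savemat("legacy_recording.mat",
        {"spikes": np.random.default_rng(6).poisson(0.2, size=(10, 2000))})

# Load the (neurons x time_bins) spike-count matrix back on the Python side
spike_counts = np.asarray(loadmat("legacy_recording.mat")["spikes"])

# Binarize to the (time_bins, neurons) 0/1 layout used in the analyses above
binary_words = (spike_counts.T > 0).astype(int)
print(binary_words.shape)
```

From there the binary array can be handed to HDNet's spike data structures for fitting and validation.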

Weekly Report 8
July 29 - August 5

  1. What I’ve been doing
    a. Completed interface for CDME with examples
    b. Started an HDNet Contrib extension containing additional features like these for HDNet
  2. What I’ll do next week
    a. Complete MI using CDME
  3. Blockers
    a. None

Weekly Report 9
August 6 - August 13

  1. What I’ve been doing
    a. Completed MI using CDME and tested it on data
    b. Completed a data loader for raw stimulus data and started testing NSB for MI estimates
  2. What I’ll do next week
    a. Complete documentation and the final work report
  3. Blockers
    a. None

Weekly Report 10
August 13 - August 20

  1. What I’ve been doing
    a. Wrote documentation and examples for usage
    b. Completed final evaluation report

Final Report

ShivenTripathi/GSoC-INCF-HDNet/FinalReport