GSoC 2022 Project Idea 4.4: A GPU-accelerated model of the mouse primary visual cortex (175 h)

The Allen Institute has produced a data-driven model of the mouse primary visual cortex (Models of the Mouse Primary Visual Cortex - brain-map.org). Using GeNN (http://genn-team.github.io/genn/) - our GPU-accelerated spiking neural network simulator - you will reproduce the point neuron version of this model. As modern GPUs are capable of running models of this scale in real-time, ambitious students could consider connecting this model to live input from a webcam.

Mentors: This project will be supervised by Dr James Knight @jamie and Prof Thomas Nowotny @tnowotny

Skills: Python programming, Maths, Computational Neuroscience

Tech keywords: Python

Hi,
I am a total beginner to open source. How can I learn more about this project?

I would say that the learning curve on this project might be rather steep if you have no experience with spiking neural network models but, nonetheless, for anyone interested in getting started on this project, I would suggest having a look at some of the following:

The point neuron model is built using GLIF neurons from the Allen Institute database (more information about the model is provided in https://www.nature.com/articles/s41467-017-02717-4). I think a good starting point would be to try downloading an example set of Allen Institute parameters (as described in Generalized LIF Models — Allen SDK dev documentation) and start thinking about how to implement them using PyGeNN. We did some previous work on this, implementing PyNN’s GIF neurons using a template and code to generate PyGeNN code but, I think, the Allen Institute models may require some additional functionality.
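For anyone wanting a concrete picture of that first step, here is a hedged sketch of pulling a handful of LIF-style parameters out of a downloaded GLIF neuron_config JSON file. The key names used here are illustrative guesses rather than the verified Allen SDK schema, so check a real downloaded file before relying on them:

```python
import json

def load_glif_config(path):
    """Load a GLIF neuron_config JSON file and return the parameters
    needed to build a leaky integrate-and-fire neuron in PyGeNN.

    NOTE: the key names below are illustrative guesses, not the
    verified Allen SDK schema -- adjust after inspecting a real file.
    """
    with open(path) as f:
        config = json.load(f)
    # Map (hypothetical) Allen-style names onto the quantities a LIF
    # implementation needs: capacitance, input resistance, resting
    # potential and spike threshold.
    return {
        "C": config["C"],
        "R": config["R_input"],
        "E_L": config["El"],
        "V_th": config["th_inf"],
    }
```

The returned dictionary then maps naturally onto the parameter values a PyGeNN neuron model would be given.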

Any questions, don’t hesitate to ask (ideally in this forum so other potential students can benefit)


Hi Jamie,

I’m Anh, a prospective candidate for this project. I’m doing my PhD in decision making and have a basic understanding of SNNs and lower-intermediate Python programming skills. I’ve been looking at the resources you posted and compiling a proposal from them.

That said, what baseline of computational neuroscience and programming knowledge/skills do you expect from a contributor, given their learning speed and time commitment? Also, how much time and effort are you willing to spend on mentoring if the project turns out to be more complicated than expected and thus presents a steep learning curve for the contributor?

Anh.

Hi Anh,

Thank you for your interest in this project! I think a basic understanding of SNNs and Python programming is probably sufficient background for this one. This project is something we’ve wanted to get done for some time so we are more than happy to invest time in mentoring/helping out to get it done.

Jamie

Cool! Thanks, that’s very encouraging 🙂

Regarding the project, the approach one would take, as I gathered, is to use the guideline from GeNN/PyGeNN about defining a model as a framework, and modify the details of the model, as provided here, accordingly?

Anh.

I don’t quite know what you mean by

defining a model as a framework

But you will indeed define a PyGeNN model as described in the guidelines you linked; then a sensible approach would be to write code to parse the data from that dropbox repository and turn it into PyGeNN neuron and synapse populations.
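As a toy illustration of that parsing step (the record format below is a made-up simplification, not the actual layout of the Allen model’s files), grouping flat node records by population label is the natural first pass before creating PyGeNN populations:

```python
from collections import defaultdict

def group_nodes_by_population(nodes):
    """Group flat node records by their population label so each
    group can later become one PyGeNN neuron population.

    The record format here is a hypothetical simplification of the
    kind of node data the Allen model's network files describe.
    """
    populations = defaultdict(list)
    for node in nodes:
        populations[node["pop_name"]].append(node["node_id"])
    return dict(populations)

nodes = [
    {"node_id": 0, "pop_name": "e4"},  # layer-4 excitatory
    {"node_id": 1, "pop_name": "e4"},
    {"node_id": 2, "pop_name": "i4"},  # layer-4 inhibitory
]
print(group_nodes_by_population(nodes))
# Each group's size and parameters would then be passed to
# GeNNModel.add_neuron_population(...) to build the model.
```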

I meant using the guideline as a framework for the project, not defining a model as a framework.
And yes, thanks for the clarification 🙂

Hi Jamie,

I created a github repo to document my progress on the PyGeNN project, do you mind if I invite you as a collaborator? I also posted questions there regarding implementing the Allen Institute’s GLIF neuron models; I would appreciate your feedback.

In addition, I have been trying to understand how GeNN, PyNNGeNN and PyGeNN work, and how they interact with each other. As far as I understand, GeNN is the C++ library and PyGeNN is the Python interface to GeNN, built on top of a SWIG wrapper. However:

  • How does PyNNGeNN work? How are PyGeNN and PyNNGeNN different in terms of functionality and usability?

  • Would mismatches in model definitions between GeNN and PyNNGeNN be considered bugs that hamper a model’s usability? For example, the threshold_condition_code of the PoissonNew model in GeNN is different from its counterpart in PyNNGeNN.

  • Also, if a PyGeNN user can retrieve GeNN models, but if a model is implemented in PyGeNN, can GeNN users access it as well?

I created a github repo to document my progress on the PyGeNN project, do you mind if I invite you as a collaborator? I also posted questions there regarding implementing the Allen Institute’s GLIF neuron models; I would appreciate your feedback.

Sure! My github username is neworderofjamie. Questions about the GLIF models might be better asked here, though, so other applicants can learn from them as well and so I have fewer places to check!

In addition, I have been trying to understand how GeNN, PyNNGeNN and PyGeNN work, and how they interact with each other. As far as I understand, GeNN is the C++ library and PyGeNN is the Python interface to GeNN, built on top of a SWIG wrapper.

Exactly!

  • How does PyNNGeNN work? How are PyGeNN and PyNNGeNN different in terms of functionality and usability?

The names are sadly very confusing, but PyGeNN aims to expose all the functionality of the underlying GeNN library to Python users whereas PyNNGeNN uses PyGeNN to implement a backend for PyNN (NeuralEnsemble), which is a cross-simulator way of describing SNN models.

  • Would mismatches in model definitions between GeNN and PyNNGeNN be considered bugs that hamper a model’s usability? For example, the threshold_condition_code of the PoissonNew model in GeNN is different from its counterpart in PyNNGeNN.

Not necessarily and not in this case. The PyNN standard requires that “Poisson” neurons have a start and stop time so we have added those to this model.

  • Also, if a PyGeNN user can retrieve GeNN models, but if a model is implemented in PyGeNN, can GeNN users access it as well?

No, they can’t, which is why we implement ‘standard’ models in GeNN directly.

Thank you for the prompt responses.

I’m copying here the questions I posted on Github:

  • Regarding the PyNNGeNN/PyGeNN template: what are the roles of ‘translations’ and ‘extra_param_values’?

  • Regarding implementing the models: the GLIF models have 5 variants (GLIF1–5) with different reset and parameter optimization rules, so is it possible to define one general neuron model flexible enough to cover all 5, or does each have to be implemented separately? In either case, must the model(s) be defined in the C-like code strings, or is Python sufficient?

I’d also appreciate it if you could look at the code and give me feedback on how to proceed further.


No problem!

Regarding the PyNNGeNN/PyGeNN template: what are the roles of ‘translations’ and ‘extra_param_values’?

I don’t think you should worry too much about the workings of PyNNGeNN - for this project you will be working with the lower-level PyGeNN library.

Regarding implementing the models: the GLIF models have 5 variants (GLIF1–5) with different reset and parameter optimization rules, so is it possible to define one general neuron model flexible enough to cover all 5, or does each have to be implemented separately? In either case, must the model(s) be defined in the C-like code strings, or is Python sufficient?

This is somewhat up to you. The way we did this in the models used by PyNN GeNN was to template code strings and insert the correct reset conditions etc. into them. However, if there are only 5 variants of GLIF models used by the Allen Institute models, I think it would be much easier to just define 5 separate models. Neuron models in PyGeNN are always described using the C-like language, but this can be done from Python.
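To make the templating option concrete, here is a minimal sketch of the code-string idea in plain Python. The simulation and reset rules below are illustrative placeholders, not the real GLIF equations, and the PyGeNN function named in the docstring is only indicated approximately:

```python
# A GeNN-style membrane update as a plain string; the C-like code is
# authored in Python and handed to PyGeNN at model-definition time.
LIF_TEMPLATE = """
V += (-(V - E_L) + R * Isyn) * DT / Tau;
"""

# Illustrative placeholder reset rules, NOT the actual GLIF equations.
RESET_RULES = {
    "glif1": "V = V_reset;",                      # hard reset
    "glif2": "V = E_L + f_v * (V - E_L) - d_v;",  # fitted linear reset
}

def make_glif_code(variant):
    """Return (sim_code, reset_code) strings for the requested GLIF
    variant, ready to pass to something like PyGeNN's custom neuron
    model creation function."""
    return LIF_TEMPLATE.strip(), RESET_RULES[variant]

sim, reset = make_glif_code("glif2")
```

With only five variants, the alternative of writing five small, explicit model definitions is probably easier to read and debug than one heavily templated generator.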

Hi @jamie ,

I have a question about the best way to have a PyGeNN model operate on videos.

The simplest approach (I think) would be to convert the videos into spikes beforehand. For this project, that could mean feeding a video into the LGN model from the Billeh paper, which would output a time-series of spikes. I would incorporate this into the model by adding an “LGN” neuron population with a SpikeSourceArray containing the spiking data. I think the downside of this approach would be that the entire video needs to be processed before the simulation is run.
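As a toy stand-in for that offline step (this is simple Poisson rate coding, not the Billeh LGN filter model), one could turn per-pixel intensities into the (neuron, time) spike pairs that a SpikeSourceArray-style population consumes:

```python
import random

def frames_to_poisson_spikes(frames, dt=1.0, max_rate_hz=100.0, seed=42):
    """Toy offline conversion: treat each pixel's intensity (0..1) as a
    Poisson firing rate and emit (neuron_id, time_ms) spike pairs of
    the kind a SpikeSourceArray-style population can be loaded with.

    This is NOT the Billeh LGN model -- just a rate-coding placeholder.
    Each frame is a flat list of intensities, one per LGN neuron.
    """
    rng = random.Random(seed)
    spikes = []
    for frame_idx, frame in enumerate(frames):
        t = frame_idx * dt
        for neuron_id, intensity in enumerate(frame):
            # Probability of a spike in one dt-millisecond bin
            p = intensity * max_rate_hz * dt * 1e-3
            if rng.random() < p:
                spikes.append((neuron_id, t))
    return spikes
```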

I’m wondering if there is a better way to do this. Can you pass spikes into a PyGeNN model that is already built/loaded/stepping through time? If so what is the best practice to take advantage of the GPU-acceleration? The application I have in mind is simulating a network in real time (video input from a webcam).

My first stab at this would be to add spikes within the time-stepping loop, so something like:

while model.t < SIMULATION_STOP_TIME:

    # Convert new frame to spikes (both helper functions are hypothetical)
    frame = get_frame_from_webcam_buffer()
    if frame is not None:  # truthiness of a whole image array is ambiguous
        spikes = convert_frame_to_spikes(frame)

        # Apply spikes to model
        # Not sure if this is the correct function or how to pass in the spikes
        model.push_spikes_to_device()

    # Simulate next time step of model
    model.step_time()

I guess the potential bottleneck of this is the convert_frame_to_spikes function, which might run slower than real-time on a CPU depending on how complex the conversion is.

Which makes me wonder how one could incorporate a GPU-accelerated method of converting video to spikes. These might be total newbie questions, but:

  1. If I created some custom neural network that converted images to spikes, could that run on a single GPU alongside a PyGeNN model? At first glance this seems pretty complicated, and from what I’ve read, GeNN seems optimized to take advantage of an entire GPU?
  2. Would it then make more sense to process the video frames within PyGeNN? The way I’m thinking of doing this would be to push each frame’s data to the device as a variable (array of floats), and for the LGN neuron populations to have a “sim_code” C-like snippet that carries out the conversion from video frame to either a spike/no-spike. The snippet would be individualized for each neuron’s receptive field / spatiotemporal filters. Would this be plausible/actually take advantage of GPU-acceleration?
  3. Maybe there’s some other way that’s better?

Hope these questions make sense - trying to think about the approaches that will result in the best performance.

Thanks!

Hi William,

So, firstly, using a SpikeSourceArray is exactly the right way to inject spikes calculated offline by the LGN model. Doing this online is indeed trickier but you have several options:

  1. Push spikes every timestep using code very similar to your pseudo-code. However, this is operating on GeNN’s internal spike representation and isn’t very efficient
  2. Write some sort of custom spike source model which encodes a timestep of spikes in some more efficient way e.g. https://github.com/neworderofjamie/genn_examples/blob/master/dvs_optical_flow/model.cc#L7-L15 only takes 1 bit per neuron
  3. Upload video frames to the GPU and convert them there. https://github.com/neworderofjamie/genn_examples/blob/master/eprop/s_mnist/models.h#L6-L45 does this sort of thing to convert images to a weird sequential representation.
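The one-bit-per-neuron idea in option 2 can be sketched on the host side in a few lines of Python (this only shows the packing scheme, not the GeNN spike source model that would consume it):

```python
def pack_spikes(spike_flags):
    """Pack one timestep of boolean spike flags into 32-bit words,
    mirroring the one-bit-per-neuron encoding idea from the linked
    dvs_optical_flow example.  Host-side sketch only."""
    words = [0] * ((len(spike_flags) + 31) // 32)
    for i, fired in enumerate(spike_flags):
        if fired:
            words[i // 32] |= 1 << (i % 32)
    return words

def unpack_spikes(words, num_neurons):
    """Recover the list of neuron indices that fired, as the device-side
    spike source would when decoding the uploaded words."""
    return [i for i in range(num_neurons) if (words[i // 32] >> (i % 32)) & 1]
```

At 1 bit per neuron, a timestep for the full ~230,000-neuron V1 model fits in a few tens of kilobytes, which is far cheaper to upload every timestep than per-spike index lists.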

In the case of the LGN model, you’d need to read up more about how it actually works, but I wonder if you could do the spatial filtering using some other library, e.g. OpenCV, and then upload the spatially filtered images to the GPU and do the temporal filtering and spike generation using a custom GeNN neuron model.

Hi @jamie ,
I am an undergrad developing methods for visually reconstructing what subjects see by inferring the hidden-layer features of DNNs from fMRI data.
I’m very interested and would like to take part in this project. I have a question about GSoC itself.
Can multiple contributors tackle this same project? or only the selected one? Thank you.

Hi, @malin
Sorry to bother you, but could you answer the question above: “Can multiple contributors tackle this same project? or only the selected one?”

Hi!

Depends very much on the type of project. It is not formally disallowed, but GSoC projects must be completely independent of each other, so that contributors don’t hinder each other’s progress in any way. Some project ideas are general or flexible enough to have several possible implementations, in that case two contributors on the same project is theoretically possible.

Hi, @jamie
I decided to apply for this project and have a question about the application template. It has a section:

Your plan for communication with mentors
How, and how often, will you and the mentors keep in contact? (Via weekly video calls, via email, via chat…?)

Is there any preferred way and frequency of contact?

Thank you.

Great - multiple contributors are definitely allowed to and should apply!

I think a weekly video meeting (e.g. Zoom) with asynchronous chat (e.g. Slack) or email in between is a sensible strategy.


Hi!

Unfortunately, I only just became aware of this project, so I missed the deadline. I just wanted to say that I find this project exciting since I, too, am working on implementing a model of the micro-circuitry of the cortical laminar architecture using the Brian2 simulator (~80,000 neurons, two columns, with Izhikevich neurons).

I wish you a great experience!
