GeNN is a framework for accelerating spiking neural network simulations on graphics processing units (GPUs). GeNN was originally developed for computational neuroscience models but, in recent years, there have been exciting advances in using spiking neural network (SNN) models trained with biologically plausible learning rules for Machine Learning (ML). Networks with these learning rules are well suited to acceleration with GeNN and, in this project, we propose that the student implements one or more of the recently developed methods (SuperSpike (https://arxiv.org/pdf/1705.11146.pdf), e-prop (https://www.biorxiv.org/content/10.1101/738385v3), or Senn’s dendritic microcircuit for error backpropagation (https://doi.org/10.1371/journal.pcbi.1004638)) and benchmarks it on GeNN. A stretch goal would be to help further optimise GeNN based on the benchmarking results.
Skills required: C/C++ and experience with SNNs; previous knowledge of GeNN and/or experience with CUDA could be helpful. If knowledgeable in Python, some of this could be done in PyGeNN, lessening the C++ programming requirements.
Mentors: Jamie Knight (J.C.Knight@sussex.ac.uk), James Turner (J.P.Turner@sussex.ac.uk), and Thomas Nowotny (firstname.lastname@example.org)
I am Kanishk (GitHub profile), an undergraduate student from India majoring in Computer Science. My interests lie in exploring interdisciplinary problems related to Deep Learning. I usually work with Python, PyTorch and OpenCV. I am also familiar with C++, which was my first programming language.
Originally, I came to this community as an aspirant starting out for GSoC’20, but its vision has completely altered my thought process. I would love to get involved in this project.
I’m looking forward to learning from and contributing to this amazing community.
P.S. - Though I’m currently reading the SuperSpike paper and familiarizing myself with PyGeNN, I would also appreciate pointers to any other important resources I should go through to gain a better understanding of SNNs.
Apologies for not responding sooner to your message (or your email). Glad to hear you’re excited about GSoC and this project. One thing you could definitely start looking at are the PyGeNN tutorials. GeNN is currently under fairly heavy development so I would recommend you use the 4.1 release.
It’s good to hear from you. I’ll be careful about which release I’m working with.
I am Alish Dipani, an undergraduate student from India. My research interests are applications of Artificial Intelligence in biology and neuroscience. I already have some research experience with Spiking Neural Networks and I am highly interested in applications of SNNs. Currently, I am working as a research intern at TCS innovation labs where I am working on hand gesture recognition using SNNs on SpiNNaker. I have attached my resume here.
I also have experience with Open Source as I participated in GSoC 2019 with the Ruby Science Foundation (Project Link). I worked on Rubyplot: an advanced plotting library for Ruby. I was also selected for the Ruby Association Grant 2019 (Link) for further development on Rubyplot.
This project and project number 6 closely match my interests, and therefore I would love to contribute to them.
Since I do not have any experience with GeNN, I will start by familiarising myself with GeNN and PyGeNN. Please guide me on the further steps to take towards contributing.
Looking forward to working with you and being a part of the INCF community.
I am Abhirami S, an undergraduate student from India. I am interested in the fields of Computational Neuroscience and Deep Learning. Currently, I am working on the skills that I understand are important. The prospect of contributing to this organization is exciting, and I would appreciate any advice that could help me in the process.
My name is Oksana, and I am an undergraduate student as well. My current field of study is Biomedical Electronics, but I also pay attention to Computational Neuroscience (and hopefully will soon be working in Neuromorphic Engineering). Previously, I did one small research project related to machine learning (especially the feature engineering part). I have also written some small ANNs (the feed-forward and backpropagation parts) from scratch for a Neural Networks course, so at least I have already touched the field (the backpropagation variants introduced in the papers did not seem strongly different to me).
So, like everybody here, I’d like to contribute to the project and to my personal skill set. :)
P.S. While exploring GeNN, I found one bug which others might run into as well. If you download the GeNN 4.1.0 source code and use PyGeNN, you may get an error related to GPU selection in genn_model.py. It can be fixed simply by adding an underscore: instead of self.selected_gpu = selected_gpu, you should use self._selected_gpu = selected_gpu to make everything work. Good luck!
Thank you all for your interest in this project - I think it’s shaping up to be a really exciting one! @Oksana_Savenko very good spot on the selected_gpu issue - there are a number of similar minor Python issues with the 4.1.0 release so I might try and make a 4.1.1 minor release in the next couple of days with this fix included.
As this is more of a research project than a software engineering one, I think the most important things you can do to prepare for writing a great proposal later in the month are to read the papers linked in the project description, familiarise yourself with the PyGeNN tutorials (linked in a previous post) and maybe try to implement some sort of simple model with STDP learning. One possible idea would be to replace the last layer of static weights used in the PyGeNN tutorial with simple STDP learning (the additive rule here would hopefully be sufficient) and try to train it using the MNIST labels.
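To make the additive rule concrete, here is a plain-Python sketch of a pair-based additive STDP weight update (illustrative only; in PyGeNN this logic would live in the C code strings of a custom weight update model, and the parameter names and default values here are assumptions, not GeNN's):

```python
import math

def stdp_additive(w, dt, a_plus=0.01, a_minus=0.012,
                  tau_plus=20.0, tau_minus=20.0,
                  w_min=0.0, w_max=1.0):
    """Additive STDP update for one spike pair, dt = t_post - t_pre (ms).

    Pre-before-post (dt > 0) potentiates, post-before-pre (dt < 0)
    depresses, and the weight is clamped to [w_min, w_max].
    """
    if dt > 0:
        # potentiation, exponentially weighted by the spike-time gap
        w += a_plus * math.exp(-dt / tau_plus)
    elif dt < 0:
        # depression, likewise exponentially weighted
        w -= a_minus * math.exp(dt / tau_minus)
    # additive rule: hard clamp rather than weight-dependent scaling
    return min(w_max, max(w_min, w))
```

In an actual PyGeNN weight update model, the same logic would be split between the code run on presynaptic spikes and the code run on postsynaptic spikes, each using the stored spike time of the other side.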
@Oksana_Savenko pointed this out correctly, but @jamie, I believe this issue was already fixed on GeNN master a while back.
Need a little help, please.
Could someone help me figure out how to get properties like the sim code of the default NeuronModels in PyGeNN? I saw there was a getSimCode() method, but I cannot apply it directly to a model, e.g.
I know that this is probably a very naive approach, but I could not find anything better.
I am also curious whether it is possible to modify a default NeuronModel by adding some variables and sim code? If yes, please point me to where I can read about it.
Besides, I would be very grateful if someone could share a link where writing simulation/reset/threshold-condition code is described (I get it intuitively, but a clear understanding is always better).
Thanks for your attention!
That is a good question! You should be able to do this with:
As you have probably figured out, GeNN is a C++ library wrapped using SWIG so it’s accessible in Python and, in C++, you can inherit from model classes to achieve this, e.g. here (although the syntax gets pretty annoying for anything beyond swapping out sim code or whatever). I’m honestly not 100% sure how you would achieve the same through the Python wrapper, but I imagine it would be possible somehow - feel free to experiment!
The process of defining a neuron model is reasonably well documented in the manual although the examples used are in C++ (the manual does include Python class documentation but, currently, not that much more)
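As a rough intuition for how the sim, threshold-condition and reset code snippets fit together, here is a plain-Python sketch of a single timestep of a leaky integrate-and-fire neuron (not GeNN API; GeNN's actual code strings are C snippets, and all names and constants below are illustrative assumptions):

```python
import math

def lif_step(v, i_syn, dt=1.0, tau_m=20.0, v_rest=-60.0,
             v_thresh=-50.0, v_reset=-60.0, r_m=1.0):
    """One timestep of a leaky integrate-and-fire neuron.

    The three pieces mirror a GeNN neuron model's code strings:
    integration ("sim code"), spike test ("threshold condition code")
    and post-spike behaviour ("reset code").
    """
    # "sim code": exponential-Euler update of the membrane potential
    alpha = math.exp(-dt / tau_m)
    v = v_rest + (v - v_rest) * alpha + r_m * i_syn * (1.0 - alpha)
    # "threshold condition code": has the neuron spiked this step?
    spiked = v >= v_thresh
    # "reset code": only applied when the threshold condition fired
    if spiked:
        v = v_reset
    return v, spiked
```

Stepping this function in a loop plays the role of the simulator's main loop, which is roughly what GeNN generates (as GPU code) from the three code strings.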
Hello everyone, I was wondering if we could implement the above-mentioned training methods (SuperSpike, e-prop) directly in GeNN, instead of doing this in Python with PyGeNN?
Totally! If you’re confident using C++, using GeNN directly from C++ is better documented and there is a performance advantage in some scenarios.
Sounds great. One more thing I want to clarify is whether the benchmarking should be for both CUDA and OpenCL or only for CUDA, as the development of GeNN’s OpenCL backend should be almost complete by that time.
The main focus of this project is slightly higher level, in that many of these algorithms haven’t been implemented in a ‘real’ event-driven SNN simulator like GeNN and we’d like to know how they perform compared to TensorFlow-based implementations, e.g. BindsNET. But it would be a very interesting addition to the project to compare AMD and NVIDIA hardware.
For the sake of other students, @Obaid51 is part of a team of undergraduates adding an OpenCL backend to GeNN.
You could also consider applying for Project 7, as there’s definitely scope for a nice benchmark at the end of that: using GeNN directly vs using GeNN from PyNN, and CUDA vs OpenCL!
Hi Jamie, I am currently writing a proposal for this project.
Can you guide me a little about which learning method GeNN currently uses for Recurrent SNNs training?
To the best of my knowledge, no one has done any work on recurrent SNNs (in the machine learning sense) using GeNN as, until quite recently, GeNN has mostly been used for computational neuroscience research rather than machine learning. In general, GeNN only includes one built-in learning rule - an STDP rule with a piecewise-linear kernel. However, GeNN is intended to be very flexible, so we have implemented a wide range of learning rules in our own models (e.g. https://github.com/neworderofjamie/genn_examples/blob/master/common/pfister_triplet.h, https://github.com/neworderofjamie/genn_examples/blob/master/common/stdp_additive.h or https://github.com/neworderofjamie/genn_examples/blob/master/common/vogels_2011.h).
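For intuition about how e-prop-style rules differ from classic STDP, here is a heavily simplified plain-Python sketch of a single eligibility-trace update (this is not the full e-prop algorithm from the paper and not GeNN code; all names and constants are assumptions). The key idea is that each synapse accumulates a purely local trace, and the weight change is that trace gated by a learning signal broadcast from the readout error, so no backpropagation through time is needed:

```python
def eprop_synapse_step(e_trace, w, z_pre, pseudo_deriv,
                       learning_signal, lr=1e-3, alpha=0.9):
    """One timestep of a simplified e-prop-style synapse.

    e_trace:         local eligibility trace (decays by factor alpha)
    z_pre:           filtered presynaptic activity
    pseudo_deriv:    surrogate derivative of the postsynaptic spike fn
    learning_signal: error signal broadcast to the postsynaptic neuron
    """
    # local, forward-in-time trace update
    e_trace = alpha * e_trace + pseudo_deriv * z_pre
    # weight update = learning signal x eligibility trace
    w += lr * learning_signal * e_trace
    return e_trace, w
```

Because the trace only depends on quantities local to the synapse, this kind of rule maps naturally onto GeNN's per-synapse weight update code, which is part of why it is a good fit for this project.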
We’ve just made a new release of GeNN (4.2.0) which includes several fixes for PyGeNN which you are likely to find useful if you’re building your own models as part of your application.