There has recently been a lot of interest in converting pre-trained deep convolutional networks of artificial neurons into spiking neural networks (SNNs) for low-power inference on neuromorphic hardware. While GeNN is unlikely to compete with neuromorphic hardware in terms of energy efficiency, it is a useful and flexible platform for exploring this research area.
The first stage of this project will be to build a Python library which converts networks trained using TensorFlow into GeNN models via GeNN’s Python interface, using some of the techniques discussed by Diehl et al. [1]. Possible extensions would then include modifying GeNN to implement a more efficient convolution connector and perhaps beginning to investigate some recent attempts to train deep SNNs [2].
Skills required: TensorFlow, Python, C++
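To give a concrete flavour of the conversion techniques in [1], the core of the “data-based” weight normalisation can be sketched in a few lines of NumPy: record each layer’s activations on a sample of training data, then rescale that layer’s weights so its maximum activation stays within the firing-rate range a spiking neuron with a fixed threshold can represent. This is an illustrative sketch only; the function name and the list-of-matrices layout are my own assumptions, not an existing API:

```python
import numpy as np

def normalize_weights(weights, activations):
    """Data-based weight normalisation in the spirit of Diehl et al. [1].

    weights     -- list of per-layer weight matrices from the trained ANN
    activations -- list of per-layer activation arrays recorded on sample
                   training inputs (one array per layer)
    Returns a new list of rescaled weight matrices.
    """
    normalized = []
    previous_max = 1.0  # inputs to the first layer assumed already in range
    for w, acts in zip(weights, activations):
        layer_max = np.max(acts)  # largest activation seen in this layer
        # Undo the scaling already applied to the previous layer, then
        # scale down by this layer's own maximum activation.
        normalized.append(w * previous_max / layer_max)
        previous_max = layer_max
    return normalized
```

With this scaling, no layer’s maximum input drive exceeds the neuron threshold, which is what lets rate-coded IF neurons approximate the ReLU activations of the original network.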
References
[1] Diehl, Peter U., et al. "Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing." 2015 International Joint Conference on Neural Networks (IJCNN). IEEE, 2015.
[2] Zenke, Friedemann, and Surya Ganguli. "SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks." Neural Computation 30.6 (2018): 1514-1541.
Mentors: Jamie Knight & Thomas Nowotny, University of Sussex, UK.
Several people have emailed me about getting started on this project. As well as reading the papers already suggested, the following could be helpful introductory material:
GeNN
I think for this project you will probably end up using GeNN’s Python interface (PyGeNN) rather than using GeNN directly from C++. This mode of operation is available in the master branch of GeNN but, I’m afraid, is not currently particularly well documented. The best resource currently available is the set of tutorials at https://github.com/neworderofjamie/new_genn_tutorials. Towards the end of ‘slides.pdf’ there are instructions for installing PyGeNN (which is a little tricky), and the repository contains a Python version of each tutorial which you can hopefully match up with the C++ versions explained in the slides.
More papers
This recent paper might be of interest to you: it implements SNN inference using PyTorch itself, which seems fairly inefficient, but the tool the authors create is quite similar to the sort of thing I imagine this project resulting in:
Additionally, this preprint describes another promising technique for converting deep convolutional networks with VGG and ResNet architectures into SNNs.
Hi! I am Shardul Parab, a fourth-year student studying Computer Science at the Birla Institute of Technology and Science, Pilani (Hyderabad), India. I am very positive about the potential of SNNs and would love to contribute to this project for GSoC 2019. I have already gone through the literature given here and tried out the examples for GeNN. I am currently trying to figure out ways to build the Python library for conversion.
I would like to ask whether there is any further literature on SNNs I should read, or any other tasks I could do, in order to get a better grasp of this project.
Thanks for your continued interest in our project. I’ve had a quick look around and https://arxiv.org/pdf/1611.05141.pdf may also be an interesting paper on conversion of trained ANNs to SNNs. Additionally, https://www.frontiersin.org/articles/10.3389/fncom.2015.00099/full might be interesting as they use a more biologically plausible, local learning rule to learn MNIST (something we would also ideally like to be able to support in this framework). Have you managed to successfully get PyGeNN working?
I had a question regarding the goal of the project. My understanding is that it involves building a Python library with methods capable of converting a given trained TensorFlow model into the corresponding GeNN model, according to the methods specified in the paper. But I’m still uncertain about the structure of the library. I’m assuming it contains methods responsible for converting each kind of layer into its equivalent SNN form. Is this right, or is it to be done in some other way? Kindly clarify.
That is indeed the overview of the project. Our top-level design requirements are that the resultant library is easy to extend when new algorithms for converting ANN architectures to SNNs come along (there are three within the links in this thread) and that its usage feels intuitive to TensorFlow users; beyond that, the design is very much up to you. I imagine that the first stage of this project will largely consist of figuring out where this library will hook into TensorFlow and GeNN, and figuring out its design.
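One way to satisfy the extensibility requirement above is a registry that maps TensorFlow layer types to converter functions, so a new conversion algorithm just registers its own converters. The sketch below is purely illustrative: every name in it (`LAYER_CONVERTERS`, `register`, `convert`) is a hypothetical design, not an existing API, and the converters return plain descriptions rather than building a real GeNN model:

```python
# Hypothetical skeleton for the conversion library: a registry mapping
# TensorFlow layer class names to converter functions.
LAYER_CONVERTERS = {}

def register(layer_type):
    """Decorator registering a converter for one TF layer class name."""
    def wrap(fn):
        LAYER_CONVERTERS[layer_type] = fn
        return fn
    return wrap

@register("Dense")
def convert_dense(params):
    # In a real implementation this would add a neuron population and a
    # densely connected synapse group to the GeNN model; here we just
    # return a description of what would be built.
    return {"neurons": params["units"], "connectivity": "dense"}

def convert(layers):
    """Convert a list of (layer_type, params) pairs, layer by layer."""
    spec = []
    for layer_type, params in layers:
        if layer_type not in LAYER_CONVERTERS:
            raise NotImplementedError(
                "No converter registered for layer type %r" % layer_type)
        spec.append(LAYER_CONVERTERS[layer_type](params))
    return spec
```

The attraction of this shape is that supporting a new layer type, or an entirely new conversion scheme, never requires touching the traversal logic in `convert`.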
As we seem to be getting an unprecedented level of interest in this project and lots of requests for pre-project tasks, we have come up with a ‘getting started’ project: reproduce the model and results presented in this paper using PyGeNN (see the previous links in this thread for installation instructions and tutorials).
This existing (unrelated) Python example should give a lot of clues as to how to define the required neuron and synapse models in GeNN, and this (also unrelated) Python example should give some clues on network construction and reading back spikes from the GPU.
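To give a concrete idea of the kind of dynamics such a model involves, here is a plain-Python sketch of a leaky integrate-and-fire neuron of the sort you would express in a GeNN neuron model’s sim_code string: a forward-Euler voltage update, a threshold test, and a reset. All parameter values here are arbitrary illustrations, not taken from the paper:

```python
def simulate_lif(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Simulate one LIF neuron driven by a list of input currents.

    Returns the list of time steps at which the neuron spiked.
    """
    v = v_reset
    spikes = []
    for t, i_in in enumerate(input_current):
        # Forward-Euler step of dV/dt = (-V + I) / tau
        v += dt * (-v + i_in) / tau
        if v >= v_thresh:       # threshold condition
            spikes.append(t)
            v = v_reset         # reset membrane potential after a spike
    return spikes
```

In GeNN the update and threshold would be written as code strings in the neuron model definition rather than as a Python loop, but the arithmetic is the same.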
If anyone has any further questions, don’t hesitate to ask!
GeNN has a fallback CPU backend which can be used without CUDA. The installation steps are as follows (apologies that the PyGeNN steps for CPU mode weren’t clear in the new tutorial slides):
1. Make sure you have SWIG installed.
2. Check out GeNN from the master branch on our GitHub.
3. Set the GENN_PATH environment variable to point to the directory GeNN was checked out into.
4. From the GeNN directory, build GeNN as a dynamic library using: make -f lib/GNUMakefileLibGeNN CPU_ONLY=1 DYNAMIC=1 LIBGENN_PATH=pygenn/genn_wrapper/
5. On Mac OS X, set your newly created library’s name with: install_name_tool -id "@loader_path/libgenn_CPU_ONLY_DYNAMIC.dylib" pygenn/genn_wrapper/libgenn_CPU_ONLY_DYNAMIC.dylib
6. Install the Python module with setuptools using: python setup.py develop
During the installation, the make step gives me the following error:
"
make: *** No rule to make target ‘/lib/obj_CPU_ONLY_DYNAMIC/binomial.o’, needed by ‘pygenn/genn_wrapper/libgenn_CPU_ONLY_DYNAMIC.so’. Stop.
"
Jamie, can you verify the make rules in lib/GNUMakefileLibGeNN? Did the installation work for anyone else?
Hi,
I’m not able to identify the format in which the sim_code for the neuron model is written. Is there a reference for that? Also, does it support derivatives, or do I have to solve the differential equations in the paper by hand and convert them to exponential form first?
Hi @jamie. I’m getting the following error while installing GeNN:
process_begin: CreateProcess(NULL, getconf LONG_BIT, …) failed.
make: lib/GNUMakefileLibGeNN:15: pipe: No error
The system cannot find the path specified.
The system cannot find the path specified.
mkdir C:\Users\IITI\genn/lib/obj_DYNAMIC
The syntax of the command is incorrect.
make: *** [lib/GNUMakefileLibGeNN:144: C:\Users\IITI\genn/lib/obj_DYNAMIC] Error 1
My bad! I was trying to install it on Windows. Even the GitHub page says that Windows is not supported; I somehow missed that.
Anyway, thanks for the reply.
Irritatingly, SWIG doesn’t generate any code for diagnosing these issues, so the best thing to do is to modify genn_wrapper.py (which should be in $GENN_PATH/pygenn/genn_wrapper if you installed with python setup.py develop). Change it around line 19 so that the import fallback reads:
except ImportError as ex:
    print(ex)
    import _genn_wrapper
    return _genn_wrapper
And let me know what it says. Sorry that the installation process is a little tricky right now - we’re working on supplying pre-built wheels for the next GeNN version.