Brian can parallelize simulations over multiple processor cores by making use of the OpenMP framework. However, in its current state Brian does not yet make full use of the parallelization potential, in particular for synaptic propagation.
The aim of this project is to improve the OpenMP support, by:
- analyzing the connectivity structure and type of synaptic interaction to decide whether trivial parallelization is safely possible
- benchmarking and implementing parallelization approaches for non-trivial situations
- [for a 6-month project:] identifying other parts of a simulation (e.g. the creation of synapses, the “summed variable” mechanism) that could benefit from parallelization, and implementing those approaches
Planned effort: 175h or 350h
Skills: C++ and Python programming; experience with OpenMP or other parallelization techniques is helpful
Skill level: advanced
Mentors: Marcel Stimberg @mstimberg, Dan Goodman @d.goodman
Tech keywords: Brian, Python, C++
I am interested in working on this project. I have experience with both C++ and Python, and intermediate knowledge of OpenMP. Could the mentors let me know whether there are any starter tasks associated with this project, or any issues I should work on to be considered for it?
Hello Raj, I’ll tag the mentors for you: @mstimberg @d.goodman
/Malin, org admin
Hi @Raj_Gupta, happy to hear that you are interested in the project. I posted some general comments on the application process on our website: Recommendations for GSoC 2022 applications | The Brian spiking neural network simulator
Regarding this specific project, it is important to get a good idea of Brian’s code generation approach, in particular the “C++ standalone” mode, which is at the core of this project. You can find general information about this in our 2014 and 2019 papers (for the 2019 paper, in particular in the appendix), and of course in Brian’s documentation. If you want to get deeper into algorithmic details, Dan and Romain’s papers on vectorization and GPU computing could be interesting – note that this project is not really about either topic, but many of the questions around spike propagation are related.
Ideally, it would be great if you could discuss the following two specific questions as part of the application:
- A (contrived) example where Brian gives an incorrect result – this is the main reason why we currently show a warning when OpenMP is enabled.
- Three examples of toy networks with synaptic events. This is one of the “low-hanging fruits” to improve OpenMP performance. Try to understand where OpenMP parallelization is used and where it isn’t, and try to figure out why this is the case (of course, feel free to look into Brian’s source code). Do you see any room for improvement, and how would you improve things?
Let me know if you have any further questions.