GSoC 2026 Project #5: Brian Simulator - Fix outstanding issues in Brian2CUDA (175h/350h)

The Brian simulator’s “C++ standalone mode” has been extended to support code generation for CUDA via the Brian2CUDA package (Alevi et al. 2022, Frontiers in Neuroinformatics). This makes it possible to accelerate simulations by making use of the parallel processing capabilities of NVIDIA GPUs. The package is already widely used, despite the software still being in a “beta” state. The aim of this project is to make the package “production-ready” by tackling various issues from the project’s issue tracker. Particular aims include:

  • Add Windows support by adapting the makefile and compilation options
  • Improve compilation speed
  • Update and extend documentation
  • Implement support for preference files
  • Implement a configurable logging system
  • For the bigger project scope, in addition:
    • Triage the existing issue reports
    • Triage and fix (if necessary/possible) the existing performance issues
    • Set up a basic test suite that can be run online on free infrastructure (e.g. Google Colab with GPU support)

Skill level: intermediate

Required skills: C++, CUDA, Python

Lead mentor: Marcel Stimberg (marcel.stimberg@sorbonne-universite.fr; mstimberg on NeuroStars)

Project Website: https://github.com/brian-team/brian2cuda (a Brian2 extension to simulate spiking neural networks on GPUs) and https://github.com/brian-team/brian2 (Brian, a free, open-source simulator for spiking neural networks)

Backup mentors: Dan Goodman (d.goodman@imperial.ac.uk; d.goodman on NeuroStars), Benjamin Evans (B.D.Evans@sussex.ac.uk)

Tech keywords: Python, C++, CUDA, GPU, Makefile

Hi Mentor,

I am Yamini, a 3rd-year B.Tech student in Computer Science and Engineering. I am very interested in contributing to the “Fix outstanding issues in Brian2CUDA” project for GSoC 2026.

My technical stack is primarily Java and Python, but I have a strong foundation in C++ and version control with Git. I am particularly drawn to the tasks of adding Windows support and implementing the configurable logging system, as I believe these are crucial for making the tool more accessible to the wider community.

I have already cloned the brian2cuda repository and am currently exploring the code-generation logic and the existing Makefile structure. I have also started reviewing the issue tracker to understand the current bottlenecks.

Hi @Yamini04-oss, happy to hear that you are interested in this project. Please note that I wrote down some general recommendations for the GSoC application on our website: GSoC 2026 | The Brian spiking neural network simulator

This project is a bit of a mixed bag, with issues that need fixing in Python code, makefiles, CUDA, and C++, so looking into all these parts and how they interact with each other is the best preparation. A good strategy is to take a very simple example, e.g. something like the CUBA example from the documentation, run it with brian2cuda and look at the generated files. If you are on a Windows machine, you could take the code generated for Linux (e.g. by running things via WSL), copy the directory and try compiling it on Windows, either manually or by writing a Windows-compatible makefile.
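For reference, the suggested workflow might look roughly like the sketch below. This is a minimal, hedged example, assuming brian2 and brian2cuda are installed and nvcc is available; the toy LIF group stands in for the full CUBA example, and the output directory name is illustrative.

```python
# Sketch of the suggested workflow: run a small model with the brian2cuda
# backend, then inspect the generated C++/CUDA sources on disk.
# Assumes brian2 and brian2cuda are installed; directory name is illustrative.
import os

try:
    from brian2 import NeuronGroup, run, ms, set_device
    import brian2cuda  # registers the "cuda_standalone" device
    HAVE_BRIAN2CUDA = True
except ImportError:
    HAVE_BRIAN2CUDA = False  # sketch only; install brian2cuda to run it

GENERATED_DIR = "cuba_cuda"  # where the generated sources will end up

if HAVE_BRIAN2CUDA:
    set_device("cuda_standalone", directory=GENERATED_DIR)
    # A toy LIF group stands in for the full CUBA example here
    group = NeuronGroup(10, "dv/dt = -v / (10*ms) : 1",
                        threshold="v > 1", reset="v = 0")
    run(10 * ms)
    # Inspect what the code generator produced (main.cu, makefile, ...)
    print(sorted(os.listdir(GENERATED_DIR)))
```

Looking through the listed files (and the makefile in particular) is a good way to see how the Python-level model maps onto the generated CUDA code.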

Hi @mstimberg ,

I have been working on Issue #320 (adding a minimal automated test for Brian2CUDA, https://github.com/brian-team/brian2cuda/issues/320) to verify the code generation and compilation workflow.

I’ve successfully prototyped a “no-hardware” test script in a GitHub Codespace. This script triggers the brian2cuda backend to translate a simple LIF model into C++/CUDA source files without requiring a physical GPU. By setting build_on_run=False and compile=False, I was able to verify that the core translation logic and file scaffolding (including main.cu, objects.cu, and the makefile) are functioning correctly.
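To make the idea concrete, a check along these lines could look like the sketch below. This is a hedged illustration, not the actual script: it assumes brian2/brian2cuda are installed, and the expected file names are the ones mentioned above.

```python
# Sketch of a "no-hardware" generation check: translate a simple LIF model
# to C++/CUDA sources without compiling or running them, then verify the
# expected scaffolding files exist. Assumes brian2/brian2cuda are installed.
import os

EXPECTED_FILES = ["main.cu", "objects.cu", "makefile"]

try:
    from brian2 import NeuronGroup, run, ms, set_device, device
    import brian2cuda  # registers the "cuda_standalone" device
    HAVE_BRIAN2CUDA = True
except ImportError:
    HAVE_BRIAN2CUDA = False  # sketch only; install brian2cuda to run it

def missing_generated_files(directory):
    """Return the expected files that are absent from the build directory."""
    return [f for f in EXPECTED_FILES
            if not os.path.isfile(os.path.join(directory, f))]

if HAVE_BRIAN2CUDA:
    set_device("cuda_standalone", build_on_run=False)
    group = NeuronGroup(10, "dv/dt = -v / (10*ms) : 1",
                        threshold="v > 1", reset="v = 0")
    run(10 * ms)
    # Generate sources only; skip nvcc compilation and execution
    device.build(directory="lif_test", compile=False, run=False)
    assert not missing_generated_files("lif_test")
```

Since the generation step never touches nvcc or a GPU, a check like this could run on standard CPU-only CI runners.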

This satisfies the first step of the “Production-ready” goal by ensuring we can detect regressions in the code generator using standard CI infrastructure. I am now looking into how to adapt this for Windows-compatible compilation checks as discussed.

Best regards,

Yamini

Hi @mstimberg, I’m Ahmad, a 3rd-year Computer Engineering undergrad at GIK Institute in Pakistan. I’m currently researching and planning a Final Year Project in Neuromorphic Cybersecurity, so the Brian2CUDA project appeals to me because it aligns with my interest in spiking neural networks. I am interested in contributing to the “Fix outstanding issues in Brian2CUDA” project for GSoC 2026.

As you suggested, I got the CUBA benchmark compiling and running natively on my RTX 2050 on Arch Linux. Furthermore, I looked into “Optimize our SpikeMonitor for Subgroups” (#293). Debugging the build process and the generated kernels helped me solidify two core technical milestones for my GSoC proposal:

1. C++17 & Compiler Compatibility: Modern CUB/Thrust headers reject the hardcoded C++11 templates, causing opaque nvcc errors on newer distros. I worked around this locally by side-loading GCC 13 and patching the templates to -std=c++17. I will propose upgrading the generator natively to C++17 and implementing automated host-compiler validation.

2. SpikeMonitor Subgroup Bug: I found the exact cause of the subgroup recording issue in spikemonitor_codeobject.cu. The host over-allocates memory using the parent group’s total _num_events, and the __global__ kernel writes without boundary filtering. I plan to propose replacing this brittle manual logic with thrust::count_if (for exact dynamic allocation) and thrust::copy_if (for safe filtering without atomic contention). Let me know if these findings are worthwhile for a GSoC proposal and whether I can work on any other issues; I have access to Windows as well.
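The local C++17 workaround described in point 1 can be scripted rather than applied by hand. The sketch below is only an illustration of that patch step, assuming the generated makefile contains a literal `-std=c++11` flag; the exact flag string in a real brian2cuda makefile may differ.

```python
# Sketch of the local workaround from point 1: bump the C++ standard flag
# in a generated makefile from c++11 to c++17. Path and flag strings are
# assumptions; check the actual generated makefile before relying on this.
from pathlib import Path

def patch_cpp_standard(makefile: Path, old: str = "c++11",
                       new: str = "c++17") -> str:
    """Replace -std=<old> with -std=<new> in the given makefile."""
    text = makefile.read_text()
    patched = text.replace(f"-std={old}", f"-std={new}")
    makefile.write_text(patched)
    return patched
```

Usage would be along the lines of `patch_cpp_standard(Path("output/makefile"))`, run once after code generation.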
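For illustration only, the count-then-filter idea from point 2 can be sketched in plain Python; the actual fix would use the Thrust primitives on the device, and the function name here is hypothetical.

```python
# Plain-Python illustration of the proposed count_if/copy_if approach:
# first count spikes inside the subgroup's index range (exact allocation
# size), then copy only those (boundary-filtered recording). The real
# implementation would use thrust::count_if / thrust::copy_if on the GPU.

def record_subgroup_spikes(spiking_neurons, start, stop):
    """Return (filtered_spikes, count) for neuron indices in [start, stop)."""
    # count_if analogue: size the buffer exactly, not by the parent group
    count = sum(1 for i in spiking_neurons if start <= i < stop)
    # copy_if analogue: write only in-range events, no atomic contention
    filtered = [i for i in spiking_neurons if start <= i < stop]
    return filtered, count
```

The point of the two-pass structure is that the allocation is sized by the subgroup's actual event count rather than the parent group's total.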

Hi mentor,
I am Arham, a 3rd-year computer science student. I have hands-on experience working with the LLVM framework to optimize frontends, specifically C++ frontends. I love what Brian2 is doing and would like to contribute to Brian2CUDA and help make it more efficient.

I am new to open-source and am open to learning and guidance.

I live by “one step at a time” and “better late than never” sayings.

I hope to make an impact in open-source!

Hi Guys,

I am Raj, a computational neuroscience researcher. I recently completed my master’s and am learning to work with RNNs using SNNs. I am a huge admirer of Brian and wanted to be a part of the impact it has been making all along.

I have been tinkering around with the project. I tried running the CUBA example and found that base Brian2 takes about 4.835 s ± 0.172 s whereas Brian2CUDA takes 59.925 s ± 0.798 s. This confused me at first, but I later realised it is expected for small networks due to GPU overhead; Brian2CUDA scales well for large networks, and I found further information in the publication.

I have been collecting a few useful resources for the project. Just curious, @mstimberg, are there any additional resources that can help us understand the project well, and also any recent publications about optimization and implementation of relevant projects that would be helpful?

Hi everyone,

My name is Yusuf Abdul-Mateen, and I’m a Computer Science student at the Federal University of Technology, Akure, Nigeria. I’m interested in the GSoC 2026 project on fixing outstanding issues in Brian2CUDA and helping make it more ready for wider use.

I’ve gone through the general GSoC recommendations, and I appreciate the advice shared by @mstimberg. Starting with a simple example and looking closely at how the Python code, generated files, makefiles, CUDA, and C++ parts connect seems like a good place to start, so that’s what I’ll be focusing on.

I’m here to learn and follow discussions, and I’d appreciate any other advice on where a newcomer should focus first.