GSoC 2026 Project #39: National Brain Research Centre (NBRC) & EBRAINS (NeuroSim): Automating In-Silico Stimulation for Non-Invasive Biomarker Discovery

Mentor/s: Dr Khusbu Agarwal <khusbu.agarwal@nbrc.ac.in>

Project Synopsis: This project aims to build NeuroSim, an open-source “In-Silico Stimulation” engine. Unlike standard tools that simply analyze static functional connectivity, NeuroSim integrates Network Control Theory (NCT) with Effective Connectivity modeling to quantify the energy dynamics of brain state transitions. The pipeline will be validated by identifying “Stuck States” (Attractor Basins) in Alcohol Use Disorder (AUD), Alzheimer’s Disease, and Epilepsy, effectively creating a framework for virtual therapeutic stress-testing.

The Problem: Current neuroinformatics workflows largely focus on static functional connectivity (correlations). However, understanding complex pathologies—from neurodegeneration to addiction—requires quantifying the dynamic cost of brain state transitions. While wet lab approaches like intracranial stimulation (TMS/DBS) can probe these dynamics, they are invasive and limited to pre-surgical patients. There is a critical unmet need for a pre-emptive computational framework that can simulate these dynamics non-invasively to identify biomarkers before physical intervention is attempted.

The Objectives: The aim is to build a robust, modular Python pipeline to provide an effective solution to the problem, utilizing Network Control Theory and Manifold Learning. The project has three specific technical goals:

  1. Automate In-Silico Stimulation: Develop a workflow to calculate “Control Energy” landscapes. This allows researchers to simulate how hard it is for a brain to switch between cognitive states.

  2. Ensure Physical Validity: Implement Effective Connectivity estimation. This allows Network Control Theory to be validly applied to functional (fMRI) data, thereby bridging the gap between structural and functional analysis.

  3. Validate via Case Studies: Demonstrate the tool’s utility by isolating dynamical biomarkers in three distinct pathological regimes, benchmarked against a healthy control baseline (HCP): the entropic collapse of neurodegeneration (ADNI), the rigid attractor states of addiction (AUD), and the facilitator nodes that drive seizure propagation in Epilepsy (OpenNeuro).

Methodology & Implementation Plan: The pipeline will be developed as a Python library, consisting of three core modules:

a) Collection, Cleaning & Harmonization: Standardization of BIDS-formatted data from different sources (HCP, ADNI, OpenNeuro). To account for multi-site scanner effects, this module will incorporate neuroCombat, ensuring that true biological variance, not site noise and artifacts, is reflected in downstream physical modeling.

b) Network Control Theory: This module constitutes the computational engine. It will first estimate Effective Connectivity (e.g., via spectral inversion or regression methods). This will construct directed adjacency matrices. It will then compute key control metrics:

• Average Controllability: To quantify the brain’s general capacity to navigate state space.

• Modal Controllability: To identify nodes that drive difficult state transitions (potential “Facilitator Nodes” in epilepsy or addiction).
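As a minimal sketch, these two metrics could be computed for a Schur-stable, discrete-time linear model x(t+1) = A x(t) + B u(t) as follows. The function names are illustrative assumptions, not part of any existing NeuroSim code:

```python
# Illustrative sketch of average and modal controllability for a
# Schur-stable discrete-time linear model x(t+1) = A x(t) + B u(t).
# Function names are hypothetical, not an existing NeuroSim API.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def average_controllability(A):
    """Per-node trace of the infinite-horizon controllability Gramian,
    using B = e_i (a single control input at node i)."""
    n = A.shape[0]
    scores = np.empty(n)
    for i in range(n):
        B = np.zeros((n, 1))
        B[i, 0] = 1.0
        # Solves A W A^T - W + B B^T = 0 (discrete Lyapunov equation)
        W = solve_discrete_lyapunov(A, B @ B.T)
        scores[i] = np.trace(W)
    return scores

def modal_controllability(A):
    """phi_i = sum_j (1 - |lambda_j|^2) |v_ij|^2: how strongly node i
    couples into the fast, hard-to-reach modes of the system."""
    lam, V = np.linalg.eig(A)
    return (np.abs(V) ** 2) @ (1.0 - np.abs(lam) ** 2)
```

For a stable A (spectral radius below 1), high modal controllability flags candidate facilitator nodes that drive difficult transitions.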

c) Trajectory Inference & Visualization: A visualization engine using Manifold Learning (UMAP) and Pseudo-Time Inference. This will project high-dimensional control energy profiles onto a low-dimensional manifold, allowing clinicians to visualize a patient’s position on a disease trajectory.

Expected Outcomes:

  1. A fully documented NeuroSim Python library.

  2. Jupyter Notebook Tutorials hosted on GitHub that demonstrate how to run an “In-Silico Stimulation” on patient data.

  3. Validation Report: A benchmark comparison of Control Energy biomarkers vs. standard Static Connectivity in distinguishing AUD patients from healthy controls.

Skills Required: Python (Advanced), Neuroimaging (Nilearn, Nibabel), Linear Algebra (SciPy), Graph Theory, Basic Machine Learning (Scikit-learn, UMAP).


Hello Dr. Agarwal,

My name is Aditya Rawat and I’m a pre-final year undergraduate Computer Science (Data Science) student at Manipal Institute of Technology, Bengaluru.

I found the NeuroSim project very interesting, especially the idea of modeling brain state transitions using Network Control Theory. I wanted to ask if there is an existing GitHub repository or starter code for the NeuroSim pipeline, or if the implementation will begin from scratch.

Any recommended resources to better understand the expected architecture would also be very helpful.

Thank you

Dear Dr. Agarwal,

I am Md. Shamsul Alam, a final-semester undergraduate and an AI/ML/DL Research Assistant working across both university and remote research labs. With multiple biomedical imaging papers currently under review in Q1 journals, I am writing to express my strong interest in building the NeuroSim pipeline for GSoC 2026.

The project’s focus on transitioning from static functional connectivity to quantifying dynamic brain state transitions via Network Control Theory deeply resonates with my work. My research involves developing complex predictive models for stroke and cardiovascular diseases, giving me extensive hands-on experience with advanced Python (SciPy, Scikit-learn), manifold learning, and handling biological variance in medical datasets.

Additionally, my open-source engineering experience—including building Agentic RAG pipelines for the Google Gemini CLI and contributing to AI tools like Clawdbot/Moltbot—ensures that I can deliver NeuroSim as a robust, fully documented, and modular Python library, rather than just isolated academic scripts.

As I outline the timeline for my official proposal, I have an architectural question regarding Module B: For estimating Effective Connectivity to construct the directed adjacency matrices, do you have a preferred method (e.g., regression-based approaches vs. spectral inversion) that I should prioritize in the initial pipeline design to best integrate with the downstream Control Energy calculations?

I look forward to the opportunity to contribute to this vital pre-emptive computational framework.

Best regards,

Md. Shamsul Alam
GitHub: https://github.com/shamsulalam1114
LinkedIn: https://www.linkedin.com/in/shamsul-alam-ba7658373/

Dear Dr. Agarwal,

I am Chenfan Liao, an Intelligent Medical Engineering undergraduate at SUSTech.

I am deeply interested in building the NeuroSim pipeline. My background uniquely aligns with two of your validation regimes. At a molecular level, as our iGEM team captain, we are investigating Alzheimer’s pathology via RNA editing (REWIRE). At a macroscopic system level, my research directly focuses on the control dynamics of epilepsy—specifically, performing closed-loop neuromodulation simulations on phenomenological seizure models using data-driven optimal control strategies.

Given my focus on optimal control, I have a fundamental architectural question regarding how NeuroSim defines the target “healthy baseline” (e.g., transient suppression vs. true restorative transition).

To keep this thread concise, I have detailed this question and my full background in an email to you. Looking forward to the opportunity to contribute!

Best regards,

Chenfan Liao

Hello Dr. Agarwal,

I’ve been reading about the NeuroSim project and find the shift from static connectivity to modeling brain dynamics using Network Control Theory particularly compelling, especially the identification of high-control nodes and attractor states.

I have been working on a Python- and ML-based EEG seizure classification project, as well as literature-based work on neural dynamics and epilepsy, which has given me both a biological and a computational approach to understanding brain systems. I am also writing a manuscript in this field.

I am particularly interested in expanding my work to network-level and control-based analysis, which I find highly consistent with NeuroSim’s emphasis on dynamical biomarkers.

I hope to pursue further studies in neuroinformatics and computational neuroscience; any suggestions on how to prepare would be welcome, and I would be glad to contribute to this project.

Hey,

Can you please fix the spelling of my name and email ID? The correct one is Khushbu Agarwal; khushbu.agarwal@nbrc.ac.in.

Noted, Dr. Khushbu Agarwal! Thank you for the clarification. I have updated my records and have just sent a direct email to you regarding my architectural question for Module B. Looking forward to your insights!

Hello Team, I am Rohith S.

I am a current Integrated M.Tech student in Computer Science and Engineering at SSN College of Engineering, Chennai, with an active research portfolio at the intersection of machine learning and physical modeling. My work on Quantum Error Mitigation using a Physics-Embedded Liquid Neural Network is directly analogous in spirit to NeuroSim: both projects take a domain where pure data-driven methods are insufficient, identify the physical constraints that govern the system, and embed those constraints into a computational model to produce answers that a purely statistical approach cannot.

Eagerly waiting to contribute to GSoC Project #39.

Scope and Estimated Timeline of the NeuroSim Project (360h Project)

Phase 1: The Data & Harmonization Engine (Weeks 1–3 | 90 Hours)

Objective: Establish the BIDS-compliant infrastructure and site-effect correction.

  • Weeks 1–2 (60 hrs): Development of the Loader module. This involves writing the boilerplate for PyBIDS integration and building the parcellation-based time-series extraction pipeline (e.g., extracting signals from brain atlases).

  • Week 3 (30 hrs): Implementation of blind neuroCombat. Create a “Reference-Group” script that calculates scanner-effect parameters strictly from healthy controls in the HCP dataset before applying them to the clinical ADNI/AUD cohorts.
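The reference-group idea can be sketched with a simplified location-scale model standing in for the full neuroCombat empirical-Bayes machinery. Function names and the standardization scheme below are illustrative assumptions, not the neuroCombat API:

```python
# Simplified "reference-group" harmonization sketch: per-site shift and
# scale are estimated from healthy controls (HC) only, then applied to
# everyone, so clinical signal never contaminates the batch estimates.
# This is an illustrative stand-in for neuroCombat, not its API.
import numpy as np

def fit_reference_params(X_hc, sites_hc):
    """Estimate per-site mean/std from healthy controls only."""
    params = {}
    for s in np.unique(sites_hc):
        Xs = X_hc[sites_hc == s]
        params[s] = (Xs.mean(axis=0), Xs.std(axis=0) + 1e-12)
    return params

def apply_reference_harmonization(X, sites, params, target_site):
    """Map every site's features onto the target site's HC distribution."""
    mu_t, sd_t = params[target_site]
    X_out = X.astype(float).copy()
    for s in np.unique(sites):
        mu_s, sd_s = params[s]
        mask = sites == s
        X_out[mask] = (X_out[mask] - mu_s) / sd_s * sd_t + mu_t
    return X_out
```

After harmonization, the healthy-control distributions of all sites coincide, while within-site (potentially disease-related) variance is preserved up to rescaling.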

Phase 2: The Mathematical Operations & Energy Solver (Weeks 4–7 | 120 Hours)

Objective: Implement finite-horizon NCT and GraphNet regularization.

  • Week 4 (30 hrs): Building the Connectivity Solver. This goes beyond correlation; it requires implementing spectral inversion methods to estimate the A matrix (Directed Effective Connectivity).

  • Week 5 (30 hrs): Implementation of GraphNet Regularization.

  • Week 6 (30 hrs): The EnergySolver (Finite-Horizon). This is the implementation of the Discrete-Time Controllability Gramian, with attention to reducing its computational cost.

  • Week 7 (30 hrs): Developing the Optimal Control Path algorithm to calculate the trajectory energy between specific brain states (e.g. Rest to Cognition).
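The Week 6–7 energy computation can be sketched as a finite-horizon minimum-energy calculation. The sketch below assumes full actuation (B = I) and a hypothetical function name; it is not the project's EnergySolver:

```python
# Finite-horizon minimum control energy sketch for x(t+1) = A x(t) + u(t)
# (full actuation, B = I). Hypothetical function name, not project code.
import numpy as np

def min_control_energy(A, x0, xT, T):
    """Minimum-energy cost of steering x0 -> xT in T steps:
        E = v^T W_T^{-1} v,  with v = xT - (A^T-th power) x0
        and the finite-horizon Gramian W_T = sum_{t<T} A^t (A^t)^T."""
    n = A.shape[0]
    W = np.zeros((n, n))
    Ap = np.eye(n)               # accumulates the matrix power A^t
    for _ in range(T):
        W += Ap @ Ap.T
        Ap = A @ Ap
    v = xT - Ap @ x0             # Ap now equals A raised to the T-th power
    return float(v @ np.linalg.solve(W, v))
```

A "Rest to Cognition" transition would plug the centroid state vectors of the two conditions into x0 and xT.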

Phase 3: Clinical Validation & Manifold Learning (Weeks 8–10 | 90 Hours)

Objective: Testing the engine on pathological regimes and visualizing disease trajectories.

  • Week 8 (30 hrs): Midterm Validation. Running unit tests against a simulated Wilson-Cowan model. This ensures that the engine can correctly identify non-linear oscillations before it is applied to real-world data.

  • Week 9 (30 hrs): The Attractor State Analysis (AUD/ADNI). Quantifying the Rigid Attractor States hypothesis in addiction by measuring how much more energy is required for a brain to “unlock” from a craving state compared to a healthy control. The Alzheimer’s data will be used to map energy metrics between healthy and degenerated brain states.

  • Week 10 (30 hrs): Facilitator Node Detection (Epilepsy). Using Modal Controllability metrics to identify specific structural nodes that act as gateways for seizure propagation.
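The Week 8 check against a Wilson-Cowan model can be sketched as a minimal excitatory-inhibitory population simulation. The coupling weights, drives, and the plain logistic nonlinearity below are illustrative assumptions:

```python
# Minimal Wilson-Cowan excitatory (E) / inhibitory (I) population model,
# Euler-integrated. Parameter values are illustrative; the midterm
# validation would run the NCT engine against data simulated this way.
import numpy as np

def wilson_cowan(steps=2000, dt=0.1, P=1.25, Q=0.0,
                 wEE=16.0, wEI=12.0, wIE=15.0, wII=3.0,
                 tauE=1.0, tauI=1.0):
    S = lambda x: 1.0 / (1.0 + np.exp(-x))   # logistic firing-rate function
    E = np.zeros(steps)
    I = np.zeros(steps)
    E[0], I[0] = 0.1, 0.05
    for t in range(steps - 1):
        dE = (-E[t] + S(wEE * E[t] - wEI * I[t] + P)) / tauE
        dI = (-I[t] + S(wIE * E[t] - wII * I[t] + Q)) / tauI
        E[t + 1] = E[t] + dt * dE
        I[t + 1] = I[t] + dt * dI
    return E, I
```

Because the sigmoid output is bounded in (0, 1) and dt/tau is small, the simulated population rates stay in physiologically plausible bounds, which makes the simulation a clean ground truth for unit tests.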

Phase 4: Tutorials & Deployment (Weeks 11–12 | 60 Hours)

Objective: Finalizing documentation and community-facing tools.

  • Week 11 (30 hrs): Manifold Learning & Visualization. Integrating UMAP to project high-dimensional energy profiles into a 2D clinical “state space,” allowing for the visualization of patient-specific disease trajectories.

  • Week 12 (30 hrs): The Tutorial System. Finalizing three high-fidelity Jupyter Notebooks (Data Ingestion, Energy Calculation, and Clinical Plotting) and completing the Benchmark Report for INCF review.

Dear all,

Thank you for the overwhelming interest and the high quality of the questions regarding Project #39: NeuroSim. It is excellent to see so many diverse backgrounds converging on computational neuroscience.

To provide a general update: Given the mathematical and engineering scope required to build a physics-constrained pipeline from scratch, NeuroSim is officially classified as a Large Project (350/360 hours).

Below, I have addressed your specific architectural and pipeline inquiries to help you structure your formal proposals. I am also answering questions received via email here so that all applicants have access to the same technical clarifications.

Aditya Rawat

We are building the NeuroSim pipeline fundamentally from scratch. While there are excellent isolated scripts in the academic ecosystem (such as nctpy), our goal is to build a unified, BIDS-compliant Python library that integrates harmonization, effective connectivity, and control theory into one modular package.

For foundational reading on the expected architecture, I recommend reviewing the recent Nature Protocols paper: Parkes, L., et al. (2024). “A network control theory pipeline for studying the dynamics of the structural connectome.” Additionally, familiarize yourself with the PyBIDS and Nilearn libraries, as they will form the backbone of our data ingestion layer.

Md. Shamsul Alam

Your experience with biological variance and predictive models is highly relevant. Regarding Module B (Effective Connectivity): The pipeline must produce directed, causal adjacency matrices (A). We are open to both regression-based approaches (such as Multivariate Autoregressive [MVAR] models) and spectral inversion techniques. The critical factor you must address in your proposal is the computational trade-off and matrix stability. For example, MVAR can become computationally unstable when applied to dense parcellations without proper regularization. I expect a method that preserves the physical validity of the network while ensuring the downstream Controllability Gramian computation does not fail due to poorly conditioned matrices.

Chenfan Liao

Your background in closed-loop neuromodulation for epilepsy aligns perfectly with the core vision of this tool. Regarding the “healthy baseline” target for our optimal control calculations: We are defining this empirically via our harmonized reference cohorts, aiming for a restorative transition. Specifically, the control energy will be calculated as the effort required to drive a patient’s state vector (e.g., an epileptic interictal state) toward the centroid of the state space defined by the harmonized Healthy Control (HC) group. In your proposal, consider detailing how you would manage the dimensionality of this target state space.

Rishika Kapil

Thank you for the detailed extended abstract; integrating a normative modeling layer is an interesting downstream perspective. Regarding your question on the visualization engine: the pipeline requires both exploratory and clinically readable outputs.

Manifold learning (UMAP) and Pseudo-Time inference are required for exploratory analysis of the overall disease trajectory (e.g., mapping how a patient’s total energy profile shifts over time). However, to make the output clinically readable, the pipeline must identify specific Facilitator Nodes. This requires projecting control metrics (like Modal Controllability) back onto the cortex using Nilearn surface plotting so researchers can visualize exactly which anatomical regions are driving pathological transitions.

Rohith S

Your work with Liquid Neural Networks and physical constraints is very analogous to what we are doing here. Network Control Theory essentially embeds the physical constraints of the brain’s white-matter connectome into the dynamic model. When drafting your timeline, focus heavily on the numerical methods you would use to simulate these state transitions efficiently, as calculating control energy over complex biological networks is computationally expensive.

The deadline for final proposal submission is approaching. Please ensure your timelines reflect a 12-week, 350-hour commitment with clear weekly deliverables.

Best,

Dr. Khushbu Agarwal


Sure sir, I’ll refine my proposal to align with the requirements.

Dear Dr. Khushbu Agarwal,

Thank you for your detailed and personalized guidance — particularly your
challenge regarding MVAR stability on dense parcellations. I took it
seriously and have implemented a working solution.

Over the past week, I built a complete NeuroSim POC that directly addresses
the architectural requirements you described:

GitHub: https://github.com/shamsulalam1114/NeuroSim-Core-Dev


Module A — Blind neuroCombat Harmonization
ComBat parameters are estimated exclusively from Healthy Controls (HCP).
Clinical cohorts (ADNI, AUD, Epilepsy) are harmonized using these locked
parameters — the disease signal is never used to estimate batch effects,
ensuring biomarker signal is preserved.

Module B-1 — Addressing your stability challenge
Rather than relying on post-hoc correction, I implemented a Spectral
Inversion solver using Tikhonov-damped eigendecomposition. The spectral
radius < 1.0 is guaranteed algebraically — the Gramian computation cannot
fail due to a poorly conditioned matrix. A regularized MVAR solver (Ridge
/ LassoLars) is also provided as an alternative, with automatic Schur
stabilization if needed.

Module B-2 & C — Control Engine + Facilitator Nodes

  • Discrete-time Controllability Gramian (PSD guaranteed)
  • Minimum control energy: patient state → HC centroid (restorative
    transition as you described)
  • Modal Controllability per node → rank_facilitator_nodes() returns
    the top-k anatomical gateway nodes for seizure / AUD circuits
  • PCA/UMAP projection of energy profiles into 2D clinical state space

The complete end-to-end pipeline runs in:
notebooks/04_full_pipeline_demo.ipynb

All modules are tested (30+ unit tests, all passing) and the notebooks
have been executed with outputs.

I would be very grateful for any feedback on whether this implementation
aligns with your vision. I am happy to refine any aspect before the
April 8 deadline.

Thank you for your time.

Best regards,
Md. Shamsul Alam


It’s pretty good! I too have started working on it, but I’m currently tied up with my college practicals and end-semester examinations, which run through this month. After that I’ll work on the NeuroSim project full-time.

Dear @Md. Shamsul Alam,

Thank you for the update and for the significant effort you have put into developing this proof-of-concept. It is encouraging to see such proactive engagement with the NeuroSim vision so early in the cycle. For this project, our primary focus is not raw computational efficiency but biological and physical validity. While your repository covers a broad range of modules, the true challenge of NeuroSim lies in the mathematical engine: specifically, ensuring that the connectivity solvers and controllability metrics are not just computationally stable but physically representative of directed neural dynamics.

As we move into the internal review phase, I am particularly interested in how the various proposals handle the ‘Approximation Crisis’ I mentioned earlier. Specifically:

  • How does the engine distinguish between directed causality and simple functional correlation?

  • How does the implementation of the Controllability Gramian scale for high-resolution clinical datasets (like ADNI or Epilepsy cohorts) without losing numerical precision?

I will review all submitted formal proposals in detail alongside the INCF committee over the coming days. I encourage you and all other applicants to keep the discussion focused on these physics-constrained benchmarks, as they will be the primary criteria for clinical validation in our lab. I also encourage everyone who sent me their repository versions via email to share them here as well, for better clarity in understanding the problems.

Best,
Dr. Khushbu Agarwal


Dear Dr. Khushbu Agarwal,

Thank you for defining the physics-constrained benchmarks so clearly — the Approximation Crisis framing directly shaped what I implemented in response.

On Q1 — Distinguishing directed causality from functional correlation:

You are correct to flag this distinction. The spectral_inversion_solver derives A from an FC (correlation) matrix and is explicitly documented in the codebase as an approximation. The primary causal solver is mvar_solver, which implements Granger causality: each row i of A is obtained by regressing node i’s activity on ALL nodes’ lagged activity simultaneously — not pairwise. This controls for network-wide context and captures directed influence that is invisible to functional correlation.

To make this rigorous and testable, I have added granger_causality_matrix() in neurosim/connectivity/granger.py. For each directed pair (j→i), it fits a full MVAR and a restricted MVAR (with node j removed), then computes F = ((RSS_restricted − RSS_full) / order) / (RSS_full / df2). Entries with p < 0.05 represent statistically validated directed causal edges. A companion function causality_vs_correlation_summary() explicitly maps where FC and Granger diverge — exposing spurious correlations (high FC, no causality) and hidden causal edges (significant Granger, low FC).
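In sketch form, the per-edge test looks like this. It is a simplified, single-equation illustration of the scheme described above (no intercept term), not the repository code itself:

```python
# Simplified conditional Granger F-test for a single directed edge j -> i:
# a full VAR (all nodes' lags) vs a restricted VAR with node j's lags
# removed. Illustrative sketch, not the repository implementation.
import numpy as np
from scipy import stats

def granger_f(X, i, j, order=1):
    """Return (F, p) for the directed edge j -> i. X has shape (N, T)."""
    N, T = X.shape
    Y = X[i, order:]                                   # target series
    # Stack lagged regressors: block k holds lag-k values of all nodes
    Z = np.vstack([X[:, order - k:T - k] for k in range(1, order + 1)])

    def rss(design):
        beta, *_ = np.linalg.lstsq(design.T, Y, rcond=None)
        resid = Y - design.T @ beta
        return float(resid @ resid)

    keep = [r for r in range(Z.shape[0]) if r % N != j]  # drop node j's lags
    rss_full, rss_restr = rss(Z), rss(Z[keep])
    df2 = len(Y) - Z.shape[0]
    F = ((rss_restr - rss_full) / order) / (rss_full / df2)
    p = 1.0 - stats.f.cdf(F, order, df2)
    return F, p
```

Significant p-values here mark directed edges that survive conditioning on the rest of the network, which is exactly the divergence from plain FC described above.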

On Q2 — Gramian scaling for ADNI/Epilepsy clinical datasets:

For infinite-horizon Gramians, I use scipy.linalg.solve_discrete_lyapunov, which implements the Bartels-Stewart algorithm (internally Schur-decomposition-based): O(N³), double precision, tractable at N ≈ 300–400. The critical prerequisite — spectral radius < 1 — is algebraically enforced by the solvers before any Gramian is computed.

I have added compute_gramian_large_scale() in neurosim/control/gramian_schur.py. It wraps the Lyapunov solve with a precision_report returned alongside the Gramian: condition number, minimum eigenvalue, effective rank, and the Lyapunov residual ‖A Wc Aᵀ − Wc + BBᵀ‖_F (verified < 1e-8 in tests). A gramian_precision_benchmark() function validates how precision scales across N = 50, 100, 200.
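In sketch form, that wrapper looks roughly like the following (a simplified illustration of compute_gramian_large_scale; the report field names are assumptions):

```python
# Infinite-horizon controllability Gramian via the discrete Lyapunov
# equation A W A^T - W + B B^T = 0 (Bartels-Stewart under the hood),
# plus a small precision report. Illustrative sketch only.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def gramian_with_report(A, B):
    assert max(abs(np.linalg.eigvals(A))) < 1.0, "A must be Schur-stable"
    Q = B @ B.T
    W = solve_discrete_lyapunov(A, Q)
    W = 0.5 * (W + W.T)                  # symmetrize floating-point roundoff
    eig = np.linalg.eigvalsh(W)          # ascending eigenvalues of W
    report = {
        "condition_number": float(eig[-1] / max(eig[0], 1e-300)),
        "min_eigenvalue": float(eig[0]),
        "lyapunov_residual": float(np.linalg.norm(A @ W @ A.T - W + Q)),
    }
    return W, report
```

The Lyapunov residual is the direct check that the returned W actually solves the defining equation to numerical precision.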

All 109 unit tests pass. The updated repository is at: https://github.com/shamsulalam1114/NeuroSim-Core-Dev

I would be grateful for any feedback on whether these implementations meet the physical validity standard you described.

Best regards,
Md. Shamsul Alam

Dear Dr. Agarwal,

Thank you for framing the Approximation Crisis; it directly shaped my validation strategy and model choices.

Directed causality vs FC:
The limitation of FC is algebraic; correlation matrices are symmetric, forcing purely real eigenvalues and eliminating oscillatory modes. This is biologically unrealistic for neural systems.
I estimate effective connectivity using VAR(1) OLS (A = Xₜ₊₁ Xₜᵀ (Xₜ Xₜᵀ)⁻¹), preserving temporal asymmetry and complex eigenvalue structure. I have validated directionality using Granger causality (F-tests) and explicitly compared causality vs correlation to show where FC fails. This aligns better with delayed, directional brain interactions.
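The estimator is a few lines of linear algebra. This sketch uses lstsq rather than the explicit normal-equations inverse for better conditioning; the function name is illustrative:

```python
# VAR(1) OLS estimate of A in x(t+1) = A x(t) + eps, i.e.
# A = X_{t+1} X_t^T (X_t X_t^T)^{-1}, computed via lstsq for stability.
# Illustrative sketch; X has shape (nodes, time).
import numpy as np

def var1_ols(X):
    Xt, Xt1 = X[:, :-1], X[:, 1:]
    # lstsq solves Xt^T A^T ~= Xt1^T in the least-squares sense
    A_T, *_ = np.linalg.lstsq(Xt.T, Xt1.T, rcond=None)
    return A_T.T
```

Because nothing forces the estimate to be symmetric, the recovered A retains the temporal asymmetry and complex eigenvalue structure that correlation matrices destroy.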

Gramian precision:
I use the discrete Lyapunov solution (Bartels–Stewart) instead of finite-horizon sums, which degrade as spectral radius approaches 1 (common in pathological states like AUD/epilepsy).
I track residuals, condition number, and eigenvalues to ensure numerical and biological stability of energy estimates.

Validation:
Using synthetic data with known ground-truth A, I compute Frobenius recovery error. Results are consistent across cohorts, with variation linked to spectral properties. This ensures the model is not just mathematically valid but biologically recoverable — something FC-based approaches cannot support.

Current implementation:
Blind neuroCombat (HC-referenced), VAR + Granger EC estimation, Gramian + energy solver, attractor rigidity metrics, facilitator node detection, and UMAP-based trajectory visualization — all with unit tests.

Next steps:
GraphNet regularization, Wilson–Cowan validation, full BIDS ingestion, and clinically interpretable state transitions.

Repository: https://github.com/Rishikakaps/NeuroSim-Dev

I would appreciate feedback on whether the current validation meets the threshold for clinical-scale testing.

Best regards,
Rishika Kapil

Dear Dr. Khushbu Agarwal,

Following the recent discussion and the validation points raised, I have added two further implementations to the repository that directly address the biological and physical validity concerns — specifically the points on eigenvalue structure and ground-truth recovery that have been discussed.

On the eigenvalue structure argument:

The claim that FC-derived connectivity forces purely real eigenvalues, eliminating oscillatory neural dynamics, is correct — and I have now implemented eigenvalue_structure_report() in neurosim/connectivity/solver.py to quantify this precisely rather than state it verbally.

Measured result on a 15-node network:

  • FC-derived A matrix: 13.3% complex eigenvalues (near-symmetric structure collapses oscillatory modes)

  • MVAR-derived A matrix: 80.0% complex eigenvalues (directed asymmetry preserves biologically realistic oscillatory dynamics)

This is a 6x difference — quantitatively confirming that MVAR captures directed, temporally asymmetric dynamics that FC cannot represent by construction.
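The underlying check is tiny. This is an illustrative version, not the repository's eigenvalue_structure_report:

```python
# Fraction of eigenvalues with non-negligible imaginary part. Symmetric
# (FC-like) matrices are forced to zero complex fraction by the spectral
# theorem; directed (MVAR-like) matrices are not. Illustrative check.
import numpy as np

def complex_eigenvalue_fraction(A, tol=1e-10):
    lam = np.linalg.eigvals(A)
    return float(np.mean(np.abs(lam.imag) > tol))
```

Any purely symmetric connectivity estimate will score exactly zero here, which is the algebraic core of the FC limitation discussed above.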

On ground-truth recovery:

I have implemented frobenius_recovery_benchmark() which generates a known stable A_true, simulates linear dynamics x(t+1) = A_true @ x(t) + ε, recovers A_est via regularized MVAR, and reports the normalised Frobenius error:

‖A_est − A_true‖_F / ‖A_true‖_F = 0.1206 at N=20, T=500

This corresponds to approximately 88% structural recovery accuracy. Tests also confirm that recovery error decreases monotonically with T, consistent with MVAR identifiability theory (Seth et al., 2015).

On Schur stabilization:

Beyond tracking stability, the engine algorithmically enforces it. When MVAR returns a solution with spectral radius ≥ 1.0 (common in pathological states like epilepsy where dynamics approach criticality), _normalize_for_stability() applies a scalar Schur rescaling that preserves the full sign and ratio structure of A while guaranteeing Schur stability. Three dedicated unit tests verify that the sign pattern and proportionality of A are preserved post-stabilization.
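The rescaling itself is essentially a one-liner. This is an illustrative version of the _normalize_for_stability idea; the safety margin value is an assumption:

```python
# Scalar rescaling to enforce Schur stability: dividing A by a scalar
# preserves the sign pattern and all pairwise ratios of its entries.
# Illustrative sketch; the 0.95 margin is an assumed default.
import numpy as np

def normalize_for_stability(A, margin=0.95):
    rho = max(abs(np.linalg.eigvals(A)))   # spectral radius
    if rho >= 1.0:
        A = A * (margin / rho)
    return A
```

Because only a single positive scalar multiplies A, every stability-restoring rescale leaves the directed structure of the network untouched.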

The repository now contains 129 passing unit tests across 6 modules, including the new test_frobenius_recovery.py.

Repository: https://github.com/shamsulalam1114/NeuroSim-Core-Dev

I would welcome any feedback on whether these quantitative physics validations meet the threshold for the clinical-scale evaluation criteria.

Best regards, Md. Shamsul Alam

@Shamsul @Rishika_Kapil
Thank you both for this phenomenal discussion. This level of biophysical rigor—specifically regarding eigenvalue structures, temporal asymmetry, and the limits of static FC—is exactly why the NBRC proposed the NeuroSim architecture to the INCF this year. It is excellent to see the community already engaging with the ‘Approximation Crisis’ with such depth.

Shamsul, your agility in deploying these quantitative checks is highly commendable. Rishika, your theoretical framing of the stability constraints is very insightful.

Regarding the selection process, please note that our mentor team is now in the final administrative phase of the review cycle. To ensure absolute fairness across the entire applicant pool, we are evaluating all proposals based strictly on the technical state of the repositories and documentation as they stood at the official GSoC submission close.

While we are unable to formally factor in new feature implementations during this final evaluation window, this scientific momentum is exactly what we hoped to see. Once the INCF announces final project allocations and the official NeuroSim repository is opened for the Community Bonding phase, we highly encourage you both to bring these insights into the public pipeline via Pull Requests.

Thank you again for elevating the technical standard of this discussion!

Dear Dr. Khushbu Agarwal,

Thank you for the kind words and for engaging with the discussion in such depth. Understanding the strict evaluation timeline and administrative criteria is very helpful, and I completely respect the process to ensure fairness across all applicants.

I look forward to the official GSoC announcement. If selected, I will absolutely bring these implementations into the public repository during the Community Bonding phase — the pipeline is already running end-to-end and the Pull Requests are ready to draft.

Thank you again for mentoring this process with such transparency and technical rigor.

Best regards,
Md. Shamsul Alam