Request for Gabor Visual Stimuli Data from "A Brain-Wide Map of Neural Activity during Complex Behaviour"

Message from Masaru [m.kuwabara@araya.org]:

I am writing to express my interest in reconstructing the Gabor visual stimuli described in your recent publication, “A Brain-Wide Map of Neural Activity during Complex Behaviour.” I am keen to examine the correlation between these stimuli and the neural activity data available in the DANDI archive.

To facilitate a precise replication of the experiments, including aspects such as luminance and the dynamics of the Gabor stimuli, would it be possible for you to provide access to the relevant data files? Your assistance would be invaluable in enabling us to accurately reproduce and further investigate the findings detailed in your study.

Dear Masaru,

On our rigs, we calibrate the luminance once at setup. Whilst we try to harmonise the luminance across rigs as best we can using a polarising filter, it is important to note that luminance varies across positions on the screen. Attached is an example of recorded luminance for three rigs, where you can see it varies and is maximal at the center.

Regarding the position of the visual stimulus:
In our rigs, the visual stimulus is linked to the wheel movement. We will generate examples so you can easily relate the wheel movement to the visual stimulus position; however, please be aware that this will be solely the programmed position on the screen, not the actual position. We have no means on our rigs to record the actual on-screen position, as these rigs were not designed with that intent in mind.

It will take us ~2 weeks to create documentation for you to replicate the stimulus position from the recorded wheel movement. We have functions that use ONE to load datasets; however, we want to provide you with a generic way to do this calculation (which is useful if you downloaded our data via DANDI).
We will update this Neurostars issue once this is available.
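In the meantime, here is a hypothetical sketch of what such a wheel-to-azimuth calculation might look like. The wheel radius and gain below are placeholders, not the actual rig parameters; the forthcoming documentation is the authoritative reference.

```python
import math

# Placeholder values, NOT the actual IBL rig parameters -- see the
# forthcoming documentation for the real conversion.
WHEEL_RADIUS_CM = 3.1    # assumed wheel radius
GAIN_DEG_PER_MM = 4.0    # assumed visual degrees per mm of wheel-surface travel

def wheel_to_azimuth(wheel_pos_rad: float, start_azimuth_deg: float) -> float:
    """Map cumulative wheel angle (radians) to the programmed stimulus azimuth (degrees)."""
    surface_mm = wheel_pos_rad * WHEEL_RADIUS_CM * 10.0   # arc length at the wheel surface
    return start_azimuth_deg - surface_mm * GAIN_DEG_PER_MM  # sign convention is arbitrary here

# Example: a quarter turn of the wheel, stimulus starting at 35 degrees azimuth
azimuth = wheel_to_azimuth(math.pi / 2, 35.0)
```

Note this yields only the programmed position, as discussed above.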

I hope this is clear. Thank you for your patience, and let us know if anything is unclear.

Dear Gaelle,

Thank you for your response with all the details.
We are thinking of reconstructing a video of the screen with the Gabor stimulus, as it will be an input to our machine learning model.
I think the programmed position is enough for us, but how does it differ from the actual position? (If your concern is jitter of a few tens of ms, or a few mm, that should be fine.)
The code you used to create the visual stimuli would be best, but if that is difficult, a picture or movie of the monitor during the experiment would be very helpful too.
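For reference, this is roughly how we would render a frame: a generic Gabor patch (a sinusoidal grating under a Gaussian envelope) built with numpy. This is only our sketch and is not assumed to match your stimulus code or parameters.

```python
import numpy as np

def gabor_patch(size=256, sigma=0.15, freq=10.0, theta=0.0, phase=0.0):
    """Render a generic Gabor patch with pixel values in [0, 1].

    Sketch only -- sigma, freq, etc. are illustrative, not the
    published stimulus parameters.
    """
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half] / size   # coordinates in [-0.5, 0.5)
    xr = x * np.cos(theta) + y * np.sin(theta)       # rotate the grating axis
    grating = np.cos(2 * np.pi * freq * xr + phase)  # sinusoidal carrier
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))  # Gaussian window
    return 0.5 + 0.5 * grating * envelope            # map onto a mid-grey background

img = gabor_patch()
```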

Thank you in advance,
Sincerely,

Masaru

Dear Masaru,

Here is the example notebook:

https://int-brain-lab.github.io/iblenv/notebooks_external/docs_wheel_screen_stimulus.html

Please let us know if anything is unclear.

Dear Gaelle,

Thank you so much for your swift response!

Masaru