I am writing to express my interest in reconstructing the Gabor visual stimuli described in your recent publication, “A Brain-Wide Map of Neural Activity during Complex Behaviour.” I am keen to examine the correlation between these stimuli and the neural activity data available in the DANDI archive.
To facilitate a precise replication of the experiments, including aspects such as luminance and the dynamics of the Gabor stimuli, would it be possible for you to provide access to the relevant data files? Your assistance would be invaluable in enabling us to accurately reproduce and further investigate the findings detailed in your study.
On our rigs, we calibrate the luminance once at setup. Whilst we try to harmonise the luminance across rigs as best we can using a polarising filter, please note that the luminance varies with position on the screen. Attached is an example of recorded luminance for three rigs, where you can see that it varies and is maximal at the centre.
Regarding the position of the visual stimulus:
On our rigs, the visual stimulus is linked to the wheel movement. We will generate examples so that you can easily relate the wheel movement to the visual stimulus position; however, please be aware that this will solely be the programmed position on the screen, not the actual position. We have no means on our rigs to record the actual position on the screen, as these rigs were not designed with that intent in mind.
It will take us roughly two weeks to create documentation for you to replicate the stimulus position from the recorded wheel movement. We have functions that use ONE to load the datasets; however, we want to provide you with a generic way to do this calculation (which is useful if you downloaded our data via DANDI).
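In the meantime, here is a minimal sketch of the kind of calculation we have in mind, assuming the data are loaded with ONE. The wheel radius, gain, starting azimuth, and sign convention below are placeholder assumptions rather than documented values; the forthcoming documentation will give the exact numbers and conventions.

```python
import numpy as np
from one.api import ONE

# Placeholder values -- check the task protocol on Figshare for the real ones.
WHEEL_RADIUS_CM = 3.1      # assumed wheel radius
GAIN_DEG_PER_MM = 4.0      # assumed visual degrees of azimuth per mm of wheel movement
START_AZIMUTH_DEG = 35.0   # assumed initial azimuth of the Gabor (sign depends on stimulus side)

one = ONE()                # assumes ONE is already configured for the public database
eid = '...'                # session ID (placeholder)

wheel = one.load_object(eid, 'wheel', collection='alf')    # wheel.position (rad), wheel.timestamps (s)
trials = one.load_object(eid, 'trials', collection='alf')  # trial events, e.g. trials.stimOn_times

def programmed_azimuth(t, trial_idx):
    """Programmed stimulus azimuth (deg) at times t within one closed-loop period.

    This is the *programmed* position only; the actual on-screen position is not recorded.
    """
    # Wheel position at the query times and at stimulus onset (linear interpolation).
    pos_t = np.interp(t, wheel.timestamps, wheel.position)
    pos_on = np.interp(trials.stimOn_times[trial_idx], wheel.timestamps, wheel.position)
    # Convert wheel rotation (radians) to linear displacement at the wheel surface (mm).
    displacement_mm = (pos_t - pos_on) * WHEEL_RADIUS_CM * 10
    # Apply the closed-loop gain; the sign convention here is an assumption.
    return START_AZIMUTH_DEG - GAIN_DEG_PER_MM * displacement_mm
```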
We will update this Neurostars issue once this is available.
I hope this helps. Thank you for your patience, and let us know if anything is unclear.
Thank you for your detailed response.
We are thinking of reconstructing a video of the screen with the Gabor stimulus, as it will be an input to our machine learning model.
I think the programmed position is enough for us, but how does it differ from the actual position? (If your concern is a jitter of a few tens of ms, or a few mm, that should be fine.)
The code you used to create the visual stimuli would be best, but if that is difficult, a picture or movie of the monitor during an experiment would be very helpful too.
We noticed that the link you gave us was not enough to recreate the visual stimuli. (For example, if the background RGB is [127, 127, 127], we still need the details of the Gabor luminance, such as whether it ranges from [0, 0, 0] to [255, 255, 255], as well as the sine-wave spatial frequency and phase information.)
Could you provide the code used to generate the stimuli?
Also, if possible, could you provide the password requested when downloading the data in “Step 1: Load data”, as shown below?
    Enter Alyx password for "intbrainlab":
    HTTPError: Failed to load the remote cache file
    Enter Alyx password for "intbrainlab":
    Traceback (most recent call last):
The phase is randomized at each trial. The spatial frequency should be in the behaviour appendix (0.1 cycles per degree, I think), as is the size of the Gabor (7 visual degrees, I think). You can find all our protocols on Figshare. I couldn’t possibly tell you the exact RGB values for a given pixel, because they do indeed vary between 0 and 255. As noted, the average should be 127.5, and the min and max are modulated by contrast. You can find the shader code here:
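For reference, a minimal sketch of a Gabor patch consistent with the description above is shown below. The pixels-per-degree scaling, spatial frequency, and Gaussian size are illustrative assumptions only; the shader linked above remains the authoritative implementation.

```python
import numpy as np

def gabor_patch(contrast, phase_rad, size_px=256, px_per_deg=8.0,
                spatial_freq_cpd=0.1, sigma_deg=7.0):
    """Greyscale Gabor patch (uint8) on a mid-grey background.

    contrast in [0, 1]; phase_rad is the per-trial random phase.
    px_per_deg, spatial_freq_cpd and sigma_deg are illustrative values only.
    """
    half = size_px / 2
    x, y = np.meshgrid(np.arange(size_px) - half, np.arange(size_px) - half)
    x_deg, y_deg = x / px_per_deg, y / px_per_deg
    # Sinusoidal carrier (vertical grating) with the given spatial frequency and phase.
    carrier = np.sin(2 * np.pi * spatial_freq_cpd * x_deg + phase_rad)
    # Gaussian envelope restricting the grating to a patch.
    envelope = np.exp(-(x_deg ** 2 + y_deg ** 2) / (2 * sigma_deg ** 2))
    # Mean luminance 127.5; the min/max excursion is scaled by contrast.
    luminance = 127.5 * (1 + contrast * carrier * envelope)
    return np.clip(np.round(luminance), 0, 255).astype(np.uint8)

# Example: full-contrast patch with a random phase, drawn on a [127, 127, 127] background.
patch = gabor_patch(contrast=1.0, phase_rad=np.random.uniform(0, 2 * np.pi))
```

To reconstruct a full-screen video, a patch like this would be composited onto the mid-grey background at the programmed azimuth computed from the wheel movement, as described earlier in this thread.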