Dear all,
I am facing two issues. My study design is rather simple: single session, event-related, 18 trials, 12 runs. A trial is 10 s long: a black screen with a central cross, then a "3, 2, 1" countdown, before an image is projected for 500 ms. I have N = 5 subjects (sub-05 to sub-09).
I am interested in the brain activity elicited by these short visual stimuli. I am therefore using Nilearn's GLM to check brain activity and activation before moving on to MVPA. Among other regions, I expect activity in the visual cortex.
Here is my design matrix (only 1 run shown out of the 12):
And my contrast matrix for the global effect of "vision" (again, only 1 run shown out of the 12):
For the record, here are my different parameters for the 1st level:
parameter | value |
---|---|
drift_model | cosine |
drift_order | 1 |
high_pass (Hz) | 0.01 |
hrf_model | spm |
noise_model | ar1 |
signal_scaling | False |
slice_time_ref | 0.0 |
smoothing_fwhm | 5 |
standardize | False |
t_r (s) | 1.5 |
target_affine | None |
target_shape | None |

And the thresholding parameters I used for the first-level maps:

parameter | value |
---|---|
Height control | fpr |
α | 0.001 |
Threshold (computed) | 3.291 |
Cluster size threshold (voxels) | 10 |
Minimum distance (mm) | 8.0 |
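
In case it helps, here is a minimal sketch of how these parameters map onto my first-level model and thresholding (file loading is omitted; `bold_imgs`, `events`, `confounds` and the `stim` column prefix are placeholders for my actual data):

```python
import numpy as np

from nilearn.glm import threshold_stats_img
from nilearn.glm.first_level import FirstLevelModel

# First-level model with the parameters listed above.
# bold_imgs, events and confounds are placeholders for my fMRIPrep outputs
# (lists with one entry per run).
flm = FirstLevelModel(
    t_r=1.5,
    slice_time_ref=0.0,
    hrf_model="spm",
    drift_model="cosine",
    drift_order=1,
    high_pass=0.01,
    noise_model="ar1",
    smoothing_fwhm=5,
    standardize=False,
    signal_scaling=False,
)
flm = flm.fit(bold_imgs, events=events, confounds=confounds)

# Global "vision" contrast: weight 1 on every stimulus regressor, 0 on
# drifts/confounds ("stim" is a placeholder prefix for my condition names).
design_matrix = flm.design_matrices_[0]
vision_contrast = np.array(
    [1.0 if col.startswith("stim") else 0.0 for col in design_matrix.columns]
)
z_map = flm.compute_contrast(vision_contrast, output_type="z_score")

# Thresholding as reported above (fpr, alpha=0.001, 10-voxel clusters).
thresholded_map, threshold = threshold_stats_img(
    z_map, alpha=0.001, height_control="fpr", cluster_threshold=10
)
```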
And 2nd level:
parameter | value |
---|---|
smoothing_fwhm | None |
target_affine | None |
target_shape | None |

And the thresholding parameters I used for the group-level map:

parameter | value |
---|---|
Height control | fpr |
α | 0.001 |
Threshold (computed) | 3.291 |
Cluster size threshold (voxels) | 0 |
Minimum distance (mm) | 8.0 |
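
And the corresponding sketch for the group level (again a sketch: `contrast_maps` is a placeholder for the five subjects' first-level "vision" contrast maps):

```python
import pandas as pd

from nilearn.glm import threshold_stats_img
from nilearn.glm.second_level import SecondLevelModel

# One-sample test over the 5 subjects' first-level "vision" maps.
# contrast_maps is a placeholder for the list of those maps.
design_matrix = pd.DataFrame({"intercept": [1] * len(contrast_maps)})
slm = SecondLevelModel(smoothing_fwhm=None)
slm = slm.fit(contrast_maps, design_matrix=design_matrix)
group_z_map = slm.compute_contrast(output_type="z_score")

# Thresholding as reported above (fpr, alpha=0.001, no cluster threshold).
thresholded_group_map, threshold = threshold_stats_img(
    group_z_map, alpha=0.001, height_control="fpr", cluster_threshold=0
)
```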
Issue 1: second level analysis
The first one concerns the results at the 2nd / group level. Even though I applied `smoothing_fwhm = 5` in the 1st-level `nilearn.glm.first_level.FirstLevelModel`, the final map looks a lot like "salt and pepper". Shouldn't this be reduced by the smoothing?
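The only workarounds I can think of are to smooth the group map after the fact, or to set `smoothing_fwhm` on the `SecondLevelModel`; is one of these the expected fix? A minimal sketch of what I mean (reusing `group_z_map`, `contrast_maps` and `design_matrix` from the sketch above):

```python
from nilearn.glm.second_level import SecondLevelModel
from nilearn.image import smooth_img

# Option A: post-hoc smoothing of the unthresholded group z-map (fwhm in mm).
group_z_map_smoothed = smooth_img(group_z_map, fwhm=5)

# Option B: smooth at the group level directly when fitting the model.
slm_smoothed = SecondLevelModel(smoothing_fwhm=5).fit(
    contrast_maps, design_matrix=design_matrix
)
```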
Issue 2: first level analysis
This result made me wonder: maybe it is due to the 1st / subject-level results? Investigating that point, I ran into my second issue: the results for sub-08 are very strange:
compared to the other subjects, sub-05 for instance:
Apart from this glass brain, everything is similar and looks fine (brain mask, clusters, etc.). The data acquisition, dcm2bids BIDS formatting, and fMRIPrep preprocessing are strictly the same across subjects.
I think something is wrong with all the results, but I cannot say what. It is most likely due to how I look at brain activity, maybe my contrast matrix? How can I investigate this further?
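So far I have only looked at the thresholded glass brains. One thing I was considering is to look at the "vision" contrast run by run for sub-08, roughly like this (same placeholders as in the sketches above):

```python
from nilearn import plotting
from nilearn.glm.first_level import FirstLevelModel

# Fit one model per run for sub-08 and plot each run's "vision" z-map.
for i, (img, ev, conf) in enumerate(zip(bold_imgs, events, confounds), start=1):
    run_model = FirstLevelModel(
        t_r=1.5,
        hrf_model="spm",
        drift_model="cosine",
        high_pass=0.01,
        noise_model="ar1",
        smoothing_fwhm=5,
    ).fit(img, events=ev, confounds=conf)
    run_z = run_model.compute_contrast(vision_contrast, output_type="z_score")
    plotting.plot_glass_brain(
        run_z, threshold=3.291, title=f"sub-08 run-{i:02d} vision"
    )
plotting.show()
```

Is that a sensible way to track down what is going on with sub-08, or is there a better diagnostic?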
The good thing is that when I look at the difference in brain activity between "food" pictures (2, 3, 4) and "human" pictures (5, 6, 7):
it occurs mainly in the fusiform area for both example subjects, see sub-05
and sub-08
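For reference, this is roughly how I compute that "food" vs "human" contrast (the condition names here are placeholders for my actual design-matrix columns):

```python
# "food" (stimuli 2-4) minus "human" (stimuli 5-7); column names are placeholders.
food_vs_human = "stim_02 + stim_03 + stim_04 - stim_05 - stim_06 - stim_07"
z_map_food_vs_human = flm.compute_contrast(food_vs_human, output_type="z_score")
```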
Thank you all.