I am processing resting-state fMRI images that are in BIDS format and have an issue extracting timeseries from an atlas (the Schaefer atlas). I ran fMRIPrep on our data and did QC of the output (everything seems normal and there are no errors). Now I am using nilearn to extract timeseries from the 400 Schaefer atlas regions while also regressing out some confounds. Accordingly, I used “NiftiLabelsMasker” to create the masker and “masker.fit_transform” to extract the timeseries, regress out the selected confounds, band-pass filter, standardize, and detrend.
The issue, and my question, is that I am not sure where and how to integrate the brain mask output by fMRIPrep (“desc-brain_mask.nii.gz”) into my nilearn code, since the image I used, “desc-preproc_bold.nii.gz”, is not skull-stripped. As a result, timeseries are not extracted for some regions (the timeseries array is 146×394 instead of 146×400), which I assume is due to using a non-skull-stripped image.
The following is the command I used to create masker:
masker = NiftiLabelsMasker(labels_img=schaefer_resamp, standardize=True, detrend=True, low_pass=0.08, high_pass=0.009, t_r=2)
and the line below is the command I used for timeseries extraction:
Thank you in advance!
If you get signals from only 394 regions, it may be because the resampled image only has 394 regions, no? Some atlases contain very thin regions that do not survive resampling.
To get back to your question: you can also provide a mask_img when you instantiate the object, but this will rather further decrease the number of regions.
Thanks a lot for your reply. Yes, exactly, the resampling was the issue. I had used the T1w-space output of fMRIPrep (to extract subjects' timeseries in native space), but resampling the Schaefer atlas (which is in MNI space) to that space did not work properly. After visualizing both, I noticed that the fMRI image and the resampled Schaefer atlas were not well aligned, and the regions whose timeseries were not extracted were exactly the misaligned ones. I then used the fMRIPrep output in MNI space as input to NiftiLabelsMasker, and it worked: the Schaefer atlas and the fMRI image are now aligned, and timeseries are extracted for all 400 regions. So the issue was using an fMRI image in native space.
Using an image in native space was evidently not a good idea, but is there a reason why resampling the Schaefer atlas from MNI to T1w space would not work?
Thanks for your response and guidance.
Not sure I have the exact answer to your question. One issue may be residual misalignment leading to out-of-volume problems, with some regions being lost; this could happen if your registration algorithm does not provide an accurate match between MNI and T1 space. Another issue may be registration and resampling itself: depending on the transformation and interpolation method used, tiny regions may disappear in the resampling process.
HTH (not sure …)