Different number of voxels among volumes and subjects in fMRI data


I have fMRI BOLD series for 36 subjects, each with 121 volumes, preprocessed using fMRIPrep with the command:

fmriprep-docker /home/datasets/download /home/datasets/output_dir participant \
    --fs-license-file /home/license.txt

The first problem is that the number of non-zero voxels for a single subject varies across volumes. For example, the first volume has 375,114 non-zero voxels while the second volume has 375,096. I want to use these data in a deep learning model, and I'm afraid the zero values may introduce bias. Should I find the intersection of non-zero voxels across all volumes and revise the brain mask generated by fMRIPrep, or keep the original mask and leave the zero values as they are?

The second problem is that the number of non-zero voxels also varies across subjects. Should I use a method like nilearn.masking.intersect_masks to create a common mask for all subjects?

Any help would be highly appreciated.

Best Regards,

Hi @yuhan_chen,

If you want a consistent number of voxels across runs and subjects, you can intersect the masks with the function you linked to, passing a threshold of 1 for the true intersection (the most conservative mask). If you want, you could even slightly erode the resulting mask further with fslmaths's -ero flag.

