fMRI Prep Runtime Estimates

Hi all,

Brand new to the BIDS format and neuroimaging preprocessing with fMRIPrep. I'd done a little work in FSL in the past, but I could effectively be considered a novice at neuro processing in general.

I have a dataset of 116 participants, each with a T1w image and 8 runs of two functional tasks (16 total). After much struggle, I finally got this dataset into BIDS format using HeuDiConv and began running fMRIPrep via the Docker wrapper on a high-powered Ubuntu 20.04 setup (125 GB RAM, Intel® Xeon® Silver 4116 CPU @ 2.10 GHz). Despite that, things seem to be taking a LONG time: in 48 hours, it processed 8 participants (a rate of ~6 hrs per participant). I'm thinking something might not be right (mostly because, at this rate, it'll take 29 continuous days to process the whole dataset, and I don't want that to be true). As best as I can tell, I'm not seeing any error messages and the output looks right, but I don't have a frame of reference to judge.

So, my questions are: Does this time frame sound about right to more seasoned researchers? Is there anything I should be checking or changing? Any suggestions or input are greatly appreciated! I don't think it matters much since the command is pretty barebones, but I tossed my code below in case I'm wrong in that thinking.

# Environment variables:
# RAW contains my BIDS-formatted NIfTIs
# CODE contains my scripts, the FreeSurfer license, and the participant list
# DERIV is the output directory for my preprocessed data
# SUBJECTS holds the list of my subjects
SUBJECTS=$(cat "${CODE}/Participants.txt")

for i in ${SUBJECTS}; do
    echo "+++++ PRE PROCESSING ${i} +++++"
    fmriprep-docker "${RAW}/" \
        "${DERIV}/pipeline_1/" \
        participant --participant-label "${i}" \
        --fs-license-file "${CODE}/license.txt"
done


6 hrs per subject while running FreeSurfer for each subject seems pretty normal to me. If possible, you could parallelize your script to process several subjects at once.
Here is a link with some guidance, but this requires that you know how to launch tasks in parallel on your system.
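For instance, here is a minimal bash sketch of that idea using background jobs. The function name, the job cap, and the `--nthreads`/`--omp-nthreads` values are my assumptions (not from the original script); capping each instance's threads keeps concurrent jobs from fighting over the same cores.

```shell
# Hypothetical parallel runner: keep up to max_jobs fmriprep-docker
# instances going at once, using bash background jobs and `wait -n`
# (requires bash >= 4.3). RAW, DERIV, and CODE are the same variables
# as in the original script.
run_fmriprep_parallel() {
    local max_jobs="$1"   # how many subjects to process at once
    local subj
    while read -r subj; do
        # Throttle: once max_jobs are running, wait for one to exit.
        while [ "$(jobs -rp | wc -l)" -ge "${max_jobs}" ]; do
            wait -n
        done
        echo "+++++ PRE PROCESSING ${subj} +++++"
        fmriprep-docker "${RAW}/" "${DERIV}/pipeline_1/" \
            participant --participant-label "${subj}" \
            --nthreads 6 --omp-nthreads 6 \
            --fs-license-file "${CODE}/license.txt" &
    done < "${CODE}/Participants.txt"
    wait   # block until all remaining jobs finish
}
```

Calling `run_fmriprep_parallel 4` would then keep four subjects in flight at a time; tune the cap and thread counts to your core and RAM budget.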

If you have access to a high-performance computing (HPC) cluster with Singularity installed, you could build a Singularity image and follow this script to run your subjects in parallel:
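As a rough sketch of what such a submission script can look like (a SLURM job array; the image name, bind paths, and resource requests here are placeholder assumptions to adapt to your cluster, not the linked script):

```shell
#!/bin/bash
# Hypothetical SLURM array job: one task per subject, scheduled in
# parallel by the cluster. 116 tasks match the dataset size above.
#SBATCH --job-name=fmriprep
#SBATCH --array=1-116
#SBATCH --cpus-per-task=8
#SBATCH --mem=32G
#SBATCH --time=12:00:00

# Pick the Nth line of the participant list for this array task.
subj=$(sed -n "${SLURM_ARRAY_TASK_ID}p" "${CODE}/Participants.txt")

singularity run --cleanenv \
    -B "${RAW}:/data" -B "${DERIV}:/out" \
    fmriprep.simg \
    /data /out/pipeline_1 participant \
    --participant-label "${subj}" \
    --nthreads "${SLURM_CPUS_PER_TASK}" \
    --fs-license-file "${CODE}/license.txt"
```

With a job array, the scheduler handles the parallelism for you, so each task stays as simple as the original single-subject command.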

Thanks for the suggestions! I really appreciate it. I'd come across these options in my preparation but hadn't really considered them at the time; running the `time` command suggested the operation should execute far quicker than it has, so they seemed unnecessary. I'll look into these options more seriously and consult with some other folks in the lab. Thanks again!