I am trying to implement the first- and second-level task-based fMRI analysis from the Nature Protocols paper "Analysis of task-based functional MRI data preprocessed with fMRIPrep", after successfully running fMRIPrep on my dataset.
I have modified the script to use the relevant contrasts for my task, and I adapted the DerivativesDataSink nodes because they raised an error (the parameters could not build a BIDS-compatible file name).
I am using a 2023 MacBook Pro M3 Max with 36GB of RAM.
I initially used the Docker container as described in the publication, and it was even slower. I am now running the script locally without Docker using:
```
python run.py \
    $BIDSDIR \
    $BIDSDIR/derivatives/task-analysis \
    participant \
    --task XX \
    --space MNI152NLin2009cAsym \
    --bids-dir $BIDSDIR \
    --work-dir $BIDSDIR/working_dir
```
This way the feat_fit step runs faster, but it still takes a very long time (8+ hours for the first-level analysis when running two subjects at the same time).
In the plugin_settings in run.py, I changed n_procs to 2, since I kept getting memory errors otherwise. In Activity Monitor I see two film_gls processes using ~690% CPU and 30GB of memory, with total CPU and memory usage close to 95%.
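For reference, my change in run.py looks roughly like this. This is a sketch from memory: the `plugin`/`plugin_args` keys follow Nipype's MultiProc plugin, `n_procs: 2` is the change I actually made, and the other arguments (`memory_gb`, `maxtasksperchild`, `raise_insufficient`) are assumptions about what one could also set, not necessarily what the protocol script uses.

```python
# Sketch of the plugin_settings dict passed to the Nipype workflow in run.py.
# n_procs=2 is the change I made to avoid out-of-memory errors; the remaining
# keys are assumptions about additional MultiProc options one could set.
plugin_settings = {
    'plugin': 'MultiProc',
    'plugin_args': {
        'n_procs': 2,            # reduced from the default to avoid memory errors
        'memory_gb': 30,         # assumption: cap Nipype's estimated memory use
        'maxtasksperchild': 1,   # assumption: recycle workers to release memory
        'raise_insufficient': False,
    },
}
```

The workflow is then run with `workflow.run(**plugin_settings)` as in the protocol script.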
Is there any reason the first-level analysis could be this slow? What can I do to troubleshoot and potentially speed it up?