Currently, trying to run multiple fMRIPrep subjects in parallel leads to errors. The workaround is to make sure the first job runs for a bit before all the other jobs start. One simple way to do this is to add sleep 1m && before every job except the first:
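For example, with three subjects (the [submit job: ] placeholders stand for your actual submission commands):

```
[submit job: sub-01]
sleep 1m && [submit job: sub-02]
sleep 1m && [submit job: sub-03]
```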
Note that [submit job: ] should be replaced with whatever command you use to submit a job. This solution assumes you already know more generally how to run tasks in parallel on your system.
Reading your response jbwexler, I am wondering if you have any recommendations for doing something similar on an HPC with fMRIPrep via Singularity and GNU parallel, e.g. https://github.uconn.edu/HPC/parallel-slurm.git
I am trying to set up the code to read consecutive subject IDs from a .txt file and submit the same job for each using:
cat subjID.txt | parallel myscript.sh
but I keep getting errors, and I am not sure whether this is because the variable for the subject ID was defined as s=$1 instead of s=+$1, or whether I should loop with something like
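A minimal sketch of the argument handling, using the file names from the post above: GNU parallel appends each input line to the command as a positional argument, so inside myscript.sh the subject ID is read with s=$1 (s=+$1 is not valid shell syntax).

```shell
# myscript.sh receives one subject ID per invocation from GNU parallel.
cat > myscript.sh <<'EOF'
#!/bin/bash
s=$1                                # subject ID supplied as the first argument
echo "running fmriprep for ${s}"    # placeholder for the real singularity call
EOF
chmod +x myscript.sh

# Each line of subjID.txt then becomes one invocation:
#   cat subjID.txt | parallel --delay 60 ./myscript.sh
# (--delay 60 staggers job starts by one minute, mirroring the sleep 1m advice)

# Direct check of the argument handling:
./myscript.sh sub-01
```

The --delay option is GNU parallel's built-in way to stagger starts, so you do not need to prepend sleep manually to each job.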
I am not sure if it is the same issue, but I was getting the same error running multiple subjects in parallel on an HPC cluster using Snakemake with Slurm and fMRIPrep 25.0.0. What fixed it for me was specifying a separate working directory for each subject:
--work-dir temp_${SUBID}
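As a sketch of the same idea outside Snakemake (assuming a plain bash submission loop; the directory names and subject IDs are illustrative):

```shell
# Give each subject its own fMRIPrep working directory so parallel runs
# do not clobber each other's intermediate files.
for SUBID in 01 02 03; do
  mkdir -p "temp_${SUBID}"
  # Placeholder for the real call, e.g.:
  # fmriprep bids_dir out_dir participant \
  #     --participant-label "$SUBID" --work-dir "temp_${SUBID}"
done
```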
Hope this helps someone! Also, if anyone is aware of a newer official fix that I missed, I would be happy to hear about it :)