Do I have to use parallel to process multiple subs using xcp_d?

Summary of what happened:

Hi experts,

At first, I didn't use parallel, and found that all subjects were being processed together; eventually the run crashed with an error like: OSError: [Errno 28] No space left on device.

I tried adding --nthreads and --omp-nthreads, but that didn't help, so I tried parallel and it worked. My question is: when I looked at other people's issues with processing multiple subjects, no one used parallel, so I wonder whether there are other ways to process multiple subjects?

Also, when using parallel I cannot monitor the processes (I can't find the log files), and I would like to know a solution for that as well.

Command used (and if a helper script was used, a link to the helper script or the command generated):

cat sublist_numbers.txt | parallel --verbose -j 4 docker run --rm \
   -v /Users/sap/Desktop/derivatives/license.txt:/license.txt \
   -v /Users/sap/Desktop/fmriprep:/fmriprep_input \
   -v /Users/sap/Desktop/fmriprep/sourcedata/freesurfer:/freesurfer:ro \
   -v /Users/sap/Desktop/fmriprep/xcp-d:/xcp_d_output \
   -v /Users/sap/Desktop/working:/working \
   -w /working \
   pennlinc/xcp_d:latest \
   /fmriprep_input \
   /xcp_d_output \
   participant \
   --fs-license-file /license.txt \
   --mode linc \
   --smoothing 6 \
   -p 36P \
   --motion-filter-type lp --band-stop-min 6 \
   --lower-bpf 0.01 --upper-bpf 0.08 \
   --stop-on-first-crash \
   --despike n \
   --fd-thresh 0 \
   --file-format nifti \
   --input-type fmriprep \
   --nthreads 2 \
   --mem_gb 16 \
   --participant-label {}

Version:

Latest (0.9.1 at this time)

Environment (Docker, Singularity / Apptainer, custom installation):

Docker

Data formatted according to a validatable standard? Please provide the output of the validator:

PASTE VALIDATOR OUTPUT HERE

Relevant log outputs (up to 20 lines):

PASTE LOG OUTPUT HERE

Screenshots / relevant information:


You shouldn't need parallel processing to run XCP-D. I think the problem is that you didn't pass XCP-D's working-directory flag (--work-dir / -w), so the intermediate files were being written inside the container's filesystem rather than to your mounted working folder. Try adding --work-dir /working.
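For reference, a minimal sketch of the corrected single-subject invocation (the participant label 01 is just a placeholder; mounts and options as in the command above):

docker run --rm \
   -v /Users/sap/Desktop/derivatives/license.txt:/license.txt \
   -v /Users/sap/Desktop/fmriprep:/fmriprep_input \
   -v /Users/sap/Desktop/fmriprep/sourcedata/freesurfer:/freesurfer:ro \
   -v /Users/sap/Desktop/fmriprep/xcp-d:/xcp_d_output \
   -v /Users/sap/Desktop/working:/working \
   pennlinc/xcp_d:latest \
   /fmriprep_input /xcp_d_output participant \
   --fs-license-file /license.txt \
   --mode linc \
   --participant-label 01 \
   --work-dir /working   # intermediates now land on the host mount instead of inside the container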


Thank you, I tried it and it worked well.

But if I have 800 subjects to process, would parallel be a better choice?

Parallel would be the way to go in that case.
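If you scale up with parallel, two standard GNU parallel options also answer your earlier log-monitoring question: --joblog writes a running table of start times, runtimes, and exit codes, and --results saves each job's stdout/stderr under a per-subject directory. A minimal sketch (the paths xcpd_joblog.tsv and parallel_logs/ are just example names):

cat sublist_numbers.txt | parallel -j 4 \
   --joblog /Users/sap/Desktop/working/xcpd_joblog.tsv \
   --results /Users/sap/Desktop/working/parallel_logs \
   docker run --rm \
   -v /Users/sap/Desktop/derivatives/license.txt:/license.txt \
   -v /Users/sap/Desktop/fmriprep:/fmriprep_input \
   -v /Users/sap/Desktop/fmriprep/sourcedata/freesurfer:/freesurfer:ro \
   -v /Users/sap/Desktop/fmriprep/xcp-d:/xcp_d_output \
   -v /Users/sap/Desktop/working:/working \
   pennlinc/xcp_d:latest \
   /fmriprep_input /xcp_d_output participant \
   --fs-license-file /license.txt \
   --mode linc \
   --work-dir /working \
   --participant-label {}

# watch overall progress (one line per finished job):
tail -f /Users/sap/Desktop/working/xcpd_joblog.tsv

One caveat: with -j 4 and --mem_gb 16 per job, four concurrent containers will demand a lot of memory and scratch space, so size -j to what your machine (and the Docker VM on macOS) can actually hold.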


Thank you very much! :)