fMRIPrep error in multiproc.py

Summary of what happened:

Running fMRIPrep via Docker, it got a fair way in, then started producing this error:

231201-02:27:23,543 nipype.workflow INFO:
         [Node] Executing "ds_bold" <fmriprep.interfaces.DerivativesDataSink>
231201-02:27:23,544 nipype.workflow INFO:
         [Node] Executing "ds_bold" <fmriprep.interfaces.DerivativesDataSink>
231201-02:27:23,545 nipype.workflow INFO:
         [Node] Setting-up "fmriprep_23_2_wf.sub_02_wf.bold_task_auditory_run_06_wf.ds_bold_std_wf.ds_bold" in "/home/rt/fmriprep_23_2_wf/sub_02_wf/bold_task_auditory_run_06_wf/ds_bold_std_wf/_in_tuple_MNI152NLin2009cAsym.resnative/ds_bold".
231201-02:27:23,548 nipype.workflow INFO:
         [Node] Executing "ds_bold" <fmriprep.interfaces.DerivativesDataSink>
231201-02:27:23,548 nipype.workflow INFO:
         [Node] Executing "ds_bold" <fmriprep.interfaces.DerivativesDataSink>
231201-02:27:23,550 nipype.workflow INFO:
         [Node] Executing "ds_bold" <fmriprep.interfaces.DerivativesDataSink>
231201-02:27:23,553 nipype.workflow INFO:
         [Node] Executing "ds_bold" <fmriprep.interfaces.DerivativesDataSink>
231201-02:27:23,555 nipype.workflow INFO:
         [Node] Setting-up "fmriprep_23_2_wf.sub_02_wf.bold_task_auditory_run_07_wf.ds_bold_std_wf.ds_bold" in "/home/rt/fmriprep_23_2_wf/sub_02_wf/bold_task_auditory_run_07_wf/ds_bold_std_wf/_in_tuple_MNI152NLin2009cAsym.resnative/ds_bold".
231201-02:27:23,576 nipype.workflow INFO:
         [Node] Executing "ds_bold" <fmriprep.interfaces.DerivativesDataSink>
231201-02:27:23,580 nipype.workflow INFO:
         [Node] Executing "ds_bold" <fmriprep.interfaces.DerivativesDataSink>
exception calling callback for <Future at 0x7f80862f6230 state=finished raised BrokenProcessPool>
Traceback (most recent call last):
  File "/opt/conda/envs/fmriprep/lib/python3.10/concurrent/futures/_base.py", line 342, in _invoke_callbacks
    callback(self)
  File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/nipype/pipeline/plugins/multiproc.py", line 159, in _async_callback
    result = args.result()
  File "/opt/conda/envs/fmriprep/lib/python3.10/concurrent/futures/_base.py", line 451, in result
    return self.__get_result()
  File "/opt/conda/envs/fmriprep/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
exception calling callback for <Future at 0x7f8085daebf0 state=finished raised BrokenProcessPool>
Traceback (most recent call last):
  File "/opt/conda/envs/fmriprep/lib/python3.10/concurrent/futures/_base.py", line 342, in _invoke_callbacks
    callback(self)
  File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/nipype/pipeline/plugins/multiproc.py", line 159, in _async_callback
    result = args.result()
  File "/opt/conda/envs/fmriprep/lib/python3.10/concurrent/futures/_base.py", line 451, in result
    return self.__get_result()
  File "/opt/conda/envs/fmriprep/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
  File "/opt/conda/envs/fmriprep/lib/python3.10/concurrent/futures/_base.py", line 342, in _invoke_callbacks
    callback(self)
  File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/nipype/pipeline/plugins/multiproc.py", line 159, in _async_callback

This repeats many times.

Command used (and if a helper script was used, a link to the helper script or the command generated):

docker run --rm -e DOCKER_VERSION_8395080871=24.0.6 -it -v /home/ra/license.txt:/opt/freesurfer/license.txt:ro -v /mnt/h/FMRI_SAMPLE/ds004835-download:/data:ro -v /mnt/h/FMRI_SAMPLE/out:/out -v /home/ra:/scratch nipreps/fmriprep:23.1.4 /data /out participant -w /home/ra

Version:

23.2.0a2

Environment (Docker, Singularity, custom installation):

docker

Data formatted according to a validatable standard? Please provide the output of the validator:


bids-validator@1.14.0
(node:89809) Warning: Closing directory handle on garbage collection
(Use `node --trace-warnings ...` to show where the warning was created)
        1: [WARN] Not all subjects contain the same files. Each subject should contain the same number of files with the same naming unless some files are known to be missing. (code: 38 - INCONSISTENT_SUBJECTS)
                ./sub-01/anat/sub-01_run-01_T1w.json
                        Evidence: Subject: sub-01; Missing file: sub-01_run-01_T1w.json
                ./sub-01/anat/sub-01_run-01_T1w.nii.gz
                        Evidence: Subject: sub-01; Missing file: sub-01_run-01_T1w.nii.gz
                ./sub-01/anat/sub-01_run-02_T1w.json
                        Evidence: Subject: sub-01; Missing file: sub-01_run-02_T1w.json
                ./sub-01/anat/sub-01_run-02_T1w.nii.gz
                        Evidence: Subject: sub-01; Missing file: sub-01_run-02_T1w.nii.gz
                ./sub-01/fmap/sub-01_acq-gre_dir-PA_epi.json
                        Evidence: Subject: sub-01; Missing file: sub-01_acq-gre_dir-PA_epi.json
                ./sub-01/fmap/sub-01_acq-gre_dir-PA_epi.nii.gz
                        Evidence: Subject: sub-01; Missing file: sub-01_acq-gre_dir-PA_epi.nii.gz
                ./sub-02/anat/sub-02_run-01_T1w.json
                        Evidence: Subject: sub-02; Missing file: sub-02_run-01_T1w.json
                ./sub-02/anat/sub-02_run-01_T1w.nii.gz
                        Evidence: Subject: sub-02; Missing file: sub-02_run-01_T1w.nii.gz
                ./sub-02/anat/sub-02_run-02_T1w.json
                        Evidence: Subject: sub-02; Missing file: sub-02_run-02_T1w.json
                ./sub-02/anat/sub-02_run-02_T1w.nii.gz
                        Evidence: Subject: sub-02; Missing file: sub-02_run-02_T1w.nii.gz
                ... and 12 more files having this issue (Use --verbose to see them all).

        Please visit https://neurostars.org/search?q=INCONSISTENT_SUBJECTS for existing conversations about this issue.

        2: [WARN] Not all subjects/sessions/runs have the same scanning parameters. (code: 39 - INCONSISTENT_PARAMETERS)
                ./sub-01/func/sub-01_task-auditory_run-07_bold.nii.gz
                ./sub-01/func/sub-01_task-auditory_run-08_bold.nii.gz
                ./sub-02/func/sub-02_task-auditory_run-07_bold.nii.gz
                ./sub-02/func/sub-02_task-auditory_run-08_bold.nii.gz
                ./sub-03/func/sub-03_task-auditory_run-07_bold.nii.gz
                ./sub-03/func/sub-03_task-auditory_run-08_bold.nii.gz
                ./sub-04/func/sub-04_task-auditory_run-07_bold.nii.gz
                ./sub-04/func/sub-04_task-auditory_run-08_bold.nii.gz
                ./sub-05/func/sub-05_task-auditory_run-07_bold.nii.gz
                ./sub-05/func/sub-05_task-auditory_run-08_bold.nii.gz
                ... and 3 more files having this issue (Use --verbose to see them all).

        Please visit https://neurostars.org/search?q=INCONSISTENT_PARAMETERS for existing conversations about this issue.

        Summary:                 Available Tasks:        Available Modalities:
        691 Files, 6.44GB        auditory                MRI
        6 - Subjects
        1 - Session


        If you have any questions, please post on https://neurostars.org/tags/bids.

Relevant log outputs (up to 20 lines):

log listed above

Screenshots / relevant information:

WSL, Ubuntu 20.04

Hi @rcha and welcome to Neurostars!

This looks like an out-of-memory issue. How much memory are you devoting to the job? Keep in mind Docker may not make all system memory available by default.

Does it work when you try processing only one subject, e.g. with the `--participant-label` argument?

Also, a note: I think in your command you should have written `-w /scratch` instead, since `/home/ra` is the host path; inside the container that mount appears as `/scratch`.

Best,
Steven
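
For what it's worth, putting those suggestions together, the adjusted command might look something like this (paths copied from the post above; the memory and CPU limits are illustrative values, not recommendations — `--mem-mb` and `--nprocs` are fMRIPrep's resource options, and `--memory` is Docker's):

```shell
# Cap the container at 32 GB, tell fMRIPrep to stay within it,
# process a single subject, and point -w at the container-side
# path of the scratch mount (/scratch, not the host path /home/ra).
docker run --rm -it --memory=32g \
  -v /home/ra/license.txt:/opt/freesurfer/license.txt:ro \
  -v /mnt/h/FMRI_SAMPLE/ds004835-download:/data:ro \
  -v /mnt/h/FMRI_SAMPLE/out:/out \
  -v /home/ra:/scratch \
  nipreps/fmriprep:23.1.4 /data /out participant \
  --participant-label sub-01 \
  --mem-mb 30000 --nprocs 8 \
  -w /scratch
```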

Thanks very much for the reply. I'm having a lot of trouble resuming after errors: it doesn't seem to matter where I put the working directory, it always restarts the entire pipeline from the beginning.

e.g. just now trying to run 1 participant only:

docker run --rm -e DOCKER_VERSION_8395080871=24.0.6 -it -v /home/ra/license.txt:/opt/freesurfer/license.txt:ro -v /mnt/h/FMRI_SAMPLE/ds004835-download:/data:ro -v /mnt/h/FMRI_SAMPLE/out:/out -v /home/ra:/scratch nipreps/fmriprep:23.1.4 /data /out participant --participant-label sub-01 -w /home/ra
bids-validator@1.13.1
(node:9) Warning: Closing directory handle on garbage collection
(Use `node --trace-warnings ...` to show where the warning was created)
This dataset appears to be BIDS compatible.
        Summary:                 Available Tasks:        Available Modalities:
        691 Files, 6.44GB        auditory                MRI
        6 - Subjects
        1 - Session


        If you have any questions, please post on https://neurostars.org/tags/bids.
231201-21:39:56,658 nipype.workflow IMPORTANT:
         Running fMRIPrep version 23.2.0a2

         License NOTICE ##################################################
         fMRIPrep 23.2.0a2
         Copyright 2023 The NiPreps Developers.

         This product includes software developed by
         the NiPreps Community (https://nipreps.org/).

         Portions of this software were developed at the Department of
         Psychology at Stanford University, Stanford, CA, US.

         This software is also distributed as a Docker container image.
         The bootstrapping file for the image ("Dockerfile") is licensed
         under the MIT License.

         This software may be distributed through an add-on package called
         "Docker Wrapper" that is under the BSD 3-clause License.
         #################################################################
231201-21:39:56,876 nipype.workflow IMPORTANT:
         Building fMRIPrep's workflow:
           * BIDS dataset path: /data.
           * Participant list: ['01'].
           * Run identifier: 20231201-213939_6568d59f-1d83-4cba-840b-e2eae97fabf9.
           * Output spaces: MNI152NLin2009cAsym:res-native.
           * Pre-run FreeSurfer's SUBJECTS_DIR: /out/sourcedata/freesurfer.
231201-21:39:57,461 nipype.workflow INFO:
         ANAT Stage 1: Adding template workflow
231201-21:39:57,626 nipype.workflow INFO:
         ANAT Stage 2: Preparing brain extraction workflow
231201-21:39:57,695 nipype.workflow INFO:
         ANAT Stage 3: Preparing segmentation workflow
231201-21:39:57,699 nipype.workflow INFO:
         ANAT Stage 4: Preparing normalization workflow for ['MNI152NLin2009cAsym']
231201-21:39:57,706 nipype.workflow INFO:
         ANAT Stage 5: Preparing surface reconstruction workflow
231201-21:39:57,724 nipype.workflow INFO:
         ANAT Stage 6: Preparing mask refinement workflow
231201-21:39:57,726 nipype.workflow INFO:
         ANAT No T2w images provided - skipping Stage 7
231201-21:39:57,726 nipype.workflow INFO:
         ANAT Stage 8: Creating GIFTI surfaces for ['white', 'pial', 'midthickness', 'sphere_reg', 'sphere']
231201-21:39:57,741 nipype.workflow INFO:
         ANAT Stage 8: Creating GIFTI metrics for ['thickness', 'sulc']
231201-21:39:57,747 nipype.workflow INFO:
         ANAT Stage 8a: Creating cortical ribbon mask
231201-21:39:57,751 nipype.workflow INFO:
         ANAT Stage 9: Creating fsLR registration sphere
231201-21:39:57,755 nipype.workflow INFO:
         ANAT Stage 10: Creating MSM-Sulc registration sphere
231201-21:39:59,101 nipype.workflow INFO:
         Stage 1: Adding HMC boldref workflow
231201-21:39:59,106 nipype.workflow INFO:
         Stage 2: Adding motion correction workflow
231201-21:39:59,113 nipype.workflow INFO:
         Stage 3: Adding coregistration boldref workflow
231201-21:39:59,146 nipype.workflow IMPORTANT:
         BOLD series will be slice-timing corrected to an offset of 0.774s.
231201-21:39:59,883 nipype.workflow INFO:
         Stage 1: Adding HMC boldref workflow
231201-21:39:59,888 nipype.workflow INFO:
         Stage 2: Adding motion correction workflow
231201-21:39:59,892 nipype.workflow INFO:
         Stage 3: Adding coregistration boldref workflow
231201-21:39:59,922 nipype.workflow IMPORTANT:
         BOLD series will be slice-timing corrected to an offset of 0.774s.
231201-21:40:00,494 nipype.workflow INFO:
         Stage 1: Adding HMC boldref workflow
231201-21:40:00,499 nipype.workflow INFO:
         Stage 2: Adding motion correction workflow
231201-21:40:00,503 nipype.workflow INFO:
         Stage 3: Adding coregistration boldref workflow
231201-21:40:00,532 nipype.workflow IMPORTANT:
         BOLD series will be slice-timing corrected to an offset of 0.775s.
231201-21:40:01,210 nipype.workflow INFO:
         Stage 1: Adding HMC boldref workflow
231201-21:40:01,215 nipype.workflow INFO:
         Stage 2: Adding motion correction workflow

Hi @rcha

Those statements you shared are always present when starting fMRIPrep, whether the working directory is new or old. They do not mean that it is rerunning everything from the beginning.

Best,
Steven

Thanks Steven
At the moment I can see that it is running Atropos if I look at `top` inside the Docker container.
Does this indicate that it is starting again? Is there a definitive way to tell?

I can't speak to that specific process, but if you're reusing an old working directory it should resume from the last completed step (with the exception of a few minor things that are recalculated on every run).

Thanks for your help. Does it need to exit cleanly for this to happen? I had to Ctrl-C out of it.
Also, I have 64 GB of RAM, so I am a little surprised it ran out. Do I need to set that in Docker?
Appreciate it

Nope, it does not need to exit cleanly.

Yes, the memory limit should be set in the Docker Desktop settings.
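
One WSL-specific note: Docker Desktop's WSL 2 backend takes its memory ceiling from WSL itself, so the limit may also need raising in a `.wslconfig` file in the Windows user profile (illustrative values; run `wsl --shutdown` afterwards for the change to take effect):

```ini
[wsl2]
memory=48GB
processors=8
```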

Thanks for being super helpful! :)

Of course, no problem! Hope it works after this change. Note that you may also consider using brainlife.io to run many subjects if you are limited on resources.

Best,
Steven