fMRIPrep: no space left on device

Summary of what happened:

Dear community,

We are currently analyzing fMRI data with a structural T1w image and four tasks and would like to use fMRIPrep for preprocessing. Unfortunately, even with a single subject, we run out of disk space (error message below). With 1 TB of space available, the error occurs even when we define the working directory.

Command used (and if a helper script was used, a link to the helper script or the command generated):

We are using this shell-command on Ubuntu 20.04:

docker run -ti --rm  \
  -u $(id -u):$(id -g) \
  -v /data/DataPreproc/rawdata:/data:ro \
  -v /data/DataPreproc/derivatives/fmriprep-preprocessing/:/out \
  -v /data/DataPreproc/workingDirectory/:/workDir \
  -v /home/christian/license.txt:/opt/freesurfer/license.txt \
  nipreps/fmriprep:latest /data /out  \
  participant --participant-label "001" \
    --output-spaces MNI152NLin6Asym:res-2 MNI152NLin2009cAsym fsLR \
    -w /workDir \
    --return-all-components \
    --cifti-output \
    --force-syn \
    --mem 52000;

Version:

Latest (23.0.2 at this time)

Environment (Docker, Singularity, custom installation):

Docker on Ubuntu 20.04

Data formatted according to a validatable standard? Please provide the output of the validator:

Relevant log outputs (up to 20 lines):

This is the last part of the command line output:

230519-13:26:13,660 nipype.workflow INFO:
         [Node] Setting-up "fmriprep_23_0_wf.single_subject_ZI019_wf.func_preproc_ses_01_task_rest_run_01_wf.final_boldref_wf.gen_avg" in "/tmp/work/fmriprep_23_0_wf/single_subject_ZI019_wf/func_preproc_ses_01_task_rest_run_01_wf/final_boldref_wf/gen_avg".
230519-13:26:13,671 nipype.workflow INFO:
         [Node] Executing "gen_avg" <niworkflows.interfaces.images.RobustAverage>
230519-13:26:20,395 nipype.interface INFO:
         stderr 2023-05-19T13:26:20.395848:++ 3dvolreg: AFNI version=AFNI_23.0.04 (Feb 13 2023) [64-bit]
230519-13:26:20,396 nipype.interface INFO:
         stderr 2023-05-19T13:26:20.395848:++ Authored by: RW Cox
230519-13:26:20,396 nipype.interface INFO:
         stderr 2023-05-19T13:26:20.396494:*+ WARNING:   If you are performing spatial transformations on an oblique dset,
230519-13:26:20,396 nipype.interface INFO:
         stderr 2023-05-19T13:26:20.396494:  such as /tmp/work/fmriprep_23_0_wf/single_subject_ZI019_wf/func_preproc_ses_01_task_rest_run_01_wf/final_boldref_wf/gen_avg/vol0000_unwarped_merged_valid_sliced.nii.gz,
230519-13:26:20,396 nipype.interface INFO:
         stderr 2023-05-19T13:26:20.396494:  or viewing/combining it with volumes of differing obliquity,
230519-13:26:20,396 nipype.interface INFO:
         stderr 2023-05-19T13:26:20.396494:  you should consider running: 
230519-13:26:20,396 nipype.interface INFO:
         stderr 2023-05-19T13:26:20.396494:     3dWarp -deoblique 
230519-13:26:20,396 nipype.interface INFO:
         stderr 2023-05-19T13:26:20.396494:  on this and  other oblique datasets in the same session.
230519-13:26:20,396 nipype.interface INFO:
         stderr 2023-05-19T13:26:20.396494: See 3dWarp -help for details.
230519-13:26:20,396 nipype.interface INFO:
         stderr 2023-05-19T13:26:20.396759:++ Oblique dataset:/tmp/work/fmriprep_23_0_wf/single_subject_ZI019_wf/func_preproc_ses_01_task_rest_run_01_wf/final_boldref_wf/gen_avg/vol0000_unwarped_merged_valid_sliced.nii.gz is 16.249729 degrees from plumb.
230519-13:26:20,396 nipype.interface INFO:
         stderr 2023-05-19T13:26:20.396833:++ Coarse del was 10, replaced with 6
230519-13:26:56,608 nipype.interface INFO:
         stderr 2023-05-19T13:26:56.608480:++ Max displacement in automask = 0.20 (mm) at sub-brick 9
230519-13:26:56,609 nipype.interface INFO:
         stderr 2023-05-19T13:26:56.608480:++ Max delta displ  in automask = 0.10 (mm) at sub-brick 2
230519-13:26:56,619 nipype.interface INFO:
         stderr 2023-05-19T13:26:56.619821:*+ WARNING: Disk space: writing dataset ./vol0000_unwarped_merged_valid_sliced_volreg.nii.gz (37 MB), but only 0 free MB on disk
230519-13:26:59,955 nipype.workflow INFO:
         [Node] Finished "gen_avg", elapsed time 46.282453s.
230519-13:27:01,588 nipype.workflow INFO:
         [Node] Setting-up "fmriprep_23_0_wf.single_subject_ZI019_wf.func_preproc_ses_01_task_rest_run_01_wf.final_boldref_wf.enhance_and_skullstrip_bold_wf.init_aff" in "/tmp/work/fmriprep_23_0_wf/single_subject_ZI019_wf/func_preproc_ses_01_task_rest_run_01_wf/final_boldref_wf/enhance_and_skullstrip_bold_wf/init_aff".
exception calling callback for <Future at 0x7fb1bfb706a0 state=finished raised FileNotFoundError>
concurrent.futures.process._RemoteTraceback: 
"""
Traceback (most recent call last):
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 518, in run
    os.makedirs(outdir, exist_ok=True)
  File "/opt/conda/lib/python3.9/os.py", line 215, in makedirs
    makedirs(head, exist_ok=exist_ok)
  File "/opt/conda/lib/python3.9/os.py", line 225, in makedirs
    mkdir(name, mode)
OSError: [Errno 28] No space left on device: '/tmp/work/fmriprep_23_0_wf/single_subject_ZI019_wf/func_preproc_ses_01_task_rest_run_01_wf/final_boldref_wf/enhance_and_skullstrip_bold_wf'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.9/concurrent/futures/process.py", line 243, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/plugins/multiproc.py", line 70, in run_node
    result["result"] = node.result
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 223, in result
    return _load_resultfile(
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/utils.py", line 291, in load_resultfile
    raise FileNotFoundError(results_file)
FileNotFoundError: /tmp/work/fmriprep_23_0_wf/single_subject_ZI019_wf/func_preproc_ses_01_task_rest_run_01_wf/final_boldref_wf/enhance_and_skullstrip_bold_wf/init_aff/result_init_aff.pklz
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.9/concurrent/futures/_base.py", line 330, in _invoke_callbacks
    callback(self)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/plugins/multiproc.py", line 159, in _async_callback
    result = args.result()
  File "/opt/conda/lib/python3.9/concurrent/futures/_base.py", line 439, in result
    return self.__get_result()
  File "/opt/conda/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result
    raise self._exception
FileNotFoundError: /tmp/work/fmriprep_23_0_wf/single_subject_ZI019_wf/func_preproc_ses_01_task_rest_run_01_wf/final_boldref_wf/enhance_and_skullstrip_bold_wf/init_aff/result_init_aff.pklz
230519-13:27:01,952 nipype.workflow INFO:
         [Node] Finished "average", elapsed time 92.277685s.
230519-13:27:03,638 nipype.workflow INFO:
         [Node] Setting-up "fmriprep_23_0_wf.single_subject_ZI019_wf.func_preproc_ses_01_task_rest_run_01_wf.unwarp_wf.brainextraction_wf.clipper_pre" in "/tmp/work/fmriprep_23_0_wf/single_subject_ZI019_wf/func_preproc_ses_01_task_rest_run_01_wf/unwarp_wf/brainextraction_wf/clipper_pre".
230519-13:27:03,641 nipype.workflow INFO:
         [Node] Executing "clipper_pre" <niworkflows.interfaces.nibabel.IntensityClip>
230519-13:27:04,388 nipype.workflow INFO:
         [Node] Finished "clipper_pre", elapsed time 0.745836s.
230519-13:27:05,824 nipype.workflow INFO:
         [Node] Setting-up "fmriprep_23_0_wf.single_subject_ZI019_wf.func_preproc_ses_01_task_rest_run_01_wf.unwarp_wf.brainextraction_wf.n4" in "/tmp/work/fmriprep_23_0_wf/single_subject_ZI019_wf/func_preproc_ses_01_task_rest_run_01_wf/unwarp_wf/brainextraction_wf/n4".
230519-13:27:05,827 nipype.workflow INFO:
         [Node] Executing "n4" <nipype.interfaces.ants.segmentation.N4BiasFieldCorrection>
230519-13:27:07,995 nipype.workflow INFO:
         [Node] Finished "n4", elapsed time 2.166367s.
230519-13:27:09,676 nipype.workflow INFO:
         [Node] Setting-up "fmriprep_23_0_wf.single_subject_ZI019_wf.func_preproc_ses_01_task_rest_run_01_wf.unwarp_wf.brainextraction_wf.clipper_post" in "/tmp/work/fmriprep_23_0_wf/single_subject_ZI019_wf/func_preproc_ses_01_task_rest_run_01_wf/unwarp_wf/brainextraction_wf/clipper_post".
230519-13:27:09,678 nipype.workflow INFO:
         [Node] Executing "clipper_post" <niworkflows.interfaces.nibabel.IntensityClip>
230519-13:27:10,179 nipype.workflow INFO:
         [Node] Finished "clipper_post", elapsed time 0.500801s.
230519-13:27:11,644 nipype.workflow INFO:
         [Node] Setting-up "fmriprep_23_0_wf.single_subject_ZI019_wf.func_preproc_ses_01_task_rest_run_01_wf.unwarp_wf.brainextraction_wf.masker" in "/tmp/work/fmriprep_23_0_wf/single_subject_ZI019_wf/func_preproc_ses_01_task_rest_run_01_wf/unwarp_wf/brainextraction_wf/masker".
230519-13:27:11,655 nipype.workflow INFO:
         [Node] Executing "masker" <sdcflows.interfaces.brainmask.BrainExtraction>
230519-13:27:15,801 nipype.workflow INFO:
         [Node] Finished "masker", elapsed time 4.144534s.
230519-13:45:05,248 nipype.workflow INFO:
         [Node] Finished "resample", elapsed time 1243.756193s.
230519-13:45:05,248 nipype.workflow WARNING:
         Storing result file without outputs
230519-13:45:05,249 nipype.workflow WARNING:
         [Node] Error on "fmriprep_23_0_wf.single_subject_ZI019_wf.func_preproc_ses_01_task_reversal_run_01_wf.unwarp_wf.resample" (/tmp/work/fmriprep_23_0_wf/single_subject_ZI019_wf/func_preproc_ses_01_task_reversal_run_01_wf/unwarp_wf/resample)
exception calling callback for <Future at 0x7fb1bfa16610 state=finished raised FileNotFoundError>
concurrent.futures.process._RemoteTraceback: 
"""
Traceback (most recent call last):
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 527, in run
    result = self._run_interface(execute=True)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 645, in _run_interface
    return self._run_command(execute)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 745, in _run_command
    _save_resultfile(
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/utils.py", line 235, in save_resultfile
    savepkl(resultsfile, result)
  File "/opt/conda/lib/python3.9/site-packages/nipype/utils/filemanip.py", line 719, in savepkl
    with pkl_open(tmpfile, "wb") as pkl_file:
  File "/opt/conda/lib/python3.9/gzip.py", line 58, in open
    binary_file = GzipFile(filename, gz_mode, compresslevel)
  File "/opt/conda/lib/python3.9/gzip.py", line 173, in __init__
    fileobj = self.myfileobj = builtins.open(filename, mode or 'rb')
OSError: [Errno 28] No space left on device: '/tmp/work/fmriprep_23_0_wf/single_subject_ZI019_wf/func_preproc_ses_01_task_reversal_run_01_wf/unwarp_wf/resample/result_resample.pklz.tmp'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.9/concurrent/futures/process.py", line 243, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/plugins/multiproc.py", line 70, in run_node
    result["result"] = node.result
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 223, in result
    return _load_resultfile(
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/utils.py", line 291, in load_resultfile
    raise FileNotFoundError(results_file)
FileNotFoundError: /tmp/work/fmriprep_23_0_wf/single_subject_ZI019_wf/func_preproc_ses_01_task_reversal_run_01_wf/unwarp_wf/resample/result_resample.pklz

Screenshots / relevant information:

Thanks a lot for your help!
Christian

The failure is in /tmp/work/, so it looks like your -w /workDir flag did not take effect in the actual run. Alternatively, /data/DataPreproc/workingDirectory/ may be a symlink to /tmp/work that is being resolved inside the Docker container.
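To rule out the symlink possibility, you can resolve the path on the host before mounting it. Here is a small self-contained demo of the check using a throwaway symlink; on your machine you would simply run `readlink -f /data/DataPreproc/workingDirectory` directly:

```shell
# Illustrative check with a temporary symlink; substitute the real
# working-directory path on your host.
target=$(mktemp -d)     # stands in for the real storage location
link=$(mktemp -u)       # stands in for the path you pass to -v
ln -s "$target" "$link"
readlink -f "$link"     # prints the directory a bind mount would really use
rm "$link"; rmdir "$target"
```

If the resolved path points somewhere under /tmp, Docker bind-mounts the symlink target, and the container's scratch data ends up on the small tmp filesystem.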

Thank you very much for the quick response. The error output above was produced without setting the working directory, and was caused by missing root permissions for the /tmp directory on our server.
Accordingly, I tried setting up fMRIPrep with a specified working directory. With the working directory set, our 800 GB of storage fills up with a single subject and the preprocessing eventually aborts.
Thus, I would like to ask whether it is possible to define a specific /tmp directory (in a location where we have user privileges), or whether there is an option for the working directory that limits disk usage?
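Since the logs show scratch files landing under the container's /tmp, one workaround is to bind-mount a large host directory over /tmp inside the container, so that anything written there lands on the big volume regardless of the -w flag. A minimal sketch, reusing the paths from the command above (not a definitive fix; adjust mounts to your setup):

```shell
docker run -ti --rm \
  -u $(id -u):$(id -g) \
  -v /data/DataPreproc/rawdata:/data:ro \
  -v /data/DataPreproc/derivatives/fmriprep-preprocessing/:/out \
  -v /data/DataPreproc/workingDirectory/:/tmp \
  -v /home/christian/license.txt:/opt/freesurfer/license.txt \
  nipreps/fmriprep:latest /data /out participant \
    --participant-label "001" \
    -w /tmp/work
```

As far as I know, fMRIPrep has no option to cap working-directory size; disk usage scales with the number and length of the BOLD runs and with the requested outputs, so trimming --output-spaces or dropping --return-all-components should also reduce the footprint.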