QSIPrep: No space left on device

Summary of what happened:

Preprocessing completed successfully, but the NODDI recon workflow (`--recon-spec amico_noddi`) is failing to run on the host. How can I resolve this?

Command used (and if a helper script was used, a link to the helper script or the command generated):

singularity run --cleanenv -B /mnt/cina/WMH:/WMH /mnt/cina/WMH/lib/qsiprep-0.16.1.simg /WMH/data/BIDS/reprocess_QSIprep/ /WMH/data/BIDS/reprocess_QSIprep/derivatives/qsiprep participant --participant_label sub-037S62227 --stop-on-first-crash --notrack --output-resolution 1.0 --fs_license_file /WMH/lib/fs_license.txt --recon-spec amico_noddi --skip-bids-validation -w /WMH/data/intermediates_qsirerun/qsiprep --nthreads 36

Version:

qsiprep-0.16.1.simg

Environment (Docker, Singularity, custom installation):

Singularity

Data formatted according to a validatable standard? Please provide the output of the validator:

BIDS standard

Relevant log outputs (up to 20 lines):

230106-20:45:28,127 nipype.interface INFO:
	 Fitting NODDI Model.

-> Creating LUT for "NODDI" model:
   [ 163.6 seconds ]

-> Resampling LUT for subject "subject":
   [ 34.6 seconds ]

-> Fitting "NODDI" model to 1324544 voxels:
230106-20:48:54,104 nipype.workflow INFO:
	 [Node] Finished "recon_noddi", elapsed time 291.298209s.
230106-20:48:54,104 nipype.workflow WARNING:
	 Storing result file without outputs
230106-20:48:54,105 nipype.workflow WARNING:
	 [Node] Error on "qsirecon_wf.sub-037S62225_amico_noddi.sub_037S62225_ses_005_run_01_space_T1w_desc_preproc_recon_wf.fit_noddi.recon_noddi" (/WMH/data/intermediates_qsirerun/qsiprep/qsirecon_wf/sub-037S62225_amico_noddi/sub_037S62225_ses_005_run_01_space_T1w_desc_preproc_recon_wf/fit_noddi/recon_noddi)
230106-20:48:55,99 nipype.workflow ERROR:
	 Node recon_noddi failed to run on host cina.
230106-20:48:55,101 nipype.workflow ERROR:
	 Saving crash info to /WMH/data/BIDS/reprocess_QSIprep/derivatives/qsiprep/qsirecon/sub-037S62225/log/20230106-204018_6159dbad-53b2-4f9c-86d8-e2b4d9b980e0/crash-20230106-204855-tds27-recon_noddi-4f2ed50b-9bea-4966-b9ce-deb3347505fa.txt

Hi, what are the contents of /WMH/data/BIDS/reprocess_QSIprep/derivatives/qsiprep/qsirecon/sub-037S62225/log/20230106-204018_6159dbad-53b2-4f9c-86d8-e2b4d9b980e0/crash-20230106-204855-tds27-recon_noddi-4f2ed50b-9bea-4966-b9ce-deb3347505fa.txt?

Here are the contents of the crash file:

Node inputs:

b0_threshold = 50.0
big_delta = None
bval_file = <undefined>
bvec_file = <undefined>
dIso = 0.003
dPar = 0.0017
dwi_file = <undefined>
isExvivo = False
little_delta = None
mask_file = <undefined>
num_threads = 8
write_fibgz = <undefined>
write_mif = <undefined>

Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 527, in run
    result = self._run_interface(execute=True)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 645, in _run_interface
    return self._run_command(execute)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 771, in _run_command
    raise NodeExecutionError(msg)
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node recon_noddi.

Traceback:
	joblib.externals.loky.process_executor._RemoteTraceback: 
	"""
	Traceback (most recent call last):
	  File "/usr/local/miniconda/lib/python3.8/site-packages/joblib/externals/loky/backend/queues.py", line 153, in _feed
	    obj_ = dumps(obj, reducers=reducers)
	  File "/usr/local/miniconda/lib/python3.8/site-packages/joblib/externals/loky/backend/reduction.py", line 271, in dumps
	    dump(obj, buf, reducers=reducers, protocol=protocol)
	  File "/usr/local/miniconda/lib/python3.8/site-packages/joblib/externals/loky/backend/reduction.py", line 264, in dump
	    _LokyPickler(file, reducers=reducers, protocol=protocol).dump(obj)
	  File "/usr/local/miniconda/lib/python3.8/site-packages/joblib/externals/cloudpickle/cloudpickle_fast.py", line 602, in dump
	    return Pickler.dump(self, obj)
	  File "/usr/local/miniconda/lib/python3.8/site-packages/joblib/_memmapping_reducer.py", line 442, in __call__
	    for dumped_filename in dump(a, filename):
	  File "/usr/local/miniconda/lib/python3.8/site-packages/joblib/numpy_pickle.py", line 482, in dump
	    NumpyPickler(f, protocol=protocol).dump(value)
	  File "/usr/local/miniconda/lib/python3.8/pickle.py", line 487, in dump
	    self.save(obj)
	  File "/usr/local/miniconda/lib/python3.8/site-packages/joblib/numpy_pickle.py", line 281, in save
	    wrapper.write_array(obj, self)
	  File "/usr/local/miniconda/lib/python3.8/site-packages/joblib/numpy_pickle.py", line 104, in write_array
	    pickler.file_handle.write(chunk.tobytes('C'))
	OSError: [Errno 28] No space left on device
	"""

	The above exception was the direct cause of the following exception:

	Traceback (most recent call last):
	  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/interfaces/base/core.py", line 398, in run
	    runtime = self._run_interface(runtime)
	  File "/usr/local/miniconda/lib/python3.8/site-packages/qsiprep/interfaces/amico.py", line 173, in _run_interface
	    aeval.fit()
	  File "/usr/local/miniconda/lib/python3.8/site-packages/amico/core.py", line 461, in fit
	    estimates = Parallel(n_jobs=n_jobs, backend=parallel_backend)(
	  File "/usr/local/miniconda/lib/python3.8/site-packages/joblib/parallel.py", line 1056, in __call__
	    self.retrieve()
	  File "/usr/local/miniconda/lib/python3.8/site-packages/joblib/parallel.py", line 935, in retrieve
	    self._output.extend(job.get(timeout=self.timeout))
	  File "/usr/local/miniconda/lib/python3.8/site-packages/joblib/_parallel_backends.py", line 542, in wrap_future_result
	    return future.result(timeout=timeout)
	  File "/usr/local/miniconda/lib/python3.8/concurrent/futures/_base.py", line 444, in result
	    return self.__get_result()
	  File "/usr/local/miniconda/lib/python3.8/concurrent/futures/_base.py", line 389, in __get_result
	    raise self._exception
	_pickle.PicklingError: Could not pickle the task to send it to the workers.
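The root cause is the nested `OSError: [Errno 28] No space left on device`: joblib's loky backend memory-maps large NumPy arrays to a temporary folder before handing them to worker processes, and that folder filled up. A hedged workaround sketch, assuming a host scratch directory with plenty of free space (the `$SCRATCH` path below is a placeholder, not from the original setup), is to bind it over the container's `/tmp`:

```shell
# Sketch: give joblib room to spill its memory-mapped worker arrays by
# binding a roomy host scratch directory over the container's /tmp.
# $SCRATCH is a placeholder -- point it at a filesystem with free space.
scratch=${SCRATCH:-/tmp/qsiprep-scratch}
mkdir -p "$scratch"

# Same invocation as before, with one extra bind; the remaining qsiprep
# arguments are unchanged and elided here.
command -v singularity >/dev/null && singularity run --cleanenv \
  -B /mnt/cina/WMH:/WMH \
  -B "$scratch":/tmp \
  /mnt/cina/WMH/lib/qsiprep-0.16.1.simg \
  ... || true
```

Whether `/tmp` (rather than `/dev/shm` or the `-w` work directory) is the filesystem that actually ran out of space depends on the host, so it is worth checking each of them before re-running.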

This sounds like an issue with your computer/cluster rather than with QSIPrep itself: the crash file ends in `OSError: [Errno 28] No space left on device`. Do you have free space in /mnt/cina/WMH/data/intermediates_qsirerun/qsiprep?
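A quick way to check on the host, using standard coreutils (the work-directory path comes from the command at the top of the thread; `/tmp` in the snippet is only a stand-in so it runs anywhere):

```shell
# Free space on the filesystem backing the qsiprep -w work directory.
# Substitute the real path, e.g.
# /mnt/cina/WMH/data/intermediates_qsirerun/qsiprep
workdir=${WORKDIR:-/tmp}
df -h "$workdir"

# joblib can also spill memory-mapped arrays to shared memory inside the
# container, which may fill up independently of the work directory.
df -h /dev/shm 2>/dev/null || df -h /tmp
```

If either filesystem is near 100% use, clearing old intermediates from the work directory (or pointing `-w` at a larger volume) should let the NODDI fit complete.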