MRIQC disk usage

I tried to run MRIQC with Docker on a large dataset (~2000 images). It filled up my disk, and I cannot find where the files it created are.

My BIDS dataset and mriqc output directory are on a separate disk. I have no Docker containers on my system as per “docker container ls”.

Here is a sample error:

OSError: [Errno 28] No space left on device: '/usr/local/src/mriqc/work/workflow_enumerator/anatMRIQCT1w/SpatialNormalization/in_file…data…sub-927676…ses-V24…anat…sub-927676_ses-V24_run-002_T1w.nii'

Anyone ever encounter this issue?

This basically broke my entire system, and I can no longer work.

Try using https://docs.docker.com/engine/reference/commandline/system_df/ (docker system df) to investigate what is taking up the space. If you are using Mac or Windows, you should use the Docker settings to set the virtual machine's maximum disk size (last time I checked, those virtual disks could only grow, so you might need to reset Docker).
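If it helps, here is a minimal sketch (in Python, since it just shells out to the docker CLI; it assumes 'docker' is on your PATH and your user can talk to the Docker daemon) that prints the disk-usage summary:

    # Minimal sketch: shell out to the docker CLI to see what is using space.
    # Assumes 'docker' is on PATH and the current user can access the daemon.
    import subprocess

    # Summary of space used by images, containers, local volumes and build cache
    subprocess.run(["docker", "system", "df"], check=True)

    # Verbose breakdown listing individual images, containers and volumes
    subprocess.run(["docker", "system", "df", "-v"], check=True)

If the reclaimable space turns out to be large, 'docker system prune' can remove unused data, but double-check what it is going to delete first.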

Thanks for your response, Chris. This is on Ubuntu 16.04. For some reason, there was ~350GB of stuff in (I think; IT didn't specify) /var/lib/docker/ that I didn't have access to. My IT department deleted everything and I'm trying again. I had not used Docker for anything other than mriqc.

There is some problem running mriqc on my BIDS dataset on more than one participant at a time (i.e., without --participant_label and a list of participants). It seems to work well initially and then eventually crashes at AntsRegistration. There is no output in the /derivatives/ directory, and I get a "no space on disk" error, but all of my disk drives have plenty of space.

I have tried writing a Python script that launches mriqc individually for each participant, but Docker doesn't seem to like that; it gives the error: "docker: invalid reference format".

Still can’t run mriqc.

The '--rm' option in docker means that the container (inside which the working directory temporarily resides) is removed as soon as the container stops running (successfully or not). This means that you might be running out of space: the container stops, gets deleted, and the space gets released, which is why you cannot find the files afterwards. Please mind that you can set the work directory to an arbitrary location using the mriqc '-w' flag (but first you will have to mount that folder into the container using Docker's '-v' option).

As for the script, it is hard to help without seeing the code and how you are trying to run it.
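For what it is worth, "docker: invalid reference format" usually means the image name ended up empty or mangled when the command string was assembled. Below is a minimal sketch of a per-participant launcher under some assumptions (the image tag poldracklab/mriqc:0.11.0 and placeholder paths such as /big_disk/bids, /big_disk/derivatives and /big_disk/mriqc_work are mine; adjust them to your setup):

    # Minimal sketch of a per-participant MRIQC launcher via Docker.
    import subprocess

    IMAGE = "poldracklab/mriqc:0.11.0"   # assumed image tag
    BIDS_DIR = "/big_disk/bids"          # placeholder paths -- adjust to your setup
    OUT_DIR = "/big_disk/derivatives"
    WORK_DIR = "/big_disk/mriqc_work"    # scratch on a disk with plenty of space

    participants = ["927676", "123456"]  # labels without the "sub-" prefix

    for label in participants:
        cmd = [
            "docker", "run", "--rm",
            "-v", BIDS_DIR + ":/data:ro",
            "-v", OUT_DIR + ":/out",
            "-v", WORK_DIR + ":/scratch",   # work directory lives on the big disk
            IMAGE,
            "/data", "/out", "participant",
            "--participant_label", label,
            "-w", "/scratch",               # mriqc's work-dir flag
        ]
        # A list of arguments (rather than one shell string) avoids the quoting
        # mistakes that typically produce "docker: invalid reference format".
        subprocess.run(cmd, check=True)

You can drop '--rm' while debugging if you want to inspect the stopped container afterwards.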

Hurray! After removing the '--rm' flag and setting the working directory on another disk, it worked (in most cases)! I still get a couple of errors (below; I am not sure whether these affect the follow-up classifier problem), but the output files exist and the group-level analysis went off without a hitch!

Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/plugins/multiproc.py", line 68, in run_node
    result['result'] = node.run(updatehash=updatehash)
  File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 480, in run
    result = self._run_interface(execute=True)
  File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 564, in _run_interface
    return self._run_command(execute)
  File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 644, in _run_command
    result = self._interface.run(cwd=outdir)
  File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/interfaces/base/core.py", line 521, in run
    runtime = self._run_interface(runtime)
  File "/usr/local/miniconda/lib/python3.6/site-packages/mriqc/interfaces/anatomical.py", line 104, in _run_interface
    stats = summary_stats(inudata, pvmdata, airdata, erode=erode)
  File "/usr/local/miniconda/lib/python3.6/site-packages/mriqc/qc/anatomical.py", line 607, in summary_stats
    'p95': float(np.percentile(img[mask == 1], 95)),
  File "/usr/local/miniconda/lib/python3.6/site-packages/numpy/lib/function_base.py", line 4116, in percentile
    interpolation=interpolation)
  File "/usr/local/miniconda/lib/python3.6/site-packages/numpy/lib/function_base.py", line 3858, in _ureduce
    r = func(a, **kwargs)
  File "/usr/local/miniconda/lib/python3.6/site-packages/numpy/lib/function_base.py", line 4233, in _percentile
    x1 = take(ap, indices_below, axis=axis) * weights_below
  File "/usr/local/miniconda/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 134, in take
    return _wrapfunc(a, 'take', indices, axis=axis, out=out, mode=mode)
  File "/usr/local/miniconda/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 57, in _wrapfunc
    return getattr(obj, method)(*args, **kwds)
IndexError: cannot do a non-empty take from an empty axes.

Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/plugins/multiproc.py", line 68, in run_node
    result['result'] = node.run(updatehash=updatehash)
  File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 480, in run
    result = self._run_interface(execute=True)
  File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 564, in _run_interface
    return self._run_command(execute)
  File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 644, in _run_command
    result = self._interface.run(cwd=outdir)
  File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/interfaces/base/core.py", line 521, in run
    runtime = self._run_interface(runtime)
  File "/usr/local/miniconda/lib/python3.6/site-packages/mriqc/interfaces/anatomical.py", line 111, in _run_interface
    snrvals.append(snr(stats[tlabel]['median'], stats[tlabel]['stdv'], stats[tlabel]['n']))
  File "/usr/local/miniconda/lib/python3.6/site-packages/mriqc/qc/anatomical.py", line 228, in snr
    return float(mu_fg / (sigma_fg * sqrt(n / (n - 1))))
ZeroDivisionError: float division by zero

Now I am trying to run the random forest classifier, and I get the following error:
IndexError: boolean index did not match indexed array along dimension 0; dimension is 73 but corresponding boolean dimension is 71

The Docker version of mriqc that I used is 0.11.0, and the current pip-installable version is 0.12.0. Both 0.12.0 and 0.11.0 gave me the above error. Is it plausible that the group-level analysis failed to include some rows in the T1w.csv table because of the two errors above?
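One way to check (a rough sketch; the BIDS and output paths below are placeholders for my setup, adjust as needed) is to compare the number of T1w images in the dataset with the number of rows in the group T1w.csv:

    # Rough check: do the group-level rows match the number of T1w images?
    from pathlib import Path
    import pandas as pd

    bids_dir = Path("/big_disk/bids")                   # placeholder path
    group_csv = Path("/big_disk/derivatives/T1w.csv")   # placeholder path

    # Count T1w images in the BIDS dataset (with or without a session level)
    n_images = len(list(bids_dir.glob("sub-*/**/anat/*_T1w.nii*")))

    # Count rows in the group-level table produced by mriqc
    n_rows = len(pd.read_csv(group_csv))

    print("T1w images in BIDS dataset:", n_images)
    print("Rows in group T1w.csv:     ", n_rows)

If the counts differ by two, that would match the 73-vs-71 mismatch reported by the classifier.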