MRIQC error when creating HTML report

Hey everyone,

I’m currently running MRIQC on a dataset in BIDS format.
To do so, I’m using the poldracklab/mriqc Docker container as follows:

sudo docker run -it --rm -v $bidsdir:/data:ro -v $output_dir:/out poldracklab/mriqc:latest /data /out --verbose-reports participant --participant_label 07

For some participants everything works fine. For others, however, the command results in an error when processing the functional images.
The error seems to happen during the generation of the HTML report and looks like this:

2018-02-06 16:12:04,692 niworkflows:INFO Successfully created report (/usr/local/src/mriqc/work/workflow_enumerator/funcMRIQC/SpatialNormalization/_in_file_..data..sub-07..func..sub-07_task-test_run-02_bold.nii.gz/EPI2MNI/report.svg)
Fatal Python error: Segmentation fault

Current thread 0x00007fbd4ccca700 (most recent call first):
  File "/usr/local/miniconda/lib/python3.6/site-packages/matplotlib/image.py", line 411 in _make_image
  File "/usr/local/miniconda/lib/python3.6/site-packages/matplotlib/image.py", line 719 in make_image
  File "/usr/local/miniconda/lib/python3.6/site-packages/matplotlib/image.py", line 495 in draw
  File "/usr/local/miniconda/lib/python3.6/site-packages/matplotlib/artist.py", line 63 in draw_wrapper
  File "/usr/local/miniconda/lib/python3.6/site-packages/matplotlib/image.py", line 147 in flush_images
  File "/usr/local/miniconda/lib/python3.6/site-packages/matplotlib/image.py", line 163 in _draw_list_compositing_images
  File "/usr/local/miniconda/lib/python3.6/site-packages/matplotlib/axes/_base.py", line 2409 in draw
  File "/usr/local/miniconda/lib/python3.6/site-packages/matplotlib/artist.py", line 63 in draw_wrapper
  File "/usr/local/miniconda/lib/python3.6/site-packages/matplotlib/image.py", line 139 in _draw_list_compositing_images
  File "/usr/local/miniconda/lib/python3.6/site-packages/matplotlib/figure.py", line 1143 in draw
  File "/usr/local/miniconda/lib/python3.6/site-packages/matplotlib/artist.py", line 63 in draw_wrapper
  File "/usr/local/miniconda/lib/python3.6/site-packages/matplotlib/backends/backend_svg.py", line 1248 in _print_svg
  File "/usr/local/miniconda/lib/python3.6/site-packages/matplotlib/backends/backend_svg.py", line 1212 in print_svg
  File "/usr/local/miniconda/lib/python3.6/site-packages/matplotlib/backend_bases.py", line 2192 in print_figure
  File "/usr/local/miniconda/lib/python3.6/site-packages/matplotlib/figure.py", line 1572 in savefig
  File "<string>", line 34 in _big_plot
  File "/usr/local/miniconda/lib/python3.6/site-packages/niworkflows/nipype/interfaces/utility/wrappers.py", line 137 in _run_interface
  File "/usr/local/miniconda/lib/python3.6/site-packages/niworkflows/nipype/interfaces/base/core.py", line 485 in run
  File "/usr/local/miniconda/lib/python3.6/site-packages/niworkflows/nipype/pipeline/engine/nodes.py", line 596 in _run_command
  File "/usr/local/miniconda/lib/python3.6/site-packages/niworkflows/nipype/pipeline/engine/nodes.py", line 520 in _run_interface
  File "/usr/local/miniconda/lib/python3.6/site-packages/niworkflows/nipype/pipeline/engine/nodes.py", line 443 in run
  File "/usr/local/miniconda/lib/python3.6/site-packages/niworkflows/nipype/pipeline/plugins/multiproc.py", line 62 in run_node
  File "/usr/local/miniconda/lib/python3.6/multiprocessing/pool.py", line 119 in worker
  File "/usr/local/miniconda/lib/python3.6/multiprocessing/process.py", line 93 in run
  File "/usr/local/miniconda/lib/python3.6/multiprocessing/process.py", line 249 in _bootstrap
  File "/usr/local/miniconda/lib/python3.6/multiprocessing/popen_fork.py", line 74 in _launch
  File "/usr/local/miniconda/lib/python3.6/multiprocessing/popen_fork.py", line 20 in __init__
  File "/usr/local/miniconda/lib/python3.6/multiprocessing/context.py", line 277 in _Popen
  File "/usr/local/miniconda/lib/python3.6/multiprocessing/context.py", line 223 in _Popen
  File "/usr/local/miniconda/lib/python3.6/multiprocessing/process.py", line 105 in start
  File "/usr/local/miniconda/lib/python3.6/multiprocessing/pool.py", line 233 in _repopulate_pool
  File "/usr/local/miniconda/lib/python3.6/multiprocessing/pool.py", line 240 in _maintain_pool
  File "/usr/local/miniconda/lib/python3.6/multiprocessing/pool.py", line 366 in _handle_workers
  File "/usr/local/miniconda/lib/python3.6/threading.py", line 864 in run
  File "/usr/local/miniconda/lib/python3.6/threading.py", line 916 in _bootstrap_inner
  File "/usr/local/miniconda/lib/python3.6/threading.py", line 884 in _bootstrap

After that, the Docker container keeps running without doing anything until I kill the process. The corresponding .json file in mriqc/derivatives/ (e.g. sub-07_task-test_run-01_bold.json) looks normal.
Oddly, the error doesn’t appear randomly but is participant-specific: sub-01, sub-03, sub-04, sub-05, and sub-06 work every time, while the remaining participants fail every time.
Comparing the .json files of participants that worked with those that didn’t, I can’t see any fundamental differences.
The same goes for the underlying raw data: multiband EPI, TR = 0.512 s, 790 images per run, two runs per participant.
The structural pipeline works completely fine for every participant.

Does anyone have an idea what the problem might be?

Best, Peer

It looks a lot like a memory issue. Try limiting --nprocs so that fewer tasks run simultaneously (by default, MRIQC will run as many parallel tasks as there are CPUs).
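For example, reusing the command from the first post with a limit appended (the value 2 is just an illustration, so pick whatever fits your machine; and check the container’s --help output in case your MRIQC version spells the flag differently):

sudo docker run -it --rm -v $bidsdir:/data:ro -v $output_dir:/out poldracklab/mriqc:latest /data /out --verbose-reports participant --participant_label 07 --nprocs 2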

Hey @oesteban,

thank you very much for your fast reply and the hint.
I had actually already tried that, but the behavior doesn’t change:
some participants work while others don’t, and it’s always
the same participants. I also tried again after re-converting the files,
but that didn’t help either.

Judging by the terminal output, all participants are comparable, except for
the part mentioned in the first post. Can you think of any participant-related
effects or possible problems?

Hi @PeerHerholz, could you provide us with that particular participant’s data? I’ll try to replicate the issue.

Thanks!

Hi @oesteban,

sure thing! Do you have any preferred way of sharing/uploading,
or would something like Google Drive be okay?

Thank you very much for taking the time to look into this!

Best, Peer

Dropbox? I don’t have a strong preference.

Just to keep everyone up to date: I shared the files with @oesteban via a private link, as I’m not allowed to post them publicly, sorry. I’ll, of course, post every update here!

Hi @PeerHerholz, sorry for the long wait. I’ve been able to replicate the problem on your data using MRIQC 0.10.1. I’m investigating the source of the issue.

Okay, so trying with the latest (development) version of MRIQC, it worked like a charm. My intuition is that this PR and this PR together fixed this issue. I think the EPI2MNI registration node fails, but MRIQC only becomes aware of it when it tries to plot the invalid output.

EDIT: actually, this problem was reported here; sorry for not checking that out first.

Please try with the latest MRIQC version; it should work for you.
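Keep in mind that Docker caches images locally, so pull the latest tag again before re-running to make sure you actually get the newest build:

sudo docker pull poldracklab/mriqc:latest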

Otherwise, the new fMRI summary plot should fix your issue. We’ll be releasing a new version very soon. BTW, the new plots look beautiful on your data (this is sub-02, run-01, which previously crashed with version 0.10.1):

[screenshot of the new fMRI summary plot for sub-02, run-01]

Hey @oesteban,

thank you so much for the great support, help, and effort!

I reran the complete dataset using the latest version and everything worked like a charm! The random forest classifier now actually flagged a problematic participant that it didn’t catch before, so even more benefits, hehe. Sorry for this noobish query; I never thought about updating…

Just to be sure: was it a graphics-related problem, more precisely with the fMRI summary plot?

Thanks again, best, Peer

Hi, that is great news!

So looking deeper into your first post, you were having two issues:

  • One was purely graphical, as you mentioned: your scans are very long, and under particular conditions the carpet plot would cause a segmentation fault. A decimation strategy is now in place to prevent that.
  • The second was a logic problem in the nipype interface for SpatialNormalization, which would crash silently and then try to generate a report (over null outputs). Those were the subjects showing an error on the EPI2MNI node.

Both have been solved in the latest versions of MRIQC.
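If you want to double-check which release ended up inside your container, you can ask the image directly (assuming the image’s entrypoint is the mriqc CLI and that your version exposes a --version flag, which recent releases do):

sudo docker run --rm poldracklab/mriqc:latest --version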

Cheers,
Oscar