fMRIPrep memory error related to ICA-AROMA


Hi all,

I ran the newest version of fMRIPrep (1.4.1) on my dataset, but it finished with a number of errors. All of them relate either to the ica_aroma node or to the func_derivatives node (the latter only in MNI152NLin6Asym space, which is also the space used by AROMA). The errors result in missing files that vary from subject to subject (although all bold_preproc files are created): some subjects are missing confounds files or JSON sidecars for the preprocessed images. The specific errors also vary across subjects, but the most common ones are:

Memory error

 Node Name: fmriprep_wf.single_subject_m33_wf.func_preproc_task_prlpun_wf.func_derivatives_wf.ds_bold_std
     File: /out/fmriprep/sub-m33/log/20190819-144336_d25ee372-868d-4b3e-a1f8-6ce6c259f943/crash-20190821-193504-root-ds_bold_std.a1-788e4e76-fc7b-4d31-9112-9a2935ea1236.txt
     Working Directory: /scratch/fmriprep_wf/single_subject_m33_wf/func_preproc_task_prlpun_wf/func_derivatives_wf/_key_MNI152NLin6Asym/ds_bold_std

base_directory: /out
check_hdr: True
compress: True
desc: preproc
in_file: ['/scratch/fmriprep_wf/single_subject_m33_wf/func_preproc_task_prlpun_wf/bold_std_trans_wf/_key_MNI152NLin6Asym/merge/vol0000_xform-00000_merged.nii.gz']
keep_dtype: True
source_file: /data/sub-m33/func/sub-m33_task-prlpun_bold.nii.gz
space: MNI152NLin6Asym

Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/", line 316, in _send_procs_to_workers
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/", line 472, in run
result = self._run_interface(execute=True)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/", line 563, in _run_interface
return self._run_command(execute)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/", line 643, in _run_command
result =
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/", line 375, in run
runtime = self._run_interface(runtime)
  File "/usr/local/miniconda/lib/python3.7/site-packages/niworkflows/interfaces/", line 494, in _run_interface
nii.__class__(np.array(nii.dataobj), nii.affine, hdr).to_filename(
  File "/usr/local/miniconda/lib/python3.7/site-packages/nibabel/", line 356, in __array__
return apply_read_scaling(raw_data, self._slope, self._inter)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nibabel/", line 965, in apply_read_scaling
arr = arr * slope
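The last frame of this traceback hints at why the step is memory-hungry: nibabel's apply_read_scaling multiplies the raw on-disk array by the scaling slope, and it deliberately promotes both operands to arrays so the result is upcast to float64. For a large 4D BOLD series this materializes a second, several-times-larger copy in RAM. A minimal numpy sketch of that upcasting (the shape here is just illustrative):

```python
import numpy as np

# BOLD series are often stored as int16 on disk; a small stand-in shape here.
raw = np.zeros((64, 64, 36, 10), dtype=np.int16)

# nibabel promotes the slope to a 1-d array before multiplying
# (arr = arr * slope), which forces array-array promotion to float64:
slope = np.atleast_1d(np.float64(1.5))
scaled = raw * slope

print(scaled.dtype)                  # float64
print(scaled.nbytes // raw.nbytes)   # 4: the scaled copy is 4x larger
```

So on top of the int16 data already in memory, the node allocates a float64 copy four times its size, which is where an OOM kill becomes likely for long runs.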

AROMA error

Node Name: fmriprep_wf.single_subject_m14_wf.func_preproc_task_prlpun_wf.ica_aroma_wf.ica_aroma
File: /out/fmriprep/sub-m14/log/20190816-094855_b4ae6d79-b309-4e83-a722-a5801787ac44/crash-20190819-114145-root-ica_aroma-9e78cb96-0cac-4e01-a45c-0113c8045848.txt
Working Directory: /scratch/fmriprep_wf/single_subject_m14_wf/func_preproc_task_prlpun_wf/ica_aroma_wf/ica_aroma

    TR: 2.0
    compress_report: auto
    denoise_type: nonaggr
    environ: {}
    in_file: /scratch/fmriprep_wf/single_subject_m14_wf/func_preproc_task_prlpun_wf/ica_aroma_wf/smooth/vol0000_xform-00000_merged_smooth.nii.gz
    mask: /scratch/fmriprep_wf/single_subject_m14_wf/func_preproc_task_prlpun_wf/bold_std_trans_wf/_key_MNI152NLin6Asym/mask_std_tfm/ref_bold_corrected_brain_mask_maths_trans.nii.gz
    melodic_dir: /scratch/fmriprep_wf/single_subject_m14_wf/func_preproc_task_prlpun_wf/ica_aroma_wf/melodic
    motion_parameters: /scratch/fmriprep_wf/single_subject_m14_wf/func_preproc_task_prlpun_wf/bold_hmc_wf/normalize_motion/motion_params.txt
    out_dir: out
    out_report: ica_aroma_reportlet.svg
    report_mask: /scratch/fmriprep_wf/single_subject_m14_wf/func_preproc_task_prlpun_wf/bold_std_trans_wf/_key_MNI152NLin6Asym/mask_std_tfm/ref_bold_corrected_brain_mask_maths_trans.nii.gz

Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/", line 69, in run_node
    result['result'] =
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/", line 472, in run
    result = self._run_interface(execute=True)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/", line 563, in _run_interface
    return self._run_command(execute)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/", line 643, in _run_command
    result =
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/", line 376, in run
    runtime = self._post_run_hook(runtime)
  File "/usr/local/miniconda/lib/python3.7/site-packages/niworkflows/interfaces/", line 171, in _post_run_hook
    outputs = self.aggregate_outputs(runtime=runtime)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/", line 478, in aggregate_outputs
    raise error
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/", line 471, in aggregate_outputs
    setattr(outputs, key, val)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/", line 112, in validate
    self.info_text, value))
traits.trait_errors.TraitError: The trait 'nonaggr_denoised_file' of an ICA_AROMAOutputSpecRPT instance is an existing file name, but the path  '/scratch/fmriprep_wf/single_subject_m14_wf/func_preproc_task_prlpun_wf/ica_aroma_wf/ica_aroma/out/denoised_func_data_nonaggr.nii.gz' does not exist.

It seems like the AROMA error is correlated with the missing confounds files. Interestingly, all HTML reports do include the plots of correlations among nuisance regressors. I was running fMRIPrep on two lab PCs for selected subjects, and one PC produced significantly fewer errors than the other (although the types of errors were identical).

Any ideas what is causing the problem? I am now running fMRIPrep without the --use-aroma flag to see if the problem remains. I can also try running a downgraded version of fMRIPrep to see if that improves things.

Thanks in advance,


Hi @kbonna

Thank you for your message! These errors may be the result of Docker memory limits (the memory error in particular points that way). Instructions for raising these limits can be found in our fMRIPrep tutorial.

It will be interesting to see the result of running without the AROMA flag.

Thank you,


From your traceback, it seems that you are processing several subjects at a time. How are you running fMRIPrep? Please let us know the full command line, whether you are using containers (Docker, Singularity), and some description of your hardware.


@franklin I tried running without AROMA and it succeeded on two subjects. I am now running fMRIPrep on the entire dataset; results should be ready tomorrow, and I will post them as soon as possible.

@oesteban I am using Docker. Here is my full example command for a batch of subjects:

sudo /home/connectomics/anaconda3/bin/fmriprep-docker /mnt/dane/BONNA_decide_net/data/main_fmri_study /mnt/dane/BONNA_decide_net/data/main_fmri_study/derivatives -w /mnt/dane/BONNA_decide_net/data/temp --participant-label m18 m19 m20 m21 m22 m23 m24 m25 --use-aroma --fs-license-file /mnt/dane/BONNA_decide_net/code/fmri_preparation/license.txt

cat /proc/cpuinfo gives (for each of 12 processors):

processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 63
model name : Intel® Core™ i7-5820K CPU @ 3.30GHz
stepping : 2
microcode : 0x43
cpu MHz : 1199.388
cache size : 15360 KB
physical id : 0
siblings : 12
core id : 0
cpu cores : 6
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 15
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm cpuid_fault epb invpcid_single pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida arat pln pts md_clear flush_l1d
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds
bogomips : 6596.55
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:

free -h gives (while running fmriprep):

 total        used        free      shared  buff/cache   available
Mem:            23G        6.0G        9.4G        148M        8.1G         16G
Swap:          2.0G        739M        1.3G

Hi @kbonna, you may want to isolate participants into separate processes (in other words, run one fmriprep-docker instance per subject).
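For reference, serializing the runs could be scripted along these lines. This is only a sketch: the paths and subject labels are copied from the command posted earlier in the thread, and the helper names (build_cmd, run_all) are made up for illustration.

```python
import subprocess

# Paths copied from the command earlier in the thread; adjust to your setup.
BIDS_DIR = "/mnt/dane/BONNA_decide_net/data/main_fmri_study"
OUT_DIR = BIDS_DIR + "/derivatives"
WORK_DIR = "/mnt/dane/BONNA_decide_net/data/temp"
LICENSE = "/mnt/dane/BONNA_decide_net/code/fmri_preparation/license.txt"

def build_cmd(subject):
    """Build a single-participant fmriprep-docker command line."""
    return [
        "fmriprep-docker", BIDS_DIR, OUT_DIR,
        "-w", WORK_DIR,
        "--participant-label", subject,
        "--use-aroma",
        "--fs-license-file", LICENSE,
    ]

def run_all(subjects):
    # One container per subject: an OOM kill in one run does not
    # take the others down, and peak memory stays bounded.
    for subject in subjects:
        subprocess.run(build_cmd(subject), check=True)

# run_all(["m18", "m19", "m20", "m21"])  # uncomment to launch
```

Each subject then runs in its own container, so a crash or out-of-memory kill only loses that one subject's run, and the remaining subjects continue from the loop.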