Error running FLAMEO on Nipype

Dear all,

I’m trying to do subject-level modeling in FSL using Nipype. I wrote a workflow that mostly works, but I keep running into this error when I use FLAMEO:

RuntimeError: Command:
flameo --copefile=/scratch/groups/hyo/swist_cache/l1_model/task-tomloc_model-localizer_sub-04/l1/fixedfx/copemerge/mapflow/_copemerge0/cope1_merged.nii.gz --covsplitfile=/scratch/groups/hyo/swist_cache/l1_model/task-tomloc_model-localizer_sub-04/l1/fixedfx/l2model/design.grp --designfile=/scratch/groups/hyo/swist_cache/l1_model/task-tomloc_model-localizer_sub-04/l1/fixedfx/l2model/design.mat --dofvarcopefile=/scratch/groups/hyo/swist_cache/l1_model/task-tomloc_model-localizer_sub-04/l1/fixedfx/gendofvolume/dof_file.nii.gz --ld=stats --maskfile=/scratch/groups/hyo/SwiSt/BIDS_data/derivatives/fmriprep/sub-04/func/sub-04_task-tomloc_run-01_space-MNI152NLin2009cAsym_desc-brain_mask.nii.gz --runmode=fe --tcontrastsfile=/scratch/groups/hyo/swist_cache/l1_model/task-tomloc_model-localizer_sub-04/l1/fixedfx/l2model/design.con --varcopefile=/scratch/groups/hyo/swist_cache/l1_model/task-tomloc_model-localizer_sub-04/l1/fixedfx/varcopemerge/mapflow/_varcopemerge0/varcope1_merged.nii.gz
Standard output:
Log directory is: stats
Setting up:
ntptsing=2.000000 

evs_group=1.000000 

Standard error:
Aborted
Return code: 134

The funny thing is, the command in the error message above runs just fine when I run it in the terminal, so I’m guessing this is a Nipype issue rather than an FSL issue. Does anyone have any idea how to debug this? Some additional context:

Thanks in advance!
Natalia

Was this directly in your terminal, or via singularity shell?

Ah, good catch! I was running it directly in my terminal. Here’s the error message I got when I ran through my Singularity image:

[nvelez@sh-06-36 /home/groups/hyo/singularity]$ singularity run -B $PI_HOME,$PI_SCRATCH sll_fmri_20190204.img 
Some packages in this Docker container are non-free
If you are considering commercial use of this container, please consult the relevant license:
https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/Licence
nvelez@sh-06-36:/home/groups/hyo/singularity$ flameo --copefile=/scratch/groups/hyo/swist_cache/l1_model/task-tomloc_model-localizer_sub-04/l1/fixedfx/copemerge/mapflow/_copemerge0/cope1_merged.nii.gz --covsplitfile=/scratch/groups/hyo/swist_cache/l1_model/task-tomloc_model-localizer_sub-04/l1/fixedfx/l2model/design.grp --designfile=/scratch/groups/hyo/swist_cache/l1_model/task-tomloc_model-localizer_sub-04/l1/fixedfx/l2model/design.mat --dofvarcopefile=/scratch/groups/hyo/swist_cache/l1_model/task-tomloc_model-localizer_sub-04/l1/fixedfx/gendofvolume/dof_file.nii.gz --ld=stats --maskfile=/scratch/groups/hyo/SwiSt/BIDS_data/derivatives/fmriprep/sub-04/func/sub-04_task-tomloc_run-01_space-MNI152NLin2009cAsym_desc-brain_mask.nii.gz --runmode=fe --tcontrastsfile=/scratch/groups/hyo/swist_cache/l1_model/task-tomloc_model-localizer_sub-04/l1/fixedfx/l2model/design.con --varcopefile=/scratch/groups/hyo/swist_cache/l1_model/task-tomloc_model-localizer_sub-04/l1/fixedfx/varcopemerge/mapflow/_varcopemerge0/varcope1_merged.nii.gz
Log directory is: stats
Setting up:
Image Exception : #22 :: ERROR: Could not open image /scratch/groups/hyo/swist_cache/l1_model/task-tomloc_model-localizer_sub-04/l1/fixedfx/copemerge/mapflow/_copemerge0/cope1_merged
terminate called after throwing an instance of 'RBD_COMMON::BaseException'
Aborted

Are you able to ls /scratch/groups/hyo/swist_cache/l1_model/task-tomloc_model-localizer_sub-04/l1/fixedfx/copemerge/mapflow/_copemerge0/cope1_merged.nii.gz from within your singularity shell?

And I see that your singularity image has FSL 5.0.11 installed. What version do you have installed locally?
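
A quick way to compare the two, assuming a standard FSL install where $FSLDIR is set (the image name below is taken from your output above):

cat $FSLDIR/etc/fslversion
singularity exec sll_fmri_20190204.img bash -c 'cat $FSLDIR/etc/fslversion'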

(1) I cleared my working directory and re-ran the workflow from scratch, and realized that the fixedfx directory doesn’t exist anymore because I’ve since changed the workflow a bit. (I build my own fixedfx workflow from scratch, rather than loading the pre-made workflow.) The FLAMEO command is now:

flameo --copefile=/scratch/groups/hyo/swist_cache/l1_model/task-tomloc_model-localizer_sub-06/l1/copemerge/mapflow/_copemerge0/cope1_merged.nii.gz --covsplitfile=/scratch/groups/hyo/swist_cache/l1_model/task-tomloc_model-localizer_sub-06/l1/l2model/design.grp --designfile=/scratch/groups/hyo/swist_cache/l1_model/task-tomloc_model-localizer_sub-06/l1/l2model/design.mat --ld=stats --maskfile=/scratch/groups/hyo/SwiSt/BIDS_data/derivatives/fmriprep/sub-06/func/sub-06_task-tomloc_run-01_space-MNI152NLin2009cAsym_desc-brain_mask.nii.gz --runmode=fe --tcontrastsfile=/scratch/groups/hyo/swist_cache/l1_model/task-tomloc_model-localizer_sub-06/l1/l2model/design.con --varcopefile=/scratch/groups/hyo/swist_cache/l1_model/task-tomloc_model-localizer_sub-06/l1/varcopemerge/mapflow/_varcopemerge0/varcope1_merged.nii.gz

I checked each of the files in this new command—I can ls all of them. I tried running this new command inside the Singularity image and got a different error:

flameo --copefile=/scratch/groups/hyo/swist_cache/l1_model/task-tomloc_model-localizer_sub-06/l1/copemerge/mapflow/_copemerge0/cope1_merged.nii.gz --covsplitfile=/scratch/groups/hyo/swist_cache/l1_model/task-tomloc_model-localizer_sub-06/l1/l2model/design.grp --designfile=/scratch/groups/hyo/swist_cache/l1_model/task-tomloc_model-localizer_sub-06/l1/l2model/design.mat --ld=stats --maskfile=/scratch/groups/hyo/SwiSt/BIDS_data/derivatives/fmriprep/sub-06/func/sub-06_task-tomloc_run-01_space-MNI152NLin2009cAsym_desc-brain_mask.nii.gz --runmode=fe --tcontrastsfile=/scratch/groups/hyo/swist_cache/l1_model/task-tomloc_model-localizer_sub-06/l1/l2model/design.con --varcopefile=/scratch/groups/hyo/swist_cache/l1_model/task-tomloc_model-localizer_sub-06/l1/varcopemerge/mapflow/_varcopemerge0/varcope1_merged.nii.gz
Log directory is: stats
Setting up:
ntptsing=2.000000 

evs_group=1.000000 

Aborted

The command does generate a new stats directory, but the directory only includes a logfile with a single line (the FSL command).

(2) 5.0.10! Here’s the output when I run the same command in the terminal:

Log directory is: stats+
Setting up:
ntptsing=2.000000 

evs_group=1.000000 

No f contrasts

WARNING: The passed in varcope file, /scratch/groups/hyo/swist_cache/l1_model/task-tomloc_model-localizer_sub-06/l1/varcopemerge/mapflow/_varcopemerge0/varcope1_merged.nii.gz, contains voxels inside the mask with zero (or negative) values. These voxels will be excluded from the analysis.
nevs=1
ntpts=2
ngs=1
nvoxels=74434
Running:
nmaskvoxels=74434
 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100
Saving results

Log directory was: stats+

The stats+ directory contains the following files:

[nvelez@sh-06-36 /home/groups/hyo/singularity]$ ls stats+
cope1.nii.gz                     res4d.nii.gz     zflame1lowertstat1.nii.gz
logfile                          tdof_t1.nii.gz   zflame1uppertstat1.nii.gz
mask.nii.gz                      tstat1.nii.gz    zstat1.nii.gz
mean_random_effects_var1.nii.gz  varcope1.nii.gz
pe1.nii.gz                       weights1.nii.gz

So this may be a bug in FSL 5.0.11. Would you be able to rebuild your Singularity image with FSL 5.0.10 and try again? If that resolves it, then it’s less likely to be nipype or your singularity environment, so I would go ahead and submit a bug report to FSL. (That said, they’ve released FSL 6.0, so I don’t know whether they’ll still be fixing issues in 5.0.x.)
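
If your image was generated with Neurodocker, a minimal sketch of rebuilding it with 5.0.10 would look something like this (the base image here is just an example; keep the rest of your recipe as-is):

docker run --rm kaczmarj/neurodocker:master generate singularity \
    --base debian:stretch --pkg-manager apt \
    --fsl version=5.0.10 > Singularity.fsl5010
sudo singularity build fsl5010.simg Singularity.fsl5010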

Hmmm, I’m getting the same error on a Singularity image with FSL 5.0.10 installed:

nvelez@sh-06-36:/home/groups/hyo/singularity$ flameo --copefile=/scratch/groups/hyo/swist_cache/l1_model/task-tomloc_model-localizer_sub-06/l1/copemerge/mapflow/_copemerge0/cope1_merged.nii.gz --covsplitfile=/scratch/groups/hyo/swist_cache/l1_model/task-tomloc_model-localizer_sub-06/l1/l2model/design.grp --designfile=/scratch/groups/hyo/swist_cache/l1_model/task-tomloc_model-localizer_sub-06/l1/l2model/design.mat --ld=stats --maskfile=/scratch/groups/hyo/SwiSt/BIDS_data/derivatives/fmriprep/sub-06/func/sub-06_task-tomloc_run-01_space-MNI152NLin2009cAsym_desc-brain_mask.nii.gz --runmode=fe --tcontrastsfile=/scratch/groups/hyo/swist_cache/l1_model/task-tomloc_model-localizer_sub-06/l1/l2model/design.con --varcopefile=/scratch/groups/hyo/swist_cache/l1_model/task-tomloc_model-localizer_sub-06/l1/varcopemerge/mapflow/_varcopemerge0/varcope1_merged.nii.gz
Log directory is: stats
Setting up:
ntptsing=2.000000 

evs_group=1.000000 

Aborted

This may be a filesystem permissions issue. Can you try setting the --ld argument to some directory you know you can write to from inside Singularity? e.g. --ld=/scratch/groups/hyo/SwiSt/stats
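
For example, a quick writability check from inside the Singularity shell (just a sketch, using a path from your earlier commands):

touch /scratch/groups/hyo/SwiSt/write_test && echo "writable" && rm /scratch/groups/hyo/SwiSt/write_test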

So, I went to a couple of different directories from within the Singularity image, tested whether I could write to them, and tried running the command there. Unfortunately, I still got the same error.

nvelez@sh-06-36:/home/groups/hyo/SwiSt/3_model$ flameo --copefile=/scratch/groups/hyo/swist_cache/l1_model/task-tomloc_model-localizer_sub-06/l1/copemerge/mapflow/_copemerge0/cope1_merged.nii.gz --covsplitfile=/scratch/groups/hyo/swist_cache/l1_model/task-tomloc_model-localizer_sub-06/l1/l2model/design.grp --designfile=/scratch/groups/hyo/swist_cache/l1_model/task-tomloc_model-localizer_sub-06/l1/l2model/design.mat --ld=/scratch/groups/hyo/swist_cache/stats --maskfile=/scratch/groups/hyo/SwiSt/BIDS_data/derivatives/fmriprep/sub-06/func/sub-06_task-tomloc_run-01_space-MNI152NLin2009cAsym_desc-brain_mask.nii.gz --runmode=fe --tcontrastsfile=/scratch/groups/hyo/swist_cache/l1_model/task-tomloc_model-localizer_sub-06/l1/l2model/design.con --varcopefile=/scratch/groups/hyo/swist_cache/l1_model/task-tomloc_model-localizer_sub-06/l1/varcopemerge/mapflow/_varcopemerge0/varcope1_merged.nii.gz
Log directory is: /scratch/groups/hyo/swist_cache/stats
Setting up:
ntptsing=2.000000 

evs_group=1.000000 

Aborted

Did anyone ever find a solution to this? I just ran into this problem myself, where the code crashes within a Docker container, but runs fine in the terminal. Code and error message copied below.

import nibabel as nib
import numpy as np
import nipype.pipeline.engine as pe
import nipype.interfaces.fsl as fsl
import os, glob

#### INPUT ####
path_base = '/home/neuro/data/stress_combinebaseline_v2/stress_combinebaseline_v2/smooth/*/model/fwhm_5.0*/_modelestimate0/'
work_dir = '/home/neuro/workdir/2020-02-28_simple_lvl2/'
con_list = [
#     'cope1_instructions.nii.gz', 
    'cope2_speech_prep.nii.gz', 
    'cope3_no_speech.nii.gz'
]
########

fixed_fx = pe.Workflow(name='fixedfx')
copemerge = pe.MapNode(
    interface=fsl.Merge(dimension='t'),
    iterfield=['in_files'],
    name='copemerge')
varcopemerge = pe.MapNode(
    interface=fsl.Merge(dimension='t'),
    iterfield=['in_files'],
    name='varcopemerge')
level2model = pe.Node(interface=fsl.L2Model(), name='l2model')
flameo = pe.MapNode(
    interface=fsl.FLAMEO(run_mode='flame1'),
    name='flameo',
    iterfield=['cope_file', 'var_cope_file'])
fixed_fx.connect([
    (copemerge, flameo, [('merged_file', 'cope_file')]),
    (varcopemerge, flameo, [('merged_file', 'var_cope_file')]),
    (level2model, flameo, [('design_mat', 'design_file'),
                           ('design_con', 't_con_file'),
                           ('design_grp', 'cov_split_file')]),
])

for con in con_list:
    if not os.path.exists(os.path.join(work_dir, con.split('.')[0])):
        os.mkdir(os.path.join(work_dir, con.split('.')[0]))
    fixed_fx.base_dir = os.path.join(work_dir, con.split('.')[0])
    print('working on con:', con)
    cope_path = [path_base + con]
    varcope_path = [path_base + 'var' + con]

    # get files
    cope_list = glob.glob(cope_path[0])
    varcope_list = glob.glob(varcope_path[0])

    # build mask.
    mask = np.mean(np.array([nib.load(f).get_data() for f in cope_list]), axis=0)
    mask[mask!=0] = 1
    nib.save(nib.Nifti1Image(mask, nib.load(cope_list[0]).affine, nib.load(cope_list[0]).header), 
             os.path.join(work_dir, 'mask.nii.gz'))
    mask_file = os.path.join(work_dir, 'mask.nii.gz')
    
    fixed_fx.inputs.flameo.mask_file = mask_file
    fixed_fx.inputs.copemerge.in_files = cope_list
    fixed_fx.inputs.varcopemerge.in_files = varcope_list
    fixed_fx.inputs.l2model.num_copes = len(cope_list)
    
    fixed_fx.run()       

working on con: cope1_instructions.nii.gz
        200228-22:58:03,729 nipype.workflow INFO:
        	 Workflow fixedfx settings: ['check', 'execution', 'logging', 'monitoring']
        200228-22:58:03,789 nipype.workflow INFO:
        	 Running serially.
        200228-22:58:03,791 nipype.workflow INFO:
        	 [Node] Setting-up "fixedfx.l2model" in "/home/neuro/workdir/2020-02-28_simple_lvl2/cope1_instructions/fixedfx/l2model".
        200228-22:58:03,805 nipype.workflow INFO:
        	 [Node] Running "l2model" ("nipype.interfaces.fsl.model.L2Model")
        200228-22:58:03,838 nipype.workflow INFO:
        	 [Node] Finished "fixedfx.l2model".
        200228-22:58:03,840 nipype.workflow INFO:
        	 [Node] Setting-up "fixedfx.varcopemerge" in "/home/neuro/workdir/2020-02-28_simple_lvl2/cope1_instructions/fixedfx/varcopemerge".
        200228-22:58:04,49 nipype.workflow INFO:
        	 [Node] Setting-up "_varcopemerge0" in "/home/neuro/workdir/2020-02-28_simple_lvl2/cope1_instructions/fixedfx/varcopemerge/mapflow/_varcopemerge0".
        200228-22:58:04,207 nipype.workflow INFO:
        	 [Node] Running "_varcopemerge0" ("nipype.interfaces.fsl.utils.Merge"), a CommandLine Interface with command:
        fslmerge -t varcope1_instructions_merged.nii.gz /home/neuro/data/stress_combinebaseline_v2/stress_combinebaseline_v2/smooth/sub-001/model/fwhm_5.0sub-001/_modelestimate0/varcope1_instructions.nii.gz /home/neuro/data/stress_combinebaseline_v2/stress_combinebaseline_v2/smooth/sub-002/model/fwhm_5.0sub-002/_modelestimate0/varcope1_instructions.nii.gz /home/neuro/data/stress_combinebaseline_v2/stress_combinebaseline_v2/smooth/sub-003/model/fwhm_5.0sub-003/_modelestimate0/varcope1_instructions.nii.gz /home/neuro/data/stress_combinebaseline_v2/stress_combinebaseline_v2/smooth/sub-004/model/fwhm_5.0sub-004/_modelestimate0/varcope1_instructions.nii.gz /home/neuro/data/stress_combinebaseline_v2/stress_combinebaseline_v2/smooth/sub-005/model/fwhm_5.0sub-005/_modelestimate0/varcope1_instructions.nii.gz /home/neuro/data/stress_combinebaseline_v2/stress_combinebaseline_v2/smooth/sub-006/model/fwhm_5.0sub-006/_modelestimate0/varcope1_instructions.nii.gz /home/neuro/data/stress_combinebaseline_v2/stress_combinebaseline_v2/smooth/sub-007/model/fwhm_5.0sub-007/_modelestimate0/varcope1_instructions.nii.gz /home/neuro/data/stress_combinebaseline_v2/stress_combinebaseline_v2/smooth/sub-008/model/fwhm_5.0sub-008/_modelestimate0/varcope1_instructions.nii.gz /home/neuro/data/stress_combinebaseline_v2/stress_combinebaseline_v2/smooth/sub-009/model/fwhm_5.0sub-009/_modelestimate0/varcope1_instructions.nii.gz /home/neuro/data/stress_combinebaseline_v2/stress_combinebaseline_v2/smooth/sub-010/model/fwhm_5.0sub-010/_modelestimate0/varcope1_instructions.nii.gz /home/neuro/data/stress_combinebaseline_v2/stress_combinebaseline_v2/smooth/sub-011/model/fwhm_5.0sub-011/_modelestimate0/varcope1_instructions.nii.gz /home/neuro/data/stress_combinebaseline_v2/stress_combinebaseline_v2/smooth/sub-012/model/fwhm_5.0sub-012/_modelestimate0/varcope1_instructions.nii.gz /home/neuro/data/stress_combinebaseline_v2/stress_combinebaseline_v2/smooth/sub-013/model/fwhm_5.0sub-013/_modelestimate0/varcope1_instructions.nii.gz /home/neuro/data/stress_combinebaseline_v2/stress_combinebaseline_v2/smooth/sub-015/model/fwhm_5.0sub-015/_modelestimate0/varcope1_instructions.nii.gz /home/neuro/data/stress_combinebaseline_v2/stress_combinebaseline_v2/smooth/sub-016/model/fwhm_5.0sub-016/_modelestimate0/varcope1_instructions.nii.gz /home/neuro/data/stress_combinebaseline_v2/stress_combinebaseline_v2/smooth/sub-019/model/fwhm_5.0sub-019/_modelestimate0/varcope1_instructions.nii.gz /home/neuro/data/stress_combinebaseline_v2/stress_combinebaseline_v2/smooth/sub-022/model/fwhm_5.0sub-022/_modelestimate0/varcope1_instructions.nii.gz /home/neuro/data/stress_combinebaseline_v2/stress_combinebaseline_v2/smooth/sub-023/model/fwhm_5.0sub-023/_modelestimate0/varcope1_instructions.nii.gz /home/neuro/data/stress_combinebaseline_v2/stress_combinebaseline_v2/smooth/sub-024/model/fwhm_5.0sub-024/_modelestimate0/varcope1_instructions.nii.gz /home/neuro/data/stress_combinebaseline_v2/stress_combinebaseline_v2/smooth/sub-025/model/fwhm_5.0sub-025/_modelestimate0/varcope1_instructions.nii.gz
        200228-22:58:20,824 nipype.workflow INFO:
        	 [Node] Finished "_varcopemerge0".
        200228-22:58:20,836 nipype.workflow INFO:
        	 [Node] Finished "fixedfx.varcopemerge".
        200228-22:58:20,837 nipype.workflow INFO:
        	 [Node] Setting-up "fixedfx.copemerge" in "/home/neuro/workdir/2020-02-28_simple_lvl2/cope1_instructions/fixedfx/copemerge".
        200228-22:58:21,39 nipype.workflow INFO:
        	 [Node] Setting-up "_copemerge0" in "/home/neuro/workdir/2020-02-28_simple_lvl2/cope1_instructions/fixedfx/copemerge/mapflow/_copemerge0".
        200228-22:58:21,195 nipype.workflow INFO:
        	 [Node] Running "_copemerge0" ("nipype.interfaces.fsl.utils.Merge"), a CommandLine Interface with command:
        fslmerge -t cope1_instructions_merged.nii.gz /home/neuro/data/stress_combinebaseline_v2/stress_combinebaseline_v2/smooth/sub-001/model/fwhm_5.0sub-001/_modelestimate0/cope1_instructions.nii.gz /home/neuro/data/stress_combinebaseline_v2/stress_combinebaseline_v2/smooth/sub-002/model/fwhm_5.0sub-002/_modelestimate0/cope1_instructions.nii.gz /home/neuro/data/stress_combinebaseline_v2/stress_combinebaseline_v2/smooth/sub-003/model/fwhm_5.0sub-003/_modelestimate0/cope1_instructions.nii.gz /home/neuro/data/stress_combinebaseline_v2/stress_combinebaseline_v2/smooth/sub-004/model/fwhm_5.0sub-004/_modelestimate0/cope1_instructions.nii.gz /home/neuro/data/stress_combinebaseline_v2/stress_combinebaseline_v2/smooth/sub-005/model/fwhm_5.0sub-005/_modelestimate0/cope1_instructions.nii.gz /home/neuro/data/stress_combinebaseline_v2/stress_combinebaseline_v2/smooth/sub-006/model/fwhm_5.0sub-006/_modelestimate0/cope1_instructions.nii.gz /home/neuro/data/stress_combinebaseline_v2/stress_combinebaseline_v2/smooth/sub-007/model/fwhm_5.0sub-007/_modelestimate0/cope1_instructions.nii.gz /home/neuro/data/stress_combinebaseline_v2/stress_combinebaseline_v2/smooth/sub-008/model/fwhm_5.0sub-008/_modelestimate0/cope1_instructions.nii.gz /home/neuro/data/stress_combinebaseline_v2/stress_combinebaseline_v2/smooth/sub-009/model/fwhm_5.0sub-009/_modelestimate0/cope1_instructions.nii.gz /home/neuro/data/stress_combinebaseline_v2/stress_combinebaseline_v2/smooth/sub-010/model/fwhm_5.0sub-010/_modelestimate0/cope1_instructions.nii.gz /home/neuro/data/stress_combinebaseline_v2/stress_combinebaseline_v2/smooth/sub-011/model/fwhm_5.0sub-011/_modelestimate0/cope1_instructions.nii.gz /home/neuro/data/stress_combinebaseline_v2/stress_combinebaseline_v2/smooth/sub-012/model/fwhm_5.0sub-012/_modelestimate0/cope1_instructions.nii.gz /home/neuro/data/stress_combinebaseline_v2/stress_combinebaseline_v2/smooth/sub-013/model/fwhm_5.0sub-013/_modelestimate0/cope1_instructions.nii.gz /home/neuro/data/stress_combinebaseline_v2/stress_combinebaseline_v2/smooth/sub-015/model/fwhm_5.0sub-015/_modelestimate0/cope1_instructions.nii.gz /home/neuro/data/stress_combinebaseline_v2/stress_combinebaseline_v2/smooth/sub-016/model/fwhm_5.0sub-016/_modelestimate0/cope1_instructions.nii.gz /home/neuro/data/stress_combinebaseline_v2/stress_combinebaseline_v2/smooth/sub-019/model/fwhm_5.0sub-019/_modelestimate0/cope1_instructions.nii.gz /home/neuro/data/stress_combinebaseline_v2/stress_combinebaseline_v2/smooth/sub-022/model/fwhm_5.0sub-022/_modelestimate0/cope1_instructions.nii.gz /home/neuro/data/stress_combinebaseline_v2/stress_combinebaseline_v2/smooth/sub-023/model/fwhm_5.0sub-023/_modelestimate0/cope1_instructions.nii.gz /home/neuro/data/stress_combinebaseline_v2/stress_combinebaseline_v2/smooth/sub-024/model/fwhm_5.0sub-024/_modelestimate0/cope1_instructions.nii.gz /home/neuro/data/stress_combinebaseline_v2/stress_combinebaseline_v2/smooth/sub-025/model/fwhm_5.0sub-025/_modelestimate0/cope1_instructions.nii.gz
        200228-22:58:37,714 nipype.workflow INFO:
        	 [Node] Finished "_copemerge0".
        200228-22:58:37,723 nipype.workflow INFO:
        	 [Node] Finished "fixedfx.copemerge".
        200228-22:58:37,724 nipype.workflow INFO:
        	 [Node] Setting-up "fixedfx.flameo" in "/home/neuro/workdir/2020-02-28_simple_lvl2/cope1_instructions/fixedfx/flameo".
        200228-22:58:37,797 nipype.workflow INFO:
        	 [Node] Setting-up "_flameo0" in "/home/neuro/workdir/2020-02-28_simple_lvl2/cope1_instructions/fixedfx/flameo/mapflow/_flameo0".
        200228-22:58:37,830 nipype.workflow INFO:
        	 [Node] Running "_flameo0" ("nipype.interfaces.fsl.model.FLAMEO"), a CommandLine Interface with command:
        flameo --copefile=/home/neuro/workdir/2020-02-28_simple_lvl2/cope1_instructions/fixedfx/copemerge/mapflow/_copemerge0/cope1_instructions_merged.nii.gz --covsplitfile=/home/neuro/workdir/2020-02-28_simple_lvl2/cope1_instructions/fixedfx/l2model/design.grp --designfile=/home/neuro/workdir/2020-02-28_simple_lvl2/cope1_instructions/fixedfx/l2model/design.mat --ld=stats --maskfile=/home/neuro/workdir/2020-02-28_simple_lvl2/mask.nii.gz --runmode=flame1 --tcontrastsfile=/home/neuro/workdir/2020-02-28_simple_lvl2/cope1_instructions/fixedfx/l2model/design.con --varcopefile=/home/neuro/workdir/2020-02-28_simple_lvl2/cope1_instructions/fixedfx/varcopemerge/mapflow/_varcopemerge0/varcope1_instructions_merged.nii.gz
        200228-22:58:38,87 nipype.interface INFO:
        	 stdout 2020-02-28T22:58:38.086954:Log directory is: stats
        200228-22:58:38,102 nipype.interface INFO:
        	 stdout 2020-02-28T22:58:38.102748:Setting up:
        200228-22:58:44,135 nipype.interface INFO:
        	 stdout 2020-02-28T22:58:44.129119:ntptsing=20.000000 
        200228-22:58:44,756 nipype.interface INFO:
        	 stdout 2020-02-28T22:58:44.129119:
        200228-22:58:44,761 nipype.interface INFO:
        	 stdout 2020-02-28T22:58:44.129119:evs_group=1.000000 
        200228-22:58:44,763 nipype.interface INFO:
        	 stdout 2020-02-28T22:58:44.129119:
        200228-22:58:44,933 nipype.interface INFO:
        	 stderr 2020-02-28T22:58:44.933059:Aborted
        200228-22:58:45,155 nipype.workflow WARNING:
        	 Storing result file without outputs
        200228-22:58:45,162 nipype.workflow WARNING:
        	 [Node] Error on "_flameo0" (/home/neuro/workdir/2020-02-28_simple_lvl2/cope1_instructions/fixedfx/flameo/mapflow/_flameo0)
        200228-22:58:45,177 nipype.workflow WARNING:
        	 [Node] Error on "fixedfx.flameo" (/home/neuro/workdir/2020-02-28_simple_lvl2/cope1_instructions/fixedfx/flameo)
        200228-22:58:45,187 nipype.workflow ERROR:
        	 Node flameo failed to run on host 9f1a15736eea.
        200228-22:58:45,190 nipype.workflow ERROR:
        	 Saving crash info to /home/neuro/scripts/jupyter_scrap/crash-20200228-225845-root-flameo-6a557025-6f3e-49cd-9c7e-6ed7c370010f.pklz
        Traceback (most recent call last):
          File "/opt/miniconda-latest/envs/py36/lib/python3.6/site-packages/nipype/pipeline/plugins/linear.py", line 48, in run
            node.run(updatehash=updatehash)
          File "/opt/miniconda-latest/envs/py36/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 479, in run
            result = self._run_interface(execute=True)
          File "/opt/miniconda-latest/envs/py36/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 1282, in _run_interface
            self.config['execution']['stop_on_first_crash'])))
          File "/opt/miniconda-latest/envs/py36/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 1204, in _collate_results
            (self.name, '\n'.join(msg)))
        Exception: Subnodes of node: flameo failed:
        Subnode 0 failed
        Error: Traceback (most recent call last):

          File "/opt/miniconda-latest/envs/py36/lib/python3.6/site-packages/nipype/pipeline/engine/utils.py", line 103, in nodelist_runner
            result = node.run(updatehash=updatehash)

          File "/opt/miniconda-latest/envs/py36/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 479, in run
            result = self._run_interface(execute=True)

          File "/opt/miniconda-latest/envs/py36/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 569, in _run_interface
            return self._run_command(execute)

          File "/opt/miniconda-latest/envs/py36/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 662, in _run_command
            result = self._interface.run(cwd=outdir)

          File "/opt/miniconda-latest/envs/py36/lib/python3.6/site-packages/nipype/interfaces/base/core.py", line 376, in run
            runtime = self._run_interface(runtime)

          File "/opt/miniconda-latest/envs/py36/lib/python3.6/site-packages/nipype/interfaces/fsl/model.py", line 1072, in _run_interface
            return super(FLAMEO, self)._run_interface(runtime)

          File "/opt/miniconda-latest/envs/py36/lib/python3.6/site-packages/nipype/interfaces/base/core.py", line 761, in _run_interface
            self.raise_exception(runtime)

          File "/opt/miniconda-latest/envs/py36/lib/python3.6/site-packages/nipype/interfaces/base/core.py", line 698, in raise_exception
            ).format(**runtime.dictcopy()))

        RuntimeError: Command:
        flameo --copefile=/home/neuro/workdir/2020-02-28_simple_lvl2/cope1_instructions/fixedfx/copemerge/mapflow/_copemerge0/cope1_instructions_merged.nii.gz --covsplitfile=/home/neuro/workdir/2020-02-28_simple_lvl2/cope1_instructions/fixedfx/l2model/design.grp --designfile=/home/neuro/workdir/2020-02-28_simple_lvl2/cope1_instructions/fixedfx/l2model/design.mat --ld=stats --maskfile=/home/neuro/workdir/2020-02-28_simple_lvl2/mask.nii.gz --runmode=flame1 --tcontrastsfile=/home/neuro/workdir/2020-02-28_simple_lvl2/cope1_instructions/fixedfx/l2model/design.con --varcopefile=/home/neuro/workdir/2020-02-28_simple_lvl2/cope1_instructions/fixedfx/varcopemerge/mapflow/_varcopemerge0/varcope1_instructions_merged.nii.gz
        Standard output:
        Log directory is: stats
        Setting up:
        ntptsing=20.000000 

        evs_group=1.000000 

        Standard error:
        Aborted
        Return code: 134



        When creating this crashfile, the results file corresponding
        to the node could not be found.
        200228-22:58:45,195 nipype.workflow INFO:
        	 ***********************************
        200228-22:58:45,197 nipype.workflow ERROR:
        	 could not run node: fixedfx.flameo
        200228-22:58:45,198 nipype.workflow INFO:
        	 crashfile: /home/neuro/scripts/jupyter_scrap/crash-20200228-225845-root-flameo-6a557025-6f3e-49cd-9c7e-6ed7c370010f.pklz
        200228-22:58:45,199 nipype.workflow INFO:
        	 ***********************************
        ---------------------------------------------------------------------------
        RuntimeError                              Traceback (most recent call last)
        <ipython-input-19-32ca7d40d850> in <module>
             58     fixed_fx.inputs.l2model.num_copes = len(cope_list)
             59 
        ---> 60     fixed_fx.run()
             61 

        /opt/miniconda-latest/envs/py36/lib/python3.6/site-packages/nipype/pipeline/engine/workflows.py in run(self, plugin, plugin_args, updatehash)
            597         if str2bool(self.config['execution']['create_report']):
            598             self._write_report_info(self.base_dir, self.name, execgraph)
        --> 599         runner.run(execgraph, updatehash=updatehash, config=self.config)
            600         datestr = datetime.utcnow().strftime('%Y%m%dT%H%M%S')
            601         if str2bool(self.config['execution']['write_provenance']):

        /opt/miniconda-latest/envs/py36/lib/python3.6/site-packages/nipype/pipeline/plugins/linear.py in run(self, graph, config, updatehash)
             69 
             70         os.chdir(old_wd)  # Return wherever we were before
        ---> 71         report_nodes_not_run(notrun)

        /opt/miniconda-latest/envs/py36/lib/python3.6/site-packages/nipype/pipeline/plugins/tools.py in report_nodes_not_run(notrun)
             93                 logger.debug(subnode._id)
             94         logger.info("***********************************")
        ---> 95         raise RuntimeError(('Workflow did not execute cleanly. '
             96                             'Check log for details'))
             97 

        RuntimeError: Workflow did not execute cleanly. Check log for details

Both of these look more like an issue with how FSL behaves inside the container than a Nipype problem.

@jordan_theriault and @nataliavelez - could you both please share your container recipe and container?

Here is my container recipe! I’m not sure how best to share the container, though; it’s a pretty large file.

For what it’s worth, I got around this issue in kind of a hacky way: I modified my modeling script so that it saves the inputs to FLAMEO to outputspec, and then I ran FLAMEO on those inputs through a separate bash script. For some reason, this works fine, but running FLAMEO within my pipeline does not. It’s not ideal, though.
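
Roughly, the separate script just calls flameo directly on the files the workflow saves to outputspec, something like this (the paths are the ones from my earlier command, collapsed into a variable for readability):

#!/bin/bash
# Run FLAMEO outside the nipype pipeline, on the merged copes/varcopes and design files
# exported by the workflow (paths below are from the sub-06 example above).
L1DIR=/scratch/groups/hyo/swist_cache/l1_model/task-tomloc_model-localizer_sub-06/l1
flameo \
    --copefile=$L1DIR/copemerge/mapflow/_copemerge0/cope1_merged.nii.gz \
    --varcopefile=$L1DIR/varcopemerge/mapflow/_varcopemerge0/varcope1_merged.nii.gz \
    --designfile=$L1DIR/l2model/design.mat \
    --tcontrastsfile=$L1DIR/l2model/design.con \
    --covsplitfile=$L1DIR/l2model/design.grp \
    --maskfile=/scratch/groups/hyo/SwiSt/BIDS_data/derivatives/fmriprep/sub-06/func/sub-06_task-tomloc_run-01_space-MNI152NLin2009cAsym_desc-brain_mask.nii.gz \
    --runmode=fe \
    --ld=$L1DIR/stats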

@satra Thanks for the response!

My container is built from Neurodocker using the command below, and the full container is here.

The container should be available as docker pull jtheriaultpsych/jtnipyutil:latest

It’s a catch-all docker image for imaging, so quite large!

docker run --rm kaczmarj/neurodocker:master generate docker \
--base debian:stretch --pkg-manager apt \
--install gcc g++ graphviz tree \
          git vim emacs-nox nano less ncdu \
          tig openjdk-8-jdk \
--run "export JCC_JDK=/usr/lib/jvm/java-8-openjdk-amd64" \
--fsl version=5.0.11 \
--ants version=2.2.0 \
--convert3d version=1.0.0 \
--freesurfer version=6.0.0-min \
--afni version=latest \
--spm version=r7219 \
--miniconda create_env=py36 \
  conda_install="python=3.6 jupyter jupyterlab jupyter_contrib_nbextensions
                 traits pandas matplotlib scikit-learn==0.20.3 seaborn" \
  pip_install="https://github.com/nipy/nipype/tarball/1.2.3
               https://github.com/INCF/pybids/tarball/0.9.4
               nltools nilearn datalad[full] nipy duecredit niwidgets
               mne deepdish hypertools ipywidgets pynv six nibabel joblib==0.11
               git+https://github.com/poldracklab/niworkflows.git" \
  activate=True \
--copy jtnipyutil /opt/miniconda-latest/envs/py36/lib/python3.6/site-packages/jtnipyutil
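
For completeness, the command above just prints a Dockerfile to stdout; I save it and build the image roughly like this (only the FSL layer is shown as a placeholder for the full option list above, and the tag is just an example):

docker run --rm kaczmarj/neurodocker:master generate docker \
    --base debian:stretch --pkg-manager apt \
    --fsl version=5.0.11 \
    > Dockerfile
docker build --tag jtnipyutil:latest .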

Hi everyone, did anyone find a solution?
I use nipype inside a Docker container and have the same issue. When I use FSL flameo with nipype OUTSIDE the Docker container, it works fine, but INSIDE the container I get the error below (both through nipype and when calling FSL directly from the container’s terminal):
Log directory is: stats
Setting up:
ntptsing=4.000000

evs_group=1.000000

Aborted

My Docker container’s recipe is:

 docker run --rm repronim/neurodocker:0.7.0 generate docker \
            --base debian:stretch --pkg-manager apt \
            --install git \
            --afni version=latest method=binaries \
            --fsl version=6.0.3 \
            --spm12 version=r7771 method=binaries \
            --miniconda create_env=neuro \
                        conda_install="python=3.8 traits jupyter nilearn graphviz" \
                        pip_install="nipype matplotlib"