fMRIPrep error in multiple workflows

Hi,
I have recently been trying to get fMRIPrep to work on my university’s HPC through a Singularity container.
I have checked that the Singularity container works, and fMRIPrep does run on the HPC. However, when I run my automated bash script for 3 participants with fMRI data, I receive an exitstatus=1 termination.
When I read the error file, it states that there were errors in the following nodes:
200205-17:16:49,495 workflow ERROR:
could not run node: fmriprep_wf.single_subject_control0647_wf.func_preproc_task_rest_wf.bold_split
200205-17:16:49,496 workflow ERROR:
could not run node: fmriprep_wf.single_subject_control0647_wf.func_preproc_task_rest_wf.bold_reference_wf.gen_ref
200205-17:16:49,496 workflow ERROR:
could not run node: fmriprep_wf.single_subject_control0647_wf.anat_preproc_wf.skullstrip_ants_wf.t1_skull_strip
200205-17:16:49,980 cli WARNING:

Errors occurred while generating reports for participants: control0647 (3).
I have attached the full error file. I have made sure the error is not related to CPU limits, as I have requested 32 CPUs per participant, with 64 GB of memory.
Could someone please help me understand the possible root of this error that keeps occurring?
I did originally think it was because I was not requesting enough CPUs and RAM; however, I am no longer convinced that is the issue, because the processes are not reaching full CPU or memory capacity.
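For context, my setup submits one PBS job per participant and passes the participant label through the job name. A simplified sketch of the loop is below (the participant labels and job-script name here are illustrative, not my exact script):

#!/bin/bash
# Submit one fMRIPrep job per participant; the job script reads the
# participant label back out of $PBS_JOBNAME.
for subject in control0647 control1170 control9999; do
  qsub -N "$subject" -l select=1:ncpus=32:mem=64gb run_fmriprep.pbs
done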

Thank you,
Natasha

This is the full error message:
200205-16:59:20,497 workflow IMPORTANT:

Running fMRIPREP version 1.1.1:
  * BIDS dataset path: /project/mri/RBD/RBD_data.
  * Participant list: ['control0647'].
  * Run identifier: 20200205-165920_ed60899c-5b8e-4006-a7e5-4d5016867b11.

200205-16:59:20,877 workflow IMPORTANT:
Creating bold processing workflow for "/project/mri/RBD/RBD_data/sub-control0647/func/sub-control0647_task-rest_bold.nii.gz" (0.02 GB / 140 TRs). Memory resampled/largemem=0.10/0.13 GB.
200205-16:59:21,64 workflow WARNING:
SDC: no fieldmaps found or they were ignored (/project/mri/RBD/RBD_data/sub-control0647/func/sub-control0647_task-rest_bold.nii.gz).
200205-16:59:28,791 workflow INFO:
[Node] Setting-up "fmriprep_wf.single_subject_control0647_wf.func_preproc_task_rest_wf.bold_split" in "/project/mri/RBD/output/fmriprep_wf/single_subject_control0647_wf/func_preproc_task_rest_wf/bold_split".
200205-16:59:28,800 workflow INFO:
[Node] Running "bold_split" ("nipype.interfaces.fsl.utils.Split"), a CommandLine Interface with command:
fslsplit /project/mri/RBD/RBD_data/sub-control0647/func/sub-control0647_task-rest_bold.nii.gz -t
200205-16:59:30,468 workflow INFO:
[Node] Setting-up "fmriprep_wf.single_subject_control0647_wf.func_preproc_task_rest_wf.bold_reference_wf.gen_ref" in "/project/mri/RBD/output/fmriprep_wf/single_subject_control0647_wf/func_preproc_task_rest_wf/bold_reference_wf/gen_ref".
200205-16:59:31,365 workflow INFO:
[Node] Running "gen_ref" ("niworkflows.interfaces.registration.EstimateReferenceImage")
200205-16:59:35,593 interface INFO:
stderr 2020-02-05T16:59:35.593213:++ 3dvolreg: AFNI version=Debian-16.2.07~dfsg.1-5~nd16.04+1 (Jun 12 2017) [64-bit]
200205-16:59:35,593 interface INFO:
stderr 2020-02-05T16:59:35.593213:++ Authored by: RW Cox
200205-16:59:35,613 interface INFO:
stderr 2020-02-05T16:59:35.613206:++ Coarse del was 10, replaced with 4
200205-16:59:45,717 workflow WARNING:
[Node] Error on "fmriprep_wf.single_subject_control0647_wf.func_preproc_task_rest_wf.bold_split" (/project/mri/RBD/output/fmriprep_wf/single_subject_control0647_wf/func_preproc_task_rest_wf/bold_split)
200205-16:59:46,457 workflow ERROR:
Node bold_split failed to run on host hpc159.
200205-16:59:46,457 workflow ERROR:
Saving crash info to /project/mri/RBD/output/fmriprep/sub-control0647/log/20200205-165920_ed60899c-5b8e-4006-a7e5-4d5016867b11/crash-20200205-165946-ntay2251-bold_split-842fec56-fc39-400f-b781-d0f3988de2c2.txt
Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/plugins/multiproc.py", line 68, in run_node
    result['result'] = node.run(updatehash=updatehash)
  File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 480, in run
    result = self._run_interface(execute=True)
  File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 564, in _run_interface
    return self._run_command(execute)
  File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 662, in _run_command
    _save_resultfile(result, outdir, self.name)
  File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/engine/utils.py", line 244, in save_resultfile
    result.outputs.set(**modify_paths(outputs, relative=True, basedir=cwd))
  File "/usr/local/miniconda/lib/python3.6/site-packages/traits/util/deprecated.py", line 32, in wrapper
    return fn(*args, **kw)
  File "/usr/local/miniconda/lib/python3.6/site-packages/traits/has_traits.py", line 1551, in set
    trait_change_notify=trait_change_notify, **traits)
  File "/usr/local/miniconda/lib/python3.6/site-packages/traits/has_traits.py", line 1543, in trait_set
    setattr( self, name, value )
  File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/interfaces/base/traits_extension.py", line 341, in validate
    value = super(MultiObject, self).validate(object, name, newvalue)
  File "/usr/local/miniconda/lib/python3.6/site-packages/traits/trait_types.py", line 2336, in validate
    return TraitListObject( self, object, name, value )
  File "/usr/local/miniconda/lib/python3.6/site-packages/traits/trait_handlers.py", line 2313, in __init__
    raise excp
  File "/usr/local/miniconda/lib/python3.6/site-packages/traits/trait_handlers.py", line 2305, in __init__
    value = [ validate( object, name, val ) for val in value ]
  File "/usr/local/miniconda/lib/python3.6/site-packages/traits/trait_handlers.py", line 2305, in <listcomp>
    value = [ validate( object, name, val ) for val in value ]
  File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/interfaces/base/traits_extension.py", line 112, in validate
    self.info_text, value))
traits.trait_errors.TraitError: The trait 'out_files' of a SplitOutputSpec instance is an existing file name, but the path '/project/RDS-FMH-mri-RW/RBD/output/fmriprep_wf/single_subject_control0647_wf/func_preproc_task_rest_wf/bold_split/vol0000.nii.gz' does not exist.

200205-16:59:46,571 workflow INFO:
[Node] Setting-up "fmriprep_wf.single_subject_control0647_wf.anat_preproc_wf.skullstrip_ants_wf.t1_skull_strip" in "/project/mri/RBD/output/fmriprep_wf/single_subject_control0647_wf/anat_preproc_wf/skullstrip_ants_wf/t1_skull_strip".
200205-16:59:46,582 workflow INFO:
[Node] Running "t1_skull_strip" ("nipype.interfaces.ants.segmentation.BrainExtraction"), a CommandLine Interface with command:
antsBrainExtraction.sh -a /project/mri/RBD/RBD_data/sub-control0647/anat/sub-control0647_T1w.nii.gz -m /project/mri/RBD/output/fmriprep_wf/single_subject_control0647_wf/anat_preproc_wf/skullstrip_ants_wf/t1_skull_strip/T_template0_BrainCerebellumProbabilityMask.nii.gz -e /niworkflows_data/ants_oasis_template_ras/T_template0.nii.gz -d 3 -f /niworkflows_data/ants_oasis_template_ras/T_template0_BrainCerebellumRegistrationMask.nii.gz -s nii.gz -k 1 -o highres001_ -q 1
200205-16:59:57,176 interface INFO:
stderr 2020-02-05T16:59:57.176368:++ Max displacement in automask = 0.66 (mm) at sub-brick 18
200205-16:59:57,176 interface INFO:
stderr 2020-02-05T16:59:57.176368:++ Max delta displ in automask = 0.33 (mm) at sub-brick 2
200205-16:59:58,474 workflow WARNING:
[Node] Error on "fmriprep_wf.single_subject_control0647_wf.func_preproc_task_rest_wf.bold_reference_wf.gen_ref" (/project/mri/RBD/output/fmriprep_wf/single_subject_control0647_wf/func_preproc_task_rest_wf/bold_reference_wf/gen_ref)
200205-17:00:00,466 workflow ERROR:
Node gen_ref failed to run on host hpc159.
200205-17:00:00,466 workflow ERROR:
Saving crash info to /project/mri/RBD/output/fmriprep/sub-control0647/log/20200205-165920_ed60899c-5b8e-4006-a7e5-4d5016867b11/crash-20200205-170000-ntay2251-gen_ref-d70bae24-e45e-47d9-a92f-60b743080211.txt
Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/plugins/multiproc.py", line 68, in run_node
    result['result'] = node.run(updatehash=updatehash)
  File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 480, in run
    result = self._run_interface(execute=True)
  File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 564, in _run_interface
    return self._run_command(execute)
  File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 662, in _run_command
    _save_resultfile(result, outdir, self.name)
  File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/engine/utils.py", line 244, in save_resultfile
    result.outputs.set(**modify_paths(outputs, relative=True, basedir=cwd))
  File "/usr/local/miniconda/lib/python3.6/site-packages/traits/util/deprecated.py", line 32, in wrapper
    return fn(*args, **kw)
  File "/usr/local/miniconda/lib/python3.6/site-packages/traits/has_traits.py", line 1551, in set
    trait_change_notify=trait_change_notify, **traits)
  File "/usr/local/miniconda/lib/python3.6/site-packages/traits/has_traits.py", line 1543, in trait_set
    setattr( self, name, value )
  File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/interfaces/base/traits_extension.py", line 112, in validate
    self.info_text, value))
traits.trait_errors.TraitError: The trait 'ref_image' of an EstimateReferenceImageOutputSpec instance is an existing file name, but the path '/project/RDS-FMH-mri-RW/RBD/output/fmriprep_wf/single_subject_control0647_wf/func_preproc_task_rest_wf/bold_reference_wf/gen_ref/ref_image.nii.gz' does not exist.

200205-17:16:45,758 workflow WARNING:
[Node] Error on "fmriprep_wf.single_subject_control0647_wf.anat_preproc_wf.skullstrip_ants_wf.t1_skull_strip" (/project/mri/RBD/output/fmriprep_wf/single_subject_control0647_wf/anat_preproc_wf/skullstrip_ants_wf/t1_skull_strip)
200205-17:16:47,495 workflow ERROR:
Node t1_skull_strip failed to run on host hpc159.
200205-17:16:47,495 workflow ERROR:
Saving crash info to /project/mri/RBD/output/fmriprep/sub-control0647/log/20200205-165920_ed60899c-5b8e-4006-a7e5-4d5016867b11/crash-20200205-171647-ntay2251-t1_skull_strip-1af39ee3-9959-4ac0-8927-71c1949b5039.txt
Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/plugins/multiproc.py", line 68, in run_node
    result['result'] = node.run(updatehash=updatehash)
  File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 480, in run
    result = self._run_interface(execute=True)
  File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 564, in _run_interface
    return self._run_command(execute)
  File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 662, in _run_command
    _save_resultfile(result, outdir, self.name)
  File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/engine/utils.py", line 244, in save_resultfile
    result.outputs.set(**modify_paths(outputs, relative=True, basedir=cwd))
  File "/usr/local/miniconda/lib/python3.6/site-packages/traits/util/deprecated.py", line 32, in wrapper
    return fn(*args, **kw)
  File "/usr/local/miniconda/lib/python3.6/site-packages/traits/has_traits.py", line 1551, in set
    trait_change_notify=trait_change_notify, **traits)
  File "/usr/local/miniconda/lib/python3.6/site-packages/traits/has_traits.py", line 1543, in trait_set
    setattr( self, name, value )
  File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/interfaces/base/traits_extension.py", line 112, in validate
    self.info_text, value))
traits.trait_errors.TraitError: The trait 'BrainExtractionBrain' of a BrainExtractionOutputSpec instance is an existing file name, but the path '/project/RDS-FMH-mri-RW/RBD/output/fmriprep_wf/single_subject_control0647_wf/anat_preproc_wf/skullstrip_ants_wf/t1_skull_strip/highres001_BrainExtractionBrain.nii.gz' does not exist.

200205-17:16:49,495 workflow ERROR:
could not run node: fmriprep_wf.single_subject_control0647_wf.func_preproc_task_rest_wf.bold_split
200205-17:16:49,496 workflow ERROR:
could not run node: fmriprep_wf.single_subject_control0647_wf.func_preproc_task_rest_wf.bold_reference_wf.gen_ref
200205-17:16:49,496 workflow ERROR:
could not run node: fmriprep_wf.single_subject_control0647_wf.anat_preproc_wf.skullstrip_ants_wf.t1_skull_strip
200205-17:16:49,980 cli WARNING:
Errors occurred while generating reports for participants: control0647 (3).

Hi @NatashaLTaylor,

Welcome to Neurostars!

You appear to be using fMRIPrep version 1.1.1, which does have this issue.

Would it be possible to upgrade your fMRIPrep installation to 1.5.8? There have been a number of changes and fixes since then, including making the output more compatible with the BIDS Derivatives specification.

Let me know if you have additional questions!
James

Hi James,

Thank you. I will try to upgrade fMRIPrep to 1.5.8, although it was quite difficult to get my original Singularity container compatible with the HPC.

Thank you,
Natasha

If your HPC admins let you use the singularity command and your Singularity version is greater than 2.5 (check with singularity --version), you should be able to create the container with this command:

singularity build /my_images/fmriprep-1.5.8.simg docker://poldracklab/fmriprep:1.5.8
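Once the build finishes, a quick sanity check (using the same image path as above) is to ask the container for its version, which should report 1.5.8:

singularity exec /my_images/fmriprep-1.5.8.simg fmriprep --version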

If only the HPC admins are allowed to make containers (I’m sorry!), you may wish to start a conversation about letting users build containers themselves, which would save the admins the time spent tracking and creating containers. My university and several others I know of allow users to make their own containers.

-James

Thank you. I am running that command on my HPC now.
It seems to be letting me create the container, so hopefully it works.
Thanks again for your help!

Natasha

Hi all,

Just an update: I have a new Singularity container built with fMRIPrep 1.5.8, but I am still encountering errors in multiple workflows. Does anyone have suggestions about what could be happening? I have checked that my dataset is BIDS compliant and that the Singularity container is set up appropriately for my HPC server.
Here is the error message:

200224-08:48:14,216 nipype.workflow INFO:
[Node] Setting-up "fmriprep_wf.single_subject_control1170_wf.func_preproc_task_rest_wf.bold_split" in "/project/RDS-FMH-mri-RW/RBD/output/fmriprep_wf/single_subject_control1170_wf/func_pre$
200224-08:48:14,264 nipype.workflow INFO:
[Node] Running "bold_split" ("nipype.interfaces.fsl.utils.Split"), a CommandLine Interface with command:
fslsplit /project/RDS-FMH-mri-RW/RBD/RBD_data/BIDS_RewardRBD/sub-control1170/func/sub-control1170_task-rest_bold.nii.gz -t
200224-08:48:14,303 nipype.workflow WARNING:
Storing result file without outputs
200224-08:48:14,323 nipype.workflow WARNING:
[Node] Error on "fmriprep_wf.single_subject_control1170_wf.func_preproc_task_rest_wf.bold_split" (/project/RDS-FMH-mri-RW/RBD/output/fmriprep_wf/single_subject_control1170_wf/func_preproc_$
200224-08:48:14,763 nipype.workflow INFO:
[Node] Setting-up "fmriprep_wf.single_subject_control1170_wf.func_preproc_task_rest_wf.bold_hmc_wf.mcflirt" in "/project/RDS-FMH-mri-RW/RBD/output/fmriprep_wf/single_subject_control1170_wf$
200224-08:48:14,781 nipype.workflow INFO:
[Node] Running "mcflirt" ("nipype.interfaces.fsl.preprocess.MCFLIRT"), a CommandLine Interface with command:
mcflirt -in /project/RDS-FMH-mri-RW/RBD/RBD_data/BIDS_RewardRBD/sub-control1170/func/sub-control1170_task-rest_bold.nii.gz -out /project/RDS-FMH-mri-RW/RBD/output/fmriprep_wf/single_subject_control$
200224-08:48:14,838 nipype.workflow WARNING:
Storing result file without outputs
200224-08:48:14,840 nipype.workflow WARNING:
[Node] Error on "fmriprep_wf.single_subject_control1170_wf.func_preproc_task_rest_wf.bold_hmc_wf.mcflirt" (/project/RDS-FMH-mri-RW/RBD/output/fmriprep_wf/single_subject_control1170_wf/func$
200224-08:48:16,248 nipype.workflow ERROR:
Node bold_split failed to run on host hpc148.

The standard error for one node states:
fslsplit: error while loading shared libraries: libnewimage.so: cannot open shared object file

crash-20200224-084402-ntay2251-bold_split-03e0adab-1adf-4163-aa53-9cb1c87ae82c.txt (1.9 KB)

Hi Natasha,

Glad to hear you got the singularity container built.
Your new issue “smells” like your container is trying to use a local FSL installation instead of the one installed in the container.

Could you share the singularity command you used to run fmriprep?

My current suggestion (assuming you haven’t already tried this) is to add --cleanenv to your singularity command.
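If you want to confirm whether the host environment is leaking in before rerunning the whole pipeline, something along these lines should show the difference (assuming you are allowed to run singularity exec, and using your image name from the build command above):

# Without --cleanenv: host FSL variables (e.g. FSLDIR, LD_LIBRARY_PATH) may appear
singularity exec fmriprep-1.5.8.simg env | grep -iE 'fsl|ld_library'
# With --cleanenv: only the container's own FSL environment should be visible
singularity exec --cleanenv fmriprep-1.5.8.simg env | grep -iE 'fsl|ld_library'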

Hope that helps!
James

Hi James,
I tried to include --cleanev in my command, but it said that was an invalid flag.
This is my command line for singularity:
singularity run --cleanev fmriprep-1.5.8.simg /project/mri/RBD/RBD_data /project/mri/RBD/output/ participant --participant_label $PBS_JOBNAME --n_cpus 16 --low-mem --task-id rest --ignore slicetiming --ignore fieldmaps --fs-no-reconall --use-aroma --fs-license-file /home/ntay2251/license.txt -w /project/mri/RBD/output

Oops, typo there: it should be --cleanenv. I’ll try running that again, haha, sorry!
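For reference, the corrected command is the same as above with only the flag fixed:

singularity run --cleanenv fmriprep-1.5.8.simg /project/mri/RBD/RBD_data /project/mri/RBD/output/ participant --participant_label $PBS_JOBNAME --n_cpus 16 --low-mem --task-id rest --ignore slicetiming --ignore fieldmaps --fs-no-reconall --use-aroma --fs-license-file /home/ntay2251/license.txt -w /project/mri/RBD/output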
