fMRIPrep v1.4.1rc1 bold_split error

Hello,
I am trying to give the latest version of fMRIPrep a spin and have hit an error early in processing. I've pasted one of the crash log outputs below; I get the same error for each functional run. I'm running fMRIPrep in a Singularity container, using the image I pulled today (v1.4.1rc1). I've processed the same data successfully with earlier versions.


Node: fmriprep_wf.single_subject_01_wf.func_preproc_task_cardguess_run_02_wf.bold_split
Working directory: /gpfs/group/sjw42/default/ASH/SEMA/work/fmriprep_wf/single_subject_01_wf/func_preproc_task_cardguess_run_02_wf/bold_split

Node inputs:

args =
dimension = t
environ = {'FSLOUTPUTTYPE': 'NIFTI_GZ'}
in_file =
out_base_name =
output_type = NIFTI_GZ

Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 69, in run_node
    result['result'] = node.run(updatehash=updatehash)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 410, in run
    cached, updated = self.is_cached()
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 296, in is_cached
    hashed_inputs, hashvalue = self._get_hashval()
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 493, in _get_hashval
    self._get_inputs()
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 520, in _get_inputs
    outputs = loadpkl(results_file).outputs
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/utils/filemanip.py", line 668, in loadpkl
    raise e
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/utils/filemanip.py", line 659, in loadpkl
    unpkl = pickle.load(pkl_file)
AttributeError: Can't get attribute 'CopyXFormOutputSpec' on <module 'niworkflows.interfaces.utils' from '/usr/local/miniconda/lib/python3.7/site-packages/niworkflows/interfaces/utils.py'>

Hi @Steve_Wilson, are you reusing an old work directory?

Otherwise, an environment variable from the host might be sneaking into the container.
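If it helps, here is a minimal sketch of a fresh run; the paths and image name are placeholders, not taken from this thread, so adjust them to your setup. It starts from an empty work directory, so no stale nipype result pickles written by an older niworkflows get reloaded, and Singularity's --cleanenv flag keeps host environment variables out of the container:

    # start from a fresh, empty work directory so no stale nipype pickles are loaded
    rm -rf /scratch/fmriprep_work && mkdir -p /scratch/fmriprep_work

    # --cleanenv blocks host environment variables from leaking into the container;
    # -w points fMRIPrep at the new, empty work directory
    singularity run --cleanenv \
        -B /data/bids:/data/bids -B /scratch:/scratch \
        fmriprep-1.4.1rc1.simg \
        /data/bids /data/bids/derivatives participant \
        --participant-label 01 \
        -w /scratch/fmriprep_work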

Thanks, @oesteban - that was the problem! I thought I had cleared out the previous work directory, but I was mistaken. With a clean work directory, everything now seems to be running fine. Thanks!
