Nipype.Node._parameterization_dir(param) errors for TRACULA pipeline

Hello, Nipype experts:
Recently I wanted to use the TRACULA pipeline in nipype, and luckily I found that Satra has already done this; this is the link:
So I adapted his script for my data, but I keep getting an error saying:
OSError: Could not create /Users/junhao.wen/test/test-tracula/trac-all_workflow/

So I dug into the code and found that the problem comes from the private function nipype.Node._parameterization_dir(param):

Private functions

def _parameterization_dir(self, param):
    """
    Returns the directory name for the given parameterization string as follows:
        - If the parameterization is longer than 32 characters, then
          return the SHA-1 hex digest.
        - Otherwise, return the parameterization unchanged.
    """
    if len(param) > 32:
        return sha1(param).hexdigest()
    return param
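The rule is easy to reproduce outside nipype. Here is a standalone sketch of the same behavior (the helper name is mine, and I encode the string before hashing, which nipype's Python-2-era code did not need):

```python
from hashlib import sha1

def parameterization_dir(param):
    # Same rule as Node._parameterization_dir: parameterizations longer
    # than 32 characters are replaced by their 40-char SHA-1 hex digest.
    if len(param) > 32:
        return sha1(param.encode()).hexdigest()
    return param

print(parameterization_dir('_subject_id_01'))                # short: unchanged
print(len(parameterization_dir('_subject_id_' + 'x' * 40)))  # long: 40
```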

It seems that with the nipype Function interface, when we define the working directory, the node's parameterization is too long, so it cannot create the folder, and the path is a mess… In Satra's script, he uses a nipype Function interface to call a function, for example run_prep, and inside run_prep he creates another Node that calls the TRACULA command line.

Before, I tried wrapping another interface and it worked well, but this time, when I debug this script, here is the output of a sub-task of my workflow for the parameterization:

> trac_all_worlflow.trac-prep.a009.parameterization=
> <type 'list'>: ['']

Any suggestions will be appreciated:)

Happy Christmas


Hi @Junhao_Wen, this may be solved by changing your nipype config to

from nipype import config
cfg = dict(execution={'parameterize_dirs': False})
config.update_config(cfg)  # apply the setting, otherwise the dict has no effect

This will shorten your parameterization to its hash


Thanks for your reply. I changed the config for my pipeline (I put the config into a Function node, not at the beginning of my script), but I still get the same errors. Here is a screenshot of the debug session in PyCharm:

I think len(param) is still too big…

Here is the code for the Function node that creates the trac-all -prep pipeline:

def run_prep(CAPS_path, subject_id, BIDS_dir, BIDS_id_nii, BIDS_id_bvec, BIDS_id_bval, BIDS_id_mfmap, BIDS_id_pfmap,
             template, tracula_dir):
    """
    This pipeline runs trac-all for every subject with a separate config file.
    :param CAPS_id: the subject_id in CAPS version, corresponding to the #subjectlist in the config file
    :param template: the template for the config file
    :param data_dir: the BIDS directory containing the dwi images
    """
    from nipype.interfaces.base import CommandLine
    from nipype.pipeline.engine import Node
    from string import Template
    import os
    from nipype import config
    # config.enable_provenance()
    cfg = dict(execution={'parameterize_dirs': False})  # this will shorten the parameterization to its hash
    config.update_config(cfg)  # apply the config; without this the dict above has no effect
    CAPS_id = os.path.join(CAPS_path.split('subjects')[1], subject_id)
    with open(template, 'rt') as fp:
        tpl = Template(fp.read())
    out = tpl.substitute(participant_id=CAPS_id, dcmroot_path=BIDS_dir, dmri_image=BIDS_id_nii, bvec_file=BIDS_id_bvec,
                         bval_file=BIDS_id_bval, mfmap_image=BIDS_id_mfmap, pfmap_image=BIDS_id_pfmap)
    config_file = os.path.join(tracula_dir, 'config_%s' % CAPS_id)
    with open(config_file, 'wt') as fp:
        fp.write(out)  # rewrite the config file for this subject
    trac_all_prep = Node(interface=CommandLine('trac-all -prep -c %s' % config_file, terminal_output='allatonce'),
                         name='trac-prep-%s' % CAPS_id)
    trac_all_prep.base_dir = tracula_dir
    trac_all_prep.parameterization = False
    trac_all_prep.run()
    return CAPS_id, config_file
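The config-file rewriting in run_prep relies on Python's string.Template. A minimal standalone sketch (the placeholder names are borrowed from run_prep above, but the template text and values are made up, not a real TRACULA config):

```python
from string import Template

# Two of the placeholders used by run_prep above; $participant_id and
# $dcmroot_path are replaced by the keyword arguments to substitute().
tpl = Template('subjlist = ($participant_id)\ndcmroot = $dcmroot_path\n')
out = tpl.substitute(participant_id='sub-01', dcmroot_path='/data/bids')
print(out)
```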

Thanks, I solved the problem by putting the config in my workflow script as well as in the Function node :)

But can you clarify the nipype config a little more? The official page does not explain it very clearly for me…

from nipype import config
cfg = dict(execution={'parameterize_dirs': False})

This sets parameterize_dirs to False, I got it.

How about this:


What is the function enable_provenance() for?


Sure - the config is flexible in that you can tweak things in your nipype script. It's essentially a dictionary with two levels of configuration, logging and execution. Every option (including defaults) is discussed in more detail here.

You can set workflow specific configs:

import nipype.pipeline.engine as pe

myworkflow = pe.Workflow(name='myworkflow')  # Workflow requires a name
myworkflow.config['execution'] = {'stop_on_first_rerun': 'True',
                                  'hash_method': 'timestamp'}

as well as global configs as you did.
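The two configuration levels mentioned above can be pictured as a plain two-level dictionary (the option values here are only illustrative, not a recommendation):

```python
# Sketch of nipype's two-level config layout: 'logging' and 'execution'.
cfg = {
    'logging': {'workflow_level': 'INFO'},
    'execution': {'parameterize_dirs': False,
                  'hash_method': 'timestamp'},
}
print(sorted(cfg))  # the two top-level sections
```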

> What is the function enable_provenance() for?

This will make each interface write out either a provenance.json or a provenance.rdf file; more information can be found on this page.

Thanks very much! The pipeline works now, haha.