fMRIPrep preprocessing not successful, file not gzip

I ran fmriprep on this dataset: GitHub - big-data-lab-team/openpain-subacute_longitudinal_study: A Datalad dataset for OpenPain's subacute_longitudinal_study dataset, for integration in CONP.

Using this command:

singularity run --cleanenv fmriprep-22.0.0.simg "" "output/" participant --participant-label 101 --nthreads 16 --verbose --fs-license-file "license.txt" --use-aroma --mem_mb 5980

I get the following error:

Node Name: fmriprep_22_0_wf.single_subject_101_wf.anat_preproc_wf.anat_norm_wf.registration

File: /rds/project/rds-3IOyKgCQu4I/sbp/output/sub-101/log/20220821-183519_dddc8470-aed3-40e5-9040-9ff0b4225ae9/crash-20220821-231719-mj606-registration.a1-58ead18b-c530-4e3e-94b7-3f3f10e592ff.txt
Working Directory: /rds/project/rds-3IOyKgCQu4I/sbp/work/fmriprep_22_0_wf/single_subject_101_wf/anat_preproc_wf/anat_norm_wf/_template_MNI152NLin6Asym/registration

    explicit_masking: True
    flavor: precise
    float: True
    moving: T1w
    moving_image: /rds/project/rds-3IOyKgCQu4I/sbp/work/fmriprep_22_0_wf/single_subject_101_wf/anat_preproc_wf/anat_norm_wf/_template_MNI152NLin6Asym/trunc_mov/sub-101_ses-visit1_T1w_ras_template_corrected_xform_maths.nii.gz
    moving_mask: /rds/project/rds-3IOyKgCQu4I/sbp/work/fmriprep_22_0_wf/single_subject_101_wf/anat_preproc_wf/surface_recon_wf/refine/sub-101_ses-visit1_T1w_ras_template_corrected_xform_rbrainmask.nii.gz
    num_threads: 8
    orientation: RAS
    reference: T1w
    template: MNI152NLin6Asym
    template_spec: {}

Traceback (most recent call last):
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/plugins/", line 67, in run_node
    result["result"] =
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/", line 524, in run
    result = self._run_interface(execute=True)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/", line 642, in _run_interface
    return self._run_command(execute)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/", line 750, in _run_command
    raise NodeExecutionError(
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node registration.

Traceback (most recent call last):
  File "/opt/conda/lib/python3.9/site-packages/nipype/interfaces/base/", line 398, in run
    runtime = self._run_interface(runtime)
  File "/opt/conda/lib/python3.9/site-packages/niworkflows/interfaces/", line 183, in _run_interface
    ants_args = self._get_ants_args()
  File "/opt/conda/lib/python3.9/site-packages/niworkflows/interfaces/", line 457, in _get_ants_args
    args["fixed_image"] = mask(
  File "/opt/conda/lib/python3.9/site-packages/niworkflows/interfaces/", line 502, in mask
    in_nii = nb.load(in_file)
  File "/opt/conda/lib/python3.9/site-packages/nibabel/", line 105, in load
    raise ImageFileError(msg)
nibabel.filebasedimages.ImageFileError: File /home/mj606/.cache/templateflow/tpl-MNI152NLin6Asym/tpl-MNI152NLin6Asym_res-01_T1w.nii.gz is not a gzip file

Why is this the case?


This looks like a Templateflow issue, where the template file did not download correctly. Are you using a machine that has internet access? Sometimes computing clusters do not have it.

Can you navigate to the Templateflow cache and see what the contents look like?


The HPC is connected to the internet.

But here’s the Templateflow cache:

Is there any specific file you want to look at?


You should look at the file mentioned at the end of the crash log you brought up, and see whether that file is actually a NIfTI or just a git-annex placeholder.

Usually the storage the file occupies is a good indicator: if it's small (on the order of bytes or kilobytes), it's likely not the NIfTI.
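If you'd rather check programmatically, here is a minimal sketch (standard library only; the path in the comment is the one from the crash log, adjust as needed) that tests whether a file really is gzip-compressed:

```python
import gzip

def is_valid_gzip(path):
    """True if the file has a gzip header and at least one byte decompresses."""
    try:
        with gzip.open(path, "rb") as f:
            f.read(1)  # force actual decompression of at least one byte
        return True
    except (OSError, EOFError):
        return False

# e.g., the file from the crash log:
# is_valid_gzip("/home/mj606/.cache/templateflow/tpl-MNI152NLin6Asym/"
#               "tpl-MNI152NLin6Asym_res-01_T1w.nii.gz")
```

A git-annex placeholder is plain text, so it fails this check immediately, which is exactly the error nibabel raised above.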


Yup, it looks like that’s the case:


That last screenshot doesn’t show the files, those are still the directories. Go into the tpl-MNI152NLin6Asym folder and look for tpl-MNI152NLin6Asym_res-01_T1w.nii.gz.



Yes, that is too small to be an actual NIfTI. If you have DataLad installed, you can try running datalad get in that directory to download the files. This depends on your machine having internet access (again, not guaranteed with HPCs). Alternatively, you can install Templateflow locally on your machine, download the data there, and then move it to your HPC Templateflow cache.
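To spot every unfetched file in the cache at once, here is a rough sketch (it assumes only the cache's directory layout; any .nii.gz that is a broken git-annex symlink, or far smaller than a real volume, is suspect):

```python
import os

def find_suspect_niftis(cache_dir, min_bytes=1024):
    """Return .nii.gz paths under cache_dir that look like unfetched
    git-annex placeholders: broken symlinks or suspiciously tiny files."""
    suspects = []
    for root, _dirs, files in os.walk(cache_dir):
        for name in files:
            if not name.endswith(".nii.gz"):
                continue
            path = os.path.join(root, name)
            if not os.path.exists(path):        # broken annex symlink
                suspects.append(path)
            elif os.path.getsize(path) < min_bytes:
                suspects.append(path)
    return suspects

# e.g. find_suspect_niftis(os.path.expanduser("~/.cache/templateflow"))
```

Anything this reports is a file you would need to fetch again (e.g. with datalad get) before fMRIPrep can use the cache.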


I followed these steps to download the Templateflow archive. The download seemed to succeed, with no error messages, but the size of this particular file is still 24 kilobytes.


Just to make sure, you made it to the datalad get stage, not just the datalad install command, right?

Yes, I’ve run datalad get -r tpl-MNI152NLin2009cAsym (just as it says on their website) for all the different folders.

Right now, because I’ve already run it the output looks like this:

Should I try running this as well:

$ export TEMPLATEFLOW_HOME=/path/to/keep/templateflow
$ python -m pip install -U templateflow  # Install the client
$ python
>>> import templateflow.api
>>> templateflow.api.TF_S3_ROOT = ''
>>> templateflow.api.get('MNI152NLin6Asym')

Taken from: Running fMRIPrep via Singularity containers — fmriprep version documentation

What you should do is, in the terminal or script where you run fmriprep, set:

export SINGULARITYENV_TEMPLATEFLOW_HOME=$Path/to/your/complete/templateflow/directory

Sigh, still looks pretty much the same.

Here’s the Slurm script with the needed changes: Slurm script fMRIPrep

Here’s the script that downloads the Templateflow archive: Templateflow archive download

Have you tried downloading templateflow locally and sending the files to your HPC?

So, I’ve tried downloading it multiple times on my local machine as well. It gets through most of the files, but some of the larger files fail to download on my network, and when I re-run the command it just doesn’t finish downloading those remaining files.

That’s when I shifted to downloading this on the HPC which is connected to a faster and more reliable network.

(I feel kind of sorry that this has turned into a datalad and internet speed problem now.)

What happens if you, on your HPC, cd to the folder with the T1 file you are trying to download, and run datalad get tpl-MNI152NLin6Asym_res-01_T1w.nii.gz?

How did you install datalad? It might be best to make a dedicated conda environment for it.

This is the link I used: Installation and configuration — The DataLad Handbook

So when I list my environments I see:

# conda environments:
sbp_env                  /home/mj606/.conda/envs/sbp_env
base                  *  /home/mj606/miniconda3