fMRIPrep normalization error on Singularity

Summary of what happened:

Hello, I'm a student studying brain science.

I want to run fMRIPrep with Singularity on my personal desktop, since it is a high-end computer. I ran fMRIPrep on Singularity (the command is below), but an error occurs consistently; only the normalization step fails.

Command used (and if a helper script was used, a link to the helper script or the command generated):

 singularity run --cleanenv /home/park/Desktop/files/fmriprep-23.1.4.simg \
    data/bids_root/ out/ \
    participant \
    --participant-label 001 \
    --fs-license-file /home/park/Desktop/files/license.txt --fs-no-reconall



Environment (Docker, Singularity / Apptainer, custom installation):

Singularity (image: fmriprep-23.1.4.simg)

Data formatted according to a validatable standard? Please provide the output of the validator:

(node:532365) Warning: Closing directory handle on garbage collection
(Use `node --trace-warnings ...` to show where the warning was created)
	1: [WARN] Each _phasediff.nii[.gz] file should be associated with a _magnitude1.nii[.gz] file. (code: 92 - MISSING_MAGNITUDE1_FILE)

	Please visit for existing conversations about this issue.

	2: [WARN] The Authors field of dataset_description.json should contain an array of fields - with one author per field. This was triggered based on the presence of only one author field. Please ignore if all contributors are already properly listed. (code: 102 - TOO_FEW_AUTHORS)

	Please visit for existing conversations about this issue.

	3: [WARN] The Name field of dataset_description.json is present but empty of visible characters. (code: 115 - EMPTY_DATASET_NAME)

	Please visit for existing conversations about this issue.

        Summary:                  Available Tasks:        Available Modalities: 
        1229 Files, 9.07GB                                MRI                   
        103 - Subjects                                                          
        1 - Session                                                             

	If you have any questions, please post on

Relevant log outputs (up to 20 lines):

File: /home/park/Desktop/NI_2/KSHAP_18bul/rest_msit/derivatives/sub-001/log/20240120-124628_e4ea70e5-8c9d-4f01-8788-cb787502355a/crash-20240120-125152-park-registration.a0-b54bd93e-ae2d-4129-8c34-3b6c04a986ba.txt
Working Directory: /home/park/Desktop/files/work/fmriprep_23_1_wf/single_subject_001_wf/anat_preproc_wf/anat_norm_wf/_template_MNI152NLin2009cAsym/registration

    explicit_masking: True
    flavor: precise
    float: True
    moving: T1w
    num_threads: 8
    orientation: RAS
    reference: T1w
    template: MNI152NLin2009cAsym

Traceback (most recent call last):
  File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/nipype/pipeline/plugins/", line 67, in run_node
    result["result"] =
  File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/nipype/pipeline/engine/", line 527, in run
    result = self._run_interface(execute=True)
  File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/nipype/pipeline/engine/", line 645, in _run_interface
    return self._run_command(execute)
  File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/nipype/pipeline/engine/", line 771, in _run_command
    raise NodeExecutionError(msg)
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node registration.

	Traceback (most recent call last):
	  File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/nipype/interfaces/base/", line 397, in run
	    runtime = self._run_interface(runtime)
	  File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/niworkflows/interfaces/", line 183, in _run_interface
	    ants_args = self._get_ants_args()
	  File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/niworkflows/interfaces/", line 458, in _get_ants_args
	    args["fixed_image"] = mask(
	  File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/niworkflows/interfaces/", line 507, in mask
	    data = in_nii.get_fdata()
	  File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/nibabel/", line 373, in get_fdata
	    data = np.asanyarray(self._dataobj, dtype=dtype)
	  File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/nibabel/", line 439, in __array__
	    arr = self._get_scaled(dtype=dtype, slicer=())
	  File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/nibabel/", line 406, in _get_scaled
	    scaled = apply_read_scaling(self._get_unscaled(slicer=slicer), scl_slope, scl_inter)
	  File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/nibabel/", line 376, in _get_unscaled
	    return array_from_file(
	  File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/nibabel/", line 472, in array_from_file
	    raise OSError(
	OSError: Expected 17060042 bytes, got 945364 bytes from object
	 - could the file be damaged?

Screenshots / relevant information:

How can I fix this error? Thank you for your attention.

Hi @sangmin_park,

For future posts, please open issues in the Software Support category, which provides a post template that prompts you for information that will help us debug your issue. I have added the template and switched the category for you this time.

One thing that is still missing is your BIDS validation report. Please return the output of the BIDS validator, and do not use --skip-bids-validation. Additionally, --fs-no-reconall is not recommended. You can also try upgrading to the latest stable release, 23.2.0, using a clean working directory outside of the BIDS directory (which you can specify with the -w flag).
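For reference, the upgraded invocation might look something like this. This is only a sketch based on your original command: the 23.2.0 image path and the work directory name are assumptions, so adjust them to your setup.

```shell
# Build an image of the latest stable release (image filename is an assumption)
singularity build /home/park/Desktop/files/fmriprep-23.2.0.simg \
    docker://nipreps/fmriprep:23.2.0

# Re-run with a clean working directory outside the BIDS directory,
# and without --fs-no-reconall
singularity run --cleanenv /home/park/Desktop/files/fmriprep-23.2.0.simg \
    data/bids_root/ out/ \
    participant \
    --participant-label 001 \
    --fs-license-file /home/park/Desktop/files/license.txt \
    -w /home/park/Desktop/files/work_clean
```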



Sorry for the bother, and thank you for the help.

I have uploaded the BIDS validator result.

I really wonder what "OSError: Expected 17060042 bytes, got 945364 bytes from object - could the file be damaged?" means, because I have also run fMRIPrep on Docker with the same data.

Hi @sangmin_park,

It might be due to not having enough memory for the job. How much RAM and how many CPUs are you giving the task? Also, sometimes just using a fresh working directory can fix things.



Thank you for your answer.

I have 32 GB of RAM and an Intel i7-13700K CPU, but I haven't assigned resources to this task the way Docker's resource settings do, because I don't know how to set resource limits for a Singularity task. Is there a way to do that?


Hi @sangmin_park,

That is explained in the documentation here: Limiting Container Resources — SingularityCE User Guide main documentation
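As a concrete sketch, there are two routes; treat the exact flags as assumptions to verify against the linked docs and your installed versions, since the container-level limits depend on your SingularityCE version and cgroups setup:

```shell
# Option 1: cgroup limits on the container itself
# (SingularityCE with cgroups v2 support)
singularity run --cleanenv --memory 24G --cpus 8 \
    /home/park/Desktop/files/fmriprep-23.1.4.simg ...

# Option 2: let fMRIPrep manage its own footprint via its resource flags
singularity run --cleanenv /home/park/Desktop/files/fmriprep-23.1.4.simg \
    data/bids_root/ out/ participant \
    --participant-label 001 \
    --fs-license-file /home/park/Desktop/files/license.txt \
    --nprocs 8 --omp-nthreads 8 --mem-mb 24000
```

Option 2 is often the simpler route, since it does not require any host-side cgroups configuration.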



Thank you very much.

The problem is not solved yet, but I will keep trying to find a way.