Singularity and fmriprep

When I try to run fMRIPrep in Singularity for a particular study (the script was adapted from another study), fMRIPrep doesn't run, but I don't get any error output - just a blank screen.

Has anyone experienced this before? I don't know how to fix this issue without any error output. Here is my code - is there something obviously wrong? I am using fMRIPrep 22.0.2.

I don't think this is a binding issue, because I can get fMRIPrep to run for another study located in a similar directory.

My data is located on $WORK on our university supercomputer.

I should also say that I pulled the nipreps image from Docker to build the .sif image.
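For reference, building the .sif from the Docker image usually looks something like this (a sketch; the filename and tag are assumptions based on the version mentioned above):

```shell
# Convert the official nipreps Docker image into a Singularity image.
# 22.0.2 matches the fMRIPrep version used in this thread.
singularity build fmriprep_22.0.2.sif docker://nipreps/fmriprep:22.0.2
```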

for file in "${dicomfilename[@]}"; do
    # Extract session and ID information from the DICOM filename for use
    # throughout this script - I checked this works properly
    session=$(echo "$file" | cut -d . -f 2 | grep -o '..')
    id=$(echo "$file" | cut -d . -f 1 | grep -o '....$')
    echo "RUNNING PARTICIPANT: sub-${id}-${session}" >> "$script_log_location"
    if [ "$fmriprep_run" == "yes" ]; then  # labelled "yes" above
        cd ${localnifty}
        singularity run --cleanenv ${fmriprep} ${localnifty} ${fmriprep_output} participant --participant-label ${id} --fs-license-file ${fsl_license} --skip_bids_validation -w ${workingdir} --debug fieldmaps
    fi
done


It seems that you are not binding any folder to your Singularity container (option -B, which should appear before ${fmriprep} in your singularity call).

An explanation for this binding was given in another thread by @Steven:
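In short, every host path fMRIPrep needs must be made visible inside the container with -B; without a bind, the path simply does not exist inside the container. A minimal sketch (all paths are placeholders):

```shell
# Each -B maps host_path:container_path. fMRIPrep then sees only the
# container-side paths, so those are what go on its command line.
singularity run --cleanenv \
    -B /host/bids_dir:/data \
    -B /host/output_dir:/out \
    fmriprep.sif /data /out participant
```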

Thanks for this feedback - that's what I was thinking, but what I've written doesn't seem to stop the blank screen hanging. I ran the following and it still hangs with no output (I also tried removing --skip_bids_validation, and no joy). Here is my updated command - are there any issues with it? Sorry in advance if I am missing something very obvious. I have also tried changing my working directory to $WORK, but it still hangs. I also checked, and it is definitely version 22.0.2. When I try to run 22.0.1, I get a warning that there is a newer version of fMRIPrep, but then it hangs.

My working directory is in $SCRATCH while my data is on $WORK

singularity run \
    -B ${localnifty} \
    -B ${fmriprep_output} \
    -B ${fsl_license} \
    -B ${workingdir} \
    --cleanenv \
    ${fmriprep} ${localnifty} ${fmriprep_output} participant --participant-label ${id} \
    --fs-license-file ${fsl_license} --skip_bids_validation -w ${workingdir} --debug fieldmaps

Here is my directory structure
│ ├── sub-a017_ses-01_rec-norm_T1w.json
│ ├── sub-a017_ses-01_rec-norm_T1w.nii.gz
│ ├── sub-a017_ses-01_rec-norm_T2w.json
│ ├── sub-a017_ses-01_rec-norm_T2w.nii.gz
│ ├── sub-a017_ses-01_rec-orig_T1w.json
│ ├── sub-a017_ses-01_rec-orig_T1w.nii.gz
│ ├── sub-a017_ses-01_rec-orig_T2w.json
│ └── sub-a017_ses-01_rec-orig_T2w.nii.gz
├── dwi
│ ├── sub-a017_ses-01_dir-AP_dwi.bval
│ ├── sub-a017_ses-01_dir-AP_dwi.bvec
│ ├── sub-a017_ses-01_dir-AP_dwi.json
│ └── sub-a017_ses-01_dir-AP_dwi.nii.gz
├── fmap
│ ├── sub-a017_ses-01_acq-cybdistmap_dir-PA_epi.json
│ ├── sub-a017_ses-01_acq-cybdistmap_dir-PA_epi.nii.gz
│ ├── sub-a017_ses-01_acq-distmap_dir-PA_dwi.json
│ ├── sub-a017_ses-01_acq-distmap_dir-PA_dwi.nii.gz
│ ├── sub-a017_ses-01_acq-restdistmap_dir-PA_epi.json
│ ├── sub-a017_ses-01_acq-restdistmap_dir-PA_epi.nii.gz
│ ├── sub-a017_ses-01_acq-socmiddistmap_dir-PA_epi.json
│ └── sub-a017_ses-01_acq-socmiddistmap_dir-PA_epi.nii.gz
└── func
  ├── sub-a017_ses-01_task-cyb_dir-AP_bold.json
  ├── sub-a017_ses-01_task-cyb_dir-AP_bold.nii.gz
  ├── sub-a017_ses-01_task-mid_dir-AP_bold.json
  ├── sub-a017_ses-01_task-mid_dir-AP_bold.nii.gz
  ├── sub-a017_ses-01_task-rest_dir-AP_bold.json
  ├── sub-a017_ses-01_task-rest_dir-AP_bold.nii.gz
  ├── sub-a017_ses-01_task-soc_dir-AP_bold.json
  └── sub-a017_ses-01_task-soc_dir-AP_bold.nii.gz

I am not sure if it will make a difference, but I give a specific name to each mounted directory. For example, here is a typical command I use:

singularity run -B /scratch/jsein/BIDS:/work,$HOME/.templateflow:/opt/templateflow --cleanenv /scratch/jsein/my_images/fmriprep-21.0.2.simg \
    --fs-license-file /work/freesurfer/license.txt /work/$study /work/$study/derivatives/fmriprep \
    participant --participant-label $sub \
    -w /work/temp_data_${study} \
    --mem-mb 50000 --omp-nthreads 10 --nthreads 12 \
    --fd-spike-threshold 0.5 --dvars-spike-threshold 2.0 --bold2t1w-dof 9 \
    --output-spaces MNI152NLin6Asym MNI152NLin2009cAsym T1w --ignore slicetiming --fs-subjects-dir /work/$study/derivatives/fmriprep/sourcedata/freesurfer

For example, here I mount /scratch/jsein/BIDS as /work in the fMRIPrep container.
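A quick way to confirm a bind works before launching a full run is to list the directory from inside the container (a diagnostic sketch; the image filename is a placeholder):

```shell
# If the bind is correct, this prints the BIDS directory contents;
# if not, the path will be missing or empty inside the container.
singularity exec -B /scratch/jsein/BIDS:/work fmriprep.sif ls /work
```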

What kind of memory/cpu usage are you devoting to the job?

I haven't made any changes here regarding memory. Do you think this might be causing it to fail?

I have tried updating my code to the following as per the suggestions above, and it still appears to be failing. I also went here and tried all of the suggestions; I am able to complete all the troubleshooting steps without issue. I also added a bind for templateflow and ran with and without it.

Running fMRIPrep via Singularity containers — fmriprep version documentation

unset PYTHONPATH; singularity run -B ${localnifty}:/workd \
    -B $HOME/.templateflow:opt/templateflow \
    -B ${workingdir}:/scratchd \
    -B ${fsl_license}:/license \
    -B $HOME:/home/fmriprep --home /home/fmriprep --cleanenv \
    ${fmriprep} /workd/ /work/derivatives/fmriprep-v22.0.2/ \
    participant --participant-label ${id} \
    --fs-license-file /license -w /scratchd/ \
    --debug fieldmaps --skip_bids_validation

One detail: there is a typo in your command just above:

-B ${localnifty}:/workd should be -B ${localnifty}:/work, or /work/derivatives/fmriprep-v22.0.2/ should be /workd/derivatives/fmriprep-v22.0.2/.

I also think you should specify how much memory and CPU you plan to use, to be sure enough is available for the fMRIPrep execution.

Thanks, I've allocated the following flags…

--low-mem --n-cpus 6 --mem-mb 50000

None of these seems to make any difference. It still hangs - should I add anything else to reduce my memory allocation?

What do you see in the log as output of your command, and at which stage does it hang?

The memory allocation looks ok to me.

Also, I just noticed in your data hierarchy: your dataset is not BIDS valid, as you are missing one layer: the ses-01/ subdirectory. You should have:

sub-a017/
└── ses-01/
    ├── anat/
    ├── dwi/
    ├── fmap/
    └── func/

These will not change how much memory/CPU is devoted to the job, just how much memory/CPU fMRIPrep will use out of what is available to it. That is, you can specify --mem-mb 50000, but if you only give fMRIPrep 20GB, then that flag won't be doing anything. How are you submitting fMRIPrep jobs? E.g., an sbatch job array on an HPC?
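One way to see what the scheduler actually granted is to print SLURM's environment variables inside the job script, before calling fMRIPrep (a sketch assuming a SLURM cluster; on other schedulers the variable names differ):

```shell
# Print what SLURM actually allocated to this job; fMRIPrep's
# --mem-mb / --nthreads should not exceed these values.
echo "Memory per node (MB): ${SLURM_MEM_PER_NODE:-not set, cluster default}"
echo "CPUs on node:         ${SLURM_CPUS_ON_NODE:-not set, cluster default}"
```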

Hi Steven and Jsein,

Firstly, I am so sorry this is such a drama, but I just can't think of what I am missing here. I spent the morning cleaning up my files to make sure I had enough space to run this analysis, and it's still stalling.

Originally, I was submitting a bash script job via sbatch on our university supercomputer. I must have the correct arguments, because if anything is wrong, fMRIPrep still throws errors, and when I fix them they go away (and it just stalls).

I've also tried opening a shell in the fMRIPrep container and running it there (still on the supercomputer), and it still stalls in the same way. I have tried running on different datasets and with different versions of fMRIPrep (when I do this, I get a reminder that there is a new version of fMRIPrep, but then it stalls). Within the shell, I can get the BIDS validator to run on a single participant (sub-a017), but when I try to run it on the entire dataset I get the error…

"Unhandled rejection ( reason: RangeError: Maximum call stack size exceeded at recursiveMerge.."

Here is the last set of commands I ran. I played around with the memory options a ton and it still stalled out. I must be missing something very obvious, but I just can't figure out what it is.

module load tacc-singularity

singularity shell /work/06953/jes6785/Container/fmriprep_22.0.2.sif

fmriprep /work/06953/jes6785/ls6/NECTARY_DATA/ /work/06953/jes6785/ls6/NECTARY_DATA/derivatives/fmriprep-v22.0.2/ \
    participant --participant-label a017 \
    --fs-license-file /work/06953/jes6785/ls6/NECTARY_DATA/derivatives/fmriprep-v22.0.2/code/license_2.txt \
    -w /scratch/06953/jes6785/working_dir/ \
    --low-mem -vvv --mem-mb 16000 --omp-nthreads 2 --nthreads 4

After this hangs, in my working directory I have a directory named something like “20221103-135132…”; inside it is another directory, “bids_db”, which contains a single (apparently blank) file, layout_index.sqlite - nothing else.
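If sqlite3 is available on the login node, one can peek at that index to see whether it was populated at all (a diagnostic sketch; the directory name comes from the output described above):

```shell
# An empty table list (or an error) suggests BIDS indexing never
# finished; populated tables mean fMRIPrep got past that stage.
sqlite3 20221103-135132*/bids_db/layout_index.sqlite ".tables"
```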

When you did this, what kind of resources did you devote to fMRIPrep? This would usually be found in the SBATCH header, e.g. #SBATCH --mem=20GB.

Have you tried running it on a test BIDS root directory with only one subject? The first thing BIDS Apps do is create an SQL index file of the entire BIDS directory (even when only running one subject), so it is possible this step is taking a long time. Knowing what resources you are giving the job, and roughly how large the dataset is, may suggest whether this is happening. Also, I should have asked earlier: are your data BIDS valid? The organization pointed out by @jsein's last comment suggests you may be working with data not organized to BIDS standards.
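A one-subject test root can be put together cheaply by copying just the top-level metadata and a single subject (paths are hypothetical):

```shell
# Build a minimal BIDS root to rule out slow indexing of the full
# dataset, then point fMRIPrep at ~/bids_onesub instead.
mkdir -p ~/bids_onesub
cp /path/to/bids/dataset_description.json ~/bids_onesub/
cp -r /path/to/bids/sub-a017 ~/bids_onesub/
```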



Thanks again for the response. I will try a test set of data with only one subject and see if that helps. Most of the time I only have a few participants in there, but this dataset has about 6-7 participants in it.

I also hadn't considered the sbatch settings at the top of my script.
Currently, I have not allocated anything with --mem, but I listed the following…

#SBATCH -J name # Job name
#SBATCH -o job.%j # Name of stdout output file (%j expands to jobId)
#SBATCH -p normal # Queue name
#SBATCH -N 1 # Total number of nodes requested (68 cores/node)
#SBATCH -n 1 # Total number of mpi tasks requested
#SBATCH -t 00:30:00 # Run time (hh:mm:ss) - 30 minutes
#SBATCH -A imaging_test # allocation to run under

Also, yes, it's BIDS compliant - I had a typo above; sorry, I forgot to get back about that.

Hi, it looks like without a --mem or -c flag, your jobs will default to whatever your computing cluster's default is, which is probably not sufficient for fMRIPrep. I would explicitly specify these in your SBATCH header (perhaps start with #SBATCH --mem=16GB and #SBATCH --cpus-per-task=8). Also, your wall time of 30 minutes will not be enough for fMRIPrep to finish. Depending on whether you are using previous FreeSurfer outputs, I would set this to 2 days to be safe (it doesn't matter if it finishes before the wall time).


I can't believe it - I finally have some output!
A big thank you for your help here! I couldn't for the life of me figure out what was going wrong with my script. These solutions fixed things wonderfully, and the output is looking really good. For any newbies with a similar problem, I am attaching the top portion of my final sbatch code for reference below.

I just wanted to check one final thing. When I run with the -vvv flag, I get the following output. I assume this is just the program giving me tons of info, but I wanted to double-check that this output is OK, as I am not 100% sure what it means.

221108-10:49:28,608 nipype.workflow DEBUG:
         Cannot allocate job 94 (0.20GB, 7 threads).
221108-10:49:28,608 nipype.workflow DEBUG:
         Cannot allocate job 97 (0.20GB, 7 threads).
221108-10:49:28,608 nipype.workflow DEBUG:
         Cannot allocate job 100 (0.20GB, 7 threads).
221108-10:49:28,608 nipype.workflow DEBUG:
         Cannot allocate job 103 (0.20GB, 7 threads).
221108-10:49:28,608 nipype.workflow DEBUG:
         Cannot allocate job 106 (0.20GB, 7 threads).
221108-10:49:28,608 nipype.workflow DEBUG:
         Cannot allocate job 123 (0.05GB, 7 threads).

#SBATCH -J laser_preprocessing        # Job name
#SBATCH -o lsb108_fmriprep.%j           # Name of stdout output file (%j expands to jobId)
#SBATCH -p normal                     # Queue name
#SBATCH -N 1                          # Total number of nodes requested (68 cores/node)
#SBATCH -n 1                          # Total number of mpi tasks requested
#SBATCH --mem=16GB
#SBATCH --cpus-per-task=8

#SBATCH -t 48:00:00                   # Run time (hh:mm:ss) - 48 hours
#SBATCH -A IBN22006                   # allocation to run under


Great! I wouldn't worry about those debug messages if fMRIPrep finishes without errors and the outputs look good.