Skullstrip_first_pass error

I’m running fMRIPrep v21.0.2 and am getting a skullstrip_first_pass error early during processing for each functional run.

Here is a sample command for one subject:

fmriprep --participant_label 113 --nthreads 10 --verbose --longitudinal --dummy-scans 4 --output-layout legacy --output-spaces MNI152NLin6Asym:res-2 MNI152NLin6Asym:res-native fsaverage5 --fs-license-file /gpfs/group/sjw42/default/sjw42_collab/sw/freesurfer-6.0.1/license.txt --work /gpfs/group/sjw42/default/ASH/DVAL/work /gpfs/group/sjw42/default/ASH/DVAL/bids /gpfs/group/sjw42/default/ASH/DVAL/bids/derivatives participant

I’ve pasted one of the error logs below. Does anyone know what I might be doing wrong?

Node: fmriprep_wf.single_subject_113_wf.func_preproc_ses_01_task_cardguess_run_01_wf.initial_boldref_wf.enhance_and_skullstrip_bold_wf.skullstrip_first_pass
Working directory: /gpfs/group/sjw42/default/ASH/DVAL/work/fmriprep_wf/single_subject_113_wf/func_preproc_ses_01_task_cardguess_run_01_wf/initial_boldref_wf/enhance_and_skullstrip_bold_wf/skullstrip_first_pass

Node inputs:

args =
center =
frac = 0.2
functional =
in_file =
mask = True
mesh =
no_output =
out_file =
outline =
output_type = NIFTI_GZ
padding =
radius =
reduce_bias =
remove_eyes =
robust =
skull =
surfaces =
t2_guided =
threshold =
vertical_gradient =

Traceback (most recent call last):
File "/opt/conda/lib/python3.8/site-packages/nipype/pipeline/plugins/", line 67, in run_node
result["result"] =
File "/opt/conda/lib/python3.8/site-packages/nipype/pipeline/engine/", line 516, in run
result = self._run_interface(execute=True)
File "/opt/conda/lib/python3.8/site-packages/nipype/pipeline/engine/", line 635, in _run_interface
return self._run_command(execute)
File "/opt/conda/lib/python3.8/site-packages/nipype/pipeline/engine/", line 741, in _run_command
result =
File "/opt/conda/lib/python3.8/site-packages/nipype/interfaces/base/", line 428, in run
runtime = self._run_interface(runtime)
File "/opt/conda/lib/python3.8/site-packages/nipype/interfaces/fsl/", line 165, in _run_interface
File "/opt/conda/lib/python3.8/site-packages/nipype/interfaces/base/", line 749, in raise_exception
raise RuntimeError(
RuntimeError: Command:
bet sub-113_ses-01_task-cardguess_run-01_bold_average_corrected.nii.gz sub-113_ses-01_task-cardguess_run-01_bold_average_corrected_brain.nii.gz -f 0.20 -m
RuntimeError: Command:
bet sub-104_ses-01_task-cardguess_run-04_bold_average_corrected.nii.gz sub-104_ses-01_task-cardguess_run-04_bold_average_corrected_brain.nii.gz -f 0.20 -m
Standard output:

Standard error:
/storage/home/sjw42/.bashrc: line 36: module: command not found
/storage/home/sjw42/.bashrc: line 37: module: command not found
/storage/home/sjw42/.bashrc: line 38: module: command not found
/storage/home/sjw42/.bashrc: line 39: module: command not found
/storage/home/sjw42/.bashrc: line 40: module: command not found
Return code: 0
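
(Side note on the "module: command not found" lines: they suggest a shell spawned inside the container is sourcing the host ~/.bashrc, where the Environment Modules/Lmod init has not run. One common workaround, assuming lines 36-40 of that .bashrc are "module load ..." calls, is to guard them; the module names below are hypothetical:)

```shell
# ~/.bashrc (sketch): only call 'module' when it actually exists, so
# non-interactive shells inside the container skip it instead of erroring.
if command -v module >/dev/null 2>&1; then
    module load fsl         # hypothetical module names; replace with
    module load freesurfer  # whatever lines 36-40 actually load
fi
```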


Are you reusing anything from an older run, e.g. FreeSurfer outputs, anatomical derivatives, or the work directory?

Also, how are you running fMRIPrep: Python, Singularity, or Docker?


Hi Steven - the errors happened on a fresh run with nothing reused, and I’m using Singularity. I apologize if I’m not describing this correctly, but the fMRIPrep call is in a PBS batch job script that loads the Singularity container as a module (a Lua file points to the .sif container file).

I wonder if there is any connection to the issue described here?

Thanks for the info. I was asking because the work directories you specified here and in your previous issue were the same, but I guess you cleaned it out before rerunning this time :+1:

How much memory and CPU are you devoting to each job?

I set the maximum number of threads to 10 but otherwise didn’t specify a memory limit. Memory hasn’t really been an issue when processing these data with earlier versions of fMRIPrep.
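
If memory does turn out to matter, my understanding is that fMRIPrep accepts a memory cap and PBS can request memory explicitly; a rough sketch (the flag spelling should be checked against "fmriprep --help" for this version, and the resource syntax varies by cluster):

```shell
#PBS -l nodes=1:ppn=10
#PBS -l mem=40gb      # hypothetical request; size to your data

# Cap fMRIPrep slightly below the PBS allocation so the scheduler
# doesn't kill the job when the pipeline peaks.
fmriprep ... --nthreads 10 --mem-mb 38000 ...
```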

I am confused as to why fMRIPrep would be sourcing your local .bashrc file. Can you provide the relevant code snippets from your PBS job script?

Here’s a sample PBS script:

#PBS -l nodes=1:ppn=1:rhel7
#PBS -l walltime=60:00:00
#PBS -j oe
#PBS -m abe 
#PBS -A sjw42_a_g_sc_default 

# Get started
echo "Job started on `hostname` at `date`"

echo "Clearing environment and loading singularity image"
module purge
module load fmriprep/v21.0.2
echo "Done!"

# Go to the correct location

# Run the job itself
fmriprep --participant_label 102 --nthreads 4 --verbose --longitudinal --dummy-scans 4 --output-layout legacy --output-spaces MNI152NLin6Asym:res-2 MNI152NLin6Asym:res-native fsaverage5 --fs-license-file /gpfs/group/sjw42/default/sjw42_collab/sw/freesurfer-6.0.1/license.txt --work /gpfs/group/sjw42/default/ASH/DVAL/work /gpfs/group/sjw42/default/ASH/DVAL/bids /gpfs/group/sjw42/default/ASH/DVAL/bids/derivatives participant

# Finish up
echo "Job Ended at `date`"

And, in case it’s useful, below is the Lua file referenced in the module load command. Earlier versions of fMRIPrep worked fine for me with this same setup; however, I built the container differently this time. Previously I pulled it from Singularity Hub, but since that is no longer active, I built it locally with "singularity build fmriprep_v21.0.2.sif docker://nipreps/fmriprep:21.0.2".

-- -*- lua -*-
-- fmriprep latest
fmriprep is a functional magnetic resonance imaging (fMRI) data preprocessing pipeline that is designed to provide an easily accessible, state-of-the-art interface that is robust to variations in scan acquisition protocols and that requires minimal user input, while providing easily interpretable and comprehensive error and output reporting. It performs basic processing steps (coregistration, normalization, unwarping, noise component extraction, segmentation, skullstripping etc.) providing outputs that can be easily submitted to a variety of group level analyses, including task-based or resting-state fMRI, graph theory measures, surface or volume-based statistics, etc.

-- Whatis description
whatis('Description: A Robust Preprocessing Pipeline for fMRI Data')
whatis('singularity pull shub://sjw42/fmriprep_icsaci:rec')
local fmriprep = [==[
/usr/bin/singularity run /gpfs/group/sjw42/default/sjw42_collab/sw/singularity/fmriprep/fmriprep_v21.0.2.sif "$@";
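
In case the locally built image itself is the problem, a quick sanity check (assuming the .sif path above) would be to confirm the version and that FSL's bet is callable inside the container:

```shell
SIF=/gpfs/group/sjw42/default/sjw42_collab/sw/singularity/fmriprep/fmriprep_v21.0.2.sif
singularity run "$SIF" --version   # should report fMRIPrep 21.0.2
singularity exec "$SIF" bet        # should print bet's usage message
```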


I’m going to try a script with the "singularity run" command called directly, rather than using the module load approach, to see if I encounter the same issue.
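
For reference, the direct call I have in mind looks roughly like this (paths copied from above; --cleanenv and -B are standard Singularity options, added per the Singularity docs to keep the host environment, including anything .bashrc-related, from leaking into the container):

```shell
singularity run --cleanenv \
    -B /gpfs/group/sjw42/default \
    /gpfs/group/sjw42/default/sjw42_collab/sw/singularity/fmriprep/fmriprep_v21.0.2.sif \
    /gpfs/group/sjw42/default/ASH/DVAL/bids \
    /gpfs/group/sjw42/default/ASH/DVAL/bids/derivatives \
    participant \
    --participant_label 113 --nthreads 10 --verbose --longitudinal \
    --dummy-scans 4 --output-layout legacy \
    --output-spaces MNI152NLin6Asym:res-2 MNI152NLin6Asym:res-native fsaverage5 \
    --fs-license-file /gpfs/group/sjw42/default/sjw42_collab/sw/freesurfer-6.0.1/license.txt \
    --work /gpfs/group/sjw42/default/ASH/DVAL/work
```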

Sounds good, that’s what I would recommend. I’d also recommend using a different fresh working directory.