sMRIPrep crashes at -autorecon1, but runs perfectly with --fs-no-reconall

The title is pretty informative. I’ve been trying to run sMRIPrep (basically the first half of fMRIPrep) on two subjects, and when I run the complete smriprep pipeline it always crashes at the same place with the same error for both subjects. Everything works perfectly when I add the flag --fs-no-reconall. I tried one subject alone and it does the same thing. Here is the error:
“”"
File “/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py”, line 750, in _run_command
raise NodeExecutionError(
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node autorecon1.

RuntimeError: subprocess exited with code 1.
“”"

I checked the crash log .txt file but nothing looks weird to me. Here is the rest of it:
“”"
Node inputs:

FLAIR_file =
T1_files = ['/data/sub-01/anat/sub-01_T1w.nii.gz']
T2_file =
args =
big_ventricles =
brainstem =
directive = autorecon1
environ = {}
expert =
flags = ['-noskullstrip']
hemi =
hippocampal_subfields_T1 =
hippocampal_subfields_T2 =
hires = True
mprage =
mri_aparc2aseg =
mri_ca_label =
mri_ca_normalize =
mri_ca_register =
mri_edit_wm_with_aseg =
mri_em_register =
mri_fill =
mri_mask =
mri_normalize =
mri_pretess =
mri_remove_neck =
mri_segment =
mri_segstats =
mri_tessellate =
mri_watershed =
mris_anatomical_stats =
mris_ca_label =
mris_fix_topology =
mris_inflate = -n 50
mris_make_surfaces =
mris_register =
mris_smooth =
mris_sphere =
mris_surf2vol =
mrisp_paint =
openmp = 8
parallel =
steps =
subject_id = sub-01
subjects_dir = /out/freesurfer
talairach =
use_FLAIR =
use_T2 =
xopts =
“”"

I’ve been looking online across several platforms to find a solution and haven’t found one yet after nearly a week.

I am using smriprep-docker 0.9.0 on Linux Ubuntu 18.04 if it matters.

@effigies @oesteban

Thank you in advance! :slight_smile:

Sorry, I forgot to add this:

Just to add to my confusion: a freesurfer folder is created in the bids/derivatives/ folder with the typical FreeSurfer subfolders (mri, scripts, surf, etc.). Moreover, the mri subfolder contains the expected FreeSurfer output files (e.g., aseg.mgz, aparc+aseg.mgz, brain.mgz, etc.), but they seem to be computed from an “unwanted” T1w template image, which is weird and not what I want. I looked at these results in 3D Slicer and they would look normal if the input had been the T1w template image… which is not the case, as you can see from the crash log above, which has the correct path to the MPRAGE image (i.e., sub-01_T1w.nii.gz). I also verified in 3D Slicer that sub-01_T1w.nii.gz is the correct MPRAGE image I gave as input to smriprep-docker.
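One way to double-check which volume recon-all actually ingested is to read the logs it leaves behind. A minimal sketch, assuming the freesurfer folder you mention sits at bids/derivatives/freesurfer (the /out/freesurfer in the crash log is the container-side path):

```bash
# The scripts/ subfolder records what recon-all actually ran.
# The opening lines of recon-all.log repeat the full command, including -i <input>:
grep -- '-i ' bids/derivatives/freesurfer/sub-01/scripts/recon-all.log | head -n 1
# recon-all.cmd lists the individual steps that were executed, in order:
cat bids/derivatives/freesurfer/sub-01/scripts/recon-all.cmd
```

If the -i argument there points at your sub-01_T1w.nii.gz, the input was correct and any template-like appearance arose downstream.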

Seems likely related to Autorecon1 Crashing · Issue #2788 · nipreps/fmriprep · GitHub, but the utter lack of error messaging is a problem.

Thanks for verifying that it occurs in sMRIPrep. I will try to see if I can reproduce there; I’ve failed so far via fMRIPrep. Could you provide your full command?

I use a bash script based on fMRIPrep Tutorial #2: Running the Analysis — Andy's Brain Book 1.0 documentation, so ignore the Singularity branch: I am not using it, only the Docker one (the else branch).

Bash script:
“”"
#User inputs:
container=docker #docker or singularity [not implemented]
bids_root_dir=/home/marcantf/Data/bids_scaifield #[DATASET DEPENDENT]
subj=(01 02) #SUBJECT IDS [DATASET DEPENDENT]
nthreads=12
mem=54 #UPPER LIMIT ALLOCATED TO THE CALCULATION [GB]

#Begin:

#Convert virtual memory from gb to mb
mem=echo "${mem//[!0-9]/}" #remove gb at end
mem_mb=echo $(((mem*1000)-5000)) #reduce some memory for buffer space during pre-processing

export FS_LICENSE=/home/marcantf/Data/bids_scaifield/derivatives/license.txt #PATH TO THE FREESURFER LICENSE [REQUIRED TO USE FREESURFER]

#Run smriprep
if [ $container == singularity ]; then
unset PYTHONPATH; singularity run -B $HOME/.cache/templateflow:/opt/templateflow $HOME/smriprep.simg
$bids_root_dir $bids_root_dir/derivatives
participant
–participant-label $subj
–skip-bids-validation
–md-only-boilerplate
–fs-license-file $HOME/Desktop/Flanker/derivatives/license.txt
–fs-no-reconall
–output-spaces MNI152NLin2009cAsym:res-2
–nthreads $nthreads
–stop-on-first-crash
–mem_mb $mem_mb
-w $HOME
else
smriprep-docker $bids_root_dir $bids_root_dir/derivatives
participant
–fs-license-file $FS_LICENSE
–skull-strip-mode force
–verbose
–output-spaces fsnative
–mem-gb $mem_mb
-w $bids_root_dir/derivatives
fi
“”"
I then call `bash filename.sh` in the bids/code folder.
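One aside about the Docker branch (an observation on the script above, not something the thread confirms matters here): it never passes --participant-label, so smriprep-docker should process every subject in the dataset rather than only those listed in $subj. A sketch of restricting it, assuming the wrapper forwards the flag to sMRIPrep the way fmriprep-docker does:

```bash
# Hypothetical variant of the else-branch: limit the run to the
# subjects listed in the $subj array defined at the top of the script.
smriprep-docker $bids_root_dir $bids_root_dir/derivatives \
    participant \
    --participant-label "${subj[@]}" \
    --fs-license-file $FS_LICENSE \
    -w $bids_root_dir/derivatives
```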

And yeah, it looks very similar to the issue you sent.

Hi All,

Sorry for resurrecting this thread but seemed like the best place.

I am getting the same failure at autorecon1 that others have posted. I went through the GitHub issue linked above and troubleshot as suggested, by shell-ing into the container and running

<workdir>/fmriprep_22_0_wf/single_subject_<subject>_wf/anat_preproc_wf/surface_recon_wf/autorecon1/command.txt

the output was:

bash: /wynton/group/rsl/ABCD/code/working/slashofregas/rsfmri_pipeline/fmri_prep/working/fmriprep_22_0_wf/single_subject_hifuRS01a_wf/anat_preproc_wf/surface_recon_wf/autorecon1/command.txt : Permission denied
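That “Permission denied” is what bash reports when asked to execute a file whose execute bit is not set, and command.txt is a plain-text file, so it is likely not the real fMRIPrep error. Running the file through a shell should reproduce the node’s actual failure; a minimal sketch:

```bash
# command.txt stores the node's command line but is not executable;
# feed it to bash instead of executing it directly.
bash <workdir>/fmriprep_22_0_wf/single_subject_<subject>_wf/anat_preproc_wf/surface_recon_wf/autorecon1/command.txt
```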

This is using fmriprep-22.0.2.simg in Singularity, remotely on an HPC. My Singularity call is a bash script that looks like this (all the variables are file paths):

singularity run --cleanenv -B $hostpath $fmriprep \
    $bids_raw_dir $bids_deriv_dir participant \
    --participant-label $sub \
    -w $working_dir \
    --fs-license-file $fs_license \
    --output-spaces T1w MNI152NLin6Asym:res-2

It seems like the autorecon1 command from within Singularity is having a permissions issue creating a folder or something, but this is my first time using Singularity and I am not sure how to make it play nice with the HPC. To note: all the working dirs and everything else fMRIPrep touches are subdirectories under $hostpath, so, at least the way I understand it, they should all be bound into the .simg (see the sketch below).
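Before rerunning, it may be worth confirming from inside the image that the working directory really is visible and writable. A sketch using the same variables as the script above (they are not defined inside the container, so substitute the literal paths at the prompt):

```bash
# Open an interactive shell in the container with the same bind mount:
singularity shell --cleanenv -B $hostpath $fmriprep
# Then, at the prompt inside the container (literal paths):
ls -ld <working_dir>
touch <working_dir>/.write_test && rm <working_dir>/.write_test
```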

I just tried running again and got the same error (I have also tried running with the server /scratch folders bound, and the same thing happens). Here are the lines from the terminal output when it errors out (it claims ‘no such file or directory’, but the way I am thinking about it, this has to do with the permissions error stated above):

         [Node] Error on "fmriprep_22_0_wf.single_subject_hifuRS01a_wf.anat_preproc_wf.surface_recon_wf.autorecon1" (/wynton/group/rsl/ABCD/code/working/slashofregas/rsfmri_pipeline/fmri_prep/working/fmriprep_22_0_wf/single_subject_hifuRS01a_wf/anat_preproc_wf/surface_recon_wf/autorecon1)
221207-09:32:33,648 nipype.workflow ERROR:
         Node autorecon1 failed to run on host dev1.wynton.ucsf.edu.
221207-09:32:33,657 nipype.workflow ERROR:
         Saving crash info to /wynton/group/rsl/ABCD/code/working/slashofregas/rsfmri_pipeline/Data/hifu_rsfmri_bids/derivatives/sub-hifuRS01a/log/20221207-093126_8a2fb94f-5a20-4688-9877-8d4ce05b6b79/crash-20221207-093233-slashofregas-autorecon1-0840c0c3-4797-4135-8a1e-a91a7b67dd48.txt
Traceback (most recent call last):
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 527, in run
    result = self._run_interface(execute=True)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 645, in _run_interface
    return self._run_command(execute)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 771, in _run_command
    raise NodeExecutionError(msg)
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node autorecon1.

Cmdline:
        recon-all -autorecon1 -i /wynton/group/rsl/ABCD/code/working/slashofregas/rsfmri_pipeline/Data/hifu_rsfmri_bids/rawdata/sub-hifuRS01a/ses-pre/anat/sub-hifuRS01a_ses-pre_T1w.nii -noskullstrip -noT2pial -noFLAIRpial -openmp 8 -subjid sub-hifuRS01a -sd /wynton/group/rsl/ABCD/code/working/slashofregas/rsfmri_pipeline/Data/hifu_rsfmri_bids/derivatives/sourcedata/freesurfer
Stdout:

Stderr:
        /wynton/home/sugrue/slashofregas/rsfmri_pipeline/fmri_prep/working/fmriprep_22_0_wf/single_subject_hifuRS01a_wf/anat_preproc_wf/surface_recon_wf/autorecon1: No such file or directory.
Traceback:
        RuntimeError: subprocess exited with code 1.

221207-09:33:39,376 nipype.workflow ERROR:
         could not run node: fmriprep_22_0_wf.single_subject_hifuRS01a_wf.anat_preproc_wf.surface_recon_wf.autorecon1
221207-09:33:39,445 nipype.workflow CRITICAL:
         fMRIPrep failed: Traceback (most recent call last):
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 527, in run
    result = self._run_interface(execute=True)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 645, in _run_interface
    return self._run_command(execute)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 771, in _run_command
    raise NodeExecutionError(msg)
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node autorecon1.

Anyway, any help would be much appreciated! (I can move this to a new thread if the mods think that’s more appropriate.)

Please reopen this as a post in Software Support - Neurostars
