Summary of what happened:
I am getting the same failure at autorecon1 that others have posted. I went through the GitHub comments related to this issue and troubleshot as suggested, by shelling into the container (singularity shell) and running:
<workdir>/fmriprep_22_0_wf/single_subject_<subject>_wf/anat_preproc_wf/surface_recon_wf/autorecon1/command.txt
the output was:
bash: /wynton/group/rsl/ABCD/code/working/slashofregas/rsfmri_pipeline/fmri_
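For reference, the shell step looked roughly like this (a sketch reconstructed from the run script below, not copied verbatim from my terminal):

singularity shell --cleanenv -B $hostpath $fmriprep
# then, at the container prompt that opens:
bash <workdir>/fmriprep_22_0_wf/single_subject_<subject>_wf/anat_preproc_wf/surface_recon_wf/autorecon1/command.txt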
Command used (and if a helper script was used, a link to the helper script or the command generated):
This is using fmriprep-22.0.2.simg with Singularity on a remote HPC. My Singularity call is in a bash script that looks like this (all the variables are file paths):
singularity run --cleanenv -B $hostpath $fmriprep \
$bids_raw_dir $bids_deriv_dir participant \
--participant-label $sub \
-w $working_dir \
--fs-license-file $fs_license \
--output-spaces T1w MNI152NLin6Asym:res-2
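For context, based on the paths that appear in the logs below, the variables resolve to roughly the following (reconstructed for illustration; the image and license paths are placeholders, not the actual values):

hostpath=/wynton/group/rsl/ABCD                        # root directory bound into the container
fmriprep=/path/to/fmriprep-22.0.2.simg                 # placeholder for the actual image location
bids_raw_dir=$hostpath/code/working/slashofregas/rsfmri_pipeline/Data/hifu_rsfmri_bids/rawdata
bids_deriv_dir=$hostpath/code/working/slashofregas/rsfmri_pipeline/Data/hifu_rsfmri_bids/derivatives
working_dir=$hostpath/code/working/slashofregas/rsfmri_pipeline/fmri_prep/working
sub=hifuRS01a
fs_license=/path/to/license.txt                        # placeholder for the FreeSurfer license file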
Version:
22.0.2
Environment (Docker, Singularity, custom installation):
Singularity
Data formatted according to a validatable standard? Please provide the output of the validator:
Yes, at least according to fMRIPrep's built-in BIDS validator.
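As an extra check, the standalone validator could be run directly against the dataset (a sketch; this assumes Node.js is available on the host, which I haven't confirmed here):

npx bids-validator $bids_raw_dir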
Relevant log outputs (up to 20 lines):
Here are the lines from the terminal output when it errors out (it claims 'No such file or directory', but my read is that this relates to the permissions error described above):
[Node] Error on "fmriprep_22_0_wf.single_subject_hifuRS01a_wf.anat_preproc_wf.surface_recon_wf.autorecon1" (/wynton/group/rsl/ABCD/code/working/slashofregas/rsfmri_pipeline/fmri_prep/working/fmriprep_22_0_wf/single_subject_hifuRS01a_wf/anat_preproc_wf/surface_recon_wf/autorecon1)
221207-09:32:33,648 nipype.workflow ERROR:
Node autorecon1 failed to run on host dev1.wynton.ucsf.edu.
221207-09:32:33,657 nipype.workflow ERROR:
Saving crash info to /wynton/group/rsl/ABCD/code/working/slashofregas/rsfmri_pipeline/Data/hifu_rsfmri_bids/derivatives/sub-hifuRS01a/log/20221207-093126_8a2fb94f-5a20-4688-9877-8d4ce05b6b79/crash-20221207-093233-slashofregas-autorecon1-0840c0c3-4797-4135-8a1e-a91a7b67dd48.txt
Traceback (most recent call last):
File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
result["result"] = node.run(updatehash=updatehash)
File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 527, in run
result = self._run_interface(execute=True)
File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 645, in _run_interface
return self._run_command(execute)
File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 771, in _run_command
raise NodeExecutionError(msg)
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node autorecon1.
Cmdline:
recon-all -autorecon1 -i /wynton/group/rsl/ABCD/code/working/slashofregas/rsfmri_pipeline/Data/hifu_rsfmri_bids/rawdata/sub-hifuRS01a/ses-pre/anat/sub-hifuRS01a_ses-pre_T1w.nii -noskullstrip -noT2pial -noFLAIRpial -openmp 8 -subjid sub-hifuRS01a -sd /wynton/group/rsl/ABCD/code/working/slashofregas/rsfmri_pipeline/Data/hifu_rsfmri_bids/derivatives/sourcedata/freesurfer
Stdout:
Stderr:
/wynton/home/sugrue/slashofregas/rsfmri_pipeline/fmri_prep/working/fmriprep_22_0_wf/single_subject_hifuRS01a_wf/anat_preproc_wf/surface_recon_wf/autorecon1: No such file or directory.
Traceback:
RuntimeError: subprocess exited with code 1.
221207-09:33:39,376 nipype.workflow ERROR:
could not run node: fmriprep_22_0_wf.single_subject_hifuRS01a_wf.anat_preproc_wf.surface_recon_wf.autorecon1
221207-09:33:39,445 nipype.workflow CRITICAL:
fMRIPrep failed: Traceback (most recent call last):
File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
result["result"] = node.run(updatehash=updatehash)
File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 527, in run
result = self._run_interface(execute=True)
File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 645, in _run_interface
return self._run_command(execute)
File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 771, in _run_command
raise NodeExecutionError(msg)
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node autorecon1.
Screenshots / relevant information:
It seems like the autorecon1 command from within Singularity is hitting a permissions issue when creating a folder, but this is my first time using Singularity and I'm not sure how to make it play nicely with the HPC. To note: all the working dirs and everything else fMRIPrep touches are subdirectories under $hostpath, so, at least as I understand bind mounts, they should all be visible inside the .simg.
Just tried running again and got the same error (I have also tried running with the server's /scratch folders bound, and the same thing happens).
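As a minimal sanity check, something like this should show whether the container can actually see and write to the working directory (a sketch using the same bind options as my run script; the touched file name is just a throwaway):

singularity exec --cleanenv -B $hostpath $fmriprep ls -ld $working_dir                              # is the path visible inside the container?
singularity exec --cleanenv -B $hostpath $fmriprep touch $working_dir/.write_test && echo writable  # can the container write there?
rm -f $working_dir/.write_test                                                                      # clean up from the host side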