Missing 001.mgz using fmriprep 1.0.8

I was using fmriprep 1.0.8 on Sherlock to preprocess some functional and anatomical scans, but the original anatomical scan was never set up for FreeSurfer. Any thoughts will be appreciated! Thanks!

I got the following error report:

Traceback (most recent call last):
  File "/usr/local/miniconda/bin/fmriprep", line 11, in <module>
    load_entry_point('fmriprep==1.0.8', 'console_scripts', 'fmriprep')()
  File "/usr/local/miniconda/lib/python3.6/site-packages/fmriprep/cli/run.py", line 267, in main
    fmriprep_wf.run(**plugin_settings)
  File "/usr/local/miniconda/lib/python3.6/site-packages/niworkflows/nipype/pipeline/engine/workflows.py", line 602, in run
    runner.run(execgraph, updatehash=updatehash, config=self.config)
  File "/usr/local/miniconda/lib/python3.6/site-packages/niworkflows/nipype/pipeline/plugins/base.py", line 168, in run
    self._clean_queue(jobid, graph, result=result))
  File "/usr/local/miniconda/lib/python3.6/site-packages/niworkflows/nipype/pipeline/plugins/base.py", line 227, in _clean_queue
    raise RuntimeError("".join(result['traceback']))
RuntimeError: Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.6/site-packages/niworkflows/nipype/pipeline/plugins/multiproc.py", line 68, in run_node
    result['result'] = node.run(updatehash=updatehash)
  File "/usr/local/miniconda/lib/python3.6/site-packages/niworkflows/nipype/pipeline/engine/nodes.py", line 487, in run
    result = self._run_interface(execute=True)
  File "/usr/local/miniconda/lib/python3.6/site-packages/niworkflows/nipype/pipeline/engine/nodes.py", line 571, in _run_interface
    return self._run_command(execute)
  File "/usr/local/miniconda/lib/python3.6/site-packages/niworkflows/nipype/pipeline/engine/nodes.py", line 650, in _run_command
    result = self._interface.run(cwd=outdir)
  File "/usr/local/miniconda/lib/python3.6/site-packages/niworkflows/nipype/interfaces/base/core.py", line 516, in run
    runtime = self._run_interface(runtime)
  File "/usr/local/miniconda/lib/python3.6/site-packages/niworkflows/nipype/interfaces/base/core.py", line 1023, in _run_interface
    self.raise_exception(runtime)
  File "/usr/local/miniconda/lib/python3.6/site-packages/niworkflows/nipype/interfaces/base/core.py", line 960, in raise_exception
    ).format(**runtime.dictcopy()))
RuntimeError: Command:
recon-all -autorecon1 -noskullstrip -hires -openmp 8 -subjid sub-01 -sd /scratch/users/jfj/CCD/preprocessed/freesurfer -xopts-use
Standard output:
INFO: hi-res volumes are conformed to the min voxel size
Subject Stamp: freesurfer-Linux-centos6_x86_64-stable-pub-v6.0.1-f53a55a
Current Stamp: freesurfer-Linux-centos6_x86_64-stable-pub-v6.0.1-f53a55a
INFO: SUBJECTS_DIR is /scratch/users/jfj/CCD/preprocessed/freesurfer
Actual FREESURFER_HOME /opt/freesurfer
-rw-rw-r-- 1 jfj awagner 70349 May 18 03:36 /scratch/users/jfj/CCD/preprocessed/freesurfer/sub-01/scripts/recon-all.log
Linux sh-03-26.int 3.10.0-693.21.1.el7.x86_64 #1 SMP Wed Mar 7 19:03:37 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
'/opt/freesurfer/bin/recon-all' -> '/scratch/users/jfj/CCD/preprocessed/freesurfer/sub-01/scripts/recon-all.local-copy'
#--------------------------------------------
#@# MotionCor Fri May 18 05:26:17 UTC 2018
ERROR: no run data found in /scratch/users/jfj/CCD/preprocessed/freesurfer/sub-01/mri. Make sure to
have a volume called 001.mgz in /scratch/users/jfj/CCD/preprocessed/freesurfer/sub-01/mri/orig.
If you have a second run of data call it 002.mgz, etc.
See also: http://surfer.nmr.mgh.harvard.edu/fswiki/FsTutorial/Conversion
Linux sh-03-26.int 3.10.0-693.21.1.el7.x86_64 #1 SMP Wed Mar 7 19:03:37 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

recon-all -s sub-01 exited with ERRORS at Fri May 18 05:26:17 UTC 2018

For more details, see the log file /scratch/users/jfj/CCD/preprocessed/freesurfer/sub-01/scripts/recon-all.log
To report a problem, see http://surfer.nmr.mgh.harvard.edu/fswiki/BugReporting

Here is my sbatch file:

#!/bin/bash

#all lines that start with #SBATCH are directives used by SLURM for scheduling
#################
#set a job name
#SBATCH --job-name=Framing
#################
#a file for job output; you can check job progress here. Append %j to include the job ID and make the filename unique
#SBATCH --output=Framing.%j.out
#################
#a file for errors from the job
#SBATCH --error=Framing.%j.err
#################
#time you think you need; default is 2 hours
#format could be dd-hh:mm:ss, hh:mm:ss, mm:ss, or mm
#SBATCH --time=47:00:00
#################
#Quality of Service (QOS); think of it as job priority. There is also --qos=long, with a max job length of 7 days; qos normal is 48 hours.
#REMOVE "normal" and set to "long" if you want your job to run longer than 48 hours.
#NOTE: in the hns partition the default max run time is 7 days, so you won't need to include qos
#SBATCH --qos=normal
#We are submitting to the normal partition; there are several on Sherlock: normal, gpu, owners, hns, bigmem (jobs requiring >64 GB RAM)
#The more partitions you can submit to, the less time you will wait; you can submit to multiple partitions at once with -p in comma-separated format.
#SBATCH -p normal
#################
#number of nodes you are requesting, the more you ask for the longer you wait
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=16
#################
#--mem is memory per node; the default is 4000 MB per CPU. Remember to ask for enough memory to match your CPU request, since
#Sherlock allocates 4 GB of RAM per CPU by default; if you ask for 8 CPUs you will need 32 GB of RAM. So either
#leave --mem commented out or request >= the RAM needed for your CPU request.
#SBATCH --mem=60000
#################
#Have SLURM send you an email when the job ends or fails, careful, the email could end up in your clutter folder
#Also, if you submit hundreds of jobs at once you will get hundreds of emails.
#SBATCH --mail-type=END,FAIL # notifications for job done & fail
#Remember to change this to your email
#SBATCH --mail-user=jiefeng.jiang@stanford.edu
#now run normal batch commands
#note the "CMD BATCH is an specific command
module purge
module load system
module load singularity
export FS_LICENSE=$PWD/license.txt
#You can use srun if your job is parallel
singularity run /share/PI/russpold/singularity_images/poldracklab_fmriprep_1.0.8-2018-02-23-8c60ec5604ac.img --anat-only $SCRATCH/CCD $SCRATCH/CCD/preprocessed participant --participant_label sub-01

It looks as if the subject directory already existed (-xopts-use is passed rather than an -expert file being specified), so the ReconAll interface will not attempt to insert new images, on the basis that they were presumably already inserted.
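For reference, here is a minimal sketch of that behaviour with plain nipype, assuming FreeSurfer is available and that a (placeholder) T1w file named sub-01_T1w.nii.gz exists in the working directory; this is not fmriprep's exact code:

from nipype.interfaces.freesurfer import ReconAll

reconall = ReconAll()
reconall.inputs.subject_id = 'sub-01'
reconall.inputs.directive = 'autorecon1'
reconall.inputs.subjects_dir = '/scratch/users/jfj/CCD/preprocessed/freesurfer'
reconall.inputs.T1_files = ['sub-01_T1w.nii.gz']  # placeholder input file

# If .../freesurfer/sub-01 already exists from an earlier (failed) run, the
# generated command line drops the '-i sub-01_T1w.nii.gz' argument, so no
# 001.mgz is ever created and recon-all stops with the error shown above.
print(reconall.cmdline)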

If you delete your /scratch/users/jfj/CCD/preprocessed/freesurfer/sub-01 directory, I think it should work for you.
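For example, something along these lines (the sbatch filename is just a placeholder for whatever you called the script above):

# remove the partially-created FreeSurfer subject directory
rm -rf /scratch/users/jfj/CCD/preprocessed/freesurfer/sub-01
# then resubmit the job so fmriprep re-imports the T1w as 001.mgz
sbatch fmriprep_framing.sbatch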


Yes, this solved the problem. Thank you!
