Fmriprep srun job killed

Ran singularity (2.3.1) exec fmriprep (with paths specific to my file locations, etc.) with:
salloc -N 1 -n 16 --mem=20G
srun -t 8:00:00 -N 1 -n 16 -p owners --qos=dev --pty --x11 bash

Most of fmriprep pipeline ran except for two strange things:

  1. It took a long time (~45 min) to finish creating anat_preproc_wf/skullstrip_ants_wf/t1_skull_strip/report.html (this could just be how long skull-stripping takes, but I'm not sure)
  2. srun killed the pipeline at node _bold_to_t1w_transform120 after running antsApplyTransforms
    Error: "srun: error: sh-101-20: task 0: Killed"

Any insight?

Since FMRIPREP runs FreeSurfer by default, the expected runtime could exceed 8h. Could you try running this with a 16h cutoff?

You can also disable FreeSurfer with --no-freesurfer.
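
For example, keeping the same allocation but with a 16-hour limit (the image name and paths below are just placeholders; adjust them, and the partition/QOS, to your setup):

# same interactive allocation as before, but with a longer time limit
# (make sure the allocation itself permits at least 16 hours)
salloc -N 1 -n 16 --mem=20G
srun -t 16:00:00 -N 1 -n 16 -p owners --qos=dev --pty --x11 bash
# inside the interactive shell, run fmriprep through the container
singularity exec fmriprep.img fmriprep /path/to/bids /path/to/output participant --no-freesurfer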

Best,
Chris

I ran it with --no-freesurfer

Just to address the first issue, skull-stripping uses antsBrainExtraction. With 8 threads, I expect it to take at least 30m. 45 minutes is a little long, but not unreasonably so.

How many BOLD runs do you have, and how many volumes per run? And what version are you using? If you’re using 1.0.0-rc3 or earlier, you may be running into file number quotas if you have many BOLD volumes in total.
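
If you want to check whether you are getting close to a file-number quota, something like the following can help (the work-directory path matches the one you used; the lfs command only applies if your scratch filesystem is Lustre):

# count the files fMRIPrep has written to its working directory so far
find /scratch/users/mcsnyder/work -type f | wc -l
# on Lustre scratch filesystems, show per-user block and inode (file-count) usage
lfs quota -u $USER /scratch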

There is one subject with two tasks; each task has 260 sub-bricks (volumes). I am using fmriprep v1.0.0-rc3-dev. How do I get the newer version of fmriprep?

Thank you!

Update: I just downloaded the new rc5 version of fmriprep and copied it over (I didn't create a new Docker image and convert it; should I do that?). Running again now.

You don't need to build a new Docker image yourself; converting the most recent poldracklab/fmriprep image with docker2singularity is what we'd recommend.
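
For reference, the conversion typically looks something like this (the output path is a placeholder for wherever you want the resulting .img file):

# pull the most recent image and convert it to a Singularity image
docker pull poldracklab/fmriprep:latest
docker run --privileged -t --rm \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /absolute/path/for/output:/output \
    singularityware/docker2singularity poldracklab/fmriprep:latest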

OK, I'm trying this. I opened a new singularity shell, but has the format of the fmriprep command changed? For some reason it won't recognize:
fmriprep /scratch/users/mcsnyder/ /scratch/users/mcsnyder/ participant 335411 -w /scratch/users/mcsnyder/work/ --no-freesurfer

even though this worked with rc3.

I think you’re missing --participant_label, but also you can simply run it as follows:

singularity run fmriprep.img /scratch/users/mcsnyder /scratch/users/mcsnyder \
    participant --participant_label 335411 -w /scratch/users/mcsnyder/work \
    --no-freesurfer

It's throwing an error that I think is due to an incompatibility between the fmriprep version and the Docker image. I am using fmriprep version rc5 because that's the newest one available on GitHub, and I added a script of my own to convert my data into BIDS format so it is compatible. However, the fmriprep version in the fmriprep setup script is rc6. Is there a newer fmriprep (rc6) somewhere? Or an older Docker image that is compatible with rc5?

1.0.0-rc6-dev is the version string for any commit on the GitHub master branch after the release of 1.0.0-rc5. There have been no changes merged since the release, so that shouldn't be an issue.

However, I’m very unclear on how you’re actually running this. Could you share your process in a bit more detail?
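
For reference, one quick way to confirm which fmriprep version a container actually provides (the image name here is a placeholder for yours) is:

# print the version of the fmriprep package installed inside the image
singularity exec fmriprep.img python -c "import fmriprep; print(fmriprep.__version__)"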

Note: all .py file names are written as "…py" because I'm a new user and apparently can't post links.

  1. Made a Docker image from the latest poldracklab/fmriprep version and converted it to a Singularity-compatible image. I then launched a singularity shell using that image and checked the version of fmriprep (version rc6).
  2. Copied this image to the server that can't use Docker and added edited versions of run…py, base…py, and a new file called BIDSgenerator…py. These new files make our data structure BIDS-compatible by constructing, within the raw data directory, a BIDS directory with the proper layout for the pipeline to read. NOTE: these run…py and base…py scripts were based on rc3, so any changes made to run…py and base…py between rc3 and rc5 were discarded.
  3. Launched singularity shell -B /scratch:/scratch -B /oak/stanford/groups/menon:/oak/stanford/groups/menon/ -B /oak/stanford/groups/menon/software/singularity/fmriprep/fmriprep:/usr/local/miniconda/lib/python3.6/site-packages/fmriprep/ /share/PI/menon/lab_shared/singularity/singularity_images/poldracklab_fmriprep_snyder.img. This binds the (writeable) scratch directories where my test subject is stored, the freesurfer directories, and the new rc5 fmriprep under software/singularity/ to the location inside the container where the shell looks for the fmriprep package.
  4. Ran fmriprep with: fmriprep /scratch/users/mcsnyder/ /scratch/users/mcsnyder/ participant 335411 -w /scratch/users/mcsnyder/work/ --no-freesurfer
  5. Returned error:

base…py line 329: Subject Data:
{'fmap': [], 'bold': ['/scratch/users/mcsnyder/data/imaging/BIDS/sub-335411/func/sub-335411_task-music_depression_bold.nii.gz', '/scratch/users/mcsnyder/data/imaging/BIDS/sub-335411/func/sub-335411_task-resting_state_1_bold.nii.gz'], 'sbref': [], 't2w': [], 't1w': ['/scratch/users/mcsnyder/data/imaging/BIDS/sub-335411/anat/sub-335411_T1w.nii.gz']}
Traceback (most recent call last):
File "/usr/local/miniconda/bin/fmriprep", line 11, in <module>
load_entry_point('fmriprep==1.0.0rc6.dev0', 'console_scripts', 'fmriprep')()
File "/usr/local/miniconda/lib/python3.6/site-packages/fmriprep/cli/run.py", line 218, in main
errno = create_workflow(opts)
File "/usr/local/miniconda/lib/python3.6/site-packages/fmriprep/cli/run.py", line 324, in create_workflow
ignore_aroma_err=opts.ignore_aroma_denoising_errors,
File "/usr/local/miniconda/lib/python3.6/site-packages/fmriprep/workflows/base.py", line 184, in init_fmriprep_wf
ignore_aroma_err=ignore_aroma_err)
File "/usr/local/miniconda/lib/python3.6/site-packages/fmriprep/workflows/base.py", line 387, in init_single_subject_wf
output_dir=output_dir)
TypeError: init_anat_preproc_wf() got an unexpected keyword argument 'skull_strip_template'

It would be difficult and time-consuming for us to debug and support the custom, modified version of FMRIPREP you are working with. A cleaner approach would be to decouple the custom data conversion you wrote from FMRIPREP itself.

In this scenario you would have two independent applications: a) your own custom code used to convert data into BIDS, and b) a vanilla, unmodified version of FMRIPREP. This way we would be able to debug the FMRIPREP part and replicate any issues you are running into on our side.
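
For example, the two steps could look like this (the BIDSgenerator…py interface and the bids/output paths are hypothetical placeholders; the fmriprep call mirrors the one suggested earlier in this thread):

# step 1: convert the raw data to BIDS with your own, standalone script
# (command-line arguments are illustrative only)
python BIDSgenerator.py /scratch/users/mcsnyder/data/imaging /scratch/users/mcsnyder/bids
# step 2: run an unmodified fMRIPrep container on the resulting BIDS directory
singularity run fmriprep.img /scratch/users/mcsnyder/bids /scratch/users/mcsnyder/out \
    participant --participant_label 335411 -w /scratch/users/mcsnyder/work \
    --no-freesurfer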