Using fmriprep on Sherlock cluster

Hi,
I am using fmriprep on Sherlock.

The path to my dataset is: /scratch/PI/aetkin/redwan/framing/sourcedata/
My dataset is in BIDS format according to the online BIDS validator.
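(For reference, the same check can also be run from the command line; a minimal sketch, assuming the bids-validator npm package is available on the machine and using the dataset path above:

npx bids-validator /scratch/PI/aetkin/redwan/framing/sourcedata/
)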

On Sherlock I use the following commands:
module load singularity
sbatch framing.sbatch


My framing.sbatch file looks like this:

#!/bin/bash 
#
#all commands that start with SBATCH contain commands that are just used by SLURM for scheduling
#################
#set a job name  
#SBATCH --job-name=Framing
#################  
#a file for job output, you can check job progress, append the job ID with %j to make it unique
#SBATCH --output=Framing.%j.out
#################
# a file for errors from the job
#SBATCH --error=Framing.%j.err
#################
#time you think you need; default is 2 hours
#format could be dd-hh:mm:ss, hh:mm:ss, mm:ss, or mm
#SBATCH --time=8:00:00
#################
#Quality of Service (QOS); think of it as job priority, there is also --qos=long with a max job length of 7 days, qos normal is 48 hours.
# REMOVE "normal" and set to "long" if you want your job to run longer than 48 hours,
# NOTE- in the hns partition the default max run time is 7 days, so you won't need to include qos
#SBATCH --qos=normal
# We are submitting to the normal partition, there are several on sherlock: normal, gpu, owners, hns, bigmem (jobs requiring >64Gigs RAM)
# The more partitions you can submit to the less time you will wait, you can submit to multiple partitions with -p at once in comma separated format.
#SBATCH -p normal 
#################
#number of nodes you are requesting, the more you ask for the longer you wait
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=16
#################
# --mem is memory per node; default is 4000 MB per CPU, remember to ask for enough mem to match your CPU request, since
# sherlock automatically allocates 4 Gigs of RAM/CPU, if you ask for 8 CPUs you will need 32 Gigs of RAM, so either
# leave --mem commented out or request >= the RAM needed for your CPU request.
#SBATCH --mem=16000
#################
# Have SLURM send you an email when the job ends or fails, careful, the email could end up in your clutter folder
# Also, if you submit hundreds of jobs at once you will get hundreds of emails.
#SBATCH --mail-type=END,FAIL # notifications for job done & fail
# Remember to change this to your email
#SBATCH --mail-user=rmaatoug@stanford.edu
#now run normal batch commands
# note the "CMD BATCH is an  specific command
module load singularity
# You can use srun if your job is parallel

singularity run /share/PI/russpold/singularity_images/poldracklab_fmriprep_0.3.1-2017-03-25-c38ac0136e8c.img /scratch/PI/aetkin/redwan/framing/sourcedata/ /scratch/PI/aetkin/redwan/preprocessed/ participant --participant_label sub-01

The error file from Sherlock seems to say that there is a mistake in my syntax with "participant", but I have tried many different ways without any success.

I would really appreciate any help

Thank you,
Redwan

What is the exact error you are getting?

It should be --participant_label 01 rather than --participant_label sub-01

I would also recommend specifying the working directory as local scratch (-w $LOCAL_SCRATCH); otherwise $HOME will be used for storing temporary files and you are likely to run out of quota.
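Putting both changes together, the call would look something like this (a sketch assuming the same container image and paths as above):

singularity run /share/PI/russpold/singularity_images/poldracklab_fmriprep_0.3.1-2017-03-25-c38ac0136e8c.img /scratch/PI/aetkin/redwan/framing/sourcedata/ /scratch/PI/aetkin/redwan/preprocessed/ participant --participant_label 01 -w $LOCAL_SCRATCH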

Thank you Chris,

The preprocessing is running now.
I have opened the error file and found this:

/usr/local/miniconda/lib/python3.6/site-packages/sklearn/cross_validation.py:44: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.
  "This module will be removed in 0.20.", DeprecationWarning)
/usr/local/miniconda/lib/python3.6/site-packages/nipype/workflows/dmri/mrtrix/group_connectivity.py:16: UserWarning: cmp not installed
  warnings.warn('cmp not installed')

Is it something I have to worry about?
Thank you,
Redwan


Nothing to worry about - those are just deprecation warnings you can ignore.

Ok, perfect.

Thank you and have a nice day,
Redwan
