Presence of fieldmaps causes fMRIPrep to crash

Summary of what happened:

Hello!

It’s my first time using fMRIPrep and I’m currently trying to run the pipeline on unprocessed fMRI data from the Human Connectome Project Young Adult (HCP-YA) dataset. In my input BIDS directory I have:

  • T1w scan, with corresponding ‘phasediff’, ‘magnitude1’, and ‘magnitude2’ fieldmaps
  • T2w scan, with corresponding ‘phasediff’, ‘magnitude1’, and ‘magnitude2’ fieldmaps
  • Four functional task runs, each with corresponding RL and LR epi fieldmaps (two opposite phase encoding directions for each run). Each also has a corresponding SBRef scan.

fMRIPrep runs fine when these fieldmaps are taken out of the directory, but as soon as any are present (it doesn’t appear to matter which type), the script crashes at the same point.

Command used:

Here’s the fMRIPrep command line:

singularity run --cleanenv \
    -B ${DATA_DIR}:/data \
    -B ${OUT_DIR}/:/out \
    -B ${SCRATCH_DIR}:/wd \
    -B ${LICENSE}:/license \
    /research/cisc2/shared/fmriprep_singularity/fmriprep_22.1.1.simg \
    --participant-label ${SUBJECT} \
    --omp-nthreads 1 --nthreads 5 --mem_mb 30000 \
    --output-spaces MNI152NLin6Asym:res-2 MNI152NLin2009cAsym fsaverage T1w fsLR:den-32k \
    --fs-license-file /license \
    --skip-bids-validation \
    --work-dir /wd \
    --ignore slicetiming \
    --clean-workdir \
    /data /out/ participant

Version:

22.1.1

Environment (Docker, Singularity, custom installation):

Singularity

Data formatted according to a validatable standard? Please provide the output of the validator:

Relevant log outputs (up to 20 lines):

And here is the output of an example log file when fieldmaps are present:

Number of subjects = 1
This is the task id 1
this is i 0
sub-100307
230221-13:07:17,799 nipype.workflow IMPORTANT:
	 Running fMRIPrep version 22.1.1

         License NOTICE ##################################################
         fMRIPrep 22.1.1
         Copyright 2022 The NiPreps Developers.
         
         This product includes software developed by
         the NiPreps Community (https://nipreps.org/).
         
         Portions of this software were developed at the Department of
         Psychology at Stanford University, Stanford, CA, US.
         
         This software redistributes the versioneer Python package, which is
         Public domain source code.
         
         This software is also distributed as a Docker container image.
         The bootstraping file for the image ("Dockerfile") is licensed
         under the MIT License.
         
         This software may be distributed through an add-on package called
         "Docker Wrapper" that is under the BSD 3-clause License.
         #################################################################
230221-13:07:17,957 nipype.workflow IMPORTANT:
	 Building fMRIPrep's workflow:
           * BIDS dataset path: /data.
           * Participant list: ['100307'].
           * Run identifier: 20230221-130600_75ca5d53-ae64-4ad7-acaf-f31cf956fd8c.
           * Output spaces: MNI152NLin6Asym:res-2 MNI152NLin2009cAsym:res-native fsaverage:den-164k T1w fsLR:den-32k.
           * Pre-run FreeSurfer's SUBJECTS_DIR: /out/sourcedata/freesurfer.
Done

Screenshots / relevant information:

In case it’s helpful, here’s the structure of my BIDS folder:

|   dataset_description.json
|   participants.json
|   participants.tsv
|   README.txt
|   
\---sub-100307
    +---anat
    |       sub-100307_T1w.json
    |       sub-100307_T1w.nii.gz
    |       sub-100307_T2w.json
    |       sub-100307_T2w.nii.gz
    |       
    +---fmap
    |       sub-100307_acq-EmotionLR_dir-LR_epi.json
    |       sub-100307_acq-EmotionLR_dir-LR_epi.nii.gz
    |       sub-100307_acq-EmotionLR_dir-RL_epi.json
    |       sub-100307_acq-EmotionLR_dir-RL_epi.nii.gz
    |       sub-100307_acq-EmotionRL_dir-LR_epi.json
    |       sub-100307_acq-EmotionRL_dir-LR_epi.nii.gz
    |       sub-100307_acq-EmotionRL_dir-RL_epi.json
    |       sub-100307_acq-EmotionRL_dir-RL_epi.nii.gz
    |       sub-100307_acq-MotorLR_dir-LR_epi.json
    |       sub-100307_acq-MotorLR_dir-LR_epi.nii.gz
    |       sub-100307_acq-MotorLR_dir-RL_epi.json
    |       sub-100307_acq-MotorLR_dir-RL_epi.nii.gz
    |       sub-100307_acq-MotorRL_dir-LR_epi.json
    |       sub-100307_acq-MotorRL_dir-LR_epi.nii.gz
    |       sub-100307_acq-MotorRL_dir-RL_epi.json
    |       sub-100307_acq-MotorRL_dir-RL_epi.nii.gz
    |       sub-100307_acq-T1w_magnitude1.nii.gz
    |       sub-100307_acq-T1w_magnitude2.nii.gz
    |       sub-100307_acq-T1w_phasediff.json
    |       sub-100307_acq-T1w_phasediff.nii.gz
    |       sub-100307_acq-T2w_magnitude1.nii.gz
    |       sub-100307_acq-T2w_magnitude2.nii.gz
    |       sub-100307_acq-T2w_phasediff.json
    |       sub-100307_acq-T2w_phasediff.nii.gz
    |       
    \---func
            sub-100307_task-Emotion_dir-LR_bold.json
            sub-100307_task-Emotion_dir-LR_bold.nii.gz
            sub-100307_task-Emotion_dir-LR_events.tsv
            sub-100307_task-Emotion_dir-LR_sbref.json
            sub-100307_task-Emotion_dir-LR_sbref.nii.gz
            sub-100307_task-Emotion_dir-RL_bold.json
            sub-100307_task-Emotion_dir-RL_bold.nii.gz
            sub-100307_task-Emotion_dir-RL_events.tsv
            sub-100307_task-Emotion_dir-RL_sbref.json
            sub-100307_task-Emotion_dir-RL_sbref.nii.gz
            sub-100307_task-Motor_dir-LR_bold.json
            sub-100307_task-Motor_dir-LR_bold.nii.gz
            sub-100307_task-Motor_dir-LR_events.tsv
            sub-100307_task-Motor_dir-LR_sbref.json
            sub-100307_task-Motor_dir-LR_sbref.nii.gz
            sub-100307_task-Motor_dir-RL_bold.json
            sub-100307_task-Motor_dir-RL_bold.nii.gz
            sub-100307_task-Motor_dir-RL_events.tsv
            sub-100307_task-Motor_dir-RL_sbref.json
            sub-100307_task-Motor_dir-RL_sbref.nii.gz

And an example of the metadata included in the sidecar for an EPI file:

{"IntendedFor": "bids::sub-100307/func/sub-100307_task-Emotion_dir-LR_bold.nii.gz", "Manufacturer": "Siemens",
 "ManufacturersModelName": "Connectome Skyra", "MagneticFieldStrength": 3, "ReceiveCoilName": "Standard 32-Channel Siemens Receive Head Coil",
 "ReceiveCoilActiveElements": "HEA;HEP", "PulseSequenceType": "Spin Echo Field Map", "NonlinearGradientCorrection": "false",
 "MRAcquisitionType": "2D", "MTState": "false", "EffectiveEchoSpacing": 0.00058, "EchoTime": 0.058, "FlipAngle": 90, "MultibandAccelerationFactor": 1,
 "TaskName": "Emotion", "RepetitionTime": 7.06, "TotalReadoutTime": 0.08346, "B0FieldIdentifier": "EmotionLR_bold_fmap", "PhaseEncodingDirection": "i"}

I’m not sure why this crash would be happening, but I wonder whether it could be an issue with available memory, or with FreeSurfer, given the point at which the job crashes. Interestingly, if I run the job with FreeSurfer toggled off (--fs-no-reconall), it crashes in exactly the same place whether or not the fieldmaps are present.

Any advice would be much appreciated!

Nick

Hi @nicksouter,

  1. I have relabeled your post as Software Support and added the corresponding template. Can you add whether your data passes BIDS validation?

  2. The log you attached doesn’t seem to indicate any error. Is there a part of the log that shows one? If the job simply isn’t getting past that point, try raising your thread count. Also, how much memory are you devoting to it?

  3. The IntendedFor field should be relative to the subject folder, so the path should begin with func/... (see the sketch after this list).

  4. Regardless of whether any of the above fixes your issue, I would recommend simply using the already preprocessed HCP-YA data.
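
As a concrete sketch for point 3 (abridged to the fieldmap-relevant fields of the sidecar you posted), the EPI sidecar would then look something like:

{
  "IntendedFor": "func/sub-100307_task-Emotion_dir-LR_bold.nii.gz",
  "PhaseEncodingDirection": "i",
  "TotalReadoutTime": 0.08346,
  "B0FieldIdentifier": "EmotionLR_bold_fmap"
}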

Best,
Steven

Hi Steven,

Thanks for the speedy response!

  1. Yes, with all files present the folder passes BIDS validation with no errors or warnings.

  2. Maybe ‘crashing’ is the wrong terminology here. Essentially, having the fieldmaps present seems to stop fMRIPrep from ever actually starting to run. That ‘Done’ should only appear at the very end of the script, after fMRIPrep has completed successfully. Given this, do you think it’s unlikely to be an actual issue with fMRIPrep? I’m running this on our high-performance cluster, which is unfortunately a bit of a black box; I don’t get any information beyond the log shown here…

  3. I’ve played around with a couple of iterations of IntendedFor. The one shown above follows the template given in the BIDS specification, but I have also tried the subject-relative structure you recommend (both previously and again just now as I’m typing this), and unfortunately I get the same outcome.

  4. I’m currently working on a project that involves experimentally manipulating fMRIPrep’s command line to assess the impact of different steps on (a) estimated carbon emissions and (b) preprocessing performance. We’re looking for a relatively large sample with simple fMRI tasks, which is why unprocessed HCP was a good fit for us. I’d be very open to suggestions of alternative datasets if you can think of anything appropriate!

Thank you :slight_smile:

Nick

Hi @nicksouter,

This might be a memory issue. You can try upping your thread count and devoting more memory; for HCP you would probably need something like 16 threads and 32 GB if you want to process all tasks simultaneously. Also, when fMRIPrep begins, it starts by indexing the whole dataset, and if your BIDS dataset contains all HCP subjects that will take a long time. When I run fMRIPrep on big datasets, I copy each subject into its own ‘fake’ single-subject BIDS directory in scratch space and have each instance of fMRIPrep run on one of those directories, which starts much faster.
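
Something along these lines should work (an untested sketch reusing the variables from your command; FAKE_BIDS is just a placeholder name):

# Build a throwaway single-subject BIDS directory in scratch space
FAKE_BIDS=${SCRATCH_DIR}/bids_sub-${SUBJECT}
mkdir -p ${FAKE_BIDS}
# Copy the dataset-level files plus the one subject you want to process
cp ${DATA_DIR}/dataset_description.json ${DATA_DIR}/participants.json \
   ${DATA_DIR}/participants.tsv ${DATA_DIR}/README.txt ${FAKE_BIDS}/
cp -r ${DATA_DIR}/sub-${SUBJECT} ${FAKE_BIDS}/
# ...then bind ${FAKE_BIDS} (instead of ${DATA_DIR}) to /data in your singularity call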

You might find the Healthy Brain Network project easier to deal with. It has been curated to BIDS and can be downloaded from Amazon S3.

The S3 bucket is at s3://fcp-indi/data/Projects/HBN/BIDS_curated/

It contains resting state and two movie watching tasks.
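
If you have the AWS CLI available, you can browse and pull the data without credentials, e.g. (a sketch; the local destination path is a placeholder):

# List the public bucket (no AWS account needed)
aws s3 ls --no-sign-request s3://fcp-indi/data/Projects/HBN/BIDS_curated/
# Sync the curated BIDS data (or a subdirectory of it) to local storage
aws s3 sync --no-sign-request s3://fcp-indi/data/Projects/HBN/BIDS_curated/ /path/to/local/HBN/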

Best,
Steven

Hi Steven,

I probably should have noted that so far I only have one HCP subject downloaded, so it’s a pretty small dataset as it stands. All the same, I’ll try playing around with the thread and memory parameters to see whether that makes a difference. The largest we’ll go is 100 subjects, if/once we’re able to get it off the ground, and yes, I was planning to run the job in smaller chunks of participants rather than on the full dataset at once.

I’ll look into accessing the Healthy Brain Network, but my intuition is that the movie-watching data might not be quite as constrained as we need. One of the preprocessing ‘performance’ metrics we’re considering is the height of the statistical effect in ROIs, and HCP is great for us because its simple motor task and emotion (face perception) task are probably as close as we can get to uncontroversial and predictable task-based activation, which is why I’m so eager to get it working. Still, I’ll keep considering other datasets, as HCP is certainly looking a little tricky to work with!

Thanks again :slight_smile:

Nick

Hi @nicksouter,

Consider the Queensland Twin Adolescent Brain Project
Data at: OpenNeuro
Descriptor at: https://www.biorxiv.org/content/10.1101/2022.05.19.492753v2

Best,
Steven