fMRIPrep, tedana, and transform

Hello!

I have run multi-echo (ME) data through fMRIPrep, output the individual echoes, and then run these through tedana to create optimally combined output files. I am now attempting to transform these files into T1w space, as I want to be able to convert my output files to CIFTI format (and this is most straightforward with data in T1w space).

The code I am running produces an output file. However, it is only 100 KB in size, so I assume something has gone wrong. I have been troubleshooting but am unsure what is wrong with my script. I would appreciate any advice. My script is below:


#!/bin/bash
#SBATCH --job-name=transform_proc
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=25G
#SBATCH --time=36:00:00
#SBATCH --output=t1transform_proc_%A_%a.log
#SBATCH --error=t1transform_proc_%A_%a.err
#SBATCH --mail-type=END,FAIL
#SBATCH --mail-user=
#SBATCH --array=0-27  # 14 participants × 2 sessions

export PATH=$PATH:/home/dumbr174/.local/bin

# Load necessary modules
module avail FSL
module load apptainer/FSL/6.0.7
module load apptainer/fMRIPrep/23.2.3

# Define all subject/session combinations
declare -a combinations=(
  "pfm01 a"
  "pfm01 b"
  "pfm02 a"
  "pfm02 b"
  "pfm03 a"
  "pfm03 b"
  "pfm04 a"
  "pfm04 b"
  "pfm05 a"
  "pfm05 b"
  "pfm06 a"
  "pfm06 b"
  "pfm07 a"
  "pfm07 b"
  "pfm08 a"
  "pfm08 b"
  "pfm09 a"
  "pfm09 b"
  "pfm10 a"
  "pfm10 b"
  "pfm11 a"
  "pfm11 b"
  "pfm12 a"
  "pfm12 b"
  "pfm13 a"
  "pfm13 b"
  "pfm14 a"
  "pfm14 b"
)

# Get the current combination from the SLURM array index
combination=(${combinations[$SLURM_ARRAY_TASK_ID]})
subject=${combination[0]}
session=${combination[1]}

base_path="/projects/sciences/psychology/imageotago/dumbr174/PFMtrial1/processed/bids/derivatives"

# Input functional file (tedana output)
func_file="$base_path/tedana/sub-${subject}/ses-${session}/desc-denoised_bold.nii.gz"
coreg_transform="$base_path/fmriprep/sub-${subject}/ses-${session}/func/sub-${subject}_ses-${session}_task-rest_run-001_from-boldref_to-T1w_mode-image_desc-coreg_xfm.txt"

# Reference T1w anatomical file
reference_file="$base_path/fmriprep/sub-${subject}/ses-${session}/anat/sub-${subject}_ses-${session}_run-001_desc-preproc_T1w.nii.gz"

# Output directory and file
output_dir="$base_path/completed_multiecho/sub-${subject}/ses-${session}"
mkdir -p "$output_dir"
output_file="$output_dir/sub-${subject}_ses-${session}_desc-denoised_space-T1w_bold.nii.gz"

echo "Running antsApplyTransforms for sub-${subject} ses-${session} ..."

apptainer exec /opt/apptainer_img/fmriprep-23.2.3.sif \
antsApplyTransforms -d 3 \
  -i "$func_file" \
  -r "$reference_file" \
  -t "$coreg_transform" \
  -o "$output_file" \
  -n Linear -v

echo "Transform completed for sub-${subject} ses-${session}"

Hi @Brydied, welcome to Neurostars!

Could you try replacing -d 3 with -e 3 in your command? I was trying the same process on my side and got this idea from this thread: antsApplyTransforms on 4D BOLD images · Issue #1717 · ANTsX/ANTs · GitHub. It worked for me.
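For reference, the edited call might look like the following sketch (paths and variables as in the original script). In the linked issue, -e 3 tells antsApplyTransforms that the input is a 3D time series, and it is typically used together with -d 3 rather than instead of it:

```shell
# Sketch only: -e 3 marks the input as a time series (4D), per ANTsX/ANTs issue #1717
apptainer exec /opt/apptainer_img/fmriprep-23.2.3.sif \
antsApplyTransforms -d 3 -e 3 \
  -i "$func_file" \
  -r "$reference_file" \
  -t "$coreg_transform" \
  -o "$output_file" \
  -n Linear -v
```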

Hi there!

I tried that, but I got no output from it, and I additionally got this error message:

/usr/bin/lua: /usr/share/lmod/lmod/libexec/Cache.lua:341: bad argument #1 to 'next' (table expected, got boolean)
stack traceback:
[C]: in function 'next'
/usr/share/lmod/lmod/libexec/Cache.lua:341: in upvalue 'l_readCacheFile'
/usr/share/lmod/lmod/libexec/Cache.lua:561: in function 'Cache.build'
/usr/share/lmod/lmod/libexec/ModuleA.lua:685: in function 'ModuleA.singleton'
/usr/share/lmod/lmod/libexec/Hub.lua:1145: in function 'Hub.avail'
/usr/share/lmod/lmod/libexec/cmdfuncs.lua:144: in function 'Avail'
/usr/share/lmod/lmod/libexec/lmod:514: in function 'main'
/usr/share/lmod/lmod/libexec/lmod:585: in main chunk
[C]: in ?
/var/spool/slurmd/job2339251/slurm_script: line 82: 3541299 Killed apptainer exec /opt/apptainer_img/fmriprep-23.2.3.sif antsApplyTransforms -e 3 -i "$func_file" -r "$reference_file" -t "$coreg_transform" -o "$output_file" -n Linear -v
slurmstepd: error: Detected 1 oom_kill event in StepId=2339251.batch. Some of the step tasks have been OOM Killed.

It looks like an issue with the memory allocated to the job by Slurm. Could you try giving more CPU RAM to your job?

I increased my allocated memory bit by bit, and it did not make any difference. Unless there is something else I need to alter?

How much memory did you request? "OOM killed" means "out of memory" killed.

I have increased the memory up to 80 GB. I'm concerned something else is causing this issue.

How did you increase the memory? Could you show the Slurm directives in your script (the lines starting with #SBATCH)?

(Also, you may use -u float in your antsApplyTransforms command to keep the output data type, and therefore the file size, the same as the input.)

Also, you should decrease the size of the reference image. What is the voxel size of the BOLD images?

You should downsample your reference image to your BOLD voxel size; this should help with the memory issue.

You could run these commands:

bold_voxel_size=$(apptainer exec /opt/apptainer_img/fmriprep-23.2.3.sif fslinfo $func_file | grep pixdim1 | cut -f 3)

apptainer exec /opt/apptainer_img/fmriprep-23.2.3.sif flirt -in $reference_file -ref $reference_file -out ${reference_file/_T1w.nii.gz/LR_T1w.nii.gz} -applyisoxfm $bold_voxel_size

reference_file_LR=${reference_file/_T1w.nii.gz/LR_T1w.nii.gz}

apptainer exec /opt/apptainer_img/fmriprep-23.2.3.sif \
antsApplyTransforms -d 3 \
  -i "$func_file" \
  -r "$reference_file_LR" \
  -t "$coreg_transform" \
  -o "$output_file" \
  -n Linear -v
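One small note on the first command: fslinfo aligns its columns with variable whitespace, so parsing with cut and a fixed tab field can be fragile, whereas awk splits on any run of whitespace. A minimal sketch against a fabricated fslinfo-style snippet (the values are illustrative, not from the real data):

```shell
# Fabricated fslinfo-style output for illustration only
fslinfo_output='data_type      FLOAT32
dim1           96
pixdim1        2.400000
pixdim2        2.400000'

# awk splits fields on any whitespace, so the padding style does not matter
bold_voxel_size=$(printf '%s\n' "$fslinfo_output" | awk '$1 == "pixdim1" {print $2}')
echo "$bold_voxel_size"   # prints 2.400000
```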

I increased the memory as follows:

#!/bin/bash
#SBATCH --job-name=transform_proc
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=256G
#SBATCH --time=36:00:00
#SBATCH --output=t1transform_proc_%A_%a.log
#SBATCH --error=t1transform_proc_%A_%a.err
#SBATCH --mail-type=END,FAIL
#SBATCH --mail-user=dumbr174@otago.student.ac.nz
#SBATCH --array=0-27 # 14 participants × 2 sessions

It appears to have worked now; there is no OOM error! The files must just be really large due to being multi-echo. The outputs are about 7 GB each!

Would you say this is a sufficient fix, or are the output files too big / requiring too much memory? My next step is to do manual ICA denoising.

I think the main issue is that the output of the antsApplyTransforms command you were using was at the resolution of the T1w image (usually 1 mm isotropic or less), whereas BOLD images usually have a voxel size between 2 and 3 mm.
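To make the size difference concrete, here is a rough back-of-the-envelope calculation (assuming uncompressed float32 data and illustrative matrix sizes, not the actual acquisition parameters):

```shell
# Uncompressed float32 4D NIfTI size ~= nx * ny * nz * nvols * 4 bytes
# Illustrative grids: 1 mm T1w-resolution (256^3) vs ~2.5 mm BOLD-resolution (96x96x60)
nvols=300
t1w_bytes=$(( 256 * 256 * 256 * nvols * 4 ))
bold_bytes=$(( 96 * 96 * 60 * nvols * 4 ))
echo "T1w-resolution 4D:  $(( t1w_bytes / 1024 / 1024 )) MiB"   # 19200 MiB
echo "BOLD-resolution 4D: $(( bold_bytes / 1024 / 1024 )) MiB"  # 632 MiB
```

Roughly a 30x difference before compression, which is consistent with the jump to multi-GB outputs when resampling BOLD data onto a 1 mm T1w grid.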

I have gotten it to work by increasing the memory more! Do you think I should be using the code you suggested instead? Or is increasing the memory a sufficient fix?

You should definitely decrease the size of your files with a strategy similar to what I am suggesting; otherwise you will use too many resources in your analyses with no gain in your results.

Thank you very much for your help! Glad to know that it is capable of working - I will attempt to decrease the file sizes :slight_smile:


I don’t want to resurrect this topic since it’s been solved, but I want to note that the easiest way to get denoised data in your target space is to (1) run fMRIPrep with --me-output-echos and whatever output spaces or cifti resolutions you want, (2) run tedana on your individual echoes, and (3) use the ICA components and component table from tedana to denoise the optimally combined data produced by fMRIPrep in your target spaces, rather than denoising in native space and warping/projecting to your target spaces. The preprocessed data produced by fMRIPrep in your requested spaces is the optimally combined data from tedana, so you shouldn’t need to optimally combine your data separately.
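For anyone following along, step (1) above might look roughly like the sketch below (placeholder paths; --me-output-echos, --output-spaces, and --cifti-output are real fMRIPrep options, but check your version's documentation for exact behavior):

```shell
# Sketch: ask fMRIPrep for individual echoes plus outputs in the target spaces
fmriprep /path/to/bids /path/to/derivatives participant \
  --me-output-echos \
  --output-spaces T1w \
  --cifti-output 91k
```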


Thank you for clarifying this point @tsalo. I had fun playing with antsApplyTransforms to solve this issue, but you are right to point out the best practice for denoising data with fMRIPrep and tedana.

Thank you for the advice! This seems straightforward, and I will explore denoising the T1w-space data in this way. I think I will still do my CIFTI conversion manually though, as I want to work with the NIfTI output files from fMRIPrep for now :slight_smile:

I attempted to remove the tedana components from the fMRIPrep T1w-space BOLD data in this way. I just wanted to check that my method is sound (I made sure the components listed for removal in my log matched those in the ICA table). It was very quick, taking only about 30 seconds per participant:

#!/bin/bash
#SBATCH --job-name=me_denoise
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=256G
#SBATCH --time=36:00:00
#SBATCH --output=me_denoise_%A_%a.log
#SBATCH --error=me_denoise_%A_%a.err
#SBATCH --mail-type=END,FAIL
#SBATCH --mail-user=
#SBATCH --array=0-27  # 14 participants × 2 sessions

export PATH=$PATH:/home/dumbr174/.local/bin

# Load necessary modules
module load apptainer/FSL/6.0.7
module load apptainer/fMRIPrep/23.2.3

# Define all subject/session combinations
declare -a combinations=(
  "pfm01 a"
  "pfm01 b"
  "pfm02 a"
  "pfm02 b"
  "pfm03 a"
  "pfm03 b"
  "pfm04 a"
  "pfm04 b"
  "pfm05 a"
  "pfm05 b"
  "pfm06 a"
  "pfm06 b"
  "pfm07 a"
  "pfm07 b"
  "pfm08 a"
  "pfm08 b"
  "pfm09 a"
  "pfm09 b"
  "pfm10 a"
  "pfm10 b"
  "pfm11 a"
  "pfm11 b"
  "pfm12 a"
  "pfm12 b"
  "pfm13 a"
  "pfm13 b"
  "pfm14 a"
  "pfm14 b"
)

# Get the current combination from the SLURM array index
combination=(${combinations[$SLURM_ARRAY_TASK_ID]})
subject=${combination[0]}
session=${combination[1]}

base_path="/projects/sciences/psychology/imageotago/dumbr174/PFMtrial1/processed/bids/derivatives"

# Input files
preproc_t1w="$base_path/fmriprep/sub-${subject}/ses-${session}/func/sub-${subject}_ses-${session}_task-rest_run-001_space-T1w_desc-preproc_bold.nii.gz"
mixing_tsv="$base_path/tedana/sub-${subject}/ses-${session}/desc-ICA_mixing.tsv"
comp_table="$base_path/tedana/sub-${subject}/ses-${session}/desc-ICA_status_table.tsv"

# Output directory
output_dir="$base_path/completed_multiecho/sub-${subject}/ses-${session}"
mkdir -p "$output_dir"

# Working directory for temporary files
work_dir="$output_dir/temp_$$"
mkdir -p "$work_dir"

echo "Processing multi-echo denoising for sub-${subject} ses-${session} ..."

# Check if input files exist
if [[ ! -f "$preproc_t1w" ]]; then
    echo "Error: Preprocessed T1w file not found: $preproc_t1w"
    exit 1
fi

if [[ ! -f "$mixing_tsv" ]]; then
    echo "Error: Mixing matrix not found: $mixing_tsv"
    exit 1
fi

if [[ ! -f "$comp_table" ]]; then
    echo "Error: Component table not found: $comp_table"
    exit 1
fi

# Extract rejected component indices (0-based) from the final column
echo "Extracting rejected components..."
rejected_comps=$(awk -F'\t' 'NR>1 && $NF=="rejected" {gsub(/ICA_/, "", $1); print $1}' "$comp_table" | tr '\n' ',' | sed 's/,$//')

if [[ -z "$rejected_comps" ]]; then
    echo "No rejected components found. Copying original file..."
    cp "$preproc_t1w" "$output_dir/sub-${subject}_ses-${session}_desc-medenoised_space-T1w_bold.nii.gz"
else
    echo "Rejected components: $rejected_comps"
    
    # Prepare mixing matrix (remove header)
    echo "Preparing mixing matrix..."
    tail -n +2 "$mixing_tsv" > "$work_dir/mixing_noheader.txt"
    
    # Create rejected components design matrix
    echo "Creating rejected components design matrix..."
    IFS=',' read -ra COMP_ARRAY <<< "$rejected_comps"
    
    # Extract rejected component columns (add 1 to convert from 0-based to 1-based for awk)
    first_comp=true
    for comp in "${COMP_ARRAY[@]}"; do
        # Remove leading zeros to avoid octal interpretation
        comp_num=$(echo "$comp" | sed 's/^0*//')
        if [[ -z "$comp_num" ]]; then comp_num=0; fi
        comp_col=$((comp_num + 1))  # Convert to 1-based indexing for awk
        if [[ "$first_comp" == true ]]; then
            awk -v col="$comp_col" '{print $col}' "$work_dir/mixing_noheader.txt" > "$work_dir/rejected_mixing.txt"
            first_comp=false
        else
            awk -v col="$comp_col" '{print $col}' "$work_dir/mixing_noheader.txt" > "$work_dir/temp_col.txt"
            paste "$work_dir/rejected_mixing.txt" "$work_dir/temp_col.txt" > "$work_dir/rejected_mixing_temp.txt"
            mv "$work_dir/rejected_mixing_temp.txt" "$work_dir/rejected_mixing.txt"
            rm "$work_dir/temp_col.txt"
        fi
    done
    
    # Use fsl_glm to fit rejected components and get residuals
    echo "Fitting rejected components using fsl_glm..."
    apptainer run /opt/apptainer_img/fsl-6.0.7.12.sif \
    fsl_glm -i "$preproc_t1w" \
            -d "$work_dir/rejected_mixing.txt" \
            -o "$work_dir/component_fits.nii.gz" \
            --out_res="$output_dir/sub-${subject}_ses-${session}_desc-medenoised_space-T1w_bold.nii.gz"
fi

# Clean up temporary files
echo "Cleaning up temporary files..."
rm -rf "$work_dir"

echo "Multi-echo denoising completed for sub-${subject} ses-${session}"
echo "Output file: $output_dir/sub-${subject}_ses-${session}_desc-medenoised_space-T1w_bold.nii.gz"
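As a side note, the rejected-component extraction can be sanity-checked in isolation with a fabricated status table (the column names below are illustrative; check them against the real desc-ICA_status_table.tsv):

```shell
# Fabricated tab-separated table mimicking tedana's status table layout
table=$(mktemp)
printf 'Component\tclassification\n' > "$table"
printf 'ICA_00\taccepted\nICA_01\trejected\nICA_02\taccepted\nICA_03\trejected\n' >> "$table"

# Same logic as the script: last column == "rejected", strip the ICA_ prefix
rejected_comps=$(awk -F'\t' 'NR>1 && $NF=="rejected" {gsub(/ICA_/, "", $1); print $1}' "$table" \
  | tr '\n' ',' | sed 's/,$//')
echo "$rejected_comps"   # prints 01,03
rm -f "$table"
```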