I have run multi-echo (ME) data through fMRIPrep, output the individual echoes, and then run those through tedana to create optimally combined output files. I am now attempting to transform these files into T1w space, as I want to convert my output files to CIFTI format (and this is most compliant with data in T1w space).
The code I am running produces an output file. However, it is only 100 kB in size, so I assume something has gone wrong. I have been troubleshooting but am unsure what is wrong with my script. I would appreciate any advice. My script is written below:
I tried that, but I got no output from it, and additionally got this error message:
/usr/bin/lua: /usr/share/lmod/lmod/libexec/Cache.lua:341: bad argument #1 to 'next' (table expected, got boolean)
stack traceback:
[C]: in function 'next'
/usr/share/lmod/lmod/libexec/Cache.lua:341: in upvalue 'l_readCacheFile'
/usr/share/lmod/lmod/libexec/Cache.lua:561: in function 'Cache.build'
/usr/share/lmod/lmod/libexec/ModuleA.lua:685: in function 'ModuleA.singleton'
/usr/share/lmod/lmod/libexec/Hub.lua:1145: in function 'Hub.avail'
/usr/share/lmod/lmod/libexec/cmdfuncs.lua:144: in function 'Avail'
/usr/share/lmod/lmod/libexec/lmod:514: in function 'main'
/usr/share/lmod/lmod/libexec/lmod:585: in main chunk
[C]: in ?
/var/spool/slurmd/job2339251/slurm_script: line 82: 3541299 Killed apptainer exec /opt/apptainer_img/fmriprep-23.2.3.sif antsApplyTransforms -e 3 -i "$func_file" -r "$reference_file" -t "$coreg_transform" -o "$output_file" -n Linear -v
slurmstepd: error: Detected 1 oom_kill event in StepId=2339251.batch. Some of the step tasks have been OOM Killed.
I think the main issue is that the output of the antsApplyTransforms command you were using was at the resolution of the T1w image (usually 1 mm isotropic or less), whereas BOLD images usually have a voxel size between 2 and 3 mm.
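One way to avoid writing the warped BOLD onto the full-resolution anatomical grid is to first resample the T1w reference down to the BOLD voxel size and pass that as the `-r` image. A rough sketch of the idea (the 2.4 mm spacing and the `ref_boldres.nii.gz` name are placeholders, and this assumes the ANTs `ResampleImageBySpacing` utility is available inside the fMRIPrep container):

```shell
# Build a T1w-space reference image at an assumed BOLD resolution
# (2.4 mm isotropic here; substitute your actual functional voxel size)
apptainer exec /opt/apptainer_img/fmriprep-23.2.3.sif \
    ResampleImageBySpacing 3 "$reference_file" ref_boldres.nii.gz 2.4 2.4 2.4

# Apply the coregistration transform onto the low-resolution grid instead
apptainer exec /opt/apptainer_img/fmriprep-23.2.3.sif \
    antsApplyTransforms -e 3 -i "$func_file" -r ref_boldres.nii.gz \
        -t "$coreg_transform" -o "$output_file" -n Linear -v
```

The output then has the same number of voxels per volume as the native BOLD series, which should keep both memory use and file size roughly an order of magnitude smaller.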
I have gotten it to work by increasing the memory more! Do you think I should be using the code you suggested instead? Or is increasing the memory a sufficient fix?
You should definitely decrease the size of your files with a strategy similar to the one I am suggesting; otherwise you will use far more resources in your analyses with no gain in your results.
I don't want to resurrect this topic since it's been solved, but I want to note that the easiest way to get denoised data in your target space is to (1) run fMRIPrep with --me-output-echos and whatever output spaces or cifti resolutions you want, (2) run tedana on your individual echoes, and (3) use the ICA components and component table from tedana to denoise the optimally combined data produced by fMRIPrep in your target spaces, rather than denoising in native space and warping/projecting to your target spaces. The preprocessed data produced by fMRIPrep in your requested spaces is the optimally combined data from tedana, so you shouldn't need to optimally combine your data separately.
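The recommended workflow above might be sketched roughly as follows (all paths, echo times, and the output-space choices are placeholders for your own data; steps 1 and 2 are real fMRIPrep/tedana invocations, step 3 is the regression-based cleanup discussed later in this thread):

```shell
# 1) fMRIPrep: save individual preprocessed echoes alongside the
#    optimally combined outputs in the requested spaces
fmriprep /data/bids /data/derivatives participant \
    --me-output-echos \
    --output-spaces T1w \
    --cifti-output 91k

# 2) tedana: decompose the preprocessed echoes (echo times in ms)
tedana -d sub-01_task-rest_echo-1_desc-preproc_bold.nii.gz \
       sub-01_task-rest_echo-2_desc-preproc_bold.nii.gz \
       sub-01_task-rest_echo-3_desc-preproc_bold.nii.gz \
    -e 12.0 28.0 44.0 \
    --out-dir /data/derivatives/tedana/sub-01

# 3) Regress the rejected ICA components (per tedana's component table)
#    out of fMRIPrep's optimally combined data in each target space
```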
Thank you for clarifying this point @tsalo. I had fun playing with antsApplyTransforms to solve this issue, but you are right to show the best practice for denoising data with fMRIPrep and tedana.
Thank you for the advice! This seems straightforward and I will explore denoising the T1w-space data in this way. I think I will still do my CIFTI conversion manually, though, as I want to work with the NIfTI output files from fMRIPrep for now.
I attempted to remove the tedana components from the fMRIPrep T1w-space BOLD in this way. I just wanted to check that my method for doing so is sound (I made sure the components listed for removal in my log matched those in the ICA table). It was very quick, taking only about 30 seconds per participant:
#!/bin/bash
#SBATCH --job-name=me_denoise
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=256G
#SBATCH --time=36:00:00
#SBATCH --output=me_denoise_%A_%a.log
#SBATCH --error=me_denoise_%A_%a.err
#SBATCH --mail-type=END,FAIL
#SBATCH --mail-user=
#SBATCH --array=0-27 # 14 participants × 2 sessions
export PATH=$PATH:/home/dumbr174/.local/bin
# Load necessary modules
module load apptainer/FSL/6.0.7
module load apptainer/fMRIPrep/23.2.3
# Define all subject/session combinations
declare -a combinations=(
"pfm01 a"
"pfm01 b"
"pfm02 a"
"pfm02 b"
"pfm03 a"
"pfm03 b"
"pfm04 a"
"pfm04 b"
"pfm05 a"
"pfm05 b"
"pfm06 a"
"pfm06 b"
"pfm07 a"
"pfm07 b"
"pfm08 a"
"pfm08 b"
"pfm09 a"
"pfm09 b"
"pfm10 a"
"pfm10 b"
"pfm11 a"
"pfm11 b"
"pfm12 a"
"pfm12 b"
"pfm13 a"
"pfm13 b"
"pfm14 a"
"pfm14 b"
)
# Get the current combination from the SLURM array index
combination=(${combinations[$SLURM_ARRAY_TASK_ID]})
subject=${combination[0]}
session=${combination[1]}
base_path="/projects/sciences/psychology/imageotago/dumbr174/PFMtrial1/processed/bids/derivatives"
# Input files
preproc_t1w="$base_path/fmriprep/sub-${subject}/ses-${session}/func/sub-${subject}_ses-${session}_task-rest_run-001_space-T1w_desc-preproc_bold.nii.gz"
mixing_tsv="$base_path/tedana/sub-${subject}/ses-${session}/desc-ICA_mixing.tsv"
comp_table="$base_path/tedana/sub-${subject}/ses-${session}/desc-ICA_status_table.tsv"
# Output directory
output_dir="$base_path/completed_multiecho/sub-${subject}/ses-${session}"
mkdir -p "$output_dir"
# Working directory for temporary files
work_dir="$output_dir/temp_$$"
mkdir -p "$work_dir"
echo "Processing multi-echo denoising for sub-${subject} ses-${session} ..."
# Check if input files exist
if [[ ! -f "$preproc_t1w" ]]; then
echo "Error: Preprocessed T1w file not found: $preproc_t1w"
exit 1
fi
if [[ ! -f "$mixing_tsv" ]]; then
echo "Error: Mixing matrix not found: $mixing_tsv"
exit 1
fi
if [[ ! -f "$comp_table" ]]; then
echo "Error: Component table not found: $comp_table"
exit 1
fi
# Extract rejected component indices (0-based) from the final column
echo "Extracting rejected components..."
rejected_comps=$(awk -F'\t' 'NR>1 && $NF=="rejected" {gsub(/ICA_/, "", $1); print $1}' "$comp_table" | tr '\n' ',' | sed 's/,$//')
if [[ -z "$rejected_comps" ]]; then
echo "No rejected components found. Copying original file..."
cp "$preproc_t1w" "$output_dir/sub-${subject}_ses-${session}_desc-medenoised_space-T1w_bold.nii.gz"
else
echo "Rejected components: $rejected_comps"
# Prepare mixing matrix (remove header)
echo "Preparing mixing matrix..."
tail -n +2 "$mixing_tsv" > "$work_dir/mixing_noheader.txt"
# Create rejected components design matrix
echo "Creating rejected components design matrix..."
IFS=',' read -ra COMP_ARRAY <<< "$rejected_comps"
# Extract rejected component columns (add 1 to convert from 0-based to 1-based for awk)
first_comp=true
for comp in "${COMP_ARRAY[@]}"; do
# Remove leading zeros to avoid octal interpretation
comp_num=$(echo "$comp" | sed 's/^0*//')
if [[ -z "$comp_num" ]]; then comp_num=0; fi
comp_col=$((comp_num + 1)) # Convert to 1-based indexing for awk
if [[ "$first_comp" == true ]]; then
awk -v col="$comp_col" '{print $col}' "$work_dir/mixing_noheader.txt" > "$work_dir/rejected_mixing.txt"
first_comp=false
else
awk -v col="$comp_col" '{print $col}' "$work_dir/mixing_noheader.txt" > "$work_dir/temp_col.txt"
paste "$work_dir/rejected_mixing.txt" "$work_dir/temp_col.txt" > "$work_dir/rejected_mixing_temp.txt"
mv "$work_dir/rejected_mixing_temp.txt" "$work_dir/rejected_mixing.txt"
rm "$work_dir/temp_col.txt"
fi
done
# Use fsl_glm to fit rejected components and get residuals
echo "Fitting rejected components using fsl_glm..."
apptainer run /opt/apptainer_img/fsl-6.0.7.12.sif \
fsl_glm -i "$preproc_t1w" \
-d "$work_dir/rejected_mixing.txt" \
-o "$work_dir/component_fits.nii.gz" \
--out_res="$output_dir/sub-${subject}_ses-${session}_desc-medenoised_space-T1w_bold.nii.gz"
fi
# Clean up temporary files
echo "Cleaning up temporary files..."
rm -rf "$work_dir"
echo "Multi-echo denoising completed for sub-${subject} ses-${session}"
echo "Output file: $output_dir/sub-${subject}_ses-${session}_desc-medenoised_space-T1w_bold.nii.gz"
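As a quick sanity check that the extraction step in the script pulls the right component indices, here is a self-contained toy run of the same awk/tr/sed pipeline on a fake status table (the two-column layout and the values are illustrative only; real tedana tables have more columns, but the script keys on the first and last columns):

```shell
# Toy stand-in for tedana's desc-ICA_status_table.tsv
comp_table=$(mktemp)
printf 'Component\tclassification\n' > "$comp_table"
printf 'ICA_00\taccepted\nICA_01\trejected\nICA_02\taccepted\nICA_03\trejected\n' >> "$comp_table"

# Identical extraction pipeline to the script above: keep rows whose
# last column is "rejected", strip the ICA_ prefix, join with commas
rejected=$(awk -F'\t' 'NR>1 && $NF=="rejected" {gsub(/ICA_/, "", $1); print $1}' "$comp_table" \
    | tr '\n' ',' | sed 's/,$//')

echo "$rejected"   # -> 01,03
rm -f "$comp_table"
```

Note that the indices come out zero-padded (01, not 1), which is exactly why the script strips leading zeros before the arithmetic that converts them to 1-based awk column numbers.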