Issue with fMRIPost-AROMA Processing: ROI Mask Alignment and Distortion in Resting-State Data

Summary of what happened:

I am currently working on a resting-state fMRI seed-based analysis and have run into an issue that I hope you can help me resolve. When I overlay my binary ROI mask on the preprocessed fMRIPrep data, the alignment is accurate.

However, after applying the fMRIPost-AROMA processing script, I noticed that for the same data that looked fine in the fMRIPrep output, the binary ROI mask no longer fully aligns with the denoised BOLD image (the fMRIPost-AROMA output). There also appears to be distortion in the data for these cases (e.g., sub-042, as shown in the attached images).

It’s worth mentioning that this problem does not occur consistently across participants: for several participants there is no misalignment or distortion in either the fMRIPrep or the fMRIPost-AROMA output. I tried adding the flag --output-spaces MNI152NLin6Asym to keep the fMRIPost-AROMA output in the same standard space, but the flag was not recognized.

I would greatly appreciate your insights into possible reasons for this issue and any suggestions on how to resolve it. Could it be related to the post-processing steps, or is there something else I should investigate in my pipeline?

Thank you in advance for your time and expertise. Please let me know if additional details or data samples would be helpful.

Command used (and if a helper script was used, a link to the helper script or the command generated):

My preprocessing was conducted using fMRIPrep with the following command.

fmriprep_command=(
    docker run --rm
    -v "$bidsdir:/data"
    -v "$outputdir:/out"
    -v "$license_file:/usr/local/freesurfer/license.txt"
    -v "$workdir:/work"
    nipreps/fmriprep:latest
    /data /out participant
    --participant-label "$pid"
    --fs-license-file /usr/local/freesurfer/license.txt
    --skip_bids_validation
    --no-submm-recon
    --n-cpus 16
    --omp-nthreads 8
    --work-dir /work 
    --output-spaces MNI152NLin6Asym T1w
)

fMRIPost-AROMA was run with the following command:

fMRIPostAROMA_command=(
    docker run --rm
    -v "$bids_dir:/data"  
    -v "$output_dir:/data/derivatives"
    -v "$ica_outputdir:/out"  
    -v "$work_dir:/work"
    nipreps/fmripost-aroma:latest
    /data /out participant 
    -w /work
    --derivatives fmriprep=/data/derivatives
    --skip_bids_validation
    --participant-label "$pid"
    --denoising-method nonaggr
    --n-cpus 16
    --omp-nthreads 8
)

Version:

PUT VERSION HERE

Environment (Docker, Singularity / Apptainer, custom installation):

Docker

Data formatted according to a validatable standard? Please provide the output of the validator:

PASTE VALIDATOR OUTPUT HERE

Relevant log outputs (up to 20 lines):

PASTE LOG OUTPUT HERE

Screenshots / relevant information:



It looks like you ran fMRIPrep with native-resolution outputs, while fMRIPost-AROMA will resample to 2x2x2 mm voxels. Maybe that’s the cause of the mismatch?
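
One quick way to check is to compare the grids (shape, voxel size, affine) of the fMRIPrep BOLD, the fMRIPost-AROMA denoised BOLD, and your ROI mask. A minimal sketch, with placeholder paths you would swap for your own files:

import nibabel as nib

# Placeholder paths: point these at your fMRIPrep preprocessed BOLD, the
# fMRIPost-AROMA denoised BOLD, and the binary ROI mask you are overlaying.
files = {
    "fMRIPrep preproc BOLD": "/path/to/fmriprep_desc-preproc_bold.nii.gz",
    "fMRIPost-AROMA denoised BOLD": "/path/to/fmripost-aroma_denoised_bold.nii.gz",
    "ROI mask": "/path/to/roi_mask.nii.gz",
}

for label, path in files.items():
    img = nib.load(path)
    print(label)
    print("  shape:", img.shape)
    print("  voxel size (mm):", img.header.get_zooms()[:3])
    print("  affine:")
    print(img.affine)

If the voxel sizes or affines differ between the two BOLD images, that would be consistent with the resampling explanation above.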

Can you share the log from your fMRIPost-AROMA run?


Dear Taylor,
Thank you for your response.
I attempted running the script twice—once with the --output-spaces MNI152NLin6Asym flag and once with --output-spaces MNI152NLin6Asym:res-2. Unfortunately, the issue persisted in both cases.

As you mentioned, it’s possible that I ran fMRIPrep with native-resolution outputs. To provide more context, I’ve shared the defined paths from my script below:

# Define paths
dataset_dir="${USER_DATA:-/media}/data04/ArSh/resting_state_project"
bids_dir="$dataset_dir/BIDS"
output_dir="$dataset_dir/fMRIPrep/derivatives"
work_dir="/media/data01/ArSh/fMRIPost-AROMA/workdir_ica"
ica_outputdir="/media/data01/ArSh/fMRIPost-AROMA/derivatives_ica"  # New output directory for ICA-AROMA results
participants_file="/media/data01/ArSh/participantss.tsv"

I checked all directories under workdir_ica and derivatives_ica, but I couldn’t find any informative log files apart from some citation files in the log folder. Would it be sufficient to rerun the process and save the Linux terminal output for further debugging?

Thank you for your guidance and support!

fMRIPost-AROMA currently forces outputs to be in MNI152NLin6Asym:res-2. I plan to implement an --output-spaces flag at some point in the future, but I haven’t had much time to dedicate to the fMRIPost workflows lately. The good news is that fMRIPost-AROMA will output the confounds in a TSV file. You can denoise your data in other output spaces or resolutions (e.g., your MNI152NLin6Asym:res-native fMRIPrep derivatives) using those confounds instead of using the denoised data produced by fMRIPost-AROMA.


Thanks for your response. I used the .tsv file (...desc-aroma_timeseries.tsv) in the fMRIPost-AROMA/derivatives directory. Specifically, I regressed out the movement components using the Nilearn package with the following script:

import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from nilearn import image
from nilearn.input_data import NiftiMasker

# Load original BOLD and mask
bold_file = "/media/data04/ArSh/resting_state_project/fMRIPrep/derivatives/sub-042/ses-1/func/sub-042_ses-1_task-resting_dir-AP_run-01_space-MNI152NLin6Asym_desc-preproc_bold.nii.gz"
mask_file = "/media/data04/ArSh/resting_state_project/fMRIPrep/derivatives/sub-042/ses-1/func/sub-042_ses-1_task-resting_dir-AP_run-02_space-MNI152NLin6Asym_desc-brain_mask.nii.gz"
mean_img = image.mean_img(bold_file)  # Extract mean BOLD image

# Load confounds (ICA AROMA regressors)
confounds = pd.read_csv("/media/data01/ArSh/fMRIPost-AROMA/derivatives_ica_old/sub-042/ses-1/func/sub-042_ses-1_task-resting_run-01_desc-aroma_timeseries.tsv", sep="\t")

# Apply regression with NiftiMasker
masker = NiftiMasker(mask_img=mask_file, standardize=False)
time_series = masker.fit_transform(bold_file)

# Regress out noise confounds
regressor = LinearRegression()
regressor.fit(confounds, time_series)  # Fit noise regressors
denoised_time_series = time_series - regressor.predict(confounds)  # Remove noise

# Add the mean signal back
mean_signal = np.mean(time_series, axis=0)
denoised_time_series += mean_signal  # Add voxel mean back

# Reconstruct denoised 4D image
denoised_img = masker.inverse_transform(denoised_time_series)


mean_4d = image.concat_imgs([mean_img] * denoised_img.shape[-1])  # Repeat mean image over time


final_img = image.math_img("img1 + img2", img1=denoised_img, img2=mean_4d)
# Save final result
final_img.to_filename("/media/data01/ArSh/fMRIPost-AROMA/tsv/denoised_bold_with_mean.nii.gz")

However, after processing, the output appeared to be in anatomical space, so I used Nibabel to set the affine (and header) of the denoised data to that of the MNI152NLin6Asym preprocessed BOLD, using the following script:

import nibabel as nib
 

# Load input preprocessed BOLD (MNI space) and denoised BOLD
input_bold = nib.load("/media/data04/ArSh/resting_state_project/fMRIPrep/derivatives/sub-042/ses-1/func/sub-042_ses-1_task-resting_dir-AP_run-01_space-MNI152NLin6Asym_desc-preproc_bold.nii.gz")
denoised_bold = nib.load("/media/data01/ArSh/fMRIPost-AROMA/tsv/denoised_bold_with_mean.nii.gz")

# Extract the data from the denoised image
denoised_data = denoised_bold.get_fdata()

output_image = nib.Nifti1Image(denoised_data, affine=input_bold.affine, header=input_bold.header)


output_image.to_filename("/media/data01/ArSh/fMRIPost-AROMA/tsv/denoised_bold_MNI152NLin6Asym.nii.gz")

Although the results appear correct and my ROI mask now aligns accurately, I wanted to confirm whether I used the correct .tsv file and whether I missed any necessary steps in this process. Additionally, should I apply spatial smoothing (e.g., 6.0 mm FWHM) separately?

Thank you

I have a few thoughts.

  1. The output is not going to be in anatomical space. You’re not doing any spatial transformations in your code and the input data are in MNI152NLin6Asym space at native BOLD resolution, unless you modified your fMRIPrep derivatives in place.
  2. The AROMA confounds will include both the original “nuisance” components and ones that are orthogonalized with respect to the “signal” components. You need to select one or the other for your denoising step rather than use all of them.
    • If you want to do “aggressive” denoising, use the original components. They are columns that start with “aroma_motion_”.
    • If you want to do what fMRIPost-AROMA calls “orthaggr” denoising, use the orthogonalized components. They are columns starting with “aroma_orth_motion_”.
    • If you want to do non-aggressive denoising, you will need to do something a bit more complicated. I won’t go into that here since it would require a bunch of extra code.
    • For more information on the different kinds of denoising, see tedana’s documentation.
  3. Your denoising code is overkill, since NiftiMasker can handle the denoising internally. I would recommend doing the following:
import pandas as pd
from nilearn.maskers import NiftiMasker

# Load original BOLD and mask
bold_file = "/media/data04/ArSh/resting_state_project/fMRIPrep/derivatives/sub-042/ses-1/func/sub-042_ses-1_task-resting_dir-AP_run-01_space-MNI152NLin6Asym_desc-preproc_bold.nii.gz"
mask_file = "/media/data04/ArSh/resting_state_project/fMRIPrep/derivatives/sub-042/ses-1/func/sub-042_ses-1_task-resting_dir-AP_run-02_space-MNI152NLin6Asym_desc-brain_mask.nii.gz"

# Load confounds (ICA AROMA regressors)
confounds = pd.read_table("/media/data01/ArSh/fMRIPost-AROMA/derivatives_ica_old/sub-042/ses-1/func/sub-042_ses-1_task-resting_run-01_desc-aroma_timeseries.tsv")

# Select relevant confounds (I chose orthogonalized components)
orth_columns = [c for c in confounds.columns if c.startswith("aroma_orth_motion_")]
confounds = confounds[orth_columns]

# Apply regression with NiftiMasker
masker = NiftiMasker(
    mask_img=mask_file,
    standardize=False,
    standardize_confounds=True,
)
denoised_data = masker.fit_transform(
    X=bold_file,
    confounds=confounds,
)
denoised_img = masker.inverse_transform(denoised_data)
denoised_img.to_filename("/media/data01/ArSh/fMRIPost-AROMA/tsv/sub-042_ses-1_task-resting_dir-AP_run-01_space-MNI152NLin6Asym_desc-orthaggrDenoised_bold.nii.gz")

Thank you so much for your helpful feedback! I checked your solution, and it works perfectly.

Regarding the spatial transformation, as you correctly pointed out, the input data are indeed in MNI152NLin6Asym space at native BOLD resolution. However, when I checked the header of my output data, I noticed that the sform_name is labeled as “Unknown” and the sform_code is set to 2, which might have caused some confusion on my end.
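
For reference, the sform/qform codes can be read directly with nibabel. A minimal sketch (the path is a placeholder for the output I checked):

import nibabel as nib

# Placeholder path: the denoised output whose header was being inspected
img = nib.load("/path/to/denoised_bold_MNI152NLin6Asym.nii.gz")

sform, sform_code = img.header.get_sform(coded=True)
qform, qform_code = img.header.get_qform(coded=True)

# NIfTI xform codes: 0 = unknown, 1 = scanner, 2 = aligned to another image,
# 3 = Talairach, 4 = MNI152
print("sform_code:", int(sform_code))
print("qform_code:", int(qform_code))
print(sform)

A code of 2 ("aligned") just means the affine is marked as aligned to another image, rather than unknown.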

Thanks again for your guidance!
