Memory Issues When Applying DiFuMo Parcellation to HCP-TRT Dataset Using Nilearn

Summary:

I am new to Nilearn and am trying to apply a parcellation to the HCP-TRT dataset using the DiFuMo atlas. I am encountering memory issues, especially when increasing the number of parcels or working with resting-state data.

I succeeded in running the parcellation for the motor task with 128 and 512 parcels, but when I try 1024 parcels, my memory usage reaches 100% and the process crashes. Similarly, when I apply the same procedure to the resting-state data, even with a low number of parcels, I cannot get it to run without crashing.

I am processing one run of fMRI data at a time.

System specifications:
Processor: AMD Ryzen 9 5950X (16 cores / 32 threads)
RAM: 32.0 GiB
GPU: NVIDIA GeForce RTX 3090 (GA102)

Size of motor task: (97, 115, 97, 144)
Size of resting state: (97, 115, 97, 600)
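
For scale, here is a rough back-of-envelope estimate of one resting-state run's in-memory footprint (assuming float64 working copies, a common default for these computations):

# One in-memory copy of the resting-state run as float64
n_voxels = 97 * 115 * 97           # ~1.08 million voxels
n_volumes = 600
gib = n_voxels * n_volumes * 8 / 2**30
print(f"~{gib:.1f} GiB per copy")  # ~4.8 GiB

Smoothing, standardization, and confound regression can each hold temporary copies, so several multiples of this can be live at once.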

Does anyone have suggestions on how to manage memory usage or alternative approaches for working with large parcellations, especially for resting-state fMRI?

Code used:

 elif atlas == "DiFuMo":
        atlas = datasets.fetch_atlas_difumo(dimension=n_parcels, resolution_mm=2)
        parcellation_labels_img = atlas.maps

        # Denoise the data using the compcor strategy
        confounds, sample_mask = load_confounds_strategy(
            fmri_data,  # path to the functional file for the current run
            denoise_strategy="compcor",
            compcor="temporal_anat_combined",  # both temporal and anatomical compcor
            n_compcor="all",  # use all compcor components for denoising
        )

        masker = NiftiMapsMasker(
            maps_img=parcellation_labels_img,
            mask_img=mask_file,
            standardize=standardize,
            smoothing_fwhm=5,
            memory="nilearn_cache",
            memory_level=1,
        )

        parcellated_data = masker.fit_transform(fmri_data, confounds=confounds)

Nilearn version: 0.10.3
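
In case it helps anyone hitting the same wall, below is a sketch of two generic memory-reducing tweaks I am considering, reusing the variables from the snippet above (not tested on this dataset; with chunking, standardization and confound regression would have to be applied afterwards to the much smaller parcel time series, e.g. with nilearn.signal.clean):

import numpy as np
from nilearn.image import index_img
from nilearn.maskers import NiftiMapsMasker

masker = NiftiMapsMasker(
    maps_img=parcellation_labels_img,
    mask_img=mask_file,
    smoothing_fwhm=5,
    dtype="auto",  # cast continuous data to float32, halving memory
)
masker.fit()

# Transform the run in temporal chunks instead of all volumes at once
chunk_size = 100  # volumes per chunk; tune to available RAM
n_volumes = 600
chunks = [
    masker.transform(index_img(fmri_data, slice(start, start + chunk_size)))
    for start in range(0, n_volumes, chunk_size)
]
parcellated_data = np.vstack(chunks)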


A few questions and suggestions:

  • I doubt it will make a difference, but just in case: try version 0.10.4 and, if you can, install from source on GitHub.
  • Does it crash on the first subject/run you process?
  • How ‘big’ is the data: number of voxels in X, Y, Z, and number of time points per time series? (The snippet below is a quick way to check.)
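
If unsure, something like this reports the shape without loading the voxel data into memory, since nibabel reads only the header until the data is accessed (the path here is just a placeholder):

import nibabel as nib

img = nib.load("sub-01_task-motor_bold.nii.gz")  # hypothetical path
print(img.shape)  # e.g. (97, 115, 97, 144)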

Yes, it crashes on the first run.
The size of the motor task is (97, 115, 97, 144) and of the resting state (97, 115, 97, 600).

Ok I will try to reproduce this with some dummy data.


Hi, I was wondering if you’ve had a chance to look into the issue with the dummy data or if there are any updates on the potential solutions.

Thanks again for your time and help!

Hey, sorry for the slow reply.

I had a look using the script below, which generates dummy data with dimensions matching yours. At the moment it runs fine on my side, though note that it does not include the confound loading.

Just to check: can you run this on your side and tell me whether it runs well for you?

from nilearn import datasets
from nilearn.maskers import NiftiMapsMasker
from nilearn.interfaces.fmriprep import load_confounds_strategy
from nilearn._utils.data_gen import generate_fake_fmri

shape = (97, 115, 97)
length = 600

n_parcels = 128

atlas = datasets.fetch_atlas_difumo(dimension=n_parcels, resolution_mm=2)
parcellation_labels_img = atlas.maps

fmri_data, mask = generate_fake_fmri(
    shape=shape,
    length=length,
)

# confounds, sample_mask = load_confounds_strategy(
#     fmri_data, # Path to the functional file for the current run
#     denoise_strategy='compcor', # Use compcor to denoise the data
#     compcor="temporal_anat_combined", # Use both temporal and anatomical compcor
#     n_compcor="all", # Use all components of compcor for denoising the data
# )
confounds = None

masker = NiftiMapsMasker(
    mask_img=mask,
    maps_img=parcellation_labels_img,
    standardize=True,
    smoothing_fwhm=5,
    memory="nilearn_cache",
    memory_level=1,
)

parcellated_data = masker.fit_transform(fmri_data, confounds=confounds)
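
If you want to compare peak memory across runs, you could append something like this to the script (standard library only; the resource module is POSIX-specific, and on Linux ru_maxrss is reported in KiB):

# Report peak resident memory after the transform
import resource

peak_kib = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print(f"Peak memory: {peak_kib / 2**20:.1f} GiB")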

Hi, thanks for your reply!

I just ran it, and it seems to work fine; it was easily handled by my PC.

Just to check: have you tried using 128, 512, and 1024 parcels with this script?

Yes, I tried all of them.

I also tried using the real data, in these combinations:

                 fMRI and mask
Confounds        Real         Dummy
Real             Crashed      Crashed
Dummy            Crashed      Successful
None             Crashed      Successful

Here, the dummy fMRI data and mask are produced by the generate_fake_fmri function from nilearn._utils.data_gen, while the dummy confounds come from a generate_random_df function that I include in the script.
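
For context, a minimal version of such a helper might look like the sketch below (hypothetical; the exact function in the shared script may differ):

# Hypothetical stand-in for the generate_random_df helper: a confounds
# table of random values with the real confounds' shape (600, 261)
import numpy as np
import pandas as pd

def generate_random_df(n_rows=600, n_cols=261, seed=0):
    rng = np.random.default_rng(seed)
    columns = [f"confound_{i}" for i in range(n_cols)]
    return pd.DataFrame(rng.standard_normal((n_rows, n_cols)), columns=columns)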

You can find the data files and the script to run these here: resting_state_share – Google Drive

Another strange thing: when I decreased the dimensions of the dummy confounds (in the case of real mask and fMRI data) from [600, 261] (the real confound shape) to [600, 1], it crashed once, but when I re-ran it, it was successful, with memory usage reaching almost 100%.
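
Since the width of the confounds matrix seems to matter, one mitigation might be to cap the number of CompCor components rather than requesting all of them (a sketch; n_compcor accepts an integer in load_confounds_strategy, which would shrink the confounds matrix from its current ~261 columns):

from nilearn.interfaces.fmriprep import load_confounds_strategy

confounds, sample_mask = load_confounds_strategy(
    fmri_data,
    denoise_strategy="compcor",
    compcor="temporal_anat_combined",
    n_compcor=10,  # cap the number of components (was "all")
)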

Hi! I just wanted to follow up on the memory issues. I was wondering if you had a chance to run the test code I shared?

Thanks again for your help!