Memory Issues When Applying DiFuMo Parcellation to HCP-TRT Dataset Using Nilearn

Summary:

I am new to Nilearn, and I am trying to parcellate the HCP-TRT dataset with the DiFuMo atlas. I am running into memory issues, especially when increasing the number of parcels or when working with resting-state data.

The parcellation runs successfully for the motor task with 128 and 512 parcels, but when I try 1024 parcels my memory usage reaches 100% and the process crashes. Similarly, for the resting-state data I cannot get it to run without crashing, even with a low number of parcels.

I am processing one run of fMRI data at a time.

System specifications:
Processor: AMD® Ryzen 9 5950x 16-core processor × 32
RAM: 32.0 GiB
GPU: NVIDIA Corporation GA102 [GeForce RTX 3090]

Size of motor task: (97, 115, 97, 144)
Resting state: (97, 115, 97, 600)

Does anyone have suggestions on how to manage memory usage or alternative approaches for working with large parcellations, especially for resting-state fMRI?

Command used:

    # Imports assumed by this excerpt (the snippet is part of a larger script):
    # from nilearn import datasets
    # from nilearn.maskers import NiftiMapsMasker
    # from nilearn.interfaces.fmriprep import load_confounds_strategy

    elif atlas == "DiFuMo":
        atlas = datasets.fetch_atlas_difumo(dimension=n_parcels, resolution_mm=2)
        parcellation_labels_img = atlas.maps

        # Denoise the data using the compcor strategy
        confounds, sample_mask = load_confounds_strategy(
            fmri_data,                         # path to the functional file for the current run
            denoise_strategy="compcor",
            compcor="temporal_anat_combined",  # use both temporal and anatomical compcor
            n_compcor="all",                   # use all compcor components for denoising
        )

        masker = NiftiMapsMasker(
            mask_img=mask_file,
            maps_img=parcellation_labels_img,
            standardize=standardize,
            smoothing_fwhm=5,
            memory="nilearn_cache",
            memory_level=1,
        )

        parcellated_data = masker.fit_transform(fmri_data, confounds=confounds)

Nilearn version: 0.10.3


A few questions and suggestions:

  • I doubt it will make a difference, but just in case: try version 0.10.4, and if you can, install from source on GitHub.
  • Does it crash on the first subject / run you process?
  • How "big" is the data: number of voxels in X, Y, Z and number of time points per time series?

Yes, it crashes on the first run.
The motor task data has shape (97, 115, 97, 144) and the resting-state data (97, 115, 97, 600).
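For scale, here is a back-of-envelope memory estimate from those shapes, assuming the arrays end up as float64 and that NiftiMapsMasker resamples the 1024 probabilistic maps onto the run's own grid (its default `resampling_target="data"` behavior):

```python
# Rough in-memory sizes implied by the shapes reported in this thread.
voxels = 97 * 115 * 97               # one volume on the run's grid

rest_gb = voxels * 600 * 8 / 1e9     # resting-state run as float64
atlas_gb = voxels * 1024 * 8 / 1e9   # DiFuMo-1024 maps resampled to that grid

print(f"resting-state run: ~{rest_gb:.1f} GB")   # ~5.2 GB per copy
print(f"1024 maps:         ~{atlas_gb:.1f} GB")  # ~8.9 GB per copy
```

Smoothing, standardization, and confound regression can each produce intermediate copies, so several such arrays alive at once can plausibly exhaust 32 GiB.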

Ok I will try to reproduce this with some dummy data.
