Nilearn FirstLevelModel.fit() RAM explosion

Hi!
I’m running into an issue when fitting a nilearn FirstLevelModel: I create a GLM for each of my 15-20 participants. For each participant I instantiate a new model, call fit() on that participant’s fMRI data, and then use compute_contrast() to compute contrast maps from the fitted beta coefficients.
I have tried explicitly calling del model and gc.collect() after fitting each participant’s model, but RAM usage still rises by about 8 GB with every call to fit(), so memory keeps accumulating until I eventually hit an out-of-memory error.
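In simplified form, the loop I’m running looks roughly like this (participant_data, design_matrix and contrast_vector are placeholders for my own data loading and design setup, not my exact code):

import gc

from nilearn.glm.first_level import FirstLevelModel

for participant_id, bold_img in participant_data.items():
    # New model per participant, fitted on that participant's BOLD run.
    model = FirstLevelModel(n_jobs=1)
    model = model.fit(bold_img, design_matrices=design_matrix)
    z_map = model.compute_contrast(contrast_vector, output_type="z_score")
    z_map.to_filename(f"{participant_id}_z_map.nii.gz")

    # Explicit cleanup that I tried; RAM still grows by ~8 GB per fit() call.
    del model
    gc.collect()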

I’d love some help; I’ve been stuck on this issue for a long time…
Thanks!

I’ll try to have a look at this tomorrow.
I remember having this issue once, but when I got back to it some time later I could not reproduce it.

In the meantime, can you tell us more?

How many runs per subject, how many time points per run, what image resolution and voxel size?

Also, it would help to see a bit of code showing how you are setting up and running your models.

Thanks, Remi.
I have 1 run per subject, so overall I call FirstLevelModel.fit() 15 times, once on each of my 15 participants’ fMRI BOLD data. Each run has 6804 time points, and the voxel size is 3×3×3 mm.
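
To put those numbers in context, here is a rough back-of-the-envelope estimate of the in-memory size of a single run (the spatial grid dimensions below are an assumption on my part, since I only listed the voxel size; the number of time points is as stated above):

# Rough memory footprint of one 4D BOLD run.
# The 65 x 77 x 65 grid is an assumed whole-brain matrix at 3 mm, not my exact dims.
n_x, n_y, n_z = 65, 77, 65
n_timepoints = 6804

n_voxels = n_x * n_y * n_z                       # ~325,000 voxels in the full grid
gb_float32 = n_voxels * n_timepoints * 4 / 1e9   # data kept as float32
gb_float64 = n_voxels * n_timepoints * 8 / 1e9   # data promoted to float64

print(f"float32: {gb_float32:.1f} GB, float64: {gb_float64:.1f} GB")
# float32: ~8.9 GB, float64: ~17.7 GB

That is roughly the same order of magnitude as the ~8 GB increase I see per fit() call.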

This is the relevant section of my code (after removing del model and gc.collect(), since they didn’t help):

    def run_for_participant(self, participant_id, data, stat_output_type, mask_strategy, mask_img=None):
        # Compute a participant-specific brain mask if none was provided.
        if mask_img is None:
            masker = NiftiMasker(mask_strategy=mask_strategy, lower_cutoff=0.9)
            masker.fit(data)
            mask_img = masker.mask_img_

        # Fit a fresh first-level GLM for this participant, with caching disabled.
        model = FirstLevelModel(mask_img=mask_img, memory=None, memory_level=0, n_jobs=1)
        model = model.fit(data, design_matrices=self.design_matrix)

        participant_dir = FileUtils.concat_file_paths(self.output_dir, participant_id)
        FileUtils.ensure_folder_exists(participant_dir)

        # Compute and save one statistical map per contrast.
        contrast_files = []
        for contrast_id, contrast_vector in self.contrasts.items():
            stats_map_img = model.compute_contrast(contrast_vector, output_type=stat_output_type)
            contrast_file = FileUtils.concat_file_paths(participant_dir, f"{participant_id}_{contrast_id}_map.nii.gz")
            stats_map_img.to_filename(contrast_file)
            contrast_files.append(contrast_file)

        # Concatenate the per-contrast maps into a single 4D image for the participant.
        merged_img = concat_imgs(contrast_files)
        merged_file = FileUtils.concat_file_paths(participant_dir, f"{participant_id}_contrast_map.nii.gz")
        merged_img.to_filename(merged_file)
        print(f"Saved participant {participant_id} contrast map of shape {merged_img.shape} in {merged_file}")

    def run_all(self, stat_output_type="z_score", mask_strategy="epi"):
        for participant_id, data in self.data_dict.items():
            print(f"Fitting GLM for participant {participant_id}...")
            self.run_for_participant(participant_id, data, stat_output_type, mask_strategy)
            print(f"Successfully fitted GLM for participant {participant_id}")