Thanks for sharing the data, Daniela! It indeed made troubleshooting easier.
I think the main issue is that your fMRI images are high-resolution (1.2 mm isotropic), and fitting the GLM on them is quite memory-intensive. Depending on the resolution of the mask you provide, your initial code may also trigger resampling, which is memory-intensive as well.
The RAM usage indeed reached just over 32 GB when I tried.
That said, I am unsure why this behavior differs between native-space and MNI-space images.
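For reference, here is a quick way to estimate the in-memory footprint of one run (a rough sketch; nibabel's get_fdata() returns float64 by default, so each voxel costs 8 bytes once the data is loaded):

import numpy as np
import nibabel as nib

# Lazily open one run; nibabel reads the header without loading the data
img = nib.load(
    "nilearn_testing_data/sub-02_task-comparison_dir-ap_run-01_space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz"
)
print(f"~{np.prod(img.shape) * 8 / 1e9:.1f} GB for this run alone")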
Nevertheless, depending on what you want to do with these GLM parameters, you have the following options:
- If you want high-resolution maps, you should probably consider using a high-performance cluster (you might already have access to one at Neurospin).
- If you are fine with downsampling your data, then you can probably use your laptop; I have updated the code below to do so.
Let me know if you have any further questions.
Best,
Himanshu
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

from nilearn.datasets import load_mni152_gm_mask
from nilearn.glm.first_level import FirstLevelModel, make_first_level_design_matrix
from nilearn.image import load_img, resample_img
from nilearn.interfaces.fmriprep import load_confounds
from nilearn.plotting import plot_design_matrix

data = {
    "func1_path": "nilearn_testing_data/sub-02_task-comparison_dir-ap_run-01_space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz",
    "func2_path": "nilearn_testing_data/sub-02_task-comparison_dir-ap_run-02_space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz",
    "events1": "nilearn_testing_data/sub-02_task-comparison_run-1_events_button_left-right.tsv",
    "events2": "nilearn_testing_data/sub-02_task-comparison_run-2_events_button_left-right.tsv",
    "confounds1": "nilearn_testing_data/sub-02_task-comparison_dir-ap_run-01_desc-confounds_timeseries.tsv",
    "confounds2": "nilearn_testing_data/sub-02_task-comparison_dir-pa_run-02_desc-confounds_timeseries.tsv",
}
tr = 2
n_runs = 2
# Get a default MNI152 grey matter mask
mask = load_mni152_gm_mask(resolution=3)
# Downsample both the functional images and the mask to 3 mm isotropic
# resolution to speed up processing and reduce memory usage
for run in range(n_runs):
    print(f"Downsampling run {run+1}...")
    fmri_img = data[f"func{run+1}_path"]
    # Resample the functional image to 3 mm isotropic resolution
    fmri_img = resample_img(fmri_img, target_affine=np.diag([3, 3, 3]))
    data[f"func{run+1}"] = fmri_img
    # Resample the mask to the functional images' grid
    mask = resample_img(
        mask,
        target_affine=fmri_img.affine,
        target_shape=fmri_img.shape[:3],
        interpolation="nearest",
    )
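# Optional sanity check (not in the original script): both runs and the mask
# should now share the same 3 mm grid
print(data["func1"].shape[:3], data["func2"].shape[:3], mask.shape)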
# Calculate the design matrices for each run
design_matrices = []
for run in range(n_runs):
    print(f"Processing run {run+1}...")
    fmri_img = data[f"func{run+1}"]
    print(f"Loading run {run+1} image...")
    fmri_img = load_img(fmri_img)
    # Load the events file
    events = pd.read_csv(data[f"events{run+1}"], sep="\t")
    # I saw some NaNs in the `global_signal` columns of the confounds files.
    # By default, load_confounds only loads the motion, high_pass, and wm_csf
    # variables; this selection is based on best practices from the literature.
    # You can add more variables if you think they are important. See the docs:
    # https://nilearn.github.io/stable/modules/generated/nilearn.interfaces.fmriprep.load_confounds.html
    confounds, _ = load_confounds(data[f"func{run+1}_path"])
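    # For reference (not in the original snippet), the call above is equivalent
    # to spelling out load_confounds' documented default strategy explicitly;
    # adjust the strategy if you want a different set of regressors:
    # confounds, _ = load_confounds(
    #     data[f"func{run+1}_path"],
    #     strategy=("motion", "high_pass", "wm_csf"),
    # )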
    frame_times = np.arange(fmri_img.shape[-1]) * tr
    print("Creating design matrix...")
    design_matrix = make_first_level_design_matrix(
        frame_times,
        events,
        hrf_model="spm",
        drift_model="cosine",
        high_pass=1 / 128,
        add_regs=confounds,
    )
    # Put the design matrix in the list
    design_matrices.append(design_matrix)
    print(design_matrix)
    plot_design_matrix(design_matrix)
    plt.show()
    design_matrix.to_csv(f"design_matrix_run{run+1}.csv")
    print(f"Created design matrix for run {run+1}")
flm = FirstLevelModel(
    t_r=tr,
    hrf_model="spm",
    mask_img=mask,
    smoothing_fwhm=2.4,
    n_jobs=1,
)
# Fit the FirstLevelModel on both runs at once
flm.fit(
    [data[f"func{run+1}"] for run in range(n_runs)],
    design_matrices=design_matrices,
)
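# From here on (a sketch, not part of the original script) you can compute
# contrast maps. The condition names below are assumptions based on your
# events file names; check design_matrix.columns for the actual names:
# z_map = flm.compute_contrast("left - right", output_type="z_score")
# z_map.to_filename("button_left-right_zmap.nii.gz")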