DictLearning with precomputed brain masks - Nilearn

Hello everyone,
I’m new to fMRI analysis, currently working with resting state data. I’ve managed to run fMRIPrep and compute several correlation matrices on different atlases/regions.
Now I’m trying to run Dictionary Learning, but I don’t know how to specify the precomputed brain masks for each subject.
If I leave the “mask” parameter empty in the DictLearning constructor, I receive the error “All masks should have the same affine” when doing fit().

I’ve tried initializing a MultiNiftiMasker object and passing it as the mask parameter, but it is not possible to specify a list of brain masks (mask_img must be a single image).

Basically, what I want to do is run DictLearning on several subjects using the files [...]_desc-brain_mask.nii.gz computed by fMRIPrep for each subject.
Is there something fundamentally wrong that I’m missing?

Thanks

Could you post your code with some comments to clarify your intentions? That will make it easier for outsiders to debug what’s going on. Also, in which space are the masks that you are using from your dataset?

I’ve tried initializing a MultiNiftiMasker object and passing it as the mask parameter, but it is not possible to specify a list of brain masks (mask_img must be a single image).

Basically, what I want to do is run DictLearning on several subjects using the files […]_desc-brain_mask.nii.gz computed by fMRIPrep for each subject.

I guess one way of getting around this would be to instantiate a new DictLearning object for every subject with the corresponding mask and fit that on the subject’s data, while leaving all other parameters the same. I would be careful with this, though, as I am not sure whether or not it will have other knock-on effects (I am not too familiar with DictLearning).

Sure, here is the code:

# Imports (assuming the nilearn >= 0.9 module layout)
from nilearn.maskers import MultiNiftiMasker
from nilearn.decomposition import DictLearning

# This will raise an error, because mask_img doesn't accept lists
masker = MultiNiftiMasker(mask_img=subjects['brainmask'], # List of [...]brain_mask.nii files
                          standardize=True,
                          detrend=True,
                          high_pass=0.01,
                          low_pass=0.08,
                          t_r=2.,
                          smoothing_fwhm=9.)
dict_learn = DictLearning(n_components=5,
                          high_pass=0.01,
                          low_pass=0.08,
                          mask=masker, # Use previously computed mask
                          t_r=2.,
                          standardize=True,
                          smoothing_fwhm=9.)
dict_learn.fit(subjects['funcdata'],
               confounds=subjects['confounds'])

Also in which space are the masks that you are using from your dataset?

The masks have shape 55x65x55, except for one subject whose mask is 51x60x55. Maybe I can use the same brain mask for everyone as long as it has the same shape?

I guess one way of getting around this would be to instantiate a new DictLearning object for every subject with the corresponding mask and fit that on the subject’s data, while leaving all other parameters the same.

I’m not sure if I follow. How would I combine several DictLearning (or CanICA) objects?

Thank you for the reply

How would I combine several DictLearning (or CanICA) objects?

Yeah, that wouldn’t work, I guess. Looking at the documentation for this, I think you will have to simply use one mask for the group. You could create a group-level mask out of the single-subject masks using something like intersect_masks.

As for the MultiNiftiMasker, I think the name refers to masking multiple images, not using multiple masks. I would simply take the NIfTI image returned by intersect_masks and use it as the mask in DictLearning, but I am not too experienced with this particular application, to be fair.

The masks have shapes 55x65x55. There is one subject whose shapes are 51x60x55.

You can resample images and masks to be the same shape using resample_to_img if they are in the same reference space (for example MNI space).


Great, I’ll move forward with your proposal. Thanks!