Mask image neuroimaging

Hi!
Please help!
I have 262 NIfTI images ['image1.nii', 'image2.nii', ...] (each of shape (182, 218, 182)) and a mask mask.nii of shape (182, 218, 182).
I want to apply the mask to the images, but I get an error:

Traceback (most recent call last):

  File "<ipython-input-61-88ba8b5de2d2>", line 5, in <module>
    masked_data = apply_mask(img, masker)

  File "C:\Users\moham\Anaconda3\lib\site-packages\nilearn\masking.py", line 707, in apply_mask
    mask, mask_affine = _load_mask_img(mask_img)

  File "C:\Users\moham\Anaconda3\lib\site-packages\nilearn\masking.py", line 63, in _load_mask_img
    % values)

ValueError: Given mask is not made of 2 values: [ 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48]. Cannot interpret as true or false

Please help me

It looks like your mask NIfTI is an atlas made of 48 regions?
The nilearn mask functions only take mask images consisting of 0s and 1s.
If you want to use all 48 regions as a single mask, set every value above 0 to 1 (binarize the image).
If you want to mask each individual region, extract that region and binarize it before passing it to the mask function.
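Both options boil down to a simple comparison on the label values. A minimal sketch with plain NumPy (the toy array stands in for the atlas data you would get from `nib.load('mask.nii').get_fdata()`; with nilearn you would typically do this via `nilearn.image.math_img`):

```python
import numpy as np

# Toy stand-in for the atlas data; real labels run from 0 to 48
labels = np.array([0, 3, 0, 17, 48, 0])

# Option 1: whole-atlas mask -- every labelled voxel becomes 1
whole_mask = (labels > 0).astype(np.int8)

# Option 2: single-region mask, e.g. region 17 only (hypothetical choice)
region_17 = (labels == 17).astype(np.int8)

print(whole_mask)  # [0 1 0 1 1 0]
print(region_17)   # [0 0 0 1 0 0]
```

Either resulting array contains only 0 and 1, which is what `_load_mask_img` checks for.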

How did you extract this mask image (mask.nii)?

import nibabel as nb
from nilearn import image

# Load the atlas and binarize it: any voxel with a label > 0 becomes 1
masker = nb.load('C:/Users/JHU-ICBM-labels-1mm.nii.gz')
mask = image.math_img("img > 0", img=masker)
nb.save(mask, "mask.nii.gz")

And now I can use it:

from nilearn.regions import Parcellations

masked = 'C:/Users/mask.nii.gz'
x = ['ze.nii.gz', 'ke.nii.gz', ...]
rena = Parcellations(method='rena', n_parcels=5000, mask=masked)
rena_fit = rena.fit_transform(x)

Is it correct like this?

Looks correct to me.

OK, thank you very much!
@KamalakerD, I have another question:
x = ['aa.nii', 'bb.nii', 'gg.nii', ...]  # we have 262 subjects

for i in range(100):
    x = resample(x)  # to get a different distribution every time
    rena = Parcellations(method='rena', n_parcels=1000, mask=masked)
    rena_fit = rena.fit_transform(x)

So, when I run 100 randomized parcellations (each parcellation with 1000 parcels) and 262 subjects, I have a memory problem; when I reduce the number of subjects to 50, it takes 5 hours to run!
But I need to do this with all 262 subjects :confused:
How can I reduce the computation time, please?

ERROR:

  File "C:\Users\Anaconda3\lib\site-packages\nilearn\regions\rena_clustering.py", line 128, in _make_edges_and_weights
    weights_unmasked = _compute_weights(X, mask_img)

  File "C:\Users\Anaconda3\lib\site-packages\nilearn\regions\rena_clustering.py", line 53, in _compute_weights
    weights_deep = np.sum(np.diff(data, axis=2) ** 2, axis=-1).ravel()

  File "C:\Users\Anaconda3\lib\site-packages\numpy\lib\function_base.py", line 1273, in diff
    a = op(a[slice1], a[slice2])

MemoryError

@KamalakerD

Out of 262 subjects, you choose 50 at random to fit 1000 parcels, and this is repeated for 100 iterations?
Is that what you are trying to do?

What does this resample do?

x = ['aa.nii', 'bb.nii', 'gg.nii', ...]

I use the resampling method (bootstrap) to change the order of the subjects, so that each parcellation gets a different distribution of parcels.

The goal is to build 100 randomized parcellations, each with 1000 parcels.
That is why, for each iteration i (each parcellation):
1/ we resample x (to change the order);
2/ then we fit the parcellation with ReNA to do the clustering.
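Assuming `resample` performs a bootstrap draw (sampling with replacement), its effect on the list of file names can be sketched with plain NumPy:

```python
import numpy as np

x = ['aa.nii', 'bb.nii', 'gg.nii']  # toy list standing in for 262 subjects

# A bootstrap draw: pick len(x) subjects *with replacement*, so some
# subjects can appear more than once and others not at all
rng = np.random.RandomState(0)
idx = rng.randint(len(x), size=len(x))
x_boot = [x[i] for i in idx]

print(len(x_boot))            # 3 (same size as x)
print(set(x_boot) <= set(x))  # True: only names from the original list
```

Each of the 100 iterations would then fit its parcellation on a different bootstrap sample like `x_boot`.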

The problem is that when I take a dataset of only 60 subjects the method works very well (~5 hours), but when I take a dataset of 262 subjects I have a memory problem.


Can you try by changing the data dtype of the images?

from nilearn._utils import check_niimg_4d
x = check_niimg_4d(x, dtype='int32')
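The dtype matters because loaded NIfTI data is typically handed to you as float64. A back-of-the-envelope estimate, assuming 262 images of shape (182, 218, 182) as in the first post, shows how tight 8 GB is:

```python
# Rough memory footprint of the full dataset held in RAM at once
n_voxels = 182 * 218 * 182  # voxels per image
n_subjects = 262

bytes_f64 = n_subjects * n_voxels * 8  # float64: 8 bytes per voxel
bytes_f32 = n_subjects * n_voxels * 4  # float32: half of that

print(round(bytes_f64 / 1e9, 1))  # 15.1 (GB)
print(round(bytes_f32 / 1e9, 1))  # 7.6 (GB)
```

So even halving the dtype leaves the raw data at roughly 7.6 GB, nearly filling 8 GB of RAM before the clustering allocates anything, which is consistent with 60 subjects fitting while 262 does not.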

Yes, I tried it and I have the same problem.
My computer has 8 GB of RAM; maybe that is the problem?
When I try it with a dataset of 60 subjects it works, but with 262 I get a memory error.


When you fit ReNA on 262 images, you get a MemoryError? Irrespective of whether it is inside the loop?
Thanks!

Yes! And when I take a dataset with just 60 subjects, it works very well without any error!
thank you

Did you solve the memory error issue?

No :sleepy: @KamalakerD

Do you think you can upload the data somewhere and share a link to it with me?
I will try to get my hands on it to see if I can propose a solution.
Thanks!

Otherwise, there is this code in the nilearn sandbox repo: https://github.com/nilearn/nilearn_sandbox/blob/master/nilearn_sandbox/mass_univariate/rpbi.py

Maybe the repo has some memory-efficient code.

If you use that code, please cite this article: B. Da Mota, V. Fritsch, G. Varoquaux, T. Banaschewski, G. J. Barker, A. L. Bokde, U. Bromberg, P. Conrod, J. Gallinat, H. Garavan et al., "Randomized parcellation based inference," NeuroImage, vol. 89, pp. 203–215, 2014.

OK, thank you very much! I will try it, and if it works I will cite the article.

Hello @KamalakerD,
in this function:

_build_parcellations(all_subjects_data, mask, n_parcellations=100,
                     n_parcels=1000, n_bootstrap_samples=None,
                     random_state=None, memory=Memory(cachedir=None),
                     n_jobs=1, verbose=False)

I don't understand this part:

  draw = rng.randint(n_samples, size=n_samples * n_parcellations)
  draw = draw.reshape((n_parcellations, -1))

and in the function :

_ward_fit_transform(all_subjects_data, fit_samples_indices,
                    connectivity, n_parcels, offset_labels)

why does it use data_fit in the fit step but all_subjects_data in the transform step?

Could you please explain this to me?

That’s the randomization step. It generates random indices to build parcellations.

Ideally, parcellations should be built on a subsample of the full data to avoid possible overfitting.
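A minimal sketch of what those two lines produce, assuming n_samples subjects and n_parcellations parcellations as in _build_parcellations:

```python
import numpy as np

n_samples, n_parcellations = 262, 100
rng = np.random.RandomState(0)

# n_parcellations rows of n_samples random subject indices, drawn with
# replacement: row i is the bootstrap sample used to *fit* parcellation i
draw = rng.randint(n_samples, size=n_samples * n_parcellations)
draw = draw.reshape((n_parcellations, -1))

print(draw.shape)                               # (100, 262)
print(draw.min() >= 0, draw.max() < n_samples)  # True True
```

That is also why the fit and transform inputs differ: the fit (clustering) only sees the bootstrap subsample selected by one row of draw, while the transform is applied to all_subjects_data so that every subject gets parcel-level features from every parcellation.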

I suggest that you read the paper and code documentation thoroughly.

Best,
Kamalakar