Hi!
Please help!
I have 262 NIfTI images ['image1.nii', 'image2.nii', …], each of shape (182, 218, 182), and a mask mask.nii of shape (182, 218, 182).
I want to apply the mask to the images, but I get this error:
Traceback (most recent call last):
  File "<ipython-input-61-88ba8b5de2d2>", line 5, in <module>
    masked_data = apply_mask(img, masker)
  File "C:\Users\moham\Anaconda3\lib\site-packages\nilearn\masking.py", line 707, in apply_mask
    mask, mask_affine = _load_mask_img(mask_img)
  File "C:\Users\moham\Anaconda3\lib\site-packages\nilearn\masking.py", line 63, in _load_mask_img
    % values)
ValueError: Given mask is not made of 2 values: [ 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48]. Cannot interpret as true or false
It looks like your mask NIfTI is actually an atlas made of 48 regions.
The nilearn mask functions only accept mask images consisting of 1s and 0s.
If you want to use all 48 regions together as a mask, you will need to set every value above 0 to one (binarize the image).
If you want to mask each region individually, extract each region and binarize it before passing it to the mask function.
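A minimal sketch of both options with NumPy (the small array here is a stand-in for the atlas data you would load with nibabel, e.g. `nib.load('mask.nii').get_fdata()`; in nilearn you could equivalently binarize with `math_img("img > 0", img=mask_img)`):

```python
import numpy as np

# Stand-in for the atlas data from mask.nii (integer labels, 0 = background).
atlas_data = np.array([[[0, 3], [48, 7]]])

# Option 1: binarize the whole atlas -> one mask covering all regions.
whole_mask = (atlas_data > 0).astype(np.uint8)

# Option 2: extract and binarize a single region, e.g. the label 7.
region_7_mask = (atlas_data == 7).astype(np.uint8)

print(np.unique(whole_mask).tolist())     # -> [0, 1]
print(np.unique(region_7_mask).tolist())  # -> [0, 1]
```

Wrap the resulting array back into a `nibabel.Nifti1Image` with the original affine before passing it to `apply_mask`.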
Ok, thank you very much @KamalakerD! I have another question:
x = ['aa.nii', 'bb.nii', 'gg.nii', …]  # we have 262 subjects
for i in range(100):
    x = resample(x)  # to get a different distribution each time
    rena = Parcellations(method='rena', n_parcels=1000, mask=masked)
    rena_fit = rena.fit_transform(x)
So, when I run this with 100 randomized parcellations (each parcellation with 1000 parcels) and 262 subjects, I run into a memory problem. When I reduce the number of subjects to 50, it still takes 5 hours!
But I need to do this with all 262 subjects.
How can I reduce the computation time, please?
  File "C:\Users\Anaconda3\lib\site-packages\nilearn\regions\rena_clustering.py", line 128, in _make_edges_and_weights
    weights_unmasked = _compute_weights(X, mask_img)
  File "C:\Users\Anaconda3\lib\site-packages\nilearn\regions\rena_clustering.py", line 53, in _compute_weights
    weights_deep = np.sum(np.diff(data, axis=2) ** 2, axis=-1).ravel()
  File "C:\Users\Anaconda3\lib\site-packages\numpy\lib\function_base.py", line 1273, in diff
    a = op(a[slice1], a[slice2])
x = ['aa.nii', 'bb.nii', 'gg.nii', …]
I use resampling (bootstrap) to change the order of the subjects, so that each parcellation gets a different distribution of parcels.
The goal is to run 100 randomized parcellations, each parcellation with 1000 parcels.
That is why, for each iteration i (each parcellation):
1/ we resample x (to change the order);
2/ then, we train the parcellation with ReNA to do the clustering.
The problem is that with a dataset of only 60 subjects the method works very well (~5 hours), but with a dataset of 262 subjects I run out of memory.
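For reference, the bootstrap step (1/) can be sketched with plain NumPy; the file names below are placeholders, and `sklearn.utils.resample` would do the same thing:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for a reproducible example

x = ['aa.nii', 'bb.nii', 'gg.nii']  # placeholder subject list

# Bootstrap: draw len(x) subjects with replacement, so each of the
# 100 parcellations is trained on a different resampling of subjects.
idx = rng.integers(0, len(x), size=len(x))
x_boot = [x[i] for i in idx]

assert len(x_boot) == len(x)       # same size as the original list
assert set(x_boot) <= set(x)       # only contains original subjects
```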
Yes, I tried it and I get the same problem.
My computer has 8 GB of RAM; maybe that is the problem?
Because when I try it with a dataset of 60 subjects it works, but when I try it with 262 I get a memory error.
Do you think you could upload the data somewhere and share a link pointing to it with me?
I will try to get my hands on it to see if I can propose a solution.
Thanks!
If you use that code, please cite this article: B. Da Mota, V. Fritsch, G. Varoquaux, T. Banaschewski, G. J. Barker, A. L. Bokde, U. Bromberg, P. Conrod, J. Gallinat, H. Garavan et al., "Randomized parcellation based inference," NeuroImage, vol. 89, pp. 203–215, 2014.