ROI voxel count using Python

I have a predefined ROI (.nii.gz image format) which I want to overlay on a spatially normalized PET image (.nii.gz format) to calculate the total count and count per voxel. I was wondering if anyone has a Python implementation, or any idea how to find the count of voxels in each of these ROIs.

Assuming your original ROI image had intensity values of either 0 or 1, and the interpolation used by the normalization/reslicing to the PET image makes the threshold 0.5 sensible, you could write:

import nibabel as nib

fnm = 'roi.nii.gz'
thresh = 0.5
nii = nib.load(fnm)
# volume of one voxel in mm^3
mm = abs(nii.header['pixdim'][1] * nii.header['pixdim'][2] * nii.header['pixdim'][3])
img_np = nii.get_fdata()  # get_data() is deprecated in recent nibabel
vox_count = (img_np > thresh).sum()
print('{} mm^3 greater than {}'.format(vox_count * mm, thresh))
vox_sum = img_np.sum()
print('{} mm^3 (assuming 0..1)'.format(vox_sum * mm))

Note that if your original data was in the range of 0…1, the unthresholded volume may provide a slightly more accurate estimate of the normalized volume, as it will capture partial volume effects (e.g. voxels in PET space that are partially in the ROI and partially outside).
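To make the difference concrete, here is a small self-contained sketch using a made-up 1D array of resliced ROI intensities (the `mm` voxel volume is an assumed value; with a real image you would compute it from the header as above):

```python
import numpy as np

# hypothetical resliced ROI intensities in 0..1 (partial-volume voxels at the edges)
img_np = np.array([0.0, 0.3, 0.9, 1.0, 1.0, 0.6, 0.1, 0.0])
thresh = 0.5
mm = 8.0  # assumed voxel volume in mm^3 (e.g. 2x2x2 mm voxels)

vox_count = (img_np > thresh).sum()  # only the 4 voxels above 0.5 survive
vox_sum = img_np.sum()               # 3.9: keeps the partial-volume fractions

print('thresholded volume:   {} mm^3'.format(vox_count * mm))
print('unthresholded volume: {} mm^3'.format(vox_sum * mm))
```

Here the thresholded estimate (32 mm^3) counts whole voxels, while the unthresholded sum (31.2 mm^3) credits the edge voxels with their fractional membership.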


Thank you Chris! It helps me a lot.

But I want to calculate vox_count in each ROI (I have 116 ROI). Above method gives just a single value. Also, my ROI image doesn’t have 0 or 1 value (it has value like 0 for outside and 2001, 2002, 5011, 9082,… for each ROI). Sorry, I am very new to this field.

Make sure you use nearest-neighbor interpolation when normalizing your region of interest. A voxel that is on the border between area 17 and area 19 should not be interpolated as area 18! With that proviso, the solution is:

import nibabel as nib
import numpy as np

fnm = 'roi.nii.gz'
nii = nib.load(fnm)
# volume of one voxel in mm^3
mm = abs(nii.header['pixdim'][1] * nii.header['pixdim'][2] * nii.header['pixdim'][3])
img_np = nii.get_fdata()  # get_data() is deprecated in recent nibabel
# iterate over the labels actually present, rather than every integer up to max()
for i in np.unique(img_np):
	if i == 0:
		continue  # 0 is background
	vox_count = (img_np == i).sum()
	print('{} mm^3 for region {}'.format(vox_count * mm, int(i)))

The other option is to apply a higher-order interpolation (e.g. linear, sinc, spline) to each region independently, and use my previous method. This will account for partial volume better. You may find the PowerPoint slides from my Image to Inference class useful, specifically slide 170, "Palette indexed images".
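As a sketch of that per-region approach: split the multi-label atlas into one binary mask per label before any reslicing, so each mask can be resampled with a high-order interpolation and then summed as in the first snippet. The label values below are made up for illustration; with real data they would come from your atlas, loaded via nibabel as above.

```python
import numpy as np

# hypothetical labeled atlas slice: 0 = background, other values = ROI labels
atlas = np.array([[0, 2001, 2001],
                  [2002, 2002, 0],
                  [0, 5011, 5011]])

# one float binary mask per label; each could then be resliced with
# linear/sinc/spline interpolation and summed (partial-volume aware)
masks = {int(lbl): (atlas == lbl).astype(float)
         for lbl in np.unique(atlas) if lbl != 0}

for lbl, mask in masks.items():
    print('region {}: {} voxels'.format(lbl, int(mask.sum())))
```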


Thanks a lot Chris! It saved me time, and now I've figured out how to do it. Your PowerPoint slides are very helpful. Once again, thank you.

Hi @Chris_Rorden - this is a great thread! I believe I have a similar question that I am hoping you may have some suggestions about. I used fMRIPrep to preprocess some fMRI data. fMRIPrep's output gives some great QC, especially regarding motion. However, I am looking for a more quantitative method of assessing my coverage. Ideally, I want to overlay my final preprocessed functional image on a parcellation atlas, and then extract information to examine coverage (e.g., # of voxels in an ROI, some sort of measure of signal strength within a region, etc.). I have been looking around to see if there are any existing directions and/or scripts to accomplish at least something along these lines. Do you happen to have any advice with this? Thanks so much!
Nicole

@NRob you could apply this script across a 4D image (3D space + time). The issue is knowing whether variability reflects poor spatial alignment or a temporal artifact, so you would want to incorporate some measures of motion as well. My Matlab scripts illustrate this approach, with relevant links. The tool you process images with should also provide some quality checks, for example fMRIPrep's reports and AFNI's thresholds/feedback on censoring. So the best approach will depend on your pipeline, and QA tools already exist for most pipelines.
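As one possible starting point (a sketch, not a validated QA pipeline): average the functional series over time, then report the voxel count and mean signal per atlas label. The arrays below are synthetic stand-ins for images you would load with nibabel as in the earlier snippets, the ROI labels are made up, and treating nonzero mean signal as "covered" is an assumption you may want to replace with a proper signal threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic stand-ins: a 4x4x4x10 "functional" series and a labeled atlas
func_4d = rng.random((4, 4, 4, 10))
atlas = np.zeros((4, 4, 4), dtype=int)
atlas[:2, :, :] = 2001      # hypothetical ROI labels
atlas[2:, :2, :] = 2002

func_4d[3, :, :, :] = 0.0   # simulate missing coverage in part of region 2002

mean_img = func_4d.mean(axis=3)  # average over time

for lbl in np.unique(atlas):
    if lbl == 0:
        continue  # background
    region = atlas == lbl
    covered = region & (mean_img > 0)  # assumed coverage criterion
    print('region {}: {}/{} voxels covered, mean signal {:.3f}'.format(
        lbl, covered.sum(), region.sum(), mean_img[region].mean()))
```

With real data you would replace `func_4d` and `atlas` with `nib.load(...).get_fdata()` on your preprocessed BOLD image and the atlas resliced (nearest-neighbor) into the same space.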