Nibabel extracted image slice not aligned to original image

I have T1-w and FA brain volumes aligned to each other, and I want to extract random sagittal slices of size 128x128 from both images (same slice position for T1 and FA).
I’m trying to do this task with Python and I think it is working, but when I save the slices in NIfTI format their coordinates change with respect to the original volume they were taken from.
Do you have any clue why this is happening? Could you help me, please?
Or, if there is an easier way to get these slices, I would appreciate your suggestions.

This is the code I’m using:

import os
import fnmatch
import random

import nibabel as nib
from scipy import ndimage

d = '/path/to/images'
number_slices = 5
r = 64

for root, dirs, files in os.walk(d, topdown=True):
    for file in files:
        if fnmatch.fnmatch(file, "t1_brain_fcm.nii.gz"):
            path = os.path.join(root, file)
            t1 = nib.load(path)
            img = t1.get_fdata()
        if fnmatch.fnmatch(file, "FA_map_reg.nii.gz"):
            path2 = os.path.join(root, file)
            fa = nib.load(path2)
            img2 = fa.get_fdata()
            CoM = tuple(map(round, ndimage.center_of_mass(img)))
            axis = list(range((CoM[0] - r) + 1, CoM[0] + r))
            for i in range(number_slices):
                sd = random.choice(axis)
                final_t1 = img[sd, ((CoM[1] - r) + 1):(CoM[1] + r),
                                   ((CoM[2] - r) + 1):(CoM[2] + r)]
                final_fa = img2[sd, ((CoM[1] - r) + 1):(CoM[1] + r),
                                    ((CoM[2] - r) + 1):(CoM[2] + r)]
                # Create a NIfTI T1 image object and save it
                nifti_t1 = nib.Nifti1Image(final_t1, t1.affine)
                nib.save(nifti_t1, os.path.join(root, 'slices/' + root.split('/')[6] + '_t1_{}.nii.gz'.format(i)))
                # Create a NIfTI FA image object and save it
                nifti_fa = nib.Nifti1Image(final_fa, fa.affine)
                nib.save(nifti_fa, os.path.join(root, 'slices/' + root.split('/')[6] + '_fa_{}.nii.gz'.format(i)))

I’m getting the slices as wanted:

(Screenshot from 2023-07-01 10-47-44: the extracted slice)

But when I try to overlay them with the original volume, their coordinates are different as you can see:

Thank you!

Hi @Al-yhuwert,

When you load images (e.g., with img = nilearn.image.load_img(img_path)) you can get the affine with affine = img.affine. When you create your new nifti images out of whatever data_matrix you want to save (e.g., with img_to_save = nilearn.image.new_img_like(reference_img, data_matrix, affine=affine, copy_header=True)), you can pass in your affine and tell the function to copy the header from a reference image (e.g., your original image). When you save out your image (img_to_save.to_filename('/path/to/outfile.nii.gz')) it should now be aligned to your original image.


Hi @Steven,
Thanks for the answer,
I think that’s what I’m doing with nibabel: t1 = nib.load(path) already carries the affine transformation (t1.affine) and the header information (t1.header).
When creating the NIfTI object I’m passing the affine as a parameter, nib.Nifti1Image(final_t1, t1.affine); however, it’s not working.

Do results change if you use Nilearn instead?

I don’t have it, I’ll try it and I’ll let you know :slight_smile:

No @Steven, I’m still getting the same results

Can you explain what you are trying to do with this block?

Yes, as I said, I want sagittal slices of size 128x128, centered around the X axis. I didn’t know how to get this, so I thought computing the center of mass would give me the ‘center of the brain’ (CoM = tuple(map(round, ndimage.measurements.center_of_mass(img)))).
Then I created a list of values along the X axis (the axis variable), from which I draw random values (the sd variable) to use as the index in position [0] when slicing the image.
Finally, I sliced img, taking slices centered at sd along axis 0 and at CoM along axes 1 and 2, adding/subtracting r = 64 to CoM to get slices of 128x128 (img[sd, ((CoM[1]-r)+1):(CoM[1]+r), ((CoM[2]-r)+1):(CoM[2]+r)]).

So, this block is giving me random locations in X axis to slice the volume there, with slice sizes of 128x128

Does this correspond to the current resolution of your images, or is this more arbitrary? Is the idea that you want the 128 voxels to cover the entirety of the original field of view, or you only want a restricted field of view that is afforded by those 128 voxels?

Perhaps, you would find that cropping your image (removing empty border space) and then resampling to your 128 resolution would be more straightforward, and not deal with any center of mass calculations. Then you can just collect all voxels in the Y/Z directions of a given index in X.

128x128 is arbitrary (it could be 64x64, or whatever). The original image resolution is bigger. The idea is not to cover the entire image, just that field of view.
So I want to extract sagittal patches of that size, without resampling the image.

What about getting all of the Y/Z indices in the slice, and then zero/nan out those outside of the central 128x128 field of interest?

I don’t understand, how would that be?

data_slice = np.array(img_data[x_coord, :, :])  # copy of the sagittal slice at x_coord
square_size = 128  # how large you want the square to be
data_size = np.shape(data_slice)  # dimensions of the slice (Y, Z)
y_center = int(data_size[0] / 2)  # y-center of slice
z_center = int(data_size[1] / 2)  # z-center of slice
data_slice[(y_center + square_size // 2):, :] = 0  # zero out upper y-indices
data_slice[:(y_center - square_size // 2), :] = 0  # zero out lower y-indices
data_slice[:, (z_center + square_size // 2):] = 0  # zero out upper z-indices
data_slice[:, :(z_center - square_size // 2)] = 0  # zero out lower z-indices

Ok, I got it
That’s a way of getting the slices, but when I save them as NIfTI the same problem happens: the slice is not aligned with the original image it was extracted from.

Try doing a similar thing, but don’t slice the data - just zero out all the x coordinates you are not interested in. That way it is still a 3D image, and the affine/header should act appropriately.

Thanks @Steven. I will use the slices as input to a CNN, so keeping them as a 3D image won’t work then.

Do you know if there’s something similar to the mrgrid crop command in nilearn or nibabel? That MRtrix command works, but I don’t know how to give it the X/Y/Z indices with the specifications I mentioned before, that is, varying X (randomly but within a range of values) and adjusting Y/Z to make the image the desired size.

If you are using these as inputs to a CNN, why do they need to be aligned to the original image, as long as you are getting the correct data from the image? I imagine if you are inputting these values into a classifier, you are likely going to vectorize the values anyway, so I am not sure I see the value of making these slices into nifti files.

Also, nilearn has a function for cropping your image.

Hmm, I think it’s my way of visually reviewing each step, and it was weird to me that slicing the images this way changed their coordinates, so I wanted to know if I did something wrong. Other than that, you’re right, I don’t see any other purpose at the moment for having them aligned.
Probably I’m making my life complicated for no reason hehe
Thanks for your help today!

No problem, and sorry I couldn’t directly answer your question, good luck!

Likely this is a result of dropping an axis and shifting the remaining ones: by sampling img[sd, ...], you’re going from PxQxR to QxR, while your affine still encodes the spatial positions of a 3D image. Unless you want to do the affine algebra yourself, the easiest way to do this will be by using the Nifti1Image.slicer attribute:

img = nib.load("t1_brain_fcm.nii.gz")
sag0 = img.slicer[0:1, ...]
sagN = img.slicer[N:N+1, ...]

See the nibabel documentation on image slicing.