Hi everyone,
I’m working with NIfTI files and I’m having trouble figuring out how to get them in the right orientation. I converted these files from DICOM using dcm2niix. I’d like to understand how to plot a slice of the NIfTI array in a certain orientation, specifically axial. I thought the conversion from DICOM to NIfTI would take care of the orientation, such that plotting data_array[:,:,slice_nr] would give a slice in the axial plane. This works for most images, but not all: some are plotted with the right and left sides flipped, or simply in another plane.
Opening the images in a NIfTI viewer (ITK-SNAP) works fine, so I assume my naive approach is simply wrong.
I’ve uploaded some example NIfTI images here. Plotting either the FLAIR or the T2 image using data_array[:,:,slice_nr] gives an axial plane, while doing the same with the T1 image gives a sagittal plane.
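For reference, this is roughly what I’m doing (file name hypothetical):

```python
import nibabel as nib
import matplotlib.pyplot as plt

# Load the converted NIfTI and take the raw voxel array
img = nib.load("sub-01_T1w.nii.gz")  # hypothetical file name
data_array = img.get_fdata()

# Naively slice along the third voxel axis, expecting an axial plane
slice_nr = data_array.shape[2] // 2
plt.imshow(data_array[:, :, slice_nr].T, cmap="gray", origin="lower")
plt.show()
```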
Indeed, slicing the data array directly is not guaranteed to give you the physical slice you want, nor is it guaranteed that the data will be oriented the way you would like.
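You can see this by checking how each file’s voxel axes map onto world axes, e.g. with nibabel’s aff2axcodes (file names hypothetical):

```python
import nibabel as nib

for path in ["flair.nii.gz", "t2.nii.gz", "t1.nii.gz"]:  # hypothetical names
    img = nib.load(path)
    # ('R', 'A', 'S') means the third axis runs inferior->superior, so
    # data[:, :, k] is axial; if the third code is 'R', that slice is sagittal
    print(path, nib.aff2axcodes(img.affine))
```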
Hi @pbellec, thanks for your answer and the links you shared, it helps a lot to have a starting point.
I need to resample my data so that slicing the array in a certain way always gives the same plane. I’ve just tried using nilearn.image.resample_img() (see the sketch after the list) with mixed results:
- The different sequences from a single patient are always mapped to the same space, so plotting data_array[:,:,slice_nr] gives the same plane for all sequences, but
- this plane is not always axial (though most of the time it is), meaning that volumes from different patients can have the same affine but still data arrays that are oriented differently.
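Roughly the call I’m making, as a minimal sketch (the file name and the 1 mm diagonal target affine are assumptions for illustration, not necessarily what I originally used):

```python
import numpy as np
import nibabel as nib
from nilearn.image import resample_img

img = nib.load("sub-01_T1w.nii.gz")  # hypothetical file name

# Resample onto a 1 mm isotropic grid defined by a diagonal affine;
# nilearn computes the output shape from the image's bounding box
target_affine = np.diag([1.0, 1.0, 1.0, 1.0])
resampled = resample_img(img, target_affine=target_affine,
                         interpolation="continuous")
print(nib.aff2axcodes(resampled.affine))
```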
I wonder if this means that the coordinate system the affine maps to is scanner-centered rather than patient-centered? So maybe the patients whose data arrays are still oriented “wrong” were lying on their side/stomach in the scanner?
Is there any way of fixing this?
@lidialuq please be aware that this behavior of dcm2niix is intentional and required by the BIDS standard. It will automatically reslice 3D acquisitions to the axial plane for you. However, there is a reason it preserves the slice orientation for EPI sequences: slice-timing correction tools require it. If you reslice coronal or sagittal EPI data, you will make slice-timing correction impossible.
This has nothing to do with the patient’s position in the scanner. It reflects the native orientation in which the scans were acquired and stored to disk. We often choose between axial, sagittal, and coronal orientations based on the field of view in question (e.g. the brain is longer along the A-P axis than the L-R or S-I axis), the direction and consequences of field wrap, and the direction of EPI and susceptibility-based distortions.
dcm2niix preserves the mapping from image space to world space using the NIfTI s-form and q-form. The appropriate behavior for your tool is to use these transforms to display the images correctly.
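For instance, with nibabel it is the affine that carries the geometry, not the array’s axis order (a minimal sketch; the file name and voxel index are placeholders):

```python
import nibabel as nib
from nibabel.affines import apply_affine

img = nib.load("epi.nii.gz")  # placeholder file name

# The s-form/q-form affine maps voxel indices (i, j, k) to world
# coordinates (x, y, z) in mm; viewers like ITK-SNAP rely on this
# mapping rather than on the raw order of the axes on disk
print(apply_affine(img.affine, [10, 20, 30]))
```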
@Chris_Rorden That explains why the array is oriented the way I expected in some images but not in others, since there’s a combination of 3D and EPI sequences. However, in this case I’m not going to process the images further, so I’m okay with not preserving slice orientation. I need axial slices from these .nii files to use as input to a machine learning model.
Shouldn’t the resample_img(img, target_affine) function from nilearn.image still work? It takes an image and resamples it so that its affine is the one passed as an argument (as far as I can see, the affine is equal to the s-form?). The way I understand it, images with the same affine should have the same orientation in image space (the data array), but this isn’t the case in my data. What am I missing?
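To make sure I’m comparing the right thing, this is the kind of check I’m running (hypothetical file names):

```python
import numpy as np
import nibabel as nib

img_a = nib.load("patient1_t1.nii.gz")  # hypothetical file names
img_b = nib.load("patient2_t1.nii.gz")

# If the affines really are identical, the voxel-to-world mapping is the
# same, so data_array[:, :, slice_nr] should give the same anatomical
# plane for both volumes
print(np.allclose(img_a.affine, img_b.affine))
print(nib.aff2axcodes(img_a.affine), nib.aff2axcodes(img_b.affine))
```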
I’ve also tried as_closest_canonical(img) from nibabel. The documentation states:
Reorder the data to be closest to canonical (RAS+) orientation. This transform reorders the voxels and modifies the affine matrix so that the voxel orientations are nearest to:
1. First voxel axis goes from left to Right
2. Second voxel axis goes from posterior to Anterior
3. Third voxel axis goes from inferior to Superior
This seems to be exactly what I’m looking for, yet when I plot the output as data_array[:,:,slice_nr] I get both axial and coronal planes (though, interestingly, always the same plane for all sequences of one subject).
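For reference, this is how I’m calling it (file name hypothetical):

```python
import nibabel as nib

img = nib.load("sub-01_T1w.nii.gz")  # hypothetical file name
canonical = nib.as_closest_canonical(img)

# After reordering, every volume should report ('R', 'A', 'S'), so
# data_array[:, :, slice_nr] should always be an axial slice
print(nib.aff2axcodes(canonical.affine))
data_array = canonical.get_fdata()
```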
@lidialuq I think this depends on the goal for your machine learning model.
If your aim is a system that can learn to classify better than chance, rotating to the closest canonical orientation and running a 2D machine-learning model on the slices is likely to work.
If your goal is to develop a system accurate enough to change the standard of care, I think you will want to account for many of the predictable sources of variability in your data (e.g. 3D vs. EPI sequences, brain shape, 3D morphology). Both processing the data and adding nuisance variables as features seem like sensible ways to improve your prediction accuracy.