ACPC alignment using FSLeyes nudge

I tried using the FSLeyes Nudge tool to do ACPC realignment. (I wanted to do this because the error ERROR: talairach_afd: Talairach Transform: transforms/talairach.xfm ***FAILED*** (p=0.0455, pval=0.0034 < threshold=0.0050) came up in QSIPrep, and upon looking at the T1w and DWI images, it was clear that the origin was not set at the AC and the orientation was also odd.)

Upon searching, I thought that the Nudge tool in FSLeyes could be used to set the origin and do ACPC realignment. However, when I did so, saved the image, and loaded it again, the same (unaligned) image still popped up. Only when I set it to “WorldView” did it show the ACPC-realigned image.

Is it OK to use this image for QSIPrep (i.e., an image that only shows up aligned when viewed through “WorldView”)? I am asking because when I viewed the image in mrview, it still showed the unaligned image, with the warning:


mrview: [WARNING] qform and sform are inconsistent in NIfTI image "/Users/eunmi/Desktop/QSIPREP/error_CHA_SPM_trial/sum/anat/haha.nii.gz" - using sform

Also, are there other tools that (hopefully automatically) do ACPC realignment? (Or is using FSLeyes Nudge in the first place completely wrong?)

I know my question may seem dumb, but any help would be greatly appreciated!

Hi,

When using FSLeyes Nudge, you can save the transformation necessary to reorient your image and apply it with flirt. In that case, both the sform and qform matrices are changed in the resulting image.
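For example, a minimal sketch of that workflow (file names here are hypothetical; nudge_flirt.mat stands for the matrix saved from Nudge in FLIRT format):

# Apply the Nudge transform with flirt, resampling into the original image's grid
flirt -in sub-01_T1w.nii.gz \
      -ref sub-01_T1w.nii.gz \
      -applyxfm -init nudge_flirt.mat \
      -interp trilinear \
      -out sub-01_T1w_acpc.nii.gz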


To do ACPC realignment automatically, you could register your image to a template image with rigid-body alignment (6 degrees of freedom, dof), with FSL flirt for instance.

A command along these lines can do it for you (a minimal sketch, assuming FSL's bundled MNI152_T1_1mm template and hypothetical file names):
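# Sketch: rigid-body (6 dof) registration of a T1w image to the MNI template;
# file names are hypothetical, template path assumes a standard FSL install
flirt -in sub-01_T1w.nii.gz \
      -ref $FSLDIR/data/standard/MNI152_T1_1mm.nii.gz \
      -dof 6 \
      -omat acpc.mat \
      -out sub-01_T1w_acpc.nii.gz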


Remember that if you apply any rotation to the DWI image, you need to rotate your b-vectors accordingly. You can use the Automatic Registration Toolbox (NITRC: Automatic Registration Toolbox: Tool/Resource Info) on your T1 and apply the transform to the DWI. That said, I have never had any issues with letting QSIPrep do the realignment for me.


Thank you! I’ll be sure to try this! :slight_smile:

Thank you for your insight! If my understanding is correct, are you saying that as long as ACPC alignment is done well on the T1w image, the DWI image usually gets realigned well automatically by QSIPrep?

Not quite. I was saying that you can rotate the T1 with that toolbox and then apply the same rotation to the DWI image (making sure you rotate the bvecs accordingly). You could also try only rotating the T1 and having QSIPrep rotate the DWI to the T1; it is worth a shot.
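If you stay within FSL rather than ART, one way to apply the same rigid matrix to the whole 4D DWI series is applyxfm4D (a sketch, assuming acpc.mat is the matrix from the T1 registration and the file names are hypothetical; the bvecs still have to be rotated by the rotation part of acpc.mat separately):

# Apply one matrix to every volume of the 4D series
applyxfm4D sub-01_dwi.nii.gz sub-01_T1w_acpc.nii.gz sub-01_dwi_acpc.nii.gz acpc.mat -singlematrix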

Thank you for your answer Steven! I think it clears things up a bit :slight_smile: If you don’t mind, I wanted to ask further:

In your most recent answer, you said that I should apply the same rotation to the DWI image as to the T1. Does your answer assume that the DWI and T1 images are already roughly aligned with each other (not necessarily ACPC-aligned) in the first place? (If they are not, applying the same rotation as the T1 to the DWI would not result in the DWI getting ACPC-aligned too.) (Sorry for my lack of basic knowledge. I haven’t had any formal education in neuroscience, so there are big gaps in my knowledge.)

Yes, the DWI and T1 images should be roughly aligned. So you can get both images roughly into ACPC space, and then let QSIPrep take care of the more fine-grained alignment.


Thank you for the answer :slight_smile:

As you suggested, I tried automatically registering the T1 images to MNI152_T1_1mm with rigid-body alignment (6 dof), but it failed. Suspecting that this could be because our subjects are young, I tried child atlases (BIC - The McConnell Brain Imaging Centre: NIHPD-obj 1), but it still failed. Is this a common occurrence? (I have attached an example of our data, with the crosshair set to coordinates (0,0,0).)

Which command did you use, and which version of the template (skull-stripped or not)? My guess is that your image is quite “far” from the template at the origin.
What may help is to remove the neck part (use FSL robustfov, for instance).
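A minimal sketch of that cropping step (file names hypothetical):

# Crop the FOV to remove the neck before attempting the registration
robustfov -i sub-01_T1w.nii.gz -r sub-01_T1w_cropped.nii.gz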

For my non-human primate images, where this happens a lot, I found this software, which is quite powerful: NiftyReg - CMIC

And I use a command of this kind:

reg_aladin -flo your_image.nii.gz -ref MNI_template.nii.gz -rigOnly -smooR 1 -res your_image_in_acpc.nii.gz -noSym -ln 12 -lp 10 -nac

The linear registration is done with a block-matching strategy, which is quite fast and robust. I could easily register newborn baboon brains to an adult baboon template with this tool.

Thank you for the response!

Currently, I have decided to do the ACPC alignment manually, as there were only about 6 subjects who had the problem. (I’ll be sure to use your suggestion when I encounter datasets where many subjects suffer from this problem.) Anyway, I manually applied the transformation, using the 4x4 transformation matrix acquired via FSLeyes Nudge and applying it with FSL using the command below:

/usr/local/fsl/bin/flirt \
-in   /Users/eunmi/Desktop/QSIPREP/CHA_crash_subjects/sub-200087/anat/sub-200087_T1w.nii.gz \
-applyxfm -init /Users/eunmi/Desktop/QSIPREP/CHA_crash_subjects/sub-200087/anat/xform.mat \
-out /Users/eunmi/Desktop/QSIPREP/CHA_crash_subjects/sub-200087/anat/sub-200087_T1w_REALINGED.nii.gz \
-paddingsize 0.0 -interp trilinear \
-ref /Users/eunmi/Desktop/QSIPREP/CHA_crash_subjects/sub-200087/anat/sub-200087_T1w.nii.gz

I found that, as the picture below shows, some subjects had their brains cropped! I tried to remedy this by increasing the padding from 0 to 10, but it yielded the same results… Could there be something I am doing wrong?

Thank you (I really am grateful!!)


Sorry for the late answer.

It is an interesting problem. In fact, I could reproduce what you see on an image from my side. It is a marmoset anatomical image with a huge FOV compared to the size of the brain. At first it looked like applying Nudge, or applying the affine matrix calculated by Nudge with FLIRT, gave identical results. The brains were overlapping perfectly. But when I changed the colormap, I could see this:

The blue image is the image produced by FLIRT, which is on top of the orange image produced by applying Nudge within FSLeyes.

In fact, Nudge only modifies the affine, while flirt applies the affine matrix and resamples the image into the reference image’s grid, which may involve some cropping of the corners, as you can see on your brains with tight FOVs.

So to reproduce what Nudge is really doing, you have to save the affine matrix created by Nudge when you move your image manually, but instead of choosing the “FLIRT” format, choose the “voxel-to-world” format. This saves the new affine matrix as it would appear in the image’s header (let’s say you save this transform under the name xform.mat). To apply this new affine to your image, write it into the header with fslorient, e.g. fslorient -setsform $(cat xform.mat) my_image_to_move => no more cropping!
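Put together, a minimal sketch of that header-only approach (file names hypothetical; writing the same matrix into the qform keeps it consistent with the sform and avoids the mrview warning quoted above):

# Keep the original file untouched and edit the copy's header; no resampling happens
cp my_image_to_move.nii.gz my_image_acpc.nii.gz
# Write the Nudge "voxel-to-world" matrix (16 values in xform.mat) into the sform and qform
fslorient -setsform $(cat xform.mat) my_image_acpc.nii.gz
fslorient -setqform $(cat xform.mat) my_image_acpc.nii.gz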