Using atlases in @animal_warper

Hi, I'm using @animal_warper for marmoset brains and have some questions:

  • It has the -atlas flag. Can we use a cortical and a subcortical atlas at the same time? I'm guessing not? But if I manage to merge the two atlases into one file and then pass that through this option, it should work?
  • I saw a couple of examples using @animal_warper (see here and here) that do resampling (of the voxel size, I guess?) before input to @animal_warper. Just curious: can @animal_warper handle resampling during alignment, or does the resampling have to be done before the data go into @animal_warper?

Thanks!

Howdy-

Q1: Is it possible to use more than 1 atlas simultaneously? Yes, totally. You can add more than 1. They are treated independently and in parallel. You can also provide a convenient abbreviation for each, which I find useful. So, something like:

@animal_warper \
    -atlas ATLASNAME_CORTICAL.nii.gz ATLASNAME_SUBCORTICAL.nii.gz ATLASNAME_OTHER+tlrc.HEAD \
    -atlas_abbrevs CORT SUBCORT OTHER \
 ...

is reasonable.

Q2: Those resampling cases might be doing separate things. What resampling are you wanting to know about—of input data (in subject space), or of reference data (in reference space)?

Some cases where resampling is useful:

  • removing obliquity from data in the original space. In AFNI, we have adjunct_deob_around_origin to help do this in a convenient way that both preserves the coordinate origin and does not blur the data.
  • 3dZeropad is useful, I think, to remove excess slices that can throw off the initial center-of-mass alignment (see the small sketch after this list).
    … and maybe others.
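
As a small sketch of the 3dZeropad case (the dataset name and slice counts here are placeholders, not a recommendation):

# negative counts remove that many slices from the named side, trimming
# empty space that can throw off the initial center-of-mass alignment
3dZeropad -I -20 -S -20 -prefix anat_trim.nii.gz anat.nii.gz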

But indeed, the input data will be resampled as part of the alignment process, when they are warped into template space.

–pt

Hiii, thank you for the answers. Follow-ups:
I compiled a script to prepare MRI data for input to @animal_warper. It has the following steps:

  • check obliquity; if needed, run 3dWarp -oblique2card
  • check orientation; if needed, run 3dLRflip (checking X, Y, Z separately)
  • 3drefit
  • 3dresample

The goal is to dynamically check any input MRI image and do the necessary preprocessing for @animal_warper. Ideally, given any input MRI, this dynamic process makes it suitable for @animal_warper. What do you think about these steps? Am I missing anything? Would adjunct_deob_around_origin be more suitable than -oblique2card? And how do we know when we should use 3dZeropad without a visual inspection?

thanks!!

Hi-

Scripting/functionality side note:

  • You can check obliquity with 3dinfo -obliquity .. and see whether the output is “0.000” or not, but for an if condition I would use 3dinfo -is_oblique .., which returns 1 or 0 to use programmatically. That is just cleaner than worrying about the precision of the actual floating-point obliquity value. You can then use the actual 3dinfo -obliquity .. scalar value separately for whatever you need.
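
As a minimal sketch of that check in the shell (the dataset name is just a placeholder):

# 1 if the dataset header stores an oblique transform, 0 otherwise
is_obl=$(3dinfo -is_oblique anat.nii.gz)

if [ "${is_obl}" = "1" ]; then
    echo "++ oblique: $(3dinfo -obliquity anat.nii.gz) deg from plumb"
    # ... deal with the obliquity here ...
fi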

Comments on dealing with obliquity:

  • With either -deoblique or -oblique2card, 3dWarp deals with obliquity by applying that rotation+shift, so the output no longer has obliquity stored in its header. The result has the “good” property that the output dataset shows the data where it was in scanner space; however, that is probably not something you want to do to raw/unprocessed data, because it is also a regridding process that inherently requires interpolation, blurring the data and changing the values in some way.
  • 3drefit -deoblique .. will just purge the obliquity information, with the “good” result that the data values are not interpolated, but with the bad result that the coordinates will no longer refer to where the dataset was in the scanner and can sometimes end up very far from where they should be.
  • adjunct_deob_around_origin tries to get the most good with the least bad: the output has no obliquity, the dataset is not interpolated at all (no data values change), and while the output will not be exactly where it was in scanner space, the coordinate origin location (x,y,z)=(0,0,0) is preserved. Therefore it should be close to the same spot where it was recorded in the scanner, which is typically a useful property for later alignments (assuming the originally stored coordinates were well made). A sketch of all three approaches follows below.
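
To make those trade-offs concrete, here is a minimal command sketch of the three approaches (dataset names are placeholders, and the adjunct_deob_around_origin options are my assumption, so check its -help):

# 1) apply the obliquity rotation+shift: coordinates stay true to scanner
#    space, but the data are regridded (interpolated, hence slightly blurred)
3dWarp -deoblique -prefix anat_deob_warp.nii.gz anat.nii.gz

# 2) purge the obliquity info from the header only (3drefit edits in place,
#    so work on a copy): no interpolation, but the coordinates shift
3dcopy anat.nii.gz anat_deob_purge.nii.gz
3drefit -deoblique anat_deob_purge.nii.gz

# 3) remove obliquity with no interpolation, preserving the coordinate origin
#    (option names assumed; see 'adjunct_deob_around_origin -help')
adjunct_deob_around_origin -input anat.nii.gz -prefix anat_deob_origin.nii.gz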

I'm not quite sure about the aim of the (multiple) 3dLRflip commands. Is that to reorient the data, like from sphinx position to human-like? If so, we have a new-ish program called desphinxify to help with that programmatically. You do have to check what intermediate orientation should be used, and verify it, but for a given protocol that usage should then be constant.
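
For reference, a hedged example of what that call might look like (the orientation code and file names here are placeholders that must be verified for your own protocol):

# reorient a sphinx-position acquisition; '-orient_mid RAI' is only an
# example value, not a recommendation for any particular scanner setup
desphinxify                        \
    -orient_mid RAI                \
    -input      anat_sphinx.nii.gz \
    -prefix     anat_desphinx.nii.gz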

–pt

Thank you! Ah, the 3dLRflip thing... well, okay, long story. My project is actually about functional ultrasound (fUS) images, and we want to get an atlas for fUS. Mapping the atlas directly to fUS is not easy, so we collect MRI data, align the MRI to the atlas (through a template), and then align the fUS to the MRI.
We had some left-right and mis-orientation (metadata and data not matching) problems with the fUS images, and also some left-right problems with a previous MRI image, so I kind of overreacted when compiling this preproc script. I guess if I don't know beforehand whether the (meta-)data have problems, it won't make sense to use desphinxify either, right?

Okay, I read the desphinxify documentation again. Yes, my goal is to reorient, for example from IAL to RAS. So can I use desphinxify -orient_mid RAS instead of 3drefit -orient RAS? And after desphinxify, should I run 3dresample -orient RAS again to resample the voxel size?
Sorry, I'm a bit confused by desphinxifying vs. re-orienting.

Howdy-

Just to make sure we aren’t using the term “re-orienting” differently, there are a couple things it could mean, and different programs to use for each case:

  • If your data and your header information don't match, that is, you want to change the direction the brain is pointing and see it look different (which happens often in animal scanning, where, e.g., macaques are scanned in “sphinx” position), then you would want to use 3drefit or 3dLRflip or desphinxify.
  • If your brain already looks the way you want it to in a GUI, and you don't want it to change position, then you are happy with the data and header information matching. You can still change the orientation field in the header, without having the dataset move in space, by using 3dresample. Those are two different types of functionality (see the sketch after this list).
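
A minimal sketch of the two cases (dataset names and orientation codes are placeholders):

# case 1: data and header disagree -> change how the brain appears.
#         3drefit edits the header in place, so work on a copy
3dcopy anat.nii.gz anat_fixhdr.nii.gz
3drefit -orient RAI anat_fixhdr.nii.gz

# case 2: the display is already correct -> only change the on-disk storage
#         order; the brain does not move in the viewer
3dresample -orient RAS -prefix anat_ras.nii.gz -input anat.nii.gz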

The question of obliquity, and dealing with it, is separate. With anatomical volumes we often want to not have to deal with it, but we want to remove it in a way that does not blur the data; adjunct_deob_around_origin is good for that.

–pt

ah, I see! Thanks for the explanation! I encountered both cases.

In the first case/example, I have an MRI that is PIR, and when I look at it in Freeview, A and S should switch positions. So I ran

desphinxify -orient_mid IPR -input input_file.nii -prefix output_file.nii.gz -no_clean

but the output is not right.

here is the original image


here is the one after desphinxify

Also, I updated AFNI to the latest version to use desphinxify, and then something went wrong with Freeview/FreeSurfer, so I could only use fsleyes to visualize.

I also notice that desphinxify does an obliquity adjustment too. My question is: is there a rule or cookbook that can help me figure out how to choose -orient_mid? Obviously I chose the wrong one.

The second case is easier. I guess if the input is LPS but the template is RAS, as long as they both look normal/correct in a GUI, I'll just do 3dresample to change LPS to RAS, right?

desphinxify is intended for datasets that start in some variation of sphinx position: left-right is assumed to be correct, but A-P and I-S are swapped. If you have the more general case, then you may want to follow a procedure like this one, or one of the others mentioned in that thread. If you ALWAYS have the same acquisition orientation (and anyone who uses your script in the distant future has the same one too), then you can just use a 3drefit -orient ... command to reorient your data, as in the sketch below.
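
A hedged sketch of that fixed-protocol case (both orientation codes below are placeholders that would have to be verified once for your acquisition setup):

# only relabel the header if the dataset is still in the expected raw
# orientation, so the fix is not applied twice
cur_or=$(3dinfo -orient anat.nii.gz)

if [ "${cur_or}" = "PIR" ]; then
    3drefit -orient ASR anat.nii.gz   # tell AFNI what the axes really are
fi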

Hiii, thanks for the reply! (Sorry, I was away at a department retreat for the past few days.)
I tried the following steps:

3dcopy mydata.nii.gz mydata
3dinfo mydata+orig
to3d '3D:-1:0:128:128:33:mydata+orig.BRIK'

3dcopy output:

++ 3dcopy: AFNI version=AFNI_24.1.16 (Jun  6 2024) [64-bit]
** AFNI converts NIFTI_datatype=64 (FLOAT64) in file /Users/yibeichen/Desktop/fusi/fusi.nii to FLOAT32
     Warnings of this type will be muted for this session.
     Set AFNI_NIFTI_TYPE_WARN to YES to see them all, NO to see none.
*+ WARNING: NO spatial transform (neither qform nor sform), in NIfTI file '/Users/yibeichen/Desktop/fusi/fusi.nii'

3dinfo output:

++ 3dinfo: AFNI version=AFNI_24.1.16 (Jun  6 2024) [64-bit]

Dataset File:    fusi+orig
Identifier Code: AFN_yLaLnmhciQBuIYQCRewhJQ  Creation Date: Wed Jun 12 12:57:11 2024
Template Space:  ORIG
Dataset Type:    Anat Bucket (-abuc)
Byte Order:      LSB_FIRST [this CPU native = LSB_FIRST]
Storage Mode:    BRIK
Storage Space:   20,062,464 (20 million) bytes
Geometry String: "MATRIX(-1,0,0,0,0,-1,0,0,0,0,1,0):192,151,173"
Data Axes Tilt:  Plumb
Data Axes Orientation:
  first  (x) = Left-to-Right
  second (y) = Posterior-to-Anterior
  third  (z) = Inferior-to-Superior   [-orient LPI]
R-to-L extent:  -191.000 [R] -to-     0.000     -step-     1.000 mm [192 voxels]
A-to-P extent:  -150.000 [A] -to-     0.000     -step-     1.000 mm [151 voxels]
I-to-S extent:     0.000     -to-   172.000 [S] -step-     1.000 mm [173 voxels]
Number of values stored at each pixel = 1
  -- At sub-brick #0 '#0' datum type is float:      4.50397 to       83020.4

to3d messages/warnings:

yibeichen@dhcp-10-29-164-213 fusi % to3d '3D:-1:0:128:128:33:fusi+orig.BRIK'
++ to3d: AFNI version=AFNI_24.1.16 (Jun  6 2024) [64-bit]
++ Authored by: RW Cox
++ It is best to use to3d via the Dimon program.
++ Counting images:  total=33 2D slices
++ Each 2D slice is 128 X 128 pixels
++ Image data type = short
++ Reading images: .................................
++ to3d WARNING: 134870 negative voxels (24.9449%) were read in images of shorts.
++               It is possible the input images need byte-swapping.
Consider also -ushort2float.
*+ WARNING: *** ILLEGAL INPUTS (cannot save) ***

Axes orientations are not consistent!
++ Making widgets++
++
 Hints disabled: X11 failure to create LiteClue window
++

I guess this one is important?

134870 negative voxels (24.9449%) were read in images of shorts.
It is possible the input images need byte-swapping.

Then if I click “view images”, I get the following:

I also tried MRIcroGL from here. For some reason, I couldn't get the correct results after rotating it…

It looks like your data is floating point. You can either change it to short integers or change the command to handle the floating point data.

Change this from

to3d '3D:-1:0:128:128:33:fusi+orig.BRIK'

to

to3d '3Df:-1:0:128:128:33:fusi+orig.BRIK'

Ah, sorry, first: I made a mistake by using the functional ultrasound image as the input in the last round… (the fUS image has the sphinx problem too, but I wanted to fix the sphinx issue in the MRI data here).

Here are the outputs from each of the three steps with the MRI data:

yibeichen@dhcp-10-29-164-213 fusi % 3dcopy 14_MEAN_S12_S13_1.nii anat
++ 3dcopy: AFNI version=AFNI_24.1.16 (Jun  6 2024) [64-bit]
** AFNI converts NIFTI_datatype=512 (UINT16) in file /Users/yibeichen/Desktop/fusi/14_MEAN_S12_S13_1.nii to FLOAT32
     Warnings of this type will be muted for this session.
     Set AFNI_NIFTI_TYPE_WARN to YES to see them all, NO to see none.

yibeichen@dhcp-10-29-164-213 fusi % 3dinfo anat+orig
++ 3dinfo: AFNI version=AFNI_24.1.16 (Jun  6 2024) [64-bit]

Dataset File:    anat+orig
Identifier Code: AFN_Ulqvm6xt8uxwoGLs8L9JQg  Creation Date: Wed Jun 12 14:29:12 2024
Template Space:  ORIG
Dataset Type:    Anat Bucket (-abuc)
Byte Order:      LSB_FIRST [this CPU native = LSB_FIRST]
Storage Mode:    BRIK
Storage Space:   127,401,984 (127 million) bytes
Geometry String: "MATRIX(0.01848,-0.022387,-0.177703,21.70634,0.181353,0.002281,0.018108,-51.01572,-9.91163e-09,-0.180897,0.02222,35.68187):324,384,256"
Data Axes Tilt:  Oblique (9.163 deg. from plumb)
Data Axes Approximate Orientation:
  first  (x) = Anterior-to-Posterior
  second (y) = Superior-to-Inferior
  third  (z) = Left-to-Right   [-orient ASL]
R-to-L extent:   -24.194 [R] -to-    21.706 [L] -step-     0.180 mm [256 voxels]
A-to-P extent:   -51.016 [A] -to-     7.864 [P] -step-     0.182 mm [324 voxels]
I-to-S extent:   -34.136 [I] -to-    35.682 [S] -step-     0.182 mm [384 voxels]
Number of values stored at each pixel = 1
  -- At sub-brick #0 '#0' datum type is float:            0 to          3034

----- HISTORY -----
[yibeichen@dhcp-10-29-164-213.dyn.MIT.EDU: Wed Jun 12 14:29:12 2024] {AFNI_24.1.16:macos_13_ARM_clang} 3dcopy 14_MEAN_S12_S13_1.nii anat

yibeichen@dhcp-10-29-164-213 fusi % to3d '3Df:-1:0:128:128:33:anat+orig.BRIK'
++ to3d: AFNI version=AFNI_24.1.16 (Jun  6 2024) [64-bit]
++ Authored by: RW Cox
++ It is best to use to3d via the Dimon program.
++ Counting images:  total=33 2D slices
++ Each 2D slice is 128 X 128 pixels
++ Image data type = float
++ Reading images: .................................
*+ WARNING: *** ILLEGAL INPUTS (cannot save) ***

Axes orientations are not consistent!
++ Making widgets++
++
 Hints disabled: X11 failure to create LiteClue window
++
.....

Here is the output using to3d '3Df:-1:0:128:128:33:anat+orig.BRIK':

But this time MRIcroGL worked when I manually rotated it.

The idea is to copy the dimensions from the 3dinfo command into the to3d command, noting which dimension is first, second, and third for the i,j,k storage indexing.

should be
'3Df:-1:0:324:384:256:anat+orig.BRIK'
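
As a hedged sketch of doing that programmatically (the prefix name and shell syntax are my own choices, not from this thread), the grid size can be pulled straight from 3dinfo:

# ni, nj, nk from the dataset header, formatted as the 3Df spec expects
dims=$(3dinfo -n4 anat+orig.HEAD | awk '{print $1":"$2":"$3}')

# rebuild the volume from the raw BRIK with the matching grid size
to3d -prefix anat_rebuilt "3Df:-1:0:${dims}:anat+orig.BRIK"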

ah! it worked! Now everything makes sense. I didn’t know what those numbers were. Thank you!
