Results of fmriprep-rodents

Hi @KFDK,

I have relabeled your post as software support and added the corresponding template. You can see that there is information missing that would help us debug your issue. Please edit your post to include this information.

From a previous post it seems like you don’t have a T1 image, yet the summary seems to think you do. Did you try to pass a T2 image as a T1? I imagine that wouldn’t work very well.

Best,
Steven

I have added the information required by the template, plus some additional information that should clarify my problem in light of your comments. Hope someone can help!

Hi @KFDK,

Indeed it is not a good result for now. One of the issues here is that fMRIPrep expects a T1w image with a T1w contrast, and giving it a T2w contrast must be messing up the part that normalizes the subject anatomical image to the template image. Also, did you check that your original T2w image is “close” in space, and with a similar orientation, to the template image you are using, i.e. Fischer344?
If not, you may want to bring your T2w image closer to the Fischer344 template and try again.
Also, if you have some tools to do the skull stripping yourself, you can try to pass the skull-stripped version of your image to fMRIPrep with the option --skull-strip-t1w skip, to see if the downstream pipeline is working well.

hi @jsein
it seems like the orientation of my image does not match the orientation of the Fischer344 template, but i have not been able to fix that (see image). do you have recommendations on how to go about this?

The first image is my data, and the second image is the template

Hi, do you think that the labels (R, L, A, P, S, I) are correct on your image? If not, you will have to correct them.

Here is a procedure I use when the labels are wrong on an image:

I use two command lines, one from AFNI and one from FSL. First you need to look at the data storage in FSLeyes (you have to determine toward which direction each dimension is increasing):

  • in x, the cursor is going toward the neck when x increases: I (Inferior)
  • in y, the cursor is going posterior when y increases: P (Posterior)
  • in z, the cursor is going left (opposite to the fiducial marker) when z increases: L (Left)

=> The data storage is ‘IPL’ in the FSL convention and ‘SAR’ in the AFNI convention (AFNI and FSL use opposite conventions; why keep things simple? :wink: ). So the first command is from AFNI:
3drefit -deoblique -orient SAR sub-enya_ses-01_T1w.nii.gz

=> the labels are now correct, but the display is not standard in FSLeyes. So to get a correct display, we now use FSL:

fslreorient2std sub-enya_ses-01_T1w.nii.gz sub-enya_ses-01_T1w_correct_orient.nii.gz

=> the image is now ready to work with!
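
If you have nibabel installed, you can quickly confirm the result in Python (just an optional sanity check):

import nibabel as nib

img = nib.load("sub-enya_ses-01_T1w_correct_orient.nii.gz")
print(nib.aff2axcodes(img.affine))  # should now print ('R', 'A', 'S')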

thanks a lot for your reply! however, i am fairly certain that the labels are correct. i also tried your procedure and ran the commands you mentioned, but the images look just the same afterwards. it seems to me that i need to do some kind of rotation; it looks like the second and third axes need to be swapped, but i don’t know how.

Are you able to display both images in a viewer such as FSLeyes?

Hi @KFDK, my name is Eilidh and I’m one of the developers of fMRIPrep-rodents.

First: I should include the disclaimer that fMRIPrep-rodents is still very much in its infancy. Although I really welcome your enthusiasm to use it, we are not quite in a place where we can encourage its use unless users really want to get involved with debugging, etc., because we are not as established as the sister project just yet.
Importantly, one example of a known problem in the workflow is that we have not yet integrated susceptibility distortion correction into fMRIPrep-rodents, so EPI-to-anatomical transformations are less than ideal. Please look carefully at your reports to make sure that the data processing is of a suitable quality before proceeding to any analysis.

One of the issues here is that fMRIPrep expects a T1w image with a T1w contrast, and giving it a T2w contrast must be messing up the part that normalizes the subject anatomical image to the template image.

This is not true for fMRIPrep-rodents. Although I plan to address the previously mentioned problem shortly so that we can support both T2w and T1w images, T2w anatomicals are the convention in small animal imaging, so fMRIPrep-rodents currently expects a T2w image as input (at least for now).

Having said that, the problem you have experienced has been correctly identified as being rooted in the differing orientation of your image with respect to the template used for brain extraction. This step is currently handled by a second tool called NiRodents, so this is actually not a problem with fMRIPrep-rodents per se. I’ve talked about some of these issues previously here.

Until we work out other urgent issues, we require all images to be RAS+ oriented, but I understand that orientation conventions are all over the place in the rodent imaging community, so a lot rests on the data conversion to NIfTI. Is it possible for you to share your image header? How did you convert your data to NIfTI?
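
For example (a minimal sketch assuming nibabel; the filename is a placeholder), you could paste the output of:

import nibabel as nib

img = nib.load("sub-01_T2w.nii.gz")  # placeholder filename
print(img.header)                    # full NIfTI header
print(img.affine)                    # voxel-to-world affine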


Thanks for your reply! I use dcm2niix to convert my images to nifti. When I do so, I get the message “Error: Anatomical Orientation Type (0010,2210) is QUADRUPED: rotate coordinates accordingly”. I am unsure how to rotate them properly. For example, should it only be the affine matrix that I rotate?

Could you perhaps share the affine you got for your image after conversion to NIfTI?

I came across this explanation of the QUADRUPED Anatomical Orientation Type: Anatomical Orientation Type Attribute – DICOM Standard Browser

I think that means that you have to check that the axes are labelled correctly, with R, L, A, P, S, I pointing in the right directions.

There was a fairly lengthy discussion about dcm2niix conversion of rodent data on github.

The short version is that the problem comes down to inconsistent standards between groups and depends on 1) the system that the data were recorded on, 2) whether the position of the animal (supine vs prone) is accurately recorded, and then 3) the conversion tool itself.

thanks for replying, both of you. i am new to the field, and this whole rodent-orientation business has really confused me; i am unsure how to go about it. The image below shows the affine matrix of my anatomical image. I have also printed the output of aff2axcodes, which shows that the orientation is (as expected) not correct, since the corresponding output for the template is R, A, S. I am just not sure how to fix it.

[image: affine matrix and aff2axcodes output]

If it makes you feel better, it still makes my head hurt and I’m not so new to the field.

The issue, as far as I can tell, is that the axcodes seem to correspond to the same orientation, but the axcodes of your image follow a different convention from that of the template. The template uses an anatomical convention (i.e. R, A, S correspond to the rat’s right, the rat’s anterior, and the rat’s superior), and I’d guess that your image follows a scanner convention (i.e. using the same orientation as if the subject were a primate, regardless of the subject’s orientation in the scanner).

The correct way to fix this is to take a deep dive into the data conversion (i.e. from the scanner format, presumably 2dseq, to dicom and then to nifti) so as to maintain fidelity to the original data at each stage.

The not-so-correct way (read: hacky, but probably easier) is to change the order of the affine’s rows so that they correspond to the correct directions. This does not preserve the original data, so it is problematic in a few ways: it reduces transparency, and it may also cause problems downstream (e.g. with directional data such as fieldmaps or diffusion).

I have some code that I had to use when we relied on an in-house converter from 2dseq to NIfTI. I suspect it might not be plug-and-play for you, since I don’t know which axes correspond to which directions in your image, and you may not need any rotations at all, but it might give you somewhere to start. I would really like to emphasise that this is not the best option; now that we use brkraw for our data conversion, this code is no longer necessary for our own purposes.
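
For illustration, here is a minimal nibabel sketch of that kind of affine-row reordering. To be clear, this is not the in-house code mentioned above; the filename and the particular permutation are hypothetical and have to be worked out from your own data:

import numpy as np
import nibabel as nib

img = nib.load("sub-01_T2w.nii.gz")  # hypothetical filename
aff = img.affine.copy()

# hypothetical permutation: swap the affine rows for the second and
# third voxel axes, then flip one sign; the right permutation depends
# entirely on how your animal was positioned in the scanner
new_aff = aff[[0, 2, 1, 3], :]
new_aff[1, :] *= -1

# only the affine changes; the voxel data are left untouched
fixed = nib.Nifti1Image(np.asanyarray(img.dataobj), new_aff, img.header)
nib.save(fixed, "sub-01_T2w_relabeled.nii.gz")

Check the result with nib.aff2axcodes(fixed.affine) and by overlaying it on the template in FSLeyes before rerunning the pipeline.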

hi @eilidhmacnicol and @jsein

first of all, thanks a lot for all your help. being new to the community, it is very nice that you are so helpful and kind!! with your help, i have managed to get what i deem to be much better results from fmriprep-rodents after matching my anatomical image to the template:

I have reoriented the functional images in the same manner as the anatomical image, but the brain mask and the (temporal/anatomical) CompCor ROIs do not look too good to me. Honestly, i don’t know if this is simply within the realm of what you would expect with this kind of data, but it does not look right to me:

Is there perhaps something else i need to do?

regards,
Kelvin

That’s very kind, Kelvin; we’re glad to have you in the community :slight_smile: I’m also sorry that it is this way, but preclinical imaging certainly comes with a steeper learning curve than human imaging, at least until standardization improves.

It’s good to see your results look better now that your orientation matches, although the segmentation results still look a little off (the algorithm is picking up a lot of white matter in the grey matter; this is likely a separate issue that will need to be looked into down the line).

A more pressing issue is that your functional images should not have been reoriented in exactly the same manner as your anatomical images. In the anatomical report, you can see that the first row is the axial plane (going inferior-superior), the second row is the sagittal plane (going left-right) and the final row is the coronal plane (going posterior-anterior). Conversely, the functional report rows are coronal (but this time reversed; anterior-posterior), sagittal, and axial (and you can see the olfactory bulb is at the bottom of the image, compared to the anatomical where the olfactory bulb is at the top of the image).

This suggests to me that the rows in your functional affine do not correspond to the same rows in your anatomical affine. This is not surprising, as structural and functional data can certainly be acquired in different directions. Consequently, the brain mask and the anatomical information from the template are not referencing their intended targets in the functional data.
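
As a first-pass check (a sketch assuming nibabel; the filenames are placeholders for your runs), you can compare the orientation codes of the two images, keeping in mind that matching codes alone do not guarantee the anatomy is labelled correctly:

import nibabel as nib

anat = nib.load("sub-01_T2w.nii.gz")             # placeholder filenames
func = nib.load("sub-01_task-rest_bold.nii.gz")
print(nib.aff2axcodes(anat.affine))  # e.g. ('R', 'A', 'S')
print(nib.aff2axcodes(func.affine))  # if these differ, each image needs its own fix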

hi @eilidhmacnicol
I think I have managed to fix the orientation, but it looks like the brain mask from the BOLD signal is too large:

Also, I have had a hard time figuring out exactly which workflow is executed when I run fmriprep-rodents. Where can I find that information? When I look in the GitHub repo, it just references the pipeline for the regular fMRIPrep, but I assume there are differences.

Hi @KFDK - is this a rat image or a mouse image? fMRIPrep-rodents has only been tested on rats, although support for images derived from mice is on our roadmap.

In particular, this is because our brain extraction in NiRodents works well for rats but is relatively untested in mice. Early tests show that, despite similarities in overall anatomy, the size difference between rats and mice seems to be too large for the registration algorithm to overcome, so using a rat template with a mouse image has not been successful. We have a mouse template available in TemplateFlow, but it is possible that more optimisation is required for it to work for mice out of the box.

These are definitely things we want to fix and we’d hope to do it sooner with more contributors.

Hello, I want to use fMRIPrep to register mouse images to WHS, but fMRIPrep always reports errors. Can you help me take a look?

Hi @eeee -

First, I should point out that the WHS template is a rat template, not a mouse template, and I’ve found that the size difference is too large for registration algorithms to overcome.

Unfortunately, mouse data is not yet supported for fMRIPrep-rodents, as I mentioned in the message above.

Your error indicates that the pipeline is struggling to find the WHS brain mask from templateflow. It is difficult to help more without knowing how you are running fMRIPrep-rodents. Do you have access to the directory that the pipeline is looking in?
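
As a quick check (assuming the templateflow Python client is installed, and that the mask exists at the resolution you request), you can ask TemplateFlow for it directly:

from templateflow import api as tflow

# returns the local path(s), downloading the files first if necessary
print(tflow.get("WHS", desc="brain", suffix="mask"))

TemplateFlow caches files under $TEMPLATEFLOW_HOME (by default ~/.cache/templateflow); if you run fMRIPrep-rodents in a container, that directory has to be visible inside it.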

As @eilidhmacnicol points out, mouse template alignment has some difficulties, such as the differences in scale that can occur between mouse subjects and the template. A couple of solutions can work, though: first, allow the alignment to search over a large range of scales. Another approach is to prescale the data, before or as part of the alignment procedure, using an approximate scale factor. In AFNI, we include those methods in @animal_warper (options -super_size and -init_scale), along with -feature_size, which allows animals of different sizes to be aligned. The output can then be used with afni_proc.py.

Here is the latest Allen Brain mouse CCF3 template, converted to a convenient NIfTI format with correct voxel dimensions and a useful orientation. I put the 0,0,0 coordinate at the anterior commissure. The left and right sides are almost completely symmetric, and there is no online definition from Allen on how to tell the difference.

https://afni.nimh.nih.gov/pub/dist/atlases/mouse/AllenCCF3_2020/AFNI_AllenMouseCCF3.tgz