fMRIPrep resamples BOLD images to another voxel dimension/orientation

Hi everyone,

I’m using fMRIPrep to preprocess my fMRI data. The voxel dimensions of the original BOLD images are 2.4 x 4 x 2.4 mm (we scanned coronally). However, after preprocessing the voxel dimensions change to 2.4 x 2.4 x 4 mm. Does anyone have an idea how to keep the original resolution? This change in voxel dimensions/orientation does not happen when preprocessing BOLD images that were scanned axially.
I guess one solution would be to use --template-resampling-grid 2mm, but I would really prefer to simply keep the original 2.4 x 4 x 2.4 mm voxels.

Thanks a lot!!!
Anna

Hi, we reorient images to RAS+. It looks like your input images are probably LIP (or RSA, or similar), so rotating the data matrix to RAS+ will move the second (4mm) dimension to the third.

Out of curiosity, do you have a particular use case for keeping preprocessed images in the original orientation?

Hi Chris,

Thanks for your answer! Is there any way to keep the second dimension as the second dimension? Or would I need to rotate my data matrix somehow before using fMRIPrep to keep the original dimensions? When the dimensions move from the second to the third, I suspect that the resampling reduces the data’s spatial accuracy.

Well, as I’m not really experienced with fMRI data yet, I thought it might be advantageous to preprocess the images in their original orientation. Then I could decide in a second step, at any time, whether it’s best to resample to 2 x 2 x 2 mm or any other voxel dimension. But I’m not sure if this is necessary…

Hi Anna,

To be clear, it’s not resampling, it’s reorienting, i.e., rotating the data matrix in order to flip the second and third axes. So a value that was indexed as datablock[i, j, k] will now be found at datablock[i, k, j]. Similarly, if we increment j in the original (RSA) image, we’re moving in the inferior-superior direction: datablock_rsa[i, j + 1, k] = datablock_ras[i, k, j + 1].

This needs to correspond to the same step in “real-world” space (RAS+), so incrementing j by 1 still moves ±4mm along the superior-inferior axis. Therefore, to keep the interpretation consistent, the voxel sizes (zooms) need to rotate along with the axes they describe.

Just to think through the consequences, imagine that the voxel sizes stay the same while we rotate the data matrix. Suppose we have an image with a (240mm)^3 field of view, in RSA orientation, with zooms of 2.4mm, 4mm and 2.4mm. Then we have a data block of 100x60x100 voxels. If we rotate to RAS orientation, our data block is now 100x100x60 voxels. If we don’t change the zooms, the new field of view will be 240mm x 400mm x 144mm, so we’ll get stretching along the anterior/posterior dimension (remember, the voxel values have not changed) and compression along the inferior/superior dimension.
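
To make that bookkeeping concrete, here is a minimal numpy sketch of the same example (the arrays are placeholders, no image data involved):

import numpy as np

# Toy RSA-oriented block from the example above: 100 x 60 x 100 voxels
# with zooms (2.4, 4, 2.4) mm, i.e. a 240 mm field of view on every axis.
data_rsa = np.zeros((100, 60, 100))
zooms_rsa = np.array([2.4, 4.0, 2.4])
print(np.array(data_rsa.shape) * zooms_rsa)   # [240. 240. 240.]

# Reorienting RSA -> RAS swaps the second and third axes of the array...
data_ras = np.swapaxes(data_rsa, 1, 2)        # shape is now (100, 100, 60)

# ...so the zooms must be swapped along with them to keep the FOV right.
zooms_ras = zooms_rsa[[0, 2, 1]]              # (2.4, 2.4, 4.0)
print(np.array(data_ras.shape) * zooms_ras)   # [240. 240. 240.]

# Keeping the old zooms would stretch/compress the field of view instead.
print(np.array(data_ras.shape) * zooms_rsa)   # [240. 400. 144.]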

The obvious question, then, is why reorient at all. The reason is that not all tools respect orientation information properly. Since fMRIPrep uses tools from a variety of software packages, keeping orientation consistent across all data that passes through it drastically reduces sources of variation: we validate that the pipeline works on a wide variety of RAS-oriented data, and then ensure that we feed the pipeline RAS-oriented data.

From what you’ve said, there isn’t a particular reason to keep the data in RSA orientation, just that you’d like to avoid resampling. I assure you that reorientation does not involve resampling. If you would like to check this out, you can use fslreorient2std or nibabel.as_closest_canonical on your original images, to verify that the data is unaffected:

fslreorient2std input_img.nii.gz output_img.nii.gz

import nibabel as nb
img = nb.load('input_img.nii.gz')
nb.as_closest_canonical(img).to_filename('output_img.nii.gz')

I believe fslreorient2std orients to either LAS or LPS, while nibabel will target RAS, but you can then take either of these outputs and overlay them with your original data in a viewer that respects orientation (fsleyes or freeview do, I know), and compare the appearance. If any resampling occurred, there should be a visible difference, but you should not see it.
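
If you want a numerical check as well as a visual one, a rough sketch along these lines (using nibabel’s orientation helpers on the same input_img.nii.gz) should confirm that the voxel values are only permuted/flipped, never interpolated:

import numpy as np
import nibabel as nb

img = nb.load('input_img.nii.gz')
canonical = nb.as_closest_canonical(img)

# Axis permutation/flips that take the original array to RAS+
ornt = nb.orientations.io_orientation(img.affine)
reordered = nb.orientations.apply_orientation(img.get_fdata(), ornt)

# Same voxel values, just reordered: no interpolation has happened.
assert np.array_equal(reordered, canonical.get_fdata())

# The zooms follow the axes around.
print(img.header.get_zooms(), '->', canonical.header.get_zooms())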


Hi Chris,

Thanks again a lot for your detailed answer! And sorry for mixing up terms; it’s good you clarified. I tried to understand everything you wrote, and I think I can follow you. When using nibabel, the image indeed only changed orientation and nothing else (screenshot 1). However, after fMRIPrep preprocessing the voxel dimensions changed, not only the orientation (screenshot 2). In the original image, the voxels in the coronal slices have a 2.4 x 2.4 dimension. In contrast, in the fMRIPrep-preprocessed images the voxels in the coronal slices are 2.4 x 4. This is what I worry about… Do you have any suggestions why this is happening?

I really appreciate your support! Thank you a lot!

screenshot 1

screenshot 2

Some additional info:
original image: i=2.4 j=4 k=2.4

fmriprep preprocessed image:
T1w: i=2.4 j=2.4 k=4
MNI: i=2.4 j=2.4 k=4

fmriprep preprocessed image with --template-resampling-grid 2mm
T1w: i=2.4 j=2.4 k=4
MNI: i=2 j=2 k=2
-> so this would only be a solution for the MNI space I guess

reorienting the original bold with nibabel: i=2.4 j=4 k=2.4

So, in contrast to fMRIPrep, nibabel reorients the image but keeps each of the i, j, k zooms the same, probably only switching from datablock[i, j, k] to datablock[i, k, j].
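
(For reference, a small nibabel snippet along these lines can print the axis codes and zooms being compared; the filenames are placeholders for the images listed above.)

import nibabel as nb

# Placeholder filenames: substitute the original, preprocessed and
# reoriented images summarized above.
for fname in ['original_bold.nii.gz',
              'preproc_bold_space-T1w.nii.gz',
              'preproc_bold_space-MNI.nii.gz',
              'reoriented_with_nibabel.nii.gz']:
    img = nb.load(fname)
    print(fname, nb.aff2axcodes(img.affine), img.header.get_zooms()[:3])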

fslhd gives the following info:

original image:
qform_name Scanner Anat
qform_code 1
qto_xyz:1 0.799065 0.023157 0.028023 -79.792603
qto_xyz:2 -0.019529 0.739797 -0.121924 -93.937271
qto_xyz:3 -0.033501 0.121097 0.739493 -158.061401
qto_xyz:4 0.000000 0.000000 0.000000 1.000000
qform_xorient Left-to-Right
qform_yorient Posterior-to-Anterior
qform_zorient Inferior-to-Superior
sform_name Scanner Anat
sform_code 1
sto_xyz:1 0.799065 0.023157 0.028023 -79.792603
sto_xyz:2 -0.019529 0.739797 -0.121924 -93.937271
sto_xyz:3 -0.033501 0.121097 0.739493 -158.061401
sto_xyz:4 0.000000 0.000000 0.000000 1.000000
sform_xorient Left-to-Right
sform_yorient Posterior-to-Anterior
sform_zorient Inferior-to-Superior

fmriprep preprocessed image MNI:
qform_name Aligned Anat
qform_code 2
qto_xyz:1 2.400000 0.000000 0.000000 -96.000000
qto_xyz:2 0.000000 2.400000 0.000000 -132.000000
qto_xyz:3 0.000000 0.000000 4.000000 -78.000000
qto_xyz:4 0.000000 0.000000 0.000000 1.000000
qform_xorient Left-to-Right
qform_yorient Posterior-to-Anterior
qform_zorient Inferior-to-Superior
sform_name Aligned Anat
sform_code 2
sto_xyz:1 2.400000 0.000000 0.000000 -96.000000
sto_xyz:2 0.000000 2.400000 0.000000 -132.000000
sto_xyz:3 0.000000 0.000000 4.000000 -78.000000
sto_xyz:4 0.000000 0.000000 0.000000 1.000000
sform_xorient Left-to-Right
sform_yorient Posterior-to-Anterior
sform_zorient Inferior-to-Superior

reoriented with nibabel:
qform_name Unknown
qform_code 0
qto_xyz:1 2.400000 0.000000 0.000000 0.000000
qto_xyz:2 0.000000 4.000000 0.000000 0.000000
qto_xyz:3 0.000000 0.000000 2.400000 0.000000
qto_xyz:4 0.000000 0.000000 0.000000 1.000000
qform_xorient Left-to-Right
qform_yorient Posterior-to-Anterior
qform_zorient Inferior-to-Superior
sform_name Aligned Anat
sform_code 2
sto_xyz:1 2.399983 -0.002400 -0.008964 -88.718063
sto_xyz:2 0.004713 3.704824 0.904831 -128.612152
sto_xyz:3 0.007759 -1.508069 2.222882 -74.325401
sto_xyz:4 0.000000 0.000000 0.000000 1.000000
sform_xorient Left-to-Right
sform_yorient Posterior-to-Anterior
sform_zorient Inferior-to-Superior

It would be great if someone could explain to me what’s happening! It has to be more than reorienting, I think…

Hi @anna, sorry about the slow response. I’ve looked at this a couple times and I’m not entirely sure what’s going on. I want to try to respond to each of your posts in order, as it’s difficult to write a coherent response to the whole. Apologies if I jump between levels of certainty…

What version of nibabel are you using? If you’re changing the orientation, the ordering of the zooms should change as well. If it doesn’t, that’s a fairly serious bug.

As to the fMRIPrep outputs, I see a couple of things. First, I do see the inferior/superior stretching you’re referring to. Do you know if you’re using a 6- or 9-dof BOLD-T1w registration? If it’s 9, this might be a consequence of that (the extra three degrees of freedom allow for some scaling). How do your registration reports look?

Another thing I see is that you have a BOLD image that is pitched approximately 45° forward about the L-R axis. When you align that to a more-or-less plumb T1w image, you’re going to be resampling quite a lot, whether the 4mm zoom is in the j or the k dimension. If your concern is to introduce as little interpolation as possible into your outputs, then you’ll need your outputs in boldref space. This is an open issue on fMRIPrep, and we’d appreciate help if you’re interested in trying your hand.

Again, if nibabel isn’t changing the zooms, this is a problem. Can you verify that they aren’t reordered?

import nibabel as nb
print(nb.__version__)
img = nb.load('input_img.nii.gz')
print(img.affine)
print(img.header.get_zooms())
nb.as_closest_canonical(img).to_filename('output_img.nii.gz')
new_img = nb.load('output_img.nii.gz')
print(new_img.affine)
print(new_img.header.get_zooms())

This indicates the zooms are roughly 0.8mm isotropic. Are you sure this isn’t your T1w image?

According to this, nibabel at least thinks your images are already in RAS, so perhaps the issue is entirely in the transform application.

So next steps:

  1. Can you share the screenshot of the registration section of your HTML reports?
  2. Can you paste the output of that Python segment I posted above?

And it may be that we’ll need to look more deeply into this case. Would you be able to share a defaced T1w image and the first 50 volumes of a BOLD series that produces this issue? Ideally one series with coronal slices and one with axial slices.

Hi @effigies, sorry for being impatient!

I double-checked, but I had still sent you the information for the T1w instead of the original BOLD image, sorry. Below you see a screenshot of the output of the Python code you suggested, and as a last entry I added the fslhd info, this time really from the original BOLD image.

I’m using 6-dof BOLD-T1w registration according to the HTML report. Furthermore, we just noticed “Note on orientation: qform matrix overwritten. The qform has been copied from sform.” -> is this of importance?

Below I try to summarize the different voxel orientations in the different images:

The registration section of the html reports

And finally, also screenshots from the fieldmap-to-EPI registration, as the voxel orientation seems to be different again in those.

Sharing a T1w image and volumes of the BOLD series from a healthy control would be possible. I’ll try to get the data from the MRI today, just in case. Those BOLD images are oriented coronally along the hippocampus. The axial slices I referred to are from a different BOLD series of our team (you can find them on OpenNeuro: https://openneuro.org/datasets/ds001419/versions/1.0.1).

I hope I did not miss anything and that this information helps to understand what is happening. Thank you!!

Thanks. This is interesting. So your original image is in LSP orientation with zooms of (2.4, 2.4, 4), which means it will have zooms of (2.4, 4, 2.4) in RAS space. That seems to be borne out.

Not really. This is an “in the interest of full disclosure” sort of statement, not a problem. Our treatment of affine matrices can be surprising to some people, so what this means is that both the qform and sform matrices were valid, but they were not identical. Because the standard says that the sform takes precedence, we sync the qform to the sform to accommodate tools that don’t respect the sform.
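
For what it’s worth, the kind of qform/sform sync described here can be sketched with nibabel roughly like this (just an illustration of the idea, not fMRIPrep’s actual code; the filename is a placeholder):

import nibabel as nb

img = nb.load('bold.nii.gz')  # placeholder filename

# Both affines are set but differ: copy the sform into the qform, since
# the sform takes precedence. Note the qform cannot encode shears, so
# nibabel fits the closest rotation + zooms by default.
sform, scode = img.get_sform(coded=True)
img.set_qform(sform, code=int(scode))
img.to_filename('bold_qform-synced.nii.gz')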

Alright, thanks. I think I’m going to need to run this to track down exactly where the flip happens. I think I agree that the outputs should have (2.4, 4, 2.4) zooms, instead of the (2.4, 2.4, 4) ordering of the original image. But it really might be an effect of the pitch, with the transform application step making a best guess as to which zoom set will introduce the least distortion. For such a large rotation, it might be that interpolating to (2.4mm)^3 voxels is the best approach, or moving to a boldref output. Both of these options will require additions to fMRIPrep.

Could it be the case that both the qform and sform matrices are set and correct under the NIfTI definition? If so, the qform and the sform could differ in this flip, and by overwriting the qform we are imposing the sform (which would mean aligning to the stereotactic frame instead of the scanner’s frame). Does that make any sense to you, Chris?

@anna, although you can’t share T1w images at this point, it would be incredibly helpful to have a look at the first 50 timepoints of one of your BOLD runs. In principle, these 50 timepoints are almost useless for addressing any research question, and identification would not be an issue for a BOLD run.

To do that:

import nibabel as nb

nii = nb.load('sub-ZZZ_task-TaskName_bold.nii.gz')
nii.__class__(nii.get_data()[..., :50], nii.affine, nii.header).to_filename('clipped_bold.nii.gz')

Thanks a lot!

I guess it’s possible, if the pixdims don’t match the root-sum-of-squares (RSS) of the sform columns, that you could have a difference in axis ordering between the sform and the qform, but that doesn’t seem to be the case here, by my reading.
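
(In case it’s useful, that pixdim-vs-sform comparison can be sketched in a couple of lines of nibabel/numpy; the filename is a placeholder.)

import numpy as np
import nibabel as nb

img = nb.load('bold.nii.gz')  # placeholder filename

# Voxel sizes implied by the sform: root-sum-of-squares of each column
# of its 3x3 rotation/zoom block, compared against the header pixdims.
rss = np.sqrt((img.get_sform()[:3, :3] ** 2).sum(axis=0))
print(rss, img.header.get_zooms()[:3])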

@effigies @oesteban thanks for your help!

I got the data of a healthy control who gave consent to share the data, so I uploaded the T1w and the whole BOLD run of one condition. Thanks anyway for the nibabel code; I think it’s a more convenient way to delete the dummy scans than the one I used before.

The data should follow the same scanning protocol. I have now started preprocessing the data without fieldmaps and FreeSurfer to make sure they behave the same. I will update you when fMRIPrep is finished.

The fMRIPrep command I used for all the patients is (I included fieldmaps):

nohup docker run -i --rm \
    -v /usr/local/freesurfer/license.txt:/opt/freesurfer/license.txt:ro \
    -v /data/anna/memory/fmriprep/data:/data:ro \
    -v /data/anna/memory/fmriprep/output/out:/out \
    -v /data/anna/memory/fmriprep/output/work:/scratch \
    poldracklab/fmriprep:1.2.1 \
    /data /out/out \
    participant \
    --output-space T1w template fsnative fsaverage5 \
    --participant_label 36 38 39 43 53 \
    --use-aroma \
    --nthreads 6 \
    --omp-nthreads 4 \
    --write-graph \
    -w /scratch &

fMRIPrep is finished, and the preprocessed BOLD in both T1w and MNI space looks the same in this dataset as in the patients’ data. I added the fMRIPrep outputs and the nibabel-reoriented image to the link.

Hi @anna, sorry this fell off my plate. I’ll try to get to it today.

I think I tracked down the issue. I’m realizing now that perhaps the better diagnostic would have been for you to run fslreorient2std on your BOLD files before running fMRIPrep and to see if the problem persisted. I don’t think it would have, though, and I have proposed a fix.

I’ll test it locally now, and we can see what the output looks like.


Here’s the result: RAS-oriented, with zooms 2.4 x 4 x 2.4:

$ nib-ls sub-201/func/sub-201_task-face_space-T1w_desc-preproc_bold.nii.gz
sub-201/func/sub-201_task-face_space-T1w_desc-preproc_bold.nii.gz int16 [ 58,  46,  61, 147] 2.40x4.00x2.40x3.00

$ fslhd sub-201/func/sub-201_task-face_space-T1w_desc-preproc_bold.nii.gz | grep -A8 sform_name
sform_name     Aligned Anat
sform_code     2
sto_xyz:1      2.400000  0.000000  0.000000  -69.073967
sto_xyz:2      0.000000  4.000000  0.000000  -108.906075
sto_xyz:3      0.000000  0.000000  2.400000  -42.736328
sto_xyz:4      0.000000  0.000000  0.000000  1.000000
sform_xorient  Left-to-Right
sform_yorient  Posterior-to-Anterior
sform_zorient  Inferior-to-Superior

Thanks a lot for tracking this down and proposing the fix!! What is the best way to get the fix into fMRIPrep? I’m currently using fMRIPrep 1.2.1 with the Docker container.

Have you tried running fslreorient2std on your BOLD files and then rerunning fMRIPrep? I believe implementing the fix will mean pulling the latest fMRIPrep Docker container once it lands. To preserve your version (1.2.1), it may be better to try the fslreorient2std approach before updating, to see if the problem persists.

Hi @anna, if you’re okay with updating to a newer version of fMRIPrep, I’m going to be pushing a (hopefully) fixed version to an experimental Docker tag within the next couple of hours. I’ll update here when it’s ready.