I have recently used fMRIPrep to preprocess my fMRI data, and I’ve encountered a few issues that I’m struggling to understand. These may be fairly basic questions, but I’m relatively new to this field, so apologies in advance!
Resolution Changes Post-Processing : After preprocessing, the resolution of my functional images changed from (1.71875, 1.71875, 3.48, 2.0) to (1.719, 1.719, 3.48, 1.0). Why did the temporal resolution of the functional images change from 2 s to 1 s while the number of time points stayed the same? Is this correct?
Start Time Change : Initially, all fMRI onsets were at 0 s, with a repetition time of 2 s and an echo time of 0.02 s. However, after preprocessing, the output file indicates that the start time for all subjects has changed to 0.972 s, as shown below.
How should I interpret this new start time? Does it mean that I should consider the first time point as being collected at 0.972 s?
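Concretely, is this the right way to read it (a sketch using the numbers above, under my assumption that StartTime shifts every volume’s effective acquisition time)?

```python
# Sketch: if the sidecar's StartTime applies to the whole series, the
# effective acquisition reference time of volume i would be
# StartTime + i * TR (TR, StartTime, and volume count from the post).
t_r = 2.0
start_time = 0.972
n_vols = 121

frame_times = [start_time + i * t_r for i in range(n_vols)]
print(frame_times[:3])   # roughly [0.972, 2.972, 4.972]
```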
Dimension Changes : The dimensions of my anatomical images changed from (256, 256, 160) to (193, 229, 193) after preprocessing, and my functional images changed from (128, 120, 32, 121) to (89, 110, 49, 121). Why did these dimensions change when the voxel resolution hardly did? What determines the output dimensions, and why do some of them increase (e.g., the third dimension of the functional images from 32 to 49, and of the anatomical images from 160 to 193)?
I have consulted various documents but haven’t found clear answers to these questions. Any help or pointers to relevant documentation would be greatly appreciated.
Thank you in advance for your time and assistance!
That last dimension should be the number of time points, not the TR. As you note below in the JSON, the output TR is still 2 seconds. What BOLD image could you be working with that only has 2 volumes? That seems very short.
This is a product of slice-timing correction. By default, fMRIPrep slice-time-corrects to the middle of the TR. You can change this to the beginning or end of the TR with the --slice-time-ref argument. Keep this in mind, as it is important for making sure your task events line up with your MRI data accordingly. Software such as Nilearn allows you to specify this reference time in its models. I usually just set --slice-time-ref to the beginning (--slice-time-ref 0 or --slice-time-ref start) to take the guesswork out of it (and because some software may assume data are corrected that way).
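Concretely (a sketch; this per-volume reference-time convention is what Nilearn’s first-level models assume via their slice_time_ref parameter):

```python
# Sketch: the within-TR reference time implied by --slice-time-ref.
# After correction, volume i is referenced to (i + slice_time_ref) * TR.
t_r = 2.0
n_vols = 121

def frame_times(slice_time_ref):
    return [(i + slice_time_ref) * t_r for i in range(n_vols)]

print(frame_times(0.5)[:3])  # middle of TR (fMRIPrep default): [1.0, 3.0, 5.0]
print(frame_times(0.0)[:3])  # --slice-time-ref 0 (start):      [0.0, 2.0, 4.0]
```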
Are these second sets of dimensions from the MNI-space images? You didn’t share the filenames, but I would assume so, since you did not specify different output spaces. Those dimensions likely just reflect how your native-space data had to be warped to reach MNI space.
I’m still a bit unclear about the space transformation. Why do the dimensions change while the voxel resolution remains the same (still 1.719 mm x 1.719 mm x 3.48 mm)? In my understanding (which may be wrong), the product of image dimension and voxel resolution represents the physical extent of the image. For example, if the resolution is 1 mm x 1 mm x 1 mm and the image dimension is 256 x 256 x 256, the image should cover a cube with 256 mm sides. However, after the space transformation, the resolution remains unchanged while the dimensions change. Doesn’t this imply that the covered physical volume also changes? This is what I don’t quite understand.
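Working through my own numbers (a quick sketch), the field of view, i.e. dimension x voxel size, does indeed seem to change even though the voxel size does not:

```python
# Sketch with the shapes/zooms quoted above: the voxel size is
# preserved, but the grid covers a different physical extent (field of
# view) before vs. after normalization.
zooms = (1.71875, 1.71875, 3.48)   # mm per voxel, unchanged
native = (128, 120, 32)            # native-space functional grid
mni = (89, 110, 49)                # normalized functional grid

fov_native = [round(d * z, 2) for d, z in zip(native, zooms)]
fov_mni = [round(d * z, 2) for d, z in zip(mni, zooms)]
print(fov_native)  # [220.0, 206.25, 111.36]  mm
print(fov_mni)     # [152.97, 189.06, 170.52] mm
```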
Hi @Steven , thank you for your helpful responses! I’m sorry, but I’m still a bit confused about the spatial normalization. First, there’s a basic concept I want to make sure of: does the physical volume of a voxel represent the same thing as the resolution? In other words, are their values usually the same in most cases?
Then, from what the fMRIPrep documentation says, if I don’t specify :res-1 or :res-2 in the --output-spaces argument, fMRIPrep will keep the original resolution of the BOLD data. Does this mean that after preprocessing, each voxel in my brain still covers the same physical space as before, i.e., 1.719 mm x 1.719 mm x 3.48 mm?
If the answer is yes, and I want to use the Yeo2011 functional atlas (which has a resolution of 1 mm x 1 mm x 1 mm) on my data, do I need to resample the atlas to match my data’s resolution of 1.719 mm x 1.719 mm x 3.48 mm?
If the answer is no, then how big is the physical space of each voxel after preprocessing?
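In case I do need to resample, here is roughly what I have in mind (a conceptual pure-numpy sketch with fake labels; in practice I believe a call like nilearn.image.resample_to_img(atlas, bold_img, interpolation="nearest") would do this properly using the images’ affines):

```python
# Conceptual sketch: nearest-neighbour sampling of a fake 1 mm label
# atlas onto a coarser grid. This toy version assumes the grids are
# already aligned and only changes the sampling step; real tools work
# from the images' affines instead.
import numpy as np

rng = np.random.default_rng(0)
atlas_1mm = rng.integers(0, 7, size=(18, 18, 18))  # fake 1 mm labels

step = (1.719, 1.719, 3.48)  # target (BOLD) voxel size in mm
idx = [
    np.clip(np.round(np.arange(0, atlas_1mm.shape[a], step[a])).astype(int),
            0, atlas_1mm.shape[a] - 1)
    for a in range(3)
]
resampled = atlas_1mm[np.ix_(idx[0], idx[1], idx[2])]
print(resampled.shape)  # coarser grid; label values are kept intact
```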
Besides, I’m still not quite clear on why, when normalizing from native space to MNI space, my brain is warped but the volume of each voxel stays the same. And if voxel volumes differ between images, e.g., the same xyz coordinates are given in my brain and in another brain with 1 mm x 1 mm x 1 mm voxels, would the voxel identified by those xyz coordinates be in the same physical location?
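To make my question concrete, here is my current understanding of how voxel indices map to physical coordinates (a sketch; the affines and offsets are illustrative values, not from my data):

```python
# Sketch: the NIfTI affine maps voxel indices (i, j, k) to world
# coordinates in mm. Two images in the same space but with different
# voxel sizes map the same world coordinate to different voxel indices,
# yet the same anatomical location. Affines here are illustrative only.
import numpy as np

def vox_to_world(affine, ijk):
    # Homogeneous coordinates: append 1, apply affine, drop last entry.
    return (affine @ np.append(ijk, 1.0))[:3]

aff_1mm = np.diag([1.0, 1.0, 1.0, 1.0]); aff_1mm[:3, 3] = [-90, -126, -72]
aff_2mm = np.diag([2.0, 2.0, 2.0, 1.0]); aff_2mm[:3, 3] = [-90, -126, -72]

print(vox_to_world(aff_1mm, [90, 126, 72]))  # world origin in the 1 mm grid
print(vox_to_world(aff_2mm, [45, 63, 36]))   # same world point, 2 mm grid
```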