Questions Regarding fMRI Preprocessing with fMRIPrep: Resolution, Time Points, and Dimensions

Hello everyone,

I have recently used fMRIPrep to preprocess my fMRI data, and I've encountered a few issues that I'm struggling to understand. These may be fairly basic questions, but I'm relatively new to this field, so I apologize in advance!

Here’s the command I used for preprocessing:

fmriprep-docker /home/datasets/download /home/datasets/output_dir participant \
    --fs-license-file /home/license.txt 2>&1 | tee /home/datasets/stdout.log

Here’s a brief overview of my questions:

  1. Resolution Changes Post-Processing : After preprocessing, the header zooms of the functional images changed from (1.71875, 1.71875, 3.48, 2.0) to (1.719, 1.719, 3.48, 1.0). Why did the time resolution of the functional images change from 2 s to 1 s while the number of time points stayed the same? Is this correct?

  2. Start Time Change : Initially, all fMRI onsets were at 0 s, with a RepetitionTime of 2 s and an EchoTime of 0.02 s. However, after preprocessing, the output sidecar indicates that the start time for all subjects has been changed to 0.972 s, as shown below.

# filename: sub1001/func/sub-1001_task-Mult_run-01_space-MNI152NLin2009cAsym_desc-preproc_bold.json
  "RepetitionTime": 2,
  "SkullStripped": false,
  "SliceTimingCorrected": true,
  "StartTime": 0.972,
  "TaskName": "Single-Digit Multiplication"

How should I interpret this new start time? Does it mean that I should treat the first time point as being collected at 0.972 s?

  3. Dimension Changes : The dimensions of my anatomical images changed from (256, 256, 160) to (193, 229, 193) after preprocessing, and my functional images changed from (128, 120, 32, 121) to (89, 110, 49, 121). Why did these dimensions change when the voxel resolution hardly did? What determines the output dimensions of preprocessing, and why do some dimensions increase (e.g., the third dimension of the functional images from 32 to 49, and of the anatomical images from 160 to 193)?

I have consulted various documents but haven’t found clear answers to these questions. Any help or pointers towards relevant documentation would be greatly appreciated.

Thank you in advance for your time and assistance!

Best Regards,

Hi @yuhan_chen,

That last value should be the number of time points, not the TR. As the JSON you shared indicates, the output TR is still 2 seconds. What BOLD image could you be working with that has only 2 volumes? That seems very short.

This is a product of slice-timing correction. By default, fMRIPrep slice-time-corrects to the middle of the TR. You can change this to the beginning or end of the TR with the --slice-time-ref argument. Keep this in mind, as it is important for making sure task events line up with your MRI data. Software such as Nilearn lets you specify this reference time in its models. I usually set the reference to the beginning (--slice-time-ref 0 or --slice-time-ref start) to take the guesswork out of it (and because some software may assume this is how the data were corrected).
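For instance, here is a minimal sketch (variable names are ours) of how the corrected reference time shifts the effective sampling grid, using the RepetitionTime and StartTime values quoted in this thread:

```python
# Sketch: after slice-timing correction, volume n is effectively sampled
# at start_time + n * TR rather than at n * TR.
# TR and StartTime are the values from the output JSON in this thread.
TR = 2.0            # "RepetitionTime": 2
start_time = 0.972  # "StartTime": 0.972
n_vols = 121        # number of volumes in the run

frame_times = [start_time + n * TR for n in range(n_vols)]
print(frame_times[:3])  # [0.972, 2.972, 4.972]
```

Whatever modeling software you use next should be told about this shift, either by building frame times like these or by passing the equivalent fractional reference.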

Are those second sets of dimensions from the MNI-space images? You didn't share the filenames, but I would assume so, since you did not specify different output spaces. Those dimensions are simply how your native-space data had to be warped to reach MNI space.



@Steven Thanks!

Apologies for the confusion. The images indeed comprise 121 volumes, not just 2. I derived the resolution with this Python script:

import nibabel as nib

original = "/home/datasets/download/sub-1003/func/sub-1003_task-Mult_run-01_bold.nii.gz"
preprocessed = "/home/datasets/output_dir/sub-1003/func/sub-1003_task-Mult_run-01_space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz"

# get_zooms() returns the grid spacing along each axis; for 4D BOLD data
# the fourth value is the time step (TR) stored in the NIfTI header.
original_img = nib.load(original)
original_header = original_img.header
print("Original voxel dimensions:", original_header.get_zooms())

preprocessed_img = nib.load(preprocessed)
preprocessed_header = preprocessed_img.header
print("Preprocessed voxel dimensions:", preprocessed_header.get_zooms())

This seems to indicate that each voxel covers a physical space of 1.71875 mm × 1.71875 mm × 3.48 mm. I'm puzzled about the change in the fourth value (from 2.0 to 1.0). What does that change mean?

Thanks for the tip! I've already completed all of the preprocessing, though. Is there a way to correct this, or must I rerun the entire fMRIPrep pipeline? It's so time-consuming :frowning:

Sorry about that. The original fMRI data file is named


and the preprocessed file is


I'm still a bit unclear about the spatial transformation. Why do the dimensions change while the voxel resolution stays the same (still 1.719 mm × 1.719 mm × 3.48 mm)? In my (possibly wrong) understanding, the product of the image dimensions and the voxel resolution gives the physical extent of the image. For example, with a resolution of 1 mm × 1 mm × 1 mm and dimensions of 256 × 256 × 256, the image covers a cube with 256 mm sides. After the spatial transformation, however, the resolution is unchanged while the dimensions change. Doesn't that imply that the physical volume covered also changes? This is the part I don't quite understand.

Thanks! :pray:

Hi @yuhan_chen,

I wouldn't worry about this, as long as whatever software you use next knows what the TR is (in most cases you define it explicitly).

You don't necessarily need to "correct" it. What you have is correct; you just have to make sure the models in your next steps account for it. E.g., see the slice_time_ref argument of nilearn.glm.first_level.FirstLevelModel - Nilearn.
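As a minimal sketch, the fractional reference that nilearn expects can be derived directly from the fMRIPrep sidecar; the JSON here is the snippet quoted earlier in this thread (in practice you would json.load() the *_desc-preproc_bold.json file next to the NIfTI):

```python
import json

# Values copied from the sidecar quoted earlier in this thread.
sidecar = json.loads('{"RepetitionTime": 2, "StartTime": 0.972}')

# nilearn's slice_time_ref is expressed as a fraction of the TR.
slice_time_ref = sidecar["StartTime"] / sidecar["RepetitionTime"]
print(slice_time_ref)  # 0.486
```

This value would then be passed as the slice_time_ref argument (along with t_r) when building the first-level model.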

MNI is a standard space with its own field of view. Your brain may have to be warped larger or smaller to fit this space. By default, fMRIPrep keeps the voxel resolution the same (though you can change this by specifying the res modifier; see Defining standard and nonstandard spaces where data will be resampled — fmriprep version documentation).
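The arithmetic can be checked directly: the physical field of view is the grid size times the voxel size, and the MNI-space grid covers the template's bounding box rather than your native one. A quick sketch with the dimensions reported above (assuming the axes are ordered the same way):

```python
# Sketch: output grid size = (field of view) / (voxel size).
# Dimensions and zooms are the ones reported earlier in this thread.
native_dims  = (128, 120, 32)
native_zooms = (1.71875, 1.71875, 3.48)   # mm per voxel, native BOLD

mni_dims  = (89, 110, 49)
mni_zooms = (1.719, 1.719, 3.48)          # voxel size kept by default

def fov_mm(dims, zooms):
    """Physical extent covered by the grid along each axis, in mm."""
    return tuple(round(d * z, 2) for d, z in zip(dims, zooms))

print("native FOV:", fov_mm(native_dims, native_zooms))  # (220.0, 206.25, 111.36)
print("MNI FOV:   ", fov_mm(mni_dims, mni_zooms))        # (152.99, 189.09, 170.52)
```

So the physical volume covered does change between the two grids, because the output grid is laid over the template's bounding box; only the voxel size is preserved.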


Hi @Steven , thank you for your helpful responses! I'm sorry, but I'm still a bit confused about the spatial normalization :frowning:. First, there's a basic concept I want to confirm: does the physical volume of a voxel represent the same thing as resolution? In other words, are their values usually the same in most cases?

Then, from what the fMRIPrep documentation says, if I don't specify :res-1 or :res-2 in the --output-spaces argument, fMRIPrep will keep the original resolution of the BOLD data. Does this mean that after preprocessing, each voxel in my brain still covers the same physical space as before, i.e., 1.719 mm × 1.719 mm × 3.48 mm?

If the answer is yes: if I want to use the Yeo2011 functional atlas (which has a resolution of 1 mm × 1 mm × 1 mm) on my data, do I need to resample the atlas to match my data's resolution of 1.719 mm × 1.719 mm × 3.48 mm?

If the answer is no, then how big is the physical space of each voxel after preprocessing?

Besides, I'm still not quite clear on why, when normalizing from native space to MNI space, my brain is warped but the volume of each voxel remains the same. And if voxel volumes can stay different across images, then given two brains whose voxels have different sizes (say one has 1 mm × 1 mm × 1 mm voxels and mine does not), would a voxel identified by the same coordinates be in the same physical location in both?

Best Regards,

Hi @yuhan_chen,

I think you’ll find the explanations here helpful: Coordinate systems - Slicer Wiki

Not necessarily - a lot of software, such as Nilearn (nilearn.maskers.NiftiLabelsMasker - Nilearn), only needs the atlas and the brain image to be in the same space, not at the same resolution.

No, if you have different resolutions, then the Ith, Jth, Kth voxel in one image will be in a different physical location than the same Ith, Jth, Kth voxel in the second image.
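A toy illustration of that last point: the affine maps voxel indices (i, j, k) to world coordinates (x, y, z), so images with different voxel sizes send the same index to different physical locations. The diagonal affines with a zero origin below are a simplification; real affines also encode rotation and translation.

```python
# Sketch: apply a 4x4 affine (as nested lists) to a voxel index.
def ijk_to_xyz(affine, ijk):
    """Map a voxel index (i, j, k) to world coordinates (x, y, z) in mm."""
    i, j, k = ijk
    return tuple(
        affine[r][0] * i + affine[r][1] * j + affine[r][2] * k + affine[r][3]
        for r in range(3)
    )

# Hypothetical affines: a 1 mm isotropic atlas grid vs. the BOLD grid
# from this thread, both with their origin at zero for clarity.
affine_atlas = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
affine_bold  = [[1.719, 0, 0, 0], [0, 1.719, 0, 0], [0, 0, 3.48, 0], [0, 0, 0, 1]]

print(ijk_to_xyz(affine_atlas, (10, 10, 10)))  # (10, 10, 10) mm
print(ijk_to_xyz(affine_bold, (10, 10, 10)))   # roughly (17.19, 17.19, 34.8) mm
```

Same index, two different physical locations - which is why matching spaces matters, and why tools that resample internally only require the images to share a space.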


Thanks a lot @Steven , for your helpful and detailed responses. Your expertise is greatly appreciated! :+1: