Downsampling in time dimension only

I am working with a set of BOLD data from multiple sites. Three sites have 240 volumes with a slice thickness of 2, and one has 820 volumes with a slice thickness of 0.6, so I need to downsample it. But I can’t find a single command in FSL that will downsample only in time, other than breaking the volumes down by hand and using fslmaths -Tmean on groups of slices.

I tried looking into AFNI to see if it had a command but couldn’t find one, although I’m much less familiar with AFNI, so I may have missed something.

Does anyone know how to genuinely resample the slices of a 4D image only (the resolutions are otherwise similar), without resorting to splitting the volumes and reassembling every 3rd slice?

I was not responsible for collecting any of this data so I can’t comment on why it was collected this way.

Hi @RSharkey,

If you are just looking to sample every third volume, you can use something like the following in Nilearn. This was generated by ChatGPT, so it is untested.

from nilearn.image import index_img, load_img

# Input and output file paths
input_nifti_path = "input_fmri.nii.gz"
output_nifti_path = "output_fmri_sampled.nii.gz"

# Load the fMRI NIfTI image
fmri_img = load_img(input_nifti_path)

# Get the number of volumes
num_volumes = fmri_img.shape[-1]

# Sample every third volume
sampled_volumes = index_img(fmri_img, slice(0, num_volumes, 3))

# Save the resulting image
sampled_volumes.to_filename(output_nifti_path)

print(f"Sampled NIfTI image saved to {output_nifti_path}")

Best,
Steven

Hi-

I’m not sure why the difference in slice thickness influences the need for time point selection, but in AFNI, to take every third volume, you could run:

3dcalc -a DSET_IN'[0..$(3)]' -expr 'a' -prefix DSET_OUT

The 0..$ means the range of sub-brick (volume) indices is the full range, and the (3) means take every third.

From the afni program help:

INPUT DATASET NAMES
-------------------
 An input dataset is specified using one of these forms:
    'prefix+view', 'prefix+view.HEAD', or 'prefix+view.BRIK'.
 You can also add a sub-brick selection list after the end of the
 dataset name.  This allows only a subset of the sub-bricks to be
 read in (by default, all of a dataset's sub-bricks are input).
 A sub-brick selection list looks like one of the following forms:
   fred+orig[5]                     ==> use only sub-brick #5
   fred+orig[5,9,17]                ==> use #5, #9, and #17
   fred+orig[5..8]     or [5-8]     ==> use #5, #6, #7, and #8
   fred+orig[5..13(2)] or [5-13(2)] ==> use #5, #7, #9, #11, and #13
 Sub-brick indexes start at 0.  You can use the character '$'
 to indicate the last sub-brick in a dataset; for example, you
 can select every third sub-brick by using the selection list
   fred+orig[0..$(3)]

 N.B.: The sub-bricks are read in the order specified, which may
 not be the order in the original dataset.  For example, using
   fred+orig[0..$(2),1..$(2)]
 will cause the sub-bricks in fred+orig to be input into memory
 in an interleaved fashion.  Using
   fred+orig[$..0]
 will reverse the order of the sub-bricks.

–pt

Oh, wait, rereading your message, perhaps you want to downsample in the z-slice (spatial) dimension?

You could perhaps use 3dresample to regrid DSET_A onto the same grid as DSET_B:

3dresample -master DSET_B -prefix DSET_A_ongrid_B -input DSET_A -rmode NN

I thought using -rmode NN, for a resampling mode of “nearest neighbor”, might be the most appropriate, so as not to add blur.

Secondarily, you could specify new output-grid voxel dimensions in x, y, and z with 3dresample -dxyz A B C ... In your case, you might want A and B to just be the input voxel x- and y-dims, while C would be 3x the z-dim.

Though, overall, I think it would be better to avoid resampling before processing. Any resampling/regridding will incur blur and interpolation. You don’t need to do that before processing, where you will likely have additional blurs from motion correction and alignment of some kind (even if just to the anatomical dset grid). In afni_proc.py, you can specify the voxel dims of the final output grid, and you could make those the same (e.g., isotropic) for both datasets.

–pt

If you want to select every Nth time point, I would personally use the term sub-sampling rather than down-sampling. In this case you could use the suggestion from @ptaylor, or if you want to use FSL, a combination of fslsplit and fslmerge.

If you want to perform down-sampling (or up-sampling), there is a range of options, including a hidden command in FSL called $FSLDIR/bin/resample_image, which allows flexible resampling along all dimensions, including time:

$FSLDIR/bin/resample_image -h
usage: resample_image (--shape|--dim|--reference) [options] input output

Resample an image to different dimensions.

positional arguments:
  input                 Input image
  output                Output image

options:
  -h, --help            show this help message and exit
  -i {nearest,linear,cubic}, --interp {nearest,linear,cubic}
                        Interpolation (default: linear)
  -o {centre,corner}, --origin {centre,corner}
                        Resampling origin (default: centre)
  -dt {char,short,int,float,double}, --dtype {char,short,int,float,double}
                        Data type (default: data type of input image)
  -n, --nosmooth        Do not smooth image when downsampling

Resampling destination:
  Specify the resampling destination space using one of the following options. Note that the --reference option will cause the field-of-view of
  the input image to be changed to that of the reference image.

  -s X,Y,Z,..., --shape X,Y,Z,...
                        Output shape
  -d X,Y,Z,..., --dim X,Y,Z,...
                        Output voxel dimensions
  -r IMAGE, --reference IMAGE
                        Resample input to the space of this reference image(overrides --origin)

e.g. if your image has shape (100, 100, 100, 20), and you want to upsample it along the time dimension by a factor of two:

$FSLDIR/bin/resample_image input.nii.gz output.nii.gz -s 100,100,100,40
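
If you would prefer to do the equivalent in Python, here is a rough and untested nibabel/numpy sketch of the grouping-and-averaging approach (i.e. averaging every three consecutive volumes, the programmatic analogue of taking the mean of groups of volumes); the file names and the factor are placeholders for your data:

import nibabel as nib
import numpy as np

factor = 3  # average every `factor` consecutive volumes (placeholder value)

img = nib.load("input_fmri.nii.gz")  # placeholder path
data = img.get_fdata()

# Drop any trailing volumes that do not fill a complete group of `factor`
n_keep = (data.shape[-1] // factor) * factor
data = data[..., :n_keep]

# Reshape so the last axis holds each group of `factor` consecutive
# volumes, then take the mean of each group
down = data.reshape(data.shape[:3] + (-1, factor)).mean(axis=-1)

# Copy the header and scale the stored TR (pixdim[4]) by the same factor
hdr = img.header.copy()
hdr["pixdim"][4] = img.header["pixdim"][4] * factor

nib.save(nib.Nifti1Image(down, img.affine, hdr), "output_fmri_downsampled.nii.gz")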

Having said that, I’m going to pass on the question of whether down-sampling is suitable for your particular use-case :slight_smile:

This looks much more like what I need. I would rather use a true downsample than a subsample; I was just struggling to find a command that downsamples in time.

I think it’s also better suited to my use case, but that’s a separate issue.

I would suggest posting details for the regular and the anomalous dataset - both the output of fslhd and the NIfTI header. An extremely short TR can be useful when you want to get above the Nyquist limit for physiological noise, but it involves real trade-offs that would likely make it unsuitable for mixing with more typical acquisitions. A very short TR will impact T1 recovery, and in order to maintain a TE with reasonable BOLD SNR it will also need a very high multi-band factor or fewer slices per TR. An extremely high multi-band factor can lead to aliasing, with signal bleeding between spatially distant slices, and requires a head coil with a lot of channels.
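
If it is easier, the NIfTI header can also be dumped from Python with nibabel (the path below is a placeholder):

import nibabel as nib

# Print the full NIfTI header of a file
print(nib.load("input_fmri.nii.gz").header)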