fMRIPrep scaling parameters in output NIFTI header

Hello,

When loading the header of the NIfTI output for the BOLD time series, I see two parameters that affect the scaling of the data values (“scl_slope” and “scl_inter”), both of which are nonzero. I’m wondering how exactly these parameters are used/generated by fMRIPrep, and whether there is any documentation on why the scaling is used?

Thanks in advance!

These parameters are automatically calculated by nibabel. This most likely happens because your input data was int16 or uint16.


Imaging data is generated with analog-to-digital converters (ADCs) in the scanner hardware. Many scanners have 7-bit ADCs (IIRC), and some modern scanners now have 12-bit ones. In any event, these recordings can be stored in 16-bit integers, along with scaling parameters that ensure the original values can be recovered. dcm2niix will preserve these in the NIfTI files.

For almost all operations in fMRIPrep, these data are automatically promoted to float32 (or occasionally float64), applying the scale factors in order to preserve their intended values. However, no amount of processing can inject more bits of precision than the original input data contained, so when producing outputs, fMRIPrep recasts the data to the original precision. The scale factors are computed automatically by nibabel whenever it needs to store data in a dtype with a smaller range than the data itself.
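If you want to see what that looks like in practice, here is a minimal sketch with nibabel and numpy (the array and file name below are made up for illustration):

import numpy as np
import nibabel as nib

# Synthetic float32 data in a plausible BOLD intensity range (made up)
rng = np.random.default_rng(0)
data = rng.uniform(500, 2000, size=(4, 4, 4, 10)).astype(np.float32)
img = nib.Nifti1Image(data, affine=np.eye(4))

# Request a 16-bit on-disk dtype; nibabel computes scl_slope/scl_inter
# automatically when it has to fit float data into that range
img.header.set_data_dtype(np.uint16)
nib.save(img, "example_bold.nii.gz")

reloaded = nib.load("example_bold.nii.gz")
print(reloaded.header["scl_slope"], reloaded.header["scl_inter"])

# get_fdata() applies the scaling, so the values round-trip closely
print(np.abs(reloaded.get_fdata() - data).max())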

If you take the minimum and maximum data values and the number of bits of precision you have, you can calculate the factors:

pr = (500, 2000)                            # Plausible range
nbits = 16                                  # 16-bits of precision
scl_slope = (pr[1] - pr[0]) / (2 ** nbits)  # Resolvable difference
scl_inter = pr[0]                           # Minimum value

(This assumes you’re targeting an unsigned 16-bit integer… It’s slightly more complicated for int16, but it doesn’t add clarity to cover that case.)

If you save a float32 dataset as int16 and reload it, the max difference in data will be ± scl_slope / 2. This will be one or two of your least significant bits, and leave you plenty of room to preserve the original ≤12 bits of precision.

Thanks for the reply!

In that case, is there a downside to the data originally being stored as uint16? And is there a recommended way to use/account for the scaling parameters after they are added in?

Not that I know of.

As for the scaling parameters, they are applied as:

scaled_data = scl_slope * unscaled_data + scl_inter

Any conforming implementation of NIfTI should transparently scale data on load/access, or provide a method to get scaled data out, though I guess it’s worth verifying. If you’re writing your own implementation, then you will need to know about this. If you’re using Python, I can confirm that nibabel does it transparently.
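For example (the file name below is just a hypothetical fMRIPrep output name):

import numpy as np
import nibabel as nib

img = nib.load("sub-01_task-rest_desc-preproc_bold.nii.gz")  # hypothetical path

# get_fdata() returns scl_slope * raw + scl_inter, already applied
scaled = img.get_fdata(dtype=np.float32)

# The raw on-disk integers and scale factors remain accessible if you need them
raw = img.dataobj.get_unscaled()
slope, inter = img.dataobj.slope, img.dataobj.inter
print(np.allclose(scaled, slope * raw + inter))  # True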