These parameters are automatically calculated by nibabel. This most likely happens because your input data was int16 or uint16.

Imaging data is generated with analog-to-digital converters (ADCs) in the scanner hardware. Many scanners have 12-bit ADCs (IIRC), and some modern scanners now have 16-bit. In any event, these recordings can be stored as 16-bit integers, along with scaling parameters that ensure the original values can be recovered. dcm2niix will preserve these in the NIfTI files.

For almost all operations in fMRIPrep, these data are automatically promoted to float32 (or occasionally float64), applying the scale factors to preserve their intended values. However, no amount of processing can inject more bits of precision than the original input contained, so when producing outputs, fMRIPrep recasts the data to the original precision. This happens automatically whenever nibabel needs to store data in a dtype with a smaller range than the data.
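As a pure-Python sketch of that promote-and-recast cycle (the helper functions and example values here are illustrative, not nibabel's actual API): loading recovers floats as `stored * scl_slope + scl_inter`, and saving inverts that with rounding.

```python
# Illustrative sketch of NIfTI scl_slope / scl_inter scaling -- not nibabel's API.

def compute_scaling(data, nbits=16):
    """Pick slope/inter so the data range maps onto 2**nbits integer levels."""
    lo, hi = min(data), max(data)
    scl_slope = (hi - lo) / (2 ** nbits - 1)
    scl_inter = lo
    return scl_slope, scl_inter

def to_int(data, scl_slope, scl_inter):
    """Recast floats to integer codes, as done when saving."""
    return [round((v - scl_inter) / scl_slope) for v in data]

def to_float(stored, scl_slope, scl_inter):
    """Promote stored integers back to floats, as done when loading."""
    return [v * scl_slope + scl_inter for v in stored]

data = [500.0, 812.25, 1337.5, 2000.0]  # made-up float32 values
slope, inter = compute_scaling(data)
roundtrip = to_float(to_int(data, slope, inter), slope, inter)

# Every value is recovered to within half a quantization step.
assert all(abs(a - b) <= slope / 2 for a, b in zip(data, roundtrip))
```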

If you take the minimum and maximum data values and the number of bits of precision you have, you can calculate the factors:

```
pr = (500, 2000)  # Plausible range
nbits = 16        # 16 bits of precision
scl_slope = (pr[1] - pr[0]) / (2 ** nbits - 1)  # Resolvable difference
scl_inter = pr[0]  # Minimum value
```

(This assumes you’re targeting an unsigned 16-bit integer… It’s slightly more complicated for int16, but it doesn’t add clarity to cover that case.)

If you save a float32 dataset as int16 and reload it, the maximum difference from the original data will be `± scl_slope / 2`. That error sits in your one or two least significant bits, leaving plenty of room to preserve the original ≤12 bits of precision.
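To put rough numbers on that (using the plausible range from the snippet above, which is my example, not real scanner output): the worst-case round-trip error is far smaller than the smallest step a 12-bit ADC could have resolved over the same range.

```python
# Compare 16-bit round-trip error with the resolution of a 12-bit acquisition.
pr = (500, 2000)  # Plausible range, as above

slope16 = (pr[1] - pr[0]) / (2 ** 16 - 1)     # Step size when stored in 16 bits
adc12_step = (pr[1] - pr[0]) / (2 ** 12 - 1)  # Smallest step a 12-bit ADC resolves

max_err = slope16 / 2  # Worst-case rounding error, ~0.011

# The round-trip error is ~32x smaller than the original 12-bit resolution,
# so the acquired precision survives the recast untouched.
assert max_err < adc12_step / 8
```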