Nifti header-related error "nibabel.spatialimages.HeaderDataError: vox offset 348 too low for single file nifti"

I have an old dataset with only .nii files (not DICOMs), so I made the .json files myself with all the information I could collect from the published paper based on this same dataset. Here’s an example, sub-101_ses-1_task-VWMT_run-01_bold.json:

  "TaskName" : "VWMT",
  "RepetitionTime": 1.5,
  "EchoTime" : 0.03,
  "FlipAngle" : 70,
  "FieldOfView" : [ 240, 240, 125 ]

and I have rearranged the data to fit the BIDS structure, as below:

		sub-101_ses-1_task-VWMT_run-01_bold.nii.gz (run_01.nii.bz2)
		sub-101_ses-1_task-VWMT_run-02_bold.nii.gz (run_02.nii.bz2)
		sub-101_ses-1_task-VWMT_run-03_bold.nii.gz (run_03.nii.bz2)
		sub-101_ses-1_task-VWMT_run-04_bold.nii.gz (run_04.nii.bz2)
		sub-101_ses-2_task-VWMT_run-01_bold.nii.gz (run_01.nii.bz2)
		sub-101_ses-2_task-VWMT_run-02_bold.nii.gz (run_02.nii.bz2)
		sub-101_ses-2_task-VWMT_run-03_bold.nii.gz (run_03.nii.bz2)
		sub-101_ses-2_task-VWMT_run-04_bold.nii.gz (run_04.nii.bz2)
		sub-101_ses-3_task-VWMT_run-01_bold.nii.gz (run_01.nii.bz2)
		sub-101_ses-3_task-VWMT_run-02_bold.nii.gz (run_02.nii.bz2)
		sub-101_ses-3_task-VWMT_run-03_bold.nii.gz (run_03.nii.bz2)
		sub-101_ses-3_task-VWMT_run-04_bold.nii.gz (run_04.nii.bz2)

When I fed the data into fmriprep, the validator gave me two errors and three warnings, as below:

Making sure the input data is BIDS compliant (warnings can be ignored in most cases).
        1: [ERR] Repetition time was not defined in seconds, milliseconds or microseconds in the scan's header. (code: 11 - REPETITION_TIME_UNITS)

    Please visit for existing conversations about this issue.

        2: [ERR] sform_code and qform_code in the image header are 0. The image/file will be considered invalid or assumed to be in LAS orientation. (code: 60 - SFORM_AND_QFORM_IN_IMAGE_HEADER_ARE_ZERO)

    Please visit for existing conversations about this issue.

        1: [WARN] You should define 'SliceTiming' for this file. If you don't provide this information slice time correction will not be possible. (code: 13 - SLICE_TIMING_NOT_DEFINED)

    Please visit for existing conversations about this issue.

        2: [WARN] NIfTI file's header field for unit information for x, y, z, and t dimensions empty or too short (code: 41 - NIFTI_UNIT)

    Please visit for existing conversations about this issue.

        3: [WARN] The recommended file /README is missing. See Section 03 (Modality agnostic files) of the BIDS specification. (code: 101 - README_FILE_MISSING)

    Please visit for existing conversations about this issue.

        Summary:                 Available Tasks:        Available Modalities: 
        8 Files, 129.15MB        VWMT                    T1w                   
        1 - Subject                                      bold                  
        1 - Session                                                            

    If you have any questions, please post on

And fmriprep was killed with the following error from nibabel:

Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.7/multiprocessing/", line 297, in _bootstrap
  File "/usr/local/miniconda/lib/python3.7/multiprocessing/", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/miniconda/lib/python3.7/site-packages/fmriprep/cli/", line 674, in build_workflow
  File "/usr/local/miniconda/lib/python3.7/site-packages/fmriprep/workflows/", line 259, in init_fmriprep_wf
  File "/usr/local/miniconda/lib/python3.7/site-packages/fmriprep/workflows/", line 617, in init_single_subject_wf
  File "/usr/local/miniconda/lib/python3.7/site-packages/fmriprep/workflows/bold/", line 289, in init_func_preproc_wf
    bold_tlen, mem_gb = _create_mem_gb(ref_file)
  File "/usr/local/miniconda/lib/python3.7/site-packages/fmriprep/workflows/bold/", line 991, in _create_mem_gb
    bold_tlen = nb.load(bold_fname).shape[-1]
  File "/usr/local/miniconda/lib/python3.7/site-packages/nibabel/", line 49, in load
    img = image_klass.from_filename(filename, **kwargs)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nibabel/", line 17, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nibabel/", line 484, in from_filename
  File "/usr/local/miniconda/lib/python3.7/site-packages/nibabel/", line 17, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nibabel/", line 975, in from_file_map
    header = klass.header_class.from_fileobj(hdrf)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nibabel/", line 687, in from_fileobj
    hdr = klass(raw_str, endianness, check)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nibabel/", line 670, in __init__
  File "/usr/local/miniconda/lib/python3.7/site-packages/nibabel/", line 252, in __init__
    super(AnalyzeHeader, self).__init__(binaryblock, endianness, check)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nibabel/", line 174, in __init__
  File "/usr/local/miniconda/lib/python3.7/site-packages/nibabel/", line 365, in check_fix
    report.log_raise(logger, error_level)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nibabel/", line 277, in log_raise
    raise self.error(self.problem_msg)
nibabel.spatialimages.HeaderDataError: vox offset 348 too low for single file nifti1

So I opened a Jupyter notebook to check for possible issues. When using nibabel.load() to read one of the *_bold.nii.gz files, I got the same error:

vox offset 348 too low for single file nifti1

So I checked the header, as below:

$ fslhd sub-101_ses-1_task-VWMT_run-01_bold.nii.gz
filename        sub-101_ses-1_task-VWMT_run-01_bold.nii.gz
size of header  348
data_type       INT16
dim0            4
dim1            64
dim2            64
dim3            25
dim4            285
dim5            0
dim6            0
dim7            0
vox_units       Unknown
time_units      Unknown
datatype        4
nbyper          2
bitpix          16
pixdim0         0.000000
pixdim1         3.750000
pixdim2         3.750000
pixdim3         5.000000
pixdim4         0.000000
pixdim5         0.000000
pixdim6         0.000000
pixdim7         0.000000
vox_offset      348
cal_max         0.000000
cal_min         0.000000
scl_slope       0.000000
scl_inter       0.000000
phase_dim       0
freq_dim        0
slice_dim       0
slice_name      Unknown
slice_code      0
slice_start     0
slice_end       0
slice_duration  0.000000
toffset         0.000000
intent          Unknown
intent_code     0
intent_p1       0.000000
intent_p2       0.000000
intent_p3       0.000000
qform_name      Unknown
qform_code      0
qto_xyz:1       3.750000 0.000000 0.000000 0.000000 
qto_xyz:2       0.000000 3.750000 0.000000 0.000000 
qto_xyz:3       0.000000 0.000000 5.000000 0.000000 
qto_xyz:4       0.000000 0.000000 0.000000 1.000000 
qform_xorient   Left-to-Right
qform_yorient   Posterior-to-Anterior
qform_zorient   Inferior-to-Superior
sform_name      Unknown
sform_code      0
sto_xyz:1       0.000000 0.000000 0.000000 0.000000 
sto_xyz:2       0.000000 0.000000 0.000000 0.000000 
sto_xyz:3       0.000000 0.000000 0.000000 0.000000 
sto_xyz:4       0.000000 0.000000 0.000000 1.000000 
sform_xorient   Unknown
sform_yorient   Unknown
sform_zorient   Unknown
file_type       NIFTI-1+
file_code       1

Should I manually change the header info, such as vox_offset, to 352 or other values?

Also, for the second error reported by the BIDS validator,

        2: [ERR] sform_code and qform_code in the image header are 0. The image/file will be considered invalid or assumed to be in LAS orientation. (code: 60 - SFORM_AND_QFORM_IN_IMAGE_HEADER_ARE_ZERO)

Does that mean I need to manually change sform_code and qform_code?

If all three pieces of header information were changed, would my data be workable for fmriprep?


It’s unfortunate that you don’t have the original DICOMs, as it seems these NIfTIs are malformed. Do you have any reliable way to recover the orientation information?

I would be very wary of these files, but if you need a way to load the images in Python:

import nibabel as nb

opener = nb.openers.ImageOpener(fname)                      # detects and handles compression
header = nb.Nifti1Header.from_fileobj(opener, check=False)  # skip the failing header sanity checks
data = nb.arrayproxy.ArrayProxy(fname, header)              # lazy data proxy driven by the header fields
img = nb.Nifti1Image(data, None, header)

The above should at least permit you to inspect the image, and check what values are in the qform/sform (fslhd, I believe, will zero them out if the xform codes are 0). It’s possible you have some orientation information there.

Also, this data object assumes that the 348 vox_offset is correct. You will probably want to make some checks, such as that the full length of the file is in fact vox_offset + prod(img.shape) * sizeof(img.get_data_dtype()).
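That length check can be done with the standard library alone, reading the raw header fields directly. This is a sketch under stated assumptions: a little-endian NIfTI-1 header, with field offsets taken from the NIfTI-1 standard (dim at byte 40, bitpix at byte 72, vox_offset at byte 108); `check_data_length` is a hypothetical helper name.

```python
import functools
import gzip
import operator
import struct

def check_data_length(fname):
    """Return (expected_end, actual_length) for a (possibly gzipped) NIfTI-1
    stream, where expected_end = vox_offset + prod(shape) * bytes-per-voxel."""
    opener = gzip.open if fname.endswith('.gz') else open
    with opener(fname, 'rb') as f:
        raw = f.read()
    # NIfTI-1 field offsets: dim at byte 40 (8 x int16, dim[0] = ndim),
    # bitpix at byte 72 (int16), vox_offset at byte 108 (float32)
    dim = struct.unpack_from('<8h', raw, 40)
    shape = dim[1:1 + dim[0]]
    bitpix = struct.unpack_from('<h', raw, 72)[0]
    vox_offset = int(struct.unpack_from('<f', raw, 108)[0])
    nbytes = functools.reduce(operator.mul, shape, 1) * (bitpix // 8)
    return vox_offset + nbytes, len(raw)
```

If the actual length comes back shorter than the expected end, the data block is truncated and the header can't be trusted at all.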

Again, though, your files seem corrupted. Manual adjustment to interpretable values is no substitute for source files of more certain provenance.


Thanks, Effigies! I loaded the image through nibabel, and get the following header info:

>>> print(img.header)
<class 'nibabel.nifti1.Nifti1Header'> object, endian='<'
sizeof_hdr      : 348
data_type       : b'          '
db_name         : b'                  '
extents         : 16384
session_error   : 0
regular         : b'r'
dim_info        : 32
dim             : [  4  64  64  25 285   1   1   1]
intent_p1       : 0.0
intent_p2       : 0.0
intent_p3       : 0.0
intent_code     : none
datatype        : int16
bitpix          : 16
slice_start     : 0
pixdim          : [1. 1. 1. 1. 1. 0. 0. 0.]
vox_offset      : 0.0
scl_slope       : nan
scl_inter       : nan
slice_end       : 0
slice_code      : unknown
xyzt_units      : 0
cal_max         : 0.0
cal_min         : 0.0
slice_duration  : 0.0
toffset         : 0.0
glmax           : 0
glmin           : 0
descrip         : b'                                                                                '
aux_file        : b'                        '
qform_code      : unknown
sform_code      : unknown
quatern_b       : 0.0
quatern_c       : 0.0
quatern_d       : 0.0
qoffset_x       : 0.0
qoffset_y       : 0.0
qoffset_z       : 0.0
srow_x          : [0. 0. 0. 0.]
srow_y          : [0. 0. 0. 0.]
srow_z          : [0. 0. 0. 0.]
intent_name     : b'                '
magic           : b'n+1'

>>> print(img.header.get_qform())
[[1. 0. 0. 0.]
 [0. 1. 0. 0.]
 [0. 0. 1. 0.]
 [0. 0. 0. 1.]]

>>> print(img.header.get_sform())
[[0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 1.]]

>>> print(img.header.get_best_affine())
[[ -1.    0.    0.   31.5]
 [  0.    1.    0.  -31.5]
 [  0.    0.    1.  -12. ]
 [  0.    0.    0.    1. ]]

Do you think there’s any chance we can recover the orientation info from these?

The good news is that I can email the person who collected these scans. What information do you recommend I ask them for?

Many thanks!

The Analyze header (which was the forerunner to NIfTI) used a 348-byte header (saved as filename.hdr) and a separate file for image data (filename.img). The NIfTI format can be saved as two files (.hdr/.img), or as a single file (filename.nii) with the header as the first 348 bytes and the image data following. However, in a single-file NIfTI the image data must begin no earlier than byte 352. So your images are not proper NIfTI files.

To correct your files, add a 4-byte pad between the header and the image data, and set the header's vox_offset to 352. The corrected file would then report:

>fslhd fnm.nii
filename	fnm.nii
size of header	348
vox_offset	352

For details, see the NIfTI standard:

After the end of the 348 byte header (e.g., after the magic field), the next 4 bytes are a char array field named “extension”. By default, all 4 bytes of this array should be set to zero. In a .nii file, these 4 bytes will always be present, since the earliest start point for the image data is byte #352.
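The steps above can be sketched in a few lines of standard-library Python, assuming an uncompressed, little-endian .nii (for a .nii.gz you would decompress first and recompress after); `pad_to_352` is a hypothetical helper name, and vox_offset sits at byte 108 of the NIfTI-1 header as a float32:

```python
import struct

def pad_to_352(src, dst):
    """Insert the 4-byte extension field after the 348-byte header and set
    vox_offset to 352, writing the result to a new file."""
    with open(src, 'rb') as f:
        raw = f.read()
    hdr = bytearray(raw[:348])
    struct.pack_into('<f', hdr, 108, 352.0)  # vox_offset: float32 at byte 108
    with open(dst, 'wb') as f:
        f.write(hdr)
        f.write(b'\x00' * 4)   # extension flag: all zeros = no extensions
        f.write(raw[348:])     # image data now starts at byte 352
```

Writing to a new file name, rather than in place, keeps the malformed original intact for comparison.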

It looks like the qform and sform indeed have no data in them. There’s nothing recoverable from this file. If it was an Analyze file, then using get_best_affine() will produce an LPS affine, which should match Analyze conventions. However, it would be unwise to depend on this.

If the people who ran the study were collecting data they knew would be in Analyze, then they may have used a fiducial marker, like a vitamin E capsule, that you can see in the image. I have always encountered it as a PORS (pill on right side) convention, but I would check with the data collectors for any information on their protocol.

As this appears to be functional data, you can see if there are any well-known, lateralized effects. Finger-tapping, for instance, should show a strong effect in contralateral motor cortex. For checks like this, prepare your data as if you know the orientation (anterior/posterior and inferior/superior should be visually verifiable), preprocess and run first-level statistics. If the result is contralateral to the expected area, flip the orientation on affected runs, and rerun all processing from start to finish.
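If those checks do flag a mirrored run, the flip itself amounts to reversing the first (left-right) voxel axis of the data array. A minimal sketch in plain NumPy (`flip_lr` is a hypothetical helper; re-saving the flipped array is left to your pipeline, e.g. with nb.Nifti1Image(flipped, img.affine, img.header)):

```python
import numpy as np

def flip_lr(data):
    """Reverse the first (x) voxel axis of a 3-D/4-D array, relabeling which
    side of the volume is 'left'. Apply only to runs your checks flagged."""
    return np.ascontiguousarray(data[::-1, ...])
```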

As a last-ditch effort, if you have reasonably good gray/white contrast, you can try both possible orientations and see which registers to the anatomical image with a lower final cost function (assuming both look reasonably aligned). If there’s a large difference, I would consider that weak evidence for one orientation.

If you don’t have fiducials or strong lateralized effects, there’s nothing to be done; I would personally give up on the data. Your effects have an even chance of being localized to the wrong hemisphere, assuming all data at least share an orientation. If some of your runs/subjects are in one orientation and some in the other, your effect sizes will probably be under-estimated: either you’ll see weaker effects or you’ll get a lot more noise artifacts, depending on your family-wise error correction strategy.

I want to emphasize, again, that none of this replaces getting files of more certain provenance.

In nibabel, you should be able to load the image as I described above, and then just re-save with img.to_filename(). The voxel offset should get automatically fixed. I would highly recommend saving under a new file name, to avoid any memory-mapping artifacts. (If you hit a BusError, it means you tried to save to the same filename, and the data is now gone.)