fMRIPrep BOLD transform & CRC/size validation error


We are using fMRIPrep to preprocess a series of data from a task-based design (i.e., participants perform an activity during scanning). We keep running into the same error, with identical messaging, for seven subjects:

indexed_gzip.indexed_gzip.CrcError: CRC/size validation failed - the GZIP data might be corrupt
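As a side note, the same kind of CRC/size validation that indexed_gzip performs can be approximated with Python's standard `gzip` module, which also verifies the trailing CRC32/length fields when a member is read to the end. A minimal sketch for checking whether a file is actually corrupt on disk (the filename is hypothetical; substitute one of the failing BOLD files):

```python
import gzip

def gzip_crc_ok(path):
    """Read a .gz file to EOF; the gzip module validates the trailing
    CRC32/length fields and raises if they do not match the data."""
    try:
        with gzip.open(path, "rb") as f:
            while f.read(1 << 20):  # stream in 1 MiB chunks to EOF
                pass
        return True
    except (OSError, EOFError):  # gzip.BadGzipFile subclasses OSError
        return False

# Hypothetical path; prints False unless the file exists and is intact
print(gzip_crc_ok("sub-01_task-ssrt_bold.nii.gz"))
```

If this returns True for the failing files, the corruption is more likely on the indexed_gzip side than in the data itself.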

The error occurs on the following node:


We have done the following steps to remediate this error (with no success):

  1. We re-ran these scans through fMRIPrep after using heudiconv to convert all DICOM images to NIfTI format, trying both available converters: “dcm2nii” and “dcm2niix.” We made sure to run the “bids-validator” before re-running fMRIPrep.
  2. We compared the JSON sidecar files (which provide descriptive information on each series of scans) between scan series that processed successfully and those that failed.
  3. We unzipped and re-zipped the files when we first came across the “GZip” error.
  4. We used modifier flags to select only “task-based” scans (e.g., -t ssrt).
  5. We used Statistical Parametric Mapping (SPM), a MATLAB toolbox for preprocessing fMRI data, to perform the DICOM-to-NIfTI conversion instead of heudiconv.
  6. We checked whether the files or the FreeSurfer directory were corrupt by viewing the fMRI data in “Freeview.” No skipping or inconsistencies were present between volumes when using the “movie” option.
  7. We also inspected the “ribbon.mgz” file for any inconsistencies that might be impeding the fMRIPrep run.
  8. We tried modifying the JSON file to contain only the necessary metadata (repetition time, slice timing, and task name).
  9. We compared the JSONs of successful subjects to those of unsuccessful ones.
  10. We also used the “--ignore slicetiming” flag on fMRIPrep after using the newly modified JSON file (as described in #8).
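Step 3 above (unzip and re-zip) can be sketched roughly as follows; decompressing also forces a CRC check of the original archive, and recompressing writes a fresh, standards-compliant gzip stream. The filename is hypothetical:

```python
import gzip
import pathlib
import shutil

def rezip(gz_path):
    """Decompress a .gz file (validating its CRC in the process) and
    recompress it in place with a newly written gzip stream."""
    gz_path = pathlib.Path(gz_path)
    tmp = gz_path.with_suffix("")  # e.g. sub-01_bold.nii.gz -> sub-01_bold.nii
    with gzip.open(gz_path, "rb") as fin, open(tmp, "wb") as fout:
        shutil.copyfileobj(fin, fout)
    with open(tmp, "rb") as fin, gzip.open(gz_path, "wb") as fout:
        shutil.copyfileobj(fin, fout)
    tmp.unlink()  # remove the temporary decompressed copy
```

Note this only helps if the on-disk gzip stream itself is malformed; it cannot fix a reader-side bug.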

The CrcError was due to a bug in the indexed_gzip version packaged with a couple of releases in the fMRIPrep 20.2.x series. If you upgrade to 20.2.6, it should be resolved.
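If you run fMRIPrep bare-metal rather than via a container (where the image tag pins the dependencies), it may be worth confirming which versions are actually installed before re-running. A minimal standard-library sketch:

```python
from importlib.metadata import PackageNotFoundError, version

# Report the installed versions of the relevant packages, if present.
for pkg in ("fmriprep", "indexed_gzip"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed in this environment")
```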

@effigies Thanks so much, we will try running with version 20.2.6!