Indexing Error During Regressor Refinement

Summary of what happened: When running rapidtide denoising on fMRI data, I received an indexing error during regressor refinement. The error occurs for only a handful of subjects.

Command used:

singularity run \
	--cleanenv \
	-B /home/vagrant/fMETData/derivatives/:/data_in,/home/vagrant/fMETData/derivatives/:/data_out,/home/vagrant/:/home/rapidtide/ \
	/home/vagrant/rapidtidev2.9.8.2.simg \
	rapidtide \
	/data_in/$input_file \
	$outputname \
	--brainmask $mask \
	--nprocs 16 \
	--denoising

Version:

2.9.8.2

Environment (Docker, Singularity / Apptainer, custom installation):

I am running rapidtide via Singularity.

Relevant log outputs (up to 20 lines):

Regressor refinement, pass 2
Traceback (most recent call last):
  File "/opt/conda/bin/rapidtide", line 23, in <module>
    rapidtide_workflow.rapidtide_main(rapidtide_parser.process_args(inputargs=None))
  File "/opt/conda/lib/python3.12/site-packages/rapidtide/workflows/rapidtide.py", line 2561, in rapidtide_main
    peaklag, dummy, dummy = tide_stats.gethistprops(
                            ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/lib/python3.12/site-packages/rapidtide/stats.py", line 595, in gethistprops
    peaklag = thestore[0, peakindex + 1]
              ~~~~~~~~^^^^^^^^^^^^^^^^^^
IndexError: index 101 is out of bounds for axis 1 with size 101

It seems like the peak detection step didn't find any peaks. As far as I can tell, the scan treats a bin as the peak once the bin after it drops below 0.75 times its value; if the histogram never shows such a drop, the scan runs all the way to the last bin, flags it as the peak, and then the lookup at peakindex + 1 reads one bin past the end of the histogram.
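
To illustrate, here is a rough, self-contained sketch (my reconstruction, not the actual rapidtide code) of how I think the scan fails when the counts rise monotonically all the way to the edge of the histogram:

import numpy as np

# Hypothetical reconstruction of the failure mode: thestore[0] holds the
# histogram bin centers (lags), thestore[1] holds the counts, as in the traceback.
nbins = 101
thestore = np.vstack([np.linspace(-15.0, 15.0, nbins),   # bin centers
                      np.linspace(1.0, 100.0, nbins)])   # counts rising to the edge

# Walk toward the peak; with no clear peak the scan runs to the last bin.
peakindex = 0
while peakindex < nbins - 1 and thestore[1, peakindex + 1] > 0.75 * thestore[1, peakindex]:
    peakindex += 1

# peakindex is now 100, so this reads one past the end:
peaklag = thestore[0, peakindex + 1]  # IndexError: index 101 is out of bounds for axis 1 with size 101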

@bbfrederick is there a workaround for this kind of situation, or is it actually a bug in rapidtide?

Looks like a bug. I can at least stop the error from happening by limiting the search to stop one bin before the end; then it won't crash, but either there's something very unexpected in that dataset or it's starting with a regressor that's very wrong. Let me see if I can stop it from crashing, and then we can do a postmortem on a dataset.
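
Roughly, the guard looks like this (a sketch of the idea, not the actual patch):

import numpy as np

# Bound the scan one bin before the end so peakindex + 1 always stays in range.
def find_peak_bounded(counts, peakfrac=0.75):
    peakindex = 0
    while peakindex < len(counts) - 2 and counts[peakindex + 1] > peakfrac * counts[peakindex]:
        peakindex += 1
    return peakindex  # at most len(counts) - 2

counts = np.linspace(1.0, 100.0, 101)   # the same pathological, peakless histogram
print(find_peak_bounded(counts))        # 99, so indexing peakindex + 1 is now safe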

Actually, while I do that, you should try increasing the upper limit of --searchrange; if there really is a peak just past the end of the search range, that may find it.
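
For example, appending the flag to the original command (the +/- 18 s window here is just an illustration; --searchrange takes a lower and an upper lag limit in seconds):

rapidtide \
	/data_in/$input_file \
	$outputname \
	--brainmask $mask \
	--nprocs 16 \
	--searchrange -18 18 \
	--denoising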

Yup. Looks like an off-by-one error on line 585 of tide_stats. I’ve fixed it, but there’s something broken in my package and container build process at the moment. The fix will be in 2.9.9.2, whenever I can figure out how to deploy it.

I increased the search range incrementally and got it to run without error once the range reached +/- 18 s. Checking the delay maps, I saw that some voxels had delays close to that upper limit. The data are from a healthy subject and the brain mask looks fine, so I am a little confused as to how the delay estimates could be so high.