fMRIPREP registration issue for single run

Upon inspection, registration between the functional and anatomical image seems to be failing for 1 run (of 12). The BOLD data looks shifted/cut off when overlaid (see attached photo). This is only occurring for 1 run, and the raw data for the run looks normal.

Has anyone encountered this issue? If so, do you recommend re-running everything, or is there a specific step I should target?

Can you upload the co-registration reportlet as generated by fMRIPrep?

Do you mean the .html visual report? If not, where should I look for this? Thank you!

Either the .html report (along with the figures folder, which is located at sub-<id>/figures/) or the co-registration figure itself (you’ll find a link to it in the html report, under the figure).

Hi Oscar,

I put the zipped files on Google Drive; let me know if you have any problems accessing them:

The run in question is Session 3, Run 1. Thanks!!


Okay, you can see the same problem in the reports for task-HSR_run-01 so we can rule out a visualization issue.

Could you check for differences in the headers of run-01 and run-02 of that task?
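If it helps, here’s a quick stdlib-only sketch for diffing the geometry-related NIfTI-1 header fields between two runs (nibabel’s `img.header` would show you the same information; the field names and byte offsets come from the NIfTI-1 standard, and the filenames in the final comment are just placeholders):

```python
import gzip
import struct

# Byte offsets and struct formats for a few NIfTI-1 header fields
# (the header is 348 bytes; offsets per the NIfTI-1 standard).
FIELDS = {
    "dim":        (40,  "<8h"),  # image dimensions
    "pixdim":     (76,  "<8f"),  # voxel sizes
    "cal_max":    (124, "<f"),   # display range max
    "cal_min":    (128, "<f"),   # display range min
    "qform_code": (252, "<h"),
    "sform_code": (254, "<h"),
    "srow_x":     (280, "<4f"),  # affine rows
    "srow_y":     (296, "<4f"),
    "srow_z":     (312, "<4f"),
}

def read_header(path):
    """Read the selected header fields from a .nii or .nii.gz file."""
    opener = gzip.open if path.endswith(".gz") else open
    with opener(path, "rb") as f:
        hdr = f.read(348)
    return {name: struct.unpack_from(fmt, hdr, off)
            for name, (off, fmt) in FIELDS.items()}

def diff_headers(path_a, path_b):
    """Return {field: (value_a, value_b)} for fields that differ."""
    a, b = read_header(path_a), read_header(path_b)
    return {k: (a[k], b[k]) for k in FIELDS if a[k] != b[k]}

# e.g. diff_headers("sub-XX_run-01_bold.nii.gz", "sub-XX_run-02_bold.nii.gz")
```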

The only difference in the nifti headers between runs is in the minimum and maximum display range of the image intensity. This differs between all runs though, and doesn’t seem a likely culprit.

The orientation and the translation are the same across runs.

I re-downloaded the data from Flywheel, double-checked all the json files, verified BIDS formatting, and ran it again last night, with the same outcome. Let me know if you have any thoughts on how to troubleshoot this!

Thank you so much!

Yes, I’m not surprised by that.

Could you 0) back up your data; and 1) copy the nifti header from a functioning file?

We’ll probably want to add a check on the field causing this behavior.
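To illustrate what I mean by copying the header, here’s a minimal sketch. It assumes uncompressed single-file .nii volumes with identical dimensions, datatype, and vox_offset (for .nii.gz, decompress first); FSL’s fslcpgeom is an alternative if you only need the geometry fields copied. Back up the originals before trying anything like this:

```python
# 348-byte NIfTI-1 header + 4-byte extension flag, as laid out in a
# single-file .nii (data typically starts at vox_offset = 352).
NIFTI1_HDR_BYTES = 352

def copy_header(good_path, bad_path, out_path):
    """Write out_path = header of good_path + data of bad_path.

    Assumes both files are uncompressed .nii volumes with identical
    dimensions, datatype, and vox_offset -- diff the headers first,
    and keep a backup of the original file.
    """
    with open(good_path, "rb") as f:
        good_hdr = f.read(NIFTI1_HDR_BYTES)
    with open(bad_path, "rb") as f:
        f.seek(NIFTI1_HDR_BYTES)
        bad_data = f.read()
    with open(out_path, "wb") as f:
        f.write(good_hdr)
        f.write(bad_data)
```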

I’ve uploaded a single volume from the problematic run, and one from a run that looks normal:

Let me know if you need anything else, and what I can do from here. Thank you!

Just fetched the data. Will look into this tomorrow. Sorry for the slow response.

Just wanted to check in about this issue; let me know if there’s anything I can do!

Sorry for the slow response.

So, I could check that the headers are the same; they only differ in the cal_max/cal_min values, which should not cause this issue.

The only potential problem I could spot is that the run you labelled as “bad” has a pretty unfortunate FoV prescription, cutting out a good portion of the top of the brain. I would agree that this should not drive co-registration adrift, but it might just be the case.

Could you share the T1w image, metadata and the reverse encoding references for me to replicate? Maybe give me access on Sherlock?


BTW, a good sanity check would be running this subject with --fs-no-reconall to use an alternative registration approach.

I’ve uploaded the subject’s data folder to google drive:

I can run the sanity check you suggested, I’ll get that started now.

I tried to run the subject with this flag and it errors out after ~1 min, with the message:

traits.trait_errors.TraitError: The 'tr' trait of a FunctionalSummaryInputSpec instance must be a float, but a value of None <class 'NoneType'> was specified.

I’m not sure why this would happen all of a sudden; do you think it’s related?

That is surprising. I’ll check on the dataset you’ve provided.

Okay, that error just happened to me with a dataset missing some _bold.json files.
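In case it’s useful for tracking that down, here’s a quick stdlib-only sketch that flags _bold runs with no run-level sidecar or no RepetitionTime in it. Note that BIDS also allows the sidecar to be inherited from higher up the tree, so treat hits as candidates to inspect rather than certain problems:

```python
import json
from pathlib import Path

def find_missing_tr(bids_root):
    """Flag _bold runs whose run-level JSON sidecar is absent or
    lacks RepetitionTime (one way a 'tr' of None can reach fMRIPrep).

    Only checks sidecars next to the image, not inherited ones
    higher up the BIDS tree.
    """
    problems = []
    for nii in Path(bids_root).rglob("*_bold.nii*"):
        sidecar = nii.with_name(nii.name.split(".nii")[0] + ".json")
        if not sidecar.exists():
            problems.append((str(nii), "no sidecar"))
            continue
        meta = json.loads(sidecar.read_text())
        if "RepetitionTime" not in meta:
            problems.append((str(nii), "no RepetitionTime"))
    return problems
```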

Hi @corey,

Might be totally off-base here, but I know at our site there have been some issues with Flywheel intermittently reaping parts of the nifti headers for some volumes within a scan and not others, then spitting out info that confuses the conversion to nifti and downstream steps (someone just uncovered this a few days ago).

I might try it with data that hasn’t gone through Flywheel, just straight from the scanner? Unless @oesteban thinks he has some idea behind the issue.

Thank you for letting me know! @oesteban, let me know your thoughts!