What kind of details do you need? We posted the same problem more than a week ago with more details but did not receive any reply: Alignment of different sessions
Let me know if you need more information (and what kind).
It looks like whatever files were shared last week are no longer there, so I don’t know whether anything else would be needed.
In general, it sounds like the issue is that you’re expecting multiple BOLD series to be exactly in register? That is the overall problem that motion correction, susceptibility distortion correction and BOLD-T1w registration are intended to solve, but these aren’t perfect methods.
In the other post, @Flaarn suggests creating some kind of average BOLD reference to align each series to, which is a reasonable idea, and this could be implemented in fMRIPrep. It would likely take some substantial rearchitecting, but if you have someone on your team interested in taking this on, we can provide guidance.
Another thing to consider for multivariate methods is to run them within each run, and then analyze the figures of merit across runs as a second-order statistic. If you’re using searchlights or other methods that have an implicit smoothing effect, then the variations across runs will have less impact. This will also make it easier to take run and session effects into account in your stats.
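To make that concrete, here is a minimal sketch of the second-order idea, assuming you already have one figure-of-merit map per run (the arrays, shapes and chance level below are placeholders, not anyone’s actual pipeline):

```python
import numpy as np
from scipy import stats

# Hypothetical per-run figures of merit, e.g. searchlight decoding accuracies,
# one flattened (n_voxels,) map per run. In practice these would come from your
# within-run multivariate analysis.
n_runs, n_voxels = 8, 5000
rng = np.random.default_rng(0)
run_maps = rng.normal(loc=0.52, scale=0.05, size=(n_runs, n_voxels))  # placeholder data

# Second-order statistic: test whether accuracy exceeds chance across runs,
# treating runs as the unit of observation. Run/session effects can be handled
# at this level instead of relying on voxel-exact alignment of the raw data.
chance = 0.5
t_vals, p_vals = stats.ttest_1samp(run_maps, popmean=chance, axis=0)

print(t_vals.shape, p_vals.shape)  # (n_voxels,), (n_voxels,)
```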
The current thinking in our lab is that the fmriprep pipeline may not be optimal when very consistent voxel alignment across sessions is required for each subject. This is required, for example, for single-voxel encoding models (and I assume for RSA too).
Fmriprep may be better suited to more “standard” group-level analyses, where data are analyzed after transformation to a common template and some smoothing is applied. In that case, slight voxel misalignments at the single-subject level may not matter.
This thinking is based on some small and simple comparisons between data processed using our in-house pipeline and fmriprep, so it could well turn out to be wrong ;).
Btw, you will notice at the end of that linked thread that I am supposed to check something to help drill down where the issue may be. I have not yet done this but I will report back on that original thread once I do.
Thank you both for your clear replies. @12552, we think you may be right regarding the current utility of preprocessing in fmriprep in those situations where voxel alignment across sessions is imperative, like in our case.
We have tried aligning each series to the same volume in another package (SPM) and that leads to better alignment than the output of fmriprep, so it seems improvement is possible here. Unfortunately, we do not have the expertise available in our team to take this on.
@Flaarn and I were wondering if we could work around this problem by running fmriprep with the native BOLD space as output and then aligning runs/sessions (and aligning to the T1w) with another package afterwards.
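Roughly, what we have in mind is something along these lines (a rough sketch only, using FSL’s FLIRT through nipype and requiring FSL to be installed; the file names are placeholders and not fmriprep’s actual output naming):

```python
from nipype.interfaces import fsl

# Placeholder file names; in practice these would be the native-space boldref
# images that fmriprep writes out for each run/session.
reference = "ses-01_run-01_boldref.nii.gz"
moving_refs = ["ses-01_run-02_boldref.nii.gz", "ses-02_run-01_boldref.nii.gz"]

for ref in moving_refs:
    # Rigid-body registration of each run's boldref to a single chosen reference.
    flirt = fsl.FLIRT(
        in_file=ref,
        reference=reference,
        dof=6,
        out_file=ref.replace(".nii.gz", "_aligned.nii.gz"),
        out_matrix_file=ref.replace(".nii.gz", "_to-run01.mat"),
    )
    flirt.run()

# The saved .mat transforms could then be applied to the corresponding 4D BOLD
# series, and combined with a boldref-to-T1w registration (not shown here).
```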
We tried to specify the native bold space as an output space (as described here: https://fmriprep.readthedocs.io/en/stable/spaces.html; last point under nonstandard spaces). However, using any of these terms as an output space (e.g., bold, boldref, run, func) does not seem to work. Is this implemented, and if so, what term should be used?
As of fmriprep 1.5 this works (at least on our singularity setup). Prior to that there was an issue where one of the output spaces had to be a standard space (see Error in "native" EPI space processing (--output-spaces func)), so perhaps that is your issue?
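For what it’s worth, a call along these lines is roughly what works on our setup (a sketch only; the container image name, paths and participant label are placeholders):

```python
import subprocess

# Hypothetical fMRIPrep (>= 1.5) invocation via Singularity requesting the
# native BOLD space ('func') alongside T1w outputs; paths are placeholders.
subprocess.run([
    "singularity", "run", "--cleanenv", "fmriprep-1.5.0.simg",
    "/data/bids", "/data/derivatives", "participant",
    "--participant-label", "01",
    "--output-spaces", "func", "T1w",
], check=True)
```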
As to your workaround, I’d be interested to see how it works for you.
However, I have to tell you that I tried this on a small sample (N=2) of data in our lab and it did not solve the issue. That is, the results after doing this were still worse than with a preprocessing pipeline that aligns to a single volume from the start. In fact, it did not seem to improve on fmriprep’s approach much, if at all.
Fyi, what we are looking at is Signal to Noise, defined as response reliability across repeated presentations of a movie stimulus (also sometimes called the “Noise Ceiling”). So I don’t know whether my experience generalizes to what you are testing.
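In case it helps clarify what I mean, this is roughly the kind of voxelwise reliability measure I’m describing (placeholder arrays, not our actual code):

```python
import numpy as np

# Hypothetical responses to two repeats of the same movie, shape
# (n_timepoints, n_voxels). In practice these would be the preprocessed BOLD
# time series for the two presentations.
n_timepoints, n_voxels = 300, 5000
rng = np.random.default_rng(1)
repeat_1 = rng.normal(size=(n_timepoints, n_voxels))
repeat_2 = repeat_1 * 0.5 + rng.normal(size=(n_timepoints, n_voxels))  # placeholder

# Voxelwise reliability: Pearson correlation of the two repeats' time courses.
# Voxels that are misaligned between runs/sessions will show lower correlations,
# which is why this measure is sensitive to cross-session alignment.
z1 = (repeat_1 - repeat_1.mean(0)) / repeat_1.std(0)
z2 = (repeat_2 - repeat_2.mean(0)) / repeat_2.std(0)
reliability = (z1 * z2).mean(axis=0)  # Pearson r per voxel

print(reliability.shape)  # (n_voxels,)
```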