Correct point during pre-/postprocessing to apply rapidtide in a complex workflow

Hi!
I am working on a rather extensive investigation of a resting-state connectivity-based analysis method for which the spatial structure of the signal across the brain is of central relevance. The main effect I find is extremely robust across subjects, sessions, and datasets, and seems to be related to arousal, which indicates that I should look into sLFOs :slight_smile: .

As I am working on multiple datasets with different characteristics simultaneously and have a completely automated processing workflow, it is not obvious to me at which point to employ sLFO estimation and regression.

For a first exploration, I consider the following dataset most suited (it’s also the most complicated case):
15 minutes of resting-state. After 7 minutes, Ketamine i.v. infusion begins. The readout from my method looks like it basically tracks Ketamine levels. Ketamine, of course, also induces physiological changes. TR = 2.2 s; resolution is a bit low, ~3 × 3 × 3.9 mm.

Processing:

fMRIPrep, output in fsLR and MNI152NLin2009c space (native resolution).
All fMRIPrep data goes into XCP-D with the following settings: despiking, no censoring, 0.01–0.08 Hz Butterworth filter, linear detrending, regression of 36 parameters (6 motion, csf, wm, global signal, plus their temporal derivatives, quadratic terms, and squared derivatives).

Options

  • fMRIprep → XCP-D → rapidtide. I understand that rapidtide should generally be run late in processing, however, I think that I employ too much temporal filtering for that, and the aggressive global signal regression might also be a problem?
  • fMRIprep → rapidtide → XCP-D. Does this make sense? If yes, which regressors, especially concerning the global signal, should I drop from the XCP-D run?
  • fMRIprep → rapidtide: estimate regressors → XCP-D → apply rapidtide regressors. That would be the solution suggested for FIX-denoised HCP data in the docs. Same question regarding what global signal regression parameters to keep in the XCP-D run?
  • fMRIprep → some combination of rapidtide and custom nilearn filtering, depending on what you suggest. I would prefer one of the options above, since they rely on automated pipelines.

Could you possibly give me some guidance on what procedure would be the most fitting? That would be immensely helpful! Thanks a lot in advance!

Best wishes,
Leon

I personally hope to implement the pipeline fMRIPrep → rapidtide → XCP-D. Rapidtide is actually why I added support for voxel-wise regressors to XCP-D. It’s also why I’ve been hoping to get fMRIPost-rapidtide working at some point. I have drafted a rapidtide-based XCP-D denoising config that combines rapidtide with the 24P denoising strategy (just motion parameters and their extensions) here, but I haven’t tested it yet. In practice, fMRIPrep → rapidtide (using the confounds file to do the denoising within rapidtide) might be the safest bet, unless you really want some of the derivatives XCP-D produces. The one drawback is that you won’t have despiking, but I think you can run that before rapidtide.

My understanding is, if you’re using rapidtide, you probably don’t need to include global signal in your denoising step, but I don’t know if it’s a problem in the same way that FIX is. @bbfrederick WDYT?

Thanks a lot for your input @tsalo! And thanks for your efforts to integrate rapidtide with fMRIPrep and XCP-D; this would of course be optimal!
For future projects, I could indeed skip XCP-D and write custom postprocessing code. For the current one, though, I relied heavily on XCP-D, and since I would like to check the influence of sLFO regression on my current results, it seems sensible to stick with XCP-D (to keep as few other differences as possible between the runs with and without sLFO regression).

But I would be happy to try whether I can make XCP-D work with a 24P or 32P + rapidtide strategy – if that is more reasonable than fMRIprep → rapidtide:estimate → XCP-D → rapidtide:apply (which would then be retroregress if I did it in two steps, or the --denoisesourcefile argument of rapidtide if in one step?)
If I understand correctly, the downside of the voxelwise-regressors-in-XCP-D option is that one has to save out the 4D regressor file, which would not be the case for the post-hoc path?

In both cases, the question of which other regressors to include remains open. As you also mentioned, I had already thought that the global signal shouldn't be necessary anymore. So:

  • only motion&extensions (24P)
  • motion&extensions + wm + csf (“26P”)
  • motion&extensions + wm&extensions + csf&extensions (“32P”)
  • Furthermore: it also seems to be an option to include derivatives of the voxelwise sLFO regressors? That would likely work only with the post-hoc option, since adding derivatives on the fly to voxelwise regressors isn't implemented in XCP-D?
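For the post-hoc path, computing a temporal derivative of the voxelwise regressors would be simple in principle. A minimal numpy sketch (the (x, y, z, t) layout and the function name are illustrative, not rapidtide's actual internals):

```python
import numpy as np

def with_temporal_derivative(evs_4d):
    """Stack a voxelwise regressor with its first temporal derivative.

    evs_4d: hypothetical (x, y, z, t) array holding the shifted sLFO
    regressor at every voxel. Returns shape (x, y, z, t, 2).
    """
    deriv = np.gradient(evs_4d, axis=-1)  # central differences along time
    return np.stack([evs_4d, deriv], axis=-1)

# toy example: a 2x2x2 volume with 50 time points
rng = np.random.default_rng(0)
evs = with_temporal_derivative(rng.standard_normal((2, 2, 2, 50)))
print(evs.shape)  # (2, 2, 2, 50, 2)
```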

Thanks!

The one drawback to doing fMRIPrep → rapidtide:estimate → XCP-D → rapidtide:apply is that you won’t get XCP-D derivatives (e.g., ALFF, ReHo, parcellated time series). However, if you don’t need those, then that is probably fine. I would lean toward using fMRIPrep → rapidtide (with 24P confound regression enabled) → XCP-D with rapidtide+24P.

rapidtide should write out the regressor 4D file anyway, and TBH I assumed that rapidtide:estimate would involve writing out the same file for rapidtide:apply. Is that not correct?

From what I’ve read of @bbfrederick’s work, WM and CSF signals at least partially reflect the sLFO at specific lags, so I’m guessing including those in the GLM would be counterproductive.

You’re right that XCP-D won’t calculate the derivatives internally, but I believe rapidtide can output derivatives of the voxel-wise regressor. I think that’s the XXX_desc-lfofilterEVDerivN_bold output.

Right, I agree of course, and somehow missed this. For my current analyses I would clearly like to keep XCP-D as the last processing step to compare outputs between with/without rapidtide regressors.

As far as I understand and observed in some trial runs, rapidtide by default will not write out the 4D regressor file. This can be triggered with the --outputlevel max setting. Here’s a quote from the docs:

| Name | Extension(s) | Content | When present |
| --- | --- | --- | --- |
| XXX_desc-lfofilterEV_bold | nii.gz, json | Shifted sLFO regressor to filter | Present if despecklepasses > 0 (default) and outputlevel is max |
| XXX_desc-lfofilterEVDerivN_bold | nii.gz, json | Nth time derivative of shifted sLFO regressor | Present if sLFO filtering is enabled (default), regressderivs > 0, and outputlevel is max |

I understand that, when voxelwise regression is run separately with rapidtide (i.e., retroregress?), this 4D regressor file is not needed. Instead, the 4D regressors and their derivatives are calculated on the fly from the regressor time series (XXX_desc-EV_timeseries.tsv.gz) and the delay map (likely XXX_desc-delayoffset_map.nii.gz? Not sure about that, though). This should be much more storage-efficient. It wouldn't be too relevant for my Ketamine study, but I am also working on HCP data.
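Conceptually (this is only a sketch of my understanding, not rapidtide's actual implementation), building the voxel-specific regressor from the sLFO time series and a delay map amounts to resampling the regressor at t minus each voxel's delay:

```python
import numpy as np

def voxelwise_shifted_regressor(slfo, delay_map, tr):
    """Shift the global sLFO regressor by each voxel's delay (conceptual sketch).

    slfo      : 1D sLFO regressor sampled at the fMRI TR
    delay_map : 3D array of per-voxel delays in seconds
    tr        : repetition time in seconds

    Assumes a positive delay means the sLFO arrives later at that voxel,
    so the voxel sees slfo(t - delay).
    """
    t = np.arange(slfo.size) * tr
    out = np.empty(delay_map.shape + (slfo.size,))
    for idx in np.ndindex(delay_map.shape):
        # linearly interpolate the regressor at the shifted time points
        out[idx] = np.interp(t - delay_map[idx], t, slfo)
    return out
```

With a zero delay map this just reproduces the input regressor at every voxel; storage-wise, only the 1D time series and the 3D delay map need to be kept on disk, rather than the full 4D regressor.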

Anyway, writing out the 4D regressor time series and passing it to XCP-D is what I would prefer to do for now. I am sure that there will be a better solution in the future.

Okay, thank you. I think I will give fMRIprep → rapidtide (24P, estimate) → XCP-D (24P + voxelwise regressors) a try.


If you use rapidtide, I think you should certainly omit the global signal - you’ll get better noise removal and no spurious negative correlations (see Erdogan 2016). I’d also argue you should probably omit the white matter regressor too, since that’s mostly a delayed version of the global mean (although there may be some additional information in there).

I also agree that fMRIPrep → rapidtide → XCP-D is the correct way to do it.

Also, you should probably do motion regression in rapidtide (rapidtide only uses this internally for estimating the sLFO regressor and delays; the estimation works better with that confound removed).


You are correct. The delay calculation and sLFO regressor extraction take 90% of the calculation time, but can be summarized with a fairly small amount of saved data (you can use output level “min” if you don’t want to look at the correlation function). Then run retroregress with output level “max” to do the actual filtering and/or save the 4D regressor. If this is going to be something people want to do a lot, I can put in an outputlevel that ONLY generates the 4D file for regression and does not do the filtering (and does not produce and save lots of data).

Ok, I added the option. Now if you set --outputlevel onlyregressors in retroregress, you will get the XXX_desc-lfofilterEV_bold.nii.gz (and XXX_desc-lfofilterEVDerivN_bold.nii.gz) file(s) containing the voxel-specific sLFO regressors, and ONLY those files. It runs very fast and produces the minimum amount of output needed for later filtering.

Thank you, @bbfrederick, for providing your input!
What about CSF regressors? Intuitively, these should also contain a part of the sLFO signal?

I’m on that right now (using dockerized rapidtide v3.0.2) and realized that 24P motion regression does not currently seem to be implemented. Derivatives are calculated by default, but squared motion parameters can only be requested for the separate confound file; i.e., a motionpowers option akin to confoundpowers would be required. That being said, I only want to do this because I will run 24P+dGSR regression in the XCP-D step and thought it would make sense to have the same regression parameters in both.

I nevertheless got it running by passing a file with 6 motion parameters to --confoundfile and setting --confoundpowers 2. That worked, except that it increases runtime dramatically: the “start confound filtering” part now takes about 20 minutes with the 24 regressors, compared with 15 seconds with 12 regressors.

Thanks a lot! I will try to implement this in my pipeline by not writing out the 4D volume in rapidtide, but instead running retroregress in my XCP-D script to create the 4D volumes temporarily and delete them afterwards.

CSF signals are a little complicated: the velocity of the CSF is something like the negative derivative of the sLFO (à la Laura Lewis’ work), which will lead to a motion effect that probably looks something like the derivative. The sLFO plus its derivative can account for small delays, so how it affects the final fit is hard (for me) to characterize.

In an ideal world, you’d regress out the voxel specific sLFO signal, then calculate the “pure” WM and CSF regressors, so you’d only get whatever special information they carry, then throw them into your later fit. I can add that to rapidtide, since at the end of the run you’ll have all the information you need to do that calculation easily.

I’m on that right now (using dockerized rapidtide v3.0.2) and realized that 24P motion regression does not currently seem to be implemented. Derivatives are calculated by default, but squared motion parameters can only be requested for the separate confoundfile. I.e., a motionpowers option akin to confoundpowers would be required.

That’s a mistake on my part. It used to be in there, and when I split the confound regression out from motion, I think the powers option came with it. I’ll add a --motpowers option back to the argument parser. Although I just looked at Satterthwaite (2013) and I think I had a gross conceptual error about what 24P meant. I thought it was x, y, z, x_rot, y_rot, z_rot, and first derivatives, and then all squared, but it seems to be x, y, z, WM, CSF, global mean, and first derivatives, all squared. So I’m not sure exactly how to proceed on that.

For 24P you can also just select the relevant confounds from the fMRIPrep confounds file. The derivatives and squared versions should already be available in there.
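For illustration, selecting those 24 motion columns with pandas could look like the sketch below. The column suffixes (`_derivative1`, `_power2`, `_derivative1_power2`) are fMRIPrep's; the synthetic table stands in for the real desc-confounds_timeseries.tsv, and the output filename is just an example:

```python
import numpy as np
import pandas as pd

base = ["trans_x", "trans_y", "trans_z", "rot_x", "rot_y", "rot_z"]
suffixes = ["", "_derivative1", "_power2", "_derivative1_power2"]
cols24 = [p + s for p in base for s in suffixes]  # 6 x 4 = 24 columns

# In practice: confounds = pd.read_table(".../desc-confounds_timeseries.tsv")
rng = np.random.default_rng(0)
confounds = pd.DataFrame(rng.standard_normal((100, len(cols24))), columns=cols24)

# fMRIPrep leaves the derivative columns as n/a for the first volume
motion24 = confounds[cols24].fillna(0)
motion24.to_csv("motion24.tsv", sep="\t", index=False)
print(motion24.shape)  # (100, 24)
```

The resulting TSV could then be handed to rapidtide's --confoundfile argument.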

So, in theory, one could also create a voxelwise regressor set of white-matter and CSF regressors that is orthogonalized with respect to each voxel's sLFO signal (and, if required, its derivative)? The advantage would be that no WM/CSF masks are required, only the original WM/CSF signals. The disadvantage is very clearly the file sizes: in the most extreme case, this would mean 1 sLFO + 1 sLFO-derivative + 4 WM + 4 CSF = 10 voxelwise regressors, i.e., 10 times the size of the input data.
However, I think your way is both cleaner and more straightforward, and I will try to go for that. I would like to work as cleanly as possible for this analysis, as its outcome will be highly relevant to me.
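The per-voxel orthogonalization itself is just a least-squares projection. A minimal sketch (the function and variable names are mine, not rapidtide's):

```python
import numpy as np

def orthogonalize(nuisance, slfo_voxel):
    """Return the part of a nuisance regressor orthogonal to the voxel's sLFO.

    Projects `nuisance` onto `slfo_voxel` (the voxel's shifted sLFO
    regressor) and subtracts that component, leaving only whatever
    variance the nuisance signal carries beyond the sLFO.
    """
    beta = slfo_voxel @ nuisance / (slfo_voxel @ slfo_voxel)
    return nuisance - beta * slfo_voxel

# toy example: a WM signal that is mostly sLFO plus some extra variance
rng = np.random.default_rng(0)
slfo = rng.standard_normal(200)
wm = 0.8 * slfo + 0.3 * rng.standard_normal(200)
wm_pure = orthogonalize(wm, slfo)
print(abs(slfo @ wm_pure) < 1e-9)  # True: residual is orthogonal to the sLFO
```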

That would be nice, of course! I am working with the definition given by XCP-D, which clearly is “6 motion parameters extended to 24, no wm/csf/gs”, as you describe it.

EDIT: I just saw that you added the option. Thanks for the quick fix!

Yes, right! AFAIK, the resulting file with 24 motion parameters would have to be passed to the --confoundfile argument of rapidtide with the --noconfoundderiv flag set. Otherwise, rapidtide might calculate derivatives of the 24 parameters.

FWIW, in v3.0.3, which I’ll release tonight, if you’ve specified a brain mask, GM, WM, or CSF mask, rapidtide saves the time courses in those regions before and after sLFO filtering. Unsurprisingly, prior to filtering, all four time courses look VERY similar (+/- some small delays). Afterwards, they are much more distinct. So if you want a “pure” WM or CSF regressor, you can get it from the XXX_desc-regionalprefilt_timeseries and XXX_desc-regionalpostfilt_timeseries files.

Note that neither the pre nor the post time courses have been filtered to the LFO band here - the “after” time courses have had the systemic LFO removed from the unfiltered input data, so there are high frequencies in there, but I’m trying to leave the signal unmodified except for sLFO removal.


I didn’t proceed over the last few days (we had a long weekend in Germany). Thank you for so swiftly implementing this! I will then wait for your release and use it in my processing stream.

If it runs smoothly, I’ll post my final code calls for rapidtide and XCP-D here so we can close the topic.