Question about percent signal change analysis?

What kind of preprocessing should I do when I need to do a time-continuous percent signal change analysis?

I’m currently processing 20 minutes of fMRI data: the first four minutes are resting state, followed by eight minutes of neuromodulation and then eight minutes of rest. I’m interested in the dynamic percent signal change over minutes 4–20.

Currently I just take the preprocessed data from fMRIPrep and calculate percent signal change from it, but I noticed that the signal itself drifts. If I apply temporal filtering, the drift is removed, but so are the overall magnitude of the data and the slow signal change produced by the continuous neuromodulation. If I don’t filter, there’s no way to show whether an overall increase or decrease of the signal comes from the neuromodulation or from the drift.

A general linear model may be useful, but there’s no way to see the signal change at each point in time with this approach.

I’m a little confused right now, so I’m asking the experts to help me out.

I am not sure about fmriprep options, but in AFNI’s afni_proc.py, you can include a “scale” block, which transforms the BOLD signal (which only has arbitrary/uninterpretable units) to that of local “BOLD percent signal change”. NB: there are different forms of scaling typically used/available in different software, so be sure to check the details of how you would do the scaling. Local/voxelwise scaling makes the most sense to me, personally (volume-based ones would seem to have difficulty with the inhomogeneity of BOLD signal baselines across the brain, for example). Here is more discussion/details on this, and why it matters:
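To make the local/voxelwise idea concrete, here is a rough sketch of that kind of scaling in Python with nibabel. The filename is just a placeholder, and this omits the brain masking and value capping that AFNI’s scale block also applies; it is an illustration of the principle, not AFNI’s exact implementation:

```python
import numpy as np
import nibabel as nib

# Local/voxelwise scaling: express each voxel's time series as a
# percentage of that voxel's own temporal mean.
img = nib.load("sub-01_task-neuromod_bold_preproc.nii.gz")  # placeholder name
data = img.get_fdata()                       # shape: (x, y, z, time)

voxel_mean = data.mean(axis=3, keepdims=True)
voxel_mean[voxel_mean == 0] = np.nan         # avoid dividing by zero outside the brain

scaled = data / voxel_mean * 100.0           # each voxel now fluctuates around 100
# A value of 101 at some timepoint means a 1% increase relative to that
# voxel's own mean level.

nib.save(nib.Nifti1Image(scaled, img.affine), "bold_scaled.nii.gz")
```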

–pt

If I don’t filter, there’s no way to show whether an overall increase or decrease of the signal comes from the neuromodulation or from the drift.

I am no expert, so I am just throwing my questions into the mix here; I am mainly curious to expand my knowledge. Take my words with a grain of salt.

I have two main questions about this. First, without filtering, how does one even validate that the observed effect is due to the manipulation (i.e., the neuromodulation)? Scanner drift is a known phenomenon, and as far as I know it is generally accepted that it should be filtered out. I can’t see a reason not to filter the data unless you are primarily interested in the intricacies of the BOLD signal itself (which doesn’t seem to be the case here). The fact that the effect cannot be shown after filtering suggests that there is no significant effect. One way to sanity-check this would be to look at behavioural data, if available (though even then this doesn’t confirm that the effect is valid in terms of fMRI).

Second, although I am not that well-versed in neuromodulation, I am not sure a 20-minute scan has enough power to show such an effect, given that the neuromodulation/neurofeedback studies I have seen require many training and testing runs beforehand.

fMRIPrep only performs resampling. It does not rescale or detrend.

Thank you very much for your reply!
What I want to compute is the percent change in BOLD during neuromodulation relative to the pre-stimulation baseline, i.e. (BOLD − baseline) / baseline * 100. But after detrending, the mean of the BOLD signal is 0, meaning the baseline level becomes 0. The percent change then comes out as a huge value (e.g. 1000%) instead of the expected 0–2% BOLD change.
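To put made-up numbers on that (just to illustrate the arithmetic, not real data):

```python
import numpy as np

# Toy example: a raw baseline around 1000 a.u. and a ~1% increase afterwards.
raw = np.array([1000., 1002., 998., 1000., 1010., 1012., 1008., 1010.])

baseline = raw[:4].mean()                        # ~1000
psc = (raw - baseline) / baseline * 100          # roughly 0-1%, a plausible BOLD range

demeaned = raw - raw.mean()                      # demeaning/detrending shifts the mean to 0
baseline_d = demeaned[:4].mean()                 # ~ -5: tiny, so dividing by it blows up
psc_d = (demeaned - baseline_d) / baseline_d * 100   # huge, meaningless percentages
```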

So I’m confused about whether or not I should detrend.

You could fit a GLM with an intercept and a high-pass filter (Legendre polynomials or a cosine basis are popular; a linear trend would be the first order polynomial). The residuals will have these components removed, but the coefficient of the intercept will just be the mean. You can re-add that to your BOLD series without re-adding the other trends.
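A minimal sketch of that approach in Python/numpy, assuming `bold` is a (time × voxels) array; the function name, polynomial order, and the choice of Legendre polynomials (rather than a cosine basis) are illustrative, not prescriptive:

```python
import numpy as np

def detrend_keep_mean(bold, poly_order=3):
    """Fit a GLM with an intercept plus Legendre polynomial drift terms,
    remove the drift, but add the intercept (mean level) back in.
    bold: array of shape (n_timepoints, n_voxels)."""
    n_tp = bold.shape[0]
    t = np.linspace(-1.0, 1.0, n_tp)
    # Column 0 is the intercept (Legendre P0 == 1); the rest are drift terms.
    X = np.column_stack(
        [np.polynomial.legendre.legval(t, np.eye(poly_order + 1)[k])
         for k in range(poly_order + 1)]
    )
    beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
    residuals = bold - X @ beta
    # Re-add only the intercept component, not the drift terms.
    return residuals + np.outer(X[:, 0], beta[0])

# Illustrative usage: percent signal change relative to the pre-stimulation
# baseline (first 4 minutes), with `tr` the repetition time in seconds.
# cleaned = detrend_keep_mean(bold)
# n_baseline = int(4 * 60 / tr)
# baseline = cleaned[:n_baseline].mean(axis=0)
# psc = (cleaned - baseline) / baseline * 100
```

Because the mean level is restored after the drift regressors are removed, the baseline stays near its original magnitude and the percent signal change lands back in the expected 0–2% range.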

Thank you so much, I’m going to try it