Con.nii images have different value ranges across individuals using SPM GLM analysis

Hi all,

I have done a 1st level analysis of some fMRI data using SPM12.

I have a contrast map for one condition per subject. I have noticed that some subjects’ con map values range from about -10 to 10, while for others the range is -166 to 199.

Moreover, when I do a 2nd level one-sample t-test, nothing survives an uncorrected p=0.001 threshold with k=0 cluster extent.

Is that normal? The same regressors have been used in all cases.

Ahoi hoi @makaros622,

thank you very much for this interesting question.

Would it be possible to provide more information regarding the paradigm (rough type, length, runs, etc.), model run (regressors, HRF, covariates, auto-correlation, etc.) and number of participants/details about their respective maps (number, variability, etc.)? With that the community might be able to help out better.

Cheers, Peer

Hi Peer.

Of course. I am analyzing some task-based fMRI neonatal data. During the acquisition, we had 4 types of auditory stimuli presented in a block design. The duration of each condition is 8 sec for a total of ~7min acquisition. TR was 0.7 sec and n=33 subjects.

The preprocessing includes realignment, co-registration, normalization to a neonatal template, and 6 mm smoothing. GLM modeling includes a 256 s high-pass filter and splitting of runs if motion-corrupted frames exist (defined as FD > 1 mm or DVARS > 30%). The 6 motion parameters are also included in the design matrix to account for motion. In summary, I have 4 task regressors, 6 motion regressors, and the constant term in the design matrix.
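For reference, the motion-corrupted frames are flagged using a Power-style framewise displacement computed from the realignment parameters, roughly like the sketch below (a simplified illustration, not the exact pipeline code; the 50 mm rotation radius is the adult convention and may be adapted for neonatal head size, and the motion values here are made up):

```python
def framewise_displacement(params, radius=50.0):
    """Power-style FD from realignment parameters.
    params: one [tx, ty, tz, rx, ry, rz] list per volume
    (translations in mm, rotations in radians).
    Rotations are converted to arc length on a sphere of the given radius."""
    fd = [0.0]  # the first volume has no preceding frame
    for prev, cur in zip(params, params[1:]):
        d = [abs(c - p) for c, p in zip(cur, prev)]
        fd.append(sum(d[:3]) + radius * sum(d[3:]))
    return fd

# Hypothetical realignment parameters for three volumes
motion = [
    [0.0, 0.0, 0.0, 0.00, 0.0, 0.0],
    [0.1, 0.0, 0.0, 0.00, 0.0, 0.0],
    [0.1, 0.5, 0.0, 0.01, 0.0, 0.0],
]
print(framewise_displacement(motion))  # [0.0, 0.1, 1.0]
```

Frames whose FD exceeds the 1 mm threshold would then mark where a run gets split.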

Dear @makaros622

This could potentially explain what you see. When you split your design into runs, you need to make sure that the contrast you specify accounts for that.

An example: if you want to run a contrast comparing condition 1 < condition 2, you specify -1 1. But if you split your design into multiple runs (say, two), you need to account for this, so the contrast becomes:
-0.5 0.5 -0.5 0.5

If you don’t do this, you will get different con map ranges depending on the number of runs, and the second level will not work properly anymore (you will basically bias the statistics and overweight the subjects with more runs).
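To illustrate with made-up numbers, here is a minimal pure-Python sketch (hypothetical beta values, not real data) showing why an unscaled contrast doubles the estimate when an identical design is split into two runs:

```python
# Hypothetical beta estimates for two conditions, identical in both runs.
# Design columns: [cond1_run1, cond2_run1, cond1_run2, cond2_run2]
betas = [2.0, 5.0, 2.0, 5.0]

def contrast_value(weights, betas):
    """Contrast estimate = weighted sum of the beta estimates."""
    return sum(w * b for w, b in zip(weights, betas))

single_run = contrast_value([-1, 1], betas[:2])             # one-run reference
unscaled   = contrast_value([-1, 1, -1, 1], betas)          # forgets to rescale
scaled     = contrast_value([-0.5, 0.5, -0.5, 0.5], betas)  # correctly rescaled

print(single_run, unscaled, scaled)  # 3.0 6.0 3.0
```

The unscaled version simply sums the run-wise effects, so a subject split into more runs ends up with a proportionally larger con value.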

Kind regards
Steffen


Ahoi hoi @makaros622,

I’m very sorry for the late reply and thx @stebo85 for following up on this!

You could verify @stebo85’s suggestion by checking whether a certain number of runs (or splits thereof) is related to a certain range of values.

Another option would be to check things related to the paradigm, specifically the design/its efficiency and aspects like BOLD linearity (i.e., model assumptions). Auditory stimuli can introduce additional problems, e.g., individual hearing thresholds in the scanner/during the acquisition (in both continuous and sparse acquisition protocols). IIRC, the suggestion for auditory stimuli re BOLD linearity, etc. was a duration of 4-8 seconds, so that should work in your paradigm. However, 8 seconds could already be “too long” and, in combination with the utilized permutation scheme/counterbalancing of stimulus categories, might lead to saturation/habituation effects. All of that could be summarized under inter-participant variability.

As your TR is very fast: was this a multiband data acquisition protocol and/or a non-whole-brain FoV? If so, that could also introduce some effects. For example, the way autocorrelation is modeled in the GLM matters for such fast acquisitions (e.g., SPM’s FAST option is usually recommended over the default AR(1) for sub-second TRs).

Sorry for not providing more specific pointers/ways to investigate this further. Overall, I would suggest evaluating it on a per-participant basis and checking whether there’s a certain underlying pattern (as with the runs/splits thereof).
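One way to look for such a pattern is a quick per-participant summary, e.g. grouping each subject’s peak absolute con value by the number of runs the design was split into. A sketch with made-up numbers (subject IDs, ranges, and run counts below are placeholders, not your data):

```python
from collections import defaultdict

# Hypothetical per-subject summaries: (number of runs, min con value, max con value),
# e.g. taken from each subject's con.nii with your tool of choice.
subjects = {
    "sub-01": (1, -10.2, 9.8),
    "sub-02": (2, -160.5, 190.1),
    "sub-03": (1, -8.7, 11.3),
    "sub-04": (2, -166.0, 199.0),
}

# Group the peak absolute values by run count to see if they cluster.
by_runs = defaultdict(list)
for sub, (n_runs, vmin, vmax) in subjects.items():
    by_runs[n_runs].append(max(abs(vmin), abs(vmax)))

for n_runs in sorted(by_runs):
    vals = by_runs[n_runs]
    print(f"{n_runs} run(s): mean peak |value| = {sum(vals) / len(vals)}")
```

If the large-range subjects are exactly the split ones, that would point to the contrast scaling issue; if not, the paradigm/acquisition factors above become more likely.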

HTH, cheers, Peer

Thank you for the excellent suggestion.

However, I use matlabbatch{step_count}.spm.stats.con.consess{c}.tcon.sessrep = 'replsc'; when building the contrasts, which replicates and scales them across sessions.

Anything else that I could verify?