Calculating statistics using between-subject design classification in TDT

Hi there,

I am running a decoding analysis on my fMRI data, consisting of one run and 69 subjects (35 subjects in group A and 34 subjects in group B). Because I have only one run with few trials in it (12 trials of each type), I decided to try the between-subject design.

The condition from my 1st-level models that I am mainly interested in is US>noUS; however, I have also modeled US and noUS separately. Ideally, I would like to compare the accuracies of the US>noUS classification between groups A and B.

So far, I have tried decoding US vs. noUS in each group separately (using decoding_template_between_subject.m), which gave me the mean accuracy for each group (82% US-decoding accuracy in group A, 79% in group B). Now I was thinking of repeating the decoding ~1000 times with random assignment of subjects to two groups (which would be permutation testing, if I'm correct?). For each repetition I could calculate the % difference between the groups and compare my main result (3%) with the resulting distribution.
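Just to make the last step concrete, this is roughly what I mean by comparing against the distribution (a sketch in MATLAB; perm_diff stands for the vector of accuracy differences collected over the ~1000 repetitions):

```matlab
% perm_diff: vector of group accuracy differences from the random regroupings
observed_diff = 82 - 79;  % observed difference between groups A and B, in %
% two-sided permutation p-value: fraction of permuted differences
% at least as extreme as the observed one
p = mean(abs(perm_diff) >= abs(observed_diff));
```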

Having described my design and my ideas, I now have some questions:

  1. If the “permutation” idea sounds logical, is it implemented somewhere in the template scripts? I have to admit that I am a bit lost in the templates and I am not sure which of them would be suitable (maybe instead of decoding_template_between_subject.m I should use make_design_permutation.m?).
  2. Would it be possible to compare the decoding accuracies of the US>noUS condition between groups? I mean, can I provide one beta file per subject (the US>noUS contrast) instead of two (US and noUS)?
  3. Another thing that I don’t really get from the decoding_template_between_subject.m output: why did I get only one accuracy-minus-chance value if I had two conditions? What does this value refer to exactly?

Sorry for bothering you with so many details, I just thought it would be easier for you to follow this way. I will greatly appreciate any comments!

Cheers, Ania

Hi Ania,

It does make sense! Assigning subjects randomly to either group is exactly the way to go. Of course, there is the assumption of exchangeability: if, for example, you intentionally balanced other factors between groups, then you can only exchange subjects within levels of that balancing factor. For example, if you balanced gender, then you should only swap within gender. It can get more complicated for continuous variables. Unfortunately, this approach is not implemented in TDT, but I think you would just need to get a long list of all subject file names, permute the list using randperm, and then split the new variable in two. It may also make sense to use a different filename start for each permutation (for example by setting cfg.results.filestart = sprintf('perm%04i',i_perm);, where i_perm is the current permutation).
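Something along these lines (a rough sketch; the variable names and group sizes are placeholders, and the decoding call itself is omitted):

```matlab
% Pool all subject file names, then randomly re-split them per permutation
all_files = [files_group_A; files_group_B];  % cell array of all 69 file names
n_A = 35;                                    % size of the first group
n_perm = 1000;
for i_perm = 1:n_perm
    shuffled = all_files(randperm(numel(all_files)));
    perm_group_A = shuffled(1:n_A);
    perm_group_B = shuffled(n_A+1:end);
    % distinct output file names for each permutation
    cfg.results.filestart = sprintf('perm%04i', i_perm);
    % ... run the between-subject decoding on perm_group_A and perm_group_B ...
end
```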

Yes, that would be possible. You can just pass con_ images instead of betas, but you need to assign them manually. Or you directly model the contrast of both as one beta, which would give the same result.
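Manual assignment would look roughly like this (a sketch with placeholder paths; cfg.files.name, cfg.files.label, and cfg.files.chunk are the fields TDT reads):

```matlab
% Sketch of manual file assignment in TDT (paths and coding are placeholders)
cfg = decoding_defaults;
cfg.files.name  = {'/data/sub01/con_0001.nii'; '/data/sub02/con_0001.nii'};  % one con_ image per subject
cfg.files.label = [1; -1];   % class labels for the two conditions to discriminate
cfg.files.chunk = [1; 2];    % cross-validation chunks, e.g. one per subject
```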

Accuracy is a contrast between two conditions. The bigger the difference between both, the higher the accuracy. It’s a way of telling how discriminable they are. It’s in that respect (but only in that respect) a tiny little bit like an F-test in SPM*.

Hope that helps!

*with the difference that it’s decoding instead of encoding, multivariate instead of univariate, and not using the assumptions of a GLM but of an SVM. So, a ton of differences but the non-directionality aspect is the same.

Hi Martin, thanks a lot!

As there are two variables, condition (within-subject) and group (between-subject), I am not sure what the design should look like.

I guess that, to answer the question of whether the accuracy of the US>noUS classification differs between groups, it would be more elegant to create one design instead of two separate ones (as I previously did)? Also, shuffling betas and randomly assigning them to groups seems to require that the subjects from both groups are set up in one design. Is decoding_template_between_group.m the proper script in this case, or could the decoding_template_between_subject version somehow be employed?

If I understand correctly, you suggested that a single beta modeled as the US>noUS contrast could be used as input. I already tried that, but as there was only one file per subject, the decoding failed: it needs something to compare against (because, as you said, accuracy is the difference between conditions, and US>noUS is only one condition). This suggests that providing US and noUS as separate conditions is the better idea. And since the accuracy is their contrast, it would answer my question.

Assuming we stick to the “one design” idea: if both groups and both stimuli were included in the design, I would probably need to set four labels: A_US, A_noUS, B_US, B_noUS? What values should be set for such labels? 1, -1, 2, -2? Would such coding result in the expected output:
(A_US - A_noUS) - (B_US - B_noUS)?

I am glad my idea of doing statistics is correct and I will try to implement it, but could you give me a hint on the “standard” ways of calculating the effects? I mean, which of the TDT scripts could be used in my case?

Once again thanks for your effort, I greatly appreciate it!