Factorial design with TDT

Dear all,
I am trying to use TDT to analyse data from a factorial design fMRI study.
Following this topic (https://neurostars.org/t/combining-beta-images-for-the-decoding-toolbox-tdt/2817), I am using this command

decoding_describe_data(cfg,{'A*', 'B*'},[-1, 1],regressor_names,beta_loc,xclass)

But in my design, I have a condition A, divided into AX and AY, with n trials each. I have another condition B, not divided, with 2n trials.
When I try to compare these, I get the warning for unbalanced training data, I guess because the first condition has 2n chunks and the second one has n (but with twice as many trials within).

My question is: is it okay to just specify the option cfg.design.unbalanced_data = 'ok', or is it better to modify my SPM model and split the B condition into two conditions (with n trials each), so that I have 2n chunks for each?

Thanks for your help,
Fabien.

Hi Fabien,

I would not set unbalanced_data = 'ok', but you could give it a try. In that case, you would have to use AUC or AUC_minus_chance as a results measure. My suggestion would be to either re-estimate the model with a single A regressor, or to create con-images for condition A for each run and use decoding_template_nobetas.m.

Reason: the beta estimates in condition A are more variable than betas in condition B, even in the absence of an effect.
See this paper (text around Figure 7) and this paper (text around Figures 5 and 7). AUC seems to be less affected by, or even invariant to, imbalances, and also seems not to be sensitive to variance/covariance-based classification. However, the estimates would be noisier, so re-running the first-level model / computing con-images would probably be the best course of action and would make your analyses more flexible if you need to explore more options.
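
If you go for the con-image route, here is a minimal sketch of how the files could be handed to the toolbox, assuming per-run con-images for A and B already exist. The paths, file names, number of runs and label values are placeholders, and I am writing the cfg fields from memory (following the logic of decoding_template_nobetas.m rather than copying it):

% point TDT directly at per-run con-images instead of betas
cfg = decoding_defaults;
cfg.analysis    = 'searchlight';
cfg.results.dir = '/path/to/results';          % placeholder
cfg.files.mask  = '/path/to/mask.nii';         % placeholder

n_run = 8;                                     % placeholder: number of runs
cfg.files.name = {};
for i_run = 1:n_run
    % one con-image for A and one for B per run (file names are assumptions)
    cfg.files.name{end+1} = sprintf('/path/to/con_A_run%02d.nii', i_run);
    cfg.files.name{end+1} = sprintf('/path/to/con_B_run%02d.nii', i_run);
end
cfg.files.label = repmat([-1; 1], n_run, 1);   % A = -1, B = 1
cfg.files.chunk = kron((1:n_run)', [1; 1]);    % leave-one-run-out chunks

cfg.results.output = {'AUC_minus_chance'};
cfg.design = make_design_cv(cfg);
results = decoding(cfg);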

Best,
Martin

Hello,
Thanks for your quick answer.
Regarding AUC_minus_chance, is it a default output or do I have to specify an option? (I don't have access to the toolbox right now.)
And I suppose fusing the two A conditions amounts to the same thing as splitting the B condition, for example? Because I also have other conditions, crossed with X and Y (AX, AY, CX, CY, …), and I would like to compare A vs C but also X vs Y, and that would make a lot of first-level models if I had to fuse conditions for each comparison (while it takes only one model if I split the unsplit conditions like B).
And if, for example, I have to compare A and C vs B (AX, AY, CX, CY), then I will have no choice but to compute all of them as one condition, or to use AUC_minus_chance?
Thanks again for your help and your toolbox,
Fabien.

just set cfg.results.output = 'AUC_minus_chance';
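
For context, a minimal sketch of where that line sits in a standard setup, mirroring the describe_data call from your first post (beta_loc is your first-level directory; treat the exact layout as a sketch rather than a template):

cfg = decoding_defaults;
cfg.analysis = 'searchlight';
cfg.results.output = 'AUC_minus_chance';   % instead of the default accuracy measure
% only if you keep the design unbalanced:
% cfg.design.unbalanced_data = 'ok';

regressor_names = design_from_spm(beta_loc);
cfg = decoding_describe_data(cfg, {'A*', 'B*'}, [-1 1], regressor_names, beta_loc);
cfg.design = make_design_cv(cfg);
results = decoding(cfg);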

It’s likely going to be very similar, yes. Just make sure that conditions X and Y are equally present in A, B, and C. This all assumes anyway that there is no interaction between the two factors (A vs. B vs. C) x (X vs. Y).

Now, based on your description it sounds as if B does not have X and Y. I don’t see an obvious way to split it, but if you do it kind of randomly, then I would make sure the estimability is comparable to AX, AY, CX, and CY (check the SPM estimability matrix).

Otherwise, running many first-level models may be a good idea (I tend to do this when designs are as complicated as the one you describe).

But try out AUC_minus_chance first and look at the searchlight maps to see whether there are wide regions with AUC strongly below or above chance. If it looks weird or doesn’t yield much, then balance everything and use accuracy.

Hope that helps!
Martin

Now, based on your description it sounds as if B does not have X and Y. I don’t see an obvious way to split it, but if you do it kind of randomly

Actually, I may have given you a bad description of my paradigm.
In a run I have, let's say, 7 conditions: six of them come from three base conditions split on a parameter X/Y (AX, AY, CX, CY, DX, DY), plus one condition (B) which cannot be split (there is no possible BX/BY distinction). The latter is therefore presented twice to the subject, to equalize the number of times they see the A, C and D conditions (AX, AY, …) and the B condition (B, B): for example, 1 * (AX, AY, CX, CY, DX, DY) + 2 * B. At each presentation, a condition contains the same number of trials; only the number of presentations varies.
Then I can split the B condition randomly (in fact I split it 1-out-of-2; since trials are randomly distributed within each presentation of a condition, this should effectively be random).
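
To be concrete, this is the kind of split I mean in the SPM batch (a rough sketch; onsets_B, dur_B, i_run and k come from my own scripts and are just placeholders):

% split the B onsets 1-out-of-2 into two regressors B1 and B2
onsets_B1 = onsets_B(1:2:end);   % every other trial, starting at the first
onsets_B2 = onsets_B(2:2:end);   % the remaining trials

matlabbatch{1}.spm.stats.fmri_spec.sess(i_run).cond(k).name     = 'B1';
matlabbatch{1}.spm.stats.fmri_spec.sess(i_run).cond(k).onset    = onsets_B1;
matlabbatch{1}.spm.stats.fmri_spec.sess(i_run).cond(k).duration = dur_B;

matlabbatch{1}.spm.stats.fmri_spec.sess(i_run).cond(k+1).name     = 'B2';
matlabbatch{1}.spm.stats.fmri_spec.sess(i_run).cond(k+1).onset    = onsets_B2;
matlabbatch{1}.spm.stats.fmri_spec.sess(i_run).cond(k+1).duration = dur_B;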

Otherwise, running many first-level models may be a good idea (I tend to do this when designs are as complicated as the one you describe).

If you think it is the best, I probably should follow this way.

Thanks for your answers,
Fabien.

Hello,
Sorry to bother you again,

  • I was wondering, should I use normalized or non-normalized images to perform the MVPA analysis?
    Following this discussion: Basic questions about the outputs (TDT), I understand I should not, and should only normalize after the MVPA analysis, for the group statistics? The same for smoothing? Or can I normalize before the MVPA analysis (probably, but does that make the analysis less sensitive)?
  • Still on that discussion, it seems that there is no statistic with which to interpret the accuracy/AUC_minus_chance maps? So we cannot know what threshold to use or how to interpret the values?
  • Also, if the data are balanced, should I use the AUC or the accuracy_minus_chance result, or whichever I want?
  • And last, is the intensity of the signal taken into account by TDT, or only the activation pattern? I mean, if an area has a higher activation for condition A than for B, but with the same pattern, will I see something in MVPA, or are the signal intensities ‘normalized’?

Thanks a lot,
Fabien.

Hi Fabien,

Spatial normalization: you can normalize your data in advance, and I don’t see strong reasons why you shouldn’t if you plan on doing it anyway. One reason to normalize afterwards is to speed up the analyses: normalization typically comes with more voxels - sometimes many more - so searchlight analyses take longer to run.

Smoothing: Some people have reported improvements in decoding accuracy when smoothing data before decoding analyses. It depends on the spatial scale of your pattern (fine-scale vs. coarse-scale). I would recommend smoothing after searchlight analyses. It makes the discrete accuracy/AUC results more continuous (better for assumptions of Gaussian random fields), should increase the SNR, and compensates for spatial inaccuracies in the normalization procedure. If you have quite specifically localized results, then it may reduce effects. Also, remember that searchlight analyses act as a form of (noisy) smoothing.
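
If you smooth after the searchlight, something along these lines would do (the output file name is an assumption about how your results were written, results_dir is a placeholder, and the 6 mm FWHM is just an example):

% smooth the searchlight result map with SPM before group statistics
res_img = fullfile(results_dir, 'res_AUC_minus_chance.nii');     % assumed output file name
out_img = fullfile(results_dir, 's6_res_AUC_minus_chance.nii');
spm_smooth(res_img, out_img, [6 6 6]);                           % 6 mm isotropic FWHM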

For statistical analyses at the decoding-level, you need to run a permutation test (unless you have a true out-of-sample prediction and not cross-validation, in which case a binomial test on the test set would be correct). At the group-level, there is an ongoing debate. There, you can run a t-test or a sign permutation test. These are not really testing the random effects hypothesis though, as has been shown here (preprint here). That is why these authors suggest using prevalence tests, which are implemented in our toolbox.
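
As a rough sketch of how a decoding-level permutation could be organized with the toolbox's permutation helpers (I am writing the calls and field names from memory, so treat this as pseudocode and check the toolbox templates):

% re-run the decoding on permuted designs to build a null distribution
[designs, all_perms] = make_design_permutation(cfg);             % permuted label assignments
for i_perm = 1:numel(designs)
    cfg_perm = cfg;
    cfg_perm.design = designs{i_perm};
    cfg_perm.results.filestart = sprintf('perm%04d', i_perm);    % keep outputs apart (field name from memory)
    decoding(cfg_perm);
end
% collect the permuted results into 'reference', then compare the unpermuted result:
% p = stats_permutation(n_correct, reference, tail);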

You can use either, but I think you are on the safe side with AUC_minus_chance when it comes to the interpretation.

I think I recently answered someone else’s question that was quite similar, see if you find it in the forum. Contrary to common belief, it’s not possible to remove the mean activation unless all voxels respond exactly the same. That means, if you remove the mean effect of each condition and can no longer decode, then the mean effect played an important role. If you can still decode, you cannot say the mean didn’t play a role, because by subtracting the mean you just spread the estimate of voxels with more signal to voxels with less signal. See towards the end of our paper that I referenced in an earlier reply here for an explanation (preprint here).
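
To make the spreading argument concrete, a toy example (arbitrary numbers):

% two conditions that differ only in how strongly they scale a common
% response profile w (voxels have different gains)
w     = [1 2 3 4];
pat_A = 1.0 * w;                    % condition A
pat_B = 1.5 * w;                    % condition B: same profile, stronger response

% subtracting each pattern's mean across voxels does not equalize them
demeaned_A = pat_A - mean(pat_A);   % [-1.5  -0.5   0.5   1.5 ]
demeaned_B = pat_B - mean(pat_B);   % [-2.25 -0.75  0.75  2.25]
% the patterns still differ voxel-wise, so they remain decodable, even though
% the underlying difference was just a uniform scaling of the same profile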

Hope that helps!
Martin


Thanks for your answer and sorry for my late one, it helps!

  • After reading your paper, I understand the different options for removing the ‘univariate signal’, but I am wondering what the default option for this removal is? And is there an option I can specify to change the way it is done?

Thanks a lot for your help,
Fabien

I come again with a new question!
When I do my permutations with [designs, all_perms] = make_design_permutation(cfg) and then p = stats_permutation(n_correct,reference,tail), I have 2 issues:

  • First, after making the permutation designs, I don’t get any outputs such as n_correct or reference. It is fairly easy to create them with a small loop from the .mat files I get for each permutation, but I wonder whether I am supposed to get them directly?
  • Second, after stats_permutation, I get my p-values but in .mat format, and I would like to transform them into a .nii file, to be able to threshold at the voxels that decode significantly better than chance, and not at all voxels that decode better than chance (I know I can set a threshold manually, for example, but I’d rather get a statistical threshold and use the one from the permutations).
    Maybe the trick is in the decoding_write_results script? (A rough sketch of the conversion I have in mind is below.)
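
This is the kind of conversion I mean, using SPM's image I/O (the mask path and results_dir are placeholders, and results.mask_index is only my guess at the relevant field in the results .mat):

% write the permutation p-values into a .nii with the geometry of the decoding mask
mask_hdr = spm_vol('/path/to/decoding_mask.nii');   % mask used for the decoding (placeholder)
vol      = nan(mask_hdr.dim);
vol(results.mask_index) = p;                        % p from stats_permutation; field name is a guess

out_hdr       = mask_hdr;
out_hdr.fname = fullfile(results_dir, 'perm_p_map.nii');
out_hdr.dt    = [spm_type('float32') 0];
spm_write_vol(out_hdr, vol);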

Thanks for your help,
Fabien.

In my view, there is unfortunately no way of removing the “univariate signal” and knowing that it was really removed, unless the effect disappears completely, at which point you only know that a univariate response was likely a strong driver of your results. If you want to try this, then it really depends on what you think the univariate signal is. I would think that it is the common unidirectional (i.e. all positive or all negative) response that just scales differently across conditions (because some voxels are more responsive than others).

Martin