Problem with labels

Hi Martin (and all TDT experts),

I recently started using TDT and am basically moving my data over from CoSMoMVPA. Everything worked until I hit this error:

Error using ==
Matrix dimensions must agree.

Error in decoding_transform_results (line 98)
output = 100 * (1/size(predicted_labels,1)) * sum(predicted_labels == true_labels); % calculate mean (faster than Matlab function)

Error in decoding_generate_output (line 35)
output = decoding_transform_results(curr_output,decoding_out,chancelevel,cfg,data);

Error in decoding (line 568)
results = decoding_generate_output(cfg,results,decoding_out,i_decoding,curr_decoding,current_data);

Error in my_script
[results, cfg] = decoding(cfg);

I’m running an ROI analysis with the LDA classifier and “make_design_cv”. My data were preprocessed in SPM, but since I want to use the labels in a different way, I manually defined the chunks, names and labels via “cfg.files”. Everything worked until I switched to a different set of beta files for a second analysis (these betas were created with CoSMoMVPA functions, following the same logic as before), and then I got the error above. The betas were created in exactly the same way and the struct is identical, so I don’t understand why the predicted labels and the true labels don’t match.
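For reference, this is roughly how I set up the files manually (just a sketch; the paths, labels and chunk values below are placeholders):

cfg = decoding_defaults;
cfg.analysis = 'roi';
cfg.files.mask  = {'path/to/roi_mask.nii'};          % placeholder ROI mask
cfg.files.name  = {'path/to/beta_0001.nii'; 'path/to/beta_0002.nii'};  % placeholder beta images
cfg.files.label = [1; -1];                           % labels defined manually
cfg.files.chunk = [1; 2];                            % chunks (e.g. runs) defined manually
cfg.decoding.software = 'lda';                       % the classifier I am using
cfg.design = make_design_cv(cfg);
[results, cfg] = decoding(cfg);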

Any ideas about this problem?

Thank you in advance!

Lénia

Hi Lénia,

Ouch, that sounds annoying! Could you first try another classifier, e.g. 'libsvm', to rule out an issue with our LDA implementation? I suspect that might be the culprit.
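Switching should be a one-liner ('libsvm' is the toolbox default anyway):

cfg.decoding.software = 'libsvm';   % instead of 'lda'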

If it works with libsvm, then please send me a separate email and I’ll point out how you can identify the problem.

Best,
Martin

Hi Martin,

I changed the classifier to libsvm and the decoding went fine(!). So what could the problem be with LDA?

I also sent you a separate e-mail, but I’m not sure I did it correctly. I’ve just started using the Neurostars forum, so apologies if you receive this message twice.

Thank you!

Ok, it looks like there were some data points without variance. For the chosen Ledoit-Wolf shrinkage operation that retains the variances of the original data (shrinkage = 'lw2'), this leads to a division by zero. Long story short, I have updated that shrinkage operator to catch this issue. For everyone else reading this: try to avoid all-zero data, since other tools might not catch it either.
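As a quick sanity check before decoding, something along these lines should flag the problem (a sketch; it assumes your betas are already loaded into a samples x voxels matrix called data, which is a hypothetical variable name):

v = var(data, 0, 1);          % per-voxel variance across samples
bad = find(v == 0);           % voxels with no variance (e.g. all-zero columns)
if ~isempty(bad)
    warning('%d zero-variance voxels found - consider masking them out.', numel(bad));
    data(:, bad) = [];        % remove them before passing the data on
end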

Thanks for spotting this, Lénia!
Martin