Correct procedure to decode brain activity in fMRI data

Hi all

I have an fMRI dataset of 20 patients who performed a task-based fMRI experiment consisting of 2 conditions.
I would like to get a voxel-wise weight map after fitting an SVM model on the data for decoding purposes.

I am not sure what the right approach is. I see two ways:

  1. Use the smoothed, preprocessed volumes and fit an SVM on them, using the experimental conditions of interest (e.g., condition 1 and condition 2) as labels.

  2. Use the smoothed, preprocessed volumes to fit a first-level GLM and obtain the response maps (beta maps) for the conditions of interest (e.g., condition 1 and condition 2). Then use these beta maps as inputs for the SVM fitting. This is roughly the approach recommended in the Nilearn documentation (sketched below).

In both cases, a brain mask covering the whole brain will be used given that I need a whole-brain voxel-wise map of weights.
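To make option 2 concrete, here is a minimal sketch of what I have in mind, assuming hypothetical file names (runs, events, mask) and two conditions "cond1"/"cond2"; the TR and design details would of course have to match the real data:

```python
from nilearn.glm.first_level import FirstLevelModel
from nilearn.decoding import Decoder

runs = ["run-1_bold.nii.gz", "run-2_bold.nii.gz"]    # hypothetical file names
events = ["run-1_events.tsv", "run-2_events.tsv"]    # hypothetical event files
conditions = ["cond1", "cond2"]
mask = "brain_mask.nii.gz"                           # whole-brain mask

# Option 2: first-level GLM per run, then SVM decoding on the beta maps.
# In practice you would have one map per condition per run (or trial),
# so there are enough samples for the decoder's internal cross-validation.
beta_maps, labels = [], []
for run_img, run_events in zip(runs, events):
    glm = FirstLevelModel(t_r=2.0, mask_img=mask).fit(run_img, events=run_events)
    for cond in conditions:
        # One effect-size (beta-like) map per condition and run.
        beta_maps.append(glm.compute_contrast(cond, output_type="effect_size"))
        labels.append(cond)

# Linear SVM on the condition maps, restricted to the whole-brain mask.
decoder = Decoder(estimator="svc", mask=mask, standardize=True)
decoder.fit(beta_maps, labels)

# Voxel-wise weight map (one image per class label).
decoder.coef_img_["cond1"].to_filename("svm_weights_cond1.nii.gz")
```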

I am looking forward to your input.

Hi @makaros622

I would recommend fitting a GLM to retrieve the response maps and then using these maps for decoding (basically the second option in your message).
This is indeed the way we recommend performing decoding in Nilearn (see the documentation here).
I think this example could also be helpful.

HTH!
Best,
Nicolas


Hi Nicolas.

Thanks for the answer and links.

In the tutorial, I see z_maps.append(glm.compute_contrast(condition_)). Does this mean that the beta map is z-scored and returned as output instead of the standard beta map?

Or is the beta map internally transformed into a t-map, which is then returned as a z-scored version of that t-map?

If I already have some beta_XXX, con_XXX and spmT_XXX maps from SPM, which of them should I use for the SVM fitting with Nilearn?
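For context, this is how I currently understand the output_type argument of compute_contrast; glm would be an already-fitted FirstLevelModel and "cond1" is just a placeholder contrast name:

```python
# output_type controls what kind of map compute_contrast returns.
z_map = glm.compute_contrast("cond1")                                 # default: z-score map
t_map = glm.compute_contrast("cond1", stat_type="t", output_type="stat")   # t-statistic map
beta_map = glm.compute_contrast("cond1", output_type="effect_size")  # effect size (beta/con-like) map
```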


To “second” @bthirion’s warnings: while you can calculate SVM weights from whole-brain analyses, they are unlikely to be stable or informative.

If you are interested in a question like “which brain regions can distinguish my conditions?”, I suggest starting from regions (e.g., a whole-brain parcellation, or anatomical/functional regions of interest tied to your hypotheses) and characterizing the signal within those regions, rather than starting from the whole-brain signal and then attempting to characterize its components.
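If it helps, here is a rough sketch of that region-wise strategy with Nilearn; the atlas choice and the beta_maps/labels variables (condition maps and labels from a first-level GLM) are only illustrative:

```python
import numpy as np
from nilearn.datasets import fetch_atlas_harvard_oxford
from nilearn.decoding import Decoder
from nilearn.image import math_img

atlas = fetch_atlas_harvard_oxford("cort-maxprob-thr25-2mm")

scores = {}
for idx, region in enumerate(atlas.labels[1:], start=1):  # skip the background label
    # Binary mask restricted to this region of the parcellation.
    region_mask = math_img(f"img == {idx}", img=atlas.maps)
    decoder = Decoder(estimator="svc", mask=region_mask, cv=5, scoring="roc_auc")
    decoder.fit(beta_maps, labels)
    # Mean cross-validated score for the region.
    scores[region] = np.mean(list(decoder.cv_scores_.values()))

# Regions with the highest cross-validated scores are the ones that
# best distinguish the two conditions.
```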

Hi,
I am trying to run the FREM classifier on single-trial data. I have about 10K beta images for the training set and 1K for the test set. I am submitting the job to a cluster, but it keeps failing with an out-of-memory error.
Is this setup feasible? Should I reduce the size of my training set?
Thank you.
Leyla
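In case it helps frame the question, here is a sketch of the memory-saving knobs I was thinking of trying before resubmitting the job (the file names, the 3 mm target resolution and the parameter values are just placeholders):

```python
import numpy as np
from nilearn.image import resample_img
from nilearn.decoding import FREMClassifier

# Resample the single-trial betas to a coarser grid to shrink them in memory.
coarse_betas = [resample_img(img, target_affine=np.eye(3) * 3.0)
                for img in training_betas]

frem = FREMClassifier(estimator="svc",
                      mask="brain_mask.nii.gz",
                      screening_percentile=10,  # keep only the top 10% of voxels
                      n_jobs=1)                 # avoid duplicating data across workers
frem.fit(coarse_betas, training_labels)
```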