Extract Betas from Each Trial at Single Voxel Level

Hi all, I am currently working on a project using representational similarity analysis (RSA) and need to extract the signal for each trial at the single-voxel level. I mainly use Python as my tool; can anyone suggest which module I could use for this purpose? I really appreciate it.

Hi,

Could you be more specific about what kind of signal you want to extract? Perhaps showing a bit of code to orient us would help too.

Best,
Steven

Thanks Steven. I want to extract beta weights (parameter estimates) for each single trial at the single-voxel level. Would that be possible?

You can fit a general linear model in Nilearn and extract the beta coefficient maps from the model. This example (Nilearn: Statistical Analysis for NeuroImaging in Python — Machine learning for NeuroImaging) and others might help.
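
As a minimal sketch of that idea (fmri_img, events, and the t_r value are placeholders for your own data):

from nilearn.glm.first_level import FirstLevelModel

# Hypothetical inputs: fmri_img is a 4D NIfTI image and events is a
# BIDS-style DataFrame with onset, duration, and trial_type columns.
glm = FirstLevelModel(t_r=2.0)  # set t_r to your acquisition's repetition time
glm = glm.fit(fmri_img, events=events)

# One beta (effect size) map per condition named in the design matrix;
# "condition_a" is a made-up condition label here.
beta_map = glm.compute_contrast("condition_a", output_type="effect_size")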

Thank you very much! I will take a look at this.

Hi Steven, I looked into the script and ran it. It seems like it is extracting the beta coefficients for the whole map? Do you think I can adapt this to extract the beta weights for EACH trial? I might have overlooked something, so I would appreciate it if you could give me a bit more information about this.

Thanks

You will have to be more specific about what you want these beta parameters to estimate, as well as the structure of the BOLD runs (that is, what defines a trial, how many trials per run, etc.).

I want to model each trial individually so that I can get a beta weight for each signal run (in that case I will have a 4D dataset for each subject to calculate the distance/correlation between the beta weights across voxels). For my functional runs, there are 45 trials for each condition.

What do you want modeled per trial? What should these beta weights measure? Are both conditions present per trial?

Sorry Steven, I had a typo in my last response. I meant that I want to model each trial so that I can get a beta weight for each single trial, like what NiBetaSeries is doing; see this (Beta series — NiBetaSeries 0.6.0 documentation). I have three runs for three conditions; it is a block design.

This tutorial is better for that: Nilearn: Statistical Analysis for NeuroImaging in Python — Machine learning for NeuroImaging

Hi Steven, I hope you are doing well. Just a quick follow-up and some further questions; I would really appreciate your help.

So I have tried and looked into the scripts you recommended, but I don't think I can get the information I need from them. I probably already know how to extract the data at the single-voxel level, but I am having trouble extracting the beta weights for each trial. For example, I have 45 trials for one condition; instead of getting one beta weight across all trials, as in a traditional analysis, I want to have 45 beta weights, one for each of those 45 trials. In other words, all trials are individually modeled.

I have already used FSL to run the first-level analysis, and have also done the postprocessing. In the first-level GLM output folder, do you think there is any file I can use to extract what I want? Here is the list:

example_func.nii.gz; filtered_func_data.nii.gz; mean_func.nii.gz; rendered_thresh_zstat1.nii.gz; thresh_zstat1.nii.gz;

Hi Junqiang,

I think you're on the right track by trying to get NiBetaSeries working, but I also want to note that there are multiple approaches to beta series modeling. The ones I'm most familiar with are Least Squares-All (LSA) and Least Squares-Separate (LSS). I could probably help you figure out how to perform each type with Nilearn if you can't get NiBetaSeries working, but I'm not as familiar with other approaches, like the finite BOLD response-separate (FS) approach, nor am I experienced with RSA, as I've mostly used beta series for task-based functional connectivity analyses in the past.

For background, LSA, LSS, and FS are described in Turner, Mumford, Poldrack, and Ashby (2012). The basic idea of LSA is to create a separate condition for each trial, which gets you one beta map per trial. LSS works by running a series of GLMs, one for each trial. Within a given trial's GLM, that one trial is moved to its own condition, while everything else about the design matrix is kept the same. That way, you get a beta map for that one trial from that particular GLM, and after running all of the GLMs you end up with a beta map for each trial. I believe NiBetaSeries supports both approaches, though it recommends (as do I) the LSS approach.
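
To make the LSA relabeling concrete, here's a toy sketch (the condition names and onsets are made up) of turning each trial into its own condition:

import pandas as pd

# Toy events table: two conditions, two trials each (hypothetical values)
events = pd.DataFrame({
    "onset": [0, 10, 20, 30],
    "duration": [5, 5, 5, 5],
    "trial_type": ["Left", "Right", "Left", "Right"],
})

# Number each trial within its condition so every trial is its own condition
events["trial_type"] = (
    events["trial_type"]
    + "_"
    + events.groupby("trial_type").cumcount().astype(str)
)
print(events["trial_type"].tolist())
# ['Left_0', 'Right_0', 'Left_1', 'Right_1']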

Best,
Taylor

Hi Taylor,

Thank you so much for providing me with such detailed and useful suggestions! Yes, I am trying to get NiBetaSeries working, but unfortunately I cannot successfully install it on my iMac via the terminal. I know about LSA and LSS and am happy to use Nilearn if these can be done there. I was actually checking the Nilearn examples to see whether I can adjust them for my purpose. Again, what I want to do is model every single trial and then extract the beta weights from each trial at each single voxel, so that for each participant I would have a matrix with columns representing voxels and rows representing trials, to calculate the correlations (i.e., similarity distances) between voxels across trials for one condition.

I think this example provided by Steven could be adapted: https://nilearn.github.io/auto_examples/02_decoding/plot_haxby_glm_decoding.html#sphx-glr-auto-examples-02-decoding-plot-haxby-glm-decoding-py

In that example, based on my understanding, they modeled each condition, got beta weights for the various conditions, and then concatenated those beta weights. I guess I could do the same thing: create an events file, label each trial as its own condition, and then model them with a first-level GLM to get the beta weights? Please let me know your thoughts and how I can do this with Nilearn.

Best,
Jacob

It looks like you're comfortable with an LSA model, in which case you can just adapt that FirstLevelModel example. Here's some code that I think should (1) create new condition labels in a BIDS events file that are LSA-compatible and (2) collect the associated trial-wise beta maps into a dictionary of 2D (trial by voxel) arrays.

You'd have to concatenate the arrays across conditions and cross-correlate them afterwards, I believe; there's a rough sketch of that step after the code.

import pandas as pd
from nilearn import masking


def lsa_transformer(df):
    """Labels trials for an LSA model."""
    conditions = df["trial_type"].unique()
    df["old_trial_type"] = df["trial_type"]
    condition_counter = {c: 0 for c in conditions}
    for i_trial, trial in df.iterrows():
        trial_condition = trial["old_trial_type"]
        condition_counter[trial_condition] += 1
        trial_name = f"{trial_condition}_{condition_counter[trial_condition]:03d}"
        df.loc[i_trial, "trial_type"] = trial_name

    return df


# This example events file is taken from OpenNeuro:
# https://openneuro.org/datasets/ds003813/versions/1.0.0
events_file = "sub-108TUSW011005_ses-1_task-imagery_acq-Fs2_run-01_events.tsv"
events_df = pd.read_table(events_file)
# Remove fixation "trials"
events_df = events_df.loc[events_df["trial_type"] != "Fixation"]
lsa_events_df = lsa_transformer(events_df)

# Compute and collect beta maps
# NOTE: The variable "glm" is a nilearn FirstLevelModel, as in the Haxby
# example, that has already been fitted to the fMRI data using the
# LSA-transformed events (lsa_events_df), so each trial has its own regressor.
condition_beta_maps = {cond: [] for cond in lsa_events_df["old_trial_type"].unique()}
trialwise_conditions = lsa_events_df["trial_type"].unique()
for condition in trialwise_conditions:
    beta_map = glm.compute_contrast(condition, output_type="effect_size")
    # Drop the trial number from the condition name to get the original name.
    condition_name = "_".join(condition.split("_")[:-1])
    condition_beta_maps[condition_name].append(beta_map)

# NOTE: The variable "mask" should be provided by the user.
condition_beta_arrs = {}
for condition, maps in condition_beta_maps.items():
    beta_arr = masking.apply_mask(maps, mask)
    condition_beta_arrs[condition] = beta_arr

# NOTE: The end result should be a dictionary of condition: (trial-by-voxel)
# numpy array pairs.
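
As a rough sketch of the concatenation and cross-correlation step mentioned above (assuming numpy, and that all conditions were masked with the same image):

import numpy as np

# Stack the per-condition (trial x voxel) arrays into one (trial x voxel) array
all_betas = np.vstack(list(condition_beta_arrs.values()))

# Trial-by-trial correlation matrix, one common RSA-style similarity measure
similarity_matrix = np.corrcoef(all_betas)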

Let me know if that helps.

I really appreciate your help, Taylor!! I will take a careful look and adjust it for my data. By the way, do you happen to have any script for doing LSS? I would also appreciate it if you could share it with me.

Something like the following should probably work for LSS (though I haven’t really tested it, of course). You’ll want to double-check both snippets I posted, and probably plot the design matrices as well to be safe.

"""LSS example."""
import pandas as pd
from nilearn import masking
from nilearn.glm.first_level import FirstLevelModel


def lss_transformer(df, row_number):
    """Label one trial for one LSS model."""
    df = df.copy()

    # Technically, all you need is for the requested trial to have a unique "trial_type" *within*
    # the dataframe, rather than across models.
    # However, we may want to have meaningful "trial_type"s (e.g., "Left_001") across models,
    # so that you could track individual trials across models.
    df["old_trial_type"] = df["trial_type"]

    # Determine which number trial it is *within the condition*.
    trial_condition = df.loc[row_number, "old_trial_type"]
    trial_type_series = df["old_trial_type"]
    trial_type_series = trial_type_series.loc[trial_type_series == trial_condition]
    trial_type_list = trial_type_series.index.tolist()
    trial_number = trial_type_list.index(row_number)
    trial_name = f"{trial_condition}_{trial_number:03d}"
    df.loc[row_number, "trial_type"] = trial_name
    return df, trial_name


# This example events file is taken from OpenNeuro:
# https://openneuro.org/datasets/ds003813/versions/1.0.0
events_file = "sub-108TUSW011005_ses-1_task-imagery_acq-Fs2_run-01_events.tsv"
events_df = pd.read_table(events_file)
# Remove fixation "trials"
events_df = events_df.loc[events_df["trial_type"] != "Fixation"]

condition_beta_maps = {cond: [] for cond in events_df["trial_type"].unique()}

for i_row, trial_row in events_df.iterrows():
    lss_events_df, trial_condition = lss_transformer(events_df, i_row)

    # NOTE: Initialize and fit your model here
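    # A minimal sketch of that step (assuming "fmri_img" is your 4D NIfTI and
    # "t_r" is your repetition time; use the same settings as your standard,
    # non-LSS model would, and note the model is re-fit on every iteration):
    glm = FirstLevelModel(t_r=t_r, mask_img=mask, minimize_memory=True)
    glm = glm.fit(fmri_img, events=lss_events_df)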

    # Compute and collect beta maps
    # NOTE: Unlike the LSA case, the GLM must be re-fit for every trial,
    # using lss_events_df, so the current trial has its own regressor.
    beta_map = glm.compute_contrast(trial_condition, output_type="effect_size")

    # Drop the trial number from the condition name to get the original name.
    condition_name = "_".join(trial_condition.split("_")[:-1])
    condition_beta_maps[condition_name].append(beta_map)

# NOTE: The variable "mask" should be provided by the user.
condition_beta_arrs = {}
for condition, maps in condition_beta_maps.items():
    beta_arr = masking.apply_mask(maps, mask)
    condition_beta_arrs[condition] = beta_arr

# The end result should be a dictionary of condition: (trial-by-voxel) numpy array pairs.

Hi Taylor, again, thank you so much! The LSA and LSS stuff makes sense to me. I am pretty new to Python and Nilearn, so I just want to double-check the following with you: before running the "for condition in trialwise_conditions" loop, I should first run "glm = FirstLevelModel(...)" and then "glm.fit(...)", as indicated in the "Decoding of a dataset after GLM fit for signal extraction" example?

For this part:

for condition, maps in condition_beta_maps.items():
    beta_arr = masking.apply_mask(maps, mask)
    condition_beta_arrs[condition] = beta_arr

The mask should be my ROI (i.e., the mask image I would pass to NiftiMasker, if I want to use the ROI approach)?

That’s right. You should probably use the GLM settings you would have used for a standard (non-LSS/LSA) model.

You can use an ROI or even a whole-brain mask, depending on your needs, but nilearn.masking.apply_mask should be applied to a NIfTI image rather than a masker object. If you're using a NiftiMasker, then you can use the masker's transform method (e.g., beta_arr = masker.transform(maps)).
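
For example, a minimal sketch of the masker route (the ROI filename is a placeholder):

from nilearn.input_data import NiftiMasker  # nilearn.maskers in newer versions

# Hypothetical ROI mask image; replace with your own file
masker = NiftiMasker(mask_img="my_roi_mask.nii.gz")
masker.fit()

# "maps" is a list of beta maps for one condition, as in the earlier snippets;
# the result is a (trials x voxels) array restricted to the ROI.
beta_arr = masker.transform(maps)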

I did not follow your last comment very well; would you mind explaining it a bit more? In my analysis, I am going to use an ROI approach instead of a whole-brain analysis. Where can I apply my ROI?

This is what I got online: nilearn.masking.apply_mask(imgs, mask_img, dtype='f', smoothing_fwhm=None, ensure_finite=True)

Should I pass my ROI image as "mask_img"?

If I am using a NiftiMasker, then I can use the masker's transform method (e.g., beta_arr = masker.transform(maps)). In that case, where and when can I use my ROI?

Thanks for your patience; it has been a great help!