Extract estimator in searchlight

Hello,

I am doing cross-modal classification within an ROI and with a searchlight strategy.
With nilearn.decoding.Decoder, I can train one decoder and apply that same decoder to test on another set of data (cross-modal classification) with Decoder.predict(), but SearchLight has no .predict(), so I can't access the same estimator.

Here is the pipeline I am using:

cv = LeaveOneGroupOut()
pipeline = make_pipeline(StandardScaler(), LinearSVC())
searchlight = SearchLight(mask_img=mask_img, radius=10.0, estimator=pipeline, n_jobs=-1, scoring="accuracy", cv=cv, verbose=0)
searchlight.fit(X_train, y_train, groups=train_groups)

Question 1: When using Decoder(), I defined the splits with cv.split for Decoder.fit(X_train, y_train) and for Decoder.predict(X_test). Do I also need to define cv.split when using searchlight.fit()? From the examples I have looked up, this step does not seem necessary for searchlight.fit(), since the train and test sets are still split internally, but I am not sure.

Question 2: I would like suggestions on which arguments I should modify to apply the same estimator to another set of data.
So far, I think the way to do this is to create a custom_cv as in the scikit-learn example.
Is it possible to define the cv so that data1 is split with leave-one-group-out cross-validation (groups=groups; I have 6 groups for the 12 images in data1, 2 images per group) as X_train, and the whole of data2 is used as X_test (I have the same y_train and y_test labels for data1 and data2)?
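For concreteness, the following untested sketch is roughly what I have in mind (the counts and variable names are illustrative, and it assumes the cv argument accepts an explicit list of (train, test) index pairs, as scikit-learn's cross-validation utilities do):

import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

# indices into the concatenated data set (data1 followed by data2)
n_data1 = 12                              # 12 images in data1 (6 groups x 2 images)
n_data2 = 12                              # 12 images in data2
data1_indices = np.arange(n_data1)
data2_indices = np.arange(n_data1, n_data1 + n_data2)

logo = LeaveOneGroupOut()
# each split trains on 5 of the 6 data1 groups and tests on all of data2
custom_cv = [(data1_indices[train], data2_indices)
             for train, _ in logo.split(data1_indices, groups=groups_data1)]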
Does this idea sound reasonable?

Many thanks.

Hi,
Regarding question 1: when you create the searchlight object, you can specify the cv scheme. Normally, you can't do this at fit time (but you may specify the grouping variable, which is necessary for structured cross-validation).

Regarding question 2: maybe you can do that, but I'm worried that you have too few samples to get anything meaningful with the dataset you describe.

But I don’t understand why you want to do that: searchlight is not the right approach to test how models generalize to new data. It is simply meant to measure the accuracy of local models. If you want to make a precise statement about which regions show a generalization from train to test, the best thing to do is to use a parcellation, and do the standard fit/predict validation procedure with a classifier instantiated on the signals of each region you want to consider.
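In code, the idea would be something along these lines (an untested sketch; the parcellation image, the image lists and the labels are placeholders, and the point is simply one classifier per region, with an explicit fit on the train data and score on the test data):

import numpy as np
from nilearn.image import get_data, new_img_like
from nilearn.maskers import NiftiMasker
from sklearn.svm import LinearSVC

atlas_data = get_data(parcellation_img)
region_scores = {}
for label in np.unique(atlas_data):
    if label == 0:  # skip background
        continue
    region_mask = new_img_like(parcellation_img, (atlas_data == label).astype(np.int8))
    masker = NiftiMasker(mask_img=region_mask, standardize=True)
    X_tr = masker.fit_transform(train_imgs)   # signals of this region, training set
    X_te = masker.transform(test_imgs)        # same region, test set
    clf = LinearSVC().fit(X_tr, y_train)
    region_scores[int(label)] = clf.score(X_te, y_test)  # train-to-test generalization accuracy

With the regions fixed in advance, you get one interpretable generalization score per region instead of a voxel-wise map.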

HTH,

Bertrand

Hello Bertrand,

Thank you very much for your reply!
My concern comes from the difference between the workflows of nilearn.decoding.Decoder and nilearn.decoding.SearchLight.

More specifically, here is how I use Decoder():

cv = LeaveOneGroupOut()
decoder = Decoder(estimator='svc', mask=mask_img, cv=cv, scoring="accuracy")
## I defined train and test sets by cv.split
decoder.fit(X_train, y_train, groups=train_groups)
y_pred = decoder.predict(X_test)
y_true = y_test
accuracy_fold = accuracy_score(y_true, y_pred)

and here is how I use SearchLight():

cv = LeaveOneGroupOut()
pipeline = make_pipeline(StandardScaler(), LinearSVC())
searchlight = SearchLight(mask_img=mask_img, radius=10.0, estimator=pipeline, n_jobs=-1, scoring='accuracy', cv=cv, verbose=0)
searchlight.fit(X_train, y_train, groups=train_groups)
coefs = searchlight.scores_

For classification with the Decoder, there are steps for decoder.fit(X_train, y_train), decoder.predict(X_test) giving y_pred, and accuracy_score(y_test, y_pred); for the searchlight, however, there are only searchlight.fit(X, y) and searchlight.scores_, which returns an array with an accuracy for each tested voxel.
Neither searchlight.predict() nor searchlight.score() exists, so the information in between is missing.
Or did I miss something here?

If there is a way to inspect the train and test sets, then I can customize them with different data sets; otherwise, it seems I need to write a function for this myself.

I plan to do a cross-modal searchlight based on previous literature, but I will look into the parcellation approach.
Thanks a lot for this great information!

Indeed, you’re right.
Regarding SearchLight.predict(), I think this is a feature, not a bug: searchlight methods are used to get statistical maps; it is not a predictive method.
Regarding SearchLight.score(), I agree that this would be an interesting addition that would give more control over which data are used for training and testing. It's simply not there yet.
Btw, I realize that a very similar discussion occurred some time ago: Using SearchLight without cross-validation · Issue #2855 · nilearn/nilearn · GitHub
HTH,
Bertrand


Hello Bertrand,

Thank you very much for the information!
While figuring out sklearn.model_selection.PredefinedSplit, I think I have managed to customize my cross-validation splits manually.

To do a cross-modal searchlight, I need to train on data1 and test on data2 (the within-modal searchlight on data1 was checked before) in each cross-validation fold of the searchlight. Since I can't reuse the estimator from SearchLight(), I manually define the train and test sets.
In short, I first combine the two data sets (data3), train on the first half of data3 (data1) with leave-one-group-out cross-validation (checked with a within-modal searchlight), and test on the second half of data3 (all images in data2).

# customized cross-validation splits
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from nilearn.decoding import SearchLight

n_samples = len(total_images)             # data3 (data1 + data2)
n_train = n_samples // 2                  # data1
n_test = n_samples - n_train              # data2
train_indices = np.arange(n_train)
test_indices = np.arange(n_train, n_samples)

logo = LeaveOneGroupOut()
# loop over leave-one-group-out cross-validation within train_indices
for fold, (train_index, test_index) in enumerate(
        logo.split(np.zeros(len(train_indices)), groups=groups_data1), 1):
    # define the training and testing data
    X_train = [total_images[i] for i in train_index]
    y_train = [total_labels[i] for i in train_index]
    X_test = [total_images[i] for i in test_indices]
    y_test = [total_labels[i] for i in test_indices]

    # print the data for the current fold
    print(f'X_train:\n{X_train}')
    print(f'y_train:\n{y_train}')
    print(f'X_test:\n{X_test}')
    print(f'y_test:\n{y_test}')

    # searchlight
    pipeline = make_pipeline(StandardScaler(), LinearSVC())
    searchlight = SearchLight(mask_img=mask_img, radius=10.0, estimator=pipeline,
                              cv=logo, n_jobs=-1, scoring='accuracy')
    searchlight.fit(total_images, total_labels, groups=total_groups)

Question 1: Since I can't (or don't know how to) trace the actual train and test sets inside the searchlight, I would like to know whether it uses the splits I printed out (# print the data for the current fold) as long as I define the argument cv=logo.
Question 2: Am I right that searchlight.scores_ returns an array of accuracy scores for each voxel within mask_img, and that I can apply mean_img() to the per-fold new_img_like(mean_fmri, searchlight.scores_) images to get a mean accuracy image for each subject?

Many thanks!

  1. It's unclear to me how you define groups in the above script. If you have 2 groups, note that logo will do the train/test in both configurations: group1 → group2 and group2 → group1.
  2. AFAIK searchlight.scores_ represents the average accuracy across folds.

HTH,
Bertrand

Dear Bertrand,

Indeed, I didn’t make the ‘group’ definition clear.
I have 6 groups in both data sets, which actually represent the acquisition runs, and I defined this condition when I loaded the images.
In each data set, there are 12 images for two classes to be classified, so 2 images per group and 6 images per class.
I use the groups argument to ensure that images from the same group/run all end up in either the train or the test set.
This seems correct: when I print out X_train and X_test, there is always exactly one group in the test set in each cv fold.

Many thanks.

Sounds OK (but your sample is very small; expect large uncertainties on the results).
Best,
Bertrand


Is it reasonable to put searchlight.fit() inside the loop over folds from cv.split(), as in the script above, so that I get searchlight.scores_ for each fold?

  • By doing this, I printed the images and labels of the train and test sets in each fold. I also plotted the searchlight.scores_ image for each fold, and I got a different image for each fold.
  • When I move searchlight.fit() outside/after the loop that defines the cv.split() and then plot the searchlight.scores_ image, I get the same image as the one from the last fold above.

Many thanks and have a good day.

Yes, each time you fit, you'll obtain a new searchlight.scores_; this is expected.

When you move the fit outside of the loop, you only run the procedure on the last train/test partition, which explains why the resulting map is equal to the last one from the loop above.

You probably want to run the fit inside the loop and then average the scores.
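For instance, something along these lines (an untested sketch reusing the variable names from your script; it assumes cv also accepts an explicit list of (train, test) index pairs, so that each fit really uses the split you printed):

import numpy as np
from nilearn.image import new_img_like, mean_img

fold_score_imgs = []
for train_index, _ in logo.split(np.zeros(len(train_indices)), groups=groups_data1):
    searchlight = SearchLight(mask_img=mask_img, radius=10.0, estimator=pipeline,
                              scoring='accuracy', n_jobs=-1,
                              cv=[(train_index, test_indices)])  # train on a data1 fold, test on all of data2
    searchlight.fit(total_images, total_labels)
    fold_score_imgs.append(new_img_like(mean_fmri, searchlight.scores_))

# average the per-fold accuracy maps into one mean accuracy image per subject
mean_score_img = mean_img(fold_score_imgs)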
Best,
Bertrand


Yes, I averaged across the cross-validation folds for each subject, so I am on the right track.
Now it is much clearer to me!
Thank you very much Bertrand for all of these replies!