Need suggestions for Decoder() script with LeavePGroupsOut() function

Dear experts,

I am new to Python and scikit-learn. Thank you very much in advance for your time and help.
My questions:

  • Question 1: Are there any concerns with the coding structure (attached below)?
  • Question 2: Is it okay to ignore the warnings below when I run the script for classification?
  C:\Users\anaconda3\Lib\site-packages\nilearn\decoding\decoder.py:320: UserWarning: Use a custom estimator at your own risk of the process not working as intended. warnings.warn(
  C:\Users\anaconda3\Lib\site-packages\sklearn\feature_selection\_univariate_selection.py:112: UserWarning: Features [268 279 319 332 346 358 398 410 423 436 437 462 474 482 496 507 518 519 530 531 542 553 554 564 565 572 577 578 582 583 592 601 611 619 620 628 629 635 638 642 643 650 658 665 672 673 678 679 681 682 687 688 689 694 695 696 697 698 699 700 701 702 703] are constant. warnings.warn("Features %s are constant." % constant_features_idx, UserWarning)
  C:\Users\anaconda3\Lib\site-packages\sklearn\feature_selection\_univariate_selection.py:113: RuntimeWarning: invalid value encountered in divide f = msb / msw

Conditions and script

I have 12 unsmoothed first-level t-maps for 2 conditions (‘Lip-BL’ and ‘Tongue-BL’; 6 images/runs per condition).
When I leave 2 groups out, I therefore get at most C(6, 2) = 15 split combinations.
To ensure that the 2 conditions of the same run (named ‘group’ in the script) are kept together for either training or testing, the groups are passed as a parameter to decoder.fit(X_train, y_train, groups=train_groups).
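As a sanity check on the 15-split claim, here is a small stdlib-only sketch of the same grouping logic (run and condition names mirror the script; nothing here touches the real data):

```python
from itertools import combinations

runs = [f'Group-{i}' for i in range(1, 7)]   # 6 runs
conditions = ['Lip-BL', 'Tongue-BL']

# Leaving 2 of the 6 runs out for testing gives C(6, 2) = 15 splits
splits = list(combinations(runs, 2))
print(len(splits))  # 15

# Each split tests on both conditions of the 2 held-out runs,
# so every test set contains 2 runs x 2 conditions = 4 images
test_sets = [[(run, cond) for run in held_out for cond in conditions]
             for held_out in splits]
```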

    # Imports used below
    import os
    import numpy as np
    from nilearn.image import load_img
    from nilearn.decoding import Decoder
    from sklearn.model_selection import LeavePGroupsOut
    from sklearn.metrics import accuracy_score

    # Function to load images based on condition and group
    def load_images(subject, condition, groups):
        images = []
        labels = []
        for group in groups:
            img_path = os.path.join(motor_path, f'sub-{subject}', f'sub-{subject}_task-loc_desc-{condition}-{group}_stat.nii.gz')
            images.append(img_path)
            labels.append(condition)
        return images, labels

    # Loop over hemispheres
    for selected_hemisphere in ['RH', 'LH']:
        # One '{}' placeholder is left for the subject ID, filled in below
        mask_pattern = f'{{}}_LTFmotorROI_corr001_{selected_hemisphere}.nii.gz'
        # Loop over subjects
        for subject in sub_list:
            mask_img = load_img(os.path.join(mask_path, mask_pattern.format(subject)))
            # Load images and corresponding labels for both conditions
            conditions = ['Lip-BL', 'Tongue-BL']
            all_images = []
            all_conditions = []
            all_groups = []
            for condition in conditions:
                images, labels = load_images(subject, condition, np.arange(1, 7))
                all_images.extend(images)
                all_conditions.extend(labels)
                all_groups.extend([f'Group-{i}' for i in np.arange(1, 7)])
            
            # Decoder
            cv = LeavePGroupsOut(n_groups=2)
            decoder = Decoder(estimator='svc', mask=mask_img, cv=cv, standardize="zscore_sample", scoring='accuracy')

            y_true = []
            y_pred = []

            # cross-validation
            for fold, (train_idx, test_idx) in enumerate(cv.split(all_images, all_conditions, groups=all_groups), 1):
                X_train, X_test = np.array(all_images)[train_idx], np.array(all_images)[test_idx]
                y_train, y_test = np.array(all_conditions)[train_idx], np.array(all_conditions)[test_idx]

                # Fit the decoder with runs kept together (train_groups
                # must be taken from the current training indices)
                train_groups = np.array(all_groups)[train_idx]
                decoder.fit(X_train, y_train, groups=train_groups)

                # Predict on the test set
                fold_pred = decoder.predict(X_test)
                y_pred.extend(fold_pred)
                y_true.extend(y_test)

                # accuracy for the current fold only (not the running total)
                accuracy_fold = accuracy_score(y_test, fold_pred)
                print(f'Fold {fold} Accuracy: {accuracy_fold:.2f}')
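For reference, the same grouping can be checked end-to-end on synthetic data with plain scikit-learn. SVC and cross_val_score here are stand-ins for nilearn's Decoder, and the feature matrix is random noise, so only chance-level accuracy is expected; the point is that both conditions of a held-out run always land in the same test fold:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeavePGroupsOut, cross_val_score

rng = np.random.default_rng(0)
# 12 samples (6 runs x 2 conditions) with 50 fake "voxels" each
X = rng.normal(size=(12, 50))
y = np.array(['Lip-BL', 'Tongue-BL'] * 6)
groups = np.repeat([f'Group-{i}' for i in range(1, 7)], 2)

# Leave 2 runs out per fold; both conditions of a run share a group label
cv = LeavePGroupsOut(n_groups=2)
scores = cross_val_score(SVC(kernel='linear'), X, y,
                         groups=groups, cv=cv, scoring='accuracy')
print(len(scores))  # 15 folds, one per pair of held-out runs
```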