Nilearn: how to combine two NiftiMaskers for decoding?

Hi all, I’d like to standardize each condition of the data separately and then assign it to either the train set or the test set. I have four conditions, and two of them go to each set.
However, after processing the conditions with NiftiMasker, I don’t know how to combine the outputs.
For example, in my code below, I want to pass the outputs of

Face_fmri_niimgs_masked and NonFace_fmri_niimgs_masked

into

train_fmri_niimg

How do I do this?

def process_fmri_data(mask_filename, subj, dir_AFNI_beta, dir_labels, repre_type, sliding_window, pipeline_name):

    ...
    Face_fmri_niimgs = index_img(beta_imgs, Face_condition_mask)
    High_fmri_niimgs = index_img(beta_imgs, High_condition_mask)
    Low_fmri_niimgs = index_img(beta_imgs, Low_condition_mask)
    NonFace_fmri_niimgs = index_img(beta_imgs, NonFace_condition_mask)
    
    Face_masker = NiftiMasker(
        mask_img=mask_filename,
        runs=Face_run_label,
        standardize='zscore_sample',
        t_r=0.5,
        memory="nilearn_cache",
        memory_level=1,
    )
    High_masker = NiftiMasker(
        mask_img=mask_filename,
        runs=High_run_label,
        standardize='zscore_sample',
        t_r=0.5,
        memory="nilearn_cache",
        memory_level=1,
    )
    Low_masker = NiftiMasker(
        mask_img=mask_filename,
        runs=Low_run_label,
        standardize='zscore_sample',
        t_r=0.5,
        memory="nilearn_cache",
        memory_level=1,
    )
    NonFace_masker = NiftiMasker(
        mask_img=mask_filename,
        runs=NonFace_run_label,
        standardize='zscore_sample',
        t_r=0.5,
        memory="nilearn_cache",
        memory_level=1,
    )

    Face_fmri_niimgs_masked = Face_masker.fit(Face_fmri_niimgs)
    High_fmri_niimgs_masked = High_masker.fit(High_fmri_niimgs)
    Low_fmri_niimgs_masked = Low_masker.fit(Low_fmri_niimgs)
    NonFace_fmri_niimgs_masked = NonFace_masker.fit(NonFace_fmri_niimgs)

    Face_conditions = conditions[Face_condition_mask]
    High_conditions = conditions[High_condition_mask]
    Low_conditions = conditions[Low_condition_mask]
    NonFace_conditions = conditions[NonFace_condition_mask]

    Face_conditions = Face_conditions.values
    High_conditions = High_conditions.values
    Low_conditions = Low_conditions.values
    NonFace_conditions = NonFace_conditions.values

    train_run_label = pd.concat([Face_run_label, NonFace_run_label])
    untrain_run_label = pd.concat([High_run_label, Low_run_label])

    train_fmri_niimg = image.index_img(beta_imgs, train_conditions_mask)
    untrain_fmri_niimg = image.index_img(beta_imgs, untrain_conditions_mask)

    train_conditions = conditions[train_conditions_mask]
    untrain_conditions = conditions[untrain_conditions_mask]

    train_conditions = train_conditions.values
    untrain_conditions = untrain_conditions.values

    cv = LeaveOneGroupOut()

    return (train_fmri_niimg, untrain_fmri_niimg, train_run_label, untrain_run_label,
            cv, num_size, num_start, num_tr,
            repre_type, roi_name, pipeline_name, subj,
            train_conditions, untrain_conditions, train_run_label, untrain_run_label)

def train_svm(mask_filename, subj, dir_AFNI_beta, dir_labels,
              repre_type, sliding_window, pipeline_name, total_iters,
              pre_iters_for_Grid, cv, train_conditions, untrain_conditions,
              train_run_label, untrain_run_label, train_fmri_niimg, untrain_fmri_niimg):

    decoder = FREMClassifier(
        estimator='svc', scoring='f1_macro',
        cv=cv, standardize=False, n_jobs=38, t_r=0.5
    )
    decoder.fit(train_fmri_niimg, train_conditions, groups=train_run_label)
...

Do I understand correctly that you want to concatenate the data arrays that have been generated, i.e. something like

train_data = np.concatenate([Face_fmri_niimgs_masked, NonFace_fmri_niimgs_masked])

(you probably need to tweak the concatenation axis).
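For instance, assuming Face_fmri_niimgs_masked and NonFace_fmri_niimgs_masked are the arrays returned by fit_transform, they have shape (n_samples, n_voxels), so the samples would be stacked along the first axis:

# stack the samples of the two conditions along axis 0
train_data = np.concatenate(
    [Face_fmri_niimgs_masked, NonFace_fmri_niimgs_masked], axis=0
)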
Does that answer your question?
Best,
Bertrand

Yes and no.
I want to concatenate the two outputs of NiftiMasker. If I call fit_transform on the inputs, the outputs will be data arrays and can be concatenated without any problem. But it seems FREMClassifier does not take data arrays as input, only niimg-like objects. So I can only call fit on the inputs, in which case the outputs are still niimg-like and cannot be concatenated with np.concatenate. This is where I got stuck.
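To make the problem concrete, this is roughly what I mean (variable names as in my function above):

# fit_transform gives numpy arrays, which concatenate without any problem
Face_fmri_niimgs_masked = Face_masker.fit_transform(Face_fmri_niimgs)
NonFace_fmri_niimgs_masked = NonFace_masker.fit_transform(NonFace_fmri_niimgs)
train_data = np.concatenate([Face_fmri_niimgs_masked, NonFace_fmri_niimgs_masked])

# but FREMClassifier.fit expects niimg-like images, not arrays,
# so I cannot pass train_data to decoder.fit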

Well, indeed you cannot feed the output of a masker directly into a FREMClassifier, given that the latter expects niimg-like objects.
What you probably want to do is recreate images from the masked arrays (the output of fit_transform) using

Face_fmri_niimgs_filtered = Face_masker.inverse_transform(Face_fmri_niimgs_masked)
NonFace_fmri_niimgs_filtered = NonFace_masker.inverse_transform(NonFace_fmri_niimgs_masked)

# then you concatenate the two images
from nilearn.image import concat_imgs
train_img = concat_imgs([Face_fmri_niimgs_filtered, NonFace_fmri_niimgs_filtered])
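Putting it together, an untested sketch (assuming the maskers, the per-condition images, and the condition/run labels are defined as in your process_fmri_data function, and that the masked variables are the arrays returned by fit_transform):

import numpy as np
import pandas as pd
from nilearn.image import concat_imgs
from nilearn.decoding import FREMClassifier

# standardize each condition separately; fit_transform returns
# arrays of shape (n_samples, n_voxels)
Face_fmri_niimgs_masked = Face_masker.fit_transform(Face_fmri_niimgs)
NonFace_fmri_niimgs_masked = NonFace_masker.fit_transform(NonFace_fmri_niimgs)

# map the standardized arrays back to image space
Face_fmri_niimgs_filtered = Face_masker.inverse_transform(Face_fmri_niimgs_masked)
NonFace_fmri_niimgs_filtered = NonFace_masker.inverse_transform(NonFace_fmri_niimgs_masked)

# concatenate the two 4D images along the samples dimension
train_fmri_niimg = concat_imgs([Face_fmri_niimgs_filtered, NonFace_fmri_niimgs_filtered])

# keep the condition labels and run/group labels in the same order as the images
train_conditions = np.concatenate([Face_conditions, NonFace_conditions])
train_run_label = pd.concat([Face_run_label, NonFace_run_label])

decoder = FREMClassifier(
    estimator='svc', scoring='f1_macro',
    cv=cv, standardize=False, n_jobs=38, t_r=0.5
)
decoder.fit(train_fmri_niimg, train_conditions, groups=train_run_label)

Keeping standardize=False in the FREMClassifier avoids standardizing the data a second time, since you already standardized each condition in its own masker.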