Multimodal neuroimaging / multi-view learning

Hello everyone,
I have two datasets: M1 = (200 subjects, 170000 voxels) and M2 = (200 subjects, 170000 voxels).
I want to apply FastICA to get the latent space shared between these two datasets.
How can I do this, please?
I tried it, but it returns a matrix MT = (200, 200), and when I apply FeatureAgglomeration on MT, it raises an error (because the connectivity matrix has shape (170000, 170000)).
And if you know another method that makes it possible to get a latent space between these two views of the data?

Thank you very much! @bthirion

Not sure what you’re doing: indeed, ICA does not require a connectivity matrix. You’re probably using the Ward algorithm to cluster the voxel set?
Would you mind posting a script somewhere?

Or, taking a different perspective: what would you like to achieve in the end?

One thing that you can do with ICA is to find spatial components that explain most of the signal in your two datasets and are interpretable.
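A minimal sketch of that idea with scikit-learn's FastICA, on toy shapes (1000 voxels instead of 170000; the component count and use of `mixing_` as the spatial side of the decomposition are assumptions, not part of the original post):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.RandomState(0)

# Toy stand-ins for the two views: 200 subjects x 1000 voxels each
X1 = rng.randn(200, 1000)
X2 = rng.randn(200, 1000)

# Stack the subjects of both views, then ask ICA for a few components
X = np.concatenate((X1, X2), axis=0)          # (400, 1000)

ica = FastICA(n_components=20, random_state=0, max_iter=500)
sources = ica.fit_transform(X)                # (400, 20): per-subject loadings
maps = ica.mixing_.T                          # (20, 1000): one spatial pattern
                                              # over voxels per component
print(sources.shape, maps.shape)
```

Note that the voxel dimension survives only in `ica.mixing_` (and `ica.components_`), not in the transformed data, which is why the transformed matrix ends up (n_subjects, n_components).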

Hello, thank you:

     X_train1, X_train2 = two matrices of shape (200, 170000)
     X_c = a latent space that captures the shared information between X_train1 and X_train2

     F = FeatureAgglomeration(n_clusters=500, connectivity=connec, linkage='ward')
     x_r = F.fit_transform(X_c)

Thank you @bthirion

I think that we can better help you if you post the actual script you are running.
If you want to run things properly, I suggest that you take inspiration from
http://nilearn.github.io/auto_examples/03_connectivity/plot_data_driven_parcellations.html#sphx-glr-auto-examples-03-connectivity-plot-data-driven-parcellations-py
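For reference, the shape contract that matters here can be shown on toy sizes (a 10x10x10 grid standing in for the brain mask; the sizes are illustrative, not from the original post): the connectivity passed to FeatureAgglomeration must be (n_features, n_features), i.e. defined over the voxels that are still present in the data.

```python
import numpy as np
from sklearn.cluster import FeatureAgglomeration
from sklearn.feature_extraction.image import grid_to_graph

rng = np.random.RandomState(0)

# Toy volume: 200 subjects, a 10x10x10 voxel grid flattened to 1000 features
X = rng.randn(200, 1000)

# Voxel-to-voxel adjacency on the grid: shape (n_voxels, n_voxels)
connectivity = grid_to_graph(n_x=10, n_y=10, n_z=10)

ward = FeatureAgglomeration(n_clusters=50, connectivity=connectivity,
                            linkage='ward')
X_red = ward.fit_transform(X)   # (200, 50): one value per voxel cluster

print(X_red.shape)
```

If the data has already been projected to a (200, 200) latent space, its features are no longer voxels, and a (170000, 170000) connectivity cannot apply.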

X1 of shape (200, 170000) and X2 of shape (200, 170000)

     import numpy as np
     from sklearn.decomposition import FastICA
     from sklearn.cluster import FeatureAgglomeration

     def algo(model, X1, X2, y, train, val):
         X_train_1, X_val_1, y_train, y_val = X1[train], X1[val], y[train], y[val]
         X_train_2, X_val_2 = X2[train], X2[val]

         # Stack the two views along the subject axis
         X_c = np.concatenate((X_train_1, X_train_2), axis=0)
         X_v = np.concatenate((X_val_1, X_val_2), axis=0)

         # Fit ICA on the training data only, then reuse it on validation
         ica = FastICA()
         X_train_c = ica.fit_transform(X_c)
         X_val_c = ica.transform(X_v)

         # connectivity is a (170000, 170000) voxel adjacency built elsewhere
         ward = FeatureAgglomeration(n_clusters=1000, connectivity=connectivity,
                                     linkage='ward')
         X_red = ward.fit_transform(X_train_c)
         model.fit(X_red, y_train)

X_train_1 and X_train_2 of shape (150, 170000), and X_val_1 and X_val_2 of shape (50, 170000)
X_c of shape (300, 170000) and X_v of shape (100, 170000)

I get an error, because X_train_c has shape (300, 300) while FeatureAgglomeration takes an adjacency matrix (connectivity) of shape (nb_voxels, nb_voxels) …

Is there another method that would give a single matrix containing the shared weights or latent space between X_train_1 and X_train_2 before applying FeatureAgglomeration?

I know that we can apply FeatureAgglomeration directly on X_c, but I want to get the latent space between X_train_1 and X_train_2 first, and then apply FeatureAgglomeration on that latent space.

Thank you very much @bthirion

Running the following should solve it:

X_red = ward.fit_transform(X_train_c.T)
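The transpose puts the spatial axis on the feature side, which is where the voxel connectivity applies. One shape-consistent sketch of that idea, under the assumption that the matrix fed to Ward must still carry the voxel dimension (here taken from the ICA mixing matrix, on toy sizes: 1000 voxels instead of 170000):

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cluster import FeatureAgglomeration
from sklearn.feature_extraction.image import grid_to_graph

rng = np.random.RandomState(0)

# Toy data: 300 stacked training rows over a 10x10x10 grid (1000 voxels)
X_c = rng.randn(300, 1000)
connectivity = grid_to_graph(10, 10, 10)     # (1000, 1000), over voxels

# ICA: the voxel dimension survives on the spatial side of the model
ica = FastICA(n_components=20, random_state=0, max_iter=500)
ica.fit(X_c)
spatial_maps = ica.mixing_.T                 # (20, 1000): components x voxels

# Ward now sees 1000 voxel features, so the voxel connectivity applies
ward = FeatureAgglomeration(n_clusters=50, connectivity=connectivity,
                            linkage='ward')
maps_red = ward.fit_transform(spatial_maps)  # (20, 50)
print(maps_red.shape)
```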