Nilearn and sklearn - RBM on timeseries output

Hi all,

I have a question regarding the learning of components with nilearn & sklearn’s RBM.
I followed a paper that uses an RBM on fMRI timeseries and tried to replicate the RBM analysis (I used the movie-watching dataset shipped with nilearn).
After masking, say, 10 subjects with MultiNiftiMasker, I fit the RBM on the 1D timeseries. With masker.inverse_transform, single RBM components can be visualized, but only after reshaping them to 2D [scans x voxels] beforehand. The disadvantage is of course that I can’t scroll through the components, since I have to reshape and inverse_transform each one separately.
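
Roughly what I am doing, as a minimal sketch (the fetcher and parameter values such as n_components=20 are just placeholders for my actual setup):

```python
import numpy as np
from nilearn import datasets, image, plotting
from nilearn.maskers import MultiNiftiMasker
from sklearn.neural_network import BernoulliRBM

# Movie-watching dataset shipped with nilearn
data = datasets.fetch_development_fmri(n_subjects=10)

# Mask all subjects; this gives one [scans x voxels] array per subject
masker = MultiNiftiMasker(smoothing_fwhm=6, standardize=True)
subject_ts = masker.fit_transform(data.func, confounds=data.confounds)
X = np.vstack(subject_ts)  # [total scans x voxels]

# Note: BernoulliRBM formally assumes inputs in [0, 1]
rbm = BernoulliRBM(n_components=20, learning_rate=0.01, n_iter=20)
rbm.fit(X)

# Visualizing a single component: reshape to 2D before inverse_transform
component = rbm.components_[0].reshape(1, -1)        # [1 x voxels]
component_img = masker.inverse_transform(component)  # 4D image with a single volume
plotting.plot_stat_map(image.index_img(component_img, 0), title="RBM component 0")
```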

My question now is: is it possible to get a format similar to FastICA’s (e.g. (20, 24000))? I’m not sure whether I have overlooked something and the correct output for visualization should be similar to FastICA’s, or whether it is fine to have the single RBM components in [scans x voxels] format.
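
To make the format I mean concrete, here is the FastICA case as a sketch (X and masker as above; the numbers are just illustrative):

```python
from sklearn.decomposition import FastICA

ica = FastICA(n_components=20).fit(X)
print(ica.components_.shape)  # (20, n_voxels), e.g. (20, 24000): one row per spatial map

# A single inverse_transform gives a 4D image I can scroll through with index_img
ica_maps_img = masker.inverse_transform(ica.components_)
```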

Any comments or help are very much appreciated.
Cheers,
higgs

Sorry for the delay, did you manage to solve your problem?

(If not: I didn’t get exactly what you’re trying to achieve in terms of visualization, can you provide an example?)

Thanks a lot for your answer. I’ll try to reformulate my question more clearly. I am not sure how to set up the RBM so that I end up with a spatial map.
I tried to concatenate along the time domain first (analogous to spatial ICA; the data was masked with the MSDL atlas). This seemed to work, but the output of this approach is effectively only one component instead of e.g. 10 (the 10 components are identical, so basically it’s just one).
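
Here is roughly what I tried, as a sketch (MSDL masking via NiftiMapsMasker; the MinMaxScaler step is my own addition, since sklearn’s BernoulliRBM formally expects inputs in [0, 1]; the parameter values are placeholders):

```python
import numpy as np
from nilearn import datasets, image, plotting
from nilearn.maskers import NiftiMapsMasker
from sklearn.preprocessing import MinMaxScaler
from sklearn.neural_network import BernoulliRBM

data = datasets.fetch_development_fmri(n_subjects=10)
msdl = datasets.fetch_atlas_msdl()

# One [scans x regions] matrix per subject from the MSDL maps
maps_masker = NiftiMapsMasker(maps_img=msdl.maps, standardize=True).fit()
subject_ts = [maps_masker.transform(func, confounds=conf)
              for func, conf in zip(data.func, data.confounds)]

# Concatenate along the time domain (analogous to spatial ICA)
X = np.vstack(subject_ts)            # [total scans x n_regions]
X = MinMaxScaler().fit_transform(X)  # rescale to [0, 1] for the RBM

rbm = BernoulliRBM(n_components=10, learning_rate=0.01, n_iter=50)
rbm.fit(X)

# Each row of rbm.components_ is one set of region weights -> project back to a spatial map
components_img = maps_masker.inverse_transform(rbm.components_)  # 4D: one map per component
plotting.plot_stat_map(image.index_img(components_img, 0), title="RBM component 0")
```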
Maybe someone has a hint about what I need to change to head in the right direction.
Edit: the attached image is the spatial map from the RBM output (which is basically only one map, identical across all components).
Thanks a lot for any advice.
[attached image: spatial_rbm]