I have a nifti object train_X of shape (91, 109, 91, 2947) (i.e., 2947 images). Each image in the nifti object has a numeric value associated with it in a list train_y, and I am training a nilearn.decoding.DecoderRegressor predictor on this image set.
from nilearn.decoding import DecoderRegressor
from sklearn.model_selection import GroupKFold

cv_inner = GroupKFold(n_splits=3)
regressor = DecoderRegressor(standardize=True, cv=cv_inner, scoring="r2")
regressor.fit(y=train_y, X=train_X, groups=train_groups)
When it runs, I repeatedly get the following warning:
/home/bsmith16/.conda/envs/neuralsignature/lib/python3.8/site-packages/sklearn/svm/_base.py:255: ConvergenceWarning: Solver terminated early (max_iter=10000). Consider pre-processing your data with StandardScaler or MinMaxScaler.
  warnings.warn('Solver terminated early (max_iter=%i).'
It’s puzzling because I am already passing the standardize argument to the DecoderRegressor.
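To try to narrow it down, I was thinking of masking and standardizing the data myself and fitting a plain sklearn SVR on the result, to see whether the warning comes from the data scale or from the Decoder itself. This is only a sketch and assumes the Decoder's default estimator is roughly a linear SVR with a capped max_iter; NiftiMasker lives in nilearn.maskers on recent versions (nilearn.input_data on older ones).

from nilearn.maskers import NiftiMasker
from sklearn.svm import SVR

# Mask and standardize outside the Decoder.
masker = NiftiMasker(standardize=True)
X_masked = masker.fit_transform(train_X)  # shape (2947, n_voxels)

# Assumption: the Decoder's default regressor behaves like a linear SVR
# with max_iter capped at 10000, matching the warning above.
svr = SVR(kernel="linear", max_iter=10000)
svr.fit(X_masked, train_y)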
If I run only the first 500 images through the Decoder, I do not get the warning, so perhaps it is some kind of memory issue, although the system does manage to finish the full run.
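Something like this is what I mean by the 500-image test (a sketch; the *_small names are just for illustration):

from nilearn.image import index_img

# Take the first 500 volumes of the 4D image and the matching targets/groups.
train_X_small = index_img(train_X, slice(0, 500))
train_y_small = train_y[:500]
train_groups_small = train_groups[:500]

regressor.fit(y=train_y_small, X=train_X_small, groups=train_groups_small)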
Any ideas what could be going on here? Is this an unusually large dataset, and should I be trying to mask it more aggressively than I already am?
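If heavier masking is the way to go, I assume it would be passed along these lines (the grey-matter mask file here is hypothetical):

from nilearn.decoding import DecoderRegressor

# Restrict the analysis to a tighter mask instead of the full (91, 109, 91) volume.
regressor = DecoderRegressor(
    standardize=True,
    cv=cv_inner,
    scoring="r2",
    mask="grey_matter_mask.nii.gz",  # hypothetical mask image
)
regressor.fit(y=train_y, X=train_X, groups=train_groups)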