Nilearn deprecation warning and ‘Killed: 9’ error during nested cross-validation

Hi team,
I’m working with basic (linear SVC) nested cross-validation of stimuli with the ANOVA feature reduction, akin to the Haxby tutorial with my own data (n=20), where I concatenated the (preprocessed) sessions and then subjects via FSL. I made very minimal changes to the tutorial code.

On our 64 GB, 6-core Mac Pro, the job gets killed after a couple of hours (no error message beyond ‘Killed: 9’). With verbose output on, it’s the NiftiMasker step, while resampling the images, where it gets killed.

I’ve also received the deprecation warning discussed here: (and I have the current version of nilearn).

Could it be the way the data is concatenated? Am I just running out of memory? (We have a server/node option that I haven’t messed with yet.) Any pointers for making the code more memory-efficient? I’ve tried a couple of different options to no avail.

many thanks,

If the job gets killed, you don’t get an error message, hence it is hard to diagnose. It could be that it is running out of memory. To diagnose this, you could look at memory usage while the analysis is running. Another option would be to run the analysis on a handful of subjects to see if it succeeds.
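One way to watch memory from inside the script itself is to log the process’s peak resident memory at key points. A minimal sketch using only the standard library (`resource` is Unix-only, which covers the Mac Pro mentioned above; `peak_memory_gb` is just an illustrative helper name):

```python
import resource
import sys

def peak_memory_gb():
    """Return this process's peak resident memory, in GB."""
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    # ru_maxrss is reported in bytes on macOS but in kilobytes on Linux.
    if sys.platform == "darwin":
        return peak / 1e9
    return peak / 1e6

# Call this before and after each heavy step (masking, fitting, ...):
print(f"peak memory so far: {peak_memory_gb():.2f} GB")
```

If the printed value climbs toward the machine’s physical RAM just before the crash, that points at an out-of-memory kill rather than a nilearn bug.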

Many thanks, Gael. And thanks for the nilearn documentation in general, as it has made these important analyses accessible.

Completely understand what you are getting at. As a diagnostic exercise (prior to posting), I worked with a single subject (no issues), then two subjects concatenated (intermittent issues I cannot consistently replicate), and then got through the full sample once (‘kernel died … restarting’, but it got through). I have watched memory usage during the runs and it’s high on every machine we’ve tried. So with the 18.8 GB of the full sample it might well be a memory issue; I’ll work on pushing it to the server, where memory shouldn’t be a constraint, and check there.

I appreciate your thoughts, and will keep you posted as things get resolved and we scale up (I’m hoping to apply these approaches to larger data sets). There is a fair amount of reassurance in more experienced feedback (even if the issue is not completely resolved), as we are working through these analyses for the first couple of times.
thanks again

If you’re witnessing high memory usage, that might be the problem. Is your data big in one direction (high spatial resolution? many time points?)? As the crash happens during spatial resampling, do you have an idea of what you are resampling to? Maybe you are providing a mask that is high resolution (1 mm cube because it was computed on the anatomy)? One solution would be to downsample the data, giving “target_affine=np.diag((4, 4, 4))” to the NiftiMasker for 4 mm cube voxels. Or downsample the mask itself (be careful that when downsampling a mask, you need to use interpolation=“continuous”).

Yup, I’m pretty sure my mask is 1x1x1; I will downsample it and give it a try. That would explain why the 2–3-subject tests failed with the 1x1x1 whole-brain mask, but not necessarily with smaller ROI masks (still 1x1x1), and didn’t fail with masks pulled from the tutorials.
Will let you know how it goes.
thanks again

Working great with the resampled mask. Many thanks!