Question about Nilearn_cache folder

Hi all,

I have been running my Python script on the HCP server and realized that the nilearn_cache folder takes up a lot of space in my PI's storage (and on my personal computer as well). Is there any way to tell Python not to persist this cached data? My guess is that it is some kind of working space for the analysis and is not useful after everything is done?

Thanks

In any of your nilearn commands, do you ever specify a memory or memory_level argument? Nilearn usually says when it is using or writing to the cache, so if you could point out which commands are using it, that would help.

Thanks Steven, I checked my script and noticed that I specified memory='nilearn_cache' in the least-squares-separate (LSS) GLM for each trial. Is it a must? Please see an example of my script below:

# LSS GLM so that you can collect the trial-wise beta maps
from nilearn.glm.first_level import FirstLevelModel

glm_self_lNACC = FirstLevelModel(t_r=2,
                                 mask_img=masker_leftNACC,
                                 high_pass=.008,
                                 smoothing_fwhm=None,
                                 memory='nilearn_cache')
glm_self_lNACC.fit(func_self_filename,
                   events=lss_events_df_self,
                   confounds=confound_self_df)

You do not need to specify the memory argument. You can also set memory_level=0, which makes doubly sure the cache is not used.
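A minimal sketch of the same model without caching (other parameters omitted for brevity):

from nilearn.glm.first_level import FirstLevelModel

# Leaving `memory` at its default means nothing is written to disk;
# memory_level=0 additionally switches caching off altogether.
glm_self_lNACC = FirstLevelModel(t_r=2,
                                 high_pass=.008,
                                 memory_level=0)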

Thanks Steven, do you happen to know what this memory argument is for? I am just curious why someone would set memory='nilearn_cache'.

Having a cache speeds up computation. The intermediate results stored there (which are otherwise discarded) can be reused to save time when a command is repeated.
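Under the hood, nilearn's caching relies on joblib's Memory. Here is a toy sketch of the same mechanism (slow_transform is a made-up function just for illustration):

from joblib import Memory

# Point the cache at a directory; nilearn does the same when you
# pass memory='nilearn_cache'.
memory = Memory('nilearn_cache', verbose=0)

@memory.cache
def slow_transform(x):
    # Imagine an expensive preprocessing step here.
    return x ** 2

slow_transform(3)  # computed, and the result is written to disk
slow_transform(3)  # the second call is read back from the cache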

So based on what you said, if I am running an sbatch job on the server to analyze many subjects individually (about 150 subjects), should I specify memory='nilearn_cache' and then delete the cache once I no longer need it? Correct me if I misunderstood.

If you are running on an HPC cluster, chances are you do not need a cache, since you will likely have good memory and computational resources at your disposal.
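If you do use one and want to clean it up afterwards, something like this removes it (the path is assumed to match what you passed to memory):

import shutil

# Delete the cache directory once the analyses are done.
shutil.rmtree('nilearn_cache', ignore_errors=True)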

Note that you may want to put the cache in a "cheap" place, e.g. non-backed-up disk space, by pointing the memory argument there.
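For example (the /scratch path is hypothetical; substitute whatever non-backed-up space your cluster provides):

from nilearn.glm.first_level import FirstLevelModel

# Hypothetical scratch location; cache files land there instead of
# in the (backed-up, quota-limited) project directory.
glm = FirstLevelModel(t_r=2,
                      memory='/scratch/your_username/nilearn_cache')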
Best,