FSL out-of-memory

Summary of what happened:

Hi all,
I’m trying to run smoothing and high-pass filtering after preprocessing with fMRIPrep, but FEAT fails with an FSL out-of-memory error. How can I increase the memory limit?
Could it be that the fMRIPrep-preprocessed input is too large for FSL to process? Please let me know if you have any suggestions. The log is below. Thanks for your help.

Command used (and if a helper script was used, a link to the helper script or the command generated):

PASTE CODE HERE

Version:

FSL 6.0.7.4

Environment (Docker, Singularity / Apptainer, custom installation):

Not sure.

Data formatted according to a validatable standard? Please provide the output of the validator:

PASTE VALIDATOR OUTPUT HERE

Relevant log outputs (up to 20 lines):

Initialisation


/software/system/fsl/6.0.7.4/bin/fslmaths /data/project/CannTeen/CannTeen_longitudinal_fMRI/SST_1/output/sub-062/sub-062/ses-occ1/func/sub-062_ses-occ1_task-stopsignal_space-MNI152NLin2009cAsym_desc-preproc_bold prefiltered_func_data -odt float
Total original volumes = 512

/software/system/fsl/6.0.7.4/bin/fslroi prefiltered_func_data example_func 256 1

Preprocessing:Stage 1

Preprocessing:Stage 2


/software/system/fsl/6.0.7.4/bin/fslstats prefiltered_func_data -p 2 -p 98
0.000000 575.843750 

/software/system/fsl/6.0.7.4/bin/fslmaths prefiltered_func_data -thr 57.584375 -Tmin -bin mask -odt char

/software/system/fsl/6.0.7.4/bin/fslstats prefiltered_func_data -k mask -p 50

FATAL ERROR ENCOUNTERED:
COMMAND:
/software/system/fsl/6.0.7.4/bin/fslstats prefiltered_func_data -k mask -p 50
ERROR MESSAGE:
child killed: kill signal
END OF ERROR MESSAGE
child killed: kill signal
    while executing
"if { [ catch {

for { set argindex 1 } { $argindex < $argc } { incr argindex 1 } {
    switch -- [ lindex $argv $argindex ] {

	-I {
	    incr arginde..."
    (file "/software/system/fsl/6.0.7.4/bin/feat" line 312)
slurmstepd: error: Detected 1 oom-kill event(s) in StepId=3645994.batch cgroup. Some of your processes may have been killed by the cgroup out-of-memory handler.
Error encountered while running in main feat script, halting.
child killed: kill signal

Screenshots / relevant information:


Hi @wang_simiao,

No, that isn’t a problem.

It looks like you are using the SLURM job scheduler. How are you defining your resources in SLURM (e.g., with an SBATCH header or srun options), and how much memory are you requesting for the job?
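For reference, a minimal SBATCH header might look like the sketch below (all values are illustrative, not a recommendation for your data). The `--mem` line sets the per-job memory limit that the cgroup OOM killer in your log is enforcing, so that is the value to raise:

```shell
#!/bin/bash
#SBATCH --job-name=feat_run      # hypothetical job name
#SBATCH --mem=16G                # memory limit enforced by the cgroup OOM killer; raise if jobs are killed
#SBATCH --cpus-per-task=1
#SBATCH --time=02:00:00

# Illustrative invocation; substitute your actual design file
feat design.fsf
```

Then submit with `sbatch myscript.sh`. You can check what a finished job actually used with `sacct -j <jobid> --format=JobID,MaxRSS,State` to size the request appropriately.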

Best,
Steven