Fmriprep FATAL kernel error when running fmriprep singularity container through HPC

Summary of what happened:

Hi all,

I have an issue when trying to run more recent versions of fmriprep through my institute’s HPC using a Singularity container. My original fmriprep container, which runs correctly, contains v20.2.0 and conducts pre-processing on the HPC using Singularity v3.7.0.

However, I have multi-echo data that I want to pre-process using the tedana workflows available in v21.0.0 onward, and those versions produce an error.

Command used (and if a helper script was used, a link to the helper script or the command generated):

#PBS -P AlcNeuro
#PBS -l select=1:ncpus=8:mem=16GB

#PBS -l walltime=12:00:00

#PBS -q defaultQ

module load singularity/3.5.3 

#Set up directories
cd $root_path
echo "Data input ${data_path}"
echo "Data output ${output_path}" 
echo "singularity run --cleanenv fmriprep_img ${data_path} ${output_path} participant \
--participant_label $PBS_JOBNAME --fs-license-file ${freesurfer_file} \
--bold2t1w-dof 6 --force-bbr --dummy-scans 10 \
--use-aroma --aroma-melodic-dimensionality -200 --return-all-components \
--skull-strip-t1w auto --n-cpus 8"

singularity run --cleanenv fmriprep_img ${data_path} ${output_path} participant \
    --participant_label $PBS_JOBNAME --fs-license-file ${freesurfer_file} \
    --bold2t1w-dof 6 --force-bbr --dummy-scans 10 \
    --use-aroma --aroma-melodic-dimensionality -200 --return-all-components \
    --skull-strip-t1w auto --n-cpus 8



Environment (Docker, Singularity, custom installation):

Data formatted according to a validatable standard? Please provide the output of the validator:

Relevant log outputs (up to 20 lines):

When attempting to run fmriprep version 21.0.0 (or any later version), I receive the following error:

WARNING: Skipping mount /etc/localtime [binds]: /etc/localtime doesn't exist in container
FATAL: kernel too old

WARNING: Skipping mount /etc/localtime [binds]: /etc/localtime doesn't exist in container
FATAL: kernel too old

Screenshots / relevant information:

I’m assuming this means that my HPC’s Linux kernel is too old to run newer versions of fmriprep; is this correct? The HPC runs CentOS 6.9 (which ships a 2.6.32-series kernel), and unfortunately the administrators have no short-term plans to upgrade at this stage.
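For anyone hitting the same message: the check that fails is performed by the glibc inside the container against the host kernel at startup (glibc 2.24 and later refuse to run on kernels older than 3.2). You can confirm what the container sees by printing the running kernel release on a node:

```shell
# Print the host kernel release; the container's glibc checks this
# version at startup, which is what triggers "FATAL: kernel too old"
# on CentOS 6 (2.6.32-series kernels).
uname -r
```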

The questions/next steps I have are:

What would be the most up-to-date version of fmriprep I could run on CentOS 6.9?

Or, is there any workaround that would enable me to run newer versions of fmriprep on my CentOS 6.9 HPC without encountering this issue?

Failing this, are there outputs from v20.2.0 that I could feed to tedana to at least get some of the multi-echo outputs for post-processing? Even being able to obtain the preprocessed echoes and then run the t2smap workflow on them would be a huge advantage here.
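For context on that last point: tedana's standalone t2smap workflow only needs the individual preprocessed echoes (in a common space) plus their echo times, so if per-echo outputs can be obtained it can be invoked directly. A sketch of the call, where the filenames and echo times are placeholders rather than real data:

```shell
# Placeholder filenames and echo times (ms); substitute your own.
t2smap -d sub-01_task-rest_echo-1_desc-preproc_bold.nii.gz \
          sub-01_task-rest_echo-2_desc-preproc_bold.nii.gz \
          sub-01_task-rest_echo-3_desc-preproc_bold.nii.gz \
       -e 12.0 28.0 44.0
```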

Thanks in advance for your help!


Hi @warrenlogge, and welcome to neurostars!

Please share the code you are using to run fmriprep. You can do this by editing your original post.


(original post edited)

Hi @warrenlogge,

A few issues with your command:

You need to bind relevant drives with the -B argument. Since everything you need is in root_path, you can just bind that.

Seems like there are extra $ in your text.

You do not specify the --fs-license-file argument to import your FreeSurfer license.
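Putting those fixes together, a sketch of the corrected invocation (variable names are taken from your script and assumed to be defined; adjust the bind path as needed):

```shell
# Sketch only: -B binds root_path (assumed to contain the data,
# output, and FreeSurfer license) into the container.
singularity run --cleanenv -B ${root_path}:${root_path} fmriprep_img \
    ${data_path} ${output_path} participant \
    --participant_label $PBS_JOBNAME \
    --fs-license-file ${freesurfer_file} \
    --n-cpus 8
```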

Besides that, do you have modules available on your cluster such that you can add more recent versions of GCC and glibc to your path? (You can check with module avail.)

Did you run the singularity build command on this cluster or did you get the container from somewhere else?


Thanks for your swift reply @Steven - much appreciated - sorry about the messy code too.

Will do, thanks for clarifying that!

Yes, was having a bit of trouble with the html formatting (long-time listener, first-time poster) that was not happy using $ - thanks for pointing out.

Ah, this is missing from the original code post (I will update the post), but the option was included in the original job run.

Yes, we have GCC v12.1.0 and glibc v2.62.6 available. Are these sufficient? Are they then added to my job script along with singularity (I somehow lost this line in the initial code post) as:
module load gcc/12.1.0 glibc/2.62.6 singularity/3.5.3

The container is built locally with the singularity build command and then transferred to the cluster. Would building it on the cluster be more beneficial here? I’ve had issues in the past, but I can attempt a cluster-side singularity build again if this is a better option.

My grateful thanks,

Hi @warrenlogge

I am not sure, but it is worth adding whatever the most recent available versions are.

Yup! Although I thought you said you had Singularity 3.7.0. I would also ask your cluster admin to consider installing Apptainer, which is what the Singularity project was recently renamed to. You would not have to change your fmriprep command at all, since Apptainer keeps singularity as an alias and accepts the same arguments.

In theory I don’t think it should matter, but it would be worth a shot.
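If you do retry a build on the cluster, pulling the official image from Docker Hub is the usual route; a sketch, with the version tag here only as an example:

```shell
# Build a Singularity image from the official Docker Hub image
# (example version tag; pick the release you need).
singularity build fmriprep-21.0.2.simg docker://nipreps/fmriprep:21.0.2
```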