Ciftify Segmentation fault error

I am getting a segmentation fault error with ciftify, and I am not sure where to start looking for the problem. 15 of my 160 subjects fail with this error. The FreeSurfer outputs look fine, and fMRIPrep worked on these subjects. Any advice? I’m new to this and could really use some help! Thanks!
I have put the full job script and output log here:
cifty fail job.pdf
ciftify fail log.pdf

When trying to run this command:
singularity run -B /data/scratch/pstew/tmp:/tmp fmriprep_ciftify.simg /data/project/vislab/raw/MBAR/BACKUP/MBAR/ /data/project/vislab/raw/MBAR/BACKUP/MBAR/derivatives/MBAR_reconallT2Space/ participant --participant_label MBAR10025 --verbose --read-from-derivatives /data/project/vislab/raw/MBAR/BACKUP/MBAR/derivatives/MBAR_reconallT2Space/ --rerun-if-incomplete --fs-license license.txt
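
For context, a minimal sketch of the SLURM wrapper shape my job script uses (the SBATCH values here are placeholders, not my actual settings; those are in the attached PDF):

#!/bin/bash
#SBATCH --job-name=ciftify_sub-MBAR10025   # hypothetical job name
#SBATCH --cpus-per-task=1                  # placeholder value
#SBATCH --mem-per-cpu=16000                # placeholder value (MB)

singularity run -B /data/scratch/pstew/tmp:/tmp fmriprep_ciftify.simg \
    /data/project/vislab/raw/MBAR/BACKUP/MBAR/ \
    /data/project/vislab/raw/MBAR/BACKUP/MBAR/derivatives/MBAR_reconallT2Space/ \
    participant --participant_label MBAR10025 --verbose \
    --read-from-derivatives /data/project/vislab/raw/MBAR/BACKUP/MBAR/derivatives/MBAR_reconallT2Space/ \
    --rerun-if-incomplete --fs-license license.txt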

This is the end of the log file:

/data/project/vislab/raw/MBAR/BACKUP/MBAR/derivatives/MBAR_reconallT2Space/freesurfer/sub-MBAR10025/surf/rh.white /tmp/tmpf6q7d5w8/MNINonLinear/native/sub-MBAR10025.R.aparc.native.label.gii
Failed with returncode 139
reading colortable from annotation file…
colortable with 36 entries read (originally /autofs/space/tanha_002/users/greve/fsdev.build/average/colortable_desikan_killiany.txt)
Segmentation fault

It’s failing for the right hemisphere after it completed the same command on the left hemisphere data, so I don’t think it’s a problem with ciftify or your software environment per se.

Segmentation faults can be a general error indicating that the machine is overloaded (returncode 139 is 128 + 11, i.e. the process was killed by SIGSEGV). It might be that too many jobs were running on one node and the system just got overloaded. It could be worth checking your HPC usage (CPU and RAM) on those nodes and checking whether any major failures were happening on your HPC.
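
On SLURM you can check this after the fact with the accounting tools; a quick sketch, where the job ID is a placeholder you would swap for your own:

# peak memory, CPU allocation, runtime, and exit state for a finished job
sacct -j 123456 --format=JobID,JobName,AllocCPUS,MaxRSS,Elapsed,State

# if your cluster installs the optional seff helper, it prints a one-page efficiency summary
seff 123456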

Looks like you already allocated a decent amount of RAM, but things might work better if you allocate more than one CPU per task in your SLURM call (I think I tend towards 4); see the sketch below.
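
For example, something like this in the job script header (4 is just my usual starting point, not a magic number, and the memory value is a placeholder for whatever you already request):

#SBATCH --cpus-per-task=4      # more than one CPU per task
#SBATCH --mem-per-cpu=16000    # placeholder (MB); keep your existing RAM request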

Let us know how it goes!

Thanks so much! I will change that and try again.
I really appreciate the quick response! So helpful!
Thanks again!

So I allocated more resources, up to:
#SBATCH --cpus-per-task=8
#SBATCH --mem-per-cpu=20000
And it is still stopping at the same place with the same error.