Weird Kmeans dseg failure

Hello All,

I was hoping someone might be able to advise me on an error I keep getting while running fMRIPrep through the 20.2.1.simg Singularity image. The output from the crash log is appended below this message.

Weirdly, I only get this error for one study’s data on my university HPC (I’ve tried many different subjects from this problematic study as well). The other study datasets I have run fMRIPrep on, using the same code, all complete successfully on the same HPC (and beautifully, I might add!).

The data used in the unsuccessful fMRIPrep attempts are a bit larger, so I’ve also increased the available virtual memory and cores (up to 264 GB of memory and 20 cores), and tried the --use-plugin and --low-mem switches, as well as various other switches to control the dedicated resources and/or simplify the dataset.
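For concreteness, a representative call looks roughly like this (the bind paths, plugin YAML, and license location are placeholders standing in for my actual setup, not exact values):

singularity run --cleanenv \
  -B /scratch/bids:/data -B /scratch/derivatives:/out -B /scratch/work:/work \
  20.2.1.simg \
  /data /out participant \
  --participant-label BANDA001 \
  --fs-license-file /data/license.txt \
  --mem-mb 264000 --nprocs 20 --omp-nthreads 8 \
  --low-mem --use-plugin /data/plugin.yml \
  -w /work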

Any assistance you can offer would be extremely appreciated, as I am at a bit of an impasse here.

PS- also just wanted to give a HUGE thanks to the developers/testers of this program, it really is superb!

Very Best,

Nick

Node inputs:

args =
bias_iters =
bias_lowpass =
environ = {'FSLOUTPUTTYPE': 'NIFTI_GZ'}
hyper =
img_type =
in_files =
init_seg_smooth =
init_transform =
iters_afterbias =
manual_seg =
mixel_smooth =
no_bias = True
no_pve =
number_classes =
other_priors =
out_basename =
output_biascorrected =
output_biasfield =
output_type = NIFTI_GZ
probability_maps = True
segment_iters =
segments = True
use_priors =
verbose =

Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/legacymultiproc.py", line 67, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 516, in run
    result = self._run_interface(execute=True)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 635, in _run_interface
    return self._run_command(execute)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 741, in _run_command
    result = self._interface.run(cwd=outdir)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 419, in run
    runtime = self._run_interface(runtime)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 814, in _run_interface
    self.raise_exception(runtime)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 745, in raise_exception
    ).format(**runtime.dictcopy())
RuntimeError: Command:
fast -N -p -g -S 1 /workdir/fmriprep_wf/single_subject_BANDA001_wf/anat_preproc_wf/t1w_dseg/sub-BANDA001_T1w_corrected_xform_masked.nii.gz
Standard output:
Exception: Not enough classes detected to init KMeans
Standard error:

Return code: 255

Hi,

I came across this issue too and wanted to share a workaround:

When running fMRIPrep, if you don’t run FreeSurfer, fMRIPrep uses FAST (from FSL), which is faster and runs KMeans clustering to segment CSF, GM, and WM.
However, my understanding is that if you include the option to run recon-all, segmentation will be carried out in FreeSurfer, bypassing FAST completely.
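In practice that just means not passing --fs-no-reconall, so recon-all runs by default; something along these lines (paths and subject label are placeholders):

# note: no --fs-no-reconall here, so FreeSurfer's recon-all is run
singularity run --cleanenv -B /scratch:/scratch 20.2.1.simg \
  /scratch/bids /scratch/derivatives participant \
  --participant-label BANDA001 \
  --fs-license-file /scratch/license.txt \
  -w /scratch/work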

This doesn’t really address the reason why FAST fails, which I’d be interested to learn.
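If anyone wants to dig into it, a quick first check might be to look at what FAST actually receives, i.e. the masked T1w named in the crash log, since “Not enough classes detected to init KMeans” suggests there may not be enough intensity variation (or enough non-zero voxels) left after masking to separate three tissue classes. For example, with FSL’s fslstats (the path is copied from the crash log above, so it may need translating to the host-side working directory; the interpretation is just a guess):

# min/max intensity, non-zero voxel count and volume, mean and SD of non-zero voxels
fslstats /workdir/fmriprep_wf/single_subject_BANDA001_wf/anat_preproc_wf/t1w_dseg/sub-BANDA001_T1w_corrected_xform_masked.nii.gz -R -V -M -S
# if the non-zero voxel count is tiny or the SD is near zero, the brain mask or
# the INU correction upstream would be the first thing to inspect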

Best,
Francesca

Thank you so much Francesca!

Hi,
I am having the same issue (strangely enough, also with just one dataset).
Did you manage to work out what the problem was?

Thank you,
Giuseppe