Tedana PCA error: Mean of empty slice

Summary of what happened:

Hi, I am a new user of tedana. I just successfully used tedana to denoise my multi-echo data for the first run. However, when I tried to denoise the second run, the following error occurred:

/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/tedana/decomposition/pca.py:209: RuntimeWarning: Mean of empty slice.
data_z = (data_z - data_z.mean()) / data_z.std() # var normalize everything

I have checked the adaptive mask but did not find anything unusual. Are there any possible solutions for this problem? Thank you for your help.
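In case it is useful, this is roughly how I inspected the adaptive mask (a minimal sketch in Python; the filename desc-adaptiveGoodSignal_mask.nii.gz and the run-02 output path are assumptions, so adjust them to whatever your tedana version writes):

import nibabel as nib
import numpy as np

# Adaptive mask written by tedana (filename and path assumed; adjust to your output).
adaptive_mask_path = "tedana_out/run-02/desc-adaptiveGoodSignal_mask.nii.gz"
mask_data = nib.load(adaptive_mask_path).get_fdata()

# Each voxel holds the number of echoes with good signal (0 = excluded).
# Counting voxels per value shows whether almost everything was dropped.
values, counts = np.unique(mask_data, return_counts=True)
for value, count in zip(values, counts):
    print(f"adaptive mask value {int(value)}: {int(count)} voxels")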

Command used (and if a helper script was used, a link to the helper script or the command generated):

My pipeline starts with afni_proc.py to realign the images, and then combines and denoises the echoes with tedana. Here are my commands.

afni_proc.py -subj_id $sub -blocks tshift volreg mask \
    -copy_anat "T1.nii" \
    -dsets_me_echo "Task_BOLD1/echo1.nii" "Task_BOLD2/echo1.nii" "Task_BOLD3/echo1.nii" "Task_BOLD4/echo1.nii" \
    -dsets_me_echo "Task_BOLD1/echo2.nii" "Task_BOLD2/echo2.nii" "Task_BOLD3/echo2.nii" "Task_BOLD4/echo2.nii" \
    -dsets_me_echo "Task_BOLD1/echo3.nii" "Task_BOLD2/echo3.nii" "Task_BOLD3/echo3.nii" "Task_BOLD4/echo3.nii" \
    -reg_echo 2 \
    -tcat_remove_first_trs 2 \
    -volreg_align_to MIN_OUTLIER

tedana -d "task_bold_run${run}_echo1.nii" "task_bold_run${run}_echo2.nii" "task_bold_run${run}_echo3.nii" -e 14.8 35.02 55.24 --out-dir $outputdir

Version:

tedana v24.0.2

Relevant log outputs (up to 20 lines):

INFO     pca:tedpca:203 Computing PCA of optimally combined multi-echo data with selection criteria: aic
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/tedana/decomposition/pca.py:209: RuntimeWarning: Mean of empty slice.
  data_z = (data_z - data_z.mean()) / data_z.std()  # var normalize everything
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/numpy/_core/_methods.py:138: RuntimeWarning: invalid value encountered in scalar divide
  ret = ret.dtype.type(ret / rcount)
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/numpy/_core/_methods.py:218: RuntimeWarning: Degrees of freedom <= 0 for slice
  ret = _var(a, axis=axis, dtype=dtype, out=out, ddof=ddof,
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/numpy/_core/_methods.py:175: RuntimeWarning: invalid value encountered in divide
  arrmean = um.true_divide(arrmean, div, out=arrmean,
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/numpy/_core/_methods.py:210: RuntimeWarning: invalid value encountered in scalar divide
  ret = ret.dtype.type(ret / rcount)
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/tedana/io.py:833: UserWarning: Data array used to create a new image contains 64-bit ints. This is likely due to creating the array with numpy and passing `int` as the `dtype`. Many tools such as FSL and SPM cannot deal with int64 in Nifti images, so for compatibility the data has been converted to int32.
  nii = new_img_like(ref_img, newdata, affine=affine, copy_header=copy_header)
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.9/bin/tedana", line 8, in <module>
    sys.exit(_main())
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/tedana/workflows/tedana.py", line 1077, in _main
    tedana_workflow(**kwargs, tedana_command=tedana_command)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/tedana/workflows/tedana.py", line 762, in tedana_workflow
    dd, n_components = decomposition.tedpca(
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/tedana/decomposition/pca.py", line 215, in tedpca
    _ = ma_pca.fit_transform(data_img, mask_img)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/mapca/mapca.py", line 479, in fit_transform
    self._fit(img, mask, subsample_depth=subsample_depth)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/mapca/mapca.py", line 156, in _fit
    x = self.scaler_.fit_transform(x.T).T  # This was x_sc
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/sklearn/utils/_set_output.py", line 316, in wrapped
    data_to_wrap = f(self, X, *args, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/sklearn/base.py", line 1098, in fit_transform
    return self.fit(X, **fit_params).transform(X)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/sklearn/preprocessing/_data.py", line 878, in fit
    return self.partial_fit(X, y, sample_weight)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/sklearn/base.py", line 1473, in wrapper
    return fit_method(estimator, *args, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/sklearn/preprocessing/_data.py", line 914, in partial_fit
    X = self._validate_data(
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/sklearn/base.py", line 633, in _validate_data
    out = check_array(X, input_name="X", **check_params)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/sklearn/utils/validation.py", line 1096, in check_array
    raise ValueError(
ValueError: Found array with 0 feature(s) (shape=(352, 0)) while a minimum of 1 is required by StandardScaler.

This looks like an issue in the PCA step, but I'm not sure what's causing it. My best guess is that empty or corrupted data is being passed in somewhere, but I'm not sure where that would be happening. You note that the adaptive mask looks reasonable. Have you opened the full volumes in the AFNI viewer to confirm that nothing looks odd, either in space or in the time series?
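If it helps, a quick sanity check along these lines would catch an echo file that is empty, contains NaNs, or has a different shape from the others (a rough sketch in Python; the run-2 filenames are taken from your tedana command and may need adjusting):

import nibabel as nib
import numpy as np

# Run-2 echo files as passed to tedana (paths assumed from the command above).
echo_files = [
    "task_bold_run2_echo1.nii",
    "task_bold_run2_echo2.nii",
    "task_bold_run2_echo3.nii",
]

for path in echo_files:
    data = nib.load(path).get_fdata()
    print(
        f"{path}: shape={data.shape}, "
        f"nonzero voxels={int(np.count_nonzero(data))}, "
        f"NaNs={int(np.isnan(data).sum())}, "
        f"min={np.nanmin(data):.2f}, max={np.nanmax(data):.2f}"
    )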

Tedana also writes an output log to a tsv file. Could you share that here? Maybe it contains some info on where things broke down.

Best

Dan

Hi, thanks for your reply!
I have just checked the data but found nothing odd. Here is the tsv log file:

2024-11-21T10:16:38 tedana.tedana_workflow INFO Using output directory: /Volumes/OneTouch/Grid_cell_dataset/results/preprocess/sub-01/tedana_out/run-01
2024-11-21T10:16:38 tedana.tedana_workflow INFO Initializing and validating component selection tree
2024-11-21T10:16:38 component_selector.validate_tree WARNING Decision tree includes fields that are not used or logged ['_comment']
2024-11-21T10:16:38 component_selector.init INFO Performing component selection with tedana_orig_decision_tree
2024-11-21T10:16:38 component_selector.init INFO Very similar to the decision tree designed by Prantik Kundu
2024-11-21T10:16:38 tedana.tedana_workflow INFO Loading input data: ['task_bold_run1_echo1.nii', 'task_bold_run1_echo2.nii', 'task_bold_run1_echo3.nii']
2024-11-21T10:18:08 tedana.tedana_workflow INFO Computing EPI mask from first echo
2024-11-21T10:18:24 utils.make_adaptive_mask INFO Echo-wise intensity thresholds for adaptive mask: [4277.32031065 2780.40773796 1774.95821635]
2024-11-21T10:18:24 utils.make_adaptive_mask WARNING 25 voxels in user-defined mask do not have good signal. Removing voxels from mask.
2024-11-21T10:18:24 tedana.tedana_workflow INFO Computing T2* map
2024-11-21T10:18:54 combine.make_optcom INFO Optimally combining data with voxel-wise T2* estimates
2024-11-21T10:19:13 tedana.tedana_workflow INFO Writing optimally combined data set: /Volumes/OneTouch/Grid_cell_dataset/results/preprocess/sub-01/tedana_out/run-01/desc-optcom_bold.nii.gz
2024-11-21T10:19:13 pca.tedpca INFO Computing PCA of optimally combined multi-echo data with selection criteria: aic

Best

May Wei