Summary of what happened:
Hi, I am a new user of tedana. I just successfully used tedana to denoise the multi-echo data from my first run. However, when I ran the same command on the second run, it produced the following warning and then crashed (full traceback in the log section below):
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/tedana/decomposition/pca.py:209: RuntimeWarning: Mean of empty slice.
data_z = (data_z - data_z.mean()) / data_z.std() # var normalize everything
I have checked the adaptive mask but did not find anything unusual. Are there any possible solutions to this problem? Thank you for your help.
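In case it helps, this is the minimal check I used to look at the adaptive mask. It assumes the adaptive good-signal mask filename written by recent tedana versions (desc-adaptiveGoodSignal_mask.nii.gz) and a placeholder output directory, so the path may need adjusting:

import nibabel as nib
import numpy as np

# Load the adaptive mask written by tedana for the failing run
# (filename and path are assumptions; adjust to the actual --out-dir).
adaptive = nib.load("outputdir/desc-adaptiveGoodSignal_mask.nii.gz").get_fdata()

# Count voxels per number of "good" echoes, and voxels with at least one good echo.
values, counts = np.unique(adaptive, return_counts=True)
print(dict(zip(values.astype(int), counts.tolist())))
print("voxels with >= 1 good echo:", int((adaptive >= 1).sum()))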
Command used (and if a helper script was used, a link to the helper script or the command generated):
My pipeline first runs afni_proc.py to realign the images and then combines and denoises the echoes with tedana. Here are the commands.
afni_proc.py \
    -subj_id $sub \
    -blocks tshift volreg mask \
    -copy_anat "T1.nii" \
    -dsets_me_echo "Task_BOLD1/echo1.nii" "Task_BOLD2/echo1.nii" "Task_BOLD3/echo1.nii" "Task_BOLD4/echo1.nii" \
    -dsets_me_echo "Task_BOLD1/echo2.nii" "Task_BOLD2/echo2.nii" "Task_BOLD3/echo2.nii" "Task_BOLD4/echo2.nii" \
    -dsets_me_echo "Task_BOLD1/echo3.nii" "Task_BOLD2/echo3.nii" "Task_BOLD3/echo3.nii" "Task_BOLD4/echo3.nii" \
    -reg_echo 2 \
    -tcat_remove_first_trs 2 \
    -volreg_align_to MIN_OUTLIER
tedana \
    -d "task_bold_run${run}_echo1.nii" \
       "task_bold_run${run}_echo2.nii" \
       "task_bold_run${run}_echo3.nii" \
    -e 14.8 35.02 55.24 \
    --out-dir $outputdir
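As a sanity check on the inputs to the failing run, I also looked at the echo images themselves. The filenames below assume $run expanded to 2, so they are placeholders for the actual files passed to tedana:

import nibabel as nib
import numpy as np

# Check that all three echoes for the failing run have the same shape
# and actually contain non-zero data (filenames are assumptions for $run=2).
for echo in ("task_bold_run2_echo1.nii",
             "task_bold_run2_echo2.nii",
             "task_bold_run2_echo3.nii"):
    img = nib.load(echo)
    data = img.get_fdata()
    nonzero = int((np.abs(data).sum(axis=-1) > 0).sum())
    print(echo, "shape:", img.shape, "voxels with nonzero signal:", nonzero)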
Version:
tedana v24.0.2
Relevant log outputs (up to 20 lines):
INFO pca:tedpca:203 Computing PCA of optimally combined multi-echo data with selection criteria: aic
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/tedana/decomposition/pca.py:209: RuntimeWarning: Mean of empty slice.
data_z = (data_z - data_z.mean()) / data_z.std() # var normalize everything
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/numpy/_core/_methods.py:138: RuntimeWarning: invalid value encountered in scalar divide
ret = ret.dtype.type(ret / rcount)
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/numpy/_core/_methods.py:218: RuntimeWarning: Degrees of freedom <= 0 for slice
ret = _var(a, axis=axis, dtype=dtype, out=out, ddof=ddof,
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/numpy/_core/_methods.py:175: RuntimeWarning: invalid value encountered in divide
arrmean = um.true_divide(arrmean, div, out=arrmean,
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/numpy/_core/_methods.py:210: RuntimeWarning: invalid value encountered in scalar divide
ret = ret.dtype.type(ret / rcount)
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/tedana/io.py:833: UserWarning: Data array used to create a new image contains 64-bit ints. This is likely due to creating the array with numpy and passing `int` as the `dtype`. Many tools such as FSL and SPM cannot deal with int64 in Nifti images, so for compatibility the data has been converted to int32.
nii = new_img_like(ref_img, newdata, affine=affine, copy_header=copy_header)
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.9/bin/tedana", line 8, in <module>
sys.exit(_main())
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/tedana/workflows/tedana.py", line 1077, in _main
tedana_workflow(**kwargs, tedana_command=tedana_command)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/tedana/workflows/tedana.py", line 762, in tedana_workflow
dd, n_components = decomposition.tedpca(
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/tedana/decomposition/pca.py", line 215, in tedpca
_ = ma_pca.fit_transform(data_img, mask_img)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/mapca/mapca.py", line 479, in fit_transform
self._fit(img, mask, subsample_depth=subsample_depth)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/mapca/mapca.py", line 156, in _fit
x = self.scaler_.fit_transform(x.T).T # This was x_sc
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/sklearn/utils/_set_output.py", line 316, in wrapped
data_to_wrap = f(self, X, *args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/sklearn/base.py", line 1098, in fit_transform
return self.fit(X, **fit_params).transform(X)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/sklearn/preprocessing/_data.py", line 878, in fit
return self.partial_fit(X, y, sample_weight)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/sklearn/base.py", line 1473, in wrapper
return fit_method(estimator, *args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/sklearn/preprocessing/_data.py", line 914, in partial_fit
X = self._validate_data(
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/sklearn/base.py", line 633, in _validate_data
out = check_array(X, input_name="X", **check_params)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/sklearn/utils/validation.py", line 1096, in check_array
raise ValueError(
ValueError: Found array with 0 feature(s) (shape=(352, 0)) while a minimum of 1 is required by StandardScaler.