# How to run multi-echo data on fMRIPrep-docker?

### Summary of what happened:

I tried to use fmriprep-docker to run multi-echo data, but the results did not combine the echoes. I think some part of my command needs to be corrected, but I have no idea what right now.

### Command used (and if a helper script was used, a link to the helper script or the command generated):

The config.json is as below:

```json
{
  "descriptions": [
    {
      "dataType": "anat",
      "modalityLabel": "T1w",
      "criteria": {
        "SeriesDescription": "t1_mprage_sag_p2",
        "ProtocolName": "t1_mprage_sag_p2",
        "SeriesNumber": 2
      }
    },
    {
      "dataType": "func",
      "modalityLabel": "bold",
      "criteria": {
        "SeriesDescription": "REsting_3.0iso_conv_GRA2_MB3_ME4",
        "ProtocolName": "REsting_3.0iso_conv_GRA2_MB3_ME4",
        "ScanningSequence": "EP"
      },
      "sidecarChanges": {}
    }
  ]
}
```
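As a side note, the func description above never distinguishes the three echo series from one another, which is presumably why they ended up labeled as separate runs. In dcm2bids 2.x, one common pattern is to write one description per echo and add the entity via a `customLabels` field, matching on the `EchoNumber` field that dcm2niix writes into the sidecar. A hedged sketch (field names vary across dcm2bids versions, and the exact criteria here are assumptions based on the protocol above):

```json
{
  "dataType": "func",
  "modalityLabel": "bold",
  "customLabels": "task-rest_echo-1",
  "criteria": {
    "SeriesDescription": "REsting_3.0iso_conv_GRA2_MB3_ME4",
    "EchoNumber": 1
  }
}
```

with analogous descriptions for echoes 2 and 3.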


The command on PowerShell is:
fmriprep-docker D:\bids2 D:\bids2\derivatives participant --participant-label 001 --fs-license-file D:\FClearning\license.txt --fs-no-reconall

fMRIPrep version: 22.1.1

Environment: Docker

### Relevant log outputs (up to 20 lines):

Here is the logs/CITATION.tex:

Results included in this manuscript come from preprocessing performed
using \emph{fMRIPrep} 22.1.1 (\citet{fmriprep1}; \citet{fmriprep2};
RRID:SCR\_016216), which is based on \emph{Nipype} 1.8.5
(\citet{nipype1}; \citet{nipype2}; RRID:SCR\_002502).

\begin{description}
\item[Anatomical data preprocessing]
A total of 1 T1-weighted (T1w) images were found within the input BIDS
dataset.The T1-weighted (T1w) image was corrected for intensity
non-uniformity (INU) with \texttt{N4BiasFieldCorrection} \citep{n4},
distributed with ANTs 2.3.3 \citep[RRID:SCR\_004757]{ants}, and used as
T1w-reference throughout the workflow. The T1w-reference was then
skull-stripped with a \emph{Nipype} implementation of the
\texttt{antsBrainExtraction.sh} workflow (from ANTs), using OASIS30ANTs
as target template. Brain tissue segmentation of cerebrospinal fluid
(CSF), white-matter (WM) and gray-matter (GM) was performed on the
brain-extracted T1w using \texttt{fast} \citep[FSL 6.0.5.1:57b01774,
RRID:SCR\_002823,][]{fsl_fast}. Volume-based spatial normalization to
one standard space (MNI152NLin2009cAsym) was performed through nonlinear
registration with \texttt{antsRegistration} (ANTs 2.3.3), using
brain-extracted versions of both T1w reference and the T1w template. The
following template was selected for spatial normalization: \emph{ICBM
152 Nonlinear Asymmetrical template version 2009c}
{[}\citet{mni152nlin2009casym}, RRID:SCR\_008796; TemplateFlow ID:
MNI152NLin2009cAsym{]}.
\item[Functional data preprocessing]
For each of the 3 BOLD runs found per subject (across all tasks and
sessions), the following preprocessing was performed. First, a reference
volume and its skull-stripped version were generated using a custom
methodology of \emph{fMRIPrep}. Head-motion parameters with respect to
the BOLD reference (transformation matrices, and six corresponding
rotation and translation parameters) are estimated before any
spatiotemporal filtering using \texttt{mcflirt} \citep[FSL
6.0.5.1:57b01774,][]{mcflirt}. BOLD runs were slice-time corrected to
0.694s (0.5 of slice acquisition range 0s-1.39s) using \texttt{3dTshift}
from AFNI \citep[RRID:SCR\_005927]{afni}. The BOLD time-series
(including slice-timing correction when applied) were resampled onto
their original, native space by applying the transforms to correct for
head-motion. These resampled BOLD time-series will be referred to as
\emph{preprocessed BOLD in original space}, or just \emph{preprocessed
BOLD}. The BOLD reference was then co-registered to the T1w reference
using \texttt{mri\_coreg} (FreeSurfer) followed by \texttt{flirt}
\citep[FSL 6.0.5.1:57b01774,][]{flirt} with the boundary-based
registration \citep{bbr} cost-function. Co-registration was configured
with six degrees of freedom. Several confounding time-series were
calculated based on the \emph{preprocessed BOLD}: framewise displacement
(FD), DVARS and three region-wise global signals. FD was computed using
two formulations following Power (absolute sum of relative motions,
\citet{power_fd_dvars}) and Jenkinson (relative root mean square
displacement between affines, \citet{mcflirt}). FD and DVARS are
calculated for each functional run, both using their implementations in
\emph{Nipype} \citep[following the definitions by][]{power_fd_dvars}.
The three global signals are extracted within the CSF, the WM, and the
whole-brain masks. Additionally, a set of physiological regressors were
extracted to allow for component-based noise correction
\citep[\emph{CompCor},][]{compcor}. Principal components are estimated
after high-pass filtering the \emph{preprocessed BOLD} time-series
(using a discrete cosine filter with 128s cut-off) for the two
\emph{CompCor} variants: temporal (tCompCor) and anatomical (aCompCor).
tCompCor components are then calculated from the top 2\% variable voxels
within the brain mask. For aCompCor, three probabilistic masks (CSF, WM
and combined CSF+WM) are generated in anatomical space. The
implementation differs from that of Behzadi et al.~in that instead of
eroding the masks by 2 pixels on BOLD space, a mask of pixels that
likely contain a volume fraction of GM is subtracted from the aCompCor
masks. This mask is obtained by thresholding the corresponding partial
volume map at 0.05, and it ensures components are not extracted from
voxels containing a minimal fraction of GM. Finally, these masks are
resampled into BOLD space and binarized by thresholding at 0.99 (as in
the original implementation). Components are also calculated separately
within the WM and CSF masks. For each CompCor decomposition, the
\emph{k} components with the largest singular values are retained, such
that the retained components' time series are sufficient to explain 50
percent of variance across the nuisance mask (CSF, WM, combined, or
temporal). The remaining components are dropped from consideration. The
head-motion estimates calculated in the correction step were also placed
within the corresponding confounds file. The confound time series
derived from head motion estimates and global signals were expanded with
the inclusion of temporal derivatives and quadratic terms for each
\citep{confounds_satterthwaite_2013}. Frames that exceeded a threshold
of 0.5 mm FD or 1.5 standardized DVARS were annotated as motion
outliers. Additional nuisance timeseries are calculated by means of
principal components analysis of the signal found within a thin band
(\emph{crown}) of voxels around the edge of the brain, as proposed by
\citep{patriat_improved_2017}. The BOLD time-series were resampled into
standard space, generating a \emph{preprocessed BOLD run in
MNI152NLin2009cAsym space}. First, a reference volume and its
skull-stripped version were generated using a custom methodology of
\emph{fMRIPrep}. All resamplings can be performed with \emph{a single
interpolation step} by composing all the pertinent transformations
(i.e.~head-motion transform matrices, susceptibility distortion
correction when available, and co-registrations to anatomical and output
spaces). Gridded (volumetric) resamplings were performed using
\texttt{antsApplyTransforms} (ANTs), configured with Lanczos
interpolation to minimize the smoothing effects of other kernels
\citep{lanczos}. Non-gridded (surface) resamplings were performed using
\texttt{mri\_vol2surf} (FreeSurfer).
\end{description}


### Screenshots / relevant information:

Hello all,
I am new to fMRIPrep, and I have a question about how to process multi-echo data with it. What I have learned is to first use dcm2bids to convert the raw multi-echo data to BIDS, and then use fmriprep-docker to run the pipeline (my PC runs Windows). I don’t know if I understand this correctly?
What’s more, I am not sure how to write the dcm2bids config.json for multi-echo data, since there seem to be no tutorials about it. I just tried the basic command (since I read somewhere that fMRIPrep will automatically combine the echoes), but the results seem to process each echo separately (just like single-echo data).
My command on PowerShell is below, and I don’t know if it is right:
fmriprep-docker D:\bids2 D:\bids2\derivatives participant --participant-label 001 --fs-license-file D:\FClearning\license.txt --fs-no-reconall

I would really appreciate any reply! Thanks a lot!

Andrea

Hi @Andrea1, I moved this to the software support category and added the template above your original post. Please fill that out when you get an opportunity.

If the dataset is not valid BIDS, we can’t support that. In general, I would expect people to use a tool like BIDSCoin or Heudiconv to convert their datasets to BIDS, rather than using dcm2niix directly, which does convert the file formats and metadata but does not handle naming conventions.

Please provide a listing of your dataset (e.g., with the tree command).

Hi @effigies, thanks so much for your advice! I have filled out the template above; if more information is needed, please let me know.
Before running fmriprep-docker on the multi-echo data, I used the BIDS validator to check the BIDS format, and it reported no errors. I don’t know if the failure to process the multi-echo data has something to do with the BIDS format? My thought is that maybe I need to look at my fmriprep-docker command to find the solution. I will learn more about multi-echo data next and see if I can fix this problem.
All in all, thanks a lot for your help!

Hi @Andrea1,

dcm2bids has done the job it is supposed to do, especially since you used the BIDS validator to check your BIDS structure, so I’m pretty confident in your BIDS output. @effigies is right: we need the output of the tree command. Something is off, but from what you wrote so far I cannot say what the issue is. What is the fMRIPrep error?

Hi @abore,
Thanks a lot for your reply! Following @effigies’ and your advice, I used the tree command to get the directory listing below:

The fmriprep command ran without errors, but the results are not what I expected: I used multi-echo data and thought fmriprep-docker would automatically combine the three echoes, but it did not. So there must be something I need to correct, and I am trying to figure it out.

This shows multiple runs, not multiple echoes. Assuming you only have one run with three echoes, you should change those filenames from sub-001_task-rest_run-*_bold.nii.gz to sub-001_task-rest_echo-*_bold.nii.gz.
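For the record, that rename can also be scripted instead of rerunning dcm2bids. A minimal Python sketch (filenames taken from this thread; it assumes run-N really corresponds to echo N, i.e. a single run with one file per echo, and it renames the JSON sidecars along with the NIfTIs):

```python
from pathlib import Path


def run_to_echo(func_dir: str) -> list[tuple[str, str]]:
    """Rename sub-001_task-rest_run-N_bold.* to sub-001_task-rest_echo-N_bold.*

    Returns (old_name, new_name) pairs. Only valid when each "run" is
    actually one echo of a single multi-echo run.
    """
    renamed = []
    # The glob catches both the .nii.gz images and their .json sidecars.
    for f in sorted(Path(func_dir).glob("sub-001_task-rest_run-*_bold.*")):
        new_name = f.name.replace("run-", "echo-", 1)
        f.rename(f.with_name(new_name))
        renamed.append((f.name, new_name))
    return renamed
```

Each echo's sidecar must also carry its own EchoTime (dcm2niix normally writes this), or fMRIPrep will not treat the files as one multi-echo series.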


Hi @effigies,
Thanks for your suggestion! I reran dcm2bids and successfully changed the filenames as you said, and fMRIPrep now combines the echoes!
But another error came up; it says that fMRIPrep did not finish successfully. The HTML report shows:

FileNotFoundError: No such file or directory '/tmp/work/fmriprep_22_1_wf/single_subject_001_wf/func_preproc_task_rest_echo_1_wf/bold_t2smap_wf/t2smap_node/T2starmap.nii.gz' for output 't2star_map' of a T2SMap interface

I think the error may have something to do with the pipeline? How can this problem be solved? Any advice would help me a lot, thank you!

@Andrea1 please try specifying a working directory with the -w argument and try again (also, I don’t recommend using the --fs-no-reconall flag).

Best,
Steven

Hi @Steven, thanks for your suggestion!
I have tried specifying a working directory, but fMRIPrep still did not finish successfully. The error is a little different from the one above, but both involve the T2SMap interface.

I don’t know where the problem is; the command I used is:
fmriprep-docker D:\bids4 D:\bids4\derivatives participant --participant-label 001 --fs-license-file D:\FClearning\license.txt -w D:\scratch

And the error is:

```
Node Name: fmriprep_22_1_wf.single_subject_001_wf.func_preproc_task_rest_echo_1_wf.bold_t2smap_wf.t2smap_node
File: /out/sub-001/log/20230115-084836_36922f1f-98d7-4fb1-a59f-ba79bc8a0433/crash-20230115-140807-root-t2smap_node-3c5e3278-1bda-43c6-9961-e90c7cdcb124.txt
Inputs:
args:
echo_times: [0.0132, 0.03245, 0.0517]
environ: {}
fittype: curvefit
in_files:
Traceback (most recent call last):
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 527, in run
    result = self._run_interface(execute=True)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 645, in _run_interface
    return self._run_command(execute)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 771, in _run_command
    raise NodeExecutionError(msg)
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node t2smap_node.

Cmdline:
Stdout:

Stderr:
INFO     t2smap:t2smap_workflow:241 Using output directory: /scratch/fmriprep_22_1_wf/single_subject_001_wf/func_preproc_task_rest_echo_1_wf/bold_t2smap_wf/t2smap_node
Killed
Traceback:
Traceback (most recent call last):
  File "/opt/conda/lib/python3.9/site-packages/nipype/interfaces/base/core.py", line 454, in aggregate_outputs
    setattr(outputs, key, val)
  File "/opt/conda/lib/python3.9/site-packages/nipype/interfaces/base/traits_extension.py", line 330, in validate
    value = super(File, self).validate(objekt, name, value, return_pathlike=True)
  File "/opt/conda/lib/python3.9/site-packages/nipype/interfaces/base/traits_extension.py", line 135, in validate
    self.error(objekt, name, str(value))
  File "/opt/conda/lib/python3.9/site-packages/traits/base_trait_handler.py", line 74, in error
    raise TraitError(
traits.trait_errors.TraitError: The 'optimal_comb' trait of a T2SMapOutputSpec instance must be a pathlike object or string representing an existing file, but a value of '/scratch/fmriprep_22_1_wf/single_subject_001_wf/func_preproc_task_rest_echo_1_wf/bold_t2smap_wf/t2smap_node/desc-optcom_bold.nii.gz' was specified.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.9/site-packages/nipype/interfaces/base/core.py", line 401, in run
    outputs = self.aggregate_outputs(runtime)
  File "/opt/conda/lib/python3.9/site-packages/nipype/interfaces/base/core.py", line 461, in aggregate_outputs
    raise FileNotFoundError(msg)
FileNotFoundError: No such file or directory '/scratch/fmriprep_22_1_wf/single_subject_001_wf/func_preproc_task_rest_echo_1_wf/bold_t2smap_wf/t2smap_node/desc-optcom_bold.nii.gz' for output 'optimal_comb' of a T2SMap interface
```


Also, before fMRIPrep runs, there is a warning:

> <8GB of RAM is available within your Docker environment. Some parts of fMRIPrep may fail to complete.

I don’t know if this is the cause of the failure?
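For what it's worth, the `Killed` line in the Stderr above is the classic signature of the Linux out-of-memory killer, so that <8GB warning is a plausible culprit. Assuming Docker Desktop on the WSL2 backend (a guess about the setup; on the Hyper-V backend the limit lives under Docker Desktop → Settings → Resources instead), the memory available to containers can be raised via a `%UserProfile%\.wslconfig` file, for example:

```ini
; %UserProfile%\.wslconfig  (restart with `wsl --shutdown` afterwards)
[wsl2]
memory=12GB
```

fMRIPrep's `--low-mem` flag can also reduce the peak memory footprint at the cost of more disk I/O.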

What is the output of BIDS validation after running your new dcm2bids configuration?

It has 2 warnings.