Hi, I ran the fMRIPrep pipeline for the first time yesterday. While the anatomical data was preprocessed completely, the functional derivatives are missing the final preprocessed BOLD file. I have the following files in the folder: task-resting_desc-brain_mask.nii, task-resting_desc-coreg_boldref.nii, task-resting_desc-hmc_boldref.nii, and the accompanying JSON files for each.
Not sure what’s wrong, would love some help/suggestions. Thanks in advance!
Command used (and if a helper script was used, a link to the helper script or the command generated):
Data formatted according to a validatable standard? Please provide the output of the validator:
Summary:                 Available Tasks:    Available Modalities:
36 Files, 431.83MB       resting             MRI
2 - Subjects
2 - Sessions
If you have any questions, please post on https://neurostars.org/tags/bids.
Relevant log outputs (up to 20 lines):
I can't see a log/HTML report file for the preprocessed data.
Screenshots / relevant information:
The run produced no errors in PowerShell. I have run the command twice, for 2 subjects, and got the same result both times.
In the future, please use the Software Support post category and post template, which prompts you for important information (you can see I reformatted your post for you this time). Editing your post to add this information will help us debug your issue. Beyond the information requested in the template, I would want to know: 1) what resources are you devoting to the job, 2) were there any errors in the log, and 3) is this subject-specific or does it happen for everyone? Additionally, --fs-no-reconall is not recommended.
Hi Steven, thanks for the edit and suggestions! 1. --n_cpus 4 --mem 16GB 2. No errors; a few warnings did pop up during the run but were resolved, and the run completed (didn't crash) 3. I tried it for 2 subjects and got the same error for both. (I used --fs-no-reconall because I wanted to try the whole run for one subject before preprocessing all of them; also, I only need the data to fit to a model, so I wasn't sure it was needed.)
Without seeing a log it will be hard to debug much further, but I will say that it looks like there are a lot of runs there, and perhaps 16GB isn't enough memory. Also, you are using a version of fMRIPrep without AROMA functionality. If you want to use AROMA, you should specify the MNI152NLin6Asym (res-02) output space and then use the outputs in fMRIPost-AROMA.
Hello,
I did try specifying MNI152NLin6Asym in the previous run too, but faced the same problem. Since there is no log file, is there any other way to troubleshoot this?
Node: fmriprep_24_1_wf.sub_NTHC1001_wf.bold_ses_1_task_resting_wf.bold_confounds_wf.acc_masks
Working directory: /scratch/fmriprep_24_1_wf/sub_NTHC1001_wf/bold_ses_1_task_resting_wf/bold_confounds_wf/acc_masks
Node inputs:
bold_zooms = <undefined>
in_vfs = <undefined>
is_aseg = False
Traceback (most recent call last):
  File "/opt/conda/envs/fmriprep/lib/python3.11/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
    result["result"] = node.run(updatehash=updatehash)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/envs/fmriprep/lib/python3.11/site-packages/nipype/pipeline/engine/nodes.py", line 527, in run
    result = self._run_interface(execute=True)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/envs/fmriprep/lib/python3.11/site-packages/nipype/pipeline/engine/nodes.py", line 645, in _run_interface
    return self._run_command(execute)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/envs/fmriprep/lib/python3.11/site-packages/nipype/pipeline/engine/nodes.py", line 771, in _run_command
    raise NodeExecutionError(msg)
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node acc_masks.

Traceback:
Traceback (most recent call last):
  File "/opt/conda/envs/fmriprep/lib/python3.11/site-packages/nibabel/loadsave.py", line 100, in load
    stat_result = os.stat(filename)
                  ^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '/out/sub-NTHC1001/ses-1/anat/sub-NTHC1001_ses-1_label-CSF_probseg.nii.gz'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/conda/envs/fmriprep/lib/python3.11/site-packages/nipype/interfaces/base/core.py", line 397, in run
    runtime = self._run_interface(runtime)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/envs/fmriprep/lib/python3.11/site-packages/fmriprep/interfaces/confounds.py", line 82, in _run_interface
    self._results['out_masks'] = acompcor_masks(
                                 ^^^^^^^^^^^^^^^
  File "/opt/conda/envs/fmriprep/lib/python3.11/site-packages/fmriprep/utils/confounds.py", line 120, in acompcor_masks
    csf_vf = nb.load(csf_file)
             ^^^^^^^^^^^^^^^^^
  File "/opt/conda/envs/fmriprep/lib/python3.11/site-packages/nibabel/loadsave.py", line 102, in load
    raise FileNotFoundError(f"No such file or no access: '{filename}'")
FileNotFoundError: No such file or no access: '/out/sub-NTHC1001/ses-1/anat/sub-NTHC1001_ses-1_label-CSF_probseg.nii.gz'
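The file the acc_masks node failed to load is the CSF tissue-probability map that the anatomical workflow should have written. As a quick sanity check, you can rebuild the exact path from the traceback and see whether it exists on disk — a minimal sketch (the helper name is mine, and the labels below are just the ones from this thread):

```python
from pathlib import Path

def expected_csf_probseg(out_dir, subject, session):
    """Build the anatomical CSF probability-map path that fMRIPrep's
    bold_confounds_wf (acc_masks node) tried to load, following the
    BIDS-derivatives naming shown in the traceback."""
    sub, ses = f"sub-{subject}", f"ses-{session}"
    return Path(out_dir) / sub / ses / "anat" / f"{sub}_{ses}_label-CSF_probseg.nii.gz"

p = expected_csf_probseg("/out", "NTHC1001", "1")
print(p)
print(p.exists())  # if this is False, the anatomical segmentation was never written
```

If the file is genuinely absent from the output directory, the functional confounds step cannot run, which would explain the missing final preprocessed BOLD file even though no error surfaced in the console.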
If you cannot increase your memory allowance on Docker any further (via the Docker Desktop settings), then you can loop across fMRIPrep commands with different BIDS filter files to run individual tasks or sessions separately.
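To illustrate the filter-file approach: a minimal sketch that writes one BIDS filter file per session so each fMRIPrep call only preprocesses that session's resting-state run, keeping peak memory down. All paths, session labels, and the printed command are examples — adapt them to your dataset, and note the filter keys follow fMRIPrep's `--bids-filter-file` format (top-level query names like "bold" mapped to BIDS entities):

```python
import json
from pathlib import Path

for ses in ["1", "2"]:  # example session labels
    # Restrict only the BOLD query; anatomical queries keep their defaults.
    bids_filter = {
        "bold": {"datatype": "func", "task": "resting", "session": ses, "suffix": "bold"},
    }
    filt = Path(f"filter_ses-{ses}.json")
    filt.write_text(json.dumps(bids_filter, indent=2))
    # Each filter file then goes to its own fMRIPrep invocation:
    print(
        f"fmriprep /data /out participant --participant-label NTHC1001 "
        f"--output-spaces MNI152NLin6Asym:res-2 "
        f"--bids-filter-file {filt}"
    )
```

Running the resulting commands one at a time (rather than letting one run process every session at once) is one way to stay under a 16GB Docker memory cap.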