The following error first occurred after updating to 1.1.4 (to check whether the working-directory issue was fixed for the Windows Docker environment).
Many thanks
SDC: no fieldmaps found or they were ignored (/data/sub-030_fm/func/sub-030_task-bat_run-1_bold.nii.gz).
Process Process-2:
Traceback (most recent call last):
File "/usr/local/miniconda/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/local/miniconda/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/miniconda/lib/python3.6/site-packages/fmriprep/cli/run.py", line 542, in build_workflow
ignore_aroma_err=opts.ignore_aroma_denoising_errors,
File "/usr/local/miniconda/lib/python3.6/site-packages/fmriprep/workflows/base.py", line 210, in init_fmriprep_wf
ignore_aroma_err=ignore_aroma_err)
File "/usr/local/miniconda/lib/python3.6/site-packages/fmriprep/workflows/base.py", line 513, in init_single_subject_wf
'inputnode.t1_2_fsnative_reverse_transform')]),
File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/engine/workflows.py", line 155, in connect
self._check_nodes(newnodes)
File "/usr/local/miniconda/lib/python3.6/site-packages/nipype/pipeline/engine/workflows.py", line 727, in _check_nodes
'Duplicate node name "%s" found.' % node.name)
OSError: Duplicate node name "func_preproc_task_bat_run_1_wf" found.
Sorry, my question was very unclear. Are you using datalad to manage those data? In other words, is it a "datalad dataset"?
My intuition is that we may have a regression of an earlier bug where "hidden" images under the .git/ folder were considered part of the BIDS structure.
Can you print, with the tree command or similar, the structure of your BIDS dataset (including hidden folders)?
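For example, something like the following would show the layout including hidden entries (the dataset path here is a placeholder; adjust it to your BIDS root):

```shell
# Placeholder path: replace with the root of your BIDS dataset
BIDS_ROOT=/data/bids_dataset
# -a includes hidden entries (e.g. .git/), -L 2 limits depth for readability
tree -a -L 2 "$BIDS_ROOT"
# If tree is not installed, find gives a comparable listing:
find "$BIDS_ROOT" -maxdepth 2
```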
Found the root cause (the BIDS structure wasn't really valid).
I had two directories with different participant IDs (sub-030 and sub-030_fm), but the files under both followed the sub-030 naming pattern (they were not renamed consistently with the directory), so I moved one of the directories out of the way (into the derivatives folder).
So this issue is "on the border", I think. On one hand, the overall structure of the root folder wasn't valid BIDS; on the other hand, if I run the pipeline for only one participant (whose data have a valid BIDS structure), it would be nice if the code didn't fail because of the invalid structure of another participant's folder, e.g. when one renames a participant's folder just to set it aside for a while.
Would love to hear your opinion.
I would say you should always validate your data using https://github.com/INCF/bids-validator and not expect any BIDS app to work correctly if the input dataset is not valid.
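For reference, the validator can be run from the command line (the dataset path is a placeholder; both invocations require network access to install or pull the tool):

```shell
# Via Node.js / npm:
npm install -g bids-validator
bids-validator /data/bids_dataset
# Or via Docker, without installing Node:
docker run --rm -v /data/bids_dataset:/data:ro bids/validator /data
```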
I can't fully agree, since in this case the relevant data are valid.
Why shouldn't fMRIPrep take only the data it is asked to work on
(i.e., read the folder of participant sub-030 and not look at other directories)?
I agree that this feature isn't a "must", but it is desirable and useful: fMRIPrep shouldn't care about other participants' folders (it's often convenient to move folders aside and keep them in the directory).
I can't see why fMRIPrep should fail on the data of the relevant participant just because some other directory, belonging to another participant, has a faulty structure.
I'd say this is exactly the issue: although you see sub-030 and sub-030_fm as different subjects, they are the same subject in the eyes of the BIDS parser. It just happens that the second has an odd fm key that perhaps the user knows how to process, and thus fMRIPrep carries on (until it crashes).
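To illustrate why the two directories collide (this is a minimal sketch, not fMRIPrep's actual parser, which uses pybids): BIDS entities are read from the filename, so a file stored under sub-030_fm/ but named sub-030_... still reports subject 030, producing two workflows with the same name.

```python
import re

def subject_from_filename(path):
    """Extract the subject label from a BIDS-style filename (illustrative only)."""
    filename = path.rsplit("/", 1)[-1]
    match = re.search(r"sub-([a-zA-Z0-9]+)_", filename)
    return match.group(1) if match else None

# Both files report subject "030", despite living in differently named folders:
print(subject_from_filename("sub-030/func/sub-030_task-bat_run-1_bold.nii.gz"))     # 030
print(subject_from_filename("sub-030_fm/func/sub-030_task-bat_run-1_bold.nii.gz"))  # 030
```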
You could just include sub-030_fm/ in your .bidsignore file if you REALLY want to keep it under your structure without interfering with fMRIPrep. However, I would not consider this a good idea if you were to share the dataset with a colleague.
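Concretely, a one-line .bidsignore at the dataset root would do it (the directory name matches the one from this thread):

```
sub-030_fm/
```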
That all said, we will welcome any good improvements to fMRIPrep. There could be use cases we don't fully understand at this moment.