I am new to neuroimaging and I am trying to establish a Python-based workflow like the following, with DataLad as the version control system:
1. raw data
2. BIDS dataset (created with HeuDiConv)
3. BIDS subdataset / derivative with specific MRI sequences (resting-state fMRI)
4. further analysis
Datasets 1 and 2 contain anatomical, functional (task & rest), and DWI images. For my analysis I would like to tailor the dataset to only resting-state and anatomical data for further analysis.
Could somebody give me a hint as to which tool would be best to achieve this? fMRIPrep, BIDSonym, and others need to be implemented in the workflow, but I did not see a smart way to perform further analysis with only parts of the initial data. Is there a DataLad (YODA-compliant) way of doing this? Maybe via pybids or one of the NiPy packages (nistats, nibabel, etc.)? Does one need to clean the JSON files manually afterwards, or is there an automatic solution?
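To illustrate what I mean by tailoring: a minimal pybids sketch of the kind of selection I have in mind (the path `data/BIDS` and the `task-rest` label are just placeholders for my setup, untested):

```python
from bids import BIDSLayout

# Hypothetical location of the BIDS dataset; adjust to the actual path
layout = BIDSLayout("data/BIDS")

# Anatomical images (T1-weighted)
anat_files = layout.get(suffix="T1w", extension=".nii.gz",
                        return_type="filename")

# Resting-state functional runs, assuming the task label is "rest"
rest_files = layout.get(task="rest", suffix="bold", extension=".nii.gz",
                        return_type="filename")

print(f"{len(anat_files)} anatomical and {len(rest_files)} resting-state files")
```

But this only gives me file lists; I am unsure how to turn such a selection into a proper (sub)dataset without breaking the accompanying JSON metadata.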
Thanks for the hints. Let me see if I understood you correctly: I create the BIDS dataset in its own directory (BIDS/). Then for each step I create a new DataLad dataset and register the previous step as a source dataset via datalad clone --reckless=ephemeral, which would result in the following structure:
Structure:

data/
  BIDS/
  BIDSonym/
    source/
      BIDS/
  fMRIprep/
    source/
      BIDSonym/
  Analysis/
    source/
      fMRIprep/
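In commands, roughly what I have in mind (an untested sketch using the DataLad Python API; all paths are placeholders, and I only show the BIDSonym step):

```python
import datalad.api as dl

# Superdataset and one dataset per processing step (placeholder paths)
dl.create(path="data")
dl.create(path="data/BIDS", dataset="data")
dl.create(path="data/BIDSonym", dataset="data")

# Register the previous step under source/ as an ephemeral clone
dl.clone(
    source="data/BIDS",
    path="data/BIDSonym/source/BIDS",
    dataset="data/BIDSonym",
    reckless="ephemeral",
)
```

The fMRIprep/ and Analysis/ datasets would repeat the same pattern with their respective predecessors.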
What is the main reason for the --reckless=ephemeral mode? Is it to avoid duplicating the file tree/annex? And what would be the disadvantage of a “normal” local clone from the “parent” dataset?
Regarding ReproNim/containers: I plan on using them, but I thought they don't automatically manage my data structure and only ensure that there is BIDS input and output?