Replicable scripts, BIDS, and curating data

bids

#1

I’m currently analyzing a dataset, and I would like each step of the analysis to be completely automated, so that I can publish the dataset and have the analysis replicated exactly. (I’m not editing freesurfer surfaces, so I shouldn’t actually need to do a lot of interacting with the pipeline.)

The BIDS framework and all of its apps make that pretty simple – I run heudiconv on a list of subjects to get BIDS directories. I run MRIQC and fmriprep to check data quality and do preprocessing. Then I have a nipype modeling script, et cetera.
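
For concreteness, here is a rough sketch of the kind of driver script I mean (the paths, subject labels, and heuristic file name are just placeholders, and it assumes heudiconv, MRIQC, and fMRIPrep are on the PATH):

```python
#!/usr/bin/env python3
"""Drive DICOM -> BIDS conversion, QC, and preprocessing for a list of subjects."""
import subprocess

SUBJECTS = ["01", "02", "03"]                       # placeholder labels
BIDS_DIR = "/data/study/bids"                       # placeholder paths
DERIV_DIR = "/data/study/derivatives"
DICOM_TMPL = "/data/study/dicom/{subject}/*/*.dcm"  # heudiconv -d template

def run(cmd):
    """Run a command and fail loudly, so reruns are exactly reproducible."""
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

for sub in SUBJECTS:
    # DICOM -> BIDS conversion with a project-specific heuristic file
    run(["heudiconv", "-d", DICOM_TMPL, "-s", sub,
         "-f", "heuristic.py", "-c", "dcm2niix", "-b", "-o", BIDS_DIR])

# BIDS Apps take the dataset root, an output directory, and participant labels
run(["mriqc", BIDS_DIR, f"{DERIV_DIR}/mriqc", "participant",
     "--participant-label", *SUBJECTS])
run(["fmriprep", BIDS_DIR, f"{DERIV_DIR}/fmriprep", "participant",
     "--participant-label", *SUBJECTS])
```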

The thing I’m struggling with is this: in most large projects with a bunch of subjects, you’ve got some one-off subjects that you need to exclude. Maybe there’s an excessive amount of motion and the data is garbage. Maybe there’s a run where the projector turned off midway through. Maybe there’s one subject with a really unfortunate slice prescription that you don’t want to include in the group analysis because the intersection of his mask with the other masks excludes too much data.

How is this documented and managed? Is there a BIDS standard for this? I’d like to keep the data in the dataset that we upload, and even if I didn’t, I wouldn’t want to deal with this in the heudiconv heuristics file (“if TRs = 128 and task = ‘faces’, process it, unless it’s the 2nd run of subject 8, or the 3rd run of subject 10, or …”)
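
To make that last point concrete, here is a hypothetical sketch of a heudiconv heuristic (the key template and attribute checks are illustrative only) showing why per-subject exceptions don’t belong there:

```python
# Hypothetical heudiconv heuristic sketch. infotodict() only sees per-series
# metadata (seqinfo), so "...unless it's run 2 of subject 8" has no clean home here.

def create_key(template, outtype=("nii.gz",), annotation_classes=None):
    return template, outtype, annotation_classes

def infotodict(seqinfo):
    faces = create_key("sub-{subject}/func/sub-{subject}_task-faces_run-{item:02d}_bold")
    info = {faces: []}
    for s in seqinfo:
        # generic rule: 128 volumes and "faces" in the protocol name
        if s.dim4 == 128 and "faces" in s.protocol_name.lower():
            # subject- and run-specific exceptions would have to be hard-coded
            # right here, burying QC decisions inside the conversion step
            info[faces].append(s.series_id)
    return info
```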

Ideally, there’d be something like an “excluded runs” file somewhere, so that the bad runs are documented in a standardized place, and also so that, by the time the modeling scripts run, they could intelligently skip fitting first-level models to garbage data.
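
For what it’s worth, here is a sketch of what I have in mind (the file name exclusions.tsv and its columns are entirely made up), so the modeling scripts could check it before fitting anything:

```python
# Hypothetical exclusions.tsv kept at the top level of the dataset:
#
#   participant_id  task   run  reason
#   sub-08          faces  2    projector turned off mid-run
#   sub-10          faces  3    excessive motion
#
import csv

def load_exclusions(path="exclusions.tsv"):
    """Return a set of (participant_id, task, run) tuples to skip."""
    with open(path, newline="") as f:
        return {(row["participant_id"], row["task"], int(row["run"]))
                for row in csv.DictReader(f, delimiter="\t")}

EXCLUDED = load_exclusions()

def should_model(participant_id, task, run):
    """True if a first-level model should be fit for this run."""
    return (participant_id, task, run) not in EXCLUDED
```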

Has anyone done this in a clever way?


#2

The core of this problem is that “exclusion of runs” is your particular interpretation of the quality of the data, which depends on what tools you used to assess it and what you are planning to use the data for. So the answer to which runs to keep will differ from one person to another and from one analysis to another (a T1w scan with some motion could be good as an intermediate coregistration target, but not good for cortical thickness measurements).

At the moment the spec does not specify how to do this, but you can do the following:

  1. Add a known issues section to the README describing what you found problematic about specific runs.
  2. Add a custom column to the _scans.tsv files denoting which scans should be excluded or not, and add a _scans.json data dictionary explaining what this column means and how you made the decision (see the sketch below).
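
For example (a sketch only; the column names `exclude` and `exclude_reason` and the reasons themselves are placeholders you would define yourself):

```python
# Hypothetical sub-08/sub-08_scans.tsv with a custom "exclude" column:
#
#   filename                                    acq_time             exclude  exclude_reason
#   func/sub-08_task-faces_run-01_bold.nii.gz   2018-05-01T10:00:00  0        n/a
#   func/sub-08_task-faces_run-02_bold.nii.gz   2018-05-01T10:12:00  1        projector turned off mid-run
#
# ...and an accompanying _scans.json data dictionary documenting the column:
#
#   {
#     "exclude": {"Description": "1 if the run failed quality control and should be excluded from analysis, 0 otherwise"},
#     "exclude_reason": {"Description": "Free-text reason for the exclusion"}
#   }
#
# Analysis scripts can then filter on the column before fitting first-level models:
import pandas as pd

scans = pd.read_csv("sub-08/sub-08_scans.tsv", sep="\t")
usable = scans.loc[scans["exclude"] == 0, "filename"].tolist()
```

This way the runs stay in the shared dataset, but the exclusion decision and its rationale are documented in a machine-readable place.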

This might also be a good thing to add to the spec. If you could propose a change on https://github.com/bids-standard/bids-specification, that would be great.