Excluding subjects based on mriqc

Hey,

I ran mriqc on a dataset, and now I’m wondering how to proceed. There are clearly some subjects that should be excluded, but I’d like to decide on a standard exclusion criterion. I’d be very happy to hear how people use mriqc (and potentially fmriprep) output to decide which participants to exclude:

Are there specific measures you find more important?
Do you set a threshold such that participants more than X SDs from the mean on any of the measures are excluded?
If I have two runs per subject, each with a different condition, is there a way to exclude participants whose runs differ substantially on one of the measures (for example, participants who move much more in one run than in the other)? A rough sketch of what I mean is below.
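To make the last two questions concrete, here is a minimal sketch of the kind of rule I have in mind, assuming mriqc’s group-level `group_bold.tsv` output and a few of its columns (`fd_mean`, `dvars_nstd`, `tsnr`, `bids_name`); the metric list and cutoffs are arbitrary placeholders, not a recommendation:

```python
import pandas as pd

# Group-level IQMs from MRIQC's group report (file/column names can differ
# slightly across MRIQC versions, so treat these as placeholders).
df = pd.read_csv("derivatives/mriqc/group_bold.tsv", sep="\t")

METRICS = ["fd_mean", "dvars_nstd", "tsnr"]  # the metrics I currently care about
X = 2.0                                      # SD cutoff -- arbitrary

# Question 2: flag any run more than X SDs from the group mean on any metric
z = (df[METRICS] - df[METRICS].mean()) / df[METRICS].std(ddof=1)
outliers = df.loc[(z.abs() > X).any(axis=1), "bids_name"]
print("Outlier runs:")
print(outliers.to_string(index=False))

# Question 3: flag subjects whose two runs differ a lot in mean framewise displacement
df["subject"] = df["bids_name"].str.extract(r"(sub-[^_]+)", expand=False)
fd_diff = df.groupby("subject")["fd_mean"].agg(lambda s: s.max() - s.min())
print("Subjects with a large between-run difference in fd_mean:")
print(fd_diff[fd_diff > 0.2])                # 0.2 mm gap -- arbitrary
```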

Thanks!


This is an excellent question. The problem is very complex.

  1. It’s hard to define what counts as good vs. bad quality. It’s often a subjective judgment, and sometimes it depends on what you plan to use the data for (a T1w image with a little motion may be good enough for coregistration, but not for measuring cortical thickness). We need good studies that show how different exclusion/weighting rules influence statistical power. With MRIQC and diverse public data it would not be that difficult to perform such an evaluation.

  2. Another approach is to look for outliers in the quality metrics. However, each quality metric (even when it has physical units) is influenced by many factors, so it’s not easy to construct normative distributions - especially at the beginning of a study, when little data is available. To help with this problem we started crowdsourcing MRIQC metrics along with sequence parameters, which can be used to group similar protocols. More info about these data can be found in this preprint: https://www.biorxiv.org/content/early/2018/09/18/420984
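As a minimal sketch (not an endorsement of any particular threshold), one could pull a sample of the crowdsourced IQMs and see where a local run falls within that distribution. The endpoint, pagination parameters, and field names below are assumptions on my part - please check the current Web API documentation:

```python
import requests
import pandas as pd

# Pull a sample of crowdsourced BOLD IQMs from the MRIQC Web API.
# NOTE: the endpoint, pagination parameters, and field names are assumptions;
# verify them against the current Web API documentation.
URL = "https://mriqc.nimh.nih.gov/api/v1/bold"

records = []
for page in range(1, 4):  # first few pages only, for illustration
    resp = requests.get(URL, params={"max_results": 500, "page": page}, timeout=60)
    resp.raise_for_status()
    records.extend(resp.json().get("_items", []))

ref = pd.json_normalize(records)

# Compare a local run's fd_mean against the crowdsourced distribution
# (the column name and the 0.25 value are placeholders).
local_fd_mean = 0.25
pct = (ref["fd_mean"] < local_fd_mean).mean() * 100
print(f"Local fd_mean sits at the {pct:.1f}th percentile of the reference sample")
```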


What a wonderful resource! Thanks!


I’ve been playing around with the metrics, and the distributions of the different parameters are very useful for deciding which subjects to exclude. However, rather than just excluding participants at the edges of the distributions, I was hoping to exclude participants whose values are similar to those of images rated as poor quality.
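To illustrate what I mean, here is a rough sketch assuming the rated IQMs can be merged into a table with a rating column; the file names, the rating coding, and the chosen IQMs are all placeholders:

```python
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Rough idea: learn "poor vs. acceptable" from rated images and apply it to my own data.
# File and column names are placeholders for however the ratings end up merged
# with the IQM table (e.g. a 'rating' column with -1 = exclude, 1 = accept).
rated = pd.read_csv("rated_t1w_iqms.csv")        # crowdsourced IQMs + ratings
mine = pd.read_csv("group_T1w.tsv", sep="\t")    # my own MRIQC group output

features = ["cjv", "cnr", "efc", "snr_total", "fwhm_avg"]  # a subset of T1w IQMs
y = (rated["rating"] < 0).astype(int)            # 1 = rated poor quality

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(rated[features], y)

mine["p_poor"] = clf.predict_proba(mine[features])[:, 1]
print(mine.sort_values("p_poor", ascending=False)[["bids_name", "p_poor"]].head())
```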

@oesteban, @ChrisGorgolewski - Are there ratings for bold as well? I saw in the interface that the bold data should have rating columns, but I didn’t see such columns in the data. I also tried getting the ratings database, but the md5sums don’t match any of the bold data (so presumably it’s all anatomical?).
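For reference, this is roughly how I checked the checksums, in case I got something wrong (the column name and file layout are assumptions on my part):

```python
import hashlib
from pathlib import Path
import pandas as pd

# Check which of my bold files appear in the ratings table by md5 checksum.
# (The 'md5sum' column name and the BIDS layout are assumptions.)
ratings = pd.read_csv("ratings.csv")
rated_md5 = set(ratings["md5sum"])

def md5sum(path, chunk=1 << 20):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

for nii in Path("bids_dataset").rglob("*_bold.nii.gz"):
    if md5sum(nii) in rated_md5:
        print("rated:", nii.name)
```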

In any case - thanks for the data and for the notebooks, those were super useful for getting the IQMs.

Hi @ayab, unfortunately rating bold time series is very unreliable and time-consuming, so there are very few cases with quality annotations.

If I remember correctly, there are quality annotations for the bold images of the ~1000 subjects of ABIDE I at the PCP-QAP site.

Thanks, I’ll take a look at their data