My team and I successfully preprocessed an imaging dataset using fmriprep-20.2.1, followed by subject-wise and group-level QA using mriqc-0.15.1.
We explicitly passed --dummy-scans 2 in the fMRIPrep script, but the group-level MRIQC output shows some runs with 10 or more dummy scans. Is there a reason the number of dummy scans would be overridden during preprocessing?
singularity run --home $HOME --cleanenv $IMAGE \
    $BIDS_ROOT $BIDS_ROOT/derivatives participant \
    --participant-label $SUBJ \
    --fs-license-file $SURFER \
    --output-spaces MNI152NLin2009cAsym:res-2 \
    --dummy-scans 2 \
    --nthreads $THREADS \
    --mem_mb $MEM
Group-Level Output (via MRIQC)
MRIQC has a built-in function for detecting non-steady-state volumes (i.e., dummy scans). Typically, modern scanners will discard these; however, there can still be lingering non-steady-state volumes. MRIQC will detect any that exist and provide that information in the reports. In the group report that you screenshotted, hovering your mouse over an individual dot will show which subject's run contains that number of non-steady-state volumes. My understanding is that this metric is reported independently of whether or not you specify --dummy-scans in your MRIQC command. By specifying --dummy-scans you're simply telling MRIQC to drop that many volumes at the beginning of each functional run from certain computations. So even though you specified 2 non-steady-state volumes for each run, MRIQC detected anywhere from 0 to 11 non-steady-state volumes across your dataset.
You could then potentially use this MRIQC information in your fMRIPrep command (via --dummy-scans) if you wanted.
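To make that concrete, here is a rough shell sketch of pulling the detected counts out of MRIQC's group_bold.tsv. It assumes the group TSV has a column named dummy_trs holding the detected non-steady-state count and that the first column is the run's BIDS name; verify both against your MRIQC version's output before relying on this.

```shell
# flag_dummy_runs: list runs where MRIQC detected more non-steady-state
# volumes than the number you passed to fMRIPrep's --dummy-scans.
# Usage: flag_dummy_runs group_bold.tsv 2
# Assumes a "dummy_trs" column in the TSV (check your MRIQC version).
flag_dummy_runs() {
    awk -F'\t' -v thr="$2" '
    NR == 1 {
        # locate the dummy_trs column by header name
        for (i = 1; i <= NF; i++) if ($i == "dummy_trs") col = i
        next
    }
    # print bids_name and detected count when it exceeds the threshold
    col && $col + 0 > thr + 0 { print $1 "\t" $col }
    ' "$1"
}

# Example call (hypothetical derivatives layout):
# flag_dummy_runs derivatives/mriqc/group_bold.tsv 2
```

Any runs this flags are candidates for a larger per-run --dummy-scans value in fMRIPrep.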
This is something I've been wondering about too. I'm surprised when, e.g., the global signal is substantially different in the first couple of TRs even though the scanner discarded the dummy TRs. When that happens, is it accurate to think of it simply as there having been insufficient dummy TRs to reach complete saturation for that run? Or is something more complicated happening?
That’s really helpful! Thanks for your insight here.
I’m not an expert on this topic, so I can’t speak from a position of authority, but this is my understanding:
Modern scanners will typically discard a certain number of volumes at the beginning of each functional BOLD acquisition, which is meant to account for magnetic field instability before the signal reaches a steady state. The assumption is that once the scanner begins collecting the actual volumes, there is no longer any notable instability. This is clearly not always the case, and it can be particularly pronounced with multi-band acquisitions. I believe this is why MRIQC calculates this metric: even with the scanner discarding several volumes at the beginning, the instability may still linger.
One way to alleviate this problem is to include some buffer time (e.g., 10 seconds) at the beginning of each functional BOLD acquisition, which can then be discarded by the researcher. That way, in addition to the dummy scans discarded by the scanner, the researcher can also remove the first several additional volumes, to better ensure that the magnetic field instability has subsided.
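As a rough sketch of that bookkeeping, you can convert the buffer duration into a volume count with a ceiling division and then trim that many volumes with your tool of choice. The TR, buffer, and filenames below are made-up example values, not from this thread.

```shell
# How many volumes correspond to a 10 s buffer at the start of a run?
# Work in milliseconds so sub-second TRs stay exact in integer shell
# arithmetic. (Example numbers; substitute your own TR and buffer.)
TR_MS=800          # repetition time: 0.8 s
BUFFER_MS=10000    # buffer to discard: 10 s

# Ceiling division: drop enough whole volumes to cover the buffer.
N_DROP=$(( (BUFFER_MS + TR_MS - 1) / TR_MS ))
echo "drop the first $N_DROP volumes"

# You could then trim the run, e.g. with FSL's fslroi (not run here;
# the filenames are hypothetical):
#   fslroi sub-01_task-rest_bold.nii.gz trimmed_bold.nii.gz "$N_DROP" -1
```

With a 0.8 s TR, a 10 s buffer rounds up to 13 volumes; with a 2 s TR it is exactly 5.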