--dummy-scans option on fmriprep functional?

Hi, all. I’ve realized that one of the data sets we’ve been working on had dummy scans that were included in the dicoms, and we’d like to avoid analyzing those scans.

I saw in the latest fmriprep release that there’s a --dummy-scans flag that, I assume, is intended to drop the dummy scans from further analysis, but it still creates timeseries with the same number of TRs as previous versions of fmriprep (i.e., including the dummy scans).

Am I doing something wrong, or does this not work as I expect, or is this a bug?

Command invocation here:
/usr/local/miniconda/bin/fmriprep $BIDS_DIR $BIDS_DERIV_DIR participant --participant_label sub-SAXOA253 --nthreads 4 --mem_mb 13000 --ignore slicetiming --use-aroma --ignore-aroma-denoising-errors -w $WORKING_DIR --fs-license-file /cm/shared/openmind/freesurfer/6.0.0/.license --output-spaces MNI152NLin6Asym:res-2 --dummy-scans 4 --skip-bids-validation

Thanks!
Todd


Hi @toddt,

Thank you for your message! The --dummy-scans flag sets the number of non-steady-state volumes, but we never remove volumes from the BOLD series. The non_steady_state_outlier confound columns can be used in regression models to keep the dummy volumes from contributing to parameter estimates.

Note: For AROMA and CompCor, we do remove the dummy scans for calculation, but then zero-pad to ensure that the confounds match the number of volumes in the BOLD series.
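To make the suggested workflow concrete, here is a minimal sketch of pulling those one-hot columns out of a confounds table with pandas. The DataFrame below is synthetic (a stand-in for an fMRIPrep `*_desc-confounds_regressors.tsv`); only the `non_steady_state_outlier_XX` column naming follows fMRIPrep's convention.

```python
import pandas as pd

def non_steady_state_regressors(confounds: pd.DataFrame) -> pd.DataFrame:
    """Return the one-hot columns flagging non-steady-state (dummy) volumes."""
    cols = [c for c in confounds.columns
            if c.startswith("non_steady_state_outlier")]
    return confounds[cols]

# Synthetic stand-in for a confounds TSV: 6 volumes, first 2 flagged as dummies.
demo = pd.DataFrame({
    "non_steady_state_outlier_00": [1, 0, 0, 0, 0, 0],
    "non_steady_state_outlier_01": [0, 1, 0, 0, 0, 0],
    "trans_x": [0.10, 0.00, 0.02, 0.01, 0.00, 0.03],
})
nss = non_steady_state_regressors(demo)  # 6 rows x 2 one-hot columns
```

Adding these columns alongside the motion parameters in a GLM effectively censors the dummy frames without changing the length of the timeseries.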

Thank you,
Franklin

Aha. That explains that. Thanks!

I’m not sure I’m totally on board with this implementation, though, for a number of reasons:

  1. this makes the carpet plot reports/other confound reports look super-janky (to use a technical term), because the non-steady-state volumes drive up the signal intensity.
  2. we exclude people based on average motion (FD) above a certain threshold, and I’m not sure we want the motion during the dummy scans to count for/against that threshold.
  3. This is idiosyncratic, but the onsets we have don’t include the dummy scans, so in my particular case, all of my onsets are now off by 4 TRs for this dataset.
  4. Adding num_dummy_scans additional regressors to the model costs degrees of freedom, adds runtime, et cetera, which seems a bit silly for data that’s actually garbage.

Thoughts?

Edit: either way, it probably makes sense to update the documentation to explain the way the flag works!

Hi @toddt

Thank you for your message. I have raised these --dummy-scans concerns with the fMRIPrep development team.

I have also added an issue to update the documentation to provide more explanation regarding this flag.

Thank you,
Franklin

Hi @toddt,

I’m in agreement with points 1 and 2, I believe.

For point 3, to be in line with BIDS: if the original BIDS dataset includes the dummy volumes, the onset column of the task TSV should be aligned with the first volume in the file. But you may already be aware of this, and are just noting that it’s annoying that you thought the dummy volumes were not included in the raw BIDS dataset when in fact they were. (I agree, that is annoying.)

For point 4, I agree it’s silly to include garbage data, but we’ve found it more difficult to define how one person’s trash could be another person’s treasure. If you have more to add to that conversation, I am happy to bring it back from the dead.

Finally, I agree: clearer documentation is always a good thing.

Best,
James

You’re completely right on point 3 – these data weren’t provided in BIDS format, and in making them BIDsy, I didn’t realize the onsets were off by the number of dummies. It’s a personal frustration, not a systemic problem.

re: point 4, trash and treasure – this is a good point. I’d forgotten that the initial “dummy” volumes were useful for better reference scans.

With that said, I really, really don’t want these scans in my data set. Setting aside my personal onsets hassle, I use ART to detect artifact timepoints during post-processing, some of which are calculated as standard deviations of signal intensity away from the mean signal intensity. Having a bunch of super-high intensity images at the start of the run is going to affect that calculation in ways I don’t really want to deal with.

So my current plan is to take the BIDS functionals after heudiconv, drop the first dummy volumes, and then pass them along to fmriprep, crossing my fingers that there aren’t .json files somewhere in there tracking the “correct” number of TRs. This feels not ideal for a number of reasons, not least of which is that there’s now a piece of manual code in between the very nicely reproducible heudiconv and fmriprep programs.
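The trimming step itself is small. Here is one hedged sketch of the core operation on a 4D array; the nibabel round-trip in the comment is illustrative (paths and the `N_DUMMY` value are assumptions for this dataset, not anything fmriprep provides).

```python
import numpy as np

N_DUMMY = 4  # assumed number of initial non-steady-state volumes

def drop_dummy_volumes(bold: np.ndarray, n_dummy: int = N_DUMMY) -> np.ndarray:
    """Drop the first n_dummy frames along the time (last) axis of 4D data."""
    if bold.shape[-1] <= n_dummy:
        raise ValueError("fewer volumes than dummy scans")
    return bold[..., n_dummy:]

# With nibabel, the file round-trip might look like (not run here):
#   img = nib.load(bold_path)
#   trimmed = drop_dummy_volumes(img.get_fdata())
#   nib.Nifti1Image(trimmed, img.affine, img.header).to_filename(out_path)

bold = np.zeros((2, 2, 2, 10))   # toy 4D series with 10 time points
trimmed = drop_dummy_volumes(bold)
```

One caveat: if the events files were timed including the dummies, their onsets would also need `n_dummy * TR` subtracted; in my case the onsets already exclude them, so trimming alone lines everything up.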

In my ideal world, though, there’d be another flag (--delete-dummy-volumes) that I could pass to fmriprep to avoid this. And perhaps that flag could keep the useful functions of non-steady-state volumes (reference scans) while avoiding all of these headaches?

Best,
Todd


Related to this issue:

I noticed in the confounds_regressors.tsv files that when I use the dummy scan flag (in v1.4.1), only tCompCor components are filled with zeros.

It seems that DVARS and framewise displacement values from these dummy scans are still used and I noticed that dummy scans can have wild DVARS values (20x the average non-dummy scan).

Since frames that exceed a threshold of 1.5 standardised DVARS are annotated as motion outliers, these dummy scans affect outlier detection, right? And if so, shouldn’t more (if not all) confound regressors be adjusted for dummy scans?
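To illustrate the concern: if the first frames are flagged by their own one-hot regressors anyway, a downstream outlier pass could simply skip them when thresholding. This is a sketch of that idea, not fMRIPrep's internal computation; the threshold of 1.5 standardized DVARS matches the value mentioned above, and the toy values are invented.

```python
import numpy as np

def motion_outliers(std_dvars, n_dummy=4, thresh=1.5):
    """Flag frames whose standardized DVARS exceeds thresh, skipping the
    first n_dummy frames (dummies already get one-hot regressors)."""
    flags = np.asarray(std_dvars, dtype=float) > thresh
    flags[:n_dummy] = False
    return flags

# Toy series: two dummy frames with wild DVARS, one genuine spike at frame 3.
flags = motion_outliers([30.0, 25.0, 1.2, 1.6, 1.1, 1.4], n_dummy=2)
```

With the dummies masked, only the genuine spike at frame 3 is flagged, instead of the first two frames dominating the outlier set.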

-Jelle

Thanks for this discussion.
I also need to drop the first few volumes, because I am replicating a previous study and need to follow the original protocol (but I hope to do it using fmriprep).

@toddt, I am wondering: did the solution you described (removing the dummy volumes after converting to BIDS) work? And is there a --delete-dummy-volumes flag now?

Hi everyone

Following up on this thread: in the most recent fmriprep, when you include “--dummy-scans N”, are those N non-steady-state volumes included in…

(1) the output functional data (e.g., mapped to the surface)?

(2) the slice time correction? I see in the documentation that AFNI’s 3dTshift requires those volumes to be ignored.

(3) the quality assessment statistics and figures (e.g. DVARS)?

(4) the confound regressors?

Many thanks
Alex

  1. Yes, the output files have the same number of time points as the inputs.
  2. The count of dummy scans is passed to 3dTshift, so this is done correctly.
  3. (+4) They are removed from ICA-AROMA and CompCor, but DVARS does not appear to skip dummy scans. non_steady_state regressors are provided, and should be used to censor dummy scans. As with any censoring regressors, you should zero out other regressors for censored time points.
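The "zero out other regressors for censored time points" advice can be sketched in a few lines of pandas. The column names follow fMRIPrep's confounds convention, but the DataFrame here is a synthetic toy, not output from the tool.

```python
import pandas as pd

def zero_at_censored(confounds: pd.DataFrame) -> pd.DataFrame:
    """Zero every other regressor wherever a non_steady_state_outlier
    column flags a frame, so censored frames cannot leak into the fit."""
    out = confounds.copy()
    nss = [c for c in out.columns
           if c.startswith("non_steady_state_outlier")]
    censored = out[nss].sum(axis=1) > 0
    others = [c for c in out.columns if c not in nss]
    out.loc[censored, others] = 0.0
    return out

# Toy confounds: frame 0 is a dummy scan, so trans_x is zeroed there.
demo = pd.DataFrame({
    "non_steady_state_outlier_00": [1, 0, 0],
    "trans_x": [0.5, 0.1, 0.2],
})
cleaned = zero_at_censored(demo)
```

The one-hot columns themselves stay untouched; only the continuous regressors are zeroed at the censored frames.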