Hi Richard,
That is a good question, and your caution is well placed.
Rather than jumping straight to a short answer, let me give some intuition first.
SCORE (Structural Correlation–based Outlier Rejection) was originally developed and validated primarily for 2D ASL data, where acquisitions typically include a larger number of label and control pairs, often on the order of 30–40 or more. The core idea is that a small subset of pairs may contain severe artifacts that can disproportionately bias the resulting CBF map. In those scenarios, removing a few corrupted pairs using SCORE can be quite effective.
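To make that intuition concrete, here is a rough Python/NumPy sketch of the structural-correlation idea: each pair's perfusion-weighted map is correlated against the mean of the remaining pairs, and pairs that correlate poorly are flagged and dropped. The threshold, the iteration scheme, and the `flag_outlier_pairs` helper are illustrative assumptions of mine, not the published SCORE algorithm or any particular implementation.

```python
import numpy as np

def flag_outlier_pairs(cbf_pairs, corr_threshold=0.5, max_reject=None):
    """Illustrative structural-correlation outlier flagging.

    cbf_pairs : array of shape (n_pairs, n_voxels), one perfusion-weighted
                (control - label) map per pair, masked to brain voxels.
    Returns the indices of pairs flagged as outliers. This is a simplified
    sketch of the idea behind SCORE, not the published algorithm.
    """
    n_pairs = cbf_pairs.shape[0]
    keep = np.ones(n_pairs, dtype=bool)
    max_reject = max_reject if max_reject is not None else n_pairs // 4

    for _ in range(max_reject):
        mean_map = cbf_pairs[keep].mean(axis=0)
        # Correlate each retained pair with the current mean map;
        # already-rejected pairs get +inf so they are never re-selected.
        corrs = np.array([
            np.corrcoef(cbf_pairs[i], mean_map)[0, 1] if keep[i] else np.inf
            for i in range(n_pairs)
        ])
        worst = int(np.argmin(corrs))
        if corrs[worst] >= corr_threshold:
            break  # nothing left that looks like an outlier
        keep[worst] = False

    return np.where(~keep)[0]
```

With 30–40 pairs, dropping the one or two volumes this kind of procedure flags costs little averaging; the next paragraph is about why that trade-off changes with far fewer pairs.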
With 3D background-suppressed ASL, however, the number of label and control pairs is usually much smaller, often around 8–10. In my experience, applying SCORE in this setting does not consistently lead to improvement; instead, removing even one or two pairs can measurably reduce SNR, simply because so few averages remain.
Turning to SCRUB (Structural Correlation with Robust Bayesian estimation): it can be viewed as SCORE combined with local artifact mitigation using an empirical Bayesian strategy. This approach is designed to be most beneficial when artifacts are spatially localized. In such cases, SCRUB aims to make small, local adjustments while preserving the underlying data, rather than heavily altering the signal based on strong priors. If artifacts are more global and affect the entire image similarly across all pairs, SCRUB generally provides limited benefit.
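As a rough illustration of what "local artifact mitigation" means in practice, here is a toy voxelwise robust estimate, again a sketch rather than the SCRUB implementation: the mean across pairs at each voxel is re-estimated with Huber-style downweighting of values that sit far from a prior map, so only locally deviant values are pulled back while well-behaved voxels are essentially left alone. The `robust_voxelwise_mean` helper, the use of the SCORE output as the prior, and the specific weighting constants are all my assumptions for illustration.

```python
import numpy as np

def robust_voxelwise_mean(cbf_pairs, prior_map, scale=None, n_iter=10, c=1.345):
    """Toy voxelwise robust estimate, loosely in the spirit of SCRUB.

    cbf_pairs : (n_pairs, n_voxels) perfusion-weighted maps.
    prior_map : (n_voxels,) prior mean CBF (e.g., a SCORE-cleaned mean).
    Uses Huber-style weights so only values far from the current estimate
    at a given voxel are downweighted; an illustration, not the published method.
    """
    est = prior_map.copy()
    if scale is None:
        # Robust per-voxel scale from the median absolute deviation across pairs
        scale = 1.4826 * np.median(np.abs(cbf_pairs - prior_map), axis=0)
        scale = np.maximum(scale, 1e-6)

    for _ in range(n_iter):
        resid = (cbf_pairs - est) / scale                       # standardized residuals
        w = np.minimum(1.0, c / np.maximum(np.abs(resid), 1e-12))  # Huber weights
        est = (w * cbf_pairs).sum(axis=0) / w.sum(axis=0)       # weighted voxelwise mean

    return est
```

The key design point is that the correction is driven by the data at each voxel: voxels where all pairs agree keep weights near 1 and are effectively untouched, which is why a globally corrupted acquisition gains little from this kind of scheme.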
When all pairs are similarly contaminated by artifacts and the artifact pattern is largely global, neither SCORE nor SCRUB is likely to help much, since there are no clear outlier volumes to downweight or correct.
If you decide to try --scorescrub in this situation, I would recommend doing so cautiously, carefully inspecting which pairs are being flagged, and explicitly comparing results with and without scrubbing. With only eight pairs, I would generally avoid SCORE or SCRUB unless there are obvious outlier volumes or clearly localized artifacts driving visible issues.
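For the with/without comparison, something as simple as the following is usually enough: load both CBF maps, look at mean CBF and spatial coefficient of variation within gray matter, and inspect where the difference map is non-zero. The file names and the GM mask here are hypothetical placeholders for whatever your pipeline actually writes out.

```python
import numpy as np
import nibabel as nib

# Hypothetical output file names; substitute whatever your pipeline produces.
cbf_plain = nib.load("sub-01_cbf_noscrub.nii.gz").get_fdata()
cbf_scrub = nib.load("sub-01_cbf_scorescrub.nii.gz").get_fdata()
gm_mask = nib.load("sub-01_gm_mask.nii.gz").get_fdata() > 0.5

for name, img in [("no scrubbing", cbf_plain), ("SCORE/SCRUB", cbf_scrub)]:
    vals = img[gm_mask]
    print(f"{name}: GM mean CBF = {vals.mean():.1f}, "
          f"spatial CoV = {vals.std() / vals.mean():.2f}")

# The difference map shows where (and how much) scrubbing actually changed values.
diff = cbf_scrub - cbf_plain
print("max |difference| within GM:", np.abs(diff[gm_mask]).max())
```

If the two maps are essentially identical, scrubbing is doing nothing for you; if they differ substantially, the difference map tells you whether the changes line up with the artifacts you can see in the individual pairs.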
Hope this helps.