N/A timeseries for some ROIs when using XCP-D for postprocessing

Summary of what happened:

I’m trying to use XCP-D to calculate functional connectivity matrices from data preprocessed with fMRIPrep. However, some ROIs show n/a for the entire timeseries, consistently across different subjects. It may be worth noting that one subject has 7 fMRI runs (3 task runs, plus 4 resting-state runs: two runs in each of two phase-encoding directions). The resting-state image in dir-AP shows better results, with no n/a values in most cases.
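In case it helps, this is how I check which ROIs are affected: look for all-NaN columns in the parcellated time series TSV that XCP-D writes out. A minimal sketch with pandas (the DataFrame here is synthetic; in practice you would read your own `_timeseries.tsv` file, whose exact name depends on your data):

```python
import numpy as np
import pandas as pd

# In practice, read one of XCP-D's parcellated time series outputs, e.g.:
#   timeseries = pd.read_csv("sub-XX_..._timeseries.tsv", sep="\t")
# A tiny synthetic stand-in for illustration:
timeseries = pd.DataFrame({
    "ROI_A": [0.1, 0.2, 0.3],
    "ROI_B": [np.nan, np.nan, np.nan],  # the problem: a fully-NaN node
    "ROI_C": [0.5, np.nan, 0.4],        # scattered NaNs would be a different issue
})

# ROIs whose entire time series is NaN:
nan_rois = timeseries.columns[timeseries.isna().all()].tolist()
print(nan_rois)  # -> ['ROI_B']
```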

Command used (and if a helper script was used, a link to the helper script or the command generated):

To save space, here are just the main arguments I passed to XCP-D and fMRIPrep.

XCP-D:

    --participant-label ${1} \
    --fs-license-file ${fs_dir} \
    -w  ${work_dir} \
    -v \
    --dcan-qc \
    --nthreads 8 \
    --omp-nthreads 2

fMRIPrep:

    --participant-label ${1} \
    --fs-license-file ${fs_dir} \
    -w  ${work_dir} \
    -vv \
    --nprocs 16 \
    --omp-nthreads 8

Both were run with essentially default settings.


Versions:

XCP-D: 0.5.2
fMRIPrep: 23.1.4

Environment (Docker, Singularity, custom installation):


Screenshots / relevant information:

Some screenshots from one of the subjects:

These irregular patterns (white bands) occur consistently across different subjects and tasks, i.e., at similar ROIs.

I would like to know whether this is related to incorrect registration or something else, and whether you have any suggestions to fix this problem. I am not very familiar with neuroimaging, but I will do my best to provide any other information if needed.

Thank you!

It’s probably just a coverage issue (i.e., those nodes are not fully covered by your brain mask), but it’s definitely a good idea to check your registration results in the fMRIPrep report. By the way, XCP-D applies a coverage threshold with the --min-coverage parameter. The default is 0.5, which means that 50% of the voxels in a given node must have non-zero, non-NaN values in order for that node to be retained in the parcellated time series.
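To illustrate what that threshold means, here is a rough sketch of the coverage computation (illustrative code, not XCP-D’s actual implementation; the array shapes are simplified to 1-D for readability):

```python
import numpy as np

def parcel_coverage(bold_vol, atlas, label):
    """Fraction of a parcel's voxels with non-zero, non-NaN data.

    A sketch of the idea, not XCP-D's actual code.
    bold_vol: one BOLD volume; atlas: integer label image of the same shape.
    """
    parcel = atlas == label
    usable = parcel & np.isfinite(bold_vol) & (bold_vol != 0)
    return usable.sum() / parcel.sum()

# Toy example: a 10-voxel parcel where 4 voxels fall outside the brain mask.
atlas = np.ones(10, dtype=int)
bold_vol = np.array([1.0] * 6 + [0.0] * 4)

cov = parcel_coverage(bold_vol, atlas, label=1)
print(cov)        # -> 0.6
print(cov >= 0.5) # node kept under the default --min-coverage of 0.5
```

A node with coverage below the threshold is written out as NaN rather than a partial average, which is why the affected ROIs show n/a for the whole time series.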

Thank you so much! @tsalo
I have a question: is it reasonable to directly tune --min-coverage, given that this behavior is similar across most tasks and subjects?

I’m not sure what you mean by directly tuning the parameter. Could you explain what you’re proposing in a bit more detail?


I’m sorry for my fuzzy wording. What I mean is: can I change --min-coverage from 0.5 to a smaller value, like 0.3~0.4, to reduce the incidence of this unexpected phenomenon (N/A values)? I would then pad the remaining N/A values with 0.
Thank you!

You can use whatever coverage value you think makes sense for your data.

I wouldn’t recommend replacing your NaN time series with 0s. If you end up analyzing results across participants, those zeros will skew the results. The best approach is to ignore the subjects/nodes with NaNs.
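For example, one way to restrict a group analysis to nodes that are covered in every participant (a NumPy sketch on hypothetical data; the variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stack of connectivity matrices: (n_subjects, n_nodes, n_nodes).
conn = rng.normal(size=(5, 4, 4))
# Simulate a low-coverage node: node 3 is all-NaN for subject 2.
conn[2, 3, :] = np.nan
conn[2, :, 3] = np.nan

# A node is unusable for a subject if its entire row is NaN;
# drop any node that is unusable for at least one subject.
bad_nodes = np.isnan(conn).all(axis=2).any(axis=0)
conn_complete = conn[:, ~bad_nodes][:, :, ~bad_nodes]
print(conn_complete.shape)  # -> (5, 3, 3)
```

This keeps a complete-case node set across the group, instead of letting zero-padded edges pull group-level connectivity estimates toward zero.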
