Need help with HCP resting-state data (surface-based analysis)

Hi all!

Our group is trying to analyze resting-state functional connectivity in the HCP data in surface space. We have a few questions:

  1. What files should we use to calculate functional connectivity in surface space? Should we use .dtseries.nii, .native.func.gii, or others? HCP does not offer a detailed description of these files, so we were wondering which ones we need to start our analysis.

  2. We would also like to apply a mask to extract time series from a specific ROI (such as V1) in surface space. Which probabilistic atlas should we use? Should we reslice the atlas to the functional/structural image?

  3. Are there any Python packages or software suitable for analyzing HCP resting-state data in surface space?

Thank you!! We would appreciate any thoughts or suggestions!!

Best,
Linjing

Hi Linjing,

I don’t do much with resting state, but I can answer some of these:

  1. I'm not sure exactly which file you should use, but the *.dtseries.nii files are CIFTI files containing time-series data that have already been resampled onto the cortical surface vertices and various subcortical voxels (each CIFTI file contains both the LH and RH cortices plus the subcortical structures). There's a quick loading sketch at the end of this post.

  2. You can use neuropythy to apply all of these atlases to either HCP or FreeSurfer subjects:

    • Benson et al. (2014 & 2018) V1-V3 atlas
    • Wang et al. (2015) probabilistic atlas
    • Glasser et al. (2016) multimodal parcellation
    • Rosenke et al. (2018) cytoarchitectonic atlas

    The first three give you V1 (the last is probably not useful for ROIs). The Wang atlas provides only the ROI boundaries and excludes the fovea and periphery, but it includes many visual areas. The Benson atlas covers V1-V3 and gives you polar angle and eccentricity maps as well as the ROI boundaries. The Glasser atlas is a large parcellation of the whole brain based on the HCP data, and it is also boundaries only; if you are going to use the Glasser parcellation with an HCP subject, it is probably best to use the HCP tools instead of neuropythy.
    To use neuropythy, you can use the Docker image:

    > docker run --rm -it \
          -v <path_to_hcp_subjects_directory>:/data/hcp/subjects \
          nben/neuropythy atlas --help
    ...
    # Apply all 4 atlases to subject 111312:
    > docker run --rm -it \
           -v <path_to_hcp_subjects_directory>:/data/hcp/subjects \
           nben/neuropythy atlas -v --atlases benson14,wang15,glasser16,rosenke18 111312
      * Using HCP subject: /data/hcp/subjects/111312
      * Using Atlas subject: /data/required_subjects/fsaverage
      * Preparing Hemispheres...
      * Preparing Atlases...
          * Atlas: benson14, Version: (4, 0)
          * Atlas: wang15, Version: (1, 0)
          * Atlas: glasser16, Version: (1, 0)
          * Atlas: rosenke18, Version: (1, 0)
      * Preparing Filemap...
      * Extracting Files...
          * /data/hcp/subjects/111312/lh.benson14_eccen.mgz
          * /data/hcp/subjects/111312/lh.benson14_angle.mgz
          * /data/hcp/subjects/111312/lh.benson14_sigma.mgz
          * /data/hcp/subjects/111312/lh.benson14_varea.mgz
          * /data/hcp/subjects/111312/rh.benson14_eccen.mgz
          * /data/hcp/subjects/111312/rh.benson14_angle.mgz
          * /data/hcp/subjects/111312/rh.benson14_sigma.mgz
          * /data/hcp/subjects/111312/rh.benson14_varea.mgz
          * /data/hcp/subjects/111312/lh.rosenke18_vcatlas.mgz
          * /data/hcp/subjects/111312/rh.rosenke18_vcatlas.mgz
          * /data/hcp/subjects/111312/lh.glasser16_atlas.mgz
          * /data/hcp/subjects/111312/rh.glasser16_atlas.mgz
          * /data/hcp/subjects/111312/rh.wang15_fplbl.mgz
          * /data/hcp/subjects/111312/rh.wang15_mplbl.mgz
          * /data/hcp/subjects/111312/lh.wang15_fplbl.mgz
          * /data/hcp/subjects/111312/lh.wang15_mplbl.mgz
    # See the outputs:
    > ls /my/path/to/hcp/subjects/111312
    lh.benson14_angle.mgz  lh.benson14_sigma.mgz  lh.glasser16_atlas.mgz    lh.wang15_fplbl.mgz  MNINonLinear  rh.benson14_angle.mgz  rh.benson14_sigma.mgz  rh.glasser16_atlas.mgz    rh.wang15_fplbl.mgz  T1w
    lh.benson14_eccen.mgz  lh.benson14_varea.mgz  lh.rosenke18_vcatlas.mgz  lh.wang15_mplbl.mgz  retinotopy    rh.benson14_eccen.mgz  rh.benson14_varea.mgz  rh.rosenke18_vcatlas.mgz  rh.wang15_mplbl.mgz
    

    The MGZ files that are produced store 1x1xN volumes in surface space whose values are matched to each vertex of the subject's native hemisphere (LH or RH); N is just the vertex count of the associated hemisphere. You can alternatively export the data in FreeSurfer curv-file format; MGZ files can be converted to other formats with FreeSurfer's mri_convert, nibabel, neuropythy, or many other tools. (The second sketch at the end of this post shows how to read one of these label files.)

    You can also run neuropythy from the command line if you have it installed locally:

    > python -m neuropythy atlas --help
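
If you want to poke at one of the dtseries files from Python, here's a rough sketch of how you might read a dense time series and pull out the left-cortex columns. This is untested and assumes you have nibabel installed; the filename is just a placeholder:

    import nibabel as nib

    # Placeholder path -- substitute whichever dtseries file you end up using.
    img = nib.load("path/to/your_run.dtseries.nii")
    data = img.get_fdata()                 # shape: (timepoints, grayordinates)

    # Axis 1 of a dtseries is a BrainModelAxis describing each grayordinate.
    brain_models = img.header.get_axis(1)
    for name, columns, bm in brain_models.iter_structures():
        if name == "CIFTI_STRUCTURE_CORTEX_LEFT":
            lh_data = data[:, columns]     # timepoints x LH surface vertices
            lh_vertices = bm.vertex        # vertex numbers on the 32k fs_LR mesh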
    

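And here's a second rough sketch (also untested) of reading one of the per-vertex label files that neuropythy writes out and turning it into a V1 mask, using the example subject from above. In the Benson varea map, V1 should be label 1 (2 = V2, 3 = V3):

    import nibabel as nib
    import numpy as np

    # The MGZ is a 1x1xN "volume"; flatten it to one value per LH vertex.
    varea = nib.load("/data/hcp/subjects/111312/lh.benson14_varea.mgz")
    labels = np.asarray(varea.dataobj).reshape(-1)

    v1_mask = labels == 1                  # boolean mask over LH vertices
    print(v1_mask.sum(), "LH vertices labeled as V1")

One caveat: as noted above, these values are matched to the subject's native mesh, so you would still need to resample them onto the 32k fs_LR mesh (or extract your time series on the native mesh) before combining them with the dtseries grayordinates.
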
Hope this helps!
-Noah


Hi Noah,

Thanks so much!! We will definitely try it out!!

Also, we took a look at the *.dtseries.nii files in the resting-state data folder and found that there are four types:
“rfMRI_REST1_LR_Atlas.dtseries.nii”
“rfMRI_REST1_LR_Atlas_hp2000_clean.dtseries.nii”
“rfMRI_REST1_LR_Atlas_MSMAll.dtseries.nii”
“rfMRI_REST1_LR_Atlas_MSMAll_hp2000_clean.dtseries.nii”
We were wondering if you have any clues about the differences between these files, and which one we should use for further connectivity analysis?

Thank you so much!!
Linjing

Again, I haven’t looked at the resting-state data much, so you should confirm all of this before believing me:
I think the hp2000_clean suffix indicates that the time series has been high-pass filtered (2000 s cutoff) and then denoised with HCP’s ICA-FIX pipeline, while the files without it have not been.

The MSMAll indicates that the subject was aligned to the HCP’s fs_LR atlas using resting-state data as well as structural/anatomical data such as curvature and myelin estimates. I believe the others use an alignment the HCP calls MSMSulc, which is closer to (possibly identical to) FreeSurfer’s traditional registration based on sulcal curvature alone. This matters if you are doing cross-subject comparisons; one thing you’ll probably want to check is whether the resting-state data you’ll be analyzing/comparing between subjects was itself used to drive the MSMAll alignment.

Also, this page has some useful documentation.
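
In case it’s useful, here’s a minimal, untested sketch of what a seed-based connectivity computation on one of the cleaned runs could look like with nibabel and numpy; the seed indices are a placeholder, not a real ROI:

    import nibabel as nib
    import numpy as np

    img = nib.load("rfMRI_REST1_LR_Atlas_MSMAll_hp2000_clean.dtseries.nii")
    data = img.get_fdata()                      # (timepoints, grayordinates)

    # Placeholder: replace with the grayordinate indices of your ROI (e.g., V1).
    seed_cols = np.arange(100)
    seed = data[:, seed_cols].mean(axis=1)      # mean seed time series

    # Pearson correlation of the seed with every grayordinate.
    z = (data - data.mean(axis=0)) / data.std(axis=0)
    zs = (seed - seed.mean()) / seed.std()
    fc_map = (z * zs[:, None]).mean(axis=0)     # one r value per grayordinate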

Hi Linjing,

I have the same question about Python packages or software suitable for analyzing resting-state data in surface space. Have you found a solution yet?

Thank you!

Best,
Jun