Thank you for your answer! However, I was looking for the warps for the previous (non-symmetrical) atlas, since it is the one I was already using. If they are not available, I will use the ones you showed me with the new symmetrical template.
I have noticed on GitHub that the dHCP pipeline uses NiBabel’s Cifti2 to convert GIFTI surfaces (and metric files, commented out) to CIFTI. This is unlike the HCP pipeline, which uses the Connectome Workbench’s -cifti-create-dense-timeseries here.
Is there something relating infant population that motivated this change? Does it produce better results in your experience?
Not at all. I actually use both in various places. The code you mentioned where I use NiBabel Cifti2 was simply because I was already working in Python.
Thank you very much for your answer, it is extremely helpful!
I noticed you just pushed an hcp_surface.sh script. Thank you for publishing such great work! Funnily enough, I am just now thinking about how to go about performing the next step, adding the subcortical voxels to the CIFTI on the dhcp32kSym mesh.
I see that the NiBabel code I referred to before creates a mask of the subcortical structures. My first guess would be that this would not work in atlas space, since the volumetric atlas is not in the same space as the surface one. However, I am not familiar enough with the CIFTI format; maybe masking the BOLD signal in atlas space is a valid input for -cifti-create-dense-timeseries on a dhcp32kSym mesh? The HCP pipeline inputs subcortical data in an AtlasSubcortical space, but I can’t quite figure out where it comes from.
My understanding is that the adult HCP pipeline is able to resample the native volumetric data into MNI space in a single resampling step. The surface mapping can then be done in MNI space, which means that the subcortical voxels are already aligned.
In the dHCP pipeline we cannot currently achieve a single resampling to template space because of the slice-to-volume motion correction and motion-by-susceptibility distortion correction that we do. Therefore, we do the surface mapping in the motion- and distortion-corrected denoised functional space. This is straightforward for the cortical surface, but as you correctly point out, it means that the subcortical voxels are not aligned across subjects. I have attempted to solve this by masking the native functional space subcortical voxels and then resampling them to the dHCP 40wk volumetric template space. An example of this can be seen in dhcp/func/surface.py on line 741.
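For illustration, the mask-then-resample step could be sketched with standard FSL tools roughly as follows (all filenames are hypothetical; the actual implementation is the code in dhcp/func/surface.py referenced above):

```shell
# 1. Mask the denoised functional data with a subcortical mask defined
#    in native functional space.
fslmaths func_denoised.nii.gz -mas subcortical_mask.nii.gz func_subcortical.nii.gz

# 2. Warp the masked voxels to the dHCP 40wk volumetric template space,
#    using the native-functional-to-template warp produced by the pipeline.
applywarp --in=func_subcortical.nii.gz \
          --ref=dhcp40wk_template.nii.gz \
          --warp=func_to_template_warp.nii.gz \
          --interp=spline \
          --out=func_subcortical_dhcp40wk.nii.gz
```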
I also resample the surface geometry to the dHCP 40wk volumetric template space (see lines 764-781 in dhcp/func/surface.py) so that I can visualise the surface vertices and subcortical voxels together in wb_view.
It is still very much an open question what the best way to map this data to the surface is. We are trialling a number of approaches that extend the HCP approach in various ways. All the code I have trialled so far is in either dhcp/func/surface.py or dhcp/func/hcp_surface.sh.
I would be glad to hear if you have success with your surface mapping.
Thank you very much for your response, it has been extremely helpful, as always. I have been reviewing both the adult and developing HCP articles to get a full picture of this issue.
If I understand correctly, the resulting CIFTI, where both the cortical surface and the subcortical volumes are in the dhcp40wk volumetric template space, would be appropriate to perform group analysis, am I right? If that is the case, the cortical surface atlas would be needed for resampling every subject to the same mesh, so the actual cortical surface template is only used for visualization purposes?
To get both subcortical structures and cortical surfaces into the same (group) space, I first thought of performing an affine registration to the native T2 space, then mapping the obtained volume to the native surface (in T2 space), and finally aligning that surface to the cortical surface atlas. Therefore, the masked subcortical volumes would need to be warped from the native T2 space to the Conte69 space where the atlas is initialized. Would it be right to say this is not a good idea on the dHCP data because volumes should be mapped to their native surfaces in the functional space?
I should mention the HCP pipeline recommends mapping volumes to surfaces in the native space, before it goes through non-linear transformations or resamplings, which makes sense. But I fail to see a reason why that native space could not be T1 for adults, instead of functional space.
I hope this has not been overly confusing. I apologise in advance if that is so, I will work on a diagram to keep track of all the different space transformations and meshes.
I have done my best to address your questions. I hope I have not confused things further.
Yes, as I understand it. That is the space I am doing group analysis in.
I am not completely sure I understand this question. The cortical surface atlas and the cortical surface template are the same thing?
The only part of a CIFTI that has a space is the subcortical voxels. The cortical vertices in the CIFTI will have vertex correspondence with a particular separate mesh surface geometry (*.surf.gii), but the space of that geometry is irrelevant.
I initially sample the native fMRI to the native subject surface and add it to a CIFTI with the subcortical voxels in native fMRI space (which I name as mesh-native_space-func; you can see more info on my naming scheme here). The “space” of the cortical surface will depend on the geometry file (*.surf.gii) you choose to view it with. As long as there is vertex correspondence with the CIFTI, the geometry file can be in any space.
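As a toy illustration of this point (plain Python, not the real NiBabel/CIFTI data structures; all names are made up), the dense axis of a CIFTI is an ordered list of models, where cortical entries are vertex indices with no coordinates and subcortical entries are voxel indices that only acquire a position via the volume space stored in the file:

```python
# Cortical models: (structure, vertex index) -- no spatial coordinates.
# Their position comes from whichever *.surf.gii you view them with,
# as long as that geometry has matching vertex numbering.
cortex_left = [("CORTEX_LEFT", v) for v in range(4)]

# Subcortical models: (structure, (i, j, k)) voxel indices -- these only
# make sense together with the volume affine stored in the CIFTI, so the
# subcortical part is the only part of the file that truly has a space.
thalamus_left = [("THALAMUS_LEFT", (10, 12, 9)),
                 ("THALAMUS_LEFT", (11, 12, 9))]

dense_axis = cortex_left + thalamus_left

surface_vertex_models = [m for m in dense_axis if m[0].startswith("CORTEX")]
voxel_models = [m for m in dense_axis if not m[0].startswith("CORTEX")]

print(len(surface_vertex_models), len(voxel_models))  # 4 2
```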
To do group analysis I resample the cortical surface (in the CIFTI) to have vertex correspondence with the 40wk surface template, and I warp the subcortical voxels to be in the dHCP 40wk volumetric template space. So in the resultant CIFTI, the cortical surface is still space agnostic, but has vertex correspondence with the 40wk surface template.
To visualise this CIFTI such that the subcortical voxels, and the cortical vertices can be displayed simultaneously, you will require a surface geometry (*.surf.gii) that has vertex correspondence with the 40wk surface template and is in the space of the 40wk volumetric template. The easiest way to do this is to resample a native (T2w space) surface geometry to have vertex correspondence to the 40-week surface template, then warp that geometry to the 40-week volumetric template using the provided warp from structural-to-template space. This will now have vertex correspondence with the surface atlas and be in the same space as the volumetric atlas. Once you have this you can use it for all subjects…
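The two-step recipe above could look roughly like this with wb_command (filenames and sphere choices are hypothetical; check the option docs before relying on the warp conventions, since -surface-apply-warpfield expects the inverse warp as its main argument, with the forward warp supplied via -fnirt for FNIRT-style warps):

```shell
# 1. Resample the native (T2w space) midthickness geometry so its
#    vertices correspond to the 40wk surface template mesh.
wb_command -surface-resample \
    sub-01_hemi-L_midthickness.surf.gii \
    sub-01_hemi-L_sphere.reg.surf.gii \
    dhcp40wk_hemi-L_sphere.surf.gii \
    BARYCENTRIC \
    sub-01_hemi-L_den-dhcp40wk_midthickness.surf.gii

# 2. Warp that geometry from T2w space into the 40wk volumetric
#    template space using the structural-to-template warps.
wb_command -surface-apply-warpfield \
    sub-01_hemi-L_den-dhcp40wk_midthickness.surf.gii \
    template_to_T2w_warp.nii.gz \
    sub-01_hemi-L_space-dhcp40wk_midthickness.surf.gii \
    -fnirt T2w_to_template_warp.nii.gz
```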
I hope that makes sense? I still find this very confusing…
Because getting the fMRI into T1w space requires resampling, and therefore interpolation, and each interpolation step introduces error. This is why I sample to the surface in native fMRI space.
extdhcp40wk refers to the 40wk template for the extended dHCP volumetric atlas. The original dHCP volumetric atlas spanned 36-44 weeks, for release 03 we extended that to be 28-44 weeks. Unfortunately the two atlas versions are not exactly aligned so we use different labels dhcp40wk and extdhcp40wk to differentiate. There is more information here.
I have a small confusion about the registration process in here.
The first step is to align the volumetric image to the 40w volumetric template. Does this template refer to the dHCP template, or the template used in the minimal processing pipeline (which, if I understand correctly, is a modified version of this one)? Thanks in advance.
Thanks for your reply! I’ve been using the Serag et al. 40w template. Do you know the implications of the differences between the templates? I think the problem comes at the point where you use the predefined pre-rotation matrix. Do you think I should rerun all the registrations? Thanks in advance.
First of all, thank you very much for taking the time to answer my questions in such detailed fashion. Everything is much clearer, but there are still a couple of issues I am thinking about:
You are right, I had totally missed that an affine transformation implies a resampling because of the change in resolution. Additionally, is it also true that -surface-apply-affine should not perform any resampling, and should use flirt in a “single stage mode” to apply the rigid transformation matrix, keeping the original resolution (vertex number)? I have not been able to confirm this with Workbench’s documentation.
Interpolation - select the interpolation method to be used in the final (reslice) transformation (it is not used for the estimation stage - trilinear interpolation is always used for the estimation of the transformation).
I am assuming this means that some interpolation error is already built into the previously calculated transformation matrices, is that right? I cannot think of a way of bypassing that error, so I understand the idea is to avoid adding additional sources of error.
I am slightly confused. You linked to the augmented dHCP volumetric atlas, which is the same (same space) as the original but with the computed week-to-week warps, isn’t it? Is this the link to the extended atlas discussed above?
It is hard to know the exact implications of using the Serag et al. atlas rather than the Schuh et al. atlas - do the two 40w templates look rotated relative to one another? The only way to really know would be to run both and look at the surfaces, but that seems unnecessary. I would personally re-run the registration using the Schuh et al. atlas.
-surface-apply-affine will transform the geometric coordinates in a *.surf.gii but will not touch metric data. It does not use flirt at all. It just uses a flirt-style transform matrix to define the transform.
There is not any interpolation error built into the transform matrices. The only error they might contain is alignment error. The interpolation error comes from resampling a volumetric image when you apply the transformation matrix.
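To make the distinction concrete, a call to -surface-apply-affine looks roughly like this (hypothetical filenames; the -flirt option supplies the source and target volumes so the FSL matrix convention can be interpreted correctly). No image is resampled, so no interpolation happens - only the vertex coordinates are moved:

```shell
wb_command -surface-apply-affine \
    sub-01_hemi-L_midthickness.surf.gii \
    T2w_to_template.mat \
    sub-01_hemi-L_space-template_midthickness.surf.gii \
    -flirt sub-01_T2w.nii.gz template.nii.gz
```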
Fair enough, it is unfortunately confusing.
The original dHCP volumetric atlas spanned 36-44 weeks and is available at:
I augmented this original-atlas with week-to-week transforms and changed some filenames for compatibility with the fMRI pipeline. The augmented original atlas is here:
For release 03, the original atlas was extended to 28-44 weeks. Unfortunately it is not exactly aligned with the original-atlas. As far as I know this has not been released yet.
I augmented this extended-atlas with week-to-week transforms, changed some filenames for compatibility with the fMRI pipeline, and added transforms to MNI space and the original-atlas. It is available here:
Thank you very much for the detailed response. Trying to fully understand hcp_surface.sh, I ran it using the transformations available with the 2nd release, i.e., native to dhcp40wk.
Unfortunately, -surface-apply-warpfield returns the result on the left; on the right, the input surface:
The documentation is scarce, and therefore I am not sure whether this Workbench command is supposed to completely ignore the cortical grayordinates, but it clearly isn’t, for some reason. Do you have any clue what the problem could be?
In a somewhat unrelated topic: I used the script by Emma and @lzjwilliams to calculate the surface transforms from native to symmetrical surface atlas mesh. Will those be already provided with the 3rd release?
I am not really sure what you mean by “completely ignore the cortical grayordinates”?
The command -surface-apply-warpfield will transform the coordinates of the input surface to the target space of the warp. The warp you are using is non-linear and goes from a single-subject native space to a group space, which is a very different space. I would not expect the transformed surface to look as smooth as the original surface…
It is not obvious to me from these images that this is a problem. What would you be expecting the transformed surface to look like?
I’ve been running all the registrations to the new symmetric atlas (using Schuh’s template to initialize the registration). I would like to know how much variability to expect in the results, for example:
This image shows the labels of two different subjects mapped to template space. Is this variability expected? And is this the correct approach to assess the quality of the registration? If not, what would be the best map to inspect? Note that this data is not from the dHCP. Thanks in advance.
Great to see you have some prelim data! Because the registrations are driven using the sulcal depth maps, I would use those to assess the registrations. What I tend to do is merge all the sulcal depth maps for each subject into a single metric file using wb_command -metric-merge, and then view these on the 40 week template space. It’s quick to flick through them in wb_view, and then get an idea of how aligned they are on the surface (and also if you think the sulcal depth maps look legitimate or too distorted). I would also look at the surfaces that have been registered to template space, just to make sure you are happy with their quality. I don’t know if I trust the parcellation on the left, given that there is a small yellow island in the middle of the parietal lobe… but if it is an isolated case it probably won’t impact any group-level analyses (if you have small numbers).
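The merging step mentioned above could look something like this (hypothetical filenames; the sulcal depth maps are assumed to already have vertex correspondence with the 40wk template mesh):

```shell
# Merge per-subject sulcal depth maps into one multi-column metric file,
# then flick through the columns in wb_view on the 40wk template surface.
wb_command -metric-merge group_sulc.shape.gii \
    -metric sub-01_den-dhcp40wk_sulc.shape.gii \
    -metric sub-02_den-dhcp40wk_sulc.shape.gii \
    -metric sub-03_den-dhcp40wk_sulc.shape.gii
```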
One thing you might also want to do if you are performing analyses at an ROI level is pass the surfaces through the M-CRIB-S pipeline here. This will give you regions consistent with the Desikan-Killiany atlas, and overall more granularity.
Thanks for your help! We prepared a presentation with some of the common registration inaccuracies we find in our population. Maybe it is better if I send it to you by mail, as it has a few images. Then you can let me know your opinion. I think we should probably change the config file for this population, but I don’t know which parameters to tune, so any feedback will be really appreciated.
For now, we are performing just voxel-level analysis, but I’ll give this segmentation a try. Thanks in advance!