Reproducing FreeSurfer's Cortical Thickness Estimation Algorithm

Hey everyone,

I’m working on reproducing FreeSurfer’s cortical thickness estimation for a specific ROI (e.g., superior parietal). My current approach computes thickness by taking, for each vertex on one surface, the distance to the nearest vertex on the other surface (averaged over both directions). While this gives a reasonable approximation, it doesn’t match FreeSurfer’s definition, which uses the distance to the closest point anywhere on the other surface, a point that may lie on an edge or inside a triangle rather than at a vertex.

For example, in the rh_superiorparietal region:

  • FreeSurfer reports: 2.183 mm
  • My implementation gives: 2.34 mm

I suspect the discrepancy comes from this difference in how the closest point is defined. However, implementing the exact approach, where each vertex is tested against every triangle of the other surface to find the true closest point, is computationally expensive if done naively.
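
For reference, the per-triangle primitive I have in mind is the standard closest-point-on-triangle test (barycentric region checks, as described in Ericson's "Real-Time Collision Detection"). This is just my own numpy sketch of that primitive, not FreeSurfer's code, and the function name is mine:

import numpy as np

def closest_point_on_triangle(p, a, b, c):
    """Closest point to p on triangle (a, b, c), via barycentric region tests."""
    ab, ac, ap = b - a, c - a, p - a
    d1, d2 = np.dot(ab, ap), np.dot(ac, ap)
    if d1 <= 0 and d2 <= 0:
        return a                                  # vertex region A
    bp = p - b
    d3, d4 = np.dot(ab, bp), np.dot(ac, bp)
    if d3 >= 0 and d4 <= d3:
        return b                                  # vertex region B
    vc = d1 * d4 - d3 * d2
    if vc <= 0 and d1 >= 0 and d3 <= 0:
        return a + (d1 / (d1 - d3)) * ab          # edge region AB
    cp = p - c
    d5, d6 = np.dot(ab, cp), np.dot(ac, cp)
    if d6 >= 0 and d5 <= d6:
        return c                                  # vertex region C
    vb = d5 * d2 - d1 * d6
    if vb <= 0 and d2 >= 0 and d6 <= 0:
        return a + (d2 / (d2 - d6)) * ac          # edge region AC
    va = d3 * d6 - d5 * d4
    if va <= 0 and (d4 - d3) >= 0 and (d5 - d6) >= 0:
        w = (d4 - d3) / ((d4 - d3) + (d5 - d6))
        return b + w * (c - b)                    # edge region BC
    denom = 1.0 / (va + vb + vc)
    return a + ab * (vb * denom) + ac * (vc * denom)  # face interior

# Tiny sanity check: a point above the triangle's interior is closer to the
# face than to any of its vertices.
tri = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
p = np.array([0.3, 0.3, 1.0])
q = closest_point_on_triangle(p, *tri)
print(np.linalg.norm(p - q))                  # 1.0: closest point is in the face interior
print(np.linalg.norm(tri - p, axis=1).min())  # ~1.09: the closest vertex is farther

The check at the end illustrates the kind of gap I think I'm seeing: the nearest vertex of a triangle can be noticeably farther away than the nearest point on its face, which would bias my estimate upward, consistent with 2.34 mm vs 2.183 mm.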

Question:

How can I efficiently compute the shortest vertex-to-surface distance for a given ROI? Since I only need to process one ROI at a time, I’m looking for an optimized approach that avoids brute-force computations.

What I’ve Tried:

  • Vertex-to-Vertex Distance (Current Implementation)
    Fast but not fully accurate (since it doesn’t check distances to edges or face interiors).

  • Vertex-to-Triangle Projection
    Attempted projecting each vertex onto all triangles of the other surface to get the true closest distance.
    Computationally prohibitive when done brute force (roughly |vertices| × |triangles| point-to-triangle tests).

Would using spatial trees (e.g., KD-tree) help in accelerating this process? Are there any libraries that make this kind of calculation easier?
I also tried looking through the FreeSurfer source code, but I couldn’t find the part responsible for this computation.
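
On the library side, the tool I’ve been eyeing is trimesh: as far as I can tell, its proximity query returns the closest point anywhere on a mesh (face, edge, or vertex) and builds a spatial index over the triangles internally, which would cover both parts of my question. Below is a rough, unvalidated sketch of what I mean; the ROI lookup by name is my own assumption about how the annotation colortable is organized:

import numpy as np
import trimesh
from nibabel.freesurfer import read_geometry, read_annot

# Load both surfaces (FreeSurfer surfaces of one hemisphere share vertex indexing)
pial_v, pial_f = read_geometry('/path/to/rh.pial')
wm_v, wm_f = read_geometry('/path/to/rh.smoothwm')

# ROI vertex indices, looked up by name from the annotation's colortable
labels, _, names = read_annot('/path/to/rh.aparc.annot')
roi = np.where(labels == names.index(b'superiorparietal'))[0]

# process=False keeps the original vertex order/indices intact
pial_mesh = trimesh.Trimesh(vertices=pial_v, faces=pial_f, process=False)
wm_mesh = trimesh.Trimesh(vertices=wm_v, faces=wm_f, process=False)

# Closest point anywhere on the other surface, queried only for ROI vertices
_, pial_to_wm, _ = trimesh.proximity.closest_point(wm_mesh, pial_v[roi])
_, wm_to_pial, _ = trimesh.proximity.closest_point(pial_mesh, wm_v[roi])

thickness = (pial_to_wm + wm_to_pial) / 2.0
print(f"rh_superiorparietal mean thickness: {thickness.mean():.3f} mm")

If that’s a sensible direction I’m happy to go with it, but I’d still like to understand how FreeSurfer itself defines the measurement, since a library’s defaults may not match it exactly.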

Current code implementing the vertex-to-vertex definition:

import nibabel as nib
import numpy as np
from nibabel.freesurfer import read_geometry
from tqdm import tqdm

# Load surfaces
white_surface = read_geometry('/path/to/rh.white')
pial_surface = read_geometry('/path/to/rh.pial')
smoothwm_surface = read_geometry('/path/to/rh.smoothwm')

# Load annotation file (labels are indices into the colortable's `names` list)
annot_file = '/path/to/rh.aparc.annot'
labels, _, names = nib.freesurfer.read_annot(annot_file)

# Look up the superior parietal label from the annotation itself
# (more robust than hard-coding the index)
superior_parietal_label = names.index(b'superiorparietal')

# Extract relevant vertices
white_vertices, _ = white_surface
pial_vertices, _ = pial_surface
smoothwm_vertices, _ = smoothwm_surface

# Get vertex indices of the ROI
vertex_indices_in_roi = np.where(labels == superior_parietal_label)[0]

# Compute nearest-vertex distances (note: nearest *vertex*, not nearest point
# on the surface, so this overestimates the true vertex-to-surface distance)
def compute_shortest_distances(vertices1, vertices2):
    distances = []
    for v1 in tqdm(vertices1, desc="Computing distances", unit="vertex"):
        distances.append(np.min(np.linalg.norm(vertices2 - v1, axis=1)))
    return np.array(distances)

# Note: this computes distances for every vertex in the hemisphere,
# even though only the ROI is used afterwards
pial_to_smoothwm_distances = compute_shortest_distances(pial_vertices, smoothwm_vertices)
smoothwm_to_pial_distances = compute_shortest_distances(smoothwm_vertices, pial_vertices)

# Compute thickness for the ROI
cortical_thicknesses = [
    (pial_to_smoothwm_distances[idx] + smoothwm_to_pial_distances[idx]) / 2.0
    for idx in vertex_indices_in_roi
]

# Print the average thickness
print(f"Average Cortical Thickness for rh_superiorparietal: {np.mean(cortical_thicknesses):.2f} mm")

Any insights or suggestions on implementing the exact definition would be greatly appreciated!

Thanks in advance!