SPM + derivative orthogonalization question in Nilearn

Summary of what happened:

I’ve just realized that the orthogonalize function Nilearn uses to orthogonalize the derivative regressor with respect to the non-derivative term (e.g., in make_first_level_design_matrix) achieves a dot product of zero, but not a correlation of zero (the common statistical interpretation of orthogonality). The implication is that the non-derivative term’s estimate can still be impacted by the derivative term, and I wondered if there is a benefit to this that I’m missing. Is there a reason the orthogonalize function enforces a dot product of zero rather than a correlation of zero?

I was just curious, thanks!
Jeanette

Command used (and if a helper script was used, a link to the helper script or the command generated):

import numpy as np
import pandas as pd
from nilearn.glm.first_level import make_first_level_design_matrix

events_test = pd.DataFrame(
    {
        'onset': np.linspace(10, 80, 8),
        'trial_type': ['trial'] * 8,
        'duration': [1] * 8,
    }
)
frame_times_test = np.arange(0, 100, 1)

des = make_first_level_design_matrix(
    frame_times_test,
    events=events_test,
    hrf_model='spm + derivative',
    drift_model=None,
)

print(des.corr())  # correlation between 'trial' and 'trial_derivative' is nonzero
print(np.dot(des['trial'], des['trial_derivative']))  # dot product is ~0
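For illustration of the distinction (plain NumPy, independent of Nilearn): two vectors can have a zero dot product yet a nonzero Pearson correlation, because correlation is the normalized dot product of the *mean-centered* vectors. The vectors below are made up for the example:

```python
import numpy as np

# Two vectors with zero dot product but nonzero correlation.
a = np.array([1.0, -1.0, 2.0])
b = np.array([2.0, 2.0, 0.0])

print(np.dot(a, b))             # 0.0 -> "orthogonal" in the linear-algebra sense
print(np.corrcoef(a, b)[0, 1])  # ≈ -0.76 -> correlated in the statistical sense

# Correlation is the cosine similarity of the demeaned vectors:
a_c = a - a.mean()
b_c = b - b.mean()
r = np.dot(a_c, b_c) / (np.linalg.norm(a_c) * np.linalg.norm(b_c))
```

So unless the regressors are demeaned (or a constant is included in the projection), a zero dot product does not imply zero correlation.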

Version:

nilearn 0.11.0, python 3.12.7



No idea of why this choice was made.

@bthirion may know.

In any case probably worth documenting somewhere.
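If zero correlation were the desired behavior, one way to get it would be to project out the constant as well as the main regressor, so the residual is orthogonal to both. A minimal sketch with plain NumPy stand-in vectors (hypothetical, not Nilearn's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)  # stand-in for the main HRF regressor
d = rng.normal(size=100)  # stand-in for the derivative regressor

# Projecting d onto the complement of x alone gives a zero dot product only.
d_orth = d - x * (np.dot(d, x) / np.dot(x, x))

# Also projecting out the constant (via least squares on [1, x]) leaves a
# residual orthogonal to both, which implies zero correlation with x.
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, d, rcond=None)
d_resid = d - X @ beta
```

Since d_resid has zero mean and a zero dot product with x, its Pearson correlation with x is exactly zero.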

Opened an issue for it.