FitLins: applying slice-timing correction to event onsets

Summary of what happened:

I am using FitLins via Apptainer (Singularity) on event-related task fMRI data in CIFTI fsLR format, preprocessed through fMRIPrep with slice-timing correction enabled. The notes here indicate that I should shift the onsets in `*_events.tsv` so the model matches the preprocessed data. Makes sense! But pybids-transforms-v1 does not currently have the same formula transformation that bidspm/bids-matlab provides. Hardcoding the `onset` value in events.tsv to the `shifted_onset` value works, but isn't ideal for reproducibility.

I tried a few variations of existing transforms but haven't found the right combination to adjust the onsets within the BIDS stats model file. For example, I tried to `Assign` the `shifted_onsets` column to `onset`, but got an error that the target column was not sparse. I might try adding a sparse column to events.tsv and re-trying `Assign`. I also tried to `Delete` `onset` and `Rename` `shifted_onsets`, but `onset` was not listed as editable/available for that transform.

My question is: is there a way to do this with the existing transforms?

Alternatively, I could try to edit the manually prepared package to add the transform I want, or use nilearn directly, but I'm not super Python-savvy, so I thought I'd ask here first. Thanks!
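For reference, shifting the onsets in a script (rather than hand-editing events.tsv) keeps the change reproducible. Below is a minimal, stdlib-only sketch, assuming the shift equals the `StartTime` value that fMRIPrep records in the preprocessed BOLD sidecar; the function name and toy values are mine, and the sign convention (subtracting `StartTime` from onsets) should be checked against the fMRIPrep documentation:

```python
import csv
import io

def shift_onsets(tsv_text: str, start_time: float) -> str:
    """Subtract start_time seconds from the 'onset' column of events.tsv content."""
    rows = list(csv.DictReader(io.StringIO(tsv_text), delimiter="\t"))
    for row in rows:
        row["onset"] = f"{float(row['onset']) - start_time:.3f}"
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=rows[0].keys(),
                            delimiter="\t", lineterminator="\n")
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()

# Toy example: StartTime = 0.5 s is illustrative; read the real value from
# the fMRIPrep BOLD sidecar JSON ("StartTime" field) for each run.
events = ("onset\tduration\ttrial_type\n"
          "10.0\t2.0\tFalseBelief_story\n"
          "14.0\t2.0\tFalsePhoto_story\n")
print(shift_onsets(events, 0.5))
```

Writing the result back out as a new `*_events.tsv` per run keeps the raw BIDS dataset untouched.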

Command used (and if a helper script was used, a link to the helper script or the command generated):

# run fitlins
apptainer run \
  -B ${BIDS_root}:/data/:ro \
  -B ${fp_dir}:/fp/:ro \
  -B ${out_dir}:/out/ \
  -B ${work_dir}:/work/ \
  -B ${mod_file}:/mod/ \
  --cleanenv ${cont_img} \
  --verbose \
  --database-path /data/sourcedata/workdir/dbcache \
  --space fsLR \
  --desc-label '' \
  --model /data/models/model-falsebelief_smdl.json \
  --derivatives /fp/ \
  --smoothing 6:run:iso \
  --estimator nilearn \
  --drift-model cosine \
  --work-dir /work/ \
  /data/ /out/ run

My model file

{
  "Name": "FalseBeliefTask",
  "BIDSModelVersion": "1.0.0",
  "Description": "created by Colleen Hughes Sept 2025",
  "Input": {"subject": ["10"], "task": ["falsebelief"]},
  "Nodes": [
    {
      "Level": "Run",
      "Name": "run_level",
      "GroupBy": ["run", "subject"],
      "Transformations": {
        "Transformer": "pybids-transforms-v1",
        "Instructions": [
          {
            "Name": "Factor",
            "Input": ["trial_type"]
          },
          {
            "Name": "Convolve",
            "Input": [
              "trial_type.FalseBelief_story",
              "trial_type.FalseBelief_statement",
              "trial_type.FalsePhoto_story",
              "trial_type.FalsePhoto_statement"
            ],
            "Model": "spm"
          }
        ]
      },
      "Model": {
        "X": [
          1,
          "trial_type.FalseBelief_story",
          "trial_type.FalseBelief_statement",
          "trial_type.FalsePhoto_story",
          "trial_type.FalsePhoto_statement",
          "trans_x",
          "trans_y",
          "trans_z",
          "rot_x",
          "rot_y",
          "rot_z",
          "non_steady_state*"
        ],
        "Type": "glm"
      },
      "Contrasts": [
        {
          "Name": "allbelief_v_allphoto",
          "ConditionList": ["trial_type.FalseBelief_story",
                            "trial_type.FalseBelief_statement",
                            "trial_type.FalsePhoto_story",
                            "trial_type.FalsePhoto_statement"],
          "Weights": [0.5,0.5,-0.5,-0.5],
          "Test": "t"
        },
        { 
          "Name": "statement_belief_v_photo",
          "ConditionList": ["trial_type.FalseBelief_statement",
                            "trial_type.FalsePhoto_statement"],
          "Weights": [1,-1],
          "Test": "t"
        }
      ]
    },
    {
      "Level": "Subject",
      "Name": "subject_level",
      "GroupBy": ["subject", "contrast"],
      "Model": {"X": [1], "Type": "meta"},
      "DummyContrasts": {"Test": "t"}
    },
    {
      "Level": "Dataset",
      "Name": "one-sample_dataset",
      "GroupBy": ["contrast"],
      "Model": {"X": [1], "Type": "glm"},
      "DummyContrasts": {"Test": "t"}
    }
  ]
}

Version: 0.11.0

Environment (Docker, Singularity / Apptainer, custom installation):

Apptainer

Data formatted according to a validatable standard? Please provide the output of the validator:

Validated model file here: https://bids-standard.github.io/stats-models/validator.html

Hi Colleen,

I had a chat with @effigies about this, and we think this is something that you, as a user, shouldn't have to deal with explicitly by modifying your variables.

Instead, FitLins should be aware of the metadata indicating that slice-timing correction was applied (fMRIPrep already annotates this with the StartTime metadata field) and adjust your variables accordingly for you.

However, this is not currently implemented. Chris is opening an issue in FitLins and we'll work on it for a future release. Be careful when that lands: FitLins will operate differently, so any hardcoded changes to your events may over-correct once FitLins applies the shift automatically. By default we will issue a warning to make this more obvious.

In the meantime, you can also try the Lag transformation. In theory that would be the correct thing to do otherwise, but it is an untested solution, so I can't guarantee it does the right thing.

Thanks for bringing this to our attention!

Awesome, I appreciate that you all are working on a solution. I think I tried Lag, but it used the value of a lagged row rather than subtracting the appropriate number of seconds from the onset. I'll double-check my understanding.
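The distinction above can be illustrated with a toy sketch (purely illustrative, not pybids code): a row-wise lag moves values between rows, while the slice-timing adjustment needed here is a constant shift in time applied to every onset.

```python
# Toy illustration (not pybids code) of why a row-wise lag differs
# from a constant time shift of event onsets.

def lag_rows(values, n=1):
    """Row-wise lag: each entry takes the value n rows earlier (None at the start)."""
    return [None] * n + list(values[:-n])

def shift_times(onsets, start_time):
    """Constant time shift: subtract start_time seconds from every onset."""
    return [t - start_time for t in onsets]

onsets = [10.0, 14.0, 18.0]
print(lag_rows(onsets))          # [None, 10.0, 14.0] -- values move between rows
print(shift_times(onsets, 0.5))  # [9.5, 13.5, 17.5]  -- every onset moves in time
```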

P.S. I also noticed that smoothing does not operate on CIFTI files, which is noted here. It might be good to throw an explicit error if the --smoothing argument is used with CIFTI files; I only realized this when varying the kernel size and seeing no change in the outputs. My workaround is to smooth with Connectome Workbench, put the outputs in a new derivatives directory using the fMRIPrep file names, and point FitLins at that directory instead of the fMRIPrep derivatives. Just an FYI if others come across the same use case. It's understandable that FitLins doesn't do everything for every case, and it's been useful for getting me started with BIDS stats models and nilearn with CIFTI data.
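For others following this workaround, a hedged sketch of the Workbench smoothing step is below. All file names are illustrative, and the exact `wb_command -cifti-smoothing` options should be verified against the Workbench documentation (in particular the `-fwhm` flag and the smoothing direction for dtseries files):

```python
import shutil
import subprocess

# Illustrative paths; substitute your own fMRIPrep outputs and fsLR surfaces.
cifti_in = "sub-10_task-falsebelief_space-fsLR_den-91k_bold.dtseries.nii"
cifti_out = "sub-10_task-falsebelief_space-fsLR_den-91k_desc-smoothed_bold.dtseries.nii"
fwhm = "6"  # mm, matching the intent of --smoothing 6:run:iso

cmd = [
    "wb_command", "-cifti-smoothing", cifti_in,
    fwhm, fwhm,          # surface kernel, volume kernel
    "COLUMN",            # smoothing direction typically used for dtseries
    cifti_out,
    "-fwhm",             # interpret kernels as FWHM rather than sigma
    "-left-surface", "sub-10_hemi-L_midthickness.surf.gii",
    "-right-surface", "sub-10_hemi-R_midthickness.surf.gii",
]
print(" ".join(cmd))

# Only invoke Workbench if it is actually on PATH.
if shutil.which("wb_command"):
    subprocess.run(cmd, check=True)
```

The smoothed files then go into a separate derivatives directory (keeping the fMRIPrep naming) that FitLins' --derivatives flag can point at.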

Sounds good. I could be wrong about Lag, so your recollection may be correct.

Thanks for the comment on the CIFTI smoothing. I will open an issue to further document / fix / warn.
