Summary of what happened:
I am running FitLins via Apptainer (Singularity) on event-related task fMRI data in CIFTI fsLR format, preprocessed with fMRIPrep with slice-timing correction enabled. The notes here indicate I should shift my onsets in *_events.tsv so the model matches the preprocessed data. Makes sense! However, pybids-transforms-v1 currently has no formula transformation equivalent to the one in bidspm/bids-matlab. Hardcoding the 'onset' values in events.tsv to the 'shifted_onset' values works, but isn't ideal for reproducibility. I tried a few variations of the existing transforms but haven't found the right combination to adjust the onsets within the BIDS stats model file. For example, I tried to Assign the 'shifted_onsets' to 'onset' but got an error that the Target column was not sparse. I might try adding a sparse column to events.tsv and then re-trying Assign. I also tried to Delete 'onset' and Rename 'shifted_onsets', but 'onset' was not listed as editable/available for that transform.
My question: is there a way to do this with the existing transforms?
Alternatively, I could try editing the package manually to add the transform I want, or use nilearn directly, but I'm not very Python savvy, so I thought I'd ask here first. Thanks!
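In the meantime, here is the workaround I'm using so the hardcoding is at least scripted and reproducible rather than done by hand. This is only a sketch with made-up data: it assumes 'onset' is the first column and that a 'shifted_onset' column already exists in each events.tsv (both assumptions about your files), and it writes to stdout so nothing is overwritten.

```shell
#!/usr/bin/env bash
# Sketch: copy the shifted_onset column into onset for one events.tsv.
# Assumes onset is column 1 and a shifted_onset column exists (assumptions).
shift_onsets () {
  awk 'BEGIN { FS = OFS = "\t" }
       NR == 1 { for (i = 1; i <= NF; i++) if ($i == "shifted_onset") col = i; print; next }
       { $1 = $col; print }' "$1"
}

# Tiny made-up example (not real data) to show the effect:
printf 'onset\tduration\ttrial_type\tshifted_onset\n10.0\t2.0\tFalseBelief_story\t9.0\n' > events_demo.tsv
shift_onsets events_demo.tsv
```

Redirecting the output over the original file (via a temp file) would make the edit permanent; keeping the script in the repo records exactly how the onsets were changed.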
Command used (and if a helper script was used, a link to the helper script or the command generated):
# run fitlins
apptainer run \
  -B ${BIDS_root}:/data/:ro \
  -B ${fp_dir}:/fp/:ro \
  -B ${out_dir}:/out/ \
  -B ${work_dir}:/work/ \
  -B ${mod_file}:/mod/ \
  --cleanenv ${cont_img} \
  --verbose \
  --database-path /data/sourcedata/workdir/dbcache \
  --space fsLR \
  --desc-label '' \
  --model /data/models/model-falsebelief_smdl.json \
  --derivatives /fp/ \
  --smoothing 6:run:iso \
  --estimator nilearn \
  --drift-model cosine \
  --work-dir /work/ \
  /data/ /out/ run
My model file:
{
"Name": "FalseBeliefTask",
"BIDSModelVersion": "1.0.0",
"Description": "created by Colleen Hughes Sept 2025",
"Input": {"subject": ["10"], "task": ["falsebelief"]},
"Nodes": [
{
"Level": "Run",
"Name": "run_level",
"GroupBy": ["run", "subject"],
"Transformations":{
"Transformer":"pybids-transforms-v1",
"Instructions":[
{
"Name":"Factor",
"Input":["trial_type"]
},
{
"Name":"Convolve",
"Input":["trial_type.FalseBelief_story",
"trial_type.FalseBelief_statement",
"trial_type.FalsePhoto_story",
"trial_type.FalsePhoto_statement"
],
"Model": "spm"
}
]
},
"Model": {"X": [1,
"trial_type.FalseBelief_story",
"trial_type.FalseBelief_statement",
"trial_type.FalsePhoto_story",
"trial_type.FalsePhoto_statement",
"trans_x",
"trans_y",
"trans_z",
"rot_x",
"rot_y",
"rot_z",
"non_steady_state*"
],
"Type": "glm"},
"Contrasts": [
{
"Name": "allbelief_v_allphoto",
"ConditionList": ["trial_type.FalseBelief_story",
"trial_type.FalseBelief_statement",
"trial_type.FalsePhoto_story",
"trial_type.FalsePhoto_statement"],
"Weights": [0.5,0.5,-0.5,-0.5],
"Test": "t"
},
{
"Name": "statement_belief_v_photo",
"ConditionList": ["trial_type.FalseBelief_statement",
"trial_type.FalsePhoto_statement"],
"Weights": [1,-1],
"Test": "t"
}
]
},
{
"Level": "Subject",
"Name": "subject_level",
"GroupBy": ["subject", "contrast"],
"Model": {"X": [1], "Type": "meta"},
"DummyContrasts": {"Test": "t"}
},
{
"Level": "Dataset",
"Name": "one-sample_dataset",
"GroupBy": ["contrast"],
"Model": {"X": [1], "Type": "glm"},
"DummyContrasts": {"Test": "t"}
}
]
}
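For reference, my reading of the Assign description in the spec is that the Target should be a sparse variable (like trial_type), with TargetAttr naming the attribute to overwrite, rather than targeting 'onset' itself, which may be why I got the "not sparse" error. Something like the following might be closer; this is an untested sketch, and it assumes my 'shifted_onset' column exists in events.tsv. If it works, I suspect it would need to go before the Factor step in Instructions.

```json
{
  "Name": "Assign",
  "Input": ["shifted_onset"],
  "Target": ["trial_type"],
  "TargetAttr": "onset"
}
```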
Version: 0.11.0
Environment (Docker, Singularity / Apptainer, custom installation):
Apptainer
Data formatted according to a validatable standard? Please provide the output of the validator:
Validated model file here: https://bids-standard.github.io/stats-models/validator.html
Relevant log outputs (up to 20 lines):