Testing different reconstructions: proper higher-level modeling in FEAT

Summary of what happened:

We want to test a new reconstruction method vs. the standard recon to see what effect it has on detecting significant brain activation during task fMRI. For initial piloting, we have 3 subjects (s1 has 4 usable runs, s2 has 2 runs, s3 has 4 runs), each run analyzed at the 1st level (e.g., subj1 has run1.feat and recon_run1.feat for each of its 4 runs), so we have 10 1st-level analyses for the standard recon and 10 corresponding 1st-level analyses for the new recon.

How would I best set this up to ask the following questions: standard recon mean across subjects and runs, new recon mean, standard > new recon, and new > standard? Would this be a 2nd level (within subject) and then a 3rd level (across subjects)? Or can it all be modeled in a single 2nd level?
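For concreteness, here is a small sketch enumerating the 1st-level FEAT directories described above. The directory names and layout are hypothetical (assumed from the run1.feat / recon_run1.feat naming in the question), just to make the input lists for a higher-level design explicit:

```python
# Hypothetical layout: one .feat directory per run, per reconstruction,
# following the run1.feat / recon_run1.feat naming described above.
runs_per_subject = {"s1": 4, "s2": 2, "s3": 4}

standard = [f"{subj}/run{r}.feat"
            for subj, n in runs_per_subject.items()
            for r in range(1, n + 1)]
new_recon = [f"{subj}/recon_run{r}.feat"
             for subj, n in runs_per_subject.items()
             for r in range(1, n + 1)]

# 10 first-level analyses per reconstruction, 20 total
print(len(standard), len(new_recon))
```

With a two-stage approach, the `standard` and `new_recon` lists for each subject would feed that subject's 2nd-level (fixed-effects) analysis, and the resulting per-subject .gfeat outputs would be the inputs to the 3rd level.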

Many thanks,
Jen

Command used (and if a helper script was used, a link to the helper script or the command generated):

PASTE CODE HERE

Version:

Environment (Docker, Singularity / Apptainer, custom installation):

Data formatted according to a validatable standard? Please provide the output of the validator:

PASTE VALIDATOR OUTPUT HERE

Relevant log outputs (up to 20 lines):

PASTE LOG OUTPUT HERE

Screenshots / relevant information: