I’ll start with the easiest part.
Using containers will give you great stability for your development environment, especially for dependencies that don't change much and for versions that have to be pinned to a specific release (for instance, the code you linked won't work with FSL 6.0).
However, developing with containers can be tedious unless you discipline yourself to stay organized and keep plenty of shortcuts at hand (e.g., a bash alias that automatically adds all the bind mounts, mounts your code folder at the right path inside the image, etc.). In other words, developing with containers pays off in the long run, but it has quite a steep learning curve.
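To make the "shortcuts" idea concrete, here is a minimal sketch of the kind of bash helper I mean. The image name, paths, and function name are all placeholders; adapt them to your setup:

```shell
# Sketch of a container shortcut (image name and mount paths are placeholders).
# Bind-mounts your code and data folders so edits on the host are immediately
# visible inside the container.
nipype_dev() {
    docker run --rm -it \
        -v "$HOME/code/myproject":/src/myproject \
        -v "$HOME/data":/data:ro \
        myproject:latest "$@"
}
```

With something like this defined in your `.bashrc`, `nipype_dev python /src/myproject/run.py` runs your current working copy inside the pinned environment.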
A final big plus of developing with containers is that deploying your stack to HPC or the cloud (or a new laptop, for what it's worth) then becomes really easy. Other nice features of containers for reproducibility are described in more depth here and here.
To be clear, nipype will be necessary in either case. If you take the container route, it will be installed as part of the container's "recipe", but you need nipype anyway for this code to run.
Okay, I think that code is a good starting point, but be ready to master your nipype-fu. Nipype is the workflow framework we use in that code (and also in fMRIPrep). If you want to start playing with it, please check out these wonderful tutorials: https://miykael.github.io/nipype_tutorial/.
Once you have installed all the tools needed to run those tutorials on your machine, you'll be really close to running our code (containers may be of great help here).
Finally, the group level analysis should be easily adaptable, since its inputs are going to be individual statistical maps. If you run, e.g., three levels of analysis, the second will also be quite standard, and so on. In other words, the first level analysis will take most of your adaptation time. If you want a little roadmap, I propose the following:
- Decide whether you have time to learn containers.
- Set up an environment (i.e., either reuse our containers from that project or install nipype and all dependencies on your laptop by hand).
- Familiarize yourself with nipype through the tutorials.
- Run the protocol on your own (using ds003 and the code we posted).
- Identify the similarities between your analysis and the one we proposed.
- Write the code that interprets the events and regressors files and expresses the contrasts you want in a way nipype understands. I would probably start with the easiest task. The ds109 branch of the repo contains a more elaborate (unfinished) example for a different dataset.
- Write the first level analyses for all your tasks.
- Move on to subsequent levels of analysis.
I hope this is enough to get you started. Let me know if there is something I haven’t covered with enough detail.