I have a quick question about DataLad. I am really enthusiastic about it and would like to roll it out to many users in our institute. However, most people here do their analyses with SPM. Has anyone already attempted to 'weave' regular SPM usage into DataLad? I can imagine a plugin inside SPM that makes some API calls whenever processes complete or files are generated. Another idea would be to have users work inside containers, but I suspect that is a bridge too far for many less tech-savvy users. A final option I can think of is to add files by command line every once in a while, manually or automated. Hoping to hear your opinions on this!
Are you suggesting that SPM would do a system call to datalad add at the end of the execution of each module from the batch interface? Or that there would be a DataLad batch module that a user could call explicitly? An alternative would be to use datalad run, but that would be less straightforward to implement.
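For what it's worth, the first option (a post-module hook) could be as simple as shelling out to datalad save when a module finishes. A minimal sketch, assuming a Python wrapper around the batch run; the helper name, dataset path, and file names are hypothetical, while the CLI options (-d for the dataset, -m for the commit message) are real. From MATLAB/SPM itself the equivalent would be a system() call with the same arguments:

```python
import shlex

def datalad_save_cmd(dataset, message, paths):
    """Build the `datalad save` call a post-module hook could run.

    `dataset` is the root of the DataLad dataset, `message` becomes the
    commit message, and `paths` are the files the module just produced.
    (Helper name is hypothetical; the CLI options are real.)
    """
    return ["datalad", "save", "-d", dataset, "-m", message, *paths]

cmd = datalad_save_cmd(
    "/data/study01",                                   # hypothetical dataset
    "SPM: realignment done",
    ["sub-01/func/rsub-01_task-rest_bold.nii"],        # hypothetical output
)
print(shlex.join(cmd))  # the exact shell command the hook would issue
# Actually running it is one line, e.g. subprocess.run(cmd, check=True),
# left out here so the sketch stays self-contained.
```

The nice part of this approach is that users keep their familiar batch interface; the dataset bookkeeping happens behind the scenes.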
I would say the first option, because I think datalad run requires people to learn new ways of analyzing data. Or is my suggestion really a weird one? Thanks,
If there is interest in making this happen, I'd be happy to help. It is also worth mentioning that it is possible to create a run-like commit record without DataLad managing the execution itself. This is not exposed in the command-line API, but it would be no problem to add that.
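To illustrate what such a record contains: datalad run embeds a machine-readable JSON record in the commit message, fenced by sentinel lines, which is what makes a commit re-executable later. A sketch of assembling one by hand; the field values are illustrative, and the exact sentinel strings mimic what datalad run writes but should be treated as an implementation detail, not a stable API:

```python
import json

# Fields a run record carries (values here are illustrative, not real output).
record = {
    "cmd": "spm_batch preproc.mat",   # hypothetical command line
    "exit": 0,
    "inputs": [],
    "outputs": ["sub-01/func/"],
    "pwd": ".",
}

def run_commit_message(subject, record):
    """Assemble a run-like commit message embedding the JSON record.

    The sentinel lines mimic `datalad run` output; treat the exact
    strings as an assumption rather than a guaranteed format.
    """
    return "\n".join([
        f"[DATALAD RUNCMD] {subject}",
        "",
        "=== Do not change lines below ===",
        json.dumps(record, indent=1, sort_keys=True),
        "^^^ Do not change lines above ^^^",
    ])

msg = run_commit_message("SPM preprocessing", record)
```

A hook that records a finished SPM batch this way would get rerun/provenance support for free, without DataLad ever launching the computation.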
FTR: this DataLad extension has the same needs: it computes something remotely and needs to record what was done in a local dataset. https://github.com/datalad/datalad-htcondor
I'm developing my pipeline with DataLad; once I'm ready to incorporate SPM preprocessing, it would be good to share best practices!