Could anyone please post here a one-liner that will download all templates in TemplateFlow, so that fMRIPrep can run without error on a cluster with no internet access?
I already followed the instructions on the official website and tried to download it via DataLad. DataLad itself pulled dozens of packages into my project's conda environment that I'll never otherwise use (zope.interface?). When I run it, it appears to work (it warns that I haven't configured git, but why should one have to do that?), yet the downloaded templateflow directory ends up populated with subdirectories that contain no actual image files, only broken symlinks pointing to a non-existent local .git directory.
This isn't a new problem. Four years ago I opened a thread here on Neurostars about the same issue (thanks, everyone, for the answers back then). At the time, the solution ended up being to downgrade fMRIPrep. What would the solution be now? This is a different cluster, btw. Could we please have the option of downloading a compressed .tar.gz with everything?
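For what it's worth, a DataLad-free route that has worked for me is the TemplateFlow Python client, which fetches plain files from S3. A minimal sketch, to be run on a machine *with* internet access and then copied/tarred over to the cluster (the cache path is hypothetical, and I'm assuming the client reads `TEMPLATEFLOW_HOME` at import time):

```python
import os

# Must be set before the client is imported; path is hypothetical.
os.environ.setdefault("TEMPLATEFLOW_HOME", "/scratch/myproject/templateflow")

DOWNLOAD = False  # flip to True on a machine with internet access

if DOWNLOAD:
    from templateflow import api

    # Fetch every known template into TEMPLATEFLOW_HOME; afterwards the
    # directory can be tarred up and unpacked on the offline cluster.
    for tpl in api.templates():
        api.get(tpl)
```

As a literal one-liner, the equivalent would be something like `python -c "from templateflow import api; [api.get(t) for t in api.templates()]"` (again assuming `TEMPLATEFLOW_HOME` is exported beforehand) — but note this downloads every template, which is many gigabytes.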
I don't know whether I should open a new thread or not. After downloading all of TemplateFlow, correctly configuring both TEMPLATEFLOW_HOME and SINGULARITY_TEMPLATEFLOW_HOME, and confirming that the compute node running the job can see these environment variables, fMRIPrep insists on trying to download tpl-OASIS30ANTs, a template that is available (like all the others) in the templateflow directory.
Leaving aside the fact that I don't know why it wants an OASIS-based template at all, since I made no such choice, why is it still trying to access the internet to download it when it's right there, in the place indicated by the corresponding environment variables?
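For reference, this is the kind of sanity check I mean (a hypothetical snippet placed in the job script, run on the compute node itself):

```shell
# Hypothetical check inside the job script, on the compute node:
echo "TEMPLATEFLOW_HOME=${TEMPLATEFLOW_HOME:-unset}"
if [ -d "${TEMPLATEFLOW_HOME:-/nonexistent}/tpl-OASIS30ANTs" ]; then
    status=present
else
    status=missing
fi
echo "tpl-OASIS30ANTs is $status"
```

On my cluster both lines report what I'd expect, which is why the download attempt puzzles me.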
Do you do any renaming in your docker/singularity command when you bind directories? That could prevent the application inside the container from finding the path. It might help to see your full command. Also, the OASIS template is used by some of the ANTs processes (e.g., brain extraction), which is why fMRIPrep needs it even though you didn't select it.
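As a sketch of what I mean (paths and image name are hypothetical, not your actual command): bind the host templateflow directory to the *same* path inside the container, so the value of TEMPLATEFLOW_HOME is valid on both sides of the bind.

```shell
# Hypothetical paths; adjust to your setup.
export TEMPLATEFLOW_HOME=/data/templateflow
# The SINGULARITYENV_ prefix passes the variable into the container:
export SINGULARITYENV_TEMPLATEFLOW_HOME="$TEMPLATEFLOW_HOME"
# Same source and destination in the -B bind, so no renaming occurs.
cmd="singularity run --cleanenv -B $TEMPLATEFLOW_HOME:$TEMPLATEFLOW_HOME \
  fmriprep.simg /data/bids /data/derivatives participant"
echo "$cmd"  # echoed as a sketch rather than executed here
```

If your bind maps the directory to a different path inside the container (e.g. `-B /data/templateflow:/opt/tf`), then TEMPLATEFLOW_HOME must point at the *container-side* path, or fMRIPrep will see an empty cache and try to download.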