Hi – just to follow up, maybe the slowness is related to the post below? Though I’m still confused since it doesn’t seem like files are being transferred to the remote annex (it is creating the symlinks, though).
Any ideas? If I’m reading this GitHub issue correctly (Behavior with large N datasets · Issue #3869 · datalad/datalad · GitHub), it may not be unexpected for datalad save to take twice as long as it took to generate the data?
Creating datalad dataset with existing directories
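In case it helps clarify what I mean by “not being transferred”, my check is roughly along these lines (the dataset path and sample file are placeholders from my setup; this just wraps a plain git annex whereis call alongside the datalad Python API, so it may not be the best way to look at this):

```python
# Rough sketch of my check; the path and sample file below are placeholders.
import subprocess
import datalad.api as dl

DATASET = "/data/study1"                   # placeholder: local dataset path
SAMPLE = "sub-01/anat/sub-01_T1w.nii.gz"   # placeholder: one annexed file

# Save everything locally first (this is the step that is slow for me).
dl.save(dataset=DATASET, message="save new files", recursive=True)

# Ask git-annex which repositories actually hold the file's content.
# If the remote annex were receiving content, it should be listed here,
# but I only see the local repository.
out = subprocess.run(
    ["git", "annex", "whereis", SAMPLE],
    cwd=DATASET, capture_output=True, text=True,
)
print(out.stdout)
```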
Also, with neuroimaging data, would it be best to make a subdataset for each subject? It seems like that was recommended in this thread for the HCP data, and maybe it is the default approach in heudiconv when the --datalad option is enabled?
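To make the question concrete, this is the kind of per-subject layout I have in mind (the superdataset path and subject IDs are made up, and this is just a sketch with the datalad Python API, not something we’ve settled on):

```python
# Sketch of a superdataset with one subdataset per subject (paths/IDs are made up).
import datalad.api as dl

SUPER = "/data/study1"   # placeholder: superdataset path

# Create the top-level (super)dataset.
dl.create(path=SUPER)

# Create and register one subdataset per subject inside it.
for sub in ["sub-01", "sub-02", "sub-03"]:
    dl.create(path=f"{SUPER}/{sub}", dataset=SUPER)

# ... after copying/converting each subject's data into its subdataset ...
# Save recursively so the superdataset records the subdataset states.
dl.save(dataset=SUPER, message="add subject data", recursive=True)
```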
Maybe yarikoptic and/or eknahm would know best here? Thanks for any advice! We’re very excited to make datalad a regular part of our workflow.
Best,
David