fMRIPrep defaults to using a work subfolder of your current working directory. Are you running fMRIPrep from /restricted/projectnb/cd-lab/yesh/processing?
Given that dcm2bids creates its own temporary working directory inside the BIDS folder while it runs, I'm not sure why fMRIPrep objects to a pre-existing work folder, or to having the work folder inside the BIDS folder at all. Seems like a really sensible place to keep it imho.
Short of binding a different folder to dump the working data into, what are my options…?
You should not put the work folder in the BIDS root; that creates problems when building the BIDS layout index. The tmp_dcm2bids folder is added to a .bidsignore by dcm2bids.
Just specify an outside folder.
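For what it's worth, a minimal sketch of what that looks like (paths and image name here are placeholders for your own setup; the relevant fMRIPrep flag is `-w`/`--work-dir`):

```shell
# Sketch: bind the project directory and point the work folder
# outside the BIDS root with -w (placeholder paths throughout).
singularity run --cleanenv \
    -B /path/to/project:/data \
    fmriprep.sif \
    /data/bids /data/derivatives participant \
    -w /data/fmriprep_work
```

The only requirement is that the work path is somewhere fMRIPrep can see inside the container, i.e. under one of your bind mounts.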
This flag is generally not recommended (even if you are not using surface outputs) unless you have subjects for whom FreeSurfer generally fails (e.g., severe lesions).
You should also specify the -e or --containall flag in your singularity run command.
Thanks for getting back to me. Some follow-on thoughts/questions:
couldn’t the “work” folder be added to .bidsignore? If it’s good enough for dcm2bids, why not fmriprep?
if I'm not registering the output from fmriprep to a standard space, is there still an advantage to removing "--fs-no-reconall"? What is the advantage of leaving it out (apart from roasting a few more polar bears)?
what does “-e or --containall” do? FYI I’m running on slurm (not sure if that’s relevant to this though).
Thanks for your advice/help.
Forgot to say, thanks for formatting my original question properly!
Is there a particular reason you want work to be inside your BIDS dir? I imagine the pybids layout calculation by fmriprep is confounded by having workdir inside BIDS dir. dcm2bids isn’t a “BIDS-app” per se, and doesn’t use the pybids mechanism for anything.
Boundary-based BOLD to T1w registration.
Makes sure environment variables are not carried over from your local computing environment. These could be things defined by local software installations that could point fMRIPrep at the wrong versions of its dependencies.
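A quick way to see the difference (sketch only; `fmriprep.sif` is a placeholder image name, and whether a given variable leaks through depends on your host setup):

```shell
# Without -e, host environment variables can leak into the container;
# this might show your local FSL installation, for example.
singularity exec fmriprep.sif env | grep FSLDIR

# With -e (--cleanenv), the host environment is stripped before the
# container starts, so only container-defined variables remain.
singularity exec -e fmriprep.sif env | grep FSLDIR
```

`--containall` goes further still, also isolating PID/IPC namespaces and not mounting your home directory, which matters less on a clean Slurm node but is the safer default.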
Mostly for reasons of tidiness - I just wanted to keep temporary working directories co-located with the data from which they derive, but that's probably just my problem. I will move it outside of the bids folder…
I’m going to do that in FSL.
This is helpful and probably explains some unexpected behaviour!
I think this is the reason why FSL does first level analysis in acquired space.
The only time registration to T1, then to standard, is done is when performing group analysis of the computed beta/variance maps. This always seemed more intuitive to me than jumping straight to standard space, but I'm sure there are pros and cons to both approaches.
To be clear, the BBR step will still happen - the only things that'll have already been done are motion correction with mcflirt (I think that's what fmriprep uses) and SDC (if I can persuade the PI to add this to their protocol).
The other slightly complicating factor is that these are multi-echo data, which I think are probably best left in func space for the purpose of denoising. The motion-corrected, denoised version of the data would then be read into FSL, providing the reference image that's used for BBR registration to the T1.
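In FSL that BBR registration step is usually done with `epi_reg`; a minimal sketch, assuming the filenames below are placeholders for your denoised functional reference and your structural images:

```shell
# Sketch: epi_reg runs FLIRT with the BBR cost function, using the
# white-matter boundary segmented from the T1 (filenames are placeholders).
epi_reg --epi=func_ref.nii.gz \
        --t1=T1w.nii.gz \
        --t1brain=T1w_brain.nii.gz \
        --out=func2struct
```

FEAT runs the equivalent step for you during first-level registration, so you'd only call this directly if you want the transform outside the FEAT pipeline.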
So, it's a little (lot) convoluted, but should work - I hope!
I do not use multi-echo data, so I cannot speak to that point, sorry! But in general…
FSL analyses can happen in any space (native, standard, or otherwise), it is just that fMRIPrep combines all transformations and resampling into a single one-shot process. It is true that native space outputs will not involve nonlinear correction as in MNI normalization. But even then, the whole process of registration and motion correction happens in a single step.
Warping these statistical parametric maps to standard space may introduce unwanted smoothing. I would personally recommend just standardizing the space in fMRIPrep and doing all first- and second-level modeling there.
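If it helps, you don't have to choose one space up front - fMRIPrep can write out several in one run via `--output-spaces` (sketch only; directory names are placeholders):

```shell
# Sketch: request both native (T1w) and MNI-space outputs in one run,
# so you can denoise in native space but model in standard space.
fmriprep /path/to/bids /path/to/derivatives participant \
    --output-spaces T1w MNI152NLin2009cAsym:res-2 \
    -w /path/to/work
```

Each requested space is resampled in fMRIPrep's single one-shot interpolation, so the MNI outputs don't accumulate the extra interpolation step that warping your statistical maps afterwards would.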
If the PI won't, you can use SynBOLD-DisCo to make a synthetic undistorted BOLD image for distortion correction. Empirically, it has worked well for others.
I think this might be slightly moot given that the first-level pre-processing involves smoothing, but I take your point.
In general I think it's probably better to stay in acquired space when determining noise components to be removed, as warping/interpolating to standard space will alter their structure, making it harder (?) to unambiguously identify them. However, that's just a gut feeling; I don't have evidence to back it up. So perhaps take with a pinch of …