Fmriprep BET threshold?



Hi all,

I’ve been trying to figure out the FSL BET threshold used in fmriprep. Could you please clarify the threshold? I’d also like to know if there is an option to adjust the BET threshold in the fmriprep pipeline.

(My confusion comes from noticing the line of code "bet structural.nii brain_anat.nii -f 0.70" in niworkflows/niworkflows/interfaces/ and assuming the threshold to be 0.7, but I
also noticed a different line, "bet_hmc = pe.Node(BETRPT(mask=True, frac=0.6), name='EPI_hmc_bet')", in fmriprep/fmriprep/workflows/.)

Thank you!


For skull-stripping BOLD images (which is what I assume you are interested in, in contrast to skull-stripping T1w images), FMRIPREP uses a set of heuristics that capitalize on multiple tools (of which BET is only one step). You can see the full procedure here.

Do I understand correctly that FMRIPREP did not do a good job on your data? In that case, could you share the HTML reports (with images)?



Thank you for the link to the BOLD image skull-stripping procedure. That's helpful to know.
I believe FMRIPREP did a nice job on the functionals. (How would I share the HTML reports with you? They don’t seem to upload in this forum. I’d be happy to share via a different route if interested)

The reason I asked about the BET threshold is that I am using the intersection of the FMRIPREP functional brain masks, which I am trying to overlay with a probabilistic atlas. However, the intersection is small compared to the MNI template, so many voxels drop out once the intersection is overlaid with the atlas.

Due to these difficulties, I wanted to check the BET threshold - but since the masking involves a number of procedures, I assume it may not be a simple fix after all.
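The voxel dropout described above can be quantified directly by counting how many atlas voxels fall outside the intersection mask. A minimal NumPy sketch (the arrays here are randomly generated stand-ins; in practice the atlas and mask would be loaded from NIfTI files, e.g. with nibabel, and resampled to the same grid first):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical data on a shared grid: a probabilistic atlas region and
# the intersection of the subjects' functional brain masks.
atlas_prob = rng.random((4, 4, 4))                # probability per voxel
intersection_mask = rng.random((4, 4, 4)) > 0.4   # small group mask

atlas_voxels = atlas_prob > 0.5              # binarize the atlas region
kept = atlas_voxels & intersection_mask      # voxels surviving the overlay
dropped = atlas_voxels & ~intersection_mask  # voxels lost to the small mask

print(f"{dropped.sum()} of {atlas_voxels.sum()} atlas voxels drop out")
```

Every atlas voxel ends up in exactly one of the two groups, so `kept.sum() + dropped.sum()` always equals the atlas voxel count.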


You can share reports by uploading them to Dropbox or similar file sharing service.

If the masks look good in the reports and coregistration is satisfactory, a small mask intersection is most likely caused by signal dropout in some of the subjects. In other words, it would be worth fixing an inaccurate mask, but making masks more liberal (less accurate) just to increase the size of the mask intersection seems counterproductive.

Mind that many studies use a group-level mask that includes voxels present in at least 80% of participants (instead of 100%, as in the case of a pure intersection). This is a more liberal option, and it is also more transparent (and easier to interpret) than making individual masks more liberal.
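The 80% group-level mask can be sketched with plain NumPy, assuming the individual masks have already been brought to a common grid (the array shapes and random data here are hypothetical; real masks would come from the fMRIPrep outputs):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stack of 10 subject-level brain masks on a shared grid,
# shape: (n_subjects, x, y, z).
masks = rng.random((10, 4, 4, 4)) > 0.2

# Pure intersection: a voxel survives only if present in ALL subjects.
intersection = np.all(masks, axis=0)

# Coverage mask: a voxel survives if present in >= 80% of subjects.
coverage = masks.mean(axis=0) >= 0.8

print(intersection.sum(), coverage.sum())
```

Since a voxel present in all subjects has a coverage of 1.0, the 80% mask is always a superset of the pure intersection, which is why it recovers voxels lost to dropout in a few subjects.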


Thank you for the quick response!

Follow up on your suggestions and comments:

  • Here’s the link to the functional HTML report. Please let me know if it doesn’t work.

  • How would I go about fixing an inaccurate mask? I’m not sure what would indicate signal dropout in the reports.

  • Thanks for pointing out the 80% group-level intersection mask. Will definitely try that liberal option.


Your link is giving me Error 400.

Fixing an inaccurate mask would involve making changes to the FMRIPREP codebase to improve the heuristic. It’s hard to say what exactly would need to be done without seeing the reports.