I have a question regarding skull-stripping and EPI data.
We are running an ASL/BOLD protocol via a multi-band multi-echo sequence.
This means that we still have skull on our volumes (a lot more on echo 1 than on echo 4).
It seems that there isn't much processing available to skull-strip EPI data.
afni_proc.py doesn't do it in example script 12, for instance.
Likely a few ways to address this, but the first one I would try is to add -align_epi_strip_method 3dSkullStrip to your command. Feel free to post images of the results so we can get a better sense of what is going on.
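For context, here is a minimal sketch of where that option would go in an afni_proc.py command. All dataset names, echo times, and the subject ID are placeholders, not values from this thread:

```shell
# Hypothetical afni_proc.py fragment; dataset names, echo times,
# and subject ID are placeholders.
# -align_epi_strip_method controls how the EPI base volume is
# skull-stripped before EPI-to-anatomical alignment.
afni_proc.py                                              \
    -subj_id sub01                                        \
    -copy_anat anat+orig                                  \
    -dsets_me_run epi_echo1+orig epi_echo2+orig           \
                  epi_echo3+orig epi_echo4+orig           \
    -echo_times 12.5 28.1 43.7 59.3                       \
    -blocks tshift align volreg mask regress              \
    -align_epi_strip_method 3dSkullStrip
```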
I am very unfamiliar with this script; I had no idea such an option existed.
I hope all the data will converge properly, as we cannot use -align_opts_aea -giant_move. Hopefully we won't need it once all echoes are skull-stripped.
Thanks for your help; it's nice to have someone who can navigate the documentation much more easily than I can.
And just as a small point, this will not end up skull-stripping all EPI echoes, it will only strip the single volume used for anatomical registration (which should be all you need).
Also, as you are using echo 2 for registration, the echo 2 volumes are used to register all echoes, as well as registering to the anat.
Hello, thanks for your reply.
That is interesting, then: the registration and normalisation won't take the skull into account.
The thing is, I am regressing with 3dDeconvolve after this, and there is still signal in the skull, so the subsequent analysis will produce false positives in the skull. Isn't there a way to use a mask, or to skull-strip it once and for all?
Subsidiary question: why is it so complicated to skull-strip EPI? Why isn't there any function that does it in 4D?
P.S.: [pmolfese]
I didn't try -giant_move, as alignment is working without it at the moment, but thanks for the tip; I'll keep it in mind if I need to do more pre-processing on the rest of the data.
4D masking is not difficult, but rather a preference. In afni_proc.py, the default behavior is to not mask the EPI (other than via the extents_mask, which excludes voxels that do not have data at every time point). We would rather see what is happening outside the brain than to simply trust and not see it. That is important from a QC perspective, both of the data and of the modeling of it (ghosting, task correlated motion, etc). So we recommended masking at the group analysis level instead.
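As a sketch of that group-level approach (file names are placeholders; this assumes each subject was processed with afni_proc.py, which writes a mask_epi_anat dataset per subject):

```shell
# Hypothetical group-mask sketch; file names are placeholders.
# Combine the per-subject masks produced by afni_proc.py, keeping
# voxels present in at least 70% of subjects, and use the result
# as the mask in the group analysis.
3dmask_tool -input sub*/mask_epi_anat.*+tlrc.HEAD \
            -frac 0.7                             \
            -prefix group_mask
```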
If you really want to mask at the single subject level, -mask_apply can be used (maybe with ‘epi’, for example). It is easy, but not recommended.
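For completeness, a minimal (not recommended) sketch of that option; the rest of the command is elided:

```shell
# Hypothetical fragment: with a 'mask' block in the -blocks list,
# -mask_apply epi applies the EPI-based mask to the data before
# the regression, so nothing outside it reaches 3dDeconvolve.
afni_proc.py                                   \
    ...                                        \
    -blocks tshift align volreg mask regress   \
    -mask_apply epi
```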