SPM Segmentation in Nipype is taking a long time

Hello,

I have a preprocessing workflow with an SPM NewSegment node that can be run through Binder:
https://github.com/arash-ash/working_memory_fMRI

It runs okay in my local Docker container, but since the SPM segmentation node takes a long time, it sometimes crashes while running on Binder.

I guess this step is taking so long because of the shim correction (correcting the front part of the anatomical image that is wrapped around to the back).

However, there should be a way to remedy this problem (e.g., by using different interfaces or different parameters). Processing time matters to me because I will be demoing the code in class, so I would love to hear your suggestions.

Thanks a bunch,
Best,
Arash

Hey @arash-ash,

super cool that you’re using Binder in a class!

Unfortunately, I've never really used SPM's NewSegment and therefore don't know much about it, but I remember folks saying that it is actually kinda fast (unintended pun following in the next line).
Have you tried FSL's FAST yet?
I can't tell whether it's faster than SPM's NewSegment, but you could have a look to see if it works for you. There's of course also a Nipype interface. Be aware that FAST also applies a bias field correction.

HTH, best, Peer

P.S.: Binder's resources are limited. Maybe you could adapt your Docker settings accordingly to emulate the performance on Binder? Assuming you're running it as a Nipype workflow: how did you set it up?
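For example, mybinder.org instances usually only get around 1-2 GB of RAM and a shared CPU, so (assuming `wf` is your workflow object, and the numbers below are only my guess at Binder's limits) you could try something like this locally to see whether the segmentation node survives under similar constraints:

```python
# Sketch: cap the workflow at roughly Binder-like resources via the MultiProc plugin.
wf.run(plugin='MultiProc', plugin_args={'n_procs': 1, 'memory_gb': 2})
```

If you prefer, you could also cap the container itself, e.g. with `docker run --cpus=1 --memory=2g ...`.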
