FSL feat takes a long time to run

I’m trying to implement the suggested 1st-level analysis from here.

I’ve changed it just a bit (added two more contrasts that fit my questions).
My design is a 20-minute run with 3 conditions; each condition runs for 2 minutes in a row, 3 times, so I end up with a big functional file (approx. 1.5 GB).
I preprocessed everything using fMRIPrep.

Running the pipeline mentioned above takes a lot of resources and a long time. It seems to require about 30 GB of RAM per run, and it runs for more than 8 hours, most of which is spent in the feat command.

I was wondering if this sounds normal to those who use the same pipeline.

Thanks,

I’m not sure what your system specs are, but that sounds a bit slow to me, even for a huge dataset.

I experienced some pretty slow performance with FEAT (and specifically film_gls) earlier this year, and the FSL folks suggested that I set OPENBLAS_NUM_THREADS=1 as an environment variable. That did the trick for me on my system. Here’s the thread for that, if you’re interested:

JISCMail - FSL Archive - Re: film_gls and flameo running very slowly on Amazon Web Services (https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=FSL;5ceafc9e.1902)


Thanks.
Regarding my specs:
I’ve tried running it both on my local computer (FSL 6.0, 64 GB RAM, 12 cores) and on a Slurm cluster (FSL 6.0). In both cases I run it through Nipype in an Anaconda environment (maybe that causes something; should I try a clean env?).
I did try adding your change (actually not sure I put it in the right place…), but it doesn’t seem to help.

I can’t speak to nipype unfortunately, but this sounds a bit slow for basic FSL stuff. I’m assuming your data is multiband with a fast TR (e.g., < 1 s) and fairly high resolution (e.g., ~2 mm isotropic)? Is it specifically getting hung up with film_gls or another FEAT process?

To set OPENBLAS_NUM_THREADS=1, you’d need to run export OPENBLAS_NUM_THREADS=1 (you could also put it in your ~/.bashrc file to make sure it’s always set as an environment variable).
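Since you’re launching things from Nipype, another option is to set it in the Python session itself, before the workflow spawns any FSL processes. Just a sketch of that idea; the key point is that the variable has to be in the environment of whatever process ends up calling film_gls:

import os

# Set this before any FSL subprocess is launched; child processes inherit the
# parent environment, so film_gls will then use single-threaded OpenBLAS.
os.environ['OPENBLAS_NUM_THREADS'] = '1'

# ... then build and run the Nipype workflow as usual.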

Indeed, TR = 1 s and 1 mm resolution.
It gets hung up on film_gls; all the rest is reasonable.
I’ve tried adding OPENBLAS_NUM_THREADS=1, without success so far.

I have the exact same problem (different data set though). Exporting the environment variable does not seem to have any effect. Any other ideas?

@orduek Did you find a solution?

Not fully.
I played with different Nipype scripts and found one that was reasonable, though still about twice as slow as SPM.
This is an example of a script that was slow, but not as bad.

Hope it helps.
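In case it’s useful, here’s a rough sketch of the general idea: wiring film_gls up directly through Nipype’s FILMGLS interface instead of letting the feat command drive it. This isn’t my actual script; the file names are placeholders and the parameter values are just typical choices.

# Minimal sketch (placeholder file names): run film_gls directly via Nipype,
# using the design/contrast files produced by level1design + FEATModel.
from nipype import Node
from nipype.interfaces.fsl import FILMGLS

filmgls = Node(FILMGLS(smooth_autocorr=True,   # prewhitening, as FEAT does
                       mask_size=5,
                       threshold=1000.0),
               name='filmgls')
filmgls.inputs.in_file = 'filtered_func_data.nii.gz'  # preprocessed 4D run
filmgls.inputs.design_file = 'run0.mat'               # design matrix from FEATModel
filmgls.inputs.tcon_file = 'run0.con'                 # t-contrast file from FEATModel
filmgls.inputs.results_dir = 'stats'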

Thanks! So running film_gls directly makes it run faster than when it is called by feat? That is interesting…

I also have the additional problem that, as soon as I start more than one FEAT node in parallel, each FEAT process crashes with the error message below. I’m not sure whether this is related to the same problem or not. Has anyone experienced a similar crash and can maybe point me to what causes it?

feat /project/3013068.04/1st_level_test/all_runs/1st_level/_run_sub-002.Training-1.4...project..3013068.04..converted..StressNF../l1_model/run0.fsf
Standard output:
To view the FEAT progress and final report, point your web browser at /project/3013068.04/1st_level_test/all_runs/1st_level/_run_sub-002.Training-1.4...project..3013068.04..converted..StressNF../feat_fit/run0.feat/report_log.html
Standard error:
child process exited abnormally
    while executing
"fsl:exec "${FSLDIR}/bin/feat ${fsfroot}.fsf -D $FD -I $session -stats" -b $howlong -h $prestatsID -N feat3_film -l logs "
    (procedure "firstLevelMaster" line 190)
    invoked from within
"firstLevelMaster $session"
    invoked from within
"if { $done_something == 0 } {

    if { ! $fmri(inmelodic) } {
       if { $fmri(level) == 1 } {
          for { set session 1 } { $session <= $fmri(mult..."
    (file "/opt/fsl/6.0.1/bin/feat" line 390)
Return code: 1

We’re having the same issue running feat on our local cluster. Did you solve it?

Any info much appreciated!
Cheers,
Jeremy


I ran into the issue where FEAT crashes when run in parallel; same issue with fslstats. The crashes showed up intermittently, which made them harder to diagnose. My current belief is that the executables crash when they’re called by different threads in too-quick succession. Staggering when the parallel processes call an executable seems to resolve it. It’s not pretty, but I did this by adding a random delay in passing one of the inputs, e.g.

def _stagger(argin):
    # Pass the input through unchanged, but sleep for a random 1-120 s first
    # so parallel FEAT nodes don't all launch at exactly the same moment.
    import time
    import numpy as np
    rng = np.random.default_rng()
    rints = rng.integers(low=1, high=2 * 60, size=1)
    time.sleep(rints[0])
    return argin

# Route the fsf files through _stagger on their way into the FEAT node,
# so each parallel FEAT call starts after its own random delay.
workflow.connect([
    (l1_design, l1_feat, [
        (('fsf_files', _stagger), 'fsf_file'),
    ]),
])