Why so many fMRI analysis tools?

Hi experts,
I am a newcomer to fMRI data analysis.
I am very curious about why there are so many tools (SPM, FSL, ANTs, BrainVoyager, HCP Workbench) that perform similar processing steps (registration, normalization, motion correction, slice-time correction, spatial smoothing, high-pass filtering, etc.).
My supervisor recently showed us a list of analysis toolboxes and remarked that he wondered why the number of toolboxes far outnumbers the limited set of analyses; he ascribed it to the chaotic organization of the community.
If the phenomenon does exist, a few possible reasons come to mind:

1. Several groups may have worked on fMRI studies separately at the inception of the field, leaving a historical trail of multiple analysis tools.
2. Interaction with computer science: the prevailing programming languages have changed over time, so new tools emerged to keep pace with industry trends.
3. New analysis methods are invented, or more user-friendly interfaces are provided, by different research groups.
4. Although the concepts behind the analyses are limited, the actual computations and engineering tricks are complex and varied, so different tools really do implement the same analysis differently.

What are your opinions, experts? How should I think clearly about this? And are there any practical suggestions for how fMRI learners should choose a tool to start with?

Best


Welcome to the chaotic world. My advice is simple:

Grab the software other people in your group use, so you can ask them for help.


I see the diversity of tools as a good thing. It has led to healthy competition. As an analogy, different car manufacturers compete with each other and also adapt their designs for specific niches. Countries that focused on a single make and model of car, like the East German Trabant, were able to simplify support, but did not benefit from intense competition. Likewise, it is nice to be able to get a vehicle that is tuned for your use case - a sedan is not ideal for every application.

The different tools have different pros and cons - in a pre-Python world, SPM's use of MATLAB eased scriptability but required licensing. FSL's standalone applications were great for deploying on supercomputers, but bash scripting was a challenge. AFNI provides a tremendous amount of customization, but has a steep learning curve. FreeSurfer focused on high-precision cortical segmentation, which is ideal for some research applications but not suitable for many clinical applications. While all these tools have to deal with spatial normalization, they tend to take different approaches that lend themselves to specific sub-domains. Further, thanks to competition and the (relatively) open licenses, each of these tools has been able to leverage successful breakthroughs from the others, leading to a virtuous cycle.

For scientists entering the field, it makes sense to focus on one tool and use it well, or to leverage a well-supported, best-in-breed pipeline like fMRIPrep that plays to the relative strengths of each tool. The AFNI, FSL, FreeSurfer, and SPM groups all run courses that help users exploit their toolchains. And because these tools are all attempting to solve the same core image-processing problems, advanced users can pick and choose the right tool for a specific problem, as in the sketch below.
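To make that mix-and-match idea concrete, here is a minimal sketch using the Nipype workflow engine, which wraps interfaces to FSL, SPM, AFNI, and others in Python. It assumes FSL, SPM/MATLAB, and Nipype are installed; `func.nii` is a placeholder file name, and the particular pairing of steps is arbitrary, not a recommended pipeline:

```python
# A minimal sketch of mixing tools via Nipype: FSL's MCFLIRT for motion
# correction feeding SPM's Smooth for spatial smoothing.
# Assumes FSL, SPM/MATLAB, and Nipype are installed; "func.nii" is a
# placeholder input file.
from nipype import Node, Workflow
from nipype.interfaces import fsl, spm

# Motion-correct with FSL; write uncompressed NIfTI so SPM can read it
mcflirt = Node(
    fsl.MCFLIRT(in_file="func.nii", output_type="NIFTI"),
    name="motion_correct",
)

# Smooth with SPM using an 8 mm FWHM Gaussian kernel
smooth = Node(spm.Smooth(fwhm=8), name="smooth")

wf = Workflow(name="mixed_tools", base_dir=".")
wf.connect(mcflirt, "out_file", smooth, "in_files")
wf.run()
```

The point is not these two particular steps, but that scriptable, interoperable interfaces are what let advanced users assemble cross-tool pipelines like this at all.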

@cni-md the good news is that the dominant tools of our field were able to agree on a common image format: NIfTI. So I think your analogy of competing standards only holds at a superficial level. Again, going back to my car analogy, there are different makes and models of (internal combustion engine) cars, but they all agreed to use a standard fuel. I contend there is strength in diversity, and having different motivated teams competing with each other benefits users (in particular, if they all use open source, which allows breakthroughs to propagate across tools).
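As a small illustration of what that shared standard buys you in practice, here is a minimal sketch using the nibabel Python library (the file name is a placeholder): an image written to NIfTI by any of these packages can be read the same way.

```python
# A minimal sketch: reading a NIfTI image with nibabel, regardless of
# which package wrote it. "bold.nii.gz" is a placeholder file name.
import nibabel as nib

img = nib.load("bold.nii.gz")  # works for SPM-, FSL-, or AFNI-written NIfTI
print(img.shape)               # e.g. (64, 64, 36, 200) for a 4D fMRI run
print(img.affine)              # voxel-to-world transform from the header
data = img.get_fdata()         # voxel intensities as a NumPy array
```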
