Hi all, I just got an M1 Mac Mini (8GB model, on Big Sur) and have been trying various neuroimaging tools. So far, FreeSurfer works very well (recon-all completed in just under 5 hours on 7.1.1!), but essentially every other tool I’ve tried does not work. FSL, SPM, Docker: none are compatible yet. Hopefully soon, but I thought this could be helpful if you are looking into the new M1 hardware: beware. (I also regularly use dcm2niix, ITK-SNAP, bidskit, MRIcron; all of these work fine.)
For those interested, here is my evaluation of this nascent architecture, including tips on how to get neuroimaging tools running on this platform.
For people who use my tools, you can get Universal binaries that natively support this architecture. Due to a glitch with NITRC, you will want to get the latest versions from GitHub.
In addition, any AFNI users who want to try out my experimental native M1 build can contact me directly. Once we have confidence in it, I will make a pull request on GitHub.
Wish I’d come across this sooner! Thanks very much for your comprehensive review, Chris; I’d encourage everyone to read these terrific posts.
Has anyone had any more experience with the M1 chip? I need a new personal laptop (I am currently using a 2012 MacBook Pro… yup, it’s old but it’s great) and would like it to be able to run typical software like FSL, MATLAB, and Python. Today Apple announced their new laptops, and I’m wondering whether it’s worth purchasing one or whether I should get something that can run a Linux partition. Again, this is a personal laptop, not solely for work purposes, so the user-friendly interface in macOS is very nice.
I have continuously updated my evaluation. For your typical usage:
- FSL runs well under Rosetta emulation. However, since there is no CUDA support, it is a poor choice if you use Eddy, Bedpostx, or Probtrackx.
- MATLAB R2021b runs well under Rosetta emulation, and all the SPM MEX files run under emulation as well.
- Python runs well natively. However, NumPy does not have ARM SIMD intrinsics. This means that some NumPy functions run an order of magnitude faster in emulation than natively, while others perform better natively.
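Since native versus emulated NumPy performance differs so much, it helps to know which mode your Python interpreter is actually running in. Here is a minimal sketch (the helper name `runtime_arch` is mine, not from any library); it relies on the macOS-specific `sysctl.proc_translated` key, and on other systems it simply reports the machine architecture:

```python
import platform
import subprocess

def runtime_arch() -> str:
    """Report the machine architecture; on macOS, also note whether
    the current process is being translated by Rosetta 2."""
    arch = platform.machine()  # e.g. 'arm64' natively, 'x86_64' under Rosetta
    if platform.system() == "Darwin":
        try:
            # macOS-only sysctl key: '1' means the process runs under Rosetta 2
            out = subprocess.run(
                ["sysctl", "-n", "sysctl.proc_translated"],
                capture_output=True, text=True, check=True,
            ).stdout.strip()
            if out == "1":
                arch += " (Rosetta 2 translated)"
        except (subprocess.CalledProcessError, FileNotFoundError):
            pass  # key absent (e.g. older Intel Macs): assume native
    return arch

print(runtime_arch())
```

Running this inside each of your conda environments is a quick way to confirm whether a given Python stack is native arm64 or an x86_64 build being emulated.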
While the new M1 Pro and Max are extremely impressive technically, I do not think they address some of the core issues our community faced a year ago:
- While the M1 computers have outstanding GPUs, they are limited to single-precision compute. These computers cannot support CUDA, and Apple has not announced any efforts to aid translation (e.g., AMD’s GPUFORT). Therefore, tools like Eddy, Bedpostx, and Probtrackx are not competitive.
- Despite the huge popularity of NumPy, Apple’s Developer Ecosystem Engineering has not helped develop SIMD intrinsics, so much of the CPU potential remains untapped. See the recent pull request that will address this limitation in a future release.
My take is that the latest releases target Apple’s core markets of video creation and photography; they are not designed to compete in the scientific and HPC arenas. Apple would be well positioned to grow into these domains if it made three changes: modify the GPUs to handle double precision, update the CPUs to handle SVE SIMD instructions (instead of Neon), and prioritize Apple’s Developer Ecosystem Engineering resources to leverage these advances.
Thanks! Yeah, I checked out your evaluation but wasn’t sure if something else had happened since the last commit or whether others had some experience/input. Luckily, I only use things like BET and FLIRT in FSL, and all the other toolboxes I rely on are Python- and MATLAB-based, mainly MEG/EEG toolboxes like FieldTrip and MNE. Thanks so much for the detailed input! This is a tremendous help.
Hi all!
Given that a lot has likely changed since 2021, I wanted to check in with the community regarding the current state of Apple Silicon chips for neuroimage analysis.
I’m considering getting a maxed-out Mac mini with the new M4 chip and am wondering if it’s now a solid choice for neuroimaging workflows (e.g., FSL, SPM, FreeSurfer, etc.). How does it compare to the older M1 and Intel-based Macs in terms of performance, compatibility, and any potential limitations?
Looking forward to hearing your insights—thanks in advance!
@RDoerfel in general, Apple hardware is great for most workflows you mention (AFNI, FSL, SPM, FreeSurfer). The base Mac mini M4 (10-core CPU, 10-core GPU, 16GB RAM, 256GB SSD) is a terrific value, and the base Mac mini M4 Pro (12-core CPU, 16-core GPU, 24GB RAM, 512GB SSD) has some very nice upgrades (faster multithreaded performance, more RAM, faster Thunderbolt, and more storage). Apple tends to charge crazy upgrade prices as you move away from the base model. Both will handle traditional datasets well.
There are a couple of caveats:
- Some core diffusion tools (FSL’s Bedpostx, Probtrackx, and Eddy) are much faster with an NVIDIA CUDA GPU. If you use these, macOS is not a good solution.
- Some of the FreeSurfer AI tools (EasyReg, FastSurfer, SynthStrip, SynthSR) require an NVIDIA CUDA GPU. If you use these, macOS is not a good solution (though in theory the models could run if someone updated the code and installed recent versions of libraries like PyTorch that can run conv3d using Metal Performance Shaders).
- Apple charges a huge amount for RAM, and many neuroimaging tools are pretty greedy. Therefore, while a base-model Mac might be a great value, the value proposition degrades when you configure a huge amount of RAM. So I would say that modern Macs are a good value for analyses that can live within a reasonable amount of RAM (which is most AFNI, FSL, FreeSurfer, and SPM pipelines).
- Apple charges a huge amount for SSD capacity. If you get an M4 Pro or M4 Max, I would invest in an external Thunderbolt 5 enclosure; for a regular M4 or earlier Mac, I would get a USB4 enclosure. For example, the ACASIS 405 provides 40Gbps for $90 (or $63 for the Air version, which requires USB4 or later, as found on all modern Macs).
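As a rough way to reason about RAM and storage needs, the in-memory footprint of an image scales with voxel count times bytes per voxel. A back-of-the-envelope sketch (the helper and the example dimensions are illustrative, not from any specific dataset) shows why multi-volume diffusion data gets greedy:

```python
def volume_mb(dim_x, dim_y, dim_z, volumes=1, bytes_per_voxel=4):
    """Uncompressed in-memory size of an image series, in megabytes.
    bytes_per_voxel=4 assumes 32-bit float data."""
    return dim_x * dim_y * dim_z * volumes * bytes_per_voxel / (1024 ** 2)

# A 1mm isotropic T1 volume, 256x256x256 voxels:
print(round(volume_mb(256, 256, 256), 1))              # 64.0 (MB)

# A diffusion series, 140x140x96 voxels across 100 directions:
print(round(volume_mb(140, 140, 96, volumes=100), 1))  # ~718 MB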
For my own work, I develop on a MacBook but run my main pipelines on a desktop computer. I build my Linux desktops using PCPartPicker. The RAM and hard drives are less expensive than Apple’s, the AMD Ryzen 9950X provides 16 cores and 32 threads, and I usually try to get an NVIDIA GPU with at least 16 GB of VRAM, which seems sufficient for all the inference models from the FSL and FreeSurfer teams.
Very useful, thank you so much. Just to add a note: FreeSurfer 8 requires 24 GB for a single-subject analysis (ReleaseNotes - FreeSurfer Wiki), so more memory may be worth considering in the long run.