Apple Silicon M1 compatibility - FreeSurfer YES; everything else no


Hi all, I just got an M1 Mac Mini (8 GB model, on Big Sur) and have been trying various neuroimaging tools. So far, FreeSurfer works very well (recon-all completed in just under 5 hours on 7.1.1!), but essentially every other tool I’ve tried does not work. FSL, SPM, and Docker are not compatible yet. Hopefully soon, but I thought this could be helpful if you are looking into the new M1 hardware: beware. (I also regularly use dcm2niix, ITK-SNAP, bidskit, and MRIcron; all of these work fine.)

1 Like

For those interested, here is my evaluation of this nascent architecture, including tips on how to get neuroimaging tools running on this platform.

If you use my tools, you can get Universal binaries that natively support this architecture. Due to a glitch with NITRC, you will want to get the latest versions from GitHub:

In addition, any AFNI users who want to try out my experimental native M1 build can contact me directly. Once we have confidence in it, I will make a pull request on GitHub.

8 Likes

Wish I’d come across this sooner! Thanks very much for your comprehensive review, Chris; I’d encourage everyone to read these terrific posts.

1 Like

Has anyone had any more experience with the M1 chip? I need to get a new personal laptop (I’m currently using a 2012 MacBook Pro… yup, it’s old but it’s great) and would like it to be able to run typical software like FSL, Matlab, and Python. Today Apple announced their new laptops, and I’m wondering whether it’s worth purchasing one or whether I should get something that can run a Linux partition. Again, this is a personal laptop, not solely for work purposes, so the user-friendly interface on a Mac is very nice.

I have continuously updated my evaluation. For your typical usage:

  • FSL runs well under Rosetta emulation. However, since there is no CUDA support, it is a poor choice if you use Eddy, Bedpost, or ProbtrackX.
  • Matlab 2021b runs well under Rosetta emulation, and all of the SPM mex files run under emulation as well.
  • Python runs well natively. However, NumPy does not yet have ARM SIMD intrinsics, so some NumPy functions run an order of magnitude faster in emulation than natively, while others perform better natively (a rough timing sketch follows this list).
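
To see how this plays out on your own machine, here is a rough timing sketch (just an illustrative snippet, not an official benchmark; the matrix size and function choices are arbitrary). Run it once under a native arm64 Python and once under an x86_64 (Rosetta 2) Python and compare the per-call times:

```python
# Rough per-function NumPy timings (illustrative only).
# Run once with a native arm64 Python build and once with an x86_64
# (Rosetta 2) Python build, then compare the printed times.
import platform
import timeit

import numpy as np

print("interpreter architecture:", platform.machine())  # 'arm64' native, 'x86_64' under Rosetta
print("numpy version:", np.__version__)

rng = np.random.default_rng(0)
a = rng.random((2000, 2000))

benchmarks = {
    "matmul": lambda: a @ a,               # BLAS-bound
    "exp":    lambda: np.exp(a),           # elementwise, SIMD-sensitive
    "sort":   lambda: np.sort(a, axis=1),  # mostly memory/branch-bound
}

for name, fn in benchmarks.items():
    seconds = timeit.timeit(fn, number=5) / 5
    print(f"{name:>6}: {seconds * 1000:8.1f} ms per call")
```

The point is simply that the native-versus-emulated difference varies a lot by function, so test the operations you actually use.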

While the new M1 Pro and Max are extremely impressive technically, I do not think they address some of the core issues our community faced a year ago:

  1. While the M1 computers have outstanding GPUs, they are limited to single-precision compute. These computers cannot support CUDA, and Apple has not announced any efforts to aid translation (e.g. AMD’s GPUFORT). Therefore, tools like Eddy, Bedpost, and Probtrackx are not competitive.
  2. Despite the huge popularity of NumPy, Apple’s Developer Ecosystem Engineering has not helped develop SIMD intrinsics, so much of the CPU’s potential remains untapped. See the recent pull request that will address this limitation in a future release. (A quick way to check what your own build reports is shown below.)
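
As a quick sanity check on any given machine, you can ask Python whether it is running natively or under Rosetta 2, and ask NumPy how it was built (just a small sketch; the exact output of np.show_config() varies between NumPy releases):

```python
# Quick check: native Apple Silicon interpreter or Rosetta 2 emulation,
# and how was the installed NumPy built?
import platform

import numpy as np

arch = platform.machine()
if arch == "arm64":
    print("Native Apple Silicon (arm64) Python")
elif arch == "x86_64":
    print("x86_64 Python - on an M1 Mac this means Rosetta 2 emulation")
else:
    print("Other architecture:", arch)

# Prints the BLAS/LAPACK libraries NumPy was linked against;
# recent NumPy releases also report detected CPU/SIMD features here.
np.show_config()
```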

I suspect that the latest releases target Apple’s core markets of video creation and photography; they are not designed to compete in the scientific and HPC arenas. Apple would be well positioned to grow into these domains if it made three changes: modify the GPUs to handle double precision, update the CPUs to handle SVE SIMD instructions (instead of Neon), and prioritize Apple’s Developer Ecosystem Engineering resources to leverage these advances.

3 Likes

Thanks! Yeah, I checked out your evaluation but wasn’t sure if something else had happened since the last commit or whether others had some experience/input. Luckily, I only use things like BET and FLIRT in FSL, and all the other toolboxes I rely on are Python- and Matlab-based, mainly MEG/EEG toolboxes like FieldTrip and MNE. Thanks so much for the detailed input! This is a tremendous help.

1 Like