Buying a new Mac for fMRI data processing & analysis: which configuration?

We would like to switch from Ubuntu to macOS for our fMRI data (pre)processing & analysis.
Our usual pipeline is:

  • pre-pre-processing: dcm2bids + FreeSurfer (mideface)
  • pre-processing: fMRIPrep
  • processing & analysis: nilearn
  • visual exploration: FSL (FSLeyes)
  • other: AFNI / ANTs (sometimes)

and of course, all related dependencies

Our usual studies include:

  • 10-20 subjects, 1 session each,
  • with T1w anat, SE or GE A>P and P>A fieldmaps,
  • and 10-20 functional runs with 10-20 trials each (event design),
  • using a 32ch or 64ch coil on a 3T scanner

Which model and configuration would you recommend: MacBook Pro, Mac Studio, Mac mini, etc.?
Is there anything in particular to know about the chips?

Thank you!

Hi @MatthieuGG,

You might find @Chris_Rorden’s evaluation of Apple Silicon useful here: GitHub - neurolabusc/AppleSiliconForNeuroimaging: Review the challenges and potential of ARM-based Apple Silicon macOS for brain imaging research

You will need a lot of RAM if you want to process all of these data simultaneously. Also note that there are free cloud-based processing options, such as BrainLife.io.
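
Just as a rough, hypothetical back-of-envelope (the voxel counts, volume counts and run counts below are assumptions for illustration, not numbers from your protocol), holding several uncompressed BOLD runs in memory at once adds up quickly:

    # hypothetical sizes, not taken from this thread: ~100x100x72 voxels per
    # volume, ~400 volumes per run, float32 (4 bytes) once loaded into memory
    echo "$(( 100*100*72*400*4 / 1000000 )) MB per BOLD run (uncompressed, in memory)"
    echo "$(( 100*100*72*400*4*20 / 1000000000 )) GB if 20 such runs are held at once"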

Best,
Steven


Hi @Steven , thank you as always.

All new Mac models are equipped with Apple M-series chips. According to the link you provided, it may be better to avoid them. But what if we want to use Neurodesk?

Matthieu

Hi @MatthieuGG,

That would be a better question for @stebo85

Best,
Steven

Just to chime in for a sec: at least in terms of AFNI, you don’t have to worry about the M chips, because the new installation approach simply builds AFNI for whichever chip you have, which doesn’t take as long as it may sound. I’ve had successful AFNI install experiences on M1, M2 and M4 machines, for example.
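
For what it’s worth, once it’s installed you can quickly sanity-check that the AFNI binaries you ended up with are native ARM64 rather than running under Rosetta 2. These are generic macOS commands; nothing AFNI-specific is assumed beyond `afni` being on your PATH:

    # should print "arm64" on an M-series Mac when the shell itself runs natively
    uname -m

    # prints the version string of whichever AFNI build is on your PATH
    afni -ver

    # shows which architecture the afni binary was actually compiled for
    file "$(which afni)"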


FWIW FSL also works just fine on Apple Silicon machines.


Hey @MatthieuGG and everyone,

Is there any specific reason for switching to macOS (other than potentially easier management and integration with other hardware, software, etc.)?

My 2 cents: I would argue that, independent of the OS, the majority of the tools and resources could (and maybe should) be used through containers, either via Neurodesk or the respective Docker images (IIRC, Singularity still doesn’t run quite natively on macOS). Besides the BIDS Apps and software-specific images, one could also create images for analyses using nilearn. IMHO, this makes software management and reproducibility (running the same computational environments on different machines, etc.) way easier. However, it might come with drawbacks concerning resource utilization. Following up on @Steven’s point, a lot of RAM should come in handy, as well as a higher number of cores for parallelization. IIRC, the Studio and the mini would be preferable given the way they can be configured, their temperature control, etc.
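
To make the container route concrete, here is a minimal sketch of a BIDS-App call (fMRIPrep in this case). The paths, participant label and version tag are placeholders you would adapt to your data; the command looks the same on macOS and Linux once Docker is available:

    # minimal fMRIPrep-via-Docker sketch; the paths, participant label and the
    # <version> tag are placeholders, and a valid FreeSurfer license is needed
    docker run --rm -it \
        -v /path/to/bids:/data:ro \
        -v /path/to/derivatives:/out \
        -v /path/to/fs_license.txt:/opt/freesurfer/license.txt:ro \
        nipreps/fmriprep:<version> \
        /data /out participant --participant-label 01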

Would be cool to know what you ended up with and how things are working out!

Cheers, Peer


Dear @MatthieuGG,

Neurodesk supports macOS and runs on the Apple Silicon ARM64 architecture. We do this through a couple of tricks: the Docker container itself is a native ARM64 build, but since most neuroimaging software still doesn’t provide ARM64 builds on Linux yet, we ship all neuroimaging containers as x86 builds, and users can choose between two ways of running them: 1) through Rosetta 2 emulation (pro: relatively fast, only about 20-30% slower than native ARM64; con: there are some edge cases where Rosetta 2 still has bugs and applications can crash); 2) through QEMU emulation (pro: more compatible; con: about 80-90% slower).
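
For context, outside of Neurodesk’s own tooling the same choice shows up in plain Docker as a platform flag; the image name below is just a placeholder for any x86-only neuroimaging container:

    # force an x86_64 (amd64) image to run on an Apple Silicon host; Docker
    # Desktop then emulates it via Rosetta 2 or QEMU, depending on its settings
    docker run --rm --platform linux/amd64 <some_x86_only_image> uname -m
    # this prints "x86_64" inside the container even though the host CPU is arm64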

Enabled by a CZI grant, we are currently working on building the neuroimaging applications as ARM64 Linux containers; once those exist, we won’t need most of the tricks above for running on Apple Silicon anymore. However, it will still take a few more months before we are able to release these.

So my conclusion for now is:

  • If you want something that is cost-efficient and works perfectly now: get a big workstation with lots of storage and RAM, install Linux and our Neurodesk k3s server on it, and then every user can connect to this workstation and get their own session, including proper resource management.
  • If you have the money to pay the Apple tax and you are either keen to find bugs in Rosetta 2 or willing to wait a few months, a Mac mini or Mac Studio would get you quite far.

Thank you very much for all your support.

@PeerHerholz the reason for switching to macOS is easier management. We are facing a lot of issues with Ubuntu that we don’t understand. Currently, my Ubuntu 22 workstation doesn’t even boot anymore, and the GUI of the OS is dead; three fresh clean installs in three days… Not a hardware issue, according to Dell. We have no dedicated IT & systems service at the university. In my experience, macOS is very stable, and Apple provides both hardware and software support. Not to mention all the dependency conflicts, which I hope Neurodesk will solve.

According to @stbmtx and @paulmccarthy, we should be just fine with AFNI & FSL on Apple Silicon machines. And according to @PeerHerholz, we should have no issues with Docker + fMRIPrep and Python-based solutions such as nilearn. What about dcm2bids (+ dcm2niix) and FreeSurfer?

My understanding from @SteffenBollmann / @stebo85 is that Neurodesk could still be somewhat unstable or slow on Apple Silicon for now. This is unfortunate, since we have no resources for creating clean images / containers / computational environments ourselves.

All in all, I don’t know what to do: stick with Ubuntu and pray for Neurodesk to match our needs, or bet on macOS and work “locally”?

Best,
Matthieu

Dear @MatthieuGG, Neurodesk is already worth trying on Apple Silicon: most things should work now, and it will get better in a few months once we have native ARM64 builds.
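
(For anyone who wants to try: the usual entry point is the Neurodesktop container. The exact flags, port and tag change between releases, so treat the line below only as the general shape and check the Neurodesk documentation for the current command.)

    # rough shape of a Neurodesktop launch; the tag, port, flags and storage
    # path may differ, so please follow the Neurodesk docs for the exact call
    docker run -it --rm \
        -v ~/neurodesktop-storage:/neurodesktop-storage \
        -p 8888:8888 \
        vnmd/neurodesktop:<version>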


Hey @MatthieuGG,

Ah, sorry to hear that. Yeah, Ubuntu can be tricky regarding setup, installation and maintenance, especially without dedicated support. While the LTS versions should make things “easier”, there are, of course, still many things that can go wrong, and this is aggravated by everything one needs to take care of when installing many different software packages.

Personally, I use a mix of macOS and Linux distributions. My everyday laptop is an Intel MacBook Pro, which gives me a nice trade-off between unix/linux-like features and compatibility with the software/hardware I need, i.e., I can code and test locally and then run more computationally heavy processes on a workstation or servers (usually using containers on both). IMHO, a lot of the macOS ecosystem has moved away from being computation-focused, which has rendered the OS less “powerful” than it used to be.

Regarding FreeSurfer: IIRC, it should run on both Intel and Apple Silicon, as different respective installers were introduced (e.g. here). That is, if you don’t plan to use the VirtualBox images, it should be OK. However, you could also use it through the respective Docker image. Regarding dcm2bids + dcm2niix, I’m not sure, but I think you could also use containers for that (I usually use heudiconv, which uses dcm2niix under the hood).
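
As an illustration of that container route, a HeuDiConv call via Docker might look roughly like this; the image tag, DICOM layout, heuristic and paths are all placeholders to adapt to your study, and dcm2niix is bundled inside the image, so nothing has to be installed natively:

    # HeuDiConv-via-Docker sketch; the DICOM template, subject label, heuristic
    # and paths are placeholders (dcm2niix is bundled inside the image)
    docker run --rm -it \
        -v /path/to/dicoms:/dicoms:ro \
        -v /path/to/bids:/bids \
        nipy/heudiconv:<version> \
        -d '/dicoms/{subject}/*/*.dcm' \
        -s 01 \
        -f convertall \
        -c dcm2niix -b \
        -o /bids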

However, as @SteffenBollmann mentioned, if Neurodesk works, it’s a great option!

Cheers, Peer
