Mac or Windows for fMRI

Hey.
I am fairly new to fMRI and have been struggling to run some of the software at my university.
I started off with Windows 11 and tried WSL, but it felt very clunky and ran really slowly on my old uni computer.
I set up Ubuntu on a home laptop as a bit of an experiment and had a great experience; everything works really well (fMRIPrep via Docker, and FSL). It's only a laptop, though, so with limited cores it's still a bit slow for multiple subjects. I do actually prefer the OS to Windows.
I was hoping I could then request a modern workstation, but Linux is not supported at our uni, so IT have said I should buy a Mac Studio or stick with Windows 11 and WSL.
I read about some issues with ARM here and here.
Are Apple M-series chips OK to use in 2025 and beyond? It looks like Neurodesk will be the most suitable way forward. The Mac Studio seems pretty powerful on paper and looks fairly portable.

You might want to look at this page. Apple CPUs provide outstanding single-core performance, and many neuroimaging tasks are single-threaded, so by Amdahl's law Apple CPUs are outstanding for the fastest analysis of a single individual. However, for most large datasets we process data from many individuals at once, with each subject's pipeline single-threaded but the overall analysis highly parallel. The major weakness of Apple computers is for tools that require NVIDIA GPUs. In our field, that has historically meant diffusion (eddy, topup, bedpost), but many newer AI tools are also tuned for NVIDIA (SynthStrip, SynthSR, FastSurfer, etc.). That said, I am not sure whether Windows/WSL supports these GPU tools either. If these tools are of interest to you, I really think you should work with Linux rather than either Windows or macOS.
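To make the Amdahl's law point concrete, here is a small sketch; the parallel fractions (0.1 and 0.99) are illustrative assumptions, not measurements of any particular neuroimaging tool:

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Maximum speedup when a fraction p of the work can use n cores."""
    return 1.0 / ((1.0 - p) + p / n)

# A single subject's mostly serial pipeline (p = 0.1, an assumed value)
# barely benefits from a many-core chip:
print(round(amdahl_speedup(0.1, 16), 2))   # → 1.1

# A batch of independent subjects is almost perfectly parallel
# (p = 0.99, again assumed), so many slower cores win:
print(round(amdahl_speedup(0.99, 16), 2))  # → 13.91
```

This is why a chip with the fastest single core wins for one subject, while core count (or a cluster) wins for a whole dataset.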

If my center is any indication, many neuroimaging scientists use their Apple MacBooks to prototype pipelines (leveraging the fast single-core performance) and then deploy to Linux supercomputers (leveraging their parallelism). You link to an early GitHub page where I describe the limitations and potential of Apple Silicon; today we see far fewer of the former and a lot more of the latter. Many of the teething issues I wrote about on that page have since been dealt with. Apple's developers deserve credit for not only making core tools like NumPy compatible, but tuning them to fully leverage the unique instructions and architecture.

On a related note, I am a huge fan of Neurodesk, which handles dependencies and versioning so well. I suspect @SteffenBollmann can provide expertise.

Thank you for the advice.
I would much prefer Linux, but our IT department refuses to allow me to use it on site. I have been offered a Mac Studio M3 Ultra (28 cores/96 GB RAM), though, which I was hoping would be good enough to manage my studies (1.5T, 20 to 30 participants, 4x5-minute blocks at TR 5). I don't have access to an HPC; I could buy time on an external one, but it takes our purchasing department weeks to approve anything, so I am sceptical of that as well.
I will look into Neurodesk. That does sound promising and should solve most issues.
I really just wondered how most research groups deal with computing and how much leeway they get with software and OS choice.

Many thanks

As long as none of your tools need NVIDIA GPUs, that will be a terrific system. Amdahl's law, and the fact that most neuroimaging tools do not leverage multi-threading well, make the choice between the M3 Ultra and the M4 Max difficult. I would be tempted to go with the latter.

I am a fan of Thunderbolt 5 external NVMe drives (TB5; e.g., below I test a Trebleet TRE-8132). These are much faster than Thunderbolt 3/USB4 drives (e.g., the TBU405) and traditional USB 3.2 drives. Most neuroimaging tools save intermediate images at each step of processing, and this disk I/O is the Achilles heel of most HPC systems. For my M4 Pro, write/read speeds are:

Device     Write (MB/s)  Read (MB/s)
TB5        6346          5724
TB3/USB4   2674          2513
USB 3.2     712           675
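If you want a rough sanity check of a drive yourself, a minimal sketch follows; the path and sizes are placeholders, and because the OS page cache inflates results, a dedicated benchmark tool will give more trustworthy numbers:

```python
import os
import tempfile
import time

def rough_write_speed(path: str, size_mb: int = 64) -> float:
    """Very rough sequential write speed in MB/s (OS caching inflates it)."""
    block = os.urandom(1024 * 1024)  # 1 MiB of incompressible random bytes
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force data to the device, not just the cache
    return size_mb / (time.perf_counter() - start)

with tempfile.TemporaryDirectory() as d:  # point this at the drive under test
    speed = rough_write_speed(os.path.join(d, "bench.bin"))
    print(f"{speed:.0f} MB/s")
```

For real comparisons between enclosures, prefer a purpose-built benchmark that bypasses caching and also measures reads.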

On the topic of parallelization: most of the guides you'll find for nilearn, such as the official ones or the excellent ones from @PeerHerholz among others, are presented and run through Jupyter notebooks, which are single-core by default. You'll have to call joblib manually to parallelize. You can find an example in this topic.
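A minimal sketch of the joblib pattern (nilearn already depends on joblib, so it should be installed): the `analyze_subject` function here is a hypothetical stand-in; in a real notebook it would wrap your per-subject analysis, e.g. a first-level GLM.

```python
from joblib import Parallel, delayed

def analyze_subject(subject_id: int) -> int:
    # Placeholder for any per-subject computation that is
    # independent across subjects (the common fMRI case).
    return subject_id * subject_id

subjects = range(1, 6)

# n_jobs=2 runs two workers at once (use n_jobs=-1 for all cores);
# results come back in the same order as the inputs.
results = Parallel(n_jobs=2)(delayed(analyze_subject)(s) for s in subjects)
print(results)  # → [1, 4, 9, 16, 25]
```

Because subjects are independent, this scales close to linearly with core count, which is exactly the many-subjects parallelism discussed above.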