Real-time fMRI processing pipeline with Python, nipype and/or other packages

Hi everyone

I have some questions about how best to structure Python and nipype code that I’m working on. I am very new to Python (a few months) and nipype (a few days), and have been working with Matlab and SPM12 for the past few years. I work with real-time / online fMRI processing methods: I have Matlab code that runs several preprocessing steps on each volume in a run as it arrives from the scanner, and I want to do more or less the same in Python with whichever packages are best suited for the job.

I understand the concept of nipype nodes and workflows, and have built a short script that creates a nipype workflow (with realignment and smoothing nodes from the SPM interface) to process a single fMRI volume. My idea is to specify these nodes (with some default parameters) before real-time processing starts and connect them into a generic workflow, without specifying input and output filenames. Then, during real-time processing, I update the nodes’ input/output filenames on each iteration (i.e. these values change as each new volume arrives) and rerun the updated workflow on the new volume.
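Roughly, the pattern I have in mind looks like this (a minimal sketch with placeholder paths and default parameters, not working real-time code; in practice the realignment would need a reference volume, which I have left out):

```python
# minimal sketch of the idea: build the workflow once, then update inputs
# and rerun it for every incoming volume (paths below are placeholders)
from nipype import Node, Workflow
from nipype.interfaces import spm

realign = Node(spm.Realign(), name='realign')
smooth = Node(spm.Smooth(fwhm=6), name='smooth')

wf = Workflow(name='rt_preproc', base_dir='/data/rt/work')
wf.connect(realign, 'realigned_files', smooth, 'in_files')

# real-time loop: point the first node at the newly arrived volume and rerun
for vol in ['/data/rt/vol_0001.nii', '/data/rt/vol_0002.nii']:
    realign.inputs.in_files = vol
    wf.run()
```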

So my questions:

  1. Is this a sensible way to structure the code, or are there better ways to achieve the same thing with nipype?
  2. Is the timing an unavoidable issue with nipype? I ran my short script and it took about half a minute. I saw Matlab opening and closing twice (I assume because I have two nodes from the SPM interface), and I guess that adds a lot to the latency. Are there workarounds to make this run faster? The key constraint in real-time / online processing is that the pipeline has to finish for each volume within at most 1 TR (typically max 2000 ms).
  3. Is nipype even a smart way to do this? Should I rather be using functions from other packages? Have people already solved similar problems with Python? I am familiar with most of the widely used real-time fMRI software packages and none of them use Python for the full application. Only pyneal (not really widely used) is built fully in Python, and it doesn’t use nipype (afaict). nilearn, for example, has some smoothing functionality that could probably be useful (a quick sketch of what I mean follows below), and I’m guessing there are other packages with Python algorithms for typical fMRI preprocessing steps. Should I rather investigate such a route?
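For the smoothing step alone, for example, a per-volume call with nilearn seems doable (a quick sketch, assuming each incoming volume is already saved as a single 3D NIfTI file; the paths are placeholders):

```python
# quick sketch: smooth a single incoming 3D volume with nilearn and time it
import time
from nilearn.image import smooth_img

t0 = time.time()
smoothed = smooth_img('/data/rt/vol_0001.nii', fwhm=6)   # placeholder path
smoothed.to_filename('/data/rt/svol_0001.nii')
print('smoothing took %.0f ms' % ((time.time() - t0) * 1000))
```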

@StephanHeunis - this really depends on the goals of your realtime fMRI setup. there are definitely things that can be done with Python, but one should stay away from significant processing as much as possible. nipype can indeed be used for general-purpose processing, but whether that kind of processing fits your use case is worth considering. i’ll describe some of the tools that we use:

  1. vsend (a siemens functor) to send realtime data over TCP/IP. this is developed by our colleagues at MGH, and if you have a siemens scanner, you can get it through what is called a C2P agreement. the intent is to send data directly to a receiver over the network without saving it to disk. the latest version works with SMS (simultaneous multi-slice) scans.

  2. on the receiving side we either use MURFI or just a straight up python receiver (also in the murfi code).

  3. one can connect psychopy to murfi to then use realtime calculations to change the stimulus display or other interactions.

  4. in some settings we need to run some quick processing on an initial resting or task scan to generate regions of interest. we often use nipype workflows for this purpose.

  5. we have also used scikit-learn/nilearn to create a model from one or more early runs, or from data from a previous session, and then used that model during a feedback session (see the sketch after this list).
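as a rough illustration of point 5 (just a sketch with placeholder paths and labels, not our actual code):

```python
# fit a simple decoder on an earlier, already-preprocessed run, then apply it
# to each new volume during the feedback run (all paths/labels are placeholders)
import numpy as np
from nilearn.input_data import NiftiMasker
from sklearn.linear_model import LogisticRegression

masker = NiftiMasker(mask_img='/data/roi_mask.nii', standardize=True)
X_train = masker.fit_transform('/data/localizer_run.nii')   # n_vols x n_voxels
y_train = np.load('/data/localizer_labels.npy')             # one label per volume
clf = LogisticRegression().fit(X_train, y_train)

# during the feedback run, per TR: extract the new volume and score it
x_new = masker.transform('/data/rt/vol_0042.nii')
feedback_value = clf.predict_proba(x_new)[0, 1]   # probability of the target class
```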

all of this is to say that there is no single answer; it depends on your setup and your rtfMRI experiment. if you provide some additional details, we can definitely suggest a set of routes.

Hi @satra. Thanks for responding and for the details about MURFI.

For my use case, I am specifically interested in using Python packages/functions to preprocess and denoise fMRI data in real-time. I am familiar with the tools and infrastructure I need to transport and receive data in real-time from the scanner (Philips in our case) to the networked PC that I’m using for rtfMRI processing. I also know of some tools that I could interface with for feedback presentation or visualisation. My main challenge is selecting the right packages for real-time processing: they should be fast, standardised, and ideally widely used in the community. I would like to build a framework that could grow into a community project.

Let’s say we assume the following constraints (and that data transfer and feedback presentation are sorted and not directly relevant):

  • Python-based
  • Existing (ideally packaged) algorithms for basic fMRI preprocessing steps (realignment, smoothing, slice timing correction) that can be run per volume
  • Fast (10-150ms per step, preferably less)

Is nipype an option in this case (my very limited experience suggests it isn’t), or are there better alternatives? My feeling is that I’d have to “pick and place” the steps (e.g. realignment from pyneal, smoothing from nilearn, slice timing correction from somewhere else) or write my own algorithms. Or is there functionality like this somewhere that I’m missing and that could be useful?

Thanks again!

if your processing has no parallel paths, nipype will not gain you any advantage in this scenario other than dataflow management and uniform access to a set of existing tools.

a toolset that is both fast and standardized does not yet exist for realtime fMRI :slight_smile:

people have developed various pieces, including things in MURFI and AFNI (both have implementations of a realtime GLM, for example). however, unless you are using caseforge headcases, ensuring your participants lie really still, or relying on prospective motion correction on the scanner, you will have to do some motion correction as well (one possible per-volume approach is sketched below).
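as one possible illustration (not a recommendation; it assumes FSL is installed and the paths are placeholders), per-volume motion correction to a fixed reference could look like:

```python
# register a single incoming volume to a fixed reference with FSL MCFLIRT
# via its nipype interface (all paths are placeholders)
from nipype.interfaces import fsl

mcflt = fsl.MCFLIRT(in_file='/data/rt/vol_0042.nii',
                    ref_file='/data/rt/reference.nii',
                    save_plots=True,
                    out_file='/data/rt/rvol_0042.nii.gz')
result = mcflt.run()
print(result.outputs.out_file, result.outputs.par_file)
```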

which packages or algorithms to use/develop will ultimately depend on your use case. where nipype may be useful to start with is in playing with the different tools and dataflows to determine what works in an offline setup. then you can optimize for realtime processing, which will likely require functionality that the usual tools do not provide, since you will be doing TR-by-TR processing (a bare-bones driver loop is sketched below).
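a bare-bones driver loop for that kind of TR-by-TR setup might look something like this (the watched directory, file pattern, TR, and process_volume are all placeholders for whatever you end up using):

```python
# watch a directory for new volumes, process each one, and warn if the
# per-volume processing exceeds one TR (everything here is a placeholder)
import glob
import time

TR = 2.0      # seconds
seen = set()

def process_volume(path):
    # plug in whichever per-volume steps you settle on
    # (realignment, smoothing, ROI extraction, ...)
    pass

while True:
    for path in sorted(glob.glob('/data/rt/vol_*.nii')):
        if path in seen:
            continue
        seen.add(path)
        t0 = time.time()
        process_volume(path)
        elapsed = time.time() - t0
        if elapsed > TR:
            print('warning: %s took %.2f s (more than 1 TR)' % (path, elapsed))
    time.sleep(0.05)
```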