Various ways to apply masks (what's the difference, and which is best?)

Hi all,

I have a few questions about the best way to extract time-series for a particular ROI. There may be profound misunderstandings on my part. If so, please correct me and point me in the right direction!

This question is in the context of the studyforrest data by @eknahm. The studyforrest data uses a subject-specific space (i.e. not MNI space).

Cheers,
Sebastiaan

What to warp?

Now let’s say that I want to extract a time-series for the LGN. I can easily get the LGN mask from the Juelich atlas. This mask will then be in MNI space. So I could either:

  1. Warp the studyforrest data to MNI space (as in this discussion), and then use the MNI-space LGN mask as-is to extract the time-series; or
  2. Warp the LGN mask to the subject-specific space, and then use this warped mask to extract the time-series from the original (unwarped) studyforrest data.

My natural inclination would be to do 1. However, when extracting LC data for a common project, @eknahm actually did 2. Is there any reason to prefer one approach over the other? (Perhaps the answer is Question 2 below.)

How to best apply a mask?

If my understanding is correct, a mask is just a 3D array whose values between 0 and 1 indicate how much each voxel should contribute to the extracted time-series. (A sharp mask contains only 0's and 1's, whereas a fuzzy mask can also contain intermediate values.)

So to extract a time-series, you could just multiply the bold image with the mask, and then take the (weighted) sum across voxels, separately for each time point.
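If that understanding is right, a minimal numpy sketch of the multiply-and-sum operation might look like this (toy random data standing in for a real BOLD image and mask):

```python
import numpy as np

rng = np.random.default_rng(0)
bold = rng.normal(size=(4, 4, 4, 100))  # toy 4D image: x, y, z, time
mask = np.zeros((4, 4, 4))
mask[1:3, 1:3, 1:3] = 1.0  # sharp mask; a fuzzy mask would hold values in (0, 1)

# Weighted mean over voxels, computed separately for each time point
weights = mask / mask.sum()
timeseries = np.einsum('xyz,xyzt->t', weights, bold)  # shape: (100,)
```

With a sharp mask this reduces to the plain mean over the masked voxels; with a fuzzy mask each voxel contributes in proportion to its weight.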

However, there’s also the NiftiMasker from nilearn. So two questions:

  1. To what extent does NiftiMasker simply do the multiply-and-sum operation that I described above?
  2. NiftiMasker.fit_transform() accepts a confounds keyword, which you can use (I think) to apply motion correction. In the studyforrest data, these motion-correction parameters are provided. I suspect that these parameters are specific to a space, such that you can only use the (subject-specific) motion-correction parameters provided with studyforrest when you are working in the subject-specific space (effectively forcing strategy 2 from above)?

Hey,

I tend to leave data in native image space for as long as possible. In the case of mask operations and signal extraction, I consider the chance that something unexpected happens without me noticing to be much lower when warping a (binary) mask than when warping BOLD images. Moreover, warping from MNI into native image space immediately offers insight into the variability of brain structures (e.g. how many voxels are labeled LC across all subjects).
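As a toy illustration of that last point (random numpy arrays standing in for per-subject masks already warped into native space): once each mask is in native space, counting its nonzero voxels per subject gives a quick readout of that variability:

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical stand-ins for an LC mask warped into each subject's native space
subject_masks = [rng.random((3, 3, 3)) > 0.7 for _ in range(5)]

# Voxels labeled per subject; the spread hints at anatomical/warp variability
counts = [int(m.sum()) for m in subject_masks]
```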

To what extent does NiftiMasker simply do the multiply-and-sum operation that I described above?

I think it does not. This seems to be the critical code line: https://github.com/nilearn/nilearn/blob/master/nilearn/masking.py#L756
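If I read that line correctly, the masker does boolean indexing rather than a weighted sum: the mask is binarized, and every masked voxel's full time course is kept, giving a 2D (time x voxels) array. A numpy sketch of that behaviour (my reading, not guaranteed to match the implementation exactly):

```python
import numpy as np

bold = np.arange(2 * 2 * 2 * 5, dtype=float).reshape(2, 2, 2, 5)
mask = np.zeros((2, 2, 2), dtype=bool)
mask[0, 0, 0] = True
mask[1, 1, 1] = True

# Boolean indexing selects the masked voxels, keeping their full time courses;
# transpose to (n_timepoints, n_voxels), the shape nilearn returns
series = bold[mask].T
```

So to get a single time-series per ROI, you would still average the columns yourself afterwards (or use a tool that does it for you).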

NiftiMasker.fit_transform() confounds

I think these are just plain timeseries that get regressed out. There should be nothing image space specific about them (extrapolation from gut feeling, no proof).
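If they are ordinary nuisance regressors, "regressed out" would mean fitting the confound time-series to the signal by least squares and subtracting the fitted part. A rough numpy sketch of that idea (synthetic data; six columns standing in for motion parameters):

```python
import numpy as np

rng = np.random.default_rng(1)
n_t = 200
confounds = rng.normal(size=(n_t, 6))  # e.g. six motion parameters over time
true_signal = rng.normal(size=n_t)
signal = true_signal + confounds @ np.array([2.0, -1.0, 0.5, 0.0, 1.0, 3.0])

# Fit the confounds by least squares, then subtract the fitted component
beta, *_ = np.linalg.lstsq(confounds, signal, rcond=None)
cleaned = signal - confounds @ beta
```

The cleaned signal is orthogonal to each confound column, which is all this operation needs; nothing about it refers to a particular image space.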

Thanks, that’s very useful!

I think these are just plain timeseries that get regressed out. There should be nothing image space specific about them (extrapolation from gut feeling, no proof).

Ok, thanks. Then I misunderstood the concept!