FSL: FEAT does not output the registered 4D functional image?

Summary of what happened:

Hey folks, apologies if this is a noob question, but I’m just getting into the domain. I’m running a cohort of fMRI scans through FEAT’s preprocessing and I’m confused about the outputs. As I understand it, “filtered_func_data.nii.gz” is supposed to be the 4D image after all the preprocessing has been applied. From what I’m seeing, this is the case for every step except registration: the 4D image is still unregistered (native space). My questions: is this intentional? Does FEAT normally output an unregistered image? If not, what am I doing wrong? And if it is intentional, is there an easy way to apply the relevant .mat output from the GUI, or is this something I’ll have to take care of on the command line?

Command used (and if a helper script was used, a link to the helper script or the command generated):

Version:

FSL 6.0.7.4

Environment (Docker, Singularity, custom installation):

macOS 13.5

Data formatted according to a validatable standard? Please provide the output of the validator:

Relevant log outputs (up to 20 lines):

Screenshots / relevant information:

Hi @Jaz, yes, this is intentional: at the single-subject level, FEAT performs the time-series analysis in native fMRI space. If you selected standard options, all of the files required to transform data between any of the three spaces (functional, structural, standard) will be located in the <analysis.feat>/reg/ directory.

When you pass single-subject .feat directories to a higher-level FEAT analysis, all necessary images will be automatically transformed into standard space, and the analysis performed in that space.
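If you do want filtered_func_data in standard space yourself, one common approach is applywarp. This is a sketch, not the only way: it assumes you ran FNIRT-based (nonlinear) registration in FEAT, so that reg/highres2standard_warp exists, and that $FSLDIR is set:

```shell
# Run from inside the first-level <analysis>.feat directory.
# --premat takes the functional->structural affine, --warp the
# structural->standard nonlinear warp; applywarp composes them and
# resamples the whole 4D series into standard space.
applywarp \
    --ref="$FSLDIR/data/standard/MNI152_T1_2mm" \
    --in=filtered_func_data \
    --premat=reg/example_func2highres.mat \
    --warp=reg/highres2standard_warp \
    --out=filtered_func_data_standard
```

FSL also ships featregapply, which does essentially this for a whole .feat directory (it is what higher-level analyses use), writing the results to a reg_standard/ subdirectory.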

Hey Paul, thanks for answering my question!

Good to know. From what I’ve been able to gather thus far (and what I’ve managed to do), the most efficient pipeline for my purposes (an ROI analysis for each subject in a given study) is to warp standard space into each participant’s functional space, not the other way around. That way, I can apply the flirt/fnirt transforms to a given atlas and place it onto my functional images for analysis. Please correct me if that’s not actually an efficient pipeline.
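For the standard-to-functional direction, a minimal sketch (hypothetical filenames; assumes a .feat/reg directory from FNIRT-based registration, and an atlas image "my_atlas" in standard space) is to invert the warp and the affine, compose them, then apply with nearest-neighbour interpolation so atlas labels stay integers:

```shell
# 1) Invert the structural->standard nonlinear warp.
invwarp --warp=reg/highres2standard_warp \
        --ref=reg/highres \
        --out=reg/standard2highres_warp

# 2) Invert the functional->structural affine.
convert_xfm -omat reg/highres2example_func.mat \
            -inverse reg/example_func2highres.mat

# 3) Compose: standard -> structural (warp), then structural -> functional (affine).
convertwarp --ref=reg/example_func \
            --warp1=reg/standard2highres_warp \
            --postmat=reg/highres2example_func.mat \
            --out=reg/standard2example_func_warp

# 4) Pull the atlas into functional space; --interp=nn keeps labels intact.
#    "my_atlas" is a placeholder for your standard-space atlas image.
applywarp --ref=reg/example_func \
          --in=my_atlas \
          --warp=reg/standard2example_func_warp \
          --interp=nn \
          --out=atlas_in_func_space
```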

I guess my main, naïve, follow-up question is: is there any way, or reason (for that matter), to warp an entire functional image (consisting of n volumes) to standard space?

Ultimately the choice of which images to transform into which space will depend on what type of study you are performing, but I think your suggestion sounds reasonable, if by “ROI analysis” you mean extracting some ROI-specific time-series or activation-based metric for each of your subjects.

One scenario I can think of which requires transforming all subjects’ 4D time series data into a common anatomical space is group ICA, most commonly used with resting-state fMRI, where you run ICA on time series data from a group of subjects in order to identify the underlying signals (a.k.a. resting-state networks) which are common to the group. These group-level ICs can then be used as a template for further analysis. I’m sure there are other situations, but that’s the only one that immediately comes to mind.
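As a sketch of that group-ICA scenario (hypothetical paths; assumes first-level .feat directories with FNIRT-based registration), featregapply resamples each subject’s filtered_func_data into standard space, and MELODIC then temporally concatenates them:

```shell
# Resample each subject's 4D data into standard space
# (writes <feat_dir>/reg_standard/filtered_func_data).
for feat_dir in sub-*/analysis.feat; do
    featregapply "$feat_dir"
done

# List the standard-space 4D images, one per line.
ls -d sub-*/analysis.feat/reg_standard/filtered_func_data.nii.gz > inputlist.txt

# Temporal-concatenation group ICA (-a concat); -d fixes the number
# of components, --tr is the repetition time in seconds.
melodic -i inputlist.txt -o groupICA -a concat -d 25 --tr=2.0
```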