How to load AFNI's GLM into nilearn?

Hi everyone.
I’d like to perform MVPA in nilearn with GLM results from AFNI. However, I don’t know how to load the BRIK/HEAD files into nilearn.
For example, the directory of my AFNI beta maps is:

/media/menglab/work2_4T/hy/face_imagery/Analysis/sub-2//AFNI_beta_maps

which contains errts, fitts and stats files from AFNI 3dDeconvolve. How do I load them into nilearn?
Huge thanks for your help!

Hi @Yan_Huang,

You can use 3dAFNItoNIFTI to convert the BRIK/HEAD dataset to .nii.gz and then load it into nilearn with nilearn.image.load_img.

Best,
Steven


Thank you, Steven.
One more question, which files should I load (all of them or a specific one?)

I do not know AFNI’s specific output naming conventions, but whichever one contains the beta coefficients / effect sizes from the regression.

Just to note, there are lots of ways to convert between NIFTI and BRIK/HEAD in AFNI. 3dAFNItoNIFTI is one, as is simply 3dcopy. If you want to select out subbricks/subvolumes of datasets, then 3dcalc can be useful. For example, if you want the first 3 volumes of a dataset and have the output be NIFTI format, you could run:

3dcalc -a DSET"[0..2]" -expr "a" -prefix DSET_012.nii.gz

You can basically do standard slicing things with subbrick notation:

DSET"[3,5..8,19]"   # subbricks 3,5,6,7,8,19
DSET"[1,14,29..$]"  # subbricks 1,14,29-to-the-last
DSET'[1..$(2)]'     # all odd subbricks 1,3,5,... 
DSET[0,4,5,15]      # ERROR in tcsh (no quotes); OK in bash

Note there is a bit of subtlety with using quotes to escape special shell characters, esp. if something follows the dollar sign.
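Once the data are converted to NIfTI and loaded into Python, the same kinds of selections map onto ordinary array slicing along the last axis. A minimal sketch (the array here is a made-up stand-in for voxel data, e.g. as returned by nibabel’s get_fdata(); shapes are arbitrary):

```python
import numpy as np

# Hypothetical 4D array standing in for a dataset's voxel data,
# shape (x, y, z, n_subbricks).
data = np.zeros((4, 4, 4, 20))

first_three = data[..., 0:3]         # like DSET"[0..2]"
picked      = data[..., [3, 5, 19]]  # like DSET"[3,5,19]"
odd_bricks  = data[..., 1::2]        # like DSET'[1..$(2)]'

print(first_three.shape, picked.shape, odd_bricks.shape)
# (4, 4, 4, 3) (4, 4, 4, 3) (4, 4, 4, 10)
```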

As to which file is what, there is a dictionary of the major outputs in the results directory produced by AFNI’s afni_proc.py: cat out.ss_review_uvars.json. It points out the datasets of primary interest, as well as important intermediate files, quantitative processing results, and other information. For example, in the AFNI Bootcamp dataset with subject ID “FT”, cat out.ss_review_uvars.json produces:

{
   "afni_package": "macos_10.12_local",
   "afni_ver": "AFNI_22.1.09",
   "align_anat": "FT_al_keep+orig.HEAD",
   "censor_dset": "censor_FT_combined_2.1D",
   "copy_anat": "FT_anat+orig.HEAD",
   "cormat_warn_dset": "out.cormat_warn.txt",
   "df_info_dset": "out.df_info.txt",
   "enorm_dset": "motion_FT_enorm.1D",
   "errts_dset": "errts.FT+tlrc.HEAD",
   "final_anat": "anat_final.FT+tlrc.HEAD",
   "final_epi_dset": "final_epi_vr_base_min_outlier+tlrc.HEAD",
   "final_view": "tlrc",
   "flip_check_dset": "aea_checkflip_results.txt",
   "flip_guess": "NO_FLIP",
...

(truncated output here; the file is longer). To become more familiar with what the keys mean, you can run gen_ss_review_scripts.py -show_uvar_dict:

   afni_package         : set AFNI package
   afni_ver             : set AFNI version
   align_anat           : anat aligned with orig EPI
   censor_dset          : set motion_censor file
   combine_method       : set method for combining multiple echoes
   copy_anat            : original -copy_anat dataset
   cormat_warn_dset     : correlation warns in Xmat
   decon_err_dset       : 3dDeconvolve warnings
   df_info_dset         : degree of freedom info
   dir_suma_spec        : directory containing surface spec file
   enorm_dset           : set motion_enorm file
   errts_dset           : set residual dataset
   final_anat           : anat aligned with stats dataset
   final_epi_dset       : set final EPI base dataset
   final_view           : set final view of data (orig/tlrc)
   flip_check_dset      : -check_flip result dset
   flip_guess           : guessed dset flip status
...

So, for example, the errts_dset value sets/provides the name of the residuals from the processing.
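Since the file is plain JSON, it is easy to read programmatically. A sketch using an excerpt of the output shown above (in practice you would open the real out.ss_review_uvars.json file):

```python
import json

# Excerpt of out.ss_review_uvars.json (keys taken from the example above);
# with the real file you would use: uvars = json.load(open("out.ss_review_uvars.json"))
uvars_text = """
{
   "errts_dset": "errts.FT+tlrc.HEAD",
   "final_view": "tlrc"
}
"""
uvars = json.loads(uvars_text)
print(uvars["errts_dset"])  # name of the residual dataset
```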

For the sets you asked about:

  • errts*.HEAD: the dataset of residuals from the regression model (concatenated across all input FMRI runs), which is the major output for resting-state data, and also informative for task-based FMRI.
  • fitts*.HEAD: the dataset of the “fit” part of the model (concatenated across all input FMRI runs), which is the sum of all model regressors times their estimated coefficient values, \sum_i \beta_i x_i.
  • all_runs*.HEAD: the sum of the errts and fitts, or basically the full dataset on the “left side” of the GLM (concatenated across all runs).
  • stats*.HEAD: when processing task FMRI data (which has stimulus timing files), this contains the full F-stat of the regressors of interest, as well as the coefficient and statistics for each individual regressor of interest. This is a major output for task-based FMRI.

So, for your case, if you want the betas (i.e., effect estimates or coefficients) from task-based FMRI for MVPA, then you likely want the stats*.HEAD output. You can see which subbrick/subvolume is which by checking the labels that AFNI’s 3dDeconvolve puts in each subbrick, e.g. via 3dinfo -label stats.*HEAD, more verbosely 3dinfo -subbrick_info stats.*HEAD, or most verbosely 3dinfo -verb stats.*HEAD. For example, the subbrick_info for the same Bootcamp dataset as above looks like:

++ 3dinfo: AFNI version=AFNI_24.0.06 (Feb 14 2024) [64-bit]
  -- At sub-brick #0 'Full_Fstat' datum type is float:            0 to       912.081
     statcode = fift;  statpar = 2 412
  -- At sub-brick #1 'vis#0_Coef' datum type is float:     -50.0403 to       48.2865
  -- At sub-brick #2 'vis#0_Tstat' datum type is float:     -26.6244 to       36.1814
     statcode = fitt;  statpar = 412
  -- At sub-brick #3 'vis_Fstat' datum type is float:            0 to          1000
     statcode = fift;  statpar = 1 412
  -- At sub-brick #4 'aud#0_Coef' datum type is float:     -37.5292 to       40.4449
  -- At sub-brick #5 'aud#0_Tstat' datum type is float:     -20.0322 to        38.504
     statcode = fitt;  statpar = 412
  -- At sub-brick #6 'aud_Fstat' datum type is float:            0 to          1000
     statcode = fift;  statpar = 1 412
  -- At sub-brick #7 'V-A_GLT#0_Coef' datum type is float:     -45.8654 to        54.538
  -- At sub-brick #8 'V-A_GLT#0_Tstat' datum type is float:      -11.357 to       12.3942
     statcode = fitt;  statpar = 412
...

You can see that AFNI stores the relevant degree of freedom info in the stats file, associated with each statistic.
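For MVPA you typically only want the *_Coef sub-bricks. Once the labels are in Python, picking them out is a one-liner; a sketch using the labels from the 3dinfo output above (the voxel array here is a random stand-in for data loaded from the converted NIfTI):

```python
import numpy as np

# Sub-brick labels as printed by 3dinfo for the example stats dataset above.
labels = ["Full_Fstat",
          "vis#0_Coef", "vis#0_Tstat", "vis_Fstat",
          "aud#0_Coef", "aud#0_Tstat", "aud_Fstat",
          "V-A_GLT#0_Coef", "V-A_GLT#0_Tstat"]

# Indices of the beta (coefficient) sub-bricks.
coef_idx = [i for i, lab in enumerate(labels) if lab.endswith("_Coef")]
print(coef_idx)  # [1, 4, 7]

# Hypothetical voxel array with one volume per sub-brick; in practice this
# would come from loading the converted NIfTI (e.g. with nibabel).
data = np.random.rand(4, 4, 4, len(labels))
betas = data[..., coef_idx]
print(betas.shape)  # (4, 4, 4, 3)
```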

–pt


Thank you, ptaylor.
I tried loading that stats nii file into nilearn, but it gave me the error below, saying the image is 5D.
Does it mean I have to extract the sub-brick containing the betas first when converting from BRIK/HEAD to NIFTI?

Also, if the fitts file contains the betas, can I take the fitts nii as input for MVPA?

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[42], line 17
     14 beta_files = glob.glob(dir_AFNI_beta + f"stats.sub-{subj}_task-imagery.nii")
     16 # Use load_img to read all the nii files
---> 17 beta_imgs = image.load_img(beta_files)
     19 for roi in roi_list:  # read all the nii files under dir_ROI_mask
     20     roi_files.extend(glob.glob(dir_ROI_mask + f"/{roi}*.nii"))

File /media/menglab/work2_4T/miniconda3/envs/nilearn/lib/python3.9/site-packages/nilearn/image/image.py:1355, in load_img(img, wildcards, dtype)
   1318 def load_img(img, wildcards=True, dtype=None):
   1319     """Load a Niimg-like object from filenames or list of filenames.
   1320 
   1321     .. versionadded:: 0.2.5
   (...)
   1353 
   1354     """
-> 1355     return check_niimg(img, wildcards=wildcards, dtype=dtype)

File /media/menglab/work2_4T/miniconda3/envs/nilearn/lib/python3.9/site-packages/nilearn/_utils/niimg_conversions.py:310, in check_niimg(niimg, ensure_ndim, atleast_4d, dtype, return_iterator, wildcards)
    306     if return_iterator:
    307         return iter_check_niimg(
    308             niimg, ensure_ndim=ensure_ndim, dtype=dtype
    309         )
--> 310     return ni.image.concat_imgs(
    311         niimg, ensure_ndim=ensure_ndim, dtype=dtype
    312     )
    314 # Otherwise, it should be a filename or a SpatialImage, we load it
    315 niimg = load_niimg(niimg, dtype=dtype)

File /media/menglab/work2_4T/miniconda3/envs/nilearn/lib/python3.9/site-packages/nilearn/image/image.py:1442, in concat_imgs(niimgs, dtype, ensure_ndim, memory, memory_level, auto_resample, verbose)
   1439     ndim = len(first_niimg.shape)
   1441 if ndim not in [3, 4]:
-> 1442     raise TypeError(
   1443         "Concatenated images must be 3D or 4D. You gave a "
   1444         f"list of {ndim}D images"
   1445     )
   1447 lengths = [first_niimg.shape[-1] if ndim == 4 else 1]
   1448 for niimg in literator:
   1449     # We check the dimensionality of the niimg

TypeError: Concatenated images must be 3D or 4D. You gave a list of 5D images

The fitts file does not contain betas; if your input was a set of one or more EPI runs with a cumulative total of N time points, then it contains N time points itself. It is the “fit time series”: the sum of all components in your regression model times their estimated betas. It is the time series that you “explain” or estimate with all your regressors; it is the complement of the residuals. Summing the fit time series (fitts) and the error time series (errts, or residual time series) gives exactly your input time series.
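The decomposition above can be illustrated with a tiny least-squares example (a sketch with made-up numbers at a single voxel, not AFNI output):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy GLM at one voxel: N time points, 2 regressors.
N = 100
X = rng.standard_normal((N, 2))                               # design matrix
y = X @ np.array([1.5, -0.7]) + 0.1 * rng.standard_normal(N)  # "input" time series

# Ordinary least-squares fit.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

fitts = X @ beta   # fit time series: sum_i beta_i * x_i
errts = y - fitts  # residual time series

# fitts + errts reconstructs the input exactly.
print(np.allclose(fitts + errts, y))  # True
```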

I’m not sure about the 5D error issue; I wonder if that is a difference between having a NIFTI-1 and a NIFTI-2 format output. Does nilearn handle both?

–pt
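One possible cause of the 5D shape, as a sketch: AFNI can write a stats “bucket” to NIfTI with the sub-bricks along the 5th dimension and a singleton 4th (time) dimension, i.e. shape (x, y, z, 1, n), which nilearn’s loader rejects. If that is the case here, squeezing out the singleton axis yields an ordinary 4D image. The array below is a synthetic stand-in; the commented lines show how the same fix might look with a real file (assuming nibabel is installed, and with a hypothetical filename):

```python
import numpy as np

# A 5D array mimicking how AFNI can write a stats bucket to NIfTI:
# sub-bricks in the 5th dimension, singleton 4th (time) dimension.
data_5d = np.zeros((4, 4, 4, 1, 9))

# Squeeze out the singleton axis to get an ordinary 4D volume series.
data_4d = np.squeeze(data_5d, axis=3)
print(data_4d.shape)  # (4, 4, 4, 9)

# With a real file, something like:
#   import nibabel as nib
#   img = nib.load("stats.sub-2_task-imagery.nii")   # hypothetical filename
#   img4d = nib.Nifti1Image(np.squeeze(np.asanyarray(img.dataobj)),
#                           img.affine, img.header)
#   img4d.to_filename("stats_4d.nii.gz")
```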

I don’t know whether nilearn handles both formats.
But I’ve managed to load the fitts file as input for MVPA. Judging by the ‘Load the behavioral labels’ part in this tutorial: A introduction tutorial to fMRI decoding - Nilearn, I feel that the fitts file should be the one to load since it contains time series.