Nilearn affine error potentially related to MNI fmriprep registration?

Hi Neuroimagers!

I am getting an affine error from nilearn when generating my second level model. The error is: ValueError: Field of view of image #31 is different from reference FOV.

A little bit about our data pipeline:

  • I am working to process MID fMRI data from a particularly high motion sample.
  • Data are preprocessed using fmriprep 23.0.1, and all images are registered to MNI152NLin2009cAsym. The registration is decent, but the preprocessed fmriprep outputs show brain extending outside the standard MNI152NLin2009cAsym template, and this varies from participant to participant.
  • First-level models and contrasts are created using nilearn.FirstLevelModel. The mask image passed to FirstLevelModel is the participant- and scan-specific fmriprep brain mask. These mask images also extend outside standard MNI space, and the resulting t-maps show activation outside standard MNI space as well.
  • Finally, a second-level model is run via nilearn.SecondLevelModel, but it errors out with the FOV error above. When one is run on a smaller subsample, there is a lot of activity outside standard MNI space. I’ve pasted the affines below and included a screenshot.
Reference affine:
array([[   2.5,    0. ,    0. ,  -96.5],
       [   0. ,    2.5,    0. , -132.5],
       [   0. ,    0. ,    2.5,  -78.5],
       [   0. ,    0. ,    0. ,    1. ]])
Image affine:
array([[   2.5       ,    0.        ,    0.        ,  -96.5       ],
       [   0.        ,    2.5       ,    0.        , -132.5       ],
       [   0.        ,    0.        ,    2.50099993,  -78.5       ],
       [   0.        ,    0.        ,    0.        ,    1.        ]])
Reference shape:
(78, 93, 78)
Image shape:
(78, 93, 78, 1)

Below is a picture of a second-level model run on a subsample that doesn’t include the participant with the problematic affines.

I read this thread but am wondering whether resampling makes sense here and which image I would resample to, because in theory all images should already be in the same space. Is it possible that fmriprep’s registration to MNI space isn’t very successful with these data? How big of an affine difference will nilearn tolerate?
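If resampling does turn out to be the answer, I assume it would look roughly like this with nilearn.image.resample_to_img, using one of the conforming first-level maps as the target grid (the filenames below are placeholders, not our actual paths):

import nibabel as nib
from nilearn.image import resample_to_img

# Placeholder paths: a conforming first-level map used as the target grid,
# and the map whose affine differs slightly from the reference.
reference_img = nib.load("sub-A_task-mid_zmap.nii.gz")
problem_img = nib.load("sub-B_task-mid_zmap.nii.gz")

# Put the off-by-a-hair image onto the reference grid so every input to
# SecondLevelModel shares the same affine and shape.
resampled = resample_to_img(problem_img, reference_img, interpolation="continuous")
resampled.to_filename("sub-B_task-mid_zmap_resampled.nii.gz")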

Thanks for troubleshooting with me :slight_smile:

Hi @katseitz and welcome to neurostars!

It is possible you’ll get better looking results in 23.2.0 due to changes in how resampling is done (RF: Update primary bold workflow to incorporate single shot resampling by effigies · Pull Request #3114 · nipreps/fmriprep · GitHub). It would also help to see your command. I am particularly interested in whether you are applying SDC and using the FreeSurfer inputs, both of which should improve brain shape and resampling efforts.

You might want to consider making a study-wide mask based off of all the subjects’ masks (nilearn.masking.intersect_masks - Nilearn).
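A minimal sketch of what I mean, assuming the fmriprep brain masks in MNI space follow the usual naming (the glob pattern below is a placeholder for your derivatives layout):

from glob import glob
from nilearn.masking import intersect_masks

# Placeholder pattern: one fmriprep brain mask per subject in MNI152NLin2009cAsym space.
mask_files = sorted(glob("derivatives/fmriprep/sub-*/ses-1/func/*MNI152NLin2009cAsym*desc-brain_mask.nii.gz"))

# threshold=1.0 keeps only voxels present in every subject's mask (a strict
# intersection); lowering it (e.g., 0.8) keeps voxels covered by that fraction
# of subjects.
group_mask = intersect_masks(mask_files, threshold=1.0, connected=True)
group_mask.to_filename("group_brain_mask.nii.gz")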

Seeing your code would help here, along with a full traceback indicating which line of code raises the error. That said, those affines are so close that I would feel comfortable copying the reference affine onto any image raising the error, if that fixes it.
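If you go the affine-copying route, here is a rough sketch with nibabel (the reference affine is taken straight from your error message; the path is a placeholder):

import nibabel as nib
import numpy as np

# Reference affine from the nilearn error message.
reference_affine = np.array([
    [2.5, 0.0, 0.0, -96.5],
    [0.0, 2.5, 0.0, -132.5],
    [0.0, 0.0, 2.5, -78.5],
    [0.0, 0.0, 0.0, 1.0],
])

# Placeholder path for the image whose third voxel size reads 2.50099993.
img = nib.load("problem_zmap.nii.gz")

# Rebuild the image with the reference affine; nibabel should update the
# header's sform/qform from this affine when the file is written.
fixed = nib.Nifti1Image(img.get_fdata(), reference_affine, img.header)
fixed.to_filename("problem_zmap_fixed-affine.nii.gz")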

Best,
Steven

Thanks for your thoughts, @Steven!

The fmriprep call:

singularity run --cleanenv -B /projects:/projects \
-B /projects/data/processed/neuroimaging/fmriprep_ses-1:/out \
-B /projects/data/raw/neuroimaging/bids:/data \
-B /projects/data/processed/neuroimaging/fmriprep_ses-1/work:/work \
/projects/software/singularity_images/fmriprep-23.0.1.simg \
/data /out participant \
--participant-label ${1} --bids-filter-file bids_filter_file_ses-1.json \
--fs-license-file /projects/software/freesurfer_license/license.txt \
-w /work --ignore fieldmaps 

We originally attempted to use the --use-syn-sdc flag, but it ended up stretching our data so that frontal regions were incredibly long and didn’t look anatomically correct. We aren’t applying fieldmaps currently because they are only collected once at the beginning of the session rather than multiple times, and participants move enough between scans that the fieldmaps also did not really help :frowning: We also get loads of affine errors when we attempt to apply them…

Mask image:
What are the advantages of that over using the MNI mask image?

Code and Traceback:
First-level code and second-level code. The second-level code especially is an attempt to get a minimally viable second level running, so it’s barebones.
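For context, the barebones version is roughly along these lines (the subject IDs, paths, and contrast name below are placeholders rather than the actual script):

import pandas as pd
from nilearn.glm.second_level import SecondLevelModel

# Placeholder: one first-level contrast map per subject, all expected to be
# on the same MNI152NLin2009cAsym grid.
subject_list = ["001", "002", "003"]
contrast_maps = [
    f"first_levels/sub-{sub}_ses-1_task-mid_contrast-anticipation_zmap.nii.gz"
    for sub in subject_list
]

# One-sample (intercept-only) group design.
design_matrix = pd.DataFrame({"intercept": [1] * len(contrast_maps)})

second_level_model = SecondLevelModel(smoothing_fwhm=6.0)
second_level_model = second_level_model.fit(contrast_maps, design_matrix=design_matrix)
z_map = second_level_model.compute_contrast(output_type="z_score")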

Traceback (most recent call last):
  File "MID_second_levels.py", line 49, in <module>
    main()
  File "MID_second_levels.py", line 46, in main
    second_level(ses)
  File "MID_second_levels.py", line 31, in second_level
    z_map = second_level_model.compute_contrast(output_type="z_score")
  File "/home/sir8526/.local/lib/python3.8/site-packages/nilearn/glm/second_level/second_level.py", line 571, in compute_contrast
    Y = self.masker_.transform(effect_maps)
  File "/home/sir8526/.local/lib/python3.8/site-packages/nilearn/maskers/base_masker.py", line 232, in transform
    return self.transform_single_imgs(imgs,
  File "/home/sir8526/.local/lib/python3.8/site-packages/nilearn/maskers/nifti_masker.py", line 542, in transform_single_imgs
    data = self._cache(
  File "/software/python/3.8.4/lib/python3.8/site-packages/joblib/memory.py", line 352, in __call__
    return self.func(*args, **kwargs)
  File "/home/sir8526/.local/lib/python3.8/site-packages/nilearn/maskers/nifti_masker.py", line 80, in _filter_and_mask
    temp_imgs = _utils.check_niimg(imgs)
  File "/home/sir8526/.local/lib/python3.8/site-packages/nilearn/_utils/niimg_conversions.py", line 313, in check_niimg
    return concat_niimgs(niimg, ensure_ndim=ensure_ndim, dtype=dtype)
  File "/home/sir8526/.local/lib/python3.8/site-packages/nilearn/_utils/niimg_conversions.py", line 525, in concat_niimgs
    for index, (size, niimg) in enumerate(
  File "/home/sir8526/.local/lib/python3.8/site-packages/nilearn/_utils/niimg_conversions.py", line 173, in _iter_check_niimg
    raise ValueError(
ValueError: Field of view of image #31 is different from reference FOV.
Reference affine:
array([[   2.5,    0. ,    0. ,  -96.5],
       [   0. ,    2.5,    0. , -132.5],
       [   0. ,    0. ,    2.5,  -78.5],
       [   0. ,    0. ,    0. ,    1. ]])
Image affine:
array([[   2.5       ,    0.        ,    0.        ,  -96.5       ],
       [   0.        ,    2.5       ,    0.        , -132.5       ],
       [   0.        ,    0.        ,    2.50099993,  -78.5       ],
       [   0.        ,    0.        ,    0.        ,    1.        ]])
Reference shape:
(78, 93, 78)
Image shape:
(78, 93, 78, 1)

Some fmap issues were also fixed in 23.2.0, so maybe see if you get better performance with it?

Just so you are limiting the amount of outside-the-brain signal in your models: the intersection only keeps voxels that fall inside every participant’s brain mask.

I’m going to install 23.2.0 and rerun 10 participants through our pipeline as a test. I’ll report back in a few days on how that goes. Is there any information I should gather as I go that would help with troubleshooting? Should I try both fieldmaps and the fieldmap-less --use-syn-sdc?

Do we think this will fix the affine error as well?

Hi @katseitz,

I would start with dedicated fmaps first, then try SYN if that doesn’t work.

Could you also check if the affines are all the same for the raw files? Identifying the particular subject with the wonky affine will also probably help (the error you got was unspecific as to which file was causing problems).
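If it helps, a quick way to do that check (the glob pattern is a placeholder for wherever your raw BOLD runs live; it prints any file whose affine differs from the first one):

from glob import glob
import nibabel as nib
import numpy as np

# Placeholder pattern for the unprocessed BOLD runs.
bold_files = sorted(glob("bids/sub-*/ses-1/func/*task-mid*_bold.nii.gz"))

reference_affine = nib.load(bold_files[0]).affine
for path in bold_files:
    affine = nib.load(path).affine
    if not np.allclose(affine, reference_affine):
        print(f"Affine mismatch: {path}")
        print(affine)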

Best,
Steven

Hi Steven,

I think there are a couple of different things going on here: the first is that our registration is still really wonky, even using fmriprep 23.2.0; the second is the affine error.

Registration
We reran 10 participants using fmriprep 23.2.0 with fieldmaps. The alignment between the anatomical reference of the fieldmap and the target EPI was wonky, we ended up losing a lot of brain, and the brain masks were tiny.

We then reran without fieldmaps in 23.2.0 and the output was super similar to 23.0.1. I can also try SYN if we think that might give better results, but any thoughts on why we’re having such a hard time getting into MNI space?

We’re getting activation outside of the brain, and the shape of the GLM output seems a little odd too. I can check individual brain masks to see if a few participants’ brain masks are larger than the MNI output space we’re shooting for; presumably that activation is coming from somewhere, right?
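To put numbers on that, I’m thinking of counting, for each subject, how many brain-mask voxels fall outside nilearn’s MNI152 brain mask, roughly like this (the glob pattern below is a placeholder for our derivatives layout):

from glob import glob
import numpy as np
from nilearn.datasets import load_mni152_brain_mask
from nilearn.image import load_img, resample_to_img

# Placeholder pattern for the fmriprep subject brain masks in MNI space.
mask_files = sorted(glob("derivatives/fmriprep/sub-*/ses-1/func/*MNI152NLin2009cAsym*desc-brain_mask.nii.gz"))

template_mask = load_mni152_brain_mask()
for path in mask_files:
    subj_mask = load_img(path)
    # Resample the template mask onto the subject mask grid for a voxelwise comparison.
    mni_mask = resample_to_img(template_mask, subj_mask, interpolation="nearest")
    outside = np.logical_and(subj_mask.get_fdata() > 0, mni_mask.get_fdata() == 0)
    print(f"{path}: {int(outside.sum())} mask voxels outside the MNI brain mask")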

GLM outputs:

Thresholded GLM output over MNI brain

Affine Error
Since the error referred to image #31, I could check the affine of that image in the list passed into the GLM. Is the easiest way to see affines with nibabel’s img.affine? And by raw, do you mean the un-preprocessed niftis or the preprocessed output from fmriprep? Should I compare that to the affines of the first-level outputs? Happy to paste the output I get here later today :slight_smile:
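Concretely, I’m planning to check it roughly like this (z_map_paths is a placeholder for whatever list gets passed to SecondLevelModel.fit, in the same order):

from glob import glob
import nibabel as nib
import numpy as np

# Placeholder: the list of first-level maps passed to SecondLevelModel.fit.
z_map_paths = sorted(glob("first_levels/sub-*_contrast-anticipation_zmap.nii.gz"))

# The index in the nilearn error comes from enumerate, so #31 should be 0-based.
suspect = nib.load(z_map_paths[31])
reference = nib.load(z_map_paths[0])

print(suspect.affine)
print("Matches reference affine:", np.allclose(suspect.affine, reference.affine))
print("Shape:", suspect.shape)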

Hi @katseitz,

Unprocessed niftis.

Would you feel comfortable sharing a subject’s raw data? You can DM me a google drive link.

Best,
Steven

Hi Steven,

Yes, can you send me your email?

Thanks,
Kat

smeisler@g.harvard.edu

Hi @katseitz,

I also got weird results using the dedicated fmap. I don’t use GE so I cannot help much with that in particular, but when I used syn-sdc I got okay results:




My extra arguments, using 23.2.0:

--participant_label t1270 \
-w $work \
--fs-license-file $license \
 --fs-subjects-dir $dir \
 --mem_mb 63500 --nprocs 32 --omp-nthreads 16 \
--ignore fieldmaps --use-syn-sdc --force-syn 

Best,
Steven

I’ll give it a go with the same ten participants and see what happens!

I’m promptly getting an error. @Steven, did you have any issues? I’m using the exact same participant data as I sent you, and I did not get this error when using 23.2.0 with or without fieldmaps.

Here’s my call:

singularity run --cleanenv -B /projects/b1108:/projects/b1108 \
-B /projects/b1108/studies/transitions2/data/processed/neuroimaging/ses-1_v23_2_0_syn:/out \
-B /projects/b1108/studies/transitions2/data/raw/neuroimaging/bids:/data \
-B /projects/b1108/studies/transitions2/data/processed/neuroimaging/ses-1_v23_2_0_syn/work:/work \
/projects/b1108/software/singularity_images/fmriprep_23.2.0.sif \
/data /out participant \
--participant-label ${1} --bids-filter-file bids_filter_file_ses-1.json \
--fs-license-file /projects/b1108/software/freesurfer_license/license.txt \
-w /work --ignore fieldmaps --use-syn-sdc --force-syn 

And here’s the error

240228-09:01:08,947 nipype.workflow IMPORTANT:
	 BOLD series will be slice-timing corrected to an offset of 0.987s.
Process Process-2:
Traceback (most recent call last):
  File "/opt/conda/envs/fmriprep/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/opt/conda/envs/fmriprep/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/fmriprep/cli/workflow.py", line 115, in build_workflow
    retval["workflow"] = init_fmriprep_wf()
  File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/fmriprep/workflows/base.py", line 94, in init_fmriprep_wf
    single_subject_wf = init_single_subject_wf(subject_id)
  File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/fmriprep/workflows/base.py", line 655, in init_single_subject_wf
    workflow.connect([
  File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/nipype/pipeline/engine/workflows.py", line 161, in connect
    self._check_nodes(newnodes)
  File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/nipype/pipeline/engine/workflows.py", line 769, in _check_nodes
    raise IOError('Duplicate node name "%s" found.' % node.name)
OSError: Duplicate node name "bold_ses_1_task_mid_run_01_wf" found.

I did not get an error like this; maybe try using a new workdir?

Same error, unfortunately. I’ll mess around with it later.

What are the contents of the filter file?

{
    "fmap": {"datatype": "fmap", "session": "1"},
    "bold": {"datatype": "func", "session": "1"},
    "t1w": {"datatype": "anat", "suffix": "T1w", "session": "1"}
}

I’ve been using the same filter file in all of the fmriprep calls.

Tried again today and it all worked - no errors yet. fmriprep is running! Will report back after running first and second levels…

@Steven Reran using 23.2.0 with syn, and still seeing activation outside of the brain. I’m going to check brain masks one at a time, but what would you check next? The t-map extends far outside of the brain and looks like the weirdly shaped GLM output I posted a few weeks ago.

Is this now a nilearn related issue?

Hi @katseitz,

That GLM output looks mainly like noise. I would make sure the GLM is set up properly. You can also explicitly define a brain mask in Nilearn so the model only looks at signals within the brain (or a region of interest).
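A sketch of what I mean by that last point, using either nilearn’s MNI152 brain mask or the study-wide intersection mask from earlier (the mask choice and filename here are assumptions on my part):

from nilearn.datasets import load_mni152_brain_mask
from nilearn.glm.second_level import SecondLevelModel

# Either nilearn's MNI152 brain mask or the intersection of the subject masks
# (e.g., the "group_brain_mask.nii.gz" built in the earlier sketch).
brain_mask = load_mni152_brain_mask()

# Passing mask_img restricts the second-level GLM to voxels inside this mask;
# .fit() and .compute_contrast() are called the same way as before.
second_level_model = SecondLevelModel(mask_img=brain_mask, smoothing_fwhm=6.0)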

Best,
Steven