Using an RPE fieldmap image from a Philips scanner: specifying directions

Summary of what happened:

I am trying to process a single subject using QSIprep (and then pyAFQ). I have a T1 image, a DTI image, and an RPE image. When I processed this subject using just the T1 and DTI images, everything worked and the outputs (sub-02.dwiqc.json, as viewed in dmriprep-viewer) looked good. However, I have RPE images and would like to put them to use and do susceptibility distortion correction!

I am using a Philips scanner, and I noticed that both the DTI json file and the RPE json file have "PhaseEncodingDirection": "j". Following Chris Rorden's advice (BIDS, FMRIPREP: specify phase encoding direction with respect to qform orientation in nifti header? - #3 by joszim), for the RPE image I changed "PhaseEncodingDirection": "j" to "j-" and renamed the RPE image to "sub-02_dir-AP_epi.nii" (and the corresponding json file to "sub-02_dir-AP_epi.json"). Topup did run, according to sub-02.html. However, the outputs (sub-02.dwiqc.json, as viewed in dmriprep-viewer) look TERRIBLE. I checked with one of our scanner techs, who confirmed that the exam card is correct and that the images themselves are indeed distorted in equal and opposite directions (and the images look quite good)... so the problem has to be something that I did. I am hoping that someone might be able to help me figure out how to successfully use my RPE images. If I have a j-direction DTI image and a j-direction RPE image from a Philips scanner, how do I proceed?

Command used (and if a helper script was used, a link to the helper script or the command generated):

qsiprep-docker \Users\harrioem\Desktop\testdataset \Users\harrioem\Desktop\testdataset\derivatives -w \Users\harrioem\Desktop\testworkdir --output-resolution 1.3 --fs-license-file \Users\harrioem\Applications\freesurfer\720 --recon-spec pyafq_tractometry --recon-input \Users\harrioem\Desktop\testdataset\derivatives\qsiprep

Version:

QSIprep 0.19.1

Environment (Docker, Singularity, custom installation):

I am on a Windows computer, using the Docker Desktop app and typing my qsiprep-docker command into a Command Prompt window.

Data formatted according to a validatable standard? Please provide the output of the validator:

They are in BIDS format!! I validated the data originally using the BIDS validator website, and validation also ran every time I used my qsiprep-docker command.

Relevant log outputs (up to 20 lines):

Everything ran fine; there were no errors in the command prompt or anything. The problem is that sub-02_dwiqc.json looks bad, especially the FA map. The ventricles look massive, there aren't many colors in the FA map, and the colors that are there are fuzzy. I am not sure I can post a screenshot due to participant privacy, etc.

Any thoughts anyone has to share would be so greatly appreciated. Also, if there is any more information I can provide, please let me know. Thank you so much!

Hi @emilymharriott and welcome to neurostars!

Just to confirm, are you sure that sub-02_dir-AP_epi.nii is the file that was used for the fmap? That is, is there something like an IntendedFor or B0FieldSource/Identifier in the fmap json that links it to the DWI?

If both your DWI and fieldmap are in the same phase encoding direction, that could explain why the "corrected" results look bad. There really isn't a way to use that fmap if it is not truly RPE. However, there are alternatives you can look into.

First (and easiest) is using the --use-syn-sdc --force-syn --ignore fieldmaps arguments in QSIPrep, which will use a "fieldmapless" warp (based on the registration between the b0 and an intensity-inverted T1 - https://www.frontiersin.org/articles/10.3389/fninf.2017.00017/full).
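
With your dataset, that would look something like the following (just a sketch based on the command you posted, with the recon options left off for brevity):

qsiprep-docker \Users\harrioem\Desktop\testdataset \Users\harrioem\Desktop\testdataset\derivatives -w \Users\harrioem\Desktop\testworkdir --output-resolution 1.3 --fs-license-file \Users\harrioem\Applications\freesurfer\720 --use-syn-sdc --force-syn --ignore fieldmaps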

Second is trying Synb0-DISCO: GitHub - MASILab/Synb0-DISCO: Distortion correction of diffusion weighted MRI without reverse phase-encoding scans or field-maps.

Following the instructions on the repo, you would synthesize the RPE b0, name it as you did in the BIDS directory, and make a minimal JSON for it that includes a PhaseEncodingDirection opposite your DWI's. It is going to be incorporated into QSIPrep in a subsequent release, but is not there yet. Here is a Python wrapper for it:

from dipy.align.imaffine import (AffineMap,
                                 AffineRegistration,
                                 MutualInformationMetric,
                                 transform_centers_of_mass)
from dipy.align.transforms import (AffineTransform3D,
                                   RigidTransform3D)
from dipy.segment.tissue import TissueClassifierHMRF
from scipy.ndimage import gaussian_filter
import logging
import os
import nibabel as nib
import numpy as np
import warnings
# Disable tensorflow warnings
with warnings.catch_warnings():
    os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
    warnings.simplefilter("ignore")
    from dipy.nn.synb0 import Synb0
    
# NOTE: transform_dwi is used by register_image below but was not defined
# in the original snippet; this minimal helper applies the estimated
# mapping to each 3D volume of a 4D DWI.
def transform_dwi(mapper, static, dwi):
    trans_dwi = np.zeros(static.shape + (dwi.shape[-1],))
    for i in range(dwi.shape[-1]):
        trans_dwi[..., i] = mapper.transform(dwi[..., i])
    return trans_dwi

def register_image(static, static_grid2world, moving, moving_grid2world,
                   transformation_type='affine', dwi=None, fine=False):
    """
    Register a moving image to a static image using either rigid or affine
    transformations. If a DWI (4D) is provided, it applies the transformation
    to each volume.

    Parameters
    ----------
    static : ndarray
        The static image volume to which the moving image will be registered.
    static_grid2world : ndarray
        The grid-to-world (vox2ras) transformation associated with the static
        image.
    moving : ndarray
        The moving image volume that needs to be registered to the static image.
    moving_grid2world : ndarray
        The grid-to-world (vox2ras) transformation associated with the moving
        image.
    transformation_type : str, optional
        The type of transformation ('rigid' or 'affine'). Default is 'affine'.
    dwi : ndarray, optional
        Diffusion-weighted imaging data (if applicable). Default is None.
    fine : bool, optional
        Whether to use fine or coarse settings for the registration.
        Default is False.

    Raises
    ------
    ValueError
        If the transformation_type is neither 'rigid' nor 'affine'.

    Returns
    -------
    ndarray or tuple
        If `dwi` is None, returns transformed moving image and transformation
        matrix.
        If `dwi` is not None, returns transformed DWI and transformation matrix.
    """

    if transformation_type not in ['rigid', 'affine']:
        raise ValueError('Transformation type not available in Dipy')

    # Set all parameters for registration
    nbins = 64 if fine else 32
    params0 = None
    sampling_prop = None
    level_iters = [250, 100, 50, 25] if fine else [50, 25, 5]
    sigmas = [8.0, 4.0, 2.0, 1.0] if fine else [8.0, 4.0, 2.0]
    factors = [8, 4, 2, 1.0] if fine else [8, 4, 2]
    metric = MutualInformationMetric(nbins, sampling_prop)
    reg_obj = AffineRegistration(metric=metric, level_iters=level_iters,
                                 sigmas=sigmas, factors=factors, verbosity=0)

    # First, align the centers of mass of both volumes
    c_of_mass = transform_centers_of_mass(static, static_grid2world,
                                          moving, moving_grid2world)
    # Then, rigid transformation (translation + rotation)
    transform = RigidTransform3D()
    rigid = reg_obj.optimize(static, moving, transform, params0,
                             static_grid2world, moving_grid2world,
                             starting_affine=c_of_mass.affine)

    if transformation_type == 'affine':
        # Finally, affine transformation (translation + rotation + scaling)
        transform = AffineTransform3D()
        affine = reg_obj.optimize(static, moving, transform, params0,
                                  static_grid2world, moving_grid2world,
                                  starting_affine=rigid.affine)

        mapper = affine
        transformation = affine.affine
    else:
        mapper = rigid
        transformation = rigid.affine

    if dwi is not None:
        trans_dwi = transform_dwi(mapper, static, dwi)
        return trans_dwi, transformation
    else:
        return mapper.transform(moving), transformation

def synb0_synthesis(b0, b0_mask, t1, t1_mask, template, out_b0):

    logging.info('The usage of synthetic b0 is not fully tested. '
                 'Be careful when using it.')
    
    # Load template
    template_img = nib.load(template)
    template_data = template_img.get_fdata()

    # Load b0
    b0_img = nib.load(b0)
    b0_skull_data = b0_img.get_fdata()
    b0_mask_img = nib.load(b0_mask)
    b0_mask_data = b0_mask_img.get_fdata()

    # Load t1
    t1_img = nib.load(t1)
    t1_skull_data = t1_img.get_fdata()
    t1_mask_img = nib.load(t1_mask)
    t1_mask_data = t1_mask_img.get_fdata()

    # Apply masks
    b0_bet_data = np.zeros(b0_skull_data.shape)
    b0_bet_data[b0_mask_data > 0] = b0_skull_data[b0_mask_data > 0]
    t1_bet_data = np.zeros(t1_skull_data.shape)
    t1_bet_data[t1_mask_data > 0] = t1_skull_data[t1_mask_data > 0]

    # Crude estimation of the WM mean intensity and normalization
    logging.info('Estimating WM mean intensity')
    hmrf = TissueClassifierHMRF()
    t1_bet_data = gaussian_filter(t1_bet_data, 2)
    _, final_segmentation, _ = hmrf.classify(t1_bet_data, 3, 0.25,
                                             tolerance=1e-4, max_iter=5)
    avg_wm = np.mean(t1_skull_data[final_segmentation == 3])
    t1_skull_data /= avg_wm
    t1_skull_data *= 110

    # SyNB0 works only in a standard space, so we need to register the images
    logging.info('Registering images')
    # Use the BET image for registration
    t1_bet_to_b0, t1_bet_to_b0_transform = register_image(b0_bet_data,
                                                          b0_img.affine,
                                                          t1_bet_data,
                                                          t1_img.affine,
                                                          fine=True)
    affine_map = AffineMap(t1_bet_to_b0_transform,
                           b0_skull_data.shape, b0_img.affine,
                           t1_skull_data.shape, t1_img.affine)
    t1_skull_to_b0 = affine_map.transform(t1_skull_data.astype(np.float64))

    # Then register to MNI (using the BET again)
    _, t1_bet_to_b0_to_mni_transform = register_image(template_data,
                                                      template_img.affine,
                                                      t1_bet_to_b0,
                                                      b0_img.affine,
                                                      fine=True)
    affine_map = AffineMap(t1_bet_to_b0_to_mni_transform,
                           template_data.shape, template_img.affine,
                           b0_skull_data.shape, b0_img.affine)

    # But for prediction, we want the skull
    b0_skull_to_mni = affine_map.transform(b0_skull_data.astype(np.float64))
    t1_skull_to_mni = affine_map.transform(t1_skull_to_b0.astype(np.float64))

    # SynB0
    logging.info('Running SyN-B0')
    SyNb0 = Synb0()
    rev_b0 = SyNb0.predict(b0_skull_to_mni, t1_skull_to_mni)
    # Transform back
    rev_b0 = affine_map.transform_inverse(rev_b0.astype(np.float64))
    # Save out
    dtype = b0_img.get_data_dtype()
    nib.save(nib.Nifti1Image(rev_b0.astype(dtype), b0_img.affine),
             out_b0)

## RUN IT
template = "/PATH/TO/mni_icbm152_t1_tal_nlin_asym_09c_mask_2_5.nii.gz"
t1 = "/PATH/TO/T1w.nii"
t1_mask = "/PATH/TO/T1_mask.nii"
b0 = "/PATH/TO/b0.nii"
b0_mask = "/PATH/TO/b0_mask.nii"
out_b0 = "/PATH/TO/RPE_b0.nii"
synb0_synthesis(b0, b0_mask, t1, t1_mask, template, out_b0)
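
Once the synthesized b0 is saved into your fmap/ folder and named like a BIDS epi fieldmap, a minimal sidecar for it could look something like this (a sketch assuming your DWI is j; the validator may ask for additional fields):

{
    "PhaseEncodingDirection": "j-",
    "IntendedFor": "dwi/sub-02_dwi.nii"
}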

Best,
Steven

Yes! At the end of sub-02_dir-AP_epi.json, I added a line that says "IntendedFor": "dwi/sub-02_dwi.nii", as per a neurostars forum post that I saw.

Our scan tech who designed our sequences looked at the images and visually confirmed they are in different directions despite what the json files say - he said one was APA and one was APP, and ultimately that this should work? (Thank you for sending these alternatives!! Hoping to get this to work, but if it doesn't...)

Could it be that you have it flipped around (i.e., DWI is j- / dir-AP and fmap is j / dir-PA, or vice versa)? A screenshot of the before/after SDC would be helpful if you are indeed able to share it.

Best,
Steven

Good point, not sure!! I thought based on the Chris Rorden post that I linked above that I could pick whichever one to change from j to j- ... but I will try it (currently, DWI = j and RPE = j-; I will switch to DWI = j- and RPE = j) and get back to you!! Not sure I can post a screenshot, but I will double check that, too.

Thank you so much!!

Update: I tried putting the DW image in j- and the RPE image in j (the opposite of what I tried yesterday, as described above) and the outputs still look terrible - exactly the same as when the DW image was in j and the RPE image was in j-. I verified again with someone else who has IRB etc. approval to look at our data, and he confirmed that the images are in opposite directions and this should work, but he does not know enough about QSIprep to know why it isn't working. Is QSIprep reading something else in the .json file (beyond the filename and the PhaseEncodingDirection, both of which I changed) that is causing this mess?

Hi @emilymharriott, you should remove the --recon-input argument from this call. It is only needed when you want to bypass preprocessing.

I see you don't have --skip-bids-validation, so your IntendedFors should be OK.

What are the TotalReadoutTimes in the fieldmap and the DWI? These are also used by TOPUP.
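
(For reference, BIDS computes TotalReadoutTime as (ReconMatrixPE - 1) * EffectiveEchoSpacing, so different values usually mean the two acquisitions differ in echo spacing or matrix size along the phase encoding axis.)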

It might also be useful if you could show what your BIDS data for sub-02 looks like with tree.

Oh, this is great to know! Thank you so much for this tip. I will remove it. I'm very much still learning...

PHEW! Great! Thank you! (Correct, I did not skip BIDS validation.)

I looked at the corresponding .json files for the DWI and fmap images (which are located in my raw data folder, which I called \testdataset) to get these numbers.
"TotalReadoutTime" from sub-02_dwi.json in /dwi/: 0.0325488
"TotalReadoutTime" from sub-02_dir-PA_epi.json in /fmap/: 0.0244386

Does this work?



Microsoft Windows [Version 10.0.19045.3803]
(c) Microsoft Corporation. All rights reserved.

C:\Users\harrioem>tree /F \Users\harrioem\Desktop\testdataset\sub-02
Folder PATH listing for volume Windows
Volume serial number is 00000045 7405:27A8
C:\USERS\HARRIOEM\DESKTOP\TESTDATASET\SUB-02
├───anat
│       sub-02_T1w.nii
│
├───dwi
│       sub-02_dwi.bval
│       sub-02_dwi.bvec
│       sub-02_dwi.json
│       sub-02_dwi.nii
│
└───fmap
        sub-02_dir-PA_epi.json
        sub-02_dir-PA_epi.nii


C:\Users\harrioem>

Hmm, maybe there are BIDS issues; the T1w should have a corresponding sidecar. Any luck finding the TotalReadoutTime?

The T1 does not need a sidecar JSON; it turns out there is no BIDS-required metadata for T1s.

Oops, I missed it! It's somewhat strange to have different total readout times for the dwi and the epi fmap; usually these are identical. What happens if you replace the TotalReadoutTime in the fmap epi with the value from the dwi and rerun processing with a clean working directory?

I don't have a corresponding .json file for the T1 (as you can see in tree), but I can go back to the cloud where we store our raw data to look for one!

I will wipe everything so I start with a clean working directory, restart my computer, and try this.
So the TotalReadoutTime for the dwi will be 0.0325488 (stays the same), and the TotalReadoutTime for the epi will be 0.0325488 as well (this is what I change).

What should I do about the PhaseEncodingDirections? Both were originally j (in the original .json files, before I edited anything). Should I return to the original j and j, or change one to be j-?

One should be j- and one should be j. I think it is OK to guess which is the j- even though there are different TotalReadoutTimes, but we'll see!

Okay, great! I have this running:

qsiprep-docker \Users\harrioem\Desktop\testdataset \Users\harrioem\Desktop\testdataset\derivatives -w \Users\harrioem\Desktop\testworkdir --output-resolution 1.3 --fs-license-file \Users\harrioem\Applications\freesurfer\720 --recon-spec pyafq_tractometry

Where the TotalReadoutTimes for both the DW image and the RPE image are 0.0325488, and in the .json files the DW image is j and the RPE image is j-.
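
In other words, the relevant fields in the two sidecars are now:

sub-02_dwi.json:        "PhaseEncodingDirection": "j",  "TotalReadoutTime": 0.0325488
sub-02_dir-PA_epi.json: "PhaseEncodingDirection": "j-", "TotalReadoutTime": 0.0325488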

I will report back as soon as it is done! Thank you so much for your help!

It finished running!
And the log says: IMPORTANT:cli:QSIPrep finished without errors.

Unfortunately, the outputs I'm looking at in dmriprep-viewer still look bad (they look the same as they did before we changed the TotalReadoutTime)... do you have any other thoughts?

If it helps, I can try to describe the outputs from sub-02_dwiqc.json as seen in dmriprep-viewer. In the first set of images (where you slide through the slices to look at motion) and on the FA map, the ventricles look enormous (and much bigger than they do on the raw T1 and in the QSIprep run without the fmap). On the carpet plot, the bottom ~10 rows or so are all yellow, while the remaining rows are mostly black with some purple spots. In the q-space sampling plot, the red dots are mostly around the outline of the sphere (few in the middle). In the FA map, the red (corpus callosum) is more orange, the blue (corticospinal tract) is thin and short, and the green (SLF) is also thin and short. Beyond those thin and short tracts, the three colors all kind of blend together. Does that help at all?

Any thoughts you might have on what to try next would be so greatly appreciated.

Thank you so very much for your help!