pyAFQ with anatomical constraints

Summary of what happened:

Hi, I am new to using QSIRecon and I am working out which tractography workflow is best for my data. I would like to use pyafq_tractometry, as it has predefined tracts (no need to create my own masks) and can handle group data. However, I would also like the tractography to be run with anatomical constraints (as in mrtrix_singleshell_ss3t_ACT-hsvs). It seems that the pyafq_tractometry workflow uses an automatic MRtrix command without using the T1 image for anatomical constraints. Is there any way to do this?


Hi @MRI_New, and welcome to neurostars!

You can make your own pipeline that feeds HSVS-constrained tractography into pyAFQ. Copy the relevant parts of the HSVS and pyAFQ yamls into your own yaml and pass that as your recon spec.
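
Roughly, the combined spec would look like the sketch below (the name is just illustrative; you would paste in the real node definitions from the shipped yaml files):

name: ss3t_hsvs_pyafq            # illustrative name, use whatever you like
space: T1w
anatomical:
- mrtrix_5tt_hsvs                # HSVS five-tissue-type segmentation
nodes: []                        # paste in: the gradient-selection, CSD and tractography
                                 # nodes from mrtrix_singleshell_ss3t_ACT-hsvs.yaml,
                                 # followed by the pyafq_tractometry node from
                                 # mrtrix_multishell_msmt_pyafq_tractometry.yaml, with the
                                 # pyafq node's "input" pointing at the tractography
                                 # node's "name"

You would then point --recon-spec at this file.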

Best,
Steven

Thank you for getting back to me so quickly!

To check: does pyafq_tractometry.yaml run tractography itself, or do you have to run separate tractography first? I cannot see any reference to tractography in the pyafq yaml. For context, I am working with single-shell DTI data from a patient cohort.

If the pyafq yaml does compute tractography, what method does it use?

If the pyafq yaml does not compute the tractography, what is the standard combination? And if it is not done by combining yamls, how would you run tractography and then tractometry separately via --recon-spec?

I could take the beginning of mrtrix_singleshell_ss3t_ACT-hsvs.yaml and the end of mrtrix_multishell_msmt_pyafq_tractometry.yaml and merge them into a single yaml, as you suggest. However, I would like to make sure that combining ACT tractography with AFQ tractometry is not too novel or unstandardised an approach.

This is my first venture into DTI analysis, so I am grateful for any advice.

Hi @MRI_New,

The pyafq_tractometry workflow uses pyAFQ itself to run the tractography, not MRtrix.

You will find the description of pyAFQ's default tractography in this paper (and its supplement) helpful: Evaluating the Reliability of Human Brain White Matter Tractometry - PMC
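
For comparison, the tractometry node in the stock pyafq_tractometry.yaml looks roughly like this (a sketch from memory, with most of the parameters omitted):

-   action: pyafq_tractometry
    software: pyAFQ
    input: qsirecon                    # the preprocessed DWI; no upstream tractography node
    parameters:
        use_external_tracking: false   # pyAFQ runs its own tracking
        odf_model: CSD                 # probabilistic, CSD-based local tracking
        directions: prob
        tracker: local
        # ...remaining parameters left at their defaults

So no anatomical constraint is applied there; ACT/HSVS comes in through the MRtrix tractography nodes.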

I have done MSMT-HSVS → pyAFQ via QSIRecon, and it works really nicely.

Best,
Steven

Thank you - so you have used multi-shell MRtrix with ACT, put that through pyAFQ for tractometry, and had no issues? I would therefore assume the same would hold for single shell?

If you can foresee any issues - please let me know. I am very grateful for your help today!

The yaml I have worked out would be:

anatomical:
- mrtrix_5tt_hsvs
name: mrtrix_singleshell_ss3t_hsvs
nodes:
-   action: select_gradients
    input: qsirecon
    name: select_single_shell
    parameters:
        requested_shells:
            - 0
            - highest
        bval_distance_cutoff: 100
-   action: csd
    input: select_single_shell
    name: ss3t_csd
    parameters:
        fod:
            algorithm: ss3t
        mtnormalize: true
        response:
            algorithm: dhollander
    qsirecon_suffix: MRtrix3_fork-SS3T_act-HSVS
    software: MRTrix3
-   action: tractography
    input: ss3t_csd
    name: track_ifod2
    parameters:
        method_5tt: hsvs
        sift2: {}
        tckgen:
            algorithm: iFOD2
            backtrack: true
            crop_at_gmwmi: true
            max_length: 250
            min_length: 30
            power: 0.33
            quiet: true
            select: 10000000.0
        use_5tt: true
        use_sift2: true
    qsirecon_suffix: MRtrix3_fork-SS3T_act-HSVS
    software: MRTrix3
-   action: pyafq_tractometry
    input: track_ifod2
    name: pyafq_tractometry
    parameters:
        b0_threshold: 50
        brain_mask_definition: ''
        bundle_info: null
        clean_rounds: 5
        clip_edges: false
        csd_lambda_: 1
        csd_response: ''
        csd_sh_order: ''
        csd_tau: 0.1
        directions: prob
        dist_to_atlas: 4
        dist_to_waypoint: ''
        distance_threshold: 3
        export: all
        filter_b: true
        filter_by_endpoints: true
        greater_than: 50
        gtol: 0.01
        import_tract: ''
        length_threshold: 4
        mapping_definition: ''
        max_angle: 30.0
        max_bval: ''
        max_length: 250
        min_bval: ''
        min_length: 50
        min_sl: 20
        model_clust_thr: 1.25
        n_points: 100
        n_points_bundles: 40
        n_points_indiv: 40
        n_seeds: 1
        nb_points: false
        nb_streamlines: false
        odf_model: CSD
        parallel_segmentation: '{''n_jobs'': -1, ''engine'': ''joblib'', ''backend'':
            ''loky''}'
        presegment_bundle_dict: null
        presegment_kwargs: '{}'
        prob_threshold: 0
        profile_weights: gauss
        progressive: true
        pruning_thr: 12
        random_seeds: false
        reduction_thr: 25
        refine: false
        reg_algo: ''
        reg_subject_spec: power_map
        reg_template_spec: mni_T1
        return_idx: false
        rm_small_clusters: 50
        rng: ''
        rng_seed: ''
        robust_tensor_fitting: false
        roi_dist_tie_break: false
        save_intermediates: ''
        sbv_lims_bundles: '[None, None]'
        sbv_lims_indiv: '[None, None]'
        scalars: '[''dti_fa'', ''dti_md'']'
        seed_mask: ''
        seed_threshold: 0
        seg_algo: AFQ
        sphere: ''
        stat: mean
        step_size: 0.5
        stop_mask: ''
        stop_threshold: 0
        tracker: local
        use_external_tracking: true   # use the MRtrix streamlines from track_ifod2 rather than re-running tracking in pyAFQ
        virtual_frame_buffer: false
        viz_backend_spec: plotly_no_gif
        volume_opacity_bundles: 0.3
        volume_opacity_indiv: 0.3
    qsirecon_suffix: PYAFQ
    software: pyAFQ
space: T1w

Hi @MRI_New,

Looks good to me! If you won’t need the SIFT2 weights at all, then you can turn that off. It would save some time. Also, I think you can get away with a lower streamline count. Try 2 million at first and see how it looks.
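
For example, the tractography node could become something like this (a sketch of just those two changes; everything else stays as you have it, and if I remember the options right, setting use_sift2 to false and dropping the sift2 entry is all that is needed):

-   action: tractography
    input: ss3t_csd
    name: track_ifod2
    parameters:
        method_5tt: hsvs
        tckgen:
            algorithm: iFOD2
            backtrack: true
            crop_at_gmwmi: true
            max_length: 250
            min_length: 30
            power: 0.33
            quiet: true
            select: 2000000            # start with 2 million streamlines and check coverage
        use_5tt: true
        use_sift2: false               # skip SIFT2 if you will not need the weights
    qsirecon_suffix: MRtrix3_fork-SS3T_act-HSVS
    software: MRTrix3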

Best,
Steven