Summary of what happened:
Dear experts,
I would like to use pyAFQ to segment tractography results obtained with MRtrix. As far as I understand, MRtrix outputs its tractography results in the .tck format, and pyAFQ cannot handle .tck files. Is this correct?
In addition, I tried to use nibabel's tck2trk.py script to convert the tck file to a trk file, but it did not work. I typed the following command, but no trk file appears in the working directory (I have confirmed that the input file is present there).
I suspect this may be a simple problem. It is frustrating that I cannot solve it on my own, but I would love to get some advice!
Command used (and if a helper script was used, a link to the helper script or the command generated):
python3.10 /home/brain/.local/lib/python3.10/site-packages/nibabel/cmdline/tck2trk.py sub-BP0061_CSD_Prob_ACT_5000000_tractography.tck sub-BP0061_CSD_Prob_ACT_5000000_tractography.trk
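(If I read nibabel's usage message correctly, tck2trk expects a reference anatomical image as its first positional argument, which my command omits. For reference, here is the equivalent conversion sketched with nibabel's streamlines API; the file names follow my data, and the choice of reference image is an assumption:)

import nibabel as nib
from nibabel.streamlines import Field, TrkFile
from nibabel.orientations import aff2axcodes

# Load the MRtrix tractogram (.tck stores streamlines in RAS+ mm,
# but carries no voxel-grid information of its own).
tck = nib.streamlines.load("sub-BP0061_CSD_Prob_ACT_5000000_tractography.tck")

# A reference anatomical image supplies the spatial header that .trk requires.
# Assumed reference; it must share the tractogram's space.
anat = nib.load("sub-BP0061_T1w.nii")

header = {
    Field.VOXEL_TO_RASMM: anat.affine.copy(),
    Field.VOXEL_SIZES: anat.header.get_zooms()[:3],
    Field.DIMENSIONS: anat.shape[:3],
    Field.VOXEL_ORDER: "".join(aff2axcodes(anat.affine)),
}

trk = TrkFile(tck.tractogram, header=header)
nib.streamlines.save(trk, "sub-BP0061_CSD_Prob_ACT_5000000_tractography.trk")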
Hi @Printemps,
I believe that the answer to your first question is "no, that's incorrect".
That is, pyAFQ uses DIPY's all-purpose load_tractogram
function, which also takes tck files as input (see also this example). See this page for instructions on using tractography from another pipeline, and see this example for more information about how pyAFQ interacts with BIDS-organized datasets. And let us know if something isn't working as expected.
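For example, something like this should work (a minimal sketch; the file names are placeholders, and since the tck format stores no reference grid you must pass a matching anatomical image, whereas for a trk file you could pass "same"):

from dipy.io.streamline import load_tractogram

# .tck files carry no spatial reference of their own, so one must be
# supplied; any NIfTI in the same space as the tractogram works.
sft = load_tractogram(
    "sub-BP0061_dwi_tractography.tck",  # placeholder tractogram
    "sub-BP0061_dir-AP_dwi.nii",        # placeholder reference image
)
print(len(sft.streamlines))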
Cheers,
Ariel
Dear @Ariel_Rokem ,
Thank you for your kind reply.
I downloaded the stanford_hardi sample data and replaced '~/AFQ_data/stanford_hardi/derivatives/my_tractography/sub-01/ses-01/dwi/sub-01_ses-01_dwi_tractography.trk'
with the tck file from our data, and pyAFQ completed without problems. From this I understand that pyAFQ can read both tck and trk.
However, I continue to have problems with pyAFQ not recognizing our data.
Following the pyAFQ instructions, I executed the command below to create a GroupAFQ object. I then got the following warning message:
my_afq = GroupAFQ(
    bids_path,
    preproc_pipeline='preprocess',
    bundle_info=bundle_info,
    import_tract={
        "suffix": "tractography",
        "scope": "my_tractography"
    },
    segmentation_params={'nb_streamlines': 10000})
Out:
WARNING: AFQ: No dwi found for subject BP0061 and session 01. Skipping.
Also, my_afq.export_all() stops with an IndexError: list index out of range.
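One way to narrow this down is to query the layout directly with pybids (a diagnostic sketch; pyAFQ discovers files through BIDS queries, and the exact entities below are my guesses):

from bids import BIDSLayout

# Index the raw dataset together with everything under derivatives/.
layout = BIDSLayout("pyAFQ_test9", derivatives=True)

# Does the indexer see the subject's dwi data and the imported tractography?
print(layout.get(subject="BP0061", suffix="dwi", extension=".nii"))
print(layout.get(suffix="tractography", scope="my_tractography"))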
I have carefully checked that the data are placed according to the BIDS format and checked the contents of dataset_description.json, but what am I doing wrong?
The dataset I created is shown in tree format below. I would be grateful for any advice.
Sincerely yours,
pyAFQ_test9/
├── CHANGES
├── README
├── code
├── dataset_description.json
├── derivatives
│ ├── freesurfer
│ │ ├── dataset_description.json
│ │ └── sub-BP0061
│ │ ├── label
│ │ ├── mri
│ │ ├── scripts
│ │ ├── stats
│ │ ├── surf
│ │ ├── tmp
│ │ └── trash
│ ├── my_tractography
│ │ ├── dataset_description.json
│ │ └── sub-BP0061
│ │ └── ses-01
│ │ └── dwi
│ │ └── sub-BP0061_dwi_tractography.tck
│ └── preprocess
│ ├── dataset_description.json
│ └── sub-BP0061
│ └── ses-01
│ └── dwi
│ ├── aparc+aseg.mgz
│ ├── sub-BP0061_5tt.mif
│ ├── sub-BP0061_AD_AP.mif
│ ├── sub-BP0061_APb0_all.nii.gz
│ ├── sub-BP0061_CSF_FOD_AP.mif
│ ├── sub-BP0061_FA_AP.mif
│ ├── sub-BP0061_GM_FOD_AP.mif
│ ├── sub-BP0061_MD_AP.mif
│ ├── sub-BP0061_PDD_AP.mif
│ ├── sub-BP0061_RD_AP.mif
│ ├── sub-BP0061_RF_CSF_AP.txt
│ ├── sub-BP0061_RF_GM_AP.txt
│ ├── sub-BP0061_RF_WM_AP.txt
│ ├── sub-BP0061_RF_voxels_AP.mif
│ ├── sub-BP0061_T1w.nii
│ ├── sub-BP0061_T1w_brain.nii.gz
│ ├── sub-BP0061_T1w_brain_mask.nii.gz
│ ├── sub-BP0061_WM_FOD_AP.mif
│ ├── sub-BP0061_b0_1_AP_aftereddy.nii.gz
│ ├── sub-BP0061_b0_1_AP_aftereddy_brain.nii.gz
│ ├── sub-BP0061_b0_1_AP_aftereddy_brain_mask.nii.gz
│ ├── sub-BP0061_b0_AP1_brain.nii.gz
│ ├── sub-BP0061_b0_AP1_brain_mask.nii.gz
│ ├── sub-BP0061_b0_AP_1.nii.gz
│ ├── sub-BP0061_b0_AP_2.nii.gz
│ ├── sub-BP0061_dir-AP_aftereddy.eddy_cnr_maps.nii.gz
│ ├── sub-BP0061_dir-AP_aftereddy.eddy_command_txt
│ ├── sub-BP0061_dir-AP_aftereddy.eddy_movement_over_time
│ ├── sub-BP0061_dir-AP_aftereddy.eddy_movement_rms
│ ├── sub-BP0061_dir-AP_aftereddy.eddy_outlier_free_data.nii.gz
│ ├── sub-BP0061_dir-AP_aftereddy.eddy_outlier_map
│ ├── sub-BP0061_dir-AP_aftereddy.eddy_outlier_n_sqr_stdev_map
│ ├── sub-BP0061_dir-AP_aftereddy.eddy_outlier_n_stdev_map
│ ├── sub-BP0061_dir-AP_aftereddy.eddy_outlier_report
│ ├── sub-BP0061_dir-AP_aftereddy.eddy_parameters
│ ├── sub-BP0061_dir-AP_aftereddy.eddy_post_eddy_shell_PE_translation_parameters
│ ├── sub-BP0061_dir-AP_aftereddy.eddy_post_eddy_shell_alignment_parameters
│ ├── sub-BP0061_dir-AP_aftereddy.eddy_restricted_movement_rms
│ ├── sub-BP0061_dir-AP_aftereddy.eddy_rotated_bvecs
│ ├── sub-BP0061_dir-AP_aftereddy.eddy_values_of_all_input_parameters
│ ├── sub-BP0061_dir-AP_aftereddy.mif
│ ├── sub-BP0061_dir-AP_aftereddy.nii.gz
│ ├── sub-BP0061_dir-AP_aftereddy_anatalign.mif
│ ├── sub-BP0061_dir-AP_dwi.bval
│ ├── sub-BP0061_dir-AP_dwi.bvec
│ ├── sub-BP0061_dir-AP_dwi.nii
│ ├── sub-BP0061_dt_AP.mif
│ ├── sub-BP0061_dwi2anatalign_AP.mat
│ ├── sub-BP0061_dwi2anatalign_AP.nii.gz
│ ├── sub-BP0061_dwi2anatalign_AP_fast_wmedge.nii.gz
│ ├── sub-BP0061_dwi2anatalign_AP_fast_wmseg.nii.gz
│ ├── sub-BP0061_dwi2anatalign_AP_init.mat
│ └── sub-BP0061_dwi2anatalign_AP_mrtrix.mat
├── participants.json
├── participants.tsv
├── sourcedata
└── sub-BP0061
├── anat
│ └── sub-BP0061_T1w.nii
└── dwi
├── sub-BP0061_dir-AP_dwi.bval
├── sub-BP0061_dir-AP_dwi.bvec
└── sub-BP0061_dir-AP_dwi.nii
This sometimes happens when the contents of the derivatives' "dataset_description.json" files are not perfectly aligned. In particular, you want to make sure that the name of the derivative dataset (i.e., the name of the folder in which it is stored) and the "Name" field of the PipelineDescription in that file match precisely:
{
"Name": "my_tractography",
...
"PipelineDescription": {
"Name": "my_tractography",
...
},
}
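If it helps, here is a small sketch that checks this consistency for every derivative pipeline at once (the dataset root is taken from your tree; everything else is just one way to express the rule above):

import json
from pathlib import Path

bids_path = Path("pyAFQ_test9")  # dataset root from the tree above

# The folder name, the "Name" field, and PipelineDescription.Name
# should all match for each derivative pipeline.
for desc in sorted((bids_path / "derivatives").glob("*/dataset_description.json")):
    meta = json.loads(desc.read_text())
    folder = desc.parent.name
    pipeline = meta.get("PipelineDescription", {}).get("Name")
    if not (folder == meta.get("Name") == pipeline):
        print(f"Mismatch in {desc}:")
        print(f"  folder name:              {folder!r}")
        print(f"  Name:                     {meta.get('Name')!r}")
        print(f"  PipelineDescription.Name: {pipeline!r}")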
Dear @Ariel_Rokem,
It turns out that the ID specified for “Subject” in dataset_description.json under bids_path was incorrect.
I now understand the importance of dataset_description.json in pyAFQ.
Thanks for the important advice!