This is because when you non-linearly align your lesion to MNI space, some voxels fall in between and are therefore tagged with a value between 0 and 1 according to the proportion of involvement of that voxel in MNI space. You can threshold and binarise your lesion map at 0.5 (this would be the equivalent of doing the normalisation with nearest-neighbour interpolation).
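For example, with FSL's fslmaths this is a one-liner (the file names here are just placeholders):

```bash
# Threshold the normalised (probabilistic) lesion map at 0.5 and binarise it
# (placeholder file names).
fslmaths lesion_MNI.nii.gz -thr 0.5 -bin lesion_MNI_bin.nii.gz
```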
Thanks for the clarification! That makes perfect sense now. I’ll go ahead and threshold the lesion map at 0.5 as you suggested. I really appreciate your help with this!
I’m having trouble with the output of “Disconnectome maps”.
I drew the lesion on a T1 image that had been converted to the MNI152 template with the “Normalization” command, and put the lesion mask image into the “Disconnectome maps” command.
I get an “End!” notification and the message “Data properly written in [output folder]”, but no results are actually produced.
A folder named “log” is created, but it does not contain anything. Other image data such as NIfTI data are not output. Since no error code is generated, I cannot tell which part of the operation was incorrect.
Thank you for replying.
Whether the input and output folders are the same or different, an empty folder named “log” is created.
Is it correct to draw the lesion on the T1 registered to the MNI template (the output of the “Normalization” command) and then put it into “Disconnectome maps”?
Do I need to do any additional work?
By the way, I’m using an M3 MacBook Air running macOS Sonoma 14.6.1.
I am having trouble with Disconnectome maps, so I used FSL’s fslmaths to analyze the NIfTI data in MNI space showing the disrupted white matter fibers output by Tractotron.
I would like to know whether the following FSL analysis is valid (the “Disconnectome maps” command in BCBToolKit says the analysis was successful, but nothing is output):
1. Merge the multiple NIfTI files (maps in MNI space in which the impaired white matter fibers are indicated by values from 0 to 1) with the fslmaths function.
2. Average the merged data with the fslmaths function.
3. Threshold the averaged data at 0.5 with the fslmaths function.
Before step 1, I also used the fslswapdim function to flip images so that all lesions were on the same side, because my data include people with right-hemisphere damage and people with left-hemisphere damage. Is this procedure legitimate?
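To make the steps concrete, here is a rough sketch of what they might look like on the command line (file names are placeholders; fslmerge is used here for the merging step):

```bash
# 0. (before step 1) flip the left-right axis for subjects whose lesion is on
#    the other hemisphere, so all lesions end up on the same side
fslswapdim subjA_disco.nii.gz -x y z subjA_disco_flipped.nii.gz

# 1. merge the individual maps into one 4D file
fslmerge -t all_disco_4D.nii.gz subjA_disco_flipped.nii.gz subjB_disco.nii.gz

# 2. average across subjects
fslmaths all_disco_4D.nii.gz -Tmean mean_disco.nii.gz

# 3. threshold the average at 0.5
fslmaths mean_disco.nii.gz -thr 0.5 mean_disco_thr05.nii.gz
```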
Hey, sorry for the delay. Just back from holiday.
I’m sorry, but we added this atlas upon request from our reviewers and are not in charge of its maintenance. All I could find is this page: JHU DTI-based white-matter atlases. Good luck!
I’m sorry I am a bit confused with your request.
I can run the analysis locally if you have any trouble. Feel free to reach out at michel.thiebaut@gmail.com.
@sawai.neuroreha did you use the Normalize module with the folder of T1 images as input and then use the “Apply transformation to other” option with the folder of the lesion masks as input?
If so, that is a valid way to do it, so it should have worked. Have you verified that the normalized brains and masks align well?
We’ve been having issues with M3 laptops so maybe it could be that too.
When I have some time I will try to make a Docker image to simplify the BCBToolkit setup, hopefully that will help.
Thanks for replying to my message.
I did not check the “Apply transformation to other” box during Normalization processing. Is it correct to specify the folder where the lesion data is stored in this part?
Sorry I am not familiar with white matter fiber bundle analysis.
Yes, you should use “Apply transformation to other” (on the lesions folder) when you normalize your T1 images if you also need your lesion masks to be in the same space as your normalized T1s (unless you can re-segment the lesions on the normalized T1s, which would be better but time-consuming).
Normalize, by default, takes T1 images and aligns them to the MNI152 T1 template. Lesion masks are usually binary images, or at least they do not contain the same type of values as a T1 image (to begin with, they do not actually contain a brain). The normalization computes the transformation that gives your input image roughly the same size, shape, and alignment as the template, and to do that it needs to find similarities between the input and the template (a brain, its substructures, and landmarks). A lesion mask has no such similarity to the template and therefore cannot be aligned to it directly. That is likely why the disconnectome did not work: the lesions you gave it were probably completely misplaced (probably outside of the brain).
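If you already have the transformation computed from the T1, the same principle can also be applied outside the GUI. As a rough FSL-based sketch (the file and warp names are placeholders, and this assumes an FSL-style warp field rather than the one BCBToolkit produces internally):

```bash
# Sketch of the general principle with FSL (not the BCBToolkit pipeline):
# apply the warp estimated from the T1 to the lesion mask, using
# nearest-neighbour interpolation so the mask stays binary.
# "lesion.nii.gz" and "t1_to_MNI_warp.nii.gz" are placeholder names.
applywarp --ref=${FSLDIR}/data/standard/MNI152_T1_1mm.nii.gz \
          --in=lesion.nii.gz \
          --warp=t1_to_MNI_warp.nii.gz \
          --out=lesion_MNI.nii.gz \
          --interp=nn
```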
I’m experiencing the same issue with disconnection maps that was mentioned in previous posts. When I try running the disconnection maps using the lesion provided in the package, the software runs for about 2 seconds and then stops, leaving an empty log folder.
I’ve attempted this on several different computers, but I haven’t had any success.
Does anyone have any suggestions for troubleshooting or fixing this problem?
I finally received an M4 iMac. I’m afraid we now understand where the problem is coming from.
Apple changed the processor architecture, which now requires us to compile a specific version of BCBtoolkit for Apple M3 and M4. While this is possible in principle, we are currently struggling with one binary (track_vis) that we cannot make work at the moment. We’re working on a solution; thanks for your patience. Meanwhile, BCBtoolkit still works on M1 and Intel processors as well as on Linux. Kind regards, mich
Hello,
I’m new here, but I couldn’t seem to find anything about this in previous posts so here goes:
I have quite a bit of data, so running the disconnectome maps with the boosted connector takes a (very) long time. Is it possible to somehow batch-process BCBToolKit runs, or to otherwise parallelize it? Specifically the disconnectome maps?
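For what it’s worth, I was imagining something along the lines of the sketch below, assuming the disconnectome step could be wrapped in a per-lesion script (run_disconnectome.sh is a made-up name just to illustrate the idea, not something shipped with BCBToolKit):

```bash
# Hypothetical: run_disconnectome.sh would be a wrapper written around the
# toolkit's own scripts; it is not part of BCBToolKit.
# This only illustrates running several lesions at once (4 at a time) via xargs.
ls lesions/*.nii.gz | xargs -n 1 -P 4 ./run_disconnectome.sh
```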