How to import mindboggle in jupyter notebook

Hi all,

I am trying to run examples related to thickinthehead:

I have installed Nipype in my conda environment (`pip install nipype`).
I thought that mindboggle was part of the nipype package, but I cannot import it in a Jupyter notebook.

Do I have to install mindboggle as a Docker container if I just want to use it within a Jupyter notebook?

Thank you,
Ali

Yes, you will need to install it. Nipype provides a standardised way to call functions from various packages, but the packages still need to be present in your local environment, as Nipype won’t download them for you. It looks like mindboggle distributes a Docker container, which might be of use!

Hope that helps!

Elizabeth

Thank you for your interest in Mindboggle! We have disabled thickinthehead in the automated shape analysis pipeline because we found a possible dependency on isotropic resolution. While it is available as a command-line argument and in Mindboggle’s Python library, we recommend using this shape measure only in special circumstances.

@binarybottle: Thank you!
I have noticed that from here: https://github.com/nipy/mindboggle/issues/149
So by special circumstances, do you mean when the images are isotropic 1 mm^3?
I have my T1 images in isotropic 1 mm^3 space, so can I assume that the revised algorithm will work similarly to what you have published?

Thank you. Yes, it should work the same as in the published version.


@binarybottle:
Hi, I am afraid there are some bugs in the latest version of the thickinthehead code.
Consider the following simple situation:

  • In an isotropic 1 mm^3 image (voxvol = 1; voxarea = 1), we have a cortical label with a width of 1 voxel and a length of N voxels (true thickness = 1 mm).

Now, based on the code:
inner_edge equals the whole cortex.
outer_edge is all zero (after erosion of the cortex by 1 voxel).

Therefore:
label_inner_edge_area = 1 * N = N
label_outer_edge_area = 0
-> label_area = (N + 0) / 2 = N/2
label_cortex_volume = 1 * N = N
-> thickness = label_cortex_volume / label_area = N / (N/2) = 2 mm (which is wrong! The true thickness is 1 mm.)
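The arithmetic above can be reproduced numerically. This is a minimal sketch of the edge/area bookkeeping described in the post, not the actual mindboggle source; variable names mirror the ones used above:

```python
import numpy as np
from scipy.ndimage import binary_erosion

# A cortical label 1 voxel thick and N voxels long in a 1 mm^3 isotropic image.
N = 10
voxvol = 1.0   # mm^3 per voxel
voxarea = 1.0  # mm^2 per voxel face

cortex = np.zeros((3, N + 2), dtype=bool)
cortex[1, 1:N + 1] = True               # 1-voxel-thick, N-voxel-long strip

outer_edge = binary_erosion(cortex)     # eroding a 1-voxel-thick strip empties it
inner_edge = cortex & ~outer_edge       # so the inner edge is the whole cortex

label_inner_edge_area = inner_edge.sum() * voxarea   # N
label_outer_edge_area = outer_edge.sum() * voxarea   # 0
label_area = (label_inner_edge_area + label_outer_edge_area) / 2.0  # N/2
label_cortex_volume = cortex.sum() * voxvol          # N

thickness = label_cortex_volume / label_area
print(thickness)  # 2.0, although the true thickness is 1 mm
```

Halving the sum of the inner and outer edge areas assumes both edges exist; when the erosion empties the label, the denominator is halved while the volume is not, which is exactly the factor-of-2 error.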

I also see another issue with the code:
The “thickness” variable is not initialized if the condition in the “if” statement at line 379 is not satisfied (https://github.com/nipy/mindboggle/blob/master/mindboggle/shapes/volume_shapes.py#L379), so it will probably carry over a wrong, stale value.

I appreciate your feedback. Thank you.

Thank you, @aghayoor. This is precisely why I caution against use of thickinthehead for general-purpose brain image processing. It assumes not only 1 mm^3 isotropic voxels but also a minimum cortical thickness. In all the cases I have run, the cortex was several voxels thick, so I didn’t run into this problem, but if you are running these on brain images with very thin cortices, thickinthehead could fail.

Re: line 379, only “label_volume_thickness” and “output_table” are returned by the function, not “thickness”. If the “if” condition is not satisfied, then -1 values are saved in label_volume_thickness and nothing is written to output_table for that label.
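The control flow being described can be sketched like this (hypothetical function and variable names, not the actual volume_shapes.py source): each label’s entry defaults to the -1 sentinel and is only overwritten when the guard condition holds, so an uninitialized local never reaches the output.

```python
# Sketch of the guarded per-label pattern (assumed names, for illustration only).
def per_label_thickness(labels, volumes, areas):
    label_volume_thickness = []
    for label in labels:
        thickness = -1.0                   # explicit default, the -1 sentinel
        if areas.get(label, 0) > 0:        # guard, analogous to line 379
            thickness = volumes[label] / areas[label]
        label_volume_thickness.append([label, thickness])
    return label_volume_thickness

print(per_label_thickness([3, 4], {3: 10.0, 4: 8.0}, {3: 5.0}))
# -> [[3, 2.0], [4, -1.0]]  (label 4 has no area, so it keeps the sentinel)
```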

@binarybottle: Thank you for your review. Do you think you could offer a more generalized version of the algorithm? Do you have any suggestions for fixing the above bug? I would be glad to help make the code more reliable. Thank you.

I tried a quick fix by upsampling the resolution to simulate thicknesses of multiple voxels, but this required too much memory, so I removed that function. I haven’t had time to revisit this problem, but welcome any pull requests that tackle this issue!
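The upsampling idea can be illustrated as follows. This is a hypothetical sketch, not the removed mindboggle function: a nearest-neighbour zoom by an integer factor turns a 1-voxel-thick sheet into a several-voxel-thick one, and the array size grows by factor² in 2D (factor³ in 3D), which is where the memory cost comes from.

```python
import numpy as np
from scipy.ndimage import zoom

factor = 4
cortex = np.zeros((3, 12), dtype=np.uint8)
cortex[1, 1:11] = 1                        # 1-voxel-thick strip

# order=0 means nearest-neighbour: labels stay binary, no interpolation.
upsampled = zoom(cortex, factor, order=0)

print(cortex.shape, upsampled.shape)       # (3, 12) (12, 48)
print(upsampled.nbytes / cortex.nbytes)    # 16.0 (factor**2 in 2D)
```

Note that any counts taken on the upsampled array would have to be converted back with rescaled voxel volume and area (divided by factor³ and factor², respectively).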

@binarybottle: Would it be possible to put that function back and make its usage optional via a flag?
Using this option could be recommended only in cases where the input scan might include very thin cortex in some regions, for example when processing brain scans from neurodegenerative disease studies such as late-stage AD.

It’s easy enough to reset the upsample factor, but this raises a new issue: the aliasing will be so great on curved regions that the estimates of volume and area will be increasingly inaccurate. You are welcome to experiment with greater upsampling in this function, but I don’t want to alter the code if it encourages users to run into memory and accuracy issues…