Top-Up correction with TractoFlow not working

I have installed the latest version of TractoFlow following the instructions in the TractoFlow documentation. To test the pipeline I downloaded the publicly available Penthera_3T data set. I am working on macOS 10.15.6 using Docker. My command looks as follows:

./nextflow run ./tractoflow-2.1.1/main.nf --root <ROOT_FOLDER> --dti_shells "0 300 1000" --fodf_shells "0 1000 2000" -profile macos -with-docker scilus/docker-tractoflow:2.1.1 -resume

For some reason TractoFlow is not able to run eddy correction when topup is included:

If topup correction is omitted, i.e. by adding --run_topup false, the pipeline runs fine after manually replacing "0.001" in the bval files with "0" (otherwise dwinormalise failed). I have tried running topup with several randomly picked data sets from the Penthera_3T collection and failed at every attempt so far. I was also unable to locate any log file with more detailed debugging messages concerning this error.
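For reference, the bval edit can be scripted instead of done by hand; this is only a rough sketch (the file names are placeholders and the 1 s/mm² threshold is my own choice):

```python
# Sketch: set near-zero b-values (e.g. 0.001) to exactly 0 so that
# dwinormalise accepts the shell. File names are placeholders.
import numpy as np

bvals = np.loadtxt("dwi.bval")      # FSL-style bval file: one row of values
bvals[bvals < 1] = 0                # treat anything below 1 s/mm^2 as b=0
np.savetxt("dwi_fixed.bval", bvals[None, :], fmt="%d")
```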

I’d appreciate any input on possible fixes or workarounds.

Kindly,
Jan

Hi Jan,

In your current directory you should have a work directory. To be able to help you I would need to know what you’ve got in this file: work/41/cbc379*/.command.err
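If that exact path does not exist on your side, something like this rough sketch (assuming Nextflow's default work/<xx>/<hash>/ layout) should print every non-empty .command.err so you can spot the failing task:

```python
# Sketch: dump every non-empty .command.err under the Nextflow work directory.
from pathlib import Path

for err in sorted(Path("work").glob("*/*/.command.err")):
    if err.stat().st_size > 0:
        print(f"--- {err} ---")
        print(err.read_text())
```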

Thank you in advance.

Have a good day
Arnaud

Anecdotally, I tend to see this error when I do not devote enough memory to the task. How much memory are you allocating to Docker? You may want to try increasing it.

Docker --> Preferences --> Resources --> CPUs and Memory

> In your current directory you should have a work directory. To be able to help you I would need to know what you’ve got in this file: work/41/cbc379*/.command.err

I have found the error log in each of the four work folders involved in the topup correction. The log looks like this:

As far as I understand, the error does not occur because of insufficient memory. Rather, it seems to be caused by a compatibility issue?

Edit: I have looked a bit deeper into the code. The processing step causing this error seems to be scil_image_math.py mean rev_b0.nii.gz rev_b0.nii.gz. According to the documentation on GitHub, this step was only added recently, i.e. about two months ago.

Within scil_image_math.py the error is thrown when checking:

if isinstance(data, np.ndarray) and data.dtype != ref_img.get_data_dtype() and not args.data_type:
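If I read that correctly, the check compares the dtype of the in-memory result against the dtype declared in the NIfTI header, and numpy presumably computes the mean in float64 while rev_b0.nii.gz is stored as an integer type. A rough sketch of that comparison with nibabel (the file name and a 4D rev_b0 are assumptions on my part):

```python
# Sketch: compare the dtype of the averaged data with the dtype stored in the
# NIfTI header, roughly what the check above boils down to.
import nibabel as nib
import numpy as np

img = nib.load("rev_b0.nii.gz")
data = np.mean(img.get_fdata(), axis=-1)   # get_fdata() always returns float64

print("array dtype :", data.dtype)             # float64
print("header dtype:", img.get_data_dtype())   # e.g. int16 for raw scanner data
print("match       :", data.dtype == img.get_data_dtype())
```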

I assume averaging rev_b0 was introduced to account for multiple rev_b0 images acquired over the course of the acquisition? However, for the data sets I was using, rev_b0 should be a single stack of images without repeated acquisitions.

> Anecdotally, I tend to see this error when I do not devote enough memory to the task. How much memory are you allocating to Docker? You may want to try increasing it.

I had read about similar experiences from other users and already set the memory for Docker to 16 GB before running TractoFlow for the first time. Also, the eddy process without topup correction appears to work fine, which seems to me a strong indicator that memory might not be the issue here.

Hi @blu4

It is a bug in TractoFlow. Sorry for that. We will fix it in the next release. In the meantime, if you want to run TractoFlow, you just need to change the datatype of your reverse b0 to float32. If you have MRtrix3 you can do: mrconvert rev_b0.nii.gz new_rev_b0.nii.gz -datatype float32
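If you don't have MRtrix3 installed, the same conversion should also be possible with nibabel; this is only a rough sketch (untested, file names taken from the mrconvert example):

```python
# Sketch: rewrite rev_b0.nii.gz with a float32 data type, equivalent in spirit
# to the mrconvert command above.
import nibabel as nib
import numpy as np

img = nib.load("rev_b0.nii.gz")
data = img.get_fdata().astype(np.float32)

out = nib.Nifti1Image(data, img.affine, img.header)
out.set_data_dtype(np.float32)      # make sure the header dtype is float32 too
nib.save(out, "new_rev_b0.nii.gz")
```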

Best

Guillaume Theaud

Dear Guillaume,

Thank you for your reply. Converting rev_b0 to float32 solved the issue.