QSIPrep problems when using previous FreeSurfer output

Hi,

I am trying to run QSIPrep now that the segmentation and masks from previous FreeSurfer output can be inserted into the pipeline. The data were previously processed successfully with older versions of the pipeline, but since adding the FreeSurfer input I have not been able to get the pipeline to execute successfully. This is the command I am using:

docker run --rm --memory=30g -ti \
  -v /Users/fionaodonovan/Desktop/data/Fiona/QSIPrep/analysis_run/license.txt:/opt/freesurfer/license.txt:ro \
  -v /Users/fionaodonovan/Desktop/data/Fiona/QSIPrep/analysis_run/qsiprep:/data:ro \
  -v /Users/fionaodonovan/Desktop/data/Fiona/QSIPrep/analysis_run/qsiprep:/qsiprep-output \
  -v /Users/fionaodonovan/Desktop/data/Fiona/QSIPrep/analysis_run/qsiprepscratch:/scratch \
  -v /Users/fionaodonovan/Desktop/data/Fiona/QSIPrep/analysis_run/data/derivatives/freesurfer:/freesurfer:ro \
  -v /Users/fionaodonovan/Desktop/data/Fiona/QSIPrep/analysis_run/qsi_prep_recon_out:/out \
  pennbbl/qsiprep:0.15.2 /data /out participant \
  --recon-input /qsiprep-output \
  --recon-spec mrtrix_multishell_msmt_ACT-hsvs \
  --freesurfer-input /freesurfer \
  --ignore fieldmaps --n_cpus 6 -w /scratch --mem-mb 30000 \
  --output_resolution 1.3 --skip-odf-reports

When I ran this it got stuck after the “create_5tt_hsvs” step and had been processing for about 14 hours before I stopped it. I also got a crash report in the qsiprep folder indicating an issue with concatenating the confounds, and no confounds.tsv file was created.

crash-20220811-221242-root-concat-a135ce0b-9ee0-40c9-a098-5147dfd65bca.txt (13.5 KB)

Previously I was using version 0.16.0RC3 of the pipeline, and with it I got further along the qsirecon pipeline, but there was an issue with the peak plots.

crash-20220725-163638-root-plot_peaks-d312a78b-d42b-43f8-a694-668568458b1e.txt (3.0 KB)

I then added the --skip-odf-reports flag, but the pipeline got stuck on the “create_5tt_hsvs” step. When using this version of the pipeline I also got the first crash report; however, a confounds.tsv file was still created in this case.

I am unsure about what is going on and how to get the pipeline to run successfully so any help with these issues would be greatly appreciated.

Thanks,
Fiona

Hi Fiona,

A few things:

It is important that your DWI data are corrected for susceptibility distortion for anatomically constrained tractography to work well. If your data were corrected in other ways (e.g. two series collected with reverse phase encoding directions) then you can ignore this point.

How many resources (CPUs, GPUs, RAM, etc.) are you devoting to the job? Are your data especially high resolution?

I would stick with 0.16.0.

What version did you preprocess with? I have been able to use 0.16.0 reconstructions on 0.13.1-preprocessed data, but I have not tested earlier versions.

Best,
Steven

Hi Steven,

Thanks for your prompt response.

Yes, we did collect the data with reverse phase encoding directions.

The data are not high resolution. The computer has 6 cores and 32 GB of RAM, and all 6 CPUs and 30 GB of RAM were assigned to the job. We previously preprocessed with QSIPrep version 0.14.2 using all 6 CPUs and 30 GB of RAM without issues, so I didn’t think it would be a problem running the newer versions. Could this be the source of the error? Maybe I need to run it with fewer CPUs and less memory?

I am now preprocessing the raw DWI data, not DWI data that was previously preprocessed (only the FreeSurfer data are preprocessed), so I don’t think the final point applies, but from now on I will use 0.16.0.

Thanks for your help so far.

Kind regards,
Fiona

Ah, I just noticed you passed in your QSIPrep outputs as the BIDS root. The BIDS root should still be the original BIDS data directory, even though you are only reconstructing the preprocessed data. Also, how many subjects are there? If you are trying to run several subjects with your compute specs, that could be why it is going slowly.
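For illustration, the corrected call would have this general shape. All paths below are placeholders (your earlier flags are kept where they apply); the key point is that the BIDS root mount points at the raw dataset, while the preprocessed derivatives go in via --recon-input:

```shell
# Sketch only: /bids must be the ORIGINAL raw BIDS directory; the preprocessed
# QSIPrep derivatives are passed separately via --recon-input.
docker run --rm --memory=30g -ti \
  -v /path/to/license.txt:/opt/freesurfer/license.txt:ro \
  -v /path/to/raw_bids:/bids:ro \
  -v /path/to/qsiprep_derivatives:/qsiprep-output:ro \
  -v /path/to/freesurfer:/freesurfer:ro \
  -v /path/to/scratch:/scratch \
  -v /path/to/recon_out:/out \
  pennbbl/qsiprep:0.16.0 /bids /out participant \
  --recon-input /qsiprep-output \
  --recon-spec mrtrix_multishell_msmt_ACT-hsvs \
  --freesurfer-input /freesurfer \
  -w /scratch --output_resolution 1.3 --skip-odf-reports
```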

Thanks for pointing that out; I didn’t see it before. I have changed my QSIPrep output path and tried to run it again, but it is still getting stuck. I am currently only trying to process one subject, so I don’t think it is merely slow; it seems to be stuck.

I have now also tried running it with fewer CPUs, but this has not helped. It is stuck in the recon process at the step after “intensity_norm”. Does anyone know what step comes after intensity norm?

Also, as the recon process begins I am getting a few error messages in the terminal. I will attach them here, but if anyone has information on this it would be great, as I am not sure what to do next or where to find a solution.

recon_error.txt (3.0 KB)

What FS version did you use for recon-all?

I think we used version 6.0.

When you say you changed your QSIPrep output path, what change did you make to your command? The problem was that your BIDS input argument was set to your QSIPrep output, not the original BIDS directory.

After you pointed out that I had the same input and output folder, I changed the output path to the folder I had intended to use for output and corrected the mistake. The folder I am using for input, ‘qsiprep’, has the BIDS structure for the one subject I am currently trying to process, along with the dataset description file. I have successfully run the qsiprep part of the pipeline and viewed all of the output. It is the qsirecon part of the pipeline that is getting stuck, right after intensity normalisation has finished. Do you know what the pipeline does after intensity normalisation?

If you are skipping ODF plots, then the next step should be tractography, which does usually take a while. Does the log file indicate that tractography has begun? Perhaps as a test you can make a recon spec that runs a small number of streamlines.
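One way to check, assuming the nipype log sits in the working directory you mounted with -w (the log filename and the track_ifod2 node name here are assumptions based on typical nipype/QSIPrep layouts; adjust to what you see locally):

```shell
#!/bin/sh
# Grep the nipype log for any sign of the tractography node starting.
# Pass the log path as the first argument; /scratch/pypeline.log is a guess.
LOG="${1:-/scratch/pypeline.log}"
grep -E -i -n 'track_ifod2|tckgen' "$LOG"
```

If nothing matches, tractography likely never started and the hang is earlier in the workflow.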

I have spent the last few days trying different things that I hoped would be the solution, but so far I have not been successful. The tractography workflow appears in the log file, but the terminal does not show that tractography has begun. The last thing written is:
220825-01:11:41,281 nipype.workflow INFO:
[Node] Finished “intensity_norm”, elapsed time 41.495349s.

I am not sure if this means that it has begun to run the tractography or not.

This time I ran the pipeline with the --sloppy flag, as I thought this might be faster, but it has still been stuck at the tractography step for a few days now. How can I change the recon spec to run it with a smaller number of streamlines? I tried to figure this out but was not sure what to do.

You can make a JSON file similar to this: qsiprep/mrtrix_multishell_msmt_ACT-hsvs.json at aa378882bbc396d9bd93c27af0545c1976b7bb56 · PennLINC/qsiprep · GitHub

but change the select parameter (which is the number of streamlines) of the track_ifod2 node to be lower. Then pass this new JSON file to the --recon-spec argument of QSIPrep.
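As a rough sketch, the relevant fragment of such a file might look like the one below. The field names follow the linked spec file, but double-check them against the spec version that matches your QSIPrep, and copy all of the other nodes (CSD estimation, 5tt, etc.) from that file unchanged; only the select value is lowered here:

```json
{
  "name": "mrtrix_msmt_act_hsvs_test",
  "space": "T1w",
  "nodes": [
    {
      "name": "track_ifod2",
      "software": "MRTrix3",
      "action": "tractography",
      "parameters": {
        "tckgen": {
          "algorithm": "iFOD2",
          "select": 100000
        }
      }
    }
  ]
}
```

With select cut to something like 100,000 streamlines, the tractography node should finish quickly if everything upstream is working, which makes it a useful smoke test.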


I tried your suggestion and did a test with a smaller number of streamlines, but again it is getting stuck after the intensity_norm step. I let it run with the reduced number of streamlines for a few days as well, but it did not progress. I think that even if I keep letting it run now, it will never complete the tractography step. Does anyone have any other suggestions to help me solve this?

Thanks,
Fiona

Do you see that the tractography file has started being written? You can look in the scratch directory under the ifod workflow.
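To check concretely, something like this can list any iFOD2 node directories or partial .tck files under the working directory (the '*ifod*' naming is an assumption based on the node names in the recon spec; adjust the path to wherever you mounted -w):

```shell
#!/bin/sh
# Look for tractography node directories and streamline files in scratch.
# Pass the scratch path as the first argument (defaults to /scratch).
SCRATCH="${1:-/scratch}"
find "$SCRATCH" -type d -name '*ifod*'
find "$SCRATCH" -type f -name '*.tck'
```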

I do not have an ifod folder in the scratch directory, so I suppose this means tractography has not begun?

Sounds about right. What is the folder in scratch corresponding to the last step that ran?

In the msmt_csd workflow it is the intensity_norm folder.