Stuck when running QSIPrep on HCP data

Hi experts,
I am trying to run QSIPrep on HCP data. I transformed the unprocessed data to BIDS format with hcp2bids. However, when I run QSIPrep, it gets stuck at the topup step for a very long time (~10 hours). Any suggestions? Thank you very much.

Here is my command:

docker run -ti --rm \
-v /Users/jinb/Documents/datapool/datasetExample/HCP-1200-DWI/BIDS:/data:ro \
-v /Users/jinb/Documents/datapool/datasetExample/HCP-1200-DWI/Output:/out \
-v /Users/jinb/Documents/Project_Git/couplingProfile/Step_3rd_hcpSC/freesurfer_license:/fs_license \
pennbbl/qsiprep:0.13.1 \
/data /out participant \
--unringing-method mrdegibbs \
--recon-spec mrtrix_multishell_msmt \
--output-resolution 1.2 \
--fs-license-file /fs_license/license.txt

And this is my folder structure:

Here is the info from Activity Monitor:

How much memory/CPU are you devoting to QSIPrep? Docker has its own memory limits, which you can edit in its preferences. I think the default is 8 GB, which you should try bumping up to 16 if you can afford to. Maybe try running on a single subject first by specifying --participant-label.

QSIPrep handled that itself. Based on the log, 8 cores were assigned to it, along with all available memory (125 GB).

There is only one subject in the BIDSified folder.

Okay, good to know, thank you. Yeah, it looks like the CPU is maxed out while running topup. There are a lot of scans in that folder, so if denoising is done after merging, TOPUP should take a long time. Maybe as a test you can run on just a single acquisition (e.g., just the 95lr).

Maybe I can try it with QSIPrep 0.13.0. Some versions of QSIPrep seem to run weirdly on my workstation.

Yes. I will try your suggestion. Thank you.

I have had QSIPrep runs (not including QSIRecon) take ~2 days depending on the subject, even with good CPU/memory usage. Have you tried letting topup run for longer?


Yeah, I will wait a while longer. It seems topup is running.

This has to do with how TOPUP runs. You can see it’s running in a single thread, which can take an incredibly long time with high resolution data. When it gets to eddy you will see all 8 cores being used. HCP data takes a very long time to run because of the single-threaded topup, and even on 8 cores eddy can take a long time. If you have access to a GPU it will finish eddy remarkably faster.
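If a GPU is available, one way to get qsiprep to run eddy_cuda is to pass a custom eddy parameter file via `--eddy-config`. A minimal sketch of such a file, assuming the key names match qsiprep's default `eddy_params.json` (check the file shipped with your version before relying on these):

```json
{
  "flm": "linear",
  "slm": "linear",
  "repol": true,
  "use_cuda": true,
  "output_type": "NIFTI_GZ"
}
```

Then add `--eddy-config /path/to/eddy_params.json` to the docker command above, remembering to bind-mount the file into the container.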

One thing to be aware of: if this is data from the minimally preprocessed HCP releases, you don't need to preprocess it again in QSIPrep. A goal for the next release is to be able to ingress HCP minimally preprocessed data so the reconstruction workflows can run on it.


Cool, I'll wait for that release. I have waited another 10 hours; it seems it is a long journey starting from the unprocessed data.

After a long wait, QSIPrep reported a crash:
AcqPara::AcqPara: Unrealistic read-out time

Here is the crash file:
crash-20210624-040924-root-eddy-1256a032-f5f7-48f1-84cd-cee22ff37767.txt (4.3 KB)

I checked the JSON file generated by hcp2bids for sub-100206_acq-dir95lr_dwi:
"EffectiveEchoSpacing": 0.00078,
"TotalReadoutTime": 0.6,
"EchoTime": 0.0895,
"PhaseEncodingDirection": "i-"

That total readout time looks too big by a factor of 10. Is this from official HCP data?

Yes, I also think it is too big. The original data comes from HCP (participant ID 100206). The value was assigned by hcp2bids, and I think it might be wrong. TotalReadoutTime = EffectiveEchoSpacing * (ReconMatrixPE - 1),
so TotalReadoutTime = 0.00078 * (143 - 1) = 0.11076?
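That arithmetic can be sanity-checked with a few lines of Python (the helper name is just for illustration; 143 is the ReconMatrixPE value assumed above):

```python
def total_readout_time(effective_echo_spacing: float, recon_matrix_pe: int) -> float:
    """TotalReadoutTime = EffectiveEchoSpacing * (ReconMatrixPE - 1)."""
    return effective_echo_spacing * (recon_matrix_pe - 1)

# Values from the sub-100206 sidecar: 0.00078 s echo spacing, 143 PE lines
trt = total_readout_time(0.00078, 143)
print(round(trt, 5))  # 0.11076 -- far from the 0.6 written by hcp2bids
```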

In some locally acquired HCP Lifespan data we have

"TotalReadoutTime": 0.0959097,
"RepetitionTime": 3.23,
"EchoTime": 0.0892,
"EchoTrainLength": 105,
"EffectiveEchoSpacing": 0.000689998,
"DerivedVendorReportedEchoSpacing": 0.000689998,
"AcquisitionMatrixPE": 140,
"BandwidthPerPixelPhaseEncode": 10.352,
"BaseResolution": 140,

for reference. It looks like Lifespan might be a little different, but your new total readout time is a lot closer.

If you kept your working directory around, QSIPrep should pick back up at eddy without needing to re-run topup.


I also have some data from HCP Lifespan; I will check that too. Thank you.

Great. The pipeline finished, and the connectome was successfully generated.

However, I cannot see any QC info in the HTML report, and the figures folder is empty. Is this normal for QSIPrep?

Here is a summary of my output folder:

There should be files in the figures/ directory and you should see a bunch of interactive figures on the html page when you open it in a browser. If you re-run the same command using the same working directory the figures should copy over to your output directory.

Oh, yes, got it. It seems the tmp dir was auto-cleaned by the system. Thank you, I will try it again. Everything seems to work now.