dHCP structural pipeline installation

Dear fellow researchers,

We are having difficulties installing the dhcp-structural-pipeline package (https://github.com/BioMedIA/dhcp-structural-pipeline) on our Linux server.

More specifically, we are running into compilation issues with VTK: the downloads/tests during the VTK make step do not seem to work properly, and the build dies at around 40% complete (the VTK cmake step runs fine, btw). So far, Workbench and ITK compile just fine. We have previously installed Draw-EM on its own and VTK compiled fine, so we suspect there is something wrong with the VTK build in this dHCP version.

Unfortunately, we cannot use Docker on our cluster. Would it be possible for the developers to create a Singularity image for us?

Many thanks in advance!
Fotis

Yes, it can be a beast to compile. A large number of software packages all have to line up correctly.

@Jianliang, sorry to ping you; I think you made a Singularity image for our cluster. I don’t suppose you have a link to it?

Hi John,

No worries. I only used the Docker container image you built: I pulled it with singularity pull, which converted it to a format Singularity recognizes, and then ran the pipeline with Singularity on the cluster, along the lines of the sketch below.
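
For reference, the commands were along these lines (a sketch from memory; please verify the image name on Docker Hub, and note that older Singularity versions write a .simg file rather than .sif):

  # convert the Docker image to a Singularity image
  singularity pull docker://biomedia/dhcp-structural-pipeline:latest

  # run the pipeline from the converted image, binding the data directory;
  # subject ID, session ID, age at scan and file names are placeholders
  singularity run -B /path/to/data:/data dhcp-structural-pipeline_latest.sif \
      subject1 session1 44 -T2 /data/subject1_T2w.nii.gz -d /data -t 8

This assumes the container’s entrypoint forwards its arguments to the pipeline script, as the Docker examples in the README do.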

@Fotis I hope the explanation above is helpful.
Best regards,
J

Hi Fotis,

Sorry for the delayed reply. Please see my reply to John above. If you have any further questions, please feel free to ping me.

Best regards,
J

Hi Jianliang,

Thank you for the tip! We have tried the ‘pull’ option and it seemed to work better than compiling from scratch. I’m not sure yet whether it is fully installed, as some colleagues are on holiday. I will find out soon, I hope, and let you know as well!

Kind Regards,
Fotis

Dear Jianliang,

I have an update from actually running the pipeline with singularity. It does work, but I have a few comments, if I may:

  1. The output is split into two folders, /workdir and /logs, rather than /sourcedata and /derivatives as described on GitHub. I assume that is acceptable and not a problem in itself?

  2. Within /workdir there are plenty of folders, for example /segmentations, where the labeled images can be found. However, there is another folder, /segmentations_data, which is empty. Is this normal?

  3. What is the role of the additional measures (https://github.com/amakropoulos/structural-pipeline-measures)? Are those scripts meant to be run manually after the pipeline completes?

  4. At this point the output contains volumetric segmentations only, so I was wondering what happens with generating surface data. Do I have to run other scripts manually to get such output? I can see that Connectome Workbench and the spherical mesh tools are part of the Singularity image, but are they supposed to be used separately from the pipeline (e.g. feeding them the volumetric segmentations from the pipeline in order to start fitting the surfaces)?

As you can tell I’m still finding my way around the proper usage of the pipeline and your comments would be greatly appreciated!

Many thanks!
Fotis

Hi,

It looks like the script crashed during the surface reconstruction part and never continued to the data-structure script, which is why you don’t have the surfaces and the data are not in the correct folders. If you inspect the logs of the surface reconstruction (see the sketch below) you should see the error. I hope this helps.
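
For example (the log names and locations here are an assumption based on the files quoted elsewhere in this thread; adjust them to wherever your run writes its logs):

  # show the stderr of the surface reconstruction step for one subject
  less logs/<subject>-<session>.surface.err

  # or search all surface logs for the first reported errors
  grep -in "error" logs/*surface* | head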

Best regards,

Manuel

Dear Fotis,

  1. Yes, it is normal to have those folders.
  2. I think segmentations_data contains only minimal surface-related information, so it can look empty.
  3. Not sure.
  4. Surface data are generated when you run the pipeline, and they are stored in the workdir. We do need other pipelines to process the structural pipeline output further, but I am not sure about the details. John may know.

@jcupitt please correct me if any of my answers are incorrect.

Best,
Jianliang

Hi, yes, @mblesac is correct.

The final structural pipeline stage builds the derivatives and sourcedata outputs, so if they are not there, it probably failed. You’ll need to check the logs.

Dear all,

Thank you for the feedback. The logs are indeed the place to start!

We have 2 sources of logs in our case:

  1. From Slurm, which manages heavy jobs on our compute node. There is no indication of failure for the segmentation, but there is one when we add the -additional option.

  2. From the pipeline itself. This is split into four further log files: .log and .err for the segmentation, and .log and .err for the additional option.

It is perhaps much better to show you rather than retype the warnings/errors here, so I have combined the text output into one document, annotated with red comments for convenience.

The next milestone would be interpreting the logs and this is where your experience will make the difference!

(I just realised that I can’t upload PDFs here, so I’m sharing via Google Drive.)
https://drive.google.com/file/d/1Qn5xxYjCrirBFo95sRx71HQlISKt4EyI/view?usp=sharing

Many thanks in advance!
Fotis

Dear all,

We got a couple of successful completions of the pipeline. Indeed, when it succeeds, /sourcedata and /derivatives are created. They were not there before, even though a segmentations folder was created in /workdir.

Some errors/warnings from the logs:

-additional.err-

Warning: FFD spacing smaller than image resolution!
This may lead to artifacts in the transformation because
not all control points are within the vicinity of a voxel center.

-segmentation.err-

Warning: FFD spacing smaller than image resolution!
This may lead to artifacts in the transformation because
not all control points are within the vicinity of a voxel center.

OR

Warning: 10.008 percentile collapses in target, skipping
Warning: 20.006 percentile collapses in target, skipping
Warning: 30.004 percentile collapses in target, skipping
Warning: 40.002 percentile collapses in target, skipping
Warning: 50 percentile collapses in target, skipping
Warning: 59.998 percentile collapses in target, skipping
Warning: 69.996 percentile collapses in target, skipping
Warning: 79.994 percentile collapses in target, skipping

OR

Error: N4 command returned non-zero exit status 1
Error: neonatal-segmentation command returned non-zero exit status 1

-surface.err-

While running:
wb_command -volume-math 'clamp((T1w / T2w), 0, 100)' surfaces/subjectNN_DTI_0040_MR1-session1/workbench/subjectNN_DTI_0040_MR1-session1.T1wDividedByT2w.nii.gz -var T1w restore/T1/subjectNN_DTI_0040_MR1-session1.nii.gz -var T2w restore/T2/subjectNN_DTI_0040_MR1-session1.nii.gz -fixnan 0

ERROR: volume file for variable 'T2w' has different volume space than the first volume file

I’m trying again without passing the T1 in hopes of avoiding the surface.err (and perhaps the other errors?).

Do the T1 and T2 have to have the same voxel size and matrix? I have tried making the T1 match the T2 in those respects (roughly as in the sketch below), but it didn’t help.
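
For the record, the resampling attempt was roughly as follows (a hypothetical sketch using FSL’s flirt, which is not part of the pipeline container; with an identity matrix, flirt simply resamples T1 onto the T2 voxel grid without registering):

  # resample T1 onto the T2 voxel grid (identity transform, no registration)
  flirt -in T1w.nii.gz -ref T2w.nii.gz \
      -applyxfm -init $FSLDIR/etc/flirtsch/ident.mat \
      -out T1w_in_T2_space.nii.gz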

Any tips are welcome!

Many thanks,
Fotis

Dear John,

I’m not sure whether my previous replies notified the participants, so I’ll give it another go!

After trying different operating systems and versions of Singularity, we are still facing issues during the processing steps.

For example, even though we do not use the -additional flag, the logs contain relevant info:

subjectNN_DTI_0317_MR1-session1.additional_err.txt (44.1 KB)
subjectNN_DTI_0317_MR1-session1.additional.txt (4.3 KB)

The remaining logs are similar to what I have reported before:

subjectNN_DTI_0317_MR1-session1.segmentation_err.txt (693.8 KB)
subjectNN_DTI_0317_MR1-session1.segmentation_log.txt (138.9 KB)
subjectNN_DTI_0317_MR1-session1.surface_err.txt (2.9 KB)
subjectNN_DTI_0317_MR1-session1.surface_log.txt (1.5 KB)

Now, the pipeline does fully process some datasets, so perhaps the problem is on the data quality side.

In any case, if you could help us interpret the logs, we would be grateful!

Thanks in advance!
Fotis

Hi,

I had a similar issue: the surface reconstruction failed. I solved it by installing another version of the pipeline, which allows you to perform the surface reconstruction using only the tissue segmentation obtained in the previous step, without the intensity normalization step, and which also improves the segmentation (I think version 1.1 had a bug in the definition of label probabilities in Draw-EM). I don’t think this version is available as a Singularity container, so you would have to install it manually (a rough sketch follows below): https://github.com/DevelopingHCP/structural-pipeline/tree/dhcp-v1.1.1
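
Roughly, assuming the dhcp-v1.1.1 tree keeps the same setup.sh as the main repository (please check its README for the exact steps and build options):

  # fetch the dhcp-v1.1.1 branch and build the pipeline plus its dependencies
  git clone --branch dhcp-v1.1.1 https://github.com/DevelopingHCP/structural-pipeline.git
  cd structural-pipeline
  ./setup.sh -j 4    # -j sets the number of parallel build threads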

Best regards,

Manuel

I see. It’s not just us then. Good to know. We do indeed use the latest version, but if the older one helped you, then it’s worth trying it here too.

Thank you for the tip Manuel.

Fotis