I’m trying to process the HCP-aging dataset, which I’ve converted to BIDS format after downloading from NDA.
For my first attempt, I tried using the HCPPipelines BIDS app, which was recently updated to use HCP pipeline version 4.1.3, the version being developed for processing the HCP Lifespan data. I built a Singularity image from the Docker container to run on my HPC. However, when I run it I get this message:
bids-validator /cifs/hariri-long/Studies/HCP-Aging/BIDS/sourcedata
1: Files with such naming scheme are not part of BIDS specification. This error is most commonly caused by typos in file names that make them not BIDS compatible. Please consult the specification and make sure your files are named correctly. If this is not a file naming issue (for example when including files not yet covered by the BIDS specification) you should include a ".bidsignore" file in your dataset. Please note that derived (processed) data should be placed in /derivatives folder and source data (such as DICOMS or behavioural logs in proprietary formats) should be placed in the /sourcedata folder. (code: 1 - NOT_INCLUDED)
./sub-6005242/func/sub-6005242_task-carit_dir-PA_bold.json
./sub-6005242/func/sub-6005242_task-facename_dir-PA_bold.nii.gz
./sub-6005242/func/sub-6005242_task-vismotor_dir-PA_bold.nii.gz
…
[plus a jillion more files]
However, I am able to validate the dataset just fine when running the BIDS validator on its own.
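In case it's useful for comparison, this is roughly how I'm validating it standalone; I'm using the dockerized validator here, so the exact invocation may differ from whatever the app runs internally:

# Run the standalone BIDS validator against the dataset root
# (the bids/validator Docker image and the read-only bind are my own choices)
docker run --rm -v /cifs/hariri-long/Studies/HCP-Aging/BIDS:/data:ro bids/validator /data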
My second attempt has been to run the PreFreeSurferPipeline.sh script manually inside my Singularity container, passing all the arguments to it myself. With that I'm getting the error:
Spin echo fieldmap has different dimensions than scout image, this requires a manual fix
I'm passing it the fieldmap that's designated for the T1 (i.e., it's stored in the T1 folder in the raw download), so I haven't been able to figure out what's going on here yet, but I'm still working on it (I don't yet fully understand exactly which fieldmaps I need to be passing to this script, or how).
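For reference, the shape of what I'm running is below. The paths are based on my setup, but the flag names are from memory and the image paths and fieldmap arguments are placeholders, so treat this as a sketch rather than my exact command (the usage text at the top of PreFreeSurferPipeline.sh has the authoritative argument list):

# Sketch of calling the PreFreeSurfer script directly inside the container;
# the in-container pipelines path is a guess, and the --t1/--t2/--SEPhase*
# values are placeholders for the raw images and spin-echo fieldmap pair
singularity exec -B /work/long/HCP_MPP:/work/long/HCP_MPP hcppipelines.sif \
  /opt/HCP-Pipelines/PreFreeSurfer/PreFreeSurferPipeline.sh \
    --path=/work/long/HCP_MPP/HCP-A \
    --subject=sub-6005242 \
    --t1=<T1w NIfTI> \
    --t2=<T2w NIfTI> \
    --SEPhaseNeg=<AP spin-echo fieldmap> \
    --SEPhasePos=<PA spin-echo fieldmap> \
    --avgrdcmethod=TOPUP \
    --unwarpdir=z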
I think my biggest question is whether I’m missing any tools or documentation that are available for processing the HCP lifespan data, since I would expect that there’d be something out there by now!
Any guidance anyone has is much appreciated, thanks!!!
Thank you for your message and welcome to NeuroStars! This image was updated recently (about an hour ago) to the latest version. Could you please try pulling the latest image and processing the dataset again?
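If you are building with Singularity, something along these lines should pick up the refreshed image (the repository and tag here are placeholders for whichever one you were using):

# Re-pull the updated Docker image and rebuild the Singularity image from it
singularity pull hcppipelines_latest.sif docker://bids/hcppipelines:latest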
Thanks for your response and for your work on this, @franklin! Per Roeland's advice on the HCPpipelines GitHub page, I have now built my Singularity image from rhancock/hcpbids. I get past the BIDS validation (yay!), but then pretty quickly hit this error:
Tue Jul 21 11:05:08 EDT 2020:FreeSurferPipeline.sh: Thresholding T1w image to eliminate negative voxel values
Tue Jul 21 11:05:08 EDT 2020:FreeSurferPipeline.sh: …This produces a new file named: /work/long/HCP_MPP/HCP-A/sub-6005242/T1w/T1w_acpc_dc_restore_zero_threshold.nii.gz
Image Exception : #63 :: No image files match: /work/long/HCP_MPP/HCP-A/sub-6005242/T1w/T1w_acpc_dc_restore
I think this image was supposed to be generated by the pipeline, right? Will keep troubleshooting. Thanks!
Hi @aknodt,
The T1w_acpc_dc_restore image is supposed to be generated at the PreFreeSurfer stage, so there is likely an earlier error. Could you upload the complete output log from running the container?
Thanks! The log you provided starts from the FreeSurfer stage (the second processing stage), though, and the prior PreFreeSurfer stage is likely where the issue is. Do you have any earlier output messages, starting from the beginning of the messages generated by the container? If you didn't get a chance to capture those, would you be able to run the container again with a new output directory (or after deleting the existing output) while redirecting the messages to files? You can add > output.log 2> errors.log to your singularity command line to save the standard output and errors.
Thanks @rhancockn!! I was able to make it through the PreFreeSurfer stage. It doesn't seem to have worked properly, though (the final T1w_acpc_dc_restore_brain.nii.gz image is essentially empty), but it seems most likely to me that the issue is with the way I set up the data, etc.
I was pretty careful about converting the dataset to BIDS format (it comes as *.nii.gz files and *.json sidecars, so mainly I just did some re-arranging and re-naming), but it seems like there could easily be an issue with the JSONs or something. I did get this message between Gradient Unwarping and FAST that looks to indicate an issue:
…
Tue Jul 21 18:33:30 EDT 2020:TopupPreprocessingAll.sh: END: Topup Field Map Generation and Gradient Unwarping
Cannot interpret shift direction = NONE
Cannot interpret shift direction = NONE
Cannot interpret shift direction = NONE
Cannot interpret shift direction = NONE
Running FAST segmentation
…
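One thing I plan to check is whether the relevant metadata is actually present in my sidecars, e.g. something like this run from the dataset root (jq is just what I have handy, and the specific fields below are my guess at what feeds into the unwarp/shift direction):

# Print the phase-encoding-related fields from the fieldmap and functional sidecars
for f in sub-6005242/fmap/*.json sub-6005242/func/*.json; do
  echo "$f"
  jq '{PhaseEncodingDirection, EffectiveEchoSpacing, IntendedFor}' "$f"
done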
I think I'm going to try running the PreFreeSurferPipeline.sh script inside the container and passing it all the arguments manually, since we have had success doing that with some other datasets. I will also keep trying to learn more about how to properly configure and run the pipeline. Thought I'd follow up in the meantime in case there are any obvious solutions I'm missing.
Hi @aknodt,
Possibly the gradient unwarp direction is not getting passed correctly. This should be specified with the --anat_unwarpdir flag when running the container; e.g., --anat_unwarpdir z might work for HCP data.
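For example (everything here other than --anat_unwarpdir stands in for whatever you already have on your command line):

# Pass the anatomical unwarp direction through to the container;
# the bind path, image name, and positional BIDS-app arguments are placeholders
singularity run -B /cifs/hariri-long/Studies/HCP-Aging/BIDS:/bids \
  hcpbids.sif /bids /output participant --anat_unwarpdir z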
The command line options that PreFreeSurferPipeline.sh is called with should be logged near the beginning of the output. If you post that bit, I can see if anything looks odd there.
If the HCP-Aging dataset is organized similarly to HCP1200 (not sure if it is), could you just clone the hcp2bids repo and make a few changes so the parameters correspond to the aging dataset? Looking at the code, not that much of it is strictly hard-coded for HCP1200.