I have very limited disk space, so if possible I would prefer not to download the entire output folder for each subject; it would be great if I could download only the files required for XCP-D.
The minimal inputs for fMRIPrep are listed here, but not for the HCP preprocessing pipeline.
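For context, this is the kind of thing I have in mind: pulling individual files with a script rather than whole directories. Below is a sketch with placeholder credentials; the URL pattern is an assumption based on my bash script, and the file list is just a guess, since the minimal set is exactly what I'm asking about.

import requests

subject = "100206"
run = "rfMRI_REST1_LR"
# Assumed ConnectomeDB archive layout; credentials are placeholders.
base = (
    "https://db.humanconnectome.org/data/archive/projects/HCP_1200/"
    f"subjects/{subject}/experiments/{subject}_CREST/resources/"
    f"{subject}_CREST/files/MNINonLinear/Results/{run}/"
)
# Placeholder list: the actual minimal set for XCP-D is the question.
wanted = [
    f"{run}_Atlas_MSMAll.dtseries.nii",
    "Movement_Regressors.txt",
]
with requests.Session() as session:
    session.auth = ("user", "password")
    for name in wanted:
        resp = session.get(base + name, stream=True)
        resp.raise_for_status()
        with open(name, "wb") as out:
            for chunk in resp.iter_content(1 << 20):
                out.write(chunk)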
A valid FreeSurfer license file is recommended. Set the FS_LICENSE environment variable or use the '--fs-license-file' flag.
Framewise displacement-based scrubbing is disabled. The following parameters will have no effect:
--min-time
250602-23:07:07,903 nipype.utils WARNING:
convert_hcp2bids is an experimental function.
250602-23:07:07,903 nipype.utils INFO:
Converting 100206
250602-23:07:07,904 nipype.utils INFO:
Converted dataset already exists. Skipping conversion.
250602-23:07:21,150 nipype.workflow IMPORTANT:
Running XCP-D version 0.10.7.dev16+g4d40272
250602-23:07:21,197 nipype.workflow IMPORTANT:
Building XCP-D's workflow:
* Preprocessing derivatives path: /scratch/junhong.yu/HCP/work/dset_bids/derivatives/hcp.
* Participant list: ['100206'].
* Run identifier: 20250602-230702_009b3d90-5ebd-4f7e-a715-28421363d8c2.
Process Process-2:
Traceback (most recent call last):
File "/home/junhong.yu/XCPDenv/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/home/junhong.yu/XCPDenv/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/junhong.yu/XCPDenv/lib/python3.10/site-packages/xcp_d/cli/workflow.py", line 100, in build_workflow
retval['workflow'] = init_xcpd_wf()
File "/home/junhong.yu/XCPDenv/lib/python3.10/site-packages/xcp_d/workflows/base.py", line 81, in init_xcpd_wf
single_subject_wf = init_single_subject_wf(subject_id)
File "/home/junhong.yu/XCPDenv/lib/python3.10/site-packages/xcp_d/workflows/base.py", line 127, in init_single_subject_wf
subj_data = collect_data(
File "/home/junhong.yu/XCPDenv/lib/python3.10/site-packages/xcp_d/utils/bids.py", line 211, in collect_data
raise FileNotFoundError(
FileNotFoundError: No BOLD data found in allowed spaces (fsLR).
Query: {'datatype': 'func', 'desc': ['preproc', None], 'suffix': 'bold', 'extension': '.dtseries.nii', 'space': 'fsLR'}
Found files:
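To double-check what pybids can actually see in that dataset, I ran something like this (a sketch, assuming pybids is installed in the same environment; the path is the derivatives path from the log above):

from bids.layout import BIDSLayout

# Index the converted derivatives, mirroring the failing query
# from xcp_d/utils/bids.py.
layout = BIDSLayout(
    "/scratch/junhong.yu/HCP/work/dset_bids/derivatives/hcp",
    validate=False,
)
files = layout.get(
    datatype="func",
    suffix="bold",
    extension=".dtseries.nii",
    space="fsLR",
)
for f in files:
    print(f.path)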
Here are the files that were generated in dset_bids within the working directory:
Are you sure these files were downloaded properly? E.g., if you are using the DataLad distribution, you have to datalad get them. Do these files open properly in an image viewer (e.g., Connectome Workbench)?
wb_view doesn't work for me because of missing libraries, but I'm able to run wb_command -nifti-information on all the files except the brain masks brainmask_fs.2.0.nii.gz and Brainmask_fs.nii.gz, for which I get the error: ERROR: error reading NIfTI file brainmask_fs.2.0.nii.gz: brainmask_fs.2.0.nii.gz is not a valid NIfTI file
None of these files are 0 bytes, though; they all have the same file size of 208 KB.
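A quick way to see what those two files actually contain is to look at their first bytes (a sketch; a real .nii.gz should start with the gzip magic number 1f 8b):

for path in ["brainmask_fs.2.0.nii.gz", "Brainmask_fs.nii.gz"]:
    with open(path, "rb") as f:
        head = f.read(16)
    # A genuine .nii.gz begins with b"\x1f\x8b"; anything else
    # (e.g. an HTML error page) means the download went wrong.
    print(path, head[:2] == b"\x1f\x8b", head)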
The URL for one of the brain masks that I use in my bash script: https://db.humanconnectome.org/data/archive/projects/HCP_1200/subjects//experiments/_CREST/resources/_CREST/files/MNINonLinear/Results/rfMRI_REST1_LR/Brainmask_fs.2.nii.gz
Unfortunately, there are too many unknowns for me to describe an exact solution. I do not know what the variables in your command link to, what is in those folders, whether the behavior will persist in a container, or how the HCP data were downloaded.
First, the brain masks were indeed invalid, because the filenames I took from the HCP reference manual were incorrect. For instance, it should have been brainmask_fs.2.nii.gz instead of Brainmask_fs.2.nii.gz (as listed in the manual).
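This is how I confirmed it against the server (a sketch; the subject ID and credentials are placeholders, and the URL is my reconstruction of the script above with the shell variables expanded):

import requests

base = (
    "https://db.humanconnectome.org/data/archive/projects/HCP_1200/"
    "subjects/100206/experiments/100206_CREST/resources/"
    "100206_CREST/files/MNINonLinear/Results/rfMRI_REST1_LR/"
)
for name in ["Brainmask_fs.2.nii.gz", "brainmask_fs.2.nii.gz"]:
    # Fetch only the first bytes of each candidate filename; the
    # correctly cased one should return 200 with a gzip stream.
    resp = requests.get(base + name, auth=("user", "password"), stream=True)
    head = next(resp.iter_content(16), b"")
    print(name, resp.status_code, head[:2] == b"\x1f\x8b")
    resp.close()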
Second, the file directory structure described in hcpya.py is incorrect.
Note: I'm leaving out the entire fsaverage_LR32k directory, because I realized those files are not required if the --warp-surfaces-native2std flag is not used.
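For reference, this is how I dumped the tree I actually ended up with, to compare against what hcpya.py expects (the root path is just where the subject sits on my machine):

import os

root = "/scratch/junhong.yu/HCP/100206"  # assumed local location
for dirpath, dirnames, filenames in os.walk(root):
    depth = dirpath[len(root):].count(os.sep)
    indent = "  " * depth
    print(indent + os.path.basename(dirpath) + "/")
    for name in sorted(filenames):
        print(indent + "  " + name)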
Also, the ALFF processing takes several hours for each subject. I understand there isn't an option to turn it off at the moment, so I went ahead and commented out all the ALFF sections in cifti.py and outputs.py.
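For anyone curious what the commented-out sections compute, this is roughly the per-vertex calculation (a conceptual sketch of standard ALFF, not XCP-D's exact code):

import numpy as np
from scipy.signal import periodogram

def alff(data, tr, low=0.01, high=0.08):
    # data: (n_vertices, n_volumes) array of denoised time series.
    freqs, power = periodogram(data, fs=1.0 / tr, axis=-1)
    band = (freqs >= low) & (freqs <= high)
    # ALFF is conventionally the mean amplitude (square root of
    # the power spectrum) within the 0.01-0.08 Hz band.
    return np.sqrt(power[..., band]).mean(axis=-1)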