Sharing BIDS Electrophysiological Derivatives on OpenNeuro

Hi, I want to share EEG data on OpenNeuro, but I can’t share the raw data because it is still being used for other, as yet unpublished, projects. In particular, I would like to share epochs (which are cropped to the time points of interest that I want to share). The problem is that the BIDS structure for Common Electrophysiological Derivatives is not established yet, so I don’t know whether it would be BIDS compliant.

Is it possible to do this on OpenNeuro, or should I look for another repository?

Cheers,


Hi @ezemikulan

Thank you for your question! This should still be possible to share on OpenNeuro. For data that is not specified yet, a .bidsignore can work (an example of the structure can be found here). The .bidsignore tells the validator to ignore those files during validation. Once the dataset is valid, it can be uploaded to OpenNeuro.
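For example, a minimal .bidsignore could look something like this (the patterns follow the same syntax as .gitignore; the file names below are purely illustrative and should be adapted to your own layout):

# ignore epoch files not yet covered by the specification
sub-*/eeg/*_epochs.npy
# ignore extra documentation folders
extra_documentation/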

@sappelhoff may be able to give more guidance on naming the epochs and passing validation.

Thank you for considering OpenNeuro for hosting your data!

Could you provide a mock-up of how your data structure currently looks? Something like what I paste at the bottom of this post (e.g., generated with the tree command line tool).

That’d make it easier to give advice.

The derivatives specification for electrophysiology data is still being developed - feel free to join in the effort! See here: https://docs.google.com/document/d/1PmcVs7vg7Th-cGC-UrX8rAhKUHIzOI-uIOh69_mvdlw

.
├── CHANGES
├── dataset_description.json
├── LICENSE
├── participants.json
├── participants.tsv
├── README
├── sourcedata
│   ├── sub-05
│   │   └── eeg
│   │       └── sub-05_task-matchingpennies_eeg.xdf
│   ├── sub-06
│   │   └── eeg
│   │       └── sub-06_task-matchingpennies_eeg.xdf
│   ├── sub-07
│   │   └── eeg
│   │       └── sub-07_task-matchingpennies_eeg.xdf
│   ├── sub-08
│   │   └── eeg
│   │       └── sub-08_task-matchingpennies_eeg.xdf
│   ├── sub-09
│   │   └── eeg
│   │       └── sub-09_task-matchingpennies_eeg.xdf
│   ├── sub-10
│   │   └── eeg
│   │       └── sub-10_task-matchingpennies_eeg.xdf
│   └── sub-11
│       └── eeg
│           └── sub-11_task-matchingpennies_eeg.xdf
├── stimuli
│   ├── left_hand.png
│   └── right_hand.png
├── sub-05
│   └── eeg
│       ├── sub-05_task-matchingpennies_channels.tsv
│       ├── sub-05_task-matchingpennies_eeg.eeg
│       ├── sub-05_task-matchingpennies_eeg.vhdr
│       ├── sub-05_task-matchingpennies_eeg.vmrk
│       └── sub-05_task-matchingpennies_events.tsv
├── sub-06
│   └── eeg
│       ├── sub-06_task-matchingpennies_channels.tsv
│       ├── sub-06_task-matchingpennies_eeg.eeg
│       ├── sub-06_task-matchingpennies_eeg.vhdr
│       ├── sub-06_task-matchingpennies_eeg.vmrk
│       └── sub-06_task-matchingpennies_events.tsv
├── sub-07
│   └── eeg
│       ├── sub-07_task-matchingpennies_channels.tsv
│       ├── sub-07_task-matchingpennies_eeg.eeg
│       ├── sub-07_task-matchingpennies_eeg.vhdr
│       ├── sub-07_task-matchingpennies_eeg.vmrk
│       └── sub-07_task-matchingpennies_events.tsv
├── sub-08
│   └── eeg
│       ├── sub-08_task-matchingpennies_channels.tsv
│       ├── sub-08_task-matchingpennies_eeg.eeg
│       ├── sub-08_task-matchingpennies_eeg.vhdr
│       ├── sub-08_task-matchingpennies_eeg.vmrk
│       └── sub-08_task-matchingpennies_events.tsv
├── sub-09
│   └── eeg
│       ├── sub-09_task-matchingpennies_channels.tsv
│       ├── sub-09_task-matchingpennies_eeg.eeg
│       ├── sub-09_task-matchingpennies_eeg.vhdr
│       ├── sub-09_task-matchingpennies_eeg.vmrk
│       └── sub-09_task-matchingpennies_events.tsv
├── sub-10
│   └── eeg
│       ├── sub-10_task-matchingpennies_channels.tsv
│       ├── sub-10_task-matchingpennies_eeg.eeg
│       ├── sub-10_task-matchingpennies_eeg.vhdr
│       ├── sub-10_task-matchingpennies_eeg.vmrk
│       └── sub-10_task-matchingpennies_events.tsv
├── sub-11
│   └── eeg
│       ├── sub-11_task-matchingpennies_channels.tsv
│       ├── sub-11_task-matchingpennies_eeg.eeg
│       ├── sub-11_task-matchingpennies_eeg.vhdr
│       ├── sub-11_task-matchingpennies_eeg.vmrk
│       └── sub-11_task-matchingpennies_events.tsv
├── task-matchingpennies_eeg.json
└── task-matchingpennies_events.json

Thanks for your responses. @sappelhoff: I don’t have the structure yet but I’ll prepare it and post it as soon as possible. Regarding the specification, I’ll be glad to contribute to the discussion.


Hi, the dataset is a bit unusual: it contains epochs of data recorded during intracranial stimulation. It includes EEG, MRIs (anonymized with two different methods), and the FreeSurfer and BEM surfaces obtained from the original MRIs. I thought of structuring the different stimulation sites as different runs, where the *_events.tsv file specifies the contact being stimulated (whose coordinates are defined in the ieeg *_electrodes.tsv files). I would also like to include the source space, but I didn’t see any guidelines for it in the BIDS extension documents; do you think that is possible? Additionally, there is no intracranial data, only the spatial information of the contacts, so the *_coordsystem.json and *_electrodes.tsv files will exist but the recordings will not. Might that be a problem?
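For illustration, a per-run *_events.tsv would look roughly like this (the stim_contact column is just a working name, not something defined in the specification; only onset, duration, and trial_type are standard columns):

onset     duration   trial_type    stim_contact
12.000    0.001      stimulation   A01
14.000    0.001      stimulation   A01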

This is what I had in mind:

├── dataset_description.json
├── derivatives
│   ├── epochs
│   │   └── sub-001
│   │       ├── anat
│   │       │   ├── sub-001_T1wdeface.nii.gz
│   │       │   └── sub-001_T1wmaskface.nii.gz
│   │       ├── eeg
│   │       │   ├── sub-001_task-taskname_coordsystem.json
│   │       │   ├── sub-001_task-taskname_electrodes.tsv
│   │       │   ├── sub-001_task-taskname_events.json
│   │       │   ├── sub-001_task-taskname_run-01_channels.tsv
│   │       │   ├── sub-001_task-taskname_run-01_eeg.json
│   │       │   ├── sub-001_task-taskname_run-01_eeg.npy
│   │       │   ├── sub-001_task-taskname_run-01_events.tsv
│   │       │   ├── sub-001_task-taskname_run-02_channels.tsv
│   │       │   ├── sub-001_task-taskname_run-02_eeg.json
│   │       │   ├── sub-001_task-taskname_run-02_eeg.npy
│   │       │   └── sub-001_task-taskname_run-02_events.tsv
│   │       ├── ieeg
│   │       │   ├── sub-001_task-taskname_space-MNI152NLin2009aSym_coordsystem.json
│   │       │   ├── sub-001_task-taskname_space-MNI152NLin2009aSym_electrodes.tsv
│   │       │   ├── sub-001_task-taskname_space-T1wdeface_coordsystem.json
│   │       │   ├── sub-001_task-taskname_space-T1wdeface_electrodes.tsv
│   │       │   ├── sub-001_task-taskname_space-T1wmaskface_coordsystem.json
│   │       │   └── sub-001_task-taskname_space-T1wmaskface_electrodes.tsv
│   │       └── sub-001_task-taskname_scans.tsv
│   └── sourcemodelling
│       └── sub-001
│           ├── anat
│           │   ├── sub-001_hemi-L_inflated.surf.gii
│           │   ├── sub-001_hemi-L_pial.surf.gii
│           │   ├── sub-001_hemi-R_inflated.surf.gii
│           │   ├── sub-001_hemi-R_pial.surf.gii
│           │   ├── sub-001_inner_skull.surf.gii
│           │   ├── sub-001_outer_skin.surf.gii
│           │   └── sub-001_outer_skull.surf.gii
│           └── xfm
│               ├── sub-001_task-taskname_from-head_to-surface.json
│               └── sub-001_task-taskname_from-head_to-surface.tfm
├── participants.json
├── participants.tsv
└── README
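For context, the *_eeg.npy files would be produced roughly like this (a minimal sketch assuming the epochs live in MNE-Python Epochs objects; the file names and most sidecar keys are only illustrative):

import json
import numpy as np
import mne

# load the epoched data (illustrative input file; the real pipeline may differ)
epochs = mne.read_epochs("sub-001_task-taskname_run-01-epo.fif")

# save the data array with shape (n_epochs, n_channels, n_times)
data = epochs.get_data()
np.save("sub-001_task-taskname_run-01_eeg.npy", data)

# minimal sidecar describing how to interpret the array
# (SamplingFrequency is a standard eeg.json field; the other keys are illustrative)
sidecar = {
    "SamplingFrequency": epochs.info["sfreq"],
    "EpochLengthSamples": int(data.shape[-1]),
    "ChannelCount": int(data.shape[1]),
}
with open("sub-001_task-taskname_run-01_eeg.json", "w") as f:
    json.dump(sidecar, f, indent=2)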

Thanks!

Thanks for the structure! This looks like a very interesting setup.

Does it pass the bids-validator? (See the note after the list below for how to run it from the command line.) Currently, the /derivatives subfolder is ignored during validation, so it could be that the general files at the root are enough to make the dataset valid:

  • dataset_description.json
  • README
  • participants.json
  • participants.tsv
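If you have not run it yet: the command line validator can be installed with npm install -g bids-validator and then run as bids-validator /path/to/your/dataset (the path is a placeholder). There is also a browser-based version at https://bids-standard.github.io/bids-validator/ that validates the dataset locally without uploading any data.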

Anyhow, the data looks very tidy and reasonably organized. It would be great to use this as a test case when developing BEP021 (linked above). Regarding a recommendation for OpenNeuro: if the data passes the BIDS validator, I think it would be fine to store it there, under the condition that you reformat and update the uploaded data once BEP021 has progressed.

The changes could then be documented in the BIDS CHANGES file.

I haven’t tried the validator yet, as these are only empty files that I created to set up the structure. I’ll prepare the data and test it with the validator. If everything is OK, I’ll upload it to OpenNeuro and update it once the specification is ready. I’d be glad if the data is used for the development of BEP021; the only issue is that I’ll have to upload it with restricted access until the corresponding article is accepted, but that should happen soon (hopefully). Thanks for all your help!

Hi, I tried the validator and I’m running into an issue. If I use the structure as I posted it, I get an error because there are no subjects at the root level:

Error 1: [Code 45] SUBJECT_FOLDERS
There are no subject folders (labeled “sub-*”) in the root of this dataset.

So I tried putting the anatomical images at the root level and the eeg and ieeg folders inside derivatives. But if I do that, I get other errors because the anonymized MRIs don’t conform to the naming convention (they are named sub-01_T1wdeface.nii and sub-01_T1wmaskface.nii), and therefore no valid data is found for the subjects:

Error 1: [Code 1] NOT_INCLUDED
Files with such naming scheme are not part of BIDS specification. This error is most commonly caused by typos in file names that make them not BIDS compatible. Please consult the specification and make sure your files are named correctly. If this is not a file naming issue (for example when including files not yet covered by the BIDS specification) you should include a “.bidsignore” file in your dataset (see https://github.com/bids-standard/bids-validator#bidsignore for details). Please note that derived (processed) data should be placed in /derivatives folder and source data (such as DICOMS or behavioural logs in proprietary formats) should be placed in the /sourcedata folder. (14 files)

Error 2: [Code 67] NO_VALID_DATA_FOUND_FOR_SUBJECT
No BIDS compatible data found for at least one subject.

I can solve the issue by providing only one anonymized MRI per subject and naming it according to the convention (sub-01_T1w.nii), but I think that providing the images from both anonymization methods is better, as in some cases the results can differ between them. Do you think that would be possible, or should I provide only one?

Thank you @ezemikulan for the results from running the validator!

From the structure provided in your previous post, it appears there are no subjects at the root level, only derivatives. Putting the anatomical images back into the root of the structure sounds good. To fix the defacemask error, please try sub-01_mod-T1w_defacemask.nii (found here in the specification). The anonymized version could be called sub-01_T1w.nii. To clarify: by “both”, do you mean providing the defaced image and the defacing mask? With this strategy you should be able to include both in your dataset.

Thank you,
Franklin

Thanks for your response @franklin. By “providing both” I mean that I have anonymized the images with two different anonymization methods: pydeface (https://github.com/poldracklab/pydeface) and face-masking (https://nrg.wustl.edu/software/face-masking/). Each of them yields a different surface reconstruction of the head and skull of the subjects, and this can have an impact on source localization results. That’s why I wanted to provide both images. Do you think that would be possible, or should I provide only one?
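(For reference, pydeface can be run directly on the NIfTI file, e.g. pydeface sub-01_T1w.nii.gz, which by default writes a *_defaced.nii.gz copy next to the input; the face-masking tool has its own separate workflow.)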

Thanks again!

Hi @ezemikulan

Thank you for your message and clarification. One file can be the recommended sub-01_mod-T1w_defacemask.nii while the other can be sub-01_mod-T1w_facemask.nii, but with this approach you would also add a JSON sidecar for each (same file name, but with a .json extension instead of .nii) to clarify how each image was constructed (pointing to the software, as you did in your message). The sub-01_mod-T1w_facemask.nii would be covered by .bidsignore.

I have included a sample .bidsignore file. I added a .txt extension to be able to upload it here, but in the dataset it would be named .bidsignore. The relevant part is the last line of the file, which shows how to set it up to ignore all of the sub-01_mod-T1w_facemask.nii files. I think this is a good use case for extending the specification to cover multiple defacing approaches for the same scan.

bidsignore_template.txt (153 Bytes)
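To give an idea (the attached template shows the exact form), that last line would be a glob pattern along these lines:

sub-*/anat/sub-*_mod-T1w_facemask.nii*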

Thanks for your reply @franklin. As I understood the paragraph regarding the *_defacemask.nii files, they are for storing the binary mask, not the anonymized MRI, but maybe I misunderstood it. It says:

If the structural images included in the dataset were defaced (to protect identity of participants) one CAN provide the binary mask that was used to remove facial features in the form of _defacemask files. In such cases the OPTIONAL mod-<label> key/value pair corresponds to modality label for eg: T1w, inplaneT1, referenced by a defacemask image. E.g., sub-01_mod-T1w_defacemask.nii.gz

If this is the case, maybe a solution could be to put the sub-01_T1w.nii (anonymized with pydeface) at the root level, and add the face-masked image to an anat folder inside the derivatives/epochs folder (which is already in .bidsignore). Once the specification is extended to allow multiple anonymized images, I can update the dataset.
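Something like this (simplified, only showing the anatomical files):

.
├── sub-01
│   └── anat
│       └── sub-01_T1w.nii (defaced with pydeface)
└── derivatives
    └── epochs
        └── sub-01
            └── anat
                └── sub-01_T1wmaskface.nii (ignored via .bidsignore)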

Do you think it might be a viable solution?

Cheers and thanks again for all your help!

Hi @ezemikulan

Thank you for your message. You are right in your thinking, thank you for detailing it. The _defacemask.nii naming should be used when the file is the binary mask that was used to remove facial features. That does not appear to be the case here, since these are the outputs of two different defacing algorithms. The anonymized MRI can be the sub-01_T1w.nii.

Your proposed solution sounds reasonable to me! It may also be useful to have a JSON sidecar next to the sub-01_T1w.nii detailing which defacing algorithm produced that file, for better reusability.
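For example, something along these lines (the key names are only a suggestion; they are not defined by the specification):

sub-01_T1w.json:
{
  "Defaced": true,
  "DefacingMethod": "pydeface (https://github.com/poldracklab/pydeface)"
}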

If you have trouble uploading to OpenNeuro, please let me know!

Thank you!
Franklin

Great, thanks @franklin and @sappelhoff for all your help. If I encounter any issues I’ll let you know.

Cheers!
