Our understanding of human cognition has been revolutionized by neuroimaging. In particular, an impressive literature describes neural correlates of many cognitive, behavioral, developmental, and pathological features. Still, a core question remains: how do we turn the information from individual studies into knowledge?
In cognitive neuroimaging, psychopathology, and neurodevelopment, key results have recently been questioned in light of a lack of reproducibility. Additionally, the typical temporal and spatial scales of neuroimaging are difficult to relate to current neurobiological knowledge. The challenge is hence to increase sample sizes and to bridge across spatial and temporal scales. Several approaches have become central to this endeavour:
- Data sharing through large international consortia, including an increasing number of modalities and species;
- Data mining and aggregation of results and conclusions from publication databases (a minimal query sketch follows this list);
- Standardisation efforts to facilitate re-use and subsequent analysis across publications;
- Deep phenotyping: the in-depth study of a limited number of individuals.
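As a minimal illustration of the data-mining item above, the sketch below queries PubMed through NCBI's public E-utilities endpoint for publications matching a free-text topic. The helper name, the example search term, and the result handling are illustrative assumptions, not part of any workshop pipeline.

```python
# Minimal sketch: topic-based query of a publication database (PubMed)
# via NCBI's public E-utilities esearch endpoint. The search term below is
# only an illustrative example of mining studies on a given cognitive topic.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def search_pubmed(term, max_results=20):
    """Return PubMed IDs of articles matching a free-text topic query."""
    params = urlencode({
        "db": "pubmed",
        "term": term,
        "retmax": max_results,
        "retmode": "json",
    })
    with urlopen(f"{ESEARCH}?{params}") as response:
        payload = json.load(response)
    return payload["esearchresult"]["idlist"]

if __name__ == "__main__":
    # Hypothetical topic: fMRI studies of working memory.
    pmids = search_pubmed('"working memory" AND fMRI')
    print(f"Found {len(pmids)} articles, e.g. PMIDs: {pmids[:5]}")
```

Retrieving identifiers in this way is straightforward; aggregating the results and conclusions behind them is the harder problem the workshop addresses.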
Still, how to leverage the information latent in the datasets produced by the approaches above remains a largely unanswered question. The increasing size of available data, along with the infrastructures – such as UK Biobank, the Allen Institute, OpenNeuro or PRIME-DE – conceived to facilitate access to these data, raises several important questions:
- How to find consistent descriptions of structural, functional, and behavioural variables across studies and species?
- How to find coherent descriptions of organization, pathology, and development across species?
- What should one expect from a meta-analysis: an association analysis or a predictive model? (A toy sketch follows this list.)
- Querying is often limited to topic-based approaches: how to perform more expressive queries of existing publication corpora? How to interweave publication corpora with structured knowledge curated in ontologies?
- Two paradigms are currently prevalent: deep phenotyping, which studies a few individuals in detail, and meta-analyses, which capture a few bits of information across many studies. What insights can one gain from each approach? How do they complement each other?
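To make the meta-analysis question above more concrete, the toy sketch below pools reported activation peaks into a simple association-style density map with a Gaussian kernel. All coordinates, the grid, and the kernel width are fabricated for illustration; they do not come from any real corpus, and a predictive model would instead use such pooled features to forecast outcomes of held-out studies.

```python
# Toy association-style meta-analysis: aggregate reported activation peaks
# from several studies into a smooth density map. All coordinates below are
# made up for illustration; a real analysis would draw them from a curated
# coordinate database.
import numpy as np

# Hypothetical MNI peak coordinates (mm) pooled across three imaginary studies.
peaks = np.array([
    [-42, 22, 28], [-40, 20, 30],   # "study 1"
    [-44, 24, 26], [38, 22, 30],    # "study 2"
    [-41, 21, 29],                  # "study 3"
], dtype=float)

def density_map(peaks, grid_spacing=4.0, fwhm=12.0):
    """Evaluate a Gaussian kernel density of the peaks on a coarse 3D grid."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    axes = [np.arange(-80.0, 81.0, grid_spacing)] * 3
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1)  # (nx, ny, nz, 3)
    diffs = grid[..., None, :] - peaks                           # (..., n_peaks, 3)
    sq_dist = np.sum(diffs ** 2, axis=-1)
    return np.exp(-sq_dist / (2.0 * sigma ** 2)).sum(axis=-1)

if __name__ == "__main__":
    dens = density_map(peaks)
    densest = np.unravel_index(np.argmax(dens), dens.shape)
    print("grid shape:", dens.shape, "| densest voxel index:", densest)
```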
AI approaches hold promise for scaling up this process by enabling the automatic inference of useful representations and summaries from the vast aggregated neuroimaging knowledge. Specifically, sound AI methods could provide interpretable explanations of the relationship between the human brain, cognition, development, and pathology, a task that requires harnessing the large amounts of heterogeneous, noisy data from publications and image repositories. Still, current AI-based methods cannot be readily used for this task. Adapting them to the specificities of neuroimaging data, such as image and signal structure, limited signal-to-noise ratio, and inconsistent annotations, is fundamental to successfully inferring the relationship between the structure and function of the human brain and cognition.
Building on the success of the previous edition, CogBases aims to gather experts from different fields of cognitive and comparative neuroscience, bioinformatics, and AI to discuss recent technological advances and open challenges in handling cognitive neuroscience knowledge. In doing so, CogBases will spark advances in the development and application of AI methods capable of handling large publication and imaging databases, and thereby advance our understanding of cognition, neuropathology, and neurodevelopment through neuroimaging.
As in the previous edition, the workshop will feature contributed posters and talks along with the scheduled speakers.
Registration is free but mandatory. Further information on speakers and registration: CogBases.