Introducing TaskBeacon: A Platform for Reproducible Cognitive Tasks

Hi everyone,

I'm not sure if this is the best place to post this, but it was the first that came to mind. I'd like to introduce TaskBeacon, which may be of interest to the NeuroStars community, especially those running cognitive tasks alongside neuroimaging.

TaskBeacon is an open, community-driven platform for building, sharing, and adapting standardized cognitive tasks.

It combines:

  • TAPS – a modular structure for organizing task components (think of it as a task-level BIDS)

  • PsyFlow – a lightweight Python framework built on PsychoPy

  • GitHub-based workflows – for versioning, collaboration, and task sharing

Thirteen tasks are already freely available, including stop-signal (SST), monetary incentive delay (MID), dot-probe, and resting-state paradigms; some have been tested in real EEG studies. I'm also hoping to grow TaskBeacon to host as many tasks as possible.

With TaskBeacon, you can not only browse and submit tasks but also connect the MCP server to your LLM to generate or localize tasks in natural language. Setup is a single config entry:

```json
{
  "name": "taskbeacon-mcp",
  "type": "stdio",
  "description": "Local FastMCP server for taskbeacon task operations. Uses uvx for automatic setup.",
  "isActive": true,
  "command": "uvx",
  "args": [
    "taskbeacon-mcp"
  ]
}
```
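As a purely hypothetical illustration (not part of TaskBeacon), the entry above could also be merged into a client's MCP configuration programmatically. The `mcpServers` key name and overall file layout vary by MCP client and are assumptions here:

```python
import json

# The server entry from the post, verbatim.
TASKBEACON_ENTRY = {
    "name": "taskbeacon-mcp",
    "type": "stdio",
    "description": "Local FastMCP server for taskbeacon task operations. Uses uvx for automatic setup.",
    "isActive": True,
    "command": "uvx",
    "args": ["taskbeacon-mcp"],
}

def add_server(config: dict, entry: dict) -> dict:
    """Insert or replace a server entry under a 'mcpServers' key.

    NOTE: the key name is an assumption; check your client's docs.
    """
    servers = config.setdefault("mcpServers", {})
    servers[entry["name"]] = entry
    return config

config = add_server({}, TASKBEACON_ENTRY)
print(json.dumps(config, indent=2))
```

In practice you would load your client's existing config file with `json.load`, pass it through `add_server`, and write it back.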

For the MCP server, I will next be working on visualization of task flowcharts.

Looking forward to your feedback and contributions!

:link: Link: https://taskbeacon.github.io/

Hi everyone,

A quick update on TaskBeacon since my last post.

The library has grown from 13 tasks to 35 task packages, and the platform is now more clearly organized around three layers: TAPS for task structure, PsyFlow for canonical local PsychoPy tasks, and psyflow-web for aligned browser-based task previews.

Another major change is that I'm no longer centering the project around the MCP server. The new direction is a skills-based workflow, including task-build for literature-grounded task construction and refactoring, task-plot for auditable task-flow visualization, and task-py2js for converting PsyFlow tasks into matching HTML companions.

PsyFlow itself has also matured into a stricter development loop with explicit run, QA, simulation, and validation stages. In practice, tasks now go through standard checks, psyflow-validate, QA, scripted simulation, and a sampler-based simulation layer before release. The goal is to make tasks easier to audit, localize, share, and keep aligned across local and web versions.

We have also started adding a hardware layer intended to support different devices during data collection. That part is still early and has not been fully tested yet, but the aim is to make task deployment more flexible across acquisition setups.

Looking ahead, an important next step is more extensive human review and pilot studies to validate these tasks more systematically in real research settings.

I am still refining the existing tasks and updating the website. Check it out: https://taskbeacon.github.io/
