Improving Web-Based 4D Neuroimaging Visualization: Seeking Feedback on Pain Points

Hello everyone,

I have recently been contributing to NiiVue to improve the handling of 4D time-series data in the browser. Specifically, I’ve been working on:

  1. Correcting Intensity Scaling for 4D Volumes: Ensuring that global min/max intensity calculations consider the entire time series (all frames) rather than just the first volume. This is crucial for modalities like PET or fMRI, where intensity varies significantly over time.
  2. Interaction Sensitivity: Adding a configurable drag sensitivity for window/level adjustments so that UI interactions feel consistent across different devices. (A rough sketch of both ideas follows this list.)
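To make those two points concrete, here is a minimal sketch of the logic involved. The names (`globalRange4D`, `applyDrag`, `sensitivity`) are hypothetical and not the actual NiiVue API; the sketch assumes the full 4D series is stored in one flat typed array.

```typescript
// Sketch only: hypothetical helpers, not the NiiVue implementation.

/**
 * Global intensity range across every frame of a 4D volume.
 * Assumes `data` holds the whole time series (nx*ny*nz*nt values),
 * so the range reflects all frames, not just the first volume.
 */
function globalRange4D(data: Float32Array): [number, number] {
  let min = Infinity;
  let max = -Infinity;
  for (let i = 0; i < data.length; i++) {
    const v = data[i];
    if (v < min) min = v;
    if (v > max) max = v;
  }
  return [min, max];
}

/**
 * Window/level update from a mouse-drag delta, scaled by a
 * configurable sensitivity factor (hypothetical parameter).
 */
function applyDrag(
  level: number,
  window: number,
  dx: number,
  dy: number,
  sensitivity = 1.0
): { level: number; window: number } {
  return {
    level: level + dy * sensitivity,
    // Keep the window width strictly positive.
    window: Math.max(1e-6, window + dx * sensitivity),
  };
}
```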

As more data-sharing platforms (OpenNeuro, Brainlife) come to rely on web-based viewers, I am interested in hearing about the community's experience with these tools.

My Question:
When viewing 4D datasets (fMRI, PET, DWI) in a browser compared to desktop tools (like FSLeyes or MRIcron), what are your biggest friction points?

  • Is it the initial loading speed of large 4D files?
  • Issues with intensity scaling/contrast across timepoints?
  • Lack of specific interaction controls (like syncing 3D/2D views)?

I am hoping this feedback will guide my future contributions to open-source visualization tools, so I would love to know which features or fixes would make your QA/viewing workflows easier.

Thanks!