concord-viz

Data preparation functions that convert Recordings and MetricResults into JSON-ready dicts for the browser

Role in the system

concord-viz is a pure transformation layer: it takes core containers and returns plain Python dicts (lists of numbers and strings) that the server can serialize to JSON and the browser's Plotly can render. It never modifies data and never writes files.

Package Structure

concord-viz/
  src/concord_viz/
    __init__.py       exports all public functions
    downsample.py     LTTB algorithm
    timeseries.py     get_timeseries
    spectral.py       get_psd, get_spectrogram
    metrics.py        get_line_length, get_hjorth, get_band_power

downsample.py — LTTB

lttb(times, values, n_out) → (times_ds, values_ds)

Largest Triangle Three Buckets — a perceptually-optimal downsampling algorithm. Reduces a time series from many thousands of points to a target n_out while preserving the visual shape as faithfully as possible.

Why not just take every Nth sample? Naive subsampling treats sharp peaks and flat regions equally, so a narrow spike can be skipped entirely. LTTB works in buckets, and within each bucket selects the point that forms the largest triangle with its neighbors, preserving extrema that uniform subsampling would miss.

Returns downsampled times_ds and values_ds arrays of length n_out.
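As a rough illustration, a minimal LTTB in NumPy might look like the following. This is a sketch, not the actual downsample.py implementation; the bucket-edge handling and variable names are illustrative:

```python
import numpy as np

def lttb(times, values, n_out):
    """Largest Triangle Three Buckets downsampling (sketch).

    Keeps the first and last samples, splits the interior into
    n_out - 2 buckets, and from each bucket keeps the point forming
    the largest triangle with the previously kept point and the
    centroid of the next bucket.
    """
    t = np.asarray(times, dtype=float)
    v = np.asarray(values, dtype=float)
    n = len(t)
    if n_out >= n or n_out < 3:
        return t, v
    # Bucket edges over the interior samples [1, n-1)
    edges = np.linspace(1, n - 1, n_out - 1).astype(int)
    idx = [0]   # always keep the first sample
    a = 0       # index of the previously kept point
    for i in range(n_out - 2):
        lo, hi = edges[i], edges[i + 1]
        if i + 2 < len(edges):   # centroid of the next bucket
            nt = t[edges[i + 1]:edges[i + 2]].mean()
            nv = v[edges[i + 1]:edges[i + 2]].mean()
        else:                    # last bucket: "next" is the final sample
            nt, nv = t[-1], v[-1]
        # Twice the triangle area for every candidate in this bucket
        area = np.abs((t[a] - nt) * (v[lo:hi] - v[a])
                      - (t[a] - t[lo:hi]) * (nv - v[a]))
        a = lo + int(np.argmax(area))
        idx.append(a)
    idx.append(n - 1)            # always keep the last sample
    return t[idx], v[idx]
```

A lone spike survives: downsampling a 100-point flat signal with a single spike to 10 points still keeps the spike, because it dominates the triangle area in its bucket.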

timeseries.py

get_timeseries(recording, t_start=None, t_end=None, channels=None, max_points=2000) → dict

Extracts a time window from a Recording, optionally subsets channels, and downsamples to at most max_points using LTTB.

Parameter     Description
t_start       Start time in seconds. None = beginning of recording.
t_end         End time in seconds. None = end of recording.
channels      List of channel names to include. None = all channels.
max_points    Maximum number of time points per channel after downsampling. Default 2000.

Returns:

{
  "channels": ["SEEG1", "SEEG2", ...],
  "times":    [0.0, 0.0005, ...],   # shared downsampled time axis
  "values":   [[...], [...], ...],  # one list per channel
  "events":   [{"onset": 45.2, "duration": 60.0, "label": "seizure"}, ...]
}

Shared time axis: LTTB is run on the first channel to determine which time points to keep. The same indices are applied to all other channels, so all channels share one time array — efficient for Plotly rendering.
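The shared-axis idea can be sketched as below. Note this uses a simplified per-bucket-extremum selector as a stand-in for LTTB, and the helper names (select_indices, timeseries_payload) are hypothetical, not part of the package:

```python
import numpy as np

def select_indices(values_row, n_out):
    # Stand-in for LTTB index selection: keep the largest-magnitude
    # sample in each of n_out equal buckets. The real downsample.py
    # would pick perceptually optimal points instead.
    edges = np.linspace(0, len(values_row), n_out + 1).astype(int)
    return np.array([lo + int(np.argmax(np.abs(values_row[lo:hi])))
                     for lo, hi in zip(edges[:-1], edges[1:])])

def timeseries_payload(channel_names, times, values, n_out=2000):
    """values: array of shape (n_channels, n_samples)."""
    # Select indices on the first channel only, then reuse them for
    # every channel so all channels share one time axis.
    idx = select_indices(values[0], min(n_out, values.shape[1]))
    return {
        "channels": list(channel_names),
        "times": times[idx].tolist(),
        "values": [row[idx].tolist() for row in values],
    }
```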

spectral.py

get_psd(recording, channels=None, window_s=4.0, fmin=1.0, fmax=150.0) → dict

Computes Welch PSD for specified channels and returns JSON-ready dict.

Returns:

{
  "channels": ["SEEG1", ...],
  "freqs":    [1.0, 1.25, 1.5, ...],   # Hz
  "power":    [[...], [...], ...]        # V^2/Hz, one list per channel
}
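Under the hood this amounts to calling scipy.signal.welch and trimming to the requested band. A sketch, assuming a raw (n_channels, n_samples) array and sample rate as input (the real get_psd takes a Recording; the function name here is illustrative):

```python
import numpy as np
from scipy.signal import welch

def psd_payload(data, fs, channel_names, window_s=4.0, fmin=1.0, fmax=150.0):
    """data: array of shape (n_channels, n_samples)."""
    nperseg = int(window_s * fs)
    # Welch PSD along the time axis for all channels at once
    freqs, power = welch(data, fs=fs, nperseg=nperseg, axis=-1)
    keep = (freqs >= fmin) & (freqs <= fmax)
    return {
        "channels": list(channel_names),
        "freqs": freqs[keep].tolist(),
        "power": power[:, keep].tolist(),
    }
```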

get_spectrogram(recording, channel, window_s=1.0, fmin=1.0, fmax=150.0) → dict

Computes a Short-Time Fourier Transform (STFT) spectrogram for a single channel using scipy.signal.spectrogram.

Returns:

{
  "times":  [0.5, 1.5, ...],   # window center times in seconds
  "freqs":  [1.0, 2.0, ...],   # Hz
  "power":  [[...], ...]        # dB (10 * log10(V^2/Hz)), shape [n_freqs, n_times]
}

Power is in dB (decibels) because the human visual system perceives power logarithmically — dB scale makes both subtle low-power features and large bursts visible simultaneously.
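The computation can be sketched as follows, again assuming raw samples and a sample rate rather than a Recording (the function name and the log floor are illustrative assumptions):

```python
import numpy as np
from scipy.signal import spectrogram

def spectrogram_payload(signal, fs, window_s=1.0, fmin=1.0, fmax=150.0):
    nperseg = int(window_s * fs)
    # sxx has shape (n_freqs, n_times)
    freqs, times, sxx = spectrogram(signal, fs=fs, nperseg=nperseg)
    keep = (freqs >= fmin) & (freqs <= fmax)
    # dB conversion; the floor avoids log10(0) on silent windows
    power_db = 10.0 * np.log10(np.maximum(sxx[keep], 1e-20))
    return {
        "times": times.tolist(),
        "freqs": freqs[keep].tolist(),
        "power": power_db.tolist(),
    }
```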

metrics.py

get_line_length(recording, window_s=1.0) → dict

Runs LineLength(window_s) and converts the result.

{
  "channels": ["SEEG1", ...],
  "times":    [0.5, 1.5, ...],  # window center times
  "values":   [[...], ...]       # (n_channels, n_windows)
}

get_hjorth(recording, window_s=1.0) → dict

Runs HjorthParameters(window_s) and converts the result.

{
  "channels": ["SEEG1", ...],
  "times":    [0.5, 1.5, ...],
  "params":   ["activity", "mobility", "complexity"],
  "values":   [[[a,m,c], ...], ...]  # (n_channels, n_windows, 3)
}
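Hjorth's three parameters have standard definitions: activity is the signal variance, mobility is sqrt(var(x') / var(x)), and complexity is mobility(x') / mobility(x). A single-window sketch of the math (the actual metric applies this per sliding window; the function name is illustrative):

```python
import numpy as np

def hjorth(x):
    """Hjorth parameters for one window (standard definitions)."""
    dx = np.diff(x)    # first derivative (discrete)
    ddx = np.diff(dx)  # second derivative
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / np.var(x))
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity
```

For a pure sine wave, complexity is close to 1 (the derivative of a sine is another sine), which is a handy sanity check.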

get_band_power(recording, window_s=4.0) → dict

Runs BandPower(window_s) and converts the result.

{
  "channels": ["SEEG1", ...],
  "bands":    ["delta", "theta", "alpha", "beta", "gamma", "high_gamma"],
  "values":   [[...], ...]   # (n_channels, n_bands)
}

Design Notes

Why separate viz from the server?

The viz functions are pure Python (no HTTP, no state). This means you can use them in a Jupyter notebook, a script, or any other context — not just the web server. For example, to render a quick PSD plot in a notebook:

import plotly.graph_objects as go
from concord_viz import get_psd

data = get_psd(recording, channels=["SEEG1", "SEEG2"])
fig = go.Figure()
for i, ch in enumerate(data["channels"]):
    fig.add_trace(go.Scatter(x=data["freqs"], y=data["power"][i], name=ch))
fig.show()

Why return dicts instead of numpy arrays?

JSON serialization requires plain Python lists. The viz layer does this conversion once, centrally. The server just calls json.dumps(result) — no additional processing needed.
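The conversion itself is numpy's ndarray.tolist(), which turns an array, including its float64/int64 elements, into nested plain-Python lists in one call:

```python
import json
import numpy as np

# np.ndarray is not JSON-serializable; .tolist() converts the array
# and its NumPy scalar elements to nested plain-Python lists.
arr = np.arange(6, dtype=np.float64).reshape(2, 3)
payload = {"values": arr.tolist()}
serialized = json.dumps(payload)  # works; json.dumps on the raw array raises TypeError
```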