concord-viz
Data preparation functions that convert Recordings and MetricResults into JSON-ready dicts for the browser
Package Structure
```
concord-viz/
  src/concord_viz/
    __init__.py    # exports all public functions
    downsample.py  # LTTB algorithm
    timeseries.py  # get_timeseries
    spectral.py    # get_psd, get_spectrogram
    metrics.py     # get_line_length, get_hjorth, get_band_power
```
downsample.py — LTTB
Largest Triangle Three Buckets — a perceptually-optimal downsampling algorithm.
Reduces a time series from many thousands of points to a target n_out while
preserving the visual shape as faithfully as possible.
Why not just take every Nth sample? Naive subsampling can skip right over sharp peaks and brief transients. LTTB instead divides the series into buckets and, within each bucket, selects the point that forms the largest triangle with the previously selected point and the next bucket's average — preserving extrema that uniform striding would miss.
Returns downsampled times_ds and values_ds arrays of length n_out.
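A minimal sketch of the bucket-and-triangle selection (the package's own `downsample.py` may handle bucket edges differently):

```python
import numpy as np

def lttb(times, values, n_out):
    """Largest Triangle Three Buckets (sketch).

    Keeps the first and last points, splits the interior into n_out - 2
    buckets, and from each bucket keeps the point forming the largest
    triangle with the previously kept point and the next bucket's mean.
    """
    n = len(times)
    if n_out >= n or n_out < 3:
        return times, values
    edges = np.linspace(1, n - 1, n_out - 1).astype(int)  # interior bucket boundaries
    idx = [0]  # always keep the first point
    for i in range(n_out - 2):
        lo, hi = edges[i], edges[i + 1]
        if i + 2 < n_out - 1:  # mean of the *next* bucket is the third vertex
            nt = times[edges[i + 1]:edges[i + 2]].mean()
            nv = values[edges[i + 1]:edges[i + 2]].mean()
        else:                  # last bucket: use the final point instead
            nt, nv = times[-1], values[-1]
        at, av = times[idx[-1]], values[idx[-1]]
        # 2x the triangle area for every candidate point in this bucket
        areas = np.abs((at - nt) * (values[lo:hi] - av)
                       - (at - times[lo:hi]) * (nv - av))
        idx.append(lo + int(np.argmax(areas)))
    idx.append(n - 1)  # always keep the last point
    return times[idx], values[idx]
```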
timeseries.py
get_timeseries extracts a time window from a Recording, optionally subsets channels, and downsamples to at most max_points using LTTB.
| Parameter | Description |
|---|---|
| t_start | Start time in seconds. None = beginning of recording. |
| t_end | End time in seconds. None = end of recording. |
| channels | List of channel names to include. None = all channels. |
| max_points | Maximum number of time points per channel after downsampling. Default 2000. |
Returns:
```
{
  "channels": ["SEEG1", "SEEG2", ...],
  "times": [0.0, 0.0005, ...],     # shared downsampled time axis
  "values": [[...], [...], ...],   # one list per channel
  "events": [{"onset": 45.2, "duration": 60.0, "label": "seizure"}, ...]
}
```
Shared time axis: LTTB is run on the first channel to determine which time points to keep. The same indices are applied to all other channels, so all channels share one time array — efficient for Plotly rendering.
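The shared-axis pattern can be sketched as follows; the uniform index selection here is a stand-in for the real LTTB pass on channel 0 (which downsample.py supplies), since the point being illustrated is only the reuse of one index set across channels:

```python
import numpy as np

def downsample_shared_axis(times, values, n_out):
    """Shared-time-axis downsampling across channels (sketch).

    `values` is (n_channels, n_samples). The indices kept for channel 0
    are reused for every channel, so all channels share one time array.
    """
    # Stand-in for LTTB index selection on values[0].
    keep = np.linspace(0, len(times) - 1, n_out).astype(int)
    return times[keep], values[:, keep]

times = np.linspace(0, 10, 20_001)  # 20 001 samples over 10 s
values = np.vstack([np.sin(times), np.cos(times)])
t_ds, v_ds = downsample_shared_axis(times, values, 2000)
```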
spectral.py
get_psd computes the Welch PSD for the specified channels and returns a JSON-ready dict.
Returns:
```
{
  "channels": ["SEEG1", ...],
  "freqs": [1.0, 1.25, 1.5, ...],  # Hz
  "power": [[...], [...], ...]     # V^2/Hz, one list per channel
}
```
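A sketch of the conversion, assuming a plain (n_channels, n_samples) array plus sampling rate in place of the real Recording argument (the standalone signature is an assumption):

```python
import numpy as np
from scipy.signal import welch

def psd_dict(data, sfreq, channel_names, nperseg=1024):
    """Welch PSD as a JSON-ready dict (sketch)."""
    # welch with axis=-1 handles all channels in one call.
    freqs, power = welch(data, fs=sfreq, nperseg=nperseg, axis=-1)
    return {
        "channels": list(channel_names),
        "freqs": freqs.tolist(),
        "power": power.tolist(),  # V^2/Hz, one list per channel
    }
```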
get_spectrogram computes a Short-Time Fourier Transform (STFT) spectrogram for a single channel using scipy.signal.spectrogram.
Returns:
```
{
  "times": [0.5, 1.5, ...],  # window center times in seconds
  "freqs": [1.0, 2.0, ...],  # Hz
  "power": [[...], ...]      # dB (10 * log10(V^2/Hz)), shape [n_freqs, n_times]
}
```
Power is in dB (decibels) because the human visual system perceives power logarithmically — dB scale makes both subtle low-power features and large bursts visible simultaneously.
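A sketch using scipy.signal.spectrogram directly; the standalone signature and the 1e-12 floor before the log are assumptions:

```python
import numpy as np
from scipy.signal import spectrogram

def spectrogram_db(signal, sfreq, nperseg=256):
    """Single-channel STFT spectrogram in dB (sketch)."""
    freqs, times, power = spectrogram(signal, fs=sfreq, nperseg=nperseg)
    power_db = 10 * np.log10(power + 1e-12)  # small floor avoids log10(0)
    return {
        "times": times.tolist(),     # window centers, seconds
        "freqs": freqs.tolist(),     # Hz
        "power": power_db.tolist(),  # dB, shape [n_freqs, n_times]
    }
```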
metrics.py
get_line_length runs LineLength(window_s) and converts the result.
```
{
  "channels": ["SEEG1", ...],
  "times": [0.5, 1.5, ...],  # window center times
  "values": [[...], ...]     # (n_channels, n_windows)
}
```
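For reference, windowed line length is the sum of absolute sample-to-sample differences within each window. A standalone sketch on a plain array (the real code delegates to LineLength(window_s) on a Recording, so this signature is an assumption):

```python
import numpy as np

def line_length_dict(data, sfreq, channel_names, window_s):
    """Windowed line length: sum of |x[t+1] - x[t]| per window (sketch)."""
    win = int(window_s * sfreq)
    n_windows = data.shape[1] // win
    # Trim to whole windows, then diff *within* each window.
    windows = data[:, : n_windows * win].reshape(len(data), n_windows, win)
    values = np.abs(np.diff(windows, axis=2)).sum(axis=2)  # (n_channels, n_windows)
    times = (np.arange(n_windows) + 0.5) * window_s        # window centers
    return {"channels": list(channel_names),
            "times": times.tolist(),
            "values": values.tolist()}
```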
get_hjorth runs HjorthParameters(window_s) and converts the result.
```
{
  "channels": ["SEEG1", ...],
  "times": [0.5, 1.5, ...],
  "params": ["activity", "mobility", "complexity"],
  "values": [[[a,m,c], ...], ...]  # (n_channels, n_windows, 3)
}
```
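The three Hjorth parameters can be computed per window as follows (a standalone sketch; the real code delegates to HjorthParameters):

```python
import numpy as np

def hjorth_params(x):
    """Hjorth activity, mobility, complexity for one window (sketch).

    activity   = var(x)
    mobility   = sqrt(var(x') / var(x))
    complexity = mobility(x') / mobility(x)
    """
    dx = np.diff(x)    # first difference approximates x'
    ddx = np.diff(dx)  # second difference approximates x''
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / np.var(x))
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity
```

For a pure sinusoid the derivative is a sinusoid at the same frequency, so complexity is close to 1; noisier, more broadband signals score higher.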
get_band_power runs BandPower(window_s) and converts the result.
```
{
  "channels": ["SEEG1", ...],
  "bands": ["delta", "theta", "alpha", "beta", "gamma", "high_gamma"],
  "values": [[...], ...]  # (n_channels, n_bands)
}
```
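Band power is typically a PSD integrated over each band. A sketch with assumed band edges (the package's own definitions may differ):

```python
import numpy as np

# Canonical band edges in Hz (assumed; not taken from the package).
BANDS = {
    "delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
    "beta": (13, 30), "gamma": (30, 80), "high_gamma": (80, 150),
}

def band_power_from_psd(freqs, psd):
    """Integrate one channel's PSD over each band, rectangle rule (sketch).

    Assumes `freqs` is evenly spaced.
    """
    df = freqs[1] - freqs[0]
    return {name: float(psd[(freqs >= lo) & (freqs < hi)].sum() * df)
            for name, (lo, hi) in BANDS.items()}
```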
Design Notes
Why separate viz from the server?
The viz functions are pure Python (no HTTP, no state). This means you can use them in a Jupyter notebook, a script, or any other context — not just the web server. For example, to render a quick PSD plot in a notebook:
```python
import plotly.graph_objects as go
from concord_viz import get_psd

data = get_psd(recording, channels=["SEEG1", "SEEG2"])
fig = go.Figure()
for i, ch in enumerate(data["channels"]):
    fig.add_trace(go.Scatter(x=data["freqs"], y=data["power"][i], name=ch))
fig.show()
```
Why return dicts instead of numpy arrays?
JSON serialization requires plain Python lists. The viz layer does this conversion once,
centrally. The server just calls json.dumps(result) — no additional processing needed.
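The conversion boils down to numpy's `.tolist()` at the boundary:

```python
import json
import numpy as np

# numpy arrays (and numpy scalars) are not JSON-serializable, so the viz
# layer converts them to plain Python lists/floats once.
power = np.linspace(0.0, 1.0, 5)
payload = {"freqs": power.tolist()}  # plain Python floats
encoded = json.dumps(payload)        # the server can do this directly

# json.dumps({"freqs": power}) would instead raise
# TypeError: Object of type ndarray is not JSON serializable
```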