Linux DJ

Latency Graph: Temporal Visualization of Linux Audio Latency

Latency Graph is a visualization tool for Linux audio system latency traces. Where a histogram tool like HDRBench summarizes the statistical distribution of latency values across an entire measurement run, latency-graph plots those values against time, showing you how latency behaved moment to moment throughout the trace. This temporal perspective is what makes latency-graph indispensable for diagnostic work: it transforms a pile of numbers into a picture of when things went wrong, at what intervals, and whether the pattern is consistent or erratic. Below I cover what the tool does in detail, which systems and test configurations it is most useful for comparing, how to read the output it produces, and how to integrate it into a complete measurement workflow. For the overview of all measurement tools on this site, see the benchmarks hub.

The intuition behind temporal latency visualization is simple but powerful. Two systems might have nearly identical statistical distributions -- the same median, the same 99th percentile -- but completely different temporal patterns. One system might have high-latency events distributed randomly through the trace at low frequency. Another might have its high-latency events clustered at regular intervals that correspond precisely to a background process running on a timer. Those two systems require different interventions, and only temporal visualization makes the distinction obvious. Statistics tell you what happened; the time-series plot tells you when and why.

What the Tool Does

Latency-graph reads latency trace data in the format produced by cyclictest or compatible kernel timing tools and generates a time-series plot of the measured latency values. Each point in the output graph represents a single latency measurement, placed on the horizontal time axis at the moment it was captured and on the vertical latency axis at its measured value in microseconds. The result is a scatter plot or connected line graph -- depending on the trace density and the rendering options chosen -- that shows the full temporal behavior of the system under test.
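As a sketch of what reading such a trace involves -- assuming the per-sample format that cyclictest's verbose (-v) mode commonly emits, task:count:latency_us, which can vary between builds -- a minimal parser might look like this. The function name and the interval-based time reconstruction are illustrative, not latency-graph's actual code:

```python
import re

def parse_cyclictest_verbose(lines, interval_us=1000):
    """Parse cyclictest -v sample lines of the assumed form 'task:count:latency_us'.

    Returns (time_s, latency_us) pairs. The time axis is reconstructed
    from the loop count and the measurement interval, because -v output
    carries no timestamps of its own.
    """
    samples = []
    pat = re.compile(r"^\s*(\d+):\s*(\d+):\s*(\d+)\s*$")
    for line in lines:
        m = pat.match(line)
        if m:
            _task, count, latency = map(int, m.groups())
            samples.append((count * interval_us / 1e6, latency))
    return samples

demo = ["   0:     0:   12", "   0:     1:    9", "   0:     2:   57"]
print(parse_cyclictest_verbose(demo))  # [(0.0, 12), (0.001, 9), (0.002, 57)]
```

Each resulting pair is one point in the scatter plot: time along the horizontal axis, latency in microseconds along the vertical.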

Beyond the basic time series, latency-graph can produce several derived visualizations that are useful for specific diagnostic questions. A windowed maximum trace, which plots the worst latency value observed in each short time window rather than every individual sample, makes periodic spikes more visible against the background of normal variation. A rolling percentile trace highlights when the distribution is shifting over time -- useful for catching thermal throttling, DVFS (Dynamic Voltage and Frequency Scaling) transitions, or workload-driven changes in system behavior. These derived views complement the raw scatter data rather than replacing it, because the raw data is what you need when you are trying to identify a specific event that caused a specific spike.
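Both derived views are conceptually simple. The following is a simplified illustration of the two computations -- a sketch under assumed names, not latency-graph's implementation -- operating on (time_s, latency_us) pairs:

```python
def windowed_max(samples, window_s=1.0):
    """Worst latency observed in each fixed time window.

    Returns (window_start_s, max_latency_us) pairs; periodic spikes
    stand out here even when raw scatter would bury them.
    """
    buckets = {}
    for t, v in samples:
        w = int(t // window_s)
        buckets[w] = max(buckets.get(w, 0), v)
    return [(w * window_s, m) for w, m in sorted(buckets.items())]

def rolling_percentile(values, q=0.99, window=1000):
    """Crude rolling percentile over a sliding window of samples.

    A drifting result flags a distribution that is shifting over time
    (thermal throttling, DVFS transitions, workload changes).
    """
    out = []
    for i in range(len(values)):
        chunk = sorted(values[max(0, i - window + 1):i + 1])
        out.append(chunk[int(q * (len(chunk) - 1))])
    return out
```

A real implementation would stream rather than re-sort each window, but the derived quantities are exactly these: a per-window maximum and a percentile that moves with the trace.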

Which Systems and Tests It Helps Compare

Latency-graph is most useful in comparative work: you have two or more traces from different conditions and you need to understand how those conditions differ in their real-time behavior. The most common comparison is between a stock kernel and a PREEMPT_RT kernel on the same hardware. In a typical result, the PREEMPT_RT trace shows a much tighter cluster of low-latency values with fewer and smaller outliers, while the stock kernel trace shows a broader scatter with periodic large spikes corresponding to kernel sections that cannot be preempted in the standard configuration. The temporal visualization makes this difference immediately legible in a way that a comparison of summary statistics does not.

Hardware comparisons benefit equally from temporal visualization. Different audio interfaces have different interrupt latency profiles, and those profiles often have temporal structure that statistics obscure. A FireWire interface that polls at 8 kHz introduces a characteristic ripple into the latency trace that is invisible in a histogram but obvious in a time series. USB audio interfaces often show latency spikes at the USB polling interval, which again is a temporal pattern rather than a statistical one. When evaluating hardware for audio work, latency-graph frequently reveals problems that HDRBench alone would not flag as severe. For the community-accumulated knowledge about specific hardware latency profiles, the LAD latency resources page documents findings from years of testing across a wide range of hardware configurations.

Configuration comparisons -- IRQ affinity settings, CPU isolation, scheduler tuning, power management state -- also benefit from temporal analysis. A configuration change might not shift the overall distribution in a statistically significant way but still produce a meaningful change in the temporal pattern of latency spikes. Eliminating the periodic clustering of spikes, even without changing the peak spike value, can be the difference between a system that produces occasional audible artifacts and one that runs cleanly in a live performance context.

How to Read the Output

A latency-graph output plot has time on the horizontal axis (in seconds from the start of the trace) and latency in microseconds on the vertical axis. For audio work, the vertical axis is often displayed on a log scale for the same reason HDRBench uses log buckets: the difference between 10 and 20 microseconds is as significant in practice as the difference between 1 and 2 milliseconds, and a linear scale would compress the interesting low-latency region into a narrow band at the bottom of the plot.

Reading the output is largely a matter of pattern recognition developed through experience with these plots. A clean system produces a dense horizontal band of points near the bottom of the plot with occasional points scattered above it. The floor of that band is the system's baseline latency -- its best-case performance under the current configuration. The height of the scattered points above the floor tells you about worst-case behavior. If the scattered points form a thin vertical stripe at a consistent time interval, you have a periodic event causing the spikes: look for that interval in your list of background processes, kernel timers, and device drivers. If the scattered points are distributed randomly, the cause is non-periodic and likely comes from contention for a shared resource.
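Checking for periodicity by eye works, but the intervals can also be pulled out numerically. A hypothetical helper (not part of the tool) that extracts spike timestamps from (time_s, latency_us) pairs and reports the gaps between them:

```python
def spike_intervals(samples, threshold_us):
    """Timestamps of samples above threshold, and the gaps between them.

    Near-identical gaps point at a periodic culprit such as a
    timer-driven daemon or a polling driver; widely scattered gaps
    suggest contention instead.
    """
    times = [t for t, v in samples if v > threshold_us]
    gaps = [round(b - a, 6) for a, b in zip(times, times[1:])]
    return times, gaps
```

If the gap list comes back as something like [2.0, 2.0, 2.0, ...], you have a two-second timer somewhere in the system to hunt down.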

A rising floor -- where the baseline of the dense band shifts upward as the trace progresses -- indicates thermal throttling or a workload that is accumulating system state over time. A floor that rises and falls periodically suggests DVFS activity: the CPU is changing frequency in response to workload, and the latency changes with it. Both patterns are worth investigating because they affect the consistency of audio system performance over the duration of a session rather than just its initial behavior. For the theoretical framework underlying these interpretations, the latency hub covers what scheduling latency means, why consistency matters as much as magnitude, and what kernel and system configurations address the most common failure modes.
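The floor itself can be tracked numerically rather than judged by eye. As an illustrative sketch (assumed names, not the tool's code), take a low percentile of each time window and watch how it moves across the trace:

```python
def floor_trend(samples, window_s=10.0, q=0.05):
    """Low percentile of latency per time window: tracks the baseline.

    A steadily increasing sequence suggests thermal throttling or
    accumulating system state; a periodic rise and fall suggests
    DVFS activity.
    """
    buckets = {}
    for t, v in samples:
        buckets.setdefault(int(t // window_s), []).append(v)
    trend = []
    for w in sorted(buckets):
        vals = sorted(buckets[w])
        trend.append((w * window_s, vals[int(q * (len(vals) - 1))]))
    return trend
```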

Download Latency Graph

Latency-graph is distributed as a source tarball. It requires a C build environment and gnuplot for rendering graph output. The build process is straightforward: extract, check the Makefile for any environment-specific paths, and compile. The resulting tool accepts trace data as a file argument or on standard input and produces gnuplot-compatible data files along with a default plot script.
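The distributed plot script is not reproduced here, but a minimal gnuplot script of the kind the tool might emit -- the data filename and column layout are assumptions -- looks something like this, including the log-scale vertical axis discussed above:

```gnuplot
# Hypothetical plot script; "trace.dat" and its columns are assumptions.
set logscale y
set xlabel "time (s)"
set ylabel "latency (us)"
plot "trace.dat" using 1:2 with dots notitle
```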

Latency Graph 0.1 -- Source Distribution
latency-graph-0.1.tar.gz
Requires C toolchain and gnuplot. Accepts cyclictest trace data and produces time-series latency plots.

Workflow: From Trace to Graph

A complete measurement and visualization workflow starts with collecting the trace. Run cyclictest as root with real-time priority, specifying the measurement interval, priority, and duration that match your intended use case. For a system on which you plan to run JACK at 64 samples at 48 kHz, run cyclictest with a loop interval of 1000 microseconds (1 kHz) to approximate the audio thread's scheduling frequency. Collect for at least 30 minutes with the system under representative load. Save the output to a file.

Once you have the trace file, run it through both HDRBench and latency-graph. HDRBench gives you the distribution summary: the peak, the tail shape, any secondary bumps. Latency-graph gives you the temporal picture: when the outliers happened and whether they have a pattern. Together these two views give you a complete characterization of the measurement run. If HDRBench shows a clean distribution with a good tail, and latency-graph shows that the few outliers are randomly distributed with no temporal structure, you have a genuinely well-performing system. If either tool shows a problem, you have a diagnostic starting point.

Latency Graph in the Measurement Stack

Latency-graph and HDRBench address different aspects of the same measurement data and are most useful together. If I had to choose one for initial diagnosis, I would choose latency-graph, because temporal patterns are almost always the faster path to identifying the cause of a latency problem. Once I have a hypothesis about the cause, HDRBench gives me the statistical confirmation that the fix actually moved the distribution in the right direction.

For the full context of what latency measurement means for Linux audio work, the latency hub is the place to start. For the community knowledge base around specific hardware and kernel configurations, the LAD latency resources page documents findings that complement what these tools measure. And for the overview of all measurement resources available on this site, the benchmarks hub provides the full picture.