We built a synthetic city, broke a road, and measured how quickly two independent measurement systems, checked against each other, could detect what went wrong. Then we pitted that against the standard approach. Here's what happened.
We simulated one hour of traffic on a 36-edge grid network using SUMO (Simulation of Urban MObility). At minute 20, we blocked a road. The question: how quickly can you detect that something went wrong, and what can you learn about it?
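To make the setup concrete, here is a minimal toy stand-in for that run, not SUMO output: per-edge vehicle counts over 60 simulated minutes, with a queue building on one edge after minute 20. The edge names, traffic rates, and queue model are all illustrative assumptions.

```python
import random

random.seed(42)

# Toy stand-in for the SUMO run: per-edge vehicle counts over 60 minutes.
# Edge names, baseline rates, and the queue model are illustrative.
EDGES = ["A1B1", "B1C1", "C1D1", "B1B2", "C1C2", "D1D2"]
BLOCKED_EDGE, BLOCK_START = "B1C1", 20   # road blocked at minute 20

def ground_truth(minutes=60):
    """Exact per-edge counts: steady background flow, plus a growing
    queue on the blocked edge once the block starts."""
    counts = {e: [] for e in EDGES}
    queue = 0
    for t in range(minutes):
        for e in EDGES:
            n = random.randint(8, 12)            # normal traffic level
            if e == BLOCKED_EDGE and t >= BLOCK_START:
                queue += random.randint(3, 5)    # vehicles pile up behind the block
                n += queue
            counts[e].append(n)
    return counts

truth = ground_truth()
```

In the real experiment these counts come from the simulator itself; the point here is only the shape of the data, a count per edge per minute, that both detection approaches consume.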
- **Observer 1 (ground truth):** perfect knowledge. Every vehicle on every road segment, counted exactly. This is what actually happened, but in reality you never have it.
- **Observer 2 (sensor network):** imperfect and realistic. Only 70% of roads have sensors, each sensor adds ~15% noise, and 5% of readings randomly drop out. This is what you actually get.
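The sensor-side degradation can be sketched directly from those three numbers. The function name and the choice of multiplicative Gaussian noise are assumptions for illustration; only the 70% / ~15% / 5% figures come from the text.

```python
import random

random.seed(7)

def sensor_view(exact, coverage=0.7, noise=0.15, dropout=0.05):
    """Degrade exact per-edge counts into what the sensor network reports.

    Returns {edge: [reading or None, ...]}, where None marks an
    uninstrumented edge or a dropped reading. Parameters mirror the text:
    70% of edges sensed, ~15% multiplicative noise, 5% random dropout.
    """
    edges = list(exact)
    sensed = set(random.sample(edges, round(coverage * len(edges))))
    view = {}
    for e, series in exact.items():
        if e not in sensed:
            view[e] = [None] * len(series)       # no sensor on this edge
            continue
        view[e] = [None if random.random() < dropout
                   else max(0, round(v * random.gauss(1.0, noise)))
                   for v in series]
    return view

exact = {"A1B1": [9, 10, 10], "B1C1": [10, 11, 40], "C1D1": [8, 12, 11],
         "B1B2": [10, 9, 11], "D1D2": [12, 8, 9]}   # toy exact counts
readings = sensor_view(exact)
```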
The gap between what's really happening and what the sensors report isn't random: it's information.
A naive approach, simply watching the rate at which vehicles accumulate, detected the incident about 2 minutes earlier (t=22 vs t=24). But that's all it could say: "more vehicles." It couldn't tell you where the problem started, whether it was a sensor glitch or a real event, or how far the disruption had spread.
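A sketch of that naive detector, a threshold on the network-wide total against its trailing mean. The window and factor are illustrative values, not the experiment's tuned settings, and on this toy series it happens to fire at minute 22 only by construction.

```python
def naive_detect(totals, window=5, factor=1.4):
    """Flag the first minute where the network-wide vehicle count exceeds
    `factor` times its trailing-window mean. Window and factor are
    illustrative thresholds, not values from the experiment."""
    for t in range(window, len(totals)):
        baseline = sum(totals[t - window:t]) / window
        if totals[t] > factor * baseline:
            return t      # "more vehicles" -- but no where, no why
    return None

# Toy series: steady around 60 vehicles, then climbing after minute 20.
totals = [60] * 20 + [60 + 12 * k for k in range(1, 11)]
```

A single scalar in, a single timestamp out: the detector has nothing to localise with, which is exactly the limitation described above.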
By comparing two independent observers, we could pinpoint the exact road segment where the gap first appeared (B1C1, the blocked road), distinguish sensor noise from real signal (the gap signal rose 7.5x during the incident), and track how congestion propagated to neighbouring segments.
| Capability | Naive (single observer) | Convergent (two observers) |
|---|---|---|
| Detects something happened | Yes (t=22 min) | Yes (t=24 min) |
| Tells you WHERE | No | Yes — first deviation edge |
| Tells you WHY (real vs sensor error) | No | Yes — signal/noise decomposition |
| Tracks spread to neighbours | No | Yes — chain propagation |
| Estimates coverage impact | No | Yes — 71% RMSE reduction with full coverage |
| Works with imperfect sensors | Assumes perfect data | Models and measures imperfection |
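The convergent side can be sketched as a per-edge comparison: estimate each edge's gap noise floor over a warmup period, then flag the first minute any edge's gap sits far above its own floor. This is a simplified z-score illustration, not the CAAD method itself, and the toy data (the sensor lagging badly on B1C1 after the incident) is an assumption.

```python
from statistics import mean, pstdev

def convergent_detect(obs1, obs2, z_thresh=3.0, warmup=10):
    """Compare two observers edge by edge. Each edge's gap |obs1 - obs2|
    has a noise floor, estimated over a warmup period; a gap far above
    that floor (in standard deviations) is treated as signal rather than
    sensor noise. Returns (minute, edge) of the first deviation, or None.
    A simplified z-score sketch, not the actual CAAD pipeline."""
    floor = {}
    for e in obs1:
        gaps = [abs(a - b) for a, b in zip(obs1[e][:warmup], obs2[e][:warmup])]
        floor[e] = (mean(gaps), pstdev(gaps) or 1.0)
    minutes = len(next(iter(obs1.values())))
    for t in range(warmup, minutes):
        for e in obs1:
            mu, sd = floor[e]
            if (abs(obs1[e][t] - obs2[e][t]) - mu) / sd > z_thresh:
                return t, e
    return None

# Toy data: the sensor tracks the truth until the incident, then lags badly
# on B1C1 (the blocked edge from the experiment) while C1D1 stays clean.
obs1 = {"B1C1": [10] * 10 + [10, 30, 60, 100], "C1D1": [9] * 14}
obs2 = {"B1C1": [11, 9, 10, 10, 9, 11, 10, 10, 9, 10] + [12, 14, 15, 16],
        "C1D1": [9] * 14}
```

Because the test is per edge, the return value already carries the "where"; the z-score against each edge's own noise floor is what separates sensor noise from real signal.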
This isn't a traffic-specific tool. The same principle — measuring the gap between independent observers — applies to health systems (field data vs reported data), financial anomalies (price vs fundamentals), sensor networks, and any domain where you have two independent ways to look at the same thing.
CAAD is being built as a service. If you have a system with multiple data sources that should agree but sometimes don't — we should talk.
Get in touch