We deployed two independent seismic monitoring networks across Kenya's Rift Valley, injected a magnitude 4.2 earthquake into the simulation, and measured how comparing the two networks against each other reveals what neither could see alone. Here's what happened.
We simulated 120 minutes of seismic activity across the Kenyan Rift Valley using a synthetic dual-network configuration. At minute 45, we injected a magnitude 4.2 earthquake near Longonot. An aftershock followed at minute 75. The question: how much more can you learn when two independent observers disagree?
The primary network: high-precision broadband seismometers. Research-grade instruments with excellent sensitivity, but only 60% spatial coverage from a sparse deployment across the rift. It measures what it sees precisely, and misses what it can't reach.
The secondary network: lower-cost MEMS accelerometers. Wider deployment with 90% coverage, but noisier measurements and higher instrument error. It sees more of the region, and each reading is less precise.
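A minimal sketch of this setup in plain Python. Only the timeline (120 minutes, main event at minute 45, aftershock at minute 75, magnitude 4.2) and the coverage figures (60% vs 90%) come from the article; the pulse shape, decay constant, noise levels, and the aftershock's magnitude are assumptions for illustration.

```python
import math
import random

random.seed(42)  # reproducible toy run

MINUTES = 120
EVENT_T, EVENT_MAG = 45, 4.2          # main event near Longonot (from the article)
AFTERSHOCK_T, AFTERSHOCK_MAG = 75, 3.1  # aftershock magnitude is an assumption

def ground_motion(t):
    """True ground motion: quiet background plus decaying event pulses."""
    amp = 0.05  # background microseism level (arbitrary units)
    for t0, mag in ((EVENT_T, EVENT_MAG), (AFTERSHOCK_T, AFTERSHOCK_MAG)):
        if t >= t0:
            amp += mag * math.exp(-(t - t0) / 5.0)  # assumed 5-minute decay
    return amp

def observe(t, coverage, noise_sd):
    """One network's reading: None outside its coverage, noisy otherwise."""
    if random.random() > coverage:
        return None  # station gap: this network cannot see this point
    return ground_motion(t) + random.gauss(0.0, noise_sd)

# Primary network: precise but sparse. Secondary network: noisy but wide.
series_a = [observe(t, coverage=0.60, noise_sd=0.02) for t in range(MINUTES)]
series_b = [observe(t, coverage=0.90, noise_sd=0.15) for t in range(MINUTES)]
```

Each list holds one reading per simulated minute, with `None` marking the points that network simply never saw; that asymmetry is what the rest of the analysis exploits.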
The gap between what the two networks report isn't random; it's information. When both agree, you have confidence. When they disagree, the shape of the disagreement tells you what's happening underground.
Any properly calibrated seismometer picks up a magnitude 4.2 event. That's table stakes. But a single network can't tell you whether an anomalous reading is instrument drift or real ground motion. It can't identify which areas of the rift have monitoring blind spots. And it can't track wave propagation by watching how the disagreement between networks moves across stations, because with one observer there is no disagreement to watch.
By comparing the primary seismometer network against the accelerometer network, we could separate instrument noise from real seismic signal (the signal component rose 18x during the main event), pinpoint the first station to show deviation (Longonot-N, closest to the injected epicentre), track wave propagation across 12 stations, and identify the 40% of the rift where only one network had coverage: the monitoring blind spots that matter most during an event.
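Two of those operations, calibrating a noise floor from quiet periods and localising the event by first deviation, can be sketched in a few lines. The station names, gap values, and the 5x threshold below are illustrative assumptions, not the simulation's actual data.

```python
import statistics

# Toy per-station gap series: |primary reading - secondary reading| per minute.
gaps = {
    "Longonot-N": [0.1, 0.1, 0.2, 1.9, 2.4, 1.1],  # deviates first
    "Naivasha-S": [0.2, 0.1, 0.1, 0.2, 2.0, 1.5],  # deviates one step later
    "Nakuru-W":   [0.1, 0.2, 0.1, 0.1, 0.2, 1.8],  # deviates last
}
QUIET = slice(0, 3)  # minutes known to be pre-event

def first_deviation(series, k=5.0):
    """Index where the gap first exceeds k x the quiet-period noise floor."""
    floor = statistics.mean(series[QUIET])
    return next((i for i, g in enumerate(series) if g > k * floor), None)

onsets = {name: first_deviation(s) for name, s in gaps.items()}
origin = min(onsets, key=lambda n: onsets[n])  # earliest onset ~ nearest station
```

The same onset table, read across stations in time order, is what gives the propagation track: watching the deviation arrive at station after station is watching the wavefront move.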
| Capability | Traditional (single network) | Convergent (two observers) |
|---|---|---|
| Detects seismic event | Yes (t=46 min) | Yes (t=46 min) |
| Locates epicentre | Roughly — triangulation only | Yes — first deviation station + gap shape |
| Distinguishes instrument noise from ground motion | No | Yes — signal/noise decomposition |
| Tracks wave propagation across stations | No | Yes — propagation count over time |
| Identifies monitoring blind spots | No | Yes — where only one network has coverage |
| Separates main event from aftershock | Yes — amplitude only | Yes — plus propagation pattern differences |
| Works with imperfect, noisy sensors | Assumes calibrated instruments | Models and measures imperfection |
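The blind-spot row deserves one concrete note: a cell that only one network covers still produces readings, but there is nothing to cross-check them against, so the gap analysis is blind there. A toy sketch under an assumed 10-cell geometry (the real deployment and the article's 40% figure are not reproduced here):

```python
# Hypothetical 10-cell discretisation of the rift; real station geometry unknown.
region = set(range(10))
covered_a = {0, 1, 2, 3, 4, 5}            # sparse precise network (60%)
covered_b = {0, 1, 2, 3, 4, 5, 6, 7, 8}   # wide noisy network (90%)

cross_checked = covered_a & covered_b     # both networks: the gap is measurable
one_network = covered_a ^ covered_b       # exactly one network: no cross-check
unmonitored = region - (covered_a | covered_b)

blind_fraction = len(one_network) / len(region)  # 0.3 in this toy geometry
```

Three set operations are the whole computation; the value comes from knowing which cells fall in `one_network` before an event, not after.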
This isn't a seismic-specific tool. The same principle — measuring the gap between independent observers — applies to health systems (field data vs reported data), financial anomalies (price vs fundamentals), traffic networks (ground truth vs sensors), and any domain where you have two independent ways to measure the same thing. The gap is always structured information.
CAAD is being built as a service. If you have multiple sensor networks, data sources, or measurement systems that should agree but sometimes don't — the gap between them is information you're not using.
Get in touch