# When AI Moves Physical Systems: The Spatial Graph as Real-Time Control Plane

Most AI systems stop at detection. They identify objects, classify scenes, and produce annotations, but they don't act. The output is a dashboard, not a control signal.
## The perception-action gap
In a venue environment, detection without action is incomplete. Identifying a player is not the same as framing them in a broadcast shot. Detecting an intrusion is not the same as directing a camera to track the intruder. The gap between seeing and acting is where operational value lives.
## Graph-driven control
OZ's robotic capture systems do not run on raw detections. They run on Spatial Graph queries. When the graph updates with a new entity position, the control system evaluates:
- Priority rules: which entity matters most right now
- Zone policies: what capture behavior applies in this area
- Mechanical constraints: what the physical actuator can reach
- Transition cost: how disruptive is the camera move
The result is a control decision, not a detection report.
## Deterministic under real-time constraints
Robotic capture at venue scale demands deterministic timing. A PTZ camera that receives a cueing command 200ms late produces a missed shot, not a delayed shot. The control loop runs at the edge with fixed-budget computation. Every cycle completes within its time window or escalates.
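The complete-or-escalate contract can be sketched as a wrapper around one control cycle. The budget value and the `escalate` fallback are hypothetical stand-ins; the point is that a blown deadline yields a deliberate fallback command, never a late one.

```python
import time

CYCLE_BUDGET_S = 0.010  # hypothetical 10 ms per-cycle budget

def run_cycle(compute_command, escalate, budget_s=CYCLE_BUDGET_S):
    """Run one control cycle under a fixed time budget.

    If computation overruns the budget, the result is discarded in
    favor of whatever the escalation path decides (e.g. hold position).
    """
    start = time.monotonic()
    command = compute_command()
    elapsed = time.monotonic() - start
    if elapsed > budget_s:
        return escalate(command, elapsed)  # deadline blown: fall back
    return command
```

In a real edge deployment the overrun check would happen preemptively (a watchdog, not a post-hoc measurement), but the contract is the same: every cycle produces a timely command or an explicit escalation.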
## From one camera to many
The Spatial Graph makes multi-camera coordination possible. When one camera tracks wide, another can zoom tight, not because a human operator made the call, but because the graph encodes the spatial relationships that make the decision obvious.
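A toy sketch of that wide/tight assignment, assuming the graph exposes camera and target positions (the function name, the 2D coordinates, and the nearest-goes-tight rule are all illustrative assumptions):

```python
import math

def assign_roles(cameras: dict[str, tuple[float, float]],
                 target: tuple[float, float]) -> dict[str, str]:
    """Assign the nearest camera the tight shot; the rest cover wide.

    In practice the distances would come from Spatial Graph edges,
    not raw coordinates; Euclidean distance stands in for that here.
    """
    def dist(pos):
        return math.hypot(pos[0] - target[0], pos[1] - target[1])
    ordered = sorted(cameras, key=lambda name: dist(cameras[name]))
    return {name: ("tight" if i == 0 else "wide")
            for i, name in enumerate(ordered)}
```

Because every camera evaluates the same graph state, the assignment is consistent without any camera-to-camera negotiation.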
This is what spatial robotics means at OZ: the graph sees, the control system decides, and the hardware acts, all within the latency budget that the venue demands.