Why We Started Where the Physics Is Hardest
There's a reason we didn't start with CCTV corridors or warehouse floors. We started with live sports, the single hardest operating environment for visual intelligence infrastructure.
What makes sports the hardest
Every requirement that matters in real-time spatial intelligence is maximized in live sports:
Speed of action. Players change direction in milliseconds. Ball flight is sub-second. Game state transitions are unpredictable and continuous. The perception system doesn't get to process at its own pace; the pace is set by physics.
Latency budget. A robotic gimbal executing a close-up shot of a player receiving a pass has less than one second from detection to framing. That budget covers detecting the event, predicting the trajectory, calculating the camera position, issuing the gimbal command, waiting for the mechanical response, and validating the frame. Every millisecond matters. A p99 latency of ≤120 ms isn't a target. It's an SLO with financial consequences.
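To make that arithmetic concrete, here is a minimal sketch of how such a budget might be decomposed and checked. The stage names and millisecond splits are illustrative assumptions, not the real allocations:

```python
import math

# Hypothetical decomposition of the sub-second detection-to-framing budget.
# Stage names and millisecond splits are illustrative, not real allocations.
STAGE_BUDGET_MS = {
    "detect_event": 40,
    "predict_trajectory": 30,
    "calculate_camera_position": 20,
    "issue_gimbal_command": 10,
    "mechanical_response": 780,
    "validate_frame": 20,
}
TOTAL_BUDGET_MS = 1000        # detection to framed shot, end to end
PERCEPTION_P99_SLO_MS = 120   # SLO on the software path before the gimbal moves

assert sum(STAGE_BUDGET_MS.values()) <= TOTAL_BUDGET_MS

def p99(latencies_ms: list[float]) -> float:
    """Nearest-rank 99th percentile of observed latencies."""
    ordered = sorted(latencies_ms)
    return ordered[math.ceil(0.99 * len(ordered)) - 1]

def slo_met(software_path_latencies_ms: list[float]) -> bool:
    """True if the observed p99 of detect->predict->calculate->command holds."""
    return p99(software_path_latencies_ms) <= PERCEPTION_P99_SLO_MS
```

Note the split: the software path gets roughly 100 ms of the budget, which is why the p99 SLO sits at 120 ms while the mechanical response consumes most of the second.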
Pointing accuracy. Optical zoom to identification-grade close-ups requires sub-degree pointing accuracy from three-axis gimbal-stabilized capture heads. At the focal lengths needed for broadcast-quality close-ups, a fraction of a degree of error puts the subject out of frame entirely. This is a control theory problem that requires hardware and software to be co-designed.
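The geometry behind that claim is easy to check with a pinhole-camera model. The numbers below are illustrative assumptions (a shoulders-up close-up at 60 m, a 9.6 mm-wide broadcast sensor), not a measured configuration:

```python
import math

# Pinhole-camera geometry behind the pointing-accuracy claim. All numbers
# are illustrative assumptions, not a measured camera configuration.

# A shoulders-up close-up (~0.6 m wide) at 60 m subtends about 0.57 degrees.
subject_width_m, distance_m = 0.6, 60.0
required_fov_deg = math.degrees(2 * math.atan(subject_width_m / (2 * distance_m)))

# Focal length needed to narrow a 9.6 mm-wide broadcast sensor to that FOV.
sensor_width_mm = 9.6
focal_length_mm = sensor_width_mm / (2 * math.tan(math.radians(required_fov_deg) / 2))

# A 0.3-degree pointing error then shifts the image by over half the frame
# width, which pushes the subject effectively out of the shot.
pointing_error_deg = 0.3
shift_fraction = pointing_error_deg / required_fov_deg

print(f"required FOV: {required_fov_deg:.2f} deg")      # ~0.57 deg
print(f"focal length: {focal_length_mm:.0f} mm")        # ~960 mm
print(f"frame shift:  {shift_fraction:.0%} of width")   # ~52%
```

At that framing, the entire frame spans barely half a degree, so a 0.3° error moves the subject more than half a frame width off center.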
Concurrent workloads. Six 4K60p camera streams, each requiring real-time detection, tracking, prediction, camera cueing, and production-quality rendering, all running simultaneously on a single on-venue compute unit. Over 100 Gbps of raw sensor data processed per match.
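As a sketch of the shape of that workload, not its implementation: six identical detect-to-render chains scheduled concurrently on one unit, each paced by the sensor. The stage bodies below are stand-ins:

```python
import asyncio

# Shape of the concurrent workload: six identical detect->track->predict->
# cue->render chains on one edge unit. Stage bodies are stand-ins; only the
# structure is the point.

NUM_STREAMS = 6
FPS = 60  # each 4K60p stream delivers a new frame every ~16.7 ms

async def detect(frame): return f"detections({frame})"
async def track(dets): return f"tracks({dets})"
async def predict(tracks): return f"trajectories({tracks})"
async def cue_camera(traj): return f"gimbal_cmd({traj})"
async def render(frame, cmd): return f"program_frame({frame})"

async def process_stream(stream_id: int, num_frames: int) -> None:
    """One full pipeline per camera stream, paced by the sensor, not the code."""
    for i in range(num_frames):
        frame = f"stream{stream_id}-frame{i}"
        cmd = await cue_camera(await predict(await track(await detect(frame))))
        await render(frame, cmd)
        await asyncio.sleep(1 / FPS)  # the frame cadence sets the deadline

async def main() -> None:
    # All six pipelines run concurrently on the same on-venue compute unit.
    await asyncio.gather(*(process_stream(s, num_frames=3) for s in range(NUM_STREAMS)))

asyncio.run(main())
```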
Zero tolerance for failure. Live broadcast under published SLOs. Thousands of viewers watching. A match cannot be replayed. Recovery must happen within the same broadcast window. This is not a batch job that can be retried.
Why the hardest environment first
The strategic logic is simple: if the platform survives sports, it's proven for everything else.
A CCTV corridor has slower motion, lower resolution requirements, longer latency budgets, and no live audience. A logistics facility has predictable patterns, fixed lighting, and tolerance for batch processing. A transit hub has more subjects but slower individual motion.
Every environment we expand into after sports is easier along at least three of these dimensions. The platform doesn't need to be redesigned; it needs new playbooks.
What transfers and what doesn't
What transfers: The entire hardware and software stack. The edge compute architecture, the inference runtime, the multi-stream pipeline, the Spatial API contracts, the deployment tooling, the operational playbook framework. These are environment-agnostic.
What doesn't transfer: The specific playbook content. Zone definitions change. Priority schemas change. Capture policies change. Governance rules change. But the playbook format (the way OZ configures and deploys a venue) stays the same.
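A hypothetical schema makes the distinction concrete. The field names below are assumptions for illustration, not the actual Spatial API; what matters is that a stadium and a corridor fill in the same structure with different content:

```python
from dataclasses import dataclass

# Hypothetical playbook schema, illustrating shared format vs. per-venue
# content. Field names are assumptions, not the actual Spatial API.

@dataclass
class Zone:
    name: str
    polygon: list[tuple[float, float]]  # venue-local coordinates, meters

@dataclass
class Playbook:
    venue: str
    zones: list[Zone]
    priority_schema: dict[str, int]   # event type -> priority rank
    capture_policies: dict[str, str]  # event type -> shot type
    governance: dict[str, bool]       # e.g. retention / redaction flags

# Same schema, stadium content...
stadium = Playbook(
    venue="stadium-01",
    zones=[Zone("penalty_box", [(0, 0), (16.5, 0), (16.5, 40.3), (0, 40.3)])],
    priority_schema={"shot_on_goal": 0, "counterattack": 1},
    capture_policies={"shot_on_goal": "tight_closeup"},
    governance={"retain_raw_footage": True},
)

# ...and corridor content, expressed in the identical format.
corridor = Playbook(
    venue="corridor-7",
    zones=[Zone("entry", [(0, 0), (2, 0), (2, 12), (0, 12)])],
    priority_schema={"unattended_object": 0},
    capture_policies={"unattended_object": "wide_hold"},
    governance={"retain_raw_footage": False},
)
```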
This is the difference between building a platform and building a product. A product solves one problem well. A platform provides the primitives to solve many problems, and the hardest problem validates that the primitives are sound.
The operational evidence
Every match OZ has produced under broadcast SLOs is evidence. Not a demo, not a benchmark, not a simulation. Evidence from live production where failure has consequences.
That evidence doesn't just prove the technology works. It proves the operations work: the deployment process, the monitoring, the recovery procedures, the calibration, the maintenance. Operational maturity is earned through live production, not prototyping.
When OZ enters a new vertical (CCTV, robotics, defense, any environment that needs visual intelligence) it carries the operational evidence of every match before it. That evidence is the strongest proof a platform can offer: we already operate under harder conditions than anything you'll ask of us.