GPU-Accelerated Video Routing: 3 Billion Pixels Per Second
The video routing pipeline decodes, routes, and re-encodes six simultaneous 4K60p camera streams entirely on the GPU, processing over 3 billion pixels per second at the venue edge.
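The headline figure checks out with simple arithmetic: six UHD streams at 60 frames per second.

```python
# Back-of-envelope check of the throughput claim.
WIDTH, HEIGHT = 3840, 2160  # UHD "4K" frame
FPS = 60                    # frames per second per camera
CAMERAS = 6                 # simultaneous streams

pixels_per_second = WIDTH * HEIGHT * FPS * CAMERAS
print(pixels_per_second)  # -> 2985984000, i.e. just under 3 billion
```

Each 4K frame is about 8.3 megapixels, so a single 60 fps camera already accounts for roughly half a billion pixels per second before fan-out to multiple consumers.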
- GPU hardware-accelerated encoding and decoding for all six 4K60p streams
- GStreamer HLS sink integration for adaptive bitrate output
- Shared memory video pipeline: camera frames flow from capture to inference to rendering to output without round-trips through CPU memory
- Multi-output routing: same source streams feed AI inference, production switching, recording, and contribution simultaneously
- Per-camera encoding profiles tuned for broadcast delivery (SRT, HLS) and AI inference separately
- Latency-optimized frame pipeline with sub-frame delay (under one 16.7 ms frame interval at 60 fps) from camera sensor to inference input
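The multi-output routing above can be sketched as a GStreamer `tee` fan-out: one hardware decode per camera, with branches feeding inference, HLS delivery, and SRT contribution. This is a minimal illustrative sketch, not the production pipeline — it assumes RTSP/H.264 cameras and the GStreamer `nvcodec`, `hls`, and `srt` plugins, and the socket paths, ports, and bitrates are placeholders.

```python
def camera_pipeline(rtsp_url: str, cam_id: int) -> str:
    """Build a gst-launch-style pipeline description for one camera.

    One NVDEC decode, fanned out via `tee` to three consumers.
    All element names and parameters here are assumptions for
    illustration, not OZ's actual configuration.
    """
    branches = [
        # Shared-memory handoff to the inference process (raw frames).
        "queue ! shmsink socket-path=/tmp/cam{id}.sock "
        "wait-for-connection=false",
        # Re-encode for adaptive HLS delivery.
        "queue ! nvh264enc bitrate=12000 ! h264parse ! "
        "hlssink2 location=/var/hls/cam{id}/seg%05d.ts "
        "playlist-location=/var/hls/cam{id}/index.m3u8",
        # Higher-bitrate SRT contribution feed.
        "queue ! nvh264enc bitrate=20000 ! h264parse ! "
        "mpegtsmux ! srtsink uri=srt://0.0.0.0:{port}",
    ]
    fanout = " t. ! ".join(
        b.format(id=cam_id, port=9000 + cam_id) for b in branches
    )
    return (
        f"rtspsrc location={rtsp_url} ! rtph264depay ! h264parse ! "
        f"nvh264dec ! tee name=t ! {fanout}"
    )

print(camera_pipeline("rtsp://10.0.0.11/stream", 1))
```

The key property the sketch illustrates: the expensive decode happens once, and every downstream consumer (inference, delivery, recording) branches off the same decoded stream rather than decoding independently.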
A critical design decision: OZ works with clean, full-resolution optical input from every camera. No stitched panoramas. No digital crops from a wide-angle master. Each camera captures through its own optical zoom lens at the focal length the AI selects, and the full 4K frame enters the processing pipeline.
This is the difference between working with physics (optical zoom, a mechanical lens, full photon capture) and working around it (digital zoom, software upscaling, information loss). At 3 billion pixels per second, the venue GPU processes more raw visual data than most cloud inference clusters handle for a single customer.