

GPU-Accelerated Video Routing: 3 Billion Pixels Per Second

OZ VI Venue · v3.0.2 · March 3, 2025


The video routing pipeline carries six simultaneous 4K60p camera streams through GPU-accelerated encoding, decoding, and routing, handling over 3 billion pixels per second at the venue edge.
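The headline figure is easy to sanity-check. A quick back-of-envelope calculation, assuming DCI 4K frames (4096 × 2160; UHD frames at 3840 × 2160 land just under 3 billion, so the exact number depends on sensor mode):

```python
# Back-of-envelope check of the aggregate pixel throughput.
# Assumes DCI 4K frames (4096 x 2160) -- an assumption, since the
# post only says "4K60p"; UHD (3840 x 2160) gives ~2.99 billion.
streams = 6
width, height = 4096, 2160
fps = 60

pixels_per_second = streams * width * height * fps
print(f"{pixels_per_second:,} pixels/s")  # 3,185,049,600 pixels/s
```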

Added
  • GPU hardware-accelerated encoding and decoding for all six 4K60p streams
  • GStreamer HLS sink integration for adaptive bitrate output
  • Shared memory video pipeline: camera frames flow from capture to inference to rendering to output without CPU-bound copies
  • Multi-output routing: same source streams feed AI inference, production switching, recording, and contribution simultaneously
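A minimal sketch of what one per-camera branch of such a pipeline could look like, written as a gst-launch-style GStreamer description. The element names here (`srtsrc`, `nvh264dec`, `nvh264enc`, `hlssink2`) and the port scheme are assumptions for illustration, not OZ's actual graph:

```python
# Illustrative gst-launch-style description of one camera branch:
# decode on the GPU, tee the decoded frames, and fan out to both an
# HLS sink and an inference appsink without re-decoding.
# Element names and properties are assumptions, not OZ's pipeline.
def camera_branch(cam: int) -> str:
    return (
        f"srtsrc uri=srt://0.0.0.0:{9000 + cam} ! tsdemux ! h264parse "
        "! nvh264dec "                       # GPU hardware decode
        "! tee name=t "                      # fan out the decoded frames
        "  t. ! queue ! nvh264enc ! h264parse "   # GPU re-encode for delivery
        f"      ! hlssink2 location=cam{cam}/segment%05d.ts "
        "  t. ! queue ! videoconvert "
        "      ! appsink name=inference"     # raw frames for AI inference
    )

print(camera_branch(0))
```

The `tee` is the key idea: one decode feeds every consumer, which is what makes the multi-output routing described above cheap.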

Improved
  • Per-camera encoding profiles tuned for broadcast delivery (SRT, HLS) and AI inference separately
  • Latency-optimized frame pipeline with sub-frame delay from camera sensor to inference input
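Separate tuning for broadcast delivery versus inference could be modeled along these lines; every field name and value below is a hypothetical illustration, not OZ's actual configuration:

```python
# Hypothetical per-camera encoding profiles: broadcast delivery wants
# visually clean, rate-controlled output, while the inference path wants
# minimal latency (tiny GOP, no B-frames). All names/values illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class EncodeProfile:
    codec: str
    bitrate_kbps: int
    gop_frames: int      # keyframe interval, in frames
    b_frames: int        # B-frames add quality but also reorder delay
    low_latency: bool

BROADCAST = EncodeProfile("h264", bitrate_kbps=25_000,
                          gop_frames=120, b_frames=2, low_latency=False)
INFERENCE = EncodeProfile("h264", bitrate_kbps=8_000,
                          gop_frames=1, b_frames=0, low_latency=True)

# One profile pair per camera; all six start from the same defaults
# and can then be tuned individually.
profiles = {cam: {"broadcast": BROADCAST, "inference": INFERENCE}
            for cam in range(6)}
```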

A critical design decision: OZ works with clean, full-resolution optical input from every camera. No stitched panoramas. No digital crops from a wide-angle master. Each camera captures through its own optical zoom lens at the focal length the AI selects, and the full 4K frame enters the processing pipeline.

This is the difference between working with physics (optical zoom, mechanical lens, full photon capture) and working around it (digital zoom, software upscaling, information loss). At 3 billion pixels per second, the venue GPU processes more raw visual data than most cloud inference clusters handle per customer.
