It Fits in One Hand and Tracks Every Player for $500
Not a Camera#
Q: Everyone is going to call this a camera.
I know. And I need to stop that immediately. A camera captures video. That is its job. You record footage, you upload it, you watch it later. The PanoNode captures spatial data. Video is a byproduct of the sensing process, not the product.
The product is structured tracking data. Every entity in the field of view, detected, classified, timestamped, confidence-scored. Positions updated thirty times per second. Distances. Speeds. Spatial relationships. In a football deployment, that means twenty-two players plus ball with full spatial context. But the same pipeline tracks people across a logistics yard, vehicles in a port, or movement patterns on a campus. All of it computed on the device itself, not in the cloud, not on someone else's server. On this device, at the venue, in real time.
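As a concrete illustration, here is what one update of such a tracking stream might look like as data, and what "spatial relationships" means in practice. The field names and values below are invented for the example, not the actual PanoNode schema:

```python
import math

# Hypothetical shape of a single 30 Hz tracking update — the keys and
# values are illustrative assumptions, not the real PanoNode output format.
update = {
    "tick": 184532,
    "timestamp_us": 1718040000123456,
    "entities": [
        {"id": "player_7",  "cls": "player", "xy_m": (34.2, 21.8), "conf": 0.97},
        {"id": "player_10", "cls": "player", "xy_m": (31.0, 25.1), "conf": 0.95},
        {"id": "ball",      "cls": "ball",   "xy_m": (33.5, 22.4), "conf": 0.91},
    ],
}

def distance_m(a, b):
    """Euclidean distance between two tracked entities, in metres."""
    (ax, ay), (bx, by) = a["xy_m"], b["xy_m"]
    return math.hypot(ax - bx, ay - by)

# Example spatial relationship: which player is closest to the ball?
ball = next(e for e in update["entities"] if e["cls"] == "ball")
nearest = min(
    (e for e in update["entities"] if e["cls"] == "player"),
    key=lambda e: distance_m(e, ball),
)
print(nearest["id"], round(distance_m(nearest, ball), 2))
```

Everything downstream, distances, speeds, zone occupancy, is ordinary computation over records like these.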
Q: Why does the distinction between camera and sensor matter?
Because it changes everything about how you think about the device. A camera is a video appliance; its value begins and ends with the footage. A sensor is data infrastructure. You deploy it, it produces structured data, and you build systems on top of that data. The value is in what you build.
When you deploy a camera, you get footage. When you deploy a PanoNode, you get a data stream that any software system can consume: tracking feeds, spatial context, zone occupancy, formation analysis. All through stable API contracts that third-party teams can integrate against without ever talking to an OZ engineer.
The OZ PanoNode is an edge data sensor, not a camera. Its primary output is structured tracking data: every entity in the scene, timestamped and confidence-scored. In football, that is 22 players plus ball. The same pipeline applies to any spatial environment. Video is a byproduct of the sensing process, not the product.
Why "Pano" and Why "Node"#
Q: Walk us through the name.
"Pano" is panoramic. The device captures nearly 180 degrees of field of view from a single mounting point. In a football venue, that means the entire pitch, sideline to sideline, from one position. No blind spots within the coverage arc. No need to pan, tilt, or zoom. The field of view is fixed and continuous.
"Node" is the scalability part. One PanoNode covers one viewpoint. But this is designed from the ground up to be deployed at scale. Three PanoNodes at a venue (one at midfield, two behind the goals) and you have occlusion coverage from three angles. Every player is visible from at least two viewpoints at all times. Four nodes give you opposite-side coverage. Six nodes give you near-complete spatial redundancy.
But the word "node" means something deeper. Each unit is a self-governing node in a network. It powers on, starts processing, and produces output, with zero per-match setup. No operator presence. No manual start and stop. No tripod repositioning. You mount it, connect a single PoE cable, and walk away. It runs all season.
Q: A single cable?
Power over Ethernet. One cable carries both power and data. Total system power sits between 15 and 25 watts, well within standard PoE budgets. That is the entire installation: mount, cable, done. No dedicated power supply. No network switch configuration beyond the basics. No special infrastructure.
What Is Inside#
Q: What makes this possible at a device this size?
The compute platform is the NVIDIA Jetson Orin Nano Super. Sixty-seven trillion operations per second of AI inference in a module smaller than a credit card. That is the engine. Around it, we built in everything our years of edge hardware work taught us: thermal management, power conditioning, environmental protection. All the lessons from the OZ VI Venue, compressed into a form factor that can be mounted anywhere.
The sensing side uses dual global-shutter image sensors. Global shutter is critical. It means every pixel in the frame is captured at the same instant. Rolling-shutter sensors, the kind used in consumer cameras and most competing products, capture each row of pixels at a slightly different moment. When something moves fast (a ball kicked at 120 kilometers per hour) rolling shutter produces motion artifacts. The ball smears. The position measurement is wrong. Global shutter eliminates that entirely. Every frame is geometrically truthful.
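The motion-artifact claim is easy to sanity-check with arithmetic. Assuming a rolling-shutter readout time of roughly 10 milliseconds, an assumed, typical consumer-sensor figure rather than the spec of any particular product:

```python
# Back-of-envelope: how far a fast ball travels while a rolling shutter
# reads out its rows. The 10 ms readout time is an assumed typical value.
ball_speed_kmh = 120
ball_speed_ms = ball_speed_kmh / 3.6           # ≈ 33.3 m/s

rolling_readout_s = 0.010                       # assumed row-by-row readout
smear_m = ball_speed_ms * rolling_readout_s     # positional skew across the frame
print(f"{smear_m * 100:.0f} cm of skew per frame")   # ≈ 33 cm
```

A third of a metre of geometric skew is the difference between a measurement and a smear. A global shutter's simultaneous capture removes the readout term entirely.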
Q: Two sensors for panoramic coverage?
Two sensors, precisely aligned, with a deterministic stitching pipeline running on the GPU. The two images are combined into a single seamless panoramic output on every frame. And here is a design principle that matters: the stitching is deterministic. Given the same input frames, it produces the same output, every time. No adaptive algorithms that drift. No runtime recalibration that shifts the geometry. The transform is computed once at factory calibration and applied identically to every frame for the life of the device.
And from the outside, from the perspective of anything consuming the PanoNode's output, it appears as a single camera. One camera ID. One video stream. One tracking feed. The dual-sensor architecture is invisible. Any system that works with a standard camera works with PanoNode without modification.
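The deterministic-stitch principle reduces to a fixed projective transform applied identically on every frame. A minimal sketch, with a made-up calibration matrix:

```python
# Sketch of deterministic stitching: a fixed 3x3 homography, computed once
# at factory calibration, maps a pixel from one sensor into panorama
# coordinates. The matrix values are invented for illustration.
H = [
    [1.0, 0.02, 1920.0],   # mostly a horizontal shift into the panorama
    [0.0, 1.00,    4.0],
    [0.0, 0.00,    1.0],
]

def warp_point(H, x, y):
    """Apply a fixed projective transform to one pixel coordinate."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w

# Same input always yields the same output — no runtime adaptation, no drift.
assert warp_point(H, 100, 200) == warp_point(H, 100, 200)
print(warp_point(H, 100, 200))
```

In the real pipeline the transform runs over whole images on the GPU, but the property that matters is the same: the geometry is frozen at calibration.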
Dr. Rushikesh Deshmukh
Head of Enclosure & Edge Systems
“People see a device with lenses and say camera. This is not a camera. This is data infrastructure you bolt to a wall.”
Edge-Sovereign by Design#
Q: You keep saying "on the device." How much actually runs locally?
Everything that matters. Capture, stitching, tracking inference, encoding: the entire pipeline runs on the PanoNode itself. If you disconnect the network cable after installation, the device continues operating. It captures, processes, and (if you add the NVMe storage module) records locally. When connectivity returns, it uploads. But it never stops working because the internet went down.
We call this edge-sovereign. The word "sovereign" is deliberate. The PanoNode is a self-governing node. It doesn't depend on any external authority (not a cloud server, not a remote management console, not a license server) to perform its core function. The cloud adds value: remote monitoring, fleet management, firmware updates. But its absence never prevents the device from doing its job.
Q: That is unusual in this market.
Most competing products are cloud-first. The device captures video, uploads it to the vendor's cloud, and the cloud does the processing. Your data lives on someone else's server. Your analytics depend on their uptime. Your costs scale with their storage and compute pricing. And if the internet goes down during a match, no data.
We inverted that. Processing happens at the edge. Data stays at the venue. The customer owns the data. Full provenance: every data point traces back to which sensor captured it, which model version processed it, which calibration produced it. Reproducible, auditable, verifiable after the fact.
Edge-sovereign means the PanoNode functions fully without internet connectivity. Capture, stitching, tracking, encoding: the entire pipeline runs on-device. Cloud connectivity adds value but is never required for core operation.
The Pipeline#
Q: Walk us through what happens from photons hitting the sensor to data coming out the other end.
It starts with frame-pair atomicity. Both sensors capture a frame at the same instant, synchronized to within microseconds. If one sensor misses a frame, we do not use the other sensor's frame alone. The pair is atomic. Either both frames exist for a given moment, or that moment is skipped. No partial data. No stitching a current left frame with a stale right frame.
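Frame-pair atomicity is simple to express in code. A minimal sketch, with invented tick IDs:

```python
# Sketch of frame-pair atomicity: a tick produces output only when BOTH
# sensor frames for that tick arrived. Unpaired ticks are skipped entirely,
# never stitched against a stale frame from the other sensor.
left  = {101: "L101", 102: "L102", 104: "L104"}   # tick -> captured frame
right = {101: "R101", 103: "R103", 104: "R104"}

def atomic_pairs(left, right):
    """Yield (tick, left_frame, right_frame) only for complete pairs."""
    for tick in sorted(left.keys() & right.keys()):
        yield tick, left[tick], right[tick]

pairs = list(atomic_pairs(left, right))
print(pairs)   # ticks 102 and 103 are skipped: each is missing one side
```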
The frame pair enters the stitching pipeline on the GPU. The geometric transform, computed during factory calibration, is applied. Two images become one panoramic image. This takes single-digit milliseconds.
The stitched frame then enters the tracking pipeline. Detection models identify players, the ball, referees, every entity on the pitch. Each detection gets a position in venue coordinates, a confidence score, and a timestamp. These raw detections are fused across frames to produce continuous tracks: smooth trajectories rather than jumpy point detections.
The tracking data is published as a structured data stream. The stitched video is encoded using the hardware encoder on the Jetson, dedicated silicon for video compression, so it does not compete with the AI compute for GPU resources.
The entire pipeline, from photons to published tracking data, runs within a strict latency budget. Every stage has a watchdog timer. If any stage exceeds its time budget, the system responds predictably, never by silently degrading, always by emitting a classified error that tells you exactly what happened and why.
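The per-stage watchdog idea can be sketched as follows. This is a simplified post-hoc check with invented stage names and budgets; a real pipeline watchdog would preempt an overrunning stage rather than measure it afterward:

```python
import time

# Sketch of a per-stage time budget: an overrun raises a classified error
# instead of silently degrading. Stage names and budgets are illustrative.
class StageOverrun(Exception):
    def __init__(self, stage, elapsed_ms, budget_ms):
        super().__init__(f"{stage} took {elapsed_ms:.1f} ms (budget {budget_ms} ms)")
        self.stage = stage

def run_stage(name, fn, budget_ms):
    start = time.monotonic()
    result = fn()
    elapsed_ms = (time.monotonic() - start) * 1000
    if elapsed_ms > budget_ms:
        raise StageOverrun(name, elapsed_ms, budget_ms)
    return result

# A stage that blows its budget produces an attributable, classified fault.
try:
    run_stage("stitch", lambda: time.sleep(0.02), budget_ms=5)
except StageOverrun as e:
    print("pipeline fault:", e)
```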
What You Can Build on Top#
Q: So the PanoNode produces tracking data and video. What do people do with it?
This is where it gets interesting. The tracking data is published through OZ Cortex, our real-time data bus, using stable topic conventions. Any system that speaks the protocol can subscribe and consume the data in real time. But let me describe the four output layers, because they serve different audiences.
The first is the raw tracking feed. Thirty updates per second, every entity on the pitch, with positions, velocities, and confidence scores. This is what analytics companies build on. They take our tracking data and compute their own metrics: expected goals, pass completion probability, tactical analysis. They do not need to build their own tracking system. They consume ours.
The second is spatial context primitives. Zone occupancy, formation shape, team compactness, line height, all computed at the edge. These are higher-level spatial summaries that applications can consume directly without doing their own spatial math.
The third is the camera cueing stream. This is unique to the layered capture architecture. The PanoNode's panoramic view sees the entire pitch. When it detects a subject of interest (a player making a run, a ball trajectory heading toward goal) it generates a cue. A predicted target position with a priority score and confidence level. This cue is consumed by higher-resolution cameras (robotic PTZ systems with optical zoom) that can act on it instantly. The PanoNode is the spotter. It tells the zoom cameras where to look.
The fourth is the data API itself, the OZ Spatial API. Stable REST endpoints that return clean JSON. Current positions. Player trajectories over a time window. Spatial events. Third-party developers build against this API without needing to understand anything about sensors, stitching, or tracking inference. They see data contracts, not hardware.
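To make the consumer's side of those contracts concrete, here is a hedged sketch of reading one such response. The field names, values, and structure are hypothetical, invented for illustration rather than taken from the actual OZ Spatial API:

```python
import json

# Hypothetical Spatial API response body combining two of the layers above:
# spatial context primitives (zone occupancy) and a camera cue. Every field
# name here is an assumption for the sake of the example.
response_body = """
{
  "tick": 184532,
  "zone_occupancy": {"defensive_third": 7, "middle_third": 11, "final_third": 4},
  "cues": [
    {"target_xy_m": [88.0, 30.5], "priority": 0.9, "confidence": 0.84}
  ]
}
"""

data = json.loads(response_body)

# A PTZ controller would act on the highest-priority cue.
best_cue = max(data["cues"], key=lambda c: c["priority"])
print("point a PTZ camera at", best_cue["target_xy_m"])
```

The point of the layer model: a consumer like this never touches sensors, stitching, or inference. It sees clean JSON and a data contract.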
The Venue Swarm#
Q: You mentioned deploying three, four, even six PanoNodes at a single venue. How do they coordinate?
We call it the venue swarm. Multiple PanoNodes connected to a single PoE switch form a coordinated network. One node is designated as the leader. It runs everything a regular node runs, plus a time synchronization service and a tracking fusion engine.
The key principle is that the swarm shares tracking data (positions, identities, state transitions) not video. Raw video frames never cross the network between nodes. GPU memory stays local. Coordination happens at the semantic level: "I see player seven at position X from my angle, and you see player seven at position Y from your angle." The leader fuses these observations into a unified tracking output that resolves occlusions that no single viewpoint could solve alone.
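The semantic-level fusion can be sketched as a confidence-weighted combination of per-node observations. A real fusion engine also models occlusion and identity continuity; the numbers and node names below are invented:

```python
# Sketch of leader-side fusion: combine per-node position observations of
# the same player into one estimate, weighted by each node's confidence.
observations = [
    {"node": "midfield",   "player": 7, "xy_m": (34.1, 21.9), "conf": 0.95},
    {"node": "north_goal", "player": 7, "xy_m": (34.5, 21.6), "conf": 0.70},
]

def fuse(obs):
    """Confidence-weighted mean of position observations, in metres."""
    total = sum(o["conf"] for o in obs)
    x = sum(o["xy_m"][0] * o["conf"] for o in obs) / total
    y = sum(o["xy_m"][1] * o["conf"] for o in obs) / total
    return round(x, 2), round(y, 2)

print("fused position:", fuse(observations))
```

Only these small observation records cross the network, never frames, which is why the swarm scales on an ordinary PoE switch.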
Q: What happens if the leader goes down?
Every node falls back to sovereign operation. The swarm adds value (better occlusion handling, multi-angle tracking fusion) but its absence never prevents individual nodes from producing valid output. This is the degradation hierarchy: full swarm, then partial swarm if a worker drops out, then sovereign nodes if the leader drops out. At every level, every surviving node continues producing data.
Q: How do they stay synchronized?
Time is a service. The leader publishes a clock tick on the local network at the target frame rate. Every node tags its frames with the same tick ID. Downstream fusion uses that tick as the primary correlation key. All nodes see the same clock. There is no negotiation, no voting, no consensus protocol. The leader broadcasts time. Workers consume it.
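The tick-tagging logic, minus the actual network transport, can be sketched like this (node names are invented; a real deployment would carry the ticks over the local network):

```python
# Sketch of "time is a service": the leader emits monotonically increasing
# tick IDs; every worker tags its frames with the tick it received.
def leader_ticks(start=0):
    tick = start
    while True:
        yield tick
        tick += 1

class Worker:
    def __init__(self, name):
        self.name = name
        self.frames = []

    def on_tick(self, tick):
        # Tag the frame captured for this instant with the shared tick ID.
        self.frames.append({"tick": tick, "node": self.name})

clock = leader_ticks()
workers = [Worker("midfield"), Worker("north_goal"), Worker("south_goal")]
for _ in range(3):                 # three broadcast ticks
    tick = next(clock)
    for w in workers:
        w.on_tick(tick)

# Every node's frame for a given instant carries the same tick — the
# primary correlation key for downstream fusion.
print([w.frames[2]["tick"] for w in workers])
```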
Five Hundred Dollars#
Q: Let's talk about price.
We are targeting approximately five hundred US dollars per unit, before VAT, at commercial launch. That is for the complete PanoNode: compute, sensors, enclosure, everything needed to produce continuous tracking data from a single mounting point.
Q: That is dramatically lower than existing tracking systems.
That is the point. Current venue tracking systems, the kind that satisfy professional standards, cost tens of thousands, sometimes hundreds of thousands per venue. They require dedicated installation teams, custom calibration by specialists, and ongoing operational support. They are designed for elite venues with elite budgets.
At five hundred dollars per node, the math changes completely. A three-node deployment covering a full pitch costs fifteen hundred dollars in hardware. A federation that wants to deploy across fifty youth academies, a project that would be budget-prohibitive with existing systems, spends seventy-five thousand dollars total. And every single node produces the same structured data in the same format through the same API contracts. The analytics platform built for the top-tier venue works identically at the youth academy.
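The deployment arithmetic above, spelled out:

```python
# The cost math from the text as plain arithmetic, assuming the target
# price and a three-node full-pitch deployment.
unit_cost_usd = 500
nodes_per_pitch = 3

single_venue = unit_cost_usd * nodes_per_pitch
federation = 50 * single_venue        # fifty youth academies, three nodes each

print(single_venue)   # 1500
print(federation)     # 75000
```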
That is the scalability effect. It isn't just about one venue getting cheaper tracking. It's about an entire ecosystem of venues producing compatible, structured spatial data, from the national stadium to the local training pitch. The same data shapes. The same API. The same integration code. Thousands of venues feeding into the same analytical framework.
At ~$500 per unit, the OZ PanoNode makes venue-grade tracking data infrastructure accessible at a scale that existing systems cannot reach. The same data contracts and APIs work at every level, from national stadiums to youth academies.
The Full Platform Behind a Small Device#
Q: This device is small and relatively inexpensive. What connects it to the broader OZ platform?
Everything. The PanoNode is the entry point. It uses the same OZ Cortex data bus that the full OZ VI Venue system uses. The same topic conventions. The same Spatial API contracts. The same data shapes and coordinate systems. The same EPTS data export pipeline (the same codebase, literally) that produces FIFA-standard tracking data from the full production system.
This means upgrading from PanoNode to OZ VI Venue is additive, not disruptive. You do not throw away your PanoNode deployment. You add to it. The PanoNode continues operating as a panoramic tracking sensor (the awareness layer) while the VI Venue adds robotic PTZ cameras, multi-camera orchestration, live switching, graphics, replay, audio. Everything the PanoNode cannot and should never do.
Q: Why "should never"?
Because the upgrade path is the strategy. If the PanoNode tried to be a production system (if we added switching capabilities, graphics overlays, replay) there would be no reason to upgrade. The PanoNode is tracking and cueing. The VI Venue is production. The boundary is deliberate and permanent. The PanoNode's job is to be the best possible entry point to the OZ platform, so good that every venue wants one, and so clearly scoped that the path to VI Venue is obvious when you are ready for more.
Building the Data Layer#
Q: You mentioned earlier that PanoNode is data infrastructure. What does that actually mean for the people deploying it?
It means the PanoNode is not a product you buy and use. It is a product you buy and build on.
When a sports analytics startup deploys PanoNodes at venues, they are not buying a recording device. They are installing a data collection layer. They subscribe to the tracking feed through stable API contracts and build their own analytics on top. Their intellectual property is their analytics: their models, their metrics, their insights. Our infrastructure delivers the raw spatial data they need, reliably, at every venue, in the same format.
When a federation deploys PanoNodes across fifty venues, they are not buying fifty cameras. They are building a national spatial data network. Every venue produces compatible data. Player development programs can track athlete progression across venues, across age groups, across seasons. The data is comparable because the format is identical and the provenance is auditable.
When a security firm deploys PanoNodes at a commercial campus, they are not buying surveillance cameras. They are building a spatial awareness layer. Zone occupancy, movement patterns, anomaly detection, all computed at the edge, all governed by the same data sovereignty principles. The customer owns the data. Always.
Q: That last example, security. The PanoNode works beyond sports?
The core technology is domain-neutral. The sensors capture spatial data. The tracking pipeline detects and tracks entities. The output contract delivers structured positions and trajectories. None of that is specific to football.
What changes between domains is the intelligence layer on top: the detection models, the spatial logic, the event definitions. In football, an "event" might be a goal-scoring opportunity. In a port, an "event" might be a container movement anomaly. In a retail environment, an "event" might be queue length exceeding a threshold. The PanoNode provides the spatial data foundation. What you build on that foundation depends on your domain.
And this is where the World Model comes in. The Playbook, the declarative behavior specification that Bhagyashree described in her recent interview, works with PanoNode data. The same JSON file that tells cameras how to behave during a football match can be adapted to tell systems how to respond to spatial events in any environment. Environments, entities, contexts, intents, all defined in a document that a domain expert can read and modify. The PanoNode is the eyes. The World Model is the understanding. Together, they create a spatial intelligence layer that works anywhere.
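To make that concrete, here is what a Playbook-style document might look like, written as a Python dict for the sake of the example. Every key and value is an illustrative assumption built only from the four concepts named above (environments, entities, contexts, intents); the actual Playbook schema is not shown in this interview:

```python
# Hypothetical Playbook-style fragment — invented schema, for illustration.
playbook = {
    "environment": {"type": "football_pitch", "length_m": 105, "width_m": 68},
    "entities": ["player", "ball", "referee"],
    "contexts": [
        {"name": "attacking_transition",
         "when": "ball_speed_ms > 8 and ball_heading == 'toward_goal'"},
    ],
    "intents": [
        {"on": "attacking_transition",
         "do": "cue_ptz_cameras", "priority": "high"},
    ],
}

# A domain expert edits the document; systems consume it declaratively.
for intent in playbook["intents"]:
    print(f"on {intent['on']}: {intent['do']} (priority {intent['priority']})")
```

Swap the environment and the context definitions and the same document shape describes a port, a campus, or a retail floor.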
The Dev Kit Program#
Q: When can people get one?
Right now, through our Dev Kit program. We are working with key partners who want to build on PanoNode data before the commercial launch. The Dev Kit includes the hardware, documentation, API access, and direct engineering support. We want the first wave of builders to shape the platform alongside us.
Q: Who should sign up?
Anyone who looks at their venue (whether that's a football pitch, a training facility, a commercial campus, or a logistics yard) and thinks: "If I had continuous spatial data from this space, I could build something valuable." Sports analytics companies that need reliable tracking data without building their own hardware. Federations that want to create data-driven player development programs across their entire network. Security and facilities teams that want spatial awareness without cloud dependency. Research institutions working on spatial AI, computer vision, or physical AI applications.
The sign-up is on our website. Tell us what you want to build, and we will get a Dev Kit into your hands.
The Scalability Effect#
Q: You have been in hardware for eighteen years. What makes this one different?
Every hardware product I have built before had a ceiling. Consumer electronics: you sell millions, but each one is disposable. Industrial systems: you build dozens, but each one is custom. The PanoNode breaks that pattern because it combines venue-grade reliability with a price point that enables mass deployment. And mass deployment changes the nature of the product.
One PanoNode at one venue is a tracking device. Fifty PanoNodes across a federation is a spatial data network. Five hundred PanoNodes across a league is a continental data infrastructure. At each scale, the value compounds. Cross-venue analytics become possible. Longitudinal player tracking across seasons becomes possible. Benchmark datasets that improve every AI model in the ecosystem become possible.
And here is the part that keeps me up at night, in a good way. Every PanoNode deployed is a potential OZ VI Venue upgrade. The venue that starts with a fifteen-hundred-dollar three-node PanoNode deployment and proves the value of tracking data is the venue that upgrades to VI Venue when they are ready for full production. The data contracts carry over. The API integrations carry over. The institutional knowledge of working with spatial data carries over. Nothing is wasted.
Q: From five hundred dollars to a full production system.
From data infrastructure to broadcast infrastructure. From tracking to production. From awareness to orchestration. The PanoNode is not the end product. It is the beginning. And because everything we build shares the same data layer (the same Cortex bus, the same Spatial API, the same World Model) the beginning connects seamlessly to everything that comes after.
Q: Final thought?
I have spent eighteen years making electronics survive the physical world. Sealed enclosures that outlast seasons. Cooling systems that work without fans. Power protection that absorbs stadium chaos. All of that knowledge is inside this device. But the PanoNode is not just a smaller version of what we have built before. It is the first time we can put that knowledge into a device that anyone can afford. Data infrastructure for every venue. Spatial intelligence at every scale. That is what this is. And we are just getting started.