Why the Hardest Problem in AI Is the Box, Not the Model
You've said that hardware is the most underestimated layer in the AI stack. Why?
Everyone wants to talk about models. Model architecture, training data, how fast the AI runs. That's the glamorous work. AND the models are genuinely impressive. The leaps in detection accuracy, tracking precision, and real-time performance have been extraordinary. BUT nobody wants to talk about where that compute physically lives. Where does the AI chip sit? What enclosure protects it? What cooling system keeps it from overheating? What power protection absorbs the electrical shock when a stadium's floodlight bank cycles on? THEREFORE we end up with a bizarre situation: billions of dollars invested in AI models, running on commodity hardware that fails within weeks of outdoor deployment.
You've seen that failure pattern before?
Repeatedly. Across industries: surveillance, autonomous vehicles, industrial IoT, smart city infrastructure. The failure mode is always the same. A company builds exceptional software, demos it beautifully in a controlled environment, then deploys it outdoors on hardware designed for a server room. AND the demo works perfectly. BUT a data center has stable power, stable temperature, stable humidity. A stadium gantry in July has none of those things. THEREFORE the AI chip overheats when outdoor temperature climbs past 38 degrees, processing deadlines are missed, video frames are dropped, and a live broadcast that half a million people are watching degrades, not because the AI failed, but because the cooling system wasn't designed for sunlight.
The chain from a poorly designed heat sink to a broadcast failure is exactly four links long. Hardware engineers count those links. Software engineers rarely know they exist.
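To make that chain concrete, here is a minimal sketch of how rising ambient temperature turns into dropped frames. Every number in it (the 180 W load, the thermal resistance, the throttle point, the per-degree slowdown) is an illustrative assumption, not an OZ measurement; the point is only the shape of the causal chain.

```python
# Illustrative sketch of the four-link chain: ambient heat -> junction temperature
# -> throttled clock -> missed frame deadline. All figures are assumptions.

FRAME_BUDGET_MS = 33.3      # ~30 fps broadcast frame deadline
NOMINAL_INFER_MS = 24.0     # inference time at full clock (assumed)
THROTTLE_TEMP_C = 85.0      # junction temperature where throttling begins (assumed)

def chip_temp_c(ambient_c: float, power_w: float, theta_c_per_w: float = 0.30) -> float:
    """Steady-state junction temperature: ambient plus rise across the thermal path."""
    return ambient_c + power_w * theta_c_per_w

def inference_ms(junction_c: float) -> float:
    """Past the throttle point, assume throughput degrades linearly per degree."""
    if junction_c <= THROTTLE_TEMP_C:
        return NOMINAL_INFER_MS
    slowdown = 1.0 + 0.08 * (junction_c - THROTTLE_TEMP_C)   # 8% slower per °C (assumed)
    return NOMINAL_INFER_MS * slowdown

for ambient in (25, 32, 38, 42):
    t_j = chip_temp_c(ambient, power_w=180)   # ~180 W sustained AI load (assumed)
    latency = inference_ms(t_j)
    status = "FRAME DROPPED" if latency > FRAME_BUDGET_MS else "on time"
    print(f"ambient {ambient:>2}°C -> junction {t_j:5.1f}°C -> {latency:5.1f} ms/frame -> {status}")
```

With these made-up numbers the unit is fine at 25 and 32 degrees and starts missing frame deadlines just past 38, which is the shape of the failure described above.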
When did OZ decide to own the hardware stack?
Early. Before it was rational by startup standards. The decision was driven by a simple observation: the reliability requirements for permanent outdoor venue infrastructure are fundamentally different from anything the commodity hardware market serves. AND you can buy off-the-shelf server enclosures. They're well-designed for climate-controlled environments. BUT a standard outdoor camera enclosure is built for a static camera drawing minimal power. We needed to house AI compute drawing hundreds of watts, with robotic camera mechanisms, in an enclosure sealed against rain, dust, and everything a stadium can throw at it, operating continuously across every season. THEREFORE no product existed. We built it.
We melted three prototypes before we got the thermal architecture right. I say that without embarrassment. It's a badge of understanding. Every melted prototype taught us something that no simulation fully captures.
OZ's hardware decision inverts the typical AI startup playbook: instead of treating hardware as a commodity input and competing on software, OZ designs hardware as the foundation of the competitive moat. Custom enclosures, cooling systems, power protection, and mechanical mounting. Each one is a barrier that capital alone cannot replicate.
Walk us through what goes into an OZ VI Venue unit.
Start with the enclosure itself. It's a completely sealed system: no fans, no air intake, no vents. AND sealed enclosures are straightforward in concept. You seal the box, keep the weather out. BUT an AI chip running continuous processing generates enormous heat. In a data center, industrial air conditioning handles that. On a stadium gantry, there is no air conditioning. There is only your engineering and your understanding of how heat moves. THEREFORE we designed a system where heat transfers from the AI chip through the enclosure walls to the outside air. The shell itself becomes the radiator. No moving parts in the cooling path, nothing to fail.
This eliminates dust ingress, moisture ingress, fan failure, every mechanical failure mode that kills commodity hardware in outdoor deployments. Sealed means sealed. There are no filters to clog. There is no maintenance interval for cleaning air intakes. The enclosure operates for years without anyone opening it.
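A back-of-the-envelope way to see why the shell-as-radiator approach dominates the design: at steady state the temperature rise of the shell over ambient is the heat load divided by the product of the shell's effective heat-transfer coefficient and its surface area. The sketch below works that through with assumed figures (the 250 W load, the 0.6 m² area, the 10 W/m²·K coefficient, and the function name are all illustrative, not OZ design values).

```python
# Steady-state estimate of how hot a sealed, passively cooled shell runs:
# delta_T = Q / (h * A). All numbers are illustrative assumptions.

def shell_temperature_rise(heat_w: float, area_m2: float, h_w_per_m2k: float) -> float:
    """Temperature rise of the shell surface over ambient air."""
    return heat_w / (h_w_per_m2k * area_m2)

heat_load_w = 250       # compute + gimbal motors + power electronics (assumed)
shell_area_m2 = 0.6     # effective external surface area (assumed)
h_natural = 10.0        # natural convection + radiation, W/(m²·K) (assumed)

delta_t = shell_temperature_rise(heat_load_w, shell_area_m2, h_natural)
ambient_c = 38
print(f"Shell runs ~{delta_t:.0f}°C above ambient -> ~{ambient_c + delta_t:.0f}°C surface at {ambient_c}°C outside")
```

With those assumptions the shell alone sits roughly 40 degrees above ambient before counting the conduction path from die to wall, which is why surface area, wall material, and interface design carry the whole cooling budget once fans are off the table.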
Power conditioning sounds mundane. Why does it matter?
Because stadium power is hostile. AND most people think electrical power is clean and stable; you plug in, you get 240 volts. BUT stadium electrical systems are shared infrastructure. When a bank of twenty floodlights cycles on simultaneously, it creates power spikes and drops that can damage sensitive electronics. Point-of-sale systems, scoreboard controllers, broadcast equipment, everything is competing on the same electrical infrastructure. THEREFORE we designed custom power protection specifically for the demands of continuous AI processing: high sustained draw with headroom for peak loads, filtering out the electrical chaos that would destroy off-the-shelf equipment.
This isn't a generic backup battery bolted onto the rack. It's purpose-built power electronics designed for the specific conditions of stadium environments.
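As a rough illustration of what supervising a hostile feed involves, here is a minimal sketch of input-voltage classification: flag sags and surges (the floodlight-bank scenario) and decide when to ride through on stored energy. The class name, thresholds, and sample values are assumptions for illustration, not OZ firmware.

```python
# Simplified input-power supervision: classify incoming RMS voltage and decide
# when to ride through on holdup energy. Thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class PowerSupervisor:
    nominal_v: float = 240.0
    sag_limit: float = 0.85     # below 85% of nominal: ride through on holdup
    surge_limit: float = 1.10   # above 110% of nominal: clamp and log the event

    def classify(self, rms_v: float) -> str:
        ratio = rms_v / self.nominal_v
        if ratio < self.sag_limit:
            return "SAG: switch load to holdup capacitors / battery"
        if ratio > self.surge_limit:
            return "SURGE: clamp engaged, event logged for maintenance"
        return "OK"

supervisor = PowerSupervisor()
# A floodlight bank cycling on can drag the shared feed down, then overshoot
# as regulation recovers (sample values below are made up).
for sample_v in (239.8, 221.0, 198.5, 204.0, 262.3, 241.1):
    print(f"{sample_v:6.1f} V -> {supervisor.classify(sample_v)}")
```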
And the robotic gimbals?
This is where mechanical engineering meets AI in the most literal way. Our cameras are broadcast-quality robotic systems that can pan, tilt, and zoom, driven by real-time AI commands. The AI detects a player, calculates the optimal framing, and moves the camera to follow them, all faster than a human blink.
AND robotic camera systems are well-understood in aerospace and surveillance. BUT those industries operate in controlled conditions or for limited durations. Our cameras operate continuously for years, outdoors, under temperature swings from freezing to scorching, under vibration from crowd activity, under UV exposure that breaks down materials over time. THEREFORE every component (every bearing, every motor, every sensor) is rated for five-plus years of continuous outdoor operation. We validate across real weather extremes, not just numbers on a specification sheet.
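To give a feel for the loop where AI commands meet motors, here is a stripped-down sketch of a gimbal axis chasing a framing target under a slew-rate limit. Real broadcast gimbals use far more sophisticated motion profiles; the gains, rate limits, and loop rate below are illustrative assumptions rather than OZ's control design.

```python
# Minimal proportional control sketch: the tracker supplies a target framing,
# and each axis closes the gap every control tick, clamped to its slew rate.
# All gains and limits are illustrative assumptions.

def step_axis(current_deg: float, target_deg: float,
              max_rate_deg_s: float, dt_s: float, gain: float = 4.0) -> float:
    """One proportional step toward the target, clamped to the axis rate limit."""
    command = gain * (target_deg - current_deg)             # requested deg/s
    command = max(-max_rate_deg_s, min(max_rate_deg_s, command))
    return current_deg + command * dt_s

pan, tilt = 10.0, 2.0                   # current camera pose (degrees)
target_pan, target_tilt = 34.0, 7.5     # framing chosen by the detector/tracker

DT = 0.01                               # 100 Hz control loop (assumed)
for _ in range(50):                     # half a second of motion
    pan = step_axis(pan, target_pan, max_rate_deg_s=120.0, dt_s=DT)
    tilt = step_axis(tilt, target_tilt, max_rate_deg_s=60.0, dt_s=DT)
print(f"pan {pan:.1f}°, tilt {tilt:.1f}° after 0.5 s")
```

The interesting engineering is everything the sketch leaves out: doing this continuously for years while bearings wear, temperatures swing, and the structure under the camera vibrates.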
Dr. Rushikesh Deshmukh
Head of Enclosure & Edge Systems
“Software people think deployment means pushing code. For us, deployment means bolting steel to concrete in a rainstorm.”
How do you think about hardware as competitive advantage?
People are skeptical of hardware advantages. "Software is eating the world," they say. AND that's true for digital-only companies. You can copy software: read a paper, replicate an architecture, train a competing model. BUT you can't copy eighteen years of knowledge that tells you which failure modes emerge in year three, failure modes that don't appear in a six-month pilot. You cannot copy the deployment procedures, the failure recovery patterns, the institutional knowledge of what breaks and why under real-world conditions. THEREFORE the real hardware advantage is the accumulated operational knowledge (what I call scar tissue) that comes from deploying and maintaining hardware in the field, across seasons, for years. Hardware can't be forked on GitHub.
You can update software overnight. Hardware has to survive five seasons. That's a fundamentally different engineering discipline. The iteration cycles are longer, the cost of failure is higher, and the knowledge compounds differently.
Every failed competitor deployment that used commodity hardware makes our deployments more valuable by comparison. Every successful OZ season that passes without a hardware failure is a data point that venue operators weigh heavily. Hardware reliability compounds as reputation. And reputation is the hardest moat to erode.
How does the hardware foundation change what the software team can do?
This is the part that vertical integration skeptics miss. When Bhagyashree's team ships a new AI model, they know exactly how much processing power, memory, and cooling capacity is available. They can push the model to the absolute limit of the hardware because the hardware was designed with known, precisely measured limits, not guesses from a vendor's specification sheet.
The conversation is: "I need more headroom for the next model generation." And that conversation results in an engineering change, not a six-month procurement process that delivers a compromise.
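One way to picture "designing to known, measured limits" is a deployment gate that rejects a model build whose profiled demands exceed the unit's characterized budget. The sketch below is a minimal illustration of that idea; the field names, headroom policy, and figures are assumptions, not an OZ interface.

```python
# Illustrative deployment gate: a model ships only if its measured power, memory,
# and latency fit the unit's characterized budget with agreed headroom.
# Schema and numbers are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class HardwareBudget:
    sustained_power_w: float      # what the sealed enclosure can shed continuously
    memory_gb: float
    frame_budget_ms: float

@dataclass
class ModelProfile:
    measured_power_w: float       # profiled on the actual unit at worst-case ambient
    memory_gb: float
    p99_latency_ms: float

def fits(budget: HardwareBudget, model: ModelProfile, headroom: float = 0.15) -> bool:
    """Accept only if every measured figure leaves the agreed headroom."""
    margin = 1.0 - headroom
    return (model.measured_power_w <= budget.sustained_power_w * margin
            and model.memory_gb <= budget.memory_gb * margin
            and model.p99_latency_ms <= budget.frame_budget_ms * margin)

venue_unit = HardwareBudget(sustained_power_w=200, memory_gb=32, frame_budget_ms=33.3)
next_model = ModelProfile(measured_power_w=168, memory_gb=26.5, p99_latency_ms=27.1)
print("ship" if fits(venue_unit, next_model) else "needs a hardware change or a smaller model")
```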
What does the next generation of OZ hardware look like?
Three vectors. More AI processing power per watt of heat generated; each new chip generation is a step change, and the next will push further. Swappable compute modules that allow AI chip upgrades without replacing the entire enclosure, extending the enclosure's useful life across multiple hardware generations. And enhanced self-monitoring, where the enclosure reports its own temperature, power quality, and mechanical health alongside the spatial data from the cameras. The hardware becomes self-aware.
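As a sketch of what that self-monitoring stream might carry, here is an illustrative health record an enclosure could emit alongside camera data. The schema, field names, and alert thresholds are assumptions for illustration, not an OZ telemetry format.

```python
# Illustrative self-monitoring record: thermal, power, and mechanical health
# reported by the enclosure itself. Schema and thresholds are assumptions.

from dataclasses import dataclass, asdict
import json, time

@dataclass
class EnclosureHealth:
    unit_id: str
    timestamp: float
    chip_temp_c: float
    shell_temp_c: float
    input_voltage_rms: float
    holdup_events_24h: int         # times the unit rode through a power sag
    gimbal_motor_current_a: float  # creeping current can signal bearing wear

    def alerts(self) -> list[str]:
        out = []
        if self.chip_temp_c > 90:
            out.append("thermal: approaching throttle point")
        if self.holdup_events_24h > 10:
            out.append("power: unusually dirty feed in the last 24 h")
        if self.gimbal_motor_current_a > 1.8:
            out.append("mechanical: pan motor drawing above baseline")
        return out

report = EnclosureHealth("venue-07-gantry-3", time.time(), 71.4, 58.9, 236.2, 2, 1.1)
print(json.dumps({**asdict(report), "alerts": report.alerts()}, indent=2))
```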
The hardware advantage has three layers: proprietary design protects the engineering, operational scar tissue protects the knowledge, and venue relationships protect the deployment. A competitor would need to replicate all three simultaneously, a process that takes years, not capital.
Final thought?
Software gets replaced every year. Hardware that works (truly works, in the rain, in the cold, in the heat, year after year) becomes the foundation that everything else is built on. People ask what keeps me motivated after eighteen years. It's this: I'm not building something that ships once. I'm building something that survives.