We Stopped Writing Code and Started Directing Intelligence

Article · February 25, 2026


A single camera cut during a live match. The Action Manager chose it after evaluating six feeds, player positions, ball trajectory, and broadcast grammar, all in under a second. The director never touched a button; she had set the intent three hours earlier. Suresh Gohane watched this happen and realized he was looking at the future of software engineering. Not metaphorically. Literally. At OZ, the engineers now work the same way: they set intent through precise specifications, AI agents execute the mechanical work (writing code, running tests, generating documentation), and human judgment governs the loop. "We used to write code," Suresh says. "Now we direct intelligence. The shift happened faster than anyone expected, and it changed everything."

The Shift

"In 2023, our engineering process looked like every other technology company's," Suresh says. "Engineers wrote code. They wrote tests. They debugged. They deployed. They wrote documentation. The cycle was: understand the problem, design a solution, implement it line by line, verify it works, ship it. The human wrote every line."

"By early 2025, something had changed. AI agents could write reliable code from clear specifications. Not perfect code, but code that was correct enough to be a strong starting point. Our engineers started spending less time writing and more time specifying: defining what the system should do, in enough detail that an agent could produce a working implementation. The role shifted from author to director."

"Today, in 2026, the shift is complete. Our engineers are orchestrators. They design systems. They write specifications. They direct agents to implement. They review, refine, and validate. They make judgment calls that agents cannot make: architectural decisions, trade-off analysis, user experience intuition. The mechanical work (the boilerplate, the repetitive patterns, the routine testing) is handled by agents."

He's careful to frame this correctly. "This doesn't mean our engineers are less technical. It means they are technical at a higher level. Understanding how to direct an agent to implement a complex subsystem requires deeper architectural understanding than writing it yourself. You must know what to specify, what to leave flexible, and what constraints to enforce. You must read agent output critically and catch the subtle errors that look correct on the surface. The bar went up, not down."

The Parallel

Suresh draws the parallel explicitly.

"In OZ Studio, the director sets intent. The Action Manager (the AI) executes the mechanical decisions: which camera, what framing, when to cut. The director focuses on storytelling and editorial judgment. Every override improves the AI's understanding of the director's preferences. The system learns the director's style."

"In our engineering process, the engineer sets intent through a specification. The AI agents execute: writing code, generating tests, formatting documentation. The engineer focuses on architecture and judgment. Every correction improves the agent's understanding of our codebase, our patterns, our standards. The system learns our engineering style."

"The same principles apply in both domains," he says. "Human intent. AI execution. Human judgment as the override. A feedback loop that compounds. And governance: clear rules about what the AI can do autonomously and where human approval is required."

The parallel between AI-directed broadcast production and AI-directed software development isn't a metaphor. It's a design principle. The same governance model (human intent, AI execution, human override, compounding feedback) applies at every layer of OZ's operation. This consistency is deliberate: a team that practices orchestrating intelligence in one domain develops the instinct to apply it in every domain.

Everything as a Specification

The shift to orchestration only works if the intent is clear. An agent that receives a vague specification produces vague output, the same way that an Action Manager with an unclear playbook makes inconsistent camera decisions.

"This is why we run an everything-as-code culture," Suresh says. "Every playbook, every deployment checklist, every production rule, every engineering specification is a structured document. Readable by humans. Executable by systems. Version-controlled. Reviewable. The specification is the source of truth, not a conversation, not a meeting, not a document that lives in someone's head."

"When an engineer writes a specification for a new subsystem, that specification must be clear enough for an agent to implement it. This forces a discipline of thought that most engineering organizations never develop. You can't be vague. You can't leave implicit assumptions. You can't write 'handle edge cases appropriately'; you must define what the edge cases are and what 'appropriate' means. The agent will do exactly what you specify. Nothing more. Nothing less."
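
The discipline Suresh describes can be sketched in code. The following is a minimal Python illustration of a specification that names every edge case explicitly instead of saying "handle appropriately"; the `SubsystemSpec` structure, field names, and the camera-selector example are hypothetical, not OZ's actual format.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EdgeCase:
    """One explicitly named edge case and its required behavior."""
    condition: str
    required_behavior: str

@dataclass(frozen=True)
class SubsystemSpec:
    """A machine-readable specification: nothing implicit, nothing vague."""
    name: str
    intent: str                 # what the subsystem must accomplish
    constraints: list[str]      # hard rules the agent must not violate
    edge_cases: list[EdgeCase]  # every case spelled out, no "appropriately"

# Illustrative spec for a hypothetical camera-feed selector.
spec = SubsystemSpec(
    name="camera_feed_selector",
    intent="Pick the feed with the highest relevance score each tick.",
    constraints=[
        "never hold one feed longer than 12 seconds",
        "cuts must respect the 30-degree rule",
    ],
    edge_cases=[
        EdgeCase(condition="all feeds score below threshold",
                 required_behavior="fall back to the wide master shot"),
        EdgeCase(condition="selected feed drops frames",
                 required_behavior="cut to next-ranked feed within 500 ms"),
    ],
)

assert spec.edge_cases, "a spec with no named edge cases is not reviewable"
```

Because the spec is a plain data structure, it can be version-controlled, reviewed in a pull request, and consumed by an agent the same way a human reads it.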

"Jagruti talks about clarity as the foundational hiring criterion," he adds, referring to his colleague's interview on the subject. "She's right, and this is why. In an everything-as-code culture, the quality of your specifications is the quality of your output. An engineer who thinks clearly writes clear specifications. Clear specifications produce correct implementations. This isn't a soft skill. It's the most consequential technical skill in our organization."

Intelligence Loops

Suresh describes how the traditional boundaries between development, operations, and quality assurance have dissolved at OZ.

"In a conventional organization, development writes the code. QA tests it. Operations deploys and monitors it. Three teams, three handoffs, three potential failure points," he says. "At OZ, these functions aren't separate teams. They are loops, continuous cycles of build, verify, deploy, observe, and improve, and agents participate in every loop."

"An agent writes code. A different agent runs the test suite. A third agent analyzes the test results and suggests improvements. A fourth agent monitors the deployment for anomalies. When an anomaly is detected, the observation feeds back into the specification, and the cycle repeats. No handoff. No ticket. No waiting."

"The human's role in these loops is governance," Suresh says. "Setting the boundaries. Defining what the agents can do autonomously and where they must ask for approval. Reviewing the critical decisions. Investigating the edge cases that agents flag but can't resolve. The engineer is no longer the builder. The engineer is the governor of an intelligence loop."

He connects this to OZ's product. "This is exactly what the Action Manager does during a match. It runs an intelligence loop: observe the state of play, select a camera, frame the shot, execute the transition. The director governs the loop: setting intent, reviewing suggestions, overriding when judgment disagrees with automation. The same mental model. The same governance structure. The same principle: human judgment, AI execution, continuous feedback."

Suresh Gohane

OZ Cortex / AI Stack Lead

AI Runtime & Cortex

“We used to write code. Now we direct intelligence. The shift happened faster than anyone expected, and it changed everything about how we build.”

Governance as Engineering

"Governance isn't bureaucracy," Suresh says. "It's engineering. The same way Cortex enforces timing budgets on AI models (preventing any single model from consuming resources that degrade the whole system), our engineering governance enforces quality budgets on agent output."

He describes the framework. "Every agent-generated change goes through a verification pipeline. Not just testing; structured review against our specifications, our patterns, our standards. The pipeline catches the subtle problems: a correct implementation that violates an architectural principle. A working feature that introduces a dependency we want to avoid. A clean solution that doesn't match the mental model the rest of the team uses."
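
A structured review that goes beyond "do the tests pass" can be sketched with static checks over agent output. This Python example uses the standard `ast` module; the forbidden-dependency list and the public-surface budget are hypothetical policy values, not OZ's real rules.

```python
import ast

FORBIDDEN_IMPORTS = {"pickle"}   # hypothetical dependency policy
MAX_PUBLIC_FUNCTIONS = 5         # hypothetical architectural budget

def verify(source: str) -> list[str]:
    """Structured review of agent-generated code: tests alone are not enough."""
    findings = []
    tree = ast.parse(source)
    # Catch a working change that introduces a dependency we want to avoid.
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                if alias.name in FORBIDDEN_IMPORTS:
                    findings.append(f"forbidden dependency: {alias.name}")
    # Catch a correct implementation that violates an architectural budget.
    public = [n for n in tree.body
              if isinstance(n, ast.FunctionDef) and not n.name.startswith("_")]
    if len(public) > MAX_PUBLIC_FUNCTIONS:
        findings.append("module exceeds public-surface budget")
    return findings

# Agent output that works but violates policy: the pipeline blocks it.
agent_output = (
    "import pickle\n\n"
    "def load(path):\n"
    "    return pickle.load(open(path, 'rb'))\n"
)
findings = verify(agent_output)
```

In this sketch the change would be rejected even though its tests would pass, which is exactly the class of subtle problem the pipeline exists to catch.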

"We also build circuit breakers," he says. "An agent that detects unusual patterns in its own output (generating too many files, modifying too many systems, encountering too many errors) stops and escalates. It doesn't push through. It asks a human. This is the same principle as our production safety modes: the system has the confidence to act autonomously within defined boundaries, and the discipline to stop when it reaches the edge."
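
The circuit-breaker idea can be shown as a small budget tracker that raises instead of pushing through. The class name, the budget values, and the escalation message below are illustrative assumptions.

```python
class CircuitBreakerTripped(Exception):
    """Raised when an agent's own output exceeds its safety budget."""

class AgentRun:
    # Hypothetical budgets; real values would come from governance config.
    MAX_FILES_TOUCHED = 20
    MAX_ERRORS = 3

    def __init__(self):
        self.files_touched = 0
        self.errors = 0

    def record_file(self):
        self.files_touched += 1
        self._check()

    def record_error(self):
        self.errors += 1
        self._check()

    def _check(self):
        exceeded = (self.files_touched > self.MAX_FILES_TOUCHED
                    or self.errors > self.MAX_ERRORS)
        if exceeded:
            # Stop and escalate: do not push through, ask a human.
            raise CircuitBreakerTripped("escalating to human review")

run = AgentRun()
for _ in range(AgentRun.MAX_ERRORS):
    run.record_error()              # within budget: keeps going
try:
    run.record_error()              # budget exceeded: stops and escalates
except CircuitBreakerTripped as exc:
    escalation = str(exc)
```

The agent acts autonomously inside the boundary and halts at the edge, mirroring the production safety modes Suresh describes.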

"Audit trails are automatic. Every agent decision, every specification change, every deployment is documented, timestamped, traceable. When something goes wrong (and things go wrong), we can reconstruct the chain of decisions that led to the problem. Not blame. Understanding. The audit trail exists so we can improve the governance, not punish the agent or the engineer."
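
An append-only, timestamped trail that supports reconstruction can be sketched in a few lines. `AuditTrail`, its fields, and the example actor names are hypothetical, shown only to make the idea concrete.

```python
import time

class AuditTrail:
    """Append-only, timestamped record of every agent decision."""

    def __init__(self):
        self._entries = []

    def record(self, actor: str, action: str, detail: str) -> None:
        # Entries are only ever appended, never edited or deleted.
        self._entries.append({
            "ts": time.time(),
            "actor": actor,
            "action": action,
            "detail": detail,
        })

    def reconstruct(self, action: str) -> list[dict]:
        # Trace the chain of decisions behind a given action, in order.
        return [e for e in self._entries if e["action"] == action]

trail = AuditTrail()
trail.record("coding-agent-7", "spec_change", "tightened feed-drop timeout")
trail.record("coding-agent-7", "deploy", "replay-clipper v2 to region A")
chain = trail.reconstruct("deploy")
```

Because the trail is queryable by action, an incident review starts from the recorded chain rather than from memory, which is what makes it about understanding rather than blame.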

The Team That Directs

"People ask me how many engineers we have," Suresh says. "The answer depends on how you count. If you count humans, we are a small team. If you count the agents that our humans direct (the coding agents, the testing agents, the monitoring agents, the documentation agents), we are a much larger operation. The output of our team doesn't match what you'd expect from our headcount. It matches what you'd expect from a team several times our size."

"This isn't because we work longer hours. It's because we work differently. Every engineer directs a fleet of agents. Every specification they write becomes executable. Every review they conduct improves the agents' understanding of our standards. The work compounds; each sprint, the agents get better because the specifications get clearer and the codebase patterns become more consistent."

He describes what he looks for in talent. "I need engineers who can think at the system level. Who can write a specification that is precise enough for an agent to implement correctly, and flexible enough to accommodate the edge cases that the agent will encounter. Who can review agent output with a critical eye, not just checking whether it works, but whether it is the right approach. Who have the judgment to know when the agent's solution is good enough and when it needs human intervention."

"The rarest quality," he adds, "is the combination of velocity and governance. Many engineers can move fast. Many engineers can be careful. Few can do both. At OZ, we need people who ship quickly within safe boundaries. Who run experiments but never without guardrails. Who trust agents to do their job but verify the output before it reaches production. That's the orchestrator mindset. And it's the mindset that defines our engineering culture."

The orchestrator mindset combines three qualities: velocity (shipping fast), reliability (shipping correctly), and governance (shipping safely). Most engineers optimize for one or two. The exceptional ones, the ones who thrive in an AI-native engineering organization, hold all three simultaneously. This is the profile OZ hires for: engineers who can direct fleets of AI agents while maintaining the judgment, discipline, and architectural taste that agents can't provide.

What Changed Forever

"Agentic AI changed how we build software," Suresh says. "That's already done. There's no going back to the world where every line of code is hand-written by a human. The engineers who resist this shift aren't wrong about the value of deep technical understanding; they are wrong about where that understanding is best applied. It's no longer best applied to writing code. It's best applied to directing the intelligence that writes code."

"At OZ, we embraced this early because we had no choice. A small team building a five-layer platform that operates at live venues in multiple countries can't afford to write every line by hand. We needed leverage. Agents gave us leverage. But leverage without governance is chaos. So we built governance into every loop, every pipeline, every specification. The agents execute. The humans govern. The system compounds."

He pauses, choosing his closing words.

"Software development has moved from humans writing code to humans directing intelligence. The companies that understood this in 2024 and 2025 are now twelve to eighteen months ahead of the ones that are still debating it. We are one of those companies. Not because we are smarter. Because we had no choice. Our ambition exceeded what hand-coding could deliver, so we learned to orchestrate. And now that orchestration is part of who we are, it compounds every day. The gap doesn't shrink. It widens."


This interview is part of the OZ Interview Series, profiling the team building the world model for the physical world.
