Skills

A Skill is a named, versioned, graph-native enrichment program. You describe a relationship rule in natural language. ArcFlow compiles it once — using your configured LLM endpoint — into a cached graph pattern. From that point on, PROCESS NODE and REPROCESS EDGES execute the compiled pattern at graph-native speed: no model calls, no API latency, no token cost.

The LLM is the compiler frontend; ArcFlow is the optimized runtime. The two roles never overlap at execution time.


The compile-once model

CREATE SKILL ZoneCoOccupancy
  FROM PROMPT 'Link two Robots as ZONE_PEER if they occupy the same Zone'
  ALLOWED ON [Robot]
  TIER SYMBOLIC

At creation time, ArcFlow sends the prompt and relevant schema context to OZ's compiler endpoint — authenticated automatically through your oz.com/world connection. No API keys to manage. Compilation results are cached: the same rule against the same schema returns an instant response at no cost to your allowance. On a cache miss, the schema context is prefix-cached across calls, so you pay only for the rule text itself. The compiled WorldCypher pattern is returned and stored in the skill alongside the original prompt.

Skill stored in graph:
  name:             ZoneCoOccupancy
  prompt_template:  'Link two Robots as ZONE_PEER if they occupy the same Zone'
  compiled_pattern: 'MATCH (a:Robot)-[:OCCUPIES]->(z:Zone)<-[:OCCUPIES]-(b:Robot)
                     WHERE a.id < b.id
                     MERGE (a)-[:ZONE_PEER {zone_id: z.id}]->(b)'
  compiled_by:      claude-opus-4-6
  version:          1
  tier:             symbolic

The original prompt is preserved as auditable intent — you can always read it to verify the compiled form matches what you asked for. The compiled pattern is what executes.
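
Because the skill itself is stored in the graph, the stored form can be inspected like any other data. A minimal sketch, assuming skills are exposed as queryable Skill nodes carrying the properties shown in the listing above:

-- read back the stored skill to audit intent against the compiled form
MATCH (s:Skill {name: 'ZoneCoOccupancy'})
RETURN s.prompt_template, s.compiled_pattern, s.compiled_by, s.version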


PROCESS NODE: zero LLM

PROCESS NODE (n:Robot)

Runs the compiled pattern against every matched node. No model call. No API request. The engine evaluates the graph pattern directly — finding co-occupying robots, scoring the relationship by zone confidence, materializing edges.

At 1,000 robots this takes milliseconds. At 1,000,000 it scales the same way — the pattern is graph algebra, not inference.
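
The edges the pattern materializes are ordinary graph edges, queryable the moment the run completes. A sketch reusing the ZoneCoOccupancy example from above:

-- list robot pairs linked by the compiled ZONE_PEER pattern
MATCH (a:Robot)-[p:ZONE_PEER]->(b:Robot)
RETURN a.id, b.id, p.zone_id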


REPROCESS EDGES: zero LLM

REPROCESS EDGES WHERE confidence < 0.7

When the world model changes — new sensor data arrives, observation confidence improves — low-trust edges can be refreshed. ArcFlow deletes edges below the threshold and re-runs the same compiled pattern for fresh scores. The LLM is not consulted. The pattern was compiled once and it runs again, unchanged.

To get a materially different pattern, you create a new skill version — which calls the LLM once more to recompile.
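
One way to sketch that recompile is re-issuing CREATE SKILL with the revised rule; whether the version increments automatically on re-creation is an assumption here, not documented behavior:

-- one new LLM call; the revised rule yields a fresh compiled pattern
CREATE SKILL ZoneCoOccupancy
  FROM PROMPT 'Link two Robots as ZONE_PEER if they occupy the same Zone
               and were both observed there in the same time window'
  ALLOWED ON [Robot]
  TIER SYMBOLIC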


What a skill actually contains

  Property           Description
  name               Unique identifier
  version            Semantic version — edges carry the version that built them
  prompt_template    The original natural language rule — auditable intent
  compiled_pattern   WorldCypher graph pattern produced by the LLM at creation time
  compiled_by        Model that compiled the pattern — provenance on the compilation itself
  allowed_on         Which node labels this skill applies to
  min_confidence     Minimum score for an edge to be materialized
  active             Whether the skill runs on PROCESS NODE calls
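
Since these properties live in the graph, a skill's operational switches can plausibly be flipped with ordinary updates. A hypothetical sketch, assuming Skill nodes expose the properties above as writable:

-- pause a skill and raise its materialization threshold
MATCH (s:Skill {name: 'CoDetectionLinker'})
SET s.active = false, s.min_confidence = 0.8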

Execution tiers

The tier determines what the LLM compiles to — the output format of the compilation step.

  Tier       LLM compiles to             Runtime                           Status
  SYMBOLIC   WorldCypher graph pattern   Sub-millisecond graph traversal   Active
  WASM       Sandboxed WASM module       ~1 ms compiled logic              Planned

TIER SYMBOLIC covers the vast majority of relationship rules — co-location, shared detection, causal proximity, coverage overlap. Any rule expressible as a graph path can be compiled to a SYMBOLIC pattern.

TIER WASM is for rules that require heavy computation beyond graph traversal — custom scoring functions, matrix operations, statistical models. The LLM generates the implementation; the WASM sandbox executes it deterministically.


World model use case

In a facility environment, sensors observe entities. Skills derive the relationships between them — co-located robots, shared detection chains, coverage gaps. These relationships don't come from sensors directly. They come from reasoning over observed facts.

-- Step 1: create the skill (LLM called once)
CREATE SKILL CoDetectionLinker
  FROM PROMPT 'Link two Robots as CO_DETECTED if the same Sensor detected both,
               weighted by the sensor reliability and detection confidence'
  ALLOWED ON [Robot]
  TIER SYMBOLIC
 
-- Step 2: materialize edges across the world model (zero LLM)
PROCESS NODE (n:Robot)
-- => 23 CO_DETECTED edges, avg confidence 0.87
 
-- Step 3: query derived relationships
MATCH (a:Robot)-[r:CO_DETECTED]->(b:Robot)
WHERE r.confidence > 0.8
RETURN a.name, b.name, r.confidence, r.zone_id
ORDER BY r.confidence DESC
 
-- Step 4: world model updated, refresh weak edges (zero LLM)
REPROCESS EDGES WHERE confidence < 0.7
-- => 6 edges re-evaluated with updated sensor data, avg confidence now 0.91

No ETL pipeline. No external enrichment service. The LLM authored the logic once; ArcFlow operates it forever.


Why this matters

Most graph databases treat edges as static assertions — inserted once, never questioned. Skills make edges derived: compiled by a model, scored by confidence, traced to the version that produced them, and re-evaluable as the world changes.

The model call happens at authoring time. Production is model-free.

  Stage                        LLM involved?   Cost
  CREATE SKILL — compilation   Yes, once       One compilation via your oz.com/world connection (cached results are free)
  PROCESS NODE — execution     No              Pure graph computation, unlimited
  REPROCESS EDGES — refresh    No              Pure graph computation, unlimited
  Query derived edges          No              Read-only graph traversal, unlimited

Compilation credentials are managed through your oz.com/world account, so there are no separate API keys to configure; as noted above, repeat compilations of the same rule against the same schema are served from cache at no cost.

For AI applications: every RAG answer can trace its evidence chain back to the specific skill version and compiled pattern that produced each supporting edge — with the original human-readable prompt preserved alongside it.
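
An evidence-chain lookup might be sketched as follows; the r.skill and r.skill_version edge properties are an assumption, extrapolated from "edges carry the version that built them" in the property table above:

-- trace a derived edge back to the skill version and prompt that produced it
MATCH (a:Robot)-[r:CO_DETECTED]->(b:Robot)
MATCH (s:Skill {name: r.skill, version: r.skill_version})
RETURN a.name, b.name, r.confidence, s.prompt_template, s.compiled_pattern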

See Also

  • CREATE SKILL — register and compile a skill
  • PROCESS NODE — execute compiled skills across the world model
  • REPROCESS EDGES — refresh weak edges without model calls
  • Confidence & Provenance — how skill edges carry scores and evidence chains
  • Using Skills — step-by-step guide with world model examples