© 2026 OZ. All rights reserved.


Architecture

One process, zero serialization: all modules share memory. ArcFlow is an SoC-style modular monolith. Like Apple's M1 chip, which puts CPU, GPU, and RAM on one die with unified memory instead of bolting them together over buses, ArcFlow unifies data infrastructure: graph storage, vector search, graph algorithms, and GPU dispatch share one GraphStore in one address space. No network hops. No message queues. No external cache. No separate vector database.

ArcFlow SoC Architecture

This architecture eliminates entire categories of infrastructure: no external cache service (the graph is already in-process), no separate vector database (vector indexes live alongside graph data), no workflow engine for orchestration (procedures run in the same runtime). The result is fewer moving parts, lower latency, and a single binary to deploy.

Design Principles

  • Deterministic: Same query + same state = same results. Always.
  • Local-first: Single binary, full authority, no network dependency.
  • Agent-native: Structured output, typed errors, machine-readable contracts.
  • Rust-native: Zero-cost abstractions, memory safety, single binary.
  • Zero serialization: Modules communicate through shared Rust types, not wire protocols.
  • Evidence-first: Every fact carries the ArcFlow Evidence Model — observation class, confidence score, provenance chain. Trust is a query dimension, not an afterthought.
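To make the zero-serialization principle concrete, here is a minimal Rust sketch. All types and function names are illustrative, not the actual arcflow API: two "modules" in one process exchange borrowed references to a shared in-memory type, so there is no encode/decode step between them.

```rust
// Hypothetical sketch (not the actual arcflow API): two modules in one
// process exchange borrowed references to a shared in-memory type, so
// there is no serialize/deserialize step between them.

/// A node as both the graph engine and the vector index see it.
struct Node {
    id: u64,
    embedding: Vec<f32>,
}

/// Cosine similarity between two embeddings.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (na * nb)
}

/// "Vector search" module: ranks the graph module's nodes by similarity.
/// It borrows the nodes directly -- no copy, no wire encoding.
fn rank_by_similarity<'a>(nodes: &'a [Node], query: &[f32]) -> Vec<&'a Node> {
    let mut ranked: Vec<&Node> = nodes.iter().collect();
    ranked.sort_by(|a, b| {
        cosine(&b.embedding, query)
            .partial_cmp(&cosine(&a.embedding, query))
            .unwrap()
    });
    ranked
}

fn main() {
    let graph = vec![
        Node { id: 1, embedding: vec![1.0, 0.0] },
        Node { id: 2, embedding: vec![0.0, 1.0] },
    ];
    let top = rank_by_similarity(&graph, &[0.9, 0.1]);
    println!("best match: node {}", top[0].id); // node 1
}
```

In a service-oriented stack, the same exchange would cross a socket and pay serialization on both sides; here it is a borrow.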

Three-Plane Architecture

Plane              | Authority                  | What lives here
Authored Workspace | Source of truth for intent | Schemas, queries, facts in git
Canonical Engine   | Engine-managed durability  | WAL, checkpoint, manifest
Derived Projection | Non-authoritative          | Exports, caches, compatibility files

Rules: Workspace → Engine (explicit load). Engine → Projection (explicit export). Projections never feed back as authority.
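The authority rules can be sketched as types. In this hypothetical Rust sketch (not the actual ArcFlow API), each plane is its own type, the legal flows are ordinary functions, and the forbidden flow is simply unrepresentable.

```rust
// Hypothetical sketch (not the actual ArcFlow API): encode each plane as
// its own type so the legal flows are ordinary functions and the illegal
// flow -- a projection feeding back as authority -- has no constructor.

struct Workspace(String);  // authored source of truth (schemas, queries, facts in git)
struct Engine(String);     // engine-managed durable state (WAL, checkpoint, manifest)
struct Projection(String); // derived, non-authoritative (exports, caches)

/// Explicit load: the only way to produce engine state.
fn load(workspace: &Workspace) -> Engine {
    Engine(workspace.0.clone())
}

/// Explicit export: the only way to produce a projection.
fn export(engine: &Engine) -> Projection {
    Projection(engine.0.clone())
}

// Deliberately absent: any `fn(&Projection) -> Engine`. Projections have
// no path back into the authority chain.

fn main() {
    let ws = Workspace("CREATE (:Venue {name: 'Arena'})".into());
    let engine = load(&ws);
    let snapshot = export(&engine);
    println!("exported {} bytes", snapshot.0.len());
}
```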

Module Architecture

Core (bottom of stack):

Layer        | Responsibility
Core types   | Node, relationship, property primitives; confidence and evidence types
Graph engine | Graph store, property index, adjacency structures, incremental computation, standing queries, window operators, live algorithms
Storage      | Journaled storage, WAL, snapshot/restore

Query and incremental (middle):

Layer          | Responsibility
Query IR       | Compiled query representation; the target for the query compiler
Query compiler | WorldCypher (ISO GQL) parser, query planning, incrementalization
Runtime        | Execution engine, ArcFlow Adaptive Dispatch, GPU kernels

Public API (top of stack):

Surface  | Responsibility
Rust SDK | Published as arcflow on crates.io
CLI      | REPL, TCP/HTTP/PostgreSQL servers, self-update, structured output; user-facing binary: arcflow
FFI      | C ABI for Python, TypeScript, and C++ bindings
MCP      | Model Context Protocol server (stdio JSON-RPC)
WASM     | Browser and edge runtime

Dependencies flow inward: the transport and CLI layers depend on core, never the reverse.

Why This Matters

A typical knowledge-graph stack requires four to six services: a graph database, a vector store, an analytics engine, a job runner, a cache, and a message bus. Each introduces serialization overhead, operational complexity, and failure modes. ArcFlow collapses this to one process and one binary: graph storage, incremental computation, vector search, and PostgreSQL wire protocol compatibility all in the same unified address space.

For AI workloads, in-process execution means a GraphRAG query can traverse the graph, run vector similarity, execute PageRank, and score confidence — all without a single network call or data format conversion. Measured end-to-end on a MacBook Air M4 (10-core, 24GB), this architecture delivers 154M PageRank nodes/sec and 25K vector queries/sec on CPU alone — on a fanless laptop.
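As an illustrative sketch of the "bulk pass" shape that makes this kind of PageRank throughput possible (this is not ArcFlow's actual kernel, just the standard formulation), each iteration visits every node once and scatters its rank to its targets. That per-node pass is what parallelizes naturally across cores:

```rust
// Hypothetical sketch: PageRank as one pass over all nodes per iteration
// (the shape that parallelizes well), not edge-by-edge traversal.
// Dangling nodes are ignored here for brevity.

fn pagerank(out_edges: &[Vec<usize>], iters: usize, damping: f64) -> Vec<f64> {
    let n = out_edges.len();
    let mut rank = vec![1.0 / n as f64; n];
    for _ in 0..iters {
        // Teleport term for every node, then one scatter pass.
        let mut next = vec![(1.0 - damping) / n as f64; n];
        for (src, targets) in out_edges.iter().enumerate() {
            if targets.is_empty() {
                continue;
            }
            let share = damping * rank[src] / targets.len() as f64;
            for &dst in targets {
                next[dst] += share;
            }
        }
        rank = next;
    }
    rank
}

fn main() {
    // 0 -> 1, 1 -> 2, 2 -> 0: a symmetric 3-cycle converges to equal ranks.
    let graph = vec![vec![1], vec![2], vec![0]];
    let ranks = pagerank(&graph, 20, 0.85);
    println!("{:?}", ranks); // ~[0.333, 0.333, 0.333]
}
```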

Three execution innovations sit at the core of this performance:

  • ArcFlow Graph Kernel — processes graph algorithms as a single parallel pass across all nodes, not sequential traversal
  • ArcFlow Adaptive Dispatch — routes each operation to the fastest available hardware (CPU, Metal, CUDA) via a live cost model at runtime
  • ArcFlow GPU Index — a pointer-free spatial index that transfers directly to GPU memory, enabling high-density spatial queries at GPU speed
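A minimal sketch of the cost-model idea behind adaptive dispatch (the constants, types, and function names here are invented for illustration, not ArcFlow's live cost model): estimate each backend's cost for the operation at hand, then route to the cheapest. GPUs pay a large fixed launch cost but a tiny per-element cost, so small batches stay on CPU and large batches move to GPU.

```rust
// Hypothetical sketch of cost-model dispatch. Constants are made up:
// the GPU has a high fixed launch overhead but low per-element cost.

#[derive(Debug, PartialEq, Clone, Copy)]
enum Backend {
    Cpu,
    Gpu,
}

/// Estimated cost = fixed overhead + per-element cost * batch size.
fn estimate(backend: Backend, n: u64) -> f64 {
    match backend {
        Backend::Cpu => n as f64 * 1.0,              // no launch overhead
        Backend::Gpu => 50_000.0 + n as f64 * 0.05,  // big launch, cheap elements
    }
}

/// Route the operation to the backend with the lower estimated cost.
fn dispatch(n: u64) -> Backend {
    if estimate(Backend::Cpu, n) <= estimate(Backend::Gpu, n) {
        Backend::Cpu
    } else {
        Backend::Gpu
    }
}

fn main() {
    println!("1K elements  -> {:?}", dispatch(1_000));      // Cpu
    println!("10M elements -> {:?}", dispatch(10_000_000)); // Gpu
}
```

With these invented constants the crossover sits near 53K elements; a live cost model would refit such parameters from observed timings at runtime.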

Forward vision: The unified address space is the foundation for in-process AI inference, real-time sensor fusion, and perception pipelines where latency budgets are measured in microseconds, not milliseconds. Same architecture, same query language, expanded compute fabric across CPU, CUDA, and Metal.

See Also

  • GPU Acceleration — unified compute across CPU, CUDA, and Metal
  • Language Bindings — same architecture, every language