MCP Server

The MCP server is the integration surface for cloud chat interfaces — ChatGPT, Claude.ai, Gemini web, Copilot Chat — that run in a browser or cloud sandbox with no local filesystem access.

If you have a shell, use the CLI binary instead. Claude Code, Codex CLI, and Gemini CLI all have shell tools. The arcflow binary exits in under 10ms, needs no configuration, and is fully composable with grep, jq, and git. MCP adds a protocol layer that those agents don't need.

MCP is correct when there is no shell. That is its exact scope.


Setup

npx arcflow-mcp                              # In-memory (ephemeral)
npx arcflow-mcp --data-dir ./my-graph        # Persistent graph

Claude Desktop / Claude.ai

{
  "mcpServers": {
    "arcflow": {
      "command": "npx",
      "args": ["arcflow-mcp"]
    }
  }
}

With persistent data

{
  "mcpServers": {
    "arcflow": {
      "command": "npx",
      "args": ["arcflow-mcp", "--data-dir", "./my-graph"]
    }
  }
}

Tools

| Tool | Description | Read/Write |
|------|-------------|------------|
| get_schema | Labels, relationship types, properties, indexes, stats | Read |
| get_capabilities | Algorithms, procedures, window functions, features | Read |
| read_query | Execute read-only WorldCypher (MATCH, CALL algo., CALL db.) | Read |
| write_query | Execute mutations (CREATE, SET, DELETE, MERGE) | Write |
| graph_rag | Trusted GraphRAG: answer questions from the world model | Read |
| ingest_nodes | Push node/edge batches with idempotent content-hash dedup | Write |
| create_live_view | Register a standing query as a named live view | Write |
| live_view_status | Poll a live view's current result set and frontier | Read |

read_query rejects all mutations — CREATE, SET, DELETE, MERGE, REMOVE are refused at the tool level. Use write_query for explicit write operations.
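A tool-level guard of this kind can be sketched as follows. This is an illustration only, not the actual ArcFlow implementation; the function name and the whole-word matching strategy are assumptions:

```typescript
// Illustrative sketch of a tool-level mutation guard: read_query
// refuses any query containing a mutating clause and directs the
// caller to write_query instead.
const MUTATING_CLAUSES = ["CREATE", "SET", "DELETE", "MERGE", "REMOVE"];

function assertReadOnly(query: string): void {
  const upper = query.toUpperCase();
  for (const clause of MUTATING_CLAUSES) {
    // Match the clause as a whole word so a property name that merely
    // contains "SET" (e.g. "offset") does not trigger a false positive.
    if (new RegExp(`\\b${clause}\\b`).test(upper)) {
      throw new Error(
        `read_query rejects mutations: found ${clause}; use write_query`
      );
    }
  }
}
```

A real server would enforce this at the parser level rather than by keyword scanning, but the contract is the same: mutations never pass through the read tool.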


Tool details

read_query and write_query

read_query("MATCH (e:Entity) WHERE e._confidence > 0.85 RETURN e.id, e.x, e.y")
write_query("MATCH (e:Entity {id: $id}) SET e._confidence = $conf", {id: "unit-01", conf: 0.97})

ingest_nodes

Push structured node/edge deltas. Content-hash dedup means calling the same delta twice is safe — already-ingested nodes are silently skipped.

{
  "added_nodes": [
    {
      "label": "Entity",
      "id": "unit-01",
      "content_hash": "abc123",
      "properties": { "x": 12.4, "y": 8.7, "_observation_class": "observed", "_confidence": 0.94 }
    }
  ],
  "removed_node_ids": [],
  "updated_nodes": [],
  "added_edges": [
    { "kind": "DETECTS", "from_id": "unit-01", "to_id": "contact-x" }
  ],
  "removed_edge_ids": []
}

Returns { nodes_added, nodes_removed, nodes_updated, edges_added, edges_removed, wal_bytes_written }.
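The dedup behavior can be sketched with a hypothetical in-memory store (the real server persists to the WAL; the class and helper names here are invented for illustration):

```typescript
import { createHash } from "node:crypto";

// Sketch of idempotent ingest via content-hash dedup: a node whose
// content_hash has been seen before is silently skipped, so replaying
// the same delta is a no-op.
type NodeDelta = {
  label: string;
  id: string;
  content_hash: string;
  properties: Record<string, unknown>;
};

class GraphStore {
  private seen = new Set<string>();

  ingest(delta: { added_nodes: NodeDelta[] }): { nodes_added: number } {
    let added = 0;
    for (const node of delta.added_nodes) {
      if (this.seen.has(node.content_hash)) continue; // dedup: skip silently
      this.seen.add(node.content_hash);
      added++;
    }
    return { nodes_added: added };
  }
}

// One way to derive a content hash: digest the canonicalized payload.
function contentHash(node: Omit<NodeDelta, "content_hash">): string {
  return createHash("sha256").update(JSON.stringify(node)).digest("hex").slice(0, 12);
}
```

Because the hash is derived from the node's content, retrying a failed batch or replaying an event stream never double-inserts.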

create_live_view and live_view_status

Register a view once, poll for changes:

create_live_view("high_risk", "MATCH (e:Entity) WHERE e._confidence < 0.4 RETURN e.id, e._confidence ORDER BY e._confidence ASC")
live_view_status("high_risk")
// → { frontier: 47, row_count: 3, query_text: "..." }

The frontier is a monotonically increasing mutation sequence number. If it has not changed since the last poll, the result set has not changed.
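A client-side polling loop can exploit this invariant. The sketch below is hypothetical (the function name is invented); it shows the only decision the poller needs to make:

```typescript
// Sketch of frontier-based polling: re-read the result set only when
// the live view's frontier has advanced since the last poll.
type LiveViewStatus = { frontier: number; row_count: number };

function shouldRefetch(lastFrontier: number | null, status: LiveViewStatus): boolean {
  // The frontier is a monotonically increasing mutation sequence
  // number; an unchanged frontier guarantees an unchanged result set.
  return lastFrontier === null || status.frontier > lastFrontier;
}
```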

graph_rag

Ask a natural-language question and get an answer grounded in the world model, with confidence filtering applied:

graph_rag("Which entities have been observed with confidence above 0.9 in the last 5 minutes?")

Latency

MCP operates over stdio JSON-RPC. Each tool call carries roughly 100 ms of overhead from the process boundary, serialization, and the JSON-RPC round-trip. That is acceptable for cloud chat interfaces, where the user is already waiting on a response; it is not acceptable for application code or shell agent loops.

| Surface | Latency | Use when |
|---------|---------|----------|
| napi-rs in-process | ~1 µs | Application code in Node.js |
| arcflow CLI binary | < 10 ms | Shell-capable agents (Claude Code, Codex, Gemini CLI) |
| MCP server | ~100 ms | Cloud chat UIs with no local shell |
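To make the per-call cost concrete: MCP tool calls travel as JSON-RPC 2.0 `tools/call` requests, one serialized frame per call. The envelope below is illustrative (the query text is an example, not a fixed API), but the shape follows the JSON-RPC 2.0 convention MCP uses:

```typescript
// Illustrative JSON-RPC 2.0 envelope for a single MCP tool call. Each
// such frame crosses the process boundary over stdio, which is where
// the ~100 ms of per-call overhead comes from.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "read_query",
    arguments: { query: "MATCH (e:Entity) RETURN count(e)" },
  },
};

const wire = JSON.stringify(request); // one serialized frame per round-trip
```

An in-process binding skips this framing entirely, which is why it sits three to five orders of magnitude below MCP in the table above.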

See Also

  • CLI — the arcflow binary for shell-capable agents and developers
  • Language Bindings — napi-rs (Node.js), Python, Rust, C, C++
  • Agent-Native — filesystem workspace, batch execution, watch mode
  • Skills — teach the world model a relationship rule in plain language; compiled once, executed at graph speed forever