ArcFlow Engine
Every detection, every sensor observation, every prediction — graded by the ArcFlow Evidence Model, indexed by the ArcFlow Spatial Index, and queryable in microseconds. Where the neural layer ends, ArcFlow begins.
In-process. No server. No round-trip. Proximity checks, zone occupancy, trajectory projections — all on GPU, at frame rate.
Or try it in your browser
oz.com/engine →
81
Built-in functions
69
CALL procedures
Sub-ms
Spatial kernel latency
GPU
Spatial & graph acceleration
Neural world models simulate how the world evolves under actions — physics-aware, action-conditioned, generative. Their outputs are inference streams: video frames, latent tensors, predicted detections. They are extraordinary at this. They are not designed to answer "which sensor detected the entity that entered the hazard zone 4 seconds ago?" — that requires a separate, persistent, queryable record of what actually happened.
ArcFlow is that record. Every neural model output that lands in ArcFlow becomes a queryable fact — confidence-scored, spatially indexed, versioned to the sequence number. Where the neural layer ends, ArcFlow begins: a dedicated persistence tier that stores ground truth, maintains the full history, and answers operational questions in microseconds.
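As a sketch of what that hazard-zone question might look like in WorldCypher (the labels, relationship types, and property names here are illustrative, not taken from ArcFlow's documentation):

```
MATCH (s:Sensor)-[:DETECTED]->(e:Entity)-[:ENTERED]->(z:Zone {type: 'hazard'})
WHERE e.entered_at > now() - duration('PT4S')
RETURN s.id, e.id, e.entered_at
```

One persisted, indexed record of detections makes this a lookup rather than a re-inference over model outputs.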
The AI infrastructure stack has three tiers. Foundation models handle language and perception. Neural world models simulate possible futures. ArcFlow is the persistence tier — the layer that records what actually happened, so agents, fleets, and real-time systems always have a factual ground to stand on.
Distance checks, path intersections, and zone evaluations run via the ArcFlow GPU Index — a pointer-free spatial structure that transfers to GPU memory without transformation. Sub-millisecond latency at scale.
Every entity carries its full movement trail. Position and history are one data structure. 'Where are they now?' and 'where have they been?' answered from the same query.
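For illustration, a single query could answer both questions at once. The trail accessor shown is an assumed name, meant only to convey the idea of position and history living in one structure:

```
MATCH (p:Player {name: 'Salah'})
RETURN p.position,             -- where he is now
       p.trail.last('PT10S')   -- where he has been over the last 10 seconds
```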
Write a query once with LIVE, and the engine maintains it as the graph mutates. Batch and streaming produce identical results from the same definition — no separate pipeline to maintain.
PageRank, Louvain, BFS, connected components, triangle count, clustering coefficient. The ArcFlow Graph Kernel executes each as a single parallel pass — not sequential traversal. All six run as standing queries via LIVE CALL.
Connect with psql, pgAdmin, Grafana, psycopg2, node-postgres, or any PG client. Full v3 wire protocol with simple and extended query support. Your existing tools just work.
Every node and relationship carries an observation class — observed, inferred, or predicted — a confidence score, and a provenance chain. Filter by trust natively. Built into the storage engine, not bolted on.
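A hypothetical filter over those evidence fields might read as follows — the property names (observation_class, confidence, provenance) are assumptions for illustration:

```
MATCH (d:Detection)-[:OF]->(e:Entity)
WHERE d.observation_class = 'observed'
  AND d.confidence >= 0.9
RETURN e.id, d.confidence, d.provenance
```

Because the fields live in the storage engine itself, the trust filter prunes at scan time rather than in application code.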
Register a standing question: 'tell me when anyone enters the penalty area.' The engine evaluates it against every spatial update at frame rate.
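That standing question could be phrased with the LIVE keyword family roughly like this (labels and the zone name are illustrative):

```
LIVE MATCH (p:Player)-[:INSIDE]->(z:Zone {name: 'penalty_area'})
RETURN p.name, p.position
```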
Every ArcFlow instance operates independently. Sync when connected. Run offline when not. Same engine on a stadium server, a laptop, or a phone.
Write a query once. The engine runs it in batch mode on the full dataset, then maintains the result incrementally as the graph changes. One definition produces identical results whether recomputed from scratch or updated via deltas. The math guarantees it.
CREATE LIVE VIEW zone_occupancy AS
MATCH (r:Robot)-[:OCCUPIES]->(z:Zone)
RETURN z.name AS zone, count(r) AS robots, avg(r.battery) AS avg_battery

-- Stays current as robots move — no polling, no recomputation
MATCH (row) FROM VIEW zone_occupancy
WHERE row.zone = 'lab-a'
RETURN row.robots, row.avg_battery
-- Incremental algorithms as standing queries
LIVE CALL algo.pageRank({ damping: 0.85 })
LIVE CALL algo.connectedComponents()
LIVE CALL algo.louvain()

Query results stay current without recomputation. As the graph changes, only the affected portion of each result updates — not the full dataset. Standing graph algorithms maintain their output across mutations at the same cost.
ArcFlow speaks PostgreSQL wire protocol v3. Start the server with --pg and connect with any tool that talks to Postgres. WorldCypher queries run transparently underneath.
$ arcflow --pg 5432 --data-dir ./stadium
$ psql -h localhost -p 5432
stadium=# MATCH (p:Player)-[:NEAR {distance: 2.0}]->(b:Ball)
RETURN p.name, p.speed;
name | speed
---------+-------
Salah | 32.1
Haaland | 28.7
(2 rows)

SQL shims handle SET, BEGIN/COMMIT/ROLLBACK, and catalog queries so that ORMs and connection poolers don't choke. Error responses map to proper SQLSTATE codes with detail and recovery hints. Type inference assigns the right OIDs so client libraries deserialize correctly.
Run PG, native Bolt, and HTTP servers simultaneously on the same graph store:
arcflow --pg 5432 --serve 7687 --http 8080 --data-dir ./data
A query language for the physical world
MATCH (p:Player)-[:INSIDE]->(z:Zone {name: 'penalty_area'})
WHERE p.trajectory.eta < duration('PT3S')
RETURN p.name, p.trajectory.eta
ORDER BY p.trajectory.eta ASC

Cypher-inspired with native spatial predicates: WITHIN, NEAR, CROSSES, INTERSECTS. Temporal windows, trajectory projection, and confidence propagation are first-class concepts, not string comparisons.
The LIVE keyword family (CREATE LIVE VIEW, LIVE MATCH, LIVE CALL) turns any query into a standing subscription. Window operators — LAG, LEAD, rolling stddev, skewness, max — and the ArcFlow Evidence Model mean every result carries a confidence score, observation class, and provenance chain natively. ArcFlow Clock Domains let heterogeneous sensors advance at their own rates without external alignment preprocessing.
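As a hedged sketch of how a window operator and the Evidence Model might combine in one standing query — the OVER/PARTITION syntax and property names here are assumptions, not confirmed ArcFlow syntax:

```
LIVE MATCH (s:Sensor)-[:EMITS]->(r:Reading)
WHERE r.confidence >= 0.8
RETURN s.id,
       r.value,
       LAG(r.value, 1) OVER (PARTITION BY s.id ORDER BY r.ts) AS prev_value
```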
If you've written a Cypher query, you can write a WorldCypher query.
81 functions. 69 procedures. ArcFlow Graph Kernel, Adaptive Dispatch, GPU Index, Spatial Index. Evidence Model. Clock Domains. MVCC concurrency, WAL durability, CDC for change streaming. Bindings for Python, TypeScript, and C/C++, plus a native Rust SDK.
Not a demo. A full ArcFlow engine running in your browser — persistent graph, real WorldCypher queries, zero install. Build a world model in under a minute.
Open the engine