ArcFlow Cloud
One ArcFlow engine is straightforward. A fleet of them — across venues, cities, or deployments — is a different problem. ArcFlow Cloud handles what the edge can't see on its own: replication, fleet observability, ArcFlow Adaptive Dispatch across your fleet, and a live window into every Venue Graph you operate.
Free account to start. Add services per node as your fleet grows.
Neural world models simulate. ArcFlow records. Every detection, every observation, every prediction needs somewhere to land — graded by the ArcFlow Evidence Model, indexed by the ArcFlow Spatial Index, versioned, and queryable. ArcFlow Cloud is what makes that record coherent across a fleet.
Replication
ArcFlow Replication streams graph changes from edge engines to ArcFlow Cloud via the ArcFlow WAL Stream — not the full graph on every sync, just what changed. Causal ordering is guaranteed. If an engine goes offline for a week and reconnects, the delta catches up automatically. The result: an aggregate view of every connected venue you can query from a single GQL endpoint.
Only changes travel via ArcFlow WAL Stream. Not snapshots, not full graphs. Causal ordering and conflict-free merges guaranteed.
Every synced graph is a backup. WAL segments accumulate automatically. Restore to any prior state without thinking about it.
Query the aggregate graph across all connected engines from one GQL endpoint. Fleet-scale patterns that no single engine can see.
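ArcFlow's wire protocol is not shown here, but the delta-only idea is simple to sketch. The following is an illustrative Python model with invented names (`WalEntry`, `delta_since`, `high_water_mark` are not ArcFlow APIs): each engine appends mutations to a sequence-numbered WAL, and on every sync the cloud requests only entries past its high-water mark, so an engine that reconnects after a week ships exactly the missing delta.

```python
from dataclasses import dataclass, field

@dataclass
class WalEntry:
    seq: int          # monotone per-engine sequence number; gives causal order
    change: str       # a single graph mutation, never a full snapshot

@dataclass
class EdgeEngine:
    wal: list = field(default_factory=list)
    next_seq: int = 1

    def apply(self, change):
        # every local mutation lands in the WAL before it is acknowledged
        self.wal.append(WalEntry(self.next_seq, change))
        self.next_seq += 1

    def delta_since(self, high_water_mark):
        # only entries the cloud has not yet seen travel on sync
        return [e for e in self.wal if e.seq > high_water_mark]

@dataclass
class CloudReplica:
    applied: list = field(default_factory=list)
    high_water_mark: int = 0

    def sync(self, engine):
        for entry in engine.delta_since(self.high_water_mark):
            self.applied.append(entry.change)
            self.high_water_mark = entry.seq

engine, cloud = EdgeEngine(), CloudReplica()
engine.apply("add node camera-7")
cloud.sync(engine)                            # first sync ships one change
engine.apply("add zone loading-dock")         # made while "offline"
engine.apply("link camera-7 -> loading-dock")
cloud.sync(engine)                            # catch-up ships only the two new changes
print(cloud.applied)
```

Because the high-water mark is per-engine and sequence numbers are monotone, catch-up after any outage is just "send everything above N" — no snapshot diffing required.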
Observability
A lightweight agent inside every ArcFlow engine collects six metric categories — node health, graph size, query latency (p50/p99), GPU utilization, replication status, and WAL throughput — and pushes them to ArcFlow Cloud every 30 seconds. The dashboard shows fleet-wide status at a glance. Threshold alerts fire to email or webhook when something crosses a limit you set. No query strings, no graph data, no property values ever leave the engine.
Every connected engine on one screen. Live status, query latency, graph size, and replication lag — no guessing about fleet health.
Latency spikes, disk pressure, replication lag. Configure limits and receive alerts via email or webhook before problems become outages.
No query strings. No graph data. Only structural metrics travel over the observability channel; your spatial intelligence stays on the engine.
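To make the privacy boundary concrete, here is an illustrative sketch of what an agent push and a threshold check could look like. The metric names, limits, and `check` helper are invented for this example, not ArcFlow's actual schema — the point is that the payload contains only structural numbers, never query strings or graph content.

```python
# Hypothetical 30-second payload: six metric categories, nothing else.
metrics = {
    "node_health": 1,                   # 1 = healthy
    "graph_size_nodes": 1_250_000,
    "query_latency_p99_ms": 84.0,
    "gpu_utilization_pct": 61.0,
    "replication_lag_s": 75.0,          # a minute and change behind
    "wal_throughput_entries_s": 950.0,
}

# Operator-configured limits; alerts fire when a metric crosses its limit.
thresholds = {
    "query_latency_p99_ms": 100.0,
    "replication_lag_s": 60.0,
}

def check(metrics, thresholds):
    # Return the metrics that crossed their limit. Note that no query
    # strings, graph data, or property values appear anywhere here.
    return [name for name, limit in thresholds.items()
            if metrics.get(name, 0) > limit]

print(check(metrics, thresholds))   # → ['replication_lag_s']
```

An alert router would then map each returned name to an email or webhook destination; the engine-side payload never grows beyond these structural counters.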
World Model Workspace
ArcFlow builds a continuous model of every venue it operates: cameras, tracked entities, zones, event detections. That model is invisible today — operators see outputs, not what produced them. The World Model Workspace makes it inspectable: a live entity browser, event stream, per-entity ArcFlow Evidence Model scores, and a structured readiness gate any team member can sign off on.
The browser runs an embedded ArcFlow WASM instance. Scoped graph fragments sync from the relay — only what the operator needs. Queries run client-side in under a millisecond.
Every camera, zone, and tracked object in the Venue Graph — with confidence scores and last-seen timestamps. Inspect any entity without engineering.
What ArcFlow has detected in the last 24 hours: entries, exits, state changes, and anomalies — chronological, filterable, live.
All entities present, cameras calibrated, confidence above threshold — confirmed before any operation goes live. Any operator can sign off; any stakeholder can audit it.
Which ArcFlow engines are online, what venue each serves, and their model health — all in one view. No SSH required.
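The readiness gate described above reduces to a checklist over entity state. The sketch below is an assumed shape, not the Workspace's real data model (`Entity`, `readiness_gate`, and the 0.9 confidence floor are all invented for illustration): every required entity present, every camera calibrated, every confidence score above threshold — and the per-check breakdown is what a stakeholder would audit.

```python
from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    kind: str          # "camera", "zone", or "tracked"
    confidence: float  # stand-in for an ArcFlow Evidence Model score
    calibrated: bool = True

CONFIDENCE_FLOOR = 0.9   # assumed threshold; configurable in practice

def readiness_gate(entities, required):
    present = {e.name for e in entities}
    checks = {
        "all_required_present": required <= present,
        "cameras_calibrated": all(e.calibrated for e in entities
                                  if e.kind == "camera"),
        "confidence_above_floor": all(e.confidence >= CONFIDENCE_FLOOR
                                      for e in entities),
    }
    # The gate passes only when every individual check passes.
    return all(checks.values()), checks

entities = [
    Entity("cam-north", "camera", 0.97),
    Entity("cam-south", "camera", 0.93),
    Entity("stage-zone", "zone", 0.99),
]
ok, checks = readiness_gate(entities,
                            required={"cam-north", "cam-south", "stage-zone"})
print(ok, checks)
```

Returning the full `checks` dict rather than a bare boolean is the structured part: an operator signs off on the pass, and an auditor can see exactly which condition held.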
GPU Origin
Some workloads exceed what a single edge engine can handle — PageRank across a million-node aggregate graph, community detection across an entire season, spatial clustering spanning every venue. GPU Origin is cloud-hosted GPU compute for exactly these workloads. Your replicated data flows to the origin, where the ArcFlow Graph Kernel runs at cloud GPU scale. Same GQL syntax. Same Adaptive Dispatch routing — now with hardware your edge box can't match.
ArcFlow Graph Kernel on hardware that no edge box can match. Same GQL syntax. Results in seconds, not hours.
Query across all replicated venues simultaneously. Patterns that only emerge at fleet scale — seasonal trends, cross-venue anomalies.
No GPU instances to provision. No clusters to manage. Submit a GQL query. Pay for the compute time it uses.
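For a sense of what a fleet-scale algorithm like PageRank computes, here is a toy power-iteration version on a three-venue graph. This is textbook PageRank in plain Python, not the ArcFlow Graph Kernel — GPU Origin would run the equivalent computation as a GQL query over the aggregate graph at a scale this sketch cannot.

```python
def pagerank(edges, damping=0.85, iters=50):
    # Standard power iteration over an adjacency list.
    nodes = sorted({n for e in edges for n in e})
    rank = {n: 1.0 / len(nodes) for n in nodes}
    out = {n: [d for s, d in edges if s == n] for n in nodes}
    for _ in range(iters):
        nxt = {n: (1 - damping) / len(nodes) for n in nodes}
        for n in nodes:
            if out[n]:
                share = damping * rank[n] / len(out[n])
                for d in out[n]:
                    nxt[d] += share
            else:
                # Dangling node: spread its rank uniformly.
                for d in nodes:
                    nxt[d] += damping * rank[n] / len(nodes)
        rank = nxt
    return rank

# Hypothetical cross-venue reference edges.
edges = [("venue-a", "venue-b"), ("venue-b", "venue-c"),
         ("venue-c", "venue-a"), ("venue-a", "venue-c")]
ranks = pagerank(edges)
print(max(ranks, key=ranks.get))   # → venue-c
```

The pattern — every venue referencing `venue-c` directly or indirectly — is exactly the kind of structure that only emerges once graphs from multiple engines are queried together.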
Managed Ops
The only service with an OZ prefix — because it is the only one where humans are part of the delivery. OZ's NOC monitors your deployment around the clock: proactive incident response, configuration drift detection, fleet-wide firmware updates, and capacity planning alerts before you hit limits. SLA-backed uptime. You build the product. OZ keeps the infrastructure running.
OZ engineers watching your engines. Proactive monitoring and incident response — they see it before you do.
Firmware, configuration, and patches rolled across your entire deployment. No maintenance windows to schedule.
The uptime commitment is in the contract. Not a marketing claim. Backed by the same team that runs the NOC.
Architecture
ArcFlow Cloud is a control plane, not a data plane. It manages engines — registration, access control, replication scheduling, fleet monitoring, billing. Your spatial data lives where it was produced: at the edge, inside the engine that built it.
If ArcFlow Cloud goes offline, every engine in the field keeps running at full capability. Queries answer. Spatial triggers fire. The graph keeps building. When connectivity returns, replication catches up. Nothing is lost.
This is not cloud-first with an edge cache. In cloud-first, the edge stops when the cloud does. In ArcFlow, only the aggregate view goes stale. Every venue you operate keeps running, independently, regardless.
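The control-plane/data-plane split can be sketched in a few lines. This is an illustration of the availability property, not ArcFlow internals (the class names and `pending_wal` buffer are invented): local reads and writes never touch a cloud code path, so cloud reachability only affects how fresh the aggregate replica is.

```python
class EdgeEngine:
    # The engine's graph and query path never depend on cloud reachability.
    def __init__(self):
        self.graph = {}
        self.pending_wal = []   # changes awaiting replication

    def write(self, key, value):
        self.graph[key] = value
        self.pending_wal.append((key, value))

    def query(self, key):
        # Answered entirely from local state.
        return self.graph.get(key)

class Cloud:
    def __init__(self):
        self.online = False
        self.replica = {}

    def sync(self, engine):
        if not self.online:
            return False            # aggregate view goes stale; edge unaffected
        for key, value in engine.pending_wal:
            self.replica[key] = value
        engine.pending_wal.clear()  # delta flushed; nothing lost
        return True

engine, cloud = EdgeEngine(), Cloud()
engine.write("zone:dock", "occupied")
cloud.sync(engine)                  # cloud offline: sync is a no-op
assert engine.query("zone:dock") == "occupied"   # edge still answers
cloud.online = True
cloud.sync(engine)                  # connectivity returns; replication catches up
print(cloud.replica)
```

In a cloud-first design the `query` path would route through `Cloud`; here it cannot, which is the whole architectural point.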
Register an engine, connect to ArcFlow Cloud, and write your first GQL query against physical reality — free. Add Replication, Observability, GPU Origin, or Managed Ops as your fleet grows.