# Autonomous Systems
Every autonomous system is, at its core, a question of state: what is known, when it was known, and how much to trust it. A robot navigating a warehouse, a UAV fleet maintaining formation, a self-driving vehicle making a lane-change decision — each requires a continuous, spatially precise, temporally accurate, confidence-scored store of what is actually happening.
Neural world models are the simulation tier — generative engines that anticipate how the world evolves under actions. ArcFlow is the persistence tier — the operational world model that stores what actually happened, at what confidence, from which sensor, queryable at any sequence checkpoint.
Neural world models simulate. ArcFlow records. Autonomous systems need both.
## The infrastructure problem

Most autonomous system architectures cobble together five or more systems:
- A spatial data store for positions and geometry
- A time-series database for sensor history
- A graph or relational database for entity relationships
- An ML model store for confidence scores
- A message broker for real-time updates
Each boundary between these systems introduces latency, consistency risk, and operational complexity. A single query that crosses systems — "find all robots within 20 meters of a high-confidence obstacle, sorted by their last confirmed status" — requires a join across at least three of them.
ArcFlow collapses this stack into one in-process engine.
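To make that concrete, here is the cross-system query from above ("robots within 20 meters of a high-confidence obstacle, sorted by last confirmed status") expressed as a single pass over plain in-memory records. The record shapes and data are hypothetical stand-ins for the spatial store, the confidence store, and the status time series; in ArcFlow this is one graph query rather than application code:

```typescript
// Hypothetical record shapes standing in for three separate systems:
// a spatial store (x, y), an ML confidence store, and a status time series.
interface Obstacle { id: string; x: number; y: number; confidence: number }
interface Robot { id: string; x: number; y: number; lastStatus: string; statusAt: number }

const obstacles: Obstacle[] = [
  { id: 'OBS-001', x: 18.0, y: 8.5, confidence: 0.98 },
  { id: 'OBS-002', x: 25.0, y: 12.0, confidence: 0.42 },
]
const robots: Robot[] = [
  { id: 'ROBOT-01', x: 12.4, y: 8.7, lastStatus: 'navigating', statusAt: 1_700_000_800 },
  { id: 'ROBOT-02', x: 40.0, y: 40.0, lastStatus: 'idle', statusAt: 1_700_000_100 },
]

const dist = (ax: number, ay: number, bx: number, by: number) =>
  Math.hypot(ax - bx, ay - by)

// Robots within 20 m of at least one high-confidence obstacle,
// most recently confirmed status first.
const nearHighConfidence = robots
  .filter(r => obstacles.some(o => o.confidence > 0.85 && dist(r.x, r.y, o.x, o.y) < 20))
  .sort((a, b) => b.statusAt - a.statusAt)

console.log(nearHighConfidence.map(r => r.id)) // only ROBOT-01 qualifies
```

The point of the sketch is the coordination cost it hides: in a multi-system stack each of those three record shapes lives behind a different query interface, and keeping them consistent is the application's problem.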
## The data model
```cypher
-- Physical entities with spatial position and observation class
CREATE (r1:Robot {
  id: 'ROBOT-01',
  x: 12.4, y: 8.7, z: 0.0,
  vx: 0.5, vy: 0.0, vz: 0.0,
  status: 'navigating',
  battery_pct: 87,
  _observation_class: 'observed',
  _confidence: 0.99
})

-- Obstacles — observed vs predicted
CREATE (o1:Obstacle {
  id: 'OBS-001',
  x: 18.0, y: 8.5, z: 0.0,
  type: 'static',
  _observation_class: 'observed',
  _confidence: 0.98
})

CREATE (o2:Obstacle {
  id: 'OBS-002',
  x: 25.0, y: 12.0, z: 0.0,
  type: 'dynamic',
  _observation_class: 'predicted',
  _confidence: 0.42
})

-- Detection edges carry sensor provenance
CREATE (r1)-[:DETECTS {
  sensor: 'lidar',
  range_m: 8.3,
  _confidence: 0.98,
  at: timestamp()
}]->(o1)

-- Fleet coordination
CREATE (f:Fleet {id: 'FLEET-A', formation: 'line'})
CREATE (r1)-[:MEMBER_OF {role: 'lead', position: 1}]->(f)
```

## Spatial queries — what is around me?
```cypher
-- All entities within 15m of Robot-01 (ArcFlow Spatial Index backed)
CALL algo.nearestNodes(point({x: 12.4, y: 8.7}), 'Obstacle', 10)
YIELD node AS obs, distance
WHERE distance < 15.0
RETURN obs.id, obs.type, obs._observation_class, obs._confidence, distance
ORDER BY distance

-- High-confidence obstacles only — filter out predictions
CALL algo.nearestNodes(point({x: 12.4, y: 8.7}), 'Obstacle', 10)
YIELD node AS obs, distance
WHERE distance < 15.0
  AND obs._observation_class = 'observed'
  AND obs._confidence > 0.85
RETURN obs.id, distance

-- Line-of-sight check between two entities
CALL arcflow.scene.lineOfSight(robot_id, obstacle_id)
YIELD has_los, note
```

## Temporal queries — what changed?
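Before the queries, a note on semantics. An `AS OF seq N` read behaves like a lookup against an append-only version log: for each entity, return the latest version whose sequence number is at or below the checkpoint. A minimal TypeScript sketch of that idea (hypothetical shapes, not ArcFlow internals):

```typescript
// Sketch of "AS OF seq N" semantics: an append-only log of versions per
// entity, where a checkpoint read returns the latest version with seq <= N.
type Version = { seq: number; x: number; y: number; status: string }

const versionLog = new Map<string, Version[]>() // entity id -> versions, seq ascending

function record(id: string, v: Version): void {
  const versions = versionLog.get(id) ?? []
  versions.push(v)
  versionLog.set(id, versions)
}

function asOf(id: string, seq: number): Version | undefined {
  const versions = versionLog.get(id) ?? []
  // Latest version at or before the checkpoint.
  return [...versions].reverse().find(v => v.seq <= seq)
}

record('ROBOT-01', { seq: 500, x: 10.0, y: 8.0, status: 'navigating' })
record('ROBOT-01', { seq: 800, x: 12.4, y: 8.7, status: 'navigating' })
record('ROBOT-01', { seq: 950, x: 14.0, y: 9.1, status: 'stopped' })

asOf('ROBOT-01', 800) // the seq-800 version: x 12.4, y 8.7
asOf('ROBOT-01', 400) // undefined — the robot did not exist yet at that checkpoint
```

Because old versions are never overwritten, any past state of the whole graph can be reconstructed by running a normal query at an earlier checkpoint.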
```cypher
-- Where was this robot at a recent checkpoint?
MATCH (r:Robot {id: 'ROBOT-01'}) AS OF seq 800
RETURN r.x, r.y, r.status

-- Reconstruct every robot's position at checkpoint 1000
MATCH (r:Robot) AS OF seq 1000
RETURN r.id, r.x, r.y, r.status

-- Detect if any robot has been stationary for more than 60 seconds
MATCH (r:Robot)
WHERE r.status = 'stopped'
  AND r._updated_at < (timestamp() - 60000)
RETURN r.id, r.x, r.y, r._updated_at
```

## Confidence-filtered decision making
```cypher
-- Only navigate toward zones where all nearby obstacles are high-confidence
MATCH (target:Zone {id: $target_zone_id})
MATCH (obs:Obstacle)
WHERE obs._observation_class IN ['observed', 'inferred']
  AND obs._confidence > 0.7
  AND distance(point({x: obs.x, y: obs.y}), point({x: target.x, y: target.y})) < 20.0
RETURN count(obs) AS blocking_obstacles

-- Confidence-weighted PageRank to find the most "trusted" nodes in the sensor graph
CALL algo.confidencePageRank()
YIELD nodeId, score
RETURN nodeId, score ORDER BY score DESC LIMIT 10
```

## Live monitoring — always current
```typescript
import { open } from 'arcflow'

const db = open('./data/world-model')

// Alert when any robot enters a restricted zone
const zoneMonitor = db.subscribe(
  `MATCH (r:Robot)-[:IN_ZONE]->(z:Zone {restricted: true})
   WHERE r._observation_class = 'observed'
   RETURN r.id, z.id, r.x, r.y`,
  (event) => {
    for (const row of event.added) {
      triggerAlert(`Robot ${row.get('r.id')} entered restricted zone ${row.get('z.id')}`)
      db.mutate(`MATCH (r:Robot {id: $id}) SET r.status = 'halted'`, { id: row.get('r.id') })
    }
  }
)

// Live view: fleet health dashboard (auto-maintained, zero-cost reads)
db.mutate(`
  CREATE LIVE VIEW fleet_health AS
  MATCH (r:Robot)
  RETURN r.id, r.status, r.battery_pct, r._confidence
  ORDER BY r.battery_pct ASC
`)
const health = db.query('MATCH (row) FROM VIEW fleet_health RETURN row')
```

## Multi-robot coordination
```cypher
-- Fleet members sorted by their position in formation
MATCH (r:Robot)-[m:MEMBER_OF]->(f:Fleet {id: 'FLEET-A'})
RETURN r.id, r.x, r.y, m.position
ORDER BY m.position

-- Find robots that have lost contact (no detection events in last 10 seconds)
MATCH (r:Robot)
WHERE NOT EXISTS {
  MATCH (r)-[d:DETECTS]->()
  WHERE d.at > (timestamp() - 10000)
}
RETURN r.id, r.status

-- Shortest path between two robots through the navigable graph
MATCH p = shortestPath(
  (a:Robot {id: 'ROBOT-01'})-[:NAVIGABLE*]-(b:Robot {id: 'ROBOT-05'})
)
RETURN length(p), nodes(p)
```

## OpenUSD / physics integration
For robotic systems that use USD-based scene graphs (simulators, digital twin platforms):
```cypher
-- Export the world model as USD scene description
CALL arcflow.scene.toUsda() YIELD usda

-- Resolve USD prim path to graph node
CALL arcflow.scene.primId('/World/Robots/ROBOT-01') YIELD prim_path, prim_id

-- Collision contacts from physics simulation
CALL arcflow.scene.collisions(robot_id) YIELD from_id, to_id, impulse, at_time

-- Neighborhood in robot's local coordinate space
CALL arcflow.scene.queryInLocalSpace(robot_id, 10.0)
YIELD node_id, local_x, local_y, local_z
```

## Why one world model instead of five systems
The operational world model layer is the piece most autonomous stacks are missing. Neural world models handle simulation. ArcFlow handles persistence — collapsing what used to be five separate systems into one in-process engine.
| Capability | Traditional stack | ArcFlow |
|---|---|---|
| Neural model outputs | Application code, separate store | Store as `_observation_class: 'predicted'` edges |
| Spatial proximity | Separate spatial DB + query | `CALL algo.nearestNodes(...)` |
| Entity history | Time-series DB + join | `AS OF seq N` on same graph |
| Confidence scoring | Application logic | `_confidence` on every fact |
| Observation class | Not modeled | `_observation_class` built-in |
| Live alerts | Message broker + CDC | `db.subscribe()` in-process |
| Graph algorithms | External tool | `CALL algo.pageRank()` built-in |
| Fleet relationships | Relational DB + joins | First-class edges with properties |
| USD integration | Separate scene graph | `arcflow.scene.*` procedures |
## See Also
- World Model — the two-layer framing: neural world models simulate, ArcFlow records
- Building a World Model — step-by-step with spatial, temporal, and confidence
- Grounded Neural Objects — lifting neural world model detections into persistent entities
- Spatial Queries — ArcFlow Spatial Index, frustum, line-of-sight
- Temporal Queries — `AS OF seq N`, replay, comparison
- Live Queries — standing queries and live views
- Programs — declare sensor fusion pipelines and perception programs as installable manifests with GPU hardware validation
- Triggers — fire a skill automatically when a new sensor frame or detection node arrives
- Digital Twins — live mirror of physical systems