# ArcFlow for Coding Agents
ArcFlow is designed to be used directly by coding agents — not as infrastructure they configure, but as a tool they call. The CLI exits in under 10ms. Every error carries a recovery suggestion. Results are structured JSON. Agents build world models the same way humans do, except they can query, reason, and update in a tight loop without any protocol overhead.
## The three surfaces
| Agent type | Interface | Latency |
|---|---|---|
| Shell-native agents (Claude Code, Codex CLI, etc.) | arcflow binary | < 10ms |
| Python agents, shell pipelines | arcflow binary | < 10ms |
| Cloud chat UIs (Claude.ai, browser-only agents) | MCP server | ~100ms |
For any agent with shell access, the CLI binary is always the right choice — no configuration, no MCP registration, no daemon.
## Starting a world model
```shell
# Initialize a persistent workspace in the current project
arcflow workspace init

# All subsequent queries auto-discover .arcflow/ from any subdirectory
arcflow query "CREATE (e:Entity {id: 'unit-01', x: 12.4, y: 8.7, _observation_class: 'observed', _confidence: 0.94})" --json

# Or: in-memory only (no --data-dir, no init)
arcflow query "MATCH (e:Entity) RETURN e.id" --json
```

## Structured errors with recovery hints
Every failure returns machine-readable fields. Agents parse the `code` field to choose a recovery strategy instead of string-matching on `message`:
```shell
arcflow query "METCH (n) RETURN n" --json 2>&1
```

```json
{
  "ok": false,
  "code": "PARSE_FAILED",
  "message": "Unknown keyword 'METCH'",
  "recovery_suggestion": "Did you mean MATCH?"
}
```

```shell
arcflow query "MATCH (n:Perso) RETURN n" --json 2>&1
```

```json
{
  "ok": false,
  "code": "UNKNOWN_LABEL",
  "message": "Label 'Perso' not found",
  "failing_field": "label",
  "recovery_suggestion": "Check label spelling. Available: Entity, Zone, Observation"
}
```

Exit codes are deterministic:
| Code | Meaning |
|---|---|
| 0 | Success |
| 1 | Query error (syntax, missing label, type mismatch) |
| 2 | System error (file not found, permission denied) |
| 3 | Validation error (constraint violation) |
The agent checks `$?` before parsing output.
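As a sketch of how an agent might act on these fields: the payload shape (`ok`, `code`, `recovery_suggestion`) comes from the examples above, but the dispatch table and strategy names here are hypothetical.

```python
import json

# Sample failure payload in the shape documented above.
error_json = '''{
  "ok": false,
  "code": "UNKNOWN_LABEL",
  "message": "Label 'Perso' not found",
  "failing_field": "label",
  "recovery_suggestion": "Check label spelling. Available: Entity, Zone, Observation"
}'''

# Recovery strategy keyed on the machine-readable code, never the message text.
# (Strategy names are illustrative, not part of the arcflow contract.)
RECOVERY = {
    "PARSE_FAILED": "rewrite_query",    # fix syntax, often via recovery_suggestion
    "UNKNOWN_LABEL": "inspect_schema",  # re-run CALL db.schema() first
}

def choose_recovery(payload: dict) -> str:
    if payload.get("ok"):
        return "none"
    return RECOVERY.get(payload["code"], "escalate")

result = json.loads(error_json)
print(choose_recovery(result))  # inspect_schema
```

Because the decision keys on `code`, the agent's recovery logic keeps working even if `message` wording changes between releases.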
## Schema discovery before querying
Before writing queries against an existing world model, agents inspect the schema:
```shell
arcflow query "CALL db.schema()" --json
```

Returns every label, relationship type, property key, index, and constraint currently in the graph. An agent that reads the schema first can write correct queries without trial and error.
```shell
# Full engine context — procedures, algorithms, observation classes
arcflow agent-context synth --json
```

## Confidence-aware queries
World model queries should always filter on epistemic state:
```shell
# Only act on high-confidence observed facts
arcflow query "
MATCH (e:Entity)
WHERE e._observation_class = 'observed' AND e._confidence > 0.85
RETURN e.id, e.x, e.y, e._confidence
ORDER BY e._confidence DESC
" --json

# Flag low-confidence predictions for verification
arcflow query "
MATCH (e:Entity)
WHERE e._observation_class = 'predicted' AND e._confidence < 0.5
RETURN e.id, e._confidence
ORDER BY e._confidence ASC
" --json
```

An agent building on a world model that mixes observed and predicted facts without filtering is reasoning from incomplete epistemic context.
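The same filter can also be applied agent-side over rows already fetched; a minimal sketch, with illustrative entity dicts standing in for parsed `--json` results:

```python
# Illustrative rows, shaped like parsed arcflow --json results.
entities = [
    {"id": "unit-01", "_observation_class": "observed",  "_confidence": 0.94},
    {"id": "unit-02", "_observation_class": "predicted", "_confidence": 0.41},
    {"id": "unit-03", "_observation_class": "observed",  "_confidence": 0.62},
]

def trusted(rows, threshold=0.85):
    # Mirrors the WHERE clause: observed facts above the confidence threshold.
    keep = [e for e in rows
            if e["_observation_class"] == "observed"
            and e["_confidence"] > threshold]
    # Highest-confidence facts first, as in ORDER BY e._confidence DESC.
    return sorted(keep, key=lambda e: e["_confidence"], reverse=True)

print([e["id"] for e in trusted(entities)])  # ['unit-01']
```

Prefer pushing the filter into the query itself when possible; the agent-side version is for post-processing results that were fetched for other reasons.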
## Temporal queries for debugging
When an agent needs to understand what the world model showed at a previous moment:
```shell
# What was the entity's state 50 mutations ago?
arcflow query "MATCH (e:Entity {id: 'unit-01'}) AS OF seq 50 RETURN e.x, e.y, e._confidence" --json

# What entities existed before the agent's last write batch?
arcflow query "MATCH (e:Entity) AS OF seq \$seq RETURN e.id, e._observation_class" \
  --param seq=100 --json
```

This is the primary debugging surface: when an agent's decision produced a wrong outcome, `AS OF seq N` shows exactly what state the world model was in when that decision was made.
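Once the agent holds both the current result and the `AS OF` result, the comparison itself is ordinary data diffing; a minimal sketch with hypothetical stand-in dicts for the two query results:

```python
# Hypothetical stand-ins for two parsed arcflow --json results:
# the entity's state now, and its state at seq 50 (via AS OF seq 50).
now = {"x": 15.1, "y": 9.0, "_confidence": 0.97}
at_seq_50 = {"x": 12.4, "y": 8.7, "_confidence": 0.94}

# Properties that changed since the decision was made: {key: (then, now)}
changed = {k: (at_seq_50.get(k), v) for k, v in now.items()
           if v != at_seq_50.get(k)}
print(changed)
```

The diff tells the agent whether the bad decision came from stale data (the value changed afterwards) or from the reasoning itself (the value was the same then).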
## Query planning before execution
Before running an expensive traversal, agents can inspect the execution plan:
```shell
arcflow query "EXPLAIN MATCH (a:Entity)-[:DETECTS*1..5]->(b) RETURN b.id" --json
```

Use this before any multi-hop traversal or algorithm call on a large world model.
## Checkpointing before risky operations
Agents checkpoint state before mutations that are hard to reverse:
```shell
# Save state before a bulk update
arcflow query ":snapshot ./before-migration.json" --json

# Run the migration
arcflow query "MATCH (e:Entity) SET e.schema_version = 2" --json

# Verify, then if something is wrong:
arcflow query ":restore ./before-migration.json" --json
```

## Batch execution — multiple queries, one call
For processing many queries at once, the `--exec-dir` pattern avoids per-query startup overhead:
```shell
# Agent writes queries as files
echo "MATCH (e:Entity) WHERE e._confidence > 0.9 RETURN e.id" > queries/trusted.cypher
echo "CALL algo.pageRank()" > queries/centrality.cypher

# Execute all at once
arcflow --exec-dir queries/ --output-dir results/

# Results appear as JSON files
cat results/trusted.json
cat results/centrality.json
```

One CLI invocation. N queries. N result files. No per-query startup.
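An agent can stage such a batch programmatically; a sketch of the file-staging side only, where the `arcflow` invocation is left as a comment and the query strings are illustrative:

```python
import pathlib
import tempfile

# Stage a batch of queries for the --exec-dir pattern: one .cypher file
# per query. Directory and file names here are illustrative.
workdir = pathlib.Path(tempfile.mkdtemp())
queries = workdir / "queries"
queries.mkdir()

batch = {
    "trusted.cypher": "MATCH (e:Entity) WHERE e._confidence > 0.9 RETURN e.id",
    "centrality.cypher": "CALL algo.pageRank()",
}
for name, text in batch.items():
    (queries / name).write_text(text + "\n")

# The agent would then run one CLI invocation:
#   arcflow --exec-dir queries/ --output-dir results/
print(sorted(p.name for p in queries.iterdir()))
```

Result files inherit the query file names, so the agent can pair each `results/<name>.json` back to its `queries/<name>.cypher` without extra bookkeeping.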
## Multi-agent shared workspace
Multiple agents can share the same world model through the filesystem:
```
project/
└── .arcflow/
    └── data/
        └── worldcypher.snapshot.json   # Shared graph state
```
Agent A writes observations. Agent B queries them. Agent C runs algorithms. No message broker. No coordination framework. For concurrent writes at scale, start the HTTP server to serialize access:
```shell
arcflow --http 8080 --data-dir .arcflow/data
```

## See Also
- Agent-Native — full surface comparison and filesystem workspace patterns
- Agent Tooling — ephemeral in-process usage, MCP for cloud chat UIs
- Temporal Queries — `AS OF seq N` for replay and audit
- Swarm & Multi-Agent — multi-agent coordination via shared world model