# MCP Server
The MCP server is the integration surface for cloud chat interfaces — ChatGPT, Claude.ai, Gemini web, Copilot Chat — that run in a browser or cloud sandbox with no local filesystem access.
If you have a shell, use the CLI binary instead. Claude Code, Codex CLI, and Gemini CLI all have shell tools. The arcflow binary exits in under 10ms, needs no configuration, and is fully composable with grep, jq, and git. MCP adds a protocol layer that those agents don't need.
MCP is correct when there is no shell. That is its exact scope.
## Setup

```shell
npx arcflow-mcp                          # In-memory (ephemeral)
npx arcflow-mcp --data-dir ./my-graph    # Persistent graph
```

### Claude Desktop / Claude.ai
```json
{
  "mcpServers": {
    "arcflow": {
      "command": "npx",
      "args": ["arcflow-mcp"]
    }
  }
}
```

### With persistent data
```json
{
  "mcpServers": {
    "arcflow": {
      "command": "npx",
      "args": ["arcflow-mcp", "--data-dir", "./my-graph"]
    }
  }
}
```

## Tools
| Tool | Description | Read/Write |
|---|---|---|
| `get_schema` | Labels, relationship types, properties, indexes, stats | Read |
| `get_capabilities` | Algorithms, procedures, window functions, features | Read |
| `read_query` | Execute read-only WorldCypher (`MATCH`, `CALL algo.`, `CALL db.`) | Read |
| `write_query` | Execute mutations (`CREATE`, `SET`, `DELETE`, `MERGE`) | Write |
| `graph_rag` | Trusted GraphRAG — answer questions from the world model | Read |
| `ingest_nodes` | Push node/edge batches with idempotent content-hash dedup | Write |
| `create_live_view` | Register a standing query as a named live view | Write |
| `live_view_status` | Poll a live view's current result set and frontier | Read |
`read_query` rejects all mutations — `CREATE`, `SET`, `DELETE`, `MERGE`, and `REMOVE` are refused at the tool level. Use `write_query` for explicit write operations.
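The refusal behaves like a clause check on the query text before anything reaches the engine. The sketch below is illustrative only — `assertReadOnly` is a hypothetical helper, not arcflow's actual implementation:

```typescript
// Hypothetical sketch of read_query's mutation guard: queries containing
// write clauses are refused at the tool boundary, before execution.
const WRITE_CLAUSES = ["CREATE", "SET", "DELETE", "MERGE", "REMOVE"];

function assertReadOnly(query: string): void {
  const upper = query.toUpperCase();
  for (const clause of WRITE_CLAUSES) {
    // Word-boundary match so identifiers that merely contain a clause
    // (e.g. a property named "offset") do not trip the guard.
    if (new RegExp(`\\b${clause}\\b`).test(upper)) {
      throw new Error(
        `read_query refused: ${clause} is a write clause; use write_query`
      );
    }
  }
}

assertReadOnly("MATCH (e:Entity) RETURN e.id"); // passes silently
// assertReadOnly("MATCH (e) SET e.x = 1")      // would throw
```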
## Tool details

### `read_query` and `write_query`

```
read_query("MATCH (e:Entity) WHERE e._confidence > 0.85 RETURN e.id, e.x, e.y")

write_query(
  "MATCH (e:Entity {id: $id}) SET e._confidence = $conf",
  { id: "unit-01", conf: 0.97 }
)
```
### `ingest_nodes`
Push structured node/edge deltas. Content-hash dedup means calling the same delta twice is safe — already-ingested nodes are silently skipped.
```json
{
  "added_nodes": [
    {
      "label": "Entity",
      "id": "unit-01",
      "content_hash": "abc123",
      "properties": { "x": 12.4, "y": 8.7, "_observation_class": "observed", "_confidence": 0.94 }
    }
  ],
  "removed_node_ids": [],
  "updated_nodes": [],
  "added_edges": [
    { "kind": "DETECTS", "from_id": "unit-01", "to_id": "contact-x" }
  ],
  "removed_edge_ids": []
}
```

Returns `{ nodes_added, nodes_removed, nodes_updated, edges_added, edges_removed, wal_bytes_written }`.
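The idempotency guarantee can be pictured as a filter keyed on `content_hash`. This is a client-side sketch of the behavior, not arcflow's implementation — `NodeDelta` mirrors the batch shape above, and `dedupeDelta` is a hypothetical helper:

```typescript
// Illustrative sketch of content-hash dedup: nodes whose content_hash has
// already been ingested are silently dropped, so re-sending a delta is safe.
interface NodeDelta {
  label: string;
  id: string;
  content_hash: string;
  properties: Record<string, unknown>;
}

function dedupeDelta(nodes: NodeDelta[], seen: Set<string>): NodeDelta[] {
  return nodes.filter((n) => {
    if (seen.has(n.content_hash)) return false; // already ingested: skip
    seen.add(n.content_hash);
    return true;
  });
}

const seen = new Set<string>();
const batch: NodeDelta[] = [
  { label: "Entity", id: "unit-01", content_hash: "abc123", properties: { x: 12.4 } },
];

console.log(dedupeDelta(batch, seen).length); // → 1 (first call ingests)
console.log(dedupeDelta(batch, seen).length); // → 0 (repeat is a no-op)
```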
### `create_live_view` and `live_view_status`
Register a view once, poll for changes:
```
create_live_view("high_risk", "MATCH (e:Entity) WHERE e._confidence < 0.4 RETURN e.id, e._confidence ORDER BY e._confidence ASC")

live_view_status("high_risk")
// → { frontier: 47, row_count: 3, query_text: "..." }
```
The frontier is a monotonically increasing mutation sequence number. If it has not changed since the last poll, the result set has not changed.
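A poller can therefore skip all result processing when the frontier is unchanged. A minimal sketch — `LiveViewStatus` mirrors the status shape shown above, and `poll` is a hypothetical helper, not part of the arcflow API:

```typescript
// Sketch of frontier-based polling: only re-process rows when the
// mutation sequence number has advanced since the last poll.
interface LiveViewStatus {
  frontier: number;
  row_count: number;
  query_text: string;
}

function poll(
  status: LiveViewStatus,
  lastFrontier: number
): { changed: boolean; frontier: number } {
  if (status.frontier === lastFrontier) {
    return { changed: false, frontier: lastFrontier }; // result set unchanged
  }
  return { changed: true, frontier: status.frontier }; // re-fetch rows now
}

let last = 0;
const status = { frontier: 47, row_count: 3, query_text: "MATCH ..." };
last = poll(status, last).frontier;  // changed: frontier advanced from 0 to 47
console.log(poll(status, last).changed); // → false: frontier unchanged, skip work
```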
### `graph_rag`
Ask a natural language question answered from the world model with confidence filtering:
```
graph_rag("Which entities have been observed with confidence above 0.9 in the last 5 minutes?")
```
## Latency
MCP operates over stdio JSON-RPC. Each tool call has ~100ms of overhead from process boundary, serialization, and JSON-RPC round-trip. This is acceptable for cloud chat interfaces where the user is already waiting for a response. It is not acceptable for application code or shell agent loops.
| Surface | Latency | Use when |
|---|---|---|
| napi-rs in-process | ~1 µs | Application code in Node.js |
| `arcflow` CLI binary | < 10 ms | Shell-capable agents (Claude Code, Codex, Gemini CLI) |
| MCP server | ~100 ms | Cloud chat UIs with no local shell |
## See Also

- CLI — the `arcflow` binary for shell-capable agents and developers
- Language Bindings — napi-rs (Node.js), Python, Rust, C, C++
- Agent-Native — filesystem workspace, batch execution, watch mode
- Skills — teach the world model a relationship rule in plain language; compiled once, executed at graph speed forever