# Skills
A Skill is a named, versioned, graph-native enrichment program. You describe a relationship rule in natural language. ArcFlow compiles it once — using your configured LLM endpoint — into a cached graph pattern. From that point on, PROCESS NODE and REPROCESS EDGES execute the compiled pattern at graph-native speed: no model calls, no API latency, no token cost.
The LLM is a compiler frontend. ArcFlow is the optimized runtime. These two things never overlap.
## The compile-once model

```
CREATE SKILL ZoneCoOccupancy
FROM PROMPT 'Link two Robots as ZONE_PEER if they occupy the same Zone'
ALLOWED ON [Robot]
TIER SYMBOLIC
```
At creation time, ArcFlow sends the prompt and relevant schema context to OZ's compiler endpoint — authenticated automatically through your oz.com/world connection. No API keys to manage. Compilation results are cached: the same rule against the same schema returns an instant response at no cost to your allowance. On a cache miss, the schema context is prefix-cached across calls, so you pay only for the rule text itself. The compiled WorldCypher pattern is returned and stored in the skill alongside the original prompt.
Skill stored in graph:

```
name: ZoneCoOccupancy
prompt_template: 'Link two Robots as ZONE_PEER if they occupy the same Zone'
compiled_pattern: 'MATCH (a:Robot)-[:OCCUPIES]->(z:Zone)<-[:OCCUPIES]-(b:Robot)
                   WHERE a.id < b.id
                   MERGE (a)-[:ZONE_PEER {zone_id: z.id}]->(b)'
compiled_by: claude-opus-4-6
version: 1
tier: symbolic
```
The original prompt is preserved as auditable intent — you can always read it to verify the compiled form matches what you asked for. The compiled pattern is what executes.
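Since both forms live on the stored skill, auditing is an ordinary read. A minimal sketch, assuming skills are exposed as `Skill` nodes addressable by name (the label and property names here are illustrative, not documented above):

```
-- Compare auditable intent against the executable form
-- (Skill label and property names assumed for illustration)
MATCH (s:Skill {name: 'ZoneCoOccupancy'})
RETURN s.prompt_template, s.compiled_pattern, s.compiled_by, s.version
```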
## PROCESS NODE: zero LLM

```
PROCESS NODE (n:Robot)
```

Runs the compiled pattern against every matched node. No model call. No API request. The engine evaluates the graph pattern directly — finding co-occupying robots, scoring the relationship by zone confidence, materializing edges.
At 1,000 robots this takes milliseconds. At 1,000,000 it scales the same way — the pattern is graph algebra, not inference.
## REPROCESS EDGES: zero LLM

```
REPROCESS EDGES WHERE confidence < 0.7
```

When the world model changes — new sensor data arrives, observation confidence improves — low-trust edges can be refreshed. ArcFlow deletes edges below the threshold and re-runs the same compiled pattern for fresh scores. The LLM is not consulted. The pattern was compiled once and it runs again, unchanged.
To get a materially different pattern, you create a new skill version — which calls the LLM once more to recompile.
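As a sketch of that flow: re-issue CREATE SKILL under the same name with a refined prompt. Whether this bumps `version` automatically or requires an explicit clause is not specified on this page, so treat the shape below as illustrative:

```
-- One new LLM compilation; assume the stored version becomes 2
CREATE SKILL ZoneCoOccupancy
FROM PROMPT 'Link two Robots as ZONE_PEER if they occupy the same Zone
             for at least 30 consecutive seconds'
ALLOWED ON [Robot]
TIER SYMBOLIC
```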
## What a skill actually contains

| Property | Description |
|---|---|
| `name` | Unique identifier |
| `version` | Semantic version — edges carry the version that built them |
| `prompt_template` | The original natural language rule — auditable intent |
| `compiled_pattern` | WorldCypher graph pattern produced by the LLM at creation time |
| `compiled_by` | Model that compiled the pattern — provenance on the compilation itself |
| `allowed_on` | Which node labels this skill applies to |
| `min_confidence` | Minimum score for an edge to be materialized |
| `active` | Whether the skill runs on PROCESS NODE calls |
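The `min_confidence` and `active` properties suggest per-skill tuning at creation time. A hypothetical sketch, assuming CREATE SKILL accepts optional clauses for them — no such clauses are documented on this page, so the syntax is purely illustrative:

```
-- Hypothetical optional clauses; syntax assumed, not documented
CREATE SKILL ZoneCoOccupancy
FROM PROMPT 'Link two Robots as ZONE_PEER if they occupy the same Zone'
ALLOWED ON [Robot]
TIER SYMBOLIC
MIN CONFIDENCE 0.6
ACTIVE true
```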
## Execution tiers
The tier determines what the LLM compiles to — the output format of the compilation step.
| Tier | LLM compiles to | Runtime | Status |
|---|---|---|---|
| SYMBOLIC | WorldCypher graph pattern | Sub-millisecond graph traversal | Active |
| WASM | Sandboxed WASM module | ~1ms compiled logic | Planned |
TIER SYMBOLIC covers the vast majority of relationship rules — co-location, shared detection, causal proximity, coverage overlap. Any rule expressible as a graph path can be compiled to a SYMBOLIC pattern.
TIER WASM is for rules that require heavy computation beyond graph traversal — custom scoring functions, matrix operations, statistical models. The LLM generates the implementation; the WASM sandbox executes it deterministically.
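Since the tier only changes the compilation target, a WASM-tier skill would presumably be declared the same way. A speculative sketch — the tier is still Planned, so the skill name, prompt, and behavior below are entirely illustrative:

```
-- Speculative: a compute-heavy rule targeting the planned WASM tier
CREATE SKILL TrajectoryCorrelator
FROM PROMPT 'Link two Robots as MOTION_PEER when their recent trajectory
             vectors correlate above 0.9'
ALLOWED ON [Robot]
TIER WASM
```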
## World model use case
In a facility environment, sensors observe entities. Skills derive the relationships between them — co-located robots, shared detection chains, coverage gaps. These relationships don't come from sensors directly. They come from reasoning over observed facts.
```
-- Step 1: create the skill (LLM called once)
CREATE SKILL CoDetectionLinker
FROM PROMPT 'Link two Robots as CO_DETECTED if the same Sensor detected both,
             weighted by the sensor reliability and detection confidence'
ALLOWED ON [Robot]
TIER SYMBOLIC

-- Step 2: materialize edges across the world model (zero LLM)
PROCESS NODE (n:Robot)
-- => 23 CO_DETECTED edges, avg confidence 0.87

-- Step 3: query derived relationships
MATCH (a:Robot)-[r:CO_DETECTED]->(b:Robot)
WHERE r.confidence > 0.8
RETURN a.name, b.name, r.confidence, r.zone_id
ORDER BY r.confidence DESC

-- Step 4: world model updated, refresh weak edges (zero LLM)
REPROCESS EDGES WHERE confidence < 0.7
-- => 6 edges re-evaluated with updated sensor data, avg confidence now 0.91
```

No ETL pipeline. No external enrichment service. The LLM authored the logic once; ArcFlow operates it forever.
## Why this matters
Most graph databases treat edges as static assertions — inserted once, never questioned. Skills make edges derived: compiled by a model, scored by confidence, traced to the version that produced them, and re-evaluable as the world changes.
The model call happens at authoring time. Production is model-free.
| Stage | LLM involved? | Cost |
|---|---|---|
| CREATE SKILL — compilation | Yes, once | One compilation via your oz.com/world connection (cached results are free) |
| PROCESS NODE — execution | No | Pure graph computation, unlimited |
| REPROCESS EDGES — refresh | No | Pure graph computation, unlimited |
| Query derived edges | No | Read-only graph traversal, unlimited |
Compilation uses your oz.com/world connection: credentials are managed through your account, with no separate API keys to configure, and repeat compilations of an identical rule against an unchanged schema are served from cache at no cost.
For AI applications: every RAG answer can trace its evidence chain back to the specific skill version and compiled pattern that produced each supporting edge — with the original human-readable prompt preserved alongside it.
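That evidence chain can be followed with a plain traversal. A sketch, assuming derived edges carry `skill_name` and `skill_version` properties and skills are queryable as `Skill` nodes — these names are illustrative, not documented above:

```
-- From a supporting edge back to the rule that produced it
-- (edge properties and Skill label assumed for illustration)
MATCH (a:Robot)-[r:CO_DETECTED]->(b:Robot)
MATCH (s:Skill {name: r.skill_name, version: r.skill_version})
RETURN a.name, b.name, r.confidence, s.prompt_template, s.compiled_by
```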
## See Also
- CREATE SKILL — register and compile a skill
- PROCESS NODE — execute compiled skills across the world model
- REPROCESS EDGES — refresh weak edges without model calls
- Confidence & Provenance — how skill edges carry scores and evidence chains
- Using Skills — step-by-step guide with world model examples