# Sync
The full persistent world model — spatial positions, temporal history, confidence scores, observation classes — runs locally and replicates to the cloud automatically. Local operations are instant: no round-trip, no network dependency, no blocking. Sync is eventual, conflict-resolved by confidence and temporal precedence, and resumable. The local world model stays authoritative. The cloud stays consistent. Multiple agents can each maintain their own local world model and converge on shared state without central coordination.
```
Local instance                 Cloud
┌──────────────────┐  HTTPS   ┌──────────────────┐
│ Browser (WASM)   │────────→│ ArcFlow Cloud    │
│ Node.js process  │←────────│                  │
│ Edge device      │  sync   │ Persistent graph │
└──────────────────┘         └──────────────────┘
```
## How it works
Every mutation is captured in a sync WAL (write-ahead log). The sync engine pushes WAL entries to the cloud and pulls remote changes back.
```typescript
import { openInMemory } from 'arcflow'

const db = openInMemory()

// Mutations are automatically captured in the sync WAL
db.mutate("CREATE (n:Person {name: 'Alice', age: 30})")
db.mutate("CREATE (n:Person {name: 'Bob', age: 25})")

// Check how many mutations are pending sync
console.log(db.syncPending()) // 2

// Read queries don't enter the WAL
db.query("MATCH (n:Person) RETURN n.name")
console.log(db.syncPending()) // still 2

// Graph fingerprint — hash of current state
console.log(db.fingerprint()) // "29553633"
```

## Sync WAL
The sync WAL captures every mutating query automatically. Read queries are never captured.
| Operation | Enters WAL? |
|---|---|
| `db.mutate("CREATE ...")` | Yes |
| `db.mutate("SET ...")` | Yes |
| `db.mutate("DELETE ...")` | Yes |
| `db.batchMutate([...])` | Yes (each query) |
| `db.query("MATCH ...")` | No |
| `db.query("CALL algo...")` | No |
| `db.stats()` | No |
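The capture rules above can be sketched as a tiny in-memory WAL. `SyncWAL`, `capture`, and `pendingSince` are illustrative names, not part of the ArcFlow API:

```typescript
// Minimal sketch of WAL delta capture: mutating queries are recorded with
// monotonically increasing sequence numbers; read queries never reach capture().
interface WalEntry {
  seq: number
  query: string
}

class SyncWAL {
  private entries: WalEntry[] = []
  private nextSeq = 1

  /** Called for db.mutate and for each query in db.batchMutate. */
  capture(query: string): void {
    this.entries.push({ seq: this.nextSeq++, query })
  }

  /** Entries not yet acknowledged by the cloud. */
  pendingSince(ackedSeq: number): WalEntry[] {
    return this.entries.filter(e => e.seq > ackedSeq)
  }

  pendingCount(ackedSeq = 0): number {
    return this.pendingSince(ackedSeq).length
  }
}

const wal = new SyncWAL()
wal.capture("CREATE (n:Person {name: 'Alice', age: 30})")
wal.capture("CREATE (n:Person {name: 'Bob', age: 25})")
// MATCH / CALL / stats() are reads, so nothing is captured for them
console.log(wal.pendingCount()) // 2
```

Once the cloud acknowledges a sequence number, everything at or below it can be skipped on the next push — which is what makes sync resumable.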
## Fingerprint
The graph fingerprint is a hash of the entire graph state — generation, node count, relationship count, mutation sequence. Two instances with the same fingerprint have identical content.
```typescript
const fp1 = db.fingerprint()
db.mutate("CREATE (n:Change {v: 1})")
const fp2 = db.fingerprint()
// fp1 !== fp2 — fingerprint changes after mutation
```

Use fingerprints to verify sync consistency: after a sync cycle, both client and cloud should have the same fingerprint.
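As a sketch of the idea only — ArcFlow's actual hash function is internal — here is an FNV-1a hash over the state fields listed above, showing the property that matters: identical state yields an identical fingerprint.

```typescript
// Illustrative fingerprint: a 32-bit FNV-1a hash over generation, node count,
// relationship count, and mutation sequence. Not ArcFlow's real hash.
interface GraphState {
  generation: number
  nodeCount: number
  relCount: number
  mutationSeq: number
}

function fingerprint(s: GraphState): string {
  const input = `${s.generation}:${s.nodeCount}:${s.relCount}:${s.mutationSeq}`
  let h = 0x811c9dc5
  for (let i = 0; i < input.length; i++) {
    h ^= input.charCodeAt(i)
    h = Math.imul(h, 0x01000193) >>> 0
  }
  return String(h)
}

const a: GraphState = { generation: 1, nodeCount: 2, relCount: 1, mutationSeq: 3 }
const b: GraphState = { ...a }
console.log(fingerprint(a) === fingerprint(b)) // true — same state, same hash
const c: GraphState = { ...a, mutationSeq: 4 }
console.log(fingerprint(a) === fingerprint(c)) // false — any state change shows up
```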
## Sync protocol
The sync engine uses a push/pull protocol with resume via high-water marks:
- Push: local WAL entries are batched into a sync request and POSTed to the cloud
- Ack: the cloud responds with `acked_seq` + `fingerprint`
- Pull: the client requests changes since its `remote_hwm`
- Apply: remote queries are replayed locally
- Verify: fingerprints are compared; a match means both sides are consistent
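The five steps can be simulated end to end in memory. Everything here (`Cloud`, `Client`, `syncCycle`) is an illustrative sketch of the protocol, not the ArcFlow transport API:

```typescript
// In-memory simulation of push/pull with high-water marks.
type Entry = { seq: number; peer: string; query: string }

class Cloud {
  log: Entry[] = []
  private nextSeq = 1
  /** Push: append local WAL entries; respond with acked_seq. */
  push(peer: string, queries: string[]): number {
    for (const q of queries) this.log.push({ seq: this.nextSeq++, peer, query: q })
    return this.nextSeq - 1
  }
  /** Pull: changes after the caller's remote_hwm, excluding its own writes. */
  pullSince(remoteHwm: number, peer: string): Entry[] {
    return this.log.filter(e => e.seq > remoteHwm && e.peer !== peer)
  }
}

class Client {
  pendingQueries: string[] = []
  remoteHwm = 0
  applied: string[] = []
  constructor(readonly id: string) {}

  syncCycle(cloud: Cloud): void {
    cloud.push(this.id, this.pendingQueries)               // Push + Ack
    this.pendingQueries = []
    for (const e of cloud.pullSince(this.remoteHwm, this.id)) { // Pull
      this.applied.push(e.query)                           // Apply: replay remote query
      this.remoteHwm = e.seq                               // resumable: advance the HWM
    }
  }
}

const cloud = new Cloud()
const a = new Client('node-a')
const b = new Client('node-b')
a.pendingQueries.push("CREATE (n:Person {name: 'Alice'})")
a.syncCycle(cloud)
b.syncCycle(cloud)
console.log(b.applied) // ["CREATE (n:Person {name: 'Alice'})"]
```

Because each side only advances its high-water mark after a successful apply, an interrupted cycle simply resumes from the last mark on the next run.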
## Sync request format
```json
{
  "peer_id": "node-1",
  "seq_start": 1,
  "seq_end": 3,
  "queries": [
    "CREATE (n:Person {name: 'Alice'})",
    "CREATE (n:Person {name: 'Bob'})",
    "CREATE (a:Person {name: 'Alice'})-[:KNOWS]->(b:Person {name: 'Bob'})"
  ],
  "fingerprint": 29553633
}
```

Queries are replayed on the receiving end — MERGE for idempotent operations, CREATE for new data.
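The request shape maps directly onto a TypeScript type. The `buildSyncRequest` helper below is hypothetical, but the field names come from the example above:

```typescript
// Typed view of a sync request; seq_start/seq_end are derived from the
// batch of pending WAL entries being pushed.
interface SyncRequest {
  peer_id: string
  seq_start: number
  seq_end: number
  queries: string[]
  fingerprint: number
}

function buildSyncRequest(
  peerId: string,
  entries: { seq: number; query: string }[],
  fingerprint: number,
): SyncRequest {
  return {
    peer_id: peerId,
    seq_start: entries[0].seq,
    seq_end: entries[entries.length - 1].seq,
    queries: entries.map(e => e.query),
    fingerprint,
  }
}

const req = buildSyncRequest(
  'node-1',
  [
    { seq: 1, query: "CREATE (n:Person {name: 'Alice'})" },
    { seq: 2, query: "CREATE (n:Person {name: 'Bob'})" },
  ],
  29553633,
)
console.log(req.seq_start, req.seq_end) // 1 2
```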
## Conflict resolution
When two clients mutate concurrently, conflicts are resolved using vector clocks:
| Strategy | Rule | Default |
|---|---|---|
| Last-writer-wins | Higher vector clock wins | Yes |
| First-writer-wins | Earlier clock wins | Optional |
| Evidence-weighted | Higher confidence wins | For observation data |
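A minimal sketch of vector-clock comparison and resolution, assuming the evidence-weighted fallback applies when two clocks are concurrent (types and function names here are illustrative):

```typescript
// Vector clocks: one counter per peer. A clock "wins" only if it is ahead
// on at least one peer and behind on none; otherwise the writes are concurrent.
type VClock = Record<string, number>

function compare(a: VClock, b: VClock): 'before' | 'after' | 'equal' | 'concurrent' {
  let aAhead = false, bAhead = false
  for (const peer of new Set([...Object.keys(a), ...Object.keys(b)])) {
    const av = a[peer] ?? 0, bv = b[peer] ?? 0
    if (av > bv) aAhead = true
    if (bv > av) bAhead = true
  }
  if (aAhead && bAhead) return 'concurrent'
  if (aAhead) return 'after'
  if (bAhead) return 'before'
  return 'equal'
}

interface Write { clock: VClock; value: string; confidence?: number }

/** Last-writer-wins; concurrent writes fall back to confidence (evidence-weighted). */
function resolve(a: Write, b: Write): Write {
  const ord = compare(a.clock, b.clock)
  if (ord === 'after') return a
  if (ord === 'before') return b
  return (a.confidence ?? 0) >= (b.confidence ?? 0) ? a : b
}

const w1: Write = { clock: { p1: 2, p2: 1 }, value: 'x', confidence: 0.4 }
const w2: Write = { clock: { p1: 1, p2: 1 }, value: 'y', confidence: 0.9 }
console.log(resolve(w1, w2).value) // 'x' — w1 causally follows w2, so LWW picks it
```

Note that truly concurrent clocks have no "higher" one; a deterministic tiebreaker (here, confidence) is what keeps all peers converging on the same value.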
## LAN Discovery
ArcFlow instances on the same network discover each other automatically via UDP multicast. No configuration needed — peers announce themselves and sync propagates across the mesh.
```
Group:     239.255.0.1:7947
Announce:  every 5 seconds
Stale:     60 seconds (peer removed if no heartbeat)
Protocol:  {"op":"announce","peer_id":"af-abc123","data_port":7948}
```
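The announce/stale lifecycle can be sketched without touching the network. The `PeerTable` class is illustrative, but the message shape and the 60-second staleness window match the protocol above:

```typescript
// Peer table maintained from multicast announcements: every announce
// refreshes the sender's heartbeat; peers silent for 60s are pruned.
interface Announce { op: 'announce'; peer_id: string; data_port: number }

class PeerTable {
  private lastSeen = new Map<string, number>()
  static readonly STALE_MS = 60_000

  onAnnounce(raw: string, now: number): void {
    const msg = JSON.parse(raw) as Announce
    if (msg.op === 'announce') this.lastSeen.set(msg.peer_id, now)
  }

  /** Remove peers with no heartbeat in the last 60 seconds. */
  prune(now: number): void {
    for (const [id, seen] of this.lastSeen)
      if (now - seen > PeerTable.STALE_MS) this.lastSeen.delete(id)
  }

  peers(): string[] { return [...this.lastSeen.keys()] }
}

const table = new PeerTable()
table.onAnnounce('{"op":"announce","peer_id":"af-abc123","data_port":7948}', 0)
table.prune(5_000)
console.log(table.peers()) // ["af-abc123"] — still fresh
table.prune(61_000)
console.log(table.peers()) // [] — removed after 60s without a heartbeat
```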
CLI commands for mesh status:
```shell
arcflow sync health    # Show discovered peers
arcflow sync scopes    # List registered sync scopes
arcflow sync announce  # Print this node's announcement JSON
```

## Supported transports
| Transport | How |
|---|---|
| HTTP push/pull | arcflow sync push/pull |
| Background auto-sync | arcflow sync start daemon |
| LAN peer discovery | UDP multicast announce |
| WebSocket delta-push | Live delta stream over WS |
| ArcFlow WAL Stream | High-throughput WAL streaming to read replicas |
| Object-store fan-out | WAL segments written to S3/GCS/Azure for durable replication |
| WASM (browser) | Not yet supported |
## Replication (SWMR)
ArcFlow supports a Single-Writer-Multiple-Reader replication model via ArcFlow WAL Stream. The primary node streams WAL entries directly to read replicas, which apply them in order.
```cypher
-- Check SWMR contract and replication configuration
CALL arcflow.replication.contract()
YIELD mode, writes_enabled, replication_factor, description

-- ArcFlow WAL Stream parameters
CALL arcflow.replication.walTailing()
YIELD field, value, description

-- Object-store fan-out configuration (S3/GCS/Azure)
CALL arcflow.replication.objectStoreFanout()
YIELD field, value, description

-- Live replication health
CALL db.replicationStatus()
```

The `mode` from `arcflow.replication.contract()` is one of `"primary"`, `"replica"`, or `"standalone"`; `writes_enabled` is `false` on read replicas.
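A sketch of how a caller might gate writes using that contract. The `route` helper is hypothetical, but the `mode`/`writes_enabled` fields follow `arcflow.replication.contract()`:

```typescript
// SWMR routing sketch: mutating queries are only applied where writes are
// enabled (the primary, or a standalone node); reads can be served anywhere.
interface ReplicationContract {
  mode: 'primary' | 'replica' | 'standalone'
  writes_enabled: boolean
}

function route(contract: ReplicationContract, query: string): 'apply' | 'reject' {
  const isWrite = /^\s*(CREATE|SET|DELETE|MERGE)\b/i.test(query)
  if (isWrite && !contract.writes_enabled) return 'reject' // replica is read-only
  return 'apply'
}

const primary: ReplicationContract = { mode: 'primary', writes_enabled: true }
const replica: ReplicationContract = { mode: 'replica', writes_enabled: false }
console.log(route(replica, "CREATE (n:Person)"))  // 'reject'
console.log(route(replica, "MATCH (n) RETURN n")) // 'apply'
console.log(route(primary, "CREATE (n:Person)"))  // 'apply'
```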
## API reference
```typescript
interface ArcflowDB {
  // ... existing methods ...

  /** Number of mutations pending sync push (0 if sync not configured). */
  syncPending(): number

  /** Graph fingerprint for sync verification. */
  fingerprint(): string
}
```

## Sync subsystem
| Component | What it provides |
|---|---|
| WAL delta capture | Mutating queries recorded with sequence numbers |
| Batched transport | Queries batched for efficient push/pull |
| Push/pull orchestration | Coordinator for outbound and inbound sync |
| Pluggable transport | Memory mock, HTTP, WASM fetch — same protocol |
| Causal ordering | Vector clock across peers for consistent ordering |
| Conflict detection | Entity-level detection and resolution |
| Fingerprint verification | State hash ensures sync consistency |
| Remote replay | Apply remote mutation queries locally |
## Architecture deep dive
See Sync Architecture for the full protocol specification, implementation waves, and engine primitive mapping.
## See Also
- Sync Architecture — full protocol specification, vector clocks, and engine primitives
- Event Sourcing — WAL replay and temporal audit trail built on the same mutation log
- Live Queries — live views that sync across nodes via CDC
- Cloud Architecture — distributed mesh, ArcFlow WAL Stream, multi-region