Neural Fabric · Shared Memory

How It Works

Save a note, drop in a file, capture a decision. It's instantly available to every agent you use. One Neural Fabric. Everything reads from it.

01

Write once. Recall anywhere.

Write from any MCP-connected LLM. Ask from any other. It's there, with the date, the confidence, and the source. That's the Neural Fabric.

Agent A · IDE
// Write a decision
fabric_note({
  notes: ["Chose Teller over Plaid, simpler API surface, better webhook reliability."],
  category: "decision"
})
// → ✓ Persisted to Fabric · confidence: 0.97
14 days later · different agent
// Recall from any connected LLM
fabric_query({ query: "Why did we pick Teller?" })
// → Teller was chosen over Plaid for its simpler API surface and better webhook reliability.
//   source: decision · Feb 14, 2026 · confidence 0.97
 
[Diagram: three writers, one Fabric, any reader]

LLM Agent → ✓ saved to Fabric
AI Agent → ✓ saved to Fabric
Document → ✓ saved to Fabric

Any Agent · Later
"What do we know about this project?"
→ Synthesised answer · 3 sources · confidence 1.0 · 812ms
  · LLM Agent: "Client wants the dashboard delivered by end of March." (written 6 days ago)
  · AI Agent: "Budget approved. Team aligned on delivery scope." (written 3 days ago)
  · Document: spec doc, 5 requirements connected to this project (ingested 8 days ago)
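Expressed as a tool call, that recall follows the same shape as the query example above (the response format here is illustrative, not a guaranteed schema):

// Recall across all three writers at once
fabric_query({ query: "What do we know about this project?" })
// → Synthesised answer from 3 sources · confidence 1.0 · 812ms
//   1. note: "Client wants the dashboard delivered by end of March." (6 days ago)
//   2. note: "Budget approved. Team aligned on delivery scope." (3 days ago)
//   3. document: spec doc, 5 linked requirements (8 days ago)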

02

What makes it different.

Zero API cost · At ingestion
No call to any embedding provider to encode your documents. 250× faster than dense embeddings. Files never leave your environment to get vectorized.
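A minimal sketch of what embedding-free ingestion can look like: tokens hashed into a sparse local index, with no network call. The function name and hashing scheme are illustrative assumptions, not the Fabric's actual internals.

// Illustrative only: local sparse encoding, zero external API calls.
function sparseEncode(text: string, dims = 1 << 20): Map<number, number> {
  const vec = new Map<number, number>();
  for (const token of text.toLowerCase().split(/\W+/).filter(Boolean)) {
    // Cheap string hash into a fixed-size sparse index space
    let h = 0;
    for (let i = 0; i < token.length; i++) h = (h * 31 + token.charCodeAt(i)) >>> 0;
    const idx = h % dims;
    vec.set(idx, (vec.get(idx) ?? 0) + 1); // term-frequency weight
  }
  return vec; // stays on your machine; no embedding provider involved
}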
Multi-hop · Graph traversal
Answers emerge from connections across your entire knowledge graph, not from a single matching chunk. Follows chains of related entities that standard retrieval can't see.
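A toy illustration of multi-hop traversal: a breadth-first walk that collects every entity reachable within N hops. The adjacency-map shape is an assumption for the sketch, not the Fabric's schema.

// Toy multi-hop traversal: breadth-first walk over an entity graph.
type Graph = Map<string, string[]>;

function reachable(graph: Graph, start: string, maxHops: number): Set<string> {
  const seen = new Set<string>([start]);
  let frontier = [start];
  for (let hop = 0; hop < maxHops && frontier.length > 0; hop++) {
    const next: string[] = [];
    for (const entity of frontier) {
      for (const neighbor of graph.get(entity) ?? []) {
        if (!seen.has(neighbor)) { seen.add(neighbor); next.push(neighbor); }
      }
    }
    frontier = next; // entities first reached at this hop
  }
  return seen; // everything within maxHops of the start entity
}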
Governed refusal · Anti-hallucination
When the evidence isn't there, the Fabric returns "No information available", not a guess. Gets quieter, not wronger. Every answer traces to its source.
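The refusal rule, sketched under an assumed confidence threshold. The threshold value and the Evidence shape are illustrative assumptions; only the refusal behaviour itself comes from the description above.

// Governed refusal, sketched: answer only when evidence clears a bar.
interface Evidence { text: string; source: string; score: number; }

function answer(evidence: Evidence[], threshold = 0.5): string {
  const supported = evidence.filter(e => e.score >= threshold);
  if (supported.length === 0) {
    return "No information available"; // refuse rather than guess
  }
  // Every returned claim carries its provenance
  return supported.map(e => `${e.text} (source: ${e.source})`).join(" ");
}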

03

Nine tools. Three jobs.

Write
fabric_note · Decisions, constraints, facts
fabric_assert · Permanent facts with confidence
fabric_ingest · Entire documents, instantly queryable

Read
fabric_query · Grounded answers with sources
fabric_search · Fast text search across everything
fabric_entities · Browse by type, across layers

Coordinate
fabric_thread · Follow notes across agents
fabric_task · Assign work between agents
fabric_status · Health, counts, pipeline status

Same tools in Claude.ai, ChatGPT, Cursor, OpenClaw, or any MCP-compatible agent.
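For a feel of the Coordinate tools, here is a hypothetical call shape. The parameter names are assumptions modelled on fabric_note above, not a documented schema; only the tool names and their one-line descriptions come from the list.

// Hypothetical call shapes · parameter names are assumptions, not a documented schema
fabric_task({ task: "Review the March dashboard spec", assignee: "agent-b" })
// → ✓ task created

fabric_status({})
// → health, entity counts, pipeline status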

04

Validated at scale.

10-hop graph traversal passed at every scale tested, from 211 entities to 5 million. The Fabric scales linearly at ~120 bytes per entity. Spectral radius held below 1.0 at every point.

Scale Validation · All milestones passed

Entities · Retrieval time · Memory · Max hops · Status
1,000 · 6.5ms · 121KB · 10 · PASS
10,000 · 48ms · 1.2MB · 10 · PASS
100,000 · 363ms · 11.8MB · 10 · PASS
1,000,000 · 5.3s · 118MB · 10 · PASS
5,000,000 · 1.9min · 591MB · 10 · PASS
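A quick check of the linear-memory claim against the last row: at ~120 bytes per entity, 5,000,000 entities predict 5,000,000 × 120 B ≈ 600 MB. The measured 591 MB lands about 1.5% under that, inside the stated 1.7% tolerance.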
5,188 entities/sec · Ingestion throughput. Zero external API calls. 250× faster than dense embeddings.

120B per entity · Linear memory scaling. Validated accurate to within 1.7% of formula at 5M entities.

1,097× vs dense at 10K · Sparse vs dense speedup. Dense matrix at 10K requires 800MB. Sparse requires 1.2MB.

0.998 spectral radius · Measured on real-world corpus. Constrained below 1.0 at every write. The Fabric cannot drift.
Validated to 5 million entities. 99.8% candidate reduction at every scale. Multi-hop traversal verified at 10+ hops. Linear O(n) scaling confirmed.
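How a "spectral radius below 1.0" invariant can be checked in practice, sketched with standard power iteration. This is a generic illustration of the invariant, not the Fabric's actual guard; the dense-matrix input is for brevity only.

// Power iteration: estimate the spectral radius of an adjacency matrix.
function spectralRadius(adj: number[][], iters = 100): number {
  const n = adj.length;
  let v = new Array(n).fill(1 / Math.sqrt(n));
  let norm = 0;
  for (let k = 0; k < iters; k++) {
    const w = new Array(n).fill(0);
    for (let i = 0; i < n; i++)
      for (let j = 0; j < n; j++) w[i] += adj[i][j] * v[j];
    norm = Math.sqrt(w.reduce((s, x) => s + x * x, 0));
    if (norm === 0) return 0; // no edges: trivially stable
    v = w.map(x => x / norm);
  }
  return norm; // ≈ dominant eigenvalue magnitude; keep it below 1.0
}
// A write is accepted only if the updated graph keeps the radius under 1.0.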