Status: Shipped (v1.1)
Category: AI/ML, Developer Tool
Landing Page: next.henrybarefoot.com/engram
GitHub: github.com/HBarefoot/engram
npm: @hbarefoot/engram
Desktop App: macOS Download
Tech Stack: Node.js, React, Tailwind CSS, SQLite, MCP, REST API
Engram is a lightweight, embeddable agent memory layer that gives AI agents persistent, cross-session memory. Think SQLite for agent state — any framework can plug into it.
It solves the fundamental problem that every AI agent starts from zero each session, forgetting your codebase patterns, debugging preferences, deployment quirks, and personal context.
Runs fully offline with zero cloud dependencies. Data is stored locally at ~/.engram/
Every time you start a new Claude conversation, a new Claude Code session, or trigger an n8n AI workflow — the agent starts from zero. It doesn't know your server setup. It doesn't know your preferred stack. You end up re-explaining context constantly or maintaining static system prompts that go stale.
| What Exists | What's Missing |
|---|---|
| RAG (vector search over docs) | Learns passively from interactions over time |
| Static system prompts / .claude files | Understands relationships between facts |
| Conversation history (context window) | Prioritizes relevant memory by context |
| Manual memory edits (Claude's memory) | Decays outdated info automatically |
| .cursorrules / project configs | Works across agents, not locked to one platform |
Interfaces (MCP, REST, CLI, Dashboard) → Core Engine → SQLite Storage
A lightweight sidecar/daemon that watches agent interactions. Extracts facts, preferences, patterns, outcomes, and mistakes. Structures knowledge into typed categories: preference, fact, pattern, decision, outcome.
Not just vectors in a database — actual relationships. Facts have confidence scores, timestamps, and decay rates. Contradictions get flagged and resolved.
Before any agent interaction, the memory layer reads the current context and injects the most relevant memories. This isn't keyword search — it's contextual.
score = (similarity × 0.45) + (recency × 0.15) + (confidence × 0.15) + (access × 0.05) + (feedback × 0.10) + fts_boost
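The formula above can be sketched as a small scoring function. The weights come from the formula; the 0–1 normalization of each signal and the purely additive FTS boost are assumptions for illustration.

```typescript
// Sketch of the retrieval ranking. Weights are from the docs; how each
// signal is normalized to 0–1, and the additive fts boost, are assumptions.
interface ScoredMemory {
  similarity: number; // vector similarity, 0–1
  recency: number;    // normalized recency, 0–1
  confidence: number; // stored confidence, 0–1
  access: number;     // normalized access count, 0–1
  feedback: number;   // normalized feedback signal, 0–1
  ftsBoost: number;   // full-text-search match bonus
}

function rankScore(m: ScoredMemory): number {
  return (
    m.similarity * 0.45 +
    m.recency * 0.15 +
    m.confidence * 0.15 +
    m.access * 0.05 +
    m.feedback * 0.10 +
    m.ftsBoost
  );
}
```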
| Tool | Description |
|---|---|
| engram_remember | Store a memory with content, context, and optional tags |
| engram_recall | Get relevant memories based on a query |
| engram_forget | Remove a memory by ID |
| engram_status | Health check and stats |
| engram_feedback | Mark memories as helpful/unhelpful to adjust confidence |
| engram_context | Generate pre-formatted context blocks for AI system prompts |
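As a sketch, an MCP tools/call request for engram_remember might look like the following JSON-RPC message. The tool and parameter names come from the table above; the argument values are invented for illustration.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "engram_remember",
    "arguments": {
      "content": "Prefers pnpm over npm for installs",
      "context": "package management",
      "tags": ["preference", "tooling"]
    }
  }
}
```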
POST /api/memory — Store a memory { content, context, tags }
GET /api/memory/recall — ?query=...&limit=5
DELETE /api/memory/:id — Remove a memory
GET /api/status — Health check
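A minimal client for the endpoints above might look like this sketch. The paths and the { content, context, tags } payload shape come from the endpoint list; the base URL and port are assumptions, since the actual port isn't stated here.

```typescript
// Hypothetical REST client for Engram's memory endpoints.
const BASE = "http://localhost:3000"; // assumed port — adjust to your install

// Build the recall URL; kept as a separate pure function for clarity.
function recallUrl(base: string, query: string, limit = 5): string {
  return `${base}/api/memory/recall?query=${encodeURIComponent(query)}&limit=${limit}`;
}

// POST /api/memory — store a memory.
async function remember(content: string, context: string, tags: string[] = []) {
  const res = await fetch(`${BASE}/api/memory`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ content, context, tags }),
  });
  return res.json();
}

// GET /api/memory/recall — fetch the most relevant memories for a query.
async function recall(query: string, limit = 5) {
  const res = await fetch(recallUrl(BASE, query, limit));
  return res.json();
}
```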
```sql
CREATE TABLE memories (
  id            TEXT PRIMARY KEY,
  content       TEXT,
  entity        TEXT,
  category      TEXT,    -- preference|fact|pattern|decision|outcome
  confidence    REAL,    -- 0.0 to 1.0
  embedding     BLOB,
  source        TEXT,    -- which agent/session created this
  created_at    INTEGER,
  last_accessed INTEGER,
  access_count  INTEGER,
  decay_rate    REAL
);
```
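One way the confidence and decay_rate columns might combine at read time is exponential decay. Only the column names come from the schema above; the decay model itself is an assumed sketch, not Engram's documented behavior.

```typescript
// Illustrative decay model: effective confidence falls off exponentially
// with time since last access. The exponential form is an assumption.
function effectiveConfidence(
  confidence: number,   // stored 0.0–1.0 confidence
  decayRate: number,    // assumed per-day decay constant
  lastAccessed: number, // unix seconds (last_accessed column)
  now: number           // unix seconds
): number {
  const ageDays = (now - lastAccessed) / 86_400;
  return confidence * Math.exp(-decayRate * ageDays);
}
```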
The engram_feedback tool lets agents mark memories as helpful or unhelpful, automatically adjusting confidence scores over time.

| Agent | Detection | Config Location |
|---|---|---|
| Claude Code | ~/.claude/ directory | ~/.claude/mcp.json |
| Claude Desktop | App installed, process running | ~/Library/Application Support/Claude/claude_desktop_config.json |
| Cursor | .cursor/ or process running | ~/.cursor/mcp.json |
| Windsurf | Process or config directory | ~/.windsurf/mcp.json |
| n8n | Process on port 5678 | REST API integration |
| Ollama | Process on port 11434 | REST adapter |
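For the agents above that read an mcp.json-style file, wiring Engram in might look like this sketch. The config key structure follows the common MCP server format; the exact command and args are assumptions, so check the repo's setup docs for the real invocation.

```json
{
  "mcpServers": {
    "engram": {
      "command": "engram",
      "args": ["mcp"]
    }
  }
}
```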
~/.engram/memory.db

Download from GitHub Releases — macOS Apple Silicon (M1/M2/M3/M4)
```shell
npm install -g @hbarefoot/engram
engram start
```
Or, for non-Node users, as a single binary (built via pkg or bun build --compile).
RAG answers: "What documents are relevant to this query?"
Engram answers: "What does this specific person need me to know right now, given everything I've learned about how they work?"
It's the difference between a search engine and a colleague who's worked with you for years.
| Version | Status | Features |
|---|---|---|
| v1.0 | ✅ Shipped | CLI, MCP server, REST API, web dashboard, agent auto-discovery |
| v1.1 | ✅ Shipped | Deduplication, temporal queries, feedback loop, context export |
| v1.0 Desktop | ✅ Shipped | Native macOS app, menu bar, bundled runtime |
| v1.2 | 🔜 Next | Windows/Linux desktop, system tray polish, PWA support |
| v2.0 | 📋 Planned | Graph relationships, team shared memory, Obsidian import |