# Lerim
Your coding agents forget everything after each session. Lerim learns — across all of them.
Lerim is the continual learning and context graph layer for AI coding agents — it watches sessions, extracts structured knowledge, and builds a shared intelligence graph across agents, projects, and teams. The current runtime is PydanticAI-only across the sync, maintain, and ask flows.
## The problem
You spend 20 minutes explaining context to your coding agent. It writes great code. Next session? It's forgotten everything. Every decision, every pattern, every "we tried X and it didn't work" — gone.
And if you use multiple agents — Claude Code at the terminal, Cursor in the IDE, Codex for reviews — none of them know what the others learned. Your project knowledge is scattered across isolated sessions with no shared intelligence.
This is agent context amnesia, and it's one of the biggest productivity drains in AI-assisted development.
## The solution
Lerim solves this by:

- Watching your agent sessions across all supported coding agents
- Extracting decisions and learnings automatically using a PydanticAI extraction agent
- Storing everything as plain markdown files in your repo (`.lerim/`)
- Refining knowledge over time: merging duplicates, archiving stale entries, refreshing the memory index
- Unifying knowledge across all your agents via shared files under `.lerim/memory/`
- Answering questions about past context: `lerim ask "why did we choose Postgres?"`
No proprietary format. No database lock-in. Just markdown files that both humans and agents can read.
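For illustration, a memory file under `.lerim/memory/` might look like the sketch below. The front-matter field names are an assumption for this example, not Lerim's canonical schema:

```markdown
---
id: decision-postgres        # hypothetical field names
created: 2025-01-14
source: claude-code session
---

# Why we chose Postgres

We evaluated SQLite and Postgres for the job queue. SQLite hit lock
contention under concurrent writers, so we standardized on Postgres
with SKIP LOCKED for queue polling.
```

Because it is plain markdown, the same file is readable by you, greppable by tools, and ingestible by any agent.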
## Get started

- Quickstart: get from zero to first working command in under 5 minutes
- Installation: detailed installation instructions and prerequisites
- CLI Reference: complete command-line interface documentation
- How It Works: how Lerim works under the hood
## Key features

### Multi-agent support

Works with any coding agent that produces session traces.

### Plain markdown storage

No proprietary formats, just `.md` files in `.lerim/`.

### Automatic extraction

PydanticAI agents extract decisions and learnings from sessions.

### Continuous refinement

Merges duplicates, archives stale entries, and maintains `index.md`.

### Natural language queries

Ask questions about past context in plain English.

### Local-first

Runs entirely on your machine, with Docker or standalone.
## Supported agents
| Agent | Session Format | Status |
|---|---|---|
| Claude Code | JSONL traces | Supported |
| Codex CLI | JSONL traces | Supported |
| Cursor | SQLite (converted to JSONL) | Supported |
| OpenCode | SQLite (converted to JSONL) | Supported |
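All supported agents normalize to JSONL traces: one JSON object per line. A minimal sketch of parsing such a trace (the `role`/`content` field names are an assumption about the trace shape, not Lerim's actual schema):

```python
import json
from pathlib import Path

def read_trace(path: Path) -> list[dict]:
    """Parse a JSONL session trace: one JSON object per non-empty line."""
    events = []
    for line in path.read_text().splitlines():
        if line.strip():  # skip blank lines defensively
            events.append(json.loads(line))
    return events

# Example with illustrative field names
trace = Path("session.jsonl")
trace.write_text(
    '{"role": "user", "content": "add retry logic"}\n'
    '{"role": "assistant", "content": "done, with exponential backoff"}\n'
)
events = read_trace(trace)
print(len(events))  # → 2
```

The SQLite-backed agents differ only in a conversion step before this point; once traces are JSONL, downstream extraction is uniform.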
More agents are coming soon. PRs welcome! See the contributing guide to add support for your favorite agent.
## How it works
### Connect your agents
Link your coding agent platforms. Lerim auto-detects supported agents on your system.
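Auto-detection can be as simple as probing each agent's trace location on disk. A sketch under that assumption (the probed paths below are illustrative, not the paths Lerim actually checks):

```python
from pathlib import Path

# Hypothetical trace locations per agent (illustrative only)
AGENT_PATHS = {
    "claude-code": Path.home() / ".claude" / "projects",
    "codex": Path.home() / ".codex" / "sessions",
}

def detect_agents(paths: dict[str, Path] = AGENT_PATHS) -> list[str]:
    """Return the agents whose trace directory exists on this machine."""
    return [name for name, p in paths.items() if p.is_dir()]
```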
### Sync sessions
Lerim reads session transcripts and runs the PydanticAI extraction flow with the `[roles.agent]` model. The agent uses tool functions to read the trace, take notes, prune context, search existing memories, write or edit markdown, and save a session summary:
```mermaid
flowchart TB
    subgraph runtime_sync["Agent flow"]
        RT[LerimRuntime · run_extraction]
    end
    subgraph lm["LM"]
        L[roles.agent]
    end
    subgraph syncTools["Sync tools (7)"]
        t1["read · grep"]
        c1["note · prune"]
        wm["write · edit"]
        v1["verify_index"]
    end
    RT --> L
    RT --> t1
    RT --> c1
    RT --> wm
    RT --> v1
```
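The tool surface can be sketched as plain functions over shared session state. This is only an illustration of the shape of the note/prune/write tools, not Lerim's implementation; the class and attribute names are assumptions:

```python
from pathlib import Path

class SyncState:
    """Scratch state an extraction agent accumulates while reading a trace."""

    def __init__(self, memory_dir: Path):
        self.memory_dir = memory_dir
        self.notes: list[str] = []

    def note(self, text: str) -> None:
        """'note' tool: jot an observation while scanning the trace."""
        self.notes.append(text)

    def prune(self, keep_last: int) -> None:
        """'prune' tool: drop older notes to keep the context window small."""
        self.notes = self.notes[-keep_last:]

    def write(self, name: str, body: str) -> Path:
        """'write' tool: persist a learning as a markdown file."""
        self.memory_dir.mkdir(parents=True, exist_ok=True)
        path = self.memory_dir / f"{name}.md"
        path.write_text(body)
        return path
```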
### Maintain knowledge
Offline refinement merges duplicates, archives low-value entries, and consolidates related learnings. The maintain flow uses the same `[roles.agent]` model with maintain-only tools:
```mermaid
flowchart TB
    subgraph runtime_maintain["Agent flow"]
        RT_m[LerimRuntime · run_maintain]
    end
    subgraph maintainTools["Maintain tools (6)"]
        t2["read · scan"]
        wm2["write · edit"]
        ar[archive]
        v2[verify_index]
    end
    RT_m --> t2
    RT_m --> wm2
    RT_m --> ar
    RT_m --> v2
```
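As an illustration of one maintain step (not Lerim's actual merge logic), duplicates can be detected by normalized title and later copies moved to an archive folder:

```python
from pathlib import Path

def first_heading(path: Path) -> str:
    """Return a memory file's first '# ' heading, lowercased for comparison."""
    for line in path.read_text().splitlines():
        if line.startswith("# "):
            return line[2:].strip().lower()
    return path.stem.lower()

def merge_duplicates(memory_dir: Path, archive_dir: Path) -> int:
    """Keep the first file per heading; move later duplicates to the archive."""
    archive_dir.mkdir(parents=True, exist_ok=True)
    seen: dict[str, Path] = {}
    archived = 0
    for path in sorted(memory_dir.glob("*.md")):
        title = first_heading(path)
        if title in seen:
            path.rename(archive_dir / path.name)
            archived += 1
        else:
            seen[title] = path
    return archived
```

Archiving rather than deleting keeps the git history and the entries recoverable, in line with the plain-files storage model.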
### Query past context
Ask Lerim about any past decision or learning. Your agents can do this too.
## Dashboard

The dashboard UI is not released yet. The local daemon exposes a JSON API on `http://localhost:8765` for CLI usage.
## Quick install

Install Lerim, then follow the quickstart guide to get running in 5 minutes.
## Next steps

- Quickstart: install, configure, and run your first sync in 5 minutes
- Connecting agents: link your coding agent platforms for session ingestion
- Memory model: understand how memories are stored and structured
- Configuration: customize model providers, tracing, and more