Your agent needs to remember what it learned
File-based memory system for AI coding agents.
Automatic consolidation. 9:1 compression. Zero infrastructure.
Memory that grows with your project
Agents start fresh every session. You spend the first 10 minutes re-explaining context the agent already learned yesterday. AGENTS.md files help, but they can't hold everything.
agent-context gives your agents persistent memory that survives sessions. Append-only daily logs capture raw signal. Periodic consolidation compresses them into curated topic files. A small index keeps the most relevant memories injected on every turn.
It's git for agent memory. File-based, version-controlled, human-readable. No vector databases, no embeddings, no API keys. Just markdown files that your agent reads and writes.
How it works
Daily logs capture everything. Consolidation compresses. Index injects the essentials.
Daily Logs
memory/YYYY-MM-DD.md
Raw, append-only logs. Your agent writes what it learns each session. Verbatim, timestamped, never edited.
```
## 2026-04-12

08:30 — Auth refactor
exec('pnpm prisma migrate')
Learned: edge middleware can't access DB directly

14:15 — Test fixes
Fixed flaky timeout in auth.spec.ts
```
Topic Files
memory/*.md
Curated knowledge by topic. Consolidation merges daily logs into persistent memories. De-duplicates. Converts dates.
```
## auth.md

Edge middleware constraints
- JWT-only checks
- No Prisma access
- Set on 2026-02-07

## tests.md

Flaky timeouts
autowait in auth.spec.ts
```
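The merge step can be pictured in a few lines. A minimal sketch of what consolidation does — merge daily-log lines into topic files and skip duplicates. The `consolidate_day` name and the keyword routing heuristic are ours for illustration; the shipped `consolidate` is richer:

```python
import re
from pathlib import Path

def consolidate_day(daily_log: Path, memory_dir: Path) -> None:
    """Merge one daily log's entries into per-topic files, skipping duplicates."""
    for line in daily_log.read_text().splitlines():
        line = line.strip()
        if not line:
            continue
        # Hypothetical routing: first keyword match picks the topic file.
        topic = "auth" if re.search(r"auth|jwt|middleware", line, re.I) else "misc"
        topic_file = memory_dir / f"{topic}.md"
        existing = topic_file.read_text() if topic_file.exists() else ""
        if line not in existing:  # naive de-duplication
            with topic_file.open("a") as f:
                f.write(line + "\n")
```

Running it twice over the same log is a no-op, which is what makes append-only daily logs safe inputs.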
MEMORY.md
Always injected
Pointer index to topic files. Max 200 lines / 25KB. Your agent reads this every turn. Links to deeper docs.
```
## Index

Auth patterns: memory/auth.md
Test fixes: memory/tests.md
API gotchas: memory/api.md

## Quick Facts

Edge = JWT only
Prisma = server actions
```
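The 200-line / 25 KB cap is easy to enforce mechanically. A hedged sketch of such a guard — the function name is ours, not part of the shipped CLI:

```python
from pathlib import Path

MAX_LINES = 200
MAX_BYTES = 25 * 1024  # 25 KB

def index_within_budget(index_file: Path) -> bool:
    """Return True if MEMORY.md fits the always-injected budget."""
    text = index_file.read_text(encoding="utf-8")
    return (
        len(text.splitlines()) <= MAX_LINES
        and len(text.encode("utf-8")) <= MAX_BYTES
    )
```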
Memory lifecycle
Compression: ~9:1 (107 KB → 11.6 KB in production)
Simple Python API
Three methods. File-based. No magic.
Append
Write to daily log. Append-only.
```python
from pathlib import Path
from agent_context import Memory

memory = Memory(
    agent_id="rusty",
    workspace=Path(".")
)
memory.append(
    "Learned: edge middleware"
    " can't access Prisma"
)
```
Search
Keyword search across all memories.
```python
results = memory.search(
    query="prisma",
    limit=5
)
for entry in results:
    print(entry.content)
    print(entry.source_file)
```
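Because memories are plain markdown files, keyword search needs nothing beyond a scan. A minimal sketch of what `search` could do under the hood — the `Entry` shape mirrors the fields used above, but the implementation here is our assumption:

```python
from dataclasses import dataclass
from pathlib import Path

@dataclass
class Entry:
    content: str
    source_file: str

def search_memories(memory_dir: Path, query: str, limit: int = 5) -> list[Entry]:
    """Case-insensitive substring match over every memory file."""
    hits: list[Entry] = []
    for md in sorted(memory_dir.glob("*.md")):
        for line in md.read_text().splitlines():
            if query.lower() in line.lower():
                hits.append(Entry(content=line.strip(), source_file=md.name))
                if len(hits) >= limit:
                    return hits
    return hits
```

No embeddings, no index to rebuild: the files are the database.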
Consolidate v0.2.0
Compress daily logs into topics.
```python
stats = memory.consolidate(
    days=7,
    llm_consolidate=True
)
print(f"Compression: {stats.compression_ratio:.1f}:1")
print(f"Duration: {stats.duration_seconds:.2f}s")
```
Automatic consolidation gates
Consolidation runs automatically when both gates pass.
A lock mechanism prevents concurrent runs. Typical compression: 5-9:1.
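The lock can be as simple as an atomically created lock file. A sketch assuming single-host consolidation — the lock path and helper name are ours, not the shipped implementation:

```python
import os
from contextlib import contextmanager
from pathlib import Path

@contextmanager
def consolidation_lock(workspace: Path):
    """Guard consolidation with an O_EXCL lock file; raises if a run is active."""
    lock = workspace / "memory" / ".consolidate.lock"
    lock.parent.mkdir(parents=True, exist_ok=True)
    # O_EXCL makes creation atomic: a second caller gets FileExistsError.
    fd = os.open(lock, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    try:
        os.write(fd, str(os.getpid()).encode())
        yield
    finally:
        os.close(fd)
        lock.unlink()
```

A second consolidation attempt while the lock is held fails fast instead of corrupting topic files.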
Built on research, not vibes
Every design decision comes from real data about how agents consume context.
Instructions per session
HumanLayer found that frontier LLMs reliably follow about 150-200 instructions per session. Claude Code's system prompt already uses ~50. That's why AGENTS.md stays under 120 lines.
HumanLayer research →

Pass rate with passive context
In Vercel's evals, embedded docs passed 100% of the time. Skills the agent had to decide to read passed 53%, the same as having nothing. Passive context beats active retrieval.
Vercel evals →

Repos analyzed
GitHub's analysis found that the files that work are specific: executable commands up front, code examples over prose, explicit boundaries. Generic "helpful assistant" prose does nothing.
GitHub analysis →

When subagents enter the picture
Subagents don't inherit conversation history. AGENTS.md is the only shared brain preventing five parallel agents from making conflicting decisions.
Claude subagents docs →

What subagents actually see
AGENTS.md is the only context that flows to every parallel agent.
Works with every agent
AGENTS.md is the cross-platform standard. One file, every tool.
| Agent | Setup |
|---|---|
| OpenClaw | Clone into skills/ directory — reads AGENTS.md natively |
| Cursor | Reads AGENTS.md natively. Nothing to configure. |
| GitHub Copilot & Copilot CLI | Reads AGENTS.md natively. Nothing to configure. |
| Codex | Reads AGENTS.md natively. Nothing to configure. |
| Windsurf | Reads AGENTS.md natively. Nothing to configure. |
| Claude Code | CLAUDE.md symlink → AGENTS.md (created by init) |
Auto-Reflect
Your agent observes, reflects, and learns — automatically
Continuous observation
Your agent logs what happens during sessions — tool usage patterns, corrections, preferences — building a living record in .agents.local.md.
Session-end reflection
At the end of each session, your agent reviews its observations, identifies recurring patterns, and surfaces them as candidates for promotion into AGENTS.md.
.agents.local.md
```
## Scratchpad (auto-updated)

- User prefers pnpm over npm (observed 4 sessions)
- Always run lint before committing (corrected twice)
- Test files use .spec.ts, not .test.ts

## Ready to Promote

| Pattern                    | Seen | Action  |
|----------------------------|------|---------|
| pnpm over npm              | 4×   | suggest |
| lint before commit         | 3×   | suggest |
| .spec.ts naming convention | 2×   | observe |
```
- `suggest` (default): asks before promoting
- `auto`: promotes when confident
- `off`: no reflection, manual only
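The reflection modes above reduce to a small decision rule over how often a pattern recurs. An illustrative sketch — the threshold of 3 is our assumption, not the tool's actual value:

```python
def promotion_action(times_seen: int, mode: str = "suggest") -> str:
    """Map a pattern's observation count and reflection mode to an action."""
    if mode == "off":
        return "none"
    if times_seen >= 3:  # recurred enough to trust (illustrative threshold)
        return "promote" if mode == "auto" else "suggest"
    return "observe"
```

This is why the scratchpad tracks a "Seen" count: the count, not the content, decides when a pattern graduates to AGENTS.md.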
Inspired by Mastra's Observational Memory research
Quick start
One command. Under a minute.
1. GitHub template (recommended)

```bash
# Create from template
gh repo create my-project --template AndreaGriffiths11/agent-context-system

# Initialize
cd my-project
./agent-context init
```
2. Existing project

```bash
# Clone and copy files (-r because agent_docs/ is a directory)
git clone https://github.com/AndreaGriffiths11/agent-context-system.git /tmp/acs
cp -r /tmp/acs/AGENTS.md /tmp/acs/agent-context /tmp/acs/agent_docs your-project/

# Initialize
cd your-project
./agent-context init
```
3. OpenClaw users

```bash
# Clone into your skills directory
git clone https://github.com/AndreaGriffiths11/agent-context-system.git skills/agent-context-system
```

OpenClaw picks it up as a workspace skill on the next session.
4. Copilot CLI skill

```bash
# Install as a Copilot CLI skill
npx skills add AndreaGriffiths11/agent-context-system
bash .agents/skills/agent-context-system/scripts/init-agent-context.sh
```

After setup
1. Edit AGENTS.md — Fill in your stack, commands, patterns. The template has examples to show the format.
2. Edit agent_docs/ — Replace the examples with your project's architecture, conventions, gotchas.
3. Work — The agent reads both files, does the task, updates the scratchpad.
4. Promote — Run `agent-context promote` to find recurring patterns ready for AGENTS.md.
FAQ
Common questions about how this fits your workflow.
I use OpenClaw. Do I need this?
If OpenClaw is your only coding agent, you probably don't. OpenClaw reads AGENTS.md natively and handles session context on its own.
But if you also code with Claude Code, Cursor, Copilot, or Windsurf, this is where it gets useful. Agent-context gives you one shared context file that works across every agent. Write your project rules once in AGENTS.md, and every tool your team uses picks them up — no per-tool configuration, no drift between agents.
How is this different from built-in memory (Claude auto memory, Copilot Memory)?
Built-in memory learns about you — your preferences, patterns, style. It's personal and tool-specific. Agent-context handles what every agent needs to know about your project — stack, conventions, architecture decisions. It's shared, version-controlled, and reviewable in PRs like any other code.
Why is AGENTS.md capped at 120 lines?
Research from HumanLayer found frontier LLMs reliably follow about 150-200 instructions per session. Your agent's system prompt already uses ~50. That leaves room for about 120 lines of project context before instruction-following quality degrades. Deeper docs go in agent_docs/ and are loaded on demand.
Does this require any infrastructure or plugins?
No. It's two markdown files and a bash CLI. No background processes, no API keys, no dependencies beyond bash. The convention lives inside the files themselves.
Stop starting from zero
Two files. Persistent memory. Every agent.
