04 | Introducing Lineage MCP

Lineage MCP is a Model Context Protocol server that gives your AI the context it needs when it needs it.

Matt Peters

I've been using Claude Code as my primary AI coding assistant for months now. It's fast, it understands my codebase, and most importantly - it knows how to load context properly. When it reads a file, it automatically discovers and loads any AGENTS.md or CLAUDE.md files from parent directories. This is huge.

Then I tried OpenCode. OpenCode is an open-source terminal-based AI coding agent. It uses the same underlying models - Claude Sonnet, Claude Opus - so in theory, it should produce similar results. But the quality of the generated code was noticeably worse. Same prompts, same codebase, significantly worse outputs.

I was baffled. How could the same model perform so differently?

The Problem: Invisible Context

I gave up on OpenCode and went back to Claude Code, but I kept pondering what the issue could be. Then I saw it in the Claude Code app:

● Read(src\api\MyExampleFile.cs)
  ⎿  Read 25 lines
  ⎿  Loaded src\api\CLAUDE.md

It turns out Claude Code does something quietly brilliant that I'd taken for granted: automatic instruction file discovery.

When Claude Code reads any file in your workspace, it walks up the directory tree and includes all CLAUDE.md files it finds along the way. You write documentation once, and the AI picks it up automatically - every time, without you asking.

This way, context is only loaded once the AI actually reaches into a given folder. OpenCode doesn't do this. It reads the file you asked for, nothing more. Which means all those carefully crafted CLAUDE.md files I'd written - the ones explaining architecture patterns, naming conventions, and domain concepts - were completely invisible. The AI was flying blind.

The difference in output quality was dramatic.

Building a Solution: Lineage MCP

I've been a C# developer for most of my career. Python was always something I "meant to learn" but never got around to properly. But for the last month I've been teaching myself Python, and the problem bothered me enough that I built my own solution, Lineage MCP, with my friend Claude.

It's taken me a couple of days but I'm happy with the result.

Lineage is a Model Context Protocol server that provides file operations. When you read any file, Lineage walks up the directory tree and appends all AGENTS.md, CLAUDE.md, and similar files to the response. The AI gets the full context without asking.

I also built in file tracking: Lineage monitors every file the AI reads. If a file changes externally (by you or another process), the next Lineage operation reports exactly which lines changed and when. This eliminates the need to manually ask the AI to re-read updated files.
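The idea behind the change tracking is simple enough to sketch in a few lines. This is a simplified illustration, not Lineage's actual code - `FileTracker` and its methods are hypothetical names, and the real version handles more edge cases:

```python
import difflib
from pathlib import Path


class FileTracker:
    """Remember what the AI last saw of each file; report diffs later."""

    def __init__(self):
        self._seen = {}  # path -> (mtime at read time, lines at read time)

    def record(self, path):
        """Snapshot a file's mtime and content when the AI reads it."""
        p = Path(path)
        self._seen[str(p)] = (p.stat().st_mtime, p.read_text().splitlines())

    def changes(self, path):
        """Return a unified diff if the file changed externally, else None."""
        p = Path(path)
        key = str(p)
        if key not in self._seen:
            return None  # never read, nothing to compare against
        old_mtime, old_lines = self._seen[key]
        if p.stat().st_mtime == old_mtime:
            return None  # unchanged since the last read
        new_lines = p.read_text().splitlines()
        return list(difflib.unified_diff(old_lines, new_lines, lineterm=""))
```

On the next tool call, the server checks `changes()` for every tracked file and prepends any diffs to the response, so the AI sees edits it didn't make.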

How Instruction Discovery Works

Say you're reading a file deep in your project:

your-project/
├── AGENTS.md              ← Included automatically  
├── src/
│   ├── CLAUDE.md          ← Included automatically
│   └── app/
│       └── handler.py     ← File you're reading

When you read handler.py, Lineage appends:

--- content of handler.py ---

[CLAUDE.md from src]
# Instructions for this module
...

[AGENTS.md from .]
# Project-wide instructions
...

Every read operation includes the full context chain. No manual specification required.
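The walk itself is straightforward. Here's a minimal sketch of the idea - `discover_instructions` and `INSTRUCTION_FILES` are illustrative names, not Lineage's actual API:

```python
from pathlib import Path

# Illustrative defaults; checked in priority order, first match per folder wins.
INSTRUCTION_FILES = ["AGENTS.md", "CLAUDE.md"]


def discover_instructions(file_path, workspace_root):
    """Walk from the file's folder up to the workspace root, collecting
    the highest-priority instruction file found in each folder."""
    file_path = Path(file_path).resolve()
    root = Path(workspace_root).resolve()
    found = []
    for folder in [file_path.parent, *file_path.parent.parents]:
        for name in INSTRUCTION_FILES:
            candidate = folder / name
            if candidate.is_file():
                found.append(candidate)
                break  # first match per folder wins
        if folder == root:
            break  # never walk above the workspace
    return found
```

For the `handler.py` example above this returns `src/CLAUDE.md` first (nearest folder), then the root `AGENTS.md` - the same order the server appends them in.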

The Session Management Problem

There's one wrinkle with MCP servers: they're stateful, but LLM conversations aren't.

LLM systems periodically "compact" or summarize conversation history to stay within context limits. When this happens, the detailed content from instruction files gets compressed or lost. Lineage's server-side cache still thinks these files were "already provided" and won't re-send them.

The solution is simple: after summarization, clear the cache. But how? The AI harness (OpenCode, VS Code) sends no signal that a summarization has occurred.

I added a new_session argument to all of the Lineage tools and told the LLM in the tool description to use new_session=True after a summarization.

This did not work. After a lot of testing, I found that AIs are so focused on solving the task at hand that they never self-reflect enough to realise a summarization has happened.

After a little pondering and some back-and-forth with my back-seat driver (Claude), we came up with a prompt that does work. The tool description now reads:

    🛑 STOP AND CHECK: Can you see the FULL output of a previous lineage tool
    call you made in this conversation (not a summary)?
      → NO or UNSURE: new_session=True is REQUIRED
      → YES, I see complete previous output: new_session=False is fine

With this tool description, an AI that can't see the full output of a previous Lineage call (or isn't sure) reliably falls back to new_session=True.
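Server-side, the logic behind new_session boils down to something like this - a simplified sketch, with `SessionCache` as an illustrative name rather than Lineage's actual class:

```python
class SessionCache:
    """Track which instruction files were already sent this session,
    so they aren't repeated on every read."""

    def __init__(self):
        self._sent = set()

    def files_to_send(self, instruction_files, new_session=False):
        # After context compaction the AI has lost the earlier content,
        # so new_session=True wipes the cache and everything is re-sent.
        if new_session:
            self._sent.clear()
        fresh = [f for f in instruction_files if f not in self._sent]
        self._sent.update(fresh)
        return fresh
```

The first read in a session sends everything; subsequent reads of the same folder send nothing extra; a read with new_session=True starts over.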

Configuration

You can customize which instruction files Lineage looks for via appsettings.json:

```json
{
  "instructionFileNames": [
    "AGENTS.md",
    "CLAUDE.md",
    "GEMINI.md",
    ".cursorrules"
  ]
}
```

Files are checked in priority order - first match per folder wins.
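For illustration, reading that setting amounts to a few lines like the following (a simplified sketch - the defaults and function name are illustrative, not Lineage's actual code):

```python
import json
from pathlib import Path

# Illustrative fallback when no config file is present.
DEFAULT_NAMES = ["AGENTS.md", "CLAUDE.md"]


def load_instruction_names(config_path="appsettings.json"):
    """Return the configured instruction file names, in priority order."""
    p = Path(config_path)
    if not p.is_file():
        return DEFAULT_NAMES
    config = json.loads(p.read_text())
    return config.get("instructionFileNames", DEFAULT_NAMES)
```

The list order is the priority order: within any one folder, the first name that exists wins and the rest are skipped.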

Setting Up Lineage MCP

Python Installation

```bash
git clone https://github.com/imattpeters/lineage-mcp.git
cd lineage-mcp
pip install -r requirements.txt
```

MCP Client Configuration (Python)

```json
{
  "mcpServers": {
    "lineage": {
      "command": "python",
      "args": ["/path/to/lineage-mcp/lineage.py", "/your/workspace"]
    }
  }
}
```

Docker Configuration

```json
{
  "mcpServers": {
    "lineage": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "-v", "/your/workspace:/data",
        "lineage-mcp"
      ]
    }
  }
}
```

Available Tools

| Tool | Purpose |
| --- | --- |
| list | List directory contents with metadata |
| search | Search files by glob pattern |
| read | Read file with change tracking and instruction discovery |
| write | Write content to file |
| edit | Replace exact strings in a file |
| delete | Delete file or empty directory |
| clear | Reset all session caches |

Each tool supports new_session=True for cache reset after context compaction.

Using Lineage

This works best when every read/edit/write operation uses Lineage. If you can, disable your agent's built-in read/write tools. If you can't, add an instruction in your main prompt to only use Lineage tools.

A Python Learning Experience

This only took a few evenings. That's the speedup you get when the problem fits what LLMs do best: translating clear intent into working code, especially in languages you're still learning. Without AI assistance, it would have taken significantly longer and involved plenty of StackOverflow diving.

I needed this tool, so I built it. If I can build something useful in a language I'm still learning, maybe you can too. Sometimes the best way to learn is to build something you actually need.

What's Next

Lineage MCP is open source and available on GitHub. If you're using an AI coding assistant other than Claude Code and have started using AGENTS.md files - or are thinking of starting - I'd suggest giving it a try.

Building this has opened my eyes to how easy it is to build an MCP tool, and I've already got ideas for others - watch this space.

What problems have you encountered with AI context management? What solutions have you tried?

#AI #MCP #DeveloperTools #LLM #OpenSource #Python

Want to continue the conversation? Find me on LinkedIn or Twitter.