Local knowledge graph for Claude Code. Builds a persistent map of your codebase so Claude reads only what matters — 6.8× fewer tokens on reviews and up to 49× on daily coding tasks.
To configure the MCP server manually for Claude Desktop, add this to `~/Library/Application Support/Claude/claude_desktop_config.json` (Mac) or `%APPDATA%\Claude\claude_desktop_config.json` (Windows), replacing any `<placeholder>` values with your API keys or paths:

```json
{
  "mcpServers": {
    "code-review-graph": {
      "command": "python",
      "args": ["-m", "code-review-graph"]
    }
  }
}
```
<h1 align="center">code-review-graph</h1>

<p align="center">
  <strong>Stop burning tokens. Start reviewing smarter.</strong>
</p>

<p align="center">
  <a href="README.md">English</a> |
  <a href="README.zh-CN.md">简体中文</a> |
  <a href="README.ja-JP.md">日本語</a> |
  <a href="README.ko-KR.md">한국어</a> |
  <a href="README.hi-IN.md">हिन्दी</a>
</p>

<p align="center">
  <a href="https://pypi.org/project/code-review-graph/"><img src="https://img.shields.io/pypi/v/code-review-graph?style=flat-square&color=blue" alt="PyPI"></a>
  <a href="https://pepy.tech/project/code-review-graph"><img src="https://img.shields.io/pepy/dt/code-review-graph?style=flat-square" alt="Downloads"></a>
  <a href="https://github.com/tirth8205/code-review-graph/stargazers"><img src="https://img.shields.io/github/stars/tirth8205/code-review-graph?style=flat-square" alt="Stars"></a>
  <a href="https://opensource.org/licenses/MIT"><img src="https://img.shields.io/badge/License-MIT-yellow.svg?style=flat-square" alt="MIT Licence"></a>
  <a href="https://github.com/tirth8205/code-review-graph/actions/workflows/ci.yml"><img src="https://github.com/tirth8205/code-review-graph/actions/workflows/ci.yml/badge.svg" alt="CI"></a>
  <a href="https://www.python.org/"><img src="https://img.shields.io/badge/python-3.10%2B-blue.svg?style=flat-square" alt="Python 3.10+"></a>
  <a href="https://modelcontextprotocol.io/"><img src="https://img.shields.io/badge/MCP-compatible-green.svg?style=flat-square" alt="MCP"></a>
  <a href="https://code-review-graph.com"><img src="https://img.shields.io/badge/website-code--review--graph.com-blue?style=flat-square" alt="Website"></a>
  <a href="https://discord.gg/3p58KXqGFN"><img src="https://img.shields.io/badge/discord-join-5865F2?style=flat-square&logo=discord&logoColor=white" alt="Discord"></a>
</p>

<br>

AI coding tools re-read your entire codebase on every task. `code-review-graph` fixes that.
It builds a structural map of your code with [Tree-sitter](https://tree-sitter.github.io/tree-sitter/), tracks changes incrementally, and gives your AI assistant precise context via [MCP](https://modelcontextprotocol.io/) so it reads only what matters.

<p align="center">
  <img src="diagrams/diagram1_before_vs_after.png" alt="The Token Problem: 8.2x average token reduction across 6 real repositories" width="85%" />
</p>

---

## Quick Start

```bash
pip install code-review-graph   # or: pipx install code-review-graph
code-review-graph install       # auto-detects and configures all supported platforms
code-review-graph build         # parse your codebase
```

One command sets up everything. `install` detects which AI coding tools you have, writes the correct MCP configuration for each one, and injects graph-aware instructions into your platform rules. It auto-detects whether you installed via `uvx` or `pip`/`pipx` and generates the right config. Restart your editor or tool after installing.

<p align="center">
  <img src="diagrams/diagram8_supported_platforms.png" alt="One Install, Every Platform: auto-detects Codex, Claude Code, Cursor, Windsurf, Zed, Continue, OpenCode, Antigravity, and Kiro" width="85%" />
</p>

To target a specific platform:

```bash
code-review-graph install --platform codex        # configure only Codex
code-review-graph install --platform cursor       # configure only Cursor
code-review-graph install --platform claude-code  # configure only Claude Code
code-review-graph install --platform kiro         # configure only Kiro
```

Requires Python 3.10+. For the best experience, install [uv](https://docs.astral.sh/uv/) (the MCP config will use `uvx` if available, otherwise it falls back to the `code-review-graph` command directly).

Then open your project and ask your AI assistant:

```
Build the code review graph for this project
```

The initial build takes ~10 seconds for a 500-file project. After that, the graph updates automatically on every file edit and git commit.
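The platform auto-detection in `install` can be pictured as a scan for each tool's config directory. This is a minimal sketch of the idea only; the marker paths and the detection logic here are assumptions for illustration, not the installer's actual implementation:

```python
from pathlib import Path

# Hypothetical marker directories per platform; the real installer's
# paths and checks may differ.
PLATFORM_MARKERS = {
    "claude-code": ".claude",
    "cursor": ".cursor",
    "codex": ".codex",
    "windsurf": ".windsurf",
}

def detect_platforms(home: Path) -> list[str]:
    """Return platforms whose marker directory exists under `home`."""
    return sorted(
        name
        for name, marker in PLATFORM_MARKERS.items()
        if (home / marker).is_dir()
    )
```

Each detected platform would then get its own MCP config written in the format that tool expects.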
---

## How It Works

<p align="center">
  <img src="diagrams/diagram7_mcp_integration_flow.png" alt="How your AI assistant uses the graph: User asks for review, AI checks MCP tools, graph returns blast radius and risk scores, AI reads only what matters" width="80%" />
</p>

Your repository is parsed into an AST with Tree-sitter, stored as a graph of nodes (functions, classes, imports) and edges (calls, inheritance, test coverage), then queried at review time to compute the minimal set of files your AI assistant needs to read.

<p align="center">
  <img src="diagrams/diagram2_architecture_pipeline.png" alt="Architecture pipeline: Repository to Tree-sitter Parser to SQLite Graph to Blast Radius to Minimal Review Set" width="100%" />
</p>

### Blast-radius analysis

When a file changes, the graph traces every caller, dependent, and test that could be affected. This is the "blast radius" of the change. Your AI reads only these files instead of scanning the whole project.

<p align="center">
  <img src="diagrams/diagram3_blast_radius.png" alt="Blast radius visualization showing how a change to login() propagates to callers, dependents, and tests" width="70%" />
</p>

### Incremental updates in < 2 seconds

On every git commit or file save, a hook fires. The graph identifies changed files via SHA-256 hash checks, finds their dependents, and re-parses only what changed. A 2,900-file project re-indexes in under 2 seconds.

<p align="center">
  <img src="diagrams/diagram4_incremental_update.png" alt="Incremental update flow: git commit triggers diff, finds dependents, re-parses only 5 files while 2,910 are skipped" width="90%" />
</p>

### The monorepo problem, solved

Large monorepos are where token waste is most painful. The graph cuts through the noise — 27,700+ files excluded from review context, only ~15 files actually read.
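The two mechanisms above, SHA-256 change detection and reverse-edge traversal, can be sketched together. This is a simplified illustration of the idea, not the tool's actual implementation:

```python
import hashlib
from collections import deque

def sha256(text: str) -> str:
    return hashlib.sha256(text.encode()).hexdigest()

def changed_files(files: dict[str, str], index: dict[str, str]) -> set[str]:
    """Compare current content hashes against the stored index, keeping
    only files whose content actually changed since the last build."""
    return {path for path, text in files.items() if index.get(path) != sha256(text)}

def blast_radius(changed: set[str], reverse_deps: dict[str, set[str]]) -> set[str]:
    """Breadth-first walk over reverse dependency edges (callers, importers,
    tests) to collect every file a change could affect."""
    seen, queue = set(changed), deque(changed)
    while queue:
        node = queue.popleft()
        for dependent in reverse_deps.get(node, ()):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen
```

Only the files in the resulting set are re-parsed and surfaced to the assistant; everything else is skipped.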
<p align="center">
  <img src="diagrams/diagram6_monorepo_funnel.png" alt="Next.js monorepo: 27,732 files funnelled through code-review-graph down to ~15 files — 49x fewer tokens" width="80%" />
</p>

### 23 languages + Jupyter notebooks

<p align="center">
  <img src="diagrams/diagram9_language_coverage.png" alt="19 languages organized by category: Web, Backend, Systems, Mobile, Scripting, plus Jupyter/Databricks notebook support" width="90%" />
</p>

Full Tree-sitter grammar support for functions, classes, imports, call sites, inheritance, and test detection in every language. Includes Zig, PowerShell, Julia, and Svelte SFC support. Plus Jupyter/Databricks notebook parsing (`.ipynb`) with multi-language cell support (Python, R, SQL), and Perl XS files (`.xs`).

---

## Benchmarks

<p align="center">
  <img src="diagrams/diagram5_benchmark_board.png" alt="Benchmarks across real repos: 4.9x to 27.3x fewer tokens, higher review quality" width="85%" />
</p>

All numbers come from the automated evaluation runner against 6 real open-source repositories (13 commits total). Reproduce with `code-review-graph eval --all`. Raw data in [`evaluate/reports/summary.md`](evaluate/reports/summary.md).

<details>
<summary><strong>Token efficiency: 8.2x average reduction (naive vs graph)</strong></summary>
<br>

The graph replaces reading entire source files with a compact structural context covering blast radius, dependency chains, and test coverage gaps.
| Repo | Commits | Avg Naive Tokens | Avg Graph Tokens | Reduction |
|------|--------:|-----------------:|-----------------:|----------:|
| express | 2 | 693 | 983 | 0.7x |
| fastapi | 2 | 4,944 | 614 | 8.1x |
| flask | 2 | 44,751 | 4,252 | 9.1x |
| gin | 3 | 21,972 | 1,153 | 16.4x |
| httpx | 2 | 12,044 | 1,728 | 6.9x |
| nextjs | 2 | 9,882 | 1,249 | 8.0x |
| **Average** | **13** | | | **8.2x** |

**Why express shows <1x:** For single-file changes in small packages, the graph context (metadata, edges, review guidance) can exceed the raw file size. The graph approach pays off on multi-file changes where it prunes irrelevant code.

</details>

<details>
<summary><strong>Impact accuracy: 100% recall, 0.54 average F1</strong></summary>
<br>

The blast-radius analysis never misses an actually impacted file (perfect recall). It over-predicts in some cases, which is a conservative trade-off — better to flag too many files than miss a broken dependency.

| Repo | Commits | Avg F1 | Avg Precision | Recall |
|------|--------:|-------:|--------------:|-------:|
| express | 2 | 0.667 | 0.50 | 1.0 |
| fastapi | 2 | 0.584 | 0.42 | 1.0 |
| flask | 2 | 0.475 | 0.34 | 1.0 |
| gin | 3 | 0.429 | 0.29 | 1.0 |
| httpx | 2 | 0.762 | 0.63 | 1.0 |
| nextjs | 2 | 0.331 | 0.20 | 1.0 |
| **Average** | **13** | **0.54** | **0.38** | **1.0** |

</details>

<details>
<summary><strong>Build performance</strong></summary>
<br>

| Repo | Files | Nodes | Edges | Flow Detection | Search Latency |
|------|------:|------:|------:|---------------:|---------------:|
| express | 141 | 1,910 | 17,553 | 106ms | 0.7ms |
| fastapi | 1,122 | 6,285 | 27,117 | 128ms | 1.5ms |
| flask | 83 | 1,446 | 7,974 | 95ms | 0.7ms |
| gin | 99 | 1,286 | 16,762 | 111ms | 0.5ms |
| httpx | 60 | 1,253 | 7,896 | 96ms | 0.4ms |

</details>

<details>
<summary><strong>Limitations and known weaknesses</strong></summary>
<br>

- **Small single-file changes:** Graph context can exceed naive file reads for trivial edits (see express results above). The overhead is the structural metadata that enables multi-file analysis.
- **Search quality (MRR 0.35):** Keyword search finds the right result in the top 4 for most queries, but ranking needs improvement. Express queries return 0 hits due to module-pattern naming.
- **Flow detection (33% recall):** Only reliably detects entry points in Python repos (fastapi, httpx) where framework patterns are recognized. JavaScript and Go flow detection needs work.
- **Precision vs recall trade-off:** Impact analysis is deliberately conservative. It flags files that *might* be affected, which means some false positives in large dependency graphs.

</details>

---

## Features

| Feature | Details |
|---------|-