Context window optimization for AI coding agents: sandboxes tool output for a 98% context reduction, across 12 platforms.
Install in Claude Desktop

Method detected: NPX · context-mode

```json
{
  "mcpServers": {
    "context-mode": {
      "command": "npx",
      "args": ["-y", "context-mode"]
    }
  }
}
```

1. Copy the snippet above.
2. Paste it into `~/Library/Application Support/Claude/claude_desktop_config.json` (Mac) or `%APPDATA%\Claude\claude_desktop_config.json` (Windows).
3. Replace any `<placeholder>` values with your API keys or paths.
4. Restart Claude Desktop. The MCP server appears automatically.
# Context Mode
**The other half of the context problem.**
[npm](https://www.npmjs.com/package/context-mode) · [GitHub](https://github.com/mksglu/context-mode) · [License](LICENSE) · [Discord](https://discord.gg/DCN9jUgN5v) · [Hacker News](https://news.ycombinator.com/item?id=47193064)
<p align="center">
<sub>Used across teams at</sub>
<br><br>
<a href="#"><img src="https://img.shields.io/badge/Microsoft-141414?style=flat" alt="Microsoft" /></a>
<a href="#"><img src="https://img.shields.io/badge/Google-141414?style=flat&logo=google&logoColor=white" alt="Google" /></a>
<a href="#"><img src="https://img.shields.io/badge/Meta-141414?style=flat&logo=meta&logoColor=white" alt="Meta" /></a>
<a href="#"><img src="https://img.shields.io/badge/Amazon-141414?style=flat" alt="Amazon" /></a>
<a href="#"><img src="https://img.shields.io/badge/IBM-141414?style=flat" alt="IBM" /></a>
<a href="#"><img src="https://img.shields.io/badge/NVIDIA-141414?style=flat&logo=nvidia&logoColor=white" alt="NVIDIA" /></a>
<a href="#"><img src="https://img.shields.io/badge/ByteDance-141414?style=flat&logo=bytedance&logoColor=white" alt="ByteDance" /></a>
<a href="#"><img src="https://img.shields.io/badge/Stripe-141414?style=flat&logo=stripe&logoColor=white" alt="Stripe" /></a>
<a href="#"><img src="https://img.shields.io/badge/Datadog-141414?style=flat&logo=datadog&logoColor=white" alt="Datadog" /></a>
<a href="#"><img src="https://img.shields.io/badge/Salesforce-141414?style=flat" alt="Salesforce" /></a>
<a href="#"><img src="https://img.shields.io/badge/GitHub-141414?style=flat&logo=github&logoColor=white" alt="GitHub" /></a>
<a href="#"><img src="https://img.shields.io/badge/Red%20Hat-141414?style=flat&logo=redhat&logoColor=white" alt="Red Hat" /></a>
<a href="#"><img src="https://img.shields.io/badge/Supabase-141414?style=flat&logo=supabase&logoColor=white" alt="Supabase" /></a>
<a href="#"><img src="https://img.shields.io/badge/Canva-141414?style=flat" alt="Canva" /></a>
<a href="#"><img src="https://img.shields.io/badge/Notion-141414?style=flat&logo=notion&logoColor=white" alt="Notion" /></a>
<a href="#"><img src="https://img.shields.io/badge/Hasura-141414?style=flat&logo=hasura&logoColor=white" alt="Hasura" /></a>
<a href="#"><img src="https://img.shields.io/badge/Framer-141414?style=flat&logo=framer&logoColor=white" alt="Framer" /></a>
<a href="#"><img src="https://img.shields.io/badge/Cursor-141414?style=flat&logo=cursor&logoColor=white" alt="Cursor" /></a>
</p>
## The Problem
Every MCP tool call dumps raw data into your context window. A Playwright snapshot costs 56 KB. Twenty GitHub issues cost 59 KB. One access log — 45 KB. After 30 minutes, 40% of your context is gone. And when the agent compacts the conversation to free space, it forgets which files it was editing, what tasks are in progress, and what you last asked for.
Context Mode is an MCP server that solves all three sides of this problem:
1. **Context Saving** — Sandbox tools keep raw data out of the context window. 315 KB becomes 5.4 KB. 98% reduction.
2. **Session Continuity** — Every file edit, git operation, task, error, and user decision is tracked in SQLite. When the conversation compacts, context-mode doesn't dump this data back into context — it indexes events into FTS5 and retrieves only what's relevant via BM25 search. The model picks up exactly where you left off. If you don't `--continue`, previous session data is deleted immediately — a fresh session means a clean slate.
3. **Think in Code** — The LLM should program the analysis, not compute it. Instead of reading 50 files into context to count functions, the agent writes a script that does the counting and `console.log()`s only the result. One script replaces ten tool calls and saves 100x context. This is a mandatory paradigm across all 12 platforms: stop treating the LLM as a data processor, treat it as a code generator.
<a href="https://www.youtube.com/watch?v=QUHrntlfPo4">
<picture>
<img src="https://img.youtube.com/vi/QUHrntlfPo4/maxresdefault.jpg" alt="Watch context-mode demo on YouTube" width="100%">
</picture>
</a>
<p align="center"><a href="https://www.youtube.com/watch?v=QUHrntlfPo4"><img src="https://img.shields.io/badge/%E2%96%B6%EF%B8%8F_Watch_Demo-YouTube-FF0000?style=for-the-badge&logo=youtube&logoColor=white" alt="Watch on YouTube"></a></p>
## Install
Platforms are grouped by install complexity. Hook-capable platforms get automatic routing enforcement. Non-hook platforms need a one-time routing file copy.
<details open>
<summary><strong>Claude Code</strong> — plugin marketplace, fully automatic</summary>
**Prerequisites:** Claude Code v1.0.33+ (`claude --version`). If `/plugin` is not recognized, update first: `brew upgrade claude-code` or `npm update -g @anthropic-ai/claude-code`.
**Install:**
```bash
/plugin marketplace add mksglu/context-mode
/plugin install context-mode@context-mode
```
Restart Claude Code (or run `/reload-plugins`).
**Verify:**
```
/context-mode:ctx-doctor
```
All checks should show `[x]`. The doctor validates runtimes, hooks, FTS5, and plugin registration.
**Routing:** Automatic. The SessionStart hook injects routing instructions at runtime — no file is written to your project. The plugin registers all hooks (PreToolUse, PostToolUse, PreCompact, SessionStart) and 6 sandbox tools (`ctx_batch_execute`, `ctx_execute`, `ctx_execute_file`, `ctx_index`, `ctx_search`, `ctx_fetch_and_index`) plus meta-tools (`ctx_stats`, `ctx_doctor`, `ctx_upgrade`, `ctx_purge`, `ctx_insight`).
| Slash Command | What it does |
|---|---|
| `/context-mode:ctx-stats` | Context savings — per-tool breakdown, tokens consumed, savings ratio. |
| `/context-mode:ctx-doctor` | Diagnostics — runtimes, hooks, FTS5, plugin registration, versions. |
| `/context-mode:ctx-upgrade` | Pull latest, rebuild, migrate cache, fix hooks. |
| `/context-mode:ctx-purge` | Permanently delete all indexed content from the knowledge base. |
| `/context-mode:ctx-insight` | Personal analytics dashboard — 15+ metrics on tool usage, session activity, error rate, parallel work patterns, and mastery curve. Opens a local web UI. |
> **Note:** Slash commands are a Claude Code plugin feature. On other platforms, type `ctx stats`, `ctx doctor`, `ctx upgrade`, or `ctx insight` in the chat — the model calls the MCP tool automatically. See [Utility Commands](#utility-commands).
<details>
<summary>Alternative — MCP-only install (no hooks or slash commands)</summary>
```bash
claude mcp add context-mode -- npx -y context-mode
```
This gives you the 6 sandbox tools without automatic routing. The model can still use them — it just won't be nudged to prefer them over raw Bash/Read/WebFetch. Good for trying it out before committing to the full plugin.
</details>
</details>
<details>
<summary><strong>Gemini CLI</strong> — one config file, hooks included</summary>
**Prerequisites:** Node.js 18+, Gemini CLI installed.
**Install:**
1. Install context-mode globally:
```bash
npm install -g context-mode
```
2. Add the following to `~/.gemini/settings.json`. This single file registers the MCP server and all four hooks:
```json
{
"mcpServers": {
"context-mode": {
"command": "context-mode"
}
},
"hooks": {
"BeforeTool": [
{
"matcher": "run_shell_command|read_file|read_many_files|grep_search|search_file_content|web_fetch|activate_skill|mcp__plugin_context-mode",
"hooks": [{ "type": "command", "command": "context-mode hook gemini-cli beforetool" }]
}
],
"AfterTool": [
{
"matcher": "",
"hooks": [{ "type": "command", "command": "context-mode hook gemini-cli aftertool" }]
}
],
"PreCompress": [
{
"matcher": "",
"hooks": [{ "type": "command", "command": "context-mode hook gemini-cli precompress" }]
}
],
"SessionStart": [
{
"matcher": "",
"hooks": [{ "type": "command", "command": "context-mode hook gemini-cli sessionstart" }]
}
]
}
}
```
3. Restart Gemini CLI.
**Verify:**
```
/mcp list
```
You should see `context-mode: ... - Connected`.
**Routing:** Automatic. The SessionStart hook injects routing instructions at runtime — no `GEMINI.md` file is written to your project. All four hooks (BeforeTool, AfterTool, PreCompress, SessionStart) handle enforcement programmatically.
> **Why the BeforeTool matcher?** It targets only the tools that produce large output (`run_shell_command`, `read_file`, `read_many_files`, `grep_search`, `search_file_content`, `web_fetch`, `activate_skill`), so lightweight tool calls skip the hook entirely.
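Assuming the matcher string is evaluated as a regular expression against each tool name (that is how Claude Code hook matchers behave; treat this as an illustration rather than documented Gemini CLI semantics), the gating logic amounts to:

```javascript
// Illustration (assumed semantics): test the BeforeTool matcher string
// as a regular expression against the name of the tool about to run.
const matcher = new RegExp(
  "run_shell_command|read_file|read_many_files|grep_search|" +
    "search_file_content|web_fetch|activate_skill"
);

function shouldRunHook(toolName) {
  return matcher.test(toolName);
}
```

Tools outside the alternation, such as small file writes, never pay the hook's overhead.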