The power of Claude Code / GeminiCLI / CodexCLI + [Gemini / OpenAI / OpenRouter / Azure / Grok / Ollama / Custom Model / All Of The Above] working as one.
Add the server to your client's MCP config — `~/Library/Application Support/Claude/claude_desktop_config.json` (Mac) or `%APPDATA%\Claude\claude_desktop_config.json` (Windows) — replacing `<placeholder>` values with your API keys or paths:

```json
{
  "mcpServers": {
    "pal": {
      "command": "uvx",
      "args": ["--from", "git+https://github.com/BeehiveInnovations/pal-mcp-server.git", "pal-mcp-server"],
      "env": {
        "GEMINI_API_KEY": "<placeholder>"
      }
    }
  }
}
```
# PAL MCP: Many Workflows. One Context.

<div align="center">

<em>Your AI's PAL – a Provider Abstraction Layer</em><br />
<sub><a href="docs/name-change.md">Formerly known as Zen MCP</a></sub>

[PAL in action](https://github.com/user-attachments/assets/0d26061e-5f21-4ab1-b7d0-f883ddc2c3da)

👉 **[Watch more examples](#-watch-tools-in-action)**

### Your CLI + Multiple Models = Your AI Dev Team

**Use the 🤖 CLI you love:** [Claude Code](https://www.anthropic.com/claude-code) · [Gemini CLI](https://github.com/google-gemini/gemini-cli) · [Codex CLI](https://github.com/openai/codex) · [Qwen Code CLI](https://qwenlm.github.io/qwen-code-docs/) · [Cursor](https://cursor.com) · _and more_

**With multiple models within a single prompt:** Gemini · OpenAI · Anthropic · Grok · Azure · Ollama · OpenRouter · DIAL · On-Device Model

</div>

---

## 🆕 Now with CLI-to-CLI Bridge

The new **[`clink`](docs/tools/clink.md)** (CLI + Link) tool connects external AI CLIs directly into your workflow:

- **Connect external CLIs** like [Gemini CLI](https://github.com/google-gemini/gemini-cli), [Codex CLI](https://github.com/openai/codex), and [Claude Code](https://www.anthropic.com/claude-code) directly into your workflow
- **CLI Subagents** - Launch isolated CLI instances from _within_ your current CLI! Claude Code can spawn Codex subagents, Codex can spawn Gemini CLI subagents, etc. Offload heavy tasks (code reviews, bug hunting) to fresh contexts while your main session's context window remains unpolluted. Each subagent returns only final results.
- **Context Isolation** - Run separate investigations without polluting your primary workspace
- **Role Specialization** - Spawn `planner`, `codereviewer`, or custom role agents with specialized system prompts
- **Full CLI Capabilities** - Web search, file inspection, MCP tool access, latest documentation lookups
- **Seamless Continuity** - Sub-CLIs participate as first-class members with full conversation context between tools

```bash
# Codex spawns Codex subagent for isolated code review in fresh context
clink with codex codereviewer to audit auth module for security issues
# Subagent reviews in isolation, returns final report without cluttering your
# context as codex reads each file and walks the directory structure

# Consensus from different AI models → Implementation handoff with full
# context preservation between tools
Use consensus with gpt-5 and gemini-pro to decide: dark mode or offline support next
Continue with clink gemini - implement the recommended feature
# Gemini receives full debate context and starts coding immediately
```

👉 **[Learn more about clink](docs/tools/clink.md)**

---

## Why PAL MCP?

**Why rely on one AI model when you can orchestrate them all?**

A Model Context Protocol server that supercharges tools like [Claude Code](https://www.anthropic.com/claude-code), [Codex CLI](https://developers.openai.com/codex/cli), and IDE clients such as [Cursor](https://cursor.com) or the [Claude Dev VS Code extension](https://marketplace.visualstudio.com/items?itemName=Anthropic.claude-vscode). **PAL MCP connects your favorite AI tool to multiple AI models** for enhanced code analysis, problem-solving, and collaborative development.

### True AI Collaboration with Conversation Continuity

PAL supports **conversation threading** so your CLI can **discuss ideas with multiple AI models, exchange reasoning, get second opinions, and even run collaborative debates between models** to help you reach deeper insights and better solutions.
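Conceptually, conversation threading works by handing each tool call a continuation id and replaying every prior turn to the next model, whichever provider produced it. The sketch below is a toy illustration with hypothetical names (`THREADS`, `call_model`), not the server's actual code:

```python
import uuid

# continuation_id -> list of (model, text) turns shared across providers
THREADS: dict[str, list[tuple[str, str]]] = {}

def start_thread() -> str:
    """Create a new conversation thread and return its continuation id."""
    cid = uuid.uuid4().hex
    THREADS[cid] = []
    return cid

def call_model(model: str, prompt: str, continuation_id: str) -> str:
    """Route a prompt to a model, replaying every prior turn as context."""
    history = THREADS[continuation_id]
    context = "\n".join(f"[{m}] {text}" for m, text in history)
    # A real server would send context + prompt to the chosen provider;
    # this stand-in just reports how much shared history the model saw.
    reply = f"{model} saw {len(history)} prior turn(s)"
    history.append((model, reply))
    return reply

cid = start_thread()
call_model("gemini-pro", "review the auth module", cid)
reply = call_model("o3", "second opinion on the findings", cid)
# o3's prompt was built on top of gemini-pro's earlier turn
```

Because every model appends to the same thread, "Gemini remembers what O3 said" falls out of the shared history rather than any provider-side memory.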
Your CLI always stays in control but gets perspectives from the best AI for each subtask. Context carries forward seamlessly across tools and models, enabling complex workflows like: code reviews with multiple models → automated planning → implementation → pre-commit validation.

> **You're in control.** Your CLI of choice orchestrates the AI team, but you decide the workflow. Craft powerful prompts that bring in Gemini Pro, GPT-5, Flash, or local offline models exactly when needed.

<details>
<summary><b>Reasons to Use PAL MCP</b></summary>

A typical workflow with Claude Code as an example:

1. **Multi-Model Orchestration** - Claude coordinates with Gemini Pro, O3, GPT-5, and 50+ other models to get the best analysis for each task
2. **Context Revival Magic** - Even after Claude's context resets, continue conversations seamlessly by having other models "remind" Claude of the discussion
3. **Guided Workflows** - Enforces systematic investigation phases that prevent rushed analysis and ensure thorough code examination
4. **Extended Context Windows** - Break Claude's limits by delegating to Gemini (1M tokens) or O3 (200K tokens) for massive codebases
5. **True Conversation Continuity** - Full context flows across tools and models - Gemini remembers what O3 said 10 steps ago
6. **Model-Specific Strengths** - Extended thinking with Gemini Pro, blazing speed with Flash, strong reasoning with O3, privacy with local Ollama
7. **Professional Code Reviews** - Multi-pass analysis with severity levels, actionable feedback, and consensus from multiple AI experts
8. **Smart Debugging Assistant** - Systematic root cause analysis with hypothesis tracking and confidence levels
9. **Automatic Model Selection** - Claude intelligently picks the right model for each subtask (or you can specify)
10. **Vision Capabilities** - Analyze screenshots, diagrams, and visual content with vision-enabled models
11. **Local Model Support** - Run Llama, Mistral, or other models locally for complete privacy and zero API costs
12. **Bypass MCP Token Limits** - Automatically works around MCP's 25K limit for large prompts and responses

**The Killer Feature:** When Claude's context resets, just ask to "continue with O3" - the other model's response magically revives Claude's understanding without re-ingesting documents!

#### Example: Multi-Model Code Review Workflow

1. `Perform a codereview using gemini pro and o3 and use planner to generate a detailed plan, implement the fixes and do a final precommit check by continuing from the previous codereview`
2. This triggers a [`codereview`](docs/tools/codereview.md) workflow where Claude walks the code, looking for all kinds of issues
3. After multiple passes, collects relevant code and makes note of issues along the way
4. Maintains a `confidence` level between `exploring`, `low`, `medium`, `high` and `certain` to track how confidently it's been able to find and identify issues
5. Generates a detailed list of critical -> low issues
6. Shares the relevant files, findings, etc. with **Gemini Pro** to perform a deep dive for a second [`codereview`](docs/tools/codereview.md)
7. Comes back with a response and next does the same with O3, adding to the prompt if a new discovery comes to light
8. When done, Claude takes in all the feedback and combines a single list of all critical -> low issues, including good patterns in your code. The final list includes new findings or revisions in case Claude misunderstood or missed something crucial and one of the other models pointed this out
9. It then uses the [`planner`](docs/tools/planner.md) workflow to break the work down into simpler steps if a major refactor is required
10. Claude then performs the actual work of fixing highlighted issues
11. When done, Claude returns to Gemini Pro for a [`precommit`](docs/tools/precommit.md) review

All within a single conversation thread!
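Step 8's "combine a single list" can be pictured as a simple merge: deduplicate findings across reviewers, keep the highest severity any model assigned, and sort critical -> low. This is a hypothetical sketch of that idea, not the server's actual implementation:

```python
# Severity scale used for ordering and upgrades (index 0 = most severe)
SEVERITY_ORDER = ["critical", "high", "medium", "low"]

def merge_findings(*reviews: list[dict]) -> list[dict]:
    """Deduplicate issues by description, keeping the highest severity seen."""
    merged: dict[str, dict] = {}
    for review in reviews:
        for issue in review:
            key = issue["description"]
            seen = merged.get(key)
            if seen is None or (SEVERITY_ORDER.index(issue["severity"])
                                < SEVERITY_ORDER.index(seen["severity"])):
                merged[key] = issue
    # Final list ordered critical -> low
    return sorted(merged.values(),
                  key=lambda i: SEVERITY_ORDER.index(i["severity"]))

claude_pass = [{"description": "SQL injection in login", "severity": "high"}]
gemini_pass = [{"description": "SQL injection in login", "severity": "critical"},
               {"description": "unused import", "severity": "low"}]
final = merge_findings(claude_pass, gemini_pass)
# The duplicate finding is kept once, upgraded to "critical"
```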
Gemini Pro in step 11 _knows_ what was recommended by O3 in step 7, and takes that context and review into consideration to aid its final pre-commit review.

**Think of it as Claude Code _for_ Claude Code.**

This MCP isn't magic. It's just **super-glue**.

> **Remember:** Claude stays in full control — but **YOU** call the shots.
> PAL is designed to have Claude engage other models only when needed — and to follow through with meaningful back-and-forth.
> **You're** the one who crafts the powerful prompt that makes Claude bring in Gemini, Flash, O3 — or fly solo.
> You're the guide. The prompter. The puppeteer.
>
> #### You are the AI - **Actually Intelligent**.

</details>

#### Recommended AI Stack

<details>
<summary>For Claude Code Users</summary>

For best results when using [Claude Code](https://claude.ai/code):

- **Sonnet 4.5** - All agentic work and orchestration
- **Gemini 3.0 Pro** OR **GPT-5.2 / Pro** - Deep thinking, additional code reviews, debugging and validations, pre-commit analysis

</details>

<details>
<summary>For Codex Users</summary>

For best results when using [Codex CLI](https://developers.openai.com/codex/cli):

- **GPT-5.2 Codex Medium** - All agentic work and orchestration
- **Gemini 3.0 Pro** OR **GPT-5.2-Pro** - Deep thinking, additional code reviews, debugging and validations, pre-commit analysis

</details>

## Quick Start (5 minutes)

**Prerequisites:** Python 3.10+, Git, [uv installed](https://docs.astral.sh/uv/getting-started/installation/)

**1. Get API Keys** (choose one or more):

- **[OpenRouter](https://openrouter.ai/)** - Access multiple models with one API
- **[Gemini](https://makersuite.google.com/app/apikey)** - Google's latest models
- **[OpenAI](https://platform.openai.com/api-keys)** - O3, GPT-5 series
- **[Azure OpenAI](https://learn.microsoft.com/azure/ai-services/openai/)** - Enterprise deployments of GPT-4o, GPT-4.1, GPT-5 family
- **[X.AI](https://console.x.ai/)** - Grok models
- **[DIAL](https://dialx.ai/)** - Vendor-agnostic model access
- **[Ollama](https://ollama.ai/)** - Local models (free)

**2. Install** (choose one):

**Option A: Clone and Automatic Setup** (recommended)

```bash
git clone https://github.com/BeehiveInnovations/pal-mcp-server.git
cd pal-mcp-server
# Handles everything: setup, config, API keys from system environment.
# Auto-configures Claude Desktop, Claude Code, Gemini CLI, Codex CLI, Qwen CLI
# Enable / disable additional settings in .env
./run-server.sh
```

**Option B: Instant Setup with [uvx](https://docs.astral.sh/uv/)**
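The `.env` file mentioned in Option A typically holds the provider keys from step 1. The variable names below are assumptions based on common provider conventions; verify them against the repository's `.env.example` before use:

```bash
# Hypothetical .env sketch - check exact variable names in the repo docs
GEMINI_API_KEY=your-gemini-key
OPENAI_API_KEY=your-openai-key
OPENROUTER_API_KEY=your-openrouter-key
XAI_API_KEY=your-grok-key
# Local models via Ollama usually need a URL rather than a key
CUSTOM_API_URL=http://localhost:11434
```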
## What people ask about pal-mcp-server

**What is BeehiveInnovations/pal-mcp-server?**

BeehiveInnovations/pal-mcp-server is an MCP server for the Claude AI ecosystem: the power of Claude Code / Gemini CLI / Codex CLI + [Gemini / OpenAI / OpenRouter / Azure / Grok / Ollama / Custom Model / All Of The Above] working as one. It has 11.5k stars on GitHub and was last updated 4mo ago.

**How do you install pal-mcp-server?**

You can install pal-mcp-server by cloning the repository (https://github.com/BeehiveInnovations/pal-mcp-server) or by following the README instructions on GitHub. ClaudeWave also offers quick-install blocks on this page.

**Is BeehiveInnovations/pal-mcp-server safe to use?**

Our security agent has analyzed BeehiveInnovations/pal-mcp-server and assigned it a Trust Score of 77/100 (tier: Trusted). Review the full breakdown of passed checks and flags on this page.

**Who maintains BeehiveInnovations/pal-mcp-server?**

BeehiveInnovations/pal-mcp-server is maintained by BeehiveInnovations. The last recorded GitHub activity was 4mo ago, with 122 open issues.

**Are there alternatives to pal-mcp-server?**

Yes. On ClaudeWave you can browse similar MCP servers at /categories/mcp, sorted by popularity or recent activity.