ClaudeWave
Alishahryar1

free-claude-code


Use claude-code for free in the terminal, the VSCode extension, or via Discord, like openclaw

Tools · 15.1k stars · 2.1k forks · Python · MIT · Updated today
ClaudeWave Trust Score
89/100
Trusted
Passed
  • Open-source license (MIT)
  • Actively maintained (<30d)
  • Healthy fork ratio
  • Clear description
Last scanned: 4/14/2026
Install in Claude Desktop
Method detected: pip / Python · uv
{
  "mcpServers": {
    "free-claude-code": {
      "command": "python",
      "args": ["-m", "uv"],
      "env": {
        "NVIDIA_NIM_API_KEY": "<nvidia_nim_api_key>",
        "OPENROUTER_API_KEY": "<openrouter_api_key>",
        "DEEPSEEK_API_KEY": "<deepseek_api_key>",
        "LLAMACPP_BASE_URL": "<llamacpp_base_url>",
        "OLLAMA_BASE_URL": "<ollama_base_url>",
        "ANTHROPIC_AUTH_TOKEN": "<anthropic_auth_token>"
      }
    }
  }
}
1. Copy the snippet above.
2. Paste into ~/Library/Application Support/Claude/claude_desktop_config.json (Mac) or %APPDATA%\Claude\claude_desktop_config.json (Windows).
3. Replace any <placeholder> values with your API keys or paths.
4. Restart Claude Desktop. The MCP server appears automatically.
💡 Install first: `pip install uv`
Detected environment variables
`NVIDIA_NIM_API_KEY` · `OPENROUTER_API_KEY` · `DEEPSEEK_API_KEY` · `LLAMACPP_BASE_URL` · `OLLAMA_BASE_URL` · `ANTHROPIC_AUTH_TOKEN`

<div align="center">

# 🤖 Free Claude Code

### Use Claude Code CLI & VSCode for free. No Anthropic API key required.

[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg?style=for-the-badge)](https://opensource.org/licenses/MIT)
[![Python 3.14](https://img.shields.io/badge/python-3.14-3776ab.svg?style=for-the-badge&logo=python&logoColor=white)](https://www.python.org/downloads/)
[![uv](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/uv/main/assets/badge/v0.json&style=for-the-badge)](https://github.com/astral-sh/uv)
[![Tested with Pytest](https://img.shields.io/badge/testing-Pytest-00c0ff.svg?style=for-the-badge)](https://github.com/Alishahryar1/free-claude-code/actions/workflows/tests.yml)
[![Type checking: Ty](https://img.shields.io/badge/type%20checking-ty-ffcc00.svg?style=for-the-badge)](https://pypi.org/project/ty/)
[![Code style: Ruff](https://img.shields.io/badge/code%20formatting-ruff-f5a623.svg?style=for-the-badge)](https://github.com/astral-sh/ruff)
[![Logging: Loguru](https://img.shields.io/badge/logging-loguru-4ecdc4.svg?style=for-the-badge)](https://github.com/Delgan/loguru)

A lightweight proxy that routes Claude Code's Anthropic API calls to **NVIDIA NIM** (40 req/min free), **OpenRouter** (hundreds of models), **DeepSeek** (direct Anthropic-compatible API), **LM Studio** (fully local), **llama.cpp** (local with Anthropic endpoints), or **Ollama** (fully local, native Anthropic Messages).

[Quick Start](#quick-start) · [Providers](#providers) · [Discord Bot](#discord-bot) · [Configuration](#configuration) · [Development](#development) · [Contributing](#contributing)

---

</div>

<div align="center">
  <img src="pic.png" alt="Free Claude Code in action" width="700">
  <p><em>Claude Code running via NVIDIA NIM, completely free</em></p>
</div>

## Features

| Feature                    | Description                                                                                     |
| -------------------------- | ----------------------------------------------------------------------------------------------- |
| **Zero Cost**              | 40 req/min free on NVIDIA NIM. Free models on OpenRouter. Fully local with LM Studio, Ollama, or llama.cpp |
| **Drop-in Replacement**    | Set 2 env vars. No modifications to Claude Code CLI or VSCode extension needed                  |
| **6 Providers**            | NVIDIA NIM, OpenRouter, DeepSeek, LM Studio (local), llama.cpp (`llama-server`), Ollama         |
| **Per-Model Mapping**      | Route Opus / Sonnet / Haiku to different models and providers. Mix providers freely             |
| **Thinking Token Support** | Parses `<think>` tags and `reasoning_content` into native Claude thinking blocks                |
| **Heuristic Tool Parser**  | Models outputting tool calls as text are auto-parsed into structured tool use                   |
| **Request Optimization**   | 5 categories of trivial API calls intercepted locally, saving quota and latency                 |
| **Smart Rate Limiting**    | Proactive rolling-window throttle + reactive 429 exponential backoff + optional concurrency cap |
| **Discord / Telegram Bot** | Remote autonomous coding with tree-based threading, session persistence, and live progress      |
| **Subagent Control**       | Task tool interception forces `run_in_background=False`. No runaway subagents                   |
| **Extensible**             | Clean `BaseProvider` and `MessagingPlatform` ABCs. Add new providers or platforms easily        |
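
The thinking-token handling in the table above can be illustrated with a minimal sketch. This is a hypothetical helper, not the project's actual parser: it separates `<think>…</think>` spans from a raw provider completion so the reasoning text can be emitted as a distinct thinking block.

```python
import re

def split_think_blocks(text: str) -> tuple[str, str]:
    """Split a raw completion into (thinking, answer).

    Illustrative sketch only: collects every <think>...</think> span
    as thinking content and returns the remaining text as the answer.
    """
    thinking_parts = re.findall(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    answer = re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()
    return "\n".join(p.strip() for p in thinking_parts), answer

raw = "<think>User wants a greeting.</think>Hello!"
thinking, answer = split_think_blocks(raw)
# thinking == "User wants a greeting.", answer == "Hello!"
```

Models that instead return a separate `reasoning_content` field (as the table mentions) would skip this text-level split entirely.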

## Quick Start

### Prerequisites

1. Get an API key (or use a local provider):
   - **NVIDIA NIM**: [build.nvidia.com/settings/api-keys](https://build.nvidia.com/settings/api-keys)
   - **OpenRouter**: [openrouter.ai/keys](https://openrouter.ai/keys)
   - **DeepSeek**: [platform.deepseek.com/api_keys](https://platform.deepseek.com/api_keys)
   - **LM Studio**: No API key needed. Run locally with [LM Studio](https://lmstudio.ai)
   - **llama.cpp**: No API key needed. Run `llama-server` locally.
   - **Ollama**: No API key needed. Run locally with [Ollama](https://ollama.com) (`ollama serve`).
2. Install [Claude Code](https://github.com/anthropics/claude-code)

### Install `uv`

```bash
# Recommended installer (works on macOS/Linux without relying on system pip)
curl -LsSf https://astral.sh/uv/install.sh | sh

# Keep uv current if it is already installed
uv self update

# This project requires Python 3.14
uv python install 3.14
```

PowerShell (Windows):

```powershell
# Recommended installer (avoids relying on system pip)
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"

# Keep uv current if it is already installed
uv self update

# This project requires Python 3.14
uv python install 3.14
```

`pip install uv` can fail on Homebrew-managed Python with `externally-managed-environment` (PEP 668), so prefer the official installer above.

### Clone & Configure

```bash
git clone https://github.com/Alishahryar1/free-claude-code.git
cd free-claude-code
cp .env.example .env
```

Choose your provider and edit `.env`:

<details>
<summary><b>NVIDIA NIM</b> (40 req/min free, recommended)</summary>

```dotenv
NVIDIA_NIM_API_KEY="nvapi-your-key-here"

MODEL_OPUS=
MODEL_SONNET=
MODEL_HAIKU=
MODEL="nvidia_nim/z-ai/glm4.7"                     # fallback

# Per-Claude-model switches for provider reasoning requests and Claude thinking blocks.
# Blank per-model switches inherit ENABLE_MODEL_THINKING.
ENABLE_OPUS_THINKING=
ENABLE_SONNET_THINKING=
ENABLE_HAIKU_THINKING=
ENABLE_MODEL_THINKING=true
```

</details>

<details>
<summary><b>OpenRouter</b> (hundreds of models)</summary>

```dotenv
OPENROUTER_API_KEY="sk-or-your-key-here"

MODEL_OPUS="open_router/deepseek/deepseek-r1-0528:free"
MODEL_SONNET="open_router/openai/gpt-oss-120b:free"
MODEL_HAIKU="open_router/stepfun/step-3.5-flash:free"
MODEL="open_router/stepfun/step-3.5-flash:free"     # fallback
```

</details>

<details>
<summary><b>DeepSeek</b> (direct API)</summary>

```dotenv
DEEPSEEK_API_KEY="your-deepseek-key-here"

MODEL_OPUS="deepseek/deepseek-reasoner"
MODEL_SONNET="deepseek/deepseek-chat"
MODEL_HAIKU="deepseek/deepseek-chat"
MODEL="deepseek/deepseek-chat"                      # fallback
```

</details>

<details>
<summary><b>LM Studio</b> (fully local, no API key)</summary>

```dotenv
MODEL_OPUS="lmstudio/unsloth/MiniMax-M2.5-GGUF"
MODEL_SONNET="lmstudio/unsloth/Qwen3.5-35B-A3B-GGUF"
MODEL_HAIKU="lmstudio/unsloth/GLM-4.7-Flash-GGUF"
MODEL="lmstudio/unsloth/GLM-4.7-Flash-GGUF"         # fallback
```

</details>

<details>
<summary><b>llama.cpp</b> (fully local, no API key)</summary>

```dotenv
LLAMACPP_BASE_URL="http://localhost:8080/v1"

MODEL_OPUS="llamacpp/local-model"
MODEL_SONNET="llamacpp/local-model"
MODEL_HAIKU="llamacpp/local-model"
MODEL="llamacpp/local-model"
```

</details>

<details>
<summary><b>Ollama</b> (fully local, no API key)</summary>

```dotenv
OLLAMA_BASE_URL="http://localhost:11434"

MODEL_OPUS="ollama/llama3.1"
MODEL_SONNET="ollama/llama3.1"
MODEL_HAIKU="ollama/llama3.1"
MODEL="ollama/llama3.1"                             # fallback
```

Install: [ollama.com](https://ollama.com). Pull a model (`ollama pull llama3.1`) and keep the server running (`ollama serve` or the desktop app). Use the same model tag in `MODEL*` that appears in `ollama list` (for example `ollama/llama3.1:8b`).

</details>

<details>
<summary><b>Mix providers</b></summary>

Each `MODEL_*` variable can use a different provider. `MODEL` is the fallback for unrecognized Claude models.

```dotenv
NVIDIA_NIM_API_KEY="nvapi-your-key-here"
OPENROUTER_API_KEY="sk-or-your-key-here"

MODEL_OPUS="nvidia_nim/moonshotai/kimi-k2.5"
MODEL_SONNET="open_router/deepseek/deepseek-r1-0528:free"
MODEL_HAIKU="lmstudio/unsloth/GLM-4.7-Flash-GGUF"
MODEL="nvidia_nim/z-ai/glm4.7"                      # fallback
```

</details>
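
As a rough illustration of the naming convention used in the examples above (an assumption drawn from the `MODEL*` values, not taken from the project source), each value is a provider prefix followed by a provider-specific model id:

```python
def parse_model_spec(spec: str) -> tuple[str, str]:
    """Split 'provider/model-id' into its two parts.

    Assumed convention: the text before the first '/' names the
    provider; everything after it is the provider's model identifier,
    which may itself contain slashes.
    """
    provider, _, model_id = spec.partition("/")
    if not model_id:
        raise ValueError(f"expected 'provider/model', got {spec!r}")
    return provider, model_id

parse_model_spec("nvidia_nim/moonshotai/kimi-k2.5")
# → ("nvidia_nim", "moonshotai/kimi-k2.5")
```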

> Migration: `NIM_ENABLE_THINKING` and `ENABLE_THINKING` were removed in this release. Use `ENABLE_MODEL_THINKING` as the fallback switch, with optional `ENABLE_OPUS_THINKING`, `ENABLE_SONNET_THINKING`, and `ENABLE_HAIKU_THINKING` overrides.
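
The inheritance rule for the thinking switches can be sketched as follows (a hypothetical helper; the proxy's real resolution logic may differ): a blank per-model switch falls back to `ENABLE_MODEL_THINKING`.

```python
import os

def thinking_enabled(claude_model: str) -> bool:
    """Resolve the thinking switch for 'opus', 'sonnet', or 'haiku'.

    A blank or unset ENABLE_<MODEL>_THINKING inherits the global
    ENABLE_MODEL_THINKING fallback, matching the migration note above.
    """
    specific = os.environ.get(f"ENABLE_{claude_model.upper()}_THINKING", "")
    value = specific or os.environ.get("ENABLE_MODEL_THINKING", "")
    return value.strip().lower() == "true"

os.environ["ENABLE_MODEL_THINKING"] = "true"
os.environ["ENABLE_HAIKU_THINKING"] = "false"
thinking_enabled("opus")   # → True  (inherits the fallback)
thinking_enabled("haiku")  # → False (explicit override)
```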

<details>
<summary><b>Optional Authentication</b> (restrict access to your proxy)</summary>

Set `ANTHROPIC_AUTH_TOKEN` in `.env` to require clients to authenticate:

```dotenv
ANTHROPIC_AUTH_TOKEN="your-secret-token-here"
```

**How it works:**
- If `ANTHROPIC_AUTH_TOKEN` is empty (default), no authentication is required (backward compatible)
- If set, clients must provide the same token via the `ANTHROPIC_AUTH_TOKEN` header
- The `claude-pick` script automatically reads the token from `.env` if configured

**Example usage:**
```bash
# With authentication
ANTHROPIC_AUTH_TOKEN="your-secret-token-here" \
ANTHROPIC_BASE_URL="http://localhost:8082" claude

# claude-pick automatically uses the configured token
claude-pick
```

Use this feature if:
- Running the proxy on a public network
- Sharing the server with others but restricting access
- Wanting an additional layer of security

</details>
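
The behavior described above can be sketched as a small check function (illustrative only; the proxy's actual middleware may be implemented differently):

```python
import os

def authorize(headers: dict[str, str]) -> bool:
    """Illustrative sketch of the token check, not the real middleware.

    If ANTHROPIC_AUTH_TOKEN is unset or empty, every request passes
    (the backward-compatible default); otherwise the client must send
    the matching token in the ANTHROPIC_AUTH_TOKEN header.
    """
    expected = os.environ.get("ANTHROPIC_AUTH_TOKEN", "")
    if not expected:
        return True
    return headers.get("ANTHROPIC_AUTH_TOKEN", "") == expected

os.environ["ANTHROPIC_AUTH_TOKEN"] = "your-secret-token-here"
authorize({"ANTHROPIC_AUTH_TOKEN": "your-secret-token-here"})  # → True
authorize({})                                                  # → False
```

A production check would typically use a constant-time comparison such as `hmac.compare_digest` rather than `==`.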

### Run It

**Terminal 1:** Start the proxy server:

```bash
uv run uvicorn server:app --host 0.0.0.0 --port 8082
```

**Terminal 2:** Run Claude Code:

Point `ANTHROPIC_BASE_URL` at the proxy root URL, not `http://localhost:8082/v1`.

#### PowerShell
```powershell
$env:ANTHROPIC_AUTH_TOKEN="freecc"; $env:ANTHROPIC_BASE_URL="http://localhost:8082"; claude
```
#### Bash
```bash
ANTHROPIC_AUTH_TOKEN="freecc" ANTHROPIC_BASE_URL="http://localhost:8082" claude
```

That's it! Claude Code now uses your configured provider for free.

<details>
<summary><b>VSCode Extension Setup</b></summary>

1. Start the proxy server (same as above).
2. Open Settings (`Ctrl + ,`) and search for `claudeCode.environmentVariables`.
3. Click **Edit in settings.json** and add:

```json
"claudeCode.environmentVariables": [
  { "name": "ANTHROPIC_BASE_URL", "value": "http://localhost:8082" },
  { "name": "ANTHROPIC_AUTH_TOKEN", "value": "freecc" }
]
```

4. Reload the extension.

What people ask about free-claude-code

What is Alishahryar1/free-claude-code?


Alishahryar1/free-claude-code is a tool for the Claude AI ecosystem: use claude-code for free in the terminal, the VSCode extension, or via Discord, like openclaw. It has 15.1k GitHub stars and was last updated today.

How do I install free-claude-code?


You can install free-claude-code by cloning the repository (https://github.com/Alishahryar1/free-claude-code) or following the README instructions on GitHub. ClaudeWave also provides quick install blocks on this page.

Is Alishahryar1/free-claude-code safe to use?


Our security agent has analyzed Alishahryar1/free-claude-code and assigned a Trust Score of 89/100 (tier: Trusted). See the full breakdown of passed checks and flags on this page.

Who maintains Alishahryar1/free-claude-code?


Alishahryar1/free-claude-code is maintained by Alishahryar1. The last recorded GitHub activity is from today, with 47 open issues.

Are there alternatives to free-claude-code?


Yes. On ClaudeWave you can browse similar tools at /categories/tools, sorted by popularity or recent activity.


Maintain this repo? Add a badge to your README

Drop the badge into your GitHub README to show it's tracked on ClaudeWave. Each badge links back to this page and reflects the live Trust Score.

Featured on ClaudeWave — Alishahryar1/free-claude-code

Markdown:

[![Featured on ClaudeWave](https://claudewave.com/api/badge/alishahryar1-free-claude-code)](https://claudewave.com/repo/alishahryar1-free-claude-code)

HTML:

<a href="https://claudewave.com/repo/alishahryar1-free-claude-code"><img src="https://claudewave.com/api/badge/alishahryar1-free-claude-code" alt="Featured on ClaudeWave — Alishahryar1/free-claude-code" width="320" height="64" /></a>

More Tools

anthropics
claude-code
2d ago

Claude Code is an agentic coding tool that lives in your terminal, understands your codebase, and helps you code faster by executing routine tasks, explaining complex code, and handling git workflows - all through natural language commands.

118.4k stars · 19.7k forks · Shell
Tools
forrestchang
andrej-karpathy-skills
7d ago

A single CLAUDE.md file to improve Claude Code behavior, derived from Andrej Karpathy's observations on LLM coding pitfalls.

92.5k stars · 8.9k forks
Tools
nextlevelbuilder
ui-ux-pro-max-skill
24d ago

An AI skill that provides design intelligence for building professional UI/UX across multiple platforms.

71.1k stars · 7.3k forks · Python
Tools · ai-skills · antigravity
gsd-build
get-shit-done
today

A light-weight and powerful meta-prompting, context engineering and spec-driven development system for Claude Code by TÂCHES.

57.7k stars · 4.9k forks · JavaScript
Tools · claude-code · context-engineering
JuliusBrussee
caveman
9d ago

🪨 why use many token when few token do trick — Claude Code skill that cuts 65% of tokens by talking like caveman

47.6k stars · 2.5k forks · Python
Tools · ai · anthropic
jeecgboot
JeecgBoot
2d ago

An AI-driven low-code platform with dual "zero-code" and "code generation" modes: zero-code mode builds a system from a single sentence, while code-generation mode automatically outputs front-end and back-end code plus table-creation SQL that runs as generated. The platform ships with an AI chat assistant, large language models, a knowledge base, AI flow orchestration, MCP, and a plugin system; it is compatible with mainstream LLMs and supports generating flowcharts, designing forms, and chat-driven business operations from a single sentence, eliminating 80% of the repetitive work in Java projects while staying flexible.

46k stars · 16k forks · Java
Tools · activiti · agent