# Fragments by E2B
This is an open-source version of apps like [Anthropic's Claude Artifacts](https://www.anthropic.com/news/claude-3-5-sonnet), Vercel [v0](https://v0.dev), or [GPT Engineer](https://gptengineer.app).
Powered by the [E2B SDK](https://github.com/e2b-dev/code-interpreter).
[→ Try on fragments.e2b.dev](https://fragments.e2b.dev)
## Features
- Built on Next.js 14 (App Router, Server Actions), shadcn/ui, TailwindCSS, and the Vercel AI SDK.
- Uses the [E2B SDK](https://github.com/e2b-dev/code-interpreter) by [E2B](https://e2b.dev) to securely execute code generated by AI.
- Streaming in the UI.
- Can install and use any package from npm and pip.
- Supported stacks ([add your own](#adding-custom-personas)):
- 🔸 Python interpreter
- 🔸 Next.js
- 🔸 Vue.js
- 🔸 Streamlit
- 🔸 Gradio
- Supported LLM Providers ([add your own](#adding-custom-llm-models)):
- 🔸 OpenAI
- 🔸 Anthropic
- 🔸 Google AI
- 🔸 Mistral
- 🔸 Groq
- 🔸 Fireworks
- 🔸 Together AI
- 🔸 Ollama
- Integrates with the [Morph](https://morphllm.com/) Apply model for faster, more accurate, and token-efficient code editing.
**Make sure to give us a star!**
<img width="165" alt="Screenshot 2024-04-20 at 22 13 32" src="https://github.com/mishushakov/llm-scraper/assets/10400064/11e2a79f-a835-48c4-9f85-5c104ca7bb49">
## Get started
### Prerequisites
- [git](https://git-scm.com)
- A recent version of [Node.js](https://nodejs.org) and the npm package manager
- [E2B API Key](https://e2b.dev)
- LLM Provider API Key
### 1. Clone the repository
In your terminal:
```
git clone https://github.com/e2b-dev/fragments.git
```
### 2. Install the dependencies
Enter the repository:
```
cd fragments
```
Run the following to install the required dependencies:
```
npm i
```
### 3. Set the environment variables
Create a `.env.local` file and set the following:
```sh
# Get your API key here - https://e2b.dev/
E2B_API_KEY="your-e2b-api-key"
# OpenAI API Key
OPENAI_API_KEY=
# Other providers
ANTHROPIC_API_KEY=
GROQ_API_KEY=
FIREWORKS_API_KEY=
TOGETHER_API_KEY=
GOOGLE_AI_API_KEY=
GOOGLE_VERTEX_CREDENTIALS=
MISTRAL_API_KEY=
XAI_API_KEY=
### Optional env vars
# (on by default) Get your MORPH key here - https://morphllm.com/dashboard/api-keys
MORPH_API_KEY=
# Domain of the site
NEXT_PUBLIC_SITE_URL=
# Rate limit
RATE_LIMIT_MAX_REQUESTS=
RATE_LIMIT_WINDOW=
# Vercel/Upstash KV (short URLs, rate limiting)
KV_REST_API_URL=
KV_REST_API_TOKEN=
# Supabase (auth)
SUPABASE_URL=
SUPABASE_ANON_KEY=
# PostHog (analytics)
NEXT_PUBLIC_POSTHOG_KEY=
NEXT_PUBLIC_POSTHOG_HOST=
### Disabling functionality (when uncommented)
# Disable API key and base URL input in the chat
# NEXT_PUBLIC_NO_API_KEY_INPUT=
# NEXT_PUBLIC_NO_BASE_URL_INPUT=
# Hide local models from the list of available models
# NEXT_PUBLIC_HIDE_LOCAL_MODELS=
```
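The rate-limit variables above arrive as plain strings from the environment. A minimal TypeScript sketch of parsing them with fallback defaults (the helper name and the default values are hypothetical, not from the repo):

```typescript
// Hypothetical helper (not part of the repo): parse the optional
// RATE_LIMIT_* env vars from .env.local, falling back to illustrative defaults.
function parseRateLimit(env: Record<string, string | undefined>) {
  const maxRequests = Number(env.RATE_LIMIT_MAX_REQUESTS ?? '10')
  const windowSeconds = Number(env.RATE_LIMIT_WINDOW ?? '60')
  if (!Number.isFinite(maxRequests) || !Number.isFinite(windowSeconds)) {
    throw new Error('RATE_LIMIT_* values must be numeric')
  }
  return { maxRequests, windowSeconds }
}
```

In a Next.js app these values would come from `process.env`; unset variables fall through to the defaults.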
### 4. Start the development server
```
npm run dev
```
### 5. Build the web app
```
npm run build
```
## Customize
### Adding custom personas
1. Make sure [E2B CLI](https://e2b.dev/docs/cli) is installed and you're logged in.
2. Add a new folder under [sandbox-templates/](sandbox-templates/)
3. Initialize a new template using E2B CLI:
```
e2b template init
```
This will create a new file called `e2b.Dockerfile`.
4. Adjust the `e2b.Dockerfile`
Here's an example Streamlit template:
```Dockerfile
# You can use most Debian-based base images
FROM python:3.12-slim
RUN pip3 install --no-cache-dir streamlit pandas numpy matplotlib requests seaborn plotly
# Copy the code to the container
WORKDIR /home/user
COPY . /home/user
```
5. Specify a custom start command in `e2b.toml`:
```toml
start_cmd = "cd /home/user && streamlit run app.py"
```
6. Deploy the template with the E2B CLI:
```
e2b template build --name <template-name>
```
After the build has finished, you should get the following message:
```
✅ Building sandbox template <template-id> <template-name> finished.
```
7. Open [lib/templates.json](lib/templates.json) in your code editor.
Add your new template to the list. Here's an example for Streamlit:
```json
"streamlit-developer": {
"name": "Streamlit developer",
"lib": [
"streamlit",
"pandas",
"numpy",
"matplotlib",
"requests",
"seaborn",
"plotly"
],
"file": "app.py",
"instructions": "A Streamlit app that reloads automatically.",
"port": 8501
},
```
Provide a template id (as the key), a name, the list of dependencies, the entrypoint file, and a port (optional; use `null` for templates without a server). You can also add instructions that will be given to the LLM.
8. Optionally, add a new logo under [public/thirdparty/templates](public/thirdparty/templates)
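The fields of a template entry can be described as a TypeScript shape; the interface below is inferred from the Streamlit example above, not taken from the repo's own type definitions:

```typescript
// Inferred shape of a templates.json entry (field comments are assumptions).
interface TemplateConfig {
  name: string         // display name shown in the UI
  lib: string[]        // packages preinstalled in the sandbox template
  file: string         // entrypoint the generated code is written to
  instructions: string // extra guidance passed to the LLM
  port: number | null  // forwarded port, or null for templates without a server
}

const streamlitDeveloper: TemplateConfig = {
  name: 'Streamlit developer',
  lib: ['streamlit', 'pandas', 'numpy', 'matplotlib', 'requests', 'seaborn', 'plotly'],
  file: 'app.py',
  instructions: 'A Streamlit app that reloads automatically.',
  port: 8501,
}
```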
### Adding custom LLM models
1. Open [lib/models.json](lib/models.json) in your code editor.
2. Add a new entry to the models list:
```json
{
"id": "mistral-large",
"name": "Mistral Large",
"provider": "Ollama",
"providerId": "ollama"
}
```
Here, `id` is the model ID, `name` is the model name (visible in the UI), `provider` is the provider name, and `providerId` is the provider tag (see [adding providers](#adding-custom-llm-providers) below).
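A small sketch of how such a list can be queried at runtime; the `models` array and the `findModel` helper here are illustrative, not the repo's own code:

```typescript
// Illustrative model-entry shape and lookup (not the repo's implementation).
interface ModelEntry {
  id: string         // model id sent to the provider
  name: string       // display name shown in the UI
  provider: string   // provider display name
  providerId: string // provider tag used to pick the client
}

const models: ModelEntry[] = [
  { id: 'mistral-large', name: 'Mistral Large', provider: 'Ollama', providerId: 'ollama' },
]

function findModel(id: string): ModelEntry | undefined {
  return models.find((m) => m.id === id)
}
```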
### Adding custom LLM providers
1. Open [lib/models.ts](lib/models.ts) in your code editor.
2. Add a new entry to the `providerConfigs` list:
Example for fireworks:
```ts
fireworks: () =>
  createOpenAI({
    apiKey: apiKey || process.env.FIREWORKS_API_KEY,
    baseURL: baseURL || 'https://api.fireworks.ai/inference/v1',
  })(modelNameString),
```
3. Optionally, adjust the default structured output mode in the `getDefaultMode` function:
```ts
if (providerId === 'fireworks') {
return 'json'
}
```
4. Optionally, add a new logo under [public/thirdparty/logos](public/thirdparty/logos)
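Put together, a `getDefaultMode`-style helper might look like the sketch below; the `'auto'` fallback value and the set of providers checked are assumptions for illustration, not the repo's full logic:

```typescript
// Sketch of a getDefaultMode-style helper: providers that need JSON-mode
// structured output are special-cased, everything else falls back to 'auto'.
// The mode names and provider list here are illustrative assumptions.
type Mode = 'auto' | 'json'

function getDefaultMode(providerId: string): Mode {
  if (providerId === 'fireworks') {
    return 'json'
  }
  return 'auto'
}
```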
## Contributing
As an open-source project, we welcome contributions from the community. If you encounter a bug or want to suggest an improvement, please feel free to open an issue or a pull request.