[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-vndee--llm-sandbox":3,"tool-vndee--llm-sandbox":65},[4,17,27,35,48,57],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",150037,2,"2026-04-10T23:33:47",[13,14,15],"开发框架","Agent","语言模型","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,3,"2026-04-06T11:19:32",[15,26,14,13],"图像",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":10,"last_commit_at":33,"category_tags":34,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 
都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":10,"last_commit_at":41,"category_tags":42,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",85092,"2026-04-10T11:13:16",[26,43,44,45,14,46,15,13,47],"数据工具","视频","插件","其他","音频",{"id":49,"name":50,"github_repo":51,"description_zh":52,"stars":53,"difficulty_score":54,"last_commit_at":55,"category_tags":56,"status":16},5784,"funNLP","fighting41love\u002FfunNLP","funNLP 是一个专为中文自然语言处理（NLP）打造的超级资源库，被誉为\"NLP 民工的乐园”。它并非单一的软件工具，而是一个汇集了海量开源项目、数据集、预训练模型和实用代码的综合性平台。\n\n面对中文 NLP 领域资源分散、入门门槛高以及特定场景数据匮乏的痛点，funNLP 提供了“一站式”解决方案。这里不仅涵盖了分词、命名实体识别、情感分析、文本摘要等基础任务的标准工具，还独特地收录了丰富的垂直领域资源，如法律、医疗、金融行业的专用词库与数据集，甚至包含古诗词生成、歌词创作等趣味应用。其核心亮点在于极高的全面性与实用性，从基础的字典词典到前沿的 BERT、GPT-2 模型代码，再到高质量的标注数据和竞赛方案，应有尽有。\n\n无论是刚刚踏入 NLP 领域的学生、需要快速验证想法的算法工程师，还是从事人工智能研究的学者，都能在这里找到急需的“武器弹药”。对于开发者而言，它能大幅减少寻找数据和复现模型的时间；对于研究者，它提供了丰富的基准测试资源和前沿技术参考。funNLP 以开放共享的精神，极大地降低了中文自然语言处理的开发与研究成本，是中文 AI 
社区不可或缺的宝藏仓库。",79857,1,"2026-04-08T20:11:31",[15,43,46],{"id":58,"name":59,"github_repo":60,"description_zh":61,"stars":62,"difficulty_score":54,"last_commit_at":63,"category_tags":64,"status":16},6590,"gpt4all","nomic-ai\u002Fgpt4all","GPT4All 是一款让普通电脑也能轻松运行大型语言模型（LLM）的开源工具。它的核心目标是打破算力壁垒，让用户无需依赖昂贵的显卡（GPU）或云端 API，即可在普通的笔记本电脑和台式机上私密、离线地部署和使用大模型。\n\n对于担心数据隐私、希望完全掌控本地数据的企业用户、研究人员以及技术爱好者来说，GPT4All 提供了理想的解决方案。它解决了传统大模型必须联网调用或需要高端硬件才能运行的痛点，让日常设备也能成为强大的 AI 助手。无论是希望构建本地知识库的开发者，还是单纯想体验私有化 AI 聊天的普通用户，都能从中受益。\n\n技术上，GPT4All 基于高效的 `llama.cpp` 后端，支持多种主流模型架构（包括最新的 DeepSeek R1 蒸馏模型），并采用 GGUF 格式优化推理速度。它不仅提供界面友好的桌面客户端，支持 Windows、macOS 和 Linux 等多平台一键安装，还为开发者提供了便捷的 Python 库，可轻松集成到 LangChain 等生态中。通过简单的下载和配置，用户即可立即开始探索本地大模型的无限可能。",77307,"2026-04-11T06:52:37",[15,13],{"id":66,"github_repo":67,"name":68,"description_en":69,"description_zh":70,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":79,"owner_company":80,"owner_location":81,"owner_email":82,"owner_twitter":83,"owner_website":84,"owner_url":85,"languages":86,"stars":103,"forks":104,"last_commit_at":105,"license":106,"difficulty_score":23,"env_os":107,"env_gpu":108,"env_ram":108,"env_deps":109,"category_tags":116,"github_topics":117,"view_count":23,"oss_zip_url":121,"oss_zip_packed_at":121,"status":16,"created_at":122,"updated_at":123,"faqs":124,"releases":154},1354,"vndee\u002Fllm-sandbox","llm-sandbox","Lightweight and portable LLM sandbox runtime (code interpreter) Python library.","llm-sandbox 是一个轻量级 Python 库，用来“安全地跑大模型写的代码”。它把 AI 生成的脚本放进隔离的容器里执行，既防止恶意代码破坏主机，又能自动装依赖、限制 CPU\u002F内存、控制网络，还能把生成的图表、文件一键取出来。支持 Docker、Kubernetes、Podman 三种后端，Python、JavaScript、Java、C++、Go、R 等语言开箱即用，并可与 LangChain、OpenAI 等框架无缝衔接。最新版还加入了容器池预热和 MCP 协议支持，让 Claude Desktop 也能直接调用。  \n适合 AI 应用开发者、数据科学家、教育平台或任何需要“让大模型写代码并立即跑起来”的人。","## LLM Sandbox\n\n*Securely Execute LLM-Generated Code with Ease*\n\n[![SonarQube 
Cloud](https:\u002F\u002Fsonarcloud.io\u002Fimages\u002Fproject_badges\u002Fsonarcloud-light.svg)](https:\u002F\u002Fsonarcloud.io\u002Fsummary\u002Fnew_code?id=vndee_llm-sandbox)\n\n[![Quality Gate Status](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvndee_llm-sandbox_readme_4d3322b292fb.png)](https:\u002F\u002Fsonarcloud.io\u002Fsummary\u002Fnew_code?id=vndee_llm-sandbox)\n[![PyPI Downloads](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvndee_llm-sandbox_readme_34c50e933400.png)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fllm-sandbox\u002F)\n[![Release](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fv\u002Frelease\u002Fvndee\u002Fllm-sandbox)](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fv\u002Frelease\u002Fvndee\u002Fllm-sandbox)\n[![Build status](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Factions\u002Fworkflow\u002Fstatus\u002Fvndee\u002Fllm-sandbox\u002Fmain.yml?branch=main)](https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Factions\u002Fworkflows\u002Fmain.yml?query=branch%3Amain)\n[![CodeFactor](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvndee_llm-sandbox_readme_185f8b40f3ca.png)](https:\u002F\u002Fwww.codefactor.io\u002Frepository\u002Fgithub\u002Fvndee\u002Fllm-sandbox)\n[![codecov](https:\u002F\u002Fcodecov.io\u002Fgh\u002Fvndee\u002Fllm-sandbox\u002Fbranch\u002Fmain\u002Fgraph\u002Fbadge.svg?token=EULWCESZAY)](https:\u002F\u002Fcodecov.io\u002Fgh\u002Fvndee\u002Fllm-sandbox)\n![](https:\u002F\u002Fbadge.mcpx.dev?status=on 'MCP Enabled')\n[![Ask DeepWiki](https:\u002F\u002Fdeepwiki.com\u002Fbadge.svg)](https:\u002F\u002Fdeepwiki.com\u002Fvndee\u002Fllm-sandbox)\n\n**LLM Sandbox** is a lightweight and portable sandbox environment designed to run Large Language Model (LLM) generated code in a safe and isolated mode. 
It provides a secure execution environment for AI-generated code while offering flexibility in container backends and comprehensive language support, simplifying the process of running code generated by LLMs.\n\nDocumentation: https:\u002F\u002Fvndee.github.io\u002Fllm-sandbox\u002F\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvndee_llm-sandbox_readme_24c7248444d4.png)\n\n✨ **New:** This project now supports the [Model Context Protocol (MCP)](https:\u002F\u002Fvndee.github.io\u002Fllm-sandbox\u002Fmcp-integration\u002F) server, which allows your MCP clients (e.g. Claude Desktop) to run code generated by LLMs in a secure sandbox environment.\n\n## 🚀 Key Features\n\n### 🛡️ Security First\n- **Isolated Execution**: Code runs in isolated containers with no access to host system\n- **Security Policies**: Define custom security policies to control code execution\n- **Resource Limits**: Set CPU, memory, and execution time limits\n- **Network Isolation**: Control network access for sandboxed code\n\n### 🏗️ Flexible Container Backends\n- **Docker**: Most popular and widely supported option\n- **Kubernetes**: Enterprise-grade orchestration for scalable deployments\n- **Podman**: Rootless containers for enhanced security\n\n### 🌐 Multi-Language Support\nExecute code in multiple programming languages with automatic dependency management:\n- **Python** - Full ecosystem support with pip packages\n- **JavaScript\u002FNode.js** - npm package installation\n- **Java** - Maven and Gradle dependency management\n- **C++** - Compilation and execution\n- **Go** - Module support and compilation\n- **R** - Statistical computing and data analysis with CRAN packages\n\n### 🔌 LLM Framework Integration\nSeamlessly integrate with popular LLM frameworks such as LangChain, LangGraph, LlamaIndex, OpenAI, and more.\n\n### 📊 Advanced Features\n- **Artifact Extraction**: Automatically capture plots and visualizations\n- **Library Management**: Install dependencies on-the-fly\n- **File 
Operations**: Copy files to\u002Ffrom sandbox environments\n- **Custom Images**: Use your own container images\n- **Fast Production Mode**: Skip environment setup for faster container startup\n- **Container Pooling**: Pre-warm and reuse containers for improved performance (NEW!)\n\n## 📦 Installation\n\n### Basic Installation\n```bash\npip install llm-sandbox\n```\n\n### With Specific Backend Support\n```bash\n# For Docker support (most common)\npip install 'llm-sandbox[docker]'\n\n# For Kubernetes support\npip install 'llm-sandbox[k8s]'\n\n# For Podman support\npip install 'llm-sandbox[podman]'\n\n# All backends\npip install 'llm-sandbox[docker,k8s,podman]'\n```\n\n### Development Installation\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox.git\ncd llm-sandbox\npip install -e '.[dev]'\n```\n\n## 🏃‍♂️ Quick Start\n\n### Basic Usage\n\n```python\nfrom llm_sandbox import SandboxSession\n\n# Create and use a sandbox session\nwith SandboxSession(lang=\"python\") as session:\n    result = session.run(\"\"\"\nprint(\"Hello from LLM Sandbox!\")\nprint(\"I'm running in a secure container.\")\n    \"\"\")\n    print(result.stdout)\n```\n\n### Installing Libraries\n\n```python\nfrom llm_sandbox import SandboxSession\n\nwith SandboxSession(lang=\"python\") as session:\n    result = session.run(\"\"\"\nimport numpy as np\n\n# Create an array\narr = np.array([1, 2, 3, 4, 5])\nprint(f\"Array: {arr}\")\nprint(f\"Mean: {np.mean(arr)}\")\n    \"\"\", libraries=[\"numpy\"])\n\n    print(result.stdout)\n```\n\n### Multi-Language Support\n\n#### JavaScript\n```python\nwith SandboxSession(lang=\"javascript\") as session:\n    result = session.run(\"\"\"\nconst greeting = \"Hello from Node.js!\";\nconsole.log(greeting);\n\nconst axios = require('axios');\nconsole.log(\"Axios loaded successfully!\");\n    \"\"\", libraries=[\"axios\"])\n```\n\n#### Java\n```python\nwith SandboxSession(lang=\"java\") as session:\n    result = session.run(\"\"\"\npublic class 
HelloWorld {\n    public static void main(String[] args) {\n        System.out.println(\"Hello from Java!\");\n    }\n}\n    \"\"\")\n```\n\n#### C++\n```python\nwith SandboxSession(lang=\"cpp\") as session:\n    result = session.run(\"\"\"\n#include \u003Ciostream>\n\nint main() {\n    std::cout \u003C\u003C \"Hello from C++!\" \u003C\u003C std::endl;\n    return 0;\n}\n    \"\"\")\n```\n\n#### Go\n```python\nwith SandboxSession(lang=\"go\") as session:\n    result = session.run(\"\"\"\npackage main\nimport \"fmt\"\n\nfunc main() {\n    fmt.Println(\"Hello from Go!\")\n}\n    \"\"\")\n```\n\n#### R\n```python\nwith SandboxSession(\n    lang=\"r\",\n    image=\"ghcr.io\u002Fvndee\u002Fsandbox-r-451-bullseye\",\n    verbose=True,\n) as session:\n    result = session.run(\n        \"\"\"\n# Basic R operations\nprint(\"=== Basic R Demo ===\")\n\n# Create some data\nnumbers \u003C- c(1, 2, 3, 4, 5, 10, 15, 20)\nprint(paste(\"Numbers:\", paste(numbers, collapse=\", \")))\n\n# Basic statistics\nprint(paste(\"Mean:\", mean(numbers)))\nprint(paste(\"Median:\", median(numbers)))\nprint(paste(\"Standard Deviation:\", sd(numbers)))\n\n# Work with data frames\ndf \u003C- data.frame(\n    name = c(\"Alice\", \"Bob\", \"Charlie\", \"Diana\"),\n    age = c(25, 30, 35, 28),\n    score = c(85, 92, 78, 96)\n)\n\nprint(\"=== Data Frame ===\")\nprint(df)\n\n# Calculate average score\navg_score \u003C- mean(df$score)\nprint(paste(\"Average Score:\", avg_score))\n        \"\"\"\n    )\n```\n\n### Interactive Sessions\n\nFor notebook-style workflows you can use `InteractiveSandboxSession`, which keeps the Python interpreter state across multiple `run` calls.\n\n```python\nfrom llm_sandbox import InteractiveSandboxSession\n\nwith InteractiveSandboxSession(\n    lang=\"python\",\n    kernel_type=\"ipython\",\n    history_size=200,\n) as session:\n    session.run(\"value = 21 * 2\")\n    result = session.run(\"print(f'Result: {value}')\")\n    print(result.stdout)  # -> Result: 42\n\n    # 
Use magic command to install libraries\n    session.run(\"%pip install pandas\")\n    result = session.run(\"import pandas as pd; print(pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]}))\")\n    print(result.stdout)\n```\n\nInteractive sessions support Docker, Podman, and Kubernetes backends and currently target Python language. They spin up a long-running IPython kernel inside the sandbox, so each `run()` behaves like a notebook cell—state, imports, and magic commands stay alive until the context manager exits, without any extra networking or manual serialization.\n\n### Capturing Plots and Visualizations\n\n#### Python Plots\n```python\nfrom llm_sandbox import ArtifactSandboxSession\nimport base64\nfrom pathlib import Path\n\nwith ArtifactSandboxSession(lang=\"python\") as session:\n    result = session.run(\"\"\"\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nx = np.linspace(0, 10, 100)\ny = np.sin(x)\n\nplt.figure(figsize=(10, 6))\nplt.plot(x, y)\nplt.title(\"Sine Wave\")\nplt.xlabel(\"x\")\nplt.ylabel(\"sin(x)\")\nplt.grid(True)\nplt.savefig(\"sine_wave.png\", dpi=150, bbox_inches=\"tight\")\nplt.show()\n    \"\"\", libraries=[\"matplotlib\", \"numpy\"])\n\n    # Extract the generated plots\n    print(f\"Generated {len(result.plots)} plots\")\n\n    # Save plots to files\n    for i, plot in enumerate(result.plots):\n        plot_path = Path(f\"plot_{i + 1}.{plot.format.value}\")\n        with plot_path.open(\"wb\") as f:\n            f.write(base64.b64decode(plot.content_base64))\n```\n\n#### R Plots\n```python\nfrom llm_sandbox import ArtifactSandboxSession\nimport base64\nfrom pathlib import Path\n\nwith ArtifactSandboxSession(lang=\"r\") as session:\n    result = session.run(\"\"\"\nlibrary(ggplot2)\n\n# Create sample data\ndata \u003C- data.frame(\n    x = rnorm(100),\n    y = rnorm(100)\n)\n\n# Create ggplot2 visualization\np \u003C- ggplot(data, aes(x = x, y = y)) +\n    geom_point(alpha = 0.6) +\n    geom_smooth(method = \"lm\", se = FALSE) +\n    
labs(title = \"Scatter Plot with Trend Line\",\n         x = \"X values\", y = \"Y values\") +\n    theme_minimal()\n\nprint(p)\n\n# Base R plot\nhist(data$x, main = \"Distribution of X\",\n     xlab = \"X values\", col = \"lightblue\", breaks = 20)\n    \"\"\", libraries=[\"ggplot2\"])\n\n    # Extract the generated plots\n    print(f\"Generated {len(result.plots)} R plots\")\n\n    # Save plots to files\n    for i, plot in enumerate(result.plots):\n        plot_path = Path(f\"r_plot_{i + 1}.{plot.format.value}\")\n        with plot_path.open(\"wb\") as f:\n            f.write(base64.b64decode(plot.content_base64))\n```\n\n## 🔧 Configuration\n\n### Basic Configuration\n\n```python\nfrom llm_sandbox import SandboxSession\n\n# Create a new sandbox session\nwith SandboxSession(image=\"python:3.9.19-bullseye\", keep_template=True, lang=\"python\") as session:\n    result = session.run(\"print('Hello, World!')\")\n    print(result)\n\n# With custom Dockerfile\nwith SandboxSession(dockerfile=\"Dockerfile\", keep_template=True, lang=\"python\") as session:\n    result = session.run(\"print('Hello, World!')\")\n    print(result)\n\n# Or default image\nwith SandboxSession(lang=\"python\", keep_template=True) as session:\n    result = session.run(\"print('Hello, World!')\")\n    print(result)\n```\n\n\nLLM Sandbox also supports copying files between the host and the sandbox:\n\n```python\nfrom llm_sandbox import SandboxSession\n\nwith SandboxSession(lang=\"python\", keep_template=True) as session:\n    # Copy a file from the host to the sandbox\n    session.copy_to_runtime(\"test.py\", \"\u002Fsandbox\u002Ftest.py\")\n\n    # Run the copied Python code in the sandbox\n    result = session.execute_command(\"python \u002Fsandbox\u002Ftest.py\")\n    print(result)\n\n    # Copy a file from the sandbox to the host\n    session.copy_from_runtime(\"\u002Fsandbox\u002Foutput.txt\", \"output.txt\")\n```\n\n#### Custom runtime configs\n\n```python\nfrom llm_sandbox import 
SandboxSession\n\npod_manifest = {\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"name\": \"test\",\n        \"namespace\": \"test\",\n        \"labels\": {\"app\": \"sandbox\"},\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"name\": \"sandbox-container\",\n                \"image\": \"test\",\n                \"tty\": True,\n                \"volumeMounts\": {\n                    \"name\": \"tmp\",\n                    \"mountPath\": \"\u002Ftmp\",\n                },\n            }\n        ],\n        \"volumes\": [{\"name\": \"tmp\", \"emptyDir\": {\"sizeLimit\": \"5Gi\"}}],\n    },\n}\nwith SandboxSession(\n    backend=\"kubernetes\",\n    image=\"python:3.9.19-bullseye\",\n    dockerfile=None,\n    lang=\"python\",\n    keep_template=False,\n    verbose=False,\n    pod_manifest=pod_manifest,\n) as session:\n    result = session.run(\"print('Hello, World!')\")\n    print(result)\n```\n\n#### Remote Docker Host\n\n```python\nimport docker\nfrom llm_sandbox import SandboxSession\n\ntls_config = docker.tls.TLSConfig(\n    client_cert=(\"path\u002Fto\u002Fcert.pem\", \"path\u002Fto\u002Fkey.pem\"),\n    ca_cert=\"path\u002Fto\u002Fca.pem\",\n    verify=True\n)\ndocker_client = docker.DockerClient(base_url=\"tcp:\u002F\u002F\u003Cyour_host>:\u003Cport>\", tls=tls_config)\n\nwith SandboxSession(\n    client=docker_client,\n    image=\"python:3.9.19-bullseye\",\n    keep_template=True,\n    lang=\"python\",\n) as session:\n    result = session.run(\"print('Hello, World!')\")\n    print(result)\n```\n\n#### Kubernetes Support\n\n```python\nfrom kubernetes import client, config\nfrom llm_sandbox import SandboxSession\n\n# Use local kubeconfig\nconfig.load_kube_config()\nk8s_client = client.CoreV1Api()\n\nwith SandboxSession(\n    client=k8s_client,\n    backend=\"kubernetes\",\n    image=\"python:3.9.19-bullseye\",\n    lang=\"python\",\n    pod_manifest=pod_manifest, # None by default\n) as 
session:\n    result = session.run(\"print('Hello from Kubernetes!')\")\n    print(result)\n```\n\n**⚠️ Important for Custom Pod Manifests:**\n\nWhen using custom pod manifests, ensure your container configuration includes:\n- `\"tty\": True` (keeps container alive)\n- Proper `securityContext` at both pod and container levels\n- Container name can be any valid name (no restrictions)\n\nSee the [Configuration Guide](docs\u002Fconfiguration.md#kubernetes-backend) for complete requirements.\n\n#### Podman Support\n\n```python\nfrom llm_sandbox import SandboxSession\n\nwith SandboxSession(\n    backend=\"podman\",\n    lang=\"python\",\n    image=\"python:3.9.19-bullseye\"\n) as session:\n    result = session.run(\"print('Hello from Podman!')\")\n    print(result)\n```\n\n## ⚡ Container Pooling (Performance Optimization)\n\nContainer pooling dramatically improves performance by reusing pre-warmed containers instead of creating new ones for each execution. This is particularly beneficial for applications that execute code frequently.\n\n### Key Benefits\n- **Faster Execution**: Eliminate container creation overhead (up to 10x faster)\n- **Pre-warmed Environments**: Containers are initialized with your dependencies\n- **Thread-Safe**: Safely handle concurrent requests\n- **Resource Efficient**: Automatic container lifecycle management\n- **Flexible Configuration**: Control pool size, timeouts, and behavior\n\n### Basic Pool Usage\n\n```python\nfrom llm_sandbox import SandboxSession\nfrom llm_sandbox.pool import PoolConfig, create_pool_manager\n\n# Create a pool manager explicitly\npool = create_pool_manager(\n    backend=\"docker\",\n    config=PoolConfig(\n        max_pool_size=10,          # Maximum containers\n        min_pool_size=3,           # Keep at least 3 warm\n        idle_timeout=300.0,        # Recycle idle containers after 5 min\n        enable_prewarming=True,    # Create containers on startup\n    ),\n    lang=\"python\",\n)\n\n# Use the pool in a 
session\nwith SandboxSession(\n    lang=\"python\",\n    pool=pool,\n) as session:\n    result = session.run(\"print('Hello from pool!')\")\n\n# Container is automatically returned to pool when the session closes\n# Clean up the pool when done\npool.close()\n```\n\n### Sharing a Pool Across Sessions\n\nFor maximum efficiency, share a single pool across multiple sessions:\n\n```python\nfrom llm_sandbox import SandboxSession\nfrom llm_sandbox.pool import create_pool_manager, PoolConfig\n\n# Create a shared pool manager\npool = create_pool_manager(\n    backend=\"docker\",\n    config=PoolConfig(\n        max_pool_size=10,\n        min_pool_size=3,\n    ),\n    lang=\"python\",\n    libraries=[\"numpy\", \"pandas\"],  # Pre-install libraries in all containers\n)\n\n# Use the pool in multiple sessions\nwith SandboxSession(lang=\"python\", pool=pool) as session1:\n    result1 = session1.run(\"import pandas; print(pandas.__version__)\")\n\nwith SandboxSession(lang=\"python\", pool=pool) as session2:\n    result2 = session2.run(\"import numpy; print(numpy.__version__)\")\n\n# Clean up when done\npool.close()\n```\n\n### Concurrent Execution\n\nContainer pools are thread-safe and handle concurrent requests efficiently:\n\n```python\nfrom concurrent.futures import ThreadPoolExecutor\nfrom llm_sandbox import SandboxSession\nfrom llm_sandbox.pool import create_pool_manager, PoolConfig\n\n# Create shared pool\npool = create_pool_manager(\n    backend=\"docker\",\n    config=PoolConfig(max_pool_size=5),\n    lang=\"python\",\n)\n\ndef run_code(task_id: int):\n    with SandboxSession(lang=\"python\", pool=pool) as session:\n        return session.run(f'print(\"Task {task_id}\")')\n\ntry:\n    # Execute 20 tasks concurrently using only 5 containers\n    with ThreadPoolExecutor(max_workers=10) as executor:\n        results = list(executor.map(run_code, range(20)))\nfinally:\n    pool.close()\n```\n\n### Pool Configuration Options\n\n```python\nfrom llm_sandbox.pool import 
PoolConfig, ExhaustionStrategy, create_pool_manager\n\nconfig = PoolConfig(\n    # Pool size limits\n    max_pool_size=10,                      # Maximum containers in pool\n    min_pool_size=2,                       # Minimum warm containers\n\n    # Timeout configuration\n    idle_timeout=300.0,                    # Recycle idle containers (seconds)\n    acquisition_timeout=30.0,              # Wait time for available container\n\n    # Health and lifecycle\n    health_check_interval=60.0,            # Health check frequency\n    max_container_lifetime=3600.0,         # Max container lifetime\n    max_container_uses=100,                # Max uses before recycling\n\n    # Pool exhaustion behavior\n    exhaustion_strategy=ExhaustionStrategy.WAIT,  # WAIT, FAIL_FAST, or TEMPORARY\n\n    # Pre-warming\n    enable_prewarming=True,                # Pre-warm containers\n)\n\npool = create_pool_manager(\n    backend=\"docker\",\n    config=config,\n    lang=\"python\",\n    libraries=[\"requests\", \"numpy\"],       # Pre-install libraries\n)\n```\n\n### Pool Exhaustion Strategies\n\nWhen all containers are busy, the pool can handle it in different ways:\n\n#### 1. WAIT (Default)\nWait for a container to become available:\n```python\nconfig = PoolConfig(\n    max_pool_size=5,\n    exhaustion_strategy=ExhaustionStrategy.WAIT,\n    acquisition_timeout=30.0,  # Wait up to 30 seconds\n)\n```\n\n#### 2. FAIL_FAST\nImmediately raise an error:\n```python\nconfig = PoolConfig(\n    max_pool_size=5,\n    exhaustion_strategy=ExhaustionStrategy.FAIL_FAST,\n)\n```\n\n#### 3. 
TEMPORARY\nCreate a temporary container outside the pool:\n```python\nconfig = PoolConfig(\n    max_pool_size=5,\n    exhaustion_strategy=ExhaustionStrategy.TEMPORARY,\n)\n```\n\n### Monitoring Pool Statistics\n\n```python\nfrom llm_sandbox.pool import create_pool_manager\n\npool = create_pool_manager(backend=\"docker\", lang=\"python\")\n\n# Get pool statistics\nstats = pool.get_stats()\nprint(f\"Total containers: {stats['total_size']}\")\nprint(f\"Idle containers: {stats['state_counts']['idle']}\")\nprint(f\"Busy containers: {stats['state_counts']['busy']}\")\n\npool.close()\n```\n\n### Artifact Extraction with Pooling\n\nFor capturing plots and visualizations with container pooling, you can use either approach:\n\n```python\nfrom llm_sandbox import ArtifactSandboxSession\nfrom llm_sandbox.pool import create_pool_manager, PoolConfig\nimport base64\nfrom pathlib import Path\n\n# Create pool with pre-installed visualization libraries\npool = create_pool_manager(\n    backend=\"docker\",\n    config=PoolConfig(max_pool_size=5, min_pool_size=2),\n    lang=\"python\",\n    libraries=[\"matplotlib\", \"numpy\"],\n)\n\ntry:\n    # Option 1: Use pool parameter (recommended for API consistency)\n    with ArtifactSandboxSession(pool=pool, enable_plotting=True) as session:\n        result = session.run(\"\"\"\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nx = np.linspace(0, 10, 100)\ny = np.sin(x)\nplt.plot(x, y)\nplt.title('Pooled Execution - Sine Wave')\nplt.show()\n        \"\"\")\n\n        # Save generated plots\n        for i, plot in enumerate(result.plots):\n            Path(f\"plot_{i}.{plot.format.value}\").write_bytes(\n                base64.b64decode(plot.content_base64)\n            )\n\n        print(f\"Generated {len(result.plots)} plots using pooled container\")\n\n    # Option 2: Use ArtifactPooledSandboxSession (explicit class)\n    # Both approaches work identically\n    from llm_sandbox.pool import ArtifactPooledSandboxSession\n\n    with 
ArtifactPooledSandboxSession(pool_manager=pool, enable_plotting=True) as session:\n        result = session.run(\"print('Same functionality, different API')\")\n\nfinally:\n    pool.close()\n```\n\n### Examples\n\nSee the `examples\u002F` directory for complete demonstrations:\n- [pool_basic_demo.py](examples\u002Fpool_basic_demo.py) - Basic pool usage and configuration\n- [pool_concurrent_demo.py](examples\u002Fpool_concurrent_demo.py) - Concurrent execution patterns\n- [pool_monitoring_demo.py](examples\u002Fpool_monitoring_demo.py) - Health monitoring and lifecycle management\n- [pool_artifact_demo.py](examples\u002Fpool_artifact_demo.py) - Artifact extraction with pooling (plots, CSV, mixed artifacts)\n\n## 🤖 LLM Framework Integration\n\n### LangChain Tool\n\n```python\nfrom langchain.tools import BaseTool\nfrom llm_sandbox import SandboxSession\n\nclass PythonSandboxTool(BaseTool):\n    # Type annotations are required so the Pydantic-based BaseTool\n    # accepts these field overrides\n    name: str = \"python_sandbox\"\n    description: str = \"Execute Python code in a secure sandbox\"\n\n    def _run(self, code: str) -> str:\n        with SandboxSession(lang=\"python\") as session:\n            result = session.run(code)\n            return result.stdout if result.exit_code == 0 else result.stderr\n```\n\n### Use with OpenAI Functions\n\n```python\nimport openai\nfrom llm_sandbox import SandboxSession\n\ndef execute_code(code: str, language: str = \"python\") -> str:\n    \"\"\"Execute code in a secure sandbox environment.\"\"\"\n    with SandboxSession(lang=language) as session:\n        result = session.run(code)\n        return result.stdout if result.exit_code == 0 else result.stderr\n\n# Register as OpenAI function\nfunctions = [\n    {\n        \"name\": \"execute_code\",\n        \"description\": \"Execute code in a secure sandbox\",\n        \"parameters\": {\n            \"type\": \"object\",\n            \"properties\": {\n                \"code\": {\"type\": \"string\", \"description\": \"Code to execute\"},\n                \"language\": {\"type\": 
\"string\", \"enum\": [\"python\", \"javascript\", \"java\", \"cpp\", \"go\", \"r\"]}\n            },\n            \"required\": [\"code\"]\n        }\n    }\n]\n```\n\n## 🔌 Model Context Protocol (MCP) Server\n\nLLM Sandbox provides a [Model Context Protocol (MCP)](https:\u002F\u002Fmodelcontextprotocol.io\u002F) server that enables AI assistants like Claude Desktop to execute code securely in sandboxed environments. This integration allows LLMs to run code directly with automatic visualization capture and multi-language support.\n\n### Features\n\n- **Secure Code Execution**: Execute code in isolated containers with your preferred backend\n- **Multi-Language Support**: Run Python, JavaScript, Java, C++, Go, R, and Ruby code\n- **Automatic Visualization Capture**: Automatically capture and return plots and visualizations\n- **Library Management**: Install packages and dependencies on-the-fly\n- **Flexible Backend Support**: Choose from Docker, Podman, or Kubernetes backends\n\n### Installation\n\nInstall LLM Sandbox with MCP support using your preferred backend:\n\n```bash\n# For Docker backend\npip install 'llm-sandbox[mcp-docker]'\n\n# For Podman backend\npip install 'llm-sandbox[mcp-podman]'\n\n# For Kubernetes backend\npip install 'llm-sandbox[mcp-k8s]'\n```\n\n### Configuration\n\nAdd the following configuration to your MCP client (e.g., `claude_desktop_config.json` for Claude Desktop):\n\n```json\n{\n  \"mcpServers\": {\n    \"llm-sandbox\": {\n      \"command\": \"python3\",\n      \"args\": [\"-m\", \"llm_sandbox.mcp_server.server\"]\n    }\n  }\n}\n```\n\n#### Backend-Specific Configuration\n\nTo select a backend, set the `BACKEND` environment variable. Depending on the backend, you may also need to set additional environment variables. 
For example, you may need to set the `DOCKER_HOST` environment variable if your container runtime is not listening on its default socket. Adjust the socket and kubeconfig paths in the examples below to match your system.\n\n**Docker (default):**\n```json\n{\n  \"mcpServers\": {\n    \"llm-sandbox\": {\n      \"command\": \"python3\",\n      \"args\": [\"-m\", \"llm_sandbox.mcp_server.server\"],\n      \"env\": {\n        \"BACKEND\": \"docker\",\n        \"DOCKER_HOST\": \"unix:\u002F\u002F\u002Fvar\u002Frun\u002Fdocker.sock\"\n      }\n    }\n  }\n}\n```\n\n**Podman:**\n```json\n{\n  \"mcpServers\": {\n    \"llm-sandbox\": {\n      \"command\": \"python3\",\n      \"args\": [\"-m\", \"llm_sandbox.mcp_server.server\"],\n      \"env\": {\n        \"BACKEND\": \"podman\",\n        \"DOCKER_HOST\": \"unix:\u002F\u002F\u002Fvar\u002Frun\u002Fpodman\u002Fpodman.sock\"\n      }\n    }\n  }\n}\n```\n\nFor Kubernetes, set the `KUBECONFIG` environment variable to the path of your kubeconfig file.\n\n**Kubernetes:**\n```json\n{\n  \"mcpServers\": {\n    \"llm-sandbox\": {\n      \"command\": \"python3\",\n      \"args\": [\"-m\", \"llm_sandbox.mcp_server.server\"],\n      \"env\": {\n        \"BACKEND\": \"kubernetes\",\n        \"KUBECONFIG\": \"\u002Fpath\u002Fto\u002Fkubeconfig\"\n      }\n    }\n  }\n}\n```\n\n### Available Tools\n\nThe MCP server provides the following tools:\n\n- **`execute_code`**: Execute code in a secure sandbox with automatic visualization capture\n- **`get_supported_languages`**: Get the list of supported programming languages\n- **`get_language_details`**: Get detailed information about a specific language\n\n### Usage Example\n\nOnce configured, you can ask your AI assistant to run code, and it will automatically use the LLM Sandbox MCP server:\n\n```text\n\"Create a scatter plot showing the relationship 
between x and y data points using matplotlib\"\n```\n\nThe assistant will execute Python code in a secure sandbox and automatically capture any generated plots or visualizations.\n\n## 🏗️ Architecture\n\n```mermaid\ngraph LR\n    A[LLM Client] --> B[LLM Sandbox]\n    B --> C[Container Backend]\n\n    A1[OpenAI] --> A\n    A2[Anthropic] --> A\n    A3[Local LLMs] --> A\n    A4[LangChain] --> A\n    A5[LangGraph] --> A\n    A6[LlamaIndex] --> A\n    A7[MCP Clients] --> A\n\n    C --> C1[Docker]\n    C --> C2[Kubernetes]\n    C --> C3[Podman]\n\n    style A fill:#e1f5fe\n    style B fill:#f3e5f5\n    style C fill:#e8f5e8\n    style A1 fill:#fff3e0\n    style A2 fill:#fff3e0\n    style A3 fill:#fff3e0\n    style A4 fill:#fff3e0\n    style A5 fill:#fff3e0\n    style A6 fill:#fff3e0\n    style A7 fill:#fff3e0\n    style C1 fill:#e0f2f1\n    style C2 fill:#e0f2f1\n    style C3 fill:#e0f2f1\n```\n\n## 📚 Documentation\n\n- **[Full Documentation](https:\u002F\u002Fvndee.github.io\u002Fllm-sandbox\u002F)** - Complete documentation\n- **[Getting Started](https:\u002F\u002Fvndee.github.io\u002Fllm-sandbox\u002Fgetting-started\u002F)** - Installation and basic usage\n- **[Configuration](https:\u002F\u002Fvndee.github.io\u002Fllm-sandbox\u002Fconfiguration\u002F)** - Detailed configuration options\n- **[Security](https:\u002F\u002Fvndee.github.io\u002Fllm-sandbox\u002Fsecurity\u002F)** - Security policies and best practices\n- **[Backends](https:\u002F\u002Fvndee.github.io\u002Fllm-sandbox\u002Fbackends\u002F)** - Container backend details\n- **[Languages](https:\u002F\u002Fvndee.github.io\u002Fllm-sandbox\u002Flanguages\u002F)** - Supported programming languages\n- **[Integrations](https:\u002F\u002Fvndee.github.io\u002Fllm-sandbox\u002Fintegrations\u002F)** - LLM framework integrations\n- **[API Reference](https:\u002F\u002Fvndee.github.io\u002Fllm-sandbox\u002Fapi-reference\u002F)** - Complete API documentation\n- 
**[Examples](https:\u002F\u002Fvndee.github.io\u002Fllm-sandbox\u002Fexamples\u002F)** - Real-world usage examples\n\n## 🤝 Contributing\n\nWe welcome contributions! Please see our [Contributing Guide](https:\u002F\u002Fvndee.github.io\u002Fllm-sandbox\u002Fcontributing\u002F) for details.\n\n### Development Setup\n\n```bash\n# Clone the repository\ngit clone https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox.git\ncd llm-sandbox\n\n# Install in development mode\nmake install\n\n# Run pre-commit hooks\nuv run pre-commit run -a\n\n# Run tests\nmake test\n```\n\n## 📄 License\n\nThis project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.\n\n## 🌟 Star History\n\nIf you find LLM Sandbox useful, please consider giving it a star on GitHub!\n\n## 📞 Support & Community\n\n- **GitHub Issues**: [Report bugs or request features](https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fissues)\n- **GitHub Discussions**: [Join the community](https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fdiscussions)\n- **PyPI**: [pypi.org\u002Fproject\u002Fllm-sandbox](https:\u002F\u002Fpypi.org\u002Fproject\u002Fllm-sandbox\u002F)\n- **Documentation**: [vndee.github.io\u002Fllm-sandbox](https:\u002F\u002Fvndee.github.io\u002Fllm-sandbox\u002F)\n\n## Contributors\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fgraphs\u002Fcontributors\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvndee_llm-sandbox_readme_a521af427117.png\" \u002F>\n\u003C\u002Fa>\n\n## Star History\n\n[![Star History Chart](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvndee_llm-sandbox_readme_a5b3c0c8e48e.png)](https:\u002F\u002Fwww.star-history.com\u002F#vndee\u002Fllm-sandbox&Date)\n","## LLM 沙盒\n\n*轻松安全地执行 LLM 生成的代码*\n\n[![SonarQube 
云](https:\u002F\u002Fsonarcloud.io\u002Fimages\u002Fproject_badges\u002Fsonarcloud-light.svg)](https:\u002F\u002Fsonarcloud.io\u002Fsummary\u002Fnew_code?id=vndee_llm-sandbox)\n\n[![质量门状态](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvndee_llm-sandbox_readme_4d3322b292fb.png)](https:\u002F\u002Fsonarcloud.io\u002Fsummary\u002Fnew_code?id=vndee_llm-sandbox)\n[![PyPI 下载量](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvndee_llm-sandbox_readme_34c50e933400.png)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fllm-sandbox\u002F)\n[![发布](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fv\u002Frelease\u002Fvndee\u002Fllm-sandbox)](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fv\u002Frelease\u002Fvndee\u002Fllm-sandbox)\n[![构建状态](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Factions\u002Fworkflow\u002Fstatus\u002Fvndee\u002Fllm-sandbox\u002Fmain.yml?branch=main)](https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Factions\u002Fworkflows\u002Fmain.yml?query=branch%3Amain)\n[![CodeFactor](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvndee_llm-sandbox_readme_185f8b40f3ca.png)](https:\u002F\u002Fwww.codefactor.io\u002Frepository\u002Fgithub\u002Fvndee\u002Fllm-sandbox)\n[![codecov](https:\u002F\u002Fcodecov.io\u002Fgh\u002Fvndee\u002Fllm-sandbox\u002Fbranch\u002Fmain\u002Fgraph\u002Fbadge.svg?token=EULWCESZAY)](https:\u002F\u002Fcodecov.io\u002Fgh\u002Fvndee\u002Fllm-sandbox)\n![](https:\u002F\u002Fbadge.mcpx.dev?status=on 'MCP 已启用')\n[![Ask DeepWiki](https:\u002F\u002Fdeepwiki.com\u002Fbadge.svg)](https:\u002F\u002Fdeepwiki.com\u002Fvndee\u002Fllm-sandbox)\n\n**LLM 沙盒** 是一种轻量级且可移植的沙盒环境，专为在安全、隔离的模式下运行大型语言模型（LLM）生成的代码而设计。它为 AI 生成的代码提供了一个安全的执行环境，同时具备灵活的容器后端选择和全面的语言支持，从而简化了运行 LLM 生成代码的流程。\n\n文档：https:\u002F\u002Fvndee.github.io\u002Fllm-sandbox\u002F\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvndee_llm-sandbox_readme_24c7248444d4.png)\n\n✨ **新增功能**：本项目现已支持 [模型上下文协议 
(MCP)](https:\u002F\u002Fvndee.github.io\u002Fllm-sandbox\u002Fmcp-integration\u002F) 服务器，使您的 MCP 客户端（如 Claude Desktop）能够在一个安全的沙盒环境中运行 LLM 生成的代码。\n\n## 🚀 核心功能\n\n### 🛡️ 安全第一\n- **隔离式执行**：代码运行于隔离的容器中，无权访问主机系统。\n- **安全策略**：可自定义安全策略，以严格控制代码的执行。\n- **资源限制**：可设置 CPU、内存及执行时间的上限。\n- **网络隔离**：对沙盒化代码的网络访问进行严格管控。\n\n### 🏗️ 灵活的容器后端\n- **Docker**：最受欢迎且广泛支持的选项。\n- **Kubernetes**：企业级编排工具，适用于可扩展的部署方案。\n- **Podman**：无根容器，提供更强的安全性。\n\n### 🌐 多语言支持\n支持多种编程语言的代码执行，并实现自动依赖管理：\n- **Python** - 全面支持 pip 包管理。\n- **JavaScript\u002FNode.js** - 通过 npm 安装包。\n- **Java** - 使用 Maven 和 Gradle 进行依赖管理。\n- **C++** - 支持编译与运行。\n- **Go** - 提供模块化支持与编译功能。\n- **R** - 利用 CRAN 包进行统计计算与数据分析。\n\n### 🔌 LLM 框架集成\n可无缝集成到流行的 LLM 框架中，例如 LangChain、LangGraph、LlamaIndex、OpenAI 等。\n\n### 📊 高阶功能\n- **工件提取**：自动捕获图表与可视化结果。\n- **库管理**：即时安装依赖项。\n- **文件操作**：在沙盒环境中复制文件。\n- **自定义镜像**：使用您自己的容器镜像。\n- **快速生产模式**：跳过环境搭建，加快容器启动速度。\n- **容器池化**：预热并复用容器，提升性能（全新功能！）\n\n## 📦 安装指南\n\n### 基础安装\n```bash\npip install llm-sandbox\n```\n\n### 配合特定后端支持\n```bash\n# 用于 Docker 支持（最常见）\npip install 'llm-sandbox[docker]'\n\n# 用于 Kubernetes 支持\npip install 'llm-sandbox[k8s]'\n\n# 用于 Podman 支持\npip install 'llm-sandbox[podman]'\n\n# 所有后端\npip install 'llm-sandbox[docker,k8s,podman]'\n```\n\n### 开发版安装\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox.git\ncd llm-sandbox\npip install -e '.[dev]'\n```\n\n## 🏃‍♂️ 快速上手\n\n### 基本用法\n\n```python\nfrom llm_sandbox import SandboxSession\n\n# 创建并使用沙盒会话\nwith SandboxSession(lang=\"python\") as session:\n    result = session.run(\"\"\"\nprint(\"来自 LLM 沙盒的问候!\")\nprint(\"我正在安全的容器中运行。\")\n    \"\"\")\n    print(result.stdout)\n```\n\n### 安装库\n\n```python\nfrom llm_sandbox import SandboxSession\n\nwith SandboxSession(lang=\"python\") as session:\n    result = session.run(\"\"\"\nimport numpy as np\n\n# 创建一个数组\narr = np.array([1, 2, 3, 4, 5])\nprint(f\"数组: {arr}\")\nprint(f\"平均值: {np.mean(arr)}\")\n    \"\"\", libraries=[\"numpy\"])\n\n    print(result.stdout)\n```\n\n### 多语言支持\n\n#### 
JavaScript\n```python\nwith SandboxSession(lang=\"javascript\") as session:\n    result = session.run(\"\"\"\nconst greeting = \"来自 Node.js 的问候!\";\nconsole.log(greeting);\n\nconst axios = require('axios');\nconsole.log(\"Axios 已成功加载！\");\n    \"\"\", libraries=[\"axios\"])\n```\n\n#### Java\n```python\nwith SandboxSession(lang=\"java\") as session:\n    result = session.run(\"\"\"\npublic class HelloWorld {\n    public static void main(String[] args) {\n        System.out.println(\"来自 Java 的问候!\");\n    }\n}\n    \"\"\")\n```\n\n#### C++\n```python\nwith SandboxSession(lang=\"cpp\") as session:\n    result = session.run(\"\"\"\n#include \u003Ciostream>\n\nint main() {\n    std::cout \u003C\u003C \"来自 C++ 的问候!\" \u003C\u003C std::endl;\n    return 0;\n}\n    \"\"\")\n```\n\n#### Go\n```python\nwith SandboxSession(lang=\"go\") as session:\n    result = session.run(\"\"\"\npackage main\nimport \"fmt\"\n\nfunc main() {\n    fmt.Println(\"来自 Go 的问候!\")\n}\n    \"\"\")\n```\n\n#### R\n```python\nwith SandboxSession(\n    lang=\"r\",\n    image=\"ghcr.io\u002Fvndee\u002Fsandbox-r-451-bullseye\",\n    verbose=True,\n) as session:\n    result = session.run(\n        \"\"\"\n# 基本的 R 操作\nprint(\"=== 基本的 R 演示 ===\")\n\n# 创建一些数据\nnumbers \u003C- c(1, 2, 3, 4, 5, 10, 15, 20)\nprint(paste(\"数字:\", paste(numbers, collapse=\", \")))\n\n# 基本统计\nprint(paste(\"平均值:\", mean(numbers)))\nprint(paste(\"中位数:\", median(numbers)))\nprint(paste(\"标准差:\", sd(numbers)))\n\n# 处理数据框\ndf \u003C- data.frame(\n    name = c(\"Alice\", \"Bob\", \"Charlie\", \"Diana\"),\n    age = c(25, 30, 35, 28),\n    score = c(85, 92, 78, 96)\n)\n\nprint(\"=== 数据框 ===\")\nprint(df)\n\n# 计算平均分数\navg_score \u003C- mean(df$score)\nprint(paste(\"平均分数:\", avg_score))\n        \"\"\"\n    )\n```\n\n### 交互式会话\n\n对于笔记本式的工作流，您可以使用 `InteractiveSandboxSession`，它能够在多次 `run` 调用之间保持 Python 解释器的状态。\n\n```python\nfrom llm_sandbox import InteractiveSandboxSession\n\nwith InteractiveSandboxSession(\n    lang=\"python\",\n    
kernel_type=\"ipython\",\n    history_size=200,\n) as session:\n    session.run(\"value = 21 * 2\")\n    result = session.run(\"print(f'结果: {value}')\")\n    print(result.stdout)  # -> 结果: 42\n\n    # 使用魔法命令安装库\n    session.run(\"%pip install pandas\")\n    result = session.run(\"import pandas as pd; print(pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]}))\")\n    print(result.stdout)\n```\n\n交互式会话支持 Docker、Podman 和 Kubernetes 后端，目前主要面向 Python 语言。它们会在沙盒中启动一个长时间运行的 IPython 内核，因此每次 `run()` 操作都如同在笔记本单元格中执行一样——状态、导入语句以及魔法命令都会一直保留，直到上下文管理器退出，无需额外的网络通信或手动序列化操作。\n\n### 捕获图表与可视化效果\n\n#### Python 图表\n```python\nfrom llm_sandbox import ArtifactSandboxSession\nimport base64\nfrom pathlib import Path\n\nwith ArtifactSandboxSession(lang=\"python\") as session:\n    result = session.run(\"\"\"\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nx = np.linspace(0, 10, 100)\ny = np.sin(x)\n\nplt.figure(figsize=(10, 6))\nplt.plot(x, y)\nplt.title(\"正弦波\")\nplt.xlabel(\"x\")\nplt.ylabel(\"sin(x)\")\nplt.grid(True)\nplt.savefig(\"sine_wave.png\", dpi=150, bbox_inches=\"tight\")\nplt.show()\n    \"\"\", libraries=[\"matplotlib\", \"numpy\"])\n\n    # 提取生成的图表\n    print(f\"生成了 {len(result.plots)} 张图表\")\n\n    # 将图表保存为文件\n    for i, plot in enumerate(result.plots):\n        plot_path = Path(f\"plot_{i + 1}.{plot.format.value}\")\n        with plot_path.open(\"wb\") as f:\n            f.write(base64.b64decode(plot.content_base64))\n```\n\n#### R 图表\n```python\nfrom llm_sandbox import ArtifactSandboxSession\nimport base64\nfrom pathlib import Path\n\nwith ArtifactSandboxSession(lang=\"r\") as session:\n    result = session.run(\"\"\"\nlibrary(ggplot2)\n\n# 创建示例数据\ndata \u003C- data.frame(\n    x = rnorm(100),\n    y = rnorm(100)\n)\n\n# 创建 ggplot2 可视化\np \u003C- ggplot(data, aes(x = x, y = y)) +\n    geom_point(alpha = 0.6) +\n    geom_smooth(method = \"lm\", se = FALSE) +\n    labs(title = \"散点图带趋势线\",\n         x = \"X 值\", y = \"Y 值\") +\n    theme_minimal()\n\nprint(p)\n\n# R 
图表\nhist(data$x, main = \"X 的分布\",\n     xlab = \"X 值\", col = \"lightblue\", breaks = 20)\n    \"\"\", libraries=[\"ggplot2\"])\n\n    # 提取生成的图表\n    print(f\"生成了 {len(result.plots)} 张 R 图表\")\n\n    # 将图表保存为文件\n    for i, plot in enumerate(result.plots):\n        plot_path = Path(f\"r_plot_{i + 1}.{plot.format.value}\")\n        with plot_path.open(\"wb\") as f:\n            f.write(base64.b64decode(plot.content_base64))\n```\n\n## 🔧 配置\n\n### 基本配置\n\n```python\nfrom llm_sandbox import SandboxSession\n\n# 创建一个新的沙盒会话\nwith SandboxSession(image=\"python:3.9.19-bullseye\", keep_template=True, lang=\"python\") as session:\n    result = session.run(\"print('Hello, World!')\")\n    print(result)\n\n# 使用自定义 Dockerfile\nwith SandboxSession(dockerfile=\"Dockerfile\", keep_template=True, lang=\"python\") as session:\n    result = session.run(\"print('Hello, World!')\")\n    print(result)\n\n# 或者使用默认镜像\nwith SandboxSession(lang=\"python\", keep_template=True) as session:\n    result = session.run(\"print('Hello, World!')\")\n    print(result)\n```\n\n\nLLM 沙盒还支持在主机与沙盒之间复制文件：\n\n```python\nfrom llm_sandbox import SandboxSession\n\nwith SandboxSession(lang=\"python\", keep_template=True) as session:\n    # 将文件从主机复制到沙盒\n    session.copy_to_runtime(\"test.py\", \"\u002Fsandbox\u002Ftest.py\")\n\n    # 在沙盒中运行复制的 Python 代码\n    result = session.execute_command(\"python \u002Fsandbox\u002Ftest.py\")\n    print(result)\n\n    # 将文件从沙盒复制回主机\n    session.copy_from_runtime(\"\u002Fsandbox\u002Foutput.txt\", \"output.txt\")\n```\n\n#### 自定义运行时配置\n\n```python\nfrom llm_sandbox import SandboxSession\n\npod_manifest = {\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"name\": \"test\",\n        \"namespace\": \"test\",\n        \"labels\": {\"app\": \"sandbox\"},\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"name\": \"sandbox-container\",\n                \"image\": \"test\",\n                \"tty\": True,\n      
          \"volumeMounts\": [\n                    {\"name\": \"tmp\", \"mountPath\": \"\u002Ftmp\"}\n                ]\n            }\n        ],\n        \"volumes\": [{\"name\": \"tmp\", \"emptyDir\": {\"sizeLimit\": \"5Gi\"}}],\n    },\n}\nwith SandboxSession(\n    backend=\"kubernetes\",\n    image=\"python:3.9.19-bullseye\",\n    dockerfile=None,\n    lang=\"python\",\n    keep_template=False,\n    verbose=False,\n    pod_manifest=pod_manifest,\n) as session:\n    result = session.run(\"print('Hello, World!')\")\n    print(result)\n```\n\n#### 远程 Docker 主机\n\n```python\nimport docker\nfrom llm_sandbox import SandboxSession\n\ntls_config = docker.tls.TLSConfig(\n    client_cert=(\"path\u002Fto\u002Fcert.pem\", \"path\u002Fto\u002Fkey.pem\"),\n    ca_cert=\"path\u002Fto\u002Fca.pem\",\n    verify=True\n)\ndocker_client = docker.DockerClient(base_url=\"tcp:\u002F\u002F\u003Cyour_host>:\u003Cport>\", tls=tls_config)\n\nwith SandboxSession(\n    client=docker_client,\n    image=\"python:3.9.19-bullseye\",\n    keep_template=True,\n    lang=\"python\",\n) as session:\n    result = session.run(\"print('Hello, World!')\")\n    print(result)\n```\n\n#### Kubernetes 支持\n\n```python\nfrom kubernetes import client, config\nfrom llm_sandbox import SandboxSession\n\n# 使用本地 kubeconfig\nconfig.load_kube_config()\nk8s_client = client.CoreV1Api()\n\nwith SandboxSession(\n    client=k8s_client,\n    backend=\"kubernetes\",\n    image=\"python:3.9.19-bullseye\",\n    lang=\"python\",\n    pod_manifest=pod_manifest, # 默认为 None\n) as session:\n    result = session.run(\"print('Hello from Kubernetes!')\")\n    print(result)\n```\n\n**⚠️ 对于自定义 Pod 清单至关重要：**\n\n在使用自定义 Pod 清单时，请确保您的容器配置包含以下内容：\n- `\"tty\": True`（使容器保持存活）\n- 在 Pod 和容器层面正确配置 `securityContext`\n- 容器名称可以是任意有效名称（无任何限制）\n\n有关完整要求，请参阅 [配置指南](docs\u002Fconfiguration.md#kubernetes-backend)。\n\n#### Podman 支持\n\n```python\nfrom llm_sandbox import SandboxSession\n\nwith SandboxSession(\n    
backend=\"podman\",\n    lang=\"python\",\n    image=\"python:3.9.19-bullseye\"\n) as session:\n    result = session.run(\"print('Hello from Podman!')\")\n    print(result)\n```\n\n## ⚡ 容器池化（性能优化）\n\n容器池化通过重复利用预先预热的容器，而非为每次执行创建新容器，从而大幅提升性能。这尤其适用于频繁执行代码的应用程序。\n\n### 核心优势\n- **执行速度更快**：消除容器创建的开销，执行速度最高可提升10倍。\n- **预热环境**：容器在启动时即已加载所需依赖项。\n- **线程安全**：安全处理并发请求。\n- **资源高效**：自动管理容器的生命周期。\n- **灵活配置**：可自定义池大小、超时时间及行为。\n\n### 基本池使用\n\n```python\nfrom llm_sandbox import SandboxSession\nfrom llm_sandbox.pool import PoolConfig, create_pool_manager\n\n# 明确创建池管理器\npool = create_pool_manager(\n    backend=\"docker\",\n    config=PoolConfig(\n        max_pool_size=10,          # 最大容器数量\n        min_pool_size=3,           # 至少保持3个已预热容器\n        idle_timeout=300.0,        # 5分钟后回收空闲容器\n        enable_prewarming=True,    # 在启动时自动创建容器\n    ),\n    lang=\"python\",\n)\n\n# 在会话中使用池\nwith SandboxSession(\n    lang=\"python\",\n    pool=pool,\n) as session:\n    result = session.run(\"print('Hello from pool!')\")\n\n# 当会话关闭时，容器会自动返回到池中\n# 完成操作后，清理池\npool.close()\n```\n\n### 在多个会话间共享池\n\n为实现最大效率，可将单个池在多个会话间共享：\n\n```python\nfrom llm_sandbox import SandboxSession\nfrom llm_sandbox.pool import create_pool_manager, PoolConfig\n\n# 创建共享池管理器\npool = create_pool_manager(\n    backend=\"docker\",\n    config=PoolConfig(\n        max_pool_size=10,\n        min_pool_size=3,\n    ),\n    lang=\"python\",\n    libraries=[\"numpy\", \"pandas\"],  # 在所有容器中预先安装相关库\n)\n\n# 在多个会话中使用池\nwith SandboxSession(lang=\"python\", pool=pool) as session1:\n    result1 = session1.run(\"import pandas; print(pandas.__version__)\")\n\nwith SandboxSession(lang=\"python\", pool=pool) as session2:\n    result2 = session2.run(\"import numpy; print(numpy.__version__)\")\n\n# 完成操作后，清理池\npool.close()\n```\n\n### 并发执行\n\n容器池具有线程安全性，并能高效处理并发请求：\n\n```python\nfrom concurrent.futures import ThreadPoolExecutor\nfrom llm_sandbox import SandboxSession\nfrom llm_sandbox.pool import create_pool_manager, PoolConfig\n\n# 创建共享池\npool = 
create_pool_manager(\n    backend=\"docker\",\n    config=PoolConfig(max_pool_size=5),\n    lang=\"python\",\n)\n\ndef run_code(task_id: int):\n    with SandboxSession(lang=\"python\", pool=pool) as session:\n        return session.run(f'print(\"Task {task_id}\")')\n\ntry:\n    # 使用仅5个容器，同时并发执行20个任务\n    with ThreadPoolExecutor(max_workers=10) as executor:\n        results = list(executor.map(run_code, range(20)))\nfinally:\n    pool.close()\n```\n\n### 池配置选项\n\n```python\nfrom llm_sandbox.pool import PoolConfig, ExhaustionStrategy, create_pool_manager\n\nconfig = PoolConfig(\n    # 池大小限制\n    max_pool_size=10,                      # 池中的最大容器数量\n    min_pool_size=2,                       # 最小的已预热容器数量\n\n    # 超时配置\n    idle_timeout=300.0,                    # 回收空闲容器（秒）\n    acquisition_timeout=30.0,              # 等待可用容器的时间\n\n    # 运行状态与生命周期\n    health_check_interval=60.0,            # 运行状态检查频率\n    max_container_lifetime=3600.0,         # 最大容器寿命\n    max_container_uses=100,                # 回收前的最大使用次数\n\n    # 池的耗尽策略\n    exhaustion_strategy=ExhaustionStrategy.WAIT,  # WAIT、FAIL_FAST 或 TEMPORARY\n\n    # 预热容器\n    enable_prewarming=True,                # 预热容器\n)\n\npool = create_pool_manager(\n    backend=\"docker\",\n    config=config,\n    lang=\"python\",\n)\n```\n\n### 池耗尽策略\n\n当所有容器都处于繁忙状态时，池可采用多种方式应对：\n\n#### 1. WAIT（默认）\n等待容器空闲：\n```python\nconfig = PoolConfig(\n    max_pool_size=5,\n    exhaustion_strategy=ExhaustionStrategy.WAIT,\n    acquisition_timeout=30.0,  # 最多等待30秒\n)\n```\n\n#### 2. FAIL_FAST\n立即抛出错误：\n```python\nconfig = PoolConfig(\n    max_pool_size=5,\n    exhaustion_strategy=ExhaustionStrategy.FAIL_FAST,\n)\n```\n\n#### 3. 
TEMPORARY\n在池外创建临时容器：\n```python\nconfig = PoolConfig(\n    max_pool_size=5,\n    exhaustion_strategy=ExhaustionStrategy.TEMPORARY,\n)\n```\n\n### 监控池统计信息\n\n```python\nfrom llm_sandbox.pool import create_pool_manager\n\npool = create_pool_manager(backend=\"docker\", lang=\"python\")\n\n# 获取池统计信息\nstats = pool.get_stats()\nprint(f\"总容器数: {stats['total_size']}\")\nprint(f\"空闲容器数: {stats['state_counts']['idle']}\")\nprint(f\"繁忙容器数: {stats['state_counts']['busy']}\")\n\npool.close()\n```\n\n### 利用池进行工件提取\n\n若需通过容器池捕获图表和可视化效果，可采用以下两种方法：\n\n```python\nfrom llm_sandbox import ArtifactSandboxSession\nfrom llm_sandbox.pool import create_pool_manager, PoolConfig\nimport base64\nfrom pathlib import Path\n\n# 创建带有预装可视化库的池\npool = create_pool_manager(\n    backend=\"docker\",\n    config=PoolConfig(max_pool_size=5, min_pool_size=2),\n    lang=\"python\",\n    libraries=[\"matplotlib\", \"numpy\"],\n)\n\ntry:\n    # 方法1：利用池参数（推荐用于保持API一致性）\n    with ArtifactSandboxSession(pool=pool, enable_plotting=True) as session:\n        result = session.run(\"\"\"\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nx = np.linspace(0, 10, 100)\ny = np.sin(x)\nplt.plot(x, y)\nplt.title('池化执行 - 正弦波')\nplt.show()\n        \"\"\")\n\n        # 保存生成的图表\n        for i, plot in enumerate(result.plots):\n            Path(f\"plot_{i}.{plot.format.value}\").write_bytes(\n                base64.b64decode(plot.content_base64)\n            )\n\n        print(f\"使用池化容器生成了{len(result.plots)}张图表\")\n\n    # 方法2：使用ArtifactPooledSandboxSession（显式类）\n    # 两种方法的效果完全相同\n    from llm_sandbox.pool import ArtifactPooledSandboxSession\n\n    with ArtifactPooledSandboxSession(pool_manager=pool, enable_plotting=True) as session:\n        result = session.run(\"print('功能相同，API不同')\")\n\nfinally:\n    pool.close()\n```\n\n### 示例\n\n请参阅`examples\u002F`目录，获取完整演示：\n- [pool_basic_demo.py](examples\u002Fpool_basic_demo.py) - 基础池使用与配置\n- [pool_concurrent_demo.py](examples\u002Fpool_concurrent_demo.py) - 并发执行模式\n- 
[pool_monitoring_demo.py](examples\u002Fpool_monitoring_demo.py) - 运行状态监控与生命周期管理\n- [pool_artifact_demo.py](examples\u002Fpool_artifact_demo.py) - 利用池进行工件提取（图表、CSV、混合工件）\n\n## 🤖 LLM 框架集成\n\n### LangChain 工具\n\n```python\nfrom langchain.tools import BaseTool\nfrom llm_sandbox import SandboxSession\n\nclass PythonSandboxTool(BaseTool):\n    name = \"python_sandbox\"\n    description = \"在安全的沙盒环境中执行 Python 代码\"\n\n    def _run(self, code: str) -> str:\n        with SandboxSession(lang=\"python\") as session:\n            result = session.run(code)\n            return result.stdout if result.exit_code == 0 else result.stderr\n```\n\n### 与 OpenAI 函数结合使用\n\n```python\nimport openai\nfrom llm_sandbox import SandboxSession\n\ndef execute_code(code: str, language: str = \"python\") -> str:\n    \"\"\"在安全的沙盒环境中执行代码\"\"\"\n    with SandboxSession(lang=language) as session:\n        result = session.run(code)\n        return result.stdout if result.exit_code == 0 else result.stderr\n\n# 注册为 OpenAI 函数\nfunctions = [\n    {\n        \"name\": \"execute_code\",\n        \"description\": \"在安全的沙盒环境中执行代码\",\n        \"parameters\": {\n            \"type\": \"object\",\n            \"properties\": {\n                \"code\": {\"type\": \"string\", \"description\": \"要执行的代码\"},\n                \"language\": {\"type\": \"string\", \"enum\": [\"python\", \"javascript\", \"java\", \"cpp\", \"go\", \"r\"]}\n            },\n            \"required\": [\"code\"]\n        }\n    }\n]\n```\n\n## 🔌 模型上下文协议 (MCP) 服务器\n\nLLM 沙盒提供了一个 [模型上下文协议 (MCP)](https:\u002F\u002Fmodelcontextprotocol.io\u002F) 服务器，该服务器使像 Claude Desktop 这样的 AI 助手能够在安全的沙盒环境中安全地执行代码。通过这一集成，LLMs 可以直接运行代码，并自动捕获可视化结果，同时支持多种语言。\n\n### 特点\n\n- **安全的代码执行**：在隔离的容器中执行代码，使用您偏好的后端\n- **多语言支持**：可运行 Python、JavaScript、Java、C++、Go、R 和 Ruby 代码\n- **自动可视化捕获**：自动捕获并返回图表和可视化结果\n- **库管理**：即时安装包和依赖项\n- **灵活的后端支持**：可选择 Docker、Podman 或 Kubernetes 后端\n\n### 安装\n\n使用您偏好的后端，安装支持 MCP 的 LLM 沙盒：\n\n```bash\n# 对于 Docker 后端\npip install 
'llm-sandbox[mcp-docker]'\n\n# 对于 Podman 后端\npip install 'llm-sandbox[mcp-podman]'\n\n# 对于 Kubernetes 后端\npip install 'llm-sandbox[mcp-k8s]'\n```\n\n### 配置\n\n将以下配置添加到您的 MCP 客户端（例如，适用于 Claude Desktop 的 `claude_desktop_config.json`）：\n\n```json\n{\n  \"mcpServers\": {\n    \"llm-sandbox\": {\n      \"command\": \"python3\",\n      \"args\": [\"-m\", \"llm_sandbox.mcp_server.server\"]\n    }\n  }\n}\n```\n\n#### 后端特定配置\n\n对于特定的后端，您需要将 `BACKEND` 环境变量设置为您希望使用的后端。根据您所使用的后端，可能还需要设置其他环境变量。例如，如果系统中没有默认的容器套接字地址，您可能需要将 `DOCKER_HOST` 环境变量设置为实际要使用的主机。\n\n**Docker（默认）：**\n```json\n{\n  \"mcpServers\": {\n    \"llm-sandbox\": {\n      \"command\": \"python3\",\n      \"args\": [\"-m\", \"llm_sandbox.mcp_server.server\"],\n      \"env\": {\n        \"BACKEND\": \"docker\",\n        \"DOCKER_HOST\": \"unix:\u002F\u002F\u002Fvar\u002Frun\u002Fdocker.sock\" \u002F\u002F 将此地址更改为实际使用的主机\n      }\n    }\n  }\n}\n```\n\n**Podman：**\n```json\n{\n  \"mcpServers\": {\n    \"llm-sandbox\": {\n      \"command\": \"python3\",\n      \"args\": [\"-m\", \"llm_sandbox.mcp_server.server\"],\n      \"env\": {\n        \"BACKEND\": \"podman\",\n        \"DOCKER_HOST\": \"unix:\u002F\u002F\u002Fvar\u002Frun\u002Fpodman\u002Fpodman.sock\" \u002F\u002F 将此地址更改为实际使用的主机\n      }\n    }\n  }\n}\n```\n\n对于 Kubernetes，您可能需要将 `KUBECONFIG` 环境变量设置为您的 kubeconfig 文件路径。\n\n**Kubernetes：**\n```json\n{\n  \"mcpServers\": {\n    \"llm-sandbox\": {\n      \"command\": \"python3\",\n      \"args\": [\"-m\", \"llm_sandbox.mcp_server.server\"],\n      \"env\": {\n        \"BACKEND\": \"kubernetes\",\n        \"KUBECONFIG\": \"\u002Fpath\u002Fto\u002Fkubeconfig\" \u002F\u002F 将此路径更改为实际的 kubeconfig 文件路径\n      }\n    }\n  }\n}\n```\n\n### 可用工具\n\nMCP 服务器提供了以下工具：\n\n- **`execute_code`**：在安全的沙盒环境中执行代码，并自动捕获可视化结果\n- **`get_supported_languages`**：获取支持的编程语言列表\n- **`get_language_details`**：获取特定语言的详细信息\n\n### 使用示例\n\n配置完成后，您可以请求 AI 助手运行代码，它会自动使用 LLM 沙盒的 MCP 服务器：\n\n```text\n“创建一个散点图，展示 x 和 y 数据点之间的关系，使用 
matplotlib”\n```\n\n助手将在安全的沙盒中执行 Python 代码，并自动捕获生成的图表或可视化结果。\n\n## 🏗️ 架构\n\n```mermaid\ngraph LR\n    A[LLM 客户端] --> B[LLM 沙盒]\n    B --> C[容器后端]\n\n    A1[OpenAI] --> A\n    A2[Anthropic] --> A\n    A3[本地 LLMs] --> A\n    A4[LangChain] --> A\n    A5[LangGraph] --> A\n    A6[LlamaIndex] --> A\n    A7[MCP 客户端] --> A\n\n    C --> C1[Docker]\n    C --> C2[Kubernetes]\n    C --> C3[Podman]\n\n    style A fill:#e1f5fe\n    style B fill:#f3e5f5\n    style C fill:#e8f5e8\n    style A1 fill:#fff3e0\n    style A2 fill:#fff3e0\n    style A3 fill:#fff3e0\n    style A4 fill:#fff3e0\n    style A5 fill:#fff3e0\n    style A6 fill:#fff3e0\n    style A7 fill:#fff3e0\n    style C1 fill:#e0f2f1\n    style C2 fill:#e0f2f1\n    style C3 fill:#e0f2f1\n```\n\n## 📚 文档\n\n- **[完整文档](https:\u002F\u002Fvndee.github.io\u002Fllm-sandbox\u002F)** - 全面的文档\n- **[入门指南](https:\u002F\u002Fvndee.github.io\u002Fllm-sandbox\u002Fgetting-started\u002F)** - 安装与基本使用\n- **[配置指南](https:\u002F\u002Fvndee.github.io\u002Fllm-sandbox\u002Fconfiguration\u002F)** - 详细的配置选项\n- **[安全性](https:\u002F\u002Fvndee.github.io\u002Fllm-sandbox\u002Fsecurity\u002F)** - 安全策略与最佳实践\n- **[后端](https:\u002F\u002Fvndee.github.io\u002Fllm-sandbox\u002Fbackends\u002F)** - 容器后端的详细信息\n- **[语言](https:\u002F\u002Fvndee.github.io\u002Fllm-sandbox\u002Flanguages\u002F)** - 支持的编程语言\n- **[集成](https:\u002F\u002Fvndee.github.io\u002Fllm-sandbox\u002Fintegrations\u002F)** - LLM 框架的集成\n- **[API 参考](https:\u002F\u002Fvndee.github.io\u002Fllm-sandbox\u002Fapi-reference\u002F)** - 完整的 API 文档\n- **[示例](https:\u002F\u002Fvndee.github.io\u002Fllm-sandbox\u002Fexamples\u002F)** - 真实世界的应用示例\n\n## 🤝 贡献\n\n我们欢迎您的贡献！请参阅我们的 [贡献指南](https:\u002F\u002Fvndee.github.io\u002Fllm-sandbox\u002Fcontributing\u002F) 以了解详细信息。\n\n### 开发设置\n\n```bash\n# 克隆仓库\ngit clone https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox.git\ncd llm-sandbox\n\n# 以开发模式安装\nmake install\n\n# 运行预提交钩子\nuv run pre-commit run -a\n\n# 运行测试\nmake test\n```\n\n## 📄 许可证\n\n本项目采用 MIT 许可证——详情请参阅 
[LICENSE](LICENSE) 文件。\n\n## 🌟 星星历史\n\n如果您觉得 LLM Sandbox 有用，请考虑在 GitHub 上为它点个星！\n\n## 📞 支持与社区\n\n- **GitHub 问题**：[报告错误或请求功能](https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fissues)\n- **GitHub 讨论区**：[加入社区](https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fdiscussions)\n- **PyPI**：[pypi.org\u002Fproject\u002Fllm-sandbox](https:\u002F\u002Fpypi.org\u002Fproject\u002Fllm-sandbox\u002F)\n- **文档**：[vndee.github.io\u002Fllm-sandbox](https:\u002F\u002Fvndee.github.io\u002Fllm-sandbox\u002F)\n\n## 贡献者\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fgraphs\u002Fcontributors\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvndee_llm-sandbox_readme_a521af427117.png\" \u002F>\n\u003C\u002Fa>\n\n## 星星历史\n\n[![星星历史图表](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvndee_llm-sandbox_readme_a5b3c0c8e48e.png)](https:\u002F\u002Fwww.star-history.com\u002F#vndee\u002Fllm-sandbox&Date)","# llm-sandbox 快速上手指南\n\n## 环境准备\n\n- **操作系统**：Linux \u002F macOS \u002F Windows（需支持 Docker Desktop）\n- **Python**：≥ 3.8\n- **容器运行时**（任选其一）  \n  - Docker ≥ 20.10  \n  - Podman ≥ 4.0  \n  - Kubernetes 集群（可选）\n\n> 国内用户建议使用 [阿里云镜像加速器](https:\u002F\u002Fcr.console.aliyun.com) 或 [DaoCloud 镜像站](https:\u002F\u002Fdocker.m.daocloud.io) 加速镜像拉取。\n\n## 安装步骤\n\n1. **安装容器运行时**  \n   - Docker（推荐）：  \n     ```bash\n     curl -fsSL https:\u002F\u002Fget.docker.com | bash -s docker --mirror Aliyun\n     sudo systemctl enable --now docker\n     ```\n   - Podman（rootless）：  \n     ```bash\n     sudo apt install podman  # Ubuntu\u002FDebian\n     ```\n\n2. **安装 llm-sandbox**  \n   ```bash\n   # 基础版本\n   pip install llm-sandbox -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n\n   # 带 Docker 后端\n   pip install 'llm-sandbox[docker]' -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n   ```\n\n## 基本使用\n\n### 1. 
运行 Python 代码\n```python\nfrom llm_sandbox import SandboxSession\n\nwith SandboxSession(lang=\"python\") as session:\n    result = session.run(\"\"\"\nprint(\"你好，LLM Sandbox！\")\nprint(\"我在隔离容器里安全运行。\")\n    \"\"\")\n    print(result.stdout)\n```\n\n### 2. 自动安装依赖并绘图\n```python\nimport base64\n\nfrom llm_sandbox import ArtifactSandboxSession\n\nwith ArtifactSandboxSession(lang=\"python\") as session:\n    result = session.run(\"\"\"\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nx = np.linspace(0, 2*np.pi, 100)\nplt.plot(x, np.sin(x))\nplt.title(\"正弦曲线\")\nplt.savefig(\"sin.png\")\n    \"\"\", libraries=[\"matplotlib\", \"numpy\"])\n\n    # 保存生成的图片（content_base64 是 base64 编码字符串，写入前需先解码）\n    for i, plot in enumerate(result.plots):\n        with open(f\"plot_{i}.png\", \"wb\") as f:\n            f.write(base64.b64decode(plot.content_base64))\n```\n\n### 3. 交互式会话（保持状态）\n```python\nfrom llm_sandbox import InteractiveSandboxSession\n\nwith InteractiveSandboxSession(lang=\"python\") as session:\n    session.run(\"a = 10\")\n    result = session.run(\"print(a * 2)\")  # 输出 20\n```\n\n完成！现在你已经可以在隔离环境中安全运行 LLM 生成的代码了。","一家 5 人规模的初创公司正在开发一款 AI 数据洞察助手，用户上传 CSV 后，系统让 LLM 自动生成并执行 Python 代码，返回图表和结论。\n\n### 没有 llm-sandbox 时\n- 直接在宿主机 `exec()` LLM 生成的代码，曾有一次恶意 `os.system(\"curl ... 
| bash\")` 差点把整台服务器变成矿机， CTO 连夜回滚救火。  \n- 每次用户请求都要新建一个 venv，安装 pandas、matplotlib 等依赖，冷启动 30 秒起步，用户抱怨“上传完文件要等半分钟才出图”。  \n- 不同用户任务并发跑在同一台 4 vCPU 机器上，一次内存泄漏把 OOM Killer 触发，导致其他用户会话全部 502。  \n- 为了隔离，运维用 Docker 手写临时容器，结果镜像体积 2 GB，CI\u002FCD 每次推送 15 分钟，开发迭代被拖慢。  \n\n### 使用 llm-sandbox 后\n- 代码被扔进一次性沙箱容器，默认无网络、无特权，即使 LLM 写出 `import os; os.system(\"rm -rf \u002F\")` 也只是在容器里自毁，宿主机安然无恙。  \n- 开启容器池化后，预热的 Python 镜像秒级复用，依赖已缓存，平均响应从 30 s 降到 2 s，用户体验像本地脚本一样顺滑。  \n- 通过 `memory=\"512m\"`、`cpu=0.5` 限制，每个任务独占配额，并发 20 个用户也不会互相挤爆，系统稳定性 SLA 提升到 99.9%。  \n- llm-sandbox 自动拉取轻量基础镜像，按需增量安装依赖，镜像体积降到 300 MB，CI 构建时间缩短到 3 分钟，开发每天可以多迭代 3-4 次。  \n\n一句话总结：llm-sandbox 让这家初创公司在不增加运维人手的情况下，既守住了安全底线，又把 AI 代码执行体验从“胆战心惊”变成了“丝滑可控”。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvndee_llm-sandbox_24c72484.png","vndee","Duy Huynh","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fvndee_f95de0ee.jpg","SWE - AI\u002FML & Data","Looking for the next cool idea","Vietnam","vndee.huynh@gmail.com","DeeHuynh99","blog.duy.dev","https:\u002F\u002Fgithub.com\u002Fvndee",[87,91,95,99],{"name":88,"color":89,"percentage":90},"Python","#3572A5",99.5,{"name":92,"color":93,"percentage":94},"Dockerfile","#384d54",0.4,{"name":96,"color":97,"percentage":98},"Makefile","#427819",0.2,{"name":100,"color":101,"percentage":102},"Shell","#89e051",0,1009,96,"2026-04-10T15:06:07","MIT","Linux, macOS, Windows","未说明",{"notes":110,"python":111,"dependencies":112},"必须预先安装 Docker、Kubernetes 或 Podman 作为容器后端；首次运行会自动拉取对应语言的官方镜像（如 python:3.9.19-bullseye），镜像大小数百 MB；支持自定义 Dockerfile 与镜像；支持远程 Docker 主机；支持 TLS 安全连接；支持容器池化以提升性能；支持文件双向拷贝；支持自定义安全策略与资源限制","3.8+",[113,114,115],"docker","kubernetes","podman",[15],[118,119,120,68],"code-generation","code-interpreter","large-language-models",null,"2026-03-27T02:49:30.150509","2026-04-11T18:32:45.707124",[125,130,135,140,145,150],{"id":126,"question_zh":127,"answer_zh":128,"source_url":129},6189,"自定义镜像里已经预装了 Python 库，还需要在 SandboxSession 里写 libraries 参数吗？","不需要。只要镜像里已经通过 Dockerfile 安装好依赖，启动 
SandboxSession 时无需再写 libraries 参数。示例：\n```python\nwith SandboxSession(lang=\"python\", image=\"fast-python\") as session:\n    result = session.run(\"import pandas as pd; ...\")\n```\n如果仍想显式声明，也不会报错，但会重复安装，浪费时间。","https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fissues\u002F79",{"id":131,"question_zh":132,"answer_zh":133,"source_url":134},6190,"Kubernetes 池管理器在关闭时误删 default 命名空间的 Pod，如何解决？","这是 0.3.33 之前版本的已知 Bug，已在 0.3.33 修复。请升级到 ≥0.3.33，并确保创建池时参数名正确：\n```python\npool = create_pool_manager(\n    backend=\"kubernetes\",\n    kube_namespace=\"llm-sandbox\",   # 注意参数名是 kube_namespace\n    ...\n)\n```\n升级后 `pool.close()` 会只删除指定命名空间的 Pod。","https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fissues\u002F134",{"id":136,"question_zh":137,"answer_zh":138,"source_url":139},6191,"使用 MCP 执行代码后 Docker 镜像被覆盖，如何禁止自动提交容器？","在 0.3.19 版本已修复。升级后默认 `commit_container=False`，镜像不会被修改。如仍想手动控制，可在启动 MCP 时加环境变量：\n```json\n{\n  \"llm-sandbox\": {\n    \"command\": \"uv\",\n    \"args\": [\"run\", \"python\", \"-m\", \"llm_sandbox.mcp_server.server\"],\n    \"env\": {\"BACKEND\": \"docker\", \"COMMIT_CONTAINER\": \"false\"}\n  }\n}\n```","https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fissues\u002F95",{"id":141,"question_zh":142,"answer_zh":143,"source_url":144},6192,"如何在 llm-sandbox 中使用 R 语言？","从 0.3.13 开始已内置 R 沙箱。可直接使用：\n```python\nfrom llm_sandbox import SandboxSession\nwith SandboxSession(lang=\"r\") as session:\n    session.run(\"library(dplyr); print('Hello R!')\")\n```\n镜像已预装常用包（包括 Bioconductor）。如需自定义镜像，可参考官方 Dockerfile：\nhttps:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fblob\u002Fmain\u002Fdockers\u002Fr451.Dockerfile","https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fissues\u002F73",{"id":146,"question_zh":147,"answer_zh":148,"source_url":149},6193,"Kubernetes 模式下 Pod 报 “mkdir: cannot create directory ‘\u002Fsandbox’: Permission denied” 怎么办？","容器启用了 `readOnlyRootFilesystem: true` 且未挂载可写卷。解决方式二选一：\n1. 
在 Pod 模板里给 `\u002Fsandbox` 挂一个 emptyDir：\n   ```yaml\n   volumeMounts:\n     - name: sandbox\n       mountPath: \u002Fsandbox\n   volumes:\n     - name: sandbox\n       emptyDir: {}\n   ```\n2. 关闭 `readOnlyRootFilesystem`（不推荐）。\n官方镜像已默认包含挂载配置，使用最新版即可避免此问题。","https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fissues\u002F121",{"id":151,"question_zh":152,"answer_zh":153,"source_url":144},6194,"可以在 R 沙箱里动态安装 CRAN 或 Bioconductor 包吗？","可以。在代码里直接调用 `install.packages()` 或 `BiocManager::install()` 即可，沙箱拥有网络权限。示例：\n```r\ninstall.packages(\"ggplot2\")\nBiocManager::install(\"edgeR\")\n```\n注意首次安装会稍慢，后续运行若使用同一镜像则无需重复安装。",[155,160,165,170,175,180,185,190,195,200,205,210,215,220,225,230,235,240,245,250],{"id":156,"version":157,"summary_zh":158,"released_at":159},204340,"0.3.37","## What's Changed\r\n* fix(mcp): clear plots before execution and return latest plot by @arubinst in https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fpull\u002F152\r\n\r\n## New Contributors\r\n* @arubinst made their first contribution in https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fpull\u002F152\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fcompare\u002F0.3.36...0.3.37","2026-03-02T06:44:59",{"id":161,"version":162,"summary_zh":163,"released_at":164},204341,"0.3.36","## What's Changed\r\n* feat: add real-time output streaming callbacks by @vndee in https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fpull\u002F150\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fcompare\u002F0.3.35...0.3.36","2026-02-23T09:43:33",{"id":166,"version":167,"summary_zh":168,"released_at":169},204342,"0.3.35","## What's Changed\r\n* Fix command injection and path traversal vulnerabilities by @vndee in https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fpull\u002F148\r\n\r\n\r\n**Full Changelog**: 
https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fcompare\u002F0.3.34...0.3.35","2026-02-14T02:22:00",{"id":171,"version":172,"summary_zh":173,"released_at":174},204343,"0.3.34","## What's Changed\r\n* Add comprehensive CLAUDE.md documentation for AI assistants by @vndee in https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fpull\u002F143\r\n* Add configurable encoding error handling for command output by @ret2libc in https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fpull\u002F145\r\n\r\n## New Contributors\r\n* @ret2libc made their first contribution in https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fpull\u002F145\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fcompare\u002F0.3.33...0.3.34","2026-01-28T04:50:30",{"id":176,"version":177,"summary_zh":178,"released_at":179},204344,"0.3.33","## What's Changed\r\n* Fix: Pool manager namespace parameter bug (issue #134) by @vndee in https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fpull\u002F142\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fcompare\u002F0.3.32...0.3.33","2026-01-15T14:27:56",{"id":181,"version":182,"summary_zh":183,"released_at":184},204345,"0.3.32","## What's Changed\r\n* fix(pool): use venv Python for pooled sessions with libraries by @m7mdhka in https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fpull\u002F137\r\n* Make CRAN repository URL configurable via R_REPO environment variable by @vndee in https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fpull\u002F141\r\n\r\n## New Contributors\r\n* @m7mdhka made their first contribution in https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fpull\u002F137\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fcompare\u002F0.3.31...0.3.32","2026-01-15T13:45:25",{"id":186,"version":187,"summary_zh":188,"released_at":189},204346,"0.3.31","## What's 
Changed\r\n* Fix AttributeError when using string language in pool managers by @Copilot in https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fpull\u002F136\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fcompare\u002F0.3.30...0.3.31","2025-12-06T04:44:48",{"id":191,"version":192,"summary_zh":193,"released_at":194},204347,"0.3.30","## What's Changed\r\n* Fix lazy imports for optional backend dependencies in interactive.py by @Copilot in https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fpull\u002F132\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fcompare\u002F0.3.29...0.3.30","2025-12-05T03:05:43",{"id":196,"version":197,"summary_zh":198,"released_at":199},204348,"0.3.29","## What's Changed\r\n* Fix skip_environment_setup to use system Python instead of non-existent venv by @Copilot in https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fpull\u002F129\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fcompare\u002F0.3.28...0.3.29","2025-12-04T13:33:14",{"id":201,"version":202,"summary_zh":203,"released_at":204},204349,"0.3.28","## What's Changed\r\n* Fix invalid docker tag name when using dockerfile in current directory by @Copilot in https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fpull\u002F130\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fcompare\u002F0.3.27...0.3.28","2025-12-04T12:51:52",{"id":206,"version":207,"summary_zh":208,"released_at":209},204350,"0.3.27","## What's Changed\r\n* Enhance documentation and functionality for read-only root filesystem in Kubernetes by @vndee in https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fpull\u002F125\r\n\r\n\r\n**Full Changelog**: 
https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fcompare\u002F0.3.26...0.3.27","2025-11-28T15:52:46",{"id":211,"version":212,"summary_zh":213,"released_at":214},204351,"0.3.26","## What's Changed\r\n* Fix critical pool manager initialization bugs and improve thread safety by @Copilot in https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fpull\u002F116\r\n* Add DeepWiki badge to README by @vndee in https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fpull\u002F120\r\n* Fix all lint and type issues from make check by @Copilot in https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fpull\u002F124\r\n* Add container pooling feature for improved performance by @vndee in https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fpull\u002F115\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fcompare\u002F0.3.25...0.3.26","2025-11-28T14:09:02",{"id":216,"version":217,"summary_zh":218,"released_at":219},204352,"0.3.25","## What's Changed\r\n* [session] feat: add interactive IPython session by @Wangmerlyn in https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fpull\u002F117\r\n* Enhance interactive session documentation and examples by @vndee in https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fpull\u002F118\r\n* Enhance interactive session functionality and documentation by @vndee in https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fpull\u002F119\r\n\r\n## New Contributors\r\n* @Wangmerlyn made their first contribution in https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fpull\u002F117\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fcompare\u002F0.3.24...0.3.25","2025-11-14T07:51:44",{"id":221,"version":222,"summary_zh":223,"released_at":224},204353,"0.3.24","**Full Changelog**: 
https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fcompare\u002F0.3.23...0.3.24","2025-10-28T02:28:10",{"id":226,"version":227,"summary_zh":228,"released_at":229},204354,"0.3.23","## What's Changed\r\n* ✨ Add comprehensive Copilot instructions for repository by @Copilot in https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fpull\u002F110\r\n* Update README.md by @vndee in https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fpull\u002F111\r\n* Fix R ggplot2 S3 method error by implementing lazy initialization of plot hooks by @Copilot in https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fpull\u002F112\r\n\r\n## New Contributors\r\n* @Copilot made their first contribution in https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fpull\u002F110\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fcompare\u002F0.3.22...0.3.23","2025-10-21T04:37:19",{"id":231,"version":232,"summary_zh":233,"released_at":234},204355,"0.3.22","## What's Changed\r\n* test: add missing MCP dependencies and make MCP-related tests pass. 
by @zhyg in https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fpull\u002F107\r\n* Add plot clearing functionality to ArtifactSandboxSession by @vndee in https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fpull\u002F106\r\n\r\n## New Contributors\r\n* @zhyg made their first contribution in https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fpull\u002F107\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fcompare\u002F0.3.21...0.3.22","2025-10-19T10:16:37",{"id":236,"version":237,"summary_zh":238,"released_at":239},204356,"0.3.21","## What's Changed\r\n* Fix ArtifactSandboxSession timeout to use config defaults by @vndee in https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fpull\u002F101\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fcompare\u002F0.3.20...0.3.21","2025-10-07T04:27:04",{"id":241,"version":242,"summary_zh":243,"released_at":244},204357,"0.3.20","## What's Changed\r\n* Add NAMESPACE environment variable for Kubernetes pod customization by @vndee in https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fpull\u002F99\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fcompare\u002F0.3.19...0.3.20","2025-10-01T13:03:28",{"id":246,"version":247,"summary_zh":248,"released_at":249},204358,"0.3.19","## What's Changed\r\n* Fix commit_container default value and add environment variable control by @vndee in https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fpull\u002F98\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fcompare\u002F0.3.18...0.3.19","2025-10-01T10:43:18",{"id":251,"version":252,"summary_zh":253,"released_at":254},204359,"0.3.18","## What's Changed\r\n* Update README.md by @vndee in https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fpull\u002F89\r\n* Fix: More posix path issues when preparing image by @rhiza-fr 
in https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fpull\u002F90\r\n* Add `skip_environment_setup` option for faster deployments by @vndee in https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fpull\u002F93\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fvndee\u002Fllm-sandbox\u002Fcompare\u002F0.3.17...0.3.18","2025-08-23T07:52:47"]