[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-beam-cloud--beta9":3,"tool-beam-cloud--beta9":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",160784,2,"2026-04-19T11:32:54",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",109154,"2026-04-18T11:18:24",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":77,"owner_twitter":78,"owner_website":79,"owner_url":80,"languages":81,"stars":114,"forks":115,"last_commit_at":116,"license":117,"difficulty_score":32,"env_os":118,"env_gpu":119,"env_ram":120,"env_deps":121,"category_tags":126,"github_topics":127,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":146,"updated_at":147,"faqs":148,"releases":177},9817,"beam-cloud\u002Fbeta9","beta9","Ultrafast serverless GPU inference, sandboxes, and background jobs","beta9 是一个专为 AI 工作负载设计的开源无服务器运行时，旨在让开发者以极简的 Python 代码轻松部署和扩展人工智能应用。它主要解决了传统 AI 部署中基础设施配置复杂、资源管理繁琐以及难以弹性伸缩的痛点，让用户无需关心底层服务器运维，即可快速将模型转化为生产服务。\n\n这款工具特别适合 AI 工程师、数据科学家以及需要构建高并发后端服务的软件开发人员。无论是运行大型语言模型推理、处理批量后台任务，还是创建隔离的代码沙箱，beta9 都能提供流畅的开发体验。其核心技术亮点包括秒级容器启动速度、支持自动扩缩容至零（闲置时不消耗资源）、原生 GPU 加速（兼容 H100、4090 等高端显卡），以及内置的分布式存储挂载功能。通过简单的装饰器语法，用户即可定义并行任务队列或自动伸缩的 API 端点，甚至能直接替代传统的 Celery 消息队列系统。作为 Beam 云平台的开源引擎，beta9 
既支持在云端一键托管，也允许用户在自有环境中灵活部署，是提升 AI 工程化效率的得力助手。","\u003Cdiv align=\"center\">\n\u003Cp align=\"center\">\n\u003Cimg alt=\"Logo\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbeam-cloud_beta9_readme_53d71845b7f9.png\" width=\"30%\">\n\u003Cimg alt=\"Logo\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbeam-cloud_beta9_readme_ee398c5118df.png\" width=\"30%\">\n\u003C\u002Fp>\n\n## Run AI Workloads at Scale\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1jSDyYY7FY3Y3jJlCzkmHlH8vTyF-TEmB?usp=sharing\">\n    \u003Cimg alt=\"Colab\" src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fstargazers\">\n    \u003Cimg alt=\"⭐ Star the Repo\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fbeam-cloud\u002Fbeta9\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fdocs.beam.cloud\">\n    \u003Cimg alt=\"Documentation\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdocs-quickstart-purple\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fjoin.slack.com\u002Ft\u002Fbeam-cloud\u002Fshared_invite\u002Fzt-39hbkt8ty-CTVv4NsgLoYArjWaVkwcFw\">\n    \u003Cimg alt=\"Join Slack\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FBeam-Join%20Slack-orange?logo=slack\">\n  \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Ftwitter.com\u002Fbeam_cloud\">\n    \u003Cimg alt=\"Twitter\" src=\"https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Ffollow\u002Fbeam_cloud.svg?style=social&logo=twitter\">\n  \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9?tab=AGPL-3.0-1-ov-file\">\n    \u003Cimg alt=\"AGPL\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-AGPL-green\">\n  
\u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003C\u002Fdiv>\n\n**[Beam](https:\u002F\u002Fbeam.cloud?utm_source=github_readme)** is a fast, open-source runtime for serverless AI workloads. It gives you a Pythonic interface to deploy and scale AI applications with zero infrastructure overhead.\n\n![Watch the demo](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbeam-cloud_beta9_readme_8922621d4c32.gif)\n\n## ✨ Features\n\n- **Fast Image Builds**: Launch containers in under a second using a custom container runtime\n- **Parallelization and Concurrency**: Fan out workloads to 100s of containers\n- **First-Class Developer Experience**: Hot-reloading, webhooks, and scheduled jobs\n- **Scale-to-Zero**: Workloads are serverless by default\n- **Volume Storage**: Mount distributed storage volumes\n- **GPU Support**: Run on our cloud (4090s, H100s, and more) or bring your own GPUs\n\n## 📦 Installation\n\n```shell\npip install beam-client\n```\n\n## ⚡️ Quickstart\n\n1. Create an account [here](https:\u002F\u002Fbeam.cloud?utm_source=github_readme)\n2. 
Follow our [Getting Started Guide](https:\u002F\u002Fplatform.beam.cloud\u002Fonboarding?utm_source=github_readme)\n\n## Creating a sandbox\n\nSpin up isolated containers to run LLM-generated code:\n\n```python\nfrom beam import Image, Sandbox\n\n\nsandbox = Sandbox(image=Image()).create()\nresponse = sandbox.process.run_code(\"print('I am running remotely')\")\n\nprint(response.result)\n```\n\n## Deploy a serverless inference endpoint\n\nCreate an autoscaling endpoint for your custom model:\n\n```python\nfrom beam import Image, endpoint\nfrom beam import QueueDepthAutoscaler\n\n@endpoint(\n    image=Image(python_version=\"python3.11\"),\n    gpu=\"A10G\",\n    cpu=2,\n    memory=\"16Gi\",\n    autoscaler=QueueDepthAutoscaler(max_containers=5, tasks_per_container=30)\n)\ndef handler():\n    return {\"label\": \"cat\", \"confidence\": 0.97}\n```\n\n## Run background tasks\n\nSchedule resilient background tasks (or replace your Celery queue) by adding a simple decorator:\n\n```python\nfrom beam import Image, TaskPolicy, schema, task_queue\n\n\nclass Input(schema.Schema):\n    image_url = schema.String()\n\n\n@task_queue(\n    name=\"image-processor\",\n    image=Image(python_version=\"python3.11\"),\n    cpu=1,\n    memory=1024,\n    inputs=Input,\n    task_policy=TaskPolicy(max_retries=3),\n)\ndef my_background_task(input: Input, *, context):\n    image_url = input.image_url\n    print(f\"Processing image: {image_url}\")\n    return {\"image_url\": image_url}\n\n\nif __name__ == \"__main__\":\n    # Invoke a background task from your app (without deploying it)\n    my_background_task.put(image_url=\"https:\u002F\u002Fexample.com\u002Fimage.jpg\")\n\n    # You can also deploy this behind a versioned endpoint with:\n    # beam deploy app.py:my_background_task --name image-processor\n```\n\n> ## Self-Hosting vs Cloud\n>\n> Beta9 is the open-source engine powering [Beam](https:\u002F\u002Fbeam.cloud), our fully-managed cloud platform. 
You can self-host Beta9 for free or choose managed cloud hosting through Beam.\n\n## 👋 Contributing\n\nWe welcome contributions big or small. These are the most helpful things for us:\n\n- Submit a [feature request](https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fissues\u002Fnew?assignees=&labels=&projects=&template=feature-request.md&title=) or [bug report](https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fissues\u002Fnew?assignees=&labels=&projects=&template=bug-report.md&title=)\n- Open a PR with a new feature or improvement\n\n## ❤️ Thanks to Our Contributors\n\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fgraphs\u002Fcontributors\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbeam-cloud_beta9_readme_e34851d2fa74.png\" \u002F>\n\u003C\u002Fa>\n","\u003Cdiv align=\"center\">\n\u003Cp align=\"center\">\n\u003Cimg alt=\"Logo\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbeam-cloud_beta9_readme_53d71845b7f9.png\" width=\"30%\">\n\u003Cimg alt=\"Logo\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbeam-cloud_beta9_readme_ee398c5118df.png\" width=\"30%\">\n\u003C\u002Fp>\n\n## 大规模运行 AI 工作负载\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1jSDyYY7FY3Y3jJlCzkmHlH8vTyF-TEmB?usp=sharing\">\n    \u003Cimg alt=\"Colab\" src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fstargazers\">\n    \u003Cimg alt=\"⭐ Star the Repo\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fbeam-cloud\u002Fbeta9\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fdocs.beam.cloud\">\n    \u003Cimg alt=\"Documentation\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdocs-quickstart-purple\">\n  \u003C\u002Fa>\n  \u003Ca 
href=\"https:\u002F\u002Fjoin.slack.com\u002Ft\u002Fbeam-cloud\u002Fshared_invite\u002Fzt-39hbkt8ty-CTVv4NsgLoYArjWaVkwcFw\">\n    \u003Cimg alt=\"Join Slack\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FBeam-Join%20Slack-orange?logo=slack\">\n  \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Ftwitter.com\u002Fbeam_cloud\">\n    \u003Cimg alt=\"Twitter\" src=\"https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Ffollow\u002Fbeam_cloud.svg?style=social&logo=twitter\">\n  \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9?tab=AGPL-3.0-1-ov-file\">\n    \u003Cimg alt=\"AGPL\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-AGPL-green\">\n  \u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003C\u002Fdiv>\n\n**[Beam](https:\u002F\u002Fbeam.cloud?utm_source=github_readme)** 是一个用于无服务器 AI 工作负载的快速、开源运行时。它为您提供了一个 Python 式的接口，可在零基础设施开销的情况下部署和扩展 AI 应用程序。\n\n![观看演示](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbeam-cloud_beta9_readme_8922621d4c32.gif)\n\n## ✨ 特性\n\n- **快速镜像构建**：使用自定义容器运行时，在不到一秒钟内启动容器\n- **并行化与并发**：将工作负载分发到数百个容器\n- **一流的开发者体验**：热重载、Webhook 和定时任务\n- **按需缩放至零**：工作负载默认为无服务器模式\n- **卷存储**：挂载分布式存储卷\n- **GPU 支持**：在我们的云端（4090、H100 等）运行，或使用您自己的 GPU\n\n## 📦 安装\n\n```shell\npip install beam-client\n```\n\n## ⚡️ 快速入门\n\n1. 在 [这里](https:\u002F\u002Fbeam.cloud?utm_source=github_readme) 创建账户\n2. 
按照我们的 [入门指南](https:\u002F\u002Fplatform.beam.cloud\u002Fonboarding?utm_source=github_readme) 操作\n\n## 创建沙盒\n\n启动隔离的容器来运行 LLM 生成的代码：\n\n```python\nfrom beam import Image, Sandbox\n\n\nsandbox = Sandbox(image=Image()).create()\nresponse = sandbox.process.run_code(\"print('I am running remotely')\")\n\nprint(response.result)\n```\n\n## 部署无服务器推理端点\n\n为您的自定义模型创建一个自动伸缩端点：\n\n```python\nfrom beam import Image, endpoint\nfrom beam import QueueDepthAutoscaler\n\n@endpoint(\n    image=Image(python_version=\"python3.11\"),\n    gpu=\"A10G\",\n    cpu=2,\n    memory=\"16Gi\",\n    autoscaler=QueueDepthAutoscaler(max_containers=5, tasks_per_container=30)\n)\ndef handler():\n    return {\"label\": \"cat\", \"confidence\": 0.97}\n```\n\n## 运行后台任务\n\n通过添加一个简单的装饰器来调度可靠的后台任务（或替代您的 Celery 队列）：\n\n```python\nfrom beam import Image, TaskPolicy, schema, task_queue\n\n\nclass Input(schema.Schema):\n    image_url = schema.String()\n\n\n@task_queue(\n    name=\"image-processor\",\n    image=Image(python_version=\"python3.11\"),\n    cpu=1,\n    memory=1024,\n    inputs=Input,\n    task_policy=TaskPolicy(max_retries=3),\n)\ndef my_background_task(input: Input, *, context):\n    image_url = input.image_url\n    print(f\"Processing image: {image_url}\")\n    return {\"image_url\": image_url}\n\n\nif __name__ == \"__main__\":\n    # 从您的应用中调用后台任务（无需部署）\n    my_background_task.put(image_url=\"https:\u002F\u002Fexample.com\u002Fimage.jpg\")\n\n    # 您也可以通过以下命令将其部署到版本化的端点：\n    # beam deploy app.py:my_background_task --name image-processor\n```\n\n> ## 自托管 vs 云\n>\n> Beta9 是驱动 [Beam](https:\u002F\u002Fbeam.cloud) 的开源引擎，后者是我们完全托管的云平台。您可以免费自托管 Beta9，也可以选择通过 Beam 使用托管云服务。\n\n## 👋 贡献\n\n我们欢迎各种大小的贡献。对我们最有帮助的是：\n\n- 提交 [功能请求](https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fissues\u002Fnew?assignees=&labels=&projects=&template=feature-request.md&title=) 或 
[错误报告](https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fissues\u002Fnew?assignees=&labels=&projects=&template=bug-report.md&title=)\n- 打开包含新功能或改进的 PR\n\n## ❤️ 感谢我们的贡献者\n\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fgraphs\u002Fcontributors\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbeam-cloud_beta9_readme_e34851d2fa74.png\" \u002F>\n\u003C\u002Fa>","# Beta9 快速上手指南\n\nBeta9 是一个用于大规模运行 AI 工作负载的开源无服务器运行时。它提供了 Pythonic 接口，让你无需管理基础设施即可部署和扩展 AI 应用。\n\n## 环境准备\n\n在开始之前，请确保你的开发环境满足以下要求：\n\n*   **操作系统**：Linux, macOS 或 Windows (WSL2 推荐)\n*   **Python 版本**：Python 3.8 或更高版本（示例中常用 3.11）\n*   **账号准备**：需要注册 [Beam Cloud](https:\u002F\u002Fbeam.cloud) 账号以获取访问凭证\n*   **网络环境**：确保能访问 `pypi.org` 和 `beam.cloud` API\n\n## 安装步骤\n\n使用 pip 安装官方客户端库：\n\n```shell\npip install beam-client\n```\n\n> **提示**：国内开发者若遇到下载缓慢，可临时指定清华或阿里镜像源：\n> `pip install beam-client -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple`\n\n安装完成后，请前往 [Beam 控制台](https:\u002F\u002Fplatform.beam.cloud\u002Fonboarding) 完成初始配置并获取 API Token。\n\n## 基本使用\n\n### 1. 创建沙箱环境 (Sandbox)\n快速启动隔离容器来运行代码（例如运行 LLM 生成的代码）：\n\n```python\nfrom beam import Image, Sandbox\n\n\nsandbox = Sandbox(image=Image()).create()\nresponse = sandbox.process.run_code(\"print('I am running remotely')\")\n\nprint(response.result)\n```\n\n### 2. 部署无服务器推理端点\n通过简单的装饰器将函数部署为自动伸缩的 API 端点：\n\n```python\nfrom beam import Image, endpoint\nfrom beam import QueueDepthAutoscaler\n\n@endpoint(\n    image=Image(python_version=\"python3.11\"),\n    gpu=\"A10G\",\n    cpu=2,\n    memory=\"16Gi\",\n    autoscaler=QueueDepthAutoscaler(max_containers=5, tasks_per_container=30)\n)\ndef handler():\n    return {\"label\": \"cat\", \"confidence\": 0.97}\n```\n\n### 3. 
运行后台任务\n替代 Celery 等队列系统，轻松调度可靠的后台任务：\n\n```python\nfrom beam import Image, TaskPolicy, schema, task_queue\n\n\nclass Input(schema.Schema):\n    image_url = schema.String()\n\n\n@task_queue(\n    name=\"image-processor\",\n    image=Image(python_version=\"python3.11\"),\n    cpu=1,\n    memory=1024,\n    inputs=Input,\n    task_policy=TaskPolicy(max_retries=3),\n)\ndef my_background_task(input: Input, *, context):\n    image_url = input.image_url\n    print(f\"Processing image: {image_url}\")\n    return {\"image_url\": image_url}\n\n\nif __name__ == \"__main__\":\n    # 从本地应用触发后台任务\n    my_background_task.put(image_url=\"https:\u002F\u002Fexample.com\u002Fimage.jpg\")\n```","一家初创公司的 AI 团队需要构建一个能实时处理用户上传视频并生成摘要的系统，且需应对早晚高峰的流量剧烈波动。\n\n### 没有 beta9 时\n- **资源闲置成本高**：为了应对峰值流量，团队必须常年租用昂贵的 GPU 服务器，但在夜间低峰期资源利用率极低，造成大量资金浪费。\n- **扩容响应滞后**：当突发流量涌入时，传统云服务的自动扩容需要数分钟启动容器，导致用户请求排队甚至超时失败。\n- **运维负担沉重**：开发人员需花费大量时间配置 Docker 环境、管理 Kubernetes 集群以及处理底层基础设施故障，无法专注于算法优化。\n- **并发处理能力弱**：难以快速将视频处理任务分发到数百个节点并行执行，长视频的处理延迟严重影响用户体验。\n\n### 使用 beta9 后\n- **实现真正的按需付费**：利用 beta9 的 Serverless 特性，系统默认缩容至零，仅在收到视频上传请求时瞬间启动 GPU 容器，大幅降低闲置成本。\n- **毫秒级弹性伸缩**：借助超快的镜像构建和并发能力，beta9 能在秒级内将任务扇出（Fan-out）到上百个容器，轻松消化流量洪峰。\n- **极简的开发体验**：团队只需使用简单的 Python 装饰器定义任务逻辑，无需关心底层设施，热重载功能更让调试效率倍增。\n- **高效并行处理**：通过内置的任务队列和分布式存储卷，视频切片被自动分发并行处理，将原本几分钟的等待时间缩短至秒级。\n\nbeta9 让 AI 团队彻底摆脱了基础设施的束缚，以最低成本和最高效率实现了大规模视频推理应用的落地。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbeam-cloud_beta9_8922621d.gif","beam-cloud","Beam","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fbeam-cloud_5b423626.png","AI Infrastructure for 
Developers",null,"founders@beam.cloud","beam_cloud","beam.cloud","https:\u002F\u002Fgithub.com\u002Fbeam-cloud",[82,86,90,94,98,102,104,107,111],{"name":83,"color":84,"percentage":85},"Go","#00ADD8",72.4,{"name":87,"color":88,"percentage":89},"Python","#3572A5",25.3,{"name":91,"color":92,"percentage":93},"HCL","#844FBA",1.3,{"name":95,"color":96,"percentage":97},"Shell","#89e051",0.6,{"name":99,"color":100,"percentage":101},"Makefile","#427819",0.1,{"name":103,"color":84,"percentage":101},"Go Template",{"name":105,"color":106,"percentage":101},"Smarty","#f0c040",{"name":108,"color":109,"percentage":110},"JavaScript","#f1e05a",0,{"name":112,"color":113,"percentage":110},"Dockerfile","#384d54",1632,142,"2026-04-18T19:37:54","AGPL-3.0","未说明","可选。支持在 Beam 云端使用 (如 RTX 4090, H100, A10G) 或自带 GPU；本地自托管具体型号和显存要求未在 README 中明确列出。","未说明 (示例代码中展示了 16Gi 和 1024MB 的配置选项，但非本地运行最低要求)。",{"notes":122,"python":123,"dependencies":124},"该工具主要作为客户端库 (beam-client) 使用，用于连接 Beam 云平台或自托管的 Beta9 引擎进行无服务器 AI 工作负载部署。大部分计算资源 (GPU\u002F内存) 由云端或自托管集群提供，而非本地机器直接承担。自托管需自行搭建容器运行时环境。","3.11 (示例代码指定)，客户端安装通常支持 Python 3.8+。",[125],"beam-client",[14,35],[128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145],"gpu","ml-platform","cuda","fine-tuning","generative-ai","large-language-models","llm","distributed-computing","llm-inference","self-hosted","autoscaler","cloudrun","developer-productivity","faas","functions-as-a-service","paas","serverless","serverless-containers","2026-03-27T02:49:30.150509","2026-04-20T07:16:11.952384",[149,154,159,164,169,173],{"id":150,"question_zh":151,"answer_zh":152,"source_url":153},44091,"vLLM 示例报错 `TypeError: cannot unpack non-iterable bool object` 如何解决？","该问题通常由配置参数引起。请检查 VLLM 初始化代码，将 `task` 参数从 `\"chat\"` 修改为 `\"generate\"` 即可修复。例如：\n```python\nvllm_args=VLLMArgs(\n    model=YI_CODER_CHAT,\n    served_model_name=[YI_CODER_CHAT],\n    task=\"generate\",  # 修改此处\n    trust_remote_code=True,\n    max_model_len=8096,\n)\n```\n此外，请确保项目文件夹中没有名为 `vllm.py` 
的文件，以免与官方包导入冲突。","https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fissues\u002F1377",{"id":155,"question_zh":156,"answer_zh":157,"source_url":158},44092,"在 Kubernetes 上自托管 Beta9 时，如何获取或重置管理员 Token？","如果忘记 Token 或需要重置，可以直接从 PostgreSQL 数据库中查询或清理。连接到 postgres pod 后执行以下 SQL 命令：\n1. 获取现有 Admin Token：\n```bash\nPGPASSWORD=\"password\" psql -U \"root\" -d \"dbname=main\" -c \"SELECT key FROM token WHERE token_type = 'admin'\"\n```\n2. 如果存在旧的工作空间导致连接问题，可以清空工作空间表：\n```bash\nPGPASSWORD=\"password\" psql -U \"root\" -d \"dbname=main\" -c 'TRUNCATE workspace CASCADE'\n```\n注意：如果是重新安装 Helm Chart 后出现问题，可能还需要删除持久化卷（PVC）以清除残留数据。","https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fissues\u002F865",{"id":160,"question_zh":161,"answer_zh":162,"source_url":163},44093,"使用 Helm 安装 Beta9 时报错 `invalid_reference: invalid tag` 或 `resource mapping not found` 怎么办？","这通常是因为使用了过时或不正确的 Helm Chart 版本。请尝试安装最新版本的 Chart，并确保本地环境（如使用 k3d）兼容。执行以下命令安装最新版本：\n```sh\nhelm install beta9 oci:\u002F\u002Fpublic.ecr.aws\u002Fn4e0e1y0\u002Fbeta9-chart --version 0.1.359\n```\n如果 Worker 镜像版本不匹配，可以通过 Helm values 指定具体的 worker 镜像标签：\n```yaml\nconfig:\n  worker:\n    imageTag: 0.1.305\n```\n安装后可能需要删除现有的 Worker Pod，以便 Gateway 启动新版本。","https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fissues\u002F921",{"id":165,"question_zh":166,"answer_zh":167,"source_url":168},44094,"Beta9 是否支持私有化部署？是否有商业支持？","是的，Beam\u002FBeta9 支持在私有云（Private Cloud）环境中部署。官方提供包含商业支持的企业版计划。如果您有企业级需求（如大规模模型推理调度），可以通过官方渠道预约会议讨论具体合作方案。需要注意的是，供应商 beam.cloud 是一家美国公司。","https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fissues\u002F484",{"id":170,"question_zh":171,"answer_zh":172,"source_url":163},44095,"如何在自托管环境中指定 Worker 镜像的具体版本？","在 Helm 部署中，您可以通过配置 `values.yaml` 文件来明确指定 Worker 镜像的版本标签，避免使用不稳定的 `latest` 标签。配置示例如下：\n```yaml\nconfig:\n  worker:\n    imageTag: 0.1.305\n```\n设置完成后，升级您的 Helm 部署即可生效。您可以在 GitHub Releases 页面或 ECR Public Gallery 上查找可用的 Worker 
镜像版本。",{"id":174,"question_zh":175,"answer_zh":176,"source_url":153},44096,"运行 vLLM 示例时遇到导入错误，但代码看起来没问题，可能是什么原因？","这很可能是文件名冲突导致的。请检查您的项目目录，确认是否存在名为 `vllm.py` 的文件。如果存在，Python 会优先导入该本地文件而不是官方的 `vllm` 包，从而导致 `AttributeError` 或 `TypeError` 等异常。解决方法是将该文件重命名（例如改为 `my_vllm_runner.py`）或删除。",[178,183,188,193,198,203,208,213,218,223,228,233,238,243,248,253,258,263,268,273],{"id":179,"version":180,"summary_zh":181,"released_at":182},351634,"worker-0.1.527","## 变更内容\n* 由 @luke-lombardi 在 https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1557 中修复了多 GPU 分配问题\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fcompare\u002Fworker-0.1.526...worker-0.1.527","2026-03-11T20:37:59",{"id":184,"version":185,"summary_zh":186,"released_at":187},351635,"gateway-0.1.564","## 变更内容\n* 由 @cooper-grc 在 https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1542 中提升 SDK 版本\n* 功能：由 @luke-lombardi 在 https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1545 中添加沙箱异步方法\n* 修复：由 @luke-lombardi 在 https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1547 中清理过时的检查点\n* 修复 NVIDIA 运行时边缘情况，由 @luke-lombardi 在 https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1552 中完成\n* 第二次修复 GPU 可见性问题，由 @luke-lombardi 在 https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1553 中完成\n* 修复严格隔离漏洞，由 @luke-lombardi 在 https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1554 中完成\n* 修复测试问题，由 @luke-lombardi 在 https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1555 中完成\n* 采用不同方法修复隔离漏洞，由 @luke-lombardi 在 https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1556 中完成\n\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fcompare\u002Fgateway-0.1.563...gateway-0.1.564","2026-03-11T20:32:32",{"id":189,"version":190,"summary_zh":191,"released_at":192},351636,"worker-0.1.526","## 变更内容\n* 由 @luke-lombardi 在 
https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1556 中采用不同方法修复隔离漏洞\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fcompare\u002Fworker-0.1.525...worker-0.1.526","2026-03-11T20:32:13",{"id":194,"version":195,"summary_zh":196,"released_at":197},351637,"worker-0.1.525","## 变更内容\n* 由 @luke-lombardi 在 https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1555 中修复了测试\r\n\r\n\r\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fcompare\u002Fworker-0.1.524...worker-0.1.525","2026-03-11T19:48:44",{"id":199,"version":200,"summary_zh":201,"released_at":202},351638,"worker-0.1.524","## 变更内容\n* 由 @luke-lombardi 在 https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1554 中修复了严格隔离的 bug\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fcompare\u002Fworker-0.1.523...worker-0.1.524","2026-03-11T19:36:51",{"id":204,"version":205,"summary_zh":206,"released_at":207},351639,"worker-0.1.523","## 变更内容\n* 由 @luke-lombardi 在 https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1553 中提交的 GPU 可见性问题的第二次修复\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fcompare\u002Fworker-0.1.522...worker-0.1.523","2026-03-11T16:32:53",{"id":209,"version":210,"summary_zh":211,"released_at":212},351640,"worker-0.1.522","## 变更内容\n* 由 @luke-lombardi 在 https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1552 中修复了 NVIDIA 运行时的边缘情况\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fcompare\u002Fworker-0.1.521...worker-0.1.522","2026-03-11T16:10:52",{"id":214,"version":215,"summary_zh":216,"released_at":217},351641,"worker-0.1.521","## 变更内容\n* 修复：由 @luke-lombardi 在 https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1547 
中清理过时的检查点\n\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fcompare\u002Fworker-0.1.520...worker-0.1.521","2026-01-16T16:26:18",{"id":219,"version":220,"summary_zh":221,"released_at":222},351642,"worker-0.1.520","## 变更内容\n* 由 @cooper-grc 在 https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1542 中升级了 SDK 版本\n* 功能：由 @luke-lombardi 在 https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1545 中添加了沙箱异步方法\n\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fcompare\u002Fworker-0.1.519...worker-0.1.520","2026-01-11T21:29:47",{"id":224,"version":225,"summary_zh":226,"released_at":227},351643,"gateway-0.1.563","## 变更内容\n* 在运行时更新网络权限，由 @cooper-grc 提交，链接：https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1528\n* 为沙箱添加端口，由 @cooper-grc 提交，链接：https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1541\n\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fcompare\u002Fworker-0.1.518...gateway-0.1.563","2025-11-25T16:51:49",{"id":229,"version":230,"summary_zh":231,"released_at":232},351644,"worker-0.1.519","## What's Changed\r\n* Update Network Permissions At Runtime by @cooper-grc in https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1528\r\n* Add Ports To Sandboxes by @cooper-grc in https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1541\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fcompare\u002Fworker-0.1.518...worker-0.1.519","2025-11-25T16:51:06",{"id":234,"version":235,"summary_zh":236,"released_at":237},351645,"worker-0.1.518","## What's Changed\r\n* fix: bump max message sizes by @luke-lombardi in https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1540\r\n\r\n\r\n**Full Changelog**: 
https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fcompare\u002Fworker-0.1.517...worker-0.1.518","2025-11-20T22:31:45",{"id":239,"version":240,"summary_zh":241,"released_at":242},351646,"gateway-0.1.562","## What's Changed\r\n* Improve SandboxFileSystemError by @cooper-grc in https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1527\r\n* fix: cleanup criu interface by @luke-lombardi in https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1530\r\n* fix: devices pointer should always be nil by @luke-lombardi in https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1532\r\n* fix: improve image caching for clip v1 by @luke-lombardi in https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1538\r\n* pin clip version by @luke-lombardi in https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1539\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fcompare\u002Fgateway-0.1.561...gateway-0.1.562","2025-11-20T20:09:20",{"id":244,"version":245,"summary_zh":246,"released_at":247},351647,"worker-0.1.517","## What's Changed\r\n* fix: improve image caching for clip v1 by @luke-lombardi in https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1538\r\n* pin clip version by @luke-lombardi in https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1539\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fcompare\u002Fworker-0.1.516...worker-0.1.517","2025-11-20T20:08:58",{"id":249,"version":250,"summary_zh":251,"released_at":252},351648,"worker-0.1.516","## What's Changed\r\n* fix: devices pointer should always be nil by @luke-lombardi in https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1532\r\n\r\n\r\n**Full Changelog**: 
https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fcompare\u002Fworker-0.1.515...worker-0.1.516","2025-11-17T18:30:25",{"id":254,"version":255,"summary_zh":256,"released_at":257},351649,"worker-0.1.515","## What's Changed\r\n* Improve SandboxFileSystemError by @cooper-grc in https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1527\r\n* fix: cleanup criu interface by @luke-lombardi in https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1530\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fcompare\u002Fworker-0.1.514...worker-0.1.515","2025-11-17T17:53:39",{"id":259,"version":260,"summary_zh":261,"released_at":262},351650,"gateway-0.1.561","## What's Changed\r\n* Fix: Moved Runtime by @cooper-grc in https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1526\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fcompare\u002Fgateway-0.1.560...gateway-0.1.561","2025-11-12T19:47:42",{"id":264,"version":265,"summary_zh":266,"released_at":267},351651,"worker-0.1.514","## What's Changed\r\n* Fix: Moved Runtime by @cooper-grc in https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1526\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fcompare\u002Fworker-0.1.513...worker-0.1.514","2025-11-12T19:47:24",{"id":269,"version":270,"summary_zh":271,"released_at":272},351652,"gateway-0.1.560","## What's Changed\r\n* Refactor container interface by @luke-lombardi in https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1270\r\n* Refactor: clean up incorrectly named server\u002Finterfaces by @luke-lombardi in https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1508\r\n* remove creds file and outdated comments by @luke-lombardi in https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1509\r\n* Conditionally cache full image 
excluding clip v2 by @luke-lombardi in https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1511\r\n* Feat: docker support by @luke-lombardi in https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1514\r\n* Feat: Docker manager by @luke-lombardi in https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1516\r\n* Fix docker manager sandbox issues by @luke-lombardi in https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1517\r\n* Fix docker networking in gvisor sandbox by @luke-lombardi in https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1522\r\n* removed unused network methods from docker manager by @luke-lombardi in https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1523\r\n* Fix docker compose host network error by @luke-lombardi in https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1524\r\n* Add block_network to Sandboxes by @cooper-grc in https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1488\r\n* Add Allow List to Sandboxes by @cooper-grc in https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1513\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fcompare\u002Fgateway-0.1.559...gateway-0.1.560","2025-11-10T22:49:36",{"id":274,"version":275,"summary_zh":276,"released_at":277},351653,"worker-0.1.513","## What's Changed\r\n* Refactor container interface by @luke-lombardi in https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1270\r\n* Refactor: clean up incorrectly named server\u002Finterfaces by @luke-lombardi in https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1508\r\n* remove creds file and outdated comments by @luke-lombardi in https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1509\r\n* Conditionally cache full image excluding clip v2 by @luke-lombardi in 
https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1511\r\n* Feat: docker support by @luke-lombardi in https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1514\r\n* Feat: Docker manager by @luke-lombardi in https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1516\r\n* Fix docker manager sandbox issues by @luke-lombardi in https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1517\r\n* Fix docker networking in gvisor sandbox by @luke-lombardi in https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1522\r\n* removed unused network methods from docker manager by @luke-lombardi in https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1523\r\n* Fix docker compose host network error by @luke-lombardi in https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1524\r\n* Add block_network to Sandboxes by @cooper-grc in https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1488\r\n* Add Allow List to Sandboxes by @cooper-grc in https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fpull\u002F1513\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbeam-cloud\u002Fbeta9\u002Fcompare\u002Fgateway-0.1.559...worker-0.1.513","2025-11-10T22:49:00"]