[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-run-house--kubetorch":3,"tool-run-house--kubetorch":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":79,"owner_email":80,"owner_twitter":81,"owner_website":82,"owner_url":83,"languages":84,"stars":104,"forks":105,"last_commit_at":106,"license":107,"difficulty_score":108,"env_os":109,"env_gpu":109,"env_ram":109,"env_deps":110,"category_tags":113,"github_topics":114,"view_count":10,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":132,"updated_at":133,"faqs":134,"releases":155},526,"run-house\u002Fkubetorch","kubetorch","Distribute and run AI workloads on Kubernetes magically in Python, like PyTorch for ML infra.","kubetorch 是一个专为机器学习设计的开源框架，让你能用纯 Python 代码在 Kubernetes 集群上无缝运行 AI 工作负载。它旨在消除本地开发与云端部署之间的壁垒，将集群的计算能力直接带入你的开发环境。\n\n传统模式下，在 K8s 上调试模型往往耗时且繁琐。kubetorch 解决了这一痛点，通过无需本地运行时和代码序列化的机制，实现了类似本地进程池的编程体验。开发者可以在 IDE、Jupyter Notebook 或 CI 流程中直接调用远程算力，将复杂应用的迭代时间从十几分钟缩短至 1-2 秒。此外，它还具备智能资源调度与自动故障恢复能力，能有效降低计算成本并提升稳定性。\n\n这样的特性使得 kubetorch 特别适合机器学习工程师、AI 研究人员以及需要构建大规模分布式训练系统的开发团队。无论是强化学习还是分布式模型训练，它都能提供快速、稳定的基础设施支持，让开发者更专注于算法本身而非运维细节。","# 📦Kubetorch🔥\n\n**A Fast, Pythonic, \"Serverless\" Interface for Running ML Workloads on Kubernetes**\n\nKubetorch lets you programmatically build, iterate, and deploy ML applications on Kubernetes at any scale - directly from 
Python.\n\nIt brings your cluster's compute power into your local development environment, enabling extremely fast iteration (1-2 seconds). Logs, exceptions, and hardware faults are automatically propagated back to you in real-time.\n\nSince Kubetorch has no local runtime or code serialization, you can access large-scale cluster compute from any Python environment - your IDE, notebooks, CI pipelines, or production code - just like you would use a local process pool.\n\n## Hello World\n\n```python\nimport kubetorch as kt\n\ndef hello_world():\n    return \"Hello from Kubetorch!\"\n\nif __name__ == \"__main__\":\n    # Define your compute\n    compute = kt.Compute(cpus=\".1\")\n\n    # Send local function to freshly launched remote compute\n    remote_hello = kt.fn(hello_world).to(compute)\n\n    # Runs remotely on your Kubernetes cluster\n    result = remote_hello()\n    print(result)  # \"Hello from Kubetorch!\"\n```\n\n## What Kubetorch Enables\n\n- **100x faster iteration** from 10+ minutes to 1-3 seconds for complex ML applications like RL and distributed training\n- **50%+ compute cost savings** through intelligent resource allocation, bin-packing, and dynamic scaling\n- **95% fewer production faults** with built-in fault handling with programmatic error recovery and resource adjustment\n\n## Installation\n\n### 1. Python Client\n\n```bash\npip install \"kubetorch[client]\"\n```\n\n### 2. 
Kubernetes Deployment (Helm)\n\n```bash\n# Option 1: Install directly from OCI registry\nhelm upgrade --install kubetorch oci:\u002F\u002Fghcr.io\u002Frun-house\u002Fcharts\u002Fkubetorch \\\n  --version 0.5.0 -n kubetorch --create-namespace\n\n# Option 2: Download chart locally first\nhelm pull oci:\u002F\u002Fghcr.io\u002Frun-house\u002Fcharts\u002Fkubetorch --version 0.5.0 --untar\nhelm upgrade --install kubetorch .\u002Fkubetorch -n kubetorch --create-namespace\n```\n\nFor detailed setup instructions, see our [Installation Guide](https:\u002F\u002Fwww.run.house\u002Fkubetorch\u002Finstallation).\n\n## Source Layout\n\nThis repo now includes the customer-facing OSS deployment components that were previously split across internal and OSS repos:\n\n- `python_client\u002F` for the SDK\n- `charts\u002Fkubetorch\u002F` for the Helm chart\n- `services\u002F` for the controller and data store sources\n- `release\u002Fdefault_images\u002F` for the workload base images\n- `release\u002F` for release scripts and version sync\n\n\n## Kubetorch Serverless\n\nContact us ([email](mailto:hello@run.house), [Slack](https:\u002F\u002Fjoin.slack.com\u002Ft\u002Fkubetorch\u002Fshared_invite\u002Fzt-3sfhomlk3-AN61_tf1PRiUynHdNtoRAg)) to try out Kubetorch on our fully managed serverless platform.\n\n## Learn More\n\n- **[Documentation](https:\u002F\u002Fwww.run.house\u002Fkubetorch\u002Fintroduction)** - API Reference, concepts, and guides\n- **[Examples](https:\u002F\u002Fwww.run.house\u002Fexamples)** - Real-world usage patterns and tutorials\n- **[Join our Slack](https:\u002F\u002Fjoin.slack.com\u002Ft\u002Fkubetorch\u002Fshared_invite\u002Fzt-3g76q5i4j-uP60AdydxnAmjGVAQhtALA)** - Connect with the community and get support\n\n---\n\n[Apache 2.0 License](LICENSE)\n\n**🏃‍♀️ Built by [Runhouse](https:\u002F\u002Fwww.run.house) 🏠**\n","# 📦Kubetorch🔥\n\n**在 Kubernetes 上运行机器学习（ML）工作负载的快速、符合 Python 风格、“无服务器”接口**\n\nKubetorch 让你能够直接从 Python 编程构建、迭代和部署任意规模的 Kubernetes 
机器学习（ML）应用程序。\n\n它将集群的计算能力引入到你的本地开发环境中，实现极快的迭代速度（1-2 秒）。日志、异常和硬件故障会自动实时回传给你。\n\n由于 Kubetorch 没有本地运行时或代码序列化，你可以从任何 Python 环境访问大规模集群计算——无论是你的集成开发环境（IDE）、笔记本、CI 流水线还是生产代码——就像使用本地进程池一样。\n\n## 你好世界\n\n```python\nimport kubetorch as kt\n\ndef hello_world():\n    return \"Hello from Kubetorch!\"\n\nif __name__ == \"__main__\":\n    # Define your compute\n    compute = kt.Compute(cpus=\".1\")\n\n    # Send local function to freshly launched remote compute\n    remote_hello = kt.fn(hello_world).to(compute)\n\n    # Runs remotely on your Kubernetes cluster\n    result = remote_hello()\n    print(result)  # \"Hello from Kubetorch!\"\n```\n\n## Kubetorch 带来的优势\n\n- **100 倍更快的迭代速度**，将复杂机器学习（ML）应用（如强化学习（RL）和分布式训练）的迭代时间从 10 多分钟缩短至 1-3 秒\n- **50%+ 的计算成本节省**，通过智能资源分配、装箱（bin-packing）和动态扩展实现\n- **95% 的生产故障减少**，内置故障处理功能，支持程序化错误恢复和资源调整\n\n## 安装\n\n### 1. Python 客户端\n\n```bash\npip install \"kubetorch[client]\"\n```\n\n### 2. Kubernetes 部署 (Helm)\n\n```bash\n# Option 1: Install directly from OCI registry\nhelm upgrade --install kubetorch oci:\u002F\u002Fghcr.io\u002Frun-house\u002Fcharts\u002Fkubetorch \\\n  --version 0.5.0 -n kubetorch --create-namespace\n\n# Option 2: Download chart locally first\nhelm pull oci:\u002F\u002Fghcr.io\u002Frun-house\u002Fcharts\u002Fkubetorch --version 0.5.0 --untar\nhelm upgrade --install kubetorch .\u002Fkubetorch -n kubetorch --create-namespace\n```\n\n有关详细设置说明，请参阅我们的 [安装指南](https:\u002F\u002Fwww.run.house\u002Fkubetorch\u002Finstallation)。\n\n## 源代码布局\n\n此仓库现在包含了之前分散在内部和开源软件（OSS）仓库中的面向客户的开源部署组件：\n\n- `python_client\u002F` 用于软件开发工具包（SDK）\n- `charts\u002Fkubetorch\u002F` 用于 Helm 图表\n- `services\u002F` 用于控制器和数据存储源\n- `release\u002Fdefault_images\u002F` 用于工作负载基础镜像\n- `release\u002F` 用于发布脚本和版本同步\n\n\n## Kubetorch 无服务器\n\n联系我们（[邮箱](mailto:hello@run.house), [Slack](https:\u002F\u002Fjoin.slack.com\u002Ft\u002Fkubetorch\u002Fshared_invite\u002Fzt-3sfhomlk3-AN61_tf1PRiUynHdNtoRAg)）在我们的完全托管的无服务器平台上试用 Kubetorch。\n\n## 了解更多\n\n- 
**[文档](https:\u002F\u002Fwww.run.house\u002Fkubetorch\u002Fintroduction)** - API 参考、概念和指南\n- **[示例](https:\u002F\u002Fwww.run.house\u002Fexamples)** - 实际使用模式和教程\n- **[加入我们的 Slack](https:\u002F\u002Fjoin.slack.com\u002Ft\u002Fkubetorch\u002Fshared_invite\u002Fzt-3g76q5i4j-uP60AdydxnAmjGVAQhtALA)** - 与社区联系并获得支持\n\n---\n\n[Apache 2.0 许可证](LICENSE)\n\n**🏃‍♀️ 由 [Runhouse](https:\u002F\u002Fwww.run.house) 打造 🏠**","# kubetorch 快速上手指南\n\n**Kubetorch** 是一个面向 Kubernetes 的 Pythonic、无服务器接口，用于以任意规模构建、迭代和部署机器学习工作负载。它允许您直接从 Python 编程调用集群算力，实现极速迭代（1-2 秒）和自动故障恢复。\n\n## 环境准备\n\n在使用 Kubetorch 之前，请确保您的开发环境满足以下要求：\n\n- **Python 环境**：支持 Python 3.x 的开发机器或容器。\n- **Kubernetes 集群**：已配置好可用的 K8s 集群及访问权限。\n- **Helm 工具**：用于部署 Kubetorch 服务端组件。\n- **网络连通性**：能够访问 GitHub Container Registry (`ghcr.io`)。\n\n## 安装步骤\n\n### 1. 安装 Python 客户端\n\n通过 pip 安装 Kubetorch 客户端库：\n\n```bash\npip install \"kubetorch[client]\"\n```\n\n### 2. 部署 Kubernetes 服务端 (Helm)\n\n使用 Helm 将 Kubetorch 部署到您的集群中。以下命令将在 `kubetorch` 命名空间下安装版本 0.5.0：\n\n```bash\nhelm upgrade --install kubetorch oci:\u002F\u002Fghcr.io\u002Frun-house\u002Fcharts\u002Fkubetorch \\\n  --version 0.5.0 -n kubetorch --create-namespace\n```\n\n> 如需本地下载 Chart 包后再安装，可先执行 `helm pull` 获取文件，再使用 `helm upgrade --install kubetorch .\u002Fkubetorch ...`。\n\n## 基本使用\n\n安装完成后，您可以直接在 Python 脚本中定义计算资源并远程运行函数。以下是最简单的 Hello World 示例：\n\n```python\nimport kubetorch as kt\n\ndef hello_world():\n    return \"Hello from Kubetorch!\"\n\nif __name__ == \"__main__\":\n    # Define your compute\n    compute = kt.Compute(cpus=\".1\")\n\n    # Send local function to freshly launched remote compute\n    remote_hello = kt.fn(hello_world).to(compute)\n\n    # Runs remotely on your Kubernetes cluster\n    result = remote_hello()\n    print(result)  # \"Hello from Kubetorch!\"\n```\n\n## 更多信息\n\n- **[官方文档](https:\u002F\u002Fwww.run.house\u002Fkubetorch\u002Fintroduction)**：API 参考、概念与指南\n- **[代码示例](https:\u002F\u002Fwww.run.house\u002Fexamples)**：真实场景用法与教程\n- **[Slack 
社区](https:\u002F\u002Fjoin.slack.com\u002Ft\u002Fkubetorch\u002Fshared_invite\u002Fzt-3g76q5i4j-uP60AdydxnAmjGVAQhtALA)**：加入社区获取支持与交流","某电商公司的算法工程师正在开发推荐系统的强化学习模型，需要频繁调整超参数并测试分布式训练效果。\n\n### 没有 kubetorch 时\n- 每次代码微调都需要重新构建 Docker 镜像并部署到集群，单次完整迭代耗时超过 10 分钟\n- 远程任务报错时，日志分散在多个 Pod 容器中，排查故障极其困难且容易遗漏关键信息\n- 手动管理 GPU 资源分配导致计算节点经常闲置或过载，月度云成本浪费显著\n- 本地调试环境与生产环境不一致，常出现因序列化导致的“本地能跑、上线报错”问题\n\n### 使用 kubetorch 后\n- 直接在 Python 环境中调用函数即可调度至集群，迭代速度从分钟级提升至 1-2 秒\n- 异常信息和硬件故障自动回传至本地终端，无需登录服务器手动查看分散的日志\n- 智能资源调度自动优化 GPU 利用率，通过动态伸缩节省 50% 以上的计算成本\n- 无需关心容器序列化细节，IDE 中的代码可直接无缝运行在大规模集群上\n\nkubetorch 让机器学习工程师像使用本地进程池一样轻松驾驭 Kubernetes 算力，实现极速开发与成本优化的双赢。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Frun-house_kubetorch_45df1e1c.png","run-house","Runhouse","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Frun-house_98e088c0.png","Welcome! Make yourself at home 🏃‍♀️🏠",null,"team@run.house","runhouse_","run.house","https:\u002F\u002Fgithub.com\u002Frun-house",[85,89,93,97,100],{"name":86,"color":87,"percentage":88},"Python","#3572A5",99,{"name":90,"color":91,"percentage":92},"Shell","#89e051",0.6,{"name":94,"color":95,"percentage":96},"Dockerfile","#384d54",0.2,{"name":98,"color":99,"percentage":96},"Jinja","#a52a22",{"name":101,"color":102,"percentage":103},"Makefile","#427819",0,1176,53,"2026-03-31T03:20:22","Apache-2.0",4,"未说明",{"notes":111,"python":109,"dependencies":112},"需要预先配置 Kubernetes 集群，并通过 Helm 安装服务端组件；客户端仅需安装 Python SDK，可在任意 Python 环境（IDE、Notebook、CI）中使用；无本地运行时依赖，直接利用集群算力。",[109],[54,51,53,13],[115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131],"artificial-intelligence","aws","data-science","gcp","machine-learning","python","pytorch","ray","serverless","distributed","infrastructure","observability","data-processing","evaluation","inference","kubernetes","training","2026-03-27T02:49:30.150509","2026-04-06T08:52:26.836360",[135,140,145,150],{"id":136,"question_zh":137,"answer_zh":138,"source_url":139},2114,"在 Lambda 云上设置 SelfHostedHuggingFaceLLM 
时出现 RuntimeError 怎么办？","这通常是由于 Python 版本不一致导致的（例如 Traceback 中同时出现了 Python 3.8 和 3.10）。请确保 Lambda 实例上的环境统一，避免不同层调用不同的 Ray 版本。建议安装以下依赖并配置好 Lambda 密钥：\npip install runhouse\npip install langchain\npip install git+https:\u002F\u002Fgithub.com\u002Fskypilot-org\u002Fskypilot\npip install -U pyOpenSSL\nmkdir -p ~\u002F.lambda_cloud\necho \"api_key = \u003Cyour_api_key_here>\" > ~\u002F.lambda_cloud\u002Flambda_keys","https:\u002F\u002Fgithub.com\u002Frun-house\u002Fkubetorch\u002Fissues\u002F9",{"id":141,"question_zh":142,"answer_zh":143,"source_url":144},2115,"在 AWS 集群上运行 cluster.check_server() 时报错 Internal Server Error 如何解决？","该问题已在主分支修复，主要与 fastapi 和 pydantic 的版本兼容性变化有关。请升级到最新版本的 Runhouse 以解决依赖冲突。如果您需要支持 fastAPI > 0.1 和 Pydantic 2.x，也建议更新 Runhouse 版本，因为维护者已放宽了相关要求。","https:\u002F\u002Fgithub.com\u002Frun-house\u002Fkubetorch\u002Fissues\u002F83",{"id":146,"question_zh":147,"answer_zh":148,"source_url":149},2116,"如何在 Runhouse Cluster API 中指定 SSH ProxyCommand？","可以通过在 ~\u002F.ssh\u002Fconfig 文件中配置 ProxyCommand 来作为临时解决方案。对于动态主机名的情况，Runhouse 已修复相关逻辑以支持通过 dict 参数指定 ssh_proxy_command。建议更新代码或使用最新的分支，以确保代理主机能正确解析目标主机名。","https:\u002F\u002Fgithub.com\u002Frun-house\u002Fkubetorch\u002Fissues\u002F84",{"id":151,"question_zh":152,"answer_zh":153,"source_url":154},2117,"Windows 系统启动 LangChain LLM 时提示 'System cannot find the path specified' 怎么办？","这是由于 Windows 默认不提供 os.devnull 导致的，而 WSL 环境则提供。如果在 Windows 原生环境中遇到此问题，建议使用 WSL 环境，或者修改代码以避免直接使用 os.devnull 进行日志路径处理。","https:\u002F\u002Fgithub.com\u002Frun-house\u002Fkubetorch\u002Fissues\u002F58",[156,161,166,171,176,181,186,191,196,201,206,211,216,221,226,231,236,241,246,251],{"id":157,"version":158,"summary_zh":159,"released_at":160},111255,"v0.2.4","## New Features\r\n\r\n### Metrics Streaming\r\nKubetorch now supports real-time metrics streaming during service execution.\r\nWhile your service runs, you can watch live resource usage directly in your terminal, including:\r\n\r\n- CPU utilization (per service or pod)\r\n- Memory 
consumption (MiB)\r\n- GPU metrics (DCGM-based utilization and memory usage, where relevant)\r\n\r\nThis feature makes it easier to monitor performance, detect bottlenecks, and verify resource scaling in real time. \r\n\r\nRelated PRs: #1856, #1867, #1881, #1887\r\n\r\n**Note: To disable metrics collection, set `metrics.enabled` to `false` in the `values.yaml` of the Helm chart**\r\n\r\n## Improvements\r\n* Helm chart cleanup of deprecated kubetorch config values (#1865)\r\n* Convert cluster scoped RBAC to namespace scoped (#1864, #1861)\r\n* Logging: updating callable name for clarity (#1876)\r\n\r\n## Bug Fixes\r\n* Fix dockerfile sync when running kt app (#1863)\r\n* kt config set\u002Funset to only update specific config keys (#1882)\r\n* Reload cached submodules when reimporting kubetorch module on the server (#1883)","2025-11-12T09:52:14",{"id":162,"version":163,"summary_zh":164,"released_at":165},111256,"v0.2.3","## Improvements\r\n* Allow teardown without username prefix (#1849)\r\n* Skip working directory sync outside packages or repos (#1850)\r\n* Error handling for missing or invalid kube config (#1854)\r\n* Update compute tolerations for GPU workloads (#1839)\r\n","2025-11-04T20:03:37",{"id":167,"version":168,"summary_zh":169,"released_at":170},111257,"v0.2.2","## Bug Fixes\r\n* Rsync for non local mode (#1833)\r\n\r\n## Improvements\r\n* Helm chart cleanup (#1832)","2025-10-22T01:13:48",{"id":172,"version":173,"summary_zh":174,"released_at":175},111258,"v0.2.1","## Introducing Kubetorch\r\nA Python interface for running ML workloads on Kubernetes\r\n\r\n### Components\r\n\r\n- **Python Client**: Run any Python function or class directly on Kubernetes using a simple, intuitive API.\r\n- **Helm Chart**: Deploy Kubetorch to any existing Kubernetes cluster with one command.\r\n\r\n### Highlights\r\n\r\n- **Fast Iteration**: Hot reload and caching enable 1–3 second iteration loops for complex ML workloads.\r\n- **Resilient Execution**: Built-in fault 
tolerance, preemption handling, and automatic restarts.\r\n- **Observability**: Integrated logging and tracing for distributed workloads\r\n\r\nFor more info see the [Getting Started](https:\u002F\u002Fwww.run.house\u002Fkubetorch\u002Fintroduction) guide. ","2025-10-22T00:50:11",{"id":177,"version":178,"summary_zh":179,"released_at":180},111246,"v0.5.0","## New Features\r\n### Workload CRD (#2195, #2210, #2219, #2222, #2236, #2238, #2251)\r\nKubetorch now stores workload information in a Kubernetes Workload CRD. This contains metadata for the Kubetorch services, such as labels, paths, images, and can be viewed and managed through standard Kubernetes tools.\r\n\r\nTo use the latest release, upgrade your kubetorch installation and ensure the CRD is installed.\r\n\r\n### BYO Manifest for arbitrary K8s resource types (#2133, #2211, #2237)\r\nIn release 0.4.0, we introduced bring-your-own (BYO) manifest for deploying custom Kubernetes manifests while maintaining Kubetorch compatible capabilities. In this release, we expand the supported resource types to be arbitrary Kubernetes resource types, in addition to the previously supported Kubetorch resource types. To apply this to new resource types, you will need to specify the `pod_template_path` parameter when creating the Compute object:\r\n\r\n```python\r\ncompute = kt.Compute.from_manifest(\r\n    manifest=my_custom_manifest,\r\n    pod_template_path=\"spec.workload.template\",\r\n)\r\n```\r\n\r\n### Kt Apply (#2085, #2257)\r\n`kt apply` is a new CLI command to deploy existing Kubernetes manifests through Kubetorch. 
This automatically injects the Kubetorch server to the manifest start and supports optional Dockerfile based image setup.\r\n\r\n```bash\r\n# Apply a deployment manifest\r\nkt apply deployment.yaml\r\n\r\n# Apply with Dockerfile for image setup\r\nkt apply deployment.yaml --dockerfile Dockerfile\r\n\r\n# Apply with HTTP proxying enabled\r\nkt apply fastapi-deployment.yaml --port 8000 --health-check \u002Fhealth\r\n```\r\n\r\n### Code synchronization control (#2228, #2243)\r\nIntroduce new parameters `sync_dir`, `remote_dir`, and `remote_import_path` in module initialization for finer grain control over code syncing and remote imports. Use `sync_dir` to specify a local directory to sync (or set to `False` to skip module syncing entirely), and use `remote_dir` and `remote_import_path` to point to code that already exists on the container (mutually exclusive from `sync_dir`). \r\n\r\n```python\r\n# Sync a specific directory\r\nremote_fn = kt.fn(my_function, sync_dir=\".\u002Fsrc\").to(compute)\r\n\r\n# Use code already on container (e.g., from image.copy())\r\nimage = kt.Image().copy(\".\u002Fsrc\")\r\nremote_fn = kt.fn(\r\n    my_function,\r\n    sync_dir=False,\r\n    remote_dir=\"src\",\r\n    remote_import_path=\"mymodule\"\r\n).to(compute)\r\n```\r\n\r\n## Improvements\r\n* Kill processes better (#2135)\r\n* Scale data store (#2185)\r\n* Update image to include dockerfile contents upon setup and .to (#2127, #2194)\r\n* `kt describe` to include the ingress if configured (#2191)\r\n* Allow setting kt config values to None (#2190)\r\n* Retry connection when hitting a RemoteProtocolError (#2204)\r\n* Split data store helm chart resources (#2216)\r\n* Add configuration for controller uvicorn worker count (#2217)\r\n* Update app http health check params (#2225)\r\n* Hide pod names by default for kt list (#2218)\r\n* Add release namespace for data store deployments (#2248)\r\n* Split up module pointers (#2227)\r\n* Add support for serialization=”none” (#2231)\r\n* 
Reduce poll interval for faster service readiness detection (#2232)\r\n* Eagerly load callable at subprocess startup (#2234)\r\n* Add callable_name property for modules (#2252)\r\n* Add more helpful logging for rsync errors (#2255)\r\n* Fix a few user facing type check errors (#2256)\r\n\r\n\r\n## Deprecations\r\n* Deprecate image rsync in favor of copy (#2127)\r\n\r\n## BC-Breaking\r\n* Require rsync 3.2.0+ instead of falling back to manual directory creation\r\n* Refactor teardown method (#2262)\r\n     * `--force\u002F-f` flag no longer deletes without confirmation. `--force` indicates force deleting the resource, but user will still be prompted with a confirmation if a `--yes\u002F-y` flag is not provided.\r\n\r\n## Bug Fixes\r\n* Fix controller connection scaling (#2184)\r\n* Fix noisy websocket error logging during shutdown (#2193)\r\n* Fix rerun errors by appending launch_id (#2223)\r\n* Add markers to support decorating modules (#2235)\r\n* Fix for single-file rsync and dockerfile absolute rsync check (#2241, #2245)\r\n* Fix EADDRINUSE errno check for macOS compatibility (#2265)\r\n","2026-02-18T00:00:20",{"id":182,"version":183,"summary_zh":184,"released_at":185},111247,"v0.4.1","## Improvements\r\n* Global helm flags (#2168)\r\n* Remove service name from data store (#2173)\r\n* Set logging config level based on env var (#2172)\r\n* Propagate user annotations defined in `kt.Compute` (#2171) \r\n\r\n## Bug Fixes\r\n* Fix `kt logs --tail`(#2166)\r\n* Fix process group formation for data syncing (#2176)\r\n* List secrets in deployed namespaces only (#2177)\r\n* List volumes in deployed namespaces only (#2179)","2026-01-21T19:05:23",{"id":187,"version":188,"summary_zh":189,"released_at":190},111248,"v0.4.0","## Kubetorch Controller (#1947)\r\n\r\nThis release introduces the Kubetorch Controller, a new cluster-side component that eliminates the need for local kubeconfig files and simplifies authentication — the Python client now communicates with your cluster 
through a centralized controller endpoint.\r\n\r\n- **Simplified setup**: No more managing kubeconfig files or local Kubernetes client dependencies\r\n- **Unified API access**: All resource types (Deployments, Knative Services, RayClusters, Kubeflow Training Jobs, or other arbitrary CRDs) are managed through a single endpoint\r\n\r\n### Standard Workflow (#2041, #2076, #2079, #2081)\r\n\r\nThe familiar Kubetorch experience — declare your compute requirements and let Kubetorch handle everything:\r\n\r\n```python\r\nimport kubetorch as kt\r\n\r\n# Define compute requirements\r\ncompute = kt.Compute(cpus=\"2\", memory=\"4Gi\", gpus=1)\r\n\r\n# Deploy your function\r\nremote_fn = kt.fn(my_func).to(compute)\r\nresult = remote_fn(1, 2)\r\n```\r\n\r\nKubetorch automatically:\r\n  - Generates the appropriate Kubernetes manifest (Deployment, Knative Service, etc.)\r\n  - Deploys and manages the workload\r\n  - Creates service routing\r\n  - Handles code syncing and remote execution\r\n\r\n### Bring Your Own Manifest (#1914, #1916, #1935, #1960)\r\nFull support for deploying custom Kubernetes manifests while still leveraging Kubetorch's module system, code syncing, and remote execution capabilities.\r\n\r\n#### Use Cases\r\n(1) Provide your own K8s manifest and let Kubetorch manage the deployment\r\n\r\n```python\r\nimport kubetorch as kt\r\n\r\n# Your custom deployment manifest\r\nmy_manifest = {\"apiVersion\": \"apps\u002Fv1\", \"kind\": \"Deployment\", ...}\r\n\r\n# KT applies the manifest and creates the routing service\r\ncompute = kt.Compute.from_manifest(\r\n  manifest=my_manifest,\r\n  selector={\"app\": \"my-workers\"}\r\n)\r\nremote_fn = kt.fn(my_func).to(compute)\r\nresult = remote_fn(1, 2)\r\n```\r\n\r\n(2) Apply the manifest separately\r\n\r\nPoint Kubetorch at resources deployed via `kubectl` (or another tool):\r\n\r\n```python\r\n# User already deployed pods with label app=workers via kubectl\r\n# KT only registers the pool and creates routing\r\n\r\ncompute = 
kt.Compute(selector={\"app\": \"workers\", \"team\": \"ml\"})\r\nremote_fn = kt.fn(my_func).to(compute)\r\nresult = remote_fn(1, 2)\r\n```\r\n\r\n(3) Provide a service endpoint\r\n\r\nUse your own ingress, service mesh, or load balancer:\r\n\r\n```python\r\ncompute = kt.Compute.from_manifest(\r\n    manifest=my_manifest,\r\n    selector={\"app\": \"my-workers\"},\r\n    endpoint=kt.Endpoint(url=\"http:\u002F\u002Fmy-svc.my-namespace.svc.cluster.local:8080\")\r\n)\r\n```\r\n\r\n(4) Custom endpoint selector for routing\r\n\r\nRoute to a subset of your pods (e.g., only worker pods, not master):\r\n\r\n```python\r\ncompute = kt.Compute.from_manifest(\r\n    manifest=pytorch_job_manifest,\r\n    selector={\"job-name\": \"my-job\"}, \r\n    endpoint=kt.Endpoint(selector={\"job-name\": \"my-job\", \"replica-type\": \"worker\"})  # Route: workers only\r\n)\r\n```\r\n\r\n## Highlights\r\n- **`Compute.from_manifest()`**: Create a Compute object from any existing Kubernetes manifest (Deployment, Knative Service, RayCluster, or Kubeflow Training Jobs)\r\n- **Kubeflow v1 Training Jobs**: Native support for PyTorchJob, TFJob, MXJob, and XGBoostJob with automatic distributed execution\r\n- **Property overrides**: Modify CPU, memory, replicas, image, env vars, and other settings on imported manifests using standard Compute properties\r\n- **Custom service managers**: Extensible architecture for adding support for additional Kubernetes resource types\r\n\r\n## Improvements\r\n\r\n### Performance & Reliability \r\n* Add retry logic for transient rsync errors (#2061)\r\n* Increase loki stream limits (#2063)\r\n* Improve service teardown reliability (#2057)\r\n* PDB websocket cleanup (#2056)\r\n* Remove default httpx client timeouts for long running connections (#2122)\r\n* Update TTL support to scrape pod metrics via Prometheus (#2149)\r\n\r\n### Distributed Training  \r\n* Auto-enable distributed execution for training jobs with more than one replica (#2111)\r\n* Cloud agnostic DCGM 
config (#2114)\r\n\r\n### Architecture & Refactoring \r\n* Move all module calls into subprocesses (#2069, #2070)\r\n* Consolidate launch time updates to service manager (#2046)\r\n* Remove env vars from manifest and reverse websocket connection from launched pods to Kubetorch controller (#2081)\r\n* Simplify http server launched on Kubetorch workload pods (#2108)\r\n* Replace requests with httpx for improved HTTP client functionality (#2107)\r\n* Remove unused launch params (#2130)\r\n* Remove unused controller client APIs (#2138)\r\n* Remove helm limits (#2146)\r\n\r\n## Bug Fixes\r\n* Always start Ray on head node where relevant for BYO manifests (#2046)\r\n* Properly stream logs for `kt run` app (#2103, #2121)\r\n* Fix `kt app` liveness check and logging config (#2105)\r\n* Fix noisy log streaming and event loop blocking (#2094)\r\n* Persist serialization type when reloading from an existing manifest (#2110)\r\n* Fix `sys.path` for scripts in subdirectories to enable sibling package imports (#2140)\r\n* Fix log streaming duplica","2026-01-19T18:48:14",{"id":192,"version":193,"summary_zh":194,"released_at":195},111249,"v0.3.0","This release introduces the Kubetorch Data Store, a unified cluster data transfer system for seamless data movement between your local machine and Kubernetes pods.\r\n\r\n## Kubetorch Data Store \r\n\r\nThe data store provides a unified `kt.put()` and `kt.get()` API for both filesystem and GPU data. It solves two critical gaps in Kubernetes for machine learning:\r\n\r\n1. **Fast deployment**: Sync code and data to your cluster instantly via rsync - no container rebuilds necessary \r\n2. 
**In-cluster data sharing**: Peer-to-peer data transfer between pods with automatic caching and discovery - the \"object store\" functionality that Ray users miss\r\n\r\n(#1994, #1989, #1988, #1987, #1985, #1982, #1981, #1979, #1933, #1932, #1929,  #1893, #1997)\r\n\r\n#### Highlights\r\n  - **Unified API**: Single `kt.put()`\u002F`kt.get()` interface handles both filesystem data (files\u002Fdirectories via rsync) and GPU data (CUDA tensors via NCCL broadcast)\r\n  - **No kubeconfig required**: The Python client communicates through a centralized controller endpoint\r\n  - **Peer-to-peer optimization**: Intelligent routing that tries pod-to-pod transfers first before falling back to the central store\r\n  - **GPU tensor & state dict transfers**: First-class support for CUDA tensor broadcasting via NCCL, including efficient packed transfers for model state dicts\r\n  - **Broadcast coordination**: `BroadcastWindow` enables coordinated multi-pod transfers with configurable quorum, fanout, and tree-based propagation\r\n\r\n####   Example Usage\r\n```python\r\nimport kubetorch as kt\r\n\r\n# Filesystem data\r\nkt.put(\"my-service\u002Fweights\", src=\".\u002Fmodel_weights\u002F\")\r\nkt.get(\"my-service\u002Fweights\", dest=\".\u002Flocal_copy\u002F\")\r\n\r\n# GPU tensors (NCCL broadcast)\r\nkt.put(\"checkpoint\", data=model.state_dict(), broadcast=kt.BroadcastWindow(world_size=2))\r\nkt.get(\"checkpoint\", dest=dest_state_dict, broadcast=kt.BroadcastWindow(world_size=2))\r\n\r\n# List and manage keys\r\nkt.ls(\"my-service\u002F\")\r\nkt.rm(\"my-service\u002Fold-checkpoint\")\r\n```\r\n\r\nSee the [docs](https:\u002F\u002Fwww.run.house\u002Fkubetorch\u002Fguides\u002Fdata-store) for more info.\r\n\r\n## Improvements\r\n* Remove queue and scheduler from Compute configuration options (#1968)\r\n* Added `kt teardown` support for training jobs (#1986)\r\n* Updates to metrics streaming output (#1984)\r\n* Remove OTEL as a Helm dependency (#2016, #2022)\r\n* Allow custom 
annotations for Kubetorch service account configuration (#2009)\r\n\r\n## Bug Fixes\r\n* Use correct container name when querying logs for Kubetorch services (#1972)\r\n* Prevent events and logs from printing on same line (#2008)\r\n* Async lifecycle management and cleanup (#2028)\r\n* Start Ray on head node if no distributed config provided with BYO manifest (#2046)\r\n* Handle image pull errors when checking for knative service readiness (#2050)\r\n* Control over autoscaler pod eviction behavior for distributed jobs (#2052)","2025-12-30T16:44:08",{"id":197,"version":198,"summary_zh":199,"released_at":200},111250,"v0.2.9","## Improvements\r\n* Introduce global LoggingConfig for easier control of logging behavior with Kubetorch services (#1959)\r\n* Simplify compute spec requirements in factory and constructor (#1963)\r\n\r\n## Bug Fixes\r\n* Prevent log recursion errors (#1957)\r\n* Metrics streaming for async service calls (#1958)\r\n* Specify limits in pod template when requesting GPUs (#1965)","2025-12-09T00:37:32",{"id":202,"version":203,"summary_zh":204,"released_at":205},111251,"v0.2.8","## Improvements\r\n* Support for loading all pod IPs for distributed workloads (#1937)\r\n* Add app and deployment id labels for easier querying of all Kubetorch deployments (#1940)\r\n* Improve pdb debugging (#1950)\r\n* Remove resource limits for workloads (#1949)\r\n* Set allowed serialization methods using local environment variable (#1951)\r\n\r\n## Bug Fixes\r\n* Fix pdb support and other query params for callables (#1939)\r\n* Setting Kubetorch volume mount paths on creation & reload (#1944)\r\n* Fix `to_async()` when `get_if_exists` is set to True (#1945)","2025-12-05T13:56:02",{"id":207,"version":208,"summary_zh":209,"released_at":210},111252,"v0.2.7","## Improvements\r\n* Added `kt logs` CLI command to stream and follow Kubetorch deployment logs (including distributed deployments) (#1928)\r\n* Add affinity and tolerations for rsync and nginx proxy deployments 
(#1923)\r\n* Filter out metric service calls from kubetorch pods (#1918)\r\n\r\n## Bug Fixes\r\n* Always SSH into head node for Ray deployments (#1931)\r\n* Metrics streaming for async calls (#1924)","2025-11-26T23:00:33",{"id":212,"version":213,"summary_zh":214,"released_at":215},111253,"v0.2.6","## Improvements\r\n
* Expanded Python version support (#1907)\r\n* Support Helm installation in any namespace for better isolation (#1913)\r\n* Suppress metrics log checks in kubetorch pod logs (#1918)\r\n* Relax cluster readiness checks (#1919)\r\n\r\n## Bug Fixes\r\n* Template label parsing (#1908)\r\n* Deploy headless service for distributed use cases only (#1920)","2025-11-20T20:43:32",{"id":217,"version":218,"summary_zh":219,"released_at":220},111254,"v0.2.5","## New Features\r\n\r\n
### Notebook Integration\r\n- Added `kt notebook` CLI command to launch a JupyterLab instance connected directly to your Kubetorch services (#1890)\r\n- You can now send Kubetorch functions defined inside local Jupyter notebooks to run on your cluster — no extra setup needed (#1892)\r\n\r\n
## Improvements\r\n* Simplified `kubetorch[client]` dependencies (#1896)\r\n* Faster `kt list` CLI loading (#1905)\r\n\r\n## Bug Fixes\r\n* Module and submodule reimporting on the cluster (#1902) ","2025-11-13T22:08:21",{"id":222,"version":223,"summary_zh":224,"released_at":225},111259,"v0.0.42","## New Features\r\n
### Updates to `rh.Image`\r\n* Improved functionality for installing specific package types - use `pip_install()`, `conda_install()`, `sync_package()` in place of the more generic `install_packages()`. For a local editable package, use the flow `sync_package(\"local_path\").pip_install(\"path_on_cluster\")`\r\n
* Add support for uv package installation type, using `uv_install()` (#1767)\r\n* Add venv support through `set_venv()` - start the runhouse server inside the specified venv. venv creation steps can be specified to run prior to setting the venv (#1782)\r\n
* Add Python version support - create a venv with the specified Python version using uv, and start the runhouse server inside the venv (#1792)\r\n\r\n
### DockerCluster (#1786)\r\n* Runhouse cluster wrapper around a Dockerfile. Can be constructed using `rh.DockerCluster`\r\n\r\n### Other\r\n* Rsync to accept node index or \"head\" for node argument (#1804)\r\n* Add resync_image option to restart_server (#1790)\r\n\r\n
## Improvements\r\n* Parallelize runhouse and ray setup (#1772)\r\n* Disk memory exception handling - throw an error if there is not enough space on disk for rsync and package installations (#1770)\r\n\r\n
## Deprecations\r\n* Removing reqs and git package types from strings. Instead, use `run_bash(\"pip install path\u002Freqs.txt\")` or `run_bash([\"git clone xxx\", \"pip install xxx\"])` (#1766)\r\n* Fully deprecating `accelerators` from cluster factory. Use `gpus` instead.","2025-03-10T20:19:35",{"id":227,"version":228,"summary_zh":229,"released_at":230},111260,"v0.0.41","## Improvements\r\n
* Expand monitoring and reporting of multi-node clusters (#1739, #1740, #1746, #1752)\r\n* Enable cluster sharing (#1683)\r\n\r\n## Bug Fixes\r\n* Handling of arguments in the cluster factory (#1753)","2025-02-19T16:24:27",{"id":232,"version":233,"summary_zh":234,"released_at":235},111261,"v0.0.40","Quick release to fix a bug in env var support in cluster node setup","2025-02-12T21:03:45",{"id":237,"version":238,"summary_zh":239,"released_at":240},111262,"v0.0.39","## Improvements\r\n
* Credentials-related improvements (#1665)\r\n* Better logs and printing format (#1673, #1689, #1727)\r\n* Add default replicas behavior for distributed (#1677)\r\n* Sync secrets to all nodes + envs (#1698, #1726)\r\n* Parallel setup for multinode clusters (#1699)\r\n* Use a new process for every new function on the cluster (#1724)\r\n* Ability to set env vars globally (#1725)\r\n\r\n
## New Features\r\n* Make server dependencies optional, with `pip install runhouse[server]` (#1652)\r\n* Support custom VPC for Den Launcher (#1669, #1675)\r\n* SSH CLI support for a specific node (#1688)\r\n* Intra-cluster rsync (#1697, #1694)\r\n* Alias for rh.cluster (rh.compute) and rh.module (rh.cls) (#1728, #1742)\r\n\r\n
## Bug Fixes\r\n* Fix GCP secret write (#1687)\r\n* Properly compute image config with secrets (#1692)\r\n","2025-02-05T21:39:52",{"id":242,"version":243,"summary_zh":244,"released_at":245},111263,"v0.0.38","## Improvements\r\n
* Add functionality for cluster kill process (#1588)\r\n* Cluster factory refactor to be more robust (#1591)\r\n* Fix bug with `run_bash` in Image setup (#1608)\r\n* Update `run_bash` output to match input type (#1639)\r\n* Bring up the cluster server after launching (#1645)\r\n* Fix to PyTorch distributed (#1650)\r\n* Detect and update stale on-demand IPs in `restart_server` (#1633)\r\n\r\n
## Deprecations\r\n* Deprecate `accelerators` in favor of `gpus` for the cluster factory (#1648)\r\n\r\n## BC-Breaking\r\nThe following were deprecated in an earlier release, and are now no longer supported:\r\n* `config_for_rns`: Resource config is now just named `config` instead of `config_for_rns` (#1627)\r\n
* `default_env`: Please use `rh.Image()` class to specify default setup for a Runhouse cluster, and pass it in the `image` field of the cluster factory. Links to: [docs](https:\u002F\u002Fwww.run.house\u002Fdocs\u002Fen\u002Fapi\u002Fpython\u002Fimage), [examples](https:\u002F\u002Fwww.run.house\u002Fdocs\u002Fen\u002Ftutorials\u002Fapi-images) (#1627)\r\n
* `address`: Use `cluster.head_ip` instead of `cluster.address` (#1627)\r\n* `cluster.run()`: Please instead use `run_bash` (to run over HTTP) or `run_bash_over_ssh` (to run over SSH)\r\n* `num_instances` in cluster factory: Please use `num_nodes` instead (#1634)\r\n\r\n
## Docs\r\n* New Image, Process tutorial pages (#1607)\r\n* Updated Docker workflows (#1599)\r\n* ... 
more overall docs and examples updates","2025-01-10T22:30:48",{"id":247,"version":248,"summary_zh":249,"released_at":250},111264,"v0.0.37","## Highlights\r\n
### ⚠️⚠️ Envs are no longer supported ⚠️⚠️\r\nThe Runhouse Env class (`rh.Env` or `rh.env`) is no longer supported. Instead, we introduce the concept of a `process` to handle running on a specific worker, and the `Image` replaces the `default_env` of a cluster, specifying any setup steps to take on the cluster.\r\n\r\n
#### Process\r\nTo specify env vars, compute, conda envs, or anything necessary to run on a specific process, you can create a process through the `cluster.ensure_process_created()` function.\r\n\r\nInstead of:\r\n```\r\nenv = rh.env(name=\"my_env\", reqs=[\"numpy\", \"pandas\"], env_vars=MY_ENV_VARS, compute={\"CPU\": 0.5})\r\nrh.function(local_fn).to(cluster, env=env)\r\n```\r\n\r\n
You can do:\r\n```\r\ncluster.install_packages([\"numpy\", \"pandas\"])\r\nprocess = cluster.ensure_process_created(\"my_process\", env_vars=MY_ENV_VARS, compute={\"CPU\": 0.5})\r\nrh.function(local_fn).to(cluster, process=process)\r\n```\r\n\r\n
#### Image\r\nImage is a new primitive that makes it easier to specify environment and setup steps to start the cluster with. It replaces the previous `default_env` cluster argument. It supports installing packages, running commands, creating a conda env, loading from a machine\u002Fdocker image, and more.\r\n
```\r\nmy_image = (\r\n    rh.Image(name=\"base_image\")\r\n    .setup_conda_env(\r\n        conda_env_name=\"base_env\",\r\n        conda_yaml={\"dependencies\": [\"python=3.11\"], \"name\": \"base_env\"},\r\n    )\r\n    .install_packages([\"numpy\", \"pandas\"])\r\n    .set_env_vars({\"OMP_NUM_THREADS\": 1})\r\n)\r\ncluster = rh.cluster(\"rh-cpu\", instance_type=\"CPU:2+\", image=my_image)\r\ncluster.up_if_not()\r\n```\r\n\r\n
### Den Launcher\r\nNow you can manage clusters (up, teardown, and status checks) via the Runhouse launcher \u002F control plane. This is especially useful for cases where a local sky database would not be available, like upping a cluster in distributed workflows. Set `launcher=\"den\"` in the cluster factory or include `launcher: den` in your local `~\u002F.rh\u002Fconfig.yaml` to use this feature.\r\n\r\n
You can also track cluster status and memory consumption, and view logs for Den-launched clusters in the [resources dashboard](https:\u002F\u002Fwww.run.house\u002Fresources).\r\n\r\n
## New Features\r\n* introduce `DockerRegistrySecret` for pulling from private or cloud provider registries \r\n\r\n
## Updates\r\n* rename cluster `launched_properties` to `compute_properties`, and save generated internal and external IPs in there\r\n* set autosave to False by default, configurable in your rh config\r\n* allow cluster name to be passed into runhouse server CLI commands\r\n* add `rh` CLI alias\r\n\r\n
## Bugfixes\r\n* cluster factory to properly handle differences when loading ints and `autostop_mins`\r\n\r\n## Examples\r\n* updated examples to remove env dependency, and follow the new process\u002FImage flow\r\n","2024-12-12T00:41:07",{"id":252,"version":253,"summary_zh":254,"released_at":255},111265,"v0.0.36","## Highlights\r\n\r\n
Enrich CLI commands (and corresponding Python APIs) for interacting with Runhouse clusters. 
\r\n\r\n`runhouse cluster`: `list`, `down`, `up`, `keep-warm`, `logs`, `status`, `ssh`\r\n\r\n`runhouse server`: `start`, `restart`, `stop`, `status`\r\n\r\n### New Features\r\n\r\n
* Cluster list support (#1225, #1227, #1231, #1233, #1245)\r\n* Runhouse cluster & server CLI support (#1268, #1301)\r\n* Default SSH key to use for clusters (#1357, #1358, #1359, #1365)\r\n* Kubeconfig secret (#1346)\r\n* Distributed Pool - runhouse, Ray, PyTorch, and Dask (#1304, #1305, #1378, #1379)\r\n\r\n
### Improvements\r\n\r\n* Cluster to reuse secret keys instead of generating new secrets per cluster (#1338, #1344)\r\n* Log streaming for nested and multinode (#1375, #1377)\r\n\r\n
### Bugfixes\r\n* Cluster reloading fixes (#1290, #1291)\r\n* Don't refresh when initializing on-demand clusters via Sky (#1258)\r\n* Fix notebook support (#1390)\r\n* Fix multinode K8s (#1376)\r\n\r\n
### Deprecations\r\n* Python 3.7 support (#1281)\r\n* Replacing cluster `num_instances` with `num_nodes` (#1380, #1405)\r\n* Replace cluster `address` with `head_ip` (#1370)\r\n\r\n
### Build\r\n* Pin skypilot to 0.7.0 for faster cluster start times\r\n\r\n### Examples\r\n* Running Flux1 Schnell on AWS EC2 (#1275)\r\n* Distributed Examples\r\n     * Distributed Pool (#1280)\r\n     * PyTorch HPO and ResNet (#1378)\r\n     * Dask LGBM Train (#1379)\r\n","2024-11-13T15:04:20"]