[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-cortexlabs--cortex":3,"tool-cortexlabs--cortex":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",144730,2,"2026-04-07T23:26:32",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107888,"2026-04-06T11:32:50",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 
助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[35,15,13,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":77,"owner_twitter":76,"owner_website":78,"owner_url":79,"languages":80,"stars":105,"forks":106,"last_commit_at":107,"license":108,"difficulty_score":109,"env_os":110,"env_gpu":111,"env_ram":112,"env_deps":113,"category_tags":120,"github_topics":121,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":124,"updated_at":125,"faqs":126,"releases":157},5421,"cortexlabs\u002Fcortex","cortex","Production infrastructure for machine learning at scale","Cortex 是一套专为大规模机器学习生产环境设计的基础设施平台，旨在帮助团队轻松部署、管理和扩展 AI 模型。它主要解决了模型从实验阶段走向实际应用时面临的运维难题，如流量波动导致的资源浪费、高并发下的响应延迟以及复杂的集群管理成本。\n\nCortex 特别适合需要构建稳定生产系统的机器学习工程师、数据科学家及后端开发人员。其核心亮点在于提供了三种灵活的工作负载模式：支持根据实时请求量自动伸缩的“实时”服务、基于队列长度弹性调整的“异步”处理，以及容错性强的分布式“批量”任务。在架构上，Cortex 深度集成 AWS 生态，基于 EKS 运行，能够智能调度 CPU 与 GPU 资源，并支持利用低成本 Spot 实例配合自动备份机制来优化开支。此外，它还具备声明式配置和 Terraform 
支持，可无缝对接主流监控与日志工具，让开发者能专注于算法本身，而将繁琐的基础设施运维交给 Cortex 自动化处理。需要注意的是，该项目目前已不再由原作者积极维护，用户在采纳前建议评估其长期支持情况。","**[Docs](https:\u002F\u002Fdocs.cortexlabs.com)** • **[Slack](https:\u002F\u002Fcommunity.cortexlabs.com)**\n\n\u003Cbr>\n\n\u003Cimg src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcortexlabs_cortex_readme_b22af5f36be6.png' height='32'>\n\n\u003Cbr>\n\nNote: This project is no longer actively maintained by its original authors.\n\n# Production infrastructure for machine learning at scale\n\nDeploy, manage, and scale machine learning models in production.\n\n\u003Cbr>\n\n## Serverless workloads\n\n**Realtime** - respond to requests in real-time and autoscale based on in-flight request volumes.\n\n**Async** - process requests asynchronously and autoscale based on request queue length.\n\n**Batch** - run distributed and fault-tolerant batch processing jobs on-demand.\n\n\u003Cbr>\n\n## Automated cluster management\n\n**Autoscaling** - elastically scale clusters with CPU and GPU instances.\n\n**Spot instances** - run workloads on spot instances with automated on-demand backups.\n\n**Environments** - create multiple clusters with different configurations.\n\n\u003Cbr>\n\n## CI\u002FCD and observability integrations\n\n**Provisioning** - provision clusters with declarative configuration or a Terraform provider.\n\n**Metrics** - send metrics to any monitoring tool or use pre-built Grafana dashboards.\n\n**Logs** - stream logs to any log management tool or use the pre-built CloudWatch integration.\n\n\u003Cbr>\n\n## Built for AWS\n\n**EKS** - Cortex runs on top of EKS to scale workloads reliably and cost-effectively.\n\n**VPC** - deploy clusters into a VPC on your AWS account to keep your data private.\n\n**IAM** - integrate with IAM for authentication and authorization workflows.\n","**[文档](https:\u002F\u002Fdocs.cortexlabs.com)** • **[Slack](https:\u002F\u002Fcommunity.cortexlabs.com)**\n\n\u003Cbr>\n\n\u003Cimg 
src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcortexlabs_cortex_readme_b22af5f36be6.png' height='32'>\n\n\u003Cbr>\n\n注意：该项目已不再由其原始作者积极维护。\n\n# 面向大规模机器学习的生产基础设施\n\n在生产环境中部署、管理和扩展机器学习模型。\n\n\u003Cbr>\n\n## 无服务器工作负载\n\n**实时** - 实时响应请求，并根据当前处理中的请求数量自动扩缩容。\n\n**异步** - 异步处理请求，并根据请求队列长度自动扩缩容。\n\n**批处理** - 按需运行分布式且具备容错能力的批处理作业。\n\n\u003Cbr>\n\n## 自动化集群管理\n\n**自动扩缩容** - 使用 CPU 和 GPU 实例弹性扩展集群。\n\n**竞价实例** - 在竞价实例上运行工作负载，并配备自动化的按需备份机制。\n\n**环境** - 创建具有不同配置的多个集群。\n\n\u003Cbr>\n\n## CI\u002FCD 与可观测性集成\n\n**资源供应** - 使用声明式配置或 Terraform Provider 来供应集群。\n\n**指标** - 将指标发送至任何监控工具，或使用预构建的 Grafana 仪表板。\n\n**日志** - 将日志流式传输至任何日志管理工具，或使用预构建的 CloudWatch 集成。\n\n\u003Cbr>\n\n## 专为 AWS 构建\n\n**EKS** - Cortex 运行在 EKS 之上，可靠且经济高效地扩展工作负载。\n\n**VPC** - 将集群部署到您 AWS 账户中的 VPC，以保护数据隐私。\n\n**IAM** - 与 IAM 集成，实现身份验证和授权流程。","# Cortex 快速上手指南\n\n> **⚠️ 重要提示**：本项目已不再由原作者积极维护。在生产环境中使用前，请仔细评估风险或考虑社区 fork 版本。\n\nCortex 是一个用于在 AWS 上大规模部署、管理和扩展机器学习模型的生产级基础设施。它基于 Kubernetes (EKS) 构建，支持实时推理、异步处理和批量任务。\n\n## 环境准备\n\n在开始之前，请确保满足以下系统要求和前置依赖：\n\n*   **操作系统**: Linux 或 macOS (Windows 用户建议使用 WSL2)。\n*   **AWS 账户**: 需要有效的 AWS 账户，并配置好访问权限。\n*   **前置工具**:\n    *   `aws-cli`: 用于配置 AWS 凭证。\n    *   `kubectl`: 用于与 Kubernetes 集群交互（Cortex 安装过程会自动管理，但建议预先安装以便调试）。\n*   **网络环境**: 确保机器可以访问 AWS API 和 Docker Hub。\n    *   *国内加速建议*: 由于 Cortex 依赖 Docker 镜像拉取，国内用户建议配置 Docker 镜像加速器（如阿里云、腾讯云容器镜像服务）以避免拉取超时。\n\n**配置 AWS 凭证：**\n```bash\naws configure\n```\n*(按提示输入 Access Key ID, Secret Access Key, Region 等)*\n\n## 安装步骤\n\nCortex 主要通过命令行工具 `cortex` 进行安装和管理。\n\n1.  **下载并安装 Cortex CLI**\n\n    使用官方提供的安装脚本（国内网络不佳时可尝试手动下载二进制文件）：\n\n    ```bash\n    export CORTEX_VERSION=latest\n    bash -c \"$(curl -fsSL https:\u002F\u002Fraw.githubusercontent.com\u002Fcortexlabs\u002Fcortex\u002F$CORTEX_VERSION\u002Finstall.sh)\"\n    ```\n\n2.  **验证安装**\n\n    检查版本号以确认安装成功：\n\n    ```bash\n    cortex version\n    ```\n\n3.  
**初始化集群配置**\n\n    创建一个新的集群配置文件（例如 `cortex.yaml`）：\n\n    ```bash\n    cortex cluster configure --config cortex.yaml\n    ```\n\n    *编辑 `cortex.yaml` 文件，根据你的需求调整区域（region）、实例类型（如 `g4dn.xlarge` 用于 GPU）以及是否启用 Spot 实例。*\n\n4.  **创建集群**\n\n    执行以下命令在 AWS EKS 上启动集群：\n\n    ```bash\n    cortex cluster up --config cortex.yaml\n    ```\n\n    > 注意：此过程可能需要 10-15 分钟。完成后，CLI 会自动更新 kubeconfig 以便连接集群。\n\n## 基本使用\n\n以下是一个部署简单的实时（Realtime）API 的最小化示例。\n\n### 1. 准备模型代码\n\n创建一个目录，例如 `iris-classifier`，并在其中添加两个文件：\n\n**`predictor.py`** (模型逻辑):\n```python\nclass PythonPredictor:\n    def __init__(self, config, job_info):\n        pass\n\n    def predict(self, payload):\n        # 简单的回声示例，实际项目中在此处加载模型并进行推理\n        return {\"received\": payload}\n```\n\n**`cortex.yaml`** (API 配置):\n```yaml\n- name: iris-classifier\n  kind: RealtimeAPI\n  predictor:\n    type: python\n    path: predictor.py\n  compute:\n    cpu: 0.5\n    mem: 1G\n```\n\n### 2. 部署 API\n\n在项目目录下运行部署命令：\n\n```bash\ncortex deploy\n```\n\nCortex 将构建 Docker 镜像，推送到注册表，并在集群中启动服务。\n\n### 3. 调用 API\n\n部署成功后，获取 API 端点并发送请求：\n\n```bash\n# 查看 API 状态和端点\ncortex get iris-classifier\n\n# 发送测试请求 (替换 \u003CENDPOINT_URL> 为实际返回的 URL)\ncurl http:\u002F\u002F\u003CENDPOINT_URL> \\\n    -X POST \\\n    -H \"Content-Type: application\u002Fjson\" \\\n    -d '{\"features\": [5.1, 3.5, 1.4, 0.2]}'\n```\n\n### 4. 
清理资源\n\n使用完毕后，务必销毁集群以避免产生不必要的 AWS 费用：\n\n```bash\ncortex cluster down --config cortex.yaml\n```","某电商初创团队需要在黑五大促期间，将其实时推荐模型从测试环境快速迁移至生产环境，以应对突发的流量洪峰。\n\n### 没有 cortex 时\n- **扩容滞后导致服务崩溃**：面对瞬间激增的请求，手动配置 Kubernetes 集群响应太慢，导致 API 超时甚至宕机，错失销售良机。\n- **资源成本高昂且浪费**：为保稳定不得不长期预留大量 GPU 实例，但在非高峰时段资源闲置严重，云账单居高不下。\n- **运维复杂度极高**：工程师需花费大量时间编写脚本管理 EKS 集群、处理 Spot 实例中断备份及配置监控日志，无法专注模型优化。\n- **部署流程繁琐**：每次模型更新都需手动调整基础设施配置，缺乏标准化的 CI\u002FCD 集成，上线周期长且易出错。\n\n### 使用 cortex 后\n- **毫秒级弹性伸缩**：cortex 根据实时请求量自动扩缩容，无论是实时推理还是异步队列任务，都能无缝承接流量峰值，保障服务零中断。\n- **智能降低成本**：自动调度工作负载至 Spot 实例并配合按需备份机制，在确保容错的同时将计算成本降低高达 70%。\n- **基础设施即代码**：通过声明式配置文件一键部署包含 VPC 隔离和 IAM 权限的生产级集群，自动集成 CloudWatch 日志与 Grafana 监控，运维负担归零。\n- **流畅的发布体验**：原生支持 CI\u002FCD 流程，模型迭代只需提交配置变更，cortex 自动完成灰度发布与版本管理，上线效率提升数倍。\n\ncortex 让团队从繁琐的基础设施运维中解放出来，实现了机器学习模型在 AWS 上的低成本、高可用规模化落地。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcortexlabs_cortex_0dbcb41c.png","cortexlabs","Cortex Labs","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fcortexlabs_9f60348b.png","",null,"hello@cortex.dev","https:\u002F\u002Fcortex.dev","https:\u002F\u002Fgithub.com\u002Fcortexlabs",[81,85,89,93,97,101],{"name":82,"color":83,"percentage":84},"Go","#00ADD8",89.4,{"name":86,"color":87,"percentage":88},"Python","#3572A5",5.2,{"name":90,"color":91,"percentage":92},"Shell","#89e051",2.8,{"name":94,"color":95,"percentage":96},"Jinja","#a52a22",1.2,{"name":98,"color":99,"percentage":100},"Dockerfile","#384d54",0.9,{"name":102,"color":103,"percentage":104},"Makefile","#427819",0.4,8026,596,"2026-04-05T00:28:12","Apache-2.0",5,"未说明 (基于 AWS EKS，通常指 Linux 环境)","非必需，支持弹性伸缩的 CPU 和 GPU 实例 (具体型号取决于 AWS EC2 配置)","未说明 (取决于所选 AWS 实例类型)",{"notes":114,"python":115,"dependencies":116},"该项目已不再由原作者积极维护。这是一个基于 AWS EKS 的基础设施平台，用于部署和管理机器学习模型，而非单一的本地运行脚本。它支持实时、异步和批处理工作负载，并具备自动扩缩容和使用 Spot 实例的能力。用户需要拥有 AWS 账户并配置好 VPC 和 IAM 权限。","未说明",[117,118,119],"Terraform (可选用于资源供应)","Grafana (可选用于监控)","AWS CloudWatch 
(可选用于日志)",[14],[122,123],"machine-learning","infrastructure","2026-03-27T02:49:30.150509","2026-04-08T14:25:05.554165",[127,132,137,142,147,152],{"id":128,"question_zh":129,"answer_zh":130,"source_url":131},24601,"如何通过 Cortex 实现模型输出的实时流式传输（word-by-word），特别是在使用 HTTPS 时？","AWS API Gateway 可能不支持服务器发送事件（SSE），这会导致 HTTPS 请求下无法实时流式输出，只能等待整个过程结束后一次性返回。解决方案是使用自定义 SSL 证书（参考官方文档配置自定义域名），然后将请求直接发送到负载均衡器（Load Balancer），绕过 API Gateway，即可在 HTTPS 下实现实时流式输出。","https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1500",{"id":133,"question_zh":134,"answer_zh":135,"source_url":136},24602,"如何确保每个用户的数据隔离，避免不同用户的请求共享同一个 AWS 实例导致敏感数据泄露？","Cortex 默认会在实例间复用资源，若需严格隔离用户数据，建议在代码层面处理。虽然目前无法直接标记实例专属于某个用户，但可以通过在 `predict` 方法或构造函数中动态初始化资源（如使用 boto3 连接独立的 S3 路径或数据库）来确保数据隔离。推荐做法是在 `__init__` 构造函数中导入模块并初始化特定于用户的资源，而不是在全局作用域共享状态。","https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1749",{"id":138,"question_zh":139,"answer_zh":140,"source_url":141},24603,"发送大文件（如音频）时遇到 'Resource exhausted' 错误，如何增加允许的最大消息大小？","该问题通常由 gRPC 默认的消息大小限制引起。维护者确认这是一个已知问题，并已在版本 0.27 中修复了 gRPC 资源耗尽的错误。如果您遇到此问题，请升级 Cortex 到 0.27 或更高版本。对于特别大的负载（如 40MB），如果升级后仍有性能问题，可能需要进一步检查 TensorFlow Serving (TFS) 的配置或网络延迟。","https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1740",{"id":143,"question_zh":144,"answer_zh":145,"source_url":146},24604,"如何在 Cortex 中限制每个进程使用的 GPU 显存大小？","您可以在预测器（predictor）代码中使用 TensorFlow 的 API 来限制显存。具体代码片段如下：\n```python\nmem_limit_mb = 1024\nfor gpu in tf.config.list_physical_devices(\"GPU\"):\n    tf.config.set_logical_device_configuration(\n        gpu, [tf.config.LogicalDeviceConfiguration(memory_limit=mem_limit_mb)])\n```\n这段代码应放置在模型加载之前，通常在 `PythonPredictor` 类的 `__init__` 方法或模块导入后的初始化部分执行，以确保在分配显存前生效。","https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1426",{"id":148,"question_zh":149,"answer_zh":150,"source_url":151},24605,"Cortex CLI 是否支持使用 `aws_session_token` 或 IAM 角色进行认证？","早期版本仅支持静态凭证。社区已提出支持 `aws_session_token` 和 
IAM 角色（如从 Lambda 或 EC2 继承）的需求。维护者已将该需求纳入研发计划，并在后续版本（如 v0.28 及以后）中逐步实现了对临时凭证和 IAM 角色认证的支持。建议查阅最新版的认证文档以获取具体的配置命令，通常可以通过环境变量或配置文件指定 session token。","https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1134",{"id":153,"question_zh":154,"answer_zh":155,"source_url":156},24606,"如何在本地环境部署和运行 Cortex 进行测试？","Cortex 的核心是构建并运行用户提供的 Docker 容器。若要在本地运行（例如用于测试或利用本地 GPU 集群），您可以直接构建您的预测器镜像，然后使用标准的 Docker 命令运行：`docker run \u003Ccontainer_name>`。虽然某些版本的本地编排功能有所变动，但最直接的方式是将 Cortex 生成的容器镜像提取出来，在本地 Docker 环境中手动启动，从而模拟服务行为。","https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F109",[158,163,168,173,178,183,188,193,198,203,208,213,218,223,228,233,238,243,248,253],{"id":159,"version":160,"summary_zh":161,"released_at":162},154175,"v0.42.1","# v0.42.1\n\n**新功能**\n\n* 增加对一组新的 EC2 实例的支持，其中包括 `c6` 和 `g5` 系列 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2414  ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\n\n**错误修复**\n\n* 修复了 VPC CNI 日志记录功能在运行 `cortex` CLI 时会触发警告日志的问题 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2443 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\n\n**其他**\n\n* 更新 Cortex 的依赖版本：eksctl、EKS 至 1.22，AWS IAM、Python 等 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2414 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian), [deliahu](https:\u002F\u002Fgithub.com\u002Fdeliahu))","2022-09-23T18:01:31",{"id":164,"version":165,"summary_zh":166,"released_at":167},154176,"v0.42.0","# v0.42.0\n\n**新功能**\n\n* 为 API 添加对 Classic 负载均衡器的支持；网络负载均衡器仍为默认选项 ([文档](https:\u002F\u002Fdocs.cortex.dev\u002Fclusters\u002Fmanagement\u002Fcreate#cluster.yaml)) https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2413 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2414 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\n\n**错误修复**\n\n* 修复异步 API 在探测空根路径 (`\u002F`) 时的 HTTP\u002FTCP 
探针问题 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2407 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\n* 修复 `cortex cluster export` 命令中的空指针异常 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2415 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2414 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\n* 确保用户指定的环境变量在 Kubernetes 部署规范中以确定性顺序排列 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2411 ([deliahu](https:\u002F\u002Fgithub.com\u002Fdeliahu))\n\n**其他**\n\n* 确保作业完成后的批量请求包含有效的 JSON 主体 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2409 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))","2022-01-10T17:48:15",{"id":169,"version":170,"summary_zh":171,"released_at":172},154189,"v0.31.0","# v0.31.0\r\n\r\n**New features**\r\n\r\n* Add support for AsyncAPI (experimental) ([docs](https:\u002F\u002Fdocs.cortex.dev\u002Fworkloads\u002Fintroduction)) https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1935 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1610 ([miguelvr](https:\u002F\u002Fgithub.com\u002Fmiguelvr))\r\n* Add support for multi-instance-type clusters to AWS\u002FGCP providers (experimental) ([aws](https:\u002F\u002Fdocs.cortex.dev\u002Fclusters\u002Faws\u002Fmulti-instance-type)\u002F[gcp](https:\u002F\u002Fdocs.cortex.dev\u002Fclusters\u002Fgcp\u002Fmulti-instance-type) docs) https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1951 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\r\n* Allow users to duplicate\u002Fmirror traffic using shadow pipelines https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1948 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1889 
([docs](https:\u002F\u002Fdocs.cortex.dev\u002Fworkloads\u002Frealtime-apis\u002Ftraffic-splitter\u002Fconfiguration)) ([vishalbollu](https:\u002F\u002Fgithub.com\u002Fvishalbollu))\r\n\r\n**Breaking changes**\r\n\r\n* `on_demand_backup` in cluster configuration has been removed in favour of using a cluster with a mixture of spot and on-demand nodegroups. See multi-instance documentation for [aws](https:\u002F\u002Fdocs.cortex.dev\u002Fclusters\u002Faws\u002Fmulti-instance-type) and [gcp](https:\u002F\u002Fdocs.cortex.dev\u002Fclusters\u002Fgcp\u002Fmulti-instance-type) for more details.\r\n\r\n**Bug fixes**\r\n\r\n* Fix Python client not respecting CORTEX_CLI_CONFIG_DIR environment variable for client-id.txt https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1953 ([jackmpcollins](https:\u002F\u002Fgithub.com\u002Fjackmpcollins))\r\n* Prevent threads from being stuck in DynamicBatcher https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1915 ([cbensimon](https:\u002F\u002Fgithub.com\u002Fcbensimon))\r\n* Fix unexpected cortex logs termination by increasing buffer size https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1939 ([vishalbollu](https:\u002F\u002Fgithub.com\u002Fvishalbollu))\r\n* Decouple cluster deletion from EBS volume deletion for cortex cluster down https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1954 ([deliahu](https:\u002F\u002Fgithub.com\u002Fdeliahu))\r\n* Fix spot\u002Fon-demand GPU instances not joining the cluster by upgrading to eksctl 0.40.0 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1955 ([vishalbollu](https:\u002F\u002Fgithub.com\u002Fvishalbollu))\r\n* Prevent premature queue not found errors by preserving the SQS for minutes till after the job has completed https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1952 ([vishalbollu](https:\u002F\u002Fgithub.com\u002Fvishalbollu))\r\n\r\n**Docs**\r\n\r\n* Update 
docs https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1949 ([ospillinger](https:\u002F\u002Fgithub.com\u002Fospillinger))\r\n\r\n**Misc**\r\n\r\n* Configure a default cortex client to manage APIs from within cortex workloads https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1942 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1644 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\r\n* Save batch metrics to cloud to preserve job metrics history https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1940 ([vishalbollu](https:\u002F\u002Fgithub.com\u002Fvishalbollu))\r\n","2021-03-17T01:38:41",{"id":174,"version":175,"summary_zh":176,"released_at":177},154190,"v0.30.0","# v0.30.0\r\n\r\n**New features**\r\n\r\n* Record custom metrics from predictors and view them in Grafana ([docs](https:\u002F\u002Fdocs.cortex.dev\u002Fworkloads\u002Fobservability\u002Fmetrics#custom-user-metrics)) https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1910 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1897 ([miguelvr](https:\u002F\u002Fgithub.com\u002Fmiguelvr))\r\n* Add granular pod metrics to the Grafana dashboards https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1905 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\r\n* Add node metrics to Grafana dashboards https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1900 ([miguelvr](https:\u002F\u002Fgithub.com\u002Fmiguelvr))\r\n\r\n**Breaking changes**\r\n\r\n* Remove support for installing Cortex on your own Kubernetes Cluster https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1921 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\r\n\r\n**Bug fixes**\r\n\r\n* Fix bug where successfully completed jobs were marked as completed with errors 
https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1913 ([vishalbollu](https:\u002F\u002Fgithub.com\u002Fvishalbollu))\r\n* Fix bug where batch jobs were being terminated unnecessarily https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1917 ([vishalbollu](https:\u002F\u002Fgithub.com\u002Fvishalbollu))\r\n* Prevent cluster autoscaler from reallocating job pods https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1919 ([vishalbollu](https:\u002F\u002Fgithub.com\u002Fvishalbollu))\r\n* Address AWS cluster up quota issues such as not enough NAT Gateways or EIPs https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1912 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\r\n* Delete unused prometheus volume on cluster down https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1863 ([miguelvr](https:\u002F\u002Fgithub.com\u002Fmiguelvr))\r\n* Create .cortex dir if not present https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1909 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\r\n\r\n**Docs**\r\n\r\n* Add docs for accessing dashboard through private load balancer ([docs](https:\u002F\u002Fdocs.cortex.dev\u002Fworkloads\u002Fobservability\u002Fmetrics#accessing-the-dashboard)) https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1907 ([deliahu](https:\u002F\u002Fgithub.com\u002Fdeliahu))\r\n\r\n**Misc**\r\n\r\n* Allow specifying paths for requirements.txt, conda-packages.txt & dependencies.sh ([docs](https:\u002F\u002Fdocs.cortex.dev\u002Fworkloads\u002Fdependencies\u002Fpython-packages#customizing-dependency-paths)) https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1896 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1927 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1777 ([miguelvr](https:\u002F\u002Fgithub.com\u002Fmiguelvr))\r\n* 
Log relevant kubernetes events to API specific log streams https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1906 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F833 ([miguelvr](https:\u002F\u002Fgithub.com\u002Fmiguelvr))\r\n* Support credentials using AWS_SESSION_TOKEN with the CLI\u002FClient ([docs](https:\u002F\u002Fdocs.cortex.dev\u002Fclusters\u002Faws\u002Fauth#cortex-client)) https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1908 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1920 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1134 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1865 ([vishalbollu](https:\u002F\u002Fgithub.com\u002Fvishalbollu))\r\n* Provide auth to Operator and APIs by attaching IAM policies to the cluster ([docs](https:\u002F\u002Fdocs.cortex.dev\u002Fclusters\u002Faws\u002Fauth#authorizing-your-apis)) https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1908 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1858 ([vishalbollu](https:\u002F\u002Fgithub.com\u002Fvishalbollu))\r\n","2021-03-03T00:30:07",{"id":179,"version":180,"summary_zh":181,"released_at":182},154191,"v0.29.0","# v0.29.0\r\n\r\n**New features**\r\n\r\n* Add Grafana dashboard for APIs ([docs](https:\u002F\u002Fwww.docs.cortex.dev\u002Fworkloads\u002Frealtime-apis\u002Fmetrics)) https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1867 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1885 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1890 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1887 ([miguelvr](https:\u002F\u002Fgithub.com\u002Fmiguelvr))\r\n* Support API autoscaling in GCP clusters ([docs](https:\u002F\u002Fwww.docs.cortex.dev\u002Fworkloads\u002Frealtime-apis\u002Fautoscaling)) 
https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1814 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1879 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1601 ([miguelvr](https:\u002F\u002Fgithub.com\u002Fmiguelvr))\r\n* Support traffic splitting in GCP clusters ([docs](https:\u002F\u002Fwww.docs.cortex.dev\u002Fworkloads\u002Frealtime-apis\u002Ftraffic-splitter\u002Fexample)) https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1892 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1660 ([miguelvr](https:\u002F\u002Fgithub.com\u002Fmiguelvr))\r\n\r\n**Breaking changes**\r\n\r\n* The default Docker images for APIs have been slimmed down to not include packages other than what Cortex requires to function. Therefore, when deploying APIs, it is now necessary to include the dependencies that your predictor needs in `requirements.txt` ([docs](https:\u002F\u002Fwww.docs.cortex.dev\u002Fworkloads\u002Fdependencies\u002Fpython-packages)) and\u002For `dependencies.sh` ([docs](https:\u002F\u002Fwww.docs.cortex.dev\u002Fworkloads\u002Fdependencies\u002Fsystem-packages)).\r\n\r\n**Bug fixes**\r\n\r\n* Disable dynamic batcher for TensorFlow predictor type https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1888 ([miguelvr](https:\u002F\u002Fgithub.com\u002Fmiguelvr))\r\n* Support empty directory objects for models saved in S3\u002FGCS https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1830 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1829 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\r\n* Fix bug which prevented Task APIs on GCP from being cleaned up after completion https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1871 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\r\n\r\n**Docs**\r\n\r\n* Add documentation for using a version of 
Python other than the default via `dependencies.sh` ([docs](https:\u002F\u002Fwww.docs.cortex.dev\u002Fworkloads\u002Fdependencies\u002Fsystem-packages)) or custom images ([docs](https:\u002F\u002Fwww.docs.cortex.dev\u002Fworkloads\u002Fdependencies\u002Fimages)) https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1862 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1779 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\r\n\r\n**Misc**\r\n\r\n* Support deploying predictor Python classes from more environments (e.g. from separate Python files, AWS Lambda) https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1883 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fcommit\u002F3a1b777d06e660a49b6223badda4c5e8b1fe4ec1 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1824 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1826 ([vishalbollu](https:\u002F\u002Fgithub.com\u002Fvishalbollu))\r\n* Improve error logging for Batch and Task APIs https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1866 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1833 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))","2021-02-17T04:52:53",{"id":184,"version":185,"summary_zh":186,"released_at":187},154177,"v0.41.0","# v0.41.0\n\n**新功能**\n\n* 支持为容器配置 `pre_stop` 命令 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2403 ([文档](https:\u002F\u002Fdocs.cortex.dev\u002Fworkloads\u002Frealtime\u002Fconfiguration)) ([deliahu](https:\u002F\u002Fgithub.com\u002Fdeliahu))\n\n**其他**\n\n* 支持 m6i 实例类型 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2398 ([deliahu](https:\u002F\u002Fgithub.com\u002Fdeliahu))\n* 升级到 Kubernetes v1.21 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2398 
([deliahu](https:\u002F\u002Fgithub.com\u002Fdeliahu))\n\n**错误修复**\n\n* 在终止代理容器之前，等待进行中的请求数量降至零 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2402 ([deliahu](https:\u002F\u002Fgithub.com\u002Fdeliahu))\n* 修复 `cortex get --env` 命令 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2404 ([deliahu](https:\u002F\u002Fgithub.com\u002Fdeliahu))\n* 修复在使用按需基础容量的竞价型节点组时，`cortex cluster up` 过程中的集群价格估算问题 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2406 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\n\n## Nucleus 模型服务器\n\n我们发布了 [Nucleus 模型服务器](https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fnucleus) 的 v0.1.0 版本！\n\nNucleus 是一个用于 TensorFlow 和通用 Python 模型的模型服务器。它兼容 Cortex 集群、Kubernetes 集群以及其他基于容器的部署平台。Nucleus 也可以通过 Docker Compose 在本地运行。\n\nNucleus 的一些特性包括：\n\n* 支持通用 Python 模型（PyTorch、ONNX、Scikit-learn、MLflow、NumPy、Pandas 等）\n* 支持 TensorFlow 模型\n* 支持 CPU 和 GPU\n* 可直接从 S3 路径提供模型服务\n* 可配置的多进程和多线程处理\n* 多模型端点\n* 动态的服务器端请求批处理\n* 当新的模型版本上传到 S3 时自动重新加载模型\n* 基于 LRU 策略的模型缓存（磁盘和内存）\n* 支持 HTTP 和 gRPC","2021-12-08T02:15:29",{"id":189,"version":190,"summary_zh":191,"released_at":192},154178,"v0.40.0","# v0.40.0\n\n**新功能**\n\n* 支持异步 API 的并发控制（通过 `max_concurrency` 字段）https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2376 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2200 ([miguelvr](https:\u002F\u002Fgithub.com\u002Fmiguelvr))\n* 在集群指标仪表板中添加集群范围及每个 API 的成本分解图表 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2382 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1962 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\n* 允许包含异步 API 的工作节点缩放到零（现在使用共享的异步网关，该网关运行在操作员节点组上）https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2380 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2279 ([vishalbollu](https:\u002F\u002Fgithub.com\u002Fvishalbollu))\n* 为实时和异步 
API 添加 `cortex describe API_NAME` 命令 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2368 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2320 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2359 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\n* 支持更新现有节点组的优先级 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2369 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2254 ([vishalbollu](https:\u002F\u002Fgithub.com\u002Fvishalbollu))\n\n**其他**\n\n* 改进 API 状态的报告 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2368 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2320 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2359 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\n* 如果 API 规范中指定了自定义就绪探针，则移除目标端口上的默认就绪探针 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2379 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))","2021-08-05T04:33:35",{"id":194,"version":195,"summary_zh":196,"released_at":197},154179,"v0.39.1","# v0.39.1\n\n**错误修复**\n\n* 移除一项不必要的集群验证，该验证限制了 `api_load_balancer_cidr_white_list` 和 `operator_load_balancer_cidr_white_list` 中可使用的 IP 范围。https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2363 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))","2021-07-21T22:32:00",{"id":199,"version":200,"summary_zh":201,"released_at":202},154180,"v0.39.0","# v0.39.0\n\n**新功能**\n\n* 添加 `cortex cluster health` 命令，用于显示集群各组件的健康状态 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2313 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2029 ([miguelvr](https:\u002F\u002Fgithub.com\u002Fmiguelvr))\n* 将请求头转发到异步 API https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2329 
https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2296 ([miguelvr](https:\u002F\u002Fgithub.com\u002Fmiguelvr))\n* 为任务型 API 添加指标仪表板 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2311 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2322 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\n\n**可靠性**\n\n* 通过启用 IPVS，支持更大规模的集群（最多 1000 个节点、10000 个 Pod）https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2357 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1834 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\n* 自动限制节点添加速率，以避免 Kubernetes API 服务器过载 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2331 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2338 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2314 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\n* 确保集群自动伸缩器的可用性 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2347 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2346 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\n* 在大规模场景下提升 istiod 的可用性 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2342 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2332 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\n* 减少 `cortex get` 命令中显示的指标数量，以提高该命令的可扩展性和可靠性 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2333 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2319 ([vishalbollu](https:\u002F\u002Fgithub.com\u002Fvishalbollu))\n* 在集群仪表板中显示聚合的节点统计信息 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2336 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2318 
([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\n\n**错误修复**\n\n* 确保异步 API 提交响应的 `Content-Type` 头正确设置为 `application\u002Fjson` https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2323 ([vishalbollu](https:\u002F\u002Fgithub.com\u002Fvishalbollu))\n* 修复 Pod 自动伸缩器在缩容至零时的边缘情况 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2350 ([miguelvr](https:\u002F\u002Fgithub.com\u002Fmiguelvr))\n* 允许在运行中的 API 上更新自动伸缩配置 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2355 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\n* 修正集群自动伸缩器的节点组优先级计算 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2358 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2343 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian), [deliahu](https:\u002F\u002Fgithub.com\u002Fdeliahu))\n* 允许在运行中的 API 中更新 `node_groups` 选择器 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2354 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\n* 修复异步 API 仪表板上的活动副本图 https:\u002F\u002F","2021-07-20T23:56:59",{"id":204,"version":205,"summary_zh":206,"released_at":207},154181,"v0.38.0","# v0.38.0\n\n**新功能**\n\n* 支持实时 API 的自动缩放，可缩放到零个副本 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2298 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F445 ([miguelvr](https:\u002F\u002Fgithub.com\u002Fmiguelvr))\n* 允许在现有集群上更新 `ssl_certificate_arn`、`api_load_balancer_cidr_white_list` 和 `operator_load_balancer_cidr_white_list`（通过 `cortex cluster configure` 命令）https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2305 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2107 ([vishalbollu](https:\u002F\u002Fgithub.com\u002Fvishalbollu))\n* 允许配置 Prometheus 
的实例类型（[文档](https:\u002F\u002Fdocs.cortex.dev\u002Fclusters\u002Fmanagement\u002Fcreate#cluster-yaml)）https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2307 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2285 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\n* 允许将多个 Inferentia 芯片分配给单个容器 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2304 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1123 ([deliahu](https:\u002F\u002Fgithub.com\u002Fdeliahu))\n\n**错误修复**\n\n* 修复集群自动伸缩器的节点组优先级计算 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2309 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\n\n**其他**\n\n* 各种可扩展性改进 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2307 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2304 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2297 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2278 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2285\n* 允许将节点组的 `max_instances` 设置为 `0` https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2310 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))","2021-07-06T23:57:08",{"id":209,"version":210,"summary_zh":211,"released_at":212},154182,"v0.37.0","# v0.37.0\n\n**新功能**\n\n* 支持 ARM 实例类型 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2268 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1528 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\n* 添加 `cortex cluster configure` 命令，用于在运行中的集群上添加、移除或扩缩节点组 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2246 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2096 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\n* 添加 `cortex cluster info 
--print-config` 命令，用于打印运行中集群的当前配置 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2246 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\n* 为异步 API 添加指标仪表板 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2242 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1958 ([miguelvr](https:\u002F\u002Fgithub.com\u002Fmiguelvr))\n* 支持异步 API 的 `cortex refresh` 命令 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2265 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2237 ([deliahu](https:\u002F\u002Fgithub.com\u002Fdeliahu))\n\n**破坏性变更**\n\n* `cortex cluster scale` 命令已被 `cortex cluster configure` 命令取代。\n\n**错误修复**\n\n* 修复非 200 响应状态码下的异步 API 指标上报问题 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2266 ([miguelvr](https:\u002F\u002Fgithub.com\u002Fmiguelvr))\n* 使批处理作业指标的持久化对实例终止更具弹性 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2247 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2041 ([vishalbollu](https:\u002F\u002Fgithub.com\u002Fvishalbollu))\n* 在 `cortex cluster up` 过程中放宽网络验证（以避免在 GovCloud 上不必要地失败）https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2248 ([vishalbollu](https:\u002F\u002Fgithub.com\u002Fvishalbollu))\n* 修复 Inferentia 资源请求 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2250 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\n\n**文档**\n\n* 添加将 [日志](https:\u002F\u002Fdocs.cortex.dev\u002Fclusters\u002Fobservability\u002Flogging#exporting-logs) 和 [指标](https:\u002F\u002Fdocs.cortex.dev\u002Fclusters\u002Fobservability\u002Fmetrics#exporting-metrics-to-monitoring-solutions) 导出到外部工具的说明 ([vishalbollu](https:\u002F\u002Fgithub.com\u002Fvishalbollu))\n\n**其他**\n\n* 改善运行中批处理作业时 `cortex cluster info` 的输出 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2270 
([deliahu](https:\u002F\u002Fgithub.com\u002Fdeliahu))\n* 无论作业状态如何，都持久化批处理作业的指标 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2244 ([miguelvr](https:\u002F\u002Fgithub.com\u002Fmiguelvr))\n* 支持创建不包含任何节点组的集群 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2269 ([deliahu](https:\u002F\u002Fgithub.com\u002Fdeliahu))\n* 改善多容器批处理作业中容器启动错误的处理 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2260 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2217 ([vishalbollu](https:\u002F\u002Fgithub.com\u002Fvishalbollu))\n* 向代理和 dequeuer 容器添加 CPU 和内存资源请求 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2252 ([d","2021-06-24T03:59:24",{"id":214,"version":215,"summary_zh":216,"released_at":217},154183,"v0.36.0","# v0.36.0\n\n**新功能**\n\n* 支持在所有工作负载类型（实时、异步、批处理、任务）中运行任意 Docker 容器 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2173（[RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian)、[miguelvr](https:\u002F\u002Fgithub.com\u002Fmiguelvr)、[vishalbollu](https:\u002F\u002Fgithub.com\u002Fvishalbollu)、[deliahu](https:\u002F\u002Fgithub.com\u002Fdeliahu)、[ospillinger](https:\u002F\u002Fgithub.com\u002Fospillinger)）\n* 支持将异步 API 自动扩展至零副本 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2224 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2199（[RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian)）\n\n**破坏性变更**\n\n* 在此版本中，我们已将 Cortex 普遍化，使其仅支持在所有工作负载类型（实时、异步、批处理和任务）中运行任意 Docker 容器。这使得可以使用任何模型服务器、编程语言等。因此，API 配置已更新：移除了 `predictor` 部分，新增了 `pod` 部分，并对 `autoscaling` 参数进行了轻微调整（具体取决于工作负载类型）。请参阅更新后的文档，了解 [实时](https:\u002F\u002Fdocs.cortex.dev\u002Fworkloads\u002Frealtime)、[异步](https:\u002F\u002Fdocs.cortex.dev\u002Fworkloads\u002Fasync)、[批处理](https:\u002F\u002Fdocs.cortex.dev\u002Fworkloads\u002Fbatch) 和 [任务](https:\u002F\u002Fdocs.cortex.dev\u002Fworkloads\u002Ftask) 的配置。如果您想查看 Python 应用程序的 
Docker 化示例，请参阅我们的 [test\u002Fapis](https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Ftree\u002F0.36\u002Ftest\u002Fapis) 文件夹。\n* 已移除 `cortex prepare-debug` 命令；Cortex 现在仅运行 Docker 容器，这些容器可以通过 `docker run` 在本地运行。\n* 已移除 `cortex patch` 命令；其行为现在与 `cortex deploy` 完全相同。\n* `cortex logs` 命令现在会打印一个预填充查询的 CloudWatch Insights URL，该查询可执行以显示您工作负载的日志，因为这是生产环境中的推荐做法。如果您希望随机流式传输某个 Pod 的日志，可以使用 `cortex logs --random-pod`（请注意，这些日志不会包含与您的工作负载相关的部分系统日志）。\n* gRPC 支持已被暂时移除；我们正在努力在 v0.37 版本中将其重新添加。\n\n**错误修复**\n\n* 处理在未设置默认环境时初始化 Python 客户端时的异常 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2225 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2223（[deliahu](https:\u002F\u002Fgithub.com\u002Fdeliahu)）\n\n**文档**\n\n* 记录如何在 Grafana 中配置 SMTP（例如启用电子邮件告警）https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2219（[RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian)）\n\n**其他**\n\n* 在 `cortex logs` 的输出中显示带有预填充查询的 CloudWatch Insights URL https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2085（[vishalbollu](https:\u002F\u002Fgithub.com\u002Fvishalbollu)）\n* 提高批处理作业提交验证的效率 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2179 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2178（[deliahu](https:\u002F\u002Fgithub","2021-06-08T18:00:39",{"id":219,"version":220,"summary_zh":221,"released_at":222},154184,"v0.35.0","# v0.35.0\n\n**新功能**\n\n* 避免处理已被客户端取消的 HTTP 请求 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2135 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1453 ([vishalbollu](https:\u002F\u002Fgithub.com\u002Fvishalbollu))\n* 支持 GP3 卷（并将 GP3 设为默认卷类型）https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2130 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1843 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\n* 允许为 Task API 
设置共享内存（shm）大小 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2132 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2115 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\n* 实现异步 API 响应的自动 7 天过期机制 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2151 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\n* 添加 `cortex env rename` 命令 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2165 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1773 ([deliahu](https:\u002F\u002Fgithub.com\u002Fdeliahu))\n\n**破坏性变更**\n\n* 用于部署 Python 类的 Python 客户端方法已从 `deploy()` 方法中分离出来。现在，`deploy()` 仅用于部署项目文件夹，而 `deploy_realtime_api()`、`deploy_async_api()`、`deploy_batch_api()` 和 `deploy_task_api()` 则用于部署 Python 类。（[文档](https:\u002F\u002Fdocs.cortex.dev\u002Fclients\u002Fpython)）\n* Cortex 用于内部用途的存储桶名称不再可配置。在创建集群时，Cortex 会自动生成存储桶名称（如果该存储桶不存在，则会创建）。在删除集群时，存储桶将被清空（除非在执行 `cortex cluster down` 时提供了 `--keep-aws-resources` 标志）。用户文件不应存储在 Cortex 的内部存储桶中。\n\n**错误修复**\n\n* 修复 `cortex cluster info` 中显示的异步 API 副本数量问题 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2140 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2129 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\n\n**其他**\n\n* 在删除集群时，删除所有由 Cortex 创建的 AWS 资源，并支持在执行 `cortex cluster down` 时使用 `--keep-aws-resources` 标志以保留 AWS 资源 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2161 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1612 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\n* 在创建集群时，验证用户 AWS 服务配额中的安全组数量及入站\u002F出站规则数量 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2127 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2087 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\n* 允许在使用 `cortex cluster scale` 时仅指定 
`--min-instances` 或 `--max-instances` 中的一个 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2149 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\n* 对未实现的实时 API 方法使用 405 状态码 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2158 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\n* 降低文件大小和项目大小限制 https:\u002F\u002Fgithub.co","2021-05-11T21:36:23",{"id":224,"version":225,"summary_zh":226,"released_at":227},154185,"v0.34.0","# v0.34.0\r\n\r\n**New features**\r\n\r\n* Support handling `GET`, `PUT`, `PATCH`, and `DELETE` HTTP requests in Realtime APIs ([docs](https:\u002F\u002Fdocs.cortex.dev\u002Fworkloads\u002Frealtime-apis\u002Fhandler#http)) https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2111 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2063 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\r\n* Support running realtime API containers locally for debugging \u002F development purposes ([docs](https:\u002F\u002Fdocs.cortex.dev\u002Fworkloads\u002Fdebugging)) https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2112 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2077 ([vishalbollu](https:\u002F\u002Fgithub.com\u002Fvishalbollu))\r\n* Support multiple gRPC services \u002F methods (which can be named arbitrarily) in a single Realtime API ([docs](https:\u002F\u002Fdocs.cortex.dev\u002Fworkloads\u002Frealtime-apis\u002Fhandler#grpc)) https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2111 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2063 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\r\n* Support specifying a list of node groups on which a workload is allowed to run (see configuration docs for [Realtime](https:\u002F\u002Fdocs.cortex.dev\u002Fworkloads\u002Frealtime-apis\u002Fconfiguration), 
[Async](https:\u002F\u002Fdocs.cortex.dev\u002Fworkloads\u002Fasync-apis\u002Fconfiguration), [Batch](https:\u002F\u002Fdocs.cortex.dev\u002Fworkloads\u002Fbatch-apis\u002Fconfiguration), or [Task](https:\u002F\u002Fdocs.cortex.dev\u002Fworkloads\u002Ftask-apis\u002Fconfiguration) APIs) https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2098 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2034 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\r\n* Support AWS GovCloud regions https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2118 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2103 ([vishalbollu](https:\u002F\u002Fgithub.com\u002Fvishalbollu))\r\n\r\n**Breaking changes**\r\n\r\n* \"predictor\" has been renamed to \"handler\" throughout the product (API configuration and Python APIs). In addition, as a result of supporting additional HTTP method verbs, `predict()` has been renamed to `handle_post()` in Realtime APIs (`handle_get()`, `handle_put()`, `handle_patch()`, and `handle_delete()` are now also supported). For consistency, `predict()` has been renamed to `handle_async()` for Async APIs, and `handle_batch()` for Batch APIs. See the examples for [Realtime](https:\u002F\u002Fdocs.cortex.dev\u002Fworkloads\u002Frealtime-apis\u002Fexample), [Async](https:\u002F\u002Fdocs.cortex.dev\u002Fworkloads\u002Fasync-apis\u002Fexample), and [Batch](https:\u002F\u002Fdocs.cortex.dev\u002Fworkloads\u002Fbatch-apis\u002Fexample) APIs. 
Task APIs have not been changed.\r\n\r\n**Bug fixes**\r\n\r\n* Fix invalid Async workload status during processing https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2106 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2104 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\r\n\r\n**Docs**\r\n\r\n* Add docs for [configuring Grafana alerts](https:\u002F\u002Fdocs.cortex.dev\u002Fclusters\u002Fobservability\u002Falerting) ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\r\n* Document how to [create a Cortex cluster without administrator IAM access](https:\u002F\u002Fdocs.cortex.dev\u002Fclusters\u002Fmanagement\u002Fauth#minimum-iam-policy) ([vishalbollu](https:\u002F\u002Fgithub.com\u002Fvishalbollu))\r\n* Add docs for [mirroring Cortex's docker images to a private repo](https:\u002F\u002Fdocs.cortex.dev\u002Fclusters\u002Fadvanced\u002Fself-hosted-images) ([vishalbollu](https:\u002F\u002Fgithub.com\u002Fvishalbollu))\r\n\r\n**Misc**\r\n\r\n* Support json output for the `cortex cluster info` command https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2089 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2062 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\r\n* Allow nodegroups to be scaled down to `max_instances` == 0 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2095 ([deliahu](https:\u002F\u002Fgithub.com\u002Fdeliahu))","2021-04-27T23:41:29",{"id":229,"version":230,"summary_zh":231,"released_at":232},154186,"v0.33.0","# v0.33.0\r\n\r\n**New features**\r\n\r\n* Allow specifying a CIDR range whitelist for APIs and the operator ([docs](https:\u002F\u002Fdocs.cortex.dev\u002Fclusters\u002Fmanagement\u002Fcreate)) https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2071 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2003 
([vishalbollu](https:\u002F\u002Fgithub.com\u002Fvishalbollu))\r\n* Enable CORS for async, batch, and task APIs https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2082 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2073 ([deliahu](https:\u002F\u002Fgithub.com\u002Fdeliahu))\r\n\r\n**Breaking changes**\r\n\r\n* The onnx predictor type has been replaced by the python predictor type; please use the python predictor type instead (all onnx models are fully supported by the python predictor type)\r\n\r\n**Bug fixes**\r\n\r\n* Fix bug affecting async api consistency during heavy traffic https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2072 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\r\n* Fix bug affecting async api updates https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2067 ([vishalbollu](https:\u002F\u002Fgithub.com\u002Fvishalbollu))\r\n\r\n**Misc**\r\n\r\n* Rename `cortex cluster configure` command to `cortex cluster scale` https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2040 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1972 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\r\n* Disable AZRebalance autoscaling group process https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2042 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1349 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\r\n* Add horizontal pod autoscaler to async API gateway https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2079 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2078 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\r\n* Rename async modules to `async_api` to avoid name collision with the reserved keyword in Python 3.7+ 
https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2066 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2052 ([vishalbollu](https:\u002F\u002Fgithub.com\u002Fvishalbollu))\r\n* Back up images to Docker Hub https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2081 ([vishalbollu](https:\u002F\u002Fgithub.com\u002Fvishalbollu))\r\n* Add additional debugging info for `cluster up` failures https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2080 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F2027 ([vishalbollu](https:\u002F\u002Fgithub.com\u002Fvishalbollu))","2021-04-13T21:41:02",{"id":234,"version":235,"summary_zh":236,"released_at":237},154187,"v0.32.0","# v0.32.0\r\n\r\n**New features**\r\n\r\n* Add gRPC support to realtime APIs ([docs](https:\u002F\u002Fdocs.cortex.dev\u002Fworkloads\u002Frealtime-apis\u002Fpredictors#grpc)) https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1997 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1056 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\r\n* Add support for ONNX and TensorFlow predictor types in async APIs ([docs](https:\u002F\u002Fdocs.cortex.dev\u002Fworkloads\u002Fasync-apis\u002Fpredictors)) https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1996 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1980 ([miguelvr](https:\u002F\u002Fgithub.com\u002Fmiguelvr))\r\n* Support using ECR images from other AWS accounts and regions https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2011 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1988 ([vishalbollu](https:\u002F\u002Fgithub.com\u002Fvishalbollu))\r\n\r\n**Breaking changes**\r\n\r\n* GCP support has been removed so that we can focus our efforts on improving the scalability, reliability, and security for Cortex on 
AWS. Cortex on GCP will still be available in v0.31. If you are currently using Cortex on GCP, our team will be happy to help you migrate to AWS or work with you to find alternative solutions. Please feel free to reach out to us on [slack](https:\u002F\u002Fcommunity.cortex.dev\u002F) or email us at hello@cortex.dev if you're interested.\r\n\r\n**Bug fixes**\r\n\r\n* Fix memory plots on Grafana dashboards for realtime and batch APIs https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2024 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2014 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1970 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\r\n\r\n**Docs**\r\n\r\n* Misc docs improvements https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1994 ([ospillinger](https:\u002F\u002Fgithub.com\u002Fospillinger))\r\n\r\n**Misc**\r\n\r\n* Increase kubelet's `registryPullQPS` limit from 5 to 10 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2023 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1989 ([miguelvr](https:\u002F\u002Fgithub.com\u002Fmiguelvr))\r\n* Pin the AMI version https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F2010 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1975 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1615 ([vishalbollu](https:\u002F\u002Fgithub.com\u002Fvishalbollu))\r\n","2021-03-30T23:55:15",{"id":239,"version":240,"summary_zh":241,"released_at":242},154188,"v0.31.1","# v0.31.1\r\n\r\n**Bug fixes**\r\n\r\n* Preemptible node pools on GCP aren't autoscaling https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1981 ([vishalbollu](https:\u002F\u002Fgithub.com\u002Fvishalbollu))\r\n* Replica autoscaler targets incorrect deployments on operator restart 
https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1982 ([miguelvr](https:\u002F\u002Fgithub.com\u002Fmiguelvr))\r\n* Replica autoscaler is not reinitialized for running APIs on operator restart on GCP https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1984 ([vishalbollu](https:\u002F\u002Fgithub.com\u002Fvishalbollu))","2021-03-23T20:07:23",{"id":244,"version":245,"summary_zh":246,"released_at":247},154192,"v0.28.0","# v0.28.0\r\n\r\n**New features**\r\n\r\n* Support installing Cortex on an existing Kubernetes cluster (on AWS or GCP) ([docs](https:\u002F\u002Fdocs.cortex.dev\u002Fclusters\u002Fcortex-core-on-kubernetes\u002Finstall)) https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1837 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1808 ([vishalbollu](https:\u002F\u002Fgithub.com\u002Fvishalbollu))\r\n\r\n**Breaking changes**\r\n\r\n* The cloudwatch dashboard has been removed as a result of our switch to Prometheus for metrics aggregation. The dashboard will be replaced with an alternative in an upcoming release.\r\n\r\n**Bug fixes**\r\n\r\n* Fix bug which can cause requests to APIs from a Python client to timeout during cluster autoscaling https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1841 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1840 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\r\n* Fix bug which can cause `downscale_stabilization_period` to be disregarded during downscaling https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1847 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1846 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\r\n\r\n**Misc**\r\n\r\n* AWS credentials are no longer required to connect the CLI to the cluster operator. 
If you need to restrict access to your cluster operator, configure the operator's load balancer to be private by setting `operator_load_balancer_scheme: internal` in your [cluster configuration file](https:\u002F\u002Fdocs.cortex.dev\u002Fclusters\u002Fcortex-cloud-on-aws\u002Finstall#configure-cortex), and set up [VPC Peering](https:\u002F\u002Fdocs.cortex.dev\u002Fclusters\u002Fcortex-cloud-on-aws\u002Findex\u002Fvpc-peering). We plan to support a new auth strategy in an upcoming release.\r\n* Improve S6 error code\u002Fsignal handling https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1825 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1703 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))","2021-02-03T20:46:28",{"id":249,"version":250,"summary_zh":251,"released_at":252},154193,"v0.27.0","# v0.27.0\r\n\r\n**New features**\r\n\r\n* Add new API type `TaskAPI` for running arbitrary Python jobs ([docs](https:\u002F\u002Fdocs.cortex.dev\u002Fworkloads\u002Ftask\u002Fexample)) https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1717 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F253 ([miguelvr](https:\u002F\u002Fgithub.com\u002Fmiguelvr), [RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\r\n* Write Cortex's logs as structured logs, and allow use of Cortex's structured logger in predictors (supports adding extra fields) ([aws docs](https:\u002F\u002Fdocs.cortex.dev\u002Fclusters\u002Faws\u002Flogging), [gcp docs](https:\u002F\u002Fdocs.cortex.dev\u002Fclusters\u002Fgcp\u002Flogging)) https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1778 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1803 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1804 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1732 
https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1563 ([vishalbollu](https:\u002F\u002Fgithub.com\u002Fvishalbollu))\r\n* Support preemptible instances on GCP ([docs](https:\u002F\u002Fdocs.cortex.dev\u002Fclusters\u002Fgcp\u002Finstall)) https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1791 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1631 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\r\n* Support private load balancers on GCP ([docs](https:\u002F\u002Fdocs.cortex.dev\u002Fclusters\u002Fgcp\u002Finstall)) https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1786 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1621 ([deliahu](https:\u002F\u002Fgithub.com\u002Fdeliahu))\r\n* Support GCP instances with multiple GPUs ([docs](https:\u002F\u002Fdocs.cortex.dev\u002Fclusters\u002Fgcp\u002Finstall)) https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1789 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1784 ([deliahu](https:\u002F\u002Fgithub.com\u002Fdeliahu))\r\n\r\n**Breaking changes**\r\n\r\n* `cortex logs` now streams logs from a single replica at random when there are multiple replicas for an API. 
The recommended way to analyze production logs is via a dedicated logging tool (by default, logs are sent to [CloudWatch](https:\u002F\u002Fus-west-2.console.aws.amazon.com\u002Fcloudwatch\u002Fhome) on AWS and [StackDriver](https:\u002F\u002Fconsole.cloud.google.com\u002Flogs\u002Fquery) on GCP)\r\n\r\n**Bug fixes**\r\n\r\n* Misc Python client fixes https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1798 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1782 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1772 ([vishalbollu](https:\u002F\u002Fgithub.com\u002Fvishalbollu), [RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\r\n\r\n**Docs**\r\n\r\n* Document the shared `\u002Fmnt` directory for TensorFlow predictors https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1802 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1792 ([deliahu](https:\u002F\u002Fgithub.com\u002Fdeliahu))\r\n* Misc GCP docs improvements https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1799 ([deliahu](https:\u002F\u002Fgithub.com\u002Fdeliahu))\r\n\r\n**Misc**\r\n\r\n* Improve out-of-memory status reporting ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\r\n* Improve batch job cleanup process https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1797 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1796 ([vishalbollu](https:\u002F\u002Fgithub.com\u002Fvishalbollu))\r\n* Remove grpc msg send\u002Freceive limit https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1769 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1740 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\r\n","2021-01-21T05:23:13",{"id":254,"version":255,"summary_zh":256,"released_at":257},154194,"v0.26.0","# v0.26.0\r\n\r\n**New features**\r\n\r\n* Support configuring 
the log level for APIs ([docs](https:\u002F\u002Fdocs.cortex.dev\u002Fv\u002F0.26\u002Fworkloads\u002Frealtime\u002Fconfiguration)) https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1741 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1484 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\r\n* Support creating a cluster in an existing AWS VPC ([docs](https:\u002F\u002Fdocs.cortex.dev\u002Fv\u002F0.26\u002Fclusters\u002Faws\u002Finstall)) https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1759 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1142 ([deliahu](https:\u002F\u002Fgithub.com\u002Fdeliahu))\r\n* Support specifying the GCP network and subnet for the Cortex cluster ([docs](https:\u002F\u002Fdocs.cortex.dev\u002Fv\u002F0.26\u002Fclusters\u002Fgcp\u002Finstall)) https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1752 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1738 ([deliahu](https:\u002F\u002Fgithub.com\u002Fdeliahu))\r\n* Support configuring shared memory size (shm) for inter-process communication ([docs](https:\u002F\u002Fdocs.cortex.dev\u002Fv\u002F0.26\u002Fworkloads\u002Frealtime\u002Fconfiguration)) https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1756 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1638 ([vishalbollu](https:\u002F\u002Fgithub.com\u002Fvishalbollu))\r\n\r\n**Breaking changes**\r\n\r\n* The local provider has been removed. The best way to test your predictor implementation locally is to import it in a separate Python file and call your `__init__()` and `predict()` functions directly. The best way to test your API is to deploy it to a dev\u002Ftest cluster.\r\n* Built-in support for API Gateway has been removed. 
If you need to create an https endpoint with valid certs, some options are to set up a [custom domain](https:\u002F\u002Fdocs.cortex.dev\u002Fv\u002F0.26\u002Fclusters\u002Faws\u002Findex\u002Fcustom-domain) or to [manually create an API Gateway](https:\u002F\u002Fdocs.cortex.dev\u002Fv\u002F0.26\u002Fclusters\u002Faws\u002Findex\u002Fhttps).\r\n* Prediction monitoring has been removed. We are exploring how to build a more powerful and customizable solution for this.\r\n* The `predict` CLI command has been deleted. `curl`, `requests`, etc. are the best tools for testing APIs.\r\n\r\n**Bug fixes**\r\n\r\n* For multi-model APIs, allow model names to share a prefix https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fpull\u002F1745 https:\u002F\u002Fgithub.com\u002Fcortexlabs\u002Fcortex\u002Fissues\u002F1699 ([RobertLucian](https:\u002F\u002Fgithub.com\u002FRobertLucian))\r\n\r\n**Docs**\r\n\r\n* Misc docs improvements ([ospillinger](https:\u002F\u002Fgithub.com\u002Fospillinger))","2021-01-06T08:18:00"]
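The private operator load balancer mentioned above is a field in the cluster configuration file. A minimal sketch of the relevant fragment; every field other than `operator_load_balancer_scheme` is illustrative:

```yaml
# cluster.yaml (fragment) -- field values here are examples only
cluster_name: cortex   # illustrative
region: us-west-2      # illustrative

# make the operator's load balancer private (reachable only from
# within the VPC, e.g. via VPC peering)
operator_load_balancer_scheme: internal
```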
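With the local provider removed, the recommended local test is to import the predictor class and call it directly. A minimal sketch, assuming a Cortex-style Python predictor with `__init__(self, config)` and `predict(self, payload)`; the class body and its `multiplier` config key are made up for illustration:

```python
# Toy Cortex-style Python predictor (the class under test). In a real
# project this would live in predictor.py and be referenced by the API spec.
class PythonPredictor:
    def __init__(self, config):
        # config mirrors the API spec's `config` field
        self.multiplier = config.get("multiplier", 1)

    def predict(self, payload):
        # payload is the parsed JSON request body
        return {"result": payload["value"] * self.multiplier}


# Test locally by calling __init__() and predict() directly, with no
# cluster involved -- exactly what the breaking-change note recommends.
predictor = PythonPredictor(config={"multiplier": 3})
out = predictor.predict({"value": 7})
print(out)  # {'result': 21}
```

Because the predictor is plain Python, the same pattern drops straight into a pytest suite; only end-to-end behavior needs a dev/test cluster.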
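Since the `predict` CLI command is gone, APIs are tested over plain HTTP. A self-contained sketch using only the standard library; the local stand-in server and its doubling "model" are invented here so the client-side pattern can run without a cluster (against a real deployment you would POST to the API's endpoint URL with `curl` or `requests` instead):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib import request

class Handler(BaseHTTPRequestHandler):
    """Stand-in for a deployed API: echoes a JSON 'prediction'."""

    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        resp = json.dumps({"prediction": body["value"] * 2}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(resp)))
        self.end_headers()
        self.wfile.write(resp)

    def log_message(self, *args):
        pass  # silence per-request logging

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the part you would keep, pointed at your API's real URL.
url = f"http://127.0.0.1:{server.server_port}/"
req = request.Request(url, data=json.dumps({"value": 21}).encode(),
                      headers={"Content-Type": "application/json"})
with request.urlopen(req) as r:
    result = json.loads(r.read())
server.shutdown()
print(result)  # {'prediction': 42}
```

The equivalent one-liner against a live endpoint is `curl -X POST -H "Content-Type: application/json" -d '{"value": 21}' <api-endpoint>`.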