[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-jina-ai--serve":3,"tool-jina-ai--serve":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",143909,2,"2026-04-07T11:33:18",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107888,"2026-04-06T11:32:50",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 
助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[35,15,13,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":77,"owner_twitter":78,"owner_website":79,"owner_url":80,"languages":81,"stars":102,"forks":103,"last_commit_at":104,"license":105,"difficulty_score":10,"env_os":106,"env_gpu":107,"env_ram":108,"env_deps":109,"category_tags":117,"github_topics":119,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":140,"updated_at":141,"faqs":142,"releases":168},5211,"jina-ai\u002Fserve","serve","☁️ Build multimodal AI applications with cloud-native stack","Jina-Serve 是一个专为构建和部署多模态 AI 应用设计的云原生框架。它帮助开发者轻松将机器学习模型转化为可通过 gRPC、HTTP 或 WebSocket 通信的高性能服务，解决从本地开发到生产环境扩展过程中遇到的架构复杂、部署困难及数据格式不统一等痛点。\n\n这款工具特别适合 AI 工程师、后端开发者及研究人员使用，尤其是那些希望专注于核心算法逻辑，而不愿在微服务编排、容器化或基础设施管理上耗费过多精力的人群。无论是处理文本、图像还是其他复杂数据类型，Jina-Serve 都能提供原生支持。\n\n其技术亮点在于基于 DocArray 的数据处理机制，实现了类型安全且高效的输入输出管理；内置对主流 ML 框架的兼容，并支持流式输出与大语言模型（LLM）服务。此外，它具备自动动态批处理、副本并行及数据分片等高级特性，显著提升推理吞吐量。通过简单的 
Python 代码或 YAML 配置，用户即可一键将服务部署至本地、Kubernetes 集群或 Jina AI 云平台，真正实现“一次编写，随处运行”的现代化 AI 工程体验。","# Jina-Serve\n\u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Fjina\u002F\">\u003Cimg alt=\"PyPI\" src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fjina?label=Release&style=flat-square\">\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fdiscord.jina.ai\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F1106542220112302130?logo=discord&logoColor=white&style=flat-square\">\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fpypistats.org\u002Fpackages\u002Fjina\">\u003Cimg alt=\"PyPI - Downloads from official pypistats\" src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fdm\u002Fjina?style=flat-square\">\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Factions\u002Fworkflows\u002Fcd.yml\">\u003Cimg alt=\"Github CD status\" src=\"https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Factions\u002Fworkflows\u002Fcd.yml\u002Fbadge.svg\">\u003C\u002Fa>\n\nJina-serve is a framework for building and deploying AI services that communicate via gRPC, HTTP and WebSockets. 
Scale your services from local development to production while focusing on your core logic.\n\n## Key Features\n\n- Native support for all major ML frameworks and data types\n- High-performance service design with scaling, streaming, and dynamic batching\n- LLM serving with streaming output\n- Built-in Docker integration and Executor Hub\n- One-click deployment to Jina AI Cloud\n- Enterprise-ready with Kubernetes and Docker Compose support\n\n\u003Cdetails>\n\u003Csummary>\u003Cstrong>Comparison with FastAPI\u003C\u002Fstrong>\u003C\u002Fsummary>\n\nKey advantages over FastAPI:\n\n- DocArray-based data handling with native gRPC support\n- Built-in containerization and service orchestration\n- Seamless scaling of microservices\n- One-command cloud deployment\n\u003C\u002Fdetails>\n\n## Install \n\n```bash\npip install jina\n```\n\nSee guides for [Apple Silicon](https:\u002F\u002Fjina.ai\u002Fserve\u002Fget-started\u002Finstall\u002Fapple-silicon-m1-m2\u002F) and [Windows](https:\u002F\u002Fjina.ai\u002Fserve\u002Fget-started\u002Finstall\u002Fwindows\u002F).\n\n## Core Concepts\n\nThree main layers:\n- **Data**: BaseDoc and DocList for input\u002Foutput\n- **Serving**: Executors process Documents, Gateway connects services\n- **Orchestration**: Deployments serve Executors, Flows create pipelines\n\n## Build AI Services\n\nLet's create a gRPC-based AI service using StableLM:\n\n```python\nfrom jina import Executor, requests\nfrom docarray import DocList, BaseDoc\nfrom transformers import pipeline\n\n\nclass Prompt(BaseDoc):\n    text: str\n\n\nclass Generation(BaseDoc):\n    prompt: str\n    text: str\n\n\nclass StableLM(Executor):\n    def __init__(self, **kwargs):\n        super().__init__(**kwargs)\n        self.generator = pipeline(\n            'text-generation', model='stabilityai\u002Fstablelm-base-alpha-3b'\n        )\n\n    @requests\n    def generate(self, docs: DocList[Prompt], **kwargs) -> DocList[Generation]:\n        generations = 
DocList[Generation]()\n        prompts = docs.text\n        llm_outputs = self.generator(prompts)\n        for prompt, output in zip(prompts, llm_outputs):\n            # each pipeline result is a list of generated-text dicts\n            generations.append(Generation(prompt=prompt, text=output[0]['generated_text']))\n        return generations\n```\n\nDeploy with Python or YAML:\n\n```python\nfrom jina import Deployment\nfrom executor import StableLM\n\ndep = Deployment(uses=StableLM, timeout_ready=-1, port=12345)\n\nwith dep:\n    dep.block()\n```\n\n```yaml\njtype: Deployment\nwith:\n uses: StableLM\n py_modules:\n   - executor.py\n timeout_ready: -1\n port: 12345\n```\n\nUse the client:\n\n```python\nfrom jina import Client\nfrom docarray import DocList\nfrom executor import Prompt, Generation\n\nprompt = Prompt(text='suggest an interesting image generation prompt')\nclient = Client(port=12345)\nresponse = client.post('\u002F', inputs=[prompt], return_type=DocList[Generation])\n```\n\n## Build Pipelines\n\nChain services into a Flow:\n\n```python\nfrom jina import Flow\n\nflow = Flow(port=12345).add(uses=StableLM).add(uses=TextToImage)\n\nwith flow:\n    flow.block()\n```\n\n## Scaling and Deployment\n\n### Local Scaling\n\nBoost throughput with built-in features:\n- Replicas for parallel processing\n- Shards for data partitioning\n- Dynamic batching for efficient model inference\n\nExample scaling a Stable Diffusion deployment:\n\n```yaml\njtype: Deployment\nwith:\n uses: TextToImage\n timeout_ready: -1\n py_modules:\n   - text_to_image.py\n env:\n  CUDA_VISIBLE_DEVICES: RR\n replicas: 2\n uses_dynamic_batching:\n   \u002Fdefault:\n     preferred_batch_size: 10\n     timeout: 200\n```\n\n### Cloud Deployment\n\n#### Containerize Services\n\n1. Structure your Executor:\n```\nTextToImage\u002F\n├── executor.py\n├── config.yml\n├── requirements.txt\n```\n\n2. Configure:\n```yaml\n# config.yml\njtype: TextToImage\npy_modules:\n - executor.py\nmetas:\n name: TextToImage\n description: Text to Image generation Executor\n```\n\n3. 
Push to Hub:\n```bash\njina hub push TextToImage\n```\n\n#### Deploy to Kubernetes\n```bash\njina export kubernetes flow.yml .\u002Fmy-k8s\nkubectl apply -R -f my-k8s\n```\n\n#### Use Docker Compose\n```bash\njina export docker-compose flow.yml docker-compose.yml\ndocker-compose up\n```\n\n#### JCloud Deployment\n\nDeploy with a single command:\n```bash\njina cloud deploy jcloud-flow.yml\n```\n\n## LLM Streaming\n\nEnable token-by-token streaming for responsive LLM applications:\n\n1. Define schemas:\n```python\nfrom docarray import BaseDoc\n\n\nclass PromptDocument(BaseDoc):\n    prompt: str\n    max_tokens: int\n\n\nclass ModelOutputDocument(BaseDoc):\n    token_id: int\n    generated_text: str\n```\n\n2. Initialize service:\n```python\nfrom transformers import GPT2Tokenizer, GPT2LMHeadModel\n\n\nclass TokenStreamingExecutor(Executor):\n    def __init__(self, **kwargs):\n        super().__init__(**kwargs)\n        self.model = GPT2LMHeadModel.from_pretrained('gpt2')\n```\n\n3. Implement streaming:\n```python\nimport torch\n\n# the tokenizer pairs with the GPT-2 model loaded by the Executor\ntokenizer = GPT2Tokenizer.from_pretrained('gpt2')\n\n\n@requests(on='\u002Fstream')\nasync def task(self, doc: PromptDocument, **kwargs) -> ModelOutputDocument:\n    input = tokenizer(doc.prompt, return_tensors='pt')\n    input_len = input['input_ids'].shape[1]\n    for _ in range(doc.max_tokens):\n        output = self.model.generate(**input, max_new_tokens=1)\n        if output[0][-1] == tokenizer.eos_token_id:\n            break\n        yield ModelOutputDocument(\n            token_id=output[0][-1],\n            generated_text=tokenizer.decode(\n                output[0][input_len:], skip_special_tokens=True\n            ),\n        )\n        input = {\n            'input_ids': output,\n            'attention_mask': torch.ones(1, len(output[0])),\n        }\n```\n\n4. 
Serve and use:\n```python\nimport asyncio\n\n# Server\nwith Deployment(uses=TokenStreamingExecutor, port=12345, protocol='grpc') as dep:\n    dep.block()\n\n\n# Client\nasync def main():\n    client = Client(port=12345, protocol='grpc', asyncio=True)\n    async for doc in client.stream_doc(\n        on='\u002Fstream',\n        inputs=PromptDocument(prompt='what is the capital of France?', max_tokens=10),\n        return_type=ModelOutputDocument,\n    ):\n        print(doc.generated_text)\n\n\nasyncio.run(main())\n```\n\n## Support\n\nJina-serve is backed by [Jina AI](https:\u002F\u002Fjina.ai) and licensed under [Apache-2.0](.\u002FLICENSE).\n","# Jina-Serve\n\u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Fjina\u002F\">\u003Cimg alt=\"PyPI\" src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fjina?label=Release&style=flat-square\">\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fdiscord.jina.ai\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F1106542220112302130?logo=discord&logoColor=white&style=flat-square\">\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fpypistats.org\u002Fpackages\u002Fjina\">\u003Cimg alt=\"PyPI - Downloads from official pypistats\" src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fdm\u002Fjina?style=flat-square\">\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Factions\u002Fworkflows\u002Fcd.yml\">\u003Cimg alt=\"Github CD status\" src=\"https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Factions\u002Fworkflows\u002Fcd.yml\u002Fbadge.svg\">\u003C\u002Fa>\n\nJina-serve 是一个用于构建和部署通过 gRPC、HTTP 和 WebSocket 进行通信的 AI 服务的框架。您可以在专注于核心逻辑的同时，将服务从本地开发扩展到生产环境。\n\n## 核心特性\n\n- 原生支持所有主流的机器学习框架和数据类型\n- 高性能的服务设计，具备可扩展性、流式处理和动态批处理功能\n- 支持流式输出的大语言模型推理\n- 内置 Docker 集成和 Executor Hub\n- 一键部署至 Jina AI Cloud\n- 企业级支持，兼容 Kubernetes 和 Docker Compose\n\n\u003Cdetails>\n\u003Csummary>\u003Cstrong>与 FastAPI 的对比\u003C\u002Fstrong>\u003C\u002Fsummary>\n\n相较于 FastAPI 的主要优势：\n\n- 基于 DocArray 的数据处理，并原生支持 gRPC\n- 内置容器化和服务编排功能\n- 
无缝扩展微服务\n- 一条命令即可完成云端部署\n\u003C\u002Fdetails>\n\n## 安装\n\n```bash\npip install jina\n```\n\n请参阅适用于 [Apple Silicon](https:\u002F\u002Fjina.ai\u002Fserve\u002Fget-started\u002Finstall\u002Fapple-silicon-m1-m2\u002F) 和 [Windows](https:\u002F\u002Fjina.ai\u002Fserve\u002Fget-started\u002Finstall\u002Fwindows\u002F) 的安装指南。\n\n## 核心概念\n\n三个主要层次：\n- **数据**：BaseDoc 和 DocList 用于输入输出\n- **服务层**：Executor 处理 Documents，Gateway 连接各个服务\n- **编排层**：Deployments 提供 Executor 服务，Flows 创建工作流\n\n## 构建 AI 服务\n\n让我们使用 StableLM 创建一个基于 gRPC 的 AI 服务：\n\n```python\nfrom jina import Executor, requests\nfrom docarray import DocList, BaseDoc\nfrom transformers import pipeline\n\n\nclass Prompt(BaseDoc):\n    text: str\n\n\nclass Generation(BaseDoc):\n    prompt: str\n    text: str\n\n\nclass StableLM(Executor):\n    def __init__(self, **kwargs):\n        super().__init__(**kwargs)\n        self.generator = pipeline(\n            'text-generation', model='stabilityai\u002Fstablelm-base-alpha-3b'\n        )\n\n    @requests\n    def generate(self, docs: DocList[Prompt], **kwargs) -> DocList[Generation]:\n        generations = DocList[Generation]()\n        prompts = docs.text\n        llm_outputs = self.generator(prompts)\n        for prompt, output in zip(prompts, llm_outputs):\n            # pipeline 对每个提示词返回一个字典列表\n            generations.append(Generation(prompt=prompt, text=output[0]['generated_text']))\n        return generations\n```\n\n可通过 Python 或 YAML 部署：\n\n```python\nfrom jina import Deployment\nfrom executor import StableLM\n\ndep = Deployment(uses=StableLM, timeout_ready=-1, port=12345)\n\nwith dep:\n    dep.block()\n```\n\n```yaml\njtype: Deployment\nwith:\n uses: StableLM\n py_modules:\n   - executor.py\n timeout_ready: -1\n port: 12345\n```\n\n使用客户端：\n\n```python\nfrom jina import Client\nfrom docarray import DocList\nfrom executor import Prompt, Generation\n\nprompt = Prompt(text='suggest an interesting image generation prompt')\nclient = Client(port=12345)\nresponse = client.post('\u002F', inputs=[prompt], 
return_type=DocList[Generation])\n```\n\n## 构建工作流\n\n将多个服务串联成一个 Flow：\n\n```python\nfrom jina import Flow\n\nflow = Flow(port=12345).add(uses=StableLM).add(uses=TextToImage)\n\nwith flow:\n    flow.block()\n```\n\n## 扩展与部署\n\n### 本地扩展\n\n借助内置功能提升吞吐量：\n- 使用副本实现并行处理\n- 使用分片进行数据分区\n- 动态批处理以提高模型推理效率\n\n以下是一个对 Stable Diffusion 部署进行扩展的示例：\n\n```yaml\njtype: Deployment\nwith:\n uses: TextToImage\n timeout_ready: -1\n py_modules:\n   - text_to_image.py\n env:\n  CUDA_VISIBLE_DEVICES: RR\n replicas: 2\n uses_dynamic_batching:\n   \u002Fdefault:\n     preferred_batch_size: 10\n     timeout: 200\n```\n\n### 云端部署\n\n#### 容器化服务\n\n1. 整理您的 Executor：\n```\nTextToImage\u002F\n├── executor.py\n├── config.yml\n├── requirements.txt\n```\n\n2. 配置：\n```yaml\n# config.yml\njtype: TextToImage\npy_modules:\n - executor.py\nmetas:\n name: TextToImage\n description: Text to Image generation Executor\n```\n\n3. 推送至 Hub：\n```bash\njina hub push TextToImage\n```\n\n#### 部署至 Kubernetes\n```bash\njina export kubernetes flow.yml .\u002Fmy-k8s\nkubectl apply -R -f my-k8s\n```\n\n#### 使用 Docker Compose\n```bash\njina export docker-compose flow.yml docker-compose.yml\ndocker-compose up\n```\n\n#### JCloud 部署\n\n只需一条命令即可部署：\n```bash\njina cloud deploy jcloud-flow.yml\n```\n\n## LLM 流式处理\n\n启用逐 token 流式输出，以构建响应迅速的 LLM 应用程序：\n\n1. 定义 Schema：\n```python\nfrom docarray import BaseDoc\n\n\nclass PromptDocument(BaseDoc):\n    prompt: str\n    max_tokens: int\n\n\nclass ModelOutputDocument(BaseDoc):\n    token_id: int\n    generated_text: str\n```\n\n2. 初始化服务：\n```python\nfrom transformers import GPT2Tokenizer, GPT2LMHeadModel\n\n\nclass TokenStreamingExecutor(Executor):\n    def __init__(self, **kwargs):\n        super().__init__(**kwargs)\n        self.model = GPT2LMHeadModel.from_pretrained('gpt2')\n```\n\n3. 
实现流式处理：\n```python\nimport torch\n\n# 与 Executor 加载的 GPT-2 模型配套的分词器\ntokenizer = GPT2Tokenizer.from_pretrained('gpt2')\n\n\n@requests(on='\u002Fstream')\nasync def task(self, doc: PromptDocument, **kwargs) -> ModelOutputDocument:\n    input = tokenizer(doc.prompt, return_tensors='pt')\n    input_len = input['input_ids'].shape[1]\n    for _ in range(doc.max_tokens):\n        output = self.model.generate(**input, max_new_tokens=1)\n        if output[0][-1] == tokenizer.eos_token_id:\n            break\n        yield ModelOutputDocument(\n            token_id=output[0][-1],\n            generated_text=tokenizer.decode(\n                output[0][input_len:], skip_special_tokens=True\n            ),\n        )\n        input = {\n            'input_ids': output,\n            'attention_mask': torch.ones(1, len(output[0])),\n        }\n```\n\n4. 服务端与客户端：\n```python\nimport asyncio\n\n# 服务端\nwith Deployment(uses=TokenStreamingExecutor, port=12345, protocol='grpc') as dep:\n    dep.block()\n\n\n# 客户端\nasync def main():\n    client = Client(port=12345, protocol='grpc', asyncio=True)\n    async for doc in client.stream_doc(\n        on='\u002Fstream',\n        inputs=PromptDocument(prompt='what is the capital of France?', max_tokens=10),\n        return_type=ModelOutputDocument,\n    ):\n        print(doc.generated_text)\n\n\nasyncio.run(main())\n```\n\n## 支持\n\nJina-serve 由 [Jina AI](https:\u002F\u002Fjina.ai) 提供支持，并采用 [Apache-2.0](.\u002FLICENSE) 许可证授权。","# Jina-Serve 快速上手指南\n\nJina-Serve 是一个用于构建和部署 AI 服务的框架，支持 gRPC、HTTP 和 WebSocket 通信。它帮助开发者专注于核心逻辑，轻松将服务从本地开发扩展至生产环境。\n\n## 环境准备\n\n*   **系统要求**：支持 Linux、macOS（包括 Apple Silicon M1\u002FM2）和 Windows。\n    *   *Apple Silicon 用户请参考：[安装指南](https:\u002F\u002Fjina.ai\u002Fserve\u002Fget-started\u002Finstall\u002Fapple-silicon-m1-m2\u002F)*\n    *   *Windows 用户请参考：[安装指南](https:\u002F\u002Fjina.ai\u002Fserve\u002Fget-started\u002Finstall\u002Fwindows\u002F)*\n*   **前置依赖**：\n    *   Python 3.7+\n    *   pip 包管理工具\n*   **网络建议**：由于涉及模型下载（如 Hugging Face），国内用户建议配置相应的网络加速或镜像源以确保安装顺畅。\n\n## 安装步骤\n\n使用 pip 直接安装核心库：\n\n```bash\npip install jina\n```\n\n> 
**提示**：如果下载速度较慢，可使用国内镜像源加速安装：\n> ```bash\n> pip install jina -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n> ```\n\n## 基本使用\n\n以下示例展示如何快速构建一个基于 StableLM 的 gRPC AI 服务，包含定义数据结构、创建执行器（Executor）、部署服务以及客户端调用。\n\n### 1. 定义服务逻辑 (`executor.py`)\n\n创建一个 Python 文件，定义输入输出数据结构及处理逻辑：\n\n```python\nfrom jina import Executor, requests\nfrom docarray import DocList, BaseDoc\nfrom transformers import pipeline\n\n\nclass Prompt(BaseDoc):\n    text: str\n\n\nclass Generation(BaseDoc):\n    prompt: str\n    text: str\n\n\nclass StableLM(Executor):\n    def __init__(self, **kwargs):\n        super().__init__(**kwargs)\n        self.generator = pipeline(\n            'text-generation', model='stabilityai\u002Fstablelm-base-alpha-3b'\n        )\n\n    @requests\n    def generate(self, docs: DocList[Prompt], **kwargs) -> DocList[Generation]:\n        generations = DocList[Generation]()\n        prompts = docs.text\n        llm_outputs = self.generator(prompts)\n        for prompt, output in zip(prompts, llm_outputs):\n            # pipeline 对每个提示词返回一个字典列表\n            generations.append(Generation(prompt=prompt, text=output[0]['generated_text']))\n        return generations\n```\n\n### 2. 部署服务\n\n你可以选择直接使用 Python 代码或 YAML 配置文件进行部署。\n\n**方式 A：使用 Python 部署**\n\n```python\nfrom jina import Deployment\nfrom executor import StableLM\n\ndep = Deployment(uses=StableLM, timeout_ready=-1, port=12345)\n\nwith dep:\n    dep.block()\n```\n\n**方式 B：使用 YAML 部署 (`config.yml`)**\n\n```yaml\njtype: Deployment\nwith:\n uses: StableLM\n py_modules:\n   - executor.py\n timeout_ready: -1\n port: 12345\n```\n运行命令：`jina deploy config.yml`\n\n### 3. 
客户端调用\n\n服务启动后，使用客户端发送请求并获取结果：\n\n```python\nfrom jina import Client\nfrom docarray import DocList\nfrom executor import Prompt, Generation\n\nprompt = Prompt(text='suggest an interesting image generation prompt')\nclient = Client(port=12345)\nresponse = client.post('\u002F', inputs=[prompt], return_type=DocList[Generation])\n\nprint(response[0].text)\n```\n\n---\n*更多高级功能（如流水线编排、Kubernetes 部署、LLM 流式输出等）请参阅官方完整文档。*","某初创团队正在开发一款“多模态内容生成平台”，需同时处理文本提示词并调用 Stable Diffusion 生成图像，且要求支持高并发实时流式输出。\n\n### 没有 serve 时\n- **协议适配繁琐**：为了兼顾 gRPC 低延迟和 HTTP 兼容性，开发者需手动编写大量胶水代码转换数据格式，难以统一处理多模态输入输出。\n- **扩容部署困难**：从本地测试迁移到生产环境时，缺乏内置的容器化方案，配置 Docker 和 Kubernetes 脚本耗时耗力，容易出错。\n- **推理效率低下**：面对突发流量，无法自动实现动态批处理（Dynamic Batching），导致 GPU 利用率波动大，单次请求响应时间不稳定。\n- **链路编排复杂**：将文本生成模型与图像生成模型串联成流水线时，需自行管理服务间通信与错误重试，代码耦合度高。\n\n### 使用 serve 后\n- **原生多模态支持**：利用 DocArray 定义结构化数据，天然支持 gRPC\u002FHTTP\u002FWebSocket 多种协议，无需额外转换即可流畅传输图文数据。\n- **一键云原生部署**：通过 YAML 配置文件即可定义服务，内置 Docker 集成与 Executor Hub，能直接将本地服务无缝扩展至 Jina AI Cloud 或 K8s 集群。\n- **高性能自动优化**：开启动态批处理与副本复制功能，自动聚合请求以最大化 GPU 算力，显著降低高并发下的平均延迟。\n- **声明式流程编排**：使用 Flow 仅需几行代码即可将多个 Executor 串联为完整管线，自动处理服务发现与负载均衡，逻辑清晰易维护。\n\nserve 让团队从繁琐的基础设施运维中解放出来，专注于核心算法逻辑，实现了从本地原型到企业级高可用服务的快速跨越。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjina-ai_serve_4142c483.png","jina-ai","Jina AI","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fjina-ai_e871d2a3.png","Your Search Foundation, Supercharged!",null,"hello@jina.ai","JinaAI_","https:\u002F\u002Fjina.ai","https:\u002F\u002Fgithub.com\u002Fjina-ai",[82,86,90,94,98],{"name":83,"color":84,"percentage":85},"Python","#3572A5",97,{"name":87,"color":88,"percentage":89},"Go","#00ADD8",1.6,{"name":91,"color":92,"percentage":93},"Shell","#89e051",0.8,{"name":95,"color":96,"percentage":97},"Dockerfile","#384d54",0.5,{"name":99,"color":100,"percentage":101},"C","#555555",0.1,21869,2239,"2026-04-07T14:04:01","Apache-2.0","Linux, macOS, Windows","非必需（取决于使用的模型）。若运行 GPU 加速模型（如 Stable Diffusion），需 NVIDIA GPU 并配置 
CUDA_VISIBLE_DEVICES 环境变量；支持 Apple Silicon (M1\u002FM2)。具体显存和 CUDA 版本未说明。","未说明",{"notes":110,"python":108,"dependencies":111},"该工具原生支持 Apple Silicon (M1\u002FM2) 和 Windows，有专门的安装指南。核心依赖为 jina 和 docarray。实际资源需求高度依赖于用户加载的具体 AI 模型（如示例中的 StableLM 或 Stable Diffusion）。支持通过 Docker、Kubernetes 和 Jina AI Cloud 进行部署。",[112,113,114,115,116],"jina","docarray","transformers","torch","grpcio",[14,35,16,52,118,13],"其他",[120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139],"neural-search","cloud-native","deep-learning","machine-learning","framework","grpc","kubernetes","multimodal","mlops","pipeline","fastapi","generative-ai","docker","jaeger","llmops","opentelemetry","cncf","microservice","orchestration","prometheus","2026-03-27T02:49:30.150509","2026-04-08T03:55:19.091067",[143,148,153,158,163],{"id":144,"question_zh":145,"answer_zh":146,"source_url":147},23625,"使用 AudioCLIP 模型进行音频搜索时，为什么检索结果不准确或得分异常？","这是因为 AudioCLIP 模型在推理（inference）阶段未调用 `.eval()` 模式，导致批归一化（Batch Normalization）层使用了变化的参数而非固定值。解决方法是手动应用修复代码，确保模型在推理前设置为评估模式。您可以参考相关 PR (jina-ai\u002Fexecutors#315) 中的代码片段进行修改，直到官方发布包含此修复的新版本。","https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fserve\u002Fissues\u002F4163",{"id":149,"question_zh":150,"answer_zh":151,"source_url":152},23626,"遇到 'CUDA driver and runtime could not be initialized' 错误或与 PaddleHub 集成时报错怎么办？","这通常与多进程启动方法有关。请尝试以下两个步骤：\n1. 安装 Jina 的预发布版本：`pip install --pre jina`\n2. 
在启动 Python 脚本之前，设置环境变量 `JINA_MP_START_METHOD` 为 `spawn`。\n命令行执行方式：\n`export JINA_MP_START_METHOD=spawn`\n`python your_jina_app.py`\n或者单行命令：\n`JINA_MP_START_METHOD=spawn python your_jina_app.py`","https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fserve\u002Fissues\u002F2514",{"id":154,"question_zh":155,"answer_zh":156,"source_url":157},23627,"升级到 Jina 1.1.2 后，BinaryPbIndexer 无法找到任何文档（即使没有报错），如何解决？","这是 1.1.2 版本中 BinaryPbIndexer 的一个已知问题。维护者已通过 PR #2306 提供了修复方案。建议检查是否使用了自定义 ID（特别是长度固定的 ID），并应用社区提供的补丁或等待包含该修复的后续版本更新。如果急需解决，可参考相关 PR 中的测试代码来验证修复效果。","https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fserve\u002Fissues\u002F2295",{"id":159,"question_zh":160,"answer_zh":161,"source_url":162},23628,"如何从存储中永久删除文档，而不仅仅是标记为不可见，且无需重新索引整个数据集？","默认情况下，Jina 仅将删除操作标记为逻辑删除（文档仍存在于存储中但不可访问）。若要物理删除文档，针对不同的索引器有不同的解决方案：\n1. 对于 NpIndexer：参考 PR #2046 的实现。\n2. 对于 BinaryPbIndexer：参考 PR #2102 的实现。\n这些更新允许在不重新索引整个数据集的情况下，真正从存储介质中移除数据。","https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fserve\u002Fissues\u002F1984",{"id":164,"question_zh":165,"answer_zh":166,"source_url":167},23629,"服务显示 'Endless Waiting executor0' 但实际上服务运行正常，这是什么原因？","这是一个关于状态检测日志显示的已知问题，通常不影响实际服务功能。如果服务本身能正常响应请求，可以忽略该等待提示。该问题通常会在后续的代码合并中得到修复，以消除误导性的日志输出。","https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fserve\u002Fissues\u002F5966",[169,174,179,184,189,194,199,204,209,214,219,224,229,234,239,244,249,254,259,264],{"id":170,"version":171,"summary_zh":172,"released_at":173},145129,"v3.28.0","## 发布说明 (`3.28.0`)\n\n> 发布时间：2024-11-12 08:25:46\n\n\n\n🙇 我们感谢所有为本次新版本做出贡献的开发者！特别感谢：\nJoan Fontanals、Jina Dev Bot、🙇\n\n\n### 🏁 单元测试与 CI\u002FCD\n\n - [[```7f3ad55d```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002F7f3ad55dc51a3398caf7e341150d15685b0065e0)] __-__ 解决 grpcio 版本阻塞问题 (#6198) (*Joan Fontanals*)\n\n### 🍹 其他改进\n\n - [[```6a472c8c```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002F6a472c8c49a9acbd53f49955994e349a15430fac)] __-__ 更新版本号 (#6219) (*Joan Fontanals*)\n - 
[[```80996ca3```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002F80996ca3a1395221fe1e8fbbb14ab45a9ea2b5af)] __-__ __文档__: 更新目录结构 (*Jina Dev Bot*)\n - [[```6edbbb14```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002F6edbbb147eaacf7cc3248b7e10143dba6f8ece49)] __-__ __版本__: 下一个版本将为 3.27.21 (*Jina Dev Bot*)\n\n","2024-11-12T08:26:45",{"id":175,"version":176,"summary_zh":177,"released_at":178},145130,"v3.27.20","## 发布说明 (`3.27.20`)\n\n> 发布时间：2024-11-05 09:18:22\n\n\n\n🙇 我们感谢所有为本次新版本做出贡献的开发者！特别感谢：\nJoan Fontanals、Jina Dev Bot、🙇\n\n\n### 🐞 错误修复\n\n - [[```a51e5eec```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002Fa51e5eec4df0b80dee13fcd89044be1b42ed5c40)] __-__ 修复 SageMaker 的请求处理问题 (#6218) (*Joan Fontanals*)\n - [[```0337bc47```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002F0337bc47b4beaf1c3ab5639bf55c88dd7c80ae7b)] __-__ 修复 HubApp 和 HubPods 的测试问题 (#6217) (*Joan Fontanals*)\n\n### 🍹 其他改进\n\n - [[```425e5ea6```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002F425e5ea6a0f687f78d27b95e2f107c3319f499d2)] __-__ __文档__: 更新目录结构 (*Jina Dev Bot*)\n - [[```6ddc3406```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002F6ddc34062c9a87075010c13136b1ba7f4902e666)] __-__ __版本号__: 下一个版本将为 3.27.20 (*Jina Dev Bot*)\n\n","2024-11-05T09:19:23",{"id":180,"version":181,"summary_zh":182,"released_at":183},145131,"v3.27.19","## 发布说明 (`3.27.19`)\n\n> 发布时间: 2024-11-04 10:18:08\n\n\n\n🙇 我们感谢所有为本次新版本做出贡献的开发者！特别感谢：\n Zac Li,  Jina 开发机器人,  🙇\n\n\n### 🆕 新特性\n\n - [[```92d03f08```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002F92d03f08a3ae9ff02de207418cb4cde4d9fb93a2)] __-__ 修复 SageMaker 中 CLIP 的批处理问题 (#6216) (*Zac Li*)\n\n### 🍹 其他改进\n\n - [[```2f65ce10```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002F2f65ce1074aeff7fd6b77c8b9cd2a0eca7e0dcbd)] __-__ __文档__: 更新目录结构 (*Jina 开发机器人*)\n - 
[[```8aee3a1d```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002F8aee3a1d9dc7842425163ba36f82cb0e7f09c851)] __-__ __版本号__: 下一个版本将为 3.27.19 (*Jina 开发机器人*)\n\n","2024-11-04T10:19:10",{"id":185,"version":186,"summary_zh":187,"released_at":188},145132,"v3.27.18","## 发布说明 (`3.27.18`)\n\n> 发布时间：2024-10-25 16:57:06\n\n\n\n🙇 我们感谢所有为本次新版本做出贡献的开发者！特别感谢：\n韩晓、Jina 开发机器人、🙇\n\n\n### 📗 文档\n\n - [[```d65d0c30```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002Fd65d0c30e5732d14477938a018f46d33463e3d51)] __-__ 修复 URL (*韩晓*)\n - [[```99f4384a```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002F99f4384a13595cd4c687e0ff062e75e43798f05b)] __-__ 修复拼写错误 (*韩晓*)\n\n### 🍹 其他改进\n\n - [[```ce97c322```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002Fce97c322aaeca958e106458d0837927413a983b5)] __-__ 将 docarray v1v2 替换为版本号 (*韩晓*)\n - [[```984da92d```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002F984da92d126471a107e82fb0d77950696420be10)] __-__ 更新 README 文件 (*韩晓*)\n - [[```32f0cb30```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002F32f0cb30dbd019cb60f24754befdfb30d2410e4a)] __-__ __文档__：更新目录 (*Jina 开发机器人*)\n - [[```1e3dd5f2```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002F1e3dd5f29b6c88548cd44a1be51d3226523cf81a)] __-__ __版本__：下一个版本将是 3.27.18 (*Jina 开发机器人*)\n\n","2024-10-25T16:58:01",{"id":190,"version":191,"summary_zh":192,"released_at":193},145133,"v3.27.17","## 发布说明 (`3.27.17`)\n\n> 发布时间：2024-10-01 09:22:10\n\n\n\n🙇 我们感谢所有为本次新版本做出贡献的开发者！特别感谢：\nJoan Fontanals、Jina Dev Bot、🙇\n\n\n### 🐞 错误修复\n\n - [[```47eb5f0e```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002F47eb5f0e8e41df54b82e1214ece0890b90cb7ef5)] __-__ 从客户端移除 inputs 状态 (#6207) (*Joan Fontanals*)\n\n### 🍹 其他改进\n\n - [[```ebbc2519```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002Febbc25197e58116226126fc731ff394c73d5a2c4)] __-__ __文档__: 更新目录结构 
(*Jina Dev Bot*)\n - [[```1517fc7c```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002F1517fc7c93c75a428ce351add2dc886e51d90e53)] __-__ __版本__: 下一个版本将为 3.27.17 (*Jina Dev Bot*)\n\n","2024-10-01T09:31:10",{"id":195,"version":196,"summary_zh":197,"released_at":198},145134,"v3.27.16","## 发布说明 (`3.27.16`)\n\n> 发布时间：2024-09-26 15:28:55\n\n\n\n🙇 我们感谢所有为本次新版本做出贡献的开发者！特别鸣谢：\nJoan Fontanals、Jina Dev Bot、🙇\n\n\n### 🐞 错误修复\n\n - [[```c933e499```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002Fc933e4998a2d8bafe80851f00f3ef94cd63a5e29)] __-__ 丰富日志 (#6206) (*Joan Fontanals*)\n\n### 🏁 单元测试与 CI\u002FCD\n\n - [[```eaad3233```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002Feaad323320ba8951bd17d9154a1d0adcf3e5c9d0)] __-__ 为动态批处理添加额外测试 (#6205) (*Joan Fontanals*)\n\n### 🍹 其他改进\n\n - [[```449e087c```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002F449e087c91f19223aa96fe04c0ec3a231ebaaa5e)] __-__ __文档__: 更新目录结构 (*Jina Dev Bot*)\n - [[```35241042```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002F352410421a3def43bbb0492205a33186e207fc51)] __-__ __版本号__: 下一个版本将是 3.27.16 (*Jina Dev Bot*)\n\n","2024-09-26T15:30:04",{"id":200,"version":201,"summary_zh":202,"released_at":203},145135,"v3.27.15","## 发布说明（`3.27.15`）\n\n> 发布时间：2024-09-25 11:07:13\n\n\n\n🙇 我们感谢所有为本次新版本做出贡献的开发者！特别感谢：\nJoan Fontanals、Jina Dev Bot、🙇\n\n\n### 🐞 错误修复\n\n - [[```b1139bc6```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002Fb1139bc62336301b439cc6a4f69bfc81fba3e687)] __-__ 动态批处理配置 (#6204) (*Joan Fontanals*)\n\n### 🍹 其他改进\n\n - [[```450553a3```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002F450553a3dc163e692db148666dd082f4991984c2)] __-__ __文档__：更新目录结构 (*Jina Dev Bot*)\n - [[```22bdaee8```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002F22bdaee8c69fe56518505a67318d2ce36a2971f6)] __-__ __版本号__：下一个版本将为 3.27.15 (*Jina Dev 
Bot*)\n\n","2024-09-25T11:08:12",{"id":205,"version":206,"summary_zh":207,"released_at":208},145136,"v3.27.14","## Release Note (`3.27.14`)\n\n> Release time: 2024-09-23 14:35:28\n\n\n\n🙇 We'd like to thank all contributors for this new release! In particular,\n Joan Fontanals,  Jina Dev Bot,  🙇\n\n\n### 🆕 New Features\n\n - [[```338ac3f3```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002F338ac3f3ae40d21278fd79fe11b710df646acdff)] __-__ use dynamic batching parameters (#6203) (*Joan Fontanals*)\n\n### 🍹 Other Improvements\n\n - [[```434d09ec```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002F434d09ec238cd8d20ec38c241587dc71e3458f93)] __-__ __docs__: update TOC (*Jina Dev Bot*)\n - [[```82bb7220```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002F82bb7220e03db2cca7e132e18d8bc965e0c49eba)] __-__ __version__: the next version will be 3.27.14 (*Jina Dev Bot*)\n\n","2024-09-23T14:36:35",{"id":210,"version":211,"summary_zh":212,"released_at":213},145137,"v3.27.13","## Release Note (`3.27.13`)\n\n> Release time: 2024-09-20 09:19:17\n\n\n🙇 We'd like to thank all contributors for this new release! In particular,\n Joan Fontanals,  Jina Dev Bot,  🙇\n\n\n### 🐞 Bug fixes\n\n - [[```d48f5a35```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002Fd48f5a35abc1eb40d632f30c5a2191251a765e06)] __-__ rewrite compatibility (#6202) (*Joan Fontanals*)\n\n### 🍹 Other Improvements\n\n - [[```77d46f71```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002F77d46f71e60643bf9a850d31e26fa9bb3662982c)] __-__ __version__: the next version will be 3.27.13 (*Jina Dev Bot*)\n","2024-09-20T09:20:21",{"id":215,"version":216,"summary_zh":217,"released_at":218},145138,"v3.27.12","## Release Note (`3.27.12`)\n\n> Release time: 2024-09-20 06:33:17\n\n\n\n🙇 We'd like to thank all contributors for this new release! In particular,\n Joan Fontanals,  Jina Dev Bot,  🙇\n\n\n### 🏁 Unit Test and CI\u002FCD\n\n - [[```38943534```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002F389435345bef0926942e1dfb58fccdd458c74b84)] __-__ test no data lock in batch queue (#6201) (*Joan Fontanals*)\n\n### 🍹 Other Improvements\n\n - [[```246f5960```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002F246f5960b99750202b87b0c1b7e788fa252e3319)] __-__ __docs__: 
update TOC (*Jina Dev Bot*)\n - [[```9cdb9ffc```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002F9cdb9ffc7a4d3dc22a5df6f8e17dcc9583a66ab4)] __-__ __version__: the next version will be 3.27.12 (*Jina Dev Bot*)\n\n","2024-09-20T06:34:25",{"id":220,"version":221,"summary_zh":222,"released_at":223},145139,"v3.27.11","## Release Note (`3.27.11`)\n\n> Release time: 2024-09-18 18:58:01\n\n\n\n🙇 We'd like to thank all contributors for this new release! In particular,\n Joan Fontanals,  Jina Dev Bot,  🙇\n\n\n### 🆕 New Features\n\n - [[```d17b6206```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002Fd17b62063520258187970ed069053de633dd2c8a)] __-__ add custom_metric for dynamic batching (#6189) (*Joan Fontanals*)\n\n### 🍹 Other Improvements\n\n - [[```d4fb94d2```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002Fd4fb94d245092c8bdd8f713e44a362d31b7c8f07)] __-__ __docs__: update TOC (*Jina Dev Bot*)\n - [[```12949a5f```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002F12949a5f95b2565e47d92a045eba0b55559f14fa)] __-__ __version__: the next version will be 3.27.11 (*Jina Dev Bot*)\n\n","2024-09-18T19:05:39",{"id":225,"version":226,"summary_zh":227,"released_at":228},145140,"v3.27.10","## Release Note (`3.27.10`)\n\n> Release time: 2024-09-17 12:00:15\n\n\n\n🙇 We'd like to thank all contributors for this new release! 
In particular,\n Joan Fontanals,  Jina Dev Bot,  🙇\n\n\n### 🐞 Bug fixes\n\n - [[```be385a50```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002Fbe385a5009e2ee348f3fb4f963dac3a4db0aa613)] __-__ pass params to iolet (#6200) (*Joan Fontanals*)\n\n### 🍹 Other Improvements\n\n - [[```3223ce9f```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002F3223ce9fe4433238efae75f2e14df88527d07a48)] __-__ __docs__: update TOC (*Jina Dev Bot*)\n - [[```be20ea83```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002Fbe20ea830ebd16f25a581bb844c762390d8f87c5)] __-__ __version__: the next version will be 3.27.10 (*Jina Dev Bot*)\n\n","2024-09-17T12:01:18",{"id":230,"version":231,"summary_zh":232,"released_at":233},145141,"v3.27.9","## Release Note (`3.27.9`)\n\n> Release time: 2024-09-15 19:37:06\n\n\n\n🙇 We'd like to thank all contributors for this new release! In particular,\n Joan Fontanals,  Jina Dev Bot,  🙇\n\n\n### 🐞 Bug fixes\n\n - [[```ef11cf70```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002Fef11cf705795cbf71ad172044c2e060fc4b6331c)] __-__ readd timeout (#6199) (*Joan Fontanals*)\n\n### 🍹 Other Improvements\n\n - [[```50e34ca5```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002F50e34ca56be6ff2b59a4b135a8adcae4b060f9ac)] __-__ __version__: the next version will be 3.27.9 (*Jina Dev Bot*)\n\n","2024-09-15T19:46:50",{"id":235,"version":236,"summary_zh":237,"released_at":238},145142,"v3.27.8","## Release Note (`3.27.8`)\n\n> Release time: 2024-09-15 16:24:05\n\n\n\n🙇 We'd like to thank all contributors for this new release! 
In particular,\n Joan Fontanals,  Jina Dev Bot,  🙇\n\n\n### 🐞 Bug fixes\n\n - [[```fbdde039```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002Ffbdde03928fc1c66958ab1718d3587f875f59638)] __-__ reuse session (#6196) (*Joan Fontanals*)\n\n### 🍹 Other Improvements\n\n - [[```a62b85e7```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002Fa62b85e76bab95c379410eab5f91af5db026ab34)] __-__ __docs__: update TOC (*Jina Dev Bot*)\n - [[```f9f08da5```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002Ff9f08da535433dcd8f13a041a9e69c739db8e21d)] __-__ __version__: the next version will be 3.27.8 (*Jina Dev Bot*)\n\n","2024-09-15T16:34:30",{"id":240,"version":241,"summary_zh":242,"released_at":243},145143,"v3.27.7","## Release Note (`3.27.7`)\n\n> Release time: 2024-09-12 09:51:05\n\n\n\n🙇 We'd like to thank all contributors for this new release! In particular,\n Joan Fontanals,  Jina Dev Bot,  🙇\n\n\n### 🧼 Code Refactoring\n\n - [[```4c5dab7a```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002F4c5dab7a1cf128f21349ca64b5aa0664f011ab7e)] __-__ do not return response object (#6195) (*Joan Fontanals*)\n\n### 🍹 Other Improvements\n\n - [[```6caf84c0```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002F6caf84c09bc1c551df0bb14ad800a4c3970c859b)] __-__ __docs__: update TOC (*Jina Dev Bot*)\n - [[```a61028bc```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002Fa61028bca94021a3b4e6427565726766c3cbec74)] __-__ __version__: the next version will be 3.27.7 (*Jina Dev Bot*)\n\n","2024-09-12T09:52:08",{"id":245,"version":246,"summary_zh":247,"released_at":248},145144,"v3.27.6","## Release Note (`3.27.6`)\n\n> Release time: 2024-09-10 08:19:23\n\n\n\n🙇 We'd like to thank all contributors for this new release! 
In particular,\n Joan Fontanals,  Jina Dev Bot,  🙇\n\n\n### 🧼 Code Refactoring\n\n - [[```533a7d8a```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002F533a7d8a45ea88ee47c5e0fa614059444de2d5e5)] __-__ handle async context manager in clientlet (#6194) (*Joan Fontanals*)\n\n### 🍹 Other Improvements\n\n - [[```dbb62432```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002Fdbb624328c2a546a54de96470cca1826a9833719)] __-__ __docs__: update TOC (*Jina Dev Bot*)\n - [[```762b50c6```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002F762b50c6eb968ecf85b1d7afe22025e0c6b04eb2)] __-__ __version__: the next version will be 3.27.6 (*Jina Dev Bot*)\n\n","2024-09-10T08:20:35",{"id":250,"version":251,"summary_zh":252,"released_at":253},145145,"v3.27.5","## Release Note (`3.27.5`)\n\n> Release time: 2024-09-05 09:58:55\n\n\n\n🙇 We'd like to thank all contributors for this new release! In particular,\n Joan Fontanals,  Jina Dev Bot,  🙇\n\n\n### 🐞 Bug fixes\n\n - [[```5397e430```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002F5397e430417be834a5ae0e659192cd5daf3e8b2d)] __-__ add plain handler (#6192) (*Joan Fontanals*)\n\n### 🧼 Code Refactoring\n\n - [[```abc4ca24```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002Fabc4ca24f4a36347018370050d3d5cd42190ee93)] __-__ slight change in dyn batch queue (#6193) (*Joan Fontanals*)\n\n### 🍹 Other Improvements\n\n - [[```bff3d1fc```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002Fbff3d1fcb95299a12845d7c415f313905351b106)] __-__ __docs__: update TOC (*Jina Dev Bot*)\n - [[```35dcf9e5```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002F35dcf9e59581a39f0a4f2012182956a4f3d31d54)] __-__ __version__: the next version will be 3.27.5 (*Jina Dev Bot*)\n\n","2024-09-05T09:59:53",{"id":255,"version":256,"summary_zh":257,"released_at":258},145146,"v3.27.4","## Release Note (`3.27.4`)\n\n> Release time: 2024-09-03 
19:36:57\n\n\n\n🙇 We'd like to thank all contributors for this new release! In particular,\n Joan Fontanals,  Jina Dev Bot,  🙇\n\n\n### 🆕 New Features\n\n - [[```3e0943ff```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002F3e0943ffef1441bf3dcdf7ea03f3471622b9f04c)] __-__ avoid need data lock in batch queue (#6190) (*Joan Fontanals*)\n\n### 🍹 Other Improvements\n\n - [[```f3e34428```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002Ff3e344289d22d8b20bba39296758640c122974b7)] __-__ __docs__: update TOC (*Jina Dev Bot*)\n - [[```e8c1f00d```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002Fe8c1f00d2c2b600e2322e75500ab67237837a73b)] __-__ __version__: the next version will be 3.27.4 (*Jina Dev Bot*)\n\n","2024-09-03T19:38:00",{"id":260,"version":261,"summary_zh":262,"released_at":263},145147,"v3.27.3","## Release Note (`3.27.3`)\n\n> Release time: 2024-09-02 12:44:55\n\n\n\n🙇 We'd like to thank all contributors for this new release! In particular,\n Joan Fontanals,  Jina Dev Bot,  🙇\n\n\n### 🐞 Bug fixes\n\n - [[```09f61da8```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002F09f61da8a96b0b45f89865447de3049a35e31984)] __-__ pass allow_concurrent to runtime_args (#6188) (*Joan Fontanals*)\n\n### 🍹 Other Improvements\n\n - [[```3ca71a55```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002F3ca71a5529b772c652243ba94c69d77215522abe)] __-__ __docs__: update TOC (*Jina Dev Bot*)\n - [[```dc3ef520```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002Fdc3ef520dd1a6a33720339653d1606375cc32eac)] __-__ __version__: the next version will be 3.27.3 (*Jina Dev Bot*)\n\n","2024-09-02T12:46:09",{"id":265,"version":266,"summary_zh":267,"released_at":268},145148,"v3.27.2","## Release Note (`3.27.2`)\n\n> Release time: 2024-07-23 14:50:49\n\n\n\n🙇 We'd like to thank all contributors for this new release! 
In particular,\n Joan Fontanals,  Jina Dev Bot,  🙇\n\n\n### 🐞 Bug fixes\n\n - [[```6e29377a```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002F6e29377a6595911750967c30dab484d5cc2800a5)] __-__ fix optim issue with optimization (#6184) (*Joan Fontanals*)\n\n### 🍹 Other Improvements\n\n - [[```268ceb76```](https:\u002F\u002Fgithub.com\u002Fjina-ai\u002Fjina\u002Fcommit\u002F268ceb766d405bf7286c8ba6b8772b7488fab063)] __-__ __version__: the next version will be 3.27.2 (*Jina Dev Bot*)\n\n","2024-07-23T14:54:11"]