[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-HKUDS--nanobot":3,"tool-HKUDS--nanobot":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":78,"owner_location":78,"owner_email":78,"owner_twitter":78,"owner_website":79,"owner_url":80,"languages":81,"stars":101,"forks":102,"last_commit_at":103,"license":104,"difficulty_score":23,"env_os":105,"env_gpu":106,"env_ram":106,"env_deps":107,"category_tags":113,"github_topics":78,"view_count":114,"oss_zip_url":78,"oss_zip_packed_at":78,"status":16,"created_at":115,"updated_at":116,"faqs":117,"releases":147},2221,"HKUDS\u002Fnanobot","nanobot","\"🐈 nanobot: The Ultra-Lightweight OpenClaw\"","nanobot 是一款受 OpenClaw 启发打造的超轻量级个人 AI 助手。它旨在解决现有 AI 代理框架代码冗余、部署复杂的问题，通过极致精简的架构，用比 OpenClaw 少 99% 的代码量实现了核心智能体功能，让运行更高效、维护更简单。\n\n这款工具特别适合开发者和技术研究人员使用，尤其是那些希望快速搭建私有化 AI 助手、深入理解智能体底层逻辑，或需要在资源受限环境中部署应用的用户。普通用户若具备基础编程能力，也可通过其交互式向导轻松配置属于自己的 AI 伙伴。\n\nnanobot 的技术亮点在于其“轻量化”与“高兼容性”。它不仅支持 OpenAI、Anthropic 等主流模型原生接入，还无缝集成了微信、飞书、Telegram、Slack 等多种通讯渠道，甚至支持端到端流式输出和媒体文件处理。近期更新中，项目团队移除了存在供应链风险的依赖库，进一步提升了安全性与稳定性。无论是用于日常任务自动化，还是作为学习 AI Agent 架构的教学案例，nanobot 都是一个简洁而强大的选择。","\u003Cdiv align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FHKUDS_nanobot_readme_ef974e5bf5af.png\" alt=\"nanobot\" width=\"500\">\n  \u003Ch1>nanobot: Ultra-Lightweight Personal AI Assistant\u003C\u002Fh1>\n  
\u003Cp>\n    \u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Fnanobot-ai\u002F\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fnanobot-ai\" alt=\"PyPI\">\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fpepy.tech\u002Fproject\u002Fnanobot-ai\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FHKUDS_nanobot_readme_726788d99f74.png\" alt=\"Downloads\">\u003C\u002Fa>\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpython-≥3.11-blue\" alt=\"Python\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flicense-MIT-green\" alt=\"License\">\n    \u003Ca href=\".\u002FCOMMUNICATION.md\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FFeishu-Group-E9DBFC?style=flat&logo=feishu&logoColor=white\" alt=\"Feishu\">\u003C\u002Fa>\n    \u003Ca href=\".\u002FCOMMUNICATION.md\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FWeChat-Group-C5EAB4?style=flat&logo=wechat&logoColor=white\" alt=\"WeChat\">\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fdiscord.gg\u002FMnCvHqpUGB\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDiscord-Community-5865F2?style=flat&logo=discord&logoColor=white\" alt=\"Discord\">\u003C\u002Fa>\n  \u003C\u002Fp>\n\u003C\u002Fdiv>\n\n🐈 **nanobot** is an **ultra-lightweight** personal AI assistant inspired by [OpenClaw](https:\u002F\u002Fgithub.com\u002Fopenclaw\u002Fopenclaw).\n\n⚡️ Delivers core agent functionality with **99% fewer lines of code** than OpenClaw.\n\n📏 Real-time line count: run `bash core_agent_lines.sh` to verify anytime.\n\n## 📢 News\n\n> [!IMPORTANT]\n> **Security note:** Due to `litellm` supply chain poisoning, **please check your Python environment ASAP** and refer to this [advisory](https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fdiscussions\u002F2445) for details. 
We have fully removed `litellm` since **v0.1.4.post6**.\n\n- **2026-03-27** 🚀 Released **v0.1.4.post6** — architecture decoupling, litellm removal, end-to-end streaming, WeChat channel, and a security fix. Please see [release notes](https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Freleases\u002Ftag\u002Fv0.1.4.post6) for details.\n- **2026-03-26** 🏗️ Agent runner extracted and lifecycle hooks unified; stream delta coalescing at boundaries.\n- **2026-03-25** 🌏 StepFun provider, configurable timezone, Gemini thought signatures.\n- **2026-03-24** 🔧 WeChat compatibility, Feishu CardKit streaming, test suite restructured.\n- **2026-03-23** 🔧 Command routing refactored for plugins, WhatsApp\u002FWeChat media, unified channel login CLI.\n- **2026-03-22** ⚡ End-to-end streaming, WeChat channel, Anthropic cache optimization, `\u002Fstatus` command.\n- **2026-03-21** 🔒 Replace `litellm` with native `openai` + `anthropic` SDKs. Please see [commit](https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fcommit\u002F3dfdab7).\n- **2026-03-20** 🧙 Interactive setup wizard — pick your provider, model autocomplete, and you're good to go.\n- **2026-03-19** 💬 Telegram gets more resilient under load; Feishu now renders code blocks properly.\n- **2026-03-18** 📷 Telegram can now send media via URL. Cron schedules show human-readable details.\n- **2026-03-17** ✨ Feishu formatting glow-up, Slack reacts when done, custom endpoints support extra headers, and image handling is more reliable.\n\n\u003Cdetails>\n\u003Csummary>Earlier news\u003C\u002Fsummary>\n\n- **2026-03-16** 🚀 Released **v0.1.4.post5** — a refinement-focused release with stronger reliability and channel support, and a more dependable day-to-day experience. 
Please see [release notes](https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Freleases\u002Ftag\u002Fv0.1.4.post5) for details.\n- **2026-03-15** 🧩 DingTalk rich media, smarter built-in skills, and cleaner model compatibility.\n- **2026-03-14** 💬 Channel plugins, Feishu replies, and steadier MCP, QQ, and media handling.\n- **2026-03-13** 🌐 Multi-provider web search, LangSmith, and broader reliability improvements.\n- **2026-03-12** 🚀 VolcEngine support, Telegram reply context, `\u002Frestart`, and sturdier memory.\n- **2026-03-11** 🔌 WeCom, Ollama, cleaner discovery, and safer tool behavior.\n- **2026-03-10** 🧠 Token-based memory, shared retries, and cleaner gateway and Telegram behavior.\n- **2026-03-09** 💬 Slack thread polish and better Feishu audio compatibility.\n- **2026-03-08** 🚀 Released **v0.1.4.post4** — a reliability-packed release with safer defaults, better multi-instance support, sturdier MCP, and major channel and provider improvements. Please see [release notes](https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Freleases\u002Ftag\u002Fv0.1.4.post4) for details.\n- **2026-03-07** 🚀 Azure OpenAI provider, WhatsApp media, QQ group chats, and more Telegram\u002FFeishu polish.\n- **2026-03-06** 🪄 Lighter providers, smarter media handling, and sturdier memory and CLI compatibility.\n- **2026-03-05** ⚡️ Telegram draft streaming, MCP SSE support, and broader channel reliability fixes.\n- **2026-03-04** 🛠️ Dependency cleanup, safer file reads, and another round of test and Cron fixes.\n- **2026-03-03** 🧠 Cleaner user-message merging, safer multimodal saves, and stronger Cron guards.\n- **2026-03-02** 🛡️ Safer default access control, sturdier Cron reloads, and cleaner Matrix media handling.\n- **2026-03-01** 🌐 Web proxy support, smarter Cron reminders, and Feishu rich-text parsing improvements.\n- **2026-02-28** 🚀 Released **v0.1.4.post3** — cleaner context, hardened session history, and smarter agent. 
Please see [release notes](https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Freleases\u002Ftag\u002Fv0.1.4.post3) for details.\n- **2026-02-27** 🧠 Experimental thinking mode support, DingTalk media messages, Feishu and QQ channel fixes.\n- **2026-02-26** 🛡️ Session poisoning fix, WhatsApp dedup, Windows path guard, Mistral compatibility.\n- **2026-02-25** 🧹 New Matrix channel, cleaner session context, auto workspace template sync.\n- **2026-02-24** 🚀 Released **v0.1.4.post2** — a reliability-focused release with a redesigned heartbeat, prompt cache optimization, and hardened provider & channel stability. See [release notes](https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Freleases\u002Ftag\u002Fv0.1.4.post2) for details.\n- **2026-02-23** 🔧 Virtual tool-call heartbeat, prompt cache optimization, Slack mrkdwn fixes.\n- **2026-02-22** 🛡️ Slack thread isolation, Discord typing fix, agent reliability improvements.\n- **2026-02-21** 🎉 Released **v0.1.4.post1** — new providers, media support across channels, and major stability improvements. See [release notes](https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Freleases\u002Ftag\u002Fv0.1.4.post1) for details.\n- **2026-02-20** 🐦 Feishu now receives multimodal files from users. More reliable memory under the hood.\n- **2026-02-19** ✨ Slack now sends files, Discord splits long messages, and subagents work in CLI mode.\n- **2026-02-18** ⚡️ nanobot now supports VolcEngine, MCP custom auth headers, and Anthropic prompt caching.\n- **2026-02-17** 🎉 Released **v0.1.4** — MCP support, progress streaming, new providers, and multiple channel improvements. 
Please see [release notes](https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Freleases\u002Ftag\u002Fv0.1.4) for details.\n- **2026-02-16** 🦞 nanobot now integrates a [ClawHub](https:\u002F\u002Fclawhub.ai) skill — search and install public agent skills.\n- **2026-02-15** 🔑 nanobot now supports OpenAI Codex provider with OAuth login support.\n- **2026-02-14** 🔌 nanobot now supports MCP! See [MCP section](#mcp-model-context-protocol) for details.\n- **2026-02-13** 🎉 Released **v0.1.3.post7** — includes security hardening and multiple improvements. **Please upgrade to the latest version to address security issues**. See [release notes](https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Freleases\u002Ftag\u002Fv0.1.3.post7) for more details.\n- **2026-02-12** 🧠 Redesigned memory system — Less code, more reliable. Join the [discussion](https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fdiscussions\u002F566) about it!\n- **2026-02-11** ✨ Enhanced CLI experience and added MiniMax support!\n- **2026-02-10** 🎉 Released **v0.1.3.post6** with improvements! Check the updates [notes](https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Freleases\u002Ftag\u002Fv0.1.3.post6) and our [roadmap](https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fdiscussions\u002F431).\n- **2026-02-09** 💬 Added Slack, Email, and QQ support — nanobot now supports multiple chat platforms!\n- **2026-02-08** 🔧 Refactored Providers—adding a new LLM provider now takes just 2 simple steps! Check [here](#providers).\n- **2026-02-07** 🚀 Released **v0.1.3.post5** with Qwen support & several key improvements! 
Check [here](https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Freleases\u002Ftag\u002Fv0.1.3.post5) for details.\n- **2026-02-06** ✨ Added Moonshot\u002FKimi provider, Discord integration, and enhanced security hardening!\n- **2026-02-05** ✨ Added Feishu channel, DeepSeek provider, and enhanced scheduled tasks support!\n- **2026-02-04** 🚀 Released **v0.1.3.post4** with multi-provider & Docker support! Check [here](https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Freleases\u002Ftag\u002Fv0.1.3.post4) for details.\n- **2026-02-03** ⚡ Integrated vLLM for local LLM support and improved natural language task scheduling!\n- **2026-02-02** 🎉 nanobot officially launched! Welcome to try 🐈 nanobot!\n\n\u003C\u002Fdetails>\n\n> 🐈 nanobot is for educational, research, and technical exchange purposes only. It is unrelated to crypto and does not involve any official token or coin.\n\n## Key Features of nanobot:\n\n🪶 **Ultra-Lightweight**: A super lightweight implementation of OpenClaw — 99% smaller, significantly faster.\n\n🔬 **Research-Ready**: Clean, readable code that's easy to understand, modify, and extend for research.\n\n⚡️ **Lightning Fast**: Minimal footprint means faster startup, lower resource usage, and quicker iterations.\n\n💎 **Easy-to-Use**: One-click to deploy and you're ready to go.\n\n## 🏗️ Architecture\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FHKUDS_nanobot_readme_1ceb50b3cf68.png\" alt=\"nanobot architecture\" width=\"800\">\n\u003C\u002Fp>\n\n## Table of Contents\n\n- [News](#-news)\n- [Key Features](#key-features-of-nanobot)\n- [Architecture](#️-architecture)\n- [Features](#-features)\n- [Install](#-install)\n- [Quick Start](#-quick-start)\n- [Chat Apps](#-chat-apps)\n- [Agent Social Network](#-agent-social-network)\n- [Configuration](#️-configuration)\n- [Multiple Instances](#-multiple-instances)\n- [CLI Reference](#-cli-reference)\n- [Python SDK](#-python-sdk)\n- 
[OpenAI-Compatible API](#-openai-compatible-api)\n- [Docker](#-docker)\n- [Linux Service](#-linux-service)\n- [Project Structure](#-project-structure)\n- [Contribute & Roadmap](#-contribute--roadmap)\n- [Star History](#-star-history)\n\n## ✨ Features\n\n\u003Ctable align=\"center\">\n  \u003Ctr align=\"center\">\n    \u003Cth>\u003Cp align=\"center\">📈 24\u002F7 Real-Time Market Analysis\u003C\u002Fp>\u003C\u002Fth>\n    \u003Cth>\u003Cp align=\"center\">🚀 Full-Stack Software Engineer\u003C\u002Fp>\u003C\u002Fth>\n    \u003Cth>\u003Cp align=\"center\">📅 Smart Daily Routine Manager\u003C\u002Fp>\u003C\u002Fth>\n    \u003Cth>\u003Cp align=\"center\">📚 Personal Knowledge Assistant\u003C\u002Fp>\u003C\u002Fth>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd align=\"center\">\u003Cp align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FHKUDS_nanobot_readme_02d06ff8b9e2.gif\" width=\"180\" height=\"400\">\u003C\u002Fp>\u003C\u002Ftd>\n    \u003Ctd align=\"center\">\u003Cp align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FHKUDS_nanobot_readme_36d322a926fa.gif\" width=\"180\" height=\"400\">\u003C\u002Fp>\u003C\u002Ftd>\n    \u003Ctd align=\"center\">\u003Cp align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FHKUDS_nanobot_readme_55871898f25e.gif\" width=\"180\" height=\"400\">\u003C\u002Fp>\u003C\u002Ftd>\n    \u003Ctd align=\"center\">\u003Cp align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FHKUDS_nanobot_readme_a1d9382d70c7.gif\" width=\"180\" height=\"400\">\u003C\u002Fp>\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd align=\"center\">Discovery • Insights • Trends\u003C\u002Ftd>\n    \u003Ctd align=\"center\">Develop • Deploy • Scale\u003C\u002Ftd>\n    \u003Ctd align=\"center\">Schedule • Automate • Organize\u003C\u002Ftd>\n    \u003Ctd align=\"center\">Learn • Memory • Reasoning\u003C\u002Ftd>\n  
\u003C\u002Ftr>\n\u003C\u002Ftable>\n\n## 📦 Install\n\n**Install from source** (latest features, recommended for development)\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot.git\ncd nanobot\npip install -e .\n```\n\n**Install with [uv](https:\u002F\u002Fgithub.com\u002Fastral-sh\u002Fuv)** (stable, fast)\n\n```bash\nuv tool install nanobot-ai\n```\n\n**Install from PyPI** (stable)\n\n```bash\npip install nanobot-ai\n```\n\n### Update to latest version\n\n**PyPI \u002F pip**\n\n```bash\npip install -U nanobot-ai\nnanobot --version\n```\n\n**uv**\n\n```bash\nuv tool upgrade nanobot-ai\nnanobot --version\n```\n\n**Using WhatsApp?** Rebuild the local bridge after upgrading:\n\n```bash\nrm -rf ~\u002F.nanobot\u002Fbridge\nnanobot channels login whatsapp\n```\n\n## 🚀 Quick Start\n\n> [!TIP]\n> Set your API key in `~\u002F.nanobot\u002Fconfig.json`.\n> Get API keys: [OpenRouter](https:\u002F\u002Fopenrouter.ai\u002Fkeys) (Global)\n>\n> For other LLM providers, please see the [Providers](#providers) section.\n>\n> For web search capability setup, please see [Web Search](#web-search).\n\n**1. Initialize**\n\n```bash\nnanobot onboard\n```\n\nUse `nanobot onboard --wizard` if you want the interactive setup wizard.\n\n**2. Configure** (`~\u002F.nanobot\u002Fconfig.json`)\n\nConfigure these **two parts** in your config (other options have defaults).\n\n*Set your API key* (e.g. OpenRouter, recommended for global users):\n```json\n{\n  \"providers\": {\n    \"openrouter\": {\n      \"apiKey\": \"sk-or-v1-xxx\"\n    }\n  }\n}\n```\n\n*Set your model* (optionally pin a provider — defaults to auto-detection):\n```json\n{\n  \"agents\": {\n    \"defaults\": {\n      \"model\": \"anthropic\u002Fclaude-opus-4-5\",\n      \"provider\": \"openrouter\"\n    }\n  }\n}\n```\n\n**3. Chat**\n\n```bash\nnanobot agent\n```\n\nThat's it! You have a working AI assistant in 2 minutes.\n\n## 💬 Chat Apps\n\nConnect nanobot to your favorite chat platform. 
Want to build your own? See the [Channel Plugin Guide](.\u002Fdocs\u002FCHANNEL_PLUGIN_GUIDE.md).\n\n| Channel | What you need |\n|---------|---------------|\n| **Telegram** | Bot token from @BotFather |\n| **Discord** | Bot token + Message Content intent |\n| **WhatsApp** | QR code scan (`nanobot channels login whatsapp`) |\n| **WeChat (Weixin)** | QR code scan (`nanobot channels login weixin`) |\n| **Feishu** | App ID + App Secret |\n| **DingTalk** | App Key + App Secret |\n| **Slack** | Bot token + App-Level token |\n| **Matrix** | Homeserver URL + Access token |\n| **Email** | IMAP\u002FSMTP credentials |\n| **QQ** | App ID + App Secret |\n| **Wecom** | Bot ID + Bot Secret |\n| **Mochat** | Claw token (auto-setup available) |\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Telegram\u003C\u002Fb> (Recommended)\u003C\u002Fsummary>\n\n**1. Create a bot**\n- Open Telegram, search `@BotFather`\n- Send `\u002Fnewbot`, follow prompts\n- Copy the token\n\n**2. Configure**\n\n```json\n{\n  \"channels\": {\n    \"telegram\": {\n      \"enabled\": true,\n      \"token\": \"YOUR_BOT_TOKEN\",\n      \"allowFrom\": [\"YOUR_USER_ID\"]\n    }\n  }\n}\n```\n\n> You can find your **User ID** in Telegram settings. It is shown as `@yourUserId`.\n> Copy this value **without the `@` symbol** and paste it into the config file.\n\n\n**3. Run**\n\n```bash\nnanobot gateway\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Mochat (Claw IM)\u003C\u002Fb>\u003C\u002Fsummary>\n\nUses **Socket.IO WebSocket** by default, with HTTP polling fallback.\n\n**1. Ask nanobot to set up Mochat for you**\n\nSimply send this message to nanobot (replace `xxx@xxx` with your real email):\n\n```\nRead https:\u002F\u002Fraw.githubusercontent.com\u002FHKUDS\u002FMoChat\u002Frefs\u002Fheads\u002Fmain\u002Fskills\u002Fnanobot\u002Fskill.md and register on MoChat. 
My Email account is xxx@xxx Bind me as your owner and DM me on MoChat.\n```\n\nnanobot will automatically register, configure `~\u002F.nanobot\u002Fconfig.json`, and connect to Mochat.\n\n**2. Restart gateway**\n\n```bash\nnanobot gateway\n```\n\nThat's it — nanobot handles the rest!\n\n\u003Cbr>\n\n\u003Cdetails>\n\u003Csummary>Manual configuration (advanced)\u003C\u002Fsummary>\n\nIf you prefer to configure manually, add the following to `~\u002F.nanobot\u002Fconfig.json`:\n\n> Keep `claw_token` private. It should only be sent in `X-Claw-Token` header to your Mochat API endpoint.\n\n```json\n{\n  \"channels\": {\n    \"mochat\": {\n      \"enabled\": true,\n      \"base_url\": \"https:\u002F\u002Fmochat.io\",\n      \"socket_url\": \"https:\u002F\u002Fmochat.io\",\n      \"socket_path\": \"\u002Fsocket.io\",\n      \"claw_token\": \"claw_xxx\",\n      \"agent_user_id\": \"6982abcdef\",\n      \"sessions\": [\"*\"],\n      \"panels\": [\"*\"],\n      \"reply_delay_mode\": \"non-mention\",\n      \"reply_delay_ms\": 120000\n    }\n  }\n}\n```\n\n\n\n\u003C\u002Fdetails>\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Discord\u003C\u002Fb>\u003C\u002Fsummary>\n\n**1. Create a bot**\n- Go to https:\u002F\u002Fdiscord.com\u002Fdevelopers\u002Fapplications\n- Create an application → Bot → Add Bot\n- Copy the bot token\n\n**2. Enable intents**\n- In the Bot settings, enable **MESSAGE CONTENT INTENT**\n- (Optional) Enable **SERVER MEMBERS INTENT** if you plan to use allow lists based on member data\n\n**3. Get your User ID**\n- Discord Settings → Advanced → enable **Developer Mode**\n- Right-click your avatar → **Copy User ID**\n\n**4. 
Configure**\n\n```json\n{\n  \"channels\": {\n    \"discord\": {\n      \"enabled\": true,\n      \"token\": \"YOUR_BOT_TOKEN\",\n      \"allowFrom\": [\"YOUR_USER_ID\"],\n      \"groupPolicy\": \"mention\"\n    }\n  }\n}\n```\n\n> `groupPolicy` controls how the bot responds in group channels:\n> - `\"mention\"` (default) — Only respond when @mentioned\n> - `\"open\"` — Respond to all messages\n> DMs always respond when the sender is in `allowFrom`.\n> - If you set `groupPolicy` to `\"open\"`, create new threads as private threads and then @ the bot into them. Otherwise the thread itself and the channel in which you spawned it will each spawn a bot session.\n\n**5. Invite the bot**\n- OAuth2 → URL Generator\n- Scopes: `bot`\n- Bot Permissions: `Send Messages`, `Read Message History`\n- Open the generated invite URL and add the bot to your server\n\n**6. Run**\n\n```bash\nnanobot gateway\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Matrix (Element)\u003C\u002Fb>\u003C\u002Fsummary>\n\nInstall Matrix dependencies first:\n\n```bash\npip install nanobot-ai[matrix]\n```\n\n**1. Create\u002Fchoose a Matrix account**\n\n- Create or reuse a Matrix account on your homeserver (for example `matrix.org`).\n- Confirm you can log in with Element.\n\n**2. Get credentials**\n\n- You need:\n  - `userId` (example: `@nanobot:matrix.org`)\n  - `accessToken`\n  - `deviceId` (recommended so sync tokens can be restored across restarts)\n- You can obtain these from your homeserver login API (`\u002F_matrix\u002Fclient\u002Fv3\u002Flogin`) or from your client's advanced session settings.\n\n**3. 
Configure**\n\n```json\n{\n  \"channels\": {\n    \"matrix\": {\n      \"enabled\": true,\n      \"homeserver\": \"https:\u002F\u002Fmatrix.org\",\n      \"userId\": \"@nanobot:matrix.org\",\n      \"accessToken\": \"syt_xxx\",\n      \"deviceId\": \"NANOBOT01\",\n      \"e2eeEnabled\": true,\n      \"allowFrom\": [\"@your_user:matrix.org\"],\n      \"groupPolicy\": \"open\",\n      \"groupAllowFrom\": [],\n      \"allowRoomMentions\": false,\n      \"maxMediaBytes\": 20971520\n    }\n  }\n}\n```\n\n> Keep a persistent `matrix-store` and stable `deviceId` — encrypted session state is lost if these change across restarts.\n\n| Option | Description |\n|--------|-------------|\n| `allowFrom` | User IDs allowed to interact. Empty denies all; use `[\"*\"]` to allow everyone. |\n| `groupPolicy` | `open` (default), `mention`, or `allowlist`. |\n| `groupAllowFrom` | Room allowlist (used when policy is `allowlist`). |\n| `allowRoomMentions` | Accept `@room` mentions in mention mode. |\n| `e2eeEnabled` | E2EE support (default `true`). Set `false` for plaintext-only. |\n| `maxMediaBytes` | Max attachment size (default `20MB`). Set `0` to block all media. |\n\n\n\n\n**4. Run**\n\n```bash\nnanobot gateway\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>WhatsApp\u003C\u002Fb>\u003C\u002Fsummary>\n\nRequires **Node.js ≥18**.\n\n**1. Link device**\n\n```bash\nnanobot channels login whatsapp\n# Scan QR with WhatsApp → Settings → Linked Devices\n```\n\n**2. Configure**\n\n```json\n{\n  \"channels\": {\n    \"whatsapp\": {\n      \"enabled\": true,\n      \"allowFrom\": [\"+1234567890\"]\n    }\n  }\n}\n```\n\n**3. 
Run** (two terminals)\n\n```bash\n# Terminal 1\nnanobot channels login whatsapp\n\n# Terminal 2\nnanobot gateway\n```\n\n> WhatsApp bridge updates are not applied automatically for existing installations.\n> After upgrading nanobot, rebuild the local bridge with:\n> `rm -rf ~\u002F.nanobot\u002Fbridge && nanobot channels login whatsapp`\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Feishu\u003C\u002Fb>\u003C\u002Fsummary>\n\nUses **WebSocket** long connection — no public IP required.\n\n**1. Create a Feishu bot**\n- Visit [Feishu Open Platform](https:\u002F\u002Fopen.feishu.cn\u002Fapp)\n- Create a new app → Enable **Bot** capability\n- **Permissions**:\n  - `im:message` (send messages) and `im:message.p2p_msg:readonly` (receive messages)\n  - **Streaming replies** (default in nanobot): add **`cardkit:card:write`** (often labeled **Create and update cards** in the Feishu developer console). Required for CardKit entities and streamed assistant text. Older apps may not have it yet — open **Permission management**, enable the scope, then **publish** a new app version if the console requires it.\n  - If you **cannot** add `cardkit:card:write`, set `\"streaming\": false` under `channels.feishu` (see below). The bot still works; replies use normal interactive cards without token-by-token streaming.\n- **Events**: Add `im.message.receive_v1` (receive messages)\n  - Select **Long Connection** mode (requires running nanobot first to establish connection)\n- Get **App ID** and **App Secret** from \"Credentials & Basic Info\"\n- Publish the app\n\n**2. Configure**\n\n```json\n{\n  \"channels\": {\n    \"feishu\": {\n      \"enabled\": true,\n      \"appId\": \"cli_xxx\",\n      \"appSecret\": \"xxx\",\n      \"encryptKey\": \"\",\n      \"verificationToken\": \"\",\n      \"allowFrom\": [\"ou_YOUR_OPEN_ID\"],\n      \"groupPolicy\": \"mention\",\n      \"streaming\": true\n    }\n  }\n}\n```\n\n> `streaming` defaults to `true`. 
Use `false` if your app does not have **`cardkit:card:write`** (see permissions above).\n> `encryptKey` and `verificationToken` are optional for Long Connection mode.\n> `allowFrom`: Add your open_id (find it in nanobot logs when you message the bot). Use `[\"*\"]` to allow all users.\n> `groupPolicy`: `\"mention\"` (default — respond only when @mentioned), `\"open\"` (respond to all group messages). Private chats always respond.\n\n**3. Run**\n\n```bash\nnanobot gateway\n```\n\n> [!TIP]\n> Feishu uses WebSocket to receive messages — no webhook or public IP needed!\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>QQ (QQ单聊)\u003C\u002Fb>\u003C\u002Fsummary>\n\nUses **botpy SDK** with WebSocket — no public IP required. Currently supports **private messages only**.\n\n**1. Register & create bot**\n- Visit [QQ Open Platform](https:\u002F\u002Fq.qq.com) → Register as a developer (personal or enterprise)\n- Create a new bot application\n- Go to **开发设置 (Developer Settings)** → copy **AppID** and **AppSecret**\n\n**2. Set up sandbox for testing**\n- In the bot management console, find **沙箱配置 (Sandbox Config)**\n- Under **在消息列表配置**, click **添加成员** and add your own QQ number\n- Once added, scan the bot's QR code with mobile QQ → open the bot profile → tap \"发消息\" to start chatting\n\n**3. Configure**\n\n> - `allowFrom`: Add your openid (find it in nanobot logs when you message the bot). Use `[\"*\"]` for public access.\n> - `msgFormat`: Optional. Use `\"plain\"` (default) for maximum compatibility with legacy QQ clients, or `\"markdown\"` for richer formatting on newer clients.\n> - For production: submit a review in the bot console and publish. 
See [QQ Bot Docs](https:\u002F\u002Fbot.q.qq.com\u002Fwiki\u002F) for the full publishing flow.\n\n```json\n{\n  \"channels\": {\n    \"qq\": {\n      \"enabled\": true,\n      \"appId\": \"YOUR_APP_ID\",\n      \"secret\": \"YOUR_APP_SECRET\",\n      \"allowFrom\": [\"YOUR_OPENID\"],\n      \"msgFormat\": \"plain\"\n    }\n  }\n}\n```\n\n**4. Run**\n\n```bash\nnanobot gateway\n```\n\nNow send a message to the bot from QQ — it should respond!\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>DingTalk (钉钉)\u003C\u002Fb>\u003C\u002Fsummary>\n\nUses **Stream Mode** — no public IP required.\n\n**1. Create a DingTalk bot**\n- Visit [DingTalk Open Platform](https:\u002F\u002Fopen-dev.dingtalk.com\u002F)\n- Create a new app -> Add **Robot** capability\n- **Configuration**:\n  - Toggle **Stream Mode** ON\n- **Permissions**: Add necessary permissions for sending messages\n- Get **AppKey** (Client ID) and **AppSecret** (Client Secret) from \"Credentials\"\n- Publish the app\n\n**2. Configure**\n\n```json\n{\n  \"channels\": {\n    \"dingtalk\": {\n      \"enabled\": true,\n      \"clientId\": \"YOUR_APP_KEY\",\n      \"clientSecret\": \"YOUR_APP_SECRET\",\n      \"allowFrom\": [\"YOUR_STAFF_ID\"]\n    }\n  }\n}\n```\n\n> `allowFrom`: Add your staff ID. Use `[\"*\"]` to allow all users.\n\n**3. Run**\n\n```bash\nnanobot gateway\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Slack\u003C\u002Fb>\u003C\u002Fsummary>\n\nUses **Socket Mode** — no public URL required.\n\n**1. Create a Slack app**\n- Go to [Slack API](https:\u002F\u002Fapi.slack.com\u002Fapps) → **Create New App** → \"From scratch\"\n- Pick a name and select your workspace\n\n**2. 
Configure the app**\n- **Socket Mode**: Toggle ON → Generate an **App-Level Token** with `connections:write` scope → copy it (`xapp-...`)\n- **OAuth & Permissions**: Add bot scopes: `chat:write`, `reactions:write`, `app_mentions:read`\n- **Event Subscriptions**: Toggle ON → Subscribe to bot events: `message.im`, `message.channels`, `app_mention` → Save Changes\n- **App Home**: Scroll to **Show Tabs** → Enable **Messages Tab** → Check **\"Allow users to send Slash commands and messages from the messages tab\"**\n- **Install App**: Click **Install to Workspace** → Authorize → copy the **Bot Token** (`xoxb-...`)\n\n**3. Configure nanobot**\n\n```json\n{\n  \"channels\": {\n    \"slack\": {\n      \"enabled\": true,\n      \"botToken\": \"xoxb-...\",\n      \"appToken\": \"xapp-...\",\n      \"allowFrom\": [\"YOUR_SLACK_USER_ID\"],\n      \"groupPolicy\": \"mention\"\n    }\n  }\n}\n```\n\n**4. Run**\n\n```bash\nnanobot gateway\n```\n\nDM the bot directly or @mention it in a channel — it should respond!\n\n> [!TIP]\n> - `groupPolicy`: `\"mention\"` (default — respond only when @mentioned), `\"open\"` (respond to all channel messages), or `\"allowlist\"` (restrict to specific channels).\n> - DM policy defaults to open. Set `\"dm\": {\"enabled\": false}` to disable DMs.\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Email\u003C\u002Fb>\u003C\u002Fsummary>\n\nGive nanobot its own email account. It polls **IMAP** for incoming mail and replies via **SMTP** — like a personal email assistant.\n\n**1. Get credentials (Gmail example)**\n- Create a dedicated Gmail account for your bot (e.g. `my-nanobot@gmail.com`)\n- Enable 2-Step Verification → Create an [App Password](https:\u002F\u002Fmyaccount.google.com\u002Fapppasswords)\n- Use this app password for both IMAP and SMTP\n\n**2. Configure**\n\n> - `consentGranted` must be `true` to allow mailbox access. This is a safety gate — set `false` to fully disable.\n> - `allowFrom`: Add your email address. 
Use `[\"*\"]` to accept emails from anyone.\n> - `smtpUseTls` and `smtpUseSsl` default to `true` \u002F `false` respectively, which is correct for Gmail (port 587 + STARTTLS). No need to set them explicitly.\n> - Set `\"autoReplyEnabled\": false` if you only want to read\u002Fanalyze emails without sending automatic replies.\n\n```json\n{\n  \"channels\": {\n    \"email\": {\n      \"enabled\": true,\n      \"consentGranted\": true,\n      \"imapHost\": \"imap.gmail.com\",\n      \"imapPort\": 993,\n      \"imapUsername\": \"my-nanobot@gmail.com\",\n      \"imapPassword\": \"your-app-password\",\n      \"smtpHost\": \"smtp.gmail.com\",\n      \"smtpPort\": 587,\n      \"smtpUsername\": \"my-nanobot@gmail.com\",\n      \"smtpPassword\": \"your-app-password\",\n      \"fromAddress\": \"my-nanobot@gmail.com\",\n      \"allowFrom\": [\"your-real-email@gmail.com\"]\n    }\n  }\n}\n```\n\n\n**3. Run**\n\n```bash\nnanobot gateway\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>WeChat (微信 \u002F Weixin)\u003C\u002Fb>\u003C\u002Fsummary>\n\nUses **HTTP long-poll** with QR-code login via the ilinkai personal WeChat API. No local WeChat desktop client is required.\n\n**1. Install with WeChat support**\n\n```bash\npip install \"nanobot-ai[weixin]\"\n```\n\n**2. Configure**\n\n```json\n{\n  \"channels\": {\n    \"weixin\": {\n      \"enabled\": true,\n      \"allowFrom\": [\"YOUR_WECHAT_USER_ID\"]\n    }\n  }\n}\n```\n\n> - `allowFrom`: Add the sender ID you see in nanobot logs for your WeChat account. Use `[\"*\"]` to allow all users.\n> - `token`: Optional. If omitted, log in interactively and nanobot will save the token for you.\n> - `routeTag`: Optional. When your upstream Weixin deployment requires request routing, nanobot will send it as the `SKRouteTag` header.\n> - `stateDir`: Optional. Defaults to nanobot's runtime directory for Weixin state.\n> - `pollTimeout`: Optional long-poll timeout in seconds.\n\n**3. 
Login**\n\n```bash\nnanobot channels login weixin\n```\n\nUse `--force` to re-authenticate and ignore any saved token:\n\n```bash\nnanobot channels login weixin --force\n```\n\n**4. Run**\n\n```bash\nnanobot gateway\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Wecom (企业微信)\u003C\u002Fb>\u003C\u002Fsummary>\n\n> Here we use [wecom-aibot-sdk-python](https:\u002F\u002Fgithub.com\u002Fchengyongru\u002Fwecom_aibot_sdk) (a community Python port of the official [@wecom\u002Faibot-node-sdk](https:\u002F\u002Fwww.npmjs.com\u002Fpackage\u002F@wecom\u002Faibot-node-sdk)).\n>\n> Uses a **WebSocket** long connection — no public IP required.\n\n**1. Install the optional dependency**\n\n```bash\npip install \"nanobot-ai[wecom]\"\n```\n\n**2. Create a WeCom AI Bot**\n\nGo to the WeCom admin console → Intelligent Robot → Create Robot → select **API mode** with **long connection**. Copy the Bot ID and Secret.\n\n**3. Configure**\n\n```json\n{\n  \"channels\": {\n    \"wecom\": {\n      \"enabled\": true,\n      \"botId\": \"your_bot_id\",\n      \"secret\": \"your_bot_secret\",\n      \"allowFrom\": [\"your_id\"]\n    }\n  }\n}\n```\n\n**4. Run**\n\n```bash\nnanobot gateway\n```\n\n\u003C\u002Fdetails>\n\n## 🌐 Agent Social Network\n\n🐈 nanobot is capable of linking to the agent social network (agent community). 
**Just send one message and your nanobot joins automatically!**\n\n| Platform | How to Join (send this message to your bot) |\n|----------|-------------|\n| [**Moltbook**](https:\u002F\u002Fwww.moltbook.com\u002F) | `Read https:\u002F\u002Fmoltbook.com\u002Fskill.md and follow the instructions to join Moltbook` |\n| [**ClawdChat**](https:\u002F\u002Fclawdchat.ai\u002F) | `Read https:\u002F\u002Fclawdchat.ai\u002Fskill.md and follow the instructions to join ClawdChat` |\n\nSimply send the command above to your nanobot (via CLI or any chat channel), and it will handle the rest.\n\n## ⚙️ Configuration\n\nConfig file: `~\u002F.nanobot\u002Fconfig.json`\n\n### Providers\n\n> [!TIP]\n> - **Groq** provides free voice transcription via Whisper. If configured, Telegram voice messages will be automatically transcribed.\n> - **MiniMax Coding Plan**: Exclusive discount links for the nanobot community: [Overseas](https:\u002F\u002Fplatform.minimax.io\u002Fsubscribe\u002Fcoding-plan?code=9txpdXw04g&source=link) · [Mainland China](https:\u002F\u002Fplatform.minimaxi.com\u002Fsubscribe\u002Ftoken-plan?code=GILTJpMTqZ&source=link)\n> - **MiniMax (Mainland China)**: If your API key is from MiniMax's mainland China platform (minimaxi.com), set `\"apiBase\": \"https:\u002F\u002Fapi.minimaxi.com\u002Fv1\"` in your minimax provider config.\n> - **VolcEngine \u002F BytePlus Coding Plan**: Use dedicated providers `volcengineCodingPlan` or `byteplusCodingPlan` instead of the pay-per-use `volcengine` \u002F `byteplus` providers.\n> - **Zhipu Coding Plan**: If you're on Zhipu's coding plan, set `\"apiBase\": \"https:\u002F\u002Fopen.bigmodel.cn\u002Fapi\u002Fcoding\u002Fpaas\u002Fv4\"` in your zhipu provider config.\n> - **Alibaba Cloud BaiLian**: If you're using Alibaba Cloud BaiLian's OpenAI-compatible endpoint, set `\"apiBase\": \"https:\u002F\u002Fdashscope.aliyuncs.com\u002Fcompatible-mode\u002Fv1\"` in your dashscope provider config.\n> - **Step Fun (Mainland China)**: If your API key 
is from Step Fun's mainland China platform (stepfun.com), set `\"apiBase\": \"https:\u002F\u002Fapi.stepfun.com\u002Fv1\"` in your stepfun provider config.\n\n| Provider | Purpose | Get API Key |\n|----------|---------|-------------|\n| `custom` | Any OpenAI-compatible endpoint | — |\n| `openrouter` | LLM (recommended, access to all models) | [openrouter.ai](https:\u002F\u002Fopenrouter.ai) |\n| `volcengine` | LLM (VolcEngine, pay-per-use) | [Coding Plan](https:\u002F\u002Fwww.volcengine.com\u002Factivity\u002Fcodingplan?utm_campaign=nanobot&utm_content=nanobot&utm_medium=devrel&utm_source=OWO&utm_term=nanobot) · [volcengine.com](https:\u002F\u002Fwww.volcengine.com) |\n| `byteplus` | LLM (VolcEngine international, pay-per-use) | [Coding Plan](https:\u002F\u002Fwww.byteplus.com\u002Fen\u002Factivity\u002Fcodingplan?utm_campaign=nanobot&utm_content=nanobot&utm_medium=devrel&utm_source=OWO&utm_term=nanobot) · [byteplus.com](https:\u002F\u002Fwww.byteplus.com) |\n| `anthropic` | LLM (Claude direct) | [console.anthropic.com](https:\u002F\u002Fconsole.anthropic.com) |\n| `azure_openai` | LLM (Azure OpenAI) | [portal.azure.com](https:\u002F\u002Fportal.azure.com) |\n| `openai` | LLM (GPT direct) | [platform.openai.com](https:\u002F\u002Fplatform.openai.com) |\n| `deepseek` | LLM (DeepSeek direct) | [platform.deepseek.com](https:\u002F\u002Fplatform.deepseek.com) |\n| `groq` | LLM + **Voice transcription** (Whisper) | [console.groq.com](https:\u002F\u002Fconsole.groq.com) |\n| `minimax` | LLM (MiniMax direct) | [platform.minimaxi.com](https:\u002F\u002Fplatform.minimaxi.com) |\n| `gemini` | LLM (Gemini direct) | [aistudio.google.com](https:\u002F\u002Faistudio.google.com) |\n| `aihubmix` | LLM (API gateway, access to all models) | [aihubmix.com](https:\u002F\u002Faihubmix.com) |\n| `siliconflow` | LLM (SiliconFlow\u002F硅基流动) | [siliconflow.cn](https:\u002F\u002Fsiliconflow.cn) |\n| `dashscope` | LLM (Qwen) | 
[dashscope.console.aliyun.com](https:\u002F\u002Fdashscope.console.aliyun.com) |\n| `moonshot` | LLM (Moonshot\u002FKimi) | [platform.moonshot.cn](https:\u002F\u002Fplatform.moonshot.cn) |\n| `zhipu` | LLM (Zhipu GLM) | [open.bigmodel.cn](https:\u002F\u002Fopen.bigmodel.cn) |\n| `ollama` | LLM (local, Ollama) | — |\n| `mistral` | LLM | [docs.mistral.ai](https:\u002F\u002Fdocs.mistral.ai\u002F) |\n| `stepfun` | LLM (Step Fun\u002F阶跃星辰) | [platform.stepfun.com](https:\u002F\u002Fplatform.stepfun.com) |\n| `ovms` | LLM (local, OpenVINO Model Server) | [docs.openvino.ai](https:\u002F\u002Fdocs.openvino.ai\u002F2026\u002Fmodel-server\u002Fovms_docs_llm_quickstart.html) |\n| `vllm` | LLM (local, any OpenAI-compatible server) | — |\n| `openai_codex` | LLM (Codex, OAuth) | `nanobot provider login openai-codex` |\n| `github_copilot` | LLM (GitHub Copilot, OAuth) | `nanobot provider login github-copilot` |\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>OpenAI Codex (OAuth)\u003C\u002Fb>\u003C\u002Fsummary>\n\nCodex uses OAuth instead of API keys. Requires a ChatGPT Plus or Pro account.\nNo `providers.openaiCodex` block is needed in `config.json`; `nanobot provider login` stores the OAuth session outside config.\n\n**1. Login:**\n```bash\nnanobot provider login openai-codex\n```\n\n**2. Set model** (merge into `~\u002F.nanobot\u002Fconfig.json`):\n```json\n{\n  \"agents\": {\n    \"defaults\": {\n      \"model\": \"openai-codex\u002Fgpt-5.1-codex\"\n    }\n  }\n}\n```\n\n**3. 
Chat:**\n```bash\nnanobot agent -m \"Hello!\"\n\n# Target a specific workspace\u002Fconfig locally\nnanobot agent -c ~\u002F.nanobot-telegram\u002Fconfig.json -m \"Hello!\"\n\n# One-off workspace override on top of that config\nnanobot agent -c ~\u002F.nanobot-telegram\u002Fconfig.json -w \u002Ftmp\u002Fnanobot-telegram-test -m \"Hello!\"\n```\n\n> Docker users: use `docker run -it` for interactive OAuth login.\n\n\u003C\u002Fdetails>\n\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>GitHub Copilot (OAuth)\u003C\u002Fb>\u003C\u002Fsummary>\n\nGitHub Copilot uses OAuth instead of API keys. Requires a [GitHub account with a plan](https:\u002F\u002Fgithub.com\u002Ffeatures\u002Fcopilot\u002Fplans) configured.\nNo `providers.githubCopilot` block is needed in `config.json`; `nanobot provider login` stores the OAuth session outside config.\n\n**1. Login:**\n```bash\nnanobot provider login github-copilot\n```\n\n**2. Set model** (merge into `~\u002F.nanobot\u002Fconfig.json`):\n```json\n{\n  \"agents\": {\n    \"defaults\": {\n      \"model\": \"github-copilot\u002Fgpt-4.1\"\n    }\n  }\n}\n```\n\n**3. Chat:**\n```bash\nnanobot agent -m \"Hello!\"\n\n# Target a specific workspace\u002Fconfig locally\nnanobot agent -c ~\u002F.nanobot-telegram\u002Fconfig.json -m \"Hello!\"\n\n# One-off workspace override on top of that config\nnanobot agent -c ~\u002F.nanobot-telegram\u002Fconfig.json -w \u002Ftmp\u002Fnanobot-telegram-test -m \"Hello!\"\n```\n\n> Docker users: use `docker run -it` for interactive OAuth login.\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Custom Provider (Any OpenAI-compatible API)\u003C\u002Fb>\u003C\u002Fsummary>\n\nConnects directly to any OpenAI-compatible endpoint — LM Studio, llama.cpp, Together AI, Fireworks, Azure OpenAI, or any self-hosted server. 
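Independent of nanobot, a quick way to confirm that a server really speaks the OpenAI chat protocol is to call it directly with stdlib Python before wiring it into the config. This is only a sketch; the base URL, key, and model below are placeholders, not real endpoints:

```python
import json
import urllib.request

def chat_url(api_base: str) -> str:
    # OpenAI-compatible servers expose chat at <apiBase>/chat/completions;
    # strip any trailing slash so we don't build a "//" URL.
    return api_base.rstrip("/") + "/chat/completions"

def one_shot(api_base: str, api_key: str, model: str, prompt: str) -> str:
    # Minimal request body: the model name is passed through unchanged.
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    req = urllib.request.Request(
        chat_url(api_base),
        data=body,
        headers={
            # For keyless local servers, any non-empty string works here.
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# e.g. one_shot("https://api.your-provider.com/v1", "your-api-key", "your-model-name", "hi")
```

If this round-trips, the same `apiBase` and model name should work in the `custom` provider block.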
Model name is passed as-is.\n\n```json\n{\n  \"providers\": {\n    \"custom\": {\n      \"apiKey\": \"your-api-key\",\n      \"apiBase\": \"https:\u002F\u002Fapi.your-provider.com\u002Fv1\"\n    }\n  },\n  \"agents\": {\n    \"defaults\": {\n      \"model\": \"your-model-name\"\n    }\n  }\n}\n```\n\n> For local servers that don't require a key, set `apiKey` to any non-empty string (e.g. `\"no-key\"`).\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Ollama (local)\u003C\u002Fb>\u003C\u002Fsummary>\n\nRun a local model with Ollama, then add to config:\n\n**1. Start Ollama** (example):\n```bash\nollama run llama3.2\n```\n\n**2. Add to config** (partial — merge into `~\u002F.nanobot\u002Fconfig.json`):\n```json\n{\n  \"providers\": {\n    \"ollama\": {\n      \"apiBase\": \"http:\u002F\u002Flocalhost:11434\"\n    }\n  },\n  \"agents\": {\n    \"defaults\": {\n      \"provider\": \"ollama\",\n      \"model\": \"llama3.2\"\n    }\n  }\n}\n```\n\n> `provider: \"auto\"` also works when `providers.ollama.apiBase` is configured, but setting `\"provider\": \"ollama\"` is the clearest option.\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>OpenVINO Model Server (local \u002F OpenAI-compatible)\u003C\u002Fb>\u003C\u002Fsummary>\n\nRun LLMs locally on Intel GPUs using [OpenVINO Model Server](https:\u002F\u002Fdocs.openvino.ai\u002F2026\u002Fmodel-server\u002Fovms_docs_llm_quickstart.html). OVMS exposes an OpenAI-compatible API at `\u002Fv3`.\n\n> Requires Docker and an Intel GPU with driver access (`\u002Fdev\u002Fdri`).\n\n**1. 
Pull the model** (example):\n\n```bash\nmkdir -p ov\u002Fmodels && cd ov\n\ndocker run -d \\\n  --rm \\\n  --user $(id -u):$(id -g) \\\n  -v $(pwd)\u002Fmodels:\u002Fmodels \\\n  openvino\u002Fmodel_server:latest-gpu \\\n  --pull \\\n  --model_name openai\u002Fgpt-oss-20b \\\n  --model_repository_path \u002Fmodels \\\n  --source_model OpenVINO\u002Fgpt-oss-20b-int4-ov \\\n  --task text_generation \\\n  --tool_parser gptoss \\\n  --reasoning_parser gptoss \\\n  --enable_prefix_caching true \\\n  --target_device GPU\n```\n\n> This downloads the model weights. Wait for the container to finish before proceeding.\n\n**2. Start the server** (example):\n\n```bash\ndocker run -d \\\n  --rm \\\n  --name ovms \\\n  --user $(id -u):$(id -g) \\\n  -p 8000:8000 \\\n  -v $(pwd)\u002Fmodels:\u002Fmodels \\\n  --device \u002Fdev\u002Fdri \\\n  --group-add=$(stat -c \"%g\" \u002Fdev\u002Fdri\u002Frender* | head -n 1) \\\n  openvino\u002Fmodel_server:latest-gpu \\\n  --rest_port 8000 \\\n  --model_name openai\u002Fgpt-oss-20b \\\n  --model_repository_path \u002Fmodels \\\n  --source_model OpenVINO\u002Fgpt-oss-20b-int4-ov \\\n  --task text_generation \\\n  --tool_parser gptoss \\\n  --reasoning_parser gptoss \\\n  --enable_prefix_caching true \\\n  --target_device GPU\n```\n\n**3. Add to config** (partial — merge into `~\u002F.nanobot\u002Fconfig.json`):\n\n```json\n{\n  \"providers\": {\n    \"ovms\": {\n      \"apiBase\": \"http:\u002F\u002Flocalhost:8000\u002Fv3\"\n    }\n  },\n  \"agents\": {\n    \"defaults\": {\n      \"provider\": \"ovms\",\n      \"model\": \"openai\u002Fgpt-oss-20b\"\n    }\n  }\n}\n```\n\n> OVMS is a local server — no API key required. 
Supports tool calling (`--tool_parser gptoss`), reasoning (`--reasoning_parser gptoss`), and streaming.\n> See the [official OVMS docs](https:\u002F\u002Fdocs.openvino.ai\u002F2026\u002Fmodel-server\u002Fovms_docs_llm_quickstart.html) for more details.\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>vLLM (local \u002F OpenAI-compatible)\u003C\u002Fb>\u003C\u002Fsummary>\n\nRun your own model with vLLM or any OpenAI-compatible server, then add to config:\n\n**1. Start the server** (example):\n```bash\nvllm serve meta-llama\u002FLlama-3.1-8B-Instruct --port 8000\n```\n\n**2. Add to config** (partial — merge into `~\u002F.nanobot\u002Fconfig.json`):\n\n*Provider (key can be any non-empty string for local):*\n```json\n{\n  \"providers\": {\n    \"vllm\": {\n      \"apiKey\": \"dummy\",\n      \"apiBase\": \"http:\u002F\u002Flocalhost:8000\u002Fv1\"\n    }\n  }\n}\n```\n\n*Model:*\n```json\n{\n  \"agents\": {\n    \"defaults\": {\n      \"model\": \"meta-llama\u002FLlama-3.1-8B-Instruct\"\n    }\n  }\n}\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Adding a New Provider (Developer Guide)\u003C\u002Fb>\u003C\u002Fsummary>\n\nnanobot uses a **Provider Registry** (`nanobot\u002Fproviders\u002Fregistry.py`) as the single source of truth.\nAdding a new provider only takes **2 steps** — no if-elif chains to touch.\n\n**Step 1.** Add a `ProviderSpec` entry to `PROVIDERS` in `nanobot\u002Fproviders\u002Fregistry.py`:\n\n```python\nProviderSpec(\n    name=\"myprovider\",                   # config field name\n    keywords=(\"myprovider\", \"mymodel\"),  # model-name keywords for auto-matching\n    env_key=\"MYPROVIDER_API_KEY\",        # env var name\n    display_name=\"My Provider\",          # shown in `nanobot status`\n    default_api_base=\"https:\u002F\u002Fapi.myprovider.com\u002Fv1\",  # OpenAI-compatible endpoint\n)\n```\n\n**Step 2.** Add a field to `ProvidersConfig` in `nanobot\u002Fconfig\u002Fschema.py`:\n\n```python\nclass 
ProvidersConfig(BaseModel):\n    ...\n    myprovider: ProviderConfig = ProviderConfig()\n```\n\nThat's it! Environment variables, model routing, config matching, and `nanobot status` display will all work automatically.\n\n**Common `ProviderSpec` options:**\n\n| Field | Description | Example |\n|-------|-------------|---------|\n| `default_api_base` | OpenAI-compatible base URL | `\"https:\u002F\u002Fapi.deepseek.com\"` |\n| `env_extras` | Additional env vars to set | `((\"ZHIPUAI_API_KEY\", \"{api_key}\"),)` |\n| `model_overrides` | Per-model parameter overrides | `((\"kimi-k2.5\", {\"temperature\": 1.0}),)` |\n| `is_gateway` | Can route any model (like OpenRouter) | `True` |\n| `detect_by_key_prefix` | Detect gateway by API key prefix | `\"sk-or-\"` |\n| `detect_by_base_keyword` | Detect gateway by API base URL | `\"openrouter\"` |\n| `strip_model_prefix` | Strip provider prefix before sending to gateway | `True` (for AiHubMix) |\n| `supports_max_completion_tokens` | Use `max_completion_tokens` instead of `max_tokens`; required for providers that reject both being set simultaneously (e.g. VolcEngine) | `True` |\n\n\u003C\u002Fdetails>\n\n### Channel Settings\n\nGlobal settings that apply to all channels. Configure under the `channels` section in `~\u002F.nanobot\u002Fconfig.json`:\n\n```json\n{\n  \"channels\": {\n    \"sendProgress\": true,\n    \"sendToolHints\": false,\n    \"sendMaxRetries\": 3,\n    \"telegram\": { ... }\n  }\n}\n```\n\n| Setting | Default | Description |\n|---------|---------|-------------|\n| `sendProgress` | `true` | Stream agent's text progress to the channel |\n| `sendToolHints` | `false` | Stream tool-call hints (e.g. 
`read_file(\"…\")`) |\n| `sendMaxRetries` | `3` | Max delivery attempts per outbound message, including the initial send (0-10 configured, minimum 1 actual attempt) |\n\n#### Retry Behavior\n\nWhen a channel send operation raises an error, nanobot retries with exponential backoff:\n\n- **Attempt 1**: Initial send\n- **Attempts 2-4**: Retry delays are 1s, 2s, 4s\n- **Attempts 5+**: Retry delay caps at 4s\n- **Transient failures** (network hiccups, temporary API limits): Retry usually succeeds\n- **Permanent failures** (invalid token, channel banned): All retries fail\n\n> [!NOTE]\n> When a channel is completely unavailable, there's no way to notify the user since we cannot reach them through that channel. Monitor logs for \"Failed to send to {channel} after N attempts\" to detect persistent delivery failures.\n\n### Web Search\n\n> [!TIP]\n> Use `proxy` in `tools.web` to route all web requests (search + fetch) through a proxy:\n> ```json\n> { \"tools\": { \"web\": { \"proxy\": \"http:\u002F\u002F127.0.0.1:7890\" } } }\n> ```\n\nnanobot supports multiple web search providers. 
Configure in `~\u002F.nanobot\u002Fconfig.json` under `tools.web.search`.\n\n| Provider | Config fields | Env var fallback | Free |\n|----------|--------------|------------------|------|\n| `brave` (default) | `apiKey` | `BRAVE_API_KEY` | No |\n| `tavily` | `apiKey` | `TAVILY_API_KEY` | No |\n| `jina` | `apiKey` | `JINA_API_KEY` | Free tier (10M tokens) |\n| `searxng` | `baseUrl` | `SEARXNG_BASE_URL` | Yes (self-hosted) |\n| `duckduckgo` | — | — | Yes |\n\nWhen credentials are missing, nanobot automatically falls back to DuckDuckGo.\n\n**Brave** (default):\n```json\n{\n  \"tools\": {\n    \"web\": {\n      \"search\": {\n        \"provider\": \"brave\",\n        \"apiKey\": \"BSA...\"\n      }\n    }\n  }\n}\n```\n\n**Tavily:**\n```json\n{\n  \"tools\": {\n    \"web\": {\n      \"search\": {\n        \"provider\": \"tavily\",\n        \"apiKey\": \"tvly-...\"\n      }\n    }\n  }\n}\n```\n\n**Jina** (free tier with 10M tokens):\n```json\n{\n  \"tools\": {\n    \"web\": {\n      \"search\": {\n        \"provider\": \"jina\",\n        \"apiKey\": \"jina_...\"\n      }\n    }\n  }\n}\n```\n\n**SearXNG** (self-hosted, no API key needed):\n```json\n{\n  \"tools\": {\n    \"web\": {\n      \"search\": {\n        \"provider\": \"searxng\",\n        \"baseUrl\": \"https:\u002F\u002Fsearx.example\"\n      }\n    }\n  }\n}\n```\n\n**DuckDuckGo** (zero config):\n```json\n{\n  \"tools\": {\n    \"web\": {\n      \"search\": {\n        \"provider\": \"duckduckgo\"\n      }\n    }\n  }\n}\n```\n\n| Option | Type | Default | Description |\n|--------|------|---------|-------------|\n| `provider` | string | `\"brave\"` | Search backend: `brave`, `tavily`, `jina`, `searxng`, `duckduckgo` |\n| `apiKey` | string | `\"\"` | API key for Brave or Tavily |\n| `baseUrl` | string | `\"\"` | Base URL for SearXNG |\n| `maxResults` | integer | `5` | Results per search (1–10) |\n\n### MCP (Model Context Protocol)\n\n> [!TIP]\n> The config format is compatible with Claude Desktop \u002F Cursor. 
You can copy MCP server configs directly from any MCP server's README.\n\nnanobot supports [MCP](https:\u002F\u002Fmodelcontextprotocol.io\u002F) — connect external tool servers and use them as native agent tools.\n\nAdd MCP servers to your `config.json`:\n\n```json\n{\n  \"tools\": {\n    \"mcpServers\": {\n      \"filesystem\": {\n        \"command\": \"npx\",\n        \"args\": [\"-y\", \"@modelcontextprotocol\u002Fserver-filesystem\", \"\u002Fpath\u002Fto\u002Fdir\"]\n      },\n      \"my-remote-mcp\": {\n        \"url\": \"https:\u002F\u002Fexample.com\u002Fmcp\u002F\",\n        \"headers\": {\n          \"Authorization\": \"Bearer xxxxx\"\n        }\n      }\n    }\n  }\n}\n```\n\nTwo transport modes are supported:\n\n| Mode | Config | Example |\n|------|--------|---------|\n| **Stdio** | `command` + `args` | Local process via `npx` \u002F `uvx` |\n| **HTTP** | `url` + `headers` (optional) | Remote endpoint (`https:\u002F\u002Fmcp.example.com\u002Fsse`) |\n\nUse `toolTimeout` to override the default 30s per-call timeout for slow servers:\n\n```json\n{\n  \"tools\": {\n    \"mcpServers\": {\n      \"my-slow-server\": {\n        \"url\": \"https:\u002F\u002Fexample.com\u002Fmcp\u002F\",\n        \"toolTimeout\": 120\n      }\n    }\n  }\n}\n```\n\nUse `enabledTools` to register only a subset of tools from an MCP server:\n\n```json\n{\n  \"tools\": {\n    \"mcpServers\": {\n      \"filesystem\": {\n        \"command\": \"npx\",\n        \"args\": [\"-y\", \"@modelcontextprotocol\u002Fserver-filesystem\", \"\u002Fpath\u002Fto\u002Fdir\"],\n        \"enabledTools\": [\"read_file\", \"mcp_filesystem_write_file\"]\n      }\n    }\n  }\n}\n```\n\n`enabledTools` accepts either the raw MCP tool name (for example `read_file`) or the wrapped nanobot tool name (for example `mcp_filesystem_write_file`).\n\n- Omit `enabledTools`, or set it to `[\"*\"]`, to register all tools.\n- Set `enabledTools` to `[]` to register no tools from that server.\n- Set `enabledTools` to a 
non-empty list of names to register only that subset.\n\nMCP tools are automatically discovered and registered on startup. The LLM can use them alongside built-in tools — no extra configuration needed.\n\n\n\n\n### Security\n\n> [!TIP]\n> For production deployments, set `\"restrictToWorkspace\": true` in your config to sandbox the agent.\n> In `v0.1.4.post3` and earlier, an empty `allowFrom` allowed all senders. Since `v0.1.4.post4`, empty `allowFrom` denies all access by default. To allow all senders, set `\"allowFrom\": [\"*\"]`.\n\n| Option | Default | Description |\n|--------|---------|-------------|\n| `tools.restrictToWorkspace` | `false` | When `true`, restricts **all** agent tools (shell, file read\u002Fwrite\u002Fedit, list) to the workspace directory. Prevents path traversal and out-of-scope access. |\n| `tools.exec.enable` | `true` | When `false`, the shell `exec` tool is not registered at all. Use this to completely disable shell command execution. |\n| `tools.exec.pathAppend` | `\"\"` | Extra directories to append to `PATH` when running shell commands (e.g. `\u002Fusr\u002Fsbin` for `ufw`). |\n| `channels.*.allowFrom` | `[]` (deny all) | Whitelist of user IDs. Empty denies all; use `[\"*\"]` to allow everyone. |\n\n\n### Timezone\n\nTime is context. Context should be precise.\n\nBy default, nanobot uses `UTC` for runtime time context. If you want the agent to think in your local time, set `agents.defaults.timezone` to a valid [IANA timezone name](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FList_of_tz_database_time_zones):\n\n```json\n{\n  \"agents\": {\n    \"defaults\": {\n      \"timezone\": \"Asia\u002FShanghai\"\n    }\n  }\n}\n```\n\nThis affects runtime time strings shown to the model, such as runtime context and heartbeat prompts. 
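If you want to double-check a timezone string before saving it to the config, Python's standard `zoneinfo` module resolves the same IANA names. This check is independent of nanobot:

```python
from zoneinfo import ZoneInfo, ZoneInfoNotFoundError

def is_valid_timezone(name: str) -> bool:
    # ZoneInfo raises for names missing from the IANA database,
    # so a successful construction means the value is usable.
    try:
        ZoneInfo(name)
        return True
    except (ZoneInfoNotFoundError, ValueError):
        return False

# "Asia/Shanghai" is a valid IANA name; "Asia/Beijing" is not.
```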
It also becomes the default timezone for cron schedules when a cron expression omits `tz`, and for one-shot `at` times when the ISO datetime has no explicit offset.\n\nCommon examples: `UTC`, `America\u002FNew_York`, `America\u002FLos_Angeles`, `Europe\u002FLondon`, `Europe\u002FBerlin`, `Asia\u002FTokyo`, `Asia\u002FShanghai`, `Asia\u002FSingapore`, `Australia\u002FSydney`.\n\n> Need another timezone? Browse the full [IANA Time Zone Database](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FList_of_tz_database_time_zones).\n\n## 🧩 Multiple Instances\n\nRun multiple nanobot instances simultaneously with separate configs and runtime data. Use `--config` as the main entrypoint. Optionally pass `--workspace` during `onboard` when you want to initialize or update the saved workspace for a specific instance.\n\n### Quick Start\n\nIf you want each instance to have its own dedicated workspace from the start, pass both `--config` and `--workspace` during onboarding.\n\n**Initialize instances:**\n\n```bash\n# Create separate instance configs and workspaces\nnanobot onboard --config ~\u002F.nanobot-telegram\u002Fconfig.json --workspace ~\u002F.nanobot-telegram\u002Fworkspace\nnanobot onboard --config ~\u002F.nanobot-discord\u002Fconfig.json --workspace ~\u002F.nanobot-discord\u002Fworkspace\nnanobot onboard --config ~\u002F.nanobot-feishu\u002Fconfig.json --workspace ~\u002F.nanobot-feishu\u002Fworkspace\n```\n\n**Configure each instance:**\n\nEdit `~\u002F.nanobot-telegram\u002Fconfig.json`, `~\u002F.nanobot-discord\u002Fconfig.json`, etc. with different channel settings. 
The workspace you passed during `onboard` is saved into each config as that instance's default workspace.\n\n**Run instances:**\n\n```bash\n# Instance A - Telegram bot\nnanobot gateway --config ~\u002F.nanobot-telegram\u002Fconfig.json\n\n# Instance B - Discord bot  \nnanobot gateway --config ~\u002F.nanobot-discord\u002Fconfig.json\n\n# Instance C - Feishu bot with custom port\nnanobot gateway --config ~\u002F.nanobot-feishu\u002Fconfig.json --port 18792\n```\n\n### Path Resolution\n\nWhen using `--config`, nanobot derives its runtime data directory from the config file location. The workspace still comes from `agents.defaults.workspace` unless you override it with `--workspace`.\n\nTo open a CLI session against one of these instances locally:\n\n```bash\nnanobot agent -c ~\u002F.nanobot-telegram\u002Fconfig.json -m \"Hello from Telegram instance\"\nnanobot agent -c ~\u002F.nanobot-discord\u002Fconfig.json -m \"Hello from Discord instance\"\n\n# Optional one-off workspace override\nnanobot agent -c ~\u002F.nanobot-telegram\u002Fconfig.json -w \u002Ftmp\u002Fnanobot-telegram-test\n```\n\n> `nanobot agent` starts a local CLI agent using the selected workspace\u002Fconfig. It does not attach to or proxy through an already running `nanobot gateway` process.\n\n| Component | Resolved From | Example |\n|-----------|---------------|---------|\n| **Config** | `--config` path | `~\u002F.nanobot-A\u002Fconfig.json` |\n| **Workspace** | `--workspace` or config | `~\u002F.nanobot-A\u002Fworkspace\u002F` |\n| **Cron Jobs** | config directory | `~\u002F.nanobot-A\u002Fcron\u002F` |\n| **Media \u002F runtime state** | config directory | `~\u002F.nanobot-A\u002Fmedia\u002F` |\n\n### How It Works\n\n- `--config` selects which config file to load\n- By default, the workspace comes from `agents.defaults.workspace` in that config\n- If you pass `--workspace`, it overrides the workspace from the config file\n\n### Minimal Setup\n\n1. 
Copy your base config into a new instance directory.\n2. Set a different `agents.defaults.workspace` for that instance.\n3. Start the instance with `--config`.\n\nExample config:\n\n```json\n{\n  \"agents\": {\n    \"defaults\": {\n      \"workspace\": \"~\u002F.nanobot-telegram\u002Fworkspace\",\n      \"model\": \"anthropic\u002Fclaude-sonnet-4-6\"\n    }\n  },\n  \"channels\": {\n    \"telegram\": {\n      \"enabled\": true,\n      \"token\": \"YOUR_TELEGRAM_BOT_TOKEN\"\n    }\n  },\n  \"gateway\": {\n    \"port\": 18790\n  }\n}\n```\n\nStart separate instances:\n\n```bash\nnanobot gateway --config ~\u002F.nanobot-telegram\u002Fconfig.json\nnanobot gateway --config ~\u002F.nanobot-discord\u002Fconfig.json\n```\n\nOverride workspace for one-off runs when needed:\n\n```bash\nnanobot gateway --config ~\u002F.nanobot-telegram\u002Fconfig.json --workspace \u002Ftmp\u002Fnanobot-telegram-test\n```\n\n### Common Use Cases\n\n- Run separate bots for Telegram, Discord, Feishu, and other platforms\n- Keep testing and production instances isolated\n- Use different models or providers for different teams\n- Serve multiple tenants with separate configs and runtime data\n\n### Notes\n\n- Each instance must use a different port if they run at the same time\n- Use a different workspace per instance if you want isolated memory, sessions, and skills\n- `--workspace` overrides the workspace defined in the config file\n- Cron jobs and runtime media\u002Fstate are derived from the config directory\n\n## 💻 CLI Reference\n\n| Command | Description |\n|---------|-------------|\n| `nanobot onboard` | Initialize config & workspace at `~\u002F.nanobot\u002F` |\n| `nanobot onboard --wizard` | Launch the interactive onboarding wizard |\n| `nanobot onboard -c \u003Cconfig> -w \u003Cworkspace>` | Initialize or refresh a specific instance config and workspace |\n| `nanobot agent -m \"...\"` | Chat with the agent |\n| `nanobot agent -w \u003Cworkspace>` | Chat against a specific workspace |\n| 
`nanobot agent -w \u003Cworkspace> -c \u003Cconfig>` | Chat against a specific workspace\u002Fconfig |\n| `nanobot agent` | Interactive chat mode |\n| `nanobot agent --no-markdown` | Show plain-text replies |\n| `nanobot agent --logs` | Show runtime logs during chat |\n| `nanobot serve` | Start the OpenAI-compatible API |\n| `nanobot gateway` | Start the gateway |\n| `nanobot status` | Show status |\n| `nanobot provider login openai-codex` | OAuth login for providers |\n| `nanobot channels login \u003Cchannel>` | Authenticate a channel interactively |\n| `nanobot channels status` | Show channel status |\n\nInteractive mode exits: `exit`, `quit`, `\u002Fexit`, `\u002Fquit`, `:q`, or `Ctrl+D`.\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Heartbeat (Periodic Tasks)\u003C\u002Fb>\u003C\u002Fsummary>\n\nThe gateway wakes up every 30 minutes and checks `HEARTBEAT.md` in your workspace (`~\u002F.nanobot\u002Fworkspace\u002FHEARTBEAT.md`). If the file has tasks, the agent executes them and delivers results to your most recently active chat channel.\n\n**Setup:** edit `~\u002F.nanobot\u002Fworkspace\u002FHEARTBEAT.md` (created automatically by `nanobot onboard`):\n\n```markdown\n## Periodic Tasks\n\n- [ ] Check weather forecast and send a summary\n- [ ] Scan inbox for urgent emails\n```\n\nThe agent can also manage this file itself — ask it to \"add a periodic task\" and it will update `HEARTBEAT.md` for you.\n\n> **Note:** The gateway must be running (`nanobot gateway`) and you must have chatted with the bot at least once so it knows which channel to deliver to.\n\n\u003C\u002Fdetails>\n\n## 🐍 Python SDK\n\nUse nanobot as a library — no CLI, no gateway, just Python:\n\n```python\nfrom nanobot import Nanobot\n\nbot = Nanobot.from_config()\nresult = await bot.run(\"Summarize the README\")\nprint(result.content)\n```\n\nEach call carries a `session_key` for conversation isolation — different keys get independent history:\n\n```python\nawait bot.run(\"hi\", 
session_key=\"user-alice\")\nawait bot.run(\"hi\", session_key=\"task-42\")\n```\n\nAdd lifecycle hooks to observe or customize the agent:\n\n```python\nfrom nanobot.agent import AgentHook, AgentHookContext\n\nclass AuditHook(AgentHook):\n    async def before_execute_tools(self, ctx: AgentHookContext) -> None:\n        for tc in ctx.tool_calls:\n            print(f\"[tool] {tc.name}\")\n\nresult = await bot.run(\"Hello\", hooks=[AuditHook()])\n```\n\nSee [docs\u002FPYTHON_SDK.md](docs\u002FPYTHON_SDK.md) for the full SDK reference.\n\n## 🔌 OpenAI-Compatible API\n\nnanobot can expose a minimal OpenAI-compatible endpoint for local integrations:\n\n```bash\npip install \"nanobot-ai[api]\"\nnanobot serve\n```\n\nBy default, the API binds to `127.0.0.1:8900`. You can change this in `config.json`.\n\n### Behavior\n\n- Session isolation: pass `\"session_id\"` in the request body to isolate conversations; omit for a shared default session (`api:default`)\n- Single-message input: each request must contain exactly one `user` message\n- Fixed model: omit `model`, or pass the same model shown by `\u002Fv1\u002Fmodels`\n- No streaming: `stream=true` is not supported\n\n### Endpoints\n\n- `GET \u002Fhealth`\n- `GET \u002Fv1\u002Fmodels`\n- `POST \u002Fv1\u002Fchat\u002Fcompletions`\n\n### curl\n\n```bash\ncurl http:\u002F\u002F127.0.0.1:8900\u002Fv1\u002Fchat\u002Fcompletions \\\n  -H \"Content-Type: application\u002Fjson\" \\\n  -d '{\n    \"messages\": [{\"role\": \"user\", \"content\": \"hi\"}],\n    \"session_id\": \"my-session\"\n  }'\n```\n\n### Python (`requests`)\n\n```python\nimport requests\n\nresp = requests.post(\n    \"http:\u002F\u002F127.0.0.1:8900\u002Fv1\u002Fchat\u002Fcompletions\",\n    json={\n        \"messages\": [{\"role\": \"user\", \"content\": \"hi\"}],\n        \"session_id\": \"my-session\",  # optional: isolate conversation\n    },\n    timeout=120,\n)\nresp.raise_for_status()\nprint(resp.json()[\"choices\"][0][\"message\"][\"content\"])\n```\n\n### 
Python (`openai`)\n\n```python\nfrom openai import OpenAI\n\nclient = OpenAI(\n    base_url=\"http:\u002F\u002F127.0.0.1:8900\u002Fv1\",\n    api_key=\"dummy\",\n)\n\nresp = client.chat.completions.create(\n    model=\"MiniMax-M2.7\",\n    messages=[{\"role\": \"user\", \"content\": \"hi\"}],\n    extra_body={\"session_id\": \"my-session\"},  # optional: isolate conversation\n)\nprint(resp.choices[0].message.content)\n```\n\n## 🐳 Docker\n\n> [!TIP]\n> The `-v ~\u002F.nanobot:\u002Froot\u002F.nanobot` flag mounts your local config directory into the container, so your config and workspace persist across container restarts.\n\n### Docker Compose\n\n```bash\ndocker compose run --rm nanobot-cli onboard   # first-time setup\nvim ~\u002F.nanobot\u002Fconfig.json                     # add API keys\ndocker compose up -d nanobot-gateway           # start gateway\n```\n\n```bash\ndocker compose run --rm nanobot-cli agent -m \"Hello!\"   # run CLI\ndocker compose logs -f nanobot-gateway                   # view logs\ndocker compose down                                      # stop\n```\n\n### Docker\n\n```bash\n# Build the image\ndocker build -t nanobot .\n\n# Initialize config (first time only)\ndocker run -v ~\u002F.nanobot:\u002Froot\u002F.nanobot --rm nanobot onboard\n\n# Edit config on host to add API keys\nvim ~\u002F.nanobot\u002Fconfig.json\n\n# Run gateway (connects to enabled channels, e.g. Telegram\u002FDiscord\u002FMochat)\ndocker run -v ~\u002F.nanobot:\u002Froot\u002F.nanobot -p 18790:18790 nanobot gateway\n\n# Or run a single command\ndocker run -v ~\u002F.nanobot:\u002Froot\u002F.nanobot --rm nanobot agent -m \"Hello!\"\ndocker run -v ~\u002F.nanobot:\u002Froot\u002F.nanobot --rm nanobot status\n```\n\n## 🐧 Linux Service\n\nRun the gateway as a systemd user service so it starts automatically and restarts on failure.\n\n**1. Find the nanobot binary path:**\n\n```bash\nwhich nanobot   # e.g. \u002Fhome\u002Fuser\u002F.local\u002Fbin\u002Fnanobot\n```\n\n**2. 
Create the service file** at `~\u002F.config\u002Fsystemd\u002Fuser\u002Fnanobot-gateway.service` (replace `ExecStart` path if needed):\n\n```ini\n[Unit]\nDescription=Nanobot Gateway\nAfter=network.target\n\n[Service]\nType=simple\nExecStart=%h\u002F.local\u002Fbin\u002Fnanobot gateway\nRestart=always\nRestartSec=10\nNoNewPrivileges=yes\nProtectSystem=strict\nReadWritePaths=%h\n\n[Install]\nWantedBy=default.target\n```\n\n**3. Enable and start:**\n\n```bash\nsystemctl --user daemon-reload\nsystemctl --user enable --now nanobot-gateway\n```\n\n**Common operations:**\n\n```bash\nsystemctl --user status nanobot-gateway        # check status\nsystemctl --user restart nanobot-gateway       # restart after config changes\njournalctl --user -u nanobot-gateway -f        # follow logs\n```\n\nIf you edit the `.service` file itself, run `systemctl --user daemon-reload` before restarting.\n\n> **Note:** User services only run while you are logged in. To keep the gateway running after logout, enable lingering:\n>\n> ```bash\n> loginctl enable-linger $USER\n> ```\n\n## 📁 Project Structure\n\n```\nnanobot\u002F\n├── agent\u002F          # 🧠 Core agent logic\n│   ├── loop.py     #    Agent loop (LLM ↔ tool execution)\n│   ├── context.py  #    Prompt builder\n│   ├── memory.py   #    Persistent memory\n│   ├── skills.py   #    Skills loader\n│   ├── subagent.py #    Background task execution\n│   └── tools\u002F      #    Built-in tools (incl. spawn)\n├── skills\u002F         # 🎯 Bundled skills (github, weather, tmux...)\n├── channels\u002F       # 📱 Chat channel integrations (supports plugins)\n├── bus\u002F            # 🚌 Message routing\n├── cron\u002F           # ⏰ Scheduled tasks\n├── heartbeat\u002F      # 💓 Proactive wake-up\n├── providers\u002F      # 🤖 LLM providers (OpenRouter, etc.)\n├── session\u002F        # 💬 Conversation sessions\n├── config\u002F         # ⚙️ Configuration\n└── cli\u002F            # 🖥️ Commands\n```\n\n## 🤝 Contribute & Roadmap\n\nPRs welcome! 
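A good way to orient yourself before a first PR is the core loop: `agent/loop.py` in the structure above implements the classic LLM ↔ tool cycle (call the model, execute any requested tools, feed results back, repeat until a final answer). A minimal self-contained sketch of that control flow — note that `fake_llm`, `TOOLS`, and the message shapes here are toy stand-ins for illustration, not nanobot's real interfaces:

```python
# Toy sketch of an LLM <-> tool execution loop (illustrative only).

def fake_llm(messages):
    """Stand-in model: requests one tool call, then produces a final answer."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "add", "args": {"a": 2, "b": 3}}}
    return {"content": f"The result is {messages[-1]['content']}"}

# Tool registry: name -> callable (nanobot's real tools live in agent/tools/).
TOOLS = {"add": lambda a, b: a + b}

def run_agent(user_msg, max_steps=5):
    messages = [{"role": "user", "content": user_msg}]
    for _ in range(max_steps):
        reply = fake_llm(messages)
        call = reply.get("tool_call")
        if call is None:
            return reply["content"]  # no tool requested: final answer, loop ends
        result = TOOLS[call["name"]](**call["args"])
        messages.append({"role": "tool", "content": str(result)})
    raise RuntimeError("step budget exhausted")

print(run_agent("What is 2 + 3?"))  # prints: The result is 5
```

The real loop layers memory, context building, and lifecycle hooks on top, and providers return structured tool calls rather than plain dicts, but the control flow is the same shape.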
The codebase is intentionally small and readable. 🤗\n\n### Branching Strategy\n\n| Branch | Purpose |\n|--------|---------|\n| `main` | Stable releases — bug fixes and minor improvements |\n| `nightly` | Experimental features — new features and breaking changes |\n\n**Unsure which branch to target?** See [CONTRIBUTING.md](.\u002FCONTRIBUTING.md) for details.\n\n**Roadmap** — Pick an item and [open a PR](https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpulls)!\n\n- [ ] **Multi-modal** — See and hear (images, voice, video)\n- [ ] **Long-term memory** — Never forget important context\n- [ ] **Better reasoning** — Multi-step planning and reflection\n- [ ] **More integrations** — Calendar and more\n- [ ] **Self-improvement** — Learn from feedback and mistakes\n\n### Contributors\n\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fgraphs\u002Fcontributors\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FHKUDS_nanobot_readme_3b80c1ee6247.png\" alt=\"Contributors\" \u002F>\n\u003C\u002Fa>\n\n\n## ⭐ Star History\n\n\u003Cdiv align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fstar-history.com\u002F#HKUDS\u002Fnanobot&Date\">\n    \u003Cpicture>\n      \u003Csource media=\"(prefers-color-scheme: dark)\" srcset=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FHKUDS_nanobot_readme_0f5aff70ff64.png&theme=dark\" \u002F>\n      \u003Csource media=\"(prefers-color-scheme: light)\" srcset=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FHKUDS_nanobot_readme_0f5aff70ff64.png\" \u002F>\n      \u003Cimg alt=\"Star History Chart\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FHKUDS_nanobot_readme_0f5aff70ff64.png\" style=\"border-radius: 15px; box-shadow: 0 0 30px rgba(0, 217, 255, 0.3);\" \u002F>\n    \u003C\u002Fpicture>\n  \u003C\u002Fa>\n\u003C\u002Fdiv>\n\n\u003Cp align=\"center\">\n  \u003Cem> Thanks for visiting ✨ nanobot!\u003C\u002Fem>\u003Cbr>\u003Cbr>\n  \u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FHKUDS_nanobot_readme_b348048724fa.png\" alt=\"Views\">\n\u003C\u002Fp>\n\n\n\u003Cp align=\"center\">\n  \u003Csub>nanobot is for educational, research, and technical exchange purposes only\u003C\u002Fsub>\n\u003C\u002Fp>\n","\u003Cdiv align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FHKUDS_nanobot_readme_ef974e5bf5af.png\" alt=\"nanobot\" width=\"500\">\n  \u003Ch1>nanobot：超轻量级个人AI助手\u003C\u002Fh1>\n  \u003Cp>\n    \u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Fnanobot-ai\u002F\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fnanobot-ai\" alt=\"PyPI\">\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fpepy.tech\u002Fproject\u002Fnanobot-ai\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FHKUDS_nanobot_readme_726788d99f74.png\" alt=\"下载量\">\u003C\u002Fa>\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpython-≥3.11-blue\" alt=\"Python\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flicense-MIT-green\" alt=\"许可证\">\n    \u003Ca href=\".\u002FCOMMUNICATION.md\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F飞书-群-E9DBFC?style=flat&logo=feishu&logoColor=white\" alt=\"飞书\">\u003C\u002Fa>\n    \u003Ca href=\".\u002FCOMMUNICATION.md\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F微信-群-C5EAB4?style=flat&logo=wechat&logoColor=white\" alt=\"微信\">\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fdiscord.gg\u002FMnCvHqpUGB\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDiscord-社区-5865F2?style=flat&logo=discord&logoColor=white\" alt=\"Discord\">\u003C\u002Fa>\n  \u003C\u002Fp>\n\u003C\u002Fdiv>\n\n🐈 **nanobot** 是一款受 [OpenClaw](https:\u002F\u002Fgithub.com\u002Fopenclaw\u002Fopenclaw) 启发的 **超轻量级** 个人AI助手。\n\n⚡️ 以比 OpenClaw 少 **99% 的代码行数** 实现核心代理功能。\n\n📏 实时代码行数统计：随时运行 `bash core_agent_lines.sh` 进行核对。\n\n## 📢 
最新消息\n\n> [!IMPORTANT]\n> **安全提示：** 由于 `litellm` 供应链污染，请 **尽快检查您的 Python 环境**，并参考此 [公告](https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fdiscussions\u002F2445) 获取详细信息。自 **v0.1.4.post6** 起，我们已完全移除 `litellm`。\n\n- **2026-03-27** 🚀 发布 **v0.1.4.post6** — 架构解耦、移除 litellm、端到端流式传输、微信渠道及一项安全修复。详情请参阅 [发布说明](https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Freleases\u002Ftag\u002Fv0.1.4.post6)。\n- **2026-03-26** 🏗️ 提取代理运行器并统一生命周期钩子；在边界处合并流式增量。\n- **2026-03-25** 🌏 StepFun 提供商、可配置时区、Gemini 思考签名。\n- **2026-03-24** 🔧 微信兼容性、飞书 CardKit 流式传输、测试套件重构。\n- **2026-03-23** 🔧 针对插件、WhatsApp\u002F微信媒体以及统一渠道登录 CLI 重新设计命令路由。\n- **2026-03-22** ⚡ 端到端流式传输、微信渠道、Anthropic 缓存优化以及 `\u002Fstatus` 命令。\n- **2026-03-21** 🔒 用原生 `openai` + `anthropic` SDK 替代 `litellm`。详情请参阅 [提交记录](https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fcommit\u002F3dfdab7)。\n- **2026-03-20** 🧙 交互式设置向导——选择提供商、模型自动补全，即可开始使用。\n- **2026-03-19** 💬 Telegram 在高负载下更加稳定；飞书现在能正确渲染代码块。\n- **2026-03-18** 📷 Telegram 现可通过 URL 发送媒体文件。Cron 定时任务显示更易读的详细信息。\n- **2026-03-17** ✨ 飞书格式焕然一新，Slack 在完成时会有响应，自定义端点支持额外头信息，图像处理也更为可靠。\n\n\u003Cdetails>\n\u003Csummary>往期新闻\u003C\u002Fsummary>\n\n- **2026-03-16** 🚀 发布了 **v0.1.4.post5** — 一个以优化为重点的版本，提升了可靠性和渠道支持，让日常使用更加稳定。详情请参阅 [发布说明](https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Freleases\u002Ftag\u002Fv0.1.4.post5)。\n- **2026-03-15** 🧩 支持钉钉富媒体、内置技能更智能，模型兼容性更清晰。\n- **2026-03-14** 💬 渠道插件、飞书回复，以及更稳定的MCP、QQ和媒体处理。\n- **2026-03-13** 🌐 多提供商网络搜索、LangSmith集成，以及更广泛的可靠性改进。\n- **2026-03-12** 🚀 支持火山引擎、Telegram回复上下文、`\u002Frestart`命令，以及更稳固的内存管理。\n- **2026-03-11** 🔌 支持企业微信、Ollama，发现功能更简洁，工具行为更安全。\n- **2026-03-10** 🧠 基于Token的内存管理、共享重试机制，以及网关和Telegram行为的优化。\n- **2026-03-09** 💬 Slack线程功能优化，飞书音频兼容性更好。\n- **2026-03-08** 🚀 发布了 **v0.1.4.post4** — 一个注重可靠性的新版本，包含更安全的默认设置、更好的多实例支持、更稳健的MCP，以及对各大渠道和提供商的重大改进。详情请参阅 [发布说明](https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Freleases\u002Ftag\u002Fv0.1.4.post4)。\n- **2026-03-07** 🚀 支持Azure OpenAI提供商、WhatsApp媒体消息、QQ群聊，以及对Telegram和飞书的进一步优化。\n- 
**2026-03-06** 🪄 提供商模块更轻量、媒体处理更智能，内存管理和CLI兼容性更强。\n- **2026-03-05** ⚡️ Telegram草稿流式传输、MCP SSE支持，以及更全面的渠道可靠性修复。\n- **2026-03-04** 🛠️ 依赖清理、更安全的文件读取，以及新一轮的测试和Cron任务修复。\n- **2026-03-03** 🧠 用户消息合并更干净、多模态数据保存更安全，Cron守护机制更强。\n- **2026-03-02** 🛡️ 默认访问控制更安全、Cron重新加载更稳定，Matrix媒体处理更整洁。\n- **2026-03-01** 🌐 支持Web代理、更智能的Cron提醒，以及飞书富文本解析的改进。\n- **2026-02-28** 🚀 发布了 **v0.1.4.post3** — 上下文更清晰、会话历史更健壮，代理功能更智能。详情请参阅 [发布说明](https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Freleases\u002Ftag\u002Fv0.1.4.post3)。\n- **2026-02-27** 🧠 支持实验性思维模式、钉钉媒体消息，以及飞书和QQ渠道的修复。\n- **2026-02-26** 🛡️ 修复会话污染问题、WhatsApp去重、Windows路径保护，以及与Mistral的兼容性改进。\n- **2026-02-25** 🧹 新增Matrix频道、会话上下文更整洁，自动同步工作区模板。\n- **2026-02-24** 🚀 发布了 **v0.1.4.post2** — 一个以可靠性为核心的版本，重新设计了心跳机制、优化了提示缓存，并增强了提供商和渠道的稳定性。详情请参阅 [发布说明](https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Freleases\u002Ftag\u002Fv0.1.4.post2)。\n- **2026-02-23** 🔧 虚拟工具调用心跳机制、提示缓存优化，以及Slack Markdown修复。\n- **2026-02-22** 🛡️ Slack线程隔离、Discord打字状态修复，以及代理可靠性提升。\n- **2026-02-21** 🎉 发布了 **v0.1.4.post1** — 新增提供商、跨渠道媒体支持，以及重大稳定性改进。详情请参阅 [发布说明](https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Freleases\u002Ftag\u002Fv0.1.4.post1)。\n- **2026-02-20** 🐦 飞书现在可以接收用户发送的多模态文件。后台内存管理更可靠。\n- **2026-02-19** ✨ Slack现在可以发送文件，Discord会自动分割长消息，子代理也可以在CLI模式下运行。\n- **2026-02-18** ⚡️ nanobot现在支持火山引擎、MCP自定义认证头，以及Anthropic提示缓存。\n- **2026-02-17** 🎉 发布了 **v0.1.4** — 支持MCP、进度流式传输、新增提供商，并对多个渠道进行了改进。详情请参阅 [发布说明](https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Freleases\u002Ftag\u002Fv0.1.4)。\n- **2026-02-16** 🦞 nanobot现已集成[ClawHub](https:\u002F\u002Fclawhub.ai)技能——可搜索并安装公开的代理技能。\n- **2026-02-15** 🔑 nanobot现支持OpenAI Codex提供商，并加入OAuth登录支持。\n- **2026-02-14** 🔌 nanobot现已支持MCP！详情请参阅[MCP模型上下文协议](#mcp-model-context-protocol)。\n- **2026-02-13** 🎉 发布了 **v0.1.3.post7** — 包含安全加固及多项改进。**请升级至最新版本以解决安全问题**。更多详情请参阅 [发布说明](https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Freleases\u002Ftag\u002Fv0.1.3.post7)。\n- **2026-02-12** 🧠 重新设计了内存系统——代码更少，可靠性更高。欢迎参与关于此话题的 
[讨论](https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fdiscussions\u002F566)！\n- **2026-02-11** ✨ CLI体验增强，并新增MiniMax支持！\n- **2026-02-10** 🎉 发布了 **v0.1.3.post6**，包含多项改进！请查看更新 [说明](https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Freleases\u002Ftag\u002Fv0.1.3.post6)以及我们的 [路线图](https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fdiscussions\u002F431)。\n- **2026-02-09** 💬 新增Slack、Email和QQ支持——nanobot现已支持多个聊天平台！\n- **2026-02-08** 🔧 重构了提供商模块——现在只需简单两步即可添加新的LLM提供商！详情请见 [这里](#providers)。\n- **2026-02-07** 🚀 发布了 **v0.1.3.post5**，新增通义千问支持及多项关键改进！详情请参阅 [这里](https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Freleases\u002Ftag\u002Fv0.1.3.post5)。\n- **2026-02-06** ✨ 新增Moonshot\u002FKimi提供商、Discord集成，并进一步强化了安全防护！\n- **2026-02-05** ✨ 新增飞书频道、DeepSeek提供商，并增强了定时任务支持！\n- **2026-02-04** 🚀 发布了 **v0.1.3.post4**，支持多提供商和Docker！详情请参阅 [这里](https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Freleases\u002Ftag\u002Fv0.1.3.post4)。\n- **2026-02-03** ⚡ 集成了vLLM，用于本地LLM支持，并改进了自然语言任务调度！\n- **2026-02-02** 🎉 nanobot正式上线！欢迎体验猫咪机器人！\n\n\u003C\u002Fdetails>\n\n> 🐈 nanobot仅用于教育、研究和技术交流目的。它与加密货币无关，不涉及任何官方代币或硬币。\n\n\n\n## nanobot的核心特性：\n\n🪶 **超轻量级**：OpenClaw的极简实现——体积缩小99%，速度大幅提升。\n\n🔬 **科研友好**：代码简洁易懂，便于理解、修改和扩展，适合科研使用。\n\n⚡️ **极速响应**：极小的资源占用意味着更快的启动速度、更低的资源消耗和更高效的迭代。\n\n💎 **易于使用**：一键部署，即刻上手。\n\n## 🏗️ 架构\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FHKUDS_nanobot_readme_1ceb50b3cf68.png\" alt=\"nanobot架构\" width=\"800\">\n\u003C\u002Fp>\n\n## 目录\n\n- [新闻](#-news)\n- [核心特性](#key-features-of-nanobot)\n- [架构](#️-architecture)\n- [功能](#-features)\n- [安装](#-install)\n- [快速入门](#-quick-start)\n- [聊天应用](#-chat-apps)\n- [代理社交网络](#-agent-social-network)\n- [配置](#️-configuration)\n- [多实例](#-multiple-instances)\n- [CLI参考](#-cli-reference)\n- [Python SDK](#-python-sdk)\n- [OpenAI兼容API](#-openai-compatible-api)\n- [Docker](#-docker)\n- [Linux服务](#-linux-service)\n- [项目结构](#-project-structure)\n- [贡献与路线图](#-contribute--roadmap)\n- 
[星标历史](#-star-history)\n\n## ✨ 功能\n\n\u003Ctable align=\"center\">\n  \u003Ctr align=\"center\">\n    \u003Cth>\u003Cp align=\"center\">📈 全天候实时市场分析\u003C\u002Fp>\u003C\u002Fth>\n    \u003Cth>\u003Cp align=\"center\">🚀 全栈软件工程师\u003C\u002Fp>\u003C\u002Fth>\n    \u003Cth>\u003Cp align=\"center\">📅 智能日常事务管理器\u003C\u002Fp>\u003C\u002Fth>\n    \u003Cth>\u003Cp align=\"center\">📚 个人知识助手\u003C\u002Fp>\u003C\u002Fth>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd align=\"center\">\u003Cp align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FHKUDS_nanobot_readme_02d06ff8b9e2.gif\" width=\"180\" height=\"400\">\u003C\u002Fp>\u003C\u002Ftd>\n    \u003Ctd align=\"center\">\u003Cp align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FHKUDS_nanobot_readme_36d322a926fa.gif\" width=\"180\" height=\"400\">\u003C\u002Fp>\u003C\u002Ftd>\n    \u003Ctd align=\"center\">\u003Cp align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FHKUDS_nanobot_readme_55871898f25e.gif\" width=\"180\" height=\"400\">\u003C\u002Fp>\u003C\u002Ftd>\n    \u003Ctd align=\"center\">\u003Cp align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FHKUDS_nanobot_readme_a1d9382d70c7.gif\" width=\"180\" height=\"400\">\u003C\u002Fp>\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd align=\"center\">发现 • 洞察 • 趋势\u003C\u002Ftd>\n    \u003Ctd align=\"center\">开发 • 部署 • 扩展\u003C\u002Ftd>\n    \u003Ctd align=\"center\">计划 • 自动化 • 整理\u003C\u002Ftd>\n    \u003Ctd align=\"center\">学习 • 记忆 • 推理\u003C\u002Ftd>\n  \u003C\u002Ftr>\n\u003C\u002Ftable>\n\n## 📦 安装\n\n**从源码安装**（最新功能，推荐用于开发）\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot.git\ncd nanobot\npip install -e .\n```\n\n**使用 [uv](https:\u002F\u002Fgithub.com\u002Fastral-sh\u002Fuv) 安装**（稳定、快速）\n\n```bash\nuv tool install nanobot-ai\n```\n\n**从 PyPI 安装**（稳定）\n\n```bash\npip install nanobot-ai\n```\n\n### 
更新到最新版本\n\n**PyPI \u002F pip**\n\n```bash\npip install -U nanobot-ai\nnanobot --version\n```\n\n**uv**\n\n```bash\nuv tool upgrade nanobot-ai\nnanobot --version\n```\n\n**使用 WhatsApp 吗？** 升级后请重建本地桥接：\n\n```bash\nrm -rf ~\u002F.nanobot\u002Fbridge\nnanobot channels login whatsapp\n```\n\n## 🚀 快速入门\n\n> [!TIP]\n> 请在 `~\u002F.nanobot\u002Fconfig.json` 中设置您的 API 密钥。\n> 获取 API 密钥：[OpenRouter](https:\u002F\u002Fopenrouter.ai\u002Fkeys)（全球）\n>\n> 如需其他 LLM 提供商，请参阅【提供商】部分。\n>\n> 如需设置网络搜索功能，请参阅【网络搜索】部分。\n\n**1. 初始化**\n\n```bash\nnanobot onboard\n```\n\n如果您希望使用交互式设置向导，可以使用 `nanobot onboard --wizard`。\n\n**2. 配置**（`~\u002F.nanobot\u002Fconfig.json`）\n\n在配置文件中需设置以下 **两部分**（其他选项已有默认值）。\n\n*设置您的 API 密钥*（例如 OpenRouter，推荐给全球用户）：\n```json\n{\n  \"providers\": {\n    \"openrouter\": {\n      \"apiKey\": \"sk-or-v1-xxx\"\n    }\n  }\n}\n```\n\n*设置您的模型*（可选择固定某个提供商 — 默认为自动检测）：\n```json\n{\n  \"agents\": {\n    \"defaults\": {\n      \"model\": \"anthropic\u002Fclaude-opus-4-5\",\n      \"provider\": \"openrouter\"\n    }\n  }\n}\n```\n\n**3. 聊天**\n\n```bash\nnanobot agent\n```\n\n就是这样！您只需 2 分钟即可拥有一个可用的 AI 助手。\n\n## 💬 聊天应用\n\n将 nanobot 连接到您最喜欢的聊天平台。想自己搭建吗？请参阅[频道插件指南](.\u002Fdocs\u002FCHANNEL_PLUGIN_GUIDE.md)。\n\n| 频道 | 所需信息 |\n|---------|---------------|\n| **Telegram** | 来自 @BotFather 的机器人令牌 |\n| **Discord** | 机器人令牌 + 消息内容权限 |\n| **WhatsApp** | 扫描二维码（`nanobot channels login whatsapp`） |\n| **WeChat (微信)** | 扫描二维码（`nanobot channels login weixin`） |\n| **飞书** | 应用程序 ID + 应用程序密钥 |\n| **钉钉** | 应用程序 Key + 应用程序 Secret |\n| **Slack** | 机器人令牌 + 应用级别令牌 |\n| **Matrix** | 宿主服务器 URL + 访问令牌 |\n| **电子邮件** | IMAP\u002FSMTP 凭证 |\n| **QQ** | 应用程序 ID + 应用程序密钥 |\n| **企业微信** | 机器人 ID + 机器人密钥 |\n| **Mochat** | Claw 令牌（支持自动配置） |\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Telegram\u003C\u002Fb>（推荐）\u003C\u002Fsummary>\n\n**1. 创建机器人**\n- 打开 Telegram，搜索 `@BotFather`\n- 发送 `\u002Fnewbot`，按照提示操作\n- 复制令牌\n\n**2. 
配置**\n\n```json\n{\n  \"channels\": {\n    \"telegram\": {\n      \"enabled\": true,\n      \"token\": \"YOUR_BOT_TOKEN\",\n      \"allowFrom\": [\"YOUR_USER_ID\"]\n    }\n  }\n}\n```\n\n> 您可以在 Telegram 设置中找到您的 **用户 ID**，显示为 `@yourUserId`。\n> 请复制此值 **不带 `@` 符号**，并粘贴到配置文件中。\n\n\n**3. 运行**\n\n```bash\nnanobot gateway\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Mochat（Claw IM）\u003C\u002Fb>\u003C\u002Fsummary>\n\n默认使用 **Socket.IO WebSocket**，并回退到 HTTP 轮询。\n\n**1. 让 nanobot 为您设置 Mochat**\n\n只需向 nanobot 发送以下消息（将 `xxx@xxx` 替换为您的真实邮箱）：\n\n```\n阅读 https:\u002F\u002Fraw.githubusercontent.com\u002FHKUDS\u002FMoChat\u002Frefs\u002Fheads\u002Fmain\u002Fskills\u002Fnanobot\u002Fskill.md，并在 MoChat 上注册。我的邮箱是 xxx@xxx，请将我绑定为您的所有者，并通过 MoChat 私信联系我。\n```\n\nnanobot 将自动注册、配置 `~\u002F.nanobot\u002Fconfig.json`，并连接到 Mochat。\n\n**2. 重启网关**\n\n```bash\nnanobot gateway\n```\n\n就这样——剩下的就由 nanobot 自动处理了！\n\n\u003Cbr>\n\n\u003Cdetails>\n\u003Csummary>手动配置（高级）\u003C\u002Fsummary>\n\n如果您更倾向于手动配置，请将以下内容添加到 `~\u002F.nanobot\u002Fconfig.json`：\n\n> 请务必保密 `claw_token`。它应仅通过 `X-Claw-Token` 头发送到您的 Mochat API 端点。\n\n```json\n{\n  \"channels\": {\n    \"mochat\": {\n      \"enabled\": true,\n      \"base_url\": \"https:\u002F\u002Fmochat.io\",\n      \"socket_url\": \"https:\u002F\u002Fmochat.io\",\n      \"socket_path\": \"\u002Fsocket.io\",\n      \"claw_token\": \"claw_xxx\",\n      \"agent_user_id\": \"6982abcdef\",\n      \"sessions\": [\"*\"],\n      \"panels\": [\"*\"],\n      \"reply_delay_mode\": \"non-mention\",\n      \"reply_delay_ms\": 120000\n    }\n  }\n}\n```\n\n\n\n\u003C\u002Fdetails>\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Discord\u003C\u002Fb>\u003C\u002Fsummary>\n\n**1. 创建机器人**\n- 访问 https:\u002F\u002Fdiscord.com\u002Fdevelopers\u002Fapplications\n- 创建应用程序 → 机器人 → 添加机器人\n- 复制机器人令牌\n\n**2. 启用权限**\n- 在机器人设置中，启用 **MESSAGE CONTENT INTENT**\n- （可选）如果您计划根据成员数据使用允许列表，也可启用 **SERVER MEMBERS INTENT**\n\n**3. 
获取您的用户 ID**\n- Discord 设置 → 高级 → 启用 **开发者模式**\n- 右键单击您的头像 → **复制用户 ID**\n\n**4. 配置**\n\n```json\n{\n  \"channels\": {\n    \"discord\": {\n      \"enabled\": true,\n      \"token\": \"YOUR_BOT_TOKEN\",\n      \"allowFrom\": [\"YOUR_USER_ID\"],\n      \"groupPolicy\": \"mention\"\n    }\n  }\n}\n```\n\n> `groupPolicy` 控制机器人在群组频道中的响应方式：\n> - `\"mention\"`（默认）— 仅在被提及时回复\n> - `\"open\"` — 回复所有消息\n>\n> 私信始终会在发信人位于 `allowFrom` 列表中时回复。\n> 如果您将群组策略设置为 `\"open\"`，请将新线程创建为私有线程，然后在线程中 @ 机器人。否则，线程本身以及您发起它的频道都会各自启动一个机器人会话。\n\n**5. 邀请机器人**\n- OAuth2 → URL 生成器\n- 范围：`bot`\n- 机器人权限：`发送消息`、`读取消息历史`\n- 打开生成的邀请链接，将机器人添加到您的服务器\n\n**6. 运行**\n\n```bash\nnanobot gateway\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Matrix（Element）\u003C\u002Fb>\u003C\u002Fsummary>\n\n首先安装 Matrix 的依赖项：\n\n```bash\npip install \"nanobot-ai[matrix]\"\n```\n\n**1. 创建或选择一个 Matrix 账户**\n\n- 在您的宿主服务器上创建或重复使用一个 Matrix 账户（例如 `matrix.org`）。\n- 确认您可以使用 Element 登录。\n\n**2. 获取凭证**\n\n- 您需要：\n  - `userId`（示例：`@nanobot:matrix.org`）\n  - `accessToken`\n  - `deviceId`（建议使用，以便在重启后恢复同步令牌）\n- 您可以从您的宿主服务器登录 API（`\u002F_matrix\u002Fclient\u002Fv3\u002Flogin`）或从客户端的高级会话设置中获取这些信息。\n\n**3. 
配置**\n\n```json\n{\n  \"channels\": {\n    \"matrix\": {\n      \"enabled\": true,\n      \"homeserver\": \"https:\u002F\u002Fmatrix.org\",\n      \"userId\": \"@nanobot:matrix.org\",\n      \"accessToken\": \"syt_xxx\",\n      \"deviceId\": \"NANOBOT01\",\n      \"e2eeEnabled\": true,\n      \"allowFrom\": [\"@your_user:matrix.org\"],\n      \"groupPolicy\": \"open\",\n      \"groupAllowFrom\": [],\n      \"allowRoomMentions\": false,\n      \"maxMediaBytes\": 20971520\n    }\n  }\n}\n```\n\n> 请保持持久化的 `matrix-store` 和稳定的 `deviceId`——如果这些在重启时发生变化，加密会话状态就会丢失。\n\n| 选项 | 描述 |\n|--------|-------------|\n| `allowFrom` | 允许互动的用户 ID。为空则拒绝所有人；使用 `[\"*\"]` 允许所有人。 |\n| `groupPolicy` | `open`（默认）、`mention` 或 `allowlist`。 |\n| `groupAllowFrom` | 房间允许列表（当策略为 `allowlist` 时使用）。 |\n| `allowRoomMentions` | 在提及模式下接受 `@room` 提及。 |\n| `e2eeEnabled` | E2EE 支持（默认为 `true`）。设置为 `false` 以仅使用明文。 |\n| `maxMediaBytes` | 最大附件大小（默认为 `20MB`）。设置为 `0` 以阻止所有媒体。 |\n\n\n\n\n**4. 运行**\n\n```bash\nnanobot gateway\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>WhatsApp\u003C\u002Fb>\u003C\u002Fsummary>\n\n需要 **Node.js ≥18**。\n\n**1. 链接设备**\n\n```bash\nnanobot channels login whatsapp\n# 使用 WhatsApp 扫描 QR 码 → 设置 → 已链接设备\n```\n\n**2. 配置**\n\n```json\n{\n  \"channels\": {\n    \"whatsapp\": {\n      \"enabled\": true,\n      \"allowFrom\": [\"+1234567890\"]\n    }\n  }\n}\n```\n\n**3. 运行**（两个终端）\n\n```bash\n# 终端 1\nnanobot channels login whatsapp\n\n# 终端 2\nnanobot gateway\n```\n\n> 对于现有安装，WhatsApp 桥接更新不会自动应用。\n> 升级 nanobot 后，请通过以下命令重建本地桥接：\n> `rm -rf ~\u002F.nanobot\u002Fbridge && nanobot channels login whatsapp`\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>飞书\u003C\u002Fb>\u003C\u002Fsummary>\n\n使用 **WebSocket** 长连接——无需公网 IP。\n\n**1. 
创建飞书机器人**\n- 访问 [飞书开放平台](https:\u002F\u002Fopen.feishu.cn\u002Fapp)\n- 创建新应用 → 启用 **Bot** 能力\n- **权限**：\n  - `im:message`（发送消息）和 `im:message.p2p_msg:readonly`（接收消息）\n  - **流式回复**（nanobot 默认启用）：添加 **`cardkit:card:write`**（在飞书开发者控制台中通常标记为 **创建与更新卡片**）。这是 CardKit 实体和流式助手文本所必需的。较旧的应用可能尚未具备此权限——请打开 **权限管理**，启用该范围，然后如果控制台要求，再 **发布** 新的应用版本。\n  - 如果你 **无法** 添加 `cardkit:card:write`，则在 `channels.feishu` 下将 `\"streaming\": false`。机器人仍然可以工作；回复将使用普通的交互式卡片，而不进行逐 token 的流式传输。\n- **事件**：添加 `im.message.receive_v1`（接收消息）\n  - 选择 **长连接** 模式（需要先运行 nanobot 建立连接）\n- 从“凭证与基本信息”中获取 **App ID** 和 **App Secret**\n- 发布应用\n\n**2. 配置**\n\n```json\n{\n  \"channels\": {\n    \"feishu\": {\n      \"enabled\": true,\n      \"appId\": \"cli_xxx\",\n      \"appSecret\": \"xxx\",\n      \"encryptKey\": \"\",\n      \"verificationToken\": \"\",\n      \"allowFrom\": [\"ou_YOUR_OPEN_ID\"],\n      \"groupPolicy\": \"mention\",\n      \"streaming\": true\n    }\n  }\n}\n```\n\n> `streaming` 默认为 `true`。如果你的应用没有 **`cardkit:card:write`** 权限（见上文权限说明），则使用 `false`。\n> `encryptKey` 和 `verificationToken` 在长连接模式下是可选的。\n> `allowFrom`：添加你的 open_id（可在你向机器人发送消息时的 nanobot 日志中找到）。使用 `[\"*\"]` 可允许所有用户。\n> `groupPolicy`：“mention”（默认——仅在被提及时回复）、“open”（回复所有群聊消息）。私聊始终会回复。\n\n**3. 运行**\n\n```bash\nnanobot gateway\n```\n\n> [!TIP]\n> 飞书使用 WebSocket 接收消息——无需 Webhook 或公网 IP！\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>QQ（QQ单聊）\u003C\u002Fb>\u003C\u002Fsummary>\n\n使用 **botpy SDK** 结合 WebSocket——无需公网 IP。目前仅支持 **私人消息**。\n\n**1. 注册并创建机器人**\n- 访问 [QQ开放平台](https:\u002F\u002Fq.qq.com) → 注册成为开发者（个人或企业）\n- 创建新的机器人应用\n- 进入 **开发设置** → 复制 **AppID** 和 **AppSecret**\n\n**2. 设置沙盒用于测试**\n- 在机器人管理控制台中，找到 **沙箱配置**\n- 在 **在消息列表配置** 下，点击 **添加成员** 并添加你自己的 QQ 号码\n- 添加成功后，用手机 QQ 扫描机器人的二维码→打开机器人资料页→点击“发消息”即可开始聊天。\n\n**3. 
配置**\n\n> - `allowFrom`：添加你的 openid（可在你向机器人发送消息时的 nanobot 日志中找到）。使用 `[\"*\"]` 可公开访问。\n> - `msgFormat`：可选。使用 `\"plain\"`（默认）以获得与旧版 QQ 客户端的最大兼容性，或使用 `\"markdown\"` 以在较新客户端上实现更丰富的格式。\n> - 生产环境：在机器人控制台提交审核并发布。完整发布流程请参阅 [QQ Bot 文档](https:\u002F\u002Fbot.q.qq.com\u002Fwiki\u002F)。\n\n```json\n{\n  \"channels\": {\n    \"qq\": {\n      \"enabled\": true,\n      \"appId\": \"YOUR_APP_ID\",\n      \"secret\": \"YOUR_APP_SECRET\",\n      \"allowFrom\": [\"YOUR_OPENID\"],\n      \"msgFormat\": \"plain\"\n    }\n  }\n}\n```\n\n**4. 运行**\n\n```bash\nnanobot gateway\n```\n\n现在你可以从 QQ 向机器人发送消息——它应该会回复！\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>钉钉\u003C\u002Fb>\u003C\u002Fsummary>\n\n使用 **流模式**——无需公网 IP。\n\n**1. 创建钉钉机器人**\n- 访问 [钉钉开放平台](https:\u002F\u002Fopen-dev.dingtalk.com\u002F)\n- 创建新应用 → 添加 **Robot** 能力\n- **配置**：\n  - 开启 **流模式**\n- **权限**：添加发送消息所需的必要权限\n- 从“凭证”中获取 **AppKey**（Client ID）和 **AppSecret**（Client Secret）\n- 发布应用\n\n**2. 配置**\n\n```json\n{\n  \"channels\": {\n    \"dingtalk\": {\n      \"enabled\": true,\n      \"clientId\": \"YOUR_APP_KEY\",\n      \"clientSecret\": \"YOUR_APP_SECRET\",\n      \"allowFrom\": [\"YOUR_STAFF_ID\"]\n    }\n  }\n}\n```\n\n> `allowFrom`：添加你的员工 ID。使用 `[\"*\"]` 可允许所有用户。\n\n**3. 运行**\n\n```bash\nnanobot gateway\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Slack\u003C\u002Fb>\u003C\u002Fsummary>\n\n使用 **Socket 模式**——无需公共 URL。\n\n**1. 创建 Slack 应用**\n- 访问 [Slack API](https:\u002F\u002Fapi.slack.com\u002Fapps) → **创建新应用** → “从头开始”\n- 选择一个名称并选定你的工作区\n\n**2. 配置应用**\n- **Socket 模式**：开启 → 生成具有 `connections:write` 范围的 **应用级令牌** → 复制该令牌（`xapp-...`）\n- **OAuth 与权限**：添加机器人权限：`chat:write`、`reactions:write`、`app_mentions:read`\n- **事件订阅**：开启 → 订阅机器人事件：`message.im`、`message.channels`、`app_mention` → 保存更改\n- **App Home**：滚动到 **显示选项卡** → 启用 **消息选项卡** → 勾选 **“允许用户从消息选项卡发送 Slash 命令和消息”**\n- **安装应用**：点击 **安装到工作区** → 授权 → 复制 **机器人令牌**（`xoxb-...`）\n\n**3. 
配置 nanobot**\n\n```json\n{\n  \"channels\": {\n    \"slack\": {\n      \"enabled\": true,\n      \"botToken\": \"xoxb-...\",\n      \"appToken\": \"xapp-...\",\n      \"allowFrom\": [\"YOUR_SLACK_USER_ID\"],\n      \"groupPolicy\": \"mention\"\n    }\n  }\n}\n```\n\n**4. 运行**\n\n```bash\nnanobot gateway\n```\n\n直接给机器人发私信，或在频道中@提及它——它应该会回复！\n\n> [!TIP]\n> - `groupPolicy`：“mention”（默认——仅在被提及时回复）、“open”（回复所有频道消息），或“allowlist”（限制于特定频道）。\n> - 私信政策默认为开放。若要禁用私信，可设置 `\"dm\": {\"enabled\": false}`。\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>电子邮件\u003C\u002Fb>\u003C\u002Fsummary>\n\n为 nanobot 准备一个独立的邮箱账号。它通过 **IMAP** 轮询收件箱，并通过 **SMTP** 回复邮件——就像一位私人邮件助理一样。\n\n**1. 获取凭据（以 Gmail 为例）**\n- 为你的机器人创建一个专用的 Gmail 账号（例如 `my-nanobot@gmail.com`）\n- 启用两步验证 → 创建一个 [应用密码](https:\u002F\u002Fmyaccount.google.com\u002Fapppasswords)\n- 使用此应用密码同时进行 IMAP 和 SMTP 操作\n\n**2. 配置**\n\n> - `consentGranted` 必须为 `true` 才能允许访问邮箱。这是一个安全机制——将其设为 `false` 可完全禁用。\n> - `allowFrom`：添加你的邮箱地址。使用 `[\"*\"]` 可接受来自任何人的邮件。\n> - `smtpUseTls` 和 `smtpUseSsl` 分别默认为 `true` 和 `false`，这对 Gmail（端口 587 + STARTTLS）来说是正确的，无需显式设置。\n> - 如果你只想读取\u002F分析邮件而不想自动回复，则可将 `\"autoReplyEnabled\": false`。\n\n```json\n{\n  \"channels\": {\n    \"email\": {\n      \"enabled\": true,\n      \"consentGranted\": true,\n      \"imapHost\": \"imap.gmail.com\",\n      \"imapPort\": 993,\n      \"imapUsername\": \"my-nanobot@gmail.com\",\n      \"imapPassword\": \"your-app-password\",\n      \"smtpHost\": \"smtp.gmail.com\",\n      \"smtpPort\": 587,\n      \"smtpUsername\": \"my-nanobot@gmail.com\",\n      \"smtpPassword\": \"your-app-password\",\n      \"fromAddress\": \"my-nanobot@gmail.com\",\n      \"allowFrom\": [\"your-real-email@gmail.com\"]\n    }\n  }\n}\n```\n\n\n**3. 运行**\n\n```bash\nnanobot gateway\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>微信 (WeChat \u002F Weixin)\u003C\u002Fb>\u003C\u002Fsummary>\n\n使用 ilinkai 个人微信 API，通过二维码登录实现 **HTTP 长轮询**。无需本地安装微信桌面客户端。\n\n**1. 
安装支持微信的版本**\n\n```bash\npip install \"nanobot-ai[weixin]\"\n```\n\n**2. 配置**\n\n```json\n{\n  \"channels\": {\n    \"weixin\": {\n      \"enabled\": true,\n      \"allowFrom\": [\"YOUR_WECHAT_USER_ID\"]\n    }\n  }\n}\n```\n\n> - `allowFrom`: 添加你在 nanobot 日志中看到的微信账号发送者 ID。使用 `[\"*\"]` 可允许所有用户。\n> - `token`: 可选。若省略，则需交互式登录，nanobot 会为你保存 token。\n> - `routeTag`: 可选。当你的上游微信部署需要请求路由时，nanobot 会将其作为 `SKRouteTag` 头发送。\n> - `stateDir`: 可选。默认为 nanobot 的运行目录，用于存储微信状态。\n> - `pollTimeout`: 可选，长轮询超时时间（单位：秒）。\n\n**3. 登录**\n\n```bash\nnanobot channels login weixin\n```\n\n使用 `--force` 强制重新认证，并忽略已保存的 token：\n\n```bash\nnanobot channels login weixin --force\n```\n\n**4. 运行**\n\n```bash\nnanobot gateway\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>企业微信 (Wecom)\u003C\u002Fb>\u003C\u002Fsummary>\n\n> 我们这里使用 [wecom-aibot-sdk-python](https:\u002F\u002Fgithub.com\u002Fchengyongru\u002Fwecom_aibot_sdk)（社区版 Python SDK，对应官方 [@wecom\u002Faibot-node-sdk](https:\u002F\u002Fwww.npmjs.com\u002Fpackage\u002F@wecom\u002Faibot-node-sdk)）。\n>\n> 使用 **WebSocket** 长连接——无需公网 IP。\n\n**1. 安装可选依赖**\n\n```bash\npip install \"nanobot-ai[wecom]\"\n```\n\n**2. 创建企业微信 AI 机器人**\n\n前往企业微信管理后台 → 智能机器人 → 创建机器人 → 选择 **API 模式** 并启用 **长连接**。复制机器人 ID 和密钥。\n\n**3. 配置**\n\n```json\n{\n  \"channels\": {\n    \"wecom\": {\n      \"enabled\": true,\n      \"botId\": \"your_bot_id\",\n      \"secret\": \"your_bot_secret\",\n      \"allowFrom\": [\"your_id\"]\n    }\n  }\n}\n```\n\n**4. 
运行**\n\n```bash\nnanobot gateway\n```\n\n\u003C\u002Fdetails>\n\n\n\n## 🌐 代理社交网络\n\n🐈 nanobot 能够接入代理社交网络（代理社区）。**只需发送一条消息，你的 nanobot 就会自动加入！**\n\n| 平台 | 如何加入（向你的机器人发送此消息） |\n|----------|-------------|\n| [**Moltbook**](https:\u002F\u002Fwww.moltbook.com\u002F) | `阅读 https:\u002F\u002Fmoltbook.com\u002Fskill.md 并按照说明加入 Moltbook` |\n| [**ClawdChat**](https:\u002F\u002Fclawdchat.ai\u002F) | `阅读 https:\u002F\u002Fclawdchat.ai\u002Fskill.md 并按照说明加入 ClawdChat` |\n\n只需通过 CLI 或任何聊天渠道将上述命令发送给你的 nanobot，剩下的步骤它会自动完成。\n\n## ⚙️ 配置\n\n配置文件：`~\u002F.nanobot\u002Fconfig.json`\n\n### Providers\n\n> [!TIP]\n> - **Groq** provides free voice transcription via Whisper. If configured, Telegram voice messages will be automatically transcribed.\n> - **MiniMax Coding Plan**: Exclusive discount links for the nanobot community: [Overseas](https:\u002F\u002Fplatform.minimax.io\u002Fsubscribe\u002Fcoding-plan?code=9txpdXw04g&source=link) · [Mainland China](https:\u002F\u002Fplatform.minimaxi.com\u002Fsubscribe\u002Ftoken-plan?code=GILTJpMTqZ&source=link)\n> - **MiniMax (Mainland China)**: If your API key is from MiniMax's mainland China platform (minimaxi.com), set `\"apiBase\": \"https:\u002F\u002Fapi.minimaxi.com\u002Fv1\"` in your minimax provider config.\n> - **VolcEngine \u002F BytePlus Coding Plan**: Use dedicated providers `volcengineCodingPlan` or `byteplusCodingPlan` instead of the pay-per-use `volcengine` \u002F `byteplus` providers.\n> - **Zhipu Coding Plan**: If you're on Zhipu's coding plan, set `\"apiBase\": \"https:\u002F\u002Fopen.bigmodel.cn\u002Fapi\u002Fcoding\u002Fpaas\u002Fv4\"` in your zhipu provider config.\n> - **Alibaba Cloud BaiLian**: If you're using Alibaba Cloud BaiLian's OpenAI-compatible endpoint, set `\"apiBase\": \"https:\u002F\u002Fdashscope.aliyuncs.com\u002Fcompatible-mode\u002Fv1\"` in your dashscope provider config.\n> - **Step Fun (Mainland China)**: If your API key is from Step Fun's mainland China platform (stepfun.com), set `\"apiBase\": 
\"https:\u002F\u002Fapi.stepfun.com\u002Fv1\"` in your stepfun provider config.\n\n| Provider | Purpose | Get API Key |\n|----------|---------|-------------|\n| `custom` | Any OpenAI-compatible endpoint | — |\n| `openrouter` | LLM (recommended, access to all models) | [openrouter.ai](https:\u002F\u002Fopenrouter.ai) |\n| `volcengine` | LLM (VolcEngine, pay-per-use) | [Coding Plan](https:\u002F\u002Fwww.volcengine.com\u002Factivity\u002Fcodingplan?utm_campaign=nanobot&utm_content=nanobot&utm_medium=devrel&utm_source=OWO&utm_term=nanobot) · [volcengine.com](https:\u002F\u002Fwww.volcengine.com) |\n| `byteplus` | LLM (VolcEngine international, pay-per-use) | [Coding Plan](https:\u002F\u002Fwww.byteplus.com\u002Fen\u002Factivity\u002Fcodingplan?utm_campaign=nanobot&utm_content=nanobot&utm_medium=devrel&utm_source=OWO&utm_term=nanobot) · [byteplus.com](https:\u002F\u002Fwww.byteplus.com) |\n| `anthropic` | LLM (Claude direct) | [console.anthropic.com](https:\u002F\u002Fconsole.anthropic.com) |\n| `azure_openai` | LLM (Azure OpenAI) | [portal.azure.com](https:\u002F\u002Fportal.azure.com) |\n| `openai` | LLM (GPT direct) | [platform.openai.com](https:\u002F\u002Fplatform.openai.com) |\n| `deepseek` | LLM (DeepSeek direct) | [platform.deepseek.com](https:\u002F\u002Fplatform.deepseek.com) |\n| `groq` | LLM + **Voice transcription** (Whisper) | [console.groq.com](https:\u002F\u002Fconsole.groq.com) |\n| `minimax` | LLM (MiniMax direct) | [platform.minimaxi.com](https:\u002F\u002Fplatform.minimaxi.com) |\n| `gemini` | LLM (Gemini direct) | [aistudio.google.com](https:\u002F\u002Faistudio.google.com) |\n| `aihubmix` | LLM (API gateway, access to all models) | [aihubmix.com](https:\u002F\u002Faihubmix.com) |\n| `siliconflow` | LLM (SiliconFlow\u002F硅基流动) | [siliconflow.cn](https:\u002F\u002Fsiliconflow.cn) |\n| `dashscope` | LLM (Qwen) | [dashscope.console.aliyun.com](https:\u002F\u002Fdashscope.console.aliyun.com) |\n| `moonshot` | LLM (Moonshot\u002FKimi) | 
[platform.moonshot.cn](https:\u002F\u002Fplatform.moonshot.cn) |\n| `zhipu` | LLM (Zhipu GLM) | [open.bigmodel.cn](https:\u002F\u002Fopen.bigmodel.cn) |\n| `ollama` | LLM (local, Ollama) | — |\n| `mistral` | LLM | [docs.mistral.ai](https:\u002F\u002Fdocs.mistral.ai\u002F) |\n| `stepfun` | LLM (Step Fun\u002F阶跃星辰) | [platform.stepfun.com](https:\u002F\u002Fplatform.stepfun.com) |\n| `ovms` | LLM (local, OpenVINO Model Server) | [docs.openvino.ai](https:\u002F\u002Fdocs.openvino.ai\u002F2026\u002Fmodel-server\u002Fovms_docs_llm_quickstart.html) |\n| `vllm` | LLM (local, any OpenAI-compatible server) | — |\n| `openai_codex` | LLM (Codex, OAuth) | `nanobot provider login openai-codex` |\n| `github_copilot` | LLM (GitHub Copilot, OAuth) | `nanobot provider login github-copilot` |\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>OpenAI Codex (OAuth)\u003C\u002Fb>\u003C\u002Fsummary>\n\nCodex uses OAuth instead of API keys. Requires a ChatGPT Plus or Pro account.\nNo `providers.openaiCodex` block is needed in `config.json`; `nanobot provider login` stores the OAuth session outside config.\n\n**1. Login:**\n```bash\nnanobot provider login openai-codex\n```\n\n**2. Set model** (merge into `~\u002F.nanobot\u002Fconfig.json`):\n```json\n{\n  \"agents\": {\n    \"defaults\": {\n      \"model\": \"openai-codex\u002Fgpt-5.1-codex\"\n    }\n  }\n}\n```\n\n**3. Chat:**\n```bash\nnanobot agent -m \"Hello!\"\n\n# Target a specific workspace\u002Fconfig locally\nnanobot agent -c ~\u002F.nanobot-telegram\u002Fconfig.json -m \"Hello!\"\n\n# One-off workspace override on top of that config\nnanobot agent -c ~\u002F.nanobot-telegram\u002Fconfig.json -w \u002Ftmp\u002Fnanobot-telegram-test -m \"Hello!\"\n```\n\n> Docker users: use `docker run -it` for interactive OAuth login.\n\n\u003C\u002Fdetails>\n\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>GitHub Copilot (OAuth)\u003C\u002Fb>\u003C\u002Fsummary>\n\nGitHub Copilot uses OAuth instead of API keys. 
Requires a [GitHub account with a plan](https:\u002F\u002Fgithub.com\u002Ffeatures\u002Fcopilot\u002Fplans) configured.\nNo `providers.githubCopilot` block is needed in `config.json`; `nanobot provider login` stores the OAuth session outside config.\n\n**1. Login:**\n```bash\nnanobot provider login github-copilot\n```\n\n**2. Set model** (merge into `~\u002F.nanobot\u002Fconfig.json`):\n```json\n{\n  \"agents\": {\n    \"defaults\": {\n      \"model\": \"github-copilot\u002Fgpt-4.1\"\n    }\n  }\n}\n```\n\n**3. Chat:**\n```bash\nnanobot agent -m \"Hello!\"\n\n# Target a specific workspace\u002Fconfig locally\nnanobot agent -c ~\u002F.nanobot-telegram\u002Fconfig.json -m \"Hello!\"\n\n# One-off workspace override on top of that config\nnanobot agent -c ~\u002F.nanobot-telegram\u002Fconfig.json -w \u002Ftmp\u002Fnanobot-telegram-test -m \"Hello!\"\n```\n\n> Docker users: use `docker run -it` for interactive OAuth login.\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Custom Provider (Any OpenAI-compatible API)\u003C\u002Fb>\u003C\u002Fsummary>\n\nConnects directly to any OpenAI-compatible endpoint — LM Studio, llama.cpp, Together AI, Fireworks, Azure OpenAI, or any self-hosted server. Model name is passed as-is.\n\n```json\n{\n  \"providers\": {\n    \"custom\": {\n      \"apiKey\": \"your-api-key\",\n      \"apiBase\": \"https:\u002F\u002Fapi.your-provider.com\u002Fv1\"\n    }\n  },\n  \"agents\": {\n    \"defaults\": {\n      \"model\": \"your-model-name\"\n    }\n  }\n}\n```\n\n> For local servers that don't require a key, set `apiKey` to any non-empty string (e.g. `\"no-key\"`).\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Ollama (local)\u003C\u002Fb>\u003C\u002Fsummary>\n\nRun a local model with Ollama, then add to config:\n\n**1. Start Ollama** (example):\n```bash\nollama run llama3.2\n```\n\n**2. 
Add to config** (partial — merge into `~\u002F.nanobot\u002Fconfig.json`):\n```json\n{\n  \"providers\": {\n    \"ollama\": {\n      \"apiBase\": \"http:\u002F\u002Flocalhost:11434\"\n    }\n  },\n  \"agents\": {\n    \"defaults\": {\n      \"provider\": \"ollama\",\n      \"model\": \"llama3.2\"\n    }\n  }\n}\n```\n\n> `provider: \"auto\"` also works when `providers.ollama.apiBase` is configured, but setting `\"provider\": \"ollama\"` is the clearest option.\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>OpenVINO Model Server (local \u002F OpenAI-compatible)\u003C\u002Fb>\u003C\u002Fsummary>\n\nRun LLMs locally on Intel GPUs using [OpenVINO Model Server](https:\u002F\u002Fdocs.openvino.ai\u002F2026\u002Fmodel-server\u002Fovms_docs_llm_quickstart.html). OVMS exposes an OpenAI-compatible API at `\u002Fv3`.\n\n> Requires Docker and an Intel GPU with driver access (`\u002Fdev\u002Fdri`).\n\n**1. Pull the model** (example):\n\n```bash\nmkdir -p ov\u002Fmodels && cd ov\n\ndocker run -d \\\n  --rm \\\n  --user $(id -u):$(id -g) \\\n  -v $(pwd)\u002Fmodels:\u002Fmodels \\\n  openvino\u002Fmodel_server:latest-gpu \\\n  --pull \\\n  --model_name openai\u002Fgpt-oss-20b \\\n  --model_repository_path \u002Fmodels \\\n  --source_model OpenVINO\u002Fgpt-oss-20b-int4-ov \\\n  --task text_generation \\\n  --tool_parser gptoss \\\n  --reasoning_parser gptoss \\\n  --enable_prefix_caching true \\\n  --target_device GPU\n```\n\n> This downloads the model weights. Wait for the container to finish before proceeding.\n\n**2. 
Start the server** (example):\n\n```bash\ndocker run -d \\\n  --rm \\\n  --name ovms \\\n  --user $(id -u):$(id -g) \\\n  -p 8000:8000 \\\n  -v $(pwd)\u002Fmodels:\u002Fmodels \\\n  --device \u002Fdev\u002Fdri \\\n  --group-add=$(stat -c \"%g\" \u002Fdev\u002Fdri\u002Frender* | head -n 1) \\\n  openvino\u002Fmodel_server:latest-gpu \\\n  --rest_port 8000 \\\n  --model_name openai\u002Fgpt-oss-20b \\\n  --model_repository_path \u002Fmodels \\\n  --source_model OpenVINO\u002Fgpt-oss-20b-int4-ov \\\n  --task text_generation \\\n  --tool_parser gptoss \\\n  --reasoning_parser gptoss \\\n  --enable_prefix_caching true \\\n  --target_device GPU\n```\n\n**3. Add to config** (partial — merge into `~\u002F.nanobot\u002Fconfig.json`):\n\n```json\n{\n  \"providers\": {\n    \"ovms\": {\n      \"apiBase\": \"http:\u002F\u002Flocalhost:8000\u002Fv3\"\n    }\n  },\n  \"agents\": {\n    \"defaults\": {\n      \"provider\": \"ovms\",\n      \"model\": \"openai\u002Fgpt-oss-20b\"\n    }\n  }\n}\n```\n\n> OVMS is a local server — no API key required. Supports tool calling (`--tool_parser gptoss`), reasoning (`--reasoning_parser gptoss`), and streaming.\n> See the [official OVMS docs](https:\u002F\u002Fdocs.openvino.ai\u002F2026\u002Fmodel-server\u002Fovms_docs_llm_quickstart.html) for more details.\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>vLLM (local \u002F OpenAI-compatible)\u003C\u002Fb>\u003C\u002Fsummary>\n\nRun your own model with vLLM or any OpenAI-compatible server, then add to config:\n\n**1. Start the server** (example):\n```bash\nvllm serve meta-llama\u002FLlama-3.1-8B-Instruct --port 8000\n```\n\n**2. 
Add to config** (partial — merge into `~\u002F.nanobot\u002Fconfig.json`):\n\n*Provider (key can be any non-empty string for local):*\n```json\n{\n  \"providers\": {\n    \"vllm\": {\n      \"apiKey\": \"dummy\",\n      \"apiBase\": \"http:\u002F\u002Flocalhost:8000\u002Fv1\"\n    }\n  }\n}\n```\n\n*Model:*\n```json\n{\n  \"agents\": {\n    \"defaults\": {\n      \"model\": \"meta-llama\u002FLlama-3.1-8B-Instruct\"\n    }\n  }\n}\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Adding a New Provider (Developer Guide)\u003C\u002Fb>\u003C\u002Fsummary>\n\nnanobot uses a **Provider Registry** (`nanobot\u002Fproviders\u002Fregistry.py`) as the single source of truth.\nAdding a new provider only takes **2 steps** — no if-elif chains to touch.\n\n**Step 1.** Add a `ProviderSpec` entry to `PROVIDERS` in `nanobot\u002Fproviders\u002Fregistry.py`:\n\n```python\nProviderSpec(\n    name=\"myprovider\",                   # config field name\n    keywords=(\"myprovider\", \"mymodel\"),  # model-name keywords for auto-matching\n    env_key=\"MYPROVIDER_API_KEY\",        # env var name\n    display_name=\"My Provider\",          # shown in `nanobot status`\n    default_api_base=\"https:\u002F\u002Fapi.myprovider.com\u002Fv1\",  # OpenAI-compatible endpoint\n)\n```\n\n**Step 2.** Add a field to `ProvidersConfig` in `nanobot\u002Fconfig\u002Fschema.py`:\n\n```python\nclass ProvidersConfig(BaseModel):\n    ...\n    myprovider: ProviderConfig = ProviderConfig()\n```\n\nThat's it! 
Environment variables, model routing, config matching, and `nanobot status` display will all work automatically.\n\n**Common `ProviderSpec` options:**\n\n| Field | Description | Example |\n|-------|-------------|---------|\n| `default_api_base` | OpenAI-compatible base URL | `\"https:\u002F\u002Fapi.deepseek.com\"` |\n| `env_extras` | Additional env vars to set | `((\"ZHIPUAI_API_KEY\", \"{api_key}\"),)` |\n| `model_overrides` | Per-model parameter overrides | `((\"kimi-k2.5\", {\"temperature\": 1.0}),)` |\n| `is_gateway` | Can route any model (like OpenRouter) | `True` |\n| `detect_by_key_prefix` | Detect gateway by API key prefix | `\"sk-or-\"` |\n| `detect_by_base_keyword` | Detect gateway by API base URL | `\"openrouter\"` |\n| `strip_model_prefix` | Strip provider prefix before sending to gateway | `True` (for AiHubMix) |\n| `supports_max_completion_tokens` | Use `max_completion_tokens` instead of `max_tokens`; required for providers that reject both being set simultaneously (e.g. VolcEngine) | `True` |\n\n\u003C\u002Fdetails>\n\n### 频道设置\n\n适用于所有频道的全局设置。可在 `~\u002F.nanobot\u002Fconfig.json` 文件中的 `channels` 部分进行配置：\n\n```json\n{\n  \"channels\": {\n    \"sendProgress\": true,\n    \"sendToolHints\": false,\n    \"sendMaxRetries\": 3,\n    \"telegram\": { ... 
}\n  }\n}\n```\n\n| 设置 | 默认值 | 描述 |\n|---------|---------|-------------|\n| `sendProgress` | `true` | 将代理的文本处理进度流式传输到频道 |\n| `sendToolHints` | `false` | 流式传输工具调用提示（例如 `read_file(\"…\")`）|\n| `sendMaxRetries` | `3` | 每条出站消息的最大投递尝试次数，包括首次发送（可配置范围为 0–10，实际最少尝试 1 次）|\n\n#### 重试行为\n\n当频道发送操作引发错误时，nanobot 会以指数退避方式重试：\n\n- **第 1 次**：初始发送\n- **第 2–4 次**：重试延迟分别为 1 秒、2 秒、4 秒\n- **第 5 次及以上**：重试延迟上限为 4 秒\n- **临时性失败**（网络波动、临时 API 限制）：通常重试会成功\n- **永久性失败**（无效令牌、频道被封禁）：所有重试均会失败\n\n> [!NOTE]\n> 当某个频道完全不可用时，由于无法通过该频道联系用户，因此无法向用户发出通知。请监控日志中“尝试 N 次后仍无法发送至 {channel}”的信息，以检测持续的投递失败。\n\n### 网络搜索\n\n> [!TIP]\n> 使用 `tools.web` 中的 `proxy` 可将所有网络请求（搜索 + 获取）通过代理路由：\n> ```json\n> { \"tools\": { \"web\": { \"proxy\": \"http:\u002F\u002F127.0.0.1:7890\" } } }\n> ```\n\nnanobot 支持多种网络搜索引擎提供商。可在 `~\u002F.nanobot\u002Fconfig.json` 文件的 `tools.web.search` 部分进行配置。\n\n| 提供商 | 配置字段 | 环境变量回退 | 免费 |\n|----------|--------------|------------------|------|\n| `brave`（默认） | `apiKey` | `BRAVE_API_KEY` | 否 |\n| `tavily` | `apiKey` | `TAVILY_API_KEY` | 否 |\n| `jina` | `apiKey` | `JINA_API_KEY` | 免费层级（10M 个 token）|\n| `searxng` | `baseUrl` | `SEARXNG_BASE_URL` | 是（自托管）|\n| `duckduckgo` | — | — | 是 |\n\n当凭据缺失时，nanobot 会自动回退到 DuckDuckGo。\n\n**Brave**（默认）：\n```json\n{\n  \"tools\": {\n    \"web\": {\n      \"search\": {\n        \"provider\": \"brave\",\n        \"apiKey\": \"BSA...\"\n      }\n    }\n  }\n}\n```\n\n**Tavily**：\n```json\n{\n  \"tools\": {\n    \"web\": {\n      \"search\": {\n        \"provider\": \"tavily\",\n        \"apiKey\": \"tvly-...\"\n      }\n    }\n  }\n}\n```\n\n**Jina**（免费层级，10M 个 token）：\n```json\n{\n  \"tools\": {\n    \"web\": {\n      \"search\": {\n        \"provider\": \"jina\",\n        \"apiKey\": \"jina_...\"\n      }\n    }\n  }\n}\n```\n\n**SearXNG**（自托管，无需 API 密钥）：\n```json\n{\n  \"tools\": {\n    \"web\": {\n      \"search\": {\n        \"provider\": \"searxng\",\n        \"baseUrl\": \"https:\u002F\u002Fsearx.example\"\n      }\n    }\n  
}\n}\n```\n\n**DuckDuckGo**（零配置）：\n```json\n{\n  \"tools\": {\n    \"web\": {\n      \"search\": {\n        \"provider\": \"duckduckgo\"\n      }\n    }\n  }\n}\n```\n\n| 选项 | 类型 | 默认值 | 描述 |\n|--------|------|---------|-------------|\n| `provider` | 字符串 | `\"brave\"` | 搜索后端：`brave`、`tavily`、`jina`、`searxng`、`duckduckgo` |\n| `apiKey` | 字符串 | `\"\"` | Brave、Tavily 或 Jina 的 API 密钥 |\n| `baseUrl` | 字符串 | `\"\"` | SearXNG 的基础 URL |\n| `maxResults` | 整数 | `5` | 每次搜索返回的结果数量（1–10）|\n\n### MCP（模型上下文协议）\n\n> [!TIP]\n> 该配置格式与 Claude Desktop \u002F Cursor 兼容。您可以直接从任何 MCP 服务器的 README 中复制 MCP 服务器配置。\n\nnanobot 支持 [MCP](https:\u002F\u002Fmodelcontextprotocol.io\u002F) — 连接外部工具服务器，并将其用作原生代理工具。\n\n将 MCP 服务器添加到您的 `config.json` 文件中：\n\n```json\n{\n  \"tools\": {\n    \"mcpServers\": {\n      \"filesystem\": {\n        \"command\": \"npx\",\n        \"args\": [\"-y\", \"@modelcontextprotocol\u002Fserver-filesystem\", \"\u002Fpath\u002Fto\u002Fdir\"]\n      },\n      \"my-remote-mcp\": {\n        \"url\": \"https:\u002F\u002Fexample.com\u002Fmcp\u002F\",\n        \"headers\": {\n          \"Authorization\": \"Bearer xxxxx\"\n        }\n      }\n    }\n  }\n}\n```\n\n支持两种传输模式：\n\n| 模式 | 配置 | 示例 |\n|------|--------|---------|\n| **Stdio** | `command` + `args` | 通过 `npx` \u002F `uvx` 运行本地进程 |\n| **HTTP** | `url` + `headers`（可选） | 远程端点（`https:\u002F\u002Fmcp.example.com\u002Fsse`）|\n\n使用 `toolTimeout` 可覆盖默认的每调用 30 秒超时时间，以适应慢速服务器：\n\n```json\n{\n  \"tools\": {\n    \"mcpServers\": {\n      \"my-slow-server\": {\n        \"url\": \"https:\u002F\u002Fexample.com\u002Fmcp\u002F\",\n        \"toolTimeout\": 120\n      }\n    }\n  }\n}\n```\n\n使用 `enabledTools` 可仅注册 MCP 服务器中的部分工具：\n\n```json\n{\n  \"tools\": {\n    \"mcpServers\": {\n      \"filesystem\": {\n        \"command\": \"npx\",\n        \"args\": [\"-y\", \"@modelcontextprotocol\u002Fserver-filesystem\", \"\u002Fpath\u002Fto\u002Fdir\"],\n        \"enabledTools\": [\"read_file\", \"mcp_filesystem_write_file\"]\n      }\n    }\n  
}\n}\n```\n\n`enabledTools` 接受原始 MCP 工具名称（例如 `read_file`）或封装后的 nanobot 工具名称（例如 `mcp_filesystem_write_file`）。\n\n- 如果省略 `enabledTools`，或将其设置为 `[\"*\"]`，则会注册所有工具。\n- 将 `enabledTools` 设置为 `[]`，则不会注册该服务器的任何工具。\n- 将 `enabledTools` 设置为非空名称列表，则仅注册该子集。\n\nMCP 工具会在启动时自动发现并注册。LLM 可以将其与内置工具一起使用，无需额外配置。\n\n\n\n\n### 安全性\n\n> [!TIP]\n> 对于生产部署，请在配置中设置 `\"restrictToWorkspace\": true`，以将代理沙盒化。\n> 在 `v0.1.4.post3` 及更早版本中，空的 `allowFrom` 允许所有发送者。自 `v0.1.4.post4` 起，空的 `allowFrom` 默认拒绝所有访问。要允许所有发送者，请设置 `\"allowFrom\": [\"*\"]`。\n\n| 选项 | 默认值 | 描述 |\n|--------|---------|-------------|\n| `tools.restrictToWorkspace` | `false` | 当设置为 `true` 时，会将代理的**所有**工具（shell、文件读写编辑、列表等）限制在工作目录内。防止路径遍历和越界访问。 |\n| `tools.exec.enable` | `true` | 当设置为 `false` 时，shell `exec` 工具将不会被注册。可用于完全禁用 shell 命令执行。 |\n| `tools.exec.pathAppend` | `\"\"` | 运行 shell 命令时附加到 `PATH` 的额外目录（例如 `\u002Fusr\u002Fsbin` 用于 `ufw`）。 |\n| `channels.*.allowFrom` | `[]`（拒绝所有） | 白名单用户 ID 列表。空列表表示拒绝所有；使用 `[\"*\"]` 可允许所有人。 |\n\n### 时区\n\n时间是上下文的一部分。上下文应当精确。\n\n默认情况下，nanobot 使用 `UTC` 作为运行时的时间上下文。如果你希望代理以你的本地时间思考，请将 `agents.defaults.timezone` 设置为有效的 [IANA 时区名称](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FList_of_tz_database_time_zones)：\n\n```json\n{\n  \"agents\": {\n    \"defaults\": {\n      \"timezone\": \"Asia\u002FShanghai\"\n    }\n  }\n}\n```\n\n这会影响显示给模型的运行时时间字符串，例如运行时上下文和心跳提示。它也会成为 cron 定时任务的默认时区（当 cron 表达式省略 `tz` 时），以及一次性 `at` 时间的默认时区（当 ISO 日期时间没有明确的偏移量时）。\n\n常见示例：`UTC`、`America\u002FNew_York`、`America\u002FLos_Angeles`、`Europe\u002FLondon`、`Europe\u002FBerlin`、`Asia\u002FTokyo`、`Asia\u002FShanghai`、`Asia\u002FSingapore`、`Australia\u002FSydney`。\n\n> 需要其他时区吗？请浏览完整的 [IANA 时区数据库](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FList_of_tz_database_time_zones)。\n\n## 🧩 多实例\n\n可以同时运行多个 nanobot 实例，每个实例使用独立的配置和运行时数据。使用 `--config` 作为主要入口点。如果需要为特定实例初始化或更新保存的工作空间，可以在 `onboard` 时选择性地传递 `--workspace`。\n\n### 快速入门\n\n如果你想让每个实例从一开始就拥有自己专用的工作空间，那么在入职时同时传递 `--config` 和 `--workspace`。\n\n**初始化实例：**\n\n```bash\n# 创建独立的实例配置和工作空间\nnanobot 
onboard --config ~\u002F.nanobot-telegram\u002Fconfig.json --workspace ~\u002F.nanobot-telegram\u002Fworkspace\nnanobot onboard --config ~\u002F.nanobot-discord\u002Fconfig.json --workspace ~\u002F.nanobot-discord\u002Fworkspace\nnanobot onboard --config ~\u002F.nanobot-feishu\u002Fconfig.json --workspace ~\u002F.nanobot-feishu\u002Fworkspace\n```\n\n**配置每个实例：**\n\n编辑 `~\u002F.nanobot-telegram\u002Fconfig.json`、`~\u002F.nanobot-discord\u002Fconfig.json` 等文件，设置不同的频道参数。你在 `onboard` 时传递的工作空间会保存到每个配置中，作为该实例的默认工作空间。\n\n**运行实例：**\n\n```bash\n# 实例 A - Telegram 机器人\nnanobot gateway --config ~\u002F.nanobot-telegram\u002Fconfig.json\n\n# 实例 B - Discord 机器人  \nnanobot gateway --config ~\u002F.nanobot-discord\u002Fconfig.json\n\n# 实例 C - Feishu 机器人，自定义端口\nnanobot gateway --config ~\u002F.nanobot-feishu\u002Fconfig.json --port 18792\n```\n\n### 路径解析\n\n当使用 `--config` 时，nanobot 会根据配置文件的位置推导出其运行时数据目录。工作空间仍然来自 `agents.defaults.workspace`，除非你用 `--workspace` 覆盖它。\n\n要在本地针对其中一个实例打开 CLI 会话：\n\n```bash\nnanobot agent -c ~\u002F.nanobot-telegram\u002Fconfig.json -m \"来自 Telegram 实例的问候\"\nnanobot agent -c ~\u002F.nanobot-discord\u002Fconfig.json -m \"来自 Discord 实例的问候\"\n\n# 可选的一次性工作空间覆盖\nnanobot agent -c ~\u002F.nanobot-telegram\u002Fconfig.json -w \u002Ftmp\u002Fnanobot-telegram-test\n```\n\n> `nanobot agent` 会使用选定的工作空间\u002F配置启动一个本地 CLI 代理，但它不会附加到或代理已经运行的 `nanobot gateway` 进程。\n\n| 组件         | 解析来源               | 示例                     |\n|--------------|------------------------|--------------------------|\n| **配置**     | `--config` 路径        | `~\u002F.nanobot-A\u002Fconfig.json` |\n| **工作空间** | `--workspace` 或配置   | `~\u002F.nanobot-A\u002Fworkspace\u002F`  |\n| **Cron 任务** | 配置目录               | `~\u002F.nanobot-A\u002Fcron\u002F`      |\n| **媒体\u002F运行时状态** | 配置目录             | `~\u002F.nanobot-A\u002Fmedia\u002F`     |\n\n### 工作原理\n\n- `--config` 用于选择加载哪个配置文件。\n- 默认情况下，工作空间来自该配置中的 `agents.defaults.workspace`。\n- 如果你传递了 `--workspace`，它会覆盖配置文件中定义的工作空间。\n\n### 最小化设置\n\n1. 
将你的基础配置复制到一个新的实例目录。\n2. 为该实例设置不同的 `agents.defaults.workspace`。\n3. 使用 `--config` 启动实例。\n\n示例配置：\n\n```json\n{\n  \"agents\": {\n    \"defaults\": {\n      \"workspace\": \"~\u002F.nanobot-telegram\u002Fworkspace\",\n      \"model\": \"anthropic\u002Fclaude-sonnet-4-6\"\n    }\n  },\n  \"channels\": {\n    \"telegram\": {\n      \"enabled\": true,\n      \"token\": \"YOUR_TELEGRAM_BOT_TOKEN\"\n    }\n  },\n  \"gateway\": {\n    \"port\": 18790\n  }\n}\n```\n\n启动独立实例：\n\n```bash\nnanobot gateway --config ~\u002F.nanobot-telegram\u002Fconfig.json\nnanobot gateway --config ~\u002F.nanobot-discord\u002Fconfig.json\n```\n\n必要时，可以通过以下方式覆盖工作空间进行一次性运行：\n\n```bash\nnanobot gateway --config ~\u002F.nanobot-telegram\u002Fconfig.json --workspace \u002Ftmp\u002Fnanobot-telegram-test\n```\n\n### 常见用例\n\n- 为 Telegram、Discord、Feishu 等不同平台运行独立的机器人。\n- 保持测试和生产实例隔离。\n- 为不同团队使用不同的模型或提供商。\n- 为多个租户提供服务，每个租户使用独立的配置和运行时数据。\n\n### 注意事项\n\n- 如果多个实例同时运行，必须使用不同的端口。\n- 如果希望内存、会话和技能相互隔离，建议为每个实例使用不同的工作空间。\n- `--workspace` 会覆盖配置文件中定义的工作空间。\n- Cron 任务以及运行时的媒体和状态都来自配置目录。\n\n## 💻 CLI 参考\n\n| 命令                  | 描述                                   |\n|-----------------------|----------------------------------------|\n| `nanobot onboard`     | 在 `~\u002F.nanobot\u002F` 初始化配置和工作空间 |\n| `nanobot onboard --wizard` | 启动交互式入职向导                 |\n| `nanobot onboard -c \u003Cconfig> -w \u003Cworkspace>` | 初始化或刷新特定实例的配置和工作空间 |\n| `nanobot agent -m \"...\"` | 与代理聊天                             |\n| `nanobot agent -w \u003Cworkspace>` | 针对特定工作空间聊天                 |\n| `nanobot agent -w \u003Cworkspace> -c \u003Cconfig>` | 针对特定工作空间\u002F配置聊天           |\n| `nanobot agent`       | 交互式聊天模式                         |\n| `nanobot agent --no-markdown` | 显示纯文本回复                       |\n| `nanobot agent --logs` | 聊天过程中显示运行日志                 |\n| `nanobot serve`       | 启动兼容 OpenAI 的 API                 |\n| `nanobot gateway`     | 启动网关                               |\n| `nanobot status`      | 显示状态                    
           |\n| `nanobot provider login openai-codex` | 通过 OAuth 登录提供商              |\n| `nanobot channels login \u003Cchannel>` | 交互式认证某个频道                   |\n| `nanobot channels status` | 显示频道状态                           |\n\n交互模式退出方式：`exit`、`quit`、`\u002Fexit`、`\u002Fquit`、`:q` 或 `Ctrl+D`。\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>心跳（周期性任务）\u003C\u002Fb>\u003C\u002Fsummary>\n\n网关每 30 分钟唤醒一次，并检查你工作空间中的 `HEARTBEAT.md` 文件（`~\u002F.nanobot\u002Fworkspace\u002FHEARTBEAT.md`）。如果文件中有任务，代理会执行这些任务，并将结果发送到你最近活跃的聊天频道。\n\n**设置：** 编辑 `~\u002F.nanobot\u002Fworkspace\u002FHEARTBEAT.md`（由 `nanobot onboard` 自动创建）：\n\n```markdown\n## 周期性任务\n\n- [ ] 检查天气预报并发送摘要\n- [ ] 扫描收件箱中的紧急邮件\n```\n\n代理也可以自行管理此文件——只需让它“添加一个周期性任务”，它就会为你更新 `HEARTBEAT.md`。\n\n> **注意：** 网关必须正在运行（`nanobot gateway`），并且你至少与机器人聊过一次，这样它才能知道将结果发送到哪个频道。\n\u003C\u002Fdetails>\n\n## 🐍 Python SDK\n\n将 nanobot 用作库——无需 CLI，无需网关，只需 Python：\n\n```python\nfrom nanobot import Nanobot\n\nbot = Nanobot.from_config()\nresult = await bot.run(\"总结 README\")\nprint(result.content)\n```\n\n每次调用都会携带一个 `session_key` 来实现对话隔离——不同的密钥会拥有独立的对话历史：\n\n```python\nawait bot.run(\"hi\", session_key=\"user-alice\")\nawait bot.run(\"hi\", session_key=\"task-42\")\n```\n\n添加生命周期钩子来观察或自定义代理行为：\n\n```python\nfrom nanobot.agent import AgentHook, AgentHookContext\n\nclass AuditHook(AgentHook):\n    async def before_execute_tools(self, ctx: AgentHookContext) -> None:\n        for tc in ctx.tool_calls:\n            print(f\"[tool] {tc.name}\")\n\nresult = await bot.run(\"Hello\", hooks=[AuditHook()])\n```\n\n完整的 SDK 参考请参阅 [docs\u002FPYTHON_SDK.md](docs\u002FPYTHON_SDK.md)。\n\n## 🔌 OpenAI 兼容 API\n\nnanobot 可以暴露一个最小化的 OpenAI 兼容端点，用于本地集成：\n\n```bash\npip install \"nanobot-ai[api]\"\nnanobot serve\n```\n\n默认情况下，API 绑定到 `127.0.0.1:8900`。您可以在 `config.json` 中更改此设置。\n\n### 行为\n\n- 会话隔离：在请求体中传递 `\"session_id\"` 以隔离对话；省略则使用共享的默认会话（`api:default`）\n- 单消息输入：每个请求必须恰好包含一条 `user` 消息\n- 固定模型：省略 `model`，或传入与 `\u002Fv1\u002Fmodels` 显示相同的模型\n- 不支持流式输出：`stream=true` 
不被支持\n\n### 端点\n\n- `GET \u002Fhealth`\n- `GET \u002Fv1\u002Fmodels`\n- `POST \u002Fv1\u002Fchat\u002Fcompletions`\n\n### curl\n\n```bash\ncurl http:\u002F\u002F127.0.0.1:8900\u002Fv1\u002Fchat\u002Fcompletions \\\n  -H \"Content-Type: application\u002Fjson\" \\\n  -d '{\n    \"messages\": [{\"role\": \"user\", \"content\": \"hi\"}],\n    \"session_id\": \"my-session\"\n  }'\n```\n\n### Python (`requests`)\n\n```python\nimport requests\n\nresp = requests.post(\n    \"http:\u002F\u002F127.0.0.1:8900\u002Fv1\u002Fchat\u002Fcompletions\",\n    json={\n        \"messages\": [{\"role\": \"user\", \"content\": \"hi\"}],\n        \"session_id\": \"my-session\",  # 可选：隔离对话\n    },\n    timeout=120,\n)\nresp.raise_for_status()\nprint(resp.json()[\"choices\"][0][\"message\"][\"content\"])\n```\n\n### Python (`openai`)\n\n```python\nfrom openai import OpenAI\n\nclient = OpenAI(\n    base_url=\"http:\u002F\u002F127.0.0.1:8900\u002Fv1\",\n    api_key=\"dummy\",\n)\n\nresp = client.chat.completions.create(\n    model=\"MiniMax-M2.7\",\n    messages=[{\"role\": \"user\", \"content\": \"hi\"}],\n    extra_body={\"session_id\": \"my-session\"},  # 可选：隔离对话\n)\nprint(resp.choices[0].message.content)\n```\n\n## 🐳 Docker\n\n> [!TIP]\n> `-v ~\u002F.nanobot:\u002Froot\u002F.nanobot` 标志会将您的本地配置目录挂载到容器中，这样您的配置和工作区会在容器重启后仍然保留。\n\n### Docker Compose\n\n```bash\ndocker compose run --rm nanobot-cli onboard   # 首次设置\nvim ~\u002F.nanobot\u002Fconfig.json                     # 添加 API 密钥\ndocker compose up -d nanobot-gateway           # 启动网关\n```\n\n```bash\ndocker compose run --rm nanobot-cli agent -m \"Hello!\"   # 运行 CLI\ndocker compose logs -f nanobot-gateway                   # 查看日志\ndocker compose down                                      # 停止\n```\n\n### Docker\n\n```bash\n# 构建镜像\ndocker build -t nanobot .\n\n# 初始化配置（仅首次）\ndocker run -v ~\u002F.nanobot:\u002Froot\u002F.nanobot --rm nanobot onboard\n\n# 在主机上编辑配置以添加 API 密钥\nvim ~\u002F.nanobot\u002Fconfig.json\n\n# 运行网关（连接到已启用的渠道，例如 
Telegram\u002FDiscord\u002FMochat）\ndocker run -v ~\u002F.nanobot:\u002Froot\u002F.nanobot -p 18790:18790 nanobot gateway\n\n# 或者运行单个命令\ndocker run -v ~\u002F.nanobot:\u002Froot\u002F.nanobot --rm nanobot agent -m \"Hello!\"\ndocker run -v ~\u002F.nanobot:\u002Froot\u002F.nanobot --rm nanobot status\n```\n\n## 🐧 Linux 服务\n\n将网关作为 systemd 用户服务运行，以便它能够自动启动并在发生故障时重启。\n\n**1. 查找 nanobot 二进制文件路径：**\n\n```bash\nwhich nanobot   # 例如 \u002Fhome\u002Fuser\u002F.local\u002Fbin\u002Fnanobot\n```\n\n**2. 创建服务文件**，位于 `~\u002F.config\u002Fsystemd\u002Fuser\u002Fnanobot-gateway.service`（如果需要，请替换 `ExecStart` 路径）：\n\n```ini\n[Unit]\nDescription=Nanobot Gateway\nAfter=network.target\n\n[Service]\nType=simple\nExecStart=%h\u002F.local\u002Fbin\u002Fnanobot gateway\nRestart=always\nRestartSec=10\nNoNewPrivileges=yes\nProtectSystem=strict\nReadWritePaths=%h\n\n[Install]\nWantedBy=default.target\n```\n\n**3. 启用并启动：**\n\n```bash\nsystemctl --user daemon-reload\nsystemctl --user enable --now nanobot-gateway\n```\n\n**常用操作：**\n\n```bash\nsystemctl --user status nanobot-gateway        # 检查状态\nsystemctl --user restart nanobot-gateway       # 在配置更改后重启\njournalctl --user -u nanobot-gateway -f        # 实时查看日志\n```\n\n如果您编辑了 `.service` 文件本身，请在重启之前运行 `systemctl --user daemon-reload`。\n\n> **注意：** 用户服务仅在您登录时运行。要使网关在注销后继续运行，请启用 linger 功能：\n\n```bash\nloginctl enable-linger $USER\n```\n\n## 📁 项目结构\n\n```\nnanobot\u002F\n├── agent\u002F          # 🧠 核心代理逻辑\n│   ├── loop.py     #    代理循环（LLM ↔ 工具执行）\n│   ├── context.py  #    提示词构建器\n│   ├── memory.py   #    持久化内存\n│   ├── skills.py   #    技能加载器\n│   ├── subagent.py #    后台任务执行\n│   └── tools\u002F      #    内置工具（包括 spawn）\n├── skills\u002F         # 🎯 捆绑技能（github、天气、tmux 等）\n├── channels\u002F       # 📱 聊天渠道集成（支持插件）\n├── bus\u002F            # 🚌 消息路由\n├── cron\u002F           # ⏰ 定时任务\n├── heartbeat\u002F      # 💓 主动唤醒\n├── providers\u002F      # 🤖 LLM 提供商（OpenRouter 等）\n├── session\u002F        # 💬 对话会话\n├── config\u002F         # ⚙️ 配置\n└── 
cli\u002F            # 🖥️ 命令\n```\n\n## 🤝 贡献与路线图\n\n欢迎提交 PR！代码库刻意保持小巧且易于阅读。🤗\n\n### 分支策略\n\n| 分支 | 目的 |\n|--------|---------|\n| `main` | 稳定版本——修复 bug 和小幅改进 |\n| `nightly` | 实验性功能——新功能和破坏性变更 |\n\n**不确定该提交到哪个分支？** 请参阅 [CONTRIBUTING.md](.\u002FCONTRIBUTING.md) 了解详情。\n\n**路线图**——选择一项并 [提交 PR](https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpulls)！\n\n- [ ] **多模态**——看见和听见（图片、语音、视频）\n- [ ] **长期记忆**——永不忘记重要上下文\n- [ ] **更好的推理能力**——多步规划和反思\n- [ ] **更多集成**——日历等\n- [ ] **自我改进**——从反馈和错误中学习\n\n### 贡献者\n\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fgraphs\u002Fcontributors\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FHKUDS_nanobot_readme_3b80c1ee6247.png\" alt=\"贡献者\" \u002F>\n\u003C\u002Fa>\n\n## ⭐ 星标历史\n\n\u003Cdiv align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fstar-history.com\u002F#HKUDS\u002Fnanobot&Date\">\n    \u003Cpicture>\n      \u003Csource media=\"(prefers-color-scheme: dark)\" srcset=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FHKUDS_nanobot_readme_0f5aff70ff64.png&theme=dark\" \u002F>\n      \u003Csource media=\"(prefers-color-scheme: light)\" srcset=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FHKUDS_nanobot_readme_0f5aff70ff64.png\" \u002F>\n      \u003Cimg alt=\"星标历史图表\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FHKUDS_nanobot_readme_0f5aff70ff64.png\" style=\"border-radius: 15px; box-shadow: 0 0 30px rgba(0, 217, 255, 0.3);\" \u002F>\n    \u003C\u002Fpicture>\n  \u003C\u002Fa>\n\u003C\u002Fdiv>\n\n\u003Cp align=\"center\">\n  \u003Cem> 感谢您的访问 ✨ nanobot！\u003C\u002Fem>\u003Cbr>\u003Cbr>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FHKUDS_nanobot_readme_b348048724fa.png\" alt=\"访问量\">\n\u003C\u002Fp>\n\n\n\u003Cp align=\"center\">\n  \u003Csub>nanobot 仅用于教育、研究和技术交流目的\u003C\u002Fsub>\n\u003C\u002Fp>","# nanobot 快速上手指南\n\nnanobot 是一款受 OpenClaw 启发的**超轻量级**个人 AI 助手。它仅用极少的代码量实现了核心 Agent 
功能，启动迅速、资源占用低，非常适合开发者进行研究、二次开发或个人部署。\n\n## 1. 环境准备\n\n在开始之前，请确保你的开发环境满足以下要求：\n\n*   **操作系统**：Linux, macOS, 或 Windows\n*   **Python 版本**：≥ 3.11 (必须)\n*   **包管理工具**：推荐安装 `uv` (更快更稳定) 或使用标准的 `pip`\n*   **API Key**：需要准备一个大模型服务的 API Key（如 OpenRouter, OpenAI, DeepSeek, Moonshot 等）\n\n> **安全提示**：本项目已移除 `litellm` 依赖以规避供应链安全风险。请确保安装最新版本 (v0.1.4.post6+)。\n\n## 2. 安装步骤\n\n你可以选择以下任意一种方式进行安装。国内用户若遇到网络问题，建议配置 pip 国内镜像源（如清华源、阿里源）。\n\n### 方式一：使用 uv 安装（推荐，速度快）\n\n```bash\n# 安装 uv (如果尚未安装)\ncurl -LsSf https:\u002F\u002Fastral.sh\u002Fuv\u002Finstall.sh | sh\n\n# 安装 nanobot\nuv tool install nanobot-ai\n```\n\n### 方式二：使用 pip 安装（稳定版）\n\n```bash\n# 建议使用国内镜像加速\npip install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple nanobot-ai\n```\n\n### 方式三：源码安装（适合开发与贡献）\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot.git\ncd nanobot\npip install -e .\n# 或者使用国内镜像\n# pip install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple -e .\n```\n\n### 验证安装与升级\n\n检查版本确认安装成功：\n```bash\nnanobot --version\n```\n\n升级至最新版本：\n```bash\n# pip 用户\npip install -U nanobot-ai\n\n# uv 用户\nuv tool upgrade nanobot-ai\n```\n\n## 3. 
基本使用\n\n### 第一步：初始化配置\n\n运行初始化命令，系统将引导你完成交互式设置（选择模型提供商、输入 API Key 等）。\n\n```bash\nnanobot onboard\n```\n\n> **手动配置提示**：\n> 你也可以直接编辑配置文件 `~\u002F.nanobot\u002Fconfig.json` 来设置 API Key。\n> *   全球通用推荐：[OpenRouter](https:\u002F\u002Fopenrouter.ai\u002Fkeys)\n> *   国内模型推荐：在配置中选择 DeepSeek、Moonshot (Kimi)、通义千问等提供商并填入对应 Key。\n\n### 第二步：启动助手\n\n配置完成后，在终端启动交互式聊天模式：\n\n```bash\nnanobot agent\n```\n\n启动后，你将进入交互式命令行界面，可以直接与 AI 助手对话。\n\n### 第三步：尝试功能\n\n你可以尝试以下简单指令体验其能力：\n\n*   **日常对话**：直接输入自然语言问题。\n*   **联网搜索**：询问实时新闻或需要检索的信息（可在 `tools.web.search` 中配置搜索提供商；未配置凭据时会自动回退到 DuckDuckGo）。\n*   **代码生成**：例如输入“写一个 Python 脚本来计算斐波那契数列”。\n*   **查看状态**：输入 `\u002Fstatus` 查看当前运行状态。\n\n### 进阶：多实例与渠道集成\n\nnanobot 支持同时运行多个实例或集成到微信、飞书、Telegram 等聊天软件中。\n\n*   **登录特定渠道**（以微信为例）：\n    ```bash\n    nanobot channels login weixin\n    ```\n*   **运行多实例**：通过指定不同的配置目录实现隔离运行。\n\n---\n*注：本工具仅供教育、研究和技术交流使用。*","某独立开发者希望为个人项目快速接入一个支持微信、飞书和 Telegram 的多渠道 AI 助手，用于自动处理用户反馈和定时任务通知。\n\n### 没有 nanobot 时\n- **代码臃肿难维护**：参考类似 OpenClaw 的框架，需要理解和维护数千行核心代码，仅为了实现基础的消息路由功能。\n- **环境安全隐患**：依赖复杂的第三方库（如曾受供应链投毒影响的 litellm），需花费大量时间排查安全漏洞和版本冲突。\n- **多渠道开发繁琐**：为微信、飞书等不同平台分别编写适配层，处理媒体发送、流式响应和格式渲染的工作量巨大且易出错。\n- **部署配置复杂**：缺乏交互式引导，手动配置模型提供商、API 密钥和时区参数容易出错，启动门槛高。\n\n### 使用 nanobot 后\n- **极致轻量精简**：nanobot 将核心代理功能压缩至原框架 1% 的代码量，开发者可瞬间读懂逻辑并按需修改，维护成本极低。\n- **原生安全架构**：nanobot 移除了高风险依赖，直接采用原生 OpenAI 和 Anthropic SDK，从根源上消除了供应链中毒风险。\n- **全渠道一键打通**：通过 nanobot 内置插件，无需额外编码即可在微信、飞书和 Telegram 间实现流式消息、代码块渲染及媒体文件的统一收发。\n- **向导式极速启动**：利用 nanobot 的交互式设置向导，几分钟内即可完成模型选择、密钥配置和多渠道登录，立即投入运行。\n\nnanobot 以极致的轻量化和安全架构，让个人开发者能以最小代价构建生产级的多渠道 AI 助手。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FHKUDS_nanobot_ef974e5b.png","HKUDS","✨Data Intelligence 
Lab@HKU✨","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FHKUDS_fc32cc87.jpg",null,"https:\u002F\u002Fsites.google.com\u002Fview\u002Fchaoh","https:\u002F\u002Fgithub.com\u002FHKUDS",[82,86,90,94,98],{"name":83,"color":84,"percentage":85},"Python","#3572A5",98.5,{"name":87,"color":88,"percentage":89},"TypeScript","#3178c6",0.9,{"name":91,"color":92,"percentage":93},"Shell","#89e051",0.5,{"name":95,"color":96,"percentage":97},"Dockerfile","#384d54",0.1,{"name":99,"color":100,"percentage":97},"JavaScript","#f1e05a",37994,6603,"2026-04-05T11:35:08","MIT","Linux, macOS, Windows","Not specified",{"notes":108,"python":109,"dependencies":110},"Positioned as an ultra-lightweight personal AI assistant. As of v0.1.4.post6 the litellm dependency has been removed to fix a supply-chain security issue, replaced by the native openai and anthropic SDKs. Installation is supported via uv, pip, or from source. Be sure to audit the security of your Python environment.","≥3.11",[111,112],"openai","anthropic",[26,15],12,"2026-03-27T02:49:30.150509","2026-04-06T07:15:10.504819",[118,123,128,133,138,142],{"id":119,"question_zh":120,"answer_zh":121,"source_url":122},10216,"How do I configure a local Ollama model to work with nanobot?","Ollama exposes an OpenAI-compatible API, so in the config file set the provider to `vllm` or `openai` and point apiBase at it. For example, under `providers`:\n\"vllm\": {\n  \"apiKey\": \"ollama\",\n  \"apiBase\": \"http:\u002F\u002F127.0.0.1:11434\u002Fv1\"\n}\nThen use the bare model name (e.g. \"ministral-3:3b\") in the `model` field of `agents`, with no ollama prefix.","https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fissues\u002F75",{"id":124,"question_zh":125,"answer_zh":126,"source_url":127},10217,"How do I fix the DeepSeek provider error \"LLM Provider NOT provided\"?","This error is usually caused by a stale cache file. Clearing the contents of `cli_direct.jsonl` normally resolves it. If the problem persists, check that the DeepSeek API key is configured correctly and that your network can reach the DeepSeek API.","https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fissues\u002F336",{"id":129,"question_zh":130,"answer_zh":131,"source_url":132},10218,"The Feishu channel reports \"No channels enabled\" at startup. What should I do?","First, make sure the `enabled` field under `feishu` in the config file is set to `true`. Second, the required dependency `lark-oapi` must be installed. Install it with:\npip install lark-oapi\nThen restart the nanobot gateway 
, and you're done.","https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fissues\u002F176",{"id":134,"question_zh":135,"answer_zh":136,"source_url":137},10219,"How can I test nanobot in WhatsApp by messaging myself?","By default the bridge filters out messages sent by your own account. To enable them, edit `nanobot\u002Fbridge\u002Fsrc\u002Fwhatsapp.ts` (around lines 111-113) and delete or comment out the line `if (msg.key.fromMe) continue;`.\nAfter the change, run `npm run build` in the bridge directory to recompile, copy the built directory into `.nanobot` to overwrite the existing bridge directory, and restart the service.","https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fissues\u002F218",{"id":139,"question_zh":140,"answer_zh":141,"source_url":137},10220,"What if `nanobot channels login` does nothing or fails to log in?","This usually means the bridge service has not been compiled or is not running. In the `bridge` directory, first run `npm run build` to compile, then `npm run dev` to start the development service. Alternatively, copy the compiled bridge directory into `.nanobot` to overwrite the existing files and retry the login command. Also make sure the nanobot gateway service is running in a terminal.",{"id":143,"question_zh":144,"answer_zh":145,"source_url":146},10221,"Does nanobot support Zhipu AI (Z.AI \u002F GLM-series models)?","Yes, nanobot supports Zhipu AI natively. Add a `zai` entry under `providers` in the config file and use a model identifier such as `zai\u002Fglm-4.7` in the `model` field of `agents`. Set the environment variable `ZAI_API_KEY` or put the API key in the config file. Supported models include glm-4.7, glm-4.6, and glm-4.5-air.","https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fissues\u002F2",[148,153,158,163,168,173,178,183,188,193,198],{"id":149,"version":150,"summary_zh":151,"released_at":152},107458,"v0.1.4.post6","🐈 nanobot `v0.1.4.post6` is here — 57 PRs merged, 27 new contributors, and a release that's less about adding surface area than about rethinking what's underneath.\r\n\r\nSome releases are about what you can do. This one is about how cleanly you can do it. The agent runtime got formally decomposed, a major dependency was removed, streaming went end-to-end, and a security vulnerability was closed. 
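The dependency removal called out here amounts to dispatching each model directly to its upstream SDK instead of through a routing library. As a rough illustration of the idea only: the `pick_sdk` helper and its prefix convention below are invented for this sketch and are not nanobot's actual code.

```python
# Illustrative only: map a model identifier to the native SDK family
# that would serve it. pick_sdk is a hypothetical helper, not nanobot API.

def pick_sdk(model: str) -> str:
    """Choose a native SDK by model-name prefix (assumed convention)."""
    if model.startswith(("claude", "anthropic")):
        return "anthropic"
    # Everything else is assumed to speak the OpenAI-compatible API,
    # which also covers local OpenAI-style servers.
    return "openai"

print(pick_sdk("claude-sonnet"))  # anthropic
print(pick_sdk("gpt-4o"))        # openai
```

Cutting out the router means provider quirks such as prompt caching, `max_completion_tokens` handling, and reasoning fields get handled once per SDK rather than papered over by a translation layer.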
Beneath the feature work, `v0.1.4.post6` is a structural turning point — the kind of release that makes the *next* release possible.\r\n\r\n## Highlights\r\n\r\n- **The agent runtime was decomposed into composable pieces** — A shared `AgentRunner` was extracted, lifecycle hooks were unified into a formal `HookContext`, and subagent progress is now preserved even on failure. Command routing was refactored into a plugin-friendly structure, and `process_direct` was unified to return `OutboundMessage` consistently. This isn't just cleanup — it's the foundation for pluggable agent behaviors, custom execution strategies, and third-party lifecycle integrations that are coming next. (#2524, #2541, #2388, #2338)\r\n\r\n- **litellm was replaced with native OpenAI + Anthropic SDKs** — The entire provider layer was rewritten to talk directly to upstream SDKs instead of routing through litellm. Prompt cache optimization for Anthropic, proper `max_completion_tokens` handling for OpenAI o1, and Gemini thought signature preservation all came along for the ride. If you've ever debugged a litellm traceback at 2am, you understand why this matters. (#2448, #1109, #2468, #2550, #2453)\r\n\r\n- **Streaming went end-to-end** — From provider to channel to CLI, streaming output now flows as a first-class path. Feishu gained CardKit streaming support, queued stream deltas are coalesced to reduce API calls, and the channel manager handles stream boundaries correctly. This is the difference between \"the bot is typing...\" and actually watching it think. (#2365, #2545, #2497)\r\n\r\n- **A security vulnerability was patched** — Email injection and spoofing via missing authentication verification has been fixed. Inbound emails now verify SPF\u002FDKIM through `Authentication-Results` headers, with `verify_dkim` and `verify_spf` enabled by default. Email content is tagged with `[EMAIL-CONTEXT]` to prevent LLM prompt injection from email bodies. See the advisory for details. 
(GHSA-4gmr-2vc8-7qh3)\r\n\r\n- **WeChat support landed as a full channel** — WeChat (Weixin) joined the channel family with HTTP long-poll, QR code login, and plugin 1.0.3 compatibility. Alongside it, Telegram, QQ, WhatsApp, and Feishu all received cross-channel enhancements including retry mechanisms with exponential backoff. (#2412, #2428, #2386, #2478)\r\n\r\n- **Provider coverage kept expanding** — Mistral and OVMS providers arrived, Step Fun (阶跃星辰) joined the ecosystem, and custom provider error reporting got much more honest — raw API errors instead of opaque `JSONDecodeError`. nanobot continues to meet users wherever their models live. (#2199, #2472, #2289, #2139)\r\n\r\n- **The agent got smarter about resources** — Per-session concurrent dispatch landed, native multimodal sensory capabilities were added, token estimation now counts all message fields, and memory consolidation properly reserves completion headroom. The agent loop also handles `CancelledError` gracefully and records subagent results with correct roles. (#2393, #2304, #2344, #2378, #2239, #2104)\r\n\r\n- **Feishu and Telegram both leveled up** — Feishu gained streaming cards, code block parsing in post messages, and fixes for markdown rendering and media types. Telegram got HTTP(S) URL media support, separated connection pools to prevent pool exhaustion, and quieter network error logging. Small individually, substantial together. (#2545, #2246, #1814, #1755, #1793, #2247, #2272)\r\n\r\n- **CLI and onboarding became more capable** — A full-featured onboard wizard arrived, `--dir` enables multiple instances, `\u002Fstatus` shows runtime info, `-h` works everywhere, and timezone is now configurable. The kind of polish that makes first-run experience feel intentional. 
(#2101, #1763, #1985, #2123, #2477, #1136, #2266)\r\n\r\n- **Infrastructure hardened across the board** — Zombie processes are reaped on shell timeout, cron job stores are scoped to workspaces, MCP tool schemas handle nullable params correctly, Docker builds include openssh-client, and the test suite was reorganized into a cleaner structure. The kind of work that prevents the bug report you'd otherwise file next month. (#2362, #2204, #2230, #2287, #1911, #2427, #2367)\r\n\r\n## Community\r\n\r\nA warm welcome to our **27 new contributors** in this release.\r\n\r\n`v0.1.4.post6` is shaped by a belief that the most important work in open source isn't always the most visible. Replacing a core dependency, decomposing a runtime, closing a security hole — none of these make for flashy demos, but all of them make nanobot a project you can build on with more confidence tomorrow than yesterday. Thank you to everyone who contributed.","2026-03-27T15:05:22",{"id":154,"version":155,"summary_zh":156,"released_at":157},107459,"v0.1.4.post5","🐈 nanobot `v0.1.4.post5` is here — 57 PRs merged, 29 new contributors, and a release cycle shaped less by spectacle than by something quieter: careful refinement where it matters most.\r\n\r\nThis is the kind of release that makes a project feel more trustworthy in daily use. The edges got smoother, the failure modes got softer, and the platform got broader. Across channels, providers, memory, MCP, CLI, and infrastructure, nanobot is becoming not just more capable, but more dependable — more like a tool you can actually live with.\r\n\r\n## Highlights\r\n\r\n- **Reliability took center stage** — A lot of this release is about making nanobot fail more gracefully. Agent loops are less likely to crash, MCP connections now handle cancellation better, orphaned tool results are preserved correctly, and async CLI\u002Fsubagent output behaves more cleanly. 
(#1999, #1953, #2075, #1930, #2039)\r\n\r\n- **Memory became more practical** — Async background consolidation landed, consolidation inputs are passed through more faithfully, payloads are validated before persistence, and `save_memory` is enforced more consistently. This is a meaningful step toward memory that feels less magical and more reliable. (#1961, #1962, #1868, #1810, #1909)\r\n\r\n- **The channel layer keeps maturing** — Channel plugin architecture arrived, channel discovery is now automatic, and built-in channel\u002Fconfig boundaries are cleaner. That kind of structural work matters: it makes growth easier without making the system brittle. (#1982, #1888)\r\n\r\n- **Provider support keeps expanding outward** — Ollama support landed for local models, VolcEngine and BytePlus joined the ecosystem, `openrouter\u002F*` models are supported, and web search providers are now configurable with fallback behavior. nanobot is increasingly meeting users where they already are. (#1863, #1608, #2026, #398)\r\n\r\n- **Observability and tooling got stronger** — Langsmith integration brings better conversation tracking, built-in skill packaging got fixed up, and smarter filesystem\u002Fshell tooling improves pagination, fallback matching, and output behavior. The system is becoming easier to inspect and easier to trust. (#1920, #1416, #1895)\r\n\r\n- **Feishu saw major polish** — Reply\u002Fquote support landed, tool calls can now render in code blocks, group mention behavior improved, Groq Whisper audio compatibility was fixed, and broader multimedia handling got much better. Feishu support feels substantially more complete after this cycle. (#1963, #1966, #1768, #1741, #2034)\r\n\r\n- **Telegram got meaningfully better in groups and media workflows** — Group response behavior is now configurable, reply-to-message context works across text and media, and media filename collision bugs were cleaned up. 
These are small details individually, but together they make conversations feel much more natural. (#1389, #1900, #1796)\r\n\r\n- **Enterprise and collaboration channels improved too** — WeCom channel support landed, Slack thread behavior was clarified, QQ legacy plain-text replies were restored, and DingTalk gained both voice recognition text retrieval and multimedia improvements. (#1327, #1784, #1941, #1859, #2034)\r\n\r\n- **CLI and runtime behavior are more predictable** — Gateway port defaults now respect config, restart flows are more portable, Windows compatibility got attention, and shell\u002Fworkspace guards became stricter around home-expanded and tilde-based paths. This is the kind of work users only notice when it’s missing — which is exactly why it matters. (#1797, #1785, #1958, #1479, #1827, #1845)\r\n\r\n- **A lot of sharp edges disappeared** — Hidden files are no longer synced by accident, non-vision models won’t receive `image_url`, heartbeat and cron got less noisy, and version IDs now show up in logs. These aren’t flashy changes, but they make nanobot feel more settled, more deliberate, and more production-ready. (#1856, #1901, #1973, #2058)\r\n\r\n## Community\r\n\r\nA huge welcome to our **29 new contributors** in this release.\r\n\r\nOpen source grows in two ways: through bold new ideas, and through the patient work of noticing rough edges and smoothing them out. `v0.1.4.post5` has plenty of both. Thank you to everyone who contributed features, fixes, refactors, docs, and infrastructure improvements — nanobot is becoming stronger not all at once, but through many careful hands moving it forward together.\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fcompare\u002Fv0.1.4.post4...v0.1.4.post5\r\n\r\n## What's Next\r\n\r\nLooking ahead, we’ll continue moving nanobot toward a more modular, plugin-oriented ecosystem. 
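A plugin-oriented channel layer of this kind typically reduces to a narrow interface plus a registry that channels add themselves to. The sketch below is a generic illustration of that pattern under those assumptions; the names (`Channel`, `CHANNEL_REGISTRY`, `register_channel`) are invented and are not nanobot's published API.

```python
from abc import ABC, abstractmethod

# Hypothetical registry: channels self-register, making discovery automatic.
CHANNEL_REGISTRY: dict = {}

class Channel(ABC):
    """Hypothetical minimal contract a channel plugin implements."""
    name: str = ""

    @abstractmethod
    def send(self, chat_id: str, text: str) -> None: ...

def register_channel(cls):
    """Decorator that records a channel class under its declared name."""
    CHANNEL_REGISTRY[cls.name] = cls
    return cls

@register_channel
class ConsoleChannel(Channel):
    name = "console"

    def send(self, chat_id: str, text: str) -> None:
        print(f"[{chat_id}] {text}")

# A gateway would instantiate whatever channels the config enables:
CHANNEL_REGISTRY["console"]().send("demo", "hello")  # prints: [demo] hello
```

The appeal of the pattern is exactly what the notes describe: new channels can be added without touching the core, so the channel layer becomes a low-risk proving ground for extensibility.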
As a first step, we plan to experiment and iterate in the channel layer — using channels as the proving ground for a more extensible architecture before expanding that approach further across the project.\r\n\r\nWe’ll also keep using the [`nightly`](https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Ftree\u002Fnightly) branch for faster testing, earlier feedback, and quicker iteration on new ideas. If you’re interested in helping shape that future, we’d love to hear from you.","2026-03-16T15:21:09",{"id":159,"version":160,"summary_zh":161,"released_at":162},107460,"v0.1.4.post4","🐈 nanobot `v0.1.4.post4` is here — 58 PRs merged, 29 new contributors, and a *lot* of real-world polish from the community.\r\n\r\nThis release is a big one: safer defaults, better multi-instance support, stronger MCP\u002Ftool reliability, and major improvements across Telegram, Feishu, QQ, DingTalk, Discord, WhatsApp, Matrix, and more. A ton of sharp edge cases got cleaned up in this cycle — the kind of fixes that make nanobot feel much more solid day to day.\r\n\r\n## Highlights\r\n\r\n- **Safer by default** — Access control is tighter now. This release includes a real authorization-bypass fix and stronger default handling around `allowFrom`, making deployments much safer out of the box. (#1403, #1677)\r\n\r\n- **Multi-instance support is finally real** — `--config` support landed, runtime path handling got cleaned up, and the CLI agent now supports `--workspace` \u002F `--config` too. Running separate bots, tenants, or environments is much easier now. (#1581, #1635)\r\n\r\n- **MCP got tougher** — SSE transport support is in, auto-detection is smarter, and MCP tool calls are much less likely to take down the process when cancellations or weird failures happen. (#1488, #1728)\r\n\r\n- **Tooling got more forgiving** — Auto-casting tool params, safer validation, graceful datetime handling, and `read_file` size limits all help nanobot fail less badly when tools or inputs get weird. 
(#1610, #1507, #1508, #1511)\r\n\r\n- **Provider support keeps growing** — Azure OpenAI support is here, Alibaba Cloud Coding Plan API support landed, prompt-caching affinity headers were added, and GitHub Copilot \u002F Codex compatibility keeps improving. (#1618, #1563, #1428, #1555, #1637, #1525)\r\n\r\n- **Cron got much sturdier** — External `jobs.json` changes are respected, job context handling is better, and cron jobs are no longer allowed to recursively schedule more cron jobs forever. (#1371, #1375, #1399, #1458)\r\n\r\n- **Context got cleaner** — Consecutive user messages are merged to avoid provider-side errors, internal reasoning is hidden from user-facing progress updates, image MIME detection is more reliable, and prompt\u002Fplatform guidance is more polished. (#1456, #1655, #1573, #1579)\r\n\r\n- **Telegram got a lot of love** — Proxy handling was fixed, group topics are supported, `\u002Fstop` works better, streaming messages landed, and generic documents now keep the right file extensions. (#1535, #1476, #1482, #1660, #1522, #436)\r\n\r\n- **Feishu keeps getting better** — Rich text parsing, table\u002Fcard splitting, smarter format selection, Groq Whisper transcription, audio\u002Fvideo compatibility fixes, and cleaner event handling all landed in this release. (#1361, #1384, #1648, #1605, #1531, #1594, #332, #1568)\r\n\r\n- **More channels, more polish** — DingTalk group chat support, QQ group handling and markdown sending, Discord group policy + attachments, WhatsApp media support, Slack fallback fixes, and Matrix media normalization all made it in. (#1467, #532, #1727, #553, #1613, #1638, #673, #1406)\r\n\r\n- **Cross-platform stability improved** — Better Windows signal handling, fewer missing imports\u002Fdependencies, cleaner tests, and overall more confidence that nanobot behaves well across different environments. 
(#1598, #1485, #1533, #1546, #1339, #1521)\r\n\r\n## Community\r\n\r\nA huge welcome to our **29 new contributors** in this release — thank you all for jumping in and improving nanobot across channels, providers, tooling, docs, and infrastructure. This project keeps getting better because of contributions like these.\r\n\r\n## What's Changed\r\n* fix(cron): auto-reload jobs.json when modified externally by @Re-bin in https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpull\u002F1371\r\n* style: unify code formatting and import order by @JackLuguibin in https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpull\u002F1339\r\n* fix(feishu): parse post wrapper payload for rich text messages by @cyzlmh in https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpull\u002F1361\r\n* feat(cron): improve cron job context handling by @VITOHJL in https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpull\u002F1375\r\n* feat(tool): add web search proxy by @chengyongru in https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpull\u002F1370\r\n* security: deny by default in is_allowed for all channels by @chengyongru in https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpull\u002F1403\r\n* fix(matrix): normalize media metadata and keyword-call attachment upload by @wenjielei1990 in https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpull\u002F1406\r\n* cron: honor external jobs.json updates during timer ticks by @cyzlmh in https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpull\u002F1399\r\n* fix: merge consecutive user messages to prevent API errors by @nikolasdehor in https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpull\u002F1456\r\n* fix: prevent cron job from scheduling new jobs (feedback loop) by @nikolasdehor in https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpull\u002F1458\r\n* fix: add missed `openai` dependency by @cocolato in https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpull\u002F1485\r\n* 
fix(codex): pass reasoning_effort to Codex API by @danielemden in https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpull\u002F1525\r\n* test: fix test failures from refactored cron and context builder by @chengyong","2026-03-08T16:49:40",{"id":164,"version":165,"summary_zh":166,"released_at":167},107461,"v0.1.4.post3","🐈 nanobot `v0.1.4.post3` just dropped — 33 PRs merged, 16 new contributors! You guys are unstoppable 🔥\r\n\r\nThis release is all about making the agent **smarter by seeing less junk**: cleaner context, hardened session history, and fewer ghost messages. Less noise in → less hallucination out. Plus a shiny new Matrix channel and experimental thinking mode.\r\n\r\n## Highlights\r\n\r\n- **Agent Loop Hardening** — Empty assistant messages (no content, no tool calls) are now filtered instead of poisoning context; LLM error responses are no longer saved to history, preventing permanent 400-error loops; message tool suppression scoped to same-channel only (#1314, #1198, #1206)\r\n- **Context Noise Reduction** — Runtime metadata (time, channel, chat ID) separated into an untrusted layer and excluded from session history, so the agent sees only what matters — fewer tokens, less confusion, less hallucination (#1126, #1222)\r\n- **Session Safety** — Base64 images stripped from history to prevent context overflow; null responses no longer corrupt future turns (#1191, #1314)\r\n- **Matrix Channel** — Full Matrix (Element) chat channel support with E2EE, media uploads, and typing indicators (#420, #1239)\r\n- **Provider Compatibility** — Short alphanumeric `tool_call_id` for Mistral; list-type tool arguments handled gracefully; explicit provider selection in config (#1293, #1294, #1214, #1316)\r\n- **Thinking Mode (experimental)** — New `reasoning_effort` config enables LLM reasoning for supported models; session history preserves `reasoning_content` and `thinking_blocks` across turns (#1351, #1330, #1074)\r\n- **Subagent Improvements** — `\u002Fstop` 
cancels spawned subagents; streamlined prompt eliminates dead code and reuses ContextBuilder (#1180, #1347)\r\n- **Feishu** — Interactive card text extraction fix, configurable reaction emoji, corrected bot permissions (#1323, #1257, #1317, #1348)\r\n- **DingTalk** — Images and media sent as proper message types instead of plain-text links (#1337)\r\n- **Telegram** — Media groups aggregated into a single inbound message (#1258)\r\n- **QQ** — Fixed C2C reply permissions, disabled file log on read-only filesystems (#1307, #1346)\r\n- **WhatsApp** — Message deduplication prevents redundant agent processing (#1325)\r\n- **Shell & Security** — Full Windows path parsing in workspace guard; configurable `path_append` for subprocess PATH (#1286, #1083)\r\n- **Memory & Concurrency** — WeakValueDictionary for consolidation locks eliminates race conditions and manual cleanup (#1326)\r\n- **Workspace** — Automatic template synchronization restores critical files on startup (#1253)\r\n\r\n## What's Changed\r\n* fix: preserve reasoning_content in messages for thinking-enabled models by @haosenwang1018 in https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpull\u002F1074\r\n* feat\u002Ffix(exec): add path_append config to extend PATH for subprocess by @aiguozhi123456 in https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpull\u002F1083\r\n* feat: add stable system layer + untrusted runtime context layer by @pikaxinge in https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpull\u002F1126\r\n* feat: \u002Fstop cancels spawned subagents via session tracking by @coldxiangyu163 in https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpull\u002F1180\r\n* feat: support explicit provider selection in config by @Re-bin in https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpull\u002F1214\r\n* Fix assistant messages without tool calls not being saved to session history by @VITOHJL in https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpull\u002F1198\r\n* 
Fix: The base64 images are stored in the session history, causing context overflow. by @dxtime in https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpull\u002F1191\r\n* feat: add Matrix (Element) chat channel support by @tanishra in https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpull\u002F420\r\n* fix(agent): only suppress final reply when message tool sends to same… by @chengyongru in https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpull\u002F1206\r\n* fix(web): use self.api_key instead of undefined api_key by @yongPhone in https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpull\u002F1228\r\n* fix(telegram): aggregate media groups into a single inbound message by @KimGLee in https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpull\u002F1258\r\n* feat(feishu): make reaction emoji configurable by @kimkitsuragi26 in https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpull\u002F1257\r\n* feat: automatic workspace template synchronization by @honjiaxuan in https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpull\u002F1253\r\n* Fix Matrix channel initialization and configuration by @tanishra in https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpull\u002F1239\r\n* fix(agent): avoid persisting runtime context metadata into history by @KimGLee in https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpull\u002F1222\r\n* fix: update heartbeat tests to match two-phase tool-call architecture by @intelliot in https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpull\u002F1200\r\n* Fix(prompt): guide llm grep using timestamp by @aiguozhi123456 in https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpull\u002F1278\r\n* fix: generate short alphanumeric tool_call_id for Mistral compatibility by @Re-bin in https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpull\u002F1293\r\n* fix: remove overly broad \"codex\" keyword from openai_codex provider by @nikolasdehor in 
https:\u002F\u002Fgithub","2026-02-28T18:01:25",{"id":169,"version":170,"summary_zh":171,"released_at":172},107462,"v0.1.4.post2","🐈 nanobot `v0.1.4.post2` is here — 32 PRs merged, 14 new contributors! The community is on fire 🔥\r\n\r\nThis release focuses on **reliability**: a completely redesigned heartbeat, prompt caching for lower API costs, hardened provider compatibility, and dozens of channel-level fixes. Fewer surprises, more stability.\r\n\r\n## Highlights\r\n\r\n- **Heartbeat Overhaul** — Replaced fragile token detection with a virtual tool-call decision mechanism; heartbeat is now truly silent when idle (#1102, #1054, #1039, #1036)\r\n- **Prompt Cache Optimization** — Dynamic context (time, session) moved from system prompt to user message, enabling persistent prompt cache hits (#1115)\r\n- **Agent Reliability** — Behavioral constraints, full tool history in context, and actionable error hints (#1046)\r\n- **Slack** — Thread-isolated sessions, mrkdwn post-processing for bold\u002Fheader artifacts, socket error handling (#1048, #1107, #957)\r\n- **Feishu** — Rich-text image extraction in post messages, file download API fix (#1090, #986)\r\n- **Email** — Proactive sends work even when autoReply is disabled, smarter dedup eviction (#1077, #959)\r\n- **Provider Fixes** — DeepSeek `reasoning_content` normalization, empty content block filtering, API key hot-reload via `@property` (#947, #955, #949, #1071, #1098)\r\n- **Security** — Path traversal prevention using `relative_to()` instead of `startswith()` (#956)\r\n- **MCP** — Configurable timeouts to prevent agent hangs, removed conflicting defaults for HTTP transport (#950, #1062)\r\n- **Memory** — Fixed TypeError when LLM returns dict arguments during consolidation (#1061)\r\n- **Discord** — Break typing indicator loop on persistent HTTP failure (#1029)\r\n- **CLI** — DingTalk, QQ, Email added to `nanobot status` output (#982)\r\n- **Packaging** — Workspace templates moved to `nanobot\u002Ftemplates\u002F` 
for proper pip packaging (#1043)\r\n- **Docs** — systemd user service deployment guide (#968)\r\n\r\n## What's Changed\r\n* fix: change VolcEngine litellm prefix from openai to volcengine by @init-new-world in https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpull\u002F951\r\n* fix(context): Fix 'Missing `reasoning_content` field' error for deepseek provider. by @homorunner in https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpull\u002F947\r\n* Remove redundant tools description by @vincentchen0x2-dev in https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpull\u002F939\r\n* feat(cli): add DingTalk, QQ, and Email to channels status output by @luoyingwen in https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpull\u002F982\r\n* fix(security): prevent path traversal bypass via startswith check by @nghiahsgs in https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpull\u002F956\r\n* fix(slack): add exception handling to socket listener by @nghiahsgs in https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpull\u002F957\r\n* fix(session): handle errors in legacy session migration by @nghiahsgs in https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpull\u002F958\r\n* fix(email): evict oldest half of dedup set instead of clearing entirely by @nghiahsgs in https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpull\u002F959\r\n* fix(loop): serialize \u002Fnew consolidation and preserve session on archival failure by @Athemis in https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpull\u002F881\r\n* fix(qq): make start() long-running per base channel contract by @nghiahsgs in https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpull\u002F962\r\n* docs: add systemd user service instructions to README by @katafractari in https:\u002F\u002Fgithub.com\u002FHKUDS\u002Fnanobot\u002Fpull\u002F968\r\n* fix(mcp): add 30s timeout to MCP tool calls to prevent agent hangs by @eliumusk in 
https://github.com/HKUDS/nanobot/pull/950
* fix(providers): normalize empty reasoning_content to None at provider level by @nghiahsgs in https://github.com/HKUDS/nanobot/pull/955
* fix(feishu): replace file.get with message_resource.get to fix feishu file download problem by @FloRainRJY in https://github.com/HKUDS/nanobot/pull/986
* fix(provider): filter empty text content blocks causing API 400 by @eliumusk in https://github.com/HKUDS/nanobot/pull/949
* feat(channels): add send_progress option to control progress message … by @luoyingwen in https://github.com/HKUDS/nanobot/pull/1000
* fix(heartbeat): route heartbeat runs to enabled chat context by @KimGLee in https://github.com/HKUDS/nanobot/pull/1036
* refactor: move workspace/ to nanobot/templates/ for packaging by @Re-bin in https://github.com/HKUDS/nanobot/pull/1043
* improve agent reliability: behavioral constraints, full tool history, error hints by @Re-bin in https://github.com/HKUDS/nanobot/pull/1046
* feat(slack): isolate session context per thread by @pjbakker in https://github.com/HKUDS/nanobot/pull/1048
* fix(heartbeat): deliver agent response to user and fix HEARTBEAT_OK detection by @Re-bin in https://github.com/HKUDS/nanobot/pull/1054
* fix(heartbeat): make start idempotent and require exact HEARTBEAT_OK by @cyzlmh in https://github.com/HKUDS/nanobot/pull/1039
* fix: break Discord typing loop on persistent HTTP failure by @nikolasdehor in https://github.com/HKUDS/nanobot/pull/1029
* fix(heartbeat): replace HEARTBEAT_OK token with virtual tool-call decision by @Re-bin in https://github.com/

Released 2026-02-24.

# v0.1.4.post1 (released 2026-02-21)

🎉 Another big release from the 🐈 nanobot community — thanks to all contributors, especially our 19 new ones!

This release brings major channel improvements across Feishu, Slack, and Discord, a new provider, prompt caching, and a significant agent loop refactor. More reliable, more capable, leaner code 😄.

## Highlights

- **Feishu Media** — Bot can now send and receive images, audio, and files (#844, #922)
- **Slack Media Upload** — Send files directly from the agent (#904)
- **Discord Long Messages** — Automatically splits messages exceeding 2000 characters (#900)
- **VolcEngine Provider** — New LLM provider support (#812)
- **Prompt Caching** — Anthropic and OpenRouter cache_control support for lower costs (#854, #905)
- **MCP Auth Headers** — Custom HTTP headers for authenticated MCP servers (#807)
- **Telegram Improvements** — Configurable reply-to behavior, /help bypasses ACL (#879, #824)
- **CLI Subagent Support** — CLI now routes through the message bus, enabling full subagent support (#908)
- **Reliable Memory** — Memory consolidation uses structured tool calls instead of fragile JSON parsing (#866)
- **Leaner Agent Loop** — Extracted memory logic, merged redundant methods, -31% lines (#930)
- **Smarter File Editing** — edit_file now shows a diff when old_text doesn't match (#921)
- **Bug Fixes** — MCP reconnection, session key persistence, duplicate replies, cron execution, and more (#892, #902, #832, #821, #823, ...)

## What's Changed
* feat(feishu): support sending images, audio, and files by @KinglittleQ in https://github.com/HKUDS/nanobot/pull/844
* fix: Codex provider routing for GitHub Copilot models by @Molunerfinn in https://github.com/HKUDS/nanobot/pull/836
* fix: wait for killed process after shell timeout to prevent fd leaks by @nikolasdehor in https://github.com/HKUDS/nanobot/pull/851
* Fix safety guard false positive on 'format' in URLs by @rubychilds in https://github.com/HKUDS/nanobot/pull/820
* fix: remove dead pub/sub code from MessageBus by @AlexanderMerkel in https://github.com/HKUDS/nanobot/pull/870
* fix: use loguru native formatting to prevent KeyError on curly braces by @nikolasdehor in https://github.com/HKUDS/nanobot/pull/864
* Fix: Add UTF-8 encoding and unicode support for JSON output by @chtangwin in https://github.com/HKUDS/nanobot/pull/455
* fix(tools): resolve relative file paths against workspace by @omdv in https://github.com/HKUDS/nanobot/pull/653
* fix(cron): validate timezone inputs when adding jobs by @Athemis in https://github.com/HKUDS/nanobot/pull/763
* fix(agent): handle non-string values in memory consolidation by @jswxharry in https://github.com/HKUDS/nanobot/pull/644
* feat: add Anthropic prompt caching via cache_control by @tercerapersona in https://github.com/HKUDS/nanobot/pull/854
* fix: allow retry for models that send interim text before tool calls by @nikolasdehor in https://github.com/HKUDS/nanobot/pull/825
* fix: prevent duplicate memory consolidation tasks per session by @nikolasdehor in https://github.com/HKUDS/nanobot/pull/823
* fix: /help command bypasses ACL on Telegram by @nikolasdehor in https://github.com/HKUDS/nanobot/pull/824
* feat: Add VolcEngine LLM provider support by @init-new-world in https://github.com/HKUDS/nanobot/pull/812
* feature: Added custom headers for MCP Auth use. by @dxtime in https://github.com/HKUDS/nanobot/pull/807
* feat: Agent is able to reply to original message (Telegram Channel) by @DaryeDev in https://github.com/HKUDS/nanobot/pull/815
* fix: make cron run command actually execute the agent by @ClaytonWWilson in https://github.com/HKUDS/nanobot/pull/821
* feat: make Telegram reply-to-message behavior configurable, default false by @Re-bin in https://github.com/HKUDS/nanobot/pull/879
* fix: fixed not logging tool uses if a think fragment had them attached. by @DaryeDev in https://github.com/HKUDS/nanobot/pull/833
* fix: Resolve "Unrecognized chat message" error for StepFun and strict providers by @Tevkanbot in https://github.com/HKUDS/nanobot/pull/795
* feat: add OpenRouter prompt caching via cache_control by @tercerapersona in https://github.com/HKUDS/nanobot/pull/905
* feat(slack): add media file upload support by @pjbakker in https://github.com/HKUDS/nanobot/pull/904
* fix: convert remaining f-string logger calls to loguru native format by @nikolasdehor in https://github.com/HKUDS/nanobot/pull/903
* fix: split Discord messages exceeding 2000-character limit by @nikolasdehor in https://github.com/HKUDS/nanobot/pull/900
* fix: store session key in metadata to avoid lossy filename reconstruction by @nikolasdehor in https://github.com/HKUDS/nanobot/pull/902
* fix(agent): avoid duplicate email replies when message tool already sends by @KimGLee in https://github.com/HKUDS/nanobot/pull/832
* fix: allow MCP reconnection after transient failure by @nikolasdehor in https://github.com/HKUDS/nanobot/pull/892
* refactor: route CLI interactive mode through message bus by @Re-bi

# v0.1.4 (released 2026-02-18)

🎉 Another big release from the 🐈 nanobot community — thanks to all contributors, especially our 18 new ones!

This release adds MCP tool server support, real-time progress streaming so users actually see what the agent is doing, and a wave of new providers (Custom OpenAI-compatible, GitHub Copilot, OpenAI Codex, SiliconFlow). Channels got a lot of love too — Telegram now handles media uploads and long messages, Slack got proper mrkdwn and thread replies, Feishu supports rich text. We also added Docker Compose for one-command deployment, scoped sessions to the workspace, and switched to json_repair for bulletproof LLM response parsing. Less silence, more providers, better channels — that's the nanobot way.

## Highlights

- **MCP Support** — Connect external tool servers via Model Context Protocol (#554)
- **Progress Streaming** — Agent shows what it's doing during multi-step tool execution (#802)
- **New Providers** — Custom OpenAI-compatible endpoints, GitHub Copilot, OpenAI Codex, SiliconFlow (#786, #720, #312, #151, #630)
- **Channel Improvements** — Telegram media uploads & message splitting, Slack thread replies & mrkdwn, Feishu rich text (#747, #694, #717, #784, #629, #593)
- **Docker Compose** — One-command deployment (#765)
- **Workspace-scoped Sessions** — Sessions now live inside the workspace, with legacy migration (#713)
- **Robust JSON Parsing** — json_repair for handling malformed LLM responses (#664)

## What's Changed
* Improve markdown heading display in Feishu card messages by @C-Li in https://github.com/HKUDS/nanobot/pull/593
* feat: add custom provider and non-destructive onboard by @Re-bin in https://github.com/HKUDS/nanobot/pull/604
* Upgrade Onboarding feature by @lukemilby in https://github.com/HKUDS/nanobot/pull/583
* fix: bug #370, support temperature configuration by @wymcmh in https://github.com/HKUDS/nanobot/pull/560
* fix: Cache expiration issue related fixes by @chengyongru in https://github.com/HKUDS/nanobot/pull/590
* Add support for receiving Feishu rich text content by @C-Li in https://github.com/HKUDS/nanobot/pull/629
* fix(providers): clamp max_tokens to >= 1 before calling LiteLLM by @themavik in https://github.com/HKUDS/nanobot/pull/617
* Fix cron ignoring timezone settings, and fall back to the local timezone when none is set by @C-Li in https://github.com/HKUDS/nanobot/pull/625
* feat(tools): add mcp support by @SergioSV96 in https://github.com/HKUDS/nanobot/pull/554
* fix: use json_repair for robust LLM response parsing by @Re-bin in https://github.com/HKUDS/nanobot/pull/664
* Add OpenAI Codex OAuth login and provider support by @pinhua33 in https://github.com/HKUDS/nanobot/pull/151
* Fix/Convert Markdown to Slack's mrkdwn formatting by @alekwo in https://github.com/HKUDS/nanobot/pull/704
* fix(telegram): Slash commands failing access check due to missing username in sender_id by @TomLisankie in https://github.com/HKUDS/nanobot/pull/701
* fix(telegram): split long messages to avoid Message is too long error by @zhouzhuojie in https://github.com/HKUDS/nanobot/pull/694
* slack: use slackify-markdown for proper mrkdwn formatting by @xek in https://github.com/HKUDS/nanobot/pull/717
* feat: add ClawHub skill by @Re-bin in https://github.com/HKUDS/nanobot/pull/758
* fix(cron): preserve timezone for cron schedules by @jcpoyser in https://github.com/HKUDS/nanobot/pull/744
* feat: Added GitHub Copilot as Provider by @DaryeDev in https://github.com/HKUDS/nanobot/pull/720
* fix: avoid sending empty content entries in assistant messages by @zhuhui-in in https://github.com/HKUDS/nanobot/pull/748
* feat: Allow Agent to upload media files (voice, pictures, documents) to Telegram Channel by @DaryeDev in https://github.com/HKUDS/nanobot/pull/747
* fix(config): mcpServers env variables should not be converted to snake case by @fyhertz in https://github.com/HKUDS/nanobot/pull/766
* Enable Cron Tool on CLI Agent. by @DaryeDev in https://github.com/HKUDS/nanobot/pull/746
* feat: add Docker Compose support for easy deployment by @srajasimman in https://github.com/HKUDS/nanobot/pull/765
* feat: add custom provider with direct openai-compatible support by @Re-bin in https://github.com/HKUDS/nanobot/pull/786
* slack: Added replyInThread logic and custom react emoji in config by @hyudryu in https://github.com/HKUDS/nanobot/pull/784
* [ADD] GitHub copilot support by @jeroenev in https://github.com/HKUDS/nanobot/pull/312
* feat: add SiliconFlow provider support by @mtics in https://github.com/HKUDS/nanobot/pull/630
* Scope Session Storage to Workspace (with legacy fallback) by @kiplangatkorir in https://github.com/HKUDS/nanobot/pull/713
* feat: stream intermediate progress to user during tool execution by @Re-bin in https://github.com/HKUDS/nanobot/pull/802

## New Contributors
* @C-Li made their first contribution in https://github.com/HKUDS/nanobot/pull/593
* @lukemilby made their first contribution in https://github.com/HKUDS/nanobot/pull/583
* @wymcmh made their first contribution in https://github.com/HKUDS/nanobot/pull/560
*

# v0.1.3.post7 (released 2026-02-13)

🎉 Another big release from the 🐈 nanobot community — thanks to all contributors, especially our 8 new ones!

This release fixes a critical WhatsApp bridge security vulnerability, redesigns the memory system from the ground up (two plain files + grep, no RAG), and adds interleaved chain-of-thought for smarter multi-step reasoning. We also welcome MiniMax as a new provider, and the `/new` command is now unified across all platforms. Less code, more reliable — that's the nanobot way.

## What's Changed
* Support MoChat Channel, an agent-first IM platform by @tjb-tech in https://github.com/HKUDS/nanobot/pull/389
* fixed dingtalk exception. by @Mrart in https://github.com/HKUDS/nanobot/pull/439
* feat: add MiniMax provider support via LiteLLM by @tars90percent in https://github.com/HKUDS/nanobot/pull/307
* fix: cli input clean and display issue by @zcxixixi in https://github.com/HKUDS/nanobot/pull/488
* fix: pydantic deprecation configdict by @SergioSV96 in https://github.com/HKUDS/nanobot/pull/516
* feat: add interleaved chain-of-thought to agent loop by @Re-bin in https://github.com/HKUDS/nanobot/pull/538
* feat(cron): add 'at' parameter for one-time scheduled tasks by @3927o in https://github.com/HKUDS/nanobot/pull/533
* fix(subagent): add edit_file tool and time context to sub agent by @Re-bin in https://github.com/HKUDS/nanobot/pull/543
* feat: redesign memory system — two-layer architecture with grep-based retrieval by @Re-bin in https://github.com/HKUDS/nanobot/pull/565
* feat: add /new command by @Qinnnnnn in https://github.com/HKUDS/nanobot/pull/569
* Add max iterations info to fallback message by @3927o in https://github.com/HKUDS/nanobot/pull/567
* fix(security): bind WhatsApp bridge to localhost + optional token auth by @Re-bin in https://github.com/HKUDS/nanobot/pull/587

## New Contributors
* @tjb-tech made their first contribution in https://github.com/HKUDS/nanobot/pull/389
* @Mrart made their first contribution in https://github.com/HKUDS/nanobot/pull/439
* @tars90percent made their first contribution in https://github.com/HKUDS/nanobot/pull/307
* @SergioSV96 made their first contribution in https://github.com/HKUDS/nanobot/pull/516
* @Re-bin made their first contribution in https://github.com/HKUDS/nanobot/pull/538
* @3927o made their first contribution in https://github.com/HKUDS/nanobot/pull/533
* @Qinnnnnn made their first contribution in https://github.com/HKUDS/nanobot/pull/569

**Full Changelog**: https://github.com/HKUDS/nanobot/compare/v0.1.3.post6...v0.1.3.post7

# v0.1.3.post6 (released 2026-02-10)

🎉 The 🐈 nanobot community keeps growing — thanks to all the amazing contributors who made this release possible!

> [!TIP]
> This version contains a WhatsApp security vulnerability. **Please do not install this version**; install v0.1.3.post7 or higher instead.

This release brings more channels (DingTalk, Slack, Email, QQ), a smoother `nanobot agent` CLI experience, provider cleanup, and README polish. Want to see where we're heading? Check out our roadmap discussion: **From Lightweight Agent to Agent Kernel** — we'd love your feedback! 👉 https://github.com/HKUDS/nanobot/discussions/431

## What's Changed
* feat(channels): add DingTalk channel support by @tianrking in https://github.com/HKUDS/nanobot/pull/219
* Codex/fix cli input by @zcxixixi in https://github.com/HKUDS/nanobot/pull/326
* Drop unsupported parameters for providers. by @chaowu2009 in https://github.com/HKUDS/nanobot/pull/225
* Improve `nanobot agent` CLI chat rendering and input experience by @chris-alexander in https://github.com/HKUDS/nanobot/pull/360
* feat(email): add consent-gated IMAP/SMTP email channel by @zcxixixi in https://github.com/HKUDS/nanobot/pull/248
* feat(channels): add Slack Socket Mode support by @kamalakarrao in https://github.com/HKUDS/nanobot/pull/116
* Update README.md by @JakeRowe19 in https://github.com/HKUDS/nanobot/pull/381
* feat: Add QQ channel integration with botpy SDK by @yinwm in https://github.com/HKUDS/nanobot/pull/383

## New Contributors
* @tianrking made their first contribution in https://github.com/HKUDS/nanobot/pull/219
* @zcxixixi made their first contribution in https://github.com/HKUDS/nanobot/pull/326
* @chaowu2009 made their first contribution in https://github.com/HKUDS/nanobot/pull/225
* @chris-alexander made their first contribution in https://github.com/HKUDS/nanobot/pull/360
* @kamalakarrao made their first contribution in https://github.com/HKUDS/nanobot/pull/116
* @JakeRowe19 made their first contribution in https://github.com/HKUDS/nanobot/pull/381
* @yinwm made their first contribution in https://github.com/HKUDS/nanobot/pull/383

**Full Changelog**: https://github.com/HKUDS/nanobot/compare/v0.1.3.post5...v0.1.3.post6

# v0.1.3.post5 (released 2026-02-07)

🎉 The 🐈 **nanobot** community keeps growing — thanks to all the amazing contributors who made this release possible!

> [!TIP]
> This version contains a WhatsApp security vulnerability. **Please do not install this version**; install v0.1.3.post7 or higher instead.

🐈 nanobot now supports more LLMs (DashScope/Qwen, Moonshot/Kimi, DeepSeek), hangs out in more places (Discord, Feishu), and got a serious security checkup. 🐈 nanobot has you covered.

## What's Changed
* add feishu channel support by @huhu-tiger in https://github.com/HKUDS/nanobot/pull/84
* feat: add DeepSeek provider support by @kyya in https://github.com/HKUDS/nanobot/pull/38
* chore: change 'depoly' to 'deploy' by @vivganes in https://github.com/HKUDS/nanobot/pull/174
* feat: added runtime environment summary to system prompt by @DeeJ4yNg in https://github.com/HKUDS/nanobot/pull/107
* feat: discord support by @anunay999 in https://github.com/HKUDS/nanobot/pull/24
* feat: add Moonshot provider support by @Rheasilvia in https://github.com/HKUDS/nanobot/pull/202
* Security: Critical vulnerabilities found - Shell Injection, Path Traversal, and LiteLLM RCE by @kingassune in https://github.com/HKUDS/nanobot/pull/77
* fix: correct API key environment variable for vLLM mode by @popcell in https://github.com/HKUDS/nanobot/pull/42
* [Fix-204]: use correct ZAI_API_KEY for Zhipu/GLM models #204 by @wcmolin in https://github.com/HKUDS/nanobot/pull/205
* Feat: Add DashScope support for improved accessibility by @ZJUCQR in https://github.com/HKUDS/nanobot/pull/46
* Fixes Access Denied because only the LID was used. by @adrianhoehne in https://github.com/HKUDS/nanobot/pull/287
* feat: add telegram proxy support and add error handling for channel s… by @adieUkid in https://github.com/HKUDS/nanobot/pull/289

## New Contributors
* @huhu-tiger made their first contribution in https://github.com/HKUDS/nanobot/pull/84
* @kyya made their first contribution in https://github.com/HKUDS/nanobot/pull/38
* @vivganes made their first contribution in https://github.com/HKUDS/nanobot/pull/174
* @DeeJ4yNg made their first contribution in https://github.com/HKUDS/nanobot/pull/107
* @Rheasilvia made their first contribution in https://github.com/HKUDS/nanobot/pull/202
* @kingassune made their first contribution in https://github.com/HKUDS/nanobot/pull/77
* @popcell made their first contribution in https://github.com/HKUDS/nanobot/pull/42
* @wcmolin made their first contribution in https://github.com/HKUDS/nanobot/pull/205
* @ZJUCQR made their first contribution in https://github.com/HKUDS/nanobot/pull/46
* @adrianhoehne made their first contribution in https://github.com/HKUDS/nanobot/pull/287
* @adieUkid made their first contribution in https://github.com/HKUDS/nanobot/pull/289

**Full Changelog**: https://github.com/HKUDS/nanobot/compare/v0.1.3.post4...v0.1.3.post5

# v0.1.3.post4 (released 2026-02-04)

🎉 Thanks to all the amazing contributors who made this release possible!

> [!TIP]
> This version contains a WhatsApp security vulnerability. **Please do not install this version**; install v0.1.3.post7 or higher instead.

This release brings exciting new features including vLLM/local LLM support, Gemini & Zhipu & Bedrock providers, Telegram vision & voice support, Docker deployment, and important bug fixes. We're thrilled to see the 🐈 **nanobot** community growing!

## What's Changed
* feat: add vLLM/local LLM support by @ZhihaoZhang97 in https://github.com/HKUDS/nanobot/pull/4
* Change default gateway port to 18790 by @Neutralmilkzzz in https://github.com/HKUDS/nanobot/pull/8
* feat: add Zhipu API support and set glm-4.7-flash as default model by @SalimBinYousuf1 in https://github.com/HKUDS/nanobot/pull/3
* Resolve PR #3 conflicts and fix issue #10 (detailed command logs) by @SalimBinYousuf1 in https://github.com/HKUDS/nanobot/pull/15
* feat: add Gemini provider support by @anunay999 in https://github.com/HKUDS/nanobot/pull/9
* feat: add vision support for image recognition in Telegram by @Lyt060814 in https://github.com/HKUDS/nanobot/pull/12
* feat: Add uv as install method by @pve in https://github.com/HKUDS/nanobot/pull/14
* feat: add voice transcription support with groq (fixes #13) by @SalimBinYousuf1 in https://github.com/HKUDS/nanobot/pull/17
* feat: Dockerfile and instructions by @pve in https://github.com/HKUDS/nanobot/pull/18
* docs: update news date from 2025 to 2026 by @tlguszz1010 in https://github.com/HKUDS/nanobot/pull/43
* feat: add Amazon Bedrock support by @shaun0927 in https://github.com/HKUDS/nanobot/pull/21
* fix: add Telegram channel to `channels status` command by @WangCheng0116 in https://github.com/HKUDS/nanobot/pull/26
* feat: improve web_fetch URL validation and security by @WangCheng0116 in https://github.com/HKUDS/nanobot/pull/22
* fix: correct heartbeat token matching logic by @WangCheng0116 in https://github.com/HKUDS/nanobot/pull/23
* fix: status command now respects workspace from config by @WangCheng0116 in https://github.com/HKUDS/nanobot/pull/27
* Validate tool params and add tests by @kiplangatkorir in https://github.com/HKUDS/nanobot/pull/28
* Harden exec tool with safety guard by @kiplangatkorir in https://github.com/HKUDS/nanobot/pull/30
* fix: Use correct 'zai/' prefix for Zhipu AI models in LiteLLM by @pjperez in https://github.com/HKUDS/nanobot/pull/32

## New Contributors
* @ZhihaoZhang97 made their first contribution in https://github.com/HKUDS/nanobot/pull/4
* @Neutralmilkzzz made their first contribution in https://github.com/HKUDS/nanobot/pull/8
* @SalimBinYousuf1 made their first contribution in https://github.com/HKUDS/nanobot/pull/3
* @anunay999 made their first contribution in https://github.com/HKUDS/nanobot/pull/9
* @Lyt060814 made their first contribution in https://github.com/HKUDS/nanobot/pull/12
* @pve made their first contribution in https://github.com/HKUDS/nanobot/pull/14
* @tlguszz1010 made their first contribution in https://github.com/HKUDS/nanobot/pull/43
* @shaun0927 made their first contribution in https://github.com/HKUDS/nanobot/pull/21
* @WangCheng0116 made their first contribution in https://github.com/HKUDS/nanobot/pull/26
* @kiplangatkorir made their first contribution in https://github.com/HKUDS/nanobot/pull/28
* @pjperez made their first contribution in https://github.com/HKUDS/nanobot/pull/32

**Full Changelog**: https://github.com/HKUDS/nanobot/commits/v0.1.3.post4
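Two of the channel changes above deal with platform message-length limits: splitting Discord messages that exceed the 2000-character cap (#900) and avoiding Telegram's "Message is too long" error (#694). As a minimal sketch of the general technique (an illustration only, not nanobot's actual implementation), a splitter can prefer to break at the last newline inside each window so lines are less likely to be cut mid-way:

```python
def split_message(text: str, limit: int = 2000) -> list[str]:
    """Split text into chunks of at most `limit` characters.

    Prefers breaking at the last newline inside each window; falls back
    to a hard split at `limit` when no newline is available.
    """
    chunks = []
    while len(text) > limit:
        cut = text.rfind("\n", 0, limit)
        if cut <= 0:  # no usable newline in the window: hard-split
            cut = limit
        chunks.append(text[:cut])
        # Drop the newline we broke on so chunks don't start blank.
        text = text[cut:].lstrip("\n")
    if text:
        chunks.append(text)
    return chunks
```

A real channel implementation also has to avoid splitting inside code fences or other Markdown blocks, which is where most of the subtlety in such changes tends to live.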