[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-letta-ai--letta":3,"tool-letta-ai--letta":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",146793,2,"2026-04-08T23:32:35",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108111,"2026-04-08T11:23:26",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 
助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[35,15,13,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":76,"owner_twitter":77,"owner_website":78,"owner_url":79,"languages":80,"stars":114,"forks":115,"last_commit_at":116,"license":117,"difficulty_score":32,"env_os":118,"env_gpu":119,"env_ram":120,"env_deps":121,"category_tags":129,"github_topics":130,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":135,"updated_at":136,"faqs":137,"releases":166},5765,"letta-ai\u002Fletta","letta","Letta is the platform for building stateful agents: AI with advanced memory that can learn and self-improve over time.","Letta（前身为 MemGPT）是一个专为构建“有状态”智能体打造的开源平台，旨在赋予 AI 高级记忆能力，使其能够像人类一样在长期交互中持续学习与自我进化。传统大模型往往受限于上下文窗口，难以记住历史对话或随时间积累知识，而 Letta 通过独特的分层记忆架构解决了这一痛点，让智能体能够自主管理短期与长期记忆，从而在复杂任务中保持连贯性并不断优化表现。\n\n该平台非常适合开发者、AI 研究人员以及希望将持久化智能体集成到应用中的技术团队使用。用户既可以通过命令行工具在本地终端快速运行具备记忆功能的智能体，协助完成编程或系统任务；也可以利用提供的 Python 和 TypeScript SDK，通过 API 
将这种具备持续学习能力的智能体无缝嵌入到自己的软件产品中。Letta 支持模型无关的特性，允许灵活切换不同的底层大模型，并内置了丰富的技能库与子智能体机制，进一步扩展了智能体的功能边界。作为一个由全球百余位贡献者共同维护的开源项目，Letta 致力于降低构建自进化超级智能的门槛，为探索下一代 AI 应用提供了坚实的基础设施。","# Letta (formerly MemGPT)\n\nBuild AI with advanced memory that can learn and self-improve over time.\n\n* [Letta Code](https:\u002F\u002Fdocs.letta.com\u002Fletta-code): run agents locally in your terminal\n* [Letta API](https:\u002F\u002Fdocs.letta.com\u002Fquickstart\u002F): build agents into your applications\n\n## Get started in the CLI\n\nRequires [Node.js 18+](https:\u002F\u002Fnodejs.org\u002Fen\u002Fdownload)\n\n1. Install the [Letta Code](https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta-code) CLI tool: `npm install -g @letta-ai\u002Fletta-code`\n2. Run `letta` in your terminal to launch an agent with memory running on your local computer\n\nWhen running the CLI tool, your agent helps you code and do any task you can do on your computer.\n\nLetta Code supports [skills](https:\u002F\u002Fdocs.letta.com\u002Fletta-code\u002Fskills) and [subagents](https:\u002F\u002Fdocs.letta.com\u002Fletta-code\u002Fsubagents), and bundles pre-built skills\u002Fsubagents for advanced memory and continual learning. 
Letta is fully model-agnostic, though we recommend Opus 4.5 and GPT-5.2 for best performance (see our [model leaderboard](https:\u002F\u002Fleaderboard.letta.com\u002F) for our rankings).\n\n## Get started with the Letta API\n\nUse the Letta API to integrate stateful agents into your own applications.\nLetta has a full-featured agents API, and a Python and TypeScript SDK (view our [API reference](https:\u002F\u002Fdocs.letta.com\u002Fapi)).\n\n### Installation\n\nTypeScript \u002F Node.js:\n```bash\nnpm install @letta-ai\u002Fletta-client\n```\n\nPython:\n```bash\npip install letta-client\n```\n\n### Hello World example\n\nBelow is a quick example of creating a stateful agent and sending it a message (requires a [Letta API key](https:\u002F\u002Fapp.letta.com)).\nSee the full [quickstart guide](https:\u002F\u002Fdocs.letta.com\u002Fquickstart) for complete documentation.\n\nTypeScript:\n```typescript\nimport Letta from \"@letta-ai\u002Fletta-client\";\n\nconst client = new Letta({ apiKey: process.env.LETTA_API_KEY });\n\n\u002F\u002F Create your agent\nconst agentState = await client.agents.create({\n  model: \"openai\u002Fgpt-5.2\",\n  memory_blocks: [\n    {\n      label: \"human\",\n      value:\n        \"Name: Timber. Status: dog. Occupation: building Letta, infrastructure to democratize self-improving superintelligence\",\n    },\n    {\n      label: \"persona\",\n      value:\n        \"I am a self-improving superintelligence. 
Timber is my best friend and collaborator.\",\n    },\n  ],\n  tools: [\"web_search\", \"fetch_webpage\"],\n});\n\nconsole.log(\"Agent created with ID:\", agentState.id);\n\n\u002F\u002F Send your agent a message\nconst response = await client.agents.messages.create(agentState.id, {\n  input: \"What do you know about me?\",\n});\n\nfor (const message of response.messages) {\n  console.log(message);\n}\n```\n\nPython:\n```python\nfrom letta_client import Letta\nimport os\n\nclient = Letta(api_key=os.getenv(\"LETTA_API_KEY\"))\n\n# Create your agent\nagent_state = client.agents.create(\n    model=\"openai\u002Fgpt-5.2\",\n    memory_blocks=[\n        {\n          \"label\": \"human\",\n          \"value\": \"Name: Timber. Status: dog. Occupation: building Letta, infrastructure to democratize self-improving superintelligence\"\n        },\n        {\n          \"label\": \"persona\",\n          \"value\": \"I am a self-improving superintelligence. Timber is my best friend and collaborator.\"\n        }\n    ],\n    tools=[\"web_search\", \"fetch_webpage\"]\n)\n\nprint(f\"Agent created with ID: {agent_state.id}\")\n\n# Send your agent a message\nresponse = client.agents.messages.create(\n    agent_id=agent_state.id,\n    input=\"What do you know about me?\"\n)\n\nfor message in response.messages:\n    print(message)\n```\n\n## Contributing\n\nLetta is an open source project built by over a hundred contributors from around the world. 
There are many ways to get involved in the Letta OSS project!\n\n* [**Join the Discord**](https:\u002F\u002Fdiscord.gg\u002Fletta): Chat with the Letta devs and other AI developers.\n* [**Chat on our forum**](https:\u002F\u002Fforum.letta.com\u002F): If you're not into Discord, check out our developer forum.\n* **Follow our socials**: [Twitter\u002FX](https:\u002F\u002Ftwitter.com\u002FLetta_AI), [LinkedIn](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fletta), [YouTube](https:\u002F\u002Fwww.youtube.com\u002F@letta-ai)\n\n---\n\n***Legal notices**: By using Letta and related Letta services (such as the Letta endpoint or hosted service), you are agreeing to our [privacy policy](https:\u002F\u002Fwww.letta.com\u002Fprivacy-policy) and [terms of service](https:\u002F\u002Fwww.letta.com\u002Fterms-of-service).*\n","# Letta（前身为 MemGPT）\n\n构建具备先进记忆功能的 AI，能够随着时间推移不断学习和自我提升。\n\n* [Letta Code](https:\u002F\u002Fdocs.letta.com\u002Fletta-code)：在您的终端中本地运行智能体\n* [Letta API](https:\u002F\u002Fdocs.letta.com\u002Fquickstart\u002F)：将智能体集成到您的应用程序中\n\n## 在命令行界面快速入门\n\n需要 [Node.js 18+](https:\u002F\u002Fnodejs.org\u002Fen\u002Fdownload)\n\n1. 安装 [Letta Code](https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta-code) 命令行工具：`npm install -g @letta-ai\u002Fletta-code`\n2. 
在终端中运行 `letta`，即可启动一个在本地计算机上运行、带有记忆功能的智能体\n\n运行该命令行工具时，您的智能体会帮助您编写代码，并完成您在计算机上可以执行的任何任务。\n\nLetta Code 支持 [技能](https:\u002F\u002Fdocs.letta.com\u002Fletta-code\u002Fskills) 和 [子智能体](https:\u002F\u002Fdocs.letta.com\u002Fletta-code\u002Fsubagents)，并预置了用于高级记忆和持续学习的技能与子智能体。Letta 完全不依赖特定模型，不过我们推荐使用 Opus 4.5 和 GPT-5.2 以获得最佳性能（请参阅我们的 [模型排行榜](https:\u002F\u002Fleaderboard.letta.com\u002F) 以了解排名）。\n\n## 使用 Letta API 快速入门\n\n通过 Letta API，您可以将具有状态记忆的智能体集成到您自己的应用程序中。\nLetta 提供功能齐全的智能体 API，以及 Python 和 TypeScript 的 SDK（请查看我们的 [API 参考文档](https:\u002F\u002Fdocs.letta.com\u002Fapi)）。\n\n### 安装\n\nTypeScript \u002F Node.js：\n```bash\nnpm install @letta-ai\u002Fletta-client\n```\n\nPython：\n```bash\npip install letta-client\n```\n\n### “Hello World” 示例\n\n以下是一个快速示例，展示如何创建一个有状态的智能体并向其发送消息（需要 [Letta API 密钥](https:\u002F\u002Fapp.letta.com)）。完整的文档请参阅我们的 [快速入门指南](https:\u002F\u002Fdocs.letta.com\u002Fquickstart)。\n\nTypeScript：\n```typescript\nimport Letta from \"@letta-ai\u002Fletta-client\";\n\nconst client = new Letta({ apiKey: process.env.LETTA_API_KEY });\n\n\u002F\u002F 创建您的智能体\nconst agentState = await client.agents.create({\n  model: \"openai\u002Fgpt-5.2\",\n  memory_blocks: [\n    {\n      label: \"human\",\n      value:\n        \"姓名：Timber。身份：狗。职业：构建 Letta，即旨在普及自我提升型超级智能的基础架构\",\n    },\n    {\n      label: \"persona\",\n      value:\n        \"我是一名自我提升的超级智能。Timber 是我的挚友兼合作伙伴。\",\n    },\n  ],\n  tools: [\"web_search\", \"fetch_webpage\"],\n});\n\nconsole.log(\"智能体已创建，ID 为：\", agentState.id);\n\n\u002F\u002F 向您的智能体发送消息\nconst response = await client.agents.messages.create(agentState.id, {\n  input: \"你对我了解多少？\",\n});\n\nfor (const message of response.messages) {\n  console.log(message);\n}\n```\n\nPython：\n```python\nfrom letta_client import Letta\nimport os\n\nclient = Letta(api_key=os.getenv(\"LETTA_API_KEY\"))\n\n# 创建您的智能体\nagent_state = client.agents.create(\n    model=\"openai\u002Fgpt-5.2\",\n    memory_blocks=[\n        {\n          \"label\": \"human\",\n          
\"value\": \"姓名：Timber。身份：狗。职业：构建 Letta，即旨在普及自我提升型超级智能的基础架构\"\n        },\n        {\n          \"label\": \"persona\",\n          \"value\": \"我是一名自我提升的超级智能。Timber 是我的挚友兼合作伙伴。\"\n        }\n    ],\n    tools=[\"web_search\", \"fetch_webpage\"]\n)\n\nprint(f\"智能体已创建，ID 为：{agent_state.id}\")\n\n# 向您的智能体发送消息\nresponse = client.agents.messages.create(\n    agent_id=agent_state.id,\n    input=\"你对我了解多少？\"\n)\n\nfor message in response.messages:\n    print(message)\n```\n\n## 贡献\n\nLetta 是一个开源项目，由来自世界各地的一百多位贡献者共同打造。您可以通过多种方式参与 Letta 开源社区！\n\n* [**加入 Discord 社区**](https:\u002F\u002Fdiscord.gg\u002Fletta)：与 Letta 开发团队及其他 AI 开发者交流。\n* [**访问我们的论坛**](https:\u002F\u002Fforum.letta.com\u002F)：如果您不喜欢 Discord，也可以前往我们的开发者论坛。\n* **关注我们的社交媒体**：[Twitter\u002FX](https:\u002F\u002Ftwitter.com\u002FLetta_AI)、[LinkedIn](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fletta)、[YouTube](https:\u002F\u002Fwww.youtube.com\u002F@letta-ai)\n\n---\n\n***法律声明**：使用 Letta 及相关服务（如 Letta 端点或托管服务）即表示您同意我们的 [隐私政策](https:\u002F\u002Fwww.letta.com\u002Fprivacy-policy) 和 [服务条款](https:\u002F\u002Fwww.letta.com\u002Fterms-of-service)。*","# Letta 快速上手指南\n\nLetta（原名 MemGPT）是一款具备高级记忆能力的 AI 代理框架，支持智能体在长期交互中学习并自我进化。你可以选择在本地终端运行，或通过 API 集成到自己的应用中。\n\n## 环境准备\n\n*   **系统要求**：支持 macOS、Linux 或 Windows。\n*   **前置依赖**：\n    *   若使用 CLI 工具：需安装 [Node.js 18+](https:\u002F\u002Fnodejs.org\u002Fen\u002Fdownload)。\n    *   若使用 API SDK：需安装 Python 3.8+ 或 Node.js 环境。\n    *   **API Key**：使用 API 功能前，请在 [Letta 官网](https:\u002F\u002Fapp.letta.com) 获取 API Key。\n*   **模型推荐**：Letta 模型无关，但官方推荐使用 `Opus 4.5` 或 `GPT-5.2` 以获得最佳性能。\n\n## 安装步骤\n\n### 方式一：命令行工具 (Letta Code)\n适用于在本地终端直接运行具备记忆功能的智能体。\n\n```bash\nnpm install -g @letta-ai\u002Fletta-code\n```\n\n### 方式二：SDK 集成 (Letta API)\n适用于将状态化智能体集成到你的应用程序中。\n\n**TypeScript \u002F Node.js:**\n```bash\nnpm install @letta-ai\u002Fletta-client\n```\n\n**Python:**\n```bash\npip install letta-client\n```\n\n> **提示**：国内开发者如遇网络下载缓慢，可配置相应的 npm 或 pip 国内镜像源进行加速。\n\n## 基本使用\n\n### 1. 
使用 CLI 运行本地智能体\n安装完成后，直接在终端输入以下命令即可启动一个具备本地记忆的智能体，它将协助你完成编码或其他电脑任务：\n\n```bash\nletta\n```\n\n### 2. 使用 API 创建智能体 (Hello World)\n以下示例展示如何创建一个拥有持久记忆的智能体并与其对话。请确保已设置环境变量 `LETTA_API_KEY`。\n\n**TypeScript 示例:**\n\n```typescript\nimport Letta from \"@letta-ai\u002Fletta-client\";\n\nconst client = new Letta({ apiKey: process.env.LETTA_API_KEY });\n\n\u002F\u002F 创建你的智能体\nconst agentState = await client.agents.create({\n  model: \"openai\u002Fgpt-5.2\",\n  memory_blocks: [\n    {\n      label: \"human\",\n      value:\n        \"Name: Timber. Status: dog. Occupation: building Letta, infrastructure to democratize self-improving superintelligence\",\n    },\n    {\n      label: \"persona\",\n      value:\n        \"I am a self-improving superintelligence. Timber is my best friend and collaborator.\",\n    },\n  ],\n  tools: [\"web_search\", \"fetch_webpage\"],\n});\n\nconsole.log(\"Agent created with ID:\", agentState.id);\n\n\u002F\u002F 发送消息给智能体\nconst response = await client.agents.messages.create(agentState.id, {\n  input: \"What do you know about me?\",\n});\n\nfor (const message of response.messages) {\n  console.log(message);\n}\n```\n\n**Python 示例:**\n\n```python\nfrom letta_client import Letta\nimport os\n\nclient = Letta(api_key=os.getenv(\"LETTA_API_KEY\"))\n\n# 创建你的智能体\nagent_state = client.agents.create(\n    model=\"openai\u002Fgpt-5.2\",\n    memory_blocks=[\n      {\n        \"label\": \"human\",\n        \"value\": \"Name: Timber. Status: dog. Occupation: building Letta, infrastructure to democratize self-improving superintelligence\"\n      },\n      {\n        \"label\": \"persona\",\n        \"value\": \"I am a self-improving superintelligence. 
Timber is my best friend and collaborator.\"\n      }\n    ],\n    tools=[\"web_search\", \"fetch_webpage\"]\n)\n\nprint(f\"Agent created with ID: {agent_state.id}\")\n\n# 发送消息给智能体\nresponse = client.agents.messages.create(\n    agent_id=agent_state.id,\n    input=\"What do you know about me?\"\n)\n\nfor message in response.messages:\n    print(message)\n```","一位全栈开发者正在构建一个需要长期维护用户偏好并持续优化回答质量的个性化代码助手应用。\n\n### 没有 letta 时\n- **记忆断层严重**：每次对话结束后，AI 无法记住用户之前的代码风格偏好或项目背景，用户必须反复重复上下文信息。\n- **难以实现自我进化**：若想让 AI 从历史错误中学习改进，开发者需手动构建复杂的向量数据库和检索逻辑，开发周期长达数周。\n- **状态管理混乱**：在多轮交互中，维护会话状态（Stateful）极其困难，容易导致上下文窗口溢出或关键信息丢失。\n- **集成成本高昂**：将具备长期记忆的 Agent 嵌入现有应用时，需要编写大量样板代码来处理记忆块的读写与更新。\n\n### 使用 letta 后\n- **原生高级记忆**：letta 内置的分层记忆架构自动保存用户偏好（如“喜欢 TypeScript”、“排斥特定库”），无需用户重复指令即可精准响应。\n- **自主持续学习**：letta 支持 Agent 在运行中自动反思并更新自身记忆块，能从过往的调试失败中吸取教训，实现真正的自我迭代。\n- **稳定的有状态交互**：通过 letta API 创建的 Agent 天然具备状态保持能力，轻松处理超长多轮对话，确保上下文连贯不丢失。\n- **极速落地集成**：借助 Python 或 TypeScript SDK，开发者仅需几行代码即可定义记忆块和工具，将具备长期记忆的 Agent 快速嵌入应用。\n\nletta 让开发者能够以极低的成本构建出拥有“长期记忆”且能随时间自我进化的智能体，彻底解决了传统 AI 应用“聊完即忘”的核心痛点。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fletta-ai_letta_14e2556c.png","letta-ai","Letta","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fletta-ai_4021f318.png","The platform for stateful AI 
agents",null,"letta_ai","https:\u002F\u002Fletta.com","https:\u002F\u002Fgithub.com\u002Fletta-ai",[81,85,89,92,95,98,101,105,108,111],{"name":82,"color":83,"percentage":84},"Python","#3572A5",99.5,{"name":86,"color":87,"percentage":88},"Go","#00ADD8",0.1,{"name":90,"color":91,"percentage":88},"Shell","#89e051",{"name":93,"color":94,"percentage":88},"C++","#f34b7d",{"name":96,"color":97,"percentage":88},"Java","#b07219",{"name":99,"color":100,"percentage":88},"Jinja","#a52a22",{"name":102,"color":103,"percentage":104},"JavaScript","#f1e05a",0,{"name":106,"color":107,"percentage":104},"Dockerfile","#384d54",{"name":109,"color":110,"percentage":104},"TypeScript","#3178c6",{"name":112,"color":113,"percentage":104},"Mako","#7e858d",21942,2319,"2026-04-08T20:58:53","Apache-2.0","未说明 (基于 Node.js 和 Python，通常支持 Linux, macOS, Windows)","未说明 (工具为模型无关架构，依赖外部 API 或本地模型，README 未指定具体 GPU 需求)","未说明",{"notes":122,"python":123,"dependencies":124},"该工具完全模型无关，推荐使用 Opus 4.5 和 GPT-5.2 以获得最佳性能。提供两种使用方式：1. 本地终端运行 (Letta Code)，需安装 Node.js 18+；2. API 集成 (Letta API)，支持 Python 和 TypeScript SDK，需要 Letta API Key。","未说明 (Python SDK 可用，但未指定具体版本)",[125,126,127,128],"Node.js 18+","@letta-ai\u002Fletta-code (CLI)","@letta-ai\u002Fletta-client (TypeScript)","letta-client (Python)",[15,13,35,14],[131,132,133,134],"llm","llm-agent","ai","ai-agents","2026-03-27T02:49:30.150509","2026-04-09T10:27:00.280486",[138,143,147,152,157,162],{"id":139,"question_zh":140,"answer_zh":141,"source_url":142},26152,"是否有易于使用的 MemGPT API？","目前可以通过直接调用内部函数来实现类似 API 的功能。例如，`\u002Fattach` 端点的实现并不复杂，可以参考源代码进行复刻：https:\u002F\u002Fgithub.com\u002Fcpacker\u002FMemGPT\u002Fblob\u002Fc35e8720739c9b1188e9f4044516a2d5af3a1fb8\u002Fmemgpt\u002Fmain.py#L453。虽然官方可能尚未提供完整的独立 API 文件夹，但开发者可以基于现有的 Python  standalone 文件构建自己的接口，用于生成输出、使用代理、聊天归档记忆等功能。","https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fissues\u002F480",{"id":144,"question_zh":145,"answer_zh":146,"source_url":142},26153,"遇到 'LLM is explicitly disabled. 
Using MockLLM' 警告信息怎么办？","可以安全地忽略该消息。这是由依赖项 LlamaIndex 中的 `LocalStateManager` 触发的警告，并不表示实际用于消息推理的 LLM 被禁用。只要您的 OpenAI 账户余额充足且配置正确，系统即可正常工作。该警告源自 `llama_index\\llms\\utils.py` (第 50 行)，在使用 MemGPT CLI 时通常不会显示，但在某些代码路径下会出现。",{"id":148,"question_zh":149,"answer_zh":150,"source_url":151},26154,"遇到 \"ValueError: 'system' not found in provided AgentState\" 错误如何解决？","该问题通常由版本兼容性引起。用户反馈表明，卸载当前版本（如 0.3.3）并升级到最新版本（如 0.3.4 或更高）可以解决此问题。在 macOS 和 Linux 系统上均有验证成功。请执行以下操作：\n1. 卸载旧版本：`pip uninstall memgpt`\n2. 安装新版本：`pip install memgpt==0.3.4`（或更新版本）\n升级后代理应能正常运行。","https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fissues\u002F966",{"id":153,"question_zh":154,"answer_zh":155,"source_url":156},26155,"为什么频繁出现 \"API call didn't return a message\" 错误且无用户响应？","该问题出现在 v5.5 版本中使用 OpenAI GPT-4o 时。原因是模型返回的 `message.choices` 中缺少预期的 `tool_calls`（即 `send_message` 函数调用），导致系统无法提取有效回复。尽管此前有相关修复，但该问题仍可能复现。建议检查日志确认 `finish_reason` 是否为 `tool_calls`，若为其他值则说明模型未按预期格式返回。临时解决方案包括重试请求或调整提示词以引导模型正确调用工具。","https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fissues\u002F2255",{"id":158,"question_zh":159,"answer_zh":160,"source_url":161},26156,"Archival Memory（归档记忆）功能不生效怎么办？","在升级到 MemGPT 1.8 后，部分用户发现归档记忆未被正确使用，表现为对话几页后重复相同内容。此问题可能与本地模型（如 text-generation-webui + dolphin-mistral）的配置有关。建议检查以下几点：\n1. 确保使用的是兼容的模型版本；\n2. 验证 `pymemgpt` 是否正确安装并与主程序版本匹配；\n3. 查看是否有相关报错日志指示记忆检索失败；\n4. 
尝试回退到稳定版本或等待官方修复。由于该问题涉及底层记忆机制，需结合具体环境调试。","https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fissues\u002F381",{"id":163,"question_zh":164,"answer_zh":165,"source_url":142},26157,"如何通过 API 附加数据到代理？","虽然文档中未明确说明独立的 API 接口，但可以通过复用源码中的 `\u002Fattach` 逻辑来实现。参考实现位于：https:\u002F\u002Fgithub.com\u002Fcpacker\u002FMemGPT\u002Fblob\u002Fc35e8720739c9b1188e9f4044516a2d5af3a1fb8\u002Fmemgpt\u002Fmain.py#L453。开发者可将其封装为独立的 Python 函数或 Flask\u002FFastAPI 路由，从而支持通过 HTTP 请求向指定代理附加文件数据。",[167,172,177,182,187,192,197,202,207,212,217,222,227,231,236,240,245,250,255,260],{"id":168,"version":169,"summary_zh":170,"released_at":171},163548,"0.16.7","# Letta 服务器 0.16.7 发行说明\n\n**自 0.16.6 以来共有 173 次提交** | 发布日期：2026年3月31日\n\n## 亮点\n\n**自托管用户：这是一次重大升级。** 默认的全局上下文窗口从 32k 提升至 128k，修复了上下文窗口重置的 bug（LET-7991），并且对压缩功能进行了全面重构。如果你之前每次 ADE 加载后都需要运行 `curl` 命令来修补配置，那么现在大部分麻烦都将迎刃而解。\n\n## 破坏性变更\n\n- **不再强制执行块限制** —— 块限制验证已被弃用，并从 Git 内存同步路径中移除（#9977、#9983）。现在块可以自由增长。如果你一直依赖限制来控制每轮的成本，那么需要通过其他方式来管理块大小。\n\n## 上下文窗口与压缩（21 处修复）\n\n这是修复数量最多的类别，自托管用户受到的影响最大。\n\n- **全局上下文窗口默认值从 32k 提升至 128k** (#9993) —— 自托管服务器不再为未知模型默认使用 32k。\n- **会话模型覆盖时保留上下文窗口**（LET-7991、#9986）—— 修复了非默认会话回退到 32k 的问题。\n- **压缩溢出修复** (#9897) —— 解决了双重压缩和失控压缩循环的问题。\n- **代理模型变更时重置压缩模型** (#10031) —— 切换代理模型后不会再遗留旧的摘要模型。\n- **摘要提示优化** (#10314) —— 现在在摘要过程中会记住计划文件、GitHub PR 以及其他结构化内容。\n- **BYOK 摘要功能修复** (#10152) —— 摘要提供者回退机制不再针对 BYOK 请求触发。\n- **更好的错误提示** —— 当上下文窗口超出限制时，现在会显示描述性消息（#10135、#10171），并在压缩过程中对系统提示大小发出警告（#10058）。\n\n## Gemini（2 处修复）\n\n- **无推理函数调用时保留 thought_signature**（LET-8166、#10237）—— 修复了阻止所有 Gemini 2.5+\u002F3.x 多轮工具调用的 bug。\n- **流式接口崩溃修复** (#10306) —— `SimpleGeminiStreamingInterface` 构造函数中现在会正确初始化 `self.model`（LET-8129）。\n\n## 内存与 memfs（10 处修复，4 项新功能）\n\n- **available_skills 块不再重复出现在系统提示中** (#10006、#10011、#10021) —— 针对技能块重复膨胀上下文的三个独立修复（LET-8013）。\n- **Git 内存同步延迟至流结束** (#9951) —— 减少了中途同步失败的情况。\n- **启用 Git 内存的代理创建时重新编译系统提示** (#9950) —— 新启用 Git 的代理不再以空的已编译上下文启动。\n- **投影风格的 Git 内存渲染** (#10211) —— 为系统提示中的 memfs 内容提供了新的渲染方式。\n- **通过 API 
手动编辑块会触发重新编译** (#9775) —— 使用 API 更新块后不再出现过时的上下文。\n- **会话重新编译端点** (#9848) —— 现已提供 `POST \u002Fv1\u002Fconversations\u002F{id}\u002Frecompile` 接口。\n\n## 会话（7 项新功能）\n\n- **会话分叉** (#10234、#1026","2026-03-31T19:28:24",{"id":173,"version":174,"summary_zh":175,"released_at":176},163549,"0.16.6","## 亮点\n- 扩展了 Conversations API 对 **默认对话\u002F代理直接模式** 的支持。\n- 新建对话现在会在创建时初始化一个 **已编译的系统消息**。\n- 修复了 `model_settings.max_output_tokens` 的默认行为，使其在未显式设置的情况下，**不会静默覆盖** 已有的 `max_tokens`。\n\n## Conversations API 更新\n- 在所有对话相关端点（发送\u002F列出\u002F取消\u002F压缩\u002F流式获取）中，新增了对 `conversation_id=\"default\"` + `agent_id` 的支持。\n- 保持了对 `conversation_id=agent-*`（已弃用路径）的向后兼容性。\n- 在代理直接流程中增加了锁键处理，以避免并发执行冲突。\n\n## 对话\u002F系统消息行为\n- 创建对话时，系统会立即编译并持久化系统消息。\n- 这样可以在对话开始时捕获当前的记忆状态，并消除首次消息时机带来的边缘情况。\n\n## 模型\u002F配置更新\n- 新增对以下模型的支持：\n  - `gpt-5.3-codex`\n  - `gpt-5.3-chat-latest`\n- 更新了默认值：\n  - 上下文窗口默认值：**32k → 128k**\n  - `CORE_MEMORY_BLOCK_CHAR_LIMIT`：**20k → 100k**\n- Anthropic 模型设置现在允许在支持的情况下使用 `effort=\"max\"`。\n- Gemini 请求超时默认值提升至 **600秒**。\n\n## 记忆 \u002F memfs 更新\n- 基于 Git 的记忆前端元数据不再输出 `limit` 字段（合并时将移除旧版的 `limit` 键）。\n- 技能同步现在仅将 `skills\u002F{name}\u002FSKILL.md` 映射到 `skills\u002F{name}` 块标签。\n- `skills\u002F` 目录下的其他 Markdown 文件将被有意忽略，不参与块同步。\n- 记忆文件系统的渲染现在会包含非 `system\u002F` 文件的描述，并简化技能显示。\n\n## 可靠性和兼容性修复\n- 为 Anthropic 流式响应中的空响应添加了显式的 `LLMEmptyResponseError` 处理。\n- 通过移除不支持的推理字段，提升了与 Fireworks 的兼容性。\n- 通过将 `max_completion_tokens` 映射为 `max_tokens`，提升了与 Z.ai 的兼容性。\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fcompare\u002F0.16.5...0.16.6","2026-03-04T03:14:10",{"id":178,"version":179,"summary_zh":180,"released_at":181},163550,"0.16.5","## 变更内容\n* 杂项：由 @carenthomas 在 https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fpull\u002F3202 中将版本升级至 0.16.5\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fcompare\u002F0.16.4...0.16.5","2026-02-24T19:02:50",{"id":183,"version":184,"summary_zh":185,"released_at":186},163551,"0.16.4","## 变更内容\n* 
修复：@cpacker 在 https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fpull\u002F3155 中更新了 GitHub 模板\n* 杂项：@sarahwooders 在 https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fpull\u002F3158 中发布了 0.16.3 版本\n* 杂项：@carenthomas 在 https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fpull\u002F3168 中将版本号提升至 v0.16.4\n\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fcompare\u002F0.16.2...0.16.4","2026-01-29T20:50:40",{"id":188,"version":189,"summary_zh":190,"released_at":191},163552,"0.16.2","## 变更内容\n* 文档：@cpacker 在 https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fpull\u002F3110 中更新了 README.md\n* @neversettle17-101 在 https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fpull\u002F3123 中更新了 contributing.md，修正了本地搭建步骤\n* 杂项：@carenthomas 在 https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fpull\u002F3140 中将版本号升级至 0.16.2\n\n## 新贡献者\n* @neversettle17-101 在 https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fpull\u002F3123 中完成了首次贡献\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fcompare\u002F0.16.1...0.16.2","2026-01-12T19:04:44",{"id":193,"version":194,"summary_zh":195,"released_at":196},163553,"0.16.1","## 变更内容\n* 由 @SootyOwl 在 https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fpull\u002F3097 中修正了 LLMConfig 中 openai-proxy 的提供商名称\n* 杂项：升级至 v0.16.1，由 @carenthomas 在 https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fpull\u002F3107 中完成\n\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fcompare\u002F0.16.0...0.16.1","2025-12-18T01:37:57",{"id":198,"version":199,"summary_zh":200,"released_at":201},163554,"0.16.0","## 变更内容\n* 更新了 README，添加了 @Godofnothing 在 https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fpull\u002F3083 中的实际参数说明。\n* 修复：由 @SootyOwl 在 https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fpull\u002F3061 中实现了架构特定的 OTEL 安装逻辑。\n* 杂项：由 @carenthomas 在 https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fpull\u002F3095 
中将版本号升级至 v0.16.0。\n\n## 新贡献者\n* @Godofnothing 在 https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fpull\u002F3083 中完成了首次贡献。\n* @SootyOwl 在 https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fpull\u002F3061 中完成了首次贡献。\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fcompare\u002F0.15.1...0.16.0","2025-12-15T20:12:52",{"id":203,"version":204,"summary_zh":205,"released_at":206},163555,"0.15.1","## 变更内容\n* 杂项：由 @carenthomas 在 https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fpull\u002F3082 中将版本升级至 0.15.1\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fcompare\u002F0.15.0...0.15.1","2025-11-26T22:46:17",{"id":208,"version":209,"summary_zh":210,"released_at":211},163556,"0.15.0","## 变更内容\n* 由 @runtimeBob 在 https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fpull\u002F3043 中为 grok-4 模型添加了上下文窗口。\n* 杂项：由 @carenthomas 在 https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fpull\u002F3077 中将版本号更新至 0.15.0。\n\n## 新贡献者\n* @runtimeBob 在 https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fpull\u002F3043 中完成了首次贡献。\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fcompare\u002F0.14.0...0.15.0","2025-11-25T03:16:16",{"id":213,"version":214,"summary_zh":215,"released_at":216},163557,"0.14.0","## 变更内容\n* 杂项：由 @carenthomas 在 https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fpull\u002F3067 中将版本升级至 0.14.0\n\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fcompare\u002F0.13.0...0.14.0","2025-11-14T00:02:06",{"id":218,"version":219,"summary_zh":220,"released_at":221},163558,"0.13.0","## What's Changed\r\n* chore: clean up docs by @carenthomas in https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fpull\u002F3031\r\n* feat: add haiku 4.5 as reasoning model by @AriWebb in https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fpull\u002F3038\r\n* chore: bump version 0.13.0 by @carenthomas in 
https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fpull\u002F3050\r\n\r\n## New Contributors\r\n* @AriWebb made their first contribution in https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fpull\u002F3038\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fcompare\u002F0.12.1...0.13.0","2025-10-24T22:29:49",{"id":223,"version":224,"summary_zh":225,"released_at":226},163559,"0.12.1","## Major Features\r\n\r\n### New Agent Architecture: `letta_v1_agent`\r\n\r\nThe new recommended agent architecture with significant improvements over the legacy agent system.\r\nWhat's Different:\r\n\r\n* No `send_message` tool required\r\n* Works with any chat model, including non-tool-calling models\r\n* No heartbeat system\r\n* Simpler base system prompt - agentic control loop understanding is baked into modern LLMs\r\n* Follows standard tool calling patterns (auto mode) for broader compatibility\r\n\r\nProvider Support:\r\n\r\n* Compatible with all inference providers (OpenRouter, Azure, Together, Ollama, etc.).\r\n* Works with non-tool-calling models.\r\n* Supports OpenAI's [Responses API](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fresponses) for drastically improved performance with GPT-5 models.\r\n\r\nTrade-offs:\r\n\r\n* No heartbeats: Agents won't independently trigger repeated execution. If you need sleep-time compute or periodic processing, implement this through your own prompting or scheduling.\r\n* Tool rules on messaging: Cannot apply tool rules to agent messaging, i.e. you cannot require a particular tool to be followed by an assistant message.\r\n* Reasoning visibility: Non-reasoning models (GPT-4.1, GPT-4o-mini) will no longer generate explicit reasoning output.\r\n* Reasoning control: Less control over reasoning tokens, which are typically encrypted by providers and cannot be passed between different providers.\r\n\r\n**Recommendation**: Create all new agents as `letta_v1_agent`. 
The expanded provider compatibility and simpler architecture make this the best choice for most use cases. Use legacy agents only if you specifically need heartbeats or tool rules on messaging.\r\n\r\n### Human-in-the-Loop (HITL)\r\n\r\nTools can now require human approval before execution. Set approval requirements via API or in the ADE for greater control over agent actions.\r\n📖 [Documentation](https:\u002F\u002Fdocs.letta.com\u002Fguides\u002Fagents\u002Fhuman-in-the-loop)\r\n\r\n### Parallel Tool Calling\r\n\r\nAgents now execute multiple tool calls simultaneously when supported by the inference provider. Each tool runs in its own sandbox for true parallel execution. See the [Claude](https:\u002F\u002Fdocs.claude.com\u002Fen\u002Fdocs\u002Fagents-and-tools\u002Ftool-use\u002Fimplement-tool-use#parallel-tool-use) documentation for examples on parallel tool calling.\r\n\r\n### Runs API\r\n\r\nNew tracking system providing substantially improved observability and debugging capabilities. 
Documentation coming soon.\r\n\r\n### Enhanced Archival Memory (Letta Cloud only)\r\n\r\n* **Hybrid search**: Combines full-text and semantic search\r\n* **DateTime filtering**: Query memories by time range\r\n* **Search API endpoint**: [Documentation](https:\u002F\u002Fdocs.letta.com\u002Fapi-reference\u002Fagents\u002Fpassages\u002Fsearch)\r\n\r\n### Improved Pagination\r\n\r\nCursor-based pagination now available across many endpoints for handling large result sets.\r\n\r\n## New Tools\r\n\r\n### Memory Omni-Tool\r\n\r\nUnified memory interface for more intuitive agent memory management.\r\n\r\n📖 [Blog post](https:\u002F\u002Fwww.letta.com\u002Fblog\u002Fintroducing-sonnet-4-5-and-the-memory-omni-tool-in-letta) | [YouTube demo](https:\u002F\u002Fyoutu.be\u002F0nfNDrRKSuU)\r\n\r\n### `fetch_webpage` Tool\r\n\r\nUtility tool for retrieving LLM-friendly webpage content.\r\n\r\n## Agent Configuration\r\n\r\n### Templates & Agentfiles\r\n\r\n* **Template updates**: Templates can now be [updated via agentfiles](https:\u002F\u002Fdocs.letta.com\u002Fapi-reference\u002Ftemplates\u002Fupdatecurrenttemplatefromagentfile)\r\n* **Agentfile v2 schema**: Now supports groups, folders, etc.\r\n\r\n## Breaking Changes\r\n\r\n### Deprecated APIs\r\n\r\n* **`get_folder_by_name`**: Use `client.folders.list(name=...)` instead\r\n* **`sources` routes**: All routes renamed to `folders`\r\n\r\n\r\n## What's Changed\r\n* fix: summarization_agent unknown attribute bug by @carenthomas in https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fpull\u002F3028\r\n* fix: open router invalid model id bug by @carenthomas in https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fpull\u002F3028\r\n* chore: bump version 0.12.1 by @carenthomas in https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fpull\u002F3029\r\n\r\n\r\n**Full Changelog**: 
https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fcompare\u002F0.12.0...0.12.1","2025-10-09T22:41:59",{"id":228,"version":229,"summary_zh":76,"released_at":230},163560,"0.12.0","2025-10-09T20:36:04",{"id":232,"version":233,"summary_zh":234,"released_at":235},163561,"0.11.7","## 🧑 Human-in-the-Loop (HITL) Support \r\nThis release introduces human-in-the-loop functionality for tool execution, allowing users to configure certain tools as requiring approval or to specify per-agent requirements. This feature adds two new `LettaMessage` types:  \r\n* `ApprovalRequestMessage` (for the agent to request an approval) \r\n* `ApprovalResponseMessage` (for the client to either provide or deny an approval)\r\n\r\nExample of approving a tool call: \r\n```python\r\nresponse = client.agents.messages.create(\r\n    agent_id=agent.id,\r\n    messages=[{\r\n        \"type\": \"approval\",\r\n        \"approve\": True,\r\n        \"approval_request_id\": \"message-abc123\",\r\n    }]\r\n)\r\n```\r\nSee the full documentation [here](https:\u002F\u002Fdocs.letta.com\u002Fguides\u002Fagents\u002Fhuman-in-the-loop). 
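A denial follows the same message shape with `approve` set to `False`. The sketch below only constructs the payload dict passed in `messages=[...]` above; the helper name `approval_response` is illustrative and not part of the SDK.

```python
# Illustrative helper (not an SDK function): build an ApprovalResponseMessage
# payload using the same fields as the approval example above.
def approval_response(approval_request_id: str, approve: bool) -> dict:
    return {
        "type": "approval",
        "approve": approve,
        "approval_request_id": approval_request_id,
    }

# Deny the pending tool call instead of approving it:
deny = approval_response("message-abc123", False)
# deny == {"type": "approval", "approve": False,
#          "approval_request_id": "message-abc123"}
```

The resulting dict is what you would pass as the single entry of `messages` in `client.agents.messages.create(...)`.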
\r\n\r\n## 📁 Agent File (`.af`) v2 \r\nThe Agent File schema has been migrated to v2 and now supports groups (multi-agent) and files ([#4249](https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fcommit\u002Fd32b57f941c342a52e5c786ef76a77cf445a94a6)).\r\n\r\n## 🔎 Archival memory search \r\nImprovements to archival memory search with support for tags, timestamps, and hybrid search: \r\n\r\n- **Tag-Based Search and Insert**: Agents can now insert and search archival memories with arbitrary string tags ([#4300](https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fcommit\u002Fbc649ebf45b66ff80fa962605d354635eb0495ab), [#4285](https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fcommit\u002F6d9b0bdc3a40bc16b1756b6d7db35062a365a57e))\r\n- **Temporal Filtering**: Support for timestamp-based filtering of archival memories ([#4330](https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fcommit\u002F2a34b7ec138a25a988e75f1e17f6a4ff8a56e552), [#4398](https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fcommit\u002Fb8311395e81eec7aee9eb66a0a405ef7a564ae14))\r\n- **Hybrid Search**: New archival search endpoint with hybrid search functionality ([#4390](https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fcommit\u002Fa84cd7c93fdebd77405ee4d2e9b04a1c8f7021cd))\r\n\r\n## 🧠 Model and provider support \r\n- **GPT-5 Optimization**: Improved GPT-5 support with proper context window handling and reasoning effort configuration ([#4344](https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fcommit\u002Fa2f3a094b75ee0d4c15ccb5f2b9f63314915651f), [#4379](https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fcommit\u002Fabb298e970d0daff412a3a24fa13cf06efa9a4c6), [#4380](https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fcommit\u002F3113610943a6c8921dced21289cdadaa87f8cb60))\r\n- **DeepSeek Support**: Migration of DeepSeek to new agent loop architecture 
([#4266](https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fcommit\u002F6fc3ea506c7ddb146b0ba2b2ab1576128cfed527))\r\n- **Enhanced Anthropic Support**: Better native reasoning support and tool schema formatting ([#4331](https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fcommit\u002Fd5580654ede2e9f4c8d62813750d574ce9a7514d), [#4378](https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fcommit\u002F5548b4f006085db7ad482bd4e1ba3dc109d3efa2))\r\n- **Extended Thinking**: Fixed various issues with extended thinking mode for Anthropic ([#4341](https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fcommit\u002F79331eedc3922fe11128e8ae199ed327eda589c1))\r\n- **MCP Tool Schema**: Fixed MCP tool schema formatting for Anthropic streaming ([#4378](https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fcommit\u002F5548b4f006085db7ad482bd4e1ba3dc109d3efa2))\r\n- **Gemini Improvements**: Enhanced error handling and retry logic ([#4323](https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fcommit\u002F4ad32eda83906f23540176e6dbd79d2f53e69c6a), [#4397](https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fcommit\u002Fb7dca5937952b33fec9c229bb051aa622cb48b2a))\r\n\r\n## 🧩 Misc improvements\r\n- **Refactored Streaming Logic**: Improved streaming route architecture ([#4369](https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fcommit\u002Fd242620638fdd91189f319031ade08b316f4233a))\r\n- **Better Error Handling**: Enhanced error propagation in streaming responses ([#4253](https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fcommit\u002Fa455e7cd5e6f788262d0dfa630b5f41e8cafef41))\r\n- **Tool Return Limits**: Reduced default tool return size to prevent token overflow ([#4383](https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fcommit\u002Feefd2fafadfa61da7b7c02063fdb525d7018ad9c))\r\n- **Embedding Support**: Enable overriding embedding config on Agent File import 
([#4224](https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fcommit\u002F8b1cfdb7f4213608c2cfb5b65883b5a9721ed488))\r\n- **Tool Type Filtering**: Ability to list and filter tools by type ([#4036](https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fcommit\u002F3c016eae351d7a030e1ca5e9aab0cc21e8740ca9))\r\n- **Tool De-duplication**: Automatic de-duplication of tool rules ([#4282](https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fcommit\u002F1a2bebf781b2e3b612e2a04cd36a1abfcda8d568))\r\n","2025-09-04T04:56:36",{"id":237,"version":238,"summary_zh":76,"released_at":239},163562,"0.11.6","2025-08-27T05:08:49",{"id":241,"version":242,"summary_zh":243,"released_at":244},163563,"0.11.5","## What's Changed\r\n* feat: add background mode for message streaming by @carenthomas in https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fpull\u002F2777\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fcompare\u002F0.11.4...0.11.5","2025-08-26T23:34:38",{"id":246,"version":247,"summary_zh":248,"released_at":249},163564,"0.11.4","## What's Changed\r\n* feat: deprecate legacy paths for azure and together by @carenthomas in https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fpull\u002F3987\r\n* feat: introduce asyncio shield to prevent stream timeouts by @carenthomas in https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fpull\u002F3992\r\n* feat: record step metrics to table by @jnjpng in https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fpull\u002F3887\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fcompare\u002F0.11.3...0.11.4","2025-08-20T21:34:30",{"id":251,"version":252,"summary_zh":253,"released_at":254},163565,"0.11.3","## What's Changed\r\n* mv dictconfig out of getlogger by @andrewrfitz in https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fpull\u002F2759\r\n* chore: bump v0.11.3 by @carenthomas in 
https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fpull\u002F2760\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fcompare\u002F0.11.2...0.11.3","2025-08-12T00:20:55",{"id":256,"version":257,"summary_zh":258,"released_at":259},163566,"0.11.2","## What's Changed\r\n* fix: incorrect URL for Ollama embeddings endpoint by @antondevson in https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fpull\u002F2750\r\n* fix: all model types returned from ollama provider by @antondevson in https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fpull\u002F2744\r\n* feat: Add max_steps parameter to agent export by @mattzh72 in https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fpull\u002F3828\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fcompare\u002F0.11.1...0.11.2","2025-08-08T21:02:33",{"id":261,"version":262,"summary_zh":263,"released_at":264},163567,"0.11.1","This release adds support for the latest model releases and makes improvements to base memory and file tools. \r\n\r\n## 🧠 Improved LLM model support \r\n- Added support for Claude Opus 4.1 and GPT-5 models ([#3806](https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fcommit\u002F5fda204842c7c4c0ead8721e272f426ac9695c8b))\r\n- Added `minimal` option for the `reasoning_effort` parameter in `LLMConfig` ([#3816](https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fcommit\u002F2d1b78081f6226f442e616d8fe7bf9fad934d519))\r\n\r\n## 🔨 Built-in tool improvements \r\n- Removed optional argument for `memory_replace` to improve reliability ([#3800](https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fcommit\u002F341bf4554be5509fcb00a1c86b70d9aa19a35622))\r\n- Made the `grep` tool for files paginated ([#3815](https:\u002F\u002Fgithub.com\u002Fletta-ai\u002Fletta\u002Fpull\u002F2756\u002Fcommits\u002F182e5706720ca4542a9aa32e49f072936e679b62))","2025-08-08T05:53:54"]