[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-mem0ai--mem0":3,"tool-mem0ai--mem0":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",157379,2,"2026-04-15T23:32:42",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":77,"owner_twitter":72,"owner_website":78,"owner_url":79,"languages":80,"stars":118,"forks":119,"last_commit_at":120,"license":121,"difficulty_score":32,"env_os":122,"env_gpu":123,"env_ram":122,"env_deps":124,"category_tags":130,"github_topics":131,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":146,"updated_at":147,"faqs":148,"releases":177},7936,"mem0ai\u002Fmem0","mem0","Universal memory layer for AI Agents","mem0 是一款专为 AI 助手和智能体打造的通用记忆层，旨在让人工智能拥有类似人类的长期记忆能力。它解决了当前大模型在交互中“记不住”用户偏好、历史背景和个性化需求的痛点，避免了每次对话都需重复输入上下文或依赖昂贵且低效的全量上下文窗口。\n\n通过 mem0，AI 能够自动提取并存储用户、会话及智能体状态等多层级信息，随着时间推移不断适应用户习惯，从而提供连贯、精准且个性化的服务。无论是构建客服机器人、个人助理，还是开发医疗关怀或游戏互动应用，mem0 都能显著提升体验的一致性与深度。\n\n这款工具主要面向开发者和技术研究人员，提供了直观的 API、跨平台 SDK（支持 Python 和 Node.js）以及可自托管的开源方案，同时也提供全托管云服务以降低运维门槛。其独特技术亮点在于卓越的性能表现：相比原生记忆方案，mem0 在 LOCOMO 基准测试中准确率提升 26%，响应速度快 91%，同时令牌消耗降低 
90%，大幅减少了运行成本并确保了低延迟。对于希望构建具备持续学习能力的高性能 AI 应用的团队来说，mem0 是一个高效且经济的基础设施选择。","\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fmem0ai\u002Fmem0\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmem0ai_mem0_readme_401ed3c27069.png\" width=\"800px\" alt=\"Mem0 - The Memory Layer for Personalized AI\">\n  \u003C\u002Fa>\n\u003C\u002Fp>\n\u003Cp align=\"center\" style=\"display: flex; justify-content: center; gap: 20px; align-items: center;\">\n  \u003Ca href=\"https:\u002F\u002Ftrendshift.io\u002Frepositories\u002F11194\" target=\"blank\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmem0ai_mem0_readme_4a68feb902da.png\" alt=\"mem0ai%2Fmem0 | Trendshift\" width=\"250\" height=\"55\"\u002F>\n  \u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fmem0.ai\">Learn more\u003C\u002Fa>\n  ·\n  \u003Ca href=\"https:\u002F\u002Fmem0.dev\u002FDiG\">Join Discord\u003C\u002Fa>\n  ·\n  \u003Ca href=\"https:\u002F\u002Fmem0.dev\u002Fdemo\">Demo\u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fmem0.dev\u002FDiG\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDiscord-%235865F2.svg?&logo=discord&logoColor=white\" alt=\"Mem0 Discord\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fpepy.tech\u002Fproject\u002Fmem0ai\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fdm\u002Fmem0ai\" alt=\"Mem0 PyPI - Downloads\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fmem0ai\u002Fmem0\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fcommit-activity\u002Fm\u002Fmem0ai\u002Fmem0?style=flat-square\" alt=\"GitHub commit activity\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Fmem0ai\" target=\"blank\">\n    \u003Cimg 
src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fmem0ai?color=%2334D058&label=pypi%20package\" alt=\"Package version\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fwww.npmjs.com\u002Fpackage\u002Fmem0ai\" target=\"blank\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fnpm\u002Fv\u002Fmem0ai\" alt=\"Npm package\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fwww.ycombinator.com\u002Fcompanies\u002Fmem0\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FY%20Combinator-S24-orange?style=flat-square\" alt=\"Y Combinator S24\">\n  \u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fmem0.ai\u002Fresearch\">\u003Cstrong>📄 Building Production-Ready AI Agents with Scalable Long-Term Memory →\u003C\u002Fstrong>\u003C\u002Fa>\n\u003C\u002Fp>\n\u003Cp align=\"center\">\n  \u003Cstrong>⚡ +26% Accuracy vs. OpenAI Memory • 🚀 91% Faster • 💰 90% Fewer Tokens\u003C\u002Fstrong>\n\u003C\u002Fp>\n\n> **🎉 mem0ai v1.0.0 is now available!** This major release includes API modernization, improved vector store support, and enhanced GCP integration. [See migration guide →](MIGRATION_GUIDE_v1.0.md)\n\n## 🔥 Research Highlights\n- **+26% Accuracy** over OpenAI Memory on the LOCOMO benchmark\n- **91% Faster Responses** than full-context, ensuring low latency at scale\n- **90% Lower Token Usage** than full-context, cutting costs without compromise\n- [Read the full paper](https:\u002F\u002Fmem0.ai\u002Fresearch)\n\n# Introduction\n\n[Mem0](https:\u002F\u002Fmem0.ai) (\"mem-zero\") enhances AI assistants and agents with an intelligent memory layer, enabling personalized AI interactions. 
It remembers user preferences, adapts to individual needs, and continuously learns over time—ideal for customer support chatbots, AI assistants, and autonomous systems.\n\n### Key Features & Use Cases\n\n**Core Capabilities:**\n- **Multi-Level Memory**: Seamlessly retains User, Session, and Agent state with adaptive personalization\n- **Developer-Friendly**: Intuitive API, cross-platform SDKs, and a fully managed service option\n\n**Applications:**\n- **AI Assistants**: Consistent, context-rich conversations\n- **Customer Support**: Recall past tickets and user history for tailored help\n- **Healthcare**: Track patient preferences and history for personalized care\n- **Productivity & Gaming**: Adaptive workflows and environments based on user behavior\n\n## 🚀 Quickstart Guide \u003Ca name=\"quickstart\">\u003C\u002Fa>\n\nChoose between our hosted platform or self-hosted package:\n\n### Hosted Platform\n\nGet up and running in minutes with automatic updates, analytics, and enterprise security.\n\n1. Sign up on [Mem0 Platform](https:\u002F\u002Fapp.mem0.ai)\n2. Embed the memory layer via SDK or API keys\n\n### Self-Hosted (Open Source)\n\nInstall the SDK via pip:\n\n```bash\npip install mem0ai\n```\n\nFor enhanced hybrid search with BM25 keyword matching and entity extraction, install with NLP support:\n\n```bash\npip install mem0ai[nlp]\npython -m spacy download en_core_web_sm\n```\n\nInstall the SDK via npm:\n```bash\nnpm install mem0ai\n```\n\n### CLI\n\nManage memories from your terminal:\n\n```bash\nnpm install -g @mem0\u002Fcli   # or: pip install mem0-cli\n\nmem0 init\nmem0 add \"Prefers dark mode and vim keybindings\" --user-id alice\nmem0 search \"What does Alice prefer?\" --user-id alice\n```\n\nSee the [CLI documentation](https:\u002F\u002Fdocs.mem0.ai\u002Fplatform\u002Fcli) for the full command reference.\n\n### Basic Usage\n\nMem0 requires an LLM to function, with `gpt-4.1-nano-2025-04-14` from OpenAI as the default. 
However, it supports a variety of LLMs; for details, refer to our [Supported LLMs documentation](https:\u002F\u002Fdocs.mem0.ai\u002Fcomponents\u002Fllms\u002Foverview).\n\nMem0 uses `text-embedding-3-small` from OpenAI as the default embedding model. For best results with hybrid search (semantic + keyword + entity boosting), we recommend using at least [Qwen 600M](https:\u002F\u002Fhuggingface.co\u002FAlibaba-NLP\u002Fgte-Qwen2-1.5B-instruct) or a comparable embedding model. See [Supported Embeddings](https:\u002F\u002Fdocs.mem0.ai\u002Fcomponents\u002Fembedders\u002Foverview) for configuration details.\n\nThe first step is to instantiate the memory:\n\n```python\nfrom openai import OpenAI\nfrom mem0 import Memory\n\nopenai_client = OpenAI()\nmemory = Memory()\n\ndef chat_with_memories(message: str, user_id: str = \"default_user\") -> str:\n    # Retrieve relevant memories\n    relevant_memories = memory.search(query=message, user_id=user_id, limit=3)\n    memories_str = \"\\n\".join(f\"- {entry['memory']}\" for entry in relevant_memories[\"results\"])\n\n    # Generate Assistant response\n    system_prompt = f\"You are a helpful AI. 
Answer the question based on query and memories.\\nUser Memories:\\n{memories_str}\"\n    messages = [{\"role\": \"system\", \"content\": system_prompt}, {\"role\": \"user\", \"content\": message}]\n    response = openai_client.chat.completions.create(model=\"gpt-4.1-nano-2025-04-14\", messages=messages)\n    assistant_response = response.choices[0].message.content\n\n    # Create new memories from the conversation\n    messages.append({\"role\": \"assistant\", \"content\": assistant_response})\n    memory.add(messages, user_id=user_id)\n\n    return assistant_response\n\ndef main():\n    print(\"Chat with AI (type 'exit' to quit)\")\n    while True:\n        user_input = input(\"You: \").strip()\n        if user_input.lower() == 'exit':\n            print(\"Goodbye!\")\n            break\n        print(f\"AI: {chat_with_memories(user_input)}\")\n\nif __name__ == \"__main__\":\n    main()\n```\n\nFor detailed integration steps, see the [Quickstart](https:\u002F\u002Fdocs.mem0.ai\u002Fquickstart) and [API Reference](https:\u002F\u002Fdocs.mem0.ai\u002Fapi-reference).\n\n## 🔗 Integrations & Demos\n\n- **ChatGPT with Memory**: Personalized chat powered by Mem0 ([Live Demo](https:\u002F\u002Fmem0.dev\u002Fdemo))\n- **Browser Extension**: Store memories across ChatGPT, Perplexity, and Claude ([Chrome Extension](https:\u002F\u002Fchromewebstore.google.com\u002Fdetail\u002Fonihkkbipkfeijkadecaafbgagkhglop?utm_source=item-share-cb))\n- **Langgraph Support**: Build a customer bot with Langgraph + Mem0 ([Guide](https:\u002F\u002Fdocs.mem0.ai\u002Fintegrations\u002Flanggraph))\n- **CrewAI Integration**: Tailor CrewAI outputs with Mem0 ([Example](https:\u002F\u002Fdocs.mem0.ai\u002Fintegrations\u002Fcrewai))\n\n## 📚 Documentation & Support\n\n- Full docs: https:\u002F\u002Fdocs.mem0.ai\n- Community: [Discord](https:\u002F\u002Fmem0.dev\u002FDiG) · [X (formerly Twitter)](https:\u002F\u002Fx.com\u002Fmem0ai)\n- Contact: founders@mem0.ai\n\n## Citation\n\nWe now have a paper you 
can cite:\n\n```bibtex\n@article{mem0,\n  title={Mem0: Building Production-Ready AI Agents with Scalable Long-Term Memory},\n  author={Chhikara, Prateek and Khant, Dev and Aryan, Saket and Singh, Taranjeet and Yadav, Deshraj},\n  journal={arXiv preprint arXiv:2504.19413},\n  year={2025}\n}\n```\n\n## ⚖️ License\n\nApache 2.0 — see the [LICENSE](https:\u002F\u002Fgithub.com\u002Fmem0ai\u002Fmem0\u002Fblob\u002Fmain\u002FLICENSE) file for details.\n","\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fmem0ai\u002Fmem0\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmem0ai_mem0_readme_401ed3c27069.png\" width=\"800px\" alt=\"Mem0 - 个性化AI的记忆层\">\n  \u003C\u002Fa>\n\u003C\u002Fp>\n\u003Cp align=\"center\" style=\"display: flex; justify-content: center; gap: 20px; align-items: center;\">\n  \u003Ca href=\"https:\u002F\u002Ftrendshift.io\u002Frepositories\u002F11194\" target=\"blank\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmem0ai_mem0_readme_4a68feb902da.png\" alt=\"mem0ai%2Fmem0 | Trendshift\" width=\"250\" height=\"55\"\u002F>\n  \u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fmem0.ai\">了解更多\u003C\u002Fa>\n  ·\n  \u003Ca href=\"https:\u002F\u002Fmem0.dev\u002FDiG\">加入Discord\u003C\u002Fa>\n  ·\n  \u003Ca href=\"https:\u002F\u002Fmem0.dev\u002Fdemo\">演示\u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fmem0.dev\u002FDiG\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDiscord-%235865F2.svg?&logo=discord&logoColor=white\" alt=\"Mem0 Discord\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fpepy.tech\u002Fproject\u002Fmem0ai\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fdm\u002Fmem0ai\" alt=\"Mem0 PyPI - 下载量\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fmem0ai\u002Fmem0\">\n    \u003Cimg 
src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fcommit-activity\u002Fm\u002Fmem0ai\u002Fmem0?style=flat-square\" alt=\"GitHub提交活跃度\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Fmem0ai\" target=\"blank\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fmem0ai?color=%2334D058&label=pypi%20package\" alt=\"软件包版本\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fwww.npmjs.com\u002Fpackage\u002Fmem0ai\" target=\"blank\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fnpm\u002Fv\u002Fmem0ai\" alt=\"Npm软件包\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fwww.ycombinator.com\u002Fcompanies\u002Fmem0\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FY%20Combinator-S24-orange?style=flat-square\" alt=\"Y Combinator S24\">\n  \u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fmem0.ai\u002Fresearch\">\u003Cstrong>📄 使用可扩展的长期记忆构建生产就绪的AI智能体 →\u003C\u002Fstrong>\u003C\u002Fa>\n\u003C\u002Fp>\n\u003Cp align=\"center\">\n  \u003Cstrong>⚡ 相比OpenAI Memory，准确率提升26% • 🚀 响应速度提升91% • 💰 令牌使用量减少90%\u003C\u002Fstrong>\n\u003C\u002Fp>\n\n> **🎉 mem0ai v1.0.0现已发布！** 此次重大更新包括API现代化、向量存储支持改进以及GCP集成增强。[查看迁移指南 →](MIGRATION_GUIDE_v1.0.md)\n\n## 🔥 研究亮点\n- 在LOCOMO基准测试中，相比OpenAI Memory，**准确率提升26%**\n- 响应速度比全上下文模式快**91%**，确保大规模部署下的低延迟\n- 令牌使用量比全上下文模式低**90%**，在不牺牲性能的情况下大幅降低成本\n- [阅读完整论文](https:\u002F\u002Fmem0.ai\u002Fresearch)\n\n# 简介\n\n[Mem0](https:\u002F\u002Fmem0.ai)（“mem-zero”）通过智能记忆层增强AI助手和智能体，实现个性化的AI交互。它能够记住用户偏好、适应个体需求，并随时间不断学习——非常适合客户支持聊天机器人、AI助手以及自主系统。\n\n### 核心功能与应用场景\n\n**核心能力：**\n- **多层级记忆**：无缝保留用户、会话和智能体状态，实现自适应个性化\n- **开发者友好**：直观的API、跨平台SDK以及完全托管的服务选项\n\n**应用领域：**\n- **AI助手**：保持连贯且富含上下文的对话\n- **客户支持**：调用过往工单和用户历史，提供定制化帮助\n- **医疗健康**：跟踪患者偏好和病史，实现个性化护理\n- **生产力与游戏**：根据用户行为调整工作流和环境\n\n## 🚀 快速入门指南 \u003Ca name=\"quickstart\">\u003C\u002Fa>\n\n您可以选择我们的托管平台或自托管方案：\n\n### 托管平台\n\n几分钟内即可启动并运行，享受自动更新、数据分析和企业级安全性。\n\n1. 
在[Mem0平台](https:\u002F\u002Fapp.mem0.ai)注册\n2. 通过SDK或API密钥嵌入记忆层\n\n### 自托管（开源）\n\n使用pip安装SDK：\n\n```bash\npip install mem0ai\n```\n\n若需增强混合搜索功能，结合BM25关键词匹配和实体抽取，请安装带有NLP支持的版本：\n\n```bash\npip install mem0ai[nlp]\npython -m spacy download en_core_web_sm\n```\n\n使用npm安装SDK：\n```bash\nnpm install mem0ai\n```\n\n### CLI\n\n通过终端管理记忆：\n\n```bash\nnpm install -g @mem0\u002Fcli   # 或：pip install mem0-cli\n\nmem0 init\nmem0 add \"偏好深色模式和vim键位\" --user-id alice\nmem0 search \"Alice偏好什么？\" --user-id alice\n```\n\n完整命令参考请参阅[CLI文档](https:\u002F\u002Fdocs.mem0.ai\u002Fplatform\u002Fcli)。\n\n### 基本使用\n\nMem0需要LLM才能运行，默认使用来自OpenAI的`gpt-4.1-nano-2025-04-14`模型。不过，它也支持多种LLM；详情请参阅我们的[支持的LLM文档](https:\u002F\u002Fdocs.mem0.ai\u002Fcomponents\u002Fllms\u002Foverview)。\n\nMem0默认使用OpenAI的`text-embedding-3-small`作为嵌入模型。为获得最佳的混合搜索效果（语义+关键词+实体增强），我们建议至少使用[Qwen 600M](https:\u002F\u002Fhuggingface.co\u002FAlibaba-NLP\u002Fgte-Qwen2-1.5B-instruct)或类似的嵌入模型。配置详情请参阅[支持的嵌入模型文档](https:\u002F\u002Fdocs.mem0.ai\u002Fcomponents\u002Fembedders\u002Foverview)。\n\n第一步是实例化记忆对象：\n\n```python\nfrom openai import OpenAI\nfrom mem0 import Memory\n\nopenai_client = OpenAI()\nmemory = Memory()\n\ndef chat_with_memories(message: str, user_id: str = \"default_user\") -> str:\n    # 检索相关记忆\n    relevant_memories = memory.search(query=message, user_id=user_id, limit=3)\n    memories_str = \"\\n\".join(f\"- {entry['memory']}\" for entry in relevant_memories[\"results\"])\n\n    # 生成助手回复\n    system_prompt = f\"你是一个有用的AI助手。请根据问题和记忆回答。\\n用户记忆：\\n{memories_str}\"\n    messages = [{\"role\": \"system\", \"content\": system_prompt}, {\"role\": \"user\", \"content\": message}]\n    response = openai_client.chat.completions.create(model=\"gpt-4.1-nano-2025-04-14\", messages=messages)\n    assistant_response = response.choices[0].message.content\n\n    # 根据对话创建新记忆\n    messages.append({\"role\": \"assistant\", \"content\": assistant_response})\n    memory.add(messages, user_id=user_id)\n\n    return assistant_response\n\ndef 
main():\n    print(\"与AI聊天（输入'exit'退出）\")\n    while True:\n        user_input = input(\"你：\").strip()\n        if user_input.lower() == 'exit':\n            print(\"再见！\")\n            break\n        print(f\"AI：{chat_with_memories(user_input)}\")\n\nif __name__ == \"__main__\":\n    main()\n```\n\n有关详细集成步骤，请参阅[快速入门](https:\u002F\u002Fdocs.mem0.ai\u002Fquickstart)和[API参考](https:\u002F\u002Fdocs.mem0.ai\u002Fapi-reference)。\n\n## 🔗 集成与演示\n\n- **带记忆的 ChatGPT**：由 Mem0 提供支持的个性化聊天（[在线演示](https:\u002F\u002Fmem0.dev\u002Fdemo)）\n- **浏览器扩展**：在 ChatGPT、Perplexity 和 Claude 之间存储记忆（[Chrome 扩展](https:\u002F\u002Fchromewebstore.google.com\u002Fdetail\u002Fonihkkbipkfeijkadecaafbgagkhglop?utm_source=item-share-cb)）\n- **Langgraph 支持**：使用 Langgraph + Mem0 构建客户机器人（[指南](https:\u002F\u002Fdocs.mem0.ai\u002Fintegrations\u002Flanggraph)）\n- **CrewAI 集成**：用 Mem0 定制 CrewAI 的输出（[示例](https:\u002F\u002Fdocs.mem0.ai\u002Fintegrations\u002Fcrewai)）\n\n## 📚 文档与支持\n\n- 完整文档：https:\u002F\u002Fdocs.mem0.ai\n- 社区：[Discord](https:\u002F\u002Fmem0.dev\u002FDiG) · [X（原 Twitter）](https:\u002F\u002Fx.com\u002Fmem0ai)\n- 联系方式：founders@mem0.ai\n\n## 引用\n\n我们现在有一篇可供引用的论文：\n\n```bibtex\n@article{mem0,\n  title={Mem0: Building Production-Ready AI Agents with Scalable Long-Term Memory},\n  author={Chhikara, Prateek and Khant, Dev and Aryan, Saket and Singh, Taranjeet and Yadav, Deshraj},\n  journal={arXiv preprint arXiv:2504.19413},\n  year={2025}\n}\n```\n\n## ⚖️ 许可证\n\nApache 2.0 — 详情请参阅 [LICENSE](https:\u002F\u002Fgithub.com\u002Fmem0ai\u002Fmem0\u002Fblob\u002Fmain\u002FLICENSE) 文件。","# Mem0 快速上手指南\n\nMem0 是一个专为个性化 AI 设计的智能记忆层，能够帮助 AI 助手和 Agent 记住用户偏好、适应个体需求并随时间持续学习。它适用于客服机器人、个人助理及自主系统，相比全上下文模式，可实现响应速度提升 91% 且 Token 用量减少 90%。\n\n## 环境准备\n\n在开始之前，请确保满足以下前置条件：\n\n*   **操作系统**：Linux, macOS 或 Windows\n*   **Python 版本**：Python 3.8 或更高版本\n*   **LLM API Key**：Mem0 需要连接大语言模型才能运行。默认支持 OpenAI，你需要准备 `OPENAI_API_KEY` 环境变量。\n    *   *注：Mem0 也支持其他 LLM 提供商，具体配置可参考官方文档。*\n*   **网络环境**：确保能够访问 PyPI 源及对应的 LLM 服务接口。\n\n## 
安装步骤\n\n你可以通过 pip 或 npm 安装 Mem0 SDK。\n\n### 方式一：使用 pip 安装（推荐 Python 开发者）\n\n**基础安装：**\n```bash\npip install mem0ai\n```\n\n**增强安装（推荐）：**\n若需启用混合搜索（BM25 关键词匹配 + 实体提取）以获得更佳效果，请安装带有 NLP 支持的版本，并下载 spaCy 模型：\n```bash\npip install mem0ai[nlp]\npython -m spacy download en_core_web_sm\n```\n\n### 方式二：使用 npm 安装（推荐 Node.js 开发者）\n\n```bash\nnpm install mem0ai\n```\n\n### 可选：安装命令行工具 (CLI)\n\n如果你希望通过终端直接管理记忆数据：\n```bash\nnpm install -g @mem0\u002Fcli\n# 或者使用 pip\npip install mem0-cli\n```\n\n## 基本使用\n\n以下是一个最简单的 Python 示例，展示如何初始化记忆、检索相关上下文并更新记忆。\n\n### 1. 配置环境变量\n在运行代码前，请在终端设置你的 OpenAI API Key：\n```bash\nexport OPENAI_API_KEY=\"your-openai-api-key\"\n```\n\n### 2. 代码示例\n创建一个名为 `main.py` 的文件，填入以下代码：\n\n```python\nfrom openai import OpenAI\nfrom mem0 import Memory\n\n# 初始化 OpenAI 客户端和 Mem0 记忆层\nopenai_client = OpenAI()\nmemory = Memory()\n\ndef chat_with_memories(message: str, user_id: str = \"default_user\") -> str:\n    # 1. 检索与当前问题相关的记忆\n    relevant_memories = memory.search(query=message, user_id=user_id, limit=3)\n    memories_str = \"\\n\".join(f\"- {entry['memory']}\" for entry in relevant_memories[\"results\"])\n\n    # 2. 构建包含记忆的 System Prompt\n    system_prompt = f\"You are a helpful AI. Answer the question based on query and memories.\\nUser Memories:\\n{memories_str}\"\n    messages = [{\"role\": \"system\", \"content\": system_prompt}, {\"role\": \"user\", \"content\": message}]\n    \n    # 3. 调用 LLM 生成回复\n    response = openai_client.chat.completions.create(model=\"gpt-4.1-nano-2025-04-14\", messages=messages)\n    assistant_response = response.choices[0].message.content\n\n    # 4. 
将对话内容存入记忆，供未来使用\n    messages.append({\"role\": \"assistant\", \"content\": assistant_response})\n    memory.add(messages, user_id=user_id)\n\n    return assistant_response\n\ndef main():\n    print(\"Chat with AI (type 'exit' to quit)\")\n    while True:\n        user_input = input(\"You: \").strip()\n        if user_input.lower() == 'exit':\n            print(\"Goodbye!\")\n            break\n        print(f\"AI: {chat_with_memories(user_input)}\")\n\nif __name__ == \"__main__\":\n    main()\n```\n\n### 3. 运行测试\n运行脚本即可开始体验带记忆的对话：\n```bash\npython main.py\n```\n\n### CLI 快速测试\n如果你安装了 CLI 工具，也可以直接在命令行操作：\n```bash\nmem0 init\nmem0 add \"Prefers dark mode and vim keybindings\" --user-id alice\nmem0 search \"What does Alice prefer?\" --user-id alice\n```","某电商企业正在开发一款智能客服助手，旨在为回头客提供基于历史购买记录和偏好的个性化购物建议。\n\n### 没有 mem0 时\n- **上下文丢失严重**：每次用户开启新会话，AI 都必须重新询问尺码、偏好风格等基础信息，导致用户体验割裂且烦躁。\n- **响应延迟高**：为了维持对话连贯性，开发者被迫将长历史聊天记录全部填入提示词（Context），导致首字生成速度极慢。\n- **运营成本高昂**：全量上下文传输消耗了大量 Token，使得单次对话成本居高不下，难以规模化部署。\n- **记忆更新困难**：当用户偏好发生变化（如从喜欢休闲风转为商务风）时，AI 无法自动修正旧印象，仍推荐错误商品。\n\n### 使用 mem0 后\n- **跨会话持久记忆**：mem0 自动提取并存储用户的尺码、品牌偏好等关键特征，新用户再次访问时能直接叫出名字并推荐合适商品。\n- **极速低延迟响应**：通过智能检索相关记忆片段而非灌输全文，响应速度提升 91%，确保大促期间高并发下的流畅体验。\n- **大幅降低 Token 消耗**：仅传递精炼后的记忆数据，相比全量上下文节省 90% 的 Token 用量，显著压缩运营预算。\n- **动态自适应进化**：mem0 具备自我更新机制，能根据用户最新的交互行为自动修正过时偏好，确保持续提供精准建议。\n\nmem0 通过构建高效的通用记忆层，让 AI 客服在大幅降低成本的同时，真正拥有了“记住”用户并持续进化的能力。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmem0ai_mem0_401ed3c2.png","mem0ai","Mem0","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fmem0ai_823c3ad3.png","The Universal Memory Layer for AI Agents",null,"founders@mem0.ai","https:\u002F\u002Fmem0.ai","https:\u002F\u002Fgithub.com\u002Fmem0ai",[81,85,89,93,96,100,104,108,112,115],{"name":82,"color":83,"percentage":84},"Python","#3572A5",58.8,{"name":86,"color":87,"percentage":88},"TypeScript","#3178c6",31,{"name":90,"color":91,"percentage":92},"MDX","#fcb32c",4.9,{"name":94,"color":95,"percentage":10},"Jupyter 
Notebook","#DA5B0B",{"name":97,"color":98,"percentage":99},"JavaScript","#f1e05a",1.1,{"name":101,"color":102,"percentage":103},"Shell","#89e051",0.7,{"name":105,"color":106,"percentage":107},"Makefile","#427819",0.3,{"name":109,"color":110,"percentage":111},"Dockerfile","#384d54",0.1,{"name":113,"color":114,"percentage":111},"CSS","#663399",{"name":116,"color":117,"percentage":111},"SCSS","#c6538c",53129,5960,"2026-04-15T19:03:21","Apache-2.0","未说明","非必需（默认使用云端 LLM 如 OpenAI）；若本地运行混合搜索推荐模型，需支持运行约 600M-1.5B 参数量的嵌入模型",{"notes":125,"python":122,"dependencies":126},"该工具主要作为客户端 SDK 运行，默认依赖外部 API（如 OpenAI）提供 LLM 和嵌入服务，因此对本地硬件要求极低。若启用本地混合搜索（BM25 + 实体提取），需安装 'mem0ai[nlp]' 并下载 Spacy 模型。官方推荐使用至少 Qwen 600M 或同等规模的嵌入模型以获得最佳效果。支持通过 pip 或 npm 安装，并提供 CLI 工具进行管理。",[72,127,128,129],"openai","spacy (可选，用于 NLP 增强)","en_core_web_sm (Spacy 模型)",[35,15,13,14],[132,133,134,135,136,137,138,139,140,141,142,143,144,145],"ai","chatgpt","llm","python","chatbots","rag","application","long-term-memory","memory","memory-management","state-management","ai-agents","agents","genai","2026-03-27T02:49:30.150509","2026-04-16T08:14:12.361434",[149,154,159,164,169,173],{"id":150,"question_zh":151,"answer_zh":152,"source_url":153},35532,"Python 中的元数据过滤（metadata filtering）为什么不起作用？","该问题已在最新版本中解决。开源版本（OSS）现在的 `search()` 方法已支持元数据过滤，包括高级操作符（`eq`, `ne`, `gt`, `gte`, `lt`, `lte`, `in`, `nin`, `contains`, `icontains`）和逻辑操作符（`AND`, `OR`, `NOT`）。如果您仍遇到问题，请确保升级到最新版本（如 1.0.0 beta 或更高），相关实现位于 `mem0\u002Fmemory\u002Fmain.py` 的 `_process_metadata_filters()` 和 `_has_advanced_operators()` 函数中。","https:\u002F\u002Fgithub.com\u002Fmem0ai\u002Fmem0\u002Fissues\u002F3284",{"id":155,"question_zh":156,"answer_zh":157,"source_url":158},35533,"调用 add 或 search 接口时遇到 '400 Bad Request' 错误怎么办？","这通常是由于 API 版本不兼容或文档过时导致的。维护者表示正在更新文档以解决此问题。建议检查您的 `output_format` 参数（注意 'v1.0' 已弃用，默认设为 'v1.1'），并参考最新的官方集成文档（如 LangGraph 集成指南）。如果问题依旧，请确保使用的是与当前 API 
匹配的客户端版本。","https:\u002F\u002Fgithub.com\u002Fmem0ai\u002Fmem0\u002Fissues\u002F3256",{"id":160,"question_zh":161,"answer_zh":162,"source_url":163},35534,"使用 Ollama、Groq 或 AWS Bedrock 时出现 'Invalid JSON response: Expecting value' 错误如何解决？","这是一个已知问题，通常发生在特定版本的 mem0 与某些 LLM 提供商配合使用时。最有效的解决方法是安装 GitHub 上的最新开发版代码，而不是通过 pip 安装稳定版。请运行以下命令升级：`pip install 'git+https:\u002F\u002Fgithub.com\u002Fmem0ai\u002Fmem0.git@main'`。此外，尝试在 `Memory.add()` 中设置 `infer=False` 也可能暂时规避该问题。","https:\u002F\u002Fgithub.com\u002Fmem0ai\u002Fmem0\u002Fissues\u002F3391",{"id":165,"question_zh":166,"answer_zh":167,"source_url":168},35535,"为什么本地复现 Locomo 评估分数远低于论文中的数据？","本地复现分数较低通常是因为本地开源版本与 Mem0 平台版本在底层实现或模型配置上存在差异。论文中的高分是在 Mem0 平台特定配置下取得的。如果在本地使用 `mem0.memory.main.Memory` 替代 `MemoryClient`，需确保向量存储、嵌入模型和 LLM 的配置与平台环境完全一致。建议仔细核对评估脚本中的超参数及模型版本，或直接使用 Mem0 平台进行评估以获得一致结果。","https:\u002F\u002Fgithub.com\u002Fmem0ai\u002Fmem0\u002Fissues\u002F2800",{"id":170,"question_zh":171,"answer_zh":172,"source_url":158},35536,"Mem0 的历史对话数据是保存在云端还是本地？是临时还是永久的？","数据存储位置取决于您的配置。如果您使用 `MemoryClient` 并连接官方 API，数据通常存储在云端；如果您使用本地配置的 `Memory` 类（配合本地向量数据库如 Qdrant、Pgvector 等），数据则存储在本地。关于持久性，只要底层向量数据库或存储服务未清除数据，记忆通常是永久保存的，除非您显式设置了过期时间（expiration_date）或删除了记忆。",{"id":174,"question_zh":175,"answer_zh":176,"source_url":163},35537,"如何在本地配置 Mem0 以使用 Ollama 作为 LLM 和 Embedder？","您可以通过传递配置字典给 `Memory.from_config()` 来使用 Ollama。以下是一个示例配置：\n```python\nconfig = {\n    \"vector_store\": {\n        \"provider\": \"qdrant\",\n        \"config\": {\n            \"collection_name\": \"test\",\n            \"host\": \"localhost\",\n            \"port\": 6333,\n            \"embedding_model_dims\": 768\n        },\n    },\n    \"llm\": {\n        \"provider\": \"ollama\",\n        \"config\": {\n            \"model\": \"llama3.1:latest\",\n            \"temperature\": 0,\n            \"max_tokens\": 2000,\n            \"ollama_base_url\": \"http:\u002F\u002Flocalhost:11434\"\n        },\n    },\n    \"embedder\": {\n        \"provider\": 
\"ollama\",\n        \"config\": {\n            \"model\": \"nomic-embed-text:latest\",\n            \"ollama_base_url\": \"http:\u002F\u002Flocalhost:11434\"\n        },\n    },\n}\nm = Memory.from_config(config)\n```\n请确保本地 Ollama 服务已启动且模型已拉取。如果遇到 JSON 解析错误，请尝试安装 main 分支的最新代码。",[178,183,188,193,198,203,208,213,218,223,228,233,238,243,248,253,258,263,268,273],{"id":179,"version":180,"summary_zh":181,"released_at":182},280696,"ts-v3.0.0-beta.1","## Mem0 节点 SDK（v3.0.0-beta.1）\n\n### 已编辑","2026-04-14T12:39:09",{"id":184,"version":185,"summary_zh":186,"released_at":187},280697,"v2.0.0b1","## Mem0 Python SDK (v2.0.0b1)\n\n### 已编辑","2026-04-14T12:36:54",{"id":189,"version":190,"summary_zh":191,"released_at":192},280698,"ts-v3.0.0-beta.0","## Mem0 节点 SDK（v3.0.0-beta.0）\n\nTypeScript SDK 下一个主要版本的测试版发布。（仍存在功能缺失）\n\n### 重大变更\n\n- 从 OSS 内存配置中移除 `enableGraph` 标志位 (#4776)\n- 为保持一致性，将客户端 SDK 参数命名规范由 snake_case 改为 camelCase (#4776)\n- 移除 LLM、嵌入、向量存储和图数据库中的已弃用参数 (#4740)\n\n### 功能\n\n- **llms**: 添加 DeepSeek LLM 提供商，并配套编写单元测试 (#4613)\n\n### 重构\n\n- **telemetry**: 将 OSS 热路径事件采样率降至 10%，以降低 PostHog 的数据量 (#4771)\n\n### 上一版本\n\n- v2.4.6（2026年4月6日）","2026-04-13T10:27:09",{"id":194,"version":195,"summary_zh":196,"released_at":197},280699,"v2.0.0b0","## Mem0 Python SDK（v2.0.0b0）\n\nPython SDK 下一个主要版本的测试版。（仍存在功能缺失）\n\n### 重大变更\n\n- 移除了 `MemoryConfig` 及相关 API 中已弃用的 `enable_graph` 参数 (#4776)\n- 移除了 LLM、嵌入、向量存储和图数据库中的已弃用参数 (#4740)\n\n### 错误修复\n\n- **client**：防止在响应属性缺失时反馈遥测中出现 `TypeError` (#4795)\n- **memory**：对 `temp_uuid_mapping` 查找进行防护，避免因 LLM 幻觉生成的 ID 导致崩溃，修复内存操作期间的崩溃问题 (#4674，修复 #3931)\n- **azure_openai**：将 `response_format` 参数转发至 Azure OpenAI API，以支持结构化输出 (#4689)\n\n### 重构\n\n- **telemetry**：将开源软件热门路径事件采样率降低至 10%，以减少 PostHog 的数据量 (#4771)\n\n### 上一版本\n\n- v1.0.11（2026年4月6日）","2026-04-13T10:23:53",{"id":199,"version":200,"summary_zh":201,"released_at":202},280700,"cli-node-v0.2.3","## Mem0 节点 CLI（v0.2.3）\n\n**错误修复：**\n- **遥测：** 将共享的 `\"anonymous-cli\"` 
备用标识替换为每台机器上持久化的随机哈希值（`cli-anon-\u003Cuuid>`），以便在 PostHog 中将匿名 CLI 用户分别计数，而不是合并为一个身份标识（[#4789](https:\u002F\u002Fgithub.com\u002Fmem0ai\u002Fmem0\u002Fpull\u002F4789)）\n- **遥测：** 在首次认证登录时添加了 PostHog 的 `$identify` 事件，以将注册前的匿名使用记录关联到已认证的用户资料上（[#4789](https:\u002F\u002Fgithub.com\u002Fmem0ai\u002Fmem0\u002Fpull\u002F4789)）\n\n**改进：**\n- **API：** 所有 API 请求现在都会在请求体（POST\u002FPUT）和查询参数（GET\u002FDELETE）中包含 `source=CLI`，用于服务器端归因（[#4789](https:\u002F\u002Fgithub.com\u002Fmem0ai\u002Fmem0\u002Fpull\u002F4789)）","2026-04-11T15:32:55",{"id":204,"version":205,"summary_zh":206,"released_at":207},280701,"cli-v0.2.3","## Mem0 Python CLI（v0.2.3）\n\n**错误修复：**\n- **遥测：** 将共享的 `\"anonymous-cli\"` 备用标识替换为每台机器持久化的随机哈希值（`cli-anon-\u003Cuuid>`），以便在 PostHog 中将匿名 CLI 用户分别计数，而不是合并为一个身份标识（[#4789](https:\u002F\u002Fgithub.com\u002Fmem0ai\u002Fmem0\u002Fpull\u002F4789)）\n- **遥测：** 在首次认证登录时添加了 PostHog 的 `$identify` 事件，以将注册前的匿名使用记录关联到已认证的用户资料上（[#4789](https:\u002F\u002Fgithub.com\u002Fmem0ai\u002Fmem0\u002Fpull\u002F4789)）\n\n**改进：**\n- **API：** 所有 API 请求现在都会在请求体（POST\u002FPUT）和查询参数（GET\u002FDELETE）中包含 `source=CLI`，用于服务端归因（[#4789](https:\u002F\u002Fgithub.com\u002Fmem0ai\u002Fmem0\u002Fpull\u002F4789)）","2026-04-11T15:31:28",{"id":209,"version":210,"summary_zh":211,"released_at":212},280702,"openclaw-v1.0.6","## Mem0 OpenClaw 插件（v1.0.6）\n\n**错误修复：**\n- **遥测：** 将共享的 `\"anonymous-openclaw\"` 备用方案替换为基于每台机器的持久性随机哈希值（`openclaw-anon-\u003Cuuid>`），以便在 PostHog 中单独统计匿名插件用户（[#4790](https:\u002F\u002Fgithub.com\u002Fmem0ai\u002Fmem0\u002Fpull\u002F4790)）。\n- **遥测：** 在首次认证运行时添加了 PostHog 的 `$identify` 事件，以将匿名历史记录关联到已认证的用户档案中（[#4790](https:\u002F\u002Fgithub.com\u002Fmem0ai\u002Fmem0\u002Fpull\u002F4790)）。\n- **遥测：** 修复了短生命周期 CLI 调用中的事件丢失问题——添加了 `beforeExit` 处理程序，在进程退出前刷新队列中的事件（[#4790](https:\u002F\u002Fgithub.com\u002Fmem0ai\u002Fmem0\u002Fpull\u002F4790)）。\n- **遥测：** 添加了延迟的 `\u002Fv1\u002Fping\u002F` 邮箱解析功能，使得在 `mem0 init` 之外配置 API 密钥的用户，在 PostHog 中显示为其邮箱地址，而非 MD5 
哈希值（[#4790](https:\u002F\u002Fgithub.com\u002Fmem0ai\u002Fmem0\u002Fpull\u002F4790)）。\n- **遥测：** 在 needsSetup 分支上将 CLI 事件前缀从 `openclaw.\u003Ccmd>` 统一为 `openclaw.cli.\u003Ccmd>`，以与已认证分支保持一致（[#4790](https:\u002F\u002Fgithub.com\u002Fmem0ai\u002Fmem0\u002Fpull\u002F4790)）。\n\n**改进：**\n- **API：** 在所有工具、CLI 命令、召回功能以及 OSS 后端适配器中，向所有提供商调用（`add`、`search`、`getAll`）添加了 `source: \"OPENCLAW\"` 标记（[#4790](https:\u002F\u002Fgithub.com\u002Fmem0ai\u002Fmem0\u002Fpull\u002F4790)）。","2026-04-11T15:28:54",{"id":214,"version":215,"summary_zh":216,"released_at":217},280703,"openclaw-v1.0.5","## Mem0 OpenClaw 插件（v1.0.5）\n\n### 修复\n- **修复交互式选择 bug**：修复了 `openclaw mem0 init` 中的数字选择问题——输入 1\u002F2\u002F3 现在能够正确选择对应选项（此前由于 readline 的预填充功能与用户输入拼接，导致该功能失效）。\n- **OSS pgvector 崩溃**（[#4727]）：修复了在 OSS 模式下使用 pgvector 时出现的“客户端已连接”级联错误。之前的热身调用会吞掉错误，导致 PostgreSQL 客户端处于半初始化状态；随后并发的召回或捕获操作都会尝试对同一个客户端调用 `client.connect()`。修复方案：让热身阶段的错误继续传播（从而使 `initPromise` 重置并使用全新的 Memory 和 PostgreSQL 客户端重新尝试），并在每次尝试时构建新的配置对象，而不是修改共享状态。\n\n### 移除\n- **`orgId` \u002F `projectId` 配置参数**：已从配置 schema、CLI（`config show\u002Fget\u002Fset`）、初始化界面以及各提供者中移除。API 密钥是按项目作用域的，因此单独的组织和项目 ID 并不必要，且若不匹配反而可能引发访问错误。\n- **`enableGraph` 配置参数**：已从所有配置界面、提供者、后端及工具中移除。图内存功能即将弃用，移除该标志可避免不必要的暴露。","2026-04-09T12:13:31",{"id":219,"version":220,"summary_zh":221,"released_at":222},280704,"cli-v0.2.2","## Mem0 Python CLI（v0.2.2）\n\n### 新增\n- 匿名和代理遥测支持","2026-04-06T12:01:21",{"id":224,"version":225,"summary_zh":226,"released_at":227},280705,"openclaw-v1.0.4","## Mem0 OpenClaw 插件（v1.0.4）\n\n### 新增\n- **交互式初始化流程**：`openclaw mem0 init` 提供交互式菜单（邮箱验证或直接输入 API 密钥）。非交互模式包括：`--api-key`、`--email`、`--email --code`。\n- **`memory_add` 工具**：取代 `memory_store`——名称现与 `mem0` CLI 和平台 API 一致。\n- **`memory_delete` 工具**：统一删除功能——支持单个 ID 删除、搜索后删除、批量删除以及实体级级联删除。取代了 `memory_forget` 和 `memory_delete_all`。\n- **CLI 子命令**：`openclaw mem0 init`、`openclaw mem0 status`（由 `stats` 重命名）、`openclaw mem0 config show`、`openclaw mem0 config set`。\n- **`import` CLI 命令**：从 JSON 
文件批量导入记忆，并可使用 `--user-id` 和 `--agent-id` 覆盖默认值。\n- **`event list` \u002F `event status` CLI 命令**：用于监控后台处理事件。\n- **`fs-safe.ts` 模块**：将文件系统操作封装为独立模块（同步读写、检查是否存在、创建目录、删除文件），并置于单独的入口文件中，以避免文件 I\u002FO 影响主包。\n- **`backend\u002F` 模块**：新增 `PlatformBackend`，为 CLI 命令提供直接 HTTP API 访问。\n- **`cli\u002Fconfig-file.ts`**：插件认证信息持久化存储于 `~\u002F.openclaw\u002Fopenclaw.json`。\n- **插件清单**：在 `openclaw.plugin.json` 中新增 `contracts.tools`、`configSchema` 和 `uiHints`。\n- **测试套件**：共 10 个测试文件，包含 329 个测试用例，覆盖工具、CLI、配置、梦之门、提供商及技能加载器等功能。\n\n### 变更\n- **模块化架构**：将工具提取至 `tools\u002F` 目录（6 个文件），并将 CLI 功能移至 `cli\u002Fcommands.ts`——`index.ts` 代码量从约 1700 行减少至约 890 行。\n- **代码拆分**：使用 tsup 构建时启用 `splitting: true`，并设置两个入口文件（`index.ts` 和 `fs-safe.ts`），从而将文件系统 I\u002FO 与主包分离。\n- **技能文档更新**：所有 `SKILL.md` 文件均引用新的工具名称（`memory_add`、`memory_delete`），与插件清单保持一致。\n- **WRITE_TOOLS 更新**：梦之门现追踪 `memory_delete` 和 `memory_add`，而非之前的 `memory_forget` 和 `memory_store`。\n- **`mem0ai` 依赖库**：从 `2.3.0` 升级至 `2.4.5`。\n- **自动召回超时机制**：召回操作被包裹在 8 秒的 `Promise.race` 中——若 LLM 处理时间过长，则跳过召回，避免阻塞网关。\n- **自动捕获“即发即忘”模式**：`provider.add()` 现通过 `.then()\u002F.catch()` 在后台执行——`agent_end` 钩子会立即返回，不会阻塞事件循环。\n- **自动捕获最小内容门槛**：当用户内容经过滤后总长度小于 50 字符时，将跳过提取过程——对于“好的”、“谢谢”等简短对话，不再触发 LLM 调用。\n- **CLI 搜索阈值**：将阈值下调至 0.3，使显式搜索比自动召回更加宽松。\n- **初始化默认值**：按下回车键时，默认选择 `1`（邮箱登录）；用户 ID 默认为操作系统用户名——不再允许空值。\n- **初始化不再存储 `baseUrl`**：直接使用 `https:\u002F\u002Fapi.mem0.ai`，不再将其持久化到配置中。\n- **帮助输出重新排序**：`openclaw mem0 help` 现将高价值命令（如搜索、添加）置于首位，而非按字母顺序排列。\n\n### 移除\n- `","2026-04-06T11:59:16",{"id":229,"version":230,"summary_zh":231,"released_at":232},280706,"cli-node-v0.2.2","## Mem0 Node CLI (v0.2.2)\r\n\r\n### Added\r\n- Anonymous and Agent Telemetry Support","2026-04-06T11:54:40",{"id":234,"version":235,"summary_zh":236,"released_at":237},280707,"ts-v2.4.6","## Mem0 Node SDK (v2.4.6)\r\n\r\n**New Features & Updates:**\r\n- **Client:** Added `multilingual` parameter to project update types 
([#4314](https:\u002F\u002Fgithub.com\u002Fmem0ai\u002Fmem0\u002Fpull\u002F4314))\r\n","2026-04-06T11:53:03",{"id":239,"version":240,"summary_zh":241,"released_at":242},280708,"v1.0.11","## Mem0 Python SDK (v1.0.11)\r\n**New Features & Updates:**\r\n- **SDK:** Added `multilingual` parameter to project update ([#4314](https:\u002F\u002Fgithub.com\u002Fmem0ai\u002Fmem0\u002Fpull\u002F4314))\r\n\r\n**Bug Fixes:**\r\n- **LLMs:** Fixed Groq model configuration ([#4700](https:\u002F\u002Fgithub.com\u002Fmem0ai\u002Fmem0\u002Fpull\u002F4700))\r\n- **Core:** Prevented thread and memory leaks from PostHog telemetry ([#4535](https:\u002F\u002Fgithub.com\u002Fmem0ai\u002Fmem0\u002Fpull\u002F4535))\r\n- **Vector Stores:** Used `DatetimeRange` for datetime string values in Qdrant range filters ([#4659](https:\u002F\u002Fgithub.com\u002Fmem0ai\u002Fmem0\u002Fpull\u002F4659))\r\n- **Configs:** Added missing `ConfigDict` to vector store configs (Elasticsearch, MongoDB, Neptune, OpenSearch, PGVector, Supabase, Valkey) ([#4656](https:\u002F\u002Fgithub.com\u002Fmem0ai\u002Fmem0\u002Fpull\u002F4656))","2026-04-06T11:31:00",{"id":244,"version":245,"summary_zh":246,"released_at":247},280709,"openclaw-v1.0.4-beta.0","## openclaw-v1.0.4-beta.0 - 2026-04-03 (beta)\r\n\r\n### Added\r\n- **Interactive login flow**: `openclaw mem0 login` with interactive menu (email verification or direct API key). Non-interactive modes: `--api-key`, `--email`, `--email --code`\r\n- **Config file fallback**: Reads `~\u002F.mem0\u002Fconfig.json` (shared with Python CLI) when no API key in plugin config. Supports both camelCase and snake_case field names\r\n- **CLI subcommands**: `openclaw mem0 login`, `openclaw mem0 search`, `openclaw mem0 stats`, `openclaw mem0 status`, `openclaw mem0 dream`\r\n- **`memory_delete` tool**: Unified delete — single ID, search-then-delete, bulk, entity cascade. 
Replaces `memory_forget` and `memory_delete_all`\r\n- **Backend layer**: `backend\u002Fbase.ts` + `backend\u002Fplatform.ts` with direct `fetch()` for platform mode, `providerToBackend()` adapter for OSS\r\n- **Plugin manifest**: Added `name`, `description`, `contracts.tools`, `baseUrl` config field, CLI `descriptors` for lazy-loading\r\n\r\n### Changed\r\n- **Modular architecture**: Extracted tools into `tools\u002F` directory (7 files) and CLI into `cli\u002Fcommands.ts` — `index.ts` down from 1724 to ~780 lines\r\n- **WRITE_TOOLS updated**: Dream gate tracks `memory_delete` instead of removed `memory_forget` \u002F `memory_delete_all`\r\n- **Auto-recall timeout** (#4634): Recall wrapped in 8-second `Promise.race` — if OSS\u002FOllama LLM takes too long, recall is skipped instead of stalling the gateway\r\n- **Auto-capture fire-and-forget** (#4634): `provider.add()` now runs in the background via `.then()\u002F.catch()` — the `agent_end` hook returns immediately, zero event loop blocking\r\n- **Auto-capture minimum content gate**: Skips extraction when total user content is \u003C50 chars after filtering — trivial conversations (\"ok\", \"thanks\") no longer trigger LLM calls\r\n- **CLI search**: Removed `source: \"OPENCLAW\"` filter and lowered threshold to 0.3 so explicit searches find all memories, not just plugin-tagged ones\r\n\r\n### Removed\r\n- `memory_forget` tool — replaced by `memory_delete`\r\n- `memory_delete_all` tool — replaced by `memory_delete`\r\n- `memory_status` tool — redundant with `openclaw mem0 status` CLI\r\n- `memory_import` tool — bulk import, rarely needed by agents\r\n- `entity_list`, `entity_delete` tools — niche, platform-only\r\n- `event_list`, `event_status` tools — debugging tools, not agent tools\r\n- Duplicate `ToolContext` interfaces from individual tool files — now imports from canonical `tools\u002Findex.ts`","2026-04-03T15:50:14",{"id":249,"version":250,"summary_zh":251,"released_at":252},280710,"openclaw-v1.0.3","# 
@mem0\u002Fopenclaw-mem0 v1.0.3\r\n\r\n**Patch release — security fix, regression revert, supply-chain hardening**\r\n\r\nCompatibility: OpenClaw Gateway `>=2026.3.24-beta.2` | mem0ai `2.3.0`\r\n\r\n---\r\n\r\n## What's Changed\r\n\r\n### Security\r\n\r\n* **fix(openclaw): path traversal in skill-loader** — `readSkillFile` and `readDomainOverlay` constructed file paths from user-controllable config values (`config.domain`) via `path.join()` without verifying the result stayed within the skills directory. A crafted `domain` value containing `..\u002F` could read arbitrary files on the host filesystem. Added `safePath()` containment helper that resolves and validates all paths before any `fs.readFileSync` call. The exported `loadSkill` API is now self-defending against traversal inputs.\r\n\r\n* **fix(openclaw): pin mem0ai to exact 2.3.0** — Changed `\"mem0ai\": \"^2.3.0\"` to `\"mem0ai\": \"2.3.0\"`. The semver caret range accepted any `2.x.y >= 2.3.0`, meaning a compromised minor or patch release would auto-install on `npm install`. Exact pinning eliminates this supply-chain vector.\r\n\r\n### Bug Fixes\r\n\r\n* **fix(openclaw): revert broken Post-Compaction regex rename** — PR #4678 renamed `Post-Compaction` to `After-Compaction` in two noise-filter regex patterns in `filtering.ts`, claiming this cleared a security scanner false positive. The upstream system emits messages with the literal string `\"Post-Compaction Audit\"`, so the renamed regex silently stopped matching real noise — leaking compaction audit messages into the memory extraction pipeline. 
Reverted to the correct `Post-Compaction` pattern.\r\n\r\n* **fix(openclaw): revert cosmetic comment change in recall.ts** — Restored `\u002F\u002F Over-fetch for ranking` comment (was changed to `\u002F\u002F Request more candidates for ranking` to work around a scanner matching the substring `fetch` in a code comment).\r\n\r\n### Tests\r\n\r\n* Added 12 new tests in `skill-loader.test.ts`:\r\n  - 8 unit tests for `safePath()` covering parent traversal, deep traversal, nested segment traversal, bare `..`, valid paths, and disguised traversal\r\n  - 4 integration tests for `loadSkill()` covering traversal rejection, valid skill loading, and malicious domain overlay with valid skill\r\n\r\n---\r\n\r\n## Upgrade\r\n\r\n```bash\r\nopenclaw plugins install @mem0\u002Fopenclaw-mem0@1.0.3\r\n```\r\n\r\nNo configuration changes required. Fully backward-compatible with v1.0.2.\r\n\r\n---","2026-04-02T19:08:57",{"id":254,"version":255,"summary_zh":256,"released_at":257},280711,"cli-v0.2.1","# mem0-cli v0.2.1\r\n\r\n### Documentation\r\n- Expanded README with comprehensive command reference — all 13 commands with flags, examples, agent mode, output formats, global flags, and environment variables\r\n\r\n### Fixes\r\n- Restored purple brand color palette (`#8b5cf6` \u002F `#a78bfa`)\r\n- Synced `__init__.py` version with `pyproject.toml`\r\n- Removed hardcoded version assertion tests that broke on every version bump\r\n\r\n### Other\r\n- Version aligned with Node SDK (both now `0.2.1`)\r\n","2026-04-02T17:26:22",{"id":259,"version":260,"summary_zh":261,"released_at":262},280712,"cli-node-v0.2.1","# @mem0\u002Fcli v0.2.1\r\n\r\n### Documentation\r\n- Expanded README with comprehensive command reference — all 13 commands with flags, examples, agent mode, output formats, global flags, and environment variables\r\n\r\n### Fixes\r\n- Restored purple brand color palette (`#8b5cf6` \u002F `#a78bfa`)\r\n- Added `repository` field to `package.json` for npm provenance 
verification\r\n\r\n### CI\u002FCD\r\n- Added CD workflows with OIDC trusted publishing\r\n\r\n### Other\r\n- Version aligned with Python SDK (both now `0.2.1`)\r\n","2026-04-02T17:25:15",{"id":264,"version":265,"summary_zh":266,"released_at":267},280713,"openclaw-v1.0.2","# @mem0\u002Fopenclaw-mem0 v1.0.2\r\n\r\nPatch release that eliminates the OpenClaw security scanner warning by removing redundant `process.env` access from the plugin bundle.\r\n\r\n## What's Changed\r\n\r\n### Fixed\r\n* **fix(openclaw): remove process.env access to clear security scanner warning** — removed `resolveEnvVars()` and `resolveEnvVarsDeep()` from `config.ts`; OpenClaw already resolves `${VAR}` in `openclaw.json` before passing config to the plugin, so plugin-side env resolution was redundant and was triggering the \"credential harvesting\" static analysis warning by @chaithanyak42 in https:\u002F\u002Fgithub.com\u002Fmem0ai\u002Fmem0\u002Fpull\u002F4676\r\n\r\n## Compatibility\r\n\r\n| Requirement | Version |\r\n| --- | --- |\r\n| OpenClaw Gateway | `>=2026.3.24-beta.2` |\r\n| Plugin SDK | `2026.3.24-beta.2` |\r\n| mem0ai | `^2.3.0` |\r\n\r\n## Upgrade\r\n\r\n```bash\r\nnpm install @mem0\u002Fopenclaw-mem0@1.0.2\r\n```","2026-04-02T15:52:31",{"id":269,"version":270,"summary_zh":271,"released_at":272},280714,"openclaw-v1.0.1","# @mem0\u002Fopenclaw-mem0 v1.0.1\r\n\r\nPatch release with dream gate reliability fixes, graceful startup for key-less environments, and automated publishing infrastructure.\r\n\r\n## What's Changed\r\n\r\n### Fixed\r\n* **fix(openclaw): dream gate correctness** — cheap-first ordering, session isolation, verified completion by @chaithanyak42 in https:\u002F\u002Fgithub.com\u002Fmem0ai\u002Fmem0\u002Fpull\u002F4666\r\n* **fix(openclaw): graceful startup without API key** — plugin now initializes cleanly when no API key is configured by @chaithanyak42 in https:\u002F\u002Fgithub.com\u002Fmem0ai\u002Fmem0\u002Fpull\u002F4669\r\n\r\n### Added\r\n* **fix(openclaw): 
plugin configuration** — added `compat` and `build` metadata to `package.json`, specifying minimum gateway version (`>=2026.3.24-beta.2`) and OpenClaw SDK compatibility; added Apache-2.0 LICENSE file by @kartik-mem0 in https:\u002F\u002Fgithub.com\u002Fmem0ai\u002Fmem0\u002Fpull\u002F4667\r\n* **ci: add CD workflow for @mem0\u002Fopenclaw-mem0** — continuous deployment with OIDC trusted publishing by @whysosaket in https:\u002F\u002Fgithub.com\u002Fmem0ai\u002Fmem0\u002Fpull\u002F4672\r\n\r\n## Compatibility\r\n\r\n| Requirement | Version |\r\n| --- | --- |\r\n| OpenClaw Gateway | `>=2026.3.24-beta.2` |\r\n| Plugin SDK | `2026.3.24-beta.2` |\r\n| mem0ai | `^2.3.0` |\r\n\r\n## Upgrade\r\n\r\n```bash\r\nnpm install @mem0\u002Fopenclaw-mem0@1.0.1\r\n```\r\n","2026-04-02T14:59:47",{"id":274,"version":275,"summary_zh":276,"released_at":277},280715,"cli-node-v0.1.2","# @mem0\u002Fcli Changelog\r\n\r\n## 0.1.2\r\n\r\n### New Features\r\n\r\n- **Event commands** — `mem0 event list` and `mem0 event status \u003Cid>` to track background processing events (#4649)\r\n- **`--json` \u002F `--agent` global flag** — switches all command output to a structured JSON envelope for programmatic and agent consumption (#4649)\r\n- **Email verification login** — `mem0 init` now supports email-based verification code login in addition to API key (#4623)\r\n- **Brand refresh** — updated color palette from purple to golden (#4664)\r\n\r\n### Bug Fixes\r\n\r\n- **Fix critical crash** on startup in certain environments (#4636)\r\n- **Fix `status` command** — now uses lightweight `\u002Fv1\u002Fping\u002F` endpoint instead of heavyweight entity\u002Fmemory checks (#4649)\r\n- **Fix double error printing** — `cmd_add` was printing errors twice (once explicitly, once via `timed_status`) (#4636)\r\n- **Fix entity delete** — switched from v1 to v2 API (`DELETE \u002Fv2\u002Fentities\u002F{type}\u002F{id}\u002F`), all entity types now work (#4649)\r\n- **Fix `mem0 init` in non-TTY** — `--api-key` 
alone now defaults `user_id` to `$USER`; warns before overwriting existing config with `--force` flag (#4649)\r\n- **Fix stdin hang** — `add`, `search`, `update` no longer hang waiting for stdin when called with no input (#4649)\r\n- **Deduplicate PENDING results** — shows \"1 event pending\" instead of misleading \"2 memories extracted\" in `mem0 add` (#4649)\r\n- **Auth error UX** — all commands show a helpful `mem0 init` hint when unauthenticated (#4649)\r\n\r\n### Improvements\r\n\r\n- **Agent output sanitization** — raw API responses are projected to only relevant fields per command, removing noise like `graph_status` and duplicate scope fields (#4649)\r\n- **Full UUIDs in tables** — no longer truncated, so `mem0 get \u003Cid>` works directly from table output (#4636)\r\n- **Score column in search** — search results now show relevance scores (#4636)\r\n- **Short config aliases** — `config get api_key`, `user_id`, etc. now work (#4636)\r\n- **Client-side validation** — validates `--expires`, `--page-size`, `--page`, `--top-k`, `--threshold`, and empty content before hitting the API (#4636)\r\n- **Better API error messages** — shows full response detail instead of bare \"Bad Request\" (#4636)\r\n- **`mem0 version`** registered as a proper subcommand (#4636)\r\n- **`list -o json`** returns a pagination envelope instead of a bare array (#4636)\r\n","2026-04-02T10:55:10"]