[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-oceanbase--powermem":3,"tool-oceanbase--powermem":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":79,"owner_email":79,"owner_twitter":80,"owner_website":81,"owner_url":82,"languages":83,"stars":111,"forks":112,"last_commit_at":113,"license":114,"difficulty_score":23,"env_os":115,"env_gpu":116,"env_ram":116,"env_deps":117,"category_tags":121,"github_topics":122,"view_count":23,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":141,"updated_at":142,"faqs":143,"releases":172},2031,"oceanbase\u002Fpowermem","powermem","PowerMem: Your AI-Powered Long-Term Memory — Accurate, Agile, Affordable. 
Also friendly support for the OpenClaw Memory Plugin.","PowerMem 是一个为 AI 应用打造的智能长期记忆系统，帮助大语言模型像人一样记住对话历史、用户偏好和上下文信息，而不是每次重头开始。它通过融合向量检索、全文搜索和图数据库技术，并引入认知科学中的“艾宾浩斯遗忘曲线”，实现更精准、高效的记忆管理。相比直接使用完整上下文，PowerMem 在保持高准确率的同时，响应速度提升近 92%，token 消耗减少 96%，大幅降低使用成本。它支持多智能体独立记忆与安全协作，具备细粒度权限控制和隐私保护机制。开发者可通过简单的 Python SDK、命令行或 HTTP API 快速集成，也兼容 OpenClaw 等主流 AI 框架。适合 AI 开发者、智能体系统研究者和需要构建长期交互能力的应用团队使用，尤其适合对性能、成本和隐私有要求的项目。","\u003Cp align=\"center\">\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Foceanbase\u002Foceanbase\">\n        \u003Cimg alt=\"OceanBase Logo\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foceanbase_powermem_readme_4761c34d2e36.png\" width=\"50%\" \u002F>\n    \u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n\n*PowerMem integrated with [OpenClaw](https:\u002F\u002Fgithub.com\u002Fopenclaw-ai\u002Fopenclaw): intelligent memory for AI agents. **OpenClaw PowerMem Plugin**: [View Plugin](https:\u002F\u002Fgithub.com\u002Fob-labs\u002Fmemory-powermem)*\n\nOne command to add PowerMem memory to OpenClaw: `openclaw plugins install memory-powermem`.\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foceanbase_powermem_readme_482ac403be4c.jpeg\" alt=\"PowerMem with OpenClaw\" width=\"900\"\u002F>\n\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n    \u003Ca href=\"https:\u002F\u002Fpepy.tech\u002Fproject\u002Fpowermem\">\n        \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fdm\u002Fpowermem\" alt=\"PowerMem PyPI - Downloads\">\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Foceanbase\u002Fpowermem\">\n        \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fcommit-activity\u002Fm\u002Foceanbase\u002Fpowermem?style=flat-square\" alt=\"GitHub commit activity\">\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Fpowermem\" target=\"blank\">\n        \u003Cimg 
src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fpowermem?color=%2334D058&label=pypi%20package\" alt=\"Package version\">\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Foceanbase\u002Fpowermem\u002Fblob\u002Fmaster\u002FLICENSE\">\n        \u003Cimg alt=\"license\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flicense-Apache%202.0-green.svg\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpython%20-3.10.0%2B-blue.svg\">\n        \u003Cimg alt=\"pyversions\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpython%20-3.10.0%2B-blue.svg\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fdeepwiki.com\u002Foceanbase\u002Fpowermem\">\n        \u003Cimg alt=\"Ask DeepWiki\" src=\"https:\u002F\u002Fdeepwiki.com\u002Fbadge.svg\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fdiscord.com\u002Finvite\u002F74cF8vbNEs\">\n        \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDiscord-Join%20Discord-5865F2?logo=discord&logoColor=white\" alt=\"Join Discord\">\n    \u003C\u002Fa>\n\u003C\u002Fp>\n\n[English](README.md) | [中文](README_CN.md) | [日本語](README_JP.md)\n\n## ✨ Highlights\n\n\u003Cdiv align=\"center\">\n\n\u003Cimg src=\"docs\u002Fimages\u002Fbenchmark_metrics_en.svg\" alt=\"PowerMem LOCOMO Benchmark Metrics\" width=\"900\"\u002F>\n\n\u003C\u002Fdiv>\n\n- 🎯 **Accurate**: **[48.77% Accuracy Improvement]** More accurate than full-context in the LOCOMO benchmark (78.70% VS 52.9%)\n- ⚡ **Agile**: **[91.83% Faster Response]** Significantly reduced p95 latency for retrieval compared to full-context (1.44s VS 17.12s)\n- 💰 **Affordable**: **[96.53% Token Reduction]** Significantly reduced costs compared to full-context without sacrificing performance (0.9k VS 26k)\n\n# 🧠 PowerMem - Intelligent Memory System\n\nIn AI application development, enabling large language models to persistently \"remember\" historical conversations, 
user preferences, and contextual information is a core challenge. PowerMem combines a hybrid storage architecture of vector retrieval, full-text search, and graph databases, and introduces the Ebbinghaus forgetting curve theory from cognitive science to build a powerful memory infrastructure for AI applications. The system also provides comprehensive multi-agent support capabilities, including agent memory isolation, cross-agent collaboration and sharing, fine-grained permission control, and privacy protection mechanisms, enabling multiple AI agents to achieve efficient collaboration while maintaining independent memory spaces.\n\n## 🚀 Core Features\n\n### 👨‍💻 Developer Friendly\n- 🔌 **[Lightweight Integration](docs\u002Fexamples\u002Fscenario_1_basic_usage.md)**: Provides a simple Python SDK, automatically loads configuration from `.env` files, enabling developers to quickly integrate into existing projects. Also supports [CLI](docs\u002Fguides\u002F0012-cli_usage.md) (`pmem`), [MCP Server](docs\u002Fapi\u002F0004-mcp.md), and [HTTP API Server](docs\u002Fapi\u002F0005-api_server.md) integration methods\n\n### 🧠 Intelligent Memory Management\n- 🔍 **[Intelligent Memory Extraction](docs\u002Fexamples\u002Fscenario_2_intelligent_memory.md)**: Automatically extracts key facts from conversations through LLM, intelligently detects duplicates, updates conflicting information, and merges related memories to ensure accuracy and consistency of the memory database\n- 📉 **[Ebbinghaus Forgetting Curve](docs\u002Fexamples\u002Fscenario_8_ebbinghaus_forgetting_curve.md)**: Based on the memory forgetting patterns from cognitive science, automatically calculates memory retention rates and implements time-decay weighting, prioritizing recent and relevant memories, allowing AI systems to naturally \"forget\" outdated information like humans\n\n### 👤 User Profile Support\n- 🎭 **[User Profile](docs\u002Fexamples\u002Fscenario_9_user_memory.md)**: Automatically builds and updates user 
profiles based on historical conversations and behavioral data, applicable to scenarios such as personalized recommendations and AI companionship, enabling AI systems to better understand and serve each user\n\n### 🤖 Multi-Agent Support\n- 🔐 **[Agent Shared\u002FIsolated Memory](docs\u002Fexamples\u002Fscenario_3_multi_agent.md)**: Provides independent memory spaces for each agent, supports cross-agent memory sharing and collaboration, and enables flexible permission management through scope control\n\n### 🎨 Multimodal Support\n- 🖼️ **[Text, Image, and Audio Memory](docs\u002Fexamples\u002Fscenario_7_multimodal.md)**: Automatically converts images and audio to text descriptions for storage, supports retrieval of multimodal mixed content (text + image + audio), enabling AI systems to understand richer contextual information\n\n### 💾 Deeply Optimized Data Storage\n- 📦 **[Sub Stores Support](docs\u002Fexamples\u002Fscenario_6_sub_stores.md)**: Implements data partition management through sub stores, supports automatic query routing, significantly improving query performance and resource utilization for ultra-large-scale data\n- 🔗 **[Hybrid Retrieval](docs\u002Fexamples\u002Fscenario_2_intelligent_memory.md)**: Combines multi-channel recall capabilities of vector retrieval, full-text search, and graph retrieval, builds knowledge graphs through LLM and supports multi-hop graph traversal for precise retrieval of complex memory relationships\n\n## 🚀 Quick Start\n\n### 📥 Installation\n\n```bash\npip install powermem\n```\n\n### 💡 Basic Usage(SDK)\n\n**✨ Simplest Way**: Create memory from `.env` file automatically! 
[Configuration Reference](.env.example)\n\n```python\nfrom powermem import Memory, auto_config\n\n# Load configuration (auto-loads from .env)\nconfig = auto_config()\n# Create memory instance\nmemory = Memory(config=config)\n\n# Add memory\nmemory.add(\"User likes coffee\", user_id=\"user123\")\n\n# Search memories\nresults = memory.search(\"user preferences\", user_id=\"user123\")\nfor result in results.get('results', []):\n    print(f\"- {result.get('memory')}\")\n```\n\nFor more detailed examples and usage patterns, see the [Getting Started Guide](docs\u002Fguides\u002F0001-getting_started.md).\n\n### ⌨️ PowerMem CLI (1.0.0+)\n\nPowerMem provides a command-line interface (`pmem`) for memory operations, configuration, backup\u002Frestore, and an interactive shell—without writing Python code.\n\n```bash\n# Add and search memories\npmem memory add \"User prefers dark mode\" --user-id user123\npmem memory search \"preferences\" --user-id user123\n\n# Configuration and statistics\npmem config show\npmem config init          # Interactive .env wizard\npmem stats --json\n\n# Interactive shell\npmem shell\n```\n\nFor full CLI reference and examples, see the [CLI Usage Guide](docs\u002Fguides\u002F0012-cli_usage.md).\n\n### 🌐 HTTP API Server & Dashboard\n\nPowerMem provides a production-ready HTTP API server that exposes all core memory management capabilities through RESTful APIs. It also serves a **Dashboard** (at `\u002Fdashboard\u002F`) as the web admin UI.\n\n**Relationship with SDK**: The API server uses the same PowerMem SDK under the hood and shares the same configuration (`.env` file). 
It provides an HTTP interface to the same memory management features available in the Python SDK, making PowerMem accessible to non-Python applications.\n\n**Starting the API Server (with Dashboard)**:\n\n```bash\n# Method 1: Using CLI command (after pip install)\npowermem-server --host 0.0.0.0 --port 8000\n\n# Method 2: Using Docker (API server + dashboard in one container)\ndocker run -d \\\n  --name powermem-server \\\n  -p 8000:8000 \\\n  -v $(pwd)\u002F.env:\u002Fapp\u002F.env:ro \\\n  --env-file .env \\\n  oceanbase\u002Fpowermem-server:latest\n\n# Or use Docker Compose (recommended)\ndocker-compose -f docker\u002Fdocker-compose.yml up -d\n```\n\nOnce started, the same server provides:\n- RESTful API endpoints for all memory operations\n- **Dashboard** at `http:\u002F\u002Flocalhost:8000\u002Fdashboard\u002F`\n- Interactive API documentation at `http:\u002F\u002Flocalhost:8000\u002Fdocs`\n- API Key authentication and rate limiting support\n- Same configuration as SDK (via `.env` file)\n\nFor complete API documentation and usage examples, see the [API Server Documentation](docs\u002Fapi\u002F0005-api_server.md).\n\n### 🔌 MCP Server\n\nPowerMem also provides a Model Context Protocol (MCP) server that enables integration with MCP-compatible clients such as Claude Desktop. The MCP server exposes PowerMem's memory management capabilities through the MCP protocol, allowing AI assistants to access and manage memories seamlessly.\n\n**Relationship with SDK**: The MCP server uses the same PowerMem SDK and shares the same configuration (`.env` file). 
It provides an MCP interface to the same memory management features, making PowerMem accessible to MCP-compatible AI assistants.\n\n**Installation**:\n\n```bash\n# Install PowerMem (required)\npip install powermem\n\n# Install uvx (if not already installed)\n# On macOS\u002FLinux:\ncurl -LsSf https:\u002F\u002Fastral.sh\u002Fuv\u002Finstall.sh | sh\n\n# On Windows:\npowershell -c \"irm https:\u002F\u002Fastral.sh\u002Fuv\u002Finstall.ps1 | iex\"\n```\n\n**Starting the MCP Server**:\n\n```bash\n# SSE mode (recommended, default port 8000)\nuvx powermem-mcp sse\n\n# SSE mode with custom port\nuvx powermem-mcp sse 8001\n\n# Stdio mode\nuvx powermem-mcp stdio\n\n# Streamable HTTP mode (default port 8000)\nuvx powermem-mcp streamable-http\n\n# Streamable HTTP mode with custom port\nuvx powermem-mcp streamable-http 8001\n```\n\n**Integration with Claude Desktop**:\n\nAdd the following configuration to your Claude Desktop config file:\n\n```json\n{\n  \"mcpServers\": {\n    \"powermem\": {\n      \"url\": \"http:\u002F\u002Flocalhost:8000\u002Fmcp\"\n    }\n  }\n}\n```\n\nThe MCP server provides tools for memory management including adding, searching, updating, and deleting memories. For complete MCP documentation and usage examples, see the [MCP Server Documentation](docs\u002Fapi\u002F0004-mcp.md).\n\n## 🔗 Integrations & Demos\n- 🔗 **openclaw Memory Plugin**: Use PowerMem as long-term memory in [openclaw](https:\u002F\u002Fgithub.com\u002Fopenclaw\u002Fopenclaw) via extraction, Ebbinghaus forgetting curve, multi-agent isolation. 
[View Plugin](https:\u002F\u002Fgithub.com\u002Fob-labs\u002Fmemory-powermem)\n- 🔗 **LangChain Integration**: Build medical support chatbot using LangChain + PowerMem + OceanBase, [View Example](examples\u002Flangchain\u002FREADME.md)\n- 🔗 **LangGraph Integration**: Build customer service chatbot using LangGraph + PowerMem + OceanBase, [View Example](examples\u002Flanggraph\u002FREADME.md)\n\n## 📚 Documentation\n\n- 📖 **[Getting Started](docs\u002Fguides\u002F0001-getting_started.md)**: Installation and quick start guide\n- ⌨️ **[CLI Usage Guide](docs\u002Fguides\u002F0012-cli_usage.md)**: PowerMem CLI (pmem) reference (1.0.0+)\n- ⚙️ **[Configuration Guide](docs\u002Fguides\u002F0003-configuration.md)**: Complete configuration options\n- 🤖 **[Multi-Agent Guide](docs\u002Fguides\u002F0005-multi_agent.md)**: Multi-agent scenarios and examples\n- 🔌 **[Integrations Guide](docs\u002Fguides\u002F0009-integrations.md)**: Integrations Guide\n- 📦 **[Sub Stores Guide](docs\u002Fguides\u002F0006-sub_stores.md)**: Sub stores usage and examples\n- 📋 **[API Documentation](docs\u002Fapi\u002Foverview.md)**: Complete API reference\n- 🏗️ **[Architecture Guide](docs\u002Farchitecture\u002Foverview.md)**: System architecture and design\n- 📓 **[Examples](docs\u002Fexamples\u002Foverview.md)**: Interactive Jupyter notebooks and use cases\n- 👨‍💻 **[Development Documentation](docs\u002Fdevelopment\u002Foverview.md)**: Developer documentation\n\n## ⭐ Highlights Release Notes\n\n| Version | Release Date | Function |\n|---------|--------------|---------|\n| 1.0.0 | 2026.03.16   | \u003Cul>\u003Cli>PowerMem CLI (pmem): memory operations, config management, backup\u002Frestore\u002Fmigrate, interactive shell, and shell completion\u003C\u002Fli>\u003Cli>Web Dashboard for memory management and visualization\u003C\u002Fli>\u003C\u002Ful> |\n| 0.5.0 | 2026.02.06   | \u003Cul>\u003Cli>Unified configuration governance across SDK\u002FAPI Server (pydantic-settings based)\u003C\u002Fli>\u003Cli>Added 
OceanBase native hybrid search support\u003C\u002Fli>\u003Cli>Enhanced Memory query handling and added sorting support for memory list operations\u003C\u002Fli>\u003Cli>Added user profile support for custom native-language output\u003C\u002Fli>\u003C\u002Ful> |\n| 0.4.0 | 2026.01.20   | \u003Cul>\u003Cli>Sparse vector support for enhanced hybrid retrieval, combining dense vector, full-text, and sparse vector search\u003C\u002Fli>\u003Cli>User memory query rewriting - automatically enhances search queries based on user profiles for improved recall\u003C\u002Fli>\u003Cli>Schema upgrade and data migration tools for existing tables\u003C\u002Fli>\u003C\u002Ful> |\n| 0.3.0 | 2026.01.09   | \u003Cul>\u003Cli>Production-ready HTTP API Server with RESTful endpoints for all memory operations\u003C\u002Fli>\u003Cli>Docker support for easy deployment and containerization\u003C\u002Fli>\u003C\u002Ful> |\n| 0.2.0 | 2025.12.16   | \u003Cul>\u003Cli>Advanced user profile management, supporting \"personalized experience\" for AI applications\u003C\u002Fli>\u003Cli>Expanded multimodal support, including text, image, and audio memory\u003C\u002Fli>\u003C\u002Ful> |\n| 0.1.0 | 2025.11.14   | \u003Cul>\u003Cli>Core memory management functionality, supporting persistent storage of memories\u003C\u002Fli>\u003Cli>Hybrid retrieval supporting vector, full-text, and graph search\u003C\u002Fli>\u003Cli>Intelligent memory extraction based on LLM fact extraction\u003C\u002Fli>\u003Cli>Full lifecycle memory management supporting Ebbinghaus forgetting curve\u003C\u002Fli>\u003Cli>Multi-Agent memory management support\u003C\u002Fli>\u003Cli>Multiple storage backend support (OceanBase, PostgreSQL, SQLite)\u003C\u002Fli>\u003Cli>Support for knowledge graph retrieval through multi-hop graph search\u003C\u002Fli>\u003C\u002Ful> |\n\n## 💬 Support\n\n- 🐛 **Issue Reporting**: [GitHub Issues](https:\u002F\u002Fgithub.com\u002Foceanbase\u002Fpowermem\u002Fissues)\n- 💭 **Discussions**: [GitHub 
Discussions](https:\u002F\u002Fgithub.com\u002Foceanbase\u002Fpowermem\u002Fdiscussions)\n\n---\n\n## 📄 License\n\nThis project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details.","\u003Cp align=\"center\">\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Foceanbase\u002Foceanbase\">\n        \u003Cimg alt=\"OceanBase Logo\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foceanbase_powermem_readme_4761c34d2e36.png\" width=\"50%\" \u002F>\n    \u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n\n*PowerMem 集成 [OpenClaw](https:\u002F\u002Fgithub.com\u002Fopenclaw-ai\u002Fopenclaw)：面向 AI 代理的智能内存。**OpenClaw PowerMem 插件**：[查看插件](https:\u002F\u002Fgithub.com\u002Fob-labs\u002Fmemory-powermem)*\n\n只需一条命令即可将 PowerMem 内存添加到 OpenClaw：`openclaw plugins install memory-powermem`。\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foceanbase_powermem_readme_482ac403be4c.jpeg\" alt=\"PowerMem 与 OpenClaw\" width=\"900\"\u002F>\n\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n    \u003Ca href=\"https:\u002F\u002Fpepy.tech\u002Fproject\u002Fpowermem\">\n        \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fdm\u002Fpowermem\" alt=\"PowerMem PyPI - 下载量\">\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Foceanbase\u002Fpowermem\">\n        \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fcommit-activity\u002Fm\u002Foceanbase\u002Fpowermem?style=flat-square\" alt=\"GitHub 提交活动\">\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Fpowermem\" target=\"blank\">\n        \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fpowermem?color=%2334D058&label=pypi%20软件包\" alt=\"软件包版本\">\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Foceanbase\u002Fpowermem\u002Fblob\u002Fmaster\u002FLICENSE\">\n        \u003Cimg alt=\"许可证\" 
src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F许可证-Apache%202.0-green.svg\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpython%20-3.10.0%2B-blue.svg\">\n        \u003Cimg alt=\"Python版本\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpython%20-3.10.0%2B-blue.svg\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fdeepwiki.com\u002Foceanbase\u002Fpowermem\">\n        \u003Cimg alt=\"问DeepWiki\" src=\"https:\u002F\u002Fdeepwiki.com\u002Fbadge.svg\" \u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fdiscord.com\u002Finvite\u002F74cF8vbNEs\">\n        \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDiscord-加入Discord-5865F2?logo=discord&logoColor=white\" alt=\"加入Discord\">\n    \u003C\u002Fa>\n\u003C\u002Fp>\n\n[English](README.md) | [中文](README_CN.md) | [日本語](README_JP.md)\n\n## ✨ 亮点\n\n\u003Cdiv align=\"center\">\n\n\u003Cimg src=\"docs\u002Fimages\u002Fbenchmark_metrics_en.svg\" alt=\"PowerMem LOCOMO 基准测试指标\" width=\"900\"\u002F>\n\n\u003C\u002Fdiv>\n\n- 🎯 **精准**：**[准确率提升48.77%]** 在LOCOMO基准测试中，比全上下文更精准（78.70% VS 52.9%）\n- ⚡ **敏捷**：**[响应速度提升91.83%]** 相较于全上下文，检索的p95延迟显著降低（1.44秒 VS 17.12秒）\n- 💰 **经济**：**[Token减少96.53%]** 相较于全上下文，成本大幅降低且性能不打折扣（0.9k VS 26k）\n\n# 🧠 PowerMem - 智能内存系统\n\n在AI应用开发中，让大型语言模型持久“记住”历史对话、用户偏好和上下文信息是一个核心挑战。PowerMem结合了向量检索、全文搜索和图数据库的混合存储架构，并引入认知科学中的埃宾浩斯遗忘曲线理论，为AI应用构建强大的内存基础设施。该系统还提供了全面的多智能体支持能力，包括智能体内存隔离、跨智能体协作与共享、细粒度权限控制以及隐私保护机制，使多个AI智能体能够在保持独立内存空间的同时实现高效协作。\n\n## 🚀 核心功能\n\n### 👨‍💻 开发者友好\n- 🔌 **[轻量级集成](docs\u002Fexamples\u002Fscenario_1_basic_usage.md)**：提供简单的Python SDK，自动从`.env`文件加载配置，方便开发者快速集成到现有项目中。同时支持[CLI](docs\u002Fguides\u002F0012-cli_usage.md) (`pmem`)、[MCP服务器](docs\u002Fapi\u002F0004-mcp.md)和[HTTP API服务器](docs\u002Fapi\u002F0005-api_server.md)集成方式\n\n### 🧠 智能内存管理\n- 🔍 **[智能内存提取](docs\u002Fexamples\u002Fscenario_2_intelligent_memory.md)**：通过LLM自动从对话中提取关键事实，智能检测重复内容，更新冲突信息并合并相关记忆，确保内存数据库的准确性和一致性\n- 📉 
**[埃宾浩斯遗忘曲线](docs\u002Fexamples\u002Fscenario_8_ebbinghaus_forgetting_curve.md)**：基于认知科学的遗忘规律，自动计算记忆保留率并实施时间衰减加权，优先保留近期和相关的记忆，让AI系统像人类一样自然“遗忘”过时信息\n\n### 👤 用户档案支持\n- 🎭 **[用户档案](docs\u002Fexamples\u002Fscenario_9_user_memory.md)**：根据历史对话和行为数据自动构建和更新用户档案，适用于个性化推荐和AI陪伴等场景，让AI系统更好地理解和服务每个用户\n\n### 🤖 多智能体支持\n- 🔐 **[智能体共享\u002F隔离内存](docs\u002Fexamples\u002Fscenario_3_multi_agent.md)**：为每个智能体提供独立的内存空间，支持跨智能体内存共享与协作，并通过范围控制实现灵活的权限管理\n\n### 🎨 多模态支持\n- 🖼️ **[文本、图像和音频内存](docs\u002Fexamples\u002Fscenario_7_multimodal.md)**：自动将图像和音频转换为文本描述进行存储，支持多模态混合内容（文本+图像+音频）的检索，让AI系统能够理解更丰富的上下文信息\n\n### 💾 深度优化的数据存储\n- 📦 **[子存储支持](docs\u002Fexamples\u002Fscenario_6_sub_stores.md)**：通过子存储实现数据分区管理，支持自动查询路由，显著提升超大规模数据的查询性能和资源利用率\n- 🔗 **[混合检索](docs\u002Fexamples\u002Fscenario_2_intelligent_memory.md)**：结合向量检索、全文搜索和图检索的多通道召回能力，通过LLM构建知识图谱，并支持多跳图遍历，精准检索复杂记忆关系\n\n## 🚀 快速开始\n\n### 📥 安装\n\n```bash\npip install powermem\n```\n\n### 💡 基本使用（SDK）\n\n**✨ 最简单方式**：从`.env`文件自动创建内存！[配置参考](.env.example)\n\n```python\nfrom powermem import Memory, auto_config\n\n# 加载配置（自动从.env加载）\nconfig = auto_config()\n# 创建内存实例\nmemory = Memory(config=config)\n\n# 添加内存\nmemory.add(\"用户喜欢咖啡\", user_id=\"user123\")\n\n# 搜索内存\nresults = memory.search(\"用户偏好\", user_id=\"user123\")\nfor result in results.get('results', []):\n    print(f\"- {result.get('memory')}\")\n```\n\n更多详细示例和使用模式，请参阅[快速入门指南](docs\u002Fguides\u002F0001-getting_started.md)。\n\n### ⌨️ PowerMem CLI (1.0.0+)\n\nPowerMem提供命令行界面(`pmem`)用于内存操作、配置、备份\u002F恢复以及交互式Shell——无需编写Python代码。\n\n```bash\n# 添加和搜索内存\npmem memory add \"用户偏好深色模式\" --user-id user123\npmem memory search \"偏好\" --user-id user123\n\n# 配置与统计\npmem config show  \npmem config init          # 交互式 .env 向导  \npmem stats --json  \n\n# 交互式 shell  \npmem shell  \n```\n\n如需查看完整的 CLI 参考和示例，请参阅[CLI 使用指南](docs\u002Fguides\u002F0012-cli_usage.md)。\n\n### 🌐 HTTP API 服务器与仪表板\n\nPowerMem 提供了一个生产就绪的 HTTP API 服务器，通过 RESTful API 暴露所有核心内存管理功能。此外，它还提供了一个**仪表板**（位于 `\u002Fdashboard\u002F`），作为 Web 管理界面。\n\n**与 SDK 的关系**：API 
服务器底层使用相同的 PowerMem SDK，并共享相同的配置文件（`.env`）。它为 Python SDK 中提供的相同内存管理功能提供了 HTTP 接口，使非 Python 应用程序也能访问 PowerMem。\n\n**启动 API 服务器（含仪表板）**：\n\n```bash\n# 方法一：使用 CLI 命令（pip 安装后）\npowermem-server --host 0.0.0.0 --port 8000  \n\n# 方法二：使用 Docker（API 服务器 + 仪表板在同一容器中）\ndocker run -d \\\n  --name powermem-server \\\n  -p 8000:8000 \\\n  -v $(pwd)\u002F.env:\u002Fapp\u002F.env:ro \\\n  --env-file .env \\\n  oceanbase\u002Fpowermem-server:latest  \n\n# 或者使用 Docker Compose（推荐）\ndocker-compose -f docker\u002Fdocker-compose.yml up -d\n```\n\n启动后，同一服务器将提供：\n- 所有内存操作的 RESTful API 端点  \n- 位于 `http:\u002F\u002Flocalhost:8000\u002Fdashboard\u002F` 的**仪表板**  \n- 位于 `http:\u002F\u002Flocalhost:8000\u002Fdocs` 的交互式 API 文档  \n- API Key 认证与速率限制支持  \n- 与 SDK 相同的配置（通过 `.env` 文件）\n\n如需查看完整的 API 文档与使用示例，请参阅[API 服务器文档](docs\u002Fapi\u002F0005-api_server.md)。\n\n### 🔌 MCP 服务器\n\nPowerMem 还提供了一个模型上下文协议（MCP）服务器，可与 Claude Desktop 等兼容 MCP 的客户端集成。MCP 服务器通过 MCP 协议暴露 PowerMem 的内存管理功能，让 AI 助手能够无缝访问和管理记忆。\n\n**与 SDK 的关系**：MCP 服务器使用相同的 PowerMem SDK，并共享相同的配置文件（`.env`）。它为 MCP 兼容的 AI 助手提供了相同的内存管理功能接口。\n\n**安装**：\n\n```bash\n# 安装 PowerMem（必选）\npip install powermem  \n\n# 安装 uvx（若尚未安装）\n# macOS\u002FLinux:\ncurl -LsSf https:\u002F\u002Fastral.sh\u002Fuv\u002Finstall.sh | sh  \n\n# Windows:\npowershell -c \"irm https:\u002F\u002Fastral.sh\u002Fuv\u002Finstall.ps1 | iex\"\n```\n\n**启动 MCP 服务器**：\n\n```bash\n# SSE 模式（推荐，默认端口 8000）\nuvx powermem-mcp sse  \n\n# 自定义端口的 SSE 模式\nuvx powermem-mcp sse 8001  \n\n# Stdio 模式\nuvx powermem-mcp stdio  \n\n# 可流式 HTTP 模式（默认端口 8000）\nuvx powermem-mcp streamable-http  \n\n# 自定义端口的可流式 HTTP 模式\nuvx powermem-mcp streamable-http 8001\n```\n\n**与 Claude Desktop 集成**：\n\n在您的 Claude Desktop 配置文件中添加以下配置：\n\n```json\n{\n  \"mcpServers\": {\n    \"powermem\": {\n      \"url\": \"http:\u002F\u002Flocalhost:8000\u002Fmcp\"\n    }\n  }\n}\n```\n\nMCP 服务器提供内存管理工具，包括添加、搜索、更新和删除记忆。如需查看完整的 MCP 文档与使用示例，请参阅[MCP 服务器文档](docs\u002Fapi\u002F0004-mcp.md)。\n\n## 🔗 集成与演示\n- 🔗 **openclaw 
内存插件**：通过提取、埃宾浩斯遗忘曲线、多智能体隔离，在 [openclaw](https:\u002F\u002Fgithub.com\u002Fopenclaw\u002Fopenclaw) 中将 PowerMem 用作长期记忆。[查看插件](https:\u002F\u002Fgithub.com\u002Fob-labs\u002Fmemory-powermem)\n- 🔗 **LangChain 集成**：使用 LangChain + PowerMem + OceanBase 构建医疗辅助聊天机器人，[查看示例](examples\u002Flangchain\u002FREADME.md)\n- 🔗 **LangGraph 集成**：使用 LangGraph + PowerMem + OceanBase 构建客户服务聊天机器人，[查看示例](examples\u002Flanggraph\u002FREADME.md)\n\n## 📚 文档\n\n- 📖 **[入门指南](docs\u002Fguides\u002F0001-getting_started.md)**：安装与快速入门指南  \n- ⌨️ **[CLI 使用指南](docs\u002Fguides\u002F0012-cli_usage.md)**：PowerMem CLI（pmem）参考（1.0.0+）  \n- ⚙️ **[配置指南](docs\u002Fguides\u002F0003-configuration.md)**：完整配置选项  \n- 🤖 **[多智能体指南](docs\u002Fguides\u002F0005-multi_agent.md)**：多智能体场景与示例  \n- 🔌 **[集成指南](docs\u002Fguides\u002F0009-integrations.md)**：集成指南  \n- 📦 **[子存储指南](docs\u002Fguides\u002F0006-sub_stores.md)**：子存储使用与示例  \n- 📋 **[API 文档](docs\u002Fapi\u002Foverview.md)**：完整 API 参考  \n- 🏗️ **[架构指南](docs\u002Farchitecture\u002Foverview.md)**：系统架构与设计  \n- 📓 **[示例](docs\u002Fexamples\u002Foverview.md)**：交互式 Jupyter Notebook 与用例  \n- 👨‍💻 **[开发文档](docs\u002Fdevelopment\u002Foverview.md)**：开发者文档  \n\n## ⭐ 亮点发布说明\n\n| 版本 | 发布日期 | 功能 |\n|------|----------|------|\n| 1.0.0 | 2026.03.16 | \u003Cul>\u003Cli>PowerMem CLI（pmem）：内存操作、配置管理、备份\u002F恢复\u002F迁移、交互式 Shell 以及 Shell 补全\u003C\u002Fli>\u003Cli>用于内存管理和可视化的 Web 仪表板\u003C\u002Fli>\u003C\u002Ful> |\n| 0.5.0 | 2026.02.06 | \u003Cul>\u003Cli>统一的 SDK\u002FAPI 服务器配置治理（基于 pydantic-settings）\u003C\u002Fli>\u003Cli>新增 OceanBase 原生混合搜索支持\u003C\u002Fli>\u003Cli>增强内存查询处理，为内存列表操作增加排序支持\u003C\u002Fli>\u003Cli>新增用户个人资料支持，以实现自定义母语输出\u003C\u002Fli>\u003C\u002Ful> |\n| 0.4.0 | 2026.01.20 | \u003Cul>\u003Cli>稀疏向量支持，增强混合检索能力，结合密集向量、全文和稀疏向量搜索\u003C\u002Fli>\u003Cli>用户内存查询重写——根据用户个人资料自动优化搜索查询，提升召回率\u003C\u002Fli>\u003Cli>模式升级与数据迁移工具，用于现有表\u003C\u002Fli>\u003C\u002Ful> |\n| 0.3.0 | 2026.01.09 | \u003Cul>\u003Cli>生产就绪的 HTTP API 服务器，提供所有内存操作的 RESTful 端点\u003C\u002Fli>\u003Cli>Docker 
支持，便于部署与容器化\u003C\u002Fli>\u003C\u002Ful> |\n| 0.2.0 | 2025.12.16 | \u003Cul>\u003Cli>高级用户个人资料管理，支持 AI 应用的“个性化体验”\u003C\u002Fli>\u003Cli>扩展多模态支持，包括文本、图像与音频记忆\u003C\u002Fli>\u003C\u002Ful> |\n| 0.1.0 | 2025.11.14 | \u003Cul>\u003Cli>核心内存管理功能，支持记忆的持久化存储\u003C\u002Fli>\u003Cli>混合检索支持向量、全文与图搜索\u003C\u002Fli>\u003Cli>基于 LLM 事实提取的智能记忆提取\u003C\u002Fli>\u003Cli>全生命周期内存管理，支持埃宾浩斯遗忘曲线\u003C\u002Fli>\u003Cli>多智能体内存管理支持\u003C\u002Fli>\u003Cli>多种存储后端支持（OceanBase、PostgreSQL、SQLite）\u003C\u002Fli>\u003Cli>支持通过多跳图搜索进行知识图谱检索\u003C\u002Fli>\u003C\u002Ful> |\n\n## 💬 支持\n\n- 🐛 **问题报告**：[GitHub Issues](https:\u002F\u002Fgithub.com\u002Foceanbase\u002Fpowermem\u002Fissues)  \n- 💭 **讨论**：[GitHub Discussions](https:\u002F\u002Fgithub.com\u002Foceanbase\u002Fpowermem\u002Fdiscussions)  \n\n---\n\n## 📄 许可证\n\n本项目采用 Apache 许可证 2.0 版本授权——详情请参阅 [LICENSE](LICENSE) 文件。","# PowerMem 快速上手指南\n\n## 环境准备\n\n- **Python 版本**：3.10.0 或更高  \n- **操作系统**：Linux \u002F macOS \u002F Windows（推荐使用 Linux 环境）  \n- **前置依赖**：无强制依赖，推荐安装 `pip` 和 `dotenv`（自动加载 `.env` 文件）  \n- **国内加速建议**：使用清华源加速安装  \n  ```bash\n  pip install powermem -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n  ```\n\n## 安装步骤\n\n```bash\npip install powermem -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n> 推荐使用 `pip` 安装官方 PyPI 包，无需额外编译或依赖配置。\n\n## 基本使用\n\n### 最简 SDK 使用（推荐）\n\n1. 创建 `.env` 文件（参考 `.env.example`），配置存储路径等参数（可选，默认使用本地文件系统）  \n2. 
在 Python 中直接调用：\n\n```python\nfrom powermem import Memory, auto_config\n\n# 自动从 .env 加载配置\nconfig = auto_config()\nmemory = Memory(config=config)\n\n# 添加记忆\nmemory.add(\"用户喜欢咖啡\", user_id=\"user123\")\n\n# 搜索记忆\nresults = memory.search(\"用户偏好\", user_id=\"user123\")\nfor result in results.get('results', []):\n    print(f\"- {result.get('memory')}\")\n```\n\n> 支持中文内容存储与检索，无需额外编码处理。\n\n### CLI 快速操作（无需写代码）\n\n```bash\n# 添加记忆\npmem memory add \"用户喜欢深色模式\" --user-id user123\n\n# 搜索记忆\npmem memory search \"偏好\" --user-id user123\n\n# 查看配置\npmem config show\n```\n\n> CLI 工具 `pmem` 1.0.0+ 已内置，安装后直接使用，支持交互式 shell：`pmem shell`","一位AI客服系统开发者正在为一家电商公司构建多智能体客服代理，需要让多个AI客服（如订单查询、退换货、物流跟踪）能长期记住用户的历史行为、偏好和过往纠纷记录，实现个性化、连贯的服务体验。\n\n### 没有 powermem 时\n- 每次对话都必须重新传入用户过去半年的聊天记录，导致单次请求上下文高达26k token，成本高昂且响应缓慢（平均延迟超17秒）\n- 客服代理之间无法共享记忆，比如“退换货代理”不知道用户曾因物流延迟投诉过，重复道歉却无解决方案\n- 用户反复提及相同问题（如“我上次说的快递单号”），系统无法自动识别并关联历史记录，体验割裂\n- 内存中存在大量重复或冲突信息（如用户两次填写的地址不一致），缺乏自动去重与冲突解决机制\n- 系统无法根据用户活跃度动态遗忘过时信息（如三个月前的退货记录），占用大量资源却无实际价值\n\n### 使用 powermem 后\n- 通过向量+图数据库混合检索，仅需0.9k token即可精准召回关键记忆，成本降低96.53%，响应速度提升91.83%，延迟降至1.44秒内\n- 多个客服代理可安全共享用户画像（如偏好品牌、历史投诉倾向），同时保持各自独立记忆空间，协作更高效\n- 自动提取对话中的关键事实（如“用户偏好顺丰、不喜欢电话回访”），并智能合并冲突信息，确保记忆准确一致\n- 借助艾宾浩斯遗忘曲线，自动弱化三个月前的低频记忆，释放存储资源，提升系统整体效率\n- 开发者仅需一条命令 `openclaw plugins install memory-powermem` 即可接入，无需重写逻辑，3小时内完成部署\n\npowermem 让AI客服真正“记得住、记得准、记得省”，把碎片化对话转化为有持续价值的用户认知资产。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Foceanbase_powermem_482ac403.jpg","oceanbase","OceanBase","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Foceanbase_069a8934.png","The Fastest Distributed Database for Transactional, Analytical, and  AI 
Workloads",null,"OceanBaseDB","https:\u002F\u002Fen.oceanbase.com\u002F","https:\u002F\u002Fgithub.com\u002Foceanbase",[84,88,92,96,100,104,107],{"name":85,"color":86,"percentage":87},"Python","#3572A5",92.8,{"name":89,"color":90,"percentage":91},"TypeScript","#3178c6",5.9,{"name":93,"color":94,"percentage":95},"Makefile","#427819",0.6,{"name":97,"color":98,"percentage":99},"Shell","#89e051",0.3,{"name":101,"color":102,"percentage":103},"CSS","#663399",0.2,{"name":105,"color":106,"percentage":103},"Dockerfile","#384d54",{"name":108,"color":109,"percentage":110},"HTML","#e34c26",0,613,71,"2026-04-04T01:13:49","NOASSERTION","Linux, macOS, Windows","未说明",{"notes":118,"python":119,"dependencies":120},"建议使用 .env 文件配置环境，首次运行可能需下载模型或依赖资源；支持通过 Docker 快速部署，推荐使用 Docker Compose 管理服务；CLI 和 API 服务无需编写代码即可使用，适合多语言集成。","3.10.0+",[67],[13,51,14,26,53,15],[123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140],"memory","context-engineering","databases","agentic","agents","chatbot","long-term-memory","multi-agent","ai-companion","ai","ai-agents","vector","moltbot","openclaw","clawdbot-plugin","clawdbot-skill","openclaw-extension","openclaw-plugin","2026-03-27T02:49:30.150509","2026-04-06T08:09:11.025299",[144,149,154,159,163,167],{"id":145,"question_zh":146,"answer_zh":147,"source_url":148},9231,"为什么 Snowflake 的 memory_id 在 JavaScript\u002FTypeScript 中精度丢失？如何解决？","因为 JavaScript 的 Number 类型只能安全表示 53 位以内的整数，而 Snowflake memory_id 是 64 位整数，超出范围后会丢失精度。解决方案是：在 API 和 SDK 的所有外部接口中，将 memory_id 以字符串形式传输，而不是数字。这样可确保客户端解析时保持精确值。","https:\u002F\u002Fgithub.com\u002Foceanbase\u002Fpowermem\u002Fissues\u002F91",{"id":150,"question_zh":151,"answer_zh":152,"source_url":153},9232,"如何为 powermem 命令行工具启用命令自动补全功能？","运行命令 `pmem --install-completion bash` 或 `pmem --install-completion zsh`，系统会提示是否将补全脚本写入 ~\u002F.bashrc 或 ~\u002F.zshrc。选择 y 后，重启终端即可使用。一级命令（如 pmem add）和二级命令（如 pmem config show）均支持 Tab 
自动补全。","https:\u002F\u002Fgithub.com\u002Foceanbase\u002Fpowermem\u002Fissues\u002F279",{"id":155,"question_zh":156,"answer_zh":157,"source_url":158},9233,"如何避免在部署 PowerMem 时要求用户安装 Node.js 和 pnpm？","不要将前端仪表盘的 dist 目录提交到代码库，而是通过 GitHub Actions 在 CI 流程中构建仪表盘，并将编译后的静态文件作为发布产物（release artifact）或嵌入 Docker 镜像中。这样用户部署时只需 Python，无需 Node.js 环境。","https:\u002F\u002Fgithub.com\u002Foceanbase\u002Fpowermem\u002Fissues\u002F138",{"id":160,"question_zh":161,"answer_zh":162,"source_url":153},9234,"为什么重复执行 pmem --install-completion 不会重复添加补全脚本到 shell 配置文件？","脚本会检查 ~\u002F.bashrc 或 ~\u002F.zshrc 是否已包含 source 命令，若已存在则不再追加，仅提示 'Already sourced in ...; no change.'；若不存在，则询问是否添加一行 source 命令。这避免了配置文件中出现重复内容。",{"id":164,"question_zh":165,"answer_zh":166,"source_url":148},9235,"为什么 memory_id 是整数而其他 *_id 是字符串，容易导致混淆？如何避免？","因为内部实现使用整数，但外部接口应统一用字符串表示。为避免混淆，建议在文档中明确说明：所有对外暴露的 ID（包括 memory_id）都应作为字符串处理，客户端不应假设其为数字类型。",{"id":168,"question_zh":169,"answer_zh":170,"source_url":171},9236,"如何正确使用 memory.add(infer=True) 避免误判新记忆为重复？","若遇到 infer=True 误判新记忆为重复的情况，可尝试：1）提高相似度搜索阈值；2）在搜索结果为空或相似度极低时跳过 LLM 判断，直接添加；3）优化 LLM 提示词，使其更严格区分新旧内容。但当前版本未修复此问题，建议临时使用 infer=False 保证可靠性。","https:\u002F\u002Fgithub.com\u002Foceanbase\u002Fpowermem\u002Fissues\u002F154",[173,178,183,188,193,198,203,208,213,218,223,228,233,238],{"id":174,"version":175,"summary_zh":176,"released_at":177},116330,"v1.1.0","# PowerMem 1.1.0\r\n\r\n**Release date:** April 2, 2026\r\n\r\n## Overview\r\n\r\nThis minor release focuses on **embedded seekdb** support for OceanBase-backed deployments, so you can run PowerMem against a **local embedded database** without a separate database server. It also includes CLI and embedding configuration fixes.\r\n\r\n## Quick start: from install to embedded seekdb\r\n\r\nA **minimal runnable path**: no Docker, no separate database service—only Python, `.env`, and your app or CLI.\r\n\r\n### 1. 
Install\r\n\r\n```bash\r\npip install powermem\r\n```\r\n\r\nEmbedded seekdb depends on **`pyseekdb`**, which is pulled in with **`powermem`**; you do not need an extra `pip install` for it.\r\n\r\n### 2. Configure `.env` (embedded mode)\r\n\r\nCopy the template and edit as needed:\r\n\r\n```bash\r\ncp .env.example .env\r\n```\r\n\r\nFor **OceanBase storage with embedded seekdb**, the essentials are: **do not configure a remote host**, and **point at a local data directory**:\r\n\r\n```env\r\n# OceanBase vector storage\r\nDATABASE_PROVIDER=oceanbase\r\n\r\n# Embedded mode: leave remote host empty (or unset)\r\nOCEANBASE_HOST=\r\n# Local data directory — same idea as ob_path in config\r\nOCEANBASE_PATH=.\u002Fseekdb_data\r\n\r\nOCEANBASE_PORT=2881\r\nOCEANBASE_USER=root@sys\r\nOCEANBASE_PASSWORD=your_password\r\nOCEANBASE_DATABASE=powermem\r\nOCEANBASE_COLLECTION=memories\r\n\r\n# Vector dimension must match your embedding model (example is a common size; adjust for your embedder)\r\nOCEANBASE_EMBEDDING_MODEL_DIMS=1536\r\n```\r\n\r\nConfigure **LLM** and **embedding** per the README (for example `LLM_PROVIDER`, `LLM_API_KEY`, `EMBEDDER_*`). The embedder output dimension and **`OCEANBASE_EMBEDDING_MODEL_DIMS`** must match. You can also run **`pmem config init`** for an interactive wizard that explains embedded seekdb vs remote OceanBase.\r\n\r\n## Added\r\n\r\n- **Embedded seekdb (OceanBase)**\r\n  - Configure a local data directory via `ob_path` and use embedded mode when no remote host is required.\r\n  - Automatic handling to ensure the target database exists before connecting.\r\n  - Safer defaults for small embedded datasets (e.g. 
HNSW index behavior where IVF-family indexes are unsuitable).\r\n  - **Synchronous update\u002Fdelete** paths for embedded storage to avoid stability issues from concurrent access.\r\n  - **`pyseekdb`** dependency for embedded use.\r\n\r\n\r\n## Changed\r\n\r\n- **CLI (`pmem config`)**\r\n  - Prompts and help text updated to explain embedded seekdb options alongside remote OceanBase.\r\n\r\n- **OceanBase vector store**\r\n  - **`update`** merges payloads with existing rows more reliably and reduces accidental loss of fields (including sparse embedding-related data).\r\n\r\n## Fixed\r\n\r\n- **CLI**\r\n  - **`--env-file`**: respects custom env files by loading `POWERMEM_ENV_FILE` during dotenv initialization.\r\n  - **Memory list**: consistent ID truncation in non-interactive output; ellipsis for truncated fields in interactive list.\r\n\r\n- **Embeddings**\r\n  - **Ollama**: `ollama_base_url` is accepted in Ollama embedding configuration.\r\n\r\n## Build & tooling\r\n\r\n- **CI**: workflow to **export Docker image** packages.\r\n\r\n## Upgrade notes\r\n\r\n1. If you use **OceanBase**, review **`.env.example`** for embedded vs remote connection settings (`ob_path`, host, etc.).\r\n2. For **embedded seekdb**, use the synchronous **`Memory`** API; see the async memory docs for the **`AsyncMemory`** limitation.\r\n3. 
**Python 3.11+** remains the supported baseline (as documented in the README).\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Foceanbase\u002Fpowermem\u002Fcompare\u002Fv1.0.2...v1.1.0","2026-04-02T07:37:31",{"id":179,"version":180,"summary_zh":181,"released_at":182},116331,"v1.0.2","## What's Changed\r\n* Powermem CLI Cases by @Ripcord55 in https:\u002F\u002Fgithub.com\u002Foceanbase\u002Fpowermem\u002Fpull\u002F369\r\n* fix(cli): honor --env-file by loading POWERMEM_ENV_FILE in dotenv step by @Teingi in https:\u002F\u002Fgithub.com\u002Foceanbase\u002Fpowermem\u002Fpull\u002F371\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Foceanbase\u002Fpowermem\u002Fcompare\u002Fv1.0.1...v1.0.2","2026-03-23T11:30:48",{"id":184,"version":185,"summary_zh":186,"released_at":187},116332,"v1.0.1","## What's Changed\r\n* pyobvector dependencies by @Ripcord55 in https:\u002F\u002Fgithub.com\u002Foceanbase\u002Fpowermem\u002Fpull\u002F361\r\n* Merge benchmark code by @Evenss in https:\u002F\u002Fgithub.com\u002Foceanbase\u002Fpowermem\u002Fpull\u002F362\r\n* Native hybrid search case modification by @Ripcord55 in https:\u002F\u002Fgithub.com\u002Foceanbase\u002Fpowermem\u002Fpull\u002F363\r\n* fix(api): register search router before memories by @Teingi in https:\u002F\u002Fgithub.com\u002Foceanbase\u002Fpowermem\u002Fpull\u002F367\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Foceanbase\u002Fpowermem\u002Fcompare\u002Fv1.0.0...v1.0.1","2026-03-19T12:46:54",{"id":189,"version":190,"summary_zh":191,"released_at":192},116333,"v1.0.0","## Why We're Shipping the CLI and Dashboard\r\n\r\nWe believe that in the Agent era, the products that pull ahead will not be the ones with only a UI for humans to click. 
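The FAQ entries earlier on this page explain that Snowflake-style `memory_id` values are 64-bit integers, which lose precision when parsed as JavaScript/TypeScript numbers, and that the fix is to transmit IDs as strings on every external interface. A minimal Python sketch of the failure mode (doubles, like JS numbers, are exact only up to 53 bits; the ID value here is hypothetical):

```python
# A 64-bit Snowflake-style ID near 2**63 (hypothetical value for illustration).
memory_id = 2**63 - 25  # 9223372036854775783

# IEEE 754 doubles (the only JS number type) carry 53 bits of mantissa,
# so integers above 2**53 get rounded to the nearest representable double.
as_double = float(memory_id)
print(int(as_double) == memory_id)  # False: precision was lost in transit

# Transmitting the ID as a string round-trips exactly.
as_string = str(memory_id)
print(int(as_string) == memory_id)  # True
```

This is the standard reason JSON APIs serialize 64-bit identifiers as strings rather than numbers.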
They will be the ones that deliver **two layers at once**: a layer where **Agents** can connect, execute, and orchestrate with low friction; and a layer where **humans** can understand the whole picture, form judgment, and consume results. The **CLI** is the operation surface for Agents and automation. The **Dashboard** is the cognition surface for people. Only when both exist does a product complete the shift from “software for humans” to “a system where humans and Agents work together.” That’s why, in v1.0.0, we’re making the CLI and the Memory Statistics and Analytics Dashboard first-class parts of PowerMem.\r\n\r\n- **CLI — the operation layer.**  \r\n  Agents and scripts need a stable, scriptable interface: same config (`.env`), same storage as the SDK and API. The CLI (`pmem`) gives them exactly that—add\u002Fsearch\u002Flist\u002Fbackup\u002Frestore from the terminal, in CI, or via an interactive shell (`pmem shell`)—without requiring a Python runtime or HTTP client in every context. It’s the low-friction surface for Agents to plug in and for you to automate bootstrap, migration, and recovery.\r\n\r\n- **Dashboard — the cognition layer.**  \r\n  Humans need to see what’s in the system: how many memories, how they’re distributed by user\u002Fagent\u002Ftype, and how healthy the system is. The dashboard is a web UI on top of the same HTTP API server, so operators and developers can build a mental model and make decisions without calling APIs or writing scripts. It’s optional to build and serve, but it’s the place where the “global view” lives.\r\n\r\nTogether, CLI and Dashboard close the loop: Agents operate; humans understand. That’s the dual-surface we’re shipping in v1.0.0.\r\n\r\n---\r\n\r\n## New Features\r\n\r\n### 1. 
CLI (`pmem`) — Command structure and config init\r\n\r\n#### Command structure\r\n\r\nAll memory operations now live under the **`memory`** subcommand so the CLI has a clear hierarchy and room for other command groups (config, stats, manage, shell).\r\n\r\n**Command overview:**\r\n\r\n| Group    | Subcommands | Purpose |\r\n|----------|-------------|---------|\r\n| **memory** | add, search, get, update, delete, list, delete-all | CRUD and semantic search over memories. |\r\n| **config** | show, validate, test, init | Inspect, validate, test, and create `.env` configuration. |\r\n| **stats**  | — | Print memory statistics (counts, distribution). |\r\n| **manage** | backup, restore, cleanup, migrate | Backup\u002Frestore, Ebbinghaus cleanup, store migration. |\r\n| **shell**  | — | Interactive REPL with session defaults. |\r\n\r\n**Invocation:** After `pip install powermem`, use `pmem` or `powermem-cli`. Global options: `--env-file PATH` \u002F `-e`, `--json` \u002F `-j`, `--verbose` \u002F `-v`, `--install-completion SHELL`, `--version`, `--help`.\r\n\r\n**Usage examples:**\r\n\r\n```bash\r\n# Memory: add, search, list (with filters and pagination)\r\npmem memory add \"User prefers dark mode\" --user-id user123\r\npmem memory add \"Meeting at 3pm Friday\" -u user1 -a agent1 --no-infer\r\npmem memory search \"user preferences\" --user-id user123\r\npmem memory search \"dark mode\" -l 5 -t 0.3 -j\r\npmem memory list --user-id user123 -l 20 -o 0 --sort-by created_at --order desc\r\npmem memory get 123456789 --user-id user123\r\npmem memory update 123456789 \"Updated content\" -m '{\"updated\": true}'\r\npmem memory delete 123456789 --yes\r\npmem memory delete-all --user-id user123 --confirm\r\n\r\n# Config: show, validate, test, and interactive init\r\npmem config show\r\npmem config show --section llm\r\npmem config validate -f .env.production\r\npmem config test -c database\r\npmem config init\r\npmem config init -f .env --test --component database\r\n\r\n# 
Statistics and management\r\npmem stats\r\npmem stats -u user123 --detailed -j\r\npmem manage backup -o backup.json --user-id user123 -l 1000\r\npmem manage restore -i backup.json --skip-duplicates --dry-run\r\npmem manage restore -i backup.json -u new_user\r\npmem manage cleanup --dry-run\r\npmem manage cleanup --threshold 0.2 -u user123 --force\r\n\r\n# Interactive shell (REPL with session defaults)\r\npmem shell\r\n# Inside shell: set user user123; add \"User likes tea\"; search \"preferences\"; list --limit 10; exit\r\n\r\n# Use a specific .env and JSON output\r\npmem -e .env.production --json stats\r\npmem --install-completion bash\r\n```\r\n\r\n\u003Cimg width=\"3640\" height=\"5164\" alt=\"image\" src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F50f03780-304d-4ae5-8cbb-e1f8fa53b3a8\" \u002F>\r\n\r\n\r\n#### `pmem config init`\r\n\r\nInteractive wizard to **create or update a `.env` file** so you can bootstrap PowerMem without copying `.env.example` by hand.\r\n\r\n- **Modes:** Quickstart (minimal prompts) or full custom (all sections).\r\n- **Options:** `--env-file PATH` (target file), `--dry-run` (no write), `--test` \u002F `--no-test` (run validation after writing), `--component` (e.g. 
`database`, `llm`, `embedder`, `all`) when using `--test`.\r\n\r\n**Examples:**\r\n\r\n```bash\r\npmem config init\r\npmem config i","2026-03-16T09:52:30",{"id":194,"version":195,"summary_zh":196,"released_at":197},116334,"v0.5.3","## What's Changed\r\n* fix lists bug by @Evenss in https:\u002F\u002Fgithub.com\u002Foceanbase\u002Fpowermem\u002Fpull\u002F268\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Foceanbase\u002Fpowermem\u002Fcompare\u002Fv0.5.2...v0.5.3","2026-02-26T09:01:14",{"id":199,"version":200,"summary_zh":201,"released_at":202},116335,"v0.5.2","## What's Changed\r\n* prompts: LANGUAGE DO NOT translate by @Teingi in https:\u002F\u002Fgithub.com\u002Foceanbase\u002Fpowermem\u002Fpull\u002F248\r\n* Enhance memory listing functionality with pagination and sorting support by @Evenss in https:\u002F\u002Fgithub.com\u002Foceanbase\u002Fpowermem\u002Fpull\u002F246\r\n* fix search bug by @Evenss in https:\u002F\u002Fgithub.com\u002Foceanbase\u002Fpowermem\u002Fpull\u002F253\r\n* Enhance PGVectorConfig for flexible database connection settings by @Evenss in https:\u002F\u002Fgithub.com\u002Foceanbase\u002Fpowermem\u002Fpull\u002F257\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Foceanbase\u002Fpowermem\u002Fcompare\u002Fv0.5.1...v0.5.2","2026-02-12T14:11:50",{"id":204,"version":205,"summary_zh":206,"released_at":207},116336,"v0.5.1","## What's Changed\r\n1. Default role filters removed in UserMemory profile extraction:\r\n- include_roles default changed from [\"user\"] to None\r\n- exclude_roles default changed from [\"assistant\"] to None\r\n2. 
Documentation updated to reflect the new defaults.\r\n\r\n## Impact\r\n- Behavior change: Profile extraction will now, by default, not filter messages by role unless you explicitly pass include_roles and\u002For exclude_roles.\r\n- This may change extracted profiles for users who relied on the previous implicit defaults (user-only input, assistant excluded).\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Foceanbase\u002Fpowermem\u002Fcompare\u002F0.5.0...v0.5.1","2026-02-06T09:33:57",{"id":209,"version":210,"summary_zh":211,"released_at":212},116337,"0.5.0","# PowerMem v0.5.0 Release Notes (2026-02-06)\r\n\r\n## Highlights\r\n\r\n- **Unified configuration governance** across SDK\u002FAPI Server (pydantic-settings based).\r\n- **OceanBase native hybrid search support**.\r\n- **Enhanced Memory query experience** with improved query handling and **sorting support** for memory list operations.\r\n- **User profiles** now support **custom native-language output**.\r\n\r\n## What’s Changed\r\n\r\n### Configuration & Settings\r\n\r\n- Unified configuration governance across the SDK and API Server based on pydantic-settings.\r\n\r\n### Retrieval & Storage\r\n\r\n- Added OceanBase native hybrid search support.\r\n\r\n### SDK \u002F API\r\n\r\n- Enhanced `Memory` query handling.\r\n- Added sorting support for memory list operations.\r\n\r\n### User Profile\r\n\r\n- Added user profile support for custom native-language output.\r\n\r\n## Installation\r\n\r\n```bash\r\npip install -U powermem==0.5.0\r\n```\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Foceanbase\u002Fpowermem\u002Fcompare\u002Fv0.4.0...0.5.0","2026-02-06T03:08:43",{"id":214,"version":215,"summary_zh":216,"released_at":217},116338,"v0.4.0","# PowerMem 0.4.0 Release Notes\r\n\r\n**Release Date:** January 20, 2026\r\n\r\nWe're excited to announce the release of PowerMem 0.4.0! 
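The v0.5.1 behavior change above — `include_roles`/`exclude_roles` defaults moving from `["user"]`/`["assistant"]` to `None` — can be pictured with a small, library-independent sketch (illustrative logic only, not PowerMem's actual implementation):

```python
def filter_messages(messages, include_roles=None, exclude_roles=None):
    """Select conversation messages by role.

    None (the v0.5.1 default) means "do not filter on this axis".
    """
    selected = []
    for msg in messages:
        if include_roles is not None and msg["role"] not in include_roles:
            continue
        if exclude_roles is not None and msg["role"] in exclude_roles:
            continue
        selected.append(msg)
    return selected

conversation = [
    {"role": "user", "content": "I prefer dark mode"},
    {"role": "assistant", "content": "Noted!"},
]

# Old implicit defaults: user-only input, assistant messages excluded.
old_behavior = filter_messages(
    conversation, include_roles=["user"], exclude_roles=["assistant"]
)
# New defaults: no role filtering — both messages reach profile extraction.
new_behavior = filter_messages(conversation)
print(len(old_behavior), len(new_behavior))  # 1 2
```

This makes the documented impact concrete: callers who relied on the previous implicit filtering now need to pass the roles explicitly.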
This version introduces significant enhancements to our retrieval capabilities with the addition of sparse vector support, enabling even more accurate and efficient memory search.\r\n\r\n## 🎉 What's New\r\n\r\n### ✨ Sparse Vector Support\r\n\r\nPowerMem 0.4.0 introduces **sparse vector support**, a powerful enhancement to our hybrid retrieval system. Sparse vectors complement our existing dense vector and full-text search capabilities, providing a third dimension of semantic matching that significantly improves search accuracy, especially for keyword-rich queries.\r\n\r\n**Key Features:**\r\n- **Enhanced Hybrid Retrieval**: Combines dense vector search, full-text search, and sparse vector search for superior retrieval accuracy\r\n- **Automatic Sparse Vector Generation**: Automatically generates sparse vectors when adding memories (no code changes required)\r\n- **Configurable Search Weights**: Fine-tune the influence of each search method (vector, full-text, and sparse) through weight configuration\r\n\r\n**Database Requirements:**\r\n- OceanBase >= 4.5.0\r\n- seekdb\r\n\r\n**Supported Providers:**\r\n- Qwen (text-embedding-v4)\r\n\r\n### 🔧 Schema Upgrade & Migration Tools\r\n\r\nTo help users upgrade existing tables and migrate historical data, we've introduced comprehensive migration tools:\r\n\r\n- **Schema Upgrade Script**: Automatically adds sparse vector support to existing OceanBase tables\r\n- **Data Migration Script**: Migrates historical data to include sparse vectors with progress tracking and error handling\r\n\r\nThe migration tools support:\r\n- Batch processing with configurable batch sizes\r\n- Multi-threaded migration for improved performance\r\n- Real-time progress monitoring\r\n- Automatic skip of already migrated records\r\n\r\n### 🧠 User Memory Query Rewriting\r\n\r\nPowerMem 0.4.0 introduces **intelligent query rewriting** for UserMemory, which automatically enhances search queries based on user profiles to improve recall and 
accuracy.\r\n\r\n**Key Features:**\r\n- **Automatic Query Enhancement**: Rewrites vague or ambiguous queries using user profile information to make them more precise\r\n- **Profile-Based Context**: Leverages extracted user profile content to fill in missing context in queries\r\n- **Graceful Fallback**: Automatically falls back to the original query if profile is missing, query is too short, or rewrite fails\r\n- **Configurable**: Enable\u002Fdisable via configuration, with optional custom rewrite instructions\r\n- **Transparent Operation**: No changes to search API - works seamlessly with existing `UserMemory.search()` calls\r\n\r\n**Configuration:**\r\n\r\nEnable query rewriting in your configuration:\r\n\r\n```python\r\nconfig = {\r\n    # ... other config\r\n    \"query_rewrite\": {\r\n        \"enabled\": True,\r\n        # Optional: custom instructions for rewrite behavior\r\n        # \"prompt\": \"Rewrite queries to be specific and grounded in the user profile.\"\r\n    }\r\n}\r\n```\r\n\r\nOr via environment variables:\r\n\r\n```env\r\nQUERY_REWRITE_ENABLED=true\r\n# QUERY_REWRITE_PROMPT=  # Optional custom instructions\r\n```\r\n\r\n**How It Works:**\r\n\r\nWhen `UserMemory.search()` is called with `user_id` and query rewrite is enabled:\r\n1. Retrieves the user's profile from the profile store\r\n2. Uses LLM to rewrite the query based on profile content\r\n3. Executes search with the rewritten query for better results\r\n4. 
Falls back to original query if any step fails\r\n\r\nThis feature significantly improves search recall by making queries more specific and context-aware based on what the system knows about each user.\r\n\r\n## 📚 Documentation\r\n\r\nComprehensive documentation has been added for sparse vector functionality:\r\n\r\n- **[Sparse Vector Guide](docs\u002Fguides\u002F0011-sparse_vector.md)**: Complete guide on configuring and using sparse vectors\r\n- **[Sparse Vector Example](docs\u002Fexamples\u002Fscenario_10_sparse_vector.md)**: Step-by-step tutorial with code examples\r\n- **[Migration Guide](docs\u002Fmigration\u002Fsparse_vector_migration.md)**: Detailed instructions for upgrading existing tables and migrating data\r\n\r\n## 🚀 Getting Started\r\n\r\n### Enable Sparse Vector\r\n\r\nAdd the following to your `.env` file:\r\n\r\n```env\r\n# Enable sparse vector\r\nSPARSE_VECTOR_ENABLE=true\r\n\r\n# Sparse vector embedding configuration\r\nSPARSE_EMBEDDER_PROVIDER=qwen\r\nSPARSE_EMBEDDER_API_KEY=your_api_key\r\nSPARSE_EMBEDDER_MODEL=text-embedding-v4\r\nSPARSE_EMBEDDER_DIMS=1536\r\n```\r\n\r\n### For New Tables\r\n\r\nSimply enable sparse vector in your configuration - no additional steps required:\r\n\r\n```python\r\nfrom powermem import Memory, auto_config\r\n\r\nconfig = auto_config()  # Ensure SPARSE_VECTOR_ENABLE=true\r\nmemory = Memory(config=config)\r\n\r\n# Add memories (automatically generates sparse vectors)\r\nmemory.add(\"Your memory content\", user_id=\"user123\")\r\n\r\n# Search (automatically uses sparse vector for hybrid search)\r\nresults = memory.search(\"query\", user_id=\"user123\")\r\n```\r\n\r\n### For Existing Tables\r\n\r\nUpgrade your existing tables using the migratio","2026-01-20T06:56:04",{"id":219,"version":220,"summary_zh":221,"released_at":222},116339,"v0.3.1","# Release v0.3.1\r\n\r\n## 🎉 Overview\r\n\r\nThis PR releases PowerMem v0.3.1, which includes bug fixes, new LLM provider support, and documentation improvements.\r\n\r\n## ✨ 
What's New\r\n\r\n### 🐛 Bug Fixes\r\n- **Fixed user profile extraction bug** (#172): Resolved an issue where user profile extraction was not working correctly in certain scenarios\r\n- **Fixed vector setting bug** (#170): Corrected a bug related to vector database configuration that was causing issues in vector retrieval operations\r\n\r\n### 🚀 New Features\r\n- **Added Zhipu AI (z.ai) integration support** (#165): PowerMem now supports Zhipu AI as an LLM provider, expanding the options for users to choose their preferred AI service\r\n\r\n### 📚 Documentation Improvements\r\n- **Added MCP and HTTP integration methods documentation** (#164): Comprehensive documentation for integrating PowerMem via MCP Server and HTTP API Server\r\n- **Updated and fixed multiple documentation issues** (#166, #84, #74, #61): Improved documentation accuracy and completeness across various sections\r\n\r\n## 📋 Changes Summary\r\n\r\n| Category | Description |\r\n|----------|-------------|\r\n| Bug Fixes | User profile extraction, vector setting configuration |\r\n| New Features | Zhipu AI provider integration |\r\n| Documentation | MCP\u002FHTTP integration guides, various documentation updates |\r\n\r\n## 🔄 Migration Notes\r\n\r\nNo breaking changes in this release. 
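The 0.4.0 notes above describe configurable weights over dense-vector, full-text, and sparse search channels. A weighted score fusion of that general shape might look like the following sketch — the weight names, normalization, and missing-channel handling here are assumptions for illustration, not PowerMem's actual formula:

```python
def fuse_scores(channel_scores, weights):
    """Combine per-channel relevance scores into one ranking score.

    channel_scores: {doc_id: {"vector": s, "fulltext": s, "sparse": s}}
    weights:        e.g. {"vector": 0.5, "fulltext": 0.3, "sparse": 0.2}
    A channel missing for a document contributes 0 to its score.
    """
    total = sum(weights.values())
    fused = {}
    for doc_id, scores in channel_scores.items():
        fused[doc_id] = sum(
            weights[ch] * scores.get(ch, 0.0) for ch in weights
        ) / total
    # Highest fused score first.
    return dict(sorted(fused.items(), key=lambda kv: kv[1], reverse=True))

scores = {
    "mem_1": {"vector": 0.9, "fulltext": 0.2, "sparse": 0.1},
    "mem_2": {"vector": 0.4, "fulltext": 0.8, "sparse": 0.9},
}
ranked = fuse_scores(scores, {"vector": 0.5, "fulltext": 0.3, "sparse": 0.2})
# mem_2 wins here: its full-text and sparse scores outweigh mem_1's dense score.
```

Tuning such weights is how keyword-rich queries (favoring sparse/full-text) and paraphrased queries (favoring dense vectors) can both be served by one index.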
Users can upgrade from v0.3.0 to v0.3.1 without any code changes.\r\n\r\n## 📦 Release Information\r\n\r\n- **Version**: 0.3.1\r\n- **Release Date**: 2026-01-13\r\n- **Previous Version**: 0.3.0\r\n\r\n---\r\n\r\n## Related PRs\r\n* docs: add MCP and HTTP integration methods documentation by @Teingi in https:\u002F\u002Fgithub.com\u002Foceanbase\u002Fpowermem\u002Fpull\u002F164\r\n* docs: update docs by @Teingi in https:\u002F\u002Fgithub.com\u002Foceanbase\u002Fpowermem\u002Fpull\u002F166\r\n* Added support for Zhipu AI (z.ai) integration by @wayyoungboy in https:\u002F\u002Fgithub.com\u002Foceanbase\u002Fpowermem\u002Fpull\u002F165\r\n* fix vector setting bug by @Evenss in https:\u002F\u002Fgithub.com\u002Foceanbase\u002Fpowermem\u002Fpull\u002F170\r\n* fix user profile extract bug by @Evenss in https:\u002F\u002Fgithub.com\u002Foceanbase\u002Fpowermem\u002Fpull\u002F172\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Foceanbase\u002Fpowermem\u002Fcompare\u002Fv0.3.0...v0.3.1","2026-01-13T03:40:51",{"id":224,"version":225,"summary_zh":226,"released_at":227},116340,"v0.3.0","# PowerMem 0.3.0 Release Notes\r\n\r\n**Release Date**: January 9, 2026\r\n\r\nWe're excited to announce the release of PowerMem 0.3.0! This version introduces a production-ready HTTP API Server and comprehensive Docker support, making PowerMem accessible to any application that supports HTTP calls, regardless of programming language.\r\n\r\n## 🎉 Major Features\r\n\r\n### 🌐 Production-Ready HTTP API Server\r\n\r\nPowerMem now provides a fully-featured HTTP API Server built with FastAPI, exposing all core memory management capabilities through RESTful APIs. 
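The 0.4.0 query-rewriting feature described above retrieves the user's profile, asks an LLM to rewrite the query, and falls back to the original query whenever the profile is missing, the query is too short, or the rewrite fails. A control-flow sketch of that pipeline (the function names, the length threshold, and the stand-in LLM are illustrative assumptions):

```python
def rewrite_query(query, profile, llm_rewrite, min_len=8):
    """Profile-based query rewriting with graceful fallback."""
    if not profile or len(query) < min_len:
        return query  # nothing to ground on, or query too short to rewrite
    try:
        rewritten = llm_rewrite(query, profile)
        return rewritten or query
    except Exception:
        return query  # rewrite failed: fall back to the original silently

# Stand-in for the LLM call: grounds the query in known user context.
def fake_llm(query, profile):
    return f"{query} (user prefers {profile['preference']})"

profile = {"preference": "dark mode"}
print(rewrite_query("ui settings", profile, fake_llm))  # rewritten
print(rewrite_query("ui", profile, fake_llm))           # too short -> unchanged
print(rewrite_query("ui settings", None, fake_llm))     # no profile -> unchanged
```

The fallback branches are what make the feature transparent to existing `search()` callers: the worst case is simply the original query.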
This enables seamless integration with any application that supports HTTP calls.\r\n\r\n#### Key Features\r\n\r\n- **RESTful API Endpoints**: Complete REST API for all memory operations\r\n  - Memory management (create, read, update, delete, search)\r\n  - Batch operations for efficient bulk processing\r\n  - User profile management\r\n  - Agent memory management and sharing\r\n  - System health and status monitoring\r\n\r\n- **Security & Authentication**\r\n  - API Key authentication with configurable keys\r\n  - Rate limiting to protect server resources (configurable per IP)\r\n  - CORS support for web applications\r\n\r\n- **Production Features**\r\n  - Health check endpoint for monitoring\r\n  - System status and metrics endpoints\r\n  - Prometheus-compatible metrics format\r\n  - Configurable worker processes for scalability\r\n\r\n#### API Endpoints Overview\r\n\r\n**System Endpoints**\r\n- `GET \u002Fapi\u002Fv1\u002Fsystem\u002Fhealth` - Health check (public endpoint)\r\n- `GET \u002Fapi\u002Fv1\u002Fsystem\u002Fstatus` - System status and configuration\r\n- `GET \u002Fapi\u002Fv1\u002Fsystem\u002Fmetrics` - Prometheus metrics\r\n- `DELETE \u002Fapi\u002Fv1\u002Fsystem\u002Fdelete-all-memories` - Bulk memory deletion\r\n\r\n**Memory Management Endpoints**\r\n- `POST \u002Fapi\u002Fv1\u002Fmemories` - Create memory (with intelligent extraction)\r\n- `POST \u002Fapi\u002Fv1\u002Fmemories\u002Fbatch` - Batch create memories\r\n- `GET \u002Fapi\u002Fv1\u002Fmemories` - List memories with pagination\r\n- `GET \u002Fapi\u002Fv1\u002Fmemories\u002F{memory_id}` - Get specific memory\r\n- `PUT \u002Fapi\u002Fv1\u002Fmemories\u002F{memory_id}` - Update memory\r\n- `PUT \u002Fapi\u002Fv1\u002Fmemories\u002Fbatch` - Batch update memories\r\n- `DELETE \u002Fapi\u002Fv1\u002Fmemories\u002F{memory_id}` - Delete memory\r\n- `DELETE \u002Fapi\u002Fv1\u002Fmemories\u002Fbatch` - Bulk delete memories\r\n- `POST \u002Fapi\u002Fv1\u002Fmemories\u002Fsearch` - Semantic search 
with hybrid retrieval\r\n\r\n**User Profile Endpoints**\r\n- `POST \u002Fapi\u002Fv1\u002Fusers\u002F{user_id}\u002Fprofile` - Create or update user profile\r\n- `GET \u002Fapi\u002Fv1\u002Fusers\u002F{user_id}\u002Fprofile` - Get user profile\r\n- `DELETE \u002Fapi\u002Fv1\u002Fusers\u002F{user_id}\u002Fprofile` - Delete user profile\r\n- `GET \u002Fapi\u002Fv1\u002Fusers\u002F{user_id}\u002Fmemories` - Get all user memories\r\n- `DELETE \u002Fapi\u002Fv1\u002Fusers\u002F{user_id}\u002Fmemories` - Delete all user memories\r\n\r\n**Agent Management Endpoints**\r\n- `POST \u002Fapi\u002Fv1\u002Fagents\u002F{agent_id}\u002Fmemories` - Create agent memory\r\n- `GET \u002Fapi\u002Fv1\u002Fagents\u002F{agent_id}\u002Fmemories` - Get agent memories\r\n- `POST \u002Fapi\u002Fv1\u002Fagents\u002F{agent_id}\u002Fmemories\u002Fshare` - Share memories between agents\r\n- `GET \u002Fapi\u002Fv1\u002Fagents\u002F{agent_id}\u002Fmemories\u002Fshare` - Get shared memories\r\n\r\n#### Quick Start\r\n\r\n```bash\r\n# Install PowerMem\r\npip install powermem\r\n\r\n# Start the API server\r\npowermem-server --host 0.0.0.0 --port 8000\r\n\r\n# Access interactive API documentation\r\n# Open http:\u002F\u002Flocalhost:8000\u002Fdocs in your browser\r\n```\r\n\r\n### 🐳 Docker Support\r\n\r\nComprehensive Docker support for easy deployment and containerization, making PowerMem production-ready for containerized environments.\r\n\r\n#### Features\r\n\r\n- **Dockerfile**: Production-optimized multi-stage build\r\n  - Minimal image size\r\n  - Non-root user for security\r\n  - Health check support\r\n  - Configurable build arguments for mirror sources\r\n\r\n- **Docker Compose**: Pre-configured setup for easy deployment\r\n  - Automatic environment variable loading\r\n  - Shared `.env` file support (SDK and Server)\r\n  - Health checks and auto-restart\r\n  - Logging configuration\r\n\r\n- **Production-Ready Configuration**\r\n  - Resource limits support\r\n  - Health check endpoints\r\n  - 
Structured logging with rotation\r\n  - Security best practices\r\n\r\n#### Quick Start with Docker\r\n\r\n```bash\r\n# Using Docker\r\ndocker run -d \\\r\n  --name powermem-server \\\r\n  -p 8000:8000 \\\r\n  -v $(pwd)\u002F.env:\u002Fapp\u002F.env:ro \\\r\n  --env-file .env \\\r\n  oceanbase\u002Fpowermem-server:latest\r\n\r\n# Using Docker Compose (recommended)\r\ndocker-compose -f docker\u002Fdocker-compose.yml up -d\r\n```\r\n\r\n## 📋 Configuration\r\n\r\nThe HTTP API Server shares the same configuration file (`.env`) as the PowerMem SDK, ensuring consistency between SDK and API server usage. New configuration sections have been added:\r\n\r\n### Server Configuration\r\n\r\n```bash\r\n# Server Settings\r\nPOWERMEM_SERVER_HOST=0.0.0.0\r\nPOWERMEM_SERVER_PORT=8000\r\nPOWERMEM_SERVER_WORKERS=4\r\nPOWERMEM_SERVER_RELOAD=false\r\n\r\n# Authentication\r\nPOWERMEM_SERVER_AUTH_ENABLED=false\r\nPOWERMEM_SERVER_API_KEYS=key1,key2,key3\r\n\r\n# Rate Limiting\r\nPOWERMEM_SERVER_RATE_LIMIT_ENABLED=true\r\nPOWERMEM_SERVER_RATE_LIMIT_PER_MINUTE=100\r\n\r\n# Logging\r\nPOWERMEM_SERVER_LOG_FILE=server.log\r\nPOWERMEM_SERVER_LOG_LEVEL=INFO\r\nPOWERMEM_SERVER_LOG_FORMAT=text\r\n\r\n# CORS\r\nPOWERMEM_SERVER_CORS_ENABLED=true\r\nPOWERMEM_SERVER_CORS_ORIG","2026-01-09T03:32:04",{"id":229,"version":230,"summary_zh":231,"released_at":232},116341,"v0.2.1","## What's Changed\r\n* fix(config):fix embedding_model_dims string bug by @Evenss in https:\u002F\u002Fgithub.com\u002Foceanbase\u002Fpowermem\u002Fpull\u002F128\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Foceanbase\u002Fpowermem\u002Fcompare\u002Fv0.2.0...v0.2.1","2025-12-19T10:05:13",{"id":234,"version":235,"summary_zh":236,"released_at":237},116342,"v0.2.0","# PowerMem 0.2.0 Release Notes\r\n\r\n**Release Date:** December 16, 2025\r\n\r\nWe're excited to announce the release of PowerMem 0.2.0! 
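The per-IP rate limiting configured above via `POWERMEM_SERVER_RATE_LIMIT_PER_MINUTE` can be pictured as a sliding one-minute window kept per client IP — a conceptual sketch, not the server's actual middleware:

```python
import time
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """Allow at most `limit` requests per `window` seconds, per client IP."""

    def __init__(self, limit=100, window=60.0):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # ip -> timestamps of recent requests

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[ip]
        # Evict timestamps that have fallen out of the window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # over the limit: a real server would reply 429
        q.append(now)
        return True

limiter = SlidingWindowLimiter(limit=3, window=60.0)
results = [limiter.allow("10.0.0.1", now=t) for t in (0, 1, 2, 3)]
print(results)  # [True, True, True, False]
```

Each IP gets an independent window, matching the "configurable per IP" wording in the 0.3.0 notes; once the oldest timestamps age out, the client is admitted again.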
This version introduces powerful new features that enhance AI applications with advanced user profile management and comprehensive multimodal support, enabling more personalized and contextually rich AI experiences.

## 🎉 What's New

### 👤 Advanced User Profile Management

PowerMem 0.2.0 introduces the **UserMemory** feature, providing intelligent user profile extraction and management capabilities that enable AI applications to deliver truly personalized experiences.

#### Key Features:

- **Automatic Profile Extraction**: Automatically extracts user-related information from conversations, including:
  - Personal details (name, age, location)
  - Professional information (profession, workplace)
  - Interests and preferences
  - Behavioral patterns

- **Continuous Profile Updates**: Profiles are automatically refined and updated as new conversations occur, ensuring the AI system always has the most current understanding of each user.

- **Efficient Profile Storage**: User profiles are stored separately from memories, enabling fast retrieval and efficient management of user-specific information.

- **Joint Search Capability**: Optionally include user profile information when searching memories, providing richer context for more accurate and personalized responses.

- **Profile Management API**: Complete CRUD operations for user profiles, including:
  - `profile()` - Retrieve user profiles
  - `delete_profile()` - Remove user profiles
  - Automatic profile extraction via the `add()` method

#### Use Cases:

- **Personalized Recommendations**: Build AI systems that understand user preferences and deliver tailored recommendations
- **AI Companionship**: Create AI companions that remember and adapt to individual users
- **Customer Service**: Enable customer service bots that maintain context about each customer's history and preferences
- **Personal Assistants**: Develop
assistants that learn user habits and preferences over time

#### Example Usage:

```python
from powermem import UserMemory, auto_config

config = auto_config()
user_memory = UserMemory(config=config)

# Add conversation - profile is automatically extracted
conversation = [
    {"role": "user", "content": "Hi, I'm Alice. I'm a 28-year-old software engineer from San Francisco."},
    {"role": "assistant", "content": "Nice to meet you, Alice!"}
]

result = user_memory.add(
    messages=conversation,
    user_id="user_001",
    agent_id="assistant_agent"
)

# Search with profile for personalized context
results = user_memory.search(
    query="user preferences",
    user_id="user_001",
    add_profile=True  # Include profile in results
)
```

> **Note**: UserMemory requires OceanBase as the storage backend.

### 🎨 Expanded Multimodal Support

PowerMem 0.2.0 significantly expands multimodal capabilities, enabling AI applications to process and remember not just text, but also images and audio content.

#### Key Features:

- **Image Memory Support**:
  - Process images from URLs
  - Automatic image-to-text description conversion using vision-capable LLM models
  - Support for mixed content (text + images)
  - Configurable image analysis precision (auto/low/high)

- **Audio Memory Support**:
  - Process audio files from URLs
  - Automatic speech-to-text transcription
  - Support for voice messages and audio content
  - Integration with ASR (Automatic Speech Recognition) providers

- **Unified Multimodal API**:
  - Standard OpenAI multimodal message format support
  - Seamless handling of text, image, and audio in a single message
  - Automatic content type detection and processing

- **Multimodal Retrieval**:
  - Search across text, image descriptions, and audio
transcriptions
  - Unified search interface for all content types
  - Metadata support for multimodal content

#### Supported Models:

- **Vision Models**: `gpt-4o`, `gpt-4-vision-preview`, `qwen-vl-plus`, `qwen-vl-max`
- **Audio ASR**: `qwen3-asr-flash` (via the qwen_asr provider)
- **Compatible**: Any model supporting the OpenAI vision API format

#### Example Usage:

```python
from powermem import Memory

# Configure with multimodal support
config = {
    "llm": {
        "provider": "openai",
        "config": {
            "model": "gpt-4o",
            "enable_vision": True,  # Enable vision processing
            "vision_details": "auto"
        }
    },
    "audio_llm": {
        "provider": "qwen_asr",
        "config": {
            "model": "qwen3-asr-flash",
            "api_key": "your-api-key"
        }
    }
}

memory = Memory(config=config)

# Add multimodal memory (text + image)
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "This is Bob's favorite workspace"},
            {
                # Standard OpenAI image_url format; the URL is illustrative
                "type": "image_url",
                "image_url": {"url": "https://example.com/workspace.jpg"}
            }
        ]
    }
]

memory.add(messages, user_id="user_001")
```

---

# v0.1.0 (2025-11-14)

## Highlights

- **More Accurate**: **[48.77% Accuracy Improvement]** More accurate than full-context in the LOCOMO benchmark (78.70% vs 52.9%)
- **Faster**: **[91.83% Faster Response]** Significantly reduced p95 retrieval latency compared to full-context (1.44s vs 17.12s)
- **More Economical**: **[96.53% Token Reduction]** Significantly reduced costs compared to full-context without sacrificing performance (0.9k vs 26k tokens)

# PowerMem - Intelligent Memory System

In AI application development, enabling large language models to persistently "remember" historical conversations, user preferences, and contextual information is a core
challenge. PowerMem combines a hybrid storage architecture of vector retrieval, full-text search, and graph databases, and applies the Ebbinghaus forgetting curve from cognitive science to build a powerful memory infrastructure for AI applications. The system also provides comprehensive multi-agent support, including agent memory isolation, cross-agent collaboration and sharing, fine-grained permission control, and privacy protection mechanisms, enabling multiple AI agents to collaborate efficiently while maintaining independent memory spaces.

## Core Features

### Developer Friendly
- **Lightweight Integration**: Provides a simple Python SDK that automatically loads configuration from `.env` files, enabling developers to quickly integrate it into existing projects

### Intelligent Memory Management
- **Intelligent Memory Extraction**: Automatically extracts key facts from conversations through an LLM, intelligently detects duplicates, updates conflicting information, and merges related memories to ensure the accuracy and consistency of the memory database
- **Ebbinghaus Forgetting Curve**: Based on forgetting patterns from cognitive science, automatically calculates memory retention rates and applies time-decay weighting, prioritizing recent and relevant memories so AI systems can naturally "forget" outdated information the way humans do

### Multi-Agent Support
- **Agent Shared/Isolated Memory**: Provides independent memory spaces for each agent, supports cross-agent memory sharing and collaboration, and enables flexible permission management through scope control

### Multimodal Support
- **Text, Image, and Audio Memory**: Automatically converts images and audio to text descriptions for storage and supports retrieval of mixed multimodal content (text + image + audio), enabling AI systems to understand richer contextual information

### Deeply Optimized Data Storage
- **Sub
Stores Support**: Implements data partition management through sub stores with automatic query routing, significantly improving query performance and resource utilization for ultra-large-scale data
- **Hybrid Retrieval**: Combines the multi-channel recall capabilities of vector retrieval, full-text search, and graph retrieval; builds knowledge graphs through an LLM and supports multi-hop graph traversal for precise retrieval of complex memory relationships

## Quick Start

### Installation

```bash
pip install powermem
```

### Basic Usage

**Simplest Way**: Create a memory instance automatically from your `.env` file!

```python
from powermem import Memory, auto_config

# Load configuration (auto-loads from .env)
config = auto_config()
# Create memory instance
memory = Memory(config=config)

# Add memory
memory.add("User likes coffee", user_id="user123")

# Search memories
results = memory.search("user preferences", user_id="user123")
for result in results.get('results', []):
    print(f"- {result.get('memory')}")
```

For more details, see the [GitHub repository](https://github.com/oceanbase/powermem).

## 💬 Support

- **Issue Reporting**: [GitHub Issues](https://github.com/oceanbase/powermem/issues)
- **Discussions**: [GitHub Discussions](https://github.com/oceanbase/powermem/discussions)

---

## License

This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details.
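The Ebbinghaus forgetting-curve behavior described in the v0.1.0 core features can be sketched as a simple time-decay weighting over retrieval scores. This is a toy illustration only: the exponential form `R = exp(-t / S)`, the `stability_hours` parameter, and the function names below are assumptions for demonstration, not PowerMem's actual implementation.

```python
import math
from datetime import datetime, timedelta


def retention(elapsed_hours: float, stability_hours: float = 24.0) -> float:
    """Ebbinghaus-style exponential forgetting: R = exp(-t / S).

    `stability_hours` (S) controls how quickly a memory fades; both the
    formula and the default value are illustrative assumptions.
    """
    return math.exp(-elapsed_hours / stability_hours)


def decay_weighted_score(similarity: float, created_at: datetime,
                         now: datetime, stability_hours: float = 24.0) -> float:
    """Down-weight a raw retrieval score by how much the memory has faded."""
    elapsed_hours = (now - created_at).total_seconds() / 3600.0
    return similarity * retention(elapsed_hours, stability_hours)


now = datetime(2025, 1, 2, 12, 0)
fresh = decay_weighted_score(0.9, now - timedelta(hours=1), now)
stale = decay_weighted_score(0.9, now - timedelta(hours=72), now)

# With equal raw similarity, the recent memory outranks the old one
print(f"fresh={fresh:.3f} stale={stale:.3f}")
```

In PowerMem itself this behavior is configured rather than hand-rolled; the sketch only shows why time-decay weighting lets recent, relevant memories win ties over stale ones with the same raw similarity.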