[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-agentscope-ai--ReMe":3,"tool-agentscope-ai--ReMe":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":79,"owner_company":80,"owner_location":80,"owner_email":81,"owner_twitter":80,"owner_website":80,"owner_url":82,"languages":83,"stars":92,"forks":93,"last_commit_at":94,"license":95,"difficulty_score":23,"env_os":96,"env_gpu":96,"env_ram":96,"env_deps":97,"category_tags":104,"github_topics":105,"view_count":10,"oss_zip_url":80,"oss_zip_packed_at":80,"status":16,"created_at":112,"updated_at":113,"faqs":114,"releases":140},938,"agentscope-ai\u002FReMe","ReMe","ReMe: Memory Management Kit for Agents - Remember Me, Refine Me.","ReMe 是一个专为 AI 智能体设计的记忆管理工具包，旨在解决智能体在对话和任务处理中的记忆问题。它通过文件存储和向量存储两种方式，帮助智能体实现长期记忆功能，避免因上下文窗口限制导致的信息丢失，同时让新会话能够继承历史数据，而不是从零开始。\n\nReMe 的核心优势在于其“真实记忆”能力：它可以自动压缩旧对话、持久存储重要信息，并在未来的交互中智能召回相关上下文。无论是个人助理、客服机器人还是代码助手，ReMe 都能让这些应用更好地记住用户偏好、项目背景或问题历史，从而提供更个性化的服务。例如，它可以帮代码助手记录开发者的风格偏好，或为客服系统追踪用户的过往问题和需求。\n\n对于开发者和研究人员来说，ReMe 提供了灵活的解决方案。它的文件存储模式（ReMeLight）将记忆以 Markdown 文件的形式保存，便于阅读、编辑和迁移；而向量存储模式则支持语义搜索和精准匹配，适合构建知识库或多轮对话系统。此外，ReMe 在 LoCoMo 和 HaluMem 等基准测试中表现出色，展现了其技术实力。\n\n如果你正在开发需要长期记忆功能的 AI 应用，或者希望提升智能体的上下文管理能力，ReMe 会是一个值得尝试的开源工具。","ReMe 是一个专为 AI 智能体设计的记忆管理工具包，旨在解决智能体在对话和任务处理中的记忆问题。它通过文件存储和向量存储两种方式，帮助智能体实现长期记忆功能，避免因上下文窗口限制导致的信息丢失，同时让新会话能够继承历史数据，而不是从零开始。\n\nReMe 
的核心优势在于其“真实记忆”能力：它可以自动压缩旧对话、持久存储重要信息，并在未来的交互中智能召回相关上下文。无论是个人助理、客服机器人还是代码助手，ReMe 都能让这些应用更好地记住用户偏好、项目背景或问题历史，从而提供更个性化的服务。例如，它可以帮代码助手记录开发者的风格偏好，或为客服系统追踪用户的过往问题和需求。\n\n对于开发者和研究人员来说，ReMe 提供了灵活的解决方案。它的文件存储模式（ReMeLight）将记忆以 Markdown 文件的形式保存，便于阅读、编辑和迁移；而向量存储模式则支持语义搜索和精准匹配，适合构建知识库或多轮对话系统。此外，ReMe 在 LoCoMo 和 HaluMem 等基准测试中表现出色，展现了其技术实力。\n\n如果你正在开发需要长期记忆功能的 AI 应用，或者希望提升智能体的上下文管理能力，ReMe 会是一个值得尝试的开源工具。","\u003Cp align=\"center\">\n \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fagentscope-ai_ReMe_readme_d312bba8a8ea.png\" alt=\"ReMe Logo\" width=\"50%\">\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Freme-ai\u002F\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpython-3.10+-blue\" alt=\"Python Version\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Freme-ai\u002F\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Freme-ai.svg?logo=pypi\" alt=\"PyPI Version\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fpepy.tech\u002Fproject\u002Freme-ai\u002F\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fdm\u002Freme-ai\" alt=\"PyPI Downloads\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fcommit-activity\u002Fm\u002Fagentscope-ai\u002FReMe?style=flat-square\" alt=\"GitHub commit activity\">\u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003Ca href=\".\u002FLICENSE\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flicense-Apache--2.0-black\" alt=\"License\">\u003C\u002Fa>\n  \u003Ca href=\".\u002FREADME.md\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FEnglish-Click-yellow\" alt=\"English\">\u003C\u002Fa>\n  \u003Ca href=\".\u002FREADME_ZH.md\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F简体中文-点击查看-orange\" 
alt=\"简体中文\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fagentscope-ai\u002FReMe?style=social\" alt=\"GitHub Stars\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fdeepwiki.com\u002Fagentscope-ai\u002FReMe\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDeepWiki-Ask_Devin-navy.svg\" alt=\"DeepWiki\">\u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n\u003Ca href=\"https:\u002F\u002Ftrendshift.io\u002Frepositories\u002F20528\" target=\"_blank\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fagentscope-ai_ReMe_readme_4cc089988f35.png\" alt=\"agentscope-ai%2FReMe | Trendshift\" style=\"width: 250px; height: 55px;\" width=\"250\" height=\"55\"\u002F>\u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003Cstrong>A memory management toolkit for AI agents — Remember Me, Refine Me.\u003C\u002Fstrong>\u003Cbr>\n\u003C\u002Fp>\n\n> For the older version, please refer to the [0.2.x documentation](docs\u002FREADME_0_2_x.md).\n\n---\n\n🧠 ReMe is a memory management framework designed for **AI agents**, providing\nboth [file-based](#-file-based-memory-system-remelight) and [vector-based](#-vector-based-memory-system) memory systems.\n\nIt tackles two core problems of agent memory: **limited context window** (early information is truncated or lost in long\nconversations) and **stateless sessions** (new sessions cannot inherit history and always start from scratch).\n\nReMe gives agents **real memory** — old conversations are automatically compacted, important information is persistently\nstored, and relevant context is automatically recalled in future interactions.\n\nReMe achieves state-of-the-art results on the LoCoMo and HaluMem benchmarks; see the [Experimental results](#experimental-results).\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>What you can do with 
ReMe\u003C\u002Fb>\u003C\u002Fsummary>\n\n\u003Cbr>\n\n- **Personal assistant**: Provide long-term memory for agents like [CoPaw](https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FCoPaw),\n  remembering user preferences and conversation history.\n- **Coding assistant**: Record code style preferences and project context, maintaining a consistent development\n  experience across sessions.\n- **Customer service bot**: Track user issue history and preference settings for personalized service.\n- **Task automation**: Learn success\u002Ffailure patterns from historical tasks to continuously optimize execution\n  strategies.\n- **Knowledge Q&A**: Build a searchable knowledge base with semantic search and exact matching support.\n- **Multi-turn dialogue**: Automatically compress long conversations while retaining key information within limited\n  context windows.\n\n\u003C\u002Fdetails>\n\n---\n\n## 📁 File-based memory system (ReMeLight)\n\n> Memory as files, files as memory.\n\nTreat **memory as files** — readable, editable, and copyable.\n[CoPaw](https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FCoPaw) integrates long-term memory and context management by inheriting from\n`ReMeLight`.\n\n| Traditional memory system | File-based ReMe      |\n|---------------------------|----------------------|\n| 🗄️ Database storage      | 📝 Markdown files    |\n| 🔒 Opaque                 | 👀 Always readable   |\n| ❌ Hard to modify          | ✏️ Directly editable |\n| 🚫 Hard to migrate        | 📦 Copy to migrate   |\n\n```\nworking_dir\u002F\n├── MEMORY.md              # Long-term memory: persistent info such as user preferences\n├── memory\u002F\n│   └── YYYY-MM-DD.md      # Daily journal: automatically written after each conversation\n├── dialog\u002F                # Raw conversation records: full dialog before compression\n│   └── YYYY-MM-DD.jsonl   # Daily conversation messages in JSONL format\n└── tool_result\u002F           # Cache for long tool outputs (auto-managed, expired 
entries auto-cleaned)\n    └── \u003Cuuid>.txt\n```\n\n### Core capabilities\n\n[ReMeLight](reme\u002Freme_light.py) is the core class of the file-based memory system. It provides full memory management\ncapabilities for AI agents:\n\n\u003Ctable>\n\u003Ctr>\u003Cth>Category\u003C\u002Fth>\u003Cth>Method\u003C\u002Fth>\u003Cth>Function\u003C\u002Fth>\u003Cth>Key components\u003C\u002Fth>\u003C\u002Ftr>\n\u003Ctr>\u003Ctd rowspan=\"4\">Context Management\u003C\u002Ftd>\u003Ctd>\u003Ccode>check_context\u003C\u002Fcode>\u003C\u002Ftd>\u003Ctd>📊 Check context size\u003C\u002Ftd>\u003Ctd>\u003Ca href=\"reme\u002Fmemory\u002Ffile_based\u002Fcomponents\u002Fcontext_checker.py\">ContextChecker\u003C\u002Fa> — checks whether context exceeds thresholds and splits messages\u003C\u002Ftd>\u003C\u002Ftr>\n\u003Ctr>\u003Ctd>\u003Ccode>compact_memory\u003C\u002Fcode>\u003C\u002Ftd>\u003Ctd>📦 Compact history into summary\u003C\u002Ftd>\u003Ctd>\u003Ca href=\"reme\u002Fmemory\u002Ffile_based\u002Fcomponents\u002Fcompactor.py\">Compactor\u003C\u002Fa> — ReActAgent that generates structured context summaries\u003C\u002Ftd>\u003C\u002Ftr>\n\u003Ctr>\u003Ctd>\u003Ccode>compact_tool_result\u003C\u002Fcode>\u003C\u002Ftd>\u003Ctd>✂️ Compact long tool outputs\u003C\u002Ftd>\u003Ctd>\u003Ca href=\"reme\u002Fmemory\u002Ffile_based\u002Fcomponents\u002Ftool_result_compactor.py\">ToolResultCompactor\u003C\u002Fa> — truncates long tool outputs and stores them in \u003Ccode>tool_result\u002F\u003C\u002Fcode> while keeping file references in messages\u003C\u002Ftd>\u003C\u002Ftr>\n\u003Ctr>\u003Ctd>\u003Ccode>pre_reasoning_hook\u003C\u002Fcode>\u003C\u002Ftd>\u003Ctd>🔄 Pre-reasoning hook\u003C\u002Ftd>\u003Ctd>\u003Ccode>compact_tool_result\u003C\u002Fcode> + \u003Ccode>check_context\u003C\u002Fcode> + \u003Ccode>compact_memory\u003C\u002Fcode> + \u003Ccode>summary_memory\u003C\u002Fcode> (async)\u003C\u002Ftd>\u003C\u002Ftr>\n\u003Ctr>\u003Ctd rowspan=\"2\">Long-term 
Memory\u003C\u002Ftd>\u003Ctd>\u003Ccode>summary_memory\u003C\u002Fcode>\u003C\u002Ftd>\u003Ctd>📝 Persist important memory to files\u003C\u002Ftd>\u003Ctd>\u003Ca href=\"reme\u002Fmemory\u002Ffile_based\u002Fcomponents\u002Fsummarizer.py\">Summarizer\u003C\u002Fa> — ReActAgent + file tools (\u003Ccode>read\u003C\u002Fcode> \u002F \u003Ccode>write\u003C\u002Fcode> \u002F \u003Ccode>edit\u003C\u002Fcode>)\u003C\u002Ftd>\u003C\u002Ftr>\n\u003Ctr>\u003Ctd>\u003Ccode>memory_search\u003C\u002Fcode>\u003C\u002Ftd>\u003Ctd>🔍 Semantic memory search\u003C\u002Ftd>\u003Ctd>\u003Ca href=\"reme\u002Fmemory\u002Ffile_based\u002Ftools\u002Fmemory_search.py\">MemorySearch\u003C\u002Fa> — hybrid retrieval with vectors + BM25\u003C\u002Ftd>\u003C\u002Ftr>\n\u003Ctr>\u003Ctd rowspan=\"2\">Session Memory\u003C\u002Ftd>\u003Ctd>\u003Ccode>get_in_memory_memory\u003C\u002Fcode>\u003C\u002Ftd>\u003Ctd>💾 Create in-session memory instance\u003C\u002Ftd>\u003Ctd>Returns ReMeInMemoryMemory with dialog_path configured for persistence\u003C\u002Ftd>\u003C\u002Ftr>\n\u003Ctr>\u003Ctd>\u003Ccode>await_summary_tasks\u003C\u002Fcode>\u003C\u002Ftd>\u003Ctd>⏳ Wait for async summary tasks\u003C\u002Ftd>\u003Ctd>Block until all background summary tasks complete\u003C\u002Ftd>\u003C\u002Ftr>\n\u003Ctr>\u003Ctd>-\u003C\u002Ftd>\u003Ctd>\u003Ccode>start\u003C\u002Fcode>\u003C\u002Ftd>\u003Ctd>🚀 Start memory system\u003C\u002Ftd>\u003Ctd>Initialize file storage, file watcher, and embedding cache; clean up expired tool result files\u003C\u002Ftd>\u003C\u002Ftr>\n\u003Ctr>\u003Ctd>-\u003C\u002Ftd>\u003Ctd>\u003Ccode>close\u003C\u002Fcode>\u003C\u002Ftd>\u003Ctd>📕 Shutdown and cleanup\u003C\u002Ftd>\u003Ctd>Clean up tool result files, stop file watcher, and persist embedding cache\u003C\u002Ftd>\u003C\u002Ftr>\n\u003C\u002Ftable>\n\n---\n\n### 🚀 Quick start\n\n#### Installation\n\n**Install from source:**\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe.git\ncd ReMe\npip 
install -e \".[light]\"\n```\n\n**Update to the latest version:**\n\n```bash\ngit pull\npip install -e \".[light]\"\n```\n\n#### Environment variables\n\n`ReMeLight` uses environment variables to configure the embedding model and storage backends:\n\n| Variable             | Description                   | Example                                             |\n|----------------------|-------------------------------|-----------------------------------------------------|\n| `LLM_API_KEY`        | LLM API key                   | `sk-xxx`                                            |\n| `LLM_BASE_URL`       | LLM base URL                  | `https:\u002F\u002Fdashscope.aliyuncs.com\u002Fcompatible-mode\u002Fv1` |\n| `EMBEDDING_API_KEY`  | Embedding API key (optional)  | `sk-xxx`                                            |\n| `EMBEDDING_BASE_URL` | Embedding base URL (optional) | `https:\u002F\u002Fdashscope.aliyuncs.com\u002Fcompatible-mode\u002Fv1` |\n\n#### Python usage\n\n```python\nimport asyncio\n\nfrom reme.reme_light import ReMeLight\n\n\nasync def main():\n    # Initialize ReMeLight\n    reme = ReMeLight(\n        default_as_llm_config={\"model_name\": \"qwen3.5-35b-a3b\"},\n        # default_embedding_model_config={\"model_name\": \"text-embedding-v4\"},\n        default_file_store_config={\"fts_enabled\": True, \"vector_enabled\": False},\n        enable_load_env=True,\n    )\n    await reme.start()\n\n    messages = [...]  # List of conversation messages\n\n    # 1. Check context size (token counting, determine if compaction is needed)\n    messages_to_compact, messages_to_keep, is_valid = await reme.check_context(\n        messages=messages,\n        memory_compact_threshold=90000,  # Threshold to trigger compaction (tokens)\n        memory_compact_reserve=10000,  # Token count to reserve for recent messages\n    )\n\n    # 2. 
Compact conversation history into a structured summary\n    summary = await reme.compact_memory(\n        messages=messages,\n        previous_summary=\"\",\n        max_input_length=128000,  # Model context window (tokens)\n        compact_ratio=0.7,  # Trigger compaction when exceeding max_input_length * 0.7\n        language=\"zh\",  # Summary language (e.g., \"zh\" \u002F \"\")\n    )\n\n    # 3. Compact long tool outputs (prevent tool results from blowing up context)\n    messages = await reme.compact_tool_result(messages)\n\n    # 4. Pre-reasoning hook (auto compact tool results + check context + generate summaries)\n    processed_messages, compressed_summary = await reme.pre_reasoning_hook(\n        messages=messages,\n        system_prompt=\"You are a helpful AI assistant.\",\n        compressed_summary=\"\",\n        max_input_length=128000,\n        compact_ratio=0.7,\n        memory_compact_reserve=10000,\n        enable_tool_result_compact=True,\n        tool_result_compact_keep_n=3,\n    )\n\n    # 5. Persist important memory to files (writes to memory\u002FYYYY-MM-DD.md)\n    summary_result = await reme.summary_memory(\n        messages=messages,\n        language=\"zh\",\n    )\n\n    # 6. Semantic memory search (vector + BM25 hybrid retrieval)\n    result = await reme.memory_search(query=\"Python version preference\", max_results=5)\n\n    # 7. Create in-session memory instance (manages context for one conversation)\n    memory = reme.get_in_memory_memory()  # Auto-configures dialog_path\n    for msg in messages:\n        await memory.add(msg)\n    token_stats = await memory.estimate_tokens(max_input_length=128000)\n    print(f\"Current context usage: {token_stats['context_usage_ratio']:.1f}%\")\n    print(f\"Message token count: {token_stats['messages_tokens']}\")\n    print(f\"Estimated total tokens: {token_stats['estimated_tokens']}\")\n\n    # 8. 
Mark messages as compressed (auto-persists to dialog\u002FYYYY-MM-DD.jsonl)\n    # await memory.mark_messages_compressed(messages_to_compact)\n\n    # Shutdown ReMeLight\n    await reme.close()\n\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n```\n\n> 📂 Full example: [test_reme_light.py](tests\u002Flight\u002Ftest_reme_light.py)\n> 📋 Sample run log: [test_reme_light_log.txt](tests\u002Flight\u002Ftest_reme_light_log.txt) (223,838 tokens → 1,105 tokens, 99.5%\n> compression)\n\n### Architecture of the file-based ReMeLight memory system\n\n#### Context data structure\n\n```mermaid\nflowchart TD\n    A[Context] --> B[compact_summary]\n    B --> C[dialog path guide + Goal\u002FConstraints\u002FProgress\u002FKeyDecisions\u002FNextSteps]\n    A --> E[messages: full dialogue history]\n    A --> F[File System Cache]\n    F --> G[dialog\u002FYYYY-MM-DD.jsonl]\n    F --> H[tool_result\u002Fuuid.txt N-day TTL]\n```\n\n---\n\n[CoPaw MemoryManager](https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FCoPaw\u002Fblob\u002Fmain\u002Fsrc\u002Fcopaw\u002Fagents\u002Fmemory\u002Freme_light_memory_manager.py)\ninherits `ReMeLight` and integrates its memory capabilities into the agent reasoning loop:\n\n```mermaid\ngraph LR\n    Agent[Agent] -->|Before each reasoning step| Hook[pre_reasoning_hook]\n    Hook --> TC[compact_tool_result\u003Cbr>Compact tool outputs]\n    TC --> CC[check_context\u003Cbr>Token counting]\n    CC -->|Exceeds limit| CM[compact_memory\u003Cbr>Generate summary]\n    CC -->|Exceeds limit| SM[summary_memory\u003Cbr>Async persistence]\n    SM -->|ReAct + FileIO| Files[memory\u002F*.md]\n    CC -->|Exceeds limit| MMC[mark_messages_compressed\u003Cbr>Persist raw dialog]\n    MMC --> Dialog[dialog\u002F*.jsonl]\n    Agent -->|Explicit call| Search[memory_search\u003Cbr>Vector+BM25]\n    Agent -->|In-session| InMem[ReMeInMemoryMemory\u003Cbr>Token-aware memory]\n    InMem -->|Compress\u002FClear| Dialog\n    Files -.->|FileWatcher| 
Store[(FileStore\u003Cbr>Vector+FTS index)]\n    Search --> Store\n```\n\n---\n\n#### 1. `check_context` — context checking\n\n[ContextChecker](reme\u002Fmemory\u002Ffile_based\u002Fcomponents\u002Fcontext_checker.py) uses token counting to determine whether the\ncontext exceeds thresholds and automatically splits messages into a \"to compact\" group and a \"to keep\" group.\n\n```mermaid\ngraph LR\n    M[messages] --> H[AsMsgHandler\u003Cbr>Token counting]\n    H --> C{total > threshold?}\n    C -->|No| K[Return all messages]\n    C -->|Yes| S[Keep from tail\u003Cbr>reserve tokens]\n    S --> CP[messages_to_compact\u003Cbr>Earlier messages]\n    S --> KP[messages_to_keep\u003Cbr>Recent messages]\n    S --> V{is_valid\u003Cbr>Tool calls aligned?}\n```\n\n- **Core logic**: keep `reserve` tokens from the tail; mark the rest as messages to compact.\n- **Integrity guarantee**: preserves complete user-assistant turns and tool_use\u002Ftool_result pairs without splitting\n  them.\n\n---\n\n#### 2. 
`compact_memory` — conversation compaction\n\n[Compactor](reme\u002Fmemory\u002Ffile_based\u002Fcomponents\u002Fcompactor.py) uses a ReActAgent to compact conversation history into a\n**structured context summary**.\n\n```mermaid\ngraph LR\n    M[messages] --> H[AsMsgHandler\u003Cbr>format_msgs_to_str]\n    H --> A[ReActAgent\u003Cbr>reme_compactor]\n    P[previous_summary] -->|Incremental update| A\n    A --> S[Structured summary\u003Cbr>Goal\u002FProgress\u002FDecisions...]\n```\n\n**Summary structure** (context checkpoints):\n\n| Field                 | Description                                                                             |\n|-----------------------|-----------------------------------------------------------------------------------------|\n| `## Goal`             | User goals                                                                              |\n| `## Constraints`      | Constraints and preferences                                                             |\n| `## Progress`         | Task progress                                                                           |\n| `## Key Decisions`    | Key decisions                                                                           |\n| `## Next Steps`       | Next step plans                                                                         |\n| `## Critical Context` | Critical data such as file paths, function names, error messages, etc.                  |\n\n- **Incremental updates**: when `previous_summary` is provided, new conversations are merged into the existing summary.\n- **Thinking enhancement**: with `add_thinking_block=True` (default), a reasoning step is added before generating the\n  summary to improve quality.\n\n---\n\n#### 3. 
`summary_memory` — persistent memory\n\n[Summarizer](reme\u002Fmemory\u002Ffile_based\u002Fcomponents\u002Fsummarizer.py) uses a **ReAct + file tools** pattern so that the AI can\ndecide what to write and where to write it.\n\n```mermaid\ngraph LR\n    M[messages] --> A[ReActAgent\u003Cbr>reme_summarizer]\n    A -->|read| R[Read memory\u002FYYYY-MM-DD.md]\n    R --> T{Reason: how to merge?}\n    T -->|write| W[Overwrite]\n    T -->|edit| E[Edit in place]\n    W --> F[memory\u002FYYYY-MM-DD.md]\n    E --> F\n```\n\n**File tools** ([FileIO](reme\u002Fmemory\u002Ffile_based\u002Ftools\u002Ffile_io.py)):\n\n| Tool    | Function              |\n|---------|-----------------------|\n| `read`  | Read file content     |\n| `write` | Overwrite file        |\n| `edit`  | Find-and-replace edit |\n\n---\n\n#### 4. `compact_tool_result` — tool result compaction\n\n[ToolResultCompactor](reme\u002Fmemory\u002Ffile_based\u002Fcomponents\u002Ftool_result_compactor.py) addresses the problem of long tool\noutputs bloating the context. 
It applies two different truncation strategies depending on whether a message falls within\nthe `recent_n` window:\n\n```mermaid\ngraph LR\n    M[messages] --> B{Within recent_n?}\n    B -->|Yes - recent| C[Low truncation recent_max_bytes=100KB\u003Cbr>Save full content to tool_result\u002Fuuid.txt\u003Cbr>Hint: 'Read from line N']\n    B -->|No - old| D[High truncation old_max_bytes=3KB\u003Cbr>Reference existing file\u003Cbr>More aggressive truncation]\n    C --> E[cleanup_expired_files\u003Cbr>Delete expired files]\n    D --> E\n```\n\n| Parameter          | Default               | Description                                                                                                                   |\n|--------------------|-----------------------|-------------------------------------------------------------------------------------------------------------------------------|\n| `recent_n`         | `1`                   | Minimum number of trailing consecutive tool-result messages treated as \"recent\" (use low truncation)                          |\n| `recent_max_bytes` | `100 * 1024` (100 KB) | Truncation threshold for recent messages; content beyond this is saved to `tool_result\u002F` with a file path and start-line hint |\n| `old_max_bytes`    | `3000` (3 KB)         | Truncation threshold for older messages; truncation is more aggressive                                                        |\n| `retention_days`   | `3`                   | Number of days to retain tool result files; expired files are auto-cleaned                                                    |\n\n- **Auto cleanup**: expired files (older than `retention_days`) are deleted automatically during `start` \u002F `close` \u002F\n  `compact_tool_result`.\n\n---\n\n#### 5. 
`memory_search` — memory retrieval\n\n[MemorySearch](reme\u002Fmemory\u002Ffile_based\u002Ftools\u002Fmemory_search.py) provides **vector + BM25 hybrid retrieval**.\n\n```mermaid\ngraph LR\n    Q[query] --> E[Embedding\u003Cbr>Vectorization]\n    E --> V[vector_search\u003Cbr>Semantic similarity]\n    Q --> B[BM25\u003Cbr>Keyword matching]\n    V -->|\" weight: 0.7 \"| M[Deduplicate + weighted merge]\n    B -->|\" weight: 0.3 \"| M\n    M --> F[min_score filter]\n    F --> R[Top-N results]\n```\n\n- **Fusion mechanism**: vector weight 0.7 + BM25 weight 0.3 — balancing semantic similarity and exact matches.\n\n---\n\n#### 6. `ReMeInMemoryMemory` — in-session memory\n\n[ReMeInMemoryMemory](reme\u002Fmemory\u002Ffile_based\u002Freme_in_memory_memory.py) extends AgentScope's `InMemoryMemory` to provide\ntoken-aware memory management and raw conversation persistence.\n\n```mermaid\ngraph LR\n    C[content] --> G[get_memory\u003Cbr>exclude_mark=COMPRESSED]\n    G --> F[Filter out compressed messages]\n    F --> P{prepend_summary?}\n    P -->|Yes| S[Prepend previous summary]\n    S --> O[Output messages]\n    P -->|No| O\n    M[mark_messages_compressed] --> D[Persist to dialog\u002FYYYY-MM-DD.jsonl]\n    D --> R[Remove from memory]\n```\n\n| Function                         | Description                                              |\n|----------------------------------|----------------------------------------------------------|\n| `get_memory`                     | Filter messages by mark and auto-append summary          |\n| `estimate_tokens`                | Estimate token usage of the context                      |\n| `state_dict` \u002F `load_state_dict` | Serialize\u002Fdeserialize state (session persistence)        |\n| `mark_messages_compressed`       | Mark messages compressed and persist to dialog directory |\n| `clear_content`                  | Persist all messages before clearing memory              |\n\n**Raw conversation persistence**: When messages are 
compressed or cleared, they are automatically saved to\n`{dialog_path}\u002F{date}.jsonl` with one JSON-formatted message per line.\n\n---\n\n#### 7. `pre_reasoning_hook` — pre-reasoning processing\n\nThis is a unified entry point that wires all the above components together and automatically manages context before each\nreasoning step.\n\n```mermaid\ngraph LR\n    M[messages] --> TC[compact_tool_result\u003Cbr>Compact long tool outputs]\n    TC --> CC[check_context\u003Cbr>Compute remaining space]\n    CC --> D{messages_to_compact\u003Cbr>Non-empty?}\n    D -->|No| K[Return original messages + summary]\n    D -->|Yes| V{is_valid?}\n    V -->|No| K\n    V -->|Yes| CM[compact_memory\u003Cbr>Sync summary generation]\n    V -->|Yes| SM[add_async_summary_task\u003Cbr>Async persistence]\n    CM --> R[Return messages_to_keep + new summary]\n```\n\n**Execution flow**:\n\n1. `compact_tool_result` — compact long tool outputs for all messages except the most recent\n   `tool_result_compact_keep_n`.\n2. `check_context` — check whether the context exceeds limits (remaining space = threshold minus tokens used by system\n   prompt and compressed summary).\n3. `compact_memory` — generate compact summary (sync), appended into `compact_summary`.\n4. 
`summary_memory` — persist memory to `memory\u002F*.md` (async in the background, non-blocking).\n\n| Key parameter                | Default | Description                                                                         |\n|------------------------------|---------|-------------------------------------------------------------------------------------|\n| `tool_result_compact_keep_n` | `3`     | Skip tool result compaction for the most recent N messages (preserve full content)  |\n| `memory_compact_reserve`     | `10000` | Token count to reserve for recent messages; messages beyond this trigger compaction |\n| `compact_ratio`              | `0.7`   | Compaction threshold ratio: `max_input_length × compact_ratio × 0.95`               |\n\n---\n\n## 🗃️ Vector-based memory system\n\n[ReMe Vector Based](reme\u002Freme.py) is the core class for the vector-based memory system. It manages three types of\nmemories:\n\n| Memory type           | Use case                                                          |\n|-----------------------|-------------------------------------------------------------------|\n| **Personal memory**   | Records user preferences and habits                               |\n| **Procedural memory** | Records task execution experience and patterns of success\u002Ffailure |\n| **Tool memory**       | Records tool usage experience and parameter tuning                |\n\n### Core capabilities\n\n| Method             | Function     | Description                                                 |\n|--------------------|--------------|-------------------------------------------------------------|\n| `summarize_memory` | 🧠 Summarize | Automatically extract and store memories from conversations |\n| `retrieve_memory`  | 🔍 Retrieve  | Retrieve related memories based on a query                  |\n| `add_memory`       | ➕ Add        | Manually add memories into the vector store                 |\n| `get_memory`       | 📖 Get       | Get a single memory by 
ID                                   |\n| `update_memory`    | ✏️ Update    | Update existing memory content or metadata                  |\n| `delete_memory`    | 🗑️ Delete   | Delete a specific memory                                    |\n| `list_memory`      | 📋 List      | List memories with filtering and sorting                    |\n\n### Installation and environment variables\n\nInstallation and environment configuration are the same as [ReMeLight](#installation).\nAPI keys are configured via environment variables and can be stored in a `.env` file at the project root.\n\n### Python usage\n\n```python\nimport asyncio\n\nfrom reme import ReMe\n\n\nasync def main():\n    # Initialize ReMe\n    reme = ReMe(\n        working_dir=\".reme\",\n        default_llm_config={\n            \"backend\": \"openai\",\n            \"model_name\": \"qwen3.5-plus\",\n        },\n        default_embedding_model_config={\n            \"backend\": \"openai\",\n            \"model_name\": \"text-embedding-v4\",\n            \"dimensions\": 1024,\n        },\n        default_vector_store_config={\n            \"backend\": \"local\",  # Supports local\u002Fchroma\u002Fqdrant\u002Felasticsearch\n        },\n    )\n    await reme.start()\n\n    messages = [\n        {\"role\": \"user\", \"content\": \"Help me write a Python script\", \"time_created\": \"2026-02-28 10:00:00\"},\n        {\"role\": \"assistant\", \"content\": \"Sure, I'll help you with that.\", \"time_created\": \"2026-02-28 10:00:05\"},\n    ]\n\n    # 1. Summarize memories from conversation (automatically extract user preferences, task experience, etc.)\n    result = await reme.summarize_memory(\n        messages=messages,\n        user_name=\"alice\",  # Personal memory\n        # task_name=\"code_writing\",  # Procedural memory\n    )\n    print(f\"Summary result: {result}\")\n\n    # 2. 
Retrieve related memories\n    memories = await reme.retrieve_memory(\n        query=\"Python programming\",\n        user_name=\"alice\",\n        # task_name=\"code_writing\",\n    )\n    print(f\"Retrieved memories: {memories}\")\n\n    # 3. Manually add a memory\n    memory_node = await reme.add_memory(\n        memory_content=\"The user prefers concise code style.\",\n        user_name=\"alice\",\n    )\n    print(f\"Added memory: {memory_node}\")\n    memory_id = memory_node.memory_id\n\n    # 4. Get a single memory by ID\n    fetched_memory = await reme.get_memory(memory_id=memory_id)\n    print(f\"Fetched memory: {fetched_memory}\")\n\n    # 5. Update memory content\n    updated_memory = await reme.update_memory(\n        memory_id=memory_id,\n        user_name=\"alice\",\n        memory_content=\"The user prefers concise code with comments.\",\n    )\n    print(f\"Updated memory: {updated_memory}\")\n\n    # 6. List all memories for the user (supports filtering and sorting)\n    all_memories = await reme.list_memory(\n        user_name=\"alice\",\n        limit=10,\n        sort_key=\"time_created\",\n        reverse=True,\n    )\n    print(f\"User memory list: {all_memories}\")\n\n    # 7. Delete a specific memory\n    await reme.delete_memory(memory_id=memory_id)\n    print(f\"Deleted memory: {memory_id}\")\n\n    # 8. 
Delete all memories (use with care)\n    # await reme.delete_all()\n\n    await reme.close()\n\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n```\n\n### Technical architecture\n\n```mermaid\ngraph LR\n    User[User \u002F Agent] --> ReMe[Vector Based ReMe]\n    ReMe --> Summarize[Summarize memories]\n    ReMe --> Retrieve[Retrieve memories]\n    ReMe --> CRUD[CRUD operations]\n    Summarize --> PersonalSum[PersonalSummarizer]\n    Summarize --> ProceduralSum[ProceduralSummarizer]\n    Summarize --> ToolSum[ToolSummarizer]\n    Retrieve --> PersonalRet[PersonalRetriever]\n    Retrieve --> ProceduralRet[ProceduralRetriever]\n    Retrieve --> ToolRet[ToolRetriever]\n    PersonalSum --> VectorStore[Vector database]\n    ProceduralSum --> VectorStore\n    ToolSum --> VectorStore\n    PersonalRet --> VectorStore\n    ProceduralRet --> VectorStore\n    ToolRet --> VectorStore\n```\n\n### Experimental results\n\nEvaluations are conducted on two benchmarks: **LoCoMo** and **HaluMem**. Experimental settings:\n\n1. **ReMe backbone**: as specified in each table.\n2. 
**Evaluation protocol**: LLM-as-a-Judge following MemOS — each answer is scored by GPT-4o-mini.\n\nBaseline results are reproduced from their respective papers under aligned settings where possible.\n\n### LoCoMo\n\n| Method   | Single Hop | Multi Hop | Temporal  | Open Domain | Overall   |\n|----------|------------|-----------|-----------|-------------|-----------|\n| MemoryOS | 62.43      | 56.50     | 37.18     | 40.28       | 54.70     |\n| Mem0     | 66.71      | 58.16     | 55.45     | 40.62       | 61.00     |\n| MemU     | 72.77      | 62.41     | 33.96     | 46.88       | 61.15     |\n| MemOS    | 81.45      | 69.15     | 72.27     | 60.42       | 75.87     |\n| HiMem    | 89.22      | 70.92     | 74.77     | 54.86       | 80.71     |\n| Zep      | 88.11      | 71.99     | 74.45     | 66.67       | 81.06     |\n| TiMem    | 81.43      | 62.20     | 77.63     | 52.08       | 75.30     |\n| TSM      | 84.30      | 66.67     | 71.03     | 58.33       | 76.69     |\n| MemR3    | 89.44      | 71.39     | 76.22     | 61.11       | 81.55     |\n| **ReMe** | **89.89**  | **82.98** | **83.80** | **71.88**   | **86.23** |\n\n### HaluMem\n\n| Method      | Memory Integrity | Memory Accuracy | QA Accuracy |\n|-------------|------------------|-----------------|-------------|\n| MemoBase    | 14.55            | 92.24           | 35.53       |\n| Supermemory | 41.53            | 90.32           | 54.07       |\n| Mem0        | 42.91            | 86.26           | 53.02       |\n| ProMem      | **73.80**        | 89.47           | 62.26       |\n| **ReMe**    | 67.72            | **94.06**       | **88.78**   |\n\n---\n\n## 🧪 Procedural memory paper\n\n> Our procedural (task) memory paper is available on [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.10696).\n\n### 🌍 [Appworld benchmark](benchmark\u002Fappworld\u002Fquickstart.md)\n\nWe evaluate ReMe on the Appworld environment using Qwen3-8B (non-thinking mode):\n\n| Method   | Avg@4               | Pass@4        
      |\n|----------|---------------------|---------------------|\n| w\u002Fo ReMe | 0.1497              | 0.3285              |\n| w\u002F ReMe  | 0.1706 **(+2.09%)** | 0.3631 **(+3.46%)** |\n\nPass@K measures the probability that at least one of K generated candidates successfully completes the task (score=1).\nThe current experiments use an internal AppWorld environment, which may differ slightly from the public version.\n\nFor more details on how to reproduce the experiments, see [quickstart.md](benchmark\u002Fappworld\u002Fquickstart.md).\n\n### 🔧 [BFCL-V3 benchmark](benchmark\u002Fbfcl\u002Fquickstart.md)\n\nWe evaluate ReMe on the BFCL-V3 multi-turn-base task (random split 50 train \u002F 150 val) using Qwen3-8B (thinking mode):\n\n| Method   | Avg@4               | Pass@4              |\n|----------|---------------------|---------------------|\n| w\u002Fo ReMe | 0.4033              | 0.5955              |\n| w\u002F ReMe  | 0.4450 **(+4.17%)** | 0.6577 **(+6.22%)** |\n\nFor more details on how to reproduce the experiments, see [quickstart.md](benchmark\u002Fbfcl\u002Fquickstart.md).\n\n## ⭐ Community & support\n\n- **Star & Watch**: Starring helps more agent developers discover ReMe; Watching keeps you up to date with new releases\n  and features.\n- **Share your results**: Share how ReMe empowers your agents in Issues or Discussions — we are happy to showcase great\n  community use cases.\n- **Need a new feature?** Open a feature request; we’ll evolve ReMe together with the community.\n- **Code contributions**: All forms of contributions are welcome. 
Please see\n  the [contribution guide](docs\u002Fcontribution.md).\n- **Acknowledgements**: We thank excellent open-source projects such as OpenClaw, Mem0, MemU, and CoPaw for their\n  inspiration and support.\n\n### Contributors\n\nThanks to all who have contributed to ReMe:\n\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fgraphs\u002Fcontributors\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fagentscope-ai_ReMe_readme_cafe97c05130.png\" alt=\"Contributors\" \u002F>\n\u003C\u002Fa>\n\n---\n\n## 📄 Citation\n\n```bibtex\n@software{AgentscopeReMe2025,\n  title = {AgentscopeReMe: Memory Management Kit for Agents},\n  author = {ReMe Team},\n  url = {https:\u002F\u002Freme.agentscope.io},\n  year = {2025}\n}\n```\n\n---\n\n## ⚖️ License\n\nThis project is open-sourced under the Apache License 2.0. See [LICENSE](.\u002FLICENSE) for details.\n\n---\n\n## 🤔 Why ReMe?\n\nReMe stands for **Remember Me** and **Refine Me**, symbolizing our goal to help AI agents \"remember\" users and \"refine\"\nthemselves through interactions. 
We hope ReMe is not just a cold memory module, but a partner that truly helps agents\nunderstand users, accumulate experience, and continuously evolve.\n\n---\n\n## 📈 Star history\n\n[![Star History Chart](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fagentscope-ai_ReMe_readme_c8066ca6b7a9.png)](https:\u002F\u002Fwww.star-history.com\u002F#agentscope-ai\u002FReMe&Date)\n\n","\u003Cp align=\"center\">\n \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fagentscope-ai_ReMe_readme_d312bba8a8ea.png\" alt=\"ReMe Logo\" width=\"50%\">\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Freme-ai\u002F\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpython-3.10+-blue\" alt=\"Python 版本\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Freme-ai\u002F\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Freme-ai.svg?logo=pypi\" alt=\"PyPI 版本\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fpepy.tech\u002Fproject\u002Freme-ai\u002F\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fdm\u002Freme-ai\" alt=\"PyPI 下载量\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fcommit-activity\u002Fm\u002Fagentscope-ai\u002FReMe?style=flat-square\" alt=\"GitHub 提交活动\">\u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003Ca href=\".\u002FLICENSE\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flicense-Apache--2.0-black\" alt=\"许可证\">\u003C\u002Fa>\n  \u003Ca href=\".\u002FREADME.md\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FEnglish-Click-yellow\" alt=\"英文\">\u003C\u002Fa>\n  \u003Ca href=\".\u002FREADME_ZH.md\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F简体中文-点击查看-orange\" alt=\"简体中文\">\u003C\u002Fa>\n  \u003Ca 
href=\"https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fagentscope-ai\u002FReMe?style=social\" alt=\"GitHub 星标\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fdeepwiki.com\u002Fagentscope-ai\u002FReMe\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDeepWiki-Ask_Devin-navy.svg\" alt=\"DeepWiki\">\u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n\u003Ca href=\"https:\u002F\u002Ftrendshift.io\u002Frepositories\u002F20528\" target=\"_blank\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fagentscope-ai_ReMe_readme_4cc089988f35.png\" alt=\"agentscope-ai%2FReMe | Trendshift\" style=\"width: 250px; height: 55px;\" width=\"250\" height=\"55\"\u002F>\u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003Cstrong>一个面向 AI 智能体的记忆管理工具包 —— 记住我，优化我。\u003C\u002Fstrong>\u003Cbr>\n\u003C\u002Fp>\n\n> 对于旧版本，请参考 [0.2.x 文档](docs\u002FREADME_0_2_x.md)。\n\n---\n\n🧠 ReMe 是一个专为 **AI 智能体**设计的记忆管理框架，提供基于[文件](#-基于文件的记忆系统-remelight)和[向量](#-基于向量的记忆系统)的两种记忆系统。\n\n它解决了智能体记忆的两个核心问题：**有限的上下文窗口**（长时间对话中早期信息被截断或丢失）和**无状态会话**（新会话无法继承历史记录，总是从头开始）。\n\nReMe 为智能体赋予了**真实记忆**——旧对话会被自动压缩，重要信息持久存储，并在未来的交互中自动召回相关上下文。\n\nReMe 在 LoCoMo 和 HaluMem 基准测试中取得了最先进的结果；详见[实验结果](#实验结果)。\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>使用 ReMe 可以做什么\u003C\u002Fb>\u003C\u002Fsummary>\n\n\u003Cbr>\n\n- **个人助理**：为像 [CoPaw](https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FCoPaw) 这样的智能体提供长期记忆，记住用户偏好和对话历史。\n- **编程助手**：记录代码风格偏好和项目上下文，在不同会话间保持一致的开发体验。\n- **客户服务机器人**：跟踪用户问题历史和偏好设置，提供个性化服务。\n- **任务自动化**：从历史任务中学习成功\u002F失败模式。\n\n\u003C\u002Fdetails>\n\n### 🚀 快速开始\n\n#### 安装\n\n**从源码安装：**\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe.git\ncd ReMe\npip install -e \".[light]\"\n```\n\n**更新到最新版本：**\n\n```bash\ngit pull\npip install -e \".[light]\"\n```\n\n#### 环境变量\n\n`ReMeLight` 使用环境变量来配置嵌入模型（embedding model）和存储后端：\n\n| 变量                 | 描述                     | 示例          
                                   |\n|----------------------|-------------------------|-------------------------------------------------|\n| `LLM_API_KEY`        | LLM API 密钥            | `sk-xxx`                                        |\n| `LLM_BASE_URL`       | LLM 基础 URL            | `https:\u002F\u002Fdashscope.aliyuncs.com\u002Fcompatible-mode\u002Fv1` |\n| `EMBEDDING_API_KEY`  | 嵌入 API 密钥（可选）   | `sk-xxx`                                        |\n| `EMBEDDING_BASE_URL` | 嵌入基础 URL（可选）    | `https:\u002F\u002Fdashscope.aliyuncs.com\u002Fcompatible-mode\u002Fv1` |\n\n#### Python 使用方法\n\n```python\nimport asyncio\n\nfrom reme.reme_light import ReMeLight\n\n\nasync def main():\n    # 初始化 ReMeLight\n    reme = ReMeLight(\n        default_as_llm_config={\"model_name\": \"qwen3.5-35b-a3b\"},\n        # default_embedding_model_config={\"model_name\": \"text-embedding-v4\"},\n        default_file_store_config={\"fts_enabled\": True, \"vector_enabled\": False},\n        enable_load_env=True,\n    )\n    await reme.start()\n\n    messages = [...]  # 对话消息列表\n\n    # 1. 检查上下文大小（token 计数，判断是否需要压缩）\n    messages_to_compact, messages_to_keep, is_valid = await reme.check_context(\n        messages=messages,\n        memory_compact_threshold=90000,  # 触发压缩的阈值（tokens）\n        memory_compact_reserve=10000,  # 为最近的消息保留的 token 数量\n    )\n\n    # 2. 将对话历史压缩为结构化摘要\n    summary = await reme.compact_memory(\n        messages=messages,\n        previous_summary=\"\",\n        max_input_length=128000,  # 模型上下文窗口（tokens）\n        compact_ratio=0.7,  # 超过 max_input_length * 0.7 时触发压缩\n        language=\"zh\",  # 摘要语言（例如 \"zh\" \u002F \"\"）\n    )\n\n    # 3. 压缩长工具输出（防止工具结果占用过多上下文）\n    messages = await reme.compact_tool_result(messages)\n\n    # 4. 
推理前钩子（自动压缩工具结果 + 检查上下文 + 生成摘要）\n    processed_messages, compressed_summary = await reme.pre_reasoning_hook(\n        messages=messages,\n        system_prompt=\"你是一个有用的 AI 助手。\",\n        compressed_summary=\"\",\n        max_input_length=128000,\n        compact_ratio=0.7,\n        memory_compact_reserve=10000,\n        enable_tool_result_compact=True,\n        tool_result_compact_keep_n=3,\n    )\n\n    # 5. 将重要记忆持久化到文件（写入 memory\u002FYYYY-MM-DD.md）\n    summary_result = await reme.summary_memory(\n        messages=messages,\n        language=\"zh\",\n    )\n\n    # 6. 语义记忆搜索（向量 + BM25 混合检索）\n    result = await reme.memory_search(query=\"Python 版本偏好\", max_results=5)\n\n    # 7. 创建会话内记忆实例（管理单个对话的上下文）\n    memory = reme.get_in_memory_memory()  # 自动配置 dialog_path\n    for msg in messages:\n        await memory.add(msg)\n    token_stats = await memory.estimate_tokens(max_input_length=128000)\n    print(f\"当前上下文使用率: {token_stats['context_usage_ratio']:.1f}%\")\n    print(f\"消息 token 数量: {token_stats['messages_tokens']}\")\n    print(f\"估计总 token 数量: {token_stats['estimated_tokens']}\")\n\n    # 8. 
标记消息为已压缩（自动持久化到 dialog\u002FYYYY-MM-DD.jsonl）\n    # await memory.mark_messages_compressed(messages_to_compact)\n\n    # 关闭 ReMeLight\n    await reme.close()\n\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n```\n\n> 📂 完整示例: [test_reme_light.py](tests\u002Flight\u002Ftest_reme_light.py)\n> 📋 示例运行日志: [test_reme_light_log.txt](tests\u002Flight\u002Ftest_reme_light_log.txt) (223,838 tokens → 1,105 tokens, 99.5%\n> 压缩率)\n\n### 基于文件的 ReMeLight 记忆系统架构\n\n#### 上下文数据结构\n\n```mermaid\nflowchart TD\n    A[Context] --> B[compact_summary]\n    B --> C[dialog path guide + Goal\u002FConstraints\u002FProgress\u002FKeyDecisions\u002FNextSteps]\n    A --> E[messages: full dialogue history]\n    A --> F[File System Cache]\n    F --> G[dialog\u002FYYYY-MM-DD.jsonl]\n    F --> H[tool_result\u002Fuuid.txt N-day TTL]\n```\n\n---\n\n[CoPaw MemoryManager](https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FCoPaw\u002Fblob\u002Fmain\u002Fsrc\u002Fcopaw\u002Fagents\u002Fmemory\u002Freme_light_memory_manager.py)\n继承了 `ReMeLight` 并将其记忆能力集成到代理推理循环中：\n\n```mermaid\ngraph LR\n    Agent[Agent] -->|每次推理步骤之前| Hook[pre_reasoning_hook]\n    Hook --> TC[compact_tool_result\u003Cbr>压缩工具输出]\n    TC --> CC[check_context\u003Cbr>Token 计数]\n    CC -->|超出限制| CM[compact_memory\u003Cbr>生成摘要]\n    CC -->|超出限制| SM[summary_memory\u003Cbr>异步持久化]\n    SM -->|ReAct + FileIO| Files[memory\u002F*.md]\n    CC -->|超出限制| MMC[mark_messages_compressed\u003Cbr>持久化原始对话]\n    MMC --> Dialog[dialog\u002F*.jsonl]\n    Agent -->|显式调用| Search[memory_search\u003Cbr>向量+BM25]\n    Agent -->|会话内| InMem[ReMeInMemoryMemory\u003Cbr>Token 感知记忆]\n    InMem -->|压缩\u002F清除| Dialog\n    Files -.->|FileWatcher| Store[(FileStore\u003Cbr>向量+FTS 索引)]\n    Search --> Store\n```\n\n---\n\n#### 1. 
`check_context` — 上下文检查\n\n[ContextChecker](reme\u002Fmemory\u002Ffile_based\u002Fcomponents\u002Fcontext_checker.py) 使用 token 计数来判断上下文是否超过阈值，并自动将消息分为“待压缩”组和“保留”组。\n\n```mermaid\ngraph LR\n    M[messages] --> H[AsMsgHandler\u003Cbr>Token 计数]\n    H --> C{total > threshold?}\n    C -->|No| K[返回所有消息]\n    C -->|Yes| S[从尾部保留\u003Cbr>reserve tokens]\n    S --> CP[messages_to_compact\u003Cbr>较早的消息]\n    S --> KP[messages_to_keep\u003Cbr>最近的消息]\n    S --> V{is_valid\u003Cbr>工具调用对齐？}\n```\n\n- **核心逻辑**: 从尾部保留 `reserve` 个 token；其余标记为待压缩消息。\n- **完整性保证**: 保留完整的用户-助手轮次以及 tool_use\u002Ftool_result 对，不拆分它们。\n\n---\n\n#### 2. `compact_memory` — 对话压缩\n\n[Compactor](reme\u002Fmemory\u002Ffile_based\u002Fcomponents\u002Fcompactor.py) 使用 ReActAgent 将对话历史压缩为**结构化上下文摘要**。\n\n```mermaid\ngraph LR\n    M[messages] --> H[AsMsgHandler\u003Cbr>format_msgs_to_str]\n    H --> A[ReActAgent\u003Cbr>reme_compactor]\n    P[previous_summary] -->|增量更新| A\n    A --> S[结构化摘要\u003Cbr>目标\u002F进展\u002F决策...]\n```\n\n**摘要结构**（上下文检查点）：\n\n| 字段                  | 描述                                                                                   |\n|-----------------------|----------------------------------------------------------------------------------------|\n| `## Goal`             | 用户目标                                                                               |\n| `## Constraints`      | 限制条件和偏好                                                                         |\n| `## Progress`         | 任务进度                                                                               |\n| `## Key Decisions`    | 关键决策                                                                               |\n| `## Next Steps`       | 下一步计划                                                                             |\n| `## Critical Context` | 关键数据，例如文件路径、函数名称、错误消息等                                           |\n\n- **增量更新**：当提供 `previous_summary` 时，新对话将合并到现有摘要中。\n- **思维增强**：通过设置 
`add_thinking_block=True`（默认值），在生成摘要前增加推理步骤以提高质量。\n\n---\n\n#### 3. `summary_memory` — 持久化记忆\n\n[Summarizer](reme\u002Fmemory\u002Ffile_based\u002Fcomponents\u002Fsummarizer.py) 使用 **ReAct + 文件工具** 模式，使 AI 能够决定写什么以及写在哪里。\n\n```mermaid\ngraph LR\n    M[messages] --> A[ReActAgent\u003Cbr>reme_summarizer]\n    A -->|read| R[Read memory\u002FYYYY-MM-DD.md]\n    R --> T{Reason: how to merge?}\n    T -->|write| W[Overwrite]\n    T -->|edit| E[Edit in place]\n    W --> F[memory\u002FYYYY-MM-DD.md]\n    E --> F\n```\n\n**文件工具** ([FileIO](reme\u002Fmemory\u002Ffile_based\u002Ftools\u002Ffile_io.py))：\n\n| 工具    | 功能              |\n|---------|-------------------|\n| `read`  | 读取文件内容       |\n| `write` | 覆盖文件           |\n| `edit`  | 查找替换编辑       |\n\n---\n\n#### 4. `compact_tool_result` — 工具结果压缩\n\n[ToolResultCompactor](reme\u002Fmemory\u002Ffile_based\u002Fcomponents\u002Ftool_result_compactor.py) 解决了长工具输出导致上下文膨胀的问题。它根据消息是否在 `recent_n` 窗口内应用两种不同的截断策略：\n\n```mermaid\ngraph LR\n    M[messages] --> B{Within recent_n?}\n    B -->|Yes - recent| C[Low truncation recent_max_bytes=100KB\u003Cbr>Save full content to tool_result\u002Fuuid.txt\u003Cbr>Hint: 'Read from line N']\n    B -->|No - old| D[High truncation old_max_bytes=3KB\u003Cbr>Reference existing file\u003Cbr>More aggressive truncation]\n    C --> E[cleanup_expired_files\u003Cbr>Delete expired files]\n    D --> E\n```\n\n| 参数            | 默认值                 | 描述                                                                                                                   |\n|-----------------|-----------------------|-----------------------------------------------------------------------------------------------------------------------|\n| `recent_n`      | `1`                   | 视为“最近”的最少连续尾部工具结果消息数量（使用低截断）                                                                |\n| `recent_max_bytes` | `100 * 1024` (100 KB) | 最近消息的截断阈值；超出此大小的内容保存到 `tool_result\u002F` 并附带文件路径和起始行提示                                 |\n| 
`old_max_bytes` | `3000` (3 KB)         | 较旧消息的截断阈值；截断更加激进                                                                                      |\n| `retention_days`| `3`                   | 工具结果文件的保留天数；过期文件会自动清理                                                                            |\n\n- **自动清理**：过期文件（超过 `retention_days` 的文件）会在 `start` \u002F `close` \u002F `compact_tool_result` 期间自动删除。\n\n---\n\n#### 5. `memory_search` — 记忆检索\n\n[MemorySearch](reme\u002Fmemory\u002Ffile_based\u002Ftools\u002Fmemory_search.py) 提供 **向量 + BM25 混合检索**。\n\n```mermaid\ngraph LR\n    Q[query] --> E[Embedding\u003Cbr>Vectorization]\n    E --> V[vector_search\u003Cbr>Semantic similarity]\n    Q --> B[BM25\u003Cbr>Keyword matching]\n    V -->|\" weight: 0.7 \"| M[Deduplicate + weighted merge]\n    B -->|\" weight: 0.3 \"| M\n    M --> F[min_score filter]\n    F --> R[Top-N results]\n```\n\n- **融合机制**：向量权重 0.7 + BM25 权重 0.3 —— 平衡语义相似性和精确匹配。\n\n---\n\n#### 6. `ReMeInMemoryMemory` — 会话内记忆\n\n[ReMeInMemoryMemory](reme\u002Fmemory\u002Ffile_based\u002Freme_in_memory_memory.py) 扩展了 AgentScope 的 `InMemoryMemory`，提供基于 token 的记忆管理以及原始对话持久化。\n\n```mermaid\ngraph LR\n    C[content] --> G[get_memory\u003Cbr>exclude_mark=COMPRESSED]\n    G --> F[Filter out compressed messages]\n    F --> P{prepend_summary?}\n    P -->|Yes| S[Prepend previous summary]\n    S --> O[Output messages]\n    P -->|No| O\n    M[mark_messages_compressed] --> D[Persist to dialog\u002FYYYY-MM-DD.jsonl]\n    D --> R[Remove from memory]\n```\n\n| 功能                         | 描述                                              |\n|------------------------------|--------------------------------------------------|\n| `get_memory`                 | 根据标记过滤消息并自动附加摘要                   |\n| `estimate_tokens`            | 估算上下文的 token 使用量                        |\n| `state_dict` \u002F `load_state_dict` | 序列化\u002F反序列化状态（会话持久化）               |\n| `mark_messages_compressed`   | 标记消息已压缩并持久化到对话目录                 |\n| `clear_content`           
   | 在清除内存之前持久化所有消息                     |\n\n**原始对话持久化**：当消息被压缩或清除时，它们会自动保存到 `{dialog_path}\u002F{date}.jsonl`，每行一条 JSON 格式的消息。\n\n---\n\n#### 7. `pre_reasoning_hook` — 推理前处理\n\n这是一个统一的入口点，将上述所有组件连接在一起，并在每次推理步骤之前自动管理上下文。\n\n```mermaid\ngraph LR\n    M[messages] --> TC[compact_tool_result\u003Cbr>Compact long tool outputs]\n    TC --> CC[check_context\u003Cbr>Compute remaining space]\n    CC --> D{messages_to_compact\u003Cbr>Non-empty?}\n    D -->|No| K[Return original messages + summary]\n    D -->|Yes| V{is_valid?}\n    V -->|No| K\n    V -->|Yes| CM[compact_memory\u003Cbr>Sync summary generation]\n    V -->|Yes| SM[add_async_summary_task\u003Cbr>Async persistence]\n    CM --> R[Return messages_to_keep + new summary]\n```\n\n**执行流程**：\n\n1. `compact_tool_result` — 对所有消息（最近的 `tool_result_compact_keep_n` 条除外）压缩长工具输出。\n2. `check_context` — 检查上下文是否超出限制（剩余空间 = 阈值减去系统提示和压缩摘要所用的 token 数）。\n3. `compact_memory` — 生成紧凑的摘要（同步），追加到 `compact_summary` 中。\n4. `summary_memory` — 将记忆持久化到 `memory\u002F*.md` 文件中（异步后台运行，非阻塞）。\n\n| 关键参数                 | 默认值   | 描述                                                                                   |\n|--------------------------|----------|----------------------------------------------------------------------------------------|\n| `tool_result_compact_keep_n` | `3`      | 跳过对最近 N 条消息的工具结果压缩（保留完整内容）                                      |\n| `memory_compact_reserve`     | `10000`  | 为最近的消息保留的 token 数；超过此值的消息将触发压缩                                   |\n| `compact_ratio`              | `0.7`    | 压缩阈值比率：`max_input_length × compact_ratio × 0.95`                                 |\n\n---\n\n## 🗃️ 基于向量的记忆系统\n\n[ReMe Vector Based](reme\u002Freme.py) 是基于向量的记忆系统的核心类。它管理三种类型的记忆：\n\n| 记忆类型             | 使用场景                                                     |\n|----------------------|--------------------------------------------------------------|\n| **个人记忆**         | 记录用户偏好和习惯                                           |\n| **程序记忆**         | 
记录任务执行经验及成功\u002F失败模式                              |\n| **工具记忆**         | 记录工具使用经验和参数调整                                   |\n\n### 核心功能\n\n| 方法               | 功能       | 描述                                                         |\n|--------------------|------------|-------------------------------------------------------------|\n| `summarize_memory` | 🧠 总结     | 自动从对话中提取并存储记忆                                   |\n| `retrieve_memory`  | 🔍 检索     | 根据查询检索相关记忆                                         |\n| `add_memory`       | ➕ 添加     | 手动将记忆添加到向量存储中                                   |\n| `get_memory`       | 📖 获取     | 通过 ID 获取单个记忆                                         |\n| `update_memory`    | ✏️ 更新     | 更新现有记忆的内容或元数据                                   |\n| `delete_memory`    | 🗑️ 删除    | 删除特定记忆                                                 |\n| `list_memory`      | 📋 列表     | 列出记忆（支持过滤和排序）                                   |\n\n### 安装与环境变量\n\n安装和环境配置与 [ReMeLight](#installation) 相同。\nAPI 密钥通过环境变量配置，并可以存储在项目根目录下的 `.env` 文件中。\n\n### Python 使用示例\n\n```python\nimport asyncio\n\nfrom reme import ReMe\n\n\nasync def main():\n    # 初始化 ReMe\n    reme = ReMe(\n        working_dir=\".reme\",\n        default_llm_config={\n            \"backend\": \"openai\",\n            \"model_name\": \"qwen3.5-plus\",\n        },\n        default_embedding_model_config={\n            \"backend\": \"openai\",\n            \"model_name\": \"text-embedding-v4\",\n            \"dimensions\": 1024,\n        },\n        default_vector_store_config={\n            \"backend\": \"local\",  # 支持 local\u002Fchroma\u002Fqdrant\u002Felasticsearch\n        },\n    )\n    await reme.start()\n\n    messages = [\n        {\"role\": \"user\", \"content\": \"Help me write a Python script\", \"time_created\": \"2026-02-28 10:00:00\"},\n        {\"role\": \"assistant\", \"content\": \"Sure, I'll help you with that.\", \"time_created\": \"2026-02-28 10:00:05\"},\n    ]\n\n    # 1. 
从对话中总结记忆（自动提取用户偏好、任务经验等）\n    result = await reme.summarize_memory(\n        messages=messages,\n        user_name=\"alice\",  # 个人记忆\n        # task_name=\"code_writing\",  # 程序记忆\n    )\n    print(f\"Summary result: {result}\")\n\n    # 2. 检索相关记忆\n    memories = await reme.retrieve_memory(\n        query=\"Python programming\",\n        user_name=\"alice\",\n        # task_name=\"code_writing\",\n    )\n    print(f\"Retrieved memories: {memories}\")\n\n    # 3. 手动添加记忆\n    memory_node = await reme.add_memory(\n        memory_content=\"The user prefers concise code style.\",\n        user_name=\"alice\",\n    )\n    print(f\"Added memory: {memory_node}\")\n    memory_id = memory_node.memory_id\n\n    # 4. 通过 ID 获取单个记忆\n    fetched_memory = await reme.get_memory(memory_id=memory_id)\n    print(f\"Fetched memory: {fetched_memory}\")\n\n    # 5. 更新记忆内容\n    updated_memory = await reme.update_memory(\n        memory_id=memory_id,\n        user_name=\"alice\",\n        memory_content=\"The user prefers concise code with comments.\",\n    )\n    print(f\"Updated memory: {updated_memory}\")\n\n    # 6. 列出用户的全部记忆（支持过滤和排序）\n    all_memories = await reme.list_memory(\n        user_name=\"alice\",\n        limit=10,\n        sort_key=\"time_created\",\n        reverse=True,\n    )\n    print(f\"User memory list: {all_memories}\")\n\n    # 7. 删除特定记忆\n    await reme.delete_memory(memory_id=memory_id)\n    print(f\"Deleted memory: {memory_id}\")\n\n    # 8. 
删除所有记忆（请谨慎使用）\n    # await reme.delete_all()\n\n    await reme.close()\n\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n```\n\n### 技术架构\n\n```mermaid\ngraph LR\n    User[用户 \u002F 代理] --> ReMe[基于向量的 ReMe]\n    ReMe --> Summarize[总结记忆]\n    ReMe --> Retrieve[检索记忆]\n    ReMe --> CRUD[CRUD 操作]\n    Summarize --> PersonalSum[个人总结器]\n    Summarize --> ProceduralSum[程序总结器]\n    Summarize --> ToolSum[工具总结器]\n    Retrieve --> PersonalRet[个人检索器]\n    Retrieve --> ProceduralRet[程序检索器]\n    Retrieve --> ToolRet[工具检索器]\n    PersonalSum --> VectorStore[向量数据库]\n    ProceduralSum --> VectorStore\n    ToolSum --> VectorStore\n    PersonalRet --> VectorStore\n    ProceduralRet --> VectorStore\n    ToolRet --> VectorStore\n```\n\n### 实验结果\n\n评估基于两个基准测试：**LoCoMo** 和 **HaluMem**。实验设置如下：\n\n1. **ReMe 主干模型**：如每个表格中指定。\n2. **评估协议**：遵循 MemOS 的 LLM-as-a-Judge 方法——每个答案由 GPT-4o-mini 进行评分。\n\n基线结果尽可能在一致的设置下从各自论文中复现。\n\n### LoCoMo\n\n| 方法       | 单跳   | 多跳   | 时间性  | 开放域   | 总体     |\n|------------|--------|--------|---------|----------|----------|\n| MemoryOS   | 62.43  | 56.50  | 37.18   | 40.28    | 54.70    |\n| Mem0       | 66.71  | 58.16  | 55.45   | 40.62    | 61.00    |\n| MemU       | 72.77  | 62.41  | 33.96   | 46.88    | 61.15    |\n| MemOS      | 81.45  | 69.15  | 72.27   | 60.42    | 75.87    |\n| HiMem      | 89.22  | 70.92  | 74.77   | 54.86    | 80.71    |\n| Zep        | 88.11  | 71.99  | 74.45   | 66.67    | 81.06    |\n| TiMem      | 81.43  | 62.20  | 77.63   | 52.08    | 75.30    |\n| TSM        | 84.30  | 66.67  | 71.03   | 58.33    | 76.69    |\n| MemR3      | 89.44  | 71.39  | 76.22   | 61.11    | 81.55    |\n| **ReMe**   | **89.89** | **82.98** | **83.80** | **71.88** | **86.23** |\n\n### HaluMem\n\n| 方法         | 记忆完整性   | 记忆准确性   | QA 准确性   |\n|--------------|--------------|--------------|-------------|\n| MemoBase     | 14.55        | 92.24        | 35.53       |\n| Supermemory  | 41.53        | 90.32        | 54.07       |\n| Mem0         | 42.91        | 
86.26        | 53.02       |\n| ProMem       | **73.80**    | 89.47        | 62.26       |\n| **ReMe**     | 67.72        | **94.06**    | **88.78**   |\n\n---\n\n## 🧪 程序记忆论文\n\n> 我们的程序（任务）记忆论文已在 [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.10696) 上发布。\n\n### 🌍 [Appworld 基准测试](benchmark\u002Fappworld\u002Fquickstart.md)\n\n我们在 Appworld 环境中使用 Qwen3-8B（非思考模式）评估 ReMe：\n\n| 方法       | Avg@4               | Pass@4              |\n|------------|---------------------|---------------------|\n| w\u002Fo ReMe   | 0.1497              | 0.3285              |\n| w\u002F ReMe    | 0.1706 **(+2.09%)** | 0.3631 **(+3.46%)** |\n\nPass@K 衡量的是 K 个生成的候选者中至少有一个成功完成任务（得分=1）的概率。\n当前实验使用的是内部 AppWorld 环境，可能与公开版本略有不同。\n\n有关如何重现实验的更多详细信息，请参阅 [quickstart.md](benchmark\u002Fappworld\u002Fquickstart.md)。\n\n### 🔧 [BFCL-V3 基准测试](benchmark\u002Fbfcl\u002Fquickstart.md)\n\n我们在 BFCL-V3 多轮任务（随机拆分 50 训练 \u002F 150 验证）上使用 Qwen3-8B（思考模式）评估 ReMe：\n\n| 方法       | Avg@4               | Pass@4              |\n|------------|---------------------|---------------------|\n| w\u002Fo ReMe   | 0.4033              | 0.5955              |\n| w\u002F ReMe    | 0.4450 **(+4.17%)** | 0.6577 **(+6.22%)** |\n\n有关如何重现实验的更多详细信息，请参阅 [quickstart.md](benchmark\u002Fbfcl\u002Fquickstart.md)。\n\n## ⭐ 社区与支持\n\n- **Star & Watch**：加星有助于更多代理开发者发现 ReMe；关注可以让你及时了解新版本和功能。\n- **分享你的成果**：在 Issues 或 Discussions 中分享 ReMe 如何增强你的代理——我们很乐意展示优秀的社区用例。\n- **需要新功能？** 提交功能请求；我们将与社区一起发展 ReMe。\n- **代码贡献**：欢迎任何形式的贡献。请参阅 [贡献指南](docs\u002Fcontribution.md)。\n- **致谢**：感谢 OpenClaw、Mem0、MemU 和 CoPaw 等优秀开源项目提供的灵感和支持。\n\n### 贡献者\n\n感谢所有为 ReMe 做出贡献的人：\n\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fgraphs\u002Fcontributors\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fagentscope-ai_ReMe_readme_cafe97c05130.png\" alt=\"Contributors\" \u002F>\n\u003C\u002Fa>\n\n---\n\n## 📄 引用\n\n```bibtex\n@software{AgentscopeReMe2025,\n  title = {AgentscopeReMe: Memory Management Kit for Agents},\n  
author = {ReMe Team},\n  url = {https:\u002F\u002Freme.agentscope.io},\n  year = {2025}\n}\n```\n\n---\n\n## ⚖️ 许可证\n\n本项目在 Apache License 2.0 下开源。详见 [LICENSE](.\u002FLICENSE)。\n\n---\n\n## 🤔 为什么选择 ReMe？\n\nReMe 代表 **Remember Me**（记住我）和 **Refine Me**（优化我），象征着我们帮助 AI 代理“记住”用户并通过交互“优化”自身的目标。我们希望 ReMe 不仅仅是一个冰冷的记忆模块，而是一个真正帮助代理理解用户、积累经验并不断进化的伙伴。\n\n---\n\n## 📈 Star 历史\n\n[![Star History Chart](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fagentscope-ai_ReMe_readme_c8066ca6b7a9.png)](https:\u002F\u002Fwww.star-history.com\u002F#agentscope-ai\u002FReMe&Date)","# ReMe 快速上手指南\n\nReMe 是一个专为 AI 智能体设计的记忆管理框架，支持基于文件和向量的记忆系统，解决上下文窗口限制和会话无状态的问题。\n\n---\n\n## 环境准备\n\n### 系统要求\n- Python 版本：3.10 或更高版本\n- 支持的操作系统：Linux、macOS 和 Windows\n\n### 前置依赖\n- 需要安装 Git 和 pip。\n- 推荐使用虚拟环境（如 `venv` 或 `conda`）以避免依赖冲突。\n\n---\n\n## 安装步骤\n\n### 从源码安装\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe.git\ncd ReMe\npip install -e \".[light]\"\n```\n\n### 更新到最新版本\n如果已克隆仓库，可以通过以下命令更新：\n```bash\ngit pull\npip install -e \".[light]\"\n```\n\n> **国内加速方案**  \n> 如果您在中国大陆，建议使用国内镜像源加速安装。例如：\n> ```bash\n> pip install -e \".[light]\" -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n> ```\n\n---\n\n## 基本使用\n\n以下是一个简单的使用示例，展示如何初始化 `ReMeLight` 并执行基本操作。\n\n### 示例代码\n\n```python\nimport asyncio\n\nfrom reme.reme_light import ReMeLight\n\n\nasync def main():\n    # 初始化 ReMeLight\n    reme = ReMeLight(\n        default_as_llm_config={\"model_name\": \"qwen3.5-35b-a3b\"},\n        default_file_store_config={\"fts_enabled\": True, \"vector_enabled\": False},\n        enable_load_env=True,\n    )\n    await reme.start()\n\n    # 示例对话消息\n    messages = [\n        {\"role\": \"user\", \"content\": \"你好！\"},\n        {\"role\": \"assistant\", \"content\": \"你好！有什么可以帮您的吗？\"},\n    ]\n\n    # 1. 
检查上下文大小\n    messages_to_compact, messages_to_keep, is_valid = await reme.check_context(\n        messages=messages,\n        memory_compact_threshold=90000,  # 触发压缩的阈值（token 数）\n        memory_compact_reserve=10000,   # 保留最近消息的 token 数\n    )\n\n    # 2. 压缩对话历史为结构化摘要\n    summary = await reme.compact_memory(\n        messages=messages,\n        previous_summary=\"\",\n        max_input_length=128000,  # 模型上下文窗口（token 数）\n        compact_ratio=0.7,        # 超过 max_input_length * 0.7 时触发压缩\n        language=\"zh\",             # 摘要语言（如 \"zh\"）\n    )\n\n    # 3. 压缩长工具输出\n    messages = await reme.compact_tool_result(messages)\n\n    # 4. 持久化重要记忆到文件\n    summary_result = await reme.summary_memory(\n        messages=messages,\n        language=\"zh\",\n    )\n\n    # 5. 语义记忆搜索\n    result = await reme.memory_search(query=\"Python 版本偏好\", max_results=5)\n\n    # 关闭 ReMeLight\n    await reme.close()\n\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n```\n\n---\n\n### 运行说明\n1. **环境变量配置**  \n   在运行代码前，请确保设置以下环境变量（根据实际使用的模型服务）：\n   ```bash\n   export LLM_API_KEY=\"sk-xxx\"\n   export LLM_BASE_URL=\"https:\u002F\u002Fdashscope.aliyuncs.com\u002Fcompatible-mode\u002Fv1\"\n   ```\n   如果需要使用嵌入模型，还需配置：\n   ```bash\n   export EMBEDDING_API_KEY=\"sk-xxx\"\n   export EMBEDDING_BASE_URL=\"https:\u002F\u002Fdashscope.aliyuncs.com\u002Fcompatible-mode\u002Fv1\"\n   ```\n\n2. **日志与调试**  \n   默认情况下，`ReMeLight` 会在控制台输出日志信息。如果需要更详细的日志，可以在初始化时启用调试模式。\n\n3. 
**文件存储结构**  \n   运行后，`ReMeLight` 会在当前工作目录生成以下文件结构：\n   ```\n   working_dir\u002F\n   ├── MEMORY.md              # 长期记忆文件\n   ├── memory\u002F\n   │   └── YYYY-MM-DD.md      # 每日对话摘要\n   ├── dialog\u002F                # 原始对话记录\n   │   └── YYYY-MM-DD.jsonl   # 每日对话消息\n   └── tool_result\u002F           # 工具结果缓存\n       └── \u003Cuuid>.txt\n   ```\n\n---\n\n通过以上步骤，您可以快速上手并开始使用 ReMe 的核心功能！","一位开发者正在为电商网站构建智能客服机器人，需要处理大量用户咨询并记住用户的偏好和历史问题。\n\n### 没有 ReMe 时\n- 用户每次咨询都需要重复提供基本信息，例如订单号、商品名称，导致体验差且效率低  \n- 客服机器人无法记住用户的特殊需求（如只接受环保包装），每次对话都像“第一次见面”  \n- 长时间的多轮对话中，早期的关键信息经常丢失，导致问题解决过程反复且混乱  \n- 历史对话数据存储在数据库中，难以直接查看或编辑，调整记忆内容需要复杂的操作  \n- 当系统迁移或扩展时，记忆数据的导出和导入非常麻烦，容易出现数据不一致  \n\n### 使用 ReMe 后\n- 用户的历史信息和偏好被自动记录并持久存储，新对话可以直接调用相关背景，提升用户体验  \n- 通过文件化的记忆系统，开发人员可以随时查看和编辑 Markdown 格式的记忆文件，灵活调整内容  \n- 长对话中的关键信息会被自动压缩和提炼，确保重要上下文始终保留在有限的上下文窗口中  \n- 支持语义搜索和精确匹配，客服机器人能快速从历史记忆中召回相关信息，提供更精准的服务  \n- 记忆数据以文件形式存储，迁移和备份变得简单直观，大幅降低了系统扩展的复杂度  \n\nReMe 让智能客服机器人真正拥有了“长期记忆”，显著提升了服务效率和用户满意度。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fagentscope-ai_ReMe_66d278f2.png","agentscope-ai","AgentScope-AI","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fagentscope-ai_aa2b4933.png","",null,"agentscope.team@gmail.com","https:\u002F\u002Fgithub.com\u002Fagentscope-ai",[84,88],{"name":85,"color":86,"percentage":87},"Python","#3572A5",100,{"name":89,"color":90,"percentage":91},"Shell","#89e051",0,2610,214,"2026-04-05T08:54:25","Apache-2.0","未说明",{"notes":98,"python":99,"dependencies":100},"需要配置环境变量（如 LLM_API_KEY 和 EMBEDDING_API_KEY），并支持从源码安装。首次运行可能需要下载相关模型文件。","3.10+",[101,76,102,103],"reme-ai","transformers","asyncio",[15,13],[106,107,108,109,110,111],"agent","ai-agents","memory","reme","memoryscope","rag","2026-03-27T02:49:30.150509","2026-04-06T06:46:12.328821",[115,120,125,130,135],{"id":116,"question_zh":117,"answer_zh":118,"source_url":119},4122,"为什么在部署项目时会遇到异常？","可以通过 llamaindex 访问来解决问题，这是最便捷的方式。如果对 llamaindex 不熟悉，仅需学习其基本用法即可，无需额外配置 
api_url。","https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fissues\u002F13",{"id":121,"question_zh":122,"answer_zh":123,"source_url":124},4123,"如何在本地部署的模型上添加记忆功能？","可以通过修改配置文件启用记忆功能，例如设置 `enable_ranker=False` 并确保 `embedding_model` 的 `model_name` 修改为本地部署的模型名称。","https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fissues\u002F14",{"id":126,"question_zh":127,"answer_zh":128,"source_url":129},4124,"是否只能使用支持 dimensions 参数的模型？","可以尝试设置 `embedding_model.default.params.dimensions=None` 来避免相关错误。","https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fissues\u002F69",{"id":131,"question_zh":132,"answer_zh":133,"source_url":134},4120,"下载的 BFCL 数据集字段与代码不匹配，如何解决？","可以尝试使用此 [脚本](https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Ffiles\u002F25455536\u002Fpreprocess.py) 预处理数据文件以匹配代码需求。此外，还可以参考提供的 [可能答案文件](https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Ffiles\u002F25281134\u002FBFCL_v3_multi_turn_base.json)。","https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fissues\u002F45",{"id":136,"question_zh":137,"answer_zh":138,"source_url":139},4121,"如何更可靠地复现 ReMe (fixed) 的结果？","建议将 vector_store 从 local_file 更改为 elasticsearch，因为后者是功能更强大的数据库，可能会改善结果。","https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fissues\u002F57",[141,146,151,156,161,166,171,176,181,186,191,196,201,206,211,216,221,226,231,236],{"id":142,"version":143,"summary_zh":144,"released_at":145},103523,"v0.3.1.8","## What's Changed\r\n* fix(core): handle chromadb import error gracefully by @jinliyl in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F192\r\n* refactor(file_store): move sqlite3 imports inside initialization methods by @jinliyl in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F193\r\n\r\n\r\n**Full Changelog**: 
https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fcompare\u002Fv0.3.1.7...v0.3.1.8","2026-03-31T13:02:23",{"id":147,"version":148,"summary_zh":149,"released_at":150},103524,"v0.3.1.7","## What's Changed\r\n* docs(memory): update ReMeLight memory system documentation by @jinliyl in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F186\r\n* docs(context): add comprehensive context management design documentation by @jinliyl in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F187\r\n* feat(compactor): add extra instruction support and improve error handling by @jinliyl in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F190\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fcompare\u002Fv0.3.1.6...v0.3.1.7","2026-03-31T08:56:41",{"id":152,"version":153,"summary_zh":154,"released_at":155},103525,"v0.3.1.6","## What's Changed\r\n* refactor(truncation): improve file truncation logic by @jinliyl in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F184\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fcompare\u002Fv0.3.1.5...v0.3.1.6","2026-03-28T10:41:49",{"id":157,"version":158,"summary_zh":159,"released_at":160},103526,"v0.3.1.5","## What's Changed\r\n* fix: surface summarize\u002Fretrieve failures instead of masking them by @fancyboi999 in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F160\r\n* feat(memory): improve skills tool result truncation by @jinliyl in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F182\r\n\r\n## New Contributors\r\n* @fancyboi999 made their first contribution in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F160\r\n\r\n**Full Changelog**: 
https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fcompare\u002Fv0.3.1.4...v0.3.1.5","2026-03-27T13:15:44",{"id":162,"version":163,"summary_zh":164,"released_at":165},103527,"v0.3.1.4","## What's Changed\r\n* refactor(file_io): update file I\u002FO operations and truncation logic by @jinliyl in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F177\r\n* refactor(core): replace text truncation utilities with new marker system by @jinliyl in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F179\r\n* docs(README): add Trendshift repository badge by @ployts in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F180\r\n* feat(reme): add configurable file watcher support by @jinliyl in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F181\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fcompare\u002Fv0.3.1.3...v0.3.1.4","2026-03-26T07:51:00",{"id":167,"version":168,"summary_zh":169,"released_at":170},103528,"v0.3.1.3","## What's Changed\r\n* style(memory): update message formatting and improve logging by @jinliyl in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F175\r\n* chore(deps): update version and move litellm to dev dependencies by @jinliyl in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F176\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fcompare\u002Fv0.3.1.2...v0.3.1.3","2026-03-24T14:21:35",{"id":172,"version":173,"summary_zh":174,"released_at":175},103529,"v0.3.1.2-2","## What's Changed\r\n* fix: support BGE-M3 embedding (dense_embedding fallback) by @FUYOH666 in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F169\r\n* 更新实验结果在README中位置 by @nitwtog in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F171\r\n* fix(file-store): handle embedding API errors gracefully with 
fallback… by @jinliyl in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F173\r\n* style(memory): update message formatting and improve logging by @jinliyl in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F175\r\n\r\n## New Contributors\r\n* @FUYOH666 made their first contribution in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F169\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fcompare\u002Fv0.3.1.1...v0.3.1.2-2","2026-03-23T16:25:33",{"id":177,"version":178,"summary_zh":179,"released_at":180},103530,"v0.3.1.1","## What's Changed\r\n* refactor(memory): update conversation continuity context handling by @jinliyl in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F170\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fcompare\u002Fv0.3.1.0...v0.3.1.1","2026-03-19T15:48:43",{"id":182,"version":183,"summary_zh":184,"released_at":185},103531,"v0.3.1.0","## What's Changed\r\n* fix(reme): use timezone-aware datetime in memory summarization by @aquamarine-bot in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F165\r\n* feat(memory): enhance summarizer to include experience reflections by @ployts in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F167\r\n* refactor(core): update text truncation utilities and tool result handling by @jinliyl in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F168\r\n\r\n## New Contributors\r\n* @aquamarine-bot made their first contribution in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F165\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fcompare\u002Fv0.3.0.9...v0.3.1.0","2026-03-19T11:54:10",{"id":187,"version":188,"summary_zh":189,"released_at":190},103532,"v0.3.0.9","## What's Changed\r\n* feat(core): update 
version and enhance configuration management by @jinliyl in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F164\r\n* feat(core): add application restart capability with enhanced configuration options by @jinliyl in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F166\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fcompare\u002Fv0.3.0.8...v0.3.0.9","2026-03-19T03:17:15",{"id":192,"version":193,"summary_zh":194,"released_at":195},103533,"v0.3.0.8","**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fcompare\u002Fv0.3.0.7...v0.3.0.8","2026-03-17T17:17:46",{"id":197,"version":198,"summary_zh":199,"released_at":200},103534,"v0.3.0.7","## What's Changed\r\n* fix(file_watcher): Add clear-on-start option and remove redundant clears in file watcher by @jinliyl in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F161\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fcompare\u002Fv0.3.0.6...v0.3.0.7","2026-03-17T12:14:52",{"id":202,"version":203,"summary_zh":204,"released_at":205},103535,"v0.3.0.6","## What's Changed\r\n* Upgrade GitHub Actions for Node 24 compatibility by @salmanmkc in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F136\r\n* Fix docs: correct English badge link in README by @04cb in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F133\r\n* Dev\u002Freadme by @jinliyl in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F137\r\n* feat(file-watcher): add configurable retry for file watcher by @jinliyl in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F140\r\n* Update: check the code&docs for evaluation on bfcl&appworld by @zouyingcao in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F141\r\n* feat(memory): replace memory formatter with AsMsgHandler for 
enhanced message processing by @jinliyl in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F142\r\n* feat(memory): add ContextChecker component for context size management by @jinliyl in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F144\r\n* refactor(memory): restructure file-based memory components and enhanc… by @jinliyl in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F145\r\n* docs(readme): update documentation with new features and installation… by @jinliyl in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F146\r\n* 增加locomo的代码 by @hyp-001 in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F148\r\n* update readme by @jinliyl in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F149\r\n* fin(emb_dim) by @jinliyl in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F150\r\n* refactor(cli): using AgentScope components to reimplement the reme_cli logic by @zouyingcao in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F153\r\n* 添加Reme在Halumem和Locomo的实验结果 by @nitwtog in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F155\r\n* Dev\u002Ftoken by @jinliyl in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F159\r\n\r\n## New Contributors\r\n* @salmanmkc made their first contribution in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F136\r\n* @04cb made their first contribution in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F133\r\n* @hyp-001 made their first contribution in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F148\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fcompare\u002Fv0.3.0.5...v0.3.0.6","2026-03-17T11:31:16",{"id":207,"version":208,"summary_zh":209,"released_at":210},103536,"v0.3.0.6b3","## 
What's Changed\r\n* docs(readme): update documentation with new features and installation… by @jinliyl in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F146\r\n* 增加locomo的代码 by @hyp-001 in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F148\r\n* update readme by @jinliyl in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F149\r\n* fin(emb_dim) by @jinliyl in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F150\r\n\r\n## New Contributors\r\n* @hyp-001 made their first contribution in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F148\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fcompare\u002Fv0.3.0.6b2...v0.3.0.6b3","2026-03-10T10:23:58",{"id":212,"version":213,"summary_zh":214,"released_at":215},103537,"v0.3.0.6b2","**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fcompare\u002Fv0.3.0.6b1...v0.3.0.6b2","2026-03-07T07:31:55",{"id":217,"version":218,"summary_zh":219,"released_at":220},103538,"v0.3.0.6b1","## What's Changed\r\n* Upgrade GitHub Actions for Node 24 compatibility by @salmanmkc in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F136\r\n* Fix docs: correct English badge link in README by @04cb in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F133\r\n* Dev\u002Freadme by @jinliyl in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F137\r\n* feat(file-watcher): add configurable retry for file watcher by @jinliyl in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F140\r\n* Update: check the code&docs for evaluation on bfcl&appworld by @zouyingcao in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F141\r\n* feat(memory): replace memory formatter with AsMsgHandler for enhanced message processing by @jinliyl in 
https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F142\r\n* feat(memory): add ContextChecker component for context size management by @jinliyl in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F144\r\n* refactor(memory): restructure file-based memory components and enhanc… by @jinliyl in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F145\r\n\r\n## New Contributors\r\n* @salmanmkc made their first contribution in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F136\r\n* @04cb made their first contribution in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F133\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fcompare\u002Fv0.3.0.5...v0.3.0.6b1","2026-03-07T07:23:53",{"id":222,"version":223,"summary_zh":224,"released_at":225},103539,"v0.3.0.5","**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fcompare\u002Fv0.3.0.4...v0.3.0.5","2026-03-04T06:10:59",{"id":227,"version":228,"summary_zh":229,"released_at":230},103540,"v0.3.0.4","**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fcompare\u002Fv0.3.0.3...v0.3.0.4","2026-03-04T06:02:12",{"id":232,"version":233,"summary_zh":234,"released_at":235},103541,"v0.3.0.3","## What's Changed\r\n* Add FAQ.md about ReMe paper by @zouyingcao in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F128\r\n* feat(memory): add CoPaw file-based memory system with compaction and … by @jinliyl in https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fpull\u002F134\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fagentscope-ai\u002FReMe\u002Fcompare\u002Fv0.3.0.2...v0.3.0.3","2026-03-04T02:57:36",{"id":237,"version":238,"summary_zh":80,"released_at":239},103542,"v0.3.0.2","2026-03-03T06:02:34"]