[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-zjunlp--LightMem":3,"tool-zjunlp--LightMem":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",141543,2,"2026-04-06T11:32:54",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107888,"2026-04-06T11:32:50",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[35,15,13,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":77,"owner_twitter":78,"owner_website":79,"owner_url":80,"languages":81,"stars":94,"forks":95,"last_commit_at":96,"license":97,"difficulty_score":32,"env_os":98,"env_gpu":99,"env_ram":100,"env_deps":101,"category_tags":112,"github_topics":113,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":131,"updated_at":132,"faqs":133,"releases":168},4641,"zjunlp\u002FLightMem","LightMem","[ICLR 2026] LightMem: Lightweight and Efficient Memory-Augmented Generation","LightMem 是一个专为大语言模型和 AI 智能体设计的轻量级记忆增强框架，旨在让 AI 具备高效的长期记忆能力。它主要解决了传统 AI 在处理长上下文或复杂任务时容易“遗忘”关键信息，以及现有记忆方案资源消耗大、集成难度高的问题。通过提供极简的存储、检索与更新机制，LightMem 能帮助开发者快速构建拥有持久记忆的智能应用。\n\n该工具非常适合 AI 应用开发者、研究人员以及希望提升 Agent 长期交互能力的工程团队使用。其核心亮点在于“轻量高效”与“灵活兼容”：采用模块化架构，仅需几行代码即可集成；支持自定义存储引擎和检索策略；同时广泛兼容云端 API（如 OpenAI、DeepSeek）及本地部署模型（如 Ollama、vLLM）。此外，LightMem 还配备了完善的基准评估框架，方便用户在 LoCoMo 等数据集上验证记忆模块性能。作为 ICLR 2026 收录成果，LightMem 以低资源占用和快速响应著称，是让 AI 记住过往交互、实现更连贯对话的理想选择。","\u003Cdiv align=\"center\">\n  \u003Cpicture>\n    \u003Csource srcset=\".\u002Ffigs\u002Flightmem_logo_dark.png\" media=\"(prefers-color-scheme: dark)\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzjunlp_LightMem_readme_2e1e292b4634.png\" width=\"60%\" height=\"40%\" \u002F>\n  \u003C\u002Fpicture>\n\u003C\u002Fdiv>\n\u003Ch1 align=\"center\"> LightMem: Lightweight and Efficient Memory-Augmented Generation \u003C\u002Fh1>\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.18866\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FarXiv-Paper-red\" alt=\"arXiv\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLightMem\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fzjunlp\u002FLightMem?style=social\" alt=\"GitHub Stars\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLightMem\u002Fblob\u002Fmain\u002FLICENSE\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-MIT-green.svg\" alt=\"License: MIT\">\n  \u003C\u002Fa>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flast-commit\u002Fzjunlp\u002FLightMem?color=blue\" alt=\"Last Commit\">\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPRs-Welcome-red\" alt=\"PRs Welcome\">\n\u003C\u002Fp>\n\n\u003Ch5 align=\"center\"> ⭐ If you like our project, please give us a star on GitHub for the latest updates!\u003C\u002Fh5>\n\n---\n\n**LightMem** is a lightweight and efficient memory management framework designed for Large Language Models and AI Agents. It provides a simple yet powerful memory storage, retrieval, and update mechanism to help you quickly build intelligent applications with long-term memory capabilities.\n\n* 🚀 **Lightweight & Efficient**\n  \u003Cbr> Minimalist design with minimal resource consumption and fast response times\n\n* 🎯 **Easy to Use**\n  \u003Cbr> Simple API design - integrate into your application with just a few lines of code\n\n* 🔌 **Flexible & Extensible**\n  \u003Cbr> Modular architecture supporting custom storage engines and retrieval strategies\n\n* 🌐 **Broad Compatibility**\n  \u003Cbr> Support for cloud APIs (OpenAI, DeepSeek) and local models (Ollama, vLLM, etc.)\n\n  \u003Cdiv align=center>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzjunlp_LightMem_readme_e4e6cd6d5efd.png\" width=\"100%\" height=\"60%\" \u002F>\u003C\u002Fdiv>\n\n\u003Cspan id='news'\u002F>\n\n## 📢 News\n- **[2026-03-21]**: 🚀 We provide a more comprehensive [baseline evaluation framework](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FMemBase), supporting the benchmarking of memory layers such as Mem0, A-MEM, EverMemOS, LangMem on multiple datasets like LoCoMo and LongMemEval.\n- **[2026-02-15]**: 🚀 **[StructMem](.\u002FStructMem.md)** is released: A hierarchical memory framework that preserves event-level memory bindings and cross-event memory connections. \n- **[2026-01-26]**: 🎉🎉🎉 [**LightMem: Lightweight and Efficient Memory-Augmented Generation**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.18866) has been accepted by **ICLR 2026**!\n- **[2026-01-17]**: 🚀 We provide a comprehensive [baseline evaluation framework](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLightMem\u002Fblob\u002Fmain\u002Fsrc\u002Flightmem\u002Fmemory_toolkits\u002Freadme.md), supporting the benchmarking of memory layers such as Mem0, A-MEM, and LangMem on multiple datasets like LoCoMo and LongMemEval.\n- **[2025-12-09]**: 🎬 Released a **[Demo Video](#demo)** showcasing long-context handling, along with comprehensive **[Tutorial Notebooks](.\u002Ftutorial-notebooks\u002F)** for various scenarios!\n- **[2025-11-30]**: 🚌 LightMem now supports calling multiple tools provided by its [**MCP Server**](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLightMem\u002Fblob\u002Fmain\u002Fmcp\u002Fserver.py).\n- **[2025-11-26]**: 🚀 Added full **LoCoMo** dataset support, delivering strong [results](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLightMem?tab=readme-ov-file#locomo) with leading performance and efficiency! Here is the [**reproduction script**](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLightMem\u002Fblob\u002Fmain\u002Fexperiments\u002Flocomo\u002Freadme.md)!\n- **[2025-11-09]**: ✨ LightMem now supports local deployment via [**Ollama**](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLightMem\u002Fblob\u002Fmain\u002Fsrc\u002Flightmem\u002Ffactory\u002Fmemory_manager\u002Follama.py), [**vLLM**](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLightMem\u002Fblob\u002Fmain\u002Fsrc\u002Flightmem\u002Ffactory\u002Fmemory_manager\u002Fvllm_offline.py), and [**Transformers**](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLightMem\u002Fblob\u002Fmain\u002Fsrc\u002Flightmem\u002Ffactory\u002Fmemory_manager\u002Ftransformers.py) auto-loading!\n- **[2025-10-12]**: 🎉 LightMem project is officially Open-Sourced!\n\n\n\u003Cspan id='reproduction'\u002F>\n\n## 🧪 Reproduction Scripts for LoCoMo & LongMemEval\n\nWe provide lightweight, ready-to-run scripts for reproducing results on **LoCoMo**, **LongMemEval**, and their combined baselines.\n\n| Dataset                  | Description                                                                  | Script                                                                                                         | Result |\n| :----------------------- | :--------------------------------------------------------------------------- | :--------------------------------------------------------------------------------------------------------------| :---------------------------------------------|\n| **LongMemEval**          | Run LightMem on LongMemEval, including evaluation and offline memory update. | [run_lightmem_longmemeval.md](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLightMem\u002Fblob\u002Fmain\u002Fexperiments\u002Flongmemeval\u002Freadme.md)  |  [LongMemEval Results](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLightMem\u002Fblob\u002Fmain\u002Fexperiments\u002Flongmemeval\u002Freadme.md#results) |\n| **LoCoMo**               | Scripts for reproducing LightMem results on LoCoMo.                          | [run_lightmem_locomo.md](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLightMem\u002Fblob\u002Fmain\u002Fexperiments\u002Flocomo\u002Freadme.md)            |  [LoCoMo Results](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLightMem\u002Fblob\u002Fmain\u002Fexperiments\u002Flocomo\u002Freadme.md#results)      |\n| **LongMemEval & LoCoMo** | Unified baseline scripts for running both datasets.                          | [run_baselines.md](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLightMem\u002Fblob\u002Fmain\u002Fsrc\u002Flightmem\u002Fmemory_toolkits\u002Freadme.md)        |  [Baseline Results](#experimental-results)    |\n\n\u003Cspan id='baseline-evaluation'\u002F>\n\n## 🧪 Baseline Evaluation\n\nWe provide a comprehensive [baseline evaluation framework](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLightMem\u002Fblob\u002Fmain\u002Fsrc\u002Flightmem\u002Fmemory_toolkits\u002Freadme.md), supporting the benchmarking of memory layers such as Mem0, A-MEM, and LangMem on multiple datasets like LoCoMo and LongMemEval.\n\n\u003Cspan id='demo'\u002F>\n\n## 🎥 Demo & Tutorials\n\n**Watch Demo:** [YouTube](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=r7sk_7Yv66I) | [Bilibili](https:\u002F\u002Fwww.bilibili.com\u002Fvideo\u002FBV1a7mJBbEVM\u002F)\n\n### 📚 Hands-on Tutorials\nWe provide ready-to-use Jupyter notebooks corresponding to the demo and other use cases. You can find them in the [`tutorial-notebooks`](.\u002Ftutorial-notebooks\u002F) directory.\n\n| Scenario | Description | Notebook Link |\n| :--- | :--- | :--- |\n| **Travel Planning** | A complete guide to building a travel agent with memory. | [LightMem_Example_travel.ipynb](.\u002Ftutorial-notebooks\u002FLightMem_Example_travel.ipynb) |\n| **Code Assistant** | A complete guide to building a code agent with memory. | [LightMem_Example_code.ipynb](.\u002Ftutorial-notebooks\u002FLightMem_Example_code.ipynb) |\n| **LongMemEval** | A tutorial on how to run evaluations on LongMemEval benchmarks using LightMem. | [LightMem_Example_longmemeval.ipynb](.\u002Ftutorial-notebooks\u002FLightMem_Example_longmemeval.ipynb) |\n\n\n\u003Cspan id='todo'\u002F>\n\n## ☑️ Todo List\nLightMem is continuously evolving! Here's what's coming:\n    \n- Offline Pre-computation of KV Cache for Update (Lossless)\n- Online Pre-computation of KV Cache Before Q&A (Lossy)\n- Integration More Models and Feature Enhancement\n- Coordinated Use of Context and Long-Term Memory Storage\n- Multi Modal Memory \n\n\n\u003Cspan id='contents'\u002F>\n\n## 📑 Table of Contents\n\n* \u003Ca href='#news'>📢 News\u003C\u002Fa>\n* \u003Ca href='#reproduction'>🧪 Reproduction Scripts\u003C\u002Fa>\n* \u003Ca href='#baseline-evaluation'>🧪 Baseline Evaluation\u003C\u002Fa>\n* \u003Ca href='#demo'>🎥 Demo & Tutorials\u003C\u002Fa>\n* \u003Ca href='#todo'>☑️ Todo List\u003C\u002Fa>\n* \u003Ca href='#installation'>🔧 Installation\u003C\u002Fa>\n* \u003Ca href='#quickstart'>⚡ Quick Start\u003C\u002Fa>\n* \u003Ca href='#architecture'>🏗️ Architecture\u003C\u002Fa>\n* \u003Ca href='#examples'>💡 Examples\u003C\u002Fa>\n* \u003Ca href='#experimental-results'>📁 Experimental Results\u003C\u002Fa>\n* \u003Ca href='#configuration'>⚙️ Configuration\u003C\u002Fa>\n* \u003Ca href='#contributors'>👥 Contributors\u003C\u002Fa>\n* \u003Ca href='#related'>🔗 Related Projects\u003C\u002Fa>\n\n\u003Cspan id='installation'\u002F>\n\n## 🔧 Installation\n\n### Installation Steps\n\n#### Option 1: Install from Source \n```bash\n# Clone the repository\ngit clone https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLightMem.git\ncd LightMem\n\n# Create virtual environment\nconda create -n lightmem python=3.11 -y\nconda activate lightmem\n\n# Install dependencies\nunset ALL_PROXY\npip install -e .\n```\n\n#### Option 2: Install via pip\n```bash\npip install lightmem  # Coming soon\n```\n\n## ⚡ Quick Start\n\n1. Modify the `JUDGE_MODEL`, `LLM_MODEL`, and their respective `API_KEY` and `BASE_URL` in `API Configuration`.\n\n2. Download `LLMLINGUA_MODEL` from [microsoft\u002Fllmlingua-2-bert-base-multilingual-cased-meetingbank](https:\u002F\u002Fhuggingface.co\u002Fmicrosoft\u002Fllmlingua-2-bert-base-multilingual-cased-meetingbank) and `EMBEDDING_MODEL` from [sentence-transformers\u002Fall-MiniLM-L6-v2](https:\u002F\u002Fhuggingface.co\u002Fsentence-transformers\u002Fall-MiniLM-L6-v2) and modify their paths in `Model Paths`.\n\n3. Download the dataset from [longmemeval-cleaned](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Fxiaowu0162\u002Flongmemeval-cleaned), and modidy the path in `Data Configuration`.\n\n```python\ncd experiments\npython run_lightmem_qwen.py\n```\n\n\u003Cspan id='architecture'\u002F>\n\n## 🏗️ Architecture\n\n### 🗺️ Core Modules Overview\nLightMem adopts a modular design, breaking down the memory management process into several pluggable components. The core directory structure exposed to users is outlined below, allowing for easy customization and extension:\n\n```python\nLightMem\u002F\n├── src\u002Flightmem\u002F            # Main package\n│   ├── __init__.py          # Package initialization\n│   ├── configs\u002F             # Configuration files\n│   ├── factory\u002F             # Factory methods\n│   ├── memory\u002F              # Core memory management\n│   └── memory_toolkits\u002F     # Memory toolkits\n├── mcp\u002F                     # LightMem MCP server\n├── experiments\u002F             # Experiment scripts\n├── datasets\u002F                # Datasets files\n└── examples\u002F                # Examples\n```\n\n### 🧩 Supported Backends per Module\n\nThe following table lists the backends values currently recognized by each configuration module. Use the `model_name` field (or the corresponding config object) to select one of these backends.\n\n| Module (config)                 | Supported backends |\n| :---                            | :--- |\n| `PreCompressorConfig`           | `llmlingua-2`, `entropy_compress` |\n| `TopicSegmenterConfig`          | `llmlingua-2` |\n| `MemoryManagerConfig`           | `openai`, `deepseek`, `ollama`, `vllm`, etc. |\n| `TextEmbedderConfig`            | `huggingface` |\n| `MMEmbedderConfig`              | `huggingface` |\n| `RetrieverConfig`      | `qdrant`, `FAISS`, `BM25` |\n\n\u003Cspan id='examples'\u002F>\n\n## 💡 Examples\n\n### Initialize LightMem\n```python\nimport os\nfrom datetime import datetime\nfrom lightmem.memory.lightmem import LightMemory\n\n\nLOGS_ROOT = \".\u002Flogs\"\nRUN_TIMESTAMP = datetime.now().strftime(\"%Y%m%d_%H%M%S\")\nRUN_LOG_DIR = os.path.join(LOGS_ROOT, RUN_TIMESTAMP)\nos.makedirs(RUN_LOG_DIR, exist_ok=True)\n\nAPI_KEY='your_api_key'\nAPI_BASE_URL='your_api_base_url'\nLLM_MODEL='your_model_name' # such as 'gpt-4o-mini' (API) or 'gemma3:latest' (Local Ollama) ...\nEMBEDDING_MODEL_PATH='\u002Fyour\u002Fpath\u002Fto\u002Fmodels\u002Fall-MiniLM-L6-v2'\nLLMLINGUA_MODEL_PATH='\u002Fyour\u002Fpath\u002Fto\u002Fmodels\u002Fllmlingua-2-bert-base-multilingual-cased-meetingbank'\n\nconfig_dict = {\n    \"pre_compress\": True,\n    \"pre_compressor\": {\n        \"model_name\": \"llmlingua-2\",\n        \"configs\": {\n            \"llmlingua_config\": {\n                \"model_name\": LLMLINGUA_MODEL_PATH,\n                \"device_map\": \"cuda\",\n                \"use_llmlingua2\": True,\n            },\n        }\n    },\n    \"topic_segment\": True,\n    \"precomp_topic_shared\": True,\n    \"topic_segmenter\": {\n        \"model_name\": \"llmlingua-2\",\n    },\n    \"messages_use\": \"user_only\",\n    \"metadata_generate\": True,\n    \"text_summary\": True,\n    \"memory_manager\": {\n        \"model_name\": 'xxx', # such as 'openai' or 'ollama' ...\n        \"configs\": {\n            \"model\": LLM_MODEL,\n            \"api_key\": API_KEY,\n            \"max_tokens\": 16000,\n            \"xxx_base_url\": API_BASE_URL # API model specific, such as 'openai_base_url' or 'deepseek_base_url' ...\n        }\n    },\n    \"extract_threshold\": 0.1,\n    \"index_strategy\": \"embedding\",\n    \"text_embedder\": {\n        \"model_name\": \"huggingface\",\n        \"configs\": {\n            \"model\": EMBEDDING_MODEL_PATH,\n            \"embedding_dims\": 384,\n            \"model_kwargs\": {\"device\": \"cuda\"},\n        },\n    },\n    \"retrieve_strategy\": \"embedding\",\n    \"embedding_retriever\": {\n        \"model_name\": \"qdrant\",\n        \"configs\": {\n            \"collection_name\": \"my_long_term_chat\",\n            \"embedding_model_dims\": 384,\n            \"path\": \".\u002Fmy_long_term_chat\", \n        }\n    },\n    \"summary_retriever\": {\n        \"model_name\": \"qdrant\",\n        \"configs\": {\n            \"collection_name\": \"my_chat_summaries\",\n            \"embedding_model_dims\": 384,\n            \"path\": \".\u002Fmy_chat_summaries\",\n        }\n    },\n    \"update\": \"offline\",\n    \"logging\": {\n        \"level\": \"DEBUG\",\n        \"file_enabled\": True,\n        \"log_dir\": RUN_LOG_DIR,\n    }\n}\n\nlightmem = LightMemory.from_config(config_dict)\n```\n\n### Add Memory\n```python\nsession = {\n\"timestamp\": \"2025-01-10\",\n\"turns\": [\n    [\n        {\"role\": \"user\", \"content\": \"My favorite ice cream flavor is pistachio, and my dog's name is Rex.\"}, \n        {\"role\": \"assistant\", \"content\": \"Got it. Pistachio is a great choice.\"}], \n    ]\n}\n\n\nfor turn_messages in session[\"turns\"]:\n    timestamp = session[\"timestamp\"]\n    for msg in turn_messages:\n        msg[\"time_stamp\"] = timestamp\n        \n    store_result = lightmem.add_memory(\n        messages=turn_messages,\n        force_segment=True,\n        force_extract=True\n    )\n```\n\n### Offline Update\n```python\nlightmem.construct_update_queue_all_entries()\nlightmem.offline_update_all_entries(score_threshold=0.8)\n``` \n\n### Generate summaries \n```python\nsummary_result = lightmem.summarize()\n```\n\n### Retrieve Memory\n```python\nquestion = \"What is the name of my dog?\"\nrelated_memories = lightmem.retrieve(question, limit=5)\nprint(related_memories)\n``` \n\n### MCP Server\n\nLightMem also supports the Model Context Protocol ([MCP](https:\u002F\u002Fmodelcontextprotocol.io\u002Fdocs\u002Fgetting-started\u002Fintro)) server:\n\n```bash\n# Running at Root Directory\ncd LightMem\n\n# Environment\npip install '.[mcp]'\n\n# MCP Inspector [Optional]\nnpx @modelcontextprotocol\u002Finspector python mcp\u002Fserver.py\n\n# Start API by HTTP (http:\u002F\u002F127.0.0.1:8000\u002Fmcp)\nfastmcp run mcp\u002Fserver.py:mcp --transport http --port 8000\n```\n\nThe MCP config `json` file of your local client may looks like:\n\n```json\n{\n  \"yourMcpServers\": {\n    \"LightMem\": {\n      \"url\": \"http:\u002F\u002F127.0.0.1:8000\u002Fmcp\",\n      \"otherParameters\": \"...\"\n    }\n  }\n}\n```\n\n\u003Cspan id='experimental-results'\u002F>\n\n## 📁 Experimental Results\n\nFor transparency and reproducibility, we have shared the results of our experiments on Google Drive. This includes model outputs, evaluation logs, and predictions used in our study.\n\n🔗 Access the data here: [Google Drive - Experimental Results](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1n1YCqq0aDeWiPILhkq-uS3sU3FDmslz9?usp=drive_link)\n\nPlease feel free to download, explore, and use these resources for research or reference purposes.\n\n\u003Cspan id='configuration'\u002F>\n\n### LOCOMO: \n\n#### Overview\n\nbackbone: `gpt-4o-mini`, judge model: `gpt-4o-mini` & `qwen2.5-32b-instruct`\n\n| Method             | ACC(%) gpt-4o-mini | ACC(%) qwen2.5-32b-instruct | Memory-Con Tokens(k) Total | QA Tokens(k) total | Total(k)     | Calls  | Runtime(s) total |\n|-------------------|--------------------|------------------------------|-----------------------------|---------------------|--------------|--------|------------------|\n| FullText          | 73.83              | 73.18                        | –                           | 54,884.479          | 54,884.479   | –      | 6,971           |\n| NaiveRAG          | 63.64              | 63.12                        | –                           | 3,870.187           | 3,870.187    | –      | 1,884           |\n| A-MEM             | 64.16              | 60.71                        | 11,494.344                  | 10,170.567          | 21,664.907   | 11,754 | 67,084          |\n| MemoryOS(eval)    | 58.25              | 61.04                        | 2,870.036                   | 7,649.343           | 10,519.379   | 5,534  | 26,129          |\n| MemoryOS(pypi)    | 54.87              | 55.91                        | 5,264.801                   | 6,126.111           | 11,390.004   | 10,160 | 37,912          |\n| Mem0              | 36.49              | 37.01                        | 24,304.872                  | 1,488.618           | 25,793.490   | 19,070 | 120,175         |\n| Mem0(api)         | 61.69              | 61.69                        | 68,347.720                  | 4,169.909           | 72,517.629   | 6,022  | 10,445          |\n| Mem0-g(api)       | 60.32              | 59.48                        | 69,684.818                  | 4,389.147           | 74,073.965   | 6,022  | 10,926          |\n\nbackbone: `qwen3-30b-a3b-instruct-2507`, judge model: `gpt-4o-mini` & `qwen2.5-32b-instruct`\n\n| Method             | ACC(%) gpt-4o-mini | ACC(%) qwen2.5-32b-instruct | Memory-Con Tokens(k) Total | QA Tokens(k) total | Total(k)     | Calls  | Runtime(s) total |\n|-------------------|--------------------|------------------------------|-----------------------------|---------------------|--------------|--------|------------------|\n| FullText          | 74.87              | 74.35                        | –                           | 60,873.076          | 60,873.076   | –      | 10,555           |\n| NaiveRAG          | 66.95              | 64.68                        | –                           | 4,271.052           | 4,271.052    | –      | 1,252            |\n| A-MEM             | 56.10              | 54.81                        | 16,267.997                  | 17,340.881          | 33,608.878   | 11,754 | 69,339           |\n| MemoryOS(eval)    | 61.04              | 59.81                        | 3,615.087                   | 9,703.169           | 11,946.442   | 4,147  | 13,710           |\n| MemoryOS(pypi)    | 51.30              | 51.95                        | 6,663.527                   | 7,764.991           | 14,428.518   | 10,046 | 20,830           |\n| Mem0              | 43.31              | 43.25                        | 17,994.035                  | 1,765.570           | 19,759.605   | 16,145 | 46,500           |\n\n\n#### Details\n\nbackbone: `gpt-4o-mini`, judge model: `gpt-4o-mini` & `qwen2.5-32b-instruct`\n\n| Method             | Summary Tokens(k) In | Summary Tokens(k) Out | Update Tokens(k) In | Update Tokens(k) Out | QA Tokens(k) In | QA Tokens(k) Out | Runtime(s) mem-con | Runtime(s) qa |\n|-------------------|-----------------------|------------------------|----------------------|-----------------------|------------------|-------------------|----------------------|----------------|\n| FullText          | –                     | –                      | –                    | –                     | 54,858.770       | 25.709            | –                    | 6,971          |\n| NaiveRAG          | –                     | –                      | –                    | –                     | 3,851.029        | 19.158            | –                    | 1,884          |\n| A-MEM             | 1,827.373             | 492.883                | 7,298.878            | 1,875.210             | 10,113.252       | 57.315            | 60,607               | 6,477          |\n| MemoryOS(eval)    | 1,109.849             | 333.970                | 780.807              | 645.410               | 7,638.539        | 10.804            | 24,220               | 1,909          |\n| MemoryOS(pypi)    | 1,007.729             | 294.601                | 3,037.509            | 924.962               | 6,116.239        | 9.872             | 33,325               | 4,587          |\n| Mem0              | 8,127.398             | 253.187                | 12,722.011           | 3,202.276             | 1,478.830        | 9.788             | 118,268              | 1,907          |\n| Mem0(api)         | \\                     | \\                      | \\                    | \\                     | 4,156.850        | 13.059            | 4,328                | 6,117          |\n| Mem0-g(api)       | \\                     | \\                      | \\                    | \\                     | 4,375.900        | 13.247            | 5,381                | 5,545          |\n\nbackbone: `qwen3-30b-a3b-instruct-2507`, judge model: `gpt-4o-mini` & `qwen2.5-32b-instruct`\n\n| Method             | Summary Tokens(k) In | Summary Tokens(k) Out | Update Tokens(k) In | Update Tokens(k) Out | QA Tokens(k) In | QA Tokens(k) Out | Runtime(s) mem-con | Runtime(s) qa |\n|-------------------|-----------------------|------------------------|----------------------|-----------------------|------------------|-------------------|----------------------|----------------|\n| FullText          | –                     | –                      | –                    | –                     | 60,838.694       | 34.382            | –                    | 10,555         |\n| NaiveRAG          | –                     | –                      | –                    | –                     | 4,239.030        | 32.022            | –                    | 1,252          |\n| A-MEM             | 1,582.942             | 608.507                | 9,241.928            | 4,835.070             | 17,528.876       | 82.005            | 55,439               | 13,900         |\n| MemoryOS(eval)    | 1,222.139             | 531.157                | 1,044.307            | 817.484               | 9,679.996        | 23.173            | 12,697               | 1,012          |\n| MemoryOS(pypi)    | 2,288.533             | 516.024                | 2,422.693            | 1,436.277             | 7,743.391        | 21.600            | 19,822               | 1,007          |\n| Mem0              | 8,270.874             | 186.354                | 7,638.827            | 1,897.980             | 1,739.246        | 26.324            | 45,407               | 1,093          |\n\n#### Performance metrics\nbackbone: `gpt-4o-mini`, judge model: `gpt-4o-mini`\n\n| Method | Overall ↑ | Multi | Open | Single | Temp |\n| :--- | :---: | :---: | :---: | :---: | :---: |\n| FullText         | 73.83 | 68.79 | 56.25 | 86.56 | 50.16 |\n| NaiveRAG         | 63.64 | 55.32 | 47.92 | 70.99 | 56.39 |\n| A-MEM            | 64.16 | 56.03 | 31.25 | 72.06 | 60.44 |\n| MemoryOS(eval)   | 58.25 | 56.74 | 45.83 | 67.06 | 40.19 |\n| MemoryOS(pypi)   | 54.87 | 52.13 | 43.75 | 63.97 | 36.76 |\n| Mem0             | 36.49 | 30.85 | 34.38 | 38.41 | 37.07 |\n| Mem0(api)        | 61.69 | 56.38 | 43.75 | 66.47 | 59.19 |\n| Mem0-g(api)      | 60.32 | 54.26 | 39.58 | 65.99 | 57.01 |\n\nbackbone: `gpt-4o-mini`, judge model: `qwen2.5-32b-instruct`\n\n| Method | Overall ↑ | Multi | Open | Single | Temp |\n| :--- | :---: | :---: | :---: | :---: | :---: |\n| FullText         | 73.18 | 68.09 | 54.17 | 86.21 | 49.22 |\n| NaiveRAG         | 63.12 | 53.55 | 50.00 | 71.34 | 53.89 |\n| A-MEM            | 60.71 | 53.55 | 32.29 | 69.08 | 53.58 |\n| MemoryOS(eval)   | 61.04 | 64.18 | 40.62 | 70.15 | 40.50 |\n| MemoryOS(pypi)   | 55.91 | 52.48 | 41.67 | 66.35 | 35.83 |\n| Mem0             | 37.01 | 31.91 | 37.50 | 38.53 | 37.38 |\n| Mem0(api)        | 61.69 | 54.26 | 46.88 | 67.66 | 57.01 |\n| Mem0-g(api)      | 59.48 | 55.32 | 42.71 | 65.04 | 53.58 |\n\nbackbone: `qwen3-30b-a3b-instruct-2507`, judge model: `gpt-4o-mini`\n\n| Method | Overall ↑ | Multi | Open | Single | Temp |\n| :--- | :---: | :---: | :---: | :---: | :---: |\n| FullText         | 74.87 | 69.86 | 57.29 | 87.40 | 51.71 |\n| NaiveRAG         | 66.95 | 62.41 | 57.29 | 76.81 | 47.98 |\n| A-MEM            | 56.10 | 57.45 | 43.75 | 67.90 | 27.73 |\n| MemoryOS(eval)   | 61.04 | 62.77 | 51.04 | 72.29 | 33.02 |\n| MemoryOS(pypi)   | 51.30 | 52.48 | 40.62 | 61.59 | 26.48 |\n| Mem0             | 43.31 | 42.91 | 46.88 | 46.37 | 34.58 |\n| Mem0(api)        | 61.69 | 54.26 | 46.88 | 67.66 | 57.01 |\n| Mem0-g(api)      | 59.48 | 55.32 | 42.71 | 65.04 | 53.58 |\n\nbackbone: `qwen3-30b-a3b-instruct-2507`, judge model: `qwen2.5-32b-instruct`\n\n| Method | Overall ↑ | Multi | Open | Single | Temp |\n| :--- | :---: | :---: | :---: | :---: | :---: |\n| FullText         | 74.35 | 68.09 | 63.54 | 86.33 | 51.71 |\n| NaiveRAG         | 64.68 | 60.28 | 52.08 | 75.62 | 43.61 |\n| A-MEM            | 54.81 | 56.74 | 39.58 | 67.42 | 24.61 |\n| MemoryOS(eval)   | 59.81 | 63.12 | 48.96 | 70.51 | 32.09 |\n| MemoryOS(pypi)   | 51.95 | 55.67 | 39.58 | 61.47 | 27.41 |\n| Mem0             | 43.25 | 45.04 | 46.88 | 45.78 | 33.96 |\n| Mem0(api)        | 61.69 | 54.26 | 46.88 | 67.66 | 57.01 |\n| Mem0-g(api)      | 59.48 | 55.32 | 42.71 | 65.04 | 53.58 |\n\n\n## ⚙️ Configuration\n\nAll behaviors of LightMem are controlled via the BaseMemoryConfigs configuration class. Users can customize aspects like pre-processing, memory extraction, retrieval strategy, and update mechanisms by providing a custom configuration.\n\n#### Key Configuration Options (Usage)\n\n| Option                    | Default                                     | Usage (allowed values and behavior) |\n| :---                      | :---                                        | :--- |\n| `pre_compress`        | `False`                                     | True \u002F False. If True, input messages are pre-compressed using the `pre_compressor` configuration before being stored. This reduces storage and indexing cost but may remove fine-grained details. If False, messages are stored without pre-compression. |\n| `pre_compressor`      | `None`                                      | dict \u002F object. Configuration for the pre-compression component (`PreCompressorConfig`) with fields like `model_name` (e.g., `llmlingua-2`, `entropy_compress`) and `configs` (model-specific parameters). Effective only when `pre_compress=True`. |\n| `topic_segment`       | `False`                                     | True \u002F False. Enables topic-based segmentation of long conversations. When True, long conversations are split into topic segments and each segment can be indexed\u002Fstored independently (requires `topic_segmenter`). When False, messages are stored sequentially. |\n| `precomp_topic_shared`| `False`                                     | True \u002F False. If True, pre-compression and topic segmentation can share intermediate results to avoid redundant processing. May improve performance but requires careful configuration to avoid cross-topic leakage. |\n| `topic_segmenter`     | `None`                                      | dict \u002F object. Configuration for topic segmentation (`TopicSegmenterConfig`), including `model_name` and `configs` (segment length, overlap, etc.). Used when `topic_segment=True`. |\n| `messages_use`        | `'user_only'`                               | `'user_only'` \u002F `'assistant_only'` \u002F `'hybrid'`. Controls which messages are used to generate metadata and summaries: `user_only` uses user inputs, `assistant_only` uses assistant responses, `hybrid` uses both. Choosing `hybrid` increases processing but yields richer context. |\n| `metadata_generate`   | `True`                                      | True \u002F False. If True, metadata such as keywords and entities are extracted and stored to support attribute-based and filtered retrieval. If False, no metadata extraction occurs. |\n| `text_summary`        | `True`                                      | True \u002F False. If True, a text summary is generated and stored alongside the original text (reduces retrieval cost and speeds review). If False, only the original text is stored. Summary quality depends on `memory_manager`. |\n| `memory_manager`      | `MemoryManagerConfig()`                     | dict \u002F object. Controls the model used to generate summaries and metadata (`MemoryManagerConfig`), e.g., `model_name` (`openai`, `ollama`, etc.) and `configs`. Changing this affects summary style, length, and cost. |\n| `extract_threshold`   | `0.5`                                       | float (0.0 - 1.0). Threshold used to decide whether content is important enough to be extracted as metadata or highlight. Higher values (e.g., 0.8) mean more conservative extraction; lower values (e.g., 0.2) extract more items (may increase noise). |\n| `index_strategy`      | `None`                                      | `'embedding'` \u002F `'context'` \u002F `'hybrid'` \u002F `None`. Determines how memories are indexed: 'embedding' uses vector-based indexing (requires embedders\u002Fretriever) for semantic search; 'context' uses text-based\u002Fcontextual retrieval (requires context_retriever) for keyword\u002Fdocument similarity; and 'hybrid' combines context filtering and vector reranking for robustness and higher accuracy. |\n| `text_embedder`       | `None`                                      | dict \u002F object. Configuration for text embedding model (`TextEmbedderConfig`) with `model_name` (e.g., `huggingface`) and `configs` (batch size, device, embedding dim). Required when `index_strategy` or `retrieve_strategy` includes `'embedding'`. |\n| `multimodal_embedder` | `None`                                      | dict \u002F object. Configuration for multimodal\u002Fimage embedder (`MMEmbedderConfig`). Used for non-text modalities. |\n| `history_db_path`     | `os.path.join(lightmem_dir, \"history.db\")`  | str. Path to persist conversation history and lightweight state. Useful to restore state across restarts. |\n| `retrieve_strategy`   | `'embedding'`                               | `'embedding'` \u002F `'context'` \u002F `'hybrid'`. Strategy used at query time to fetch relevant memories. Pick based on data and query type: semantic queries -> `'embedding'`; keyword\u002Fstructured queries -> `'context'`; mixed -> `'hybrid'`. |\n| `context_retriever`   | `None`                                      | dict \u002F object. Configuration for context-based retriever (`ContextRetrieverConfig`), e.g., `model_name='BM25'` and `configs` like `top_k`. Used when `retrieve_strategy` includes `'context'`. |\n| `embedding_retriever` | `None`                                      | dict \u002F object. Vector store configuration (`EmbeddingRetrieverConfig`), e.g., `model_name='qdrant'` and connection\u002Findex params. Used when `retrieve_strategy` includes `'embedding'`. |\n| `summary_retriever`   | `None`                                      | dict \u002F object. Configuration for summary-specific vector store (`EmbeddingRetrieverConfig`). When configured, summaries are stored in a separate collection for hierarchical retrieval. Used in StructMem mode to store and retrieve session\u002Ftopic summaries independently from detailed memories. |\n| `update`              | `'offline'`                                 | `'online'` \u002F `'offline'`. `'offline'`: batch or scheduled updates to save cost and aggregate changes — this is the fully supported mode with complete functionality. `'online'`: reserved for future development (currently a no-op placeholder; memory will not be persisted when this mode is set). |\n| `kv_cache`            | `False`                                     | True \u002F False. If True, attempt to precompute and persist model KV caches to accelerate repeated LLM calls (requires support from the LLM runtime and may increase storage). Uses `kv_cache_path` to store cache. |\n| `kv_cache_path`       | `os.path.join(lightmem_dir, \"kv_cache.db\")` | str. File path for KV cache storage when `kv_cache=True`. |\n| `graph_mem`           | `False`                                     | True \u002F False. When True, some memories will be organized as a graph (nodes and relationships) to support complex relation queries and reasoning. Requires additional graph processing\u002Fstorage. |\n| `extraction_mode`     | `'flat'`                                    | `'flat'` \u002F `'event'`. Memory extraction mode: `'flat'` extracts factual entries as independent units suitable for general knowledge retention; `'event'` extracts event-level structures with both factual and relational components, preserving temporal bindings and causal relationships. Use `'event'` for narrative-heavy or time-sensitive scenarios. |\n| `version`             | `'v1.1'`                                    | str. Configuration\u002FAPI version. Only change if you know compatibility implications. |\n| `logging`             | `'None'`                                    | dict \u002F object. Configuration for logging enabled. |\n\n## 🏆 Contributors\n\n\u003Ctable>\n  \u003Ctr>\n    \u003Ctd align=\"center\" width=\"120\">\n      \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FJizhanFang\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzjunlp_LightMem_readme_20ccf2936b9a.png\" width=\"80\" style=\"border-radius:50%\" alt=\"JizhanFang\"\u002F>\n        \u003Cbr \u002F>\n        \u003Csub>\u003Cb>JizhanFang\u003C\u002Fb>\u003C\u002Fsub>\n      \u003C\u002Fa>\n    \u003C\u002Ftd>\n    \u003Ctd align=\"center\" width=\"120\">\n      \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FXinle-Deng\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzjunlp_LightMem_readme_f41d001a9d10.png\" width=\"80\" style=\"border-radius:50%\" alt=\"Xinle-Deng\"\u002F>\n        \u003Cbr \u002F>\n        \u003Csub>\u003Cb>Xinle-Deng\u003C\u002Fb>\u003C\u002Fsub>\n      \u003C\u002Fa>\n    \u003C\u002Ftd>\n    \u003Ctd align=\"center\" width=\"120\">\n      \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FXubqpanda\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzjunlp_LightMem_readme_d97c8fdadfd4.png\" width=\"80\" style=\"border-radius:50%\" alt=\"Xubqpanda\"\u002F>\n        \u003Cbr \u002F>\n        \u003Csub>\u003Cb>Xubqpanda\u003C\u002Fb>\u003C\u002Fsub>\n      \u003C\u002Fa>\n    \u003C\u002Ftd>\n    \u003Ctd align=\"center\" width=\"120\">\n      \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FHaomingX\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzjunlp_LightMem_readme_ed6cec0c1348.png\" width=\"80\" style=\"border-radius:50%\" alt=\"HaomingX\"\u002F>\n        \u003Cbr \u002F>\n        \u003Csub>\u003Cb>HaomingX\u003C\u002Fb>\u003C\u002Fsub>\n      \u003C\u002Fa>\n    \u003C\u002Ftd>\n    \u003Ctd align=\"center\" width=\"120\">\n      \u003Ca href=\"https:\u002F\u002Fgithub.com\u002F453251\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzjunlp_LightMem_readme_4b3dbe893af4.png\" width=\"80\" style=\"border-radius:50%\" alt=\"453251\"\u002F>\n        \u003Cbr \u002F>\n        \u003Csub>\u003Cb>453251\u003C\u002Fb>\u003C\u002Fsub>\n      \u003C\u002Fa>\n    \u003C\u002Ftd>\n    \u003Ctd align=\"center\" width=\"120\">\n      \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FJames-TYQ\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzjunlp_LightMem_readme_5d5aae6a1ecc.png\" width=\"80\" style=\"border-radius:50%\" alt=\"James-TYQ\"\u002F>\n        \u003Cbr \u002F>\n        \u003Csub>\u003Cb>James-TYQ\u003C\u002Fb>\u003C\u002Fsub>\n      \u003C\u002Fa>\n    \u003C\u002Ftd>\n    \u003Ctd align=\"center\" width=\"120\">\n      \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fevy568\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzjunlp_LightMem_readme_e85e4ac234ba.png\" width=\"80\" style=\"border-radius:50%\" alt=\"evy568\"\u002F>\n        \u003Cbr \u002F>\n        \u003Csub>\u003Cb>evy568\u003C\u002Fb>\u003C\u002Fsub>\n      \u003C\u002Fa>\n    \u003C\u002Ftd>\n    \u003Ctd align=\"center\" width=\"120\">\n      \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FNorah-Feathertail\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzjunlp_LightMem_readme_3a4538a48afc.png\" width=\"80\" style=\"border-radius:50%\" alt=\"Norah-Feathertail\"\u002F>\n        \u003Cbr \u002F>\n        \u003Csub>\u003Cb>Norah-Feathertail\u003C\u002Fb>\u003C\u002Fsub>\n      \u003C\u002Fa>\n    \u003C\u002Ftd>\n    \u003Ctd align=\"center\" width=\"120\">\n      \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FTongjiCst\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzjunlp_LightMem_readme_1aff83dfd64b.png\" width=\"80\" style=\"border-radius:50%\" alt=\"TongjiCst\"\u002F>\n        \u003Cbr \u002F>\n        \u003Csub>\u003Cb>TongjiCst\u003C\u002Fb>\u003C\u002Fsub>\n      \u003C\u002Fa>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n\u003C\u002Ftable>\nWe welcome contributions from the community! If you'd like to contribute, please fork the repository and submit a pull request. For major changes, please open an issue first to discuss what you would like to change.\n\n\u003Cspan id='related'\u002F>\n\n## 🔗 Related Projects\n\n\u003Cdiv align=\"center\">\n  \u003Ctable>\n    \u003Ctr>\n      \u003Ctd align=\"center\" width=\"150\">\n        \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fmem0ai\u002Fmem0\">\n          \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzjunlp_LightMem_readme_823c3ad3cce6.png\" width=\"80\" style=\"border-radius:50%\" alt=\"Mem0\"\u002F>\n          \u003Cbr \u002F>\n          \u003Csub>\u003Cb>Mem0\u003C\u002Fb>\u003C\u002Fsub>\n        \u003C\u002Fa>\n      \u003C\u002Ftd>\n      \u003Ctd align=\"center\" width=\"150\">\n        \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FMemTensor\u002FMemOS\">\n          \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzjunlp_LightMem_readme_94b8a217e0ab.png\" width=\"80\" style=\"border-radius:50%\" alt=\"MemOS\"\u002F>\n          \u003Cbr \u002F>\n          \u003Csub>\u003Cb>Memos\u003C\u002Fb>\u003C\u002Fsub>\n        \u003C\u002Fa>\n      \u003C\u002Ftd>\n      \u003Ctd align=\"center\" width=\"150\">\n        \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fgetzep\u002Fzep\">\n          \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzjunlp_LightMem_readme_500c9c14d7d0.png\" width=\"80\" style=\"border-radius:50%\" alt=\"Zep\"\u002F>\n          \u003Cbr \u002F>\n          \u003Csub>\u003Cb>Zep\u003C\u002Fb>\u003C\u002Fsub>\n        \u003C\u002Fa>\n      \u003C\u002Ftd>\n      \u003Ctd align=\"center\" width=\"150\">\n        \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FMirix-AI\u002FMIRIX\">\n          \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzjunlp_LightMem_readme_39fa8da20eee.png\" width=\"80\" style=\"border-radius:50%\" alt=\"MIRIX\"\u002F>\n          \u003Cbr \u002F>\n          \u003Csub>\u003Cb>MIRIX\u003C\u002Fb>\u003C\u002Fsub>\n        \u003C\u002Fa>\n      \u003C\u002Ftd>\n      \u003Ctd align=\"center\" width=\"150\">\n        \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FNevaMind-AI\u002FmemU\">\n          \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzjunlp_LightMem_readme_69a8dbb58c51.png\" width=\"80\" style=\"border-radius:50%\" alt=\"MemU\"\u002F>\n          \u003Cbr \u002F>\n          \u003Csub>\u003Cb>MemU\u003C\u002Fb>\u003C\u002Fsub>\n        \u003C\u002Fa>\n      \u003C\u002Ftd>\n      \u003Ctd align=\"center\" width=\"150\">\n        \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fmemodb-io\u002Fmemobase\">\n          \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzjunlp_LightMem_readme_8f5ab648a5c7.png\" width=\"80\" style=\"border-radius:50%\" alt=\"Memobase\"\u002F>\n          \u003Cbr \u002F>\n          \u003Csub>\u003Cb>Memobase\u003C\u002Fb>\u003C\u002Fsub>\n        \u003C\u002Fa>\n      \u003C\u002Ftd>\n    \u003C\u002Ftr>\n  \u003C\u002Ftable>\n\u003C\u002Fdiv>\n","\u003Cdiv align=\"center\">\n  \u003Cpicture>\n    \u003Csource srcset=\".\u002Ffigs\u002Flightmem_logo_dark.png\" media=\"(prefers-color-scheme: dark)\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzjunlp_LightMem_readme_2e1e292b4634.png\" width=\"60%\" height=\"40%\" \u002F>\n  \u003C\u002Fpicture>\n\u003C\u002Fdiv>\n\u003Ch1 align=\"center\"> LightMem: 轻量高效的内存增强生成 \u003C\u002Fh1>\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.18866\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FarXiv-Paper-red\" alt=\"arXiv\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLightMem\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fzjunlp\u002FLightMem?style=social\" alt=\"GitHub Stars\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLightMem\u002Fblob\u002Fmain\u002FLICENSE\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-MIT-green.svg\" alt=\"License: MIT\">\n  \u003C\u002Fa>\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flast-commit\u002Fzjunlp\u002FLightMem?color=blue\" alt=\"Last Commit\">\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPRs-Welcome-red\" alt=\"PRs Welcome\">\n\u003C\u002Fp>\n\n\u003Ch5 align=\"center\"> ⭐ 如果你喜欢我们的项目，请在 GitHub 上给我们点个星，以获取最新更新！\u003C\u002Fh5>\n\n---\n\n**LightMem** 是一个专为大型语言模型和 AI 代理设计的轻量高效内存管理框架。它提供简单而强大的内存存储、检索和更新机制，帮助你快速构建具备长期记忆能力的智能应用。\n\n* 🚀 **轻量高效**\n  \u003Cbr> 极简设计，资源消耗低，响应速度快\n\n* 🎯 **易于使用**\n  \u003Cbr> 简单的 API 设计——只需几行代码即可集成到你的应用中\n\n* 🔌 **灵活可扩展**\n  \u003Cbr> 模块化架构，支持自定义存储引擎和检索策略\n\n* 🌐 **广泛兼容**\n  \u003Cbr> 支持云端 API（OpenAI、DeepSeek）以及本地模型（Ollama、vLLM 等）\n\n  \u003Cdiv align=center>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzjunlp_LightMem_readme_e4e6cd6d5efd.png\" width=\"100%\" height=\"60%\" \u002F>\u003C\u002Fdiv>\n\n\u003Cspan id='news'\u002F>\n\n## 📢 最新消息\n- **[2026-03-21]**：🚀 我们提供了更全面的 [基准评估框架](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FMemBase)，支持在 LoCoMo 和 LongMemEval 等多个数据集上对 Mem0、A-MEM、EverMemOS、LangMem 等内存层进行基准测试。\n- **[2026-02-15]**：🚀 **[StructMem](.\u002FStructMem.md)** 发布：一种分层内存框架，能够保留事件级别的记忆绑定及跨事件的记忆连接。\n- **[2026-01-26]**：🎉🎉🎉 [**LightMem：轻量高效的内存增强生成**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.18866) 已被 **ICLR 2026** 接受！\n- **[2026-01-17]**：🚀 我们提供了一个全面的 [基准评估框架](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLightMem\u002Fblob\u002Fmain\u002Fsrc\u002Flightmem\u002Fmemory_toolkits\u002Freadme.md)，支持在 LoCoMo 和 LongMemEval 等多个数据集上对 Mem0、A-MEM、LangMem 等内存层进行基准测试。\n- **[2025-12-09]**：🎬 发布了 **[演示视频](#demo)**，展示了长上下文处理能力，并附带针对各种场景的完整 **[教程笔记本]**！\n- **[2025-11-30]**：🚌 LightMem 现在支持调用其 [**MCP 服务器**](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLightMem\u002Fblob\u002Fmain\u002Fmcp\u002Fserver.py) 提供的多种工具。\n- **[2025-11-26]**：🚀 增加了对 **LoCoMo** 数据集的完整支持，取得了领先的性能和效率，并提供了详细的 [结果](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLightMem?tab=readme-ov-file#locomo)！以下是 [**复现脚本**](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLightMem\u002Fblob\u002Fmain\u002Fexperiments\u002Flocomo\u002Freadme.md)！\n- **[2025-11-09]**：✨ LightMem 现在支持通过 [**Ollama**](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLightMem\u002Fblob\u002Fmain\u002Fsrc\u002Flightmem\u002Ffactory\u002Fmemory_manager\u002Follama.py)、[**vLLM**](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLightMem\u002Fblob\u002Fmain\u002Fsrc\u002Flightmem\u002Ffactory\u002Fmemory_manager\u002Fvllm_offline.py) 和 [**Transformers**](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLightMem\u002Fblob\u002Fmain\u002Fsrc\u002Flightmem\u002Ffactory\u002Fmemory_manager\u002Ftransformers.py) 自动加载的方式进行本地部署！\n- **[2025-10-12]**：🎉 LightMem 项目正式开源！\n\n\n\u003Cspan id='reproduction'\u002F>\n\n## 🧪 LoCoMo 和 LongMemEval 的复现脚本\n\n我们提供了轻量级、开箱即用的脚本，用于复现 **LoCoMo**、**LongMemEval** 及其联合基线的结果。\n\n| 数据集                  | 描述                                                                  | 脚本                                                                                                         | 结果 |\n| :----------------------- | :--------------------------------------------------------------------------- | :--------------------------------------------------------------------------------------------------------------| :---------------------------------------------|\n| **LongMemEval**          | 在 LongMemEval 上运行 LightMem，包括评估和离线内存更新。                 | [run_lightmem_longmemeval.md](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLightMem\u002Fblob\u002Fmain\u002Fexperiments\u002Flongmemeval\u002Freadme.md)  |  [LongMemEval 结果](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLightMem\u002Fblob\u002Fmain\u002Fexperiments\u002Flongmemeval\u002Freadme.md#results) |\n| **LoCoMo**               | 复现 LightMem 在 LoCoMo 上结果的脚本。                                  | [run_lightmem_locomo.md](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLightMem\u002Fblob\u002Fmain\u002Fexperiments\u002Flocomo\u002Freadme.md)            |  [LoCoMo 结果](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLightMem\u002Fblob\u002Fmain\u002Fexperiments\u002Flocomo\u002Freadme.md#results)      |\n| **LongMemEval & LoCoMo** | 统一的基线脚本，用于同时运行这两个数据集。                            | [run_baselines.md](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLightMem\u002Fblob\u002Fmain\u002Fsrc\u002Flightmem\u002Fmemory_toolkits\u002Freadme.md)        |  [基准结果](#experimental-results)    |\n\n\u003Cspan id='baseline-evaluation'\u002F>\n\n## 🧪 基准评估\n\n我们提供了一个全面的 [基准评估框架](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLightMem\u002Fblob\u002Fmain\u002Fsrc\u002Flightmem\u002Fmemory_toolkits\u002Freadme.md)，支持在 LoCoMo 和 LongMemEval 等多个数据集上对 Mem0、A-MEM、LangMem 等内存层进行基准测试。\n\n\u003Cspan id='demo'\u002F>\n\n## 🎥 演示与教程\n\n**观看演示：** [YouTube](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=r7sk_7Yv66I) | [Bilibili](https:\u002F\u002Fwww.bilibili.com\u002Fvideo\u002FBV1a7mJBbEVM\u002F)\n\n### 📚 实战教程\n我们提供了与演示及其他用例相对应的即用型 Jupyter 笔记本。你可以在 [`tutorial-notebooks`](.\u002Ftutorial-notebooks\u002F) 目录中找到它们。\n\n| 场景 | 描述 | 笔记本链接 |\n| :--- | :--- | :--- |\n| **旅行规划** | 使用记忆功能构建旅行代理的完整指南。 | [LightMem_Example_travel.ipynb](.\u002Ftutorial-notebooks\u002FLightMem_Example_travel.ipynb) |\n| **代码助手** | 使用记忆功能构建代码代理的完整指南。 | [LightMem_Example_code.ipynb](.\u002Ftutorial-notebooks\u002FLightMem_Example_code.ipynb) |\n| **LongMemEval** | 如何使用 LightMem 在 LongMemEval 基准测试上进行评估的教程。 | [LightMem_Example_longmemeval.ipynb](.\u002Ftutorial-notebooks\u002FLightMem_Example_longmemeval.ipynb) |\n\n\n\u003Cspan id='todo'\u002F>\n\n## ☑️ 待办事项清单\nLightMem 正在不断进化！以下是接下来的计划：\n    \n- 更新时的 KV 缓存离线预计算（无损）\n- 在问答前进行 KV 缓存的在线预计算（有损）\n- 集成更多模型并增强功能\n- 协调使用上下文和长期记忆存储\n- 多模态记忆 \n\n\n\u003Cspan id='contents'\u002F>\n\n## 📑 目录\n\n* \u003Ca href='#news'>📢 新闻\u003C\u002Fa>\n* \u003Ca href='#reproduction'>🧪 复现脚本\u003C\u002Fa>\n* \u003Ca href='#baseline-evaluation'>🧪 基线评估\u003C\u002Fa>\n* \u003Ca href='#demo'>🎥 演示与教程\u003C\u002Fa>\n* \u003Ca href='#todo'>☑️ 待办事项清单\u003C\u002Fa>\n* \u003Ca href='#installation'>🔧 安装\u003C\u002Fa>\n* \u003Ca href='#quickstart'>⚡ 快速入门\u003C\u002Fa>\n* \u003Ca href='#architecture'>🏗️ 架构\u003C\u002Fa>\n* \u003Ca href='#examples'>💡 示例\u003C\u002Fa>\n* \u003Ca href='#experimental-results'>📁 实验结果\u003C\u002Fa>\n* \u003Ca href='#configuration'>⚙️ 配置\u003C\u002Fa>\n* \u003Ca href='#contributors'>👥 贡献者\u003C\u002Fa>\n* \u003Ca href='#related'>🔗 相关项目\u003C\u002Fa>\n\n\u003Cspan id='installation'\u002F>\n\n## 🔧 安装\n\n### 安装步骤\n\n#### 选项 1：从源码安装 \n```bash\n# 克隆仓库\ngit clone https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLightMem.git\ncd LightMem\n\n# 创建虚拟环境\nconda create -n lightmem python=3.11 -y\nconda activate lightmem\n\n# 安装依赖\nunset ALL_PROXY\npip install -e .\n```\n\n#### 选项 2：通过 pip 安装\n```bash\npip install lightmem  # 即将推出\n```\n\n## ⚡ 快速入门\n\n1. 修改 `API Configuration` 中的 `JUDGE_MODEL`、`LLM_MODEL` 及其对应的 `API_KEY` 和 `BASE_URL`。\n\n2. 从 [microsoft\u002Fllmlingua-2-bert-base-multilingual-cased-meetingbank](https:\u002F\u002Fhuggingface.co\u002Fmicrosoft\u002Fllmlingua-2-bert-base-multilingual-cased-meetingbank) 下载 `LLMLINGUA_MODEL`，从 [sentence-transformers\u002Fall-MiniLM-L6-v2](https:\u002F\u002Fhuggingface.co\u002Fsentence-transformers\u002Fall-MiniLM-L6-v2) 下载 `EMBEDDING_MODEL`，并在 `Model Paths` 中修改它们的路径。\n\n3. 从 [longmemeval-cleaned](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Fxiaowu0162\u002Flongmemeval-cleaned) 下载数据集，并在 `Data Configuration` 中修改路径。\n\n```python\ncd experiments\npython run_lightmem_qwen.py\n```\n\n\u003Cspan id='architecture'\u002F>\n\n## 🏗️ 架构\n\n### 🗺️ 核心模块概览\nLightMem 采用模块化设计，将内存管理过程分解为多个可插拔组件。向用户公开的核心目录结构如下，便于自定义和扩展：\n\n```python\nLightMem\u002F\n├── src\u002Flightmem\u002F            # 主包\n│   ├── __init__.py          # 包初始化\n│   ├── configs\u002F             # 配置文件\n│   ├── factory\u002F             # 工厂方法\n│   ├── memory\u002F              # 核心内存管理\n│   └── memory_toolkits\u002F     # 内存工具包\n├── mcp\u002F                     # LightMem MCP 服务器\n├── experiments\u002F             # 实验脚本\n├── datasets\u002F                # 数据集文件\n└── examples\u002F                # 示例\n```\n\n### 🧩 各模块支持的后端\n\n下表列出了每个配置模块当前识别的后端值。请使用 `model_name` 字段（或相应的配置对象）来选择这些后端之一。\n\n| 模块 (config)                 | 支持的后端 |\n| :---                            | :--- |\n| `PreCompressorConfig`           | `llmlingua-2`, `entropy_compress` |\n| `TopicSegmenterConfig`          | `llmlingua-2` |\n| `MemoryManagerConfig`           | `openai`, `deepseek`, `ollama`, `vllm`, 等 |\n| `TextEmbedderConfig`            | `huggingface` |\n| `MMEmbedderConfig`              | `huggingface` |\n| `RetrieverConfig`      | `qdrant`, `FAISS`, `BM25` |\n\n\u003Cspan id='examples'\u002F>\n\n## 💡 示例\n\n### 初始化 LightMem\n```python\nimport os\nfrom datetime import datetime\nfrom lightmem.memory.lightmem import LightMemory\n\n\nLOGS_ROOT = \".\u002Flogs\"\nRUN_TIMESTAMP = datetime.now().strftime(\"%Y%m%d_%H%M%S\")\nRUN_LOG_DIR = os.path.join(LOGS_ROOT, RUN_TIMESTAMP)\nos.makedirs(RUN_LOG_DIR, exist_ok=True)\n\nAPI_KEY='your_api_key'\nAPI_BASE_URL='your_api_base_url'\nLLM_MODEL='your_model_name' # 如 'gpt-4o-mini' (API) 或 'gemma3:latest' (本地 Ollama) ...\nEMBEDDING_MODEL_PATH='\u002Fyour\u002Fpath\u002Fto\u002Fmodels\u002Fall-MiniLM-L6-v2'\nLLMLINGUA_MODEL_PATH='\u002Fyour\u002Fpath\u002Fto\u002Fmodels\u002Fllmlingua-2-bert-base-multilingual-cased-meetingbank'\n\nconfig_dict = {\n    \"pre_compress\": True,\n    \"pre_compressor\": {\n        \"model_name\": \"llmlingua-2\",\n        \"configs\": {\n            \"llmlingua_config\": {\n                \"model_name\": LLMLINGUA_MODEL_PATH,\n                \"device_map\": \"cuda\",\n                \"use_llmlingua2\": True,\n            },\n        }\n    },\n    \"topic_segment\": True,\n    \"precomp_topic_shared\": True,\n    \"topic_segmenter\": {\n        \"model_name\": \"llmlingua-2\",\n    },\n    \"messages_use\": \"user_only\",\n    \"metadata_generate\": True,\n    \"text_summary\": True,\n    \"memory_manager\": {\n        \"model_name\": 'xxx', # 如 'openai' 或 'ollama' ...\n        \"configs\": {\n            \"model\": LLM_MODEL,\n            \"api_key\": API_KEY,\n            \"max_tokens\": 16000,\n            \"xxx_base_url\": API_BASE_URL # API 模型特定，如 'openai_base_url' 或 'deepseek_base_url' ...\n        }\n    },\n    \"extract_threshold\": 0.1,\n    \"index_strategy\": \"embedding\",\n    \"text_embedder\": {\n        \"model_name\": \"huggingface\",\n        \"configs\": {\n            \"model\": EMBEDDING_MODEL_PATH,\n            \"embedding_dims\": 384,\n            \"model_kwargs\": {\"device\": \"cuda\"},\n        },\n    },\n    \"retrieve_strategy\": \"embedding\",\n    \"embedding_retriever\": {\n        \"model_name\": \"qdrant\",\n        \"configs\": {\n            \"collection_name\": \"my_long_term_chat\",\n            \"embedding_model_dims\": 384,\n            \"path\": \".\u002Fmy_long_term_chat\", \n        }\n    },\n    \"summary_retriever\": {\n        \"model_name\": \"qdrant\",\n        \"configs\": {\n            \"collection_name\": \"my_chat_summaries\",\n            \"embedding_model_dims\": 384,\n            \"path\": \".\u002Fmy_chat_summaries\",\n        }\n    },\n    \"update\": \"offline\",\n    \"logging\": {\n        \"level\": \"DEBUG\",\n        \"file_enabled\": True,\n        \"log_dir\": RUN_LOG_DIR,\n    }\n}\n\nlightmem = LightMemory.from_config(config_dict)\n```\n\n### 添加记忆\n```python\nsession = {\n\"timestamp\": \"2025-01-10\",\n\"turns\": [\n    [\n        {\"role\": \"user\", \"content\": \"我最喜欢的冰淇淋口味是开心果，我的狗叫雷克斯。\"}, \n        {\"role\": \"assistant\", \"content\": \"明白了。开心果确实是个不错的选择。\"}], \n    ]\n}\n\n\nfor turn_messages in session[\"turns\"]:\n    timestamp = session[\"timestamp\"]\n    for msg in turn_messages:\n        msg[\"time_stamp\"] = timestamp\n        \n    store_result = lightmem.add_memory(\n        messages=turn_messages,\n        force_segment=True,\n        force_extract=True\n    )\n```\n\n### 离线更新\n```python\nlightmem.construct_update_queue_all_entries()\nlightmem.offline_update_all_entries(score_threshold=0.8)\n``` \n\n### 生成摘要 \n```python\nsummary_result = lightmem.summarize()\n```\n\n### 检索记忆\n```python\nquestion = \"我的狗叫什么名字？\"\nrelated_memories = lightmem.retrieve(question, limit=5)\nprint(related_memories)\n``` \n\n### MCP 服务器\n\nLightMem 还支持模型上下文协议（[MCP](https:\u002F\u002Fmodelcontextprotocol.io\u002Fdocs\u002Fgetting-started\u002Fintro)）服务器：\n\n```bash\n# 在根目录运行\ncd LightMem\n\n# 环境\npip install '.[mcp]'\n\n# MCP 检查器 [可选]\nnpx @modelcontextprotocol\u002Finspector python mcp\u002Fserver.py\n\n# 通过 HTTP 启动 API (http:\u002F\u002F127.0.0.1:8000\u002Fmcp)\nfastmcp run mcp\u002Fserver.py:mcp --transport http --port 8000\n```\n\n您本地客户端的 MCP 配置 `json` 文件可能如下所示：\n\n```json\n{\n  \"yourMcpServers\": {\n    \"LightMem\": {\n      \"url\": \"http:\u002F\u002F127.0.0.1:8000\u002Fmcp\",\n      \"otherParameters\": \"...\"\n    }\n  }\n}\n```\n\n\u003Cspan id='experimental-results'\u002F>\n\n## 📁 实验结果\n\n为确保透明度和可重复性，我们已将实验结果分享至 Google Drive。其中包括模型输出、评估日志以及本研究中使用的预测数据。\n\n🔗 点击此处访问数据：[Google Drive - 实验结果](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1n1YCqq0aDeWiPILhkq-uS3sU3FDmslz9?usp=drive_link)\n\n欢迎下载、探索并使用这些资源进行研究或参考。\n\n\u003Cspan id='configuration'\u002F>\n\n### LOCOMO： \n\n#### 概述\n\n主干模型：`gpt-4o-mini`，评判模型：`gpt-4o-mini` 和 `qwen2.5-32b-instruct`\n\n| 方法             | ACC(%) gpt-4o-mini | ACC(%) qwen2.5-32b-instruct | 记忆相关 Tokens(k) 总计 | QA Tokens(k) 总计 | 总计(k)     | 调用次数  | 总运行时间(s) |\n|-------------------|--------------------|------------------------------|-----------------------------|---------------------|--------------|--------|------------------|\n| FullText          | 73.83              | 73.18                        | –                           | 54,884.479          | 54,884.479   | –      | 6,971           |\n| NaiveRAG          | 63.64              | 63.12                        | –                           | 3,870.187           | 3,870.187    | –      | 1,884           |\n| A-MEM             | 64.16              | 60.71                        | 11,494.344                  | 10,170.567          | 21,664.907   | 11,754 | 67,084          |\n| MemoryOS(eval)    | 58.25              | 61.04                        | 2,870.036                   | 7,649.343           | 10,519.379   | 5,534  | 26,129          |\n| MemoryOS(pypi)    | 54.87              | 55.91                        | 5,264.801                   | 6,126.111           | 11,390.004   | 10,160 | 37,912          |\n| Mem0              | 36.49              | 37.01                        | 24,304.872                  | 1,488.618           | 25,793.490   | 19,070 | 120,175         |\n| Mem0(api)         | 61.69              | 61.69                        | 68,347.720                  | 4,169.909           | 72,517.629   | 6,022  | 10,445          |\n| Mem0-g(api)       | 60.32              | 59.48                        | 69,684.818                  | 4,389.147           | 74,073.965   | 6,022  | 10,926          |\n\n主干模型：`qwen3-30b-a3b-instruct-2507`，评判模型：`gpt-4o-mini` 和 `qwen2.5-32b-instruct`\n\n| 方法             | ACC(%) gpt-4o-mini | ACC(%) qwen2.5-32b-instruct | 记忆相关 Tokens(k) 总计 | QA Tokens(k) 总计 | 总计(k)     | 调用次数  | 总运行时间(s) |\n|-------------------|--------------------|------------------------------|-----------------------------|---------------------|--------------|--------|------------------|\n| FullText          | 74.87              | 74.35                        | –                           | 60,873.076          | 60,873.076   | –      | 10,555           |\n| NaiveRAG          | 66.95              | 64.68                        | –                           | 4,271.052           | 4,271.052    | –      | 1,252            |\n| A-MEM             | 56.10              | 54.81                        | 16,267.997                  | 17,340.881          | 33,608.878   | 11,754 | 69,339           |\n| MemoryOS(eval)    | 61.04              | 59.81                        | 3,615.087                   | 9,703.169           | 11,946.442   | 4,147  | 13,710           |\n| MemoryOS(pypi)    | 51.30              | 51.95                        | 6,663.527                   | 7,764.991           | 14,428.518   | 10,046 | 20,830           |\n| Mem0              | 43.31              | 43.25                        | 17,994.035                  | 1,765.570           | 19,759.605   | 16,145 | 46,500           |\n\n\n#### 详情\n\n主干模型：`gpt-4o-mini`，评判模型：`gpt-4o-mini` 和 `qwen2.5-32b-instruct`\n\n| 方法             | 摘要 Tokens(k) 输入 | 摘要 Tokens(k) 输出 | 更新 Tokens(k) 输入 | 更新 Tokens(k) 输出 | QA Tokens(k) 输入 | QA Tokens(k) 输出 | 记忆相关运行时间(s) | QA 运行时间(s) |\n|-------------------|-----------------------|------------------------|----------------------|-----------------------|------------------|-------------------|----------------------|----------------|\n| FullText          | –                     | –                      | –                    | –                     | 54,858.770       | 25.709            | –                    | 6,971          |\n| NaiveRAG          | –                     | –                      | –                    | –                     | 3,851.029        | 19.158            | –                    | 1,884          |\n| A-MEM             | 1,827.373             | 492.883                | 7,298.878            | 1,875.210             | 10,113.252       | 57.315            | 60,607               | 6,477          |\n| MemoryOS(eval)    | 1,109.849             | 333.970                | 780.807              | 645.410               | 7,638.539        | 10.804            | 24,220               | 1,909          |\n| MemoryOS(pypi)    | 1,007.729             | 294.601                | 3,037.509            | 924.962               | 6,116.239        | 9.872             | 33,325               | 4,587          |\n| Mem0              | 8,127.398             | 253.187                | 12,722.011           | 3,202.276             | 1,478.830        | 9.788             | 118,268              | 1,907          |\n| Mem0(api)         | \\                     | \\                      | \\                    | \\                     | 4,156.850        | 13.059            | 4,328                | 6,117          |\n| Mem0-g(api)       | \\                     | \\                      | \\                    | \\                     | 4,375.900        | 13.247            | 5,381                | 5,545          |\n\n主干模型：`qwen3-30b-a3b-instruct-2507`，评判模型：`gpt-4o-mini` 和 `qwen2.5-32b-instruct`\n\n| 方法             | 摘要令牌数（千）输入 | 摘要令牌数（千）输出 | 更新令牌数（千）输入 | 更新令牌数（千）输出 | QA 令牌数（千）输入 | QA 令牌数（千）输出 | 运行时（秒）内存消耗 | 运行时（秒）QA |\n|-------------------|-----------------------|------------------------|----------------------|-----------------------|------------------|-------------------|----------------------|----------------|\n| FullText          | –                     | –                      | –                    | –                     | 60,838.694       | 34.382            | –                    | 10,555         |\n| NaiveRAG          | –                     | –                      | –                    | –                     | 4,239.030        | 32.022            | –                    | 1,252          |\n| A-MEM             | 1,582.942             | 608.507                | 9,241.928            | 4,835.070             | 17,528.876       | 82.005            | 55,439               | 13,900         |\n| MemoryOS(eval)    | 1,222.139             | 531.157                | 1,044.307            | 817.484               | 9,679.996        | 23.173            | 12,697               | 1,012          |\n| MemoryOS(pypi)    | 2,288.533             | 516.024                | 2,422.693            | 1,436.277             | 7,743.391        | 21.600            | 19,822               | 1,007          |\n| Mem0              | 8,270.874             | 186.354                | 7,638.827            | 1,897.980             | 1,739.246        | 26.324            | 45,407               | 1,093          |\n\n#### 性能指标\n主干模型：`gpt-4o-mini`，评判模型：`gpt-4o-mini`\n\n| 方法 | 整体 ↑ | 多轮 | 开放 | 单轮 | 温度 |\n| :--- | :---: | :---: | :---: | :---: | :---: |\n| FullText         | 73.83 | 68.79 | 56.25 | 86.56 | 50.16 |\n| NaiveRAG         | 63.64 | 55.32 | 47.92 | 70.99 | 56.39 |\n| A-MEM            | 64.16 | 56.03 | 31.25 | 72.06 | 60.44 |\n| MemoryOS(eval)   | 58.25 | 56.74 | 45.83 | 67.06 | 40.19 |\n| MemoryOS(pypi)   | 54.87 | 52.13 | 43.75 | 63.97 | 36.76 |\n| Mem0             | 36.49 | 30.85 | 34.38 | 38.41 | 37.07 |\n| Mem0(api)        | 61.69 | 56.38 | 43.75 | 66.47 | 59.19 |\n| Mem0-g(api)      | 60.32 | 54.26 | 39.58 | 65.99 | 57.01 |\n\n主干模型：`gpt-4o-mini`，评判模型：`qwen2.5-32b-instruct`\n\n| 方法 | 整体 ↑ | 多轮 | 开放 | 单轮 | 温度 |\n| :--- | :---: | :---: | :---: | :---: | :---: |\n| FullText         | 73.18 | 68.09 | 54.17 | 86.21 | 49.22 |\n| NaiveRAG         | 63.12 | 53.55 | 50.00 | 71.34 | 53.89 |\n| A-MEM            | 60.71 | 53.55 | 32.29 | 69.08 | 53.58 |\n| MemoryOS(eval)   | 61.04 | 64.18 | 40.62 | 70.15 | 40.50 |\n| MemoryOS(pypi)   | 55.91 | 52.48 | 41.67 | 66.35 | 35.83 |\n| Mem0             | 37.01 | 31.91 | 37.50 | 38.53 | 37.38 |\n| Mem0(api)        | 61.69 | 54.26 | 46.88 | 67.66 | 57.01 |\n| Mem0-g(api)      | 59.48 | 55.32 | 42.71 | 65.04 | 53.58 |\n\n主干模型：`qwen3-30b-a3b-instruct-2507`，评判模型：`gpt-4o-mini`\n\n| 方法 | 整体 ↑ | 多轮 | 开放 | 单轮 | 温度 |\n| :--- | :---: | :---: | :---: | :---: | :---: |\n| FullText         | 74.87 | 69.86 | 57.29 | 87.40 | 51.71 |\n| NaiveRAG         | 66.95 | 62.41 | 57.29 | 76.81 | 47.98 |\n| A-MEM            | 56.10 | 57.45 | 43.75 | 67.90 | 27.73 |\n| MemoryOS(eval)   | 61.04 | 62.77 | 51.04 | 72.29 | 33.02 |\n| MemoryOS(pypi)   | 51.30 | 52.48 | 40.62 | 61.59 | 26.48 |\n| Mem0             | 43.31 | 42.91 | 46.88 | 46.37 | 34.58 |\n| Mem0(api)        | 61.69 | 54.26 | 46.88 | 67.66 | 57.01 |\n| Mem0-g(api)      | 59.48 | 55.32 | 42.71 | 65.04 | 53.58 |\n\n主干模型：`qwen3-30b-a3b-instruct-2507`，评判模型：`qwen2.5-32b-instruct`\n\n| 方法 | 整体 ↑ | 多轮 | 开放 | 单轮 | 温度 |\n| :--- | :---: | :---: | :---: | :---: | :---: |\n| FullText         | 74.35 | 68.09 | 63.54 | 86.33 | 51.71 |\n| NaiveRAG         | 64.68 | 60.28 | 52.08 | 75.62 | 43.61 |\n| A-MEM            | 54.81 | 56.74 | 39.58 | 67.42 | 24.61 |\n| MemoryOS(eval)   | 59.81 | 63.12 | 48.96 | 70.51 | 32.09 |\n| MemoryOS(pypi)   | 51.95 | 55.67 | 39.58 | 61.47 | 27.41 |\n| Mem0             | 43.25 | 45.04 | 46.88 | 45.78 | 33.96 |\n| Mem0(api)        | 61.69 | 54.26 | 46.88 | 67.66 | 57.01 |\n| Mem0-g(api)      | 59.48 | 55.32 | 42.71 | 65.04 | 53.58 |\n\n\n\n\n## ⚙️ 配置\n\nLightMem 的所有行为都通过 BaseMemoryConfigs 配置类进行控制。用户可以通过提供自定义配置来定制预处理、记忆提取、检索策略和更新机制等方面。\n\n#### 关键配置选项（用法）\n\n| 选项                    | 默认值                                     | 使用说明（允许值及行为） |\n| :---                      | :---                                        | :--- |\n| `pre_compress`        | `False`                                     | True \u002F False。若为 True，输入的消息会在存储前使用 `pre_compressor` 配置进行预压缩。这可以降低存储和索引成本，但可能会丢失细粒度细节。若为 False，则消息会未经预压缩直接存储。 |\n| `pre_compressor`      | `None`                                      | dict \u002F object。预压缩组件的配置（`PreCompressorConfig`），包含字段如 `model_name`（例如 `llmlingua-2`、`entropy_compress`）和 `configs`（模型特定参数）。仅在 `pre_compress=True` 时生效。 |\n| `topic_segment`       | `False`                                     | True \u002F False。启用基于主题的长对话分割功能。当为 True时，长对话会被拆分为多个主题片段，每个片段可独立索引\u002F存储（需配合 `topic_segmenter` 使用）。当为 False时，消息按顺序存储。 |\n| `precomp_topic_shared`| `False`                                     | True \u002F False。若为 True，预压缩和主题分割可以共享中间结果，避免重复计算。这可能提升性能，但需要仔细配置以防止跨主题信息泄露。 |\n| `topic_segmenter`     | `None`                                      | dict \u002F object。主题分割的配置（`TopicSegmenterConfig`），包括 `model_name` 和 `configs`（片段长度、重叠等）。仅在 `topic_segment=True` 时使用。 |\n| `messages_use`        | `'user_only'`                               | `'user_only'` \u002F `'assistant_only'` \u002F `'hybrid'`。控制用于生成元数据和摘要的消息类型：`user_only` 使用用户输入，`assistant_only` 使用助手回复，`hybrid` 则同时使用两者。选择 `hybrid` 会增加处理量，但能提供更丰富的上下文信息。 |\n| `metadata_generate`   | `True`                                      | True \u002F False。若为 True，将提取并存储关键词、实体等元数据，以支持基于属性和过滤条件的检索。若为 False，则不进行元数据提取。 |\n| `text_summary`        | `True`                                      | True \u002F False。若为 True，会生成文本摘要并与原文一同存储（可降低检索成本并加快查阅速度）。若为 False，则仅存储原文。摘要质量取决于 `memory_manager` 的配置。 |\n| `memory_manager`      | `MemoryManagerConfig()`                     | dict \u002F object。控制用于生成摘要和元数据的模型（`MemoryManagerConfig`），例如 `model_name`（`openai`、`ollama` 等）和 `configs`。更改此配置会影响摘要的风格、长度和成本。 |\n| `extract_threshold`   | `0.5`                                       | 浮点数（0.0 - 1.0）。用于决定内容是否足够重要，从而被提取为元数据或高亮显示。较高值（如 0.8）表示更保守的提取策略；较低值（如 0.2）则会提取更多内容（可能导致噪声增加）。 |\n| `index_strategy`      | `None`                                      | `'embedding'` \u002F `'context'` \u002F `'hybrid'` \u002F `None`。决定记忆如何被索引：`embedding` 使用向量索引（需配备嵌入器和检索器）进行语义搜索；`context` 使用基于文本\u002F上下文的检索方式（需配备上下文检索器）进行关键词或文档相似度匹配；`hybrid` 则结合上下文过滤与向量重排序，以提高鲁棒性和准确性。 |\n| `text_embedder`       | `None`                                      | dict \u002F object。文本嵌入模型的配置（`TextEmbedderConfig`），包含 `model_name`（如 `huggingface`）和 `configs`（批量大小、设备、嵌入维度等）。当 `index_strategy` 或 `retrieve_strategy` 包含 `'embedding'` 时必需。 |\n| `multimodal_embedder` | `None`                                      | dict \u002F object。多模态\u002F图像嵌入器的配置（`MMEmbedderConfig`）。用于非文本模态的数据。 |\n| `history_db_path`     | `os.path.join(lightmem_dir, \"history.db\")`  | str。用于持久化对话历史和轻量级状态的路径。有助于在重启后恢复状态。 |\n| `retrieve_strategy`   | `'embedding'`                               | `'embedding'` \u002F `'context'` \u002F `'hybrid'`。查询时用于获取相关记忆的策略。应根据数据和查询类型选择：语义查询 -> `'embedding'`；关键词\u002F结构化查询 -> `'context'`；混合查询 -> `'hybrid'`。 |\n| `context_retriever`   | `None`                                      | dict \u002F object。基于上下文的检索器配置（`ContextRetrieverConfig`），例如 `model_name='BM25'` 和 `configs` 如 `top_k`。仅在 `retrieve_strategy` 包含 `'context'` 时使用。 |\n| `embedding_retriever` | `None`                                      | dict \u002F object。向量存储的配置（`EmbeddingRetrieverConfig`），例如 `model_name='qdrant'` 和连接\u002F索引参数。仅在 `retrieve_strategy` 包含 `'embedding'` 时使用。 |\n| `summary_retriever`   | `None`                                      | dict \u002F object。针对摘要的专用向量存储配置（`EmbeddingRetrieverConfig`）。配置后，摘要将被存储在单独的集合中，实现分层检索。在 StructMem 模式下，可用于独立存储和检索会话\u002F主题摘要，而不受详细记忆的影响。 |\n| `update`              | `'offline'`                                 | `'online'` \u002F `'offline'`。`'offline'`：批量或定时更新，以节省成本并聚合变更——这是完全支持且功能完整的模式。`'online'`：保留用于未来开发（目前仅为占位符，设置该模式时不会持久化内存）。 |\n| `kv_cache`            | `False`                                     | True \u002F False。若为 True，尝试预先计算并持久化模型的 KV 缓存，以加速重复的 LLM 调用（需 LLM 运行时支持，并可能增加存储需求）。缓存将存储在 `kv_cache_path` 中。 |\n| `kv_cache_path`       | `os.path.join(lightmem_dir, \"kv_cache.db\")` | str。当 `kv_cache=True` 时，KV 缓存的存储文件路径。 |\n| `graph_mem`           | `False`                                     | True \u002F False。若为 True，部分记忆将以图结构（节点和关系）组织，以支持复杂的关系查询和推理。这需要额外的图处理和存储资源。 |\n| `extraction_mode`     | `'flat'`                                    | `'flat'` \u002F `'event'`。记忆提取模式：`'flat'` 将事实性条目提取为独立单元，适合一般知识的保存；`'event'` 则提取事件级别的结构，包含事实性和关系性成分，保留时间关联和因果关系。建议在叙事性强或对时间敏感的场景中使用 `'event'`。 |\n| `version`             | `'v1.1'`                                    | str。配置\u002FAPI 版本。仅在了解兼容性影响时才更改。 |\n| `logging`             | `'None'`                                    | dict \u002F object。日志记录的配置，用于启用日志功能。\n\n## 🏆 贡献者\n\n\u003Ctable>\n  \u003Ctr>\n    \u003Ctd align=\"center\" width=\"120\">\n      \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FJizhanFang\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzjunlp_LightMem_readme_20ccf2936b9a.png\" width=\"80\" style=\"border-radius:50%\" alt=\"JizhanFang\"\u002F>\n        \u003Cbr \u002F>\n        \u003Csub>\u003Cb>JizhanFang\u003C\u002Fb>\u003C\u002Fsub>\n      \u003C\u002Fa>\n    \u003C\u002Ftd>\n    \u003Ctd align=\"center\" width=\"120\">\n      \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FXinle-Deng\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzjunlp_LightMem_readme_f41d001a9d10.png\" width=\"80\" style=\"border-radius:50%\" alt=\"Xinle-Deng\"\u002F>\n        \u003Cbr \u002F>\n        \u003Csub>\u003Cb>Xinle-Deng\u003C\u002Fb>\u003C\u002Fsub>\n      \u003C\u002Fa>\n    \u003C\u002Ftd>\n    \u003Ctd align=\"center\" width=\"120\">\n      \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FXubqpanda\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzjunlp_LightMem_readme_d97c8fdadfd4.png\" width=\"80\" style=\"border-radius:50%\" alt=\"Xubqpanda\"\u002F>\n        \u003Cbr \u002F>\n        \u003Csub>\u003Cb>Xubqpanda\u003C\u002Fb>\u003C\u002Fsub>\n      \u003C\u002Fa>\n    \u003C\u002Ftd>\n    \u003Ctd align=\"center\" width=\"120\">\n      \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FHaomingX\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzjunlp_LightMem_readme_ed6cec0c1348.png\" width=\"80\" style=\"border-radius:50%\" alt=\"HaomingX\"\u002F>\n        \u003Cbr \u002F>\n        \u003Csub>\u003Cb>HaomingX\u003C\u002Fb>\u003C\u002Fsub>\n      \u003C\u002Fa>\n    \u003C\u002Ftd>\n    \u003Ctd align=\"center\" width=\"120\">\n      \u003Ca href=\"https:\u002F\u002Fgithub.com\u002F453251\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzjunlp_LightMem_readme_4b3dbe893af4.png\" width=\"80\" style=\"border-radius:50%\" alt=\"453251\"\u002F>\n        \u003Cbr \u002F>\n        \u003Csub>\u003Cb>453251\u003C\u002Fb>\u003C\u002Fsub>\n      \u003C\u002Fa>\n    \u003C\u002Ftd>\n    \u003Ctd align=\"center\" width=\"120\">\n      \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FJames-TYQ\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzjunlp_LightMem_readme_5d5aae6a1ecc.png\" width=\"80\" style=\"border-radius:50%\" alt=\"James-TYQ\"\u002F>\n        \u003Cbr \u002F>\n        \u003Csub>\u003Cb>James-TYQ\u003C\u002Fb>\u003C\u002Fsub>\n      \u003C\u002Fa>\n    \u003C\u002Ftd>\n    \u003Ctd align=\"center\" width=\"120\">\n      \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fevy568\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzjunlp_LightMem_readme_e85e4ac234ba.png\" width=\"80\" style=\"border-radius:50%\" alt=\"evy568\"\u002F>\n        \u003Cbr \u002F>\n        \u003Csub>\u003Cb>evy568\u003C\u002Fb>\u003C\u002Fsub>\n      \u003C\u002Fa>\n    \u003C\u002Ftd>\n    \u003Ctd align=\"center\" width=\"120\">\n      \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FNorah-Feathertail\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzjunlp_LightMem_readme_3a4538a48afc.png\" width=\"80\" style=\"border-radius:50%\" alt=\"Norah-Feathertail\"\u002F>\n        \u003Cbr \u002F>\n        \u003Csub>\u003Cb>Norah-Feathertail\u003C\u002Fb>\u003C\u002Fsub>\n      \u003C\u002Fa>\n    \u003C\u002Ftd>\n    \u003Ctd align=\"center\" width=\"120\">\n      \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FTongjiCst\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzjunlp_LightMem_readme_1aff83dfd64b.png\" width=\"80\" style=\"border-radius:50%\" alt=\"TongjiCst\"\u002F>\n        \u003Cbr \u002F>\n        \u003Csub>\u003Cb>TongjiCst\u003C\u002Fb>\u003C\u002Fsub>\n      \u003C\u002Fa>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n\u003C\u002Ftable>\n我们欢迎社区的贡献！如果您想参与贡献，请先 fork 该仓库，然后提交 pull request。对于重大更改，请先打开 issue 以讨论您希望进行的修改。\n\n\u003Cspan id='related'\u002F>\n\n## 🔗 相关项目\n\n\u003Cdiv align=\"center\">\n  \u003Ctable>\n    \u003Ctr>\n      \u003Ctd align=\"center\" width=\"150\">\n        \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fmem0ai\u002Fmem0\">\n          \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzjunlp_LightMem_readme_823c3ad3cce6.png\" width=\"80\" style=\"border-radius:50%\" alt=\"Mem0\"\u002F>\n          \u003Cbr \u002F>\n          \u003Csub>\u003Cb>Mem0\u003C\u002Fb>\u003C\u002Fsub>\n        \u003C\u002Fa>\n      \u003C\u002Ftd>\n      \u003Ctd align=\"center\" width=\"150\">\n        \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FMemTensor\u002FMemOS\">\n          \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzjunlp_LightMem_readme_94b8a217e0ab.png\" width=\"80\" style=\"border-radius:50%\" alt=\"MemOS\"\u002F>\n          \u003Cbr \u002F>\n          \u003Csub>\u003Cb>Memos\u003C\u002Fb>\u003C\u002Fsub>\n        \u003C\u002Fa>\n      \u003C\u002Ftd>\n      \u003Ctd align=\"center\" width=\"150\">\n        \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fgetzep\u002Fzep\">\n          \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzjunlp_LightMem_readme_500c9c14d7d0.png\" width=\"80\" style=\"border-radius:50%\" alt=\"Zep\"\u002F>\n          \u003Cbr \u002F>\n          \u003Csub>\u003Cb>Zep\u003C\u002Fb>\u003C\u002Fsub>\n        \u003C\u002Fa>\n      \u003C\u002Ftd>\n      \u003Ctd align=\"center\" width=\"150\">\n        \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FMirix-AI\u002FMIRIX\">\n          \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzjunlp_LightMem_readme_39fa8da20eee.png\" width=\"80\" style=\"border-radius:50%\" alt=\"MIRIX\"\u002F>\n          \u003Cbr \u002F>\n          \u003Csub>\u003Cb>MIRIX\u003C\u002Fb>\u003C\u002Fsub>\n        \u003C\u002Fa>\n      \u003C\u002Ftd>\n      \u003Ctd align=\"center\" width=\"150\">\n        \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FNevaMind-AI\u002FmemU\">\n          \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzjunlp_LightMem_readme_69a8dbb58c51.png\" width=\"80\" style=\"border-radius:50%\" alt=\"MemU\"\u002F>\n          \u003Cbr \u002F>\n          \u003Csub>\u003Cb>MemU\u003C\u002Fb>\u003C\u002Fsub>\n        \u003C\u002Fa>\n      \u003C\u002Ftd>\n      \u003Ctd align=\"center\" width=\"150\">\n        \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fmemodb-io\u002Fmemobase\">\n          \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzjunlp_LightMem_readme_8f5ab648a5c7.png\" width=\"80\" style=\"border-radius:50%\" alt=\"Memobase\"\u002F>\n          \u003Cbr \u002F>\n          \u003Csub>\u003Cb>Memobase\u003C\u002Fb>\u003C\u002Fsub>\n        \u003C\u002Fa>\n      \u003C\u002Ftd>\n    \u003C\u002Ftr>\n  \u003C\u002Ftable>\n\u003C\u002Fdiv>","# LightMem 快速上手指南\n\nLightMem 是一个轻量级且高效的记忆增强生成框架，专为大语言模型（LLM）和 AI Agent 设计。它提供了简洁的记忆存储、检索和更新机制，帮助开发者快速构建具备长期记忆能力的智能应用。\n\n## 1. 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**: Linux \u002F macOS \u002F Windows (推荐 Linux)\n*   **Python 版本**: Python 3.11 (官方推荐)\n*   **硬件要求**: \n    *   若使用本地模型（如 Ollama, vLLM），建议配备 NVIDIA GPU 及 CUDA 环境。\n    *   若仅使用云端 API（如 OpenAI, DeepSeek），对本地硬件无特殊要求。\n*   **前置依赖**:\n    *   `conda` (推荐用于环境管理)\n    *   `git`\n    *   Hugging Face 账号（如需下载预训练模型，国内用户建议配置镜像或使用代理）\n\n## 2. 安装步骤\n\n推荐使用源码安装方式，以获取最新功能支持。\n\n### 第一步：克隆项目\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLightMem.git\ncd LightMem\n```\n\n### 第二步：创建并激活虚拟环境\n```bash\nconda create -n lightmem python=3.11 -y\nconda activate lightmem\n```\n\n### 第三步：安装依赖\n```bash\nunset ALL_PROXY\npip install -e .\n```\n> **提示**：国内用户若下载依赖较慢，可临时指定清华或阿里镜像源：\n> `pip install -e . -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple`\n\n### 第四步：准备模型资源\nLightMem 需要本地部署压缩模型和嵌入模型。请从 Hugging Face 下载以下模型并记录本地路径：\n\n1.  **压缩模型 (LLMLINGUA)**: [microsoft\u002Fllmlingua-2-bert-base-multilingual-cased-meetingbank](https:\u002F\u002Fhuggingface.co\u002Fmicrosoft\u002Fllmlingua-2-bert-base-multilingual-cased-meetingbank)\n2.  **嵌入模型 (Embedding)**: [sentence-transformers\u002Fall-MiniLM-L6-v2](https:\u002F\u002Fhuggingface.co\u002Fsentence-transformers\u002Fall-MiniLM-L6-v2)\n\n*(国内用户可通过 ModelScope 魔搭社区搜索对应模型进行下载)*\n\n## 3. 基本使用\n\n以下是最小化的快速启动示例，展示如何初始化并配置 LightMem。\n\n### 配置文件与参数设置\n\n在使用前，您需要准备 API Key 和本地模型路径。创建一个 Python 脚本（例如 `quick_start.py`）：\n\n```python\nimport os\nfrom datetime import datetime\nfrom lightmem.memory.lightmem import LightMemory\n\n# 1. 设置日志目录\nLOGS_ROOT = \".\u002Flogs\"\nRUN_TIMESTAMP = datetime.now().strftime(\"%Y%m%d_%H%M%S\")\nRUN_LOG_DIR = os.path.join(LOGS_ROOT, RUN_TIMESTAMP)\nos.makedirs(RUN_LOG_DIR, exist_ok=True)\n\n# 2. 配置 API 信息 (以 OpenAI 兼容接口为例，也可替换为 DeepSeek 或本地 Ollama)\nAPI_KEY = 'your_api_key'           # 替换为您的 API Key\nAPI_BASE_URL = 'your_api_base_url' # 替换为您的 Base URL (如 https:\u002F\u002Fapi.openai.com\u002Fv1)\nLLM_MODEL = 'gpt-4o-mini'          # 模型名称，本地模型可填 'gemma3:latest'\n\n# 3. 配置本地模型路径 (替换为您实际下载的路径)\nEMBEDDING_MODEL_PATH = '\u002Fpath\u002Fto\u002Fmodels\u002Fall-MiniLM-L6-v2'\nLLMLINGUA_MODEL_PATH = '\u002Fpath\u002Fto\u002Fmodels\u002Fllmlingua-2-bert-base-multilingual-cased-meetingbank'\n\n# 4. 构建配置字典\nconfig_dict = {\n    \"pre_compress\": True,\n    \"pre_compressor\": {\n        \"model_name\": \"llmlingua-2\",\n        \"configs\": {\n            \"llmlingua_config\": {\n                \"model_name\": LLMLINGUA_MODEL_PATH,\n                \"device_map\": \"cuda\",       # 若无 GPU 改为 \"cpu\"\n                \"use_llmlingua2\": True,\n            },\n        }\n    },\n    \"topic_segment\": True,\n    \"precomp_topic_shared\": True,\n    \"topic_segmenter\": {\n        \"model_name\": \"llmlingua-2\",\n    },\n    \"messages_use\": \"user_only\",\n    \"metadata_generate\": True,\n    \"text_summary\": True,\n    \"memory_manager\": {\n        \"model_name\": 'openai', # 可选：openai, deepseek, ollama, vllm\n        \"configs\": {\n            \"model\": LLM_MODEL,\n            \"api_key\": API_KEY,\n            \"max_tokens\": 16000,\n            \"base_url\": API_BASE_URL \n        }\n    },\n    \"text_embedder\": {\n        \"model_name\": \"huggingface\",\n        \"configs\": {\n            \"model_name\": EMBEDDING_MODEL_PATH,\n            \"device\": \"cuda\"\n        }\n    },\n    \"retriever\": {\n        \"model_name\": \"FAISS\", # 可选：FAISS, qdrant, BM25\n        \"configs\": {}\n    },\n    \"log_dir\": RUN_LOG_DIR\n}\n\n# 5. 初始化 LightMem\nlight_mem = LightMemory(config=config_dict)\n\n# 6. 简单测试：添加记忆并查询\n# 模拟用户输入\nuser_message = \"我计划下个月去北京旅行，主要想看故宫和长城。\"\nlight_mem.add_memory(user_message)\n\n# 模拟后续对话，触发记忆检索\nquery = \"我之前提到的旅行计划里包含哪些景点？\"\nresponse = light_mem.generate(query)\n\nprint(f\"AI 回答：{response}\")\n```\n\n### 运行脚本\n\n确保所有路径和 Key 已正确填写后，在终端运行：\n\n```bash\npython quick_start.py\n```\n\n### 进阶场景\nLightMem 还提供了针对特定场景的 Jupyter Notebook 教程，位于项目根目录的 `tutorial-notebooks\u002F` 文件夹中：\n*   **旅行规划助手**: `LightMem_Example_travel.ipynb`\n*   **代码辅助助手**: `LightMem_Example_code.ipynb`\n*   **长文本评估**: `LightMem_Example_longmemeval.ipynb`\n\n您可以直接在这些 Notebook 中交互式地体验完整功能。","某初创团队正在开发一款面向资深用户的“个人法律案件追踪助手”，需要 AI 长期记住用户数月内上传的复杂案情细节、证据链变化及律师沟通记录。\n\n### 没有 LightMem 时\n- **上下文窗口爆炸**：随着案件时间线拉长，历史对话远超模型上下文限制，导致早期关键证据被截断遗忘。\n- **检索效率低下**：每次回答需重新扫描数万字的完整历史记录，响应延迟高达数秒，用户体验极差。\n- **记忆更新困难**：当案件状态变更（如“已开庭”）时，难以精准定位并更新旧信息，常出现新旧事实矛盾的幻觉。\n- **资源成本高昂**：为维持长上下文不得不调用超大参数模型或支付昂贵的云 API 费用，初创团队难以负担。\n\n### 使用 LightMem 后\n- **长效记忆存储**：LightMem 将案情结构化存入轻量级外部记忆库，自动管理时间跨度数月的案件细节，不再受上下文窗口限制。\n- **毫秒级精准召回**：针对用户提问，LightMem 仅检索相关片段注入上下文，响应速度提升 10 倍以上，实现实时交互。\n- **动态记忆维护**：利用其更新机制，当用户补充新证据时，LightMem 自动关联并修正旧有记忆节点，确保逻辑一致性。\n- **低成本灵活部署**：团队可利用 LightMem 的模块化架构，在本地 Ollama 小模型上运行，大幅降低算力成本且保护数据隐私。\n\nLightMem 通过轻量高效的记忆增强机制，让资源有限的应用也能拥有像人类专家一样“过目不忘”且逻辑严密的长期案件追踪能力。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzjunlp_LightMem_2e1e292b.png","zjunlp","ZJUNLP","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fzjunlp_4dd6d5d4.jpg","Knowledge Engine Lab: A NLP & KG Group of  Zhejiang  University",null,"huajunsir@zju.edu.cn","ChenHuajun","http:\u002F\u002Fzjunlp.org","https:\u002F\u002Fgithub.com\u002Fzjunlp",[82,86,90],{"name":83,"color":84,"percentage":85},"Python","#3572A5",64.7,{"name":87,"color":88,"percentage":89},"Jupyter Notebook","#DA5B0B",35.2,{"name":91,"color":92,"percentage":93},"Shell","#89e051",0.1,737,63,"2026-04-06T12:41:31","MIT","Linux, macOS, Windows","可选但推荐（用于加速 llmlingua-2 和嵌入模型）。配置中指定 device_map='cuda'，暗示需要 NVIDIA GPU。具体显存和 CUDA 版本未说明，取决于所选本地模型（如 Ollama\u002FvLLM 运行的模型大小）。","未说明（建议 16GB+ 以运行本地大模型和压缩模型）",{"notes":102,"python":103,"dependencies":104},"1. 官方推荐使用 conda 创建 Python 3.11 虚拟环境进行安装。\n2. 首次运行需手动下载两个关键模型：llmlingua-2（用于文本压缩\u002F分段）和 all-MiniLM-L6-v2（用于文本嵌入），需提前配置本地路径。\n3. 支持多种后端：本地部署支持 Ollama、vLLM、Transformers；云端支持 OpenAI、DeepSeek。\n4. 检索后端支持 Qdrant、FAISS 和 BM25，需根据选择安装相应依赖。","3.11",[105,106,107,108,109,110,111],"torch","transformers","sentence-transformers","llmlingua-2","qdrant-client","faiss-cpu","rank-bm25",[16,35,14,13],[114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130],"agent","ai-agents","artificial-intelligence","chatbot","genai","knowledge","large-language-models","lightweight","llm","long-term-memory","memory","memory-management","natural-language-processing","personalization","python","rag","lightmem","2026-03-27T02:49:30.150509","2026-04-07T06:13:34.750176",[134,139,144,149,154,159,163],{"id":135,"question_zh":136,"answer_zh":137,"source_url":138},21097,"如何复现论文表 2 中的实验结果（设置压缩率 r=0.6 和短期记忆阈值 th=768）？","要复现特定超参数的实验，请按以下步骤配置：\n1. **设置压缩率 (r)**：在 `config_dict` 的 `compress_config` 块中修改 `rate` 参数为 0.6。\n```python\n\"pre_compressor\": {\n    \"model_name\": \"llmlingua-2\",\n    \"configs\": {\n        \"llmlingua_config\": {\n            \"model_name\": LLMLINGUA_MODEL_PATH,\n            \"device_map\": \"cuda\",\n            \"use_llmlingua2\": True,\n        },\n        \"compress_config\": {\n            \"instruction\": \"\",\n            \"rate\": 0.6,  # 在此处设置 r = 0.6\n            \"target_token\": -1\n        },\n    }\n}\n```\n2. **设置短期记忆阈值 (th)**：在 `src\u002Flightmem\u002Fmemory\u002Flightmem.py` 中找到 `ShortMemBufferManager` 初始化的位置，将 `max_tokens` 参数修改为 768。\n```python\nself.shortmem_buffer_manager = ShortMemBufferManager(\n    max_tokens=768,  # 设置 th = 768\n    ...\n)\n```","https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLightMem\u002Fissues\u002F42",{"id":140,"question_zh":141,"answer_zh":142,"source_url":143},21098,"项目中是否提供了 Full Text 和 Naive RAG 等基线方法的实现代码？","是的，项目已添加了这两个基线方法的复现代码。您可以访问以下目录获取：\n`src\u002Flightmem\u002Fmemory_toolkits\u002Fnaive_baselines`\n此外，关于 RAG 实验，作者在实验中使用的 top-k 值为 20。","https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLightMem\u002Fissues\u002F7",{"id":145,"question_zh":146,"answer_zh":147,"source_url":148},21099,"运行 LongMemEval 完整数据集（500 个样本）需要多长时间？是否正常？","处理完整数据集确实需要较长时间。用户反馈在使用大型模型部署时，处理 500 个样本大约需要 50 小时，这属于预期范围内的处理时间。如果您需要参考结果，作者已在 Google Drive 上发布了所有基线和 LightMem 的运行结果供对比。","https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLightMem\u002Fissues\u002F6",{"id":150,"question_zh":151,"answer_zh":152,"source_url":153},21100,"运行 run_lightmem_gpt.py 脚本时遇到时间格式错误（ValueError: Time format ... not supported）如何解决？","该错误是由于 `run_lightmem_gpt.py` 脚本中缺少 `convert_timestamp` 函数导致的，该函数仅在教程 Notebook 中定义。这造成了脚本工作流与 Notebook 工作流的不一致。\n解决方案：\n1. 手动在 `run_lightmem_gpt.py` 中添加缺失的 `convert_timestamp` 函数（可参考 Notebook 中的实现）。\n2. 或者检查 `factory\u002Fmemory_manager` 相关代码，确保时间格式解析逻辑一致。维护者已确认这是已知不一致问题，建议用户自行修复或等待官方更新统一接口。","https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLightMem\u002Fissues\u002F38",{"id":155,"question_zh":156,"answer_zh":157,"source_url":158},21101,"使用 Ollama 运行 LightMem 时出现 TypeError: got an unexpected keyword argument 'topic_id_mapping' 错误怎么办？","此错误是因为 `OllamaManager`（以及 `VllmManager`）的 `meta_text_extract` 函数参数与 `OpenaiManager` 不一致造成的。旧版本中 Ollama 管理器不支持 `topic_id_mapping` 等参数。\n**解决方案**：请更新到最新版本的代码库。维护者已在最新更新中修复了此问题，现在 `OllamaManager` 和其他非 OpenAI 管理器已与 OpenAI 风格的提取接口对齐，支持相同的参数输入。","https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLightMem\u002Fissues\u002F50",{"id":160,"question_zh":161,"answer_zh":162,"source_url":138},21102,"如何在没有离线更新（no offline-update \u002F OP-Update）的设置下复现结果？","要在推理时不检索记忆（即“无离线更新”模式），您需要将相关配置设置为 \"online\"。具体操作是在运行脚本（如 `experiments\u002Flongmemeval\u002Frun_lightmem_gpt.py`）中，找到控制记忆更新模式的配置行，将其修改为在线模式。如果仍然无法复现（例如没有记忆被检索），请检查是否遗漏了其他关于记忆缓冲区的初始化配置，并确保没有启用后台的记忆合并或更新进程。",{"id":164,"question_zh":165,"answer_zh":166,"source_url":167},21103,"代码中是否存在 SensoryMemory 添加消息时的循环逻辑错误导致对话格式破坏？","是的，这是一个已确认的逻辑缺陷。在 `factory.memory_buffer.sensory_memory.SenMemBufferManager.add_messages` 方法中，直接在遍历 `self.big_buffer` 列表时使用 `remove` 删除元素会导致迭代器跳过部分元素（特别是奇数索引的元素）。\n**后果**：这会导致 `self.buffer` 中的数据顺序错乱，原本交替的 user-assistant 格式变成前面全是 user、后面全是 assistant，从而破坏后续逻辑依赖的对话格式。\n**建议**：社区用户已提交 PR 修复此问题，建议使用最新代码或避免在遍历列表时直接修改列表结构（应先收集待删除项，遍历结束后统一删除，或使用列表推导式重构）。","https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLightMem\u002Fissues\u002F28",[]]