[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-Shichun-Liu--Agent-Memory-Paper-List":3,"tool-Shichun-Liu--Agent-Memory-Paper-List":62},[4,18,28,37,45,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":24,"last_commit_at":25,"category_tags":26,"status":17},9989,"n8n","n8n-io\u002Fn8n","n8n 是一款面向技术团队的公平代码（fair-code）工作流自动化平台，旨在让用户在享受低代码快速构建便利的同时，保留编写自定义代码的灵活性。它主要解决了传统自动化工具要么过于封闭难以扩展、要么完全依赖手写代码效率低下的痛点，帮助用户轻松连接 400 多种应用与服务，实现复杂业务流程的自动化。\n\nn8n 特别适合开发者、工程师以及具备一定技术背景的业务人员使用。其核心亮点在于“按需编码”：既可以通过直观的可视化界面拖拽节点搭建流程，也能随时插入 JavaScript 或 Python 代码、调用 npm 包来处理复杂逻辑。此外，n8n 原生集成了基于 LangChain 的 AI 能力，支持用户利用自有数据和模型构建智能体工作流。在部署方面，n8n 提供极高的自由度，支持完全自托管以保障数据隐私和控制权，也提供云端服务选项。凭借活跃的社区生态和数百个现成模板，n8n 让构建强大且可控的自动化系统变得简单高效。",184740,2,"2026-04-19T23:22:26",[16,14,13,15,27],"插件",{"id":29,"name":30,"github_repo":31,"description_zh":32,"stars":33,"difficulty_score":10,"last_commit_at":34,"category_tags":35,"status":17},10095,"AutoGPT","Significant-Gravitas\u002FAutoGPT","AutoGPT 是一个旨在让每个人都能轻松使用和构建 AI 的强大平台，核心功能是帮助用户创建、部署和管理能够自动执行复杂任务的连续型 AI 智能体。它解决了传统 AI 应用中需要频繁人工干预、难以自动化长流程工作的痛点，让用户只需设定目标，AI 即可自主规划步骤、调用工具并持续运行直至完成任务。\n\n无论是开发者、研究人员，还是希望提升工作效率的普通用户，都能从 AutoGPT 
中受益。开发者可利用其低代码界面快速定制专属智能体；研究人员能基于开源架构探索多智能体协作机制；而非技术背景用户也可直接选用预置的智能体模板，立即投入实际工作场景。\n\nAutoGPT 的技术亮点在于其模块化“积木式”工作流设计——用户通过连接功能块即可构建复杂逻辑，每个块负责单一动作，灵活且易于调试。同时，平台支持本地自托管与云端部署两种模式，兼顾数据隐私与使用便捷性。配合完善的文档和一键安装脚本，即使是初次接触的用户也能在几分钟内启动自己的第一个 AI 智能体。AutoGPT 正致力于降低 AI 应用门槛，让人人都能成为 AI 的创造者与受益者。",183572,"2026-04-20T04:47:55",[13,36,27,14,15],"语言模型",{"id":38,"name":39,"github_repo":40,"description_zh":41,"stars":42,"difficulty_score":10,"last_commit_at":43,"category_tags":44,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":46,"name":47,"github_repo":48,"description_zh":49,"stars":50,"difficulty_score":24,"last_commit_at":51,"category_tags":52,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",161147,"2026-04-19T23:31:47",[14,13,36],{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":59,"last_commit_at":60,"category_tags":61,"status":17},8272,"opencode","anomalyco\u002Fopencode","OpenCode 是一款开源的 
AI 编程助手（Coding Agent），旨在像一位智能搭档一样融入您的开发流程。它不仅仅是一个代码补全插件，而是一个能够理解项目上下文、自主规划任务并执行复杂编码操作的智能体。无论是生成全新功能、重构现有代码，还是排查难以定位的 Bug，OpenCode 都能通过自然语言交互高效完成，显著减少开发者在重复性劳动和上下文切换上的时间消耗。\n\n这款工具专为软件开发者、工程师及技术研究人员设计，特别适合希望利用大模型能力来提升编码效率、加速原型开发或处理遗留代码维护的专业人群。其核心亮点在于完全开源的架构，这意味着用户可以审查代码逻辑、自定义行为策略，甚至私有化部署以保障数据安全，彻底打破了传统闭源 AI 助手的“黑盒”限制。\n\n在技术体验上，OpenCode 提供了灵活的终端界面（Terminal UI）和正在测试中的桌面应用程序，支持 macOS、Windows 及 Linux 全平台。它兼容多种包管理工具，安装便捷，并能无缝集成到现有的开发环境中。无论您是追求极致控制权的资深极客，还是渴望提升产出的独立开发者，OpenCode 都提供了一个透明、可信",144296,1,"2026-04-16T14:50:03",[13,27],{"id":63,"github_repo":64,"name":65,"description_en":66,"description_zh":67,"ai_summary_zh":67,"readme_en":68,"readme_zh":69,"quickstart_zh":70,"use_case_zh":71,"hero_image_url":72,"owner_login":73,"owner_name":74,"owner_avatar_url":75,"owner_bio":76,"owner_company":77,"owner_location":76,"owner_email":76,"owner_twitter":76,"owner_website":78,"owner_url":79,"languages":76,"stars":80,"forks":81,"last_commit_at":82,"license":83,"difficulty_score":59,"env_os":84,"env_gpu":85,"env_ram":85,"env_deps":86,"category_tags":89,"github_topics":90,"view_count":24,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":93,"updated_at":94,"faqs":95,"releases":126},10009,"Shichun-Liu\u002FAgent-Memory-Paper-List","Agent-Memory-Paper-List","The paper list of \"Memory in the Age of AI Agents: A Survey\"","Agent-Memory-Paper-List 是一个专注于整理和追踪“AI 智能体记忆”领域前沿学术成果的开源项目。它依托于综述论文《Memory in the Age of AI Agents: A Survey》，旨在为这一快速发展的研究方向提供系统化的文献索引。\n\n当前，智能体记忆研究虽呈爆发式增长，但面临术语定义模糊、分类标准不一的碎片化困境，且常与 RAG（检索增强生成）或上下文工程等概念混淆。该项目通过构建统一的分类体系，清晰界定了智能体记忆的边界，并从“形式”（存储介质）、“功能”（事实、经验与工作记忆）及“动态演化”（形成、巩固与检索）三个维度对海量论文进行梳理，帮助研究者理清技术脉络。\n\n该资源特别适合人工智能领域的研究人员、算法工程师及对大模型智能体架构感兴趣的技术开发者使用。其核心亮点在于提出了一套创新的三维分类法，不仅区分了令牌级、参数级和潜在状态等不同记忆形态，还深入剖析了记忆的生命周期机制。无论是希望快速把握领域全貌的初学者，还是寻求特定技术突破的资深专家，都能在此找到清晰的理论支撑和最新的论文列表，是推动智能体记忆研究从分散走向规范的重要参考库。","\u003C!-- # Memory in the Age of AI Agents: A Survey -->\n\n\u003Ch1 align=\"center\">\n  \u003Cstrong>Memory in the Age of AI Agents: A 
Survey\u003C\u002Fstrong>\n\u003C\u002Fh1>\n\n\u003Cdiv align=\"center\">\n\n[![arXiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FArxiv-2512.13564-b31b1b.svg?logo=arXiv)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.13564)\n[![Hugging Face](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FHugging_Face-2512.13564-292929.svg?logo=huggingface)](https:\u002F\u002Fhuggingface.co\u002Fpapers\u002F2512.13564)\n[![Contribution Welcome](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FContributions-welcome-Green?logo=mercadopago&logoColor=white)](https:\u002F\u002Fgithub.com\u002FShichun-Liu\u002FAgent-Memory-Paper-List\u002Fpulls)\n[![GitHub star chart](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FShichun-Liu\u002FAgent-Memory-Paper-List?style=social)](https:\u002F\u002Fstar-history.com\u002F#Shichun-Liu\u002FAgent-Memory-Paper-List)\n[![License](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-MIT-yellow.svg?)](LICENSE)\n[![Semantic Scholar Citations](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdynamic\u002Fjson?label=Citations&query=%24.citationCount&url=https%3A%2F%2Fapi.semanticscholar.org%2Fgraph%2Fv1%2Fpaper%2Fd362b7619fcd2df4241696a19aec95961b8a729c%3Ffields%3DcitationCount&logo=semanticscholar&cacheSeconds=3600)](https:\u002F\u002Fwww.semanticscholar.org\u002Fpaper\u002Fd362b7619fcd2df4241696a19aec95961b8a729c)\n\n\n\u003C\u002Fdiv>\n\n## 📢 News\n- [2026\u002F01\u002F29] 🎉 Our repository has reached **1k stars**! Thank you all for your support and interest in Agent Memory research!\n- [2026\u002F01\u002F13] 📄 We have updated our survey to incorporate several recent works, and we sincerely thank the community for their valuable contributions and suggestions. 
See [Memory in the Age of AI Agents: A Survey](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.13564) for the paper!\n- [2025\u002F12\u002F16] 🎉 Our paper is featured on [Huggingface Daily Paper #1](https:\u002F\u002Fhuggingface.co\u002Fpapers\u002Fdate\u002F2025-12-16)!\n- [2025\u002F12\u002F16] 📚 We created this repository to maintain a paper list on Agent Memory. More papers are coming soon!\n- [2025\u002F12\u002F16] 📄 Our survey is released! See [Memory in the Age of AI Agents: A Survey](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.13564) for the paper!\n\n\n\u003Cdiv align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShichun-Liu_Agent-Memory-Paper-List_readme_071057f19e08.png\" alt=\"Overview of agent memory organized by the unified taxonomy\" width=\"80%\" \u002F>\n  \u003Cp>\u003Cem>\u003Cstrong>Figure:\u003C\u002Fstrong> Overview of agent memory organized by the unified taxonomy of \u003Cstrong>forms\u003C\u002Fstrong>, \u003Cstrong>functions\u003C\u002Fstrong>, and \u003Cstrong>dynamics\u003C\u002Fstrong>.\u003C\u002Fem>\u003C\u002Fp>\n\u003C\u002Fdiv>\n\n## 👋 Introduction\n\nMemory serves as the cornerstone of foundation model-based agents, underpinning their ability to perform long-horizon reasoning, adapt continually, and interact effectively with complex environments.\n\nDespite the explosion of research in this field, the landscape remains highly fragmented, with loosely defined terminologies and inconsistent taxonomies. This repository aims to bridge this gap. 
We distinguish Agent Memory from related concepts like RAG and Context Engineering, and provide a comprehensive overview through three unified lenses:\n\n- Forms (What Carries Memory?): Categorizing memory by its storage medium—Token-level (explicit & discrete), Parametric (implicit weights), and Latent (hidden states).\n- Functions (Why Agents Need Memory?): Moving beyond simple temporal divisions to a functional taxonomy: Factual (knowledge), Experiential (insights & skills), and Working Memory (active context management).\n- Dynamics (How Memory Evolves?): Dissecting the operational lifecycle into Formation (extraction), Evolution (consolidation & forgetting), and Retrieval (access strategies).\n\nThrough this structure, we hope to provide a conceptual foundation for rethinking memory as a first-class primitive in future agentic intelligence.\n\n## 💡 Concepts\n\n\u003Cdiv align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShichun-Liu_Agent-Memory-Paper-List_readme_70b7d371cc7e.png\" alt=\"Conceptual Comparison\" width=\"80%\" \u002F>\n  \u003Cp>\u003Cem>\u003Cstrong>Figure:\u003C\u002Fstrong> Conceptual comparison of \u003Cstrong>Agent Memory\u003C\u002Fstrong> with \u003Cstrong>LLM Memory\u003C\u002Fstrong>, \u003Cstrong>RAG\u003C\u002Fstrong>, and \u003Cstrong>Context Engineering\u003C\u002Fstrong>.\u003C\u002Fem>\u003C\u002Fp>\n\u003C\u002Fdiv>\n\n\n## 📚 Paper list\n\n### Factual Memory\n\n#### Token-level\n- [2026\u002F01] Memory Matters More: Event-Centric Memory as a Logic Map for Agent Searching and Reasoning. [[paper](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2601.04726)]\n- [2026\u002F01] MAGMA: A Multi-Graph based Agentic Memory Architecture for AI Agents. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.03236)]\n- [2026\u002F01] EverMemOS: A Self-Organizing Memory Operating System for Structured Long-Horizon Reasoning. 
[[paper](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2601.02163)]\n- [2025\u002F12] From Context to EDUs: Faithful and Structured Context Compression via Elementary Discourse Unit Decomposition. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.14244)]\n- [2025\u002F12] MemVerse: Multimodal Memory for Lifelong Learning Agents. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.03627)]\n- [2025\u002F12] MMAG: Mixed Memory-Augmented Generation for Large Language Models Applications. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.01710)]\n- [2025\u002F12] Sophia: A Persistent Agent Framework of Artificial Life. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.18202)]\n- [2025\u002F12] WorldMM: Dynamic Multimodal Memory Agent for Long Video Reasoning. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.02425)]\n- [2025\u002F12] Memoria: A Scalable Agentic Memory Framework for Personalized Conversational AI. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.12686)]\n- [2025\u002F12] Hindsight is 20\u002F20: Building Agent Memory that Retains, Recalls, and Reflects. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.12818)]\n- [2025\u002F11] A Simple Yet Strong Baseline for Long-Term Conversational Memory of LLM Agents. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.17208)]\n- [2025\u002F11] General Agentic Memory Via Deep Research. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.18423)]\n- [2025\u002F11] O-Mem: Omni Memory System for Personalized, Long Horizon, Self-Evolving Agents. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.13593)]\n- [2025\u002F11] RCR-Router: Efficient Role-Aware Context Routing for Multi-Agent LLM Systems with Structured Memory. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2508.04903)]\n- [2025\u002F11] Enabling Personalized Long-term Interactions in LLM-based Agents through Persistent Memory and User Profiles. 
[[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2510.07925)]\n- [2025\u002F10] Livia: An Emotion-Aware AR Companion Powered by Modular AI Agents and Progressive Memory Compression. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2509.05298)]\n- [2025\u002F10] D-SMART: Enhancing LLM Dialogue Consistency via Dynamic Structured Memory And Reasoning Tree. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.13363)]\n- [2025\u002F10] WebWeaver: Structuring Web-Scale Evidence with Dynamic Outlines for Open-Ended Deep Research. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2509.13312)]\n- [2025\u002F10] CAM: A Constructivist View of Agentic Memory for LLM-Based Reading Comprehension. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2510.05520)]\n- [2025\u002F10] Pre-Storage Reasoning for Episodic Memory: Shifting Inference Burden to Memory for Personalized Dialogue. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2509.10852)]\n- [2025\u002F10] LightMem: Lightweight and Efficient Memory-Augmented Generation. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.18866)]\n- [2025\u002F10] RGMem: Renormalization Group-based Memory Evolution for Language Agent User Profile. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.16392)]\n- [2025\u002F09] Mem-α: Learning Memory Construction via Reinforcement Learning. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2509.25911)]\n- [2025\u002F09] SGMem: Sentence Graph Memory for Long-Term Conversational Agents. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.21212)]\n- [2025\u002F09] Nemori: Self-Organizing Agent Memory Inspired by Cognitive Science. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2508.03341)]\n- [2025\u002F09] MOOM: Maintenance, Organization and Optimization of Memory in Ultra-Long Role-Playing Dialogues. 
[[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.11860)]\n- [2025\u002F09] Multiple Memory Systems for Enhancing the Long-term Memory of Agent. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2508.15294)]\n- [2025\u002F09] Semantic Anchoring in Agentic Memory: Leveraging Linguistic Structures for Persistent Conversational Context. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2508.12630)]\n- [2025\u002F09] ComoRAG: A Cognitive-Inspired Memory-Organized RAG for Stateful Long Narrative Reasoning. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2508.10419)]\n- [2025\u002F08] Building Self-Evolving Agents via Experience-Driven Lifelong Learning: A Framework and Benchmark. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.19005)]\n- [2025\u002F08] Seeing, Listening, Remembering, and Reasoning: A Multimodal Agent with Long-Term Memory. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.09736)]\n- [2025\u002F08] Memory-R1: Enhancing Large Language Model Agents to Manage and Utilize Memories via Reinforcement Learning. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.19828)]\n- [2025\u002F08] Intrinsic Memory Agents: Heterogeneous Multi-Agent LLM Systems through Structured Contextual Memory. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.08997)]\n- [2025\u002F07] MIRIX: Multi-Agent Memory System for LLM-Based Agents. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.07957)]\n- [2025\u002F07] Hierarchical Memory for High-Efficiency Long-Term Reasoning in LLM Agents. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.22925)]\n- [2025\u002F06] G-Memory: Tracing Hierarchical Memory for Multi-Agent Systems. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.07398)]\n- [2025\u002F06] Embodied Agents Meet Personalization: Exploring Memory Utilization for Personalized Assistance. 
[[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2505.16348)]\n- [2025\u002F05] MemGuide: Intent-Driven Memory Selection for Goal-Oriented Multi-Session LLM Agents. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.20231)]\n- [2025\u002F05] Pre-training Limited Memory Language Models with Internal and External Knowledge. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.15962)]\n- [2025\u002F05] Embodied VideoAgent: Persistent Memory from Egocentric Videos and Embodied Sensors Enables Dynamic Scene Understanding. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2501.00358)]\n- [2025\u002F04] Mem0: Building production-ready ai agents with scalable long-term memory. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.19413)]\n- [2025\u002F03] In Prospect and Retrospect: Reflective Memory Management for Long-term Personalized Dialogue Agents. [[paper](https:\u002F\u002Faclanthology.org\u002F2025.acl-long.413\u002F)]\n- [2025\u002F02] SeCom: On Memory Construction and Retrieval for Personalized Conversational Agents. [[paper](https:\u002F\u002Fopenreview.net\u002Fforum?id=xKDZAW0He3)]\n- [2025\u002F02] Zep: A Temporal Knowledge Graph Architecture for Agent Memory. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2501.13956)]\n- [2025\u002F02] R${}^3$Mem: Bridging Memory Retention and Retrieval via Reversible Compression. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.15957)]\n- [2025\u002F02] A-MEM: Agentic Memory for LLM Agents. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FARXIV.2502.12110)]\n- [2025\u002F02] Unveiling Privacy Risks in LLM Agent Memory. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.13172)]\n- [2025\u002F02] Mem2Ego: Empowering Vision-Language Models with Global-to-Ego Memory for Long-Horizon Embodied Navigation. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2502.14254)]\n- [2024\u002F12] AI PERSONA: Towards Life-long Personalization of LLMs. 
[[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.13103)]\n- [2024\u002F11] OASIS: Open Agent Social Interaction Simulations with One Million Agents. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.11581)]\n- [2024\u002F10] Video-RAG: Visually-aligned Retrieval-Augmented Long Video Comprehension. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.13093)]\n- [2024\u002F10] Memolet: Reifying the Reuse of User-AI Conversational Memories. [[paper](https:\u002F\u002Fdoi.org\u002F10.1145\u002F3654777.3676388)]\n- [2024\u002F10] From Isolated Conversations to Hierarchical Schemas: Dynamic Tree Memory Representation for LLMs. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.14052)]\n- [2024\u002F10] Enhancing Long Context Performance in LLMs Through Inner Loop Query Mechanism. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.12859)]\n- [2024\u002F09] Crafting Personalized Agents through Retrieval-Augmented Generation on Editable Memory Graphs. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.19401)]\n- [2024\u002F07] Human-inspired Episodic Memory for Infinite Context LLMs. [[paper](https:\u002F\u002Fopenreview.net\u002Fforum?id=BI2int5SAC)]\n- [2024\u002F07] Arigraph: Learning knowledge graph world models with episodic memory for llm agents. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.04363)]\n- [2024\u002F07] ChatHaruhi: Reviving Anime Character in Reality via Large Language Model. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2308.09597)]\n- [2024\u002F07] Toward Conversational Agents with Context and Time Sensitive Long-term Memory. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2406.00057)]\n- [2024\u002F06] Enhancing Long-Term Memory using Hierarchical Aggregate Tree for Retrieval Augmented Generation. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.06124)]\n- [2024\u002F06] Towards Lifelong Dialogue Agents via Timeline-based Memory Management. 
[[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.10996)]\n- [2024\u002F05] HippoRAG: Neurobiologically Inspired Long-Term Memory for Large Language Models. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.14831)]\n- [2024\u002F05] Memory Sharing for Large Language Model based Agents. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2404.09982)]\n- [2024\u002F05] Knowledge Graph Tuning: Real-time Large Language Model Personalization based on Human Feedback. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.19686)]\n- [2024\u002F04] From Local to Global: A Graph RAG Approach to Query-Focused Summarization. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.16130)]\n- [2024\u002F03] Memoro: Using Large Language Models to Realize a Concise Interface for Real-Time Memory Augmentation. [[paper](https:\u002F\u002Fdoi.org\u002F10.1145\u002F3613904.3642450)]\n- [2023\u002F10] RoleLLM: Benchmarking, Eliciting, and Enhancing Role-Playing Abilities of Large Language Models. [[paper](https:\u002F\u002Fdoi.org\u002F10.18653\u002Fv1\u002F2024.findings-acl.878)]\n- [2023\u002F10] MemGPT: Towards LLMs as Operating Systems. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.08560)]\n- [2023\u002F10] GameGPT: Multi-agent Collaborative Framework for Game Development. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FARXIV.2310.08067)]\n- [2023\u002F10] Lyfe Agents: Generative agents for low-cost real-time social interactions. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.02172)]\n- [2023\u002F08] CALYPSO: LLMs as Dungeon Masters' Assistants. [[paper](https:\u002F\u002Fdoi.org\u002F10.1609\u002Faiide.v19i1.27534)]\n- [2023\u002F08] MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.00352)]\n- [2023\u002F08] Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations. 
[[paper](https:\u002F\u002Fdoi.org\u002F10.1145\u002F3731446)]\n- [2023\u002F08] MemoChat: Tuning LLMs to Use Memos for Consistent Long-Range Open-Domain Conversation. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.08239)]\n- [2023\u002F08] Recursively summarizing enables long-term dialogue memory in large language models. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.15022)]\n- [2023\u002F07] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding. [[paper](https:\u002F\u002Fdoi.org\u002F10.1109\u002FCVPR52733.2024.01725)]\n- [2023\u002F07] S${}^3$: Social-network Simulation System with Large Language Model-Empowered Agents. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FARXIV.2307.14984)]\n- [2023\u002F05] Prompted LLMs as Chatbot Modules for Long Open-domain Conversation. [[paper](https:\u002F\u002Fdoi.org\u002F10.18653\u002Fv1\u002F2023.findings-acl.277)]\n- [2023\u002F05] RecurrentGPT: Interactive Generation of (Arbitrarily) Long Text. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.13304)]\n- [2023\u002F05] Memorybank: Enhancing large language models with long-term memory. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.10250)]\n- [2023\u002F05] RET-LLM: Towards a general read-write memory for large language models. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14322)]\n- [2023\u002F04] Generative agents: Interactive simulacra of human behavior. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.03442)]\n- [2023\u002F04] HuaTuo: Tuning LLaMA Model with Chinese Medical Knowledge. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.06975)]\n- [2023\u002F04] SCM: Enhancing Large Language Model with Self-Controlled Memory Framework. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.13343)]\n\n\n#### Parametric\n- [2025\u002F10] MemLoRA: Distilling Expert Adapters for On-Device Memory Systems. 
[[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.04763)]\n- [2025\u002F10] Pretraining with hierarchical memories: separating long-tail and common knowledge. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.02375)]\n- [2025\u002F08] Memory Decoder: A Pretrained, Plug-and-Play Memory for Large Language Models. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.09874)]\n- [2025\u002F08] MLP Memory: Language Modeling with Retriever-pretrained External Memory. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2508.01832)]\n- [2024\u002F10] Self-Updatable Large Language Models by Integrating Context into Model Parameters. [[paper](https:\u002F\u002Fopenreview.net\u002Fforum?id=aCPFCDL9QY)]\n- [2024\u002F10] AlphaEdit: Null-Space Constrained Knowledge Editing for Language Models. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.02355)]\n- [2024\u002F08] ELDER: Enhancing Lifelong Model Editing with Mixture-of-LoRA. [[paper](https:\u002F\u002Fdoi.org\u002F10.1609\u002Faaai.v39i23.34622)]\n- [2024\u002F05] WISE: Rethinking the Knowledge Memory for Lifelong Model Editing of Large Language Models. [[paper](http:\u002F\u002Fpapers.nips.cc\u002Fpaper_files\u002Fpaper\u002F2024\u002Fhash\u002F60960ad78868fce5c165295fbd895060-Abstract-Conference.html)]\n- [2024\u002F03] Online Adaptation of Language Models with a Memory of Amortized Contexts. [[paper](http:\u002F\u002Fpapers.nips.cc\u002Fpaper_files\u002Fpaper\u002F2024\u002Fhash\u002Feaf956b52bae51fbf387b8be4cc3ce18-Abstract-Conference.html)]\n- [2024\u002F01] Neighboring Perturbations of Knowledge Editing on Large Language Models. [[paper](https:\u002F\u002Fopenreview.net\u002Fforum?id=K9NTPRvVRI)]\n- [2023\u002F11] CharacterGLM: Customizing Social Characters with Large Language Models. [[paper](https:\u002F\u002Fdoi.org\u002F10.18653\u002Fv1\u002F2024.emnlp-industry.107)]\n- [2023\u002F10] Character-LLM: A Trainable Agent for Role-Playing. 
[[paper](https:\u002F\u002Fdoi.org\u002F10.18653\u002Fv1\u002F2023.emnlp-main.814)]\n- [2021\u002F10] Fast Model Editing at Scale. [[paper](https:\u002F\u002Fopenreview.net\u002Fforum?id=0DcZxeWfOPt)]\n- [2021\u002F04] Editing Factual Knowledge in Language Models. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.08164)]\n- [2020\u002F02] K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters. [[paper](https:\u002F\u002Fdoi.org\u002F10.18653\u002Fv1\u002F2021.findings-acl.121)]\n- [2013\u002F02] ELLA: An Efficient Lifelong Learning Algorithm. [[paper](https:\u002F\u002Fproceedings.mlr.press\u002Fv28\u002Fruvolo13.html)]\n\n#### Latent\n\n- [2025\u002F09] Similarity-Distance-Magnitude Activations. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.12760)]\n- [2025\u002F08] Towards General Continuous Memory for Vision-Language Models. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.17670)]\n- [2025\u002F03] M+: Extending MemoryLLM with Scalable Long-Term Memory. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2502.00592)]\n- [2025\u002F02] R${}^3$Mem: Bridging Memory Retention and Retrieval via Reversible Compression. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.15957v1)]\n- [2024\u002F07] Memory${}^3$: Language Modeling with Explicit Memory. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2407.01178)]\n- [2024\u002F03] Efficient Episodic Memory Utilization of Cooperative Multi-Agent Reinforcement Learning. [[paper](https:\u002F\u002Fopenreview.net\u002Fforum?id=LjivA1SLZ6)]\n- [2023\u002F10] Memoria: Resolving Fateful Forgetting Problem through Human-Inspired Memory Architecture. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.03052)]\n- [2021\u002F12] Detecting Local Insights from Global Labels: Supervised & Zero-Shot Sequence Labeling via a Convolutional Decomposition. 
[[paper](https:\u002F\u002Fdoi.org\u002F10.1162\u002Fcoli_a_00416)]\n\n### Experiential Memory\n\n#### Token-level\n- [2026\u002F01] MemRL: Self-Evolving Agents via Runtime Reinforcement Learning on Episodic Memory. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.03192)]\n- [2025\u002F12] MemEvolve: Meta-Evolution of Agent Memory Systems. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.18746)]\n- [2025\u002F12] Remember Me, Refine Me: A Dynamic Procedural Memory Framework for Experience-Driven Agent Evolution. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.10696)]\n- [2025\u002F12] Hindsight is 20\u002F20: Building Agent Memory that Retains, Recalls, and Reflects. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.12818)]\n- [2025\u002F11] Agentic Context Engineering: Evolving Contexts for Self-Improving Language Models. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2510.04618)]\n- [2025\u002F11] FLEX: Continuous Agent Evolution via Forward Learning from Experience. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.06449)]\n- [2025\u002F11] Scaling Agent Learning via Experience Synthesis. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.03773)]\n- [2025\u002F11] UFO2: The Desktop AgentOS. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2504.14603)]\n- [2025\u002F10] PRINCIPLES: Synthetic Strategy Memory for Proactive Dialogue Agents. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2509.17459)]\n- [2025\u002F10] Training-Free Group Relative Policy Optimization. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.08191)]\n- [2025\u002F10] ToolMem: Enhancing Multimodal Agents with Learnable Tool Capability Memory. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2510.06664)]\n- [2025\u002F10] H${}^2$R: Hierarchical Hindsight Reflection for Multi-Task LLM Agents. 
[[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2509.12810)]\n- [2025\u002F10] BrowserAgent: Building Web Agents with Human-Inspired Web Browsing Actions. [[paper](http:\u002F\u002Farxiv.org\u002Fabs\u002F2510.10666)]\n- [2025\u002F10] LEGOMem: Modular Procedural Memory for Multi-agent LLM Systems for Workflow Automation. [[paper](http:\u002F\u002Farxiv.org\u002Fabs\u002F2510.04851)]\n- [2025\u002F10] Alita-G: Self-Evolving Generative Agent for Agent Generation. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2510.23601)]\n- [2025\u002F09] ReasoningBank: Scaling Agent Self-Evolving with Reasoning Memory. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.25140)]\n- [2025\u002F09] Memento: Fine-tuning LLM Agents without Fine-tuning LLMs. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2508.16153)]\n- [2025\u002F08] Memp: Exploring Agent Procedural Memory. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.06433)]\n- [2025\u002F08] SEAgent: Self-Evolving Computer Use Agent with Autonomous Learning from Experience. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.04700)]\n- [2025\u002F07] Agent KB: Leveraging Cross-Domain Experience for Agentic Problem Solving. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.06229)]\n- [2025\u002F07] MemTool: Optimizing short-term memory management for dynamic tool calling in llm agent multi-turn conversations. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.21428)]\n- [2025\u002F05] Darwin Godel Machine: Open-Ended Evolution of Self-Improving Agents. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2505.22954)]\n- [2025\u002F05] Alita: Generalist Agent Enabling Scalable Agentic Reasoning with Minimal Predefinition and Maximal Self-Evolution. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.20286)]\n- [2025\u002F05] SkillWeaver: Web Agents can Self-Improve by Discovering and Honing Skills. 
[[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2504.07079)]\n- [2025\u002F05] LearnAct: Few-Shot Mobile GUI Agent with a Unified Demonstration Benchmark. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2504.13805)]\n- [2025\u002F05] Retrieval Models Aren't Tool-Savvy: Benchmarking Tool Retrieval for Large Language Models. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2503.01763)]\n- [2025\u002F04] Dynamic Cheatsheet: Test-Time Learning with Adaptive Memory. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.07952)]\n- [2025\u002F04] Inducing Programmatic Skills for Agentic Tasks. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.06821)]\n- [2025\u002F03] COLA: A Scalable Multi-Agent Framework For Windows UI Task Automation. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2503.09263)]\n- [2025\u002F03] Memory-augmented Query Reconstruction for LLM-based Knowledge Graph Reasoning. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.05193)]\n- [2025\u002F02] From Exploration to Mastery: Enabling LLMs to Master Tools via Self-Driven Interactions. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2410.08197)]\n- [2025\u002F02] From RAG to Memory: Non-Parametric Continual Learning for Large Language Models. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.14802)]\n- [2024\u002F12] Planning from Imagination: Episodic Simulation and Episodic Memory for Vision-and-Language Navigation. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.01857)]\n- [2024\u002F10] RepairAgent: An Autonomous, LLM-Based Agent for Program Repair. [[paper](http:\u002F\u002Farxiv.org\u002Fabs\u002F2403.17134)]\n- [2024\u002F09] SAGE: Self-evolving Agents with Reflective and Memory-augmented Abilities. [[paper](https:\u002F\u002Fdoi.org\u002F10.1016\u002Fj.neucom.2025.130470)]\n- [2024\u002F07] Agent Workflow Memory. 
[[paper](https:\u002F\u002Fopenreview.net\u002Fforum?id=NTAhi2JEEE)]\n- [2024\u002F07] FinCon: A Synthesized LLM Multi-Agent System with Conceptual Verbal Reinforcement for Enhanced Financial Decision Making. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.06567)]\n- [2024\u002F06] Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models. [[paper](http:\u002F\u002Fpapers.nips.cc\u002Fpaper_files\u002Fpaper\u002F2024\u002Fhash\u002Fcde328b7bf6358f5ebb91fe9c539745e-Abstract-Conference.html)]\n- [2024\u002F05] COLT: Towards Completeness-Oriented Tool Retrieval for Large Language Models. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2405.16089)]\n- [2023\u002F11] JARVIS-1: Open-World Multi-Task Agents With Memory-Augmented Multimodal Language Models. [[paper](https:\u002F\u002Fdoi.org\u002F10.1109\u002FTPAMI.2024.3511593)]\n- [2023\u002F08] RecMind: Large Language Model Powered Agent For Recommendation. [[paper](https:\u002F\u002Fdoi.org\u002F10.18653\u002Fv1\u002F2024.findings-naacl.271)]\n- [2023\u002F08] ExpeL: LLM Agents Are Experiential Learners. [[paper](https:\u002F\u002Fdoi.org\u002F10.1609\u002Faaai.v38i17.29936)]\n- [2023\u002F07] ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.16789)]\n- [2023\u002F05] CREATOR: Tool Creation for Disentangling Abstract and Concrete Reasoning of Large Language Models. [[paper](https:\u002F\u002Fdoi.org\u002F10.18653\u002Fv1\u002F2023.findings-emnlp.462)]\n- [2023\u002F03] Reflexion: Language agents with verbal reinforcement learning. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.11366)]\n- [2023\u002F02] Toolformer: Language models can teach themselves to use tools. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.04761)]\n\n#### Parametric\n\n- [2025\u002F11] AgentEvolver: Towards Efficient Self-Evolving Agent System. 
[[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.10395)]\n- [2025\u002F10] Agent Learning via Early Experience. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.08558)]\n- [2025\u002F10] Scaling Agents via Continual Pre-training. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2509.13310)]\n- [2024\u002F10] ToolGen: Unified Tool Retrieval and Calling via Generation. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.03439)]\n- [2023\u002F08] Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.02151)]\n- [2023\u002F06] A Machine with Short-Term, Episodic, and Semantic Memory Systems. [[paper](https:\u002F\u002Fdoi.org\u002F10.1609\u002Faaai.v37i1.25075)]\n\n#### Latent\n\n- [2025\u002F11] Auto-scaling Continuous Memory for GUI Agent. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2510.09038)]\n\n### Working Memory\n\n#### Token-level\n- [2026\u002F01] MemRL: Self-Evolving Agents via Runtime Reinforcement Learning on Episodic Memory. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.03192)]\n- [2026\u002F01] Agentic Memory: Learning Unified Long-Term and Short-Term Memory Management for Large Language Model Agents. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.01885)]\n- [2025\u002F11] Memory as Action: Autonomous Context Curation for Long-Horizon Agentic Tasks. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2510.12635)]\n- [2025\u002F11] IterResearch: Rethinking Long-Horizon Agents via Markovian State Reconstruction. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.07327)]\n- [2025\u002F11] MemSearcher: Training LLMs to Reason, Search and Manage Memory via End-to-End Reinforcement Learning. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2511.02805)]\n- [2025\u002F10] AgentFold: Long-Horizon Web Agents with Proactive Context Management. 
[[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.24699)]\n- [2025\u002F10] PRIME: Planning and Retrieval-Integrated Memory for Enhanced Reasoning. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2509.22315)]\n- [2025\u002F10] Context as Memory: Scene-Consistent Interactive Long Video Generation with Memory Retrieval. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2506.03141)]\n- [2025\u002F10] DeepAgent: A General Reasoning Agent with Scalable Toolsets. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2510.21618)]\n- [2025\u002F10] ACON: Optimizing Context Compression for Long-Horizon LLM Agents. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2510.00615)]\n- [2025\u002F09] ReSum: Unlocking Long-Horizon Search Intelligence via Context Summarization. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FARXIV.2509.13313)]\n- [2025\u002F08] Sculptor: Empowering LLMs with Cognitive Agency via Active Context Management. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.04664)]\n- [2025\u002F07] MemAgent: Reshaping Long-Context LLM with Multi-Conv RL-based Memory Agent. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.02259)]\n- [2024\u002F10] Agent S: An Open Agentic Framework That Uses Computers Like a Human. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.08164)]\n\n#### Parametric\n\n- [2024\u002F05] Various Lengths, Constant Speed: Efficient Language Modeling with Lightning Attention. [[paper](https:\u002F\u002Fopenreview.net\u002Fforum?id=5wm6TiUP4X)]\n- [2024\u002F01] Efficient Streaming Language Models with Attention Sinks. [[paper](https:\u002F\u002Fopenreview.net\u002Fforum?id=NG7sS51zVF)]\n\n#### Latent\n\n- [2025\u002F11] VisMem: Latent Vision Memory Unlocks Potential of Vision-Language Models [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.11007)]\n- [2025\u002F09] MemGen: Weaving Generative Latent Memory for Self-Evolving Agents. 
[[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.24704)]\n- [2025\u002F09] Conflict-Aware Soft Prompting for Retrieval-Augmented Generation. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2508.15253)]\n- [2025\u002F09] MemoryVLA: Perceptual-Cognitive Memory in Vision-Language-Action Models for Robotic Manipulation. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2508.19236)]\n- [2025\u002F06] MEM1: Learning to Synergize Memory and Reasoning for Efficient Long-Horizon Agents. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.15841)]\n- [2025\u002F05] RazorAttention: Efficient KV Cache Compression Through Retrieval Heads. [[paper](https:\u002F\u002Fopenreview.net\u002Fforum?id=tkiZQlL04w)]\n- [2025\u002F04] MemoRAG: Boosting Long Context Processing with Global Memory-Enhanced Retrieval Augmentation. [[paper](https:\u002F\u002Fdoi.org\u002F10.1145\u002F3696410.3714805)]\n- [2025\u002F04] SnapKV: LLM Knows What You are Looking for Before Generation. [[paper](http:\u002F\u002Fpapers.nips.cc\u002Fpaper_files\u002Fpaper\u002F2024\u002Fhash\u002F28ab418242603e0f7323e54185d19bde-Abstract-Conference.html)]\n- [2025\u002F03] LM2: Large Memory Models. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2502.06049)]\n- [2025\u002F02] SoftCoT: Soft Chain-of-Thought for Efficient Reasoning with LLMs. [[paper](https:\u002F\u002Faclanthology.org\u002F2025.acl-long.1137\u002F)]\n- [2025\u002F02] Time-VLM: Exploring Multimodal Vision-Language Models for Augmented Time Series Forecasting. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2502.04395)]\n- [2025\u002F02] Titans: Learning to Memorize at Test Time. [[paper](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2501.00663)]\n- [2024\u002F08] Augmenting Language Models with Long-Term Memory. 
[[paper](http:\u002F\u002Fpapers.nips.cc\u002Fpaper_files\u002Fpaper\u002F2023\u002Fhash\u002Febd82705f44793b6f9ade5a669d0f0bf-Abstract-Conference.html)]\n- [2024\u002F06] Taking a Deep Breath: Enhancing Language Modeling of Large Language Models with Sentinel Tokens. [[paper](https:\u002F\u002Fdoi.org\u002F10.18653\u002Fv1\u002F2024.findings-emnlp.233)]\n- [2024\u002F04] Adapting Language Models to Compress Contexts. [[paper](https:\u002F\u002Fdoi.org\u002F10.18653\u002Fv1\u002F2023.emnlp-main.232)]\n- [2024\u002F03] Learning to Compress Prompts with Gist Tokens. [[paper](http:\u002F\u002Fpapers.nips.cc\u002Fpaper_files\u002Fpaper\u002F2023\u002Fhash\u002F3d77c6dcc7f143aa2154e7f4d5e22d68-Abstract-Conference.html)]\n- [2024\u002F03] Scissorhands: Exploiting the Persistence of Importance Hypothesis for LLM KV Cache Compression at Test Time. [[paper](http:\u002F\u002Fpapers.nips.cc\u002Fpaper_files\u002Fpaper\u002F2023\u002Fhash\u002Fa452a7c6c463e4ae8fbdc614c6e983e6-Abstract-Conference.html)]\n- [2024\u002F03] Focused Transformer: Contrastive Training for Context Scaling. [[paper](http:\u002F\u002Fpapers.nips.cc\u002Fpaper_files\u002Fpaper\u002F2023\u002Fhash\u002F8511d06d5590f4bda24d42087802cc81-Abstract-Conference.html)]\n- [2023\u002F07] In-Context Autoencoder for Context Compression in a Large Language Model. [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.06945)]\n- [2023\u002F06] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models. [[paper](http:\u002F\u002Fpapers.nips.cc\u002Fpaper_files\u002Fpaper\u002F2023\u002Fhash\u002F6ceefa7b15572587b78ecfcebb2827f8-Abstract-Conference.html)]\n- [2022\u002F08] Memorizing Transformers. [[paper](https:\u002F\u002Fopenreview.net\u002Fforum?id=TrjbxzRcnf-)]\n- [2022\u002F07] XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model. 
[[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.07115)]\n\n## 📖 Citation\n\nIf you find this repository helpful, a citation to our paper would be greatly appreciated:\n\n```bibtex\n@article{DBLP:journals\u002Fcorr\u002Fabs-2512-13564,\n  author       = {Yuyang Hu and Shichun Liu and Yanwei Yue and Guibin Zhang and Boyang Liu and Fangyi Zhu and Jiahang Lin and Honglin Guo and Shihan Dou and Zhiheng Xi and Senjie Jin and Jiejun Tan and Yanbin Yin and Jiongnan Liu and Zeyu Zhang and Zhongxiang Sun and Yutao Zhu and Hao Sun and Boci Peng and Zhenrong Cheng and Xuanbo Fan and Jiaxin Guo and Xinlei Yu and Zhenhong Zhou and Zewen Hu and Jiahao Huo and Junhao Wang and Yuwei Niu and Yu Wang and Zhenfei Yin and Xiaobin Hu and Yue Liao and Qiankun Li and Kun Wang and Wangchunshu Zhou and Yixin Liu and Dawei Cheng and Qi Zhang and Tao Gui and Shirui Pan and Yan Zhang and Philip Torr and Zhicheng Dou and Ji{-}Rong Wen and Xuanjing Huang and Yu{-}Gang Jiang and Shuicheng Yan},\n  title        = {Memory in the Age of {AI} Agents},\n  journal      = {CoRR},\n  volume       = {abs\u002F2512.13564},\n  year         = {2025},\n  url          = {https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2512.13564},\n  doi          = {10.48550\u002FARXIV.2512.13564},\n  eprinttype    = {arXiv},\n  eprint       = {2512.13564},\n  timestamp    = {Mon, 26 Jan 2026 16:10:18 +0100},\n  biburl       = {https:\u002F\u002Fdblp.org\u002Frec\u002Fjournals\u002Fcorr\u002Fabs-2512-13564.bib},\n  bibsource    = {dblp computer science bibliography, https:\u002F\u002Fdblp.org}\n}\n```\n\n## ⭐️ Star History\n\n[![Star History Chart](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShichun-Liu_Agent-Memory-Paper-List_readme_3674d16ffb3d.png)](https:\u002F\u002Fwww.star-history.com\u002F#Shichun-Liu\u002FAgent-Memory-Paper-List&type=date&legend=top-left)\n\n","\u003C!-- # 人工智能智能体时代的记忆：综述 -->\n\n\u003Ch1 align=\"center\">\n  
\u003Cstrong>人工智能智能体时代的记忆：综述\u003C\u002Fstrong>\n\u003C\u002Fh1>\n\n\u003Cdiv align=\"center\">\n\n[![arXiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FArxiv-2512.13564-b31b1b.svg?logo=arXiv)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.13564)\n[![Hugging Face](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FHugging_Face-2512.13564-292929.svg?logo=huggingface)](https:\u002F\u002Fhuggingface.co\u002Fpapers\u002F2512.13564)\n[![欢迎贡献](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FContributions-welcome-Green?logo=mercadopago&logoColor=white)](https:\u002F\u002Fgithub.com\u002FShichun-Liu\u002FAgent-Memory-Paper-List\u002Fpulls)\n[![GitHub 星级图](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FShichun-Liu\u002FAgent-Memory-Paper-List?style=social)](https:\u002F\u002Fstar-history.com\u002F#Shichun-Liu\u002FAgent-Memory-Paper-List)\n[![许可证](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-MIT-yellow.svg?)](LICENSE)\n[![Semantic Scholar 引用数](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdynamic\u002Fjson?label=Citations&query=%24.citationCount&url=https%3A%2F%2Fapi.semanticscholar.org%2Fgraph%2Fv1%2Fpaper%2Fd362b7619fcd2df4241696a19aec95961b8a729c%3Ffields%3DcitationCount&logo=semanticscholar&cacheSeconds=3600)](https:\u002F\u002Fwww.semanticscholar.org\u002Fpaper\u002Fd362b7619fcd2df4241696a19aec95961b8a729c)\n\n\n\u003C\u002Fdiv>\n\n## 📢 新闻\n- [2026年1月29日] 🎉 我们的仓库已达到**1000颗星**！感谢大家对智能体记忆研究的支持与关注！\n- [2026年1月13日] 📄 我们更新了综述，纳入了几篇最新研究成果，并衷心感谢社区的宝贵贡献和建议。论文详见[人工智能智能体时代的记忆：综述](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.13564)！\n- [2025年12月16日] 🎉 我们的论文被收录于[Huggingface每日论文#1](https:\u002F\u002Fhuggingface.co\u002Fpapers\u002Fdate\u002F2025-12-16)！\n- [2025年12月16日] 📚 我们创建了这个仓库，用于维护智能体记忆相关的论文列表。更多论文即将发布！\n- [2025年12月16日] 📄 我们的综述正式发布！论文详见[人工智能智能体时代的记忆：综述](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.13564)！\n\n\n\u003Cdiv align=\"center\">\n  \u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShichun-Liu_Agent-Memory-Paper-List_readme_071057f19e08.png\" alt=\"基于统一分类法整理的智能体记忆概览\" width=\"80%\" \u002F>\n  \u003Cp>\u003Cem>\u003Cstrong>图：\u003C\u002Fstrong> 按照\u003Cstrong>形式\u003C\u002Fstrong>、\u003Cstrong>功能\u003C\u002Fstrong>和\u003Cstrong>动态\u003C\u002Fstrong>的统一分类法整理的智能体记忆概览。\u003C\u002Fem>\u003C\u002Fp>\n\u003C\u002Fdiv>\n\n## 👋 引言\n\n记忆是基于基础模型的智能体的核心支柱，支撑着它们进行长时程推理、持续适应以及与复杂环境的有效交互。\n\n尽管该领域的研究呈爆炸式增长，但目前的研究格局仍然高度分散，术语定义模糊不清且分类体系不一致。本仓库旨在弥合这一鸿沟。我们区分了智能体记忆与RAG、上下文工程等相关概念，并通过三个统一的视角提供全面概述：\n\n- 形式（记忆由什么承载？）：根据存储介质将记忆分为标记级（显式且离散）、参数级（隐式权重）和潜在状态级（隐藏状态）。\n- 功能（智能体为何需要记忆？）：超越简单的时序划分，提出功能性分类：事实性（知识）、经验性（洞见与技能）以及工作记忆（主动上下文管理）。\n- 动态（记忆如何演化？）：将记忆的操作生命周期细分为形成（提取）、演化（巩固与遗忘）以及检索（访问策略）。\n\n通过这一结构，我们希望为未来智能体智能中将记忆重新视为一等公民奠定概念基础。\n\n## 💡 概念\n\n\u003Cdiv align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShichun-Liu_Agent-Memory-Paper-List_readme_70b7d371cc7e.png\" alt=\"概念对比\" width=\"80%\" \u002F>\n  \u003Cp>\u003Cem>\u003Cstrong>图：\u003C\u002Fstrong> \u003Cstrong>智能体记忆\u003C\u002Fstrong>与\u003Cstrong>大语言模型记忆\u003C\u002Fstrong>、\u003Cstrong>RAG\u003C\u002Fstrong>和\u003Cstrong>上下文工程\u003C\u002Fstrong>的概念对比。\u003C\u002Fem>\u003C\u002Fp>\n\u003C\u002Fdiv>\n\n\n## 📚 论文列表\n\n### 事实性记忆\n\n#### 词级别\n- [2026\u002F01] 记忆更重要：以事件为中心的记忆作为智能体搜索与推理的逻辑图。[[论文](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2601.04726)]\n- [2026\u002F01] MAGMA：面向AI智能体的多图结构代理记忆架构。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.03236)]\n- [2026\u002F01] EverMemOS：用于结构化长时序推理的自组织记忆操作系统。[[论文](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2601.02163)]\n- [2025\u002F12] 从上下文到EDU：通过基本话语单元分解实现忠实且结构化的上下文压缩。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.14244)]\n- [2025\u002F12] MemVerse：面向终身学习智能体的多模态记忆。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.03627)]\n- [2025\u002F12] MMAG：面向大型语言模型应用的混合记忆增强生成。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.01710)]\n- [2025\u002F12] 
Sophia：一种持续性人工生命代理框架。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.18202)]\n- [2025\u002F12] WorldMM：用于长视频推理的动态多模态记忆智能体。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.02425)]\n- [2025\u002F12] Memoria：面向个性化对话式AI的可扩展代理记忆框架。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.12686)]\n- [2025\u002F12] 后见之明：构建能够保持、回忆并反思的智能体记忆。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.12818)]\n- [2025\u002F11] LLM智能体长期对话记忆的一个简单而强大的基线。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.17208)]\n- [2025\u002F11] 通过深度研究实现通用代理记忆。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.18423)]\n- [2025\u002F11] O-Mem：面向个性化、长时程、自我演进智能体的全息记忆系统。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.13593)]\n- [2025\u002F11] RCR-Router：具有结构化记忆的多智能体LLM系统中高效的角色感知上下文路由。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2508.04903)]\n- [2025\u002F11] 通过持久化记忆和用户画像，实现基于LLM的智能体中的个性化长期交互。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2510.07925)]\n- [2025\u002F10] Livia：一款由模块化AI智能体和渐进式记忆压缩技术驱动的情感感知AR伴侣。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2509.05298)]\n- [2025\u002F10] D-SMART：通过动态结构化记忆和推理树提升LLM对话一致性。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.13363)]\n- [2025\u002F10] WebWeaver：利用动态提纲对网络规模证据进行结构化处理，以支持开放式深度研究。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2509.13312)]\n- [2025\u002F10] CAM：基于建构主义视角的LLM阅读理解用代理记忆。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2510.05520)]\n- [2025\u002F10] 情景记忆的预存储推理：将推理负担转移至记忆，以实现个性化对话。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2509.10852)]\n- [2025\u002F10] LightMem：轻量级高效的记忆增强生成。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.18866)]\n- [2025\u002F10] RGMem：基于重正化群理论的语言智能体用户画像记忆演化。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.16392)]\n- [2025\u002F09] Mem-α：通过强化学习学习记忆构建。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2509.25911)]\n- [2025\u002F09] SGMem：面向长期对话智能体的句子图记忆。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.21212)]\n- [2025\u002F09] 
Nemori：受认知科学启发的自组织智能体记忆。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2508.03341)]\n- [2025\u002F09] MOOM：超长角色扮演对话中记忆的维护、组织与优化。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.11860)]\n- [2025\u002F09] 多重记忆系统以增强智能体的长期记忆。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2508.15294)]\n- [2025\u002F09] 代理记忆中的语义锚定：利用语言结构维持持续的对话上下文。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2508.12630)]\n- [2025\u002F09] ComoRAG：一种受认知启发、以记忆组织为核心的有状态长篇叙事推理框架。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2508.10419)]\n- [2025\u002F08] 通过经验驱动的终身学习构建自我演进智能体：一个框架与基准测试。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.19005)]\n- [2025\u002F08] 看、听、记、思：一款具备长期记忆的多模态智能体。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.09736)]\n- [2025\u002F08] Memory-R1：通过强化学习提升大型语言模型智能体管理和利用记忆的能力。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.19828)]\n- [2025\u002F08] 内在记忆智能体：通过结构化情境记忆实现异构多智能体LLM系统。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.08997)]\n- [2025\u002F07] MIRIX：面向基于LLM的智能体的多智能体记忆系统。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.07957)]\n- [2025\u002F07] 面向LLM智能体高效长期推理的分层记忆。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.22925)]\n- [2025\u002F06] G-Memory：追踪多智能体系统的分层记忆。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.07398)]\n- [2025\u002F06] 具身智能体与个性化相遇：探索记忆在个性化辅助中的应用。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2505.16348)]\n- [2025\u002F05] MemGuide：面向目标导向多轮LLM智能体的情意驱动记忆选择。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.20231)]\n- [2025\u002F05] 使用内部和外部知识对有限记忆语言模型进行预训练。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.15962)]\n- [2025\u002F05] 具身VideoAgent：来自第一人称视频和具身传感器的持久记忆实现了动态场景理解。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2501.00358)]\n- [2025\u002F04] Mem0：使用可扩展的长期记忆构建生产就绪的AI智能体。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.19413)]\n- [2025\u002F03] 展望与回顾：面向长期个性化对话智能体的反思式记忆管理。[[论文](https:\u002F\u002Faclanthology.org\u002F2025.acl-long.413\u002F)]
\n- [2025\u002F02] SeCom：关于个性化对话智能体的记忆构建与检索。[[论文](https:\u002F\u002Fopenreview.net\u002Fforum?id=xKDZAW0He3)]\n- [2025\u002F02] Zep：一种用于智能体记忆的时间知识图架构。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2501.13956)]\n- [2025\u002F02] R${}^3$Mem：通过可逆压缩连接记忆保持与检索。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.15957)]\n- [2025\u002F02] A-MEM：面向LLM智能体的代理记忆。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FARXIV.2502.12110)]\n- [2025\u002F02] 揭示LLM智能体记忆中的隐私风险。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.13172)]\n- [2025\u002F02] Mem2Ego：通过全局到第一人称的记忆赋能视觉-语言模型，实现长时程具身导航。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2502.14254)]\n- [2024\u002F12] AI PERSONA：迈向LLM的终身个性化。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.13103)]\n- [2024\u002F11] OASIS：拥有百万智能体的开放代理社交互动模拟。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.11581)]\n- [2024\u002F10] Video-RAG：视觉对齐的检索增强型长视频理解。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.13093)]\n- [2024\u002F10] Memolet：将用户-AI对话记忆的复用具体化。[[论文](https:\u002F\u002Fdoi.org\u002F10.1145\u002F3654777.3676388)]\n- [2024\u002F10] 从孤立对话到层次化模式：LLM的动态树形记忆表示。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.14052)]\n- [2024\u002F10] 通过内循环查询机制提升LLM的长上下文性能。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.12859)]\n- [2024\u002F09] 基于可编辑记忆图的检索增强生成打造个性化智能体。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.19401)]\n- [2024\u002F07] 受人类启发的情景记忆，适用于无限上下文的LLM。[[论文](https:\u002F\u002Fopenreview.net\u002Fforum?id=BI2int5SAC)]\n- [2024\u002F07] AriGraph：利用情景记忆为LLM智能体学习知识图世界模型。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.04363)]\n- [2024\u002F07] ChatHaruhi：通过大型语言模型将动漫角色复活于现实。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2308.09597)]\n- [2024\u002F07] 迈向具备上下文和时间敏感长期记忆的对话式智能体。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2406.00057)]\n- [2024\u002F06] 利用分层聚合树增强长期记忆，用于检索增强生成。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.06124)]\n- [2024\u002F06] 
通过基于时间线的记忆管理迈向终身对话智能体。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.10996)]\n- [2024\u002F05] HippoRAG：受神经生物学启发的大型语言模型长期记忆。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.14831)]\n- [2024\u002F05] 大型语言模型智能体之间的记忆共享。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2404.09982)]\n- [2024\u002F05] 知识图调优：基于人类反馈的实时大型语言模型个性化。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.19686)]\n- [2024\u002F04] 从局部到全局：一种基于图的RAG方法用于聚焦查询的摘要生成。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.16130)]\n- [2024\u002F03] Memoro：利用大型语言模型实现实时记忆增强的简洁界面。[[论文](https:\u002F\u002Fdoi.org\u002F10.1145\u002F3613904.3642450)]\n- [2023\u002F10] RoleLLM：大型语言模型角色扮演能力的基准测试、激发与提升。[[论文](https:\u002F\u002Fdoi.org\u002F10.18653\u002Fv1\u002F2024.findings-acl.878)]\n- [2023\u002F10] MemGPT：迈向将LLM作为操作系统。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.08560)]\n- [2023\u002F10] GameGPT：用于游戏开发的多智能体协作框架。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FARXIV.2310.08067)]\n- [2023\u002F10] Lyfe Agents：用于低成本实时社交互动的生成式智能体。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.02172)]\n- [2023\u002F08] CALYPSO：LLM作为地下城主的助手。[[论文](https:\u002F\u002Fdoi.org\u002F10.1609\u002Faiide.v19i1.27534)]\n- [2023\u002F08] MetaGPT：面向多智能体协作框架的元编程。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.00352)]\n- [2023\u002F08] 推荐AI智能体：整合大型语言模型实现交互式推荐。[[论文](https:\u002F\u002Fdoi.org\u002F10.1145\u002F3731446)]\n- [2023\u002F08] MemoChat：调优LLM使其能够利用备忘录进行一致的长程开放域对话。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.08239)]\n- [2023\u002F08] 递归摘要使大型语言模型具备长期对话记忆。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.15022)]\n- [2023\u002F07] MovieChat：从密集标记到稀疏记忆，用于长视频理解。[[论文](https:\u002F\u002Fdoi.org\u002F10.1109\u002FCVPR52733.2024.01725)]\n- [2023\u002F07] S${}^3$：一款由大型语言模型赋能智能体的社会网络模拟系统。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FARXIV.2307.14984)]\n- [2023\u002F05] 提示式LLM作为聊天机器人模块，用于长期开放域对话。[[论文](https:\u002F\u002Fdoi.org\u002F10.18653\u002Fv1\u002F2023.findings-acl.277)]\n- [2023\u002F05] 
RecurrentGPT：交互式生成任意长度文本。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.13304)]\n- [2023\u002F05] Memorybank：通过长期记忆增强大型语言模型。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.10250)]\n- [2023\u002F05] RET-LLM：迈向大型语言模型的通用读写记忆。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14322)]\n- [2023\u002F04] 生成式智能体：人类行为的交互式模拟。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.03442)]\n- [2023\u002F04] HuaTuo：用中文医学知识调优LLaMA模型。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.06975)]\n- [2023\u002F04] SCM：通过自我控制的记忆框架增强大型语言模型。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.13343)]\n\n#### 参数化\n- [2025\u002F10] MemLoRA：为设备端记忆系统蒸馏专家适配器。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.04763)]\n- [2025\u002F10] 基于层次化记忆的预训练：分离长尾知识与常用知识。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.02375)]\n- [2025\u002F08] 记忆解码器：一种用于大型语言模型的预训练即插即用型记忆。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.09874)]\n- [2025\u002F08] MLP记忆：利用检索器预训练的外部记忆进行语言建模。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2508.01832)]\n- [2024\u002F10] 通过将上下文整合到模型参数中实现自更新的大规模语言模型。[[论文](https:\u002F\u002Fopenreview.net\u002Fforum?id=aCPFCDL9QY)]\n- [2024\u002F10] AlphaEdit：面向语言模型的知识编辑，基于零空间约束。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.02355)]\n- [2024\u002F08] ELDER：利用LoRA混合体增强终身模型编辑能力。[[论文](https:\u002F\u002Fdoi.org\u002F10.1609\u002Faaai.v39i23.34622)]\n- [2024\u002F05] WISE：重新思考大型语言模型终身模型编辑中的知识记忆。[[论文](http:\u002F\u002Fpapers.nips.cc\u002Fpaper_files\u002Fpaper\u002F2024\u002Fhash\u002F60960ad78868fce5c165295fbd895060-Abstract-Conference.html)]\n- [2024\u002F03] 利用摊销上下文记忆对语言模型进行在线适应。[[论文](http:\u002F\u002Fpapers.nips.cc\u002Fpaper_files\u002Fpaper\u002F2024\u002Fhash\u002Feaf956b52bae51fbf387b8be4cc3ce18-Abstract-Conference.html)]\n- [2024\u002F01] 大型语言模型上知识编辑的邻域扰动。[[论文](https:\u002F\u002Fopenreview.net\u002Fforum?id=K9NTPRvVRI)]\n- [2023\u002F11] CharacterGLM：利用大型语言模型定制社交角色。[[论文](https:\u002F\u002Fdoi.org\u002F10.18653\u002Fv1\u002F2024.emnlp-industry.107)]
\n- [2023\u002F10] Character-LLM：一种可训练的角色扮演代理。[[论文](https:\u002F\u002Fdoi.org\u002F10.18653\u002Fv1\u002F2023.emnlp-main.814)]\n- [2021\u002F10] 大规模快速模型编辑。[[论文](https:\u002F\u002Fopenreview.net\u002Fforum?id=0DcZxeWfOPt)]\n- [2021\u002F04] 编辑语言模型中的事实性知识。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.08164)]\n- [2020\u002F02] K-Adapter：通过适配器将知识注入预训练模型。[[论文](https:\u002F\u002Fdoi.org\u002F10.18653\u002Fv1\u002F2021.findings-acl.121)]\n- [2013\u002F02] ELLA：一种高效的终身学习算法。[[论文](https:\u002F\u002Fproceedings.mlr.press\u002Fv28\u002Fruvolo13.html)]\n\n#### 隐变量\n- [2025\u002F09] 相似性–距离–幅度激活。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.12760)]\n- [2025\u002F08] 向视觉–语言模型的通用连续记忆迈进。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.17670)]\n- [2025\u002F03] M+：通过可扩展的长期记忆扩展MemoryLLM。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2502.00592)]\n- [2025\u002F02] R${}^3$Mem：通过可逆压缩连接记忆保持与检索。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.15957v1)]\n- [2024\u002F07] Memory${}^3$：具有显式记忆的语言建模。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2407.01178)]\n- [2024\u002F03] 协作式多智能体强化学习中高效利用情景记忆。[[论文](https:\u002F\u002Fopenreview.net\u002Fforum?id=LjivA1SLZ6)]\n- [2023\u002F10] Memoria：通过受人脑启发的记忆架构解决灾难性遗忘问题。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.03052)]\n- [2021\u002F12] 从全局标签中检测局部洞见：基于卷积分解的监督与零样本序列标注。[[论文](https:\u002F\u002Fdoi.org\u002F10.1162\u002Fcoli_a_00416)]\n\n\n\n### 经验记忆\n\n#### 令牌级别\n- [2026\u002F01] MemRL：基于情景记忆的运行时强化学习实现自我进化的智能体。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.03192)]\n- [2025\u002F12] MemEvolve：智能体记忆系统的元进化。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.18746)]\n- [2025\u002F12] 记住我，完善我：一种面向经验驱动型智能体进化的动态程序化记忆框架。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.10696)]\n- [2025\u002F12] 后见之明：构建能够保持、回忆并反思的记忆系统。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.12818)]\n- [2025\u002F11] 智能体上下文工程：为自我改进的语言模型进化上下文。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2510.04618)]\n- [2025\u002F11] 
FLEX：通过从经验中向前学习实现持续的智能体进化。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.06449)]\n- [2025\u002F11] 通过经验合成扩展智能体学习能力。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.03773)]\n- [2025\u002F11] UFO2：桌面版AgentOS。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2504.14603)]\n- [2025\u002F10] PRINCIPLES：用于主动对话智能体的合成策略记忆。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2509.17459)]\n- [2025\u002F10] 无训练的群体相对策略优化。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.08191)]\n- [2025\u002F10] ToolMem：利用可学习的工具能力记忆增强多模态智能体。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2510.06664)]\n- [2025\u002F10] H${}^2$R：面向多任务LLM智能体的层次化事后反思。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2509.12810)]\n- [2025\u002F10] BrowserAgent：构建具有受人类启发的网页浏览行为的网络智能体。[[论文](http:\u002F\u002Farxiv.org\u002Fabs\u002F2510.10666)]\n- [2025\u002F10] LEGOMem：用于工作流自动化的多智能体LLM系统中的模块化程序记忆。[[论文](http:\u002F\u002Farxiv.org\u002Fabs\u002F2510.04851)]\n- [2025\u002F10] Alita-G：用于生成智能体的自我进化的生成式智能体。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2510.23601)]\n- [2025\u002F09] ReasoningBank：借助推理记忆扩展智能体的自我进化能力。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.25140)]\n- [2025\u002F09] Memento：无需微调LLM即可微调LLM智能体。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2508.16153)]\n- [2025\u002F08] Memp：探索智能体的程序化记忆。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.06433)]\n- [2025\u002F08] SEAgent：具备自主经验学习能力的自我进化的计算机使用智能体。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.04700)]\n- [2025\u002F07] Agent KB：利用跨领域经验进行智能体问题解决。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.06229)]\n- [2025\u002F07] MemTool：优化LLM智能体多轮对话中动态工具调用的短期记忆管理。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.21428)]\n- [2025\u002F05] 达尔文哥德尔机器：自我改进型智能体的开放式进化。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2505.22954)]\n- [2025\u002F05] Alita：通用型智能体，能够在最小预定义和最大自我进化的情况下实现可扩展的智能体推理。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.20286)]\n- [2025\u002F05] 
SkillWeaver：网络智能体可通过发现和磨练技能实现自我改进。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2504.07079)]\n- [2025\u002F05] LearnAct：具有统一演示基准的少样本移动GUI智能体。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2504.13805)]\n- [2025\u002F05] 检索模型并不擅长工具：针对大型语言模型的工具检索基准测试。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2503.01763)]\n- [2025\u002F04] 动态速查表：利用自适应记忆进行测试时学习。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.07952)]\n- [2025\u002F04] 诱导智能体任务中的程序化技能。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.06821)]\n- [2025\u002F03] COLA：用于Windows UI任务自动化可扩展的多智能体框架。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2503.09263)]\n- [2025\u002F03] 面向LLM知识图谱推理的记忆增强查询重构。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.05193)]\n- [2025\u002F02] 从探索到精通：通过自主交互使LLM掌握工具的能力。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2410.08197)]\n- [2025\u002F02] 从RAG到记忆：大型语言模型的非参数持续学习。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.14802)]\n- [2024\u002F12] 基于想象的规划：视觉-语言导航中的情景模拟与情景记忆。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.01857)]\n- [2024\u002F10] RepairAgent：基于LLM的自主程序修复智能体。[[论文](http:\u002F\u002Farxiv.org\u002Fabs\u002F2403.17134)]\n- [2024\u002F09] SAGE：具备反思与记忆增强能力的自我进化的智能体。[[论文](https:\u002F\u002Fdoi.org\u002F10.1016\u002Fj.neucom.2025.130470)]\n- [2024\u002F07] 智能体工作流记忆。[[论文](https:\u002F\u002Fopenreview.net\u002Fforum?id=NTAhi2JEEE)]\n- [2024\u002F07] Fincon：一种合成的LLM多智能体系统，通过概念性言语强化提升金融决策能力。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.06567)]\n- [2024\u002F06] 思维缓冲区：大型语言模型的思维增强推理。[[论文](http:\u002F\u002Fpapers.nips.cc\u002Fpaper_files\u002Fpaper\u002F2024\u002Fhash\u002Fcde328b7bf6358f5ebb91fe9c539745e-Abstract-Conference.html)]\n- [2024\u002F05] COLT：迈向面向完整性的大型语言模型工具检索。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2405.16089)]\n- [2023\u002F11] JARVIS-1：具有记忆增强型多模态语言模型的开放世界多任务智能体。[[论文](https:\u002F\u002Fdoi.org\u002F10.1109\u002FTPAMI.2024.3511593)]\n- [2023\u002F08] 
RecMind：由大型语言模型驱动的推荐智能体。[[论文](https:\u002F\u002Fdoi.org\u002F10.18653\u002Fv1\u002F2024.findings-naacl.271)]\n- [2023\u002F08] ExpeL：LLM智能体是经验型学习者。[[论文](https:\u002F\u002Fdoi.org\u002F10.1609\u002Faaai.v38i17.29936)]\n- [2023\u002F07] ToolLLM：帮助大型语言模型掌握超过16000个真实世界的API。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.16789)]\n- [2023\u002F05] CREATOR：工具创建以解耦大型语言模型的抽象与具体推理。[[论文](https:\u002F\u002Fdoi.org\u002F10.18653\u002Fv1\u002F2023.findings-emnlp.462)]\n- [2023\u002F03] Reflexion：具有言语强化学习能力的语言智能体。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.11366)]\n- [2023\u002F02] Toolformer：语言模型可以自学如何使用工具。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.04761)]\n\n#### 参数级别\n- [2025\u002F11] AgentEvolver：迈向高效的自我进化的智能体系统。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.10395)]\n- [2025\u002F10] 基于早期经验的智能体学习。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.08558)]\n- [2025\u002F10] 通过持续预训练扩展智能体规模。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2509.13310)]\n- [2024\u002F10] ToolGen：通过生成实现统一的工具检索与调用。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.03439)]\n- [2023\u002F08] Retroformer：具有策略梯度优化功能的回顾性大型语言智能体。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.02151)]\n- [2023\u002F06] 一台同时具备短期、情景和语义记忆系统的机器。[[论文](https:\u002F\u002Fdoi.org\u002F10.1609\u002Faaai.v37i1.25075)]\n\n#### 隐变量级别\n\n- [2025\u002F11] 用于 GUI 代理的自动扩展连续记忆。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2510.09038)]\n\n\n\n### 工作记忆\n\n#### 令牌级别\n- [2026\u002F01] MemRL：基于情景记忆的运行时强化学习实现自我进化代理。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.03192)]\n- [2026\u002F01] 智能体记忆：为大型语言模型代理学习统一的长期与短期记忆管理。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.01885)]\n- [2025\u002F11] 记忆即行动：面向长时程智能体任务的自主上下文整理。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2510.12635)]\n- [2025\u002F11] IterResearch：通过马尔可夫状态重构重新思考长时程代理。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.07327)]\n- [2025\u002F11] MemSearcher：通过端到端强化学习训练 LLM 
进行推理、搜索和记忆管理。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2511.02805)]\n- [2025\u002F10] AgentFold：具有主动上下文管理的长时程网络代理。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.24699)]\n- [2025\u002F10] PRIME：集成规划与检索的记忆，用于增强推理能力。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2509.22315)]\n- [2025\u002F10] 上下文即记忆：基于记忆检索的场景一致交互式长视频生成。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2506.03141)]\n- [2025\u002F10] DeepAgent：具有可扩展工具集的通用推理代理。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2510.21618)]\n- [2025\u002F10] ACON：优化长时程 LLM 代理的上下文压缩。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2510.00615)]\n- [2025\u002F09] ReSum：通过上下文摘要解锁长时程搜索智能。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FARXIV.2509.13313)]\n- [2025\u002F08] Sculptor：通过主动上下文管理赋予 LLM 认知代理能力。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.04664)]\n- [2025\u002F07] MemAgent：基于多轮对话强化学习的记忆代理重塑长上下文 LLM。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.02259)]\n- [2024\u002F10] Agent S：一个像人类一样使用计算机的开放智能体框架。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.08164)]\n\n#### 参数级别\n- [2024\u002F05] 不同长度，恒定速度：利用闪电注意力实现高效语言建模。[[论文](https:\u002F\u002Fopenreview.net\u002Fforum?id=5wm6TiUP4X)]\n- [2024\u002F01] 带有注意力汇的高效流式语言模型。[[论文](https:\u002F\u002Fopenreview.net\u002Fforum?id=NG7sS51zVF)]\n\n#### 隐变量级别\n- [2025\u002F11] VisMem：潜在视觉记忆释放视觉-语言模型的潜力。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.11007)]\n- [2025\u002F09] MemGen：编织生成式隐变量记忆以支持自我进化代理。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.24704)]\n- [2025\u002F09] 冲突感知的软提示用于检索增强型生成。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2508.15253)]\n- [2025\u002F09] MemoryVLA：视觉-语言-动作模型中的感知-认知记忆，用于机器人操作。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2508.19236)]\n- [2025\u002F06] MEM1：学习协同记忆与推理，以构建高效的长时程代理。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.15841)]\n- [2025\u002F05] RazorAttention：通过检索头实现高效的 KV 
缓存压缩。[[论文](https:\u002F\u002Fopenreview.net\u002Fforum?id=tkiZQlL04w)]\n- [2025\u002F04] MemoRAG：借助全局记忆增强的检索增强技术提升长上下文处理能力。[[论文](https:\u002F\u002Fdoi.org\u002F10.1145\u002F3696410.3714805)]\n- [2025\u002F04] SnapKV：LLM 在生成之前就知道你在寻找什么。[[论文](http:\u002F\u002Fpapers.nips.cc\u002Fpaper_files\u002Fpaper\u002F2024\u002Fhash\u002F28ab418242603e0f7323e54185d19bde-Abstract-Conference.html)]\n- [2025\u002F03] LM2：大型记忆模型。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2502.06049)]\n- [2025\u002F02] SoftCoT：用于 LLM 高效推理的软思维链。[[论文](https:\u002F\u002Faclanthology.org\u002F2025.acl-long.1137\u002F)]\n- [2025\u002F02] Time-VLM：探索多模态视觉-语言模型在增强时间序列预测中的应用。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2502.04395)]\n- [2025\u002F02] Titans：在测试时学习记忆。[[论文](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2501.00663)]\n- [2024\u002F08] 利用长期记忆增强语言模型。[[论文](http:\u002F\u002Fpapers.nips.cc\u002Fpaper_files\u002Fpaper\u002F2023\u002Fhash\u002Febd82705f44793b6f9ade5a669d0f0bf-Abstract-Conference.html)]\n- [2024\u002F06] 深呼吸：利用哨兵标记提升大型语言模型的语言建模能力。[[论文](https:\u002F\u002Fdoi.org\u002F10.18653\u002Fv1\u002F2024.findings-emnlp.233)]\n- [2024\u002F04] 适应性语言模型以压缩上下文。[[论文](https:\u002F\u002Fdoi.org\u002F10.18653\u002Fv1\u002F2023.emnlp-main.232)]\n- [2024\u002F03] 学习用要点标记压缩提示。[[论文](http:\u002F\u002Fpapers.nips.cc\u002Fpaper_files\u002Fpaper\u002F2023\u002Fhash\u002F3d77c6dcc7f143aa2154e7f4d5e22d68-Abstract-Conference.html)]\n- [2024\u002F03] 剪刀手：利用重要性假设的持久性在测试时压缩 LLM 的 KV 缓存。[[论文](http:\u002F\u002Fpapers.nips.cc\u002Fpaper_files\u002Fpaper\u002F2023\u002Fhash\u002Fa452a7c6c463e4ae8fbdc614c6e983e6-Abstract-Conference.html)]\n- [2024\u002F03] 聚焦变压器：用于上下文缩放的对比训练。[[论文](http:\u002F\u002Fpapers.nips.cc\u002Fpaper_files\u002Fpaper\u002F2023\u002Fhash\u002F8511d06d5590f4bda24d42087802cc81-Abstract-Conference.html)]\n- [2023\u002F07] 大型语言模型中用于上下文压缩的上下文自编码器。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.06945)]\n- [2023\u002F06] 
H2O：大型语言模型高效生成推理的重击者预言机。[[论文](http:\u002F\u002Fpapers.nips.cc\u002Fpaper_files\u002Fpaper\u002F2023\u002Fhash\u002F6ceefa7b15572587b78ecfcebb2827f8-Abstract-Conference.html)]\n- [2022\u002F08] 记忆化变压器。[[论文](https:\u002F\u002Fopenreview.net\u002Fforum?id=TrjbxzRcnf-)]\n- [2022\u002F07] XMem：基于阿特金森-希夫林记忆模型的长期视频目标分割。[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.07115)]\n\n## 📖 引用\n\n如果您觉得本仓库对您有所帮助，我们非常感谢您能引用我们的论文：\n\n```bibtex\n@article{DBLP:journals\u002Fcorr\u002Fabs-2512-13564,\n  author       = {Yuyang Hu and Shichun Liu and Yanwei Yue and Guibin Zhang and Boyang Liu and Fangyi Zhu and Jiahang Lin and Honglin Guo and Shihan Dou and Zhiheng Xi and Senjie Jin and Jiejun Tan and Yanbin Yin and Jiongnan Liu and Zeyu Zhang and Zhongxiang Sun and Yutao Zhu and Hao Sun and Boci Peng and Zhenrong Cheng and Xuanbo Fan and Jiaxin Guo and Xinlei Yu and Zhenhong Zhou and Zewen Hu and Jiahao Huo and Junhao Wang and Yuwei Niu and Yu Wang and Zhenfei Yin and Xiaobin Hu and Yue Liao and Qiankun Li and Kun Wang and Wangchunshu Zhou and Yixin Liu and Dawei Cheng and Qi Zhang and Tao Gui and Shirui Pan and Yan Zhang and Philip Torr and Zhicheng Dou and Ji{-}Rong Wen and Xuanjing Huang and Yu{-}Gang Jiang and Shuicheng Yan},\n  title        = {人工智能代理时代的记忆},\n  journal      = {CoRR},\n  volume       = {abs\u002F2512.13564},\n  year         = {2025},\n  url          = {https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2512.13564},\n  doi          = {10.48550\u002FARXIV.2512.13564},\n  eprinttype    = {arXiv},\n  eprint       = {2512.13564},\n  timestamp    = {Mon, 26 Jan 2026 16:10:18 +0100},\n  biburl       = {https:\u002F\u002Fdblp.org\u002Frec\u002Fjournals\u002Fcorr\u002Fabs-2512-13564.bib},\n  bibsource    = {dblp computer science bibliography, https:\u002F\u002Fdblp.org}\n}\n```\n\n## ⭐️ 星标历史\n\n[![星标历史图表](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShichun-Liu_Agent-Memory-Paper-List_readme_3674d16ffb3d.png)](https:\u002F\u002Fwww.star-history.com\u002F#Shichun-Liu\u002FAgent-Memory-Paper-List?type=date&legend=top-left)","# Agent-Memory-Paper-List 
快速上手指南\n\n`Agent-Memory-Paper-List` 并非一个需要安装运行的软件库，而是一个持续更新的**学术论文清单资源库**。它旨在梳理 AI Agent 记忆领域的研究现状，提供统一的分类框架（形式、功能、动态），并收录相关前沿论文。\n\n开发者无需配置复杂环境，只需通过 Git 克隆仓库即可获取最新的论文列表和综述内容。\n\n## 环境准备\n\n本项目主要包含 Markdown 文档、论文链接及示意图，对系统无特殊要求。\n\n- **操作系统**：Windows \u002F macOS \u002F Linux\n- **前置依赖**：\n  - `Git`：用于克隆代码仓库\n  - 浏览器：用于访问论文链接（arXiv, Hugging Face 等）\n\n> **提示**：国内用户建议配置 Git 加速或使用镜像源，以提升克隆速度。\n\n## 安装步骤\n\n通过以下命令将仓库克隆到本地：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FShichun-Liu\u002FAgent-Memory-Paper-List.git\n```\n\n进入项目目录：\n\n```bash\ncd Agent-Memory-Paper-List\n```\n\n> **国内加速方案**：如果直接克隆速度慢，可使用国内镜像源（如 Gitee 镜像，若有）或设置 Git 代理：\n> ```bash\n> git clone https:\u002F\u002Fgitee.com\u002Fmirror\u002FAgent-Memory-Paper-List.git # 示例，具体地址需确认是否有同步镜像\n> # 或者使用通用加速技巧\n> git clone https:\u002F\u002Fgithub.com\u002FShichun-Liu\u002FAgent-Memory-Paper-List.git --depth 1\n> ```\n\n## 基本使用\n\n克隆完成后，您可以直接在本地查看整理好的论文列表和综述图表。\n\n### 1. 浏览论文列表\n打开项目根目录下的 `README.md` 文件，即可看到按**事实记忆 (Factual Memory)**、**经验记忆**等分类的最新论文清单。每篇论文均附带标题、发布日期及 arXiv\u002FPDF 链接。\n\n例如，查找关于“多模态记忆”的论文，可在文件中搜索 `Multimodal`，快速定位如 `MemVerse` 或 `WorldMM` 等最新研究。\n\n### 2. 查看统一分类框架\n项目中包含核心概念图（位于 `assets\u002F` 目录），展示了 Agent Memory 的统一分类法：\n- **Forms (形式)**：Token 级、参数级、潜在状态\n- **Functions (功能)**：事实性、经验性、工作记忆\n- **Dynamics (动态)**：形成、演化、检索\n\n您可以在支持 Markdown 预览的编辑器（如 VS Code）中打开 `README.md` 直接查看图表，或直接访问在线仓库页面阅读。\n\n### 3. 
追踪最新进展\n由于该列表持续更新，建议定期拉取最新内容：\n\n```bash\ngit pull origin main\n```\n\n通过此方式，您可以随时掌握 AI Agent 记忆领域的最新突破，为研发长程推理、个性化交互或自进化 Agent 系统提供理论支撑。","某 AI 初创公司的算法团队正在研发一款具备长期陪伴能力的个人助理 Agent，急需构建高效的记忆模块以支持跨天对话和个性化服务。\n\n### 没有 Agent-Memory-Paper-List 时\n- **概念混淆严重**：团队成员常将 RAG（检索增强生成）、上下文工程与真正的 Agent 记忆混为一谈，导致技术选型错误，架构设计偏离核心需求。\n- **文献调研低效**：面对碎片化的学术论文，研究人员需花费数周手动筛选，难以区分基于 Token、参数还是隐状态的记忆形式，进度严重滞后。\n- **缺乏统一视角**：由于缺少对记忆“形成、演化、检索”全生命周期的系统梳理，开发的记忆模块只能存储事实，无法实现经验提炼或动态遗忘，智能体显得僵化。\n- **重复造轮子**：因不了解最新的“体验式记忆”或“工作记忆”研究成果，团队耗费大量资源复现了已被证明效果有限的旧方案。\n\n### 使用 Agent-Memory-Paper-List 后\n- **概念清晰界定**：借助其统一分类法，团队迅速厘清 Agent Memory 与 RAG 的边界，精准锁定了适合长程推理的“隐状态记忆”技术路线。\n- **调研效率倍增**：直接利用按“形式、功能、动态”三维结构整理的论文列表，半天内便完成了从基础理论到 SOTA（最先进）方案的全面摸底。\n- **架构设计升级**：参考列表中关于记忆演化机制的前沿论文，团队成功引入了动态巩固与遗忘策略，使 Agent 能像人类一样从交互中提炼技能并更新认知。\n- **创新起点提高**：基于最新的“事件中心记忆”等成果，团队避开了过时方法，直接在最新研究基础上进行微调创新，大幅缩短了研发周期。\n\nAgent-Memory-Paper-List 通过提供结构化的知识图谱，帮助开发者从混乱的学术海洋中快速提炼出构建高智能 Agent 所需的核心记忆范式。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShichun-Liu_Agent-Memory-Paper-List_84776f3e.png","Shichun-Liu","Liusc","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FShichun-Liu_9b8b5d90.png",null,"Fudan University","https:\u002F\u002Fshichun-liu.github.io\u002F","https:\u002F\u002Fgithub.com\u002FShichun-Liu",1838,82,"2026-04-19T22:25:57","MIT","","未说明",{"notes":87,"python":85,"dependencies":88},"该项目是一个论文列表仓库（Paper List），用于整理和展示关于 AI Agent 记忆领域的综述论文及相关研究，并非可执行的软件工具或模型代码库，因此没有特定的操作系统、GPU、内存、Python 版本或依赖库要求。用户只需通过浏览器访问链接或在本地查看 Markdown 文件即可使用。",[],[13],[91,92],"agent","memory","2026-03-27T02:49:30.150509","2026-04-20T16:31:19.769460",[96,101,106,111,116,121],{"id":97,"question_zh":98,"answer_zh":99,"source_url":100},44960,"发现列表中存在重复的论文条目该怎么办？","请直接提交一个 Issue 指出重复条目的具体位置（例如在 README.md 中的行号）以及重复的论文名称。维护者在核实后，会在后续的更新中移除重复项，保持列表的整洁和准确性。","https:\u002F\u002Fgithub.com\u002FShichun-Liu\u002FAgent-Memory-Paper-List\u002Fissues\u002F9",{"id":102,"question_zh":103,"answer_zh":104,"source_url":105},44956,"如何请求将我的相关论文添加到综述文章或 
GitHub 仓库中？","您可以直接在 GitHub 仓库中提交一个新的 Issue。在 Issue 正文中，请提供论文的标题、arXiv 链接（或会议论文链接）、GitHub 代码库地址（如有），并附上简短的描述，说明该工作如何与智能体记忆（Agent Memory）主题相关。维护者通常会在双周更新周期内审查您的请求，如果符合主题，他们会将其添加到综述正文及仓库列表中。","https:\u002F\u002Fgithub.com\u002FShichun-Liu\u002FAgent-Memory-Paper-List\u002Fissues\u002F4",{"id":107,"question_zh":108,"answer_zh":109,"source_url":110},44957,"如果发现综述文章或列表中存在引用错误，应该如何反馈？","请提交一个 Issue 明确指出错误的具体位置（例如页码、表格编号或行号），并提供正确的引用信息或截图证据。维护者确认后会感谢您的指正，并承诺在下一个修订版本中更正该引用错误。","https:\u002F\u002Fgithub.com\u002FShichun-Liu\u002FAgent-Memory-Paper-List\u002Fissues\u002F12",{"id":112,"question_zh":113,"answer_zh":114,"source_url":115},44958,"提交论文添加请求后，通常需要多久才能被收录？","该项目目前维持每两周（bi-weekly）更新一次的频率。维护者会在收到请求后仔细审查相关工作，并计划在接下来的双周更新中将其纳入综述文本和 GitHub 仓库。建议您定期查看后续版本以确认收录情况。","https:\u002F\u002Fgithub.com\u002FShichun-Liu\u002FAgent-Memory-Paper-List\u002Fissues\u002F6",{"id":117,"question_zh":118,"answer_zh":119,"source_url":120},44959,"我想推荐的项目似乎已经被收录了，该如何确认和处理？","在提交新请求前，建议先检查 README 文件和综述论文的最新版本。如果您发现您的工作（如 MAGMA）实际上已经被包含在列表中，您可以自行关闭该 Issue，或者在评论中说明“已确认收录”并关闭问题，以避免重复劳动。","https:\u002F\u002Fgithub.com\u002FShichun-Liu\u002FAgent-Memory-Paper-List\u002Fissues\u002F17",{"id":122,"question_zh":123,"answer_zh":124,"source_url":125},44961,"什么样的研究工作适合被添加到这个智能体记忆论文列表中？","适合收录的工作应紧密围绕“智能体记忆”（Agent Memory）这一主题。这包括但不限于：多模态记忆框架（如结合文本和视觉记忆）、长视频推理中的动态记忆机制、事实性记忆的检索与保留技术、基于外部工具的记忆增强系统（如 Video-RAG），以及具有高效记忆管理策略（如时间衰减、用户画像构建）的系统。如果您的工作解决了长上下文场景下的记忆压缩、检索精度或记忆演化问题，通常都会被考虑收录。","https:\u002F\u002Fgithub.com\u002FShichun-Liu\u002FAgent-Memory-Paper-List\u002Fissues\u002F5",[]]