[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-kingjulio8238--Memary":3,"tool-kingjulio8238--Memary":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",151918,2,"2026-04-12T11:33:05",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":77,"owner_email":76,"owner_twitter":76,"owner_website":76,"owner_url":78,"languages":79,"stars":88,"forks":89,"last_commit_at":90,"license":91,"difficulty_score":92,"env_os":93,"env_gpu":94,"env_ram":95,"env_deps":96,"category_tags":107,"github_topics":108,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":115,"updated_at":116,"faqs":117,"releases":148},6895,"kingjulio8238\u002FMemary","Memary","The Open Source Memory Layer For Autonomous Agents","Memary 是一款专为自主智能体（Autonomous Agents）打造的开源记忆层工具。它旨在解决当前 AI 智能体缺乏长期记忆和上下文连贯性的痛点，通过模拟人类的记忆机制，让智能体能够像人一样存储、检索并利用过往经验来辅助推理和决策，从而向通用人工智能（AGI）迈出关键一步。\n\n这款工具特别适合 AI 开发者、研究人员以及希望构建具备持续学习能力的智能体应用的工程师使用。Memary 的核心亮点在于其灵活的架构设计：它不仅支持本地部署的模型（如通过 Ollama 运行的 Llama 3 和 LLaVA），也兼容 OpenAI 等云端服务，允许用户轻松切换不同模型。此外，它内置了对 FalkorDB 和 Neo4j 等图数据库的支持，能够高效地管理复杂的记忆关联。通过简单的配置，开发者即可为智能体赋予“记住”过往交互、理解用户画像并据此优化任务执行的能力，是构建高智商、拟人化 AI 代理的理想基础设施。","\u003Cp 
align=\"center\">\n  \u003Cimg alt=\"memary_logo\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkingjulio8238_Memary_readme_4f88572b3df4.png\">\n\u003C\u002Fp>\n\n[![LinkedIn](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-LinkedIn-blue?style=flat&logo=linkedin&labelColor=blue)](https:\u002F\u002Fwww.linkedin.com\u002Fcompany\u002Fmemary\u002F)\n[![Follow](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FFollow_on_X-000000?style=flat-square&logo=x&logoColor=white)](https:\u002F\u002Fx.com\u002Fmemary_labs)\n[![Documentation](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDocumentation-memary-428BCA?style=flat&logo=open-book)](https:\u002F\u002Fkingjulio8238.github.io\u002Fmemarydocs\u002F)\n[![Demo](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FWatch-Demo-red?logo=youtube)](https:\u002F\u002Fyoutu.be\u002FGnUU3_xK6bg)\n[![PyPI](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fmemary.svg?style=flat&color=orange)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fmemary\u002F)\n[![Downloads](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fdm\u002Fmemary.svg?style=flat&label=Downloads)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fmemary\u002F)\n[![Last Commit](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flast-commit\u002Fkingjulio8238\u002Fmemary.svg?style=flat&color=blue)](https:\u002F\u002Fgithub.com\u002Fkingjulio8238\u002Fmemary)\n[![License](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-MIT-yellow.svg)](https:\u002F\u002Fopensource.org\u002Flicenses\u002FMIT)\n\n\n## Manage Your Agent Memories\n\nAgents promote human-type reasoning and are a great advancement towards building AGI and understanding ourselves as humans. Memory is a key component of how humans approach tasks and should be weighted the same when building AI agents. **memary emulates human memory to advance these agents.**\n\n## Quickstart 🏁\n\n### Install memary \n1. 
With pip:\n   \nMake sure you are running python version \u003C= 3.11.9, then run \n```\npip install memary\n```\n\n2. Locally:\n   \ni. Create a virtual environment with the python version set as specified above \n\nii. Install python dependencies: \n```\npip install -r requirements.txt\n```\n### Specify Models Used \nAt the time of writing, memary assumes installation of local models and we currently support all models available through **Ollama**:\n\n- LLM running locally using Ollama (`Llama 3 8B\u002F40B` as suggested defaults) **OR** `gpt-3.5-turbo`\n- Vision model running locally using Ollama (`LLaVA` as suggested default) **OR** `gpt-4-vision-preview`\n\nmemary will default to the locally run models unless explicitly specified. Additionally, memary allows developers to **easily switch between downloaded models**. \n\n### Run memary\n**Steps**\n1. [Optional] If running models locally using Ollama, follow the instructions in this [repo](https:\u002F\u002Fgithub.com\u002Follama\u002Follama).\n\n2. Ensure that a `.env` exists with any necessary credentials.\n\n   \u003Cdetails>\n     \u003Csummary>.env\u003C\u002Fsummary>\n  \n   ```\n   OPENAI_API_KEY=\"YOUR_API_KEY\"\n   PERPLEXITY_API_KEY=\"YOUR_API_KEY\"\n   GOOGLEMAPS_API_KEY=\"YOUR_API_KEY\"\n   ALPHA_VANTAGE_API_KEY=\"YOUR_API_KEY\"\n   \n   Database usage (see API info):\n   FALKORDB_URL=\"falkor:\u002F\u002F[[username]:[password]]@[falkor_host_url]:port\"\n   or\n   NEO4J_PW=\"YOUR_NEO4J_PW\"\n   NEO4J_URL=\"YOUR_NEO4J_URL\"\n   ```\n  \n   \u003C\u002Fdetails>\n   \n\n3. Fetch API credentials:\n   \u003Cdetails>\n     \u003Csummary>API Info\u003C\u002Fsummary>\n\n    - [**OpenAI key**](https:\u002F\u002Fopenai.com\u002Findex\u002Fopenai-api)\n    - [**FalkorDB**](https:\u002F\u002Fapp.falkordb.cloud\u002F)\n      - Login &rarr; Click 'Subscribe' &rarr; Create a free instance on the Dashboard &rarr; use the credentials (username, password, falkor_host_url and port).  
\n    - [**Neo4j**](https:\u002F\u002Fneo4j.com\u002Fcloud\u002Fplatform\u002Faura-graph-database\u002F?ref=nav-get-started-cta)\n      - Click 'Start for free' &rarr; Create a free instance &rarr; Open the auto-downloaded txt file and use the credentials\n    - [**Perplexity key**](https:\u002F\u002Fwww.perplexity.ai\u002Fsettings\u002Fapi)\n    - [**Google Maps**](https:\u002F\u002Fconsole.cloud.google.com\u002Fapis\u002Fcredentials)\n      - Keys are generated in the 'Credentials' page of the 'APIs & Services' tab of Google Cloud Console\n    - [Alpha Vantage](https:\u002F\u002Fwww.alphavantage.co\u002Fsupport\u002F#api-key)\n      - Recommended to use https:\u002F\u002F10minutemail.com\u002F to generate a temporary email to use\n    \n    \u003C\u002Fdetails>\n\n4. Update the user persona in `streamlit_app\u002Fdata\u002Fuser_persona.txt` using the template in `streamlit_app\u002Fdata\u002Fuser_persona_template.txt`. Instructions have been provided - replace the curly brackets with relevant information. \n\n5. [Optional] Update the system persona, if needed, in `streamlit_app\u002Fdata\u002Fsystem_persona.txt`.\n\n6. [Optional] Multi Graphs - Users who are using FalkorDB can generate multiple graphs and switch between their IDs, which correspond to different agents. This enables seamless transitions and management of different agents' memory and knowledge contexts.\n\n7. 
Run:\n\n```\ncd streamlit_app\nstreamlit run app.py\n```\n\n## Basic Usage\n```python\nfrom memary.agent.chat_agent import ChatAgent\n\nsystem_persona_txt = \"data\u002Fsystem_persona.txt\"\nuser_persona_txt = \"data\u002Fuser_persona.txt\"\npast_chat_json = \"data\u002Fpast_chat.json\"\nmemory_stream_json = \"data\u002Fmemory_stream.json\"\nentity_knowledge_store_json = \"data\u002Fentity_knowledge_store.json\"\nchat_agent = ChatAgent(\n    \"Personal Agent\",\n    memory_stream_json,\n    entity_knowledge_store_json,\n    system_persona_txt,\n    user_persona_txt,\n    past_chat_json,\n)\n```\nPass in subset of `['search', 'vision', 'locate', 'stocks']` as `include_from_defaults` for different set of default tools upon initialization.\n\n### Multi-Graph\nWhen using FalkorDB database, you can create multi-agents. Here is an example of how to set up personal agents for different users:\n\n```python\n# User A personal agent\nchat_agent_user_a = ChatAgent(\n    \"Personal Agent\",\n    memory_stream_json_user_a,\n    entity_knowledge_store_json_user_a,\n    system_persona_txt_user_a,\n    user_persona_txt_user_a,\n    past_chat_json_user_a,\n    user_id='user_a_id'\n)\n\n# User B personal agent\nchat_agent_user_b = ChatAgent(\n    \"Personal Agent\",\n    memory_stream_json_user_b,\n    entity_knowledge_store_json_user_b,\n    system_persona_txt_user_b,\n    user_persona_txt_user_b,\n    past_chat_json_user_b,\n    user_id='user_b_id'\n)\n```\n\n### Adding Custom Tools\n```python\ndef multiply(a: int, b: int) -> int:\n    \"\"\"Multiply two integers and returns the result integer\"\"\"\n    return a * b\n\nchat_agent.add_tool({\"multiply\": multiply})\n```\nMore information about creating custom tools for the LlamaIndex ReAct Agent  can be found [here](https:\u002F\u002Fdocs.llamaindex.ai\u002Fen\u002Fstable\u002Fexamples\u002Fagent\u002Freact_agent\u002F).\n\n### Removing Custom Tools\n```python\nchat_agent.remove_tool(\"multiply\")\n```\n\n## Core Concepts 🧪\nThe 
current structure of memary is detailed in the diagram below.\n\n\u003Cimg width=\"1410\" alt=\"memary overview\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkingjulio8238_Memary_readme_e65131197d35.png\">\n\nAt the time of writing, the routing agent, knowledge graph, and memory module in the above system design are all integrated into the `ChatAgent` class located in the `src\u002Fagent` directory.\n\nRaw source code for these components can also be found in their respective directories, including benchmarks, notebooks, and updates.\n\n### Principles \nmemary integrates into your existing agents with as little developer implementation as possible. We achieve this by sticking to a few principles. \n\n- Auto-generated Memory \n    - After initializing memary, agent memory automatically updates as the agent interacts. This type of generation allows us to capture all memories to easily display in your dashboard. Additionally, we allow the combination of databases with little or no code! \n\n- Memory Modules \n    - Given a current state of the databases, memary tracks users' preferences, which are displayed in your dashboard for analysis. \n\n- System Improvement \n    - memary mimics how human memory evolves and learns over time. We will provide the rate of your agent's improvement in your dashboard. \n\n- Rewind Memories \n    - memary keeps track of all chats so you can rewind agent executions and access the agent's memory at a certain period (coming soon).\n\n### Agent\n\n\u003Cimg alt=\"routing agent\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkingjulio8238_Memary_readme_fb8e2606ff8b.png\">\n\nTo give developers who don't have existing agents access to memary, we set up a simple agent implementation. We use the [ReAct](https:\u002F\u002Freact-lm.github.io\u002F) agent to plan and execute a query given the tools provided. 
\n\nWhile we didn't emphasize equipping the agent with many tools, the **search tool is crucial to retrieve information from the knowledge graph**. This tool queries the knowledge graph for a response based on existing nodes and executes an external search if no related entities exist. Other default agent tools include computer vision powered by LLaVA and a location tool using geocoder and Google Maps. \n\nNote: In future version releases, the current ReAct agent (that was used for demo purposes) will be removed from the package so that **memary can support any type of agent from any provider**. \n\n``` py title=\"external_query\" hl_lines=\"1\"\ndef external_query(self, query: str):\n    messages_dict = [\n        {\"role\": \"system\", \"content\": \"Be precise and concise.\"},\n        {\"role\": \"user\", \"content\": query},\n    ]\n    messages = [ChatMessage(**msg) for msg in messages_dict]\n    external_response = self.query_llm.chat(messages)\n\n    return str(external_response)\n```\n\n``` py title=\"search\" hl_lines=\"1\"\ndef search(self, query: str) -> str:\n    response = self.query_engine.query(query)\n\n    if response.metadata is None:\n        return self.external_query(query)\n    else:\n        return response\n```\n\n### Knowledge Graphs\n\n![KG diagram](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkingjulio8238_Memary_readme_f74600d98a9d.png)\n\n#### Knowledge Graphs ↔ LLMs\n- memary uses a graph database to store knowledge.\n- Llama Index was used to add nodes into the graph store based on documents.\n- Perplexity (mistral-7b-instruct model) was used for external queries.\n\n#### Knowledge Graph Use Cases\n- Inject the final agent responses into existing KGs.\n- memary uses a [recursive](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2401.18059.pdf) retrieval approach to search the KG, which involves determining what the key entities are in the query, building a subgraph of those entities with a maximum depth of 2 away, and finally 
using that subgraph to build up the context.\n- When faced with multiple key entities in a query, memary uses [multi-hop](https:\u002F\u002Fneo4j.com\u002Fdeveloper-blog\u002Fknowledge-graphs-llms-multi-hop-question-answering\u002F) reasoning to join multiple subgraphs into a larger subgraph to search through.\n- These techniques reduce latency compared to searching the entire knowledge graph at once.\n\n``` py title=\"store in KG\" hl_lines=\"1\"\ndef query(self, query: str) -> str:\n        # get the response from react agent\n        response = self.routing_agent.chat(query)\n        self.routing_agent.reset()\n        # write response to file for KG writeback\n        with open(\"data\u002Fexternal_response.txt\", \"w\") as f:\n            print(response, file=f)\n        # write back to the KG\n        self.write_back()\n        return response\n```\n\n``` py title=\"recursive retrieval\" hl_lines=\"1\"\ndef check_KG(self, query: str) -> bool:\n        \"\"\"Check if the query is in the knowledge graph.\n\n        Args:\n            query (str): query to check in the knowledge graph\n\n        Returns:\n            bool: True if the query is in the knowledge graph, False otherwise\n        \"\"\"\n        response = self.query_engine.query(query)\n\n        if response.metadata is None:\n            return False\n        return generate_string(\n            list(list(response.metadata.values())[0][\"kg_rel_map\"].keys())\n        )\n```\n\n### Memory Modules\n\n![Memory Module](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkingjulio8238_Memary_readme_22e0184f4607.png)\n\nThe memory module comprises the **Memory Stream and Entity Knowledge Store.** The memory module was influenced by the design of [K-LaMP](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2311.06318.pdf) proposed by Microsoft Research.\n\n#### Memory Stream \nThe Memory Stream captures all entities inserted into the KG and their associated timestamps. 
This stream reflects the **breadth of the users' knowledge**, i.e., concepts users have had exposure to but no depth of exposure is inferred.\n- Timeline Analysis: Map out a timeline of interactions, highlighting moments of high engagement or shifts in topic focus. This helps in understanding the evolution of the user's interests over time.\n\n``` py title=\"add to memory stream\" hl_lines=\"1\"\ndef add_memory(self, entities):\n        self.memory.extend([\n            MemoryItem(str(entity),\n                       datetime.now().replace(microsecond=0))\n            for entity in entities\n        ])\n```\n\n- Extract Themes: Look for recurring themes or topics within the interactions. This thematic analysis can help anticipate user interests or questions even before they are explicitly stated.\n\n``` py title=\"retrieve from memory stream\" hl_lines=\"1\"\ndef get_memory(self) -> list[MemoryItem]:\n        return self.memory\n```\n\n#### Entity Knowledge Store \nThe Entity Knowledge Store tracks the frequency and recency of references to each entity stored in the memory stream. This knowledge store reflects **users' depth of knowledge**, i.e., concepts they are more familiar with than others.\n- Rank Entities by Relevance: Use both frequency and recency to rank entities. 
An entity frequently mentioned (high count) and referenced recently is likely of high importance, and the user is well aware of this concept.\n\n``` py title=\"select most relevant entities\" hl_lines=\"1\"\ndef _select_top_entities(self):\n        entity_knowledge_store = self.message.llm_message['knowledge_entity_store']\n        entities = [entity.to_dict() for entity in entity_knowledge_store]\n        entity_counts = [entity['count'] for entity in entities]\n        top_indexes = np.argsort(entity_counts)[:TOP_ENTITIES]\n        return [entities[index] for index in top_indexes]\n```\n\n- Categorize Entities: Group entities into categories based on their nature or the context in which they're mentioned (e.g., technical terms, personal interests). This categorization aids in quickly accessing relevant information tailored to the user's inquiries.\n\n``` py title=\"group entities\" hl_lines=\"1\"\ndef _convert_memory_to_knowledge_memory(\n            self, memory_stream: list) -> list[KnowledgeMemoryItem]:\n        \"\"\"Converts memory from memory stream to entity knowledge store by grouping entities \n\n        Returns:\n            knowledge_memory (list): list of KnowledgeMemoryItem\n        \"\"\"\n        knowledge_memory = []\n\n        entities = set([item.entity for item in memory_stream])\n        for entity in entities:\n            memory_dates = [\n                item.date for item in memory_stream if item.entity == entity\n            ]\n            knowledge_memory.append(\n                KnowledgeMemoryItem(entity, len(memory_dates),\n                                    max(memory_dates)))\n        return knowledge_memory\n```\n\n- Highlight Changes Over Time: Identify any significant changes in the entities' ranking or categorization over time. 
A shift in the most frequently mentioned entities could indicate a change in the user's interests or knowledge.\n- Additional information on the memory module can be found [here](https:\u002F\u002Fgithub.com\u002Fseyeong-han\u002FKnowledgeGraphRAG).\n\n![Memory Compression](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkingjulio8238_Memary_readme_2c3170e7a618.png)\n\n### New Context Window \n![New_Context_Window](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkingjulio8238_Memary_readme_2e5c76716945.png)\n\nNote: We utilize the key categorized entities and themes associated with users to tailor agent responses more closely to the user's current interests\u002Fpreferences and knowledge level\u002Fexpertise. The new context window is made up of the following: \n\n- Agent response \n``` py title=\"retrieve agent response\" hl_lines=\"1\"\ndef get_routing_agent_response(self, query, return_entity=False):\n        \"\"\"Get response from the ReAct agent.\"\"\"\n        response = \"\"\n        if self.debug:\n            # writes ReAct agent steps to separate file and modifies format to be readable in .txt file\n            with open(\"data\u002Frouting_response.txt\", \"w\") as f:\n                orig_stdout = sys.stdout\n                sys.stdout = f\n                response = str(self.query(query))\n                sys.stdout.flush()\n                sys.stdout = orig_stdout\n            text = \"\"\n            with open(\"data\u002Frouting_response.txt\", \"r\") as f:\n                text = f.read()\n\n            plain = ansi_strip(text)\n            with open(\"data\u002Frouting_response.txt\", \"w\") as f:\n                f.write(plain)\n        else:\n            response = str(self.query(query))\n\n        if return_entity:\n            # the query above already adds final response to KG so entities will be present in the KG\n            return response, self.get_entity(self.query_engine.retrieve(query))\n        return response\n```\n\n- Most 
relevant entities \n``` py title=\"retrieve important entities\" hl_lines=\"1\"\ndef get_entity(self, retrieve) -> list[str]:\n        \"\"\"retrieve is a list of QueryBundle objects.\n        A retrieved QueryBundle object has a \"node\" attribute,\n        which has a \"metadata\" attribute.\n\n        example for \"kg_rel_map\":\n        kg_rel_map = {\n            'Harry': [['DREAMED_OF', 'Unknown relation'], ['FELL_HARD_ON', 'Concrete floor']],\n            'Potter': [['WORE', 'Round glasses'], ['HAD', 'Dream']]\n        }\n\n        Args:\n            retrieve (list[NodeWithScore]): list of NodeWithScore objects\n        return:\n            list[str]: list of string entities\n        \"\"\"\n\n        entities = []\n        kg_rel_map = retrieve[0].node.metadata[\"kg_rel_map\"]\n        for key, items in kg_rel_map.items():\n            # key is the entity of question\n            entities.append(key)\n            # items is a list of [relationship, entity]\n            entities.extend(item[1] for item in items)\n            if len(entities) > MAX_ENTITIES_FROM_KG:\n                break\n        entities = list(set(entities))\n        for exceptions in ENTITY_EXCEPTIONS:\n            if exceptions in entities:\n                entities.remove(exceptions)\n        return entities\n```\n\n- Chat history (summarized to avoid token overflow)\n``` py title=\"summarize chat history\" hl_lines=\"1\"\ndef _summarize_contexts(self, total_tokens: int):\n        \"\"\"Summarize the contexts.\n\n        Args:\n            total_tokens (int): total tokens in the response\n        \"\"\"\n        messages = self.message.llm_message[\"messages\"]\n\n        # First two messages are system and user personas\n        if len(messages) > 2 + NONEVICTION_LENGTH:\n            messages = messages[2:-NONEVICTION_LENGTH]\n            del self.message.llm_message[\"messages\"][2:-NONEVICTION_LENGTH]\n        else:\n            messages = messages[2:]\n            del 
self.message.llm_message[\"messages\"][2:]\n\n        message_contents = [message.to_dict()[\"content\"] for message in messages]\n\n        llm_message_chatgpt = {\n            \"model\": self.model,\n            \"messages\": [\n                {\n                    \"role\": \"user\",\n                    \"content\": \"Summarize these previous conversations into 50 words:\"\n                    + str(message_contents),\n                }\n            ],\n        }\n        response, _ = self._get_gpt_response(llm_message_chatgpt)\n        content = \"Summarized past conversation:\" + response\n        self._add_contexts_to_llm_message(\"assistant\", content, index=2)\n        logging.info(f\"Contexts summarized successfully. \\n summary: {response}\")\n        logging.info(f\"Total tokens after eviction: {total_tokens*EVICTION_RATE}\")\n```\n\n## License \n\nmemary is released under the MIT License.\n","\u003Cp align=\"center\">\n  \u003Cimg alt=\"memary_logo\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkingjulio8238_Memary_readme_4f88572b3df4.png\">\n\u003C\u002Fp>\n\n[![LinkedIn](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-LinkedIn-blue?style=flat&logo=linkedin&labelColor=blue)](https:\u002F\u002Fwww.linkedin.com\u002Fcompany\u002Fmemary\u002F)\n[![Follow](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FFollow_on_X-000000?style=flat-square&logo=x&logoColor=white)](https:\u002F\u002Fx.com\u002Fmemary_labs)\n[![Documentation](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDocumentation-memary-428BCA?style=flat&logo=open-book)](https:\u002F\u002Fkingjulio8238.github.io\u002Fmemarydocs\u002F)\n[![Demo](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FWatch-Demo-red?logo=youtube)](https:\u002F\u002Fyoutu.be\u002FGnUU3_xK6bg)\n[![PyPI](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fmemary.svg?style=flat&color=orange)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fmemary\u002F)\n[![Downloads](https:\u002F\u002Fimg.shields.io\u002Fpyp
i\u002Fdm\u002Fmemary.svg?style=flat&label=Downloads)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fmemary\u002F)\n[![Last Commit](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flast-commit\u002Fkingjulio8238\u002Fmemary.svg?style=flat&color=blue)](https:\u002F\u002Fgithub.com\u002Fkingjulio8238\u002Fmemary)\n[![License](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-MIT-yellow.svg)](https:\u002F\u002Fopensource.org\u002Flicenses\u002FMIT)\n\n\n## 管理您的智能体记忆\n\n智能体能够促进类人推理，是迈向通用人工智能（AGI）以及理解人类自身的重要一步。记忆是人类处理任务的关键组成部分，在构建人工智能智能体时也应给予同等重视。**memary 模拟人类记忆，以推动这些智能体的发展。**\n\n## 快速入门 🏁\n\n### 安装 memary \n1. 使用 pip：\n   \n请确保您使用的 Python 版本不超过 3.11.9，然后运行以下命令：\n```\npip install memary\n```\n\n2. 本地安装：\n   \ni. 创建一个使用上述指定版本 Python 的虚拟环境。\n\nii. 安装 Python 依赖项：\n```\npip install -r requirements.txt\n```\n### 指定所用模型 \n截至撰写本文时，memary 假设已安装本地模型，并且目前支持通过 **Ollama** 提供的所有模型：\n\n- 使用 Ollama 在本地运行的 LLM（建议默认为 `Llama 3 8B\u002F40B`）**或** `gpt-3.5-turbo`\n- 使用 Ollama 在本地运行的视觉模型（建议默认为 `LLaVA`）**或** `gpt-4-vision-preview`\n\n除非明确指定，否则 memary 将默认使用本地运行的模型。此外，memary 还允许开发者 **轻松切换已下载的模型**。\n\n### 运行 memary\n**步骤**\n1. [可选] 如果您使用 Ollama 在本地运行模型，请按照此 [仓库](https:\u002F\u002Fgithub.com\u002Follama\u002Follama) 中的说明操作。\n\n2. 确保存在一个包含必要凭据的 `.env` 文件。\n\n   \u003Cdetails>\n     \u003Csummary>.env\u003C\u002Fsummary>\n  \n   ```\n   OPENAI_API_KEY=\"YOUR_API_KEY\"\n   PERPLEXITY_API_KEY=\"YOUR_API_KEY\"\n   GOOGLEMAPS_API_KEY=\"YOUR_API_KEY\"\n   ALPHA_VANTAGE_API_KEY=\"YOUR_API_KEY\"\n   \n   数据库使用（参见 API 信息）：\n   FALKORDB_URL=\"falkor:\u002F\u002F[[username]:[password]]@[falkor_host_url]:port\"\n   或\n   NEO4J_PW=\"YOUR_NEO4J_PW\"\n   NEO4J_URL=\"YOUR_NEO4J_URL\"\n   ```\n  \n   \u003C\u002Fdetails>\n   \n\n3. 
获取 API 凭据：\n   \u003Cdetails>\n     \u003Csummary>API 信息\u003C\u002Fsummary>\n\n    - [**OpenAI 密钥**](https:\u002F\u002Fopenai.com\u002Findex\u002Fopenai-api)\n    - [**FalkorDB**](https:\u002F\u002Fapp.falkordb.cloud\u002F)\n      - 登录 &rarr; 点击“订阅” &rarr; 在仪表板上创建一个免费实例 &rarr; 使用凭据（用户名、密码、falkor_host_url 和端口）。  \n    - [**Neo4j**](https:\u002F\u002Fneo4j.com\u002Fcloud\u002Fplatform\u002Faura-graph-database\u002F?ref=nav-get-started-cta)\n      - 点击“免费开始” &rarr; 创建一个免费实例 &rarr; 打开自动下载的文本文件并使用凭据。\n    - [**Perplexity 密钥**](https:\u002F\u002Fwww.perplexity.ai\u002Fsettings\u002Fapi)\n    - [**Google 地图**](https:\u002F\u002Fconsole.cloud.google.com\u002Fapis\u002Fcredentials)\n      - 密钥在 Google Cloud 控制台的“API 和服务”选项卡中的“凭据”页面生成。\n    - [Alpha Vantage](https:\u002F\u002Fwww.alphavantage.co\u002Fsupport\u002F#api-key)\n      - 建议使用 https:\u002F\u002F10minutemail.com\u002F 生成临时邮箱使用。\n    \n    \u003C\u002Fdetails>\n\n4. 更新用户角色，可在 `streamlit_app\u002Fdata\u002Fuser_persona.txt` 中找到，使用位于 `streamlit_app\u002Fdata\u002Fuser_persona_template.txt` 中的用户角色模板。已提供说明——将大括号替换为相关信息。\n\n5. [可选] 如有需要，更新系统角色，可在 `streamlit_app\u002Fdata\u002Fsystem_persona.txt` 中找到。\n\n6. [可选] 多图谱——使用 FalkorDB 的用户可以生成多个图谱，并在其 ID 之间切换，这些 ID 对应不同的智能体。这使得不同智能体的记忆和知识上下文能够无缝切换和管理。\n\n7. 
运行：\n\n```\ncd streamlit_app\nstreamlit run app.py\n```\n\n## 基本用法\n```python\nfrom memary.agent.chat_agent import ChatAgent\n\nsystem_persona_txt = \"data\u002Fsystem_persona.txt\"\nuser_persona_txt = \"data\u002Fuser_persona.txt\"\npast_chat_json = \"data\u002Fpast_chat.json\"\nmemory_stream_json = \"data\u002Fmemory_stream.json\"\nentity_knowledge_store_json = \"data\u002Fentity_knowledge_store.json\"\nchat_agent = ChatAgent(\n    \"个人智能体\",\n    memory_stream_json,\n    entity_knowledge_store_json,\n    system_persona_txt,\n    user_persona_txt,\n    past_chat_json,\n)\n```\n在初始化时，可根据需要传入 `['search', 'vision', 'locate', 'stocks']` 子集作为 `include_from_defaults`，以使用不同的默认工具集。\n\n### 多图谱\n当使用 FalkorDB 数据库时，您可以创建多个智能体。以下是为不同用户设置个人智能体的示例：\n\n```python\n# 用户 A 的个人智能体\nchat_agent_user_a = ChatAgent(\n    \"个人智能体\",\n    memory_stream_json_user_a,\n    entity_knowledge_store_json_user_a,\n    system_persona_txt_user_a,\n    user_persona_txt_user_a,\n    past_chat_json_user_a,\n    user_id='user_a_id'\n)\n\n# 用户 B 的个人智能体\nchat_agent_user_b = ChatAgent(\n    \"个人智能体\",\n    memory_stream_json_user_b,\n    entity_knowledge_store_json_user_b,\n    system_persona_txt_user_b,\n    user_persona_txt_user_b,\n    past_chat_json_user_b,\n    user_id='user_b_id'\n)\n```\n\n### 添加自定义工具\n```python\ndef multiply(a: int, b: int) -> int:\n    \"\"\"将两个整数相乘并返回结果\"\"\"\n    return a * b\n\nchat_agent.add_tool({\"multiply\": multiply})\n```\n有关为 LlamaIndex ReAct Agent 创建自定义工具的更多信息，请参阅 [此处](https:\u002F\u002Fdocs.llamaindex.ai\u002Fen\u002Fstable\u002Fexamples\u002Fagent\u002Freact_agent\u002F)。\n\n### 移除自定义工具\n```python\nchat_agent.remove_tool(\"multiply\")\n```\n\n## 核心概念 🧪\nmemary 当前的结构如下面的示意图所示。\n\n\u003Cimg width=\"1410\" alt=\"memary 概览\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkingjulio8238_Memary_readme_e65131197d35.png\">\n\n截至撰写本文时，上述系统设计中，路由智能体、知识图谱和记忆模块均已集成到位于 `src\u002Fagent` 目录下的 `ChatAgent` 类中。\n\n这些组件的原始源代码也可在其各自的目录中找到，包括基准测试、笔记本和更新内容。\n\n### 原则\nmemary 
旨在以最少的开发工作量，无缝集成到您现有的代理中。我们通过坚持几项原则来实现这一点。\n\n- 自动生成的记忆  \n    - 初始化 memary 后，随着代理的交互，代理记忆会自动更新。这种自动生成方式使我们能够捕获所有记忆，并在您的仪表板上轻松显示。此外，我们还支持无需或仅需少量代码即可将数据库组合在一起！\n\n- 记忆模块  \n    - 根据当前数据库的状态，memary 会跟踪用户的偏好，并在您的仪表板上显示这些信息以便分析。\n\n- 系统改进  \n    - memary 模仿人类记忆随时间演变和学习的方式。我们将在仪表板上展示您的代理改进的速度。\n\n- 回溯记忆  \n    - memary 会自动记录所有聊天内容，以便您可以回溯代理的执行过程，并访问特定时间点的代理记忆（即将推出）。\n\n### 代理\n\n\u003Cimg alt=\"路由代理\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkingjulio8238_Memary_readme_fb8e2606ff8b.png\">\n\n为了使没有现有代理的开发者也能使用 memary，我们提供了一个简单的代理实现。我们使用 [ReAct](https:\u002F\u002Freact-lm.github.io\u002F) 代理，根据提供的工具来规划并执行查询。\n\n虽然我们并未强调为代理配备大量工具，但**搜索工具对于从知识图谱中检索信息至关重要**。该工具会基于现有节点查询知识图谱以获取响应；如果不存在相关实体，则会执行外部搜索。其他默认代理工具包括由 LLaVA 提供支持的计算机视觉工具，以及使用 geocoder 和 Google 地图的位置工具。\n\n注意：在未来的版本发布中，目前用于演示目的的 ReAct 代理将被从软件包中移除，以便 **memary 能够支持来自任何提供商的任何类型的代理**。\n\n``` py title=\"外部查询\" hl_lines=\"1\"\ndef external_query(self, query: str):\n    messages_dict = [\n        {\"role\": \"system\", \"content\": \"务必精确简洁。\"},\n        {\"role\": \"user\", \"content\": query},\n    ]\n    messages = [ChatMessage(**msg) for msg in messages_dict]\n    external_response = self.query_llm.chat(messages)\n\n    return str(external_response)\n```\n\n``` py title=\"搜索\" hl_lines=\"1\"\ndef search(self, query: str) -> str:\n    response = self.query_engine.query(query)\n\n    if response.metadata is None:\n        return self.external_query(query)\n    else:\n        return response\n```\n\n### 知识图谱\n\n![KG 图解](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkingjulio8238_Memary_readme_f74600d98a9d.png)\n\n#### 知识图谱 ↔ 大型语言模型\n- memary 使用图数据库来存储知识。\n- 我们使用 Llama Index 根据文档向图存储中添加节点。\n- 对于外部查询，则使用 Perplexity（mistral-7b-instruct 模型）。\n\n#### 知识图谱用例\n- 将最终的代理响应注入到现有的知识图谱中。\n- memary 采用一种[递归式](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2401.18059.pdf)检索方法来搜索知识图谱：首先确定查询中的关键实体，然后构建一个包含这些实体且最大深度为 2 的子图，最后利用该子图来构建上下文。\n- 当查询中存在多个关键实体时，memary 
会使用[多跳](https:\u002F\u002Fneo4j.com\u002Fdeveloper-blog\u002Fknowledge-graphs-llms-multi-hop-question-answering\u002F)推理，将多个子图合并成一个更大的子图进行搜索。\n- 这些技术相比一次性搜索整个知识图谱，能够显著降低延迟。\n\n``` py title=\"写入知识图谱\" hl_lines=\"1\"\ndef query(self, query: str) -> str:\n        # 获取 ReAct 代理的响应\n        response = self.routing_agent.chat(query)\n        self.routing_agent.reset()\n        # 将响应写入文件，以便后续写回知识图谱\n        with open(\"data\u002Fexternal_response.txt\", \"w\") as f:\n            print(response, file=f)\n        # 写回知识图谱\n        self.write_back()\n        return response\n```\n\n``` py title=\"递归式检索\" hl_lines=\"1\"\ndef check_KG(self, query: str) -> bool:\n        \"\"\"检查查询是否存在于知识图谱中。\n\n        Args:\n            query (str): 需要在知识图谱中检查的查询\n\n        Returns:\n            bool: 如果查询存在于知识图谱中则返回 True，否则返回 False\n        \"\"\"\n        response = self.query_engine.query(query)\n\n        if response.metadata is None:\n            return False\n        return generate_string(\n            list(list(response.metadata.values())[0][\"kg_rel_map\"].keys())\n        )\n```\n\n### 记忆模块\n\n![记忆模块](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkingjulio8238_Memary_readme_22e0184f4607.png)\n\n记忆模块由**记忆流和实体知识库**组成。该模块的设计受到微软研究院提出的[K-LaMP](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2311.06318.pdf)的启发。\n\n#### 记忆流\n记忆流会捕获所有被插入到知识图谱中的实体及其相关时间戳。这一流反映了用户的**知识广度**，即用户接触过但未深入理解的概念。\n\n- 时间线分析：绘制交互的时间线，突出显示高互动时刻或话题焦点的变化。这有助于理解用户兴趣随时间的演变。\n  \n``` py title=\"添加到记忆流\" hl_lines=\"1\"\ndef add_memory(self, entities):\n        self.memory.extend([\n            MemoryItem(str(entity),\n                       datetime.now().replace(microsecond=0))\n            for entity in entities\n        ])\n```\n\n- 提取主题：在交互中寻找反复出现的主题或话题。这种主题分析可以帮助我们在用户明确表达之前就预测其兴趣或问题。\n\n``` py title=\"从记忆流中检索\" hl_lines=\"1\"\ndef get_memory(self) -> list[MemoryItem]:\n        return self.memory\n```\n\n#### 实体知识库\n实体知识库会跟踪存储在记忆流中的每个实体的引用频率和最近一次引用时间。该知识库反映了用户的**知识深度**，即用户对某些概念比其他概念更为熟悉。\n\n- 
按相关性对实体进行排序：结合频率和近期性对实体进行排序。如果某个实体被频繁提及（计数高）且最近被引用，则说明该实体可能非常重要，用户对该概念也非常了解。\n\n``` py title=\"选择最相关的实体\" hl_lines=\"1\"\ndef _select_top_entities(self):\n        entity_knowledge_store = self.message.llm_message['knowledge_entity_store']\n        entities = [entity.to_dict() for entity in entity_knowledge_store]\n        entity_counts = [entity['count'] for entity in entities]\n        # 按计数降序排序，选取计数最高的 TOP_ENTITIES 个实体\n        top_indexes = np.argsort(entity_counts)[::-1][:TOP_ENTITIES]\n        return [entities[index] for index in top_indexes]\n```\n\n- 对实体进行分类：根据实体的性质或其出现的上下文（例如技术术语、个人兴趣）将实体分组。这种分类有助于快速获取针对用户查询的相关信息。\n\n``` py title=\"对实体分组\" hl_lines=\"1\"\ndef _convert_memory_to_knowledge_memory(\n            self, memory_stream: list) -> list[KnowledgeMemoryItem]:\n        \"\"\"通过分组实体，将记忆流中的记忆转换为实体知识库\n\n        返回：\n            knowledge_memory (list): KnowledgeMemoryItem 列表\n        \"\"\"\n        knowledge_memory = []\n\n        entities = set([item.entity for item in memory_stream])\n        for entity in entities:\n            memory_dates = [\n                item.date for item in memory_stream if item.entity == entity\n            ]\n            knowledge_memory.append(\n                KnowledgeMemoryItem(entity, len(memory_dates),\n                                    max(memory_dates)))\n        return knowledge_memory\n```\n\n- 突出显示随时间的变化：识别实体排名或分类随时间发生的显著变化。最常提及实体的变化可能表明用户兴趣或知识结构发生了转变。\n- 关于记忆模块的更多信息请参见[这里](https:\u002F\u002Fgithub.com\u002Fseyeong-han\u002FKnowledgeGraphRAG)。\n\n![记忆压缩](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkingjulio8238_Memary_readme_2c3170e7a618.png)\n\n### 新上下文窗口\n![New_Context_Window](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkingjulio8238_Memary_readme_2e5c76716945.png)\n\n注意：我们利用与用户相关的关键分类实体和主题，以便使智能体的回答更贴合用户的当前兴趣\u002F偏好以及知识水平\u002F专业程度。新的上下文窗口由以下内容组成：\n\n- 智能体响应\n``` py title=\"获取智能体响应\" hl_lines=\"1\"\ndef get_routing_agent_response(self, query, return_entity=False):\n        \"\"\"从 ReAct 代理获取响应\"\"\"\n        response = \"\"\n        if 
self.debug:\n            # 将ReAct智能体的步骤写入单独文件，并修改格式以便在.txt文件中可读\n            with open(\"data\u002Frouting_response.txt\", \"w\") as f:\n                orig_stdout = sys.stdout\n                sys.stdout = f\n                response = str(self.query(query))\n                sys.stdout.flush()\n                sys.stdout = orig_stdout\n            text = \"\"\n            with open(\"data\u002Frouting_response.txt\", \"r\") as f:\n                text = f.read()\n\n            plain = ansi_strip(text)\n            with open(\"data\u002Frouting_response.txt\", \"w\") as f:\n                f.write(plain)\n        else:\n            response = str(self.query(query))\n\n        if return_entity:\n            # 上述查询已将最终响应添加到知识图谱中，因此实体会存在于知识图谱中\n            return response, self.get_entity(self.query_engine.retrieve(query))\n        return response\n```\n\n- 最相关实体\n``` py title=\"获取重要实体\" hl_lines=\"1\"\ndef get_entity(self, retrieve) -> list[str]:\n        \"\"\"retrieve 是一个 QueryBundle 对象列表。\n        一个检索到的 QueryBundle 对象具有“node”属性，\n        而该属性又包含“metadata”属性。\n\n        例如对于“kg_rel_map”：\n        kg_rel_map = {\n            'Harry': [['DREAMED_OF', '未知关系'], ['FELL_HARD_ON', '水泥地面']],\n            'Potter': [['WORE', '圆形眼镜'], ['HAD', '梦境']]\n        }\n\n        参数：\n            retrieve (list[NodeWithScore]): NodeWithScore 对象列表\n        返回：\n            list[str]: 字符串实体列表\n        \"\"\"\n\n        entities = []\n        kg_rel_map = retrieve[0].node.metadata[\"kg_rel_map\"]\n        for key, items in kg_rel_map.items():\n            # key 是问题中的实体\n            entities.append(key)\n            # items 是 [关系, 实体] 的列表\n            entities.extend(item[1] for item in items)\n            if len(entities) > MAX_ENTITIES_FROM_KG:\n                break\n        entities = list(set(entities))\n        for exceptions in ENTITY_EXCEPTIONS:\n            if exceptions in entities:\n                entities.remove(exceptions)\n        return entities\n```\n\n- 
聊天历史（已摘要以避免令牌溢出）\n``` py title=\"总结聊天历史\" hl_lines=\"1\"\ndef _summarize_contexts(self, total_tokens: int):\n        \"\"\"总结上下文。\n\n        参数：\n            total_tokens (int): 响应中的总令牌数\n        \"\"\"\n        messages = self.message.llm_message[\"messages\"]\n\n        # 前两条消息分别是系统和用户角色\n        if len(messages) > 2 + NONEVICTION_LENGTH:\n            messages = messages[2:-NONEVICTION_LENGTH]\n            del self.message.llm_message[\"messages\"][2:-NONEVICTION_LENGTH]\n        else:\n            messages = messages[2:]\n            del self.message.llm_message[\"messages\"][2:]\n\n        message_contents = [message.to_dict()[\"content\"] for message in messages]\n\n        llm_message_chatgpt = {\n            \"model\": self.model,\n            \"messages\": [\n                {\n                    \"role\": \"user\",\n                    \"content\": \"请将之前的对话总结成50字左右：\"\n                    + str(message_contents),\n                }\n            ],\n        }\n        response, _ = self._get_gpt_response(llm_message_chatgpt)\n        content = \"总结的过往对话：\" + response\n        self._add_contexts_to_llm_message(\"assistant\", content, index=2)\n        logging.info(f\"上下文已成功总结。 \\n 总结内容：{response}\")\n        logging.info(f\"驱逐后总令牌数：{total_tokens*EVICTION_RATE}\")\n```\n\n## 许可证\n\nmemary 采用 MIT 许可证发布。","# Memary 快速上手指南\n\nMemary 是一个旨在模拟人类记忆机制的 AI 代理框架，通过知识图谱和记忆流模块，帮助开发者构建具备长期记忆和自我进化能力的智能代理。\n\n## 环境准备\n\n在开始之前，请确保满足以下系统要求和前置依赖：\n\n*   **Python 版本**：必须使用 **Python \u003C= 3.11.9**（更高版本可能导致兼容性问题）。\n*   **本地模型服务（可选但推荐）**：\n    *   若使用本地模型，需安装并运行 [Ollama](https:\u002F\u002Fgithub.com\u002Follama\u002Follama)。\n    *   推荐拉取的模型：\n        *   LLM: `Llama 3 8B` 或 `40B`\n        *   视觉模型：`LLaVA`\n*   **API 密钥**：根据需求准备以下服务的 API Key（需在 `.env` 文件中配置）：\n    *   OpenAI (必需，若不使用纯本地模型)\n    *   FalkorDB 或 Neo4j (用于知识图谱存储)\n    *   Perplexity, Google Maps, Alpha Vantage (可选，用于增强工具能力)\n\n## 安装步骤\n\n### 方法一：通过 PyPI 安装（推荐）\n\n确保 Python 版本符合要求后，直接运行：\n\n```bash\npip install 
memary\n```\n\n> **提示**：国内用户若下载缓慢，可使用清华源加速：\n> ```bash\n> pip install memary -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n> ```\n\n### 方法二：本地源码安装\n\n1.  创建并激活虚拟环境（确保 Python 版本 \u003C= 3.11.9）：\n    ```bash\n    python -m venv venv\n    # Windows\n    venv\\Scripts\\activate\n    # Mac\u002FLinux\n    source venv\u002Fbin\u002Factivate\n    ```\n\n2.  克隆项目并安装依赖：\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002Fkingjulio8238\u002Fmemary.git\n    cd memary\n    pip install -r requirements.txt\n    ```\n\n## 基本使用\n\n### 1. 配置文件设置\n\n在项目根目录下创建 `.env` 文件，填入必要的凭证。示例如下：\n\n```ini\nOPENAI_API_KEY=\"YOUR_API_KEY\"\nPERPLEXITY_API_KEY=\"YOUR_API_KEY\"\nGOOGLEMAPS_API_KEY=\"YOUR_API_KEY\"\nALPHA_VANTAGE_API_KEY=\"YOUR_API_KEY\"\n\n# 数据库配置 (二选一)\n# 方案 A: FalkorDB\nFALKORDB_URL=\"falkor:\u002F\u002F[username]:[password]@[host]:[port]\"\n\n# 方案 B: Neo4j\nNEO4J_PW=\"YOUR_NEO4J_PW\"\nNEO4J_URL=\"YOUR_NEO4J_URL\"\n```\n\n### 2. 设置用户人设\n\n编辑 `streamlit_app\u002Fdata\u002Fuser_persona.txt` 文件。你可以参考同目录下的 `user_persona_template.txt` 模板，将大括号 `{}` 中的内容替换为实际的用户信息。如有需要，也可修改 `system_persona.txt` 来调整系统行为。\n\n### 3. 启动演示应用\n\n完成配置后，运行 Streamlit 应用以体验完整功能：\n\n```bash\ncd streamlit_app\nstreamlit run app.py\n```\n\n### 4. 
代码集成（Python SDK）\n\n你可以在自己的 Python 项目中直接调用 Memary 的核心代理类：\n\n```python\nfrom memary.agent.chat_agent import ChatAgent\n\n# 定义文件路径\nsystem_persona_txt = \"data\u002Fsystem_persona.txt\"\nuser_persona_txt = \"data\u002Fuser_persona.txt\"\npast_chat_json = \"data\u002Fpast_chat.json\"\nmemory_stream_json = \"data\u002Fmemory_stream.json\"\nentity_knowledge_store_json = \"data\u002Fentity_knowledge_store.json\"\n\n# 初始化代理\nchat_agent = ChatAgent(\n    \"Personal Agent\",\n    memory_stream_json,\n    entity_knowledge_store_json,\n    system_persona_txt,\n    user_persona_txt,\n    past_chat_json,\n)\n\n# (可选) 指定初始化工具子集，如搜索、视觉、定位等\n# chat_agent = ChatAgent(..., include_from_defaults=['search', 'vision'])\n```\n\n#### 高级功能示例\n\n**多用户\u002F多图谱支持 (配合 FalkorDB):**\n```python\n# 用户 A 的代理\nchat_agent_user_a = ChatAgent(\n    \"Personal Agent\",\n    memory_stream_json_user_a,\n    entity_knowledge_store_json_user_a,\n    system_persona_txt_user_a,\n    user_persona_txt_user_a,\n    past_chat_json_user_a,\n    user_id='user_a_id'\n)\n```\n\n**添加自定义工具:**\n```python\ndef multiply(a: int, b: int) -> int:\n    \"\"\"Multiply two integers and returns the result integer\"\"\"\n    return a * b\n\nchat_agent.add_tool({\"multiply\": multiply})\n```\n\n**移除自定义工具:**\n```python\nchat_agent.remove_tool(\"multiply\")\n```","一位开发者正在构建一个能够长期陪伴用户、协助管理复杂个人项目的自主 AI 助手，该助手需要跨越数周甚至数月持续跟踪任务进度与用户偏好。\n\n### 没有 Memary 时\n- **记忆碎片化**：AI 助手无法有效存储历史对话中的关键信息（如用户的项目目标、代码风格偏好），每次交互都像“失忆”般重新开始，导致重复询问相同背景。\n- **上下文丢失**：在处理长周期任务（如软件开发迭代）时，助手难以关联几天前的决策逻辑，经常给出与之前约定相冲突的建议。\n- **开发成本高**：开发者需手动设计复杂的数据库 schema 和检索逻辑来模拟记忆功能，耗费大量时间编写样板代码而非优化核心智能。\n- **缺乏推理连贯性**：由于缺少类似人类的短期与长期记忆分层机制，助手在进行多步推理时容易逻辑断层，无法像人类一样“回想”起之前的线索。\n\n### 使用 Memary 后\n- **类人记忆模拟**：Memary 自动为助手构建了分层记忆系统，能精准记录并调用用户的个性化设定与历史决策，让助手越用越“懂”用户。\n- **长程上下文关联**：借助内置的记忆检索机制，助手能瞬间关联数周前的项目细节，确保在长期任务中保持逻辑一致性和执行连贯性。\n- **开箱即用的记忆层**：开发者只需几行代码即可集成 Memary，无需从零搭建记忆架构，直接利用其支持的 Ollama 本地模型或云端 API 快速部署。\n- **动态记忆更新**：Memary 模拟
人类记忆的遗忘与强化机制，自动筛选高价值信息存入长期记忆，使助手的推理过程更加流畅且符合直觉。\n\nMemary 通过赋予自主代理类人的记忆能力，将原本割裂的交互转化为连续、深度的智能协作，极大降低了构建高阶 AGI 应用的门槛。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkingjulio8238_Memary_4f88572b.png","kingjulio8238","kingJulio","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fkingjulio8238_11a5568d.png","Moving up the Kardashev scale ",null,"San Francisco, California, USA ","https:\u002F\u002Fgithub.com\u002Fkingjulio8238",[80,84],{"name":81,"color":82,"percentage":83},"Jupyter Notebook","#DA5B0B",78.7,{"name":85,"color":86,"percentage":87},"Python","#3572A5",21.3,2576,190,"2026-04-10T13:39:27","MIT",4,"未说明","非必需。若使用本地模型（Ollama），需根据具体模型（如 Llama 3 8B\u002F40B, LLaVA）自行配置相应 GPU 资源；若使用 OpenAI API 则无需本地 GPU。","未说明（运行本地大模型通常建议 16GB+）",{"notes":97,"python":98,"dependencies":99},"1. 核心依赖本地运行的 Ollama 服务（支持 Llama 3, LLaVA 等）或直接调用 OpenAI API。\n2. 必须配置图数据库：推荐使用 FalkorDB 或 Neo4j 存储知识图谱。\n3. 需要创建 .env 文件并配置多个 API Key（OpenAI, Perplexity, Google Maps, Alpha Vantage 等）。\n4. 前端界面基于 Streamlit 运行。\n5. 
支持通过 user_persona.txt 自定义用户人设。","\u003C=3.11.9",[100,101,102,103,104,105,106],"ollama","streamlit","FalkorDB 或 Neo4j","openai","perplexity_api","googlemaps","llama-index",[16,13,14],[109,110,111,112,113,114],"agents","memory","knowledge-graph","rag","multiagent-systems","self-improvement","2026-03-27T02:49:30.150509","2026-04-13T00:24:36.716321",[118,123,128,133,138,143],{"id":119,"question_zh":120,"answer_zh":121,"source_url":122},31064,"如何将现有的 Neo4j 知识图谱导入到 Memary 中？","可以直接使用现有的 Neo4j 知识图谱。只需在项目的 `.env` 文件中填入 Neo4j 提供的对应连接信息（如 URI、用户名、密码等），Memary 即可连接到该图谱并开始工作，无需重新构建基础数据。","https:\u002F\u002Fgithub.com\u002Fkingjulio8238\u002FMemary\u002Fissues\u002F47",{"id":124,"question_zh":125,"answer_zh":126,"source_url":127},31065,"为什么最终回复偶尔会输出实体列表而不是正常的回答？","这通常是由于使用的本地模型能力不足导致的。例如，Llama 3 8B 在处理包含大量 persona 和知识实体信息的上下文时容易混淆。解决方案是切换到更强大的模型（如 `gpt-3.5-preview`、Llama 3 70B 或其他高级模型），这样可以显著提高回复的准确性和一致性。建议在配置中选择参数量更大的模型以获得更好效果。","https:\u002F\u002Fgithub.com\u002Fkingjulio8238\u002FMemary\u002Fissues\u002F35",{"id":129,"question_zh":130,"answer_zh":131,"source_url":132},31066,"如何免费使用 Memary 或在本地运行模型（不使用 GPT-4）？","Memary 已支持模型设置功能，允许用户自由选择模型和 API Key。您可以配置使用开源模型（如 Llama 3 8B）并通过 Ollama 在本地运行，从而避免产生 GPT-4 的费用。对于视觉工具，也可以配置使用 LLaVA 模型。请在配置选项中指定您想要使用的本地模型路径或 Ollama 模型名称。","https:\u002F\u002Fgithub.com\u002Fkingjulio8238\u002FMemary\u002Fissues\u002F31",{"id":134,"question_zh":135,"answer_zh":136,"source_url":137},31067,"运行时遇到 \"ModuleNotFoundError: No module named 'src'\" 错误怎么办？","此错误通常由目录路径设置引起。解决方法有两种：\n1. **推荐方法**：确保从 `streamlit_app\u002F` 目录下运行应用，命令为：`streamlit run streamlit_app\u002Fapp.py`。\n2. 
**代码修改方法**：如果必须从其他目录运行，可修改 `\u002Fstreamlit_app\u002Fapp.py` 文件，将 `parent_dir = os.path.dirname(curr_dir)` 替换为 `parent_dir = os.path.dirname(curr_dir) + '\u002Fmemary'`，并将 `data` 文件夹从 `\u002Fstreamlit_app` 移动到与 `src` 文件夹同级的位置。","https:\u002F\u002Fgithub.com\u002Fkingjulio8238\u002FMemary\u002Fissues\u002F21",{"id":139,"question_zh":140,"answer_zh":141,"source_url":142},31068,"连接 Neo4jGraphStore 时出现 \"Failed to write data to connection\" 错误，但数据库似乎仍在运行，该如何处理？","如果 Neo4j 实例确实在运行且程序功能正常，该错误可能只是 Neo4j Python 驱动的一个已知警告，可以忽略。请首先检查您的 Neo4j 实例是否处于活动状态。如果服务正常且能进行读写操作，则无需担心此报错；若服务不可用，请重启 Neo4j 实例或检查网络连接配置。","https:\u002F\u002Fgithub.com\u002Fkingjulio8238\u002FMemary\u002Fissues\u002F27",{"id":144,"question_zh":145,"answer_zh":146,"source_url":147},31069,"如何在 UI 中切换不同的模型或选择要使用的工具？","Memary 已通过更新（参考 PR #26）增强了 UI 集成功能。现在用户可以通过界面直接切换不同的后端模型、输入 API Keys，并选择代理（Agent）需要使用的具体工具。这使得在不修改代码的情况下灵活配置工作流成为可能，特别是在工具数量增加时非常有用。","https:\u002F\u002Fgithub.com\u002Fkingjulio8238\u002FMemary\u002Fissues\u002F23",[149,154,159,164,169,174],{"id":150,"version":151,"summary_zh":152,"released_at":153},223006,"v0.1.5","## 变更内容\n* FalkorDB 多图功能，由 @galshubeli 在 https:\u002F\u002Fgithub.com\u002Fkingjulio8238\u002FMemary\u002Fpull\u002F61 中实现\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fkingjulio8238\u002FMemary\u002Fcompare\u002Fv0.1.4...v0.1.5","2024-10-22T00:46:00",{"id":155,"version":156,"summary_zh":157,"released_at":158},223007,"v0.1.4","## 变更内容\n* 图表添加，由 @kevinl424 在 https:\u002F\u002Fgithub.com\u002Fkingjulio8238\u002FMemary\u002Fpull\u002F48 中完成\n* 更新描述，由 @kingjulio8238 在 https:\u002F\u002Fgithub.com\u002Fkingjulio8238\u002FMemary\u002Fpull\u002F54 中完成\n* 修复 README 封面\u002F路由代理图片，由 @kevinl424 在 https:\u002F\u002Fgithub.com\u002Fkingjulio8238\u002FMemary\u002Fpull\u002F55 中完成\n* 添加社交媒体链接，由 @kingjulio8238 在 https:\u002F\u002Fgithub.com\u002Fkingjulio8238\u002FMemary\u002Fpull\u002F57 中完成\n* 再次添加社交媒体链接，由 @kingjulio8238 在 
https:\u002F\u002Fgithub.com\u002Fkingjulio8238\u002FMemary\u002Fpull\u002F58 中完成\n* 清理 README，由 @kevinl424 在 https:\u002F\u002Fgithub.com\u002Fkingjulio8238\u002FMemary\u002Fpull\u002F60 中完成\n* 添加 FalkorDB 图数据库，由 @galshubeli 在 https:\u002F\u002Fgithub.com\u002Fkingjulio8238\u002FMemary\u002Fpull\u002F59 中完成\n\n## 新贡献者\n* @galshubeli 在 https:\u002F\u002Fgithub.com\u002Fkingjulio8238\u002FMemary\u002Fpull\u002F59 中完成了首次贡献\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fkingjulio8238\u002FMemary\u002Fcompare\u002Fv0.1.3...0.1.4","2024-09-09T19:15:36",{"id":160,"version":161,"summary_zh":162,"released_at":163},223008,"v0.1.3","## 变更内容\n* 版本 0.1.2，由 @kevinl424 在 https:\u002F\u002Fgithub.com\u002Fkingjulio8238\u002Fmemary\u002Fpull\u002F38 中进行的快速修复\n* 由 @kingjulio8238 在 https:\u002F\u002Fgithub.com\u002Fkingjulio8238\u002Fmemary\u002Fpull\u002F41 中指定建议的默认模型\n* 由 @kevinl424 在 https:\u002F\u002Fgithub.com\u002Fkingjulio8238\u002Fmemary\u002Fpull\u002F40 中提升工具灵活性\n  * 新增参数，支持使用默认工具子集初始化 ChatAgent\n  * add_tools 现在允许用户添加自定义工具供 ChatAgent 使用\n  * remove_tool 允许用户移除自定义工具和默认工具\n  * 在 README 的“使用说明”部分新增内容，详细介绍了这些新方法的用法\n\n版本发布演示：https:\u002F\u002Fyoutu.be\u002FJhXn8HE56Rw","2024-05-27T21:27:37",{"id":165,"version":166,"summary_zh":167,"released_at":168},223009,"v0.1.2","## 变更内容\n* docs: 更新 README 文件，支持通过 pip 安装，由 @kevinl424 在 https:\u002F\u002Fgithub.com\u002Fkingjulio8238\u002Fmemary\u002Fpull\u002F34 中完成\n* Ollama 集成，由 @kevinl424 在 https:\u002F\u002Fgithub.com\u002Fkingjulio8238\u002Fmemary\u002Fpull\u002F36 中完成\n  * feat: 支持 llama3 和 llava 模型\n  * feat: UI 中新增 LLM\u002F视觉模型切换界面\n\n## 贡献者\n@kevinl424 @kingjulio8238 @seyeong-han","2024-05-21T05:33:23",{"id":170,"version":171,"summary_zh":172,"released_at":173},223010,"v0.1.1","# 重大更新！\r\n您可以通过 Python 包管理器安装 `memary`，并在自己的项目中使用我们的模块！\r\n```\r\npip install memary==0.1.1\r\n```\r\n\r\n\r\n# 变更内容\r\n- 功能：由 @kevinl424 实现的 COLBERT 重排序模型 #15\r\n- 功能：由 @kevinl424 添加的查询分解模型 #22\r\n- 功能：由 @shreybirmiwal 添加的带有附加工具的工具选择器 UI #26\r\n- 功能：由 
@kingjulio8238、@kevinl424 和 @seyeong-han 实现的 PyPI `memary` 集成 #29\r\n- 文档：由 @kevinl424 和 @seyeong-han 更新 README 中的图片链接 #29\r\n---\r\n\r\n# 贡献者\r\n@kevinl424、@shreybirmiwal、@kingjulio8238、@seyeong-han","2024-05-09T23:52:34",{"id":175,"version":176,"summary_zh":177,"released_at":178},223011,"v0.1.0","用这段稳定的代码收尾。","2024-05-02T13:20:10"]