# 🚧 This project is currently being fully rewritten.

Expect breaking changes and potential bugs.  
If you find an issue, please open a GitHub issue with details so it can be tracked and resolved.

## OpenMemory

> **Real long-term memory for AI agents. Not RAG. Not a vector DB. Self-hosted, Python + Node.**

[![VS Code Extension](https://img.shields.io/badge/VS%20Code-Extension-007ACC?logo=visualstudiocode)](https://marketplace.visualstudio.com/items?itemName=Nullure.openmemory-vscode)
[![Discord](https://img.shields.io/discord/1300368230320697404?label=Discord)](https://discord.gg/P7HaRayqTh)
[![PyPI](https://img.shields.io/pypi/v/openmemory-py.svg)](https://pypi.org/project/openmemory-py/)
[![npm](https://img.shields.io/npm/v/openmemory-js.svg)](https://www.npmjs.com/package/openmemory-js)
[![License](https://img.shields.io/github/license/CaviraOSS/OpenMemory)](LICENSE)

![OpenMemory demo](.github/openmemory.gif)

OpenMemory is a **cognitive memory engine** for LLMs and agents.

- 🧠 Real long-term memory (not just embeddings in a table)
- 💾 Self-hosted, local-first (SQLite / Postgres)
- 🐍 Python + 🟦 Node SDKs
- 🧩 Integrations: LangChain, CrewAI, AutoGen, Streamlit, MCP, VS Code
- 📥 Sources: GitHub, Notion, Google Drive, OneDrive, Web Crawler
- 🔍 Explainable traces (see *why* something was recalled)

Your model stays stateless.
**Your app stops being amnesiac.**

---

## ☁️ One‑click Deploy

Spin up a shared OpenMemory backend (HTTP API + MCP + dashboard):

[![Deploy on Railway](https://railway.app/button.svg)](https://railway.app/template/YOUR_TEMPLATE_ID)
[![Deploy to Render](https://render.com/images/deploy-to-render-button.svg)](https://render.com/deploy?repo=https://github.com/CaviraOSS/OpenMemory)
[![Deploy with Vercel](https://oss.gittoolsai.com/images/CaviraOSS_OpenMemory_readme_a4c0f8073a9c.png)](https://vercel.com/new/clone?repository-url=https://github.com/CaviraOSS/OpenMemory)

> Use the SDKs when you want **embedded local memory**. Use the server when you want **multi‑user org‑wide memory**.

---

## 1. TL;DR – Use It in 10 Seconds

### 🐍 Python (local-first)

Install:

```bash
pip install openmemory-py
```

Use:

```python
from openmemory.client import Memory

mem = Memory()
await mem.add("user prefers dark mode", user_id="u1")
results = await mem.search("preferences", user_id="u1")
await mem.delete("memory_id")
```

> Note: `add`, `search`, `get`, `delete` are async.
Use `await` in async contexts.

#### 🔗 OpenAI

```python
from openai import OpenAI

mem = Memory()
client = mem.openai.register(OpenAI(), user_id="u1")
resp = client.chat.completions.create(...)
```

#### 🧱 LangChain

```python
from openmemory.integrations.langchain import OpenMemoryChatMessageHistory

history = OpenMemoryChatMessageHistory(memory=mem, user_id="u1")
```

#### 🤝 CrewAI / AutoGen / Streamlit

OpenMemory is designed to sit behind **agent frameworks and UIs**:

- Crew-style agents: use `Memory` as a shared long-term store
- AutoGen-style orchestrations: store dialog + tool calls as episodic memory
- Streamlit apps: give each user a persistent memory by `user_id`

See the integrations section in the docs for concrete patterns.

---

### 🟦 Node / JavaScript (local-first)

Install:

```bash
npm install openmemory-js
```

Use:

```ts
import { Memory } from "openmemory-js"

const mem = new Memory()
await mem.add("user likes spicy food", { user_id: "u1" })
const results = await mem.search("food?", { user_id: "u1" })
await mem.delete("memory_id")
```

Drop this into:

- Node backends
- CLIs
- local tools
- anything that needs durable memory without running a separate service.

---

### 📥 Connectors

Ingest data from external sources directly into memory:

```python
# python
github = mem.source("github")
await github.connect(token="ghp_...")
await github.ingest_all(repo="owner/repo")
```

```ts
// javascript
const github = await mem.source("github")
await github.connect({ token: "ghp_..." })
await github.ingest_all({ repo: "owner/repo" })
```

Available connectors: `github`, `notion`, `google_drive`, `google_sheets`, `google_slides`, `onedrive`, `web_crawler`

---
## 2. Modes: SDKs, Server, MCP

OpenMemory can run **inside your app** or as a **central service**.

### 2.1 Python SDK

- ✅ Local SQLite by default
- ✅ Supports external DBs (via config)
- ✅ Great fit for LangChain / LangGraph / CrewAI / notebooks

Docs: https://openmemory.cavira.app/docs/sdks/python

---

### 2.2 Node SDK

- Same cognitive model as Python
- Ideal for JS/TS applications
- Can either run fully local or talk to a central backend

Docs: https://openmemory.cavira.app/docs/sdks/javascript

---

### 2.3 Backend server (multi-user + dashboard + MCP)

Use when you want:

- org‑wide memory
- HTTP API
- dashboard
- MCP server for Claude / Cursor / Windsurf

Run from source:

```bash
git clone https://github.com/CaviraOSS/OpenMemory.git
cd OpenMemory
cp .env.example .env

cd backend
npm install
npm run dev   # default :8080
```

Or with Docker (API + MCP):

```bash
docker compose up --build -d
```

Optional: include the dashboard service profile:

```bash
docker compose --profile ui up --build -d
```

Using Doppler-managed config (recommended for hosted dashboard/API URLs):

```bash
cd OpenMemory
tools/ops/compose_with_doppler.sh up -d --build
```

Check service status:

```bash
docker compose ps
curl -f http://localhost:8080/health
```

The backend exposes:

- `/api/memory/*` – memory operations
- `/api/temporal/*` – temporal knowledge graph
- `/mcp` – MCP server
- dashboard UI (when `ui` profile is enabled)

---

## 3. Why OpenMemory (vs RAG, vs “just vectors”)

LLMs forget everything between messages.  
Most “memory” solutions are really just **RAG pipelines**:

- text is chunked
- embedded into a vector store
- retrieved by similarity

They don’t understand:

- whether something is a **fact**, **event**, **preference**, or **feeling**
- how **recent / important** it is
- how it links to other memories
- what was true at a specific **time**

Cloud memory APIs add:

- vendor lock‑in
- latency
- opaque behavior
- privacy problems

**OpenMemory gives you an actual memory system:**

- 🧠 Multi‑sector memory (episodic, semantic, procedural, emotional, reflective)
- ⏱ Temporal reasoning (what was true *when*)
- 📉 Decay & reinforcement instead of dumb TTLs
- 🕸 Waypoint graph (associative, traversable links)
- 🔍 Explainable traces (see which nodes were recalled and why)
- 🏠 Self‑hosted, local‑first, you own the DB
- 🔌 SDKs + server + VS Code + MCP

It behaves like a memory module, not a “vector DB with marketing copy”.

---

## 4. The “Old Way” vs OpenMemory

**Vector DB + LangChain (cloud-heavy, ceremony):**

```python
import os
import time
from langchain.chains import ConversationChain
from langchain.memory import VectorStoreRetrieverMemory
from langchain_community.vectorstores import Pinecone
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

os.environ["PINECONE_API_KEY"] = "sk-..."
os.environ["OPENAI_API_KEY"] = "sk-..."
time.sleep(3)  # cloud warmup

embeddings = OpenAIEmbeddings()
pinecone = Pinecone.from_existing_index(embeddings, index_name="my-memory")
retriever = pinecone.as_retriever(search_kwargs={"k": 2})
memory = VectorStoreRetrieverMemory(retriever=retriever)
conversation = ConversationChain(llm=ChatOpenAI(), memory=memory)

conversation.predict(input="I'm allergic to peanuts")
```

**OpenMemory (3 lines, local file, no vendor lock-in):**

```python
from openmemory.client import Memory

mem = Memory()
await mem.add("user allergic to peanuts", user_id="user123")
results = await mem.search("allergies", user_id="user123")
```

✅ Zero cloud config • ✅ Local SQLite • ✅ Offline‑friendly • ✅ Your DB, your schema

---

## 5. Features at a Glance

- **Multi-sector memory**  
  Episodic (events), semantic (facts), procedural (skills), emotional (feelings), reflective (insights).

- **Temporal knowledge graph**  
  `valid_from` / `valid_to`, point‑in‑time truth, evolution over time.

- **Composite scoring**  
  Salience + recency + coactivation, not just cosine distance.

- **Decay engine**  
  Adaptive forgetting per sector instead of hard TTLs.

- **Explainable recall**  
  “Waypoint” traces that show exactly which nodes were used in context.

- **Embeddings**  
  OpenAI, Gemini, Ollama, AWS, synthetic fallback.

- **Integrations**  
  LangChain, CrewAI, AutoGen, Streamlit, MCP, VS Code, IDEs.

- **Connectors**  
  Import from GitHub, Notion, Google Drive, Google Sheets/Slides, OneDrive, Web Crawler.

- **Migration tool**  
  Import memories from Mem0, Zep, Supermemory and more.

If you’re building **agents, copilots, journaling systems, knowledge workers, or coding assistants**, OpenMemory is the piece that turns them from “goldfish” into something that actually remembers.

---
## 6. MCP & IDE Workflow

OpenMemory ships a native MCP server, so any MCP‑aware client can treat it as a tool.

### Claude / Claude Code

```bash
claude mcp add --transport http openmemory http://localhost:8080/mcp
```

### Cursor / Windsurf

`.mcp.json`:

```json
{
  "mcpServers": {
    "openmemory": {
      "type": "http",
      "url": "http://localhost:8080/mcp"
    }
  }
}
```

Available tools include:

- `openmemory_query`
- `openmemory_store`
- `openmemory_list`
- `openmemory_get`
- `openmemory_reinforce`

Your IDE assistant can query, store, list, and reinforce memories without you wiring every call manually.

---

## 7. Temporal Knowledge Graph

OpenMemory treats **time** as a first‑class dimension.

### Concepts

- `valid_from` / `valid_to` – truth windows
- auto‑evolution – new facts close previous ones
- confidence decay – old facts fade gracefully
- point‑in‑time queries – “what was true on X?”
- timelines – reconstruct an entity’s history
- change detection – see when something flipped

### Example

```http
POST /api/temporal/fact
{
  "subject": "CompanyX",
  "predicate": "has_CEO",
  "object": "Alice",
  "valid_from": "2021-01-01"
}
```

Then later:

```http
POST /api/temporal/fact
{
  "subject": "CompanyX",
  "predicate": "has_CEO",
  "object": "Bob",
  "valid_from": "2024-04-10"
}
```

Alice’s term is automatically closed; timeline queries stay sane.

---
## 8. CLI (opm)

The `opm` CLI talks directly to the engine / server.

### Install

```bash
cd packages/openmemory-js
npm install
npm run build
npm link   # adds `opm` to your PATH
```

### Usage

```bash
# Start the API server
opm serve

# In another terminal:
opm health
opm add "Recall that I prefer TypeScript over Python" --tags preference
opm query "language preference"
```

### Commands

```bash
opm add "user prefers dark mode" --user u1 --tags prefs
opm query "preferences" --user u1 --limit 5
opm list --user u1
opm delete <id>
opm reinforce <id>
opm stats
```

Useful for scripting, debugging, and non‑LLM pipelines that still want memory.

---

## 9. Architecture (High Level)

OpenMemory uses **Hierarchical Memory Decomposition** with a temporal graph on top.

```mermaid
graph TB
    classDef inputStyle fill:#eceff1,stroke:#546e7a,stroke-width:2px,color:#37474f
    classDef processStyle fill:#e3f2fd,stroke:#1976d2,stroke-width:2px,color:#0d47a1
    classDef sectorStyle fill:#fff3e0,stroke:#f57c00,stroke-width:2px,color:#e65100
    classDef storageStyle fill:#fce4ec,stroke:#c2185b,stroke-width:2px,color:#880e4f
    classDef engineStyle fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px,color:#4a148c
    classDef outputStyle fill:#e8f5e9,stroke:#388e3c,stroke-width:2px,color:#1b5e20
    classDef graphStyle fill:#e1f5fe,stroke:#0277bd,stroke-width:2px,color:#01579b

    INPUT[Input / Query]:::inputStyle
    CLASSIFIER[Sector Classifier]:::processStyle

    EPISODIC[Episodic]:::sectorStyle
    SEMANTIC[Semantic]:::sectorStyle
    PROCEDURAL[Procedural]:::sectorStyle
    EMOTIONAL[Emotional]:::sectorStyle
    REFLECTIVE[Reflective]:::sectorStyle

    EMBED[Embedding Engine]:::processStyle

    SQLITE[(SQLite/Postgres<br/>Memories / Vectors / Waypoints)]:::storageStyle
    TEMPORAL[(Temporal Graph)]:::storageStyle

    subgraph RECALL_ENGINE["Recall Engine"]
        VECTOR[Vector Search]:::engineStyle
        WAYPOINT[Waypoint Graph]:::engineStyle
        SCORING[Composite Scoring]:::engineStyle
        DECAY[Decay Engine]:::engineStyle
    end

    subgraph TKG["Temporal KG"]
        FACTS[Facts]:::graphStyle
        TIMELINE[Timeline]:::graphStyle
    end

    CONSOLIDATE[Consolidation]:::processStyle
    REFLECT[Reflection]:::processStyle
    OUTPUT[Recall + Trace]:::outputStyle

    INPUT --> CLASSIFIER
    CLASSIFIER --> EPISODIC
    CLASSIFIER --> SEMANTIC
    CLASSIFIER --> PROCEDURAL
    CLASSIFIER --> EMOTIONAL
    CLASSIFIER --> REFLECTIVE

    EPISODIC --> EMBED
    SEMANTIC --> EMBED
    PROCEDURAL --> EMBED
    EMOTIONAL --> EMBED
    REFLECTIVE --> EMBED

    EMBED --> SQLITE
    EMBED --> TEMPORAL

    SQLITE --> VECTOR
    SQLITE --> WAYPOINT
    SQLITE --> DECAY

    TEMPORAL --> FACTS
    FACTS --> TIMELINE

    VECTOR --> SCORING
    WAYPOINT --> SCORING
    DECAY --> SCORING
    TIMELINE --> SCORING

    SCORING --> CONSOLIDATE
    CONSOLIDATE --> REFLECT
    REFLECT --> OUTPUT

    OUTPUT -.->|Reinforce| WAYPOINT
    OUTPUT -.->|Salience| DECAY
```

---

## 10. Migration

OpenMemory ships a migration tool to import data from other memory systems.

Supported:

- Mem0
- Zep
- Supermemory

Example:

```bash
cd migrate
python -m migrate --from zep --api-key ZEP_KEY --verify
```

(See `migrate/` and docs for detailed commands per provider.)

---

## 11. Roadmap

- 🧬 Learned sector classifier (trainable on your data)
- 🕸 Federated / clustered memory nodes
- 🤝 Deeper LangGraph / CrewAI / AutoGen integrations
- 🔭 Memory visualizer 2.0
- 🔐 Pluggable encryption at rest

Star the repo to follow along.

---
## 12. Contributing

Issues and PRs are welcome.

- Bugs: https://github.com/CaviraOSS/OpenMemory/issues
- Feature requests: use the GitHub issue templates
- Before large changes, open a discussion or small design PR

---

## 13. License

OpenMemory is licensed under **Apache 2.0**. See [LICENSE](LICENSE) for details.
---

# OpenMemory Quick Start Guide

---

## 🧰 Prerequisites
- **Python users**: Python 3.10+ (users in China can speed up installs with the [Tsinghua mirror](https://pypi.tuna.tsinghua.edu.cn/simple))
- **Node.js users**: Node.js 18.x or later ([nvm](https://github.com/nvm-sh/nvm) is recommended for managing versions)
- **Database**: SQLite by default (no extra install); PostgreSQL is an option for production
- **Backend deployment**: Docker 20+ and Docker Compose (users in China may want a registry mirror such as [Aliyun's](https://cr.console.aliyun.com/))

---

## 📦 Installation

### Python SDK
```bash
pip install openmemory-py -i https://pypi.tuna.tsinghua.edu.cn/simple
```

### Node.js SDK
```bash
npm install openmemory-js --registry=https://registry.npmmirror.com
```

### Backend service (with Web UI)
```bash
# Clone the repository (users in China can substitute a GitHub mirror)
git clone https://github.com/CaviraOSS/OpenMemory.git
cd OpenMemory

# Start the services with Docker Compose
docker compose up --build -d

# Start the full stack including the Web UI
docker compose --profile ui up --build -d
```

---

## 🚀 Basic Usage

### Python: local memory operations
```python
from openmemory.client import Memory

# Initialize the local memory engine
mem = Memory()

# Add a memory (async)
await mem.add("user likes spicy food", user_id="u1")

# Search memories
results = await mem.search("food preferences", user_id="u1")

# Delete a memory
await mem.delete("memory_id")
```

### Node.js: local memory operations
```ts
import { Memory } from "openmemory-js"

const mem = new Memory()
await mem.add("user prefers working at night", { user_id: "u1" })
const results = await mem.search("schedule preferences", { user_id: "u1" })
await mem.delete("memory_id")
```

### Connect an external data source (GitHub example)
```python
# Connect a GitHub repository from Python
github = mem.source("github")
await github.connect(token="your_github_token")
await github.ingest_all(repo="owner/repo")
```

### Verify the backend service
```bash
# Check service status
docker compose ps

# Health check
curl -f http://localhost:8080/health
```

---

> 💡 **Mirror tips for users in China:**  
> 1. Add `-i https://pypi.tuna.tsinghua.edu.cn/simple` when installing Python packages  
> 2. Configure a Docker registry mirror (e.g. Aliyun's container registry accelerator)  
> 3. Add `--registry=https://registry.npmmirror.com` when installing Node.js packages

---

With the steps above you can set up local memory storage or deploy the full service; for the complete feature set, see the official [integration docs](https://openmemory.cavira.app/docs/sdks/python).

---

## Use Case: AI Coding Assistant

A software team building an AI code assistant needs the model to remember each developer's personal coding habits and project-specific rules.

### Without OpenMemory
- Developers re-enter indentation style, naming conventions, and other preferences every time they restart the IDE
- Switching devices (e.g. from a MacBook to an office PC) means manually syncing config files
- Teams have no way to share project-specific code templates and architectural constraints
- The model cannot remember conventions agreed in past conversations (e.g. abbreviation rules for domain terms)
- Each project needs its own config files, so management complexity balloons

### With OpenMemory
- Coding preferences set once (e.g. strict PEP 8 for Python) are automatically persisted
- Keyed by user ID, personal settings sync automatically across devices in VS Code
- A shared project memory automatically records architectural decisions (e.g. "the order service uses CQRS")
- Conventions from past conversations (e.g. "use `user` instead of `customer` in the CRM module") are automatically linked to code context
- Tags manage multi-project configuration, and the SDK matches memories to the current working directory

Core value: OpenMemory gives an AI code assistant persistent memory. Developers configure things once and keep a personalized experience across devices, sessions, and projects, while team knowledge accumulates into a reusable asset.

---
SDK需要基础环境，Node.js环境需16.x+，首次运行需下载模型文件约5GB","3.6+",[52],[58,59,60,61,62,63],"语言模型","图像","开发框架","Agent","其他","数据工具",[65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84],"ai","ai-agents","ai-infrastructure","ai-memory","artificial-intelligence","cognitive-architecture","embeddings","gemini","llm","long-term-memory","memory","memory-engine","memory-retrieval","ollama","openai","openmemory","rag","supermemory","vector-database","one-line",4,"ready","2026-03-27T02:49:30.150509","2026-04-06T05:15:32.465892",[90,95,100,105,109,113],{"id":91,"question_zh":92,"answer_zh":93,"source_url":94},4654,"使用 PostgreSQL 时 user_id 必填导致 VS Code 扩展无法连接如何解决？","需要确保数据库迁移脚本正确应用，user_id 字段在创建表时允许空值或设置默认值。检查后端日志中的具体错误信息，调整数据库模式。例如修改表结构允许 user_id 为空：ALTER TABLE openmemory_waypoints ALTER COLUMN user_id DROP NOT NULL;","https:\u002F\u002Fgithub.com\u002FCaviraOSS\u002FOpenMemory\u002Fissues\u002F63",{"id":96,"question_zh":97,"answer_zh":98,"source_url":99},4655,"Windsurf MCP 中工具名称显示为空如何修复？","工具名称需符合正则表达式 ^[a-zA-Z0-9_-]{1,64}$。将名称从 openmemory.\u003Cname> 改为下划线格式如 openmemory_\u003Cname>，并检查 MCP 服务端的工具注册代码是否符合命名规范。","https:\u002F\u002Fgithub.com\u002FCaviraOSS\u002FOpenMemory\u002Fissues\u002F35",{"id":101,"question_zh":102,"answer_zh":103,"source_url":104},4656,"Antigravity 中使用 Claude 模型时 MCP 导致代理终止如何解决？","更新 OpenMemory 到 v2.1.7 版本，调整 JSON Schema 验证规则：移除 Claude 不兼容的字段（如 $schema、additionalProperties），保留 Gemini 所需字段。修改 schema 校验逻辑为宽松模式：const validator = new Validator({ schema: '7-strict' });","https:\u002F\u002Fgithub.com\u002FCaviraOSS\u002FOpenMemory\u002Fissues\u002F115",{"id":106,"question_zh":107,"answer_zh":108,"source_url":99},4657,"openmemory:\u002F\u002Fconfig 端点报错 'relation \"memories\" does not exist' 如何处理？","检查环境变量 OM_PG_TABLE 是否正确设置，确保数据库查询使用实际表名而非默认值。修改 MCP 服务端代码中数据库查询语句，使用环境变量指定的表名：const table = process.env.OM_PG_TABLE || 'memories';",{"id":110,"question_zh":111,"answer_zh":112,"source_url":94},4658,"VS Code 扩展显示 'Connected successfully' 但状态仍为断开如何排查？","查看服务端日志确认数据库约束问题，检查 
IDE 会话结束时的错误日志。尝试清除浏览器缓存并重新连接，若问题持续可临时禁用 user_id 强制约束进行测试。",{"id":114,"question_zh":115,"answer_zh":116,"source_url":99},4659,"构建 OpenMemory 时出现 TypeScript 类型错误如何解决？","更新 tsconfig.json 配置，添加类型校验规则：\n{\n  \"compilerOptions\": {\n    \"strict\": true,\n    \"noImplicitAny\": true,\n    \"strictNullChecks\": true\n  }\n}\n或临时禁用特定类型检查：\u002F\u002F @ts-ignore",[118,123,128,133,138,143,148,153],{"id":119,"version":120,"summary_zh":121,"released_at":122},104149,"v1.3.0","### Changelog\r\n\r\n#### api simplification\r\n- **python sdk**: simplified to zero-config `Memory()` api matching javascript\r\n  - `from openmemory.client import Memory` → `mem = Memory()`\r\n  - works out of the box with sensible defaults (in-memory sqlite, fast tier, synthetic embeddings)\r\n  - optional configuration via environment variables or constructor\r\n  - breaking change: moved from `OpenMemory` class to `Memory` class\r\n\r\n#### benchmark suite rewrite\r\n- implemented comprehensive benchmark suite in `temp\u002Fbenchmarks\u002F`\r\n  - typescript-based using `tsx` for execution\r\n  - supports longmemeval dataset evaluation\r\n  - multi-backend comparison (openmemory, mem0, zep, supermemory)\r\n- created `src\u002Fmain.ts` consolidated benchmark runner\r\n  - environment validation\r\n  - backend instantiation checks\r\n  - sequential benchmark execution with detailed logging\r\n\r\n### ✨ features\r\n\r\n#### core improvements\r\n- **`Memory.wipe()`**: added database wipe functionality for testing\r\n  - `clear_all` implementation in `db.ts` for postgres and sqlite\r\n  - clears memories, vectors, waypoints, and users tables\r\n  - useful for benchmark isolation and test cleanup\r\n\r\n- **environment variable overrides**:\r\n  - `OM_OLLAMA_MODEL`: override ollama embedding model\r\n  - `OM_OPENAI_MODEL`: override openai embedding model\r\n  - `OM_VEC_DIM`: configure vector dimension (critical for embedding compatibility)\r\n  - `OM_DB_PATH`: sqlite database path (supports 
`:memory:`)\r\n\r\n#### vector store enhancements\r\n- added comprehensive logging to `PostgresVectorStore`\r\n  - logs vector storage operations with id, sector, dimension\r\n  - logs search operations with sector and result count\r\n  - aids in debugging retrieval issues\r\n\r\n### 🐛 bug fixes\r\n\r\n- **embedding configuration**:\r\n  - fixed `models.ts` to respect `OM_OLLAMA_MODEL` environment variable\r\n  - resolved dimension mismatch issues (768 vs 1536) for embeddinggemma\r\n  - ensured `OM_TIER=deep` uses semantic embeddings (not synthetic fallback)\r\n\r\n- **benchmark data isolation**:\r\n  - implemented proper database reset between benchmark runs\r\n  - fixed simhash collision issues causing cross-user contamination\r\n  - added `resetUser()` functionality calling `Memory.wipe()`\r\n\r\n- **configuration loading**:\r\n  - fixed dotenv timing issues in benchmark suite\r\n  - ensured environment variables load before openmemory-js initialization\r\n  - corrected dataset path resolution (`longmemeval_s.json`)\r\n\r\n### 📚 documentation\r\n\r\n- **comprehensive readme updates**:\r\n  - root `README.md`: language-agnostic, showcases both python & javascript sdks\r\n  - `packages\u002Fopenmemory-js\u002FREADME.md`: complete api reference, mcp integration, examples\r\n  - `packages\u002Fopenmemory-py\u002FREADME.md`: zero-config usage, all embedding providers\r\n\r\n- **api documentation**:\r\n  - environment variables with descriptions\r\n  - cognitive sectors explanation\r\n  - performance tiers breakdown\r\n  - embedding provider configurations\r\n\r\n### 🔧 internal improvements\r\n\r\n- **type safety**: added lint error handling in benchmark adapters\r\n- **code organization**: separated generator, judge, and backend interfaces\r\n- **debug tooling**: created dimension check script (`check_dim.ts`)\r\n- **logging standardization**: consistent `[Component]` prefix pattern\r\n\r\n### ⚠️ breaking changes\r\n\r\n- python sdk now uses `from openmemory.client 
import Memory` instead of `from openmemory import OpenMemory`\r\n- `Memory()` constructor signature changed to accept optional parameters (was required)\r\n- benchmark suite moved to typescript (was python)\r\n\r\n---\r\n* Feature - Add Frontend Docker Image by @dflor003 in https:\u002F\u002Fgithub.com\u002FCaviraOSS\u002FOpenMemory\u002Fpull\u002F100\r\n\r\n## New Contributors\r\n* @dflor003 made their first contribution in https:\u002F\u002Fgithub.com\u002FCaviraOSS\u002FOpenMemory\u002Fpull\u002F100\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FCaviraOSS\u002FOpenMemory\u002Fcompare\u002Fv1.2.3...v1.3.0","2025-12-20T16:45:32",{"id":124,"version":125,"summary_zh":126,"released_at":127},104150,"v1.2.3","## 1.2.3 - 2025-12-14\r\n\r\n### Added\r\n\r\n- **Temporal Filtering**: Enables precise time-based memory retrieval\r\n  - Added `startTime` and `endTime` filters to `query` method across Backend, JS SDK, and Python SDK.\r\n  - Allows filtering memories by creation time range.\r\n  - Fully integrated into `hsg_query` logic.\r\n\r\n### Fixed\r\n\r\n- **JavaScript SDK Types**: Fixed `IngestURLResult` import error and `v.v` property access bug in `VectorStore` integration.\r\n- **Python SDK Filtering**: Fixed missing implementation of `user_id` and temporal filters in `hsg_query` loop.\r\n\r\n## What's Changed\r\n* fix: Add PATCH to CORS allowed methods by @aziham in https:\u002F\u002Fgithub.com\u002FCaviraOSS\u002FOpenMemory\u002Fpull\u002F93\r\n\r\n## New Contributors\r\n* @aziham made their first contribution in https:\u002F\u002Fgithub.com\u002FCaviraOSS\u002FOpenMemory\u002Fpull\u002F93\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FCaviraOSS\u002FOpenMemory\u002Fcompare\u002Fv1.2.2...v1.2.3","2025-12-12T15:32:37",{"id":129,"version":130,"summary_zh":131,"released_at":132},104151,"v1.2.2","### Fixed\r\n\r\n- **MCP Server Path Resolution**: Fixed ENOENT error in stdio mode (Claude Desktop)\r\n  - Enforced absolute path resolution for 
SQLite database\r\n  - Ensures correct data directory creation regardless of working directory\r\n  - Critical fix for local desktop client integration\r\n\r\n- **VectorStore Refactor**: Fixed build regressions in backend\r\n  - Migrated deprecated `q` vector operations to `VectorStore` interface\r\n  - Fixed `users.ts`, `memory.ts`, `graph.ts`, `mcp.ts`, and `decay.ts`\r\n  - Removed partial SQL updates in favor of unified vector store methods\r\n\r\n### Added\r\n\r\n- **Valkey VectorStore Enhancements**: Improved compatibility and performance\r\n  - Refined vector storage implementation for Valkey backend\r\n  - Optimized vector retrieval and storage operations\r\n\r\n### Changed\r\n\r\n- **IDE Extension**:\r\n  - Updates to Dashboard UI (`DashboardPanel.ts`) and extension activation logic (`extension.ts`)\r\n  - Configuration and dependency updates\r\n\r\n- **Python SDK**:\r\n  - Refinements to embedding logic (`embed.py`)\r\n  - Project configuration updates in `pyproject.toml`\r\n\r\n- **Backend Maintenance**:\r\n  - Dockerfile updates for improved containerization\r\n  - Updates to CLI tool (`bin\u002Fopm.js`)\r\n## New Contributors\r\n* @ajitam made their first contribution in https:\u002F\u002Fgithub.com\u002FCaviraOSS\u002FOpenMemory\u002Fpull\u002F69\r\n* @fparrav made their first contribution in https:\u002F\u002Fgithub.com\u002FCaviraOSS\u002FOpenMemory\u002Fpull\u002F80\r\n* @DAESA24 made their first contribution in https:\u002F\u002Fgithub.com\u002FCaviraOSS\u002FOpenMemory\u002Fpull\u002F83\r\n* @oantoshchenko made their first contribution in https:\u002F\u002Fgithub.com\u002FCaviraOSS\u002FOpenMemory\u002Fpull\u002F84\r\n* @therexone made their first contribution in https:\u002F\u002Fgithub.com\u002FCaviraOSS\u002FOpenMemory\u002Fpull\u002F85\r\n\r\n**Full Changelog**: 
https:\u002F\u002Fgithub.com\u002FCaviraOSS\u002FOpenMemory\u002Fcompare\u002F1.2.1...v1.2.2\r\n\r\n![openmemorye](https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F79a9e6b1-2369-4123-b105-c227ec6780da)\r\n","2025-12-06T14:42:40",{"id":134,"version":135,"summary_zh":136,"released_at":137},104152,"1.2.1","## 1.2.1 - Standalone Sdk\r\n\r\n### Added\r\n\r\n- **Python SDK (`sdk-py\u002F`)**: SDK Overhaul, it can now perform as a standalone version of OpenMemory\r\n  - Full feature parity with Backend\r\n  - Local-first architecture with SQLite backend\r\n  - Multi-sector memory (episodic, semantic, procedural, emotional, reflective)\r\n  - All embedding providers: synthetic, OpenAI, Gemini, Ollama, AWS\r\n  - Advanced features: decay, compression, reflection\r\n  - Comprehensive test suite (`sdk-py\u002Ftests\u002Ftest_sdk_py.py`)\r\n\r\n- **JavaScript SDK Enhancements (`sdk-js\u002F`)**: SDK Overhaul, it can now perform as a standalone version of OpenMemory\r\n  - Full feature parity with Backend\r\n  - Local-first architecture with SQLite backend\r\n  - Multi-sector memory (episodic, semantic, procedural, emotional, reflective)\r\n  - All embedding providers: synthetic, OpenAI, Gemini, Ollama, AWS\r\n  - Advanced features: decay, compression, reflection\r\n\r\n- **Examples**: Complete rewrite of both JS and Python examples\r\n  - `examples\u002Fjs-sdk\u002Fbasic-usage.js` - CRUD operations\r\n  - `examples\u002Fjs-sdk\u002Fadvanced-features.js` - Decay, compression, reflection\r\n  - `examples\u002Fjs-sdk\u002Fbrain-sectors.js` - Multi-sector demonstration\r\n  - `examples\u002Fpy-sdk\u002Fbasic_usage.py` - Python CRUD operations\r\n  - `examples\u002Fpy-sdk\u002Fadvanced_features.py` - Advanced configuration\r\n  - `examples\u002Fpy-sdk\u002Fbrain_sectors.py` - Sector demonstration\r\n  - `examples\u002Fpy-sdk\u002Fperformance_benchmark.py` - Performance testing\r\n\r\n- **Tests**: Comprehensive test suites for both SDKs\r\n  - 
`tests\u002Fjs-sdk\u002Fjs-sdk.test.js` - Full SDK validation\r\n  - `tests\u002Fpy-sdk\u002Ftest-sdk.py` - Python SDK validation\r\n  - Tests cover: initialisation, CRUD, sectors, advanced features\r\n\r\n- **Architecture Documentation**\r\n  - Mermaid diagram in main README showing complete data flow\r\n  - Covers all 5 cognitive sectors\r\n  - Shows embedding engine, storage layer, and recall engine\r\n  - Includes temporal knowledge graph integration\r\n  - Node.js script to regenerate diagrams","2025-11-23T12:20:21",{"id":139,"version":140,"summary_zh":141,"released_at":142},104153,"1.2.0","# What's New\r\n* Web UI to control OpenMemory\r\n* HYBRID Tier Performance Mode\r\n* Memory Compression Engine\r\n\r\n## What's Changed\r\n* Decay system\r\n* perf(vector): Optimized the aggregateVectors function by @DKB0512 in https:\u002F\u002Fgithub.com\u002FCaviraOSS\u002FOpenMemory\u002Fpull\u002F26\r\n* perf(embedding): Optimize embedWithLocal by @DKB0512 in https:\u002F\u002Fgithub.com\u002FCaviraOSS\u002FOpenMemory\u002Fpull\u002F29\r\n* perf(chunk): Optimized the combineChunk function by @DKB0512 in https:\u002F\u002Fgithub.com\u002FCaviraOSS\u002FOpenMemory\u002Fpull\u002F27\r\n* Add permissions for content read access by @recabasic in https:\u002F\u002Fgithub.com\u002FCaviraOSS\u002FOpenMemory\u002Fpull\u002F31\r\n\r\n## New Contributors\r\n* @recabasic made their first contribution in https:\u002F\u002Fgithub.com\u002FCaviraOSS\u002FOpenMemory\u002Fpull\u002F31\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FCaviraOSS\u002FOpenMemory\u002Fcompare\u002F1.1.1...1.2.0","2025-11-05T16:42:03",{"id":144,"version":145,"summary_zh":146,"released_at":147},104154,"1.1.1","# Changelog\r\n\r\n### Added\r\n\r\n- **Memory Compression Engine**: Auto-compresses chat\u002Fmemory content to reduce tokens and latency\r\n\r\n  - 5 compression algorithms: whitespace, filler, semantic, aggressive, balanced\r\n  - Auto-selects optimal algorithm based on content 
analysis\r\n  - Batch compression support for multiple texts\r\n  - Live savings metrics (tokens saved, latency reduction, compression ratio)\r\n  - Real-time statistics tracking across all compressions\r\n  - Integrated into memory storage with automatic compression\r\n  - REST API endpoints: `\u002Fapi\u002Fcompression\u002Fcompress`, `\u002Fapi\u002Fcompression\u002Fbatch`, `\u002Fapi\u002Fcompression\u002Fanalyze`, `\u002Fapi\u002Fcompression\u002Fstats`\r\n  - Example usage in `examples\u002Fbackend\u002Fcompression-examples.mjs`\r\n\r\n- **VS Code Extension with AI Auto-Link**\r\n\r\n  - Auto-links OpenMemory to 6 AI tools: Cursor, Claude, Windsurf, GitHub Copilot, Codex\r\n  - Dual mode support: Direct HTTP or MCP (Model Context Protocol)\r\n  - Status bar UI with clickable menu for easy control\r\n  - Toggle between HTTP\u002FMCP mode in real-time\r\n  - Zero-config setup - automatically detects backend and writes configs\r\n  - Performance optimizations:\r\n    - **ESH (Event Signature Hash)**: Deduplicates ~70% redundant saves\r\n    - **HCR (Hybrid Context Recall)**: Sub-80ms queries with sector filtering\r\n    - **MVC (Micro-Vector Cache)**: 32-entry LRU cache saves ~60% embedding calls\r\n  - Settings for backend URL, API key, MCP mode toggle\r\n  - Postinstall script for automatic setup\r\n\r\n- **API Authentication & Security**\r\n\r\n  - API key authentication with timing-safe comparison\r\n  - Rate limiting middleware (configurable, default 100 req\u002Fmin)\r\n  - Compact 75-line auth implementation\r\n  - Environment-based configuration\r\n\r\n- **CI\u002FCD**\r\n  - GitHub Action for automated Docker build testing\r\n  - Ensures Docker images build successfully on every push\r\n\r\n### Changed\r\n\r\n- Optimized all compression code for maximum efficiency","2025-10-30T14:47:46",{"id":149,"version":150,"summary_zh":151,"released_at":152},104155,"1.1.0","* Add Pluggable vector dbs and PostgreSQL support\r\n\r\n**Full Changelog**: 
https:\u002F\u002Fgithub.com\u002FCaviraOSS\u002FOpenMemory\u002Fcompare\u002F1.0.0...1.1.0","2025-10-26T10:04:57",{"id":154,"version":155,"summary_zh":156,"released_at":157},104156,"1.0.0","## What's Changed\r\n* Add Model Context Protocol (MCP)\r\n* Add Tag and Metadata Filtering to HSG + API Query by @ammesonb in https:\u002F\u002Fgithub.com\u002FCaviraOSS\u002FOpenMemory\u002Fpull\u002F1\r\n* refactor: Dockerfile to install all dependencies and prune dev by @josephgoksu in https:\u002F\u002Fgithub.com\u002FCaviraOSS\u002FOpenMemory\u002Fpull\u002F4\r\n* Fix Docker build by @pc-quiknode in https:\u002F\u002Fgithub.com\u002FCaviraOSS\u002FOpenMemory\u002Fpull\u002F5\r\n\r\n## New Contributors\r\n* @ammesonb made their first contribution in https:\u002F\u002Fgithub.com\u002FCaviraOSS\u002FOpenMemory\u002Fpull\u002F1\r\n* @josephgoksu made their first contribution in https:\u002F\u002Fgithub.com\u002FCaviraOSS\u002FOpenMemory\u002Fpull\u002F4\r\n* @pc-quiknode made their first contribution in https:\u002F\u002Fgithub.com\u002FCaviraOSS\u002FOpenMemory\u002Fpull\u002F5\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FCaviraOSS\u002FOpenMemory\u002Fcommits\u002F1.0.0","2025-10-26T07:29:02",[159,167,176,184,192,203],{"id":160,"name":161,"github_repo":162,"description_zh":163,"stars":164,"difficulty_score":34,"last_commit_at":165,"category_tags":166,"status":86},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui is a web interface built on Gradio that makes it easy to run and use the powerful Stable Diffusion image-generation model locally. It addresses the original model's pain points (command-line only, a high barrier to entry, scattered features) by consolidating the complex AI image-generation workflow into an intuitive, easy-to-use graphical platform.\n\nCasual creators who want to get started quickly, designers who need fine control over image details, and developers and researchers who want to explore the model's potential can all benefit. Its core strength is sheer feature richness: beyond the basic text-to-image, image-to-image, inpainting, and outpainting modes, it pioneered advanced features such as attention adjustment, prompt matrices, negative prompts, and “hires fix”. It also bundles face-restoration tools such as GFPGAN and CodeFormer, supports multiple neural-network upscalers, and can be extended without limit through its plugin system. Even on devices with limited VRAM, stable-diffusion-webui offers optimization options that put high-quality AI art within reach.",162132,"2026-04-05T11:01:52",[60,59,61],{"id":168,"name":169,"github_repo":170,"description_zh":171,"stars":172,"difficulty_score":173,"last_commit_at":174,"category_tags":175,"status":86},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code is a high-performance optimization system built for AI coding assistants such as Claude Code, Codex, and Cursor. It is more than a set of config files: it is a complete framework honed through long-term real-world use, addressing the core pain points AI agents face in actual development, such as inefficiency, lost memory, security risks, and the lack of continuous learning.\n\nBy introducing modular skills, intuition enhancement, persistent memory, and built-in security scanning, everything-claude-code significantly improves AI performance on complex tasks and helps developers build more stable, smarter production-grade agents. Its distinctive “research-first” development philosophy and token-usage optimizations make responses faster and cheaper while defending against potential attack vectors.\n\nIt is a good fit for software developers, AI researchers, and engineering teams that want deeply customized AI workflows. Whether you are building in a large codebase or need AI help with security audits and automated testing, everything-claude-code provides solid underlying support. An open-source project that once won an Anthropic hackathon award, it combines multi-language support with a rich set of practical hooks, letting the AI truly grow into one that understands",138956,2,"2026-04-05T11:33:21",[60,61,58],{"id":177,"name":178,"github_repo":179,"description_zh":180,"stars":181,"difficulty_score":173,"last_commit_at":182,"category_tags":183,"status":86},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI is a powerful, highly modular visual AI engine built for designing and executing complex Stable Diffusion image-generation pipelines. Instead of writing code, users work in an intuitive node-graph interface, wiring functional modules together to build personalized generation pipelines.\n\nThis design addresses the complexity and inflexibility of configuring advanced AI image workflows. Users without a programming background can freely combine models, tune parameters, and preview results in real time, handling everything from basic text-to-image up to complex multi-step high-resolution refinement. ComfyUI is broadly compatible: it runs on Windows, macOS, and Linux, supports NVIDIA, AMD, Intel, and Apple Silicon hardware, and was among the first to support cutting-edge models such as SDXL, Flux, and SD3.\n\nWhether you are a researcher or developer probing what the algorithms can do, or a designer or advanced AI-art enthusiast chasing maximum creative freedom, ComfyUI has you covered. Its modular architecture lets the community keep extending it, making it one of the most flexible open-source diffusion-model tools with the richest ecosystem, helping users turn ideas into results efficiently.",107662,"2026-04-03T11:11:01",[60,59,61],{"id":185,"name":186,"github_repo":187,"description_zh":188,"stars":189,"difficulty_score":173,"last_commit_at":190,"category_tags":191,"status":86},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat is a lightweight, blazing-fast AI assistant that offers a smooth, cross-platform experience for interacting with large models. It solves two common pain points: losing conversation continuity when switching between devices, and having no unified way to manage many AI models. For daily work, study, or creative brainstorming, NextChat connects users to AI services seamlessly from the web, iOS, Android, Windows, macOS, or Linux.\n\nIt suits ordinary users, students, and professionals, as well as enterprise teams that need private deployment. For developers it also offers convenient self-hosting, including one-click deployment to platforms such as Vercel or Zeabur.\n\nNextChat's core strength is broad model compatibility: it natively supports mainstream models such as Claude, DeepSeek, GPT-4, and Gemini Pro, so users can switch between AI capabilities within a single interface. It was also an early adopter of the MCP (Model Context Protocol), strengthening context handling. For enterprise users, NextChat offers a professional edition with brand customization, fine-grained permission control, internal knowledge-base integration, and security auditing, meeting high standards for data privacy and governance.",87618,"2026-04-05T07:20:52",[60,58],{"id":193,"name":194,"github_repo":195,"description_zh":196,"stars":197,"difficulty_score":173,"last_commit_at":198,"category_tags":199,"status":86},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners is a structured introductory machine-learning curriculum from Microsoft, designed to help users with no prior background master classic machine learning. The course lays out a 12-week path with 26 concise lessons and 52 accompanying quizzes, covering the full journey from basic concepts to practical application, and solves the beginner's problem of facing a huge body of knowledge with no structured guidance.\n\nDevelopers looking to change tracks, researchers who need to fill in algorithmic background, and hobbyists curious about AI can all benefit. The course pairs clear theory with hands-on practice, so learners build a solid skill base step by step. A standout feature is its strong multilingual support: an automated pipeline provides editions in more than 50 languages, including Simplified Chinese, greatly lowering the barrier for learners around the world. The project is developed collaboratively in the open, with an active community and continuously updated content, so learners get current and accurate material. If you are looking for a clear, friendly, professional path into machine learning, ML-For-Beginners is an ideal starting point.",84991,"2026-04-05T10:45:23",[59,63,200,201,61,62,58,60,202],"Video","Plugins","Audio",{"id":204,"name":205,"github_repo":206,"description_zh":207,"stars":208,"difficulty_score":34,"last_commit_at":209,"category_tags":210,"status":86},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow is a leading open-source retrieval-augmented generation (RAG) engine that builds a more accurate, reliable context layer for large language models. It combines state-of-the-art RAG techniques with agent capabilities, not only extracting knowledge efficiently from all kinds of documents but also letting models reason and execute tasks over that knowledge.\n\nHallucination and stale knowledge are common pain points in LLM applications. By deeply parsing complex document structures (tables, charts, mixed layouts), RAGFlow significantly improves retrieval accuracy, reducing made-up answers and keeping responses both grounded and current. Its built-in agent mechanism goes further, enabling the system not just to answer questions but to plan steps to solve complex problems autonomously.\n\nIt is well suited to developers, enterprise engineering teams, and AI researchers. Whether you want to stand up a private knowledge-base Q&A system quickly or are exploring how large models can land in vertical domains, RAGFlow can help. It offers a visual workflow editor and flexible APIs, lowering the barrier for users without an algorithms background while meeting professional developers' needs for deep customization. Released under the Apache 2.0 license, it is becoming an important bridge between general-purpose large models and domain-specific knowledge.",77062,"2026-04-04T04:44:48",[61,59,60,58,62]]