[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"tool-GreatScottyMac--context-portal":3,"similar-GreatScottyMac--context-portal":89},{"id":4,"github_repo":5,"name":6,"description_en":7,"description_zh":8,"ai_summary_zh":8,"readme_en":9,"readme_zh":10,"quickstart_zh":11,"use_case_zh":12,"hero_image_url":13,"owner_login":14,"owner_name":15,"owner_avatar_url":16,"owner_bio":15,"owner_company":15,"owner_location":15,"owner_email":15,"owner_twitter":15,"owner_website":15,"owner_url":17,"languages":18,"stars":27,"forks":28,"last_commit_at":29,"license":30,"difficulty_score":31,"env_os":32,"env_gpu":33,"env_ram":34,"env_deps":35,"category_tags":41,"github_topics":15,"view_count":45,"oss_zip_url":15,"oss_zip_packed_at":15,"status":46,"created_at":47,"updated_at":48,"faqs":49,"releases":78},239,"GreatScottyMac\u002Fcontext-portal","context-portal","Context Portal (ConPort): A memory bank MCP server building a project-specific knowledge graph to supercharge AI assistants. Enables powerful Retrieval Augmented Generation (RAG) for context-aware development in your IDE.","Context Portal（简称 ConPort）是一个面向开发者的 AI 记忆库工具。它本质上是一个 MCP（Model Context Protocol）服务器，能够为你的项目建立专属的知识图谱，帮助 AI 助手更好地理解你的代码库。\n\n在实际开发中，AI 助手往往对你的项目架构、设计决策和业务逻辑一无所知，导致给出的建议不够准确。ConPort 正是为了解决这个痛点而设计的——它会系统性地记录项目中的关键信息，包括架构设计、技术选型、待办事项等，并建立它们之间的关联关系。当 AI 需要了解项目背景时，可以快速从记忆库中检索相关信息，从而提供更精准的上下文感知建议。\n\nConPort 支持语义搜索和向量嵌入，能够实现真正的 RAG（检索增强生成）体验。它使用 SQLite 作为存储，每个项目独立建库，数据结构化且易于查询。目前已支持 Roo Code、CLine、Windsurf、Cursor 等主流 AI 编程助手。\n\n简单来说，如果你希望在 IDE 中获得更懂你项目的 AI 辅助，或者想让团队协作时的 AI 上下文保持一致，ConPort 是一个值得尝试的解决方案。","\u003Cdiv align=\"center\">\n\n\u003Cbr>\n\n# Context Portal MCP (ConPort)\n\n## (It's a memory bank!)\n\n\u003Cbr>\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FGreatScottyMac_context-portal_readme_238a0831dbd5.png\" alt=\"Roo Code Logo\" height=\"40\"\u002F>&nbsp;&nbsp;&nbsp;\n\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FGreatScottyMac_context-portal_readme_c658c89fa8a2.png\" alt=\"CLine Logo\" height=\"40\"\u002F>&nbsp;&nbsp;&nbsp;\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FGreatScottyMac_context-portal_readme_c985f355e8b4.png\" alt=\"Windsurf Cascade Logo\" height=\"40\"\u002F>&nbsp;&nbsp;&nbsp;\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FGreatScottyMac_context-portal_readme_05034a08a002.png\" alt=\"Cursor IDE Logo\" height=\"40\"\u002F>\n\n\u003Cbr>\n\nA database-backed Model Context Protocol (MCP) server for managing structured project context, designed to be used by AI assistants and developer tools within IDEs and other interfaces.\n\n\u003C\u002Fdiv>\n\n\u003Cbr>\n\n## What is Context Portal MCP server (ConPort)?\n\nContext Portal (ConPort) is your project's **memory bank**. It's a tool that helps AI assistants understand your specific software project better by storing important information like decisions, tasks, and architectural patterns in a structured way. Think of it as building a project-specific knowledge base that the AI can easily access and use to give you more accurate and helpful responses.\n\n**What it does:**\n\n- Keeps track of project decisions, progress, and system designs.\n- Stores custom project data (like glossaries or specs).\n- Helps AI find relevant project information quickly (like a smart search).\n- Enables AI to use project context for better responses (RAG).\n- More efficient for managing, searching, and updating context compared to simple text file-based memory banks.\n\nConPort provides a robust and structured way for AI assistants to store, retrieve, and manage various types of project context. It effectively builds a **project-specific knowledge graph**, capturing entities like decisions, progress, and architecture, along with their relationships. 
This structured knowledge base, enhanced by **vector embeddings** for semantic search, then serves as a powerful backend for **Retrieval Augmented Generation (RAG)**, enabling AI assistants to access precise, up-to-date information for more context-aware and accurate responses.\n\nIt replaces older file-based context management systems by offering a more reliable and queryable database backend (SQLite per workspace). ConPort is designed to be a generic context backend, compatible with various IDEs and client interfaces that support MCP.\n\nKey features include:\n\n- Structured context storage using SQLite (one DB per workspace, automatically created).\n- MCP server (`context_portal_mcp`) built with Python\u002FFastAPI.\n- A comprehensive suite of defined MCP tools for interaction (see \"Available ConPort Tools\" below).\n- Multi-workspace support via `workspace_id`.\n- Primary deployment mode: STDIO for tight IDE integration.\n- Enables building a dynamic **project knowledge graph** with explicit relationships between context items.\n- Includes **vector data storage** and **semantic search** capabilities to power advanced RAG.\n- Serves as an ideal backend for **Retrieval Augmented Generation (RAG)**, providing AI with precise, queryable project memory.\n- Provides structured context that AI assistants can leverage for **prompt caching** with compatible LLM providers.\n- Manages database schema evolution using **Alembic migrations**, ensuring seamless updates and data integrity.\n\n## Prerequisites\n\nBefore you begin, ensure you have the following installed:\n\n- **Python:** Version 3.8 or higher is recommended.\n  - [Download Python](https:\u002F\u002Fwww.python.org\u002Fdownloads\u002F)\n  - Ensure Python is added to your system's PATH during installation (especially on Windows).\n- **uv:** (Highly Recommended) A fast Python environment and package manager. 
Using `uv` significantly simplifies virtual environment creation and dependency installation.\n  - [Install uv](https:\u002F\u002Fgithub.com\u002Fastral-sh\u002Fuv#installation)\n\n## Installation and Configuration (Recommended)\n\nThe recommended way to install and run ConPort is by using `uvx` to execute the package directly from PyPI. This method avoids the need to manually create and manage virtual environments.\n\n### `uvx` Configuration (Recommended for most IDEs)\n\nIn your MCP client settings (e.g., `mcp_settings.json`), use the following configuration:\n\n```json\n{\n  \"mcpServers\": {\n    \"conport\": {\n      \"command\": \"uvx\",\n      \"args\": [\n        \"--from\",\n        \"context-portal-mcp\",\n        \"conport-mcp\",\n        \"--mode\",\n        \"stdio\",\n        \"--workspace_id\",\n        \"${workspaceFolder}\",\n        \"--log-file\",\n        \".\u002Flogs\u002Fconport.log\",\n        \"--log-level\",\n        \"INFO\"\n      ]\n    }\n  }\n}\n```\n\n- **`command`**: `uvx` handles the environment for you.\n- **`args`**: Contains the arguments to run the ConPort server.\n- `${workspaceFolder}`: This IDE variable is used to automatically provide the absolute path of the current project workspace.\n- `--log-file`: Optional: Path to a file where server logs will be written. If not provided, logs are directed to `stderr` (console). Useful for persistent logging and debugging server behavior.\n- `--log-level`: Optional: Sets the minimum logging level for the server. Valid choices are `DEBUG`, `INFO`, `WARNING`, `ERROR`, `CRITICAL`. Defaults to `INFO`. Set to `DEBUG` for verbose output during development or troubleshooting.\n\n> Important: Many IDEs do not expand `${workspaceFolder}` when launching MCP servers. 
Use one of these safe options:\n> 1) Provide an absolute path for `--workspace_id`.\n> 2) Omit `--workspace_id` at launch and rely on per-call `workspace_id` (recommended if your client provides it on every call).\n\nAlternative configuration (no `--workspace_id` at launch):\n\n```json\n{\n  \"mcpServers\": {\n    \"conport\": {\n      \"command\": \"uvx\",\n      \"args\": [\n        \"--from\",\n        \"context-portal-mcp\",\n        \"conport-mcp\",\n        \"--mode\",\n        \"stdio\",\n        \"--log-file\",\n        \".\u002Flogs\u002Fconport.log\",\n        \"--log-level\",\n        \"INFO\"\n      ]\n    }\n  }\n}\n```\n\nIf you omit `--workspace_id`, the server will skip pre-initialization and initialize the database on the first tool call using the `workspace_id` provided in that call.\n\n\u003Cbr>\n\n## Installation for Developers (from Git Repository)\n\nThe most appropriate way to develop and test ConPort is to run it in your IDE as an MCP server using the configuration above. 
This exercises STDIO mode and real client behavior.\n\nIf you need to run against a local checkout and virtualenv, you can configure your MCP client to launch the dev server via `uv run` and your `.venv\u002Fbin\u002Fpython`:\n\n```json\n{\n  \"mcpServers\": {\n    \"conport\": {\n      \"command\": \"uv\",\n      \"args\": [\n        \"run\",\n        \"--python\",\n        \".venv\u002Fbin\u002Fpython\",\n        \"--directory\",\n        \"\u003Cpath to context-portal repo>\",\n        \"conport-mcp\",\n        \"--mode\",\n        \"stdio\",\n        \"--log-file\",\n        \".\u002Flogs\u002Fconport-dev.log\",\n        \"--log-level\",\n        \"DEBUG\"\n      ],\n      \"disabled\": false\n    }\n  }\n}\n```\n\nNotes:\n- Set `--directory` to your repo path; this uses your local checkout and venv interpreter.\n- Logs go to `.\u002Flogs\u002Fconport-dev.log` with `DEBUG` verbosity.\n\n### Local environment setup\n\nSet up for development or contribution via the Git repo.\n\n1. **Clone the repository**\n   ```bash\n   git clone https:\u002F\u002Fgithub.com\u002FGreatScottyMac\u002Fcontext-portal.git\n   cd context-portal\n   ```\n\n2. **Create a virtual environment**\n   ```bash\n   uv venv\n   ```\n   Activate it using your shell’s standard activation (e.g., `source .venv\u002Fbin\u002Factivate` on macOS\u002FLinux).\n\n3. **Install dependencies**\n   ```bash\n   uv pip install -r requirements.txt\n   ```\n\n4. **Run in your IDE (recommended)**\n   Configure your IDE’s MCP settings using the \"uvx Configuration\" or the dev `uv run` configuration shown above. This is the most representative test of ConPort in STDIO mode.\n\n5. **Optional: CLI help**\n   ```bash\n   uv run python src\u002Fcontext_portal_mcp\u002Fmain.py --help\n   ```\n\nNotes:\n- For `--workspace_id` behavior and IDE path handling, see the guidance under the \"uvx Configuration\" section above. 
Many IDEs do not expand `${workspaceFolder}`.\n\n\u003Cbr>\n\nFor pre-upgrade cleanup, including clearing Python bytecode cache, please refer to the [v0.2.4_UPDATE_GUIDE.md](v0.2.4_UPDATE_GUIDE.md#1-pre-upgrade-cleanup).\n\n## Usage with LLM Agents (Custom Instructions)\n\nConPort's effectiveness with LLM agents is significantly enhanced by providing specific custom instructions or system prompts to the LLM. This repository includes tailored strategy files for different environments:\n\n- **For Roo Code:**\n\n  - [`roo_code_conport_strategy`](https:\u002F\u002Fgithub.com\u002FGreatScottyMac\u002Fcontext-portal\u002Fblob\u002Fmain\u002Fconport-custom-instructions\u002Froo_code_conport_strategy): Contains detailed instructions for LLMs operating within the Roo Code VS Code extension, guiding them on how to use ConPort tools for context management.\n\n  \u003Cbr>\n\n- **For CLine:**\n\n  - [`cline_conport_strategy`](https:\u002F\u002Fgithub.com\u002FGreatScottyMac\u002Fcontext-portal\u002Fblob\u002Fmain\u002Fconport-custom-instructions\u002Fcline_conport_strategy): Contains detailed instructions for LLMs operating within the Cline VS Code extension, guiding them on how to use ConPort tools for context management.\n\n  \u003Cbr>\n\n- **For Windsurf Cascade:**\n\n  - [`cascade_conport_strategy`](https:\u002F\u002Fgithub.com\u002FGreatScottyMac\u002Fcontext-portal\u002Fblob\u002Fmain\u002Fconport-custom-instructions\u002Fcascade_conport_strategy): Specific guidance for LLMs integrated with the Windsurf Cascade environment. 
_Important_: When initiating a session in Cascade, it is necessary to explicitly tell the LLM:\n\n  ```\n  Initialize according to custom instructions\n  ```\n\n- **For General\u002FPlatform-Agnostic Use:**\n\n  - [`generic_conport_strategy`](https:\u002F\u002Fgithub.com\u002FGreatScottyMac\u002Fcontext-portal\u002Fblob\u002Fmain\u002Fconport-custom-instructions\u002Fgeneric_conport_strategy): Provides a platform-agnostic set of instructions for any MCP-capable LLM. It emphasizes using ConPort's `get_conport_schema` operation to dynamically discover the exact ConPort tool names and their parameters, guiding the LLM on _when_ and _why_ to perform conceptual interactions (like logging a decision or updating product context) rather than hardcoding specific tool invocation details.\n\n  \u003Cbr>\n\n**How to Use These Strategy Files:**\n\n1.  Identify the strategy file relevant to your LLM agent's environment.\n2.  Copy the **entire content** of that file.\n3.  Paste it into your LLM's custom instructions or system prompt area. 
The method varies by LLM platform (IDE extension settings, web UI, API configuration).\n\nThese instructions equip the LLM with the knowledge to:\n\n- Initialize and load context from ConPort.\n- Update ConPort with new information (decisions, progress, etc.).\n- Manage custom data and relationships.\n- Understand the importance of `workspace_id`.\n  **Important Tip for Starting Sessions:**\n  To ensure the LLM agent correctly initializes and loads context, especially in interfaces that might not always strictly adhere to custom instructions on the first message, it's a good practice to start your interaction with a clear directive like:\n  `Initialize according to custom instructions.`\n  This can help prompt the agent to perform its ConPort initialization sequence as defined in its strategy file.\n\n### New Strategy Set: mem4sprint (What’s new)\n\nThe repository includes a new strategy\u002Fdocumentation set focused on sprint planning and operational flows:\n\n- `conport-custom-instructions\u002Fmem4sprint.md` — concise guidance and patterns for using flat categories and valid FTS prefixes.\n- `conport-custom-instructions\u002Fmem4sprint.schema_and_templates.md` — meta schema, compact starters, FTS query rules, and minimal operational call recipes.\n\nKey highlights:\n- Flat category model (e.g., `artifacts`, `rfc_doc`, `retrospective`, `ProjectGlossary`, `critical_settings`).\n- Valid FTS5 prefixes only: `category:`, `key:`, `value_text:` for custom data; `summary:`, `rationale:`, `implementation_details:`, `tags:` for decisions.\n- Handler-layer query normalization; database layer remains unchanged.\n\nRelease note summary:\n- Added mem4sprint strategy\u002Fdocs with flattened categories and explicit FTS rules.\n- Simplified examples and included minimal operational call recipes.\n- Documentation clarifies IDE workspace path handling for MCP.\n\n## Initial ConPort Usage in a Workspace\n\nWhen you first start using ConPort in a new or existing project workspace, 
the ConPort database (`context_portal\u002Fcontext.db`) will be automatically created by the server if it doesn't exist. To help bootstrap the initial project context, especially the **Product Context**, consider the following:\n\n### Using a `projectBrief.md` File (Recommended)\n\n1.  **Create `projectBrief.md`:** In the root directory of your project workspace, create a file named `projectBrief.md`.\n2.  **Add Content:** Populate this file with a high-level overview of your project. This could include:\n    - The main goal or purpose of the project.\n    - Key features or components.\n    - Target audience or users.\n    - Overall architectural style or key technologies (if known).\n    - Any other foundational information that defines the project.\n3.  **Automatic Prompt for Import:** When an LLM agent using one of the provided ConPort custom instruction sets (e.g., `roo_code_conport_strategy`) initializes in the workspace, it is designed to:\n    - Check for the existence of `projectBrief.md`.\n    - If found, it will read the file and ask you if you'd like to import its content into the ConPort **Product Context**.\n    - If you agree, the content will be added to ConPort, providing an immediate baseline for the project's Product Context.\n\n### Manual Initialization\n\nIf `projectBrief.md` is not found, or if you choose not to import it:\n\n- The LLM agent (guided by its custom instructions) will typically inform you that the ConPort Product Context appears uninitialized.\n- It may offer to help you define the Product Context manually, potentially by listing other files in your workspace to gather relevant information.\n\nBy providing initial context, either through `projectBrief.md` or manual entry, you enable ConPort and the connected LLM agent to have a better foundational understanding of your project from the start.\n\n## Automatic Workspace Detection\n\nConPort can automatically determine the correct `workspace_id` so you do not need to hardcode an 
absolute path in your MCP client configuration. This is especially useful for IDEs that fail to expand `${workspaceFolder}` when launching MCP servers.\n\nDetection is enabled by default and can be controlled via CLI flags:\n\nFlags:\n- `--auto-detect-workspace` (default: enabled) Turns on automatic detection.\n- `--no-auto-detect` Disables detection (explicit `--workspace_id` or per-tool `workspace_id` must then be provided).\n- `--workspace-search-start \u003Cpath>` Optional starting directory for upward search (defaults to current working directory).\n\nHow it works (multi‑strategy):\n1. Strong Indicators (fast path): Looks for high-confidence project roots containing any of: `package.json`, `.git`, `pyproject.toml`, `Cargo.toml`, `go.mod`, `pom.xml`.\n2. Multiple General Indicators: If ≥2 general indicators (README, license, build files, etc.) exist in a directory, it is treated as a root.\n3. Existing ConPort Workspace: Presence of a `context_portal\u002F` directory indicates a valid workspace.\n4. MCP Environment Context: Honors environment variables like `VSCODE_WORKSPACE_FOLDER` or `CONPORT_WORKSPACE` when set and valid.\n5. 
Fallback: If no indicators are found, falls back to the starting directory and logs a warning.\n\nTooling:\n- `get_workspace_detection_info` (MCP tool) exposes a diagnostic dictionary showing:\n  - start_path\n  - detected_workspace\n  - detection_method (strong_indicators | multiple_indicators | existing_context_portal | fallback)\n  - indicators_found\n  - relevant environment variables\n\nBest Practices:\n- Keep detection enabled unless you operate in multi-root scenarios where explicit isolation per call is required.\n- If an IDE passes the literal string `${workspaceFolder}`, ConPort will ignore it and auto-detect safely (logged at WARNING).\n- For debugging ambiguous roots (e.g., nested repos), run the detection info tool to confirm which directory was selected.\n\nExample MCP launch (relying fully on auto-detect):\n```json\n{\n  \"mcpServers\": {\n    \"conport\": {\n      \"command\": \"uvx\",\n      \"args\": [\n        \"--from\", \"context-portal-mcp\",\n        \"conport-mcp\",\n        \"--mode\", \"stdio\",\n        \"--log-level\", \"INFO\"\n      ]\n    }\n  }\n}\n```\n\nTo disable detection explicitly (forcing provided IDs only):\n```json\n{\n  \"mcpServers\": {\n    \"conport\": {\n      \"command\": \"uvx\",\n      \"args\": [\n        \"--from\", \"context-portal-mcp\",\n        \"conport-mcp\",\n        \"--mode\", \"stdio\",\n        \"--no-auto-detect\",\n        \"--workspace_id\", \"\u002Fabsolute\u002Fpath\u002Fto\u002Fproject\"\n      ]\n    }\n  }\n}\n```\n\nIf you have a launcher that starts inside a deep subdirectory, provide a higher start path:\n```bash\nconport-mcp --mode stdio --workspace-search-start ..\u002F..\u002F\n```\n\nSee `UNIVERSAL_WORKSPACE_DETECTION.md` for full rationale, edge cases, and troubleshooting.\n\n## Available ConPort Tools\n\nThe ConPort server exposes the following tools via MCP, allowing interaction with the underlying **project knowledge graph**. 
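\n\nFor a concrete sense of the call shape, here is an illustrative `log_decision` invocation, shown as the tool name and arguments an MCP client would send (the path and text values are placeholders, not output from a real session):\n\n```json\n{\n  \"name\": \"log_decision\",\n  \"arguments\": {\n    \"workspace_id\": \"\u002Fabsolute\u002Fpath\u002Fto\u002Fproject\",\n    \"summary\": \"Use SQLite for per-workspace storage\",\n    \"rationale\": \"Zero-config, file-based, easy to query\",\n    \"tags\": [\"architecture\", \"storage\"]\n  }\n}\n```\n\n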
This includes tools for **semantic search** powered by **vector data storage**. These tools facilitate the **Retrieval** aspect crucial for **Augmented Generation (RAG)** by AI agents. All tools require a `workspace_id` argument (string, required) to specify the target project workspace.\n\nNote: For convenience, all integer-like parameters accept either numbers or digit-only strings (e.g., \"10\", \" 3\"). The server trims whitespace and coerces these to integers while preserving validation bounds (e.g., ge=1). Credit: @cipradu.\n\n- **Product Context Management:**\n  - `get_product_context`: Retrieves the overall project goals, features, and architecture.\n  - `update_product_context`: Updates the product context. Accepts full `content` (object) or `patch_content` (object) for partial updates (use `__DELETE__` as a value in patch to remove a key).\n- **Active Context Management:**\n  - `get_active_context`: Retrieves the current working focus, recent changes, and open issues.\n  - `update_active_context`: Updates the active context. 
Accepts full `content` (object) or `patch_content` (object) for partial updates (use `__DELETE__` as a value in patch to remove a key).\n- **Decision Logging:**\n  - `log_decision`: Logs an architectural or implementation decision.\n    - Args: `summary` (str, req), `rationale` (str, opt), `implementation_details` (str, opt), `tags` (list[str], opt).\n  - `get_decisions`: Retrieves logged decisions.\n    - Args: `limit` (int, opt), `tags_filter_include_all` (list[str], opt), `tags_filter_include_any` (list[str], opt).\n  - `search_decisions_fts`: Full-text search across decision fields (summary, rationale, details, tags).\n    - Args: `query_term` (str, req), `limit` (int, opt).\n  - `delete_decision_by_id`: Deletes a decision by its ID.\n    - Args: `decision_id` (int, req).\n- **Progress Tracking:**\n  - `log_progress`: Logs a progress entry or task status.\n    - Args: `status` (str, req), `description` (str, req), `parent_id` (int, opt), `linked_item_type` (str, opt), `linked_item_id` (str, opt).\n  - `get_progress`: Retrieves progress entries.\n    - Args: `status_filter` (str, opt), `parent_id_filter` (int, opt), `limit` (int, opt).\n  - `update_progress`: Updates an existing progress entry.\n    - Args: `progress_id` (int, req), `status` (str, opt), `description` (str, opt), `parent_id` (int, opt).\n  - `delete_progress_by_id`: Deletes a progress entry by its ID.\n    - Args: `progress_id` (int, req).\n- **System Pattern Management:**\n  - `log_system_pattern`: Logs or updates a system\u002Fcoding pattern.\n    - Args: `name` (str, req), `description` (str, opt), `tags` (list[str], opt).\n  - `get_system_patterns`: Retrieves system patterns.\n    - Args: `tags_filter_include_all` (list[str], opt), `tags_filter_include_any` (list[str], opt).\n  - `delete_system_pattern_by_id`: Deletes a system pattern by its ID.\n    - Args: `pattern_id` (int, req).\n- **Custom Data Management:**\n  - `log_custom_data`: Stores\u002Fupdates a custom key-value entry under a 
category. Value is JSON-serializable.\n    - Args: `category` (str, req), `key` (str, req), `value` (any, req).\n  - `get_custom_data`: Retrieves custom data.\n    - Args: `category` (str, opt), `key` (str, opt).\n  - `delete_custom_data`: Deletes a specific custom data entry.\n    - Args: `category` (str, req), `key` (str, req).\n  - `search_project_glossary_fts`: Full-text search within the 'ProjectGlossary' custom data category.\n    - Args: `query_term` (str, req), `limit` (int, opt).\n  - `search_custom_data_value_fts`: Full-text search across all custom data values, categories, and keys.\n    - Args: `query_term` (str, req), `category_filter` (str, opt), `limit` (int, opt).\n- **Context Linking:**\n  - `link_conport_items`: Creates a relationship link between two ConPort items, explicitly building out the **project knowledge graph**.\n    - Args: `source_item_type` (str, req), `source_item_id` (str, req), `target_item_type` (str, req), `target_item_id` (str, req), `relationship_type` (str, req), `description` (str, opt).\n  - `get_linked_items`: Retrieves items linked to a specific item.\n    - Args: `item_type` (str, req), `item_id` (str, req), `relationship_type_filter` (str, opt), `linked_item_type_filter` (str, opt), `limit` (int, opt).\n- **History & Meta Tools:**\n  - `get_item_history`: Retrieves version history for Product or Active Context.\n    - Args: `item_type` (\"product_context\" | \"active_context\", req), `version` (int, opt), `before_timestamp` (datetime, opt), `after_timestamp` (datetime, opt), `limit` (int, opt).\n  - `get_recent_activity_summary`: Provides a summary of recent ConPort activity.\n    - Args: `hours_ago` (int, opt), `since_timestamp` (datetime, opt), `limit_per_type` (int, opt, default: 5).\n  - `get_conport_schema`: Retrieves the schema of available ConPort tools and their arguments.\n- **Import\u002FExport:**\n  - `export_conport_to_markdown`: Exports ConPort data to markdown files.\n    - Args: `output_path` (str, opt, 
default: \".\u002Fconport_export\u002F\").\n  - `import_markdown_to_conport`: Imports data from markdown files into ConPort.\n    - Args: `input_path` (str, opt, default: \".\u002Fconport_export\u002F\").\n- **Batch Operations:**\n  - `batch_log_items`: Logs multiple items of the same type (e.g., decisions, progress entries) in a single call.\n    - Args: `item_type` (str, req - e.g., \"decision\", \"progress_entry\"), `items` (list[dict], req - list of Pydantic model dicts for the item type).\n\n## Further Reading\n\nFor a more in-depth understanding of ConPort's design, architecture, and advanced usage patterns, please refer to:\n\n- [`conport_mcp_deep_dive.md`](https:\u002F\u002Fgithub.com\u002FGreatScottyMac\u002Fcontext-portal\u002Fblob\u002Fmain\u002Fconport_mcp_deep_dive.md)\n\n## Contributing\n\nPlease see our [CONTRIBUTING.md](CONTRIBUTING.md) guide for details on how to contribute to the ConPort project.\n\n## License\n\nThis project is licensed under the [Apache-2.0 license](LICENSE).\n\n## Acknowledgments\n\n- Special thanks to **@cipradu** for the valuable suggestion to implement integer-string coercion for numeric arguments, which improves the user experience when interacting with the MCP server from various clients.\n\n## Database Migration & Update Guide\n\nFor detailed instructions on how to manage your `context.db` file, especially when updating ConPort across versions that include database schema changes, please refer to the dedicated [v0.2.4_UPDATE_GUIDE.md](v0.2.4_UPDATE_GUIDE.md). 
This guide provides steps for manual data migration (export\u002Fimport) if needed, and troubleshooting tips.\n","\u003Cdiv align=\"center\">\n\n\u003Cbr>\n\n# Context Portal MCP (ConPort)\n\n## (这是一个记忆库！)\n\n\u003Cbr>\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FGreatScottyMac_context-portal_readme_238a0831dbd5.png\" alt=\"Roo Code Logo\" height=\"40\"\u002F>&nbsp;&nbsp;&nbsp;\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FGreatScottyMac_context-portal_readme_c658c89fa8a2.png\" alt=\"CLine Logo\" height=\"40\"\u002F>&nbsp;&nbsp;&nbsp;\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FGreatScottyMac_context-portal_readme_c985f355e8b4.png\" alt=\"Windsurf Cascade Logo\" height=\"40\"\u002F>&nbsp;&nbsp;&nbsp;\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FGreatScottyMac_context-portal_readme_05034a08a002.png\" alt=\"Cursor IDE Logo\" height=\"40\"\u002F>\n\n\u003Cbr>\n\n一个基于数据库的 Model Context Protocol (MCP) 服务器，用于管理结构化的项目上下文，专为 IDE 和其他接口中的 AI 助手和开发者工具设计。\n\n\u003C\u002Fdiv>\n\n\u003Cbr>\n\n## 什么是 Context Portal MCP 服务器 (ConPort)？\n\nContext Portal (ConPort) 是你项目的**记忆库**。它是一个帮助 AI 助手更好地理解你的特定软件项目的工具，通过以结构化的方式存储重要信息，如决策、任务和架构模式。想象它就像在构建一个项目特定的知识库，AI 可以轻松访问并使用它来为你提供更准确和更有帮助的响应。\n\n**它的功能：**\n\n- 跟踪项目决策、进度和系统设计。\n- 存储自定义项目数据（如术语表或规格说明）。\n- 帮助 AI 快速找到相关的项目信息（如智能搜索）。\n- 使 AI 能够使用项目上下文来提供更好的响应（RAG）。\n- 与简单的基于文本文件的记忆库相比，在管理、搜索和更新上下文方面更加高效。\n\nConPort 为 AI 助手提供了存储、检索和管理各种类型项目上下文的强大而结构化的方式。它有效地构建了一个**项目特定的知识图谱**，捕获决策、进度和架构等实体及其关系。这个结构化的知识库，通过**向量嵌入（vector embeddings）**增强以支持语义搜索，然后作为**检索增强生成（Retrieval Augmented Generation，RAG）**的强大后端，使 AI 助手能够访问精确、最新的信息，从而提供更具上下文感知能力的准确响应。\n\n它用更可靠且可查询的数据库后端（每个工作区一个 SQLite 数据库）取代了旧的基于文件的上下文管理方式。ConPort 旨在成为一个通用的上下文后端，兼容支持 MCP 的各种 IDE 和客户端接口。\n\n主要功能包括：\n\n- 使用 SQLite 进行结构化上下文存储（每个工作区一个数据库，自动创建）。\n- 使用 Python\u002FFastAPI 构建的 MCP 服务器（`context_portal_mcp`）。\n- 完整的已定义 MCP 工具套件用于交互（参见下面的\"可用的 ConPort 工具\"）。\n- 通过 `workspace_id` 支持多工作区。\n- 
主要部署模式：STDIO，用于紧密的 IDE 集成。\n- 支持构建动态**项目知识图谱**，明确呈现上下文项目之间的关系。\n- 包含**向量数据存储**和**语义搜索**功能，为高级 RAG 提供支持。\n- 作为**检索增强生成（RAG）**的理想后端，为 AI 提供精确、可查询的项目记忆。\n- 提供结构化上下文，AI 助手可以利用它与兼容的 LLM 提供商进行**提示缓存（prompt caching）**。\n- 使用 **Alembic 迁移**管理数据库架构演进，确保无缝更新和数据完整性。\n\n## 前置要求\n\n在开始之前，请确保已安装以下软件：\n\n- **Python：** 建议使用 3.8 或更高版本。\n  - [下载 Python](https:\u002F\u002Fwww.python.org\u002Fdownloads\u002F)\n  - 确保在安装过程中将 Python 添加到系统的 PATH（特别是在 Windows 上）。\n- **uv：** （强烈推荐）一个快速的 Python 环境和包管理器。使用 `uv` 可以显著简化虚拟环境创建和依赖安装。\n  - [安装 uv](https:\u002F\u002Fgithub.com\u002Fastral-sh\u002Fuv#installation)\n\n## 安装和配置（推荐）\n\n推荐使用 `uvx` 直接从 PyPI 执行包来安装和运行 ConPort。这种方法无需手动创建和管理虚拟环境。\n\n### `uvx` 配置（推荐用于大多数 IDE）\n\n在你的 MCP 客户端设置（例如 `mcp_settings.json`）中，使用以下配置：\n\n```json\n{\n  \"mcpServers\": {\n    \"conport\": {\n      \"command\": \"uvx\",\n      \"args\": [\n        \"--from\",\n        \"context-portal-mcp\",\n        \"conport-mcp\",\n        \"--mode\",\n        \"stdio\",\n        \"--workspace_id\",\n        \"${workspaceFolder}\",\n        \"--log-file\",\n        \".\u002Flogs\u002Fconport.log\",\n        \"--log-level\",\n        \"INFO\"\n      ]\n    }\n  }\n}\n```\n\n- **`command`**：`uvx` 会为你处理环境。\n- **`args`**：包含运行 ConPort 服务器的参数。\n- `${workspaceFolder}`：此 IDE 变量用于自动提供当前项目工作区的绝对路径。\n- `--log-file`：可选：服务器日志写入的文件路径。如果未提供，日志将定向到 `stderr`（控制台）。有助于持久化日志和调试服务器行为。\n- `--log-level`：可选：设置服务器的最小日志级别。有效选项为 `DEBUG`、`INFO`、`WARNING`、`ERROR`、`CRITICAL`。默认为 `INFO`。在开发或故障排除期间设置为 `DEBUG` 以获取详细输出。\n\n> 重要提示：许多 IDE 在启动 MCP 服务器时不会展开 `${workspaceFolder}`。请使用以下安全选项之一：\n> 1) 为 `--workspace_id` 提供绝对路径。\n> 2) 启动时省略 `--workspace_id`，依赖每次调用时提供的 `workspace_id`（如果你的客户端在每次调用时都提供此参数，则推荐此方式）。\n\n替代配置（启动时不使用 `--workspace_id`）：\n\n```json\n{\n  \"mcpServers\": {\n    \"conport\": {\n      \"command\": \"uvx\",\n      \"args\": [\n        \"--from\",\n        \"context-portal-mcp\",\n        \"conport-mcp\",\n        \"--mode\",\n        \"stdio\",\n        \"--log-file\",\n        
\".\u002Flogs\u002Fconport.log\",\n        \"--log-level\",\n        \"INFO\"\n      ]\n    }\n  }\n}\n```\n\n如果你省略 `--workspace_id`，服务器将跳过预初始化，并在第一次工具调用时使用该调用中提供的 `workspace_id` 初始化数据库。\n\n\u003Cbr>\n\n## 开发者安装（从 Git 仓库）\n\n开发和测试 ConPort 最合适的方式是在 IDE 中将其作为 MCP 服务器运行，使用上述配置。这可以测试 STDIO 模式和真实客户端行为。\n\n如果需要针对本地检出的代码和虚拟环境运行，可以将 MCP 客户端配置为通过 `uv run` 和你的 `.venv\u002Fbin\u002Fpython` 启动开发服务器：\n\n```json\n{\n  \"mcpServers\": {\n    \"conport\": {\n      \"command\": \"uv\",\n      \"args\": [\n        \"run\",\n        \"--python\",\n        \".venv\u002Fbin\u002Fpython\",\n        \"--directory\",\n        \"\u003Cpath to context-portal repo> \",\n        \"conport-mcp\",\n        \"--mode\",\n        \"stdio\",\n        \"--log-file\",\n        \".\u002Flogs\u002Fconport-dev.log\",\n        \"--log-level\",\n        \"DEBUG\"\n      ],\n      \"disabled\": false\n    }\n  }\n}\n```\n\n注意：\n- 将 `--directory` 设置为你的仓库路径；这将使用你的本地检出和 venv 解释器。\n- 日志将写入 `.\u002Flogs\u002Fconport-dev.log`，并使用 `DEBUG` 详细级别。\n\n### 本地环境设置\n\n通过 Git 仓库进行开发或贡献的设置。\n\n1. **克隆仓库**\n   ```bash\n   git clone https:\u002F\u002Fgithub.com\u002FGreatScottyMac\u002Fcontext-portal.git\n   cd context-portal\n   ```\n\n2. **创建虚拟环境**\n   ```bash\n   uv venv\n   ```\n   使用 shell 标准激活方式激活它（例如在 macOS\u002FLinux 上使用 `source .venv\u002Fbin\u002Factivate`）。\n\n3. **安装依赖**\n   ```bash\n   uv pip install -r requirements.txt\n   ```\n\n4. **在 IDE 中运行（推荐）**\n   使用上面显示的 \"uvx Configuration\" 或开发 `uv run` 配置来配置 IDE 的 MCP 设置。这是对 ConPort 在 STDIO 模式下最具代表性的测试。\n\n5. 
**可选：CLI 帮助**\n   ```bash\n   uv run python src\u002Fcontext_portal_mcp\u002Fmain.py --help\n   ```\n\n注意：\n- 关于 `--workspace_id` 行为和 IDE 路径处理，请参阅上面 \"uvx Configuration\" 部分的指导。许多 IDE 不会展开 `${workspaceFolder}`。\n\n\u003Cbr>\n\n关于升级前的清理工作（包括清除 Python 字节码缓存），请参阅 [v0.2.4_UPDATE_GUIDE.md](v0.2.4_UPDATE_GUIDE.md#1-pre-upgrade-cleanup)。\n\n## 与 LLM 代理一起使用（自定义指令）\n\n通过向 LLM 提供特定的自定义指令或系统提示，可以显著增强 ConPort 与 LLM 代理配合使用的效果。本仓库包含针对不同环境定制的策略文件：\n\n- **对于 Roo Code：**\n\n  - [`roo_code_conport_strategy`](https:\u002F\u002Fgithub.com\u002FGreatScottyMac\u002Fcontext-portal\u002Fblob\u002Fmain\u002Fconport-custom-instructions\u002Froo_code_conport_strategy)：包含在 Roo Code VS Code 扩展中运行的 LLM 的详细指令，指导它们如何使用 ConPort 工具进行上下文管理。\n\n  \u003Cbr>\n\n- **对于 CLine：**\n\n  - [`cline_conport_strategy`](https:\u002F\u002Fgithub.com\u002FGreatScottyMac\u002Fcontext-portal\u002Fblob\u002Fmain\u002Fconport-custom-instructions\u002Fcline_conport_strategy)：包含在 Cline VS Code 扩展中运行的 LLM 的详细指令，指导它们如何使用 ConPort 工具进行上下文管理。\n\n  \u003Cbr>\n\n- **对于 Windsurf Cascade：**\n\n  - [`cascade_conport_strategy`](https:\u002F\u002Fgithub.com\u002FGreatScottyMac\u002Fcontext-portal\u002Fblob\u002Fmain\u002Fconport-custom-instructions\u002Fcascade_conport_strategy)：针对与 Windsurf Cascade 环境集成的 LLM 的特定指导。_重要提示_：在 Cascade 中启动会话时，需要明确告诉 LLM：\n\n  ```\n  Initialize according to custom instructions\n  ```\n\n- **对于通用\u002F平台无关的使用：**\n\n  - [`generic_conport_strategy`](https:\u002F\u002Fgithub.com\u002FGreatScottyMac\u002Fcontext-portal\u002Fblob\u002Fmain\u002Fconport-custom-instructions\u002Fgeneric_conport_strategy)：为任何支持 MCP 的 LLM 提供平台无关的指令集。它强调使用 ConPort 的 `get_conport_schema` 操作来动态发现确切的 ConPort 工具名称及其参数，指导 LLM 何时以及为何执行概念交互（如记录决策或更新产品上下文），而不是硬编码特定的工具调用细节。\n\n  \u003Cbr>\n\n**如何使用这些策略文件：**\n\n1. 识别与你的 LLM 代理环境相关的策略文件。\n2. 复制该文件的**全部内容**。\n3. 
将其粘贴到 LLM 的自定义指令或系统提示区域。方法因 LLM 平台而异（IDE 扩展设置、Web UI、API 配置）。\n\n这些指令使 LLM 能够：\n\n- 从 ConPort 初始化和加载上下文。\n- 使用新信息更新 ConPort（决策、进度等）。\n- 管理自定义数据和关系。\n- 理解 `workspace_id` 的重要性。\n  **启动会话的重要提示：**\n  为确保 LLM 代理正确初始化和加载上下文，尤其是在可能不会在第一条消息时严格遵守自定义指令的界面中，最好以明确的指令开始你的交互，例如：\n  `Initialize according to custom instructions.`\n  这可以帮助提示代理执行其策略文件中定义的 ConPort 初始化序列。\n\n### 新策略集：mem4sprint（新增内容）\n\n仓库包含了一个专注于冲刺规划和运营流程的新策略\u002F文档集：\n\n- `conport-custom-instructions\u002Fmem4sprint.md` — 关于使用扁平类别和有效 FTS 前缀的简洁指导和模式。\n- `conport-custom-instructions\u002Fmem4sprint.schema_and_templates.md` — 元模式、紧凑启动器、FTS 查询规则和最小化运营调用配方。\n\n主要亮点：\n- 扁平类别模型（例如 `artifacts`、`rfc_doc`、`retrospective`、`ProjectGlossary`、`critical_settings`）。\n- 仅支持有效的 FTS5 前缀：`category:`、`key:`、`value_text:` 用于自定义数据；`summary:`、`rationale:`、`implementation_details:`、`tags:` 用于决策。\n- 处理程序层查询规范化；数据库层保持不变。\n\n发布说明摘要：\n- 添加了 mem4sprint 策略\u002F文档，包含扁平类别和明确的 FTS 规则。\n- 简化了示例，并包含最小化运营调用配方。\n- 文档阐明了 IDE 工作区路径处理以支持 MCP。\n\n## 在工作区中首次使用 ConPort\n\n当你首次在新项目或现有项目工作区中使用 ConPort 时，ConPort 数据库（`context_portal\u002Fcontext.db`）将在不存在时由服务器自动创建。为了帮助引导初始项目上下文，特别是**产品上下文**，请考虑以下事项：\n\n### 使用 `projectBrief.md` 文件（推荐）\n\n1. **创建 `projectBrief.md`**：在项目工作区的根目录中，创建一个名为 `projectBrief.md` 的文件。\n2. **添加内容**：在此文件中添加项目的高级概述，包括：\n   - 项目的主要目标或目的\n   - 关键功能或组件\n   - 目标受众或用户\n   - 整体架构风格或关键技术（如果已知）\n   - 其他定义项目的基础信息\n3. 
**自动提示导入**：当使用提供的 ConPort 自定义指令集（如 `roo_code_conport_strategy`）的 LLM 代理在工作区中初始化时，它会：\n   - 检查 `projectBrief.md` 是否存在\n   - 如果找到，它会读取文件并询问你是否愿意将其内容导入 ConPort **产品上下文**（Product Context）\n   - 如果你同意，内容将被添加到 ConPort，为项目的产品上下文提供即时基线\n\n### 手动初始化\n\n如果未找到 `projectBrief.md`，或者你选择不导入它：\n\n- LLM 代理（根据其自定义指令引导）通常会通知你 ConPort 产品上下文似乎未初始化。\n- 它可能会主动帮助你手动定义产品上下文，可能通过列出工作区中的其他文件来收集相关信息。\n\n通过提供初始上下文，无论是通过 `projectBrief.md` 还是手动输入，你都能让 ConPort 和连接的 LLM 代理从一开始就更好地理解项目的基础。\n\n## 自动工作区检测\n\nConPort 可以自动确定正确的 `workspace_id`，因此你无需在 MCP 客户端配置中硬编码绝对路径。这对于无法在启动 MCP 服务器时展开 `${workspaceFolder}` 的 IDE 特别有用。\n\n检测功能默认启用，可通过 CLI 标志控制：\n\n标志：\n- `--auto-detect-workspace`（默认：启用）开启自动检测。\n- `--no-auto-detect` 禁用检测（此时必须提供显式的 `--workspace_id`，或在每次工具调用时提供 `workspace_id`）。\n- `--workspace-search-start \u003Cpath>` 可选的向上搜索起始目录（默认为当前工作目录）。\n\n工作原理（多策略）：\n1. **强指标**（快速路径）：查找包含以下任一文件的高置信度项目根目录：`package.json`、`.git`、`pyproject.toml`、`Cargo.toml`、`go.mod`、`pom.xml`。\n2. **多个通用指标**：如果目录中存在 ≥2 个通用指标（README、许可证、构建文件等），则被视为根目录。\n3. **现有 ConPort 工作区**：`context_portal\u002F` 目录的存在表示一个有效的工作区。\n4. **MCP 环境上下文**：尊重环境变量，如 `VSCODE_WORKSPACE_FOLDER` 或 `CONPORT_WORKSPACE`（如果已设置且有效）。\n5. 
**回退**：如果未找到任何指标，则使用起始目录（并发出警告）。\n\n工具：\n- `get_workspace_detection_info`（MCP 工具）暴露一个诊断字典，包含：\n  - start_path（起始路径）\n  - detected_workspace（检测到的工作区）\n  - detection_method（检测方法：strong_indicators | multiple_indicators | existing_context_portal | fallback）\n  - indicators_found（发现的指标）\n  - relevant environment variables（相关环境变量）\n\n最佳实践：\n- 保持检测启用，除非你在多根场景下需要每个调用进行显式隔离。\n- 如果 IDE 传递字面字符串 `${workspaceFolder}`，ConPort 将忽略它并安全地自动检测（记录为 WARNING 级别日志）。\n- 对于调试模糊的根目录（例如嵌套仓库），运行检测信息工具以确认选择了哪个目录。\n\n完全依赖自动检测的 MCP 启动示例：\n```json\n{\n  \"mcpServers\": {\n    \"conport\": {\n      \"command\": \"uvx\",\n      \"args\": [\n        \"--from\", \"context-portal-mcp\",\n        \"conport-mcp\",\n        \"--mode\", \"stdio\",\n        \"--log-level\", \"INFO\"\n      ]\n    }\n  }\n}\n```\n\n显式禁用检测（强制仅使用提供的 ID）：\n```json\n{\n  \"mcpServers\": {\n    \"conport\": {\n      \"command\": \"uvx\",\n      \"args\": [\n        \"--from\", \"context-portal-mcp\",\n        \"conport-mcp\",\n        \"--mode\", \"stdio\",\n        \"--no-auto-detect\",\n        \"--workspace_id\", \"\u002Fabsolute\u002Fpath\u002Fto\u002Fproject\"\n      ]\n    }\n  }\n}\n```\n\n如果你有一个在深层子目录内启动的启动器，请提供更高的起始路径：\n```bash\nconport-mcp --mode stdio --workspace-search-start ..\u002F..\u002F\n```\n\n完整的原理、边缘情况和故障排除请参阅 `UNIVERSAL_WORKSPACE_DETECTION.md`。\n\n## 可用的 ConPort 工具\n\nConPort 服务器通过 MCP 暴露以下工具，允许与底层**知识图谱**进行交互。这包括由**向量数据存储**支持的**语义搜索**功能。这些工具为 AI 代理的**检索增强生成（RAG）**提供了关键的检索能力。所有工具都需要 `workspace_id` 参数（字符串，必填）来指定目标项目工作区。\n\n注意：为方便起见，所有类似整数的参数都接受数字或仅包含数字的字符串（例如 \"10\"、\"3\"）。服务器会去除空格并将其转换为整数，同时保留验证边界（例如 ge=1）。贡献者：@cipradu。\n\n- **产品上下文管理：**\n  - `get_product_context`：获取项目的整体目标、功能和架构。\n  - `update_product_context`：更新产品上下文。接受完整的 `content`（对象）或 `patch_content`（对象）进行部分更新（在补丁中使用 `__DELETE__` 作为值可删除键）。\n- **活动上下文管理：**\n  - `get_active_context`：获取当前工作重点、近期更改和待处理问题。\n  - `update_active_context`：更新活动上下文。接受完整的 `content`（对象）或 `patch_content`（对象）进行部分更新（在补丁中使用 `__DELETE__` 作为值可删除键）。\n- **决策记录：**\n  - 
`log_decision`：记录架构或实现决策。\n    - 参数：`summary`（字符串，必填）、`rationale`（字符串，可选）、`implementation_details`（字符串，可选）、`tags`（列表[字符串]，可选）。\n  - `get_decisions`：获取已记录的决策。\n    - 参数：`limit`（整数，可选）、`tags_filter_include_all`（列表[字符串]，可选）、`tags_filter_include_any`（列表[字符串]，可选）。\n  - `search_decisions_fts`：在决策字段（摘要、理由、详情、标签）中进行全文搜索。\n    - 参数：`query_term`（字符串，必填）、`limit`（整数，可选）。\n  - `delete_decision_by_id`：根据 ID 删除决策。\n    - 参数：`decision_id`（整数，必填）。\n- **进度跟踪：**\n  - `log_progress`：记录进度条目或任务状态。\n    - 参数：`status`（字符串，必填）、`description`（字符串，必填）、`parent_id`（整数，可选）、`linked_item_type`（字符串，可选）、`linked_item_id`（字符串，可选）。\n  - `get_progress`：获取进度条目。\n    - 参数：`status_filter`（字符串，可选）、`parent_id_filter`（整数，可选）、`limit`（整数，可选）。\n  - `update_progress`：更新现有进度条目。\n    - 参数：`progress_id`（整数，必填）、`status`（字符串，可选）、`description`（字符串，可选）、`parent_id`（整数，可选）。\n  - `delete_progress_by_id`：根据 ID 删除进度条目。\n    - 参数：`progress_id`（整数，必填）。\n- **系统模式管理：**\n  - `log_system_pattern`：记录或更新系统\u002F编码模式。\n    - 参数：`name`（字符串，必填）、`description`（字符串，可选）、`tags`（列表[字符串]，可选）。\n  - `get_system_patterns`：获取系统模式。\n    - 参数：`tags_filter_include_all`（列表[字符串]，可选）、`tags_filter_include_any`（列表[字符串]，可选）。\n  - `delete_system_pattern_by_id`：根据 ID 删除系统模式。\n    - 参数：`pattern_id`（整数，必填）。\n- **自定义数据管理：**\n  - `log_custom_data`：在某个分类下存储\u002F更新自定义键值条目。值应为 JSON 可序列化对象。\n    - 参数：`category`（字符串，必填）、`key`（字符串，必填）、`value`（任意类型，必填）。\n  - `get_custom_data`：获取自定义数据。\n    - 参数：`category`（字符串，可选）、`key`（字符串，可选）。\n  - `delete_custom_data`：删除特定的自定义数据条目。\n    - 参数：`category`（字符串，必填）、`key`（字符串，必填）。\n  - `search_project_glossary_fts`：在 'ProjectGlossary' 自定义数据分类中进行全文搜索。\n    - 参数：`query_term`（字符串，必填）、`limit`（整数，可选）。\n  - `search_custom_data_value_fts`：在所有自定义数据值、分类和键中进行全文搜索。\n    - 参数：`query_term`（字符串，必填）、`category_filter`（字符串，可选）、`limit`（整数，可选）。\n- **上下文链接：**\n  - `link_conport_items`：在两个 ConPort 条目之间创建关系链接，明确构建**项目知识图谱**。\n    - 
参数：`source_item_type`（字符串，必填）、`source_item_id`（字符串，必填）、`target_item_type`（字符串，必填）、`target_item_id`（字符串，必填）、`relationship_type`（字符串，必填）、`description`（字符串，可选）。\n  - `get_linked_items`：获取链接到特定条目的项目。\n    - 参数：`item_type`（字符串，必填）、`item_id`（字符串，必填）、`relationship_type_filter`（字符串，可选）、`linked_item_type_filter`（字符串，可选）、`limit`（整数，可选）。\n- **历史和元工具：**\n  - `get_item_history`：获取产品或活动上下文的版本历史。\n    - 参数：`item_type`（\"product_context\" | \"active_context\"，必填）、`version`（整数，可选）、`before_timestamp`（日期时间，可选）、`after_timestamp`（日期时间，可选）、`limit`（整数，可选）。\n  - `get_recent_activity_summary`：提供近期 ConPort 活动的摘要。\n    - 参数：`hours_ago`（整数，可选）、`since_timestamp`（日期时间，可选）、`limit_per_type`（整数，可选，默认值：5）。\n  - `get_conport_schema`：获取可用 ConPort 工具及其参数的架构定义。\n- **导入\u002F导出：**\n  - `export_conport_to_markdown`：将 ConPort 数据导出为 markdown 文件。\n    - 参数：`output_path`（字符串，可选，默认值：\".\u002Fconport_export\u002F\"）。\n  - `import_markdown_to_conport`：从 markdown 文件导入数据到 ConPort。\n    - 参数：`input_path`（字符串，可选，默认值：\".\u002Fconport_export\u002F\"）。\n- **批量操作：**\n  - `batch_log_items`：在单次调用中记录多个相同类型的条目（例如决策、进度条目）。\n    - 参数：`item_type`（字符串，必填 - 例如 \"decision\"、\"progress_entry\"）、`items`（列表[字典]，必填 - 该类型条目的 Pydantic 模型字典列表）。\n\n## 深入阅读\n\n要更深入地了解 ConPort 的设计、架构和高级使用模式，请参阅：\n\n- [`conport_mcp_deep_dive.md`](https:\u002F\u002Fgithub.com\u002FGreatScottyMac\u002Fcontext-portal\u002Fblob\u002Fmain\u002Fconport_mcp_deep_dive.md)\n\n## 贡献指南\n\n请参阅我们的 [CONTRIBUTING.md](CONTRIBUTING.md) 指南，了解如何为 ConPort 项目做出贡献。\n\n## 许可证\n\n本项目根据 [Apache-2.0 许可证](LICENSE) 获得许可。\n\n## 致谢\n\n- 特别感谢 **@cipradu** 提出的宝贵建议，实现了数字参数的整数字符串强制转换，这改善了从各种客户端与 MCP 服务器交互时的用户体验。\n\n## 数据库迁移和更新指南\n\n有关如何管理 `context.db` 文件的详细说明，特别是在更新包含数据库架构更改的 ConPort 版本时，请参阅专门的 [v0.2.4_UPDATE_GUIDE.md](v0.2.4_UPDATE_GUIDE.md)。该指南提供了手动数据迁移（导出\u002F导入）的步骤（如有需要）以及故障排除提示。","# Context Portal MCP (ConPort) 快速上手指南\n\n## 环境准备\n\n### 系统要求\n\n- **Python**：3.8 或更高版本\n  - [下载 Python](https:\u002F\u002Fwww.python.org\u002Fdownloads\u002F)\n  - 安装时请勾选「将 Python 添加到系统 PATH」\n\n- 
**uv**（推荐）：快速的 Python 环境管理工具\n  - 安装命令：\n    ```bash\n    # macOS \u002F Linux\n    curl -LsSf https:\u002F\u002Fastral.sh\u002Fuv\u002Finstall.sh | sh\n    \n    # Windows (PowerShell)\n    irm https:\u002F\u002Fastral.sh\u002Fuv\u002Finstall.ps1 | iex\n    ```\n\n> 国内用户可使用镜像加速安装：\n> ```bash\n> # 使用清华镜像\n> pip install uv -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n> ```\n\n## 安装步骤\n\n### 方式一：uvx 方式（推荐）\n\n这是最简便的安装方式，无需手动创建虚拟环境。在你的 MCP 客户端配置文件中添加以下内容：\n\n```json\n{\n  \"mcpServers\": {\n    \"conport\": {\n      \"command\": \"uvx\",\n      \"args\": [\n        \"--from\",\n        \"context-portal-mcp\",\n        \"conport-mcp\",\n        \"--mode\",\n        \"stdio\",\n        \"--workspace_id\",\n        \"你的项目绝对路径\",\n        \"--log-file\",\n        \".\u002Flogs\u002Fconport.log\",\n        \"--log-level\",\n        \"INFO\"\n      ]\n    }\n  }\n}\n```\n\n> **注意**：许多 IDE 不支持 `${workspaceFolder}` 变量展开，建议直接填写项目绝对路径，或省略 `--workspace_id` 参数（首次调用时再传入）。\n\n### 方式二：开发者本地开发模式\n\n如果你想从 Git 仓库运行：\n\n```bash\n# 1. 克隆仓库\ngit clone https:\u002F\u002Fgithub.com\u002FGreatScottyMac\u002Fcontext-portal.git\ncd context-portal\n\n# 2. 创建虚拟环境\nuv venv\nsource .venv\u002Fbin\u002Factivate  # Windows: .venv\\Scripts\\activate\n\n# 3. 安装依赖\nuv pip install -r requirements.txt\n\n# 4. 配置 MCP 客户端\n```\n\nMCP 配置示例：\n\n```json\n{\n  \"mcpServers\": {\n    \"conport\": {\n      \"command\": \"uv\",\n      \"args\": [\n        \"run\",\n        \"--python\",\n        \".venv\u002Fbin\u002Fpython\",\n        \"--directory\",\n        \"\u002Fpath\u002Fto\u002Fcontext-portal\",\n        \"conport-mcp\",\n        \"--mode\",\n        \"stdio\",\n        \"--log-file\",\n        \".\u002Flogs\u002Fconport-dev.log\",\n        \"--log-level\",\n        \"DEBUG\"\n      ]\n    }\n  }\n}\n```\n\n## 基本使用\n\n### 1. 
配置 LLM 自定义指令\n\n根据你使用的 IDE，选择对应的策略文件，将内容复制到 LLM 的自定义指令中：\n\n| IDE | 策略文件 |\n|-----|---------|\n| Roo Code | `roo_code_conport_strategy` |\n| CLine | `cline_conport_strategy` |\n| Windsurf Cascade | `cascade_conport_strategy` |\n| 通用 | `generic_conport_strategy` |\n\n策略文件地址：https:\u002F\u002Fgithub.com\u002FGreatScottyMac\u002Fcontext-portal\u002Ftree\u002Fmain\u002Fconport-custom-instructions\n\n### 2. 启动会话\n\n在开始新对话时，建议发送以下指令让 LLM 初始化 ConPort：\n\n```\nInitialize according to custom instructions.\n```\n\n### 3. 核心功能\n\nConPort 会自动管理以下内容：\n\n- **项目决策记录**：存储架构设计、技术选型等重要决策\n- **任务进度追踪**：记录待办事项和完成状态\n- **自定义数据**：支持存储项目术语表、规范文档等\n- **语义搜索**：通过向量嵌入实现智能检索\n\n数据库文件（`context.db`，SQLite 格式）会自动在 `${workspace_id}\u002Fcontext_portal\u002F` 目录下创建。","一名后端开发工程师正在维护一个微服务架构的电商系统，项目涉及订单、库存、支付、用户等多个领域模块，已迭代开发近一年。他日常使用 AI 编码助手（如 Roo Code）辅助开发，几乎每天都会与 AI 协作处理需求。\n\n### 没有 context-portal 时\n\n- 每次让 AI 修改代码时，都要先花时间解释项目的架构模式、技术选型理由和模块依赖关系，沟通成本很高\n- AI 不了解之前做过的技术决策和原因，例如为什么选择分库分表、为什么某个接口要加分布式锁，容易提出与现有设计冲突的方案\n- 项目中有特定的业务术语和领域模型（如\"预售单\"\"SKU 组合\"\"履约时效\"），AI 经常理解错误，导致生成的代码逻辑不对\n- 之前解决过的线上问题和踩过的坑无法被复用，AI 可能重复给出类似的错误方案\n- 当需要查找历史需求文档或技术方案时，只能用关键词搜索，经常找不到语义相关但表述不同的内容\n\n### 使用 context-portal 后\n\n- 项目架构、技术选型、领域模型等核心知识被结构化存储在知识图谱中，AI 可以随时查询，一句话就能获取完整的项目背景\n- 之前的技术决策和讨论被记录为上下文条目，AI 清楚\"为什么这样做\"，给出的方案与现有设计保持一致\n- 项目特定术语和业务规则被明确定义为知识条目，AI 对业务概念的理解更准确，生成的代码逻辑符合业务预期\n- 之前的线上问题处理经验被沉淀下来，AI 可以参考类似问题的解决方案，避免重复踩坑\n- 向量嵌入和语义搜索让相关内容被智能关联，即使表述不同也能找到真正相关的信息\n\ncontext-portal 让 AI 助手真正\"记住\"整个项目的上下文，开发者在每次对话中无需重复解释项目背景，AI 就能给出更准确、更贴合项目实际的建议。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FGreatScottyMac_context-portal_87009600.png","GreatScottyMac",null,"https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FGreatScottyMac_0560f2c3.png","https:\u002F\u002Fgithub.com\u002FGreatScottyMac",[19,23],{"name":20,"color":21,"percentage":22},"Python","#3572A5",99.5,{"name":24,"color":25,"percentage":26},"Dockerfile","#384d54",0.5,762,83,"2026-04-04T06:58:31","Apache-2.0",2,"Linux, 
Windows","不需要","未说明",{"notes":36,"python":37,"dependencies":38},"这是一个基于 Python\u002FFastAPI 的 MCP 服务器，不需要 GPU。使用 SQLite 数据库（每个工作区一个数据库文件，自动创建）。推荐使用 uv 作为包管理器进行环境管理。服务器主要通过 STDIO 模式与 IDE 集成运行。数据库迁移使用 Alembic 管理。","3.8+",[39,40],"fastapi","alembic",[42,43,44],"开发框架","数据工具","Agent",3,"ready","2026-03-27T02:49:30.150509","2026-04-06T08:52:27.103561",[50,55,60,65,70,74],{"id":51,"question_zh":52,"answer_zh":53,"source_url":54},721,"运行 ConPort MCP 时出现 'No script_location key found in configuration' 错误如何解决？","这是因为 Alembic 迁移脚本目录未找到导致的。错误信息显示 'Alembic scripts directory not found'。解决方案：1) 确保项目目录下存在 alembic 文件夹和迁移脚本；2) 检查配置文件是否正确设置了 script_location；3) 如果问题仍然存在，尝试清除 Python 缓存（删除所有 __pycache__ 目录和 .pyc 文件），这在某些情况下可以解决导入问题。","https:\u002F\u002Fgithub.com\u002FGreatScottyMac\u002Fcontext-portal\u002Fissues\u002F21",{"id":56,"question_zh":57,"answer_zh":58,"source_url":59},722,"在 Cline 中配置 ConPort MCP 时出现只读文件系统错误如何解决？","错误信息为 'Failed to set up file logging to ${workspaceFolder}\u002Fcontext_portal\u002Flogs\u002Fconport.log: [Errno 30] Read-only file system'。解决方案：将 workspace_id 参数设置为字符串 '.\u002F' 而不是使用 ${workspaceFolder} 变量。具体配置如下：\"args\": [\"--from\", \"git+https:\u002F\u002Fgithub.com\u002FGreatScottyMac\u002Fcontext-portal.git\", \"conport-mcp\", \"--mode\", \"stdio\", \"--workspace_id\", \".\u002F\", \"--log-file\", \".\u002Flogs\u002Fconport.log\", \"--log-level\", \"INFO\"]。","https:\u002F\u002Fgithub.com\u002FGreatScottyMac\u002Fcontext-portal\u002Fissues\u002F38",{"id":61,"question_zh":62,"answer_zh":63,"source_url":64},723,"Claude、Cursor 或 Cline 发送 limit 和 hours_ago 参数为字符串而非整数导致错误如何处理？","AI 工具（如 Claude、Cursor、Cline）总是将这些参数作为字符串发送。解决方案是在工具内部添加验证逻辑：检查参数是否为数字字符串，如果是则转换为整数。建议在接收参数后进行类型检查和转换，确保参数类型正确后再进行处理。","https:\u002F\u002Fgithub.com\u002FGreatScottyMac\u002Fcontext-portal\u002Fissues\u002F65",{"id":66,"question_zh":67,"answer_zh":68,"source_url":69},724,"使用 log_custom_data 时出现 'CustomData' object has no attribute 'timestamp' 错误如何解决？","这是 CustomData 模型缺少 timestamp 字段的 
Bug。根据维护者的反馈，此问题已在后续版本中修复。建议更新到最新版本的 context-portal-mcp（0.1.8 之后的版本）。","https:\u002F\u002Fgithub.com\u002FGreatScottyMac\u002Fcontext-portal\u002Fissues\u002F20",{"id":71,"question_zh":72,"answer_zh":73,"source_url":54},725,"在 stdio 模式下运行 ConPort MCP 出现 'ensure_alembic_files_exist is not defined' 错误如何解决？","这是因为 ensure_alembic_files_exist 函数在主 try 块中导入，但在 fallback 分支（except ImportError）中未导入。当使用相对路径导入包时（如从 IDE 运行时），会触发 fallback 分支，导致该函数未定义。解决方案是在 except ImportError 块中添加该函数的导入语句。",{"id":75,"question_zh":76,"answer_zh":77,"source_url":59},726,"ConPort MCP 在多仓库同时打开时工作不正常如何解决？","当同时打开两个或多个仓库时，可能会出现配置冲突问题。建议将 workspace_id 设置为字符串 '.\u002F' 而不是使用 ${workspaceFolder} 变量，这样可以避免路径解析问题。",[79,84],{"id":80,"version":81,"summary_zh":82,"released_at":83},109979,"v0.3.13","# Context Portal MCP Release Notes\r\n\r\n\u003Cbr>\r\n\r\n## v0.3.13 (2025-12-31)\r\n\r\n### Features\r\n- **Tool Annotations:** Added MCP tool annotations (`readOnlyHint`, `destructiveHint`, `title`) to all tools to help LLMs understand tool behavior and improve safety. (Credit: @triepod-ai)\r\n\r\n### Installation\r\n```bash\r\nuvx --from git+https:\u002F\u002Fgithub.com\u002FGreatScottyMac\u002Fcontext-portal.git@v0.3.13 conport-mcp --mode stdio\r\n```\r\n\r\nOr via pip:\r\n```bash\r\npip install context-portal-mcp\r\n```\r\n\r\n\u003Cbr>\r\n\r\n## v0.3.12 (2025-12-19)\r\n\r\n### Security\r\n- **Dependabot Alert #14:** Mitigated a TOCTOU race condition in `filelock` that can enable symlink attacks during lock file creation (CVE-2024-53981) by enforcing `filelock>=3.16.2`.\r\n\r\n### Installation\r\n```bash\r\nuvx --from git+https:\u002F\u002Fgithub.com\u002FGreatScottyMac\u002Fcontext-portal.git@v0.3.12 conport-mcp --mode stdio\r\n```\r\n\r\nOr via pip:\r\n```bash\r\npip install context-portal-mcp\r\n```\r\n\r\n\u003Cbr>\r\n\r\n## v0.3.11 (2025-12-06)\r\n\r\n### Security\r\n- **Dependabot Alert #11:** Resolved a DNS rebinding vulnerability in the `mcp` Python SDK by forcing an update to `mcp>=1.23.0`. 
This was achieved by adding a `[tool.uv]` override in `pyproject.toml` to bypass the restrictive dependency in `fastmcp` 2.13.3.\r\n\r\n### Maintenance\r\n- **Dependency Updates:** Updated all project dependencies to their latest compatible versions using `uv lock --upgrade`.\r\n    - `anyio` -> `4.12.0`\r\n    - `attrs` -> `25.4.0`\r\n    - `bcrypt` -> `5.0.0`\r\n    - `cachetools` -> `6.2.2`\r\n    - `certifi` -> `2025.11.12`\r\n    - `cryptography` -> `46.0.3`\r\n    - `fsspec` -> `2025.12.0`\r\n    - `google-auth` -> `2.43.0`\r\n    - `grpcio` -> `1.76.0`\r\n    - `huggingface-hub` -> `0.36.0`\r\n    - `numpy` -> `2.3.5`\r\n    - `onnxruntime` -> `1.23.2`\r\n    - `opentelemetry-*` -> `1.39.0`\r\n    - `pillow` -> `12.0.0`\r\n    - `protobuf` -> `6.33.2`\r\n    - `pydantic-settings` -> `2.12.0`\r\n    - `pytest` -> `9.0.1`\r\n    - `sentence-transformers` -> `5.1.2`\r\n    - `sqlalchemy` -> `2.0.44`\r\n    - `torch` -> `2.9.1`\r\n    - `transformers` -> `4.57.3`\r\n    - `typer` -> `0.20.0`\r\n\r\n### Installation\r\n```bash\r\nuvx --from git+https:\u002F\u002Fgithub.com\u002FGreatScottyMac\u002Fcontext-portal.git@v0.3.11 conport-mcp --mode stdio\r\n```\r\n\r\nOr via pip:\r\n```bash\r\npip install context-portal-mcp\r\n```\r\n\r\n\u003Cbr>\r\n\r\n## v0.3.10 (2025-12-06)\r\n\r\n### Maintenance\r\n- **Codebase Refactoring:** Major cleanup of `main.py` to resolve over 200 linting issues, improving code quality and maintainability.\r\n- **Version Synchronization:** Synchronized version numbers between `pyproject.toml` and `main.py`.\r\n\r\n### Installation\r\n```bash\r\nuvx --from git+https:\u002F\u002Fgithub.com\u002FGreatScottyMac\u002Fcontext-portal.git@v0.3.10 conport-mcp --mode stdio\r\n```\r\n\r\nOr via pip:\r\n```bash\r\npip install context-portal-mcp\r\n```\r\n\r\n\u003Cbr>\r\n\r\n## v0.3.9 (2025-12-06)\r\n\r\n### Security & Maintenance\r\n- **Dependency Updates:** Updated all core dependencies to their latest stable versions (as of Dec 2025) to address 
potential security vulnerabilities and ensure compatibility.\r\n    - `fastapi` -> `0.120.0`\r\n    - `uvicorn` -> `0.38.0`\r\n    - `pydantic` -> `2.12.5`\r\n    - `fastmcp` -> `2.13.3`\r\n    - `sentence-transformers` -> `3.3.1`\r\n    - `chromadb` -> `1.3.5`\r\n    - `alembic` -> `1.17.2`\r\n    - `urllib3` -> `2.6.0`\r\n    - `httpx` -> `0.28.1`\r\n    - `starlette` -> `0.50.0`\r\n\r\n### Installation\r\n```bash\r\nuvx --from git+https:\u002F\u002Fgithub.com\u002FGreatScottyMac\u002Fcontext-portal.git@v0.3.9 conport-mcp --mode stdio\r\n```\r\n\r\nOr via pip:\r\n```bash\r\npip install context-portal-mcp\r\n```\r\n\r\n\u003Cbr>\r\n\r\n## v0.3.8 (2025-10-30)\r\n\r\n### Dependency Management\r\n- **Updated Dependencies:** Resolved dependency conflicts by updating package version constraints\r\n- **FastMCP Security:** Ensured compatibility with secure FastMCP versions (>=2.13.0)\r\n- **HTTPX Compatibility:** Updated to require httpx>=0.28.1 to match FastMCP requirements\r\n\r\n### Installation\r\n```bash\r\nuvx --from git+https:\u002F\u002Fgithub.com\u002FGreatScottyMac\u002Fcontext-portal.git@v0.3.8 conport-mcp --mode stdio\r\n```\r\n\r\n## v0.3.7 (2025-10-30)\r\n\r\n### Critical Fix\r\n- **Resolved FastAPI\u002FStarlette Dependency Conflict:** Fixed a dependency conflict that was preventing uvx installation. 
The issue occurred because `fastapi==0.116.2` required `starlette\u003C0.49.0`, while we needed `starlette>=0.49.1` for the CVE-2025-62727 security fix.\r\n\r\n### Changes\r\n- **Updated FastAPI:** Upgraded from `0.116.2` to `>=0.119.1`, which natively supports Starlette 0.49.1+\r\n- **Removed Explicit Starlette Dependency:** No longer needed as FastAPI 0.119.1+ automatically includes the secure version of Starlette\r\n- **Maintained Security Posture:** The update preserves all security fixes including CVE-2025-62727 (Starlette), CVE-2025-50181, and CVE-2025-50182 (urllib3)\r\n\r\n### Installation\r\n```bash\r\nuvx --from git+https:\u002F\u002Fgithub.com\u002FGreatScottyMac\u002Fcontext-portal.git conport-mcp --mode stdio\r\n```\r\n\r\nOr via pip:\r\n```bash\r\npip install context-portal-mcp\r\n```\r\n\r\n\u003Cbr>\r\n\r\n## v0.3.6 (2025-10-28)\r\n\r\n### Security\r\n- Updated **starlette** to `>=0.49.1` to remediate CVE-2025-62727 (High sever","2025-10-29T00:43:41",{"id":85,"version":86,"summary_zh":87,"released_at":88},109980,"v0.3.5","# Context Portal MCP Release Notes\r\n\r\n\u003Cbr>\r\n\r\n## v0.3.5 (2025-10-22)\r\n\r\n### Security\r\n- Bumped Authlib to `~=1.6.5` to remediate CVE-2025-61920 (High) and GHSA-g7f3-828f-7h7m (Moderate).\r\n- Regenerated `uv.lock` to pin `authlib==1.6.5` and align with current dependencies.\r\n- Verified via full test run: 15 passed, 0 failed.\r\n\r\n### Packaging\r\n- Updated project version to `0.3.5` in `pyproject.toml`.\r\n- Ensured `authlib` is declared in `pyproject.toml` dependencies to keep locks and installs consistent across environments.\r\n\r\n## v0.3.4 (2025-09-18)\r\n\r\n### Critical Bug Fix\r\n- **String-to-Integer Coercion:** Fixed a validation timing issue where field-level `ge`\u002F`le` constraints in FastMCP tool definitions were preventing string-to-integer coercion from working properly. 
String parameters like `\"5\"` for `limit` were being rejected before the `IntCoercionMixin` could convert them to integers. The fix removes field-level constraints from 13 affected tools and replaces them with `@model_validator(mode='after')` methods in Pydantic models, ensuring coercion happens before validation.\r\n\r\n### Technical Details\r\n### Security\r\n- Dependency hardening: Pin Authlib to `~=1.6.5` to address CVE-2025-61920 (High) and GHSA-g7f3-828f-7h7m (Moderate). Regenerated `uv.lock` to ensure 1.6.5 is locked. No runtime regressions observed (15\u002F15 tests passing).\r\n- **Affected Tools:** `get_decisions`, `get_progress`, `get_system_patterns`, `get_custom_data`, `search_decisions_fts`, `search_custom_data_value_fts`, `search_project_glossary_fts`, `get_recent_activity_summary`, `semantic_search_conport`, `get_item_history`, `batch_log_items`, `delete_decision_by_id`, `delete_system_pattern_by_id`\r\n- **Root Cause:** FastMCP field-level `ge=1` and `le=25` constraints were applied before Pydantic model validation, preventing the custom `IntCoercionMixin` from converting string inputs to integers\r\n- **Solution:** Moved all integer validation logic to `@model_validator(mode='after')` methods that run after field coercion\r\n\r\n\u003Cbr>\r\n\r\n## v0.3.3 (2025-09-18)\r\n\r\n### Fixes & Improvements\r\n- **Pydantic Validation Fix:** Corrected an issue where Pydantic's `ge=1` constraint was being applied before the `IntCoercionMixin` could convert string-based integers, causing validation errors. The fix ensures that string-to-integer coercion happens before validation, allowing for more flexible input.\r\n- Timezone-aware datetimes: replaced naive UTC usage with aware UTC across models and DB code, and registered SQLite adapters\u002Fconverters for reliable UTC round-tripping. 
Files: [src\u002Fcontext_portal_mcp\u002Fdb\u002Fmodels.py](src\u002Fcontext_portal_mcp\u002Fdb\u002Fmodels.py), [src\u002Fcontext_portal_mcp\u002Fdb\u002Fdatabase.py](src\u002Fcontext_portal_mcp\u002Fdb\u002Fdatabase.py).\r\n- Integer-like string inputs: added lenient parsing that coerces digit-only strings to integers before validation in relevant argument models. Credit: @cipradu.\r\n- Dependency security: addressed Starlette advisory GHSA-2c2j-9gv5-cj73 by upgrading FastAPI to 0.116.2 and constraining Starlette to >=0.47.2,\u003C0.49.0; verified via pip-audit: \"No known vulnerabilities found.\"\r\n\r\n\u003Cbr>\r\n\r\n## v0.3.0 (2025-09-08)\r\n\r\n### Features\r\n* **Universal Workspace Auto-Detection:** Integrated multi-strategy workspace discovery (strong indicators, multiple indicators, existing `context_portal\u002F`, environment variables, fallback). Eliminates need to hardcode `--workspace_id` in most MCP client configs. Includes new CLI flags: `--auto-detect-workspace` (default enabled), `--no-auto-detect`, and `--workspace-search-start \u003Cpath>`.\r\n* **Diagnostic Tool:** Added `get_workspace_detection_info` MCP tool to expose detection details for debugging ambiguous setups.\r\n* **Graceful `${workspaceFolder}` Handling:** If an IDE passes the literal `${workspaceFolder}`, the server now warns and safely auto-detects instead of initializing incorrectly.\r\n* **Documentation:** Added `UNIVERSAL_WORKSPACE_DETECTION.md` plus README section “Automatic Workspace Detection” with usage guidance and examples.\r\n\r\n### Notes\r\nThis release supersedes the stalled external contribution (original PR #60). Attribution preserved in commit metadata. Users are encouraged to remove hardcoded absolute paths where safe.\r\n\r\n### Upgrade Guidance\r\nNo migration steps required. Existing workflows with explicit `--workspace_id` continue to function. 
To leverage auto-detection, you may remove the flag (or allow per-call workspace_id injection).\r\n\r\n\u003Cbr>\r\n\r\n## v0.2.23 (2025-08-30)\r\n\r\n### Features\r\n* **Compact ConPort Strategy for Windsurf:** Added a compact ConPort memory strategy file under 12k characters for Windsurf IDE compatibility, preserving core functionality while reducing size. (Credit: @kundeng, [PR #55](https:\u002F\u002Fgithub.com\u002FGreatScottyMac\u002Fcontext-portal\u002Fpull\u002F55))\r\n* **Mem4Sprint Strategy and FTS5 Updates:** Introduced mem4sprint strategy with flat categories, FTS5-safe examples, handler-only query normalization, and updated README for better IDE configuration. (Credit: @kundeng, [PR #56](https:\u002F\u002Fgithub.com\u002FG","2025-05-12T01:55:32",[90,99,108,116,124,136],{"id":91,"name":92,"github_repo":93,"description_zh":94,"stars":95,"difficulty_score":45,"last_commit_at":96,"category_tags":97,"status":46},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[42,98,44],"图像",{"id":100,"name":101,"github_repo":102,"description_zh":103,"stars":104,"difficulty_score":31,"last_commit_at":105,"category_tags":106,"status":46},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 
Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,"2026-04-05T23:32:43",[42,44,107],"语言模型",{"id":109,"name":110,"github_repo":111,"description_zh":112,"stars":113,"difficulty_score":31,"last_commit_at":114,"category_tags":115,"status":46},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[42,98,44],{"id":117,"name":118,"github_repo":119,"description_zh":120,"stars":121,"difficulty_score":31,"last_commit_at":122,"category_tags":123,"status":46},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[42,107],{"id":125,"name":126,"github_repo":127,"description_zh":128,"stars":129,"difficulty_score":31,"last_commit_at":130,"category_tags":131,"status":46},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 
12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[98,43,132,133,44,134,107,42,135],"视频","插件","其他","音频",{"id":137,"name":138,"github_repo":139,"description_zh":140,"stars":141,"difficulty_score":45,"last_commit_at":142,"category_tags":143,"status":46},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[44,98,42,107,134]]