[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-ExtensityAI--symbolicai":3,"tool-ExtensityAI--symbolicai":64},[4,17,25,39,48,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,14,15],"开发框架","Agent","语言模型","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":10,"last_commit_at":23,"category_tags":24,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,15],{"id":26,"name":27,"github_repo":28,"description_zh":29,"stars":30,"difficulty_score":10,"last_commit_at":31,"category_tags":32,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 
道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[33,34,35,36,14,37,15,13,38],"图像","数据工具","视频","插件","其他","音频",{"id":40,"name":41,"github_repo":42,"description_zh":43,"stars":44,"difficulty_score":45,"last_commit_at":46,"category_tags":47,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,3,"2026-04-04T04:44:48",[14,33,13,15,37],{"id":49,"name":50,"github_repo":51,"description_zh":52,"stars":53,"difficulty_score":45,"last_commit_at":54,"category_tags":55,"status":16},519,"PaddleOCR","PaddlePaddle\u002FPaddleOCR","PaddleOCR 是一款基于百度飞桨框架开发的高性能开源光学字符识别工具包。它的核心能力是将图片、PDF 等文档中的文字提取出来，转换成计算机可读取的结构化数据，让机器真正“看懂”图文内容。\n\n面对海量纸质或电子文档，PaddleOCR 解决了人工录入效率低、数字化成本高的问题。尤其在人工智能领域，它扮演着连接图像与大型语言模型（LLM）的桥梁角色，能将视觉信息直接转化为文本输入，助力智能问答、文档分析等应用场景落地。\n\nPaddleOCR 适合开发者、算法研究人员以及有文档自动化需求的普通用户。其技术优势十分明显：不仅支持全球 100 多种语言的识别，还能在 Windows、Linux、macOS 等多个系统上运行，并灵活适配 CPU、GPU、NPU 等各类硬件。作为一个轻量级且社区活跃的开源项目，PaddleOCR 既能满足快速集成的需求，也能支撑前沿的视觉语言研究，是处理文字识别任务的理想选择。",74913,"2026-04-05T10:44:17",[15,33,13,37],{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":45,"last_commit_at":62,"category_tags":63,"status":16},2181,"OpenHands","OpenHands\u002FOpenHands","OpenHands 是一个专注于 AI 
驱动开发的开源平台，旨在让智能体（Agent）像人类开发者一样理解、编写和调试代码。它解决了传统编程中重复性劳动多、环境配置复杂以及人机协作效率低等痛点，通过自动化流程显著提升开发速度。\n\n无论是希望提升编码效率的软件工程师、探索智能体技术的研究人员，还是需要快速原型验证的技术团队，都能从中受益。OpenHands 提供了灵活多样的使用方式：既可以通过命令行（CLI）或本地图形界面在个人电脑上轻松上手，体验类似 Devin 的流畅交互；也能利用其强大的 Python SDK 自定义智能体逻辑，甚至在云端大规模部署上千个智能体并行工作。\n\n其核心技术亮点在于模块化的软件智能体 SDK，这不仅构成了平台的引擎，还支持高度可组合的开发模式。此外，OpenHands 在 SWE-bench 基准测试中取得了 77.6% 的优异成绩，证明了其解决真实世界软件工程问题的能力。平台还具备完善的企业级功能，支持与 Slack、Jira 等工具集成，并提供细粒度的权限管理，适合从个人开发者到大型企业的各类用户场景。",70612,"2026-04-05T11:12:22",[15,14,13,36],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":75,"owner_avatar_url":76,"owner_bio":77,"owner_company":78,"owner_location":78,"owner_email":79,"owner_twitter":75,"owner_website":80,"owner_url":81,"languages":82,"stars":103,"forks":104,"last_commit_at":105,"license":106,"difficulty_score":45,"env_os":107,"env_gpu":108,"env_ram":109,"env_deps":110,"category_tags":124,"github_topics":125,"view_count":45,"oss_zip_url":78,"oss_zip_packed_at":78,"status":16,"created_at":129,"updated_at":130,"faqs":131,"releases":161},1174,"ExtensityAI\u002Fsymbolicai","symbolicai","A neurosymbolic perspective on LLMs","SymbolicAI 是一个结合神经网络与符号逻辑的框架，让开发者能够更自然地在 Python 中使用大语言模型。它通过“符号”对象将传统编程与语言模型的能力融合，既保留了 Python 的简洁性，又增强了对语义和上下文的理解。SymbolicAI 解决了传统 LLM 使用中难以灵活控制、效率低以及语义理解不足的问题，适合希望深入定制 AI 交互流程的开发者和研究人员。其核心概念包括“语法符号”和“语义符号”，支持按需切换模式，实现高效且精准的语义处理。框架模块化设计使得扩展和集成其他工具（如搜索、图像生成）变得简单，是进行高级 AI 应用开发的理想选择。","# **SymbolicAI: A neuro-symbolic perspective on LLMs**\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FExtensityAI_symbolicai_readme_2c24e6e7722c.png\">\n\n\u003Cdiv 
align=\"center\">\n\n[![Documentation](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDocumentation-blue?style=for-the-badge)](https:\u002F\u002Fextensityai.gitbook.io\u002Fsymbolicai)\n[![Arxiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPaper-32758e?style=for-the-badge)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.00854)\n[![DeepWiki](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDeepWiki-yellow?style=for-the-badge)](https:\u002F\u002Fdeepwiki.com\u002FExtensityAI\u002Fsymbolicai)\n\n[![Twitter](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Furl\u002Fhttps\u002Ftwitter.com\u002Fdinumariusc.svg?style=social&label=@DinuMariusC)](https:\u002F\u002Ftwitter.com\u002FDinuMariusC) [![Twitter](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Furl\u002Fhttps\u002Ftwitter.com\u002Fsymbolicapi.svg?style=social&label=@ExtensityAI)](https:\u002F\u002Ftwitter.com\u002FExtensityAI)\n[![Twitter](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Furl\u002Fhttps\u002Ftwitter.com\u002Ffuturisold.svg?style=social&label=@futurisold)](https:\u002F\u002Fx.com\u002Ffuturisold)\n\n\u003C\u002Fdiv>\n\n---\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FExtensityAI_symbolicai_readme_2775076bc0d0.gif\">\n\n## What is SymbolicAI?\n\nSymbolicAI is a **neuro-symbolic** framework, combining classical Python programming with the differentiable, programmable nature of LLMs in a way that actually feels natural in Python.\nIt's built to not stand in the way of your ambitions.\nIt's easily extensible and customizable to your needs by virtue of its modular design.\nIt's quite easy to [write your own engine](https:\u002F\u002Fextensityai.gitbook.io\u002Fsymbolicai\u002Fengines\u002Fcustom_engine), [host locally](https:\u002F\u002Fextensityai.gitbook.io\u002Fsymbolicai\u002Fengines\u002Flocal_engine) an engine of your choice, or interface with tools like [web search](https:\u002F\u002Fextensityai.gitbook.io\u002Fsymbolicai\u002Fengines\u002Fsearch_engine) or 
[image generation](https:\u002F\u002Fextensityai.gitbook.io\u002Fsymbolicai\u002Fengines\u002Fdrawing_engine).\nTo keep things concise in this README, we'll introduce two key concepts that define SymbolicAI: **primitives** and **contracts**.\n\n > ❗️**NOTE**❗️ The framework's name is intended to credit the foundational work of Allen Newell and Herbert Simon that inspired this project.\n\n### Primitives\nAt the core of SymbolicAI are `Symbol` objects—each one comes with a set of tiny, composable operations that feel like native Python.\n```python\nfrom symai import Symbol\n```\n\n`Symbol` comes in **two flavours**:\n\n1. **Syntactic** – behaves like a normal Python value (string, list, int ‐ whatever you passed in).\n2. **Semantic**  – is wired to the neuro-symbolic engine and therefore *understands* meaning and\n   context.\n\nWhy is syntactic the default?\nBecause Python operators (`==`, `~`, `&`, …) are overloaded in `symai`.\nIf we would immediately fire the engine for *every* bitshift or comparison, code would be slow and could produce surprising side-effects.\nStarting syntactic keeps things safe and fast; you opt-in to semantics only where you need them.\n\n#### How to switch to the semantic view\n\n1. **At creation time**\n\n   ```python\n   S = Symbol(\"Cats are adorable\", semantic=True) # already semantic\n   print(\"feline\" in S) # => True\n   ```\n\n2. **On demand with the `.sem` projection** – the twin `.syn` flips you back:\n\n   ```python\n   S = Symbol(\"Cats are adorable\") # default = syntactic\n   print(\"feline\" in S.sem) # => True\n   print(\"feline\" in S)     # => False\n   ```\n\n3. 
Invoking **dot-notation operations**—such as `.map()` or any other semantic function—automatically switches the symbol to semantic mode:\n\n   ```python\n    S = Symbol(['apple', 'banana', 'cherry', 'cat', 'dog'])\n    print(S.map('convert all fruits to vegetables'))\n    # => ['carrot', 'broccoli', 'spinach', 'cat', 'dog']\n   ```\n\nBecause the projections return the *same underlying object* with just a different behavioural coat, you can weave complex chains of syntactic and semantic operations on a single symbol. Think of them as your building blocks for semantic reasoning. Right now, we support a wide range of primitives; check out the docs [here](https:\u002F\u002Fextensityai.gitbook.io\u002Fsymbolicai\u002Ffeatures\u002Fprimitives), but here's a quick snack:\n\n| Primitive\u002FOperator | Category         | Syntactic | Semantic | Description |\n|--------------------|-----------------|:---------:|:--------:|-------------|\n| `==`               | Comparison      | ✓         | ✓        | Tests for equality. Syntactic: literal match. Semantic: fuzzy\u002Fconceptual equivalence (e.g. 'Hi' == 'Hello'). |\n| `+`                | Arithmetic      | ✓         | ✓        | Syntactic: numeric\u002Fstring\u002Flist addition. Semantic: meaningful composition, blending, or conceptual merge. |\n| `&`                | Logical\u002FBitwise | ✓         | ✓        | Syntactic: bitwise\u002Flogical AND. Semantic: logical conjunction, inference, e.g., context merge. |\n| `symbol[index] = value` | Iteration        | ✓         | ✓        | Set item or slice. |\n| `.startswith(prefix)`    | String Helper    | ✓         | ✓        | Check if a string starts with given prefix (in both modes). |\n| `.choice(cases, default)` | Pattern Matching|           | ✓        | Select best match from provided cases. |\n| `.foreach(condition, apply)`| Execution Control |         | ✓        | Apply action to each element. 
|\n| `.cluster(**clustering_kwargs?)`              | Data Clustering  |         | ✓        | Cluster data into groups semantically. (uses sklearn's DBSCAN)|\n| `.similarity(other, metric?, normalize?)` | Embedding    |         | ✓        | Compute similarity between embeddings. |\n| ... | ...    |   ...|  ...        | ... |\n\n### Contracts\n\nThey say LLMs hallucinate—but your code can't afford to. That's why SymbolicAI brings **Design by Contract** principles into the world of LLMs. Instead of relying solely on post-hoc testing, contracts help build correctness directly into your design, everything packed into a decorator that will operate on your defined data models and validation constraints:\n```python\nfrom symai import Expression\nfrom symai.strategy import contract\nfrom symai.models import LLMDataModel # Compatible with Pydantic's BaseModel\nfrom pydantic import Field, field_validator\n\n# ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬\n#  Data models                                              ▬\n#  – clear structure + rich Field descriptions power        ▬\n#    validation, automatic prompt templating & remedies     ▬\n# ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬\nclass DataModel(LLMDataModel):\n    some_field: some_type = Field(description=\"very descriptive field\", and_other_supported_options_here=\"...\")\n\n    @field_validator('some_field')\n    def validate_some_field(cls, v):\n        # Custom basic validation logic can be added here too besides pre\u002Fpost\n        valid_opts = ['A', 'B', 'C']\n        if v not in valid_opts:\n            raise ValueError(f'Must be one of {valid_opts}, got \"{v}\".')\n        return v\n\n# ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬\n#  The contracted expression class                          ▬\n# ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬\n@contract(\n    # ── Remedies ─────────────────────────────────────────── #\n    pre_remedy=True,        # 
Try to fix bad inputs automatically\n    post_remedy=True,       # Try to fix bad LLM outputs automatically\n    accumulate_errors=True, # Feed history of errors to each retry\n    verbose=True,           # Nicely displays progress in terminal\n    remedy_retry_params=dict(tries=3, delay=0.4, max_delay=4.0,\n                             jitter=0.15, backoff=1.8, graceful=False),\n)\nclass Agent(Expression):\n    #\n    # High-level behaviour:\n    #  *. `prompt` – a *static* description of what the LLM must do (mandatory)\n    #  1. `pre`    – sanity-check inputs (optional)\n    #  2. `act`    – mutate state (optional)\n    #  3. LLM      – generate expected answer (handled by SymbolicAI engine)\n    #  4. `post`   – ensure answer meets semantic rules (optional)\n    #  5. `forward` (mandatory)\n    #     • if contract succeeded → return type validated LLM object\n    #     • else                  → graceful fallback answer\n    # ...\n```\n\nBecause we don't want to bloat this README file with long Python snippets, learn more about contracts [here](https:\u002F\u002Fdeepwiki.com\u002FExtensityAI\u002Fsymbolicai\u002F7.1-contract-validation-system) and [here](https:\u002F\u002Fextensityai.gitbook.io\u002Fsymbolicai\u002Ffeatures\u002Fcontracts).\n\n## Installation\n\n### Core Features\n\nTo get started with SymbolicAI, you can install it using pip:\n\n```bash\npip install symbolicai\n```\n\nAlternatively, clone the repository and set up a Python virtual environment using [uv](https:\u002F\u002Fdocs.astral.sh\u002Fuv\u002F) (>= 0.9.17):\n```bash\ngit clone git@github.com:ExtensityAI\u002Fsymbolicai.git\ncd symbolicai\nuv sync --python x.xx\nsource .\u002F.venv\u002Fbin\u002Factivate\n```\nRunning `symconfig` will now use this Python environment.\n\n#### Optional Features\n\nSymbolicAI uses multiple engines to process text, speech and images. We also include search engine access to retrieve information from the web. 
To use all of them, you will also need to install the following dependencies and assign the API keys to the respective engines. E.g.:\n\n```bash\npip install \"symbolicai[bitsandbytes]\"\npip install \"symbolicai[hf]\"\npip install \"symbolicai[lean]\"\npip install \"symbolicai[llama_cpp]\"\npip install \"symbolicai[ocr]\"\npip install \"symbolicai[qdrant]\"\npip install \"symbolicai[scrape]\"\npip install \"symbolicai[search]\"\npip install \"symbolicai[serpapi]\"\npip install \"symbolicai[services]\"\npip install \"symbolicai[solver]\"\npip install \"symbolicai[whisper]\"\npip install \"symbolicai[wolframalpha]\"\n```\n\nOr, install all optional dependencies at once:\n\n```bash\npip install \"symbolicai[all]\"\n```\n\nTo install dependencies exactly as locked in the provided lock file:\n```bash\nuv sync --frozen\n```\n\nTo install optional extras via uv:\n```bash\nuv sync --extra all # all optional extras\nuv sync --extra scrape # only scrape\n```\n\n> ❗️**NOTE**❗️Some of these optional dependencies may require additional installation steps. Additionally, some are currently only experimentally supported and may not work as expected. If a feature is extremely important to you, please consider contributing to the project or reaching out to us.\n\n## Configuration Management\n\nSymbolicAI now features a configuration management system with priority-based loading. The configuration system looks for settings in three different locations, in order of priority:\n\n1. **Debug Mode** (Current Working Directory)\n   - Highest priority\n   - Only applies to `symai.config.json`\n   - Useful for development and testing\n\n2. **Environment-Specific Config** (Python Environment)\n   - Second priority\n   - Located in `{python_env}\u002F.symai\u002F`\n   - Ideal for project-specific settings\n\n3. 
**Global Config** (Home Directory)\n   - Lowest priority\n   - Located in `~\u002F.symai\u002F`\n   - Default fallback for all settings\n\n### Configuration Files\n\nThe system manages three main configuration files:\n- `symai.config.json`: Main SymbolicAI configuration\n- `symsh.config.json`: Shell configuration\n- `symserver.config.json`: Server configuration\n\n### Viewing Your Configuration\n\nBefore using `symai`, we recommend inspecting your current configuration setup with the command below. It performs the initial package caching and initializes the `symbolicai` configuration files.\n\n```bash\nsymconfig\n\n# UserWarning: No configuration file found for the environment. A new configuration file has been created at \u003Cfull-path>\u002F.symai\u002Fsymai.config.json. Please configure your environment.\n```\n\nYou must then edit the `symai.config.json` file. A neurosymbolic engine is **required** to use the `symai` package. Read more about how to use a neuro-symbolic engine [here](https:\u002F\u002Fextensityai.gitbook.io\u002Fsymbolicai\u002Fengines\u002Fneurosymbolic_engine).\n\nThis command will show:\n- All configuration locations\n- Active configuration paths\n- Current settings (with sensitive data truncated)\n\n### Configuration Priority Example\n\n```console\nmy_project\u002F              # Debug mode (highest priority)\n└── symai.config.json    # Only this file is checked in debug mode\n\n{python_env}\u002F.symai\u002F     # Environment config (second priority)\n├── symai.config.json\n├── symsh.config.json\n└── symserver.config.json\n\n~\u002F.symai\u002F                # Global config (lowest priority)\n├── symai.config.json\n├── symsh.config.json\n└── symserver.config.json\n```\n\nIf a configuration file exists in multiple locations, the system will use the highest-priority version. 
If the environment-specific configuration is missing or invalid, the system will automatically fall back to the global configuration in the home directory.\n\n### Best Practices\n\n- Use the global config (`~\u002F.symai\u002F`) for your default settings\n- Use environment-specific configs for project-specific settings\n- Use debug mode (current directory) for development and testing\n- Run `symconfig` to inspect your current configuration setup\n\n### Configuration File\n\nYou can specify engine properties in a `symai.config.json` file in your project path. These settings take precedence over the environment variables.\nExample of a configuration file with all engines enabled:\n```json\n{\n    \"NEUROSYMBOLIC_ENGINE_API_KEY\": \"\u003CANTHROPIC_API_KEY>\",\n    \"NEUROSYMBOLIC_ENGINE_MODEL\": \"claude-sonnet-4-6\",\n    \"SYMBOLIC_ENGINE_API_KEY\": \"\u003CWOLFRAMALPHA_API_KEY>\",\n    \"SYMBOLIC_ENGINE\": \"wolframalpha\",\n    \"FORMAL_ENGINE_API_KEY\": \"\u003CAXIOM_API_KEY>\",\n    \"FORMAL_ENGINE\": \"axiom\",\n    \"EMBEDDING_ENGINE_API_KEY\": \"\u003COPENAI_API_KEY>\",\n    \"EMBEDDING_ENGINE_MODEL\": \"text-embedding-3-small\",\n    \"SEARCH_ENGINE_API_KEY\": \"\u003CPARALLEL_API_KEY>\",\n    \"SEARCH_ENGINE_MODEL\": \"parallel\",\n    \"TEXT_TO_SPEECH_ENGINE_API_KEY\": \"\u003COPENAI_API_KEY>\",\n    \"TEXT_TO_SPEECH_ENGINE_MODEL\": \"tts-1\",\n    \"INDEXING_ENGINE\": \"qdrant\",\n    \"INDEXING_ENGINE_API_KEY\": \"\u003CQDRANT_API_KEY>\",\n    \"INDEXING_ENGINE_URL\": \"http:\u002F\u002Flocalhost:6333\",\n    \"DRAWING_ENGINE_API_KEY\": \"\u003CBFL_API_KEY>\",\n    \"DRAWING_ENGINE_MODEL\": \"flux-pro-1.1\",\n    \"VISION_ENGINE_MODEL\": \"openai\u002Fclip-vit-base-patch32\",\n    \"OCR_ENGINE_API_KEY\": \"\u003COCR_API_KEY>\",\n    \"OCR_ENGINE_MODEL\": \"mistral-ocr-latest\",\n    \"SPEECH_TO_TEXT_ENGINE_MODEL\": \"turbo\"\n}\n```\n\nWith these steps completed, you should be ready to start using SymbolicAI in your projects.\n\n> ❗️**NOTE**❗️By default, the user 
warnings are enabled. To disable them, export `SYMAI_WARNINGS=0` in your environment variables.\n\n### Running tests\nSome examples of running tests locally:\n```bash\n# Run all tests\npytest tests\n# Run mandatory tests\npytest -m mandatory\n```\nBe sure to have your configuration set up correctly before running the tests. You can also run the tests with coverage to see how much of the code is covered by tests:\n```bash\npytest --cov=symbolicai tests\n```\n\n## 🪜 Next Steps\n\nNow, there are tools like DeepWiki that provide better documentation than we could ever write, and we don’t want to compete with that; we'll correct it where it's plain wrong. Please go read SymbolicAI's DeepWiki [page](https:\u002F\u002Fdeepwiki.com\u002FExtensityAI\u002Fsymbolicai\u002F). There's a lot of interesting stuff in there. Last but not least, check out our [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.00854) that describes the framework in detail. If you like watching videos, we have a series of tutorials that you can find [here](https:\u002F\u002Fextensityai.gitbook.io\u002Fsymbolicai\u002Ftutorials\u002Fvideo_tutorials).\n\n## 📜 Citation\n\n```bibtex\n@article{dinu2024symbolicai,\n  title={Symbolicai: A framework for logic-based approaches combining generative models and solvers},\n  author={Dinu, Marius-Constantin and Leoveanu-Condrei, Claudiu and Holzleitner, Markus and Zellinger, Werner and Hochreiter, Sepp},\n  journal={arXiv preprint arXiv:2402.00854},\n  year={2024}\n}\n```\n\n## 📝 License\n\nThis project is licensed under the BSD-3-Clause License.\n\n## Like this Project?\n\nIf you appreciate this project, please leave a star ⭐️ and share it with friends and colleagues. To support the ongoing development of this project even further, consider donating. 
Thank you!\n\n[![Donate](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDonate-PayPal-green.svg?style=for-the-badge)](https:\u002F\u002Fwww.paypal.com\u002Fdonate\u002F?hosted_button_id=WCWP5D2QWZXFQ)\n\nWe are also seeking contributors or investors to help grow and support this project. If you are interested, please reach out to us.\n\n## 📫 Contact\n\nFeel free to contact us with any questions about this project via [email](mailto:office@extensity.ai), through our [website](https:\u002F\u002Fextensity.ai\u002F), or find us on Discord:\n[![Discord](https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F768087161878085643?label=Discord&logo=Discord&logoColor=white?style=for-the-badge)](https:\u002F\u002Fdiscord.gg\u002FQYMNnh9ra8)\n","# **SymbolicAI：大语言模型的神经符号视角**\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FExtensityAI_symbolicai_readme_2c24e6e7722c.png\">\n\n\u003Cdiv align=\"center\">\n\n[![文档](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDocumentation-blue?style=for-the-badge)](https:\u002F\u002Fextensityai.gitbook.io\u002Fsymbolicai)\n[![Arxiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPaper-32758e?style=for-the-badge)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.00854)\n[![DeepWiki](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDeepWiki-yellow?style=for-the-badge)](https:\u002F\u002Fdeepwiki.com\u002FExtensityAI\u002Fsymbolicai)\n\n[![Twitter](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Furl\u002Fhttps\u002Ftwitter.com\u002Fdinumariusc.svg?style=social&label=@DinuMariusC)](https:\u002F\u002Ftwitter.com\u002FDinuMariusC) 
[![Twitter](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Furl\u002Fhttps\u002Ftwitter.com\u002Fsymbolicapi.svg?style=social&label=@ExtensityAI)](https:\u002F\u002Ftwitter.com\u002FExtensityAI)\n[![Twitter](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Furl\u002Fhttps\u002Ftwitter.com\u002Ffuturisold.svg?style=social&label=@futurisold)](https:\u002F\u002Fx.com\u002Ffuturisold)\n\n\u003C\u002Fdiv>\n\n---\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FExtensityAI_symbolicai_readme_2775076bc0d0.gif\">\n\n## 什么是 SymbolicAI？\n\nSymbolicAI 是一个**神经符号**框架，它将经典的 Python 编程与大语言模型的可微、可编程特性相结合，以一种在 Python 中自然流畅的方式实现。该框架旨在不阻碍你的任何目标，凭借其模块化设计，可以轻松扩展和定制以满足你的需求。你可以很轻松地[编写自己的引擎](https:\u002F\u002Fextensityai.gitbook.io\u002Fsymbolicai\u002Fengines\u002Fcustom_engine)，[本地部署](https:\u002F\u002Fextensityai.gitbook.io\u002Fsymbolicai\u002Fengines\u002Flocal_engine)你选择的引擎，或者与诸如[网络搜索](https:\u002F\u002Fextensityai.gitbook.io\u002Fsymbolicai\u002Fengines\u002Fsearch_engine)或[图像生成](https:\u002F\u002Fextensityai.gitbook.io\u002Fsymbolicai\u002Fengines\u002Fdrawing_engine)之类的工具进行集成。为了使本 README 更加简洁，我们将介绍定义 SymbolicAI 的两个关键概念：**原语**和**契约**。\n\n > ❗️**注意**❗️ 框架名称旨在致敬启发该项目的艾伦·纽厄尔和赫伯特·西蒙的基础性工作。\n\n### 原语\n\nSymbolicAI 的核心是 `Symbol` 对象——每个对象都带有一组小巧、可组合的操作，这些操作用起来就像原生的 Python 代码一样。\n```python\nfrom symai import Symbol\n```\n\n`Symbol` 有两种形式：\n\n1. **句法型** – 行为类似于普通的 Python 值（字符串、列表、整数——无论你传入什么）。\n2. **语义型** – 连接到神经符号引擎，因此能够*理解*含义和上下文。\n\n为什么默认是句法型呢？\n因为 Python 的运算符（`==`、`~`、`&` 等）在 `symai` 中被重载了。如果我们对每一次位移或比较都立即触发引擎，代码就会变得很慢，并且可能产生意想不到的副作用。从句法型开始可以保证安全和高效；只有在需要时才选择语义型。\n\n#### 如何切换到语义视图\n\n1. **创建时**\n\n   ```python\n   S = Symbol(\"Cats are adorable\", semantic=True) # 已经是语义型\n   print(\"feline\" in S) # => True\n   ```\n\n2. **按需使用 `.sem` 投影** – 使用对应的 `.syn` 可以切换回句法型：\n\n   ```python\n   S = Symbol(\"Cats are adorable\") # 默认是句法型\n   print(\"feline\" in S.sem) # => True\n   print(\"feline\" in S)     # => False\n   ```\n\n3. 
调用**点语法操作**——例如 `.map()` 或其他语义函数——会自动将符号切换到语义模式：\n\n   ```python\n    S = Symbol(['apple', 'banana', 'cherry', 'cat', 'dog'])\n    print(S.map('convert all fruits to vegetables'))\n    # => ['carrot', 'broccoli', 'spinach', 'cat', 'dog']\n   ```\n\n由于这些投影只是为同一个底层对象添加了不同的行为层，因此你可以在单个符号上编织复杂的句法和语义操作链。它们就像是你进行语义推理的构建块。目前我们支持广泛的原语；详细信息请参阅文档[这里](https:\u002F\u002Fextensityai.gitbook.io\u002Fsymbolicai\u002Ffeatures\u002Fprimitives)，以下是一些简要内容：\n\n| 原语\u002F运算符 | 类别         | 句法型 | 语义型 | 描述 |\n|--------------------|-----------------|:---------:|:--------:|-------------|\n| `==`               | 比较      | ✓         | ✓        | 测试相等性。句法型：字面匹配。语义型：模糊\u002F概念等价（例如‘Hi’ == ‘Hello’）。 |\n| `+`                | 算术      | ✓         | ✓        | 句法型：数字\u002F字符串\u002F列表相加。语义型：有意义的组合、融合或概念合并。 |\n| `&`                | 逻辑\u002F位运算 | ✓         | ✓        | 句法型：位逻辑与。语义型：逻辑合取、推理，例如上下文合并。 |\n| `symbol[index] = value` | 迭代        | ✓         | ✓        | 设置元素或切片。 |\n| `.startswith(prefix)`    | 字符串辅助    | ✓         | ✓        | 检查字符串是否以给定前缀开头（两种模式下均适用）。 |\n| `.choice(cases, default)` | 模式匹配|           | ✓        | 从提供的选项中选择最佳匹配。 |\n| `.foreach(condition, apply)`| 执行控制 |         | ✓        | 对每个元素应用操作。 |\n| `.cluster(**clustering_kwargs?)`              | 数据聚类  |         | ✓        | 在语义层面将数据聚类成组。（使用 sklearn 的 DBSCAN）|\n| `.similarity(other, metric?, normalize?)` | 嵌入    |         | ✓        | 计算嵌入之间的相似度。 |\n| ... | ...    |   ...|  ...        | ... 
|\n\n### 契约\n\n人们常说大语言模型会“幻觉”，但你的代码可承受不起这样的风险。这就是为什么 SymbolicAI 将**契约式设计**原则引入大语言模型领域。与其仅仅依赖事后测试，契约可以直接将正确性融入你的设计中，所有内容都封装在一个装饰器里，该装饰器将作用于你定义的数据模型和验证约束：\n```python\nfrom symai import Expression\nfrom symai.strategy import contract\nfrom symai.models import LLMDataModel # 兼容 Pydantic 的 BaseModel\nfrom pydantic import Field, field_validator\n\n# ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬\n#  数据模型                                              ▬\n#  – 清晰的结构 + 丰富的字段描述增强了        ▬\n#    验证、自动提示模板化及补救措施     ▬\n# ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬\nclass DataModel(LLMDataModel):\n    some_field: some_type = Field(description=\"非常详细的字段\", and_other_supported_options_here=\"...\")\n\n    @field_validator('some_field')\n    def validate_some_field(cls, v):\n        # 除了预\u002F后处理之外，还可以在此处添加自定义的基本验证逻辑\n        valid_opts = ['A', 'B', 'C']\n        if v not in valid_opts:\n            raise ValueError(f'必须是 {valid_opts} 中的一个，得到的是 \"{v}\"。')\n        return v\n\n# ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬\n#  经过契约约束的表达式类                          ▬\n\n# ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬\n@contract(\n    # ── 补救措施 ─────────────────────────────────────────── #\n    pre_remedy=True,        # 尝试自动修复不良输入\n    post_remedy=True,       # 尝试自动修复LLM的不良输出\n    accumulate_errors=True, # 将错误历史传递给每次重试\n    verbose=True,           # 在终端中清晰地显示进度\n    remedy_retry_params=dict(tries=3, delay=0.4, max_delay=4.0,\n                             jitter=0.15, backoff=1.8, graceful=False),\n)\nclass Agent(Expression):\n    #\n    # 高层次行为：\n    #  *. `prompt` – LLM 必须执行的 *静态* 描述（必填）\n    #  1. `pre`    – 对输入进行健全性检查（可选）\n    #  2. `act`    – 改变状态（可选）\n    #  3. LLM      – 生成预期答案（由 SymbolicAI 引擎处理）\n    #  4. `post`   – 确保答案符合语义规则（可选）\n    #  5. 
`forward` （必填）\n    #     • 如果合约成功 → 返回经过类型验证的 LLM 对象\n    #     • 否则                  → 温和的回退答案\n    # ...\n```\n\n由于我们不想让这个 README 文件因过长的 Python 代码片段而变得臃肿，关于合约的更多信息请参阅 [这里](https:\u002F\u002Fdeepwiki.com\u002FExtensityAI\u002Fsymbolicai\u002F7.1-contract-validation-system) 和 [这里](https:\u002F\u002Fextensityai.gitbook.io\u002Fsymbolicai\u002Ffeatures\u002Fcontracts)。\n\n## 安装\n\n### 核心功能\n\n要开始使用 SymbolicAI，您可以使用 pip 进行安装：\n\n```bash\npip install symbolicai\n```\n\n或者，克隆仓库并使用 [uv](https:\u002F\u002Fdocs.astral.sh\u002Fuv\u002F)（≥ 0.9.17）设置 Python 虚拟环境：\n```bash\ngit clone git@github.com:ExtensityAI\u002Fsymbolicai.git\ncd symbolicai\nuv sync --python x.xx\nsource .\u002F.venv\u002Fbin\u002Factivate\n```\n现在运行 `symconfig` 将会使用这个 Python 环境。\n\n#### 可选功能\n\nSymbolicAI 使用多个引擎来处理文本、语音和图像。我们还集成了搜索引擎访问功能，以便从网络上获取信息。要使用所有这些功能，您还需要安装以下依赖项，并将 API 密钥分配给相应的引擎。例如：\n\n```bash\npip install \"symbolicai[bitsandbytes]\"\npip install \"symbolicai[hf]\"\npip install \"symbolicai[lean]\"\npip install \"symbolicai[llama_cpp]\"\npip install \"symbolicai[ocr]\"\npip install \"symbolicai[qdrant]\"\npip install \"symbolicai[scrape]\"\npip install \"symbolicai[search]\"\npip install \"symbolicai[serpapi]\"\npip install \"symbolicai[services]\"\npip install \"symbolicai[solver]\"\npip install \"symbolicai[whisper]\"\npip install \"symbolicai[wolframalpha]\"\n```\n\n或者，一次性安装所有可选依赖项：\n\n```bash\npip install \"symbolicai[all]\"\n```\n\n要按照提供的锁定文件精确安装依赖项：\n```bash\nuv sync --frozen\n```\n\n通过 uv 安装可选扩展：\n```bash\nuv sync --extra all # 所有可选扩展\nuv sync --extra scrape # 仅安装 scrape 相关依赖\n```\n\n> ❗️**注意**❗️请注意，其中一些可选依赖可能需要额外的安装步骤。此外，部分功能目前仅处于实验性支持阶段，可能无法按预期工作。如果某项功能对您极为重要，请考虑为项目贡献代码或与我们联系。\n\n## 配置管理\n\nSymbolicAI 现在配备了基于优先级加载的配置管理系统。该系统会按优先级顺序在三个不同位置查找配置：\n\n1. **调试模式**（当前工作目录）\n   - 优先级最高\n   - 仅适用于 `symai.config.json`\n   - 适用于开发和测试\n\n2. **环境特定配置**（Python 环境）\n   - 优先级第二\n   - 位于 `{python_env}\u002F.symai\u002F`\n   - 适合项目特定的设置\n\n3. 
**全局配置**（主目录）\n   - 优先级最低\n   - 位于 `~\u002F.symai\u002F`\n   - 所有设置的默认回退\n\n### 配置文件\n\n系统管理三个主要配置文件：\n- `symai.config.json`: 主 SymbolicAI 配置\n- `symsh.config.json`: Shell 配置\n- `symserver.config.json`: 服务器配置\n\n### 查看您的配置\n\n在使用 `symai` 之前，我们建议您使用以下命令检查当前的配置设置。该命令将启动初始包缓存，并初始化 `symbolicai` 的配置文件。\n\n```bash\nsymconfig\n\n# UserWarning: 未找到环境的配置文件。已在 \u003Cfull-path>\u002F.symai\u002Fsymai.config.json 创建了一个新的配置文件。请配置您的环境。\n```\n\n随后您需要编辑 `symai.config.json` 文件。使用 `symai` 包时，**必须**配备神经符号引擎。有关如何使用神经符号引擎的更多信息，请参阅 [这里](https:\u002F\u002Fextensityai.gitbook.io\u002Fsymbolicai\u002Fengines\u002Fneurosymbolic_engine)。\n\n此命令将显示：\n- 所有配置位置\n- 活动配置路径\n- 当前设置（敏感数据已被截断）\n\n### 配置优先级示例\n\n```console\nmy_project\u002F              # 调试模式（最高优先级）\n└── symai.config.json    # 调试模式下仅检查此文件\n\n{python_env}\u002F.symai\u002F     # 环境配置（第二优先级）\n├── symai.config.json\n├── symsh.config.json\n└── symserver.config.json\n\n~\u002F.symai\u002F                # 全局配置（最低优先级）\n├── symai.config.json\n├── symsh.config.json\n└── symserver.config.json\n```\n\n如果某个配置文件存在于多个位置，系统将使用优先级最高的版本。如果环境特定配置缺失或无效，系统将自动回退到主目录中的全局配置。\n\n### 最佳实践\n\n- 使用全局配置 (`~\u002F.symai\u002F`) 作为默认设置\n- 使用环境特定配置来存储项目相关的设置\n- 使用调试模式（当前目录）进行开发和测试\n- 运行 `symconfig` 来检查当前的配置设置\n\n### 配置文件\n\n您可以在项目路径中创建一个 `symai.config.json` 文件来指定引擎属性。这将覆盖环境变量。\n\n所有引擎都启用的配置文件示例如下：\n```json\n{\n    \"NEUROSYMBOLIC_ENGINE_API_KEY\": \"\u003CANTHROPIC_API_KEY>\",\n    \"NEUROSYMBOLIC_ENGINE_MODEL\": \"claude-sonnet-4-6\",\n    \"SYMBOLIC_ENGINE_API_KEY\": \"\u003CWOLFRAMALPHA_API_KEY>\",\n    \"SYMBOLIC_ENGINE\": \"wolframalpha\",\n    \"FORMAL_ENGINE_API_KEY\": \"\u003CAXIOM_API_KEY>\",\n    \"FORMAL_ENGINE\": \"axiom\",\n    \"EMBEDDING_ENGINE_API_KEY\": \"\u003COPENAI_API_KEY>\",\n    \"EMBEDDING_ENGINE_MODEL\": \"text-embedding-3-small\",\n    \"SEARCH_ENGINE_API_KEY\": \"\u003CPARALLEL_API_KEY>\",\n    \"SEARCH_ENGINE_MODEL\": \"parallel\",\n    \"TEXT_TO_SPEECH_ENGINE_API_KEY\": \"\u003COPENAI_API_KEY>\",\n    \"TEXT_TO_SPEECH_ENGINE_MODEL\": 
\"tts-1\",\n    \"INDEXING_ENGINE\": \"qdrant\",\n    \"INDEXING_ENGINE_API_KEY\": \"\u003CQDRANT_API_KEY>\",\n    \"INDEXING_ENGINE_URL\": \"http:\u002F\u002Flocalhost:6333\",\n    \"DRAWING_ENGINE_API_KEY\": \"\u003CBFL_API_KEY>\",\n    \"DRAWING_ENGINE_MODEL\": \"flux-pro-1.1\",\n    \"VISION_ENGINE_MODEL\": \"openai\u002Fclip-vit-base-patch32\",\n    \"OCR_ENGINE_API_KEY\": \"\u003COCR_API_KEY>\",\n    \"OCR_ENGINE_MODEL\": \"mistral-ocr-latest\",\n    \"SPEECH_TO_TEXT_ENGINE_MODEL\": \"turbo\",\n}\n```\n\n完成以上步骤后，您应该就可以在项目中开始使用 SymbolicAI 了。\n\n> ❗️**注意**❗️ 默认情况下，用户警告是启用的。要禁用它们，请在您的环境变量中导出 `SYMAI_WARNINGS=0`。\n\n### 运行测试\n以下是一些在本地运行测试的示例：\n```bash\n# 运行所有测试\npytest tests\n# 运行强制性测试\npytest -m mandatory\n```\n请确保在运行测试之前正确配置好相关设置。您还可以运行带有覆盖率的测试，以查看有多少代码被测试覆盖：\n```bash\npytest --cov=symbolicai tests\n```\n\n## 🪜 下一步\n\n现在有一些工具，比如 DeepWiki，提供了比我们所能编写的更好的文档，我们并不想与之竞争；只有在明显错误的地方，我们才会进行修正。请前往 SymbolicAI 的 DeepWiki [页面](https:\u002F\u002Fdeepwiki.com\u002FExtensityAI\u002Fsymbolicai\u002F)阅读相关内容。那里有很多有趣的信息。最后，别忘了查看我们的 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.00854)，其中详细介绍了该框架。如果您喜欢观看视频，我们还有一系列教程，您可以在这里找到：[SymbolicAI 教程视频](https:\u002F\u002Fextensityai.gitbook.io\u002Fsymbolicai\u002Ftutorials\u002Fvideo_tutorials)。\n\n## 📜 引用\n\n```bibtex\n@article{dinu2024symbolicai,\n  title={Symbolicai: A framework for logic-based approaches combining generative models and solvers},\n  author={Dinu, Marius-Constantin and Leoveanu-Condrei, Claudiu and Holzleitner, Markus and Zellinger, Werner and Hochreiter, Sepp},\n  journal={arXiv preprint arXiv:2402.00854},\n  year={2024}\n}\n```\n\n## 📝 许可证\n\n本项目采用 BSD-3-Clause 许可证。\n\n## 喜欢这个项目吗？\n\n如果您喜欢这个项目，请给它点个赞 ⭐️ 并分享给朋友和同事。为了进一步支持该项目的持续开发，您也可以考虑捐赠。感谢您的支持！\n\n[![捐赠](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDonate-PayPal-green.svg?style=for-the-badge)](https:\u002F\u002Fwww.paypal.com\u002Fdonate\u002F?hosted_button_id=WCWP5D2QWZXFQ)\n\n我们也在寻找贡献者或投资者来帮助发展和支持这个项目。如果您感兴趣，请随时与我们联系。\n\n## 📫 联系方式\n\n如有关于该项目的问题，欢迎通过 
[email](mailto:office@extensity.ai), our [website](https:\u002F\u002Fextensity.ai\u002F), or Discord:\n[![Discord](https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F768087161878085643?label=Discord&logo=Discord&logoColor=white?style=for-the-badge)](https:\u002F\u002Fdiscord.gg\u002FQYMNnh9ra8)","# SymbolicAI Quick Start Guide\n\n## Environment Setup\n\n### System requirements\n- Python 3.8 or higher\n- An environment that supports pip and uv (recommended)\n\n### Prerequisites\n- Nothing extra to install, unless you use optional features (such as search, OCR, or image generation)\n\n## Installation\n\n### Core installation\n```bash\npip install symbolicai\n```\n\n### Setting up a virtual environment with uv (recommended)\n```bash\ngit clone git@github.com:ExtensityAI\u002Fsymbolicai.git\ncd symbolicai\nuv sync --python 3.10\nsource .\u002F.venv\u002Fbin\u002Factivate\n```\n\n### Installing optional features\nInstall all optional features:\n```bash\npip install \"symbolicai[all]\"\n```\n\nOr install them as needed:\n```bash\npip install \"symbolicai[search]\"\npip install \"symbolicai[ocr]\"\npip install \"symbolicai[wolframalpha]\"\n```\n\n> ⚠️ Note: some optional features may require extra configuration or are only experimentally supported.\n\n## Basic Usage\n\n### Creating symbol objects\n```python\nfrom symai import Symbol\n\n# Create a syntactic symbol\ns = Symbol(\"Cats are adorable\")\nprint(\"feline\" in s)  # => False\n\n# Create a semantic symbol\ns = Symbol(\"Cats are adorable\", semantic=True)\nprint(\"feline\" in s)  # => True\n```\n\n### Switching to the semantic view with `.sem`\n```python\ns = Symbol(\"Cats are adorable\")\nprint(\"feline\" in s.sem)  # => True\n```\n\n### Performing semantic operations\n```python\ns = Symbol(['apple', 'banana', 'cherry', 'cat', 'dog'])\nprint(s.map('convert all fruits to vegetables'))\n# Output: ['carrot', 'broccoli', 'spinach', 'cat', 'dog']\n```\n\n### Using contracts (design by contract)\n```python\nfrom symai import Expression\nfrom symai.strategy import contract\nfrom symai.models import LLMDataModel\nfrom pydantic import Field, field_validator\n\nclass DataModel(LLMDataModel):\n    some_field: str = Field(description=\"a field description\")\n\n    @field_validator('some_field')\n    def validate_some_field(cls, v):\n        if v not in ['A', 'B', 'C']:\n            raise ValueError(f'must be A, B, or C, got \"{v}\"')\n        return v\n\n@contract()\nclass Agent(Expression):\n    def forward(self):\n        
return self._llm_call()\n```\n\n### Configuration management\nRun the following command to initialize the configuration:\n```bash\nsymconfig\n```\n\nThen edit the `~\u002F.symai\u002Fsymai.config.json` file and make sure a neuro-symbolic engine is configured.","A data scientist is building an automated report-generation system that needs to extract key information from unstructured text and produce standardized analysis reports according to a set of rules. He has to process large volumes of documents, including news articles, market research, and customer feedback.\n\n### Without SymbolicAI\n- Complex regular expressions must be hand-written to extract specific information, which is costly to maintain and error-prone\n- Semantic-understanding tasks (such as sentiment analysis and entity recognition) depend on external APIs, with high call costs and slow responses\n- There is no unified interface for combining different models and logic, so the code structure becomes messy\n- Processing logic is hard to adjust dynamically, limiting flexibility\n\n### With SymbolicAI\n- Text can be manipulated directly through `Symbol` objects, combining syntactic and semantic capabilities to simplify information extraction\n- The built-in neuro-symbolic engine handles semantic-understanding tasks without frequent calls to external services\n- The modular design makes it easy to integrate custom logic with pretrained models, improving maintainability\n- Flexible semantic operations let the processing logic adapt dynamically to changing requirements\n\nSymbolicAI lets data scientists build intelligent text-processing systems more efficiently, balancing performance and flexibility.","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FExtensityAI_symbolicai_7d561cf1.png","ExtensityAI","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FExtensityAI_95e5db94.png","",null,"office@extensity.ai","www.extensity.ai","https:\u002F\u002Fgithub.com\u002FExtensityAI",[83,87,91,95,99],{"name":84,"color":85,"percentage":86},"Python","#3572A5",90.6,{"name":88,"color":89,"percentage":90},"HTML","#e34c26",7.9,{"name":92,"color":93,"percentage":94},"Jupyter Notebook","#DA5B0B",1.2,{"name":96,"color":97,"percentage":98},"Shell","#89e051",0.2,{"name":100,"color":101,"percentage":102},"PowerShell","#012456",0.1,1701,84,"2026-04-03T05:39:37","BSD-3-Clause","Linux, macOS, Windows","Requires an NVIDIA GPU with 8GB+ VRAM and CUDA 11.7+","16GB+",{"notes":111,"python":112,"dependencies":113},"We recommend managing the environment with conda; the first run downloads roughly 5GB of model files. Some optional features require additional dependencies and API keys.","3.8+",[114,115,116,117,118,119,120,121,122,123],"torch>=2.0","transformers>=4.30","accelerate","pydantic","numpy","scikit-learn","requests","httpx","tqdm","pandas",[15],[126,127,128],"large-language-models","neurosymbolic-ai","probabilistic-programming","2026-03-27T02:49:30.150509","2026-04-06T05:17:31.196024",[132,137,142,147,152,157],{"id":133,"question_zh":134,"answer_zh":135,"source_url":136},5320,"How do I resolve the 'NoneType' object has no attribute 'prepare' error when calling Symbol.translate?","This error is usually caused by an outdated Python version. Currently only Python 3.10 
and above are supported. Make sure you are using Python 3.10 or higher and install `httpx==0.27.2`. We also recommend installing by cloning the repository and running `pip install -e .`.","https:\u002F\u002Fgithub.com\u002FExtensityAI\u002Fsymbolicai\u002Fissues\u002F79",{"id":138,"question_zh":139,"answer_zh":140,"source_url":141},5321,"How do I fix SymbiaChat always recognizing user input as the [DK] intent?","The current `SymbiaChat` on the main branch is outdated and buggy. We recommend switching to the `dev` branch for testing, or consulting the update notes in the official documentation. A fix will be merged into the main branch soon.","https:\u002F\u002Fgithub.com\u002FExtensityAI\u002Fsymbolicai\u002Fissues\u002F85",{"id":143,"question_zh":144,"answer_zh":145,"source_url":146},5322,"How can I use local models such as llama\u002Fvicuna?","You can specify a local model by editing the configuration file `symai\u002Fbackend\u002Fservices\u002Fconfigs\u002Fhuggingface_causallm.config.json`. Alternatively, start a local server with `symsvr` and connect to it via `NeSyClientEngine`.","https:\u002F\u002Fgithub.com\u002FExtensityAI\u002Fsymbolicai\u002Fissues\u002F39",{"id":148,"question_zh":149,"answer_zh":150,"source_url":151},5323,"How do I resolve the ModuleNotFoundError: No module named 'distutils' error when installing symbolicai with Python 3.12?","This issue is likely related to setuptools rather than symbolicai itself. Try upgrading setuptools or check whether it is an environment configuration problem. There is no definitive solution yet; we recommend watching the project for updates.","https:\u002F\u002Fgithub.com\u002FExtensityAI\u002Fsymbolicai\u002Fissues\u002F48",{"id":153,"question_zh":154,"answer_zh":155,"source_url":156},5324,"Causal Reasoning Expressions are not working and only return concatenated strings. How do I fix this?","This may be caused by the Jupyter environment. Try running the code in a local environment, or check whether it is a version compatibility issue. The maintainers could not reproduce the problem on the latest main branch.","https:\u002F\u002Fgithub.com\u002FExtensityAI\u002Fsymbolicai\u002Fissues\u002F71",{"id":158,"question_zh":159,"answer_zh":160,"source_url":136},5325,"How do I correctly configure the symai.config.json file on WSL?","Make sure the `symai.config.json` file is located in the `~\u002F.symai` folder and that the keys and model information are filled in correctly. If the problem persists, try regenerating the configuration file or contact the maintainers for help.",[162,167,172,177,182,187,192,197,202,207,212,217,222,227,232,237,242,247,252,257],{"id":163,"version":164,"summary_zh":165,"released_at":166},104815,"1.13.0","## 2026-04-02\r\n\r\n### Added\r\n- Local Lean4 engine: run formal verification via Docker with no cloud dependency\r\n- Lean4 FastAPI server with Docker container lifecycle management and idle timeout\r\n- `symserver --lean4` CLI command 
to start the local Lean4 server\r\n- Gemini `3.1-flash-lite-preview` model support with `thinking_level` config\r\n- Local file and base64 input support for Mistral OCR engine\r\n- `caption_prompt` kwarg for FileEngine to customize vision prompts\r\n\r\n### Changed\r\n- Formal engine now supports two backends: Axiom (cloud) and Local (Docker)\r\n- Server shutdown message changed from warning to success style\r\n\r\n### Fixed\r\n- Path traversal vulnerability in document path handling\r\n- SSRF and DNS rebinding in Qdrant URL downloads\r\n- Gemini thinking tokens now tracked via `thoughts_token_count` in MetadataTracker\r\n- MetadataTracker handles engines without `self.model` attribute\r\n- Thread-safe contracts via per-instance `TypeValidationFunction`\r\n- Noisy warning removed on expected non-`LLMDataModel` input coercion\r\n\r\n### Security\r\n- Prevent path traversal in FileEngine document handling\r\n- Prevent SSRF and DNS rebinding in Qdrant URL download validation\r\n\r\n### Maintenance\r\n- Narrow broad `except Exception` clauses to specific types in Lean4 engine\r\n- Strip dead SSH artifacts from Lean4 Dockerfile\r\n- Unify port-finding logic via atomic `socket.bind`\r\n- Add `uv` required-version and dependency cooldown via `exclude-newer`\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FExtensityAI\u002Fsymbolicai\u002Fcompare\u002F1.12.0...1.13.0","2026-04-02T11:51:18",{"id":168,"version":169,"summary_zh":170,"released_at":171},104816,"1.12.0","## 2026-03-28\r\n\r\n**BREAKING CHANGES**\r\n- OCR engine replaced: APILayer removed in favor of Mistral Document AI (`mistral-ocr-latest`)\r\n- OCR interface now requires keyword args (`document_url=` \u002F `image_url=`) instead of positional URL\r\n- New config key `OCR_ENGINE_MODEL` required alongside `OCR_ENGINE_API_KEY`\r\n\r\n### Added\r\n- Mistral OCR engine with document and image support, per-page output, and image extraction\r\n- `pip install symbolicai[ocr]` extra for the Mistral OCR 
dependency\r\n- Support for `gpt-5.4-mini` and `gpt-5.4-nano` reasoning models\r\n- Token tracking for `EmbeddingEngine` in `MetadataTracker`\r\n- Page-based usage tracking for Mistral OCR in `MetadataTracker`\r\n- `cache_control` kwarg override for Anthropic engines; pass `False` to disable\r\n\r\n### Removed\r\n- APILayer OCR engine (`engine_apilayer.py`)\r\n- Unused files: `app.py`, `build.py`, `icon_converter.py`, `installer.py`, `Dockerfile`, `docker-compose.yml`, `trusted_repos.yml`, `CITATION.cff`, `environment.yml`\r\n\r\n### Security\r\n- Bump `requests` to fix reported vulnerability\r\n- Bump `pyasn1` to fix reported vulnerability\r\n- Bump `pyjwt` to fix reported vulnerability\r\n- Pin `chonkie` without extras to avoid `litellm` hack\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FExtensityAI\u002Fsymbolicai\u002Fcompare\u002F1.11.0...1.12.0","2026-03-27T22:48:37",{"id":173,"version":174,"summary_zh":175,"released_at":176},104817,"1.11.0","## 2026-03-13\r\n\r\n**BREAKING CHANGES**\r\n- `symserver` Qdrant invocation changed from `INDEXING_ENGINE=qdrant symserver ...` to `symserver qdrant ...`.\r\n\r\n### Added\r\n- Qdrant RAG API: a FastAPI\u002Fuvicorn companion server with `\u002Fsearch`, `\u002Fchunk-upsert`, `\u002Fpoints`, `\u002Fretrieve`, `\u002Fcollections`, and `\u002Fhealthz` endpoints.\r\n- `symserver qdrant --rag` flag launches the RAG API alongside Qdrant as co-managed processes.\r\n- RAG API flags: `--rag-host`, `--rag-port`, `--rag-workers`, `--rag-token`, `--rag-reload` for full configuration.\r\n- Bearer-token and `X-API-Key` authentication on all RAG API endpoints (via `--rag-token` \u002F `RAG_API_TOKEN`).\r\n- Automatic query embedding: `\u002Fsearch` accepts plain text and embeds it via SymbolicAI before querying Qdrant.\r\n- Metadata-based operations: `\u002Fpoints\u002Fcount-by-metadata`, `\u002Fpoints\u002Fdelete-by-metadata`, `\u002Fpoints\u002Flist-by-metadata`.\r\n\r\n### Changed\r\n- `qdrant_server()` now accepts 
an explicit `argv` parameter and strips the positional `qdrant` subcommand.\r\n- `_QDRANT_SERVER_FLAGS` frozenset now uses enum `.value` instead of enum members.\r\n- `symserver` config persistence consolidated into `_save_symserver_config()` helper.\r\n- Graceful co-process shutdown: `symserver` terminates RAG API and Qdrant on exit, stops detached Docker containers.\r\n\r\n### Maintenance\r\n- Indexing engine documentation updated with RAG API setup instructions and HTTP usage examples.\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FExtensityAI\u002Fsymbolicai\u002Fcompare\u002F1.10.0...1.11.0","2026-03-13T18:13:16",{"id":178,"version":179,"summary_zh":180,"released_at":181},104818,"1.10.0","## 2026-03-09\r\n\r\n### Added\r\n- `symserver` now accepts `--max-workers`, `--max-search-threads`, `--api-key`, `--read-only-api-key`, `--log-level`, `--disable-telemetry`, `--snapshots-path`, `--enable-tls` \u002F `--tls-*`, and `--set` flags, forwarding them as `QDRANT__*` env vars to Qdrant.\r\n- `symserver` Qdrant branch now auto-detected from any qdrant-exclusive flag (e.g. 
`--docker-detach`), not just args containing `\"qdrant\"`.\r\n- `local_search` interface accepts explicit `url` and `api_key` parameters to avoid relying on stale `symserver.config.json` entries.\r\n- `QdrantServerFlag` enum (composed from `QdrantConfigFlag`, `QdrantTLSFlag`, `QdrantDockerFlag`, `QdrantStorageFlag`, `QdrantBoolFlag`, `QdrantGenericFlag`) exposes all supported CLI flags as a typed, self-documenting contract.\r\n- `TestEmbedBatching` and `TestRagEmbedBatching` benchmarks assert batch embedding is faster than sequential per-chunk calls (measured 8–116× speedup).\r\n- `TestRagEmbedBatching::test_concurrent_search_faster_than_sequential` confirms `--max-workers` benefit via `asyncio.gather`.\r\n- `pytest-asyncio` added as a dev dependency; `asyncio_mode = auto` enabled project-wide.\r\n\r\n### Changed\r\n- `chunk_and_upsert` now embeds all chunks in a **single batched API call** instead of one call per chunk, reducing ingestion latency by 8–100× on typical documents.\r\n- `symserver` error message corrected: no longer lists `qdrant` as a neurosymbolic model option; now provides per-engine configuration guidance.\r\n- TLS configuration in `qdrant_server.py` separated into `_apply_tls_env()` helper with a `path_transform` parameter, eliminating the binary\u002FDocker code duplication.\r\n- `config_args` persistence loops replaced by iteration over `QdrantConfigFlag` and `QdrantTLSFlag` enums, removing hardcoded `(attr, flag)` tuples.\r\n\r\n### Performance\r\n- RAG ingestion: batch embedding reduces N HTTP round-trips to 1 for `chunk_and_upsert` regardless of document size.\r\n- Qdrant search concurrency: `--max-workers` and `--max-search-threads` flags enable parallel request handling and per-request segment scanning.\r\n\r\n### Maintenance\r\n- Test imports moved to module level with `try\u002Fexcept ImportError` guards for optional dependencies (`qdrant_client`, `ChonkieChunker`).\r\n- Removed unused `numpy` and `QdrantResult` imports from the Qdrant 
test suite.\r\n- Collapsed 32 runs of triple-blank lines left by removed in-function imports.\r\n- `_QDRANT_SERVER_FLAGS` frozenset moved to module level (was rebuilt on every `run_server()` call).\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FExtensityAI\u002Fsymbolicai\u002Fcompare\u002F1.9.0...1.10.0","2026-03-09T19:48:01",{"id":183,"version":184,"summary_zh":185,"released_at":186},104819,"1.9.0","## 2026-03-07\r\n\r\n**BREAKING CHANGES**\r\n- `core.expression()` renamed to `core.symbolic()`; no backward-compatibility alias\r\n\r\n### Added\r\n- Axiom formal verification engine via cloud Axle SDK (`pip install symbolicai[lean]`)\r\n- `core.formal()` decorator routing to the new `formal` engine\r\n- `Interface('axiom')` with 14 Lean4 tools (check, verify_proof, disprove, etc.)\r\n- `@contract` post-conditions can delegate LLM outputs to Lean4 for proof verification\r\n- GPT-5.4 and GPT-5.4 Pro model support with resilient client config\r\n- Claude Sonnet 4.6 support with 1-hour prompt caching enabled by default\r\n- `FORMAL_ENGINE_API_KEY` \u002F `FORMAL_ENGINE` config pair for formal verification\r\n- Local Lean4 engine (`FORMAL_ENGINE=local`) via Docker, moved to `formal` package\r\n\r\n### Changed\r\n- OpenAI Responses client uses 600s timeout and 3 retries for long-running models\r\n\r\n### Fixed\r\n- Bump `lxml-html-clean` from 0.4.3 to 0.4.4\r\n\r\n### Maintenance\r\n- Refactor Parallel search engine: use SDK Pydantic types, pre-compile regex, use tldextract\r\n- Simplify neuro-symbolic engine tests\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FExtensityAI\u002Fsymbolicai\u002Fcompare\u002F1.8.3...1.9.0","2026-03-06T22:27:07",{"id":188,"version":189,"summary_zh":190,"released_at":191},104820,"1.8.3","## 2026-02-17\r\n\r\n### Added\r\n- **Qdrant tag-based searching**: filter by tags with any-of semantics via `MatchAny`.\r\n- **Document deletion** by source path, tags, or metadata filter (`delete_documents`, 
`delete_by_filter`).\r\n- **Chunk location metadata**: `chunk_and_upsert` stores line numbers, char offsets, and PDF page ranges per chunk.\r\n- **Existence and counting helpers**: `document_exists`, `tag_exists`, `count`, `documents_for_tag`, `count_documents_for_tag`.\r\n- `file:\u002F\u002F` citation URLs now include provenance fragments (`#L10-L42`, `#page=3`).\r\n- Dict filter shorthand supports list\u002Ftuple\u002Fset values as any-of matches.\r\n- `metadata` and `where` accepted as filter aliases in `local_search`.\r\n- New `examples\u002Fqdrant_rag_demo.ipynb` notebook demonstrating the Qdrant RAG workflow.\r\n\r\n### Changed\r\n- Dict-based `query_filter` now treats list\u002Ftuple\u002Fset values as `MatchAny` instead of erroring.\r\n- `search()` accepts `query_filter` as both kwarg and via `filter` alias for consistency.\r\n- PDF files in `chunk_and_upsert` now use `backend='markitdown'` for reliable extraction.\r\n- `scikit-learn` and `dill` promoted to core dependencies.\r\n\r\n### Maintenance\r\n- Bumped version to 1.8.3.\r\n- Updated indexing engine documentation with deletion, counting, and provenance sections.\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FExtensityAI\u002Fsymbolicai\u002Fcompare\u002F1.8.2...1.8.3","2026-02-17T10:20:53",{"id":193,"version":194,"summary_zh":195,"released_at":196},104821,"1.8.2","## 2026-02-16\r\n\r\n### Added\r\n- **Auto backend** for the file engine — intelligently routes reads to the best backend per file type.\r\n- Three-backend dispatch: `auto` (default), `standard` (strict), `markitdown` (force rich conversion).\r\n- Unknown file extensions gracefully fall back through plain-text then markitdown in auto mode.\r\n- URL reading now works with the default `auto` backend (previously required explicit `markitdown`).\r\n\r\n### Changed\r\n- **markitdown is now a core dependency** — no longer requires `pip install 'symbolicai[files]'`.\r\n- Default file engine backend changed from `standard` to 
`auto`.\r\n- File engine `forward()` refactored into three dispatch methods for clarity.\r\n- markitdown imports are now direct (removed conditional `try\u002Fexcept ImportError` guards).\r\n- Converter tuples simplified from conditional expressions to plain tuples.\r\n\r\n### Removed\r\n- `files` optional dependency group from `pyproject.toml`.\r\n- `symbolicai[files]` entry from the `all` extras group.\r\n- `_MARKITDOWN_AVAILABLE` feature flag and all associated runtime guards.\r\n- `_has_markitdown` \u002F `_skip_no_markitdown` test skip markers.\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FExtensityAI\u002Fsymbolicai\u002Fcompare\u002F1.8.1...1.8.2","2026-02-16T13:45:20",{"id":198,"version":199,"summary_zh":200,"released_at":201},104822,"1.8.1","## 2026-02-14\r\n\r\n### Added\r\n- Standard file backend now returns native Python objects: JSON\u002FYAML\u002FTOML → `dict`, CSV\u002FTSV → `list[dict]`\r\n- Image files (.jpg, .jpeg, .png) return RGB `numpy.ndarray` via cv2 in standard backend\r\n- TSV support in structured parsing and `as_box` (tab-delimited → `list[dict]` or `BoxList`)\r\n\r\n### Changed\r\n- `Symbol.open()` auto-parses structured formats by default; `as_box=True` preserves `Box`\u002F`BoxList`\r\n- Supported formats table in docs now shows default return types per category\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FExtensityAI\u002Fsymbolicai\u002Fcompare\u002F1.8.0...1.8.1","2026-02-14T10:46:52",{"id":203,"version":204,"summary_zh":205,"released_at":206},104823,"1.8.0","## 2026-02-12\r\n\r\n**BREAKING CHANGES**\r\n- Replace tika with markitdown for file reading; `tika` and `pypdf` are no longer dependencies\r\n- Remove `fix_pdf`, `with_metadata`, and file-slice syntax (`file.txt[0:10]`) kwargs from file engine\r\n- Remove `RetrievalAugmentedConversation`, `DocumentRetriever`, `TextContainer`, and `TextContainerFormatter`\r\n- File engine now uses explicit `backend` kwarg: `'standard'` (default) or 
`'markitdown'`\r\n\r\n### Added\r\n- New markitdown-based file engine supporting PDF, DOCX, PPTX, XLSX, HTML, EPUB, images, audio, and more\r\n- `as_box=True` kwarg for `Symbol.open()` to parse JSON\u002FYAML\u002FTOML\u002FCSV into Box objects with dot-access\r\n- LLM-powered image captioning and PPTX slide descriptions via `_SymaiVisionClient` adapter\r\n- URL support in file engine for RSS feeds and web pages (`backend='markitdown'`)\r\n- `files` optional dependency group: `pip install 'symbolicai[files]'`\r\n- Parallel file reading via `FileReader(workers=N)` using `ProcessPoolExecutor`\r\n- `FileReader.get_files()` now discovers all supported formats (was limited to PDF\u002FMD\u002FTXT)\r\n\r\n### Changed\r\n- Shell RAG indexing uses `Indexer` + `Conversation` directly instead of `DocumentRetriever`\r\n- `test_imports.py` now reads `pyproject.toml` dynamically instead of maintaining a hardcoded list\r\n- Move `sentencepiece` and `sentence-transformers` from core to `hf` optional group\r\n\r\n### Removed\r\n- `tika`, `pypdf`, `natsort`, `pandas`, `scikit-learn`, `torchvision`, `PyYAML`, `colorama`, `pathos`, `pydantic-settings`, `httpx`, `pydub`, `pycryptodome`, `pymongo`, `playwright`, `llvmlite` from core dependencies\r\n- `legacy\u002F` directory (old notebooks, examples, and test files)\r\n- `TextContainerFormatter` from `symai.formatter`\r\n- `DocumentRetriever` from `symai.extended`\r\n\r\n### Fixed\r\n- Shell missing `--conversation-style` CLI argument\r\n- Temp file leak in vision adapter for base64 image decoding\r\n- `prepare()` backslash handling now uses `Path.as_posix()` instead of stripping backslashes\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FExtensityAI\u002Fsymbolicai\u002Fcompare\u002F1.7.3...1.8.0","2026-02-14T09:30:20",{"id":208,"version":209,"summary_zh":210,"released_at":211},104824,"1.7.3","## 2026-02-06\r\n\r\n### Fixed\r\n- Prevented import-time failures when optional `parallel-web` is absent.\r\n- Prevented 
import-time failures when optional `firecrawl-py` is absent.\r\n- Optional search SDK errors now occur only when those engines initialize.\r\n\r\n### Maintenance\r\n- Applied Ruff autofixes across `symai` for style and import ordering.\r\n- Simplified string-splitting calls for cleaner, more efficient internals.\r\n- Removed an unused blanket `noqa` in the Qdrant server wrapper.\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FExtensityAI\u002Fsymbolicai\u002Fcompare\u002F1.7.1...1.7.3","2026-02-06T18:56:51",{"id":213,"version":214,"summary_zh":215,"released_at":216},104825,"1.7.1","## 2026-02-05\r\n\r\n### Added\r\n- Added Opus 4.6 adaptive thinking via `thinking={\"type\":\"adaptive\"}` in Anthropic reasoning requests.\r\n- Added adaptive effort control values: `low`, `medium`, `high`, and `max`.\r\n\r\n### Changed\r\n- Anthropic reasoning now merges adaptive effort with structured output format settings.\r\n- Non-Opus adaptive requests now warn and fall back to manual thinking budgets.\r\n\r\n### Maintenance\r\n- Expanded neurosymbolic engine docs with adaptive-thinking runtime examples and fallback behavior.\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FExtensityAI\u002Fsymbolicai\u002Fcompare\u002F1.7.0...1.7.1","2026-02-05T21:47:28",{"id":218,"version":219,"summary_zh":220,"released_at":221},104826,"1.7.0","## 2026-02-05\r\n\r\n### Added\r\n- Added `long_context_1m` runtime opt-in for Anthropic reasoning 1M context requests.\r\n- Documented Anthropic 1M runtime usage and fallback behavior in neuro-symbolic engine docs.\r\n- Documented contract remedy retries using `dynamic_engine` in contracts docs.\r\n\r\n### Changed\r\n- Mapped Opus JSON contracts to Anthropic schema-based `output_config` responses.\r\n- Resolved response format using active dynamic engine model before static config.\r\n- Enabled Claude Opus 4.6 routing across Anthropic chat and reasoning engines.\r\n\r\n### Fixed\r\n- Normalized `response_format=None` to avoid 
Opus structured-output regressions.\r\n- Fixed Anthropic model ID typo for `claude-3-opus-20240229`.\r\n- Silenced Cerebras HPACK debug logs to reduce low-level HTTP noise.\r\n- Constrained Torch to `\u003C2.10.0` to avoid tensor compatibility failures.\r\n\r\n### Maintenance\r\n- Updated dependency lockfile and release metadata.\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FExtensityAI\u002Fsymbolicai\u002Fcompare\u002F1.6.0...1.7.0","2026-02-05T21:02:29",{"id":223,"version":224,"summary_zh":225,"released_at":226},104827,"1.6.0","## 2026-02-04\r\n\r\n### Added\r\n- OpenRouter engine with Kimi-K2.5 model support\r\n- Nanobanana interface for extended integrations\r\n- MetadataTracker support for Claude and Gemini models\r\n- Gemini image drawing engine\r\n\r\n### Fixed\r\n- Embedding engine now works when `api_key` is passed to constructor (uses OpenAI client instance)\r\n\r\n### Removed\r\n- `SUPPORT_COMMUNITY` configuration option and data collection functionality\r\n- `symai\u002Fcollect` module (MongoDB collection, Aggregator, dynamic object creation)\r\n- `blip2` extra dependency\r\n\r\n### Maintenance\r\n- Updated engine documentation for drawing and neurosymbolic engines\r\n- Added tests for embedding and drawing engines\r\n- Simplified splash screen config handling\r\n\r\n## New Contributors\r\n* @vaidoamne made their first contribution in https:\u002F\u002Fgithub.com\u002FExtensityAI\u002Fsymbolicai\u002Fpull\u002F98\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FExtensityAI\u002Fsymbolicai\u002Fcompare\u002F1.5.0...1.6.0","2026-02-04T21:29:04",{"id":228,"version":229,"summary_zh":230,"released_at":231},104828,"1.5.0","## 2026-01-17\r\n\r\n### Added\r\n- Firecrawl.dev engine for reliable web scraping with JS rendering, proxies, and anti-bot handling.\r\n- `Interface(\"firecrawl\")` with `.search()` and `.scrape()` methods supporting markdown, JSON schema extraction.\r\n- User agent rotation pool (8 modern browser UAs) in 
RequestsEngine for better scrape resilience.\r\n- Exponential backoff retry strategy for transient server errors (500, 502, 503, 504).\r\n- DynamicEngine support for search engines (firecrawl, parallel) with automatic routing.\r\n\r\n### Fixed\r\n- `include_tags` and `exclude_tags` now correctly passed to search scrape options.\r\n\r\n### Changed\r\n- EngineRepository routes search requests through DynamicEngine when configured.\r\n\r\n### Maintenance\r\n- Updated lock file dependencies.\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FExtensityAI\u002Fsymbolicai\u002Fcompare\u002F1.4.0...1.5.0","2026-01-17T16:14:52",{"id":233,"version":234,"summary_zh":235,"released_at":236},104829,"1.4.0","## 2026-01-07\r\n\r\n### Added\r\n- Added `local_search` interface for Qdrant with citation-formatted results.\r\n- Added `treat_as_search_engine` mode for Qdrant forward searches.\r\n\r\n### Changed\r\n- Qdrant citations now resolve absolute `file:\u002F\u002F` URIs and per-chunk source identifiers.\r\n- Qdrant search accepts dict filters and forwards supported query parameters.\r\n- Qdrant query handling now filters unsupported kwargs for client compatibility.\r\n\r\n### Fixed\r\n- Updated Qdrant engine behavior to align with Qdrant 1.16.1 changes.\r\n\r\n### Maintenance\r\n- Expanded Qdrant indexing documentation and local_search examples.\r\n- Added Qdrant PDF fixtures and citation-focused tests.\r\n- Added `pytest-asyncio` to dev extras.\r\n\r\n## PRs\r\n* Feature\u002Fqdrant rag by @ryanhg in https:\u002F\u002Fgithub.com\u002FExtensityAI\u002Fsymbolicai\u002Fpull\u002F97\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FExtensityAI\u002Fsymbolicai\u002Fcompare\u002F1.3.0...1.4.0","2026-01-07T19:36:09",{"id":238,"version":239,"summary_zh":240,"released_at":241},104830,"1.3.0"," ## 2026-01-05\r\n\r\n  ### Added\r\n\r\n  - Add AGENTS.md with contributor and agent workflow guidelines.\r\n\r\n  ### Changed\r\n\r\n  - Parallel task results include 
structured reasoning\u002Fanswer\u002Fconfidence blocks when available.\r\n  - Parallel tasks expose first-basis reasoning and confidence metadata.\r\n\r\n  ### Fixed\r\n\r\n  - Parallel Extract parsing preserves non-string full_content outputs and falls back to excerpts.\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FExtensityAI\u002Fsymbolicai\u002Fcompare\u002F1.2.1...1.3.0","2026-01-05T18:59:06",{"id":243,"version":244,"summary_zh":245,"released_at":246},104831,"1.2.1","## 2025-12-12\r\n\r\n  ### Added\r\n\r\n  - `gpt-5.2*` in Responses Engine. \r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FExtensityAI\u002Fsymbolicai\u002Fcompare\u002F1.2.0...1.2.1","2025-12-12T13:30:35",{"id":248,"version":249,"summary_zh":250,"released_at":251},104832,"1.2.0","## 2025-12-05\r\n\r\n  ### Added\r\n\r\n  - OpenAI Responses engine for responses: models with reasoning trace and tool calls.\r\n  - Added claude-opus-4-5 to reasoning model support.\r\n  - Qdrant search now accepts dict metadata filters and optional score thresholds.\r\n  - Parallel search can exclude domains with validation and deduplication.\r\n  - Qdrant server flags for env-provided storage and disabling static cache.\r\n  - RuntimeInfo exposes extras for engine-specific metrics, including ParallelEngine usage.\r\n\r\n  ### Fixed\r\n\r\n  - Qdrant index engine updated for Qdrant 1.16.1 compatibility, including query_points and point ID normalization.\r\n\r\n  ### Maintenance\r\n\r\n  - Relaxed Ruff defaults by ignoring C901 and removing the mccabe complexity check.\r\n  - Added tests and docs for Qdrant filters and OpenAI Responses usage.\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FExtensityAI\u002Fsymbolicai\u002Fcompare\u002F1.1.1...1.2.0","2025-12-05T13:41:49",{"id":253,"version":254,"summary_zh":255,"released_at":256},104833,"1.1.1","## 2025-11-18\r\n\r\n### Added\r\n  - Added dynamic engine support for Cerebras chat and reasoning models.\r\n  - Added usage 
tracking for Cerebras engines in metadata and runtime statistics.\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FExtensityAI\u002Fsymbolicai\u002Fcompare\u002F1.1.0...1.1.1","2025-11-18T16:58:24",{"id":258,"version":259,"summary_zh":260,"released_at":261},104834,"1.1.0","## 2025-11-18\r\n\r\n  BREAKING CHANGES\r\n\r\n  - Renamed web scraping extra from \"webscraping\" to \"scrape\"; update installation and UV extras accordingly.\r\n  - Replaced \"naive_webscraping\" interface with \"naive_scrape\"; adjust Interface(\"naive_webscraping\") usages to the new name.\r\n\r\n  ### Added\r\n\r\n  - Added Qdrant-based indexing engine with chunking, collection management APIs, and dedicated qdrant extra.\r\n  - Added qdrant_server helper to start local Qdrant via Docker or binary with sensible defaults.\r\n  - Introduced naive_scrape interface and scrape engine for lightweight HTTP and PDF content extraction.\r\n  - Added Parallel.ai search engine with citation-aware SearchResult, domain filters, and task processor integration.\r\n  - Enabled Parallel.ai scrape route via parallel interface.scrape for robust extraction on dynamic pages.\r\n  - Added Cerebras neurosymbolic engine with thinking-trace capture and cerebras:model_id style model configuration.\r\n  - Introduced ChonkieChunker helper for configurable token- and embedding-based document chunking workflows.\r\n  - Added scrape and search extras to install web scraping and Parallel dependencies separately.\r\n\r\n  ### Changed\r\n\r\n  - Updated neurosymbolic engine docs to cover Cerebras-specific model keys and reasoning trace behavior.\r\n  - Updated configuration mapping so cfg_to_interface() uses naive_scrape for the generic \"scraper\" shortcut.\r\n\r\n  ### Fixed\r\n\r\n  - Fixed Qdrant collection creation and Docker wrapper configuration for more robust local setups.\r\n  - Fixed chonkie integration by lazily importing chunking utilities to avoid crashes when extra is missing.\r\n\r\n  ### 
Removed\r\n\r\n  - Removed webscraping engine names and documentation in favor of the new scrape engine naming.\r\n\r\n  ### Maintenance\r\n\r\n  - Applied ruff formatting and minor refactors across core, engine, and interface modules.\r\n  - Expanded tests for Qdrant, search, scrape, neurosymbolic, and symbol components to improve coverage.\r\n  - Documented regular expression behavior and added safety notes to scraping and search engines.\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FExtensityAI\u002Fsymbolicai\u002Fcompare\u002F1.0.0...1.1.0","2025-11-18T16:13:22"]