[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-FalkorDB--GraphRAG-SDK":3,"tool-FalkorDB--GraphRAG-SDK":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":79,"owner_email":80,"owner_twitter":81,"owner_website":82,"owner_url":83,"languages":84,"stars":89,"forks":90,"last_commit_at":91,"license":92,"difficulty_score":10,"env_os":93,"env_gpu":93,"env_ram":93,"env_deps":94,"category_tags":99,"github_topics":100,"view_count":10,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":109,"updated_at":110,"faqs":111,"releases":142},1057,"FalkorDB\u002FGraphRAG-SDK","GraphRAG-SDK","Build fast and accurate GenAI apps with GraphRAG SDK at scale.","GraphRAG-SDK 是一个面向生成式AI应用开发的工具包，专注于通过图检索增强生成（GraphRAG）技术提升信息检索与内容生成的准确性。它将知识图谱的结构化数据优势与大语言模型（LLM）的语义理解能力结合，帮助开发者构建可扩展的智能问答、推荐系统或数据分析应用。传统RAG技术在处理复杂关联数据时可能存在检索效率低、上下文碎片化的问题，而GraphRAG通过知识图谱的拓扑结构优化检索路径，同时支持多跳推理，显著提升了跨实体关系的信息整合能力。\n\n该工具特别适合需要处理复杂数据关联的开发者和研究人员，例如构建企业级智能客服、科研领域的知识发现系统或需要动态更新知识库的团队。开发者可通过Docker快速部署本地环境，或使用FalkorDB云服务进行扩展，同时支持OpenAI、Google Gemini、Ollama等主流模型灵活接入。\n\n技术层面，GraphRAG-SDK 的核心亮点包括：自动从现有知识图谱中提取本体结构，降低建模门槛；通过LiteLLM框架实现多模型协同推理；提供端到端的工作流配置（从数据摄入到查询优化），并兼容FalkorDB的图数据库特性。对于需要平衡生成质量与","GraphRAG-SDK 
是一个面向生成式AI应用开发的工具包，专注于通过图检索增强生成（GraphRAG）技术提升信息检索与内容生成的准确性。它将知识图谱的结构化数据优势与大语言模型（LLM）的语义理解能力结合，帮助开发者构建可扩展的智能问答、推荐系统或数据分析应用。传统RAG技术在处理复杂关联数据时可能存在检索效率低、上下文碎片化的问题，而GraphRAG通过知识图谱的拓扑结构优化检索路径，同时支持多跳推理，显著提升了跨实体关系的信息整合能力。\n\n该工具特别适合需要处理复杂数据关联的开发者和研究人员，例如构建企业级智能客服、科研领域的知识发现系统或需要动态更新知识库的团队。开发者可通过Docker快速部署本地环境，或使用FalkorDB云服务进行扩展，同时支持OpenAI、Google Gemini、Ollama等主流模型灵活接入。\n\n技术层面，GraphRAG-SDK 的核心亮点包括：自动从现有知识图谱中提取本体结构，降低建模门槛；通过LiteLLM框架实现多模型协同推理；提供端到端的工作流配置（从数据摄入到查询优化），并兼容FalkorDB的图数据库特性。对于需要平衡生成质量与计算成本的场景，其分阶段模型选择机制（如Ollama用于知识图生成，GPT-4用于最终生成）可有效优化资源分配。","# GraphRAG \r\n[![Dockerhub](https:\u002F\u002Fimg.shields.io\u002Fdocker\u002Fpulls\u002Ffalkordb\u002Ffalkordb?label=Docker)](https:\u002F\u002Fhub.docker.com\u002Fr\u002Ffalkordb\u002Ffalkordb\u002F)\r\n[![pypi](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fgraphrag_sdk.svg)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fgraphrag_sdk\u002F)\r\n[![Discord](https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F1146782921294884966?style=flat-square)](https:\u002F\u002Fdiscord.gg\u002F6M4QwDXn2w)\r\n[![Contributor Covenant](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FContributor%20Covenant-2.1-4baaaa.svg)](CODE_OF_CONDUCT.md)\r\n\r\n\u003Cp align=\"center\">\r\n  \u003Cimg alt=\"FalkorDB GraphRAG-SDK README Banner\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFalkorDB_GraphRAG-SDK_readme_6f218dcf6bdc.png\" width=\"1500\">\r\n\u003C\u002Fp>\r\n\r\n### Build fast and accurate GenAI apps with GraphRAG SDK at scale\r\n\r\nSimplify the development of your next GenAI application with GraphRAG-SDK, a specialized toolkit for building Graph Retrieval-Augmented Generation (GraphRAG) systems. 
It integrates knowledge graphs, ontology management, and state-of-the-art LLMs to deliver accurate, efficient, and customizable RAG workflows.\r\n\r\n# GraphRAG Setup\r\n### Database Setup\r\n\r\n[![Try Free](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FTry%20Free-FalkorDB%20Cloud-FF8101?labelColor=FDE900&style=for-the-badge&link=https:\u002F\u002Fapp.falkordb.cloud)](https:\u002F\u002Fapp.falkordb.cloud)\r\n\r\nOr run it on-premises with Docker:\r\n\r\n```sh\r\ndocker run -p 6379:6379 -p 3000:3000 -it --rm  -v .\u002Fdata:\u002Fdata falkordb\u002Ffalkordb:latest\r\n```\r\n\r\n### Dependencies:\r\n```sh\r\npip install graphrag_sdk\r\n```\r\n\r\n### Configure Credentials. See [.env](.env.template) for examples.\r\n\r\n* [LiteLLM](https:\u002F\u002Fdocs.litellm.ai): A framework supporting inference of large language models, allowing flexibility in deployment and use cases.  \r\n  To choose a vendor, use the prefix \"specific_vendor\u002Fyour_model\", for example \"openai\u002Fgpt-4.1\".\r\n* [OpenAI](https:\u002F\u002Fopenai.com\u002Findex\u002Fopenai-api) Recommended model:`gpt-4.1`\r\n* [Google](https:\u002F\u002Fmakersuite.google.com\u002Fapp\u002Fapikey) Recommended model:`gemini-2.0-flash`\r\n* [Azure-OpenAI](https:\u002F\u002Fai.azure.com) Recommended model:`gpt-4.1`\r\n* [Ollama](https:\u002F\u002Follama.com\u002F) Available for the Q&A step only. Recommended model: `llama3`. 
Ollama models are suitable for the Q&A step only (after the knowledge graph (KG) has been created).\r\n\r\n\r\n# How to use\r\n[![Get started](https:\u002F\u002Fpl-bolts-doc-images.s3.us-east-2.amazonaws.com\u002Fapp-2\u002Fget-started-badge.svg)](https:\u002F\u002Flightning.ai\u002Fmuhammadqadora\u002Fstudios\u002Fbuild-fast-accurate-genai-apps-advanced-rag-with-falkordb)\r\n[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FFalkorDB\u002FGraphRAG-SDK\u002Fblob\u002Fmain\u002Fexamples\u002Fmovies\u002Fdemo-movies.ipynb)\r\n\r\n### Environment Configuration\r\n\r\nBefore using the SDK, configure your environment variables:\r\n\r\n```bash\r\n# FalkorDB Connection (defaults are for on-premises)\r\nexport FALKORDB_HOST=\"localhost\" \r\nexport FALKORDB_PORT=6379 \r\nexport FALKORDB_USERNAME=\"your-username\"  # optional for on-premises\r\nexport FALKORDB_PASSWORD=\"your-password\"  # optional for on-premises\r\n\r\n# LLM Provider (choose one)\r\nexport OPENAI_API_KEY=\"your-key\"  # or GOOGLE_API_KEY, GROQ_API_KEY, etc.\r\n```\r\n\r\n## Quick Start with Existing Knowledge Graph\r\n\r\nIf you already have a knowledge graph in FalkorDB, you can quickly set up GraphRAG by extracting the ontology from your existing graph:\r\n\r\n```python\r\nimport os\r\nfrom falkordb import FalkorDB\r\nfrom graphrag_sdk import KnowledgeGraph\r\nfrom graphrag_sdk.ontology import Ontology\r\nfrom graphrag_sdk.models.litellm import LiteModel\r\nfrom graphrag_sdk.model_config import KnowledgeGraphModelConfig\r\n\r\ngraph_name = \"my_existing_graph\"\r\n\r\n# Connect to FalkorDB using environment variables\r\ndb = FalkorDB(\r\n    host=os.getenv(\"FALKORDB_HOST\", \"localhost\"),\r\n    port=int(os.getenv(\"FALKORDB_PORT\", 6379)),\r\n    username=os.getenv(\"FALKORDB_USERNAME\"),  # optional for on-premises\r\n    password=os.getenv(\"FALKORDB_PASSWORD\")   # optional for 
on-premises\r\n)\r\n\r\n# Select graph\r\ngraph = db.select_graph(graph_name)\r\n\r\n# Extract ontology from existing knowledge graph\r\nontology = Ontology.from_kg_graph(graph)\r\n\r\n# Configure model and create GraphRAG instance\r\nmodel = LiteModel()  # Default is OpenAI GPT-4.1, can specify different model\r\nmodel_config = KnowledgeGraphModelConfig.with_model(model)\r\n\r\n# Create KnowledgeGraph instance\r\nkg = KnowledgeGraph(\r\n    name=graph_name,\r\n    model_config=model_config,\r\n    ontology=ontology,\r\n    host=os.getenv(\"FALKORDB_HOST\", \"localhost\"),\r\n    port=int(os.getenv(\"FALKORDB_PORT\", 6379)),\r\n    username=os.getenv(\"FALKORDB_USERNAME\"),\r\n    password=os.getenv(\"FALKORDB_PASSWORD\")\r\n)\r\n\r\n# Start chat session\r\nchat = kg.chat_session()\r\n\r\n# Ask questions\r\nresponse = chat.send_message(\"What products are available?\")\r\nprint(response[\"response\"])\r\n\r\n# Ask follow-up questions\r\nresponse = chat.send_message(\"Tell me which one of them is the most expensive\")\r\nprint(response[\"response\"])\r\n```\r\n\r\n## Creating Knowledge Graphs from Scratch\r\n\r\n### Step 1: Creating Ontologies\r\nAutomate ontology creation from unstructured data or define it manually - See [example](https:\u002F\u002Fgithub.com\u002Ffalkordb\u002FGraphRAG-SDK\u002Fblob\u002Fmain\u002Fexamples\u002Ftrip\u002Fdemo_orchestrator_trip.ipynb)\r\n\r\n```python\r\nfrom dotenv import load_dotenv\r\nimport json\r\nfrom graphrag_sdk.source import URL\r\nfrom graphrag_sdk import KnowledgeGraph, Ontology\r\nfrom graphrag_sdk.models.litellm import LiteModel\r\nfrom graphrag_sdk.model_config import KnowledgeGraphModelConfig\r\nload_dotenv()\r\n\r\n# Import Data\r\nurls = 
[\"https:\u002F\u002Fwww.rottentomatoes.com\u002Fm\u002Fside_by_side_2012\",\r\n\"https:\u002F\u002Fwww.rottentomatoes.com\u002Fm\u002Fmatrix\",\r\n\"https:\u002F\u002Fwww.rottentomatoes.com\u002Fm\u002Fmatrix_revolutions\",\r\n\"https:\u002F\u002Fwww.rottentomatoes.com\u002Fm\u002Fmatrix_reloaded\",\r\n\"https:\u002F\u002Fwww.rottentomatoes.com\u002Fm\u002Fspeed_1994\",\r\n\"https:\u002F\u002Fwww.rottentomatoes.com\u002Fm\u002Fjohn_wick_chapter_4\"]\r\n\r\nsources = [URL(url) for url in urls]\r\n\r\n# Model - vendor: openai, model: gpt-4.1 -> openai\u002Fgpt-4.1\r\nmodel = LiteModel(model_name=\"openai\u002Fgpt-4.1\")\r\n\r\n# Ontology Auto-Detection\r\nontology = Ontology.from_sources(\r\n    sources=sources,\r\n    model=model,\r\n)\r\n# Save the ontology to the disk as a json file.\r\nwith open(\"ontology.json\", \"w\", encoding=\"utf-8\") as file:\r\n    file.write(json.dumps(ontology.to_json(), indent=2))\r\n```\r\n\r\n### Step 2: Creating a knowledge graph agent\r\nBuild, query, and manage knowledge graphs optimized for retrieval and augmentation tasks. \r\nLeverages FalkorDB for high-performance graph querying and multi-tenancy.\r\n\r\n```python\r\n# After approving the ontology, load it from disk.\r\nontology_file = \"ontology.json\"\r\nwith open(ontology_file, \"r\", encoding=\"utf-8\") as file:\r\n    ontology = Ontology.from_json(json.loads(file.read()))\r\n\r\nkg = KnowledgeGraph(\r\n    name=\"kg_name\",\r\n    model_config=KnowledgeGraphModelConfig.with_model(model),\r\n    ontology=ontology,\r\n    host=\"127.0.0.1\",\r\n    port=6379,\r\n    # username=falkor_username, # optional\r\n    # password=falkor_password  # optional\r\n)\r\n\r\nkg.process_sources(sources)\r\n```\r\n\r\n### Step 3: Query your Graph RAG\r\n\r\nAt this point, you have a Knowledge Graph that can be queried using this SDK. 
Use the `chat_session` method to start a conversation.\r\n\r\n```python\r\n# Conversation\r\nchat = kg.chat_session()\r\nresponse = chat.send_message(\"Who is the director of the movie The Matrix?\")\r\nprint(response)\r\nresponse = chat.send_message(\"How is this director connected to Keanu Reeves?\")\r\nprint(response)\r\n```\r\n\r\n## Next Steps\r\nWith these 3 steps now completed, you're ready to interact with and query your knowledge graph.  Here are suggestions for use cases:\r\n\u003Cp align=\"left\">\r\n  \u003Cimg alt=\"GraphRAG-SDK Use Cases Banner from FalkorDB\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFalkorDB_GraphRAG-SDK_readme_748478b3a1b2.png\" width=\"800\">\r\n\u003C\u002Fp>\r\n\r\n**Need help with your use case? Let's [talk](https:\u002F\u002Fwww.falkordb.com\u002Fget-demo\u002F)**\r\n\r\n\u003Cbr \u002F>\r\n\r\n# Using Ollama\r\n\r\nOllama models are suitable for the Q&A step only (after the knowledge graph has been created).\r\n\r\n## Setup\r\n\r\n```python\r\nfrom graphrag_sdk.models.ollama import OllamaGenerativeModel\r\n\r\n# Local Ollama (default: http:\u002F\u002Flocalhost:11434)\r\nqa_model = OllamaGenerativeModel(model_name=\"llama3:8b\")\r\n\r\n# Remote Ollama\r\nqa_model = OllamaGenerativeModel(\r\n    model_name=\"llama3:8b\",\r\n    api_base=\"http:\u002F\u002Fremote-host:11434\"\r\n)\r\n```\r\n\r\n\u003Cbr \u002F>\r\n\r\n# AI Agents with GraphRAG\r\n\r\n### Orchestrator\r\nThe GraphRAG-SDK supports Knowledge Graph-based agents. 
Each agent is an expert in its domain, and the orchestrator coordinates the agents.\r\n\r\nCheck out the example:\r\n\r\n[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FFalkorDB\u002FGraphRAG-SDK\u002Fblob\u002Fmain\u002Fexamples\u002Ftrip\u002Fdemo_orchestrator_trip.ipynb)\r\n\r\n\r\n### Agents\r\n\r\nSee the [Step 1](#how-to-use) section to understand how to create Knowledge Graph objects for the agents.\r\n\r\n```python\r\n# Define the model\r\nmodel = LiteModel(model_name=\"openai\u002Fgpt-4.1\")\r\n\r\n# Create the Knowledge Graph from the predefined ontology.\r\n# In this example, we will use the restaurants agent and the attractions agent.\r\nrestaurants_kg = KnowledgeGraph(\r\n    name=\"restaurants\",\r\n    ontology=restaurants_ontology,\r\n    model_config=KnowledgeGraphModelConfig.with_model(model),\r\n    host=\"127.0.0.1\",\r\n    port=6379,\r\n    # username=falkor_username, # optional\r\n    # password=falkor_password  # optional\r\n)\r\nattractions_kg = KnowledgeGraph(\r\n    name=\"attractions\",\r\n    ontology=attractions_ontology,\r\n    model_config=KnowledgeGraphModelConfig.with_model(model),\r\n    host=\"127.0.0.1\",\r\n    port=6379,\r\n    # username=falkor_username, # optional\r\n    # password=falkor_password  # optional\r\n)\r\n\r\n\r\n# The following agent is specialized in finding restaurants.\r\nrestaurants_agent = KGAgent(\r\n    agent_id=\"restaurants_agent\",\r\n    kg=restaurants_kg,\r\n    introduction=\"I'm a restaurant agent, specialized in finding the best restaurants for you.\",\r\n)\r\n\r\n# The following agent is specialized in finding tourist attractions.\r\nattractions_agent = KGAgent(\r\n    agent_id=\"attractions_agent\",\r\n    kg=attractions_kg,\r\n    introduction=\"I'm an attractions agent, specialized in finding the best tourist attractions for you.\",\r\n)\r\n```\r\n\r\n### Orchestrator - Multi-Agent 
System\r\n\r\nThe orchestrator manages the usage of agents and handles questioning.\r\n\r\n```python\r\n# Initialize the orchestrator while giving it the backstory.\r\norchestrator = Orchestrator(\r\n    model,\r\n    backstory=\"You are a trip planner, and you want to provide the best possible itinerary for your clients.\",\r\n)\r\n\r\n# Register the agents that we created above.\r\norchestrator.register_agent(restaurants_agent)\r\norchestrator.register_agent(attractions_agent)\r\n\r\n# Query the orchestrator.\r\nrunner = orchestrator.ask(\"Create a two-day itinerary for a trip to Rome. Please don't ask me any questions; just provide the best itinerary you can.\")\r\nprint(runner.output)\r\n\r\n```\r\n## Community\r\n\r\nHave questions or feedback? Reach out via:\r\n- [GitHub Issues](https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fissues)\r\n- Join our [Discord](https:\u002F\u002Fdiscord.com\u002Finvite\u002F6M4QwDXn2w)\r\n\r\n⭐️ If you find this repository helpful, please consider giving it a star!\r\n\r\n## Additional Enhancement: Configuring your prompts\r\nWhen creating your Knowledge Graph (KG) agent, you can customize the prompts to tailor its behavior.\r\n\r\n💡 This step is optional but can enhance functionality.\r\n\r\nThere are five types of prompts:\r\n\r\n1. **`cypher_system_instruction`**  \r\n   - System instructions for the Cypher generation step.  \r\n   - **Note:** Ensure your prompt includes `{ontology}`.\r\n\r\n2. **`qa_system_instruction`**  \r\n   - System instructions for the Q&A step.\r\n\r\n3. **`cypher_gen_prompt`**  \r\n   - The prompt used during the Cypher generation step.  \r\n   - **Note:** Include `{question}` in your prompt.\r\n\r\n4. **`cypher_gen_prompt_history`**  \r\n   - The prompt for Cypher generation when history needs to be considered.  \r\n   - **Note:** Include `{question}` and `{last_answer}` in your prompt.\r\n\r\n5. **`qa_prompt`**  \r\n   - The prompt used during the Q&A step.  
\r\n   - **Note:** Include `{question}`, `{context}`, and `{cypher}` in your prompt.\r\n\r\nHere’s an example configuration:\r\n\r\n```python\r\nkg = KnowledgeGraph(\r\n    name=\"kg_name\",\r\n    model_config=KnowledgeGraphModelConfig.with_model(model),\r\n    ontology=ontology,\r\n    cypher_system_instruction=cypher_system_instruction,\r\n    qa_system_instruction=qa_system_instruction,\r\n    cypher_gen_prompt=cypher_gen_prompt,\r\n    cypher_gen_prompt_history=cypher_gen_prompt_history,\r\n    qa_prompt=qa_prompt,\r\n    host=\"127.0.0.1\",\r\n    port=6379,\r\n    # username=falkor_username, # optional\r\n    # password=falkor_password  # optional\r\n)\r\n```\r\n\r\n\r\n## FAQ\r\n**Which databases are supported?**\r\n\r\nGraphRAG-SDK is optimized for FalkorDB. Other backends may require adapters.\r\n\r\n**How scalable is the SDK?**\r\n\r\nGraphRAG-SDK is designed for multi-tenancy and large-scale applications. Performance depends on FalkorDB deployment configuration.\r\n\r\n**How does this SDK improve retrieval-augmented generation?**\r\n\r\nBy leveraging knowledge graphs, GraphRAG-SDK enables semantic relationships and ontology-driven queries that go beyond standard vector similarity.\r\n\r\n**Which file formats does the SDK support?**\r\n\r\nSupported formats include PDF, JSONL, CSV, HTML, TEXT, and URLs.\r\n\r\n**How does the SDK handle latency?**\r\n\r\nThe SDK is optimized for low-latency operations through FalkorDB, using techniques like query optimization and in-memory processing.\r\n\r\n**Does the SDK support multi-graph querying?**\r\n\r\nYes. Multi-graph querying is supported with APIs designed for cross-domain and hierarchical graph exploration.\r\n\r\n\u003Cbr \u002F>\r\n\r\n# License\r\n\r\nThis project is licensed under the MIT License. 
See the [LICENSE](LICENSE) file for details.\r\n\r\nKeywords: RAG, graphrag, Retrieval-Augmented Generation, NLP, AI, Information Retrieval, Natural Language Processing, LLM, Embeddings, Semantic Search\r\n","# GraphRAG \n[![Dockerhub](https:\u002F\u002Fimg.shields.io\u002Fdocker\u002Fpulls\u002Ffalkordb\u002Ffalkordb?label=Docker)](https:\u002F\u002Fhub.docker.com\u002Fr\u002Ffalkordb\u002Ffalkordb\u002F)\n[![pypi](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fgraphrag_sdk.svg)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fgraphrag_sdk\u002F)\n[![Discord](https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F1146782921294884966?style=flat-square)](https:\u002F\u002Fdiscord.gg\u002F6M4QwDXn2w)\n[![Contributor Covenant](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FContributor%20Covenant-2.1-4baaaa.svg)](CODE_OF_CONDUCT.md)\n\n\u003Cp align=\"center\">\n  \u003Cimg alt=\"FalkorDB GraphRAG-SDK README Banner\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFalkorDB_GraphRAG-SDK_readme_6f218dcf6bdc.png\" width=\"1500\">\n\u003C\u002Fp>\n\n### 使用 GraphRAG SDK (图检索增强生成软件开发工具包) 大规模构建快速且准确的 GenAI (生成式人工智能) 应用\n\n使用 GraphRAG-SDK 简化您的下一代 GenAI 应用开发，这是一个用于构建图检索增强生成 (Graph Retrieval-Augmented Generation, GraphRAG) 系统的专用工具包。它集成了知识图谱 (Knowledge Graph)、本体 (Ontology) 管理和最先进的 LLM (大语言模型)，以提供准确、高效且可定制的 RAG (检索增强生成) 工作流。\n\n# GraphRAG 设置\n### 数据库设置\n\n[![Try Free](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FTry%20Free-FalkorDB%20Cloud-FF8101?labelColor=FDE900&style=for-the-badge&link=https:\u002F\u002Fapp.falkordb.cloud)](https:\u002F\u002Fapp.falkordb.cloud)\n\n或者使用 Docker 进行本地部署 (On-premise)：\n\n```sh\ndocker run -p 6379:6379 -p 3000:3000 -it --rm  -v .\u002Fdata:\u002Fdata falkordb\u002Ffalkordb:latest\n```\n\n### 依赖项：\n```sh\npip install graphrag_sdk\n```\n\n### 配置凭证。参见 [.env](.env.template) 获取示例。\n\n* [LiteLLM](https:\u002F\u002Fdocs.litellm.ai): 一个支持大语言模型推理的框架，允许在部署和用例方面具有灵活性。  \n  要选择供应商，请使用前缀 \"specific_vendor\u002Fyour_model\"，例如 \"openai\u002Fgpt-4.1\"。\n* 
[OpenAI](https:\u002F\u002Fopenai.com\u002Findex\u002Fopenai-api) 推荐模型：`gpt-4.1`\n* [Google](https:\u002F\u002Fmakersuite.google.com\u002Fapp\u002Fapikey) 推荐模型：`gemini-2.0-flash`\n* [Azure-OpenAI](https:\u002F\u002Fai.azure.com) 推荐模型：`gpt-4.1`\n* [Ollama](https:\u002F\u002Follama.com\u002F) 仅适用于问答 (Q&A) 步骤。推荐模型：`llama3`。Ollama 模型仅适用于问答步骤（在知识图谱 (KG) 创建之后）。\n\n\n# 如何使用\n[![Get started](https:\u002F\u002Fpl-bolts-doc-images.s3.us-east-2.amazonaws.com\u002Fapp-2\u002Fget-started-badge.svg)](https:\u002F\u002Flightning.ai\u002Fmuhammadqadora\u002Fstudios\u002Fbuild-fast-accurate-genai-apps-advanced-rag-with-falkordb)\n[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FFalkorDB\u002FGraphRAG-SDK\u002Fblob\u002Fmain\u002Fexamples\u002Fmovies\u002Fdemo-movies.ipynb)\n\n### 环境配置\n\n在使用 SDK 之前，配置您的环境变量：\n\n```bash\n# FalkorDB Connection (defaults are for on-premises)\nexport FALKORDB_HOST=\"localhost\" \nexport FALKORDB_PORT=6379 \nexport FALKORDB_USERNAME=\"your-username\"  # optional for on-premises\nexport FALKORDB_PASSWORD=\"your-password\"  # optional for on-premises\n\n# LLM Provider (choose one)\nexport OPENAI_API_KEY=\"your-key\"  # or GOOGLE_API_KEY, GROQ_API_KEY, etc.\n```\n\n## 使用现有知识图谱快速入门\n\n如果您已经在 FalkorDB 中拥有知识图谱，可以通过从现有图谱中提取本体 (Ontology) 来快速设置 GraphRAG：\n\n```python\nimport os\nfrom falkordb import FalkorDB\nfrom graphrag_sdk import KnowledgeGraph\nfrom graphrag_sdk.ontology import Ontology\nfrom graphrag_sdk.models.litellm import LiteModel\nfrom graphrag_sdk.model_config import KnowledgeGraphModelConfig\n\ngraph_name = \"my_existing_graph\"\n\n# Connect to FalkorDB using environment variables\ndb = FalkorDB(\n    host=os.getenv(\"FALKORDB_HOST\", \"localhost\"),\n    port=int(os.getenv(\"FALKORDB_PORT\", 6379)),\n    username=os.getenv(\"FALKORDB_USERNAME\"),  # optional for on-premises\n    password=os.getenv(\"FALKORDB_PASSWORD\")   # optional for 
on-premises\n)\n\n# Select graph\ngraph = db.select_graph(graph_name)\n\n# Extract ontology from existing knowledge graph\nontology = Ontology.from_kg_graph(graph)\n\n# Configure model and create GraphRAG instance\nmodel = LiteModel()  # Default is OpenAI GPT-4.1, can specify different model\nmodel_config = KnowledgeGraphModelConfig.with_model(model)\n\n# Create KnowledgeGraph instance\nkg = KnowledgeGraph(\n    name=graph_name,\n    model_config=model_config,\n    ontology=ontology,\n    host=os.getenv(\"FALKORDB_HOST\", \"localhost\"),\n    port=int(os.getenv(\"FALKORDB_PORT\", 6379)),\n    username=os.getenv(\"FALKORDB_USERNAME\"),\n    password=os.getenv(\"FALKORDB_PASSWORD\")\n)\n\n# Start chat session\nchat = kg.chat_session()\n\n# Ask questions\nresponse = chat.send_message(\"What products are available?\")\nprint(response[\"response\"])\n\n# Ask follow-up questions\nresponse = chat.send_message(\"Tell me which one of them is the most expensive\")\nprint(response[\"response\"])\n```\n\n## 从头创建知识图谱\n\n### 步骤 1：创建本体\n从非结构化数据自动创建本体 (Ontology)，或手动定义 - 参见 [示例](https:\u002F\u002Fgithub.com\u002Ffalkordb\u002FGraphRAG-SDK\u002Fblob\u002Fmain\u002Fexamples\u002Ftrip\u002Fdemo_orchestrator_trip.ipynb)\n\n```python\nfrom dotenv import load_dotenv\nimport json\nfrom graphrag_sdk.source import URL\nfrom graphrag_sdk import KnowledgeGraph, Ontology\nfrom graphrag_sdk.models.litellm import LiteModel\nfrom graphrag_sdk.model_config import KnowledgeGraphModelConfig\nload_dotenv()\n\n# Import Data\nurls = [\"https:\u002F\u002Fwww.rottentomatoes.com\u002Fm\u002Fside_by_side_2012\",\n\"https:\u002F\u002Fwww.rottentomatoes.com\u002Fm\u002Fmatrix\",\n\"https:\u002F\u002Fwww.rottentomatoes.com\u002Fm\u002Fmatrix_revolutions\",\n\"https:\u002F\u002Fwww.rottentomatoes.com\u002Fm\u002Fmatrix_reloaded\",\n\"https:\u002F\u002Fwww.rottentomatoes.com\u002Fm\u002Fspeed_1994\",\n\"https:\u002F\u002Fwww.rottentomatoes.com\u002Fm\u002Fjohn_wick_chapter_4\"]\n\nsources = [URL(url) for url 
in urls]\n\n# Model - vendor: openai, model: gpt-4.1 -> openai\u002Fgpt-4.1\nmodel = LiteModel(model_name=\"openai\u002Fgpt-4.1\")\n\n# Ontology Auto-Detection\nontology = Ontology.from_sources(\n    sources=sources,\n    model=model,\n)\n# Save the ontology to the disk as a json file.\nwith open(\"ontology.json\", \"w\", encoding=\"utf-8\") as file:\n    file.write(json.dumps(ontology.to_json(), indent=2))\n```\n\n### 步骤 2：创建知识图谱代理 (Agent)\n构建、查询和管理针对检索和增强任务优化的知识图谱 (Knowledge Graph)。\n利用 FalkorDB 实现高性能图谱查询和多租户支持。\n\n```python\n# After approving the ontology, load it from disk.\nontology_file = \"ontology.json\"\nwith open(ontology_file, \"r\", encoding=\"utf-8\") as file:\n    ontology = Ontology.from_json(json.loads(file.read()))\n\nkg = KnowledgeGraph(\n    name=\"kg_name\",\n    model_config=KnowledgeGraphModelConfig.with_model(model),\n    ontology=ontology,\n    host=\"127.0.0.1\",\n    port=6379,\n    # username=falkor_username, # optional\n    # password=falkor_password  # optional\n)\n\nkg.process_sources(sources)\n```\n\n### 步骤 3：查询您的 Graph RAG\n\n此时，您已经拥有一个可以使用此软件开发工具包（SDK）进行查询的知识图谱（Knowledge Graph）。使用 `chat_session` 方法来开始对话。\n\n```python\n# Conversation\nchat = kg.chat_session()\nresponse = chat.send_message(\"Who is the director of the movie The Matrix?\")\nprint(response)\nresponse = chat.send_message(\"How is this director connected to Keanu Reeves?\")\nprint(response)\n```\n\n## 下一步\n\n完成这 3 个步骤后，您就可以与您的知识图谱进行交互和查询了。以下是一些用例建议：\n\u003Cp align=\"left\">\n  \u003Cimg alt=\"GraphRAG-SDK Use Cases Banner from FalkorDB\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFalkorDB_GraphRAG-SDK_readme_748478b3a1b2.png\" width=\"800\">\n\u003C\u002Fp>\n\n**需要针对您的用例寻求帮助？让我们 [谈谈](https:\u002F\u002Fwww.falkordb.com\u002Fget-demo\u002F)**\n\n\u003Cbr \u002F>\n\n# 使用 Ollama\n\nOllama 模型仅适用于问答（Q&A）步骤（即在知识图谱创建之后）。\n\n## 设置\n\n```python\nfrom graphrag_sdk.models.ollama import OllamaGenerativeModel\n\n# Local Ollama (default: 
http:\u002F\u002Flocalhost:11434)\nqa_model = OllamaGenerativeModel(model_name=\"llama3:8b\")\n\n# Remote Ollama\nqa_model = OllamaGenerativeModel(\n    model_name=\"llama3:8b\",\n    api_base=\"http:\u002F\u002Fremote-host:11434\"\n)\n```\n\n\u003Cbr \u002F>\n\n# 使用 GraphRAG 的 AI 智能体（Agents）\n\n### 编排器（Orchestrator）\nGraphRAG-SDK 支持基于知识图谱的智能体。每个智能体都是其领域的专家，而编排器负责协调这些智能体。\n\n查看示例：\n\n[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FFalkorDB\u002FGraphRAG-SDK\u002Fblob\u002Fmain\u002Fexamples\u002Ftrip\u002Fdemo_orchestrator_trip.ipynb)\n\n\n### 智能体（Agents）\n\n请参阅 [步骤 1](#how-to-use) 部分，了解如何基于预定义的本体（Ontology）为智能体创建知识图谱对象。\n\n```python\n# Define the model\nmodel = LiteModel(model_name=\"openai\u002Fgpt-4.1\")\n\n# Create the Knowledge Graph from the predefined ontology.\n# In this example, we will use the restaurants agent and the attractions agent.\nrestaurants_kg = KnowledgeGraph(\n    name=\"restaurants\",\n    ontology=restaurants_ontology,\n    model_config=KnowledgeGraphModelConfig.with_model(model),\n    host=\"127.0.0.1\",\n    port=6379,\n    # username=falkor_username, # optional\n    # password=falkor_password  # optional\n)\nattractions_kg = KnowledgeGraph(\n    name=\"attractions\",\n    ontology=attractions_ontology,\n    model_config=KnowledgeGraphModelConfig.with_model(model),\n    host=\"127.0.0.1\",\n    port=6379,\n    # username=falkor_username, # optional\n    # password=falkor_password  # optional\n)\n\n\n# The following agent is specialized in finding restaurants.\nrestaurants_agent = KGAgent(\n    agent_id=\"restaurants_agent\",\n    kg=restaurants_kg,\n    introduction=\"I'm a restaurant agent, specialized in finding the best restaurants for you.\",\n)\n\n# The following agent is specialized in finding tourist attractions.\nattractions_agent = KGAgent(\n    agent_id=\"attractions_agent\",\n    kg=attractions_kg,\n    introduction=\"I'm 
an attractions agent, specialized in finding the best tourist attractions for you.\",\n)\n```\n\n### 编排器 - 多智能体系统（Multi-Agent System）\n\n编排器管理智能体的使用并处理提问。\n\n```python\n# Initialize the orchestrator while giving it the backstory.\norchestrator = Orchestrator(\n    model,\n    backstory=\"You are a trip planner, and you want to provide the best possible itinerary for your clients.\",\n)\n\n# Register the agents that we created above.\norchestrator.register_agent(restaurants_agent)\norchestrator.register_agent(attractions_agent)\n\n# Query the orchestrator.\nrunner = orchestrator.ask(\"Create a two-day itinerary for a trip to Rome. Please don't ask me any questions; just provide the best itinerary you can.\")\nprint(runner.output)\n\n```\n## 社区\n\n有问题或反馈？请通过以下方式联系：\n- [GitHub 问题](https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fissues)\n- 加入我们的 [Discord](https:\u002F\u002Fdiscord.com\u002Finvite\u002F6M4QwDXn2w)\n\n⭐️ 如果您觉得这个仓库有帮助，请考虑给它点个星！\n\n## 额外增强：配置您的提示词（Prompts）\n在创建您的知识图谱（KG）智能体时，您可以自定义提示词以调整其行为。\n\n💡 此步骤是可选的，但可以增强功能。\n\n这里有五种类型的提示词：\n\n1. **`cypher_system_instruction`**  \n   - 用于 Cypher 生成步骤的系统指令。  \n   - **注意：** 确保您的提示词包含 `{ontology}`。\n\n2. **`qa_system_instruction`**  \n   - 用于问答（Q&A）步骤的系统指令。\n\n3. **`cypher_gen_prompt`**  \n   - 在 Cypher 生成步骤期间使用的提示词。  \n   - **注意：** 在您的提示词中包含 `{question}`。\n\n4. **`cypher_gen_prompt_history`**  \n   - 当需要考虑历史记录时，用于 Cypher 生成的提示词。  \n   - **注意：** 在您的提示词中包含 `{question}` 和 `{last_answer}`。\n\n5. 
**`qa_prompt`**  \n   - 在问答（Q&A）步骤期间使用的提示词。  \n   - **注意：** 在您的提示词中包含 `{question}`、`{context}` 和 `{cypher}`。\n\n这是一个配置示例：\n\n```python\nkg = KnowledgeGraph(\n    name=\"kg_name\",\n    model_config=KnowledgeGraphModelConfig.with_model(model),\n    ontology=ontology,\n    cypher_system_instruction=cypher_system_instruction,\n    qa_system_instruction=qa_system_instruction,\n    cypher_gen_prompt=cypher_gen_prompt,\n    cypher_gen_prompt_history=cypher_gen_prompt_history,\n    qa_prompt=qa_prompt,\n    host=\"127.0.0.1\",\n    port=6379,\n    # username=falkor_username, # optional\n    # password=falkor_password  # optional\n)\n```\n\n\n## 常见问题解答（FAQ）\n**支持哪些数据库？**\n\nGraphRAG-SDK 针对 FalkorDB 进行了优化。其他后端可能需要适配器。\n\n**SDK 的可扩展性如何？**\n\nGraphRAG-SDK 专为多租户和大规模应用设计。性能取决于 FalkorDB 的部署配置。\n\n**此 SDK 如何改进检索增强生成（Retrieval-Augmented Generation）？**\n\n通过利用知识图谱，GraphRAG-SDK 能够实现语义关系和本体驱动的查询，超越了标准的向量相似度。\n\n**SDK 支持哪些文件格式？**\n\n支持的格式包括 PDF、JSONL、CSV、HTML、TEXT 和 URL。\n\n**SDK 如何处理延迟？**\n\nSDK 通过 FalkorDB 针对低延迟操作进行了优化，使用了查询优化和内存处理等技术。\n\n**SDK 支持多图查询吗？**\n\n支持。多图查询受到支持，API 专为跨域和分层图谱探索设计。\n\n\u003Cbr \u002F>\n\n# 许可证\n\n本项目采用 MIT 许可证授权。详情请参见 [LICENSE](LICENSE) 文件。\n\n关键词：RAG（检索增强生成）, graphrag, 检索增强生成（Retrieval-Augmented Generation）, NLP（自然语言处理）, AI（人工智能）, 信息检索（Information Retrieval）, 自然语言处理（Natural Language Processing）, LLM（大型语言模型）, 嵌入（Embeddings）, 语义搜索（Semantic Search）","# GraphRAG-SDK 快速上手指南\n\nGraphRAG-SDK 是一个用于构建图检索增强生成（GraphRAG）系统的专用工具包。它集成了知识图谱、本体管理和先进的大语言模型（LLM），旨在简化 GenAI 应用的开发，提供准确、高效且可定制的 RAG 工作流。\n\n## 环境准备\n\n在开始之前，请确保满足以下系统要求：\n\n*   **操作系统**：支持 Docker 的系统（Linux \u002F macOS \u002F Windows）\n*   **Python 环境**：Python 3.8+\n*   **数据库**：FalkorDB（可通过 Docker 本地部署或使用 FalkorDB Cloud）\n*   **大模型凭证**：需准备至少一个 LLM 提供商的 API Key（推荐 OpenAI、Google 或 Azure）\n\n## 安装步骤\n\n### 1. 部署 FalkorDB 数据库\n\n使用 Docker 快速启动本地 FalkorDB 实例：\n\n```sh\ndocker run -p 6379:6379 -p 3000:3000 -it --rm  -v .\u002Fdata:\u002Fdata falkordb\u002Ffalkordb:latest\n```\n\n### 2. 
安装 Python SDK\n\n通过 pip 安装 GraphRAG-SDK：\n\n```sh\npip install graphrag_sdk\n```\n\n### 3. 配置环境变量\n\n在使用 SDK 前，需配置数据库连接和 LLM 凭证。建议在终端导出变量或创建 `.env` 文件：\n\n```bash\n# FalkorDB 连接配置 (本地默认)\nexport FALKORDB_HOST=\"localhost\" \nexport FALKORDB_PORT=6379 \nexport FALKORDB_USERNAME=\"your-username\"  # 本地部署可选\nexport FALKORDB_PASSWORD=\"your-password\"  # 本地部署可选\n\n# LLM 提供商配置 (任选其一)\nexport OPENAI_API_KEY=\"your-key\"  # 或 GOOGLE_API_KEY, GROQ_API_KEY 等\n```\n\n## 基本使用\n\n以下示例展示如何从零开始创建知识图谱并进行问答。流程分为：定义数据源 -> 构建本体 -> 创建图谱 -> 对话查询。\n\n### 1. 构建知识图谱\n\n首先导入数据源并自动检测本体结构，然后初始化知识图谱代理。\n\n```python\nimport os\nimport json\nfrom dotenv import load_dotenv\nfrom graphrag_sdk.source import URL\nfrom graphrag_sdk import KnowledgeGraph, Ontology\nfrom graphrag_sdk.models.litellm import LiteModel\nfrom graphrag_sdk.model_config import KnowledgeGraphModelConfig\n\nload_dotenv()\n\n# 1. 准备数据源 (示例为电影网页)\nurls = [\n    \"https:\u002F\u002Fwww.rottentomatoes.com\u002Fm\u002Fmatrix\",\n    \"https:\u002F\u002Fwww.rottentomatoes.com\u002Fm\u002Fmatrix_revolutions\"\n]\nsources = [URL(url) for url in urls]\n\n# 2. 配置模型 (默认使用 OpenAI GPT-4.1，可通过 openai\u002Fgpt-4.1 指定)\nmodel = LiteModel(model_name=\"openai\u002Fgpt-4.1\")\n\n# 3. 自动检测本体结构\nontology = Ontology.from_sources(sources=sources, model=model)\n\n# 4. 创建知识图谱实例\nkg = KnowledgeGraph(\n    name=\"movies_kg\",\n    model_config=KnowledgeGraphModelConfig.with_model(model),\n    ontology=ontology,\n    host=os.getenv(\"FALKORDB_HOST\", \"localhost\"),\n    port=int(os.getenv(\"FALKORDB_PORT\", 6379)),\n    username=os.getenv(\"FALKORDB_USERNAME\"),\n    password=os.getenv(\"FALKORDB_PASSWORD\")\n)\n\n# 5. 处理数据源并构建图谱\nkg.process_sources(sources)\n```\n\n### 2. 
查询与对话\n\n图谱构建完成后，即可开启会话进行自然语言查询。\n\n```python\n# 开启聊天会话\nchat = kg.chat_session()\n\n# 发送问题\nresponse = chat.send_message(\"Who is the director of the movie The Matrix?\")\nprint(response)\n\n# 多轮对话\nresponse = chat.send_message(\"How is this director connected to Keanu Reeves?\")\nprint(response)\n```\n\n### 进阶提示\n\n*   **本地模型支持**：问答步骤支持使用 Ollama 本地模型（如 `llama3`），需在初始化 `KnowledgeGraph` 后配置 `OllamaGenerativeModel`。\n*   **多智能体协作**：SDK 支持基于知识图谱的多智能体系统（Orchestrator），可为不同领域（如餐厅、景点）创建独立 Agent 并由协调器统一管理。\n*   **提示词定制**：创建 `KnowledgeGraph` 时可自定义 `cypher_system_instruction`、`qa_prompt` 等提示词以优化行为。","某金融风控团队正在开发智能调查助手，需要从海量交易流水中快速挖掘隐藏的欺诈关联与资金链路。\n\n### 没有 GraphRAG-SDK 时\n- 传统向量检索只能匹配文本相似度，无法理解“A 公司控股的 B 公司与 C 是否有交易”这类多跳逻辑关系。\n- 开发人员必须手动编写复杂的 Cypher 查询语句，对接图谱数据库门槛高且容易出错。\n- 大模型容易产生幻觉，经常编造不存在的资金流向，导致调查结论不可信。\n- 数据更新后需要重建全文索引，耗时数小时，无法支持实时的风险预警。\n\n### 使用 GraphRAG-SDK 后\n- GraphRAG-SDK 自动从现有图谱提取本体 ontology，精准识别实体间的控股、交易等复杂关系。\n- 内置 FalkorDB 集成，无需手写图查询代码，直接用自然语言即可穿透多层关联网络。\n- 结合知识图谱结构约束生成答案，大幅减少模型幻觉，资金链路回答准确率提升至 95% 以上。\n- 支持图谱增量更新，新交易数据分钟级可见，调查人员能实时追踪最新风险动态。\n\nGraphRAG-SDK 让复杂关系查询像聊天一样简单，彻底解决了传统 RAG 不懂逻辑关联的难题，为风控决策提供可靠依据。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFalkorDB_GraphRAG-SDK_6f218dcf.png","FalkorDB","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FFalkorDB_862d7706.png","The fastest way to your knowledge",null,"info@falkordb.com","falkordb","https:\u002F\u002Fwww.falkordb.com","https:\u002F\u002Fgithub.com\u002FFalkorDB",[85],{"name":86,"color":87,"percentage":88},"Python","#3572A5",100,597,75,"2026-04-05T07:00:40","MIT","未说明",{"notes":95,"python":93,"dependencies":96},"需部署 FalkorDB 数据库（支持 Docker 本地部署或云端）；需配置 LLM 提供商 API Key（如 OpenAI、Google、Azure 等）；Ollama 模型仅适用于知识图谱创建后的问答步骤；支持通过 LiteLLM 
框架灵活调用多种大模型",[97,81,98],"graphrag_sdk","python-dotenv",[13,51,26],[81,101,102,103,104,105,106,107,108],"graphrag","knowledge-graph","rag","graph-database","open-source","sdk","genai","llm","2026-03-27T02:49:30.150509","2026-04-06T05:17:35.858250",[112,117,122,127,132,137],{"id":113,"question_zh":114,"answer_zh":115,"source_url":116},4714,"遇到 llama-index 和 graphrag-sdk 之间的 pypdf 版本冲突怎么办？","这是一个已知的依赖冲突问题。`graphrag-sdk 0.8.0` 需要 `pypdf\u003C5.0.0`，而 `llama-index-readers-file 0.4.11` 需要 `pypdf\u003C6,>=5.1.0`。目前建议暂时避免同时使用这两个特定版本，或手动调整依赖版本，请关注后续版本修复。","https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fissues\u002F119",{"id":118,"question_zh":119,"answer_zh":120,"source_url":121},4710,"如何连接到自定义的 OpenAI 兼容提供商（如 LiteLLM）？","可以通过设置环境变量并使用 `LiteModel` 来连接。具体配置如下：\n1. 设置环境变量：\n```python\nos.environ['OPENAI_API_KEY'] = 'yourkey'\nos.environ['OPENAI_BASE_URL'] = \"http:\u002F\u002Fyourendpoint.com\u002Fv1\"\n```\n2. 初始化模型：\n```python\nmodel = LiteModel(\"openai\u002Fdeepseek-v3.1\")\n```","https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fissues\u002F133",{"id":123,"question_zh":124,"answer_zh":125,"source_url":126},4711,"使用 Azure OpenAI 时报错 'Missing model-version in header' 如何解决？","Azure 需要特定的 header 配置。在初始化 `AzureOpenAI` 客户端时，需手动添加 `default_headers`。临时解决方案如下：\n```python\nself.client = AzureOpenAI(\n    azure_endpoint=self.azure_endpoint,\n    api_version=self.api_version,\n    api_key=self.api_key,\n    default_headers={\"model-version\": \"2024-07-18\"}\n)\n```\n维护者已确认该问题并计划修复。","https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fissues\u002F85",{"id":128,"question_zh":129,"answer_zh":130,"source_url":131},4712,"使用远程 Ollama 服务器时连接失败或报错怎么办？","确保同时设置以下两个环境变量，文档中可能未明确提及 `OLLAMA_HOST`：\n- `OLLAMA_API_BASE`\n- `OLLAMA_HOST`\n\n此外，SDK 0.8.0 版本曾存在 deepcopy  
bug，维护者已更新变量用法并修复，请确保使用最新代码。如果问题仍存在，检查环境变量配置是否完整。","https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fissues\u002F123",{"id":133,"question_zh":134,"answer_zh":135,"source_url":136},4713,"GraphRAG-SDK 是否支持导入 TTL (Turtle) 文件？","目前尚未原生支持 .ttl 文件作为输入。该功能已作为特性请求提出（Feature Request），社区成员正在跟进开发。建议关注相关 Issue 以获取最新支持进度。","https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fissues\u002F139",{"id":138,"question_zh":139,"answer_zh":140,"source_url":141},4715,"哪里可以找到 PDF 处理示例或本体 (Ontology) 审查指南？","官方文档正在更新中，计划包含 PDF 示例以及本体（Ontology）的审查和扩展使用方法。目前可以参考项目文档中关于 'Ontology exiting graph usage' 和 'PDF example' 的更新任务进度。","https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fissues\u002F121",[143,148,153,158,163,168,173,178,183,188,193,198,203,208,213,218,223,228,233,237],{"id":144,"version":145,"summary_zh":146,"released_at":147},104229,"v0.8.1","## What's Changed\r\n* Fix poetry and pip file by @Naseem77 in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F122\r\n* Add refresh_ontology() method to kg by @Naseem77 in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F126\r\n* fix: upgrage pypdf version by @priyansh4320 in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F120\r\n* update README by @Naseem77 in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F125\r\n* avoid deepcopy exception with OllamaGenerativeModel by @gsw945 in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F128\r\n* Update Ollama Usage - Arguments & Readme by @galshubeli in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F129\r\n* fix `ValueError` in `AttributeType.from_string()` by @gsw945 in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F130\r\n* Fixed typo in prompts by @Tesla2000 in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F131\r\n\r\n## New 
Contributors\r\n* @Naseem77 made their first contribution in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F122\r\n* @priyansh4320 made their first contribution in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F120\r\n* @gsw945 made their first contribution in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F128\r\n* @Tesla2000 made their first contribution in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F131\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fcompare\u002Fv0.8.0...v0.8.1","2025-09-29T09:51:21",{"id":149,"version":150,"summary_zh":151,"released_at":152},104230,"v0.8.0","## What's Changed\r\n* Fix LLM configuration usage and Providers API by @galshubeli in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F116\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fcompare\u002Fv0.7.1...v0.8.0","2025-07-02T14:23:13",{"id":154,"version":155,"summary_zh":156,"released_at":157},104231,"v0.7.1","## What's Changed\r\n* Fix build by @galshubeli in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F106\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fcompare\u002Fv0.7.0...v0.7.1","2025-04-01T09:29:09",{"id":159,"version":160,"summary_zh":161,"released_at":162},104232,"v0.7.0","## What's Changed\r\n* Update pytests by @galshubeli in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F52\r\n* [Feature] Streaming option to the Q&A step by @galshubeli in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F103\r\n* added chat metadata field and return query execition time on send_mes… by @matanbroit in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F104\r\n* v0.7.0 by @galshubeli in 
https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F105\r\n\r\n## New Contributors\r\n* @matanbroit made their first contribution in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F104\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fcompare\u002Fv0.6.2...v0.7.0","2025-03-31T13:21:27",{"id":164,"version":165,"summary_zh":166,"released_at":167},104233,"v0.6.2","## What's Changed\r\n* Standardize Function Docs & Optional Arguments by @galshubeli in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F94\r\n* Update README.md to gemini-2.0-flash by @gkorland in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F92\r\n* Define AttributeType as Enum by @gkorland in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F95\r\n* Fixes for Gemini API Configuration Usage by @galshubeli in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F100\r\n* Fix the option to use additional arguments for Azure OpenAI by @galshubeli in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F98\r\n* v0.6.2 by @galshubeli in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F101\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fcompare\u002Fv0.6.1...v0.6.2","2025-03-05T12:47:02",{"id":169,"version":170,"summary_zh":171,"released_at":172},104234,"v0.6.1","## What's Changed\r\n* switch from | to Union by @swilly22 in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F88\r\n* version bump by @swilly22 in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F90\r\n\r\n\r\n**Full Changelog**: 
https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fcompare\u002Fv0.6.0...v0.6.1","2025-02-20T08:41:10",{"id":174,"version":175,"summary_zh":176,"released_at":177},104235,"v0.6.0","## What's Changed\r\n* Add support for string-based source by @galshubeli in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F76\r\n* Fix Multiple Document Processing (e.g., PDFs) Handling by @galshubeli in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F83\r\n* update version to v0.6.0 by @galshubeli in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F84\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fcompare\u002Fv0.5.0...v0.6.0","2025-02-05T12:11:55",{"id":179,"version":180,"summary_zh":181,"released_at":182},104236,"v0.5.0","## What's Changed\r\n* Progress bar to the KG creation step by @galshubeli in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F54\r\n* Ontology creation from Knowledge Graph by @galshubeli in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F65\r\n* update version to v0.5.0 by @galshubeli in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F75\r\n\r\n## New Contributors\r\n* @liorkesos made their first contribution in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F55\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fcompare\u002Fv0.4.1...v0.5.0","2025-01-22T09:48:32",{"id":184,"version":185,"summary_zh":186,"released_at":187},104237,"v0.4.1","## What's Changed\r\n* Connection between Ontology graph & Knowledge graph  by @galshubeli in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F48\r\n* Update README.md - Fix link to CODE_OF_CONDUCT.md by @gkorland in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F50\r\n* v0.4.1 
by @galshubeli in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F49\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fcompare\u002Fv0.4.0...v0.4.1","2024-12-19T08:10:05",{"id":189,"version":190,"summary_zh":191,"released_at":192},104238,"v0.4.0","## What's Changed\r\n* Expanded SDK capabilities to include support for LiteLLM models. by @galshubeli in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F40\r\n* Readme updates by @galshubeli in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F47\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fcompare\u002Fv0.3.4...v0.4.0","2024-12-18T12:49:21",{"id":194,"version":195,"summary_zh":196,"released_at":197},104239,"v0.3.4","## What's Changed\r\n* Update pyproject.toml add description by @gkorland in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F44\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fcompare\u002Fv0.3.3...v0.3.4","2024-12-11T17:39:33",{"id":199,"version":200,"summary_zh":201,"released_at":202},104240,"v0.3.3","## What's Changed\r\n* Cypher validation fix and add lists to the attribute types by @galshubeli in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F42\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fcompare\u002Fv0.3.2...v0.3.3","2024-11-28T14:01:26",{"id":204,"version":205,"summary_zh":206,"released_at":207},104241,"v0.3.2","## What's Changed\r\n* Configurable prompts allowing user the control the prompt by @galshubeli in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F35\r\n* Reveal chat context, response include the a map with the Cypher query and the end result  by @galshubeli in 
https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F35\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fcompare\u002Fv0.3.1...v0.3.2","2024-11-25T08:48:37",{"id":209,"version":210,"summary_zh":211,"released_at":212},104242,"v0.3.1","## What's Changed\r\n* update dependecies, adding requirements.txt by @swilly22 in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F38\r\n* Update documentation with Azure-openai and ollama by @galshubeli in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F36\r\n* Fix Cypher generation iterative loop by @galshubeli in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F37\r\n* v0.3.1 by @galshubeli in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F39\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fcompare\u002Fv0.3.0...v0.3.1","2024-11-19T12:06:35",{"id":214,"version":215,"summary_zh":216,"released_at":217},104243,"v0.3.0","## What's Changed\r\n* Move deps to Extras by @gkorland in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F32\r\n* update version to 0.3 by @galshubeli in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F34\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fcompare\u002Fv0.2.2...v0.3.0","2024-11-12T11:34:51",{"id":219,"version":220,"summary_zh":221,"released_at":222},104244,"v0.2.2","## What's Changed\r\n* Add pip install to Colab Notebooks by @galshubeli in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F19\r\n* Fix tests by @galshubeli in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F21\r\n* Update FalkorDB client version by @galshubeli in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F28\r\n* openai-azure by 
@galshubeli in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F22\r\n* Ollama integration by @galshubeli in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F29\r\n* History to Cypher generation prompt  by @galshubeli in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F23\r\n* update version - 0.2.2 by @galshubeli in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F31\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fcompare\u002Fv0.2.1...v0.2.2","2024-10-22T14:45:06",{"id":224,"version":225,"summary_zh":226,"released_at":227},104245,"v0.2.1","## What's Changed\r\n* Update README.md by @gkorland in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F13\r\n* Create LICENSE by @gkorland in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F14\r\n* Create CONTRIBUTING.md by @gkorland in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F16\r\n* fix-readme by @galshubeli in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F17\r\n* Downgrade the required python version  by @galshubeli in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F18\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fcompare\u002Fv0.2.0...v0.2.1","2024-09-19T08:03:47",{"id":229,"version":230,"summary_zh":231,"released_at":232},104246,"v0.2.0","Release notes:\r\n\r\n- Multi-Model Support: Added integration with the Google Gemini model.\r\n- Multi-Knowledge Graph: Enhanced capability for managing multiple knowledge graphs\r\n- Source Formats: Expanded support to include URL, CSV, and JSON formats.\r\n- Multi-Agent and Orchestrator: Introduced functionality for creating and managing a multi-agent system with agents and an orchestrator.\r\n- Schema Update: Changed terminology from 
\"Scheme\" to \"Ontology.\"\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fcompare\u002Fv0.1.0...v0.2.0\r\n\r\n## What's Changed\r\n* Update README.md with description  by @gkorland in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F5\r\n* Add link to the getting started Colab by @gkorland in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F6\r\n* Add description by @gkorland in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F7\r\n* version-2 by @galshubeli in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F10\r\n* Update README.md - change title by @gkorland in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F11\r\n* update-version-toml by @galshubeli in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F12\r\n\r\n## New Contributors\r\n* @gkorland made their first contribution in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F5\r\n* @galshubeli made their first contribution in https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fpull\u002F10\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FFalkorDB\u002FGraphRAG-SDK\u002Fcompare\u002Fv0.1.0...v0.2.0","2024-09-17T11:56:03",{"id":234,"version":235,"summary_zh":79,"released_at":236},104247,"v0.1.3-beta","2024-07-02T07:17:29",{"id":238,"version":239,"summary_zh":79,"released_at":240},104248,"v0.1.2-beta","2024-06-12T18:47:11"]