[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-pchunduri6--rag-demystified":3,"tool-pchunduri6--rag-demystified":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 
道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":79,"owner_email":79,"owner_twitter":80,"owner_website":81,"owner_url":82,"languages":83,"stars":88,"forks":89,"last_commit_at":90,"license":91,"difficulty_score":23,"env_os":92,"env_gpu":92,"env_ram":92,"env_deps":93,"category_tags":100,"github_topics":101,"view_count":10,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":110,"updated_at":111,"faqs":112,"releases":141},834,"pchunduri6\u002Frag-demystified","rag-demystified","An LLM-powered advanced RAG pipeline built from scratch","rag-demystified 是一个从零构建的、由大语言模型驱动的先进 RAG（检索增强生成）管道示例。它的核心目标是“去神秘化”，即通过透明化的方式展示高级 RAG 系统的内部运作机制。当前流行的 RAG 框架虽然降低了使用门槛，但也带来了黑盒效应，使得用户在遇到错误或不一致时难以定位问题根源，甚至无法评估系统的实际成本。\n\nrag-demystified 适合希望深入理解 RAG 原理的开发者、研究人员以及技术爱好者。它摒弃了过度封装的抽象层，以子问题查询引擎为例，将复杂流程拆解为数据仓库管理、向量检索及响应生成等核心组件。通过处理跨多个数据源的复杂问答任务，rag-demystified 不仅揭示了高级 RAG 的机械原理，还帮助使用者识别其中的局限性与潜在风险。对于想要掌握 RAG 底层逻辑而非仅仅调用 API 的技术人员来说，这是一个极佳的实践参考，能够让人真正看懂数据在系统中是如何流动的。","# Demystifying Advanced RAG Pipelines\n\nRetrieval-Augmented Generation (RAG) pipelines powered by large language models (LLMs) are gaining popularity for building end-to-end question answering systems. Frameworks such as [LlamaIndex](https:\u002F\u002Fgithub.com\u002Frun-llama\u002Fllama_index) and [Haystack](https:\u002F\u002Fgithub.com\u002Fdeepset-ai\u002Fhaystack) have made significant progress in making RAG pipelines easy to use. While these frameworks provide excellent abstractions for building advanced RAG pipelines, they do so at the cost of transparency. From a user perspective, it's not readily apparent what's going on under the hood, particularly when errors or inconsistencies arise. 
\n\nIn this [EvaDB](https:\u002F\u002Fgithub.com\u002Fgeorgia-tech-db\u002Fevadb) application, we'll shed light on the inner workings of advanced RAG pipelines by examining the mechanics, limitations, and costs that often remain opaque.\n\n\u003Cp align=\"center\">\n  \u003Cimg width=\"70%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fpchunduri6_rag-demystified_readme_8003bae475a0.png\" title=\"llama working on a laptop to retrieve data\" >\n  \u003Cbr>\n  \u003Cb>\u003Ci>Llama working on a laptop\u003C\u002Fi> 🙂\u003C\u002Fb>\n\u003C\u002Fp>\n\n## Quick start\n\nIf you want to jump right in, use the following commands to run the application:\n\n```\npip install -r requirements.txt\n\necho OPENAI_API_KEY='yourkey' > .env\npython complex_qa.py\n```\n\n## RAG Overview\n\nRetrieval-augmented generation (RAG) is a cutting-edge AI paradigm for LLM-based question answering.\nA RAG pipeline typically contains:\n\n1. **Data Warehouse** - A collection of data sources (e.g., documents, tables etc.) that contain information relevant to the question answering task.\n\n2. **Vector Retrieval** - Given a question, find the top K most similar data chunks to the question. This is done using a vector store (e.g., [Faiss](https:\u002F\u002Ffaiss.ai\u002Findex.html)).\n\n3. **Response Generation** - Given the top K most similar data chunks, generate a response using a large language model (e.g. GPT-4).\n\nRAG provides two key advantages over traditional LLM-based question answering:\n1. **Up-to-date information** - The data warehouse can be updated in real-time, so the information is always up-to-date.\n\n2. **Source tracking** - RAG provides clear traceability, enabling users to identify the sources of information, which is crucial for accuracy verification and mitigating LLM hallucinations.\n\n## Building advanced RAG Pipelines\n\nTo enable answering more complex questions, recent AI frameworks like LlamaIndex have introduced more advanced abstractions such as the [Sub-question Query Engine](https:\u002F\u002Fgpt-index.readthedocs.io\u002Fen\u002Flatest\u002Fexamples\u002Fquery_engine\u002Fsub_question_query_engine.html).\n\nIn this application, we'll demystify sophisticated RAG pipelines by using the Sub-question Query Engine as an example. We'll examine the inner workings of the Sub-question Query Engine and simplify the abstractions to their core components. We'll also identify some challenges associated with advanced RAG pipelines.\n\n### The setup\n\nA data warehouse is a collection of data sources (e.g., documents, tables etc.) that contain information relevant to the question answering task.\n\nIn this example, we'll use a simple data warehouse containing multiple Wikipedia articles for different popular cities, inspired by LlamaIndex's [illustrative use-case](https:\u002F\u002Fdocs.llamaindex.ai\u002Fen\u002Fstable\u002Fexamples\u002Findex_structs\u002Fdoc_summary\u002FDocSummary.html). Each city's wiki is a separate data source. Note that for simplicity, we limit each document's size to fit within the LLM context limit.\n\nOur goal is to build a system that can answer questions like:\n1. *\"What is the population of Chicago?\"*\n2. *\"Give me a summary of the positive aspects of Atlanta.\"*\n3. 
*\"Which city has the highest population?\"*\n\nAs you can see, the questions can be simple factoid\u002Fsummarization questions over a single data source (Q1\u002FQ2) or complex factoid\u002Fsummarization questions over multiple data sources (Q3).\n\nWe have the following *retrieval methods* at our disposal:\n\n1. **vector retrieval** - Given a question and a data source, generate an LLM response using the top-K most similar data chunks to the question from the data source as the context. We use the off-the-shelf FAISS vector index from [EvaDB](https:\u002F\u002Fgithub.com\u002Fgeorgia-tech-db\u002Fevadb) for vector retrieval. However, the concepts are applicable to any vector index.\n\n2. **summary retrieval** - Given a summary question and a data source, generate an LLM response using the entire data source as context.\n\n### The secret sauce\n\nOur key insight is that each component in an advanced RAG pipeline is powered by a single LLM call. The entire pipeline is a series of LLM calls with carefully crafted prompt templates. These prompt templates are the secret sauce that enable advanced RAG pipelines to perform complex tasks.\n\nIn fact, any advanced RAG pipeline can be broken down into a series of individual LLM calls that follow a universal input pattern:\n\n![equation](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fpchunduri6_rag-demystified_readme_23100a8df052.png)\n\n\u003C!-- LLM input = **Prompt Template** + **Context** + **Question** -->\nwhere:\n- **Prompt Template** - A curated prompt template for the specific task (e.g., sub-question generation, summarization)\n- **Context** - The context to use to perform the task (e.g. top-K most similar data chunks)\n- **Question** - The question to answer\n\nNow, we illustrate this principle by examining the inner workings of the Sub-question Query Engine.\n\nThe Sub-question Query Engine has to perform three tasks:\n1. **Sub-question generation** - Given a complex question, break it down into a set of sub-questions, while identifying the appropriate data source and retrieval function for each sub-question.\n2. **Vector\u002FSummary Retrieval** - For each sub-question, use the chosen retrieval function over the corresponding data source to retrieve the relevant information.\n3. **Response Aggregation** - Aggregate the responses from the sub-questions into a final response.\n\nLet's examine each task in detail.\n\n### Task 1: Sub-question Generation\n\nOur goal is to break down a complex question into a set of sub-questions, while identifying the appropriate data source and retrieval function for each sub-question. For example, the question *\"Which city has the highest population?\"* is broken down into five sub-questions, one for each city, of the form *\"What is the population of {city}?\".* The data source for each sub-question has to be the corresponding city's wiki, and the retrieval function has to be vector retrieval.\n\nAt first glance, this seems like a daunting task. Specifically, we need to answer the following questions:\n1. **How do we know which sub-questions to generate?**\n2. **How do we know which data source to use for each sub-question?**\n3. **How do we know which retrieval function to use for each sub-question?**\n\nRemarkably, the answer to all three questions is the same - a single LLM call! The entire sub-question query engine is powered by a single LLM call with a carefully crafted prompt template. 
Let's call this template the **Sub-question Prompt Template**.\n\n```\n-- Sub-question Prompt Template --\n\n\"\"\"\n    You are an AI assistant that specializes in breaking down complex questions into simpler, manageable sub-questions.\n    When presented with a complex user question, your role is to generate a list of sub-questions that, when answered, will comprehensively address the original question.\n    You have at your disposal a pre-defined set of functions and data sources to utilize in answering each sub-question.\n    If a user question is straightforward, your task is to return the original question, identifying the appropriate function and data source to use for its solution.\n    Please remember that you are limited to the provided functions and data sources, and that each sub-question should be a full question that can be answered using a single function and a single data source.\n\"\"\"\n```\n\nThe context for the LLM call is the names of the data sources and the functions available to the system. The question is the user question. The LLM outputs a list of sub-questions, each with a function and a data source.\n\n![task_1_table](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fpchunduri6_rag-demystified_readme_944a46c6bb0b.png)\n\nFor the three example questions, the LLM returns the following output:\n\n\u003Cdetails>\n  \u003Csummary>\n    LLM output Table\n  \u003C\u002Fsummary>\n\u003Ctable>\n\u003Cthead>\n  \u003Ctr>\n    \u003Cth>Question\u003C\u002Fth>\n    \u003Cth>Subquestions\u003C\u002Fth>\n    \u003Cth>Retrieval method\u003C\u002Fth>\n    \u003Cth>Data Source\u003C\u002Fth>\n  \u003C\u002Ftr>\n\u003C\u002Fthead>\n\u003Ctbody>\n  \u003Ctr>\n    \u003Ctd>\"What is the population of Chicago?\"\u003C\u002Ftd>\n    \u003Ctd>\"What is the population of Chicago?\"\u003C\u002Ftd>\n    \u003Ctd>vector retrieval\u003C\u002Ftd>\n    \u003Ctd>Chicago\u003C\u002Ftd>\n    \u003C\u002Ftr>\n    \u003Ctr>\n    \u003Ctd>\"Give me a summary of the positive aspects of Atlanta.\"\u003C\u002Ftd>\n    \u003Ctd>\"Give me a summary of the positive aspects of Atlanta.\"\u003C\u002Ftd>\n    \u003Ctd>summary retrieval\u003C\u002Ftd>\n    \u003Ctd>Atlanta\u003C\u002Ftd>\n    \u003C\u002Ftr>\n    \u003Ctr>\n    \u003Ctd rowspan=5>\"Which city has the highest population?\"\u003C\u002Ftd>\n    \u003Ctd>\"What is the population of Toronto?\"\u003C\u002Ftd>\n    \u003Ctd>vector retrieval\u003C\u002Ftd>\n    \u003Ctd>Toronto\u003C\u002Ftd>\n    \u003C\u002Ftr>\n    \u003Ctr>\n    \u003Ctd>\"What is the population of Chicago?\"\u003C\u002Ftd>\n    \u003Ctd>vector retrieval\u003C\u002Ftd>\n    \u003Ctd>Chicago\u003C\u002Ftd>\n    \u003C\u002Ftr>\n    \u003Ctr>\n    \u003Ctd>\"What is the population of Houston?\"\u003C\u002Ftd>\n    \u003Ctd>vector retrieval\u003C\u002Ftd>\n    \u003Ctd>Houston\u003C\u002Ftd>\n    \u003C\u002Ftr>\n    \u003Ctr>\n    \u003Ctd>\"What is the population of Boston?\"\u003C\u002Ftd>\n    \u003Ctd>vector retrieval\u003C\u002Ftd>\n    \u003Ctd>Boston\u003C\u002Ftd>\n    \u003C\u002Ftr>\n    \u003Ctr>\n    \u003Ctd>\"What is the population of Atlanta?\"\u003C\u002Ftd>\n    \u003Ctd>vector retrieval\u003C\u002Ftd>\n    \u003Ctd>Atlanta\u003C\u002Ftd>\n    \u003C\u002Ftr>\n\u003C\u002Ftbody>\n\u003C\u002Ftable>\n\u003C\u002Fdetails>\n\n### Task 2: Vector\u002FSummary Retrieval\n\nFor each sub-question, we use the chosen retrieval function over the corresponding data source to retrieve the relevant information. 
For example, for the sub-question *\"What is the population of Chicago?\"*, we use vector retrieval over the Chicago data source. Similarly, for the sub-question *\"Give me a summary of the positive aspects of Atlanta.\"*, we use summary retrieval over the Atlanta data source.\n\nFor both retrieval methods, we use the same LLM prompt template. In fact, we find that the popular **RAG Prompt** from [LangchainHub](https:\u002F\u002Fsmith.langchain.com\u002Fhub) works great out-of-the-box for this step.\n\n```\n-- RAG Prompt Template --\n\n\"\"\"\nYou are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.\nQuestion: {question}\nContext: {context}\nAnswer:\n```\n\nBoth the retrieval methods only differ in the context used for the LLM call. For vector retrieval, we use the top K most similar data chunks to the sub-question as context. For summary retrieval, we use the entire data source as context.\n\n![task_2_table](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fpchunduri6_rag-demystified_readme_f28d34151f24.png)\n\n### Task 3: Response Aggregation\n\nThis is the final step that aggregates the responses from the sub-questions into a final response. For example, for the question *\"Which city has the highest population?\"*, the sub-questions retrieve the population of each city and then response aggregation finds and returns the city with the highest population.\nThe **RAG Prompt** works great for this step as well.\n\nThe context for the LLM call is the list of responses from the sub-questions. The question is the original user question and the LLM outputs a final response.\n\n![task_3_table](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fpchunduri6_rag-demystified_readme_0b075ff42484.png)\n\n### Putting it all together\n\nAfter unraveling the layers of abstraction, we uncovered the secret ingredient powering the sub-question query engine - 4 types of LLM calls each with different prompt template, context, and a question. This fits the universal input pattern that we identified earlier perfectly, and is a far cry from the complex abstractions that we started with.\nTo summarize:\n![equation](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fpchunduri6_rag-demystified_readme_23100a8df052.png)\n![call_types_table](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fpchunduri6_rag-demystified_readme_4a8483ca7fc7.png)\n\nTo see the full pipeline in action, run the following commands:\n\n```\npip install -r requirements.txt\n\necho OPENAI_API_KEY='yourkey' > .env\npython complex_qa.py\n```\n\nHere is an example of the system answering the question *\"Which city with the highest population?\"*.\n\n![full_pipeline](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fpchunduri6_rag-demystified_readme_fe354400f7c5.png)\n\n## Challenges\n\nNow that we've demystified the inner workings of advanced RAG pipelines, let's examine the challenges associated with them.\n\n1. **Question sensitivity** - The biggest challenge that we observed with these systems is the question sensitivity. The LLMs are extremely sensitive to the user question, and the pipeline fails unexpectedly for several user questions. Here are a few example failure cases that we encountered:\n    - **Incorrect sub-questions** - The LLM sometimes generates incorrect sub-questions. 
For example, *\"Which city has the highest number of tech companies?\"* is broken down into *\"What are the tech companies in each city?\"* 5 times (once for each city) instead of *\"What is the number of tech companies in Toronto?\"*, *\"What is the number of tech companies in Chicago?\"*, etc.\n    - **Incorrect retrieval function** - *\"Summarize the positive aspects of Atlanta and Toronto.\"* results in using the vector retrieval function instead of the summary retrieval method.\n\nWe had to put in significant effort into prompt engineering to get the pipeline to work for each question. This is a significant challenge for building robust systems.\n\nTo verify this behavior, we [implemented the example](llama_index_baseline.py) using the LlamaIndex Sub-question query engine. Consistent with our observations, the system often generates the wrong sub-questions and also uses the wrong retrieval function for the sub-questions, as shown below.\n\n![llama_index_baseline](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fpchunduri6_rag-demystified_readme_d51d31724bc4.png)\n\n\n2. **Cost** - The second challenge is the cost dynamics of advanced RAG pipelines. The issue is two-fold:\n    - **Cost sensitivity** - The final cost of the question is dependent on the number of sub-questions generated, the retrieval function used, and the number of data sources queried. Since the LLMs are sensitive to the prompt, the cost of the question can vary significantly depending on the question and the LLM output. For example, the incorrect model choice in the LlamaIndex baseline example above (`summary_tool`) results in a 3x higher cost compared to the `vector_tool` while also generating an incorrect response.\n    - **Cost estimation** - Advanced abstractions in RAG frameworks obscure the estimated cost of the question. Setting up a cost monitoring system is challenging since the cost of the question is dependent on the LLM output.\n\n\n## Conclusion\n\nAdvanced RAG pipelines powered by LLMs have revolutionized question-answering systems.\nHowever, as we have seen, these pipelines are not turnkey solutions. Under the hood, they rely on carefully engineered prompt templates and multiple chained LLM calls. As illustrated in this [EvaDB](https:\u002F\u002Fgithub.com\u002Fgeorgia-tech-db\u002Fevadb) application, these pipelines can be question-sensitive, brittle, and opaque in their cost dynamics. Understanding these intricacies is key to leveraging their full potential and paving the way for more robust and efficient systems in the future.\n\n\n\u003C!-- ## Appendix\n\n\nTo reliably generate the correct format of functions and data sources, we use the powerful [OpenAI function calling](https:\u002F\u002Fopenai.com\u002Fblog\u002Ffunction-calling-and-other-api-updates) feature paired with Pydantic models. 
## Challenges\n\nNow that we've demystified the inner workings of advanced RAG pipelines, let's examine the challenges associated with them.\n\n1. **Question sensitivity** - The biggest challenge that we observed with these systems is question sensitivity. The LLMs are extremely sensitive to the user question, and the pipeline fails unexpectedly for several user questions. Here are a few example failure cases that we encountered:\n    - **Incorrect sub-questions** - The LLM sometimes generates incorrect sub-questions. For example, *\"Which city has the highest number of tech companies?\"* is broken down into *\"What are the tech companies in each city?\"* 5 times (once for each city) instead of *\"What is the number of tech companies in Toronto?\"*, *\"What is the number of tech companies in Chicago?\"*, etc.\n    - **Incorrect retrieval function** - *\"Summarize the positive aspects of Atlanta and Toronto.\"* results in using the vector retrieval function instead of the summary retrieval method.\n\nWe had to put significant effort into prompt engineering to get the pipeline to work for each question. This is a significant challenge for building robust systems.\n\nTo verify this behavior, we [implemented the example](llama_index_baseline.py) using the LlamaIndex Sub-question query engine. Consistent with our observations, the system often generates the wrong sub-questions and also uses the wrong retrieval function for the sub-questions, as shown below.\n\n![llama_index_baseline](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fpchunduri6_rag-demystified_readme_d51d31724bc4.png)\n\n\n2. **Cost** - The second challenge is the cost dynamics of advanced RAG pipelines. The issue is two-fold:\n    - **Cost sensitivity** - The final cost of the question is dependent on the number of sub-questions generated, the retrieval function used, and the number of data sources queried. Since the LLMs are sensitive to the prompt, the cost of the question can vary significantly depending on the question and the LLM output. For example, the incorrect tool choice in the LlamaIndex baseline example above (`summary_tool`) results in a 3x higher cost compared to the `vector_tool` while also generating an incorrect response.\n    - **Cost estimation** - Advanced abstractions in RAG frameworks obscure the estimated cost of the question. Setting up a cost monitoring system is challenging since the cost of the question is dependent on the LLM output.\n\n\n## Conclusion\n\nAdvanced RAG pipelines powered by LLMs have revolutionized question-answering systems.\nHowever, as we have seen, these pipelines are not turnkey solutions. Under the hood, they rely on carefully engineered prompt templates and multiple chained LLM calls. As illustrated in this [EvaDB](https:\u002F\u002Fgithub.com\u002Fgeorgia-tech-db\u002Fevadb) application, these pipelines can be question-sensitive, brittle, and opaque in their cost dynamics. Understanding these intricacies is key to leveraging their full potential and paving the way for more robust and efficient systems in the future.\n\n\n\u003C!-- ## Appendix\n\n\nTo reliably generate the correct format of functions and data sources, we use the powerful [OpenAI function calling](https:\u002F\u002Fopenai.com\u002Fblog\u002Ffunction-calling-and-other-api-updates) feature paired with Pydantic models. We also use the [Instructor](https:\u002F\u002Fgithub.com\u002Fjxnl\u002Finstructor) library to easily generate LLM-ready function schemas.\n\nMore details on the full schema definition can be found [here](subquestion_generator.py).\n\nFor example, the function schema to choose vector\u002Fsummary retrieval is as simple as:\n\n```python\nfrom enum import Enum\n\nclass FunctionEnum(str, Enum):\n    \"\"\"The function to use to answer the questions.\n    Use vector_retrieval for factoid questions.\n    Use summary_retrieval for summarization questions.\n    \"\"\"\n    VECTOR_RETRIEVAL = \"vector_retrieval\"\n    SUMMARY_RETRIEVAL = \"summary_retrieval\"\n```\n\nThe data source schema definition is also straightforward:\n```python\nclass DataSourceEnum(str, Enum):\n    \"\"\"The data source to use to answer the corresponding subquestion\"\"\"\n    TORONTO = \"Toronto\"\n    CHICAGO = \"Chicago\"\n    HOUSTON = \"Houston\"\n    BOSTON = \"Boston\"\n    ATLANTA = \"Atlanta\"\n```\n\nAll of this can be packaged into a simple Pydantic model:\n\n```python\nfrom pydantic import BaseModel, Field\n\nclass QuestionBundle(BaseModel):\n    question: str = Field(None, description=\"The subquestion extracted from the user's question\")\n    function: FunctionEnum\n    data_source: DataSourceEnum\n```\n\nUsing the Instructor library, we can provide the above schema as the desired output format to OpenAI.\n```python\nfrom typing import List\n\nimport openai\nfrom instructor import OpenAISchema\n\nclass SubQuestionBundleList(OpenAISchema):\n    subquestion_bundle_list: List[QuestionBundle] = Field(None, description=\"A list of subquestions - each item in the list contains a question, a function, and a data source\")\n\nresponse = openai.ChatCompletion.create(\n        model=\"gpt-3.5-turbo\",\n        functions=[SubQuestionBundleList.openai_schema],\n        ...\n)\n``` -->\n","# 揭秘高级检索增强生成 (RAG) 管道\n\n由大型语言模型（LLM）驱动的检索增强生成（RAG）管道在构建端到端问答系统中越来越受欢迎。[LlamaIndex](https:\u002F\u002Fgithub.com\u002Frun-llama\u002Fllama_index) 和 [Haystack](https:\u002F\u002Fgithub.com\u002Fdeepset-ai\u002Fhaystack) 等框架在使 RAG 管道易于使用方面取得了显著进展。虽然这些框架为构建高级 RAG 管道提供了出色的抽象，但这是以牺牲透明度为代价的。从用户角度来看，底层发生了什么并不显而易见，特别是在出现错误或异常情况时。\n\n在这个 [EvaDB](https:\u002F\u002Fgithub.com\u002Fgeorgia-tech-db\u002Fevadb) 应用中，我们将通过检查通常保持不透明的机制、限制和成本，来揭示高级 RAG 管道的内部工作原理。\n\n\u003Cp align=\"center\">\n  \u003Cimg width=\"70%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fpchunduri6_rag-demystified_readme_8003bae475a0.png\" title=\"llama working on a laptop to retrieve data\" >\n  \u003Cbr>\n  \u003Cb>\u003Ci>在笔记本电脑上工作的 Llama\u003C\u002Fi> 🙂\u003C\u002Fb>\n\u003C\u002Fp>\n\n## 快速开始\n\n如果您想立即开始，请使用以下命令运行应用程序：\n\n```\npip install -r requirements.txt\n\necho OPENAI_API_KEY='yourkey' > .env\npython complex_qa.py\n```\n\n## RAG 概述\n\n检索增强生成（RAG）是一种用于基于 LLM 的问答的前沿 AI 范式。\n一个 RAG 管道通常包含：\n\n1. **数据仓库** - 包含与问答任务相关的信息的数据源集合（例如，文档、表格等）。\n\n2. **向量检索** - 给定一个问题，找到与该问题最相似的 Top K 个数据块。这是使用向量存储（例如，[Faiss](https:\u002F\u002Ffaiss.ai\u002Findex.html)）完成的。\n\n3. **响应生成** - 给定最相似的 Top K 个数据块，使用大型语言模型（例如 GPT-4）生成响应。\n\nRAG 相比传统的基于 LLM 的问答提供了两个关键优势：\n1. **最新信息** - 数据仓库可以实时更新，因此信息始终是最新的。\n\n2. 
**来源追踪** - RAG 提供清晰的追溯性，使用户能够识别信息来源，这对于准确性验证和减轻 LLM 幻觉至关重要。\n\n## 构建高级 RAG 管道\n\n为了能够回答更复杂的问题，最近的 AI 框架如 LlamaIndex 引入了更高级的抽象，例如 [子问题查询引擎](https:\u002F\u002Fgpt-index.readthedocs.io\u002Fen\u002Flatest\u002Fexamples\u002Fquery_engine\u002Fsub_question_query_engine.html)。\n\n在本应用中，我们将以子问题查询引擎为例，揭开复杂 RAG 管道的神秘面纱。我们将检查子问题查询引擎的内部工作原理，并将抽象简化为其核心组件。我们还将确定与高级 RAG 管道相关的一些挑战。\n\n### 设置\n\n数据仓库是包含与问答任务相关的信息的数据源的集合（例如，文档、表格等）。\n\n在本示例中，我们将使用一个简单的数据仓库，其中包含多个关于不同热门城市的维基百科文章，灵感来自 LlamaIndex 的 [说明性用例](https:\u002F\u002Fdocs.llamaindex.ai\u002Fen\u002Fstable\u002Fexamples\u002Findex_structs\u002Fdoc_summary\u002FDocSummary.html)。每个城市的维基都是一个独立的数据源。请注意，为了简单起见，我们将每个文档的大小限制在 LLM 上下文限制范围内。\n\n我们的目标是构建一个系统，能够回答诸如以下问题：\n1. *“芝加哥的人口是多少？”*\n2. *“请总结亚特兰大的积极方面。”*\n3. *“哪个城市人口最多？”*\n\n如您所见，问题可以是针对单个数据源的简单事实型\u002F摘要型问题（Q1\u002FQ2），也可以是针对多个数据源的复杂事实型\u002F摘要型问题（Q3）。\n\n我们有以下*检索方法*可供选择：\n\n1. **向量检索** - 给定一个问题和一个数据源，使用来自该数据源中与问题最相似的前-K 个数据块作为上下文，生成 LLM 响应。我们使用来自 [EvaDB](https:\u002F\u002Fgithub.com\u002Fgeorgia-tech-db\u002Fevadb) 的现成 FAISS 向量索引进行向量检索。但是，这些概念适用于任何向量索引。\n\n2. **摘要检索** - 给定一个摘要问题和一个数据源，使用该数据源的全部作为上下文，生成 LLM 响应。\n\n### 秘诀\n\n我们的关键见解是，高级 RAG 管道中的每个组件都由单次 LLM 调用驱动。整个管道是一系列精心设计的提示模板的 LLM 调用。这些提示模板是使高级 RAG 管道能够执行复杂任务的秘诀。\n\n事实上，任何高级 RAG 管道都可以分解为遵循通用输入模式的一系列单独 LLM 调用：\n\n![equation](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fpchunduri6_rag-demystified_readme_23100a8df052.png)\n\n\u003C!-- LLM 输入 = **提示模板** + **上下文** + **问题** -->\n其中：\n- **提示模板** - 针对特定任务策划的提示模板（例如，子问题生成、摘要）\n- **上下文** - 用于执行任务的上下文（例如，最相似的 Top-K 个数据块）\n- **问题** - 要回答的问题\n\n现在，我们通过检查子问题查询引擎的内部工作原理来说明这一原则。\n\n子问题查询引擎必须执行三个任务：\n1. **子问题生成** - 给定一个复杂问题，将其分解为一组子问题，同时为每个子问题识别适当的数据源和检索函数。\n2. **向量\u002F摘要检索** - 对于每个子问题，使用所选的检索函数在相应的数据源上检索相关信息。\n3. **响应聚合** - 将来自子问题的响应聚合成最终响应。\n\n让我们详细检查每个任务。\n\n### 任务 1：子问题生成\n\n我们的目标是将复杂问题分解为一组子问题，同时为每个子问题识别合适的数据源和检索函数。例如，问题 *\"Which city has the highest population?\"*（哪个城市人口最多？）被分解为五个子问题，每个城市一个，形式为 *\"What is the population of {city}?\"*（{城市}的人口是多少？）。每个子问题的数据源必须是相应城市的维基百科，检索函数必须是向量检索。\n\n乍一看，这似乎是一项艰巨的任务。具体来说，我们需要回答以下问题：\n1. **我们如何知道要生成哪些子问题？**\n2. **我们如何知道每个子问题使用哪个数据源？**\n3. 
**我们如何知道每个子问题使用哪个检索函数？**\n\n值得注意的是，这三个问题的答案都是一样的——一次单一的 **LLM（大型语言模型）** 调用！整个子问题查询引擎由一次精心设计的提示模板的单一 LLM 调用驱动。让我们称这个模板为 **子问题提示模板**。\n\n```\n-- Sub-question Prompt Template --\n\n\"\"\"\n    You are an AI assistant that specializes in breaking down complex questions into simpler, manageable sub-questions.\n    When presented with a complex user question, your role is to generate a list of sub-questions that, when answered, will comprehensively address the original question.\n    You have at your disposal a pre-defined set of functions and data sources to utilize in answering each sub-question.\n    If a user question is straightforward, your task is to return the original question, identifying the appropriate function and data source to use for its solution.\n    Please remember that you are limited to the provided functions and data sources, and that each sub-question should be a full question that can be answered using a single function and a single data source.\n\"\"\"\n```\n\nLLM 调用的上下文是系统可用的数据源名称和函数。问题是用户问题。LLM 输出一系列子问题，每个都包含一个函数和一个数据源。\n\n![task_1_table](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fpchunduri6_rag-demystified_readme_944a46c6bb0b.png)\n\n对于这三个示例问题，LLM 返回以下输出：\n\n\u003Cdetails>\n  \u003Csummary>\n    LLM 输出表格\n  \u003C\u002Fsummary>\n\u003Ctable>\n\u003Cthead>\n  \u003Ctr>\n    \u003Cth>问题\u003C\u002Fth>\n    \u003Cth>子问题\u003C\u002Fth>\n    \u003Cth>检索方法\u003C\u002Fth>\n    \u003Cth>数据源\u003C\u002Fth>\n  \u003C\u002Ftr>\n\u003C\u002Fthead>\n\u003Ctbody>\n  \u003Ctr>\n    \u003Ctd>\"芝加哥的人口是多少？\"\u003C\u002Ftd>\n    \u003Ctd>\"芝加哥的人口是多少？\"\u003C\u002Ftd>\n    \u003Ctd>向量检索\u003C\u002Ftd>\n    \u003Ctd>Chicago\u003C\u002Ftd>\n    \u003C\u002Ftr>\n    \u003Ctr>\n    \u003Ctd>\"给我总结一下亚特兰大的积极方面。\"\u003C\u002Ftd>\n    \u003Ctd>\"给我总结一下亚特兰大的积极方面。\"\u003C\u002Ftd>\n    \u003Ctd>摘要检索\u003C\u002Ftd>\n    \u003Ctd>Atlanta\u003C\u002Ftd>\n    \u003C\u002Ftr>\n    \u003Ctr>\n    \u003Ctd rowspan=5>\"哪个城市人口最多？\"\u003C\u002Ftd>\n    \u003Ctd>\"多伦多的人口是多少？\"\u003C\u002Ftd>\n    \u003Ctd>向量检索\u003C\u002Ftd>\n    \u003Ctd>Toronto\u003C\u002Ftd>\n    \u003C\u002Ftr>\n    \u003Ctr>\n    \u003Ctd>\"芝加哥的人口是多少？\"\u003C\u002Ftd>\n    \u003Ctd>向量检索\u003C\u002Ftd>\n    \u003Ctd>Chicago\u003C\u002Ftd>\n    \u003C\u002Ftr>\n    \u003Ctr>\n    \u003Ctd>\"休斯顿的人口是多少？\"\u003C\u002Ftd>\n    \u003Ctd>向量检索\u003C\u002Ftd>\n    \u003Ctd>Houston\u003C\u002Ftd>\n    \u003C\u002Ftr>\n    \u003Ctr>\n    \u003Ctd>\"波士顿的人口是多少？\"\u003C\u002Ftd>\n    \u003Ctd>向量检索\u003C\u002Ftd>\n    \u003Ctd>Boston\u003C\u002Ftd>\n    \u003C\u002Ftr>\n    \u003Ctr>\n    \u003Ctd>\"亚特兰大的人口是多少？\"\u003C\u002Ftd>\n    \u003Ctd>向量检索\u003C\u002Ftd>\n    \u003Ctd>Atlanta\u003C\u002Ftd>\n    \u003C\u002Ftr>\n\u003C\u002Ftbody>\n\u003C\u002Ftable>\n\u003C\u002Fdetails>\n\n### 任务 2：向量\u002F摘要检索\n\n对于每个子问题，我们使用选定的检索函数在相应的数据源上检索相关信息。例如，对于子问题 *\"What is the population of Chicago?\"*（芝加哥的人口是多少？），我们在芝加哥数据源上使用向量检索。同样，对于子问题 *\"Give me a summary of the positive aspects of Atlanta.\"*（给我总结一下亚特兰大的积极方面。），我们在亚特兰大数据源上使用摘要检索。\n\n对于这两种检索方法，我们使用相同的 LLM 提示模板。事实上，我们发现来自 [LangchainHub](https:\u002F\u002Fsmith.langchain.com\u002Fhub) 的流行 **RAG Prompt（检索增强生成提示词）** 开箱即用，非常适合此步骤。\n\n```\n-- RAG Prompt Template --\n\n\"\"\"\nYou are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. 
Use three sentences maximum and keep the answer concise.\nQuestion: {question}\nContext: {context}\nAnswer:\n\"\"\"\n```\n\n这两种检索方法仅在用于 LLM 调用的上下文上有所不同。对于向量检索，我们使用与子问题最相似的 Top K 个数据块作为上下文。对于摘要检索，我们使用整个数据源作为上下文。\n\n![task_2_table](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fpchunduri6_rag-demystified_readme_f28d34151f24.png)\n\n### 任务 3：响应聚合\n\n这是最后一步，将来自子问题的响应聚合成最终响应。例如，对于问题 *\"Which city has the highest population?\"*（哪个城市人口最多？），子问题检索了每个城市的人口，然后响应聚合查找并返回人口最多的城市。\n**RAG Prompt** 在此步骤中也表现优异。\n\nLLM 调用的上下文是来自子问题的响应列表。问题是原始用户问题，LLM 输出最终响应。\n\n![task_3_table](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fpchunduri6_rag-demystified_readme_0b075ff42484.png)\n\n### 整合所有内容\n\n在解开抽象层的奥秘后，我们发现了驱动子问题查询引擎的秘密成分——4 种类型的 LLM 调用，每种都有不同的提示模板、上下文和问题。这完美契合了我们之前确定的通用输入模式，与我们最初开始的复杂抽象相去甚远。\n总结如下：\n![equation](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fpchunduri6_rag-demystified_readme_23100a8df052.png)\n![call_types_table](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fpchunduri6_rag-demystified_readme_4a8483ca7fc7.png)\n\n要查看完整流程的运行情况，请运行以下命令：\n\n```\npip install -r requirements.txt\n\necho OPENAI_API_KEY='yourkey' > .env\npython complex_qa.py\n```\n\n以下是系统回答问题 *\"Which city has the highest population?\"*（哪个城市人口最多？）的示例。\n\n![full_pipeline](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fpchunduri6_rag-demystified_readme_fe354400f7c5.png)\n\n## 挑战\n\n既然我们已经揭开了高级 RAG（检索增强生成）管道内部运作的神秘面纱，让我们来看看与之相关的挑战。\n\n1. **问题敏感性** - 我们观察到的这些系统面临的最大挑战是问题敏感性。大语言模型（LLM）对用户问题极其敏感，导致管道在某些用户问题上意外失败。以下是我们遇到的一些示例失败案例：\n    - **不正确的子问题** - 大语言模型有时会生成不正确的子问题。例如，*\"Which city has the highest number of tech companies?\"* 被分解为 *\"What are the tech companies in each city?\"* 5 次（每个城市一次），而不是 *\"What is the number of tech companies in Toronto?\"*, *\"What is the number of tech companies in Chicago?\"* 等。\n    - **不正确的检索函数** - *\"Summarize the positive aspects of Atlanta and Toronto.\"* 导致使用了向量检索（vector retrieval）函数，而不是摘要检索（summary retrieval）方法。\n\n我们必须投入大量精力进行提示工程（Prompt Engineering），才能使管道针对每个问题正常工作。这对于构建稳健的系统是一个重大挑战。\n\n为了验证这种行为，我们使用 LlamaIndex 子问题查询引擎 [实现了该示例](llama_index_baseline.py)。与我们的观察一致，该系统经常生成错误的子问题，并且为子问题使用了错误的检索函数，如下所示。\n\n![llama_index_baseline](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fpchunduri6_rag-demystified_readme_d51d31724bc4.png)\n\n\n2. 
**成本** - 第二个挑战是高级 RAG 管道的成本动态变化。这个问题有两方面：\n    - **成本敏感性** - 问题的最终成本取决于生成的子问题数量、使用的检索函数以及查询的数据源数量。由于 LLM 对提示词敏感，问题的成本会根据问题和 LLM 输出而有显著差异。例如，上述 LlamaIndex 基线示例中错误的工具选择（`summary_tool`）导致成本比 `vector_tool` 高出 3 倍，同时还会生成错误的响应。\n    - **成本估算** - RAG 框架中的高级抽象模糊了问题的预估成本。建立成本监控系统具有挑战性，因为问题的成本取决于 LLM 的输出。\n\n\n## 结论\n\n由 LLM 驱动的高级 RAG 管道彻底改变了问答系统。\n然而，正如我们所见，这些管道并非即插即用解决方案。在底层，它们依赖于精心设计的提示模板和多次链式调用的 LLM。正如本 [EvaDB](https:\u002F\u002Fgithub.com\u002Fgeorgia-tech-db\u002Fevadb) 应用所示，这些管道可能对问题敏感、脆弱，且其成本动态不透明。理解这些细微差别是利用其全部潜力的关键，并为未来构建更稳健和高效的系统铺平道路。\n\n\n\u003C!-- ## 附录\n\n\n为了可靠地生成函数和数据源的正确格式，我们结合了强大的 [OpenAI 函数调用](https:\u002F\u002Fopenai.com\u002Fblog\u002Ffunction-calling-and-other-api-updates) 功能与 Pydantic 模型。我们还使用 [Instructor](https:\u002F\u002Fgithub.com\u002Fjxnl\u002Finstructor) 库来轻松生成适用于 LLM 的函数模式。\n\n有关完整模式定义的更多详细信息，请参见 [此处](subquestion_generator.py)。\n\n例如，选择向量\u002F摘要检索的函数模式非常简单：\n\n```python\nfrom enum import Enum\n\nclass FunctionEnum(str, Enum):\n    \"\"\"The function to use to answer the questions.\n    Use vector_retrieval for factoid questions.\n    Use summary_retrieval for summarization questions.\n    \"\"\"\n    VECTOR_RETRIEVAL = \"vector_retrieval\"\n    SUMMARY_RETRIEVAL = \"summary_retrieval\"\n```\n\n数据源模式定义也很直接：\n```python\nclass DataSourceEnum(str, Enum):\n    \"\"\"The data source to use to answer the corresponding subquestion\"\"\"\n    TORONTO = \"Toronto\"\n    CHICAGO = \"Chicago\"\n    HOUSTON = \"Houston\"\n    BOSTON = \"Boston\"\n    ATLANTA = \"Atlanta\"\n```\n\n所有这些都可以打包成一个简单的 Pydantic 模型：\n\n```python\nfrom pydantic import BaseModel, Field\n\nclass QuestionBundle(BaseModel):\n    question: str = Field(None, description=\"The subquestion extracted from the user's question\")\n    function: FunctionEnum\n    data_source: DataSourceEnum\n```\n\n使用 Instructor 库，我们可以将上述模式作为期望的输出格式提供给 OpenAI。\n```python\nfrom typing import List\n\nimport openai\nfrom instructor import OpenAISchema\n\nclass SubQuestionBundleList(OpenAISchema):\n    subquestion_bundle_list: List[QuestionBundle] = Field(None, description=\"A list of subquestions - each item in the list contains a question, a function, and a data source\")\n\nresponse = openai.ChatCompletion.create(\n        model=\"gpt-3.5-turbo\",\n        functions=[SubQuestionBundleList.openai_schema],\n        ...\n)\n``` -->","# rag-demystified 快速上手指南\n\n本工具旨在揭示高级 RAG（检索增强生成）管道的内部机制，通过拆解复杂的抽象层，展示其核心由一系列精心设计的 LLM 调用组成。\n\n## 环境准备\n\n*   **Python 环境**：需安装 Python 3.8 或更高版本。\n*   **API Key**：需要准备一个有效的 OpenAI API Key，用于驱动大语言模型。\n*   **网络环境**：确保能够访问 PyPI 包仓库及 OpenAI 服务接口。\n\n## 安装步骤\n\n1.  将项目代码克隆或下载到本地目录。\n2.  在项目根目录下安装所需依赖：\n    ```bash\n    pip install -r requirements.txt\n    ```\n3.  创建 `.env` 文件并配置 API Key（请将 `yourkey` 替换为实际密钥）：\n    ```bash\n    echo OPENAI_API_KEY='yourkey' > .env\n    ```\n\n## 基本使用\n\n运行主脚本即可启动应用，体验针对多数据源复杂问题的问答功能（示例场景为比较不同城市的人口数据）：\n\n```bash\npython complex_qa.py\n```\n\n系统将自动执行以下流程：\n1.  **子问题生成**：将复杂问题拆解为多个简单子问题。\n2.  **检索与生成**：对每个子问题进行向量或摘要检索并生成回答。\n3.  
**聚合响应**：汇总所有子问题的回答，输出最终结果。","某金融分析团队正在开发一个支持多城市经济数据对比的智能助手，需处理跨文档的复杂事实查询。\n\n### 没有 rag-demystified 时\n- 依赖高层抽象框架导致内部逻辑不透明，难以定位回答错误的根本原因。\n- 面对“哪个城市人口最多”这类跨源问题时，无法看清子问题拆解与合并的具体过程。\n- 出现信息幻觉时缺乏来源追踪能力，无法验证生成内容是否准确引用了原始文档。\n- 调试困难，无法评估不同检索策略对最终成本和延迟的实际影响。\n\n### 使用 rag-demystified 后\n- 通过从零构建的流水线清晰展示子问题生成、向量检索及响应聚合的完整执行链路。\n- 直观观察子问题引擎如何将复杂查询分解为单文档任务并聚合结果。\n- 能够直接检查中间检索到的数据块，快速调整参数以解决召回不准的问题。\n- 明确掌握每个推理步骤的资源消耗，便于针对性优化系统性能与响应速度。\n\n它将高级 RAG 流程从不可见的黑盒转变为透明可控的工程实践，显著提升系统可维护性。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fpchunduri6_rag-demystified_23100a8d.png","pchunduri6","Pramod Chunduri","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fpchunduri6_fee025e2.jpg","Graduate Student at Georgia Tech building AI-powered database systems",null,"pramodchunduri","https:\u002F\u002Fpchunduri6.github.io\u002F","https:\u002F\u002Fgithub.com\u002Fpchunduri6",[84],{"name":85,"color":86,"percentage":87},"Python","#3572A5",100,860,55,"2026-03-09T04:56:33","Apache-2.0","未说明",{"notes":94,"python":92,"dependencies":95},"1. 运行前必须在 .env 文件中配置 OPENAI_API_KEY 环境变量。\n2. 核心生成任务依赖云端大模型（如 GPT-4），本地主要承担向量检索任务（FAISS\u002FEvaDB）。\n3. 具体依赖库及版本请查看项目根目录下的 requirements.txt 文件。\n4. 文档处理时需限制大小以适配 LLM 上下文窗口限制。",[96,97,98,99],"llama-index","faiss","evadb","openai",[14,15,13,51,26,54],[102,103,104,105,106,107,108,109],"ai","gpt","llm","question-answering","chatgpt","retrieval-augmented-generation","vector-database","rag","2026-03-27T02:49:30.150509","2026-04-06T07:15:10.809369",[113,118,123,127,132,136],{"id":114,"question_zh":115,"answer_zh":116,"source_url":117},3587,"运行时提示无法加载 .env 文件或为空，需要在文件中配置什么？","需要在 `.env` 文件中添加您的 OpenAI API 密钥。具体格式如下：\n\nOPENAI_API_KEY=\"sk-...\"\n\n请确保该文件存在且可读，否则程序将无法连接服务。","https:\u002F\u002Fgithub.com\u002Fpchunduri6\u002Frag-demystified\u002Fissues\u002F4",{"id":119,"question_zh":120,"answer_zh":121,"source_url":122},3588,"运行时报错 pydantic_core.ValidationError，关于模型名称应如何修正？","错误通常由 OpenAI 函数调用响应不可靠或模型名称拼写错误引起。请将代码中的模型名称从 `gpt-35-turbo` 修改为 `gpt-3.5-turbo`。同时建议在 `openai_utils.py` 的定价字典中添加以下配置以确保兼容性：\n\n\"gpt-3.5-turbo-0613\": {\"prompt\": 0.0015, \"completion\": 0.002}","https:\u002F\u002Fgithub.com\u002Fpchunduri6\u002Frag-demystified\u002Fissues\u002F7",{"id":124,"question_zh":125,"answer_zh":126,"source_url":122},3589,"项目是否已适配 OpenAI 新版 API？如何解决函数调用可靠性问题？","是的，代码已迁移至 OpenAI v1.x 版本。维护者已在 PR #8 中修复了相关逻辑，解决了因模型响应不符合指令导致的验证错误。建议更新代码以获取最新修复，之前的提示词工程内容仍保留在代码中以供参考。",{"id":128,"question_zh":129,"answer_zh":130,"source_url":131},3590,"创建向量存储时出现 `lark.exceptions.UnexpectedCharacters` 错误是什么原因？","该错误发生在解析 SQL 语句时（例如 `CREATE FUNCTION` 语句），通常由语法解析器上下文不匹配导致。报错信息显示期望的是 INDEX、DATABASE 等关键字，但遇到了其他字符。","https:\u002F\u002Fgithub.com\u002Fpchunduri6\u002Frag-demystified\u002Fissues\u002F6",{"id":133,"question_zh":134,"answer_zh":135,"source_url":131},3591,"遇到 EvaDB 与 Ray 的 Pydantic 版本冲突该如何处理？","这是一个已知问题，由 Ray 中的 Pydantic 版本限制引起（参考 Ray Issue #37019）。该不兼容仅在您想使用 EvaDB 的高级 Ray 功能时才可能发生。如果不需要高级功能，可忽略此警告或尝试升级依赖包解决兼容性问题。",{"id":137,"question_zh":138,"answer_zh":139,"source_url":140},3592,"如何处理 OpenAI API 的速率限制问题？","维护者已在 PR #2 中添加了重试功能（retry functionality）。建议使用带有指数退避等待策略的重试装饰器来处理请求失败的情况，例如设置最小\u002F最大等待时间、最大重试次数以及日志记录，以防止因速率限制导致的程序中断。","https:\u002F\u002Fgithub.com\u002Fpchunduri6\u002Frag-demystified\u002Fissues\u002F1",[]]