[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"tool-microsoft--promptbase":3,"similar-microsoft--promptbase":50},{"id":4,"github_repo":5,"name":6,"description_en":7,"description_zh":8,"ai_summary_zh":8,"readme_en":9,"readme_zh":10,"quickstart_zh":11,"use_case_zh":12,"hero_image_url":13,"owner_login":14,"owner_name":15,"owner_avatar_url":16,"owner_bio":17,"owner_company":18,"owner_location":18,"owner_email":19,"owner_twitter":20,"owner_website":21,"owner_url":22,"languages":23,"stars":28,"forks":29,"last_commit_at":30,"license":31,"difficulty_score":32,"env_os":33,"env_gpu":34,"env_ram":33,"env_deps":35,"category_tags":37,"github_topics":18,"view_count":39,"oss_zip_url":18,"oss_zip_packed_at":18,"status":40,"created_at":41,"updated_at":42,"faqs":43,"releases":49},448,"microsoft\u002Fpromptbase","promptbase","All things prompt engineering","promptbase 是一个专注于提示工程的资源库，汇集了各类最佳实践、示例脚本和资源，旨在帮助用户挖掘 GPT-4 等基础模型的极致性能。它不仅提供了现成的代码示例，还深入探讨了如何通过精心设计的提示策略，让通用模型在特定领域（如医学）达到甚至超越专用微调模型的效果。\n\npromptbase 的核心亮点在于展示了“Medprompt”方法论。这种方法通过组合“动态少样本选择”、“自生成思维链”等技术，成功引导 GPT-4 在医学等专业基准测试中取得了专家级的成绩，并进一步扩展到了非医疗领域。这意味着开发者无需昂贵的模型微调，仅通过优化提示词就能获得高质量的输出。\n\n对于 AI 开发者、提示词工程师以及研究人员来说，promptbase 是一个极具价值的参考宝库。无论你是希望提升模型在复杂任务中的表现，还是想深入了解提示工程背后的科学原理，都能在这里找到实用的指导和灵感。","# promptbase\n\n`promptbase` is an evolving collection of resources, best practices, and example scripts for eliciting the best performance from foundation models like `GPT-4`. 
We currently host scripts demonstrating the [`Medprompt` methodology](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.16452), including examples of how we further extended this collection of prompting techniques (\"`Medprompt+`\") into non-medical domains: \n\n| Benchmark | GPT-4 Prompt | GPT-4 Results | Gemini Ultra Results |\n| ---- | ------- | ------- | ---- |\n| MMLU | Medprompt+ | 90.10% | 90.04% |\n| GSM8K | Zero-shot | 95.3% | 94.4% |\n| MATH | Zero-shot | 68.4% | 53.2% |\n| HumanEval | Zero-shot | 87.8% | 74.4% |\n| BIG-Bench-Hard | Few-shot + CoT | 89.0% | 83.6% |\n| DROP | Zero-shot + CoT | 83.7% | 82.4% |\n| HellaSwag | 10-shot | 95.3% | 87.8% |\n\n\nIn the near future, `promptbase` will also offer further case studies and structured interviews around the scientific process behind our prompt engineering. We'll also offer deep dives into the specialized tooling that accentuates the prompt engineering process. Stay tuned!\n\n## `Medprompt` and The Power of Prompting\n\n\u003Cdetails>\n\u003Csummary>\n    \u003Cem>\"Can Generalist Foundation Models Outcompete Special-Purpose Tuning? Case Study in Medicine\" (H. Nori, Y. T. Lee, S. Zhang, D. Carignan, R. Edgar, N. Fusi, N. King, J. Larson, Y. Li, W. Liu, R. Luo, S. M. McKinney, R. O. Ness, H. Poon, T. Qin, N. Usuyama, C. White, E. Horvitz 2023)\u003C\u002Fem>\n\u003C\u002Fsummary>\n\u003Cbr\u002F>\n\u003Cpre>\n\n@article{nori2023can,\n  title={Can Generalist Foundation Models Outcompete Special-Purpose Tuning? 
Case Study in Medicine},\n  author={Nori, Harsha and Lee, Yin Tat and Zhang, Sheng and Carignan, Dean and Edgar, Richard and Fusi, Nicolo and King, Nicholas and Larson, Jonathan and Li, Yuanzhi and Liu, Weishung and others},\n  journal={arXiv preprint arXiv:2311.16452},\n  year={2023}\n}\n    \u003C\u002Fpre>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fpdf\u002F2311.16452.pdf\">Paper link\u003C\u002Fa>\n\u003C\u002Fdetails>\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_promptbase_readme_11171d6d8f52.png)\n\nIn a recent [study](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.16452), we showed how the composition of several prompting strategies into a method that we refer to as `Medprompt` can efficiently steer generalist models like GPT-4 to achieve top performance, even when compared to models specifically finetuned for medicine. `Medprompt` composes three distinct strategies together -- including dynamic few-shot selection, self-generated chain of thought, and choice-shuffle ensembling -- to elicit specialist-level performance from GPT-4. We briefly describe these strategies here:\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_promptbase_readme_83cdeb933ae8.png)\n\n- **Dynamic Few Shots**: Few-shot learning -- providing several examples of the task and response to a foundation model -- enables models to quickly adapt to a specific domain and\nlearn to follow the task format. For simplicity and efficiency, the few-shot examples applied in prompting for a particular task are typically fixed; they are unchanged across test examples. This necessitates that the few-shot examples selected are broadly representative and relevant to a wide distribution of text examples. One approach to meeting these requirements is to have domain experts carefully hand-craft exemplars. Even so, this approach cannot guarantee that the curated, fixed few-shot examples will be appropriately representative of every test example. 
However, with enough available data, we can select _different_ few-shot examples for different task inputs. We refer to this approach as employing dynamic few-shot examples. The method makes use of a mechanism to identify examples based on their similarity to the case at hand. For Medprompt, we did the following to identify representative few shot examples: Given a test example, we choose k training examples that are semantically similar using a k-NN clustering in the embedding space. Specifically, we first use OpenAI's `text-embedding-ada-002` model to embed candidate exemplars for few-shot learning. Then, for each test question x, we retrieve its nearest k neighbors x1, x2, ..., xk from the training set (according to distance in the embedding space of text-embedding-ada-002). These examples -- the ones most similar in embedding space to the test question -- are ultimately registered in the prompt.\n\n- **Self-Generated Chain of Thought (CoT)**: Chain-of-thought (CoT) uses natural language statements, such as “Let’s think step by step,” to explicitly encourage the model to generate a series of intermediate reasoning steps. The approach has been found to significantly improve the ability of foundation models to perform complex reasoning. Most approaches to chain-of-thought center on the use of experts to manually compose few-shot examples with chains of thought for prompting. Rather than rely on human experts, we pursued\na mechanism to automate the creation of chain-of-thought examples. We found that we could simply ask GPT-4 to generate chain-of-thought for the training examples, with appropriate guardrails for reducing risk of hallucination via incorrect reasoning chains.\n\n- **Majority Vote Ensembling**: [Ensembling](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FEnsemble_learning) refers to combining the output of several algorithms together to yield better predictive performance than any individual algorithm. 
Frontier models like `GPT-4` benefit from ensembling of their own outputs. A simple technique is to have a variety of prompts, or a single prompt with varied `temperature`, and report the most frequent answer amongst the ensemble constituents. For multiple choice questions, we employ a further trick, called `choice-shuffling`, that increases the diversity of the ensemble: we shuffle the relative order of the answer choices before generating each reasoning\npath. We then select the most consistent answer, i.e., the one that is least sensitive to choice shuffling, which increases the robustness of the answer.\n\nThe combination of these three techniques led to breakthrough performance in Medprompt for medical challenge questions. Implementation details of these techniques can be found here: https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Fpromptbase\u002Ftree\u002Fmain\u002Fsrc\u002Fpromptbase\u002Fmmlu\n\n## `Medprompt+` | Extending the power of prompting \n\nHere we provide some intuitive details on how we extended the `medprompt` prompting framework to elicit even stronger out-of-domain performance on the MMLU (Measuring Massive Multitask Language Understanding) benchmark.  MMLU was established as a test of general knowledge and reasoning powers of large language models.  The complete MMLU benchmark contains tens of thousands of challenge problems of different forms across 57 areas from basic mathematics to United States history, law, computer science, engineering, medicine, and more. \n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_promptbase_readme_227ff0e704cf.png)\n\nWe found that applying Medprompt without modification to the whole MMLU achieved a score of 89.1%. Not bad for a single policy working across a great diversity of problems!  But could we push Medprompt to do better?  Simply scaling up Medprompt can yield further benefits. As a first step, we increased the number of ensembled calls from five to 20.  
This boosted performance to 89.56%. \n\nWhile working to refine Medprompt further, we noticed that performance was relatively poor for specific topics of the MMLU. MMLU contains a great diversity of types of questions, depending on the discipline and specific benchmark at hand. How might we push GPT-4 to perform even better on MMLU given the diversity of problems?\n\nWe focused on extension to a portfolio approach based on the observation that some topical areas tend to ask questions that would require multiple steps of reasoning and perhaps a scratch pad to keep track of multiple parts of a solution. Other areas seek factual answers that follow more directly from questions. Medprompt employs “chain-of-thought” (CoT) reasoning, resonating with multi-step solving.  We wondered if the sophisticated Medprompt-classic approach might do less well on very simple questions and if the system might do better if a simpler method were used for the factual queries. \n\nFollowing this argument, we found that we could boost the performance on MMLU by extending Medprompt with a simple two-method prompt portfolio. We add to the classic Medprompt a set of 10 simple few-shot prompts that solicit an answer directly, without chain of thought. We then ask GPT-4 for help with deciding on the best strategy for each topic area and question. As a screening call, for each question we first ask GPT-4:\n```\n# Question\n{{ question }}\n \n# Task\nDoes answering the question above require a scratch-pad?\nA. Yes\nB. No\n```\n\nIf GPT-4 thinks the question does require a scratch-pad, then the contribution of the Chain-of-Thought component of the ensemble is doubled. If it doesn't, we halve that contribution (and let the ensemble instead depend more on the direct few-shot prompts). 
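This weight-routing step can be sketched in a few lines of Python. This is an illustrative sketch with hypothetical function names, not the implementation in this repository: it assumes the screening answer has already been obtained and that each ensemble member is tagged with the method that produced it.

```python
from collections import Counter

def route_weights(needs_scratchpad, base_weight=1.0):
    # Double the chain-of-thought (CoT) contribution when the screening
    # call answered 'Yes'; halve it when it answered 'No', so the direct
    # few-shot prompts dominate the ensemble instead.
    cot_weight = base_weight * 2 if needs_scratchpad else base_weight / 2
    return {'cot': cot_weight, 'direct': base_weight}

def weighted_vote(member_answers, weights):
    # member_answers: (method, answer) pairs from the ensemble members.
    tally = Counter()
    for method, answer in member_answers:
        tally[answer] += weights[method]
    # Return the answer carrying the largest total weight.
    return tally.most_common(1)[0][0]

# Example: the screening call answered 'B. No', so the question is
# treated as factual and the CoT members are down-weighted.
weights = route_weights(needs_scratchpad=False)
members = [('cot', 'A'), ('cot', 'A'),
           ('direct', 'B'), ('direct', 'B'), ('direct', 'B')]
print(weighted_vote(members, weights))
```

Here the two CoT members voting 'A' carry half weight each, so the three direct few-shot members voting 'B' win the vote.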
Dynamically leveraging the appropriate prompting technique in the ensemble led to a further +0.5% performance improvement across the MMLU.\n\nWe note that Medprompt+ relies on accessing confidence scores (logprobs) from GPT-4. These are not publicly available via the current API but will be enabled for all in the near future.\n\n\n## Running Scripts\n\n> Note: Some scripts hosted here are published for reference on methodology, but may not be immediately executable against public APIs. We're working hard on making the pipelines easier to run \"out of the box\" over the next few days, and appreciate your patience in the interim!\n\nFirst, clone the repo and install the promptbase package:\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Fpromptbase\ncd promptbase\u002Fsrc\npip install -e .\n```\n\nNext, decide which tests you'd like to run. You can choose from:\n\n- bigbench\n- drop\n- gsm8k\n- humaneval\n- math\n- mmlu\n\nBefore running the tests, you will need to download the datasets from the original sources (see below) and place them in the `src\u002Fpromptbase\u002Fdatasets` directory.\n\nAfter downloading datasets and installing the promptbase package, you can run a test with:\n\n`python -m promptbase dataset_name`\n\nFor example:\n\n`python -m promptbase gsm8k`\n\n## Dataset Links\n\nTo run evaluations, download these datasets and add them to \u002Fsrc\u002Fpromptbase\u002Fdatasets\u002F\n\n - MMLU: https:\u002F\u002Fgithub.com\u002Fhendrycks\u002Ftest\n    - Download the `data.tar` file from the above page\n    - Extract the contents\n    - Run `mkdir src\u002Fpromptbase\u002Fdatasets\u002Fmmlu`\n    - Run `python .\u002Fsrc\u002Fpromptbase\u002Fformat\u002Fformat_mmlu.py --mmlu_csv_dir \u002Fpath\u002Fto\u002Fextracted\u002Fcsv\u002Ffiles --output_path .\u002Fsrc\u002Fpromptbase\u002Fdatasets\u002Fmmlu`\n    - You will also need to set the following environment variables:\n      - `AZURE_OPENAI_API_KEY`\n      - `AZURE_OPENAI_CHAT_API_KEY`\n      - `AZURE_OPENAI_CHAT_ENDPOINT_URL`\n      - `AZURE_OPENAI_EMBEDDINGS_URL`\n 
   - Run with `python -m promptbase mmlu --subject \u003CSUBJECT>` where `\u003CSUBJECT>` is one of the MMLU datasets (such as 'abstract_algebra')\n    - In addition to the individual subjects, the `format_mmlu.py` script prepares files which enable `all` to be passed as a subject, which will run on the entire dataset\n - HumanEval: https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Fopenai_humaneval\n - DROP: https:\u002F\u002Fallenai.org\u002Fdata\u002Fdrop\n - GSM8K: https:\u002F\u002Fgithub.com\u002Fopenai\u002Fgrade-school-math\n - MATH: https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Fhendrycks\u002Fcompetition_math\n - Big-Bench-Hard: https:\u002F\u002Fgithub.com\u002Fsuzgunmirac\u002FBIG-Bench-Hard\n   The contents of this repo need to be put into a directory called `BigBench` in the `datasets` directory\n\n## Other Resources:\n\nMedprompt Blog: https:\u002F\u002Fwww.microsoft.com\u002Fen-us\u002Fresearch\u002Fblog\u002Fthe-power-of-prompting\u002F\n\nMedprompt Research Paper: https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.16452\n\nMedprompt+: https:\u002F\u002Fwww.microsoft.com\u002Fen-us\u002Fresearch\u002Fblog\u002Fsteering-at-the-frontier-extending-the-power-of-prompting\u002F\n\nMicrosoft Introduction to Prompt Engineering: https:\u002F\u002Flearn.microsoft.com\u002Fen-us\u002Fazure\u002Fai-services\u002Fopenai\u002Fconcepts\u002Fprompt-engineering\n\nMicrosoft Advanced Prompt Engineering Guide: https:\u002F\u002Flearn.microsoft.com\u002Fen-us\u002Fazure\u002Fai-services\u002Fopenai\u002Fconcepts\u002Fadvanced-prompt-engineering?pivots=programming-language-chat-completions","# promptbase\n\n`promptbase` 是一个不断发展的资源、最佳实践和示例脚本的集合，旨在从 `GPT-4` 等基础模型（Foundation Models）中激发最佳性能。我们目前托管了演示 [`Medprompt` 方法](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.16452) 的脚本，包括我们如何将这套提示工程技术（\"`Medprompt+`\"）进一步扩展到非医疗领域的示例：\n\n| Benchmark (基准测试) | GPT-4 Prompt (提示词) | GPT-4 Results (结果) | Gemini Ultra Results (结果) |\n| ---- | ------- | ------- | ---- |\n| 
MMLU | Medprompt+ | 90.10% | 90.04% |\n| GSM8K | Zero-shot | 95.3% | 94.4% |\n| MATH | Zero-shot | 68.4% | 53.2% |\n| HumanEval | Zero-shot | 87.8% | 74.4% |\n| BIG-Bench-Hard | Few-shot + CoT | 89.0% | 83.6% |\n| DROP | Zero-shot + CoT | 83.7% | 82.4% |\n| HellaSwag | 10-shot | 95.3% | 87.8% |\n\n\n在不久的将来，`promptbase` 还将提供更多的案例研究和结构化访谈，围绕我们在提示工程背后所采取的科学流程展开。我们还将提供专门的深度剖析，介绍那些能强化提示工程流程的专业工具。敬请期待！\n\n## `Medprompt` 与提示的力量\n\n\u003Cdetails>\n\u003Csummary>\n    \u003Cem>\"Can Generalist Foundation Models Outcompete Special-Purpose Tuning? Case Study in Medicine\" (H. Nori, Y. T. Lee, S. Zhang, D. Carignan, R. Edgar, N. Fusi, N. King, J. Larson, Y. Li, W. Liu, R. Luo, S. M. McKinney, R. O. Ness, H. Poon, T. Qin, N. Usuyama, C. White, E. Horvitz 2023)\u003C\u002Fem>\n\u003C\u002Fsummary>\n\u003Cbr\u002F>\n\u003Cpre>\n\n@article{nori2023can,\n  title={Can Generalist Foundation Models Outcompete Special-Purpose Tuning? Case Study in Medicine},\n  author={Nori, Harsha and Lee, Yin Tat and Zhang, Sheng and Carignan, Dean and Edgar, Richard and Fusi, Nicolo and King, Nicholas and Larson, Jonathan and Li, Yuanzhi and Liu, Weishung and others},\n  journal={arXiv preprint arXiv:2311.16452},\n  year={2023}\n}\n    \u003C\u002Fpre>\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fpdf\u002F2311.16452.pdf\">论文链接\u003C\u002Fa>\n\u003C\u002Fdetails>\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_promptbase_readme_11171d6d8f52.png)\n\n在最近的一项[研究](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.16452)中，我们展示了如何将几种提示策略组合成一种我们称之为 `Medprompt` 的方法，从而有效地引导像 GPT-4 这样的通用模型实现顶级性能，甚至可以与专门针对医学领域微调（Finetuned）的模型相媲美。`Medprompt` 组合了三种不同的策略——包括动态少样本选择（Dynamic Few-shot Selection）、自生成思维链（Self-generated Chain of Thought）和选项洗牌集成（Choice-shuffle Ensembling）——以激发 GPT-4 的专家级表现。我们在此简要描述这些策略：\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_promptbase_readme_83cdeb933ae8.png)\n\n- **动态少样本（Dynamic Few Shots）**：少样本学习（Few-shot 
learning）——向基础模型提供任务和响应的几个示例——使模型能够快速适应特定领域并学习遵循任务格式。为了简单和高效，用于特定任务提示的少样本示例通常是固定的；它们在不同的测试示例中保持不变。这就要求所选的少样本示例具有广泛的代表性，并与广泛的文本示例分布相关。满足这些要求的一种方法是让领域专家精心手工制作样本。即便如此，这种方法也不能保证策划好的固定少样本示例能恰当地代表每一个测试示例。然而，如果有足够多的可用数据，我们可以为不同的任务输入选择*不同*的少样本示例。我们将这种方法称为采用动态少样本示例。该方法利用一种机制，根据示例与当前案例的相似性来识别示例。对于 Medprompt，我们通过以下方式识别具有代表性的少样本示例：给定一个测试示例，我们使用嵌入空间（Embedding Space）中的 k近邻聚类（k-NN clustering）选择 k 个语义相似的训练示例。具体来说，我们首先使用 OpenAI 的 `text-embedding-ada-002` 模型对少样本学习的候选样本进行嵌入。然后，对于每个测试问题 x，我们从训练集中检索其最近的 k 个邻居 x1, x2, ..., xk（根据 text-embedding-ada-002 嵌入空间中的距离）。这些示例——即在嵌入空间中与测试问题最相似的示例——最终被注册到提示词中。\n\n- **自生成思维链（Self-Generated Chain of Thought, CoT）**：思维链使用自然语言陈述，例如“让我们一步步思考”，明确鼓励模型生成一系列中间推理步骤。该方法已被发现能显著提高基础模型执行复杂推理的能力。大多数思维链方法的核心是利用专家手动编写带有思维链的少样本示例用于提示。我们不再依赖人类专家，而是寻求一种机制来自动化创建思维链示例。我们发现，可以简单地要求 GPT-4 为训练示例生成思维链，并设置适当的防护措施以减少因错误推理链导致幻觉的风险。\n\n- **多数投票集成（Majority Vote Ensembling）**：[集成](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FEnsemble_learning)是指将多个算法的输出组合在一起，从而获得比任何单一算法更好的预测性能。像 `GPT-4` 这样的前沿模型受益于其自身输出的集成。一种简单的技术是使用多种提示词，或者使用具有不同 `temperature`（温度）参数的单一提示词，然后报告集成成分中最频繁的答案。对于多项选择题，我们采用了一种进一步增加集成多样性的技巧，称为 `choice-shuffling`（选项洗牌），即在生成每条推理路径之前打乱答案选项的相对顺序。然后我们选择最一致的答案，即对选项洗牌最不敏感的答案，这增加了答案的鲁棒性。\n\n这三种技术的结合使 Medprompt 在医学挑战问题上取得了突破性的性能。这些技术的实现细节可以在这里找到：https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Fpromptbase\u002Ftree\u002Fmain\u002Fsrc\u002Fpromptbase\u002Fmmlu\n\n## `Medprompt+` | 扩展提示的能力\n\n在这里，我们提供了一些直观的细节，介绍我们如何扩展 `medprompt` 提示框架，以在 MMLU（Measuring Massive Multitask Language Understanding，大规模多任务语言理解测量）基准测试中激发更强的域外性能。MMLU 的建立是为了测试大语言模型的通用知识和推理能力。完整的 MMLU 基准测试包含数万个不同形式的挑战性问题，涵盖从基础数学到美国历史、法律、计算机科学、工程、医学等 57 个领域。\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_promptbase_readme_227ff0e704cf.png)\n\n我们发现，将 Medprompt 不经修改直接应用于整个 MMLU 得分为 89.1%。对于在极其多样化的问题上运作的单一策略来说，这还不错！但我们能否推动 Medprompt 做得更好？简单地扩展 MedPrompt 可以带来进一步的收益。作为第一步，我们将集成调用（ensembled calls）的数量从 5 次增加到 20 次。这将性能提升到了 89.56%。\n\n在致力于进一步优化 Medprompt 的过程中，我们注意到 MMLU 特定主题的表现相对较差。根据学科和具体基准测试的不同，MMLU 
包含多种类型的问题。鉴于问题的多样性，我们如何推动 GPT-4 在 MMLU 上表现得更好？\n\n我们专注于扩展到一种组合策略（portfolio approach），基于观察到某些主题领域倾向于提出需要多步推理的问题，可能还需要草稿板（scratch pad）来跟踪解决方案的多个部分。其他领域则寻求直接从问题得出的事实性答案。Medprompt 采用“思维链”（Chain-of-Thought，CoT）推理，与多步求解相呼应。我们想知道，复杂的经典 Medprompt 方法是否在非常简单的问题上表现较差，以及如果对事实性查询使用更简单的方法，系统是否会表现更好。\n\n遵循这一论点，我们发现通过使用简单的双方法提示组合扩展 MedPrompt，可以提高 MMLU 上的性能。我们在经典 Medprompt 的基础上增加了一组 10 个简单、直接的少样本（few-shot）提示，直接征求答案而不使用思维链。然后我们请求 GPT-4 帮助决定每个主题领域和问题的最佳策略。作为筛选步骤，对于每个问题，我们首先询问 GPT-4：\n```\n# Question\n{{ question }}\n \n# Task\nDoes answering the question above require a scratch-pad?\nA. Yes\nB. No\n```\n\n如果 GPT-4 认为问题确实需要草稿板，那么集成中思维链组件的贡献权重会加倍。如果不需要，我们将该贡献减半（并让集成更多地依赖于直接的少样本提示）。在集成中动态利用适当的提示技术，使得 MMLU 的性能进一步提升了 +0.5%。\n\n我们注意到 Medprompt+ 依赖于访问 GPT-4 的置信度分数（logprobs，对数概率）。这些目前无法通过当前的 API 公开获取，但将在不久的将来向所有人开放。\n\n\n## 运行脚本\n\n> 注意：此处托管的某些脚本发布仅供参考方法论，但可能无法立即在公共 API 上执行。我们正在努力使流程在未来几天内更易于“开箱即用”，感谢您在此期间的耐心等待！\n\n首先，克隆仓库并安装 promptbase 包：\n\n```bash\ncd src\npip install -e .\n```\n\n接下来，决定您要运行哪些测试。您可以从以下选项中选择：\n\n- bigbench\n- drop\n- gsm8k\n- humaneval\n- math\n- mmlu\n\n在运行测试之前，您需要从原始来源下载数据集（见下文），并将它们放置在 `src\u002Fpromptbase\u002Fdatasets` 目录中。\n\n下载数据集并安装 promptbase 包后，您可以使用以下命令运行测试：\n\n`python -m promptbase dataset_name`\n\n例如：\n\n`python -m promptbase gsm8k`\n\n## 数据集链接\n\n要运行评估，请下载这些数据集并将其添加到 \u002Fsrc\u002Fpromptbase\u002Fdatasets\u002F\n\n - MMLU: https:\u002F\u002Fgithub.com\u002Fhendrycks\u002Ftest\n    - 从上述页面下载 `data.tar` 文件\n    - 解压内容\n    - 运行 `mkdir src\u002Fpromptbase\u002Fdatasets\u002Fmmlu`\n    - 运行 `python .\u002Fsrc\u002Fpromptbase\u002Fformat\u002Fformat_mmlu.py --mmlu_csv_dir \u002Fpath\u002Fto\u002Fextracted\u002Fcsv\u002Ffiles --output_path .\u002Fsrc\u002Fpromptbase\u002Fdatasets\u002Fmmlu`\n    - 您还需要设置以下环境变量：\n      - `AZURE_OPENAI_API_KEY`\n      - `AZURE_OPENAI_CHAT_API_KEY`\n      - `AZURE_OPENAI_CHAT_ENDPOINT_URL`\n      - `AZURE_OPENAI_EMBEDDINGS_URL`\n    - 使用 `python -m promptbase mmlu --subject \u003CSUBJECT>` 运行，其中 `\u003CSUBJECT>` 是 MMLU 数据集之一（例如 
'abstract_algebra'）\n    - 除了单个主题外，`format_mmlu.py` 脚本还准备了允许将 `all` 作为主题传递的文件，这将在整个数据集上运行\n - HumanEval: https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Fopenai_humaneval\n - DROP: https:\u002F\u002Fallenai.org\u002Fdata\u002Fdrop\n - GSM8K: https:\u002F\u002Fgithub.com\u002Fopenai\u002Fgrade-school-math\n - MATH: https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Fhendrycks\u002Fcompetition_math\n - Big-Bench-Hard: https:\u002F\u002Fgithub.com\u002Fsuzgunmirac\u002FBIG-Bench-Hard\n   此仓库的内容需要放入 `datasets` 目录中名为 `BigBench` 的目录中\n\n## 其他资源：\n\nMedprompt 博客：https:\u002F\u002Fwww.microsoft.com\u002Fen-us\u002Fresearch\u002Fblog\u002Fthe-power-of-prompting\u002F\n\nMedprompt 研究论文：https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.16452\n\nMedprompt+：https:\u002F\u002Fwww.microsoft.com\u002Fen-us\u002Fresearch\u002Fblog\u002Fsteering-at-the-frontier-extending-the-power-of-prompting\u002F\n\nMicrosoft 提示工程简介：https:\u002F\u002Flearn.microsoft.com\u002Fen-us\u002Fazure\u002Fai-services\u002Fopenai\u002Fconcepts\u002Fprompt-engineering\n\nMicrosoft 高级提示工程指南：https:\u002F\u002Flearn.microsoft.com\u002Fen-us\u002Fazure\u002Fai-services\u002Fopenai\u002Fconcepts\u002Fadvanced-prompt-engineering?pivots=programming-language-chat-completions","# promptbase 快速上手指南\n\n## 环境准备\n\n*   **系统要求**：Python 环境。\n*   **前置依赖**：\n    *   需要拥有 Azure OpenAI 的访问权限。\n    *   需设置以下环境变量：\n        *   `AZURE_OPENAI_API_KEY`\n        *   `AZURE_OPENAI_CHAT_API_KEY`\n        *   `AZURE_OPENAI_CHAT_ENDPOINT_URL`\n        *   `AZURE_OPENAI_EMBEDDINGS_URL`\n\n## 安装步骤\n\n1.  **克隆代码库**\n    首先将项目克隆到本地。\n\n2.  **安装包**\n    进入 `src` 目录并以可编辑模式安装 `promptbase` 包：\n\n    ```bash\n    cd src\n    pip install -e .\n    ```\n\n## 基本使用\n\n### 1. 数据准备\n运行测试前，需从原始来源下载对应数据集并放置于 `src\u002Fpromptbase\u002Fdatasets` 目录。\n\n支持的数据集包括：`bigbench`, `drop`, `gsm8k`, `humaneval`, `math`, `mmlu`。\n\n### 2. 
运行测试\n安装完成并准备好数据后，使用以下命令运行指定数据集的测试：\n\n```bash\npython -m promptbase dataset_name\n```\n\n例如，运行 GSM8K 数据集测试：\n\n```bash\npython -m promptbase gsm8k\n```\n\n### 3. MMLU 数据集特殊配置\n针对 MMLU 数据集，需执行额外的格式化步骤：\n\n1.  下载 `data.tar` 并解压。\n2.  创建目录：`mkdir src\u002Fpromptbase\u002Fdatasets\u002Fmmlu`。\n3.  运行格式化脚本：\n    ```bash\n    python .\u002Fsrc\u002Fpromptbase\u002Fformat\u002Fformat_mmlu.py --mmlu_csv_dir \u002Fpath\u002Fto\u002Fextracted\u002Fcsv\u002Ffiles --output_path .\u002Fsrc\u002Fpromptbase\u002Fdatasets\u002Fmmlu\n    ```\n4.  运行测试：\n    ```bash\n    python -m promptbase mmlu --subject \u003CSUBJECT>\n    ```\n    *注：`\u003CSUBJECT>` 为具体科目名称（如 `abstract_algebra`），也可使用 `all` 运行整个数据集。*","某金融科技公司的 AI 工程师正在开发一套智能法律合同审查系统，需要利用 GPT-4 对复杂的法律条款进行风险识别，但发现模型在处理罕见或复杂的条款时经常出现幻觉或理解偏差，难以满足业务对准确率的严苛要求。\n\n### 没有 promptbase 时\n\n- 只能通过简单的 Zero-shot 或手动拼凑固定的 Few-shot 示例，模型难以适应千变万化的合同类型，导致在边缘案例上表现不佳。\n- 为了提升效果，团队曾考虑对模型进行微调，但这需要高昂的数据标注成本和算力资源，且迭代周期过长，无法满足业务快速上线的需求。\n- 面对模型输出的错误逻辑，缺乏有效的引导机制（如思维链）来纠正，导致审查准确率长期卡在瓶颈，无法突破专家级的水平。\n- 提示词工程缺乏系统性方法论，仅凭经验调试，无法复现学术界已验证的高性能策略，开发效率低下。\n\n### 使用 promptbase 后\n\n- 直接参考 Medprompt 方法论，利用动态 Few-shot 技术，根据输入合同自动检索语义最相似的高质量案例作为示例，大幅提升了模型对特定上下文的理解力。\n- 无需进行昂贵的模型微调，仅通过组合提示策略（如自生成思维链），就让通用模型 GPT-4 达到了媲美专用法律模型的推理深度。\n- 引入 Choice-shuffle Ensembling（选择洗牌集成）策略，通过多次推理投票显著降低了模型的幻觉概率，使输出结果更加稳定可靠。\n- 获得了经过基准测试验证的最佳实践脚本，从“凭感觉写提示词”转变为“科学构建提示工程”，快速突破了性能瓶颈。\n\npromptbase 提供了一套经过实战验证的高级提示工程方法论与代码实现，让开发者无需昂贵的模型微调，也能榨干大模型的极致推理性能。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_promptbase_e15aed9f.png","microsoft","Microsoft","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fmicrosoft_4900709c.png","Open source projects and samples from Microsoft",null,"opensource@microsoft.com","OpenAtMicrosoft","https:\u002F\u002Fopensource.microsoft.com","https:\u002F\u002Fgithub.com\u002Fmicrosoft",[24],{"name":25,"color":26,"percentage":27},"Python","#3572A5",100,5742,328,"2026-04-03T00:06:59","MIT",2,"未说明","不需要（主要调用 Azure OpenAI 等 
API，无需本地推理）",{"notes":36,"python":33,"dependencies":33},"1. 必须配置 Azure OpenAI 相关环境变量（API Key、Endpoint URL 等）才能运行。\n2. 需手动下载 MMLU、GSM8K、HumanEval 等数据集并放入 src\u002Fpromptbase\u002Fdatasets 目录。\n3. Medprompt+ 功能依赖 GPT-4 的 logprobs（置信度分数），README 提及当前公共 API 暂未完全开放此功能。\n4. 这是一个提示工程测试框架，主要针对 GPT-4 等云端模型，非本地模型运行环境。",[38],"语言模型",3,"ready","2026-03-27T02:49:30.150509","2026-04-06T06:44:17.491965",[44],{"id":45,"question_zh":46,"answer_zh":47,"source_url":48},1738,"为什么仓库会提示缺少重要文件？","这是微软开源项目的自动合规检查流程。系统检测到仓库缺少微软项目应有的标准文件（如安全或支持文档），并已自动创建 Pull Request 来补充这些文件。当 PR 合并后，此提示会自动关闭。","https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Fpromptbase\u002Fissues\u002F2",[],[51,61,69,83,91,99],{"id":52,"name":53,"github_repo":54,"description_zh":55,"stars":56,"difficulty_score":32,"last_commit_at":57,"category_tags":58,"status":40},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,"2026-04-05T11:33:21",[59,60,38],"开发框架","Agent",{"id":62,"name":63,"github_repo":64,"description_zh":65,"stars":66,"difficulty_score":32,"last_commit_at":67,"category_tags":68,"status":40},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 
等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[59,38],{"id":70,"name":71,"github_repo":72,"description_zh":73,"stars":74,"difficulty_score":32,"last_commit_at":75,"category_tags":76,"status":40},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[77,78,79,80,60,81,38,59,82],"图像","数据工具","视频","插件","其他","音频",{"id":84,"name":85,"github_repo":86,"description_zh":87,"stars":88,"difficulty_score":39,"last_commit_at":89,"category_tags":90,"status":40},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[60,77,59,38,81],{"id":92,"name":93,"github_repo":94,"description_zh":95,"stars":96,"difficulty_score":39,"last_commit_at":97,"category_tags":98,"status":40},519,"PaddleOCR","PaddlePaddle\u002FPaddleOCR","PaddleOCR 是一款基于百度飞桨框架开发的高性能开源光学字符识别工具包。它的核心能力是将图片、PDF 等文档中的文字提取出来，转换成计算机可读取的结构化数据，让机器真正“看懂”图文内容。\n\n面对海量纸质或电子文档，PaddleOCR 
解决了人工录入效率低、数字化成本高的问题。尤其在人工智能领域，它扮演着连接图像与大型语言模型（LLM）的桥梁角色，能将视觉信息直接转化为文本输入，助力智能问答、文档分析等应用场景落地。\n\nPaddleOCR 适合开发者、算法研究人员以及有文档自动化需求的普通用户。其技术优势十分明显：不仅支持全球 100 多种语言的识别，还能在 Windows、Linux、macOS 等多个系统上运行，并灵活适配 CPU、GPU、NPU 等各类硬件。作为一个轻量级且社区活跃的开源项目，PaddleOCR 既能满足快速集成的需求，也能支撑前沿的视觉语言研究，是处理文字识别任务的理想选择。",74913,"2026-04-05T10:44:17",[38,77,59,81],{"id":100,"name":101,"github_repo":102,"description_zh":103,"stars":104,"difficulty_score":39,"last_commit_at":105,"category_tags":106,"status":40},2181,"OpenHands","OpenHands\u002FOpenHands","OpenHands 是一个专注于 AI 驱动开发的开源平台，旨在让智能体（Agent）像人类开发者一样理解、编写和调试代码。它解决了传统编程中重复性劳动多、环境配置复杂以及人机协作效率低等痛点，通过自动化流程显著提升开发速度。\n\n无论是希望提升编码效率的软件工程师、探索智能体技术的研究人员，还是需要快速原型验证的技术团队，都能从中受益。OpenHands 提供了灵活多样的使用方式：既可以通过命令行（CLI）或本地图形界面在个人电脑上轻松上手，体验类似 Devin 的流畅交互；也能利用其强大的 Python SDK 自定义智能体逻辑，甚至在云端大规模部署上千个智能体并行工作。\n\n其核心技术亮点在于模块化的软件智能体 SDK，这不仅构成了平台的引擎，还支持高度可组合的开发模式。此外，OpenHands 在 SWE-bench 基准测试中取得了 77.6% 的优异成绩，证明了其解决真实世界软件工程问题的能力。平台还具备完善的企业级功能，支持与 Slack、Jira 等工具集成，并提供细粒度的权限管理，适合从个人开发者到大型企业的各类用户场景。",70612,"2026-04-05T11:12:22",[38,60,59,80]]