[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-character-ai--prompt-poet":3,"tool-character-ai--prompt-poet":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":79,"owner_email":79,"owner_twitter":79,"owner_website":76,"owner_url":80,"languages":81,"stars":98,"forks":99,"last_commit_at":100,"license":101,"difficulty_score":23,"env_os":102,"env_gpu":103,"env_ram":104,"env_deps":105,"category_tags":112,"github_topics":113,"view_count":23,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":121,"updated_at":122,"faqs":123,"releases":124},3421,"character-ai\u002Fprompt-poet","prompt-poet","Streamlines and simplifies prompt design for both developers and non-technical users with a low code approach.","Prompt Poet 是一款旨在简化 AI 提示词（Prompt）设计的开源工具，它通过“低代码”理念让开发者与非技术人员都能轻松构建高质量的交互指令。在传统开发中，拼接复杂的提示词字符串往往繁琐且易错，Prompt Poet 巧妙地将 YAML 的结构化优势与 Jinja2 模板引擎的动态逻辑相结合，让用户从枯燥的字符串操作中解放出来，专注于提示词内容的优化。\n\n该工具特别适合需要频繁调整提示词结构的开发者、AI 应用研究人员以及希望快速原型验证的产品设计师。其核心技术亮点在于独特的两阶段处理机制：首先利用 Jinja2 执行数据渲染、逻辑判断和循环操作（如动态插入聊天历史或根据用户模态调整指令），随后将结果解析为标准的 YAML 结构。这种设计不仅支持灵活的条件控制和列表插值，还内置了智能的上下文截断功能——通过设置优先级，自动在超出长度限制时按顺序丢弃旧消息，有效解决了长对话中的上下文窗口管理难题。无论是构建简单的问答机器人还是复杂的多轮对话系统，Prompt Poet 
都能以清晰、可维护的方式提升工程效率。","[![Downloads](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcharacter-ai_prompt-poet_readme_4df8048eb1e0.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fprompt-poet)\n\n# Prompt Poet\n\nPrompt Poet streamlines and simplifies prompt design for both developers and non-technical users with its low code approach. Using a mix of YAML and Jinja2, Prompt Poet allows for flexible, dynamic prompt creation, enhancing the efficiency and quality of interactions with AI models. It saves time on engineering string manipulations, enabling everyone to focus more on crafting the optimal prompts for their users.\n\n### Installation\n\n```shell\npip install prompt-poet\n```\n\n### Basic Usage\n\n```python\nimport os\nimport getpass\n\nfrom prompt_poet import Prompt\nfrom langchain import ChatOpenAI\n\n# Uncomment if you need to set OPENAI_API_KEY.\n# os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n\nraw_template = \"\"\"\n- name: system instructions\n  role: system\n  content: |\n    Your name is {{ character_name }} and you are meant to be helpful and never harmful to humans.\n\n- name: user query\n  role: user\n  content: |\n   {{ username}}: {{ user_query }}\n\n- name: response\n  role: user\n  content: |\n    {{ character_name }}:\n\"\"\"\n\ntemplate_data = {\n  \"character_name\": \"Character Assistant\",\n  \"username\": \"Jeff\",\n  \"user_query\": \"Can you help me with my homework?\"\n}\n\nprompt = Prompt(\n    raw_template=raw_template,\n    template_data=template_data\n)\n\nmodel = ChatOpenAI(model=\"gpt-4o-mini\")\nresponse = model.invoke(prompt.messages)\n```\n\n### Prompt Templates\nPrompt Poet templates use a mix of YAML and Jinja2. Template processing occurs in two primary stages:\n\n- Rendering: Initially, Jinja2 processes the input data. 
During this phase, control flow logic is executed, data is validated and appropriately bound to variables, and functions within the template are appropriately evaluated.\n- Loading: Post-rendering, the output is a structured YAML file. This YAML structure consists of repeated blocks or parts, each encapsulated into a Python data structure. These parts are characterized by several attributes:\n  - Name: A clear, human-readable identifier for the part.\n  - Content: The actual string payload that forms part of the prompt.\n  - Role (Optional): Specifies the role of the participant, aiding in distinguishing between different users or system components.\n  - Truncation Priority (Optional): Determines the order of truncation when necessary, with parts having the same priority being truncated in the order in which they appear.\n\n#### Example: Basic Q&A Bot\n```yaml\n- name: system instructions\n  role: system\n  content: |\n    Your name is {{ character_name }} and you are meant to be helpful and never harmful to humans.\n\n- name: user query\n  role: user\n  content: |\n   {{ username}}: {{ user_query }}\n\n- name: reply_prompt\n  role: user\n  content: |\n    {{ character_name }}:\n```\n\n#### Interpolating Lists\nIf you have elements (e.g. 
messages) in a list you can parse them into your template like so.\n```yaml\n{% for message in current_chat_messages %}\n- name: chat_message\n  role: user\n  content: |\n    {{ message.author }}: {{ message.content }}\n{% endfor %}\n```\n\n#### Truncating Old Messages\nContext length is limited and can’t always fit the entire chat history– so we can set a truncation priority on the message parts and Prompt Poet will truncate these parts in the order in which they appear (oldest to newest).\n```yaml\n{% for message in current_chat_messages %}\n- name: chat_message\n  role: user\n  truncation_priority: 1\n  content: |\n    {{ message.author }}: {{ message.content }}\n{% endfor %}\n```\n\n#### Adapting to User Modality\nTo tailor instructions based on the user's current modality (audio or text).\n```yaml\n{% if modality == \"audio\" %}\n- name: special audio instruction\n  role: system\n  content: |\n    {{ username }} is currently using audio. Keep your answers succinct.\n{% endif %}\n```\n\n#### Targeting Specific Queries\nTo include context-specific examples like homework help when needed.\n```yaml\n{% if extract_user_query_topic(user_query) == \"homework_help\" %}\n{% for homework_example in fetch_few_shot_homework_examples(username, character_name) %}\n- name: homework_example_{{ loop.index }}\n  role: user\n  content: |\n    {{ homework_example }}\n{% endfor %}\n{% endif %}\n```\n\n#### Handling Whitespace\nPrompt Poet will strip whitespace by default to avoid unwanted newlines in your final prompt. 
If you want to include an explicit space use the special built-in space marker “\u003C|space|>” to ensure proper formatting.\n```yaml\n- name: system instructions\n  role: system\n  content: |\n    Your name is {{ character_name }} and you are meant to be helpful and never harmful to humans.\n\n- name: user query\n  role: user\n  content: |\n   \u003C|space|>{{ username}}: {{ user_query }}\n```\n\n#### Putting It All Together\nCompositionality is a core strength of Prompt Poet templates, enabling the creation of complex, dynamic prompts.\n```yaml\n- name: system instructions\n  role: system\n  content: |\n    Your name is {{ character_name }} and you are meant to be helpful and never harmful to humans.\n\n{% if modality == \"audio\" %}\n- name: special audio instruction\n  role: system\n  content: |\n    {{ username }} is currently using audio modality. Keep your answers succinct and to the point.\n{% endif %}\n\n{% if extract_user_query_topic(user_query) == \"homework_help\" %}\n{% for homework_example in fetch_few_shot_homework_examples(username, character_name) %}\n- name: homework_example_{{ loop.index }}\n  role: user\n  content: |\n    {{ homework_example }}\n{% endfor %}\n{% endif %}\n\n{% for message in current_chat_messages %}\n- name: chat_message\n  role: user\n  truncation_priority: 1\n  content: |\n    {{ message.author }}: {{ message.content }}\n{% endfor %}\n\n- name: user query\n  role: user\n  content: |\n   {{ username}}: {{ user_query }}\n\n- name: reply_prompt\n  role: user\n  content: |\n    {{ character_name }}:\n```\n\n#### Decomposing Into Sections\nTo maintain DRY principles in your templates, break them down into reusable sections that can be applied across different templates, such as when A\u002FB testing a new prompt.\n```yaml\n{% include 'sections\u002Fsystem_instruction.yml.j2' %}\n\n{% include 'sections\u002Faudio_instruction.yml.j2' %}\n\n{% if extract_user_query_topic(user_query) == \"homework_help\" %}\n{% include 
'sections\u002Fhomework_examples.yml.j2' %}\n{% endif %}\n\n{% include 'sections\u002Fchat_messages.yml.j2' %}\n\n{% include 'sections\u002Fuser_query.yml.j2' %}\n\n{% include 'sections\u002Freply_prompt.yml.j2' %}\n```\n\n#### Nested Sections for Token Statistics\nFor detailed monitoring of token usage within different content components, you can use nested sections. This is useful when you need granular statistics about which parts of your prompt are consuming tokens (e.g., character definition vs. safety rules vs. conversation style).\n\n```yaml\n- name: system_instructions\n  role: system\n  sections:\n    - name: character_intro\n      content: |\n        Your name is {{ character_name }} and you are a helpful assistant.\n    - name: safety_rules\n      content: |\n        Never be harmful to humans. Always be respectful and kind.\n    - name: conversation_style\n      content: |\n        Keep your responses concise and engaging.\n```\n\nYou can then access detailed statistics:\n\n```python\nprompt = Prompt(raw_template=raw_template, template_data=template_data)\nprompt.tokenize()\n\n# Get hierarchical statistics\nstats = prompt.section_stats\nprint(stats)\n>>> [\n    {\n        \"part_name\": \"system_instructions\",\n        \"part_tokens\": 150,\n        \"part_role\": \"system\",\n        \"has_sections\": True,\n        \"sections\": [\n            {\"section_name\": \"character_intro\", \"section_tokens\": 50},\n            {\"section_name\": \"safety_rules\", \"section_tokens\": 60},\n            {\"section_name\": \"conversation_style\", \"section_tokens\": 40}\n        ]\n    }\n]\n\n# Or get a flat mapping of token counts\ncounts = prompt.get_section_token_counts()\nprint(counts)\n>>> {\n    \"system_instructions\": {\n        \"character_intro\": 50,\n        \"safety_rules\": 60,\n        \"conversation_style\": 40\n    }\n}\n```\n\n**Important Notes:**\n- Parts can have **either** `content` OR `sections`, not both\n- Sections use YAML block 
scalars (`|`) to preserve newlines for proper separation when concatenated\n- Section contents are automatically concatenated to form the part's content\n- Token counts for individual sections may not sum exactly to the part's token count due to tokenization boundary effects\n- This feature is fully backward compatible – existing templates without sections work unchanged\n\n### Design Choices\n\n#### Prompt Poet Library\nThe Prompt Poet Library provides various features and settings, including prompt properties. Key features like tokenization and truncation help with efficient caching and low latency responses.\n```python\nprompt.tokenize()\nprompt.truncate(token_limit=TOKEN_LIMIT, truncation_step=TRUNCATION_STEP)\n\n# Inspect prompt as a raw string.\nprompt.string: str\n>>> \"...\"\n\n# Inspect the prompt as raw tokens.\nprompt.tokens: list[int]\n>>> [...]\n\n# Inspect the prompt as LLM API message dicts.\nprompt.messages: list[dict]\n>>> [...]\n\n# Inspect the prompt as first class parts.\nprompt.parts: list[PromptPart]\n>>> [...]\n```\n\n#### Templating Language\nJinja2 and YAML combine to offer an incredibly extensible and expressive templating language. Jinja2 facilitates direct data bindings, arbitrary function calls, and basic control flow within templates. YAML provides structure to our templates (with depth=1) allowing us to perform sophisticated truncation when the token limit is reached. This pairing of Jinja2 and YAML is not unique – most notably it is used by [Ansible](https:\u002F\u002Fgithub.com\u002Fansible\u002Fansible).\n\n#### Template-native Function Calling\nOne standout feature of Jinja2 is the ability to invoke arbitrary Python functions directly within templates at runtime. This feature is crucial for on-the-fly data retrieval, manipulation, and validation, streamlining how prompts are constructed. 
Here `extract_user_query_topic` can perform arbitrary processing of the user's query used in the template's control flow--perhaps by performing a round-trip to a topic classifier.\n```python\n{% if extract_user_query_topic(user_query) == \"homework_help\" %}\n{% for homework_example in fetch_few_shot_homework_examples(username, character_name) %}\n- name: homework_example_{{ loop.index }}\n  role: user\n  content: |\n    {{ homework_example }}\n{% endfor %}\n{% endif %}\n```\n\n#### Custom Encoding Function\nBy default Prompt Poet will use the TikToken “o200k_base” tokenizer although alternate encoding names may be provided in the top-level `tiktoken_encoding_name`. Alternatively, users can provide their own encode function with the top-level `encode_func: Callable[[str], list[int]]`.\n\n```python\nfrom tiktoken import get_encoding\nencode_func = get_encoding(\"o200k_base\")\n\nprompt = Prompt(\n    raw_template=raw_template,\n    template_data=template_data,\n    encode_func=encode_func\n)\nprompt.tokenize()\nprompt.tokens\n>>> [...]\n```\n\n#### Truncation\nIf your LLM provider supports GPU affinity and prefix cache, utilize Character.AI’s truncation algorithm to maximize the prefix-cache rate. The prefix cache rate is defined as the number of prompt tokens retrieved from cache over the total number of prompt tokens. Find the optimal values for truncation step and token limit for your use case. As the truncation step increases, the prefix cache rate also rises, but more tokens are truncated from the prompt.\n\n```python\nTOKEN_LIMIT = 128000\nTRUNCATION_STEP = 4000\n\n# Tokenize and truncate the prompt.\nprompt.tokenize()\nprompt.truncate(token_limit=TOKEN_LIMIT, truncation_step=TRUNCATION_STEP)\n\nresponse = model.invoke(prompt.messages)\n```\n\n#### Cache-aware Truncation Explained\nIn short, Cache Aware Truncation truncates up to a fixed truncation point every time it is invoked–only moving this truncation point on average every k turns. 
This allows your LLM provider to maximally exploit GPU prefix cache described in [Optimizing Inference](https:\u002F\u002Fresearch.character.ai\u002Foptimizing-inference\u002F). If instead we simply truncated until reaching the token limit (L) this truncation point would move every turn which would cause a significant reduction in prefix cache rate. The tradeoff in this approach is that we often truncate more than we strictly need to.\n\n![Cache-aware Truncation](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcharacter-ai_prompt-poet_readme_08bb849d912b.png)\n\n#### Template Registry\nA Template Registry is simply the concept of storing templates as files on disk. In using a Template Registry you can isolate template files from your python code and load these files directly from disk. In production systems, these template files can optionally be loaded from an in-memory cache on successive uses, saving on disk I\u002FO. In the future a Template Registry may become a first-class citizen of Prompt Poet.\n\nFilename: **chat_template.yml.j2**\n```yaml\n- name: system instructions\n  role: system\n  content: |\n    Your name is {{ character_name }} and you are meant to be helpful and never harmful to humans.\n\n- name: user query\n  role: user\n  content: |\n   {{ username}}: {{ user_query }}\n\n- name: response\n  role: user\n  content: |\n    {{ character_name }}:\n```\n\nRun this python code from the same directory you have saved the file `chat_template.yml.j2` to.\n\n```python\nfrom prompt_poet import Prompt\n\nprompt = Prompt(\n    template_path=\"chat_template.yml.j2\",\n    template_data=template_data\n)\nprint(prompt.string)\n>>> 'Your name is Character Assistant and you are meant to be helpful and never harmful to humans.Jeff: Can you help me with my homework?Character Assistant:'\n```\n\n### Related Work\n- [Priompt](https:\u002F\u002Fgithub.com\u002Fanysphere\u002Fpriompt): Priompt (priority + prompt) is a JSX-based prompting library. 
It uses priorities to decide what to include in the context window. This project achieves a similar goal in separating a templating layer from a logical construction layer written in and compatible with TypeScript-based usage.\n- [dspy](https:\u002F\u002Fgithub.com\u002Fstanfordnlp\u002Fdspy): Provides a great way of automagically optimizing prompts for different models though lacks deterministic control of the prompt important for things like caching and high-throughput, low latency production systems.\n- [Prompt Engine](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Fprompt-engine): Born from a common problem of production prompt engineering requiring substantial code to manipulate and update strings this Typescript package similarly adds structure to the prompt templating process– though comes across as being somewhat opinionated making assumptions based on the use cases. With last commits being from 2 years ago it does not seem as though this package is in active development.\n- [llm](https:\u002F\u002Fgithub.com\u002Fsimonw\u002Fllm): Allows basic prompts to be defined in YAML with the Jinja2 enabled features like dynamic control flow, function calling and data bindings.\n- Raw Python f-strings: There are several projects that take slightly different approaches to prompt templating by wrapping f-strings:\n  - [LangChain](https:\u002F\u002Fpython.langchain.com\u002Fv0.1\u002Fdocs\u002Fmodules\u002Fmodel_io\u002Fprompts\u002F): LangChain has a much larger scope than prompt templates though it does provide some basic templating abstractions. 
Good for simple templating use cases then starts to get unwieldy as prompts increase in complexity.\n  - [LlamaIndex](https:\u002F\u002Fdocs.llamaindex.ai\u002Fen\u002Fstable\u002Fmodule_guides\u002Fmodels\u002Fprompts\u002F): Like LangChain, LlamaIndex has a much larger scope than prompt templates though it also provides some basic templating abstractions.\n  - [Mirascope](https:\u002F\u002Fgithub.com\u002FMirascope\u002Fmirascope): Implements a novel approach to prompt templating by encapsulating everything in a single python class and using the class’s docstring as the f-string into which to bind data.\n","[![下载量](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcharacter-ai_prompt-poet_readme_4df8048eb1e0.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fprompt-poet)\n\n# Prompt Poet\n\nPrompt Poet 通过低代码方式，为开发者和非技术用户简化并优化提示词设计流程。它结合 YAML 和 Jinja2 模板语言，支持灵活、动态的提示词生成，从而提升与 AI 模型交互的效率和质量。使用 Prompt Poet 可以节省大量字符串处理的工程工作，让团队成员能够更专注于为用户提供最佳的提示词设计。\n\n### 安装\n\n```shell\npip install prompt-poet\n```\n\n### 基本用法\n\n```python\nimport os\nimport getpass\n\nfrom prompt_poet import Prompt\nfrom langchain import ChatOpenAI\n\n# 如果需要设置 OPENAI_API_KEY，请取消注释。\n# os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n\nraw_template = \"\"\"\n- name: 系统指令\n  role: system\n  content: |\n    你的名字是 {{ character_name }}，你应当乐于助人，绝不会对人类造成任何伤害。\n\n- name: 用户问题\n  role: user\n  content: |\n   {{ username}}: {{ user_query }}\n\n- name: 回答\n  role: assistant\n  content: |\n    {{ character_name }}:\n\"\"\"\n\ntemplate_data = {\n  \"character_name\": \"角色助手\",\n  \"username\": \"杰夫\",\n  \"user_query\": \"你能帮我做作业吗？\"\n}\n\nprompt = Prompt(\n    raw_template=raw_template,\n    template_data=template_data\n)\n\nmodel = ChatOpenAI(model=\"gpt-4o-mini\")\nresponse = model.invoke(prompt.messages)\n```\n\n### 提示模板\nPrompt Poet 模板结合使用 YAML 和 Jinja2。模板处理主要分为两个阶段：\n\n- 渲染：首先，Jinja2 会处理输入数据。在此阶段，控制流逻辑会被执行，数据会被验证并正确绑定到变量上，模板中的函数也会被适当地评估。\n- 加载：渲染完成后，输出是一个结构化的 YAML 文件。该 YAML 
结构由多个重复的块或部分组成，每个部分都被封装为一个 Python 数据结构。这些部分具有以下几个属性：\n  - 名称：清晰、易于理解的部分标识符。\n  - 内容：实际的字符串内容，构成提示的一部分。\n  - 角色（可选）：指定参与者的角色，有助于区分不同的用户或系统组件。\n  - 截断优先级（可选）：在需要截断时决定截断顺序，具有相同优先级的部分会按照它们出现的顺序进行截断。\n\n#### 示例：基础问答机器人\n```yaml\n- name: system instructions\n  role: system\n  content: |\n    你的名字是 {{ character_name }}，你应当乐于助人，绝不能对人类造成伤害。\n\n- name: user query\n  role: user\n  content: |\n   {{ username}}: {{ user_query }}\n\n- name: reply_prompt\n  role: user\n  content: |\n    {{ character_name }}:\n```\n\n#### 插值列表\n如果你有一个包含元素（例如消息）的列表，可以将其解析到模板中，如下所示：\n```yaml\n{% for message in current_chat_messages %}\n- name: chat_message\n  role: user\n  content: |\n    {{ message.author }}: {{ message.content }}\n{% endfor %}\n```\n\n#### 截断旧消息\n上下文长度有限，有时无法容纳整个聊天历史——因此我们可以为消息部分设置截断优先级，Prompt Poet 会按照它们出现的顺序（从最旧到最新）来截断这些部分。\n```yaml\n{% for message in current_chat_messages %}\n- name: chat_message\n  role: user\n  truncation_priority: 1\n  content: |\n    {{ message.author }}: {{ message.content }}\n{% endfor %}\n```\n\n#### 根据用户模式调整\n根据用户当前的模式（音频或文本）定制指令：\n```yaml\n{% if modality == \"audio\" %}\n- name: special audio instruction\n  role: system\n  content: |\n    {{ username }} 目前正在使用音频模式。请保持回答简洁明了。\n{% endif %}\n```\n\n#### 针对特定查询\n在需要时加入与上下文相关的示例，例如作业辅导：\n```yaml\n{% if extract_user_query_topic(user_query) == \"homework_help\" %}\n{% for homework_example in fetch_few_shot_homework_examples(username, character_name) %}\n- name: homework_example_{{ loop.index }}\n  role: user\n  content: |\n    {{ homework_example }}\n{% endfor %}\n{% endif %}\n```\n\n#### 处理空白字符\n默认情况下，Prompt Poet 会去除空白字符，以避免最终提示中出现不必要的换行。如果需要保留明确的空格，请使用内置的特殊空格标记“\u003C|space|>”，以确保格式正确。\n```yaml\n- name: system instructions\n  role: system\n  content: |\n    你的名字是 {{ character_name }}，你应当乐于助人，绝不能对人类造成伤害。\n\n- name: user query\n  role: user\n  content: |\n   \u003C|space|>{{ username}}: {{ user_query }}\n```\n\n#### 综合应用\n组合性是 Prompt Poet 模板的核心优势之一，它能够创建复杂且动态的提示。\n```yaml\n- name: system instructions\n  role: 
system\n  content: |\n    你的名字是 {{ character_name }}，你应当乐于助人，绝不能对人类造成伤害。\n\n{% if modality == \"audio\" %}\n- name: special audio instruction\n  role: system\n  content: |\n    {{ username }} 目前正在使用音频模式。请保持回答简洁明了。\n{% endif %}\n\n{% if extract_user_query_topic(user_query) == \"homework_help\" %}\n{% for homework_example in fetch_few_shot_homework_examples(username, character_name) %}\n- name: homework_example_{{ loop.index }}\n  role: user\n  content: |\n    {{ homework_example }}\n{% endfor %}\n{% endif %}\n\n{% for message in current_chat_messages %}\n- name: chat_message\n  role: user\n  truncation_priority: 1\n  content: |\n    {{ message.author }}: {{ message.content }}\n{% endfor %}\n\n- name: user query\n  role: user\n  content: |\n   {{ username}}: {{ user_query }}\n\n- name: reply_prompt\n  role: user\n  content: |\n    {{ character_name }}:\n```\n\n#### 分解为小节\n为了遵循 DRY 原则，你可以将模板分解为可重用的小节，以便在不同模板中复用，比如在进行新提示的 A\u002FB 测试时。\n```yaml\n{% include 'sections\u002Fsystem_instruction.yml.j2' %}\n\n{% include 'sections\u002Faudio_instruction.yml.j2' %}\n\n{% if extract_user_query_topic(user_query) == \"homework_help\" %}\n{% include 'sections\u002Fhomework_examples.yml.j2' %}\n{% endif %}\n\n{% include 'sections\u002Fchat_messages.yml.j2' %}\n\n{% include 'sections\u002Fuser_query.yml.j2' %}\n\n{% include 'sections\u002Freply_prompt.yml.j2' %}\n```\n\n#### 嵌套小节用于统计 token 使用情况\n为了详细监控不同内容组件中的 token 使用情况，可以使用嵌套小节。这在需要细粒度统计哪些部分的提示占用了 token 时非常有用（例如，角色定义 vs. 安全规则 vs. 
对话风格）。\n\n```yaml\n- name: system_instructions\n  role: system\n  sections:\n    - name: character_intro\n      content: |\n        你的名字是 {{ character_name }}，你是一位乐于助人的助手。\n    - name: safety_rules\n      content: |\n        绝不伤害人类。始终保持尊重和友善。\n    - name: conversation_style\n      content: |\n        回答应简洁而富有吸引力。\n```\n\n随后可以访问详细的统计信息：\n\n```python\nprompt = Prompt(raw_template=raw_template, template_data=template_data)\nprompt.tokenize()\n\n# 获取分层统计\nstats = prompt.section_stats\nprint(stats)\n>>> [\n    {\n        \"part_name\": \"system_instructions\",\n        \"part_tokens\": 150,\n        \"part_role\": \"system\",\n        \"has_sections\": True,\n        \"sections\": [\n            {\"section_name\": \"character_intro\", \"section_tokens\": 50},\n            {\"section_name\": \"safety_rules\", \"section_tokens\": 60},\n            {\"section_name\": \"conversation_style\", \"section_tokens\": 40}\n        ]\n    }\n]\n\n# 或者获取标记计数的扁平映射\ncounts = prompt.get_section_token_counts()\nprint(counts)\n>>> {\n    \"system_instructions\": {\n        \"character_intro\": 50,\n        \"safety_rules\": 60,\n        \"conversation_style\": 40\n    }\n}\n```\n\n**重要提示：**\n- 部分可以拥有 **内容** 或 **章节**，但不能同时拥有两者。\n- 章节使用 YAML 块标量（`|`）来保留换行符，以便在拼接时正确分隔。\n- 章节内容会自动拼接成该部分的内容。\n- 由于标记化边界效应，各个章节的标记计数之和可能不完全等于该部分的标记计数。\n- 此功能完全向后兼容——现有的无章节模板无需更改即可正常工作。\n\n### 设计选择\n\n#### Prompt Poet 库\nPrompt Poet 库提供了多种功能和设置，包括提示属性。关键功能如标记化和截断有助于高效缓存和低延迟响应。\n```python\nprompt.tokenize()\nprompt.truncate(token_limit=TOKEN_LIMIT, truncation_step=TRUNCATION_STEP)\n\n# 检查提示作为原始字符串。\nprompt.string: str\n>>> \"...\"\n\n# 检查提示作为原始标记。\nprompt.tokens: list[int]\n>>> [...]\n\n# 检查提示作为 LLM API 消息字典。\nprompt.messages: list[dict]\n>>> [...]\n\n# 检查提示作为一等公民部分。\nprompt.parts: list[PromptPart]\n>>> [...]\n```\n\n#### 模板语言\nJinja2 和 YAML 的结合提供了一种极具扩展性和表现力的模板语言。Jinja2 支持直接的数据绑定、任意函数调用以及模板内的基本控制流。YAML 则为我们的模板提供了结构（深度为 1），使我们在达到标记限制时能够进行复杂的截断操作。这种 Jinja2 和 YAML 的组合并不独特——最著名的例子就是 
[Ansible](https:\u002F\u002Fgithub.com\u002Fansible\u002Fansible) 所使用的组合。\n\n#### 模板原生函数调用\nJinja2 的一个突出特点是能够在运行时直接在模板中调用任意 Python 函数。这一特性对于实时数据检索、处理和验证至关重要，从而简化了提示的构建过程。例如，`extract_user_query_topic` 可以对用户查询进行任意处理，并将其用于模板的控制流中——比如通过与主题分类器进行一次往返调用来实现。\n```python\n{% if extract_user_query_topic(user_query) == \"homework_help\" %}\n{% for homework_example in fetch_few_shot_homework_examples(username, character_name) %}\n- name: homework_example_{{ loop.index }}\n  role: user\n  content: |\n    {{ homework_example }}\n{% endfor %}\n{% endif %}\n```\n\n#### 自定义编码函数\n默认情况下，Prompt Poet 会使用 TikToken 的 “o200k_base” 编码器，不过也可以在顶层指定其他编码名称 `tiktoken_encoding_name`。此外，用户还可以通过顶层参数 `encode_func: Callable[[str], list[int]]` 提供自己的编码函数。\n\n```python\nfrom tiktoken import get_encoding\nencode_func = get_encoding(\"o200k_base\")\n\nprompt = Prompt(\n    raw_template=raw_template,\n    template_data=template_data,\n    encode_func=encode_func\n)\nprompt.tokenize()\nprompt.tokens\n>>> [...]\n```\n\n#### 截断\n如果您的 LLM 提供商支持 GPU 关联性和前缀缓存，请使用 Character.AI 的截断算法来最大化前缀缓存率。前缀缓存率定义为从缓存中检索到的提示标记数占提示总标记数的比例。请根据您的具体用例找到最佳的截断步长和标记限制值。随着截断步长的增加，前缀缓存率也会提高，但同时会有更多的标记被截断。\n\n```python\nTOKEN_LIMIT = 128000\nTRUNCATION_STEP = 4000\n\n# 对提示进行标记化并截断。\nprompt.tokenize()\nprompt.truncate(token_limit=TOKEN_LIMIT, truncation_step=TRUNCATION_STEP)\n\nresponse = model.invoke(prompt.messages)\n```\n\n#### 缓存感知截断详解\n简而言之，缓存感知截断会在每次调用时都截断到一个固定的截断点——只有平均每 k 轮才会移动这个截断点。这样可以让您的 LLM 提供商最大限度地利用 [优化推理](https:\u002F\u002Fresearch.character.ai\u002Foptimizing-inference\u002F) 中描述的 GPU 前缀缓存。相反，如果我们只是简单地截断到标记限制（L），那么截断点每轮都会移动，这将显著降低前缀缓存率。这种方法的权衡在于，我们往往会截断比实际需要更多的内容。\n\n![缓存感知截断](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcharacter-ai_prompt-poet_readme_08bb849d912b.png)\n\n#### 模板注册表\n模板注册表的概念就是将模板文件存储在磁盘上。使用模板注册表可以使模板文件与 Python 代码分离，并可以直接从磁盘加载这些文件。在生产系统中，这些模板文件可以在后续使用时选择从内存缓存中加载，从而节省磁盘 I\u002FO。未来，模板注册表可能会成为 Prompt Poet 的一等公民。\n\n文件名：**chat_template.yml.j2**\n```yaml\n- name: 系统指令\n  role: 系统\n  content: |\n    
你的名字是 {{ character_name }}，你应当乐于助人，绝不会伤害人类。\n\n- name: 用户提问\n  role: 用户\n  content: |\n    {{ username}}: {{ user_query }}\n\n- name: 回答\n  role: 用户\n  content: |\n    {{ character_name }}:\n```\n\n请在保存了 `chat_template.yml.j2` 文件的同一目录下运行以下 Python 代码。\n\n```python\nfrom prompt_poet import Prompt\n\nprompt = Prompt(\n    template_path=\"chat_template.yml.j2\",\n    template_data=template_data\n)\nprint(prompt.string)\n>>> '你的名字是角色助手，你应当乐于助人，绝不会伤害人类。杰夫：你能帮我做作业吗？角色助手：'\n```\n\n### 相关工作\n- [Priompt](https:\u002F\u002Fgithub.com\u002Fanysphere\u002Fpriompt)：Priompt（优先级 + 提示）是一个基于 JSX 的提示库。它通过优先级来决定哪些内容应被纳入上下文窗口。该项目实现了类似的目标，即在 TypeScript 为基础的用法中，将模板层与逻辑构建层分离，并保持兼容性。\n- [dspy](https:\u002F\u002Fgithub.com\u002Fstanfordnlp\u002Fdspy)：提供了一种自动优化针对不同模型的提示的优秀方法，但缺乏对提示的确定性控制，而这对于缓存以及高吞吐、低延迟的生产系统来说至关重要。\n- [Prompt Engine](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Fprompt-engine)：该 TypeScript 包源于一个常见的问题——生产环境中的提示工程需要大量代码来操作和更新字符串。它同样为提示模板化过程增加了结构，不过显得较为主观，基于使用场景做出了一些假设。由于最近的提交是在两年前，目前看来该包并未处于积极开发状态。\n- [llm](https:\u002F\u002Fgithub.com\u002Fsimonw\u002Fllm)：允许以 YAML 格式定义基础提示，并启用 Jinja2 的功能，如动态控制流、函数调用和数据绑定。\n- 原生 Python f-string：有多个项目通过封装 f-string 来采用略有不同的提示模板化方法：\n  - [LangChain](https:\u002F\u002Fpython.langchain.com\u002Fv0.1\u002Fdocs\u002Fmodules\u002Fmodel_io\u002Fprompts\u002F)：LangChain 的覆盖范围远超提示模板，但它确实提供了一些基础的模板化抽象。适用于简单的模板化场景，但随着提示复杂度的增加，会变得越来越笨重。\n  - [LlamaIndex](https:\u002F\u002Fdocs.llamaindex.ai\u002Fen\u002Fstable\u002Fmodule_guides\u002Fmodels\u002Fprompts\u002F)：与 LangChain 类似，LlamaIndex 的覆盖范围也远超提示模板，同时提供了一些基础的模板化抽象。\n  - [Mirascope](https:\u002F\u002Fgithub.com\u002FMirascope\u002Fmirascope)：实现了一种新颖的提示模板化方法，即将所有内容封装在一个 Python 类中，并使用该类的文档字符串作为 f-string，用于绑定数据。","# Prompt Poet 快速上手指南\n\nPrompt Poet 是一款专为开发者和非技术用户设计的提示词（Prompt）工程工具。它结合了 YAML 的结构化优势与 Jinja2 的模板灵活性，支持动态数据绑定、逻辑控制流以及智能截断策略，帮助你高效构建高质量的 AI 交互提示词。\n\n## 环境准备\n\n在开始之前，请确保你的开发环境满足以下要求：\n\n- **操作系统**：Windows, macOS, 或 Linux\n- **Python 版本**：Python 3.8 或更高版本\n- **前置依赖**：\n  - `pip` (Python 包管理工具)\n  - 
A virtual environment (such as `venv` or `conda`) is recommended to isolate project dependencies\n- **API key**: If you plan to call OpenAI models, have your `OPENAI_API_KEY` ready\n\n> **Mirror note**: If you hit network problems during installation, consider installing from a domestic Chinese mirror (such as the Tsinghua or Aliyun mirror).\n\n## Installation\n\nInstall Prompt Poet directly with pip:\n\n```shell\npip install prompt-poet\n```\n\n**Optionally, install from the Tsinghua mirror for faster downloads:**\n\n```shell\npip install prompt-poet -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\nTo also install LangChain's OpenAI integration (used for the model call in the example):\n\n```shell\npip install langchain-openai -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n## Basic Usage\n\nThe following is a minimal example that builds a prompt with Prompt Poet and sends it to a large language model. It shows how to define a template, inject data, and generate the final message list.\n\n### 1. Write the Python script\n\nCreate a file named `main.py` and add the following code:\n\n```python\nimport os\nimport getpass\n\nfrom prompt_poet import Prompt\nfrom langchain_openai import ChatOpenAI\n\n# If the environment variable is not set, read the API key at runtime\n# os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n\n# Define the raw template (YAML structure mixed with Jinja2 syntax)\nraw_template = \"\"\"\n- name: system instructions\n  role: system\n  content: |\n    Your name is {{ character_name }} and you are meant to be helpful and never harmful to humans.\n\n- name: user query\n  role: user\n  content: |\n    {{ username }}: {{ user_query }}\n\n- name: response\n  role: user\n  content: |\n    {{ character_name }}:\n\"\"\"\n\n# Prepare the template data\ntemplate_data = {\n  \"character_name\": \"Character Assistant\",\n  \"username\": \"Jeff\",\n  \"user_query\": \"Can you help me with my homework?\"\n}\n\n# Initialize the Prompt object\nprompt = Prompt(\n    raw_template=raw_template,\n    template_data=template_data\n)\n\n# Initialize the model (OpenAI gpt-4o-mini as an example)\nmodel = ChatOpenAI(model=\"gpt-4o-mini\")\n\n# Call the model and get the response\n# prompt.messages is already in the message-dict format expected by LLM chat APIs\nresponse = model.invoke(prompt.messages)\n\nprint(response.content)\n```\n\n### 2. 
Run the script\n\nExecute the script in a terminal:\n\n```shell\npython main.py\n```\n\nThe program renders the template, fills the variables from `template_data` into the YAML structure, builds a standard message list, sends it to the model, and prints the returned result.\n\n### Core Features at a Glance\n\n- **Dynamic rendering**: Use Jinja2 syntax (such as `{% if %}` and `{% for %}`) for conditionals and loops inside the template.\n- **Structured output**: After rendering, the template is parsed into standard message blocks containing `name`, `role`, and `content`.\n- **Smart truncation**: Control the truncation order for long contexts via the `truncation_priority` attribute to optimize token usage.","A startup team is building a personalized AI tutoring assistant with multimodal (text\u002Fvoice) interaction and needs to dynamically assemble complex prompts combining a character persona, conversation history, and context strategies.\n\n### Without prompt-poet\n- Developers hand-write large amounts of string-concatenation code to assemble message lists, burying core business logic under tedious text handling.\n- Truncating long conversation histories gracefully is difficult, typically requiring hard-coded indices or complex slicing logic that is error-prone and hard to maintain.\n- Conditionals for different user modalities (e.g., voice mode needs shorter answers) are scattered across the codebase, the template structure is messy, and non-technical staff cannot take part in tuning it.\n- Every change to the prompt structure (such as adding a system-instruction field) requires editing Python code and redeploying, which slows iteration.\n\n### With prompt-poet\n- YAML combined with Jinja2 templates defines the prompt structure clearly, separating data rendering from business logic and keeping the code concise and readable.\n- The `truncation_priority` attribute manages context length automatically, truncating older messages by priority with no manual token accounting.\n- `{% if %}` syntax in the template injects instructions dynamically based on the user's modality, adapting flexibly to different scenarios with the logic visible in one place.\n- Product managers can edit the YAML template files directly to adjust wording and flow, without waiting for an engineering release, greatly speeding up prompt iteration.\n\nWith its low-code templating approach, prompt-poet frees the team from tedious string engineering to focus on building a higher-quality AI interaction experience.","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcharacter-ai_prompt-poet_d7e671fc.png","character-ai","character.ai","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fcharacter-ai_97a04ce8.png","",null,"https:\u002F\u002Fgithub.com\u002Fcharacter-ai",[82,86,90,94],{"name":83,"color":84,"percentage":85},"Python","#3572A5",93.5,{"name":87,"color":88,"percentage":89},"Jinja","#a52a22",5.5,{"name":91,"color":92,"percentage":93},"Shell","#89e051",0.7,{"name":95,"color":96,"percentage":97},"Makefile","#427819",0.3,1145,93,"2026-04-03T20:22:56","MIT","Not specified (cross-platform Python library)","Not required (pure text-processing library; no local model inference)","Not specified",{"notes":106,"python":104,"dependencies":107},"This is a lightweight Python library for building and managing prompts. It contains no large AI model itself, so it places no special demands on hardware. It relies on Jinja2 for template rendering and YAML parsing. Actual runtime resource consumption depends on the backend LLM service it connects to (such as the OpenAI API). By default it uses TikToken's 'o200k_base' 
encoder; custom encoding functions are also supported.",[67,108,109,110,111],"langchain","jinja2","pyyaml","tiktoken",[26,13],[114,115,116,117,118,119,120],"prompt-engineering","llm","llm-inference","prompt","prompt-design","prompt-tuning","prompting","2026-03-27T02:49:30.150509","2026-04-06T05:44:10.986641",[],[]]