[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-humanlayer--12-factor-agents":3,"tool-humanlayer--12-factor-agents":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":79,"owner_email":79,"owner_twitter":79,"owner_website":80,"owner_url":81,"languages":82,"stars":107,"forks":108,"last_commit_at":109,"license":110,"difficulty_score":98,"env_os":111,"env_gpu":111,"env_ram":111,"env_deps":112,"category_tags":115,"github_topics":116,"view_count":127,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":128,"updated_at":129,"faqs":130,"releases":159},573,"humanlayer\u002F12-factor-agents","12-factor-agents","What are the principles we can use to build LLM-powered software that is actually good enough to put in the hands of production customers?","12-factor-agents 是一套基于经典 12-Factor App 理念打造的开源指南，旨在帮助开发者构建真正可靠、可投入生产环境的 LLM 应用。当前许多所谓的“智能体”项目实际上只是确定性代码中穿插 LLM 调用，缺乏足够的稳定性与代理能力，导致开发者常需自行搭建技术栈。12-factor-agents 正是为了解决这一痛点而生，它提供了一套标准化的原则，明确如何设计出既具备智能又足够稳健的软件系统。\n\n这套资源特别适合致力于 AI 产品落地的开发者、初创团队创始人以及关注工程化实践的研究人员。它的独特之处在于超越了单纯的代码库，深入探讨了上下文工程、小模型专注任务等核心设计原则。通过遵循这些指导方针，团队能有效避开常见陷阱，实现从实验原型到生产级应用的平滑过渡，最终构建出用户真正信赖的智能体产品。","# 12-Factor Agents - Principles for building reliable LLM applications\n\n\u003Cdiv align=\"center\">\n\u003Ca href=\"https:\u002F\u002Fwww.apache.org\u002Flicenses\u002FLICENSE-2.0\">\n        \u003Cimg 
src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FCode-Apache%202.0-blue.svg\" alt=\"Code License: Apache 2.0\">\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fcreativecommons.org\u002Flicenses\u002Fby-sa\u002F4.0\u002F\">\n        \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FContent-CC%20BY--SA%204.0-lightgrey.svg\" alt=\"Content License: CC BY-SA 4.0\">\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fhumanlayer.dev\u002Fdiscord\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fchat-discord-5865F2\" alt=\"Discord Server\">\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=8kMaTybvDUw\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Faidotengineer-conf_talk_(17m)-white\" alt=\"YouTube\nDeep Dive\">\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=yxJDyQ8v6P0\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fyoutube-deep_dive-crimson\" alt=\"YouTube\nDeep Dive\">\u003C\u002Fa>\n    \n\u003C\u002Fdiv>\n\n\u003Cp>\u003C\u002Fp>\n\n*In the spirit of [12 Factor Apps](https:\u002F\u002F12factor.net\u002F)*.  *The source for this project is public at https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents, and I welcome your feedback and contributions. Let's figure this out together!*\n\n> [!TIP]\n> Missed the AI Engineer World's Fair? [Catch the talk here](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=8kMaTybvDUw)\n>\n> Looking for Context Engineering? 
[Jump straight to factor 3](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-03-own-your-context-window.md)\n>\n> Want to contribute to `npx\u002Fuvx create-12-factor-agent` - check out [the discussion thread](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fdiscussions\u002F61)\n\n\n\u003Cimg referrerpolicy=\"no-referrer-when-downgrade\" src=\"https:\u002F\u002Fstatic.scarf.sh\u002Fa.png?x-pxid=2acad99a-c2d9-48df-86f5-9ca8061b7bf9\" \u002F>\n\n\u003Ca href=\"#visual-nav\">\u003Cimg width=\"907\" alt=\"Screenshot 2025-04-03 at 2 49 07 PM\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_b5c6ed971829.png\" \u002F>\u003C\u002Fa>\n\n\nHi, I'm Dex. I've been [hacking](https:\u002F\u002Fyoutu.be\u002F8bIHcttkOTE) on [AI agents](https:\u002F\u002Ftheouterloop.substack.com) for [a while](https:\u002F\u002Fhumanlayer.dev). \n\n\n**I've tried every agent framework out there**, from the plug-and-play crew\u002Flangchains to the \"minimalist\" smolagents of the world to the \"production grade\" langraph, griptape, etc. \n\n**I've talked to a lot of really strong founders**, in and out of YC, who are all building really impressive things with AI. Most of them are rolling the stack themselves. I don't see a lot of frameworks in production customer-facing agents.\n\n**I've been surprised to find** that most of the products out there billing themselves as \"AI Agents\" are not all that agentic. A lot of them are mostly deterministic code, with LLM steps sprinkled in at just the right points to make the experience truly magical.\n\nAgents, at least the good ones, don't follow the [\"here's your prompt, here's a bag of tools, loop until you hit the goal\"](https:\u002F\u002Fwww.anthropic.com\u002Fengineering\u002Fbuilding-effective-agents#agents) pattern. Rather, they are comprised of mostly just software. 
\n\nSo, I set out to answer:\n\n> ### **What are the principles we can use to build LLM-powered software that is actually good enough to put in the hands of production customers?**\n\nWelcome to 12-factor agents. As every Chicago mayor since Daley has consistently plastered all over the city's major airports, we're glad you're here.\n\n*Special thanks to [@iantbutler01](https:\u002F\u002Fgithub.com\u002Fiantbutler01), [@tnm](https:\u002F\u002Fgithub.com\u002Ftnm), [@hellovai](https:\u002F\u002Fwww.github.com\u002Fhellovai), [@stantonk](https:\u002F\u002Fwww.github.com\u002Fstantonk), [@balanceiskey](https:\u002F\u002Fwww.github.com\u002Fbalanceiskey), [@AdjectiveAllison](https:\u002F\u002Fwww.github.com\u002FAdjectiveAllison), [@pfbyjy](https:\u002F\u002Fwww.github.com\u002Fpfbyjy), [@a-churchill](https:\u002F\u002Fwww.github.com\u002Fa-churchill), and the SF MLOps community for early feedback on this guide.*\n\n## The Short Version: The 12 Factors\n\nEven if LLMs [continue to get exponentially more powerful](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-10-small-focused-agents.md#what-if-llms-get-smarter), there will be core engineering techniques that make LLM-powered software more reliable, more scalable, and easier to maintain.\n\n- [How We Got Here: A Brief History of Software](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Fbrief-history-of-software.md)\n- [Factor 1: Natural Language to Tool Calls](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-01-natural-language-to-tool-calls.md)\n- [Factor 2: Own your prompts](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-02-own-your-prompts.md)\n- [Factor 3: Own your context 
window](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-03-own-your-context-window.md)\n- [Factor 4: Tools are just structured outputs](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-04-tools-are-structured-outputs.md)\n- [Factor 5: Unify execution state and business state](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-05-unify-execution-state.md)\n- [Factor 6: Launch\u002FPause\u002FResume with simple APIs](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-06-launch-pause-resume.md)\n- [Factor 7: Contact humans with tool calls](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-07-contact-humans-with-tools.md)\n- [Factor 8: Own your control flow](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-08-own-your-control-flow.md)\n- [Factor 9: Compact Errors into Context Window](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-09-compact-errors.md)\n- [Factor 10: Small, Focused Agents](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-10-small-focused-agents.md)\n- [Factor 11: Trigger from anywhere, meet users where they are](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-11-trigger-from-anywhere.md)\n- [Factor 12: Make your agent a stateless reducer](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-12-stateless-reducer.md)\n\n### Visual Nav\n\n|    |    |    |\n|----|----|-----|\n|[![factor 
1](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_5c8089e17aaf.png)](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-01-natural-language-to-tool-calls.md) | [![factor 2](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_a417237e7ed6.png)](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-02-own-your-prompts.md) | [![factor 3](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_fa988d1c9bb5.png)](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-03-own-your-context-window.md) |\n|[![factor 4](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_b0b26cb77f70.png)](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-04-tools-are-structured-outputs.md) | [![factor 5](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_128eef946c83.png)](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-05-unify-execution-state.md) | [![factor 6](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_07ba11f3874b.png)](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-06-launch-pause-resume.md) |\n| [![factor 7](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_4ad4d41146f6.png)](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-07-contact-humans-with-tools.md) | [![factor 
8](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_a8ddf2653568.png)](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-08-own-your-control-flow.md) | [![factor 9](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_e123b4f375fe.png)](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-09-compact-errors.md) |\n| [![factor 10](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_8018d8b64340.png)](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-10-small-focused-agents.md) | [![factor 11](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_1f70161ab495.png)](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-11-trigger-from-anywhere.md) | [![factor 12](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_0669f0945ca7.png)](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-12-stateless-reducer.md) |\n\n## How we got here\n\nFor a deeper dive on my agent journey and what led us here, check out [A Brief History of Software](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Fbrief-history-of-software.md) - a quick summary here:\n\n### The promise of agents\n\nWe're gonna talk a lot about Directed Graphs (DGs) and their Acyclic friends, DAGs. I'll start by pointing out that...well...software is a directed graph. 
There's a reason we used to represent programs as flow charts.\n\n![010-software-dag](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_fb6e0bd55b06.png)\n\n### From code to DAGs\n\nAround 20 years ago, we started to see DAG orchestrators become popular. We're talking classics like [Airflow](https:\u002F\u002Fairflow.apache.org\u002F), [Prefect](https:\u002F\u002Fwww.prefect.io\u002F), some predecessors, and some newer ones like ([dagster](https:\u002F\u002Fdagster.io\u002F), [inngest](https:\u002F\u002Fwww.inngest.com\u002F), [windmill](https:\u002F\u002Fwww.windmill.dev\u002F)). These followed the same graph pattern, with the added benefit of observability, modularity, retries, administration, etc.\n\n![015-dag-orchestrators](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_28e76e6e6bec.png)\n\n### The promise of agents\n\nI'm not the first [person to say this](https:\u002F\u002Fyoutu.be\u002FDc99-zTMyMg?si=bcT0hIwWij2mR-40&t=73), but my biggest takeaway when I started learning about agents was that you get to throw the DAG away. Instead of software engineers coding each step and edge case, you can give the agent a goal and a set of transitions:\n\n![025-agent-dag](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_39f2777129f8.png)\n\nAnd let the LLM make decisions in real time to figure out the path.\n\n![026-agent-dag-lines](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_2f34edc54c2c.png)\n\nThe promise here is that you write less software: you just give the LLM the \"edges\" of the graph and let it figure out the nodes. 
You can recover from errors, you can write less code, and you may find that LLMs find novel solutions to problems.\n\n\n### Agents as loops\n\nAs we'll see later, it turns out this doesn't quite work.\n\nLet's dive one step deeper - with agents you've got this loop consisting of 4 steps:\n\n1. LLM determines the next step in the workflow, outputting structured JSON (\"tool calling\")\n2. Deterministic code executes the tool call\n3. The result is appended to the context window \n4. Repeat until the next step is determined to be \"done\"\n\n```python\ninitial_event = {\"message\": \"...\"}\ncontext = [initial_event]\nwhile True:\n  next_step = await llm.determine_next_step(context)\n  context.append(next_step)\n\n  if next_step.intent == \"done\":\n    return next_step.final_answer\n\n  result = await execute_step(next_step)\n  context.append(result)\n```\n\nOur initial context is just the starting event (maybe a user message, maybe a cron fired, maybe a webhook, etc), and we ask the LLM to choose the next step (tool) or to determine that we're done.\n\nHere's a multi-step example:\n\n[![027-agent-loop-animation](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_52923b09dea4.gif)](https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F3beb0966-fdb1-4c12-a47f-ed4e8240f8fd)\n\n\u003Cdetails>\n\u003Csummary>\u003Ca href=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_52923b09dea4.gif\">GIF Version\u003C\u002Fa>\u003C\u002Fsummary>\n\n![027-agent-loop-animation](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_52923b09dea4.gif)\n\n\u003C\u002Fdetails>\n\n## Why 12-factor agents?\n\nAt the end of the day, this approach just doesn't work as well as we want it to.\n\nIn building HumanLayer, I've talked to at least 100 SaaS builders (mostly technical founders) looking to make their existing product more agentic. 
The journey usually goes something like:\n\n1. Decide you want to build an agent\n2. Product design, UX mapping, what problems to solve\n3. Want to move fast, so grab $FRAMEWORK and *get to building*\n4. Get to 70-80% quality bar \n5. Realize that 80% isn't good enough for most customer-facing features\n6. Realize that getting past 80% requires reverse-engineering the framework, prompts, flow, etc.\n7. Start over from scratch\n\n\u003Cdetails>\n\u003Csummary>Random Disclaimers\u003C\u002Fsummary>\n\n**DISCLAIMER**: I'm not sure the exact right place to say this, but here seems as good as any: **this is BY NO MEANS meant to be a dig on either the many frameworks out there, or the pretty dang smart people who work on them**. They enable incredible things and have accelerated the AI ecosystem. \n\nI hope that one outcome of this post is that agent framework builders can learn from the journeys of myself and others, and make frameworks even better. \n\nEspecially for builders who want to move fast but need deep control.\n\n**DISCLAIMER 2**: I'm not going to talk about MCP. I'm sure you can see where it fits in.\n\n**DISCLAIMER 3**: I'm using mostly TypeScript, for [reasons](https:\u002F\u002Fwww.linkedin.com\u002Fposts\u002Fdexterihorthy_llms-typescript-aiagents-activity-7290858296679313408-Lh9e?utm_source=share&utm_medium=member_desktop&rcm=ACoAAA4oHTkByAiD-wZjnGsMBUL_JT6nyyhOh30) but all this stuff works in Python or any other language you prefer. \n\n\nAnyways back to the thing...\n\n\u003C\u002Fdetails>\n\n### Design Patterns for great LLM applications\n\nAfter digging through hundreds of AI libraries and working with dozens of founders, my instinct is this:\n\n1. There are some core things that make agents great\n2. Going all in on a framework and building what is essentially a greenfield rewrite may be counter-productive\n3. There are some core principles that make agents great, and you will get most\u002Fall of them if you pull in a framework\n4. 
BUT, the fastest way I've seen for builders to get high-quality AI software in the hands of customers is to take small, modular concepts from agent building, and incorporate them into their existing product\n5. These modular concepts from agents can be defined and applied by most skilled software engineers, even if they don't have an AI background\n\n> #### The fastest way I've seen for builders to get good AI software in the hands of customers is to take small, modular concepts from agent building, and incorporate them into their existing product\n\n\n## The 12 Factors (again)\n\n\n- [How We Got Here: A Brief History of Software](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Fbrief-history-of-software.md)\n- [Factor 1: Natural Language to Tool Calls](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-01-natural-language-to-tool-calls.md)\n- [Factor 2: Own your prompts](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-02-own-your-prompts.md)\n- [Factor 3: Own your context window](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-03-own-your-context-window.md)\n- [Factor 4: Tools are just structured outputs](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-04-tools-are-structured-outputs.md)\n- [Factor 5: Unify execution state and business state](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-05-unify-execution-state.md)\n- [Factor 6: Launch\u002FPause\u002FResume with simple APIs](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-06-launch-pause-resume.md)\n- [Factor 7: Contact humans with tool 
calls](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-07-contact-humans-with-tools.md)\n- [Factor 8: Own your control flow](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-08-own-your-control-flow.md)\n- [Factor 9: Compact Errors into Context Window](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-09-compact-errors.md)\n- [Factor 10: Small, Focused Agents](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-10-small-focused-agents.md)\n- [Factor 11: Trigger from anywhere, meet users where they are](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-11-trigger-from-anywhere.md)\n- [Factor 12: Make your agent a stateless reducer](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-12-stateless-reducer.md)\n\n## Honorable Mentions \u002F other advice\n\n- [Factor 13: Pre-fetch all the context you might need](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Fappendix-13-pre-fetch.md)\n\n## Related Resources\n\n- Contribute to this guide [here](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents)\n- [I talked about a lot of this on an episode of the Tool Use podcast](https:\u002F\u002Fyoutu.be\u002F8bIHcttkOTE) in March 2025\n- I write about some of this stuff at [The Outer Loop](https:\u002F\u002Ftheouterloop.substack.com)\n- I do [webinars about Maximizing LLM Performance](https:\u002F\u002Fgithub.com\u002Fhellovai\u002Fai-that-works\u002Ftree\u002Fmain) with [@hellovai](https:\u002F\u002Fgithub.com\u002Fhellovai)\n- We build OSS agents with this methodology under 
[got-agents\u002Fagents](https:\u002F\u002Fgithub.com\u002Fgot-agents\u002Fagents)\n- We ignored all our own advice and built a [framework for running distributed agents in kubernetes](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002Fkubechain)\n- Other links from this guide:\n  - [12 Factor Apps](https:\u002F\u002F12factor.net)\n  - [Building Effective Agents (Anthropic)](https:\u002F\u002Fwww.anthropic.com\u002Fengineering\u002Fbuilding-effective-agents#agents)\n  - [Prompts are Functions](https:\u002F\u002Fthedataexchange.media\u002Fbaml-revolution-in-ai-engineering\u002F )\n  - [Library patterns: Why frameworks are evil](https:\u002F\u002Ftomasp.net\u002Fblog\u002F2015\u002Flibrary-frameworks\u002F)\n  - [The Wrong Abstraction](https:\u002F\u002Fsandimetz.com\u002Fblog\u002F2016\u002F1\u002F20\u002Fthe-wrong-abstraction)\n  - [Mailcrew Agent](https:\u002F\u002Fgithub.com\u002Fdexhorthy\u002Fmailcrew)\n  - [Mailcrew Demo Video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=f_cKnoPC_Oo)\n  - [Chainlit Demo](https:\u002F\u002Fx.com\u002Fchainlit_io\u002Fstatus\u002F1858613325921480922)\n  - [TypeScript for LLMs](https:\u002F\u002Fwww.linkedin.com\u002Fposts\u002Fdexterihorthy_llms-typescript-aiagents-activity-7290858296679313408-Lh9e)\n  - [Schema Aligned Parsing](https:\u002F\u002Fwww.boundaryml.com\u002Fblog\u002Fschema-aligned-parsing)\n  - [Function Calling vs Structured Outputs vs JSON Mode](https:\u002F\u002Fwww.vellum.ai\u002Fblog\u002Fwhen-should-i-use-function-calling-structured-outputs-or-json-mode)\n  - [BAML on GitHub](https:\u002F\u002Fgithub.com\u002Fboundaryml\u002Fbaml)\n  - [OpenAI JSON vs Function Calling](https:\u002F\u002Fdocs.llamaindex.ai\u002Fen\u002Fstable\u002Fexamples\u002Fllm\u002Fopenai_json_vs_function_calling\u002F)\n  - [Outer Loop Agents](https:\u002F\u002Ftheouterloop.substack.com\u002Fp\u002Fopenais-realtime-api-is-a-step-towards)\n  - [Airflow](https:\u002F\u002Fairflow.apache.org\u002F)\n  - 
[Prefect](https:\u002F\u002Fwww.prefect.io\u002F)\n  - [Dagster](https:\u002F\u002Fdagster.io\u002F)\n  - [Inngest](https:\u002F\u002Fwww.inngest.com\u002F)\n  - [Windmill](https:\u002F\u002Fwww.windmill.dev\u002F)\n  - [The AI Agent Index (MIT)](https:\u002F\u002Faiagentindex.mit.edu\u002F)\n  - [NotebookLM on Finding Model Capability Boundaries](https:\u002F\u002Fopen.substack.com\u002Fpub\u002Fswyx\u002Fp\u002Fnotebooklm?selection=08e1187c-cfee-4c63-93c9-71216640a5f8)\n\n## Contributors\n\nThanks to everyone who has contributed to 12-factor agents!\n\n[\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_121d703d705a.png\" width=\"80px\" alt=\"dexhorthy\" \u002F>](https:\u002F\u002Fgithub.com\u002Fdexhorthy) [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_da43c4ed75cd.png\" width=\"80px\" alt=\"Sypherd\" \u002F>](https:\u002F\u002Fgithub.com\u002FSypherd) [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_9229c7059eda.png\" width=\"80px\" alt=\"tofaramususa\" \u002F>](https:\u002F\u002Fgithub.com\u002Ftofaramususa) [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_f8e7f97d06c1.png\" width=\"80px\" alt=\"a-churchill\" \u002F>](https:\u002F\u002Fgithub.com\u002Fa-churchill) [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_7e751d602f95.png\" width=\"80px\" alt=\"Elijas\" \u002F>](https:\u002F\u002Fgithub.com\u002FElijas) [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_f6799391fb21.png\" width=\"80px\" alt=\"hugolmn\" \u002F>](https:\u002F\u002Fgithub.com\u002Fhugolmn) [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_461e989a4885.png\" width=\"80px\" alt=\"jeremypeters\" 
\u002F>](https:\u002F\u002Fgithub.com\u002Fjeremypeters)\n\n[\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_af74ab49896c.png\" width=\"80px\" alt=\"kndl\" \u002F>](https:\u002F\u002Fgithub.com\u002Fkndl) [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_a7ba125d8d24.png\" width=\"80px\" alt=\"maciejkos\" \u002F>](https:\u002F\u002Fgithub.com\u002Fmaciejkos) [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_48511b953be0.png\" width=\"80px\" alt=\"pfbyjy\" \u002F>](https:\u002F\u002Fgithub.com\u002Fpfbyjy) [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_7e7264d0e301.png\" width=\"80px\" alt=\"0xRaduan\" \u002F>](https:\u002F\u002Fgithub.com\u002F0xRaduan) [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_63dd0f90225a.png\" width=\"80px\" alt=\"zyuanlim\" \u002F>](https:\u002F\u002Fgithub.com\u002Fzyuanlim) [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_a51ae544362d.png\" width=\"80px\" alt=\"lombardo-chcg\" \u002F>](https:\u002F\u002Fgithub.com\u002Flombardo-chcg) [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_5d0004a4a0f1.png\" width=\"80px\" alt=\"sahanatvessel\" \u002F>](https:\u002F\u002Fgithub.com\u002Fsahanatvessel)\n \n## License\n\nAll content and images are licensed under a \u003Ca href=\"https:\u002F\u002Fcreativecommons.org\u002Flicenses\u002Fby-sa\u002F4.0\u002F\">CC BY-SA 4.0 License\u003C\u002Fa>\n\nCode is licensed under the \u003Ca href=\"https:\u002F\u002Fwww.apache.org\u002Flicenses\u002FLICENSE-2.0\">Apache 2.0 License\u003C\u002Fa>\n\n\n","# 12 因子智能体 - 构建可靠的大语言模型（LLM）应用程序的原则\n\n\u003Cdiv align=\"center\">\n\u003Ca 
href=\"https:\u002F\u002Fwww.apache.org\u002Flicenses\u002FLICENSE-2.0\">\n        \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FCode-Apache%202.0-blue.svg\" alt=\"Code License: Apache 2.0\">\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fcreativecommons.org\u002Flicenses\u002Fby-sa\u002F4.0\u002F\">\n        \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FContent-CC%20BY--SA%204.0-lightgrey.svg\" alt=\"Content License: CC BY-SA 4.0\">\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fhumanlayer.dev\u002Fdiscord\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fchat-discord-5865F2\" alt=\"Discord Server\">\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=8kMaTybvDUw\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Faidotengineer-conf_talk_(17m)-white\" alt=\"YouTube\nDeep Dive\">\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=yxJDyQ8v6P0\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fyoutube-deep_dive-crimson\" alt=\"YouTube\nDeep Dive\">\u003C\u002Fa>\n    \n\u003C\u002Fdiv>\n\n\u003Cp>\u003C\u002Fp>\n\n*秉承 [12 因子应用](https:\u002F\u002F12factor.net\u002F) 的精神*。*本项目的源代码公开于 https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents，欢迎提供反馈和贡献。让我们一起解决这个问题！*\n\n> [!TIP]\n> 错过了 AI 工程师世界博览会？[在此观看演讲](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=8kMaTybvDUw)\n>\n> 正在寻找上下文工程（Context Engineering）相关内容？[直接跳转到第 3 因子](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-03-own-your-context-window.md)\n>\n> 想要为 `npx\u002Fuvx create-12-factor-agent` 做贡献？请查看 [讨论线程](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fdiscussions\u002F61)\n\n\n\u003Cimg referrerpolicy=\"no-referrer-when-downgrade\" src=\"https:\u002F\u002Fstatic.scarf.sh\u002Fa.png?x-pxid=2acad99a-c2d9-48df-86f5-9ca8061b7bf9\" \u002F>\n\n\u003Ca 
href=\"#visual-nav\">\u003Cimg width=\"907\" alt=\"Screenshot 2025-04-03 at 2 49 07 PM\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_b5c6ed971829.png\" \u002F>\u003C\u002Fa>\n\n\n你好，我是 Dex。我已经 [钻研](https:\u002F\u002Fyoutu.be\u002F8bIHcttkOTE) [AI 智能体](https:\u002F\u002Ftheouterloop.substack.com) 有一段时间了 ([链接](https:\u002F\u002Fhumanlayer.dev))。\n\n**我尝试过市面上所有的智能体框架**，从即插即用（plug-and-play）的 crew\u002Flangchains，到“极简主义”的 smolagents，再到“生产级”的 langgraph、griptape 等等。\n\n**我和许多非常优秀的创始人交谈过**，无论是在 YC 内部还是外部，他们都在用 AI 构建令人印象深刻的事物。大多数人都是自己从头搭建技术栈（rolling the stack）。我很少看到现成框架出现在面向客户的生产级应用中。\n\n**令我惊讶的是**，大多数自称是“AI 智能体”的产品其实并没有那么具备代理能力（agentic）。它们大多是确定性代码，只是在恰到好处的地方点缀了一些 LLM 步骤，让体验变得真正神奇。\n\n优秀的智能体至少不会遵循 [\"这是你的提示词，这是一堆工具，循环直到达成目标\"](https:\u002F\u002Fwww.anthropic.com\u002Fengineering\u002Fbuilding-effective-agents#agents) 的模式。相反，它们主要由软件组成。\n\n因此，我决定回答这个问题：\n\n> ### **我们可以使用哪些原则来构建真正足够好、可以交付给生产环境客户的大语言模型（LLM）驱动的软件？**\n\n欢迎来到 12 因子智能体。正如自戴利（Daley）以来的历任芝加哥市长都会在城市主要机场贴出的标语那样，很高兴你来到这里。\n\n*特别感谢 [@iantbutler01](https:\u002F\u002Fgithub.com\u002Fiantbutler01)、[@tnm](https:\u002F\u002Fgithub.com\u002Ftnm)、[@hellovai](https:\u002F\u002Fwww.github.com\u002Fhellovai)、[@stantonk](https:\u002F\u002Fwww.github.com\u002Fstantonk)、[@balanceiskey](https:\u002F\u002Fwww.github.com\u002Fbalanceiskey)、[@AdjectiveAllison](https:\u002F\u002Fwww.github.com\u002FAdjectiveAllison)、[@pfbyjy](https:\u002F\u002Fwww.github.com\u002Fpfbyjy)、[@a-churchill](https:\u002F\u002Fwww.github.com\u002Fa-churchill) 以及旧金山 MLOps 社区对本指南的早期反馈。*\n\n## 简版：12 个因子\n\n即使 LLM [继续以指数级变得更强大](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-10-small-focused-agents.md#what-if-llms-get-smarter)，仍有一些核心工程技术能使 LLM 驱动的软件更可靠、更具可扩展性且更易维护。\n\n- [我们如何走到这一步：软件简史](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Fbrief-history-of-software.md)\n- [因子 
1：自然语言到工具调用](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-01-natural-language-to-tool-calls.md)\n- [因子 2：掌控你的提示词](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-02-own-your-prompts.md)\n- [因子 3：掌控你的上下文窗口](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-03-own-your-context-window.md)\n- [因子 4：工具只是结构化输出](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-04-tools-are-structured-outputs.md)\n- [因子 5：统一执行状态与业务状态](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-05-unify-execution-state.md)\n- [因子 6：通过简单 API 启动\u002F暂停\u002F恢复](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-06-launch-pause-resume.md)\n- [因子 7：通过工具调用联系人类](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-07-contact-humans-with-tools.md)\n- [因子 8：掌控你的控制流](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-08-own-your-control-flow.md)\n- [因子 9：将错误紧凑化至上下文窗口](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-09-compact-errors.md)\n- [因子 10：小型、专注的智能体](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-10-small-focused-agents.md)\n- [因子 11：随处触发，在用户所在之处与他们相遇](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-11-trigger-from-anywhere.md)\n- [因子 12：让你的智能体成为无状态归约器](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-12-stateless-reducer.md)\n\n### 视觉导航\n\n|    |    |    
|\n|----|----|-----|\n|[![factor 1](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_5c8089e17aaf.png)](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-01-natural-language-to-tool-calls.md) | [![factor 2](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_a417237e7ed6.png)](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-02-own-your-prompts.md) | [![factor 3](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_fa988d1c9bb5.png)](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-03-own-your-context-window.md) |\n|[![factor 4](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_b0b26cb77f70.png)](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-04-tools-are-structured-outputs.md) | [![factor 5](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_128eef946c83.png)](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-05-unify-execution-state.md) | [![factor 6](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_07ba11f3874b.png)](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-06-launch-pause-resume.md) |\n| [![factor 7](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_4ad4d41146f6.png)](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-07-contact-humans-with-tools.md) | [![factor 
8](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_a8ddf2653568.png)](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-08-own-your-control-flow.md) | [![factor 9](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_e123b4f375fe.png)](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-09-compact-errors.md) |\n| [![factor 10](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_8018d8b64340.png)](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-10-small-focused-agents.md) | [![factor 11](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_1f70161ab495.png)](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-11-trigger-from-anywhere.md) | [![factor 12](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_0669f0945ca7.png)](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-12-stateless-reducer.md) |\n\n## 我们是如何走到这一步的\n\n为了更深入地了解我的智能体之旅以及我们是如何走到这一步的，请查看 [软件简史](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Fbrief-history-of-software.md) - 这里有一个快速摘要：\n\n### 智能体的承诺\n\n我们将要谈论很多关于**有向图 (DGs)** 及其无环的朋友，即 **有向无环图 (DAGs)**。我首先要指出的是……嗯……软件本质上就是一个有向图。我们过去用流程图来表示程序是有原因的。\n\n![010-software-dag](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_fb6e0bd55b06.png)\n\n### 从代码到 DAGs\n\n大约 20 年前，我们开始看到 DAG 编排器变得流行。我们说的是经典之作，如 [Airflow](https:\u002F\u002Fairflow.apache.org\u002F)、[Prefect](https:\u002F\u002Fwww.prefect.io\u002F)，一些前身，以及一些较新的项目，如 
([dagster](https:\u002F\u002Fdagster.io\u002F)、[inngest](https:\u002F\u002Fwww.inngest.com\u002F)、[windmill](https:\u002F\u002Fwww.windmill.dev\u002F))。这些遵循相同的图模式，并增加了可观测性、模块化、重试、管理等好处。\n\n![015-dag-orchestrators](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_28e76e6e6bec.png)\n\n### 智能体的承诺\n\n我不是第一个[说过这话的人](https:\u002F\u002Fyoutu.be\u002FDc99-zTMyMg?si=bcT0hIwWij2mR-40&t=73)，但我刚开始学习智能体时最大的收获是，你可以直接把 DAG 扔掉。与其让软件工程师编写每一个步骤和边界情况，不如给智能体一个目标和一组转换：\n\n![025-agent-dag](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_39f2777129f8.png)\n\n然后让 **大语言模型 (LLM)** 实时做出决策来找出路径。\n\n![026-agent-dag-lines](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_2f34edc54c2c.png)\n\n这里的承诺是：你写的软件更少，只需给 LLM 图的“边”，让它自己找出节点。你可以从错误中恢复，可以写更少的代码，还可能发现 LLM 能找到解决问题的新颖方案。\n\n### 智能体即循环\n\n正如我们稍后将会看到的，事实证明这并不完全奏效。\n\n让我们再深入一步——有了智能体，你就有了一个由 3 个步骤组成的循环：\n\n1. LLM 确定工作流中的下一步，输出结构化的 json（“工具调用”）\n2. 确定性代码执行工具调用\n3. 结果被追加到上下文窗口中\n4. 
重复直到下一步被确定为“完成”\n\n```python\ninitial_event = {\"message\": \"...\"}\ncontext = [initial_event]\nwhile True:\n  next_step = await llm.determine_next_step(context)\n  context.append(next_step)\n\n  if next_step.intent == \"done\":\n    return next_step.final_answer\n\n  result = await execute_step(next_step)\n  context.append(result)\n```\n\n我们的初始上下文只是起始事件（可能是用户消息，可能是 cron 触发，可能是 webhook 等），我们要求 LLM 选择下一步（工具）或确定我们已完成。\n\n这是一个多步骤示例：\n\n[![027-agent-loop-animation](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_52923b09dea4.gif)](https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F3beb0966-fdb1-4c12-a47f-ed4e8240f8fd)\n\n\u003Cdetails>\n\u003Csummary>\u003Ca href=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_52923b09dea4.gif\">GIF 版本\u003C\u002Fa>\u003C\u002Fsummary>\n\n![027-agent-loop-animation](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_52923b09dea4.gif)\n\n\u003C\u002Fdetails>\n\n## 为什么是 12 因子智能体（Agents）？\n\n归根结底，这种方法并没有达到我们预期的效果。\n\n在构建 HumanLayer 的过程中，我至少与 100 位 SaaS 构建者（SaaS Builders，主要是技术创始人）交谈过，他们希望让自己的现有产品更具智能体（Agent）特性。这段旅程通常如下所示：\n\n1. 决定要构建一个智能体（Agent）\n2. 产品设计、用户体验（UX）映射、确定要解决的问题\n3. 想要快速推进，所以直接选用 $FRAMEWORK 并*开始构建*\n4. 达到 70-80% 的质量标准\n5. 意识到对于大多数面向客户的功能来说，80% 还不够好\n6. 意识到要突破 80% 需要逆向工程框架（Framework）、提示词（Prompts）、流程等\n7. 
从头再来\n\n\u003Cdetails>\n\u003Csummary>随机免责声明\u003C\u002Fsummary>\n\n**免责声明**：我不确定这段话确切该放在哪里，但放在这里似乎和放在别处一样合适：**这绝非意在贬低现有的众多框架（Frameworks），或是致力于开发它们的那些非常聪明的人们**。它们实现了不可思议的事情，并加速了 AI 生态系统的发展。\n\n我希望这篇文章的一个成果是，智能体框架的构建者能从我和其他人的经历中学习，使框架变得更好。\n\n特别是对于那些想要快速推进但又需要深度控制的构建者而言。\n\n**免责声明 2**：我不会谈论 MCP。我相信你能看出它适合放在哪里。\n\n**免责声明 3**：我主要使用 TypeScript，[出于某些原因](https:\u002F\u002Fwww.linkedin.com\u002Fposts\u002Fdexterihorthy_llms-typescript-aiagents-activity-7290858296679313408-Lh9e?utm_source=share&utm_medium=member_desktop&rcm=ACoAAA4oHTkByAiD-wZjnGsMBUL_JT6nyyhOh30)，但这里的所有内容在 Python 或你喜欢的任何其他语言中同样适用。\n\n\n总之，回到正题……\n\n\u003C\u002Fdetails>\n\n### 优秀大语言模型（LLM）应用的设计模式\n\n在钻研了数百个 AI 库并与数十位创始人合作后，我的直觉是这样的：\n\n1. 有一些核心要素能让智能体（Agent）变得出色\n2. 完全押注某个框架（Framework）、进行一场实质上从零开始的重写，可能会适得其反\n3. 有一些核心原则能让智能体（Agent）变得出色，如果你引入一个框架，你将获得其中大部分\u002F全部\n4. 但是，我所见过的构建者将高质量 AI 软件交付给客户的最快途径，是从智能体构建中提取小型、模块化的概念，并将它们整合到现有产品中\n5. 这些来自智能体（Agent）的模块化概念可以由大多数熟练的软件工程师定义和应用，即使他们没有 AI 背景\n\n> #### 我所见过的构建者将优质 AI 软件交付给客户的最快途径，是从智能体构建中提取小型、模块化的概念，并将它们整合到现有产品中\n\n\n## 12 因子（再次）\n\n\n- [我们如何走到这一步：软件简史](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Fbrief-history-of-software.md)\n- [因子 1：自然语言到工具调用](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-01-natural-language-to-tool-calls.md)\n- [因子 2：掌控你的提示词（Prompts）](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-02-own-your-prompts.md)\n- [因子 3：掌控你的上下文窗口（Context Window）](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-03-own-your-context-window.md)\n- [因子 4：工具只是结构化输出](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-04-tools-are-structured-outputs.md)\n- [因子 
5：统一执行状态和业务状态](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-05-unify-execution-state.md)\n- [因子 6：通过简单 API 启动\u002F暂停\u002F恢复](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-06-launch-pause-resume.md)\n- [因子 7：通过工具调用联系人类](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-07-contact-humans-with-tools.md)\n- [因子 8：掌控你的控制流（Control Flow）](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-08-own-your-control-flow.md)\n- [因子 9：将错误紧凑化到上下文窗口](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-09-compact-errors.md)\n- [因子 10：小型、专注的智能体（Agents）](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-10-small-focused-agents.md)\n- [因子 11：从任何地方触发，在用户所在处相遇](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-11-trigger-from-anywhere.md)\n- [因子 12：让你的智能体（Agent）成为无状态归约器（Stateless Reducer）](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Ffactor-12-stateless-reducer.md)\n\n## 荣誉提名 \u002F 其他建议\n\n- [因子 13：预取你可能需要的所有上下文](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fblob\u002Fmain\u002Fcontent\u002Fappendix-13-pre-fetch.md)\n\n## 相关资源\n\n- 在此 [贡献本指南](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents)\n- [我在 Tool Use 播客的一期节目中谈到了很多相关内容](https:\u002F\u002Fyoutu.be\u002F8bIHcttkOTE)（2025 年 3 月）\n- 我在 [The Outer Loop](https:\u002F\u002Ftheouterloop.substack.com) 上撰写了一些相关内容\n- 我与 [@hellovai](https:\u002F\u002Fgithub.com\u002Fhellovai) 一起举办关于 [最大化 LLM（大语言模型）性能](https:\u002F\u002Fgithub.com\u002Fhellovai\u002Fai-that-works\u002Ftree\u002Fmain) 的网络研讨会\n- 我们使用此方法论构建 [开源 
(OSS) 智能体 (Agent)](https:\u002F\u002Fgithub.com\u002Fgot-agents\u002Fagents)\n- 我们无视了所有自己的建议，并构建了一个用于在 [Kubernetes（容器编排系统）中运行分布式智能体 (Agent)](https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002Fkubechain) 的框架\n- 本指南的其他链接：\n  - [12 因子应用](https:\u002F\u002F12factor.net)\n  - [构建有效的智能体 (Anthropic)](https:\u002F\u002Fwww.anthropic.com\u002Fengineering\u002Fbuilding-effective-agents#agents)\n  - [提示词即函数](https:\u002F\u002Fthedataexchange.media\u002Fbaml-revolution-in-ai-engineering\u002F )\n  - [库模式：为什么框架是邪恶的](https:\u002F\u002Ftomasp.net\u002Fblog\u002F2015\u002Flibrary-frameworks\u002F)\n  - [错误的抽象](https:\u002F\u002Fsandimetz.com\u002Fblog\u002F2016\u002F1\u002F20\u002Fthe-wrong-abstraction)\n  - [Mailcrew 智能体 (Agent)](https:\u002F\u002Fgithub.com\u002Fdexhorthy\u002Fmailcrew)\n  - [Mailcrew 演示视频](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=f_cKnoPC_Oo)\n  - [Chainlit 演示](https:\u002F\u002Fx.com\u002Fchainlit_io\u002Fstatus\u002F1858613325921480922)\n  - [面向 LLM（大语言模型）的 TypeScript](https:\u002F\u002Fwww.linkedin.com\u002Fposts\u002Fdexterihorthy_llms-typescript-aiagents-activity-7290858296679313408-Lh9e)\n  - [模式对齐解析](https:\u002F\u002Fwww.boundaryml.com\u002Fblog\u002Fschema-aligned-parsing)\n  - [函数调用与结构化输出及 JSON 模式对比](https:\u002F\u002Fwww.vellum.ai\u002Fblog\u002Fwhen-should-i-use-function-calling-structured-outputs-or-json-mode)\n  - [GitHub 上的 BAML](https:\u002F\u002Fgithub.com\u002Fboundaryml\u002Fbaml)\n  - [OpenAI JSON 与函数调用](https:\u002F\u002Fdocs.llamaindex.ai\u002Fen\u002Fstable\u002Fexamples\u002Fllm\u002Fopenai_json_vs_function_calling\u002F)\n  - [外环智能体 (Agent)](https:\u002F\u002Ftheouterloop.substack.com\u002Fp\u002Fopenais-realtime-api-is-a-step-towards)\n  - [Airflow](https:\u002F\u002Fairflow.apache.org\u002F)\n  - [Prefect](https:\u002F\u002Fwww.prefect.io\u002F)\n  - [Dagster](https:\u002F\u002Fdagster.io\u002F)\n  - [Inngest](https:\u002F\u002Fwww.inngest.com\u002F)\n  - [Windmill](https:\u002F\u002Fwww.windmill.dev\u002F)\n  - [AI 智能体索引 
(MIT)](https:\u002F\u002Faiagentindex.mit.edu\u002F)\n  - [NotebookLM 关于寻找模型能力边界](https:\u002F\u002Fopen.substack.com\u002Fpub\u002Fswyx\u002Fp\u002Fnotebooklm?selection=08e1187c-cfee-4c63-93c9-71216640a5f8)\n\n## 贡献者\n\n感谢所有为 12-factor 智能体 (Agent) 做出贡献的人！\n\n[\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_121d703d705a.png\" width=\"80px\" alt=\"dexhorthy\" \u002F>](https:\u002F\u002Fgithub.com\u002Fdexhorthy) [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_da43c4ed75cd.png\" width=\"80px\" alt=\"Sypherd\" \u002F>](https:\u002F\u002Fgithub.com\u002FSypherd) [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_9229c7059eda.png\" width=\"80px\" alt=\"tofaramususa\" \u002F>](https:\u002F\u002Fgithub.com\u002Ftofaramususa) [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_f8e7f97d06c1.png\" width=\"80px\" alt=\"a-churchill\" \u002F>](https:\u002F\u002Fgithub.com\u002Fa-churchill) [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_7e751d602f95.png\" width=\"80px\" alt=\"Elijas\" \u002F>](https:\u002F\u002Fgithub.com\u002FElijas) [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_f6799391fb21.png\" width=\"80px\" alt=\"hugolmn\" \u002F>](https:\u002F\u002Fgithub.com\u002Fhugolmn) [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_461e989a4885.png\" width=\"80px\" alt=\"jeremypeters\" \u002F>](https:\u002F\u002Fgithub.com\u002Fjeremypeters)\n\n[\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_af74ab49896c.png\" width=\"80px\" alt=\"kndl\" \u002F>](https:\u002F\u002Fgithub.com\u002Fkndl) [\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_a7ba125d8d24.png\" width=\"80px\" alt=\"maciejkos\" \u002F>](https:\u002F\u002Fgithub.com\u002Fmaciejkos) [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_48511b953be0.png\" width=\"80px\" alt=\"pfbyjy\" \u002F>](https:\u002F\u002Fgithub.com\u002Fpfbyjy) [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_7e7264d0e301.png\" width=\"80px\" alt=\"0xRaduan\" \u002F>](https:\u002F\u002Fgithub.com\u002F0xRaduan) [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_63dd0f90225a.png\" width=\"80px\" alt=\"zyuanlim\" \u002F>](https:\u002F\u002Fgithub.com\u002Fzyuanlim) [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_a51ae544362d.png\" width=\"80px\" alt=\"lombardo-chcg\" \u002F>](https:\u002F\u002Fgithub.com\u002Flombardo-chcg) [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_readme_5d0004a4a0f1.png\" width=\"80px\" alt=\"sahanatvessel\" \u002F>](https:\u002F\u002Fgithub.com\u002Fsahanatvessel)\n \n## 许可证\n\n所有内容和图片均根据 \u003Ca href=\"https:\u002F\u002Fcreativecommons.org\u002Flicenses\u002Fby-sa\u002F4.0\u002F\">CC BY-SA 4.0 许可协议\u003C\u002Fa> 授权\n\n代码根据 \u003Ca href=\"https:\u002F\u002Fwww.apache.org\u002Flicenses\u002FLICENSE-2.0\">Apache 2.0 许可协议\u003C\u002Fa> 授权","# 12-Factor Agents 快速上手指南\n\n**12-Factor Agents** 是一套基于 12-Factor Apps 原则构建可靠 LLM 应用的方法论与工具集。它旨在帮助开发者构建生产级、可扩展且易于维护的 AI Agent 系统，强调将软件工程的最佳实践应用于大模型应用开发中。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**: macOS \u002F Linux \u002F Windows\n*   **运行时**: \n    *   **Node.js** (推荐 v16+)：用于通过 `npx` 运行工具。\n    *   **Python** (推荐 3.10+)：用于通过 `uvx` 运行工具。\n*   **包管理工具**: \n    *   `npm` (随 Node.js 安装)\n    *   `uv` (Python 包管理器，需单独安装)\n*   **版本控制**: 
`git`\n\n## 安装步骤\n\n本项目主要通过命令行工具进行初始化，同时也提供完整的文档仓库供参考。\n\n### 方式一：使用 CLI 初始化（推荐）\n\n如果您希望快速创建一个符合 12-Factor 规范的项目骨架，可以使用以下命令之一（注意：该 CLI 仍处于社区讨论与开发阶段，若命令不可用，请改用方式二）：\n\n```bash\n# 使用 npm\nnpx create-12-factor-agent\n\n# 或使用 uv (Python 生态)\nuvx create-12-factor-agent\n```\n\n### 方式二：克隆完整仓库\n\n如果您需要查阅所有 12 个因子（Factors）的详细原理和最佳实践，建议直接克隆源代码仓库：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\ncd 12-factor-agents\n```\n\n## 基本使用\n\n### 1. 启动项目\n\n执行安装步骤中的 CLI 命令后，按照提示输入项目名称和配置，即可生成基础项目结构。生成的项目将包含遵循 12-Factor 原则的代码模板。\n\n### 2. 理解核心概念\n\n本项目的核心理念是避免简单的“循环调用”，而是构建由软件主导的确定性流程。以下是项目中展示的基础 Agent 循环逻辑示例：\n\n```python\ninitial_event = {\"message\": \"...\"}\ncontext = [initial_event]\nwhile True:\n  next_step = await llm.determine_next_step(context)\n  context.append(next_step)\n\n  if next_step.intent == \"done\":\n    return next_step.final_answer\n\n  result = await execute_step(next_step)\n  context.append(result)\n```\n\n### 3. 阅读核心文档\n\n项目详细定义了构建可靠 LLM 应用的 12 个关键因子，建议按顺序阅读以深入理解架构设计：\n\n*   **Factor 1**: Natural Language to Tool Calls (自然语言到工具调用)\n*   **Factor 2**: Own your prompts (掌控你的提示词)\n*   **Factor 3**: Own your context window (掌控上下文窗口)\n*   **Factor 4**: Tools are just structured outputs (工具即结构化输出)\n*   **Factor 5**: Unify execution state and business state (统一执行状态与业务状态)\n*   **Factor 6**: Launch\u002FPause\u002FResume with simple APIs (通过简单 API 启动\u002F暂停\u002F恢复)\n*   **Factor 7**: Contact humans with tool calls (通过工具调用联系人类)\n*   **Factor 8**: Own your control flow (掌控控制流)\n*   **Factor 9**: Compact Errors into Context Window (将错误压缩进上下文窗口)\n*   **Factor 10**: Small, Focused Agents (小型、专注的 Agent)\n*   **Factor 11**: Trigger from anywhere, meet users where they are (随处触发，适应用户)\n*   **Factor 12**: Make your agent a stateless reducer (让 Agent 成为无状态归约器)\n\n通过以上步骤，您可以快速搭建起一个符合工业级标准的 AI Agent 应用原型。","某电商初创团队正在开发一个自动化退款审核代理，需处理复杂订单查询与政策匹配。该系统直接面对真实用户，对稳定性和可维护性有极高要求。\n\n### 没有 12-factor-agents 时\n- 代码逻辑混乱，LLM 调用和 API 请求硬编码混在一起，难以定位故障根源，测试覆盖率极低。\n- 上下文窗口管理失控，长对话历史导致 
Token 成本飙升且关键信息被遗忘，用户体验不稳定。\n- 缺乏标准化配置，更换模型供应商或调整参数时需要重写大量业务代码，迭代效率低下。\n- 错误处理脆弱，一旦外部服务波动整个流程就卡死，无法优雅降级，严重影响客户信任。\n\n### 使用 12-factor-agents 后\n- 遵循因子化原则分离状态与计算，核心逻辑清晰独立，大幅降低维护难度，新成员上手更快。\n- 通过上下文工程最佳实践精准控制记忆长度，在保证效果的同时显著降低成本，预算更可控。\n- 统一配置管理接口，支持快速切换不同大模型后端而不影响业务逻辑代码，技术选型更灵活。\n- 内置可观测性标准，能完整追踪每个 Agent 步骤的输入输出，便于生产环境排查，运维更省心。\n\n这套方法论将 AI 应用从脆弱的实验品转变为真正可靠的生产级软件。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhumanlayer_12-factor-agents_f0b40ee0.png","humanlayer","HumanLayer","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fhumanlayer_e3a6494c.png","",null,"https:\u002F\u002Fhumanlayer.dev","https:\u002F\u002Fgithub.com\u002Fhumanlayer",[83,87,91,95,99,103],{"name":84,"color":85,"percentage":86},"TypeScript","#3178c6",80.2,{"name":88,"color":89,"percentage":90},"Jupyter Notebook","#DA5B0B",11.2,{"name":92,"color":93,"percentage":94},"Python","#3572A5",7.5,{"name":96,"color":97,"percentage":98},"Shell","#89e051",1,{"name":100,"color":101,"percentage":102},"Makefile","#427819",0.1,{"name":104,"color":105,"percentage":106},"JavaScript","#f1e05a",0,19104,1447,"2026-04-05T09:59:39","NOASSERTION","未说明",{"notes":113,"python":111,"dependencies":114},"该项目主要是一套关于构建可靠 LLM 应用的 12 项工程原则指南（类似 12-Factor App），包含设计文档、概念解释及伪代码示例，而非直接安装运行的独立软件包。文中提及作者曾使用 LangChain、LangGraph 等框架，但未列出本项目具体的依赖库列表。实际开发需根据所选 Agent 框架自行配置环境。",[111],[14,15,13,26],[117,118,119,120,121,122,123,124,125,126,67],"agents","ai","context-window","framework","llms","memory","orchestration","prompt-engineering","rag","12-factor",22,"2026-03-27T02:49:30.150509","2026-04-06T05:16:49.601906",[131,136,140,145,149,154],{"id":132,"question_zh":133,"answer_zh":134,"source_url":135},2337,"如何确保 GitHub 仓库中的 Markdown 文件按正确顺序排列？","建议在文件名中使用带零填充的数字编号（例如 \"01\", \"02\" 而不是 \"1\", \"2\"），这样可以保证在 GitHub 的文件列表视图中按照正确的字典序显示。","https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fissues\u002F14",{"id":137,"question_zh":138,"answer_zh":139,"source_url":135},2338,"如何申请参与文件排序问题的修复工作？","用户可以在 Issue 
下留言表明意愿。已有用户 Kartik Gile 认领该任务，计划对 content 文件夹进行重构。",{"id":141,"question_zh":142,"answer_zh":143,"source_url":144},2339,"不同厂商的推理模型（Reasoning Models）对 API 调用有什么特殊要求？","各厂商 API 对推理消息有特定要求，例如 OpenAI 的 Responses API 需要传递 \"Reasoning ID\"，而 Anthropic 需要传递 \"reasoning signature\"。如果不包含这些特定字段，API 可能会报错。","https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fissues\u002F44",{"id":146,"question_zh":147,"answer_zh":148,"source_url":144},2340,"如何在提示词中平衡推理模型的完整性与厂商锁定风险？","需要在性能和控制权之间做权衡：仅传递推理摘要可以减少厂商锁定但可能降低性能；传递完整思维链能提升质量但可能导致厂商绑定。建议参考 AI That Works 的会话记录获取更详细的最佳实践资源。",{"id":150,"question_zh":151,"answer_zh":152,"source_url":153},2341,"如何利用 Schema 对齐技术提高 LLM 结构化输出的准确性？","可参考 Boundary ML 的方案：首先为每种内容块（标题、段落、代码等）定义明确的 JSON Schema；其次在解析管道的提示词中强制模型遵守 Schema；最后增加生成后的验证步骤以捕获 Schema 违规，并可根据文档类型自动调整 Schema 定义。","https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fissues\u002F50",{"id":155,"question_zh":156,"answer_zh":157,"source_url":158},2342,"为什么无法看到心率区间分解或回答健康数据问题？","这可能与 Sensai 无法回答健康数据问题（如 HRV）有关，导致相关功能（如运动完成标签页的心率区间分解）暂时不可用。","https:\u002F\u002Fgithub.com\u002Fhumanlayer\u002F12-factor-agents\u002Fissues\u002F48",[]]