[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-gofireflyio--aiac":3,"tool-gofireflyio--aiac":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":79,"owner_email":80,"owner_twitter":81,"owner_website":82,"owner_url":83,"languages":84,"stars":97,"forks":98,"last_commit_at":99,"license":100,"difficulty_score":23,"env_os":101,"env_gpu":102,"env_ram":102,"env_deps":103,"category_tags":108,"github_topics":109,"view_count":23,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":119,"updated_at":120,"faqs":121,"releases":156},2109,"gofireflyio\u002Faiac","aiac","Artificial Intelligence Infrastructure-as-Code Generator.","aiac 是一款基于大语言模型（LLM）的智能基础设施即代码（IaC）生成器。它旨在解决开发者在编写云资源模板、配置文件及运维脚本时耗时费力且易出错的痛点，通过自然语言指令自动产出高质量代码。\n\n无论是需要快速构建 AWS EC2 的 Terraform 模块、生成安全的 Nginx Dockerfile，还是编写复杂的 CI\u002FCD 流水线与数据库查询语句，用户只需输入如“为高可用 EKS 生成 Terraform 代码”之类的提示词，aiac 即可调用 OpenAI、Amazon Bedrock 或本地 Ollama 等模型，瞬间生成并输出对应的代码文件。\n\n这款工具特别适合云原生工程师、DevOps 专家及后端开发人员使用，能显著降低重复性编码工作的门槛，提升基础设施搭建效率。其独特亮点在于支持灵活配置多种 LLM 后端，既兼容云端强大模型，也支持本地私有化部署，兼顾了生成的智能性与数据的安全性。作为命令行工具与开发库的双重形态，aiac 能无缝融入现有的自动化工作流，让基础设施管理变得更加敏捷直观。","# ![AIAC](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fgofireflyio_aiac_readme_74e21e9a5719.png) ![AIAC](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fgofireflyio_aiac_readme_96b6c86b5a57.png)\n\nArtificial 
Intelligence\nInfrastructure-as-Code\nGenerator.\n\n\u003Ckbd>[\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fgofireflyio_aiac_readme_982bc4241ac8.gif\" style=\"width: 100%; border: 1px solid silver;\" border=\"1\" alt=\"demo\">](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fgofireflyio_aiac_readme_982bc4241ac8.gif)\u003C\u002Fkbd>\n\n\u003C!-- vim-markdown-toc GFM -->\n\n* [Description](#description)\n* [Use Cases and Example Prompts](#use-cases-and-example-prompts)\n    * [Generate IaC](#generate-iac)\n    * [Generate Configuration Files](#generate-configuration-files)\n    * [Generate CI\u002FCD Pipelines](#generate-cicd-pipelines)\n    * [Generate Policy as Code](#generate-policy-as-code)\n    * [Generate Utilities](#generate-utilities)\n    * [Command Line Builder](#command-line-builder)\n    * [Query Builder](#query-builder)\n* [Instructions](#instructions)\n    * [Installation](#installation)\n    * [Configuration](#configuration)\n    * [Usage](#usage)\n        * [Command Line](#command-line)\n            * [Listing Models](#listing-models)\n            * [Generating Code](#generating-code)\n        * [Via Docker](#via-docker)\n        * [As a Library](#as-a-library)\n    * [Upgrading from v4 to v5](#upgrading-from-v4-to-v5)\n        * [Changes in Configuration](#changes-in-configuration)\n        * [Changes in CLI Invokation](#changes-in-cli-invokation)\n        * [Changes in Model Usage and Support](#changes-in-model-usage-and-support)\n        * [Other Changes](#other-changes)\n* [Example Output](#example-output)\n* [Troubleshooting](#troubleshooting)\n* [License](#license)\n\n\u003C!-- vim-markdown-toc -->\n\n## Description\n\n`aiac` is a library and command line tool to generate IaC (Infrastructure as Code)\ntemplates, configurations, utilities, queries and more via [LLM](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FLarge_language_model) providers such\nas [OpenAI](https:\u002F\u002Fopenai.com\u002F), [Amazon 
Bedrock](https:\u002F\u002Faws.amazon.com\u002Fbedrock\u002F) and [Ollama](https:\u002F\u002Follama.ai\u002F).\n\nThe CLI allows you to ask a model to generate templates for different scenarios\n(e.g. \"get terraform for AWS EC2\"). It composes an appropriate request to the\nselected provider, and stores the resulting code to a file, and\u002For prints it to\nstandard output.\n\nUsers can define multiple \"backends\" targeting different LLM providers and\nenvironments using a simple configuration file.\n\n## Use Cases and Example Prompts\n\n### Generate IaC\n\n- `aiac terraform for a highly available eks`\n- `aiac pulumi golang for an s3 with sns notification`\n- `aiac cloudformation for a neptundb`\n\n### Generate Configuration Files\n\n- `aiac dockerfile for a secured nginx`\n- `aiac k8s manifest for a mongodb deployment`\n\n### Generate CI\u002FCD Pipelines\n\n- `aiac jenkins pipeline for building nodejs`\n- `aiac github action that plans and applies terraform and sends a slack notification`\n\n### Generate Policy as Code\n\n- `aiac opa policy that enforces readiness probe at k8s deployments`\n\n### Generate Utilities\n\n- `aiac python code that scans all open ports in my network`\n- `aiac bash script that kills all active terminal sessions`\n\n### Command Line Builder\n\n- `aiac kubectl that gets ExternalIPs of all nodes`\n- `aiac awscli that lists instances with public IP address and Name`\n\n### Query Builder\n\n- `aiac mongo query that aggregates all documents by created date`\n- `aiac elastic query that applies a condition on a value greater than some value in aggregation`\n- `aiac sql query that counts the appearances of each row in one table in another table based on an id column`\n\n## Instructions\n\nBefore installing\u002Frunning `aiac`, you may need to configure your LLM providers\nor collect some information.\n\nFor **OpenAI**, you will need an API key in order for `aiac` to work. 
Refer to\n[OpenAI's pricing model](https:\u002F\u002Fopenai.com\u002Fpricing?trk=public_post-text) for more information. If you're not using the API hosted\nby OpenAI (for example, you may be using Azure OpenAI), you will also need to\nprovide the API URL endpoint.\n\nFor **Amazon Bedrock**, you will need an AWS account with Bedrock enabled, and\naccess to relevant models. Refer to the [Bedrock documentation](https:\u002F\u002Fdocs.aws.amazon.com\u002Fbedrock\u002Flatest\u002Fuserguide\u002Fwhat-is-bedrock.html)\nfor more information.\n\nFor **Ollama**, you only need the URL to the local Ollama API server, including\nthe \u002Fapi path prefix. This defaults to http:\u002F\u002Flocalhost:11434\u002Fapi. Ollama does\nnot provide an authentication mechanism, but one may be in place in case of a\nproxy server being used. This scenario is not currently supported by `aiac`.\n\n### Installation\n\nVia `brew`:\n\n    brew tap gofireflyio\u002Faiac https:\u002F\u002Fgithub.com\u002Fgofireflyio\u002Faiac\n    brew install aiac\n\nUsing `docker`:\n\n    docker pull ghcr.io\u002Fgofireflyio\u002Faiac\n\nUsing `go install`:\n\n    go install github.com\u002Fgofireflyio\u002Faiac\u002Fv5@latest\n\nAlternatively, clone the repository and build from source:\n\n    git clone https:\u002F\u002Fgithub.com\u002Fgofireflyio\u002Faiac.git\n    go build\n\n`aiac` is also available in the Arch Linux user repository (AUR) as [aiac](https:\u002F\u002Faur.archlinux.org\u002Fpackages\u002Faiac) (which\ncompiles from source) and [aiac-bin](https:\u002F\u002Faur.archlinux.org\u002Fpackages\u002Faiac-bin) (which downloads a compiled executable).\n\n### Configuration\n\n`aiac` is configured via a TOML configuration file. Unless a specific path is\nprovided, `aiac` looks for a configuration file in the user's [XDG_CONFIG_HOME](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FFreedesktop.org#User_directories)\ndirectory, specifically `${XDG_CONFIG_HOME}\u002Faiac\u002Faiac.toml`. 
On Unix-like\noperating systems, this will default to \"~\u002F.config\u002Faiac\u002Faiac.toml\". If you want\nto use a different path, provide the `--config` or `-c` flag with the file's path.\n\nThe configuration file defines one or more named backends. Each backend has a\ntype identifying the LLM provider (e.g. \"openai\", \"bedrock\", \"ollama\"), and\nvarious settings relevant to that provider. Multiple backends of the same LLM\nprovider can be configured, for example for \"staging\" and \"production\"\nenvironments.\n\nHere's an example configuration file:\n\n```toml\ndefault_backend = \"official_openai\"   # Default backend when one is not selected\n\n[backends.official_openai]\ntype = \"openai\"\napi_key = \"API KEY\"\n# Or \n# api_key = \"$OPENAI_API_KEY\"\ndefault_model = \"gpt-4o\"              # Default model to use for this backend\n\n[backends.azure_openai]\ntype = \"openai\"\nurl = \"https:\u002F\u002Ftenant.openai.azure.com\u002Fopenai\u002Fdeployments\u002Ftest\"\napi_key = \"API KEY\"\napi_version = \"2023-05-15\"            # Optional\nauth_header = \"api-key\"               # Default is \"Authorization\"\nextra_headers = { X-Header-1 = \"one\", X-Header-2 = \"two\" }\n\n[backends.aws_staging]\ntype = \"bedrock\"\naws_profile = \"staging\"\naws_region = \"eu-west-2\"\n\n[backends.aws_prod]\ntype = \"bedrock\"\naws_profile = \"production\"\naws_region = \"us-east-1\"\ndefault_model = \"amazon.titan-text-express-v1\"\n\n[backends.localhost]\ntype = \"ollama\"\nurl = \"http:\u002F\u002Flocalhost:11434\u002Fapi\"     # This is the default\n```\n\nNotes:\n\n1. Every backend can have a default model (via configuration key `default_model`).\n   If not provided, calls that do not define a model will fail.\n2. Backends of type \"openai\" can change the header used for authorization by\n   providing the `auth_header` setting. This defaults to \"Authorization\", but\n   Azure OpenAI uses \"api-key\" instead. 
When the header is either \"Authorization\"\n   or \"Proxy-Authorization\", the header's value for requests will be \"Bearer\n   API_KEY\". If it's anything else, it'll simply be \"API_KEY\".\n3. Backends of type \"openai\" and \"ollama\" support adding extra headers to every\n   request issued by aiac, by utilizing the `extra_headers` setting.\n\n### Usage\n\nOnce a configuration file is created, you can start generating code and you only\nneed to refer to the name of the backend. You can use `aiac` from the command\nline, or as a Go library.\n\n#### Command Line\n\n##### Listing Models\n\nBefore starting to generate code, you can list all models available in a\nbackend:\n\n    aiac -b aws_prod --list-models\n\nThis will return a list of all available models. Note that depending on the LLM\nprovider, this may list models that aren't accessible or enabled for the\nspecific account.\n\n##### Generating Code\n\nBy default, aiac prints the extracted code to standard output and opens an\ninteractive shell that allows conversing with the model, retrying requests,\nsaving output to files, copying code to clipboard, and more:\n\n    aiac terraform for AWS EC2\n\nThis will use the default backend in the configuration file and the default\nmodel for that backend, assuming they are indeed defined. 
To use a specific\nbackend, provide the `--backend` or `-b` flag:\n\n    aiac -b aws_prod terraform for AWS EC2\n\nTo use a specific model, provide the `--model` or `-m` flag:\n\n    aiac -m gpt-4-turbo terraform for AWS EC2\n\nYou can ask `aiac` to save the resulting code to a specific file:\n\n    aiac terraform for eks --output-file=eks.tf\n\nYou can use a flag to save the full Markdown output as well:\n\n    aiac terraform for eks --output-file=eks.tf --readme-file=eks.md\n\nIf you prefer aiac to print the full Markdown output to standard output rather\nthan the extracted code, use the `-f` or `--full` flag:\n\n    aiac terraform for eks -f\n\nYou can use aiac in non-interactive mode, simply printing the generated code\nto standard output, and optionally saving it to files with the above flags,\nby providing the `-q` or `--quiet` flag:\n\n    aiac terraform for eks -q\n\nIn quiet mode, you can also send the resulting code to the clipboard by\nproviding the `--clipboard` flag:\n\n    aiac terraform for eks -q --clipboard\n\nNote that aiac will not exit in this case until the contents of the clipboard\nchanges. 
This is due to the mechanics of the clipboard.\n\n#### Via Docker\n\nAll the same instructions apply, except you execute a `docker` image:\n\n    docker run \\\n        -it \\\n        -v ~\u002F.config\u002Faiac\u002Faiac.toml:~\u002F.config\u002Faiac\u002Faiac.toml \\\n        ghcr.io\u002Fgofireflyio\u002Faiac terraform for ec2\n\n#### As a Library\n\nYou can use `aiac` as a Go library:\n\n```go\npackage main\n\nimport (\n    \"context\"\n    \"log\"\n\n    \"github.com\u002Fgofireflyio\u002Faiac\u002Fv5\u002Flibaiac\"\n)\n\nfunc main() {\n    aiac, err := libaiac.New() \u002F\u002F Will load default configuration path.\n                               \u002F\u002F You can also do libaiac.New(\"\u002Fpath\u002Fto\u002Faiac.toml\")\n    if err != nil {\n        log.Fatalf(\"Failed creating aiac object: %s\", err)\n    }\n\n    ctx := context.TODO()\n\n    models, err := aiac.ListModels(ctx, \"backend name\")\n    if err != nil {\n        log.Fatalf(\"Failed listing models: %s\", err)\n    }\n\n    chat, err := aiac.Chat(ctx, \"backend name\", \"model name\")\n    if err != nil {\n        log.Fatalf(\"Failed starting chat: %s\", err)\n    }\n\n    res, err := chat.Send(ctx, \"generate terraform for eks\")\n    res, err = chat.Send(ctx, \"region must be eu-central-1\")\n}\n```\n\n### Upgrading from v4 to v5\n\nVersion 5.0.0 introduced a significant change to the `aiac` API in both the\ncommand line and library forms, as per feedback from the community.\n\n#### Changes in Configuration\n\nBefore v5, there was no concept of a configuration file or named backends. Users\nhad to provide all the information necessary to contact a specific LLM provider\nvia command line flags or environment variables, and the library allowed\ncreating a \"client\" object that could only talk with one LLM provider.\n\nBackends are now configured only via the configuration file. Refer to the\n[Configuration](#configuration) section for instructions. 
Provider-specific flags such as\n`--api-key`, `--aws-profile`, etc. (and their respective environment variables,\nif any) are no longer accepted.\n\nSince v5, backends are also named. Previously, the `--backend` and `-b` flags\nreferred to the name of the LLM provider (e.g. \"openai\", \"bedrock\", \"ollama\").\nNow they refer to whatever name you've defined in the configuration file:\n\n```toml\n[backends.my_local_llm]\ntype = \"ollama\"\nurl = \"http:\u002F\u002Flocalhost:11434\u002Fapi\"\n```\n\nHere we configure an Ollama backend named \"my_local_llm\". When you want to\ngenerate code with this backend, you will use `-b my_local_llm` rather than\n`-b ollama`, as multiple backends may exist for the same LLM provider.\n\n#### Changes in CLI Invokation\n\nBefore v5, the command line was split into three subcommands: `get`,\n`list-models` and `version`. Due to this hierarchical nature of the CLI, flags may\nnot have been accepted if they were provided in the \"wrong location\". For\nexample, the `--model` flag had to be provided after the word \"get\", otherwise\nit would not be accepted. In v5, there are no subcommands, so the position of\nthe flags no longer matters.\n\nThe `list-models` subcommand is replaced with the flag `--list-models`, and the\n`version` subcommand is replaced with the flag `--version`.\n\nBefore v5:\n\n    aiac -b ollama list-models\n\nSince v5:\n\n    aiac -b my_local_llm --list-models\n\nIn earlier versions, the word \"get\" was actually a subcommand and not truly part\nof the prompt sent to the LLM provider. 
Since v5, there is no \"get\" subcommand,\nso you no longer need to add this word to your prompts.\n\nBefore v5:\n\n    aiac get terraform for S3 bucket\n\nSince v5:\n\n    aiac terraform for S3 bucket\n\nThat said, adding either the word \"get\" or \"generate\" will not hurt, as v5 will\nsimply remove it if provided.\n\n#### Changes in Model Usage and Support\n\nBefore v5, the models for each LLM provider were hardcoded in each backend\nimplementation, and each provider had a hardcoded default model. This\nsignificantly limited the usability of the project, and required us to update\n`aiac` whenever new models were added or deprecated. On the other hand, we could\nprovide extra information about each model, such as its context lengths and\ntype, as we manually extracted them from the provider documentation.\n\nSince v5, `aiac` no longer hardcodes any models, including default ones. It\nwill not attempt to verify the model you select actually exists. The\n`--list-models` flag will now directly contact the chosen backend API to get a\nlist of supported models. Setting a model when generating code simply sends its\nname to the API as-is. Also, instead of hardcoding a default model for each\nbackend, users can define their own default models in the configuration file:\n\n```toml\n[backends.my_local_llm]\ntype = \"ollama\"\nurl = \"http:\u002F\u002Flocalhost:11434\u002Fapi\"\ndefault_model = \"mistral:latest\"\n```\n\nBefore v5, `aiac` supported both completion models and chat models. Since v5,\nit only supports chat models. Since none of the LLM provider APIs actually\nnote whether a model is a completion model or a chat model (or even an image\nor video model), the `--list-models` flag may list models which are not actually\nusable, and attempting to use them will result in an error being returned from\nthe provider API. 
The reason we've decided to drop support for completion models\nwas that they require setting a maximum amount of tokens for the API to\ngenerate (at least in OpenAI), which we can no longer do without knowing the\ncontext length. Chat models are not only a lot more useful, but they do not have\nthis limitation.\n\n#### Other Changes\n\nMost LLM provider APIs, when returning a response to a prompt, will include a\n\"reason\" for why the response ended where it did. Generally, the response should\nend because the model finished generating a response, but sometimes the response\nmay be truncated due to the model's context length or the user's token\nutilization. When the response did not \"stop\" because it finished generation,\nthe response is said to be \"truncated\". Before v5, if the API returned that the\nresponse was truncated, `aiac` returned an error. Since v5, an error is no longer\nreturned, as it seems that some providers do not return an accurate stop reason.\nInstead, the library returns the stop reason as part of its output for users to\ndecide how to proceed.\n\n## Example Output\n\nCommand line prompt:\n\n    aiac dockerfile for nodejs with comments\n\nOutput:\n\n```Dockerfile\nFROM node:latest\n\n# Create app directory\nWORKDIR \u002Fusr\u002Fsrc\u002Fapp\n\n# Install app dependencies\n# A wildcard is used to ensure both package.json AND package-lock.json are copied\n# where available (npm@5+)\nCOPY package*.json .\u002F\n\nRUN npm install\n# If you are building your code for production\n# RUN npm ci --only=production\n\n# Bundle app source\nCOPY . .\n\nEXPOSE 8080\nCMD [ \"node\", \"index.js\" ]\n```\n\n## Troubleshooting\n\nMost errors that you are likely to encounter are coming from the LLM provider\nAPI, e.g. OpenAI or Amazon Bedrock. 
Some common errors you may encounter are:\n\n- \"[insufficient_quota] You exceeded your current quota, please check your plan and billing details\":\n  As described in the [Instructions](#instructions) section, OpenAI is a paid API with a certain\n  amount of free credits given. This error means you have exceeded your quota,\n  whether free or paid. You will need to top up to continue usage.\n\n- \"[tokens] Rate limit reached...\":\n  The OpenAI API employs rate limiting as [described here](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Frate-limits\u002Frequest-increase). `aiac` only performs\n  individual requests and cannot work around or prevent these rate limits. If\n  you are using `aiac` programmatically, you will have to implement throttling\n  yourself. See [here](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-cookbook\u002Fblob\u002Fmain\u002Fexamples\u002FHow_to_handle_rate_limits.ipynb) for tips.\n\n## License\n\nThis code is published under the terms of the [Apache License 2.0](\u002FLICENSE).\n","# ![AIAC](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fgofireflyio_aiac_readme_74e21e9a5719.png) ![AIAC](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fgofireflyio_aiac_readme_96b6c86b5a57.png)\n\n人工智能\n基础设施即代码\n生成器。\n\n\u003Ckbd>[\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fgofireflyio_aiac_readme_982bc4241ac8.gif\" style=\"width: 100%; border: 1px solid silver;\" border=\"1\" alt=\"demo\">](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fgofireflyio_aiac_readme_982bc4241ac8.gif)\u003C\u002Fkbd>\n\n\u003C!-- vim-markdown-toc GFM -->\n\n* [描述](#description)\n* [用例与示例提示](#use-cases-and-example-prompts)\n    * [生成 IaC](#generate-iac)\n    * [生成配置文件](#generate-configuration-files)\n    * [生成 CI\u002FCD 流水线](#generate-cicd-pipelines)\n    * [生成策略即代码](#generate-policy-as-code)\n    * [生成实用工具](#generate-utilities)\n    * [命令行构建器](#command-line-builder)\n    * [查询构建器](#query-builder)\n* 
[使用说明](#instructions)\n    * [安装](#installation)\n    * [配置](#configuration)\n    * [使用](#usage)\n        * [命令行](#command-line)\n            * [列出模型](#listing-models)\n            * [生成代码](#generating-code)\n        * [通过 Docker](#via-docker)\n        * [作为库](#as-a-library)\n    * [从 v4 升级到 v5](#upgrading-from-v4-to-v5)\n        * [配置变化](#changes-in-configuration)\n        * [CLI 调用变化](#changes-in-cli-invokation)\n        * [模型使用与支持变化](#changes-in-model-usage-and-support)\n        * [其他变化](#other-changes)\n* [示例输出](#example-output)\n* [故障排除](#troubleshooting)\n* [许可证](#license)\n\n\u003C!-- vim-markdown-toc -->\n\n## 描述\n\n`aiac` 是一个库和命令行工具，用于通过 [LLM](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FLarge_language_model) 提供商（如\n[OpenAI](https:\u002F\u002Fopenai.com\u002F)、[Amazon Bedrock](https:\u002F\u002Faws.amazon.com\u002Fbedrock\u002F) 和 [Ollama](https:\u002F\u002Follama.ai\u002F)）生成 IaC（基础设施即代码）\n模板、配置、实用工具、查询等。\n\nCLI 允许您请求模型为不同场景生成模板（例如，“为 AWS EC2 获取 Terraform”）。它会向所选提供商组成适当的请求，并将生成的代码存储到文件中，和\u002F或打印到标准输出。\n\n用户可以使用简单的配置文件定义多个“后端”，以针对不同的 LLM 提供商和环境。\n\n## 用例与示例提示\n\n### 生成 IaC\n\n- `aiac terraform for a highly available eks`\n- `aiac pulumi golang for an s3 with sns notification`\n- `aiac cloudformation for a neptundb`\n\n### 生成配置文件\n\n- `aiac dockerfile for a secured nginx`\n- `aiac k8s manifest for a mongodb deployment`\n\n### 生成 CI\u002FCD 流水线\n\n- `aiac jenkins pipeline for building nodejs`\n- `aiac github action that plans and applies terraform and sends a slack notification`\n\n### 生成策略即代码\n\n- `aiac opa policy that enforces readiness probe at k8s deployments`\n\n### 生成实用工具\n\n- `aiac python code that scans all open ports in my network`\n- `aiac bash script that kills all active terminal sessions`\n\n### 命令行构建器\n\n- `aiac kubectl that gets ExternalIPs of all nodes`\n- `aiac awscli that lists instances with public IP address and Name`\n\n### 查询构建器\n\n- `aiac mongo query that aggregates all documents by created date`\n- `aiac elastic query that 
applies a condition on a value greater than some value in aggregation`\n- `aiac sql query that counts the appearances of each row in one table in another table based on an id column`\n\n## 使用说明\n\n在安装\u002F运行 `aiac` 之前，您可能需要配置您的 LLM 提供商或收集一些信息。\n\n对于 **OpenAI**，您需要一个 API 密钥才能使 `aiac` 正常工作。有关更多信息，请参阅 [OpenAI 的定价模型](https:\u002F\u002Fopenai.com\u002Fpricing?trk=public_post-text)。如果您未使用 OpenAI 托管的 API（例如，您可能正在使用 Azure OpenAI），则还需要提供 API URL 端点。\n\n对于 **Amazon Bedrock**，您需要一个启用了 Bedrock 的 AWS 账户，并且能够访问相关模型。有关更多信息，请参阅 [Bedrock 文档](https:\u002F\u002Fdocs.aws.amazon.com\u002Fbedrock\u002Flatest\u002Fuserguide\u002Fwhat-is-bedrock.html)。\n\n对于 **Ollama**，您只需要本地 Ollama API 服务器的 URL，包括 \u002Fapi 路径前缀。默认值为 http:\u002F\u002Flocalhost:11434\u002Fapi。Ollama 不提供身份验证机制，但在使用代理服务器的情况下可能会有身份验证机制。目前 `aiac` 尚不支持此场景。\n\n### 安装\n\n通过 `brew`：\n\n    brew tap gofireflyio\u002Faiac https:\u002F\u002Fgithub.com\u002Fgofireflyio\u002Faiac\n    brew install aiac\n\n使用 `docker`：\n\n    docker pull ghcr.io\u002Fgofireflyio\u002Faiac\n\n使用 `go install`：\n\n    go install github.com\u002Fgofireflyio\u002Faiac\u002Fv5@latest\n\n或者，您可以克隆仓库并从源代码构建：\n\n    git clone https:\u002F\u002Fgithub.com\u002Fgofireflyio\u002Faiac.git\n    go build\n\n`aiac` 也已在 Arch Linux 用户仓库 (AUR) 中提供，分别为 [aiac](https:\u002F\u002Faur.archlinux.org\u002Fpackages\u002Faiac)（从源代码编译）和 [aiac-bin](https:\u002F\u002Faur.archlinux.org\u002Fpackages\u002Faiac-bin)（下载已编译的可执行文件）。\n\n### 配置\n\n`aiac` 通过 TOML 配置文件进行配置。除非特别指定路径，否则 `aiac` 会在用户的 [XDG_CONFIG_HOME](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FFreedesktop.org#User_directories) 目录中查找配置文件，具体路径为 `${XDG_CONFIG_HOME}\u002Faiac\u002Faiac.toml`。在类 Unix 操作系统上，默认路径为 `~\u002F.config\u002Faiac\u002Faiac.toml`。如果您想使用其他路径，请使用 `--config` 或 `-c` 标志指定文件路径。\n\n配置文件定义了一个或多个命名后端。每个后端都有一个类型，用于标识 LLM 提供商（例如，“openai”、“bedrock”、“ollama”），以及与该提供商相关的各种设置。可以配置同一 LLM 提供商的多个后端，例如用于“staging”和“production”环境。\n\n以下是一个示例配置文件：\n\n```toml\ndefault_backend = \"official_openai\"   # 
当未选择后端时的默认后端\n\n[backends.official_openai]\ntype = \"openai\"\napi_key = \"API KEY\"\n# Or\n\n# api_key = \"$OPENAI_API_KEY\"\ndefault_model = \"gpt-4o\"              # 此后端默认使用的模型\n\n[backends.azure_openai]\ntype = \"openai\"\nurl = \"https:\u002F\u002Ftenant.openai.azure.com\u002Fopenai\u002Fdeployments\u002Ftest\"\napi_key = \"API KEY\"\napi_version = \"2023-05-15\"            # 可选\nauth_header = \"api-key\"               # 默认为 \"Authorization\"\nextra_headers = { X-Header-1 = \"one\", X-Header-2 = \"two\" }\n\n[backends.aws_staging]\ntype = \"bedrock\"\naws_profile = \"staging\"\naws_region = \"eu-west-2\"\n\n[backends.aws_prod]\ntype = \"bedrock\"\naws_profile = \"production\"\naws_region = \"us-east-1\"\ndefault_model = \"amazon.titan-text-express-v1\"\n\n[backends.localhost]\ntype = \"ollama\"\nurl = \"http:\u002F\u002Flocalhost:11434\u002Fapi\"     # 这是默认值\n```\n\n注意事项：\n\n1. 每个后端都可以设置一个默认模型（通过配置键 `default_model`）。如果未提供，默认情况下，未指定模型的调用将会失败。\n2. 类型为 \"openai\" 的后端可以通过设置 `auth_header` 来更改用于授权的头部字段。默认为 \"Authorization\"，但 Azure OpenAI 使用的是 \"api-key\"。当头部字段为 \"Authorization\" 或 \"Proxy-Authorization\" 时，请求中的值会是 \"Bearer API_KEY\"；如果是其他值，则直接使用 \"API_KEY\"。\n3. 
类型为 \"openai\" 和 \"ollama\" 的后端支持通过 `extra_headers` 设置向 aiac 发出的每个请求添加额外的头部信息。\n\n### 使用方法\n\n创建配置文件后，您就可以开始生成代码了，只需引用后端名称即可。您可以从命令行使用 `aiac`，也可以将其作为 Go 库来使用。\n\n#### 命令行\n\n##### 列出模型\n\n在开始生成代码之前，您可以列出某个后端中所有可用的模型：\n\n    aiac -b aws_prod --list-models\n\n这将返回所有可用模型的列表。请注意，根据 LLM 提供商的不同，此列表可能包含对该特定账户不可访问或未启用的模型。\n\n##### 生成代码\n\n默认情况下，aiac 会将提取出的代码打印到标准输出，并打开一个交互式 shell，允许与模型对话、重试请求、将输出保存到文件、复制代码到剪贴板等操作：\n\n    aiac terraform for AWS EC2\n\n这将使用配置文件中的默认后端以及该后端的默认模型（前提是已定义）。若要使用特定后端，请提供 `--backend` 或 `-b` 标志：\n\n    aiac -b aws_prod terraform for AWS EC2\n\n若要使用特定模型，请提供 `--model` 或 `-m` 标志：\n\n    aiac -m gpt-4-turbo terraform for AWS EC2\n\n您可以让 aiac 将生成的代码保存到指定文件中：\n\n    aiac terraform for eks --output-file=eks.tf\n\n您还可以使用标志将完整的 Markdown 输出一并保存：\n\n    aiac terraform for eks --output-file=eks.tf --readme-file=eks.md\n\n如果您希望 aiac 打印完整的 Markdown 输出而不是提取出的代码到标准输出，可以使用 `-f` 或 `--full` 标志：\n\n    aiac terraform for eks -f\n\n您也可以在非交互模式下使用 aiac，仅将生成的代码打印到标准输出，并可选择使用上述标志将其保存到文件，只需提供 `-q` 或 `--quiet` 标志：\n\n    aiac terraform for eks -q\n\n在静默模式下，您还可以通过提供 `--clipboard` 标志将生成的代码发送到剪贴板：\n\n    aiac terraform for eks -q --clipboard\n\n请注意，在这种情况下，aiac 不会退出，直到剪贴板的内容发生变化。这是由于剪贴板的工作机制所致。\n\n#### 通过 Docker\n\n所有相同的指令都适用，只是您需要运行一个 Docker 镜像：\n\n    docker run \\\n        -it \\\n        -v ~\u002F.config\u002Faiac\u002Faiac.toml:~\u002F.config\u002Faiac\u002Faiac.toml \\\n        ghcr.io\u002Fgofireflyio\u002Faiac terraform for ec2\n\n#### 作为库\n\n您也可以将 `aiac` 作为 Go 库来使用：\n\n```go\npackage main\n\nimport (\n    \"context\"\n    \"log\"\n    \"os\"\n\n    \"github.com\u002Fgofireflyio\u002Faiac\u002Fv5\u002Flibaiac\"\n)\n\nfunc main() {\n    aiac, err := libaiac.New() \u002F\u002F 将加载默认配置路径。\n                               \u002F\u002F 您也可以指定路径：libaiac.New(\"\u002Fpath\u002Fto\u002Faiac.toml\")\n    if err != nil {\n        log.Fatalf(\"创建 aiac 对象失败: %s\", err)\n    }\n\n    ctx := context.TODO()\n\n    models, err := aiac.ListModels(ctx, \"后端名称\")\n    if err != nil {\n        
log.Fatalf(\"列出模型失败: %s\", err)\n    }\n\n    chat, err := aiac.Chat(ctx, \"后端名称\", \"模型名称\")\n    if err != nil {\n        log.Fatalf(\"启动聊天失败: %s\", err)\n    }\n\n    res, err := chat.Send(ctx, \"为 eks 生成 Terraform 配置\")\n    if err != nil {\n        log.Fatalf(\"发送提示失败: %s\", err)\n    }\n\n    \u002F\u002F 后续消息会携带对话历史，可用于迭代修改结果\n    res, err = chat.Send(ctx, \"区域必须是 eu-central-1\")\n    if err != nil {\n        log.Fatalf(\"发送提示失败: %s\", err)\n    }\n\n    _ = res \u002F\u002F 此处使用 res 中的模型响应（示例从略）\n}\n```\n\n### 从 v4 升级到 v5\n\n版本 5.0.0 根据社区反馈，对命令行和库形式的 `aiac` API 进行了重大更改。\n\n#### 配置的变化\n\n在 v5 之前，没有配置文件或命名后端的概念。用户必须通过命令行参数或环境变量提供与特定 LLM 提供商通信所需的所有信息，而库则允许创建只能与一个 LLM 提供商对话的“客户端”对象。\n\n现在，后端仅通过配置文件进行配置。有关说明，请参阅[配置](#configuration)部分。诸如 `--api-key`、`--aws-profile` 等特定于提供商的标志（以及相应的环境变量，如果有的话）已不再被接受。\n\n自 v5 起，后端也有了名称。以前，`--backend` 和 `-b` 标志指的是 LLM 提供商的名称（例如，“openai”、“bedrock”、“ollama”）。现在，它们指你在配置文件中定义的任何名称：\n\n```toml\n[backends.my_local_llm]\ntype = \"ollama\"\nurl = \"http:\u002F\u002Flocalhost:11434\u002Fapi\"\n```\n\n在这里，我们配置了一个名为“my_local_llm”的 Ollama 后端。当你想使用这个后端生成代码时，将使用 `-b my_local_llm` 而不是 `-b ollama`，因为同一个 LLM 提供商可能有多个后端。\n\n#### CLI 调用的变化\n\n在 v5 之前，命令行分为三个子命令：`get`、`list-models` 和 `version`。由于 CLI 的这种层次结构，如果参数位于“错误的位置”，可能会不被接受。例如，`--model` 标志必须紧跟在“get”之后，否则不会被接受。而在 v5 中，不再有子命令，因此参数的位置不再重要。\n\n`list-models` 子命令已被 `--list-models` 标志取代，`version` 子命令已被 `--version` 标志取代。\n\nv5 之前：\n\n    aiac -b ollama list-models\n\n自 v5 起：\n\n    aiac -b my_local_llm --list-models\n\n在早期版本中，“get”实际上是一个子命令，并不是真正发送给 LLM 提供商的提示的一部分。自 v5 起，不再有“get”子命令，因此你不再需要在提示中添加这个词。\n\nv5 之前：\n\n    aiac get terraform for S3 bucket\n\n自 v5 起：\n\n    aiac terraform for S3 bucket\n\n不过，添加“get”或“generate”这两个词并不会有任何问题，因为 v5 会在接收到这些词时将其直接移除。\n\n#### 模型使用和支持的变化\n\n在 v5 之前，每个 LLM 提供商的模型都硬编码在每个后端实现中，且每个提供商都有一个硬编码的默认模型。这极大地限制了项目的可用性，每当有新模型被添加或弃用时，我们都必须更新 `aiac`。另一方面，我们可以手动从提供商文档中提取每个模型的上下文长度和类型等信息，从而提供更详细的信息。\n\n自 v5 起，`aiac` 不再硬编码任何模型，包括默认模型。它不会再尝试验证你选择的模型是否真的存在。`--list-models` 标志现在会直接联系所选后端的 API，以获取支持的模型列表。在生成代码时设置模型，只是将模型名称原样发送给 API。此外，不再为每个后端硬编码默认模型，用户可以在配置文件中定义自己的默认模型：\n\n```toml\n[backends.my_local_llm]\ntype = \"ollama\"\nurl = \"http:\u002F\u002Flocalhost:11434\u002Fapi\"\ndefault_model = 
\"mistral:latest\"\n```\n\n在 v5 之前，`aiac` 同时支持完成模型和聊天模型。自 v5 起，它只支持聊天模型。由于没有任何 LLM 提供商的 API 会明确标注某个模型是完成模型还是聊天模型（甚至是否是图像或视频模型），`--list-models` 标志可能会列出一些实际上无法使用的模型，尝试使用这些模型会导致来自提供商 API 的错误。我们决定放弃对完成模型的支持，是因为完成模型需要为 API 设置最大令牌数来生成内容（至少在 OpenAI 中是这样），而我们无法在不知道上下文长度的情况下做到这一点。相比之下，聊天模型不仅更有用，而且没有这一限制。\n\n#### 其他变化\n\n大多数 LLM 提供商的 API 在返回提示响应时，都会包含一个“停止原因”，说明为什么响应会在那里结束。通常，响应应该是因为模型完成了生成而结束，但有时响应可能会因为模型的上下文长度或用户的令牌使用量而被截断。当响应并非因生成完成而“停止”时，就被称为“截断”。在 v5 之前，如果 API 返回响应被截断，`aiac` 会返回错误。自 v5 起，不再返回错误，因为似乎有些提供商并没有返回准确的停止原因。相反，库会将停止原因作为输出的一部分返回，以便用户自行决定如何继续。\n\n## 示例输出\n\n命令行提示：\n\n    aiac dockerfile for nodejs with comments\n\n输出：\n\n```Dockerfile\nFROM node:latest\n\n# 创建应用目录\nWORKDIR \u002Fusr\u002Fsrc\u002Fapp\n\n# 安装应用依赖\n# 使用通配符确保无论是否有 package-lock.json，package.json 都会被复制过来（npm@5+）\nCOPY package*.json .\u002F\n\nRUN npm install\n# 如果您正在为生产构建代码\n# RUN npm ci --only=production\n\n# 打包应用源代码\nCOPY . .\n\nEXPOSE 8080\nCMD [ \"node\", \"index.js\" ]\n```\n\n## 故障排除\n\n你可能会遇到的大多数错误都来自 LLM 提供商的 API，例如 OpenAI 或 Amazon Bedrock。一些常见的错误包括：\n\n- “[insufficient_quota] 您已超出当前配额，请检查您的计划和账单详情”：\n  如[说明](#instructions)部分所述，OpenAI 是一项付费 API，会提供一定数量的免费额度。此错误表示您已超出免费或付费配额。您需要充值才能继续使用。\n\n- “[tokens] 已达到速率限制...”：\n  OpenAI API 采用速率限制机制，具体说明请参见[此处](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Frate-limits\u002Frequest-increase)。`aiac` 只执行单独的请求，无法绕过或防止这些速率限制。如果你以编程方式使用 `aiac`，则需要自行实施限流措施。有关提示，请参阅[这里](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-cookbook\u002Fblob\u002Fmain\u002Fexamples\u002FHow_to_handle_rate_limits.ipynb)。\n\n## 许可证\n\n本代码根据[Apache License 2.0](\u002FLICENSE)条款发布。","# AIAC 快速上手指南\n\n`aiac` 是一款基于大语言模型（LLM）的基础设施即代码（IaC）生成工具。它支持通过自然语言提示词，自动生成 Terraform、CloudFormation、Dockerfile、K8s Manifest、CI\u002FCD 流水线等代码模板。支持的后端包括 OpenAI、Amazon Bedrock 和 Ollama（本地部署）。\n\n## 环境准备\n\n在开始之前，请确保满足以下条件：\n\n*   **操作系统**：macOS, Linux (包括 Arch Linux), 或任何支持 Docker 的环境。\n*   **前置依赖**：\n    *   若使用源码安装：需安装 [Go](https:\u002F\u002Fgo.dev\u002F) (建议 1.20+)。\n    *   若使用 
Homebrew (macOS)：需安装 [Homebrew](https:\u002F\u002Fbrew.sh\u002F)。\n*   **LLM 凭证**：\n    *   **OpenAI**: 需要有效的 `API Key`。\n    *   **Amazon Bedrock**: 需要配置好权限的 AWS 账号及凭证。\n    *   **Ollama**: 需在本地运行 Ollama 服务（默认端口 `11434`），无需 API Key。\n\n## 安装步骤\n\n选择以下任意一种方式进行安装：\n\n### 方式一：使用 Homebrew (推荐 macOS 用户)\n\n```bash\nbrew tap gofireflyio\u002Faiac https:\u002F\u002Fgithub.com\u002Fgofireflyio\u002Faiac\nbrew install aiac\n```\n\n### 方式二：使用 Go 安装\n\n```bash\ngo install github.com\u002Fgofireflyio\u002Faiac\u002Fv5@latest\n```\n\n### 方式三：使用 Docker\n\n```bash\ndocker pull ghcr.io\u002Fgofireflyio\u002Faiac\n```\n\n### 方式四：Arch Linux (AUR)\n\n```bash\n# 编译安装\nyay -S aiac\n# 或直接下载二进制包\nyay -S aiac-bin\n```\n\n## 基本使用\n\n### 1. 配置文件设置\n\n`aiac` v5 版本强制要求通过 TOML 配置文件管理后端。默认配置路径为 `~\u002F.config\u002Faiac\u002Faiac.toml`。\n\n创建配置文件并填入你的 LLM 信息：\n\n```toml\n# ~\u002F.config\u002Faiac\u002Faiac.toml\n\n# 设置默认后端名称\ndefault_backend = \"my_openai\"\n\n[backends.my_openai]\ntype = \"openai\"\napi_key = \"sk-...\"  # 替换为你的 OpenAI API Key，或使用 \"$OPENAI_API_KEY\"\ndefault_model = \"gpt-4o\"\n\n# 如果使用本地 Ollama\n[backends.local_ollama]\ntype = \"ollama\"\nurl = \"http:\u002F\u002Flocalhost:11434\u002Fapi\"\ndefault_model = \"llama3\"\n```\n\n> **注意**：如果你使用的是国内网络访问 OpenAI，可能需要在配置中通过代理环境变量或在 `extra_headers` 中调整，或者直接使用部署在国内的兼容 OpenAI 接口的模型服务（修改 `url` 地址即可）。\n\n### 2. 
生成代码\n\n配置完成后，即可通过自然语言生成代码。\n\n**交互式生成（默认模式）：**\n运行后会进入交互界面，可多次对话、重试或直接保存文件。\n\n```bash\naiac terraform for AWS EC2\n```\n\n**指定后端和模型：**\n\n```bash\naiac -b local_ollama -m llama3 dockerfile for a secured nginx\n```\n\n**非交互式模式（直接输出代码到终端）：**\n适合脚本调用或快速查看。\n\n```bash\naiac terraform for eks -q\n```\n\n**生成并保存到文件：**\n\n```bash\n# 生成代码并保存为 main.tf\naiac terraform for a highly available eks --output-file=main.tf\n\n# 同时保存代码文件和说明文档\naiac pulumi golang for an s3 with sns notification --output-file=main.go --readme-file=README.md\n```\n\n**复制到剪贴板：**\n\n```bash\naiac kubectl that gets ExternalIPs of all nodes -q --clipboard\n```\n\n### 3. 查看可用模型\n\n列出指定后端下所有可用的模型：\n\n```bash\naiac -b my_openai --list-models\n```","某初创公司的 DevOps 工程师需要在周五下班前紧急搭建一套高可用的 AWS EKS 集群，并配置相应的 CI\u002FCD 流水线以支持下周的产品发布。\n\n### 没有 aiac 时\n- **文档检索耗时**：工程师需在 Terraform 官方文档、AWS 指南和 GitHub 示例间反复切换，花费数小时拼凑基础架构代码。\n- **配置易出错**：手动编写复杂的 Kubernetes Manifest 和安全组策略时，极易因缩进错误或参数遗漏导致部署失败。\n- **脚本开发缓慢**：为验证环境编写的网络扫描 Python 脚本和清理会话的 Bash 脚本需从零开始编码，占用大量核心工作时间。\n- **标准化难统一**：不同成员编写的 IaC 模板风格各异，缺乏统一的代码规范，增加了后续维护和技术审查的难度。\n- **响应突发需求慢**：面对临时增加的监控查询或策略变更（如 OPA 策略），无法快速生成可用代码，拖慢整体交付节奏。\n\n### 使用 aiac 后\n- **一键生成模板**：只需输入\"aiac terraform for a highly available eks\"，几秒钟即可获得完整且经过优化的基础设施代码。\n- **自动修正语法**：aiac 直接生成格式正确的 Dockerfile 和 K8s 配置文件，消除了手动编码带来的低级语法错误。\n- **即时获取工具脚本**：通过\"aiac python code that scans all open ports\"等指令，瞬间得到可执行的工具脚本，无需手动编写。\n- **代码风格统一**：无论生成何种语言或框架的代码，aiac 均输出符合最佳实践的标准格式，确保团队代码库的一致性。\n- **敏捷应对变更**：面对新的查询需求或策略调整，利用 aiac 的查询构建器功能即刻生成 SQL 或 OPA 策略，大幅缩短响应时间。\n\naiac 将原本需要数天的基础设施编码工作压缩至分钟级，让工程师从繁琐的样板代码中解放出来，专注于架构优化与业务创新。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fgofireflyio_aiac_dfe0b5ba.png","gofireflyio","Firefly AI","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fgofireflyio_8bd45e9f.png","Automated Cloud 
Resilience",null,"contact@firefly.ai","fireflydotai","https:\u002F\u002Ffirefly.ai","https:\u002F\u002Fgithub.com\u002Fgofireflyio",[85,89,93],{"name":86,"color":87,"percentage":88},"Go","#00ADD8",96,{"name":90,"color":91,"percentage":92},"Ruby","#701516",3.8,{"name":94,"color":95,"percentage":96},"Dockerfile","#384d54",0.3,3793,295,"2026-04-01T17:14:33","Apache-2.0","Linux, macOS","未说明",{"notes":104,"python":102,"dependencies":105},"该工具是基于 Go 语言开发的命令行工具和库，并非本地运行的 AI 模型，因此无需 GPU 或特定显存。它通过 API 调用外部大模型服务（如 OpenAI、Amazon Bedrock）或本地 Ollama 服务来生成代码。安装方式支持 Homebrew、Docker、Go install 或源码编译。使用前需配置 TOML 文件以设置后端服务的 API Key 或连接地址。",[106,107],"Go (用于编译\u002F安装)","Docker (可选)",[13,14,15,26],[110,111,112,113,114,115,116,117,118],"ai","chatgpt","iac","openai","pulumi","terraform","amazon-bedrock","ollama","llms","2026-03-27T02:49:30.150509","2026-04-06T05:37:52.125262",[122,127,132,137,142,147,152],{"id":123,"question_zh":124,"answer_zh":125,"source_url":126},9706,"在 Ubuntu 上构建时遇到 'error obtaining VCS status' 错误怎么办？","这通常是 Go 仓库的临时性问题。您可以尝试使用 `-buildvcs=false` 标志来禁用 VCS 标记进行构建，命令为：`go build -buildvcs=false`。如果问题仍然存在，可能是暂时性的网络或仓库问题，稍后重试即可。","https:\u002F\u002Fgithub.com\u002Fgofireflyio\u002Faiac\u002Fissues\u002F17",{"id":128,"question_zh":129,"answer_zh":130,"source_url":131},9707,"在 Mac M2 上使用 Homebrew 安装后运行出现 'nil pointer dereference' 恐慌错误如何解决？","如果您试图使用 Bedrock 后端但遇到了此错误，可能是因为未正确指定后端。请尝试添加 `-b` 标志，或者设置 `AIAC_BACKEND` 环境变量来明确指定后端。例如：`aiac -b ...` 或 `export AIAC_BACKEND=bedrock`。即使打算使用 Bedrock，某些版本可能仍需要检查 OPENAI_API_KEY 的配置。","https:\u002F\u002Fgithub.com\u002Fgofireflyio\u002Faiac\u002Fissues\u002F84",{"id":133,"question_zh":134,"answer_zh":135,"source_url":136},9708,"如何将 aiac 生成的代码直接复制到剪贴板？","该功能已在最新版本中发布。您现在可以直接使用内置的“复制到剪贴板”选项，无需手动复制。请确保您已更新到最新版本并尝试使用该功能。","https:\u002F\u002Fgithub.com\u002Fgofireflyio\u002Faiac\u002Fissues\u002F36",{"id":138,"question_zh":139,"answer_zh":140,"source_url":141},9709,"aiac 工具到底有什么特别之处？它真的能生成基础设施即代码（IaC）吗？","目前，aiac 主要作为一个模型前端，其核心库并不专门针对 
IaC，唯一的 IaC 相关逻辑在于提示词格式化。虽然最初计划加入输出验证（如确保 Terraform 模板语法正确），但因依赖过多而暂未实现。目前它更像是一个通用的代码生成助手，团队正在探索更多通用的 IaC 功能，欢迎用户提出建议。","https:\u002F\u002Fgithub.com\u002Fgofireflyio\u002Faiac\u002Fissues\u002F124",{"id":143,"question_zh":144,"answer_zh":145,"source_url":146},9710,"使用 Azure OpenAI 等非默认后端时，命令行参数太多导致使用不便，有解决办法吗？","确实，当前需要在子命令后传递大量标志（如 `--api-version`, `--url`）导致别名难以使用。解决方案包括：1. 使用环境变量代替后端相关的标志；2. 开发团队计划近期支持配置文件来存储这些设置，从而简化命令。目前可以先通过设置环境变量来缓解这一问题。","https:\u002F\u002Fgithub.com\u002Fgofireflyio\u002Faiac\u002Fissues\u002F91",{"id":148,"question_zh":149,"answer_zh":150,"source_url":151},9711,"运行 `go install` 命令时提示需要版本号怎么办？","当当前目录不在模块中时，`go install` 需要指定版本。请按照错误提示，在命令末尾添加 `@latest` 来安装最新版本。正确的命令是：`go install github.com\u002Fgofireflyio\u002Faiac\u002Fv2@latest`（请注意根据实际版本号调整，如 v4）。","https:\u002F\u002Fgithub.com\u002Fgofireflyio\u002Faiac\u002Fissues\u002F28",{"id":153,"question_zh":154,"answer_zh":155,"source_url":146},9712,"如何访问 Llama3、Gemma 等其他大语言模型？","硬编码的模型列表将在下一个版本中移除，届时将支持更多模型。目前如果您想使用非默认模型，可以尝试通过配置文件或环境变量（一旦支持）来指定，或者关注后续更新以获取对更多 LLM（如 Llama3, Gemma）的原生支持。",[157,162,167,172,177,182,187,192,197,202,207,212,217,222,227,232,237,242],{"id":158,"version":159,"summary_zh":160,"released_at":161},106975,"v5.3.0","## Changelog\n* 2a02387a3f1e555e4e0e6682013f50705e224dfe Fix --timeout help description\n* 9174f11742731f9dbc49d7fc9fabc64602168149 add the --timeout flag\n* 5532917c7f78c551aaaab2308935294b93d64bc3 Adding support for more config properties\n* 55dde84be01234fdada5b042d053b7393d496202 Enabling Config Values to Use ENV Variables\n* 3055cef6449180281872b8d9bc70123580281f9d Brew formula update for aiac version v5.2.1\n\n","2024-10-29T20:46:02",{"id":163,"version":164,"summary_zh":165,"released_at":166},106976,"v5.2.1","## Changelog\n* a070ae6 openai: do not force API key, fix error processing\n* 7c28aed Brew formula update for aiac version v5.2.0\n\n","2024-07-08T16:09:41",{"id":168,"version":169,"summary_zh":170,"released_at":171},106977,"v5.2.0","## 
Changelog\n* 374205d Allow extra headers for specific conversations too\n* 1ad9e02 Brew formula update for aiac version v5.1.1\n\n","2024-07-01T15:11:56",{"id":173,"version":174,"summary_zh":175,"released_at":176},106978,"v5.1.1","## Changelog\n* 8ac377f Bugfix: main API doesn't expose previous messages cap\n* ee8e8d8 Brew formula update for aiac version v5.1.0\n\n","2024-07-01T14:20:28",{"id":178,"version":179,"summary_zh":180,"released_at":181},106979,"v5.1.0","## Changelog\n* b209dc1 Allow conversations to have \"previous messages\"\n* ae51057 Support sending extra headers and changing auth header\n* f2dee73 Add note about AUR packages\n* f77de0f Brew formula update for aiac version v5.0.1\n\n","2024-07-01T13:12:32",{"id":183,"version":184,"summary_zh":185,"released_at":186},106980,"v5.0.1","## Changelog\n* 61a9e78 Fix wrong version major version in goreleaser.yml\n* dde1322 Fix mistakes in README.md\n* a44cdcc Brew formula update for aiac version v5.0.0\n\n","2024-06-28T16:09:09",{"id":188,"version":189,"summary_zh":190,"released_at":191},106981,"v5.0.0","## Changelog\n* 94ed02e Introduce config files, multiple backends, refactor\n* fee4609 Move Homebrew formula to HomebrewFormula\n* 87c94cc Brew formula update for aiac version v4.3.0\n\n","2024-06-26T16:24:11",{"id":193,"version":194,"summary_zh":195,"released_at":196},106982,"v4.3.0","## Changelog\n* c7d259d Fix release workflow\n* 0fc48d4 Add new GPT models to OpenAI backend\n* 87b19f7 Update README: mistral is the default for ollama\n* 4948563 Added ModelMistral and made it default for OLLAMA (#90)\n\n","2024-06-11T13:24:04",{"id":198,"version":199,"summary_zh":200,"released_at":201},106983,"v4.2.0","## Changelog\n* bfc9a45 Add support for the Ollama backend\n* 9d24ddf Bugfix: segfault when OpenAI API key not provided\n\n","2024-02-06T16:24:07",{"id":203,"version":204,"summary_zh":205,"released_at":206},106984,"v4.1.0","## Changelog\n* 043a0d0 Add ability to save and continue chatting\n* ab97758 Remove 
deprecated OpenAI models\n\n","2024-01-02T12:49:31",{"id":208,"version":209,"summary_zh":210,"released_at":211},106985,"v4.0.0","## Changelog\n* 5b239c5 Update major version in goreleaser.yml\n* 4c6ca4b Add support for Amazon Bedrock\n\n","2023-12-26T15:28:06",{"id":213,"version":214,"summary_zh":215,"released_at":216},106986,"v2.5.0","## Changelog\n* 75a0ab3 Add support for azure open ai api adaptations (#59)\n* 43a6574 Change clipboard package (#54)\n* 209a69f Update goreleaser.yml (#52)\n\n","2023-06-08T14:24:22",{"id":218,"version":219,"summary_zh":220,"released_at":221},106987,"v2.4.0","## Changelog\n* 13c7cc3 Change clipboard package\n* 209a69f Update goreleaser.yml (#52)\n\n","2023-05-07T09:49:23",{"id":223,"version":224,"summary_zh":225,"released_at":226},106988,"v2.3.0","## Changelog\n* a2f3a71 Allow copying to clipboard in quiet mode\n* 90d1e14 Add the ability to ask with a conversation history to improve results.\n* 55b942b Add ability to copy to clipboard in interactive mode\n* b593087 Add support for GPT-4 models\n* 22f0a30 Add a check for --full flag when constructing a prompt\n* 9cc08e6 Allow returning full Markdown response to stdout\n* 2b58074 Update README.md\n\n","2023-04-27T11:27:28",{"id":228,"version":229,"summary_zh":230,"released_at":231},106989,"v2.2.0","## Changelog\n* 94a936e Introduce chat mode, refactor API (#32)\n\n","2023-03-22T12:53:35",{"id":233,"version":234,"summary_zh":235,"released_at":236},106990,"v2.1.0","## Changelog\n* f89e9b7 Switch to ChatGPT API by default, allow other models (#27)\n* 7b39661 Merge pull request #26 from gofireflyio\u002Fido-pricing\n* 7a4127d Add important information to the README\n\n","2023-03-09T15:21:36",{"id":238,"version":239,"summary_zh":240,"released_at":241},106991,"v2.0.0","## Changelog\n* e42a13c Merge pull request #22 from gofireflyio\u002Fido-api\n* acbe643 Remove ChatGPT support, document code, small refactor\n* 4409dc3 Install the CodeSee workflow. 
Learn more at https:\u002F\u002Fdocs.codesee.io (#13)\n* 06b6c73 Merge pull request #11 from gofireflyio\u002Freadme-more-usecase\n* 7148254 resloving pr comments\n* 27ba918 addtional use cases and support info\n* 640def0 Update README.md\n* 855feb3 adding more example to the readme (#10)\n* 54d5267 Update README.md (#9)\n* 8f43cba Update README.md\n\n","2023-02-16T15:03:07",{"id":243,"version":244,"summary_zh":245,"released_at":246},106992,"v1.0.0","## Changelog\n* 0419245 Support brew installation (#7)\n* a8ee156 Set gofireflyio\u002Faiac image name (#6)\n* 842d739 Fix ci (#5)\n* e603828 Add Dockerfile (#4)\n* dd6a9e0 Remove SBOM (#3)\n* a3f4278 Add go releaser (#2)\n* 4f736d3 Add loader and texts inputs for a better UX (#1)\n* 278c7e6 Add support for OpenAI API\n* da3edd0 Introduce aiac, an AI-generate IaC tool\n* 9de3798 Initial commit\n\n","2022-12-12T18:37:11"]