[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"tool-automata--aicodeguide":3,"similar-automata--aicodeguide":52},{"id":4,"github_repo":5,"name":6,"description_en":7,"description_zh":8,"ai_summary_zh":8,"readme_en":9,"readme_zh":10,"quickstart_zh":11,"use_case_zh":12,"hero_image_url":13,"owner_login":14,"owner_name":15,"owner_avatar_url":16,"owner_bio":17,"owner_company":18,"owner_location":19,"owner_email":20,"owner_twitter":21,"owner_website":22,"owner_url":23,"languages":24,"stars":25,"forks":26,"last_commit_at":27,"license":24,"difficulty_score":28,"env_os":29,"env_gpu":29,"env_ram":29,"env_deps":30,"category_tags":33,"github_topics":38,"view_count":46,"oss_zip_url":24,"oss_zip_packed_at":24,"status":47,"created_at":48,"updated_at":49,"faqs":50,"releases":51},4355,"automata\u002Faicodeguide","aicodeguide","AI Code Guide is a roadmap to start coding with AI","aicodeguide 是一份专为 AI 编程时代打造的实用路线图，旨在帮助开发者高效利用人工智能辅助编码，甚至让 AI 独立完成任务。面对每周涌现的新大模型、新工具以及\"Vibe Coding\"等新概念，学习资源往往分散且难以追踪，aicodeguide 将这些碎片化信息整合为一站式的指南，清晰梳理了当前的最佳实践与核心工具。\n\n该项目主要解决了开发者在技术快速迭代中难以保持同步的痛点，既帮助传统程序员掌握如何与 AI 协作以提升日常效率，也为零基础用户提供了构建软件产品的入门路径，同时客观区分了真正的技术价值与市场炒作。内容采用类似 FAQ 的结构组织，便于用户按需查阅，并持续更新来自行业专家的最新洞察。\n\n无论是希望引入 AI 助手的资深工程师，还是想尝试\"Vibe Coding\"创建 SaaS 产品的初学者，都能从中获益。其独特亮点在于不仅罗列工具，更强调思维模式的转变，引导用户从“编写代码”转向“指导 AI”，并提供了丰富的外部资源链接，如知名技术博客与播客，确保用户能获取最前沿的行业动态。","\u003Cdiv align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Fi.imgur.com\u002Fv0Tx6am.png\" alt=\"AI Code Guide\" \u002F>\n  \u003Cp align=\"center\">\n    By \u003Ca href=\"https:\u002F\u002Fx.com\u002Faut0mata\">Vilson Vieira\u003C\u002Fa> and\n       \u003Ca href=\"https:\u002F\u002Fx.com\u002Fesrtweet\">Eric S. 
Raymond\u003C\u002Fa>\n  \u003C\u002Fp>\n  \u003Ca href=\"https:\u002F\u002Fdiscord.gg\u002FNrDfXmtvw3\">\n     \u003Cimg src=\"https:\u002F\u002Fi.imgur.com\u002FuqKVFHj.png\" alt=\"Join our Discord\" height=\"48\" \u002F>\n  \u003C\u002Fa>\n\u003C\u002Fdiv>\n\u003Cbr\u002F>\n\n> Everything you wanted to know about using AI to help you code and\u002For to code\n> for you.\n\n\u003Cdiv align=\"center\" style=\"font-size: 30px\">\n  \u003Ca href=\"#vibe\">TL;DR Just show me how to vibe code 😎!\u003C\u002Fa>\n\u003C\u002Fdiv>\n\u003Cbr \u002F>\u003Cbr \u002F>\n\n## Introduction\n\nThe way we interact with computers and write code for them is\nchanging. And it's a profound change: in the tools we use, in the\nway we code, and in how we think about software products and systems.\n\nAnd it's changing super fast! New LLMs are being released every week. New\ntools, new editors, new AI coding and \"vibe coding\" practices, new protocols, MCP, A2A, SLOP,\n... And it's really hard to keep track of it all. Everything is\nscattered across different places, websites, repos, YouTube videos, etc.\n\nThat's why we decided to\nwrite this guide. It's our humble attempt to put everything together and present\nthe practices and tools around **AI coding** or **AI assisted code generation**, all in one\nplace, with no fuss, in an accessible form.\n\n- **If you're a coder but are not using AI code assistants yet**, this guide is\n  for you: it presents the most\n  recent tools and good practices to make the most of them in your daily\n  work. 
Either having AI as your copilot or being the copilot for an AI agent.\n\n- **If you've never coded before but you're interested in this new \"vibe coding\"\n  thing to build your own SaaS and other software products**, this guide is\n  definitely for you: we'll do our best to remove obscurity and leave you\n  with what's required to start your journey, while staying super critical about what\n  is really important and what's \"just hype\".\n\nBefore we start, just a small suggestion on how to read this guide. We tried to organize\nit in a *FAQ-ish way*, so feel free to search and jump to possible answers to your\nquestions. You'll see that every section has some \"Resources\" listed: we keep\nupdating those resources and you'll find the most recent ones at the top of\nthe list.\n\nAs we said, AI changes a lot and on a daily basis. We try our best to keep this guide\nupdated, but if you find anything missing, please feel free to open a PR or an issue,\nor even jump into our Discord and share your new findings with us so we can include them\nhere!\n\nCool, let's start!\n\n📚 Resources:\n\n- [Zen of AI coding](https:\u002F\u002Fnonstructured.com\u002Fzen-of-ai-coding\u002F) by Yoav Aviram\n- [How I use AI to code as a software engineer](https:\u002F\u002Fhackable.substack.com\u002Fp\u002Fhow-i-use-ai-to-code-as-a-software) by Vilson Vieira\n- [What I learned using AI to code for a year](https:\u002F\u002Fhackable.substack.com\u002Fp\u002Fwhat-i-learned-using-ai-to-code-for) by Vilson Vieira\n- [Software Survival 3.0](https:\u002F\u002Fsteve-yegge.medium.com\u002Fsoftware-survival-3-0-97a2a6255f7b) by Steve Yegge\n- [The death of the stubborn developer](https:\u002F\u002Fsteve-yegge.medium.com\u002Fthe-death-of-the-stubborn-developer-b5e8f78d326b) by Steve Yegge\n- [Software Is Changing (Again)](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=LCEmiRjPEtQ) by Andrej Karpathy\n- [Raising an Agent podcast](https:\u002F\u002Fampcode.com\u002Fpodcast) by Amp team\n- [Become an 
AI-augmented engineer](https:\u002F\u002Fmaryrosecook.com\u002Fblog\u002Fpost\u002Fbecome-an-ai-augmented-engineer) by Mary Rose Cook\n- [The 70% problem: Hard truths about AI-assisted coding](https:\u002F\u002Faddyo.substack.com\u002Fp\u002Fthe-70-problem-hard-truths-about) by Addy Osmani\n- [Using LLMs for code](https:\u002F\u002Fsimonwillison.net\u002F2025\u002FMar\u002F11\u002Fusing-llms-for-code\u002F) by Simon Willison\n- [How to Build an Agent](https:\u002F\u002Fampcode.com\u002Fhow-to-build-an-agent) by Thorsten Ball\n- [Dear Student: Yes, AI is here, you're screwed unless you take action...](https:\u002F\u002Fghuntley.com\u002Fscrewed\u002F) by Geoffrey Huntley\n- [The revenge of the junior developer](https:\u002F\u002Fsourcegraph.com\u002Fblog\u002Frevenge-of-the-junior-developer) by Steve Yegge\n- [How to prepare for the future of development and AI](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=BP54GqVK3JA) by Santiago\n- [The End of Programming as We Know It](https:\u002F\u002Fwww.oreilly.com\u002Fradar\u002Fthe-end-of-programming-as-we-know-it) by Tim O'Reilly\n\n## AI coding? Vibe coding? Agentic coding?\n\nAll those terms are pretty similar. But basically AI coding is about using AI\nmodels (especially LLMs these days) and all the tooling around them to help you\nwrite software. It's also called \"AI for code generation\" or \"code gen\" for\nshort, and there's an entire fascinating field of research and engineering behind it\nthat dates back to the 1950s, when Lisp was used to generate code. Now we have LLMs\nas the main engines powering code generation, and there are also some threads on\nneurosymbolic hybrid approaches starting to show up. AI coding is also a\npractice: if you're using Cursor and tab-tab-tab your way to completions,\nyou're \"AI coding\"; if you're going all in on Codex's agent mode, you're also \"AI\ncoding\". In summary: it's any way of using AI models to help you generate\ncode. 
Generally the people in this group already know how to code.\n\nVibe coding is AI coding cranked up :-) Here you don't care much about the code\nbeing generated; you just give a prompt and expect the AI to code everything\nfor you. The term was [coined by Karpathy](https:\u002F\u002Fx.com\u002Fkarpathy\u002Fstatus\u002F1886192184808149383) in 2025 and it's getting pretty\npopular. IMHO it's helping to democratize coding for everyone who never\nthought about coding before!\n\nAnd then we have a new breed surfacing: agentic coding. That's when you run an\nagent for many rounds, by itself, in a loop, ideally with some feedback signal\n(tests, etc). You can either run one agent or use *orchestrators* like GasTown\nto run many agents at the same time, with minor to zero human interaction.\n\nSo, in summary: no matter if you're using AI to discuss your software ideas or\nto help code only parts of your existing code base, or if you're\nfull on vibe coding, or if you let one or 100 agents run 24\u002F7 without\nintervention, you're using AI to help you generate code. Let's call it\nAI coding and move on.\n\n## How can I use it?\n\nYou can use AI coding in many different ways, but in summary:\n\n- AI is your copilot: you use AI models to augment yourself, to boost your\n  productivity. Either by firing up ChatGPT to help you brainstorm ideas\n  for your SaaS, or using Cursor to autocomplete your docstrings. There are\n  many gains here, especially for creative exploration and for automating the boring\n  parts of your work.\n\n- AI is the pilot: here you are the copilot. This is where the \"vibe coding\"\n  happens. You turn on the Cursor Agent YOLO mode or run Claude Code\n  with the `--dangerously-skip-permissions` flag set and trust everything the\n  agents do to generate your code. 
It's a really powerful way to automate yourself,\n  but it demands good practices for designing systems, taming the agents and\n  jumping into a spaghetti of code you actually don't know, especially when\n  solving errors.\n\nYou should learn and practice both!\n\nBut lean more towards copiloting and away from pure YOLO vibe coding as\nproject complexity scales up. The more likely it is that another human\n(or yourself in six months) will have to maintain the code, the more\nimportant this is.\n\n# 🗺️ The Roadmap\n\n## How do I start?\n\n- If you don't know how to code and want to play with it, we recommend starting\nwith some web-based tool like [Bolt](https:\u002F\u002Fbolt.new), [Replit](https:\u002F\u002Freplit.com),\n[v0](https:\u002F\u002Fv0.dev), [Suuper](https:\u002F\u002Fsuuper.dev) or [Lovable](https:\u002F\u002Flovable.dev).\n\n- If you already know how to code, install [Claude Code](https:\u002F\u002Fclaude.com\u002Fproduct\u002Fclaude-code),\n[Codex](https:\u002F\u002Fopenai.com\u002Fcodex\u002F),\n[Cursor](https:\u002F\u002Fcursor.com\u002F), [Amp](https:\u002F\u002Fampcode.com) or [Windsurf](https:\u002F\u002Fwindsurf.com\u002F).\nYou can start with the\nfree plan and then upgrade to a $20 monthly plan. Claude Code's Pro plan, Codex and\nCursor are all pretty good and cheap, given you'll have tons of tokens to use on the\nmost recent LLMs out there. VSCode has its own\n[Agent Mode](https:\u002F\u002Fcode.visualstudio.com\u002Fblogs\u002F2025\u002F02\u002F24\u002Fintroducing-copilot-agent-mode)\nas well. It pairs with GitHub Copilot and uses an agentic workflow to make changes\nand edit files.\n\n- If you want a more open source alternative, try [Pi](https:\u002F\u002Fpi.dev\u002F) (recommended),\n[OpenCode](https:\u002F\u002Fopencode.ai\u002F) or [OpenHands](https:\u002F\u002Fgithub.com\u002FAll-Hands-AI\u002FOpenHands).\n\n> Suggestion: We really recommend creating an account on OpenRouter. 
It's really easy and you'll\nget access to the most up-to-date LLM models and even free versions of them.\n\n> Important note: Using Claude Code through their API\u002FSDK is super expensive! You can easily burn\nhundreds of dollars a day without noticing.\nThat's why it's recommended to start with the Claude Code Pro or Max plans, so you don't\nhave to worry about it. If you really need to use their API\u002FSDK (either to embed it in your\napp or for another use case), make sure to keep an eye on your usage, displayed on Anthropic's dashboard.\n\n📚 Resources:\n\n- [Vibe Coding 101 with Replit](https:\u002F\u002Fwww.deeplearning.ai\u002Fshort-courses\u002Fvibe-coding-101-with-replit\u002F)\n- [Cursor AI Tutorial for Beginners (2025 Edition)](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=3289vhOUdKA)\n\n## How do I prompt for coding? AKA How do I vibe code?\u003Ca id='vibe'>\u003C\u002Fa>\n\nBefore mid-to-late 2025, it was really common for LLMs to hallucinate and enter endless cycles\ntrying to fix errors. Nowadays the models are really good, but it's still worth\nfollowing some basic principles when using them to code:\n\n- Do not ask for everything in one prompt. Just prompting \"hey, build me an app\n  for my pet store\" doesn't help a software engineer and much less an AI :-)\n  Understand your project, brainstorm first with an LLM, create a PRD\n  (Product Requirements Document), make a plan and split it into tasks. You'll\n  find below a recipe on how to use ChatGPT to create one for you.\n- Give it details. If you know what you want, say it. If you know which\n  programming language you want, which tech stack, what type of audience, add\n  it to your prompt.\n- Markdown or any other lightweight text-based format like asciidoc should be fine for LLMs to\n  interpret. In the end the text will be encoded as tokens. 
However, to put emphasis\n  on specific parts of your prompt it's recommended to use some markers\n  like [XML tags](https:\u002F\u002Fdocs.anthropic.com\u002Fen\u002Fdocs\u002Fbuild-with-claude\u002Fprompt-engineering\u002Fuse-xml-tags).\n- Break your project into tasks and subtasks. Remember that good software engineering\n  practices are still important. Think of the agent as a \"junior developer\" you just hired:\n  what kind of information would you write in a PR assigned to them that would maximize\n  the chance of the PR being approved and merged in the end?\n- Try different models for different goals. For instance, Opus or ChatGPT are pretty good\n  for planning, but you can use Sonnet or some open source model like Kimi or Minimax to\n  execute\u002Fimplement the plan.\n- Try different models to confirm and validate the output of other models. You can even\n  run them in parallel and choose the best one!\n- LLMs are \"yes machines\", so apply critical thinking. Do not accept everything they\n  send your way. Review it, test it. In the end they are just a tool and the code\n  generated is still your responsibility.\n\nIn the rest of our examples we will use the .md file extension associated with Markdown.\nIf you prefer asciidoc (which has somewhat better support for structured documents),\nuse that and substitute \".adoc\" in these instructions. The LLMs don't care; they\nwill handle Markdown or asciidoc or any other purely textual format you throw at them.\n\nHere is a method\u002Fprocedure\u002Fstrategy\u002Fworkflow that generally works well:\n\n1. Use `ChatGPT`, Codex or Claude Code itself with the following prompt:\n\n```\nYou're a senior software engineer. We're going to build the PRD of a project\ntogether.\n\nVERY IMPORTANT:\n- Ask one question at a time\n- Each question should be based on previous answers\n- Go deeper on every important detail required\n\nIDEA:\n\u003Cpaste here your idea>\n```\n\n2. 
You're going to enter a loop of questions\u002Fanswers for a few minutes. Try\n   to answer as much as you can, with as much detail as possible. When finished\n   (or when you decide it's enough), send this prompt to guide the model to\n   compile it all as a PRD (this approach is also called SDD, or Spec-Driven Development):\n\n```\nCompile those findings into a PRD. Use markdown format. It should contain the\nfollowing sections:\n\n- Project overview\n- Core requirements\n- Core features\n- Core components\n- App\u002Fuser flow\n- Techstack\n- Implementation plan\n```\n\n3. Copy and save this file as `docs\u002Fspecs.md` inside your project folder\n4. Now let's create the task list for your project. Ask the following:\n```\nBased on the generated PRD, create a detailed step-by-step plan to build this project.\nThen break it down into small tasks that build on each other.\nBased on those tasks, break them into smaller subtasks.\nMake sure the steps are small enough to be implemented in a single step but big enough\nto finish the project with success.\nUse best practices of software development and project management, no big\ncomplexity jumps. Wire tasks into others, creating a dependency list. There\nshould be no orphan tasks.\n\nVERY IMPORTANT:\n- Use markdown or asciidoc\n- Each task and subtask should be a checklist item\n- Provide enough context per task so a developer is able to implement it\n- Each task should have a number id\n- Each task should list dependent task ids\n```\n5. Save this as `docs\u002Ftodo.md` inside your project folder\n6. It's also really important to have an `AGENTS.md` (or `CLAUDE.md` if you're using Claude Code)\nin the root of your repository folder. You should see this file as an \"agent README\": just\nlike you keep a `README.md` for humans, keep an `AGENTS.md` for the future agents\nthat will read and edit your source code! 
Important things to include: a summary of your project,\nwhat tech stack you use, how to install, test, deploy or run your project, where\nthe main PRD\u002Fspecs\u002Fdesign or other relevant documentation lives, and even the main source code\nfiles. You can learn more about `AGENTS.md` and see examples [here](https:\u002F\u002Fagents.md).\n\n[Here is an example](https:\u002F\u002Fchatgpt.com\u002Fshare\u002F67f8e8c6-c92c-8007-8fe0-76bdc73f9812)\nof a brainstorming\u002Fplanning session done with ChatGPT 4o\nfor a simple CLI tool; use it as inspiration for yours.\n\nNow create a local folder for your project, and remember to run `git\ninit` inside the folder to keep it under version control.\n\nThis should give you the PRD and the task list to build your project! With\nthose in hand, you can run Codex (or another AI coding agent), point it to\nthose files and ask:\n\n```\nYou're a senior software engineer. Study @docs\u002Fspecs.md and implement what's\nstill missing in @docs\u002Ftodo.md. Implement one task at a time and respect task\nand subtask dependencies. Once a task is finished, check it off in the list and move\nto the next.\n```\n\nAnd here you'll find the Git repo for a CLI tool built in 10 minutes based on\nthis workflow: https:\u002F\u002Fgithub.com\u002Fautomata\u002Flocalbiz\n\n> Important: although it's super cool to \"vibe code\", it's also really\n> useful to know what you're doing :-) Reviewing the code the agent is\n> generating will help you a lot when errors happen (and they will!) 
and\n> will improve your skills at reviewing code (written not only by AIs but also by\n> yourself and other developers).\n\n📚 Resources:\n\n- [You are using Cursor AI incorrectly...](https:\u002F\u002Fghuntley.com\u002Fstdlib\u002F) by Geoffrey Huntley:\nhere Geoff introduces his\nidea of using a stdlib of Cursor rules\n- [From Design doc to code: the Groundhog AI coding assistant (and new Cursor vibecoding meta)](https:\u002F\u002Fghuntley.com\u002Fspecs\u002F) by Geoffrey Huntley:\npart 2 of the previous post, where Geoff suggests using LLMs to build specs\n(PRD) and Cursor rules automatically\n- [My LLM codegen workflow atm](https:\u002F\u002Fharper.blog\u002F2025\u002F02\u002F16\u002Fmy-llm-codegen-workflow-atm\u002F) by Harper Reed\n- [An LLM Codegen Hero's Journey](https:\u002F\u002Fharper.blog\u002F2025\u002F04\u002F17\u002Fan-llm-codegen-heros-journey\u002F) by Harper Reed\n- [Claude Code: Best practices for agentic coding](https:\u002F\u002Fwww.anthropic.com\u002Fengineering\u002Fclaude-code-best-practices) by Anthropic: it's geared towards their Claude Code tool, but it has\ninteresting tips for AI coding workflows based on any LLM model\n- [Prompt Engineering](https:\u002F\u002Fwww.kaggle.com\u002Fwhitepaper-prompt-engineering) by Lee Boonstra from Google: a 70-page doc with interesting tips\n  on prompt engineering, with a section about code generation\n- [Prompt Engineering Guide](https:\u002F\u002Fdocs.anthropic.com\u002Fen\u002Fdocs\u002Fbuild-with-claude\u002Fprompt-engineering) by Anthropic\n- [GPT 4.1 Prompting Guide](https:\u002F\u002Fcookbook.openai.com\u002Fexamples\u002Fgpt4-1_prompting_guide) by OpenAI\n- [RepoPrompt](https:\u002F\u002Frepoprompt.com\u002F) is a tool to help assemble context from your project. 
It's worth watching an overview video of a [RepoPrompt Workflow](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=fm3VreCt-5E) to learn how to leverage these tools to provide more context in your vibe coding prompts.\n- [Not all AI-assisted programming is vibe coding (but vibe coding rocks)](https:\u002F\u002Fsimonwillison.net\u002F2025\u002FMar\u002F19\u002Fvibe-coding\u002F) by Simon Willison\n\n## Which LLM model should I use?\n\nLLMs are trained and aligned with different goals. Models like ChatGPT or Claude are trained\nto be good generic-knowledge assistants chatting with you. In other cases,\nlabs capture the output of a model running tools and \"reasoning\"\nabout the output of those tools in long sessions (from minutes to hours long), and use the feedback\nas a reward signal, aligning the original trained model to a specific task. In the case\nof Claude Opus, Sonnet or OpenAI GPT, that task is coding.\nThat's why it's important to always use models that were trained\u002Faligned for coding\u002Fprogramming and\nthat support tools.\n\nGiven that LLMs change on a daily basis, it's impossible these days to suggest one specific version of a model\nthat is good for coding. 
What we can do is advise which \"family\" of models you should consider.\nNowadays we have proprietary LLM models like Claude Opus and OpenAI GPT as the SotA for AI coding, and\nopen source models like Kimi, Minimax and GLM that always trail the proprietary ones,\nshrinking the gap a bit more with each release.\n\nTo know which model version to pick, the general advice is to select the latest one, or check\nthe following leaderboards for a more accurate comparison:\n\n- [Models.dev](https:\u002F\u002Fmodels.dev): An open-source database of AI models\n- [OpenRouter's Models](https:\u002F\u002Fopenrouter.ai\u002Fmodels?categories=programming&fmt=table): Setting the category to `programming` and filtering to\n  only the models that support `tools` is generally a good way to select models\n  for AI assisted coding\n- [Agent Leaderboard](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fgalileo-ai\u002Fagent-leaderboard)\n\n## What to do when the dreaded \"rate limit\" message hits\n\nSwitch to a different model.\n\nThere are at least two different reasons this can happen. You may have made\na heavy request that exceeds the model's input\u002Foutput token limit. Or,\nif the server cluster it's running on is having a bad day, you may be\nthrottled to reduce the load. The error messages you get are often not\ntransparent about this. You can find a more detailed explanation\n[here](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Frate-limits).\n\nDifferent models have very different token limits. For example, as we\nwrite in late April 2025, gpt-4.1-mini is much more generous than\ngpt-4.1. 
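For the throttling case, a plain retry with exponential backoff is usually enough. A minimal sketch in bash, where `make_request` is a hypothetical stand-in for your real `curl` call to the provider's API:

```shell
# Retry a throttled request with exponential backoff.
# make_request is a placeholder: swap in your real curl call and have it
# print the HTTP status code (e.g. curl -s -o out.json -w '%{http_code}' ...).
make_request() { echo 429; }   # pretend the server keeps throttling us

for attempt in 0 1 2 3; do
  status=$(make_request)
  if [ "$status" != "429" ]; then
    break                      # request went through (or failed differently)
  fi
  delay=$((2 ** attempt))      # wait 1s, 2s, 4s, 8s between retries
  echo "throttled, retrying in ${delay}s"
  # sleep "$delay"             # uncomment for real use
done
```

Cap the number of attempts (as above) so a persistently overloaded endpoint fails fast instead of looping forever.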
Have several API keys in your pocket (this is inexpensive, since\nyou pay as you go) and visit the pages describing rate limits.\nHere's [Anthropic's](https:\u002F\u002Fconsole.anthropic.com\u002Fsettings\u002Flimits) as an example.\n\n## How do I set up project-wide rules?\n\nYou can define rules or conventions that will be applied to your project by \"injecting\" them into the\ncontext of the LLM. Each editor has its own way to do it:\n\n- For almost any agent except Claude Code, create an `AGENTS.md` file in the root folder of your\nproject. Check https:\u002F\u002Fagents.md for more information on how to create it, plus examples\n- For Claude Code, just create a `CLAUDE.md` file in the root folder of your project. Anthropic\ndidn't follow the `AGENTS.md` standard, so one good practice is to actually create an `AGENTS.md` file\nand then create a symbolic link from `CLAUDE.md` to the `AGENTS.md` file. You can achieve the same\nby creating a `CLAUDE.md` file with `@AGENTS.md` as its content.\n- In Cursor, just create\nmarkdown files inside a `.cursor\u002Frules\u002F` folder. Cursor will make sure to\napply those in all communication with the LLM.\n- In Aider, create markdown files with the [rules\u002Fconventions](https:\u002F\u002Faider.chat\u002Fdocs\u002Fusage\u002Fconventions.html) you want to use (like `rules.md`)\nand add the following to your `.aider.conf.yml` file: `read: rules.md`.\n\nAlso, many tools support configuring a rules\u002Fconventions file in your home directory to\nbe applied to all your projects. For instance, in Aider you can add global conventions in a file\ncalled `~\u002F.global_conventions.md` and then add it to the `.aider.conf.yml` with `read: [~\u002F.global_conventions.md, rules.md]`.\n\nYou can add parts of your PRD as rules, for instance the tech stack or\nsome guidelines on code formatting and style.\n\nRules are super powerful and you can even use the AI itself to create the rules\nfor you! 
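The `AGENTS.md`-plus-symlink practice for Claude Code can be set up in two commands; the file contents below are only an illustrative sketch (swap in your own project's stack and paths):

```shell
# Keep a single AGENTS.md and expose it to Claude Code as CLAUDE.md.
# Run from your project root; the contents here are just an illustration.
cat > AGENTS.md <<'EOF'
# Agent guide
- Stack: Python 3.12, FastAPI
- Install: pip install -e .
- Test: pytest
- Specs: docs/specs.md, tasks: docs/todo.md
EOF

# Claude Code reads CLAUDE.md; point it at the same file.
ln -sf AGENTS.md CLAUDE.md
```

With this setup there is one source of truth, and edits to `AGENTS.md` are picked up by every agent, Claude Code included.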
[Check Geoff's method for doing that](https:\u002F\u002Fghuntley.com\u002Fspecs\u002F).\n\n📚 Resources:\n\n- [AGENTS.md](https:\u002F\u002Fagents.md)\n- [Cursor Rules Docs](https:\u002F\u002Fdocs.cursor.com\u002Fcontext\u002Frules-for-ai)\n- [Windsurf Rules Docs](https:\u002F\u002Fwindsurf.com\u002Feditor\u002Fdirectory)\n- [Aider Conventions Docs](https:\u002F\u002Faider.chat\u002Fdocs\u002Fusage\u002Fconventions.html)\n- [Aider Conventions Collection](https:\u002F\u002Fgithub.com\u002FAider-AI\u002Fconventions): a collection of community-contributed convention files for use with Aider.\n- [Awesome Cursor Rules](https:\u002F\u002Fgithub.com\u002FPatrickJS\u002Fawesome-cursorrules): a curated list of\n  awesome .cursorrules files for enhancing your Cursor AI experience.\n\n## How do I avoid hallucinations? What is a PRD?\n\nPRD. What?! They say the best way to solve a problem in engineering is to\ncreate a new acronym, and here it's no exception :-) J\u002Fk... PRD is short for\n**Product Requirements Document**. Basically, it's just a bunch of docs (or a\nsingle doc) describing the requirements and other details of your software\nproject.\n\nTurns out, if you leave your beloved LLM free, without much context on what to\ndo, it will hallucinate pretty wildly and quickly. You need to tame the beast, and\nPRDs are a great way to do it.\n\nWhat we most like about PRDs is how helpful they are to anyone, from\npeople who have never coded before to senior SWEs or product managers.\n\nYou don't need any background to start a PRD, you just need your idea for an\napp and that's it, but it really helps to know the fundamentals of software design\nand the details of the tech stack\u002Fframework you're using or planning to use.\n\nCheck \u003Ca href=\"#vibe\">here\u003C\u002Fa> how to use an LLM to create one for you.\n\n## Keep a prompt log\n\nRecord every prompt you send, with (this is important) your\ninterspersed comments on what you were thinking and any surprises you\ngot. 
This prompt log is your record of design intent; it will be\ninvaluable to anyone approaching the project without having been\ninvolved in it, including you in six months, after you have forgotten\nwhat you were thinking.\n\nThere isn't any convention for the name of this file yet; you can use something like\n`vibecode.adoc` or `history.md`.\n\nThere are tools like [aider](https:\u002F\u002Faider.chat\u002F) that keep a log of all the back-and-forth\nchat you had with your LLM.\nSo one option is to set the following env vars and keep all those history files under version control:\n\n```bash\n# History Files:\n\n## Specify the chat input history file (default: .aider.input.history)\n#AIDER_INPUT_HISTORY_FILE=.aider.input.history\n\n## Specify the chat history file (default: .aider.chat.history.md)\n#AIDER_CHAT_HISTORY_FILE=.aider.chat.history.md\n\n## Log the conversation with the LLM to this file (for example, .aider.llm.history)\n#AIDER_LLM_HISTORY_FILE=.aider.llm.history\n```\n\nWith those files in hand, you can then comment on them with your own thoughts as you go.\nYou (and others) can learn a lot about your project when you revisit it in the future, and\nyou'll probably start noticing patterns and tips you can use in your next sessions.\n\nAgent harnesses like [Pi](https:\u002F\u002Fpi.dev) or [Amp](https:\u002F\u002Fampcode.com) also allow you\nto keep and share coding sessions. To do that\nin Pi, just type `\u002Fshare`: it will create a GitHub gist of the session and generate an\neasy-to-share URL with a cool UI to visualize it.\n\n## How do I start my project?\n\n### Webapp (Frontend)\n\nModern Web development is pretty overwhelming. There are tons of JavaScript\u002F\nTypeScript frameworks, CSS frameworks, engines, deployment services, etc., so\nit's really hard to get started\nand decide which one to use. 
After spending the last few months building frontends,\nhere's what we generally suggest:\n\n- [TanStack Start](https:\u002F\u002Ftanstack.com\u002Fstart): you get a powerful React framework and\nyou are free from Vercel or any other provider, so you can deploy anywhere\nyou want\n- [Next.js](https:\u002F\u002Fnextjs.org): it's still the most popular React framework\nout there, but you'll be dependent on the Vercel ecosystem, which is both good and bad\n- [FastHTML](https:\u002F\u002Ffastht.ml): if you love Python and care more about exposing core\nbackend functionality (e.g. some data analysis, or an AI\u002FML model pipeline) than about a\nsuper pretty UI\n\nIf you're also coding the backend, either have one folder for the\nbackend and another for the frontend in the root folder of your Git repo; or\nhave different repos for backend and frontend, and then add them to the same\nworkspace in Cursor (so you can reference files of the backend in your frontend\nagent and vice-versa) or run Claude Code or another CLI-based harness in the\nroot folder.\n\nTo avoid integrating the frontend into the backend too early, instruct the AI\nagent to use mock\u002Fdummy data instead, so you can update it later once you\nhave the backend implemented.\n\nAnother interesting tip is to use good MCP tools to integrate your coding agent\nwith Playwright or browser-use. This way you can avoid the copy-paste cycle of\nerrors from the web browser into your AI agent, given the AI will control the\nbrowser and grab screenshots and error messages by itself.\nYou can use bash commands for that as well, if you are like us and don't like MCP.\n\nIf you want to use 3D content in your webapp and you're using React, it's\nworth using [React Three Fiber](https:\u002F\u002Fgithub.com\u002Fpmndrs\u002Freact-three-fiber)\ninstead of trying to use the three.js\nlibrary directly. 
R3F makes it easier to deal with state, given it wraps all\nthree.js objects as React components.\n\n### Backend\n\nBackends are not flexible or really scalable in Web-based tools like Lovable. So you will probably\nneed to get your hands on Claude Code, Codex or any other non-Web-based tool.\n\nUsing Python and FastAPI is a great option. If you prefer to stay with the\nsame language as your frontend (guessing it will be JavaScript or TypeScript\nmost of the time), you can use Node.js or Bun. For most of the API endpoint needs\nof your TanStack or Next.js frontend, you can use Server Functions (or Server Actions),\nso there's no need for a separate backend.\n\nBackends are a great target for end-to-end tests, so consider guiding the\nagent to write tests and run them for each new feature and its subtasks.\n\nOnce you're done with your backend, you can use its documentation (especially on\nHTTP endpoints) as input for the agent working on your frontend. This way\nyou'll be able to move your frontend from mock\u002Fdummy data to real data\ncoming from the backend.\n\n### Game\n\nFor small games, use vanilla JS in a single .js file, with three.js for 3D\ngames or PixiJS for 2D games.\n\nGames are all about good assets, so consider using services like Tripo AI\nand [Everything Universe](https:\u002F\u002Feverythinguniver.se) to generate 3D assets and rig\u002Fanimate them.\n\n## How do I deal with errors and bugs?\n\nOne thing you must know about coding and software in general: they will fail.\nNo matter what you do to prevent that, it will happen. 
So let's first embrace\nthat and be friends with errors and bugs.\n\nThe first strategy here is to mimic what SWEs do: look at the error message the\ninterpreter\u002Fcompiler gave you and try to understand it.\nCopy and paste the error back to the LLM and ask it to fix it.\nAnother great idea is to add MCP tools that are great for debugging, like\n[BrowserTools](https:\u002F\u002Fbrowsertools.agentdesk.ai\u002F), or to connect Claude\nCode or other agents to your local Chrome. It's also possible to use headless\nbrowsers through Playwright for remote development or if you don't want to\nuse your current local Chrome.\n\n## What are MCP, SLOP and A2A, and how can I benefit from them?\n\n> Update March 2026: Nowadays, we know that giving agents bash as a tool\n> is generally enough as an alternative to MCP: the agent can use `curl` to\n> call APIs or run other commands, in a more efficient way (e.g. consuming\n> far fewer tokens). But MCP is still interesting for some use cases.\n\nMCP is short for Model Context Protocol. It was developed by Anthropic but it's\nstarting to be adopted by other LLM providers like OpenAI (GPT) and Google (Gemini).\nIt's a powerful concept and it's closely linked to another one: function\u002Ftool\ncalling.\n\nTool calling is a way for LLMs to call tools or functions to execute some\noperation. It's a way to update the knowledge of an LLM (trained on data from the past)\nwith new information, and at the same time to integrate it with external tools and\nendpoints. For instance, if you want to search the web for some information, you can instruct the LLM\nto use a tool that does that (e.g. `Hey, if you need to search for something on the\nweb, use this tool: search(term)`). Then, instead of spending many tokens, iteration steps and\nparsing workloads, the LLM will call the tool, get the output and use it when\ngenerating new predictions for you.\n\nMCP extends this idea by creating a standard for it. 
This way we can create an\nMCP server that will expose some resource (a database, for instance) or tool (a\nspecific piece of software that will compute something and return the results,\nfor instance) to the LLM.\n\nWait, isn't that just an API? Couldn't I just mimic the same with a REST API\nserver\u002Fclient and some parsing in the LLM prompts? Kinda, and that's what SLOP\n(Simple Language Open Protocol)\nproposes. However, having a standard like MCP makes it easier to make sure the\nLLM will support it natively without extra parsing and tricks on the client\nside.\n\nA2A (Agent to Agent Protocol) was created by\nGoogle to \"complement\" MCP, focusing on multi-agent communication while MCP\nfocuses on LLM-to-tool communication.\n\nOne more thing before we go: if you are using a harness like Pi that doesn't\nsupport MCP, no problem, you can basically wrap your beloved MCP tool as a CLI,\nand then just let Pi call it from bash. Just use [mcporter](https:\u002F\u002Fgithub.com\u002Fsteipete\u002Fmcporter\u002F)\nto do the trick.\n\n📚 Resources:\n\n- [MCP](https:\u002F\u002Fmodelcontextprotocol.io\u002F)\n- [MCP is Dead; Long Live MCP!](https:\u002F\u002Fchrlschn.dev\u002Fblog\u002F2026\u002F03\u002Fmcp-is-dead-long-live-mcp\u002F)\n- [Anthropic's list of MCP servers](https:\u002F\u002Fgithub.com\u002Fmodelcontextprotocol\u002Fservers)\n- [SLOP](https:\u002F\u002Fgithub.com\u002Fagnt-gg\u002Fslop)\n- [Introducing SLOP](https:\u002F\u002Frussell.ballestrini.net\u002Fintroducing-slop\u002F)\n- [MCP vs SLOP](https:\u002F\u002Fmcpslop.com\u002F)\n- [A2A](https:\u002F\u002Fgoogle.github.io\u002FA2A\u002F)\n\n## Start from scratch or with a boilerplate?\n\nGenerally LLMs do better starting fresh. 
But you can also start with a\nboilerplate (basically a starter kit: a folder with initial skeletons of\nminimal source files and configs needed to get a working project running for a\nspecific tech stack) and add rules in the context window to make sure it\nrespects your starter kit.\n\nAnother great idea is to index your project (that you just created with the\nstarter kit) using Cursor's own indexing feature, or using some tool like\n[repomix](https:\u002F\u002Frepomix.com\u002F) or\n[files-to-prompt](https:\u002F\u002Fgithub.com\u002Fsimonw\u002Ffiles-to-prompt).\n\n## Greenfield\u002Fclean slate\u002Ffresh project vs existing codebase\n\nIf you have an existing codebase, a good idea is to use\n[repomix](https:\u002F\u002Frepomix.com\u002F) or [files-to-prompt](https:\u002F\u002Fgithub.com\u002Fsimonw\u002Ffiles-to-prompt) to\npack it into the context window.\n\nAnother great tip is to prompt for changes at the task level, not the project\nlevel. For instance, focus on one feature you want to implement and ask Cursor\nAgent to implement it. Give it a mini-PRD for the specific feature. Imagine that\nyou're guiding a junior developer to work on a specific GH ticket :-)\n\n📚 Resources:\n\n- [Karpathy's tweet about his way to use AI-assisted coding](https:\u002F\u002Fx.com\u002Fkarpathy\u002Fstatus\u002F1915581920022585597)\n- [How to work with large codebases in Cursor](https:\u002F\u002Fdocs.cursor.com\u002Fguides\u002Fadvanced\u002Flarge-codebases)\n\n## Well-structured prompting for well-structured design\n\nAt present (April 2025) LLMs are already good at generating working\ncode, but they're not very good at generating well-structured code -\nthat is, with proper layering and separation of concerns. Good\nstructure is important for readability and maintainability, and\nreduces your defect rate.\n\nThink through your design and then prompt-engineer it in a sequence\nthat produces good structure. 
In a database-centered application, for\nexample, it's a good idea to specify your record types first, then\nguide your LLM through building a manager class or module that\nencapsulates access to them. Only after that should you begin\nprompting for business logic.\n\nMore generally, when you prompt, think about separating engine code\nfrom policy code, and issue your prompts in a sequence that guides the\nLLM to do that.\n\nYou should include in your project rules one that tells the LLM not\nto violate layering - if it needs a new engine method, it should add\nsomething to a clean encapsulation layer rather than having low-level\nimplementation details entangled with business logic.\n\nAnother interesting practice is to start with the core of your project\nand spend time making sure to get the main\nfunctionality implemented and organized the way you want. You can even write class and function\nskeletons and then let the LLM fill in the gaps. Only after you have a good foundation with\ngood tests should you move to consumers of this core library, like exposing it as a CLI or REST API to\na future webapp, for instance.\n\n## Should I use TDD or any other type of tests?\n\nYes, definitely yes. Tests are more important than ever. At the current state of the art in 2026,\nLLMs are good at generating clean and correct code, but they sometimes\nhallucinate, and more importantly, they can fail at understanding\nspecifications and generate working code that does the wrong thing.\n\nIt is not likely that this will change even if and when we get full\nhuman-equivalent artificial general intelligence; after all, human\nbeings misunderstand specifications too! Ambiguity of language is why\ntests will continue to be important even into the future.\n\nUsing TDD to create skeletons of what you want as a result can really help guide LLMs to\nimplement the target piece of code you're testing. 
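\n\nTo make this test-first flow concrete, here is a minimal Python sketch. The `slugify` function and its spec are hypothetical examples of ours, not from any real project: you write the failing test first, hand it to the agent as the spec, and only move on once it passes.\n\n```python\nimport re\n\n# Test written FIRST and handed to the agent as the spec.\ndef test_slugify():\n    assert slugify("Hello, World!") == "hello-world"\n    assert slugify("  Multiple   Spaces ") == "multiple-spaces"\n    assert slugify("") == ""\n\n# A candidate implementation the agent might produce; we only\n# move forward once the test passes.\ndef slugify(text: str) -> str:\n    """Lowercase a string and replace runs of non-alphanumerics with '-'."""\n    text = re.sub(r"[^a-z0-9]+", "-", text.strip().lower())\n    return text.strip("-")\n\ntest_slugify()  # raises AssertionError if the spec is not met\n```\n\nThe point is the order: the assertions pin down the behavior before any implementation exists, which gives the agent something unambiguous to aim for.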
Instructing your LLM to create tests and\nto run them is also a great practice: it will be able to add any errors breaking a given\ntest to its context and act on them, trying to make the test pass.\n\nTests are fundamental for growing your code base together with an LLM, only moving forward when\nall current tests pass.\n\nProperty-based tests are really interesting when working with LLMs. Testing a whole\ndomain\u002Frange of values instead of just specific ones that you pinpointed will help make\nsure the code generated by the agent will still be valid even if later changes end up\nhitting edge cases you didn't think of in advance. There are good libraries for property-based\ntests in every language, like [hypothesis](https:\u002F\u002Fhypothesis.readthedocs.io\u002Fen\u002Flatest\u002F) for Python\nor [fast-check](https:\u002F\u002Ffast-check.dev\u002F) for JavaScript\u002FTypeScript.\n\nIt's also important to always check the code generated by LLMs while trying to write or fix\na test: sometimes they will even try to generate some hardcoded output just to make the test pass\n:-)\n\n## How to make it safe?\n\nThe exact same rules and best practices advised for non-AI-assisted coding\nare valid here. Research them and apply them to your code. Here is an\ninitial safety checklist:\n\n- Don't trust the code generated by an AI. Always verify. Remember that the AI\nwill not be responsible for the code you run out there, YOU WILL!\n- Do not store any API keys or other secrets as hardcoded strings,\nespecially in frontend code. 
Store them on the backend as protected environment\nvariables (eg platforms like Vercel offer this option)\n- When querying API endpoints, always use HTTPS\n- When creating HTML forms, always do input validation and sanitization\n- Do not store sensitive data in `localStorage`, `sessionStorage` or cookies\n- Run validators and security vulnerability scanners on your package\nrequirements\n\n## How to use any LLM in Claude Code?\n\nWant to try Kimi K2 or another LLM in your Claude Code CLI? You can use\nclaude-code-router to make Claude Code CLI use a \"proxy\" running locally on\nyour machine to route it to any model available on OpenRouter! The instructions\nbelow work for Kimi K2, but you can adapt them to any other LLM you want.\n\nFirst create an account on OpenRouter and grab your API key.\n\nMake sure you have Claude Code CLI installed:\n\n```bash\nnpm install -g @anthropic-ai\u002Fclaude-code\n```\n\nThen install claude-code-router:\n\n```bash\nnpm install -g @musistudio\u002Fclaude-code-router\n```\n\nAdd the following lines to your `~\u002F.claude-code-router\u002Fconfig.json` file,\nreplacing `OPENROUTER_API_KEY` with your API key from OpenRouter:\n\n```json\n{\n  \"Providers\": [\n    {\n      \"name\": \"kimi-k2\",\n      \"api_base_url\": \"https:\u002F\u002Fopenrouter.ai\u002Fapi\u002Fv1\u002Fchat\u002Fcompletions\",\n      \"api_key\": \"OPENROUTER_API_KEY\",\n      \"models\": [\n        \"moonshotai\u002Fkimi-k2\"\n      ],\n      \"transformer\": {\n        \"use\": [\"openrouter\"]\n      }\n    }\n  ],\n  \"Router\": {\n    \"default\": \"kimi-k2,moonshotai\u002Fkimi-k2\"\n  }\n}\n```\n\nNow just run Claude Code through the router:\n\n```bash\nccr code\n```\n\nYou should see Claude Code report `API Base URL: http:\u002F\u002F127.0.0.1:3456`, which means it's\nusing the local proxy created by claude-code-router. 
That's it!\n\nIf you're only interested in Kimi K2 or other models from Moonshot, an\nalternative is to use the model provided by Moonshot itself:\nhttps:\u002F\u002Fgithub.com\u002FLLM-Red-Team\u002Fkimi-cc\u002Fblob\u002Fmain\u002FREADME_EN.md\n\n## How to create my own AI coding agent?\n\nThe best intro is\n[this practical tutorial](https:\u002F\u002Fampcode.com\u002Fhow-to-build-an-agent) by Thorsten,\nwhere you build a simple agent that uses a minimal set\nof tools (`list_files`, `read_file`, `edit_file`) in Go, step-by-step.\n\nI also wrote another tutorial on how to [create your own AI coding agent here](https:\u002F\u002Fhackable.space\u002Fen\u002Fcourses\u002Fcreating-your-own-agent).\nIt's just 150 lines of Python and gives you an idea of what happens under the hood\nof an agent harness.\n\nIf you want to go deeper, [this series of open source books](https:\u002F\u002Fgerred.github.io\u002Fbuilding-an-agentic-system)\nby Gerred is definitely a great start.\n\n## What the hell is multi-agent orchestration?!\n\nAround 2024-2025 people started flirting with the idea of running not just one but a team\nof agents to perform a given coding task. By the end of 2025, Steve Yegge and others created\nthe first implementations of the concept.\n[Gas Town](https:\u002F\u002Fgithub.com\u002Fsteveyegge\u002Fgastown) is probably the most popular so far.\nIn Gas Town you have an agent called the Mayor, who receives a task from the user and splits it into\njobs (storing and managing those jobs in [Beads](https:\u002F\u002Fgithub.com\u002Fsteveyegge\u002Fbeads),\nanother tool created by Yegge; think \"GitHub issues\nbut for agents\"). 
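\n\nAs a toy Python sketch of this split-a-task-into-jobs idea (all names here are illustrative and hypothetical, not Gas Town's actual API), the orchestrator splits work into jobs and workers drain a shared mailbox:\n\n```python\nfrom queue import Queue\n\n# Toy sketch of an orchestrator/worker pattern: a "mayor" splits a task\n# into jobs and a "polecat" worker pulls jobs from a shared mailbox.\n# All names are illustrative; this is NOT Gas Town's real API.\n\ndef mayor_split(task: str) -> list[str]:\n    """Naively split a task description into one job per sentence."""\n    return [s.strip() for s in task.split(".") if s.strip()]\n\ndef polecat(mailbox: "Queue[str]", done: list[str]) -> None:\n    """Drain the mailbox; a real worker would run a coding agent per job."""\n    while not mailbox.empty():\n        done.append(f"done: {mailbox.get()}")\n\nmailbox: "Queue[str]" = Queue()\ndone: list[str] = []\nfor job in mayor_split("Add login page. Write tests. Deploy"):\n    mailbox.put(job)\npolecat(mailbox, done)\n```\n\nThe real systems add persistence, parallelism and agent-to-agent messaging on top of this basic shape.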
The Mayor communicates with other agents (eg Polecats, the workers)\nvia messages, so every agent has its\nown mailbox and acts when a message is received.\n\nThere are other incarnations of multi-agent orchestration like\n[Claude Flow](https:\u002F\u002Fgithub.com\u002Fruvnet\u002Fclaude-flow) and [Loom](https:\u002F\u002Fgithub.com\u002Fghuntley\u002Floom),\nbut it's really too early to know if this is the way we'll end up developing software\nin the future. It's worth playing with them to understand their inner workings, but\none small piece of advice: running Gas Town and other orchestration systems is pretty expensive;\nagents running in parallel can eat tons of tokens in seconds.\n\n- [Welcome to Gas Town](https:\u002F\u002Fsteve-yegge.medium.com\u002Fwelcome-to-gas-town-4f25ee16dd04)\n- [The Future of Coding Agents](https:\u002F\u002Fsteve-yegge.medium.com\u002Fthe-future-of-coding-agents-e9451a84207c)\n- [Gas Town Emergency User Manual](https:\u002F\u002Fsteve-yegge.medium.com\u002Fgas-town-emergency-user-manual-cf0e4556d74b)\n\n# ✨ Tips and Tricks on Specific Tools and Agents\n\n## Claude Code\n\n- [Claude Code Guide](https:\u002F\u002Fgithub.com\u002Fzebbern\u002Fclaude-code-guide): Covers every discoverable Claude Code command,\n  including many features that are not widely known or documented in basic\n  tutorials\n\n# 🛠️  Tools\n\nHere we keep an updated list of the main tools for AI coding. 
We tested\nmost of them, and you'll find our honest opinions from the time we tested them.\n\n## Editors \u002F IDE\n\n- [Cursor](https:\u002F\u002Fcursor.com)\n- [Windsurf](https:\u002F\u002Fwindsurf.com)\n- [Cline](https:\u002F\u002Fcline.bot\u002F)\n- [OpenHands](https:\u002F\u002Fgithub.com\u002FAll-Hands-AI\u002FOpenHands)\n- [Devin](https:\u002F\u002Fdevin.ai)\n- [VSCode + GitHub Copilot](https:\u002F\u002Fcode.visualstudio.com\u002Fdocs\u002Fcopilot\u002Fsetup)\n- [Amp](https:\u002F\u002Fampcode.com)\n- [Kiro](https:\u002F\u002Fkiro.dev)\n\n## CLI\n\n- [Claude Code](https:\u002F\u002Fgithub.com\u002Fanthropics\u002Fclaude-code)\n- [OpenAI Codex](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fcodex)\n- [Pi](https:\u002F\u002Fpi.dev)\n- [opencode](https:\u002F\u002Fgithub.com\u002Fopencode-ai\u002Fopencode)\n- [Gemini CLI](https:\u002F\u002Fgithub.com\u002Fgoogle-gemini\u002Fgemini-cli)\n- [Aider](https:\u002F\u002Faider.chat\u002F)\n- [Claude Engineer](https:\u002F\u002Fgithub.com\u002FDoriandarko\u002Fclaude-engineer)\n- [Roo Code](https:\u002F\u002Fgithub.com\u002FRooVetGit\u002FRoo-Code)\n- [Codebuff](https:\u002F\u002Fwww.codebuff.com\u002F)\n\n## Webapps\n\n- [Bolt](https:\u002F\u002Fbolt.new)\n- [v0](https:\u002F\u002Fv0.dev)\n- [Replit](https:\u002F\u002Freplit.com)\n- [Suuper](https:\u002F\u002Fsuuper.dev)\n- [Lovable](https:\u002F\u002Flovable.dev)\n- [Firebase Studio](https:\u002F\u002Ffirebase.studio\u002F)\n\n## Background\u002Fremote Agents\n\n- [ZenCoder](https:\u002F\u002Fzencoder.ai\u002F)\n- [CodeRabbit](https:\u002F\u002Fwww.coderabbit.ai\u002F)\n- [Factory AI](https:\u002F\u002Fwww.factory.ai\u002F)\n- [OpenAI Codex](https:\u002F\u002Fchatgpt.com\u002Fcodex)\n\n## Helpful Tools\n\n- [Specstory](https:\u002F\u002Fspecstory.com\u002F)\n- [Claude Task Master](https:\u002F\u002Fgithub.com\u002Feyaltoledano\u002Fclaude-task-master)\n- [CodeGuide](https:\u002F\u002Fwww.codeguide.dev\u002F)\n- [repomix](https:\u002F\u002Frepomix.com\u002F)\n- 
[files-to-prompt](https:\u002F\u002Fgithub.com\u002Fsimonw\u002Ffiles-to-prompt)\n- [repo2txt](https:\u002F\u002Fgithub.com\u002Fdonoceidon\u002Frepo2txt)\n- [stakgraph](https:\u002F\u002Fgithub.com\u002Fstakwork\u002Fstakgraph)\n- [Repo Prompt](https:\u002F\u002Frepoprompt.com\u002F)\n- [Uzi](https:\u002F\u002Fuzi.sh\u002F)\n- [Claudia](https:\u002F\u002Fclaudia.asterisk.so\u002F)\n\n# 🤗 Who to follow\n\nSome super interesting people implementing AI coding models\u002Ftools or using them\non their own projects.\n\n- [Addy Osmani](https:\u002F\u002Fx.com\u002Faddyosmani)\n- [Andrej Karpathy](https:\u002F\u002Fx.com\u002Fkarpathy)\n- [Armin Ronacher](https:\u002F\u002Fx.com\u002Fmitsuhiko)\n- [Beyang Liu](https:\u002F\u002Fx.com\u002Fbeyang) (Amp)\n- [Cat Wu](https:\u002F\u002Fx.com\u002F_catwu) (Claude Code)\n- [Eric S. Raymond](https:\u002F\u002Fx.com\u002Fesrtweet)\n- [Eyal Toledano](https:\u002F\u002Fx.com\u002FEyalToledano) (TaskMaster)\n- [Geoffrey Huntley](https:\u002F\u002Fx.com\u002FGeoffreyHuntley)\n- [Gerred Dillon](https:\u002F\u002Fx.com\u002Fdevgerred)\n- [Harper Reed](https:\u002F\u002Fx.com\u002Fharper)\n- [Mario Zechner](https:\u002F\u002Fx.com\u002Fbadlogicgames) (Pi)\n- [Nathan Wilbanks](https:\u002F\u002Fx.com\u002FNathanWilbanks_) (agnt, SLOP)\n- [Pietro Schirano](https:\u002F\u002Fx.com\u002Fskirano)\n- [Quinn Slack](https:\u002F\u002Fx.com\u002Fsqs) (Amp)\n- [Sandeep Pani](https:\u002F\u002Fx.com\u002Fskcd42) (Aider, AgentFarm)\n- [Simon Willison](https:\u002F\u002Fx.com\u002Fsimonw)\n- [Steve Yegge](https:\u002F\u002Fx.com\u002FSteve_Yegge) (Gas Town, Beads, Wasteland)\n- [Vilson Vieira](https:\u002F\u002Fx.com\u002Faut0mata)\n- [Thorsten Ball](https:\u002F\u002Fx.com\u002Fthorstenball) (Amp)\n- [Xingyao Wang](https:\u002F\u002Fx.com\u002Fxingyaow_) (OpenHands, AllHands)\n\n# 💖 Acknowledgements\n\nThis guide was inspired by the great [llm-course](https:\u002F\u002Fgithub.com\u002Fmlabonne\u002Fllm-course) from Maxime Labonne.\n\nSpecial thanks 
to:\n\n- [Gabriela Thumé](https:\u002F\u002Fgithub.com\u002Fgabithume) for everything ❤️\n- [Albert Espín](https:\u002F\u002Fgithub.com\u002Falbert-espin) for thoughtful feedback and the first error corrections\n- [Geoffrey Huntley](https:\u002F\u002Fx.com\u002FGeoffreyHuntley) for pointing me to property-based tests and for all his great\n  tutorials and experiments around autonomous agents\n- ChatGPT 4o for generating the banner you see at the top, inspired by the\n  incredible artist [Deathburger](https:\u002F\u002Fcitadel9.com\u002F)\n\n# ⭐ Contributing\n\nIf you want to contribute corrections, feedback, or a missing tool or\nreference, please feel free to open a new PR or issue, or get in touch with\n[Eric](https:\u002F\u002Fx.com\u002Fesrtweet) or [Vilson](https:\u002F\u002Fx.com\u002Faut0mata).\n\nIf you like this guide, please consider giving it a star ⭐ and following it for new updates!\n\n## Star History\n\n[![Star History Chart](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fautomata_aicodeguide_readme_ea5b0eac74bb.png)](https:\u002F\u002Fwww.star-history.com\u002F#automata\u002Faicodeguide&type=date&legend=top-left)\n\n# ⚖️ License\n\nMIT\n","\u003Cdiv align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Fi.imgur.com\u002Fv0Tx6am.png\" alt=\"AI代码指南\" \u002F>\n  \u003Cp align=\"center\">\n    由 \u003Ca href=\"https:\u002F\u002Fx.com\u002Faut0mata\">Vilson Vieira\u003C\u002Fa> 和\n       \u003Ca href=\"https:\u002F\u002Fx.com\u002Fesrtweet\">Eric S. 
Raymond\u003C\u002Fa> 撰写\n  \u003C\u002Fp>\n  \u003Ca href=\"https:\u002F\u002Fdiscord.gg\u002FNrDfXmtvw3\">\n     \u003Cimg src=\"https:\u002F\u002Fi.imgur.com\u002FuqKVFHj.png\" alt=\"加入我们的Discord\" height=\"48\" \u002F>\n  \u003C\u002Fa>\n\u003C\u002Fdiv>\n\u003Cbr\u002F>\n\n> 关于如何利用AI辅助编程，乃至让AI代你编写代码，你想知道的一切。\n\n\u003Cdiv align=\"center\" style=\"font-size: 30px\">\n  \u003Ca href=\"#vibe\">TL;DR 直接告诉我怎么“ vibe coding”吧 😎！\u003C\u002Fa>\n\u003C\u002Fdiv>\n\u003Cbr \u002F>\u003Cbr \u002F>\n\n## 引言\n\n我们与计算机交互以及为它们编写代码的方式正在发生深刻的变化。这种变化不仅体现在我们使用的工具上，还影响着我们的编码方式、对软件产品和系统的思考方式。\n\n而且这一变革的速度非常快：每周都有新的大语言模型发布；新的工具、编辑器、AI编程及“vibe coding”实践、新协议、MCP、A2A、SLOP……要跟上所有这些进展实在不易。相关信息分散在各个地方——网站、代码仓库、YouTube视频等。\n\n正因如此，我们决定编写这份指南。这是我们的一点小小尝试，旨在将围绕**AI编程**或**AI辅助代码生成**的各种实践与工具整合起来，以通俗易懂的形式呈现给您，不搞复杂化。\n\n- **如果您是一名开发者，但尚未使用AI代码助手**，本指南非常适合您：它介绍了最新的工具和最佳实践，帮助您在日常工作中充分利用它们。无论是让AI作为您的副驾驶，还是您来充当AI代理的副驾驶。\n\n- **如果您从未写过代码，但对这种新兴的“vibe coding”感兴趣，希望借此构建自己的SaaS和其他软件产品**，那么本指南绝对适合您：我们会尽力消除晦涩难懂之处，只保留您开启旅程所需的内容，并严格甄别哪些是真正重要的，哪些只是炒作。\n\n在开始之前，我们建议您这样阅读本指南：我们尽量以“常见问题解答”的形式组织内容，您可以自由搜索并跳转到相关问题的答案。您会发现每个章节都列出了“资源”部分：我们会持续更新这些资源，最新的内容都会排在列表顶部。\n\n正如我所说，AI领域日新月异，我们也会尽最大努力保持本指南的及时更新。不过，如果您发现有任何遗漏，请随时提交PR或创建议题，或者直接在我们的Discord频道中分享您的新发现，以便我们将其纳入其中！\n\n好了，让我们开始吧！\n\n📚 资源：\n\n- Yoav Aviram 的《AI编程之道》（Zen of AI coding）\n- Vilson Vieira 的《作为一名软件工程师，我是如何使用AI编程的》\n- Vilson Vieira 的《使用AI编程一年后的感悟》\n- Steve Yegge 的《软件生存3.0》\n- Steve Yegge 的《顽固开发者的终结》\n- Andrej Karpathy 的视频《软件再次变革》\n- Amp团队的播客《培养一个智能体》\n- Mary Rose Cook 的《成为一名AI增强型工程师》\n- Addy Osmani 的《70%问题：关于AI辅助编程的残酷真相》\n- Simon Willison 的《如何使用LLMs进行代码生成》\n- Thorsten Ball 的《如何构建一个智能体》\n- Geoffrey Huntley 的《致学生：是的，AI已经来了，除非你采取行动，否则你就完蛋了……》\n- Steve Yegge 的《初级开发者的反击》\n- Santiago 的视频《如何为开发与AI的未来做好准备》\n- Tim O'Reilly 的《我们所知的编程时代的终结》\n\n## AI编程？Vibe编程？智能体编程？\n\n这些术语其实很相近。简单来说，AI编程就是利用AI模型（尤其是如今的LLMs）及其配套工具来帮助您编写软件。它也被称为“用于代码生成的AI”或简称“code 
gen”。这背后有一整片引人入胜的研究与工程领域，其历史可以追溯到20世纪50年代，那时人们就已用Lisp来生成代码。如今，LLMs已成为驱动代码生成的主要引擎，同时一些神经符号混合方法也开始崭露头角。AI编程也是一种实践：如果您使用Cursor，通过连续敲击Tab键来获取补全建议，那您就是在“AI编程”；如果您完全采用Codex的代理模式，同样属于“AI编程”。总之，只要借助AI模型来辅助生成代码，都可以称为AI编程。通常这类人群都已经具备一定的编程基础。\n\n而“vibe编程”则是把AI编程进一步升级 :-) 在这里，您并不太在意生成的代码本身，只需提供一个提示，便期待AI为您完成全部编码工作。这个术语由Karpathy于2025年提出（[见推文](https:\u002F\u002Fx.com\u002Fkarpathy\u002Fstatus\u002F1886192184808149383)），目前正变得越来越流行。在我看来，它有助于让那些从未想过编程的人也能轻松入门！\n\n此外，还涌现出一种新型的编程方式——“智能体编程”。在这种模式下，您会让一个智能体自行反复运行，形成闭环，最好还能结合反馈信号（如测试等）。您可以单独运行一个智能体，也可以借助像GasTown这样的编排工具，同时运行多个智能体，几乎无需人工干预。\n\n综上所述，无论您是用AI讨论软件构想，还是仅用来辅助现有代码库中的部分开发；无论是完全采用vibe编程，还是让一个或一百个智能体全天候无人值守地运行，您都在借助AI来生成代码。我们就统称为“AI编程”，继续往下看吧。\n\n## 我该如何使用它？\n\n你可以通过多种方式使用 AI 编码，但总的来说：\n\n- AI 是你的副驾驶：你利用 AI 模型来增强自身能力，提升工作效率。比如，打开 ChatGPT 帮助你为 SaaS 产品构思创意；或者使用 Cursor 自动补全你的文档字符串。这种方式能带来诸多好处，尤其适合创意探索和自动化那些枯燥的工作环节。\n\n- AI 是主驾驶员：在这种模式下，你则成为副驾驶。这就是所谓的“随性编码”场景。你可以开启 Cursor Agent 的 YOLO 模式，或运行 Claude Code 并加上 `--dangerously-skip-permissions` 标志，完全信任代理生成的代码。这是一种非常强大的自动化方式，但也要求你在系统设计、代理行为控制以及处理自己并不熟悉的复杂代码时遵循一些最佳实践，尤其是在调试错误时。\n\n建议你同时学习并实践这两种方式！\n\n不过，随着项目复杂度的增加，应更多地倾向于副驾驶模式，而减少纯粹的 YOLO 式随性编码。因为代码越有可能被其他人（甚至半年后的你自己）维护，这一点就越重要。\n\n# 🗺️ 路线图\n\n## 我该如何入门？\n\n- 如果你还不懂编程但想尝试一下，我们推荐从一些基于 Web 的工具开始，比如 [Bolt](https:\u002F\u002Fbolt.new)、[Replit](https:\u002F\u002Freplit.com)、[v0](https:\u002F\u002Fv0.dev)、[Suuper](https:\u002F\u002Fsuuper.dev) 或 [Lovable](https:\u002F\u002Flovable.dev)。\n\n- 如果你已经会编程，可以安装 [Claude Code](https:\u002F\u002Fclaude.com\u002Fproduct\u002Fclaude-code)、[Codex](https:\u002F\u002Fopenai.com\u002Fcodex\u002F)、[Cursor](https:\u002F\u002Fcursor.com\u002F)、[Amp](https:\u002F\u002Fampcode.com) 或 [Windsurf](https:\u002F\u002Fwindsurf.com\u002F)。你可以先从免费计划开始，再升级到每月 20 美元的方案。Claude Code Pro 计划、Codex 和 Cursor 都相当不错且价格实惠，尤其是它们提供了大量可用于当前最新 LLM 模型的 token。VSCode 也有自己的 [Agent Mode](https:\u002F\u002Fcode.visualstudio.com\u002Fblogs\u002F2025\u002F02\u002F24\u002Fintroducing-copilot-agent-mode)，它可以与 Github Copilot 配合使用，并采用 Agentic Workflow 
来进行代码修改和文件编辑。\n\n- 如果你想要更开源的替代方案，可以试试 [Pi](https:\u002F\u002Fpi.dev\u002F)（推荐）、[OpenCode](https:\u002F\u002Fopencode.ai\u002F) 或 [OpenHands](https:\u002F\u002Fgithub.com\u002FAll-Hands-AI\u002FOpenHands)。\n\n> 小提示：我们强烈建议你在 OpenRouter 上注册一个账号。这非常简单，而且你可以获得最新版本的 LLM 模型，甚至还有免费版本可用。\n\n> 重要提醒：通过 Claude Code 的 API\u002FSDK 使用服务会非常昂贵！你可能在不知不觉中每天就烧掉几百美元。因此，建议直接从 Claude Code Pro 或 Max 计划开始，这样你就不用太担心费用问题。如果你确实需要使用其 API\u002FSDK（比如将其嵌入到你的应用中或其他用途），务必密切关注 Anthropic 控制面板上显示的用量。\n\n📚 相关资源：\n\n- [Replit 随性编码入门课程](https:\u002F\u002Fwww.deeplearning.ai\u002Fshort-courses\u002Fvibe-coding-101-with-replit\u002F)\n- [Cursor AI 初学者教程（2025 版）](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=3289vhOUdKA)\n\n## 如何编写编码提示？也就是如何进行随性编码？\u003Ca id='vibe'>\u003C\u002Fa>\n\n在 2025 年年中或之前，LLM 往往会出现幻觉现象，陷入无休止的循环来修复错误。如今这些模型已经非常出色，但在使用它们进行编码时，仍然值得遵循一些基本原则：\n\n- 不要将所有内容都塞进一个提示中。只说“嘿，帮我为我的宠物店开发一个应用”对软件工程师来说毫无帮助，对 AI 更是如此 :-) 先理解你的项目，与 LLM 一起头脑风暴，制定一份 PRD（产品需求文档），规划好任务并分解成具体的子任务。下面我们会提供一个用 ChatGPT 为你生成 PRD 的方法。\n- 提供详细信息。如果你知道自己想要什么，就明确说出来。比如你想用哪种编程语言、哪个技术栈、面向哪类用户等，都应在提示中注明。\n- Markdown 或其他轻量级文本格式（如 asciidoc）通常都能被 LLM 理解。最终文本都会被编码为 token。不过，为了突出提示中的某些部分，建议使用一些特殊符号，例如 [XML 标签](https:\u002F\u002Fdocs.anthropic.com\u002Fen\u002Fdocs\u002Fbuild-with-claude\u002Fprompt-engineering\u002Fuse-xml-tags)。\n- 将项目拆分为任务和子任务。记住，良好的软件工程实践依然很重要。可以把代理想象成你刚雇佣的一位初级开发者：你会给他分配怎样的 PR，才能最大限度地提高它被批准并最终合并的可能性呢？\n- 针对不同目标尝试不同的模型。例如，Opus 或 ChatGPT 非常适合规划阶段，而 Sonnet 或一些开源模型（如 Kimi 或 Minimax）则更适合执行和实现计划。\n- 可以尝试使用不同的模型来确认和验证彼此的输出，甚至可以并行运行多个模型，然后选择最好的结果！\n- LLM 本质上是“yes 机器”，因此你需要保持批判性思维。不要照单全收它们生成的内容，一定要仔细审查并测试。毕竟，它们只是工具，生成的代码仍由你负责。\n\n在接下来的示例中，我们将使用与 Markdown 关联的 .md 文件扩展名。如果你更喜欢 asciidoc（它对结构化文档的支持稍好一些），也可以使用它，并将这些说明中的“.adoc”替换为“.md”。LLM 对文件格式并不在意，无论是 Markdown、asciidoc 还是其他纯文本格式，它们都能很好地处理。\n\n以下是一种通常效果不错的流程\u002F方法\u002F策略\u002F工作流：\n\n1. 使用 `ChatGPT`、Codex 或 Claude Code 本身，输入如下提示：\n\n```\n你是一位资深软件工程师。我们将一起制定一个项目的 PRD。\n\n非常重要：\n- 每次只提一个问题\n- 后一个问题必须基于前一个问题的回答\n- 每个重要细节都要深入探讨\n\n想法：\n\u003Cpaste here your idea>\n```\n\n2. 
你将进入一个持续几分钟的问答循环。尽量尽可能详细地回答每一个问题。完成后（或者当你觉得足够了），发送以下提示，引导模型将其整理成 PRD（也称为 SDD，即规范驱动开发）：\n\n```\n请将上述内容整理成一份 PRD，采用 Markdown 格式。内容应包括以下部分：\n\n- 项目概述\n- 核心需求\n- 核心功能\n- 核心组件\n- 应用\u002F用户流程\n- 技术栈\n- 实施计划\n```\n\n3. 将此文件复制并保存到你的项目文件夹内的 `docs\u002Fspecs.md` 中。\n4. 现在让我们为你的项目创建任务列表。请提出以下问题：\n```\n根据生成的 PRD，制定一个详细的分步计划来构建这个项目。\n然后将其分解为相互依赖的小任务。\n基于这些任务，再进一步拆分成更小的子任务。\n确保每个步骤既足够小以便于实现，又足够大以成功完成整个项目。\n遵循软件开发和项目管理的最佳实践，避免出现过于复杂的跳跃式进展。将任务串联起来，形成依赖关系列表。不应存在孤立的任务。\n\n非常重要：\n- 使用 Markdown 或 AsciiDoc 格式\n- 每个任务和子任务都应是一个待办事项清单条目\n- 为每个任务提供足够的上下文，使开发者能够独立完成\n- 每个任务都应有一个编号 ID\n- 每个任务都应列出其依赖的任务 ID\n```\n5. 将此内容保存为项目文件夹内的 `docs\u002Ftodo.md`。\n6. 在仓库根目录下，还有一个非常重要的文件：`AGENTS.md`（如果你使用 Claude Code，则可以命名为 `CLAUDE.md`）。你可以将这个文件视为“代理 README”，就像你为人类维护 `README.md` 一样，也为未来的代理维护一个 `AGENTS.md` 文件，它们会阅读和编辑你的源代码项目！该文件中应包含的重要内容有：项目的概要、所使用的技术栈、如何安装、测试和部署或运行项目、主要的 PRD\u002F规格\u002F设计或其他相关文档的位置，甚至包括主要的源代码文件。你可以通过 [这里](https:\u002F\u002Fagents.md) 了解更多关于 `AGENTS.md` 的信息及示例。\n\n[这里有一个示例](https:\u002F\u002Fchatgpt.com\u002Fshare\u002F67f8e8c6-c92c-8007-8fe0-76bdc73f9812) 是用 ChatGPT 4o 为一个简单的命令行工具进行头脑风暴\u002F规划的会话，你可以将其作为自己项目的灵感来源。\n\n现在为你的项目创建一个本地文件夹，并记得在该文件夹内运行 `git init` 来将其纳入版本控制。\n\n这样你就有了 PRD 和任务列表来构建你的项目了！有了这些文件，你可以启动你的 Codex（或其他 AI 代码代理），指向这些文件并输入以下指令：\n\n```\n你是一位资深软件工程师。请研究 @docs\u002Fspecs.md，并实现 @docs\u002Ftodo.md 中尚未完成的内容。每次只实现一个任务，并严格遵守任务和子任务的依赖关系。完成一个任务后，在列表中勾选它，然后继续下一个任务。\n```\n\n在这里你可以找到基于此工作流程在 10 分钟内构建的命令行工具的 Git 仓库：https:\u002F\u002Fgithub.com\u002Fautomata\u002Flocalbiz\n\n> 重要提示：虽然“随性编码”非常酷，但了解自己正在做什么也同样重要 :-) 审查代理生成的代码，不仅能在出现错误时帮助你解决问题（而错误是难免的），还能提升你对代码审查的能力（不仅是针对 AI 生成的代码，也包括你自己和其他开发人员编写的代码）。\n\n📚 资源：\n\n- [你可能正在错误地使用 Cursor AI……](https:\u002F\u002Fghuntley.com\u002Fstdlib\u002F)，作者：Geoffrey Huntley：在这篇文章中，Geoff 提出了使用 Cursor 规则标准库的想法。\n- [从设计文档到代码：Groundhog AI 编码助手（以及新的 Cursor 随性编码模式）](https:\u002F\u002Fghuntley.com\u002Fspecs\u002F)，作者：Geoffrey Huntley：这是前一篇文章的第二部分，Geoff 建议使用 LLM 自动构建规格说明（PRD）和 Cursor 规则。\n- [我目前的 LLM 
代码生成工作流程](https:\u002F\u002Fharper.blog\u002F2025\u002F02\u002F16\u002Fmy-llm-codegen-workflow-atm\u002F)，作者：Harper Reed。\n- [LLM 代码生成英雄之旅](https:\u002F\u002Fharper.blog\u002F2025\u002F04\u002F17\u002Fan-llm-codegen-heros-journey\u002F)，作者：Harper Reed。\n- [Claude Code：代理式编码的最佳实践](https:\u002F\u002Fwww.anthropic.com\u002Fengineering\u002Fclaude-code-best-practices)，作者：Anthropic：虽然这是针对他们的 Claude Code 工具的指南，但也包含了适用于任何 LLM 模型的 AI 编码工作流程的实用技巧。\n- [提示工程](https:\u002F\u002Fwww.kaggle.com\u002Fwhitepaper-prompt-engineering)，作者：Lee Boonstra（来自 Google）：这是一份 70 页的文档，提供了关于提示工程的有趣建议，其中有一节专门讨论代码生成。\n- [提示工程指南](https:\u002F\u002Fdocs.anthropic.com\u002Fen\u002Fdocs\u002Fbuild-with-claude\u002Fprompt-engineering)，作者：Anthropic。\n- [GPT 4.1 提示工程指南](https:\u002F\u002Fcookbook.openai.com\u002Fexamples\u002Fgpt4-1_prompting_guide)，作者：OpenAI。\n- [RepoPrompt](https:\u002F\u002Frepoprompt.com\u002F) 是一款帮助整合项目上下文的工具。值得观看一段关于 [RepoPrompt 工作流程](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=fm3VreCt-5E) 的概述视频，学习如何轻松利用这类工具为你的随性编码提示提供更多上下文。\n- [并非所有 AI 辅助编程都是随性编码（但随性编码确实很棒）](https:\u002F\u002Fsimonwillison.net\u002F2025\u002FMar\u002F19\u002Fvibe-coding\u002F)，作者：Simon Willison。\n\n\n\n## 我应该使用哪种 LLM 模型？\n\nLLM 是根据不同的目标进行训练和调整的。像 ChatGPT 或 Claude 这样的模型被训练成能够在与用户对话中提供通用知识支持的优秀助手。而在其他情况下，研究机构会捕捉模型在长时间运行工具并对其输出进行推理时的行为，并将反馈作为奖励信号返回给模型，从而将原始训练好的模型调整为适应特定任务。例如，对于 Claude Opus、Sonnet 或 OpenAI GPT 而言，其任务就是编码。\n\n因此，始终选择那些专为编码\u002F编程训练\u002F调整过，并且支持工具的模型非常重要。\n\n鉴于 LLM 的发展日新月异，如今很难推荐某一个特定版本的模型适合编码。我们能做的，是建议你应该考虑哪一类模型。目前，像 Claude Opus 和 OpenAI GPT 这样的专有模型在 AI 编码领域处于领先地位；而开源模型如 Kimi、Minimax 和 GLM 则一直紧随其后，每次发布新版本时都在不断缩小与专有模型之间的差距。\n\n要确定具体选择哪个版本的模型，一般建议选择最新版本，或者查看以下排行榜以获得更准确的比较：\n\n- [Models.dev](https:\u002F\u002Fmodels.dev)：一个开源的 AI 模型数据库。\n- [OpenRouter 的模型列表](https:\u002F\u002Fopenrouter.ai\u002Fmodels?categories=programming&fmt=table)：将类别设置为“编程”，并仅筛选支持“工具”的模型，通常是选择用于 AI 辅助编码模型的好方法。\n- [Agent Leaderboard](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fgalileo-ai\u002Fagent-leaderboard)\n\n## 
当出现可怕的“速率限制”消息时该怎么办\n\n切换到不同的模型。\n\n这种情况至少有两种原因。一是你的请求过于密集，超出了模型的输入\u002F输出 token 限制；二是运行该模型的服务器集群可能遇到了问题，导致系统进行限流以减轻负载。然而，你收到的错误信息通常对此并不透明。你可以[在这里](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Frate-limits)找到更详细的解释。\n\n不同模型的 token 限制差异很大。例如，在2025年4月下旬，gpt-4.1-mini 的 token 限制就比 gpt-4.1 宽松得多。建议你准备多个 API 密钥（按使用量付费，成本不高），并查看各模型的速率限制说明页面。以 [Anthropic 的页面](https:\u002F\u002Fconsole.anthropic.com\u002Fsettings\u002Flimits) 为例。\n\n## 如何设置项目范围内的规则？\n\n你可以通过将规则或约定“注入”到 LLM 的上下文中，来定义适用于整个项目的规则。不同的编辑器有不同的实现方式：\n\n- 对于除 Claude Code 之外的几乎所有代理，可以在项目的根目录下创建一个 `AGENTS.md` 文件。更多信息和示例请参阅 https:\u002F\u002Fagents.md。\n- 对于 Claude Code，则只需在项目根目录下创建一个 `CLAUDE.md` 文件。Anthropic 并未遵循 `AGENTS.md` 标准，因此一种好的做法是先创建一个 `AGENTS.md` 文件，然后从 `CLAUDE.md` 创建指向 `AGENTS.md` 的符号链接。你也可以直接将 `CLAUDE.md` 文件的内容设置为 `@AGENTS.md`。\n- 在 Cursor 中，只需在 `.cursor\u002Frules\u002F` 文件夹内创建 Markdown 文件，Cursor 会确保这些文件应用于与 LLM 的所有交互。\n- 在 Aider 中，创建包含你希望使用的规则或约定的 Markdown 文件（如 `rules.md`），并在 `.aider.conf.yml` 文件中添加以下内容：`read: rules.md`。\n\n此外，许多工具还支持在你的主目录中配置一个规则\u002F约定文件，以便将其应用于所有项目。例如，在 Aider 中，你可以在名为 `~\u002F.global_conventions.md` 的文件中添加全局约定，然后在 `.aider.conf.yml` 中通过 `read: [~\u002F.global_conventions.md, rules.md]` 将其引入。\n\n你还可以将部分 PRD 内容作为规则，比如技术栈或代码格式和风格方面的指导原则。\n\n规则非常强大，甚至可以利用 AI 自身为你生成规则！[请参考 Geoff 的方法](https:\u002F\u002Fghuntley.com\u002Fspecs\u002F)。\n\n📚 资源：\n\n- [AGENTS.md](https:\u002F\u002Fagents.md)\n- [Cursor Rules 文档](https:\u002F\u002Fdocs.cursor.com\u002Fcontext\u002Frules-for-ai)\n- [Windsurf 规则文档](https:\u002F\u002Fwindsurf.com\u002Feditor\u002Fdirectory)\n- [Aider 约定文档](https:\u002F\u002Faider.chat\u002Fdocs\u002Fusage\u002Fconventions.html)\n- [Aider 约定集合](https:\u002F\u002Fgithub.com\u002FAider-AI\u002Fconventions)：由社区贡献的用于 Aider 的约定文件集合。\n- [Awesome Cursor Rules](https:\u002F\u002Fgithub.com\u002FPatrickJS\u002Fawesome-cursorrules)：精心整理的 .cursorrules 文件列表，用于提升你的 Cursor AI 使用体验。\n\n## 我如何避免幻觉？什么是 PRD？\n\nPRD？什么？！据说解决工程问题的最佳方法就是创造一个新的缩写词，这次也不例外 :-) 开玩笑啦……PRD 是 **产品需求文档** 
的缩写。简单来说，它就是描述你的软件项目需求及其他细节的一组文档（或仅一份文档）。\n\n事实证明，如果你让心爱的 LLM 自由发挥，而没有提供足够的上下文信息，它很快就会产生荒谬的“幻觉”。你需要驯服这只野兽，而 PRD 正是很好的方式。\n\n我最喜欢 PRD 的一点是，它对任何人都非常有帮助——无论是从未写过代码的人，还是资深软件工程师或产品经理。\n\n你不需要任何背景知识就可以开始编写 PRD，只需要一个应用的想法即可。不过，了解软件设计的基本原理以及你正在使用或计划使用的技术栈\u002F框架的相关细节，确实会很有帮助。\n\n请查看\u003Ca href=\"#vibe\">这里\u003C\u002Fa>，了解如何利用 LLM 为你生成一份 PRD。\n\n## 记录提示日志\n\n记录你发送的每一个提示，同时（这一点很重要）穿插一些你当时的思考以及遇到的意外情况。这份提示日志是你设计意图的记录；对于任何未参与该项目的人来说，包括六个月后已经忘记自己想法的你本人，它都将具有不可估量的价值。\n\n目前还没有关于这个文件命名的统一规范，你可以使用类似 `vibecode.adoc` 或 `history.md` 这样的名称。\n\n有一些工具，比如 [aider](https:\u002F\u002Faider.chat\u002F)，会自动保存你与 LLM 之间的所有对话记录。因此，你可以设置以下环境变量，并将这些历史文件纳入版本控制：\n\n```bash\n# 历史文件：\n\n## 指定聊天输入历史文件（默认：.aider.input.history）\n#AIDER_INPUT_HISTORY_FILE=.aider.input.history\n\n## 指定聊天历史文件（默认：.aider.chat.history.md）\n#AIDER_CHAT_HISTORY_FILE=.aider.chat.history.md\n\n## 将与 LLM 的对话记录到此文件（例如 .aider.llm.history）\n#AIDER_LLM_HISTORY_FILE=.aider.llm.history\n```\n\n有了这些文件，你可以在回顾时加入自己的注释和思考。当你在未来重新审视项目时，你（以及其他人）都能从中学习到很多东西，并且很可能会发现一些模式和技巧，从而在接下来的会话中加以利用。\n\n像 [Pi](https:\u002F\u002Fpi.dev) 或 [Amp](https:\u002F\u002Fampcode.com) 这样的代理平台也允许你保存和分享编码会话。在 Pi 中，只需输入 `\u002Fshare`，它就会创建一个 GitHub gist，并生成一个易于分享的 URL，供你通过酷炫的界面进行可视化。\n\n## 我该如何开始我的项目？\n\n### Web 应用（前端）\n\n现代 Web 开发让人应接不暇。JavaScript\u002FTypeScript 框架、CSS 框架、构建工具、部署服务等层出不穷，因此很难入门，也难以决定该选择哪一种。经过过去几个月的前端开发实践，我通常会推荐以下几种方案：\n\n- [TanStack Start](https:\u002F\u002Ftanstack.com\u002Fstart)：它提供了一个功能强大的 React 框架，而且不受 Vercel 或其他任何提供商的限制，你可以将应用部署到任何你想要的地方。\n- [Next.js](https:\u002F\u002Fnextjs.org)：目前仍然是最受欢迎的 React 框架之一，但你会依赖于 Vercel 的生态系统，这既有优点也有缺点。\n- [FastHTML](https:\u002F\u002Ffastht.ml)：如果你喜欢 Python，并且更关注暴露核心后端功能（例如数据分析或 AI\u002FML 模型流水线），而不是追求极其精美的 UI。\n\n如果你也在编写后端代码，请确保在你的 Git 仓库根目录下，要么创建一个用于后端、另一个用于前端的文件夹；要么分别设置后端和前端两个独立的仓库，然后将它们添加到 Cursor 工作空间中（这样你就可以在前端代理中引用后端文件，反之亦然），或者直接在根目录下运行 Claude Code 或其他基于命令行的工具链。\n\n为了避免过早地将前端与后端集成，可以指示 AI 代理使用模拟数据或占位符数据，等到后端实现后再进行更新。\n\n另一个有趣的技巧是使用优秀的 MCP 工具，将你的编码代理与 Playwright 或浏览器结合起来。这样一来，AI 
代理可以直接控制浏览器并自行截取屏幕截图和错误信息，从而避免将浏览器中的错误反复复制粘贴到 AI 代理中。当然，如果你像我一样不喜欢 MCP，也可以通过 Bash 命令来实现类似的功能。\n\n如果你想在 Web 应用中使用 3D 内容，并且正在使用 React，那么使用 [React Three Fiber](https:\u002F\u002Fgithub.com\u002Fpmndrs\u002Freact-three-fiber) 会比直接使用 three.js 库更加方便。R3F 将 all three.js 对象封装为 React 组件，使得状态管理变得更加容易。\n\n### 后端\n\n在基于 Web 的工具（如 Lovable）中，后端往往不够灵活或难以扩展。因此，你可能需要借助 Claude Code、Codex 或其他非 Web 工具来完成后端开发。\n\n使用 Python 和 FastAPI 是一个不错的选择。如果你倾向于与前端保持同一种编程语言（通常是 JavaScript 或 TypeScript），也可以选择 Node.js 或 Bun。对于 TanStack 或 Next.js 前端所需的大多数 API 端点，你可以使用服务器函数（Server Functions）或服务器动作（Server Actions），而无需单独搭建后端。\n\n后端非常适合进行端到端测试，因此建议引导代理为每个新功能及其子任务编写测试并运行。\n\n一旦后端开发完成，你可以将其文档（尤其是关于 HTTP 端点的部分）作为输入，供前端代理参考。这样，你就可以将前端从使用模拟数据切换到真正来自后端的数据。\n\n### 游戏\n\n对于小型游戏，可以在一个单独的 .js 文件中使用原生 JavaScript；如果是 3D 游戏，可以使用 three.js；如果是 2D 游戏，则可以使用 Pixi.js。\n\n游戏的关键在于优质的资源素材，因此可以考虑使用 Tripo AI 和 [Everything Universe](https:\u002F\u002Feverythinguniver.se) 等服务来生成 3D 资源，并对其进行绑定和动画制作。\n\n## 我如何处理错误和 Bug？\n\n关于编码和软件开发，有一件事必须清楚：它们一定会出错。无论你如何努力预防，错误总是会发生。所以，首先要坦然接受这一点，与错误和 Bug 友好相处。\n\n第一种策略是模仿 SWE 的做法：仔细查看解释器或编译器给出的错误信息，并尝试理解它。然后将错误信息复制粘贴回大模型，请求它帮助修复。另一个好主意是引入适合调试的 MCP 工具，比如 [BrowserTools](https:\u002F\u002Fbrowsertools.agentdesk.ai\u002F)，或者将 Claude Code 或其他代理连接到本地 Chrome 浏览器。此外，在远程开发场景中，或者当你不想使用当前的本地 Chrome 时，也可以通过 Playwright 使用无头浏览器。\n\n## 什么是 MCP、SLOP 和 A2A？我如何从中受益？\n\n> 2026 年 3 月更新：如今我们发现，将 Bash 作为代理工具，通常足以替代 MCP：代理可以通过 `curl` 调用 API 或执行其他命令，这种方式效率更高（例如消耗更少的 token）。不过，MCP 在某些场景下仍然具有吸引力。\n\nMCP 是 Model Context Protocol 的缩写，由 Anthropic 公司开发，但现在也被 OpenAI 的 GPT 和 Google 的 Gemini 等其他大模型所采纳。这是一个非常强大的概念，与函数\u002F工具调用密切相关。\n\n工具调用是指大模型能够调用外部工具或函数来执行特定操作的一种方式。它可以让大模型（基于历史数据训练而成）利用新的信息更新其知识窗口，同时与外部工具和端点进行集成。例如，如果你想在网络上搜索某些信息，可以指示大模型使用相应的工具（“如果需要在网上搜索内容，请使用这个工具：search(term)”）。这样一来，大模型就不必花费大量 token、多次迭代和复杂的解析工作，而是直接调用工具获取结果，再基于这些结果为你生成新的预测。\n\nMCP 则进一步扩展了这一理念，制定了一套标准化协议。通过 MCP，我们可以创建一个 MCP 服务器，向大模型公开某些资源（例如数据库）或工具（例如专门用于计算并返回结果的软件）。\n\n有人可能会问：这不就是 API 吗？难道不能通过 REST API 服务器\u002F客户端以及在大模型提示词中进行解析来实现同样的效果吗？某种程度上确实可以，这也是 
SLOP（Simple Language Open Protocol）所倡导的。然而，拥有 MCP 这样的标准，可以更容易地确保大模型能够原生支持它，而无需在客户端额外进行解析或使用各种技巧。\n\nA2A（Agent to Agent Protocol）是由 Google 提出的，旨在“补充”MCP，专注于多代理之间的通信，而 MCP 则侧重于大模型与工具之间的交互。\n\n最后一点提醒：如果你使用的工具链（如 Pi）不支持 MCP，也没关系，你可以将你喜欢的 MCP 工具包装成一个命令行工具，然后让 Pi 通过 Bash 调用它。只需使用 [mcporter](https:\u002F\u002Fgithub.com\u002Fsteipete\u002Fmcporter\u002F) 即可轻松实现这一操作。\n\n📚 资源：\n\n- [MCP](https:\u002F\u002Fmodelcontextprotocol.io\u002F)\n- [MCP 已死；MCP 万岁！](https:\u002F\u002Fchrlschn.dev\u002Fblog\u002F2026\u002F03\u002Fmcp-is-dead-long-live-mcp\u002F)\n- [Anthropic 的 MCP 服务器列表](https:\u002F\u002Fgithub.com\u002Fmodelcontextprotocol\u002Fservers)\n- [SLOP](https:\u002F\u002Fgithub.com\u002Fagnt-gg\u002Fslop)\n- [介绍 SLOP](https:\u002F\u002Frussell.ballestrini.net\u002Fintroducing-slop\u002F)\n- [MCP 与 SLOP 的比较](https:\u002F\u002Fmcpslop.com\u002F)\n- [A2A](https:\u002F\u002Fgoogle.github.io\u002FA2A\u002F)\n\n## 从零开始还是使用样板代码？\n\n一般来说，大模型从头开始效果更好。不过，你也可以从一个样板代码库（基本上就是一个入门套件：包含运行特定技术栈所需的基本源文件和配置文件的初始框架）入手，并在上下文中添加规则，以确保它会遵循你的入门套件规范。\n\n另一个好主意是利用 Cursor 自带的索引功能，或者使用像 [repomix](https:\u002F\u002Frepomix.com\u002F) 或 [files-to-prompt](https:\u002F\u002Fgithub.com\u002Fsimonw\u002Ffiles-to-prompt) 这样的工具，对你刚刚用入门套件创建的项目进行索引。\n\n## 从零开始的新项目与现有代码库\n\n如果你已经有现成的代码库，可以考虑使用 [repomix](https:\u002F\u002Frepomix.com\u002F) 或 [files-to-prompt](https:\u002F\u002Fgithub.com\u002Fsimonw\u002Ffiles-to-prompt) 将其打包并放入上下文窗口中。\n\n另一个实用技巧是在任务级别而非项目级别提出修改请求。例如，专注于你要实现的一个功能，然后让 Cursor Agent 来完成它。为该功能提供一份简要的需求文档。你可以想象自己正在指导一位初级开发者处理某个具体的 GitHub 任务 :-)\n\n📚 资源：\n\n- [Karpathy 关于他使用 AI 辅助编程方式的推文](https:\u002F\u002Fx.com\u002Fkarpathy\u002Fstatus\u002F1915581920022585597)\n- [如何在 Cursor 中处理大型代码库](https:\u002F\u002Fdocs.cursor.com\u002Fguides\u002Fadvanced\u002Flarge-codebases)\n\n## 结构良好的提示带来结构良好的设计\n\n目前（2025 年 4 
月），大模型已经能够生成可运行的代码，但它们在生成结构良好、层次分明且关注点分离的代码方面仍显不足。良好的架构对于代码的可读性和可维护性至关重要，并能有效降低缺陷率。\n\n在编写提示时，应先仔细思考你的设计，然后按照能够产生良好结构的顺序来组织提示。例如，在以数据库为中心的应用中，最好先定义数据记录类型，再引导大模型构建一个封装了数据访问逻辑的管理类或模块。只有在此之后，才开始提示业务逻辑的实现。\n\n更一般地说，在编写提示时，应区分“引擎代码”和“策略代码”，并按顺序发出提示，以引导大模型正确地进行划分。\n\n你还可以在项目规则中加入一条：要求大模型不得破坏分层原则——如果需要新增引擎方法，应当将其添加到干净的封装层中，而不是让底层实现细节与业务逻辑混杂在一起。\n\n另一种有趣的做法是从项目的核心部分入手，花时间确保主要功能按预期实现并组织好。你甚至可以先编写类和函数的骨架，然后让大模型填充细节。只有在建立了稳固的基础并编写了完善的测试之后，再逐步开发该核心库的消费者，比如将其作为 CLI 或 REST API 对外暴露给未来的 Web 应用程序。\n\n## 我应该使用 TDD 或其他类型的测试吗？\n\n当然，一定要使用！测试比以往任何时候都更加重要。在当前的技术水平下（2026 年），大模型虽然能够生成整洁且正确的代码，但有时仍会出现“幻觉”，更重要的是，它们可能会误解需求规格，从而生成看似正确但实际上做错事的代码。\n\n即使未来我们真的实现了完全等同于人类的通用人工智能，这种情况也很难改变——毕竟，人类同样会误解需求！语言的模糊性决定了测试在未来依然不可或缺。\n\n采用测试驱动开发（TDD）来构建期望结果的骨架，确实可以帮助引导大模型实现你正在测试的目标代码。同时，指示大模型编写测试并运行它们也是一个很好的实践：这样它就能将可能导致测试失败的潜在错误纳入上下文，并尝试修复这些问题，使测试通过。\n\n测试是与大模型协同开发代码库的基础，只有在所有现有测试都通过的情况下，才能继续推进开发。\n\n属性测试在与大模型协作时尤为有趣。与其只针对你明确指定的几个具体值进行测试，不如覆盖整个取值范围，这样可以确保代理生成的代码在后续变更触及你事先未考虑到的边缘场景时仍然有效。每种编程语言都有不错的属性测试库，比如 Python 的 [hypothesis](https:\u002F\u002Fhypothesis.readthedocs.io\u002Fen\u002Flatest\u002F) 或 JavaScript\u002FTypeScript 的 [fast-check](https:\u002F\u002Ffast-check.dev\u002F)。\n\n此外，在编写或修复测试的过程中，务必仔细检查大模型生成的代码。有时，它们甚至会尝试生成硬编码的输出，只为让测试通过 :-)。\n\n## 如何确保安全？\n\n适用于非 AI 辅助编程的所有规则和最佳实践在这里同样适用。请进一步研究这些内容，并将其应用到你的代码中。以下是一个初步的安全检查清单：\n\n- 不要信任 AI 生成的代码，务必始终进行验证。记住，最终对运行的代码负责的是你，而不是 AI！\n- 不要在代码中以硬编码字符串的形式存储任何 API 密钥或其他敏感信息，尤其是在前端代码中。应将这些信息存储在后端的受保护环境变量中（例如，Vercel 等平台提供了此类选项）。\n- 在调用 API 端点时，务必使用 HTTPS 协议。\n- 创建 HTML 表单时，务必进行输入验证和清理。\n- 不要将敏感数据存储在 `localStorage`、`sessionStorage` 或 Cookie 中。\n- 定期在你的项目依赖中运行验证工具和安全漏洞扫描器。\n\n## 如何在 Claude Code 中使用任何 LLM？\n\n你是否想在自己的 Claude Code CLI 中尝试 Kimi K2 或其他 LLM？你可以使用 claude-code-router，让 Claude Code CLI 通过本地运行的“代理”将请求路由到 OpenRouter 上可用的任意模型！以下说明适用于 Kimi K2，但你也可以根据需要调整以适配其他 LLM。\n\n首先，在 OpenRouter 上注册账号并获取你的 API 密钥。\n\n确保已安装 Claude Code CLI：\n\n```\nnpm install -g @anthropic-ai\u002Fclaude-code\n```\n\n然后安装 claude-code-router：\n\n```\nnpm install -g 
@musistudio\u002Fclaude-code-router\n```\n\n将以下内容添加到你的 `~\u002F.claude-code-router\u002Fconfig.json` 文件中，将 `OPENROUTER_API_KEY` 替换为你从 OpenRouter 获取的 API 密钥：\n\n```\n{\n  \"Providers\": [\n    {\n      \"name\": \"kimi-k2\",\n      \"api_base_url\": \"https:\u002F\u002Fopenrouter.ai\u002Fapi\u002Fv1\u002Fchat\u002Fcompletions\",\n      \"api_key\": \"OPENROUTER_API_KEY\",\n      \"models\": [\n        \"moonshotai\u002Fkimi-k2\"\n      ],\n      \"transformer\": {\n        \"use\": [\"openrouter\"]\n      }\n    }\n  ],\n  \"Router\": {\n    \"default\": \"kimi-k2,moonshotai\u002Fkimi-k2\"\n  }\n}\n```\n\n现在只需通过路由器运行 Claude Code：\n\n```\nccr code\n```\n\n你应该会看到 Claude Code 显示 `API Base URL: http:\u002F\u002F127.0.0.1:3456`，这意味着它正在使用由 claude-code-router 创建的本地代理。就是这样！\n\n如果你只对 Kimi K2 或 Moonshot 的其他模型感兴趣，另一种选择是直接使用 Moonshot 自身提供的模型：https:\u002F\u002Fgithub.com\u002FLLM-Red-Team\u002Fkimi-cc\u002Fblob\u002Fmain\u002FREADME_EN.md\n\n## 如何创建属于自己的 AI 编码智能体？\n\n最好的入门教程是 Thorsten 撰写的这篇实用指南：[如何构建一个智能体](https:\u002F\u002Fampcode.com\u002Fhow-to-build-an-agent)，其中逐步教你用 Go 语言构建一个仅使用最少工具（`list_files`、`read_file`、`edit_file`）的简单智能体。\n\n我还写过另一篇关于如何在此处创建自定义 AI 编码智能体的教程：[在这里创建你的智能体](https:\u002F\u002Fhackable.space\u002Fen\u002Fcourses\u002Fcreating-your-own-agent)。该教程仅包含 150 行 Python 代码，能让你了解智能体框架背后的运作机制。\n\n如果你想深入研究，Gerred 编写的这套开源书籍系列：[构建智能体系统](https:\u002F\u002Fgerred.github.io\u002Fbuilding-an-agentic-system) 绝对是一个很好的起点。\n\n## 多智能体编排到底是什么鬼？！\n\n大约在 2024—2025 年间，人们开始尝试不再只运行单个智能体，而是组建一个团队来完成特定的编码任务。到了 2025 年底，Steve Yegge 等人围绕这一概念实现了首批方案。目前最流行的可能是 [Gas Town](https:\u002F\u002Fgithub.com\u002Fsteveyegge\u002Fgastown)。在 Gas Town 中，有一个名为 Mayor 的主智能体，它接收用户任务并将其拆分为多个子任务（这些子任务被存储和管理在 Yegge 创造的另一个工具 [Beads](https:\u002F\u002Fgithub.com\u002Fsteveyegge\u002Fbeads) 中，可以理解为“面向智能体的 GitHub Issues”）。Mayor 通过消息与其他智能体（例如工人 Polecats）进行通信，每个智能体都有自己的邮箱，并在收到消息时执行相应操作。\n\n此外还有其他多智能体编排的实现，比如 [Claude Flow](https:\u002F\u002Fgithub.com\u002Fruvnet\u002Fclaude-flow) 和 
[Loom](https:\u002F\u002Fgithub.com\u002Fghuntley\u002Floom)，但目前尚不清楚这是否就是未来软件开发的方向。不过，尝试使用这些工具、理解其内部机制仍然很重要。需要注意的是，运行 Gas Town 等编排系统成本相当高，因为并行运行的智能体会在几秒钟内消耗大量 token。\n\n- [欢迎来到 Gas Town](https:\u002F\u002Fsteve-yegge.medium.com\u002Fwelcome-to-gas-town-4f25ee16dd04)\n- [编码智能体的未来](https:\u002F\u002Fsteve-yegge.medium.com\u002Fthe-future-of-coding-agents-e9451a84207c)\n- [Gas Town 紧急用户手册](https:\u002F\u002Fsteve-yegge.medium.com\u002Fgas-town-emergency-user-manual-cf0e4556d74b)\n\n# ✨ 特定工具与智能体的技巧与窍门\n\n## Claude Code\n\n- [Claude Code 指南](https:\u002F\u002Fgithub.com\u002Fzebbern\u002Fclaude-code-guide)：涵盖了所有可发现的 Claude Code 命令，包括许多在基础教程中未被广泛提及或记录的功能。\n\n# 🛠️ 工具\n\n这里我们会持续更新用于 AI 编码的主要工具列表。我们已经测试过其中大部分工具，并会在测试过程中分享我们的真实看法。\n\n## 编辑器 \u002F IDE\n\n- [Cursor](https:\u002F\u002Fcursor.com)\n- [Windsurf](https:\u002F\u002Fwindsurf.com)\n- [Cline](https:\u002F\u002Fcline.bot\u002F)\n- [OpenHands](https:\u002F\u002Fgithub.com\u002FAll-Hands-AI\u002FOpenHands)\n- [Devin](https:\u002F\u002Fdevin.ai)\n- [VSCode + GitHub Copilot](https:\u002F\u002Fcode.visualstudio.com\u002Fdocs\u002Fcopilot\u002Fsetup)\n- [Amp](https:\u002F\u002Fampcode.com)\n- [Kiro](https:\u002F\u002Fkiro.dev)\n\n## 命令行工具\n\n- [Claude Code](https:\u002F\u002Fgithub.com\u002Fanthropics\u002Fclaude-code)\n- [OpenAI Codex](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fcodex)\n- [Pi](https:\u002F\u002Fpi.dev)\n- [opencode](https:\u002F\u002Fgithub.com\u002Fopencode-ai\u002Fopencode)\n- [Gemini CLI](https:\u002F\u002Fgithub.com\u002Fgoogle-gemini\u002Fgemini-cli)\n- [Aider](http:\u002F\u002Faider.chat\u002F)\n- [Claude Engineer](https:\u002F\u002Fgithub.com\u002FDoriandarko\u002Fclaude-engineer)\n- [Roo Code](https:\u002F\u002Fgithub.com\u002FRooVetGit\u002FRoo-Code)\n- [Codebuff](https:\u002F\u002Fwww.codebuff.com\u002F)\n\n## Web 应用\n\n- [Bolt](https:\u002F\u002Fbolt.new)\n- [v0](https:\u002F\u002Fv0.dev)\n- [Replit](https:\u002F\u002Freplit.com)\n- [Suuper](https:\u002F\u002Fsuuper.dev)\n- 
[Lovable](https:\u002F\u002Flovable.dev)\n- [Firebase Studio](https:\u002F\u002Ffirebase.studio\u002F)\n\n## 后台\u002F远程智能体\n\n- [ZenCoder](https:\u002F\u002Fzencoder.ai\u002F)\n- [CodeRabbit](https:\u002F\u002Fwww.coderabbit.ai\u002F)\n- [Factory AI](https:\u002F\u002Fwww.factory.ai\u002F)\n- [OpenAI Codex](https:\u002F\u002Fchatgpt.com\u002Fcodex)\n\n## 实用工具\n\n- [Specstory](https:\u002F\u002Fspecstory.com\u002F)\n- [Claude Task master](https:\u002F\u002Fgithub.com\u002Feyaltoledano\u002Fclaude-task-master)\n- [CodeGuide](https:\u002F\u002Fwww.codeguide.dev\u002F)\n- [repomix](https:\u002F\u002Frepomix.com\u002F)\n- [files-to-prompt](https:\u002F\u002Fgithub.com\u002Fsimonw\u002Ffiles-to-prompt)\n- [repo2txt](https:\u002F\u002Fgithub.com\u002Fdonoceidon\u002Frepo2txt)\n- [stakgraph](https:\u002F\u002Fgithub.com\u002Fstakwork\u002Fstakgraph)\n- [Repo Prompt](https:\u002F\u002Frepoprompt.com\u002F)\n- [Uzi](http:\u002F\u002Fuzi.sh\u002F)\n- [Claudia](https:\u002F\u002Fclaudia.asterisk.so\u002F)\n\n# 🤗 跟哪些人学习？\n\n以下是一些正在实践 AI 编码模型\u002F工具，或将 AI 应用于自身项目的有趣人物。\n\n- [Addy Osmani](https:\u002F\u002Fx.com\u002Faddyosmani)\n- [Andrej Karpathy](https:\u002F\u002Fx.com\u002Fkarpathy)\n- [Armin Ronacher](https:\u002F\u002Fx.com\u002Fmitsuhiko)\n- [Beyang Liu](https:\u002F\u002Fx.com\u002Fbeyang)（Amp）\n- [Cat Wu](https:\u002F\u002Fx.com\u002F_catwu)（Claude Code）\n- [Eric S. 
Raymond](https:\u002F\u002Fx.com\u002Fesrtweet)\n- [Eyal Toledano](https:\u002F\u002Fx.com\u002FEyalToledano)（TaskMaster）\n- [Geoffrey Huntley](https:\u002F\u002Fx.com\u002FGeoffreyHuntley)\n- [Gerred Dillon](https:\u002F\u002Fx.com\u002Fdevgerred)\n- [Harper Reed](https:\u002F\u002Fx.com\u002Fharper)\n- [Mario Zechner](https:\u002F\u002Fx.com\u002Fbadlogicgames)（pi）\n- [Nathan Wilbanks](https:\u002F\u002Fx.com\u002FNathanWilbanks_)（agnt、SLOP）\n- [Pietro Schirano](https:\u002F\u002Fx.com\u002Fskirano)\n- [Quinn Slack](https:\u002F\u002Fx.com\u002Fsqs)（Amp）\n- [Sandeep Pani](https:\u002F\u002Fx.com\u002Fskcd42)（Aider、AgentFarm）\n- [Simon Willison](https:\u002F\u002Fx.com\u002Fsimonw)\n- [Steve Yegge](https:\u002F\u002Fx.com\u002FSteve_Yegge)（Gas Town、Beads、Wasteland）\n- [Vilson Vieira](https:\u002F\u002Fx.com\u002Faut0mata)\n- [Thorsten Ball](https:\u002F\u002Fx.com\u002Fthorstenball)（Amp）\n- [Xingyao Wang](https:\u002F\u002Fx.com\u002Fxingyaow_)（OpenHands、AllHands）\n\n# 💖 致谢\n\n本指南的灵感来源于 Maxime Labonne 的优秀项目 [llm-course](https:\u002F\u002Fgithub.com\u002Fmlabonne\u002Fllm-course)。\n\n特别感谢：\n\n- [Gabriela Thumé](https:\u002F\u002Fgithub.com\u002Fgabithume)，感谢你的一切 ❤️\n- [Albert Espín](https:\u002F\u002Fgithub.com\u002Falbert-espin)，感谢你提供的宝贵反馈和首次错误修正\n- [Geoffrey Huntley](https:\u002F\u002Fx.com\u002FGeoffreyHuntley)，感谢你向我推荐基于属性的测试，以及你关于自主智能体的所有精彩教程和实验\n- ChatGPT 4o，感谢它生成了你在顶部看到的横幅，其设计灵感来自杰出艺术家 [Deathburger](https:\u002F\u002Fcitadel9.com\u002F)\n\n# ⭐ 贡献\n\n如果你希望贡献修正、反馈，或是补充一些缺失的工具或参考资料，请随时发起新的 Pull Request、新建 Issue，或者直接联系 [Eric](https:\u002F\u002Fx.com\u002Fesrtweet) 或 [Vilson](https:\u002F\u002Fx.com\u002Faut0mata)。\n\n如果你喜欢本指南，请考虑给它点个星标 ⭐，并关注以获取最新更新！\n\n## 星标历史\n\n[![星标历史图表](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fautomata_aicodeguide_readme_ea5b0eac74bb.png)](https:\u002F\u002Fwww.star-history.com\u002F#automata\u002Faicodeguide?type=date&legend=top-left)\n\n# ⚖️ 许可证\n\nMIT","# AI Code Guide 快速上手指南\n\n## 简介\n**AI Code Guide** 
并非一个单一的可执行软件，而是一份由社区维护的**综合实践指南**。它旨在帮助开发者（无论是否有编程基础）掌握利用大语言模型（LLM）进行“AI 编程”或“氛围编程（Vibe Coding）”的最佳实践、工具链和工作流。本指南将指导你如何选择合适的工具，并通过高效的提示词策略让 AI 协助你构建软件。\n\n## 环境准备\n\n在开始之前，请确保满足以下前置条件：\n\n1.  **操作系统**：Windows, macOS 或 Linux 均可。\n2.  **基础依赖**：\n    *   推荐安装 **VS Code** (Visual Studio Code) 作为基础编辑器（许多 AI 工具基于此或其生态）。\n    *   具备基本的命令行操作知识。\n3.  **账号准备**：\n    *   **OpenRouter 账号**（强烈推荐）：用于聚合访问最新的 LLM 模型，部分模型提供免费额度，且便于管理 Token 消耗。\n    *   **代码托管平台账号**：如 GitHub 或 GitLab（用于版本控制）。\n4.  **网络环境**：由于主要 AI 服务（如 Claude, OpenAI, Cursor 等）服务器位于海外，中国大陆用户需确保网络环境畅通，或使用合法的加速方案。\n\n> **注意**：直接使用 Claude Code 的 API\u002FSDK 可能产生高昂费用。初学者建议优先使用官方提供的 Pro\u002FMax 订阅套餐，或在 OpenRouter 上监控用量。\n\n## 安装步骤\n\n根据你的编程经验，选择以下任一主流工具进行安装：\n\n### 方案 A：零基础 \u002F 快速原型开发（网页版，无需安装）\n适合想通过“氛围编程”快速构建 SaaS 或原型的用户。\n\n*   **Bolt**: 访问 [bolt.new](https:\u002F\u002Fbolt.new)\n*   **Replit**: 访问 [replit.com](https:\u002F\u002Freplit.com)\n*   **v0**: 访问 [v0.dev](https:\u002F\u002Fv0.dev)\n*   **Lovable**: 访问 [lovable.dev](https:\u002F\u002Flovable.dev)\n\n### 方案 B：专业开发者 \u002F 深度集成（桌面应用）\n适合需要本地开发、复杂项目管理的用户。\n\n#### 1. 安装 Cursor (推荐)\nCursor 是目前最流行的 AI 原生代码编辑器。\n```bash\n# macOS (使用 Homebrew)\nbrew install --cask cursor\n\n# Windows \u002F Linux\n# 请访问 https:\u002F\u002Fcursor.com 下载对应安装包\n```\n\n#### 2. 安装 Claude Code (命令行工具)\nAnthropic 推出的强力命令行 AI 代理。\n```bash\n# 需要先安装 Node.js\nnpm install -g @anthropic-ai\u002Fclaude-code\n```\n\n#### 3. 开源替代方案\n如果你偏好完全开源的工具：\n```bash\n# 安装 OpenHands (原 OpenDevin)\ndocker run -it --pull=always \\\n    -e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev\u002Fall-hands-ai\u002Fruntime:0.2-nikolaik \\\n    -e LOG_LEVEL=DEBUG \\\n    -v \u002Fvar\u002Frun\u002Fdocker.sock:\u002Fvar\u002Frun\u002Fdocker.sock \\\n    -p 3000:3000 \\\n    --add-host host.docker.internal:host-gateway \\\n    docker.all-hands.dev\u002Fall-hands-ai\u002Fopenhands:0.2\n```\n*(注：运行后在浏览器访问 `http:\u002F\u002Flocalhost:3000`)*\n\n## 基本使用\n\n### 1. 核心工作流：从构思到实现\n不要试图用一个提示词生成整个应用。遵循以下步骤：\n\n1.  
**需求规划 (PRD)**：先让 AI 帮你撰写产品需求文档。\n2.  **任务拆解**：将大项目拆分为小的、可执行的任务。\n3.  **分步执行**：逐个任务让 AI 编写代码。\n4.  **审查与测试**：人工审查代码，运行测试，修复错误。\n\n### 2. 高效提示词示例 (Prompting)\n\n在 Cursor 或 Claude Code 中，创建一个 `project_plan.md` 文件，并使用以下策略与 AI 交互：\n\n**第一步：生成需求文档 (PRD)**\n在对话框中输入：\n```text\nYou're a senior software engineer. We're going to build the PRD of a project together.\n\nVERY IMPORTANT:\n- Ask one question at a time\n- Each question should be focused on clarifying requirements\n- Do not generate code yet, only ask questions to understand the scope\n- Once we have enough info, summarize the PRD in Markdown format\n\nProject idea: [在此处简述你的想法，例如：一个宠物店管理小程序]\n```\n\n**第二步：基于 PRD 编写代码**\n当 PRD 生成完毕后，继续输入：\n```text\nGreat. Now let's break this down into tasks. \nPlease create a list of subtasks required to build the MVP.\nFor each task, specify:\n1. The goal\n2. The tech stack (e.g., React, Python, SQLite)\n3. Expected output files\n\nAfter listing them, wait for my confirmation before starting the first task.\n```\n\n**第三步：执行具体任务**\n确认任务列表后，针对第一个任务发送：\n```text\nLet's start with Task 1: [任务名称].\nPlease implement the code following best practices. \nInclude comments and ensure error handling is in place.\n```\n\n### 3. 关键最佳实践\n\n*   **角色设定**：将 AI 视为“初级开发者”，你需要像审查下属代码一样审查它的输出。\n*   **格式规范**：使用 Markdown 或 XML 标签（如 `\u003Ccontext>`, `\u003Crules>`）来组织长提示词，提高 AI 理解力。\n*   **模型选择**：\n    *   **规划\u002F架构**：使用强模型（如 Claude Opus, GPT-4o）。\n    *   **代码实现**：可使用速度更快的模型（如 Claude Sonnet, Kimi, Minimax）。\n*   **保持批判性**：LLM 倾向于迎合用户（\"Yes Machine\"），务必对生成的代码进行逻辑验证和测试，不要盲目信任。\n\n### 4. 
进阶：氛围编程 (Vibe Coding)\n如果你希望 AI 全自动完成更多工作（风险较高，适合简单项目）：\n*   在 Cursor 中启用 **Agent Mode**。\n*   在 Claude Code 中使用 `--dangerously-skip-permissions` 标志（需谨慎）。\n*   直接描述最终效果，让 AI 自主决定文件结构和依赖安装。\n\n> **警告**：随着项目复杂度增加，请逐渐减少纯“氛围编程”的比例，转向“AI 副驾驶”模式，以确保代码的可维护性。","刚入行的全栈开发者小李想利用 AI 快速构建一个 SaaS 原型，但面对每周涌现的新模型、新协议（如 MCP）和各种“Vibe Coding”概念感到无从下手。\n\n### 没有 aicodeguide 时\n- **信息碎片化严重**：需要在 GitHub、YouTube、博客和 Discord 之间反复跳转，难以拼凑出完整的 AI 编程知识图谱。\n- **难以辨别真伪**：无法区分哪些是真正提升效率的实战技巧，哪些只是社区炒作的“噪音”，导致在无效工具上浪费时间。\n- **学习路径缺失**：作为新手不知道如何从传统编码思维过渡到“AI 协作者”或“AI 代理人”模式，缺乏系统性的入门指引。\n- **技术迭代焦虑**：面对日新月异的大模型和编辑器插件更新，担心自己掌握的知识瞬间过时，产生强烈的技术落伍感。\n\n### 使用 aicodeguide 后\n- **一站式知识聚合**：aicodeguide 将分散的工具、最佳实践和新协议整合在同一份指南中，让小李能快速获取经过筛选的核心资源。\n- **去伪存真明确方向**：指南批判性地指出了什么是关键能力、什么是营销噱头，帮助小李聚焦于真正重要的 AI 辅助编程技能。\n- **清晰的角色转型路径**：无论是想成为 AI 的副驾驶，还是想指挥 AI Agent 独立开发，aicodeguide 都提供了具体的操作路线图和心态建议。\n- **持续同步前沿动态**：依托社区共同维护的机制，aicodeguide 实时更新最新模型和工具链，让小李始终保持在技术浪潮的最前端。\n\naicodeguide 的核心价值在于它为混乱的 AI 编程生态提供了一张实时更新的导航图，让开发者能从信息焦虑中解脱，专注于创造本身。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fautomata_aicodeguide_7730b7d7.png","automata","Vilson Vieira","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fautomata_e49033e5.png","I create algorithms that create.","Lead ML Engineer","London","vilson@void.cc","aut0mata","https:\u002F\u002Fvoid.cc","https:\u002F\u002Fgithub.com\u002Fautomata",null,2440,239,"2026-04-06T04:20:51",1,"",{"notes":31,"python":29,"dependencies":32},"该工具（aicodeguide）并非一个需要本地安装运行的软件程序，而是一份关于如何使用 AI 进行编程（AI Coding\u002FVibe Coding\u002FAgentic Coding）的指南文档。它主要推荐用户使用在线服务（如 Bolt, Replit, v0）或现有的商业\u002F开源 IDE 插件（如 Cursor, Claude Code, Windsurf, OpenHands）。因此，本指南本身没有特定的操作系统、GPU、内存、Python 
版本或依赖库要求。具体的环境需求取决于用户选择使用的上述第三方工具或模型服务商。",[],[34,35,36,37],"Agent","语言模型","开发框架","图像",[39,40,41,42,43,44,45],"ai","aicode","aicoding","llm","vibe-coding","course","roadmap",2,"ready","2026-03-27T02:49:30.150509","2026-04-06T19:00:50.629963",[],[],[53,63,71,79,87,96],{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":59,"last_commit_at":60,"category_tags":61,"status":47},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[34,36,37,62],"数据工具",{"id":64,"name":65,"github_repo":66,"description_zh":67,"stars":68,"difficulty_score":59,"last_commit_at":69,"category_tags":70,"status":47},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[36,37,34],{"id":72,"name":73,"github_repo":74,"description_zh":75,"stars":76,"difficulty_score":46,"last_commit_at":77,"category_tags":78,"status":47},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 
等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,"2026-04-05T23:32:43",[36,34,35],{"id":80,"name":81,"github_repo":82,"description_zh":83,"stars":84,"difficulty_score":46,"last_commit_at":85,"category_tags":86,"status":47},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[36,37,34],{"id":88,"name":89,"github_repo":90,"description_zh":91,"stars":92,"difficulty_score":59,"last_commit_at":93,"category_tags":94,"status":47},4292,"Deep-Live-Cam","hacksider\u002FDeep-Live-Cam","Deep-Live-Cam 是一款专注于实时换脸与视频生成的开源工具，用户仅需一张静态照片，即可通过“一键操作”实现摄像头画面的即时变脸或制作深度伪造视频。它有效解决了传统换脸技术流程繁琐、对硬件配置要求极高以及难以实时预览的痛点，让高质量的数字内容创作变得触手可及。\n\n这款工具不仅适合开发者和技术研究人员探索算法边界，更因其极简的操作逻辑（仅需三步：选脸、选摄像头、启动），广泛适用于普通用户、内容创作者、设计师及直播主播。无论是为了动画角色定制、服装展示模特替换，还是制作趣味短视频和直播互动，Deep-Live-Cam 都能提供流畅的支持。\n\n其核心技术亮点在于强大的实时处理能力，支持口型遮罩（Mouth 
Mask）以保留使用者原始的嘴部动作，确保表情自然精准；同时具备“人脸映射”功能，可同时对画面中的多个主体应用不同面孔。此外，项目内置了严格的内容安全过滤机制，自动拦截涉及裸露、暴力等不当素材，并倡导用户在获得授权及明确标注的前提下合规使用，体现了技术发展与伦理责任的平衡。",88924,"2026-04-06T03:28:53",[36,37,34,95],"视频",{"id":97,"name":98,"github_repo":99,"description_zh":100,"stars":101,"difficulty_score":46,"last_commit_at":102,"category_tags":103,"status":47},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[36,35]]