[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"tool-zscole--adversarial-spec":3,"similar-zscole--adversarial-spec":90},{"id":4,"github_repo":5,"name":6,"description_en":7,"description_zh":8,"ai_summary_zh":8,"readme_en":9,"readme_zh":10,"quickstart_zh":11,"use_case_zh":12,"hero_image_url":13,"owner_login":14,"owner_name":15,"owner_avatar_url":16,"owner_bio":17,"owner_company":18,"owner_location":19,"owner_email":20,"owner_twitter":21,"owner_website":22,"owner_url":23,"languages":24,"stars":29,"forks":30,"last_commit_at":31,"license":32,"difficulty_score":33,"env_os":34,"env_gpu":35,"env_ram":35,"env_deps":36,"category_tags":41,"github_topics":46,"view_count":54,"oss_zip_url":19,"oss_zip_packed_at":19,"status":55,"created_at":56,"updated_at":57,"faqs":58,"releases":89},1147,"zscole\u002Fadversarial-spec","adversarial-spec","A Claude Code plugin that iteratively refines product specifications by debating between multiple LLMs until all models reach consensus.","adversarial-spec通过多模型协作的方式优化产品需求文档。当用户输入产品概念后，工具会调用多个大模型并行进行辩论式审核，不断修正文档中的漏洞和假设偏差，最终生成经过严格验证的版本。这种机制能有效避免单一模型的盲区，发现潜在的边缘场景，特别适合需要高可靠性的技术文档场景。开发者和产品经理可借此提升需求文档的严谨性，研究人员能验证方案的全面性，而普通用户则能获得更清晰的产品说明。工具支持接入多种大模型服务，通过持续迭代的对抗性审查，确保输出结果经得起多角度推敲。","# adversarial-spec\n\nA Claude Code plugin that iteratively refines product specifications through multi-model debate until consensus is reached.\n\n**Key insight:** A single LLM reviewing a spec will miss things. Multiple LLMs debating a spec will catch gaps, challenge assumptions, and surface edge cases that any one model would overlook. The result is a document that has survived rigorous adversarial review.\n\n**Claude is an active participant**, not just an orchestrator. Claude provides independent critiques, challenges opponent models, and contributes substantive improvements alongside external models.\n\n## Quick Start\n\n```bash\n# 1. 
Add the marketplace and install the plugin\nclaude plugin marketplace add zscole\u002Fadversarial-spec\nclaude plugin install adversarial-spec\n\n# 2. Set at least one API key\nexport OPENAI_API_KEY=\"sk-...\"\n# Or use OpenRouter for access to multiple providers with one key\nexport OPENROUTER_API_KEY=\"sk-or-...\"\n\n# 3. Run it\n\u002Fadversarial-spec \"Build a rate limiter service with Redis backend\"\n```\n\n## How It Works\n\n```\nYou describe product --> Claude drafts spec --> Multiple LLMs critique in parallel\n        |                                              |\n        |                                              v\n        |                              Claude synthesizes + adds own critique\n        |                                              |\n        |                                              v\n        |                              Revise and repeat until ALL agree\n        |                                              |\n        +--------------------------------------------->|\n                                                       v\n                                            User review period\n                                                       |\n                                                       v\n                                            Final document output\n```\n\n1. Describe your product concept or provide an existing document\n2. (Optional) Start with an in-depth interview to capture requirements\n3. Claude drafts the initial document (PRD or tech spec)\n4. Document is sent to opponent models (GPT, Gemini, Grok, etc.) for parallel critique\n5. Claude provides independent critique alongside opponent feedback\n6. Claude synthesizes all feedback and revises\n7. Loop continues until ALL models AND Claude agree\n8. User review period: request changes or run additional cycles\n9. 
Final converged document is output\n\n## Requirements\n\n- Python 3.10+\n- `litellm` package: `pip install litellm`\n- API key for at least one LLM provider\n\n## Supported Models\n\n| Provider   | Env Var                | Example Models                               |\n|------------|------------------------|----------------------------------------------|\n| OpenAI     | `OPENAI_API_KEY`       | `gpt-4o`, `gpt-4-turbo`, `o1`                |\n| Anthropic  | `ANTHROPIC_API_KEY`    | `claude-sonnet-4-20250514`, `claude-opus-4-20250514` |\n| Google     | `GEMINI_API_KEY`       | `gemini\u002Fgemini-2.0-flash`, `gemini\u002Fgemini-pro` |\n| xAI        | `XAI_API_KEY`          | `xai\u002Fgrok-3`, `xai\u002Fgrok-beta`                |\n| Mistral    | `MISTRAL_API_KEY`      | `mistral\u002Fmistral-large`, `mistral\u002Fcodestral` |\n| Groq       | `GROQ_API_KEY`         | `groq\u002Fllama-3.3-70b-versatile`               |\n| OpenRouter | `OPENROUTER_API_KEY`   | `openrouter\u002Fopenai\u002Fgpt-4o`, `openrouter\u002Fanthropic\u002Fclaude-3.5-sonnet` |\n| Codex CLI  | ChatGPT subscription   | `codex\u002Fgpt-5.2-codex`, `codex\u002Fgpt-5.1-codex-max` |\n| Gemini CLI | Google account         | `gemini-cli\u002Fgemini-3-pro-preview`, `gemini-cli\u002Fgemini-3-flash-preview` |\n| Deepseek   | `DEEPSEEK_API_KEY`     | `deepseek\u002Fdeepseek-chat`                     |\n| Zhipu      | `ZHIPUAI_API_KEY`      | `zhipu\u002Fglm-4`, `zhipu\u002Fglm-4-plus`            |\n\nCheck which keys are configured:\n\n```bash\npython3 \"$(find ~\u002F.claude -name debate.py -path '*adversarial-spec*' 2>\u002Fdev\u002Fnull | head -1)\" providers\n```\n\n## AWS Bedrock Support\n\nFor enterprise users who need to route all model calls through AWS Bedrock (e.g., for security compliance or inference gateway requirements):\n\n```bash\n# Enable Bedrock mode\npython3 \"$(find ~\u002F.claude -name debate.py -path '*adversarial-spec*' 2>\u002Fdev\u002Fnull | head -1)\" bedrock enable --region 
us-east-1\n\n# Add models enabled in your Bedrock account\npython3 \"$(find ~\u002F.claude -name debate.py -path '*adversarial-spec*' 2>\u002Fdev\u002Fnull | head -1)\" bedrock add-model claude-3-sonnet\npython3 \"$(find ~\u002F.claude -name debate.py -path '*adversarial-spec*' 2>\u002Fdev\u002Fnull | head -1)\" bedrock add-model claude-3-haiku\n\n# Check configuration\npython3 \"$(find ~\u002F.claude -name debate.py -path '*adversarial-spec*' 2>\u002Fdev\u002Fnull | head -1)\" bedrock status\n\n# Disable Bedrock mode\npython3 \"$(find ~\u002F.claude -name debate.py -path '*adversarial-spec*' 2>\u002Fdev\u002Fnull | head -1)\" bedrock disable\n```\n\nWhen Bedrock is enabled, **all model calls route through Bedrock** - no direct API calls are made. Use friendly names like `claude-3-sonnet` which are automatically mapped to Bedrock model IDs.\n\nConfiguration is stored at `~\u002F.claude\u002Fadversarial-spec\u002Fconfig.json`.\n\n## OpenRouter Support\n\n[OpenRouter](https:\u002F\u002Fopenrouter.ai) provides unified access to multiple LLM providers through a single API. 
This is useful for:\n- Accessing models from multiple providers with one API key\n- Comparing models across different providers\n- Automatic fallback and load balancing\n- Cost optimization across providers\n\n**Setup:**\n\n```bash\n# Get your API key from https:\u002F\u002Fopenrouter.ai\u002Fkeys\nexport OPENROUTER_API_KEY=\"sk-or-...\"\n\n# Use OpenRouter models (prefix with openrouter\u002F)\npython3 debate.py critique --models openrouter\u002Fopenai\u002Fgpt-4o,openrouter\u002Fanthropic\u002Fclaude-3.5-sonnet \u003C spec.md\n```\n\n**Popular OpenRouter models:**\n- `openrouter\u002Fopenai\u002Fgpt-4o` - GPT-4o via OpenRouter\n- `openrouter\u002Fanthropic\u002Fclaude-3.5-sonnet` - Claude 3.5 Sonnet\n- `openrouter\u002Fgoogle\u002Fgemini-2.0-flash` - Gemini 2.0 Flash\n- `openrouter\u002Fmeta-llama\u002Fllama-3.3-70b-instruct` - Llama 3.3 70B\n- `openrouter\u002Fqwen\u002Fqwen-2.5-72b-instruct` - Qwen 2.5 72B\n\nSee the full model list at [openrouter.ai\u002Fmodels](https:\u002F\u002Fopenrouter.ai\u002Fmodels).\n\n## Codex CLI Support\n\n[Codex CLI](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fcodex) allows ChatGPT Pro subscribers to use OpenAI models without separate API credits. 
Models prefixed with `codex\u002F` are routed through the Codex CLI.\n\n**Setup:**\n\n```bash\n# Install Codex CLI (requires ChatGPT Pro subscription)\nnpm install -g @openai\u002Fcodex\n\n# Use Codex models (prefix with codex\u002F)\npython3 debate.py critique --models codex\u002Fgpt-5.2-codex,gemini\u002Fgemini-2.0-flash \u003C spec.md\n```\n\n**Reasoning effort:**\n\nControl how much thinking time the model uses with `--codex-reasoning`:\n\n```bash\n# Available levels: low, medium, high, xhigh (default: xhigh)\npython3 debate.py critique --models codex\u002Fgpt-5.2-codex --codex-reasoning high \u003C spec.md\n```\n\nHigher reasoning effort produces more thorough analysis but uses more tokens.\n\n**Available Codex models:**\n- `codex\u002Fgpt-5.2-codex` - GPT-5.2 via Codex CLI\n- `codex\u002Fgpt-5.1-codex-max` - GPT-5.1 Max via Codex CLI\n\nCheck Codex CLI installation status:\n\n```bash\npython3 \"$(find ~\u002F.claude -name debate.py -path '*adversarial-spec*' 2>\u002Fdev\u002Fnull | head -1)\" providers\n```\n\n## Gemini CLI Support\n\n[Gemini CLI](https:\u002F\u002Fgithub.com\u002Fgoogle-gemini\u002Fgemini-cli) allows Google account holders to use Gemini models without separate API credits. 
Models prefixed with `gemini-cli\u002F` are routed through the Gemini CLI.\n\n**Setup:**\n\n```bash\n# Install Gemini CLI\nnpm install -g @google\u002Fgemini-cli && gemini auth\n\n# Use Gemini CLI models (prefix with gemini-cli\u002F)\npython3 debate.py critique --models gemini-cli\u002Fgemini-3-pro-preview \u003C spec.md\n```\n\n**Available Gemini CLI models:**\n- `gemini-cli\u002Fgemini-3-pro-preview` - Gemini 3 Pro via CLI\n- `gemini-cli\u002Fgemini-3-flash-preview` - Gemini 3 Flash via CLI\n\nCheck Gemini CLI installation status:\n\n```bash\npython3 \"$(find ~\u002F.claude -name debate.py -path '*adversarial-spec*' 2>\u002Fdev\u002Fnull | head -1)\" providers\n```\n\n## OpenAI-Compatible Endpoints\n\nFor models that expose an OpenAI-compatible API (local LLMs, self-hosted models, alternative providers), set `OPENAI_API_BASE`:\n\n```bash\n# Point to a custom endpoint\nexport OPENAI_API_KEY=\"your-key\"\nexport OPENAI_API_BASE=\"https:\u002F\u002Fyour-endpoint.com\u002Fv1\"\n\n# Use with any model name\npython3 debate.py critique --models gpt-4o \u003C spec.md\n```\n\nThis works with:\n- Local LLM servers (Ollama, vLLM, text-generation-webui)\n- OpenAI-compatible providers\n- Self-hosted inference endpoints\n\n## Usage\n\n**Start from scratch:**\n\n```\n\u002Fadversarial-spec \"Build a rate limiter service with Redis backend\"\n```\n\n**Refine an existing document:**\n\n```\n\u002Fadversarial-spec .\u002Fdocs\u002Fmy-spec.md\n```\n\nYou will be prompted for:\n\n1. **Document type**: PRD (business\u002Fproduct focus) or tech spec (engineering focus)\n2. **Interview mode**: Optional in-depth requirements gathering session\n3. 
**Opponent models**: Comma-separated list (e.g., `gpt-4o,gemini\u002Fgemini-2.0-flash,xai\u002Fgrok-3`)\n\nMore models = more perspectives = stricter convergence.\n\n## Document Types\n\n### PRD (Product Requirements Document)\n\nFor stakeholders, PMs, and designers.\n\n**Sections:** Executive Summary, Problem Statement, Target Users\u002FPersonas, User Stories, Functional Requirements, Non-Functional Requirements, Success Metrics, Scope (In\u002FOut), Dependencies, Risks\n\n**Critique focuses on:** Clear problem definition, well-defined personas, measurable success criteria, explicit scope boundaries, no technical implementation details\n\n### Technical Specification\n\nFor developers and architects.\n\n**Sections:** Overview, Goals\u002FNon-Goals, System Architecture, Component Design, API Design (full schemas), Data Models, Infrastructure, Security, Error Handling, Performance\u002FSLAs, Observability, Testing Strategy, Deployment Strategy\n\n**Critique focuses on:** Complete API contracts, data model coverage, security threat mitigation, error handling, specific performance targets, no ambiguity for engineers\n\n## Core Features\n\n### Interview Mode\n\nBefore the debate begins, opt into an in-depth interview session to capture requirements upfront.\n\n**Covers:** Problem context, users\u002Fstakeholders, functional requirements, technical constraints, UI\u002FUX, tradeoffs, risks, success criteria\n\nThe interview uses probing follow-up questions and challenges assumptions. After completion, Claude synthesizes answers into a complete spec before starting the adversarial debate.\n\n### Claude's Active Participation\n\nEach round, Claude:\n\n1. Reviews opponent critiques for validity\n2. Provides independent critique (what did opponents miss?)\n3. States agreement\u002Fdisagreement with specific points\n4. 
Synthesizes all feedback into revisions\n\nDisplay format:\n\n```\n--- Round N ---\nOpponent Models:\n- [GPT-4o]: critiqued: missing rate limit config\n- [Gemini]: agreed\n\nClaude's Critique:\nSecurity section lacks input validation strategy. Adding OWASP top 10 coverage.\n\nSynthesis:\n- Accepted from GPT-4o: rate limit configuration\n- Added by Claude: input validation, OWASP coverage\n- Rejected: none\n```\n\n### Early Agreement Verification\n\nIf a model agrees within the first 2 rounds, Claude is skeptical. The model is pressed to:\n\n- Confirm it read the entire document\n- List specific sections reviewed\n- Explain why it agrees\n- Identify any remaining concerns\n\nThis prevents false convergence from models that rubber-stamp without thorough review.\n\n### User Review Period\n\nAfter all models agree, you enter a review period with three options:\n\n1. **Accept as-is**: Document is complete\n2. **Request changes**: Claude updates the spec, you iterate without a full debate cycle\n3. **Run another cycle**: Send the updated spec through another adversarial debate\n\n### Additional Review Cycles\n\nRun multiple cycles with different strategies:\n\n- First cycle with fast models (gpt-4o), second with stronger models (o1)\n- First cycle for structure\u002Fcompleteness, second for security focus\n- Fresh perspective after user-requested changes\n\n### PRD to Tech Spec Flow\n\nWhen a PRD reaches consensus, you're offered the option to continue directly into a Technical Specification based on the PRD. 
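The continuation can also be driven manually from the CLI as two back-to-back cycles. A hedged sketch, not the plugin's literal behavior: it assumes you already have a draft of each document, and uses the repeatable `--context` flag (see Context Injection) to carry the converged PRD into the tech-spec debate; the draft file names are illustrative, while `spec-output.md` is the PRD output location this README documents:\n\n```bash\n# Cycle 1: converge on a draft PRD\npython3 debate.py critique --models gpt-4o,gemini\u002Fgemini-2.0-flash --doc-type prd \u003C draft-prd.md\n\n# Cycle 2: debate the tech spec, with the converged PRD injected as context\npython3 debate.py critique --models gpt-4o,gemini\u002Fgemini-2.0-flash --doc-type tech --context .\u002Fspec-output.md \u003C draft-tech-spec.md\n```\n\n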
This creates a complete documentation pair in a single session.\n\n## Advanced Features\n\n### Critique Focus Modes\n\nDirect models to prioritize specific concerns:\n\n```bash\n--focus security      # Auth, input validation, encryption, vulnerabilities\n--focus scalability   # Horizontal scaling, sharding, caching, capacity\n--focus performance   # Latency targets, throughput, query optimization\n--focus ux            # User journeys, error states, accessibility\n--focus reliability   # Failure modes, circuit breakers, disaster recovery\n--focus cost          # Infrastructure costs, resource efficiency\n```\n\n### Model Personas\n\nHave models critique from specific professional perspectives:\n\n```bash\n--persona security-engineer      # Thinks like an attacker\n--persona oncall-engineer        # Cares about debugging at 3am\n--persona junior-developer       # Flags ambiguity and tribal knowledge\n--persona qa-engineer            # Missing test scenarios\n--persona site-reliability       # Deployment, monitoring, incidents\n--persona product-manager        # User value, success metrics\n--persona data-engineer          # Data models, ETL implications\n--persona mobile-developer       # API design for mobile\n--persona accessibility-specialist  # WCAG, screen readers\n--persona legal-compliance       # GDPR, CCPA, regulatory\n```\n\nCustom personas also work: `--persona \"fintech compliance officer\"`\n\n### Context Injection\n\nInclude existing documents for models to consider:\n\n```bash\n--context .\u002Fexisting-api.md --context .\u002Fschema.sql\n```\n\nUse cases:\n- Existing API documentation the new spec must integrate with\n- Database schemas the spec must work with\n- Design documents or prior specs for consistency\n- Compliance requirements documents\n\n### Session Persistence and Resume\n\nLong debates can crash or need to pause. 
Sessions save state automatically:\n\n```bash\n# Start a named session\necho \"spec\" | python3 debate.py critique --models gpt-4o --session my-feature-spec\n\n# Resume where you left off\npython3 debate.py critique --resume my-feature-spec\n\n# List all sessions\npython3 debate.py sessions\n```\n\nSessions save:\n- Current spec state\n- Round number\n- All configuration (models, focus, persona, etc.)\n- History of previous rounds\n\nSessions are stored in `~\u002F.config\u002Fadversarial-spec\u002Fsessions\u002F`.\n\n### Auto-Checkpointing\n\nWhen using sessions, each round's spec is saved to `.adversarial-spec-checkpoints\u002F`:\n\n```\n.adversarial-spec-checkpoints\u002F\n├── my-feature-spec-round-1.md\n├── my-feature-spec-round-2.md\n└── my-feature-spec-round-3.md\n```\n\nUse these to roll back if a revision makes things worse.\n\n### Preserve Intent Mode\n\nConvergence can sand off novel ideas when models interpret \"unusual\" as \"wrong\". The `--preserve-intent` flag makes removal expensive:\n\n```bash\n--preserve-intent\n```\n\nWhen enabled, models must:\n\n1. **Quote exactly** what they want to remove or substantially change\n2. **Justify the harm** - not just \"unnecessary\" but what concrete problem it causes\n3. **Distinguish error from preference** - only remove things that are factually wrong, contradictory, or risky\n4. 
**Ask before removing** unusual but functional choices: \"Was this intentional?\"\n\nThis shifts the default from \"sand off anything unusual\" to \"add protective detail while preserving distinctive choices.\"\n\nUse when:\n- Your spec contains intentional unconventional choices\n- You want models to challenge your ideas, not homogenize them\n- Previous rounds removed things you wanted to keep\n\n### Cost Tracking\n\nEvery critique round displays token usage and estimated cost:\n\n```\n=== Cost Summary ===\nTotal tokens: 12,543 in \u002F 3,221 out\nTotal cost: $0.0847\n\nBy model:\n  gpt-4o: $0.0523 (8,234 in \u002F 2,100 out)\n  gemini\u002Fgemini-2.0-flash: $0.0324 (4,309 in \u002F 1,121 out)\n```\n\n### Saved Profiles\n\nSave frequently used configurations:\n\n```bash\n# Create a profile\npython3 debate.py save-profile strict-security \\\n  --models gpt-4o,gemini\u002Fgemini-2.0-flash \\\n  --focus security \\\n  --doc-type tech\n\n# Use a profile\npython3 debate.py critique --profile strict-security \u003C spec.md\n\n# List profiles\npython3 debate.py profiles\n```\n\nProfiles are stored in `~\u002F.config\u002Fadversarial-spec\u002Fprofiles\u002F`.\n\n### Diff Between Rounds\n\nSee exactly what changed between spec versions:\n\n```bash\npython3 debate.py diff --previous round1.md --current round2.md\n```\n\n### Export to Task List\n\nExtract actionable tasks from a finalized spec:\n\n```bash\ncat spec-output.md | python3 debate.py export-tasks --models gpt-4o --doc-type prd\n```\n\nOutput includes title, type, priority, description, and acceptance criteria.\n\nUse `--json` for structured output suitable for importing into issue trackers.\n\n## Telegram Integration (Optional)\n\nGet notified on your phone and inject feedback during the debate.\n\n**Setup:**\n\n1. Message @BotFather on Telegram, send `\u002Fnewbot`, follow prompts\n2. Copy the bot token\n3. 
Run: `python3 \"$(find ~\u002F.claude -name telegram_bot.py -path '*adversarial-spec*' 2>\u002Fdev\u002Fnull | head -1)\" setup`\n4. Message your bot, run setup again to get your chat ID\n5. Set environment variables:\n\n```bash\nexport TELEGRAM_BOT_TOKEN=\"...\"\nexport TELEGRAM_CHAT_ID=\"...\"\n```\n\n**Features:**\n\n- Async notifications when rounds complete (includes cost)\n- 60-second window to reply with feedback (incorporated into next round)\n- Final document sent to Telegram when debate concludes\n\n## Output\n\nFinal document is:\n\n- Complete, following full structure for document type\n- Vetted by all models until unanimous agreement\n- Ready for stakeholders without further editing\n\nOutput locations:\n\n- Printed to terminal\n- Written to `spec-output.md` (PRD) or `tech-spec-output.md` (tech spec)\n- Sent to Telegram (if enabled)\n\nDebate summary includes rounds completed, cycles run, models involved, Claude's contributions, cost, and key refinements made.\n\n## CLI Reference\n\n```bash\n# Core commands\ndebate.py critique --models MODEL_LIST --doc-type TYPE [OPTIONS] \u003C spec.md\ndebate.py critique --resume SESSION_ID\ndebate.py diff --previous OLD.md --current NEW.md\ndebate.py export-tasks --models MODEL --doc-type TYPE [--json] \u003C spec.md\n\n# Info commands\ndebate.py providers      # List providers and API key status\ndebate.py focus-areas    # List focus areas\ndebate.py personas       # List personas\ndebate.py profiles       # List saved profiles\ndebate.py sessions       # List saved sessions\n\n# Profile management\ndebate.py save-profile NAME --models ... [--focus ...] 
[--persona ...]\n\n# Bedrock management\ndebate.py bedrock status                      # Show Bedrock configuration\ndebate.py bedrock enable --region REGION      # Enable Bedrock mode\ndebate.py bedrock disable                     # Disable Bedrock mode\ndebate.py bedrock add-model MODEL             # Add model to available list\ndebate.py bedrock remove-model MODEL          # Remove model from list\ndebate.py bedrock list-models                 # List built-in model mappings\n```\n\n**Options:**\n- `--models, -m` - Comma-separated model list (auto-detects from available API keys if not specified)\n- `--doc-type, -d` - prd or tech\n- `--codex-reasoning` - Reasoning effort for Codex models (low, medium, high, xhigh; default: xhigh)\n- `--focus, -f` - Focus area (security, scalability, performance, ux, reliability, cost)\n- `--persona` - Professional persona\n- `--context, -c` - Context file (repeatable)\n- `--profile` - Load saved profile\n- `--preserve-intent` - Require justification for removals\n- `--session, -s` - Session ID for persistence and checkpointing\n- `--resume` - Resume a previous session\n- `--press, -p` - Anti-laziness check\n- `--telegram, -t` - Enable Telegram\n- `--json, -j` - JSON output\n\n## File Structure\n\n```\nadversarial-spec\u002F\n├── .claude-plugin\u002F\n│   └── plugin.json           # Plugin metadata\n├── README.md\n├── LICENSE\n└── skills\u002F\n    └── adversarial-spec\u002F\n        ├── SKILL.md          # Skill definition and process\n        └── scripts\u002F\n            ├── debate.py     # Multi-model debate orchestration\n            └── telegram_bot.py   # Telegram notifications\n```\n\n## License\n\nMIT\n","# adversarial-spec\n\n一个 Claude Code 插件，通过多模型辩论迭代优化产品规格说明书，直到达成共识。\n\n**核心洞察：** 单一的 LLM 审查规格说明书时会遗漏一些内容。而多个 LLM 对规格说明书进行辩论，则能够发现漏洞、挑战假设，并揭示任何单一模型都可能忽视的边缘情况。最终生成的文档经过了严格的对抗性审查。\n\n**Claude 是积极参与者**，而不仅仅是协调者。Claude 会提供独立的批评意见，挑战对手模型，并与外部模型一起提出实质性的改进建议。\n\n## 快速入门\n\n```bash\n# 1. 
添加市场并安装插件\nclaude plugin marketplace add zscole\u002Fadversarial-spec\nclaude plugin install adversarial-spec\n\n# 2. 设置至少一个 API 密钥\nexport OPENAI_API_KEY=\"sk-...\"\n# 或使用 OpenRouter，只需一个密钥即可访问多家供应商\nexport OPENROUTER_API_KEY=\"sk-or-...\"\n\n# 3. 运行它\n\u002Fadversarial-spec \"构建一个基于 Redis 后端的限流服务\"\n```\n\n## 工作原理\n\n```\n您描述产品 --> Claude 起草规格说明书 --> 多个 LLM 并行评审\n        |                                              |\n        |                                              v\n        |                              Claude 综合各方意见并加入自己的评论\n        |                                              |\n        |                                              v\n        |                              修改并重复，直到所有模型一致同意\n        |                                              |\n        +--------------------------------------------->|\n                                                       v\n                                            用户审阅期\n                                                       |\n                                                       v\n                                            最终文档输出\n```\n\n1. 描述您的产品概念或提供现有文档。\n2. （可选）从深入访谈开始，以捕捉需求。\n3. Claude 起草初始文档（PRD 或技术规格说明书）。\n4. 文档被发送给对手模型（GPT、Gemini、Grok 等）进行并行评审。\n5. Claude 在对手反馈的基础上提供独立的评论。\n6. Claude 综合所有反馈并进行修订。\n7. 循环持续，直到所有模型和 Claude 都达成一致。\n8. 用户审阅期：请求修改或再运行几轮。\n9. 
最终收敛的文档会被输出。\n\n## 系统要求\n\n- Python 3.10+\n- `litellm` 包：`pip install litellm`\n- 至少一家 LLM 提供商的 API 密钥。\n\n## 支持的模型\n\n| 供应商   | 环境变量                | 示例模型                               |\n|------------|------------------------|----------------------------------------------|\n| OpenAI     | `OPENAI_API_KEY`       | `gpt-4o`, `gpt-4-turbo`, `o1`                |\n| Anthropic  | `ANTHROPIC_API_KEY`    | `claude-sonnet-4-20250514`, `claude-opus-4-20250514` |\n| Google     | `GEMINI_API_KEY`       | `gemini\u002Fgemini-2.0-flash`, `gemini\u002Fgemini-pro` |\n| xAI        | `XAI_API_KEY`          | `xai\u002Fgrok-3`, `xai\u002Fgrok-beta`                |\n| Mistral    | `MISTRAL_API_KEY`      | `mistral\u002Fmistral-large`, `mistral\u002Fcodestral` |\n| Groq       | `GROQ_API_KEY`         | `groq\u002Fllama-3.3-70b-versatile`               |\n| OpenRouter | `OPENROUTER_API_KEY`   | `openrouter\u002Fopenai\u002Fgpt-4o`, `openrouter\u002Fanthropic\u002Fclaude-3.5-sonnet` |\n| Codex CLI  | ChatGPT 订阅           | `codex\u002Fgpt-5.2-codex`, `codex\u002Fgpt-5.1-codex-max` |\n| Gemini CLI | Google 账户            | `gemini-cli\u002Fgemini-3-pro-preview`, `gemini-cli\u002Fgemini-3-flash-preview` |\n| Deepseek   | `DEEPSEEK_API_KEY`     | `deepseek\u002Fdeepseek-chat`                     |\n| Zhipu      | `ZHIPUAI_API_KEY`      | `zhipu\u002Fglm-4`, `zhipu\u002Fglm-4-plus`            |\n\n检查已配置的密钥：\n\n```bash\npython3 \"$(find ~\u002F.claude -name debate.py -path '*adversarial-spec*' 2>\u002Fdev\u002Fnull | head -1)\" providers\n```\n\n## AWS Bedrock 支持\n\n对于需要将所有模型调用路由到 AWS Bedrock 的企业用户（例如出于安全合规或推理网关的要求）：\n\n```bash\n# 启用 Bedrock 模式\npython3 \"$(find ~\u002F.claude -name debate.py -path '*adversarial-spec*' 2>\u002Fdev\u002Fnull | head -1)\" bedrock enable --region us-east-1\n\n# 添加您在 Bedrock 账户中启用的模型\npython3 \"$(find ~\u002F.claude -name debate.py -path '*adversarial-spec*' 2>\u002Fdev\u002Fnull | head -1)\" bedrock add-model claude-3-sonnet\npython3 \"$(find ~\u002F.claude 
-name debate.py -path '*adversarial-spec*' 2>\u002Fdev\u002Fnull | head -1)\" bedrock add-model claude-3-haiku\n\n# 检查配置\npython3 \"$(find ~\u002F.claude -name debate.py -path '*adversarial-spec*' 2>\u002Fdev\u002Fnull | head -1)\" bedrock status\n\n# 禁用 Bedrock 模式\npython3 \"$(find ~\u002F.claude -name debate.py -path '*adversarial-spec*' 2>\u002Fdev\u002Fnull | head -1)\" bedrock disable\n```\n\n当 Bedrock 启用时，**所有模型调用都会通过 Bedrock 路由**——不会直接进行 API 调用。您可以使用友好的名称，如 `claude-3-sonnet`，这些名称会自动映射到 Bedrock 的模型 ID。\n\n配置存储在 `~\u002F.claude\u002Fadversarial-spec\u002Fconfig.json` 中。\n\n## OpenRouter 支持\n\n[OpenRouter](https:\u002F\u002Fopenrouter.ai) 提供了一个统一的 API，可以访问多家 LLM 提供商的服务。这对于以下情况非常有用：\n- 使用一个 API 密钥即可访问多家提供商的模型。\n- 比较不同提供商的模型。\n- 自动回退和负载均衡。\n- 在不同提供商之间优化成本。\n\n**设置：**\n\n```bash\n# 从 https:\u002F\u002Fopenrouter.ai\u002Fkeys 获取您的 API 密钥\nexport OPENROUTER_API_KEY=\"sk-or-...\"\n\n# 使用 OpenRouter 模型（前缀为 openrouter\u002F）\npython3 debate.py critique --models openrouter\u002Fopenai\u002Fgpt-4o,openrouter\u002Fanthropic\u002Fclaude-3.5-sonnet \u003C spec.md\n```\n\n**流行的 OpenRouter 模型：**\n- `openrouter\u002Fopenai\u002Fgpt-4o` —— 通过 OpenRouter 使用 GPT-4o。\n- `openrouter\u002Fanthropic\u002Fclaude-3.5-sonnet` —— Claude 3.5 Sonnet。\n- `openrouter\u002Fgoogle\u002Fgemini-2.0-flash` —— Gemini 2.0 Flash。\n- `openrouter\u002Fmeta-llama\u002Fllama-3.3-70b-instruct` —— Llama 3.3 70B。\n- `openrouter\u002Fqwen\u002Fqwen-2.5-72b-instruct` —— Qwen 2.5 72B。\n\n完整的模型列表请参见 [openrouter.ai\u002Fmodels](https:\u002F\u002Fopenrouter.ai\u002Fmodels)。\n\n## Codex CLI 支持\n\n[Codex CLI](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fcodex) 允许 ChatGPT Pro 订阅者无需单独的 API 积分即可使用 OpenAI 的模型。前缀为 `codex\u002F` 的模型会通过 Codex CLI 路由。\n\n**设置：**\n\n```bash\n# 安装 Codex CLI（需要 ChatGPT Pro 订阅）\nnpm install -g @openai\u002Fcodex\n\n# 使用 Codex 模型（前缀为 codex\u002F）\npython3 debate.py critique --models codex\u002Fgpt-5.2-codex,gemini\u002Fgemini-2.0-flash \u003C spec.md\n```\n\n**推理时间控制：**\n\n通过 
`--codex-reasoning` 参数来控制模型使用的思考时间：\n\n```bash\n# 可用级别：low、medium、high、xhigh（默认：xhigh）\npython3 debate.py critique --models codex\u002Fgpt-5.2-codex --codex-reasoning high \u003C spec.md\n```\n\n更高的推理力度会产生更全面的分析，但会消耗更多的 token。\n\n**可用的 Codex 模型：**\n- `codex\u002Fgpt-5.2-codex` - 通过 Codex CLI 的 GPT-5.2\n- `codex\u002Fgpt-5.1-codex-max` - 通过 Codex CLI 的 GPT-5.1 Max\n\n检查 Codex CLI 的安装状态：\n\n```bash\npython3 \"$(find ~\u002F.claude -name debate.py -path '*adversarial-spec*' 2>\u002Fdev\u002Fnull | head -1)\" providers\n```\n\n## Gemini CLI 支持\n\n[Gemini CLI](https:\u002F\u002Fgithub.com\u002Fgoogle-gemini\u002Fgemini-cli) 允许 Google 账户持有人在无需单独 API 额度的情况下使用 Gemini 模型。前缀为 `gemini-cli\u002F` 的模型将通过 Gemini CLI 路由。\n\n**设置：**\n\n```bash\n# 安装 Gemini CLI\nnpm install -g @google\u002Fgemini-cli && gemini auth\n\n# 使用 Gemini CLI 模型（需加 gemini-cli\u002F 前缀）\npython3 debate.py critique --models gemini-cli\u002Fgemini-3-pro-preview \u003C spec.md\n```\n\n**可用的 Gemini CLI 模型：**\n- `gemini-cli\u002Fgemini-3-pro-preview` - 通过 CLI 的 Gemini 3 Pro\n- `gemini-cli\u002Fgemini-3-flash-preview` - 通过 CLI 的 Gemini 3 Flash\n\n检查 Gemini CLI 的安装状态：\n\n```bash\npython3 \"$(find ~\u002F.claude -name debate.py -path '*adversarial-spec*' 2>\u002Fdev\u002Fnull | head -1)\" providers\n```\n\n## OpenAI 兼容端点\n\n对于暴露 OpenAI 兼容 API 的模型（本地 LLM、自托管模型、替代提供商），请设置 `OPENAI_API_BASE`：\n\n```bash\n# 指向自定义端点\nexport OPENAI_API_KEY=\"your-key\"\nexport OPENAI_API_BASE=\"https:\u002F\u002Fyour-endpoint.com\u002Fv1\"\n\n# 与任何模型名称一起使用\npython3 debate.py critique --models gpt-4o \u003C spec.md\n```\n\n这适用于：\n- 本地 LLM 服务器（Ollama、vLLM、text-generation-webui）\n- OpenAI 兼容提供商\n- 自托管推理端点\n\n## 使用方法\n\n**从头开始：**\n\n```\n\u002Fadversarial-spec \"构建一个基于 Redis 后端的限流服务\"\n```\n\n**完善现有文档：**\n\n```\n\u002Fadversarial-spec .\u002Fdocs\u002Fmy-spec.md\n```\n\n系统将提示您提供以下信息：\n1. 
**文档类型**：PRD（面向业务\u002F产品）或技术规格书（面向工程）\n2. **访谈模式**：可选的深入需求收集环节\n3. **对手模型**：逗号分隔的列表（例如：`gpt-4o,gemini\u002Fgemini-2.0-flash,xai\u002Fgrok-3`）\n\n模型越多，视角越丰富，收敛标准也越严格。\n\n## 文档类型\n\n### PRD（产品需求文档）\n\n适用于利益相关者、产品经理和设计师。\n\n**章节：** 执行摘要、问题陈述、目标用户\u002F角色、用户故事、功能需求、非功能需求、成功指标、范围（包含\u002F排除）、依赖关系、风险\n\n**批评重点：** 清晰的问题定义、明确的角色定位、可衡量的成功标准、清晰的范围界定、不包含技术实现细节\n\n### 技术规格书\n\n适用于开发人员和架构师。\n\n**章节：** 概述、目标\u002F非目标、系统架构、组件设计、API 设计（完整模式）、数据模型、基础设施、安全、错误处理、性能\u002FSLA、可观测性、测试策略、部署策略\n\n**批评重点：** 完整的 API 契约、数据模型覆盖、安全威胁缓解、错误处理机制、具体性能目标、避免对工程师造成歧义\n\n## 核心功能\n\n### 访谈模式\n\n在辩论开始前，可以选择进入深度访谈环节，以提前捕捉需求。\n\n**内容：** 问题背景、用户\u002F利益相关者、功能需求、技术限制、UI\u002FUX、权衡取舍、风险、成功标准\n\n访谈会通过深入追问来挑战假设。完成后，Claude 会将回答综合成一份完整的规格书，再开始对抗式辩论。\n\n### Claude 的积极参与\n\n每一轮，Claude 都会：\n1. 检查对手的批评是否有效\n2. 提供独立的批评（对手遗漏了什么？）\n3. 表明对特定观点的同意或不同意\n4. 将所有反馈综合为修订内容\n\n显示格式如下：\n\n```\n--- 第 N 轮 ---\n对手模型：\n- [GPT-4o]：批评——缺少限流配置\n- [Gemini]：同意\n\nClaude 的批评：\n安全部分缺乏输入验证策略。建议加入 OWASP 十大漏洞防护。\n\n综合意见：\n- 接受来自 GPT-4o 的建议：限流配置\n- Claude 添加：输入验证、OWASP 防护\n- 拒绝：无\n```\n\n### 早期一致性验证\n\n如果某个模型在前两轮内就表示同意，Claude 会持怀疑态度。该模型会被要求：\n- 确认已阅读全文\n- 列出具体审查过的章节\n- 解释为何同意\n- 指出仍存在的顾虑\n\n这样可以防止那些未经仔细审查就盲目认可的模型产生虚假的一致性。\n\n### 用户审核期\n\n当所有模型达成一致后，您将进入审核期，有三种选择：\n1. **按原样接受**：文档已完成\n2. **请求修改**：Claude 更新规格书，您无需进行完整辩论周期即可迭代\n3. 
**再运行一轮**：将更新后的规格书再次送入对抗式辩论\n\n### 多轮审核\n\n您可以采用不同策略运行多轮辩论：\n- 第一轮使用快速模型（gpt-4o），第二轮使用更强的模型（o1）\n- 第一轮关注结构完整性，第二轮聚焦安全性\n- 在用户提出修改后，获取新的视角\n\n### PRD 到技术规格书的流程\n\n当 PRD 达到共识时，您将被提供直接基于 PRD 继续撰写技术规格书的选项。这样可以在一次会话中完成一整套文档。\n\n## 进阶功能\n\n### 批评焦点模式\n\n引导模型优先关注特定问题：\n\n```bash\n--focus security      # 认证、输入验证、加密、漏洞\n--focus scalability   # 水平扩展、分片、缓存、容量\n--focus performance   # 延迟目标、吞吐量、查询优化\n--focus ux            # 用户旅程、错误状态、无障碍访问\n--focus reliability   # 故障模式、断路器、灾难恢复\n--focus cost          # 基础设施成本、资源效率\n```\n\n### 模型角色\n\n让模型从特定职业角度进行批评：\n\n```bash\n--persona security-engineer      # 以攻击者的思维思考\n--persona oncall-engineer        # 关注凌晨三点的调试\n--persona junior-developer       # 指出模糊性和经验性知识\n--persona qa-engineer            # 测试场景缺失\n--persona site-reliability       # 部署、监控、突发事件\n--persona product-manager        # 用户价值、成功指标\n--persona data-engineer          # 数据模型、ETL 影响\n--persona mobile-developer       # 针对移动端的 API 设计\n--persona accessibility-specialist  # WCAG、屏幕阅读器\n--persona legal-compliance       # GDPR、CCPA、监管要求\n```\n\n自定义角色同样适用：`--persona \"金融科技合规官\"`\n\n### 上下文注入\n\n包含现有文档供模型参考：\n\n```bash\n--context .\u002Fexisting-api.md --context .\u002Fschema.sql\n```\n\n使用场景：\n- 新规范必须集成的现有 API 文档\n- 规范必须兼容的数据库模式\n- 用于保持一致性的设计文档或先前规范\n- 合规性要求文档\n\n### 会话持久化与恢复\n\n长时间的讨论可能会崩溃或需要暂停。会话会自动保存状态：\n\n```bash\n# 启动一个命名会话\necho \"spec\" | python3 debate.py critique --models gpt-4o --session my-feature-spec\n\n# 从上次中断处恢复\npython3 debate.py critique --resume my-feature-spec\n\n# 列出所有会话\npython3 debate.py sessions\n```\n\n会话保存的内容包括：\n- 当前规范的状态\n- 轮次编号\n- 所有配置（模型、焦点、角色等）\n- 历史轮次的记录\n\n会话存储在 `~\u002F.config\u002Fadversarial-spec\u002Fsessions\u002F` 目录中。\n\n### 自动检查点\n\n使用会话时，每一轮的规范都会保存到 `.adversarial-spec-checkpoints\u002F` 目录中：\n\n```\n.adversarial-spec-checkpoints\u002F\n├── my-feature-spec-round-1.md\n├── my-feature-spec-round-2.md\n└── my-feature-spec-round-3.md\n```\n\n如果某次修订使情况变得更糟，可以使用这些检查点进行回滚。\n\n### 保留意图模式\n\n当模型将“不寻常”解释为“错误”时，收敛可能会抹去新颖的想法。`--preserve-intent` 
标志会使移除操作变得代价高昂：\n\n```bash\n--preserve-intent\n```\n\n启用后，模型必须：\n1. **精确引用**他们想要移除或大幅更改的内容\n2. **说明其危害**——不仅仅是“不必要的”，还要指出它会造成哪些具体问题\n3. **区分错误与偏好**——只移除那些事实错误、自相矛盾或存在风险的内容\n4. **在移除之前询问**那些不寻常但功能正常的选项：“这是有意为之吗？”\n\n这会将默认行为从“抹去任何不寻常之处”转变为“增加保护性细节，同时保留独特的选择”。\n\n适用场景：\n- 您的规范包含有意为之的非常规选择\n- 您希望模型挑战您的想法，而不是将其同质化\n- 前几轮移除了您原本想保留的内容\n\n### 成本跟踪\n\n每一轮评论都会显示令牌使用量和预估成本：\n\n```\n=== 成本摘要 ===\n总令牌数：输入 12,543 \u002F 输出 3,221\n总成本：$0.0847\n\n按模型：\n  gpt-4o：$0.0523（输入 8,234 \u002F 输出 2,100）\n  gemini\u002Fgemini-2.0-flash：$0.0324（输入 4,309 \u002F 输出 1,121）\n```\n\n### 保存的配置文件\n\n保存常用配置：\n\n```bash\n# 创建一个配置文件\npython3 debate.py save-profile strict-security \\\n  --models gpt-4o,gemini\u002Fgemini-2.0-flash \\\n  --focus security \\\n  --doc-type tech\n\n# 使用配置文件\npython3 debate.py critique --profile strict-security \u003C spec.md\n\n# 列出配置文件\npython3 debate.py profiles\n```\n\n配置文件存储在 `~\u002F.config\u002Fadversarial-spec\u002Fprofiles\u002F` 目录中。\n\n### 轮次之间的差异比较\n\n查看规范版本之间的确切变化：\n\n```bash\npython3 debate.py diff --previous round1.md --current round2.md\n```\n\n### 导出为任务列表\n\n从最终确定的规范中提取可执行的任务：\n\n```bash\ncat spec-output.md | python3 debate.py export-tasks --models gpt-4o --doc-type prd\n```\n\n输出内容包括标题、类型、优先级、描述以及验收标准。\n\n使用 `--json` 可以获得适合导入问题跟踪系统的结构化输出。\n\n## Telegram 集成（可选）\n\n在手机上接收通知，并在讨论过程中注入反馈。\n\n**设置步骤：**\n\n1. 在 Telegram 中联系 @BotFather，发送 `\u002Fnewbot` 并按照提示操作\n2. 复制机器人令牌\n3. 运行：`python3 \"$(find ~\u002F.claude -name telegram_bot.py -path '*adversarial-spec*' 2>\u002Fdev\u002Fnull | head -1)\" setup`\n4. 向您的机器人发送消息，再次运行设置流程以获取聊天 ID\n5. 
设置环境变量：\n\n```bash\nexport TELEGRAM_BOT_TOKEN=\"...\"\nexport TELEGRAM_CHAT_ID=\"...\"\n```\n\n**功能：**\n- 轮次完成后异步通知（包含成本信息）\n- 60 秒内回复反馈的时间窗口（反馈将被纳入下一轮）\n- 讨论结束后将最终文档发送至 Telegram\n\n## 输出\n\n最终文档具备以下特点：\n- 完整，遵循相应文档类型的完整结构\n- 经所有模型审核直至达成一致意见\n- 可直接交付给相关方，无需进一步编辑\n\n输出位置：\n- 打印到终端\n- 写入 `spec-output.md`（PRD）或 `tech-spec-output.md`（技术规范）\n- 发送至 Telegram（如已启用）\n\n辩论总结包括已完成的轮次、运行的周期、参与的模型、Claude 的贡献、成本以及所做的关键改进。\n\n## CLI 参考\n\n```bash\n# 核心命令\ndebate.py critique --models MODEL_LIST --doc-type TYPE [OPTIONS] \u003C spec.md\ndebate.py critique --resume SESSION_ID\ndebate.py diff --previous OLD.md --current NEW.md\ndebate.py export-tasks --models MODEL --doc-type TYPE [--json] \u003C spec.md\n\n# 信息命令\ndebate.py providers      # 列出提供商及 API 密钥状态\ndebate.py focus-areas    # 列出重点领域\ndebate.py personas       # 列出角色\ndebate.py profiles       # 列出已保存的配置文件\ndebate.py sessions       # 列出已保存的会话\n\n# 配置文件管理\ndebate.py save-profile NAME --models ... [--focus ...] [--persona ...]\n\n# Bedrock 管理\ndebate.py bedrock status                      # 显示 Bedrock 配置\ndebate.py bedrock enable --region REGION      # 启用 Bedrock 模式\ndebate.py bedrock disable                     # 关闭 Bedrock 模式\ndebate.py bedrock add-model MODEL             # 将模型添加到可用列表\ndebate.py bedrock remove-model MODEL          # 从列表中移除模型\ndebate.py bedrock list-models                 # 列出内置模型映射\n```\n\n**选项：**\n- `--models, -m` - 逗号分隔的模型列表（若未指定，则会根据可用的 API 密钥自动检测）\n- `--doc-type, -d` - prd 或 tech\n- `--codex-reasoning` - Codex 模型的推理力度（low、medium、high、xhigh；默认：xhigh）\n- `--focus, -f` - 重点领域（security、scalability、performance、ux、reliability、cost）\n- `--persona` - 专业角色\n- `--context, -c` - 上下文文件（可重复使用）\n- `--profile` - 加载已保存的配置文件\n- `--preserve-intent` - 要求对移除操作提供理由\n- `--session, -s` - 会话 ID，用于持久化和检查点\n- `--resume` - 恢复之前的会话\n- `--press, -p` - 防止拖延检查\n- `--telegram, -t` - 启用 Telegram\n- `--json, -j` - JSON 输出\n\n## 文件结构\n\n```\nadversarial-spec\u002F\n├── .claude-plugin\u002F\n│   └── plugin.json           # 插件元数据\n├── 
README.md\n├── LICENSE\n└── skills\u002F\n    └── adversarial-spec\u002F\n        ├── SKILL.md          # 技能定义和流程\n        └── scripts\u002F\n            ├── debate.py     # 多模型辩论编排\n            └── telegram_bot.py   # Telegram 通知\n```\n\n## 许可证\n\nMIT","# adversarial-spec 快速上手指南\n\n## 环境准备\n\n**系统要求**  \n- Python 3.10+  \n- 支持类 Unix 系统（Linux\u002FmacOS）  \n\n**前置依赖**  \n1. 安装 Python 3.10+（建议使用 [pyenv](https:\u002F\u002Fgithub.com\u002Fpyenv\u002Fpyenv) 管理版本）  \n2. 安装 `litellm` 包（支持多模型调用）  \n   ```bash\n   pip install litellm\n   ```\n   *推荐使用国内镜像加速*  \n   ```bash\n   pip install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple litellm\n   ```\n\n**API 密钥准备**  \n需配置至少一个 LLM 提供商的 API 密钥（如 OpenAI、OpenRouter 等）  \n```bash\nexport OPENAI_API_KEY=\"sk-...\"\n# 或使用 OpenRouter 统一接入\nexport OPENROUTER_API_KEY=\"sk-or-...\"\n```\n\n---\n\n## 安装步骤\n\n1. 添加插件市场源  \n   ```bash\n   claude plugin marketplace add zscole\u002Fadversarial-spec\n   ```\n\n2. 安装 adversarial-spec 插件  \n   ```bash\n   claude plugin install adversarial-spec\n   ```\n\n3. 验证安装（可选）  \n   ```bash\n   python3 \"$(find ~\u002F.claude -name debate.py -path '*adversarial-spec*' 2>\u002Fdev\u002Fnull | head -1)\" providers\n   ```\n\n---\n\n## 基本使用\n\n**场景 1：从零构建需求文档**  \n```bash\n\u002Fadversarial-spec \"Build a rate limiter service with Redis backend\"\n```\n\n**场景 2：迭代优化现有文档**  \n```bash\n\u002Fadversarial-spec .\u002Fdocs\u002Fmy-spec.md\n```\n\n**运行后需回答**  \n1. 文档类型：`PRD`（产品需求文档）或 `tech spec`（技术规格书）  \n2. 是否启用深度访谈（可选）  \n3. 
指定对抗模型列表（如 `gpt-4o,gemini\u002Fgemini-2.0-flash`）  \n\n**输出结果**  \n经过多模型辩论收敛后的最终文档，支持后续迭代优化。","某中型软件公司需开发实时协作的在线文档编辑工具，产品经理需撰写PRD。团队曾依赖单一AI生成需求文档，但多次因遗漏关键细节导致开发返工。\n\n### 没有 adversarial-spec 时\n- 单一模型生成的文档常忽略安全机制，如未考虑用户权限动态调整场景\n- 团队成员对需求理解存在偏差，如实时同步的延迟容忍度定义模糊\n- 未覆盖边缘情况，例如网络中断时的文档状态保存策略缺失\n- 文档迭代依赖人工多轮沟通，需求变更跟踪效率低下\n\n### 使用 adversarial-spec 后\n- 多模型辩论暴露安全盲点，如未提及的跨站脚本攻击防护方案\n- 不同模型对需求的补充使协作逻辑更完备，明确分段加载与冲突解决优先级\n- 边缘案例被系统性挖掘，如断网后自动切换本地缓存的恢复流程\n- 自动化迭代减少人工协调，需求文档版本更新效率提升60%\n\n通过多模型对抗性评审，adversarial-spec将产品需求文档的完整性与可执行性提升至行业领先水平。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzscole_adversarial-spec_bb5afc37.png","zscole","zak","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fzscole_8473d936.jpg","01110111 01101001 01100101 01101110 01100101 01110010","@numbergroup ",null,"zcole@linux.com","0xzak","crap.dev","https:\u002F\u002Fgithub.com\u002Fzscole",[25],{"name":26,"color":27,"percentage":28},"Python","#3572A5",100,519,43,"2026-04-05T08:01:54","MIT",2,"Linux, macOS","未说明",{"notes":37,"python":38,"dependencies":39},"需要配置至少一个大模型API密钥，部分模型可能需要额外下载","3.10+",[40],"litellm",[42,43,44,45],"语言模型","插件","Agent","开发框架",[47,48,49,50,51,52,53],"claude-ai","claude-code","anthropic","claude-code-plugin","claude-skills","llm","orchestration",3,"ready","2026-03-27T02:49:30.150509","2026-04-06T05:36:43.322645",[59,64,69,74,79,84],{"id":60,"question_zh":61,"answer_zh":62,"source_url":63},5186,"SKILL.md 中的脚本路径与市场安装位置不匹配如何解决？","维护者已通过动态查找路径修复此问题。Marketplace 安装的插件脚本路径已调整为 ~\u002F.claude\u002Fplugins\u002F 目录，无需手动修改路径。请确保使用最新版本插件。","https:\u002F\u002Fgithub.com\u002Fzscole\u002Fadversarial-spec\u002Fissues\u002F14",{"id":65,"question_zh":66,"answer_zh":67,"source_url":68},5187,"如何添加 OpenRouter API 支持？","已通过 PR 实现支持。使用方法：1. 添加插件源 `claude plugin add github:nulone\u002Fadversarial-spec` 2. 设置环境变量 `export OPENROUTER_API_KEY=\"sk-or-...\"` 3. 
运行命令 `\u002Fadversarial-spec \"Your prompt\" --models openrouter\u002Fopenai\u002Fgpt-4o`","https:\u002F\u002Fgithub.com\u002Fzscole\u002Fadversarial-spec\u002Fissues\u002F9",{"id":70,"question_zh":71,"answer_zh":72,"source_url":73},5188,"使用 OpenAI O 系列模型时温度参数报错如何解决？","维护者已自动识别 O 系列模型并移除温度参数。更新插件至最新版本后，O 系列模型（如 o1, o1-mini, o1-preview）将不再提示温度参数错误。","https:\u002F\u002Fgithub.com\u002Fzscole\u002Fadversarial-spec\u002Fissues\u002F8",{"id":75,"question_zh":76,"answer_zh":77,"source_url":78},5189,"如何解决 Codex CLI 的 git 仓库检查问题？","维护者已添加 `--skip-git-repo-check` 标志。更新插件后，非 Git 目录运行命令时将自动跳过仓库检查，无需手动修改代码。","https:\u002F\u002Fgithub.com\u002Fzscole\u002Fadversarial-spec\u002Fissues\u002F13",{"id":80,"question_zh":81,"answer_zh":82,"source_url":83},5190,"Opencode 支持是否可行？","维护者确认可通过修改实现，但需注意 Open Code 插件规范。建议直接尝试使用 `claude plugin add` 命令安装，或参考插件文档的兼容性说明。","https:\u002F\u002Fgithub.com\u002Fzscole\u002Fadversarial-spec\u002Fissues\u002F1",{"id":85,"question_zh":86,"answer_zh":87,"source_url":88},5191,"Pydantic 序列化警告如何处理？","此警告由 Pydantic 2.11+ 与 LiteLLM 的兼容性问题引起，属于非关键性警告。功能不受影响，建议保持依赖库更新以获取官方修复。","https:\u002F\u002Fgithub.com\u002Fzscole\u002Fadversarial-spec\u002Fissues\u002F6",[],[91,100,108,116,124,136],{"id":92,"name":93,"github_repo":94,"description_zh":95,"stars":96,"difficulty_score":54,"last_commit_at":97,"category_tags":98,"status":55},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 
艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[45,99,44],"图像",{"id":101,"name":102,"github_repo":103,"description_zh":104,"stars":105,"difficulty_score":33,"last_commit_at":106,"category_tags":107,"status":55},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,"2026-04-05T11:33:21",[45,44,42],{"id":109,"name":110,"github_repo":111,"description_zh":112,"stars":113,"difficulty_score":33,"last_commit_at":114,"category_tags":115,"status":55},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[45,99,44],{"id":117,"name":118,"github_repo":119,"description_zh":120,"stars":121,"difficulty_score":33,"last_commit_at":122,"category_tags":123,"status":55},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 
等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[45,42],{"id":125,"name":126,"github_repo":127,"description_zh":128,"stars":129,"difficulty_score":33,"last_commit_at":130,"category_tags":131,"status":55},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[99,132,133,43,44,134,42,45,135],"数据工具","视频","其他","音频",{"id":137,"name":138,"github_repo":139,"description_zh":140,"stars":141,"difficulty_score":54,"last_commit_at":142,"category_tags":143,"status":55},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[44,99,45,42,134]]