[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-paralleldrive--riteway":3,"tool-paralleldrive--riteway":62},[4,18,26,36,46,54],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",159636,2,"2026-04-17T23:33:34",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":42,"last_commit_at":43,"category_tags":44,"status":17},8272,"opencode","anomalyco\u002Fopencode","OpenCode 是一款开源的 AI 编程助手（Coding Agent），旨在像一位智能搭档一样融入您的开发流程。它不仅仅是一个代码补全插件，而是一个能够理解项目上下文、自主规划任务并执行复杂编码操作的智能体。无论是生成全新功能、重构现有代码，还是排查难以定位的 Bug，OpenCode 都能通过自然语言交互高效完成，显著减少开发者在重复性劳动和上下文切换上的时间消耗。\n\n这款工具专为软件开发者、工程师及技术研究人员设计，特别适合希望利用大模型能力来提升编码效率、加速原型开发或处理遗留代码维护的专业人群。其核心亮点在于完全开源的架构，这意味着用户可以审查代码逻辑、自定义行为策略，甚至私有化部署以保障数据安全，彻底打破了传统闭源 AI 助手的“黑盒”限制。\n\n在技术体验上，OpenCode 提供了灵活的终端界面（Terminal UI）和正在测试中的桌面应用程序，支持 macOS、Windows 及 Linux 全平台。它兼容多种包管理工具，安装便捷，并能无缝集成到现有的开发环境中。无论您是追求极致控制权的资深极客，还是渴望提升产出的独立开发者，OpenCode 都提供了一个透明、可信",144296,1,"2026-04-16T14:50:03",[13,45],"插件",{"id":47,"name":48,"github_repo":49,"description_zh":50,"stars":51,"difficulty_score":32,"last_commit_at":52,"category_tags":53,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 
都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":55,"name":56,"github_repo":57,"description_zh":58,"stars":59,"difficulty_score":32,"last_commit_at":60,"category_tags":61,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[45,13,15,14],{"id":63,"github_repo":64,"name":65,"description_en":66,"description_zh":67,"ai_summary_zh":67,"readme_en":68,"readme_zh":69,"quickstart_zh":70,"use_case_zh":71,"hero_image_url":72,"owner_login":73,"owner_name":74,"owner_avatar_url":75,"owner_bio":76,"owner_company":77,"owner_location":77,"owner_email":78,"owner_twitter":77,"owner_website":79,"owner_url":80,"languages":81,"stars":86,"forks":87,"last_commit_at":88,"license":89,"difficulty_score":32,"env_os":90,"env_gpu":91,"env_ram":90,"env_deps":92,"category_tags":100,"github_topics":77,"view_count":32,"oss_zip_url":77,"oss_zip_packed_at":77,"status":17,"created_at":102,"updated_at":103,"faqs":104,"releases":130},8899,"paralleldrive\u002Friteway","riteway","Simple, readable, helpful unit tests. 
Optimized for AI Driven Development.","Riteway 是一款专为 AI 驱动开发（AIDD）设计的现代化单元测试框架，旨在让测试代码对人类和 AI 代理都更加简单、易读且有用。它通过独特的断言风格，解决了传统测试框架代码冗余、逻辑晦涩以及难以被 AI 准确理解的问题，显著降低了编写高质量测试的门槛。\n\n这款工具特别适合追求高效开发的软件工程师、致力于提升代码质量的团队，以及正在探索 AI 辅助编程的研究者。Riteway 的核心亮点在于其强制遵循的“五问”测试哲学，要求每个测试必须清晰回答被测单元、预期行为、实际输出、期望输出及复现步骤，从而为 AI 提供结构化的需求上下文，减少幻觉并提高生成准确率。此外，其极简的 API 设计不仅节省宝贵的 Token 空间，还内置了 `riteway ai` 命令行工具，支持通过 OAuth 直接调用 Claude、Cursor 等主流 AI 代理进行提示词评估，让开发者能像编写普通单元测试一样轻松验证 AI 的表现。无论是配合 Vitest、Playwright 使用，还是用于 JSX 组件测试，Riteway 都能帮助构建更可靠、更易维护的现代测试套件。","# Riteway\n[![SudoLang AIDD](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F✨_SudoLang_AIDD-black)](https:\u002F\u002Fgithub.com\u002Fparalleldrive\u002Faidd)[![Parallel Drive](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F🖤_Parallel_Drive-000000?style=flat)](https:\u002F\u002Fparalleldrive.com)\n\n**The standard testing framework for AI Driven Development (AIDD) and software agents.**\n\nRiteway is a testing assertion style and philosophy which leads to simple, readable, helpful unit tests for humans and AI agents.\n\nIt lets you write better, more readable tests with a fraction of the code that traditional assertion frameworks would use, and the `riteway ai` CLI lets you write AI agent prompt evals as easily as you would write a unit testing suite.\n\nRiteway is the AI-native way to build a modern test suite. It pairs well with Vitest, Playwright, Claude Code, Cursor Agent, Google Antigravity, and more.\n\n* **R**eadable\n* **I**solated\u002F**I**ntegrated\n* **T**horough\n* **E**xplicit\n\nRiteway forces you to write **R**eadable, **I**solated, and **E**xplicit tests, because that's the only way you can use the API. 
It also makes it easier to be thorough by making test assertions so simple that you'll want to write more of them.\n\n## Why Riteway for AI Driven Development?\n\nRiteway's structured approach makes it ideal for AIDD:\n\n**📖 Learn more:** [Better AI Driven Development with Test Driven Development](https:\u002F\u002Fmedium.com\u002Feffortless-programming\u002Fbetter-ai-driven-development-with-test-driven-development-d4849f67e339)\n\n- **Clear requirements**: The given, should expectations and 5-question framework help AI better understand exactly what to build\n- **Readable by design**: Natural language descriptions make tests comprehensible to both humans and AI\n- **Simple API**: Minimal surface area reduces AI confusion and hallucinations\n- **Token efficient**: Concise syntax saves valuable context window space\n\n## The 5 Questions Every Test Must Answer\n\nThere are [5 questions every unit test must answer](https:\u002F\u002Fmedium.com\u002Fjavascript-scene\u002Fwhat-every-unit-test-needs-f6cd34d9836d). Riteway forces you to answer them.\n\n1. What is the unit under test (module, function, class, whatever)?\n2. What should it do? (Prose description)\n3. What was the actual output?\n4. What was the expected output?\n5. How do you reproduce the failure?\n\n\n## Installing\n\n```shell\nnpm install --save-dev riteway\n```\n\nThen add an npm command in your package.json:\n\n```json\n\"test\": \"riteway test\u002F**\u002F*-test.js\",\n```\n\nFor projects using both core Riteway tests and JSX component tests, you can use a dual test runner setup:\n\n```json\n\"test\": \"node source\u002Ftest.js && vitest run\",\n```\n\nNow you can run your tests with `npm test`. 
Riteway also supports full TAPE-compatible usage syntax, so you can have an advanced entry that looks like:\n\n```json\n\"test\": \"nyc riteway test\u002F**\u002F*-rt.js | tap-nirvana\",\n```\n\nIn this case, we're using [nyc](https:\u002F\u002Fwww.npmjs.com\u002Fpackage\u002Fnyc), which generates test coverage reports. The output is piped through an advanced TAP formatter, [tap-nirvana](https:\u002F\u002Fwww.npmjs.com\u002Fpackage\u002Ftap-nirvana) that adds color coding, source line identification and advanced diff capabilities.\n\n### Requirements\n\nRiteway requires Node.js 16+ and uses native ES modules. Add `\"type\": \"module\"` to your package.json to enable ESM support. For JSX component testing, you'll need a build tool that can transpile JSX (see [JSX Setup](#jsx-setup) below).\n\n\n## `riteway ai` — AI Prompt Evaluations\n\nThe `riteway ai` CLI runs your AI agent prompt evaluations against a configurable pass-rate threshold. Write a `.sudo` test file, run it through any supported AI agent, and get a TAP-formatted report with per-assertion pass rates across multiple runs.\n\n### Authentication\n\nAll agents use OAuth authentication — no API keys needed. 
Authenticate once before running evals:\n\n| Agent | Command | Docs |\n|-------|---------|------|\n| Claude | `claude setup-token` | [Claude Code docs](https:\u002F\u002Fdocs.anthropic.com\u002Fen\u002Fdocs\u002Fclaude-code) |\n| Cursor | `agent login` | [Cursor docs](https:\u002F\u002Fdocs.cursor.com\u002Fcontext\u002Frules-for-ai) |\n| OpenCode | See docs | [opencode.ai\u002Fdocs\u002Fcli](https:\u002F\u002Fopencode.ai\u002Fdocs\u002Fcli\u002F) |\n\n### Writing a test file\n\nAI evals are written in `.sudo` files using [SudoLang](https:\u002F\u002Fgithub.com\u002Fparalleldrive\u002Fsudolang) syntax:\n\n```\n# my-feature-test.sudo\n\nimport 'path\u002Fto\u002Fspec.mdc'\n\nuserPrompt = \"\"\"\nImplement the sum function as described.\n\"\"\"\n\n- Given the spec, should name the function sum\n- Given the spec, should accept two parameters named a and b\n- Given the spec, should return the correct sum of the two parameters\n```\n\nEach `- Given ..., should ...` line becomes an independently judged assertion. 
The agent is asked to respond to the `userPrompt` (with any imported spec as context), and a judge agent scores each assertion across all runs.\n\n### Running an eval\n\n```shell\nriteway ai path\u002Fto\u002Fmy-feature-test.sudo\n```\n\nBy default this runs **4 passes**, requires **75% pass rate**, uses the **claude** agent, runs up to **4 tests concurrently**, and allows **300 seconds** per agent call.\n\n```shell\n# Specify runs, threshold, and agent\nriteway ai path\u002Fto\u002Ftest.sudo --runs 10 --threshold 80 --agent opencode\n\n# Use a Cursor agent with color output\nriteway ai path\u002Fto\u002Ftest.sudo --agent cursor --color\n\n# Use a custom agent config file (mutually exclusive with --agent)\nriteway ai path\u002Fto\u002Ftest.sudo --agent-config .\u002Fmy-agent.json\n```\n\n### Options\n\n| Flag | Default | Description |\n|------|---------|-------------|\n| `--runs N` | `4` | Number of passes per assertion |\n| `--threshold P` | `75` | Required pass percentage (0–100) |\n| `--timeout MS` | `300000` | Per-agent-call timeout in milliseconds |\n| `--agent NAME` | `claude` | Agent: `claude`, `opencode`, `cursor`, or a custom name from `riteway.agent-config.json` |\n| `--agent-config FILE` | — | Path to a flat single-agent JSON config `{\"command\",\"args\",\"outputFormat\"}` — mutually exclusive with `--agent` |\n| `--concurrency N` | `4` | Max concurrent test executions |\n| `--color` | off | Enable ANSI color output |\n| `--save-responses` | off | Save raw agent responses and judge details to a companion `.responses.md` file |\n\nResults are written as a TAP markdown file under `ai-evals\u002F` in the project root.\n\n### Saving raw responses for debugging\n\nWhen `--save-responses` is passed, a companion `.responses.md` file is written alongside the `.tap.md` output. 
It contains the raw result agent response and per-run judge details (passed, actual, expected, score) for every assertion — useful for debugging failures without adding console noise.\n\n```shell\nriteway ai path\u002Fto\u002Ftest.sudo --save-responses\n```\n\nEach test file produces its own uniquely-named pair of files (e.g. `2026-03-17-test-abc12.tap.md` and `2026-03-17-test-abc12.responses.md`), so multiple test files never conflict.\n\n#### Capturing responses as CI artifacts\n\nIn GitHub Actions, use `--save-responses` and upload the `ai-evals\u002F` directory as an artifact:\n\n```yaml\n- name: Run AI prompt evaluations\n  run: npx riteway ai path\u002Fto\u002Ftest.sudo --save-responses\n\n- name: Upload AI eval responses\n  if: always()\n  uses: actions\u002Fupload-artifact@v4\n  with:\n    name: ai-eval-responses\n    path: ai-evals\u002F*.responses.md\n    retention-days: 14\n```\n\nThe `if: always()` ensures responses are uploaded even when assertions fail, so you can inspect exactly what the agent produced.\n\n#### Partial results on timeout\n\nIf some runs complete before another times out, the completed runs' responses are still written to the responses file. The timed-out run's partial agent output is also captured, followed by a `[RITEWAY TIMEOUT]` marker showing when and where the timeout occurred. This lets you debug why a run took too long and potentially optimize the prompt to run faster.\n\n### Custom agent configuration\n\n`riteway ai init` writes all built-in agent configs to `riteway.agent-config.json` in your project root, so you can add custom agents or tweak existing flags:\n\n```shell\nriteway ai init           # create riteway.agent-config.json\nriteway ai init --force   # overwrite existing file\n```\n\nThe generated file is a keyed registry. 
Add a custom agent entry and use it with `--agent`:\n\n```json\n{\n  \"claude\":   { \"command\": \"claude\",   \"args\": [\"-p\", \"--output-format\", \"json\", \"--no-session-persistence\"], \"outputFormat\": \"json\"  },\n  \"opencode\": { \"command\": \"opencode\", \"args\": [\"run\", \"--format\", \"json\"],                                   \"outputFormat\": \"ndjson\" },\n  \"cursor\":   { \"command\": \"agent\",    \"args\": [\"--print\", \"--output-format\", \"json\"],                        \"outputFormat\": \"json\"  },\n  \"my-agent\": { \"command\": \"my-tool\",  \"args\": [\"--json\"],                                                    \"outputFormat\": \"json\"  }\n}\n```\n\n```shell\nriteway ai path\u002Fto\u002Ftest.sudo --agent my-agent\n```\n\nOnce `riteway.agent-config.json` exists, any agent key defined in it supersedes the library's built-in defaults for that agent.\n\n---\n\n## Example Usage\n\n```js\nimport { describe, Try } from 'riteway\u002Findex.js';\n\n\u002F\u002F a function to test\nconst sum = (...args) => {\n  if (args.some(v => Number.isNaN(v))) throw new TypeError('NaN');\n  return args.reduce((acc, n) => acc + n, 0);\n};\n\ndescribe('sum()', async assert => {\n  const should = 'return the correct sum';\n\n  assert({\n    given: 'no arguments',\n    should: 'return 0',\n    actual: sum(),\n    expected: 0\n  });\n\n  assert({\n    given: 'zero',\n    should,\n    actual: sum(2, 0),\n    expected: 2\n  });\n\n  assert({\n    given: 'negative numbers',\n    should,\n    actual: sum(1, -4),\n    expected: -3\n  });\n\n  assert({\n    given: 'NaN',\n    should: 'throw',\n    actual: Try(sum, 1, NaN),\n    expected: new TypeError('NaN')\n  });  \n});\n```\n\n### Testing React Components\n\n```js\nimport render from 'riteway\u002Frender-component';\nimport { describe } from 'riteway\u002Findex.js';\n\ndescribe('renderComponent', async assert => {\n  const $ = render(\u003Cdiv className=\"foo\">testing\u003C\u002Fdiv>);\n\n  assert({\n   
 given: 'some jsx',\n    should: 'render markup',\n    actual: $('.foo').html().trim(),\n    expected: 'testing'\n  });\n});\n```\n\n> Note: JSX component testing requires transpilation. See the [JSX Setup](#jsx-setup) section below for configuration with Vite or Next.js.\n\nRiteway makes it easier than ever to test pure React components using the `riteway\u002Frender-component` module. A pure component is a component which, given the same inputs, always renders the same output.\n\nI don't recommend unit testing stateful components, or components with side-effects. Write functional tests for those, instead, because you'll need tests which describe the complete end-to-end flow, from user input, to back-end-services, and back to the UI. Those tests frequently duplicate any testing effort you would spend unit-testing stateful UI behaviors. You'd need to do a lot of mocking to properly unit test those kinds of components anyway, and that mocking may cover up problems with too much coupling in your component. See [\"Mocking is a Code Smell\"](https:\u002F\u002Fmedium.com\u002Fjavascript-scene\u002Fmocking-is-a-code-smell-944a70c90a6a) for details.\n\nA great alternative is to encapsulate side-effects and state management in container components, and then pass state into pure components as props. Unit test the pure components and use functional tests to ensure that the complete UX flow works in real browsers from the user's perspective.\n\n#### Isolating React Unit Tests\n\nWhen you [unit test React components](https:\u002F\u002Fmedium.com\u002Fjavascript-scene\u002Funit-testing-react-components-aeda9a44aae2) you frequently have to render your components many times. 
Often, you want different props for some tests.\n\nRiteway makes it easy to isolate your tests while keeping them readable by using [factory functions](https:\u002F\u002Flink.medium.com\u002FWxHPhCc3OV) in conjunction with [block scope](https:\u002F\u002Fdeveloper.mozilla.org\u002Fen-US\u002Fdocs\u002FWeb\u002FJavaScript\u002FReference\u002FStatements\u002Fblock).\n\n```js\nimport ClickCounter from '..\u002Fclick-counter\u002Fclick-counter-component';\n\ndescribe('ClickCounter component', async assert => {\n  const createCounter = clickCount =>\n    render(\u003CClickCounter clicks={ clickCount } \u002F>)\n  ;\n\n  {\n    const count = 3;\n    const $ = createCounter(count);\n    assert({\n      given: 'a click count',\n      should: 'render the correct number of clicks.',\n      actual: parseInt($('.clicks-count').html().trim(), 10),\n      expected: count\n    });\n  }\n\n  {\n    const count = 5;\n    const $ = createCounter(count);\n    assert({\n      given: 'a click count',\n      should: 'render the correct number of clicks.',\n      actual: parseInt($('.clicks-count').html().trim(), 10),\n      expected: count\n    });\n  }\n});\n```\n\n## Output\n\nRiteway produces standard TAP output, so it's easy to integrate with just about any test formatter and reporting tool. (TAP is a well established standard with hundreds (thousands?) of integrations).\n\n```shell\nTAP version 13\n# sum()\nok 1 Given no arguments: should return 0\nok 2 Given zero: should return the correct sum\nok 3 Given negative numbers: should return the correct sum\nok 4 Given NaN: should throw\n\n1..4\n# tests 4\n# pass  4\n\n# ok\n```\n\nPrefer colorful output? No problem. The standard TAP output has you covered. 
You can run it through any TAP formatter you like:\n\n```shell\nnpm install -g tap-color\nnpm test | tap-color\n```\n\n![Colorized output](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fparalleldrive_riteway_readme_566c852de653.png)\n\n\n## API\n\n### describe\n\n```js\ndescribe = (unit: String, cb: TestFunction) => Void\n```\n\nDescribe takes a prose description of the unit under test (function, module, whatever), and a callback function (`cb: TestFunction`). The callback function should be an [async function](https:\u002F\u002Fdeveloper.mozilla.org\u002Fen-US\u002Fdocs\u002FWeb\u002FJavaScript\u002FReference\u002FStatements\u002Fasync_function) so that the test can automatically complete when it reaches the end. Riteway assumes that all tests are asynchronous. Async functions automatically return a promise in JavaScript, so Riteway knows when to end each test.\n\n### describe.only\n\n```js\ndescribe.only = (unit: String, cb: TestFunction) => Void\n```\n\nLike Describe, but don't run any other tests in the test suite.  See [test.only](https:\u002F\u002Fgithub.com\u002Fsubstack\u002Ftape#testonlyname-cb)\n\n### describe.skip\n\n```js\ndescribe.skip = (unit: String, cb: TestFunction) => Void\n```\n\nSkip running this test. See [test.skip](https:\u002F\u002Fgithub.com\u002Fsubstack\u002Ftape#testskipname-cb)\n\n### TestFunction\n\n```js\nTestFunction = assert => Promise\u003Cvoid>\n```\n\nThe `TestFunction` is a user-defined function which takes `assert()` and must return a promise. If you supply an async function, it will return a promise automatically. If you don't, you'll need to explicitly return a promise.\n\nFailure to resolve the `TestFunction` promise will cause an error telling you that your test exited without ending. 
Usually, the fix is to add `async` to your `TestFunction` signature, e.g.:\n\n```js\ndescribe('sum()', async assert => {\n  \u002F* test goes here *\u002F\n});\n```\n\n\n### assert\n\n```js\nassert = ({\n  given = Any,\n  should = '',\n  actual: Any,\n  expected: Any\n} = {}) => Void, throws\n```\n\nThe `assert` function is the function you call to make your assertions. It takes prose descriptions for `given` and `should` (which should be strings), and invokes the test harness to evaluate the pass\u002Ffail status of the test. Unless you're using a custom test harness, assertion failures will cause a test failure report and an error exit status.\n\nNote that `assert` uses [a deep equality check](https:\u002F\u002Fgithub.com\u002Fsubstack\u002Fnode-deep-equal) to compare the actual and expected values. Rarely, you may need another kind of check. In those cases, pass a JavaScript expression for the `actual` value.\n\n### createStream\n\n```js\ncreateStream = ({ objectMode: Boolean }) => NodeStream\n```\n\nCreate a stream of output, bypassing the default output stream that writes messages to `console.log()`. 
By default the stream will be a text stream of TAP output, but you can get an object stream instead by setting `opts.objectMode` to `true`.\n\n```js\nimport { describe, createStream } from 'riteway\u002Findex.js';\n\ncreateStream({ objectMode: true }).on('data', function (row) {\n    console.log(JSON.stringify(row))\n});\n\ndescribe('foo', async assert => {\n  \u002F* your tests here *\u002F\n});\n```\n\n### countKeys\n\nGiven an object, return a count of the object's own properties.\n\n```js\ncountKeys = (Object) => Number\n```\n\nThis function can be handy when you're adding new state to an object keyed by ID, and you want to ensure that the correct number of keys were added to the object.\n\n### Try\n\n```js\nTry = (fn, ...args) => Error | Any\n```\n\nExecute a function with the given arguments and return any error thrown, or the function's return value if no error occurs. This utility is designed for testing error cases in your assertions.\n\n`Try` handles both synchronous errors (via try\u002Fcatch) and asynchronous errors (via promise rejection), making it ideal for testing functions that throw exceptions or return rejected promises.\n\n#### Example: Testing Synchronous Errors\n\n```js\nconst sum = (...args) => {\n  if (args.some(v => Number.isNaN(v))) throw new TypeError('NaN');\n  return args.reduce((acc, n) => acc + n, 0);\n};\n\ndescribe('sum()', async assert => {\n  assert({\n    given: 'NaN',\n    should: 'throw TypeError',\n    actual: Try(sum, 1, NaN),\n    expected: new TypeError('NaN')\n  });\n});\n```\n\n#### Example: Testing Asynchronous Errors\n\n```js\nconst fetchUser = async (id) => {\n  if (!id) throw new Error('ID required');\n  return await fetch(`\u002Fapi\u002Fusers\u002F${id}`);\n};\n\ndescribe('fetchUser()', async assert => {\n  assert({\n    given: 'no ID',\n    should: 'throw an error',\n    actual: await Try(fetchUser),\n    expected: new Error('ID required')\n  });\n});\n```\n\n\n## Render Component\n\nFirst, import `render` from 
`riteway\u002Frender-component`:\n\n```js\nimport render from 'riteway\u002Frender-component';\n```\n\n```js\nrender = (jsx) => CheerioObject\n```\n\nTake a JSX object and return a [Cheerio object](https:\u002F\u002Fcheerio.js.org\u002F), a partial implementation of the jQuery core API which makes selecting from your rendered JSX markup just like selecting with jQuery or the `querySelectorAll` API.\n\n### Example\n\n```js\ndescribe('MyComponent', async assert => {\n  const $ = render(\u003CMyComponent \u002F>);\n\n  assert({\n    given: 'no params',\n    should: 'render something with the my-component class',\n    actual: $('.my-component').length,\n    expected: 1\n  });\n});\n```\n\n\n## Match\n\nFirst, import `match` from `riteway\u002Fmatch`:\n\n```js\nimport match from 'riteway\u002Fmatch.js';\n```\n\n```js\nmatch = text => pattern => String\n```\n\nTake some text to search and return a function which takes a pattern and returns the matched text, if found, or an empty string. The pattern can be a string or regular expression.\n\n### Example\n\nImagine you have a React component you need to test. The component takes some text and renders it in some div contents. You need to make sure that the passed text is getting rendered.\n\n```js\nconst MyComponent = ({text}) => \u003Cdiv className=\"contents\">{text}\u003C\u002Fdiv>;\n```\n\nYou can use match to create a new function that will test to see if your search\ntext contains anything matching the pattern you passed in. 
Writing tests this way\nallows you to see clear expected and actual values, so you can expect the specific\ntext you're expecting to find:\n\n```js\ndescribe('MyComponent', async assert => {\n  const text = 'Test for whatever you like!';\n  const $ = render(\u003CMyComponent text={ text }\u002F>);\n  const contains = match($('.contents').html());\n\n  assert({\n    given: 'some text to display',\n    should: 'render the text.',\n    actual: contains(text),\n    expected: text\n  });\n});\n```\n\n## JSX Setup\n\nFor JSX component testing, you need a build tool that can transpile JSX. We recommend **Vite** or **Next.js**, both of which handle JSX out of the box.\n\n### Option 1: Vite Setup\n\n[Vite](https:\u002F\u002Fvitejs.dev\u002F) provides excellent JSX support with minimal configuration:\n\n1. **Install Vite, Vitest, and React plugin:**\n   ```bash\n   npm install --save-dev vite vitest @vitejs\u002Fplugin-react\n   ```\n\n2. **Create `vite.config.js` in your project root:**\n   ```javascript\n   import { defineConfig } from 'vite';\n   import react from '@vitejs\u002Fplugin-react';\n\n   export default defineConfig({\n     plugins: [react()],\n   });\n   ```\n\n3. **Update your package.json test script:**\n   ```json\n   {\n     \"scripts\": {\n       \"test\": \"vitest run\"\n     }\n   }\n   ```\n\n   Note: Vitest configuration is optional. The above setup will work with default settings.\n\n### Option 2: Next.js Setup\n\n[Next.js](https:\u002F\u002Fnextjs.org\u002F) handles JSX transpilation automatically. No additional configuration needed for JSX support.\n\n## Vitest\n\n[Vitest](https:\u002F\u002Fvitest.dev\u002Fguide\u002F) is a [Vite](https:\u002F\u002Fvitejs.dev\u002F) plugin through which you can run Riteway tests. It's a great way to get started with Riteway because it's easy to set up and fast. It also runs tests in real browsers, so you can test standard web components.\n\n### Installing\n\nFirst you will need to install Vitest. 
You will also need to install Riteway into your project if you have not already done so. You can use any package manager you like:\n\n```shell\nnpm install --save-dev vitest\n```\n\n### Usage\n\nFirst, import `assert` from `riteway\u002Fvitest` and `describe` from `vitest`:\n\n```ts\nimport { assert } from 'riteway\u002Fvitest';\nimport { describe, test } from \"vitest\";\n```\n\nThen you can use the Vitest runner to test. You can run `npx vitest` directly or add a script to your package.json. See [here](https:\u002F\u002Fvitest.dev\u002Fconfig\u002F) for additional details on setting up a Vitest configuration.\n\nWhen using vitest, you should wrap your asserts inside a test function so that vitest can understand where your tests failed when it encounters a failure.\n\n\n```ts\n\u002F\u002F a function to test\nconst sum = (...args) => {\n  if (args.some(v => Number.isNaN(v))) throw new TypeError('NaN');\n  return args.reduce((acc, n) => acc + n, 0);\n};\n\ndescribe('sum()', () => {\n  test('basic summing', () => {\n    assert({\n      given: 'no arguments',\n      should: 'return 0',\n      actual: sum(),\n      expected: 0\n    });\n\n    assert({\n      given: 'two numbers',\n      should: 'return the correct sum',\n      actual: sum(2, 0),\n      expected: 2\n    });\n  });\n});\n```\n\n## Bun\n\n[Bun](https:\u002F\u002Fbun.sh\u002F) has a fast, built-in test runner that is Jest-compatible. Riteway provides a Bun adapter so you can use the familiar `assert` API with Bun's test runner.\n\n### Installing\n\nFirst, make sure you have Bun installed. Then install Riteway into your project:\n\n```shell\nbun add --dev riteway\n```\n\n### Setup\n\nBefore using `assert`, you need to call `setupRitewayBun()` once to register the custom matcher. 
We recommend doing this in a global setup file using Bun's `preload` option.\n\nCreate a setup file (e.g., `test\u002Fsetup.ts`):\n\n```ts\nimport { setupRitewayBun } from 'riteway\u002Fbun';\n\nsetupRitewayBun();\n```\n\nThen configure Bun to preload it. Add to your `bunfig.toml`:\n\n```toml\n[test]\npreload = [\".\u002Ftest\u002Fsetup.ts\"]\n```\n\nOr specify it via CLI:\n\n```shell\nbun test --preload .\u002Ftest\u002Fsetup.ts\n```\n\n### Usage\n\nIn your test files, import `test`, `describe`, and `assert` from `riteway\u002Fbun`:\n\n```ts\nimport { test, describe, assert } from 'riteway\u002Fbun';\n```\n\nThen run your tests with `bun test`:\n\n```shell\nbun test\n```\n\n### Example\n\n```ts\nimport { test, describe, assert } from 'riteway\u002Fbun';\n\n\u002F\u002F a function to test\nconst sum = (...args) => {\n  if (args.some(v => Number.isNaN(v))) throw new TypeError('NaN');\n  return args.reduce((acc, n) => acc + n, 0);\n};\n\ndescribe('sum()', () => {\n  test('given: no arguments, should: return 0', () => {\n    assert({\n      given: 'no arguments',\n      should: 'return 0',\n      actual: sum(),\n      expected: 0\n    });\n  });\n\n  test('given: two numbers, should: return the correct sum', () => {\n    assert({\n      given: 'two numbers',\n      should: 'return the correct sum',\n      actual: sum(2, 3),\n      expected: 5\n    });\n  });\n});\n```\n\n### Failure Output\n\nWhen a test fails, the error message includes the `given` and `should` context:\n\n```\nerror: Given two different numbers: should be equal\n\nExpected: 43\nReceived: 42\n```\n","# Riteway\n[![SudoLang AIDD](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F✨_SudoLang_AIDD-black)](https:\u002F\u002Fgithub.com\u002Fparalleldrive\u002Faidd)[![Parallel 
Drive](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F🖤_Parallel_Drive-000000?style=flat)](https:\u002F\u002Fparalleldrive.com)\n\n**面向AI驱动开发（AIDD）与软件代理的标准测试框架。**\n\nRiteway是一种测试断言风格与理念，旨在为人类和AI代理编写简单、易读且富有帮助性的单元测试。\n\n它让你用远少于传统断言框架的代码量，写出更好、更易读的测试；而`riteway ai` CLI则使你能够像编写单元测试套件一样轻松地进行AI代理提示评估。\n\nRiteway是构建现代测试套件的原生AI方式。它与Vitest、Playwright、Claude Code、Cursor Agent、Google Antigravity等工具配合得非常好。\n\n* **R**eadable（可读性）\n* **I**solated\u002F**I**ntegrated（隔离性\u002F集成性）\n* **T**horough（全面性）\n* **E**xplicit（明确性）\n\nRiteway强制你编写**R**eadable、**I**solated和**E**xplicit的测试，因为这是使用其API的唯一方式。同时，它通过让测试断言变得极其简单，促使你愿意编写更多断言，从而更容易做到全面覆盖。\n\n## 为什么在AI驱动开发中选择Riteway？\n\nRiteway的结构化方法使其非常适合AIDD：\n\n**📖 了解更多：** [通过测试驱动开发实现更好的AI驱动开发](https:\u002F\u002Fmedium.com\u002Feffortless-programming\u002Fbetter-ai-driven-development-with-test-driven-development-d4849f67e339)\n\n- **清晰的需求**：通过“给定”、“应该”的期望以及5问框架，帮助AI更准确地理解需要构建的内容。\n- **设计之初即注重可读性**：自然语言描述使测试对人类和AI都易于理解。\n- **简洁的API**：最小化的接口表面减少了AI的困惑与幻觉。\n- **节省Token**：简洁的语法节约了宝贵的上下文窗口空间。\n\n## 每个测试必须回答的5个问题\n\n[每个单元测试都必须回答5个问题](https:\u002F\u002Fmedium.com\u002Fjavascript-scene\u002Fwhat-every-unit-test-needs-f6cd34d9836d)。Riteway会强制你回答这些问题。\n\n1. 被测单元是什么（模块、函数、类或其他）？\n2. 它应该做什么？（用文字描述）\n3. 实际输出是什么？\n4. 预期输出是什么？\n5. 
How do you reproduce the failure?

## Install

```shell
npm install --save-dev riteway
```

Then add an npm command to your package.json:

```json
"test": "riteway test/**/*-test.js",
```

For projects with both core Riteway tests and JSX component tests, you can use a dual-test-runner configuration:

```json
"test": "node source/test.js && vitest run",
```

Now you can run your tests with `npm test`. Riteway is also fully compatible with TAPE usage, so you can set up an advanced pipeline like this:

```json
"test": "nyc riteway test/**/*-rt.js | tap-nirvana",
```

In this case, we're using [nyc](https://www.npmjs.com/package/nyc), which generates test coverage reports. The output is piped through an advanced TAP formatter, [tap-nirvana](https://www.npmjs.com/package/tap-nirvana), which adds color coding, source line identification, and advanced diffs.

### Requirements

Riteway requires Node.js 16+ and uses native ES modules. Add `"type": "module"` to your package.json to enable ESM support. For JSX component testing, you'll need a build tool that can transpile JSX (see [JSX Setup] below).

## `riteway ai` — AI Prompt Evals

The `riteway ai` CLI runs your AI agent prompt evaluations against a configurable pass-rate threshold. Write a `.sudo` test file, feed it to any supported AI agent, and get a TAP-formatted report with per-assertion pass rates across multiple runs.

### Authentication

All agents authenticate via OAuth, so no API keys are needed. Just authenticate once before running evals:

| Agent | Command | Docs |
|-------|---------|------|
| Claude | `claude setup-token` | [Claude Code docs](https://docs.anthropic.com/en/docs/claude-code) |
| Cursor | `agent login` | [Cursor docs](https://docs.cursor.com/context/rules-for-ai) |
| OpenCode | See docs | [opencode.ai/docs/cli](https://opencode.ai/docs/cli/) |

### Writing Test Files

AI evals are written in [SudoLang](https://github.com/paralleldrive/sudolang) syntax, in `.sudo` files:

```
# my-feature-test.sudo

import 'path/to/spec.mdc'

userPrompt = """
Implement the sum function as described.
"""

- Given the spec, should name the function sum
- Given the spec, should accept two parameters named a and b
- Given the spec, should return the correct sum of the two parameters
```

Each `- Given ..., should ...` line becomes an independently judged assertion. The agent responds to the `userPrompt` (with the imported spec as context), and a judge agent then scores each assertion for each run.

### Running Evals

```shell
riteway ai \
  path/to/my-feature-test.sudo
```

By default, this runs **4 runs** per assertion, requires a **75% pass rate**, uses the **Claude** agent, executes at most **4 tests concurrently**, and allows **300 seconds** per agent invocation.

```shell
# Specify runs, threshold, and agent
riteway ai path/to/test.sudo --runs 10 --threshold 80 --agent opencode

# Use the Cursor agent with color output
riteway ai path/to/test.sudo --agent cursor --color

# Use a custom agent config file (mutually exclusive with --agent)
riteway ai path/to/test.sudo --agent-config ./my-agent.json
```

### Options

| Flag | Default | Description |
|------|---------|-------------|
| `--runs N` | `4` | Number of runs per assertion |
| `--threshold P` | `75` | Required pass percentage (0–100) |
| `--timeout MS` | `300000` | Timeout per agent invocation, in milliseconds |
| `--agent NAME` | `claude` | Agent: `claude`, `opencode`, `cursor`, or a custom name from `riteway.agent-config.json` |
| `--agent-config FILE` | — | Path to a custom single-agent JSON config of the form `{"command","args","outputFormat"}`; mutually exclusive with `--agent` |
| `--concurrency N` | `4` | Maximum concurrent test executions |
| `--color` | off | Enable ANSI color output |
| `--save-responses` | off | Save raw agent responses and judge details to a companion `.responses.md` file |

Results are written in TAP Markdown format to an `ai-evals/` folder in your project root.

### Saving Raw Responses for Debugging

When `--save-responses` is passed, a companion `.responses.md` file is generated next to the `.tap.md` output file. It contains each assertion's raw agent responses and per-run judge details (pass, actual, expected, score), which is useful for debugging failures without adding console noise.

```shell
riteway ai path/to/test.sudo --save-responses
```

Each test file generates its own uniquely named pair of files (e.g. `2026-03-17-test-abc12.tap.md` and `2026-03-17-test-abc12.responses.md`), so multiple test files never collide.

#### Capturing Responses as CI Artifacts

In GitHub Actions, use `--save-responses` and upload the `ai-evals/` directory as a build artifact:

```yaml
- name: Run AI prompt evals
  run: npx riteway ai path/to/test.sudo --save-responses

- name: Upload AI eval responses
  if: always()
  uses: actions/upload-artifact@v4
  with:
    name: ai-eval-responses
    path: ai-evals/*.responses.md
    retention-days: 14
```

`if: always()` ensures responses are uploaded even when assertions fail, so you can see exactly what the agent produced.

#### Partial Results on Timeout

If some runs complete before others time out, the completed runs' responses are still written to the responses file. Partial agent output from timed-out runs is also captured, with a `[RITEWAY TIMEOUT]` marker appended indicating when and where the timeout occurred. This helps you debug why a run took too long, and perhaps optimize the prompt for speed.
### Custom Agent Configuration

`riteway ai init` writes all built-in agent configs to a `riteway.agent-config.json` file in your project root, so you can add custom agents or tweak existing flags:

```shell
riteway ai init           # create riteway.agent-config.json
riteway ai init --force   # overwrite an existing file
```

The generated file is a keyed registry. You can add a custom agent entry and invoke it with `--agent`:

```json
{
  "claude":   { "command": "claude",   "args": ["-p", "--output-format", "json", "--no-session-persistence"], "outputFormat": "json"  },
  "opencode": { "command": "opencode", "args": ["run", "--format", "json"],                                   "outputFormat": "ndjson" },
  "cursor":   { "command": "agent",    "args": ["--print", "--output-format", "json"],                        "outputFormat": "json"  },
  "my-agent": { "command": "my-tool",  "args": ["--json"],                                                    "outputFormat": "json"  }
}
```

```shell
riteway ai path/to/test.sudo --agent my-agent
```

Once `riteway.agent-config.json` exists, any agent key it defines overrides the library defaults for that agent.

---

## Example Usage

```js
import { describe, Try } from 'riteway/index.js';

// a function to test
const sum = (...args) => {
  if (args.some(v => Number.isNaN(v))) throw new TypeError('NaN');
  return args.reduce((acc, n) => acc + n, 0);
};

describe('sum()', async assert => {
  const should = 'return the correct sum';

  assert({
    given: 'no arguments',
    should: 'return 0',
    actual: sum(),
    expected: 0
  });

  assert({
    given: 'zero',
    should,
    actual: sum(2, 0),
    expected: 2
  });

  assert({
    given: 'negative numbers',
    should,
    actual: sum(1, -4),
    expected: -3
  });

  assert({
    given: 'NaN',
    should: 'throw',
    actual: Try(sum, 1, NaN),
    expected: new TypeError('NaN')
  });
});
```

### Testing React Components

```js
import render from 'riteway/render-component';
import { describe } from 'riteway/index.js';

describe('renderComponent', async assert => {
  const $ =
    render(<div className="foo">testing</div>);

  assert({
    given: 'some jsx',
    should: 'render the markup',
    actual: $('.foo').html().trim(),
    expected: 'testing'
  });
});
```

> Note: JSX component tests require transpiling. See the [JSX Setup](#jsx-setup) section below for use with Vite or Next.js.

Riteway makes it easier than ever to test pure React components using the `riteway/render-component` module. A pure component is a component which, given the same inputs, always renders the same output.

I don't recommend unit testing stateful components, or components with side-effects. For those, write functional tests instead, because they need tests that describe the complete end-to-end flow from user input through back-end services to the UI. Those tests tend to duplicate any work you'd do unit testing stateful UI behaviors. In any case, properly unit testing such components requires lots of mocking, and that mocking may hide the fact that the component is too tightly coupled internally. See the article ["Mocking is a Code Smell"](https://medium.com/javascript-scene/mocking-is-a-code-smell-944a70c90a6a) for details.

A great alternative is to encapsulate side-effects and state management in container components, then pass state into pure components as props. Unit test the pure components, and use functional tests to ensure the complete UX flow works in real browsers.

#### Isolating React Unit Tests

When you [unit test React components](https://medium.com/javascript-scene/unit-testing-react-components-aeda9a44aae2), you frequently need to render your component multiple times. Often, different tests need different props.

Riteway makes it easy to keep your tests isolated while staying readable, by combining [factory functions](https://link.medium.com/WxHPhCc3OV) and [block scope](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/block):

```js
import ClickCounter from '../click-counter/click-counter-component';

describe('ClickCounter component', async assert => {
  const createCounter = clickCount =>
    render(<ClickCounter clicks={ clickCount } />)
  ;

  {
    const count = 3;
    const $ = createCounter(count);
    assert({
      given: 'a click count',
      should: 'render the correct number of clicks',
      actual: parseInt($('.clicks-count').html().trim(), 10),
      expected: count
    });
  }

  {
    const count = 5;
    const $ = createCounter(count);
    assert({
      given: 'a click count',
      should: 'render the correct number of clicks',
      actual: parseInt($('.clicks-count').html().trim(), 10),
      expected: count
    });
  }
});
```

## Output

Riteway produces standard TAP output, so it's easy to integrate with just about any test formatter and reporting tool. (TAP
is a mature standard with hundreds, maybe thousands, of integrations.)

```shell
TAP version 13
# sum()
ok 1 Given no arguments: should return 0
ok 2 Given zero: should return the correct sum
ok 3 Given negative numbers: should return the correct sum
ok 4 Given NaN: should throw

1..4
# tests 4
# pass  4

# ok
```

Prefer colorful output? No problem. The standard TAP output has you covered. You can pipe it through any TAP formatter you like:

```shell
npm install -g tap-color
npm test | tap-color
```

![Colorized output](https://oss.gittoolsai.com/images/paralleldrive_riteway_readme_566c852de653.png)


## API

### describe

```js
describe = (unit: String, cb: TestFunction) => Void
```

`describe` takes a prose description of the unit under test (function, module, whatever) and a callback function (`cb: TestFunction`). The callback should be an [async function](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/async_function) so that the test can automatically complete when it reaches the end. Riteway assumes all tests are asynchronous. Async functions automatically return a promise in JavaScript, so Riteway knows when to end each test.

### describe.only

```js
describe.only = (unit: String, cb: TestFunction) => Void
```

Like `describe`, but don't run any other tests in the test suite. See [test.only](https://github.com/substack/tape#testonlyname-cb)

### describe.skip

```js
describe.skip = (unit: String, cb: TestFunction) => Void
```

Skip running this test. See [test.skip](https://github.com/substack/tape#testskipname-cb)

### TestFunction

```js
TestFunction = assert => Promise<void>
```

The `TestFunction` is a user-defined function which takes `assert()` and must return a promise. If you supply an async function, it will return a promise automatically. If you don't, you must explicitly return a promise.

If the `TestFunction` promise is never settled, an error is thrown telling you that your test exited without ending. Usually, the fix is to add `async` to your `TestFunction` signature, e.g.:

```js
describe('sum()', async assert => {
  /* test stuff here */
});
```


### assert

```js
assert = ({
  given = Any,
  should = '',
  actual: Any,
  expected: Any
} = {}) => Void, throws
```

The `assert` function is the function you call to make your assertions. It takes prose descriptions for `given` and `should` (which should be strings), and invokes the test harness to evaluate the pass/fail status of the test. Unless you're using a custom test harness, assertion failures cause a test failure report and an error exit status.

Note that `assert` uses a [deep equality check](https://github.com/substack/node-deep-equal)
to compare the actual and expected values. In rare cases, you may need a different kind of check; in those cases, pass a JavaScript expression for the `actual` value.

### createStream

```js
createStream = ({ objectMode: Boolean }) => NodeStream
```

Create a stream of output, bypassing the default output stream that writes messages to `console.log()`. By default, the stream will be a text stream of TAP output, but you can get an object stream instead by setting `opts.objectMode` to `true`.

```js
import { describe, createStream } from 'riteway/index.js';

createStream({ objectMode: true }).on('data', function (row) {
    console.log(JSON.stringify(row))
});

describe('foo', async assert => {
  /* your tests here */
});
```

### countKeys

Given an object, return a count of the object's own properties.

```js
countKeys = (Object) => Number
```

This is useful when you're adding new state keyed by ID to an object, and you want to make sure the correct number of keys were added.

### Try

```js
Try = (fn, ...args) => Error | Any
```

Run a function with the given arguments, returning the error if one is thrown, or the function's return value if no error occurs. This utility is designed specifically for testing error cases in your assertions.

`Try` handles both synchronous errors (via try/catch) and asynchronous errors (via promise rejection), making it ideal for testing functions that throw or return rejected promises.

#### Example: Testing Synchronous Errors

```js
const sum = (...args) => {
  if (args.some(v => Number.isNaN(v))) throw new TypeError('NaN');
  return args.reduce((acc, n) => acc + n, 0);
};

describe('sum()', async assert => {
  assert({
    given: 'NaN',
    should: 'throw a TypeError',
    actual: Try(sum, 1, NaN),
    expected: new TypeError('NaN')
  });
});
```

#### Example: Testing Asynchronous Errors

```js
const fetchUser = async (id) => {
  if (!id) throw new Error('ID required');
  return await fetch(`/api/users/${id}`);
};

describe('fetchUser()', async assert => {
  assert({
    given: 'no ID',
    should: 'throw an error',
    actual: await Try(fetchUser),
    expected: new Error('ID required')
  });
});
```


## Render Component

First, import `render` from `riteway/render-component`:

```js
import render from 'riteway/render-component';
```

```js
render = (jsx) => CheerioObject
```

Take a JSX object and return a [Cheerio object](https://cheerio.js.org/), a partial implementation of the jQuery core API which makes selecting from your rendered JSX markup as easy as selecting with jQuery or the `querySelectorAll` API.

### Example

```js
describe('MyComponent', async assert =>
{
  const $ = render(<MyComponent />);

  assert({
    given: 'no arguments',
    should: 'render content with the my-component class',
    actual: $('.my-component').length,
    expected: 1
  });
});
```


## Match

First, import `match` from `riteway/match`:

```js
import match from 'riteway/match.js';
```

```js
match = text => pattern => String
```

Take some text to search, and return a function which takes a pattern and returns the matched text, if found, or an empty string otherwise. The pattern can be a string or a regular expression.

### Example

Say you have a React component you need to test. It takes some text and renders it inside a div's contents. You need to make sure the text you pass in is actually being rendered.

```js
const MyComponent = ({text}) => <div className="contents">{text}</div>;
```

You can use `match` to create a new function that will test whether your search text contains anything matching the pattern you pass in. Writing your tests this way makes the expected and actual values crystal clear, so you can be sure you're finding the specific text you expect:

```js
describe('MyComponent', async assert => {
  const text = 'Test whatever you like!';
  const $ = render(<MyComponent text={ text }/>);
  const contains = match($('.contents').html());

  assert({
    given: 'some text to display',
    should: 'render the text',
    actual: contains(text),
    expected: text
  });
});
```


## JSX Setup

For JSX component tests, you'll need a build tool that can transpile JSX. We recommend **Vite** or **Next.js**, both of which handle JSX out of the box.

### Option 1: Vite Setup

[Vite](https://vitejs.dev/) provides great JSX support with minimal configuration:

1. **Install Vite, Vitest, and the React plugin:**
   ```bash
   npm install --save-dev vite vitest @vitejs/plugin-react
   ```

2. **Create a `vite.config.js` in your project root:**
   ```javascript
   import { defineConfig } from 'vite';
   import react from '@vitejs/plugin-react';

   export default defineConfig({
     plugins: [react()],
   });
   ```

3. **Update the test script in your package.json:**
   ```json
   {
     "scripts": {
       "test": "vitest run"
     }
   }
   ```

   Note: Vitest configuration is optional. The setup above works with the defaults.

### Option 2: Next.js Setup

[Next.js](https://nextjs.org/) handles JSX transpiling automatically.
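For illustration only, the curried `match` signature documented above (`match = text => pattern => String`) can be approximated in plain JavaScript. This is a sketch of the documented behavior under the stated contract, not the library's actual source:

```javascript
// Sketch of the documented contract: take text to search, return a
// function that takes a pattern (string or RegExp) and returns the
// matched text, or an empty string when nothing matches.
const match = text => pattern => {
  const found = String(text).match(pattern);
  return found === null ? '' : found[0];
};

const contains = match('<div class="contents">hello world</div>');

console.log(contains('hello world')); // 'hello world'
console.log(contains(/world/));       // 'world'
console.log(contains('missing'));     // ''
```

The currying is the point: you fix the text once, then reuse `contains` across many assertions, which keeps each `actual`/`expected` pair short and readable.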
No additional configuration is needed for JSX support.

## Vitest

[Vitest](https://vitest.dev/guide/) is a [Vite](https://vitejs.dev/) plugin that can run your Riteway tests. It's easy to set up and fast, which makes it a great way to get started with Riteway. It also runs tests in real browsers, so you can test standard web components.

### Install

First, you'll need to install Vitest, along with Riteway if it's not already in your project. Use whichever package manager you prefer:

```shell
npm install --save-dev vitest
```

### Usage

First, import `assert` from `riteway/vitest` and `describe` from `vitest`:

```ts
import { assert } from 'riteway/vitest';
import { describe, test } from "vitest";
```

Then you can run your tests with the Vitest runner, either by running `npx vitest` directly or by adding a script to your package.json. See the [Vitest config docs](https://vitest.dev/config/) for more configuration options.

When using Vitest, wrap your assert statements in test functions so that Vitest can tell where a failure occurred:


```ts
// a function to test
const sum = (...args) => {
  if (args.some(v => Number.isNaN(v))) throw new TypeError('NaN');
  return args.reduce((acc, n) => acc + n, 0);
};

describe('sum()', () => {
  test('basic addition', () => {
    assert({
      given: 'no arguments',
      should: 'return 0',
      actual: sum(),
      expected: 0
    });

    assert({
      given: 'two numbers',
      should: 'return the correct sum',
      actual: sum(2, 0),
      expected: 2
    });
  });
});
```

## Bun

[Bun](https://bun.sh/) has a fast, built-in, Jest-compatible test runner. Riteway provides a Bun adapter so you can use the familiar `assert` API with Bun's test runner.

### Install

First, make sure Bun is installed. Then add Riteway to your project:

```shell
bun add --dev riteway
```

### Setup

You must call `setupRitewayBun()` once to register the custom matcher before using `assert`. We recommend doing this in a global setup file using Bun's `preload` option.

Create a setup file (e.g., `test/setup.ts`):

```ts
import { setupRitewayBun } from 'riteway/bun';

setupRitewayBun();
```

Then configure Bun to preload it.
Add to your `bunfig.toml`:

```toml
[test]
preload = ["./test/setup.ts"]
```

Or specify it via CLI:

```shell
bun test --preload ./test/setup.ts
```

### Usage

In your test files, import `test`, `describe`, and `assert` from `riteway/bun`:

```ts
import { test, describe, assert } from 'riteway/bun';
```

Then run your tests with `bun test`:

```shell
bun test
```

### Example

```ts
import { test, describe, assert } from 'riteway/bun';

// a function to test
const sum = (...args) => {
  if (args.some(v => Number.isNaN(v))) throw new TypeError('NaN');
  return args.reduce((acc, n) => acc + n, 0);
};

describe('sum()', () => {
  test('given: no arguments, should: return 0', () => {
    assert({
      given: 'no arguments',
      should: 'return 0',
      actual: sum(),
      expected: 0
    });
  });

  test('given: two numbers, should: return the correct sum', () => {
    assert({
      given: 'two numbers',
      should: 'return the correct sum',
      actual: sum(2, 3),
      expected: 5
    });
  });
});
```

### Failure Output

When a test fails, the error message includes the `given` and `should` context:

```
error: Given two different numbers: should be equal

Expected: 43
Received: 42
```

---

# Riteway Quick Start Guide

Riteway is a standard testing framework designed for **AI-Driven Development (AIDD)** and software agents. It promotes a concise, highly readable assertion style that humans can easily understand and AI agents can execute efficiently. Its core philosophy follows the **RITE** principle: **R**eadable, **I**solated, **T**horough, **E**xplicit.

## Prerequisites

Before starting, make sure your development environment meets the following requirements:

*   **Node.js**: version 16 or higher.
*   **Module system**: Riteway uses native ES modules (ESM).
    *   Add `"type": "module"` to your project's `package.json` to enable support.
*   **JSX support (optional)**: To test React components, configure a build tool that can transpile JSX (such as Vite or Next.js).

## Installation

### 1. Install the dependency

Install Riteway as a dev dependency with npm:

```shell
npm install --save-dev riteway
```

> **Tip**: Developers in mainland China who experience slow downloads can temporarily install from the npmmirror registry:
> `npm install --save-dev riteway --registry=https://registry.npmmirror.com`
### 2. Configure the test command

Add a test command to the `scripts` section of `package.json`.

**Basic usage:**
```json
"test": "riteway test/**/*-test.js"
```

**Mixed projects (running both core tests and JSX component tests):**
If you also use Vitest for component tests, combine the commands:
```json
"test": "node source/test.js && vitest run"
```

**Advanced usage (with coverage reporting and pretty output):**
Pair with `nyc` (coverage) and `tap-nirvana` (formatted output):
```json
"test": "nyc riteway test/**/*-rt.js | tap-nirvana"
```

Once configured, run your tests with `npm test`.

## Basic Usage

At its core, Riteway forces every test to answer five key questions: What is the unit under test? What should it do? What was the actual output? What was the expected output? How do you reproduce the failure?

### 1. Write a test for a plain function

Create a test file (e.g. `sum-test.js`) and import `describe` and `Try`:

```js
import { describe, Try } from 'riteway/index.js';

// the function under test
const sum = (...args) => {
  if (args.some(v => Number.isNaN(v))) throw new TypeError('NaN');
  return args.reduce((acc, n) => acc + n, 0);
};

describe('sum()', async assert => {
  const should = 'return the correct sum';

  // assertion 1: no arguments
  assert({
    given: 'no arguments',
    should: 'return 0',
    actual: sum(),
    expected: 0
  });

  // assertion 2: including zero
  assert({
    given: 'zero',
    should,
    actual: sum(2, 0),
    expected: 2
  });

  // assertion 3: negative numbers
  assert({
    given: 'negative numbers',
    should,
    actual: sum(1, -4),
    expected: -3
  });

  // assertion 4: error handling (capture the error with Try)
  assert({
    given: 'NaN',
    should: 'throw',
    actual: Try(sum, 1, NaN),
    expected: new TypeError('NaN')
  });
});
```
### 2. Write a React component test

Riteway provides the `riteway/render-component` module to simplify testing pure components.

```js
import render from 'riteway/render-component';
import { describe } from 'riteway/index.js';

// a simple JSX component under test
describe('renderComponent', async assert => {
  // render the component
  const $ = render(<div className="foo">testing</div>);

  assert({
    given: 'some jsx',
    should: 'render markup',
    actual: $('.foo').html().trim(),
    expected: 'testing'
  });
});
```

> **Note**: Before running JSX tests, make sure your build tool (such as Vite) is configured to handle JSX syntax in `.jsx` or `.js` files.

### 3. (Advanced) Evaluate AI prompts with `riteway ai`

Riteway's distinctive `riteway ai` CLI evaluates how well an AI agent executes a prompt.

**Prerequisites:**
Install and log in to the relevant AI agent tool (Claude Code, Cursor, etc.):
```shell
# example: authenticate Claude
claude setup-token
```

**Write an eval file (.sudo):**
Create a `my-feature-test.sudo` file and describe the requirements in SudoLang syntax:

```text
# my-feature-test.sudo

import 'path/to/spec.mdc'

userPrompt = """
Implement the sum function as described.
"""

- Given the spec, should name the function sum
- Given the spec, should accept two parameters named a and b
- Given the spec, should return the correct sum of the two parameters
```

**Run the eval:**
```shell
riteway ai path/to/my-feature-test.sudo
```

By default, this runs 4 times, requires a 75% pass rate, and uses the `claude` agent. You can customize via flags:

```shell
# specify runs, pass threshold, and agent
riteway ai path/to/test.sudo --runs 10 --threshold 80 --agent cursor

# save raw responses for debugging
riteway ai path/to/test.sudo --save-responses
```

Results are written in TAP format to an `ai-evals/` folder in the project root.

---

A startup team is iterating rapidly on an e-commerce order-processing module with AI assistants like Cursor and Claude Code, and urgently needs a test suite that humans can understand and AI can execute precisely.

### Without riteway
- **AI misreads intent**: Traditional assertion code is verbose and opaque, so the AI often misinterprets test intent, producing code that passes the tests yet misses business requirements.
- **Expensive debugging**: When tests fail, there is no clear "actual vs. expected" comparison, so developers spend a lot of time reproducing and locating problems.
- **Wasted context**: Boilerplate eats into the precious token context window, limiting how deeply the AI can analyze overall project logic.
- **Hard to evaluate prompts**: With no standardized tooling to quantify prompt effectiveness, optimization is pure guesswork.

### With riteway
- **Precise requirement alignment**: riteway
enforces a five-question structure (e.g. "what should it do?", "how do you reproduce the failure?") that gives the AI a clear grasp of the business rules, significantly raising first-pass success rates for generated code.
- **Failures at a glance**: Test reports present explicit diffs in natural language, so developers can often fix logic bugs without even reading the code.
- **Efficient token use**: The minimal syntax sharply reduces line count, leaving the AI more room to analyze complex business logic.
- **Quantified prompt quality**: With the `riteway ai` CLI, the team can batch-evaluate prompts like a unit test suite and get concrete pass-rate reports, enabling data-driven prompt optimization.

By turning tests into explicit contracts that both humans and AI can read, riteway removes the core AIDD pain points of ambiguous communication and black-box evaluation.","https://oss.gittoolsai.com/images/paralleldrive_riteway_566c852d.png","paralleldrive","Parallel Drive","https://oss.gittoolsai.com/avatars/paralleldrive_b4282b27.png","Software engineering lab redefining how products are built in the AI age.",null,"support@paralleldrive.com","https://paralleldrive.com","https://github.com/paralleldrive",[82],{"name":83,"color":84,"percentage":85},"JavaScript","#f1e05a",100,1172,38,"2026-04-15T01:59:07","MIT","Not stated","None required",{"notes":93,"python":94,"dependencies":95},"The tool runs on Node.js (v16+) and requires native ES modules (set \"type\": \"module\" in package.json). The AI eval feature (`riteway ai`) depends on external AI agent CLIs (such as Claude Code, Cursor, OpenCode), authenticated via OAuth with no API key required. Testing React/JSX components additionally requires a build tool (such as Vite or Next.js) for transpiling.","Not applicable",[96,97,98,99],"Node.js>=16","vitest (optional, for JSX tests)","nyc (optional, for coverage)","tap-nirvana (optional, for report formatting)",[14,13,101],"Other","2026-03-27T02:49:30.150509","2026-04-18T14:15:10.668177",[105,110,115,120,125],{"id":106,"question_zh":107,"answer_zh":108,"source_url":109},39902,"How do I use RITEway in an EcmaScript Modules (ESM) environment?","This was fixed in riteway@v7.0.0. Upgrade your RITEway dependency to v7.0.0 or later and it works directly in ESM projects, with no Babel or other extra tooling required.","https://github.com/paralleldrive/riteway/issues/322",{"id":111,"question_zh":112,"answer_zh":113,"source_url":114},39903,"How do I avoid cascading failures in RITEway tests (one failing test preventing later tests from running)?","Wrap your `assert` calls in the `test` blocks of a test framework (such as Vitest or Jest) for isolation. The recommended structure: an outer `describe` for the component, inner `test` blocks for specific behaviors, with `assert` executed inside each `test` callback. If `actual` or `expected` might throw, wrap it in a function before passing it. Example:
```js
describe('Counter', async () => {
  test('default starting count', () => {
  
  const initial = 3;
    const counter = createCounter(initial);
    assert({
      given: "an initial value",
      should: "start counting from the initial value",
      actual: counter().valueOf(),
      expected: initial
    });
  });
});
```","https://github.com/paralleldrive/riteway/issues/371",{"id":116,"question_zh":117,"answer_zh":118,"source_url":119},39904,"RITEway doesn't work out of the box with Create-React-App (CRA) and throws a SyntaxError. How do I fix it?","This happens because CRA's default configuration conflicts with the way RITEway executes files directly, due to Babel compilation. The fix is to modify the test script in `package.json` to explicitly register Babel and the polyfill. Steps:
1. Install the dependency: `yarn add tap-color -D`
2. Update the script command in `package.json`:
```json
"rite": "riteway -r @babel/register -r @babel/polyfill 'src/**/*.test.js' | tap-color"
```
Note: tests may run somewhat slower because the JSX dependencies must be compiled.","https://github.com/paralleldrive/riteway/issues/71",{"id":121,"question_zh":122,"answer_zh":123,"source_url":124},39905,"How do I use RITEway for testing in a React Native project?","Since React Native doesn't render HTML, you can't use the default renderer directly. The community provides a wrapper package called `riteway-jest` for this scenario, or you can write custom render logic. The core idea is to use `react-test-renderer` to create component instances. Custom renderer snippet:
```js
import TestRenderer from 'react-test-renderer';
const render = component => TestRenderer.create(component).root;
// then query for assertions via instance.findByProps({ testID }), etc.
```","https://github.com/paralleldrive/riteway/issues/48",{"id":126,"question_zh":127,"answer_zh":128,"source_url":129},39906,"How do I run RITEway tests in a production or staging environment without npm?","Generally, run test files directly with `node`, e.g. `node lib/somemodule/sometestfile.test.js`. If running `./node_modules/.bin/riteway` produces a \"Segmentation fault (core dumped)\" error, it's usually a problem with the Node.js environment itself rather than your JS code (JS runs in a sandbox). Check Node version compatibility, or debug the specific crashing command directly in the target environment.","https://github.com/paralleldrive/riteway/issues/109",[131],{"id":132,"version":133,"summary_zh":134,"released_at":135},323210,"v6.1.1","Minor fixes.

* Updated TypeScript definitions to fix errors
* Updated the docs
* Lots of dependency updates","2019-11-05T23:30:30"]
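As a closing illustration, the `Try` helper documented in the API section (`Try = (fn, ...args) => Error | Any`) can be approximated in plain JavaScript. This is a sketch of the documented synchronous behavior only, not the library's source; the real `Try` also captures rejected promises:

```javascript
// Sketch of Try's documented contract: run fn with the given args;
// return the thrown error if it throws, else the return value.
const Try = (fn, ...args) => {
  try {
    return fn(...args);
  } catch (err) {
    return err;
  }
};

const sum = (...args) => {
  if (args.some(v => Number.isNaN(v))) throw new TypeError('NaN');
  return args.reduce((acc, n) => acc + n, 0);
};

console.log(Try(sum, 2, 3)); // 5
console.log(Try(sum, 1, NaN) instanceof TypeError); // true
```

Returning the error instead of letting it propagate is what lets `assert` compare it with `expected: new TypeError('NaN')` via a deep equality check, keeping error cases in the same given/should shape as every other assertion.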