[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-numman-ali--opencode-openai-codex-auth":3,"tool-numman-ali--opencode-openai-codex-auth":64},[4,17,25,39,48,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,14,15],"开发框架","Agent","语言模型","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":10,"last_commit_at":23,"category_tags":24,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,15],{"id":26,"name":27,"github_repo":28,"description_zh":29,"stars":30,"difficulty_score":10,"last_commit_at":31,"category_tags":32,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 
道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[33,34,35,36,14,37,15,13,38],"图像","数据工具","视频","插件","其他","音频",{"id":40,"name":41,"github_repo":42,"description_zh":43,"stars":44,"difficulty_score":45,"last_commit_at":46,"category_tags":47,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,3,"2026-04-04T04:44:48",[14,33,13,15,37],{"id":49,"name":50,"github_repo":51,"description_zh":52,"stars":53,"difficulty_score":45,"last_commit_at":54,"category_tags":55,"status":16},519,"PaddleOCR","PaddlePaddle\u002FPaddleOCR","PaddleOCR 是一款基于百度飞桨框架开发的高性能开源光学字符识别工具包。它的核心能力是将图片、PDF 等文档中的文字提取出来，转换成计算机可读取的结构化数据，让机器真正“看懂”图文内容。\n\n面对海量纸质或电子文档，PaddleOCR 解决了人工录入效率低、数字化成本高的问题。尤其在人工智能领域，它扮演着连接图像与大型语言模型（LLM）的桥梁角色，能将视觉信息直接转化为文本输入，助力智能问答、文档分析等应用场景落地。\n\nPaddleOCR 适合开发者、算法研究人员以及有文档自动化需求的普通用户。其技术优势十分明显：不仅支持全球 100 多种语言的识别，还能在 Windows、Linux、macOS 等多个系统上运行，并灵活适配 CPU、GPU、NPU 等各类硬件。作为一个轻量级且社区活跃的开源项目，PaddleOCR 既能满足快速集成的需求，也能支撑前沿的视觉语言研究，是处理文字识别任务的理想选择。",74939,"2026-04-05T23:16:38",[15,33,13,37],{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":45,"last_commit_at":62,"category_tags":63,"status":16},2181,"OpenHands","OpenHands\u002FOpenHands","OpenHands 是一个专注于 AI 
驱动开发的开源平台，旨在让智能体（Agent）像人类开发者一样理解、编写和调试代码。它解决了传统编程中重复性劳动多、环境配置复杂以及人机协作效率低等痛点，通过自动化流程显著提升开发速度。\n\n无论是希望提升编码效率的软件工程师、探索智能体技术的研究人员，还是需要快速原型验证的技术团队，都能从中受益。OpenHands 提供了灵活多样的使用方式：既可以通过命令行（CLI）或本地图形界面在个人电脑上轻松上手，体验类似 Devin 的流畅交互；也能利用其强大的 Python SDK 自定义智能体逻辑，甚至在云端大规模部署上千个智能体并行工作。\n\n其核心技术亮点在于模块化的软件智能体 SDK，这不仅构成了平台的引擎，还支持高度可组合的开发模式。此外，OpenHands 在 SWE-bench 基准测试中取得了 77.6% 的优异成绩，证明了其解决真实世界软件工程问题的能力。平台还具备完善的企业级功能，支持与 Slack、Jira 等工具集成，并提供细粒度的权限管理，适合从个人开发者到大型企业的各类用户场景。",70626,"2026-04-05T22:51:36",[15,14,13,36],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":81,"owner_twitter":82,"owner_website":83,"owner_url":84,"languages":85,"stars":102,"forks":103,"last_commit_at":104,"license":105,"difficulty_score":10,"env_os":106,"env_gpu":107,"env_ram":107,"env_deps":108,"category_tags":111,"github_topics":83,"view_count":10,"oss_zip_url":83,"oss_zip_packed_at":83,"status":16,"created_at":112,"updated_at":113,"faqs":114,"releases":144},1276,"numman-ali\u002Fopencode-openai-codex-auth","opencode-openai-codex-auth","OAuth authentication plugin for personal coding assistance with ChatGPT Plus\u002FPro subscriptions - uses OpenAI's official authentication method","opencode-openai-codex-auth 是一个用于个人开发的 OAuth 认证插件，旨在简化使用 ChatGPT Plus\u002FPro 订阅访问 OpenAI 系列模型（如 GPT-5.2、GPT-5.1 Codex 等）的过程。它通过官方认证方式实现与 ChatGPT 的集成，让用户能够轻松调用多种模型进行代码生成、任务执行等操作。\n\n这个工具解决了开发者在使用 OpenAI 模型时常见的配置复杂、权限管理繁琐的问题，提供了一键安装和统一配置的体验，支持多种模型变体和参数设置，极大降低了使用门槛。无论是编写脚本、调试代码还是自动化任务，都可以快速上手。\n\n适合有一定编程基础的开发者或研究人员使用，尤其适合需要频繁调用 OpenAI 模型进行代码生成或数据处理的用户。其独特的技术亮点包括对多模型的支持、OAuth 官方认证流程、自动刷新令牌机制以及兼容新旧版本 OpenCode 的配置方式，使得使用更加灵活可靠。整体设计注重简洁与效率，是追求开发效率的用户的理想选择。","![Image 1: opencode-openai-codex-auth](assets\u002Freadme-hero.svg)\n  \n  \n**Curated by [Numman 
Ali](https:\u002F\u002Fx.com\u002Fnummanali)**\n[![Twitter Follow](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Ffollow\u002Fnummanali?style=social)](https:\u002F\u002Fx.com\u002Fnummanali)\n[![npm version](https:\u002F\u002Fimg.shields.io\u002Fnpm\u002Fv\u002Fopencode-openai-codex-auth.svg)](https:\u002F\u002Fwww.npmjs.com\u002Fpackage\u002Fopencode-openai-codex-auth)\n[![Tests](https:\u002F\u002Fgithub.com\u002Fnumman-ali\u002Fopencode-openai-codex-auth\u002Factions\u002Fworkflows\u002Fci.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fnumman-ali\u002Fopencode-openai-codex-auth\u002Factions)\n[![npm downloads](https:\u002F\u002Fimg.shields.io\u002Fnpm\u002Fdm\u002Fopencode-openai-codex-auth.svg)](https:\u002F\u002Fwww.npmjs.com\u002Fpackage\u002Fopencode-openai-codex-auth)\n**One install. Every Codex model.**\n[Install](#-quick-start) · [Models](#-models) · [Configuration](#-configuration) · [Docs](#-docs)\n\n---\n## 💡 Philosophy\n> **\"One config. Every model.\"**\nOpenCode should feel effortless. This plugin keeps the setup minimal while giving you full GPT‑5.x + Codex access via ChatGPT OAuth.\n```\n┌─────────────────────────────────────────────────────────┐\n│                                                         │\n│  ChatGPT OAuth → Codex backend → OpenCode               │\n│  One command install, full model presets, done.         
│\n│                                                         │\n└─────────────────────────────────────────────────────────┘\n```\n---\n## 🚀 Quick Start\n```bash\nnpx -y opencode-openai-codex-auth@latest\n```\nThen:\n```bash\nopencode auth login\nopencode run \"write hello world to test.txt\" --model=openai\u002Fgpt-5.2 --variant=medium\n```\nLegacy OpenCode (v1.0.209 and below):\n```bash\nnpx -y opencode-openai-codex-auth@latest --legacy\nopencode run \"write hello world to test.txt\" --model=openai\u002Fgpt-5.2-medium\n```\nUninstall:\n```bash\nnpx -y opencode-openai-codex-auth@latest --uninstall\nnpx -y opencode-openai-codex-auth@latest --uninstall --all\n```\n---\n## 📦 Models\n- **gpt-5.2** (none\u002Flow\u002Fmedium\u002Fhigh\u002Fxhigh)\n- **gpt-5.2-codex** (low\u002Fmedium\u002Fhigh\u002Fxhigh)\n- **gpt-5.1-codex-max** (low\u002Fmedium\u002Fhigh\u002Fxhigh)\n- **gpt-5.1-codex** (low\u002Fmedium\u002Fhigh)\n- **gpt-5.1-codex-mini** (medium\u002Fhigh)\n- **gpt-5.1** (none\u002Flow\u002Fmedium\u002Fhigh)\n---\n## 🧩 Configuration\n- Modern (OpenCode v1.0.210+): `config\u002Fopencode-modern.json`\n- Legacy (OpenCode v1.0.209 and below): `config\u002Fopencode-legacy.json`\n\nMinimal configs are not supported for GPT‑5.x; use the full configs above.\n---\n## ✅ Features\n- ChatGPT Plus\u002FPro OAuth authentication (official flow)\n- 22 model presets across GPT‑5.2 \u002F GPT‑5.2 Codex \u002F GPT‑5.1 families\n- Variant system support (v1.0.210+) + legacy presets\n- Multimodal input enabled for all models\n- Usage‑aware errors + automatic token refresh\n---\n## 📚 Docs\n- Getting Started: `docs\u002Fgetting-started.md`\n- Configuration: `docs\u002Fconfiguration.md`\n- Troubleshooting: `docs\u002Ftroubleshooting.md`\n- Architecture: `docs\u002Fdevelopment\u002FARCHITECTURE.md`\n---\n## ⚠️ Usage Notice\nThis plugin is for **personal development use** with your own ChatGPT Plus\u002FPro subscription.\nFor production or multi‑user applications, use the OpenAI Platform 
API.\n\n**Built for developers who value simplicity.**\n","![图片 1：opencode-openai-codex-auth](assets\u002Freadme-hero.svg)\n  \n  \n**由 [Numman Ali](https:\u002F\u002Fx.com\u002Fnummanali) 精选**\n[![Twitter 关注](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Ffollow\u002Fnummanali?style=social)](https:\u002F\u002Fx.com\u002Fnummanali)\n[![npm 版本](https:\u002F\u002Fimg.shields.io\u002Fnpm\u002Fv\u002Fopencode-openai-codex-auth.svg)](https:\u002F\u002Fwww.npmjs.com\u002Fpackage\u002Fopencode-openai-codex-auth)\n[![测试](https:\u002F\u002Fgithub.com\u002Fnumman-ali\u002Fopencode-openai-codex-auth\u002Factions\u002Fworkflows\u002Fci.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fnumman-ali\u002Fopencode-openai-codex-auth\u002Factions)\n[![npm 下载量](https:\u002F\u002Fimg.shields.io\u002Fnpm\u002Fdm\u002Fopencode-openai-codex-auth.svg)](https:\u002F\u002Fwww.npmjs.com\u002Fpackage\u002Fopencode-openai-codex-auth)\n**一次安装，畅享所有 Codex 模型。**\n[安装](#-快速入门) · [模型](#-模型) · [配置](#-配置) · [文档](#-文档)\n\n---\n## 💡 理念\n> **“一次配置，通吃所有模型。”**\nOpenCode 应当让用户感到轻松自如。这款插件将设置保持在最低限度，同时通过 ChatGPT OAuth 让您全面访问 GPT‑5.x + Codex。\n```\n┌─────────────────────────────────────────────────────────┐\n│                                                         │\n│  ChatGPT OAuth → Codex 后端 → OpenCode               │\n│  一键安装，完整模型预设，即刻完成。         │\n│                                                         │\n└─────────────────────────────────────────────────────────┘\n```\n---\n## 🚀 快速入门\n```bash\nnpx -y opencode-openai-codex-auth@latest\n```\n然后：\n```bash\nopencode auth login\nopencode run \"write hello world to test.txt\" --model=openai\u002Fgpt-5.2 --variant=medium\n```\n旧版 OpenCode（v1.0.209 及以下）：\n```bash\nnpx -y opencode-openai-codex-auth@latest --legacy\nopencode run \"write hello world to test.txt\" --model=openai\u002Fgpt-5.2-medium\n```\n卸载：\n```bash\nnpx -y opencode-openai-codex-auth@latest --uninstall\nnpx -y opencode-openai-codex-auth@latest --uninstall --all\n```\n---\n## 📦 模型\n- 
**gpt-5.2**（none\u002Flow\u002Fmedium\u002Fhigh\u002Fxhigh）\n- **gpt-5.2-codex**（low\u002Fmedium\u002Fhigh\u002Fxhigh）\n- **gpt-5.1-codex-max**（low\u002Fmedium\u002Fhigh\u002Fxhigh）\n- **gpt-5.1-codex**（low\u002Fmedium\u002Fhigh）\n- **gpt-5.1-codex-mini**（medium\u002Fhigh）\n- **gpt-5.1**（none\u002Flow\u002Fmedium\u002Fhigh）\n---\n## 🧩 配置\n- 现代版（OpenCode v1.0.210+）：`config\u002Fopencode-modern.json`\n- 旧版（OpenCode v1.0.209 及以下）：`config\u002Fopencode-legacy.json`\n\nGPT‑5.x 不支持极简配置；请使用上述完整配置。\n---\n## ✅ 功能\n- ChatGPT Plus\u002FPro OAuth 认证（官方流程）\n- 跨 GPT‑5.2 \u002F GPT‑5.2 Codex \u002F GPT‑5.1 系列的 22 种模型预设\n- 支持变体系统（v1.0.210+）及旧版预设\n- 所有模型均支持多模态输入\n- 使用感知型错误提示 + 自动令牌刷新\n---\n## 📚 文档\n- 入门指南：`docs\u002Fgetting-started.md`\n- 配置说明：`docs\u002Fconfiguration.md`\n- 故障排除：`docs\u002Ftroubleshooting.md`\n- 架构说明：`docs\u002Fdevelopment\u002FARCHITECTURE.md`\n---\n## ⚠️ 使用须知\n此插件仅供您个人开发使用，并需搭配自己的 ChatGPT Plus\u002FPro 订阅。\n如用于生产环境或多人协作应用，请使用 OpenAI 平台 API。\n\n**专为注重简洁性的开发者打造。**","# opencode-openai-codex-auth 快速上手指南\n\n## 环境准备\n\n- **系统要求**：支持 macOS、Linux 或 Windows（通过 WSL）。\n- **前置依赖**：\n  - Node.js (推荐 v16+)\n  - OpenCode CLI 工具（安装后自动配置）\n\n> 如果你在国内，建议使用国内镜像源加速 npm 包下载，例如使用 [taobao npm 镜像](https:\u002F\u002Fnpm.taobao.org\u002F)。\n\n## 安装步骤\n\n### 使用 npx 安装最新版本\n\n```bash\nnpx -y opencode-openai-codex-auth@latest\n```\n\n### 安装旧版本（适用于 OpenCode v1.0.209 及以下）\n\n```bash\nnpx -y opencode-openai-codex-auth@latest --legacy\n```\n\n### 卸载插件\n\n```bash\nnpx -y opencode-openai-codex-auth@latest --uninstall\n```\n\n如需卸载所有相关配置：\n\n```bash\nnpx -y opencode-openai-codex-auth@latest --uninstall --all\n```\n\n## 基本使用\n\n### 1. 登录 ChatGPT 账号\n\n```bash\nopencode auth login\n```\n\n> 此命令会引导你完成 ChatGPT Plus\u002FPro 的 OAuth 认证流程。\n\n### 2. 运行一个简单的任务示例\n\n```bash\nopencode run \"write hello world to test.txt\" --model=openai\u002Fgpt-5.2 --variant=medium\n```\n\n> 上述命令将使用 GPT-5.2 模型的 medium 变体，执行“将 'hello world' 写入 test.txt”任务。\n\n### 3. 
使用旧版 OpenCode 的模型格式（仅限 v1.0.209 及以下）\n\n```bash\nopencode run \"write hello world to test.txt\" --model=openai\u002Fgpt-5.2-medium\n```\n\n---\n\n> 📌 注意：该插件仅供个人开发使用，如需用于生产环境或多人协作，请使用 OpenAI 官方平台 API。","一个前端开发工程师正在使用 OpenCode 工具链进行自动化代码生成和调试，但需要频繁调用 ChatGPT Plus\u002FPro 的 Codex 模型来完成复杂的编程任务。\n\n### 没有 opencode-openai-codex-auth 时\n- 需要手动配置多个 OAuth 认证流程，每次调用不同模型时都要重新登录，效率低下  \n- 不同版本的 OpenCode 配置文件格式不一致，导致模型选择和参数设置复杂且容易出错  \n- 缺乏统一的模型管理方式，无法快速切换 GPT-5.2、GPT-5.1 等多种 Codex 模型及其变体  \n- 无法自动刷新令牌或处理认证失败的情况，常因认证过期中断工作流  \n- 多次尝试后仍难以稳定调用高精度模型（如 gpt-5.2-codex-xhigh），影响开发进度  \n\n### 使用 opencode-openai-codex-auth 后\n- 通过一次 OAuth 登录即可访问所有支持的 Codex 模型，无需重复验证身份  \n- 提供统一的配置文件结构，无论是现代版还是旧版 OpenCode 都能轻松适配  \n- 支持一键切换多种模型及变体（如 low、medium、xhigh），满足不同场景需求  \n- 内置自动刷新令牌机制，避免因认证失效导致的中断，提升运行稳定性  \n- 可直接调用高精度模型并获得可靠结果，显著提高代码生成与调试效率  \n\n核心价值：opencode-openai-codex-auth 通过简化认证流程和统一模型管理，让开发者更专注于代码本身，而非繁琐的配置与权限问题。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fnumman-ali_opencode-openai-codex-auth_841502b6.png","numman-ali","Numman Ali","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fnumman-ali_df3cf2a5.png","Applied AI, Enterprise Agentic Coding & Product Strategy | Creator of OpenSkills and more | CTO @ https:\u002F\u002Fwww.retailbook.com (UK FinTech)","RetailBook","UK (Remote)","numman.ali@gmail.com","nummanali",null,"https:\u002F\u002Fgithub.com\u002Fnumman-ali",[86,90,94,98],{"name":87,"color":88,"percentage":89},"TypeScript","#3178c6",77.8,{"name":91,"color":92,"percentage":93},"HTML","#e34c26",9.9,{"name":95,"color":96,"percentage":97},"Shell","#89e051",6.8,{"name":99,"color":100,"percentage":101},"JavaScript","#f1e05a",5.5,1912,127,"2026-04-05T21:12:20","NOASSERTION","Linux, macOS, Windows","未说明",{"notes":109,"python":107,"dependencies":110},"该工具需要通过 ChatGPT OAuth 进行身份验证，适用于个人开发用途，不支持生产环境或多人应用。建议使用 OpenAI 平台 API 
用于生产场景。",[],[15,36],"2026-03-27T02:49:30.150509","2026-04-06T08:09:13.205709",[115,120,125,130,135,140],{"id":116,"question_zh":117,"answer_zh":118,"source_url":119},5823,"如何解决模型调用失败的问题？","如果遇到所有模型调用都失败的情况，请确保您已正确登录并检查网络连接。若问题仍然存在，可以尝试更新插件到最新版本，或参考维护者的建议进行排查。","https:\u002F\u002Fgithub.com\u002Fnumman-ali\u002Fopencode-openai-codex-auth\u002Fissues\u002F3",{"id":121,"question_zh":122,"answer_zh":123,"source_url":124},5824,"如何解决 'No tool call found for function call output with call_id' 错误？","此错误通常由插件版本过旧引起。请升级到 v4.3.0 或更高版本以修复该问题。可以通过运行命令 `npx -y opencode-openai-codex-auth@latest` 来更新插件。","https:\u002F\u002Fgithub.com\u002Fnumman-ali\u002Fopencode-openai-codex-auth\u002Fissues\u002F48",{"id":126,"question_zh":127,"answer_zh":128,"source_url":129},5825,"如何解决 GPT 模型在使用 todo\u002Ftool 调用时停止生成内容的问题？","此问题已在 v4.3.1 版本中修复。请将 `reasoning` 参数设置为 `low`（对于 GPT-5.x 模型），以防止后端拒绝请求和流式处理失败。更新插件至最新版本即可解决。","https:\u002F\u002Fgithub.com\u002Fnumman-ali\u002Fopencode-openai-codex-auth\u002Fissues\u002F101",{"id":131,"question_zh":132,"answer_zh":133,"source_url":134},5826,"如何解决 AGENTS.md 文件被删除的问题？","此问题是由于插件的请求转换器导致的。请升级到 v4.3.0 或更高版本以修复该问题。如果需要临时解决方案，可以手动修改插件代码中的 `request-transformer.ts` 文件，确保 `AGENTS.md` 内容不会被移除。","https:\u002F\u002Fgithub.com\u002Fnumman-ali\u002Fopencode-openai-codex-auth\u002Fissues\u002F68",{"id":136,"question_zh":137,"answer_zh":138,"source_url":139},5827,"如何解决更新插件后出现的 'No tool call found for function call output' 错误？","此问题可能与插件版本不兼容有关。请升级到 v4.3.0 或更高版本，并确保配置文件中没有冲突的设置。如果问题依旧，可以尝试清除缓存并重新安装插件。","https:\u002F\u002Fgithub.com\u002Fnumman-ali\u002Fopencode-openai-codex-auth\u002Fissues\u002F24",{"id":141,"question_zh":142,"answer_zh":143,"source_url":134},5828,"如何正确安装自定义插件版本？","要安装自定义插件版本，请先构建插件并将其放入 OpenCode 插件目录中。然后，在配置文件中指定插件名称和版本号，例如：`\"@connorads\u002Fopencode-openai-codex-auth@4.2.0\"`。如果遇到安装错误，可以运行以下命令清理缓存：\n```sh\nrm -rf ~\u002F.cache\u002Fopencode\u002Fnode_modules ~\u002F.cache\u002Fopencode\u002Fbun.lock 
~\u002F.cache\u002Fopencode\u002Fpackage.json\n```",[145,150,155,160,165,170,175,180,185,190,195,200,205,210,215,220,225,230,235,240],{"id":146,"version":147,"summary_zh":148,"released_at":149},115188,"v3.0.0","## Highlights\n- Host-provided `prompt_cache_key` is now the single source of truth for Codex caching; we only forward headers and body fields when OpenCode supplies them.\n- Usage-limit errors mirror the Codex CLI summary (5-hour + weekly windows) so OpenCode users know exactly when quota resets.\n- Documentation refreshed: canonical config snippets, auto-compaction caveats, and CHANGELOG now tracks every release.\n- Security clean-up: pinned `hono@4.10.4` and `vite@7.1.12` to address upstream advisories.\n\n## Full Changelog\n- Detailed breakdown: https:\u002F\u002Fgithub.com\u002Fnumman-ali\u002Fopencode-openai-codex-auth\u002Fblob\u002Fmain\u002FCHANGELOG.md#300---2025-11-04\n- Compare diff: https:\u002F\u002Fgithub.com\u002Fnumman-ali\u002Fopencode-openai-codex-auth\u002Fcompare\u002Fv2.1.2...v3.0.0\n","2025-11-04T23:21:16",{"id":151,"version":152,"summary_zh":153,"released_at":154},115189,"v2.1.2","# v2.1.2: Compliance Updates + Critical Bug Fixes\n\nThis release includes comprehensive OpenAI ToS compliance updates and fixes 4 critical bugs that prevented per-model options and multi-turn conversations from working correctly.\n\n---\n\n## 🔒 Compliance & Legal Updates\n\n### Terms of Service & Usage Guidelines\n\nThis release adds comprehensive compliance documentation to ensure users understand proper usage:\n\n- ⚠️ **Terms of Service & Usage Notice** - Clear guidance on personal use only\n- 📋 **Rate Limits & Responsible Use** - Best practices for API usage\n- ❓ **Frequently Asked Questions** - 6 common TOS compliance questions\n- 📄 **LICENSE Update** - MIT with Usage Disclaimer for personal development\n- 💼 **Compliance Header** - Added to index.ts documenting intended use\n\n**Key Points:**\n- ✅ For **personal development only** with your own ChatGPT 
Plus\u002FPro subscription\n- ✅ Uses **OpenAI's official OAuth authentication** (same as Codex CLI)\n- ❌ **NOT for**: Commercial resale, multi-user services, or API resale\n- ❌ **NOT a \"free API alternative\"** - uses your existing subscription\n- 📋 Users are responsible for compliance with [OpenAI's Terms of Use](https:\u002F\u002Fopenai.com\u002Fpolicies\u002Fterms-of-use\u002F)\n\n### New Documentation Files\n\n**Compliance & Security:**\n- `CONTRIBUTING.md` - Contribution guidelines with compliance requirements\n- `SECURITY.md` - Security policy, vulnerability reporting, best practices\n- `docs\u002Fprivacy.md` - Comprehensive privacy & data handling documentation\n\n**User Guides:**\n- `docs\u002Fgetting-started.md` - Complete installation guide with compliance notice\n- `docs\u002Fconfiguration.md` - Advanced configuration options\n- `docs\u002Ftroubleshooting.md` - Debug techniques and compliance troubleshooting\n\n**Developer Guides:**\n- `docs\u002Fdevelopment\u002FARCHITECTURE.md` - Technical architecture deep dive\n- `docs\u002Fdevelopment\u002FCONFIG_FLOW.md` - Config system internals\n- `docs\u002Fdevelopment\u002FCONFIG_FIELDS.md` - Field usage guide\n- `docs\u002Fdevelopment\u002FTESTING.md` - Testing guide and verification\n\n**GitHub Templates:**\n- `.github\u002FISSUE_TEMPLATE\u002Fbug_report.md` - Bug reports with compliance checklist\n- `.github\u002FISSUE_TEMPLATE\u002Ffeature_request.md` - Feature requests with compliance confirmation\n- `.github\u002FISSUE_TEMPLATE\u002Fconfig.yml` - Links to OpenAI support and TOS\n\n---\n\n## 🐛 Critical Bug Fixes\n\n### Bug #1: Per-Model Options Ignored (Config Lookup Mismatch)\n**Severity:** 🔴 **HIGH**\n\n**Problem:**\nUsers configured different `reasoningEffort` for each model variant (low\u002Fmedium\u002Fhigh), but all variants behaved identically. 
The plugin was normalizing model names before config lookup, causing it to miss per-model configurations.\n\n**Fix:** `lib\u002Frequest\u002Frequest-transformer.ts:277`\n- Use **original model name** for config lookup\n- Normalize **only for API request**\n- Per-model options now correctly applied ✅\n\n**Impact:**\n- ✅ Different reasoning levels properly applied per variant\n- ✅ Users can customize each model independently\n\n---\n\n### Bug #2: Multi-Turn Conversations Fail (AI SDK Compatibility)\n**Severity:** 🔴 **CRITICAL**\n\n**Problem:**\nMulti-turn conversations failed with: `AI_APICallError: Item with id 'msg_abc123' not found. Items are not persisted when store is set to false.`\n\n**Root Causes:**\n1. AI SDK sends `item_reference` (not in Codex API spec)\n2. IDs weren't completely stripped for stateless mode\n3. Only `rs_*` IDs were filtered, but `msg_*`, `assistant_*` leaked through\n\n**Research:**\n- Tested `store: true` → API returned error (ChatGPT backend requires `store: false`)\n- Codex CLI **never** sends ANY IDs in stateless mode\n- Full message history needed for LLM context\n\n**Fix:** `lib\u002Frequest\u002Frequest-transformer.ts:114-135`\n- Filter out AI SDK `item_reference` items\n- Strip **ALL** IDs from remaining items (not just `rs_*`)\n- Preserve full message history for LLM context\n\n**Impact:**\n- ✅ No more \"item not found\" errors\n- ✅ Multi-turn conversations work flawlessly\n- ✅ Full context preserved via `reasoning.encrypted_content`\n\n---\n\n### Bug #3: Case-Sensitive Normalization\n**Severity:** 🟡 MEDIUM\n\n**Problem:**\nOld verbose config names like `\"GPT 5 Codex Low (ChatGPT Subscription)\"` didn't normalize correctly because pattern matching was case-sensitive.\n\n**Fix:** `lib\u002Frequest\u002Frequest-transformer.ts:22-27`\n- Added `toLowerCase()` for case-insensitive matching\n- Handles uppercase\u002Fmixed case user input\n\n**Impact:**\n- ✅ Backwards compatible with old verbose config names\n- ✅ Old configs work 
seamlessly\n\n---\n\n### Bug #4: GitHub API Rate Limiting\n**Severity:** 🟡 MEDIUM\n\n**Problem:**\nPlugin checked GitHub on **every request** to fetch latest Codex instructions, exhausting the 60 req\u002Fhour rate limit during testing.\n\n**Fix:** `lib\u002Fprompts\u002Fcodex.ts:50-53`, `lib\u002Fprompts\u002Fopencode-codex.ts:47-50`\n- Added 15-minute cache TTL check\n- Only fetches if cache is stale\n\n**Impact:**\n- ✅ Prevents rate limit exhaustion\n- ✅ Faster plugin initialization\n\n---\n\n## ✨ Enhancements\n\n### Debug Logging System\n\nNew environment variable for troubleshooting:\n```bash\nDEBUG_CODEX_PLUGIN=","2025-10-12T20:31:04",{"id":156,"version":157,"summary_zh":158,"released_at":159},115190,"v2.1.1","## 🐛 Bug Fix\n\n### Fixed cache clear command causing directory issues\n\nUpdated the plugin cache clearing command in README to run in a subshell, preventing directory navigation issues when users execute it from within the cache folder being deleted.\n\n**Before:**\n```bash\nsed -i.bak '\u002F\"opencode-openai-codex-auth\"\u002Fd' ~\u002F.cache\u002Fopencode\u002Fpackage.json && rm -rf ~\u002F.cache\u002Fopencode\u002Fnode_modules\u002Fopencode-openai-codex-auth\n```\n\n**After:**\n```bash\n(cd ~ && sed -i.bak '\u002F\"opencode-openai-codex-auth\"\u002Fd' .cache\u002Fopencode\u002Fpackage.json && rm -rf .cache\u002Fopencode\u002Fnode_modules\u002Fopencode-openai-codex-auth)\n```\n\nThe command now:\n- Runs in a subshell to avoid affecting the user's current directory\n- Changes to home directory first\n- Uses relative paths from home\n- Returns to original directory after completion\n\n### 📦 Installation\n\n```bash\n\"plugin\": [\"opencode-openai-codex-auth\"]\n```\n\n---\n\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fnumman-ali\u002Fopencode-openai-codex-auth\u002Fcompare\u002Fv2.1.0...v2.1.1","2025-10-04T10:42:02",{"id":161,"version":162,"summary_zh":163,"released_at":164},115191,"v2.1.0","## 🎯 What's New\n\nThis release enhances the 
CODEX_MODE bridge prompt with awareness of OpenCode's advanced capabilities and implements a robust cache-based verification system for more reliable prompt filtering.\n\n### ✨ New Features\n\n#### 🔧 Task Tool Awareness\nThe bridge prompt now documents OpenCode's sub-agent system, making the model aware it can delegate complex work to specialized agents:\n- Explains Task tool availability and dynamic agent types\n- Provides guidance on when to use sub-agents (complex analysis, isolated context)\n- References tool schema for current agent capabilities\n\n#### 🔌 MCP Tool Awareness\nDocuments Model Context Protocol tool naming conventions so the model knows these capabilities exist:\n- Explains `mcp__\u003Cserver>__\u003Ctool>` prefix convention\n- Encourages discovery of available MCP integrations\n- Promotes usage when functionality matches task needs\n\n#### ✅ ETag-Based OpenCode Prompt Verification\nImplements robust caching system for OpenCode's codex.txt to ensure 100% accurate prompt filtering:\n- Fetches OpenCode's official codex.txt from GitHub\n- Uses HTTP conditional requests (ETag) for efficient updates\n- Dual verification: cached content match + text signature fallback\n- Prevents accidentally filtering custom AGENTS.md content\n\n#### 📚 AGENTS.md (formerly CLAUDE.md)\nRenamed development guide to be applicable to all AI agents:\n- Updated header to reference Claude Code, Codex, and other AI agents\n- Comprehensive coding guidance for agent-assisted development\n- Documents 7-step fetch flow, module organization, and key design patterns\n\n### 🔨 Technical Improvements\n\n**New Architecture:**\n- Made request transformation pipeline async to support cache fetching\n- Created `lib\u002Fprompts\u002Fopencode-codex.ts` for prompt verification (109 lines)\n- Enhanced `isOpenCodeSystemPrompt()` with dual verification system\n\n**Bridge Prompt Update:**\n- Added \"Advanced Tools\" section with Task tool and MCP documentation\n- Increased from ~450 tokens to 
~550 tokens\n- Better model awareness of available capabilities\n\n**Test Coverage:**\n- Added 6 new tests verifying AGENTS.md content safety\n- Total: **129 comprehensive tests** (was 123)\n- All tests passing ✅\n\n### 📋 What's Changed\n\n**New Files:**\n- `lib\u002Fprompts\u002Fopencode-codex.ts` - OpenCode prompt verification cache\n\n**Updated Files:**\n- `lib\u002Fprompts\u002Fcodex-opencode-bridge.ts` - Added Advanced Tools section\n- `lib\u002Frequest\u002Frequest-transformer.ts` - Enhanced prompt detection\n- `lib\u002Frequest\u002Ffetch-helpers.ts` - Made transformation async\n- `index.ts` - Await async transformation\n- `AGENTS.md` - Renamed from CLAUDE.md\n- `README.md` - Updated test count, added Task\u002FMCP mentions\n- `.gitignore` - Added opencode.json and .opencode\u002F\n- `package.json` - Version bump to 2.1.0\n- `test\u002Frequest-transformer.test.ts` - Added 6 tests, async updates\n\n### 🎁 Benefits\n\n1. **Better Task Delegation** - Model knows it can use Task tool for specialized work\n2. **MCP Discoverability** - Model aware of MCP tools and naming conventions\n3. **Robust Filtering** - Cache-based verification prevents false positives\n4. **Future-Proof** - Automatically updates when OpenCode changes their prompt\n5. 
**Broader Applicability** - AGENTS.md guidance applies to all AI agents\n\n### 📦 Installation\n\n```bash\n# In your opencode config, use:\n\"plugin\": [\"opencode-openai-codex-auth\"]\n\n# Or upgrade from previous version:\nsed -i.bak '\u002F\"opencode-openai-codex-auth\"\u002Fd' ~\u002F.cache\u002Fopencode\u002Fpackage.json && rm -rf ~\u002F.cache\u002Fopencode\u002Fnode_modules\u002Fopencode-openai-codex-auth\n```\n\n### ⚠️ Breaking Changes\n\nNone - all changes are additive and backward compatible.\n\n### 📊 Stats\n\n- **10 files changed**: +652 insertions, -671 deletions\n- **129 tests**: All passing ✅\n- **Bridge prompt**: ~550 tokens (was ~450)\n- **Test coverage**: Comprehensive including AGENTS.md safety verification\n\n### 🔗 Full Changelog\n\nSee [PR #15](https:\u002F\u002Fgithub.com\u002Fnumman-ali\u002Fopencode-openai-codex-auth\u002Fpull\u002F15) for complete technical details.\n\n---\n\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fnumman-ali\u002Fopencode-openai-codex-auth\u002Fcompare\u002Fv2.0.0...v2.1.0","2025-10-04T10:26:00",{"id":166,"version":167,"summary_zh":168,"released_at":169},115192,"v2.0.0","# 🚀 Version 2.0.0 - Major Release\n\nThis major release includes a complete TypeScript rewrite with enhanced configuration, CODEX_MODE for better Codex CLI parity, and comprehensive library reorganization.\n\n## ✨ Highlights\n\n### 🎯 CODEX_MODE (PR #10)\n**New configurable mode for better Codex CLI compatibility**\n\n- **Enabled by default** - Provides optimal Codex CLI experience out of the box\n- **Bridge Prompt** - ~450 tokens (~90% reduction vs full OpenCode prompt)\n- **Configurable** - Via `~\u002F.opencode\u002Fopenai-codex-auth-config.json` or `CODEX_MODE` env var\n- **Smart Tool Mapping** - Critical tool replacements, verification checklist, working style guidelines\n\n**Configuration:**\n```json\n{\n  \"codexMode\": true  \u002F\u002F default\n}\n```\n\n**Environment Override:**\n```bash\nCODEX_MODE=0 opencode run \"task\"  # Disable 
temporarily\nCODEX_MODE=1 opencode run \"task\"  # Enable temporarily\n```\n\n### 🔄 Complete TypeScript Rewrite (PR #9)\n**Modern, type-safe, well-tested codebase**\n\n- **Strict TypeScript** - Full type safety with comprehensive type definitions\n- **123 Tests** - Comprehensive test coverage (up from 0)\n- **Modular Architecture** - 10 focused helper functions, semantic folder structure\n- **Enhanced Configuration** - User-configurable reasoning, summaries, and verbosity\n- **Animated OAuth UI** - Beautiful success page with matrix rain effects\n\n## 📦 What's New\n\n### CODEX_MODE Features\n- ✅ Codex-OpenCode bridge prompt for CLI parity\n- ✅ Configurable via config file or environment variable\n- ✅ Priority: env var > config file > default (true)\n- ✅ OpenCode system prompt filtering\n- ✅ Tool name confusion prevention\n\n### TypeScript Rewrite\n- ✅ Complete migration from .mjs to .ts\n- ✅ Strict type checking with comprehensive interfaces\n- ✅ 123 comprehensive tests with Vitest\n- ✅ Modular helper functions (all \u003C 40 lines)\n- ✅ Enhanced error handling and logging\n\n### Enhanced Configuration\n- ✅ **9 pre-configured model variants** matching Codex CLI\n- ✅ **User-configurable reasoning** - effort, summary, verbosity\n- ✅ **Provider-level and model-level options** - Fine-grained control\n- ✅ **Plugin configuration support** - `~\u002F.opencode\u002Fopenai-codex-auth-config.json`\n\n### Improved OAuth Flow\n- ✅ Animated OAuth success page with matrix rain\n- ✅ Better error handling and user feedback\n- ✅ Automatic token refresh\n\n## 📁 Library Reorganization\n\n```\nlib\u002F\n├── auth\u002F              # OAuth authentication\n│   ├── auth.ts\n│   ├── browser.ts\n│   └── server.ts\n├── prompts\u002F           # System prompts\n│   ├── codex.ts\n│   └── codex-opencode-bridge.ts\n├── request\u002F           # Request handling\n│   ├── fetch-helpers.ts\n│   ├── request-transformer.ts\n│   └── response-handler.ts\n├── config.ts          # Plugin 
configuration\n├── constants.ts\n├── logger.ts\n├── types.ts\n└── oauth-success.html\n```\n\n## 🧪 Testing\n\n- **123 total tests** (all passing ✅)\n- **8 test files** covering all functionality\n- **12 new tests** for CODEX_MODE configuration\n- **Zero test regressions**\n\nTest suites:\n- ✅ Authentication & OAuth\n- ✅ Browser utilities\n- ✅ Configuration parsing\n- ✅ Plugin configuration\n- ✅ Fetch helpers\n- ✅ Logger\n- ✅ Request transformation\n- ✅ Response handling\n\n## 📚 Documentation\n\n### README Updates\n- ✅ Complete rewrite with better organization\n- ✅ Plugin configuration section\n- ✅ CODEX_MODE documentation\n- ✅ Installation guide (full vs minimal config)\n- ✅ Configuration examples\n- ✅ Model variants table\n- ✅ Troubleshooting guide\n\n### Configuration Files\n- ✅ `config\u002Ffull-opencode.json` - 9 model variants (recommended)\n- ✅ `config\u002Fminimal-opencode.json` - Minimal setup\n- ✅ `config\u002FREADME.md` - Configuration guide\n\n## 🔧 Configuration\n\n### Recommended: Full Configuration\n\nFor the complete experience with all reasoning variants:\n\n```json\n{\n  \"$schema\": \"https:\u002F\u002Fopencode.ai\u002Fconfig.json\",\n  \"plugin\": [\"opencode-openai-codex-auth\"],\n  \"model\": \"openai\u002Fgpt-5-codex\",\n  \"provider\": {\n    \"openai\": {\n      \"options\": {\n        \"reasoningEffort\": \"medium\",\n        \"reasoningSummary\": \"auto\",\n        \"textVerbosity\": \"medium\",\n        \"include\": [\"reasoning.encrypted_content\"]\n      },\n      \"models\": {\n        \"GPT 5 Codex Low\": { ... },\n        \"GPT 5 Codex Medium\": { ... },\n        \"GPT 5 Codex High\": { ... 
},\n        \u002F\u002F + 6 more variants\n      }\n    }\n  }\n}\n```\n\nSee [`config\u002Ffull-opencode.json`](.\u002Fconfig\u002Ffull-opencode.json) for complete configuration.\n\n### Plugin Configuration\n\nCreate `~\u002F.opencode\u002Fopenai-codex-auth-config.json`:\n```json\n{\n  \"codexMode\": true\n}\n```\n\n## ⚠️ Breaking Changes\n\n### CODEX_MODE Enabled by Default\n- **Impact**: Users now get the Codex-OpenCode bridge prompt instead of tool remap message\n- **Benefit**: Better Codex CLI parity, fewer tool confusion errors\n- **Migration**: No action needed - works better by default\n  - To disable: Set `{ \"codexMode\": false }` in config file\n  - Or use `CODEX_MODE=0` environment variable\n\n### TypeScript Migration\n- **Impact**: Distributed files are now .js (compiled from .ts) instead of .mjs\n- **Benefit**: Better IDE support, ","2025-10-03T21:36:34",{"id":171,"version":172,"summary_zh":173,"released_at":174},115177,"v4.4.0","## Changes\n- Update OAuth success page version banner to 4.4.0.\n- Maintenance release for version sync.","2026-01-09T14:01:00",{"id":176,"version":177,"summary_zh":178,"released_at":179},115178,"v4.3.1","## v4.3.1 (2026-01-08)\n\nInstaller safety release: JSONC support, safe uninstall, and minimal reasoning clamp.\n\n### Added\n- JSONC-aware installer with comment\u002Fformatting preservation and `.jsonc` priority.\n- Safe uninstall: `--uninstall` removes only plugin entries + our model presets; `--all` removes tokens\u002Flogs\u002Fcache.\n- Installer tests covering JSONC parsing, precedence, uninstall safety, and artifact cleanup.\n\n### Changed\n- Default config path when creating new configs: `~\u002F.config\u002Fopencode\u002Fopencode.jsonc`.\n- Added `jsonc-parser` (MIT, 0 deps) for robust JSONC handling.\n\n### Fixed\n- Normalizes `minimal` → `low` for GPT‑5.x requests to avoid backend rejection.\n","2026-01-08T20:02:31",{"id":181,"version":182,"summary_zh":183,"released_at":184},115179,"v4.3.0","## v4.3.0 
(2026-01-04)\n\nFeature + reliability release: variants support, one-command installer, and auth\u002Ferror handling fixes.\n\n### Added\n- One-command installer\u002Fupdate: `npx -y opencode-openai-codex-auth@latest` (global config, backup, cache clear) with `--legacy` for OpenCode v1.0.209 and below.\n- Modern variants config: `config\u002Fopencode-modern.json` for OpenCode v1.0.210+; legacy presets remain in `config\u002Fopencode-legacy.json`.\n- Installer CLI bundled as package bin for cross-platform use (Windows\u002FmacOS\u002FLinux).\n\n### Changed\n- Variants-aware request config respects host-supplied `body.reasoning` \u002F `providerOptions.openai` before falling back to defaults.\n- OpenCode prompt source updated to the current upstream repository (`anomalyco\u002Fopencode`).\n- Docs\u002FREADME reorganized to an install-first layout with explicit legacy path.\n\n### Fixed\n- Headless login fallback when `xdg-open` is missing; manual URL paste remains available.\n- Error handling alignment: refresh failures throw; usage-limit 404s map to retryable 429s where appropriate.\n- AGENTS.md preservation via protected instruction markers.\n- Tool-call integrity: orphan outputs match `local_shell_call` and `custom_tool_call` (Codex CLI parity); unmatched outputs preserved as assistant messages.\n- Logging noise gated behind debug flags.\n","2026-01-05T00:00:05",{"id":186,"version":187,"summary_zh":188,"released_at":189},115180,"v4.2.0","**Feature release**: GPT 5.2 Codex support and prompt alignment with latest Codex CLI.\n\n### Added\n- **GPT 5.2 Codex model family**: Full support for `gpt-5.2-codex` with presets:\n  - `gpt-5.2-codex-low` - Fast GPT 5.2 Codex responses\n  - `gpt-5.2-codex-medium` - Balanced GPT 5.2 Codex tasks\n  - `gpt-5.2-codex-high` - Complex GPT 5.2 Codex reasoning & tools\n  - `gpt-5.2-codex-xhigh` - Deep GPT 5.2 Codex long-horizon work\n- **New model family prompt**: `gpt-5.2-codex_prompt.md` fetched from the latest Codex CLI release with 
its own cache file.\n- **Test coverage**: Added unit tests for GPT 5.2 Codex normalization, family selection, and reasoning behavior.\n\n### Changed\n- **Prompt selection alignment**: GPT 5.2 general now uses `gpt_5_2_prompt.md` (Codex CLI parity).\n- **Reasoning configuration**: GPT 5.2 Codex supports `xhigh` but does **not** support `\"none\"`; `\"none\"` auto-upgrades to `\"low\"` and `\"minimal\"` normalizes to `\"low\"`.\n- **Config presets**: `config\u002Ffull-opencode.json` now includes 22 pre-configured variants (adds GPT 5.2 Codex).\n- **Docs**: Updated README\u002FAGENTS\u002Fconfig docs to include GPT 5.2 Codex and new model family behavior.\n","2025-12-19T20:50:10",{"id":191,"version":192,"summary_zh":193,"released_at":194},115181,"v4.1.1","## What's New\n\n### \"None\" Reasoning Effort Support\n\nGPT-5.2 and GPT-5.1 general purpose models now support `reasoning_effort: \"none\"` which disables the reasoning phase entirely. This can result in faster responses when reasoning is not needed.\n\n- **gpt-5.2-none** - GPT-5.2 with reasoning disabled\n- **gpt-5.1-none** - GPT-5.1 with reasoning disabled\n\n**Note:** Codex variants do NOT support \"none\" - it auto-converts to \"low\" for Codex\u002FCodex Max, or \"medium\" for Codex Mini.\n\n### Bug Fixes\n\n- **Fixed orphaned function_call_output 400 errors** - Previously, when conversation history contained `item_reference` pointing to stored function calls, orphaned `function_call_output` items could cause API errors. 
Now handles orphans regardless of tools presence and converts them to assistant messages to preserve context while avoiding errors.\n\n- **Fixed OAuth HTML version display** - Updated the version shown in the OAuth success page from 1.0.4 to 4.1.0.\n\n## Reasoning Effort Support Matrix\n\n| Model | `none` | `low` | `medium` | `high` | `xhigh` |\n|-------|--------|-------|----------|--------|---------|\n| GPT-5.2 | ✅ | ✅ | ✅ | ✅ | ✅ |\n| GPT-5.1 | ✅ | ✅ | ✅ | ✅ | ❌ |\n| GPT-5.1-Codex | ❌→low | ✅ | ✅ | ✅ | ❌ |\n| GPT-5.1-Codex-Max | ❌→low | ✅ | ✅ | ✅ | ✅ |\n| GPT-5.1-Codex-Mini | ❌→medium | ❌→medium | ✅ | ✅ | ❌→high |\n\n## Model Presets\n\nThis release includes **18 pre-configured model variants** in the full configuration:\n\n- **GPT-5.2**: none, low, medium, high, xhigh\n- **GPT-5.1-Codex-Max**: low, medium, high, xhigh\n- **GPT-5.1-Codex**: low, medium, high\n- **GPT-5.1-Codex-Mini**: medium, high\n- **GPT-5.1**: none, low, medium, high\n\n## Upgrade\n\nUpdate your `opencode.json`:\n\n```json\n\"plugin\": [\"opencode-openai-codex-auth@4.1.1\"]\n```\n\nThen copy the updated configuration from [`config\u002Ffull-opencode.json`](https:\u002F\u002Fgithub.com\u002Fnumman-ali\u002Fopencode-openai-codex-auth\u002Fblob\u002Fmain\u002Fconfig\u002Ffull-opencode.json).\n\n## Test Coverage\n\n- 197 unit tests (4 new tests for \"none\" reasoning behavior)\n- All tests passing","2025-12-17T12:32:59",{"id":196,"version":197,"summary_zh":198,"released_at":199},115182,"v4.1.0","## 🚀 GPT 5.2 Support & Full Image Input Capabilities\n\nThis release adds support for OpenAI's latest GPT 5.2 model and enables full multimodal image input across all 16 model variants.\n\n### ✨ New Features\n\n#### GPT 5.2 Model Family\n- **4 new model presets** with full reasoning support:\n  - `gpt-5.2-low` - Fast responses with light reasoning\n  - `gpt-5.2-medium` - Balanced reasoning for general tasks\n  - `gpt-5.2-high` - Complex reasoning and analysis\n  - `gpt-5.2-xhigh` - Deep multi-hour analysis 
(same capabilities as Codex Max)\n\n#### 🖼️ Full Image Input Support\n- **All 16 models** now support image input via `modalities.input: [\"text\", \"image\"]`\n- Read screenshots, diagrams, UI mockups, and any image directly in OpenCode\n- No additional configuration required - just use the full config\n\n### 📝 Changes\n\n- **Model ordering**: Config now prioritizes newer models (GPT 5.2 → Codex Max → Codex → Codex Mini → GPT 5.1)\n- **Explicit reasoning levels**: Removed default presets without reasoning suffix to enforce explicit selection\n- **Test coverage**: 193 unit tests + 16 integration tests (all passing)\n- **Security**: Updated `@opencode-ai\u002Fplugin` to `^1.0.150` (0 vulnerabilities)\n\n### 📦 Installation\n\nUpdate your `opencode.json` to use the new version:\n\n```json\n{\n  \"plugin\": [\"opencode-openai-codex-auth@4.1.0\"]\n}\n```\n\nThen copy the full config from [`config\u002Ffull-opencode.json`](https:\u002F\u002Fgithub.com\u002Fnumman-ali\u002Fopencode-openai-codex-auth\u002Fblob\u002Fmain\u002Fconfig\u002Ffull-opencode.json).\n\n### 🔧 Usage\n\n```bash\n# GPT 5.2 models\nopencode run \"analyze this\" --model=openai\u002Fgpt-5.2-high\nopencode run \"deep research\" --model=openai\u002Fgpt-5.2-xhigh\n\n# With image input (automatic - no extra config needed)\n# Just reference images in your prompts and OpenCode will handle them\n```\n\n---\n\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fnumman-ali\u002Fopencode-openai-codex-auth\u002Fcompare\u002Fv4.0.2...v4.1.0","2025-12-11T22:21:34",{"id":201,"version":202,"summary_zh":203,"released_at":204},115183,"v4.0.2","## Bugfix Release\n\nFixes compaction context loss, agent creation, and SSE\u002FJSON response handling.\n\n### Fixed\n\n- **Compaction losing context**: v4.0.1 was too aggressive in filtering tool calls - it removed ALL `function_call`\u002F`function_call_output` items when tools weren't present. 
Now only **orphaned** outputs (without matching calls) are filtered, preserving matched pairs for compaction context.\n- **Agent creation failing**: The `\u002Fagent create` command was failing with \"Invalid JSON response\" because we were returning SSE streams instead of JSON for `generateText()` requests.\n- **SSE\u002FJSON response handling**: Properly detect original request intent - `streamText()` requests get SSE passthrough, `generateText()` requests get SSE→JSON conversion.\n\n### Added\n\n- **`gpt-5.1-chat-latest` model support**: Added to model map, normalizes to `gpt-5.1`.\n\n### Technical Details\n\n- **Compaction fix**: OpenCode sends `item_reference` with `fc_*` IDs for function calls. We filter these for stateless mode, but v4.0.1 then removed ALL tool items. Now we only remove orphaned `function_call_output` items (where no matching `function_call` exists).\n- **Agent creation fix**: We were forcing `stream: true` for all requests and returning SSE for all responses. Now we capture original `stream` value before transformation and convert SSE→JSON only when original request wasn't streaming.\n- The Codex API always receives `stream: true` (required), but response handling is based on original intent.\n\n### Upgrade\n\nUpdate your `opencode.json`:\n\n```json\n{\n  \"plugin\": [\"opencode-openai-codex-auth@4.0.2\"]\n}\n```\n\nIf stuck on an old version, clear the cache:\n\n```bash\nrm -rf ~\u002F.cache\u002Fopencode\u002Fnode_modules ~\u002F.cache\u002Fopencode\u002Fbun.lock\n```\n\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fnumman-ali\u002Fopencode-openai-codex-auth\u002Fcompare\u002Fv4.0.1...v4.0.2","2025-11-27T13:57:54",{"id":206,"version":207,"summary_zh":208,"released_at":209},115184,"v4.0.1","## Bugfix Release\n\nFixes API errors during summary\u002Fcompaction and GitHub rate limiting.\n\n### Fixed\n\n- **Orphaned `function_call_output` errors**: Fixed 400 errors during summary\u002Fcompaction requests when OpenCode sends 
`item_reference` pointers to server-stored function calls. The plugin now filters out `function_call` and `function_call_output` items when no tools are present in the request.\n- **GitHub API rate limiting**: Added fallback mechanism when fetching Codex instructions from GitHub. If the API returns 403 (rate limit), the plugin now falls back to parsing the HTML releases page.\n\n### Technical Details\n\n- **Root cause**: OpenCode's secondary model (gpt-5-nano) uses `item_reference` with `fc_*` IDs to reference stored function calls. Our plugin filters `item_reference` for stateless mode (`store: false`), leaving `function_call_output` orphaned. The Codex API rejects requests with orphaned outputs.\n- **Fix**: When `hasTools === false`, filter out all `function_call` and `function_call_output` items from the input array.\n- **GitHub fallback chain**: API endpoint → HTML page → redirect URL parsing → HTML regex parsing.\n\n### Upgrade\n\nUpdate your `opencode.json`:\n\n```json\n{\n  \"plugin\": [\"opencode-openai-codex-auth@4.0.1\"]\n}\n```\n\nIf stuck on an old version, clear the cache:\n\n```bash\nrm -rf ~\u002F.cache\u002Fopencode\u002Fnode_modules ~\u002F.cache\u002Fopencode\u002Fbun.lock\n```\n\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fnumman-ali\u002Fopencode-openai-codex-auth\u002Fcompare\u002Fv4.0.0...v4.0.1","2025-11-27T13:00:59",{"id":211,"version":212,"summary_zh":213,"released_at":214},115185,"v4.0.0","This release brings **full GPT-5.1 Codex Max support** with dedicated prompts, plus complete parity with Codex CLI's prompt selection logic.\r\n\r\n### 🚀 Highlights\r\n\r\n- **Full Codex Max support** with dedicated prompt including frontend design guidelines\r\n- **Model-specific prompts** matching Codex CLI's prompt selection logic\r\n- **GPT-5.0 → GPT-5.1 migration** as legacy models are phased out by OpenAI\r\n\r\n### ✨ Model-Specific System Prompts\r\n\r\nThe plugin now fetches the correct Codex prompt based on model family, matching Codex 
CLI's `model_family.rs` logic:\r\n\r\n| Model Family | Prompt File | Lines | Use Case |\r\n|--------------|-------------|-------|----------|\r\n| `gpt-5.1-codex-max*` | `gpt-5.1-codex-max_prompt.md` | 117 | **Codex Max** with frontend design guidelines |\r\n| `gpt-5.1-codex*`, `codex-*` | `gpt_5_codex_prompt.md` | 105 | Focused coding prompt |\r\n| `gpt-5.1*` | `gpt_5_1_prompt.md` | 368 | Full behavioral guidance |\r\n\r\n### 🔄 Legacy GPT-5.0 → GPT-5.1 Migration\r\n\r\nAll legacy GPT-5.0 models automatically normalize to GPT-5.1 equivalents:\r\n- `gpt-5-codex` → `gpt-5.1-codex`\r\n- `gpt-5` → `gpt-5.1`\r\n- `gpt-5-mini`, `gpt-5-nano` → `gpt-5.1`\r\n- `codex-mini-latest` → `gpt-5.1-codex-mini`\r\n\r\n### 🔧 Technical Improvements\r\n\r\n- **New `ModelFamily` type**: `\"codex-max\" | \"codex\" | \"gpt-5.1\"`\r\n- **Lazy instruction loading**: Instructions fetched per-request based on model\r\n- **Separate caching per family**: Better cache efficiency\r\n- **Model family logging**: Debug with `modelFamily` field in logs\r\n\r\n### 🧪 Test Coverage\r\n\r\n- **191 unit tests** (16 new for model family detection)\r\n- **13 integration tests** with family verification\r\n- All tests passing ✅\r\n\r\n### 📝 Full Changelog\r\n\r\nSee [CHANGELOG.md](https:\u002F\u002Fgithub.com\u002Fnumman-ali\u002Fopencode-openai-codex-auth\u002Fblob\u002Fmain\u002FCHANGELOG.md) for complete details.\r\n\r\n---\r\n\r\n**Installation:**\r\n```bash\r\nnpm install opencode-openai-codex-auth@4.0.0\r\n```","2025-11-25T17:44:56",{"id":216,"version":217,"summary_zh":218,"released_at":219},115186,"v3.3.0","This release enforces GPT 5.1 model identifiers across all configurations and documentation, removes deprecated GPT 5.0 models, and establishes `config\u002Ffull-opencode.json` as the only officially supported configuration. 
These changes address GPT 5 model temperamental behavior and ensure users have a reliable, tested setup that works consistently with OpenCode features.\r\n\r\n---\r\n\r\n## 🏷️ Model Naming & Deprecation\r\n\r\n### GPT 5.1 Standardization\r\n**Impact:** 🟡 **MEDIUM** - Configuration update required\r\n\r\n**Changes:**\r\nAll model identifiers updated to GPT 5.1 naming convention:\r\n- ✅ `gpt-5-codex-low` → `gpt-5.1-codex-low`\r\n- ✅ `gpt-5-codex-medium` → `gpt-5.1-codex-medium`\r\n- ✅ `gpt-5-codex-high` → `gpt-5.1-codex-high`\r\n- ✅ `gpt-5-codex-mini-medium` → `gpt-5.1-codex-mini-medium`\r\n- ✅ `gpt-5-codex-mini-high` → `gpt-5.1-codex-mini-high`\r\n- ✅ `gpt-5-low` → `gpt-5.1-low`\r\n- ✅ `gpt-5-medium` → `gpt-5.1-medium`\r\n- ✅ `gpt-5-high` → `gpt-5.1-high`\r\n\r\n**File:** `config\u002Ffull-opencode.json`\r\n\r\n**Display names updated:**\r\n- \"GPT 5 Codex Low (OAuth)\" → \"GPT 5.1 Codex Low (OAuth)\"\r\n- All variants now clearly show \"5.1\" in the TUI\r\n\r\n---\r\n\r\n### Deprecated Models Removed\r\n**Impact:** 🔴 **HIGH** - Breaking change for users on GPT 5.0 models\r\n\r\n**Removed from config:**\r\n- ❌ `gpt-5-minimal` - No longer supported\r\n- ❌ `gpt-5-mini` - No longer supported  \r\n- ❌ `gpt-5-nano` - No longer supported\r\n\r\n**Reason:**\r\nOpenAI is phasing out GPT 5.0 models. 
These models exhibited unreliable behavior and are being replaced by the GPT 5.1 family.\r\n\r\n**Migration:**\r\nUsers on deprecated models should switch to:\r\n- `gpt-5-minimal` → `gpt-5.1-low` (similar fast performance)\r\n- `gpt-5-mini` → `gpt-5.1-low` (lightweight reasoning)\r\n- `gpt-5-nano` → `gpt-5.1-low` (minimal reasoning)\r\n\r\n**File:** `config\u002Ffull-opencode.json` - Now ships with 8 verified GPT 5.1 variants instead of 11 mixed 5.0\u002F5.1 models\r\n\r\n---\r\n\r\n### Codex Mini Context Limits Corrected\r\n**Impact:** 🟢 **LOW** - Improves accuracy\r\n\r\n**Problem:**\r\nCodex Mini was configured with incorrect context limits (200k\u002F100k), which didn't match actual API specifications.\r\n\r\n**Fix:**\r\nUpdated Codex Mini limits to correct values:\r\n- Context: 200k → **272k tokens**\r\n- Output: 100k → **128k tokens**\r\n\r\n**Impact:**\r\n- ✅ OpenCode now displays accurate token usage for Codex Mini variants\r\n- ✅ Auto-compaction works correctly with proper limits\r\n- ✅ Matches actual API behavior\r\n\r\n**File:** `config\u002Ffull-opencode.json:69-70, 85-86`\r\n\r\n---\r\n\r\n## ⚠️ Configuration Enforcement\r\n\r\n### Full Config Now Required\r\n**Impact:** 🔴 **CRITICAL** - Affects all users\r\n\r\n**What Changed:**\r\nThe plugin now **strongly enforces** `config\u002Ffull-opencode.json` as the only officially supported configuration.\r\n\r\n**Why This Matters:**\r\nGPT 5 models have proven to be temperamental:\r\n- Some variants work reliably\r\n- Some don't respond correctly\r\n- Some may give errors unexpectedly\r\n\r\nThe full configuration has been thoroughly tested and verified to work consistently. 
Minimal configurations lack critical metadata and may fail unpredictably.\r\n\r\n**Documentation Updates:**\r\n\r\n**README.md:**\r\n- Changed \"Recommended: Full Configuration\" → \"⚠️ REQUIRED: Full Configuration (Only Supported Setup)\"\r\n- Added explicit warning: \"GPT 5 models can be temperamental - some work, some don't, some may error\"\r\n- Marked minimal config section as \"❌ NOT RECOMMENDED - DO NOT USE\"\r\n- Added detailed \"Why this doesn't work\" section explaining:\r\n  - Missing model metadata breaks OpenCode features\r\n  - No support for usage limits or context compaction\r\n  - Cannot guarantee stable operation\r\n\r\n**docs\u002Fgetting-started.md:**\r\n- Removed \"Option B: Minimal Configuration\"\r\n- Replaced with \"❌ Minimal Configuration (NOT SUPPORTED - DO NOT USE)\"\r\n- Added comprehensive warnings about GPT 5 models requiring proper configuration\r\n\r\n**docs\u002Fconfiguration.md:**\r\n- Added warnings throughout about using official `full-opencode.json`\r\n- Updated \"Recommended\" → \"⚠️ REQUIRED: Use Pre-Configured File\"\r\n- Added migration guide showing GPT 5.0 → GPT 5.1 upgrade path\r\n\r\n**config\u002FREADME.md:**\r\n- Complete restructure from \"Configuration Examples\" → \"Configuration\"\r\n- Added \"⚠️ REQUIRED Configuration File\" section\r\n- Marked `minimal-opencode.json` as **NOT SUPPORTED**\r\n- Marked `full-opencode-gpt5.json` as **DEPRECATED**\r\n- Clear \"❌ Other Configurations (NOT SUPPORTED)\" section\r\n\r\n**Impact:**\r\n- ✅ Users get reliable, tested configuration\r\n- ✅ OpenCode features (auto-compaction, usage sidebar) work properly\r\n- ✅ Reduces support issues from misconfiguration\r\n- ⚠️ Users must migrate from minimal configs to full config\r\n\r\n---\r\n\r\n### Why Minimal Configs Don't Work\r\n\r\n**Missing Metadata:**\r\nMinimal configs lack per-model `limit` metadata that OpenCode requires for:\r\n- Token usage display\r\n- Automatic context compaction\r\n- Usage sidebar widgets\r\n\r\n**GPT 5 
Temperamental Behavior:**\r\nWithout proper configuration:\r\n- Some model variants may fail\r\n- Error messages may be unclear\r\n- Behavior is unpredictable","2025-11-16T21:56:56",{"id":221,"version":222,"summary_zh":223,"released_at":224},115187,"v3.1.0","To update the plugin (same as README):\r\n\r\n# Clear plugin cache\r\n(cd ~ && sed -i.bak '\u002F\"opencode-openai-codex-auth\"\u002Fd' .cache\u002Fopencode\u002Fpackage.json && rm -rf .cache\u002Fopencode\u002Fnode_modules\u002Fopencode-openai-codex-auth)\r\n\r\n# Restart OpenCode - it will reinstall the latest version\r\nopencode\r\n\r\nYou'll need to add `gpt-5-codex-mini-medium` and `gpt-5-codex-mini-high` presets (200k input \u002F 100k output) to your opencode.json. I recommend copying directly from the readme. ","2025-11-11T20:53:14",{"id":226,"version":227,"summary_zh":228,"released_at":229},115193,"v1.0.3","## 🏗️ Code Architecture Improvements\n\n### Major Refactoring\n- **50% code reduction** in main file (512→256 lines)\n- **Modular design** with focused, single-responsibility modules\n- **Clean separation** of concerns across the codebase\n\n### New Modules\n- `lib\u002Flogger.mjs` - Centralized debug logging\n- `lib\u002Frequest-transformer.mjs` - Request body transformations\n- `lib\u002Fresponse-handler.mjs` - SSE to JSON conversion\n\n### Code Quality\n- Removed all debug console.logs\n- Improved readability and maintainability\n- Easier testing of individual components\n- Better organized imports and dependencies\n\n## 📝 Documentation Updates\n- Updated project structure in README\n- Added version pinning instructions\n- Improved module overview with new files\n\n## 🗑️ Cleanup\n- Removed bundled `codex-instructions.md` (fully migrated to GitHub fetching)\n\n## 🔧 Technical Details\nAll functionality remains identical - this is purely a refactoring release for better code organization and future maintainability. 
No breaking changes.\n\n## 📦 Installation\n```bash\nnpm install opencode-openai-codex-auth@1.0.3\n```\n\nOr in your `opencode.json`:\n```json\n{\n  \"plugin\": [\"opencode-openai-codex-auth\"]\n}\n```\n\n## 📊 Package Stats\n- Total files: 10\n- Package size: 12.4 kB\n- Unpacked size: 36.9 kB","2025-10-02T12:58:31",{"id":231,"version":232,"summary_zh":233,"released_at":234},115194,"v1.0.2","## 🎯 Major Improvements\n\n### Smart ETag-Based Caching\n- Replaced 24-hour TTL cache with HTTP ETag-based conditional requests\n- Only downloads instructions when content actually changes (304 Not Modified responses)\n- Significantly reduces GitHub API calls while staying up-to-date\n\n### Release Tag Tracking for Stability\n- Now fetches Codex instructions from latest GitHub release tag instead of main branch\n- Ensures compatibility with ChatGPT Codex API (main branch may have unreleased features)\n- Prevents \"Instructions are not valid\" errors from bleeding-edge changes\n\n## 🐛 Bug Fixes\n\n### Model Normalization\n- Fixed default model fallback: unsupported models now default to `gpt-5` (not `gpt-5-codex`)\n- Preserves user's choice between `gpt-5` and `gpt-5-codex` when explicitly specified\n- Only codex model variants normalize to `gpt-5-codex`\n\n### Error Prevention\n- Added `body.text` initialization check to prevent TypeError on `body.text.verbosity`\n- Improved error handling in request transformation\n\n### Code Quality\n- Standardized all console.error prefixes to `[openai-codex-plugin]`\n- Updated documentation to reflect ETag caching implementation\n- Added npm version and downloads badges to README\n\n## 📚 Documentation\n- Updated README with accurate caching behavior description\n- Added npm package badges for version tracking\n- Clarified release-based fetching strategy\n\n## 📦 Installation\n\n```bash\nnpm install opencode-openai-codex-auth@1.0.2\n```\n\nOr add to your `opencode.json`:\n```json\n{\n  \"plugin\": 
[\"opencode-openai-codex-auth\"],\n  \"model\": \"openai\u002Fgpt-5-codex\"\n}\n```\n\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fnumman-ali\u002Fopencode-openai-codex-auth\u002Fcompare\u002Fv1.0.1...v1.0.2","2025-10-02T11:14:40",{"id":236,"version":237,"summary_zh":238,"released_at":239},115195,"v1.0.1","## 📚 Documentation Update\n\n### Improvements\n- ✅ Clarified that `npm install` is **not needed** - opencode auto-installs plugins!\n- ✅ Added config file locations (global vs project-specific)\n- ✅ Added links to official [opencode docs](https:\u002F\u002Fopencode.ai\u002Fdocs\u002F)\n- ✅ Simplified quick start instructions for beginners\n\n### Installation\n\nJust add to your `opencode.json`:\n\n```json\n{\n  \"plugin\": [\"opencode-openai-codex-auth\"],\n  \"model\": \"openai\u002Fgpt-5-codex\"\n}\n```\n\nopencode handles the rest automatically!\n\n---\n\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fnumman-ali\u002Fopencode-openai-codex-auth\u002Fcompare\u002Fv1.0.0...v1.0.1","2025-10-01T23:30:22",{"id":241,"version":242,"summary_zh":243,"released_at":244},115196,"v1.0.0","## 🎉 Production-ready OpenAI Codex OAuth Plugin\n\n### Features\n- ✅ ChatGPT Plus\u002FPro OAuth authentication with auto-refresh\n- ✅ **Zero dependencies** - Lightweight with only @openauthjs\u002Fopenauth\n- ✅ **Auto-updating Codex instructions** - Fetches latest from OpenAI's Codex repo (cached 24h)\n- ✅ Full tool support with automatic remapping (apply_patch → edit, update_plan → todowrite)\n- ✅ High reasoning effort with detailed thinking blocks\n- ✅ Modular architecture for easy maintenance\n\n### Installation\n\n```bash\nnpm install opencode-openai-codex-auth\n```\n\nAdd to your `opencode.json`:\n\n```json\n{\n  \"plugin\": [\"opencode-openai-codex-auth\"],\n  \"model\": \"openai\u002Fgpt-5-codex\"\n}\n```\n\n### Authentication\n\n```bash\nopencode auth login\n```\n\nSelect \"OpenAI\" → \"ChatGPT Plus\u002FPro (Codex Subscription)\"\n\n### Package Size\n- **13.5 
kB** compressed\n- **36.7 kB** unpacked\n- Only 1 dependency\n\n### Credits\nBased on research from:\n- [ben-vargas\u002Fai-sdk-provider-chatgpt-oauth](https:\u002F\u002Fgithub.com\u002Fben-vargas\u002Fai-sdk-provider-chatgpt-oauth)\n- [openai\u002Fcodex](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fcodex)\n- [sst\u002Fopencode](https:\u002F\u002Fgithub.com\u002Fsst\u002Fopencode)","2025-10-01T23:17:30"]