[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-steipete--summarize":3,"tool-steipete--summarize":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 
多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":79,"owner_company":80,"owner_location":81,"owner_email":82,"owner_twitter":76,"owner_website":83,"owner_url":84,"languages":85,"stars":106,"forks":107,"last_commit_at":108,"license":109,"difficulty_score":10,"env_os":110,"env_gpu":111,"env_ram":111,"env_deps":112,"category_tags":120,"github_topics":121,"view_count":125,"oss_zip_url":126,"oss_zip_packed_at":126,"status":16,"created_at":127,"updated_at":128,"faqs":129,"releases":155},222,"steipete\u002Fsummarize","summarize","Point at any URL\u002FYouTube\u002FPodcast or file. Get the gist. CLI and Chrome Extension.","Summarize 是一个开源的智能摘要工具，支持通过命令行（CLI）或浏览器扩展（Chrome 侧边栏 \u002F Firefox 侧边栏）快速提取网页、PDF、视频、播客、YouTube 等内容的核心信息。它能自动识别内容类型——比如检测到 YouTube 视频时，会抓取幻灯片截图、结合 OCR 文字识别与字幕生成带时间戳的摘要卡片，并支持点击跳转播放位置。\n\nSummarize 解决了用户面对大量文本或音视频内容时“抓不住重点”的问题，尤其适合需要高效获取信息但不想手动整理的场景。普通用户可通过浏览器一键总结当前页面；研究人员、学生或知识工作者能快速处理论文、讲座或播客；开发者则可利用其 CLI 灵活集成到工作流中。\n\n技术上，它融合了多种能力：优先使用官方字幕，缺失时调用 Whisper、Gemini 或 OpenAI 等模型进行语音转写；支持本地、免费（如 OpenRouter）及付费模型；输出支持 Markdown、JSON 等格式，并具备缓存感知与成本估算。浏览器扩展依赖本地后台服务（daemon）处理 heavy lifting（如 ffmpeg、OCR），兼顾速度与隐私，所有","Summarize 是一个开源的智能摘要工具，支持通过命令行（CLI）或浏览器扩展（Chrome 侧边栏 \u002F Firefox 侧边栏）快速提取网页、PDF、视频、播客、YouTube 等内容的核心信息。它能自动识别内容类型——比如检测到 YouTube 视频时，会抓取幻灯片截图、结合 OCR 文字识别与字幕生成带时间戳的摘要卡片，并支持点击跳转播放位置。\n\nSummarize 解决了用户面对大量文本或音视频内容时“抓不住重点”的问题，尤其适合需要高效获取信息但不想手动整理的场景。普通用户可通过浏览器一键总结当前页面；研究人员、学生或知识工作者能快速处理论文、讲座或播客；开发者则可利用其 CLI 灵活集成到工作流中。\n\n技术上，它融合了多种能力：优先使用官方字幕，缺失时调用 Whisper、Gemini 或 OpenAI 等模型进行语音转写；支持本地、免费（如 OpenRouter）及付费模型；输出支持 Markdown、JSON 等格式，并具备缓存感知与成本估算。浏览器扩展依赖本地后台服务（daemon）处理 heavy lifting（如 ffmpeg、OCR），兼顾速度与隐私，所有通信仅限本机。","# Summarize 📝 — Chrome Side Panel + CLI\n\nFast summaries from URLs, files, and media. 
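\n\nA minimal first run, as a sketch (assumes the global npm install described under Install below; the URL is just a placeholder):\n\n```bash\n# install the CLI globally, then summarize a page\nnpm i -g @steipete\u002Fsummarize\nsummarize \"https:\u002F\u002Fexample.com\"\n```\n\n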
Works in the terminal, a Chrome Side Panel and Firefox Sidebar.\n\n## Highlights\n\n- Chrome Side Panel **chat** (streaming agent + history) inside the sidebar.\n- **YouTube slides**: screenshots + OCR + transcript cards, timestamped seek, OCR\u002FTranscript toggle.\n- Media-aware summaries: auto‑detect video\u002Faudio vs page content.\n- Streaming Markdown + metrics + cache‑aware status.\n- CLI supports URLs, files, podcasts, YouTube, audio\u002Fvideo, PDFs.\n\n## Feature overview\n\n- URLs, files, and media: web pages, PDFs, images, audio\u002Fvideo, YouTube, podcasts, RSS.\n- Slide extraction for video sources (YouTube\u002Fdirect media) with OCR + timestamped cards.\n- Transcript-first media flow: published transcripts when available, then Groq\u002FONNX\u002Fwhisper.cpp\u002FAssemblyAI\u002FGemini\u002FOpenAI\u002FFAL transcription fallback when not.\n- Streaming output with Markdown rendering, metrics, and cache-aware status.\n- Local, paid, and free models: OpenAI‑compatible local endpoints, paid providers, plus an OpenRouter free preset.\n- Output modes: Markdown\u002Ftext, JSON diagnostics, extract-only, metrics, timing, and cost estimates.\n- Smart default: if content is shorter than the requested length, we return it as-is (use `--force-summary` to override).\n\n## Get the extension (recommended)\n\n![Summarize extension screenshot](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsteipete_summarize_readme_591d962721be.png)\n\nOne‑click summarizer for the current tab. Chrome Side Panel + Firefox Sidebar + local daemon for streaming Markdown.\n\n**Chrome Web Store:** [Summarize Side Panel](https:\u002F\u002Fchromewebstore.google.com\u002Fdetail\u002Fsummarize\u002Fcejgnmmhbbpdmjnfppjdfkocebngehfg)\n\nYouTube slide screenshots (from the browser):\n\n![Summarize YouTube slide screenshots](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsteipete_summarize_readme_12c814e1f6b9.png)\n\n### Beginner quickstart (extension)\n\n1. Install the CLI (choose one):\n   - **npm** (cross‑platform): `npm i -g @steipete\u002Fsummarize`\n   - **Homebrew** (macOS arm64): `brew install steipete\u002Ftap\u002Fsummarize`\n2. Install the extension (Chrome Web Store link above) and open the Side Panel.\n3. The panel shows a token + install command. Run it in Terminal:\n   - `summarize daemon install --token \u003CTOKEN>`\n\nWhy a daemon\u002Fservice?\n\n- The extension can’t run heavy extraction inside the browser. It talks to a local background service on `127.0.0.1` for fast streaming and media tools (yt‑dlp, ffmpeg, OCR, transcription).\n- The service autostarts (launchd\u002Fsystemd\u002FScheduled Task) so the Side Panel is always ready.\n\nIf you only want the **CLI**, you can skip the daemon install entirely.\n\nNotes:\n\n- Summarization only runs when the Side Panel is open.\n- Auto mode summarizes on navigation (incl. SPAs); otherwise use the button.\n- Daemon is localhost-only and requires a shared token; rerunning `summarize daemon install --token \u003CTOKEN>` adds another paired browser token instead of invalidating the old one.\n- Autostart: macOS (launchd), Linux (systemd user), Windows (Scheduled Task).\n- Tip: configure `free` via `summarize refresh-free` (needs `OPENROUTER_API_KEY`). 
Add `--set-default` to set model=`free`.\n\nMore:\n\n- Step-by-step install: [apps\u002Fchrome-extension\u002FREADME.md](apps\u002Fchrome-extension\u002FREADME.md)\n- Architecture + troubleshooting: [docs\u002Fchrome-extension.md](docs\u002Fchrome-extension.md)\n- Firefox compatibility notes: [apps\u002Fchrome-extension\u002Fdocs\u002Ffirefox.md](apps\u002Fchrome-extension\u002Fdocs\u002Ffirefox.md)\n\n### Slides (extension)\n\n- Select **Video + Slides** in the Summarize picker.\n- Slides render at the top; expand to full‑width cards with timestamps.\n- Click a slide to seek the video; toggle **Transcript\u002FOCR** when OCR is significant.\n- Requirements: `yt-dlp` + `ffmpeg` for extraction; `tesseract` for OCR. Missing tools show an in‑panel notice.\n\n### Advanced (unpacked \u002F dev)\n\n1. Build + load the extension (unpacked):\n   - Chrome: `pnpm -C apps\u002Fchrome-extension build`\n     - `chrome:\u002F\u002Fextensions` → Developer mode → Load unpacked\n     - Pick: `apps\u002Fchrome-extension\u002F.output\u002Fchrome-mv3`\n   - Firefox: `pnpm -C apps\u002Fchrome-extension build:firefox`\n     - `about:debugging#\u002Fruntime\u002Fthis-firefox` → Load Temporary Add-on\n     - Pick: `apps\u002Fchrome-extension\u002F.output\u002Ffirefox-mv3\u002Fmanifest.json`\n2. Open Side Panel\u002FSidebar → copy token.\n3. Install daemon in dev mode:\n   - `pnpm summarize daemon install --token \u003CTOKEN> --dev`\n\n## CLI\n\n![Summarize CLI screenshot](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsteipete_summarize_readme_999e0d0e7576.png)\n\n### Install\n\nRequires Node 22+.\n\n- npx (no install):\n\n```bash\nnpx -y @steipete\u002Fsummarize \"https:\u002F\u002Fexample.com\"\n```\n\n- npm (global):\n\n```bash\nnpm i -g @steipete\u002Fsummarize\n```\n\n- npm (library \u002F minimal deps):\n\n```bash\nnpm i @steipete\u002Fsummarize-core\n```\n\n```ts\nimport { createLinkPreviewClient } from \"@steipete\u002Fsummarize-core\u002Fcontent\";\n```\n\n- Homebrew (custom tap):\n\n```bash\nbrew install steipete\u002Ftap\u002Fsummarize\n```\n\nHomebrew availability depends on the current tap formula for your architecture.\nIf Homebrew install fails on Intel\u002Fx64, use the npm global install above.\n\n### Optional local dependencies\n\nInstall these if you want media-heavy features:\n\n- `ffmpeg`: required for `--slides` and many local media\u002Ftranscription flows\n- `yt-dlp`: required for YouTube slide extraction and some remote media flows\n- `tesseract`: optional OCR for `--slides-ocr`\n- Optional cloud transcription providers:\n  - `GROQ_API_KEY`\n  - `ASSEMBLYAI_API_KEY`\n  - `GEMINI_API_KEY` \u002F `GOOGLE_GENERATIVE_AI_API_KEY` \u002F `GOOGLE_API_KEY`\n  - `OPENAI_API_KEY`\n  - `FAL_KEY`\n\nmacOS (Homebrew):\n\n```bash\nbrew install ffmpeg yt-dlp\nbrew install tesseract # optional, for --slides-ocr\n```\n\nIf `--slides` is enabled and these tools are missing, Summarize warns and continues without slides.\n\n### CLI vs extension\n\n- **CLI only:** just install via npm\u002FHomebrew and run `summarize ...` (no daemon needed).\n- **Chrome\u002FFirefox extension:** install the CLI **and** run `summarize daemon install --token \u003CTOKEN>` so the Side Panel can stream results and use local tools.\n\n### Quickstart\n\n```bash\nsummarize \"https:\u002F\u002Fexample.com\"\n```\n\n### Inputs\n\nURLs or local paths:\n\n```bash\nsummarize \"\u002Fpath\u002Fto\u002Ffile.pdf\" --model google\u002Fgemini-3-flash\nsummarize \"https:\u002F\u002Fexample.com\u002Freport.pdf\" --model 
google\u002Fgemini-3-flash\nsummarize \"\u002Fpath\u002Fto\u002Faudio.mp3\"\nsummarize \"\u002Fpath\u002Fto\u002Fvideo.mp4\"\n```\n\nStdin (pipe content using `-`):\n\n```bash\necho \"content\" | summarize -\npbpaste | summarize -\n# binary stdin also works (PDF\u002Fimage\u002Faudio\u002Fvideo bytes)\ncat \u002Fpath\u002Fto\u002Ffile.pdf | summarize -\n```\n\n**Notes:**\n\n- Stdin has a 50MB size limit\n- The `-` argument tells summarize to read from standard input\n- Text stdin is treated as UTF-8 text (whitespace-only input is rejected as empty)\n- Binary stdin is preserved as raw bytes and file type is auto-detected when possible\n- Useful for piping clipboard content or command output\n\nYouTube (supports `youtube.com` and `youtu.be`):\n\n```bash\nsummarize \"https:\u002F\u002Fyoutu.be\u002FdQw4w9WgXcQ\" --youtube auto\n```\n\nPodcast RSS (transcribes latest enclosure):\n\n```bash\nsummarize \"https:\u002F\u002Ffeeds.npr.org\u002F500005\u002Fpodcast.xml\"\n```\n\nApple Podcasts episode page:\n\n```bash\nsummarize \"https:\u002F\u002Fpodcasts.apple.com\u002Fus\u002Fpodcast\u002F2424-jelly-roll\u002Fid360084272?i=1000740717432\"\n```\n\nSpotify episode page (best-effort; may fail for exclusives):\n\n```bash\nsummarize \"https:\u002F\u002Fopen.spotify.com\u002Fepisode\u002F5auotqWAXhhKyb9ymCuBJY\"\n```\n\n### Output length\n\n`--length` controls how much output we ask for (guideline), not a hard cap.\n\n```bash\nsummarize \"https:\u002F\u002Fexample.com\" --length long\nsummarize \"https:\u002F\u002Fexample.com\" --length 20k\n```\n\n- Presets: `short|medium|long|xl|xxl`\n- Character targets: `1500`, `20k`, `20000`\n- Optional hard cap: `--max-output-tokens \u003Ccount>` (e.g. `2000`, `2k`)\n  - Provider\u002Fmodel APIs still enforce their own maximum output limits.\n  - If omitted, no max token parameter is sent (provider default).\n  - Prefer `--length` unless you need a hard cap.\n- Short content: when extracted content is shorter than the requested length, the CLI returns the content as-is.\n  - Override with `--force-summary` to always run the LLM.\n- Minimums: `--length` numeric values must be >= 50 chars; `--max-output-tokens` must be >= 16.\n- Preset targets (source of truth: `packages\u002Fcore\u002Fsrc\u002Fprompts\u002Fsummary-lengths.ts`):\n  - short: target ~900 chars (range 600-1,200)\n  - medium: target ~1,800 chars (range 1,200-2,500)\n  - long: target ~4,200 chars (range 2,500-6,000)\n  - xl: target ~9,000 chars (range 6,000-14,000)\n  - xxl: target ~17,000 chars (range 14,000-22,000)\n\n### What file types work?\n\nBest effort and provider-dependent. 
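\n\nA quick probe, as a sketch (file names are placeholders; unsupported types fail fast with a friendly message, per the notes below):\n\n```bash\n# text-like files are inlined into the prompt\nsummarize \".\u002Fnotes.md\"\n# PDFs go through provider file support; Google tends to be most reliable here\nsummarize \".\u002Freport.pdf\" --model google\u002Fgemini-3-flash\n```\n\n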
These usually work well:\n\n- `text\u002F*` and common structured text (`.txt`, `.md`, `.json`, `.yaml`, `.xml`, ...)\n  - Text-like files are inlined into the prompt for better provider compatibility.\n- PDFs: `application\u002Fpdf` (provider support varies; Google is the most reliable here)\n- Images: `image\u002Fjpeg`, `image\u002Fpng`, `image\u002Fwebp`, `image\u002Fgif`\n- Audio\u002FVideo: `audio\u002F*`, `video\u002F*` (local audio\u002Fvideo files MP3\u002FWAV\u002FM4A\u002FOGG\u002FFLAC\u002FMP4\u002FMOV\u002FWEBM automatically transcribed, when supported by the model)\n\nNotes:\n\n- If a provider rejects a media type, the CLI fails fast with a friendly message.\n- xAI models do not support attaching generic files (like PDFs) via the AI SDK; use Google\u002FOpenAI\u002FAnthropic for those.\n\n### Model ids\n\nUse gateway-style ids: `\u003Cprovider>\u002F\u003Cmodel>`.\n\nExamples:\n\n- `openai\u002Fgpt-5-mini`\n- `anthropic\u002Fclaude-sonnet-4-5`\n- `xai\u002Fgrok-4-fast-non-reasoning`\n- `google\u002Fgemini-3-flash`\n- `zai\u002Fglm-4.7`\n- `openrouter\u002Fopenai\u002Fgpt-5-mini` (force OpenRouter)\n\nNote: some models\u002Fproviders do not support streaming or certain file media types. When that happens, the CLI prints a friendly error (or auto-disables streaming for that model when supported by the provider).\n\n### Limits\n\n- Text inputs over 10 MB are rejected before tokenization.\n- Text prompts are preflighted against the model input limit (LiteLLM catalog), using a GPT tokenizer.\n\n### Common flags\n\n```bash\nsummarize \u003Cinput> [flags]\n```\n\nUse `summarize --help` or `summarize help` for the full help text.\n\n- `--model \u003Cprovider\u002Fmodel>`: which model to use (defaults to `auto`)\n- `--model auto`: automatic model selection + fallback (default)\n- `--model \u003Cname>`: use a config-defined model (see Configuration)\n- `--timeout \u003Cduration>`: `30s`, `2m`, `5000ms` (default `2m`)\n- `--retries \u003Ccount>`: LLM retry attempts on timeout (default `1`)\n- `--length short|medium|long|xl|xxl|s|m|l|\u003Cchars>`\n- `--language, --lang \u003Clanguage>`: output language (`auto` = match source)\n- `--max-output-tokens \u003Ccount>`: hard cap for LLM output tokens\n- `--cli [provider]`: use a CLI provider (`--model cli\u002F\u003Cprovider>`). Supports `claude`, `gemini`, `codex`, `agent`. 
If omitted, uses auto selection with CLI enabled.\n- `--stream auto|on|off`: stream LLM output (`auto` = TTY only; disabled in `--json` mode)\n- `--plain`: keep raw output (no ANSI\u002FOSC Markdown rendering)\n- `--no-color`: disable ANSI colors\n- `--theme \u003Cname>`: CLI theme (`aurora`, `ember`, `moss`, `mono`)\n- `--format md|text`: website\u002Ffile content format (default `text`)\n- `--markdown-mode off|auto|llm|readability`: HTML -> Markdown mode (default `readability`)\n- `--preprocess off|auto|always`: controls `uvx markitdown` usage (default `auto`)\n  - Install `uvx`: `brew install uv` (or https:\u002F\u002Fastral.sh\u002Fuv\u002F)\n- `--extract`: print extracted content and exit (URLs only; stdin `-` is not supported)\n  - Deprecated alias: `--extract-only`\n- `--slides`: extract slides for YouTube\u002Fdirect video URLs and render them inline in the summary narrative (auto-renders inline in supported terminals)\n- `--slides-ocr`: run OCR on extracted slides (requires `tesseract`)\n- `--slides-dir \u003Cdir>`: base output dir for slide images (default `.\u002Fslides`)\n- `--slides-scene-threshold \u003Cvalue>`: scene detection threshold (0.1-1.0)\n- `--slides-max \u003Ccount>`: maximum slides to extract (default `6`)\n- `--slides-min-duration \u003Cseconds>`: minimum seconds between slides\n- `--json`: machine-readable output with diagnostics, prompt, `metrics`, and optional summary\n- `--verbose`: debug\u002Fdiagnostics on stderr\n- `--metrics off|on|detailed`: metrics output (default `on`)\n\n### Coding CLIs (Codex, Claude, Gemini, Agent)\n\nSummarize can use common coding CLIs as local model backends:\n\n- `codex` -> `--cli codex` \u002F `--model cli\u002Fcodex\u002F\u003Cmodel>`\n- `claude` -> `--cli claude` \u002F `--model cli\u002Fclaude\u002F\u003Cmodel>`\n- `gemini` -> `--cli gemini` \u002F `--model cli\u002Fgemini\u002F\u003Cmodel>`\n- `agent` (Cursor Agent CLI) -> `--cli agent` \u002F `--model cli\u002Fagent\u002F\u003Cmodel>`\n\nRequirements:\n\n- Binary installed and on `PATH` (or set `CODEX_PATH`, `CLAUDE_PATH`, `GEMINI_PATH`, `AGENT_PATH`)\n- Provider authenticated (`codex login`, `claude auth`, `gemini` login flow, `agent login` or `CURSOR_API_KEY`)\n\nQuick smoke test:\n\n```bash\nprintf \"Summarize CLI smoke input.\\nOne short paragraph. 
Reply can be brief.\\n\" >\u002Ftmp\u002Fsummarize-cli-smoke.txt\n\nsummarize --cli codex --plain --timeout 2m \u002Ftmp\u002Fsummarize-cli-smoke.txt\nsummarize --cli claude --plain --timeout 2m \u002Ftmp\u002Fsummarize-cli-smoke.txt\nsummarize --cli gemini --plain --timeout 2m \u002Ftmp\u002Fsummarize-cli-smoke.txt\nsummarize --cli agent --plain --timeout 2m \u002Ftmp\u002Fsummarize-cli-smoke.txt\n```\n\nSet explicit CLI allowlist\u002Forder:\n\n```json\n{\n  \"cli\": { \"enabled\": [\"codex\", \"claude\", \"gemini\", \"agent\"] }\n}\n```\n\nConfigure implicit auto CLI fallback:\n\n```json\n{\n  \"cli\": {\n    \"autoFallback\": {\n      \"enabled\": true,\n      \"onlyWhenNoApiKeys\": true,\n      \"order\": [\"claude\", \"gemini\", \"codex\", \"agent\"]\n    }\n  }\n}\n```\n\nMore details: [`docs\u002Fcli.md`](docs\u002Fcli.md)\n\n### Auto model ordering\n\n`--model auto` builds candidate attempts from built-in rules (or your `model.rules` overrides).\nCLI attempts are prepended when:\n\n- `cli.enabled` is set (explicit allowlist\u002Forder), or\n- implicit auto selection is active and `cli.autoFallback` is enabled.\n\nDefault fallback behavior: only when no API keys are configured, order `claude, gemini, codex, agent`, and remember\u002Fprioritize last successful provider (`~\u002F.summarize\u002Fcli-state.json`).\n\nSet explicit CLI attempts:\n\n```json\n{\n  \"cli\": { \"enabled\": [\"gemini\"] }\n}\n```\n\nDisable implicit auto CLI fallback:\n\n```json\n{\n  \"cli\": { \"autoFallback\": { \"enabled\": false } }\n}\n```\n\nNote: explicit `--model auto` does not trigger implicit auto CLI fallback unless `cli.enabled` is set.\n\n### Website extraction (Firecrawl + Markdown)\n\nNon-YouTube URLs go through a fetch -> extract pipeline. When direct fetch\u002Fextraction is blocked or too thin,\n`--firecrawl auto` can fall back to Firecrawl (if configured).\n\n- `--firecrawl off|auto|always` (default `auto`)\n- `--extract --format md|text` (default `text`; if `--format` is omitted, `--extract` defaults to `md` for non-YouTube URLs)\n- `--markdown-mode off|auto|llm|readability` (default `readability`)\n  - `auto`: use an LLM converter when configured; may fall back to `uvx markitdown`\n  - `llm`: force LLM conversion (requires a configured model key)\n  - `off`: disable LLM conversion (still may return Firecrawl Markdown when configured)\n- Plain-text mode: use `--format text`.\n\n### YouTube transcripts\n\n`--youtube auto` tries best-effort web transcript endpoints first. When captions are not available, it falls back to:\n\n1. Apify (if `APIFY_API_TOKEN` is set): uses a scraping actor (`faVsWy9VTSNVIhWpR`)\n2. 
yt-dlp + Whisper (if `yt-dlp` is available): downloads audio, then transcribes with local `whisper.cpp` when installed\n   (preferred), otherwise falls back to Groq (`GROQ_API_KEY`), AssemblyAI (`ASSEMBLYAI_API_KEY`), Gemini\n   (`GEMINI_API_KEY` \u002F Google aliases), OpenAI (`OPENAI_API_KEY`), then FAL (`FAL_KEY`)\n\nEnvironment variables for yt-dlp mode:\n\n- `YT_DLP_PATH` - optional path to yt-dlp binary (otherwise `yt-dlp` is resolved via `PATH`)\n- `SUMMARIZE_WHISPER_CPP_MODEL_PATH` - optional override for the local `whisper.cpp` model file\n- `SUMMARIZE_WHISPER_CPP_BINARY` - optional override for the local binary (default: `whisper-cli`)\n- `SUMMARIZE_DISABLE_LOCAL_WHISPER_CPP=1` - disable local whisper.cpp (force remote)\n- `GROQ_API_KEY` - Groq Whisper transcription\n- `ASSEMBLYAI_API_KEY` - AssemblyAI transcription\n- `GEMINI_API_KEY` - Gemini transcription (`GOOGLE_GENERATIVE_AI_API_KEY` \u002F `GOOGLE_API_KEY` also work)\n- `OPENAI_API_KEY` - OpenAI Whisper transcription\n- `OPENAI_WHISPER_BASE_URL` - optional OpenAI-compatible Whisper endpoint override\n- `FAL_KEY` - FAL AI Whisper fallback\n\nApify costs money but tends to be more reliable when captions exist.\n\n### Slide extraction (YouTube + direct video URLs)\n\nExtract slide screenshots (scene detection via `ffmpeg`) and optional OCR:\n\nRequirements:\n\n- `ffmpeg` for scene detection and frame extraction\n- `yt-dlp` for YouTube video download\u002Fstream resolution\n- `tesseract` only when using `--slides-ocr`\n\n```bash\nsummarize \"https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=...\" --slides\nsummarize \"https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=...\" --slides --slides-ocr\n```\n\nOutputs are written under `.\u002Fslides\u002F\u003CsourceId>\u002F` (or `--slides-dir`). OCR results are included in JSON output\n(`--json`) and stored in `slides.json` inside the slide directory. When scene detection is too sparse, the\nextractor also samples at a fixed interval to improve coverage.\nWhen using `--slides`, supported terminals (kitty\u002FiTerm\u002FKonsole) render inline thumbnails automatically inside the\nsummary narrative (the model inserts `[slide:N]` markers). Timestamp links are clickable when the terminal supports\nOSC-8 (YouTube\u002FVimeo\u002FLoom\u002FDropbox). If inline images are unsupported, Summarize prints a note with the on-disk\nslide directory.\n\nUse `--slides --extract` to print the full timed transcript and insert slide images inline at matching timestamps.\n\nFormat the extracted transcript as Markdown (headings + paragraphs) via an LLM:\n\n```bash\nsummarize \"https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=...\" --extract --format md --markdown-mode llm\n```\n\n### Media transcription (Whisper)\n\nLocal audio\u002Fvideo files are transcribed first, then summarized. `--video-mode transcript` forces\ndirect media URLs (and embedded media) through Whisper first. Prefers local `whisper.cpp` when available; otherwise requires\none of `GROQ_API_KEY`, `ASSEMBLYAI_API_KEY`, `GEMINI_API_KEY` (or Google aliases), `OPENAI_API_KEY`, or `FAL_KEY`.\n\n### Local ONNX transcription (Parakeet\u002FCanary)\n\nSummarize can use NVIDIA Parakeet\u002FCanary ONNX models via a local CLI you provide. 
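\n\nFor example, once an ONNX command is configured, a single run can force it (a sketch; the media path is a placeholder, and the command value format is described in docs\u002Fnvidia-onnx-transcription.md):\n\n```bash\n# force the local Parakeet ONNX transcriber for this run\nsummarize \"\u002Fpath\u002Fto\u002Ftalk.mp4\" --transcriber parakeet\n```\n\n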
Auto selection (default) prefers ONNX when configured.\n\n- Setup helper: `summarize transcriber setup`\n- Install `sherpa-onnx` from upstream binaries\u002Fbuild (Homebrew may not have a formula)\n- Auto selection: set `SUMMARIZE_ONNX_PARAKEET_CMD` or `SUMMARIZE_ONNX_CANARY_CMD` (no flag needed)\n- Force a model: `--transcriber parakeet|canary|whisper|auto`\n- Docs: `docs\u002Fnvidia-onnx-transcription.md`\n\n### Verified podcast services (2025-12-25)\n\nRun: `summarize \u003Curl>`\n\n- Apple Podcasts\n- Spotify\n- Amazon Music \u002F Audible podcast pages\n- Podbean\n- Podchaser\n- RSS feeds (Podcasting 2.0 transcripts when available)\n- Embedded YouTube podcast pages (e.g. JREPodcast)\n\nTranscription: prefers local `whisper.cpp` when installed; otherwise uses Groq, AssemblyAI, Gemini, OpenAI, or FAL when keys are set.\n\n### Translation paths\n\n`--language\u002F--lang` controls the output language of the summary (and other LLM-generated text). Default is `auto`.\n\nWhen the input is audio\u002Fvideo, the CLI needs a transcript first. The transcript comes from one of these paths:\n\n1. Existing transcript (preferred)\n   - YouTube: uses `youtubei` \u002F `captionTracks` when available.\n   - Podcasts: uses Podcasting 2.0 RSS `\u003Cpodcast:transcript>` (JSON\u002FVTT) when the feed publishes it.\n2. Whisper transcription (fallback)\n   - YouTube: falls back to yt-dlp (audio download) + Whisper transcription when configured; Apify is a last resort.\n   - Prefers local `whisper.cpp` when installed + model available.\n   - Otherwise uses cloud transcription in this order: Groq (`GROQ_API_KEY`) → AssemblyAI (`ASSEMBLYAI_API_KEY`) → Gemini (`GEMINI_API_KEY` \u002F Google aliases) → OpenAI (`OPENAI_API_KEY`) → FAL (`FAL_KEY`).\n\nFor direct media URLs, use `--video-mode transcript` to force transcribe -> summarize:\n\n```bash\nsummarize https:\u002F\u002Fexample.com\u002Ffile.mp4 --video-mode transcript --lang en\n```\n\n### Configuration\n\nSingle config location:\n\n- `~\u002F.summarize\u002Fconfig.json`\n\nSupported keys today:\n\n```json\n{\n  \"model\": { \"id\": \"openai\u002Fgpt-5-mini\" },\n  \"env\": { \"OPENAI_API_KEY\": \"sk-...\" },\n  \"ui\": { \"theme\": \"ember\" }\n}\n```\n\nShorthand (equivalent):\n\n```json\n{\n  \"model\": \"openai\u002Fgpt-5-mini\"\n}\n```\n\nAlso supported:\n\n- `model: { \"mode\": \"auto\" }` (automatic model selection + fallback; see [docs\u002Fmodel-auto.md](docs\u002Fmodel-auto.md))\n- `model.rules` (customize candidates \u002F ordering)\n- `models` (define presets selectable via `--model \u003Cpreset>`)\n- `env` (generic env var defaults; process env still wins)\n- `apiKeys` (legacy shortcut, mapped to env names; prefer `env` for new configs)\n- `cache.media` (media download cache: TTL 7 days, 2048 MB cap by default; `--no-media-cache` disables)\n- `media.videoMode: \"auto\"|\"transcript\"|\"understand\"`\n- `slides.enabled` \u002F `slides.max` \u002F `slides.ocr` \u002F `slides.dir` (defaults for `--slides`)\n- `ui.theme: \"aurora\"|\"ember\"|\"moss\"|\"mono\"`\n- `openai.useChatCompletions: true` (force OpenAI-compatible chat completions)\n\nNote: the config is parsed leniently (JSON5), but comments are not allowed. Unknown keys are ignored.\n\nMedia cache defaults:\n\n```json\n{\n  \"cache\": {\n    \"media\": { \"enabled\": true, \"ttlDays\": 7, \"maxMb\": 2048, \"verify\": \"size\" }\n  }\n}\n```\n\nNote: `--no-cache` bypasses summary caching only (LLM output). Extract\u002Ftranscript caches still apply. 
Use `--no-media-cache` to skip media files.\n\nPrecedence:\n\n1. `--model`\n2. `SUMMARIZE_MODEL`\n3. `~\u002F.summarize\u002Fconfig.json`\n4. default (`auto`)\n\nTheme precedence:\n\n1. `--theme`\n2. `SUMMARIZE_THEME`\n3. `~\u002F.summarize\u002Fconfig.json` (`ui.theme`)\n4. default (`aurora`)\n\nEnvironment variable precedence:\n\n1. process env\n2. `~\u002F.summarize\u002Fconfig.json` (`env`)\n3. `~\u002F.summarize\u002Fconfig.json` (`apiKeys`, legacy)\n\n### Environment variables\n\nSet the key matching your chosen `--model`:\n\n- Optional fallback defaults can be stored in config:\n  - `~\u002F.summarize\u002Fconfig.json` -> `\"env\": { \"OPENAI_API_KEY\": \"sk-...\" }`\n  - process env always takes precedence\n  - legacy `\"apiKeys\"` still works (mapped to env names)\n\n- `OPENAI_API_KEY` (for `openai\u002F...`)\n- `NVIDIA_API_KEY` (for `nvidia\u002F...`)\n- `ANTHROPIC_API_KEY` (for `anthropic\u002F...`)\n- `XAI_API_KEY` (for `xai\u002F...`)\n- `Z_AI_API_KEY` (for `zai\u002F...`; supports `ZAI_API_KEY` alias)\n- `GEMINI_API_KEY` (for `google\u002F...`)\n  - also accepts `GOOGLE_GENERATIVE_AI_API_KEY` and `GOOGLE_API_KEY` as aliases\n\nOpenAI-compatible chat completions toggle:\n\n- `OPENAI_USE_CHAT_COMPLETIONS=1` (or set `openai.useChatCompletions` in config)\n\nUI theme:\n\n- `SUMMARIZE_THEME=aurora|ember|moss|mono`\n- `SUMMARIZE_TRUECOLOR=1` (force 24-bit ANSI)\n- `SUMMARIZE_NO_TRUECOLOR=1` (disable 24-bit ANSI)\n\nOpenRouter (OpenAI-compatible):\n\n- Set `OPENROUTER_API_KEY=...`\n- Prefer forcing OpenRouter per model id: `--model openrouter\u002F\u003Cauthor>\u002F\u003Cslug>`\n- Built-in preset: `--model free` (uses a default set of OpenRouter `:free` models)\n\n### `summarize refresh-free`\n\nQuick start: make free the default (keep `auto` available)\n\n```bash\nsummarize refresh-free --set-default\nsummarize \"https:\u002F\u002Fexample.com\"\nsummarize \"https:\u002F\u002Fexample.com\" --model auto\n```\n\nRegenerates the `free` preset (`models.free` in `~\u002F.summarize\u002Fconfig.json`) by:\n\n- Fetching OpenRouter `\u002Fmodels`, filtering `:free`\n- Skipping models that look very small (\u003C27B by default) based on the model id\u002Fname\n- Testing which ones return non-empty text (concurrency 4, timeout 10s)\n- Picking a mix of smart-ish (bigger `context_length` \u002F output cap) and fast models\n- Refining timings and writing the sorted list back\n\nIf `--model free` stops working, run:\n\n```bash\nsummarize refresh-free\n```\n\nFlags:\n\n- `--runs 2` (default): extra timing runs per selected model (total runs = 1 + runs)\n- `--smart 3` (default): how many smart-first picks (rest filled by fastest)\n- `--min-params 27b` (default): ignore models with inferred size smaller than N billion parameters\n- `--max-age-days 180` (default): ignore models older than N days (set 0 to disable)\n- `--set-default`: also sets `\"model\": \"free\"` in `~\u002F.summarize\u002Fconfig.json`\n\nExample:\n\n```bash\nOPENROUTER_API_KEY=sk-or-... summarize \"https:\u002F\u002Fexample.com\" --model openrouter\u002Fmeta-llama\u002Fllama-3.1-8b-instruct:free\nOPENROUTER_API_KEY=sk-or-... summarize \"https:\u002F\u002Fexample.com\" --model openrouter\u002Fminimax\u002Fminimax-m2.5\n```\n\nIf your OpenRouter account enforces an allowed-provider list, make sure at least one provider\nis allowed for the selected model. 
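\n\nA verbose run can show which route is attempted before adjusting account settings (a sketch; `--verbose` prints diagnostics on stderr, and the model id is an example from above):\n\n```bash\nOPENROUTER_API_KEY=sk-or-... summarize \"https:\u002F\u002Fexample.com\" --model openrouter\u002Fopenai\u002Fgpt-5-mini --verbose\n```\n\n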
When routing fails, `summarize` prints the exact providers to allow.\n\nLegacy: `OPENAI_BASE_URL=https:\u002F\u002Fopenrouter.ai\u002Fapi\u002Fv1` (and either `OPENAI_API_KEY` or `OPENROUTER_API_KEY`) also works.\n\nNVIDIA API Catalog (OpenAI-compatible; free credits):\n\n- Set `NVIDIA_API_KEY=...`\n- Optional: `NVIDIA_BASE_URL=https:\u002F\u002Fintegrate.api.nvidia.com\u002Fv1`\n- Credits: API Catalog trial starts with 1000 free API credits on signup (up to 5000 total via “Request More” in the API Catalog profile)\n- Pick a model id from `\u002Fv1\u002Fmodels` (examples: fast `stepfun-ai\u002Fstep-3.5-flash`, strong but slower `z-ai\u002Fglm5`)\n\n```bash\nexport NVIDIA_API_KEY=\"nvapi-...\"\nsummarize \"https:\u002F\u002Fexample.com\" --model nvidia\u002Fstepfun-ai\u002Fstep-3.5-flash\n```\n\nZ.AI (OpenAI-compatible):\n\n- `Z_AI_API_KEY=...` (or `ZAI_API_KEY=...`)\n- Optional base URL override: `Z_AI_BASE_URL=...`\n\nOptional services:\n\n- `FIRECRAWL_API_KEY` (website extraction fallback)\n- `YT_DLP_PATH` (path to yt-dlp binary for audio extraction)\n- `GROQ_API_KEY` (Groq Whisper transcription)\n- `ASSEMBLYAI_API_KEY` (AssemblyAI transcription)\n- `GEMINI_API_KEY` \u002F `GOOGLE_GENERATIVE_AI_API_KEY` \u002F `GOOGLE_API_KEY` (Gemini transcription)\n- `OPENAI_API_KEY` \u002F `OPENAI_WHISPER_BASE_URL` (OpenAI Whisper transcription)\n- `FAL_KEY` (FAL AI API key for audio transcription via Whisper)\n- `APIFY_API_TOKEN` (YouTube transcript fallback)\n\n### Model limits\n\nThe CLI uses the LiteLLM model catalog for model limits (like max output tokens):\n\n- Downloaded from: `https:\u002F\u002Fraw.githubusercontent.com\u002FBerriAI\u002Flitellm\u002Fmain\u002Fmodel_prices_and_context_window.json`\n- Cached at: `~\u002F.summarize\u002Fcache\u002F`\n\n### Library usage (optional)\n\nRecommended (minimal deps):\n\n- `@steipete\u002Fsummarize-core\u002Fcontent`\n- `@steipete\u002Fsummarize-core\u002Fprompts`\n\nCompatibility (pulls in CLI deps):\n\n- `@steipete\u002Fsummarize\u002Fcontent`\n- `@steipete\u002Fsummarize\u002Fprompts`\n\n### Development\n\n```bash\npnpm install\npnpm check\n```\n\n## More\n\n- Docs index: [docs\u002FREADME.md](docs\u002FREADME.md)\n- CLI providers and config: [docs\u002Fcli.md](docs\u002Fcli.md)\n- Auto model rules: [docs\u002Fmodel-auto.md](docs\u002Fmodel-auto.md)\n- Website extraction: [docs\u002Fwebsite.md](docs\u002Fwebsite.md)\n- YouTube handling: [docs\u002Fyoutube.md](docs\u002Fyoutube.md)\n- Media pipeline: [docs\u002Fmedia.md](docs\u002Fmedia.md)\n- Config schema and precedence: [docs\u002Fconfig.md](docs\u002Fconfig.md)\n\n## Troubleshooting\n\n- \"Receiving end does not exist\": Chrome did not inject the content script yet.\n  - Extension details -> Site access -> On all sites (or allow this domain)\n  - Reload the tab once.\n- \"Failed to fetch\" \u002F daemon unreachable:\n  - `summarize daemon status`\n  - Logs: `~\u002F.summarize\u002Flogs\u002Fdaemon.err.log`\n\nLicense: MIT\n","# Summarize 📝 — Chrome 侧边栏 + CLI\n\n快速从 URL、文件和媒体中生成摘要。支持终端、Chrome 侧边栏（Side Panel）和 Firefox 侧边栏（Sidebar）。\n\n## 亮点功能\n\n- Chrome 侧边栏内置 **聊天** 功能（流式代理 + 历史记录）。\n- **YouTube 幻灯片**：截图 + OCR（光学字符识别）+ 字幕卡片，带时间戳跳转，支持 OCR\u002F字幕切换。\n- 媒体感知摘要：自动检测视频\u002F音频内容 vs 网页内容。\n- 流式 Markdown 输出 + 指标 + 缓存状态提示。\n- CLI 支持 URL、文件、播客、YouTube、音视频、PDF 等多种输入。\n\n## 功能概览\n\n- 支持多种输入源：网页、PDF、图片、音视频、YouTube、播客、RSS。\n- 视频源（YouTube\u002F直接媒体）的幻灯片提取，结合 OCR 与带时间戳的卡片。\n- 优先使用已发布的字幕；若无，则回退至 Groq\u002FONNX\u002Fwhisper.cpp\u002FAssemblyAI\u002FGemini\u002FOpenAI\u002FFAL 进行转录。\n- 流式输出，支持 Markdown 
渲染、指标展示和缓存状态提示。\n- 支持本地、付费和免费模型：兼容 OpenAI 的本地端点、付费服务商，以及 OpenRouter 提供的免费预设。\n- 多种输出模式：Markdown\u002F纯文本、JSON 诊断信息、仅提取内容、指标、耗时和成本估算。\n- 智能默认行为：若内容长度小于请求的摘要长度，则直接返回原文（使用 `--force-summary` 可强制摘要）。\n\n## 获取扩展程序（推荐）\n\n![Summarize 扩展程序截图](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsteipete_summarize_readme_591d962721be.png)\n\n一键为当前标签页生成摘要。支持 Chrome 侧边栏、Firefox 侧边栏及本地守护进程（daemon），用于流式 Markdown 输出。\n\n**Chrome 网上应用店：** [Summarize Side Panel](https:\u002F\u002Fchromewebstore.google.com\u002Fdetail\u002Fsummarize\u002Fcejgnmmhbbpdmjnfppjdfkocebngehfg)\n\n浏览器中的 YouTube 幻灯片截图：\n\n![Summarize YouTube 幻灯片截图](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsteipete_summarize_readme_12c814e1f6b9.png)\n\n### 初学者快速入门（扩展程序）\n\n1. 安装 CLI（任选其一）：\n   - **npm**（跨平台）：`npm i -g @steipete\u002Fsummarize`\n   - **Homebrew**（macOS arm64）：`brew install steipete\u002Ftap\u002Fsummarize`\n2. 安装上述扩展程序，并打开侧边栏。\n3. 面板会显示一个 token 和安装命令。在终端中运行：\n   - `summarize daemon install --token \u003CTOKEN>`\n\n为何需要守护进程（daemon）？\n\n- 扩展程序无法在浏览器内执行重型提取任务。它通过本地后台服务（运行于 `127.0.0.1`）实现快速流式处理和媒体工具调用（如 yt-dlp、ffmpeg、OCR、转录）。\n- 该服务会自动启动（通过 launchd\u002Fsystemd\u002F计划任务），确保侧边栏随时可用。\n\n如果你只需要 **CLI**，完全可以跳过守护进程的安装。\n\n注意事项：\n\n- 摘要仅在侧边栏打开时运行。\n- 自动模式会在页面导航时（包括 SPA）自动摘要；否则请手动点击按钮。\n- 守护进程仅限本地回环地址（localhost），需共享 token；再次运行 `summarize daemon install --token \u003CTOKEN>` 会新增一个配对 token，而非使旧 token 失效。\n- 自启机制：macOS（launchd）、Linux（systemd 用户级）、Windows（计划任务）。\n- 提示：通过 `summarize refresh-free` 配置 `free` 模型（需设置 `OPENROUTER_API_KEY`）。添加 `--set-default` 可将模型设为 `free`。\n\n更多详情：\n\n- 分步安装指南：[apps\u002Fchrome-extension\u002FREADME.md](apps\u002Fchrome-extension\u002FREADME.md)\n- 架构与故障排查：[docs\u002Fchrome-extension.md](docs\u002Fchrome-extension.md)\n- Firefox 兼容性说明：[apps\u002Fchrome-extension\u002Fdocs\u002Ffirefox.md](apps\u002Fchrome-extension\u002Fdocs\u002Ffirefox.md)\n\n### 幻灯片功能（扩展程序）\n\n- 在 Summarize 选择器中选择 **Video + Slides**。\n- 幻灯片显示在顶部；可展开为全宽卡片并附带时间戳。\n- 点击幻灯片可跳转视频；当 OCR 内容显著时，可切换 **Transcript\u002FOCR**。\n- 依赖项：`yt-dlp` + `ffmpeg` 用于提取；`tesseract` 用于 OCR。缺少工具时，面板内会显示提示。\n\n### 高级用法（未打包 \u002F 开发模式）\n\n1. 构建并加载扩展程序（未打包）：\n   - Chrome：`pnpm -C apps\u002Fchrome-extension build`\n     - 访问 `chrome:\u002F\u002Fextensions` → 启用开发者模式 → 加载已解压的扩展程序\n     - 选择路径：`apps\u002Fchrome-extension\u002F.output\u002Fchrome-mv3`\n   - Firefox：`pnpm -C apps\u002Fchrome-extension build:firefox`\n     - 访问 `about:debugging#\u002Fruntime\u002Fthis-firefox` → 加载临时附加组件\n     - 选择文件：`apps\u002Fchrome-extension\u002F.output\u002Ffirefox-mv3\u002Fmanifest.json`\n2. 打开侧边栏 → 复制 token。\n3. 
以开发模式安装守护进程：\n   - `pnpm summarize daemon install --token \u003CTOKEN> --dev`\n\n## CLI\n\n![Summarize CLI 截图](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsteipete_summarize_readme_999e0d0e7576.png)\n\n### 安装\n\n需要 Node 22+。\n\n- npx（无需安装）：\n\n```bash\nnpx -y @steipete\u002Fsummarize \"https:\u002F\u002Fexample.com\"\n```\n\n- npm（全局安装）：\n\n```bash\nnpm i -g @steipete\u002Fsummarize\n```\n\n- npm（作为库 \u002F 最小依赖）：\n\n```bash\nnpm i @steipete\u002Fsummarize-core\n```\n\n```ts\nimport { createLinkPreviewClient } from \"@steipete\u002Fsummarize-core\u002Fcontent\";\n```\n\n- Homebrew（自定义 tap）：\n\n```bash\nbrew install steipete\u002Ftap\u002Fsummarize\n```\n\nHomebrew 的可用性取决于当前 tap 中针对你架构的 formula。  \n如果在 Intel\u002Fx64 上 Homebrew 安装失败，请使用上方的 npm 全局安装方式。\n\n### 可选的本地依赖\n\n如需重度媒体功能，请安装以下工具：\n\n- `ffmpeg`：`--slides` 及许多本地媒体\u002F转录流程必需\n- `yt-dlp`：YouTube 幻灯片提取及部分远程媒体流程必需\n- `tesseract`：`--slides-ocr` 的可选 OCR 工具\n- 可选的云端转录服务提供商：\n  - `GROQ_API_KEY`\n  - `ASSEMBLYAI_API_KEY`\n  - `GEMINI_API_KEY` \u002F `GOOGLE_GENERATIVE_AI_API_KEY` \u002F `GOOGLE_API_KEY`\n  - `OPENAI_API_KEY`\n  - `FAL_KEY`\n\nmacOS（通过 Homebrew）：\n\n```bash\nbrew install ffmpeg yt-dlp\nbrew install tesseract # 可选，用于 --slides-ocr\n```\n\n若启用 `--slides` 但缺少上述工具，Summarize 会发出警告并继续运行（不生成幻灯片）。\n\n### CLI 与扩展程序对比\n\n- **仅 CLI**：只需通过 npm\u002FHomebrew 安装并运行 `summarize ...`（无需守护进程）。\n- **Chrome\u002FFirefox 扩展程序**：需安装 CLI **并**运行 `summarize daemon install --token \u003CTOKEN>`，以便侧边栏能流式接收结果并使用本地工具。\n\n### 快速入门\n\n```bash\nsummarize \"https:\u002F\u002Fexample.com\"\n```\n\n### 输入源\n\n支持 URL 或本地路径：\n\n```bash\nsummarize \"\u002Fpath\u002Fto\u002Ffile.pdf\" --model google\u002Fgemini-3-flash\nsummarize \"https:\u002F\u002Fexample.com\u002Freport.pdf\" --model google\u002Fgemini-3-flash\nsummarize \"\u002Fpath\u002Fto\u002Faudio.mp3\"\nsummarize \"\u002Fpath\u002Fto\u002Fvideo.mp4\"\n```\n\n标准输入（通过 `-` 管道传入内容）：\n\n```bash\necho \"content\" | summarize -\npbpaste | summarize -\n# 二进制标准输入也支持（PDF\u002F图像\u002F音频\u002F视频字节流）\ncat \u002Fpath\u002Fto\u002Ffile.pdf | summarize -\n```\n\n**注意事项：**\n\n- 标准输入（stdin）有 50MB 大小限制\n- `-` 参数告诉 `summarize` 从标准输入读取内容\n- 文本标准输入被视为 UTF-8 文本（仅包含空白字符的输入会被视为空输入而拒绝）\n- 二进制标准输入会保留为原始字节，文件类型会在可能的情况下自动检测\n- 适用于管道传递剪贴板内容或命令输出\n\nYouTube（支持 `youtube.com` 和 `youtu.be`）：\n\n```bash\nsummarize \"https:\u002F\u002Fyoutu.be\u002FdQw4w9WgXcQ\" --youtube auto\n```\n\n播客 RSS（转录最新一期的 enclosure）：\n\n```bash\nsummarize \"https:\u002F\u002Ffeeds.npr.org\u002F500005\u002Fpodcast.xml\"\n```\n\nApple Podcasts 节目页面：\n\n```bash\nsummarize \"https:\u002F\u002Fpodcasts.apple.com\u002Fus\u002Fpodcast\u002F2424-jelly-roll\u002Fid360084272?i=1000740717432\"\n```\n\nSpotify 节目页面（尽力而为；独家内容可能失败）：\n\n```bash\nsummarize \"https:\u002F\u002Fopen.spotify.com\u002Fepisode\u002F5auotqWAXhhKyb9ymCuBJY\"\n```\n\n### 输出长度\n\n`--length` 控制我们请求的输出量（仅为指导值），并非硬性上限。\n\n```bash\nsummarize \"https:\u002F\u002Fexample.com\" --length long\nsummarize \"https:\u002F\u002Fexample.com\" --length 20k\n```\n\n- 预设值：`short|medium|long|xl|xxl`\n- 字符目标值：`1500`、`20k`、`20000`\n- 可选硬性上限：`--max-output-tokens \u003Ccount>`（例如 `2000`、`2k`）\n  - 提供商\u002F模型 API 仍会强制执行其自身的最大输出限制。\n  - 如果省略，则不会发送最大 token 参数（使用提供商默认值）。\n  - 除非需要硬性上限，否则建议优先使用 `--length`。\n- 短内容：当提取的内容比请求的长度更短时，CLI 会原样返回该内容。\n  - 使用 `--force-summary` 可强制始终运行 LLM。\n- 最小值限制：`--length` 的数值必须 ≥ 50 个字符；`--max-output-tokens` 必须 ≥ 16。\n- 预设目标值（权威来源：`packages\u002Fcore\u002Fsrc\u002Fprompts\u002Fsummary-lengths.ts`）：\n  - short: 目标约 900 字符（范围 600–1,200）\n  - medium: 目标约 1,800 字符（范围 
1,200–2,500）\n  - long: 目标约 4,200 字符（范围 2,500–6,000）\n  - xl: 目标约 9,000 字符（范围 6,000–14,000）\n  - xxl: 目标约 17,000 字符（范围 14,000–22,000）\n\n### 支持哪些文件类型？\n\n尽力支持，具体取决于提供商。以下类型通常效果良好：\n\n- `text\u002F*` 及常见结构化文本（`.txt`、`.md`、`.json`、`.yaml`、`.xml` 等）\n  - 类文本文件会被内联到提示中，以提高与提供商的兼容性。\n- PDF：`application\u002Fpdf`（提供商支持情况各异；Google 在此最为可靠）\n- 图像：`image\u002Fjpeg`、`image\u002Fpng`、`image\u002Fwebp`、`image\u002Fgif`\n- 音频\u002F视频：`audio\u002F*`、`video\u002F*`（本地音频\u002F视频文件如 MP3\u002FWAV\u002FM4A\u002FOGG\u002FFLAC\u002FMP4\u002FMOV\u002FWEBM 会在模型支持时自动转录）\n\n注意事项：\n\n- 如果提供商拒绝某种媒体类型，CLI 会快速失败并给出友好提示。\n- xAI 模型不支持通过 AI SDK 附加通用文件（如 PDF）；此类场景请使用 Google\u002FOpenAI\u002FAnthropic。\n\n### 模型 ID\n\n使用网关风格的 ID：`\u003Cprovider>\u002F\u003Cmodel>`。\n\n示例：\n\n- `openai\u002Fgpt-5-mini`\n- `anthropic\u002Fclaude-sonnet-4-5`\n- `xai\u002Fgrok-4-fast-non-reasoning`\n- `google\u002Fgemini-3-flash`\n- `zai\u002Fglm-4.7`\n- `openrouter\u002Fopenai\u002Fgpt-5-mini`（强制使用 OpenRouter）\n\n注意：某些模型\u002F提供商不支持流式输出或特定文件媒体类型。发生这种情况时，CLI 会打印友好错误（或在提供商支持的情况下自动为该模型禁用流式输出）。\n\n### 限制\n\n- 超过 10 MB 的文本输入会在分词前被拒绝。\n- 文本提示会通过 GPT 分词器预先检查是否超出模型输入限制（基于 LiteLLM 目录）。\n\n### 常用标志\n\n```bash\nsummarize \u003Cinput> [flags]\n```\n\n使用 `summarize --help` 或 `summarize help` 查看完整帮助信息。\n\n- `--model \u003Cprovider\u002Fmodel>`：指定使用的模型（默认为 `auto`）\n- `--model auto`：自动选择模型并启用备选方案（默认）\n- `--model \u003Cname>`：使用配置中定义的模型（参见 Configuration）\n- `--timeout \u003Cduration>`：超时时间，如 `30s`、`2m`、`5000ms`（默认 `2m`）\n- `--retries \u003Ccount>`：LLM 超时时的重试次数（默认 `1`）\n- `--length short|medium|long|xl|xxl|s|m|l|\u003Cchars>`\n- `--language, --lang \u003Clanguage>`：输出语言（`auto` = 与源语言一致）\n- `--max-output-tokens \u003Ccount>`：LLM 输出 token 的硬性上限\n- `--cli [provider]`：使用 CLI 提供商（`--model cli\u002F\u003Cprovider>`）。支持 `claude`、`gemini`、`codex`、`agent`。若省略，则启用 CLI 并自动选择提供商。\n- `--stream auto|on|off`：流式输出 LLM 结果（`auto` = 仅在 TTY 中启用；在 `--json` 模式下禁用）\n- `--plain`：保留原始输出（不进行 ANSI\u002FOSC Markdown 渲染）\n- `--no-color`：禁用 ANSI 颜色\n- `--theme \u003Cname>`：CLI 主题（`aurora`、`ember`、`moss`、`mono`）\n- `--format md|text`：网站\u002F文件内容格式（默认 `text`）\n- `--markdown-mode off|auto|llm|readability`：HTML → Markdown 转换模式（默认 `readability`）\n- `--preprocess off|auto|always`：控制 `uvx markitdown` 的使用（默认 `auto`）\n  - 安装 `uvx`：`brew install uv`（或 https:\u002F\u002Fastral.sh\u002Fuv\u002F）\n- `--extract`：打印提取的内容并退出（仅适用于 URL；不支持标准输入 `-`）\n  - 已弃用的别名：`--extract-only`\n- `--slides`：为 YouTube\u002F直接视频 URL 提取幻灯片，并在摘要叙述中内联渲染（在支持的终端中自动内联显示）\n- `--slides-ocr`：对提取的幻灯片运行 OCR（需安装 `tesseract`）\n- `--slides-dir \u003Cdir>`：幻灯片图像的基础输出目录（默认 `.\u002Fslides`）\n- `--slides-scene-threshold \u003Cvalue>`：场景检测阈值（0.1–1.0）\n- `--slides-max \u003Ccount>`：最多提取的幻灯片数量（默认 `6`）\n- `--slides-min-duration \u003Cseconds>`：幻灯片之间的最小间隔秒数\n- `--json`：机器可读的输出，包含诊断信息、提示、`metrics` 和可选摘要\n- `--verbose`：在 stderr 输出调试\u002F诊断信息\n- `--metrics off|on|detailed`：指标输出（默认 `on`）\n\n### 编码 CLI（Codex、Claude、Gemini、Agent）\n\nSummarize 可以使用常见的编码 CLI 作为本地模型后端：\n\n- `codex` -> `--cli codex` \u002F `--model cli\u002Fcodex\u002F\u003Cmodel>`\n- `claude` -> `--cli claude` \u002F `--model cli\u002Fclaude\u002F\u003Cmodel>`\n- `gemini` -> `--cli gemini` \u002F `--model cli\u002Fgemini\u002F\u003Cmodel>`\n- `agent`（Cursor Agent CLI）-> `--cli agent` \u002F `--model cli\u002Fagent\u002F\u003Cmodel>`\n\n要求：\n\n- 二进制文件已安装并位于 `PATH` 中（或设置 `CODEX_PATH`、`CLAUDE_PATH`、`GEMINI_PATH`、`AGENT_PATH`）\n- 提供商已完成身份验证（`codex login`、`claude auth`、`gemini` 登录流程、`agent login` 或 `CURSOR_API_KEY`）\n\n快速冒烟测试：\n\n```bash\nprintf \"Summarize CLI smoke input.\\nOne short 
paragraph. Reply can be brief.\\n\" >\u002Ftmp\u002Fsummarize-cli-smoke.txt\n\nsummarize --cli codex --plain --timeout 2m \u002Ftmp\u002Fsummarize-cli-smoke.txt\nsummarize --cli claude --plain --timeout 2m \u002Ftmp\u002Fsummarize-cli-smoke.txt\nsummarize --cli gemini --plain --timeout 2m \u002Ftmp\u002Fsummarize-cli-smoke.txt\nsummarize --cli agent --plain --timeout 2m \u002Ftmp\u002Fsummarize-cli-smoke.txt\n```\n\n设置显式的 CLI 白名单\u002F顺序：\n\n```json\n{\n  \"cli\": { \"enabled\": [\"codex\", \"claude\", \"gemini\", \"agent\"] }\n}\n```\n\n配置隐式自动 CLI 回退：\n\n```json\n{\n  \"cli\": {\n    \"autoFallback\": {\n      \"enabled\": true,\n      \"onlyWhenNoApiKeys\": true,\n      \"order\": [\"claude\", \"gemini\", \"codex\", \"agent\"]\n    }\n  }\n}\n```\n\n更多详情：[`docs\u002Fcli.md`](docs\u002Fcli.md)\n\n### 自动模型排序\n\n`--model auto` 会根据内置规则（或你自定义的 `model.rules` 覆盖）构建候选尝试列表。  \n当满足以下任一条件时，CLI 尝试会被前置：\n\n- 设置了 `cli.enabled`（显式白名单\u002F顺序），或\n- 启用了隐式自动选择且 `cli.autoFallback` 已启用。\n\n默认回退行为：仅在未配置任何 API 密钥时触发，顺序为 `claude, gemini, codex, agent`，并记住\u002F优先使用上次成功的提供商（记录于 `~\u002F.summarize\u002Fcli-state.json`）。\n\n设置显式的 CLI 尝试：\n\n```json\n{\n  \"cli\": { \"enabled\": [\"gemini\"] }\n}\n```\n\n禁用隐式自动 CLI 回退：\n\n```json\n{\n  \"cli\": { \"autoFallback\": { \"enabled\": false } }\n}\n```\n\n注意：显式使用 `--model auto` 不会触发隐式自动 CLI 回退，除非设置了 `cli.enabled`。\n\n### 网站内容提取（Firecrawl + Markdown）\n\n非 YouTube URL 会经过 fetch -> extract 流程。当直接抓取\u002F提取被阻止或内容过少时，  \n若已配置，`--firecrawl auto` 会回退到 Firecrawl。\n\n- `--firecrawl off|auto|always`（默认 `auto`）\n- `--extract --format md|text`（默认 `text`；若省略 `--format`，则对非 YouTube URL 默认使用 `md`）\n- `--markdown-mode off|auto|llm|readability`（默认 `readability`）\n  - `auto`：在配置了 LLM 转换器时使用；可能回退到 `uvx markitdown`\n  - `llm`：强制使用 LLM 转换（需要配置模型密钥）\n  - `off`：禁用 LLM 转换（但若已配置 Firecrawl，仍可能返回其生成的 Markdown）\n- 纯文本模式：使用 `--format text`。\n\n### YouTube 字幕转录\n\n`--youtube auto` 首先尝试尽力而为的网页字幕端点。当字幕不可用时，会按以下顺序回退：\n\n1. Apify（若设置了 `APIFY_API_TOKEN`）：使用爬虫 Actor (`faVsWy9VTSNVIhWpR`)\n2. 
yt-dlp + Whisper（若 `yt-dlp` 可用）：下载音频，若已安装则使用本地 `whisper.cpp` 进行转录（优先），否则依次回退至 Groq (`GROQ_API_KEY`)、AssemblyAI (`ASSEMBLYAI_API_KEY`)、Gemini (`GEMINI_API_KEY` \u002F Google 别名)、OpenAI (`OPENAI_API_KEY`)，最后是 FAL (`FAL_KEY`)\n\nyt-dlp 模式环境变量：\n\n- `YT_DLP_PATH` - yt-dlp 二进制文件的可选路径（否则通过 `PATH` 解析）\n- `SUMMARIZE_WHISPER_CPP_MODEL_PATH` - 本地 `whisper.cpp` 模型文件的可选覆盖路径\n- `SUMMARIZE_WHISPER_CPP_BINARY` - 本地二进制文件的可选覆盖路径（默认：`whisper-cli`）\n- `SUMMARIZE_DISABLE_LOCAL_WHISPER_CPP=1` - 禁用本地 whisper.cpp（强制使用远程服务）\n- `GROQ_API_KEY` - Groq Whisper 转录\n- `ASSEMBLYAI_API_KEY` - AssemblyAI 转录\n- `GEMINI_API_KEY` - Gemini 转录（`GOOGLE_GENERATIVE_AI_API_KEY` \u002F `GOOGLE_API_KEY` 同样有效）\n- `OPENAI_API_KEY` - OpenAI Whisper 转录\n- `OPENAI_WHISPER_BASE_URL` - 可选的 OpenAI 兼容 Whisper 端点覆盖\n- `FAL_KEY` - FAL AI Whisper 回退方案\n\nApify 需付费，但在字幕存在时通常更可靠。\n\n### 幻灯片提取（YouTube + 直接视频链接）\n\n提取幻灯片截图（通过 `ffmpeg` 进行场景检测）并可选 OCR：\n\n依赖项：\n\n- `ffmpeg`：用于场景检测和帧提取\n- `yt-dlp`：用于 YouTube 视频下载\u002F流解析\n- `tesseract`：仅在使用 `--slides-ocr` 时需要\n\n```bash\nsummarize \"https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=...\" --slides\nsummarize \"https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=...\" --slides --slides-ocr\n```\n\n输出写入 `.\u002Fslides\u002F\u003CsourceId>\u002F`（或通过 `--slides-dir` 指定）。OCR 结果包含在 JSON 输出中（`--json`），并存储在幻灯片目录内的 `slides.json` 文件中。当场景检测过于稀疏时，提取器还会以固定间隔采样以提高覆盖率。  \n使用 `--slides` 时，支持的终端（kitty\u002FiTerm\u002FKonsole）会在摘要叙述中自动渲染内联缩略图（模型插入 `[slide:N]` 标记）。当终端支持 OSC-8 时，时间戳链接可点击（适用于 YouTube\u002FVimeo\u002FLoom\u002FDropbox）。若不支持内联图像，Summarize 会打印一条提示，说明磁盘上的幻灯片目录位置。\n\n使用 `--slides --extract` 可打印完整的时间戳转录，并在匹配的时间点内联插入幻灯片图像。\n\n通过 LLM 将提取的转录格式化为 Markdown（标题 + 段落）：\n\n```bash\nsummarize \"https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=...\" --extract --format md --markdown-mode llm\n```\n\n### 媒体转录（Whisper）\n\n本地音视频文件会先进行转录，再进行摘要。`--video-mode transcript` 强制将直接媒体 URL（及嵌入媒体）先通过 Whisper 处理。优先使用本地 `whisper.cpp`（若可用）；否则需要配置以下任一密钥：`GROQ_API_KEY`、`ASSEMBLYAI_API_KEY`、`GEMINI_API_KEY`（或 Google 别名）、`OPENAI_API_KEY` 或 `FAL_KEY`。\n\n### 本地 ONNX 转录（Parakeet\u002FCanary）\n\nSummarize 可通过你提供的本地 CLI 使用 NVIDIA Parakeet\u002FCanary ONNX 模型。自动选择（默认）在配置后优先使用 ONNX。\n\n- 设置助手：`summarize transcriber setup`\n- 从上游二进制文件\u002F构建安装 `sherpa-onnx`（Homebrew 可能没有公式）\n- 自动选择：设置 `SUMMARIZE_ONNX_PARAKEET_CMD` 或 `SUMMARIZE_ONNX_CANARY_CMD`（无需额外标志）\n- 强制指定模型：`--transcriber parakeet|canary|whisper|auto`\n- 文档：`docs\u002Fnvidia-onnx-transcription.md`\n\n### 已验证的播客服务（截至 2025-12-25）\n\n运行：`summarize \u003Curl>`\n\n- Apple Podcasts\n- Spotify\n- Amazon Music \u002F Audible 播客页面\n- Podbean\n- Podchaser\n- RSS 订阅源（若有 Podcasting 2.0 转录则优先使用）\n- 嵌入式 YouTube 播客页面（例如 JREPodcast）\n\n转录：若已安装，优先使用本地 `whisper.cpp`；否则在设置了密钥的情况下，依次使用 Groq、AssemblyAI、Gemini、OpenAI 或 FAL。\n\n### 翻译路径\n\n`--language\u002F--lang` 控制摘要（及其他 LLM 生成文本）的输出语言，默认为 `auto`。\n\n当输入为音频\u002F视频时，CLI 需要先获取转录文本。转录文本来自以下路径之一：\n\n1. **已有转录文本**（优先）\n   - YouTube：在可用时使用 `youtubei` \u002F `captionTracks`。\n   - 播客（Podcasts）：当 RSS 订阅源发布时，使用 Podcasting 2.0 标准中的 `\u003Cpodcast:transcript>`（JSON\u002FVTT 格式）。\n2. 
**Whisper 转录**（后备方案）\n   - YouTube：在配置后，若无现成字幕，则回退到 yt-dlp（音频下载）+ Whisper 转录；Apify 是最后的选择。\n   - 若已安装本地 `whisper.cpp` 且模型可用，则优先使用。\n   - 否则按以下顺序使用云端转录服务：Groq (`GROQ_API_KEY`) → AssemblyAI (`ASSEMBLYAI_API_KEY`) → Gemini (`GEMINI_API_KEY` 或 Google 别名) → OpenAI (`OPENAI_API_KEY`) → FAL (`FAL_KEY`)。\n\n对于直接媒体 URL，可使用 `--video-mode transcript` 强制执行“转录 → 摘要”流程：\n\n```bash\nsummarize https:\u002F\u002Fexample.com\u002Ffile.mp4 --video-mode transcript --lang en\n```\n\n### 配置\n\n单一配置位置：\n\n- `~\u002F.summarize\u002Fconfig.json`\n\n当前支持的键：\n\n```json\n{\n  \"model\": { \"id\": \"openai\u002Fgpt-5-mini\" },\n  \"env\": { \"OPENAI_API_KEY\": \"sk-...\" },\n  \"ui\": { \"theme\": \"ember\" }\n}\n```\n\n简写形式（等效）：\n\n```json\n{\n  \"model\": \"openai\u002Fgpt-5-mini\"\n}\n```\n\n其他支持项：\n\n- `model: { \"mode\": \"auto\" }`（自动选择模型并回退；详见 [docs\u002Fmodel-auto.md](docs\u002Fmodel-auto.md)）\n- `model.rules`（自定义候选模型及排序）\n- `models`（定义可通过 `--model \u003Cpreset>` 选择的预设）\n- `env`（通用环境变量默认值；进程环境变量仍优先）\n- `apiKeys`（旧版快捷方式，映射为环境变量名；新配置建议使用 `env`）\n- `cache.media`（媒体下载缓存：默认 TTL 7 天，上限 2048 MB；`--no-media-cache` 可禁用）\n- `media.videoMode: \"auto\"|\"transcript\"|\"understand\"`\n- `slides.enabled` \u002F `slides.max` \u002F `slides.ocr` \u002F `slides.dir`（`--slides` 的默认值）\n- `ui.theme: \"aurora\"|\"ember\"|\"moss\"|\"mono\"`\n- `openai.useChatCompletions: true`（强制使用 OpenAI 兼容的聊天补全接口）\n\n注意：配置文件采用宽松解析（JSON5），但不允许注释。未知键将被忽略。\n\n媒体缓存默认值：\n\n```json\n{\n  \"cache\": {\n    \"media\": { \"enabled\": true, \"ttlDays\": 7, \"maxMb\": 2048, \"verify\": \"size\" }\n  }\n}\n```\n\n注意：`--no-cache` 仅跳过摘要缓存（LLM 输出），提取\u002F转录缓存仍生效。使用 `--no-media-cache` 可跳过媒体文件缓存。\n\n模型优先级顺序：\n\n1. `--model`\n2. `SUMMARIZE_MODEL`\n3. `~\u002F.summarize\u002Fconfig.json`\n4. 默认值 (`auto`)\n\n主题（theme）优先级顺序：\n\n1. `--theme`\n2. `SUMMARIZE_THEME`\n3. `~\u002F.summarize\u002Fconfig.json` (`ui.theme`)\n4. 默认值 (`aurora`)\n\n环境变量优先级顺序：\n\n1. 进程环境变量\n2. `~\u002F.summarize\u002Fconfig.json` (`env`)\n3. 
`~\u002F.summarize\u002Fconfig.json` (`apiKeys`，旧版)\n\n### 环境变量\n\n设置与所选 `--model` 对应的密钥：\n\n- 可选的回退默认值可存储在配置中：\n  - `~\u002F.summarize\u002Fconfig.json` → `\"env\": { \"OPENAI_API_KEY\": \"sk-...\" }`\n  - 进程环境变量始终优先\n  - 旧版 `\"apiKeys\"` 仍有效（映射为环境变量名）\n\n- `OPENAI_API_KEY`（用于 `openai\u002F...`）\n- `NVIDIA_API_KEY`（用于 `nvidia\u002F...`）\n- `ANTHROPIC_API_KEY`（用于 `anthropic\u002F...`）\n- `XAI_API_KEY`（用于 `xai\u002F...`）\n- `Z_AI_API_KEY`（用于 `zai\u002F...`；支持别名 `ZAI_API_KEY`）\n- `GEMINI_API_KEY`（用于 `google\u002F...`）\n  - 同时接受别名 `GOOGLE_GENERATIVE_AI_API_KEY` 和 `GOOGLE_API_KEY`\n\nOpenAI 兼容聊天补全开关：\n\n- `OPENAI_USE_CHAT_COMPLETIONS=1`（或在配置中设置 `openai.useChatCompletions`）\n\nUI 主题：\n\n- `SUMMARIZE_THEME=aurora|ember|moss|mono`\n- `SUMMARIZE_TRUECOLOR=1`（强制启用 24-bit ANSI）\n- `SUMMARIZE_NO_TRUECOLOR=1`（禁用 24-bit ANSI）\n\nOpenRouter（OpenAI 兼容）：\n\n- 设置 `OPENROUTER_API_KEY=...`\n- 建议通过模型 ID 显式指定 OpenRouter：`--model openrouter\u002F\u003Cauthor>\u002F\u003Cslug>`\n- 内置预设：`--model free`（使用一组默认的 OpenRouter `:free` 模型）\n\n### `summarize refresh-free`\n\n快速开始：将 `free` 设为默认（同时保留 `auto` 可用）\n\n```bash\nsummarize refresh-free --set-default\nsummarize \"https:\u002F\u002Fexample.com\"\nsummarize \"https:\u002F\u002Fexample.com\" --model auto\n```\n\n该命令会重新生成 `free` 预设（即 `~\u002F.summarize\u002Fconfig.json` 中的 `models.free`），具体步骤如下：\n\n- 获取 OpenRouter `\u002Fmodels` 接口数据，筛选出 `:free` 模型\n- 默认跳过明显较小的模型（根据模型 ID\u002F名称推断参数量 \u003C27B）\n- 并发测试（并发数 4，超时 10 秒）哪些模型能返回非空文本\n- 混合选择“较智能”（上下文长度\u002F输出上限更大）和“较快”的模型\n- 优化响应时间并写回排序后的列表\n\n如果 `--model free` 停止工作，请运行：\n\n```bash\nsummarize refresh-free\n```\n\n参数说明：\n\n- `--runs 2`（默认）：对每个选定模型额外进行 N 次计时测试（总次数 = 1 + runs）\n- `--smart 3`（默认）：优先选择多少个“较智能”的模型（其余由最快模型填充）\n- `--min-params 27b`（默认）：忽略推断参数量小于 N 十亿的模型\n- `--max-age-days 180`（默认）：忽略超过 N 天未更新的模型（设为 0 可禁用）\n- `--set-default`：同时在 `~\u002F.summarize\u002Fconfig.json` 中设置 `\"model\": \"free\"`\n\n示例：\n\n```bash\nOPENROUTER_API_KEY=sk-or-... summarize \"https:\u002F\u002Fexample.com\" --model openrouter\u002Fmeta-llama\u002Fllama-3.1-8b-instruct:free\nOPENROUTER_API_KEY=sk-or-... 
summarize \"https:\u002F\u002Fexample.com\" --model openrouter\u002Fminimax\u002Fminimax-m2.5\n```\n\n如果你的 OpenRouter 账户设置了允许的提供商列表，请确保所选模型至少有一个提供商被允许。路由失败时，`summarize` 会打印出需要允许的具体提供商。\n\n旧版兼容：设置 `OPENAI_BASE_URL=https:\u002F\u002Fopenrouter.ai\u002Fapi\u002Fv1`（并提供 `OPENAI_API_KEY` 或 `OPENROUTER_API_KEY`）也可工作。\n\nNVIDIA API Catalog（OpenAI 兼容；提供免费额度）：\n\n- 设置 `NVIDIA_API_KEY=...`\n- 可选：`NVIDIA_BASE_URL=https:\u002F\u002Fintegrate.api.nvidia.com\u002Fv1`\n- 免费额度：注册后 API Catalog 试用账户初始赠送 1000 点免费 API 积分（通过 API Catalog 个人资料页的 “Request More” 最多可增至 5000 点）\n- 从 `\u002Fv1\u002Fmodels` 中选择模型 ID（例如：快速模型 `stepfun-ai\u002Fstep-3.5-flash`，较强但较慢的 `z-ai\u002Fglm5`）\n\n```bash\nexport NVIDIA_API_KEY=\"nvapi-...\"\nsummarize \"https:\u002F\u002Fexample.com\" --model nvidia\u002Fstepfun-ai\u002Fstep-3.5-flash\n```\n\nZ.AI（OpenAI 兼容）：\n\n- `Z_AI_API_KEY=...`（或 `ZAI_API_KEY=...`）\n- 可选基础 URL 覆盖：`Z_AI_BASE_URL=...`\n\n可选服务：\n\n- `FIRECRAWL_API_KEY`（网站内容提取后备）\n- `YT_DLP_PATH`（用于音频提取的 yt-dlp 二进制路径）\n- `GROQ_API_KEY`（Groq Whisper 转录）\n- `ASSEMBLYAI_API_KEY`（AssemblyAI 转录）\n- `GEMINI_API_KEY` \u002F `GOOGLE_GENERATIVE_AI_API_KEY` \u002F `GOOGLE_API_KEY`（Gemini 转录）\n- `OPENAI_API_KEY` \u002F `OPENAI_WHISPER_BASE_URL`（OpenAI Whisper 转录）\n- `FAL_KEY`（通过 FAL AI API 使用 Whisper 进行音频转录）\n- `APIFY_API_TOKEN`（YouTube 转录后备）\n\n### 模型限制（Model limits）\n\nCLI 使用 LiteLLM 模型目录来获取模型限制（例如最大输出 token 数）：\n\n- 下载自：`https:\u002F\u002Fraw.githubusercontent.com\u002FBerriAI\u002Flitellm\u002Fmain\u002Fmodel_prices_and_context_window.json`\n- 缓存位置：`~\u002F.summarize\u002Fcache\u002F`\n\n### 库的使用方式（可选）\n\n推荐（依赖最少）：\n\n- `@steipete\u002Fsummarize-core\u002Fcontent`\n- `@steipete\u002Fsummarize-core\u002Fprompts`\n\n兼容性版本（会引入 CLI 的依赖）：\n\n- `@steipete\u002Fsummarize\u002Fcontent`\n- `@steipete\u002Fsummarize\u002Fprompts`\n\n### 开发\n\n```bash\npnpm install\npnpm check\n```\n\n## 更多内容\n\n- 文档索引：[docs\u002FREADME.md](docs\u002FREADME.md)\n- CLI 提供商与配置：[docs\u002Fcli.md](docs\u002Fcli.md)\n- 自动模型规则：[docs\u002Fmodel-auto.md](docs\u002Fmodel-auto.md)\n- 网站内容提取：[docs\u002Fwebsite.md](docs\u002Fwebsite.md)\n- YouTube 处理：[docs\u002Fyoutube.md](docs\u002Fyoutube.md)\n- 媒体处理流水线：[docs\u002Fmedia.md](docs\u002Fmedia.md)\n- 配置结构与优先级：[docs\u002Fconfig.md](docs\u002Fconfig.md)\n\n## 故障排查\n\n- “Receiving end does not exist”（接收端不存在）：Chrome 尚未注入内容脚本（content script）。\n  - 扩展详情 -> 站点访问权限 -> 设置为“在所有站点上”（或允许当前域名）\n  - 刷新一次页面标签页。\n- “Failed to fetch”（获取失败）\u002F 守护进程（daemon）不可达：\n  - 执行命令：`summarize daemon status`\n  - 日志位置：`~\u002F.summarize\u002Flogs\u002Fdaemon.err.log`\n\n许可证：MIT","# Summarize 快速上手指南\n\n## 环境准备\n\n- **Node.js**：需 Node 22 或更高版本（推荐使用 [nvm](https:\u002F\u002Fgithub.com\u002Fnvm-sh\u002Fnvm) 管理）\n- **可选依赖**（用于媒体处理）：\n  - `ffmpeg`：处理音视频\n  - `yt-dlp`：下载 YouTube 视频\n  - `tesseract`：OCR 文字识别（用于幻灯片）\n  \n> 💡 国内用户建议通过 Homebrew 安装依赖（macOS\u002FLinux）或使用清华源加速 npm：\n> ```bash\n> npm config set registry https:\u002F\u002Fregistry.npmmirror.com\n> ```\n\n## 安装步骤\n\n### 1. 安装 CLI 工具（任选其一）\n\n```bash\n# 使用 npm（推荐，跨平台）\nnpm i -g @steipete\u002Fsummarize\n\n# 或使用 Homebrew（仅 macOS arm64）\nbrew install steipete\u002Ftap\u002Fsummarize\n```\n\n### 2. （可选）安装浏览器扩展\n\n- Chrome 用户：[Chrome 应用商店安装](https:\u002F\u002Fchromewebstore.google.com\u002Fdetail\u002Fsummarize\u002Fcejgnmmhbbpdmjnfppjdfkocebngehfg)\n- 安装后打开侧边栏，复制显示的 token\n- 在终端运行（替换 `\u003CTOKEN>`）：\n  ```bash\n  summarize daemon install --token \u003CTOKEN>\n  ```\n  > 此步骤会启动本地后台服务，供浏览器扩展调用媒体处理能力。\n\n### 3. 
（可选）安装媒体处理工具（如需处理视频\u002F幻灯片）\n\n```bash\n# macOS (Homebrew)\nbrew install ffmpeg yt-dlp tesseract\n\n# Ubuntu\u002FDebian\nsudo apt install ffmpeg yt-dlp tesseract-ocr\n```\n\n## 基本使用\n\n### 最简示例\n\n```bash\n# 总结网页\nsummarize \"https:\u002F\u002Fexample.com\"\n\n# 总结本地 PDF\nsummarize \".\u002Freport.pdf\"\n\n# 从剪贴板总结（macOS）\npbpaste | summarize -\n\n# 总结 YouTube 视频（自动提取字幕+幻灯片）\nsummarize \"https:\u002F\u002Fyoutu.be\u002FdQw4w9WgXcQ\" --youtube auto\n```\n\n### 常用参数\n\n```bash\n# 指定摘要长度（short\u002Fmedium\u002Flong\u002Fxl\u002Fxxl 或字符数）\nsummarize \"https:\u002F\u002Fexample.com\" --length long\n\n# 强制生成摘要（即使原文很短）\nsummarize \"https:\u002F\u002Fexample.com\" --force-summary\n\n# 指定模型（格式：\u003Cprovider>\u002F\u003Cmodel>）\nsummarize \"https:\u002F\u002Fexample.com\" --model google\u002Fgemini-3-flash\n\n# 提取内容但不总结（仅 URL）\nsummarize \"https:\u002F\u002Fexample.com\" --extract\n```\n\n> ✅ 提示：首次使用建议先尝试 `summarize \"https:\u002F\u002Fexample.com\"` 验证安装是否成功。","一位高校研究生正在撰写文献综述，需要快速消化大量学术论文（PDF）、在线讲座视频（YouTube）和播客访谈内容。\n\n### 没有 summarize 时\n- 面对几十页的 PDF 论文，只能逐字阅读或手动高亮重点，耗时且容易遗漏核心结论。\n- YouTube 上的技术讲座动辄一小时以上，没有字幕或幻灯片索引，回看查找关键观点极其低效。\n- 播客访谈内容密集但无文字稿，无法快速定位专家提到的具体方法或数据。\n- 不同来源的内容格式割裂，缺乏统一入口进行信息提炼，整理笔记过程繁琐混乱。\n- 若想提取视频中的图表或公式，需手动截图+OCR，操作复杂且准确率低。\n\n### 使用 summarize 后\n- 在终端输入 `summarize paper.pdf`，几秒内获得结构化摘要，保留研究问题、方法与结论。\n- 浏览 YouTube 讲座时打开 Chrome Side Panel，自动提取带时间戳的幻灯片卡片，点击即可跳转视频对应位置。\n- 对播客链接执行 `summarize https:\u002F\u002Fexample.podcast.mp3`，自动调用本地 Whisper 模型生成文字摘要，关键论点一目了然。\n- 无论网页、PDF、音频还是视频，均通过同一命令或侧边栏界面处理，输出统一为 Markdown，便于整合进笔记系统。\n- 视频中的幻灯片经 OCR 识别后，公式与图表文字可直接复制，大幅提升信息复用效率。\n\nsummarize 将多模态内容转化为可检索、可交互的结构化知识，让研究者从“信息搬运”转向“深度思考”。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsteipete_summarize_591d9627.png","steipete","Peter Steinberger","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fsteipete_d1c5c0c3.jpg","Came back from retirement to mess with AI. Clawdfather @OpenClaw\r\n\r\nPreviously: Founder of @PSPDFKit.","Full-Time Open-Sourcerer","Vienna & London","peter@steipete.me","http:\u002F\u002Fsteipete.me","https:\u002F\u002Fgithub.com\u002Fsteipete",[86,90,94,98,102],{"name":87,"color":88,"percentage":89},"TypeScript","#3178c6",96.9,{"name":91,"color":92,"percentage":93},"CSS","#663399",1.9,{"name":95,"color":96,"percentage":97},"HTML","#e34c26",0.8,{"name":99,"color":100,"percentage":101},"Shell","#89e051",0.2,{"name":103,"color":104,"percentage":105},"JavaScript","#f1e05a",0.1,5336,346,"2026-04-05T09:08:13","NOASSERTION","Linux, macOS, Windows","未说明",{"notes":113,"python":111,"dependencies":114},"需要安装 Node.js 22 或更高版本；若使用视频\u002F音频处理功能（如 YouTube 摘要、幻灯片 OCR），需额外安装 ffmpeg、yt-dlp 和 tesseract；浏览器扩展需配合本地守护进程（daemon）使用，该进程通过 launchd（macOS）、systemd（Linux）或计划任务（Windows）自动启动；支持多种大模型 API（如 OpenAI、Gemini、Groq 等），需配置对应 API 密钥；CLI 可独立运行，无需 GPU。",[115,116,117,118,119],"Node.js >=22","ffmpeg","yt-dlp","tesseract","@steipete\u002Fsummarize-core",[53,15,13,14],[122,123,67,124],"ai","cli","typescript",7,null,"2026-03-27T02:49:30.150509","2026-04-06T05:35:43.798045",[130,135,140,145,150],{"id":131,"question_zh":132,"answer_zh":133,"source_url":134},631,"如何在 Windows 上成功构建项目？","Windows 不支持 Unix 命令 'rm -rf'，需改用跨平台工具 rimraf。解决步骤：1. 在项目根目录运行 `pnpm add -D rimraf`；2. 修改 package.json 中的 clean 脚本为 `\"clean\": \"rimraf dist packages\u002Fcore\u002Fdist\"`；3. 
重新运行 `pnpm install` 和 `pnpm build`。此问题已在后续版本中修复。","https:\u002F\u002Fgithub.com\u002Fsteipete\u002Fsummarize\u002Fissues\u002F24",{"id":136,"question_zh":137,"answer_zh":138,"source_url":139},632,"安装后运行 CLI 报错“Cannot find built CLI”，怎么办？","该错误通常是因为全局安装后未正确构建 CLI 文件。临时解决方案是直接调用真实入口文件（绕过符号链接）。根本原因是 v0.10.0 版本尚未包含修复代码，需等待 v0.11.0 或更高版本发布。若急需使用，可从源码构建：运行 `pnpm build:cli` 或 `pnpm build` 后再安装 daemon。","https:\u002F\u002Fgithub.com\u002Fsteipete\u002Fsummarize\u002Fissues\u002F50",{"id":141,"question_zh":142,"answer_zh":143,"source_url":144},633,"使用 CLI 处理本地文件时返回“LLM returned an empty summary”，如何解决？","此问题已在主分支修复，涉及本地文件处理和 PDF 预处理逻辑。请确保使用最新版本。若仍复现，请提供 `summarize --version` 和 `--verbose` 输出以便排查。此外，CLI 文档正确地址为 https:\u002F\u002Fsummarize.sh\u002Fdocs\u002Fcli.html，旧链接已失效。","https:\u002F\u002Fgithub.com\u002Fsteipete\u002Fsummarize\u002Fissues\u002F34",{"id":146,"question_zh":147,"answer_zh":148,"source_url":149},634,"使用 google\u002Fgemini-3-flash-preview 模型时程序卡住无响应，怎么办？","gemini-3-flash-preview 是预览版模型，存在空响应或流式处理异常问题。最新版本已修复：1. 增强对 Google 空响应或嵌入错误的处理；2. 预览模型失败时自动回退到稳定版 Gemini（如 gemini-2.0-flash）；3. 默认不再优先选择预览模型。建议升级到包含提交 e58b72f 和 04f0932 的版本。","https:\u002F\u002Fgithub.com\u002Fsteipete\u002Fsummarize\u002Fissues\u002F82",{"id":151,"question_zh":152,"answer_zh":153,"source_url":154},635,"是否支持使用 Gemini 进行音频转录？","目前官方转录流程仅支持 OpenAI、FAL、Groq 及本地 whisper.cpp\u002FONNX，Gemini 尚未集成到标准媒体转录管道中。虽然项目中有用于视频理解的 Google API 调用，但不生成文本转录稿。若要实现，需扩展转录配置、新增 Gemini 提供商模块、更新进度报告并添加测试覆盖。","https:\u002F\u002Fgithub.com\u002Fsteipete\u002Fsummarize\u002Fissues\u002F89",[156,161,166,171,176,181,186,191,196,201,206,211,216,221,226],{"id":157,"version":158,"summary_zh":159,"released_at":160},100220,"v0.12.0","## 0.12.0 - 2026-03-11\n\n### Features\n\n- Models: add `nvidia\u002F...` provider alias (uses `NVIDIA_API_KEY` + optional `NVIDIA_BASE_URL`) for NVIDIA OpenAI-compatible endpoints.\n\n### Fixes\n\n- Transcription: add AssemblyAI as a first-class remote provider across direct media, podcast\u002FRSS, and yt-dlp YouTube fallback; refactor remote fallback ordering, expand config\u002Fenv support (`ASSEMBLYAI_API_KEY`, legacy `apiKeys.assemblyai`), and add AssemblyAI unit + live coverage (#126).\n- X\u002FTwitter: prefer `xurl` for tweet extraction when installed, fall back to `bird`, preserve long-form\u002Farticle text plus media URLs, add live `xurl` extraction\u002Fmedia coverage, and replace the stale dead-`bird` install tip with a current X CLI recommendation (#70).\n- Models: make daemon agent `artifacts` schemas Gemini-safe, improve Google empty-response handling with preview-to-stable fallback, and switch CLI\u002Fauto Gemini defaults away from brittle preview behavior (#82, #96).\n- Agents: expand model auto-resolution errors with checked models, missing env\u002FCLI setup, and daemon restart guidance (#107).\n- Daemon: support multiple saved extension tokens, migrate legacy single-token configs, and accept any configured token for auth (#116).\n- Chrome extension: harden side-panel slides so SSE keepalives no longer false-time out, seeded placeholders no longer block pending\u002Fcached slide runs, retries can start a fresh summarize+slides run, and reruns replace stale slide state.\n- Chrome extension: refactor side-panel navigation\u002Frun attachment policy so late summary\u002Fslide runs no longer attach to the wrong page after tab or URL switches, and expand headless regression coverage for pending-run resume and slide-mode transitions.\n- Chrome extension: default fresh installs to slide mode, keep passive 
tab navigation out of chat, and align slide cards with CLI `--slides` by preferring per-slide summary text over raw transcript\u002FOCR fallback.\n- Chrome extension tests: add stronger YouTube slide E2E coverage for loaded images, summary-backed slide text, and switching between videos mid-analysis without stale slide-summary bleed.\n- Chrome extension: isolate slide-summary stream callbacks per run and harden Playwright settings hydration so late events no longer blank slide text when switching videos mid-analysis.\n- Transcription: add Gemini audio\u002Fvideo transcription support across direct media, podcast\u002FRSS, and yt-dlp YouTube fallback, including Files API uploads for larger media plus new Gemini live coverage (#89).\n- npm packaging: publish CLI with `pnpm publish` so `@steipete\u002Fsummarize-core` is version-pinned in published metadata (no `workspace:*` in registry package).\n- Slides: detect WezTerm as an iTerm-compatible terminal for inline slide images in `--slides` mode. (#133) — thanks @doodaaatimmy-creator.\n- CLI help: surface `summarize refresh-free` in `summarize help` output.\n- CLI: report CLI provider timeouts explicitly, including the duration, command, and a `--timeout` hint instead of collapsing them into generic exec failures (#100, thanks @christophsturm).\n- Daemon: restrict CORS responses to trusted extension and localhost origins, with regression coverage for allowed and denied `Origin` headers (#108, thanks @sebastiondev).\n- Transcription: chunk oversized Groq Whisper uploads with ffmpeg in file mode instead of failing out on files above the 30MB limit (#134, thanks @WinnCook).\n- Docs: tighten landing-page mobile layout so hero, cards, code blocks, and nav stay readable on narrow screens (#118, thanks @Acidias).\n- Release: build macOS x64 Bun artifacts and add regression coverage for Homebrew formula rewrites during dual-arch releases (#122, thanks @androidshu).\n- YouTube: tighten hostname validation across core, slides, and extension helpers so attacker-controlled lookalike hosts are no longer treated as YouTube URLs (#91, thanks @RinZ27).\n- Config: honor `zai.baseUrl` config fallback for blank env values and keep Z.AI base URL overrides working outside the summary flow (#102, thanks @liuy).\n- Chrome extension: tighten options and sidepanel UI spacing, copy actions, and advanced-controls layout for a cleaner panel experience (#86, thanks @morozRed).\n- Slides: warn in summary mode when `--slides` dependencies are missing, and document required local installs for `ffmpeg`, `yt-dlp`, and optional `tesseract`.\n- Docs: fix broken docs index links by setting an empty Jekyll `baseurl` (#113, thanks @Youpen-y).\n- Models: preserve model id casing after the provider prefix so OpenAI-compatible proxies can route exact names correctly (#128, thanks @WinnCook).\n- Cache: give extract entries with unavailable transcripts the same short retry TTL as negative transcript cache entries, so transient Apify failures can recover (#115, thanks @gluneau).\n- Daemon: apply the saved env snapshot to `process.env` before `daemon run` starts so child tools inherit the right PATH and API\u002Ftool config under launchd\u002Fsystemd (#99, thanks @heyalchang).\n- Chrome automation: require s","2026-03-12T00:10:51",{"id":162,"version":163,"summary_zh":164,"released_at":165},100221,"v0.11.1","## 0.11.1 - 2026-02-14\n\n### Fixes\n\n- npm packaging: publish CLI with `pnpm publish` so `@steipete\u002Fsummarize-core` is version-pinned in published metadata (no `workspace:*` in 
registry package).\n\n## 0.11.0 - 2026-02-14\n\n### Highlights\n\n- Auto CLI fallback: new controls and persisted last-success provider state (`~\u002F.summarize\u002Fcli-state.json`) for no-key\u002Flocal-CLI workflows.\n- Transcription reliability: Groq Whisper is now the preferred cloud transcriber, with custom OpenAI-compatible Whisper endpoint overrides.\n- Input reliability: binary-safe stdin handling, local media support in `--extract`, and fixes for local-file hangs\u002FPDF preprocessing on custom OpenAI base URLs.\n\n### Features\n\n- CLI: add Cursor Agent provider (`--cli agent`) for CLI-model execution.\n- CLI auto mode: add implicit auto CLI fallback controls (`cli.autoFallback`, `--auto-cli-fallback`) and provider priority controls (`cli.providers`, `--cli-priority`), with persisted provider success ordering.\n- Transcription: add Groq Whisper as preferred cloud provider (#71, thanks @n0an).\n- Transcription: support custom OpenAI-compatible Whisper endpoints via `OPENAI_WHISPER_BASE_URL` (with safe `OPENAI_BASE_URL` fallback) (#65, thanks @toanbot).\n- Config: support generic `env` defaults in `~\u002F.summarize\u002Fconfig.json` (fallback for any env var), while keeping legacy `apiKeys` mapping for compatibility (#63, thanks @entropyy0).\n\n### Fixes\n\n- CLI local files: avoid hangs when stream usage never resolves and preprocess PDFs automatically for custom OpenAI-compatible `OPENAI_BASE_URL` endpoints (e.g. non-`api.openai.com`).\n- CLI stdin: support binary-safe piping\u002Finput temp files to prevent corruption on non-text stdin (#76).\n- Extract mode: allow `--extract` for local media files (#72).\n- Auto model\u002Fdaemon fallback: skip model attempts when required API keys are missing and normalize env-key checks in daemon fallback (#67, #78).\n- Cache: for auto presets (`auto`\u002F`free`\u002Fnamed auto), prefer preset-level winner cache entries so stale per-candidate cache hits don’t override newer better-model results.\n- Media: treat X broadcasts (`\u002Fi\u002Fbroadcasts\u002F...`) as transcript-first media and prefer URL mode.\n- YouTube: keep explicit `--youtube apify` working when HTML fetch fails, while preserving duration metadata parity (#64, thanks @entropyy0).\n- Transcription: stabilize Groq-first fallback flow (no duplicate Groq retries in file mode), improve terminal error reporting, and surface Groq setup in media guidance (#71, thanks @n0an).\n- Media detection: detect more direct media URL extensions including `.ogg`\u002F`.opus` (#65, thanks @toanbot).\n- Slides: allow yt-dlp cookies-from-browser via `SUMMARIZE_YT_DLP_COOKIES_FROM_BROWSER` to avoid YouTube 403s.\n- Daemon install: resolve symlinked\u002Fglobal bin paths and Windows shims when locating the CLI for install (#57, #62, thanks @entropyy0).\n- Extraction: strip hidden HTML + invisible Unicode before summarization or extract output (#61).\n- CLI: honor `--lang` for YouTube transcript→Markdown conversion in `--markdown-mode llm` (#56, thanks @entropyy0).\n- LLM: map Anthropic bare model ids to versioned aliases (`claude-sonnet-4` → `claude-sonnet-4-0`) (#55, thanks @entropyy0).\n\n### Improvements\n\n- Tooling: remove Biome and standardize on `oxfmt` + type-aware `oxlint`; `pnpm check` now enforces `format:check` before lint\u002Ftests.\n- Dependencies: update workspace dependencies to latest (including `@mariozechner\u002Fpi-ai` and `oxlint-tsgolint`).\n\n","2026-02-14T03:22:56",{"id":167,"version":168,"summary_zh":169,"released_at":170},100222,"v0.10.0","\n### Highlights\n\n- Chrome 
Side Panel: **Chat mode** with metrics bar, message queue, and improved context (full transcript + summary metadata, jump-to-latest).\n- Slides: **YouTube slide screenshots + OCR + transcript-aligned cards**, timestamped seek, and an OCR\u002FTranscript toggle.\n- Media-aware summarization in the Side Panel: Page vs Video\u002FAudio dropdown, automatic media preference on video sites, plus visible word count\u002Fduration.\n- CLI: robust URL + media extraction with transcript-first workflows and cache-aware streaming.\n\n### Features\n\n- Slides: extract slide screenshots + OCR for YouTube\u002Fdirect video URLs in the CLI + extension (#41, thanks @philippb).\n- Slides: top-of-summary slide strip with expand\u002Fcollapse full-width cards, timestamps, and click-to-seek.\n- Slides: slide descriptions without model calls (transcript windowing, OCR fallback) + OCR\u002FTranscript toggle.\n- Slides: stream slide extraction status\u002Fprogress and show a single header progress bar (no duplicate spinners).\n- Chrome Side Panel chat: stream agent replies over SSE and restore chat history from daemon cache (#33, thanks @dougvk).\n- Chrome Side Panel chat: timestamped transcript context plus clickable `[mm:ss]` links that seek the current media.\n- Summaries: when transcript timestamps are available, prompts require timestamped bullet summaries; side panel auto-links `[mm:ss]` in summaries for media.\n- Transcripts: `--timestamps` adds segment-level timings (`transcriptSegments` + `transcriptTimedText`) for YouTube, podcasts, and embedded captions.\n- Media-aware summarization in the Side Panel: Page vs Video\u002FAudio dropdown, automatic media preference on video sites, plus visible word count\u002Fduration.\n- CLI: transcribe local audio\u002Fvideo files with mtime-aware transcript cache invalidation (thanks @mvance!).\n- Browser extension: add Firefox sidebar build + multi-browser config (#31, thanks @vlnd0).\n- Chrome automation: add artifacts tool + REPL helpers for persistent session files (notes\u002FJSON\u002FCSV) and downloads.\n- Chrome automation: expand navigate tool with list\u002Fswitch tab support and return matching skills after navigation.\n\n### Fixes\n\n- Prompts: ignore sponsor\u002Fads segments in video and podcast summaries.\n- Prompts: enforce no-ads\u002Fno-skipped language and italicized standout excerpts (no quotation marks).\n- Media: route direct media URLs to the transcription pipeline and raise the local media limit to 2GB (#47, thanks @n0an).\n- Slides: render Slide X\u002FY labels and parse slide markers more robustly in streaming output.\n- Slides: ensure slide summary segments start with a title line when missing.\n- Slides: progress updates during yt-dlp downloads and OSC progress mirrors slide extraction.\n- Slides: reuse the media cache for downloaded videos (even with `--no-cache`).\n- Slides: clear slide progress line before the finish summary to avoid stray `Slides x\u002Fy` output.\n- Slides: parse `Slide N\u002FTotal` labels and stabilize title\u002Fbody extraction.\n- CLI: `--no-cache` now bypasses summary caching only; transcript\u002Fmedia caches still apply.\n- Chrome Side Panel chat: keep auto-scroll pinned while streaming when you’re already at the bottom.\n- Chrome Side Panel: scope streams\u002Fstate per window so other windows don’t wipe active summaries.\n- Chrome Side Panel chat: support JSON agent replies with explicit SSE\u002FJSON negotiation to avoid “stream ended” errors.\n- Chrome Side Panel chat: clear streaming placeholders on 
errors\u002Faborts.\n- Chrome Side Panel: add inline error toast above chat composer; errors stay visible when scrolled.\n- Chrome Side Panel: clear\u002Fhide the inline error toast when no message is present to avoid empty red boxes.\n- Cache: include transcript timestamp requests in extract cache keys so timed summaries don’t reuse plain transcript content.\n- Extract-only: remove implicit 8k cap; new `--max-extract-characters`\u002Fdaemon `maxExtractCharacters` allow opt-in limits; resolves transcript truncation.\n- Automation: require userScripts (no isolated-world fallback), with improved guidance and in-panel permission notice.\n- Daemon: avoid URL flow crashes when url-preference helpers are missing (ReferenceError guard).\n- CLI: clear OSC progress on SIGINT\u002FSIGTERM to avoid stuck indicators.\n- Slides: detect headline-style first lines and render them as slide titles (no required `Title:` markers).\n- YouTube: prefer English caption variants (`en-*`) when selecting caption tracks.\n\n### Improvements\n\n- Daemon: emit slides start\u002Fprogress\u002Fdone metadata in extended logging for easier debugging.\n- Media: refactor routing helpers and size policy (#48, thanks @steipete).\n- CLI: show determinate transcription progress percent when duration is known.\n- CLI: theme transcription progress lines and mirror part-based progress to OSC when duration is unknown.\n- CLI: show determinate OSC progress for transcription\u002Fdownload when totals are known.\n- CLI: keep OSC progress determinate when recent percent updates are available.\n- CLI: theme tweet\u002Fextraction progress lines for consis","2026-01-22T11:53:05",{"id":172,"version":173,"summary_zh":174,"released_at":175},100223,"v0.9.0","## 0.9.0 - 2025-12-31\n\n### Highlights\n\n- Chrome Side Panel: **Chat mode** with metrics bar, message queue, and improved context (full transcript + summary metadata, jump-to-latest, smoother auto-scroll).\n- Media-aware summarization in the Side Panel: Page vs Video\u002FAudio dropdown, automatic media preference on video sites, plus visible word count\u002Fduration.\n- Chrome extension: optional hover tooltip summaries for links (advanced setting, default off; experimental) with prompt customization.\n\n### Improvements\n\n- PDF + asset handling: send PDFs directly to Anthropic\u002FOpenAI\u002FGemini when supported; generic PDF attachments and better media URL detection.\n- Daemon: `\u002Fv1\u002Fchat` + `extractOnly`, version in health\u002Fstatus pill, optional JSON log with rotation, and more resilient restart\u002Finstall health checks.\n- Side Panel: advanced model row with “Scan free” (shows top free model after scan), a refresh summary control (cache bypass), plus richer length tooltips.\n- Side Panel UX: consolidated advanced layout and typography controls (font size A\u002FAA, line-height), streamlined setup panel with inline copy, clearer status text, and tighter model\u002Flength controls.\n- Side Panel UX: keep the Auto summarize toggle on one line in Advanced.\n- Streaming\u002Fmetrics polish: faster stream flushes, shorter OpenRouter labels on wrap, and improved extraction metadata in chat.\n\n### Fixes\n\n- Auto model selection: OpenRouter fallback now resolves provider-specific ids (dash\u002Fdot slug normalization) and skips fallback when no unique match.\n- Language auto: default to English when detection is uncertain.\n- OpenAI GPT-5: skip `temperature` in streaming requests to avoid 400s for unsupported params.\n- Side Panel stability: retryable stream errors, no 
abort crash, auto-summarize on open\u002Fsource switch, synced chat toggle state, and caret alignment.\n- YouTube duration handling: player API\u002FHTML\u002Fyt-dlp fallbacks, transcript metadata propagation, and extension duration fallbacks.\n- URL extraction: preserve final redirected URLs so shorteners (t.co) summarize the real destination.\n- Hover summaries: proxy localhost daemon calls to avoid Chrome “Local network access” prompts.\n- Install: use npm releases for osc-progress\u002Ftokentally instead of git deps.\n","2025-12-31T00:49:34",{"id":177,"version":178,"summary_zh":179,"released_at":180},100224,"v0.8.2","Includes 0.8.2, 0.8.1, and 0.8.0 (rolled up).\n\n### Fixed\n- Packaging: ship CLI runtime deps and verify pack installs core + cli tarballs before publish.\n- Packaging: move CLI runtime deps into dependencies so npm installs run cleanly.\n\n### Breaking\n- ESM-only: `@steipete\u002Fsummarize` + `@steipete\u002Fsummarize-core` no longer support CommonJS `require()`; the CLI binary is now ESM.\n\n### Highlights\n- Chrome: add a real **Side Panel** extension (MV3) that summarizes the **current tab** and renders streamed Markdown.\n- Daemon: add `summarize daemon …` (localhost server on `127.0.0.1:8787`) for extension ↔ CLI integration.\n  - Autostart: macOS LaunchAgent, Linux systemd user service, Windows Scheduled Task\n  - Token pairing (shared secret)\n  - Streaming over SSE\n  - Emit finish-line metrics over SSE (panel footer + hover details)\n  - Commands: `install`, `status`, `restart`, `uninstall`, `run`\n- Cache: add SQLite cache for transcripts\u002Fextractions\u002Fsummaries with `--no-cache`, `--cache-stats`, `--clear-cache` + config (`cache.enabled\u002FmaxMb\u002FttlDays\u002Fpath`).\n  - Finish line shows “Cached” for summary cache hits (CLI + daemon\u002Fextension)\n  - Daemon\u002FChrome stream cache status metadata (`summaryFromCache`)\n\n### Features\n- YouTube: add `--youtube no-auto` to skip auto-generated captions and prefer creator-uploaded captions; fall back to `yt-dlp` transcription (thanks @dougvk!).\n- CLI: add transcript → Markdown formatting via `--extract --format md --markdown-mode llm` (thanks @dougvk!).\n- X\u002FTwitter: auto-transcribe tweet videos via `yt-dlp`, using browser cookies (Chrome → Safari → Firefox) when available; set `TWITTER_COOKIE_SOURCE` \u002F `TWITTER_*_PROFILE` to control cookie extraction order.\n- Prompt overrides: add `--prompt`, `--prompt-file`, and config `prompt` to replace the default summary instructions.\n- Chrome Side Panel: add length + language controls (presets + custom), forwarded to the daemon.\n- Daemon API: `mode: \"auto\"` accepts both `url` + extracted page `text`; daemon picks the best pipeline (YouTube\u002Fpodcasts\u002Fmedia → URL, otherwise prefer visible page text) with a fallback attempt.\n- Daemon\u002FChrome: stream extra run metadata (`inputSummary`, `modelLabel`) over SSE for richer panel status.\n- Core: expose lightweight URL helpers at `@steipete\u002Fsummarize-core\u002Fcontent\u002Furl` (YouTube\u002FTwitter\u002Fpodcast\u002Fdirect-media detection).\n- Chrome Side Panel: new icon + extension `homepage_url` set to `summarize.sh`.\n- Providers: add configurable API base URLs (config + env) for OpenAI\u002FAnthropic\u002FGoogle\u002FxAI (thanks @bunchjesse for the nudge).\n\n### Improvements\n- Chrome Side Panel: stream SSE from the panel (no MV3 background stalls), use runtime messaging to avoid “disconnected port” errors, and improve auto-summarize de-dupe.\n- Chrome Side Panel UI: 
working status in header + 1px progress line (no layout jump), full-width subtitle, page title in header, idle subtitle shows `words\u002Fchars` (or media duration + words) + model, subtle metrics footer, continuous background, and native highlight\u002Flink accents.\n- Daemon: prefer the installed env snapshot over launchd’s minimal environment (improves `yt-dlp` \u002F `whisper.cpp` PATH reliability, especially for X\u002FTwitter video transcription).\n- X\u002FTwitter: cookie handling now delegates to `yt-dlp --cookies-from-browser` (no sweet-cookie dependency).\n- X\u002FTwitter: skip yt-dlp transcript attempts for long-form tweet text (articles).\n- Transcripts: show yt-dlp download progress bytes and stabilize totals to prevent bouncing progress bars.\n- Finish line: show transcript source labels (`YouTube` \u002F `podcast`) without repeating the label.\n- Streaming: stop\u002Fclear progress UI before first streamed output and avoid leading blank lines on non-TTY stdout.\n- URL flow: propagate `extracted.truncated` into the prompt context so summaries can reflect partial inputs.\n- Daemon: unify URL\u002Fpage summarization with the CLI flows (single code path; keeps extract\u002Fcache\u002Fmodel logic in sync).\n- Prompts: auto-require Markdown section headings for longer summaries (xl\u002Fxxl or large custom lengths).\n","2025-12-28T16:47:18",{"id":182,"version":183,"summary_zh":184,"released_at":185},100225,"v0.7.1","\n### Fixed\n\n- Packaging: `@steipete\u002Fsummarize-core` now ships a CJS build for `require()` consumers (fixes `pnpm dlx @steipete\u002Fsummarize --help` and the published CLI runtime).\n\n\n## 0.7.0 - 2025-12-26\n\n\n### Highlights\n\n- Packages: split into `@steipete\u002Fsummarize-core` (library) + `@steipete\u002Fsummarize` (CLI; depends on core). Versions are lockstep.\n- Streaming: scrollback-safe Markdown streaming (hybrid: line-by-line + block buffering for fenced code + tables). No cursor control, no full-frame redraws.\n- Output: Markdown rendering is automatic on TTY; use `--plain` for raw Markdown\u002Ftext output.\n- Finish line: compact separators (`·`) and no duplicated `… words` when transcript stats are shown.\n- YouTube: `--youtube auto` prefers `yt-dlp` transcription when available; Apify is last-last resort.\n\n### Fixed\n\n- Streaming: flush newline-bounded output in `--plain` mode to avoid duplication with cumulative stream chunks.\n- Website extraction: strip inline CSS before Readability to avoid extremely slow jsdom stylesheet parsing on some pages.\n- Twitter\u002FX: rotate Nitter hosts and skip Anubis PoW pages during tweet fallback.\n\n### Changed\n\n- CLI: remove `--render`; add `--plain` to keep raw output (no ANSI\u002FOSC rendering).\n\n","2025-12-26T23:23:44",{"id":187,"version":188,"summary_zh":189,"released_at":190},100226,"v0.6.1","## 0.6.1 - 2025-12-25\n\n### Changes\n\n- YouTube: `--youtube auto` now falls back to `yt-dlp` if it’s on `PATH` (or `YT_DLP_PATH` is set) and a Whisper provider is available.\n- `--version` now includes a short git SHA when available (build provenance).\n- `--extract` now defaults to Markdown output (when `--format` is omitted), preferring Readability input.\n- `--extract` no longer spends LLM tokens for Markdown conversion by default (unless `--markdown-mode llm` is used).\n- `--format md` no longer forces Firecrawl; use `--firecrawl always` to force it.\n- Finish line in `--extract` shows the extraction path (e.g. 
`markdown via readability`) and omits noisy `via html` output.\n- Finish line always includes the model id when an LLM is used (including `--extract --markdown-mode llm`).\n- `--extract` renders Markdown in TTY output (same renderer as summaries) when `--render auto|md` (use `--render plain` for raw Markdown).\n- Suppress transcript progress\u002Ffailure messages for non-YouTube \u002F non-podcast URLs.\n- Streaming now works with auto-selected models (including `--model free`) when `--stream on|auto`.\n- Warn when `--length` is explicitly provided with `--extract` (ignored; no summary is generated).\n","2025-12-25T14:56:25",{"id":192,"version":193,"summary_zh":194,"released_at":195},100227,"v0.6.0","\n### Features\n\n- **Podcasts (full episodes)**\n  - Support Apple Podcasts episode URLs via iTunes Lookup + enclosure transcription (avoids slow\u002Fblocked HTML).\n  - Support Spotify episode URLs via the embed page (`\u002Fembed\u002Fepisode\u002F...`) to avoid recaptcha; fall back to iTunes RSS when embed audio is DRM\u002Fmissing.\n  - Prefer local `whisper.cpp` when installed + model available (no API keys required for transcription).\n  - Whisper transcription works for any media URL (audio\u002Fvideo containers), not just YouTube.\n- **Language**\n  - Add `--language\u002F--lang` (default: `auto`, match source language).\n  - Add config support via `output.language` (legacy `language` still supported).\n- **Progress UI**\n  - Add two-phase progress for podcasts: media download + Whisper transcription progress.\n  - Show transcript phases (YouTube caption\u002FApify\u002Fyt-dlp), provider + model, and media size\u002Fduration.\n\n### Changes\n\n- **Transcription**\n  - Add lenient ffmpeg transcode fallback for local Whisper when strict decode fails (e.g. 
Spotify AAC).\n\n- **Models**\n  - Add `zai\u002F...` model alias with Z.AI base URL + chat completions by default.\n  - Add `OPENAI_USE_CHAT_COMPLETIONS` + `openai.useChatCompletions` config toggle.\n- **Metrics \u002F output**\n  - `--metrics on|detailed`: finish line includes compact transcript stats (… words, …) + media duration (when available); `--metrics detailed`: also prints input\u002Ftranscript sizes + transcript source\u002Fprovider\u002Fcache; hides `calls=1`.\n  - Smarter duration formatting (`1h 13m 4s`, `44s`) and rounded transfer rates.\n  - Make Markdown links terminal-clickable by materializing URLs.\n  - `--metrics on|detailed` renders a single finish line with a compact transcript block (… words, …) before the model.\n- **Cost**\n  - Include OpenAI Whisper transcription estimate (duration-based) in the finish line total (`txcost=…`); configurable via `openai.whisperUsdPerMinute`.\n\n### Docs\n\n- Add `docs\u002Flanguage.md` and document language config + flag usage.\n\n### Tests\n\n- Add JSON-LD graph extraction coverage.\n- Extend live podcast-host coverage (Podchaser, Spreaker, Buzzsprout).\n- Raise global branch coverage threshold to 75% and add regression coverage for podcast\u002Flanguage\u002Fprogress paths.\n\n","2025-12-25T03:00:57",{"id":197,"version":198,"summary_zh":199,"released_at":200},100228,"v0.5.0","\n### Features\n\n- **Model selection & presets**\n  - Automatic model selection (`--model auto`, now the default):\n    - Chooses models based on input kind (website\u002FYouTube\u002Ffile\u002Fimage\u002Fvideo\u002Ftext) and prompt size.\n    - Skips candidates without API keys; retries next model on request errors.\n    - Adds OpenRouter fallback attempts when `OPENROUTER_API_KEY` is present.\n    - Shows the chosen model in the progress UI.\n  - Named model presets via config (`~\u002F.summarize\u002Fconfig.json` → `models`), selectable as `--model \u003Cpreset>`.\n  - Built-in preset: `--model free` (OpenRouter `:free` candidates; override via `models.free`).\n- **OpenRouter free preset maintenance**\n  - `summarize refresh-free` regenerates `models.free` by scanning OpenRouter `:free` models and testing availability + latency.\n  - `summarize refresh-free --set-default` also sets `\"model\": \"free\"` in `~\u002F.summarize\u002Fconfig.json` (so free becomes your default).\n- **CLI models**\n  - Add `--cli \u003Cprovider>` flag (equivalent to `--model cli\u002F\u003Cprovider>`).\n  - `--cli` accepts case-insensitive providers and can be used without a provider to enable CLI auto selection.\n- **Content extraction**\n  - Website extraction detects video-only pages:\n    - YouTube embeds switch to transcript extraction automatically.\n    - Direct video URLs can be downloaded + summarized when `--video-mode auto|understand` and a Gemini key is available.\n- **Env**\n  - `.env` in the current directory is loaded automatically (so API keys work without exporting env vars).\n\n### Changes\n\n- **CLI config**\n  - Auto mode uses CLI models only when `cli.enabled` is set; order follows the list.\n  - `cli.enabled` is an allowlist for CLI usage.\n- **OpenRouter**\n  - Stop sending extra routing headers.\n  - `--model free`: when OpenRouter rejects routing with “No allowed providers”, print the exact provider names to allow and suggest running `summarize refresh-free`.\n  - `--max-output-tokens`: when explicitly set, it is also forwarded to OpenRouter calls.\n- **Refresh Free**\n  - Default extra runs reduced to 2 (total runs = 1 + runs) to reduce rate-limit 
pressure.\n  - Filter `:free` candidates by recency (default: last 180 days; configurable via `--max-age-days`).\n  - Print `ctx`\u002F`out` in `k` units for readability.\n- **Defaults**\n  - Default summary length is now `xl`.\n\n### Fixes\n\n- **LLM \u002F OpenRouter**\n  - LLM request retries (`--retries`) and clearer timeout errors.\n  - `summarize refresh-free`: detect OpenRouter free-model rate limits and back off + retry.\n- **Streaming**\n  - Normalize + de-dupe overlapping chunks to prevent repeated sections in live Markdown output.\n- **YouTube**\n  - Prefer manual captions over auto-generated when both exist. Thanks @dougvk.\n  - Always summarize YouTube transcripts in auto mode (instead of printing the transcript).\n- **Prompting & metrics**\n  - Don’t “pad” beyond input length when asking for longer summaries.\n  - `--metrics detailed`: fold metrics into finish line and make labels less cryptic.\n\n### Docs\n\n- Add documentation for presets and Refresh Free.\n- Add a “make free the default” quick start for `summarize refresh-free --set-default`.\n- Add a manual end-to-end checklist (`docs\u002Fmanual-tests.md`).\n- Add a quick CLI smoke checklist (`docs\u002Fsmoketest.md`).\n- Document CLI ordering and model selection behavior.\n\n### Tests\n\n- Add coverage for presets and Refresh Free regeneration.\n- Add live coverage for the `free` preset.\n- Add regression coverage for YouTube transcript handling and metrics formatting.\n\n","2025-12-24T00:10:00",{"id":202,"version":203,"summary_zh":204,"released_at":205},100229,"v0.4.0","### Changes\n\n- Add URL extraction mode via `--extract` (deprecated alias: `--extract-only`) with `--format md|text`.\n- Rename HTML→Markdown conversion flag to `--markdown-mode` (deprecated alias: `--markdown`).\n- Add `--preprocess off|auto|always` and a `uvx markitdown` fallback for Markdown extraction + unsupported file attachments (when `--format md` is used).\n- When `uvx` isn’t available, print an install hint (`brew install uv`).\n\n### Tests\n\n- Add coverage for preprocess + markitdown integration paths.\n","2025-12-21T00:09:09",{"id":207,"version":208,"summary_zh":209,"released_at":210},100230,"v0.3.0","### Changes\n\n- Add yt-dlp audio transcription fallback for YouTube; prefer OpenAI Whisper with FAL fallback.\n- Add `--no-playlist` to yt-dlp downloads to avoid transcript mismatches.\n- Run yt-dlp after web + Apify in `--youtube auto`, and error early for missing keys in `--youtube yt-dlp`.\n- Require Node 22+.\n- OpenRouter: respect `OPENAI_BASE_URL` consistently; apply provider ordering headers to HTML→Markdown conversion.\n- Ship a Bun bytecode macOS arm64 binary for Homebrew.\n\n### Tests\n\n- Add coverage for yt-dlp ordering, missing-key errors, and helper paths.\n- Add live coverage for yt-dlp transcript mode and missing-caption YouTube pages.\n\n### Dev\n\n- Add `Dockerfile.test` for containerized yt-dlp testing.\n","2025-12-20T17:01:29",{"id":212,"version":213,"summary_zh":214,"released_at":215},100231,"v0.2.0","### Changes\n\n- Remove map-reduce summarization; reject inputs that exceed the model’s context window.\n- Preflight prompts with a GPT tokenizer against the model’s input limit (LiteLLM catalog).\n- Reject text files over 10 MB before tokenization.\n- Reject too-small numeric `--length` \u002F `--max-output-tokens` values.\n- Cap requested summary length to extracted content length.\n- Skip summarization for tweets when extracted content is already below requested length.\n- Use bird CLI for tweet extraction when 
available; fall back to Nitter when bird fails.\n- Improve fetch spinner; show Firecrawl fallback status + reason.\n- Enforce a hard deadline for stalled streaming; fall back to non-streaming on streaming timeouts.\n- Preserve parentheses in URL paths.\n\n### Fixes\n\n- Avoid Firecrawl fallback when block keywords only appear in scripts\u002Fstyles.\n- Improve Bird\u002FNitter error messaging and install hints.\n\n### Tests\n\n- Add coverage for prompt length capping, cumulative stream merge handling, and streaming timeout fallback.\n- Add live coverage for Wikipedia URLs with parentheses.\n- Add coverage for tweet summaries bypassing the LLM when short.\n\n### Docs\n\n- Update release checklist + document input limits and minimum length\u002Ftoken values.\n\n### Dev\n\n- Add a tokenization benchmark script.\n","2025-12-20T12:09:36",{"id":217,"version":218,"summary_zh":219,"released_at":220},100232,"v0.1.2","### Fixes\n\n- Merge cumulative streaming chunks correctly.\n- Repair release script quoting.\n\n### Docs\n\n- Note all-in-one release flow.\n","2025-12-19T23:44:32",{"id":222,"version":223,"summary_zh":224,"released_at":225},100233,"v0.1.1","### Fixes\n\n- Accept common “pasted URL” patterns like `url (canonical)` and clean up accidental `\\\\?` \u002F `\\\\=` \u002F `%5C` before query separators.\n\n### Install\n\n- Homebrew (macOS arm64 Bun binary): `brew install steipete\u002Ftap\u002Fsummarize`\n","2025-12-19T18:00:04",{"id":227,"version":228,"summary_zh":229,"released_at":230},100234,"v0.1.0","First public release.\n\n- npm: `npm i -g @steipete\u002Fsummarize` (or `npx -y @steipete\u002Fsummarize`)\n- Homebrew (macOS arm64 Bun binary): `brew install steipete\u002Ftap\u002Fsummarize`\n","2025-12-19T15:40:56"]