[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-fishaudio--fish-speech":3,"tool-fishaudio--fish-speech":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 
50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":79,"owner_email":79,"owner_twitter":80,"owner_website":81,"owner_url":82,"languages":83,"stars":114,"forks":115,"last_commit_at":116,"license":117,"difficulty_score":10,"env_os":118,"env_gpu":119,"env_ram":118,"env_deps":120,"category_tags":123,"github_topics":124,"view_count":132,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":133,"updated_at":134,"faqs":135,"releases":164},616,"fishaudio\u002Ffish-speech","fish-speech","SOTA Open Source TTS","fish-speech 是一款开源的顶尖文本转语音（TTS）系统，专注于实现高保真、多语言的语音合成与克隆。它主要解决了传统语音合成中发音机械、情感表达匮乏以及跨语言支持不足的痛点，让生成的机器声音更加自然流畅，接近真人交流体验。\n\n在技术层面，fish-speech 基于 Fish Audio S2 Pro 模型，采用了先进的双自回归（Dual-AR）架构。fish-speech 在超过 1000 万小时的高质量音频数据上进行训练，能够流畅处理 80 多种语言，并具备强大的零样本语音克隆能力。这种设计不仅提升了语音的自然度，还大幅降低了资源消耗。\n\nfish-speech 非常适合开发者、AI 研究人员以及内容创作者。开发者可以轻松通过 WebUI 或命令行将其集成到应用中；研究人员可借此探索多模态模型的边界；而普通用户则能利用它快速制作高质量的配音视频。项目提供了丰富的部署选项，包括 Docker 和服务器模式，社区支持活跃。需要注意的是，fish-speech 遵循特定的研究许可协议，使用者应遵守相关条款，避免违规用途。","\u003Cdiv align=\"center\">\n\u003Ch1>Fish Speech\u003C\u002Fh1>\n\n**English** | [简体中文](docs\u002FREADME.zh.md) | [Portuguese](docs\u002FREADME.pt-BR.md) | [日本語](docs\u002FREADME.ja.md) | [한국어](docs\u002FREADME.ko.md) | [العربية](docs\u002FREADME.ar.md) | [Español](docs\u002FREADME.es.md)  \u003Cbr>\n\n\u003Ca href=\"https:\u002F\u002Fwww.producthunt.com\u002Fproducts\u002Ffish-speech?embed=true&utm_source=badge-top-post-badge&utm_medium=badge&utm_source=badge-fish&#0045;audio&#0045;s1\" target=\"_blank\">\u003Cimg src=\"https:\u002F\u002Fapi.producthunt.com\u002Fwidgets\u002Fembed-image\u002Fv1\u002Ftop-post-badge.svg?post_id=1023740&theme=light&period=daily&t=1761164814710\" alt=\"Fish&#0032;Audio&#0032;S1 - Expressive&#0032;Voice&#0032;Cloning&#0032;and&#0032;Text&#0045;to&#0045;Speech | Product Hunt\" style=\"width: 250px; height: 54px;\" width=\"250\" height=\"54\" \u002F>\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Ftrendshift.io\u002Frepositories\u002F7014\" target=\"_blank\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffishaudio_fish-speech_readme_4a68feb902da.png\" alt=\"fishaudio%2Ffish-speech | Trendshift\" style=\"width: 250px; height: 55px;\" width=\"250\" height=\"55\"\u002F>\n\u003C\u002Fa>\n\u003Cbr>\n\u003C\u002Fdiv>\n\u003Cbr>\n\n\u003Cdiv align=\"center\">\n    \u003Cimg 
src=\"https:\u002F\u002Fcount.getloli.com\u002Fget\u002F@fish-speech?theme=asoul\" \u002F>\u003Cbr>\n\u003C\u002Fdiv>\n\n\u003Cbr>\n\n\u003Cdiv align=\"center\">\n    \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fdiscord.gg\u002FEs5qTB9BcN\">\n        \u003Cimg alt=\"Discord\" src=\"https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F1214047546020728892?color=%23738ADB&label=Discord&logo=discord&logoColor=white&style=flat-square\"\u002F>\n    \u003C\u002Fa>\n    \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fhub.docker.com\u002Fr\u002Ffishaudio\u002Ffish-speech\">\n        \u003Cimg alt=\"Docker\" src=\"https:\u002F\u002Fimg.shields.io\u002Fdocker\u002Fpulls\u002Ffishaudio\u002Ffish-speech?style=flat-square&logo=docker\"\u002F>\n    \u003C\u002Fa>\n    \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fpd.qq.com\u002Fs\u002Fbwxia254o\">\n      \u003Cimg alt=\"QQ Channel\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FQQ-blue?logo=tencentqq\">\n    \u003C\u002Fa>\n\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\n    \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fhuggingface.co\u002Ffishaudio\u002Fs2-pro\">\n        \u003Cimg alt=\"HuggingFace Model\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F🤗%20-models-orange\"\u002F>\n    \u003C\u002Fa>\n    \u003Ca target=\"_blank\" href=\"https:\u002F\u002Ffish.audio\u002Fblog\u002Ffish-audio-open-sources-s2\u002F\">\n        \u003Cimg alt=\"Fish Audio Blog\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FBlog-Fish_Audio_S2-1f7a8c?style=flat-square&logo=readme&logoColor=white\"\u002F>\n    \u003C\u002Fa>\n    \u003Ca target=\"_blank\" href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.08823\">\n        \u003Cimg alt=\"Paper | Technical Report\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPaper-Technical_Report-b31b1b?style=flat-square\"\u002F>\n    \u003C\u002Fa>\n\u003C\u002Fdiv>\n\n> [!IMPORTANT]\n> **License Notice**  \n> This codebase and its associated model weights are released under **[FISH AUDIO RESEARCH LICENSE](LICENSE)**. Please refer to [LICENSE](LICENSE) for more details. We will take action against any violation of the license.\n\n> [!WARNING]\n> **Legal Disclaimer**  \n> We do not hold any responsibility for any illegal usage of the codebase. 
Please refer to your local laws about DMCA and other related laws.\n\n## Quick Start\n\n### For Human\n\nHere are the official documents for Fish Audio S2, follow the instructions to get started easily.\n\n- [Installation](https:\u002F\u002Fspeech.fish.audio\u002Finstall\u002F)\n- [Command Line Inference](https:\u002F\u002Fspeech.fish.audio\u002Finference\u002F#command-line-inference)\n- [WebUI Inference](https:\u002F\u002Fspeech.fish.audio\u002Finference\u002F#webui-inference)\n- [Server Inference](https:\u002F\u002Fspeech.fish.audio\u002Fserver\u002F)\n- [Docker Setup](https:\u002F\u002Fspeech.fish.audio\u002Finstall\u002F#docker-setup)\n\n> [!IMPORTANT]\n> **For SGLang server, please read [SGLang-Omni README](https:\u002F\u002Fgithub.com\u002Fsgl-project\u002Fsglang-omni\u002Fblob\u002Fmain\u002Fsglang_omni\u002Fmodels\u002Ffishaudio_s2_pro\u002FREADME.md).**\n\n### For LLM Agent\n\n```\nInstall and configure Fish-Audio S2 by following the instructions here: https:\u002F\u002Fspeech.fish.audio\u002Finstall\u002F\n```\n\n## Fish Audio S2 Pro\n**State-of-the-art multilingual text-to-speech (TTS) system, redefining the boundaries of voice generation.**\n\nFish Audio S2 Pro is the most advanced multimodal model developed by [Fish Audio](https:\u002F\u002Ffish.audio\u002F). Trained on over **10 million hours** of audio data covering more than **80 languages**, S2 Pro combines a **Dual-Autoregressive (Dual-AR)** architecture with reinforcement learning (RL) alignment to generate speech that is exceptionally natural, realistic, and emotionally rich, leading the competition among both open-source and closed-source systems.\n\nThe core strength of S2 Pro lies in its support for **sub-word level** fine-grained control of prosody and emotion using natural language tags (e.g., `[whisper]`, `[excited]`, `[angry]`), while natively supporting multi-speaker and multi-turn conversation generation.\n\nVisit the [Fish Audio website](https:\u002F\u002Ffish.audio\u002F) for a live playground, or read our [technical report](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.08823) and [blog post](https:\u002F\u002Ffish.audio\u002Fblog\u002Ffish-audio-open-sources-s2\u002F) for more details.\n\n### Model Variants\n\n| Model | Size | Availability | Description |\n|------|------|-------------|-------------|\n| S2-Pro | 4B parameters | [HuggingFace](https:\u002F\u002Fhuggingface.co\u002Ffishaudio\u002Fs2-pro) | Full-featured flagship model with maximum quality and stability |\n\nMore details of the model can be found in the [technical report](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.01156).\n\n## Benchmark Results\n\n| Benchmark | Fish Audio S2 |\n|------|------|\n| Seed-TTS Eval — WER (Chinese) | **0.54%** (best overall) |\n| Seed-TTS Eval — WER (English) | **0.99%** (best overall) |\n| Audio Turing Test (with instruction) | **0.515** posterior mean |\n| EmergentTTS-Eval — Win Rate | **81.88%** (highest overall) |\n| Fish Instruction Benchmark — TAR | **93.3%** |\n| Fish Instruction Benchmark — Quality | **4.51 \u002F 5.0** |\n| Multilingual (MiniMax Testset) — Best WER | **11 of 24** languages |\n| Multilingual (MiniMax Testset) — Best SIM | **17 of 24** languages |\n\nOn Seed-TTS Eval, S2 achieves the lowest WER among all evaluated models including closed-source systems: Qwen3-TTS (0.77\u002F1.24), MiniMax Speech-02 (0.99\u002F1.90), Seed-TTS (1.12\u002F2.25). On the Audio Turing Test, 0.515 surpasses Seed-TTS (0.417) by 24% and MiniMax-Speech (0.387) by 33%. 
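(The relative gains follow directly from the posterior means: (0.515 - 0.417) \u002F 0.417 ≈ 24% and (0.515 - 0.387) \u002F 0.387 ≈ 33%.) 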
On EmergentTTS-Eval, S2 achieves particularly strong results in paralinguistics (91.61% win rate), questions (84.41%), and syntactic complexity (83.39%).\n\n## Highlights\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffishaudio_fish-speech_readme_3efec438d5b4.png\" width=200%>\n\n### Fine-Grained Inline Control via Natural Language\n\nS2 Pro brings unprecedented \"soul\" to speech. Using simple `[tag]` syntax, you can precisely embed emotional instructions at any position in the text.\n- **15,000+ Unique Tags Supported**: Not limited to fixed presets; S2 supports **free-form text descriptions**. Try `[whisper in small voice]`, `[professional broadcast tone]`, or `[pitch up]`.\n- **Rich Emotion Library**:\n  `[pause]` `[emphasis]` `[laughing]` `[inhale]` `[chuckle]` `[tsk]` `[singing]` `[excited]` `[laughing tone]` `[interrupting]` `[chuckling]` `[excited tone]` `[volume up]` `[echo]` `[angry]` `[low volume]` `[sigh]` `[low voice]` `[whisper]` `[screaming]` `[shouting]` `[loud]` `[surprised]` `[short pause]` `[exhale]` `[delight]` `[panting]` `[audience laughter]` `[with strong accent]` `[volume down]` `[clearing throat]` `[sad]` `[moaning]` `[shocked]`\n\n### Innovative Dual-Autoregressive (Dual-AR) Architecture\n\nS2 Pro adopts a master-slave Dual-AR architecture consisting of a decoder-only transformer and an RVQ audio codec (10 codebooks, ~21 Hz):\n\n- **Slow AR (4B parameters)**: Operates along the time axis, predicting the primary semantic codebook.\n- **Fast AR (400M parameters)**: Generates the remaining 9 residual codebooks at each time step, reconstructing exquisite acoustic details.\n\nThis asymmetric design achieves peak audio fidelity while significantly boosting inference speed.\n\n### Reinforcement Learning (RL) Alignment\n\nS2 Pro utilizes **Group Relative Policy Optimization (GRPO)** for post-training alignment. 
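(GRPO estimates each sample's advantage relative to the mean reward of its sampling group, rather than training a separate value network.) 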
We use the same model suite for data cleaning and annotation directly as Reward Models, perfectly resolving the distribution mismatch between pre-training data and post-training objectives.\n- **Multi-Dimensional Reward Signals**: Comprehensively evaluates semantic accuracy, instruction adherence, acoustic preference scoring, and timbre similarity to ensure every second of generated speech feels intuitive to humans.\n\n### Extreme Streaming Performance (Powered by SGLang)\n\nAs the Dual-AR architecture is structurally isomorphic to standard LLMs, S2 Pro natively supports all SGLang inference acceleration features, including Continuous Batching, Paged KV Cache, CUDA Graph, and RadixAttention-based Prefix Caching.\n\n**Performance on a single NVIDIA H200 GPU:**\n- **Real-Time Factor (RTF)**: 0.195\n- **Time-to-First-Audio (TTFA)**: ~100 ms\n- **Extreme Throughput**: 3,000+ acoustic tokens\u002Fs while maintaining RTF \u003C 0.5\n\n### Robust Multilingual Support\n\nS2 Pro supports over 80 languages without requiring phonemes or language-specific preprocessing:\n\n- **Tier 1**: Japanese (ja), English (en), Chinese (zh)\n- **Tier 2**: Korean (ko), Spanish (es), Portuguese (pt), Arabic (ar), Russian (ru), French (fr), German (de)\n- **Global Coverage**: sv, it, tr, no, nl, cy, eu, ca, da, gl, ta, hu, fi, pl, et, hi, la, ur, th, vi, jw, bn, yo, xsl, cs, sw, nn, he, ms, uk, id, kk, bg, lv, my, tl, sk, ne, fa, af, el, bo, hr, ro, sn, mi, yi, am, be, km, is, az, sd, br, sq, ps, mn, ht, ml, sr, sa, te, ka, bs, pa, lt, kn, si, hy, mr, as, gu, fo, etc.\n\n### Native Multi-Speaker Generation\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffishaudio_fish-speech_readme_a9e917f0d2bd.png\" width=200%>\n\nFish Audio S2 allows users to upload reference audio containing multiple speakers, and the model processes each speaker's features via the `\u003C|speaker:i|>` token. You can then control the model's performance via speaker ID tokens, enabling a single generation to include multiple speakers. There is no longer a need to upload separate reference audio for each individual speaker.\n\n### Multi-Turn Generation\n\nThanks to the expansion of the model context, our model can now leverage previous information to improve the expressiveness of subsequent generated content, thereby increasing the naturalness of the dialogue.\n\n### Rapid Voice Cloning\n\nFish Audio S2 supports accurate voice cloning using short reference samples (typically 10-30 seconds). 
The model captures timbre, speaking style, and emotional tendencies, producing realistic and consistent cloned voices without additional fine-tuning.\nFor SGLang Server usage, please refer to the [SGLang-Omni README](https:\u002F\u002Fgithub.com\u002Fsgl-project\u002Fsglang-omni\u002Fblob\u002Fmain\u002Fsglang_omni\u002Fmodels\u002Ffishaudio_s2_pro\u002FREADME.md).\n\n---\n\n## Credits\n\n- [VITS2 (daniilrobnikov)](https:\u002F\u002Fgithub.com\u002Fdaniilrobnikov\u002Fvits2)\n- [Bert-VITS2](https:\u002F\u002Fgithub.com\u002Ffishaudio\u002FBert-VITS2)\n- [GPT VITS](https:\u002F\u002Fgithub.com\u002Finnnky\u002Fgpt-vits)\n- [MQTTS](https:\u002F\u002Fgithub.com\u002Fb04901014\u002FMQTTS)\n- [GPT Fast](https:\u002F\u002Fgithub.com\u002Fpytorch-labs\u002Fgpt-fast)\n- [GPT-SoVITS](https:\u002F\u002Fgithub.com\u002FRVC-Boss\u002FGPT-SoVITS)\n- [Qwen3](https:\u002F\u002Fgithub.com\u002FQwenLM\u002FQwen3)\n\n## Tech Report\n```bibtex\n@misc{fish-speech-v1.4,\n      title={Fish-Speech: Leveraging Large Language Models for Advanced Multilingual Text-to-Speech Synthesis},\n      author={Shijia Liao and Yuxuan Wang and Tianyu Li and Yifan Cheng and Ruoyi Zhang and Rongzhi Zhou and Yijin Xing},\n      year={2024},\n      eprint={2411.01156},\n      archivePrefix={arXiv},\n      primaryClass={cs.SD},\n      url={https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.01156},\n}\n\n@misc{liao2026fishaudios2technical,\n      title={Fish Audio S2 Technical Report}, \n      author={Shijia Liao and Yuxuan Wang and Songting Liu and Yifan Cheng and Ruoyi Zhang and Tianyu Li and Shidong Li and Yisheng Zheng and Xingwei Liu and Qingzheng Wang and Zhizhuo Zhou and Jiahua Liu and Xin Chen and Dawei Han},\n      year={2026},\n      eprint={2603.08823},\n      archivePrefix={arXiv},\n      primaryClass={cs.SD},\n      url={https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.08823}, \n}\n```\n","\u003Cdiv align=\"center\">\n\u003Ch1>Fish Speech\u003C\u002Fh1>\n\n**English** | [简体中文](docs\u002FREADME.zh.md) | [Portuguese](docs\u002FREADME.pt-BR.md) | [日本語](docs\u002FREADME.ja.md) | [한국어](docs\u002FREADME.ko.md) | [العربية](docs\u002FREADME.ar.md) | [Español](docs\u002FREADME.es.md)  \u003Cbr>\n\n\u003Ca href=\"https:\u002F\u002Fwww.producthunt.com\u002Fproducts\u002Ffish-speech?embed=true&utm_source=badge-top-post-badge&utm_medium=badge&utm_source=badge-fish&#0045;audio&#0045;s1\" target=\"_blank\">\u003Cimg src=\"https:\u002F\u002Fapi.producthunt.com\u002Fwidgets\u002Fembed-image\u002Fv1\u002Ftop-post-badge.svg?post_id=1023740&theme=light&period=daily&t=1761164814710\" alt=\"Fish&#0032;Audio&#0032;S1 - Expressive&#0032;Voice&#0032;Cloning&#0032;and&#0032;Text&#0045;to&#0045;Speech | Product Hunt\" style=\"width: 250px; height: 54px;\" width=\"250\" height=\"54\" \u002F>\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Ftrendshift.io\u002Frepositories\u002F7014\" target=\"_blank\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffishaudio_fish-speech_readme_4a68feb902da.png\" alt=\"fishaudio%2Ffish-speech | Trendshift\" style=\"width: 250px; height: 55px;\" width=\"250\" height=\"55\"\u002F>\n\u003C\u002Fa>\n\u003Cbr>\n\u003C\u002Fdiv>\n\u003Cbr>\n\n\u003Cdiv align=\"center\">\n    \u003Cimg src=\"https:\u002F\u002Fcount.getloli.com\u002Fget\u002F@fish-speech?theme=asoul\" \u002F>\u003Cbr>\n\u003C\u002Fdiv>\n\n\u003Cbr>\n\n\u003Cdiv align=\"center\">\n    \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fdiscord.gg\u002FEs5qTB9BcN\">\n        \u003Cimg alt=\"Discord\" 
src=\"https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F1214047546020728892?color=%23738ADB&label=Discord&logo=discord&logoColor=white&style=flat-square\"\u002F>\n    \u003C\u002Fa>\n    \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fhub.docker.com\u002Fr\u002Ffishaudio\u002Ffish-speech\">\n        \u003Cimg alt=\"Docker\" src=\"https:\u002F\u002Fimg.shields.io\u002Fdocker\u002Fpulls\u002Ffishaudio\u002Ffish-speech?style=flat-square&logo=docker\"\u002F>\n    \u003C\u002Fa>\n    \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fpd.qq.com\u002Fs\u002Fbwxia254o\">\n      \u003Cimg alt=\"QQ Channel\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FQQ-blue?logo=tencentqq\">\n    \u003C\u002Fa>\n\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\n    \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fhuggingface.co\u002Ffishaudio\u002Fs2-pro\">\n        \u003Cimg alt=\"HuggingFace Model\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F🤗%20-models-orange\"\u002F>\n    \u003C\u002Fa>\n    \u003Ca target=\"_blank\" href=\"https:\u002F\u002Ffish.audio\u002Fblog\u002Ffish-audio-open-sources-s2\u002F\">\n        \u003Cimg alt=\"Fish Audio Blog\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FBlog-Fish_Audio_S2-1f7a8c?style=flat-square&logo=readme&logoColor=white\"\u002F>\n    \u003C\u002Fa>\n    \u003Ca target=\"_blank\" href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.08823\">\n        \u003Cimg alt=\"Paper | Technical Report\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPaper-Technical_Report-b31b1b?style=flat-square\"\u002F>\n    \u003C\u002Fa>\n\u003C\u002Fdiv>\n\n> [!IMPORTANT]\n> **许可声明**  \n> 本代码库及其关联的模型权重均根据 **[FISH AUDIO RESEARCH LICENSE](LICENSE)** 发布。有关更多详细信息，请参阅 [LICENSE](LICENSE)。我们将对任何违反许可证的行为采取行动。\n\n> [!WARNING]\n> **法律免责声明**  \n> 我们对代码库的任何非法使用不承担任何责任。请参照您当地关于 DMCA 和其他相关法律的规定。\n\n## 快速开始\n\n### 面向人类用户\n\n以下是 Fish Audio S2 的官方文档，请按照说明轻松上手。\n\n- [安装](https:\u002F\u002Fspeech.fish.audio\u002Finstall\u002F)\n- [命令行推理](https:\u002F\u002Fspeech.fish.audio\u002Finference\u002F#command-line-inference)\n- [WebUI 推理](https:\u002F\u002Fspeech.fish.audio\u002Finference\u002F#webui-inference)\n- [服务器推理](https:\u002F\u002Fspeech.fish.audio\u002Fserver\u002F)\n- [Docker 设置](https:\u002F\u002Fspeech.fish.audio\u002Finstall\u002F#docker-setup)\n\n> [!IMPORTANT]\n> **对于 SGLang 服务器，请阅读 [SGLang-Omni README](https:\u002F\u002Fgithub.com\u002Fsgl-project\u002Fsglang-omni\u002Fblob\u002Fmain\u002Fsglang_omni\u002Fmodels\u002Ffishaudio_s2_pro\u002FREADME.md)。**\n\n### 面向 LLM 智能体\n\n```\nInstall and configure Fish-Audio S2 by following the instructions here: https:\u002F\u002Fspeech.fish.audio\u002Finstall\u002F\n```\n\n## Fish Audio S2 Pro\n**最先进的多语言文本转语音 (TTS) 系统，重新定义了语音生成的边界。**\n\nFish Audio S2 Pro 是由 [Fish Audio](https:\u002F\u002Ffish.audio\u002F) 开发的最先进的多模态模型。该模型在超过 **1000 万小时** 的音频数据上进行了训练，涵盖 **80 多种语言**。S2 Pro 结合了 **双自回归 (Dual-AR)** 架构与强化学习 (RL) 对齐，以生成极其自然、逼真且情感丰富的语音，在开源和闭源系统中均处于领先地位。\n\nS2 Pro 的核心优势在于其支持使用自然语言标签（例如 `[whisper]`、`[excited]`、`[angry]`）进行 **子词级别** 的韵律和情感细粒度控制，同时原生支持多说话人和多轮对话生成。\n\n访问 [Fish Audio 网站](https:\u002F\u002Ffish.audio\u002F) 体验在线演示，或阅读我们的 [技术报告](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.08823) 和 [博客文章](https:\u002F\u002Ffish.audio\u002Fblog\u002Ffish-audio-open-sources-s2\u002F) 了解更多详情。\n\n### 模型变体\n\n| 模型 | 大小 | 可用性 | 描述 |\n|------|------|-------------|-------------|\n| S2-Pro | 4B 参数 | [HuggingFace](https:\u002F\u002Fhuggingface.co\u002Ffishaudio\u002Fs2-pro) | 全功能旗舰模型，具备最高质量和稳定性 |\n\n有关模型的更多详细信息，请参阅 
[技术报告](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.01156)。\n\n## 基准测试结果\n\n| 基准测试 | Fish Audio S2 |\n|------|------|\n| Seed-TTS 评估 — WER（中文） | **0.54%** （整体最佳） |\n| Seed-TTS 评估 — WER（英文） | **0.99%** （整体最佳） |\n| 音频图灵测试（带指令） | **0.515** 后验均值 |\n| EmergentTTS-Eval — 胜率 | **81.88%** （整体最高） |\n| Fish 指令基准测试 — TAR | **93.3%** |\n| Fish 指令基准测试 — 质量 | **4.51 \u002F 5.0** |\n| 多语言（MiniMax 测试集）— 最佳 WER | **24 种语言中的 11 种** |\n| 多语言（MiniMax 测试集）— 最佳 SIM | **24 种语言中的 17 种** |\n\n在 Seed-TTS 评估中，S2 在所有评估模型（包括闭源系统）中实现了最低的 WER：Qwen3-TTS (0.77\u002F1.24)、MiniMax Speech-02 (0.99\u002F1.90)、Seed-TTS (1.12\u002F2.25)。在音频图灵测试中，0.515 比 Seed-TTS (0.417) 高出 24%，比 MiniMax-Speech (0.387) 高出 33%。在 EmergentTTS-Eval 中，S2 在副语言特征（91.61% 胜率）、问答（84.41%）和句法复杂度（83.39%）方面取得了特别强的结果。\n\n## 亮点\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffishaudio_fish-speech_readme_3efec438d5b4.png\" width=200%>\n\n### 通过自然语言实现细粒度内联控制\n\nS2 Pro 为语音带来了前所未有的“灵魂”。使用简单的 `[tag]` 语法，您可以在文本的任何位置精确嵌入情感指令。\n- **支持 15,000+ 个唯一标签**：不限于固定预设；S2 支持**自由形式的文本描述**。尝试 `[whisper in small voice]`、`[professional broadcast tone]` 或 `[pitch up]`。\n- **丰富的情感库**：\n`[pause]` `[emphasis]` `[laughing]` `[inhale]` `[chuckle]` `[tsk]` `[singing]` `[excited]` `[laughing tone]` `[interrupting]` `[chuckling]` `[excited tone]` `[volume up]` `[echo]` `[angry]` `[low volume]` `[sigh]` `[low voice]` `[whisper]` `[screaming]` `[shouting]` `[loud]` `[surprised]` `[short pause]` `[exhale]` `[delight]` `[panting]` `[audience laughter]` `[with strong accent]` `[volume down]` `[clearing throat]` `[sad]` `[moaning]` `[shocked]`\n\n### 创新的双自回归（Dual-AR）架构\n\nS2 Pro 采用了主从式 Dual-AR（双自回归）架构，由仅解码器 Transformer（Decoder-only Transformer）和 RVQ（残差矢量量化）音频编解码器组成（10 个码本，~21 Hz）：\n\n- **慢速 AR（4B 参数）**：沿时间轴运行，预测主要的语义码本。\n- **快速 AR（4 亿参数）**：在每个时间步生成剩余的 9 个残差码本，重建精细的声学细节。\n\n这种非对称设计在显著提升推理速度的同时实现了最高的音频保真度。\n\n### 强化学习（RL）对齐\n\nS2 Pro 利用**组相对策略优化（GRPO）**进行后训练对齐。我们直接使用相同的模型套件进行数据清洗和标注作为 Reward Models（奖励模型），完美解决了预训练数据与后训练目标之间的分布不匹配问题。\n- **多维奖励信号**：综合评估语义准确性、指令遵循度、声学偏好评分和音色相似度，确保生成的每一秒语音对人类来说都感觉直观自然。\n\n### 极致的流式性能（由 SGLang 驱动）\n\n由于 Dual-AR 架构在结构上与标准大语言模型（LLMs）同构，S2 Pro 原生支持所有 SGLang 推理加速功能，包括连续批处理（Continuous Batching）、分页 KV 缓存（Paged KV Cache）、CUDA 图（CUDA Graph）以及基于 RadixAttention 的前缀缓存（Prefix Caching）。\n\n**单张 NVIDIA H200 GPU 上的性能：**\n- **实时因子（RTF）**：0.195\n- **首帧音频延迟（TTFA）**：~100 毫秒\n- **极高吞吐量**：3,000+ 声学 Token\u002F秒，同时保持 RTF \u003C 0.5\n\n### 强大的多语言支持\n\nS2 Pro 支持超过 80 种语言，无需音素或特定语言的预处理：\n\n- **第一梯队**：日语 (ja)、英语 (en)、中文 (zh)\n- **第二梯队**：韩语 (ko)、西班牙语 (es)、葡萄牙语 (pt)、阿拉伯语 (ar)、俄语 (ru)、法语 (fr)、德语 (de)\n- **全球覆盖**：sv, it, tr, no, nl, cy, eu, ca, da, gl, ta, hu, fi, pl, et, hi, la, ur, th, vi, jw, bn, yo, xsl, cs, sw, nn, he, ms, uk, id, kk, bg, lv, my, tl, sk, ne, fa, af, el, bo, hr, ro, sn, mi, yi, am, be, km, is, az, sd, br, sq, ps, mn, ht, ml, sr, sa, te, ka, bs, pa, lt, kn, si, hy, mr, as, gu, fo, 等。\n\n### 原生多说话人生成\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffishaudio_fish-speech_readme_a9e917f0d2bd.png\" width=200%>\n\nFish Audio S2 允许用户上传包含多个说话人的参考音频，模型通过 `\u003C|speaker:i|>` Token（令牌）处理每个说话人的特征。随后，您可以通过说话人 ID 令牌控制模型的表现，从而实现单次生成包含多个说话人。不再需要为每个单独的说话人上传单独的参考音频。\n\n### 多轮对话生成\n\n得益于模型上下文的扩展，我们的模型现在可以利用先前的信息来提高后续生成内容的表现力，从而增加对话的自然度。\n\n### 快速声音克隆\n\nFish Audio S2 支持使用短参考样本（通常为 10-30 秒）进行准确的声音克隆。该模型捕捉音色、说话风格和情感倾向，无需额外微调即可生成逼真且一致克隆的声音。\n关于 SGLang Server 的使用，请参阅 [SGLang-Omni README](https:\u002F\u002Fgithub.com\u002Fsgl-project\u002Fsglang-omni\u002Fblob\u002Fmain\u002Fsglang_omni\u002Fmodels\u002Ffishaudio_s2_pro\u002FREADME.md)。\n\n---\n\n## 
致谢\n\n- [VITS2 (daniilrobnikov)](https:\u002F\u002Fgithub.com\u002Fdaniilrobnikov\u002Fvits2)\n- [Bert-VITS2](https:\u002F\u002Fgithub.com\u002Ffishaudio\u002FBert-VITS2)\n- [GPT VITS](https:\u002F\u002Fgithub.com\u002Finnnky\u002Fgpt-vits)\n- [MQTTS](https:\u002F\u002Fgithub.com\u002Fb04901014\u002FMQTTS)\n- [GPT Fast](https:\u002F\u002Fgithub.com\u002Fpytorch-labs\u002Fgpt-fast)\n- [GPT-SoVITS](https:\u002F\u002Fgithub.com\u002FRVC-Boss\u002FGPT-SoVITS)\n- [Qwen3](https:\u002F\u002Fgithub.com\u002FQwenLM\u002FQwen3)\n\n## 技术报告\n```bibtex\n@misc{fish-speech-v1.4,\n      title={Fish-Speech: Leveraging Large Language Models for Advanced Multilingual Text-to-Speech Synthesis},\n      author={Shijia Liao and Yuxuan Wang and Tianyu Li and Yifan Cheng and Ruoyi Zhang and Rongzhi Zhou and Yijin Xing},\n      year={2024},\n      eprint={2411.01156},\n      archivePrefix={arXiv},\n      primaryClass={cs.SD},\n      url={https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.01156},\n}\n\n@misc{liao2026fishaudios2technical,\n      title={Fish Audio S2 Technical Report}, \n      author={Shijia Liao and Yuxuan Wang and Songting Liu and Yifan Cheng and Ruoyi Zhang and Tianyu Li and Shidong Li and Yisheng Zheng and Xingwei Liu and Qingzheng Wang and Zhizhuo Zhou and Jiahua Liu and Xin Chen and Dawei Han},\n      year={2026},\n      eprint={2603.08823},\n      archivePrefix={arXiv},\n      primaryClass={cs.SD},\n      url={https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.08823}, \n}\n```","# Fish Speech 快速上手指南\n\nFish Speech（Fish Audio S2 Pro）是由 Fish Audio 开发的最先进的多语言文本转语音（TTS）系统。它支持超过 80 种语言，具备强大的情感控制、多说话人生成及极速流式推理能力。\n\n## 1. 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n- **操作系统**：Linux \u002F Windows \u002F macOS\n- **硬件要求**：推荐使用 NVIDIA GPU（支持 CUDA），显存建议 8GB 以上以获得最佳性能。\n- **软件依赖**：Python 3.x, Git。\n- **网络环境**：由于模型托管在 HuggingFace，建议配置稳定的网络连接或使用国内加速方案。\n\n## 2. 安装步骤\n\n### 方式一：Docker 部署（推荐）\n\n使用官方提供的 Docker 镜像可以快速完成环境配置，避免依赖冲突。\n\n```bash\ndocker pull fishaudio\u002Ffish-speech\n```\n\n详细配置请参考官方文档：[Docker Setup](https:\u002F\u002Fspeech.fish.audio\u002Finstall\u002F#docker-setup)\n\n### 方式二：源码安装\n\n如果您需要自定义环境，可克隆仓库并遵循官方安装指南。\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Ffishaudio\u002Ffish-speech.git\ncd fish-speech\n```\n\n具体依赖安装与配置命令，请务必参考官方文档：[Installation](https:\u002F\u002Fspeech.fish.audio\u002Finstall\u002F)\n\n## 3. 基本使用\n\n### 获取模型权重\n\n核心模型 **S2-Pro** 托管于 HuggingFace，下载前请确认符合许可协议。\n\n- **模型地址**：[HuggingFace Model](https:\u002F\u002Fhuggingface.co\u002Ffishaudio\u002Fs2-pro)\n\n### 命令行推理 (CLI)\n\n适用于脚本化调用或服务器端集成。\n\n- **操作指引**：[Command Line Inference](https:\u002F\u002Fspeech.fish.audio\u002Finference\u002F#command-line-inference)\n\n### WebUI 推理\n\n适用于本地测试与交互式体验。\n\n- **操作指引**：[WebUI Inference](https:\u002F\u002Fspeech.fish.audio\u002Finference\u002F#webui-inference)\n\n### 服务端部署 (Server)\n\n针对高并发场景，支持 SGLang 加速。\n\n- **操作指引**：[Server Inference](https:\u002F\u002Fspeech.fish.audio\u002Fserver\u002F)\n- **注意**：若使用 SGLang 服务，请阅读 [SGLang-Omni README](https:\u002F\u002Fgithub.com\u002Fsgl-project\u002Fsglang-omni\u002Fblob\u002Fmain\u002Fsglang_omni\u002Fmodels\u002Ffishaudio_s2_pro\u002FREADME.md)。\n\n## 4. 
重要提示\n\n- **许可证声明**：本代码库及相关模型权重基于 **[FISH AUDIO RESEARCH LICENSE]** 发布。请严格遵守 [LICENSE](LICENSE) 文件中的条款，违者将追究责任。\n- **法律免责声明**：使用者需遵守当地关于 DMCA 及其他相关法律法规，项目方不对任何非法使用承担法律责任。\n- **功能亮点**：支持通过自然语言标签（如 `[whisper]`, `[excited]`）进行细粒度的情感与语调控制，无需额外微调即可实现快速声音克隆。","独立游戏开发者小林正在制作一款多语言叙事游戏，急需为数十个 NPC 角色生成高质量且一致的配音，但预算十分有限。\n\n### 没有 fish-speech 时\n- 聘请专业配音演员成本高昂，单个角色的英文、日文配音费用就占用了大部分美术预算。\n- 录制流程繁琐，需要协调档期、搭建录音棚并进行后期降噪处理，人力投入巨大。\n- 多语言版本难以保证角色音色统一，玩家容易出戏，严重影响沉浸感体验。\n- 剧情调整导致台词变更时，必须重新预约录音，迭代周期长达数周，严重拖慢进度。\n\n### 使用 fish-speech 后\n- fish-speech 基于开源模型本地部署，零授权费即可无限次生成语音，彻底解决预算问题。\n- 输入文本秒级输出音频，支持快速试音，大幅缩短开发验证周期，当天即可上线测试。\n- 利用其强大的多语种克隆功能，同一角色在八种语言下保持音色高度一致，无需额外训练。\n- 修改剧本后直接重新生成，无需等待外部人员配合，实现敏捷开发，响应速度提升十倍。\n\nfish-speech 让小型团队也能以极低成本获得媲美商业级的多语言语音生成能力。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffishaudio_fish-speech_3efec438.png","fishaudio","Fish Audio","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Ffishaudio_9838c120.png","A Spark Between Voice and Text",null,"FishAudio","https:\u002F\u002Ffish.audio","https:\u002F\u002Fgithub.com\u002Ffishaudio",[84,88,92,95,99,103,107,110],{"name":85,"color":86,"percentage":87},"Python","#3572A5",80.8,{"name":89,"color":90,"percentage":91},"TypeScript","#3178c6",14.3,{"name":93,"color":94,"percentage":10},"Dockerfile","#384d54",{"name":96,"color":97,"percentage":98},"Jupyter Notebook","#DA5B0B",1.2,{"name":100,"color":101,"percentage":102},"CSS","#663399",0.5,{"name":104,"color":105,"percentage":106},"JavaScript","#f1e05a",0.1,{"name":108,"color":109,"percentage":106},"HTML","#e34c26",{"name":111,"color":112,"percentage":113},"Shell","#89e051",0,29065,2449,"2026-04-05T10:51:14","NOASSERTION","未说明","需要 NVIDIA GPU (文中提及 H200 性能测试及 CUDA Graph)",{"notes":121,"python":118,"dependencies":122},"详细环境要求请参照官方安装文档链接；支持 Docker 容器化部署；模型为 4B 参数多模态模型；推理需 NVIDIA GPU 配合 SGLang 加速；请遵守相关开源许可证及法律条款。",[],[55,26,14,13],[125,126,127,128,129,130,131],"llama","transformer","tts","valle","vits","vqgan","vqvae",45,"2026-03-27T02:49:30.150509","2026-04-06T05:35:37.959409",[136,141,146,151,156,160],{"id":137,"question_zh":138,"answer_zh":139,"source_url":140},2528,"运行 Fish-Speech Agent 时报错 `Qwen2TokenizerFast has no attribute \"get _token_id\"` 如何解决？","这是 v1.5 版本代码修改导致的兼容性问题。建议暂时使用 v1.4.3 版本来稳定运行 Agent。如果必须使用新版本，需参考 v1.4 版本修改 `fish_speech\u002Ftokenizer.py` 文件，或更新所有相关文件以确保兼容性。","https:\u002F\u002Fgithub.com\u002Ffishaudio\u002Ffish-speech\u002Fissues\u002F712",{"id":142,"question_zh":143,"answer_zh":144,"source_url":145},2529,"Fish-Speech 是否允许用于商业产品？","代码库遵循 Apache 2.0 许可，可用于商业用途；但模型权重仍遵循 CC-BY-NC-SA-4.0 协议（非商业条款）。因此，虽然可以使用代码，但不能直接将模型用于商业产品中，除非获得额外授权。","https:\u002F\u002Fgithub.com\u002Ffishaudio\u002Ffish-speech\u002Fissues\u002F531",{"id":147,"question_zh":148,"answer_zh":149,"source_url":150},2530,"训练数据的 `.lab` 标注文件是否需要包含时间戳？","不需要。`.lab` 文件只需包含对应音频的文本内容即可。例如音频内容是“你好，你在干嘛呢？”，lab 文件内容即为该文本字符串，无需添加时间戳信息。","https:\u002F\u002Fgithub.com\u002Ffishaudio\u002Ffish-speech\u002Fissues\u002F362",{"id":152,"question_zh":153,"answer_zh":154,"source_url":155},2531,"训练数据集的语音片段长度有什么最佳实践？","避免使用过于零碎的语音片段（如平均 2-5 秒或单字词）。建议将语音连接成长度为 1-2 分钟的连续音频再进行训练，这样可以显著提升训练效果和稳定性。","https:\u002F\u002Fgithub.com\u002Ffishaudio\u002Ffish-speech\u002Fissues\u002F428",{"id":157,"question_zh":158,"answer_zh":159,"source_url":150},2532,"如何使用拼音进行语音合成训练？","可以在 `.lab` 标注文件中直接填写拼音。例如“你好吗”可以标记为 `ni2 hao3 ma1`。也可以尝试混合文字和拼音（如 `你 ni2 好 hao3`）来控制发音，但需注意多音字的处理。",{"id":161,"question_zh":162,"answer_zh":163,"source_url":150},2533,"如何进行基础模型的预训练（不使用 LoRA）？","将音频打好标，每个音频旁放置 `.lab` 后缀的标注文件（仅含文本）。使用项目文档中的训练命令，务必去除 `+lora` 
选项，并注意调整学习率等关键参数。",[165,170,175,180,185,190,195,200,205,210,215,220,225,230],{"id":166,"version":167,"summary_zh":168,"released_at":169},102040,"v2.0.0-beta","# Fish Audio S2 — Pre-Release\r\n\r\n> Best text-to-speech system among both open source and closed source.\r\n\r\nTrained on **10M+ hours** of audio across **~50 languages**, S2 combines a **Dual-AR architecture** (Qwen3 backbone) with **GRPO reinforcement learning alignment** to produce natural, emotionally rich speech with fine-grained inline control.\r\n\r\n[Technical Report](https:\u002F\u002Fgithub.com\u002Ffishaudio\u002Ffish-speech\u002Fblob\u002Fmain\u002FFishAudioS2TecReport.pdf) · [Blog](https:\u002F\u002Ffish.audio\u002Fblog\u002Ffish-audio-open-sources-s2\u002F) · [Model](https:\u002F\u002Fhuggingface.co\u002Ffishaudio\u002Fs2-pro) · [Playground](https:\u002F\u002Ffish.audio\u002F)\r\n\r\n## Model\r\n\r\n| Variant | Params | Codec | Output |\r\n|---------|--------|-------|--------|\r\n| S2-Pro | 4B (slow) + 400M (fast) | ModifiedDAC, 10 codebooks, ~21 Hz | 44.1 kHz |\r\n\r\n## Highlights\r\n\r\n- **Dual-AR**: Slow AR (4B) predicts semantic codebook along time axis; Fast AR (400M) fills 9 residual codebooks per step\r\n- **Inline Control**: Free-form tags like `[laugh]`, `[whispers]`, `[super happy]` at word level\r\n- **RL Alignment**: GRPO with unified data-reward pipeline — same model for data filtering and RL reward\r\n- **SGLang Streaming**: RTF 0.195, TTFA ~100ms, 3000+ tokens\u002Fs on single H200\r\n- **50+ Languages**, multi-speaker (`\u003C|speaker:i|>`), multi-turn, rapid voice cloning (10-30s reference)\r\n\r\n## What's Changed\r\n\r\n**Model & Inference**\r\n- New Dual-AR architecture with Qwen3 backbone, replacing Fish-Speech v1.5\r\n- New `ModifiedDAC` audio codec (replaces Firefly\u002FVQ-GAN)\r\n- Support `fish_qwen3_omni` checkpoint format (sharded safetensors) with backward compatibility\r\n- Fixed: torch.compile bugs, GPU memory leak, audio quality issues\r\n\r\n**Docker & Deployment**\r\n- Docker overhaul: multi-target builds, compose support, health checks, non-root user\r\n- SGLang server integration\r\n\r\n**API & Server**\r\n- Reference voice management API (CRUD), multipart upload support\r\n- Various server bug fixes, `\u002Fhealth` endpoint\r\n\r\n**Finetune**\r\n- Full finetune pipeline for S1\u002FS2 (datasets, training, LoRA merge)\r\n\r\n**Docs & Infra**\r\n- README & MkDocs rewritten for S2 across 6 languages\r\n- License updated to Fish Audio Research License\r\n- Removed legacy code (Firefly VQ-GAN, SenseVoice, Fish Agent, old batch files)\r\n","2026-03-10T15:29:07",{"id":171,"version":172,"summary_zh":173,"released_at":174},102041,"v1.5.1","The last stable branch before the next model release.","2025-05-31T12:15:00",{"id":176,"version":177,"summary_zh":178,"released_at":179},102042,"v1.5.0","Fish Speech 1.5 release, both inference and finetune are done.","2024-12-25T02:53:05",{"id":181,"version":182,"summary_zh":183,"released_at":184},102043,"v1.4.3","Last stable release before 1.5","2024-11-29T06:36:15",{"id":186,"version":187,"summary_zh":188,"released_at":189},102044,"v1.4.2","## What's Changed\r\n* Add Audio Select to WebUI by @PoTaTo-Mika in https:\u002F\u002Fgithub.com\u002Ffishaudio\u002Ffish-speech\u002Fpull\u002F556\r\n* Fix cache max_seq_len by @AnyaCoder in https:\u002F\u002Fgithub.com\u002Ffishaudio\u002Ffish-speech\u002Fpull\u002F568\r\n* docs: Docker icon is missing in zh-cn README & ja README displays that it is in English & properer expression “简体中文” by @Octopus058 in 
https:\u002F\u002Fgithub.com\u002Ffishaudio\u002Ffish-speech\u002Fpull\u002F569\r\n* docs: Corrected the wrong expressions of supported languages in README by @Octopus058 in https:\u002F\u002Fgithub.com\u002Ffishaudio\u002Ffish-speech\u002Fpull\u002F574\r\n* Api json format by @AnyaCoder in https:\u002F\u002Fgithub.com\u002Ffishaudio\u002Ffish-speech\u002Fpull\u002F588\r\n* Update v1.4 readmes & samples by @AnyaCoder in https:\u002F\u002Fgithub.com\u002Ffishaudio\u002Ffish-speech\u002Fpull\u002F592\r\n* [chore] add docs for macos by @Tps-F in https:\u002F\u002Fgithub.com\u002Ffishaudio\u002Ffish-speech\u002Fpull\u002F544\r\n* [pre-commit.ci] pre-commit autoupdate by @pre-commit-ci in https:\u002F\u002Fgithub.com\u002Ffishaudio\u002Ffish-speech\u002Fpull\u002F599\r\n* chore: typo fix on post_api by @bjwswang in https:\u002F\u002Fgithub.com\u002Ffishaudio\u002Ffish-speech\u002Fpull\u002F605\r\n* feat: enable more workers in `api.py` by @AnyaCoder in https:\u002F\u002Fgithub.com\u002Ffishaudio\u002Ffish-speech\u002Fpull\u002F621\r\n* Fix broken `remove_parameterization` in firefly by @med1844 in https:\u002F\u002Fgithub.com\u002Ffishaudio\u002Ffish-speech\u002Fpull\u002F620\r\n* Fix dockerfile by @AnyaCoder in https:\u002F\u002Fgithub.com\u002Ffishaudio\u002Ffish-speech\u002Fpull\u002F622\r\n* Fix dockerfile for `pyaudio` by @AnyaCoder in https:\u002F\u002Fgithub.com\u002Ffishaudio\u002Ffish-speech\u002Fpull\u002F623\r\n* Update docs by @AnyaCoder in https:\u002F\u002Fgithub.com\u002Ffishaudio\u002Ffish-speech\u002Fpull\u002F626\r\n* Fix backend by @AnyaCoder in https:\u002F\u002Fgithub.com\u002Ffishaudio\u002Ffish-speech\u002Fpull\u002F627\r\n* Update docs by @AnyaCoder in https:\u002F\u002Fgithub.com\u002Ffishaudio\u002Ffish-speech\u002Fpull\u002F638\r\n\r\n## New Contributors\r\n* @Octopus058 made their first contribution in https:\u002F\u002Fgithub.com\u002Ffishaudio\u002Ffish-speech\u002Fpull\u002F569\r\n* @bjwswang made their first contribution in https:\u002F\u002Fgithub.com\u002Ffishaudio\u002Ffish-speech\u002Fpull\u002F605\r\n* @med1844 made their first contribution in https:\u002F\u002Fgithub.com\u002Ffishaudio\u002Ffish-speech\u002Fpull\u002F620\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Ffishaudio\u002Ffish-speech\u002Fcompare\u002Fv1.4.1...v1.4.2","2024-10-25T07:15:52",{"id":191,"version":192,"summary_zh":193,"released_at":194},102045,"v1.4.1","This release includes bug fix and container optimization.","2024-09-15T09:49:39",{"id":196,"version":197,"summary_zh":198,"released_at":199},102046,"v1.4.0","Fish Speech V1.4 is a leading TTS model trained on 700k hours of audio data in multiple languages.\r\n\r\nSupported languages:\r\n- English (en) ~300k hours\r\n- Chinese (zh) ~300k hours\r\n- German (de) ~20k hours\r\n- Japanese (ja) ~20k hours\r\n- French (fr) ~20k hours\r\n- Spanish (es) ~20k hours\r\n- Korean (ko) ~20k hours\r\n- Arabic (ar) ~20k hours\r\n\r\nHave fun :)","2024-09-12T14:38:58",{"id":201,"version":202,"summary_zh":203,"released_at":204},102047,"v1.2.1","This is the final stable release before 1.4 release on Sep 10.","2024-09-10T00:24:46",{"id":206,"version":207,"summary_zh":208,"released_at":209},102048,"v1.2","In this release, we roll-out both 1.2 pretrain and SFT model, and also support auto-reranking for stable generation.","2024-07-18T16:41:19",{"id":211,"version":212,"summary_zh":213,"released_at":214},102049,"v1.1.2","This is the final stable release before 
1.2","2024-07-02T04:55:09",{"id":216,"version":217,"summary_zh":218,"released_at":219},102050,"v1.1.1","Improve overall performance and experience, including lots of bug fixes","2024-06-08T16:58:08",{"id":221,"version":222,"summary_zh":223,"released_at":224},102051,"v1.1.0","In this release, we added the VITS decoder module, which provides better phone level accuracy and semantic similarity.","2024-05-11T14:20:11",{"id":226,"version":227,"summary_zh":228,"released_at":229},102052,"v1.0.0","This is a major release of fish speech, models can be found at [HuggingFace](https:\u002F\u002Fhuggingface.co\u002Ffishaudio\u002Ffish-speech-1).   \r\nLive demo can be found at [HuggingFace Space](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Ffishaudio\u002Ffish-speech-1) and [Fish Audio](https:\u002F\u002Ffish.audio).\r\n\r\nModels are released under BY-CC-NC-SA 4.0 License.","2024-04-30T06:49:57",{"id":231,"version":232,"summary_zh":233,"released_at":234},102053,"v0.2.0","This version provides basic (arch) model implementation, inference acceleration, and pretrained model. \r\nMost functions \u002F pipelines are tested and working properly. \r\n","2023-12-25T11:55:47"]