[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-ga642381--speech-trident":3,"tool-ga642381--speech-trident":65},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",160784,2,"2026-04-19T11:32:54",[13,14,15],"开发框架","Agent","语言模型","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,3,"2026-04-06T11:19:32",[15,26,14,13],"图像",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":10,"last_commit_at":33,"category_tags":34,"status":16},8553,"spec-kit","github\u002Fspec-kit","Spec Kit 是一款专为提升软件开发效率而设计的开源工具包，旨在帮助团队快速落地“规格驱动开发”（Spec-Driven Development）模式。传统开发中，需求文档往往与代码实现脱节，导致沟通成本高且结果不可控；而 Spec Kit 通过将规格说明书转化为可执行的指令，让 AI 直接依据明确的业务场景生成高质量代码，从而减少从零开始的随意编码，确保产出结果的可预测性。\n\n该工具特别适合希望利用 AI 辅助编程的开发者、技术负责人及初创团队。无论是启动全新项目还是在现有工程中引入规范化流程，用户只需通过简单的命令行操作，即可初始化项目并集成主流的 AI 编程助手。其核心技术亮点在于“规格即代码”的理念，支持社区扩展与预设模板，允许用户根据特定技术栈定制开发流程。此外，Spec Kit 强调官方维护的安全性，提供稳定的版本管理，帮助开发者在享受 AI 红利的同时，依然牢牢掌握架构设计的主动权，真正实现从“凭感觉写代码”到“按规格建系统”的转变。",88749,"2026-04-17T09:48:14",[15,26,14,13],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":10,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,15],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":10,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 
道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",85267,"2026-04-18T11:00:28",[26,51,52,53,14,54,15,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":62,"last_commit_at":63,"category_tags":64,"status":16},5784,"funNLP","fighting41love\u002FfunNLP","funNLP 是一个专为中文自然语言处理（NLP）打造的超级资源库，被誉为\"NLP 民工的乐园”。它并非单一的软件工具，而是一个汇集了海量开源项目、数据集、预训练模型和实用代码的综合性平台。\n\n面对中文 NLP 领域资源分散、入门门槛高以及特定场景数据匮乏的痛点，funNLP 提供了“一站式”解决方案。这里不仅涵盖了分词、命名实体识别、情感分析、文本摘要等基础任务的标准工具，还独特地收录了丰富的垂直领域资源，如法律、医疗、金融行业的专用词库与数据集，甚至包含古诗词生成、歌词创作等趣味应用。其核心亮点在于极高的全面性与实用性，从基础的字典词典到前沿的 BERT、GPT-2 模型代码，再到高质量的标注数据和竞赛方案，应有尽有。\n\n无论是刚刚踏入 NLP 领域的学生、需要快速验证想法的算法工程师，还是从事人工智能研究的学者，都能在这里找到急需的“武器弹药”。对于开发者而言，它能大幅减少寻找数据和复现模型的时间；对于研究者，它提供了丰富的基准测试资源和前沿技术参考。funNLP 以开放共享的精神，极大地降低了中文自然语言处理的开发与研究成本，是中文 AI 社区不可或缺的宝藏仓库。",79857,1,"2026-04-08T20:11:31",[15,51,54],{"id":66,"github_repo":67,"name":68,"description_en":69,"description_zh":70,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":79,"owner_company":80,"owner_location":81,"owner_email":82,"owner_twitter":80,"owner_website":83,"owner_url":84,"languages":80,"stars":85,"forks":86,"last_commit_at":87,"license":80,"difficulty_score":88,"env_os":89,"env_gpu":90,"env_ram":90,"env_deps":91,"category_tags":94,"github_topics":80,"view_count":10,"oss_zip_url":80,"oss_zip_packed_at":80,"status":16,"created_at":95,"updated_at":96,"faqs":97,"releases":98},9723,"ga642381\u002Fspeech-trident","speech-trident","Awesome speech\u002Faudio LLMs, representation learning, and codec models","speech-trident 是一个专注于语音与音频大语言模型领域的开源资源库，旨在系统梳理该前沿方向的核心技术脉络。它主要解决了研究人员和开发者在面对海量、分散的语音 AI 论文与模型时，难以快速把握技术全貌和关键进展的痛点。\n\n该项目将复杂的语音大模型技术体系清晰地拆解为三大支柱：首先是“语音表示学习”，负责提取语音的深层语义结构；其次是“神经编解码器”，专注于在低码率下生成高质量的声学离散令牌；最后是“语音大语言模型”，利用前两者生成的令牌进行理解与生成任务的训练。通过这种结构化的分类，speech-trident 帮助用户理清了从底层特征提取到上层语言建模的完整技术链路。\n\n此外，该资源库还持续更新关于口语对话模型的最新综述论文及相关研究成果，具有极高的学术参考价值。无论是希望深入探索语音 AI 算法的研究人员，还是正在寻找技术选型参考的开发者，都能在这里找到详尽的模型列表、核心概念解析以及前沿动态。如果你正计划进入语音大模型领域，或需要一份权威的技术导航图，speech-trident 将是不可或缺的入门指南与研究助手。","# :trident: Speech Trident - Awesome Speech LM\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fga642381_speech-trident_readme_b400c4d3ae1e.png\" alt=\"Speech Trident\" style=\"width:70%;\">\n\u003C\u002Fp>\n\nIn this repository, we survey three crucial areas: (1) representation learning, (2) neural codec, and (3) language models that contribute to speech\u002Faudio large language models.\n\n1.⚡ **Speech Representation Models:** These models focus on learning structural speech representations, which can then be quantized into discrete speech tokens, often refer to **semantic tokens**.\n\n2.⚡ **Speech Neural Codec Models:** These models are designed to learn speech and audio discrete tokens, often referred to as **acoustic tokens**, while maintaining reconstruction ability and low bitrate.\n\n3.⚡ **Speech Large Language Models:** These models are trained on top of speech and acoustic tokens in a language modeling approach. 
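How the three pillars fit together can be sketched in a few lines of PyTorch. This is a toy illustration rather than any surveyed model: the random `features` stand in for the output of a representation model such as HuBERT, the nearest-centroid lookup plays the role of the offline k-means quantizer that produces semantic tokens in GSLM-style pipelines, and `TinySpeechLM` is a generic causal decoder trained with next-token prediction over those units.

```python
import torch
import torch.nn as nn

# Toy sizes; real systems use SSL encoders (e.g. HuBERT) and much larger LMs.
T, D, K = 200, 768, 500            # frames, feature dim, number of discrete units

# (1) Representation model: continuous frame-level features (random stand-in).
features = torch.randn(T, D)

# Quantize to discrete "semantic tokens" by nearest-centroid lookup, mimicking
# the offline k-means step; the centroids here are random, not trained.
codebook = torch.randn(K, D)
semantic_tokens = torch.cdist(features, codebook).argmin(dim=-1)   # (T,) unit ids

# (3) Speech LM: plain next-token language modeling over the unit sequence.
class TinySpeechLM(nn.Module):
    def __init__(self, vocab, dim=256, heads=4, layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        block = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(block, layers)
        self.head = nn.Linear(dim, vocab)

    def forward(self, tokens):                         # tokens: (B, T)
        causal = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        return self.head(self.encoder(self.embed(tokens), mask=causal))

lm = TinySpeechLM(vocab=K)
logits = lm(semantic_tokens.unsqueeze(0))              # predict the next unit
loss = nn.functional.cross_entropy(
    logits[:, :-1].reshape(-1, K), semantic_tokens[None, 1:].reshape(-1))
print(f"next-unit loss: {loss.item():.2f}")
```

A codec-based pipeline follows the same shape, with acoustic tokens from a neural codec (second pillar) in place of, or stacked on top of, the semantic units.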
## :trident: Contributors

<table>
  <tr>
    <td align="center">
      <a href="https://kwchang.org/">
        <img src="https://oss.gittoolsai.com/images/ga642381_speech-trident_readme_4e6f76d94152.png" width="100px;" style="border-radius: 50%;" alt=""/>
        <br />
        <sub><b>Kai-Wei Chang</b></sub>
      </a>
    </td>
    <td align="center">
      <a href="https://hbwu-ntu.github.io/">
        <img src="https://scholar.googleusercontent.com/citations?view_op=medium_photo&user=-bB-WHEAAAAJ&citpid=1" width="100px;" style="border-radius: 50%;" alt=""/>
        <br />
        <sub><b>Haibin Wu</b></sub>
      </a>
    </td>
    <td align="center">
      <a href="https://scholar.google.com.tw/citations?user=-d6aNP0AAAAJ&hl=zh-TW">
        <img src="https://scholar.googleusercontent.com/citations?view_op=medium_photo&user=-d6aNP0AAAAJ&citpid=2" width="100px;" style="border-radius: 50%;" alt=""/>
        <br />
        <sub><b>Wei-Cheng Tseng</b></sub>
      </a>
    </td>
    <td align="center">
      <a href="https://kehanlu.com/">
        <img src="https://oss.gittoolsai.com/images/ga642381_speech-trident_readme_f1fda4d868b9.png" width="100px;" style="border-radius: 50%;" alt=""/>
        <br />
        <sub><b>Kehan Lu</b></sub>
      </a>
    </td>
    <td align="center">
      <a href="https://github.com/kuan2jiu99">
        <img src="https://oss.gittoolsai.com/images/ga642381_speech-trident_readme_d1f0422fc705.png" width="100px;" style="border-radius: 50%;" alt=""/>
        <br />
        <sub><b>Chun-Yi Kuan</b></sub>
      </a>
    </td>
    <td align="center">
      <a href="https://speech.ee.ntu.edu.tw/~hylee/index.php">
        <img src="https://oss.gittoolsai.com/images/ga642381_speech-trident_readme_b99fc95a9866.png" width="100px;" style="border-radius: 50%;" alt=""/>
        <br />
        <sub><b>Hung-yi Lee</b></sub>
      </a>
    </td>
  </tr>
</table>

## :trident: 2026 News

A new survey repo for **spoken dialogue models** is available on [GitHub](https://github.com/ga642381/Spoken-Dialogue-Model-Survey).

<p align="center">
  <img src="https://oss.gittoolsai.com/images/ga642381_speech-trident_readme_acb3f48ca8b7.png" width="500">
</p>

```bibtex
@article{chang2026tico,
      title={TiCo: Time-Controllable Training for Spoken Dialogue Models},
      author={Kai-Wei Chang and Wei-Chih Chen and En-Pei Hu and Hung-yi Lee and James Glass},
      journal={arXiv preprint arXiv:2603.22267},
      year={2026}
}
```

## :trident: 2025 News

The survey paper **"On The Landscape of Spoken Language Models: A Comprehensive Survey"** is now available on [arXiv](https://arxiv.org/pdf/2504.08528).

```bibtex
@article{arora2025landscape,
  title={On The Landscape of Spoken Language Models: A Comprehensive Survey},
  author={Arora, Siddhant and Chang, Kai-Wei and Chien, Chung-Ming and Peng, Yifan and Wu, Haibin and Adi, Yossi and Dupoux, Emmanuel and Lee, Hung-Yi and Livescu, Karen and Watanabe, Shinji},
  journal={arXiv preprint arXiv:2504.08528},
  year={2025}
}
```

It provides a comprehensive survey of spoken language models (SLMs), covering many of the speech/audio language models surveyed in this Speech Trident project, with more detailed and technical discussion. The paper categorizes SLMs into:

1.⚡ **Pure Speech LM**

2.⚡ **Speech-aware Text LM**

3.⚡ **Speech + Text LM**

In addition, the paper discusses training strategies, speech/text token decoding patterns, duplex speech dialogue, benchmarks for SLMs, and more. Please read the [paper](https://arxiv.org/pdf/2504.08528) for more details. You will enjoy it!!
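As a purely schematic picture of the third category, a "Speech + Text LM" decodes a single stream in which text tokens and discrete speech units are interleaved. The snippet below is illustrative only; the token names, unit ids, and per-word chunking are invented and follow no particular paper's scheme.

```python
# Schematic of the "Speech + Text LM" pattern: one decoder consumes a single
# stream mixing text tokens with discrete speech units (made-up values).
text_tokens  = ["the", "cat", "sat"]
speech_units = [[88, 12, 412], [7, 301, 99], [245, 5, 160]]  # units per word

interleaved = []
for word, units in zip(text_tokens, speech_units):
    interleaved.append(("TEXT", word))
    interleaved.extend(("SPEECH", u) for u in units)

print(interleaved[:6])
# [('TEXT', 'the'), ('SPEECH', 88), ('SPEECH', 12), ('SPEECH', 412),
#  ('TEXT', 'cat'), ('SPEECH', 7)]
```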
## :trident: Speech/Audio Language Models

| Date | Model Name | Paper Title | Link |
| ------- | -------------- | ----------- | ---- |
| 2025-07 | Audio Flamingo 3 | Audio Flamingo 3: Advancing Audio Intelligence with Fully Open Large Audio Language Models | [paper](https://arxiv.org/abs/2507.08128) |
| 2025-07 | DeSTA2.5-Audio | DeSTA2.5-Audio: Toward General-Purpose Large Audio Language Model with Self-Generated Cross-Modal Alignment | [paper](https://arxiv.org/abs/2507.02768) |
| 2025-05 | SLED | Efficient Speech Language Modeling via Energy Distance in Continuous Latent Space | [paper](https://arxiv.org/abs/2505.13181) |
| 2025-05 | BALSa | From Alignment to Advancement: Bootstrapping Audio-Language Alignment with Synthetic Data | [paper](https://arxiv.org/abs/2505.20166) |
| 2025-04 | - | On The Landscape of Spoken Language Models: A Comprehensive Survey | [Paper](https://arxiv.org/pdf/2504.08528) |
| 2025-04 | Kimi-Audio | Kimi-Audio Technical Report | [Paper](https://github.com/MoonshotAI/Kimi-Audio/blob/master/assets/kimia_report.pdf) |
| 2025-03 | Qwen2.5-Omni | Qwen2.5-Omni Technical Report | [Paper](https://arxiv.org/abs/2503.20215) |
| 2025-03 | Yue | YuE: Scaling Open Foundation Models for Long-Form Music Generation | [Paper](https://arxiv.org/pdf/2503.08638) |
| 2025-03 | CSM | Conversational speech generation | [blog](https://www.sesame.com/research/crossing_the_uncanny_valley_of_voice#demo) |
| 2025-03 | Phi-4-Multimodal | Phi-4-Mini Technical Report: compact yet powerful multimodal language models via mixture-of-LoRAs | [paper](https://arxiv.org/abs/2503.01743) |
| 2025-03 | Baichuan-Audio | Baichuan-Audio: A Unified Framework for End-to-End Speech Interaction | [paper](https://arxiv.org/pdf/2502.17239) |
| 2025-02 | DiTAR | DiTAR: Diffusion Transformer Autoregressive Modeling for Speech Generation | [paper](https://arxiv.org/abs/2502.03930) |
| 2025-02 | Slamming | Slamming: Training a Speech Language Model on One GPU in a Day | [paper](https://arxiv.org/pdf/2502.15814) |
| 2025-02 | Step-Audio | Step-Audio: Unified Understanding and Generation in Intelligent Speech Interaction | [paper](https://github.com/stepfun-ai/Step-Audio/blob/cn-readme/assets/Step-Audio.pdf) |
| 2025-01 | BAICHUAN-OMNI-1.5 | BAICHUAN-OMNI-1.5 TECHNICAL REPORT | [paper](https://arxiv.org/pdf/2501.15368) |
| 2025-01 | MiniCPM-o | A GPT-4o Level MLLM for Vision, Speech and Multimodal Live Streaming on Your Phone | [GitHub](https://github.com/OpenBMB/MiniCPM-o) |
| 2025-01 | MinMo | MinMo: A Multimodal large Language Model for Seamless Voice Interaction | [Paper](https://funaudiollm.github.io/minmo/) |
| 2025-01 | VITA-1.5 | VITA-1.5: Towards GPT-4o Level Real-Time Vision and Speech Interaction | [Paper](https://arxiv.org/html/2501.01957v1) |
| 2025-01 | OMNICHAT | OmniChat: Enhancing Spoken Dialogue Systems with Scalable Synthetic Data for Diverse Scenarios | [Paper](https://arxiv.org/abs/2501.01384) |
| 2025-01 | SLIDE | SLIDE: Integrating Speech Language Model with LLM for Spontaneous Spoken Dialogue Generation | [Paper](https://arxiv.org/pdf/2501.00805) |
| 2024-12 | SLAM-Omni | SLAM-Omni: Timbre-Controllable Voice Interaction System with Single-Stage Training | [paper](https://arxiv.org/abs/2412.15649), [code](https://github.com/X-LANCE/SLAM-LLM/blob/main/examples/s2s/README.md) |
| 2024-12 | TouchTTS | TouchTTS: An Embarrassingly Simple TTS Framework that Everyone Can Touch | [Paper](https://www.arxiv.org/abs/2412.08237) |
| 2024-12 | CosyVoice 2 | CosyVoice 2: Scalable Streaming Speech Synthesis with Large Language Models | [Paper](https://arxiv.org/pdf/2412.10117) |
| 2024-12 | GLM-4-Voice | GLM-4-Voice: Towards Intelligent and Human-Like End-to-End Spoken Chatbot | [Paper](https://arxiv.org/pdf/2412.02612) |
| 2024-12 | AlignFormer | AlignFormer: Modality Matching Can Achieve Better Zero-shot Instruction-Following Speech-LLM | [Paper](https://arxiv.org/pdf/2412.01145) |
| 2024-11 | -- | Scaling Speech-Text Pre-training with Synthetic Interleaved Data | [Paper](https://arxiv.org/abs/2411.17607) |
| 2024-11 | -- | State-Space Large Audio Language Models | [Paper](https://arxiv.org/pdf/2411.15685) |
| 2024-11 | -- | Building a Taiwanese Mandarin Spoken Language Model: A First Attempt | [Paper](https://arxiv.org/abs/2411.07111) |
| 2024-11 | Ultravox | Ultravox: An open-weight alternative to GPT-4o Realtime | [Blog](https://www.ultravox.ai/blog/ultravox-an-open-weight-alternative-to-gpt-4o-realtime) |
| 2024-11 | hertz-dev | [blog](https://si.inc/hertz-dev/) | [GitHub](https://github.com/Standard-Intelligence/hertz-dev) |
| 2024-11 | Freeze-Omni | Freeze-Omni: A Smart and Low Latency Speech-to-speech Dialogue Model with Frozen LLM | [paper](https://arxiv.org/abs/2411.00774) |
| 2024-11 | Align-SLM | Align-SLM: Textless Spoken Language Models with Reinforcement Learning from AI Feedback | [paper](https://arxiv.org/pdf/2411.01834) |
| 2024-10 | Ichigo | Ichigo: Mixed-Modal Early-Fusion Realtime Voice Assistant | [paper](https://arxiv.org/abs/2410.15316), [code](https://github.com/homebrewltd/ichigo) |
| 2024-10 | OmniFlatten | OmniFlatten: An End-to-end GPT Model for Seamless Voice Conversation | [paper](https://arxiv.org/abs/2410.17799v1) |
| 2024-10 | GPT-4o | GPT-4o System Card | [paper](https://arxiv.org/pdf/2410.21276) |
| 2024-10 | Baichuan-OMNI | Baichuan-Omni Technical Report | [paper](https://arxiv.org/abs/2410.08565) |
| 2024-10 | GLM-4-Voice | GLM-4-Voice | [GitHub](https://github.com/THUDM/GLM-4-Voice) |
| 2024-10 | -- | Roadmap towards Superhuman Speech Understanding using Large Language Models | [paper](https://arxiv.org/abs/2410.13268) |
| 2024-10 | SALMONN-OMNI | SALMONN-OMNI: A SPEECH UNDERSTANDING AND GENERATION LLM IN A CODEC-FREE FULL-DUPLEX FRAMEWORK | [paper](https://openreview.net/attachment?id=eJpI20hzWf&name=pdf) |
| 2024-10 | Mini-Omni 2 | Mini-Omni2: Towards Open-source GPT-4o with Vision, Speech and Duplex Capabilities | [paper](https://arxiv.org/abs/2410.11190) |
| 2024-10 | HALL-E | HALL-E: Hierarchical Neural Codec Language Model for Minute-Long Zero-Shot Text-to-Speech Synthesis | [paper](https://openreview.net/forum?id=868masI331) |
| 2024-10 | SyllableLM | SyllableLM: Learning Coarse Semantic Units for Speech Language Models | [paper](https://arxiv.org/html/2410.04029v1) |
| 2024-09 | DeSTA 2 | DeSTA2: Developing Instruction-Following Speech Language Model Without Speech Instruction-Tuning Data | [paper](https://arxiv.org/abs/2409.20007) |
| 2024-09 | Moshi | Moshi: a speech-text foundation model for real-time dialogue | [paper](https://kyutai.org/Moshi.pdf) |
| 2024-09 | Takin AudioLLM | Takin: A Cohort of Superior Quality Zero-shot Speech Generation Models | [paper](https://arxiv.org/abs/2409.12139) |
| 2024-09 | FireRedTTS | FireRedTTS: A Foundation Text-To-Speech Framework for Industry-Level Generative Speech Applications | [paper](https://arxiv.org/html/2409.03283v1) |
| 2024-09 | LLaMA-Omni | LLaMA-Omni: Seamless Speech Interaction with Large Language Models | [paper](https://arxiv.org/abs/2409.06666) |
| 2024-09 | MaskGCT | MaskGCT: Zero-Shot Text-to-Speech with Masked Generative Codec Transformer | [paper](https://arxiv.org/abs/2409.00750v1) |
| 2024-09 | SSR-Speech | SSR-Speech: Towards Stable, Safe and Robust Zero-shot Text-based Speech Editing and Synthesis | [paper](https://arxiv.org/abs/2409.07556) |
| 2024-09 | MoWE-Audio | MoWE-Audio: Multitask AudioLLMs with Mixture of Weak Encoders | [paper](https://arxiv.org/pdf/2409.06635) |
| 2024-08 | Mini-Omni | Mini-Omni: Language Models Can Hear, Talk While Thinking in Streaming | [paper](https://arxiv.org/abs/2408.16725) |
| 2024-08 | Make-A-Voice 2 | Make-A-Voice: Revisiting Voice Large Language Models as Scalable Multilingual and Multitask Learner | [paper](https://aclanthology.org/2024.acl-long.589/) |
| 2024-08 | LSLM | Language Model Can Listen While Speaking | [paper](https://arxiv.org/abs/2408.02622) |
| 2024-07 | Seed-ASR | Seed-ASR: Understanding Diverse Speech and Contexts with LLM-based Speech Recognition | [paper](https://arxiv.org/abs/2407.04675) |
| 2024-07 | MELLE | Autoregressive Speech Synthesis without Vector Quantization | [paper](https://arxiv.org/abs/2407.08551) |
| 2024-06 | SimpleSpeech | SimpleSpeech: Towards Simple and Efficient Text-to-Speech with Scalar Latent Transformer Diffusion Models | [paper](https://arxiv.org/abs/2406.02328) |
| 2024-06 | UniAudio 1.5 | UniAudio 1.5: Large Language Model-driven Audio Codec is A Few-shot Audio Task Learner | [paper](https://arxiv.org/abs/2406.10056) |
| 2024-06 | VALL-E R | VALL-E R: Robust and Efficient Zero-Shot Text-to-Speech Synthesis via Monotonic Alignment | [paper](https://arxiv.org/abs/2406.07855) |
| 2024-06 | VALL-E 2 | VALL-E 2: Neural Codec Language Models are Human Parity Zero-Shot Text to Speech Synthesizers | [paper](https://arxiv.org/abs/2406.05370) |
| 2024-06 | GPST | Generative Pre-trained Speech Language Model with Efficient Hierarchical Transformer | [paper](https://arxiv.org/abs/2406.00976) |
| 2024-04 | CLaM-TTS | CLaM-TTS: Improving Neural Codec Language Model for Zero-Shot Text-to-Speech | [paper](https://arxiv.org/abs/2404.02781) |
| 2024-04 | RALL-E | RALL-E: Robust Codec Language Modeling with Chain-of-Thought Prompting for Text-to-Speech Synthesis | [paper](https://arxiv.org/abs/2404.03204) |
| 2024-04 | WavLLM | WavLLM: Towards Robust and Adaptive Speech Large Language Model | [paper](https://arxiv.org/abs/2404.00656) |
| 2024-02 | MobileSpeech | MobileSpeech: A Fast and High-Fidelity Framework for Mobile Zero-Shot Text-to-Speech | [paper](https://arxiv.org/abs/2402.09378) |
| 2024-02 | SLAM-ASR | An Embarrassingly Simple Approach for LLM with Strong ASR Capacity | [paper](https://arxiv.org/abs/2402.08846) |
| 2024-02 | AnyGPT | AnyGPT: Unified Multimodal LLM with Discrete Sequence Modeling | [paper](https://arxiv.org/abs/2402.12226) |
| 2024-02 | SpiRit-LM | SpiRit-LM: Interleaved Spoken and Written Language Model | [paper](https://arxiv.org/abs/2402.05755) |
| 2024-02 | USDM | Integrating Paralinguistics in Speech-Empowered Large Language Models for Natural Conversation | [paper](https://arxiv.org/abs/2402.05706) |
| 2024-02 | BAT | BAT: Learning to Reason about Spatial Sounds with Large Language Models | [paper](https://arxiv.org/abs/2402.01591) |
| 2024-02 | Audio Flamingo | Audio Flamingo: A Novel Audio Language Model with Few-Shot Learning and Dialogue Abilities | [paper](https://arxiv.org/abs/2402.01831) |
| 2024-02 | Text Description to speech | Natural language guidance of high-fidelity text-to-speech with synthetic annotations | [paper](https://arxiv.org/abs/2402.01912) |
| 2024-02 | GenTranslate | GenTranslate: Large Language Models are Generative Multilingual Speech and Machine Translators | [paper](https://arxiv.org/abs/2402.06894) |
| 2024-02 | Base-TTS | BASE TTS: Lessons from building a billion-parameter Text-to-Speech model on 100K hours of data | [paper](https://arxiv.org/abs/2402.08093) |
| 2024-02 | -- | It's Never Too Late: Fusing Acoustic Information into Large Language Models for Automatic Speech Recognition | [paper](https://arxiv.org/abs/2402.05457) |
| 2024-01 | -- | Large Language Models are Efficient Learners of Noise-Robust Speech Recognition | [paper](https://arxiv.org/abs/2401.10446) |
| 2024-01 | ELLA-V | ELLA-V: Stable Neural Codec Language Modeling with Alignment-guided Sequence Reordering | [paper](https://arxiv.org/abs/2401.07333) |
| 2023-12 | Seamless | Seamless: Multilingual Expressive and Streaming Speech Translation | [paper](https://arxiv.org/abs/2312.05187) |
| 2023-11 | Qwen-Audio | Qwen-Audio: Advancing Universal Audio Understanding via Unified Large-Scale Audio-Language Models | [paper](https://arxiv.org/abs/2311.07919) |
| 2023-10 | LauraGPT | LauraGPT: Listen, Attend, Understand, and Regenerate Audio with GPT | [paper](https://arxiv.org/abs/2310.04673) |
| 2023-10 | SALMONN | SALMONN: Towards Generic Hearing Abilities for Large Language Models | [paper](https://arxiv.org/abs/2310.13289) |
| 2023-10 | UniAudio | UniAudio: An Audio Foundation Model Toward Universal Audio Generation | [paper](https://arxiv.org/abs/2310.00704) |
| 2023-10 | Whispering LLaMA | Whispering LLaMA: A Cross-Modal Generative Error Correction Framework for Speech Recognition | [paper](https://arxiv.org/abs/2310.06434) |
| 2023-09 | VoxtLM | Voxtlm: unified decoder-only models for consolidating speech recognition/synthesis and speech/text continuation tasks | [paper](https://arxiv.org/abs/2309.07937) |
| 2023-09 | LTU-AS | Joint Audio and Speech Understanding | [paper](https://arxiv.org/abs/2309.14405) |
| 2023-09 | SLM | SLM: Bridge the thin gap between speech and text foundation models | [paper](https://arxiv.org/abs/2310.00230) |
| 2023-09 | -- | Generative Speech Recognition Error Correction with Large Language Models and Task-Activating Prompting | [paper](https://arxiv.org/abs/2309.15649) |
| 2023-08 | SpeechGen | SpeechGen: Unlocking the Generative Power of Speech Language Models with Prompts | [paper](https://arxiv.org/abs/2306.02207) |
| 2023-08 | SpeechX | SpeechX: Neural Codec Language Model as a Versatile Speech Transformer | [paper](https://arxiv.org/abs/2308.06873) |
| 2023-08 | LLaSM | Large Language and Speech Model | [paper](https://arxiv.org/abs/2308.15930) |
| 2023-08 | SeamlessM4T | Massively Multilingual & Multimodal Machine Translation | [paper](https://arxiv.org/abs/2308.11596) |
| 2023-07 | Speech-LLaMA | On decoder-only architecture for speech-to-text and large language model integration | [paper](https://arxiv.org/abs/2307.03917) |
| 2023-07 | LLM-ASR(temp.) | Prompting Large Language Models with Speech Recognition Abilities | [paper](https://arxiv.org/abs/2307.11795) |
| 2023-06 | AudioPaLM | AudioPaLM: A Large Language Model That Can Speak and Listen | [paper](https://arxiv.org/abs/2306.12925) |
| 2023-05 | Make-A-Voice | Make-A-Voice: Unified Voice Synthesis With Discrete Representation | [paper](https://arxiv.org/abs/2305.19269) |
| 2023-05 | Spectron | Spoken Question Answering and Speech Continuation Using Spectrogram-Powered LLM | [paper](https://arxiv.org/abs/2305.15255) |
| 2023-05 | TWIST | Textually Pretrained Speech Language Models | [paper](https://arxiv.org/abs/2305.13009) |
| 2023-05 | Pengi | Pengi: An Audio Language Model for Audio Tasks | [paper](https://arxiv.org/abs/2305.11834) |
| 2023-05 | SoundStorm | Efficient Parallel Audio Generation | [paper](https://arxiv.org/abs/2305.09636) |
| 2023-05 | LTU | Joint Audio and Speech Understanding | [paper](https://arxiv.org/abs/2305.10790) |
| 2023-05 | SpeechGPT | Empowering Large Language Models with Intrinsic Cross-Modal Conversational Abilities | [paper](https://arxiv.org/abs/2305.11000) |
| 2023-05 | VioLA | Unified Codec Language Models for Speech Recognition, Synthesis, and Translation | [paper](https://arxiv.org/abs/2305.16107) |
| 2023-05 | X-LLM | X-LLM: Bootstrapping Advanced Large Language Models by Treating Multi-Modalities as Foreign Languages | [paper](https://arxiv.org/abs/2305.04160) |
| 2023-03 | Google USM | Google USM: Scaling Automatic Speech Recognition Beyond 100 Languages | [paper](https://arxiv.org/abs/2303.01037) |
| 2023-03 | VALL-E X | Speak Foreign Languages with Your Own Voice: Cross-Lingual Neural Codec Language Modeling | [paper](https://arxiv.org/abs/2303.03926) |
| 2023-02 | SPEAR-TTS | Speak, Read and Prompt: High-Fidelity Text-to-Speech with Minimal Supervision | [paper](https://arxiv.org/abs/2302.03540) |
| 2023-01 | VALL-E | Neural Codec Language Models are Zero-Shot Text to Speech Synthesizers | [paper](https://arxiv.org/abs/2301.02111) |
| 2022-12 | Whisper | Robust Speech Recognition via Large-Scale Weak Supervision | [paper](https://arxiv.org/abs/2212.04356) |
| 2022-10 | AudioGen | AudioGen: Textually Guided Audio Generation | [paper](https://arxiv.org/abs/2209.15352) |
| 2022-09 | AudioLM | AudioLM: a Language Modeling Approach to Audio Generation | [paper](https://arxiv.org/abs/2209.03143) |
| 2022-05 | Wav2Seq | Wav2Seq: Pre-training Speech-to-Text Encoder-Decoder Models Using Pseudo Languages | [paper](https://arxiv.org/abs/2205.01086) |
| 2022-04 | Unit mBART | Enhanced Direct Speech-to-Speech Translation Using Self-supervised Pre-training and Data Augmentation | [paper](https://arxiv.org/abs/2204.02967) |
| 2022-03 | d-GSLM | Generative Spoken Dialogue Language Modeling | [paper](https://arxiv.org/abs/2203.16502) |
| 2021-10 | SLAM | SLAM: A Unified Encoder for Speech and Language Modeling via Speech-Text Joint Pre-Training | [paper](https://arxiv.org/abs/2110.10329) |
| 2021-09 | p-GSLM | Text-Free Prosody-Aware Generative Spoken Language Modeling | [paper](https://arxiv.org/abs/2109.03264) |
| 2021-02 | GSLM | Generative Spoken Language Modeling from Raw Audio | [paper](https://arxiv.org/abs/2102.01192) |

## :trident: Speech/Audio Codec Models

| Date | Model Name | Paper Title | Link |
| ------- | -------------------- | ----------- | ---- |
| 2025-06 | CodecSlime | CodecSlime: Temporal Redundancy Compression of Neural Speech Codec via Dynamic Frame Rate | [paper](https://arxiv.org/pdf/2506.21074) |
| 2025-06 | - | Discrete Audio Tokens: More Than a Survey! | [paper](https://arxiv.org/pdf/2506.10274) |
| 2025-06 | TaDiCodec | TaDiCodec: Text-aware Diffusion Speech Tokenizer for Speech Language Modeling | [paper](https://hecheng0625.github.io/assets/pdf/Arxiv_TaDiCodec.pdf) |
| 2025-06 | MagiCodec | MagiCodec: Simple Masked Gaussian-Injected Codec for High-Fidelity Reconstruction and Generation | [paper](https://arxiv.org/pdf/2506.00385) |
| 2025-06 | - | Probing the Robustness Properties of Neural Speech Codecs | [paper](https://arxiv.org/pdf/2505.24248) |
| 2025-06 | DS-Codec | DS-Codec: Dual-Stage Training with Mirror-to-NonMirror Architecture Switching for Speech Codec | [paper](https://arxiv.org/pdf/2505.24314) |
| 2025-05 | LFSC | Low Frame-rate Speech Codec: a Codec Designed for Fast High-quality Speech LLM Training and Inference | [paper](https://arxiv.org/pdf/2409.12117) |
| 2025-05 | PAST | PAST: Phonetic-Acoustic Speech Tokenizer | [paper](https://pages.cs.huji.ac.il/adiyoss-lab/PAST/) |
| 2025-04 | ALMTokenizer | ALMTokenizer: A Low-bitrate and Semantic-rich Audio Codec Tokenizer for Audio Language Modeling | [paper](https://arxiv.org/pdf/2504.10344) |
| 2025-04 | DualCodec | DualCodec: A Low-Frame-Rate, Semantically-Enhanced Neural Audio Codec for Speech Generation | [paper](https://openreview.net/attachment?id=P7VkjAVClZ&name=pdf) |
| 2025-04 | - | One Quantizer is Enough: Toward a Lightweight Audio Codec | [paper](https://arxiv.org/pdf/2504.04949) |
| 2025-04 | TASTE | TASTE: Text-Aligned Speech Tokenization and Embedding for Spoken Language Modeling | [paper](https://arxiv.org/pdf/2504.07053) |
| 2025-03 | UniCodec | Universal Speech Token Learning via Low-Bitrate Neural Codec and Pretrained Representations | [paper](https://arxiv.org/pdf/2503.12115) |
| 2025-03 | BiCodec | Spark-TTS: An Efficient LLM-Based Text-to-Speech Model with Single-Stream Decoupled Speech Tokens | [paper](https://arxiv.org/pdf/2503.01710) |
| 2025-03 | FlowDec | FlowDec: A flow-based full-band general audio codec with high perceptual quality | [paper](https://arxiv.org/pdf/2503.01485) |
| 2025-02 | UniCodec | UniCodec: Unified Audio Codec with Single Domain-Adaptive Codebook | [paper](https://arxiv.org/pdf/2502.20067) |
| 2025-02 | Baichuan-Audio Tokenizer | Baichuan-Audio: A Unified Framework for End-to-End Speech Interaction | [paper](https://arxiv.org/pdf/2502.17239) |
| 2025-02 | - | Recent Advances in Discrete Speech Tokens: A Review | [paper](https://arxiv.org/pdf/2502.06490) |
| 2025-02 | FocalCodec | FocalCodec: Low-Bitrate Speech Coding via Focal Modulation Networks | [paper](https://arxiv.org/pdf/2502.04465) |
| 2025-02 | - | Efficient Evaluation of Quantization-Effects in Neural Codecs | [paper](https://arxiv.org/pdf/2502.04770) |
| 2025-02 | X-Codec 2 | Llasa: Scaling Train-Time and Inference-Time Compute for Llama-based Speech Synthesis | [paper](https://arxiv.org/pdf/2502.04128) |
| 2025-02 | ComplexDec | ComplexDec: A Domain-robust High-fidelity Neural Audio Codec with Complex Spectrum Modeling | [paper](https://arxiv.org/pdf/2502.02019) |
| 2024-12 | TS3-Codec | TS3-Codec: Transformer-Based Simple Streaming Single Codec | [paper](https://arxiv.org/pdf/2411.18803) |
| 2024-12 | FreeCodec | FreeCodec: A disentangled neural speech codec with fewer tokens | [paper](https://arxiv.org/pdf/2412.01053) |
| 2024-12 | TAAE | Scaling Transformers for Low-Bitrate High-Quality Speech Coding | [paper](https://arxiv.org/pdf/2411.19842) |
| 2024-11 | BEST-STD | BEST-STD: Bidirectional Mamba-Enhanced Speech Tokenization for Spoken Term Detection | [paper](https://arxiv.org/abs/2411.14100) |
| 2024-11 | PyramidCodec | PyramidCodec: Hierarchical Codec for Long-form Music Generation in Audio Domain | [paper](https://aclanthology.org/2024.findings-emnlp.246.pdf) |
| 2024-11 | UniCodec | Universal Speech Token Learning Via Low-Bitrate Neural Codec and Pretrained Representations | [paper](https://ieeexplore.ieee.org/abstract/document/10738376?casa_token=eWtmSXEr4AEAAAAA:FzYuQIESJ2LXwl9smJQe3RakpDUFuJ-AS0d39ZDlhsI0tBVX_8P7hu4a59yZezz7hpYd3VomUDo) |
| 2024-11 | SimVQ | Addressing Representation Collapse in Vector Quantized Models with One Linear Layer | [paper](https://arxiv.org/pdf/2411.02038) |
| 2024-11 | MDCTCodec | MDCTCodec: A Lightweight MDCT-based Neural Audio Codec towards High Sampling Rate and Low Bitrate Scenarios | [paper](https://arxiv.org/pdf/2411.00464) |
| 2024-10 | APCodec+ | APCodec+: A Spectrum-Coding-Based High-Fidelity and High-Compression-Rate Neural Audio Codec with Staged Training Paradigm | [paper](https://arxiv.org/pdf/2410.22807) |
| 2024-10 | - | A Closer Look at Neural Codec Resynthesis: Bridging the Gap between Codec and Waveform Generation | [paper](https://arxiv.org/pdf/2410.22448) |
| 2024-10 | SNAC | SNAC: Multi-Scale Neural Audio Codec | [paper](https://arxiv.org/pdf/2410.14411) |
| 2024-10 | LSCodec | LSCodec: Low-Bitrate and Speaker-Decoupled Discrete Speech Codec | [paper](https://arxiv.org/abs/2410.15764) |
| 2024-10 | Co-design for codec and codec-LM | TOWARDS CODEC-LM CO-DESIGN FOR NEURAL CODEC LANGUAGE MODELS | [paper](https://openreview.net/pdf?id=KCVv3tICvp) |
| 2024-10 | VChangeCodec | VChangeCodec: A High-efficiency Neural Speech Codec with Built-in Voice Changer for Real-time Communication | [paper](https://openreview.net/forum?id=qDSfOQBrOD) |
| 2024-10 | DC-Spin | DC-Spin: A Speaker-invariant Speech Tokenizer For Spoken Language Models | [paper](https://openreview.net/forum?id=OW332Wh9S5) |
| 2024-10 | DM-Codec | DM-Codec: Distilling Multimodal Representations for Speech Tokenization | [paper](https://openreview.net/forum?id=UFwefiypla) |
| 2024-09 | Mimi | Moshi: a speech-text foundation model for real-time dialogue | [paper](https://kyutai.org/Moshi.pdf) |
| 2024-09 | NDVQ | NDVQ: Robust Neural Audio Codec with Normal Distribution-Based Vector Quantization | [paper](https://arxiv.org/pdf/2409.12717) |
| 2024-09 | SoCodec | SoCodec: A Semantic-Ordered Multi-Stream Speech Codec for Efficient Language Model Based Text-to-Speech Synthesis | [paper](https://arxiv.org/pdf/2409.00933) |
| 2024-09 | BigCodec | BigCodec: Pushing the Limits of Low-Bitrate Neural Speech Codec | [paper](https://arxiv.org/abs/2409.05377) |
| 2024-08 | X-Codec | Codec Does Matter: Exploring the Semantic Shortcoming of Codec for Audio Language Model | [paper](https://arxiv.org/pdf/2408.17175) |
| 2024-08 | WavTokenizer | WavTokenizer: an Efficient Acoustic Discrete Codec Tokenizer for Audio Language Modeling | [paper](https://arxiv.org/abs/2408.16532) |
| 2024-07 | Super-Codec | SuperCodec: A Neural Speech Codec with Selective Back-Projection Network | [paper](https://arxiv.org/abs/2407.20530) |
| 2024-07 | dMel | dMel: Speech Tokenization made Simple | [paper](https://arxiv.org/abs/2407.15835) |
| 2024-06 | CodecFake | CodecFake: Enhancing Anti-Spoofing Models Against Deepfake Audios from Codec-Based Speech Synthesis Systems | [paper](https://arxiv.org/abs/2406.07237) |
| 2024-06 | Single-Codec | Single-Codec: Single-Codebook Speech Codec towards High-Performance Speech Generation | [paper](https://www.arxiv.org/abs/2406.07422) |
| 2024-06 | SQ-Codec | SimpleSpeech: Towards Simple and Efficient Text-to-Speech with Scalar Latent Transformer Diffusion Models | [paper](https://arxiv.org/abs/2406.02328) |
| 2024-06 | PQ-VAE | Addressing Index Collapse of Large-Codebook Speech Tokenizer with Dual-Decoding Product-Quantized Variational Auto-Encoder | [paper](https://arxiv.org/abs/2406.02940) |
| 2024-06 | LLM-Codec | UniAudio 1.5: Large Language Model-driven Audio Codec is A Few-shot Audio Task Learner | [paper](https://arxiv.org/abs/2406.10056) |
| 2024-05 | HILCodec | HILCodec: High Fidelity and Lightweight Neural Audio Codec | [paper](https://arxiv.org/abs/2405.04752) |
| 2024-04 | SemantiCodec | SemantiCodec: An Ultra Low Bitrate Semantic Audio Codec for General Sound | [paper](https://arxiv.org/abs/2405.00233) |
| 2024-04 | PromptCodec | PromptCodec: High-Fidelity Neural Speech Codec using Disentangled Representation Learning based Adaptive Feature-aware Prompt Encoders | [paper](https://arxiv.org/abs/2404.02702) |
| 2024-04 | ESC | ESC: Efficient Speech Coding with Cross-Scale Residual Vector Quantized Transformers | [paper](https://arxiv.org/abs/2404.19441) |
| 2024-03 | FACodec | NaturalSpeech 3: Zero-Shot Speech Synthesis with Factorized Codec and Diffusion Models | [paper](https://arxiv.org/abs/2403.03100) |
| 2024-02 | AP-Codec | APCodec: A Neural Audio Codec with Parallel Amplitude and Phase Spectrum Encoding and Decoding | [paper](https://arxiv.org/abs/2402.10533) |
| 2024-02 | Language-Codec | Language-Codec: Reducing the Gaps Between Discrete Codec Representation and Speech Language Models | [paper](https://arxiv.org/abs/2402.12208) |
| 2024-01 | ScoreDec | ScoreDec: A Phase-preserving High-Fidelity Audio Codec with A Generalized Score-based Diffusion Post-filter | [paper](https://arxiv.org/abs/2401.12160) |
| 2023-11 | HierSpeech++ | HierSpeech++: Bridging the Gap between Semantic and Acoustic Representation of Speech by Hierarchical Variational Inference for Zero-shot Speech Synthesis | [paper](https://arxiv.org/abs/2311.12454) |
| 2023-10 | TiCodec | FEWER-TOKEN NEURAL SPEECH CODEC WITH TIME-INVARIANT CODES | [paper](https://arxiv.org/pdf/2310.00014) |
| 2023-09 | RepCodec | RepCodec: A Speech Representation Codec for Speech Tokenization | [paper](https://arxiv.org/abs/2309.00169) |
| 2023-09 | FunCodec | FunCodec: A Fundamental, Reproducible and Integrable Open-source Toolkit for Neural Speech Codec | [paper](https://arxiv.org/abs/2309.07405) |
| 2023-08 | SpeechTokenizer | Speechtokenizer: Unified speech tokenizer for speech large language models | [paper](https://arxiv.org/abs/2308.16692) |
| 2023-06 | VOCOS | VOCOS: CLOSING THE GAP BETWEEN TIME-DOMAIN AND FOURIER-BASED NEURAL VOCODERS FOR HIGH-QUALITY AUDIO SYNTHESIS | [paper](https://arxiv.org/pdf/2306.00814) |
| 2023-06 | Descript-audio-codec | High-Fidelity Audio Compression with Improved RVQGAN | [paper](https://arxiv.org/abs/2306.06546) |
| 2023-05 | AudioDec | Audiodec: An open-source streaming highfidelity neural audio codec | [paper](https://arxiv.org/abs/2305.16608) |
| 2023-05 | HiFi-Codec | Hifi-codec: Group-residual vector quantization for high fidelity audio codec | [paper](https://arxiv.org/abs/2305.02765) |
| 2023-03 | LMCodec | LMCodec: A Low Bitrate Speech Codec With Causal Transformer Models | [paper](https://arxiv.org/abs/2303.12984) |
| 2022-11 | Disen-TF-Codec | Disentangled Feature Learning for Real-Time Neural Speech Coding | [paper](https://arxiv.org/abs/2211.11960) |
| 2022-10 | EnCodec | High fidelity neural audio compression | [paper](https://arxiv.org/abs/2210.13438) |
| 2022-07 | S-TFNet | Cross-Scale Vector Quantization for Scalable Neural Speech Coding | [paper](https://arxiv.org/abs/2207.03067) |
| 2022-01 | TFNet | End-to-End Neural Speech Coding for Real-Time Communications | [paper](https://arxiv.org/abs/2201.09429) |
| 2021-07 | SoundStream | SoundStream: An End-to-End Neural Audio Codec | [paper](https://arxiv.org/abs/2107.03312) |
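Many of the codecs above descend from the residual vector quantization (RVQ) scheme popularized by SoundStream and reused in EnCodec: each quantizer stage encodes the residual left behind by the previous stage, so a handful of small codebooks gives good reconstruction at low bitrate. Below is a minimal sketch under toy assumptions (random codebooks and latents; real codecs train the codebooks jointly with an encoder and decoder, and the function names here are ours, not any library's API).

```python
import torch

def rvq_encode(x, codebooks):
    """Residual VQ: each stage quantizes the residual of the previous stage."""
    residual, codes = x, []
    for cb in codebooks:                              # cb: (K, D)
        idx = torch.cdist(residual, cb).argmin(dim=-1)  # nearest code per frame
        codes.append(idx)
        residual = residual - cb[idx]                 # pass on what was missed
    return codes                                      # one token stream per stage

def rvq_decode(codes, codebooks):
    return sum(cb[idx] for idx, cb in zip(codes, codebooks))

T, D, K, Q = 100, 128, 1024, 4                        # frames, dim, codebook, stages
codebooks = [torch.randn(K, D) for _ in range(Q)]
latents = torch.randn(T, D)                           # stand-in for encoder output

codes = rvq_encode(latents, codebooks)                # Q streams of "acoustic tokens"
recon = rvq_decode(codes, codebooks)
print(len(codes), recon.shape)                        # 4 torch.Size([100, 128])
# Bitrate scales as Q * log2(K) bits per frame: 4 stages * 10 bits at a 75 Hz
# frame rate is about 3 kbps, which is why few-stage RVQ codecs stay low-bitrate.
```

Dropping later stages trades fidelity for bitrate, which is the knob several low-bitrate entries in the table turn.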
## :trident: Speech/Audio Representation Models

| Date | Model Name | Paper Title | Link |
| ------- | ------------ | ----------- | ---- |
| 2025-06 | USAD | USAD: Universal Speech and Audio Representation via Distillation | [paper](https://arxiv.org/pdf/2506.18843) |
| 2025-03 | UniWav | UniWav: Towards Unified Pre-training for Speech Representation Learning and Generation | [paper](https://arxiv.org/pdf/2503.00733) |
| 2024-09 | NEST-RQ | NEST-RQ: Next Token Prediction for Speech Self-Supervised Pre-Training | [paper](https://arxiv.org/pdf/2409.08680) |
| 2024-01 | EAT | Self-Supervised Pre-Training with Efficient Audio Transformer | [paper](https://arxiv.org/abs/2401.03497) |
| 2023-10 | MR-HuBERT | Multi-resolution HuBERT: Multi-resolution Speech Self-Supervised Learning with Masked Unit Prediction | [paper](https://arxiv.org/abs/2310.02720) |
| 2023-10 | SpeechFlow | Generative Pre-training for Speech with Flow Matching | [paper](https://arxiv.org/abs/2310.16338) |
| 2023-09 | WavLabLM | Joint Prediction and Denoising for Large-scale Multilingual Self-supervised Learning | [paper](https://arxiv.org/abs/2309.15317) |
| 2023-08 | W2v-BERT 2.0 | Massively Multilingual & Multimodal Machine Translation | [paper](https://arxiv.org/abs/2308.11596) |
| 2023-07 | Whisper-AT | Noise-Robust Automatic Speech Recognizers are Also Strong General Audio Event Taggers | [paper](https://arxiv.org/abs/2307.03183) |
| 2023-06 | ATST | Self-supervised Audio Teacher-Student Transformer for Both Clip-level and Frame-level Tasks | [paper](https://arxiv.org/abs/2306.04186) |
| 2023-05 | SPIN | Self-supervised Fine-tuning for Improved Content Representations by Speaker-invariant Clustering | [paper](https://arxiv.org/abs/2305.11072) |
| 2023-05 | DinoSR | Self-Distillation and Online Clustering for Self-supervised Speech Representation Learning | [paper](https://arxiv.org/abs/2305.10005) |
| 2023-05 | NFA | Self-supervised neural factor analysis for disentangling utterance-level speech representations | [paper](https://arxiv.org/abs/2305.08099) |
| 2022-12 | Data2vec 2.0 | Efficient Self-supervised Learning with Contextualized Target Representations for Vision, Speech and Language | [paper](https://arxiv.org/abs/2212.07525) |
| 2022-12 | BEATs | Audio Pre-Training with Acoustic Tokenizers | [paper](https://arxiv.org/abs/2212.09058) |
| 2022-11 | MT4SSL | MT4SSL: Boosting Self-Supervised Speech Representation Learning by Integrating Multiple Targets | [paper](https://arxiv.org/abs/2211.07321) |
| 2022-08 | DINO | Non-contrastive self-supervised learning of utterance-level speech representations | [paper](https://arxiv.org/abs/2208.05413) |
| 2022-07 | Audio-MAE | Masked Autoencoders that Listen | [paper](https://arxiv.org/abs/2207.06405) |
| 2022-04 | MAESTRO | Matched Speech Text Representations through Modality Matching | [paper](https://arxiv.org/abs/2204.03409) |
| 2022-03 | MAE-AST | Masked Autoencoding Audio Spectrogram Transformer | [paper](https://arxiv.org/abs/2203.16691) |
| 2022-03 | LightHuBERT | Lightweight and Configurable Speech Representation Learning with Once-for-All Hidden-Unit BERT | [paper](https://arxiv.org/abs/2203.15610) |
| 2022-02 | Data2vec | A General Framework for Self-supervised Learning in Speech, Vision and Language | [paper](https://arxiv.org/abs/2202.03555) |
| 2021-10 | WavLM | WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing | [paper](https://arxiv.org/abs/2110.13900) |
| 2021-08 | W2v-BERT | Combining Contrastive Learning and Masked Language Modeling for Self-Supervised Speech Pre-Training | [paper](https://arxiv.org/abs/2108.06209) |
| 2021-07 | mHuBERT | Direct speech-to-speech translation with discrete units | [paper](https://arxiv.org/abs/2107.05604) |
| 2021-06 | HuBERT | Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units | [paper](https://arxiv.org/abs/2106.07447) |
| 2021-03 | BYOL-A | Self-Supervised Learning for General-Purpose Audio Representation | [paper](https://arxiv.org/abs/2103.06695) |
| 2020-12 | DeCoAR2.0 | DeCoAR 2.0: Deep Contextualized Acoustic Representations with Vector Quantization | [paper](https://arxiv.org/abs/2012.06659) |
| 2020-07 | TERA | TERA: Self-Supervised Learning of Transformer Encoder Representation for Speech | [paper](https://arxiv.org/abs/2007.06028) |
| 2020-06 | Wav2vec2.0 | wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations | [paper](https://arxiv.org/abs/2006.11477) |
| 2019-10 | APC | Generative Pre-Training for Speech with Autoregressive Predictive Coding | [paper](https://arxiv.org/abs/1910.12607) |
| 2018-07 | CPC | Representation Learning with Contrastive Predictive Coding | [paper](https://arxiv.org/abs/1807.03748) |
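Several of the entries above (HuBERT, mHuBERT, WavLM, MR-HuBERT) share a masked-prediction recipe: hide some frames and predict a discrete target for each hidden frame. A toy sketch of one such training step follows, with random features standing in for a CNN front-end and random cluster ids standing in for offline k-means labels; it mirrors the general idea, not any one paper's implementation.

```python
import torch
import torch.nn as nn

# Masked-prediction pretraining in the HuBERT style (illustrative only).
T, D, K = 50, 128, 100                    # frames, feature dim, cluster vocabulary

frames  = torch.randn(1, T, D)            # stand-in for feature-extractor output
targets = torch.randint(0, K, (1, T))     # stand-in for offline k-means labels

mask = torch.rand(1, T) < 0.5             # choose ~half the frames to hide
inputs = frames.clone()
inputs[mask] = 0.0                        # replace masked frames with a constant

layer = nn.TransformerEncoderLayer(D, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)
head = nn.Linear(D, K)

logits = head(encoder(inputs))            # (1, T, K) predictions for every frame
loss = nn.functional.cross_entropy(logits[mask], targets[mask])  # masked frames only
print(f"masked-prediction loss: {loss.item():.2f}")
```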
## :trident: SLT 2024 Codec-SUPERB challenge
- The challenge covers today's neural audio codecs and speech/audio language models.
  - Time: December 3, starting at 15:15
  - Detailed agenda: https://codecsuperb.github.io/

<details>
<summary> More about SLT 2024 Codec-SUPERB challenge </summary>

- Keynote speakers
  - [Neil Zeghidour (Kyutai)](https://scholar.google.com/citations?user=fiJamZ0AAAAJ&hl=fr): 15:15-16:00
    - [Slides](https://drive.google.com/file/d/1SrDLQ_XMetVS7Xfo72blVtGYvjNxWwRP/view?usp=sharing) | [Recording](https://drive.google.com/file/d/1_UAwzqfFfU3CLz-4p7V3k-c8wt7osba5/view?usp=drive_link) | [YouTube](https://www.youtube.com/watch?v=Zjpl84KCTvw&list=PLJV_el3uVTsNnC37JYD8kBcNDI7CNJgum) | [Bilibili](https://www.bilibili.com/video/BV115qZYuENj/?spm_id_from=333.999.0.0&vd_source=de7baff5ae97e3392cfbc4c86ea52abf)
    - Title: Audio Language Models
  - [Dongchao Yang (CUHK)](https://scholar.google.com/citations?user=WNiojyAAAAAJ&hl=zh-CN): 16:00-16:35
    - [Slides](https://drive.google.com/file/d/1oXArl4DayOraVzVH0INUsnB8toIfEJiM/view?usp=sharing) | [Recording](https://drive.google.com/file/d/1owL-lA_VH2rujvG93DaVV1upo8JSrWRc/view?usp=drive_link) | [YouTube](https://www.youtube.com/watch?v=ExDfqz8NfnE&list=PLJV_el3uVTsNnC37JYD8kBcNDI7CNJgum&index=3) | [Bilibili](https://www.bilibili.com/video/BV1m3qZY3Ej3/?spm_id_from=333.999.0.0&vd_source=de7baff5ae97e3392cfbc4c86ea52abf)
    - Title: Challenges in Developing Universal Audio Foundation Model
  - [Shang-Wen Li (Meta)](https://swdanielli.github.io/): 16:35-17:10
    - [Slides](https://drive.google.com/file/d/1aRGllscyT2QMRA0sBtebHpBNiDfN84Wq/view?usp=sharing) | [Recording](https://drive.google.com/file/d/1epVNVNoqiHkS3_KPXnCE4f8KZWTcmea1/view?usp=drive_link) | [YouTube](https://www.youtube.com/watch?v=JidtdZVtpkI&list=PLJV_el3uVTsNnC37JYD8kBcNDI7CNJgum&index=2) | [Bilibili](https://www.bilibili.com/video/BV1R9qoYXEWJ/?spm_id_from=333.999.0.0)
    - Title: VoiceCraft: Zero-Shot Speech Editing and TTS in the Wild
  - [Wenwu Wang (University of Surrey)](https://scholar.google.co.uk/citations?user=JQFnV5IAAAAJ&hl=en): 17:40-18:15
    - [Slides](https://drive.google.com/file/d/1gjBHCi76JiQmaSs9at8T1h2Aw2SnmH08/view?usp=sharing) | [Recording](https://drive.google.com/file/d/1bb3WBHI9z1yOWaTUKn5xKmppe0EHeFyk/view?usp=drive_link) | [YouTube](https://www.youtube.com/watch?v=fIoCxwVobEo&list=PLJV_el3uVTsNnC37JYD8kBcNDI7CNJgum&index=4) | [Bilibili](https://www.bilibili.com/video/BV1dXqoYWE6L/?spm_id_from=333.999.0.0&vd_source=de7baff5ae97e3392cfbc4c86ea52abf)
    - Title: Neural Audio Codecs: Recent Progress and a Case Study with SemantiCodec
[YouTube](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=fIoCxwVobEo&list=PLJV_el3uVTsNnC37JYD8kBcNDI7CNJgum&index=4) | [Bilibili](https:\u002F\u002Fwww.bilibili.com\u002Fvideo\u002FBV1dXqoYWE6L\u002F?spm_id_from=333.999.0.0&vd_source=de7baff5ae97e3392cfbc4c86ea52abf)\n    - Title: Neural Audio Codecs: Recent Progress and a Case Study with SemantiCodec\n  - [Minje Kim (UIUC)](https:\u002F\u002Fsiebelschool.illinois.edu\u002Fabout\u002Fpeople\u002Fall-faculty\u002Fminje): 18:15-18:50\n    - [Slides](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1NsWFRC0-d86tgk-Z36D8oRocT4nX_9FQ\u002Fview?usp=sharing) | [Recording](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F16wH7nixp_botp1vJAeeQM9ucsnpaRlZ0\u002Fview?usp=drive_link) | [YouTube](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=zxFTrb_xGD0&list=PLJV_el3uVTsNnC37JYD8kBcNDI7CNJgum&index=5) | [Bilibili](https:\u002F\u002Fwww.bilibili.com\u002Fvideo\u002FBV1GfqoYnEfS\u002F?spm_id_from=333.999.0.0&vd_source=de7baff5ae97e3392cfbc4c86ea52abf)\n    - Title: Future Directions in Neural Speech Communication Codecs\n- Host\n  - [Hung-yi Lee (NTU)](https:\u002F\u002Fspeech.ee.ntu.edu.tw\u002F~hylee\u002Findex.php)\n  - [Haibin Wu (Microsoft)](https:\u002F\u002Fhbwu-ntu.github.io)\n- Accepted papers ([Recording](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1NH0RyqZJ8NogH5jUuO5tQ9iShKj95WeM\u002Fview?usp=drive_link))\n  - ESPnet-Codec: Comprehensive Training and Evaluation of Neural Codecs for Audio, Music, and Speech\n  - Codec-SUPERB @ SLT 2024: A lightweight benchmark for neural audio codec models\n  - Investigating neural audio codecs for speech language model-based speech generation\n  - Addressing Index Collapse of Large-Codebook Speech Tokenizer with Dual-Decoding Product-Quantized Variational Auto-Encoder\n  - MDCTCodec: A Lightweight MDCT-based Neural Audio Codec towards High Sampling Rate and Low Bitrate Scenarios\n  \n\u003C\u002Fdetails>\n\n## :trident: Interspeech 2024 Survey Talk\n\nProfessor Hung-Yi Lee will be giving a talk as part of the [Interspeech 2024 survey talk](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1gPjnjGKxeCF72gisPVuQlDvogXQCtNk4\u002Fview) titled **Challenges in Developing Spoken Language Models**. The topic will cover today's speech\u002Faudio large language models.\n\n## :trident: ICASSP 2024 Tutorial Information\n\nI (Kai-Wei Chang) will be giving a talk as part of the [ICASSP 2024 tutorial](https:\u002F\u002Fcmsworkshops.com\u002FICASSP2024\u002Ftutorials.php#tut32) titled **Parameter-Efficient and Prompt Learning for Speech and Language Foundation Models**. The topic will cover today's speech\u002Faudio large language models. The slides from my presentation are available at https:\u002F\u002Fkwchang.org\u002Ftalks\u002F. Please feel free to reach out to me for any discussions.\n\n## :trident: Interspeech 2026 Tutorial\nComing soon...\n\n## 🔱 Related Repository\n\n| Name                                                        | GitHub Repo                                                                                   | Paper                                               |\n| ----------------------------------------------------------- | --------------------------------------------------------------------------------------------- | --------------------------------------------------- |\n| :fire: NEW! **Spoken-Dialogue-Model-Survey** | :fire: NEW! [Link](https:\u002F\u002Fgithub.com\u002Fga642381\u002FSpoken-Dialogue-Model-Survey) | :fire: NEW! 
[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.22267) |\n| **Towards Holistic Evaluation of Large Audio-Language Models** | [Link](https:\u002F\u002Fgithub.com\u002Fckyang1124\u002FLALM-Evaluation-Survey)                              | [Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.15957)            |\n| **Large Audio Models**                                      | [Link](https:\u002F\u002Fgithub.com\u002Fliusongxiang\u002FLarge-Audio-Models)                                    | -                                                   |\n| **Awesome Speech Generation**                               | [Link](https:\u002F\u002Fgithub.com\u002Fkuan2jiu99\u002FAwesome-Speech-Generation)                               | -                                                   |\n| **Speech Prompts and Adapters**                             | [Link](https:\u002F\u002Fgithub.com\u002Fga642381\u002FSpeech-Prompts-Adapters)                                   | -                                                   |\n| **Codec-SUPERB**                                            | [Link](https:\u002F\u002Fgithub.com\u002Fvoidful\u002FCodec-SUPERB)                                               | [Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.13071)            |\n| **Awesome Neural Reprogramming and Prompting**                | [Link](https:\u002F\u002Fgithub.com\u002Fhuckiyang\u002Fawesome-neural-reprogramming-prompting)                   | -                                                   |\n\n## Citation\n\nThe survey paper **“On The Landscape of Spoken Language Models: A Comprehensive Survey”** is now available on [arXiv](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2504.08528).\n\n```bibtex\n@article{arora2025landscape,\n  title={On The Landscape of Spoken Language Models: A Comprehensive Survey},\n  author={Arora, Siddhant and Chang, Kai-Wei and Chien, Chung-Ming and Peng, Yifan and Wu, Haibin and Adi, Yossi and Dupoux, Emmanuel and Lee, Hung-Yi and Livescu, Karen and Watanabe, Shinji},\n  journal={arXiv preprint arXiv:2504.08528},\n  year={2025}\n}\n```\n\nRelated articles by the core contributors:\n\n```bibtex\n@article{chang2026tico,\n      title={TiCo: Time-Controllable Training for Spoken Dialogue Models},\n      author={Kai-Wei Chang and Wei-Chih Chen and En-Pei Hu and Hung-yi Lee and James Glass},\n      journal={arXiv preprint arXiv:2603.22267},\n      year={2026}\n}\n```\n\n```\n@article{wu2024ts3,\n  title={TS3-Codec: Transformer-Based Simple Streaming Single Codec},\n  author={Wu, Haibin and Kanda, Naoyuki and Eskimez, Sefik Emre and Li, Jinyu},\n  journal={arXiv preprint arXiv:2411.18803},\n  year={2024}\n}\n```\n\n```\n@article{wu2024codec,\n  title={Codec-SUPERB@ SLT 2024: A lightweight benchmark for neural audio codec models},\n  author={Wu, Haibin and Chen, Xuanjun and Lin, Yi-Cheng and Chang, Kaiwei and Du, Jiawei and Lu, Ke-Han and Liu, Alexander H and Chung, Ho-Lam and Wu, Yuan-Kuei and Yang, Dongchao and others},\n  journal={arXiv preprint arXiv:2409.14085},\n  year={2024}\n}\n```\n\n```\n@inproceedings{wu-etal-2024-codec,\n    title = \"Codec-{SUPERB}: An In-Depth Analysis of Sound Codec Models\",\n    author = \"Wu, Haibin  and\n      Chung, Ho-Lam  and\n      Lin, Yi-Cheng  and\n      Wu, Yuan-Kuei  and\n      Chen, Xuanjun  and\n      Pai, Yu-Chi  and\n      Wang, Hsiu-Hsuan  and\n      Chang, Kai-Wei  and\n      Liu, Alexander  and\n      Lee, Hung-yi\",\n    editor = \"Ku, Lun-Wei  and\n      Martins, Andre  and\n      Srikumar, Vivek\",\n    booktitle = \"Findings 
of the Association for Computational Linguistics: ACL 2024\",\n    month = aug,\n    year = \"2024\",\n    address = \"Bangkok, Thailand\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https:\u002F\u002Faclanthology.org\u002F2024.findings-acl.616\",\n    doi = \"10.18653\u002Fv1\u002F2024.findings-acl.616\",\n    pages = \"10330--10348\",\n}\n```\n\n```\n@article{wu2023speechgen,\n  title={Speechgen: Unlocking the generative power of speech language models with prompts},\n  author={Wu, Haibin and Chang, Kai-Wei and Wu, Yuan-Kuei and Lee, Hung-yi},\n  journal={arXiv preprint arXiv:2306.02207},\n  year={2023}\n}\n```\n\n```\n@article{wu2024towards,\n  title={Towards audio language modeling-an overview},\n  author={Wu, Haibin and Chen, Xuanjun and Lin, Yi-Cheng and Chang, Kai-wei and Chung, Ho-Lam and Liu, Alexander H and Lee, Hung-yi},\n  journal={arXiv preprint arXiv:2402.13236},\n  year={2024}\n}\n```\n","# :trident: 语音三叉戟 - 强大的语音大语言模型\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fga642381_speech-trident_readme_b400c4d3ae1e.png\" alt=\"Speech Trident\" style=\"width:70%;\">\n\u003C\u002Fp>\n\n在这个仓库中，我们综述了三个关键领域：(1) 表征学习，(2) 神经编解码器，以及 (3) 语言模型，这些都对语音\u002F音频大语言模型的发展起到了重要作用。\n\n1.⚡ **语音表征模型**：这些模型专注于学习语音的结构化表示，随后可以将其量化为离散的语音标记，通常被称为**语义标记**。\n\n2.⚡ **语音神经编解码器模型**：这些模型旨在学习语音和音频的离散标记，通常称为**声学标记**，同时保持重建能力和低比特率。\n\n3.⚡ **语音大语言模型**：这些模型基于语音和声学标记，采用语言建模的方式进行训练。它们在语音理解和语音生成任务中表现出色。\n\n## :trident: 贡献者\n\n\u003Ctable>\n  \u003Ctr>\n    \u003Ctd align=\"center\">\n      \u003Ca href=\"https:\u002F\u002Fkwchang.org\u002F\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fga642381_speech-trident_readme_4e6f76d94152.png\" width=\"100px;\" style=\"border-radius: 50%;\" alt=\"\"\u002F>\n        \u003Cbr \u002F>\n        \u003Csub>\u003Cb>张凯为\u003C\u002Fb>\u003C\u002Fsub>\n      \u003C\u002Fa>\n    \u003C\u002Ftd>\n    \u003Ctd align=\"center\">\n      \u003Ca href=\"https:\u002F\u002Fhbwu-ntu.github.io\u002F\">\n        \u003Cimg src=\"https:\u002F\u002Fscholar.googleusercontent.com\u002Fcitations?view_op=medium_photo&user=-bB-WHEAAAAJ&citpid=1\" width=\"100px;\" style=\"border-radius: 50%;\" alt=\"\"\u002F>\n        \u003Cbr \u002F>\n        \u003Csub>\u003Cb>吴海斌\u003C\u002Fb>\u003C\u002Fsub>\n      \u003C\u002Fa>\n    \u003C\u002Ftd>\n    \u003Ctd align=\"center\">\n      \u003Ca href=\"https:\u002F\u002Fscholar.google.com.tw\u002Fcitations?user=-d6aNP0AAAAJ&hl=zh-TW\">\n        \u003Cimg src=\"https:\u002F\u002Fscholar.googleusercontent.com\u002Fcitations?view_op=medium_photo&user=-d6aNP0AAAAJ&citpid=2\" width=\"100px;\" style=\"border-radius: 50%;\" alt=\"\"\u002F>\n        \u003Cbr \u002F>\n        \u003Csub>\u003Cb>曾伟成\u003C\u002Fb>\u003C\u002Fsub>\n      \u003C\u002Fa>\n    \u003C\u002Ftd>\n    \u003Ctd align=\"center\">\n      \u003Ca href=\"https:\u002F\u002Fkehanlu.com\u002F\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fga642381_speech-trident_readme_f1fda4d868b9.png\" width=\"100px;\" style=\"border-radius: 50%;\" alt=\"\"\u002F>\n        \u003Cbr \u002F>\n        \u003Csub>\u003Cb>陆可涵\u003C\u002Fb>\u003C\u002Fsub>\n      \u003C\u002Fa>\n    \u003C\u002Ftd>\n    \u003Ctd align=\"center\">\n      \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fkuan2jiu99\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fga642381_speech-trident_readme_d1f0422fc705.png\" width=\"100px;\" style=\"border-radius: 50%;\" 
alt=\"\"\u002F>\n        \u003Cbr \u002F>\n        \u003Csub>\u003Cb>关淳怡\u003C\u002Fb>\u003C\u002Fsub>\n      \u003C\u002Fa>\n    \u003C\u002Ftd>\n    \u003Ctd align=\"center\">\n      \u003Ca href=\"https:\u002F\u002Fspeech.ee.ntu.edu.tw\u002F~hylee\u002Findex.php\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fga642381_speech-trident_readme_b99fc95a9866.png\" width=\"100px;\" style=\"border-radius: 50%;\" alt=\"\"\u002F>\n        \u003Cbr \u002F>\n        \u003Csub>\u003Cb>李宏毅\u003C\u002Fb>\u003C\u002Fsub>\n      \u003C\u002Fa>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n\u003C\u002Ftable>\n\n## :trident: 2026 年新闻\n关于**口语对话模型**的新综述仓库已在 [GitHub](https:\u002F\u002Fgithub.com\u002Fga642381\u002FSpoken-Dialogue-Model-Survey) 上发布。\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fga642381_speech-trident_readme_acb3f48ca8b7.png\" width=\"500\">\n\u003C\u002Fp>\n\n```bibtex\n@article{chang2026tico,\n      title={TiCo: 口语对话模型的时间可控训练},\n      author={Kai-Wei Chang 和 Wei-Chih Chen 和 En-Pei Hu 和 Hung-yi Lee 和 James Glass},\n      journal={arXiv 预印本 arXiv:2603.22267},\n      year={2026}\n}\n```\n\n## :trident: 2025 年新闻\n\n综述论文 **“关于口语语言模型的全景：全面综述”** 现已发表在 [arXiv](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2504.08528) 上。\n```bibtex\n@article{arora2025landscape,\n  title={关于口语语言模型的全景：全面综述},\n  author={Arora, Siddhant 和 Chang, Kai-Wei 和 Chien, Chung-Ming 和 Peng, Yifan 和 Wu, Haibin 和 Adi, Yossi 和 Dupoux, Emmanuel 和 Lee, Hung-Yi 和 Livescu, Karen 和 Watanabe, Shinji},\n  journal={arXiv 预印本 arXiv:2504.08528},\n  year={2025}\n}\n```\n\n该论文对口语语言模型（SLMs）进行了全面的综述，涵盖了本 Speech-Trident 项目中所调研的大量语音\u002F音频语言模型，但讨论更为详细和技术化。论文将 SLMs 分为：\n\n1.⚡ **纯语音 LM**\n\n2.⚡ **语音感知文本 LM**\n\n3.⚡ **语音 + 文本 LM**\n\n此外，论文还讨论了训练策略、语音\u002F文本标记解码模式、双工语音对话、SLMs 的基准测试等内容。更多细节请阅读[论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2504.08528)，相信你会喜欢！\n\n## :trident: 语音\u002F音频语言模型\n\n| 日期    | 模型名称     | 论文标题                                                                                                           | 链接                                      |\n| ------- | -------------- | --------------------------------------------------------------------------------------------------------------------- | ----------------------------------------- |\n| 2025-07 | Audio Flamingo 3 | Audio Flamingo 3：通过全开源大型音频语言模型推进音频智能 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.08128) |\n| 2025-07 | DeSTA2.5-Audio | DeSTA2.5-Audio：迈向具有自生成跨模态对齐的通用大型音频语言模型 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.02768) |\n| 2025-05 | SLED | 基于连续潜空间中的能量距离实现高效语音语言建模 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.13181) |\n| 2025-05 | BALSa | 从对齐到提升：利用合成数据自举音频-语言对齐 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.20166) |\n| 2025-04 | -   | 口语语言模型全景：全面综述 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2504.08528) |\n| 2025-04 | Kimi-Audio   | Kimi-Audio 技术报告 | [论文](https:\u002F\u002Fgithub.com\u002FMoonshotAI\u002FKimi-Audio\u002Fblob\u002Fmaster\u002Fassets\u002Fkimia_report.pdf) |\n| 2025-03 | Qwen2.5-Omni   | Qwen2.5-Omni 技术报告 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.20215) |\n| 2025-03 | Yue   | YuE：面向长音频音乐生成的开源基础模型扩展 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2503.08638) |\n| 2025-03 | CSM   | 对话式语音生成 | [博客](https:\u002F\u002Fwww.sesame.com\u002Fresearch\u002Fcrossing_the_uncanny_valley_of_voice#demo) |\n| 2025-03 | Phi-4-Multimodal   | Phi-4-Mini 技术报告：通过LoRA混合实现紧凑而强大的多模态语言模型 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.01743) |\n| 
2025-03 | Baichuan-Audio   | Baichuan-Audio：端到端语音交互的统一框架 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2502.17239) |\n| 2025-02 | DiTAR | DiTAR：用于语音生成的扩散Transformer自回归建模 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.03930) |\n| 2025-02 | Slamming   | Slamming：一天内在单张GPU上训练一个语音语言模型 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2502.15814) |\n| 2025-02 | Step-Audio   | Step-Audio：智能语音交互中的统一理解和生成 | [论文](https:\u002F\u002Fgithub.com\u002Fstepfun-ai\u002FStep-Audio\u002Fblob\u002Fcn-readme\u002Fassets\u002FStep-Audio.pdf) |\n| 2025-01 | BAICHUAN-OMNI-1.5   | BAICHUAN-OMNI-1.5 技术报告 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2501.15368) |\n| 2025-01 |  MiniCPM-o   | 一款可在手机上运行、达到GPT-4o水平的视觉、语音及多模态直播MLLM | [GitHub](https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FMiniCPM-o) |\n| 2025-01 |  MinMo   | MinMo：一款用于无缝语音交互的多模态大型语言模型 | [论文](https:\u002F\u002Ffunaudiollm.github.io\u002Fminmo\u002F) |\n| 2025-01 |  VITA-1.5   | VITA-1.5：迈向GPT-4o级别的实时视觉与语音交互 | [论文](https:\u002F\u002Farxiv.org\u002Fhtml\u002F2501.01957v1) |\n| 2025-01 |  OMNICHAT   | OmniChat：利用可扩展的合成数据增强多样化场景下的口语对话系统 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.01384) |\n| 2025-01 |  SLIDE   | SLIDE：将语音语言模型与LLM结合以生成自发性口语对话 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2501.00805) |\n| 2024-12 | SLAM-Omni   | SLAM-Omni：单阶段训练的音色可控语音交互系统 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.15649), [代码](https:\u002F\u002Fgithub.com\u002FX-LANCE\u002FSLAM-LLM\u002Fblob\u002Fmain\u002Fexamples\u002Fs2s\u002FREADME.md) |\n| 2024-12 |   TouchTTS  | TouchTTS：一个简单到人人都能上手的TTS框架 | [论文](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2412.08237) |\n| 2024-12 |   CosyVoice 2  | CosyVoice 2：基于大型语言模型的可扩展流式语音合成 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2412.10117) |\n| 2024-12 |   GLM-4-Voice  | GLM-4-Voice：迈向智能且拟人化的端到端口语聊天机器人 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2412.02612) |\n| 2024-12 |   AlignFormer  | AlignFormer：模态匹配可实现更好的零样本指令遵循语音LLM | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2412.01145) |\n| 2024-11 |   --  | 利用合成交错数据扩展语音-文本预训练 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.17607) |\n| 2024-11 |   --  | 状态空间大型音频语言模型 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2411.15685) |\n| 2024-11 |   --  | 构建台湾闽南语口语语言模型：首次尝试 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.07111) |\n| 2024-11 | Ultravox    | Ultravox：GPT-4o Realtime的开源权重替代方案 | [博客](https:\u002F\u002Fwww.ultravox.ai\u002Fblog\u002Fultravox-an-open-weight-alternative-to-gpt-4o-realtime) |\n| 2024-11 | hertz-dev    | [博客](https:\u002F\u002Fsi.inc\u002Fhertz-dev\u002F)  | [GitHub](https:\u002F\u002Fgithub.com\u002FStandard-Intelligence\u002Fhertz-dev) |\n| 2024-11 | Freeze-Omni    | Freeze-Omni：一种智能且低延迟的冻结LLM语音对话模型 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.00774) |\n| 2024-11 | Align-SLM              | Align-SLM：无文本语音语言模型，通过AI反馈进行强化学习       | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2411.01834) |\n| 2024-10 | Ichigo | Ichigo：混合模态早期融合的实时语音助手 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.15316), [代码](https:\u002F\u002Fgithub.com\u002Fhomebrewltd\u002Fichigo)|\n| 2024-10 | OmniFlatten              | OmniFlatten：一款用于无缝语音对话的端到端GPT模型       | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.17799v1) |\n| 2024-10 | GPT-4o              | GPT-4o 系统卡片       | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2410.21276) |\n| 2024-10 | Baichuan-OMNI              | Baichuan-Omni 技术报告       | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.08565) |\n| 2024-10 | GLM-4-Voice                | GLM-4-Voice       | 
[GitHub](https:\u002F\u002Fgithub.com\u002FTHUDM\u002FGLM-4-Voice) |\n| 2024-10 | --              | 使用大型语言模型实现超人类语音理解的路线图       | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.13268) |\n| 2024-10 | SALMONN-OMNI               | SALMONN-OMNI：在无编解码器的全双工框架下实现语音理解和生成的LLM       | [论文](https:\u002F\u002Fopenreview.net\u002Fattachment?id=eJpI20hzWf&name=pdf) |\n| 2024-10 | Mini-Omni 2               | Mini-Omni2：朝着具备视觉、语音和双工能力的开源GPT-4o迈进       | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.11190) |\n| 2024-10 | HALL-E               | HALL-E：用于分钟级零样本文本转语音合成的分层神经编解码语言模型       | [论文](https:\u002F\u002Fopenreview.net\u002Fforum?id=868masI331) |\n| 2024-10 | SyllableLM    |  SyllableLM：为语音语言模型学习粗粒度语义单元     | [论文](https:\u002F\u002Farxiv.org\u002Fhtml\u002F2410.04029v1) |\n| 2024-09 | DeSTA 2 | DeSTA2：无需语音指令微调数据即可开发指令遵循语音语言模型 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.20007) |\n| 2024-09 | Moshi               | Moshi：用于实时对话的语音-文本基础模型       | [论文](https:\u002F\u002Fkyutai.org\u002FMoshi.pdf) |\n| 2024-09 | Takin AudioLLM           | Takin：一批高质量的零样本语音生成模型       | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.12139) |\n| 2024-09 | FireRedTTS          | FireRedTTS：面向工业级生成式语音应用的基础文本转语音框架       | [论文](https:\u002F\u002Farxiv.org\u002Fhtml\u002F2409.03283v1) |\n| 2024-09 | LLaMA-Omni         | LLaMA-Omni：与大型语言模型实现无缝语音交互                                                      | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.06666) |\n| 2024-09 | MaskGCT         | MaskGCT：使用掩码生成式编解码Transformer实现零样本文本转语音                                                       | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.00750v1) |\n| 2024-09 | SSR-Speech         | SSR-Speech：致力于稳定、安全且鲁棒的零样本基于文本的语音编辑和合成                                                       | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.07556) |\n| 2024-09 | MoWE-Audio         | MoWE-Audio：混合弱编码器的多任务音频LLM                                                       | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2409.06635) |\n| 2024-08 | Mini-Omni       | Mini-Omni：语言模型可以在流式传输中听、说并思考                                       | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.16725) |\n| 2024-08 | Make-A-Voice 2       |  Make-A-Voice：重新审视语音大型语言模型，将其视为可扩展的多语言和多任务学习者    | [论文](https:\u002F\u002Faclanthology.org\u002F2024.acl-long.589\u002F) |\n| 2024-08 | LSLM       |  语言模型在说话时也能倾听  | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.02622) |\n| 2024-07 | Seed-ASR       |  Seed-ASR：利用基于LLM的语音识别理解多样化的语音和语境  | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.04675) |\n| 2024-07 | MELLE | 不使用向量量化实现自回归语音合成 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.08551) |\n| 2024-06 | SimpleSpeech       | SimpleSpeech：朝着使用标量潜在扩散模型实现简单高效的文本转语音迈进                                                       | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.02328) |\n| 2024-06 | UniAudio 1.5                | UniAudio 1.5：由大型语言模型驱动的音频编解码器是少样本音频任务的学习者  | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.10056) |\n| 2024-06 | VALL-E R               | VALL-E R：通过单调对齐实现稳健高效的零样本文本转语音合成 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.07855) |\n| 2024-06 | VALL-E 2               | VALL-E 2：神经编解码语言模型是人类水平的零样本文本转语音合成者  | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.05370) |\n| 2024-06 | GPST             | 具有高效分层Transformer的生成式预训练语音语言模型  | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.00976) |\n| 2024-04 | CLaM-TTS         | CLaM-TTS：改进零样本文本转语音的神经编解码语言模型                                                       | 
[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.02781) |\n| 2024-04 | RALL-E         | RALL-E：通过思维链提示进行稳健的编解码语言建模，用于文本转语音合成                                                       | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.03204) |\n| 2024-04 | WavLLM         | WavLLM：朝着稳健且适应性强的语音大型语言模型迈进                                                       | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.00656) |\n| 2024-02 | MobileSpeech       | MobileSpeech：一款快速且高保真的移动端零样本文本转语音框架                                                    | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.09378) |\n| 2024-02 | SLAM-ASR       | 一种简单到令人尴尬的、具备强大ASR能力的LLM方法                                                    | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.08846) |\n| 2024-02 | AnyGPT         | AnyGPT：统一的多模态LLM，采用离散序列建模                                                        | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.12226) |\n| 2024-02 | SpiRit-LM      | SpiRit-LM：交织的口语和书面语言模型                                                              | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.05755) |\n| 2024-02 | USDM           | 在赋能语音的大型语言模型中整合副语言学，以实现自然对话                                                       | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.05706) |\n| 2024-02 | BAT            | BAT：学习如何利用大型语言模型推理空间声音                                               | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.01591) |\n| 2024-02 | Audio Flamingo | Audio Flamingo：一种具有少样本学习和对话能力的新颖音频语言模型                            | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.01831) |\n| 2024-02 | 文本描述转语音 | 通过合成注释实现高保真文本转语音的自然语言指导                      | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.01912) |\n| 2024-02 | GenTranslate   | GenTranslate：大型语言模型是生成式多语言语音和机器翻译者                        | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.06894) |\n| 2024-02 | Base-TTS       | BASE TTS：从在10万小时数据上构建十亿参数文本转语音模型中汲取的经验                        | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.08093) |\n| 2024-02 | --             | 为自动语音识别将声学信息融入大型语言模型，永远为时不晚          | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.05457) |\n| 2024-01 | --             | 大型语言模型是噪声鲁棒语音识别的有效学习者                                       | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.10446) |\n| 2024-01 | ELLA-V             | ELLA-V：通过对齐引导的序列重排实现稳定的神经编解码语言建模                                       | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.07333) |\n| 2023-12 | Seamless       | Seamless：多语言、富有表现力且流式的语音翻译                                                    | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.05187) |\n| 2023-11 | Qwen-Audio     | Qwen-Audio：通过统一的大规模音频-语言模型推进通用音频理解                     | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.07919) |\n| 2023-10 | LauraGPT       | LauraGPT：用GPT聆听、关注、理解并再生音频                                                   | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.04673) |\n| 2023-10 | SALMONN        | SALMONN：迈向大型语言模型的通用听力能力                                                  | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.13289) |\n| 2023-10 | UniAudio       | UniAudio：一款面向通用音频生成的音频基础模型                                                 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.00704) |\n| 2023-10 | Whispering LLaMA | Whispering LLaMA：一种用于语音识别的跨模态生成式纠错框架                          | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.06434) |\n| 2023-09 | VoxtLM         | 
Voxtlm：统一的仅解码器模型，用于整合语音识别\u002F合成以及语音\u002F文本延续任务 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.07937) |\n| 2023-09 | LTU-AS         | 音频与语音的联合理解                                                                                  | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.14405) |\n| 2023-09 | SLM            | SLM：弥合语音与文本基础模型之间的细微差距                                                    | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.00230) |\n| 2023-09 | --             | 利用大型语言模型和任务激活式提示进行生成式语音识别纠错               | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.15649) |\n| 2023-08 | SpeechGen      | SpeechGen：通过提示释放语音语言模型的生成潜力                                      | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.02207) |\n| 2023-08 | SpeechX        | SpeechX：作为多功能语音转换器的神经编解码语言模型                                                | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.06873) |\n| 2023-08 | LLaSM          | 大型语言和语音模型                                                                                       | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.15930) |\n| 2023-08 | SeamlessM4T    | 大规模多语言和多模态机器翻译                                                               | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.11596) |\n| 2023-07 | Speech-LLaMA   | 关于仅解码器架构在语音转文本和大型语言模型集成中的应用                                  | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.03917) |\n| 2023-07 | LLM-ASR(temp.) | 用语音识别能力提示大型语言模型                                                     | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.11795) |\n| 2023-06 | AudioPaLM      | AudioPaLM：一款会说也会听的大型语言模型                                                           | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.12925) |\n| 2023-05 | Make-A-Voice       | Make-A-Voice：采用离散表示的统一语音合成                                       | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.19269) |\n| 2023-05 | Spectron       | 利用频谱图驱动的LLM进行口语问答和语音延续                                       | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.15255) |\n| 2023-05 | TWIST          | 文本预训练的语音语言模型                                                                           | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.13009) |\n| 2023-05 | Pengi          | Pengi：一款用于音频任务的音频语言模型                                                                        | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.11834) |\n| 2023-05 | SoundStorm     | 高效并行音频生成                                                                                   | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.09636) |\n| 2023-05 | LTU            | 聆听、思考并理解                                                                                  | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.10790) |\n| 2023-05 | SpeechGPT      | 用内在的跨模态对话能力赋能大型语言模型                                  | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.11000) |\n| 2023-05 | VioLA          | 统一编解码语言模型用于语音识别、合成和翻译                                      | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.16107) |\n| 2023-05 | X-LLM          | X-LLM：通过将多模态视为外语来启动先进的大型语言模型                 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.04160) |\n| 2023-03 | Google USM     | Google USM：将自动语音识别扩展到超过100种语言                                                 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.01037) |\n| 2023-03 | VALL-E X       | 用自己的声音说外语：跨语言神经编解码语言建模                             | 
[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.03926) |\n| 2023-02 | SPEAR-TTS      | 说、读并提示：在极少监督下实现高保真文本转语音                                         | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.03540) |\n| 2023-01 | VALL-E         | 神经编解码语言模型是零样本文本转语音合成者                                                | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.02111) |\n| 2022-12 | Whisper        | 通过大规模弱监督实现稳健的语音识别                                                            | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.04356) |\n| 2022-10 | AudioGen       | AudioGen：文本引导的音频生成                                                                           | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.15352) |\n| 2022-09 | AudioLM        | AudioLM：一种基于语言建模的方法来进行音频生成                                                             | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.03143) |\n| 2022-05 | Wav2Seq        | Wav2Seq：使用伪语言对语音转文本编码解码模型进行预训练                                    | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.01086) |\n| 2022-04 | Unit mBART     | 通过自监督预训练和数据增强，提升直接的语音转语音翻译                 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.02967) |\n| 2022-03 | d-GSLM         | 生成式口语对话语言建模                                                                          | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.16502) |\n| 2021-10 | SLAM           | SLAM：通过语音-文本联合预训练，为语音和语言建模提供统一编码器                           | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.10329) |\n| 2021-09 | p-GSLM         | 无文本韵律感知的生成式口语语言建模                                                           | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.03264) |\n| 2021-02 | GSLM           | 从原始音频中生成口语语言                                                                      | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.01192) |\n\n## :trident: 语音\u002F音频编解码模型\n\n| 日期    | 模型名称           | 论文标题                                                                                                                                                | 链接                                      |\n| ------- | -------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------- |\n| 2025-06 | CodecSlime | CodecSlime: 通过动态帧率对神经语音编解码器进行时域冗余压缩 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2506.21074) |\n| 2025-06 |  | 离散音频标记：不止于综述！ | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2506.10274) |\n| 2025-06 | TaDiCodec | TaDiCodec: 用于语音语言建模的文本感知扩散语音标记器 | [论文](https:\u002F\u002Fhecheng0625.github.io\u002Fassets\u002Fpdf\u002FArxiv_TaDiCodec.pdf) |\n| 2025-06 | MagiCodec | MagiCodec: 用于高保真重建和生成的简单掩码高斯注入编解码器 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2506.00385) |\n| 2025-06 | - | 探究神经语音编解码器的鲁棒性特性 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2505.24248) |\n| 2025-06 | DS-Codec | DS-Codec: 基于镜像到非镜像架构切换的双阶段训练语音编解码器 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2505.24314) |\n| 2025-05 | LFSC | 低帧率语音编解码器：专为快速高质量语音LLM训练与推理设计的编解码器 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2409.12117) |\n| 2025-05 | PAST | PAST: 语音声学语音标记器 | [论文](https:\u002F\u002Fpages.cs.huji.ac.il\u002Fadiyoss-lab\u002FPAST\u002F) |\n| 2025-04 | ALMTokenizer | ALMTokenizer: 用于音频语言建模的低比特率、语义丰富的音频编解码器标记器 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2504.10344) |\n| 2025-04 | DualCodec  | DualCodec: 一种用于语音生成的低帧率、语义增强型神经音频编解码器 | 
[论文](https:\u002F\u002Fopenreview.net\u002Fattachment?id=P7VkjAVClZ&name=pdf) |\n| 2025-04 | -  | 一个量化器就够了：迈向轻量级音频编解码器 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2504.04949) |\n| 2025-04 | TASTE  | TASTE: 用于口语语言建模的文本对齐语音标记与嵌入 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2504.07053) |\n| 2025-03 | UniCodec  | 通过低比特率神经编解码器和预训练表示进行通用语音标记学习 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2503.12115) |\n| 2025-03 | BiCodec  | Spark-TTS: 一种基于LLM的高效单流解耦语音标记文本转语音模型 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2503.01710) |\n| 2025-03 | FlowDec  | FlowDec: 一种基于流的全频段通用音频编解码器，具有高感知质量 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2503.01485) |\n| 2025-02 | UniCodec  | UniCodec: 具有单一领域自适应码本的统一音频编解码器 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2502.20067) |\n| 2025-02 | Baichuan-Audio Tokenizer  | Baichuan-Audio：端到端语音交互的统一框架 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2502.17239) |\n| 2025-02 | - | 离散语音标记的最新进展：综述       | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2502.06490) |\n| 2025-02 | FocalCodec | FocalCodec: 基于焦点调制网络的低比特率语音编码       | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2502.04465) |\n| 2025-02 | - | 神经编解码器中量化效应的高效评估       | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2502.04770) |\n| 2025-02 | X-Codec 2 | Llasa: 扩展基于Llama的语音合成的训练与推理计算资源       | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2502.04128) |\n| 2025-02 | ComplexDec | ComplexDec: 一种具有复杂频谱建模的领域鲁棒型高保真神经音频编解码器       | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2502.02019) |\n| 2024-12 | TS3-Codec | TS3-Codec: 基于Transformer的简单流式单编解码器       | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2411.18803) |\n| 2024-12 | FreeCodec | FreeCodec: 一种去耦合的神经语音编解码器，使用更少的标记  | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2412.01053) |\n| 2024-12 | TAAE | 扩展Transformer以实现低比特率高质量语音编码       | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2411.19842) |\n| 2024-11 | BEST-STD | BEST-STD: 用于口语术语检测的双向Mamba增强语音标记       | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.14100) |\n| 2024-11 | PyramidCodec | PyramidCodec: 音频领域中用于长篇音乐生成的层次化编解码器 | [论文](https:\u002F\u002Faclanthology.org\u002F2024.findings-emnlp.246.pdf) |\n| 2024-11 | UniCodec | 通过低比特率神经编解码器和预训练表示进行通用语音标记学习 | [论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10738376?casa_token=eWtmSXEr4AEAAAAA:FzYuQIESJ2LXwl9smJQe3RakpDUFuJ-AS0d39ZDlhsI0tBVX_8P7hu4a59yZezz7hpYd3VomUDo) |\n| 2024-11 | SimVQ | 使用一层线性层解决向量量化模型中的表征坍塌问题 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2411.02038) |\n| 2024-11 | MDCTCodec | MDCTCodec: 一种面向高采样率和低比特率场景的轻量级基于MDCT的神经音频编解码器 | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2411.00464) |\n| 2024-10 | APCodec+ | APCodec+: 一种基于频谱编码、具有分阶段训练范式的高保真、高压缩比神经音频编解码器       | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2410.22807) |\n| 2024-10 | - | 更深入地研究神经编解码器的重合成：弥合编解码器与波形生成之间的差距       | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2410.22448) |\n| 2024-10 | SNAC | SNAC: 多尺度神经音频编解码器       | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2410.14411) |\n| 2024-10 | LSCodec | LSCodec: 低比特率且与说话人无关的离散语音编解码器       | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.15764) |\n| 2024-10 | 编解码器与编解码器-LM的协同设计 | 朝着神经编解码器语言模型的编解码器-LM协同设计       | [论文](https:\u002F\u002Fopenreview.net\u002Fpdf?id=KCVv3tICvp) |\n| 2024-10 | VChangeCodec | VChangeCodec: 一种内置变声器的高效率神经语音编解码器，适用于实时通信       | [论文](https:\u002F\u002Fopenreview.net\u002Fforum?id=qDSfOQBrOD) |\n| 2024-10 | DC-Spin | DC-Spin: 一种面向口语语言模型的说话人不变语音标记器       | [论文](https:\u002F\u002Fopenreview.net\u002Fforum?id=OW332Wh9S5) |\n| 2024-10 | DM-Codec | DM-Codec: 为语音标记提炼多模态表示      
 | [论文](https:\u002F\u002Fopenreview.net\u002Fforum?id=UFwefiypla) |\n| 2024-09 | Mimi             | Moshi: 一款用于实时对话的语音-文本基础模型       | [论文](https:\u002F\u002Fkyutai.org\u002FMoshi.pdf) |\n| 2024-09 | NDVQ             | NDVQ: 一种基于正态分布向量量化的稳健神经音频编解码器       | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2409.12717) |\n| 2024-09 | SoCodec             | SoCodec: 一种语义有序的多流语音编解码器，用于高效的基于语言模型的文本转语音合成       | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2409.00933) |\n| 2024-09 | BigCodec             | BigCodec: 推动低比特率神经语音编解码器的极限       | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.05377) |\n| 2024-08 | X-Codec             | 编解码器很重要：探讨编解码器在音频语言模型中的语义不足       | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2408.17175) |\n| 2024-08 | WavTokenizer             | WavTokenizer: 一种用于音频语言建模的高效声学离散编解码器标记器 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.16532) |\n| 2024-07 | Super-Codec             | SuperCodec: 一种带有选择性反投影网络的神经语音编解码器 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.20530) |\n| 2024-07 | dMel             | dMel: 简单的语音标记 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.15835) |\n| 2024-06 | CodecFake             | CodecFake: 通过基于编解码器的语音合成系统提升对抗深度伪造音频的反欺骗模型能力 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.07237) |\n| 2024-06 | Single-Codec             | Single-Codec: 一种面向高性能语音生成的单码本语音编解码器 | [论文](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2406.07422) |\n| 2024-06 | SQ-Codec             | SimpleSpeech: 通过标量潜在扩散Transformer模型实现简单高效的文本转语音 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.02328) |\n| 2024-06 | PQ-VAE             | 使用双重解码乘积量化变分自编码器解决大码本语音标记器的索引坍塌问题 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.02940) |\n| 2024-06 | LLM-Codec                | UniAudio 1.5: 大型语言模型驱动的音频编解码器是少样本音频任务的学习者  | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.10056) |\n| 2024-05 | HILCodec                 | HILCodec: 高保真且轻量级的神经音频编解码器                                                                                             | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.04752) |\n| 2024-04 | SemantiCodec             | SemantiCodec: 一种超低比特率的通用声音语义音频编解码器                                                                              | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.00233) |\n| 2024-04 | PromptCodec             | PromptCodec: 一种利用解耦表示学习的自适应特征感知提示编码器实现高保真神经语音编解码器                | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.02702) |\n| 2024-04 | ESC             | ESC: 利用跨尺度残差向量量化Transformer实现高效语音编码                                                                             | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.19441) |\n| 2024-03 | FACodec                  | NaturalSpeech 3: 使用因子化编解码器和扩散模型实现零样本语音合成                                                                 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.03100) |\n| 2024-02 | AP-Codec             | APCodec: 一种具有并行幅度和相位频谱编码与解码功能的神经音频编解码器 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.10533) |\n| 2024-02 | Language-Codec       | Language-Codec: 减少离散编解码器表示与语音语言模型之间的差距                                                         | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.12208) |\n| 2024-01 | ScoreDec             | ScoreDec: 一种保留相位的高保真音频编解码器，配备广义的基于评分的扩散后置滤波器                                                | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.12160) |\n| 2023-11 | HierSpeech++         | HierSpeech++: 通过零样本语音合成的层次化变分推理弥合语音的语义与声学表示之间的差距 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.12454) |\n| 2023-10 | TiCodec              | 
更少标记的神经语音编解码器，采用时间不变码                                                                                                  | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2310.00014) |\n| 2023-09 | RepCodec             | RepCodec: 一种用于语音标记的语音表示编解码器                                                                                            | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.00169) |\n| 2023-09 | FunCodec             | FunCodec: 一套基础、可复现且可集成的开源神经语音编解码工具包                                                           | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.07405) |\n| 2023-08 | SpeechTokenizer      | Speechtokenizer: 用于语音大型语言模型的统一语音标记器                                                                                 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.16692) |\n| 2023-06 | VOCOS | VOCOS: 弥合时域与基于傅里叶变换的神经声码器之间的差距，以实现高质量音频合成                                                             | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2306.00814) |\n| 2023-06 | Descript-audio-codec | 高保真音频压缩，采用改进的RVQGAN                                                                                                       | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.06546) |\n| 2023-05 | AudioDec             | Audiodec: 一个开源的流式高保真神经音频编解码器                                                                                         | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.16608) |\n| 2023-05 | HiFi-Codec           | Hifi-codec: 用于高保真音频编解码器的组内残差向量量化                                                                               | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.02765) |\n| 2023-03 | LMCodec              | LMCodec: 一种使用因果Transformer模型的低比特率语音编解码器                                                                                         | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.12984) |\n| 2022-11 | Disen-TF-Codec              | 用于实时神经语音编码的解耦特征学习   | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.11960) |\n| 2022-10 | EnCodec              | 高保真神经音频压缩                                                                                                                     | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.13438) |\n| 2022-07 | S-TFNet              | 用于可扩展神经语音编码的跨尺度向量量化 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.03067) |\n| 2022-01 | TFNet              | 用于实时通信的端到端神经语音编码 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.09429) |\n| 2021-07 | SoundStream          | SoundStream: 一种端到端神经音频编解码器                                                                                                              | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.03312) |\n\n## :trident: 语音\u002F音频表示模型\n\n| 日期    | 模型名称   | 论文标题                                                                                                   | 链接                                      |\n| ------- | ------------ | ------------------------------------------------------------------------------------------------------------- | ----------------------------------------- |\n| 2025-06 | USAD      | USAD：通过蒸馏实现通用语音和音频表示                                        | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2506.18843) |\n| 2025-03 | UniWav      | UniWav：迈向语音表示学习与生成的统一预训练                                        | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2503.00733) |\n| 2024-09 | NEST-RQ      | NEST-RQ：用于语音自监督预训练的下一个标记预测                                        | [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2409.08680) |\n| 2024-01 | EAT         
 | 基于高效音频Transformer的自监督预训练                                                 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.03497) |\n| 2023-10 | MR-HuBERT    | 多分辨率HuBERT：基于掩码单元预测的多分辨率语音自监督学习         | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.02720) |\n| 2023-10 | SpeechFlow   | 基于流匹配的语音生成式预训练                                                         | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.16338) |\n| 2023-09 | WavLabLM     | 大规模多语言自监督学习中的联合预测与去噪                                          | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.15317) |\n| 2023-08 | W2v-BERT 2.0 | 超大规模多语言及多模态机器翻译                                                       | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.11596) |\n| 2023-07 | Whisper-AT   | 抗噪声自动语音识别器同时也是强大的通用音频事件标签器                         | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.03183) |\n| 2023-06 | ATST         | 适用于片段级和帧级任务的自监督音频师生Transformer                   | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.04186) |\n| 2023-05 | SPIN         | 通过说话人无关聚类改进内容表示的自监督微调                                         | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.11072) |\n| 2023-05 | DinoSR       | 自蒸馏与在线聚类用于自监督语音表示学习                                            | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.10005) |\n| 2023-05 | NFA          | 用于解耦话语级语音表示的自监督神经因子分析                                         | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.08099) |\n| 2022-12 | Data2vec 2.0 | 基于上下文化目标表示的视觉、语音和语言高效自监督学习                             | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.07525) |\n| 2022-12 | BEATs        | 基于声学分词器的音频预训练                                                           | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.09058) |\n| 2022-11 | MT4SSL       | MT4SSL：通过整合多个目标提升自监督语音表示学习                                     | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.07321) |\n| 2022-08 | DINO         | 话语级语音表示的非对比式自监督学习                                                | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2208.05413) |\n| 2022-07 | Audio-MAE    | 能够“倾听”的掩码自编码器                                                            | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.06405) |\n| 2022-04 | MAESTRO      | 通过模态匹配实现语音文本表示的对齐                                                  | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.03409) |\n| 2022-03 | MAE-AST      | 掩码自编码音频频谱图Transformer                                                     | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.16691) |\n| 2022-03 | LightHuBERT  | 具有一次到位隐藏单元BERT的轻量且可配置语音表示学习                                | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.15610) |\n| 2022-02 | Data2vec     | 语音、视觉和语言领域自监督学习的通用框架                                           | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.03555) |\n| 2021-10 | WavLM        | WavLM：面向全栈语音处理的大规模自监督预训练                                        | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.13900) |\n| 2021-08 | W2v-BERT     | 结合对比学习和掩码语言建模的自监督语音预训练                                       | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2108.06209) |\n| 2021-07 | mHuBERT      | 使用离散单元进行直接的语音到语音翻译                                               | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.05604) |\n| 2021-06 | HuBERT       | 通过掩码预测隐藏单元实现自监督语音表示学习                                         | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.07447) |\n| 2021-03 | BYOL-A       | 
用于通用音频表示的自监督学习                                                        | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.06695) |\n| 2020-12 | DeCoAR2.0    | DeCoAR 2.0：结合向量量化技术的深度上下文化声学表示                                 | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.06659) |\n| 2020-07 | TERA         | TERA：用于语音的Transformer编码器表示的自监督学习                                  | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.06028) |\n| 2020-06 | Wav2vec2.0   | wav2vec 2.0：语音表示自监督学习的框架                                              | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.11477) |\n| 2019-10 | APC          | 基于自回归预测编码的语音生成式预训练                                                | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1910.12607) |\n| 2018-07 | CPC          | 基于对比预测编码的表示学习                                                            | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1807.03748) |\n\n## :trident: SLT 2024 编解码器-SUPERB 挑战赛\n- 该挑战赛涵盖当今的神经音频编解码器以及语音\u002F音频语言模型。\n  - 时间：12月3日 15:15 开始\n  - 详细议程：https:\u002F\u002Fcodecsuperb.github.io\u002F\n\n\u003Cdetails>\n\u003Csummary> 关于 SLT 2024 编解码器-SUPERB 挑战赛的更多信息 \u003C\u002Fsummary>\n  \n- 主题演讲嘉宾\n  - [Neil Zeghidour (Kyutai)](https:\u002F\u002Fscholar.google.com\u002Fcitations?user=fiJamZ0AAAAJ&hl=fr)：15:15-16:00\n    - [幻灯片](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1SrDLQ_XMetVS7Xfo72blVtGYvjNxWwRP\u002Fview?usp=sharing) | [录像](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1_UAwzqfFfU3CLz-4p7V3k-c8wt7osba5\u002Fview?usp=drive_link) | [YouTube](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=Zjpl84KCTvw&list=PLJV_el3uVTsNnC37JYD8kBcNDI7CNJgum) | [Bilibili](https:\u002F\u002Fwww.bilibili.com\u002Fvideo\u002FBV115qZYuENj\u002F?spm_id_from=333.999.0.0&vd_source=de7baff5ae97e3392cfbc4c86ea52abf)\n    - 题目：音频语言模型\n  - [Dongchao Yang (CUHK)](https:\u002F\u002Fscholar.google.com\u002Fcitations?user=WNiojyAAAAAJ&hl=zh-CN)：16:00-16:35\n    - [幻灯片](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1oXArl4DayOraVzVH0INUsnB8toIfEJiM\u002Fview?usp=sharing) | [录像](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1owL-lA_VH2rujvG93DaVV1upo8JSrWRc\u002Fview?usp=drive_link) | [YouTube](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=ExDfqz8NfnE&list=PLJV_el3uVTsNnC37JYD8kBcNDI7CNJgum&index=3) | [Bilibili](https:\u002F\u002Fwww.bilibili.com\u002Fvideo\u002FBV1m3qZY3Ej3\u002F?spm_id_from=333.999.0.0&vd_source=de7baff5ae97e3392cfbc4c86ea52abf)\n    - 题目：通用音频基础模型开发中的挑战\n  - [Shang-Wen Li (Meta)](https:\u002F\u002Fswdanielli.github.io)：16:35-17:10\n    - [幻灯片](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1aRGllscyT2QMRA0sBtebHpBNiDfN84Wq\u002Fview?usp=sharing) | [录像](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1epVNVNoqiHkS3_KPXnCE4f8KZWTcmea1\u002Fview?usp=drive_link) | [YouTube](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=JidtdZVtpkI&list=PLJV_el3uVTsNnC37JYD8kBcNDI7CNJgum&index=2) | [Bilibili](https:\u002F\u002Fwww.bilibili.com\u002Fvideo\u002FBV1R9qoYXEWJ\u002F?spm_id_from=333.999.0.0)\n    - 题目：VoiceCraft：零样本语音编辑与野外环境下的 TTS\n  - [Wenwu Wang (萨里大学)](https:\u002F\u002Fscholar.google.co.uk\u002Fcitations?user=JQFnV5IAAAAJ&hl=en)：17:40-18:15\n    - [幻灯片](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1gjBHCi76JiQmaSs9at8T1h2Aw2SnmH08\u002Fview?usp=sharing) | [录像](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1bb3WBHI9z1yOWaTUKn5xKmppe0EHeFyk\u002Fview?usp=drive_link) | 
[YouTube](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=fIoCxwVobEo&list=PLJV_el3uVTsNnC37JYD8kBcNDI7CNJgum&index=4) | [Bilibili](https:\u002F\u002Fwww.bilibili.com\u002Fvideo\u002FBV1dXqoYWE6L\u002F?spm_id_from=333.999.0.0&vd_source=de7baff5ae97e3392cfbc4c86ea52abf)\n    - 题目：神经音频编解码器：最新进展及 SemantiCodec 案例研究\n  - [Minje Kim (UIUC)](https:\u002F\u002Fsiebelschool.illinois.edu\u002Fabout\u002Fpeople\u002Fall-faculty\u002Fminje)：18:15-18:50\n    - [幻灯片](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1NsWFRC0-d86tgk-Z36D8oRocT4nX_9FQ\u002Fview?usp=sharing) | [录像](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F16wH7nixp_botp1vJAeeQM9ucsnpaRlZ0\u002Fview?usp=drive_link) | [YouTube](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=zxFTrb_xGD0&list=PLJV_el3uVTsNnC37JYD8kBcNDI7CNJgum&index=5) | [Bilibili](https:\u002F\u002Fwww.bilibili.com\u002Fvideo\u002FBV1GfqoYnEfS\u002F?spm_id_from=333.999.0.0&vd_source=de7baff5ae97e3392cfbc4c86ea52abf)\n    - 题目：神经语音通信编解码器的未来发展方向\n- 主办方\n  - [Hung-yi Lee (NTU)](https:\u002F\u002Fspeech.ee.ntu.edu.tw\u002F~hylee\u002Findex.php)\n  - [Haibin Wu (微软)](https:\u002F\u002Fhbwu-ntu.github.io)\n- 被接受的论文（[录像](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1NH0RyqZJ8NogH5jUuO5tQ9iShKj95WeM\u002Fview?usp=drive_link)）\n  - ESPnet-Codec：面向音频、音乐和语音的神经编解码器综合训练与评估\n  - Codec-SUPERB @ SLT 2024：一个用于神经音频编解码器模型的轻量级基准测试\n  - 基于语音语言模型的语音生成中神经音频编解码器的研究\n  - 利用双解码乘积量化变分自编码器解决大码本语音分词器的索引坍塌问题\n  - MDCTCodec：一种面向高采样率和低比特率场景的轻量级基于 MDCT 的神经音频编解码器\n\n\u003C\u002Fdetails>\n\n## :trident: Interspeech 2024 调查报告演讲\n\n李宏毅教授将作为 [Interspeech 2024 调查报告演讲](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1gPjnjGKxeCF72gisPVuQlDvogXQCtNk4\u002Fview)的一部分，发表题为 **口语语言模型开发中的挑战** 的演讲。演讲主题将涵盖当今的语音\u002F音频大型语言模型。\n\n## :trident: ICASSP 2024 教学讲座信息\n\n我（Kai-Wei Chang）将作为 [ICASSP 2024 教学讲座](https:\u002F\u002Fcmsworkshops.com\u002FICASSP2024\u002Ftutorials.php#tut32)的一部分，发表题为 **面向语音和语言基础模型的参数高效与提示学习** 的演讲。演讲主题将涵盖当今的语音\u002F音频大型语言模型。我的演示文稿幻灯片可在 https:\u002F\u002Fkwchang.org\u002Ftalks\u002F 上找到。如有任何讨论，请随时与我联系。\n\n## :trident: Interspeech 2026 教学讲座\n即将推出……\n\n## 🔱 相关仓库\n\n| 名称                                                        | GitHub 仓库                                                                                   | 论文                                               |\n| ----------------------------------------------------------- | --------------------------------------------------------------------------------------------- | --------------------------------------------------- |\n| :fire: 新！**口语对话模型综述** | :fire: 新！[链接](https:\u002F\u002Fgithub.com\u002Fga642381\u002FSpoken-Dialogue-Model-Survey) | :fire: 新！[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.22267) |\n| **迈向大型音频语言模型的整体评估** | [链接](https:\u002F\u002Fgithub.com\u002Fckyang1124\u002FLALM-Evaluation-Survey)                              | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.15957)            |\n| **大型音频模型**                                      | [链接](https:\u002F\u002Fgithub.com\u002Fliusongxiang\u002FLarge-Audio-Models)                                    | -                                                   |\n| **优秀语音生成**                               | [链接](https:\u002F\u002Fgithub.com\u002Fkuan2jiu99\u002FAwesome-Speech-Generation)                               | -                                                   |\n| **语音提示与适配器**                             | [链接](https:\u002F\u002Fgithub.com\u002Fga642381\u002FSpeech-Prompts-Adapters)                                   | -                  
                                 |\n| **Codec-SUPERB**                                            | [链接](https:\u002F\u002Fgithub.com\u002Fvoidful\u002FCodec-SUPERB)                                               | [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.13071)            |\n| **优秀神经重编程与提示技术**                | [链接](https:\u002F\u002Fgithub.com\u002Fhuckiyang\u002Fawesome-neural-reprogramming-prompting)                   | -                                                   |\n\n## 引用\n\n综述论文 **“关于口语语言模型的全景：全面综述”** 现已在 [arXiv](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2504.08528) 上发布。\n\n```bibtex\n@article{arora2025landscape,\n  title={On The Landscape of Spoken Language Models: A Comprehensive Survey},\n  author={Arora, Siddhant and Chang, Kai-Wei and Chien, Chung-Ming and Peng, Yifan and Wu, Haibin and Adi, Yossi and Dupoux, Emmanuel and Lee, Hung-Yi and Livescu, Karen and Watanabe, Shinji},\n  journal={arXiv preprint arXiv:2504.08528},\n  year={2025}\n}\n```\n\n核心作者的相关论文：\n\n```bibtex\n@article{chang2026tico,\n      title={TiCo: 面向口语对话模型的时间可控训练},\n      author={Kai-Wei Chang、Wei-Chih Chen、En-Pei Hu、Hung-yi Lee 和 James Glass},\n      journal={arXiv 预印本 arXiv:2603.22267},\n      year={2026}\n}\n```\n\n```\n@article{wu2024ts3,\n  title={TS3-Codec：基于 Transformer 的简单流式单编解码器},\n  author={Wu, Haibin、Kanda, Naoyuki、Eskimez, Sefik Emre 和 Li, Jinyu},\n  journal={arXiv 预印本 arXiv:2411.18803},\n  year={2024}\n}\n```\n\n```\n@article{wu2024codec,\n  title={Codec-SUPERB@ SLT 2024：面向神经音频编解码器模型的轻量级基准测试},\n  author={Wu, Haibin、Chen, Xuanjun、Lin, Yi-Cheng、Chang, Kaiwei、Du, Jiawei、Lu, Ke-Han、Liu, Alexander H、Chung, Ho-Lam、Wu, Yuan-Kuei、Yang, Dongchao 等},\n  journal={arXiv 预印本 arXiv:2409.14085},\n  year={2024}\n}\n```\n\n```\n@inproceedings{wu-etal-2024-codec,\n    title = \"Codec-{SUPERB}：对声音编解码器模型的深入分析\",\n    author = \"Wu, Haibin、Chung, Ho-Lam、Lin, Yi-Cheng、Wu, Yuan-Kuei、Chen, Xuanjun、Pai, Yu-Chi、Wang, Hsiu-Hsuan、Chang, Kai-Wei、Liu, Alexander 和 Lee, Hung-yi\",\n    editor = \"Ku, Lun-Wei、Martins, Andre 和 Srikumar, Vivek\",\n    booktitle = \"计算语言学协会研究成果：ACL 2024\",\n    month = aug,\n    year = \"2024\",\n    address = \"曼谷，泰国\",\n    publisher = \"计算语言学协会\",\n    url = \"https:\u002F\u002Faclanthology.org\u002F2024.findings-acl.616\",\n    doi = \"10.18653\u002Fv1\u002F2024.findings-acl.616\",\n    pages = \"10330--10348\",\n}\n```\n\n```\n@article{wu2023speechgen,\n  title={Speechgen：利用提示解锁语音语言模型的生成能力},\n  author={Wu, Haibin、Chang, Kai-Wei、Wu, Yuan-Kuei 和 Lee, Hung-yi},\n  journal={arXiv 预印本 arXiv:2306.02207},\n  year={2023}\n}\n```\n\n```\n@article{wu2024towards,\n  title={迈向音频语言建模——综述},\n  author={Wu, Haibin、Chen, Xuanjun、Lin, Yi-Cheng、Chang, Kai-Wei、Chung, Ho-Lam、Liu, Alexander H 和 Lee, Hung-yi},\n  journal={arXiv 预印本 arXiv:2402.13236},\n  year={2024}\n}\n```","# Speech Trident 快速上手指南\n\n**Speech Trident** 并非一个单一的可安装软件包，而是一个由台湾大学李宏毅教授团队维护的**开源综述项目**。它系统性地整理了语音大语言模型（Speech LLM）领域的三大核心方向：**语音表示学习**、**神经编解码器**以及**语音大语言模型**。\n\n本指南旨在帮助开发者快速理解该项目结构，并获取列表中最新的模型资源进行开发。\n\n## 环境准备\n\n由于本项目主要提供论文列表、代码链接和技术调研，无需安装特定的 `speech-trident` 库。要运行列表中具体的模型（如 Mini-Omni, CosyVoice, GLM-4-Voice 等），建议准备以下通用深度学习环境：\n\n*   **操作系统**: Linux (推荐 Ubuntu 20.04+) 或 macOS\n*   **Python 版本**: 3.9 - 3.11\n*   **硬件要求**: 建议使用 NVIDIA GPU (显存 ≥ 16GB 用于推理，≥ 24GB 用于微调)\n*   **基础依赖**:\n    *   PyTorch (建议 2.0+)\n    *   Git\n    *   FFmpeg (用于音频处理)\n\n**前置依赖安装示例：**\n```bash\n# 创建虚拟环境\npython -m venv speech-env\nsource speech-env\u002Fbin\u002Factivate\n\n# 安装基础深度学习栈 (以 CUDA 11.8 为例)\npip install torch torchvision torchaudio 
--index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu118\n\n# 安装通用音频处理工具\npip install librosa soundfile ffmpeg-python\n```\n\n> **国内加速提示**：推荐使用清华或阿里镜像源加速 Python 包安装：\n> ```bash\n> pip install \u003Cpackage_name> -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n> ```\n\n## 获取资源与安装\n\nSpeech Trident 的核心价值在于其维护的**模型清单**。你需要根据需求选择清单中的具体模型进行克隆和安装。\n\n### 1. 克隆综述仓库\n首先获取最新的模型列表和技术文档：\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fga642381\u002Fspeech-trident.git\ncd speech-trident\n```\n\n### 2. 选择并安装具体模型\n在仓库的 `README.md` 表格中找到你感兴趣的模型（例如 **Mini-Omni 2** 或 **CosyVoice 2**），点击链接跳转至该模型的独立仓库。\n\n**示例：以 Mini-Omni 2 为例（2024-10 发布）**\n```bash\n# 克隆具体模型仓库\ngit clone https:\u002F\u002Fgithub.com\u002Fgpt-omni\u002Fmini-omni2.git\ncd mini-omni2\n\n# 安装该模型特定依赖\npip install -r requirements.txt\n```\n\n> **注意**：每个模型的环境配置可能不同，请务必阅读对应子项目的 `README` 文件。\n\n## 基本使用\n\n由于 Speech Trident 是综述项目，\"使用\"通常指运行其中收录的某个具体模型。以下以典型的**语音对话模型**（如 Mini-Omni 或类似架构）为例，展示通用的调用流程。\n\n### 步骤 1: 加载模型与处理器\n大多数 Speech LLM 使用 Hugging Face `transformers` 或官方提供的推理脚本。\n\n```python\nimport torch\nfrom transformers import AutoModelForCausalLM, AutoProcessor, AutoTokenizer\n\n# 以假设的模型路径为例 (请替换为实际下载的模型路径)\nmodel_path = \".\u002Fmini-omni2\" \n\n# 加载模型 (建议使用 float16 节省显存)\nmodel = AutoModelForCausalLM.from_pretrained(\n    model_path, \n    torch_dtype=torch.float16, \n    device_map=\"auto\"\n)\ntokenizer = AutoTokenizer.from_pretrained(model_path)\n# 加载音频处理器 (假设该模型附带 processor 配置, 用于把波形转成模型输入特征)\nprocessor = AutoProcessor.from_pretrained(model_path)\n```\n\n### 步骤 2: 准备语音输入\n将音频文件转换为模型所需的张量格式（通常为采样率 16k 的 Mono 音频）。\n\n```python\nimport librosa\n\naudio_path = \"input.wav\"\n# 重采样至模型要求的采样率 (例如 16000Hz)\nspeech_array, sampling_rate = librosa.load(audio_path, sr=16000)\ninputs = processor(speech_array, sampling_rate=sampling_rate, return_tensors=\"pt\").to(model.device)\n```\n\n### 步骤 3: 生成语音或文本响应\n执行推理并保存输出。\n\n```python\n# 生成响应\noutputs = model.generate(**inputs, max_new_tokens=512)\n\n# 解码结果 (如果是文本)\nresponse_text = tokenizer.decode(outputs[0], skip_special_tokens=True)\nprint(\"AI Response:\", response_text)\n\n# 如果是端到端语音生成，通常会有专门的 decode_audio 方法\n# audio_output = model.decode_audio(outputs)\n# soundfile.write(\"output.wav\", audio_output, samplerate=16000)\n```\n\n## 核心领域参考\n\n在使用具体模型前，建议参考 Speech Trident 整理的三大技术支柱，以便更好地理解模型原理：\n\n1.  **Speech Representation Models (语音表示模型)**\n    *   作用：将连续语音信号量化为离散语义 Token。\n    *   代表技术：HuBERT, WavLM。\n2.  **Speech Neural Codec Models (语音神经编解码器)**\n    *   作用：提取声学 Token，保持高重建质量和低码率。\n    *   代表技术：EnCodec, SoundStream（声学 Token 的提取可参考下方示例草图）。\n
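\n在进入第 3 点之前，这里先给出一个说明“声学 Token”概念的最小示意草图（假设性示例，并非本仓库自带代码）：借助 Hugging Face `transformers` 内置的 `EncodecModel` 与公开的 `facebook\u002Fencodec_24khz` 权重，把波形编码为离散 Token，再解码回波形以检验重建质量；其中 `input.wav` 为任意示例音频文件。\n\n```python\nimport torch\nimport librosa\nfrom transformers import AutoProcessor, EncodecModel\n\n# 加载公开的 EnCodec 检查点 (24kHz 通用音频编解码器)\nmodel = EncodecModel.from_pretrained(\"facebook\u002Fencodec_24khz\")\nprocessor = AutoProcessor.from_pretrained(\"facebook\u002Fencodec_24khz\")\n\n# 读取音频并重采样到编解码器要求的采样率 (input.wav 为假设的示例文件)\nspeech, sr = librosa.load(\"input.wav\", sr=processor.sampling_rate)\ninputs = processor(raw_audio=speech, sampling_rate=sr, return_tensors=\"pt\")\n\nwith torch.no_grad():\n    # 编码: audio_codes 即离散声学 Token 索引, 也是语音 LLM 的建模对象\n    encoded = model.encode(inputs[\"input_values\"], inputs[\"padding_mask\"])\n    print(\"audio_codes:\", encoded.audio_codes.shape)\n\n    # 解码: 由离散 Token 重建波形, 可与原始音频对比重建质量\n    audio_values = model.decode(\n        encoded.audio_codes, encoded.audio_scales, inputs[\"padding_mask\"]\n    )[0]\n```\n\n3.  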
**Speech Large Language Models (语音大语言模型)**\n    *   作用：基于上述 Token 进行语言建模，实现语音理解与生成。\n    *   代表模型：列表中的 Qwen2.5-Omni, Step-Audio, Moshi 等。\n\n如需查看完整的模型演进时间轴和详细论文解读，请直接查阅本地 `speech-trident\u002FREADME.md` 文件或访问其 [arXiv 综述论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2504.08528)。","某智能客服团队正致力于升级其语音交互系统，希望让 AI 不仅能听懂用户指令，还能用带有情感的自然语音进行多轮对话。\n\n### 没有 speech-trident 时\n- **技术选型迷茫**：面对分散的语义表征、神经编解码器和语音大模型论文，开发人员难以厘清三者关系，耗费数周调研仍无法确定最佳技术组合。\n- **语音质感机械**：由于缺乏高效的声学令牌（acoustic tokens）生成方案，合成的语音虽然内容正确，但语调平淡、缺乏呼吸感，被用户投诉“像机器人”。\n- **理解与生成割裂**：语音理解模块与语音生成模块各自为政，导致在处理复杂口语化表达或打断重说时，系统反应迟钝且上下文衔接生硬。\n- **复现门槛极高**：开源代码碎片化严重，缺少统一的基准测试和预训练模型参考，团队需从零搭建实验环境，研发周期被迫拉长。\n\n### 使用 speech-trident 后\n- **架构清晰明确**：speech-trident 将表征学习、神经编解码和大模型三大核心领域梳理得井井有条，团队迅速锁定了适合业务场景的 SOTA 模型组合。\n- **高保真语音合成**：借助其中推荐的先进神经编解码模型，系统生成的语音在低码率下依然保留了丰富的音色细节和情感起伏，用户满意度显著提升。\n- **端到端能力增强**：基于 speech-trident 指引的语音大语言模型架构，实现了语义理解与语音生成的深度融合，系统能流畅处理带口音的指令及自然插话。\n- **研发效率倍增**：依托该仓库提供的全面综述和成熟模型列表，团队直接复用经过验证的方案，将原本两个月的原型开发期缩短至两周。\n\nspeech-trident 通过构建完整的语音大模型知识图谱，帮助开发者跨越了从理论调研到落地应用的鸿沟，让高质量的语音交互触手可及。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fga642381_speech-trident_b400c4d3.png","ga642381","Kai-Wei Chang (張凱爲)","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fga642381_4e6f76d9.jpg",":fist_raised: MIT Postdoc :fist_raised: NTU Ph.D.\r\n:fist_raised: former Research Scientist Intern @ Meta Reality labs",null,"Cambridge, USA","kaiwei.chang.tw@gmail.com","kwchang.org","https:\u002F\u002Fgithub.com\u002Fga642381",1220,74,"2026-04-15T07:01:03",5,"","未说明",{"notes":92,"python":90,"dependencies":93},"该仓库（speech-trident）是一个关于语音大语言模型（Speech LM）、神经编解码器（Neural Codec）和表示学习的综述项目，主要提供论文列表、资源链接和技术调研，本身不包含可运行的代码库或具体的安装脚本，因此 README 中未提及任何操作系统、硬件配置、Python 版本或依赖库的具体需求。如需运行列表中提到的具体模型（如 CosyVoice, Mini-Omni 等），请参考各模型独立的官方仓库。",[],[15,55],"2026-03-27T02:49:30.150509","2026-04-20T04:05:08.590590",[],[]]