[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-bytedance--MegaTTS3":3,"tool-bytedance--MegaTTS3":65},[4,23,32,40,49,57],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":22},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,2,"2026-04-05T10:45:23",[13,14,15,16,17,18,19,20,21],"图像","数据工具","视频","插件","Agent","其他","语言模型","开发框架","音频","ready",{"id":24,"name":25,"github_repo":26,"description_zh":27,"stars":28,"difficulty_score":29,"last_commit_at":30,"category_tags":31,"status":22},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,3,"2026-04-04T04:44:48",[17,13,20,19,18],{"id":33,"name":34,"github_repo":35,"description_zh":36,"stars":37,"difficulty_score":29,"last_commit_at":38,"category_tags":39,"status":22},519,"PaddleOCR","PaddlePaddle\u002FPaddleOCR","PaddleOCR 是一款基于百度飞桨框架开发的高性能开源光学字符识别工具包。它的核心能力是将图片、PDF 等文档中的文字提取出来，转换成计算机可读取的结构化数据，让机器真正“看懂”图文内容。\n\n面对海量纸质或电子文档，PaddleOCR 解决了人工录入效率低、数字化成本高的问题。尤其在人工智能领域，它扮演着连接图像与大型语言模型（LLM）的桥梁角色，能将视觉信息直接转化为文本输入，助力智能问答、文档分析等应用场景落地。\n\nPaddleOCR 适合开发者、算法研究人员以及有文档自动化需求的普通用户。其技术优势十分明显：不仅支持全球 100 多种语言的识别，还能在 Windows、Linux、macOS 等多个系统上运行，并灵活适配 CPU、GPU、NPU 等各类硬件。作为一个轻量级且社区活跃的开源项目，PaddleOCR 既能满足快速集成的需求，也能支撑前沿的视觉语言研究，是处理文字识别任务的理想选择。",74913,"2026-04-05T10:44:17",[19,13,20,18],{"id":41,"name":42,"github_repo":43,"description_zh":44,"stars":45,"difficulty_score":46,"last_commit_at":47,"category_tags":48,"status":22},3215,"awesome-machine-learning","josephmisiti\u002Fawesome-machine-learning","awesome-machine-learning 是一份精心整理的机器学习资源清单，汇集了全球优秀的机器学习框架、库和软件工具。面对机器学习领域技术迭代快、资源分散且难以甄选的痛点，这份清单按编程语言（如 Python、C++、Go 等）和应用场景（如计算机视觉、自然语言处理、深度学习等）进行了系统化分类，帮助使用者快速定位高质量项目。\n\n它特别适合开发者、数据科学家及研究人员使用。无论是初学者寻找入门库，还是资深工程师对比不同语言的技术选型，都能从中获得极具价值的参考。此外，清单还延伸提供了免费书籍、在线课程、行业会议、技术博客及线下聚会等丰富资源，构建了从学习到实践的全链路支持体系。\n\n其独特亮点在于严格的维护标准：明确标记已停止维护或长期未更新的项目，确保推荐内容的时效性与可靠性。作为机器学习领域的“导航图”，awesome-machine-learning 以开源协作的方式持续更新，旨在降低技术探索门槛，让每一位从业者都能高效地站在巨人的肩膀上创新。",72149,1,"2026-04-03T21:50:24",[20,18],{"id":50,"name":51,"github_repo":52,"description_zh":53,"stars":54,"difficulty_score":46,"last_commit_at":55,"category_tags":56,"status":22},2234,"scikit-learn","scikit-learn\u002Fscikit-learn","scikit-learn 是一个基于 Python 构建的开源机器学习库，依托于 SciPy、NumPy 等科学计算生态，旨在让机器学习变得简单高效。它提供了一套统一且简洁的接口，涵盖了从数据预处理、特征工程到模型训练、评估及选择的全流程工具，内置了包括线性回归、支持向量机、随机森林、聚类等在内的丰富经典算法。\n\n对于希望快速验证想法或构建原型的数据科学家、研究人员以及 Python 开发者而言，scikit-learn 是不可或缺的基础设施。它有效解决了机器学习入门门槛高、算法实现复杂以及不同模型间调用方式不统一的痛点，让用户无需重复造轮子，只需几行代码即可调用成熟的算法解决分类、回归、聚类等实际问题。\n\n其核心技术亮点在于高度一致的 API 
### awesome-machine-learning (josephmisiti/awesome-machine-learning) · ⭐ 72,149

awesome-machine-learning is a curated list of the best machine-learning frameworks, libraries, and software. Against a field that moves fast and scatters its resources, the list is organized by programming language (Python, C++, Go, ...) and by application area (computer vision, natural language processing, deep learning, ...), so readers can locate high-quality projects quickly.

It suits developers, data scientists, and researchers alike — beginners hunting for an entry-point library as much as senior engineers weighing options across languages. The list also extends to free books, online courses, conferences, technical blogs, and meetups, forming a full pipeline from study to practice.

Its distinctive strength is a strict maintenance standard: projects that are deprecated or long unmaintained are explicitly flagged, keeping recommendations timely and reliable. Updated through open collaboration, this "map" of the field lowers the cost of exploration and lets every practitioner stand efficiently on the shoulders of giants.

### scikit-learn (scikit-learn/scikit-learn) · ⭐ 65,628

scikit-learn is an open-source Python machine-learning library built on the SciPy and NumPy scientific-computing stack, designed to make machine learning simple and efficient. It provides a unified, concise interface spanning the whole workflow — preprocessing, feature engineering, model training, evaluation, and selection — with a rich set of classic algorithms built in: linear regression, support vector machines, random forests, clustering, and more.

For data scientists, researchers, and Python developers who need to validate an idea or build a prototype fast, scikit-learn is indispensable infrastructure. It removes the high entry barrier, the complexity of implementing algorithms by hand, and the inconsistency between model interfaces: a few lines of code call mature algorithms for classification, regression, or clustering, with no wheel reinvented.

Its core technical strength is a highly consistent API: every estimator follows the same calling convention, which cuts the learning curve and keeps code readable and maintainable. It also provides strong model-selection and evaluation tools, such as cross-validation and grid search, for systematic performance tuning. Maintained by volunteers worldwide and backed by stability, thorough documentation, and an active community, scikit-learn is the standard bridge between textbook theory and industrial practice.
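The consistent estimator contract and the grid-search tooling described above compose directly. The snippet below is a self-contained example; the dataset and the hyperparameter grid are chosen arbitrarily for illustration.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Every step follows the same fit/transform/predict contract,
# so the scaler and classifier compose into a single estimator.
pipe = make_pipeline(StandardScaler(), SVC())

# Cross-validated grid search over the SVC regularization strength.
search = GridSearchCV(pipe, param_grid={"svc__C": [0.1, 1.0, 10.0]}, cv=5)
search.fit(X_train, y_train)

print(search.best_params_)           # e.g. {'svc__C': 1.0}
print(search.score(X_test, y_test))  # held-out accuracy
```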
### keras (keras-team/keras) · ⭐ 63,927

Keras is a deep-learning framework designed for humans, built to make constructing and training neural networks simple and intuitive. It addresses three pain points: the difficulty of moving between deep-learning backends, slow model development, and the usual trade-off between debugging convenience and runtime performance.

Students just starting out, algorithm researchers, and engineers shipping products can all get productive quickly, across computer vision, natural language processing, audio analysis, and time-series forecasting.

The headline feature of Keras 3 is its multi-backend architecture: write one set of code and choose TensorFlow, JAX, PyTorch, or OpenVINO as the underlying execution engine. This preserves Keras's high-level ergonomics while letting developers pick what they need — eager execution on JAX or PyTorch for efficient debugging, or the fastest backend for up to a claimed 350% speedup. Keras also scales seamlessly from a laptop to large GPU or TPU clusters, bridging prototyping and production deployment.
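A minimal sketch of the backend switch described above: Keras 3 reads the `KERAS_BACKEND` environment variable once at import time, so the same model definition runs unchanged on TensorFlow, JAX, or PyTorch. The model shape here is a placeholder.

```python
import os

# Must be set before `import keras`; Keras 3 reads it once at import time.
os.environ["KERAS_BACKEND"] = "jax"   # or "tensorflow" / "torch"

import keras
from keras import layers

# The same code builds and compiles regardless of the backend chosen above.
model = keras.Sequential([
    keras.Input(shape=(784,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(10),
])
model.compile(
    optimizer="adam",
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
print(keras.backend.backend())  # confirms which engine is active
```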
## MegaTTS3 (bytedance/MegaTTS3) · ⭐ 6,078

MegaTTS3 is a high-quality text-to-speech (TTS) tool open-sourced jointly by ByteDance and Zhejiang University. It turns text into natural, lifelike speech and can clone a specific speaker's voice from just a few seconds of reference audio.

It tackles two problems with traditional speech-synthesis models: they are large and hard to deploy, and cloned voices often sound unnatural. MegaTTS3 uses a lightweight Diffusion Transformer architecture that reaches professional-grade quality with only 0.45B parameters, sharply reducing compute requirements. It supports Chinese, English, and seamless code-switching between them, and lets users adjust accent intensity.

Developers can integrate it into voice assistants, audiobooks, and video dubbing; researchers can use it to explore speech synthesis; non-programmers can try the Hugging Face online demo directly, no coding required. Its hallmark is near-human voice-cloning quality from a very small model, with local deployment available for privacy. The code is open source under Apache 2.0; it runs on Linux out of the box, and a Windows version is in testing.

## README

# MegaTTS 3

Official PyTorch Implementation — ByteDance · Zhejiang University

[Hugging Face Space Demo](https://huggingface.co/spaces/ByteDance/MegaTTS3) · Platform: Linux · Python 3.10 · PyTorch 2.6.0 · License: Apache 2.0

## Key features
- 🚀 **Lightweight and Efficient:** The backbone of the TTS Diffusion Transformer has only 0.45B parameters.
- 🎧 **Ultra High-Quality Voice Cloning:** You can try our model at the [Huggingface Demo](https://huggingface.co/spaces/ByteDance/MegaTTS3) 🎉. The .wav and .npy files can be found at [link1](https://drive.google.com/drive/folders/1QhcHWcy20JfqWjgqZX1YM3I6i9u4oNlr?usp=sharing). Submit a sample (.wav format, < 24 s, no spaces in the filename) on [link2](https://drive.google.com/drive/folders/1gCWL1y_2xu9nIFhUX_OW5MbcFuB7J5Cl?usp=sharing) to receive .npy voice latents you can use locally.
- 🌍 **Bilingual Support:** Supports both Chinese and English, and code-switching.
- ✍️ **Controllable:** Supports accent intensity control ✅ and fine-grained pronunciation/duration adjustment (coming soon).

[MegaTTS 3 Demo Video](https://github.com/user-attachments/assets/0174c111-f392-4376-a34b-0b5b8164aacc)

![MegaTTS 3 overview](https://oss.gittoolsai.com/images/bytedance_MegaTTS3_readme_c5364b229a50.png)

## 🎯 Roadmap

- **[2025-03-22]** Our project has been released!

## Installation
``` sh
# Clone the repository
git clone https://github.com/bytedance/MegaTTS3
cd MegaTTS3
```
**Requirements (for Linux)**
``` sh
# Create a Python 3.10 conda env (you could also use virtualenv)
conda create -n megatts3-env python=3.10
conda activate megatts3-env
pip install -r requirements.txt

# Set the root directory
export PYTHONPATH="/path/to/MegaTTS3:$PYTHONPATH"

# [Optional] Set GPU
export CUDA_VISIBLE_DEVICES=0

# If you encounter bugs with pydantic during inference, check that the versions of pydantic and gradio match.
# [Note] If you encounter bugs related to httpx, check whether your environment variable "no_proxy" contains patterns like "::"
```

**Requirements (for Windows)**
``` sh
# [The Windows version is currently under testing]
# Comment out the following dependency in requirements.txt:
# # WeTextProcessing==1.0.4.1

# Create a Python 3.10 conda env (you could also use virtualenv)
conda create -n megatts3-env python=3.10
conda activate megatts3-env
pip install -r requirements.txt
conda install -y -c conda-forge pynini==2.1.5
pip install WeTextProcessing==1.0.3

# [Optional] For GPU inference, you may need to install the PyTorch build matching your GPU from https://pytorch.org/.
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126

# [Note] If you encounter bugs related to `ffprobe` or `ffmpeg`, install it via `conda install -c conda-forge ffmpeg`

# Set the environment variable for the root directory
set PYTHONPATH="C:\path\to\MegaTTS3;%PYTHONPATH%" # Windows
$env:PYTHONPATH="C:\path\to\MegaTTS3;%PYTHONPATH%" # PowerShell on Windows
conda env config vars set PYTHONPATH="C:\path\to\MegaTTS3;%PYTHONPATH%" # For conda users

# [Optional] Set GPU
set CUDA_VISIBLE_DEVICES=0 # Windows
$env:CUDA_VISIBLE_DEVICES=0 # PowerShell on Windows
```

**Requirements (for Docker)**
``` sh
# [The Docker version is currently under testing]
# ! Download the pretrained checkpoint before running the following command
docker build . -t megatts3:latest

# For GPU inference
docker run -it -p 127.0.0.1:7929:7929 --gpus all -e CUDA_VISIBLE_DEVICES=0 megatts3:latest
# For CPU inference
docker run -it -p 127.0.0.1:7929:7929 megatts3:latest

# Visit http://127.0.0.1:7860/ for gradio.
```

**Model Download**

The pretrained checkpoint can be found at [Google Drive](https://drive.google.com/drive/folders/1CidiSqtHgJTBDAHQ746_on_YR0boHDYB?usp=sharing) or [Huggingface](https://huggingface.co/ByteDance/MegaTTS3). Download the files and place them under ``./checkpoints/xxx``.

> [!IMPORTANT]
> For security reasons, we do not upload the parameters of the WaveVAE encoder to the links above. You can only use the pre-extracted latents from [link1](https://drive.google.com/drive/folders/1QhcHWcy20JfqWjgqZX1YM3I6i9u4oNlr?usp=sharing) for inference. To synthesize speech for speaker A, you need "A.wav" and "A.npy" in the same directory. If you have any questions or suggestions for our model, please email us.
>
> This project is primarily intended for academic purposes. For academic datasets requiring evaluation, you may upload them to the voice request queue in [link2](https://drive.google.com/drive/folders/1gCWL1y_2xu9nIFhUX_OW5MbcFuB7J5Cl?usp=sharing) (within 24 s per clip). After verifying that your uploaded voices are free from safety issues, we will upload their latent files to [link1](https://drive.google.com/drive/folders/1QhcHWcy20JfqWjgqZX1YM3I6i9u4oNlr?usp=sharing) as soon as possible.
>
> In the coming days, we will also prepare and release the latent representations for some common TTS benchmarks.
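Given the "A.wav next to A.npy" pairing requirement above, a tiny pre-flight check can save a failed run. This is a hypothetical convenience script, not part of the repository; the `voices/` directory name is an assumption.

```python
from pathlib import Path

# Hypothetical layout: each reference clip sits next to its voice latent,
# e.g. voices/A.wav and voices/A.npy (latents come from link1/link2 above).
for wav in Path("voices").glob("*.wav"):
    npy = wav.with_suffix(".npy")
    status = "ok" if npy.exists() else "MISSING latent - request via link2"
    print(f"{wav.name}: {status}")
```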
## Inference

**Command-Line Usage (Standard)**
``` bash
# p_w (intelligibility weight), t_w (similarity weight).
# Typically, a prompt with more noise requires higher p_w and t_w.
python tts/infer_cli.py --input_wav 'assets/Chinese_prompt.wav' --input_text "另一边的桌上,一位读书人嗤之以鼻道,'佛子三藏,神子燕小鱼是什么样的人物,李家的那个李子夜如何与他们相提并论？'" --output_dir ./gen

# As long as audio volume and pronunciation are appropriate, increasing --t_w within a reasonable range (2.0~5.0)
# will increase the generated speech's expressiveness and similarity (especially for some emotional cases).
python tts/infer_cli.py --input_wav 'assets/English_prompt.wav' --input_text 'As his long promised tariff threat turned into reality this week, top human advisers began fielding a wave of calls from business leaders, particularly in the automotive sector, along with lawmakers who were sounding the alarm.' --output_dir ./gen --p_w 2.0 --t_w 3.0
```
**Command-Line Usage (for TTS with Accents)**
``` bash
# When p_w (intelligibility weight) ≈ 1.0, the generated audio closely retains the speaker's original accent. As p_w increases, pronunciation shifts toward the standard.
# t_w (similarity weight) is typically set 0–3 points higher than p_w for optimal results.
# Useful for accented TTS or for solving accent problems in cross-lingual TTS.
python tts/infer_cli.py --input_wav 'assets/English_prompt.wav' --input_text '这是一条有口音的音频。' --output_dir ./gen --p_w 1.0 --t_w 3.0

python tts/infer_cli.py --input_wav 'assets/English_prompt.wav' --input_text '这条音频的发音标准一些了吗？' --output_dir ./gen --p_w 2.5 --t_w 2.5
```

**Web UI Usage**
``` bash
# We also support CPU inference, but it may take about 30 seconds (for 10 inference steps).
python tts/gradio_api.py
```
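Because the best p_w/t_w settings depend on the prompt audio, a quick way to tune them is to sweep t_w over the recommended 2.0–5.0 range and compare the outputs by ear. The wrapper below is a hypothetical convenience script, not part of the repository; it only shells out to `tts/infer_cli.py` with the flags documented above.

```python
import subprocess

PROMPT = "assets/English_prompt.wav"        # reference audio shipped with the repo
TEXT = "A short test sentence for tuning."  # placeholder text

# Sweep the similarity weight across the README's recommended range,
# writing each run to its own directory for side-by-side listening.
for t_w in (2.0, 3.0, 4.0, 5.0):
    subprocess.run(
        [
            "python", "tts/infer_cli.py",
            "--input_wav", PROMPT,
            "--input_text", TEXT,
            "--output_dir", f"./gen_tw_{t_w:.1f}",
            "--p_w", "2.0",
            "--t_w", str(t_w),
        ],
        check=True,  # fail fast if inference errors out
    )
```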
## Submodules
> [!TIP]
> In addition to TTS, some submodules in this project may also have additional usages.
> See ``./tts/frontend_fuction.py`` and ``./tts/infer_cli.py`` for example code.

### Aligner
**Description:** a robust speech-text aligner model trained on pseudo-labels generated by a large number of MFA expert models.

**Usage:** 1) Prepare the fine-tuning dataset for our model; 2) Filter large-scale speech datasets (if the aligner fails to align a clip, the clip is likely noisy); 3) Phoneme recognition; 4) Speech segmentation.

### Grapheme-to-Phoneme Model
**Description:** a Qwen2.5-0.5B model fine-tuned for robust grapheme-to-phoneme conversion.

**Usage:** grapheme-to-phoneme conversion.

### WaveVAE
**Description:** a strong waveform VAE that can compress 24 kHz speech into 25 Hz acoustic latents (a 960× temporal downsampling) and reconstruct the original wave almost losslessly.

**Usage:** 1) Acoustic latents provide a more compact and discriminative training target for speech-synthesis models than mel-spectrograms, accelerating convergence; 2) Acoustic latents for voice conversion; 3) A high-quality vocoder.

![Submodule overview](https://oss.gittoolsai.com/images/bytedance_MegaTTS3_readme_dbd478a39f1d.png)
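To get a feel for the 25 Hz latents described above, the pre-extracted `.npy` files from link1 can be inspected directly. The file path is hypothetical, and which axis is time is an assumption to verify against a real file.

```python
import numpy as np

# Inspect a pre-extracted voice latent from link1 (path hypothetical).
latent = np.load("voices/A.npy")
print("shape:", latent.shape, "dtype:", latent.dtype)

# Assuming one axis is time at 25 Hz (per the WaveVAE description above),
# the longest axis gives a rough duration estimate.
frames = max(latent.shape)
print(f"~{frames / 25:.1f} s of speech if that axis is time")
```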
## Security
If you discover a potential security issue in this project, or think you may have discovered one, please notify Bytedance Security via our [security center](https://security.bytedance.com/src) or [sec@bytedance.com](mailto:sec@bytedance.com).

Please do **not** create a public GitHub issue.

## License
This project is licensed under the [Apache-2.0 License](LICENSE).

## Citation
This repo contains a forced-alignment version of `Sparse Alignment Enhanced Latent Diffusion Transformer for Zero-Shot Speech Synthesis`, and the WaveVAE is mainly based on `Wavtokenizer: an efficient acoustic discrete codec tokenizer for audio language modeling`. Compared to the model described in the paper, the repository includes additional models. These models not only enhance the stability and cloning capability of the algorithm but can also be used independently in a wider range of scenarios.
```
@article{jiang2025sparse,
  title={Sparse Alignment Enhanced Latent Diffusion Transformer for Zero-Shot Speech Synthesis},
  author={Jiang, Ziyue and Ren, Yi and Li, Ruiqi and Ji, Shengpeng and Ye, Zhenhui and Zhang, Chen and Jionghao, Bai and Yang, Xiaoda and Zuo, Jialong and Zhang, Yu and others},
  journal={arXiv preprint arXiv:2502.18924},
  year={2025}
}

@article{ji2024wavtokenizer,
  title={Wavtokenizer: an efficient acoustic discrete codec tokenizer for audio language modeling},
  author={Ji, Shengpeng and Jiang, Ziyue and Wang, Wen and Chen, Yifu and Fang, Minghui and Zuo, Jialong and Yang, Qian and Cheng, Xize and Wang, Zehan and Li, Ruiqi and others},
  journal={arXiv preprint arXiv:2408.16532},
  year={2024}
}
```

## Quick Start

### Environment

| Item | Requirement |
|:---|:---|
| OS | Linux (recommended), Windows (in testing), Docker |
| Python | 3.10 |
| PyTorch | 2.6.0 |
| GPU | Optional, CUDA supported |

### 1. Clone the repository

```bash
git clone https://github.com/bytedance/MegaTTS3
cd MegaTTS3
```

### 2. Create a virtual environment

```bash
conda create -n megatts3-env python=3.10
conda activate megatts3-env
```

### 3. Install dependencies

```bash
pip install -r requirements.txt
```

### 4. Set environment variables

**Linux:**
```bash
export PYTHONPATH="/path/to/MegaTTS3:$PYTHONPATH"
export CUDA_VISIBLE_DEVICES=0  # optional: pick a GPU
```

**Windows:**
```bash
set PYTHONPATH="C:\path\to\MegaTTS3;%PYTHONPATH%"
set CUDA_VISIBLE_DEVICES=0  # optional: pick a GPU
```
### 5. Download the model

Download the pretrained weights from [Google Drive](https://drive.google.com/drive/folders/1CidiSqtHgJTBDAHQ746_on_YR0boHDYB?usp=sharing) or [HuggingFace](https://huggingface.co/ByteDance/MegaTTS3) and unpack them into `./checkpoints/xxx`.

> ⚠️ **Important**: The WaveVAE encoder parameters are not public; you must use pre-extracted `.npy` latent files. The reference `.wav` and its matching `.npy` must sit in the same directory.

### Basic usage

#### Command-line inference (standard)

```bash
python tts/infer_cli.py \
  --input_wav 'assets/Chinese_prompt.wav' \
  --input_text "Chinese text to synthesize" \
  --output_dir ./gen
```

**Parameters:**
- `--p_w`: intelligibility weight (default 1.0; raise it for noisy prompts)
- `--t_w`: similarity weight (typically 0–3 higher than p_w; 2.0–5.0 recommended for expressive, emotional speech)

```bash
# English example with boosted expressiveness
python tts/infer_cli.py \
  --input_wav 'assets/English_prompt.wav' \
  --input_text 'Your English text here.' \
  --output_dir ./gen \
  --p_w 2.0 \
  --t_w 3.0
```
#### Accent control

```bash
# p_w ≈ 1.0 keeps the original accent; larger values shift toward standard pronunciation
python tts/infer_cli.py \
  --input_wav 'assets/English_prompt.wav' \
  --input_text '这是一条有口音的音频。' \
  --output_dir ./gen \
  --p_w 1.0 \
  --t_w 3.0
```

#### Web UI

```bash
python tts/gradio_api.py
```

Visit http://127.0.0.1:7860 for the graphical interface (CPU inference takes about 30 s for 10 steps).

## Use Case: Indie Game Voice-Over

An independent studio is building a mystery puzzle game and needs voices for five very different characters, but the budget cannot cover professional voice actors for the full script.

### Without MegaTTS3

- **High cost**: recording every line with professional actors costs tens of thousands of yuan, out of reach for a small team
- **Long cycle**: coordinating actor schedules and re-recording revisions can delay development by 2–3 months
- **Consistency is fragile**: swapping actors or patch-recording makes it hard to match the timbre and emotion of earlier takes
- **Multilingual versions are hard**: a Chinese/English bilingual release needs two separate voice teams, doubling the cost

### With MegaTTS3

- **Voice cloning at near-zero cost**: a handful of reference clips (even the team's own voices) yield high-quality character voices, freeing the budget for art and programming
- **Real-time iteration**: after a script change, developers rerun inference locally and have new audio within minutes, with no actor scheduling
- **Voices locked in permanently**: each character's voice is stored as an `.npy` latent file, so the exact same vocal identity can be reproduced any time — sequels and DLC pick up seamlessly
- **Seamless Chinese-English switching**: one character voice handles mixed Chinese and English output, and accent-intensity control enables stylized choices such as "Chinese with a foreign accent" or "English with a Chinese accent"

MegaTTS3 gives small teams professional-grade voice production at minimal cost, turning "we can't afford voice acting" into creative freedom in sound design.
## Project Info

| Field | Value |
|:---|:---|
| Owner | Bytedance Inc. ([github.com/bytedance](https://github.com/bytedance)) · [@ByteDanceOSS](https://twitter.com/ByteDanceOSS) · [opensource.bytedance.com](https://opensource.bytedance.com) |
| Languages | Python 99.9%, Dockerfile 0.1% |
| Stars / Forks | 6,078 / 471 |
| Last commit | 2026-04-04 |
| License | Apache-2.0 |
| OS | Linux, Windows |
| GPU | Not required but recommended; a CUDA-capable NVIDIA GPU accelerates inference. Windows may need a specific PyTorch build (e.g. cu126). CPU inference takes about 30 s for 10 steps |
| Key dependencies | torch==2.6.0, gradio, pydantic, httpx, WeTextProcessing, pynini==2.1.5, ffmpeg |

Environment notes: 1) download the pretrained models into `./checkpoints/xxx`; 2) the WaveVAE encoder parameters are not public, so inference works only with pre-extracted `.npy` latent files; 3) the Windows build is in testing — comment out `WeTextProcessing==1.0.4.1` and install pynini via conda; 4) mismatched pydantic/gradio versions cause errors — check both; 5) if httpx errors appear, check whether the `no_proxy` environment variable contains patterns like `::`; 6) Docker deployment supports both CPU and GPU inference.

## FAQ

### Is MegaTTS3 "fake open source"? Is the zero-shot capability available?

Due to policy restrictions, MegaTTS3 cannot fully open-source its zero-shot capability. What is released is the base inference code, without zero-shot synthesis or fine-tuning. To try zero-shot synthesis, upload target voice samples via the link in the README and use the officially provided service. ([source](https://github.com/bytedance/MegaTTS3/issues/2))

### Installation fails on Windows with a pynini build error — how do I fix it?

pynini cannot be installed directly via pip on Windows; use conda instead:

1. Comment out `WeTextProcessing==1.0.4.1` in requirements.txt
2. Install the remaining dependencies: `pip install -r requirements.txt`
3. Install pynini via conda: `conda install -c conda-forge pynini==2.1.5`
4. Install the pinned WeTextProcessing: `pip install WeTextProcessing==1.0.3`

If you then hit an ffprobe error, additionally install ffmpeg: `conda install -c conda-forge ffmpeg` ([source](https://github.com/bytedance/MegaTTS3/issues/8))
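After walking through those pins, a quick check that the installed versions actually match can rule this FAQ out while debugging. A minimal sketch using only the standard library; the package list is copied from the steps above.

```python
# Verify the Windows dependency pins from the FAQ above.
import importlib.metadata as md

PINS = {"pynini": "2.1.5", "WeTextProcessing": "1.0.3"}

for pkg, want in PINS.items():
    try:
        have = md.version(pkg)
        mark = "ok" if have == want else f"MISMATCH (want {want})"
        print(f"{pkg} {have}: {mark}")
    except md.PackageNotFoundError:
        print(f"{pkg}: not installed")
```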
### Gradio fails to start on Windows or versions clash — what now?

requirements.txt does not pin a Gradio version; downgrade to a known-good pair:

```bash
pip install gradio==4.12.0 gradio_client==0.8.0
```

If pydantic errors follow, reinstall a compatible core:
```bash
pip uninstall pydantic
pip uninstall pydantic_core
pip install pydantic_core==2.10.6
```
([source](https://github.com/bytedance/MegaTTS3/issues/38))

### "Couldn't find ffprobe or avprobe" on Windows — how do I fix it?

Install ffmpeg:

```bash
conda install -c conda-forge ffmpeg
```

This resolves pydub's failure to find ffprobe. ([source](https://github.com/bytedance/MegaTTS3/issues/37))

### How do I fix the GBK encoding error on Windows?

Edit line 103 of `\MegaTTS3\tts\infer_cli.py` to open the dictionary with UTF-8:

```python
# Original:
# with open(f"{current_dir}/utils/text_utils/dict.json") as f:

# Change to:
with open(f"{current_dir}/utils/text_utils/dict.json", encoding='utf-8-sig') as f:
    ling_dict = json.load(f)
```
([source](https://github.com/bytedance/MegaTTS3/issues/37))

### How do I get the .npy file after uploading audio?

Upload your audio sample to the link provided in the README (link2); the maintainers process it and return the corresponding `.npy` latent file. At inference time, point the `latent_file` parameter at it, for example:

```python
resource_context = infer_ins.preprocess(file_content, latent_file=wav_path.replace('.wav', '.npy'))
```
([source](https://github.com/bytedance/MegaTTS3/issues/81))

### What are MegaTTS3's technical innovations?

1. **Higher similarity**: generated speech matches the target speaker more closely
2. **Accent-intensity control**: adjusting the text CFG (classifier-free guidance) parameter controls accent strength — the larger the CFG, the weaker the accent (the useful range varies by sample and needs tuning)

([source](https://github.com/bytedance/MegaTTS3/issues/7))

### How do I set environment variables and launch inference on Windows?

Full steps:

1. Set PYTHONPATH:
```bash
set PYTHONPATH=C:\path\to\MegaTTS3;%PYTHONPATH%
```

2. Create a launcher batch file (.bat):
```bash
call conda activate megatts3-env
set CUDA_VISIBLE_DEVICES=0
python tts/infer_cli.py --input_wav ./assets/Chinese_prompt.wav --input_text "這是Mega最新的TTS中文模型" --output_dir ./gen
pause
```
([source](https://github.com/bytedance/MegaTTS3/issues/37))