[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-huggingface--parler-tts":3,"tool-huggingface--parler-tts":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 
道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":79,"owner_email":79,"owner_twitter":75,"owner_website":80,"owner_url":81,"languages":82,"stars":91,"forks":92,"last_commit_at":93,"license":94,"difficulty_score":23,"env_os":95,"env_gpu":96,"env_ram":97,"env_deps":98,"category_tags":108,"github_topics":79,"view_count":109,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":110,"updated_at":111,"faqs":112,"releases":141},2556,"huggingface\u002Fparler-tts","parler-tts","Inference and training library for high-quality TTS models.","Parler-TTS 是一款轻量级且完全开源的高质量文本转语音（TTS）模型。它能够根据简单的文字描述，生成具有特定性别、音高、语速及情感风格的自然人声，让机器朗读听起来更像真人在表达。\n\n这款工具主要解决了传统 TTS 模型在声音风格控制上不够灵活，以及高质量模型往往闭源或授权受限的痛点。通过 Parler-TTS，用户只需输入如“一位女性用稍快且充满活力的语调说话”这样的提示词，即可精准定制语音效果，甚至支持指定 34 位预设说话人以保持声音一致性。\n\nParler-TTS 非常适合 AI 开发者、研究人员以及需要定制化语音内容的创作者使用。其核心亮点在于“完全开放”：数据集、预处理流程、训练代码及模型权重均在宽松许可下公开，便于社区二次开发。此外，最新发布的 Mini（8.8 亿参数）和 Large（23 亿参数）版本经过 4.5 万小时有声书数据训练，并引入了 SDPA 和 Flash Attention 2 等优化技术，显著提升了推理速度与生成质量，让用户能在本地高效部署属于自己的语音合成系统。","# Parler-TTS\n\nParler-TTS is a lightweight text-to-speech (TTS) model that can generate high-quality, natural sounding speech in the style of a given speaker (gender, pitch, speaking style, etc). It is a reproduction of work from the paper [Natural language guidance of high-fidelity text-to-speech with synthetic annotations](https:\u002F\u002Fwww.text-description-to-speech.com) by Dan Lyth and Simon King, from Stability AI and Edinburgh University respectively.\n\nContrarily to other TTS models, Parler-TTS is a **fully open-source** release. All of the datasets, pre-processing, training code and weights are released publicly under permissive license, enabling the community to build on our work and develop their own powerful TTS models.\n\nThis repository contains the inference and training code for Parler-TTS. It is designed to accompany the [Data-Speech](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fdataspeech) repository for dataset annotation.\n\n> [!IMPORTANT]\n> **08\u002F08\u002F2024:** We are proud to release two new Parler-TTS checkpoints:\n> 1. 
[Parler-TTS Mini](https:\u002F\u002Fhuggingface.co\u002Fparler-tts\u002Fparler-tts-mini-v1), an 880M parameter model.\n> 2. [Parler-TTS Large](https:\u002F\u002Fhuggingface.co\u002Fparler-tts\u002Fparler-tts-large-v1), a 2.3B parameter model.\n>\n> These checkpoints have been trained on 45k hours of audiobook data.\n>\n> In addition, the code is optimized for much faster generation: we've added SDPA and Flash Attention 2 compatibility, as well as the ability to compile the model.\n\n## 📖 Quick Index\n* [Installation](#installation)\n* [Usage](#usage)\n  - [🎲 Using a random voice](#-random-voice)\n  - [🎯 Using a specific speaker](#-using-a-specific-speaker)\n* [Training](#training)\n* [Demo](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fparler-tts\u002Fparler_tts)\n* [Model weights and datasets](https:\u002F\u002Fhuggingface.co\u002Fparler-tts)\n* [Optimizing inference](#-optimizing-inference-speed)\n\n## Installation\n\nParler-TTS has light-weight dependencies and can be installed in one line:\n\n```sh\npip install git+https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fparler-tts.git\n```\n\nApple Silicon users will need to run a follow-up command to make use of the nightly PyTorch (2.4) build for bfloat16 support:\n\n```sh\npip3 install --pre torch torchaudio --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fnightly\u002Fcpu\n```\n\n## Usage\n\n> [!TIP]\n> You can directly try it out in an interactive demo [here](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fparler-tts\u002Fparler_tts)!\n\nUsing Parler-TTS is as simple as \"bonjour\". Simply install the library once:\n\n```sh\npip install git+https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fparler-tts.git\n```\n\n### 🎲 Random voice\n\n\n**Parler-TTS** has been trained to generate speech with features that can be controlled with a simple text prompt, for example:\n\n```py\nimport torch\nfrom parler_tts import ParlerTTSForConditionalGeneration\nfrom transformers import AutoTokenizer\nimport soundfile as sf\n\ndevice = \"cuda:0\" if torch.cuda.is_available() else \"cpu\"\n\nmodel = ParlerTTSForConditionalGeneration.from_pretrained(\"parler-tts\u002Fparler-tts-mini-v1\").to(device)\ntokenizer = AutoTokenizer.from_pretrained(\"parler-tts\u002Fparler-tts-mini-v1\")\n\nprompt = \"Hey, how are you doing today?\"\ndescription = \"A female speaker delivers a slightly expressive and animated speech with a moderate speed and pitch. The recording is of very high quality, with the speaker's voice sounding clear and very close up.\"\n\ninput_ids = tokenizer(description, return_tensors=\"pt\").input_ids.to(device)\nprompt_input_ids = tokenizer(prompt, return_tensors=\"pt\").input_ids.to(device)\n\ngeneration = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)\naudio_arr = generation.cpu().numpy().squeeze()\nsf.write(\"parler_tts_out.wav\", audio_arr, model.config.sampling_rate)\n```\n\n### 🎯 Using a specific speaker\n\nTo ensure speaker consistency across generations, this checkpoint was also trained on 34 speakers, characterized by name. 
The full list of available speakers includes:\nLaura, Gary, Jon, Lea, Karen, Rick, Brenda, David, Eileen, Jordan, Mike, Yann, Joy, James, Eric, Lauren, Rose, Will, Jason, Aaron, Naomie, Alisa, Patrick, Jerry, Tina, Jenna, Bill, Tom, Carol, Barbara, Rebecca, Anna, Bruce, Emily.\n\nTo take advantage of this, simply adapt your text description to specify which speaker to use: `Jon's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise.`\n\nYou can replace \"Jon\" with any of the names from the list above to utilize different speaker characteristics. Each speaker has unique vocal qualities that can be leveraged to suit your specific needs. For more detailed information on speaker performance with voice consistency, please refer to the [inference guide](INFERENCE.md#speaker-consistency).\n\n```py\nimport torch\nfrom parler_tts import ParlerTTSForConditionalGeneration\nfrom transformers import AutoTokenizer\nimport soundfile as sf\n\ndevice = \"cuda:0\" if torch.cuda.is_available() else \"cpu\"\n\nmodel = ParlerTTSForConditionalGeneration.from_pretrained(\"parler-tts\u002Fparler-tts-mini-v1\").to(device)\ntokenizer = AutoTokenizer.from_pretrained(\"parler-tts\u002Fparler-tts-mini-v1\")\n\nprompt = \"Hey, how are you doing today?\"\ndescription = \"Jon's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise.\"\n\ninput_ids = tokenizer(description, return_tensors=\"pt\").input_ids.to(device)\nprompt_input_ids = tokenizer(prompt, return_tensors=\"pt\").input_ids.to(device)\n\ngeneration = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)\naudio_arr = generation.cpu().numpy().squeeze()\nsf.write(\"parler_tts_out.wav\", audio_arr, model.config.sampling_rate)\n```\n\n**Tips**:\n* Include the term \"very clear audio\" to generate the highest quality audio, and \"very noisy audio\" for high levels of background noise\n* Punctuation can be used to control the prosody of the generations, e.g. use commas to add small breaks in speech\n* The remaining speech features (gender, speaking rate, pitch and reverberation) can be controlled directly through the prompt\n\n### ✨ Optimizing Inference Speed\n\nWe've set up an [inference guide](INFERENCE.md) to make generation faster. Think SDPA, torch.compile and streaming!
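\nAs a rough illustration of the pattern that guide covers (a sketch, not the official recipe; it assumes a CUDA device, PyTorch 2.4+, and a recent transformers release that accepts the `attn_implementation` keyword), loading the model with SDPA attention and compiling its forward pass looks like this:\n\n```py\nimport torch\nfrom parler_tts import ParlerTTSForConditionalGeneration\n\ndevice = \"cuda:0\"\n\n# Ask for PyTorch's scaled-dot-product attention (SDPA) kernels at load time;\n# pass \"flash_attention_2\" instead if flash-attn is installed.\nmodel = ParlerTTSForConditionalGeneration.from_pretrained(\n    \"parler-tts\u002Fparler-tts-mini-v1\",\n    attn_implementation=\"sdpa\",\n).to(device, dtype=torch.float16)\n\n# Compiling the forward pass trades a slow first call for faster steady-state generation.\nmodel.forward = torch.compile(model.forward, mode=\"default\")\n```\n\nRefer to [INFERENCE.md](INFERENCE.md) for the exact, benchmarked recipes.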
\nhttps:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fparler-tts\u002Fassets\u002F52246514\u002F251e2488-fe6e-42c1-81cd-814c5b7795b0\n\n## Training\n\n\u003Ca target=\"_blank\" href=\"https:\u002F\u002Fgithub.com\u002Fylacombe\u002Fscripts_and_notebooks\u002Fblob\u002Fmain\u002FFinetuning_Parler_TTS_v1_on_a_single_speaker_dataset.ipynb\"> \n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F> \n\u003C\u002Fa>\n\nThe [training folder](\u002Ftraining\u002F) contains all the information to train or fine-tune your own Parler-TTS model. It consists of:\n- [1. An introduction to the Parler-TTS architecture](\u002Ftraining\u002FREADME.md#1-architecture)\n- [2. The first steps to get started](\u002Ftraining\u002FREADME.md#2-getting-started)\n- [3. A training guide](\u002Ftraining\u002FREADME.md#3-training)\n\n> [!IMPORTANT]\n> **TL;DR:** After following the [installation steps](\u002Ftraining\u002FREADME.md#requirements), you can reproduce the Parler-TTS Mini v1 training recipe with the following command line:\n\n```sh\naccelerate launch .\u002Ftraining\u002Frun_parler_tts_training.py .\u002Fhelpers\u002Ftraining_configs\u002Fstarting_point_v1.json\n```\n\n> [!IMPORTANT]\n> You can also follow [this fine-tuning guide](https:\u002F\u002Fgithub.com\u002Fylacombe\u002Fscripts_and_notebooks\u002Fblob\u002Fmain\u002FFinetuning_Parler_TTS_v1_on_a_single_speaker_dataset.ipynb) on a mono-speaker dataset example.\n\n## Acknowledgements\n\nThis library builds on top of a number of open-source giants, to whom we'd like to extend our warmest thanks for providing these tools!\n\nSpecial thanks to:\n- Dan Lyth and Simon King, from Stability AI and Edinburgh University respectively, for publishing such a promising and clear research paper: [Natural language guidance of high-fidelity text-to-speech with synthetic annotations](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.01912).\n- the many libraries used, namely [🤗 datasets](https:\u002F\u002Fhuggingface.co\u002Fdocs\u002Fdatasets\u002Fv2.17.0\u002Fen\u002Findex), [🤗 accelerate](https:\u002F\u002Fhuggingface.co\u002Fdocs\u002Faccelerate\u002Fen\u002Findex), [jiwer](https:\u002F\u002Fgithub.com\u002Fjitsi\u002Fjiwer), [wandb](https:\u002F\u002Fwandb.ai\u002F), and [🤗 transformers](https:\u002F\u002Fhuggingface.co\u002Fdocs\u002Ftransformers\u002Findex).\n- Descript for the [DAC codec model](https:\u002F\u002Fgithub.com\u002Fdescriptinc\u002Fdescript-audio-codec)\n- Hugging Face 🤗 for providing compute resources and time to explore!\n\n\n## Citation\n\nIf you found this repository useful, please consider citing this work and also the original Stability AI paper:\n\n```\n@misc{lacombe-etal-2024-parler-tts,\n  author = {Yoach Lacombe and Vaibhav Srivastav and Sanchit Gandhi},\n  title = {Parler-TTS},\n  year = {2024},\n  publisher = {GitHub},\n  journal = {GitHub repository},\n  howpublished = {\\url{https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fparler-tts}}\n}\n```\n\n```\n@misc{lyth2024natural,\n      title={Natural language guidance of high-fidelity text-to-speech with synthetic annotations},\n      author={Dan Lyth and Simon King},\n      year={2024},\n      eprint={2402.01912},\n      archivePrefix={arXiv},\n      primaryClass={cs.SD}\n}\n```\n\n## Contribution\n\nContributions are welcome, as the project offers many possibilities for improvement and exploration.\n\nNamely, we're looking at ways to improve both quality and speed:\n- Datasets:\n    - Train on more data\n    - Add more features such as accents\n- Training:\n    - Add PEFT compatibility to do LoRA fine-tuning.\n    - Add possibility to train without description column.\n    - Add notebook training.\n    - Explore multilingual training.\n    - Explore mono-speaker finetuning.\n    - Explore more architectures.\n- Optimization:\n    - Compilation and static cache\n    - Support for FA2 and SDPA\n- Evaluation:\n    - Add more evaluation metrics\n\n","# Parler-TTS\n\nParler-TTS 是一款轻量级的文本到语音（TTS）模型，能够以给定说话人的风格（性别、音高、语速等）生成高质量、自然流畅的语音。它复现了分别来自 Stability AI 与爱丁堡大学的 Dan Lyth 和 Simon King 在论文《基于合成标注的高保真文本到语音的自然语言引导》中的工作。\n\n与其他 TTS 模型不同，Parler-TTS 是一个 **完全开源** 的项目。所有数据集、预处理流程、训练代码和模型权重均以宽松许可证公开发布，使社区能够在此基础上进一步开发属于自己的强大 TTS 模型。\n\n本仓库包含 Parler-TTS 的推理与训练代码，并配套使用 
[Data-Speech](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fdataspeech) 仓库进行数据集标注。\n\n> [!IMPORTANT]\n> **2024年8月8日：** 我们自豪地发布了两个新的 Parler-TTS 检查点：\n> 1. [Parler-TTS Mini](https:\u002F\u002Fhuggingface.co\u002Fparler-tts\u002Fparler-tts-mini-v1)，一个拥有 8.8 亿参数的模型。\n> 2. [Parler-TTS Large](https:\u002F\u002Fhuggingface.co\u002Fparler-tts\u002Fparler-tts-large-v1)，一个拥有 23 亿参数的模型。\n>\n> 这些检查点是在 4.5 万小时有声书数据上训练得到的。\n>\n> 此外，代码也进行了优化，以大幅提升生成速度：我们加入了 SDPA 和 Flash Attention 2 的兼容性，并支持对模型进行编译。\n\n## 📖 快速索引\n* [安装](#installation)\n* [使用](#usage)\n  - [🎲 使用随机语音](#-random-voice)\n  - [🎯 使用特定说话人](#-using-a-specific-speaker)\n* [训练](#training)\n* [演示](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fparler-tts\u002Fparler_tts)\n* [模型权重与数据集](https:\u002F\u002Fhuggingface.co\u002Fparler-tts)\n* [优化推理速度](#-optimizing-inference-speed)\n\n## 安装\n\nParler-TTS 的依赖非常轻量，只需一行命令即可完成安装：\n\n```sh\npip install git+https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fparler-tts.git\n```\n\n对于 Apple Silicon 用户，还需要运行以下命令以使用夜间版 PyTorch（2.4）构建，从而支持 bfloat16 数据类型：\n\n```sh\npip3 install --pre torch torchaudio --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fnightly\u002Fcpu\n```\n\n## 使用\n\n> [!TIP]\n> 您可以直接在[交互式演示](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fparler-tts\u002Fparler_tts)中体验！\n\n使用 Parler-TTS 非常简单，就像“bonjour”一样。只需安装一次库即可：\n\n```sh\npip install git+https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fparler-tts.git\n```\n\n### 🎲 随机语音\n\n\n**Parler-TTS** 经过训练，能够根据简单的文本提示控制生成语音的特征，例如：\n\n```py\nimport torch\nfrom parler_tts import ParlerTTSForConditionalGeneration\nfrom transformers import AutoTokenizer\nimport soundfile as sf\n\ndevice = \"cuda:0\" if torch.cuda.is_available() else \"cpu\"\n\nmodel = ParlerTTSForConditionalGeneration.from_pretrained(\"parler-tts\u002Fparler-tts-mini-v1\").to(device)\ntokenizer = AutoTokenizer.from_pretrained(\"parler-tts\u002Fparler-tts-mini-v1\")\n\n# 提示词与风格描述需使用英文（当前模型基于英文语料训练）\nprompt = \"Hey, how are you doing today?\"\n# 描述含义：一位女性说话者以略带表现力和生动的语气、适中的语速和音调进行表达；录音质量非常高，嗓音清晰且近场感很强\ndescription = \"A female speaker delivers a slightly expressive and animated speech with a moderate speed and pitch. The recording is of very high quality, with the speaker's voice sounding clear and very close up.\"\n\ninput_ids = tokenizer(description, return_tensors=\"pt\").input_ids.to(device)\nprompt_input_ids = tokenizer(prompt, return_tensors=\"pt\").input_ids.to(device)\n\ngeneration = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)\naudio_arr = generation.cpu().numpy().squeeze()\nsf.write(\"parler_tts_out.wav\", audio_arr, model.config.sampling_rate)\n```\n\n### 🎯 使用特定说话人\n\n为确保每次生成的语音都保持一致的说话人特征，该检查点还基于 34 位说话人进行了训练，每位说话人都有其独特的名称。可用的说话人名单包括：\nLaura、Gary、Jon、Lea、Karen、Rick、Brenda、David、Eileen、Jordan、Mike、Yann、Joy、James、Eric、Lauren、Rose、Will、Jason、Aaron、Naomie、Alisa、Patrick、Jerry、Tina、Jenna、Bill、Tom、Carol、Barbara、Rebecca、Anna、Bruce、Emily。\n\n要利用这一点，只需在文本描述中明确指定要使用的说话人即可：`Jon's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise.`（Jon 的声音单调但语速稍快，录音非常近场，几乎没有背景噪声。）\n\n您可以将“Jon”替换为上述列表中的任何名字，以使用不同的说话人特征。每位说话人独特的嗓音特质都可以根据您的具体需求加以利用。有关说话人一致性及语音表现的更多详细信息，请参阅 [推理指南](INFERENCE.md#speaker-consistency)。\n\n```py\nimport torch\nfrom parler_tts import ParlerTTSForConditionalGeneration\nfrom transformers import AutoTokenizer\nimport soundfile as sf\n\ndevice = \"cuda:0\" if torch.cuda.is_available() else \"cpu\"\n\nmodel = ParlerTTSForConditionalGeneration.from_pretrained(\"parler-tts\u002Fparler-tts-mini-v1\").to(device)\ntokenizer = AutoTokenizer.from_pretrained(\"parler-tts\u002Fparler-tts-mini-v1\")\n\nprompt = \"Hey, how are you doing today?\"\ndescription = \"Jon's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise.\"\n\ninput_ids = tokenizer(description, return_tensors=\"pt\").input_ids.to(device)\nprompt_input_ids = tokenizer(prompt, return_tensors=\"pt\").input_ids.to(device)\n\ngeneration = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)\naudio_arr = generation.cpu().numpy().squeeze()\nsf.write(\"parler_tts_out.wav\", audio_arr, model.config.sampling_rate)\n```\n\n**提示**：\n* 如果希望生成最高质量的音频，请在描述中加入 “very clear audio”（非常清晰的音频）；若需要较高背景噪声，则可使用 “very noisy audio”（非常嘈杂的音频）。\n* 标点符号可用于控制生成语音的韵律，例如用逗号来添加短暂停顿。\n* 其他语音特征（性别、语速、音调和混响）可以直接通过提示词进行控制。\n\n### ✨ 优化推理速度\n\n我们已编写了一份 [推理指南](INFERENCE.md)，旨在进一步提升生成速度。请参考 SDPA、torch.compile 和流式传输等内容！
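\n作为该指南所涉及模式的简要示意（仅为参考草图，并非官方配方；假设使用 CUDA 设备、PyTorch 2.4+，且所安装的 transformers 版本支持 `attn_implementation` 参数），以 SDPA 方式加载模型并编译前向计算的大致写法如下：\n\n```py\nimport torch\nfrom parler_tts import ParlerTTSForConditionalGeneration\n\ndevice = \"cuda:0\"\n\n# 在加载时请求 PyTorch 的缩放点积注意力（SDPA）内核；\n# 若已安装 flash-attn，可改为传入 \"flash_attention_2\"。\nmodel = ParlerTTSForConditionalGeneration.from_pretrained(\n    \"parler-tts\u002Fparler-tts-mini-v1\",\n    attn_implementation=\"sdpa\",\n).to(device, dtype=torch.float16)\n\n# 编译前向计算：首次调用较慢，但后续生成速度更快。\nmodel.forward = torch.compile(model.forward, mode=\"default\")\n```\n\n具体经过基准测试的配方请以 [INFERENCE.md](INFERENCE.md) 为准。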
\nhttps:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fparler-tts\u002Fassets\u002F52246514\u002F251e2488-fe6e-42c1-81cd-814c5b7795b0\n\n## 训练\n\n\u003Ca target=\"_blank\" href=\"https:\u002F\u002Fgithub.com\u002Fylacombe\u002Fscripts_and_notebooks\u002Fblob\u002Fmain\u002FFinetuning_Parler_TTS_v1_on_a_single_speaker_dataset.ipynb\"> \n  \u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"在 Colab 中打开\"\u002F> \n\u003C\u002Fa>\n\n[训练文件夹](\u002Ftraining\u002F) 包含训练或微调您自己的 Parler-TTS 模型所需的所有信息。它包括：\n- [1. Parler-TTS 架构简介](\u002Ftraining\u002FREADME.md#1-architecture)\n- [2. 入门步骤](\u002Ftraining\u002FREADME.md#2-getting-started)\n- [3. 训练指南](\u002Ftraining\u002FREADME.md#3-training)\n\n> [!IMPORTANT]\n> **简而言之：** 在完成[安装步骤](\u002Ftraining\u002FREADME.md#requirements)后，您可以使用以下命令行重现 Parler-TTS Mini v1 的训练配方：\n\n```sh\naccelerate launch .\u002Ftraining\u002Frun_parler_tts_training.py .\u002Fhelpers\u002Ftraining_configs\u002Fstarting_point_v1.json\n```\n\n> [!IMPORTANT]\n> 您还可以按照[此微调指南](https:\u002F\u002Fgithub.com\u002Fylacombe\u002Fscripts_and_notebooks\u002Fblob\u002Fmain\u002FFinetuning_Parler_TTS_v1_on_a_single_speaker_dataset.ipynb)，以单说话人数据集为例进行操作。\n\n## 致谢\n\n本库建立在众多开源优秀项目的基础上，我们衷心感谢这些项目提供了如此强大的工具！\n\n特别感谢：\n- Stability AI 的 Dan Lyth 和爱丁堡大学的 Simon King，他们发表了极具前景且清晰的研究论文：[利用合成标注实现高保真文本到语音的自然语言引导](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.01912)。\n- 所使用的诸多库，包括 [🤗 datasets](https:\u002F\u002Fhuggingface.co\u002Fdocs\u002Fdatasets\u002Fv2.17.0\u002Fen\u002Findex)、[🤗 accelerate](https:\u002F\u002Fhuggingface.co\u002Fdocs\u002Faccelerate\u002Fen\u002Findex)、[jiwer](https:\u002F\u002Fgithub.com\u002Fjitsi\u002Fjiwer)、[wandb](https:\u002F\u002Fwandb.ai\u002F) 以及 [🤗 transformers](https:\u002F\u002Fhuggingface.co\u002Fdocs\u002Ftransformers\u002Findex)。\n- Descript 提供的 [DAC 编解码器模型](https:\u002F\u002Fgithub.com\u002Fdescriptinc\u002Fdescript-audio-codec)。\n- Hugging Face 🤗 提供了计算资源和探索时间！\n\n## 引用\n\n如果您觉得本仓库有用，请考虑引用本文以及原始的 Stability AI 论文（BibTeX 条目保留英文原文，以便正确引用）：\n\n```\n@misc{lacombe-etal-2024-parler-tts,\n  author = {Yoach Lacombe and Vaibhav Srivastav and Sanchit Gandhi},\n  title = {Parler-TTS},\n  year = {2024},\n  publisher = {GitHub},\n  journal = {GitHub repository},\n  howpublished = {\\url{https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fparler-tts}}\n}\n```\n\n```\n@misc{lyth2024natural,\n      title={Natural language guidance of high-fidelity text-to-speech with synthetic annotations},\n      author={Dan Lyth and Simon King},\n      year={2024},\n      eprint={2402.01912},\n      archivePrefix={arXiv},\n      primaryClass={cs.SD}\n}\n```\n\n## 贡献\n\n欢迎贡献，因为该项目有许多改进和探索的可能性。\n\n具体来说，我们正在寻找提升质量和速度的方法：\n- 数据集：\n    - 使用更多数据进行训练\n    - 增加更多特征，例如口音\n- 训练：\n    - 添加 PEFT 兼容性，以支持 LoRA 微调。\n    - 增加无需描述列即可训练的功能。\n    - 添加笔记本式训练。\n    - 探索多语言训练。\n    - 探索单说话人微调。\n    - 探索更多架构。\n- 优化：\n    - 编译与静态缓存\n    - 支持 FA2 和 SDPA\n- 评估：\n    - 增加更多评估指标","# Parler-TTS 快速上手指南\n\nParler-TTS 是一个轻量级的开源文本转语音（TTS）模型，能够根据自然语言描述生成高质量、自然的语音，并精确控制说话人的性别、音调、语速和风格。该模型完全开源，提供从数据预处理到训练权重的全套资源。\n\n## 环境准备\n\n*   **系统要求**：支持 Linux、macOS (包括 Apple Silicon) 和 Windows。\n*   **硬件建议**：推荐使用 NVIDIA GPU 以获得最佳生成速度；CPU 亦可运行但速度较慢。\n*   **前置依赖**：\n    *   Python 3.8+\n    *   
PyTorch 2.4+ (Apple Silicon 用户需安装 nightly 版本以支持 bfloat16)\n    *   `transformers` 库\n\n## 安装步骤\n\n### 1. 基础安装\n大多数用户只需运行以下命令即可完成安装：\n\n```sh\npip install git+https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fparler-tts.git\n```\n\n> **提示**：国内用户若下载缓慢，可尝试配置 pip 国内镜像源（如清华源）：\n> ```sh\n> pip install git+https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fparler-tts.git -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n> ```\n\n### 2. Apple Silicon (M1\u002FM2\u002FM3) 用户额外步骤\n如果您使用的是 Mac Apple Silicon 芯片，为了启用 bfloat16 支持以获得更好性能，请额外安装 PyTorch 的 nightly 构建版：\n\n```sh\npip3 install --pre torch torchaudio --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fnightly\u002Fcpu\n```\n\n## 基本使用\n\nParler-TTS 的核心特性是通过**自然语言描述**来控制语音风格。以下是最简单的使用示例，展示如何生成一段随机风格的语音。\n\n### 示例：生成随机风格语音\n\n此代码将加载 `parler-tts-mini-v1` 模型，并根据描述生成语音文件 `parler_tts_out.wav`。\n\n```py\nimport torch\nfrom parler_tts import ParlerTTSForConditionalGeneration\nfrom transformers import AutoTokenizer\nimport soundfile as sf\n\n# 自动检测设备 (CUDA 或 CPU)\ndevice = \"cuda:0\" if torch.cuda.is_available() else \"cpu\"\n\n# 加载模型和分词器\nmodel = ParlerTTSForConditionalGeneration.from_pretrained(\"parler-tts\u002Fparler-tts-mini-v1\").to(device)\ntokenizer = AutoTokenizer.from_pretrained(\"parler-tts\u002Fparler-tts-mini-v1\")\n\n# 输入文本\nprompt = \"Hey, how are you doing today?\"\n\n# 语音风格描述 (控制性别、语速、音质等)\ndescription = \"A female speaker delivers a slightly expressive and animated speech with a moderate speed and pitch. The recording is of very high quality, with the speaker's voice sounding clear and very close up.\"\n\n# 编码输入\ninput_ids = tokenizer(description, return_tensors=\"pt\").input_ids.to(device)\nprompt_input_ids = tokenizer(prompt, return_tensors=\"pt\").input_ids.to(device)\n\n# 生成音频\ngeneration = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)\naudio_arr = generation.cpu().numpy().squeeze()\n\n# 保存为 WAV 文件\nsf.write(\"parler_tts_out.wav\", audio_arr, model.config.sampling_rate)\n```\n\n### 进阶技巧：指定特定说话人\n模型内置了 34 种预设说话人（如 Jon, Laura, Gary 等）。若要固定说话人音色，只需在 `description` 中明确提及名字即可。\n\n**修改描述示例：**\n```python\n# 将描述改为指定 \"Jon\" 的声音特征\ndescription = \"Jon's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise.\"\n```\n*可用说话人列表包含：Laura, Gary, Jon, Lea, Karen, Rick, Brenda, David, Eileen, Jordan, Mike, Yann, Joy, James, Eric, Lauren, Rose, Will, Jason, Aaron, Naomie, Alisa, Patrick, Jerry, Tina, Jenna, Bill, Tom, Carol, Barbara, Rebecca, Anna, Bruce, Emily。*\n\n### 提示词优化建议\n*   **音质控制**：在描述中加入 `\"very clear audio\"` 可生成最高音质；加入 `\"very noisy audio\"` 可模拟背景噪音。\n*   **韵律控制**：利用标点符号（如逗号）可以在语音中制造短暂的停顿。\n*   **特征控制**：性别、语速、音调和混响效果均可直接通过自然语言描述进行调整。","一家专注于有声书制作的初创团队，正试图将大量公版经典小说快速转化为高质量的多人对话版有声剧。\n\n### 没有 parler-tts 时\n- **角色声音单一**：传统 TTS 模型难以在同一项目中稳定区分多个角色，导致旁白与不同人物的声音缺乏辨识度，听众容易混淆。\n- **情感表达僵硬**：现有工具生成的语音语调平淡，无法根据剧情需要灵活调整“激动”、“低沉”或“耳语”等细腻的情感色彩。\n- **定制成本高昂**：若需特定音色（如“中年磁性男声”），往往需要录制大量真实人声数据并重新训练模型，耗时数周且算力成本极高。\n- **开源限制多**：许多高质量语音模型不开放权重或训练代码，团队无法针对特定书籍风格进行微调，只能被动接受固定效果。\n\n### 使用 parler-tts 后\n- **精准角色塑造**：利用自然语言描述（如\"A female speaker delivers an animated speech\"）即可直接生成具有特定性别、音高和说话风格的语音，轻松区分剧中十多位角色。\n- **情感动态可控**：只需修改提示词中的风格描述，就能让同一位虚拟演员瞬间从“平静叙述”切换到“愤怒争吵”，极大提升了剧集的感染力。\n- **零样本快速落地**：无需收集任何新数据，直接调用预训练的 34 种特定说话人（如 Laura, Gary 等）或随机生成音色，几分钟内即可完成新角色的声音配置。\n- **完全自主可控**：得益于全开源的权重与训练代码，团队可在本地基于 4.5 万小时有声书数据微调模型，打造专属的语音合成引擎。\n\nparler-tts 
通过自然语言驱动的高保真语音生成能力，让小型团队也能以极低成本制作出媲美专业录音棚的多角色有声剧。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhuggingface_parler-tts_1a8aacd2.png","huggingface","Hugging Face","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fhuggingface_90da21a4.png","The AI community building the future.",null,"https:\u002F\u002Fhuggingface.co\u002F","https:\u002F\u002Fgithub.com\u002Fhuggingface",[83,87],{"name":84,"color":85,"percentage":86},"Python","#3572A5",100,{"name":88,"color":89,"percentage":90},"Makefile","#427819",0,5561,589,"2026-04-02T09:08:27","Apache-2.0","Linux, macOS, Windows","可选但推荐（代码示例优先使用 CUDA），支持 NVIDIA GPU；Apple Silicon (M1\u002FM2) 需安装 PyTorch 夜间版以支持 bfloat16；支持 SDPA 和 Flash Attention 2 加速","未说明（建议根据模型大小配置：Mini 版约需 4-8GB，Large 版建议 16GB+）",{"notes":99,"python":100,"dependencies":101},"Apple Silicon 用户必须运行额外命令安装 PyTorch 夜间构建版以启用 bfloat16 支持。提供两种模型版本：Parler-TTS Mini (8.8 亿参数) 和 Large (23 亿参数)。代码已优化支持 SDPA、Flash Attention 2 及 torch.compile 以加速推理。可通过文本描述控制说话人特征（性别、语速、音高等），内置 34 种特定说话人音色。","未说明（隐含要求 Python 3.8+ 以兼容 PyTorch 2.4 及 transformers）",[102,103,104,105,106,107],"torch>=2.4 (Apple Silicon 需 nightly 版)","torchaudio","transformers","accelerate","soundfile","datasets",[55,13],7,"2026-03-27T02:49:30.150509","2026-04-06T08:45:16.168816",[113,118,123,128,133,137],{"id":114,"question_zh":115,"answer_zh":116,"source_url":117},11815,"在 macOS MPS 设备上运行时遇到 \"Can't infer missing attention mask on `mps` device\" 错误怎么办？","该问题通常与 PyTorch 的回归问题或配置有关。解决方案包括：\n1. 确保使用较新版本的 PyTorch（建议 v2.6.0 或更高，其中包含了相关修复）。\n2. 如果问题依然存在，可以尝试显式提供 `attention_mask`。\n3. 注意：如果看到关于 `pad_token_id` 和 `eos_token_id` 相同的警告，这通常是良性警告（WAI），不会影响生成质量，可以忽略。","https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fparler-tts\u002Fissues\u002F148",{"id":119,"question_zh":120,"answer_zh":121,"source_url":122},11816,"微调训练时出现 \"TypeError: slice indices must be integers...\" 错误如何解决？","这是因为音频长度计算结果为浮点数而非整数。请修改训练脚本 `training\u002Frun_parler_tts_training.py` 中的第 347-348 行，将最大和最小目标长度强制转换为整数：\n```python\nmax_target_length = int(data_args.max_duration_in_seconds * sampling_rate)\nmin_target_length = int(data_args.min_duration_in_seconds * sampling_rate)\n```\n此修复已在后续版本中合并。","https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fparler-tts\u002Fissues\u002F100",{"id":124,"question_zh":125,"answer_zh":126,"source_url":127},11817,"如何在 Apple Silicon (MPS) 设备上运行 Parler TTS？","需要较新的 PyTorch 版本（如 2.4 nightly 或更高）以支持 MPS 上的 bfloat16。示例代码如下：\n```python\ndevice = \"mps:0\"\nmodel = ParlerTTSForConditionalGeneration.from_pretrained(\"parler-tts\u002Fparler_tts_mini_v0.1\").to(device=device, dtype=torch.bfloat16)\n# ... 其余推理代码\ngeneration = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)\naudio_arr = generation.to(torch.float32).cpu().numpy().squeeze()\n```\n如果遇到 \"Output channels > 65536 not supported\" 错误，可设置环境变量 `PYTORCH_ENABLE_MPS_FALLBACK=1` 作为临时解决方案（会使用 CPU 回退，速度较慢）。","https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fparler-tts\u002Fissues\u002F6",{"id":129,"question_zh":130,"answer_zh":131,"source_url":132},11818,"Parler TTS 能否重新训练用于生成歌唱人声（Singing Vocals）？","理论上可以。用户已成功使用约 1 万小时的歌唱人声数据集对模型进行微调。但在实践中需要注意以下几点：\n1. 模型可能难以区分男女声，需检查训练数据集的质量。\n2. 生成长音频（2 分钟以上）时，可能需要通过添加特定名称标记来保持声音一致性。\n3. 需要根据输入文本长度调整 `min_length` 或 `min_new_tokens` 参数，防止漏词（例如使用公式：输入词数 * 0.8）。\n4. 
偶尔会出现无意义的尖叫声，需进一步调试数据或参数。","https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fparler-tts\u002Fissues\u002F8",{"id":134,"question_zh":135,"answer_zh":136,"source_url":117},11819,"关于 `pad_token_id` 和 `eos_token_id` 相同的警告是否会影响模型输出？","这是一个良性警告（Benign Warning），属于预期行为（WAI）。这是因为在配置文件中 `pad_token_id` 和 `eos_token_id` 被设置为相同的值。该警告不会阻止你生成高质量的音频输出，在官方微调示例笔记本中也会出现此警告，可以安全忽略。",{"id":138,"question_zh":139,"answer_zh":140,"source_url":117},11820,"在 MPS 设备上遇到 PyTorch Conv2d 操作回归导致的问题怎么办？","这是 PyTorch 上游引入的回归问题（pytorch\u002Fpytorch#140726）。目前 `Conv1d` 操作不受影响。修复方案预计将包含在 PyTorch v2.6.0 版本中。在此之前，建议关注 PyTorch 官方 Issue #142836 获取最新进展，或暂时使用 CPU\u002FCUDA 设备进行推理。",[]]