[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-ThioJoe--Auto-Synced-Translated-Dubs":3,"tool-ThioJoe--Auto-Synced-Translated-Dubs":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":76,"owner_location":79,"owner_email":76,"owner_twitter":80,"owner_website":81,"owner_url":82,"languages":83,"stars":88,"forks":89,"last_commit_at":90,"license":91,"difficulty_score":23,"env_os":92,"env_gpu":93,"env_ram":94,"env_deps":95,"category_tags":101,"github_topics":102,"view_count":109,"oss_zip_url":76,"oss_zip_packed_at":76,"status":16,"created_at":110,"updated_at":111,"faqs":112,"releases":148},390,"ThioJoe\u002FAuto-Synced-Translated-Dubs","Auto-Synced-Translated-Dubs","Automatically translates the text of a video based on a subtitle file, and then uses AI voice services to create a new dubbed & translated audio track where the speech is synced using the subtitle's timings.","Auto-Synced-Translated-Dubs 是一款自动化视频本地化工具，它能基于现有的字幕文件，自动将视频文本翻译成目标语言，并利用 AI 语音合成技术生成配音音频。这一过程通过字幕的时间戳精确控制新音频的时长，确保配音与原视频画面完美同步。\n\n它主要解决了人工制作多语言配音耗时费力、音画难以对齐的痛点。工具特别适合开发者、内容创作者及研究人员，尤其是需要在 YouTube 等平台批量发布多语言视频的用户。除了核心配音功能外，它还集成了多个实用脚本，例如批量翻译视频标题描述、自动上传字幕以及管理音轨等。\n\n其独特之处在于灵活的音频处理策略，既可以通过拉伸现有音频来匹配时长，也能在合成时直接指定语速以提升自然度。虽然使用它需要配置 Google Cloud 或 Azure 等 API 密钥并安装 ffmpeg 环境，但对于追求高效视频本地化的技术用户来说，这是一个功能强大且免费的开源解决方案。","# Auto Synced & Translated Dubs\n Automatically translates the text of a video into chosen languages based 
on a subtitle file, and also uses AI voice to dub the video, while keeping it properly synced to the original video using the subtitle's timings.\n \n### How It Works\nIf you already have a human-made SRT subtitles file for a video, this will:\n1. Use Google Cloud\u002FDeepL to automatically translate the text, and create new translated SRT files\n2. Use the timings of the subtitle lines to calculate the correct duration of each spoken audio clip\n3. Create text-to-speech audio clips of the translated text (using more realistic neural voices)\n4. Stretch or shrink the translated audio clip to be exactly the same length as the original speech.\n    - Optional (On by Default): Instead of stretching the audio clips, you can instead do a second pass at synthesizing each clip through the API using the proper speaking speed calculated during the first pass. This slightly improves audio quality.\n    - If using Azure TTS, this entire step is not necessary because it allows specifying the desired duration of the speech before synthesis\n5. Builds the audio track by inserting the new audio clips at their correct time points. 
Therefore the translated speech will remain perfectly in sync with the original video.\n    \n### More Key Features\n- Creates translated versions of the SRT subtitle file\n- Batch processing of multiple languages in sequence\n- Config files to save translation, synthesis, and language settings for re-use\n- Allows detailed control over how the text is translated and synthesized\n   - Including: A \"Don't Translate\" phrase list, a manual translation list, a phoneme pronunciation list, and more\n\n### Additional Included Tools\n- `TrackAdder.py`: Adds all language audio tracks to a video file\n   - With ability to merge a sound effects track into each language track\n- `TitleTranslator.py`: Translates a YouTube video Title and Description to multiple languages\n- `TitleDescriptionUpdater.py`: Uses YouTube API to update the localized titles and descriptions for a YouTube video using output of TitleTranslator.py\n- `SubtitleTrackRemover.py`: Uses YouTube API to remove a specific audio track from a YouTube video\n- `TranscriptTranslator.py`: Translates an entire transcript of text\n- `TranscriptAutoSyncUploader.py`: Using YouTube API, it lets you upload a transcript for a video, then have YouTube sync the text to the video\n   - You can also upload multiple pre-translated transcripts and have YouTube sync it, assuming the language is supported\n- `YouTube_Synced_Translations_Downloader.py`: Using YouTube API, translate the captions of a video into the specified languages, then download the auto-synced subtitle file created by YouTube\n----\n\n# Instructions\n\n### External Requirements:\n- ffmpeg must be installed (https:\u002F\u002Fffmpeg.org\u002Fdownload.html)\n\n### Optional External Tools:\n- Optional: Instead of ffmpeg for audio stretching, you could use the program 'rubberband'\n  - I've actually found ffmpeg works better, but I'll still leave the option for rubberband if you want.\n  - If using Rubberband, you'll need the rubberband binaries. 
Specifically on [this page](https:\u002F\u002Fbreakfastquay.com\u002Frubberband\u002F), find the download link for \"Rubber Band Library v3.3.0 command-line utility\" (Pick the Windows or MacOS version depending on your system). Then extract the archive to find:\n     - On Windows: rubberband.exe, rubberband-r3.exe, and sndfile.dll\n     - On MacOS: rubberband, rubberband-r3\n  - Doesn't need to be installed, just put the above mentioned files in the same directory as main.py\n\n## Setup & Configuration\n1. Download or clone the repo and install the requirements using `pip install -r requirements.txt`\n   - I wrote this using Python 3.9 but it will probably work with earlier versions too\n2. Install the programs mentioned in the 'External Requirements' above.\n3. Setup your Google Cloud (See Wiki), Microsoft Azure API access and\u002For DeepL API Token, and set the variables in `cloud_service_settings.ini`. \n   - I recommend Azure for the TTS voice synthesizing because they have newer and better voices in my opinion, and in higher quality (Azure supports sample rate up to 48KHz vs 24KHz with Google). \n   - Google Cloud is faster, cheaper and supports more languages for text translation, but you can also use DeepL.\n4. Set up your configuration settings in `config.ini`. The default settings should work in most cases, but read through them especially if you are using Azure for TTS because there are more applicable options you may want to customize.\n   - This config includes options such as the ability to skip text translation, setting formats and sample rate, and using two-pass voice synthesizing\n5. Finally open `batch.ini` to set the language and voice settings that will be used for each run. 
\n   - In the top `[SETTINGS]` section you will enter the path to the original video file (used to get the correct audio length), and the original subtitle file path\n   - Also you can use the `enabled_languages` variable to list all the languages that will be translated and synthesized at once. The numbers will correspond to the `[LANGUAGE-#]` sections in the same config file. The program will process only the languages listed in this variable.\n   - This lets you add as many language presets as you want (such as the preferred voice per language), and can choose which languages you want to use (or not use) for any given run.\n   - Make sure to check supported languages and voices for each service in their respective documentation.\n\n## Usage Instructions\n- **How to Run:** After configuring the config files, simply run the main.py script using `python main.py` and let it run to completion\n   - Resulting translated subtitle files and dubbed audio tracks will be placed in a folder called 'output'\n- **Optional:** You can use the separate `TrackAdder.py` script to automatically add the resulting language tracks to an mp4 video file. Requires ffmpeg to be installed.\n   - Open the script file with a text editor and change the values in the \"User Settings\" section at the top.\n   - This will label the tracks so the video file is ready to be uploaded to YouTube. HOWEVER, the multiple audio tracks feature is only available to a limited number of channels. You will most likely need to contact YouTube creator support to ask for access, but there is no guarantee they will grant it.\n- **Optional:** You can use the separate `TitleTranslator.py` script if uploading to YouTube, which lets you enter a video's Title and Description, and the text will be translated into all the languages enabled in `batch.ini`. 
They will be placed together in a single text file in the \"output\" folder.\n\n----\n\n## Additional Notes:\n- This works best with subtitles that do not remove gaps between sentences and lines.\n- For now the process only assumes there is one speaker. However, if you can make separate SRT files for each speaker, you could generate each TTS track separately using different voices, then combine them afterwards.\n- It supports both Google Translate API and DeepL for text translation, and Google, Azure, and Eleven Labs for Text-To-Speech with neural voices.\n- This script was written with my own personal workflow in mind. That is:\n    - I use [**OpenAI Whisper**](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fwhisper) to transcribe the videos locally, then use [**Descript**](https:\u002F\u002Fwww.descript.com\u002F) to sync that transcription and touch it up with corrections.\n    - Then I export the SRT file with Descript, which is ideal because it does not just butt the start and end times of each subtitle line next to each other. This means the resulting dub will preserve the pauses between sentences from the original speech. 
If you use subtitles from another program, you might find the pauses between lines are too short.\n    - The SRT export settings in Descript that seem to work decently for dubbing are *150 max characters per line*, and *1 max line per card*.\n- The \"Two Pass\" synthesizing feature (can be enabled in the config) will drastically improve the quality of the final result, but will require synthesizing each clip twice, therefore doubling any API costs.\n\n### Currently Supported Text-To-Speech Services:\n- Microsoft Azure\n- Google Cloud\n- Eleven Labs\n\n### Currently Supported Translation Services:\n- Google Translate\n- DeepL\n\n## For more information on supported languages by service:\n- [Google Cloud Translation Supported Languages](https:\u002F\u002Fcloud.google.com\u002Ftranslate\u002Fdocs\u002Flanguages)\n- [Google Cloud Text-to-Speech Supported Languages](https:\u002F\u002Fcloud.google.com\u002Ftext-to-speech\u002Fdocs\u002Fvoices)\n- [Azure Text-to-Speech Supported Languages](https:\u002F\u002Fdocs.microsoft.com\u002Fen-us\u002Fazure\u002Fcognitive-services\u002Fspeech-service\u002Flanguage-support#text-to-speech)\n- [DeepL Supported Languages](https:\u002F\u002Fwww.deepl.com\u002Fdocs-api\u002Ftranslating-text\u002Frequest\u002F)\n\n----\n\n### For Result Examples See: [Examples Wiki Page](https:\u002F\u002Fgithub.com\u002FThioJoe\u002FAuto-Synced-Translated-Dubs\u002Fwiki\u002FExamples)\n### For Planned Features See: [Planned Features Wiki Page](https:\u002F\u002Fgithub.com\u002FThioJoe\u002FAuto-Synced-Translated-Dubs\u002Fwiki\u002FPlanned-Features)\n### For Google Cloud Project Setup Instructions See: [Instructions Wiki Page](https:\u002F\u002Fgithub.com\u002FThioJoe\u002FAuto-Synced-Translated-Dubs\u002Fwiki\u002FInstructions:-Obtaining-an-API-Key)\n### For Microsoft Azure Setup Instructions See: [Azure Instructions Wiki 
Page](https:\u002F\u002Fgithub.com\u002FThioJoe\u002FAuto-Synced-Translated-Dubs\u002Fwiki\u002FInstructions:-Microsoft-Azure-Setup)\n\n","# 自动同步与翻译配音\n 基于字幕文件，自动将视频文本翻译成所选语言，并使用 AI 语音为视频配音，同时利用字幕的时间码确保配音与原视频保持正确同步。\n\n### 工作原理\n如果你已经有一个视频的人工制作 SRT 字幕文件，它将执行以下操作：\n1. 使用 Google Cloud\u002FDeepL 自动翻译文本，并创建新的翻译后的 SRT 文件\n2. 利用字幕行的时间码计算每个语音音频片段的正确时长\n3. 为翻译后的文本创建文本转语音 (Text-to-Speech, TTS) 音频片段（使用更逼真的神经语音）\n4. 拉伸或缩短翻译后的音频片段，使其长度与原语音完全一致。\n    - 可选（默认开启）：与其拉伸音频片段，不如通过 API 对每个片段进行第二遍合成，使用第一遍计算出的正确语速。这会略微提高音频质量。\n    - 如果使用 Azure TTS，则不需要此步骤，因为它允许在合成前指定所需的语音时长\n5. 通过在正确时间点插入新音频片段来构建音频轨道。因此，翻译后的语音将与原视频保持完美同步。\n\n### 更多关键功能\n- 创建 SRT 字幕文件的翻译版本\n- 按顺序批量处理多种语言\n- 配置文件用于保存翻译、合成和语言设置以便重复使用\n- 允许详细控制文本的翻译和合成方式\n   - 包括：“不翻译”短语列表、手动翻译列表、音素发音列表等\n\n### 额外包含的工具\n- `TrackAdder.py`：将所有语言音频轨道添加到视频文件中\n   - 能够将音效轨道合并到每个语言轨道中\n- `TitleTranslator.py`：将 YouTube 视频标题和描述翻译成多种语言\n- `TitleDescriptionUpdater.py`：使用 YouTube API，利用 `TitleTranslator.py` 的输出更新 YouTube 视频的本地化标题和描述\n- `SubtitleTrackRemover.py`：使用 YouTube API 从 YouTube 视频中移除特定的音频轨道\n- `TranscriptTranslator.py`：翻译整段文本转录文稿\n- `TranscriptAutoSyncUploader.py`：使用 YouTube API，允许你上传视频的字幕文稿，然后让 YouTube 将文本与视频同步\n   - 你也可以上传多个预先翻译好的文稿，让 YouTube 进行同步（前提是支持该语言）\n- `YouTube_Synced_Translations_Downloader.py`：使用 YouTube API，将视频的字幕翻译成指定语言，然后下载由 YouTube 创建的自动同步字幕文件\n----\n\n# 使用说明\n\n### 外部依赖项：\n- 必须安装 ffmpeg (https:\u002F\u002Fffmpeg.org\u002Fdownload.html)\n\n### 可选的外部工具：\n- 可选：如果不使用 ffmpeg 进行音频拉伸，可以使用程序 'rubberband'\n  - 实际上我发现 ffmpeg 效果更好，但我仍然保留 rubberband 选项供你选择。\n  - 如果使用 Rubberband，你需要 rubberband 的二进制文件。具体来说，在 [此页面](https:\u002F\u002Fbreakfastquay.com\u002Frubberband\u002F) 上，找到 \"Rubber Band Library v3.3.0 command-line utility\" 的下载链接（根据情况选择 Windows 或 MacOS 版本）。然后解压归档文件以查找：\n     - 在 Windows 上：rubberband.exe, rubberband-r3.exe, 和 sndfile.dll\n     - 在 MacOS 上：rubberband, rubberband-r3\n  - 无需安装，只需将上述文件放在与 main.py 相同的目录中\n\n## 设置与配置\n1. 下载或克隆仓库，并使用 `pip install -r requirements.txt` 安装依赖项\n   - 我是用 Python 3.9 编写的，但它可能也能兼容更早的版本\n2. 
安装上述“外部依赖项”中提到的程序。\n3. 设置你的 Google Cloud（参见 Wiki）、Microsoft Azure API 访问权限和\u002F或 DeepL API Token，并在 `cloud_service_settings.ini` 中设置变量。 \n   - 我推荐 Azure 用于 TTS 语音合成，因为在我看来他们的声音更新更好，且质量更高（Azure 支持高达 48KHz 的采样率，而 Google 为 24KHz）。 \n   - Google Cloud 更快、更便宜且支持更多语言的文本翻译，但你也可以使用 DeepL。\n4. 在 `config.ini` 中设置配置参数。默认设置通常在大多数情况下都能工作，但请阅读一遍，特别是如果你使用 Azure 进行 TTS，因为那里有更多适用的选项你可能想要自定义。\n   - 此配置包括跳过文本翻译、设置格式和采样率以及使用两遍语音合成等功能选项。\n5. 最后打开 `batch.ini` 设置每次运行将使用的语言和语音设置。 \n   - 在顶部的 `[SETTINGS]` 部分，你将输入原始视频文件的路径（用于获取正确的音频长度）和原始字幕文件路径。\n   - 你还可以使用 `enabled_languages` 变量列出所有将一次性翻译和合成的语言。数字将对应同一配置文件中的 `[LANGUAGE-#]` 部分。程序将仅处理此变量中列出的语言。\n   - 这允许你添加任意数量的语言预设（例如每种语言的偏好语音），并可以选择在任何给定运行中使用（或不使用）哪些语言。\n   - 请务必在各服务的相应文档中检查支持的语言和语音。\n\n## 使用说明\n- **如何运行：** 配置好配置文件后，只需使用 `python main.py` 运行 main.py 脚本，并让其运行至完成。\n   - 生成的翻译字幕文件和配音音频轨道将放置在名为 'output' 的文件夹中。\n- **可选：** 你可以使用单独的 `TrackAdder.py` 脚本自动将生成的语言轨道添加到 mp4 视频文件中。需要安装 ffmpeg。\n   - 用文本编辑器打开脚本文件，并更改顶部“用户设置”部分中的值。\n   - 这将标记轨道，使视频文件准备好上传到 YouTube。但是，多音频轨道功能仅适用于有限数量的频道。你很可能需要联系 YouTube 创作者支持以申请访问权限，但不能保证他们会批准。\n- **可选：** 如果上传到 YouTube，你可以使用单独的 `TitleTranslator.py` 脚本，允许你输入视频的标题和描述，文本将被翻译成 `batch.ini` 中启用的所有语言。它们将一起放置在一个位于 \"output\" 文件夹中的文本文件中。\n\n## 补充说明：\n- 此工具在处理保留句子和行间停顿的字幕时效果最佳。\n- 目前该流程仅假设有一个说话人。不过，如果你能为每个说话人生成单独的 SRT 文件，你可以使用不同的声音分别生成每个 TTS（文本转语音）轨道，然后再将它们合并。\n- 它支持 Google Translate API 和 DeepL 进行文本翻译，并支持 Google、Azure 和 Eleven Labs 提供带有神经语音的 Text-To-Speech（TTS）。\n- 此脚本是围绕我个人的工作流程编写的。具体如下：\n    - 我使用 [**OpenAI Whisper**](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fwhisper) 在本地转录视频，然后使用 [**Descript**](https:\u002F\u002Fwww.descript.com\u002F) 同步该转录内容并进行修正润色。\n    - 然后我通过 Descript 导出 SRT 文件，这非常理想，因为它不会简单地将每行字幕的开始和结束时间紧挨着排列。这意味着生成的配音将保留原声中的句子间停顿。如果你使用其他程序的字幕，可能会发现行之间的停顿太短。\n    - 在 Descript 中似乎适合配音的 SRT 导出设置是 *每行最多 150 个字符*，以及 *每张卡片最多 1 行*。\n- “双遍”合成功能（可在配置中启用）将显著提高最终结果的质量，但需要为每个片段合成两次，因此会将任何 API 成本加倍。\n\n### 当前支持的文本转语音服务：\n- Microsoft Azure\n- Google Cloud\n- Eleven Labs\n\n### 当前支持的翻译服务：\n- Google Translate\n- DeepL\n\n## 
关于各服务支持语言的更多信息：\n- [Google Cloud 翻译支持的语言](https:\u002F\u002Fcloud.google.com\u002Ftranslate\u002Fdocs\u002Flanguages)\n- [Google Cloud 文本转语音支持的语言](https:\u002F\u002Fcloud.google.com\u002Ftext-to-speech\u002Fdocs\u002Fvoices)\n- [Azure 文本转语音支持的语言](https:\u002F\u002Fdocs.microsoft.com\u002Fen-us\u002Fazure\u002Fcognitive-services\u002Fspeech-service\u002Flanguage-support#text-to-speech)\n- [DeepL 支持的语言](https:\u002F\u002Fwww.deepl.com\u002Fdocs-api\u002Ftranslating-text\u002Frequest\u002F)\n\n----\n\n### 查看结果示例请见：[示例 Wiki 页面](https:\u002F\u002Fgithub.com\u002FThioJoe\u002FAuto-Synced-Translated-Dubs\u002Fwiki\u002FExamples)\n### 查看计划功能请见：[计划功能 Wiki 页面](https:\u002F\u002Fgithub.com\u002FThioJoe\u002FAuto-Synced-Translated-Dubs\u002Fwiki\u002FPlanned-Features)\n### 查看 Google Cloud 项目设置说明请见：[说明 Wiki 页面](https:\u002F\u002Fgithub.com\u002FThioJoe\u002FAuto-Synced-Translated-Dubs\u002Fwiki\u002FInstructions:-Obtaining-an-API-Key)\n### 查看 Microsoft Azure 设置说明请见：[Azure 说明 Wiki 页面](https:\u002F\u002Fgithub.com\u002FThioJoe\u002FAuto-Synced-Translated-Dubs\u002Fwiki\u002FInstructions:-Microsoft-Azure-Setup)","# Auto-Synced-Translated-Dubs 快速上手指南\n\n## 环境准备\n\n### 系统要求\n- Python 3.9+\n- 操作系统：Windows \u002F macOS \u002F Linux\n\n### 前置依赖\n1. **FFmpeg**: 必须安装，用于音频处理。\n   - 下载地址：https:\u002F\u002Fffmpeg.org\u002Fdownload.html\n   - *注：也可使用 `rubberband` 替代音频拉伸功能，但 FFmpeg 效果更佳。*\n2. **API 密钥**: 需申请并配置以下服务的访问凭证（至少选择一种组合）：\n   - 文本翻译：Google Cloud Translation 或 DeepL\n   - 语音合成 (TTS)：Microsoft Azure, Google Cloud 或 Eleven Labs\n   - *建议：Azure TTS 音质较好，Google Cloud 翻译速度快且支持语言多。*\n\n## 安装步骤\n\n1. **克隆仓库**\n   ```bash\n   git clone \u003Crepository_url>\n   cd Auto-Synced-Translated-Dubs\n   ```\n\n2. **安装 Python 依赖**\n   建议使用国内镜像加速安装：\n   ```bash\n   pip install -r requirements.txt -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n   ```\n\n3. **配置服务密钥**\n   编辑 `cloud_service_settings.ini`，填入你的 API Token、Project ID 等凭据。\n\n4. 
**配置通用设置**\n   编辑 `config.ini`，调整默认参数（如采样率、是否开启双遍合成等）。\n   - *提示：开启“双遍合成”可显著提升音质，但会加倍 API 调用成本。*\n\n5. **配置批处理任务**\n   编辑 `batch.ini`：\n   - 在 `[SETTINGS]` 中指定原视频路径和原字幕文件路径。\n   - 在 `enabled_languages` 变量中列出需要翻译的语言编号。\n   - 在对应的 `[LANGUAGE-#]` 部分配置目标语言和语音预设。\n\n## 基本使用\n\n### 运行主程序\n完成上述配置后，直接运行脚本：\n```bash\npython main.py\n```\n程序将自动执行翻译、AI 配音及时间轴同步操作。生成的翻译字幕和配音音频将保存在 `output` 文件夹中。\n\n### 可选：合并音轨\n如需将生成的多语言音轨合并到 MP4 视频中，可使用辅助脚本：\n```bash\npython TrackAdder.py\n```\n*注意：需在脚本顶部修改\"User Settings\"中的文件路径。YouTube 多音轨功能可能需要联系创作者支持获取权限。*\n\n### 可选：翻译标题与描述\n若计划上传至 YouTube，可先批量翻译视频标题和描述：\n```bash\npython TitleTranslator.py\n```\n\n---\n*提示：该工具对字幕格式敏感，推荐使用 Descript 导出的 SRT 字幕，以保留原始语速停顿。*","一家专注于技术分享的自媒体团队，急需将现有的英文编程教程视频本地化为西班牙语版本，以便触达拉美市场。\n\n### 没有 Auto-Synced-Translated-Dubs 时\n- 人工翻译字幕不仅耗时漫长，还容易出现专业术语理解偏差导致歧义。\n- 聘请专业配音员成本极高，且协调档期、录制及后期修音流程复杂。\n- 即便获得录音，手动调整音频长度以匹配原片口型几乎不可能精准同步。\n- 在 YouTube 平台管理多语言音轨需逐个上传配置，维护多个版本效率低下。\n\n### 使用 Auto-Synced-Translated-Dubs 后\n- 直接读取原有 SRT 字幕，调用 Google Cloud 或 DeepL 接口实现秒级文本翻译。\n- 集成高拟真神经网络语音合成，自动生成符合语境情感的多语言配音轨道。\n- 依据字幕时间戳智能伸缩音频片段，确保新语音与原始视频画面完美卡点。\n- 支持配置文件保存设置，可一次性批量输出多语言版本并自动合并音轨。\n\nAuto-Synced-Translated-Dubs 彻底消除了人工配音与对口的繁琐环节，让视频内容的全球化分发变得高效且低成本。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FThioJoe_Auto-Synced-Translated-Dubs_4cdad056.png","ThioJoe",null,"https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FThioJoe_10af61f5.png","I make tech YouTube videos, and also sometimes create random software tools as fun side projects.","United States & America","thiojoe","ThioJoe.com","https:\u002F\u002Fgithub.com\u002FThioJoe",[84],{"name":85,"color":86,"percentage":87},"Python","#3572A5",100,1716,162,"2026-04-03T17:51:04","GPL-3.0","Linux, macOS, Windows","不需要 (依赖云端 API 进行翻译和语音合成)","未说明",{"notes":96,"python":97,"dependencies":98},"1. 必须安装 ffmpeg 命令行工具用于音频处理；2. 需自行配置 Google Cloud、Microsoft Azure 或 DeepL 的 API 密钥；3. 建议使用保留句间停顿的 SRT 字幕文件以优化同步效果；4. 开启两遍语音合成可提升音质但会使 API 成本翻倍；5. 默认针对单说话人设计，多说话人需分别生成轨道；6. 
可选使用 rubberband 替代 ffmpeg 进行音频拉伸。","3.9",[99,100],"未说明 (需安装 requirements.txt)","ffmpeg",[15,26,14,13,55],[103,104,105,106,107,108],"ai","dubbing","subtitles","text-to-speech","translation","tts",5,"2026-03-27T02:49:30.150509","2026-04-06T05:37:56.315902",[113,118,123,128,133,138,143],{"id":114,"question_zh":115,"answer_zh":116,"source_url":117},1430,"如何跳过翻译仅使用 Azure 朗读字幕？","在 config.ini 文件中设置 `skip_translation = True`。根据维护者回复，此问题在最新发行版 0.11.1 中已修复，不再强制要求提供 Google 或 DeepL API 密钥。","https:\u002F\u002Fgithub.com\u002FThioJoe\u002FAuto-Synced-Translated-Dubs\u002Fissues\u002F47",{"id":119,"question_zh":120,"answer_zh":121,"source_url":122},1431,"使用 ElevenLabs API 时出现 400 错误怎么办？","错误通常是因为使用了语音名称（如 'Adam'）而非语音 ID。需要在配置文件中填入 ElevenLabs 的语音 ID（例如 en-US-JasonNeural 是 Azure 默认值，需改为 ElevenLabs 的 ID）。可以通过链接 https:\u002F\u002Fapi.elevenlabs.io\u002Fv1\u002Fvoices 查找正确的语音 ID。","https:\u002F\u002Fgithub.com\u002FThioJoe\u002FAuto-Synced-Translated-Dubs\u002Fissues\u002F84",{"id":124,"question_zh":125,"answer_zh":126,"source_url":127},1432,"`dont_translate_phrases.txt` 对印地语等印度语言不生效怎么办？","这是因为正则表达式无法正确处理 Unicode 字符。该问题已在最新版本 0.11.2 中修复。请确保更新到最新版软件以支持这些语言的跳过翻译功能。","https:\u002F\u002Fgithub.com\u002FThioJoe\u002FAuto-Synced-Translated-Dubs\u002Fissues\u002F52",{"id":129,"question_zh":130,"answer_zh":131,"source_url":132},1433,"运行脚本时报错 ImportError: cannot import name 'parseBool' from partially initialized module 'Scripts.utils' 如何解决？","这通常是由于循环导入引起的。建议先尝试更新最新的脚本版本。如果问题依旧，可以尝试将 `Scripts.util` 的导入语句移动到文件的最后一行。","https:\u002F\u002Fgithub.com\u002FThioJoe\u002FAuto-Synced-Translated-Dubs\u002Fissues\u002F69",{"id":134,"question_zh":135,"answer_zh":136,"source_url":137},1434,"为什么不能同时运行多种语言（用逗号分隔），只能一次运行一种？","批量合成（batch synthesis）仅在 Azure Standard 用户可用。如果你使用的是 Azure Student Plan，`batch_tts_synthesize` 会被设置为 False。请检查 `cloud_service_settings.ini` 中的 `tts_service` 和 `batch_tts_synthesize` 设置，并确保使用最新版本的软件以避免因 Bug 
导致音频不可播放。","https:\u002F\u002Fgithub.com\u002FThioJoe\u002FAuto-Synced-Translated-Dubs\u002Fissues\u002F23",{"id":139,"question_zh":140,"answer_zh":141,"source_url":142},1435,"执行程序时出现 KeyError: '1' (break_until_next') 错误怎么修复？","这是一个代码逻辑错误。维护者提供了一个修复后的 `main.py` 版本，请下载并替换项目中的文件：https:\u002F\u002Fgithub.com\u002FThioJoe\u002FAuto-Synced-Translated-Dubs\u002Fblob\u002Fmain\u002Fmain.py。","https:\u002F\u002Fgithub.com\u002FThioJoe\u002FAuto-Synced-Translated-Dubs\u002Fissues\u002F41",{"id":144,"question_zh":145,"answer_zh":146,"source_url":147},1436,"运行脚本时出现 Azure dll 找不到模块的错误 (FileNotFoundError: Could not find module...) 怎么办？","这是依赖库缺失问题。建议参考 StackOverflow 上的相关解决方案（搜索关键词：azure cognitiveservices speech core.dll）。此外，确保 Python 环境正确安装了所有必要的 Azure SDK 包及 Visual C++ 运行库。","https:\u002F\u002Fgithub.com\u002FThioJoe\u002FAuto-Synced-Translated-Dubs\u002Fissues\u002F30",[149,154,159,164,169,174,179,184,189,194,199,204,209,214,219,224,229,234,239,244],{"id":150,"version":151,"summary_zh":152,"released_at":153},100927,"v0.21.0","## 🛠️ [0.21.0] Fixes\r\n   * Updated calls to Azure TTS Batch API so it should work again with the latest API version\r\n   * Added check for division by zero when calculating speech rate goal\r\n\r\n## 📈 [0.21.0] New Features and Improvements\r\n   * Added `force_always_stretch` option to config, which will make it stretch the audio files after receiving from the TTS service even if it usually supports exact length, like Azure. 
Should come in handy if certain voices don't support specifying the exact length like some preview voices in Azure.\r\n\r\n## [0.21.0] Other Changes\r\n   * Some behind the scene changes to type hinting and enums instead of hard coded string references to improve reliability when making changes in the future","2025-01-28T01:07:27",{"id":155,"version":156,"summary_zh":157,"released_at":158},100928,"v0.20.1","## 📈 [0.20.0] New Features and Improvements\r\n   * New config option `increase_max_chars_for_extreme_speeds` which will temporarily increase the maximum number of characters per line for lines that have extreme speaking speeds, so they are more likely to be combined with lower speed subtitle\r\n   * Added a check while building the audio file to alert the user if any clips might run too long and possibly overlap with the next clip so you can check the file after generation.\r\n   * The script will now auto calculate the average character-rate of the entire video and use that as the char rate goal.\r\n      * The user can change this behavior in the config file with the \"speech_rate_goal\" setting if they want.\r\n      * This just means the speech speed might be slightly more natural on average\r\n   * Now displays message to tell user that it's downloading batch audio files\r\n   * Now when in debug mode, it will put out a proper completed subtitle file, and a separate debug version of the file.\r\n   * Fixed a long standing un-noticed bug that was causing combined subtitle lines to be less than ideally optimized\r\n      * Speech speeds should be significantly improved on average across all languages\r\n   * Addressed some odd punctuation issues where the resulting subtitles would begin with a period or comma, have double punctuation, etc.\r\n\r\n### 🟢 0.20.0 → 0.20.1:\r\n- Updated Azure voices list link in batch.ini\r\n- Updated requirements.txt with correct soundfile version and added 
aiohttp","2024-02-12T16:12:22",{"id":160,"version":161,"summary_zh":162,"released_at":163},100929,"v0.20.0","## 📈 [0.20.0] New Features and Improvements\r\n   * New config option `increase_max_chars_for_extreme_speeds` which will temporarily increase the maximum number of characters per line for lines that have extreme speaking speeds, so they are more likely to be combined with lower speed subtitle\r\n   * Added a check while building the audio file to alert the user if any clips might run too long and possibly overlap with the next clip so you can check the file after generation.\r\n   * The script will now auto calculate the average character-rate of the entire video and use that as the char rate goal.\r\n      * The user can change this behavior in the config file with the \"speech_rate_goal\" setting if they want.\r\n      * This just means the speech speed might be slightly more natural on average\r\n   * Now displays message to tell user that it's downloading batch audio files\r\n   * Now when in debug mode, it will put out a proper completed subtitle file, and a separate debug version of the file.\r\n   * Fixed a long standing un-noticed bug that was causing combined subtitle lines to be less than ideally optimized\r\n      * Speech speeds should be significantly improved on average across all languages\r\n   * Addressed some odd punctuation issues where the resulting subtitles would begin with a period or comma, have double punctuation, etc.","2024-01-29T02:24:14",{"id":165,"version":166,"summary_zh":167,"released_at":168},100930,"v0.19.0","## 📈 [0.19.0] New Features and Improvements\r\n- #### Reduced fragmented TTS speech thanks to new logic in the subtitle combining function\r\n   - Controlled by new option in `config.ini` called `prioritize_avoiding_fragmented_speech`. 
On by default but can be disabled.\r\n   - This will reduce the amount of times the TTS voice will sound like it's starting a new sentence even though it's in the middle of a sentence\r\n- #### Add config option `subtitle_gap_threshold_milliseconds`, an advanced setting for controlling the largest gap between subtitle line timestamps where the script will consider combining them.\r\n- `SubtitleTrackRemover.py` now has the option to also remove a videos' localizations (aka translated titles and descriptions)\r\n- Now the script will also combine subtitles and output an srt file, even if the original language is the same as the target language.","2024-01-21T00:52:55",{"id":170,"version":171,"summary_zh":172,"released_at":173},100931,"v0.18.1","## 📝 [0.18.0] Major Fix \u002F Improvement: \r\n* ### **Fix Fragmented Translations - Now Much Higher Quality Translations**\r\n  - Until now the script was really only able to translate each line of subtitles individually. This often resulted in extremely fragmented and awkward sentences, and poor translator word choice because of lack of context\r\n  - Now, a huge amount (if not all) of the subtitles will be fed to the translator as a _single string_ so the translator will have full context and will have no fragmented sentences\r\n  - On a technical level, this took some weird workarounds where I put HTML tag markers at the end of each subtitle line before combining it into the string, such that the translator wouldn't remove them after the fact, so they could be used to put the exact part of the translation back in the correct subtitle timestamp\r\n  - This might work better with DeepL, which has more native support for this technique. Google Translate took some extra janky workarounds so if you have trouble with it let me know and I can try to further improve it.\r\n\r\n\r\n## 📈 [0.18.0] Other Improvements\r\n- #### Added `TranscriptTranslator.py` to the tools folder. 
This simply lets you translate a basic transcript file as opposed to a subtitle file into multiple languages.\r\n\r\n### 🟢 0.18.0 → 0.18.1:\r\n- Fixed error caused by translator sometimes returning blank lines in some spots","2024-01-20T00:38:52",{"id":175,"version":176,"summary_zh":177,"released_at":178},100932,"v0.18.0","## 📝 [0.18.0] Major Fix \u002F Improvement: \r\n* ### **Fix Fragmented Translations - Now Much Higher Quality Translations**\r\n  - Until now the script was really only able to translate each line of subtitles individually. This often resulted in extremely fragmented and awkward sentences, and poor translator word choice because of lack of context\r\n  - Now, a huge amount (if not all) of the subtitles will be fed to the translator as a _single string_ so the translator will have full context and will have no fragmented sentences\r\n  - On a technical level, this took some weird workarounds where I put HTML tag markers at the end of each subtitle line before combining it into the string, such that the translator wouldn't remove them after the fact, so they could be used to put the exact part of the translation back in the correct subtitle timestamp\r\n  - This might work better with DeepL, which has more native support for this technique. Google Translate took some extra janky workarounds so if you have trouble with it let me know and I can try to further improve it.\r\n\r\n\r\n## 📈 [0.18.0] Other Improvements\r\n- #### Added `TranscriptTranslator.py` to the tools folder. This simply lets you translate a basic transcript file as opposed to a subtitle file into multiple languages.","2024-01-19T23:40:17",{"id":180,"version":181,"summary_zh":182,"released_at":183},100933,"v0.17.3","## 🎉 [0.17.0] New Features: \r\n* ### **Support for Eleven Labs voice synthesis**\r\n \t- You can now use Eleven Labs for voice synthesis. 
Simply use the 'Voice ID' of the voice as the voice name\r\n \t- Note that Eleven Labs doesn't support SSML so you don't have as much control\r\n \t- Be sure to update your `cloud_service_settings.ini` file with the latest version\r\n* ### **New Default Audio Stretching: FFMPEG**\r\n \t- I've found that FFMPEG actually results in much higher quality stretched audio than Rubberband, so I've added that and set it as default\r\n \t- You can set it manually with the config option `local_audio_stretch_method`\r\n \t- The FFMPEG result is actually pretty close to Two Pass Synthesis so that might not be necessary unless you really want\r\n \t- Be sure to update your `config.ini` file with the latest version\r\n\r\n### 0.17.0 → 0.17.2:\r\n- Now provides some more detail when ElevenLabs returns an error and stops the script from proceeding. Also checks if ElevenLabs API key is set when that TTS service is chosen\r\n### 🟢 0.17.2 → 0.17.3:\r\n- Improved error messaging for ElevenLabs when an invalid voice is set\r\n- Fixed unnecessary extra error messaging from a test line that I forgot to remove\r\n- Consolidated lots of code in the translation functions, which will make it easier to add additional translation services","2024-01-15T02:00:26",{"id":185,"version":186,"summary_zh":187,"released_at":188},100934,"v0.17.2","## 🎉 [0.17.0] New Features: \r\n* ### **Support for Eleven Labs voice synthesis**\r\n \t- You can now use Eleven Labs for voice synthesis. 
Simply use the 'Voice ID' of the voice as the voice name\r\n \t- Note that Eleven Labs doesn't support SSML so you don't have as much control\r\n \t- Be sure to update your `cloud_service_settings.ini` file with the latest version\r\n* ### **New Default Audio Stretching: FFMPEG**\r\n \t- I've found that FFMPEG actually results in much higher quality stretched audio than Rubberband, so I've added that and set it as default\r\n \t- You can set it manually with the config option `local_audio_stretch_method`\r\n \t- The FFMPEG result is actually pretty close to Two Pass Synthesis so that might not be necessary unless you really want\r\n \t- Be sure to update your `config.ini` file with the latest version\r\n\r\n### 🟢 0.17.0 → 0.17.2:\r\n- Now provides some more detail when ElevenLabs returns an error and stops the script from proceeding. Also checks if ElevenLabs API key is set when that TTS service is chosen","2024-01-12T18:57:27",{"id":190,"version":191,"summary_zh":192,"released_at":193},100935,"v0.17.0","## 🎉 [0.17.0] New Features: \r\n* ### **Support for Eleven Labs voice synthesis**\r\n \t- You can now use Eleven Labs for voice synthesis. 
Simply use the 'Voice ID' of the voice as the voice name\r\n \t- Note that Eleven Labs doesn't support SSML so you don't have as much control\r\n \t- Be sure to update your `cloud_service_settings.ini` file with the latest version\r\n* ### **New Default Audio Stretching: FFMPEG**\r\n \t- I've found that FFMPEG actually results in much higher quality stretched audio than Rubberband, so I've added that and set it as default\r\n \t- You can set it manually with the config option `local_audio_stretch_method`\r\n \t- The FFMPEG result is actually pretty close to Two Pass Synthesis so that might not be necessary unless you really want\r\n \t- Be sure to update your `config.ini` file with the latest version\r\n","2024-01-11T18:20:13",{"id":195,"version":196,"summary_zh":197,"released_at":198},100936,"v0.16.0","## 🎉 [0.16.0] New Features: \r\n* ### **Added support for Rubberband on MacOS**\r\n \t- You can now use the MacOS version of the rubberband binaries (downloaded from the same page as the Windows ones).\r\n \t- Running the script on MacOS should now have feature parity between Windows and MacOS\r\n\r\n## [0.16.0] 📈 Improvements:\r\n- #### Fixed bug on MacOS preventing it from running because it was trying to import a Windows sound module\r\n- #### Fixed bug where in a few places, byte files were being opened with UTF-8 encoding","2024-01-06T18:20:21",{"id":200,"version":201,"summary_zh":202,"released_at":203},100937,"v0.15.0","## 🎉 [0.15.0] Major New Features: \r\n* ### **Use YouTube-Translated and Synced Subtitles**\r\n\t- Through use of the new tool YouTube_Synced_Translations_Downloader.py, you can have YouTube automatically translate the subtitles of the original language file, including keeping the timestamps synced.\r\n\t- Though this requires you to use YouTube's system for the translation (which presumably is Google Translate), it should eliminate translation errors arising from the translation of sentences that had been split across subtitle 
lines.\r\n\t- It uses the same language list found in batch.ini used elsewhere.\r\n\t- Added Translated Transcription Uploader in tools folder:  `TranscriptAutoSyncUploader.py` - Allows the user to upload pre-translated transcriptions for a YouTube video. There should already be a native transcript \u002F subtitles for the video. YouTube will sync the translated subtitles to the correct times.\r\n\r\n## 📝 [0.15.0] Other New Features: \r\n - Title Translator now reads translation modifications from the SSML_Customization folder, such as manual translations and the dont_translate_phrases list\r\n - Script now plays default system sound upon completion","2023-12-14T22:00:13",{"id":205,"version":206,"summary_zh":207,"released_at":208},100938,"v0.14.1","## 🎉 [0.14.0] Major New Features: \r\n* ### **Synthesizing with Azure no longer requires two passes**\r\n\t- Azure now supports a new SSML tag that lets you specify the exact desired duration of an audio clip before it is processed. This eliminates the need to do a second pass to correct the speed.\r\n\t- This should make the overall process much faster, because it requires half the API calls, and no stretching of audio, all while keeping the highest voice quality.\r\n\t- **NOTE:** It seems that at the moment some of the tags the scripts use (namely `Leading-exact` and `Tailing-exact`), which determine the amount of silence at the beginning and end of a clip, do not function perfectly. So there may be a slight amount of extra silence between some sentences (probably around 60ms). This shouldn't be a dramatic amount, but if it bothers you, in the meantime you can go back to using the 0.13.1 release which uses the two-pass technique. 
I've submitted [a question](https:\u002F\u002Flearn.microsoft.com\u002Fen-us\u002Fanswers\u002Fquestions\u002F1193641\u002Fazure-tts-problem-inconsistent-ssml-tag-functional) about this and it has been forwarded to a team at Microsoft, so hopefully they will fix it soon.\r\n\r\n### 🟢 0.14.0 → 0.14.1:\r\n - Fixed sys import error for several tool scripts. \r\n - Slightly improved message displayed while waiting for Azure batch process","2023-03-28T19:38:46",{"id":210,"version":211,"summary_zh":212,"released_at":213},100939,"v0.14.0","## 🎉 [0.14.0] Major New Features: \r\n* ### **Synthesizing with Azure no longer requires two passes**\r\n\t- Azure now supports a new SSML tag that lets you specify the exact desired duration of an audio clip before it is processed. This eliminates the need to do a second pass to correct the speed.\r\n\t- This should make the overall process much faster, because it requires half the API calls, and no stretching of audio, all while keeping the highest voice quality.\r\n\t- **NOTE:** It seems that at the moment some of the tags the scripts use (namely `Leading-exact` and `Tailing-exact`), which determine the amount of silence at the beginning and end of a clip, do not function perfectly. So there may be a slight amount of extra silence between some sentences (probably around 60ms). This shouldn't be a dramatic amount, but if it bothers you, in the meantime you can go back to using the 0.13.1 release which uses the two-pass technique. I've submitted [a question](https:\u002F\u002Flearn.microsoft.com\u002Fen-us\u002Fanswers\u002Fquestions\u002F1193641\u002Fazure-tts-problem-inconsistent-ssml-tag-functional) about this and it has been forwarded to a team at Microsoft, so hopefully they will fix it soon.","2023-03-28T17:55:22",{"id":215,"version":216,"summary_zh":217,"released_at":218},100940,"v0.13.1","## 🎉 [0.13.0] New Features: \r\n* ### **Support for Azure Exact Comma Pause Time**\r\n\t- New config setting `azure_comma_pause`. 
Similar to `azure_sentence_pause`, this allows you to set the exact pause time of the voice after a comma.\r\n\r\n### 🟢 0.13.0 → 0.13.1:\r\n - Moved Google Authentication to auth.py. Fixed logic so it won't try to authorize if neither Google Translate nor Google TTS will be used.\r\n - Fixed case where the target and original language are the same, so it doesn't send a translate request to the cloud","2023-03-28T00:33:27",{"id":220,"version":221,"summary_zh":222,"released_at":223},100941,"v0.13.0","## 🎉 [0.13.0] New Features: \r\n* ### **Support for Azure Exact Comma Pause Time**\r\n\t- New config setting `azure_comma_pause`. Similar to `azure_sentence_pause`, this allows you to set the exact pause time of the voice after a comma.","2023-03-26T22:26:48",{"id":225,"version":226,"summary_zh":227,"released_at":228},100942,"v0.12.0","## 🎉 [0.12.0] New Features: \r\n* ### **New Tool: Subtitle Track Remover**\r\n\t- Found in Tools folder as `SubtitleTrackRemover.py`\r\n\t- Fetches the subtitle tracks of a video, and allows the user to select which ones to delete. Good if needing to replace a subtitle track with an updated one.\r\n\r\n## 📈 [0.12.0] Other Improvements\r\n- #### Created new folders called `Scripts` and `Tools` for better organization. Tools are standalone while Scripts are used as part of the main program. 
The configs stay in the main root directory along with `main.py`.\r\n- #### Script now shows progress for how many out of the total languages have been processed so far","2023-03-26T21:05:42",{"id":230,"version":231,"summary_zh":232,"released_at":233},100943,"v0.11.2","## [0.11.0] 📈 Improvements:\r\n* ### **Importing of Pre-Translated SRT Files When Translation is Skipped**\r\n\t- Now when `skip_translation` is enabled in the config, the script will instead search for pre-existing translated srt files in the relevant output folder for each language and load it in.\r\n\t- Therefore you can use `stop_after_translation`, then inspect the translated subtitle files before they are synthesized. Then when you are ready, run the script with `skip_translation` enabled.\r\n\r\n### 0.11.0 → 0.11.1:\r\n-  Now if skip_translation is enabled, the `translate_service` setting doesn't matter. So no translation service is needed if only synthesizing voice.\r\n- Cleaned up and consolidated many variables for easier future changes\r\n### 🟢 0.11.1 → 0.11.2:\r\n-  Fixed the text replacement functionality (like dont translate) for languages using unicode characters like Hindi ( #52 )","2023-03-26T04:39:26",{"id":235,"version":236,"summary_zh":237,"released_at":238},100944,"v0.11.1","## [0.11.0] 📈 Improvements:\r\n* ### **Importing of Pre-Translated SRT Files When Translation is Skipped**\r\n\t- Now when `skip_translation` is enabled in the config, the script will instead search for pre-existing translated srt files in the relevant output folder for each language and load it in.\r\n\t- Therefore you can use `stop_after_translation`, then inspect the translated subtitle files before they are synthesized. Then when you are ready, run the script with `skip_translation` enabled.\r\n\r\n### 🟢 0.11.0 → 0.11.1:\r\n-  Now if skip_translation is enabled, the `translate_service` setting doesn't matter. 
So no translation service is needed if only synthesizing voice.\r\n- Cleaned up and consolidated many variables for easier future changes","2023-03-26T03:54:37",{"id":240,"version":241,"summary_zh":242,"released_at":243},100945,"v0.11.0","## 📈 [0.11.0] Improvements:\r\n* ### **Importing of Pre-Translated SRT Files When Translation is Skipped**\r\n\t- Now when `skip_translation` is enabled in the config, the script will instead search for pre-existing translated srt files in the relevant output folder for each language and load it in.\r\n\t- Therefore you can use `stop_after_translation`, then inspect the translated subtitle files before they are synthesized. Then when you are ready, run the script with `skip_translation` enabled.","2023-03-26T00:03:03",{"id":245,"version":246,"summary_zh":247,"released_at":248},100946,"v0.10.0","## 🎉 [0.10.0] New Features: \r\n* ### **Manual Word Translations**\r\n\t- Allows you to set manual translations of chosen words or phrases. See `READ_THIS.txt` file in `SSML_Customization` folder.\r\n\r\n* ### **Phonetic Pronunciation**\r\n\t- Added ability to specify exact phonetic pronunciation of chosen words or phrases. See `READ_THIS.txt` file in `SSML_Customization` folder.\r\n\r\n* ### **URL Handling**\r\n\t- Added ability to list URLs that might be included in the text, so that it will be better pronounced in the TTS stage. See `READ_THIS.txt` file in `SSML_Customization` folder.\r\n\r\n## 📈 [0.10.0] Other Improvements\r\n- #### Fixed bug that resulted in Google Translate and DeepL translating words that were set to not be translated.","2023-01-27T21:21:30"]