[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-astramind-ai--Auralis":3,"tool-astramind-ai--Auralis":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",150037,2,"2026-04-10T23:33:47",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 
人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":76,"owner_twitter":76,"owner_website":77,"owner_url":78,"languages":79,"stars":88,"forks":89,"last_commit_at":90,"license":91,"difficulty_score":10,"env_os":92,"env_gpu":93,"env_ram":94,"env_deps":95,"category_tags":103,"github_topics":105,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":109,"updated_at":110,"faqs":111,"releases":142},6509,"astramind-ai\u002FAuralis","Auralis","A Fast TTS Engine","Auralis 是一款专注于极速生成的开源文本转语音（TTS）引擎，旨在将文字高效转化为自然流畅的语音，并支持声音克隆功能。它主要解决了传统 TTS 工具处理长文本时速度慢、耗时久以及对参考音频质量要求苛刻的痛点。借助智能批处理技术，Auralis 能在普通消费级显卡上运行，甚至能以约 0.02 倍的实时因子，在短短几分钟内完成整本小说的语音合成，同时自动增强低质量录音的清晰度并降低背景噪音。\n\n这款工具非常适合开发者、研究人员以及需要批量制作有声内容的内容创作者使用。它不仅提供了简洁易用的 Python API 和兼容 OpenAI 标准的服务器接口，方便集成到各类应用中，还支持流式输出长文本和多请求并行处理。其独特的技术亮点包括对 XTTSv2 模型的深度优化、自动语言检测、内置音频预处理（如静音修剪、音量标准化），以及允许用户轻松转换并使用自定义微调模型的能力。无论是构建实时语音助手，还是大规模生产有声书，Auralis 都能以极高的效率和优异的音质满足需求。","[![](https:\u002F\u002Fdcbadge.limes.pink\u002Fapi\u002Fserver\u002FBEMVTmcPEs)](https:\u002F\u002Fdiscord.gg\u002FBEMVTmcPEs)\n\n# Auralis 🌌 (\u002Fauˈralis\u002F)\n\nTransform text into natural speech (with voice cloning) at warp speed. Process an entire novel in minutes, not hours.\n\n## What is Auralis? 🚀\n\nAuralis is a text-to-speech engine that makes voice generation practical for real-world use:\n\n- Convert the entire first Harry Potter book to speech in 10 minutes (**realtime factor of ≈ 0.02x!**)\n- Automatically enhance the reference audio quality; you can register speakers even with a low-quality mic!\n- It can be configured to have a small memory footprint (scheduler_max_concurrency)\n- Process multiple requests simultaneously\n- Stream long texts piece by piece\n\n## Quick Start ⭐\n\n1. Create a new Conda environment:\n   ```bash\n   conda create -n auralis_env python=3.10 -y\n   ```\n\n2. Activate the environment:\n   ```bash\n   conda activate auralis_env\n   ```\n\n3. 
Install Auralis:\n   ```bash\n   pip install auralis\n   ```\n\nThen you can try it out via **python**:\n\n```python\nfrom auralis import TTS, TTSRequest\n\n# Initialize\ntts = TTS().from_pretrained(\"AstraMindAI\u002Fxttsv2\", gpt_model='AstraMindAI\u002Fxtts2-gpt')\n\n# Generate speech\nrequest = TTSRequest(\n    text=\"Hello Earth! This is Auralis speaking.\",\n    speaker_files=['reference.wav']\n)\n\noutput = tts.generate_speech(request)\noutput.save('hello.wav')\n```\n\nor via the **cli**, using the OpenAI-compatible server:\n```commandline\nauralis.openai --host 127.0.0.1 --port 8000 --model AstraMindAI\u002Fxttsv2 --gpt_model AstraMindAI\u002Fxtts2-gpt --max_concurrency 8 --vllm_logging_level warn\n```\nSee [here](https:\u002F\u002Fgithub.com\u002Fastramind-ai\u002FAuralis\u002Ftree\u002Fmain\u002Fdocs\u002FUSING_OAI_SERVER.md) for a more in-depth explanation, or try it out with this [example](https:\u002F\u002Fgithub.com\u002Fastramind-ai\u002FAuralis\u002Ftree\u002Fmain\u002Fexamples\u002Fuse_openai_server.py).\n\n## Key Features 🛸\n\n### Speed & Efficiency\n- Processes long texts rapidly using smart batching\n- Runs on consumer GPUs without memory issues\n- Handles multiple requests in parallel\n\n### Easy Integration\n- Simple Python API\n- Streaming support for long texts\n- Built-in audio enhancement\n- Automatic language detection\n\n### Audio Quality\n- Voice cloning from short samples\n- Background noise reduction\n- Speech clarity enhancement\n- Volume normalization\n\n## XTTSv2 Finetunes\n\nYou can use your own XTTSv2 finetunes by simply converting them from the standard Coqui checkpoint format to our safetensors format. Use [this script](https:\u002F\u002Fgithub.com\u002Fastramind-ai\u002FAuralis\u002Fblob\u002Fmain\u002Fsrc\u002Fauralis\u002Fmodels\u002Fxttsv2\u002Futils\u002Fcheckpoint_converter.py):\n```commandline\npython checkpoint_converter.py path\u002Fto\u002Fcheckpoint.pth --output_dir path\u002Fto\u002Foutput\n```\n\nIt will create two folders, one with the core XTTSv2 checkpoint and one with the gpt2 component. Then create a TTS instance with:\n```python\ntts = TTS().from_pretrained(\"some\u002Fcore-xttsv2_model\", gpt_model='some\u002Fxttsv2-gpt_model')\n```\n\n## Examples & Usage 🚀\n\n### Basic Examples ⭐\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Simple Text Generation\u003C\u002Fb>\u003C\u002Fsummary>\n\n```python\nfrom auralis import TTS, TTSRequest\n\n# Initialize\ntts = TTS().from_pretrained(\"AstraMindAI\u002Fxttsv2\", gpt_model='AstraMindAI\u002Fxtts2-gpt')\n# Basic generation\nrequest = TTSRequest(\n    text=\"Hello Earth! 
This is Auralis speaking.\",\n    speaker_files=[\"speaker.wav\"]\n)\noutput = tts.generate_speech(request)\noutput.save(\"hello.wav\")\n```\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Working with TTSRequest\u003C\u002Fb> 🎤\u003C\u002Fsummary>\n\n```python\n# Basic request\nrequest = TTSRequest(\n    text=\"Hello world!\",\n    speaker_files=[\"speaker.wav\"]\n)\n\n# Enhanced audio processing\nrequest = TTSRequest(\n    text=\"Pristine audio quality\",\n    speaker_files=[\"speaker.wav\"],\n    audio_config=AudioPreprocessingConfig(\n        normalize=True,\n        trim_silence=True,\n        enhance_speech=True,\n        enhance_amount=1.5\n    )\n)\n\n# Language-specific request\nrequest = TTSRequest(\n    text=\"Bonjour le monde!\",\n    speaker_files=[\"speaker.wav\"],\n    language=\"fr\"\n)\n\n# Streaming configuration\nrequest = TTSRequest(\n    text=\"Very long text...\",\n    speaker_files=[\"speaker.wav\"],\n    stream=True,\n)\n\n# Generation parameters\nrequest = TTSRequest(\n    text=\"Creative variations\",\n    speaker_files=[\"speaker.wav\"],\n    temperature=0.8,\n    top_p=0.9,\n    top_k=50\n)\n```\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Working with TTSOutput\u003C\u002Fb> 🎧\u003C\u002Fsummary>\n\n```python\n# Load audio file\noutput = TTSOutput.from_file(\"input.wav\")\n\n# Format conversion\noutput.bit_depth = 32\noutput.channel = 2\ntensor_audio = output.to_tensor()\naudio_bytes = output.to_bytes()\n\n\n\n# Audio processing\nresampled = output.resample(target_sr=44100)\nfaster = output.change_speed(1.5)\nnum_samples, sample_rate, duration = output.get_info()\n\n# Combine multiple outputs\ncombined = TTSOutput.combine_outputs([output1, output2, output3])\n\n# Playback and saving\noutput.play()  # Play audio\noutput.preview()  # Smart playback (Jupyter\u002Fsystem)\noutput.save(\"processed.wav\", sample_rate=44100)\n```\n\u003C\u002Fdetails>\n\n### Synchronous Advanced Examples 🌟\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Batch Text Processing\u003C\u002Fb>\u003C\u002Fsummary>\n\n```python\n# Process multiple texts with same voice\ntexts = [\"First paragraph.\", \"Second paragraph.\", \"Third paragraph.\"]\nrequests = [\n    TTSRequest(\n        text=text,\n        speaker_files=[\"speaker.wav\"]\n    ) for text in texts\n]\n\n# Sequential processing with progress\noutputs = []\nfor i, req in enumerate(requests, 1):\n    print(f\"Processing text {i}\u002F{len(requests)}\")\n    outputs.append(tts.generate_speech(req))\n\n# Combine all outputs\ncombined = TTSOutput.combine_outputs(outputs)\ncombined.save(\"combined_output.wav\")\n```\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Book Chapter Processing\u003C\u002Fb>\u003C\u002Fsummary>\n\n```python\ndef process_book(chapter_file: str, speaker_file: str):\n    # Read chapter\n    with open(chapter_file, 'r') as f:\n        chapter = f.read()\n    \n    # You can pass the whole book, auralis will take care of splitting\n    \n    request = TTSRequest(\n            text=chapter,\n            speaker_files=[speaker_file],\n            audio_config=AudioPreprocessingConfig(\n                enhance_speech=True,\n                normalize=True\n            )\n        )\n        \n    output = tts.generate_speech(request)\n    \n    output.play()\n    output.save(\"chapter_output.wav\")\n```\n\u003C\u002Fdetails>\n\n### Asynchronous Examples 🛸\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Basic Async Generation\u003C\u002Fb>\u003C\u002Fsummary>\n\n```python\nimport 
asyncio\nfrom auralis import TTS, TTSRequest\n\nasync def generate_speech():\n    tts = TTS().from_pretrained(\"AstraMindAI\u002Fxttsv2\", gpt_model='AstraMindAI\u002Fxtts2-gpt')\n    \n    request = TTSRequest(\n        text=\"Async generation example\",\n        speaker_files=[\"speaker.wav\"]\n    )\n    \n    output = await tts.generate_speech_async(request)\n    output.save(\"async_output.wav\")\n\nasyncio.run(generate_speech())\n```\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Parallel Processing\u003C\u002Fb>\u003C\u002Fsummary>\n\n```python\nasync def generate_parallel():\n    tts = TTS().from_pretrained(\"AstraMindAI\u002Fxttsv2\", gpt_model='AstraMindAI\u002Fxtts2-gpt')\n    \n    # Create multiple requests\n    requests = [\n        TTSRequest(\n            text=f\"This is voice {i}\",\n            speaker_files=[f\"speaker_{i}.wav\"]\n        ) for i in range(3)\n    ]\n    \n    # Process in parallel\n    coroutines = [tts.generate_speech_async(req) for req in requests]\n    outputs = await asyncio.gather(*coroutines, return_exceptions=True)\n    \n    # Handle results\n    valid_outputs = [\n        out for out in outputs \n        if not isinstance(out, Exception)\n    ]\n    \n    combined = TTSOutput.combine_outputs(valid_outputs)\n    combined.save(\"parallel_output.wav\")\n\nasyncio.run(generate_parallel())\n```\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Async Streaming with Multiple Requests\u003C\u002Fb>\u003C\u002Fsummary>\n\n```python\nasync def stream_multiple_texts():\n    tts = TTS().from_pretrained(\"AstraMindAI\u002Fxttsv2\", gpt_model='AstraMindAI\u002Fxtts2-gpt')\n    \n    # Prepare streaming requests\n    texts = [\n        \"First long text...\",\n        \"Second long text...\",\n        \"Third long text...\"\n    ]\n    \n    requests = [\n        TTSRequest(\n            text=text,\n            speaker_files=[\"speaker.wav\"],\n            stream=True,\n        ) for text in texts\n    ]\n    \n    # Process streams in parallel\n    coroutines = [tts.generate_speech_async(req) for req in requests]\n    streams = await asyncio.gather(*coroutines)\n    \n    # Collect outputs\n    output_container = {i: [] for i in range(len(requests))}\n    \n    async def process_stream(idx, stream):\n        async for chunk in stream:\n            output_container[idx].append(chunk)\n            print(f\"Processed chunk for text {idx+1}\")\n            \n    # Process all streams\n    await asyncio.gather(\n        *(process_stream(i, stream) \n          for i, stream in enumerate(streams))\n    )\n    \n    # Save results\n    for idx, chunks in output_container.items():\n        TTSOutput.combine_outputs(chunks).save(\n            f\"text_{idx}_output.wav\"\n        )\n\nasyncio.run(stream_multiple_texts())\n```\n\u003C\u002Fdetails>\n\n\n## Core Classes 🌟\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>TTSRequest\u003C\u002Fb> - Unified request container with audio enhancement 🎤\u003C\u002Fsummary>\n\n```python\n@dataclass\nclass TTSRequest:\n    \"\"\"Container for TTS inference request data\"\"\"\n    # Request metadata\n    text: Union[AsyncGenerator[str, None], str, List[str]]\n\n    speaker_files: Union[List[str], bytes]  # Path to the speaker audio file\n\n    enhance_speech: bool = True\n    audio_config: AudioPreprocessingConfig = field(default_factory=AudioPreprocessingConfig)\n    language: SupportedLanguages = \"auto\"\n    request_id: str = field(default_factory=lambda: uuid.uuid4().hex)\n    load_sample_rate: int = 22050\n 
   sound_norm_refs: bool = False\n\n    # Voice conditioning parameters\n    max_ref_length: int = 60\n    gpt_cond_len: int = 30\n    gpt_cond_chunk_len: int = 4\n\n    # Generation parameters\n    stream: bool = False\n    temperature: float = 0.75\n    top_p: float = 0.85\n    top_k: int = 50\n    repetition_penalty: float = 5.0\n    length_penalty: float = 1.0\n    do_sample: bool = True\n```\n\n### Examples\n\n```python\n# Basic usage\nrequest = TTSRequest(\n    text=\"Hello world!\",\n    speaker_files=[\"reference.wav\"]\n)\n\n# With custom audio enhancement\nrequest = TTSRequest(\n    text=\"Hello world!\",\n    speaker_files=[\"reference.wav\"],\n    audio_config=AudioPreprocessingConfig(\n        normalize=True,\n        trim_silence=True,\n        enhance_speech=True,\n        enhance_amount=1.5\n    )\n)\n\n# Streaming long text\nrequest = TTSRequest(\n    text=\"Very long text...\",\n    speaker_files=[\"reference.wav\"],\n    stream=True,\n)\n```\n\n### Features\n- Automatic language detection\n- Audio preprocessing & enhancement\n- Flexible input handling (strings, lists, generators)\n- Configurable generation parameters\n- Caching for efficient processing\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>TTSOutput\u003C\u002Fb> - Unified output container for audio processing 🎧\u003C\u002Fsummary>\n\n```python\n@dataclass\nclass TTSOutput:\n    array: np.ndarray\n    sample_rate: int\n```\n\n### Methods\n\n#### Format Conversion\n```python\noutput.to_tensor()      # → torch.Tensor\noutput.to_bytes()       # → bytes (wav\u002Fraw)\noutput.from_tensor()    # → TTSOutput\noutput.from_file()      # → TTSOutput\n```\n\n#### Audio Processing\n```python\noutput.combine_outputs()  # Combine multiple outputs\noutput.resample()        # Change sample rate\noutput.get_info()        # Get audio properties\noutput.change_speed()    # Modify playback speed\n```\n\n#### File & Playback\n```python\noutput.save()           # Save to file\noutput.play()          # Play audio\noutput.display()       # Show in Jupyter\noutput.preview()       # Smart playback\n```\n\n### Examples\n\n```python\n# Load and process\noutput = TTSOutput.from_file(\"input.wav\")\noutput = output.resample(target_sr=44100)\noutput.save(\"output.wav\")\n\n# Combine multiple outputs\ncombined = TTSOutput.combine_outputs([output1, output2, output3])\n\n# Change playback speed\nfaster = output.change_speed(1.5)\n```\n\n\u003C\u002Fdetails>\n\n\n## Languages 🌍\n\nXTTSv2 Supports: English, Spanish, French, German, Italian, Portuguese, Polish, Turkish, Russian, Dutch, Czech, Arabic, Chinese (Simplified), Hungarian, Korean, Japanese, Hindi\n\n## Performance Details 📊\n\nProcessing speeds on NVIDIA 3090:\n- Short phrases (\u003C 100 chars): ~1 second\n- Medium texts (\u003C 1000 chars): ~5-10 seconds\n- Full books (~500K chars @ concurrency 36): ~10 minutes\n\nMemory usage:\n- Base: ~2.5GB VRAM concurrency = 1\n- ~ 5.3GB VRAM concurrency = 20\n\n\n## Gradio\n\n[Gradio code](https:\u002F\u002Fgithub.com\u002Fastramind-ai\u002FAuralis\u002Fblob\u002Fmain\u002Fexamples\u002Fgradio_example.py)\n\n![Auralis](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fastramind-ai_Auralis_readme_e0c808f5c17d.png)\n\n\n## Contributions\n\n**Join Our Community!**\n\nWe welcome and appreciate any contributions to our project! To ensure a smooth and efficient process, please take a moment to review our [Contribution Guideline](https:\u002F\u002Fgithub.com\u002Fastramind-ai\u002FAuralis\u002Fblob\u002Fmain\u002FCONTRIBUTING.md). 
By following these guidelines, you'll help us review and accept your contribution quickly. Thank you for your support!\n\n\n## Learn More 🔭\n\n- [Technical Deep Dive](https:\u002F\u002Fwww.astramind.ai\u002Fpost\u002Fauralis)\n- [Adding Custom Models](https:\u002F\u002Fgithub.com\u002Fastramind-ai\u002FAuralis\u002Fblob\u002Fmain\u002Fdocs\u002Fadvanced\u002Fadding-models.md)\n\n## License\n\nThe codebase is released under Apache 2.0; feel free to use it in your projects.\n\nThe XTTSv2 model (and the files under auralis\u002Fmodels\u002Fxttsv2\u002Fcomponents\u002Ftts) are licensed under the [Coqui AI License](https:\u002F\u002Fcoqui.ai\u002Fcpml).\n","[![](https:\u002F\u002Fdcbadge.limes.pink\u002Fapi\u002Fserver\u002FBEMVTmcPEs)](https:\u002F\u002Fdiscord.gg\u002FBEMVTmcPEs)\n\n# Auralis 🌌 (\u002Fauˈralis\u002F)\n\n以超高速将文本转换为自然语音（支持语音克隆）。只需几分钟即可处理整部小说，而无需耗费数小时。\n\n## 什么是Auralis？🚀\n\nAuralis是一款文本转语音引擎，使语音生成在实际应用中更加可行：\n\n- 在10分钟内将《哈利·波特》第一本书全文转换为语音（**实时因子约为0.02x！**）\n- 自动提升参考音频质量，即使使用低质量麦克风录制的音频也能进行注册！\n- 可配置为占用较小内存（scheduler_max_concurrency）\n- 同时处理多个请求\n- 支持长文本分段流式传输\n\n## 快速入门⭐\n\n1. 创建一个新的Conda环境：\n   ```bash\n   conda create -n auralis_env python=3.10 -y\n   ```\n\n2. 激活环境：\n   ```bash\n   conda activate auralis_env\n   ```\n\n3. 安装Auralis：\n   ```bash\n   pip install auralis\n   ```\n\n然后您可以通过 **python** 来试用：\n\n```python\nfrom auralis import TTS, TTSRequest\n\n# 初始化\ntts = TTS().from_pretrained(\"AstraMindAI\u002Fxttsv2\", gpt_model='AstraMindAI\u002Fxtts2-gpt')\n\n# 生成语音\nrequest = TTSRequest(\n    text=\"你好，地球！这里是Auralis在说话。\",\n    speaker_files=['reference.wav']\n)\n\noutput = tts.generate_speech(request)\noutput.save('hello.wav')\n```\n\n或者通过兼容OpenAI协议的服务器使用 **cli**：\n```commandline\nauralis.openai --host 127.0.0.1 --port 8000 --model AstraMindAI\u002Fxttsv2 --gpt_model AstraMindAI\u002Fxtts2-gpt --max_concurrency 8 --vllm_logging_level warn\n```\n您可以访问 [这里](https:\u002F\u002Fgithub.com\u002Fastramind-ai\u002FAuralis\u002Ftree\u002Fmain\u002Fdocs\u002FUSING_OAI_SERVER.md) 查看更详细的说明，或尝试这个 [示例](https:\u002F\u002Fgithub.com\u002Fastramind-ai\u002FAuralis\u002Ftree\u002Fmain\u002Fexamples\u002Fuse_openai_server.py)。\n\n## 核心功能 🛸\n\n### 速度与效率\n- 使用智能批处理快速处理长文本\n- 可在消费级GPU上运行，不会出现内存问题\n- 支持多请求并行处理\n\n### 易于集成\n- 简单的Python API\n- 支持长文本流式传输\n- 内置音频增强功能\n- 自动语言检测\n\n### 音质\n- 从短样本进行语音克隆\n- 背景降噪\n- 提升语音清晰度\n- 音量归一化\n\n## XTTSv2微调模型\n\n您可以直接使用自己的XTTSv2微调模型，只需将其从标准的Coqui检查点格式转换为我们的Safetensors格式。请使用 [此脚本](https:\u002F\u002Fgithub.com\u002Fastramind-ai\u002FAuralis\u002Fblob\u002Fmain\u002Fsrc\u002Fauralis\u002Fmodels\u002Fxttsv2\u002Futils\u002Fcheckpoint_converter.py)：\n```commandline\npython checkpoint_converter.py path\u002Fto\u002Fcheckpoint.pth --output_dir path\u002Fto\u002Foutput\n```\n\n它会生成两个文件夹，一个包含核心XTTSv2检查点，另一个包含GPT2组件。然后您可以创建一个TTS实例：\n```python\ntts = TTS().from_pretrained(\"some\u002Fcore-xttsv2_model\", gpt_model='some\u002Fxttsv2-gpt_model')\n```\n\n## 示例与用法🚀\n\n### 基础示例⭐\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>简单文本生成\u003C\u002Fb>\u003C\u002Fsummary>\n\n```python\nfrom auralis import TTS, TTSRequest\n\n# 初始化\ntts = TTS().from_pretrained(\"AstraMindAI\u002Fxttsv2\", gpt_model='AstraMindAI\u002Fxtts2-gpt')\n# 基本生成\nrequest = TTSRequest(\n    text=\"你好，地球！这里是Auralis在说话。\",\n    speaker_files=[\"speaker.wav\"]\n)\noutput = 
tts.generate_speech(request)\noutput.save(\"hello.wav\")\n```\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>使用TTSRequest\u003C\u002Fb>🎤\u003C\u002Fsummary>\n\n```python\n# 基本请求\nrequest = TTSRequest(\n    text=\"Hello world!\",\n    speaker_files=[\"speaker.wav\"]\n)\n\n# 增强音频处理\nrequest = TTSRequest(\n    text=\"纯净的音质\",\n    speaker_files=[\"speaker.wav\"],\n    audio_config=AudioPreprocessingConfig(\n        normalize=True,\n        trim_silence=True,\n        enhance_speech=True,\n        enhance_amount=1.5\n    )\n)\n\n# 语言特定请求\nrequest = TTSRequest(\n    text=\"Bonjour le monde!\",\n    speaker_files=[\"speaker.wav\"],\n    language=\"fr\"\n)\n\n# 流式传输配置\nrequest = TTSRequest(\n    text=\"非常长的文本...\",\n    speaker_files=[\"speaker.wav\"],\n    stream=True,\n)\n\n# 生成参数\nrequest = TTSRequest(\n    text=\"创意变体\",\n    speaker_files=[\"speaker.wav\"],\n    temperature=0.8,\n    top_p=0.9,\n    top_k=50\n)\n```\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>使用TTSOutput\u003C\u002Fb>🎧\u003C\u002Fsummary>\n\n```python\n# 加载音频文件\noutput = TTSOutput.from_file(\"input.wav\")\n\n# 格式转换\noutput.bit_depth = 32\noutput.channel = 2\ntensor_audio = output.to_tensor()\naudio_bytes = output.to_bytes()\n\n# 音频处理\nresampled = output.resample(target_sr=44100)\nfaster = output.change_speed(1.5)\nnum_samples, sample_rate, duration = output.get_info()\n\n# 组合多个输出\ncombined = TTSOutput.combine_outputs([output1, output2, output3])\n\n# 播放和保存\noutput.play()  # 播放音频\noutput.preview()  # 智能播放（Jupyter\u002F系统）\noutput.save(\"processed.wav\", sample_rate=44100)\n```\n\u003C\u002Fdetails>\n\n### 同步高级示例🌟\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>批量文本处理\u003C\u002Fb>\u003C\u002Fsummary>\n\n```python\n# 用同一声音处理多段文本\ntexts = [\"第一段。\", \"第二段。\", \"第三段。\"]\nrequests = [\n    TTSRequest(\n        text=text,\n        speaker_files=[\"speaker.wav\"]\n    ) for text in texts\n]\n\n# 顺序处理并显示进度\noutputs = []\nfor i, req in enumerate(requests, 1):\n    print(f\"正在处理第{i}\u002F{len(requests)}段\")\n    outputs.append(tts.generate_speech(req))\n\n# 组合所有输出\ncombined = TTSOutput.combine_outputs(outputs)\ncombined.save(\"combined_output.wav\")\n```\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>处理书籍章节\u003C\u002Fb>\u003C\u002Fsummary>\n\n```python\ndef process_book(chapter_file: str, speaker_file: str):\n    # 读取章节\n    with open(chapter_file, 'r') as f:\n        chapter = f.read()\n    \n    # 您可以传入整本书，Auralis会自动分段处理\n    \n    request = TTSRequest(\n            text=chapter,\n            speaker_files=[speaker_file],\n            audio_config=AudioPreprocessingConfig(\n                enhance_speech=True,\n                normalize=True\n            )\n        )\n        \n    output = tts.generate_speech(request)\n    \n    output.play()\n    output.save(\"chapter_output.wav\")\n```\n\u003C\u002Fdetails>\n\n### 异步示例 🛸\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>基本异步生成\u003C\u002Fb>\u003C\u002Fsummary>\n\n```python\nimport asyncio\nfrom auralis import TTS, TTSRequest\n\nasync def generate_speech():\n    tts = TTS().from_pretrained(\"AstraMindAI\u002Fxttsv2\", gpt_model='AstraMindAI\u002Fxtts2-gpt')\n    \n    request = TTSRequest(\n        text=\"Async generation example\",\n        speaker_files=[\"speaker.wav\"]\n    )\n    \n    output = await tts.generate_speech_async(request)\n    output.save(\"async_output.wav\")\n\nasyncio.run(generate_speech())\n```\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>并行处理\u003C\u002Fb>\u003C\u002Fsummary>\n\n```python\nasync def 
generate_parallel():\n    tts = TTS().from_pretrained(\"AstraMindAI\u002Fxttsv2\", gpt_model='AstraMindAI\u002Fxtts2-gpt')\n    \n    # 创建多个请求\n    requests = [\n        TTSRequest(\n            text=f\"This is voice {i}\",\n            speaker_files=[f\"speaker_{i}.wav\"]\n        ) for i in range(3)\n    ]\n    \n    # 并行处理\n    coroutines = [tts.generate_speech_async(req) for req in requests]\n    outputs = await asyncio.gather(*coroutines, return_exceptions=True)\n    \n    # 处理结果\n    valid_outputs = [\n        out for out in outputs \n        if not isinstance(out, Exception)\n    ]\n    \n    combined = TTSOutput.combine_outputs(valid_outputs)\n    combined.save(\"parallel_output.wav\")\n\nasyncio.run(generate_parallel())\n```\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>多请求异步流式传输\u003C\u002Fb>\u003C\u002Fsummary>\n\n```python\nasync def stream_multiple_texts():\n    tts = TTS().from_pretrained(\"AstraMindAI\u002Fxttsv2\", gpt_model='AstraMindAI\u002Fxtts2-gpt')\n    \n    # 准备流式请求\n    texts = [\n        \"First long text...\",\n        \"Second long text...\",\n        \"Third long text...\"\n    ]\n    \n    requests = [\n        TTSRequest(\n            text=text,\n            speaker_files=[\"speaker.wav\"],\n            stream=True,\n        ) for text in texts\n    ]\n    \n    # 并行处理流\n    coroutines = [tts.generate_speech_async(req) for req in requests]\n    streams = await asyncio.gather(*coroutines)\n    \n    # 收集输出\n    output_container = {i: [] for i in range(len(requests))}\n    \n    async def process_stream(idx, stream):\n        async for chunk in stream:\n            output_container[idx].append(chunk)\n            print(f\"Processed chunk for text {idx+1}\")\n            \n    # 处理所有流\n    await asyncio.gather(\n        *(process_stream(i, stream) \n          for i, stream in enumerate(streams))\n    )\n    \n    # 保存结果\n    for idx, chunks in output_container.items():\n        TTSOutput.combine_outputs(chunks).save(\n            f\"text_{idx}_output.wav\"\n        )\n\nasyncio.run(stream_multiple_texts())\n```\n\u003C\u002Fdetails>\n\n\n## 核心类 🌟\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>TTSRequest\u003C\u002Fb> - 统一的请求容器，带音频增强功能 🎤\u003C\u002Fsummary>\n\n```python\n@dataclass\nclass TTSRequest:\n    \"\"\"TTS 推理请求数据的容器\"\"\"\n    # 请求元数据\n    text: Union[AsyncGenerator[str, None], str, List[str]]\n\n    speaker_files: Union[List[str], bytes]  # 发声人音频文件路径\n\n    enhance_speech: bool = True\n    audio_config: AudioPreprocessingConfig = field(default_factory=AudioPreprocessingConfig)\n    language: SupportedLanguages = \"auto\"\n    request_id: str = field(default_factory=lambda: uuid.uuid4().hex)\n    load_sample_rate: int = 22050\n    sound_norm_refs: bool = False\n\n    # 声音调节参数\n    max_ref_length: int = 60\n    gpt_cond_len: int = 30\n    gpt_cond_chunk_len: int = 4\n\n    # 生成参数\n    stream: bool = False\n    temperature: float = 0.75\n    top_p: float = 0.85\n    top_k: int = 50\n    repetition_penalty: float = 5.0\n    length_penalty: float = 1.0\n    do_sample: bool = True\n```\n\n### 示例\n\n```python\n# 基本用法\nrequest = TTSRequest(\n    text=\"Hello world!\",\n    speaker_files=[\"reference.wav\"]\n)\n\n# 自定义音频增强\nrequest = TTSRequest(\n    text=\"Hello world!\",\n    speaker_files=[\"reference.wav\"],\n    audio_config=AudioPreprocessingConfig(\n        normalize=True,\n        trim_silence=True,\n        enhance_speech=True,\n        enhance_amount=1.5\n    )\n)\n\n# 流式传输长文本\nrequest = TTSRequest(\n    text=\"Very long text...\",\n    
speaker_files=[\"reference.wav\"],\n    stream=True,\n)\n```\n\n### 特性\n- 自动语言检测\n- 音频预处理与增强\n- 灵活的输入处理（字符串、列表、生成器）\n- 可配置的生成参数\n- 缓存以提高处理效率\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>TTSOutput\u003C\u002Fb> - 统一的音频处理输出容器 🎧\u003C\u002Fsummary>\n\n```python\n@dataclass\nclass TTSOutput:\n    array: np.ndarray\n    sample_rate: int\n```\n\n### 方法\n\n#### 格式转换\n```python\noutput.to_tensor()      # → torch.Tensor\noutput.to_bytes()       # → bytes (wav\u002Fraw)\noutput.from_tensor()    # → TTSOutput\noutput.from_file()      # → TTSOutput\n```\n\n#### 音频处理\n```python\noutput.combine_outputs()  # 合并多个输出\noutput.resample()        # 更改采样率\noutput.get_info()        # 获取音频属性\noutput.change_speed()    # 修改播放速度\n```\n\n#### 文件与播放\n```python\noutput.save()           # 保存到文件\noutput.play()          # 播放音频\noutput.display()       # 在 Jupyter 中显示\noutput.preview()       # 智能播放\n```\n\n### 示例\n\n```python\n# 加载和处理\noutput = TTSOutput.from_file(\"input.wav\")\noutput = output.resample(target_sr=44100)\noutput.save(\"output.wav\")\n\n# 合并多个输出\ncombined = TTSOutput.combine_outputs([output1, output2, output3])\n\n# 改变播放速度\nfaster = output.change_speed(1.5)\n```\n\n\u003C\u002Fdetails>\n\n\n## 语言 🌍\n\nXTTSv2 支持：英语、西班牙语、法语、德语、意大利语、葡萄牙语、波兰语、土耳其语、俄语、荷兰语、捷克语、阿拉伯语、简体中文、匈牙利语、韩语、日语、印地语\n\n## 性能详情 📊\n\n在 NVIDIA 3090 上的处理速度：\n- 短句（\u003C 100 字）：~1 秒\n- 中等文本（\u003C 1000 字）：~5-10 秒\n- 完整书籍（~50 万字 @ 并发 36）：~10 分钟\n\n内存使用情况：\n- 基础：~2.5GB VRAM，并发 = 1\n- ~5.3GB VRAM，并发 = 20\n\n\n## Gradio\n\n[Gradio 代码](https:\u002F\u002Fgithub.com\u002Fastramind-ai\u002FAuralis\u002Fblob\u002Fmain\u002Fexamples\u002Fgradio_example.py)\n\n![Auralis](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fastramind-ai_Auralis_readme_e0c808f5c17d.png)\n\n\n## 贡献\n\n**加入我们的社区！**\n\n我们欢迎并感谢对本项目的所有贡献！为确保流程顺畅高效，请花点时间阅读我们的[贡献指南](https:\u002F\u002Fgithub.com\u002Fastramind-ai\u002FAuralis\u002Fblob\u002Fmain\u002FCONTRIBUTING.md)。遵循这些指南将有助于我们快速审核并接受您的贡献。感谢您的支持！\n\n\n## 了解更多 🔭\n\n- [技术深度解析](https:\u002F\u002Fwww.astramind.ai\u002Fpost\u002Fauralis)\n- [添加自定义模型](https:\u002F\u002Fgithub.com\u002Fastramind-ai\u002FAuralis\u002Fblob\u002Fmain\u002Fdocs\u002Fadvanced\u002Fadding-models.md)\n\n## 许可证\n\n代码库采用 Apache 2.0 许可证发布，您可以自由地在自己的项目中使用。\n\nXTTSv2 模型（以及 auralis\u002Fmodels\u002Fxttsv2\u002Fcomponents\u002Ftts 目录下的文件）采用 [Coqui AI 许可证](https:\u002F\u002Fcoqui.ai\u002Fcpml) 许可。","# Auralis 快速上手指南\n\nAuralis 是一款超高速文本转语音（TTS）引擎，支持声音克隆。它能在几分钟内处理整本小说，实时因子低至 0.02x，并具备自动音频增强功能，即使使用低质量麦克风录制的参考音也能生成高质量语音。\n\n## 环境准备\n\n*   **操作系统**：Linux \u002F Windows \u002F macOS\n*   **Python 版本**：3.10 (推荐)\n*   **硬件要求**：支持 CUDA 的 NVIDIA GPU（消费级显卡即可运行，显存占用优化良好）\n*   **前置依赖**：需安装 Conda 包管理器\n\n## 安装步骤\n\n1.  **创建 Conda 环境**\n    ```bash\n    conda create -n auralis_env python=3.10 -y\n    ```\n\n2.  **激活环境**\n    ```bash\n    conda activate auralis_env\n    ```\n\n3.  **安装 Auralis**\n    *注：若下载速度较慢，可配置 pip 使用国内镜像源（如清华源）。*\n    ```bash\n    pip install auralis -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n    ```\n\n## 基本使用\n\n### 方式一：Python API（推荐）\n\n这是最简单的集成方式，适用于脚本开发或应用后端。\n\n```python\nfrom auralis import TTS, TTSRequest\n\n# 1. 初始化模型\n# 加载预训练的 XTTSv2 模型及对应的 GPT 组件\ntts = TTS().from_pretrained(\"AstraMindAI\u002Fxttsv2\", gpt_model='AstraMindAI\u002Fxtts2-gpt')\n\n# 2. 构建请求\n# text: 要转换的文本\n# speaker_files: 参考音频文件路径（用于声音克隆），支持低质量录音\nrequest = TTSRequest(\n    text=\"你好地球！这里是 Auralis 在说话。\",\n    speaker_files=['reference.wav']\n)\n\n# 3. 
生成并保存语音\noutput = tts.generate_speech(request)\noutput.save('hello.wav')\n```\n\n### 方式二：命令行启动 OpenAI 兼容服务\n\n如果你需要搭建一个兼容 OpenAI 接口的 TTS 服务供其他应用调用：\n\n```commandline\nauralis.openai --host 127.0.0.1 --port 8000 --model AstraMindAI\u002Fxttsv2 --gpt_model AstraMindAI\u002Fxtts2-gpt --max_concurrency 8 --vllm_logging_level warn\n```\n\n启动后，即可通过标准的 OpenAI SDK 连接该服务进行语音合成。\n\n### 进阶提示\n*   **长文本处理**：`TTSRequest` 支持直接传入长文本，Auralis 会自动进行智能分块和流式处理。\n*   **音频增强**：默认开启音频预处理（去噪、归一化、清晰度增强），无需额外配置。\n*   **多语言**：自动检测输入文本语言（支持中文、英文、日文等十余种语言）。","一家小型有声书工作室急需将一本 20 万字的奇幻小说在 48 小时内转化为高质量音频，并需模仿主角的独特声线以保持一致性。\n\n### 没有 Auralis 时\n- **生成效率极低**：传统 TTS 引擎处理整本小说需要数天甚至数周，完全无法应对紧急交付需求。\n- **录音门槛高**：用于声音克隆的参考音频若带有背景噪音或设备一般，合成效果会严重失真，必须重新录制干净样本。\n- **硬件资源受限**：长文本处理极易占满消费级显卡显存，导致任务频繁崩溃，不得不分批手动切割文本。\n- **并发能力弱**：无法同时处理多个章节或不同角色的生成请求，只能串行排队，进一步拉长工期。\n\n### 使用 Auralis 后\n- **极速批量处理**：凭借约 0.02x 的实时因子，Auralis 能在 10 分钟内完成整本书的语音合成，轻松赶上截止期限。\n- **自动音频增强**：即使使用普通麦克风录制的参考音，Auralis 也能自动降噪并提升清晰度，完美复刻角色声线。\n- **显存优化稳定**：通过智能批处理和并发配置，Auralis 在普通消费者显卡上即可流畅运行，无需担心内存溢出。\n- **并行流式输出**：支持多请求并行处理与长文本流式生成，团队可同时推进多个章节制作，大幅缩短整体流程。\n\nAuralis 将原本需要数周的高成本有声书制作流程压缩至分钟级，让小型团队也能以极低成本实现电影级的语音产出。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fastramind-ai_Auralis_e0c808f5.png","astramind-ai","AstraMindAI","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fastramind-ai_b8320a68.png","",null,"https:\u002F\u002Fastramind.ai","https:\u002F\u002Fgithub.com\u002Fastramind-ai",[80,84],{"name":81,"color":82,"percentage":83},"Python","#3572A5",99.9,{"name":85,"color":86,"percentage":87},"Dockerfile","#384d54",0.1,619,48,"2026-04-08T07:46:16","NOASSERTION","未说明","需要 GPU（文中提及可在消费级 GPU 上运行且无内存问题），具体型号、显存大小及 CUDA 版本未说明","未说明（文中提及可配置为小内存占用）",{"notes":96,"python":97,"dependencies":98},"建议使用 Conda 创建虚拟环境。该工具基于 XTTSv2 模型，支持语音克隆和长文本流式处理。可通过 pip 直接安装。支持将自定义的 XTTSv2 检查点转换为 safetensor 格式使用。可配置并发数以控制内存占用。","3.10",[99,100,101,102],"auralis","torch (隐含)","transformers (隐含，基于 XTTSv2)","safetensors (隐含)",[104,14],"音频",[106,107,108],"tts","xttsv2","tts-serving","2026-03-27T02:49:30.150509","2026-04-11T10:03:12.224606",[112,117,122,127,132,137],{"id":113,"question_zh":114,"answer_zh":115,"source_url":116},29449,"如何在 Windows 上安装 Auralis？遇到 triton 版本找不到怎么办？","Auralis 目前主要支持 Linux 环境。Windows 用户无法直接通过 pip 安装（会报错找不到 triton==3.1.0），建议使用 WSL (Windows Subsystem for Linux) 或 Docker 进行安装和运行。如果在 WSL 中遇到 `libportaudio.so.2` 或 `GLIBCXX` 相关错误，可以尝试运行 `conda install -c conda-forge libstdcxx-ng` 来解决依赖问题。","https:\u002F\u002Fgithub.com\u002Fastramind-ai\u002FAuralis\u002Fissues\u002F3",{"id":118,"question_zh":119,"answer_zh":120,"source_url":121},29450,"生成中文语音时输出不可听或效果很差，如何解决？","如果生成的中文语音不可听或听起来像外国人说话，请尝试在 TTS 请求中显式指定语言参数为 `zh-cn`。例如：`language=\"zh-cn\"`。这可以避免语言自动检测错误导致的发音问题。如果更新版本后效果变差，可能需要检查是否使用了正确的分词器版本或回退到之前的稳定修改版。","https:\u002F\u002Fgithub.com\u002Fastramind-ai\u002FAuralis\u002Fissues\u002F1",{"id":123,"question_zh":124,"answer_zh":125,"source_url":126},29451,"使用微调后的 Coqui 模型在 Auralis 中推理效果变差或不相似，怎么办？","这是因为分词器（tokenizer）设置不匹配导致的。解决方法是将分词器设置调整为与模型版本一致的 token 数量（例如 6153 个 tokens），并将当前环境的 2.0.3 版本分词器替换为 2.0.0 版本的分词器。确保加载模型时使用的分词器路径和配置与原训练环境一致。","https:\u002F\u002Fgithub.com\u002Fastramind-ai\u002FAuralis\u002Fissues\u002F27",{"id":128,"question_zh":129,"answer_zh":130,"source_url":131},29452,"调用 generate_speech 函数时出现显存泄漏（OOM）错误，如何解决？","显存未释放通常是因为缺少上下文管理器。请在调用生成代码时包裹 `with torch.no_grad():` 块，并在生成结束后手动调用 `torch.cuda.empty_cache()`。示例代码如下：\n```python\nwith torch.no_grad():\n    output = tts.generate_speech(request)\n    output.save(output_path, 
format=\"mp3\")\ntorch.cuda.empty_cache()\n```\n此外，建议移除硬编码的并发设置，让 Auralis 自行管理并发以避免显存碎片化。","https:\u002F\u002Fgithub.com\u002Fastramind-ai\u002FAuralis\u002Fissues\u002F37",{"id":133,"question_zh":134,"answer_zh":135,"source_url":136},29453,"导入 TTS 模块时报 TypeError 或与 numpy\u002Fnumba 版本相关的错误，如何处理？","这类错误通常由依赖包版本冲突引起。最有效的解决方法是创建一个全新的虚拟环境（如 conda env 或 venv），并重新安装所有依赖包。确保不要混用不同项目的包版本。如果问题依旧，请检查系统中 numpy 和 numba 的具体版本兼容性，全新环境通常能解决此类隐性的依赖冲突。","https:\u002F\u002Fgithub.com\u002Fastramind-ai\u002FAuralis\u002Fissues\u002F14",{"id":138,"question_zh":139,"answer_zh":140,"source_url":141},29454,"加载转换后的 XTTSv2 模型时报 'size mismatch' 错误，原因是什么？","该错误表明检查点文件中的参数量（如 text_embedding.weight 形状为 [6153, 1024]）与当前模型架构预期的参数量（如 [6681, 1024]）不匹配。这通常是因为模型训练时使用的词汇表大小与推理时加载的默认词汇表大小不一致。请确保在转换模型或初始化推理引擎时，使用了与训练时完全相同的分词器（tokenizer）和词汇表配置。","https:\u002F\u002Fgithub.com\u002Fastramind-ai\u002FAuralis\u002Fissues\u002F43",[143,148,153,158],{"id":144,"version":145,"summary_zh":146,"released_at":147},198305,"v0.2.8.post2","修复了同步生成方法中的一个错误，该错误曾导致死锁。\n\n**完整更新日志**：https:\u002F\u002Fgithub.com\u002Fastramind-ai\u002FAuralis\u002Fcompare\u002F0.2.8.post1...v0.2.8.post2","2024-12-16T15:45:47",{"id":149,"version":150,"summary_zh":151,"released_at":152},198306,"0.2.8.post1","## 变更内容\n* 移除了在单独线程中初始化第二个循环的逻辑，这有助于改善内存管理以及同步和异步并发请求的处理。\n* 杂项（Dockerfile）：移除 langid，由 @kwaa 在 https:\u002F\u002Fgithub.com\u002Fastramind-ai\u002FAuralis\u002Fpull\u002F40 中完成。\n\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fastramind-ai\u002FAuralis\u002Fcompare\u002F0.2.8...0.2.8.post1","2024-12-15T16:13:20",{"id":154,"version":155,"summary_zh":156,"released_at":157},198307,"0.2.8","## 变更内容\n* 文档字符串和 MkDocs 文档，由 @prassanna-ravishankar 在 https:\u002F\u002Fgithub.com\u002Fastramind-ai\u002FAuralis\u002Fpull\u002F32 中完成\n\n## 新贡献者\n* @prassanna-ravishankar 在 https:\u002F\u002Fgithub.com\u002Fastramind-ai\u002FAuralis\u002Fpull\u002F32 中完成了首次贡献\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fastramind-ai\u002FAuralis\u002Fcompare\u002F0.2.7.post1...0.2.8","2024-12-14T17:02:27",{"id":159,"version":160,"summary_zh":161,"released_at":162},198308,"0.2.7.post1","## 变更内容\n* 功能：添加 Dockerfile，由 @kwaa 在 https:\u002F\u002Fgithub.com\u002Fastramind-ai\u002FAuralis\u002Fpull\u002F26 中完成\n\n## 新贡献者\n* @kwaa 在 https:\u002F\u002Fgithub.com\u002Fastramind-ai\u002FAuralis\u002Fpull\u002F26 中完成了首次贡献\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fastramind-ai\u002FAuralis\u002Fcommits\u002F0.2.7.post1","2024-12-09T10:39:01"]