[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-asteroid-team--asteroid":3,"tool-asteroid-team--asteroid":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",158594,2,"2026-04-16T23:34:05",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 
人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":72,"owner_avatar_url":73,"owner_bio":74,"owner_company":74,"owner_location":74,"owner_email":74,"owner_twitter":74,"owner_website":74,"owner_url":75,"languages":76,"stars":85,"forks":86,"last_commit_at":87,"license":88,"difficulty_score":32,"env_os":89,"env_gpu":90,"env_ram":89,"env_deps":91,"category_tags":104,"github_topics":106,"view_count":32,"oss_zip_url":74,"oss_zip_packed_at":74,"status":17,"created_at":114,"updated_at":115,"faqs":116,"releases":147},8186,"asteroid-team\u002Fasteroid","asteroid","The PyTorch-based audio source separation toolkit for researchers","Asteroid 是一个基于 PyTorch 构建的音频源分离工具包，专为加速科研实验而设计。它的核心使命是解决音频处理领域中“从混合声音中分离出独立声源”这一难题，例如在嘈杂环境中提取特定人声或乐器音轨。通过提供统一的代码框架，Asteroid 极大地降低了复现前沿学术论文的门槛，让研究人员无需从零搭建环境即可快速验证想法。\n\n这款工具主要面向音频领域的研究人员、算法工程师及开发者。它内置了对多种主流数据集的支持，并封装了如 ConvTasNet 等经典模型的训练“食谱”（recipes），用户只需简单配置即可重现重要研究成果。其技术亮点在于灵活的架构设计和对“排列不变训练”（PIT）等关键技术的原生支持，同时提供了处理大音频文件的实用教程。作为一个社区驱动的项目，Asteroid 鼓励用户贡献代码、报告问题或分享新特性，旨在成为连接理论研究与工程实践的高效桥梁，帮助社群共同推动音频分离技术的发展。","\u003Cdiv align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fasteroid-team_asteroid_readme_257f2b521bfd.png\" width=\"50%\">\n\n**The PyTorch-based audio source separation toolkit for researchers.**\n\n[![PyPI Status](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fasteroid.svg)](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fasteroid)\n[![Build Status](https:\u002F\u002Fgithub.com\u002Fasteroid-team\u002Fasteroid\u002Fworkflows\u002FCI\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fasteroid-team\u002Fasteroid\u002Factions?query=workflow%3ACI+branch%3Amaster+event%3Apush)\n[![codecov][codecov-badge]][codecov]\n[![Code style: black](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fcode%20style-black-000000.svg)](https:\u002F\u002Fgithub.com\u002Fpsf\u002Fblack)\n[![Documentation Status](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdocs-0.7.0-blue)](https:\u002F\u002Fasteroid.readthedocs.io\u002Fen\u002Fv0.7.0\u002F)\n[![Latest Docs Status](https:\u002F\u002Fgithub.com\u002Fasteroid-team\u002Fasteroid\u002Fworkflows\u002FLatest%20docs\u002Fbadge.svg)](https:\u002F\u002Fasteroid-team.github.io\u002Fasteroid\u002F)\n\n\n[![PRs Welcome](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPRs-welcome-brightgreen.svg)](https:\u002F\u002Fgithub.com\u002Fasteroid-team\u002Fasteroid\u002Fpulls)\n[![Python 
Versions](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Fasteroid.svg)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fasteroid\u002F)\n[![PyPI Status](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fasteroid-team_asteroid_readme_f617c87ed07f.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fasteroid)\n[![Slack][slack-badge]][slack-invite]\n\n\u003C\u002Fdiv>\n\n--------------------------------------------------------------------------------\n\n\nAsteroid is a PyTorch-based audio source separation toolkit\nthat enables fast experimentation on common datasets.\nIt comes with source code that supports a large range\nof datasets and architectures, and a set of\nrecipes to reproduce some important papers.\n\n\n### Do you use Asteroid, or want to?\nIf you have found a bug, please [open an issue][issue];\nif you solved it, [open a pull request][pr]!\nThe same goes for new features: tell us what you want, or help us build it!\nDon't hesitate to [join the slack][slack-invite]\nand ask questions \u002F suggest new features there as well!\nAsteroid is intended to be a __community-based project__,\nso hop on and help us!\n## Contents\n- [Contents](#contents)\n- [Installation](#installation)\n- [Tutorials](#tutorials)\n- [Running a recipe](#running-a-recipe)\n- [Available recipes](#available-recipes)\n- [Supported datasets](#supported-datasets)\n- [Pretrained models](#pretrained-models)\n- [Contributing](#contributing)\n- [TensorBoard visualization](#tensorboard-visualization)\n- [Guiding principles](#guiding-principles)\n- [Citing Asteroid](#citing-asteroid)\n\n## Installation\n([↑up to contents](#contents))\nTo install Asteroid, clone the repo and install it using\nconda, pip or python:\n```bash\n# First clone and enter the repo\ngit clone https:\u002F\u002Fgithub.com\u002Fasteroid-team\u002Fasteroid\ncd asteroid\n```\n\n- With `pip`\n```bash\n# Install with pip in editable mode\npip install -e .\n# Or, install with python in dev mode\n# python setup.py develop\n```\n- With conda (if you don't already have conda, see [here][miniconda].)\n```bash\nconda env create -f environment.yml\nconda activate asteroid\n```\n\n- Asteroid is also on PyPI; you can install the latest release with\n```bash\npip install asteroid\n```\n\n## Tutorials\n([↑up to contents](#contents))\nHere is a list of notebooks showing example usage of Asteroid's features.\n- [Getting started with Asteroid](.\u002Fnotebooks\u002F00_GettingStarted.ipynb)\n- [Introduction and Overview](.\u002Fnotebooks\u002F01_APIOverview.ipynb)\n- [Filterbank API](.\u002Fnotebooks\u002F02_Filterbank.ipynb)\n- [Permutation invariant training wrapper `PITLossWrapper`](.\u002Fnotebooks\u002F03_PITLossWrapper.ipynb)\n- [Process large wav files](.\u002Fnotebooks\u002F04_ProcessLargeAudioFiles.ipynb)\n\n\n## Running a recipe\n([↑up to contents](#contents))\nRunning the recipes requires additional packages in most cases,\nso we recommend running:\n```bash\n# from asteroid\u002F\npip install -r requirements.txt\n```\nThen choose the recipe you want to run and run it!\n```bash\ncd egs\u002Fwham\u002FConvTasNet\n. 
.\u002Frun.sh\n```\nMore information in [egs\u002FREADME.md](.\u002Fegs).\n\n## Available recipes\n([↑up to contents](#contents))\n* [x] [ConvTasnet](.\u002Fegs\u002Fwham\u002FConvTasNet) ([Luo et al.](https:\u002F\u002Farxiv.org\u002Fabs\u002F1809.07454))\n* [x] [Tasnet](.\u002Fegs\u002Fwhamr\u002FTasNet) ([Luo et al.](https:\u002F\u002Farxiv.org\u002Fabs\u002F1711.00541))\n* [x] [Deep clustering](.\u002Fegs\u002Fwsj0-mix\u002FDeepClustering) ([Hershey et al.](https:\u002F\u002Farxiv.org\u002Fabs\u002F1508.04306) and [Isik et al.](https:\u002F\u002Farxiv.org\u002Fabs\u002F1607.02173))\n* [x] [Chimera ++](.\u002Fegs\u002Fwsj0-mix\u002FDeepClustering) ([Luo et al.](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.06265) and [Wang et al.](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8462507))\n* [x] [DualPathRNN](.\u002Fegs\u002Fwham\u002FDPRNN) ([Luo et al.](https:\u002F\u002Farxiv.org\u002Fabs\u002F1910.06379))\n* [x] [Two step learning](.\u002Fegs\u002Fwham\u002FTwoStep) ([Tzinis et al.](https:\u002F\u002Farxiv.org\u002Fabs\u002F1910.09804))\n* [x] [SudoRMRFNet](.\u002Fasteroid\u002Fmodels\u002Fsudormrf.py) ([Tzinis et al.](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.06833))\n* [x] [DPTNet](.\u002Fasteroid\u002Fmodels\u002Fdptnet.py) ([Chen et al.](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.13975))\n* [x] [DCCRNet](.\u002Fasteroid\u002Fmodels\u002Fdccrnet.py) ([Hu et al.](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.00264))\n* [x] [DCUNet](.\u002Fasteroid\u002Fmodels\u002Fdcunet.py) ([Choi et al.](https:\u002F\u002Farxiv.org\u002Fabs\u002F1903.03107))\n* [x] [CrossNet-Open-Unmix](.\u002Fasteroid\u002Fmodels\u002Fx_umx.py) ([Sawata et al.](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.04228))\n* [x] [Multi-Decoder DPRNN](.\u002Fegs\u002Fwsj0-mix-var\u002FMulti-Decoder-DPRNN) ([Zhu et al.](http:\u002F\u002Fwww.isle.illinois.edu\u002Fspeech_web_lg\u002Fpubs\u002F2021\u002Fzhu2021multi.pdf))\n* [ ] Open-Unmix (coming) ([Stöter et al.](https:\u002F\u002Fsigsep.github.io\u002Fopen-unmix\u002F))\n* [ ] Wavesplit (coming) ([Zeghidour et al.](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.08933))\n\n## Supported datasets\n([↑up to contents](#contents))\n* [x] [WSJ0-2mix](.\u002Fegs\u002Fwsj0-mix) \u002F WSJ03mix ([Hershey et al.](https:\u002F\u002Farxiv.org\u002Fabs\u002F1508.04306))\n* [x] [WHAM](.\u002Fegs\u002Fwham) ([Wichern et al.](https:\u002F\u002Farxiv.org\u002Fabs\u002F1907.01160))\n* [x] [WHAMR](.\u002Fegs\u002Fwhamr) ([Maciejewski et al.](https:\u002F\u002Farxiv.org\u002Fabs\u002F1910.10279))\n* [x] [LibriMix](.\u002Fegs\u002Flibrimix) ([Cosentino et al.](https:\u002F\u002Farxiv.org\u002Fabs\u002F2005.11262))\n* [x] [Microsoft DNS Challenge](.\u002Fegs\u002Fdns_challenge) ([Chandan et al.](https:\u002F\u002Farxiv.org\u002Fabs\u002F2001.08662))\n* [x] [SMS_WSJ](.\u002Fegs\u002Fsms_wsj) ([Drude et al.](https:\u002F\u002Farxiv.org\u002Fabs\u002F1910.13934))\n* [x] [MUSDB18](.\u002Fasteroid\u002Fdata\u002Fmusdb18_dataset.py) ([Rafii et al.](https:\u002F\u002Fhal.inria.fr\u002Fhal-02190845))\n* [x] [FUSS](.\u002Fasteroid\u002Fdata\u002Ffuss_dataset.py) ([Wisdom et al.](https:\u002F\u002Fzenodo.org\u002Frecord\u002F3694384#.XmUAM-lw3g4))\n* [x] [AVSpeech](.\u002Fasteroid\u002Fdata\u002Favspeech_dataset.py) ([Ephrat et al.](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.03619))\n* [x] [Kinect-WSJ](.\u002Fasteroid\u002Fdata\u002Fkinect_wsj.py) ([Sivasankaran et al.](https:\u002F\u002Fgithub.com\u002Fsunits\u002FReverberated_WSJ_2MIX))\n\n## Pretrained models\n([↑up to 
contents](#contents))\nSee [here](.\u002Fdocs\u002Fsource\u002Freadmes\u002Fpretrained_models.md)\n\n## Contributing\n([↑up to contents](#contents))\nWe are always looking to expand our coverage of source separation\nand speech enhancement research; the following is a list of\nthings we're missing.\nWant to contribute? This is a great place to start!\n* Wavesplit ([Zeghidour and Grangier](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.08933))\n* FurcaNeXt ([Shi et al.](https:\u002F\u002Farxiv.org\u002Fabs\u002F1902.04891))\n* DeepCASA ([Liu and Wang](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.11148))\n* VCTK Test sets from [Kadioglu et al.](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2002.08688.pdf)\n* Interrupted and cascaded PIT ([Yang et al.](https:\u002F\u002Farxiv.org\u002Fabs\u002F1910.12706))\n* ~Consistency constraints ([Wisdom et al.](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8682783))~\n* ~Backpropagable STOI and PESQ.~\n* Parametrized filterbanks from [Tukuljac et al.](https:\u002F\u002Fopenreview.net\u002Fforum?id=HyewT1BKvr)\n* ~End-to-End MISI ([Wang et al.](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.10204))~\n\n\nDon't forget to read our [contributing guidelines](.\u002FCONTRIBUTING.md).\n\nYou can also open an issue or make a PR to add something we missed in this list.\n\n## TensorBoard visualization\nThe default logger is TensorBoard in all the recipes. From the recipe folder,\nyou can run the following to visualize the logs of all your runs. You can\nalso compare different systems on the same dataset by running a similar command\nfrom the dataset directories.\n```bash\n# Launch tensorboard (default port is 6006)\ntensorboard --logdir exp\u002F --port tf_port\n```\nIf you're launching TensorBoard remotely, you should open an SSH tunnel\n```bash\n# Open a port-forwarding connection. Add the -Nf option to avoid opening a remote shell.\nssh -L local_port:localhost:tf_port user@ip\n```\nThen open `http:\u002F\u002Flocalhost:local_port\u002F`. If both ports are the same, you can\nclick on the TensorBoard URL given on the remote; it's just more practical.\n\n\n## Guiding principles\n([↑up to contents](#contents))\n* __Modularity.__ Building blocks are thought out and designed to be seamlessly\nplugged together. Filterbanks, encoders, maskers, decoders and losses are\nall common building blocks that can be combined in a\nflexible way to create new systems.\n* __Extensibility.__ Extending Asteroid with new features is simple.\nAdd a new filterbank, separator architecture, dataset or even recipe very\neasily.\n* __Reproducibility.__ Recipes provide an easy way to reproduce\nresults with data preparation, system design, training and evaluation in a\nsingle script. This is an essential tool for the community!\n\n## Citing Asteroid\n([↑up to contents](#contents))\nIf you love using Asteroid and want to cite us, use this:\n```BibTex\n@inproceedings{Pariente2020Asteroid,\n    title={Asteroid: the {PyTorch}-based audio source separation toolkit for researchers},\n    author={Manuel Pariente and Samuele Cornell and Joris Cosentino and Sunit Sivasankaran and\n            Efthymios Tzinis and Jens Heitkaemper and Michel Olvera and Fabian-Robert Stöter and\n            Mathieu Hu and Juan M. Martín-Doñas and David Ditter and Ariel Frank and Antoine Deleforge\n            and Emmanuel Vincent},\n    year={2020},\n    booktitle={Proc. 
Interspeech},\n}\n```\n\n[comment]: \u003C> (Badge)\n[miniconda]: https:\u002F\u002Fconda.io\u002Fminiconda.html\n[codecov-badge]: https:\u002F\u002Fcodecov.io\u002Fgh\u002Fasteroid-team\u002Fasteroid\u002Fbranch\u002Fmaster\u002Fgraph\u002Fbadge.svg\n[codecov]: https:\u002F\u002Fcodecov.io\u002Fgh\u002Fasteroid-team\u002Fasteroid\n[slack-badge]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fslack-chat-green.svg?logo=slack\n[slack-invite]: https:\u002F\u002Fjoin.slack.com\u002Ft\u002Fasteroid-dev\u002Fshared_invite\u002Fzt-cn9y85t3-QNHXKD1Et7qoyzu1Ji5bcA\n\n[comment]: \u003C> (Others)\n[issue]: https:\u002F\u002Fgithub.com\u002Fasteroid-team\u002Fasteroid\u002Fissues\u002Fnew\n[pr]: https:\u002F\u002Fgithub.com\u002Fasteroid-team\u002Fasteroid\u002Fcompare\n","\u003Cdiv align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fasteroid-team_asteroid_readme_257f2b521bfd.png\" width=\"50%\">\n\n**基于 PyTorch 的音频源分离工具包，专为研究人员设计。**\n\n[![PyPI 状态](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fasteroid.svg)](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fasteroid)\n[![构建状态](https:\u002F\u002Fgithub.com\u002Fasteroid-team\u002Fasteroid\u002Fworkflows\u002FCI\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fasteroid-team\u002Fasteroid\u002Factions?query=workflow%3ACI+branch%3Amaster+event%3Apush)\n[![codecov][codecov-badge]][codecov]\n[![代码风格：black](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fcode%20style-black-000000.svg)](https:\u002F\u002Fgithub.com\u002Fpsf\u002Fblack)\n[![文档状态](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdocs-0.7.0-blue)](https:\u002F\u002Fasteroid.readthedocs.io\u002Fen\u002Fv0.7.0\u002F)\n[![最新文档状态](https:\u002F\u002Fgithub.com\u002Fasteroid-team\u002Fasteroid\u002Fworkflows\u002FLatest%20docs\u002Fbadge.svg)](https:\u002F\u002Fasteroid-team.github.io\u002Fasteroid\u002F)\n\n\n[![欢迎提交 PR](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPRs-welcome-brightgreen.svg)](https:\u002F\u002Fgithub.com\u002Fasteroid-team\u002Fasteroid\u002Fpulls)\n[![Python 版本](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Fasteroid.svg)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fasteroid\u002F)\n[![PyPI 状态](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fasteroid-team_asteroid_readme_f617c87ed07f.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fasteroid)\n[![Slack][slack-badge]][slack-invite]\n\n\u003C\u002Fdiv>\n\n--------------------------------------------------------------------------------\n\n\nAsteroid 是一个基于 PyTorch 的音频源分离工具包，\n它支持在常见数据集上快速进行实验。\n该工具包包含支持多种数据集和架构的源代码，\n以及用于复现一些重要论文的配方。\n\n### 你正在使用 Asteroid 吗？或者想开始使用吗？\n如果你发现了 bug，请[提交 issue][issue]；\n如果你已经修复了它，请[提交 pull request][pr]！\n对于新功能也是如此——告诉我们你的需求，或者帮助我们一起实现！\n别犹豫，加入我们的 [Slack 频道][slack-invite]，\n在那里也可以提问或提出新功能建议！\nAsteroid 旨在成为一个__社区驱动的项目__，\n快来参与进来，一起帮助我们吧！\n## 目录\n- [目录](#contents)\n- [安装](#installation)\n- [教程](#tutorials)\n- [运行配方](#running-a-recipe)\n- [可用的配方](#available-recipes)\n- [支持的数据集](#supported-datasets)\n- [预训练模型](#pretrained-models)\n- [贡献](#contributing)\n- [TensorBoard 可视化](#tensorboard-visualization)\n- [指导原则](#guiding-principles)\n- [引用 Asteroid](#citing-asteroid)\n\n## 安装\n([↑返回目录](#contents))\n要安装 Asteroid，首先克隆仓库，然后使用 conda、pip 或 python 进行安装：\n```bash\n# 首先克隆并进入仓库\ngit clone https:\u002F\u002Fgithub.com\u002Fasteroid-team\u002Fasteroid\ncd asteroid\n```\n\n- 使用 `pip`\n```bash\n# 以可编辑模式使用 pip 安装\npip install -e .\n# 或者，使用 python 以开发模式安装\n# python setup.py develop\n```\n- 使用 conda（如果你还没有安装 conda，可以参考 [这里][miniconda]。）\n```bash\nconda env create 
-f environment.yml\nconda activate asteroid\n```\n\n- Asteroid 也发布在 PyPI 上，你可以通过以下命令安装最新版本：\n```bash\npip install asteroid\n```\n\n## 教程\n([↑返回目录](#contents))\n以下是展示 Asteroid 功能使用示例的笔记本列表。\n- [Asteroid 入门](.\u002Fnotebooks\u002F00_GettingStarted.ipynb)\n- [简介与概览](.\u002Fnotebooks\u002F01_APIOverview.ipynb)\n- [滤波器组 API](.\u002Fnotebooks\u002F02_Filterbank.ipynb)\n- [排列不变损失包装器 `PITLossWrapper`](.\u002Fnotebooks\u002F03_PITLossWrapper.ipynb)\n- [处理大型 WAV 文件](.\u002Fnotebooks\u002F04_ProcessLargeAudioFiles.ipynb)\n\n\n## 运行配方\n([↑返回目录](#contents))\n大多数情况下，运行配方需要额外的依赖包，\n我们建议执行以下命令来安装这些依赖：\n```bash\n# 从 asteroid\u002F 目录下\npip install -r requirements.txt\n```\n然后选择你想运行的配方并执行即可！\n```bash\ncd egs\u002Fwham\u002FConvTasNet\n. .\u002Frun.sh\n```\n更多详细信息请参阅 [egs\u002FREADME.md](.\u002Fegs)。\n\n## 可用的配方\n([↑返回目录](#contents))\n* [x] [ConvTasnet](.\u002Fegs\u002Fwham\u002FConvTasNet) ([Luo 等人](https:\u002F\u002Farxiv.org\u002Fabs\u002F1809.07454))\n* [x] [Tasnet](.\u002Fegs\u002Fwhamr\u002FTasNet) ([Luo 等人](https:\u002F\u002Farxiv.org\u002Fabs\u002F1711.00541))\n* [x] [深度聚类](.\u002Fegs\u002Fwsj0-mix\u002FDeepClustering) ([Hershey 等人](https:\u002F\u002Farxiv.org\u002Fabs\u002F1508.04306) 和 [Isik 等人](https:\u002F\u002Farxiv.org\u002Fabs\u002F1607.02173))\n* [x] [Chimera ++](.\u002Fegs\u002Fwsj0-mix\u002FDeepClustering) ([Luo 等人](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.06265) 和 [Wang 等人](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8462507))\n* [x] [DualPathRNN](.\u002Fegs\u002Fwham\u002FDPRNN) ([Luo 等人](https:\u002F\u002Farxiv.org\u002Fabs\u002F1910.06379))\n* [x] [两步学习](.\u002Fegs\u002Fwham\u002FTwoStep) ([Tzinis 等人](https:\u002F\u002Farxiv.org\u002Fabs\u002F1910.09804))\n* [x] [SudoRMRFNet](.\u002Fasteroid\u002Fmodels\u002Fsudormrf.py) ([Tzinis 等人](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.06833))\n* [x] [DPTNet](.\u002Fasteroid\u002Fmodels\u002Fdptnet.py) ([Chen 等人](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.13975))\n* [x] [DCCRNet](.\u002Fasteroid\u002Fmodels\u002Fdccrnet.py) ([Hu 等人](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.00264))\n* [x] [DCUNet](.\u002Fasteroid\u002Fmodels\u002Fdcunet.py) ([Choi 等人](https:\u002F\u002Farxiv.org\u002Fabs\u002F1903.03107))\n* [x] [CrossNet-Open-Unmix](.\u002Fasteroid\u002Fmodels\u002Fx_umx.py) ([Sawata 等人](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.04228))\n* [x] [多解码器 DPRNN](.\u002Fegs\u002Fwsj0-mix-var\u002FMulti-Decoder-DPRNN) ([Zhu 等人](http:\u002F\u002Fwww.isle.illinois.edu\u002Fspeech_web_lg\u002Fpubs\u002F2021\u002Fzhu2021multi.pdf))\n* [ ] Open-Unmix（即将推出）([Stöter 等人](https:\u002F\u002Fsigsep.github.io\u002Fopen-unmix\u002F))\n* [ ] Wavesplit（即将推出）([Zeghidour 等人](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.08933))\n\n## 支持的数据集\n([↑返回目录](#contents))\n* [x] [WSJ0-2mix](.\u002Fegs\u002Fwsj0-mix) \u002F WSJ03mix ([Hershey 等人](https:\u002F\u002Farxiv.org\u002Fabs\u002F1508.04306))\n* [x] [WHAM](.\u002Fegs\u002Fwham) ([Wichern 等人](https:\u002F\u002Farxiv.org\u002Fabs\u002F1907.01160))\n* [x] [WHAMR](.\u002Fegs\u002Fwhamr) ([Maciejewski 等人](https:\u002F\u002Farxiv.org\u002Fabs\u002F1910.10279))\n* [x] [LibriMix](.\u002Fegs\u002Flibrimix) ([Cosentino 等人](https:\u002F\u002Farxiv.org\u002Fabs\u002F2005.11262))\n* [x] [Microsoft DNS 挑战](.\u002Fegs\u002Fdns_challenge) ([Chandan 等人](https:\u002F\u002Farxiv.org\u002Fabs\u002F2001.08662))\n* [x] [SMS_WSJ](.\u002Fegs\u002Fsms_wsj) ([Drude 等人](https:\u002F\u002Farxiv.org\u002Fabs\u002F1910.13934))\n* [x] [MUSDB18](.\u002Fasteroid\u002Fdata\u002Fmusdb18_dataset.py) ([Rafii 
等人](https:\u002F\u002Fhal.inria.fr\u002Fhal-02190845))\n* [x] [FUSS](.\u002Fasteroid\u002Fdata\u002Ffuss_dataset.py) ([Wisdom 等人](https:\u002F\u002Fzenodo.org\u002Frecord\u002F3694384#.XmUAM-lw3g4))\n* [x] [AVSpeech](.\u002Fasteroid\u002Fdata\u002Favspeech_dataset.py) ([Ephrat 等人](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.03619))\n* [x] [Kinect-WSJ](.\u002Fasteroid\u002Fdata\u002Fkinect_wsj.py) ([Sivasankaran 等人](https:\u002F\u002Fgithub.com\u002Fsunits\u002FReverberated_WSJ_2MIX))\n\n## 预训练模型\n([↑返回目录](#contents))\n详情请参见 [此处](.\u002Fdocs\u002Fsource\u002Freadmes\u002Fpretrained_models.md)\n\n## 贡献\n([↑返回目录](#contents))\n我们始终致力于扩展对声源分离和语音增强研究的覆盖范围，以下是目前我们尚未涵盖的内容列表。\n你想贡献吗？这里是一个很好的起点！\n* Wavesplit ([Zeghidour 和 Grangier](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.08933))\n* FurcaNeXt ([Shi 等人](https:\u002F\u002Farxiv.org\u002Fabs\u002F1902.04891))\n* DeepCASA ([Liu 和 Wang](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.11148))\n* VCTK 测试集，来自 [Kadioglu 等人](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2002.08688.pdf)\n* 中断式和级联式 PIT ([Yang 等人](https:\u002F\u002Farxiv.org\u002Fabs\u002F1910.12706))\n* ~一致性约束 ([Wisdom 等人](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8682783))~\n* ~可微分的 STOI 和 PESQ。~\n* 来自 [Tukuljac 等人](https:\u002F\u002Fopenreview.net\u002Fforum?id=HyewT1BKvr) 的参数化滤波器组\n* ~端到端 MISI ([Wang 等人](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.10204))~\n\n\n别忘了阅读我们的[贡献指南](.\u002FCONTRIBUTING.md)。\n\n你也可以提交一个问题或拉取请求，以添加本列表中遗漏的内容。\n\n## TensorBoard 可视化\n所有配方的默认日志记录器都是 TensorBoard。从配方文件夹中，你可以运行以下命令来可视化所有实验的日志。你还可以通过从数据集目录运行类似的命令，在同一数据集上比较不同的系统。\n```bash\n# 启动 TensorBoard（默认端口为 6006）\ntensorboard --logdir exp\u002F --port tf_port\n```\n如果你在远程启动 TensorBoard，应该打开一个 SSH 隧道：\n```bash\n# 打开端口转发连接。使用 -Nf 选项以避免打开远程会话。\nssh -L local_port:localhost:tf_port user@ip\n```\n然后打开 `http:\u002F\u002Flocalhost:local_port\u002F`。如果两个端口相同，你也可以直接点击远程提供的 TensorBoard URL，这样更方便。\n\n## 指导原则\n([↑返回目录](#contents))\n* __模块化。__ 构建模块经过精心设计，可以无缝地组合在一起。滤波器组、编码器、掩码器、解码器和损失函数都是常见的构建模块，可以灵活组合以创建新的系统。\n* __可扩展性。__ 为 Asteroid 添加新功能非常简单。可以轻松添加新的滤波器组、分离架构、数据集，甚至配方。\n* __可重复性。__ 配方提供了一种简便的方式，通过一个脚本即可完成数据准备、系统设计、训练和评估，从而轻松复现结果。这对社区来说是一个至关重要的工具！\n\n## 引用 Asteroid\n([↑返回目录](#contents))\n如果你喜欢使用 Asteroid 并希望引用我们，请使用以下格式：\n```BibTex\n@inproceedings{Pariente2020Asteroid,\n    title={Asteroid：基于 PyTorch 的音频声源分离工具包，专为研究人员设计},\n    author={Manuel Pariente、Samuele Cornell、Joris Cosentino、Sunit Sivasankaran、Efthymios Tzinis、Jens Heitkaemper、Michel Olvera、Fabian-Robert Stöter、Mathieu Hu、Juan M. 
Martín-Doñas、David Ditter、Ariel Frank、Antoine Deleforge 和 Emmanuel Vincent},\n    year={2020},\n    booktitle={Interspeech 会议论文集},\n}\n```\n\n[注释]: \u003C> (徽章)\n[miniconda]: https:\u002F\u002Fconda.io\u002Fminiconda.html\n[codecov-badge]: https:\u002F\u002Fcodecov.io\u002Fgh\u002Fasteroid-team\u002Fasteroid\u002Fbranch\u002Fmaster\u002Fgraph\u002Fbadge.svg\n[codecov]: https:\u002F\u002Fcodecov.io\u002Fgh\u002Fasteroid-team\u002Fasteroid\n[slack-badge]: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fslack-chat-green.svg?logo=slack\n[slack-invite]: https:\u002F\u002Fjoin.slack.com\u002Ft\u002Fasteroid-dev\u002Fshared_invite\u002Fzt-cn9y85t3-QNHXKD1Et7qoyzu1Ji5bcA\n\n[注释]: \u003C> (其他)\n[issue]: https:\u002F\u002Fgithub.com\u002Fasteroid-team\u002Fasteroid\u002Fissues\u002Fnew\n[pr]: https:\u002F\u002Fgithub.com\u002Fasteroid-team\u002Fasteroid\u002Fcompare","# Asteroid 快速上手指南\n\nAsteroid 是一个基于 PyTorch 的音频源分离工具包，专为研究人员设计，支持多种数据集和架构，旨在实现快速的实验复现。\n\n## 环境准备\n\n在开始之前，请确保您的系统满足以下要求：\n\n*   **操作系统**: Linux, macOS 或 Windows\n*   **Python**: 3.7 及以上版本\n*   **核心依赖**: PyTorch (建议安装与您的 CUDA 版本匹配的 PyTorch)\n*   **包管理工具**: 推荐使用 `conda` 或 `pip`\n\n> **国内加速建议**：\n> 如果您在中国大陆，建议在安装依赖时配置国内镜像源以加速下载。\n> *   **Conda**: 使用清华源或中科大源。\n> *   **Pip**: 使用阿里云、清华或腾讯源 (例如：`pip install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple ...`)。\n\n## 安装步骤\n\n您可以选择以下任意一种方式进行安装：\n\n### 方法一：通过 PyPI 安装（推荐，适合直接使用）\n\n这是获取最新稳定 release 版本的最快方式。\n\n```bash\npip install asteroid\n# 国内用户建议使用：\n# pip install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple asteroid\n```\n\n### 方法二：源码安装（适合开发或修改代码）\n\n如果您需要修改源码或运行官方提供的完整实验脚本（recipes），建议克隆仓库并安装。\n\n```bash\n# 1. 克隆仓库\ngit clone https:\u002F\u002Fgithub.com\u002Fasteroid-team\u002Fasteroid\ncd asteroid\n\n# 2. 安装依赖 (运行 recipes 通常需要额外依赖)\npip install -r requirements.txt\n# 国内用户建议使用：\n# pip install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple -r requirements.txt\n\n# 3. 以可编辑模式安装 asteroid\npip install -e .\n```\n\n### 方法三：使用 Conda 环境\n\n如果您希望隔离环境，可以使用项目提供的 `environment.yml` 文件。\n\n```bash\n# 创建并激活环境\nconda env create -f environment.yml\nconda activate asteroid\n```\n\n## 基本使用\n\nAsteroid 的核心功能包括加载预训练模型、处理音频文件以及运行完整的训练脚本。\n\n### 1. 使用预训练模型进行分离\n\n这是最简单的使用场景。您可以直接加载官方提供的预训练模型对音频进行源分离。\n\n```python\nimport torch\nfrom asteroid.models import ConvTasNet\n\n# 加载预训练模型 (例如：在 WHAM 数据集上训练的 ConvTasNet)\nmodel = ConvTasNet.from_pretrained(\"ConvTasNet_WHAM!_separate_all\")\n\n# 准备输入音频 (假设 mix 是一个形状为 [batch, time] 的 Tensor)\n# 这里仅作示例，实际使用时需加载真实的 wav 文件\nmix = torch.randn(1, 16000) \n\n# 执行分离\nwith torch.no_grad():\n    separated_sources = model(mix)\n\n# separated_sources 的形状通常为 [batch, num_sources, time]\nprint(separated_sources.shape)\n```\n\n### 2. 运行官方实验脚本 (Recipes)\n\nAsteroid 提供了大量复现经典论文的脚本（位于 `egs\u002F` 目录）。以下以运行 WHAM 数据集上的 ConvTasNet 为例：\n\n```bash\n# 进入对应的实验目录\ncd egs\u002Fwham\u002FConvTasNet\n\n# 运行脚本 (请确保已执行过 'pip install -r requirements.txt')\n. .\u002Frun.sh\n```\n\n运行后，脚本将自动处理数据准备、模型训练和评估，并使用 TensorBoard 记录日志。\n\n### 3. 
查看训练日志\n\n所有实验默认使用 TensorBoard 记录。在实验目录下运行以下命令即可可视化训练过程：\n\n```bash\n# 启动 TensorBoard (默认端口 6006)\ntensorboard --logdir exp\u002F\n```\n\n如果是远程服务器，请通过 SSH 隧道转发端口：\n```bash\nssh -L 6006:localhost:6006 user@your_server_ip\n```\n然后在本地浏览器访问 `http:\u002F\u002Flocalhost:6006`。","某音频算法研究员正在开发一款针对嘈杂会议录音的自动转写系统，需要从混合音频中精准分离出不同发言人的声音。\n\n### 没有 asteroid 时\n- **重复造轮子耗时久**：研究人员需手动复现 ConvTasNet 等经典论文的底层代码，花费数周时间搭建基础架构而非专注算法改进。\n- **数据适配困难**：面对 WHAM 等不同数据集，需编写大量定制化脚本进行格式清洗和对齐，极易因数据处理错误导致训练失败。\n- **实验复现门槛高**：缺乏统一的训练食谱（recipes），难以快速验证前沿论文效果，调参过程如同“黑盒”，试错成本极高。\n- **长音频处理棘手**：直接处理长时间会议录音时，显存易爆炸，需自行编写复杂的分块与重叠推理逻辑，工程实现难度大。\n\n### 使用 asteroid 后\n- **开箱即用加速研发**：直接调用内置的 ConvTasNet 等成熟架构和预训练模型，将环境搭建与基线构建时间从数周缩短至几小时。\n- **数据集无缝对接**：利用其原生支持的多种数据集接口，一键完成数据加载与预处理，让研究人员能立即开始模型训练。\n- **标准化实验流程**：通过运行官方提供的“食谱”脚本，可轻松复现顶会论文结果，并基于统一框架快速迭代新想法。\n- **大文件平滑处理**：借助内置的大音频处理工具包，自动解决长录音的分段与拼接问题，无需关心底层显存管理细节。\n\nasteroid 通过提供标准化的 PyTorch 音频分离基础设施，让研究人员从繁琐的工程实现中解放出来，专注于核心算法的创新与突破。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fasteroid-team_asteroid_257f2b52.png","asteroid-team","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fasteroid-team_f65ae1aa.png",null,"https:\u002F\u002Fgithub.com\u002Fasteroid-team",[77,81],{"name":78,"color":79,"percentage":80},"Python","#3572A5",87.3,{"name":82,"color":83,"percentage":84},"Shell","#89e051",12.7,2560,446,"2026-04-13T18:27:30","MIT","未说明","未说明 (基于 PyTorch，通常音频分离任务建议使用支持 CUDA 的 NVIDIA GPU)",{"notes":92,"python":93,"dependencies":94},"该工具是基于 PyTorch 的音频源分离工具箱。安装推荐使用 conda (`conda env create -f environment.yml`) 或 pip。运行具体示例脚本（recipes）通常需要额外安装 `requirements.txt` 中的依赖。支持多种数据集（如 WHAM, WSJ0-2mix）和模型架构（如 ConvTasNet, DPRNN）。日志可视化默认使用 TensorBoard。","3.8+",[95,96,97,98,99,100,101,102,103],"torch","pytorch-lightning","soundfile","scipy","numpy","pandas","tqdm","yaml","tensorboard",[105,14],"音频",[107,108,109,110,111,112,113],"source-separation","speech-separation","audio-separation","speech-enhancement","deep-learning","pytorch","pretrained-models","2026-03-27T02:49:30.150509","2026-04-17T08:24:10.446382",[117,122,127,132,137,142],{"id":118,"question_zh":119,"answer_zh":120,"source_url":121},36608,"DPTNet 实现中 MultiheadAttention 的输入维度顺序是否有误？","这不是 Bug。虽然 PyTorch 的 MultiheadAttention 默认输入格式为 [seq_len, batch, channels]，而代码传入的是 [batch, seq_len, channels]，但这实际上导致了 intra-processing（块内处理）和 inter-processing（块间处理）的顺序互换。对于 intra-processing，本应输入 [K, 1*L, N] 却输入了 [1*L, K, N]，这与 inter-processing 的输入 [L, 1*K, N] 等价，反之亦然。因此，这种“错误”的实现等效于交换了处理顺序的 DPTNet，模型仍能正常工作。","https:\u002F\u002Fgithub.com\u002Fasteroid-team\u002Fasteroid\u002Fissues\u002F380",{"id":123,"question_zh":124,"answer_zh":125,"source_url":126},36609,"如何从 Hugging Face 获取 MulCat DPRNN 的预训练模型文件（.bin）？","可以通过运行项目中的 eval.py 脚本来导出模型。具体方法是查看 egs\u002Fwsj0-mix-var\u002FMulti-Decoder-DPRNN\u002Feval.py 文件的第 66 行附近，预训练模型在此处被加载用于测试循环，可以在此基础上添加代码将模型导出为 .bin 格式。","https:\u002F\u002Fgithub.com\u002Fasteroid-team\u002Fasteroid\u002Fissues\u002F412",{"id":128,"question_zh":129,"answer_zh":130,"source_url":131},36610,"训练 DCCRN 时遇到 'unexpected keyword hidden_size' 错误或 Loss 变为 NaN 怎么办？","如果遇到 'unexpected keyword hidden_size'，说明配置文件中的参数与当前模型架构不匹配，可以直接通过代码实例化模型（如 `model = DCCRNet(...)`）而不依赖配置文件的 masknet 部分。若训练几轮后 Loss 变为 NaN，建议尝试更换损失函数，例如使用 SNR loss 或 SD-SDR（尺度依赖 SDR）。SD-SDR 具有 SI-SDR 的正交投影优势，但包含对尺度敏感的项。此外，也可以在后处理步骤中固定音量，但这并非最佳方法。","https:\u002F\u002Fgithub.com\u002Fasteroid-team\u002Fasteroid\u002Fissues\u002F359",{"id":133,"question_zh":134,"answer_zh":135,"source_url":136},36611,"在 WHAM 数据集上训练 DPRNN 的结果与 README 中给出的基准结果不一致，可能是什么原因？","结果差异可能与 PyTorch 
Lightning (pl) 版本更新导致的 early stopping（早停）行为变化有关。此外，环境因素如 CUDA 版本（例如 CUDA 9.2）也可能影响复现结果。建议检查所使用的框架版本是否与原始实验环境一致。","https:\u002F\u002Fgithub.com\u002Fasteroid-team\u002Fasteroid\u002Fissues\u002F151",{"id":138,"question_zh":139,"answer_zh":140,"source_url":141},36612,"在 WSL (Windows Subsystem for Linux) 环境下创建 WHAM 混合数据时出现 'ValueError: could not broadcast input array' 错误如何解决？","该错误通常是音频片段拼接时长度不匹配导致的。虽然在某些特定 utterance 处理中会因数组形状不匹配而报错，但这通常不影响当前源分离模型的训练需求。如果是为了 ASR（自动语音识别）任务，需要确保所有说话人说完句子，此时可能需要使用 max 模式来处理长度对齐问题。对于源分离任务，如果该错误频繁发生，可能需要检查音频采样率设置或截断\u002F填充逻辑。","https:\u002F\u002Fgithub.com\u002Fasteroid-team\u002Fasteroid\u002Fissues\u002F153",{"id":143,"question_zh":144,"answer_zh":145,"source_url":146},36613,"Asteroid 是否支持可变数量的说话人分离？","是的，社区已经提出了支持可变数量说话人的功能（基于 Multi-Decoder DPRNN 论文）。该模型可以在相同的运行时间下处理不同数量的说话人（如 2 到 5 人），并且在选择说话人数量方面准确率超过 98%，其 Si-SNR 性能与针对固定说话人数量单独训练的模型相当。相关代码和预训练模型正在整理中以供开源。","https:\u002F\u002Fgithub.com\u002Fasteroid-team\u002Fasteroid\u002Fissues\u002F367",[148,153,158,163,168,173,178,183,188,193,198,203,208,213,218,223,228,233,238,243],{"id":149,"version":150,"summary_zh":151,"released_at":152},289377,"v0.7.0","⬆️ 升级 ⬆️ \n\n#682 中记录的变更：\n- 移除 `System` 类中 `lr_scheduler_step` 方法中的 `optimizer_idx` 参数。\n- 在 `beamforming.py` 中将 `torch.symeig` 替换为 `torch.linalg.eigh`。\n- 禁用 X-UMX，因为它使用的是 torch 1.x 的 STFT：@r-sawata，您打算修复这个问题吗？\n- 将 `on_*_end` 替换为 `on_training_*_end`。\n- 将 `torch.testing.assert_allclose` 替换为 `torch.testing.assert_close`。\n\n祝编码愉快 🙃","2023-10-12T20:47:16",{"id":154,"version":155,"summary_zh":156,"released_at":157},289378,"v0.6.1","在升级至 `asteroid` 0.7.x 并引入 `torch` 和 `lightning` 的新版本之前发布。\n\n### 破坏性变更\n- [install] 将 `torch` 和 `pytorch-lightning` 的版本限制为 1.x (#671)\n\n### 新增功能\n- [egs] 如果 LibriMix 配置文件中未指定 `storage_dir`，则抛出错误 (#626)\n- [egs] 创建多解码器 DPRNN 的 README.md 文件 (#632)\n- [egs] 更新多解码器 DPRNN，使其开箱即用即可对单个文件进行推理 (#653)\n- [egs] 添加一个新的预训练模型 (X-UMXL) (#665)\n- [egs] 为 MD-DPRNN 添加笔记本演示和 `separate.py` 脚本，简化 `from_pretrained` 的使用方式 (#668)\n\n### 变更\n- [src] 使用 `torch.linalg` 进行求解和 Cholesky 分解 (#623)\n- [install] 移除 `torchmetrics` 的版本限制\n- [CI] 使用 black 22.3.0 进行代码格式化，并修复 CI 中 Python 3.8 的格式化问题 (#624)\n\n### 修复\n- [hub] 升级已弃用的 `huggingface_hub.cached_download` (#645)\n- [src] 修复累积归一化问题 (#649)\n- [src] 根据 Python 版本修复 argparse 键的问题 (#628) (#657)\n- [ci] 修复持续集成中的测试问题 (#672)\n\n非常感谢所有贡献者 @actuallyaswin、@mystlee、@JunzheJosephZhu、@jbartolewska、@LeonieBorne、@mattiadg、@r-sawata 和 @zmolikova！💪 🤩 🙏","2023-07-19T11:37:16",{"id":159,"version":160,"summary_zh":161,"released_at":162},289379,"v0.6.0","是的。就是这样。目前仅支持 PyTorch Lightning。\n\n从 [1.5.0](https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Flightning\u002Freleases\u002Ftag\u002F1.5.0) 版本到最新版本都适用。\n\n因此，需要切换到新的 `Trainer` API，并使用新的 `accelerator`、`strategy` 和 `devices` 参数。\n\n感谢 @paulfd 的推动！💪","2022-06-28T15:24:22",{"id":164,"version":165,"summary_zh":166,"released_at":167},289380,"v0.5.3","0.6.0 之前的次要补丁版本，将升级 PyTorch Lightning 的版本。\n\n### 新增\n- [egs&tests] MixIT 损失函数 (#595)\n\n### 变更\n- [docs] 将 RTD 版本更新至 0.5.1\n\n### 修复\n- [src] 修复 FasNetTAC 加载问题 (#586)\n- [src] 修复 sudormrf 填充中的设备问题。修复 #598 (#603)\n- [docs] 修复 deep_clustering_loss 文档示例 (#607)\n- [src] 移除针对 PyTorch 1.11.0 的 torch.complex32 使用 (#609)\n- [egs] 修复 WHAMR! 
的包安装脚本 (#613)\n- [src] 修复 SuDORMRF 的 masknn 中的形状不匹配问题 (#618)\n- [CI] 修复 CI，将 torchmetrics 版本限制在 0.8.0 以下 (#619)\n- [docs] 修复文档，将 jinja2 版本限制为 >=3.0.0,\u003C3.1.0 (#620)\n\n感谢 @jc5201、@z-wony、@JorisCos、@ben-freist、@nicocasaisd 和 @zmolikova 的精彩贡献 :fire: :muscle: :pray:","2022-06-26T19:31:56",{"id":169,"version":170,"summary_zh":171,"released_at":172},289381,"v0.5.2","## 亮点\n- 基于 ConvTasNet 的全新 VAD 模型，适用于 [`Libri_VAD`]() 数据集 :rocket: \n- 修复并改进了 GEVD 波束形成器\n- 多解码器 DPRNN 配方\n\n## 更改日志\n### 新增\n- [src&egs] 在 Asteroid 中添加 VAD :tada: (#558)\n- [src&tests] 添加 GEVD 波束形成器 (#520)\n- [hub] 为 Hugging Face 下载统计信息添加库版本和名称 (#524)\n- [egs] 添加多解码器 DPRNN 的配方 (#463)\n- [egs] 启用 GPU 上的 WER 评估 (#541)\n- [egs] 为 FaSNet 配方添加 README 和预训练模型 (#561)\n\n### 变更\n- [src] 将 cLN 设置为因果 ConvTasNet 的默认配置 (#511)\n- [install] 锁定 pytorch-optimizer 版本以支持 RAdam (#568)\n\n### 修复\n- [src] 防止 gev 中出现复数特征值 (#519)\n- [src] 修复因果 ConvTasNet 的默认 norm_type (#503)\n- [egs] 修复 X-UMX 中的 bug (#521)\n- [nbs] 将 asteroid.filterbanks 改为 asteroid_filterbanks (#526)\n- [nbs] 修复 notebooks\u002F02_Filterbank.ipynb 中的拼写错误 (#527)\n- [install] 修复 PL 版本 \u003C1.5.0 的问题 (#576)\n\n感谢 @ldelebec、@hihunjin、@nobel861017、@ben-freist、@r-sawata、@osanseviero、@JunzheJosephZhu 和 @JorisCos 的精彩贡献 :fire: :muscle: :pray:","2021-12-07T08:09:14",{"id":174,"version":175,"summary_zh":176,"released_at":177},289382,"v0.5.1","不多也不少 :upside_down_face: \r\n请查看下方的发布内容，以获取最新的版本说明。","2021-05-07T19:04:23",{"id":179,"version":180,"summary_zh":181,"released_at":182},289383,"v0.5.0",":warning: 本版本不再支持 PyTorch 1.8 以下版本，并限制了 PyTorch-Lightning 1.3 以下版本的使用。 \r\n下个版本（0.5.1）将添加对 PyTorch-Lightning 1.3.0 的支持，但该版本导致我们的 CI 测试中断。 :warning: \n\n## 亮点\n- 波束成形模块现在更加稳定（默认以双精度计算），并进行了重命名和扩展。\n- 音乐分离挑战赛的基准模型已发布 :tada: \n\n## 更改日志\n### 破坏性变更\n- [src&tests] 移除所有已弃用的代码 (#474)\n- [all] 不再支持 torch\u003C1.8.0 (#476)\n\n### 新增\n- [src] 波束成形：Souden MVDR 和最优通道选择 (#484)\n- [src&egs] X-UMX 音乐分离挑战赛官方基准模型 (#490)\n\n### 变更\n- [src] 以双精度计算线性代数波束成形运算 (#482)\n- [src] 改进波束成形命名并添加待办事项 (#483)\n- [src] 波束成形：启用强制使用单精度线性代数运算 (#485)\n- [docs] 更新预训练模型共享说明 (#489)\n- [install] 将 lightning 版本升级至 1.3.0 以下 (#493)\n\n### 修复\n- [nb] 修复 00_GettingStarted.ipynb 中的形状问题 (#478)\n- [src] 稳定 GEV 波束成形器 (#479)\n- [src] 波束成形：修复文档引用\n\n感谢 @quancs、@r-sawata、@popcornell 和 @faroit 的贡献。","2021-05-07T15:14:55",{"id":184,"version":185,"summary_zh":186,"released_at":187},289384,"v0.4.5",":warning: **警告** :warning: 这是最后一个支持 torch\u003C1.8 的版本。\n\n从 asteroid 0.5.0 开始，将仅支持 torch>=1.8.0。主要原因是 `fft` 和 `linalg` 包的复杂性导致的支持难度较大。\n\n### 亮点\n- 我们现在有了端到端的波束成形模块！:tada: 多通道配方也将陆续推出。\n\n### 更改日志\n- [src] 修复 DCUNet ConvTranspose2d 中的填充问题 (#466)\n- [src&tests] 实现因果 TDConvNet 和 ConvTasNet (#465)\n- [tests] 修复 #465 中的测试问题 (#467)\n- [docs] 修复错误的文档链接 (#469)\n- [docs] 修复文档中的链接和错误 (#470)\n- [src&tests] 添加波束成形模块 :tada: (#468)\n\n非常感谢各位贡献者！:upside_down_face:","2021-04-09T19:22:02",{"id":189,"version":190,"summary_zh":191,"released_at":192},289385,"v0.4.2","# 亮点\n- **已弃用**：不再向 `BaseModel` 传递 `sample_rate` 已被弃用，未来版本中将引发错误。  \n- `BaseModel` 现在接受一个 `in_channels` 参数，该参数将在 `separate` 方法以及 `asteroid-infer` CLI 中使用。  \n  这使得现在可以在模型仓库中共享多通道模型（例如）。  \n- 首个完全支持的多通道模型是 `FasNetTAC`，感谢 @popcornell！ :tada:  \n- 使用 [`huggingface_hub`](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fhuggingface_hub) 替代“自定义”代码来与模型仓库进行交互。\n\n# 更改日志\n\n### 破坏性变更\n- [src] 将 `BaseModel` 中的 `sample_rate` 参数改为位置参数 (#431)\n\n### 新增功能\n- [src&egs] 在 LibriMix 配方中加入 ESPNet :tada: (#329)\n- [cli] 向 `asteroid-infer` 添加 `--device` 参数 (#375)\n- [src] 向 `BaseDCUNet` 添加 `stft_n_filters` 参数 (#406)\n- [src&tests] 添加 
`MetricTracker` 类 (#394)\n- [egs] 为所有模型添加 Librimix 配方 (#418)\n- [src] 在 `WerTracker` 中跟踪转录结果 (#414)\n- [docs] 添加关于 System Lightning 钩子的说明 (#428)\n- [src] 支持多通道模型 (#427)\n- [src&egs] 添加 `FasNetTAC` 模型、数据集和配方 (#306)\n- [src] 向 `DPRNN` 添加 `mulcat` 选项 (#416)\n\n### 变更\n- [src&install] 移除 `librosa` 并重构依赖文件 (#386)\n- [src] 移除未使用的钩子 (#424)\n- [hub] 使用 `huggingface_hub` 依赖，移除内联的 Hugging Face 代码 (#409)\n\n### 修复\n- [egs] 将采样率传递给模型 (#407)\n- [src] 修复 Large-DCUNet-20 架构问题 (#405)\n- [src] 修复张量设备不一致的问题 (#417)\n- [egs] 修复 DeepClustering 配方中模型保存路径问题 (#398)\n- [src] 修复 TasNet 和 SudoRMRF 中未传递采样率的问题 (#433)\n- [egs] 修复 AVSpeech 中的重塑问题 (#441)","2021-02-18T15:15:21",{"id":194,"version":195,"summary_zh":196,"released_at":197},289386,"v0.4.1","# 亮点\n- 终于升级到 Lightning 1.x。快去看看他们的[文档](https:\u002F\u002Fpytorch-lightning.readthedocs.io\u002Fen\u002Flatest\u002F)吧，看看有哪些新变化！尤其是回调函数和日志记录方面做了不少改进，变得更棒了 :wink: \n- 从 [Zenodo 仓库](https:\u002F\u002Fzenodo.org\u002Fcommunities\u002Fasteroid-models) 迁移到了 [HuggingFace 模型仓库](https:\u002F\u002Fhuggingface.co\u002Fmodels?filter=asteroid)，我们甚至有了专属标签和“在 Asteroid 中使用”按钮，真是太棒了！:star_struck: 非常感谢 HuggingFace 团队，特别是 @julien-c。:pray: \n\n![image](https:\u002F\u002Fuser-images.githubusercontent.com\u002F18496796\u002F103692633-55521c80-4f98-11eb-9a02-3ccbc6c76a03.png)\n\n![image](https:\u002F\u002Fuser-images.githubusercontent.com\u002F18496796\u002F103692876-b7ab1d00-4f98-11eb-8b8e-ff4025e6fdee.png)\n\n\n# 更改日志\n\n### 新增\n- [hub] 支持 HuggingFace 模型仓库 :tada: (#377)\n- [hub] 在 HuggingFace 仓库中列出 Asteroid 模型 (#382)\n- [安装] 去除重复的版本号 (#388)\n\n### 变更\n- [文档] 迁移到 asteroid-team 组织 (#369)\n- [源码] 从 `.models` 中导入 `BaseModel`\n- [源码] 将 Lightning 升级到 1.x 版本 (#371)\n\n### 修复\n- [安装] 修复旧版 STFT 模型加载问题（感谢 @popcornell）\n- [hub] 修复 `torch.hub` 测试，并添加 #377 引入的新依赖\n- [源码] 修复 `attention.py` 中因多头注意力输入形状导致的 bug (#381)\n- [源码] 这次应该能彻底修复 DPTNet 的问题了 (#383)\n- [源码] 移除对 `super().training_step()` 的调用 (#395)\n\n感谢所有贡献者：@jonashaag、@popcornell、@julien-c、@iver56、@lubacien、@cliffzhao，以及提出问题和报告 bug 的各位！:pray:","2021-01-05T20:06:00",{"id":199,"version":200,"summary_zh":201,"released_at":202},289387,"v0.4.0","# Highlights \r\n- Full [`TorchScript`](https:\u002F\u002Fpytorch.org\u002Fdocs\u002Fstable\u002Fjit.html) support for all asteroid models, unit tested for consistency  :rocket: \r\n- Outsource all filterbanks to [asteroid-filterbanks](https:\u002F\u002Fgithub.com\u002Fasteroid-team\u002Fasteroid-filterbanks) with a new `MelGramFB`, a STFT that matches `torch.stft` and new hooks for more extensibility! \r\n- Direct links to GitHub code from the docs :tada:\r\n- Add Hungarian algorithm in `PITLossWrapper` + new `MixITWrapper` and `SinkPITLossWrapper` :zap: \r\n- Better CI with tests against `1.6.0`, `1.7.0` and `torch-nightly`.\r\n\r\n## Backwards Incompatible Changes\r\n- `filters` in `Filterbank` is now a method instead of a property, for `TorchScript` support (#237). \r\n- `PITLossWrapper` method `best_perm_from_perm_avg_loss`, `find_best_perm` now return batch indices of the best permutation, to match with the new hungarian algorithm and facilitate outside use of those methods (#243).\r\n- Models that were saved without `sample_rate` argument won't be loadable anymore. Use `asteroid-register-sr` to register the sample rate of the model (#285). 
\r\n- We finally removed the `losses` (#343 ) and `blocks` (#344) that were deprecated since 0.2.0.\r\n- Remove `kernel_size` argument from `TDConvNet` (deprecated since v0.2.1) (#368).\r\n \r\n## Deprecation\r\n- `BaseModel._separate` is deprecated in favour of `BaseModel.forward_wav` (#337). \r\n- `asteroid.filterbanks` has been outsourced to [`asteroid-filterbanks`](https:\u002F\u002Fgithub.com\u002Fasteroid-team\u002Fasteroid-filterbanks) (#346). Use `from asteroid_filterbanks import` instead of `from asteroid.filterbanks import` from now. \r\n- Several deprecation cycles have been engaged in `asteroid-filterbanks.transforms`:\r\n  - `take_reim` will be removed.\r\n  - `take_mag` is deprecated in favour of `mag`.\r\n  - `take_cat` is deprecated in favour of `magreim`.\r\n  - `from_mag_and_phase` is deprecated in favour of `from_magphase`.\r\n- `asteroid.complex_nn.as_torch_complex` has been deprecated and will be removed. Use `torch.view_as_complex`, `torch_complex_from_magphase`, `torch_complex_from_reim` or `asteroid_filterbanks.transforms.from_torch_complex` instead (#358).\r\n\r\n# Changelog\r\n### Breaking\r\n[src] BC-breaking: Load models without sample_rate (#285)\r\n[src] Remove deprecated losses (#343)\r\n[src] Remove deprecated blocks (#344)\r\n[src] BaseEncoderMaskerDecoder: remove old hooks (#309)\r\n[src] Remove deprecated kernel_size in TDConvNet (#368)\r\n\r\n### Added\r\n[src&tests] Add sample_rate property (float) in `BaseModel`.  (#274)\r\n[src] Add sample_rate argument to all supported models. (#284)\r\n[src&tests] Automatic resampling in separate + CLI. (#283)\r\n[src & tests] :tada: TorchScript support :tada: (#237)\r\n[src & tests] Add Hungarian matcher to solve LSA in PITLossWrapper (#243)\r\n[src&tests] Add jitable_shape and use it in EncMaskDec forward (#288)\r\n[src&tests] Add shape checks to SDR and MSE losses (#299)\r\n[docs] Add loss plot in the FAQ (#314)\r\n[src] New asteroid.show_available_models (#313)\r\n[egs] DAMP-VSEP vocal separation using ConvTasNet (#298)\r\n[docs] DAMP-VSEP in the docs ! 
(#317)\r\n[src&test] Add Sinkhorn PIT loss (#302)\r\n[src] Add MixITWrapper loss (#320)\r\n[egs] Add MixIT example recipe (#328)\r\n[src] New Filterbank's hooks + add MelGram_FB (#334)\r\n[src] New phase features and transforms (#333)\r\n[src] Better names in asteroid.filterbanks.transforms (#342)\r\n[src] Add asteroid-versions script to print installed versions (#349)\r\n[install] Add conda environment.yml (#354)\r\n[src] Add ebased_vad and deltas (#355)\r\n\r\n### Changed\r\n[src&tests] Make `get_metrics` robust against metrics failures (#275)\r\n[egs] Don't override print() with pprint (#281)\r\n[src] Refactor BaseEncoderMaskerDecoder.forward (#307)\r\n[src&tests] Refactor DeMask for consistency (#304)\r\n[docs] Replace GettingStarted notebook (#319)\r\n[src] BaseModel takes sample_rate argument (#336)\r\n[src&egs] Transition to asteroid_filterbanks (#346)\r\n[src] Rename _separate to forward_wav (#337)\r\n[docs] Build docs with 3.8\r\n[docs] Links to GitHub code from the docs :tada: (#363)\r\n[CI&hub] TorchHub integration tests (#362)\r\n\r\n### Fixed\r\n[egs] Fix #277 DNS Challenge baseline's run.sh\r\n[docs] Fix Reference and Example blocks in docs (#297)\r\n[src] Fix #300: skip connection on good device (#301)\r\n[src] DCUNet: Replace old hooks by new ones (#308)\r\n[src] Fix schedulers serialization  (#326)\r\n[src] Improve Filterbank.forward error message (#327)\r\n[egs] Fix: replace DPRNNTasNet with DPTNet (#331)\r\n[src&jit] Fix DCCRN and DCUNet-Large (#276)\r\n[CI] Catch warnings we expect (#351)\r\n[src] Fix #279 OLA support for separate() and asteroid-infer (#305)\r\n[docs] Docs fixes and improvements (#340)\r\n[docs] Fix CLI output in docs (#357)\r\n[src&tests] Fix complex and add tests (#358)\r\n[docs] Fix docstrings (#365)\r\n[src] Fix #360 Correct DCCRN RNN (#364)\r\n\r\n\r\nA **large thanks to all contributors** for this release:\r\n@popcornell @jonashaag @michelolzam @faroit @mhu-coder @JorisCos @groadabike  @giorgiacantisani @tachi-hi @SouppuoS  @su","2020-11-30T20:12:17",{"id":204,"version":205,"summary_zh":206,"released_at":207},289388,"v0.3.5","Since v0.3.4, `pytorch_lightning` has released 1.0, which is incompatible. This release only limits lightning's version so that the install is compatible with the source code.
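\r\n\r\nFor illustration only, a pinned install consistent with this constraint might look like the sketch below; the exact upper bound is an assumption here, not a value stated in this note (check setup.py at the v0.3.5 tag for the bound actually used):\r\n\r\n```bash\r\n# Hypothetical pin, assuming the 0.3.x line needs lightning below 1.0\r\npip install \"asteroid==0.3.5\" \"pytorch-lightning\u003C1.0\"\r\n```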
","2020-11-10T11:27:48",{"id":209,"version":210,"summary_zh":211,"released_at":212},289389,"v0.3.4","# Highlights\r\n- Fixed compatibility issue of pretrained models between `v0.3.0` and `v0.3.3` thanks to @groadabike  (issue #255, #258)\r\n- Fixed chunk reordering in `LambdaOverlapAdd`.\r\n- `BaseEncoderMaskerDecoder` (formerly `BaseTasNet`) now has model hooks for easier extensibility (thanks to @jonashaag) : `postprocess_encoded`, `postprocess_masks`, `postprocess_masked`, postprocess_decoded`\r\n- Brand new complex ops and NNs thanks to @jonashaag !\r\n- New supported models: `DCUNet` ([paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1903.03107))and `DCCRNet` ([paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.00264))\r\n\r\nNote : Next release (0.4.0) will have some small backward breaking changes, to support TorchScript and improve our `PITLossWrapper`.\r\n\r\n# Changelog\r\n#### Added\r\n[hub] Add tmirzaev's model in the string-retrievable ones.\r\n[src] BaseTasNet -> BaseEncoderMaskerDecoder + add model hooks (#266)\r\n[src & tests] New complex ops + Add DCUNet and DCCRNet (#224)\r\n[src&tests] Improve scheduler's docs + add plot method (#268)\r\n\r\n#### Changed\r\n[hub] Add software version section in published models (#261)\r\n[docs] Add issue #250 to FAQ (#260)\r\n[black] Update black to 20.8b1 (#265)\r\n[black] Fix black 20.8b1 update (#267)\r\n[black] Update to 20.8b1 + always lint\r\n\r\n#### Fixed\r\n[egs] Fix declared unused variables in DeMask (#248)\r\n[docs] Update article citation.\r\n[src] Restore linear activation as default in ConvTasNet and DPRNN (#258)\r\n[src] Fix uncalled optimizer in System without LR schedule (#259)\r\n[src] Fix bug for DPTNetScheduler (#262)\r\n[src] Fix LambdaOverlapAdd and improve docs (#271)\r\n\r\n\r\nThanks to our awesome contributors @popcornell @jonashaag @faroit @groadabike !","2020-10-07T12:29:54",{"id":214,"version":215,"summary_zh":216,"released_at":217},289390,"v0.3.3","# Highlights \r\n\r\n- Refactored base models to ease publishing and loading \r\n- Improve Asteroid extensibility (without `-e` install) with `register`\u002F`get` logic.\r\n- `asteroid-infer` CLI for easy enhancement\u002Fseparation\r\n- DeMask recipe + pretrained model \r\n- Upgrade to lightning 0.8.1+\r\n- [Brand new docs](https:\u002F\u002Fmpariente.github.io\u002Fasteroid\u002F) ! (thanks @michelolzam)\r\n- [Brand new landing page](https:\u002F\u002Fasteroid-team.github.io\u002F) ! (thanks @michelolzam)\r\n\r\n# Changelog\r\n\r\n#### Added\r\n- [hub] Add DeMask to hubconf (#242)\r\n- [models] Add 16k popcornell\u002FDPRNNTasNet_WHAM_enhancesingle enhancement\r\n- [src & egs] Add DeMask: Surgical mask speech enhancement recipes (#235)\r\n- [models] Add pretrained DeMask name to URL mapping\r\n- [src & egs] PyTorchLightning upgrade to 0.9.0 (stable from 0.8.1 on) (#217)\r\n- [CLI] Add asteroid-infer CLI to enhance\u002Fseparate from the command line (#236)\r\n- [docs] New docs theme ! 
(#230)\r\n- [tests] Improve tests for new model interface (#234)\r\n- [docs] Add license info to FUSS Dataset\r\n- [src] Cleaner try\u002Fexcept\u002Felse in base_models.py (thanks @jonashaag)\r\n- [src] Add LibriMix.loaders_from_mini in librimix_dataset.py (#229)\r\n- [src] Add register command to gettable modules (#231)\r\n- [models] Add groadabike's pretrained model's URL (#228)\r\n- [src] Import MultiScale spectral loss in __init__.py\r\n- [docs] Include new notebook: follow up of #221 (#226)\r\n- [src] Add LambdaOverlapAdd usage example (#221)\r\n\r\n#### Changed\r\n- [src] Improve BaseModel.separate (#236 follow up) (#238)\r\n- [hub] Remove args in hub models, don't accept positional args after kwargs\r\n- [src] Refactor BaseTasNet in two: better loadable\u002Fserializable models  (#232)\r\n\r\n#### Fixed\r\n- [CLI] Fix large output volume in asteroid-infer\r\n- [docs] Fix theme installation, grunt build and import errors in docs (#240)\r\n- [tests] Fix wrong `separate` method in model tests (#241)\r\n- [src] Recipe name in publisher + dataset name fix (#239)\r\n\r\n\r\n","2020-08-25T20:42:04",{"id":219,"version":220,"summary_zh":221,"released_at":222},289391,"v0.3.2","## Highlights \r\n- New supported models:\r\n  - DualPathTransformer Network (DPTNet).\r\n  - SuDORMRF Network.\r\n  - LSTM TasNet accessible from `hub`.\r\n  - TDCNpp masking network.\r\n- Batch-wise schedulers (very useful for transformers) are now available.\r\n- `LambdaOverlapAdd` to easily process long files.\r\n- Add SMS_WSJ `Dataset`.\r\n- Add FUSS `Dataset`.\r\n- Consistent codestyle with `black` and more extensive testing\r\n- **Readable docs**! [[latest](https:\u002F\u002Fmpariente.github.io\u002Fasteroid\u002F)][[0.3.1](https:\u002F\u002Fasteroid.readthedocs.io\u002Fen\u002Fv0.3.1\u002F)]\r\n\r\n\r\nNote: Next releases will be based on `pytorch-lightning>=0.8.0`.\r\n\r\n## Changelog\r\n### Added\r\n- [tests] Add scheduler tests (#220)\r\n- [docs] Add schedulers, activations docs and improve datasets' docs (#219)\r\n- [docs] Add DSP section to docs (#218)\r\n- [docs] Add FUSSdataset to docs\r\n- [docs] Add schedulers in docs\r\n- [docs] Add activations to docs\r\n- [src] Add FUSS dataset from FUSS PR (#215)\r\n- [src & tests] Add TDCN++ to masknn (#214)\r\n- [hub] Add LSTMTasNet\u002FDPTNet\u002FSuDORMRFNet to torch.hub! (#210)\r\n- [src & tests] Add LSTMTasNet to serializable models  (#209)\r\n- [src] Continuous Speech separation with LambdaOverlapAdd (#193)\r\n- [src] Continuous Speech separation with OverlappadWrapper (#191)\r\n- [src & tests] Add SuDoRM-RF model & online mixing collate function (#174)\r\n- [src, tests & egs] Batchwise learning rate schedulers + DPProcessing + Dual Path Transformer Network + recipe (#200)\r\n- [docs] Add black code-style\r\n- [hub] Add Brij's LibriMix enhancement model\r\n- [src] Adds Dataset for SmsWsj (#179)\r\n- [docs] STOI loss example: Add sample rate (#189)\r\n- [src & tests] Add feature-wise global layernorm (#170)\r\n\r\n### Changed\r\n- [src & tests] Split SuDORMRF architectures in encoder\u002Fmasker\u002Fdecoder (#208)\r\n- [src] Code-style + docs\r\n- [src & tests] (BWI) Gather DSP methods in dsp folder (#194)\r\n- [egs] EarlyStopping Patience to 30 instead of 10. 
(#178)\r\n\r\n### Fixed\r\n- [src] Fix docs append problem in STOI.\r\n- [black] Apply black to recipes (#216)\r\n- [tests & CI] Fix tests for publishing (#211)\r\n- [notebooks] Fix notebooks (#206)\r\n- [src & tests] Fix serialization issues introduced in previous PR + some docs (#204)\r\n- [egs] Remove file librimix\u002Fmodel.py as model is imported from asteroid.models (#176)\r\n- [egs] Dynamic Mixing fix  (#173)\r\n- [install] Fix pytorch-lightning dependency (#159)\r\n- [egs] Fix empty audio, multi gpu and reduced storage issues in avspeech (#169)\r\n- [egs] Fix style in model.py\r\n- [egs] Fix bugs in generating wsj0-mix dataset with wv1 (#166)\r\n- [egs] Fix wrong rel path in wham\u002FDPRNN prepare_data.sh (#167)\r\n- [egs] Fix clipping problems when saving estimate wav file for Wham ConvTasNet (#160)\r\n\r\n### Backward incompatible changes\r\n- Move `mixture_consistency` into the `dsp` folder. \r\n","2020-08-20T22:08:37",{"id":224,"version":225,"summary_zh":226,"released_at":227},289392,"v0.3.1","Use 0.3.2 instead.","2020-08-20T17:27:26",{"id":229,"version":230,"summary_zh":231,"released_at":232},289393,"v0.3.0","### Backward incompatible changes\r\n- `System` is no longer imported from `asteroid\u002F__init__.py`. This way, `torch.hub` doesn't need `pytorch-lightning` to load models. Replace your `from asteroid import System` calls by `from asteroid.engine.system import System` \r\n- Refactor utils files into `asteroid\u002Futils` folder (#120). \r\nThe only non-backward-compatible behavior is when we try to import a function from `torch_utils`, e.g. `from asteroid.torch_utils import pad_x_to_y`. Instead, we can use `from asteroid import torch_utils; torch_utils.pad_x_to_y(...)`\r\n\r\n#### Added \r\n[src & egs] Publishing pretrained models!! (wham\u002FConvTasNet) (#125)\r\n[src] Add License info on all (but MUSDB) supported datasets (#130)\r\n[src & egs] Kinect-WSJ  Dataset and Single channel DC Recipe (#131)\r\n[src] Add licenses info and dataset name for model publishing\r\n[docs] Add getting started notebook\r\n[docs] Add notebook summary table\r\n[egs] Enable pretrained models sharing on LibriMix (#132)\r\n[egs] Enable wham\u002FDPRNN model sharing (#135)\r\n[model_cards] Add message to create model card after publishing\r\n[model_cards] Add ConvTasNet_LibriMix_sepnoisy.md model card (Thanks @JorisCos)\r\n[src & egs] Adding AVSpeech AudioVisual source separation (#127)\r\n[src] Instantiate LibriMix from download with class method (#144)\r\n[src] Add show_available_models in asteroid init\r\n[src & tests] Bidirectional residual RNN (#146)\r\n[src & tests] Support filenames at the input of `separate` (#154)\r\n#### Changed\r\n[src & hub] Remove System to reduce torch.hub deps (back to #112)\r\n[src & tests & egs] Refactor utils files into folder (#120)\r\n[egs] GPU `id` defaults to $CUDA_VISIBLE_DEVICES in all recipes (#128)\r\n[egs] set -e in all recipes to exit on errors (#129)\r\n[egs] Remove gpus args in all train.py (--id controls that in run.sh)  (#134)\r\n[hub] Change dataset name in LibriMix (fix)\r\n[src] Add targets argument (to stack sources) to MUSDB18 (#143)\r\n[notebooks] Rename examples to notebooks\r\n[src] Enable using Zenodo without api_key argument (set ACCESS_TOKEN env variable)\r\n#### Deprecated\r\n[src] Deprecate inputs_and_masks.py (#117)\r\n[src] Deprecate PITLossWrapper `mode` argument (#119)\r\n#### Fixed\r\n[src] Fix PMSQE loss (NAN backward + device placement) (#121)\r\n[egs] Fix checkpoint.best_k_models in new PL version (#123)\r\n[egs] Fix: remove 
#### Added\r\n
- [src & egs] Publishing pretrained models!! (wham\u002FConvTasNet) (#125)\r\n
- [src] Add license info on all (but MUSDB) supported datasets (#130)\r\n
- [src & egs] Kinect-WSJ dataset and single-channel DC recipe (#131)\r\n
- [src] Add licenses info and dataset name for model publishing\r\n
- [docs] Add getting-started notebook\r\n
- [docs] Add notebook summary table\r\n
- [egs] Enable pretrained models sharing on LibriMix (#132)\r\n
- [egs] Enable wham\u002FDPRNN model sharing (#135)\r\n
- [model_cards] Add message to create model card after publishing\r\n
- [model_cards] Add ConvTasNet_LibriMix_sepnoisy.md model card (thanks @JorisCos)\r\n
- [src & egs] Add AVSpeech audiovisual source separation (#127)\r\n
- [src] Instantiate LibriMix from download with class method (#144)\r\n
- [src] Add show_available_models in asteroid init\r\n
- [src & tests] Bidirectional residual RNN (#146)\r\n
- [src & tests] Support filenames at the input of `separate` (#154)\r\n
#### Changed\r\n
- [src & hub] Remove System to reduce torch.hub deps (back to #112)\r\n
- [src & tests & egs] Refactor utils files into folder (#120)\r\n
- [egs] GPU `id` defaults to $CUDA_VISIBLE_DEVICES in all recipes (#128)\r\n
- [egs] set -e in all recipes to exit on errors (#129)\r\n
- [egs] Remove gpus args in all train.py (--id controls that in run.sh) (#134)\r\n
- [hub] Change dataset name in LibriMix (fix)\r\n
- [src] Add targets argument (to stack sources) to MUSDB18 (#143)\r\n
- [notebooks] Rename examples to notebooks\r\n
- [src] Enable using Zenodo without api_key argument (set the ACCESS_TOKEN env variable)\r\n
#### Deprecated\r\n
- [src] Deprecate inputs_and_masks.py (#117)\r\n
- [src] Deprecate PITLossWrapper `mode` argument (#119)\r\n
#### Fixed\r\n
- [src] Fix PMSQE loss (NaN backward + device placement) (#121)\r\n
- [egs] Fix checkpoint.best_k_models in new PL version (#123)\r\n
- [egs] Fix: remove shuffle=True in validation Loader (lightning error) (#124)\r\n
- [egs] Corrections on LibriMix train and eval scripts (#137)\r\n
- [egs] Fix wavfiles saving in eval.py for enh_single and enh_both tasks (closes #139)\r\n
- [egs] Fix wavfiles saving in eval.py for enh tasks (estimates)\r\n
- [egs] Fix #139: correct squeeze for enhancement tasks (#142)\r\n
- [egs] Fix librimix run.sh and eval.py (#148)","2020-06-16T19:44:44",{"id":234,"version":235,"summary_zh":236,"released_at":237},289394,"v0.2.1","#### Summary\r\n\r\n
- New datasets: LibriMix, wsj0-mix, MUSDB18 and FUSS.\r\n
- New recipes: DPRNN, TwoStep separation, Deep clustering, Chimera++.\r\n
- `asteroid.models` + `hubconf.py`: model definitions without installing Asteroid (see the sketch below).\r\n\r\n
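A minimal sketch of the last summary point (the `torch.hub` entrypoint name below is an assumption for illustration, not confirmed by this release note):\r\n\r\n
```python\r\n
import torch\r\n
from asteroid.models import ConvTasNet, DPRNNTasNet  # importable definitions (#109)\r\n
\r\n
model = ConvTasNet(n_src=2)  # with Asteroid installed\r\n
\r\n
# With hubconf.py, definitions are also reachable without installing Asteroid;\r\n
# "conv_tasnet" is a hypothetical entrypoint name:\r\n
hub_model = torch.hub.load("mpariente\u002Fasteroid", "conv_tasnet", n_src=2)\r\n
```\r\n\r\n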
#### Added\r\n
- [src] Add dataset_name attribute to all data.Dataset (#113) (@mpariente)\r\n
- [hub] Add hubconf.py: load asteroid models without install! (#112) (@mpariente)\r\n
- [src] Add support for the MUSDB18 dataset (#110) (@faroit)\r\n
- [src & tests] Importable models: ConvTasNet and DPRNNTasNet (#109) (@mpariente)\r\n
- [egs] Deep clustering\u002FChimera++ recipe (#96) (@mpariente)\r\n
- [src & egs] Source changes towards deep clustering recipe (#95) (@mpariente)\r\n
- [docs] Add training logic figure (#94) (@mpariente)\r\n
- [install] Include PMSQE matrices in setup.py (@mpariente)\r\n
- [src & egs] DPRNN architecture change + replicated results (#93) (@mpariente)\r\n
- [egs] Two-step recipe: update results (#91) (@etzinis)\r\n
- [src & tests] Add multi-phase gammatone filterbank (#89) (@dditter)\r\n
- [src] Stabilize consistency constraint (@mpariente)\r\n
- [src] LibriMix dataset importable from data (@mpariente)\r\n
- [src & egs] LibriMix dataset support and ConvTasnet recipe (#87) (@JorisCos)\r\n
- [egs] Dynamic mixing for Wham (#80) (@popcornell)\r\n
- [egs] Add FUSS data preparation files (#86) (@michelolzam)\r\n
- [src & tests] Implement MISI (#85) (@mpariente)\r\n
- [src] ConvTasnetv1 available: no-skip option for TDCN (#82) (@mpariente)\r\n
- [src & tests] Implement GriffinLim (#83) (@mpariente)\r\n
- [docs] Add reduce example in PITLossWrapper (@mpariente)\r\n
- [src & tests] Generalize! Implement pairwise-losses reduce function (#81) (@mpariente)\r\n
- [src & tests] Add support for STOI loss (#79) (@mpariente)\r\n
- [src] Support padding and output_padding in Encoder and Decoder (#78) (@mpariente)\r\n
- [src & tests] Add mixture consistency constraints (#77) (@mpariente)\r\n
- [logo] Add white-themed asteroid logo (@mpariente)\r\n
- [egs] Add two-step source separation recipe (#67) (@etzinis)\r\n
- [src] Add train kwarg in System's common_step for easier subclassing. (@mpariente)\r\n
- [egs] Upload Tasnet WHAMR results (@mpariente)\r\n
- [src & tests] Add PMSQE loss in asteroid (#65) (@mdjuamart)\r\n
- [src] Add Ranger to supported optimizers (@mpariente)\r\n\r\n
#### Changed\r\n
- [src & tests] Depend on torch_optimizer for optimizers (#116) (@mpariente)\r\n
- [src & tests] Upgrade pytorch-lightning to >= 0.7.3 (#115) (@mpariente)\r\n
- [src] Revert part of #112 (0.2.1 should be fully backward compatible) (@mpariente)\r\n
- [src] Change .pth convention for asteroid-models (#111) (@mpariente)\r\n
- [src] Split blocks in convolutional and recurrent (#107) (@mpariente)\r\n
- [install] Update pb_bss_eval to zero-mean si-sdr (@mpariente)\r\n
- [egs] Remove python installs in recipes (#100) (@mpariente)\r\n
- [egs] Remove abs path in recipes (#99) (@mpariente)\r\n
- [egs] Add DC requirements.txt (@mpariente)\r\n
- [egs] Remove file headers (@mpariente)\r\n
- [src] Remove Chimerapp from blocks.py (#98) (@mpariente)\r\n
- [egs] Better logging: copy logs to expdir (#102) (@mpariente)\r\n
- [src] Delete wav.py (@mpariente)\r\n
- [egs] Delete unused file (@mpariente)\r\n
- [ci] Restore old travis.yml (reverts part of #61) (@mpariente)\r\n
- [egs] Utils symlink sms_wsj (@mpariente)\r\n
- [egs] Utils symlink wham\u002FTwoStep (@mpariente)\r\n
- [egs] Utils symlink wham\u002FDPRNN (@mpariente)\r\n
- [egs] Utils symlink LibriMix (@mpariente)\r\n
- [src & egs] Replace WSJ0-mix Dataset (#97) (@mpariente)\r\n\r\n
#### Deprecated\r\n
- [src] Deprecate kernel_size arg for conv_kernel_size in TDConvNet (#108) (@mpariente)\r\n
- [src] Deprecate masknn.blocks (split) (#107) (@mpariente)\r\n\r\n
#### Fixed\r\n
- [src] Fix docstring after #108 (@mpariente)\r\n
- [egs] Replace PITLossWrapper arg mode by pit_from (#103) (@mpariente)\r\n
- [src] Return config in DPRNN.get_config() (@mpariente)\r\n
- [egs] Fix typo in LibriMix import (@mpariente)\r\n
- [egs] Twostep recipe small fixes (#74) (@mpariente)\r\n
- [egs] Fix mode overwriting in all wham recipes (#76) (@Ariel12321)\r\n
- [egs] Fix mode overwriting in ConvTasNet recipe (#75) (@Ariel12321)\r\n
- Fix build errors (pin sphinx \u003C 3.0) (#72) (@mpariente)\r\n
- Fix paths for wham scripts and unzipping command for noise (#66) (@etzinis)\r\n
- Fix whamr_dataset.py, map reverb to anechoic (@mpariente)\r\n
- Important fix: WHAMR tasks include dereverberation! (@mpariente)\r\n\r\n
Big thanks to all contributors @popcornell @etzinis @michelolzam @Ariel12321 @faroit @dditter @mdjuamart @sunits :smiley:","2020-05-25T13:55:41",{"id":239,"version":240,"summary_zh":241,"released_at":242},289395,"v0.1.2","Mainly, this release downgrades pytorch-lightning to 0.6.0, as we were having some performance problems with 0.7.1 (#58).\r\n
We also incorporate metrics calculation in Asteroid through `pb_bss_eval`, a sub-package of `pb_bss` available on PyPI.\r\n\r\n
### New\r\n
- WHAMR support and a Tasnet recipe (#54)\r\n
- Add a `BatchNorm` wrapper that handles 2D, 3D and 4D cases, retrievable from the string `bN` (#60)\r\n
- `get_metrics` method and strict dependency on `pb_bss_eval` (#57) (#62) (see the sketch below)\r\n\r\n
### Bug fixes\r\n
- Fix PITLossWrapper usage on GPU (#55)\r\n
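A minimal sketch of `get_metrics` (assuming the `asteroid.metrics` module path; random arrays stand in for real audio):\r\n\r\n
```python\r\n
import numpy as np\r\n
from asteroid.metrics import get_metrics  # thin wrapper around pb_bss_eval\r\n
\r\n
mix = np.random.randn(8000)          # mixture, 1 s at 8 kHz\r\n
clean = np.random.randn(2, 8000)     # reference sources\r\n
estimate = np.random.randn(2, 8000)  # estimated sources\r\n
\r\n
metrics = get_metrics(mix, clean, estimate, sample_rate=8000,\r\n
                      metrics_list=["si_sdr", "stoi"])\r\n
print(metrics["si_sdr"])  # results come back in a flat dict\r\n
```\r\n\r\n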
","2020-03-15T13:20:35",{"id":244,"version":245,"summary_zh":246,"released_at":247},289396,"v0.1.0","### New features\r\n
- Better `argparse` interface with dictionary support.\r\n
- The STFT is now perfectly invertible with default values. Also, `perfect_synthesis_window` enables perfect synthesis with a large range of windows for even overlaps.\r\n
- `Encoder` and `Decoder` now support an arbitrary number of input dimensions.\r\n
- More support for complex numbers (angle, magnitude, interfaces to numpy and torchaudio).\r\n
- Add `SingleSrcMultiScaleSpectralLoss` from DDSP (magenta).\r\n
- Huge improvements on tests and coverage.\r\n\r\n
### New recipes\r\n
- ConvTasnet full recipe on WHAM.\r\n
- DPRNN full recipe on WHAM.\r\n
- Full DNS Challenge (Microsoft) baseline.\r\n
- Deep clustering and Chimera++ recipe on WSJ0-2mix (ongoing).\r\n
- WHAMR dataset support.\r\n\r\n
### Breaking change\r\n
- `Encoder` loses its `post_process_inputs` and `apply_mask` methods, which were not really useful. We consider it better that the user applies these methods knowingly.\r\n\r\n\r\n
Big thanks to the contributors on this release @popcornell @sunits @JorisCos","2020-03-09T16:37:43"]