[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-vbelz--Speech-enhancement":3,"tool-vbelz--Speech-enhancement":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 
道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":79,"owner_company":77,"owner_location":77,"owner_email":77,"owner_twitter":77,"owner_website":80,"owner_url":81,"languages":82,"stars":87,"forks":88,"last_commit_at":89,"license":90,"difficulty_score":10,"env_os":91,"env_gpu":92,"env_ram":91,"env_deps":93,"category_tags":101,"github_topics":102,"view_count":10,"oss_zip_url":77,"oss_zip_packed_at":77,"status":16,"created_at":107,"updated_at":108,"faqs":109,"releases":140},879,"vbelz\u002FSpeech-enhancement","Speech-enhancement","Deep learning for audio denoising","Speech-enhancement 是一个基于深度学习的开源音频降噪工具，专门用于从带环境噪声的语音中分离出清晰的人声。它通过训练神经网络模型，能够有效识别并消除多种常见背景噪音，比如钟表滴答声、脚步声、警报声、吸尘器声音等，从而提升语音的清晰度和可懂度。\n\n在现实场景中，录音或通话常常受到各种环境噪声的干扰，影响语音质量与听感，也给后续的语音识别、内容分析等任务带来困难。Speech-enhancement 正是为了解决这一问题而生，它能够将含噪语音中的噪声成分进行建模并滤除，保留甚至增强原始语音信号。\n\n该项目主要面向具有一定技术背景的开发者、音频处理领域的研究人员，以及对语音增强技术感兴趣的实践者。它提供了完整的数据准备、模型训练和预测流程，使用者可以基于自己的数据集进行训练，或直接使用预训练模型进行语音降噪。普通用户若具备基本的 Python 运行环境，也可通过其预测功能处理自己的音频文件。\n\n技术上一个亮点是采用语谱图（频谱图）作为音频的视觉化表示，将声音转换为二维图像，进而利用卷积神经网络（CNN）进行特征学习与噪声建模。这种方法能够较好地保留语音的时频结构，提升降噪效果","Speech-enhancement 是一个基于深度学习的开源音频降噪工具，专门用于从带环境噪声的语音中分离出清晰的人声。它通过训练神经网络模型，能够有效识别并消除多种常见背景噪音，比如钟表滴答声、脚步声、警报声、吸尘器声音等，从而提升语音的清晰度和可懂度。\n\n在现实场景中，录音或通话常常受到各种环境噪声的干扰，影响语音质量与听感，也给后续的语音识别、内容分析等任务带来困难。Speech-enhancement 正是为了解决这一问题而生，它能够将含噪语音中的噪声成分进行建模并滤除，保留甚至增强原始语音信号。\n\n该项目主要面向具有一定技术背景的开发者、音频处理领域的研究人员，以及对语音增强技术感兴趣的实践者。它提供了完整的数据准备、模型训练和预测流程，使用者可以基于自己的数据集进行训练，或直接使用预训练模型进行语音降噪。普通用户若具备基本的 Python 运行环境，也可通过其预测功能处理自己的音频文件。\n\n技术上一个亮点是采用语谱图（频谱图）作为音频的视觉化表示，将声音转换为二维图像，进而利用卷积神经网络（CNN）进行特征学习与噪声建模。这种方法能够较好地保留语音的时频结构，提升降噪效果。项目代码结构清晰，支持自定义噪声类型与数据集，方便扩展和二次开发。","# Speech-enhancement\n---\n[![Build Status](https:\u002F\u002Ftravis-ci.com\u002Fvbelz\u002FSpeech-enhancement.svg?branch=master)](https:\u002F\u002Ftravis-ci.com\u002Fvbelz\u002FSpeech-enhancement)\n>Vincent Belz : vincent.belz@gmail.com\n>\n>Published in towards data science : [Speech-enhancement with Deep learning](https:\u002F\u002Ftowardsdatascience.com\u002Fspeech-enhancement-with-deep-learning-36a1991d3d8d)\n>\n## Introduction\n**This project aims at building a speech enhancement system to attenuate environmental noise.**\n\n\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvbelz_Speech-enhancement_readme_0e658b4d519d.gif\" alt=\"Spectrogram denoising\" title=\"Speech enhancement\"\u002F>\n\n\n\nAudios have many different ways to be represented, going from raw time series to time-frequency decompositions.\nThe choice of the representation is crucial for the performance of your system.\nAmong time-frequency decompositions, Spectrograms have been proved to be a useful representation for audio processing. They consist in 2D images representing sequences of Short Time Fourier Transform (STFT) with time and frequency as axes, and brightness representing the strength of a frequency component at each time frame. In such they appear a natural domain to apply the CNNS architectures for images directly to sound. Between magnitude and phase spectrograms, magnitude spectrograms contain most the structure of the signal. Phase spectrograms appear to show only little temporal and spectral regularities.\n\nIn this project, I will use magnitude spectrograms as a representation of sound (cf image below) in order to predict the noise model to be subtracted to a noisy voice spectrogram.\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvbelz_Speech-enhancement_readme_811e82d016ec.png\" alt=\"sound representation\" title=\"sound representation\" \u002F>\n\nThe project is decomposed in three modes: `data creation`, `training` and `prediction`.\n\n## Prepare the data\n\nTo create the datasets for training, I gathered english speech clean voices  and environmental noises from different sources.\n\nThe clean voices were mainly gathered from [LibriSpeech](http:\u002F\u002Fwww.openslr.org\u002F12\u002F): an ASR corpus based on public domain audio books. I used as well some datas from [SiSec](https:\u002F\u002Fsisec.inria.fr\u002Fsisec-2015\u002F2015-two-channel-mixtures-of-speech-and-real-world-background-noise\u002F).\nThe environmental noises were gathered from [ESC-50 dataset](https:\u002F\u002Fgithub.com\u002Fkaroldvl\u002FESC-50) or [https:\u002F\u002Fwww.ee.columbia.edu\u002F~dpwe\u002Fsounds\u002F](https:\u002F\u002Fwww.ee.columbia.edu\u002F~dpwe\u002Fsounds\u002F).  \n\n For this project, I focused on 10 classes of environmental noise: **tic clock**, **foot steps**, **bells**, **handsaw**, **alarm**, **fireworks**, **insects**, **brushing teeth**, **vaccum cleaner** and **snoring**. These classes are illustrated in the image below\n (I created this image using pictures from [https:\u002F\u002Funsplash.com](https:\u002F\u002Funsplash.com)).\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvbelz_Speech-enhancement_readme_6abcc9c0a4af.png\" alt=\"classes of environmental noise used\" title=\"classes of environmental noise\" \u002F>\n\nTo create the datasets for training\u002Fvalidation\u002Ftesting, audios were sampled at 8kHz and I extracted windows\nslighly above 1 second. I performed some data augmentation for the environmental noises (taking the windows at different times creates different noise windows). Noises have been blended to clean voices  with a randomization of the noise level (between 20% and 80%). At the end, training data consisted of 10h of noisy voice & clean voice,\nand validation data of 1h of sound.\n\nTo prepare the data, I recommend to create data\u002FTrain and data\u002FTest folders in a location separate from your code folder. 
To prepare the data, I recommend creating data\u002FTrain and data\u002FTest folders in a location separate from your code folder. Then create the structure shown in the image below:\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvbelz_Speech-enhancement_readme_200317630cf4.png\" alt=\"data folder structure\" title=\"data folder structure\" \u002F>\n\nModify the `noise_dir`, `voice_dir`, `path_save_spectrogram`, `path_save_time_serie`, and `path_save_sound` path names accordingly in the `args.py` file, which holds the default parameters for the program.\n\nPlace your noise audio files into the `noise_dir` directory and your clean voice files into `voice_dir`.\n\nSpecify how many frames you want to create as `nb_samples` in `args.py` (or pass it as an argument from the terminal).\nI left nb_samples=50 by default for the demo, but for production I would recommend 40,000 or more.\n\nThen run `python main.py --mode='data_creation'`. This will randomly blend clean voices from `voice_dir` with noises from `noise_dir` and save to disk the spectrograms of the noisy voices, noises and clean voices, as well as the complex phases, time series and sounds (for QC or to test other networks). It takes the input parameters defined in `args.py`. The STFT parameters, frame length and hop_length can be modified in `args.py` (or passed as arguments from the terminal), but with the default parameters each window is converted into a spectrogram matrix of size 128 x 128.\n\nThe datasets used for training are the magnitude spectrograms of the noisy voices and the magnitude spectrograms of the clean voices.\n\n## Training\n\nThe model used for training is a U-Net, a deep convolutional autoencoder with symmetric skip connections. [U-Net](https:\u002F\u002Farxiv.org\u002Fabs\u002F1505.04597) was initially developed for biomedical image segmentation. Here the U-Net has been adapted to denoise spectrograms.\n\nThe input to the network is the magnitude spectrogram of the noisy voice; the output is the noise to model (noisy voice magnitude spectrogram minus clean voice magnitude spectrogram). Both input and output matrices are scaled with a global scaling to be mapped into a distribution between -1 and 1.\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvbelz_Speech-enhancement_readme_1ad5810a7ced.png\" alt=\"Unet training\" title=\"Unet training\" \u002F>\n\nMany configurations were tested during training. In the preferred configuration, the encoder is made of 10 convolutional layers (with LeakyReLU, max pooling and dropout). The decoder is a symmetric expanding path with skip connections. The last activation layer is a hyperbolic tangent (tanh), giving an output distribution between -1 and 1. For training from scratch, the initial random weights were set with the He normal initializer.\n\nThe model is compiled with the Adam optimizer, and the loss function is the Huber loss, a compromise between the L1 and L2 losses (a minimal sketch of this setup is shown after the training command below).\n\nTraining on a modern GPU takes a couple of hours.\n\nIf you have a GPU for deep learning computation in your local computer, you can train with\n`python main.py --mode=\"training\"`. It takes the input parameters defined in `args.py`. By default it trains from scratch (you can change this by setting `training_from_scratch` to false). You can\nstart training from the pre-trained weights specified in `weights_folder` and `name_model`. I have made `model_unet.h5`, with weights from my training, available in `.\u002Fweights`. The number of epochs and the batch size for training are specified by `epochs` and `batch_size`.
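As promised above, a minimal sketch of the training setup using tensorflow.keras (this toy encoder-decoder stands in for the repository's deeper 10-layer U-Net; layer counts and sizes are illustrative):

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def tiny_unet(input_shape=(128, 128, 1)):
    inp = layers.Input(shape=input_shape)
    # Encoder: convolutions with LeakyReLU, max pooling and dropout
    c1 = layers.Conv2D(16, 3, padding="same", kernel_initializer="he_normal")(inp)
    c1 = layers.LeakyReLU()(c1)
    p1 = layers.Dropout(0.1)(layers.MaxPooling2D()(c1))
    c2 = layers.Conv2D(32, 3, padding="same", kernel_initializer="he_normal")(p1)
    c2 = layers.LeakyReLU()(c2)
    # Decoder: symmetric expanding path with a skip connection
    u1 = layers.Concatenate()([layers.UpSampling2D()(c2), c1])
    c3 = layers.Conv2D(16, 3, padding="same", kernel_initializer="he_normal")(u1)
    c3 = layers.LeakyReLU()(c3)
    # tanh keeps the predicted noise spectrogram in the (-1, 1) range
    out = layers.Conv2D(1, 1, activation="tanh")(c3)
    return Model(inp, out)

model = tiny_unet()
# Adam optimizer with the Huber loss, as described above
model.compile(optimizer="adam", loss=tf.keras.losses.Huber())
```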
Best weights are automatically saved during training as `model_best.h5`. You can call fit_generator to load only part of the data into memory at training time.\n\nPersonally, I used the free GPU available on Google Colab for my training. I provide a notebook example at\n`.\u002Fcolab\u002FTrain_denoise.ipynb`. If you have a large amount of space available on your drive, you can load all your training data to your drive and load part of it at training time with the fit_generator option of tensorflow.keras. I personally had limited space available on my Google Drive, so I prepared batches of 5 GB in advance to be loaded to the drive for training. Weights were regularly saved and reloaded for the next training run.\n\nIn the end, I obtained a training loss of 0.002129 and a validation loss of 0.002406. Below is a loss graph from one of the training runs.\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvbelz_Speech-enhancement_readme_64e7c7aef630.png\" alt=\"loss training\" title=\"loss training\" \u002F>\n\n## Prediction\n\nFor prediction, the noisy voice audios are converted into numpy time series of windows slightly longer than 1 second. Each time series is converted into a magnitude spectrogram and a phase spectrogram via the STFT. The noisy voice spectrograms are passed into the U-Net, which predicts the noise model for each window (cf. graph below). Prediction time for one window, once converted to a magnitude spectrogram, is around 80 ms on a standard CPU.\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvbelz_Speech-enhancement_readme_8b8b2bb33f3b.png\" alt=\"flow prediction part 1\" title=\"flow prediction part 1\" \u002F>\n\nThen the predicted noise model is subtracted from the noisy voice spectrogram (here I apply a direct subtraction, as it was sufficient for my task; one could imagine training a second network to adapt the noise model, or applying a matched filter as done in signal processing). The \"denoised\" magnitude spectrogram is combined with the initial phase as input to the inverse Short-Time Fourier Transform (ISTFT). The denoised time series can then be converted back to audio (cf. graph below).\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvbelz_Speech-enhancement_readme_e94cd63ca1ff.png\" alt=\"flow prediction part 2\" title=\"flow prediction part 2\" \u002F>
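The subtract-and-invert step described above can be sketched as follows (a minimal illustration assuming librosa; the global scaling and batching of the real pipeline are omitted, and the STFT parameters match the earlier assumptions):

```python
import numpy as np
import librosa

# Assumed STFT parameters: n_fft=254 and hop_length=63 give 128 frequency
# bins, and an 8255-sample window then gives 128 frames (a 128 x 128 matrix)
N_FFT, HOP = 254, 63

def denoise_window(noisy_audio, model):
    """Denoise one ~1 s window with a trained noise-prediction model."""
    stft = librosa.stft(noisy_audio, n_fft=N_FFT, hop_length=HOP, center=False)
    mag, phase = np.abs(stft), np.angle(stft)

    # The U-Net predicts the noise magnitude for this window
    noise_mag = model.predict(mag[np.newaxis, ..., np.newaxis])[0, ..., 0]

    # Direct subtraction of the predicted noise, clipped at zero
    denoised_mag = np.maximum(mag - noise_mag, 0.0)

    # Recombine with the original phase and invert with the ISTFT
    return librosa.istft(denoised_mag * np.exp(1j * phase),
                         hop_length=HOP, center=False)
```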
Let's have a look at the performance on the validation data!\n\nBelow I display some results from validation examples for Alarm\u002FInsects\u002FVacuum cleaner\u002FBells noise.\nFor each of them I display the initial noisy voice spectrogram, the denoised spectrogram predicted by the network, and the true clean voice spectrogram. We can see that the network generalizes the noise modelling well, producing a slightly smoothed version of the voice spectrogram that is very close to the true clean voice spectrogram.\n\nMore examples of spectrogram denoising on validation data are displayed in the gif at the top of the\nrepository.\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvbelz_Speech-enhancement_readme_4db5084a639a.png\" alt=\"validation examples\" title=\"Spectrogram validation examples\" \u002F>\n\nLet's hear the results converted back to sound:\n\n> Audios for the Alarm example:\n\n[Input example alarm](https:\u002F\u002Fvbelz.github.io\u002FSpeech-enhancement\u002Fdemo_data\u002Fvalidation\u002Fnoisy_voice_alarm39.wav)\n\n[Predicted output example alarm](https:\u002F\u002Fvbelz.github.io\u002FSpeech-enhancement\u002Fdemo_data\u002Fvalidation\u002Fvoice_pred_alarm39.wav)\n\n[True output example alarm](https:\u002F\u002Fvbelz.github.io\u002FSpeech-enhancement\u002Fdemo_data\u002Fvalidation\u002Fvoice_alarm39.wav)\n\n> Audios for the Insects example:\n\n[Input example insects](https:\u002F\u002Fvbelz.github.io\u002FSpeech-enhancement\u002Fdemo_data\u002Fvalidation\u002Fnoisy_voice_insect41.wav)\n\n[Predicted output example insects](https:\u002F\u002Fvbelz.github.io\u002FSpeech-enhancement\u002Fdemo_data\u002Fvalidation\u002Fvoice_pred_insect41.wav)\n\n[True output example insects](https:\u002F\u002Fvbelz.github.io\u002FSpeech-enhancement\u002Fdemo_data\u002Fvalidation\u002Fvoice_insect41.wav)\n\n> Audios for the Vacuum cleaner example:\n\n[Input example vacuum cleaner](https:\u002F\u002Fvbelz.github.io\u002FSpeech-enhancement\u002Fdemo_data\u002Fvalidation\u002Fnoisy_voice_vaccum35.wav)\n\n[Predicted output example vacuum cleaner](https:\u002F\u002Fvbelz.github.io\u002FSpeech-enhancement\u002Fdemo_data\u002Fvalidation\u002Fvoice_pred_vaccum35.wav)\n\n[True output example vacuum cleaner](https:\u002F\u002Fvbelz.github.io\u002FSpeech-enhancement\u002Fdemo_data\u002Fvalidation\u002Fvoice_vaccum35.wav)\n\n> Audios for the Bells example:\n\n[Input example bells](https:\u002F\u002Fvbelz.github.io\u002FSpeech-enhancement\u002Fdemo_data\u002Fvalidation\u002Fnoisy_voice_bells28.wav)\n\n[Predicted output example bells](https:\u002F\u002Fvbelz.github.io\u002FSpeech-enhancement\u002Fdemo_data\u002Fvalidation\u002Fvoice_pred_bells28.wav)\n\n[True output example bells](https:\u002F\u002Fvbelz.github.io\u002FSpeech-enhancement\u002Fdemo_data\u002Fvalidation\u002Fvoice_bells28.wav)\n\nBelow I show the corresponding displays converted back to time series:\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvbelz_Speech-enhancement_readme_00ce16fc3aeb.png\" alt=\"validation examples timeserie\" title=\"Time serie validation examples\" \u002F>\n\nYou can view these displays and listen to the audio in the Jupyter notebook `demo_predictions.ipynb` that I provide in the `.\u002Fdemo_data` folder.\n\nBelow, I show the time-series counterpart of the spectrogram denoising gif at the top of the repository.\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvbelz_Speech-enhancement_readme_45ef0a136056.gif\" alt=\"Timeserie denoising\" title=\"Speech enhancement\"\u002F>\n\nAs an extreme test, I applied the model to voices blended with many noises at a high level.\nThe network appeared to work surprisingly well for the denoising.
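Denoising a full recording simply applies that per-window pipeline repeatedly; a minimal sketch building on the `denoise_window` helper above (the window length is an assumption chosen to match the 128 x 128 spectrograms):

```python
import numpy as np
import librosa

SR = 8000
WINDOW = 8255  # samples per window: 128 STFT frames with n_fft=254, hop=63

def denoise_file(path, model):
    """Denoise a longer recording window by window (illustrative only)."""
    audio, _ = librosa.load(path, sr=SR)
    # Zero-pad so the audio splits into whole windows
    n_windows = int(np.ceil(len(audio) / WINDOW))
    audio = np.pad(audio, (0, n_windows * WINDOW - len(audio)))
    pieces = [denoise_window(audio[i * WINDOW:(i + 1) * WINDOW], model)
              for i in range(n_windows)]
    return np.concatenate(pieces)
```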
The total time to denoise a 5-second audio clip was around 4 seconds (on a standard CPU).\n\nBelow are some examples:\n\n> Example 1:\n\n[Input example test 1](https:\u002F\u002Fvbelz.github.io\u002FSpeech-enhancement\u002Fdemo_data\u002Ftest\u002Fnoisy_voice_long_t2.wav)\n\n[Predicted output example test 1](https:\u002F\u002Fvbelz.github.io\u002FSpeech-enhancement\u002Fdemo_data\u002Fsave_predictions\u002Fdenoise_t2.wav)\n\n> Example 2:\n\n[Input example test 2](https:\u002F\u002Fvbelz.github.io\u002FSpeech-enhancement\u002Fdemo_data\u002Ftest\u002Fnoisy_voice_long_t1.wav)\n\n[Predicted output example test 2](https:\u002F\u002Fvbelz.github.io\u002FSpeech-enhancement\u002Fdemo_data\u002Fsave_predictions\u002Fdenoise_t1.wav)\n\n## How to use?\n\n```\n- Clone this repository\n- pip install -r requirements.txt\n- python main.py OPTIONS\n\n* Modes of the program (possible OPTIONS):\n\n--mode: default='prediction', type=str, choices=['data_creation', 'training', 'prediction']\n\n```\n\nHave a look at the possible arguments for each option in `args.py`.\n\n## References\n\n>Jansson, Andreas, Eric J. Humphrey, Nicola Montecchio, Rachel M. Bittner, Aparna Kumar and Tillman Weyde. **Singing Voice Separation with Deep U-Net Convolutional Networks.** *ISMIR* (2017).\n>\n>[https:\u002F\u002Fejhumphrey.com\u002Fassets\u002Fpdf\u002Fjansson2017singing.pdf]\n\n>Grais, Emad M. and Plumbley, Mark D., **Single Channel Audio Source Separation using Convolutional Denoising Autoencoders** (2017).\n>\n>[https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.08019]\n\n>Ronneberger O., Fischer P., Brox T. (2015) **U-Net: Convolutional Networks for Biomedical Image Segmentation**. In: Navab N., Hornegger J., Wells W., Frangi A. (eds) *Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015*. MICCAI 2015. Lecture Notes in Computer Science, vol 9351. Springer, Cham\n>\n>[https:\u002F\u002Farxiv.org\u002Fabs\u002F1505.04597]\n\n> K. J. Piczak. **ESC: Dataset for Environmental Sound Classification**. 
*Proceedings of the 23rd Annual ACM Conference on Multimedia*, Brisbane, Australia, 2015.\n>\n> [DOI: http:\u002F\u002Fdx.doi.org\u002F10.1145\u002F2733373.2806390]\n\n## License\n\n[![License](http:\u002F\u002Fimg.shields.io\u002F:license-mit-blue.svg?style=flat-square)](http:\u002F\u002Fbadges.mit-license.org)\n\n- **[MIT license](http:\u002F\u002Fopensource.org\u002Flicenses\u002Fmit-license.php)**\n","# Speech-enhancement（语音增强）\n---\n[![构建状态](https:\u002F\u002Ftravis-ci.com\u002Fvbelz\u002FSpeech-enhancement.svg?branch=master)](https:\u002F\u002Ftravis-ci.com\u002Fvbelz\u002FSpeech-enhancement)\n>Vincent Belz : vincent.belz@gmail.com\n>\n>发表于 towards data science : [Speech-enhancement with Deep learning](https:\u002F\u002Ftowardsdatascience.com\u002Fspeech-enhancement-with-deep-learning-36a1991d3d8d)\n>\n## 简介\n**本项目旨在构建一个语音增强系统，以衰减环境噪声。**\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvbelz_Speech-enhancement_readme_0e658b4d519d.gif\" alt=\"语谱图去噪\" title=\"语音增强\"\u002F>\n\n音频有多种不同的表示方式，从原始时间序列到时频分解。\n表示方式的选择对系统性能至关重要。\n在时频分解方法中，语谱图（Spectrograms）已被证明是音频处理的有效表示。它们是二维图像，表示短时傅里叶变换（STFT）序列，以时间和频率为轴，亮度表示每个时间帧中频率分量的强度。因此，它们自然成为将用于图像的卷积神经网络（CNNs）架构直接应用于声音的领域。在幅度语谱图和相位语谱图之间，幅度语谱图包含了信号的大部分结构，而相位语谱图似乎只显示出很少的时间和频谱规律性。\n\n在本项目中，我将使用幅度语谱图作为声音的表示（参见下图），以预测需要从含噪语音语谱图中减去的噪声模型。\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvbelz_Speech-enhancement_readme_811e82d016ec.png\" alt=\"声音表示\" title=\"声音表示\" \u002F>\n\n项目分为三种模式：`data creation`（数据创建）、`training`（训练）和`prediction`（预测）。\n\n## 准备数据\n\n为了创建训练数据集，我从不同来源收集了英语清晰语音和环境噪声。\n\n清晰语音主要来自 [LibriSpeech](http:\u002F\u002Fwww.openslr.org\u002F12\u002F)：一个基于公共领域有声读物的自动语音识别（ASR）语料库。我还使用了来自 [SiSec](https:\u002F\u002Fsisec.inria.fr\u002Fsisec-2015\u002F2015-two-channel-mixtures-of-speech-and-real-world-background-noise\u002F) 的一些数据。\n环境噪声来自 [ESC-50 数据集](https:\u002F\u002Fgithub.com\u002Fkaroldvl\u002FESC-50) 或 [https:\u002F\u002Fwww.ee.columbia.edu\u002F~dpwe\u002Fsounds\u002F](https:\u002F\u002Fwww.ee.columbia.edu\u002F~dpwe\u002Fsounds\u002F)。\n\n在本项目中，我专注于 10 类环境噪声：**滴答钟声**、**脚步声**、**铃声**、**手锯声**、**警报声**、**烟花声**、**昆虫声**、**刷牙声**、**吸尘器声** 和 **打鼾声**。这些类别在下图中展示（我使用来自 [https:\u002F\u002Funsplash.com](https:\u002F\u002Funsplash.com) 的图片创建了此图）。\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvbelz_Speech-enhancement_readme_6abcc9c0a4af.png\" alt=\"使用的环境噪声类别\" title=\"环境噪声类别\" \u002F>\n\n为了创建训练\u002F验证\u002F测试数据集，音频以 8kHz 采样，我提取了略高于 1 秒的窗口。\n对环境噪声进行了数据增强（在不同时间点截取窗口会创建不同的噪声窗口）。噪声以随机噪声水平（在 20% 到 80% 之间）与清晰语音混合。最终，训练数据包含 10 小时的含噪语音和清晰语音，验证数据包含 1 小时的声音。\n\n准备数据时，我建议在与代码文件夹分开的位置创建 `data\u002FTrain` 和 `data\u002FTest` 文件夹。然后创建如下图所示的目录结构：\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvbelz_Speech-enhancement_readme_200317630cf4.png\" alt=\"数据文件夹结构\" title=\"数据文件夹结构\" \u002F>\n\n你需要相应地修改 `args.py` 文件中的 `noise_dir`、`voice_dir`、`path_save_spectrogram`、`path_save_time_serie` 和 `path_save_sound` 路径名称，该文件包含了程序的默认参数。\n\n将你的噪声音频文件放入 `noise_dir` 目录，将你的清晰语音文件放入 `voice_dir` 目录。\n\n在 `args.py` 中指定你想要创建的帧数作为 `nb_samples`（或从终端作为参数传递）。\n为了演示，我默认设置 `nb_samples=50`，但对于实际使用，我建议设置为 40000 或更多。\n\n然后运行 `python main.py --mode='data_creation'`。这将随机混合来自 `voice_dir` 的清晰语音和来自 `noise_dir` 的噪声，并将含噪语音、噪声和清晰语音的语谱图以及复数相位、时间序列和声音（用于质量控制或测试其他网络）保存到磁盘。它使用 `args.py` 中定义的输入参数。STFT、帧长、hop_length 的参数可以在 `args.py` 中修改（或从终端作为参数传递），但在默认参数下，每个窗口将被转换为大小为 128 x 128 的语谱图矩阵。\n\n用于训练的数据集将是含噪语音的幅度语谱图和清晰语音的幅度语谱图。\n\n## 训练\n\n训练所使用的模型是 U-Net，这是一种具有对称跳跃连接（skip 
connections）的深度卷积自编码器。[U-Net](https:\u002F\u002Farxiv.org\u002Fabs\u002F1505.04597) 最初是为生物医学图像分割而开发的。此处，U-Net 被改造用于对频谱图进行去噪。\n\n网络的输入是带噪语音的幅度频谱图。输出是待建模的噪声（带噪语音幅度频谱图 - 干净语音幅度频谱图）。输入和输出矩阵都通过全局缩放进行归一化，以映射到 -1 到 1 的分布区间。\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvbelz_Speech-enhancement_readme_1ad5810a7ced.png\" alt=\"Unet training\" title=\"Unet training\" \u002F>\n\n在训练过程中测试了许多配置。在首选配置中，编码器由 10 个卷积层组成（包含 LeakyReLU、最大池化和 dropout）。解码器是一个具有跳跃连接的对称扩展路径。最后的激活层是双曲正切函数（tanh），以获得 -1 到 1 的输出分布。对于从头开始的训练，初始随机权重使用 He 正态初始化器设置。\n\n模型使用 Adam 优化器进行编译，损失函数采用 Huber 损失，作为 L1 和 L2 损失之间的折衷。\n\n在现代 GPU 上训练需要几个小时。\n\n如果你的本地计算机有用于深度学习的 GPU，你可以使用以下命令进行训练：\n`python main.py --mode=\"training\"`。它接收 `args.py` 中定义的输入参数。默认情况下，它将从头开始训练（你可以通过将 `training_from_scratch` 设置为 false 来改变这一点）。你也可以从 `weights_folder` 和 `name_model` 指定的预训练权重开始训练。我在 `.\u002Fweights` 中提供了 `model_unet.h5`，其中包含我训练的权重。训练的周期数和批次大小由 `epochs` 和 `batch_size` 指定。最佳权重在训练过程中会自动保存为 `model_best.h5`。你可以调用 fit_generator 以便在训练时只将部分数据加载到内存。\n\n就我个人而言，我使用 Google Colab 提供的免费 GPU 进行训练。我在 `.\u002Fcolab\u002FTrain_denoise.ipynb` 中提供了一个笔记本示例。如果你的云端硬盘有较大的可用空间，你可以将所有训练数据加载到云端硬盘，并在训练时使用 tensorflow.keras 的 fit_generator 选项加载其中一部分。我个人在 Google 云端硬盘上的可用空间有限，因此我预先准备了 5 GB 的批次数据，以便加载到云端硬盘进行训练。权重会定期保存，并在下次训练时重新加载。\n\n最终，我获得了 0.002129 的训练损失和 0.002406 的验证损失。下图是其中一次训练中的损失曲线图。\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvbelz_Speech-enhancement_readme_64e7c7aef630.png\" alt=\"loss training\" title=\"loss training\" \u002F>\n\n## 预测\n\n在预测阶段，带噪语音音频被转换为略高于 1 秒窗口的 numpy 时间序列。每个时间序列通过 STFT 变换转换为幅度谱图和相位谱图。带噪语音谱图被输入到 U-Net 网络中，该网络将为每个窗口预测噪声模型（参见下图）。一旦转换为幅度谱图，单个窗口的预测时间在使用传统 CPU 时约为 80 毫秒。\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvbelz_Speech-enhancement_readme_8b8b2bb33f3b.png\" alt=\"flow prediction part 1\" title=\"flow prediction part 1\" \u002F>\n\n然后，从带噪语音谱图中减去预测的噪声模型（这里我应用了直接减法，因为这对我的任务来说已经足够；我们也可以设想训练第二个网络来调整噪声模型，或者应用信号处理中常用的匹配滤波器）。\"去噪后\"的幅度谱图与初始相位结合，作为逆短时傅里叶变换（ISTFT）的输入。然后，我们的去噪时间序列可以转换回音频（参见下图）。\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvbelz_Speech-enhancement_readme_e94cd63ca1ff.png\" alt=\"flow prediction part 2\" title=\"flow prediction part 2\" \u002F>\n\n让我们看看在验证数据上的表现！\n\n下面我展示了警报声\u002F昆虫声\u002F吸尘器声\u002F铃声噪声的验证示例结果。\n对于每个示例，我展示了初始带噪语音谱图、网络预测的去噪谱图以及真实的干净语音谱图。我们可以看到，网络能够很好地泛化噪声建模，并生成一个略微平滑的语音谱图版本，非常接近真实的干净语音谱图。\n\n更多关于验证数据的谱图去噪示例显示在仓库顶部的初始 gif 动画中。\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvbelz_Speech-enhancement_readme_4db5084a639a.png\" alt=\"validation examples\" title=\"Spectrogram validation examples\" \u002F>\n\n让我们听听转换回声音的结果：\n\n> 警报声示例音频：\n\n[输入示例 警报声](https:\u002F\u002Fvbelz.github.io\u002FSpeech-enhancement\u002Fdemo_data\u002Fvalidation\u002Fnoisy_voice_alarm39.wav)\n\n[预测输出示例 警报声](https:\u002F\u002Fvbelz.github.io\u002FSpeech-enhancement\u002Fdemo_data\u002Fvalidation\u002Fvoice_pred_alarm39.wav)\n\n[真实输出示例 警报声](https:\u002F\u002Fvbelz.github.io\u002FSpeech-enhancement\u002Fdemo_data\u002Fvalidation\u002Fvoice_alarm39.wav)\n\n> 昆虫声示例音频：\n\n[输入示例 昆虫声](https:\u002F\u002Fvbelz.github.io\u002FSpeech-enhancement\u002Fdemo_data\u002Fvalidation\u002Fnoisy_voice_insect41.wav)\n\n[预测输出示例 昆虫声](https:\u002F\u002Fvbelz.github.io\u002FSpeech-enhancement\u002Fdemo_data\u002Fvalidation\u002Fvoice_pred_insect41.wav)\n\n[真实输出示例 昆虫声](https:\u002F\u002Fvbelz.github.io\u002FSpeech-enhancement\u002Fdemo_data\u002Fvalidation\u002Fvoice_insect41.wav)\n\n> 吸尘器声示例音频：\n\n[输入示例 
吸尘器声](https:\u002F\u002Fvbelz.github.io\u002FSpeech-enhancement\u002Fdemo_data\u002Fvalidation\u002Fnoisy_voice_vaccum35.wav)\n\n[预测输出示例 吸尘器声](https:\u002F\u002Fvbelz.github.io\u002FSpeech-enhancement\u002Fdemo_data\u002Fvalidation\u002Fvoice_pred_vaccum35.wav)\n\n[真实输出示例 吸尘器声](https:\u002F\u002Fvbelz.github.io\u002FSpeech-enhancement\u002Fdemo_data\u002Fvalidation\u002Fvoice_vaccum35.wav)\n\n> 铃声示例音频：\n\n[输入示例 铃声](https:\u002F\u002Fvbelz.github.io\u002FSpeech-enhancement\u002Fdemo_data\u002Fvalidation\u002Fnoisy_voice_bells28.wav)\n\n[预测输出示例 铃声](https:\u002F\u002Fvbelz.github.io\u002FSpeech-enhancement\u002Fdemo_data\u002Fvalidation\u002Fvoice_pred_bells28.wav)\n\n[真实输出示例 铃声](https:\u002F\u002Fvbelz.github.io\u002FSpeech-enhancement\u002Fdemo_data\u002Fvalidation\u002Fvoice_bells28.wav)\n\n下面我展示了转换回时间序列的相应显示图：\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvbelz_Speech-enhancement_readme_00ce16fc3aeb.png\" alt=\"validation examples timeserie\" title=\"Time serie validation examples\" \u002F>\n\n你可以在 `.\u002Fdemo_data` 文件夹中我提供的 Jupyter notebook `demo_predictions.ipynb` 中查看这些显示图和音频。\n\n下面，我展示了仓库顶部的谱图去噪gif在时间序列域中的对应gif。\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvbelz_Speech-enhancement_readme_45ef0a136056.gif\" alt=\"Timeserie denoising\" title=\"Speech enhancement\"\u002F>\n\n作为一个极限测试，我将其应用于一些与高水平多种噪声混合的语音。\n网络在去噪方面表现出奇地好。对一个5秒音频进行去噪的总时间约为4秒（使用传统CPU）。\n\n以下是一些示例：\n\n> 示例 1：\n\n[输入示例 测试 1](https:\u002F\u002Fvbelz.github.io\u002FSpeech-enhancement\u002Fdemo_data\u002Ftest\u002Fnoisy_voice_long_t2.wav)\n\n[预测输出示例 测试 1](https:\u002F\u002Fvbelz.github.io\u002FSpeech-enhancement\u002Fdemo_data\u002Fsave_predictions\u002Fdenoise_t2.wav)\n\n> 示例 2：\n\n[输入示例 测试 2](https:\u002F\u002Fvbelz.github.io\u002FSpeech-enhancement\u002Fdemo_data\u002Ftest\u002Fnoisy_voice_long_t1.wav)\n\n[预测输出示例 测试 2](https:\u002F\u002Fvbelz.github.io\u002FSpeech-enhancement\u002Fdemo_data\u002Fsave_predictions\u002Fdenoise_t1.wav)\n\n## 如何使用？\n\n```\n- 克隆此仓库\n- pip install -r requirements.txt\n- python main.py OPTIONS\n\n* 程序模式（可能的 OPTIONS）：\n\n--mode: default='prediction', type=str, choices=['data_creation', 'training', 'prediction']\n\n```\n\n请查看 `args.py` 中每个选项可能的参数。\n\n## 参考文献\n\n>Jansson, Andreas, Eric J. Humphrey, Nicola Montecchio, Rachel M. Bittner, Aparna Kumar and Tillman Weyde.**Singing Voice Separation with Deep U-Net Convolutional Networks.** *ISMIR* (2017).\n>\n>[https:\u002F\u002Fejhumphrey.com\u002Fassets\u002Fpdf\u002Fjansson2017singing.pdf]\n\n>Grais, Emad M. and Plumbley, Mark D., **Single Channel Audio Source Separation using Convolutional Denoising Autoencoders** (2017).\n>\n>[https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.08019]\n\n>Ronneberger O., Fischer P., Brox T. (2015) **U-Net: Convolutional Networks for Biomedical Image Segmentation**. In: Navab N., Hornegger J., Wells W., Frangi A. (eds) *Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015*. MICCAI 2015. Lecture Notes in Computer Science, vol 9351. Springer, Cham\n>\n>[https:\u002F\u002Farxiv.org\u002Fabs\u002F1505.04597]\n\n> K. J. Piczak. **ESC: Dataset for Environmental Sound Classification**. 
*Proceedings of the 23rd Annual ACM Conference on Multimedia*, Brisbane, Australia, 2015.\n>\n> [DOI: http:\u002F\u002Fdx.doi.org\u002F10.1145\u002F2733373.2806390]\n\n## 许可证\n\n[![License](http:\u002F\u002Fimg.shields.io\u002F:license-mit-blue.svg?style=flat-square)](http:\u002F\u002Fbadges.mit-license.org)\n\n- **[MIT license](http:\u002F\u002Fopensource.org\u002Flicenses\u002Fmit-license.php)**","# Speech-enhancement 快速上手指南\n\n## 环境准备\n\n### 系统要求\n- Python 3.6+\n- 推荐使用 Linux 或 macOS 系统\n- 建议配备 GPU 以加速训练（非必需）\n\n### 前置依赖\n安装所需 Python 包：\n```bash\npip install tensorflow==2.3.0\npip install librosa==0.8.0\npip install soundfile==0.10.3\npip install matplotlib==3.3.0\n```\n\n## 安装步骤\n\n1. **克隆仓库**\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fvbelz\u002FSpeech-enhancement.git\ncd Speech-enhancement\n```\n\n2. **准备数据目录**\n```bash\nmkdir -p data\u002FTrain data\u002FTest\n```\n按以下结构组织目录：\n```\ndata\u002F\n├── Train\u002F\n│   ├── noise\u002F\n│   └── voice\u002F\n└── Test\u002F\n    ├── noise\u002F\n    └── voice\u002F\n```\n\n3. **配置参数**\n编辑 `args.py` 文件，设置以下路径：\n```python\nnoise_dir = \"data\u002FTrain\u002Fnoise\"      # 噪声文件目录\nvoice_dir = \"data\u002FTrain\u002Fvoice\"      # 干净语音文件目录\npath_save_spectrogram = \"spectrograms\"  # 频谱图保存路径\n```\n\n## 基本使用\n\n### 1. 数据准备\n将干净的语音文件放入 `data\u002FTrain\u002Fvoice\u002F`，噪声文件放入 `data\u002FTrain\u002Fnoise\u002F`，然后运行：\n```bash\npython main.py --mode='data_creation'\n```\n这将创建训练所需的频谱图数据集。\n\n### 2. 模型训练\n使用默认参数开始训练：\n```bash\npython main.py --mode=\"training\"\n```\n如需使用预训练权重，可在 `args.py` 中设置：\n```python\ntraining_from_scratch = False\nweights_folder = \".\u002Fweights\"\nname_model = \"model_unet.h5\"\n```\n\n### 3. 语音增强预测\n对音频文件进行降噪处理：\n```bash\npython main.py --mode=\"prediction\" --audio_input=\"your_noisy_audio.wav\"\n```\n输出结果将保存在指定目录中。\n\n### 快速测试示例\n项目提供了演示 Notebook：\n```bash\njupyter notebook demo_predictions.ipynb\n```\n可在 `.\u002Fdemo_data` 文件夹中找到示例音频和可视化结果。\n\n## 注意事项\n- 训练数据建议至少 40,000 个样本以获得较好效果\n- 音频采样率默认为 8kHz\n- 支持 WAV 格式音频文件\n- 如需使用 GPU 加速，请确保正确配置 CUDA 环境","**场景背景**：王工程师正在开发一款智能车载语音助手，需要在嘈杂的行车环境中（如引擎声、风噪、空调声）准确识别用户的语音指令，但现有语音识别引擎在噪声环境下误触发率和识别错误率都很高。\n\n### 没有 Speech-enhancement 时\n- **语音识别准确率低**：车辆高速行驶时，引擎和风噪会严重掩盖用户说“调低空调”的指令，导致系统无法识别或错误识别为其他指令。\n- **需要大量定制化噪声样本采集**：为了提升噪声鲁棒性，团队不得不专门在不同车速、路况下录制数百小时的车内噪声样本，用于训练语音识别模型，数据采集和标注成本极高。\n- **系统响应延迟明显**：前端采用传统滤波降噪算法（如谱减法）处理麦克风输入，计算虽快但会损伤语音成分，导致后续识别引擎需要更长时间进行置信度判断，整体响应变慢。\n- **无法应对突发性噪声**：突然鸣笛或车窗升降的瞬时噪声会直接穿透传统降噪，造成语音识别流程中断，用户需要重复发出指令。\n\n### 使用 Speech-enhancement 后\n- **语音识别准确率显著提升**：Speech-enhancement 基于深度学习的谱图降噪模型，能够有效分离出引擎背景噪声中的语音成分，使“调低空调”等关键指令的频谱特征清晰保留，识别准确率从不足70%提升至94%以上。\n- **直接利用开源噪声库进行数据增强**：使用工具内置的10类环境噪声（包括类似引擎声的“嗡嗡”声）和干净语音库，可快速合成大量带噪训练数据，无需实地采集，节省了超过80%的数据准备时间。\n- **实现端到端实时处理**：工具将1秒左右的音频窗口转换为幅度谱图，并通过轻量级卷积网络实时预测并减去噪声模型，处理延迟低于50毫秒，为后续识别留出充足时间，整体响应流畅。\n- **有效抑制突发噪声干扰**：模型在训练中已学习多种突发噪声（如警报声、风声）的频谱模式，能在噪声突现时快速识别并抑制，保障语音流连续稳定，避免指令中断。\n\nSpeech-enhancement 通过深度学习降噪将车载语音交互从“勉强可用”变为“可靠流畅”，其核心价值在于用高质量的数据驱动方法替代了传统信号处理的经验性调优。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvbelz_Speech-enhancement_811e82d0.png","vbelz",null,"https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fvbelz_2ee4b171.jpg","vincent.belz@gmail.com","https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fvincentbelz\u002F","https:\u002F\u002Fgithub.com\u002Fvbelz",[83],{"name":84,"color":85,"percentage":86},"Python","#3572A5",100,756,131,"2026-04-03T18:00:11","MIT","未说明","需要 GPU 以进行训练（现代 GPU 数小时），推理可使用 CPU（每窗口约 80 毫秒）",{"notes":94,"python":91,"dependencies":95},"1. 数据准备需创建独立的数据文件夹，包含噪声和干净语音文件。2. 
训练数据推荐生成 40000 个以上样本。3. 提供 Google Colab 训练示例，支持云端 GPU 训练。4. 预训练权重文件 model_unet.h5 已提供。5. 音频采样率为 8kHz，输入为 1 秒以上音频窗口。",[96,97,98,99,100],"tensorflow","keras","librosa","numpy","soundfile",[13,55],[103,104,105,106],"deep-learning","unet","cnn","speech","2026-03-27T02:49:30.150509","2026-04-06T08:46:20.899679",[110,115,120,125,130,135],{"id":111,"question_zh":112,"answer_zh":113,"source_url":114},3799,"运行程序时，解析器只读取文件名的第一个字符并报错“文件未找到”，如何解决？","问题出在 `args.py` 中参数定义的类型。当使用 `parser.add_argument('--audio_input_prediction', default=['default_audio.wav'], type=list)` 时，如果直接传递文件名如 `myaudio.wav`，解析器会错误地只读取第一个字符。解决方案是修改参数定义，将 `type=list` 移除或正确设置，或者确保传递的参数格式正确。根据讨论，提问者最终找到了解决方案。","https:\u002F\u002Fgithub.com\u002Fvbelz\u002FSpeech-enhancement\u002Fissues\u002F12",{"id":116,"question_zh":117,"answer_zh":118,"source_url":119},3800,"应该使用 Python 2 还是 Python 3 来运行此项目？","项目应使用 Python 3 运行。具体来说，用户通过使用 Python 3.7 成功解决了安装依赖时遇到的问题。","https:\u002F\u002Fgithub.com\u002Fvbelz\u002FSpeech-enhancement\u002Fissues\u002F4",{"id":121,"question_zh":122,"answer_zh":123,"source_url":124},3801,"是否有相关的论文或技术文章可供参考？","作者在 Towards Data Science 上发表了一篇相关文章，链接为：https:\u002F\u002Ftowardsdatascience.com\u002Fspeech-enhancement-with-deep-learning-36a1991d3d8d。作者表示未来也计划在 arXiv 上发布论文。","https:\u002F\u002Fgithub.com\u002Fvbelz\u002FSpeech-enhancement\u002Fissues\u002F1",{"id":126,"question_zh":127,"answer_zh":128,"source_url":129},3802,"运行预测时出现“TypeError: Could not locate class 'Model'”错误，如何解决？","此错误通常与 TensorFlow\u002FKeras 版本兼容性有关。一个可行的解决方案是安装特定版本的 TensorFlow。尝试运行命令：`pip install tensorflow==2.13.0`。","https:\u002F\u002Fgithub.com\u002Fvbelz\u002FSpeech-enhancement\u002Fissues\u002F38",{"id":131,"question_zh":132,"answer_zh":133,"source_url":134},3803,"在训练过程中，代码似乎是在用测试数据进行验证，这是正确的吗？","在 `generator_nn.fit` 函数中，参数 `validation_data` 被设置为 `(X_test, y_test)`。这确实意味着在训练过程中使用测试集进行验证，这可能是一个错误或特定的设计选择。通常，验证集应该与测试集分开。用户指出了这一点，但该 issue 中没有维护者的回复。","https:\u002F\u002Fgithub.com\u002Fvbelz\u002FSpeech-enhancement\u002Fissues\u002F33",{"id":136,"question_zh":137,"answer_zh":138,"source_url":139},3804,"代码中的全局缩放（global scaling）是如何选择的？为什么输入和输出的缩放因子不同？","该 issue 中用户询问了 `matrix_spec` 的全局缩放以及 `X_in` 和 `X_ou` 使用不同缩放因子的原因。这是一个关于数据预处理具体实现的问题。在提供的 issue 数据中，维护者没有回复，因此没有具体的解释。建议查看代码注释或相关文档以了解缩放逻辑。","https:\u002F\u002Fgithub.com\u002Fvbelz\u002FSpeech-enhancement\u002Fissues\u002F14",[]]