[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-mlachmish--MusicGenreClassification":3,"tool-mlachmish--MusicGenreClassification":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":78,"owner_location":78,"owner_email":78,"owner_twitter":78,"owner_website":79,"owner_url":80,"languages":81,"stars":86,"forks":87,"last_commit_at":88,"license":89,"difficulty_score":10,"env_os":90,"env_gpu":90,"env_ram":90,"env_deps":91,"category_tags":97,"github_topics":98,"view_count":10,"oss_zip_url":78,"oss_zip_packed_at":78,"status":16,"created_at":103,"updated_at":104,"faqs":105,"releases":121},854,"mlachmish\u002FMusicGenreClassification","MusicGenreClassification","Classify music genre from a 10 second sound stream using a Neural Network.","MusicGenreClassification 是一个基于深度学习的开源项目，能够仅凭 10 秒音频流就精准识别音乐流派。它致力于解决音频处理领域中流派分类难、优质数据获取成本高的问题。过去许多研究受限于版权或数据规模，往往难以取得理想效果。该项目创新性地结合了卷积神经网络（CNN）与大规模音频特征提取，成功实现了涵盖 10 个常见流派的分类任务，性能优于早期研究。\n\n项目基于 Python 和 TensorFlow 构建，不仅展示了模型训练流程，还提供了关于如何合法获取音频数据的实用方案（如利用 MSD 数据集及第三方预览接口）。这使其成为深度学习开发者、音频算法研究人员以及学生群体的优秀学习资源。无论是想入门声音处理，还是探索音乐推荐系统的底层逻辑，MusicGenreClassification 都能提供有价值的技术参考和实践范例。","\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmlachmish_MusicGenreClassification_readme_fd90e7c387ad.png\" alt=\"MusicGenreClassification\" width=\"250\">\n\u003C\u002Fp>\n\n\n# MusicGenreClassification\n\nAcademic research in the 
field of **Deep Learning (Deep Neural Networks) and Sound Processing**, Tel Aviv University.\n\nFeatured in [Medium](https:\u002F\u002Fmedium.com\u002F@matanlachmish\u002Fmusic-genre-classification-470aaac9833d).\n\n## Abstract\n\nThis paper discusses the task of classifying the music genre of a sound sample.\n\n## Introduction\n\nWhen I decided to work in the field of sound processing, I assumed that genre classification would be a parallel problem to image classification. To my surprise, I did not find many deep learning works that tackled this exact problem. One paper that did tackle it is Tao Feng’s paper [1] from the University of Illinois. I learned a lot from this paper, but honestly, the results it presented were not impressive.\n\nSo I had to look at other papers that were related but not an exact match. A very influential one was “Deep content-based music recommendation” [2], which applies deep learning techniques to content-based music recommendation. The way its authors obtained their dataset, and the preprocessing they applied to the sound, strongly informed my implementation. This paper was also mentioned recently on the Spotify blog [3]: Spotify recruited a deep learning intern who, building on the above work, implemented a music recommendation engine. His simple yet very efficient network convinced me that Tao’s RBM was not the best approach, and therefore my implementation uses a CNN instead, as in the Spotify blog. One very important note is that Tao’s work published results only for 2-, 3-, and 4-class classification. He naturally got very good results for 2 classes, but the more classes he tried to classify, the poorer his results became. My work tackles the full 10-class challenge, a much more difficult task. A subtask of this project was to learn a new deep learning SDK; I had been waiting for an opportunity to try Google’s new TensorFlow [4]. 
This project is implemented in Python, and the machine learning part uses TensorFlow.\n\n## The Dataset\n\nGetting the dataset was probably the most time-consuming part of this work. Working with music is a big pain: every file is usually a few MBs, and recordings vary widely in quality and parameters (number of frequencies, bits per second, etc.). But the biggest pain is copyright; there is no legitimate dataset of famous songs, as licensing them would cost money. Tao’s paper is based on a dataset called GTZAN [5]. This dataset is quite small (100 songs per genre × 10 genres = 1,000 songs overall), and its copyright permission is questionable. From my perspective, this is one of the reasons that kept him from getting better results. So I looked for ways to generate more data to learn from. Eventually I found the MSD [6] dataset (Million Song Dataset). It is a freely available collection of audio features and metadata for a million contemporary popular music tracks, around 280 GB of pure metadata. A project built on top of MSD, called tagtraum [7], classifies MSD songs into genres. The remaining problem was getting the sound itself, and here is where I got a little creative. I found that one of the tags every song in the dataset has is an ID from a provider called 7Digital [8]. 7Digital is a SaaS provider for music applications; it basically lets you stream music for money. I signed up to 7Digital as a developer, and after their approval I could access their API. Streaming any song still costs money, but I found out that they let users preview a random 30 seconds of a song before paying. This is more than enough for my deep learning task, so I wrote `previewDownloader.py`, which downloads a 30-second preview of every song in the MSD dataset. Unfortunately I had only my laptop for this mission, so I had to settle for only 1% of the dataset (around 2.8 GB).\n\nThe genres I am classifying are:\n1. blues\u003Cbr>\n2. classical\u003Cbr>\n3. country\u003Cbr>\n4. 
disco\u003Cbr>\n5. hiphop\u003Cbr>\n6. jazz\u003Cbr>\n7. metal\u003Cbr>\n8. pop\u003Cbr>\n9. reggae\u003Cbr>\n10. rock\u003Cbr>\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmlachmish_MusicGenreClassification_readme_1476e10453ca.png\" alt=\"Music genre popularity\" width=\"500\">\n\u003C\u002Fp>\n\n## Preprocessing the data\n\nHaving a big dataset isn't enough. Unlike image tasks, I cannot work directly on the raw sound samples; a quick calculation shows why: 30 seconds × 22050 samples\u002Fsecond = a vector of length 661,500, which would be a heavy load for a conventional machine learning method.\n\nFollowing all the papers I read, and after researching acoustic analysis a little, it is quite clear that the industry uses Mel-frequency cepstral coefficients (MFCC) as the feature vector for a sound sample; I used the librosa [9] implementation.\n\nMFCCs are derived as follows:\n1. Take the Fourier transform of (a windowed excerpt of) a signal.\n2. Map the powers of the spectrum obtained above onto the mel scale, using triangular overlapping windows.\n3. Take the logs of the powers at each of the mel frequencies.\n4. Take the discrete cosine transform of the list of mel log powers, as if it were a signal.\n5. 
The MFCCs are the amplitudes of the resulting spectrum.\n\nI tried several window size and stride values; the best results came from a window size of 100 ms and a stride of 40 ms.\n\nOne more point: Tao’s paper used MFCC features (step 5), while Sander used straight mel frequencies (step 2).\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmlachmish_MusicGenreClassification_readme_314183ddaf54.png\" alt=\"Mel power over time\" width=\"650\">\n\u003C\u002Fp>\n\nI tried both approaches and found that I got far better results using just the mel frequencies, though the trade-off was, of course, training time.\nBefore moving on to building a network, I wanted to visualise the preprocessed dataset, which I did with the t-SNE [10] algorithm. Below you can see the t-SNE graphs for MFCC (step 5) and mel frequencies (step 2):\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmlachmish_MusicGenreClassification_readme_8bc3fcbb45af.png\" alt=\"t-SNE MFCC samples as genres\" width=\"500\">\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmlachmish_MusicGenreClassification_readme_59cf1a4441bf.png\" alt=\"t-SNE mel-spectrogram samples as genres\" width=\"500\">\n\u003C\u002Fp>\n\n## The Graph\n\nAfter seeing the results Tao and Sander reached, I decided to go with a convolutional neural network implementation. The network receives a vector of 599 mel-frequency bins, each containing the 128 frequencies that describe its window. The network consists of 3 hidden layers with max pooling between them. 
Finally, a fully connected layer and then a softmax produce a 10-dimensional vector for our ten genre classes.\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmlachmish_MusicGenreClassification_readme_86c39944c305.png\" alt=\"Neural network\" width=\"500\">\n\u003C\u002Fp>\n\nI also implemented another network for MFCC features instead of mel frequencies; the only differences are the sizes (13 frequencies per window instead of 128).\n\nVisualisation of various filters (from Sander’s paper):\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmlachmish_MusicGenreClassification_readme_1a0adb590f35.png\" alt=\"Filters visualization\" width=\"250\">\n\u003C\u002Fp>\n\n• Filter 14 seems to pick up vibrato singing.\n• Filter 242 picks up some kind of ringing ambience.\n• Filter 250 picks up vocal thirds, i.e. multiple singers singing the same thing, but with notes a major third (4 semitones) apart.\n• Filter 253 picks up various types of bass drum sounds.\n\n## Results\n\nAs I explained in the introduction, the papers I based my work on did not solve the exact problem I did; for example, Tao’s paper published results for classifying 2, 3, and 4 classes (genres). \n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmlachmish_MusicGenreClassification_readme_97bd80c0b0d0.png\" alt=\"Tao Feng's results\" width=\"750\">\n\u003C\u002Fp>\n\nI also looked for benchmarks outside the deep learning field, and found a paper titled “A BENCHMARK DATASET FOR AUDIO CLASSIFICATION AND CLUSTERING” [11]. 
This paper benchmarks a task very similar to mine; the genres it classifies are: Blues, Electronic, Jazz, Pop, HipHop, Rock, Folk, Alternative, Funk.\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmlachmish_MusicGenreClassification_readme_c0b49346d408.png\" alt=\"Benchmark results\" width=\"750\">\n\u003C\u002Fp>\n\n### My results:\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmlachmish_MusicGenreClassification_readme_9ed851306200.png\" alt=\"My results\" width=\"750\">\n\u003C\u002Fp>\n\n## Code\n\n### Documentation\n\n• previewDownloader.py: \nUSAGE: python previewDownloader.py [path to MSD data] \nThis script iterates over all ‘.h5’ files in a directory and downloads a 30-second sample from 7digital.\n\n• preproccess.py: \nUSAGE: python preproccess.py [path to MSD mp3 data] \nThis script preprocesses the sound files, calculating MFCCs over a sliding window and saving the result in a ‘.pp’ file.\n\n• formatInput.py: \nUSAGE: python formatInput.py [path to MSD pp data] \nThis script iterates over all ‘.pp’ files and generates ‘data’ and ‘labels’, which are used as input to the NN. It also outputs a t-SNE graph at the end.\n\n• train.py: \nUSAGE: python train.py \nThis script builds the neural network and feeds it ‘data’ and ‘labels’. When it is done, it saves ‘model.final’.\n\n### Complete Installation\n\n\u003Cul>\n\u003Cli>Download the dataset files from https:\u002F\u002Fwww.dropbox.com\u002Fs\u002F8ohx6m23co1qaz3\u002FDataSet.zip?dl=0.\u003C\u002Fli>\n\u003Cli>Unzip the file.\u003C\u002Fli>\n\u003Cli>Place the dataset files in the expected directory structure.\u003C\u002Fli>\n\u003C\u002Ful>\n\n## References\n\n[1] Tao Feng, Deep learning for music genre classification, University of Illinois. 
https:\u002F\u002Fcourses.engr.illinois.edu\u002Fece544na\u002Ffa2014\u002FTao_Feng.pdf\n[2] Aäron van den Oord, Sander Dieleman, Benjamin Schrauwen, Deep content-based music recommendation. http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F5004-deep-content-based-music-recommendation.pdf\n[3] Sander Dieleman, Recommending music on Spotify with deep learning, August 05, 2014. http:\u002F\u002Fbenanne.github.io\u002F2014\u002F08\u002F05\u002Fspotify-cnns.html\n[4] https:\u002F\u002Fwww.tensorflow.org\n[5] GTZAN Genre Collection. http:\u002F\u002Fmarsyasweb.appspot.com\u002Fdownload\u002Fdata_sets\u002F\n[6] Thierry Bertin-Mahieux, Daniel P.W. Ellis, Brian Whitman, and Paul Lamere. The Million Song Dataset. In Proceedings of the 12th International Society for Music Information Retrieval Conference (ISMIR 2011), 2011. http:\u002F\u002Flabrosa.ee.columbia.edu\u002Fmillionsong\u002F\n[7] Hendrik Schreiber. Improving genre annotations for the million song dataset. In Proceedings of the 16th International Conference on Music Information Retrieval (ISMIR), pages 241-247, 2015. http:\u002F\u002Fwww.tagtraum.com\u002Fmsd_genre_datasets.html\n[8] https:\u002F\u002Fwww.7digital.com\n[9] https:\u002F\u002Fgithub.com\u002Fbmcfee\u002Flibrosa\n[10] http:\u002F\u002Fscikit-learn.org\u002Fstable\u002Fmodules\u002Fgenerated\u002Fsklearn.manifold.TSNE.html\n[11] Helge Homburg, Ingo Mierswa, Bülent Möller, Katharina Morik and Michael Wurst, A BENCHMARK DATASET FOR AUDIO CLASSIFICATION AND CLUSTERING, University of Dortmund, AI Unit. 
http:\u002F\u002Fsfb876.tu-dortmund.de\u002FPublicPublicationFiles\u002F homburg_etal_2005a.pdf\n\n## Author\n\nMatan Lachmish \u003Csub>a.k.a\u003C\u002Fsub> \u003Cb>The Big Fat Ninja\u003C\u002Fb> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmlachmish_MusicGenreClassification_readme_e103d4b9db0d.png\" alt=\"The Big Fat Ninja\" width=\"13\">\u003Cbr>\nhttps:\u002F\u002Fthebigfatninja.xyz\n\n### attribution\n\nIcon made by \u003Ca title=\"Freepik\" href=\"http:\u002F\u002Fwww.freepik.com\">Freepik\u003C\u002Fa> from \u003Ca title=\"Flaticon\" href=\"http:\u002F\u002Fwww.flaticon.com\">www.flaticon.com\u003C\u002Fa>\n\n## License\n\nMusicGenreClassification is available under the MIT license. See the LICENSE file for more info.\n","\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmlachmish_MusicGenreClassification_readme_fd90e7c387ad.png\" alt=\"MusicGenreClassification\" width=\"250\">\n\u003C\u002Fp>\n\n\n# MusicGenreClassification\n\n**深度学习（深度神经网络）与声音处理**领域的学术研究，特拉维夫大学。\n\n被 [Medium](https:\u002F\u002Fmedium.com\u002F@matanlachmish\u002Fmusic-genre-classification-470aaac9833d) 收录。\n\n## 摘要\n\n本文讨论了声音样本的音乐流派分类任务。\n\n## 简介\n\n当我决定从事声音处理领域的工作时，我认为流派分类是图像分类的平行问题。令我惊讶的是，我发现没有太多关于深度学习解决这一确切问题的作品。Tao Feng 来自伊利诺伊大学的论文 [1] 解决了这个分类问题。我从这篇论文中学到了很多，但老实说，他们展示的结果并不令人印象深刻。\n\n所以我不得不寻找其他相关但不完全相同的论文。一篇非常有影响力的论文是《基于内容的音乐推荐》[2]。这篇论文是关于使用深度学习技术进行基于内容的音乐推荐。他们获取数据集的方式以及对声音进行的预处理真正启发了我的实现。此外，这篇论文最近出现在\"Spotify\"博客 [3] 上。Spotify 招募了一名深度学习实习生，他基于上述工作实现了一个音乐推荐引擎。他那简单却非常高效的网络让我认为 Tao 的 RBM（受限玻尔兹曼机）不是最佳方法，因此我的实现中包含了 CNN（卷积神经网络），就像 Spotify 博客中那样。一个非常重要的注意事项是，Tao 的工作仅发布了针对 2、3 和 4 类分类的结果。显然他在 2 类分类上取得了很好的结果，但他尝试分类的类别越多，结果就越差。我的工作对全部 10 个类别的挑战进行分类，这是一个更困难的任务。此项目的一个子任务是学习一个新的深度学习 SDK（软件开发工具包），我一直在等待机会学习 Google 的新 TensorFlow[4]。本项目用 Python 实现，机器学习部分使用 TensorFlow。\n\n## 数据集\n\n获取数据集可能是这项工作中最耗时的部分。处理音乐很麻烦，每个文件通常有几十 MB，录音的质量和参数多种多样（频率数量、每秒比特数等……）。但最大的痛点是版权问题，没有合法的著名歌曲数据集，因为它们需要付费。Tao 的论文基于一个名为 GTZAN[5] 的数据集。这个数据集相当小（每流派 100 
首歌曲 x 10 个流派 = 总共 1,000 首歌曲），且版权许可存疑。在我看来，这是阻碍他获得更好结果的其中一个原因。所以，我寻找了更多可以从中学习的数据。最终我找到了 MSD[6] 数据集（Million Song Dataset，百万歌曲数据集）。它是一个免费提供的音频特征和元数据集合，包含一百万首当代流行音乐曲目。大约 280 GB 的纯元数据。MSD 之上有一个名为 tagtraum[7] 的项目，它将 MSD 歌曲分类为流派。现在的问题是如何获取声音本身，这就是我发挥创意的地方。我发现数据集中每首歌都有一个标签，是来自提供商 7Digital[8] 的 ID。7Digital 是一个音乐应用程序的 SaaS（软件即服务）提供商，它基本上让你付费流式传输音乐。我以开发者身份注册了 7Digital，在他们的批准之后，我可以访问他们的 API。尽管如此，任何歌曲流媒体都需要付费，但我发现他们允许用户在付费前预览随机 30 秒的歌曲。这对于我的深度学习任务来说已经足够了，所以我写了 `previewDownloader.py` 来下载 MSD 数据集中每首歌的 30 秒预览。不幸的是，我只有一台笔记本电脑用于这项任务，所以我只能满足于数据集的 1%（约 2.8GB）。\n\n我正在分类的流派包括：\n1. blues（蓝调）\u003Cbr>\n2. classical（古典）\u003Cbr>\n3. country（乡村）\u003Cbr>\n4. disco（迪斯科）\u003Cbr>\n5. hiphop（嘻哈）\u003Cbr>\n6. jazz（爵士）\u003Cbr>\n7. metal（金属）\u003Cbr>\n8. pop（流行）\u003Cbr>\n9. reggae（雷鬼）\u003Cbr>\n10. rock（摇滚）\u003Cbr>\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmlachmish_MusicGenreClassification_readme_1476e10453ca.png\" alt=\"Music genre popularity\" width=\"500\">\n\u003C\u002Fp>\n\n## 数据预处理\n\n拥有大数据集是不够的，与图像任务相反，我不能直接在原始声音样本上工作。一个简单的计算：30 秒 × 22050 样本\u002F秒 = 661500 长度的向量，这对传统的机器学习方法是沉重的负担。\n\n遵循我阅读的所有论文并稍微研究声学分析后，很明显行业正在使用梅尔频率倒谱系数（MFCC）作为声音样本的特征向量，我使用了 librosa[9] 的实现。\n\nMFCC 推导如下：\n1. 对信号的（加窗片段）进行傅里叶变换。\n2. 将上述获得的频谱功率映射到梅尔刻度，使用三角形重叠窗口。\n3. 取每个梅尔频率处的功率的对数。\n4. 对该梅尔对数功率列表进行离散余弦变换，将其视为信号。\n5. 
MFCC 是所得频谱的幅度。\n\n我尝试了几种窗口大小和步长值，最好的结果是窗口大小为 100ms，步长为 40ms。\n\n还有一点是，Tao 的论文使用了 MFCC 特征（步骤 5），而 Sander 使用了直接的梅尔频率（步骤 2）。\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmlachmish_MusicGenreClassification_readme_314183ddaf54.png\" alt=\"Mel power over time\" width=\"650\">\n\u003C\u002Fp>\n\n我尝试了这两种方法，发现仅使用梅尔频率可以获得好得多的结果，但代价自然是训练时间。在继续构建网络之前，我想可视化预处理后的数据集，我通过 t-SNE[10] 算法实现了这一点。下面你可以看到 MFCC（步骤 5）和梅尔频率（步骤 2）的 t-SNE 图：\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmlachmish_MusicGenreClassification_readme_8bc3fcbb45af.png\" alt=\"t-SNE MFCC samples as genres\" width=\"500\">\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmlachmish_MusicGenreClassification_readme_59cf1a4441bf.png\" alt=\"t-SNE mel-spectrogram samples as genres\" width=\"500\">\n\u003C\u002Fp>\n\n## 网络架构\n\n在看到 Tao 和 Sander 得出的结果后，我决定采用卷积神经网络（Convolutional Neural Network, CNN）实现方案。该网络接收一个包含 599 个梅尔频率包（mel-frequency bins）的向量，每个包包含描述其窗口的 128 个频率。网络由 3 个隐藏层组成，层与层之间进行了最大池化（max pooling）。最后是一个全连接层（fully connected layer），然后是 Softmax，最终得到一个 10 维向量，对应我们的十个音乐流派类别。\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmlachmish_MusicGenreClassification_readme_86c39944c305.png\" alt=\"Neural network\" width=\"500\">\n\u003C\u002Fp>\n\n我还为 MFCC（梅尔频率倒谱系数）特征实现了另一个网络，而不是使用梅尔频率，唯一的区别在于尺寸（每个窗口 13 个频率，而不是 128 个）。\n\n各种滤波器的可视化（来自 Sander 的论文）：\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmlachmish_MusicGenreClassification_readme_1a0adb590f35.png\" alt=\"Filters visualization\" width=\"250\">\n\u003C\u002Fp>\n\n• 滤波器 14 似乎能捕捉颤音演唱。\n• 滤波器 242 能捕捉某种回响环境音。\n• 滤波器 250 能捕捉人声三度，即多个歌手唱同样的内容，但音符相差一个大三度（4 个半音）。\n• 滤波器 253 能捕捉各种类型的低音鼓声音。\n\n## 结果\n\n正如我在引言中解释的那样，我工作所依据的论文并没有解决与我完全相同的问题，例如 Tao 的论文只发布了针对分类 2、3 和 4 个类别（流派）的结果。 \n\u003Cp 
align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmlachmish_MusicGenreClassification_readme_97bd80c0b0d0.png\" alt=\"Tao Feng's results\" width=\"750\">\n\u003C\u002Fp>\n\n我在深度学习领域之外寻找了基准测试（benchmark），找到了一篇题为“音频分类与聚类的基准数据集”[11] 的论文。这篇论文评估了一个与我的任务非常相似的任务，它分类的流派包括：蓝调（Blues）、电子（Electronic）、爵士（Jazz）、流行（Pop）、嘻哈（HipHop）、摇滚（Rock）、民谣（Folk）、另类（Alternative）、放克（Funk）。\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmlachmish_MusicGenreClassification_readme_c0b49346d408.png\" alt=\"Benchmark results\" width=\"750\">\n\u003C\u002Fp>\n\n### 我的结果：\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmlachmish_MusicGenreClassification_readme_9ed851306200.png\" alt=\"My results\" width=\"750\">\n\u003C\u002Fp>\n\n## 代码\n\n### 文档\n\n• previewDownloader.py： \n用法：python previewDownloader.py [MSD 数据路径] \n此脚本遍历目录中的所有‘.h5’文件，并从 7digital 下载 30 秒的样本。\n\n• preproccess.py： \n用法：python preproccess.py [MSD mp3 数据路径] \n此脚本预处理声音文件。计算滑动窗口的 MFCC，并将结果保存为‘.pp’文件。\n\n• formatInput.py： \n用法：python formatInput.py [MSD pp 数据路径] \n该脚本遍历所有‘.pp’文件，生成将用作神经网络输入的‘data’和‘labels’。此外，脚本在末尾输出一个 t-SNE 图表。\n\n• train.py： \n用法：python train.py \n此脚本构建神经网络并用‘data’和‘labels’进行训练。完成后，它将保存‘model.final’。\n\n### 完整安装\n\n\u003Cul>\n\u003Cli>从 https:\u002F\u002Fwww.dropbox.com\u002Fs\u002F8ohx6m23co1qaz3\u002FDataSet.zip?dl=0 下载数据集文件。\u003C\u002Fli>\n\u003Cli>解压文件\u003C\u002Fli>\n\u003Cli>将数据集文件放置在指定的目录结构中\u003C\u002Fli>\n\u003C\u002Ful>\n\n\n## 参考文献\n\n[1] Tao Feng, 音乐流派分类的深度学习，伊利诺伊大学。https:\u002F\u002Fcourses.engr.illinois.edu\u002Fece544na\u002Ffa2014\u002FTao_Feng.pdf\n[2] Aäron van den Oord, Sander Dieleman, Benjamin Schrauwen, 基于内容的音乐推荐。http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F5004-deep-content-based-music-recommendation.pdf\n[3] Sander Dieleman, 使用深度学习在 Spotify 上推荐音乐，2014 年 8 月 5 日。http:\u002F\u002Fbenanne.github.io\u002F2014\u002F08\u002F05\u002Fspotify-cnns.html\n[4] 
https:\u002F\u002Fwww.tensorflow.org\n[5] GTZAN 流派集合。http:\u002F\u002Fmarsyasweb.appspot.com\u002Fdownload\u002Fdata_sets\u002F\n[6] Thierry Bertin-Mahieux, Daniel P.W. Ellis, Brian Whitman, 和 Paul Lamere。百万歌曲数据集。In Proceedings of the 12th International Society for Music Information Retrieval Conference (ISMIR 2011), 2011. http:\u002F\u002Flabrosa.ee.columbia.edu\u002Fmillionsong\u002F\n[7] Hendrik Schreiber。改进百万歌曲数据集的流派标注。In Proceedings of the 16th International Conference on Music Information Retrieval (ISMIR), pages 241-247, 2015. http:\u002F\u002Fwww.tagtraum.com\u002Fmsd_genre_datasets.html\n[8] https:\u002F\u002Fwww.7digital.com\n[9] https:\u002F\u002Fgithub.com\u002Fbmcfee\u002Flibrosa\n[10] http:\u002F\u002Fscikit-learn.org\u002Fstable\u002Fmodules\u002Fgenerated\u002Fsklearn.manifold.TSNE.html\n[11] Helge Homburg, Ingo Mierswa, Bülent Möller, Katharina Morik 和 Michael Wurst, 音频分类与聚类的基准数据集，多特蒙德大学，人工智能部门。http:\u002F\u002Fsfb876.tu-dortmund.de\u002FPublicPublicationFiles\u002Fhomburg_etal_2005a.pdf\n\n## 作者\n\nMatan Lachmish \u003Csub>又名\u003C\u002Fsub> \u003Cb>The Big Fat Ninja\u003C\u002Fb> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmlachmish_MusicGenreClassification_readme_e103d4b9db0d.png\" alt=\"The Big Fat Ninja\" width=\"13\">\u003Cbr>\nhttps:\u002F\u002Fthebigfatninja.xyz\n\n### 署名\n\n图标由 \u003Ca title=\"Freepik\" href=\"http:\u002F\u002Fwww.freepik.com\">Freepik\u003C\u002Fa> 制作，来自 \u003Ca title=\"Flaticon\" href=\"http:\u002F\u002Fwww.flaticon.com\">www.flaticon.com\u003C\u002Fa>\n\n## 许可证\n\nMusicGenreClassification 采用 MIT 许可证。有关更多信息，请参阅 LICENSE 文件。","# MusicGenreClassification 快速上手指南\n\n本工具基于深度学习（CNN）和 TensorFlow，用于对音乐音频样本进行流派分类。支持 10 种音乐流派的识别。\n\n## 1. 
环境准备\n\n*   **操作系统**: Windows \u002F Linux \u002F macOS\n*   **Python 版本**: Python 3.x\n*   **硬件要求**: \n    *   CPU 即可运行，建议使用 GPU 以加速训练。\n    *   内存建议 4GB 以上。\n*   **前置依赖**:\n    *   `tensorflow` (深度学习框架)\n    *   `librosa` (音频处理)\n    *   `scikit-learn` (t-SNE 可视化等)\n    *   `numpy`, `matplotlib`\n\n## 2. 安装步骤\n\n### 2.1 克隆代码库\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fmlachmish\u002FMusicGenreClassification.git\ncd MusicGenreClassification\n```\n\n### 2.2 安装 Python 依赖\n推荐使用国内镜像源加速下载：\n```bash\npip install tensorflow librosa scikit-learn numpy matplotlib -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n### 2.3 获取数据集\n本项目需要特定的音频数据。根据官方说明，可下载预处理的压缩包（注意网络访问情况）：\n1.  访问下载地址：[DataSet.zip](https:\u002F\u002Fwww.dropbox.com\u002Fs\u002F8ohx6m23co1qaz3\u002FDataSet.zip?dl=0)\n2.  解压文件到项目根目录或指定路径。\n3.  确保文件结构符合脚本预期（参考 README 中的 `Complete Installation` 部分）。\n\n> **注意**：如果无法下载上述压缩包，需自行注册 7Digital 开发者账号并使用 `previewDownloader.py` 从 MSD 数据集生成数据，此过程耗时较长且涉及费用。\n\n## 3. 
基本使用\n\n项目包含四个主要脚本，按顺序执行即可完成从预处理到模型训练的流程。\n\n### 3.1 数据预处理\n将原始音频转换为特征向量（如 MFCC 或 Mel-Frequencies）：\n```bash\npython preproccess.py [path to MSD mp3 data]\n```\n*参数说明*: `[path to MSD mp3 data]` 替换为存放 MP3 音频文件的实际路径。\n\n### 3.2 格式化输入\n将预处理后的结果转换为神经网络所需的输入格式，并生成 t-SNE 可视化图：\n```bash\npython formatInput.py [path to MSD pp data]\n```\n*参数说明*: `[path to MSD pp data]` 替换为 `preproccess.py` 生成的输出路径。\n\n### 3.3 模型训练\n构建卷积神经网络并进行训练，最终保存模型文件 `model.final`：\n```bash\npython train.py\n```\n\n### 3.4 可选：数据下载脚本\n若需自行从 MSD 数据集下载 30 秒预览片段：\n```bash\npython previewDownloader.py [path to MSD data]\n```\n*前提*: 需拥有 7Digital 开发者 API 权限。","某在线音乐社区的开发团队计划上线“智能混音”功能，急需对用户上传的 10 秒试听片段进行自动流派标记，以优化个性化推荐算法。\n\n### 没有 MusicGenreClassification 时\n- 依靠人工审核员逐一听辨音频风格，效率极其低下，无法应对海量用户上传数据。\n- 公开可用的音乐数据集往往存在版权争议或规模过小，导致难以训练出高精度的专用模型。\n- 通用图像分类模型直接迁移到音频任务效果不佳，在多流派区分度上表现严重不足。\n- 自行构建声学特征提取 pipeline 技术门槛过高，团队容易陷入漫长的参数调优困境中。\n\n### 使用 MusicGenreClassification 后\n- 直接复用基于神经网络的成熟方案，仅需输入 10 秒音频流即可实时输出流派预测结果。\n- 项目内置了针对大规模数据集优化的预处理逻辑，无需重复处理不同音质和采样率的兼容问题。\n- 采用卷积神经网络替代传统方法，显著提升了十种流派同时分类时的整体准确率与鲁棒性。\n- 原生支持 TensorFlow 框架，代码结构清晰，方便与现有的 Python 后端服务无缝集成部署。\n\n核心价值：通过开箱即用的深度学习模型，大幅降低了音乐内容理解的技术门槛与开发周期。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmlachmish_MusicGenreClassification_fd90e7c3.png","mlachmish","Matan Lachmish","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fmlachmish_59e38246.jpg",null,"www.thebigfatninja.xyz","https:\u002F\u002Fgithub.com\u002Fmlachmish",[82],{"name":83,"color":84,"percentage":85},"Python","#3572A5",100,599,121,"2026-04-02T01:16:55","MIT","未说明",{"notes":92,"python":90,"dependencies":93},"项目基于 Python 和 TensorFlow 实现。数据获取需注册 7Digital 开发者账号并通过 API 下载 30 秒音乐预览，受版权和成本限制，实际处理数据量较小（约 2.8GB）。训练前需运行预处理脚本将音频转换为 MFCC 或 Mel-frequencies 
特征向量。",[94,95,96],"tensorflow","librosa","scikit-learn",[55,13],[99,100,101,94,102],"deep-learning","music-genre-classification","paper","neural-network","2026-03-27T02:49:30.150509","2026-04-06T09:43:41.059762",[106,111,116],{"id":107,"question_zh":108,"answer_zh":109,"source_url":110},3681,"数据集下载链接失效或显示流量超限怎么办？","原下载链接可能因流量超限而失效。建议直接使用以下 wget 命令获取数据集：\n\nwget http:\u002F\u002Fopihi.cs.uvic.ca\u002Fsound\u002Fgenres.tar.gz","https:\u002F\u002Fgithub.com\u002Fmlachmish\u002FMusicGenreClassification\u002Fissues\u002F10",{"id":112,"question_zh":113,"answer_zh":114,"source_url":115},3682,"从 Dropbox 下载 DataSet.zip 时被限速、中断或无法下载怎么办？","Dropbox 可能存在隐形的下载大小限制（约 1.6GB）。维护者已重新上传了文件，请使用以下新链接尝试下载：\n\nhttps:\u002F\u002Fwww.dropbox.com\u002Fs\u002F8ohx6m23co1qaz3\u002FDataSet.zip?dl=0","https:\u002F\u002Fgithub.com\u002Fmlachmish\u002FMusicGenreClassification\u002Fissues\u002F1",{"id":117,"question_zh":118,"answer_zh":119,"source_url":120},3683,"运行 formatInput.py 和 train.py 时出现报错怎么办？","这通常是由于文件读写模式或 pickle 编码问题导致的。请按以下方式修改代码：\n\n1. 在 formatInput.py 中，使用二进制写入模式：\n```python\nwith open(\"data\", 'wb') as f:\n    f.write(pickle.dumps(data.values()))\n```\n\n2. 在 train.py 中，使用二进制读取模式，并在加载 pickle 时指定 encoding='latin1' 以避免 ASCII 字符错误：\n```python\nwith open(\"labels\", 'rb') as f:\n    content = f.read()\n    labels = pickle.loads(content, encoding='latin1')\n```","https:\u002F\u002Fgithub.com\u002Fmlachmish\u002FMusicGenreClassification\u002Fissues\u002F6",[]]