[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-zhvng--open-musiclm":3,"tool-zhvng--open-musiclm":64},[4,17,25,39,48,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,14,15],"开发框架","Agent","语言模型","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":10,"last_commit_at":23,"category_tags":24,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,15],{"id":26,"name":27,"github_repo":28,"description_zh":29,"stars":30,"difficulty_score":10,"last_commit_at":31,"category_tags":32,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[33,34,35,36,14,37,15,13,38],"图像","数据工具","视频","插件","其他","音频",{"id":40,"name":41,"github_repo":42,"description_zh":43,"stars":44,"difficulty_score":45,"last_commit_at":46,"category_tags":47,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,3,"2026-04-04T04:44:48",[14,33,13,15,37],{"id":49,"name":50,"github_repo":51,"description_zh":52,"stars":53,"difficulty_score":45,"last_commit_at":54,"category_tags":55,"status":16},519,"PaddleOCR","PaddlePaddle\u002FPaddleOCR","PaddleOCR 是一款基于百度飞桨框架开发的高性能开源光学字符识别工具包。它的核心能力是将图片、PDF 等文档中的文字提取出来，转换成计算机可读取的结构化数据，让机器真正“看懂”图文内容。\n\n面对海量纸质或电子文档，PaddleOCR 解决了人工录入效率低、数字化成本高的问题。尤其在人工智能领域，它扮演着连接图像与大型语言模型（LLM）的桥梁角色，能将视觉信息直接转化为文本输入，助力智能问答、文档分析等应用场景落地。\n\nPaddleOCR 适合开发者、算法研究人员以及有文档自动化需求的普通用户。其技术优势十分明显：不仅支持全球 100 多种语言的识别，还能在 
Windows、Linux、macOS 等多个系统上运行，并灵活适配 CPU、GPU、NPU 等各类硬件。作为一个轻量级且社区活跃的开源项目，PaddleOCR 既能满足快速集成的需求，也能支撑前沿的视觉语言研究，是处理文字识别任务的理想选择。",74913,"2026-04-05T10:44:17",[15,33,13,37],{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":45,"last_commit_at":62,"category_tags":63,"status":16},2181,"OpenHands","OpenHands\u002FOpenHands","OpenHands 是一个专注于 AI 驱动开发的开源平台，旨在让智能体（Agent）像人类开发者一样理解、编写和调试代码。它解决了传统编程中重复性劳动多、环境配置复杂以及人机协作效率低等痛点，通过自动化流程显著提升开发速度。\n\n无论是希望提升编码效率的软件工程师、探索智能体技术的研究人员，还是需要快速原型验证的技术团队，都能从中受益。OpenHands 提供了灵活多样的使用方式：既可以通过命令行（CLI）或本地图形界面在个人电脑上轻松上手，体验类似 Devin 的流畅交互；也能利用其强大的 Python SDK 自定义智能体逻辑，甚至在云端大规模部署上千个智能体并行工作。\n\n其核心技术亮点在于模块化的软件智能体 SDK，这不仅构成了平台的引擎，还支持高度可组合的开发模式。此外，OpenHands 在 SWE-bench 基准测试中取得了 77.6% 的优异成绩，证明了其解决真实世界软件工程问题的能力。平台还具备完善的企业级功能，支持与 Slack、Jira 等工具集成，并提供细粒度的权限管理，适合从个人开发者到大型企业的各类用户场景。",70612,"2026-04-05T11:12:22",[15,14,13,36],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":79,"owner_company":79,"owner_location":79,"owner_email":79,"owner_twitter":79,"owner_website":79,"owner_url":80,"languages":81,"stars":94,"forks":95,"last_commit_at":96,"license":97,"difficulty_score":98,"env_os":99,"env_gpu":100,"env_ram":99,"env_deps":101,"category_tags":111,"github_topics":112,"view_count":10,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":118,"updated_at":119,"faqs":120,"releases":156},2150,"zhvng\u002Fopen-musiclm","open-musiclm","Implementation of MusicLM, a text to music model published by Google Research, with a few modifications.","open-musiclm 是一个基于 PyTorch 实现的开源项目，旨在复现谷歌研究院发布的顶尖文本生成音乐模型 MusicLM。它的核心目标是让开发者和研究人员能够更快速地搭建并训练自己的文本到音乐生成系统，而无需依赖谷歌未公开的内部数据或模型权重。\n\n该项目主要解决了原始 MusicLM 依赖私有组件（如 MuLan 和 SoundStream）导致难以复现的问题。open-musiclm 巧妙地采用了现有的优质开源模型进行替代：使用 CLAP 模型处理文本与音频的对齐，利用 Encodec 进行音频编码，并引入 MERT 增强特征提取。这种“站在巨人肩膀上”的策略，使得用户在缺乏海量私有训练数据的情况下，也能尝试生成多样化的音乐样本。\n\n在技术亮点方面，open-musiclm 对架构进行了灵活改进。它支持将条件信号作为离散令牌进行自回归建模，而非仅依靠交叉注意力机制；其令牌条件变换器还能支持可变长度的令牌序列，便于探索立体声生成或多条件组合等进阶实验。\n\n因此，open-musiclm 特别适合具有一定深度学习基础的 AI 研究者、算法工程师以及希望深入探索音频生成技术的开发","open-musiclm 是一个基于 PyTorch 实现的开源项目，旨在复现谷歌研究院发布的顶尖文本生成音乐模型 MusicLM。它的核心目标是让开发者和研究人员能够更快速地搭建并训练自己的文本到音乐生成系统，而无需依赖谷歌未公开的内部数据或模型权重。\n\n该项目主要解决了原始 MusicLM 依赖私有组件（如 MuLan 和 SoundStream）导致难以复现的问题。open-musiclm 巧妙地采用了现有的优质开源模型进行替代：使用 CLAP 模型处理文本与音频的对齐，利用 Encodec 进行音频编码，并引入 MERT 增强特征提取。这种“站在巨人肩膀上”的策略，使得用户在缺乏海量私有训练数据的情况下，也能尝试生成多样化的音乐样本。\n\n在技术亮点方面，open-musiclm 对架构进行了灵活改进。它支持将条件信号作为离散令牌进行自回归建模，而非仅依靠交叉注意力机制；其令牌条件变换器还能支持可变长度的令牌序列，便于探索立体声生成或多条件组合等进阶实验。\n\n因此，open-musiclm 特别适合具有一定深度学习基础的 AI 研究者、算法工程师以及希望深入探索音频生成技术的开发者使用。对于想要从零开始理解音乐生成原理或进行二次开发的团队来说，这是一个极具价值的起点。项目社区活跃，欢迎有志之士共同参与探索。","# Open MusicLM\nPytorch implementation of [MusicLM](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.11325), a SOTA text to music model published by Google, with a few modifications. 
We use [CLAP](https:\u002F\u002Fgithub.com\u002FLAION-AI\u002FCLAP) as a replacement for MuLan, [Encodec](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fencodec) as a replacement for SoundStream, and [MERT](https:\u002F\u002Fhuggingface.co\u002Fm-a-p\u002FMERT-v0) as a replacement for w2v-BERT.\n\n\u003Cp align='center'>\n\u003Cimg alt='diagram of MusicLM' src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzhvng_open-musiclm_readme_450f30cf14ae.png' title=\"MusicLM\" height='250px'>\n\u003Cimg alt='diagram of CLAP' src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzhvng_open-musiclm_readme_9043b502d089.png' title=\"CLAP\" height='250px'>\n\u003C\u002Fp>\n\n## Why CLAP?\nCLAP is a joint audio-text model trained on [LAION-Audio-630K](https:\u002F\u002Fgithub.com\u002FLAION-AI\u002Faudio-dataset). Similar to MuLan, it consists of an audio tower and a text tower that project their respective media onto a shared latent space (512 dimensions in CLAP vs 128 dimensions in MuLan).\n\nMuLan was trained on 50 million text-music pairs. Unfortunately I don't have the data to replicate this, so I'm relying on CLAP's pretrained checkpoints to come close. CLAP was trained on 2.6 million total text-audio pairs from LAION-630k (~633k text-audio pairs) and AudioSet (2 million samples with captions generated by a keyword-to-caption model). Although this is a fraction of the data used to train MuLan, we have successfully used CLAP to generate diverse music samples, which you can listen to [here](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1pGY8EP2EZlE2pPpXn5E3YAkoCgATWw3_) (keep in mind these are very early results). In the event that CLAP's latent space is not expressive enough for music generation, we can train CLAP on music or substitute @lucidrains' [MuLan implementation](https:\u002F\u002Fgithub.com\u002Flucidrains\u002Fmusiclm-pytorch) for it once that is trained.\n\n## Why Encodec?\nSoundStream and Encodec are both neural audio codecs that encode any waveform to a sequence of acoustic tokens, which can then be decoded into a waveform resembling the original. These intermediate tokens can then be modeled as a seq2seq task. [Encodec](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fencodec) is released by Facebook and pretrained checkpoints are publicly available, whereas this is not the case with SoundStream.\n\n## Differences from the @lucidrains implementation\n- Autoregressively models the CLAP\u002FMuLan conditioning signal by passing it into the transformers as discrete tokens, as mentioned in section 3.1 of the paper. Musiclm-pytorch conditions on it with cross attention.\n- TokenConditionedTransformer can support variable token sequences, which makes it easy to do further experimentation (e.g. combining multiple conditioning signals, stereo waveform generation, etc.).\n- Uses existing open source models instead of training MuLan and SoundStream.\n- Some modifications to increase the chance of successfully training the model.\n\n# End Goal\nThe goal of this project is to replicate the results of MusicLM as quickly as possible without necessarily sticking to the architecture in the paper. For those looking for a more true-to-form implementation, check out [musiclm-pytorch](https:\u002F\u002Fgithub.com\u002Flucidrains\u002Fmusiclm-pytorch). \n\nWe also seek to gain a better understanding of CLAP's latent space.\n\nJoin us on Discord if you'd like to get involved! 
[\u003Cimg alt=\"join discord\" src=\"https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F1079520916591104000?color=%237289da&logo=discord\"\u002F>](https:\u002F\u002Fdiscord.gg\u002FjN8jADShX5)\n\n# Usage\n## Install\n```shell\nconda env create -f environment.yaml\nconda activate open-musiclm\n```\n\n## Configs\nA \"model config\" contains information about the model architecture such as the number of layers, number of quantizers, target audio lengths for each stage, etc. It is used to instantiate the model during training and inference.\n\nA \"training config\" contains hyperparameters for training the model. It is used to instantiate the trainer classes during training.\n\nSee the `.\u002Fconfigs` directory for example configs.\n\n## Training\n### CLAP RVQ\nThe first step is to train the residual vector quantizer that maps continuous CLAP embeds to a discrete token sequence.\n```shell\npython .\u002Fscripts\u002Ftrain_clap_rvq.py \\\n    --results_folder .\u002Fresults\u002Fclap_rvq \\ # where to save results and checkpoints\n    --model_config .\u002Fconfigs\u002Fmodel\u002Fmusiclm_small.json \\ # path to model config\n    --training_config .\u002Fconfigs\u002Ftraining\u002Ftrain_musiclm_fma.json # path to training config\n```\n\n### Hubert K-means\nNext, we learn a K-means layer that we use to quantize our MERT embeddings into semantic tokens.\n```shell\npython .\u002Fscripts\u002Ftrain_hubert_kmeans.py \\\n    --results_folder .\u002Fresults\u002Fhubert_kmeans \\ # where to save results and checkpoints\n    --model_config .\u002Fconfigs\u002Fmodel\u002Fmusiclm_small.json \\\n    --training_config .\u002Fconfigs\u002Ftraining\u002Ftrain_musiclm_fma.json\n```\n\n### Semantic Stage + Coarse Stage + Fine Stage\nOnce we have a working K-means and RVQ, we can now train the semantic, coarse and fine stages. These stages can be trained concurrently.\n```shell\npython .\u002Fscripts\u002Ftrain_semantic_stage.py \\\n    --results_folder .\u002Fresults\u002Fsemantic \\ # where to save results and checkpoints\n    --model_config .\u002Fconfigs\u002Fmodel\u002Fmusiclm_small.json \\\n    --training_config .\u002Fconfigs\u002Ftraining\u002Ftrain_musiclm_fma.json \\\n    --rvq_path PATH_TO_RVQ_CHECKPOINT \\ # path to previously trained rvq\n    --kmeans_path PATH_TO_KMEANS_CHECKPOINT # path to previously trained kmeans\n```\n```shell\npython .\u002Fscripts\u002Ftrain_coarse_stage.py \\\n    --results_folder .\u002Fresults\u002Fcoarse \\ # where to save results and checkpoints\n    --model_config .\u002Fconfigs\u002Fmodel\u002Fmusiclm_small.json \\\n    --training_config .\u002Fconfigs\u002Ftraining\u002Ftrain_musiclm_fma.json \\\n    --rvq_path PATH_TO_RVQ_CHECKPOINT \\ # path to previously trained rvq\n    --kmeans_path PATH_TO_KMEANS_CHECKPOINT # path to previously trained kmeans\n```\n```shell\npython .\u002Fscripts\u002Ftrain_fine_stage.py \\\n    --results_folder .\u002Fresults\u002Ffine \\ # where to save results and checkpoints\n    --model_config .\u002Fconfigs\u002Fmodel\u002Fmusiclm_small.json \\\n    --training_config .\u002Fconfigs\u002Ftraining\u002Ftrain_musiclm_fma.json \\\n    --rvq_path PATH_TO_RVQ_CHECKPOINT \\ # path to previously trained rvq\n    --kmeans_path PATH_TO_KMEANS_CHECKPOINT # path to previously trained kmeans\n```\n\n## Preprocessing\nIn the above case, we are using CLAP, Hubert and Encodec to generate clap, semantic and acoustic tokens live during training. 
However, these models take up space on the GPU, and it is inefficient to recompute these tokens if we're making multiple runs on the same data. We can instead compute these tokens ahead of time and iterate over them during training.\n\nTo do this, fill in the `data_preprocessor_cfg` field in the config and set `use_preprocessed_data` to True in the trainer configs (look at train_fma_preprocess.json for inspiration). Then run the following to preprocess the dataset, followed by your training script.\n\n```shell\npython .\u002Fscripts\u002Fpreprocess_data.py \\\n    --model_config .\u002Fconfigs\u002Fmodel\u002Fmusiclm_small.json \\\n    --training_config .\u002Fconfigs\u002Ftraining\u002Ftrain_fma_preprocess.json \\\n    --rvq_path PATH_TO_RVQ_CHECKPOINT \\ # path to previously trained rvq\n    --kmeans_path PATH_TO_KMEANS_CHECKPOINT # path to previously trained kmeans\n```\n\n## Inference\nGenerate multiple samples and use CLAP to select the best ones:\n```shell\npython scripts\u002Finfer_top_match.py \\\n    \"your text prompt\" \\\n    --num_samples 4 \\                               # number of samples to generate\n    --num_top_matches 1 \\                           # number of top matches to return\n    --semantic_path PATH_TO_SEMANTIC_CHECKPOINT \\   # path to previously trained semantic stage\n    --coarse_path PATH_TO_COARSE_CHECKPOINT \\       # path to previously trained coarse stage\n    --fine_path PATH_TO_FINE_CHECKPOINT \\           # path to previously trained fine stage\n    --rvq_path PATH_TO_RVQ_CHECKPOINT \\             # path to previously trained rvq\n    --kmeans_path PATH_TO_KMEANS_CHECKPOINT \\       # path to previously trained kmeans\n    --model_config .\u002Fconfigs\u002Fmodel\u002Fmusiclm_small.json \\\n    --duration 4\n```\n\nGenerate samples for various test prompts:\n```shell\npython scripts\u002Finfer.py \\\n    --semantic_path PATH_TO_SEMANTIC_CHECKPOINT \\   # path to previously trained semantic stage\n    --coarse_path PATH_TO_COARSE_CHECKPOINT \\       # path to previously trained coarse stage\n    --fine_path PATH_TO_FINE_CHECKPOINT \\           # path to previously trained fine stage\n    --rvq_path PATH_TO_RVQ_CHECKPOINT \\             # path to previously trained rvq\n    --kmeans_path PATH_TO_KMEANS_CHECKPOINT \\       # path to previously trained kmeans\n    --model_config .\u002Fconfigs\u002Fmodel\u002Fmusiclm_small.json \\\n    --duration 4\n```\n\nYou can use the `--return_coarse_wave` flag to skip the fine stage and reconstruct audio from coarse tokens alone.\n\n## Checkpoints\nYou can download experimental checkpoints for the musiclm_large_small_context model [here](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Fu\u002F0\u002Ffolders\u002F1347glwEc-6XWulfU7NGrFrYTvTnjeVJE). To fine-tune the model, call the train scripts with the `--fine_tune_from` flag.\n\n# Thank you\n* [Okio](https:\u002F\u002Fokio.ai\u002F) for providing compute to train the model! Okio is a startup that is developing Nendo - an open source generative music tool-suite\nthat re-imagines music. If you're interested, check them out at [okio.ai](https:\u002F\u002Fokio.ai\u002F)\n* [@lucidrains](https:\u002F\u002Fgithub.com\u002Flucidrains\u002F) for the [audiolm-pytorch](https:\u002F\u002Fgithub.com\u002Flucidrains\u002Faudiolm-pytorch) implementation. 
This repo contains a refactored version of a lot of the code in [audiolm-pytorch](https:\u002F\u002Fgithub.com\u002Flucidrains\u002Faudiolm-pytorch).\n* [LAION](https:\u002F\u002Flaion.ai\u002F) for [CLAP](https:\u002F\u002Fgithub.com\u002FLAION-AI\u002FCLAP)\n* [Music Audio Pretrain team](https:\u002F\u002Fhuggingface.co\u002Fm-a-p) for [MERT](https:\u002F\u002Fhuggingface.co\u002Fm-a-p\u002FMERT-v0)\n\n# Citations\n```bibtex\n@inproceedings{Agostinelli2023MusicLMGM,\n    title     = {MusicLM: Generating Music From Text},\n    author    = {Andrea Agostinelli and Timo I. Denk and Zal{\\'a}n Borsos and Jesse Engel and Mauro Verzetti and Antoine Caillon and Qingqing Huang and Aren Jansen and Adam Roberts and Marco Tagliasacchi and Matthew Sharifi and Neil Zeghidour and C. Frank},\n    year      = {2023}\n}\n```\n```bibtex\n@article{wu2022large,\n  title     = {Large-scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation},\n  author    = {Wu, Yusong and Chen, Ke and Zhang, Tianyu and Hui, Yuchen and Berg-Kirkpatrick, Taylor and Dubnov, Shlomo},\n  journal   = {arXiv preprint arXiv:2211.06687},\n  year      = {2022},\n}\n```\n```bibtex\n@article{defossez2022highfi,\n  title     = {High Fidelity Neural Audio Compression},\n  author    = {Défossez, Alexandre and Copet, Jade and Synnaeve, Gabriel and Adi, Yossi},\n  journal   = {arXiv preprint arXiv:2210.13438},\n  year      = {2022}\n}\n```\n```bibtex\n@misc{li2023mert,\n  title     = {MERT: Acoustic Music Understanding Model with Large-Scale Self-supervised Training}, \n  author    = {Yizhi Li and Ruibin Yuan and Ge Zhang and Yinghao Ma and Xingran Chen and Hanzhi Yin and Chenghua Lin and Anton Ragni and Emmanouil Benetos and Norbert Gyenge and Roger Dannenberg and Ruibo Liu and Wenhu Chen and Gus Xia and Yemin Shi and Wenhao Huang and Yike Guo and Jie Fu},\n  year      = {2023},\n  eprint    = {2306.00107},\n  archivePrefix = {arXiv},\n  primaryClass  = {cs.SD}\n}\n```\n","# Open MusicLM\n这是 Google 发布的 SOTA 文本到音乐模型 [MusicLM](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.11325) 的 PyTorch 实现，并做了一些修改。我们使用 [CLAP](https:\u002F\u002Fgithub.com\u002FLAION-AI\u002FCLAP) 替代 MuLan，[Encodec](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fencodec) 替代 SoundStream，以及 [MERT](https:\u002F\u002Fhuggingface.co\u002Fm-a-p\u002FMERT-v0) 替代 w2v-BERT。\n\n\u003Cp align='center'>\n\u003Cimg alt='MusicLM 示意图' src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzhvng_open-musiclm_readme_450f30cf14ae.png' title=\"MusicLM\" height='250px'>\n\u003Cimg alt='CLAP 示意图' src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzhvng_open-musiclm_readme_9043b502d089.png' title=\"CLAP\" height='250px'>\n\u003C\u002Fp>\n\n## 为什么选择 CLAP？\nCLAP 是一个联合音频-文本模型，基于 [LAION-Audio-630K](https:\u002F\u002Fgithub.com\u002FLAION-AI\u002Faudio-dataset) 数据集训练而成。与 MuLan 类似，它由音频塔和文本塔组成，分别将各自模态映射到一个共享的潜在空间（CLAP 为 512 维，而 MuLan 为 128 维）。\n\nMuLan 是在 5000 万对文本-音乐数据上训练的。遗憾的是，我并没有这些数据来复现它，因此我依赖 CLAP 的预训练检查点来尽可能接近。CLAP 在 LAION-630k（约 63.3 万对文本-音频）和 AudioSet（200 万个带有关键词转写字幕的样本）的数据上进行了训练，总共使用了 260 万对文本-音频数据。尽管这远少于 MuLan 训练时使用的数据量，但我们已经成功利用 CLAP 生成了多样化的音乐样本，你可以在 [这里](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1pGY8EP2EZlE2pPpXn5E3YAkoCgATWw3_) 收听这些早期结果。如果 CLAP 的潜在空间对于音乐生成来说不够丰富，我们可以用音乐数据继续训练 CLAP，或者等 @lucidrains 的 [MuLan 实现](https:\u002F\u002Fgithub.com\u002Flucidrains\u002Fmusiclm-pytorch) 训练完成后将其作为替代方案。\n\n## 为什么选择 Encodec？\nSoundStream 和 Encodec 
都是神经网络音频编解码器，能够将任意波形编码为一系列声学标记，随后再解码回与原始波形相似的信号。这些中间标记可以被建模为序列到序列任务。[Encodec](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fencodec) 由 Facebook 开源，并提供了公开可用的预训练检查点，而 SoundStream 则没有。\n\n## 与 @lucidrains 实现的区别\n- 自回归地将 CLAP\u002FMuLan 的条件信号作为离散标记传递给 Transformer，正如论文第 3.1 节所述。而在 musiclm-pytorch 中，则通过交叉注意力对其进行条件化。\n- TokenConditionedTransformer 可以支持可变长度的标记序列，这使得进一步的实验更加便捷（例如组合多个条件信号、生成立体声波形等）。\n- 使用现有的开源模型，而不是重新训练 MuLan 和 SoundStream。\n- 对代码进行了一些修改，以提高模型成功训练的可能性。\n\n# 最终目标\n该项目的目标是在不完全拘泥于论文中架构的前提下，尽快复现 MusicLM 的效果。如果你希望获得更贴近原论文实现的版本，请查看 [musiclm-pytorch](https:\u002F\u002Fgithub.com\u002Flucidrains\u002Fmusiclm-pytorch)。\n\n我们还希望更好地理解 CLAP 的潜在空间。\n\n如果你想参与其中，欢迎加入我们的 Discord！ [\u003Cimg alt=\"加入 Discord\" src=\"https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F1079520916591104000?color=%237289da&logo=discord\"\u002F>](https:\u002F\u002Fdiscord.gg\u002FjN8jADShX5)\n\n# 使用方法\n## 安装\n```shell\nconda env create -f environment.yaml\nconda activate open-musiclm\n```\n\n## 配置文件\n“模型配置”包含有关模型架构的信息，比如层数、量化器数量、各阶段的目标音频长度等，用于在训练和推理时实例化模型。\n\n“训练配置”则包含训练模型的超参数，用于在训练过程中实例化训练器类。\n\n示例配置文件请参见 `.\u002Fconfigs` 目录。\n\n## 训练\n### CLAP RVQ\n第一步是训练残差向量量化器，将连续的 CLAP 嵌入映射为离散的标记序列。\n```shell\npython .\u002Fscripts\u002Ftrain_clap_rvq.py \\\n    --results_folder .\u002Fresults\u002Fclap_rvq \\ # 保存结果和检查点的路径\n    --model_config .\u002Fconfigs\u002Fmodel\u002Fmusiclm_small.json \\ # 模型配置文件路径\n    --training_config .\u002Fconfigs\u002Ftraining\u002Ftrain_musiclm_fma.json # 训练配置文件路径\n```\n\n### Hubert K-means\n接下来，我们需要学习一个 K-means 层，用于将 MERT 嵌入量化为语义标记。\n```shell\npython .\u002Fscripts\u002Ftrain_hubert_kmeans.py \\\n    --results_folder .\u002Fresults\u002Fhubert_kmeans \\ # 保存结果和检查点的路径\n    --model_config .\u002Fconfigs\u002Fmodel\u002Fmusiclm_small.json \\\n    --training_config .\u002Fconfigs\u002Ftraining\u002Ftrain_musiclm_fma.json\n```\n\n### 语义阶段 + 粗粒度阶段 + 细粒度阶段\n一旦 K-means 和 RVQ 准备就绪，我们就可以开始训练语义、粗粒度和细粒度阶段。这些阶段可以同时进行训练。\n```shell\npython .\u002Fscripts\u002Ftrain_semantic_stage.py \\\n    --results_folder .\u002Fresults\u002Fsemantic \\ # 保存结果和检查点的路径\n    --model_config .\u002Fconfigs\u002Fmodel\u002Fmusiclm_small.json \\\n    --training_config .\u002Fconfigs\u002Ftraining\u002Ftrain_musiclm_fma.json \\\n    --rvq_path PATH_TO_RVQ_CHECKPOINT \\ # 之前训练好的 RVQ 检查点路径\n    --kmeans_path PATH_TO_KMEANS_CHECKPOINT # 之前训练好的 K-means 检查点路径\n```\n```shell\npython .\u002Fscripts\u002Ftrain_coarse_stage.py \\\n    --results_folder .\u002Fresults\u002Fcoarse \\ # 保存结果和检查点的路径\n    --model_config .\u002Fconfigs\u002Fmodel\u002Fmusiclm_small.json \\\n    --training_config .\u002Fconfigs\u002Ftraining\u002Ftrain_musiclm_fma.json \\\n    --rvq_path PATH_TO_RVQ_CHECKPOINT \\ # 之前训练好的 RVQ 检查点路径\n    --kmeans_path PATH_TO_KMEANS_CHECKPOINT # 之前训练好的 K-means 检查点路径\n```\n```shell\npython .\u002Fscripts\u002Ftrain_fine_stage.py \\\n    --results_folder .\u002Fresults\u002Ffine \\ # 保存结果和检查点的路径\n    --model_config .\u002Fconfigs\u002Fmodel\u002Fmusiclm_small.json \\\n    --training_config .\u002Fconfigs\u002Ftraining\u002Ftrain_musiclm_fma.json \\\n    --rvq_path PATH_TO_RVQ_CHECKPOINT \\ # 之前训练好的 RVQ 检查点路径\n    --kmeans_path PATH_TO_KMEANS_CHECKPOINT # 之前训练好的 K-means 检查点路径\n```\n\n## 预处理\n在上述情况下，我们在训练过程中实时使用 CLAP、Hubert 和 Encodec 生成 CLAP 令牌、语义令牌和声学令牌。然而，这些模型会占用 GPU 的显存，如果对同一数据集进行多次运行，每次都重新计算这些令牌效率较低。我们可以预先计算这些令牌，并在训练过程中迭代使用它们。\n\n为此，在配置文件中填写 `data_preprocessor_cfg` 字段，并在训练器配置中将 `use_preprocessed_data` 设置为 True（可参考 `train_fma_preprocess.json` 获取灵感）。然后运行以下命令预处理数据集，再执行你的训练脚本：\n\n```shell\npython 
.\u002Fscripts\u002Fpreprocess_data.py \\\n    --model_config .\u002Fconfigs\u002Fmodel\u002Fmusiclm_small.json \\\n    --training_config .\u002Fconfigs\u002Ftraining\u002Ftrain_fma_preprocess.json \\\n    --rvq_path RVQ检查点路径 \\ # 之前训练好的RVQ检查点路径\n    --kmeans_path K-means检查点路径 # 之前训练好的K-means检查点路径\n```\n\n## 推理\n生成多个样本，并使用 CLAP 选择最佳的几个：\n```shell\npython scripts\u002Finfer_top_match.py \\\n    \"你的文本提示\"\n    --num_samples 4                                 # 生成的样本数量\n    --num_top_matches 1                             # 返回的最佳匹配数量\n    --semantic_path 语义阶段检查点路径 \\   # 之前训练好的语义阶段检查点\n    --coarse_path 粗略阶段检查点路径 \\       # 之前训练好的粗略阶段检查点\n    --fine_path 精细阶段检查点路径 \\           # 之前训练好的精细阶段检查点\n    --rvq_path RVQ检查点路径 \\             # 之前训练好的RVQ检查点\n    --kmeans_path K-means检查点路径         # 之前训练好的K-means检查点\n    --model_config .\u002Fconfigs\u002Fmodel\u002Fmusiclm_small.json \\\n    --duration 4\n```\n\n为不同的测试提示生成样本：\n```shell\npython scripts\u002Finfer.py \\\n    --semantic_path 语义阶段检查点路径 \\   # 之前训练好的语义阶段检查点\n    --coarse_path 粗略阶段检查点路径 \\       # 之前训练好的粗略阶段检查点\n    --fine_path 精细阶段检查点路径 \\           # 之前训练好的精细阶段检查点\n    --rvq_path RVQ检查点路径 \\             # 之前训练好的RVQ检查点\n    --kmeans_path K-means检查点路径         # 之前训练好的K-means检查点\n    --model_config .\u002Fconfigs\u002Fmodel\u002Fmusiclm_small.json \\\n    --duration 4\n```\n\n你可以使用 `--return_coarse_wave` 标志跳过精细阶段，仅从粗略令牌重建音频。\n\n## 检查点\n你可以从 [这里](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Fu\u002F0\u002Ffolders\u002F1347glwEc-6XWulfU7NGrFrYTvTnjeVJE) 下载 musiclm_large_small_context 模型的实验性检查点。要对该模型进行微调，请在训练脚本中使用 `--fine_tune_from` 标志。\n\n# 感谢\n* [Okio](https:\u002F\u002Fokio.ai\u002F) 提供了训练模型所需的算力！Okio 是一家正在开发 Nendo——一个开源的生成式音乐工具套件——的初创公司，旨在重新构想音乐。如果你感兴趣，可以访问 [okio.ai](https:\u002F\u002Fokio.ai\u002F) 了解更多信息。\n* [@lucidrains](https:\u002F\u002Fgithub.com\u002Flucidrains\u002F) 提供了 [audiolm-pytorch](https:\u002F\u002Fgithub.com\u002Flucidrains\u002Faudiolm-pytorch) 的实现。该仓库包含了对 [audiolm-pytorch](https:\u002F\u002Fgithub.com\u002Flucidrains\u002Faudiolm-pytorch) 中许多代码的重构版本。\n* [LAION](https:\u002F\u002Flaion.ai\u002F) 提供了 [CLAP](https:\u002F\u002Fgithub.com\u002FLAION-AI\u002FCLAP)。\n* [Music Audio Pretrain 团队](https:\u002F\u002Fhuggingface.co\u002Fm-a-p) 提供了 [MERT](https:\u002F\u002Fhuggingface.co\u002Fm-a-p\u002FMERT-v0)。\n\n# 引用\n```bibtex\n@inproceedings{Agostinelli2023MusicLMGM,\n    title     = {MusicLM: 从文本生成音乐},\n    author    = {Andrea Agostinelli、Timo I. Denk、Zalán Borsos、Jesse Engel、Mauro Verzetti、Antoine Caillon、Qingqing Huang、Aren Jansen、Adam Roberts、Marco Tagliasacchi、Matthew Sharifi、Neil Zeghidour、C. 
Frank},\n    year      = {2023}\n}\n```\n```bibtex\n@article{wu2022large,\n  title     = {基于特征融合和关键词到字幕增强的大规模对比语言-音频预训练},\n  author    = {Wu, Yusong、Chen, Ke、Zhang, Tianyu、Hui, Yuchen、Berg-Kirkpatrick, Taylor、Dubnov, Shlomo},\n  journal   = {arXiv 预印本 arXiv:2211:06687},\n  year      = {2022},\n}\n```\n```bibtex\n@article{defossez2022highfi,\n  title     = {高保真神经网络音频压缩},\n  author    = {Défossez, Alexandre、Copet, Jade、Synnaeve, Gabriel、Adi, Yossi},\n  journal   = {arXiv 预印本 arXiv:2210.13438},\n  year      = {2022},\n}\n```\n```bibtex\n@misc{li2023mert,\n  title     = {MERT：具有大规模自监督训练的声学音乐理解模型}, \n  author    = {Yizhi Li、Ruibin Yuan、Ge Zhang、Yinghao Ma、Xingran Chen、Hanzhi Yin、Chenghua Lin、Anton Ragni、Emmanouil Benetos、Norbert Gyenge、Roger Dannenberg、Ruibo Liu、Wenhu Chen、Gus Xia、Yemin Shi、Wenhao Huang、Yike Guo、Jie Fu},\n  year      = {2023},\n  eprint    = {2306.00107},\n  archivePrefix = {arXiv},\n  primaryClass  = {cs.SD}\n}\n```","# Open MusicLM 快速上手指南\n\nOpen MusicLM 是 Google SOTA 文本生成音乐模型 MusicLM 的 PyTorch 实现。该项目使用 **CLAP** 替代 MuLan，**Encodec** 替代 SoundStream，**MERT** 替代 w2v-BERT，旨在利用现有的开源预训练模型快速复现音乐生成效果。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**: Linux (推荐) 或 macOS\n*   **Python**: 3.8+\n*   **包管理器**: Conda (强烈推荐用于管理依赖)\n*   **硬件**: 支持 CUDA 的 NVIDIA GPU (训练和推理均需要较大的显存)\n*   **前置知识**: 熟悉基本的命令行操作和深度学习训练流程\n\n## 安装步骤\n\n本项目通过 Conda 环境进行依赖管理。请依次执行以下命令：\n\n1.  **克隆仓库** (如果尚未下载):\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002Fyour-repo\u002Fopen-musiclm.git\n    cd open-musiclm\n    ```\n\n2.  **创建并激活 Conda 环境**:\n    ```shell\n    conda env create -f environment.yaml\n    conda activate open-musiclm\n    ```\n\n    > **提示**: 如果 `environment.yaml` 中的源下载缓慢，国内用户可在创建环境后，手动将 `pip` 源配置为清华源或阿里源以加速后续可能的 pip 安装：\n    > `pip config set global.index-url https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple`\n\n## 基本使用\n\n本项目的核心流程分为：**训练量化器** -> **训练生成阶段** -> **推理生成**。若仅需测试效果，可直接下载官方提供的实验性检查点（Checkpoints）进行推理。\n\n### 1. 准备检查点 (可选)\n如果您不想从头训练，可以从 [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Fu\u002F0\u002Ffolders\u002F1347glwEc-6XWulfU7NGrFrYTvTnjeVJE) 下载预训练的 `musiclm_large_small_context` 模型文件。记下以下路径以备后用：\n*   `PATH_TO_SEMANTIC_CHECKPOINT`\n*   `PATH_TO_COARSE_CHECKPOINT`\n*   `PATH_TO_FINE_CHECKPOINT`\n*   `PATH_TO_RVQ_CHECKPOINT`\n*   `PATH_TO_KMEANS_CHECKPOINT`\n\n### 2. 推理生成 (Inference)\n\n#### 方案 A：生成并自动筛选最佳样本\n此脚本会生成多个样本，并利用 CLAP 模型计算与文本提示的相似度，自动返回最匹配的一个。\n\n```shell\npython scripts\u002Finfer_top_match.py \\\n    \"your text prompt\" \\\n    --num_samples 4 \\\n    --num_top_matches 1 \\\n    --semantic_path PATH_TO_SEMANTIC_CHECKPOINT \\\n    --coarse_path PATH_TO_COARSE_CHECKPOINT \\\n    --fine_path PATH_TO_FINE_CHECKPOINT \\\n    --rvq_path PATH_TO_RVQ_CHECKPOINT \\\n    --kmeans_path PATH_TO_KMEANS_CHECKPOINT \\\n    --model_config .\u002Fconfigs\u002Fmodel\u002Fmusiclm_small.json \\\n    --duration 4\n```\n*   将 `\"your text prompt\"` 替换为您想要的音乐描述（英文效果更佳）。\n*   将 `PATH_TO_...` 替换为实际的模型文件路径。\n\n#### 方案 B：直接生成样本\n此脚本直接根据提示词生成音频，不进行自动筛选。\n\n```shell\npython scripts\u002Finfer.py \\\n    --semantic_path PATH_TO_SEMANTIC_CHECKPOINT \\\n    --coarse_path PATH_TO_COARSE_CHECKPOINT \\\n    --fine_path PATH_TO_FINE_CHECKPOINT \\\n    --rvq_path PATH_TO_RVQ_CHECKPOINT \\\n    --kmeans_path PATH_TO_KMEANS_CHECKPOINT \\\n    --model_config .\u002Fconfigs\u002Fmodel\u002Fmusiclm_small.json \\\n    --duration 4\n```\n\n> **技巧**: 添加 `--return_coarse_wave` 标志可以跳过精细阶段（Fine Stage），仅使用粗粒度令牌重建音频，从而加快生成速度。\n\n### 3. 训练流程简述 (如需从头训练)\n\n若需使用自定义数据训练，请按顺序执行以下步骤：\n\n1.  
**训练 CLAP RVQ** (将连续嵌入映射为离散令牌):\n    ```shell\n    python .\u002Fscripts\u002Ftrain_clap_rvq.py \\\n        --results_folder .\u002Fresults\u002Fclap_rvq \\\n        --model_config .\u002Fconfigs\u002Fmodel\u002Fmusiclm_small.json \\\n        --training_config .\u002Fconfigs\u002Ftraining\u002Ftrain_musiclm_fma.json\n    ```\n\n2.  **训练 Hubert K-means** (量化 MERT 嵌入为语义令牌):\n    ```shell\n    python .\u002Fscripts\u002Ftrain_hubert_kmeans.py \\\n        --results_folder .\u002Fresults\u002Fhubert_kmeans \\\n        --model_config .\u002Fconfigs\u002Fmodel\u002Fmusiclm_small.json \\\n        --training_config .\u002Fconfigs\u002Ftraining\u002Ftrain_musiclm_fma.json\n    ```\n\n3.  **训练生成阶段** (语义、粗粒度、细粒度可并发训练):\n    ```shell\n    python .\u002Fscripts\u002Ftrain_semantic_stage.py \\\n        --results_folder .\u002Fresults\u002Fsemantic \\\n        --model_config .\u002Fconfigs\u002Fmodel\u002Fmusiclm_small.json \\\n        --training_config .\u002Fconfigs\u002Ftraining\u002Ftrain_musiclm_fma.json \\\n        --rvq_path PATH_TO_RVQ_CHECKPOINT \\\n        --kmeans_path PATH_TO_KMEANS_CHECKPOINT\n    ```\n    *(同理运行 `train_coarse_stage.py` 和 `train_fine_stage.py`)*\n\n4.  **数据预处理 (优化选项)**:\n    为避免每次训练重复计算令牌，可预先处理数据集。修改配置文件中的 `data_preprocessor_cfg` 并设置 `use_preprocessed_data: True`，然后运行：\n    ```shell\n    python .\u002Fscripts\u002Fpreprocess_data.py \\\n        --model_config .\u002Fconfigs\u002Fmodel\u002Fmusiclm_small.json \\\n        --training_config .\u002Fconfigs\u002Ftraining\u002Ftrain_fma_preprocess.json \\\n        --rvq_path PATH_TO_RVQ_CHECKPOINT \\\n        --kmeans_path PATH_TO_KMEANS_CHECKPOINT\n    ```","独立游戏开发者小林正在为一款赛博朋克风格的冒险游戏制作动态背景音乐，需要根据不同剧情段落快速生成契合氛围的音频素材。\n\n### 没有 open-musiclm 时\n- **版权与成本困境**：购买商用音乐库授权费用高昂，且难以找到完全匹配特定“雨夜霓虹”或“黑客入侵”场景的现成曲目，面临侵权风险。\n- **定制门槛极高**：若委托作曲家定制，沟通成本高且修改周期长，无法适应游戏开发中频繁变动的剧情需求。\n- **技术复现困难**：谷歌原版 MusicLM 未开源代码且依赖私有数据（如 MuLan 和 SoundStream），普通开发者无法在本地部署或微调模型。\n- **素材同质化严重**：使用传统循环音效库导致背景音乐缺乏变化，玩家容易产生听觉疲劳，沉浸感大打折扣。\n\n### 使用 open-musiclm 后\n- **零成本按需生成**：直接输入“快节奏的合成器波浪伴随紧张的低音”等文本描述，open-musiclm 即可利用预训练的 CLAP 模型生成无版权风险的独特曲目。\n- **敏捷迭代工作流**：剧情调整后，小林只需修改提示词并在几分钟内重新生成音频，无需等待外部反馈，大幅缩短开发周期。\n- **开源架构可落地**：open-musiclm 巧妙替换了不可用的私有模块，采用公开的 Encodec 和 MERT 模型，让开发者能在消费级显卡上成功训练和推理。\n- **高度风格化控制**：借助 CLAP 强大的文本 - 音频对齐能力，生成的音乐能精准捕捉细微的情感色彩，显著提升游戏场景的叙事张力。\n\nopen-musiclm 将顶尖的文本生成音乐技术从实验室带入现实，让中小开发者也能以极低门槛实现专业级的动态音频创作。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzhvng_open-musiclm_8a19e256.png","zhvng","Allen Zhang","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fzhvng_0bc94fc7.jpg",null,"https:\u002F\u002Fgithub.com\u002Fzhvng",[82,86,90],{"name":83,"color":84,"percentage":85},"Python","#3572A5",90.9,{"name":87,"color":88,"percentage":89},"Jupyter Notebook","#DA5B0B",8.9,{"name":91,"color":92,"percentage":93},"Shell","#89e051",0.2,563,66,"2026-03-26T20:38:27","MIT",4,"未说明","必需（训练和推理涉及 CLAP, MERT, Encodec 及 Transformer 模型，需 NVIDIA GPU 支持 CUDA；显存需求未明确，但建议 16GB+ 以处理多阶段训练）",{"notes":102,"python":103,"dependencies":104},"该项目是 Google MusicLM 的 PyTorch 实现，使用 CLAP 替代 MuLan，Encodec 替代 SoundStream，MERT 替代 w2v-BERT。安装需通过 conda 创建环境（environment.yaml）。训练分为多个阶段：首先训练 CLAP 的残差向量量化器 (RVQ) 和 Hubert K-means，然后并行训练语义、粗粒度和细粒度阶段。支持预处理数据以提高效率。官方提供了实验性检查点可供微调。","未说明（通过 conda environment.yaml 安装）",[105,106,107,108,109,110],"torch 
(PyTorch)","CLAP","Encodec","MERT","transformers","accelerate",[15,38],[113,114,115,116,117],"artificial-intelligence","attention","music-generation","transformer","text-to-music","2026-03-27T02:49:30.150509","2026-04-06T05:44:12.738101",[121,126,131,136,141,146,151],{"id":122,"question_zh":123,"answer_zh":124,"source_url":125},9907,"训练 CLAP 或 Hubert 模型时出现 'IndexError: tuple index out of range' 或数据加载器返回空元组的错误，如何解决？","这通常是由以下两个原因之一引起的：\n1. **PyTorch 版本不兼容**：确保使用了正确版本的 torch。\n2. **缺少 ffmpeg**：这是最常见的原因。如果未安装 ffmpeg，torchaudio 无法加载音频文件，导致数据处理器返回空结果。请尝试直接运行 `torchaudio.load` 加载 mp3 文件（去掉代码中的 try\u002Fexcept 块）来验证是否报错。解决方法是安装 ffmpeg（例如在 Linux 上使用 `sudo apt-get install ffmpeg`，在 macOS 上使用 `brew install ffmpeg`）。","https:\u002F\u002Fgithub.com\u002Fzhvng\u002Fopen-musiclm\u002Fissues\u002F2",{"id":127,"question_zh":128,"answer_zh":129,"source_url":130},9908,"训练 Semantic 阶段时，超过 2000 步后 Loss 变得非常大，应该如何调整？","Loss 异常增大通常是由于学习率过高或批次大小不合适导致的。建议尝试**降低学习率（learning rate）**或**增大批次大小（batch size）**。有用户反馈在使用较低的学习率后，Loss 恢复正常。","https:\u002F\u002Fgithub.com\u002Fzhvng\u002Fopen-musiclm\u002Fissues\u002F20",{"id":132,"question_zh":133,"answer_zh":134,"source_url":135},9909,"在 Windows 上使用 conda 创建环境时，执行 'conda env create -f environment.yaml' 失败怎么办？","这是一个常见的依赖名称问题。请在 `environment.yaml` 文件中将 `sklearn` 修改为 `scikit-learn`，然后重新运行创建命令即可解决。","https:\u002F\u002Fgithub.com\u002Fzhvng\u002Fopen-musiclm\u002Fissues\u002F22",{"id":137,"question_zh":138,"answer_zh":139,"source_url":140},9910,"生成的音频质量不佳，听起来像是有模型损坏或只有杂音，如何改善？","音频质量差可能与训练步数不足或技术细节有关。首先，可以尝试**增加训练步数**。其次，检查是否存在代码层面的问题，例如在推理（inference） coarse 模型时，曾发现忘记在输入前对音频进行归一化（normalize）的问题，维护者已修复此 bug。确保你的代码是最新版本，并且输入音频已正确归一化为零均值和单位方差。","https:\u002F\u002Fgithub.com\u002Fzhvng\u002Fopen-musiclm\u002Fissues\u002F3",{"id":142,"question_zh":143,"answer_zh":144,"source_url":145},9911,"运行推理脚本（如 infer_top_match.py）时程序运行数小时不结束且无结果，如何控制生成长度？","该问题表明推理过程可能陷入了死循环或缺乏停止条件。虽然具体参数未在评论中详细列出，但遇到此问题时，应检查脚本中关于生成长度（generation length）或最大步数的配置。如果官方脚本缺乏明确的停止控制，可能需要手动修改代码限制生成的 token 数量或时间步长，以防止无限运行。","https:\u002F\u002Fgithub.com\u002Fzhvng\u002Fopen-musiclm\u002Fissues\u002F28",{"id":147,"question_zh":148,"answer_zh":149,"source_url":150},9912,"Hubert\u002FMERT 模型输入是否需要特殊的归一化处理？","是的，输入需要归一化。MERT 示例代码会将输入归一化为零均值和单位方差。在当前项目中，数据加载阶段（data.py）已经设置了 `normalize` 参数来处理这一点。如果你自行修改了数据加载流程或在推理时直接传入音频，请确保在将音频传递给 MERT 模型之前，先对其进行零均值和单位方差的归一化处理，否则会影响模型效果。","https:\u002F\u002Fgithub.com\u002Fzhvng\u002Fopen-musiclm\u002Fissues\u002F7",{"id":152,"question_zh":153,"answer_zh":154,"source_url":155},9913,"Discord 邀请链接失效或被禁止加入，如何获取最新的社区链接？","Discord 链接可能会过期。如果原链接失效，请查看项目 README 或最新的 Issue 评论，维护者通常会发布新的邀请链接。此外，也可以尝试在 LAION Discord 服务器的 audio-generation 频道进行交流，或者等待维护者在 GitHub 上更新永久有效的邀请地址。","https:\u002F\u002Fgithub.com\u002Fzhvng\u002Fopen-musiclm\u002Fissues\u002F15",[157,162,167,172,177,182,187,192,197,202,207,212],{"id":158,"version":159,"summary_zh":160,"released_at":161},107192,"0.2.5","- [add option to generate continuation from input audio](https:\u002F\u002Fgithub.com\u002Fzhvng\u002Fopen-musiclm\u002Fcommit\u002F4ee53d9d1ae3daa0b14cd8b647107e23408dfd7d)\r\n- [upgrade to pytorch 2.0 + torch.compile compatibility](https:\u002F\u002Fgithub.com\u002Fzhvng\u002Fopen-musiclm\u002Fcommit\u002F054026f2fd24f37f215d35743640b7856cfb2970)\r\n- minor fixes and formatting","2023-04-22T07:49:01",{"id":163,"version":164,"summary_zh":165,"released_at":166},107193,"0.2.4","- allow for fine tuning base models\r\n- release pretrained 
checkpoints!","2023-04-12T07:38:17",{"id":168,"version":169,"summary_zh":170,"released_at":171},107194,"0.2.3","- support for new clap checkpoints, see the [CLAP](https:\u002F\u002Fgithub.com\u002Flaion-ai\u002Fclap\u002F) repo\r\n- more customizability in clap config\r\n- update clap code","2023-04-11T22:15:27",{"id":173,"version":174,"summary_zh":175,"released_at":176},107195,"0.2.2","- collect all acoustic tokens before generating wave, instead of generating wave section by section\r\n- make skipping ignored files O(n)\r\n- fix bugs with saving preprocessed samples\r\n- sliding semantic window inference for longer outputs\r\n- add option to filter out certain genres from fma","2023-04-11T22:13:02",{"id":178,"version":179,"summary_zh":180,"released_at":181},107196,"0.2.1","- support preprocessing longer clap windows\r\n- add a max audio length and random crop option to preprocessor \r\n- release some results in discord!","2023-04-04T17:27:15",{"id":183,"version":184,"summary_zh":185,"released_at":186},107197,"0.2.0","- Revamp data preprocessing. See #9 ","2023-03-27T09:23:11",{"id":188,"version":189,"summary_zh":190,"released_at":191},107198,"0.1.2","- remove dependency on audiolm-pytorch since we have diverged from the architecture\r\n- quality of life things","2023-03-25T03:38:22",{"id":193,"version":194,"summary_zh":195,"released_at":196},107199,"0.1.1","- Option to provide the output feature rate for Hubert and Encodec in the config ([here](https:\u002F\u002Fgithub.com\u002Fzhvng\u002Fopen-musiclm\u002Fcommit\u002Fc53e9f5ffe30537d25f84bc5c779869a7c7dfe74)). This will make it easier to test out MERT v1 (which has a 75 hz feature rate vs 50 hz for MERT v0), and sets up for some upcoming preprocessing changes\r\n- [Add config for full size model with small context](https:\u002F\u002Fgithub.com\u002Fzhvng\u002Fopen-musiclm\u002Fcommit\u002F1e86f5a2d70c8eb6407cec64af8242e229a8d979)\r\n- [Add rvq settings to model config](https:\u002F\u002Fgithub.com\u002Fzhvng\u002Fopen-musiclm\u002Fcommit\u002F7d21d7837dd6c6baf860b554e85ee580a1a4f08c)\r\n- Add a script to test semantic tokens \r\n- Fix some bugs","2023-03-21T20:59:56",{"id":198,"version":199,"summary_zh":200,"released_at":201},107200,"0.1.0","- update clap\r\n- multi-gpu training now works for all scripts","2023-03-13T15:20:07",{"id":203,"version":204,"summary_zh":205,"released_at":206},107201,"0.0.3","option to preprocess tokens for semantic, coarse and fine stages","2023-03-08T06:46:11",{"id":208,"version":209,"summary_zh":210,"released_at":211},107202,"0.0.2","fix audio length discrepancies when loading data and provide relative positional embeddings option in config","2023-03-06T05:24:08",{"id":213,"version":214,"summary_zh":215,"released_at":216},107203,"0.0.1","initial implementation. was able to produce preliminary results (in discord)","2023-03-02T21:17:14"]