[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-jatinchowdhury18--RTNeural":3,"tool-jatinchowdhury18--RTNeural":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":77,"owner_company":79,"owner_location":77,"owner_email":80,"owner_twitter":77,"owner_website":81,"owner_url":82,"languages":83,"stars":110,"forks":111,"last_commit_at":112,"license":113,"difficulty_score":10,"env_os":114,"env_gpu":114,"env_ram":114,"env_deps":115,"category_tags":120,"github_topics":77,"view_count":121,"oss_zip_url":77,"oss_zip_packed_at":77,"status":16,"created_at":122,"updated_at":123,"faqs":124,"releases":153},496,"jatinchowdhury18\u002FRTNeural","RTNeural","Real-time neural network inferencing","RTNeural 是一个专为实时系统设计的轻量级神经网络推理引擎，采用 C++ 编写，特别优化了音频处理等对延迟敏感的场景。它允许开发者将训练好的神经网络模型（如 TensorFlow 或 PyTorch 训练的模型）快速转换为高效的 C++ 代码，直接部署到资源受限的嵌入式设备或实时应用中。\n\n传统深度学习框架在部署到边缘设备时常面临计算资源占用高、延迟大等问题，而 RTNeural 通过精简计算流程和内存管理，显著降低了推理开销。其支持常见网络结构（如 LSTM、GRU、卷积层）和主流激活函数（如 ReLU、SoftMax），并提供从 Python 框架导出模型权重的工具链，使模型迁移更便捷。对于需要毫秒级响应的音频处理、机器人控制或物联网设备，RTNeural 提供了高效的解决方案。\n\n开发者（尤其是嵌入式系统工程师）和研究人员（如实时信号处理领域）是其主要适用人群。其技术亮点包括：跨平台兼容性（支持多种编译器）、低内存占用（部分模型仅需几 KB 内存），以及通过 SIMD 指令加速计算的能力。开源社区提供详细文档和示例代码，用户可通过 JSON 格式导入模型参数，快速构建推理流","RTNeural 是一个专为实时系统设计的轻量级神经网络推理引擎，采用 C++ 编写，特别优化了音频处理等对延迟敏感的场景。它允许开发者将训练好的神经网络模型（如 TensorFlow 或 PyTorch 训练的模型）快速转换为高效的 C++ 代码，直接部署到资源受限的嵌入式设备或实时应用中。\n\n传统深度学习框架在部署到边缘设备时常面临计算资源占用高、延迟大等问题，而 RTNeural 
通过精简计算流程和内存管理，显著降低了推理开销。其支持常见网络结构（如 LSTM、GRU、卷积层）和主流激活函数（如 ReLU、SoftMax），并提供从 Python 框架导出模型权重的工具链，使模型迁移更便捷。对于需要毫秒级响应的音频处理、机器人控制或物联网设备，RTNeural 提供了高效的解决方案。\n\n开发者（尤其是嵌入式系统工程师）和研究人员（如实时信号处理领域）是其主要适用人群。其技术亮点包括：跨平台兼容性（支持多种编译器）、低内存占用（部分模型仅需几 KB 内存），以及通过 SIMD 指令加速计算的能力。开源社区提供详细文档和示例代码，用户可通过 JSON 格式导入模型参数，快速构建推理流程。对于学术研究，项目还提供基准测试对比和扩展实验模块，方便性能验证与功能迭代。","\u003Cp align=center>\n  \u003Cpicture>\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjatinchowdhury18_RTNeural_readme_f4599b0940f5.png\" height=\"200\"\u002F>\n  \u003C\u002Fpicture>\n\u003C\u002Fp>\n\n# RTNeural\n\n[![Tests](https:\u002F\u002Fgithub.com\u002Fjatinchowdhury18\u002FRTNeural\u002Fworkflows\u002FTests\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fjatinchowdhury18\u002FRTNeural\u002Factions\u002Fworkflows\u002Ftests.yml)\n[![Bench](https:\u002F\u002Fgithub.com\u002Fjatinchowdhury18\u002FRTNeural\u002Fworkflows\u002FBench\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fjatinchowdhury18\u002FRTNeural\u002Factions\u002Fworkflows\u002Fbench.yml)\n[![Examples](https:\u002F\u002Fgithub.com\u002Fjatinchowdhury18\u002FRTNeural\u002Factions\u002Fworkflows\u002Fexamples.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fjatinchowdhury18\u002FRTNeural\u002Factions\u002Fworkflows\u002Fexamples.yml)\n[![RADSan](https:\u002F\u002Fgithub.com\u002Fjatinchowdhury18\u002FRTNeural\u002Factions\u002Fworkflows\u002Fradsan.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fjatinchowdhury18\u002FRTNeural\u002Factions\u002Fworkflows\u002Fradsan.yml)\n[![codecov](https:\u002F\u002Fcodecov.io\u002Fgh\u002Fjatinchowdhury18\u002FRTNeural\u002Fbranch\u002Fmain\u002Fgraph\u002Fbadge.svg?token=QBEBVSCQTW)](https:\u002F\u002Fcodecov.io\u002Fgh\u002Fjatinchowdhury18\u002FRTNeural)\n[![arXiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FarXiv-2106.03037-b31b1b.svg)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.03037)\n[![License](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-BSD-bl
ue.svg)](https:\u002F\u002Fopensource.org\u002Flicenses\u002FBSD-3-Clause)\n\nA lightweight neural network inferencing engine written in C++.\nThis library was designed with the intention of being used in\nreal-time systems, specifically real-time audio processing.\n\nCurrently supported layers:\n\n  - [x] Dense\n  - [x] GRU\n  - [x] LSTM\n  - [x] Conv1D\n  - [x] Conv2D\n  - [ ] MaxPooling\n  - [x] BatchNorm1D\n  - [x] BatchNorm2D\n\nCurrently supported activations:\n  - [x] tanh\n  - [x] ReLU\n  - [x] Sigmoid\n  - [x] SoftMax\n  - [x] ELu\n  - [x] PReLU\n\nAdditional resources:\n- [RTNeural Discord](https:\u002F\u002Fdiscord.gg\u002FQMBBucKt4Q)\n- [API Reference](https:\u002F\u002Fccrma.stanford.edu\u002F~jatin\u002Fchowdsp\u002FRTNeural)\n- [Reference Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.03037)\n- [Example Plugin](https:\u002F\u002Fgithub.com\u002Fjatinchowdhury18\u002FRTNeural-example)\n- [Comparison Benchmarks](https:\u002F\u002Fgithub.com\u002Fjatinchowdhury18\u002FRTNeural-compare)\n- [Experimental Extensions](https:\u002F\u002Fgithub.com\u002Fjatinchowdhury18\u002FRTNeural-Experimental)\n\n## Citation\n\nIf you are using RTNeural as part of an academic work, please cite the library as follows:\n```\n@article{chowdhury2021rtneural,\n        title={RTNeural: Fast Neural Inferencing for Real-Time Systems},\n        author={Jatin Chowdhury},\n        year={2021},\n        journal={arXiv preprint arXiv:2106.03037}\n}\n```\n\n## How To Use\n\n`RTNeural` is capable of taking a neural network that\nhas already been trained, loading the weights from that\nnetwork, and running inference. Some simple examples\nare available in the [`examples\u002F`](.\u002Fexamples) directory.\n\n### Exporting weights from a trained network\n\nNeural networks are typically trained using `Python`\nlibraries including Tensorflow or PyTorch. 
Once you\nhave trained a neural network using one of these frameworks,\nyou can \"export\" the network weights to a json file,\nso that `RTNeural` can read them. An implementation of\nthe export process for a \"sequential\" Tensorflow model is\nprovided in `python\u002Fmodel_utils.py`, and can be used as follows.\n\n```python\n# import dependencies\nimport tensorflow as tf\nfrom tensorflow import keras\nfrom model_utils import save_model\n\n# create Tensorflow model\nmodel = keras.Sequential()\n...\n\n# train model\nmodel.train()\n\n# export model weights\nsave_model(model, 'model_weights.json')\n```\n\nFor an example of exporting a model from PyTorch,\nsee [this example script](.\u002Fpython\u002Fgru_torch.py).\n\n### Creating a model\n\nNext, you can create an inferencing engine in C++ directly\nfrom the exported json file:\n\n```cpp\n#include \u003CRTNeural.h>\n...\nstd::ifstream jsonStream(\"model_weights.json\", std::ifstream::binary);\nauto model = RTNeural::json_parser::parseJson\u003Cdouble>(jsonStream);\n```\n\n### Running inference\n\nBefore running inference, it is recommended to \"reset\" the\nstate of your model (if the model has state).\n```cpp\nmodel->reset();\n```\n\nThen, you may run inference as follows:\n```cpp\ndouble input[] = { 1.0, 0.5, -0.1 }; \u002F\u002F set up input vector\ndouble output = model->forward(input); \u002F\u002F compute output\n```\n\n### Compile-Time API\n\nThe code shown above will create the inferencing engine\ndynamically at run-time. 
If the model architecture is\nfixed at compile-time, it may be preferable to use RTNeural's\nAPI for defining an inferencing engine type at compile-time,\nwhich can significantly improve performance.\n```cpp\n\u002F\u002F define model type\nRTNeural::ModelT\u003Cdouble, 8, 1,\n    RTNeural::DenseT\u003Cdouble, 8, 8>,\n    RTNeural::TanhActivationT\u003Cdouble, 8>,\n    RTNeural::DenseT\u003Cdouble, 8, 1>\n> modelT;\n\n\u002F\u002F load model weights from json\nstd::ifstream jsonStream(\"model_weights.json\", std::ifstream::binary);\nmodelT.parseJson(jsonStream);\n\nmodelT.reset(); \u002F\u002F reset state\n\ndouble input[] = { 1.0, 0.5, -0.1, 0.0, 0.4, 0.9, -0.2, -0.3 }; \u002F\u002F set up input vector\ndouble output = modelT.forward(input); \u002F\u002F compute output\n```\n\n### Loading Layers from PyTorch\n\nThe above example code assumes that the trained model has\nbeen exported from TensorFlow. For loading PyTorch models,\nthe RTNeural namespace `RTNeural::torch_helpers`, provides\nhelper functions for loading layers exported from PyTorch.\n\n```cpp\n\u002F\u002F load model weights from json\nstd::ifstream jsonStream(\"model_weights.json\", std::ifstream::binary);\nnlohmann::json modelJson;\njsonStream >> modelJson;\n\n\u002F\u002F load a layer from a static model\nRTNeural::ModelT\u003Cfloat, 1, 1, RTNeural::DenseT\u003Cfloat, 1, 1>> model;\nRTNeural::torch_helpers::loadDense(modelJson, \"name_of_layer.\", model.get\u003C0>());\n```\n\nFor more examples, see the\n[`examples\u002Ftorch`](.\u002Fexamples\u002Ftorch) directory.\n\n## Building with CMake\n\n`RTNeural` is built with CMake, and the easiest way to link\nis to include `RTNeural` as a submodule:\n```cmake\n...\nadd_subdirectory(RTNeural)\ntarget_link_libraries(MyCMakeProject LINK_PUBLIC RTNeural)\n```\n\nIf you are trying to use RTNeural in a project that does not use\nCMake, please see the [instructions below](#building-without-cmake).\n\n### Choosing a Backend\n\n`RTNeural` supports three 
backends,\n[`Eigen`](http:\u002F\u002Feigen.tuxfamily.org\u002F),\n[`xsimd`](https:\u002F\u002Fgithub.com\u002Fxtensor-stack\u002Fxsimd),\nor the C++ STL. You can choose your backend by passing\neither `-DRTNEURAL_EIGEN=ON`, `-DRTNEURAL_XSIMD=ON`,\nor `-DRTNEURAL_STL=ON` to your CMake configuration. By\ndefault, the `Eigen` backend will be used. Alternatively,\nyou may select your choice of backends in your CMake\nconfiguration as follows:\n```cmake\nset(RTNEURAL_XSIMD ON CACHE BOOL \"Use RTNeural with this backend\" FORCE)\nadd_subdirectory(modules\u002FRTNeural)\n```\n\nIn general, the `Eigen` backend typically has the best\nperformance for larger networks, while smaller networks\nmay perform better with XSIMD. However, it is recommended\nto measure the performance of your network with all the\nbackends that are available on your target platform\nto ensure optimal performance. For more information see the\n[benchmark results](https:\u002F\u002Fgithub.com\u002Fjatinchowdhury18\u002FRTNeural\u002Factions?query=workflow%3ABench).\n\nNote that you must abide by the licensing rules of whichever backend library you choose.\n\n### Other configuration flags\n\nIf you would like to build RTNeural with the AVX SIMD extensions,\nyou may run CMake with the `-DRTNEURAL_USE_AVX=ON`. Note that\nthis flag will have no effect when compiling for platforms that\ndo not support AVX instructions.\n\n### Building the test suite\n\nTo build RTNeural's test suite, run `cmake -Bbuild -DBUILD_TESTS=ON`, followed\nby `cmake --build build`. To run the full testing suite, run `ctest` from the\n`build` folder. For more information, see `tests\u002FREADME.md`.\n\n### Building the Performance Benchmarks\n\nTo build the performance benchmarks, run\n`cmake -Bbuild -DBUILD_BENCH=ON`, followed by\n`cmake --build build --config Release`. To run the layer benchmarks, run\n`.\u002Fbuild\u002Frtneural_layer_bench \u003Clayer> \u003Clength> \u003Cin_size> \u003Cout_size>`. 
To\nrun the model benchmark, run `.\u002Fbuild\u002Frtneural_model_bench`.\n\n### Building the Examples\n\nTo build the RTNeural examples run:\n```bash\ncmake -Bbuild -DBUILD_EXAMPLES=ON\ncmake --build build --config Release\n```\nThe example programs will then be located in\n`build\u002Fexamples_out\u002F`, and may be run from there.\n\nAn example of using RTNeural within a real-time\naudio plugin can be found on GitHub\n[here](https:\u002F\u002Fgithub.com\u002Fjatinchowdhury18\u002FRTNeural-example).\n\n## Building without CMake\n\nIf you wish to use RTNeural in a project that doesn't use CMake,\nRTNeural can be included as a header-only library, along with a few\nextra steps.\n\n1. Add a compile-time definition to define a default byte alignment for RTNeural.\n   For most cases this definition will be one of either:\n   - `RTNEURAL_DEFAULT_ALIGNMENT=16`\n   - `RTNEURAL_DEFAULT_ALIGNMENT=32`\n\n2. Add a compile-time definition to [select a backend](#choosing-a-backend).\n   If you wish to use the STL backend, then no definition is required.\n   This definition should be one of the following:\n   - `RTNEURAL_USE_EIGEN=1`\n   - `RTNEURAL_USE_XSIMD=1`\n\n3. Add the necessary include paths for your chosen backend. This path will be\n   one of either:\n   - `\u003Crepo>\u002Fmodules\u002FEigen`\n   - `\u003Crepo>\u002Fmodules\u002Fxsimd\u002Finclude\u002Fxsimd`\n\nIt may also be worth checking out the\n[example Makefile](.\u002Fexamples\u002Fhello_rtneural\u002FMakefile).\n\n## Contributing\n\nContributions to this project are most welcome!\nCurrently, there is a need for the\nfollowing improvements:\n- Improved support for 2-dimensional input\u002Foutput data.\n- Improved support for \"stateless\" Conv1D layers.\n- More robust support for exporting\u002Floading models.\n- Support for more activation layers.\n- Any changes that improve overall performance.\n\nGeneral code maintenance and documentation is always\nappreciated as well! 
Note that if you are implementing\na new layer type, it is not required to provide support\nfor all the backends, though it is recommended to at\nleast provide a \"fallback\" implementation using the STL\nbackend.\n\n## Contributors\n\nPlease thank the following individuals for their important contributions:\n\n- [wayne-chen](https:\u002F\u002Fgithub.com\u002Fwayne-chen): Softmax activation layer and general API improvements.\n- [hollance](https:\u002F\u002Fgithub.com\u002Fhollance): RTNeural logo.\n- [stepanmk](https:\u002F\u002Fgithub.com\u002Fstepanmk): Eigen Conv1D layer optimization.\n- [DamRsn](https:\u002F\u002Fgithub.com\u002FDamRsn): Eigen implementations for Conv2D and BatchNorm2D layers.\n- [lHorvalds](https:\u002F\u002Fgithub.com\u002FIHorvalds): Eigen backend optimizations.\n- [davidtrevelyan](https:\u002F\u002Fgithub.com\u002Fdavidtrevelyan): Testing framework upgrade.\n- [purefunctor](https:\u002F\u002Fgithub.com\u002Fpurefunctor): Groups feature for Conv1D.\n\n## Powered by RTNeural\n\nRTNeural is currently being used by several audio plugins and other projects:\n\n- [4000DB-NeuralAmp](https:\u002F\u002Fgithub.com\u002FEnrcDamn\u002F4000DB-NeuralAmp): Neural emulation of the pre-amp section from the Akai 4000DB tape machine.\n- [AIDA-X](https:\u002F\u002Fgithub.com\u002FAidaDSP\u002FAIDA-X): An AU\u002FCLAP\u002FLV2\u002FVST2\u002FVST3 audio plugin that loads RTNeural models and cabinet IRs.\n- [BYOD](https:\u002F\u002Fgithub.com\u002FChowdhury-DSP\u002FBYOD): A guitar distortion plugin containing several machine learning-based effects.\n- [Chow Centaur](https:\u002F\u002Fgithub.com\u002Fjatinchowdhury18\u002FKlonCentaur): A guitar pedal emulation plugin, using a real-time recurrent neural network.\n- [Chow Tape Model](https:\u002F\u002Fgithub.com\u002Fjatinchowdhury18\u002FAnalogTapeModel): An analog tape emulation, using a real-time dense neural network.\n- [cppTimbreID](https:\u002F\u002Fgithub.com\u002Fdomenicostefani\u002Fcpp-timbreID): An 
audio feature extraction library.\n- [guitarix](https:\u002F\u002Fgithub.com\u002Fbrummer10\u002Fguitarix): A guitarix effects suite, including neural network amplifier models.\n- [GuitarML](https:\u002F\u002Fguitarml.com\u002F): GuitarML plugins use machine learning to model guitar amplifiers and effects.\n- [MLTerror15](https:\u002F\u002Fgithub.com\u002FIHorvalds\u002FMLTerror15): Deeply learned simulator for the Orange Tiny Terror with Recurrent Neural Networks.\n- [neural-amp-modeler-lv2](https:\u002F\u002Fgithub.com\u002Fmikeoliphant\u002Fneural-amp-modeler-lv2): LV2 plugin for using neural network machine learning amp models.\n- [NeuralNote](https:\u002F\u002Fgithub.com\u002FDamRsn\u002FNeuralNote): An audio-to-MIDI transcription plugin using Spotify's [basic-pitch](https:\u002F\u002Fgithub.com\u002Fspotify\u002Fbasic-pitch) model.\n- [rt-neural-lv2](https:\u002F\u002Fgithub.com\u002FAidaDSP\u002Faidadsp-lv2): A headless lv2 plugin using RTNeural to model guitar pedals and amplifiers.\n- [stompbox](https:\u002F\u002Fgithub.com\u002Fmikeoliphant\u002Fstompbox): Guitar amplification and effects pedalboard simulation.\n- Tone Empire plugins:\n  - [LVL - 01](https:\u002F\u002Ftone-empire.com\u002Fshop\u002Flvl-01\u002F): An A.I.\u002FM.L.-based compressor effect.\n  - [TM700](https:\u002F\u002Ftone-empire.com\u002Fshop\u002Ftm700\u002F): A machine learning tape emulation effect.\n  - [Neural Q](https:\u002F\u002Ftone-empire.com\u002Fshop\u002Fneuralq-v2\u002F): An analog emulation 2-band EQ, using recurrent neural networks.\n- [ToobAmp](https:\u002F\u002Fgithub.com\u002Frerdavies\u002FToobAmp): Guitar effect plugins for the Raspberry Pi.\n\n\nIf you are using RTNeural in one of your projects, let us know and we will add it to this list!\n\n## License\n\nRTNeural is open source, and is licensed under the\nBSD 3-clause license.\n\nEnjoy!\n","\u003Cp align=center>\n  \u003Cpicture>\n    \u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjatinchowdhury18_RTNeural_readme_f4599b0940f5.png\" height=\"200\"\u002F>\n  \u003C\u002Fpicture>\n\u003C\u002Fp>\n\n# RTNeural\n\n[![Tests](https:\u002F\u002Fgithub.com\u002Fjatinchowdhury18\u002FRTNeural\u002Fworkflows\u002FTests\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fjatinchowdhury18\u002FRTNeural\u002Factions\u002Fworkflows\u002Ftests.yml)\n[![Bench](https:\u002F\u002Fgithub.com\u002Fjatinchowdhury18\u002FRTNeural\u002Fworkflows\u002FBench\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fjatinchowdhury18\u002FRTNeural\u002Factions\u002Fworkflows\u002Fbench.yml)\n[![Examples](https:\u002F\u002Fgithub.com\u002Fjatinchowdhury18\u002FRTNeural\u002Factions\u002Fworkflows\u002Fexamples.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fjatinchowdhury18\u002FRTNeural\u002Factions\u002Fworkflows\u002Fexamples.yml)\n[![RADSan](https:\u002F\u002Fgithub.com\u002Fjatinchowdhury18\u002FRTNeural\u002Factions\u002Fworkflows\u002Fradsan.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fjatinchowdhury18\u002FRTNeural\u002Factions\u002Fworkflows\u002Fradsan.yml)\n[![codecov](https:\u002F\u002Fcodecov.io\u002Fgh\u002Fjatinchowdhury18\u002FRTNeural\u002Fbranch\u002Fmain\u002Fgraph\u002Fbadge.svg?token=QBEBVSCQTW)](https:\u002F\u002Fcodecov.io\u002Fgh\u002Fjatinchowdhury18\u002FRTNeural)\n[![arXiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FarXiv-2106.03037-b31b1b.svg)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.03037)\n[![License](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-BSD-blue.svg)](https:\u002F\u002Fopensource.org\u002Flicenses\u002FBSD-3-Clause)\n\n一个用 C++ 编写的轻量级神经网络推理引擎（neural network inferencing engine）。该库专为实时系统（real-time systems）设计，特别适用于实时音频处理（real-time audio processing）。\n\n当前支持的网络层（layers）：\n  - [x] Dense（全连接层）\n  - [x] GRU（门控循环单元）\n  - [x] LSTM（长短期记忆网络）\n  - [x] Conv1D（一维卷积层）\n  - [x] Conv2D（二维卷积层）\n  - [ ] MaxPooling（最大池化层）\n  - [x] BatchNorm1D（一维批量归一化）\n  - [x] 
BatchNorm2D（二维批量归一化）\n\n当前支持的激活函数（activations）：\n  - [x] tanh\n  - [x] ReLU\n  - [x] Sigmoid\n  - [x] SoftMax\n  - [x] ELu\n  - [x] PReLU\n\n附加资源：\n- [RTNeural Discord](https:\u002F\u002Fdiscord.gg\u002FQMBBucKt4Q)\n- [API Reference](https:\u002F\u002Fccrma.stanford.edu\u002F~jatin\u002Fchowdsp\u002FRTNeural)\n- [Reference Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.03037)\n- [Example Plugin](https:\u002F\u002Fgithub.com\u002Fjatinchowdhury18\u002FRTNeural-example)\n- [Comparison Benchmarks](https:\u002F\u002Fgithub.com\u002Fjatinchowdhury18\u002FRTNeural-compare)\n- [Experimental Extensions](https:\u002F\u002Fgithub.com\u002Fjatinchowdhury18\u002FRTNeural-Experimental)\n\n## 引用\n\n如果您在学术工作中使用 RTNeural，请按以下方式引用：\n```\n@article{chowdhury2021rtneural,\n        title={RTNeural: Fast Neural Inferencing for Real-Time Systems},\n        author={Jatin Chowdhury},\n        year={2021},\n        journal={arXiv preprint arXiv:2106.03037}\n}\n```\n\n## 使用方法\n\n`RTNeural` 可以加载已训练好的神经网络权重并运行推理。简单示例请参见 [`examples\u002F`](.\u002Fexamples) 目录。\n\n### 从训练好的网络导出权重\n\n神经网络通常使用 Python 库（如 Tensorflow 或 PyTorch）进行训练。训练完成后，可以将网络权重导出为 json 文件供 `RTNeural` 读取。Tensorflow 顺序模型的导出示例在 `python\u002Fmodel_utils.py` 中实现，使用方法如下：\n\n```python\n# 导入依赖\nimport tensorflow as tf\nfrom tensorflow import keras\nfrom model_utils import save_model\n\n# 创建 Tensorflow 模型\nmodel = keras.Sequential()\n...\n\n# 训练模型\nmodel.train()\n\n# 导出模型权重\nsave_model(model, 'model_weights.json')\n```\n\nPyTorch 模型导出示例请参见 [此示例脚本](.\u002Fpython\u002Fgru_torch.py)。\n\n### 创建模型\n\n接下来，可以从导出的 json 文件中创建 C++ 推理引擎：\n\n```cpp\n#include \u003CRTNeural.h>\n...\nstd::ifstream jsonStream(\"model_weights.json\", std::ifstream::binary);\nauto model = RTNeural::json_parser::parseJson\u003Cdouble>(jsonStream);\n```\n\n### 运行推理\n\n运行推理前建议重置模型状态（如果模型有状态）：\n```cpp\nmodel->reset();\n```\n\n然后运行推理：\n```cpp\ndouble input[] = { 1.0, 0.5, -0.1 }; \u002F\u002F 设置输入向量\ndouble output = model->forward(input); \u002F\u002F 计算输出\n```\n\n### 编译时 
API\n\n上述代码会在运行时动态创建推理引擎。如果模型架构在编译时固定，可以使用 RTNeural 的编译时 API 定义推理引擎类型，这能显著提升性能：\n```cpp\n\u002F\u002F 定义模型类型\nRTNeural::ModelT\u003Cdouble, 8, 1,\n    RTNeural::DenseT\u003Cdouble, 8, 8>,\n    RTNeural::TanhActivationT\u003Cdouble, 8>,\n    RTNeural::DenseT\u003Cdouble, 8, 1>\n> modelT;\n\n\u002F\u002F 从 json 加载模型权重\nstd::ifstream jsonStream(\"model_weights.json\", std::ifstream::binary);\nmodelT.parseJson(jsonStream);\n\nmodelT.reset(); \u002F\u002F 重置状态\n\ndouble input[] = { 1.0, 0.5, -0.1, 0.0, 0.4, 0.9, -0.2, -0.3 }; \u002F\u002F 设置输入向量\ndouble output = modelT.forward(input); \u002F\u002F 计算输出\n```\n\n### 从 PyTorch 加载层\n\n上述示例假设模型已从 TensorFlow 导出。对于 PyTorch 模型，RTNeural 的命名空间 `RTNeural::torch_helpers` 提供了加载 PyTorch 导出层的辅助函数。\n\n```cpp\n\u002F\u002F 从 json 加载模型权重\nstd::ifstream jsonStream(\"model_weights.json\", std::ifstream::binary);\nnlohmann::json modelJson;\njsonStream >> modelJson;\n\n\u002F\u002F 从静态模型加载层\nRTNeural::ModelT\u003Cfloat, 1, 1, RTNeural::DenseT\u003Cfloat, 1, 1>> model;\nRTNeural::torch_helpers::loadDense(modelJson, \"name_of_layer.\", model.get\u003C0>());\n```\n\n更多示例请参见 [`examples\u002Ftorch`](.\u002Fexamples\u002Ftorch) 目录。\n\n## 使用 CMake 构建\n\n`RTNeural` 使用 CMake 构建，最简单的链接方式是将其作为子模块包含：\n```cmake\n...\nadd_subdirectory(RTNeural)\ntarget_link_libraries(MyCMakeProject LINK_PUBLIC RTNeural)\n```\n\n如果项目未使用 CMake，请参见下方的[非 CMake 构建说明](#building-without-cmake)。\n\n### 选择后端（Backend）\n\n`RTNeural` 支持三种后端：\n- [`Eigen`](http:\u002F\u002Feigen.tuxfamily.org\u002F)（一个 C++ 线性代数库）\n- [`xsimd`](https:\u002F\u002Fgithub.com\u002Fxtensor-stack\u002Fxsimd)（SIMD 向量化库）\n- C++ STL\n\n通过传递 `-DRTNEURAL_EIGEN=ON`、`-DRTNEURAL_XSIMD=ON` 或 `-DRTNEURAL_STL=ON` 到 CMake 配置中选择后端。默认使用 Eigen 后端。也可以在 CMake 配置中这样选择：\n```cmake\nset(RTNEURAL_XSIMD ON CACHE BOOL \"Use RTNeural with this backend\" FORCE)\nadd_subdirectory(modules\u002FRTNeural)\n```\n\n一般来说，Eigen 后端在大型网络中性能最佳，小型网络可能 XSIMD 表现更好。建议在目标平台上测试所有可用后端以获得最佳性能。更多信息请参见 
[基准测试结果](https:\u002F\u002Fgithub.com\u002Fjatinchowdhury18\u002FRTNeural\u002Factions?query=workflow%3ABench)。\n\n请注意您选择的后端库的许可协议要求。\n\n### 其他配置选项\n\n如果你希望使用 AVX SIMD 扩展（Advanced Vector Extensions 单指令多数据扩展）构建 RTNeural，  \n可以通过 `-DRTNEURAL_USE_AVX=ON` 参数运行 CMake。请注意，  \n当编译目标平台不支持 AVX 指令时，此选项将无效。\n\n### 构建测试套件\n\n要构建 RTNeural 的测试套件，请运行 `cmake -Bbuild -DBUILD_TESTS=ON`，然后  \n执行 `cmake --build build`。要在 `build` 文件夹中运行完整测试套件，请运行 `ctest`。  \n更多信息请参见 `tests\u002FREADME.md`。\n\n### 构建性能基准测试\n\n要构建性能基准测试，请运行  \n`cmake -Bbuild -DBUILD_BENCH=ON`，然后  \n执行 `cmake --build build --config Release`。要运行网络层基准测试，请运行  \n`.\u002Fbuild\u002Frtneural_layer_bench \u003Clayer> \u003Clength> \u003Cin_size> \u003Cout_size>`。要  \n运行模型基准测试，请运行 `.\u002Fbuild\u002Frtneural_model_bench`。\n\n### 构建示例\n\n要构建 RTNeural 示例，请运行：\n```bash\ncmake -Bbuild -DBUILD_EXAMPLES=ON\ncmake --build build --config Release\n```\n示例程序将位于 `build\u002Fexamples_out\u002F` 目录中，可从此处运行。\n\n一个在实时音频插件中使用 RTNeural 的示例可在 GitHub  \n[此处](https:\u002F\u002Fgithub.com\u002Fjatinchowdhury18\u002FRTNeural-example) 找到。\n\n## 不使用 CMake 构建\n\n如果希望在非 CMake 项目中使用 RTNeural，  \n可以将其作为头文件库包含，并配合以下步骤：\n\n1. 添加编译时定义以设置 RTNeural 的默认字节对齐方式。  \n   大多数情况下，定义应为以下之一：  \n   - `RTNEURAL_DEFAULT_ALIGNMENT=16`  \n   - `RTNEURAL_DEFAULT_ALIGNMENT=32`  \n\n2. 添加编译时定义以[选择后端](#choosing-a-backend)。  \n   如果使用 STL 后端，则无需定义。定义应为以下之一：  \n   - `RTNEURAL_USE_EIGEN=1`  \n   - `RTNEURAL_USE_XSIMD=1`  \n\n3. 
添加所选后端的必要包含路径。路径应为以下之一：  \n   - `\u003Crepo>\u002Fmodules\u002FEigen`  \n   - `\u003Crepo>\u002Fmodules\u002Fxsimd\u002Finclude\u002Fxsimd`  \n\n也可以参考  \n[示例 Makefile](.\u002Fexamples\u002Fhello_rtneural\u002FMakefile)。\n\n## 贡献\n\n欢迎为本项目做出贡献！  \n当前需要以下改进：  \n- 改进对二维输入\u002F输出数据的支持。  \n- 改进无状态 Conv1D（一维卷积层）的支持。  \n- 更健壮的模型导出\u002F加载支持。  \n- 增加更多激活层支持。  \n- 任何提升整体性能的修改。  \n\n代码维护和文档完善同样重要！请注意，如果实现新图层类型，  \n无需为所有后端提供支持，但建议至少提供一个使用 STL  \n后端的\"回退\"实现。\n\n## 贡献者\n\n感谢以下人员的重要贡献：\n\n- [wayne-chen](https:\u002F\u002Fgithub.com\u002Fwayne-chen): Softmax 激活层和通用 API 改进。  \n- [hollance](https:\u002F\u002Fgithub.com\u002Fhollance): RTNeural 标志。  \n- [stepanmk](https:\u002F\u002Fgithub.com\u002Fstepanmk): Eigen Conv1D 层优化。  \n- [DamRsn](https:\u002F\u002Fgithub.com\u002FDamRsn): Conv2D 和 BatchNorm2D 层的 Eigen 实现。  \n- [lHorvalds](https:\u002F\u002Fgithub.com\u002FIHorvalds): Eigen 后端优化。  \n- [davidtrevelyan](https:\u002F\u002Fgithub.com\u002Fdavidtrevelyan): 测试框架升级。  \n- [purefunctor](https:\u002F\u002Fgithub.com\u002Fpurefunctor): Conv1D 的 Groups 功能。  \n\n## 使用 RTNeural 的项目\n\nRTNeural 目前被多个音频插件和其他项目使用：\n\n- [4000DB-NeuralAmp](https:\u002F\u002Fgithub.com\u002FEnrcDamn\u002F4000DB-NeuralAmp): 对 Akai 4000DB 磁带机前置放大器部分的神经模拟。  \n- [AIDA-X](https:\u002F\u002Fgithub.com\u002FAidaDSP\u002FAIDA-X): 支持 AU\u002FCLAP\u002FLV2\u002FVST2\u002FVST3 的音频插件，可加载 RTNeural 模型和音箱 IR。  \n- [BYOD](https:\u002F\u002Fgithub.com\u002FChowdhury-DSP\u002FBYOD): 含多个机器学习效果的吉他失真插件。  \n- [Chow Centaur](https:\u002F\u002Fgithub.com\u002Fjatinchowdhury18\u002FKlonCentaur): 使用实时循环神经网络的吉他效果器模拟插件。  \n- [Chow Tape Model](https:\u002F\u002Fgithub.com\u002Fjatinchowdhury18\u002FAnalogTapeModel): 使用实时密集神经网络的模拟磁带模拟。  \n- [cppTimbreID](https:\u002F\u002Fgithub.com\u002Fdomenicostefani\u002Fcpp-timbreID): 音频特征提取库。  \n- [guitarix](https:\u002F\u002Fgithub.com\u002Fbrummer10\u002Fguitarix): 包含神经网络放大器模型的吉他效果套件。  \n- [GuitarML](https:\u002F\u002Fguitarml.com\u002F): 使用机器学习模拟吉他放大器和效果的插件。  \n- 
[MLTerror15](https:\u002F\u002Fgithub.com\u002FIHorvalds\u002FMLTerror15): 使用循环神经网络深度学习的 Orange Tiny Terror 模拟器。  \n- [neural-amp-modeler-lv2](https:\u002F\u002Fgithub.com\u002Fmikeoliphant\u002Fneural-amp-modeler-lv2): 用于神经网络机器学习放大器模型的 LV2 插件。  \n- [NeuralNote](https:\u002F\u002Fgithub.com\u002FDamRsn\u002FNeuralNote): 使用 Spotify 的 [basic-pitch](https:\u002F\u002Fgithub.com\u002Fspotify\u002Fbasic-pitch) 模型的音频转 MIDI 插件。  \n- [rt-neural-lv2](https:\u002F\u002Fgithub.com\u002FAidaDSP\u002Faidadsp-lv2): 使用 RTNeural 模拟吉他效果器和放大器的无界面 LV2 插件。  \n- [stompbox](https:\u002F\u002Fgithub.com\u002Fmikeoliphant\u002Fstompbox): 吉他放大和效果踏板模拟。  \n- Tone Empire 插件:  \n  - [LVL - 01](https:\u002F\u002Ftone-empire.com\u002Fshop\u002Flvl-01\u002F): 基于 AI\u002FML 的压缩效果器。  \n  - [TM700](https:\u002F\u002Ftone-empire.com\u002Fshop\u002Ftm700\u002F): 机器学习磁带模拟效果。  \n  - [Neural Q](https:\u002F\u002Ftone-empire.com\u002Fshop\u002Fneuralq-v2\u002F): 使用循环神经网络的模拟双频段均衡器。  \n- [ToobAmp](https:\u002F\u002Fgithub.com\u002Frerdavies\u002FToobAmp): 适用于 Raspberry Pi 的吉他效果插件。  \n\n如果你的项目使用了 RTNeural，请告知我们，我们将将其添加到此列表！\n\n## 许可证\n\nRTNeural 是开源软件，采用  \n三条款 BSD 许可证（BSD 3-clause license）。\n\n祝使用愉快！","# RTNeural 快速上手指南\n\n## 环境准备\n- **系统要求**：Linux\u002FmacOS\u002FWindows（支持C++17编译器）\n- **前置依赖**：\n  - CMake 3.14+\n  - C++编译器（推荐GCC 9+\u002FClang 10+\u002FMSVC 2019）\n  - 可选依赖（根据后端选择）：\n    - Eigen 3.4+（默认后端）\n    - xsimd 8.0+（SIMD加速后端）\n\n## 安装步骤\n1. **克隆项目仓库**\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fjatinchowdhury18\u002FRTNeural.git\ncd RTNeural\n```\n\n2. **构建项目（推荐使用CMake）**\n```bash\nmkdir build && cd build\ncmake .. -DRTNEURAL_EIGEN=ON  # 使用Eigen后端（默认）\ncmake --build . --config Release\n```\n\n3. **作为子模块集成到项目**\n```cmake\n# CMakeLists.txt配置示例\nadd_subdirectory(RTNeural)\ntarget_link_libraries(your_project PRIVATE RTNeural)\n```\n\n## 基本使用\n### 1. 
导出模型权重（Python）\n```python\n# 示例：TensorFlow模型导出\nfrom tensorflow import keras\nfrom model_utils import save_model  # RTNeural 仓库 python\u002F 目录提供的导出工具\n\nmodel = keras.Sequential([\n    keras.layers.Dense(8, input_shape=(3,)),\n    keras.layers.Activation('tanh'),\n    keras.layers.Dense(1)\n])\n# 在此训练模型，例如 model.fit(x_train, y_train, epochs=...)\nsave_model(model, 'model_weights.json')  # 导出权重\n```\n\n### 2. C++推理实现\n```cpp\n#include \u003Cfstream>\n#include \u003CRTNeural\u002FRTNeural.h>\n\nint main() {\n    \u002F\u002F 动态加载模型\n    std::ifstream jsonStream(\"model_weights.json\", std::ifstream::binary);\n    auto model = RTNeural::json_parser::parseJson\u003Cdouble>(jsonStream);\n    \n    model->reset();  \u002F\u002F 重置状态\n    \n    double input[] = {1.0, 0.5, -0.1};\n    double output = model->forward(input);  \u002F\u002F 执行推理\n    \n    return 0;\n}\n```\n\n### 3. 编译时优化（固定模型结构）\n```cpp\n\u002F\u002F 定义固定结构模型（与上文 3 输入 -> 8 -> 1 输出的 Keras 模型对应）\nRTNeural::ModelT\u003Cdouble, 3, 1,\n    RTNeural::DenseT\u003Cdouble, 3, 8>,\n    RTNeural::TanhActivationT\u003Cdouble, 8>,\n    RTNeural::DenseT\u003Cdouble, 8, 1>\n> modelT;\n\n\u002F\u002F 加载权重\nstd::ifstream jsonStream(\"model_weights.json\", std::ifstream::binary);\nmodelT.parseJson(jsonStream);\nmodelT.reset();\n\ndouble input[] = { 1.0, 0.5, -0.1 };\ndouble output = modelT.forward(input);\n```\n\n### 4. 
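逐样本实时推理（音频回调示意）\n\n实时音频处理通常在音频回调中对每个采样点调用一次 `forward()`。以下为示意性片段，并非 RTNeural 提供的接口：假设已按第 3 步方式构造并加载了一个单输入单输出的编译时模型 `model`（如 `RTNeural::ModelT\u003Cfloat, 1, 1, ...>`），`processBlock` 及其参数命名均为假设。\n\n```cpp\n\u002F\u002F 示意：在音频回调中逐样本推理（model 为已加载并 reset 的编译时模型）\nvoid processBlock(float* buffer, int numSamples)\n{\n    for (int n = 0; n \u003C numSamples; ++n)\n    {\n        float input[] = { buffer[n] };\n        buffer[n] = model.forward(input);  \u002F\u002F 每个采样点执行一次前向计算\n    }\n}\n```\n\n模型加载与 `reset()` 应在初始化阶段完成；回调内避免内存分配与文件 I\u002FO，以满足实时约束。\n\n### 5. 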
后端选择建议\n- **Eigen**：适合大网络（默认）\n- **xsimd**：适合小网络（SIMD加速）\n- **STL**：通用兼容模式\n\n```cmake\n# 示例：强制使用xsimd后端\nset(RTNEURAL_XSIMD ON CACHE BOOL \"Use XSIMD backend\" FORCE)\nadd_subdirectory(RTNeural)\n```\n\n> ⚠️ 注意：使用AVX指令集需添加 `-DRTNEURAL_USE_AVX=ON`（仅限支持AVX的平台）","音频插件开发团队正在为数字音频工作站（DAW）开发一款基于深度学习的实时语音变声插件，要求处理延迟低于5ms并适配Windows\u002FLinux\u002FmacOS三平台。\n\n### 没有 RTNeural 时\n- **模型移植困难**：PyTorch训练的GRU模型需手动转换为C++代码，参数初始化和张量运算需重复实现，耗时2周且易出错\n- **性能瓶颈明显**：使用通用推理框架（如TensorFlow Lite）时，单次推理耗时达12ms，无法满足实时音频处理需求\n- **内存占用过高**：在嵌入式音频接口设备上运行时，内存峰值超过128MB，超出硬件限制\n- **跨平台调试复杂**：不同操作系统下的浮点数精度差异导致音频输出出现可闻的爆裂声\n\n### 使用 RTNeural 后\n- **自动模型转换**：通过Python工具链一键导出JSON权重文件，C++端自动解析生成优化后的计算图，开发周期缩短至2天\n- **延迟显著降低**：经SIMD指令优化的GRU层实现3.2ms\u002F帧推理速度，满足5ms硬实时要求\n- **内存占用优化**：静态内存分配策略将峰值内存控制在18MB，适配USB音频接口的嵌入式环境\n- **跨平台一致性**：内置的数值稳定性处理消除系统差异，确保Mac和Linux设备输出音频波形完全一致\n\n核心价值：RTNeural通过专为实时系统设计的轻量化C++引擎，解决了深度学习音频处理中模型移植效率低、推理延迟高、资源占用大等关键痛点，使开发者能将训练好的模型快速部署到对时延敏感的实时音频场景。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjatinchowdhury18_RTNeural_f4599b09.png","jatinchowdhury18",null,"https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fjatinchowdhury18_64e20b2e.png","@Chowdhury-DSP 
","jatin@ccrma.stanford.edu","https:\u002F\u002Fccrma.stanford.edu\u002F~jatin\u002F","https:\u002F\u002Fgithub.com\u002Fjatinchowdhury18",[84,88,91,95,99,103,107],{"name":85,"color":86,"percentage":87},"C++","#f34b7d",98.1,{"name":89,"color":77,"percentage":90},"NASL",0.7,{"name":92,"color":93,"percentage":94},"CMake","#DA3434",0.5,{"name":96,"color":97,"percentage":98},"Python","#3572A5",0.4,{"name":100,"color":101,"percentage":102},"C","#555555",0.3,{"name":104,"color":105,"percentage":106},"HTML","#e34c26",0,{"name":108,"color":109,"percentage":106},"Makefile","#427819",805,83,"2026-04-02T11:43:30","BSD-3-Clause","未说明",{"notes":116,"python":114,"dependencies":117},"需通过CMake构建项目，支持Eigen\u002Fxsimd后端加速；导出模型权重需Python环境并安装TensorFlow或PyTorch",[118,119,92],"Eigen","xsimd",[13,55],4,"2026-03-27T02:49:30.150509","2026-04-06T08:45:14.916364",[125,130,135,140,145,149],{"id":126,"question_zh":127,"answer_zh":128,"source_url":129},1970,"如何获取模型的数组输出？","使用 `model.getOutputs()` 方法可以直接获取模型的数组输出。例如，在 RTNeural 中，如果模型输出是一个 1D 向量，可以通过该方法访问所有输出值。具体实现可参考官方示例代码。","https:\u002F\u002Fgithub.com\u002Fjatinchowdhury18\u002FRTNeural\u002Fissues\u002F105",{"id":131,"question_zh":132,"answer_zh":133,"source_url":134},1971,"如何将 PyTorch 模型的 `state_dict` 转换为 RTNeural 支持的 JSON 格式？","RTNeural 目前未直接支持 PyTorch 的 `state_dict` 导出。建议通过 PyTorch 的 TorchScript 功能导出模型，或手动解析 `state_dict` 并生成符合 RTNeural 格式的 JSON 文件。具体实现可能需要编写自定义脚本，可联系维护者获取进一步指导。","https:\u002F\u002Fgithub.com\u002Fjatinchowdhury18\u002FRTNeural\u002Fissues\u002F53",{"id":136,"question_zh":137,"answer_zh":138,"source_url":139},1972,"如何为 RTNeural 的 Conv1D 层设置 strides 参数？","RTNeural 的 Conv1D 层目前未直接支持 strides 参数。需要手动实现 strided 卷积逻辑，或等待后续版本更新。可参考维护者提供的测试脚本（如 [conv1d-stateless 
分支](https:\u002F\u002Fgithub.com\u002Fjatinchowdhury18\u002FRTNeural\u002Fcompare\u002Fconv1d-stateless)）进行自定义实现。","https:\u002F\u002Fgithub.com\u002Fjatinchowdhury18\u002FRTNeural\u002Fissues\u002F144",{"id":141,"question_zh":142,"answer_zh":143,"source_url":144},1973,"如何支持 NAM 文件格式的模型？","RTNeural 已通过 [RTNeural-NAM 项目](https:\u002F\u002Fgithub.com\u002Fjatinchowdhury18\u002FRTNeural-NAM) 实现了对 NAM 文件格式的支持。需注意实现中缺少 \"gated\" 激活函数，但可通过 `std::tanh` 替代以达到相近精度（RMS 误差约 6.8e-8）。性能测试显示 RTNeural 的实现比原 NAM 快 1.5-2 倍。","https:\u002F\u002Fgithub.com\u002Fjatinchowdhury18\u002FRTNeural\u002Fissues\u002F143",{"id":146,"question_zh":147,"answer_zh":148,"source_url":129},1974,"如何在 RTNeural 中实现非顺序结构的复杂模型（如 ResNet\u002FDenseNet）？","RTNeural 支持通过直接操作层对象（而非仅限顺序结构）构建复杂模型。例如，可使用 `RTNeural::Conv1DStateless` 层并手动管理输入\u002F输出。具体示例可参考 [conv1d-stateless 分支](https:\u002F\u002Fgithub.com\u002Fjatinchowdhury18\u002FRTNeural\u002Fcompare\u002Fconv1d-stateless) 的实现。",{"id":150,"question_zh":151,"answer_zh":152,"source_url":144},1975,"RTNeural 与 NAM 在性能上的差异如何？","RTNeural 的性能优势在固定样本缓冲区大小时更明显（如 32768 样本），但实际应用中缓冲区大小动态变化时，NAM 的性能可能更优。建议通过编译器优化（如指定最小缓冲区大小）提升 RTNeural 的动态性能。",[]]