[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-novak-99--MLPP":3,"tool-novak-99--MLPP":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 
多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":79,"owner_twitter":79,"owner_website":79,"owner_url":81,"languages":82,"stars":91,"forks":92,"last_commit_at":93,"license":94,"difficulty_score":95,"env_os":96,"env_gpu":97,"env_ram":97,"env_deps":98,"category_tags":104,"github_topics":105,"view_count":10,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":110,"updated_at":111,"faqs":112,"releases":143},846,"novak-99\u002FMLPP","MLPP","A library created to revitalize C++ as a machine learning front end. Per aspera ad astra. ","MLPP 是一个专为 C++ 打造的机器学习库，致力于让 C++ 重回机器学习开发的核心舞台。长期以来，Python 主导了机器学习生态，导致 C++ 开发者面临工具匮乏的困境。MLPP 填补了这一空白，充当了底层系统与机器学习算法之间的桥梁。\n\n它非常适合熟悉 C++ 的开发者、系统工程师以及对性能有严格要求的研究人员。通过 MLPP，用户可以直接利用 C++ 的高性能优势进行模型构建与部署。\n\n技术亮点方面，MLPP 功能丰富且灵活。它不仅涵盖线性、逻辑及多种高级回归模型，还支持动态深度的神经网络。内置了超过二十种激活函数（如 ReLU、Swish、GELU）和十几种优化算法（包括 Adam、SGD 等）。此外，它还提供了完善的损失函数、正则化方法及权重初始化策略。数据层面采用标准 std::vector 模拟向量与矩阵，降低了学习门槛。只需简单的头文件包含与编译配置，C++ 开发者便能快速集成强大的机器学习能力。","# ML++\n\nMachine learning is a vast and exiciting discipline, garnering attention from specialists of many fields. Unfortunately, for C++ programmers and enthusiasts, there appears to be a lack of support in the field of machine learning. To fill that void and give C++ a true foothold in the ML sphere, this library was written. The intent with this library is for it to act as a crossroad between low-level developers and machine learning engineers.\n\n\u003Cp align=\"center\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fnovak-99_MLPP_readme_6e5899e03135.gif\" \n    width = 600 height = 400>\n\u003C\u002Fp>\n\n## Installation\nBegin by downloading the header files for the ML++ library. You can do this by cloning the repository and extracting the MLPP directory within it:\n```\ngit clone https:\u002F\u002Fgithub.com\u002Fnovak-99\u002FMLPP\n```\nNext, execute the \"buildSO.sh\" shell script:\n```\nsudo .\u002FbuildSO.sh\n```\nAfter doing so, maintain the ML++ source files in a local directory and include them in this fashion: \n```cpp\n#include \"MLPP\u002FStat\u002FStat.hpp\" \u002F\u002F Including the ML++ statistics module. 
Finally, after you have concluded creating a project, compile it using g++:
```
g++ main.cpp /usr/local/lib/MLPP.so --std=c++17
```

## Usage
Please note that ML++ uses the ```std::vector<double>``` data type for emulating vectors, and the ```std::vector<std::vector<double>>``` data type for emulating matrices.

Begin by including the respective header file of your choice.
```cpp
#include "MLPP/LinReg/LinReg.hpp"
```
Next, instantiate an object of the class. Don't forget to pass the input set and output set as parameters.
```cpp
LinReg model(inputSet, outputSet);
```
Afterwards, call the optimizer that you would like to use. For iterative optimizers such as gradient descent, include the learning rate, epoch number, and whether or not to utilize the UI panel.
```cpp
model.gradientDescent(0.001, 1000, 0);
```
Great, you are now ready to test! To test a singular testing instance, utilize the following function:
```cpp
model.modelTest(testSetInstance);
```
This will return the model's singular prediction for that example.

To test an entire test set, use the following function:
```cpp
model.modelSetTest(testSet);
```
The result will be the model's predictions for the entire dataset.
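Putting those pieces together, here is a minimal end-to-end sketch. The toy dataset (y = 2x), the assumption that each inner vector of the input set is one training example, and the assumption that `modelTest` returns a `double` for linear regression are ours for illustration; only the include and the calls shown above come from the documentation.

```cpp
#include <iostream>
#include <vector>
#include "MLPP/LinReg/LinReg.hpp"

int main() {
    // Matrices are std::vector<std::vector<double>>; each inner vector is
    // assumed to be one training example (a single feature here).
    std::vector<std::vector<double>> inputSet = {{1}, {2}, {3}, {4}, {5}};
    std::vector<double> outputSet = {2, 4, 6, 8, 10}; // y = 2x

    LinReg model(inputSet, outputSet);

    // Learning rate 0.001, 1000 epochs, UI panel off.
    model.gradientDescent(0.001, 1000, 0);

    // Single-instance prediction; should land near 12 for x = 6.
    std::cout << model.modelTest({6}) << std::endl;
    return 0;
}
```

Compile it exactly as in the Installation section: `g++ main.cpp /usr/local/lib/MLPP.so --std=c++17`.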
## Contents of the Library
1. ***Regression***
    1. Linear Regression
    2. Logistic Regression
    3. Softmax Regression
    4. Exponential Regression
    5. Probit Regression
    6. CLogLog Regression
    7. Tanh Regression
2. ***Deep, Dynamically Sized Neural Networks***
    1. Possible Activation Functions
        - Linear
        - Sigmoid
        - Softmax
        - Swish
        - Mish
        - SinC
        - Softplus
        - Softsign
        - CLogLog
        - Logit
        - Gaussian CDF
        - RELU
        - GELU
        - Sign
        - Unit Step
        - Sinh
        - Cosh
        - Tanh
        - Csch
        - Sech
        - Coth
        - Arsinh
        - Arcosh
        - Artanh
        - Arcsch
        - Arsech
        - Arcoth
    2. Possible Optimization Algorithms
        - Batch Gradient Descent
        - Mini-Batch Gradient Descent
        - Stochastic Gradient Descent
        - Gradient Descent with Momentum
        - Nesterov Accelerated Gradient
        - Adagrad Optimizer
        - Adadelta Optimizer
        - Adam Optimizer
        - Adamax Optimizer
        - Nadam Optimizer
        - AMSGrad Optimizer
        - 2nd Order Newton-Raphson Optimizer*
        - Normal Equation*

        *Only available for linear regression
    3. Possible Loss Functions
        - MSE
        - RMSE
        - MAE
        - MBE
        - Log Loss
        - Cross Entropy
        - Hinge Loss
        - Wasserstein Loss
    4. Possible Regularization Methods
        - Lasso
        - Ridge
        - ElasticNet
        - Weight Clipping
    5. Possible Weight Initialization Methods
        - Uniform
        - Xavier Normal
        - Xavier Uniform
        - He Normal
        - He Uniform
        - LeCun Normal
        - LeCun Uniform
    6. Possible Learning Rate Schedulers
        - Time Based
        - Epoch Based
        - Step Based
        - Exponential
3. ***Prebuilt Neural Networks***
    1. Multilayer Perceptron
    2. Autoencoder
    3. Softmax Network
4. ***Generative Modeling***
    1. Tabular Generative Adversarial Networks
    2. Tabular Wasserstein Generative Adversarial Networks
5. ***Natural Language Processing***
    1. Word2Vec (Continuous Bag of Words, Skip-Gram)
    2. Stemming
    3. Bag of Words
    4. TFIDF
    5. Tokenization
    6. Auxiliary Text Processing Functions
6. ***Computer Vision***
    1. The Convolution Operation
    2. Max, Min, Average Pooling
    3. Global Max, Min, Average Pooling
    4. Prebuilt Feature Detectors
        - Horizontal/Vertical Prewitt Filter
        - Horizontal/Vertical Sobel Filter
        - Horizontal/Vertical Scharr Filter
        - Horizontal/Vertical Roberts Filter
        - Gaussian Filter
        - Harris Corner Detector
7. ***Principal Component Analysis***
8. ***Naive Bayes Classifiers***
    1. Multinomial Naive Bayes
    2. Bernoulli Naive Bayes
    3. Gaussian Naive Bayes
9. ***Support Vector Classification***
    1. Primal Formulation (Hinge Loss Objective)
    2. Dual Formulation (Via Lagrangian Multipliers)
10. ***K-Means***
11. ***k-Nearest Neighbors***
12. ***Outlier Finder (Using z-scores)***
13. ***Matrix Decompositions***
    1. SVD Decomposition
    2. Cholesky Decomposition
        - Positive Definiteness Checker
    3. QR Decomposition
14. ***Numerical Analysis***
    1. Numerical Differentiation
        - Univariate Functions
        - Multivariate Functions
    2. Jacobian Vector Calculator
    3. Hessian Matrix Calculator
    4. Function Approximator
        - Constant Approximation
        - Linear Approximation
        - Quadratic Approximation
        - Cubic Approximation
    5. Differential Equations Solvers
        - Euler's Method
        - Growth Method
15. ***Mathematical Transforms***
    1. Discrete Cosine Transform
16. ***Linear Algebra Module***
17. ***Statistics Module***
18. ***Data Processing Module***
    1. Setting and Printing Datasets
    2. Available Datasets
        1. Wisconsin Breast Cancer Dataset
            - Binary
            - SVM
        2. MNIST Dataset
            - Train
            - Test
        3. Iris Flower Dataset
        4. Wine Dataset
        5. California Housing Dataset
        6. Fires and Crime Dataset (Chicago)
    3. Feature Scaling
    4. Mean Normalization
    5. One Hot Representation
    6. Reverse One Hot Representation
    7. Supported Color Space Conversions
        - RGB to Grayscale
        - RGB to HSV
        - RGB to YCbCr
        - RGB to XYZ
        - XYZ to RGB
19. ***Utilities***
    1. TP, FP, TN, FN function
    2. Precision
    3. Recall
    4. Accuracy
    5. F1 score


## What's in the Works?
ML++, like most frameworks, is dynamic and constantly changing. This is especially important in the world of ML, as new algorithms and techniques are being developed day by day. Here are a couple of things currently being developed for ML++:

- Convolutional Neural Networks
- Kernels for SVMs
- Support Vector Regression
## Citations
Various materials helped me along the way of creating ML++, and I would like to credit several of them here. [This](https://www.tutorialspoint.com/cplusplus-program-to-compute-determinant-of-a-matrix) article by TutorialsPoint was a big help when implementing the determinant of a matrix, and [this](https://www.geeksforgeeks.org/adjoint-inverse-matrix/) article by GeeksForGeeks was very helpful when taking the adjoint and inverse of a matrix.
# ML++ Quick Start Guide

ML++ is a machine-learning library designed for C++ developers, bridging low-level development and machine-learning engineering. It offers rich regression, neural-network, NLP, and computer-vision modules.

## Prerequisites

*   **Operating system**: Linux / macOS (the build relies on a shell script)
*   **Compiler**: GCC (g++) with **C++17** support
*   **Version control**: Git
*   **Permissions**: some install steps may require `sudo`

## Installation

1.  **Clone the repository**
    Fetch the source from GitHub and extract the MLPP directory:
    ```bash
    git clone https://github.com/novak-99/MLPP
    ```

2.  **Build the shared library**
    Enter the directory and run the build script (administrator rights required):
    ```bash
    sudo ./buildSO.sh
    ```

3.  **Compile your project**
    Keep the ML++ source files in a local directory, and link the generated library at compile time:
    ```bash
    g++ main.cpp /usr/local/lib/MLPP.so --std=c++17
    ```

## Basic Usage

ML++ emulates vectors with `std::vector<double>` and matrices with `std::vector<std::vector<double>>`. Taking linear regression as an example:

1.  **Include the header**
    ```cpp
    #include "MLPP/LinReg/LinReg.hpp"
    ```

2.  **Instantiate the model**
    Pass the input set and output set as parameters:
    ```cpp
    LinReg model(inputSet, outputSet);
    ```

3.  **Train the model**
    Call an optimizer (such as gradient descent) with the learning rate, epoch count, and UI toggle:
    ```cpp
    model.gradientDescent(0.001, 1000, 0);
    ```

4.  **Test and predict**
    Test a single sample or the whole dataset:
    ```cpp
    // Test a single instance
    model.modelTest(testSetInstance);

    // Test the whole dataset
    model.modelSetTest(testSet);
    ```
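To actually execute the steps above, the compile-and-run sequence looks like this; the `-o mlpp_demo` output name is our choice for illustration, while the rest comes straight from the installation steps:

```bash
g++ main.cpp /usr/local/lib/MLPP.so --std=c++17 -o mlpp_demo
./mlpp_demo
```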
## Use Case

An IoT device maker's embedded team needs to run a linear-regression algorithm on a low-power gateway to predict device failures in real time, without dragging in a bulky Python environment.

### Without MLPP
- Bundling a Python interpreter bloats the firmware several times over, exceeding the storage budget of the embedded device.
- Cross-compiling deep-learning libraries such as TensorFlow is extremely difficult; dependency management is messy and error-prone.
- Calling Python from the C++ main program via subprocesses adds significant latency, missing millisecond-level real-time requirements.
- Maintaining both C++ and Python codebases raises development cost; data hand-off and debugging are clumsy and opaque.

### With MLPP
- Header includes and a linked shared library yield a pure C++ binary with a tiny memory footprint, a perfect fit for resource-constrained embedded environments.
- Built-in linear and logistic regression (among other algorithms) let the team build and train predictive models with no external dependencies.
- Native C++ eliminates inter-process communication overhead, speeding up inference by orders of magnitude to meet high-frequency trading or control demands.
- A single-language stack simplifies the project structure; machine learning slots into the existing C++ build system, lowering maintenance burden.

MLPP lets C++ developers embed machine-learning capability into low-level systems without sacrificing performance.

## Project Metadata

- **Author**: [novak-99](https://github.com/novak-99) (Marc Melikyan), Madison, WI. Bio: "19-year-old aspiring machine learning engineer. C++ enthusiast with an interest in numerical analysis."
- **Languages**: C++ 99.7%, Shell 0.3%
- **Stars / Forks**: 1,106 / 155
- **Last commit**: 2026-04-04
- **License**: MIT
- **Difficulty**: 4
- **Environment**: Linux, macOS; GPU and RAM requirements not stated
- **Dependencies**: g++, C++17; no Python required. This is a native C++ ML library: installation compiles a shared library (.so), data flows through std::vector, and computation relies on the local CPU. Concrete hardware requirements are not documented.
- **Tags**: data tools, dev framework, other
- **GitHub topics**: cpp, data-science, machine-learning, deep-learning

## FAQ

**How was the softmax function's performance improved?**
The original implementation recomputed the normalizing sum inside the loop, z.size() times. The fix is to hoist the summation out of the loop so it runs only once; a further optimization precomputes the exponentials into an intermediate array so exp() is not called repeatedly. The maintainer adopted the first optimization and updated the code. Both optimizations are sketched in the code block after this FAQ. ([issue #1](https://github.com/novak-99/MLPP/issues/1))

**How can matrix multiplication run faster?**
Swapping the execution order of the inner loops (to i-k-j order) leaves the result unchanged but speeds things up significantly; see the same sketch below. Beyond that, std::vector<std::vector<>> is not recommended for storing matrices; a better-suited container or data structure should be found. The maintainer accepted the optimization and committed the corresponding change. ([issue #4](https://github.com/novak-99/MLPP/issues/4))

**Why does the accuracy function apply std::round?**
When a model trained by an iterative algorithm outputs discrete targets, it rarely lands exactly on integers such as [1, 0, 1]; it usually produces near-values like [0.99, 0.001, 0.999]. Rounding the outputs to the nearest integer before comparing against the true labels is a convenient and reasonable way to compute accuracy. ([issue #11](https://github.com/novak-99/MLPP/issues/11))

**Why include hyperbolic activation functions with divergent gradients (such as Cosh and Sinh)?**
Although these functions see little use in deep learning, they perform well alongside Sigmoid in some small-scale cases (the XOR dataset, for example). They are kept so that users have a wider choice of activation functions. ([issue #7](https://github.com/novak-99/MLPP/issues/7))

**Are the MAEDeriv and WassersteinLoss cost functions implemented incorrectly?**
MAEDeriv did contain a bug (it should compare y_hat against y, not against zero), which the maintainer has fixed. As for WassersteinLoss, a user initially mistook it for HingeLoss; on review the implementation was confirmed correct. ([issue #12](https://github.com/novak-99/MLPP/issues/12))

**Are there known issues with the Logit function?**
An earlier version of the Logit function was implemented incorrectly. The fix was submitted in Pull Request #10 and has been merged into the main branch by the maintainer. ([issue #9](https://github.com/novak-99/MLPP/issues/9))
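A sketch of the two FAQ optimizations above, written against the plain std::vector containers the library uses; the function names and signatures here are illustrative, not MLPP's actual API:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Softmax with the normalizing sum hoisted out of the loop and the
// exponentials cached, so std::exp runs once per element instead of
// the sum being recomputed z.size() times.
std::vector<double> softmax(const std::vector<double>& z) {
    std::vector<double> expZ(z.size());
    double sum = 0.0;
    for (std::size_t i = 0; i < z.size(); ++i) {
        expZ[i] = std::exp(z[i]);
        sum += expZ[i];
    }
    for (double& v : expZ) v /= sum;
    return expZ;
}

// Matrix multiplication in i-k-j order: the innermost loop walks B and C
// row-wise, which is far kinder to the cache than the textbook i-j-k order
// when matrices are stored as a row-major vector of vectors.
// Assumes A is n x p and B is p x m, both non-empty.
using Matrix = std::vector<std::vector<double>>;

Matrix matmult(const Matrix& A, const Matrix& B) {
    const std::size_t n = A.size(), p = B.size(), m = B[0].size();
    Matrix C(n, std::vector<double>(m, 0.0));
    for (std::size_t i = 0; i < n; ++i) {
        for (std::size_t k = 0; k < p; ++k) {
            const double a = A[i][k];  // reused across the whole inner row
            for (std::size_t j = 0; j < m; ++j) {
                C[i][j] += a * B[k][j];
            }
        }
    }
    return C;
}
```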
## Releases

- **v1.0.2** (2022-02-13): Added DCT, various color space conversions, learning rate decay for neural nets, and more.
- **v1.0.1** (2022-01-29): Generative adversarial networks and generative modeling added to the ML++ suite. Minor error in ANNs fixed.
- **v1.0.0** (2022-01-22): First official release. Includes deep neural networks, special optimizers, computer vision algorithms, regression, PCA, linear algebra, statistics, and more.