[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"tool-karpathy--convnetjs":3,"similar-karpathy--convnetjs":88},{"id":4,"github_repo":5,"name":6,"description_en":7,"description_zh":8,"ai_summary_zh":8,"readme_en":9,"readme_zh":10,"quickstart_zh":11,"use_case_zh":12,"hero_image_url":13,"owner_login":14,"owner_name":15,"owner_avatar_url":16,"owner_bio":17,"owner_company":18,"owner_location":19,"owner_email":20,"owner_twitter":18,"owner_website":21,"owner_url":22,"languages":23,"stars":36,"forks":37,"last_commit_at":38,"license":39,"difficulty_score":40,"env_os":41,"env_gpu":41,"env_ram":41,"env_deps":42,"category_tags":45,"github_topics":18,"view_count":48,"oss_zip_url":18,"oss_zip_packed_at":18,"status":49,"created_at":50,"updated_at":51,"faqs":52,"releases":82},1301,"karpathy\u002Fconvnetjs","convnetjs","Deep Learning in Javascript. Train Convolutional Neural Networks (or ordinary ones) in your browser.","ConvNetJS 把深度学习“搬”进了浏览器：用纯 JavaScript 就能训练卷积神经网络（CNN）或普通神经网络，无需安装任何框架或 GPU。它解决了“想快速验证模型却懒得搭环境”的痛点——打开网页即可跑 MNIST、CIFAR-10 等经典实验，还能在线调参看效果。\n\n主要面向前端开发者、算法初学者和教学场景：写几行 JS 就能搭网络、看梯度下降过程，甚至体验深度强化学习（Deep Q-Learning）。内置的交互式 Demo 把抽象概念可视化，特别适合课堂演示或原型验证。\n\n技术亮点：支持卷积、池化、ReLU、Softmax 等常用层；自带 SGD\u002FAdagrad\u002FAdadelta 优化器；体积小巧，可直接嵌入网页。虽然作者已不再维护，但代码和示例依旧可用，仍是入门深度学习的轻量级选择。","\n# ConvNetJS\n\nConvNetJS is a Javascript implementation of Neural networks, together with nice browser-based demos. It currently supports:\n\n- Common **Neural Network modules** (fully connected layers, non-linearities)\n- Classification (SVM\u002FSoftmax) and Regression (L2) **cost functions**\n- Ability to specify and train **Convolutional Networks** that process images\n- An experimental **Reinforcement Learning** module, based on Deep Q Learning\n\nFor much more information, see the main page at [convnetjs.com](http:\u002F\u002Fconvnetjs.com)\n\n**Note**: I am not actively maintaining ConvNetJS anymore because I simply don't have time. I think the npm repo might not work at this point.\n\n## Online Demos\n- [Convolutional Neural Network on MNIST digits](http:\u002F\u002Fcs.stanford.edu\u002F~karpathy\u002Fconvnetjs\u002Fdemo\u002Fmnist.html)\n- [Convolutional Neural Network on CIFAR-10](http:\u002F\u002Fcs.stanford.edu\u002F~karpathy\u002Fconvnetjs\u002Fdemo\u002Fcifar10.html)\n- [Toy 2D data](http:\u002F\u002Fcs.stanford.edu\u002F~karpathy\u002Fconvnetjs\u002Fdemo\u002Fclassify2d.html)\n- [Toy 1D regression](http:\u002F\u002Fcs.stanford.edu\u002F~karpathy\u002Fconvnetjs\u002Fdemo\u002Fregression.html)\n- [Training an Autoencoder on MNIST digits](http:\u002F\u002Fcs.stanford.edu\u002F~karpathy\u002Fconvnetjs\u002Fdemo\u002Fautoencoder.html)\n- [Deep Q Learning Reinforcement Learning demo](http:\u002F\u002Fcs.stanford.edu\u002Fpeople\u002Fkarpathy\u002Fconvnetjs\u002Fdemo\u002Frldemo.html)\n- [Image Regression (\"Painting\")](http:\u002F\u002Fcs.stanford.edu\u002F~karpathy\u002Fconvnetjs\u002Fdemo\u002Fimage_regression.html)\n- [Comparison of SGD\u002FAdagrad\u002FAdadelta on MNIST](http:\u002F\u002Fcs.stanford.edu\u002Fpeople\u002Fkarpathy\u002Fconvnetjs\u002Fdemo\u002Ftrainers.html)\n\n## Example Code\n\nHere's a minimum example of defining a **2-layer neural network** and training\nit on a single data point:\n\n```javascript\n\u002F\u002F species a 2-layer neural network with one hidden layer of 20 neurons\nvar layer_defs = [];\n\u002F\u002F input layer declares size of input. 
here: 2-D data\n\u002F\u002F ConvNetJS works on 3-Dimensional volumes (sx, sy, depth), but if you're not dealing with images\n\u002F\u002F then the first two dimensions (sx, sy) will always be kept at size 1\nlayer_defs.push({type:'input', out_sx:1, out_sy:1, out_depth:2});\n\u002F\u002F declare 20 neurons, followed by ReLU (rectified linear unit non-linearity)\nlayer_defs.push({type:'fc', num_neurons:20, activation:'relu'}); \n\u002F\u002F declare the linear classifier on top of the previous hidden layer\nlayer_defs.push({type:'softmax', num_classes:10});\n\nvar net = new convnetjs.Net();\nnet.makeLayers(layer_defs);\n\n\u002F\u002F forward a random data point through the network\nvar x = new convnetjs.Vol([0.3, -0.5]);\nvar prob = net.forward(x); \n\n\u002F\u002F prob is a Vol. Vols have a field .w that stores the raw data, and .dw that stores gradients\nconsole.log('probability that x is class 0: ' + prob.w[0]); \u002F\u002F prints 0.50101\n\nvar trainer = new convnetjs.SGDTrainer(net, {learning_rate:0.01, l2_decay:0.001});\ntrainer.train(x, 0); \u002F\u002F train the network, specifying that x is class zero\n\nvar prob2 = net.forward(x);\nconsole.log('probability that x is class 0: ' + prob2.w[0]);\n\u002F\u002F now prints 0.50374, slightly higher than the previous 0.50101: the network's\n\u002F\u002F weights have been adjusted by the Trainer to give a higher probability to\n\u002F\u002F the class we trained the network with (zero)\n```\n\nand here is a small **Convolutional Neural Network** if you wish to predict on images:\n\n```javascript\nvar layer_defs = [];\nlayer_defs.push({type:'input', out_sx:32, out_sy:32, out_depth:3}); \u002F\u002F declare size of input\n\u002F\u002F output Vol is of size 32x32x3 here\nlayer_defs.push({type:'conv', sx:5, filters:16, stride:1, pad:2, activation:'relu'});\n\u002F\u002F the layer will perform convolution with 16 kernels, each of size 5x5.\n\u002F\u002F the input will be padded with 2 pixels on all sides to make the output Vol of the same size\n\u002F\u002F output Vol will thus be 32x32x16 at this point\nlayer_defs.push({type:'pool', sx:2, stride:2});\n\u002F\u002F output Vol is of size 16x16x16 here\nlayer_defs.push({type:'conv', sx:5, filters:20, stride:1, pad:2, activation:'relu'});\n\u002F\u002F output Vol is of size 16x16x20 here\nlayer_defs.push({type:'pool', sx:2, stride:2});\n\u002F\u002F output Vol is of size 8x8x20 here\nlayer_defs.push({type:'conv', sx:5, filters:20, stride:1, pad:2, activation:'relu'});\n\u002F\u002F output Vol is of size 8x8x20 here\nlayer_defs.push({type:'pool', sx:2, stride:2});\n\u002F\u002F output Vol is of size 4x4x20 here\nlayer_defs.push({type:'softmax', num_classes:10});\n\u002F\u002F output Vol is of size 1x1x10 here\n\nvar net = new convnetjs.Net();\nnet.makeLayers(layer_defs);\n\n\u002F\u002F helpful utility for converting images into Vols is included\nvar x = convnetjs.img_to_vol(document.getElementById('some_image'));\nvar output_probabilities_vol = net.forward(x);\n```\n\n## Getting Started\nA [Getting Started](http:\u002F\u002Fcs.stanford.edu\u002Fpeople\u002Fkarpathy\u002Fconvnetjs\u002Fstarted.html) tutorial is available on the main page.\n\nThe full [Documentation](http:\u002F\u002Fcs.stanford.edu\u002Fpeople\u002Fkarpathy\u002Fconvnetjs\u002Fdocs.html) can also be found there.\n\nSee the **releases** page for this project to get the minified, compiled library; direct links are also available below for convenience (but please host your own copy):\n\n- [convnet.js](http:\u002F\u002Fcs.stanford.edu\u002Fpeople\u002Fkarpathy\u002Fconvnetjs\u002Fbuild\u002Fconvnet.js)\n- [convnet-min.js](http:\u002F\u002Fcs.stanford.edu\u002Fpeople\u002Fkarpathy\u002Fconvnetjs\u002Fbuild\u002Fconvnet-min.js)\n\n## Compiling the library from src\u002F to build\u002F\nIf you would like to add features to the library, you will have to change the code in `src\u002F` and then compile the library into the `build\u002F` directory. The compilation script simply concatenates the files in `src\u002F` and then minifies the result.\n\nThe compilation is done with an ant task: it builds `build\u002Fconvnet.js` by concatenating the source files in `src\u002F` and then minifies the result into `build\u002Fconvnet-min.js`. Make sure you have **ant** installed (on Ubuntu: `sudo apt-get install ant`), then cd into the `compile\u002F` directory and run:\n\n    $ ant -lib yuicompressor-2.4.8.jar -f build.xml\n\nThe output files will be in `build\u002F`.\n\n## Use in Node\nThe library is also available on *node.js*:\n\n1. Install it: `$ npm install convnetjs`\n2. Use it: `var convnetjs = require(\"convnetjs\");`
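\n\nAs a quick sanity check, here is a minimal sketch of using the library from Node, assuming the npm package still installs (see the note above about the npm repo possibly not working); the API is the same one shown in the browser examples above, and the class count of 2 is an arbitrary choice for this sketch:\n\n```javascript\nvar convnetjs = require(\"convnetjs\");\n\n\u002F\u002F same style of 2-layer network as in the example above\nvar layer_defs = [];\nlayer_defs.push({type:'input', out_sx:1, out_sy:1, out_depth:2});\nlayer_defs.push({type:'fc', num_neurons:20, activation:'relu'});\nlayer_defs.push({type:'softmax', num_classes:2});\n\nvar net = new convnetjs.Net();\nnet.makeLayers(layer_defs);\n\nvar x = new convnetjs.Vol([0.3, -0.5]);\nconsole.log(net.forward(x).w); \u002F\u002F prints the class probabilities\n```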
\n\n## License\nMIT\n","# ConvNetJS\n\nConvNetJS 是一个用 JavaScript 实现的神经网络库，并附带精美的浏览器端演示。目前支持：\n\n- 常见的**神经网络模块**（全连接层、非线性激活函数）\n- 分类（SVM\u002FSoftmax）和回归（L2）**损失函数**\n- 可以定义并训练处理图像的**卷积神经网络**\n- 一个基于深度 Q 学习的实验性**强化学习**模块\n\n更多详细信息请参见主页面：[convnetjs.com](http:\u002F\u002Fconvnetjs.com)\n\n**注意**：由于时间有限，我已不再积极维护 ConvNetJS。我认为目前 npm 仓库可能也无法正常工作。\n\n## 在线演示\n- [MNIST 数字上的卷积神经网络](http:\u002F\u002Fcs.stanford.edu\u002F~karpathy\u002Fconvnetjs\u002Fdemo\u002Fmnist.html)\n- [CIFAR-10 上的卷积神经网络](http:\u002F\u002Fcs.stanford.edu\u002F~karpathy\u002Fconvnetjs\u002Fdemo\u002Fcifar10.html)\n- [玩具二维数据](http:\u002F\u002Fcs.stanford.edu\u002F~karpathy\u002Fconvnetjs\u002Fdemo\u002Fclassify2d.html)\n- [玩具一维回归](http:\u002F\u002Fcs.stanford.edu\u002F~karpathy\u002Fconvnetjs\u002Fdemo\u002Fregression.html)\n- [在 MNIST 数字上训练自编码器](http:\u002F\u002Fcs.stanford.edu\u002F~karpathy\u002Fconvnetjs\u002Fdemo\u002Fautoencoder.html)\n- [深度 Q 学习强化学习演示](http:\u002F\u002Fcs.stanford.edu\u002Fpeople\u002Fkarpathy\u002Fconvnetjs\u002Fdemo\u002Frldemo.html)\n- [图像回归（“绘画”）](http:\u002F\u002Fcs.stanford.edu\u002F~karpathy\u002Fconvnetjs\u002Fdemo\u002Fimage_regression.html)\n- [SGD\u002FAdagrad\u002FAdadelta 在 MNIST 上的比较](http:\u002F\u002Fcs.stanford.edu\u002Fpeople\u002Fkarpathy\u002Fconvnetjs\u002Fdemo\u002Ftrainers.html)\n\n## 示例代码\n\n以下是一个定义**两层神经网络**并用单个数据点对其进行训练的最小示例：\n\n```javascript\n\u002F\u002F 定义一个包含一层隐藏层、共 20 个神经元的两层神经网络\nvar layer_defs = [];\n\u002F\u002F 输入层声明输入的大小。这里为 2 维数据\n\u002F\u002F ConvNetJS 处理的是 3 维张量（sx, sy, depth），但如果你处理的不是图像，\n\u002F\u002F 那么前两个维度（sx, sy）将始终保持为 1\nlayer_defs.push({type:'input', out_sx:1, out_sy:1, out_depth:2});\n\u002F\u002F 声明 20 个神经元，随后是 ReLU（修正线性单元非线性激活函数）\nlayer_defs.push({type:'fc', num_neurons:20, activation:'relu'}); \n\u002F\u002F 在上一层隐藏层之上声明线性分类器\nlayer_defs.push({type:'softmax', num_classes:10});\n\nvar net = new convnetjs.Net();\nnet.makeLayers(layer_defs);\n\n\u002F\u002F 将一个随机数据点通过网络前向传播\nvar x = new convnetjs.Vol([0.3, -0.5]);\nvar prob = net.forward(x); \n\n\u002F\u002F prob 是一个 Vol。Vol 对象有一个 .w 字段存储原始数据，.dw 存储梯度\nconsole.log('x 属于第 0 类的概率：' + prob.w[0]); \u002F\u002F 输出 0.50101\n\nvar trainer = new convnetjs.SGDTrainer(net, {learning_rate:0.01, l2_decay:0.001});\ntrainer.train(x, 0); \u002F\u002F 训练网络，指定 x 属于第 0 类\n\nvar prob2 = net.forward(x);\nconsole.log('x 属于第 0 类的概率：' + prob2.w[0]);\n\u002F\u002F 现在输出 0.50374，略高于之前的 0.50101：网络的权重已被 Trainer 调整，\n\u002F\u002F 使其对训练过的类别（第 0 类）的概率更高。\n```\n\n而如果你想对图像进行预测，这里是一个小型的**卷积神经网络**示例：\n\n```javascript\nvar layer_defs = [];\nlayer_defs.push({type:'input', out_sx:32, out_sy:32, out_depth:3}); \u002F\u002F 声明输入的大小\n\u002F\u002F 此处输出 Vol 的尺寸为 32x32x3\nlayer_defs.push({type:'conv', sx:5, filters:16, stride:1, pad:2, activation:'relu'});\n\u002F\u002F 该层将使用 16 个 5x5 的卷积核进行卷积操作。\n\u002F\u002F 输入将在四边各填充 2 个像素，以使输出 Vol 的尺寸保持不变。\n\u002F\u002F 因此，此时输出 Vol 的尺寸为 32x32x16\nlayer_defs.push({type:'pool', sx:2, stride:2});\n\u002F\u002F 此处输出 Vol 的尺寸为 16x16x16\nlayer_defs.push({type:'conv', sx:5, filters:20, stride:1, pad:2, activation:'relu'});\n\u002F\u002F 此处输出 Vol 的尺寸为 16x16x20\nlayer_defs.push({type:'pool', sx:2, stride:2});\n\u002F\u002F 此处输出 Vol 的尺寸为 8x8x20\nlayer_defs.push({type:'conv', sx:5, filters:20, stride:1, pad:2, activation:'relu'});\n\u002F\u002F 此处输出 Vol 的尺寸为 8x8x20\nlayer_defs.push({type:'pool', sx:2, stride:2});\n\u002F\u002F 此处输出 Vol 的尺寸为 4x4x20\nlayer_defs.push({type:'softmax', num_classes:10});\n\u002F\u002F 此处输出 Vol 的尺寸为 1x1x10\n\nvar net = new convnetjs.Net();\nnet.makeLayers(layer_defs);\n\n\u002F\u002F 提供了一个将图像转换为 Vol 的实用工具\nvar x = convnetjs.img_to_vol(document.getElementById('some_image'));\nvar output_probabilities_vol = net.forward(x);\n```\n\n## 入门指南\n主页面上提供了[入门指南](http:\u002F\u002Fcs.stanford.edu\u002Fpeople\u002Fkarpathy\u002Fconvnetjs\u002Fstarted.html)。\n\n完整的[文档](http:\u002F\u002Fcs.stanford.edu\u002Fpeople\u002Fkarpathy\u002Fconvnetjs\u002Fdocs.html)也可在那里找到。\n\n请查看本项目的**发布**页面，以获取压缩后的编译库；此外，为了方便起见，下方也提供了直接链接：\n\n- [convnet.js](http:\u002F\u002Fcs.stanford.edu\u002Fpeople\u002Fkarpathy\u002Fconvnetjs\u002Fbuild\u002Fconvnet.js)\n- [convnet-min.js](http:\u002F\u002Fcs.stanford.edu\u002Fpeople\u002Fkarpathy\u002Fconvnetjs\u002Fbuild\u002Fconvnet-min.js)\n\n## 从 src\u002F 编译库到 build\u002F\n如果你想为库添加新功能，需要修改 `src\u002F` 中的代码，然后将其编译到 `build\u002F` 目录。编译脚本会简单地将 `src\u002F` 中的文件串联起来，再对结果进行压缩。\n\n编译使用 Ant 构建工具完成：它会先将 `src\u002F` 中的源文件串联起来生成 `build\u002Fconvnet.js`，然后再将其压缩为 `build\u002Fconvnet-min.js`。确保你已安装 **Ant**（在 Ubuntu 上运行 `sudo apt-get install ant` 即可），然后进入 `compile\u002F` 目录并执行：\n\n    $ ant -lib yuicompressor-2.4.8.jar -f build.xml\n\n编译后的文件将位于 `build\u002F` 目录中。\n\n## 在 Node 中使用\n该库也可在 *node.js* 中使用：\n\n1. 安装：`$ npm install convnetjs`\n2. 使用：`var convnetjs = require(\"convnetjs\");`
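\n\n作为快速验证，下面给出在 Node 中使用该库的一个最小示意（假设 npm 包仍可安装，参见上文关于 npm 仓库可能失效的说明）；API 与上文浏览器示例一致，其中类别数 2 仅为演示用的假设值：\n\n```javascript\nvar convnetjs = require(\"convnetjs\");\n\n\u002F\u002F 与上文示例相同风格的两层网络\nvar layer_defs = [];\nlayer_defs.push({type:'input', out_sx:1, out_sy:1, out_depth:2});\nlayer_defs.push({type:'fc', num_neurons:20, activation:'relu'});\nlayer_defs.push({type:'softmax', num_classes:2});\n\nvar net = new convnetjs.Net();\nnet.makeLayers(layer_defs);\n\nvar x = new convnetjs.Vol([0.3, -0.5]);\nconsole.log(net.forward(x).w); \u002F\u002F 输出各类别概率\n```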
\n\n## 许可证\nMIT","# ConvNetJS 快速上手指南\n\n## 环境准备\n- **系统要求**：任意支持现代浏览器的操作系统（Windows \u002F macOS \u002F Linux）\n- **前置依赖**：无（纯 JavaScript 实现，浏览器或 Node.js 均可运行）\n\n## 安装步骤\n\n### 1. 浏览器端\n直接下载并引入：\n```bash\n# 任选其一\ncurl -O http:\u002F\u002Fcs.stanford.edu\u002Fpeople\u002Fkarpathy\u002Fconvnetjs\u002Fbuild\u002Fconvnet.js\ncurl -O http:\u002F\u002Fcs.stanford.edu\u002Fpeople\u002Fkarpathy\u002Fconvnetjs\u002Fbuild\u002Fconvnet-min.js\n```\n\nHTML 中引用：\n```html\n\u003Cscript src=\"convnet-min.js\">\u003C\u002Fscript>\n```\n\n### 2. Node.js 端\n```bash\nnpm install convnetjs\n```\n```javascript\nconst convnetjs = require(\"convnetjs\");\n```\n\n## 基本使用\n\n### 示例 1：2 层全连接网络\n```javascript\n\u002F\u002F 1. 定义网络结构\nconst layer_defs = [];\nlayer_defs.push({type:'input', out_sx:1, out_sy:1, out_depth:2}); \u002F\u002F 2 维输入\nlayer_defs.push({type:'fc', num_neurons:20, activation:'relu'});   \u002F\u002F 隐藏层\nlayer_defs.push({type:'softmax', num_classes:10});                 \u002F\u002F 输出层\n\n\u002F\u002F 2. 创建网络\nconst net = new convnetjs.Net();\nnet.makeLayers(layer_defs);\n\n\u002F\u002F 3. 前向传播\nconst x = new convnetjs.Vol([0.3, -0.5]);\nconst prob = net.forward(x);\nconsole.log('类别 0 的概率:', prob.w[0]);\n\n\u002F\u002F 4. 训练一步\nconst trainer = new convnetjs.SGDTrainer(net, {learning_rate:0.01, l2_decay:0.001});\ntrainer.train(x, 0); \u002F\u002F 告诉网络 x 属于类别 0\n```
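\n\n示例 1 只演示了单步训练；实际使用中通常需要迭代多次，输出概率才会有明显变化。下面是一个简化的训练循环示意（沿用示例 1 中的 net、trainer 与 x；50 次迭代为演示用的假设值，并假设 train() 的返回值包含 loss 字段）：\n```javascript\n\u002F\u002F 5. 多步训练（示意）\nfor (let i = 0; i \u003C 50; i++) {\n  const stats = trainer.train(x, 0); \u002F\u002F 假设 stats 含本次迭代的 loss\n  if (i % 10 === 0) console.log('iter', i, 'loss:', stats.loss);\n}\nconsole.log('类别 0 的概率:', net.forward(x).w[0]); \u002F\u002F 通常应高于训练前\n```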
\n\n### 示例 2：小型 CNN 处理 32×32 图像\n```javascript\nconst layer_defs = [];\nlayer_defs.push({type:'input', out_sx:32, out_sy:32, out_depth:3}); \u002F\u002F RGB 图像\nlayer_defs.push({type:'conv', sx:5, filters:16, stride:1, pad:2, activation:'relu'});\nlayer_defs.push({type:'pool', sx:2, stride:2});\nlayer_defs.push({type:'conv', sx:5, filters:20, stride:1, pad:2, activation:'relu'});\nlayer_defs.push({type:'pool', sx:2, stride:2});\nlayer_defs.push({type:'softmax', num_classes:10});\n\nconst net = new convnetjs.Net();\nnet.makeLayers(layer_defs);\n\n\u002F\u002F 将 \u003Cimg id=\"some_image\"> 转为网络输入\nconst x = convnetjs.img_to_vol(document.getElementById('some_image'));\nconst output = net.forward(x);\n```\n\n### 在线体验\n- [MNIST 手写数字 CNN 演示](http:\u002F\u002Fcs.stanford.edu\u002F~karpathy\u002Fconvnetjs\u002Fdemo\u002Fmnist.html)\n- [CIFAR-10 图像分类](http:\u002F\u002Fcs.stanford.edu\u002F~karpathy\u002Fconvnetjs\u002Fdemo\u002Fcifar10.html)\n- [2D 数据分类](http:\u002F\u002Fcs.stanford.edu\u002F~karpathy\u002Fconvnetjs\u002Fdemo\u002Fclassify2d.html)\n\n更多教程与完整文档：[convnetjs.com](http:\u002F\u002Fconvnetjs.com)","一家 5 人规模的独立游戏工作室正在为 Steam 新品节准备一款“手势施法”小游戏，需要让玩家在浏览器里对着摄像头比出火球、冰锥、闪电三种手势即可实时触发技能。\n\n### 没有 convnetjs 时\n- 后端同学得用 Python + TensorFlow 训练模型，再把权重导出成 JSON，前端再写 WebAssembly 推理代码，跨语言踩坑两天起步  \n- 每次调参都要重新跑训练脚本、打包、上传 CDN，美术同学想试新动作得等 20 分钟  \n- 手势样本只有 300 张手机拍的照片，Python 端数据增强要写 OpenCV 脚本，浏览器里无法即时预览效果  \n- 模型体积 8 MB，首屏加载 5 秒以上，新品节现场网络一卡就掉体验分  \n- 策划临时想加“双指闪电”新类别，排期直接延后一周\n\n### 使用 convnetjs 后\n- 前端同学直接在 Chrome DevTools 里写 30 行 JS 就搭好 3 层卷积网络，训练、推理全在浏览器完成，零后端依赖  \n- 调参刷新页面即可，美术同学边拍照边点“Train 10 epochs”，30 秒后就能看到准确率曲线飙升  \n- 浏览器实时做旋转、缩放、亮度扰动，训练数据瞬间扩增到 3000 张，准确率达到 92%  \n- 网络权重以 Float32Array 形式保存在 IndexedDB，模型文件仅 600 KB，首屏 1 秒内完成加载  \n- 策划午饭前提需求，下午就把“双指闪电”作为第 4 类标签加进去，现场试玩反馈立刻收集\n\nconvnetjs 让独立团队把原本需要一周的深度学习能力，在浏览器里 3 小时就落地上线。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkarpathy_convnetjs_e40af5fa.png","karpathy","Andrej","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fkarpathy_75f033eb.jpg","I like to train Deep Neural Nets on large datasets.",null,"Stanford","andrej.karpathy@gmail.com","https:\u002F\u002Ftwitter.com\u002Fkarpathy","https:\u002F\u002Fgithub.com\u002Fkarpathy",[24,28,32],{"name":25,"color":26,"percentage":27},"JavaScript","#f1e05a",59.8,{"name":29,"color":30,"percentage":31},"HTML","#e34c26",38.8,{"name":33,"color":34,"percentage":35},"CSS","#663399",1.4,11146,2072,"2026-04-05T22:29:05","MIT",1,"未说明",{"notes":43,"python":41,"dependencies":44},"ConvNetJS 是一个纯 JavaScript 实现的神经网络库，可直接在浏览器或 Node.js 中运行，无需额外安装 Python、CUDA 或 GPU。README 中已声明作者不再维护，npm 包可能已不可用。若需自行编译，需安装 ant 和 yuicompressor-2.4.8.jar。",[41],[46,47],"图像","开发框架",3,"ready","2026-03-27T02:49:30.150509","2026-04-06T09:45:05.973206",[53,58,63,68,73,78],{"id":54,"question_zh":55,"answer_zh":56,"source_url":57},5957,"为什么 npm install convnetjs 会报 404 错误？","这是 package.json 中 `\"files\"` 字段导致的 npm 打包问题。解决方案：\n1. 从 package.json 中删除 `\"files\"` 字段\n2. 重新执行 `npm pack` 生成正确的 tarball\n3. 或者使用 `npm install karpathy\u002Fconvnetjs` 直接从 GitHub 安装\n\n示例：\n```bash\n$ vim package.json  # 删除 \"files\" 字段\n$ npm pack          # 重新打包\n```","https:\u002F\u002Fgithub.com\u002Fkarpathy\u002Fconvnetjs\u002Fissues\u002F4",{"id":59,"question_zh":60,"answer_zh":61,"source_url":62},5958,"训练后网络对所有输入都输出相同结果怎么办？","常见原因及解决方案：\n1. 训练迭代次数不足：确保训练足够多的 epoch（不要只训练 1 次）\n2. 数据未归一化：将输入像素值归一化到 -0.5 到 0.5 范围\n3. 检查训练过程：确认 loss 值确实在下降（从 ~2 降到 \u003C1 表示训练有效）\n\n示例归一化：\n```javascript\n\u002F\u002F 将像素值从 [0,255] 转换到 [-0.5,0.5]\nconst normalized = (pixel\u002F255) - 0.5;\n```","https:\u002F\u002Fgithub.com\u002Fkarpathy\u002Fconvnetjs\u002Fissues\u002F90",{"id":64,"question_zh":65,"answer_zh":66,"source_url":67},5959,"average_loss_window 出现 NaN 值怎么解决？","根本原因是输入数据格式不正确：\n1. 确保所有输入都是数字类型，不要传入数组或其他复杂结构\n2. 检查输入维度是否与网络定义匹配（如 inputNum 设置是否正确）\n3. 验证 reward 参数在传入 backward() 前是有效数字\n\n错误示例：\n```javascript\n\u002F\u002F 错误：包含数组的输入\ngameInfo = [2d array, array, number]\n\n\u002F\u002F 正确：纯数字输入\ngameInfo = [0.1, 0.2, 0.3, 0.4]\n```","https:\u002F\u002Fgithub.com\u002Fkarpathy\u002Fconvnetjs\u002Fissues\u002F114",{"id":69,"question_zh":70,"answer_zh":71,"source_url":72},5960,"ConvLayer 的前向传播速度如何优化？","通过以下优化可获得 2x 以上加速：\n1. 类型提示：对循环变量使用 `|0` 强制整数类型\n2. 循环重排：将变化最快的索引放在最内层循环以提高缓存命中率\n3. 常量外提：将不依赖循环的变量计算移到循环外\n4. 构造函数优化：在 Volume 构造函数和 set() 方法中添加类型提示\n\n关键优化示例：\n```javascript\n\u002F\u002F 优化前\nfor(var x=0; x\u003Cwidth; x++) {\n  for(var y=0; y\u003Cheight; y++) {\n    \u002F\u002F 每次循环都计算 ox,oy\n  }\n}\n\n\u002F\u002F 优化后\nfor(var y=0; y\u003Cheight; y++) {\n  var oy = y*stride_y;  \u002F\u002F 外提计算\n  for(var x=0; x\u003Cwidth; x++) {\n    var ox = x*stride_x|0;  \u002F\u002F 类型提示\n  }\n}\n```","https:\u002F\u002Fgithub.com\u002Fkarpathy\u002Fconvnetjs\u002Fissues\u002F11",{"id":74,"question_zh":75,"answer_zh":76,"source_url":77},5961,"batch_size 参数是否会影响梯度计算？","不会。ConvNetJS 的实现中：\n1. 每次 backward() 都会清零梯度（这是正确行为）\n2. 真正的梯度累积发生在 trainer 层面，而不是 layer 层面\n3. 当达到 batch_size 指定的迭代次数时，trainer 会统一更新权重\n\n因此无需手动修改 layer 的梯度清零逻辑，batch_size 功能已正确实现。
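\n\n下面用一段示意代码说明这一 trainer 层面的批量逻辑（非 ConvNetJS 源码，仅为原理示意；其中 forward()、backward()、getParamsAndGrads() 是库中已有接口，gsum、trainStep 为演示用的假设名称）：\n```javascript\n\u002F\u002F 原理示意：trainer 自己累积梯度，每 batch_size 次迭代才统一更新权重\nvar k = 0;        \u002F\u002F 迭代计数\nvar gsum = null;  \u002F\u002F trainer 层面的梯度累积\nfunction trainStep(net, x, y, batch_size, learning_rate) {\n  net.forward(x, true); \u002F\u002F is_training = true\n  net.backward(y);      \u002F\u002F layer 计算本次梯度（写入各参数的 .dw）\n  var pg = net.getParamsAndGrads(); \u002F\u002F [{params, grads}, ...]\n  if (gsum === null) gsum = pg.map(function(g) { return g.params.map(function() { return 0; }); });\n  for (var i = 0; i \u003C pg.length; i++) {\n    for (var j = 0; j \u003C pg[i].params.length; j++) {\n      gsum[i][j] += pg[i].grads[j]; \u002F\u002F 累积到 trainer 层面\n      pg[i].grads[j] = 0;           \u002F\u002F layer 梯度随即清零\n    }\n  }\n  k++;\n  if (k % batch_size === 0) { \u002F\u002F 达到 batch_size 次迭代才统一更新\n    for (var i = 0; i \u003C pg.length; i++) {\n      for (var j = 0; j \u003C pg[i].params.length; j++) {\n        pg[i].params[j] -= learning_rate * gsum[i][j] \u002F batch_size;\n        gsum[i][j] = 0; \u002F\u002F 为下一个批次重新累积\n      }\n    }\n  }\n}\n```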
","https:\u002F\u002Fgithub.com\u002Fkarpathy\u002Fconvnetjs\u002Fissues\u002F17",{"id":79,"question_zh":80,"answer_zh":81,"source_url":62},5962,"如何正确保存和加载训练好的网络？","使用内置的 JSON 序列化方法：\n```javascript\n\u002F\u002F 保存网络\nconst json = net.toJSON();\nlocalStorage.setItem('network', JSON.stringify(json));\n\n\u002F\u002F 加载网络\nconst saved = JSON.parse(localStorage.getItem('network'));\nconst net = new convnetjs.Net();\nnet.fromJSON(saved);\n```\n\n注意事项：\n1. 确保训练充分（loss 明显下降）后再保存\n2. 加载后无需重新训练即可直接使用\n3. 
序列化包含网络结构和权重，但不包含训练状态",[83],{"id":84,"version":85,"summary_zh":86,"released_at":87},115259,"2014.08.31","Switching convnetjs to releases.\nThis is the first official release: it includes everything seen in the repo and the built version (build\u002Fconvnet.js and build\u002Fconvnet-min.js) that contains the library.\n","2014-09-01T00:23:09",[89,98,108,116,124,137],{"id":90,"name":91,"github_repo":92,"description_zh":93,"stars":94,"difficulty_score":48,"last_commit_at":95,"category_tags":96,"status":49},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[47,46,97],"Agent",{"id":99,"name":100,"github_repo":101,"description_zh":102,"stars":103,"difficulty_score":104,"last_commit_at":105,"category_tags":106,"status":49},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[47,97,107],"语言模型",{"id":109,"name":110,"github_repo":111,"description_zh":112,"stars":113,"difficulty_score":104,"last_commit_at":114,"category_tags":115,"status":49},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[47,46,97],{"id":117,"name":118,"github_repo":119,"description_zh":120,"stars":121,"difficulty_score":104,"last_commit_at":122,"category_tags":123,"status":49},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[47,107],{"id":125,"name":126,"github_repo":127,"description_zh":128,"stars":129,"difficulty_score":104,"last_commit_at":130,"category_tags":131,"status":49},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 
是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[46,132,133,134,97,135,107,47,136],"数据工具","视频","插件","其他","音频",{"id":138,"name":139,"github_repo":140,"description_zh":141,"stars":142,"difficulty_score":48,"last_commit_at":143,"category_tags":144,"status":49},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[97,46,47,107,135]]