[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"tool-the-deep-learners--deep-learning-illustrated":3,"similar-the-deep-learners--deep-learning-illustrated":90},{"id":4,"github_repo":5,"name":6,"description_en":7,"description_zh":8,"ai_summary_zh":8,"readme_en":9,"readme_zh":10,"quickstart_zh":11,"use_case_zh":12,"hero_image_url":13,"owner_login":14,"owner_name":15,"owner_avatar_url":16,"owner_bio":17,"owner_company":18,"owner_location":18,"owner_email":18,"owner_twitter":18,"owner_website":18,"owner_url":19,"languages":20,"stars":39,"forks":40,"last_commit_at":41,"license":42,"difficulty_score":43,"env_os":44,"env_gpu":44,"env_ram":44,"env_deps":45,"category_tags":52,"github_topics":18,"view_count":43,"oss_zip_url":18,"oss_zip_packed_at":18,"status":57,"created_at":58,"updated_at":59,"faqs":60,"releases":89},2616,"the-deep-learners\u002Fdeep-learning-illustrated","deep-learning-illustrated","Deep Learning Illustrated (2020)","deep-learning-illustrated 是配套经典教材《Deep Learning Illustrated》的开源代码库，旨在通过可视化与交互式实践，降低深度学习的学习门槛。它解决了传统 AI 教程理论枯燥、代码难以复现的痛点，将抽象的神经网络概念转化为可运行的 Jupyter Notebook 实例，涵盖从生物视觉原理、自然语言处理到风格迁移、强化学习游戏博弈等丰富场景。\n\n这套资源特别适合希望系统入门深度学习的开发者、学生及跨领域研究人员。即便没有深厚的数学背景，用户也能跟随书中步骤，亲手构建并训练模型。其独特亮点在于“图解优先”的教学理念：先通过直观图表建立直觉，再引入代码实现。内容基于 TensorFlow 和 Keras 构建，虽然原著出版时 TensorFlow 2.0 尚未普及，但项目提供了清晰的迁移指南及新版代码适配方案，确保技术栈的现代性与实用性。无论是想理解 AI 如何识别图像、生成艺术画作，还是探索 AlphaGo 背后的逻辑，deep-learning-illustrated 都提供了一条清晰、有趣且动手性极强的学习路径。","# Deep Learning Illustrated (2020)\n\nThis repository is home to the code that accompanies [Jon Krohn](https:\u002F\u002Fwww.jonkrohn.com\u002F), [Grant Beyleveld](http:\u002F\u002Fgrantbeyleveld.com\u002Fabout\u002F) and [Aglaé Bassens](https:\u002F\u002Fwww.aglaebassens.com\u002F)' book [Deep Learning Illustrated](https:\u002F\u002Fwww.deeplearningillustrated.com\u002F). This visual, interactive guide to artificial neural networks was published on Pearson's Addison-Wesley imprint. 
\n\n## Installation\n\nStep-by-step guides for running the code in this repository can be found in the [installation directory](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Ftree\u002Fmaster\u002Finstallation). For installation difficulties, please consider visiting our book's [Q&A forum](https:\u002F\u002Fgroups.google.com\u002Fforum\u002F#!forum\u002Fdeep-learning-illustrated) instead of creating an _Issue_.\n\n## Notebooks\n\nAll of the code covered in the book can be found in [the notebooks directory](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Ftree\u002Fmaster\u002Fnotebooks) as [Jupyter notebooks](http:\u002F\u002Fjupyter.org\u002F). \n\nBelow is the book's table of contents with links to all of the individual notebooks. \n\n*Note that while TensorFlow 2.0 was released after the book had gone to press, as detailed in Chapter 14 (specifically, Example 14.1), all of our notebooks can be trivially converted into TensorFlow 2.x code if desired. 
Failing that, TensorFlow 2.x analogs of the notebooks in the current repo are available [here](https:\u002F\u002Fgithub.com\u002Fjonkrohn\u002FDLTFpT).*\n\n### Part I: Introducing Deep Learning\n\n#### Chapter 1: Biological and Machine Vision\n\n* Biological Vision\n* Machine Vision\n\t* The Neocognitron\n\t* LeNet-5\n\t* The Traditional Machine Learning Approach\n\t* ImageNet and the ILSVRC\n\t* AlexNet\n* TensorFlow Playground\n* The _Quick, Draw!_ Game\n\n#### Chapter 2: Human and Machine Language\n\n* Deep Learning for Natural Language Processing\n\t* Deep Learning Networks Learn Representations Automatically\n\t* A Brief History of Deep Learning for NLP\n* Computational Representations of Language\n\t* One-Hot Representations of Words\n\t* Word Vectors\n\t* Word Vector Arithmetic\n\t* word2viz\n\t* Localist Versus Distributed Representations\n* Elements of Natural Human Language\n* Google Duplex\n\n#### Chapter 3: Machine Art\n\n* A Boozy All-Nighter\n* Arithmetic on Fake Human Faces\n* Style Transfer: Converting Photos into Monet (and Vice Versa)\n* Make Your Own Sketches Photorealistic\n* Creating Photorealistic Images from Text\n* Image Processing Using Deep Learning\n\n#### Chapter 4: Game-Playing Machines\n\n* Deep Learning, AI, and Other Beasts\n\t* Artificial Intelligence\n\t* Machine Learning\n\t* Representation Learning\n\t* Artificial Neural Networks\n* Three Categories of Machine Learning Problems\n\t* Supervised Learning\n\t* Unsupervised Learning\n\t* Reinforcement Learning\n* Deep Reinforcement Learning\n* Video Games\n* Board Games\n\t* AlphaGo\n\t* AlphaGo Zero\n\t* AlphaZero\n* Manipulation of Objects\n* Popular Reinforcement Learning Environments\n\t* OpenAI Gym\n\t* DeepMind Lab\n\t* Unity ML-Agents\n* Three Categories of AI\n\t* Artificial Narrow Intelligence\n\t* Artificial General Intelligence\n\t* Artificial Super Intelligence\n\n### Part II: Essential Theory Illustrated\n\n#### Chapter 5: The (Code) Cart Ahead of the (Theory) Horse\n\n* 
Prerequisites\n* Installation\n* A Shallow Neural Network in Keras ([shallow_net_in_keras.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fshallow_net_in_keras.ipynb))\n\t* The MNIST Handwritten Digits ([mnist_digit_pixel_by_pixel.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fmnist_digit_pixel_by_pixel.ipynb))\n\t* A Schematic Diagram of the Network\n\t* Loading the Data\n\t* Reformatting the Data\n\t* Designing a Neural Network Architecture\n\t* Training a Deep Learning Model\n\n#### Chapter 6: Artificial Neurons Detecting Hot Dogs\n\n* Biological Neuroanatomy 101\n* The Perceptron \n\t* The Hot Dog \u002F Not Hot Dog Detector\n\t* The Most Important Equation in the Book\n* Modern Neurons and Activation Functions \n\t* Sigmoid Neurons ([sigmoid_function.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fsigmoid_function.ipynb))\n\t* Tanh Neurons \n\t* ReLU: Rectified Linear Units\n* Choosing a Neuron\n\n#### Chapter 7: Artificial Neural Networks\n\n* The Input Layer\n* Dense Layers\n* A Hot Dog-Detecting Dense Network \n\t* Forward Propagation through the First Hidden Layer\n\t* Forward Propagation through Subsequent Layers\n* The Softmax Layer of a Fast Food-Classifying Network ([softmax_demo.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fsoftmax_demo.ipynb))\n* Revisiting our Shallow Neural Network\n\n#### Chapter 8: Training Deep Networks\n\n* Cost Functions \n\t* Quadratic Cost ([quadratic_cost.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fquadratic_cost.ipynb))\n\t* Saturated Neurons\n\t* Cross-Entropy Cost 
([cross_entropy_cost.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fcross_entropy_cost.ipynb))\n* Optimization: Learning to Minimize Cost \n\t* Gradient Descent\n\t* Learning Rate ([measuring_speed_of_learning.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fmeasuring_speed_of_learning.ipynb))\n\t* Batch Size and Stochastic Gradient Descent\n\t* Escaping the Local Minimum\n* Backpropagation\n* Tuning Hidden-Layer Count and Neuron Count\n* An Intermediate Net in Keras ([intermediate_net_in_keras.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fintermediate_net_in_keras.ipynb))\n\n#### Chapter 9: Improving Deep Networks\n\n* Weight Initialization ([weight_initialization.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fweight_initialization.ipynb))\n\t* Xavier Glorot Distributions\n* Unstable Gradients \n\t* Vanishing Gradients\n\t* Exploding Gradients\n\t* Batch Normalization\n* Model Generalization — Avoiding Overfitting \n\t* L1 and L2 Regularization\n\t* Dropout\n\t* Data Augmentation\n* Fancy Optimizers\n\t* Momentum\n\t* Nesterov Momentum\n\t* AdaGrad\n\t* AdaDelta and RMSProp\n\t* Adam\n* A Deep Neural Network in Keras ([deep_net_in_keras.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fdeep_net_in_keras.ipynb))\n* Regression ([regression_in_keras.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fregression_in_keras.ipynb))\n* TensorBoard 
([deep_net_in_keras_with_tensorboard.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fdeep_net_in_keras_with_tensorboard.ipynb))\n\n### Part III: Interactive Applications of Deep Learning\n\n#### Chapter 10: Machine Vision\n\n* Convolutional Neural Networks \n\t* The Two-Dimensional Structure of Visual Imagery\n\t* Computational Complexity\n\t* Convolutional Layers\n\t* Multiple Filters\n\t* A Convolutional Example\n\t* Convolutional Filter Hyperparameters\n\t* Stride Length\n\t* Padding\n* Pooling Layers\n* LeNet-5 in Keras ([lenet_in_keras.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Flenet_in_keras.ipynb))\n* AlexNet ([alexnet_in_keras.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Falexnet_in_keras.ipynb)) and VGGNet ([vggnet_in_keras.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fvggnet_in_keras.ipynb))\n* Residual Networks \n\t* Vanishing Gradients: The Bête Noire of Deep CNNs\n\t* Residual Connection\n* Applications of Machine Vision\n\t* Object Detection\n\t* Image Segmentation\n\t* Transfer Learning ([transfer_learning_in_keras.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Ftransfer_learning_in_keras.ipynb))\n\t* Capsule Networks\n\n#### Chapter 11: Natural Language Processing\n\n* Preprocessing Natural Language Data ([natural_language_preprocessing.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fnatural_language_preprocessing.ipynb))\n\t* Tokenization\n\t* Converting all Characters to Lower Case\n\t* Removing Stop Words and Punctuation\n\t* Stemming; 
Handling *n*-grams\n\t* Preprocessing the Full Corpus\n* Creating Word Embeddings with word2vec\n\t* The Essential Theory Behind word2vec\n\t* Evaluating Word Vectors\n\t* Running word2vec\n\t* Plotting Word Vectors\n* The Area Under the ROC Curve\n\t* The Confusion Matrix\n\t* Calculating the ROC AUC Metric\n* Natural Language Classification with Familiar Networks\n\t* Loading the IMDB Film Reviews\n\t* Examining the IMDB Data\n\t* Standardizing the Length of the Reviews\n\t* Dense Network ([dense_sentiment_classifier.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fdense_sentiment_classifier.ipynb))\n\t* Convolutional Networks ([convolutional_sentiment_classifier.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fconvolutional_sentiment_classifier.ipynb))\n* Networks Designed for Sequential Data \n\t* Recurrent Neural Networks ([rnn_sentiment_classifier.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Frnn_sentiment_classifier.ipynb))\n\t* Long Short-Term Memory Units ([lstm_sentiment_classifier.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Flstm_sentiment_classifier.ipynb))\n\t* Bidirectional LSTMs ([bi_lstm_sentiment_classifier.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fbi_lstm_sentiment_classifier.ipynb))\n\t* Stacked Recurrent Models ([stacked_bi_lstm_sentiment_classifier.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fstacked_bi_lstm_sentiment_classifier.ipynb))\n\t* Seq2seq and Attention\n\t* Transfer Learning in NLP\n* Non-Sequential Architectures: The Keras 
Functional API ([multi_convnet_sentiment_classifier.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fmulti_convnet_sentiment_classifier.ipynb))\n\n#### Chapter 12: Generative Adversarial Networks\n\n* Essential GAN Theory\n* The _Quick, Draw!_ Dataset\n* The Discriminator Network\n* The Generator Network\n* The Adversarial Network\n* GAN Training ([generative_adversarial_network.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fgenerative_adversarial_network.ipynb))\n\n#### Chapter 13: Deep Reinforcement Learning\n\n* Essential Theory of Reinforcement Learning \n\t* The Cart-Pole Game\n\t* Markov Decision Processes\n\t* The Optimal Policy\n* Essential Theory of Deep Q-Learning Networks\n\t* Value Functions\n\t* Q-Value Functions\n\t* Estimating an Optimal Q-Value\n* Defining a DQN Agent ([cartpole_dqn.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fcartpole_dqn.ipynb))\n\t* Initialization Parameters\n\t* Building the Agent’s Neural Network Model\n\t* Remembering Gameplay\n\t* Training via Memory Replay\n\t* Selecting an Action to Take\n\t* Saving and Loading Model Parameters\n* Interacting with an OpenAI Gym Environment\n* Hyperparameter Optimization with SLM Lab\n* Agents Beyond DQN \n\t* Policy Gradients and the REINFORCE Algorithm\n\t* The Actor-Critic Algorithm\n\n### Part IV: You and AI\n\n#### Chapter 14: Moving Forward with Your Own Deep Learning Projects\n\n* Ideas for Deep Learning Projects\n\t* Machine Vision and GANs ([fashion_mnist_pixel_by_pixel.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Ffashion_mnist_pixel_by_pixel.ipynb))\n\t* Natural Language Processing\n\t* Deep Reinforcement Learning\n\t* Converting an Existing 
Machine-Learning Project\n* Resources for Further Projects \n\t* Socially-Beneficial Projects\n* The Modeling Process, including Hyperparameter Tuning \n\t* Automation of Hyperparameter Search\n* Deep Learning Libraries\n\t* Keras and TensorFlow ([deep_net_in_tensorflow.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fdeep_net_in_tensorflow.ipynb))\n\t* PyTorch ([pytorch.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fpytorch.ipynb))\n\t* MXNet, CNTK, Caffe, and Beyond\n* Software 2.0\n* Approaching Artificial General Intelligence\n\n## Book Cover\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthe-deep-learners_deep-learning-illustrated_readme_297c5a4f6bce.jpeg)\n\n","# 深度学习图解（2020）\n\n本仓库包含了与[乔恩·克罗恩](https:\u002F\u002Fwww.jonkrohn.com\u002F)、[格兰特·贝耶尔韦尔德](http:\u002F\u002Fgrantbeyleveld.com\u002Fabout\u002F)和[阿格莱·巴森斯](https:\u002F\u002Fwww.aglaebassens.com\u002F)合著的书籍《深度学习图解》配套的代码。这本以视觉化、交互式方式介绍人工神经网络的指南由培生旗下的艾迪生-韦斯利出版社出版。\n\n## 安装\n\n运行本仓库中代码的分步指南可在[安装目录](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Ftree\u002Fmaster\u002Finstallation)中找到。如在安装过程中遇到困难，请先访问我们书籍的[问答论坛](https:\u002F\u002Fgroups.google.com\u002Fforum\u002F#!forum\u002Fdeep-learning-illustrated)，而非创建 _Issue_。\n\n## 笔记本\n\n书中涵盖的所有代码均以[Jupyter笔记本](http:\u002F\u002Fjupyter.org\u002F)的形式存放在[笔记本目录](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Ftree\u002Fmaster\u002Fnotebooks)中。\n\n以下是本书的目录结构，并附有所有单个笔记本的链接。\n\n*请注意，虽然TensorFlow 2.0是在本书付印之后发布的，但正如第14章（特别是示例14.1）所详述的那样，我们的所有笔记本都可以轻松转换为TensorFlow 2.x代码。如果不想进行转换，当前仓库中的笔记本在TensorFlow 2.x下的等效版本可在此处找到：[这里](https:\u002F\u002Fgithub.com\u002Fjonkrohn\u002FDLTFpT)。*\n\n### 第一部分：深度学习导论\n\n#### 第1章：生物视觉与机器视觉\n\n* 生物视觉\n* 机器视觉\n\t* 新皮质认知机\n\t* LeNet-5\n\t* 传统机器学习方法\n\t* ImageNet与ILSVRC\n\t* AlexNet\n* TensorFlow 
Playground\n* “Quick, Draw!” 游戏\n\n#### 第2章：人类语言与机器语言\n\n* 用于自然语言处理的深度学习\n\t* 深度学习网络能够自动学习表示\n\t* 自然语言处理领域深度学习的发展简史\n* 语言的计算表示\n\t* 独热编码\n\t* 词向量\n\t* 词向量运算\n\t* word2viz\n\t* 局部主义与分布式表示\n* 自然人类语言的要素\n* Google Duplex\n\n#### 第3章：机器艺术\n\n* 通宵不眠的创作\n* 对虚假人脸图像的算术操作\n* 风格迁移：将照片转化为莫奈风格（反之亦然）\n* 让你的草图逼真如照片\n* 从文本生成逼真图像\n* 基于深度学习的图像处理\n\n#### 第4章：游戏智能体\n\n* 深度学习、人工智能及其他相关概念\n\t* 人工智能\n\t* 机器学习\n\t* 表征学习\n\t* 人工神经网络\n* 机器学习问题的三大类别\n\t* 监督学习\n\t* 无监督学习\n\t* 强化学习\n* 深度强化学习\n* 视频游戏\n* 棋类游戏\n\t* AlphaGo\n\t* AlphaGo Zero\n\t* AlphaZero\n* 物体操控\n* 常用的强化学习环境\n\t* OpenAI Gym\n\t* DeepMind Lab\n\t* Unity ML-Agents\n* 人工智能的三大类别\n\t* 弱人工智能\n\t* 强人工智能\n\t* 超级人工智能\n\n### 第二部分：基础理论详解\n\n#### 第5章：先有代码，后有理论\n\n* 先决条件\n* 安装\n* Keras中的浅层神经网络（[shallow_net_in_keras.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fshallow_net_in_keras.ipynb)）\n\t* MNIST手写数字数据集（[mnist_digit_pixel_by_pixel.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fmnist_digit_pixel_by_pixel.ipynb)）\n\t* 网络结构示意图\n\t* 数据加载\n\t* 数据格式转换\n\t* 神经网络架构设计\n\t* 深度学习模型训练\n\n#### 第6章：检测热狗的人工神经元\n\n* 生物神经解剖学入门\n* 感知器\n\t* 热狗\u002F非热狗检测器\n\t* 书中最重要的公式\n* 现代神经元与激活函数\n\t* Sigmoid神经元（[sigmoid_function.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fsigmoid_function.ipynb)）\n\t* Tanh神经元\n\t* ReLU：修正线性单元\n* 神经元的选择\n\n#### 第7章：人工神经网络\n\n* 输入层\n* 密集层\n* 一个检测热狗的密集网络\n\t* 前向传播至第一隐藏层\n\t* 后续各层的前向传播\n* 快餐分类网络的Softmax层（[softmax_demo.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fsoftmax_demo.ipynb)）\n* 回顾我们的浅层神经网络\n\n#### 第8章：深度网络的训练\n\n* 损失函数\n\t* 
均方误差损失（[quadratic_cost.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fquadratic_cost.ipynb)）\n\t* 饱和神经元\n\t* 交叉熵损失（[cross_entropy_cost.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fcross_entropy_cost.ipynb)）\n* 优化：学习如何最小化损失\n\t* 梯度下降\n\t* 学习率（[measuring_speed_of_learning.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fmeasuring_speed_of_learning.ipynb)）\n\t* 批量大小与随机梯度下降\n\t* 如何跳出局部最小值\n* 反向传播\n* 调整隐藏层数量和神经元数量\n* Keras中的中间网络（[intermediate_net_in_keras.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fintermediate_net_in_keras.ipynb)）\n\n#### 第9章：提升深度网络性能\n\n* 权重初始化（[weight_initialization.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fweight_initialization.ipynb)）\n\t* Xavier Glorot分布\n* 不稳定的梯度\n\t* 梯度消失\n\t* 梯度爆炸\n\t* 批量归一化\n* 模型泛化——避免过拟合\n\t* L1和L2正则化\n\t* Dropout\n\t* 数据增强\n* 高级优化算法\n\t* 动量\n\t* Nesterov动量\n\t* AdaGrad\n\t* AdaDelta与RMSProp\n\t* Adam\n* Keras中的深层神经网络（[deep_net_in_keras.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fdeep_net_in_keras.ipynb)）\n* 回归任务（[regression_in_keras.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fregression_in_keras.ipynb)）\n* TensorBoard（[deep_net_in_keras_with_tensorboard.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fdeep_net_in_keras_with_tensorboard.ipynb)）\n\n### 第三部分：深度学习的交互式应用\n\n#### 第10章：机器视觉\n\n* 卷积神经网络\n\t* 视觉图像的二维结构\n\t* 计算复杂度\n\t* 卷积层\n\t* 多个滤波器\n\t* 
一个卷积示例\n\t* 卷积滤波器的超参数\n\t* 步长\n\t* 填充\n* 池化层\n* LeNet-5 在 Keras 中的实现（[lenet_in_keras.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Flenet_in_keras.ipynb)）\n* AlexNet（[alexnet_in_keras.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Falexnet_in_keras.ipynb)）和 VGGNet（[vggnet_in_keras.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fvggnet_in_keras.ipynb)）\n* 残差网络\n\t* 梯度消失：深度 CNN 的致命弱点\n\t* 残差连接\n* 机器视觉的应用\n\t* 目标检测\n\t* 图像分割\n\t* 迁移学习（[transfer_learning_in_keras.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Ftransfer_learning_in_keras.ipynb)）\n\t* 胶囊网络\n\n#### 第11章：自然语言处理\n\n* 自然语言数据的预处理（[natural_language_preprocessing.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fnatural_language_preprocessing.ipynb)）\n\t* 分词\n\t* 将所有字符转换为小写\n\t* 去除停用词和标点符号\n\t* 词干提取；处理 *n*-元组\n\t* 对整个语料库进行预处理\n* 使用 word2vec 创建词嵌入\n\t* word2vec 的核心理论\n\t* 评估词向量\n\t* 运行 word2vec\n\t* 绘制词向量\n* ROC 曲线下的面积\n\t* 混淆矩阵\n\t* 计算 ROC AUC 指标\n* 使用常见网络进行自然语言分类\n\t* 加载 IMDB 电影评论\n\t* 查看 IMDB 数据\n\t* 标准化评论长度\n\t* 密集网络（[dense_sentiment_classifier.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fdense_sentiment_classifier.ipynb)）\n\t* 卷积网络（[convolutional_sentiment_classifier.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fconvolutional_sentiment_classifier.ipynb)）\n* 针对序列数据设计的网络\n\t* 
循环神经网络（[rnn_sentiment_classifier.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Frnn_sentiment_classifier.ipynb)）\n\t* 长短期记忆单元（[lstm_sentiment_classifier.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Flstm_sentiment_classifier.ipynb)）\n\t* 双向 LSTM（[bi_lstm_sentiment_classifier.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fbi_lstm_sentiment_classifier.ipynb)）\n\t* 堆叠式循环模型（[stacked_bi_lstm_sentiment_classifier.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fstacked_bi_lstm_sentiment_classifier.ipynb)）\n\t* Seq2seq 和注意力机制\n\t* NLP 中的迁移学习\n* 非序列架构：Keras 函数式 API（[multi_convnet_sentiment_classifier.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fmulti_convnet_sentiment_classifier.ipynb)）\n\n#### 第12章：生成对抗网络\n\n* GAN 的基本理论\n* _Quick, Draw!_ 数据集\n* 判别器网络\n* 生成器网络\n* 对抗网络\n* GAN 训练（[generative_adversarial_network.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fgenerative_adversarial_network.ipynb)）\n\n#### 第13章：深度强化学习\n\n* 强化学习的基本理论\n\t* 小车摆游戏\n\t* 马尔可夫决策过程\n\t* 最优策略\n* 深度 Q 学习网络的基本理论\n\t* 值函数\n\t* Q 值函数\n\t* 估计最优 Q 值\n* 定义 DQN 智能体（[cartpole_dqn.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fcartpole_dqn.ipynb)）\n\t* 初始化参数\n\t* 构建智能体的神经网络模型\n\t* 记录游戏过程\n\t* 通过经验回放进行训练\n\t* 选择要采取的动作\n\t* 保存和加载模型参数\n* 与 OpenAI Gym 环境交互\n* 使用 SLM Lab 进行超参数优化\n* DQN 之外的智能体\n\t* 策略梯度和 REINFORCE 算法\n\t* 演员-评论家算法\n\n### 第四部分：你与人工智能\n\n#### 第14章：开展你自己的深度学习项目\n\n* 深度学习项目的创意\n\t* 机器视觉和 
GAN（[fashion_mnist_pixel_by_pixel.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Ffashion_mnist_pixel_by_pixel.ipynb)）\n\t* 自然语言处理\n\t* 深度强化学习\n\t* 将现有机器学习项目转化为深度学习项目\n* 进一步开展项目的资源\n\t* 具有社会公益性的项目\n* 建模过程，包括超参数调优\n\t* 超参数搜索自动化\n* 深度学习框架\n\t* Keras 和 TensorFlow（[deep_net_in_tensorflow.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fdeep_net_in_tensorflow.ipynb)）\n\t* PyTorch（[pytorch.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fpytorch.ipynb)）\n\t* MXNet、CNTK、Caffe 等\n* 软件 2.0\n* 向通用人工智能迈进\n\n## 书籍封面\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthe-deep-learners_deep-learning-illustrated_readme_297c5a4f6bce.jpeg)","# Deep Learning Illustrated 快速上手指南\n\n本指南基于《Deep Learning Illustrated》配套代码库，帮助开发者快速搭建环境并运行书中的交互式 Jupyter Notebook 示例。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**：Windows、macOS 或 Linux。\n*   **Python 版本**：推荐 Python 3.6 - 3.8（书中代码主要基于此范围，高版本可能需微调）。\n*   **核心依赖**：\n    *   [Jupyter Notebook](http:\u002F\u002Fjupyter.org\u002F)：用于运行交互式代码。\n    *   [TensorFlow](https:\u002F\u002Fwww.tensorflow.org\u002F) \u002F [Keras](https:\u002F\u002Fkeras.io\u002F)：本书主要使用 Keras API（书中代码基于 TF 1.x，但笔记本可轻松迁移至 TF 2.x）。\n    *   其他常用库：`numpy`, `matplotlib`, `pandas`, `scikit-learn` 等。\n*   **硬件建议**：部分章节（如 GANs、深度强化学习）涉及大量计算，建议使用配备 NVIDIA GPU 的机器并安装 CUDA 工具包以加速训练。\n\n> **注意**：TensorFlow 2.0 在本书付印后才发布，因此仓库中的笔记本主要基于 TF 1.x 风格。若需在 TF 2.x 环境下运行，可参考官方提供的 [TF 2.x 适配版本](https:\u002F\u002Fgithub.com\u002Fjonkrohn\u002FDLTFpT)，或在代码开头添加兼容性设置。\n\n## 安装步骤\n\n### 1. 
克隆项目仓库\n\n打开终端（Terminal 或 CMD），执行以下命令下载源代码：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated.git\ncd deep-learning-illustrated\n```\n\n> **国内加速提示**：如果访问 GitHub 速度较慢，可使用国内镜像源（如 Gitee 镜像，若有）或通过代理加速克隆过程。\n\n### 2. 创建虚拟环境（推荐）\n\n为避免依赖冲突，建议使用 `conda` 或 `venv` 创建独立环境。\n\n**使用 Conda:**\n```bash\nconda create -n dli_env python=3.7\nconda activate dli_env\n```\n\n**使用 venv:**\n```bash\npython -m venv dli_env\n# Windows:\ndli_env\\Scripts\\activate\n# macOS\u002FLinux:\nsource dli_env\u002Fbin\u002Factivate\n```\n\n### 3. 安装依赖库\n\n进入项目根目录后，检查是否有 `requirements.txt` 文件。若无，可根据书中内容手动安装核心库。推荐使用国内镜像源加速安装：\n\n```bash\n# 使用清华源安装核心依赖\npip install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple jupyter tensorflow==1.15.0 keras matplotlib numpy pandas scikit-learn opencv-python\n```\n\n*注：若直接使用 TensorFlow 2.x，请安装 `tensorflow>=2.0`，并在运行旧版笔记时注意 API 差异。*\n\n### 4. 验证安装\n\n启动 Jupyter Notebook 以确认环境正常：\n\n```bash\njupyter notebook\n```\n\n浏览器将自动打开，导航至 `notebooks` 目录即可看到所有章节的代码文件。\n\n## 基本使用\n\n本书的核心内容是位于 `notebooks` 目录下的 Jupyter Notebook 文件。每个笔记本对应书中的一个概念或案例。\n\n### 最简单的使用示例：运行浅层神经网络\n\n我们将运行第 5 章中的经典示例——在 Keras 中构建一个浅层神经网络来识别 MNIST 手写数字。\n\n1.  **定位文件**：\n    在 Jupyter 界面中，进入 `notebooks` 文件夹，找到并点击打开 `shallow_net_in_keras.ipynb`。\n    *(或者直接访问：[notebooks\u002Fshallow_net_in_keras.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fshallow_net_in_keras.ipynb))*\n\n2.  **执行代码**：\n    按顺序点击单元格左侧的“运行”按钮（或按下 `Shift + Enter`）。代码流程如下：\n    *   **加载数据**：自动下载并加载 MNIST 数据集。\n    *   **数据预处理**：将像素值归一化并重塑数组形状。\n    *   **构建模型**：定义一个简单的全连接神经网络架构。\n    *   **编译与训练**：配置优化器和损失函数，开始训练模型。\n    *   **评估**：输出模型在测试集上的准确率。\n\n3.  
**观察结果**：\n    运行完毕后，您将看到类似以下的输出，表明模型已成功训练：\n    ```text\n    Epoch 1\u002F5\n    60000\u002F60000 [==============================] - 2s 33us\u002Fsample - loss: 0.2594 - accuracy: 0.9245\n    ...\n    Test accuracy: 0.9785\n    ```\n\n### 探索其他章节\n\n您可以按照书中的目录结构探索更多高级应用：\n*   **计算机视觉**：运行 `lenet_in_keras.ipynb` 体验卷积神经网络。\n*   **自然语言处理**：打开 `lstm_sentiment_classifier.ipynb` 学习情感分析。\n*   **生成对抗网络**：运行 `generative_adversarial_network.ipynb` 生成图像。\n\n所有代码均设计为可交互修改，您可以尝试调整超参数（如学习率、批次大小）来观察对模型性能的影响。","某高校人工智能讲师正准备开设深度学习入门课，急需一套能让学生直观理解神经网络原理且无需复杂环境配置的教学资源。\n\n### 没有 deep-learning-illustrated 时\n- 学生面对枯燥的数学公式和抽象理论难以建立直观认知，导致学习曲线陡峭，初期流失率高。\n- 教师需花费大量时间手动编写基础代码示例（如 MNIST 识别、热狗检测器），且容易因框架版本差异（如 TensorFlow 1.x 与 2.x）引发环境报错。\n- 缺乏可视化的交互案例，学生无法通过“所见即所得”的方式观察权重变化或风格迁移效果，只能被动听讲。\n- 课后练习资源分散，学生找不到与教材章节严格对应的可运行代码，自学效率极低。\n\n### 使用 deep-learning-illustrated 后\n- 借助书中配套的可视化图解和交互式 Jupyter Notebook，学生能清晰看到从生物视觉到机器视觉的演变，瞬间理解核心概念。\n- 直接调用仓库中涵盖浅层网络、词向量运算及风格迁移等完整案例，代码已针对教学优化，大幅减少环境调试时间。\n- 利用\"TensorFlow Playground\"和“热狗检测器”等生动实例，学生可动手调整参数并实时观察模型行为，将理论转化为直觉。\n- 每个章节均提供精准对应的笔记文件，学生可按图索骥复现从数据加载到模型训练的全流程，实现高效闭环学习。\n\ndeep-learning-illustrated 通过将深奥的算法理论转化为可视化的交互代码，彻底打破了深度学习入门的认知壁垒与环境门槛。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthe-deep-learners_deep-learning-illustrated_e2b38613.png","the-deep-learners","Deep Learning Study Group","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fthe-deep-learners_ec42d379.jpg","",null,"https:\u002F\u002Fgithub.com\u002Fthe-deep-learners",[21,25,29,33,36],{"name":22,"color":23,"percentage":24},"Jupyter Notebook","#DA5B0B",99.8,{"name":26,"color":27,"percentage":28},"Shell","#89e051",0.1,{"name":30,"color":31,"percentage":32},"Python","#3572A5",0,{"name":34,"color":35,"percentage":32},"Dockerfile","#384d54",{"name":37,"color":38,"percentage":32},"Batchfile","#C1F12E",793,394,"2026-03-29T21:07:17","MIT",2,"未说明",{"notes":46,"python":44,"dependencies":47},"本书代码原基于 TensorFlow 2.0 之前版本编写，但可轻松转换为 TensorFlow 2.x 代码；官方已提供 TensorFlow 2.x 
version of the notebooks repository. Installation guides are in the installation directory; if you run into problems, visit the book's Q&A forum rather than opening an Issue.",[48,49,22,50,51],"TensorFlow 2.x (or TensorFlow 2.0)","Keras","OpenAI Gym","word2vec",[53,54,55,56],"Language models","Images","Development frameworks","Other","ready","2026-03-27T02:49:30.150509","2026-04-06T07:13:45.134993",[61,66,71,76,81,85],{"id":62,"question_zh":63,"answer_zh":64,"source_url":65},12112,"When running the generative adversarial network (GAN) notebook, Keras emits warnings, training fails, and the generated images are all noise or gray. How can this be fixed?","This usually comes down to how the discriminator's trainable state is set and when the model is compiled. If the model is not recompiled after setting `discriminator.trainable = False`, or if the training loop does not toggle the `trainable` flag and recompile correctly, the generator learns while the discriminator does not, producing sharp noise or gray images. The maintainers have updated the GAN notebook to fix this logic; see the latest notebook in the repository and the explanatory comments added to it. Dataset variability also matters: the more varied the data, the harder the network finds it to learn.","https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fissues\u002F2",{"id":67,"question_zh":68,"answer_zh":69,"source_url":70},12113,"The backpropagation formula in Appendix B.6 of the book seems wrong; is the gradient of the activation function computed correctly?","Confirmed: the formula in Appendix B.6 of the printed book contains a typo. The correct term should include the derivative of the activation function; that is, the gradient must properly reflect the derivative of the activation with respect to z. The maintainers have acknowledged the error and will submit it to the publisher for correction. When working through the derivation, follow the corrected formula structure and check that each factor in the chain-rule product is right.","https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fissues\u002F6",{"id":72,"question_zh":73,"answer_zh":74,"source_url":75},12114,"Training Word2Vec with the Gensim library in natural_language_preprocessing.ipynb raises errors about missing attributes (such as size, iter, vocab). How do I fix this?","This is not a bug in the code but a version-compatibility issue. The book and its companion repository deliberately pin Gensim to 3.4.0 (see `gensim==3.4.0` in the Dockerfile), since a printed book cannot change as libraries update. Gensim 4.x renamed several parameters (`size` became `vector_size`, `iter` became `epochs`, `vocab` became `index_to_key`, and so on). To run the book's code as written, stay on Gensim 3.4.0 rather than upgrading to the latest release; install the pinned version with `pip install gensim==3.4.0`.","https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fissues\u002F8",{"id":77,"question_zh":78,"answer_zh":79,"source_url":80},12115,"Following the instructions on page 152 of the book to use TensorBoard, the log path cannot be found or TensorBoard is unreachable from the browser. How can this be fixed?","Two common configuration issues need adjusting: 1. Log directory path: `--logdir='logs\u002Fdeep-net'` in step 2 of the book should be changed to `--logdir='work\u002Fnotebooks\u002Flogs\u002Fdeep-net'`, because the notebook code creates the logs folder inside the notebooks directory (see footnote 38 on page 152 of the book). 2. Docker port mapping: the `docker run` command must include the `-p 6006:6006` flag so the host browser can reach TensorBoard; the complete command should include `-p 8888:8888 -p 6006:6006`.","https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002Fdeep-learning-illustrated\u002Fissues\u002F5",{"id":82,"question_zh":83,"answer_zh":84,"source_url":65},12116,"Why does running the same deep learning code on a CPU and a GPU give different results?","In theory, CPU and GPU results should be identical; the only difference is that the GPU computes faster. When results diverge, it is usually not because the hardware's arithmetic differs. Likely causes are an unfixed random seed, tiny accumulated differences in floating-point precision (rare), or code whose behavior depends on the hardware backend (such as a particular dropout implementation or the data-loading order). Under standard conditions, with the same random seed set, results should match.",{"id":86,"question_zh":87,"answer_zh":88,"source_url":65},12117,"The images generated when training on the full dataset are still gray. Is the dataset too large to learn from?","Yes. For some simple doodles datasets, more variety in the data can actually make it harder for the network to learn features, pushing outputs toward an average (gray images). In that case, shrinking the dataset or training for more epochs may help the network converge. It is also worth checking that the learning rate and the network architecture suit the complexity of the data.",[],[91,101,109,117,125,137],{"id":92,"name":93,"github_repo":94,"description_zh":95,"stars":96,"difficulty_score":97,"last_commit_at":98,"category_tags":99,"status":57},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui is a web interface built on Gradio that lets users run and use the powerful Stable Diffusion image-generation model locally with ease. It solves the pain points of the original model being command-line driven, hard to approach, and scattered across tools, consolidating the complex AI image-generation workflow into one intuitive graphical platform.\n\nCasual creators who want a quick start, designers who need fine control over image detail, and developers and researchers exploring the model's potential can all benefit. Its core strength is feature richness: beyond the basic text-to-image, image-to-image, inpainting, and outpainting modes, it pioneered advanced features such as attention adjustment, prompt matrices, negative prompts, and a 'highres fix'. It also bundles face-restoration tools such as GFPGAN and CodeFormer, supports multiple neural upscaling algorithms, and allows unlimited extension through a plugin system. Even on VRAM-limited devices, stable-diffusion-webui provides optimization options that put high-quality AI art creation within reach.",162132,3,"2026-04-05T11:01:52",[55,54,100],"Agent",{"id":102,"name":103,"github_repo":104,"description_zh":105,"stars":106,"difficulty_score":43,"last_commit_at":107,"category_tags":108,"status":57},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code is a high-performance optimization system built for AI coding assistants (such as Claude Code, Codex, and Cursor). More than a set of configuration files, it is a complete framework refined through long real-world use, aimed at the core pain points AI agents face in practical development: low efficiency, lost memory, security risks, and no capacity for continuous learning.\n\nBy introducing modular skills, intuition enhancement, persistent memory, and built-in security scanning, everything-claude-code markedly improves AI performance on complex tasks and helps developers build more stable, smarter production-grade AI agents. Its distinctive 'research-first' development philosophy and token-consumption optimizations make model responses faster and cheaper while defending against potential attack vectors.\n\nThe toolkit is well suited to software developers, AI researchers, and technical teams that want deeply customized AI workflows. Whether you are building a large codebase or need AI help with security audits and automated testing, everything-claude-code provides strong underlying support. An open-source project that won an Anthropic hackathon award, it combines multi-language support with a rich set of battle-tested hooks, letting AI truly grow into an assistant that understands",138956,"2026-04-05T11:33:21",[55,100,53],{"id":110,"name":111,"github_repo":112,"description_zh":113,"stars":114,"difficulty_score":43,"last_commit_at":115,"category_tags":116,"status":57},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI is a powerful, highly modular visual AI engine built for designing and executing complex Stable Diffusion image-generation pipelines. It drops the traditional write-the-code approach in favor of an intuitive node-graph interface, where users build personalized generation pipelines by wiring functional modules together.\n\nThis design neatly solves the pain points of complex configuration and limited flexibility in advanced AI image workflows. Users without a programming background can freely combine models, tune parameters, and preview results live, handling everything from basic text-to-image to complex multi-step highres refinement. ComfyUI is broadly compatible: it supports Windows, macOS, and Linux, runs on NVIDIA, AMD, Intel, and Apple Silicon hardware, and was early to support cutting-edge models such as SDXL, Flux, and SD3.\n\nWhether for researchers and developers probing algorithmic potential or designers and serious AI-art enthusiasts chasing creative freedom, ComfyUI delivers strong support. Its modular architecture lets the community keep extending it, making it one of the most flexible open-source diffusion-model tools with the richest ecosystem, helping users turn ideas into results efficiently.",107662,"2026-04-03T11:11:01",[55,54,100],{"id":118,"name":119,"github_repo":120,"description_zh":121,"stars":122,"difficulty_score":43,"last_commit_at":123,"category_tags":124,"status":57},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat is a lightweight, fast AI assistant that delivers a smooth, cross-platform large-model experience. It solves the pain of losing conversation continuity when switching devices and of managing many AI models in one place, letting users connect anytime via web, iOS, Android, Windows, macOS, or Linux for daily work, study, or creative inspiration.\n\nIt suits everyday users, students, professionals, and teams that need private deployment. For developers it also offers a convenient self-hosting path, with one-click deployment to platforms such as Vercel or Zeabur.\n\nNextChat's core strength is broad model compatibility: it natively supports mainstream models such as Claude, DeepSeek, GPT-4, and Gemini Pro, so users can switch between AI capabilities in a single interface. It was also early to support the MCP (Model Context Protocol), strengthening context handling. For enterprise users, a professional edition adds brand customization, fine-grained permission control, internal knowledge-base integration, and security auditing to meet high standards for data privacy and tailored management.",87618,"2026-04-05T07:20:52",[55,53],{"id":126,"name":127,"github_repo":128,"description_zh":129,"stars":130,"difficulty_score":43,"last_commit_at":131,"category_tags":132,"status":57},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners is Microsoft's structured introductory machine-learning curriculum, designed to help users with no background master classic machine learning. The course is laid out over 12 weeks, with 26 concise lessons and 52 accompanying quizzes, covering the full path from basic concepts to practical application and sparing beginners the paralysis of facing a vast field without structured guidance.\n\nDevelopers looking to change tracks, researchers filling in algorithmic background, and curious hobbyists can all benefit. The course pairs clear theoretical explanations with hands-on practice, building a solid skills foundation step by step. A standout feature is its strong multilingual support: an automated pipeline provides versions in more than 50 languages, including Simplified Chinese, dramatically lowering the barrier for learners worldwide. The project is developed openly with an active community and continuously updated content, so learners get current, accurate material. If you are looking for a clear, friendly, and professional path into machine learning, ML-For-Beginners is an ideal starting point.",84991,"2026-04-05T10:45:23",[54,133,134,135,100,56,53,55,136],"Data tools","Video","Plugins","Audio",{"id":138,"name":139,"github_repo":140,"description_zh":141,"stars":142,"difficulty_score":97,"last_commit_at":143,"category_tags":144,"status":57},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow is a leading open-source retrieval-augmented generation (RAG) engine that builds a more accurate, reliable context layer for large language models. It pairs state-of-the-art RAG techniques with agent capabilities, efficiently extracting knowledge from all kinds of documents and letting models reason and execute tasks on that knowledge.\n\nHallucination and stale knowledge are common pain points in LLM applications. By deeply parsing complex document structure (tables, charts, and mixed layouts), RAGFlow markedly improves retrieval accuracy, curbing fabricated answers and keeping responses both grounded and current. Its built-in agent mechanism goes further, letting the system not just answer questions but plan steps to solve complex ones.\n\nIt is well suited to developers, enterprise engineering teams, and AI researchers, whether standing up a private knowledge-base Q&A system or pushing large models into vertical domains. RAGFlow offers a visual workflow editor and flexible APIs, lowering the bar for users without an algorithms background while meeting professional developers' need for deep customization. Released under the Apache 2.0 license, it is becoming a key bridge between general-purpose large models and domain-specific knowledge.",77062,"2026-04-04T04:44:48",[100,54,55,53,56]]