[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-the-deep-learners--TensorFlow-LiveLessons":3,"tool-the-deep-learners--TensorFlow-LiveLessons":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 
AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":79,"owner_email":79,"owner_twitter":79,"owner_website":79,"owner_url":80,"languages":81,"stars":98,"forks":99,"last_commit_at":100,"license":101,"difficulty_score":10,"env_os":102,"env_gpu":103,"env_ram":104,"env_deps":105,"category_tags":115,"github_topics":79,"view_count":23,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":116,"updated_at":117,"faqs":118,"releases":148},2328,"the-deep-learners\u002FTensorFlow-LiveLessons","TensorFlow-LiveLessons","\"Deep Learning with TensorFlow\" LiveLessons","TensorFlow-LiveLessons 是深度学习专家 Jon Krohn 为其系列视频教程配套的开源代码库，旨在帮助学习者通过实战掌握 TensorFlow 框架。它主要解决了深度学习入门过程中“理论难以落地”的痛点，提供了与《Deep Learning with TensorFlow》、《自然语言处理》及《深度强化学习与 GAN》三门课程完全同步的 Jupyter Notebook 代码示例。\n\n这套资源特别适合具备一定 Python 基础和数据分析师技能的开发者或研究人员使用。如果你熟悉 Unix 命令行操作，并希望系统性地从神经网络基础进阶到复杂的应用场景，这里将是理想的练习场。其独特的技术亮点在于严谨的学习路径设计：内容按推荐顺序排列，从生物灵感类比入手，逐步引导用户完成环境配置、手写数字识别等经典任务，并深入讲解 Keras 接口与专业术语。此外，项目还贴心地提供了详尽的 macOS 及类 Unix 系统安装指南，确保用户能顺畅运行所有示例。作为第一版教程的代码存档，它不仅记录了早期的 TensorFlow 实践方法，也为后续更新版本的学习奠定了坚实的理论与实操基础。","# TensorFlow-LiveLessons\n\n**Note that the second edition of this video series is now available [here](https:\u002F\u002Fgithub.com\u002Fjonkrohn\u002FDLTFpT\u002F). 
The second edition contains all of the content from this (first) edition plus quite a bit more, as well as updated library versions.**\n\nThis repository is home to the code that accompanies [Jon Krohn's](https:\u002F\u002Fwww.jonkrohn.com\u002F):\n\n1. [Deep Learning with TensorFlow](https:\u002F\u002Fwww.safaribooksonline.com\u002Flibrary\u002Fview\u002Fdeep-learning-with\u002F9780134770826\u002F) LiveLessons (summary blog post [here](https:\u002F\u002Fmedium.com\u002F@jjpkrohn\u002Ffilming-deep-learning-with-tensorflow-livelessons-for-oreilly-safari-50363ed4efad))\n2. [Deep Learning for Natural Language Processing](https:\u002F\u002Fwww.safaribooksonline.com\u002Flibrary\u002Fview\u002Fdeep-learning-for\u002F9780134851921\u002F) LiveLessons (summary blog post [here](https:\u002F\u002Finsights.untapt.com\u002Fdeep-learning-for-natural-language-processing-tutorials-with-jupyter-notebooks-ad67f336ce3f))\n3. [Deep Reinforcement Learning and GANs](https:\u002F\u002Fwww.safaribooksonline.com\u002Flibrary\u002Fview\u002Fdeep-reinforcement-learning\u002F9780135171233\u002F) LiveLessons (summary blog post [here](https:\u002F\u002Finsights.untapt.com\u002Fdeep-reinforcement-learning-and-generative-adversarial-networks-tutorials-with-jupyter-notebooks-6ef4dc6957ea))\n\n**The above order is the recommended sequence in which to undertake these LiveLessons.** That said, *Deep Learning with TensorFlow* provides a sufficient theoretical and practical background for the other LiveLessons.\n\n## Prerequisites\n\n#### Command Line\n\nWorking through these LiveLessons will be easiest if you are familiar with the **Unix command line** basics. A tutorial of these fundamentals can be found [here](https:\u002F\u002Flearnpythonthehardway.org\u002Fbook\u002Fappendixa.html). 
\n\n#### Python for Data Analysis\n\nIn addition, if you're unfamiliar with using **Python** for data analysis (e.g., the **pandas**, scikit-learn, matplotlib packages), the [data analyst path of DataQuest](https:\u002F\u002Fwww.dataquest.io\u002Fpath\u002Fdata-analyst) will quickly get you up to speed -- steps one (*Introduction to Python*) and two (*Intermediate Python and Pandas*) provide the bulk of the essentials. \n\n## Installation\n\nStep-by-step guides for running the code in this repository can be found in the [installation directory](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Ftree\u002Fmaster\u002Finstallation). \n\n## Notebooks\n\nAll of the code that I cover in the LiveLessons can be found in [this directory](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Ftree\u002Fmaster\u002Fnotebooks) as [Jupyter notebooks](http:\u002F\u002Fjupyter.org\u002F). \n\nBelow is the lesson-by-lesson sequence in which I covered them: \n\n### [Deep Learning with TensorFlow LiveLessons](https:\u002F\u002Fwww.safaribooksonline.com\u002Flibrary\u002Fview\u002Fdeep-learning-with\u002F9780134770826\u002F)\n\n#### Lesson One: Introduction to Deep Learning\n\n##### 1.1 Neural Networks and Deep Learning\n\n* via analogy to their biological inspirations, this section introduces Artificial Neural Networks and how they developed to their predominantly *deep* architectures today\n\n##### 1.2 Running the Code in These LiveLessons\n\n* goes over the [installation directory](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Ftree\u002Fmaster\u002Finstallation) mentioned above, discussing the options for working through my Jupyter notebooks\n* details the [step-by-step installation of TensorFlow on Mac OS X](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Finstallation\u002Fstep_by_step_MacOSX_install.md), a process that 
may be instructive for users of any Unix-like operating system\n\n##### 1.3 An Introductory Artificial Neural Network\n\n* get your hands dirty with a simple-as-possible neural network ([shallow_net_in_keras.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fshallow_net_in_keras.ipynb)) for classifying handwritten digits\n* introduces Jupyter notebooks and their most useful hot keys\n* introduces a gentle quantity of deep learning terminology by whiteboarding through:\n  * the MNIST digit data set\n  * the preprocessing of images for analysis with a neural network\n  * a shallow network architecture\n\n---\n\n#### Lesson Two: How Deep Learning Works\n\n##### 2.1 The Families of Deep Neural Nets and their Applications\n\n* talk through the function and popular applications of the predominant modern families of deep neural nets:\n  * Dense \u002F Fully-Connected\n  * Convolutional Networks (ConvNets)\n  * Recurrent Neural Networks (RNNs) \u002F Long Short-Term Memory units (LSTMs)\n  * Reinforcement Learning\n  * Generative Adversarial Networks\n\n##### 2.2 Essential Theory I -- Neural Units\n\n* the following essential deep learning concepts are explained with intuitive, graphical explanations: \n  * neural units and activation functions\n    * perceptron\n    * sigmoid ([sigmoid_function.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fsigmoid_function.ipynb))\n    * tanh\n    * Rectified Linear Units (ReLU)\n\n##### 2.3 Essential Theory II -- Cost Functions, Gradient Descent, and Backpropagation\n\n* cost functions\n    * quadratic\n    * cross-entropy ([cross_entropy_cost.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fcross_entropy_cost.ipynb))\n* gradient descent\n* backpropagation via the chain rule\n* layer types\n    * 
input\n    * dense \u002F fully-connected\n    * softmax output ([softmax_demo.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fsoftmax_demo.ipynb))\n    \n##### 2.4 TensorFlow Playground -- Visualizing a Deep Net in Action\n\n* leverage [TensorFlow Playground](http:\u002F\u002Fplayground.tensorflow.org\u002F) to interactively visualize the theory from the preceding section\n\n##### 2.5 Data Sets for Deep Learning\n\n* overview of canonical data sets for image classification and meta-resources for data sets ideally suited to deep learning\n\n##### 2.6 Applying Deep Net Theory to Code I\n\n* apply the theory learned throughout Lesson Two to create an intermediate-depth image classifier ([intermediate_net_in_keras.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fintermediate_net_in_keras.ipynb))\n* builds on, and greatly outperforms, the shallow architecture from Section 1.3 \n\n---\n\n#### Lesson Three: Convolutional Networks\n\n##### 3.1 Essential Theory III -- Mini-Batches, Unstable Gradients, and Avoiding Overfitting\n\n* add to our state-of-the-art deep learning toolkit by delving further into essential theory, specifically:\n  * weight initialization\n    * uniform\n    * normal\n    * Xavier Glorot\n  * **stochastic** gradient descent\n    * learning rate\n    * batch size\n    * second-order gradient learning\n      * momentum\n      * Adam\n   * unstable gradients\n     * vanishing\n     * exploding\n   * avoiding overfitting \u002F model generalization\n     * L1\u002FL2 regularization\n     * dropout\n     * artificial data set expansion\n   * batch normalization\n   * more layers\n     * max-pooling\n     * flatten\n\n##### 3.2 Applying Deep Net Theory to Code II\n\n* apply the theory learned in the previous section to create a deep, dense net for image classification 
([deep_net_in_keras.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fdeep_net_in_keras.ipynb))\n* builds on, and outperforms, the intermediate architecture from Section 2.5\n\n##### 3.3 Introduction to Convolutional Neural Networks for Visual Recognition\n\n* whiteboard through an intuitive explanation of what convolutional layers are and how they're so effective\n\n##### 3.4 Classic ConvNet Architectures -- LeNet-5\n\n* apply the theory learned in the previous section to create a deep convolutional net for image classification ([lenet_in_keras.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Flenet_in_keras.ipynb)) that is inspired by the classic LeNet-5 neural network introduced in section 1.1\n\n##### 3.5 Classic ConvNet Architectures -- AlexNet and VGGNet\n\n* classify color images of flowers with two very deep convolutional networks inspired by contemporary prize-winning model architectures: AlexNet ([alexnet_in_keras.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Falexnet_in_keras.ipynb)) and VGGNet ([vggnet_in_keras.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fvggnet_in_keras.ipynb))\n\n##### 3.6 TensorBoard and the Interpretation of Model Outputs\n\n* return to the networks from the previous section, adding code to output results to the TensorBoard deep learning results-visualization tool\n* explore TensorBoard and explain how to interpret model results within it\n\n---\n\n#### Lesson Four: Introduction to TensorFlow\n\n##### 4.1 Comparison of the Leading Deep Learning Libraries\n\n* discuss the relative strengths, weaknesses, and common applications of the leading deep learning libraries:\n  * Caffe\n  * Torch\n  * Theano\n  * 
TensorFlow\n  * and the high-level APIs TFLearn and Keras\n* conclude that, for the broadest set of applications, TensorFlow is the best option\n\n##### 4.2 Introduction to TensorFlow\n\n* introduce TensorFlow graphs and related terminology:\n  * ops\n  * tensors\n    * Variables\n    * placeholders\n  * feeds\n  * fetches\n* build simple TensorFlow graphs ([first_tensorflow_graphs.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Ffirst_tensorflow_graphs.ipynb))\n* build neurons in TensorFlow ([first_tensorflow_neurons.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Ffirst_tensorflow_neurons.ipynb))\n\n##### 4.3 Fitting Models in TensorFlow\n\n* fit a simple line in TensorFlow:\n  * by considering individual data points ([point_by_point_intro_to_tensorflow.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fpoint_by_point_intro_to_tensorflow.ipynb))\n  * while taking advantage of tensors ([tensor-fied_intro_to_tensorflow.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Ftensor-fied_intro_to_tensorflow.ipynb))\n  * with batches sampled from millions of data points ([intro_to_tensorflow_times_a_million.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fintro_to_tensorflow_times_a_million.ipynb))\n\n##### 4.4 Dense Nets in TensorFlow\n\n* create a dense neural net ([intermediate_net_in_tensorflow.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fintermediate_net_in_tensorflow.ipynb)) in TensorFlow with an architecture identical to the intermediate one built in Keras in Section 2.5\n\n##### 4.5 Deep 
Convolutional Nets in TensorFlow\n\n* create a deep convolutional neural net ([lenet_in_tensorflow.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Flenet_in_tensorflow.ipynb)) in TensorFlow with an architecture identical to the LeNet-inspired one built in Keras in Section 3.4\n\n---\n\n#### Lesson Five: Improving Deep Networks\n\n##### 5.1 Improving Performance and Tuning Hyperparameters\n\n* detail systematic steps for improving the performance of deep neural nets, including by tuning hyperparameters\n\n##### 5.2 How to Build Your Own Deep Learning Project\n\n* specific steps for designing and evaluating your own deep learning project\n\n##### 5.3 Resources for Self-Study\n\n* topics worth investing time in to become an expert deployer of deep learning models\n\n--- \n---\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthe-deep-learners_TensorFlow-LiveLessons_readme_82a2947a7a9f.jpg)\n\n### [Deep Learning for Natural Language Processing](https:\u002F\u002Fwww.safaribooksonline.com\u002Flibrary\u002Fview\u002Fdeep-learning-for\u002F9780134851921\u002F)\n\n#### Lesson One: The Power and Elegance of Deep Learning for NLP\n\n##### 1.1 Introduction to Deep Learning for Natural Language Processing\n\n* high-level overview of deep learning as it pertains to Natural Language Processing (NLP)\n* influential examples of industrial applications of NLP\n* timeline of contemporary breakthroughs that have brought Deep Learning approaches to the forefront of NLP research and development\n\n##### 1.2 Computational Representations of Natural Language Elements\n\n* introduce the elements of natural language\n* contrast how these elements are represented by traditional machine-learning models and emergent deep-learning models\n\n##### 1.3 NLP Applications\n\n* specify common NLP applications and bucket them into three tiers of relative complexity\n\n##### 1.4 Installation, Including GPU 
Considerations\n\n* build on the [step-by-step installation of TensorFlow on Mac OS X](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Finstallation\u002Fstep_by_step_MacOSX_install.md) covered in the [Deep Learning with TensorFlow LiveLessons](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002F#12-running-the-code-in-these-livelessons) to facilitate the training of deep learning models with an Nvidia GPU. \n\n##### 1.5 Review of Prerequisite Deep Learning Theory\n\n* summarise the key concepts introduced in the [Deep Learning with TensorFlow LiveLessons](https:\u002F\u002Fwww.safaribooksonline.com\u002Flibrary\u002Fview\u002Fdeep-learning-with\u002F9780134770826\u002F), which serve as the foundation for the material introduced in these NLP-focused LiveLessons\n\n##### 1.6 A Sneak Peek\n\n* take a tantalising look ahead at the capabilities developed over the course of these LiveLessons\n\n---\n\n#### Lesson Two: Word Vectors\n\n##### 2.1 Vector-Space Embedding\n\n* leverage interactive demos to enable an intuitive understanding of vector-space embeddings of words, nuanced quantitative representations of word meaning\n\n##### 2.2 word2vec\n\n* key papers that led to the development of word2vec, a technique for transforming natural language into vector representations\n* essential word2vec theory introduced:\n  * architectures:\n    1. Skip-Gram\n    2. Continuous Bag of Words\n  * training algorithms:\n    1. hierarchical softmax\n    2. negative sampling\n  * evaluation perspectives:\n    1. intrinsic\n    2. extrinsic\n  * hyperparameters:\n    1. number of dimensions\n    2. context-word window size\n    3. number of iterations\n    4. 
size of data set\n* contrast word2vec with its leading alternative, [GloVe](https:\u002F\u002Fnlp.stanford.edu\u002Fprojects\u002Fglove\u002F)\n\n##### 2.3 Data Sets for NLP\n\n* pre-trained word vectors:\n  * for [word2vec](https:\u002F\u002Fcode.google.com\u002Farchive\u002Fp\u002Fword2vec\u002F)\n  * for [GloVe](http:\u002F\u002Fnlp.stanford.edu\u002Fprojects\u002Fglove\u002F)\n* natural language data sets:\n  * Jon Krohn's [resources page](https:\u002F\u002Fwww.jonkrohn.com\u002Fresources\u002F)\n  * [Zhang, Zhao and LeCun's](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1509.01626.pdf) [labelled data](http:\u002F\u002Fxzh.me\u002F)\n  * Internet Movie DataBase (IMDB) reviews classified by sentiment from [Andrew Maas and his Stanford colleagues (2011)](http:\u002F\u002Fai.stanford.edu\u002F~amaas\u002Fpapers\u002FwvSent_acl2011.pdf)\n  \n##### 2.4 Creating Word Vectors with word2vec\n\n* use books from [Project Gutenberg](https:\u002F\u002Fwww.gutenberg.org\u002F) to create word vectors with word2vec\n* interactively visualise the word vectors with the [bokeh](https:\u002F\u002Fbokeh.pydata.org\u002Fen\u002Flatest\u002F) library ([creating_word_vectors_with_word2vec.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fcreating_word_vectors_with_word2vec.ipynb))\n\n---\n\n#### Lesson Three: Modeling Natural Language Data\n\n##### 3.1 Best Practices for Preprocessing Natural Language Data\n\n* in [natural_language_preprocessing_best_practices.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fnatural_language_preprocessing_best_practices.ipynb), apply the following recommended best practices to clean up a corpus of natural language data prior to modeling: \n  * tokenize\n  * convert all characters to lowercase\n  * remove stopwords\n  * remove punctuation\n  * stem words\n  * handle bigram (and trigram) word 
collocations\n  \n##### 3.2 The Area Under the ROC Curve\n\n* detail the calculation and functionality of the area under the Receiver Operating Characteristic curve summary metric, which is used throughout the remainder of the LiveLessons for evaluating model performance\n\n##### 3.3 Dense Neural Network Classification\n\n* pair vector-space embedding with the fundamentals of deep learning introduced in the [Deep Learning with TensorFlow LiveLessons](https:\u002F\u002Fwww.safaribooksonline.com\u002Flibrary\u002Fview\u002Fdeep-learning-with\u002F9780134770826\u002F) to create a dense neural network for classifying documents by their sentiment ([dense_sentiment_classifier.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fdense_sentiment_classifier.ipynb))\n\n##### 3.4 Convolutional Neural Network Classification\n\n* add convolutional layers to the deep learning architecture to improve the performance of the natural language classifying model ([convolutional_sentiment_classifier.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fconvolutional_sentiment_classifier.ipynb))\n\n---\n\n#### Lesson Four: Recurrent Neural Networks\n\n##### 4.1 Essential Theory of RNNs\n\n* provide an intuitive understanding of Recurrent Neural Networks (RNNs), which permit backpropagation through time over sequential data, such as natural language and financial time series data\n\n##### 4.2 RNNs in Practice\n\n* incorporate simple RNN layers into a model that classifies documents by their sentiment ([rnn_in_keras.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Frnn_in_keras.ipynb))\n\n##### 4.3 Essential Theory of LSTMs and GRUs\n\n* develop familiarity with the Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) varieties of RNNs which provide 
markedly more productive modeling of sequential data with deep learning models\n\n##### 4.4 LSTMs and GRUs in Practice\n\n* straightforwardly build LSTM ([vanilla_lstm_in_keras.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fvanilla_lstm_in_keras.ipynb)) and GRU ([gru_in_keras.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fgru_in_keras.ipynb)) deep learning architectures through the Keras high-level API \n\n--- \n\n#### Lesson Five: Advanced Models\n\n##### 5.1 Bi-Directional LSTMs\n\n* Bi-directional LSTMs are an especially potent variant of the LSTM\n* high-level theory on Bi-LSTMs before leveraging them in practice ([bidirectional_lstm.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fbidirectional_lstm.ipynb))\n\n##### 5.2 Stacked LSTMs\n\n* Bi-LSTMs are stacked to enable deep learning networks to model increasingly abstract representations of language ([stacked_bidirectional_lstm.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fstacked_bidirectional_lstm.ipynb); [ye_olde_conv_lstm_stackeroo.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fye_olde_conv_lstm_stackeroo.ipynb))\n\n##### 5.3 Parallel Network Architectures\n\n* advanced data modeling capabilities are possible with non-sequential architectures, e.g., parallel convolutional layers, each with unique hyperparameters ([multi_convnet_architectures.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fmulti_convnet_architectures.ipynb))\n\n\n\n--- 
\n---\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthe-deep-learners_TensorFlow-LiveLessons_readme_efd79795dad8.jpg)\n\n### [Deep Reinforcement Learning and GANs](https:\u002F\u002Fwww.safaribooksonline.com\u002Flibrary\u002Fview\u002Fdeep-reinforcement-learning\u002F9780135171233\u002F)\n\n#### Lesson One: The Foundations of Artificial Intelligence\n\n##### 1.1 The Contemporary State of AI\n\n* examine what the term \"Artificial Intelligence\" means and how it relates to deep learning\n* define *narrow*, *general*, and *super* intelligence\n\n##### 1.2 Applications of Generative Adversarial Networks\n\n* uncover the rapidly-improving quality of Generative Adversarial Networks for creating compelling novel imagery in the style of humans\n* involves the fun, interactive [pix2pix](https:\u002F\u002Faffinelayer.com\u002Fpixsrv\u002F) tool\n\n##### 1.3 Applications of Deep Reinforcement Learning\n\n* distinguish *supervised* and *unsupervised learning* from *reinforcement learning*\n* provide an overview of the seminal contemporary deep reinforcement learning breakthroughs, including: \n\t* the Deep Q-Learning algorithm\n\t* AlphaGo\n\t* AlphaGo Zero\n\t* AlphaZero\n\t* robotics advances\n* introduce the most popular deep reinforcement learning environments:\n\t* [OpenAI Gym](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fgym)\n\t* [DeepMind Lab](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Flab)\n\t* [Unity](https:\u002F\u002Fgithub.com\u002FUnity-Technologies\u002Fml-agents)\n\n##### 1.4 Running the Code in these LiveLessons\n\n* review the [step-by-step installation of TensorFlow on Mac OS X](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Finstallation\u002Fstep_by_step_MacOSX_install.md) detailed in the [Deep Learning with TensorFlow LiveLessons](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002F#12-running-the-code-in-these-livelessons)\n\n##### 1.5 Review of Prerequisite Deep Learning Theory\n\n* summarise 
the key concepts introduced in the [Deep Learning with TensorFlow LiveLessons](https:\u002F\u002Fwww.safaribooksonline.com\u002Flibrary\u002Fview\u002Fdeep-learning-with\u002F9780134770826\u002F), which serve as the foundation for the material introduced in these advanced-topics LiveLessons\n\n--- \n\n#### Lesson Two: Generative Adversarial Networks (GANs)\n\n##### 2.1 Essential GAN Theory\n\n* cover the high-level theory of what GANs are and how they are able to generate realistic-looking images\n\n##### 2.2 The “Quick, Draw!” Game Dataset\n\n* show the [Quick, Draw! game](https:\u002F\u002Fquickdraw.withgoogle.com\u002F), which we use as the source of hundreds of thousands of hand-drawn images from a single class for a GAN to learn to imitate\n\n##### 2.3 A Discriminator Network\n\n* build the discriminator component of a GAN ([generative_adversarial_network.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fgenerative_adversarial_network.ipynb))\n\n##### 2.4 A Generator Network\n\n* build the generator component of a GAN ([generative_adversarial_network.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fgenerative_adversarial_network.ipynb) continued)\n\n##### 2.5 Training an Adversarial Network\n\n* pit the generator and discriminator networks against each other as adversaries ([generative_adversarial_network.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fgenerative_adversarial_network.ipynb) completed)\n\n--- \n\n#### Lesson Three: Deep Q-Learning Networks (DQNs)\n\n##### 3.1 The Cartpole Game\n\n* introduce the [Cartpole Game](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fgym\u002Fwiki\u002FCartPole-v0), an environment provided by OpenAI and used throughout the remainder of these LiveLessons to train deep reinforcement learning 
algorithms\n\n##### 3.2 Essential Deep RL Theory\n\n* delve into the essential theory of deep reinforcement learning in general\n\n##### 3.3 Essential DQN Theory\n\n* delve into the essential theory of Deep Q-Learning networks, a popular, particular type of deep reinforcement learning algorithm\n\n##### 3.4 Defining a DQN Agent\n\n* define a Deep Q-Learning agent from scratch ([cartpole_dqn.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fcartpole_dqn.ipynb))\n\n##### 3.5 Interacting with an OpenAI Gym Environment\n\n* leverage OpenAI Gym to enable our Deep Q-Learning agent to master the Cartpole Game ([cartpole_dqn.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fcartpole_dqn.ipynb) completed)\n\n--- \n\n#### Lesson Four: OpenAI Lab\n\n##### 4.1 Visualizing Agent Performance\n\n* use the [OpenAI Lab](https:\u002F\u002Fgithub.com\u002Fkengz\u002Fopenai_lab) to visualise our Deep Q-Learning agent's performance in real-time\n\n##### 4.2 Modifying Agent Hyperparameters\n\n* learn to straightforwardly optimise a deep reinforcement learning agent's hyperparameters\n\n##### 4.3 Automated Hyperparameter Experimentation and Optimization\n\n* automate the search through hyperparameters to optimize our agent’s performance\n\n##### 4.4 Fitness\n\n* calculate summary metrics to gauge our agent’s overall fitness\n\n--- \n\n#### Lesson Five: Advanced Deep Reinforcement Learning Agents\n\n##### 5.1 Policy Gradients and the REINFORCE Algorithm\n\n* at a high level, discover Policy Gradient algorithms in general and the classic REINFORCE implementation in particular\n\n##### 5.2 The Actor-Critic Algorithm\n\n* cover how Policy Gradients can be combined with Deep Q-Learning to facilitate the Actor-Critic algorithms\n\n##### 5.3 Software 2.0\n\n* discuss how deep learning is ushering in a new era of software development driven 
by data in place of hard-coded rules\n\n##### 5.4 Approaching Artificial General Intelligence\n\n* return to our discussion of Artificial Intelligence, specifically addressing the limitations of modern deep learning approaches\n\n---\n---\n","# TensorFlow-LiveLessons\n\n**请注意，本视频系列的第二版现已发布[此处](https:\u002F\u002Fgithub.com\u002Fjonkrohn\u002FDLTFpT\u002F)。第二版不仅包含了本（第一）版的所有内容，还增加了大量新内容，并更新了相关库的版本。**\n\n此仓库收录了与[Jon Krohn](https:\u002F\u002Fwww.jonkrohn.com\u002F)著作配套的代码：\n\n1. 《使用TensorFlow进行深度学习》LiveLessons（摘要博文[此处](https:\u002F\u002Fmedium.com\u002F@jjpkrohn\u002Ffilming-deep-learning-with-tensorflow-livelessons-for-oreilly-safari-50363ed4efad)）\n2. 《面向自然语言处理的深度学习》LiveLessons（摘要博文[此处](https:\u002F\u002Finsights.untapt.com\u002Fdeep-learning-for-natural-language-processing-tutorials-with-jupyter-notebooks-ad67f336ce3f)）\n3. 《深度强化学习与生成对抗网络》LiveLessons（摘要博文[此处](https:\u002F\u002Finsights.untapt.com\u002Fdeep-reinforcement-learning-and-generative-adversarial-networks-tutorials-with-jupyter-notebooks-6ef4dc6957ea)）\n\n**上述顺序是建议学习这些LiveLessons的推荐顺序。** 不过，《使用TensorFlow进行深度学习》已提供了足够充分的理论与实践基础，足以支撑后续其他LiveLessons的学习。\n\n## 先决条件\n\n#### 命令行\n\n若您熟悉**Unix命令行**的基本操作，将更轻松地完成这些LiveLessons的学习。有关这些基础知识的教程可参见[这里](https:\u002F\u002Flearnpythonthehardway.org\u002Fbook\u002Fappendixa.html)。\n\n#### Python数据分析\n\n此外，如果您不熟悉使用**Python**进行数据分析（例如，pandas、scikit-learn、matplotlib等库），可以参考[DataQuest的数据分析师路径](https:\u002F\u002Fwww.dataquest.io\u002Fpath\u002Fdata-analyst)，快速掌握所需技能——其中的第一步（Python入门）和第二步（Python进阶与Pandas）涵盖了大部分核心内容。\n\n## 安装\n\n运行本仓库中代码的分步指南可在[安装目录](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Ftree\u002Fmaster\u002Finstallation)中找到。\n\n## 笔记本\n\n我在LiveLessons中讲解的所有代码都以[Jupyter笔记本](http:\u002F\u002Fjupyter.org\u002F)的形式存放在[此目录](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Ftree\u002Fmaster\u002Fnotebooks)中。\n\n以下是我按课程顺序讲解这些笔记本的安排：\n\n### 《使用TensorFlow进行深度学习》LiveLessons\n\n#### 第一课：深度学习导论\n\n##### 1.1 
神经网络与深度学习\n\n* 通过类比生物神经系统的启发，本节介绍人工神经网络及其如何发展到如今以“深度”架构为主的形态。\n\n##### 1.2 如何运行这些LiveLessons中的代码\n\n* 回顾上文提到的[安装目录](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Ftree\u002Fmaster\u002Finstallation)，讨论运行我的Jupyter笔记本的不同方式。\n* 详细说明在Mac OS X上逐步安装TensorFlow的过程（[step_by_step_MacOSX_install.md](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Finstallation\u002Fstep_by_step_MacOSX_install.md)），这一过程对任何类Unix操作系统用户都有借鉴意义。\n\n##### 1.3 一个入门级的人工神经网络\n\n* 动手实践一个尽可能简单的神经网络（[shallow_net_in_keras.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fshallow_net_in_keras.ipynb)），用于分类手写数字。\n* 介绍Jupyter笔记本及其常用快捷键。\n* 通过白板演示，温和地引入一些深度学习术语：\n  * MNIST手写数字数据集\n  * 图像预处理以供神经网络分析\n  * 浅层网络架构\n\n---\n\n#### 第二课：深度学习的工作原理\n\n##### 2.1 深度神经网络的家族及其应用\n\n* 讲解当前主流深度神经网络家族的功能及常见应用：\n  * 密集层\u002F全连接层\n  * 卷积神经网络（ConvNet）\n  * 循环神经网络（RNN）\u002F长短期记忆单元（LSTM）\n  * 强化学习\n  * 生成对抗网络\n\n##### 2.2 核心理论I——神经元单元\n\n* 通过直观的图形化解释，阐述以下深度学习核心概念：\n  * 神经元单元与激活函数\n    * 感知器\n    * Sigmoid函数（[sigmoid_function.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fsigmoid_function.ipynb)）\n    * 双曲正切函数\n    * 整流线性单元（ReLU）\n\n##### 2.3 核心理论II——损失函数、梯度下降与反向传播\n\n* 损失函数\n    * 均方误差\n    * 交叉熵（[cross_entropy_cost.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fcross_entropy_cost.ipynb)）\n* 梯度下降\n* 通过链式法则实现反向传播\n* 各种层类型\n    * 输入层\n    * 密集层\u002F全连接层\n    * Softmax输出层（[softmax_demo.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fsoftmax_demo.ipynb)）\n\n##### 2.4 TensorFlow Playground——可视化深度网络的运行\n\n* 利用[TensorFlow Playground](http:\u002F\u002Fplayground.tensorflow.org\u002F)交互式地展示前一节所学的理论。\n\n##### 2.5 深度学习的数据集\n\n* 
概述用于图像分类的标准数据集，以及适合深度学习的数据资源汇总。\n\n##### 2.6 将深度网络理论应用于代码I\n\n* 将第二课中学到的理论应用于创建一个中等深度的图像分类器（[intermediate_net_in_keras.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fintermediate_net_in_keras.ipynb)）。\n* 此模型在1.3节的浅层架构基础上进行了改进，性能也显著提升。\n\n---\n\n#### 第三课：卷积神经网络\n\n##### 3.1 核心理论III——小批量训练、不稳定梯度及避免过拟合\n\n* 为进一步完善我们的深度学习工具箱，深入探讨以下核心理论：\n  * 权重初始化\n    * 均匀分布\n    * 正态分布\n    * Xavier Glorot方法\n  * **随机**梯度下降\n    * 学习率\n    * 批量大小\n    * 二阶梯度优化\n      * 动量法\n      * Adam算法\n  * 不稳定梯度\n    * 梯度消失\n    * 梯度爆炸\n  * 避免过拟合\u002F提高模型泛化能力\n    * L1\u002FL2正则化\n    * Dropout技术\n    * 数据增强\n  * 批量归一化\n  * 更多层次\n    * 最大池化\n    * 展平\n\n##### 3.2 将深度网络理论应用于代码 II\n\n* 将上一节学到的理论应用于创建用于图像分类的深层全连接网络（[deep_net_in_keras.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fdeep_net_in_keras.ipynb)）\n* 在第2.6节的中间架构基础上进一步改进，并取得更好的性能\n\n##### 3.3 用于视觉识别的卷积神经网络简介\n\n* 通过白板演示，直观地解释卷积层是什么以及它们为何如此有效\n\n##### 3.4 经典卷积网络架构——LeNet-5\n\n* 将上一节学到的理论应用于创建用于图像分类的深层卷积网络（[lenet_in_keras.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Flenet_in_keras.ipynb)），该网络灵感来源于第1.1节中介绍的经典LeNet-5神经网络\n\n##### 3.5 经典卷积网络架构——AlexNet和VGGNet\n\n* 使用两个受当代获奖模型架构启发的超深层卷积网络对彩色花卉图像进行分类：AlexNet（[alexnet_in_keras.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Falexnet_in_keras.ipynb)）和VGGNet（[vggnet_in_keras.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fvggnet_in_keras.ipynb)）\n\n##### 3.6 TensorBoard与模型输出的解读\n\n* 回到上一节中的网络，添加代码将结果输出到TensorBoard深度学习结果可视化工具\n* 探索TensorBoard，并解释如何在其中解读模型结果\n\n---\n\n#### 第四课：TensorFlow简介\n\n##### 4.1 主流深度学习框架对比\n\n* 讨论主流深度学习框架的相对优势、劣势及常见应用：\n  * Caffe\n  * Torch\n  * Theano\n  * TensorFlow\n  * 
以及高层API TFLearn和Keras\n* 得出结论：对于最广泛的应用场景，TensorFlow是最佳选择\n\n##### 4.2 TensorFlow简介\n\n* 介绍TensorFlow图及相关术语：\n  * 操作（ops）\n  * 张量\n    * 变量\n    * 占位符\n  * 输入（feeds）\n  * 输出（fetches）\n* 构建简单的TensorFlow图（[first_tensorflow_graphs.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Ffirst_tensorflow_graphs.ipynb)）\n* 在TensorFlow中构建神经元（[first_tensorflow_neurons.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Ffirst_tensorflow_neurons.ipynb)）\n\n##### 4.3 在TensorFlow中拟合模型\n\n* 在TensorFlow中拟合一条直线：\n  * 通过逐点考虑数据点（[point_by_point_intro_to_tensorflow.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fpoint_by_point_intro_to_tensorflow.ipynb)）\n  * 同时利用张量的优势（[tensor-fied_intro_to_tensorflow.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Ftensor-fied_intro_to_tensorflow.ipynb)）\n  * 使用从数百万数据点中采样的批次（[intro_to_tensorflow_times_a_million.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fintro_to_tensorflow_times_a_million.ipynb)）\n\n##### 4.4 TensorFlow中的全连接网络\n\n* 在TensorFlow中创建一个全连接神经网络（[intermediate_net_in_tensorflow.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fintermediate_net_in_tensorflow.ipynb)），其架构与第2.6节中用Keras构建的中间网络相同\n\n##### 4.5 TensorFlow中的深层卷积网络\n\n* 在TensorFlow中创建一个深层卷积神经网络（[lenet_in_tensorflow.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Flenet_in_tensorflow.ipynb)），其架构与第3.4节中用Keras构建的受LeNet-5启发的网络相同\n\n---\n\n#### 第五课：优化深度网络\n\n##### 5.1 提升性能与调优超参数\n\n* 详细介绍提升深度神经网络性能的系统性步骤，包括通过调优超参数来实现\n\n##### 5.2 
如何构建你自己的深度学习项目\n\n* 设计和评估你自己的深度学习项目的具体步骤\n\n##### 5.3 自学资源\n\n* 值得投入时间以成为深度学习模型部署专家的主题\n\n--- \n---\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthe-deep-learners_TensorFlow-LiveLessons_readme_82a2947a7a9f.jpg)\n\n\n\n### [自然语言处理中的深度学习](https:\u002F\u002Fwww.safaribooksonline.com\u002Flibrary\u002Fview\u002Fdeep-learning-for\u002F9780134851921\u002F)\n\n#### 第一课：深度学习在NLP中的强大与优雅\n\n##### 1.1 自然语言处理中的深度学习简介\n\n* 高度概括深度学习在自然语言处理（NLP）领域的应用\n* NLP工业应用中的重要案例\n* 当代突破性进展的时间线，这些进展使深度学习方法成为NLP研究与开发的前沿\n\n##### 1.2 自然语言元素的计算表示\n\n* 介绍自然语言的基本元素\n* 对比传统机器学习模型与新兴深度学习模型如何表示这些元素\n\n##### 1.3 NLP应用\n\n* 列举常见的NLP应用，并将其分为三个相对复杂度的层次\n\n##### 1.4 安装，包括GPU注意事项\n\n* 基于[在Mac OS X上逐步安装TensorFlow](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Finstallation\u002Fstep_by_step_MacOSX_install.md)，该内容已在[使用TensorFlow的深度学习LiveLessons](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002F#12-running-the-code-in-these-livelessons)中介绍，以方便使用Nvidia GPU训练深度学习模型。\n\n##### 1.5 深度学习先修理论回顾\n\n* 总结[使用TensorFlow的深度学习LiveLessons](https:\u002F\u002Fwww.safaribooksonline.com\u002Flibrary\u002Fview\u002Fdeep-learning-with\u002F9780134770826\u002F)中介绍的关键概念，这些概念构成了本NLP专题LiveLessons的基础\n\n##### 1.6 预览\n\n* 略窥这些LiveLessons过程中所掌握的能力\n\n---\n\n#### 第二课：词向量\n\n##### 2.1 向量空间嵌入\n\n* 利用交互式演示，帮助直观理解词的向量空间嵌入，即对词义的精细量化表示\n\n##### 2.2 word2vec\n\n* 导致 word2vec 发展的关键论文，这是一种将自然语言转换为向量表示的技术\n* 介绍 word2vec 的核心理论：\n  * 架构：\n    1. Skip-Gram\n    2. 连续词袋模型\n  * 训练算法：\n    1. 层次化 Softmax\n    2. 负采样\n  * 评估视角：\n    1. 内在评估\n    2. 外在评估\n  * 超参数：\n    1. 向量维度数\n    2. 上下文窗口大小\n    3. 迭代次数\n    4. 
数据集大小\n* 将 word2vec 与其主要替代方案 [GloVe](https:\u002F\u002Fnlp.stanford.edu\u002Fprojects\u002Fglove\u002F) 进行对比\n\n##### 2.3 自然语言处理的数据集\n\n* 预训练的词向量：\n  * 适用于 [word2vec](https:\u002F\u002Fcode.google.com\u002Farchive\u002Fp\u002Fword2vec\u002F)\n  * 适用于 [GloVe](http:\u002F\u002Fnlp.stanford.edu\u002Fprojects\u002Fglove\u002F)\n* 自然语言数据集：\n  * Jon Krohn 的 [资源页面](https:\u002F\u002Fwww.jonkrohn.com\u002Fresources\u002F)\n  * [Zhang、Zhao 和 LeCun 的](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1509.01626.pdf) [标注数据](http:\u002F\u002Fxzh.me\u002F)\n  * 来自 [Andrew Maas 及其斯坦福同事（2011 年）](http:\u002F\u002Fai.stanford.edu\u002F~amaas\u002Fpapers\u002FwvSent_acl2011.pdf) 的互联网电影数据库（IMDB）影评，按情感分类\n\n##### 2.4 使用 word2vec 创建词向量\n\n* 使用 [古腾堡计划](https:\u002F\u002Fwww.gutenberg.org\u002F) 中的书籍，通过 word2vec 创建词向量\n* 利用 [bokeh](https:\u002F\u002Fbokeh.pydata.org\u002Fen\u002Flatest\u002F) 库交互式地可视化这些词向量（[creating_word_vectors_with_word2vec.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fcreating_word_vectors_with_word2vec.ipynb)）\n\n---\n\n#### 第三课：自然语言数据建模\n\n##### 3.1 自然语言数据预处理的最佳实践\n\n* 在 [natural_language_preprocessing_best_practices.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fnatural_language_preprocessing_best_practices.ipynb) 中，应用以下推荐的最佳实践来清理用于建模的自然语言语料库：\n  * 分词\n  * 将所有字符转换为小写\n  * 去除停用词\n  * 去除标点符号\n  * 词干提取\n  * 处理二元（和三元）词组搭配\n\n##### 3.2 ROC 曲线下面积\n\n* 详细说明接收者操作特征曲线总结指标——ROC 曲线下面积的计算方法及其功能，该指标将在后续 LiveLessons 中用于评估模型性能\n\n##### 3.3 密集神经网络分类\n\n* 将向量空间嵌入与 [TensorFlow 深度学习直播课程](https:\u002F\u002Fwww.safaribooksonline.com\u002Flibrary\u002Fview\u002Fdeep-learning-with\u002F9780134770826\u002F) 
中介绍的深度学习基础知识相结合，创建一个用于按情感对文档进行分类的密集神经网络（[dense_sentiment_classifier.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fdense_sentiment_classifier.ipynb)）\n\n##### 3.4 卷积神经网络分类\n\n* 在深度学习架构中添加卷积层，以提升自然语言分类模型的性能（[convolutional_sentiment_classifier.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fconvolutional_sentiment_classifier.ipynb)）\n\n---\n\n#### 第四课：循环神经网络\n\n##### 4.1 RNN 的基本理论\n\n* 提供对循环神经网络（RNN）的直观理解，RNN 允许在序列数据上进行时间反向传播，例如自然语言和金融时间序列数据\n\n##### 4.2 RNN 的实际应用\n\n* 将简单的 RNN 层融入到一个按情感对文档进行分类的模型中（[rnn_in_keras.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Frnn_in_keras.ipynb)）\n\n##### 4.3 LSTM 和 GRU 的基本理论\n\n* 熟悉长短期记忆网络（LSTM）和门控循环单元（GRU）这两种 RNN 变体，它们能够显著提升深度学习模型对序列数据的建模能力\n\n##### 4.4 LSTM 和 GRU 的实际应用\n\n* 通过 Keras 高级 API，轻松构建 LSTM（[vanilla_lstm_in_keras.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fvanilla_lstm_in_keras.ipynb)）和 GRU（[gru_in_keras.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fgru_in_keras.ipynb)）深度学习架构\n\n---\n\n#### 第五课：高级模型\n\n##### 5.1 双向 LSTM\n\n* 双向 LSTM 是 LSTM 的一种特别强大的变体\n* 在实践中运用之前，先了解双向 LSTM 的高级理论（[bidirectional_lstm.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fbidirectional_lstm.ipynb)）\n\n##### 5.2 堆叠式双向 LSTM\n\n* 将双向 LSTM 
堆叠起来，使深度学习网络能够对语言进行越来越抽象的建模（[stacked_bidirectional_lstm.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fstacked_bidirectional_lstm.ipynb)；[ye_olde_conv_lstm_stackeroo.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fye_olde_conv_lstm_stackeroo.ipynb)）\n\n##### 5.3 并行网络架构\n\n* 通过非序列化的架构，例如具有各自独特超参数的并行卷积层，可以实现更高级的数据建模能力（[multi_convnet_architectures.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fmulti_convnet_architectures.ipynb)）\n\n\n\n---\n---\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthe-deep-learners_TensorFlow-LiveLessons_readme_efd79795dad8.jpg)\n\n### [深度强化学习与生成对抗网络](https:\u002F\u002Fwww.safaribooksonline.com\u002Flibrary\u002Fview\u002Fdeep-reinforcement-learning\u002F9780135171233\u002F)\n\n#### 第一课：人工智能的基础\n\n##### 1.1 当前的人工智能现状\n\n* 探讨“人工智能”这一术语的含义及其与深度学习的关系\n* 定义狭义、通用和超人类智能\n\n##### 1.2 生成对抗网络的应用\n\n* 揭示生成对抗网络在创作逼真且富有创意的人类风格图像方面迅速提升的质量\n* 涉及有趣且交互式的[pix2pix](https:\u002F\u002Faffinelayer.com\u002Fpixsrv\u002F)工具\n\n##### 1.3 深度强化学习的应用\n\n* 区分监督学习、无监督学习与强化学习\n* 概述当代具有里程碑意义的深度强化学习突破，包括：\n\t* 深度Q学习算法\n\t* AlphaGo\n\t* AlphaGo Zero\n\t* AlphaZero\n\t* 机器人技术的进步\n* 介绍最受欢迎的深度强化学习环境：\n\t* [OpenAI Gym](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fgym)\n\t* [DeepMind Lab](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Flab)\n\t* [Unity](https:\u002F\u002Fgithub.com\u002FUnity-Technologies\u002Fml-agents)\n\n##### 1.4 在这些LiveLessons中运行代码\n\n* 回顾[在Mac OS X上逐步安装TensorFlow](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Finstallation\u002Fstep_by_step_MacOSX_install.md)的过程，该过程详细记录于[使用TensorFlow进行深度学习的LiveLessons](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002F#12-running-the-code-in-these-livelessons)\n\n##### 1.5 深度学习理论预备知识回顾\n\n* 
总结[使用TensorFlow进行深度学习的LiveLessons](https:\u002F\u002Fwww.safaribooksonline.com\u002Flibrary\u002Fview\u002Fdeep-learning-with\u002F9780134770826\u002F)中介绍的关键概念，这些概念构成了本高级主题LiveLessons内容的基础\n\n---\n\n#### 第二课：生成对抗网络（GANs）\n\n##### 2.1 GANs的核心理论\n\n* 介绍GANs的高层次理论，以及它们如何生成逼真的图像\n\n##### 2.2 “Quick, Draw!”游戏数据集\n\n* 展示[Quick, Draw!游戏](https:\u002F\u002Fquickdraw.withgoogle.com\u002F)，我们将其用作来源，提供来自单一类别的数十万幅手绘图像，供GAN学习模仿\n\n##### 2.3 判别器网络\n\n* 构建GAN中的判别器组件（[generative_adversarial_network.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fgenerative_adversarial_network.ipynb)）\n\n##### 2.4 生成器网络\n\n* 继续构建GAN中的生成器组件（[generative_adversarial_network.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fgenerative_adversarial_network.ipynb)）\n\n##### 2.5 训练对抗网络\n\n* 让生成器和判别器作为对手相互对抗（[generative_adversarial_network.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fgenerative_adversarial_network.ipynb)完成）\n\n---\n\n#### 第三课：深度Q学习网络（DQN）\n\n##### 3.1 Cartpole游戏\n\n* 介绍[Cartpole游戏](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fgym\u002Fwiki\u002FCartPole-v0)，这是由OpenAI提供的环境，并在整个剩余的LiveLessons中用于训练深度强化学习算法\n\n##### 3.2 深度强化学习的核心理论\n\n* 深入探讨深度强化学习的一般核心理论\n\n##### 3.3 DQN的核心理论\n\n* 深入探讨深度Q学习网络的核心理论，这是一种流行的特定类型的深度强化学习算法\n\n##### 3.4 定义一个DQN智能体\n\n* 从头开始定义一个深度Q学习智能体（[cartpole_dqn.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fcartpole_dqn.ipynb)）\n\n##### 3.5 与OpenAI Gym环境交互\n\n* 利用OpenAI Gym使我们的深度Q学习智能体掌握Cartpole游戏（[cartpole_dqn.ipynb](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Fnotebooks\u002Fcartpole_dqn.ipynb)完成）\n\n---\n\n#### 第四课：OpenAI Lab\n\n##### 4.1 可视化智能体表现\n\n* 使用[OpenAI 
Lab](https:\u002F\u002Fgithub.com\u002Fkengz\u002Fopenai_lab)实时可视化我们的深度Q学习智能体的表现\n\n##### 4.2 调整智能体超参数\n\n* 学习如何简单地优化深度强化学习智能体的超参数\n\n##### 4.3 自动化超参数实验与优化\n\n* 自动化搜索超参数以优化智能体的表现\n\n##### 4.4 适应度\n\n* 计算汇总指标来衡量智能体的整体适应度\n\n---\n\n#### 第五课：高级深度强化学习智能体\n\n##### 5.1 策略梯度与REINFORCE算法\n\n* 从高层次了解策略梯度算法，特别是经典的REINFORCE实现\n\n##### 5.2 演员-评论家算法\n\n* 介绍如何将策略梯度与深度Q学习结合，从而实现演员-评论家算法\n\n##### 5.3 软件2.0\n\n* 讨论深度学习如何引领软件开发进入一个由数据驱动、而非硬编码规则的新时代\n\n##### 5.4 朝向通用人工智能迈进\n\n* 回到关于人工智能的讨论，重点探讨现代深度学习方法的局限性\n\n---\n---","# TensorFlow-LiveLessons 快速上手指南\n\n本指南基于 Jon Krohn 的深度学习视频教程配套代码库，旨在帮助开发者快速搭建环境并运行经典的深度学习示例（涵盖 TensorFlow 基础、CNN、NLP 及强化学习）。\n\n> **注意**：原仓库为第一版教程代码。作者已发布包含更多内容且库版本更新的**第二版**，建议优先访问 [DLTFpT 第二版仓库](https:\u002F\u002Fgithub.com\u002Fjonkrohn\u002FDLTFpT\u002F)。若需学习本仓库（第一版）内容，请参考以下步骤。\n\n## 1. 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n### 系统要求\n- **操作系统**：推荐使用 **Unix-like 系统**（macOS 或 Linux）。Windows 用户建议使用 WSL (Windows Subsystem for Linux) 以获得最佳兼容性。\n- **命令行基础**：熟悉基本的 Unix 命令行操作（如 `cd`, `ls`, `mkdir` 等）。\n  - *入门参考*：[Unix 命令行基础教程](https:\u002F\u002Flearnpythonthehardway.org\u002Fbook\u002Fappendixa.html)\n\n### 前置知识\n- **Python 数据分析基础**：熟悉 Python 及其数据科学生态圈。\n  - 核心库：`pandas`, `scikit-learn`, `matplotlib`。\n  - *快速提升*：推荐完成 DataQuest 的 [数据分析师路径](https:\u002F\u002Fwww.dataquest.io\u002Fpath\u002Fdata-analyst) 中的前两步（Python 入门与中级\u002FPandas）。\n\n### 软件依赖\n- **Python**: 建议版本 3.6+ (具体视原仓库 `requirements.txt` 而定，老版本项目可能依赖 Python 3.5\u002F3.6)。\n- **Jupyter Notebook**: 用于运行交互式教程代码。\n- **深度学习框架**: TensorFlow (CPU 或 GPU 版本) 及 Keras。\n\n## 2. 
安装步骤\n\n本仓库提供了详细的分步安装指南，特别是针对 macOS 用户。\n\n### 第一步：克隆仓库\n打开终端，执行以下命令获取代码：\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons.git\ncd TensorFlow-LiveLessons\n```\n\n### 第二步：查看详细安装文档\n仓库内的 `installation` 目录包含了针对不同系统的详细配置说明。强烈建议先阅读该目录下的文档，特别是 macOS 用户的分步指南：\n- **通用安装指引目录**: [installation directory](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Ftree\u002Fmaster\u002Finstallation)\n- **macOS 分步安装详解**: [step_by_step_MacOSX_install.md](https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fblob\u002Fmaster\u002Finstallation\u002Fstep_by_step_MacOSX_install.md)\n\n### 第三步：安装依赖 (通用流程)\n通常需要通过 pip 安装所需库。由于这是较早期的项目，建议先检查是否存在 `requirements.txt` 文件，或使用以下通用命令安装核心组件（**国内用户推荐使用清华或阿里镜像源加速**）：\n\n```bash\n# 使用清华镜像源安装常用依赖\npip install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple jupyter numpy pandas matplotlib scikit-learn\n\n# 安装 TensorFlow 和 Keras (版本需参考原项目要求，以下为通用安装命令)\npip install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple tensorflow keras\n```\n\n> **提示**：如果遇到版本冲突问题，请参照 `installation` 文件夹中的具体版本号进行安装。\n\n## 3. 基本使用\n\n所有教程代码均以 **Jupyter Notebook** (`.ipynb`) 形式存储在 `notebooks` 目录中，按课程章节分类。\n\n### 启动 Jupyter Notebook\n在项目根目录下运行：\n```bash\njupyter notebook\n```\n浏览器将自动打开，导航至 `notebooks` 文件夹即可看到所有示例代码。\n\n### 推荐学习路径与首个示例\n\n建议按照以下顺序进行学习，这也是原教程的录制顺序：\n\n1.  **Deep Learning with TensorFlow** (基础理论的核心)\n2.  **Deep Learning for Natural Language Processing** (NLP 应用)\n3.  **Deep Reinforcement Learning and GANs** (进阶应用)\n\n#### 运行第一个示例：浅层神经网络\n进入 **Lesson One**，尝试运行最基础的 handwritten digits 分类示例，以熟悉流程和术语。\n\n- **文件位置**: `notebooks\u002Fshallow_net_in_keras.ipynb`\n- **内容概述**:\n  - 介绍 Jupyter Notebook 常用快捷键。\n  - 讲解 MNIST 数据集及其预处理。\n  - 构建并训练一个简单的浅层神经网络。\n\n**操作步骤**:\n1. 在 Jupyter 界面点击 `shallow_net_in_keras.ipynb`。\n2. 依次选中代码单元格，按 `Shift + Enter` 运行。\n3. 
观察模型训练过程中的 Loss 变化及最终的准确率输出。\n\n#### 进阶示例：TensorFlow 原生实现\n在掌握 Keras 高层 API 后，可前往 **Lesson Four** 学习 TensorFlow 底层图机制。\n\n- **文件位置**: `notebooks\u002Ffirst_tensorflow_graphs.ipynb`\n- **内容概述**: 演示如何手动定义 Ops、Tensors、Variables 和 Placeholders，构建最简单的计算图。\n\n---\n*更多详细理论讲解请配合原视频课程观看，代码注释中亦包含关键概念的解释。*","一名刚转行深度学习的数据分析师，试图从零开始构建手写数字识别模型，却因环境配置和理论缺失而举步维艰。\n\n### 没有 TensorFlow-LiveLessons 时\n- **环境搭建受阻**：面对复杂的 Unix 命令行和 TensorFlow 依赖库，在 macOS 上反复安装失败，耗费数天仍无法运行第一行代码。\n- **理论实践脱节**：虽然看过零散的博客文章，但无法将“神经网络”、“反向传播”等抽象概念与具体的代码实现对应起来，理解浮于表面。\n- **学习路径混乱**：网络上教程质量参差不齐，不知道是先学 Keras 还是原生 TensorFlow，也不清楚是否需要先掌握强化学习，缺乏系统性的指引。\n- **调试无从下手**：遇到报错只能盲目复制粘贴错误信息到搜索引擎，缺乏对 Jupyter Notebook 快捷键和调试技巧的了解，效率极低。\n\n### 使用 TensorFlow-LiveLessons 后\n- **一键复现环境**：跟随仓库中详细的 macOS 分步安装指南，快速配好 Unix 环境和 TensorFlow，直接运行现成的 Jupyter Notebook 代码。\n- **代码驱动理解**：通过 `shallow_net_in_keras.ipynb` 等案例，边看视频边动手修改参数，直观地看到浅层网络如何分类 MNIST 数字，彻底吃透核心术语。\n- **顺序清晰高效**：严格遵循推荐的课程顺序（从 TensorFlow 基础到 NLP 再到强化学习），建立了完整的知识体系，避免了盲目跳跃学习带来的认知负担。\n- **交互式调试体验**：利用提供的 Notebook 热键指南和结构化代码，能够快速定位并修复错误，将原本需要几天的摸索过程缩短为几小时的实战练习。\n\nTensorFlow-LiveLessons 通过将系统的视频理论与可执行的代码笔记完美结合，让初学者能以最快速度跨越环境配置与理论理解的鸿沟，真正落地深度学习项目。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthe-deep-learners_TensorFlow-LiveLessons_7c78e73d.png","the-deep-learners","Deep Learning Study Group","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fthe-deep-learners_ec42d379.jpg","",null,"https:\u002F\u002Fgithub.com\u002Fthe-deep-learners",[82,86,90,94],{"name":83,"color":84,"percentage":85},"Jupyter Notebook","#DA5B0B",99.8,{"name":87,"color":88,"percentage":89},"Shell","#89e051",0.2,{"name":91,"color":92,"percentage":93},"Python","#3572A5",0.1,{"name":95,"color":96,"percentage":97},"Dockerfile","#384d54",0,904,682,"2026-02-28T21:14:30","MIT","macOS, Linux (类 Unix 系统)","未说明 (文档仅提及在 macOS 上安装 TensorFlow 的步骤，未明确强制要求 GPU 或指定 CUDA 版本)","未说明",{"notes":106,"python":107,"dependencies":108},"该项目是 Jon Krohn 深度学习视频系列（第一版）的配套代码。强烈建议先熟悉 Unix 命令行基础。官方推荐学习顺序为：先完成《TensorFlow 
深度学习》，再进行 NLP 和强化学习\u002FGAN 的课程。详细的分步安装指南（特别是针对 Mac OS X）位于仓库的 'installation' 目录中。注意：作者已发布包含更新库版本的第二版课程，链接在 README 顶部。","未说明 (需熟悉 Python 数据分析)",[109,110,111,112,113,114],"TensorFlow","Keras","pandas","scikit-learn","matplotlib","Jupyter",[13],"2026-03-27T02:49:30.150509","2026-04-06T06:52:13.297332",[119,124,129,134,139,144],{"id":120,"question_zh":121,"answer_zh":122,"source_url":123},10682,"在 Windows 上构建 Docker 镜像时遇到 conda 包冲突错误怎么办？","这通常是因为本地修改了 Dockerfile 或环境不一致导致的。建议完全重新克隆仓库（complete new clone），使用原始的 Dockerfile 进行构建，不要随意更改其中的安装命令（例如将 conda install 改为 pip install 或更改 TensorFlow 版本）。如果问题依旧，尝试移除 `--ignore-installed` 标志后再试。","https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fissues\u002F23",{"id":125,"question_zh":126,"answer_zh":127,"source_url":128},10683,"挂载本地路径后，Jupyter Notebook 中看不到任何文件或笔记本列表为空？","这个问题通常是因为挂载路径映射不正确，导致容器服务的是空目录（如 \u002Fhome\u002Fjovyan）而不是工作目录（\u002Fhome\u002Fjovyan\u002Fwork）。请确保运行命令时 `-v` 参数正确映射到了包含笔记本的本地文件夹。此外，也可以考虑直接使用配置好的 AWS Deep Learning AMI 来避免本地环境配置问题。","https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fissues\u002F3",{"id":130,"question_zh":131,"answer_zh":132,"source_url":133},10684,"运行容器时出现 'ImportError: No module named notebook.notebookapp' 错误如何解决？","这表明当前容器镜像中未正确安装 Jupyter Notebook 或其依赖损坏。解决方案是改用其他维护良好的 Dockerfile（例如 deep-learning-illustrated 项目中的 Dockerfile）重新构建镜像，或者直接放弃本地 Docker 部署，转而使用免费的 Google Colab (colab.research.google.com) 在线运行所有笔记本。","https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fissues\u002F26",{"id":135,"question_zh":136,"answer_zh":137,"source_url":138},10685,"如何在 Windows 上安装 TensorFlow 和 Keras？","推荐先安装 Anaconda Python 发行版。创建环境：`conda create -n tf python=3.5`，激活环境：`activate tf`。安装 TensorFlow CPU 版命令：`pip install --ignore-installed --upgrade [TensorFlow WHL URL]`。若需 GPU 支持，需先安装 Visual Studio Community 2015 (含 C++)、CUDA 8 和 cuDNN 5.1。注意：安装 Keras 不会覆盖已有的 TensorFlow 
安装。","https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fissues\u002F1",{"id":140,"question_zh":141,"answer_zh":142,"source_url":143},10686,"在 Windows PowerShell 中运行 Docker 命令时报 'sudo' 不是有效命令或权限错误怎么办？","Windows 系统默认没有 `sudo` 命令，请在 PowerShell 中以管理员身份运行，并去掉命令中的 `sudo` 前缀。如果遇到 'Container must be run with group root' 等权限错误，可能是 Windows 文件权限与 Linux 容器不兼容。建议直接在 Google Colab 上免费运行这些笔记本以绕过本地 Windows 环境的复杂性，或者检查 Docker Desktop 的文件共享设置。","https:\u002F\u002Fgithub.com\u002Fthe-deep-learners\u002FTensorFlow-LiveLessons\u002Fissues\u002F27",{"id":145,"question_zh":146,"answer_zh":147,"source_url":143},10687,"Docker 容器启动后 Jupyter 无法访问或报错，有什么替代方案？","如果本地 Docker 环境配置困难（特别是在 Windows 上），最推荐的替代方案是使用 Google Colab (colab.research.google.com)。它可以免费在云端运行所有 TensorFlow 笔记本，无需本地安装任何依赖或配置 Docker 容器，同时也避免了操作系统兼容性带来的权限和路径问题。",[]]