[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-zackchase--mxnet-the-straight-dope":3,"tool-zackchase--mxnet-the-straight-dope":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":81,"owner_twitter":82,"owner_website":83,"owner_url":84,"languages":85,"stars":107,"forks":108,"last_commit_at":109,"license":110,"difficulty_score":111,"env_os":112,"env_gpu":113,"env_ram":114,"env_deps":115,"category_tags":124,"github_topics":82,"view_count":23,"oss_zip_url":82,"oss_zip_packed_at":82,"status":16,"created_at":125,"updated_at":126,"faqs":127,"releases":153},3242,"zackchase\u002Fmxnet-the-straight-dope","mxnet-the-straight-dope","An interactive book on deep learning. Much easy, so MXNet. Wow. 
[Straight Dope is growing up] ---> Much of this content has been incorporated into the new Dive into Deep Learning Book available at https:\u002F\u002Fd2l.ai\u002F.","mxnet-the-straight-dope 是一本开源的交互式深度学习教程，旨在通过可运行的代码笔记帮助读者从零掌握深度学习核心概念与 MXNet 框架。它解决了传统技术书籍中理论讲解与实际代码脱节的痛点，将文字说明、数学公式、图表和可执行代码完美融合在 Jupyter Notebook 中，让学习者能够边读边练，即时验证想法。\n\n该项目特别适合希望系统学习深度学习的开发者、研究人员以及高校师生使用。无论是需要夯实线性代数、概率统计等数学基础的新手，还是想要快速上手监督学习、逻辑回归等实战模型的进阶用户，都能从中获益。其独特亮点在于“所见即所得”的教学模式：所有示例均基于 MXNet 的 Gluon 接口编写，既保留了生产级框架的高效性能，又提供了类似 Python 原生编写的简洁体验。此外，项目采用完全开放的社区协作模式，欢迎全球贡献者共同完善内容，并已获得官方中文版支持。虽然该项目现已演进为更完善的《动手学深度学习》（d2l.ai），但其开创性的交互式教学理念仍对 AI 教育领域产生着深远影响。","# Deep Learning - The Straight Dope (*Deprecated* Please see d2l.ai)\n## This content has been moved to Dive into the Deep Learning Book freely available at https:\u002F\u002Fd2l.ai\u002F.\n\n\n\n## Abstract\nThis repo contains an\nincremental sequence of notebooks designed to teach deep learning, MXNet, and\nthe ``gluon`` interface. Our goal is to leverage the strengths of Jupyter\nnotebooks to present prose, graphics, equations, and code together in one place.\nIf we're successful, the result will be a resource that could be simultaneously\na book, course material, a prop for live tutorials, and a resource for\nplagiarising (with our blessing) useful code. To our knowledge there's no source\nout there that teaches either (1) the full breadth of concepts in modern deep\nlearning or (2) interleaves an engaging textbook with runnable code. We'll find\nout by the end of this venture whether or not that void exists for a good\nreason.\n\nAnother unique aspect of this book is its authorship process. We are\ndeveloping this resource fully in the public view and are making it available\nfor free in its entirety. 
While the book has a few primary authors to set the\ntone and shape the content, we welcome contributions from the community and hope\nto coauthor chapters and entire sections with experts and community members.\nAlready we've received contributions spanning typo corrections through full\nworking examples.\n\n## Implementation with Apache MXNet\nThroughout this book,\nwe rely upon MXNet to teach core concepts, advanced topics, and a full\ncomplement of applications. MXNet is widely used in production environments\nowing to its strong reputation for speed. Now with ``gluon``, MXNet's new\nimperative interface (alpha), doing research in MXNet is easy.\n\n## Dependencies\nTo run these notebooks, you'll want to build MXNet from source. Fortunately,\nthis is easy (especially on Linux) if you follow [these\ninstructions](http:\u002F\u002Fmxnet.io\u002Fget_started\u002Finstall.html). You'll also want to\n[install Jupyter](http:\u002F\u002Fjupyter.readthedocs.io\u002Fen\u002Flatest\u002Finstall.html) and use\nPython 3 (because it's 2017).\n\n## Slides\n\nThe authors (& others) are\nincreasingly giving talks that are based on the content in this book. 
Some of\nthese slide-decks (like the 6-hour KDD 2017) are gigantic so we're collecting\nthem separately in [this repo](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-slides).\nContribute there if you'd like to share tutorials or course material based on\nthis book.\n\n## Translation\nAs we write the book, large stable sections are simultaneously being translated into 中文,\navailable in a [web version](http:\u002F\u002Fzh.gluon.ai\u002F) and via [GitHub source](http:\u002F\u002Fzh.gluon.ai\u002F).\n\n## Table of contents\n\n### Part 1: Deep Learning Fundamentals\n* **Chapter 1:** Crash course\n    * [Preface](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter01_crashcourse\u002Fpreface.ipynb)\n    * [Introduction](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter01_crashcourse\u002Fintroduction.ipynb)\n    * [Manipulating data with NDArray](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter01_crashcourse\u002Fndarray.ipynb)\n    * [Linear algebra](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter01_crashcourse\u002Flinear-algebra.ipynb)\n    * [Probability and statistics](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter01_crashcourse\u002Fprobability.ipynb)\n    * [Automatic differentiation via ``autograd``](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter01_crashcourse\u002Fautograd.ipynb)\n\n* **Chapter 2:** Introduction to supervised learning\n    * [Linear regression *(from scratch)*](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter02_supervised-learning\u002Flinear-regression-scratch.ipynb)\n    * [Linear regression *(with 
``gluon``)*](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter02_supervised-learning\u002Flinear-regression-gluon.ipynb)\n    * [Binary classification with logistic regression *(``gluon`` w bespoke loss function)*](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter02_supervised-learning\u002Flogistic-regression-gluon.ipynb)\n    * [Multiclass logistic regression *(from scratch)*](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter02_supervised-learning\u002Fsoftmax-regression-scratch.ipynb)\n    * [Multiclass logistic regression *(with ``gluon``)*](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter02_supervised-learning\u002Fsoftmax-regression-gluon.ipynb)\n    * [Overfitting and regularization *(from scratch)*](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter02_supervised-learning\u002Fregularization-scratch.ipynb)\n     * [Overfitting and regularization *(with ``gluon``)*](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter02_supervised-learning\u002Fregularization-gluon.ipynb)\n     * [Perceptron and SGD primer](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter02_supervised-learning\u002Fperceptron.ipynb)\n     * [Learning environments](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter02_supervised-learning\u002Fenvironment.ipynb)\n\n* **Chapter 3:** Deep neural networks (DNNs)\n    * [Multilayer perceptrons *(from scratch)*](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter03_deep-neural-networks\u002Fmlp-scratch.ipynb)\n    * [Multilayer perceptrons 
*(with ``gluon``)*](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter03_deep-neural-networks\u002Fmlp-gluon.ipynb)\n    * [Dropout regularization *(from scratch)*](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter03_deep-neural-networks\u002Fmlp-dropout-scratch.ipynb)\n    * [Dropout regularization *(with ``gluon``)*](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter03_deep-neural-networks\u002Fmlp-dropout-gluon.ipynb)\n    * [Introduction to ``gluon.Block`` and ``gluon.nn.Sequential()``](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter03_deep-neural-networks\u002Fplumbing.ipynb)\n    * [Writing custom layers with ``gluon.Block``](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter03_deep-neural-networks\u002Fcustom-layer.ipynb)\n    * [Serialization: saving and loading models](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter03_deep-neural-networks\u002Fserialization.ipynb)\n    * Advanced Data IO\n    * Debugging your neural networks\n\n* **Chapter 4:** Convolutional neural networks (CNNs)\n     * [Convolutional neural networks *(from scratch)*](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter04_convolutional-neural-networks\u002Fcnn-scratch.ipynb)\n     * [Convolutional neural networks *(with ``gluon``)*](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter04_convolutional-neural-networks\u002Fcnn-gluon.ipynb)\n     * [Introduction to deep CNNs 
(AlexNet)](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter04_convolutional-neural-networks\u002Fdeep-cnns-alexnet.ipynb)\n     * [Very deep networks and repeating blocks (VGG network)](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter04_convolutional-neural-networks\u002Fvery-deep-nets-vgg.ipynb)\n     * [Batch normalization *(from scratch)*](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter04_convolutional-neural-networks\u002Fcnn-batch-norm-scratch.ipynb)\n     * [Batch normalization *(with ``gluon``)*](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter04_convolutional-neural-networks\u002Fcnn-batch-norm-gluon.ipynb)\n\n* **Chapter 5:** Recurrent neural networks (RNNs)\n    * [Simple RNNs (from scratch)](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter05_recurrent-neural-networks\u002Fsimple-rnn.ipynb)\n    * [LSTM RNNs (from scratch)](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter05_recurrent-neural-networks\u002Flstm-scratch.ipynb)\n    * [GRUs (from scratch)](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter05_recurrent-neural-networks\u002Fgru-scratch.ipynb)\n    * [RNNs (with ``gluon``)](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter05_recurrent-neural-networks\u002Frnns-gluon.ipynb)\n    * ***Roadmap*** Dropout for recurrent nets\n    * ***Roadmap*** Zoneout regularization\n\n\n* **Chapter 6:** Optimization\n    * [Introduction to 
optimization](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter06_optimization\u002Foptimization-intro.ipynb)\n    * [Gradient descent and stochastic gradient descent from scratch](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter06_optimization\u002Fgd-sgd-scratch.ipynb)\n    * [Gradient descent and stochastic gradient descent with `gluon`](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter06_optimization\u002Fgd-sgd-gluon.ipynb)\n    * [Momentum from scratch](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter06_optimization\u002Fmomentum-scratch.ipynb)\n    * [Momentum with `gluon`](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter06_optimization\u002Fmomentum-gluon.ipynb)\n    * [Adagrad from scratch](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter06_optimization\u002Fadagrad-scratch.ipynb)\n    * [Adagrad with `gluon`](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter06_optimization\u002Fadagrad-gluon.ipynb)\n    * [RMSprop from scratch](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter06_optimization\u002Frmsprop-scratch.ipynb)\n    * [RMSprop with `gluon`](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter06_optimization\u002Frmsprop-gluon.ipynb)\n    * [Adadelta from scratch](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter06_optimization\u002Fadadelta-scratch.ipynb)\n    * [Adadelta with 
`gluon`](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter06_optimization\u002Fadadelta-gluon.ipynb)\n    * [Adam from scratch](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter06_optimization\u002Fadam-scratch.ipynb)\n    * [Adam with `gluon`](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter06_optimization\u002Fadam-gluon.ipynb)\n\n* **Chapter 7:** Distributed & high-performance learning\n    * [Fast & flexible: combining imperative & symbolic nets with HybridBlocks](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter07_distributed-learning\u002Fhybridize.ipynb)\n    * [Training with multiple GPUs (from scratch)](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter07_distributed-learning\u002Fmultiple-gpus-scratch.ipynb)\n    * [Training with multiple GPUs (with ``gluon``)](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter07_distributed-learning\u002Fmultiple-gpus-gluon.ipynb)\n    * [Training with multiple machines](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter07_distributed-learning\u002Ftraining-with-multiple-machines.ipynb)\n    * ***Roadmap*** Asynchronous SGD\n    * ***Roadmap*** Elastic SGD\n\n### Part 2: Applications\n* **Chapter 8:** Computer vision (CV)\n    * ***Roadmap*** Network of networks (inception & co)\n    * ***Roadmap*** Residual networks\n    * [Object detection](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter08_computer-vision\u002Fobject-detection.ipynb)\n    * ***Roadmap*** Fully-convolutional networks\n    * ***Roadmap*** Siamese (conjoined?) 
networks\n    * ***Roadmap*** Embeddings (pairwise and triplet losses)\n    * ***Roadmap*** Inceptionism \u002F visualizing feature detectors\n    * ***Roadmap*** Style transfer\n    * [Visual-question-answer](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter08_computer-vision\u002Fvisual-question-answer.ipynb)\n    * [Fine-tuning](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter08_computer-vision\u002Ffine-tuning.ipynb)\n\n* **Chapter 9:** Natural language processing (NLP)\n    * ***Roadmap*** Word embeddings (Word2Vec)\n    * ***Roadmap*** Sentence embeddings (SkipThought)\n    * ***Roadmap*** Sentiment analysis\n    * ***Roadmap*** Sequence-to-sequence learning (machine translation)\n    * ***Roadmap*** Sequence transduction with attention (machine translation)\n    * ***Roadmap*** Named entity recognition\n    * ***Roadmap*** Image captioning\n    * [Tree-LSTM for semantic relatedness](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter09_natural-language-processing\u002Ftree-lstm.ipynb)\n\n* **Chapter 10:** Audio processing\n    * ***Roadmap*** Intro to automatic speech recognition\n    * ***Roadmap*** Connectionist temporal classification (CTC) for unaligned sequences\n    * ***Roadmap*** Combining static and sequential data\n\n* **Chapter 11:** Recommender systems\n    * [Introduction to recommender systems](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter11_recommender-systems\u002Fintro-recommender-systems.ipynb)\n    * ***Roadmap*** Latent factor models\n    * ***Roadmap*** Deep latent factor models\n    * ***Roadmap*** Bilinear models\n    * ***Roadmap*** Learning from implicit feedback\n\n* **Chapter 12:** Time series\n    * [Introduction to Forecasting *(with 
``gluon``)*](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter12_time-series\u002Fintro-forecasting-gluon.ipynb)\n    * [Generalized Linear Models\u002FMLP for Forecasting *(with ``gluon``)*](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter12_time-series\u002Fintro-forecasting-2-gluon.ipynb)\n    * ***Roadmap*** Factor Models for Forecasting\n    * ***Roadmap*** Recurrent Neural Network for Forecasting\n    * [Linear Dynamical System (*from scratch*) ](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter12_time-series\u002Flds-scratch.ipynb)\n    * [Exponential Smoothing and Innovative State-space modeling (*from scratch*)](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter12_time-series\u002Fissm-scratch.ipynb)\n    * ***Roadmap*** Gaussian processes for Forecasting\n    * ***Roadmap*** Bayesian Time Series Models\n    * ***Roadmap*** Modeling missing data\n    * ***Roadmap*** Combining static and sequential data\n\n### Part 3: Advanced Methods\n* **Chapter 13:** Unsupervised learning\n   * ***Roadmap*** Introduction to autoencoders\n   * ***Roadmap*** Convolutional autoencoders (introduce upconvolution)\n   * ***Roadmap*** Denoising autoencoders\n   * [Variational autoencoders](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter13_unsupervised-learning\u002Fvae-gluon.ipynb)\n   * ***Roadmap*** Clustering\n\n* **Chapter 14:** Generative adversarial networks (GANs)\n    * [Introduction to GANs](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter14_generative-adversarial-networks\u002Fgan-intro.ipynb)\n    * [Deep convolutional GANs 
(DCGANs)](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter14_generative-adversarial-networks\u002Fdcgan.ipynb)\n    * ***Roadmap*** Wasserstein-GANs\n    * ***Roadmap*** Energy-based GANs\n    * ***Roadmap*** Conditional GANs\n    * [Image transduction GANs (Pix2Pix)](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter14_generative-adversarial-networks\u002Fpixel2pixel.ipynb)\n    * ***Roadmap*** Learning from Synthetic and Unsupervised Images\n\n* **Chapter 15:** Adversarial learning\n    * ***Roadmap*** Two Sample Tests\n    * ***Roadmap*** Finding adversarial examples\n    * ***Roadmap*** Adversarial training\n\n* **Chapter 16:** Tensor Methods\n    * [Introduction to tensor methods](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter16_tensor_methods\u002Ftensor_basics.ipynb)\n    * ***Roadmap*** Tensor decomposition\n    * ***Roadmap*** Tensorized neural networks\n\n* **Chapter 17:** Deep reinforcement learning (DRL)\n    * ***Roadmap*** Introduction to reinforcement learning\n    * ***Roadmap*** Deep contextual bandits\n    * [Deep Q-networks (DQN)](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter17_deep-reinforcement-learning\u002FDQN.ipynb)\n    * [Double-DQN](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter17_deep-reinforcement-learning\u002FDDQN.ipynb)\n    * ***Roadmap*** Policy gradient\n    * ***Roadmap*** Actor-critic gradient\n\n* **Chapter 18:** Variational methods and uncertainty\n    * ***Roadmap*** Dropout-based uncertainty estimation (BALD)\n    * [Weight uncertainty (Bayes by Backprop) from 
scratch](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter18_variational-methods-and-uncertainty\u002Fbayes-by-backprop.ipynb)\n    * [Weight uncertainty (Bayes by Backprop) with ``gluon``](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter18_variational-methods-and-uncertainty\u002Fbayes-by-backprop-gluon.ipynb)\n    * [Weight uncertainty (Bayes by Backprop) for Recurrent Neural Networks](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter18_variational-methods-and-uncertainty\u002Fbayes-by-backprop-rnn.ipynb)\n    * ***Roadmap*** Variational autoencoders\n\n* **Chapter 19:** Graph Neural Networks\n    * [Deep Learning on Graphs with Message Passing Neural Networks](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter19_graph-neural-networks\u002FGraph-Neural-Networks.ipynb)\n\n### Appendices\n* Appendix 1: Cheatsheets\n    * ***Roadmap*** ``gluon``\n    * ***Roadmap*** [PyTorch to MXNet](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fcheatsheets\u002Fpytorch_gluon.md) (work in progress)\n    * ***Roadmap*** Tensorflow to MXNet\n    * ***Roadmap*** Keras to MXNet\n    * ***Roadmap*** Math to MXNet\n\n\n## Choose your own adventure\nWe've designed these tutorials so that you can traverse the curriculum in more than one way.\n* Anarchist - Choose whatever you want to read, whenever you want to read it.\n* Imperialist - Proceed through all tutorials in order. 
In this fashion you will be exposed to each model first from scratch, writing all of the code yourself but for the basic linear algebra primitives and automatic differentiation.\n* Capitalist - If you don't care how things work (or already know) and just want to see working code in ``gluon``, you can skip (*from scratch!*) tutorials and go straight to the production-like code using the high-level ``gluon`` front end.\n\n## Authors\nThis evolving creature is a collaborative effort (see contributors tab). The lead writers, assimilators, and coders include:\n* Zachary C. Lipton ([@zackchase](https:\u002F\u002Fgithub.com\u002Fzackchase))\n* Mu Li ([@mli](https:\u002F\u002Fgithub.com\u002Fmli))\n* Alex Smola ([@smolix](https:\u002F\u002Fgithub.com\u002Fsmolix))\n* Sheng Zha ([@szha](https:\u002F\u002Fgithub.com\u002Fszha))\n* Aston Zhang ([@astonzhang](https:\u002F\u002Fgithub.com\u002Fastonzhang))\n* Joshua Z. Zhang ([@zhreshold](https:\u002F\u002Fgithub.com\u002Fzhreshold))\n* Eric Junyuan Xie ([@piiswrong](https:\u002F\u002Fgithub.com\u002Fpiiswrong))\n* Kamyar Azizzadenesheli ([@kazizzad](https:\u002F\u002Fgithub.com\u002Fkazizzad))\n* Jean Kossaifi ([@JeanKossaifi](https:\u002F\u002Fgithub.com\u002FJeanKossaifi))\n* Stephan Rabanser ([@steverab](https:\u002F\u002Fgithub.com\u002Fsteverab))\n\n## Inspiration\nIn creating these tutorials, we've drawn inspiration from some of the resources that allowed us\nto learn deep \u002F machine learning with other libraries in the past. 
These include:\n\n* [Soumith Chintala's *Deep Learning with PyTorch: A 60 Minute Blitz*](http:\u002F\u002Fpytorch.org\u002Ftutorials\u002Fbeginner\u002Fdeep_learning_60min_blitz.html)\n* [Alec Radford's *Bare-bones intro to Theano*](https:\u002F\u002Fgithub.com\u002FNewmu\u002FTheano-Tutorials)\n* [Video of Alec's intro to deep learning with Theano](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=S75EdAcXHKk)\n* [Chris Bishop's *Pattern Recognition and Machine Learning*](https:\u002F\u002Fwww.amazon.com\u002FPattern-Recognition-Learning-Information-Statistics\u002Fdp\u002F0387310738)\n\n## Contribute\n* Already, in the short time this project has been off the ground, we've gotten some helpful PRs from the community with pedagogical suggestions, typo corrections, and other useful fixes. If you're inclined, please contribute!\n","# 深度学习——直截了当的指南（*已弃用* 请参阅 d2l.ai）\n## 本内容已迁移至《动手学深度学习》一书，该书可在 https:\u002F\u002Fd2l.ai\u002F 免费获取。\n\n\n\n## 摘要\n此仓库包含一系列逐步递进的笔记本，旨在教授深度学习、MXNet 以及 ``gluon`` 接口。我们的目标是充分利用 Jupyter 笔记本的优势，将文字、图表、公式和代码整合在同一页面中呈现。如果成功，这将形成一种兼具书籍、课程材料、现场教程辅助工具以及可合法引用实用代码资源的综合性学习材料。据我们所知，目前尚不存在同时满足以下两个条件的资料：(1) 全面覆盖现代深度学习的核心概念；(2) 将引人入胜的教材内容与可运行代码无缝结合。我们将在项目结束时揭晓这一空白是否存在合理的理由。\n\n本书的另一大特色在于其编写方式。我们完全公开地开发这套资源，并将其完整版免费提供给公众。尽管本书由几位主要作者负责设定基调和内容框架，但我们欢迎社区贡献，期待与领域专家及社区成员共同撰写章节乃至整部分内容。迄今为止，我们已收到从错别字修正到完整可运行示例的各种贡献。\n\n## 基于 Apache MXNet 的实现\n在本书的整个过程中，我们以 MXNet 为核心工具，讲解深度学习的基础概念、高级主题以及丰富的应用场景。MXNet 因其卓越的速度表现而广泛应用于生产环境。如今，借助 MXNet 的全新命令式接口 ``gluon``（处于 Alpha 阶段），研究人员可以更轻松地开展 MXNet 相关工作。\n\n## 依赖项\n要运行这些笔记本，您需要从源码编译 MXNet。幸运的是，按照 [这些说明](http:\u002F\u002Fmxnet.io\u002Fget_started\u002Finstall.html)，这一过程非常简单（尤其是在 Linux 系统上）。此外，您还需要 [安装 Jupyter](http:\u002F\u002Fjupyter.readthedocs.io\u002Fen\u002Flatest\u002Finstall.html)，并使用 Python 3（因为现在是 2017 年）。\n\n## 幻灯片\n\n作者及其他人士正越来越多地基于本书内容进行演讲。其中一些演示文稿（例如长达 6 小时的 KDD 2017 演讲）篇幅巨大，因此我们将其单独收集在 [这个仓库](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-slides) 中。如果您希望分享基于本书的教程或课程材料，欢迎向该仓库贡献内容。\n\n## 
翻译\n在编写本书的过程中，规模较大的稳定章节正在同步翻译成中文，既提供 [网页版本](http:\u002F\u002Fzh.gluon.ai\u002F) ，也通过 [GitHub 源代码](http:\u002F\u002Fzh.gluon.ai\u002F) 提供。\n\n## 目录\n\n### 第一部分：深度学习基础\n* **第1章：速成课程**\n    * [前言](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter01_crashcourse\u002Fpreface.ipynb)\n    * [导论](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter01_crashcourse\u002Fintroduction.ipynb)\n    * [使用 NDArray 操作数据](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter01_crashcourse\u002Fndarray.ipynb)\n    * [线性代数](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter01_crashcourse\u002Flinear-algebra.ipynb)\n    * [概率与统计](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter01_crashcourse\u002Fprobability.ipynb)\n    * [通过 ``autograd`` 进行自动微分](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter01_crashcourse\u002Fautograd.ipynb)\n\n* **第2章：监督学习入门**\n    * [线性回归 *(从零开始)*](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter02_supervised-learning\u002Flinear-regression-scratch.ipynb)\n    * [线性回归 *(使用 ``gluon``)*](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter02_supervised-learning\u002Flinear-regression-gluon.ipynb)\n    * [二分类问题的逻辑回归 *(``gluon`` 结合自定义损失函数)*](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter02_supervised-learning\u002Flogistic-regression-gluon.ipynb)\n    * [多分类逻辑回归 *(从零开始)*](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter02_supervised-learning\u002Fsoftmax-regression-scratch.ipynb)\n    * [多分类逻辑回归 *(使用 
``gluon``)*](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter02_supervised-learning\u002Fsoftmax-regression-gluon.ipynb)\n    * [过拟合与正则化 *(从零开始)*](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter02_supervised-learning\u002Fregularization-scratch.ipynb)\n    * [过拟合与正则化 *(使用 ``gluon``)*](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter02_supervised-learning\u002Fregularization-gluon.ipynb)\n    * [感知机与 SGD 入门](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter02_supervised-learning\u002Fperceptron.ipynb)\n    * [学习环境](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter02_supervised-learning\u002Fenvironment.ipynb)\n\n* **第3章：深度神经网络 (DNNs)**\n    * [多层感知机 *(从零开始)*](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter03_deep-neural-networks\u002Fmlp-scratch.ipynb)\n    * [多层感知机 *(使用 ``gluon``)*](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter03_deep-neural-networks\u002Fmlp-gluon.ipynb)\n    * [Dropout 正则化 *(从零开始)*](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter03_deep-neural-networks\u002Fmlp-dropout-scratch.ipynb)\n    * [Dropout 正则化 *(使用 ``gluon``)*](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter03_deep-neural-networks\u002Fmlp-dropout-gluon.ipynb)\n    * [``gluon.Block`` 和 ``gluon.nn.Sequential()`` 简介](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter03_deep-neural-networks\u002Fplumbing.ipynb)\n    * [使用 ``gluon.Block`` 
编写自定义层](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter03_deep-neural-networks\u002Fcustom-layer.ipynb)\n    * [序列化：保存和加载模型](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter03_deep-neural-networks\u002Fserialization.ipynb)\n    * 高级数据输入输出\n    * 调试您的神经网络\n\n* **第4章**：卷积神经网络（CNN）\n     * [卷积神经网络 *(从头实现)*](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter04_convolutional-neural-networks\u002Fcnn-scratch.ipynb)\n     * [卷积神经网络 *(使用``gluon``)*](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter04_convolutional-neural-networks\u002Fcnn-gluon.ipynb)\n     * [深度 CNN 简介（AlexNet）](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter04_convolutional-neural-networks\u002Fdeep-cnns-alexnet.ipynb)\n     * [超深网络与重复模块（VGG 网络）](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter04_convolutional-neural-networks\u002Fvery-deep-nets-vgg.ipynb)\n     * [批量归一化 *(从头实现)*](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter04_convolutional-neural-networks\u002Fcnn-batch-norm-scratch.ipynb)\n     * [批量归一化 *(使用``gluon``)*](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter04_convolutional-neural-networks\u002Fcnn-batch-norm-gluon.ipynb)\n\n* **第5章**：循环神经网络（RNN）\n    * [简单 RNN *(从头实现)*](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter05_recurrent-neural-networks\u002Fsimple-rnn.ipynb)\n    * [LSTM RNN *(从头实现)*](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter05_recurrent-neural-networks\u002Flstm-scratch.ipynb)\n    * [GRU 
*(从头实现)*](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter05_recurrent-neural-networks\u002Fgru-scratch.ipynb)\n    * [RNN *(使用``gluon``)*](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter05_recurrent-neural-networks\u002Frnns-gluon.ipynb)\n    * ***路线图*** 循环网络中的 Dropout\n    * ***路线图*** Zoneout 正则化\n\n\n* **第6章**：优化\n    * [优化简介](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter06_optimization\u002Foptimization-intro.ipynb)\n    * [梯度下降与随机梯度下降（从头实现）](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter06_optimization\u002Fgd-sgd-scratch.ipynb)\n    * [使用``gluon``的梯度下降和随机梯度下降](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter06_optimization\u002Fgd-sgd-gluon.ipynb)\n    * [动量法（从头实现）](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter06_optimization\u002Fmomentum-scratch.ipynb)\n    * [使用``gluon``的动量法](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter06_optimization\u002Fmomentum-gluon.ipynb)\n    * [Adagrad（从头实现）](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter06_optimization\u002Fadagrad-scratch.ipynb)\n    * [使用``gluon``的 Adagrad](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter06_optimization\u002Fadagrad-gluon.ipynb)\n    * [RMSprop（从头实现）](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter06_optimization\u002Frmsprop-scratch.ipynb)\n    * [使用``gluon``的 
RMSprop](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter06_optimization\u002Frmsprop-gluon.ipynb)\n    * [Adadelta（从头实现）](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter06_optimization\u002Fadadelta-scratch.ipynb)\n    * [使用``gluon``的 Adadelta](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter06_optimization\u002Fadadelta-gluon.ipynb)\n    * [Adam（从头实现）](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter06_optimization\u002Fadam-scratch.ipynb)\n    * [使用``gluon``的 Adam](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter06_optimization\u002Fadam-gluon.ipynb)\n\n* **第7章**：分布式与高性能学习\n    * [快速灵活：结合命令式与符号式网络的 HybridBlocks](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter07_distributed-learning\u002Fhybridize.ipynb)\n    * [使用多块 GPU 训练（从头实现）](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter07_distributed-learning\u002Fmultiple-gpus-scratch.ipynb)\n    * [使用多块 GPU 训练（使用``gluon``）](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter07_distributed-learning\u002Fmultiple-gpus-gluon.ipynb)\n    * [使用多台机器训练](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter07_distributed-learning\u002Ftraining-with-multiple-machines.ipynb)\n    * ***路线图*** 异步 SGD\n    * ***路线图*** 弹性 SGD\n\n### 第二部分：应用\n* **第8章**：计算机视觉（CV）\n    * ***路线图*** 网络之网络（Inception等）\n    * ***路线图*** 残差网络\n    * [目标检测](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter08_computer-vision\u002Fobject-detection.ipynb)\n    * ***路线图*** 全卷积网络\n    * 
***路线图*** 孪生网络（Siamese Networks）\n    * ***路线图*** 嵌入（成对损失和三元组损失）\n    * ***路线图*** Inceptionism \u002F 可视化特征检测器\n    * ***路线图*** 风格迁移\n    * [视觉问答](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter08_computer-vision\u002Fvisual-question-answer.ipynb)\n    * [微调](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter08_computer-vision\u002Ffine-tuning.ipynb)\n\n* **第9章**：自然语言处理（NLP）\n    * ***路线图*** 词嵌入（Word2Vec）\n    * ***路线图*** 句子嵌入（SkipThought）\n    * ***路线图*** 情感分析\n    * ***路线图*** 序列到序列学习（机器翻译）\n    * ***路线图*** 带注意力机制的序列转换（机器翻译）\n    * ***路线图*** 命名实体识别\n    * ***路线图*** 图像字幕生成\n    * [用于语义相关性的树LSTM](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter09_natural-language-processing\u002Ftree-lstm.ipynb)\n\n* **第10章**：音频处理\n    * ***路线图*** 自动语音识别简介\n    * ***路线图*** 连接主义时序分类（CTC）用于未对齐序列\n    * ***路线图*** 结合静态与序列数据\n\n* **第11章**：推荐系统\n    * [推荐系统简介](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter11_recommender-systems\u002Fintro-recommender-systems.ipynb)\n    * ***路线图*** 隐因子模型\n    * ***路线图*** 深度隐因子模型\n    * ***路线图*** 双线性模型\n    * ***路线图*** 从隐式反馈中学习\n\n* **第12章**：时间序列\n    * [预测简介 *(使用``gluon``)*](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter12_time-series\u002Fintro-forecasting-gluon.ipynb)\n    * [广义线性模型\u002FMLP用于预测 *(使用``gluon``)*](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter12_time-series\u002Fintro-forecasting-2-gluon.ipynb)\n    * ***路线图*** 用于预测的因子模型\n    * ***路线图*** 用于预测的循环神经网络\n    * [线性动态系统 (*从头实现*) ](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter12_time-series\u002Flds-scratch.ipynb)\n    * [指数平滑与创新状态空间建模 
(*从头实现*)](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter12_time-series\u002Fissm-scratch.ipynb)\n    * ***路线图*** 用于预测的高斯过程\n    * ***路线图*** 贝叶斯时间序列模型\n    * ***路线图*** 缺失数据建模\n    * ***路线图*** 结合静态与序列数据\n\n### 第三部分：高级方法\n* **第13章**：无监督学习\n   * ***路线图*** 自编码器简介\n   * ***路线图*** 卷积自编码器（引入上采样）\n   * ***路线图*** 去噪自编码器\n   * [变分自编码器](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter13_unsupervised-learning\u002Fvae-gluon.ipynb)\n   * ***路线图*** 聚类\n\n* **第14章**：生成对抗网络（GANs）\n    * [GANs简介](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter14_generative-adversarial-networks\u002Fgan-intro.ipynb)\n    * [深度卷积GANs（DCGANs）](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter14_generative-adversarial-networks\u002Fdcgan.ipynb)\n    * ***路线图*** Wasserstein-GANs\n    * ***路线图*** 基于能量的GANs\n    * ***路线图*** 条件GANs\n    * [图像转换GANs（Pix2Pix）](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter14_generative-adversarial-networks\u002Fpixel2pixel.ipynb)\n    * ***路线图*** 从合成和无监督图像中学习\n\n* **第15章**：对抗学习\n    * ***路线图*** 双样本检验\n    * ***路线图*** 寻找对抗样本\n    * ***路线图*** 对抗训练\n\n* **第16章**：张量方法\n    * [张量方法简介](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter16_tensor_methods\u002Ftensor_basics.ipynb)\n    * ***路线图*** 张量分解\n    * ***路线图*** 张量化神经网络\n\n* **第17章**：深度强化学习（DRL）\n    * ***路线图*** 强化学习简介\n    * ***路线图*** 深度上下文多臂赌博机\n    * [深度Q网络（DQN）](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter17_deep-reinforcement-learning\u002FDQN.ipynb)\n    * 
[双DQN](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter17_deep-reinforcement-learning\u002FDDQN.ipynb)\n    * ***路线图*** 策略梯度\n    * ***路线图*** 演员-评论家梯度\n\n* **第18章**：变分方法与不确定性\n    * ***路线图*** 基于Dropout的不确定性估计（BALD）\n    * [权重不确定性（反向传播贝叶斯）从零开始](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter18_variational-methods-and-uncertainty\u002Fbayes-by-backprop.ipynb)\n    * [权重不确定性（反向传播贝叶斯）使用``gluon``](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter18_variational-methods-and-uncertainty\u002Fbayes-by-backprop-gluon.ipynb)\n    * [权重不确定性（反向传播贝叶斯）用于循环神经网络](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter18_variational-methods-and-uncertainty\u002Fbayes-by-backprop-rnn.ipynb)\n    * ***路线图*** 变分自编码器\n\n* **第19章**：图神经网络\n    * [基于消息传递神经网络的图上的深度学习](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fchapter19_graph-neural-networks\u002FGraph-Neural-Networks.ipynb)\n\n### 附录\n* 附录1：备忘单\n    * ***路线图*** ``gluon``\n    * ***路线图*** [PyTorch转MXNet](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fcheatsheets\u002Fpytorch_gluon.md)（正在进行中）\n    * ***路线图*** TensorFlow转MXNet\n    * ***路线图*** Keras转MXNet\n    * ***路线图*** 数学转MXNet\n\n## 自选冒险\n我们设计这些教程时，旨在让您以多种方式学习整个课程内容。\n* 无政府主义者——想读什么就读什么，什么时候读都行。\n* 帝国主义者——按顺序完成所有教程。这样您将首先从零开始接触每个模型，自己编写所有代码，仅使用基础的线性代数原语和自动微分功能。\n* 资本主义者——如果您不关心底层原理（或者已经了解），只想快速查看使用 Gluon 的可运行代码，那么您可以跳过“从零开始！”的教程，直接进入基于 Gluon 高级接口的生产级代码部分。\n\n## 作者\n这部不断发展的作品是一项协作成果（请参阅“贡献者”标签页）。主要撰写、整理和编码人员包括：\n* 扎卡里·C·利普顿（[@zackchase](https:\u002F\u002Fgithub.com\u002Fzackchase)）\n* 李沐（[@mli](https:\u002F\u002Fgithub.com\u002Fmli)）\n* 亚历克斯·斯莫拉（[@smolix](https:\u002F\u002Fgithub.com\u002Fsmolix)）\n* 盛·扎（[@szha](https:\u002F\u002Fgithub.com\u002Fszha)）\n* 
阿斯顿·张（[@astonzhang](https:\u002F\u002Fgithub.com\u002Fastonzhang)）\n* 乔舒亚·Z·张（[@zhreshold](https:\u002F\u002Fgithub.com\u002Fzhreshold)）\n* 埃里克·俊元·谢（[@piiswrong](https:\u002F\u002Fgithub.com\u002Fpiiswrong)）\n* 卡米亚尔·阿齐扎德内舍利（[@kazizzad](https:\u002F\u002Fgithub.com\u002Fkazizzad)）\n* 让·科萨伊菲（[@JeanKossaifi](https:\u002F\u002Fgithub.com\u002FJeanKossaifi)）\n* 斯特凡·拉班瑟（[@steverab](https:\u002F\u002Fgithub.com\u002Fsteverab)）\n\n## 灵感来源\n在编写这些教程的过程中，我们从过去帮助我们使用其他库学习深度学习或机器学习的一些资源中汲取了灵感。其中包括：\n\n* 苏米思·钦塔拉的《用 PyTorch 进行深度学习：60 分钟速成》\n* 阿莱克·拉德福德的《Theano 入门基础教程》\n* 阿莱克关于使用 Theano 进行深度学习的介绍视频\n* 克里斯·毕晓普的《模式识别与机器学习》\n\n## 参与贡献\n自项目启动以来的短时间内，社区已为我们提供了许多有益的拉取请求，其中包含教学建议、错别字修正以及其他实用改进。如果您愿意，欢迎参与贡献！","# MXNet The Straight Dope 快速上手指南\n\n> **重要提示**：本仓库（`mxnet-the-straight-dope`）已停止维护并标记为废弃。所有内容已迁移至《动手学深度学习》（Dive into Deep Learning, D2L）官方项目。\n> *   **最新中文版地址**：[https:\u002F\u002Fzh.d2l.ai\u002F](https:\u002F\u002Fzh.d2l.ai\u002F)\n> *   **英文版地址**：[https:\u002F\u002Fd2l.ai\u002F](https:\u002F\u002Fd2l.ai\u002F)\n>\n> 以下指南基于原仓库内容整理，旨在帮助开发者快速复现经典教程或理解其运行环境。建议新用户直接访问上述最新地址获取基于 PyTorch、TensorFlow 和 MXNet 的多框架支持版本。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**：推荐 Linux（Ubuntu\u002FCentOS），Windows 和 macOS 也可运行但配置可能稍复杂。\n*   **Python 版本**：必须使用 **Python 3.x**（原项目明确不再支持 Python 2）。\n*   **核心依赖**：\n    *   **Apache MXNet**：需要包含 `gluon` 接口的版本。\n    *   **Jupyter Notebook**：用于运行交互式教程笔记。\n    *   **Git**：用于克隆代码仓库。\n\n## 安装步骤\n\n### 1. 安装 Jupyter Notebook\n如果您尚未安装 Jupyter，请使用 pip 进行安装：\n\n```bash\npip install jupyter\n```\n\n### 2. 安装 Apache MXNet (含 Gluon)\n原项目建议从源码编译以获得最佳性能，但对于快速上手，推荐使用预编译包。\n\n**通用安装命令：**\n```bash\npip install mxnet gluonnlp\n```\n\n**国内加速方案（推荐）：**\n在中国大陆地区，建议使用清华源或阿里源加速安装：\n\n```bash\npip install mxnet gluonnlp -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n*注：如果您需要使用 GPU 版本，请将 `mxnet` 替换为 `mxnet-cu101`（根据实际 CUDA 版本调整，如 `mxnet-cu110`）。*\n\n### 3. 
获取教程代码\n克隆原始仓库以获取 Notebook 文件：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope.git\ncd mxnet-the-straight-dope\n```\n\n## 基本使用\n\n本项目由一系列 Jupyter Notebook 组成，涵盖了从基础数学到深度神经网络的全流程。\n\n### 启动交互式教程\n进入章节目录并启动 Jupyter Server。例如，要学习第一章“深度学习基础”中的 NDArray 操作：\n\n```bash\ncd chapter01_crashcourse\njupyter notebook ndarray.ipynb\n```\n\n浏览器将自动打开，您可以直接在单元格中运行代码、查看公式和修改参数。\n\n### 最小化代码示例\n以下是一个使用 MXNet Gluon 接口定义并运行简单线性回归模型的极简示例，展示了该教程的核心风格：\n\n```python\nimport mxnet as mx\nfrom mxnet import nd, autograd, gluon\n\n# 设置上下文 (CPU 或 GPU)\nctx = mx.cpu()\n\n# 1. 准备数据\nX = nd.random.normal(shape=(4, 2), ctx=ctx)\ny = nd.array([2, 4, 6, 8], ctx=ctx)\n\n# 2. 定义模型\nnet = gluon.nn.Dense(1)\nnet.initialize(ctx=ctx)\n\n# 3. 定义损失函数和优化器\nloss = gluon.loss.L2Loss()\ntrainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.03})\n\n# 4. 训练循环 (单步演示)\nwith autograd.record():\n    output = net(X)\n    l = loss(output, y)\nl.backward()\n# step() 的参数应为批量大小，用于对累积梯度做归一化\ntrainer.step(X.shape[0])\n\nprint(\"训练完成，模型参数已更新\")\n```\n\n### 学习路径建议\n按照原书结构，建议的学习顺序如下：\n1.  **Part 1: 基础** - 从 `chapter01` 的 NDArray 操作开始，逐步学习监督学习、多层感知机 (MLP)、卷积神经网络 (CNN) 和循环神经网络 (RNN)。\n2.  
**Part 2: 应用** - 进阶学习计算机视觉 (`chapter08`) 和自然语言处理等实际应用案例。\n\n请再次注意，如需学习最新技术栈和完整中文内容，请访问 **[zh.d2l.ai](https:\u002F\u002Fzh.d2l.ai\u002F)**。","某高校人工智能实验室的研究生团队正试图从零开始掌握深度学习核心算法，并复现经典论文中的模型。\n\n### 没有 mxnet-the-straight-dope 时\n- 理论学习与代码实践严重割裂，学生需先在教材中推导数学公式，再独自摸索如何将其转化为 MXNet 代码，极易出错。\n- 缺乏可运行的交互式示例，面对复杂的自动微分（autograd）或线性代数运算，只能依靠静态文档猜测实现细节。\n- 社区资源分散且质量参差不齐，寻找一个既涵盖基础概率统计又能直接用于生产环境的高效代码范例如同大海捞针。\n- 学习曲线陡峭，从理解概念到动手构建第一个线性回归模型往往需要数周时间，严重拖慢科研进度。\n\n### 使用 mxnet-the-straight-dope 后\n- 实现了“所见即所得”的学习体验，Jupyter Notebook 将数学推导、原理解析与可执行的 Gluon 代码无缝整合，边读边跑。\n- 提供从底层原理（Scratch）到高级接口（Gluon）的完整实现路径，学生能清晰对比手动推导与框架调用的差异，彻底吃透算法。\n- 内容覆盖从数据操作、概率统计到监督学习的全链路，且所有代码均经过生产级速度优化，可直接作为科研项目的启动模板。\n- 借助其中文版资源和开源协作模式，团队快速统一了技术栈，将新成员的上手时间从数周缩短至几天。\n\nmxnet-the-straight-dope 通过将交互式代码与深度理论完美融合，消除了深度学习学习中“懂原理却写不出代码”的核心障碍。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzackchase_mxnet-the-straight-dope_03e2d38e.png","zackchase","Zack Chase Lipton","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fzackchase_88259546.jpg","Assistant Professor of Machine Learning & Operations Research (CMU). 
","CMU, Amazon","Pittsburgh","zlipton@cs.ucsd.edu",null,"zacklipton.com","https:\u002F\u002Fgithub.com\u002Fzackchase",[86,90,94,97,101,104],{"name":87,"color":88,"percentage":89},"Jupyter Notebook","#DA5B0B",99.6,{"name":91,"color":92,"percentage":93},"Python","#3572A5",0.2,{"name":95,"color":96,"percentage":93},"Makefile","#427819",{"name":98,"color":99,"percentage":100},"Shell","#89e051",0,{"name":102,"color":103,"percentage":100},"JavaScript","#f1e05a",{"name":105,"color":106,"percentage":100},"CSS","#663399",2565,712,"2026-03-28T06:04:02","Apache-2.0",4,"Linux, macOS, Windows","未说明（项目依赖 MXNet，支持 CPU 及多 GPU 训练，具体显卡型号和 CUDA 版本需参考 MXNet 安装指南）","未说明",{"notes":116,"python":117,"dependencies":118},"该项目已弃用，内容已迁移至《动手学深度学习》(d2l.ai)。运行笔记本需要从源码编译安装 MXNet（在 Linux 上较容易），并安装 Jupyter。书中包含多 GPU 和多机器分布式训练的章节，但具体硬件需求取决于所选模型和数据集规模。","3.x (README 明确指出使用 Python 3)",[119,120,121,122,123],"mxnet (建议从源码编译)","gluon (MXNet 接口)","jupyter","numpy","matplotlib",[13],"2026-03-27T02:49:30.150509","2026-04-06T05:44:06.799903",[128,133,138,143,148],{"id":129,"question_zh":130,"answer_zh":131,"source_url":132},14926,"如何在 Gluon 中访问模型的原始参数（如权重和偏置）并将其转换为 NumPy 数组？","在较新版本的教程（特别是重构后的第 3 章）中，已经添加了对 Gluon 参数的直接访问支持。你可以通过遍历网络层来获取所有层的类型及其参数。虽然具体的 API 调用可能随版本更新，但通常可以通过 `net.collect_params()` 收集参数，并进一步提取为 MXNet NDArray 或转换为 NumPy 数组（使用 `.asnumpy()` 方法）。对于需要检查无参数层（如 ReLU）的情况，建议遍历网络的子模块（children）来识别层类型。","https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fissues\u002F16",{"id":134,"question_zh":135,"answer_zh":136,"source_url":137},14927,"运行 Fine-tuning 示例时遇到 'Cannot update self with other because they have different Parameters with the same name' 错误怎么办？","这是一个已知的参数命名冲突问题，通常发生在模型加载或上下文重置时。该问题已在 MXNet 主仓库中得到修复（参考 PR #7892）。解决方法是确保你使用的是包含该修复的最新版本 
MXNet，或者手动检查代码中是否存在重复初始化导致同名参数被多次注册的情况。如果问题依旧，可能需要清理缓存或重新定义网络结构以避免参数名冲突。","https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fissues\u002F238",{"id":139,"question_zh":140,"answer_zh":141,"source_url":142},14928,"运行 Pix2Pix 训练时出现 'The any container is empty' 或 'Gradient of Parameter ... has not been updated' 错误如何解决？","这通常是因为模型中某些参数在反向传播过程中未被使用，导致梯度未更新。如果在训练中确实只使用了部分参数，可以在调用 `trainer.step()` 时添加参数 `ignore_stale_grad=True` 来抑制警告并跳过这些 stale 梯度的更新。例如：`trainerG.step(batch_size, ignore_stale_grad=True)`。此外，请检查模型定义是否正确连接了所有层，确保没有孤立的模块导致计算图断裂。","https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fissues\u002F288",{"id":144,"question_zh":145,"answer_zh":146,"source_url":147},14929,"如何从 PyTorch 迁移到 MXNet Gluon？有没有对照参考表？","社区维护了一份详细的 PyTorch 到 Gluon 的速查表（Cheatsheet），涵盖了张量操作、GPU 复制、NumPy 转换等常见功能。例如，PyTorch 中的 `y = torch.FloatTensor(1).cuda()` 对应 Gluon 中的 `y = mxnet.nd.ones((1,), ctx=mx.gpu(0))`；PyTorch 的 `x = y.cpu().numpy()` 对应 Gluon 的 `x = y.asnumpy()`。你可以参考项目中的 `cheatsheets\u002Fpytorch_gluon.md` 文件获取完整对照表和高低层差异描述。","https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fissues\u002F301",{"id":149,"question_zh":150,"answer_zh":151,"source_url":152},14930,"gluon.mxnet.io 网站是否支持 HTTPS 安全访问？","是的，该网站已启用 HTTPS 支持，并配置了 HTTP 自动跳转到 HTTPS。用户现在可以通过 `https:\u002F\u002Fgluon.mxnet.io` 安全地访问文档和教程。这一改进是通过在 CloudFront 上配置正确的 SSL 证书实现的。","https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fissues\u002F466",[154,159,164],{"id":155,"version":156,"summary_zh":157,"released_at":158},81764,"v0.3","重写了监督学习的大部分内容。新增了关于优化、损失函数和激活函数的解释。补充了缺失的（二分类）逻辑回归章节。","2017-11-16T22:24:44",{"id":160,"version":161,"summary_zh":162,"released_at":163},81765,"v0.2","本次发布标志着各章节均已至少经过一次扎实的初步编辑审阅与重写。绪论以及概率部分的大部分内容都进行了重写；目标检测和优化章节现在也新增了绪论。自上一版发布以来，我们还增加了大量内容，包括DQN、GAN、优化方法等的初稿。","2017-09-12T23:34:05",{"id":165,"version":166,"summary_zh":167,"released_at":168},81766,"v0.1","一本关于深度学习的书籍初稿草稿，内容涵盖理论、应用以及 MXNet 
框架。","2017-08-08T04:18:51"]