[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-dsgiitr--d2l-pytorch":3,"tool-dsgiitr--d2l-pytorch":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",156804,2,"2026-04-15T11:34:33",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":67,"readme_en":68,"readme_zh":69,"quickstart_zh":70,"use_case_zh":71,"hero_image_url":72,"owner_login":73,"owner_name":74,"owner_avatar_url":75,"owner_bio":76,"owner_company":77,"owner_location":77,"owner_email":78,"owner_twitter":79,"owner_website":80,"owner_url":81,"languages":82,"stars":95,"forks":96,"last_commit_at":97,"license":98,"difficulty_score":32,"env_os":99,"env_gpu":100,"env_ram":99,"env_deps":101,"category_tags":108,"github_topics":110,"view_count":32,"oss_zip_url":77,"oss_zip_packed_at":77,"status":17,"created_at":121,"updated_at":122,"faqs":123,"releases":159},7792,"dsgiitr\u002Fd2l-pytorch","d2l-pytorch","This project reproduces the book Dive Into Deep Learning (https:\u002F\u002Fd2l.ai\u002F), adapting the code from MXNet into PyTorch.","d2l-pytorch 是经典深度学习教材《动手学深度学习》（Dive Into Deep Learning）的 PyTorch 代码实现版本。该项目将原书基于 MXNet 框架的代码示例完整迁移至目前更主流的 PyTorch 框架，涵盖了从环境安装、数学基础、线性神经网络到多层感知机等核心章节的可运行代码笔记。\n\n它主要解决了学习者在使用这本优质开源教材时面临的框架适配痛点。由于原版代码依赖 MXNet，而许多开发者和研究者更习惯或必须使用 PyTorch 生态，d2l-pytorch 填补了这一空白，让用户无需手动转换代码即可直接利用 PyTorch 
进行复现和实验，极大地降低了学习门槛和时间成本。\n\n这款资源非常适合希望系统学习深度学习的开发者、高校学生以及研究人员。无论是想要从零开始掌握神经网络原理的初学者，还是需要在 PyTorch 环境下快速验证算法的研究者，都能从中获益。其独特的技术亮点在于保留了原书“从零实现”与“简洁实现”并行的教学特色，既帮助读者深入理解底层数学逻辑，又展示了工业级的高效调用方式，是连接理论与实践的桥梁。需要注意的是，目前官方已推出完整的 PyTorch 移植版，建议用户结合最新官方资源共同参考学习。","d2l-pytorch 是经典深度学习教材《动手学深度学习》（Dive Into Deep Learning）的 PyTorch 代码实现版本。该项目将原书基于 MXNet 框架的代码示例完整迁移至目前更主流的 PyTorch 框架，涵盖了从环境安装、数学基础、线性神经网络到多层感知机等核心章节的可运行代码笔记。\n\n它主要解决了学习者在使用这本优质开源教材时面临的框架适配痛点。由于原版代码依赖 MXNet，而许多开发者和研究者更习惯或必须使用 PyTorch 生态，d2l-pytorch 填补了这一空白，让用户无需手动转换代码即可直接利用 PyTorch 进行复现和实验，极大地降低了学习门槛和时间成本。\n\n这款资源非常适合希望系统学习深度学习的开发者、高校学生以及研究人员。无论是想要从零开始掌握神经网络原理的初学者，还是需要在 PyTorch 环境下快速验证算法的研究者，都能从中获益。其独特的技术亮点在于保留了原书“从零实现”与“简洁实现”并行的教学特色，既帮助读者深入理解底层数学逻辑，又展示了工业级的高效调用方式，是连接理论与实践的桥梁。需要注意的是，目前官方已推出完整的 PyTorch 移植版，建议用户结合最新官方资源共同参考学习。","\u003Cp align=\"center\">\n  \u003Cimg width=\"60%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdsgiitr_d2l-pytorch_readme_d55bb07818f4.png\" \u002F>\n\u003C\u002Fp>\n\n-----------------------------------------------------------------------------------------------------------\n\n**UPDATE: Please see the [original repo](https:\u002F\u002Fgithub.com\u002Fd2l-ai\u002Fd2l-en) for the complete PyTorch port. We no longer maintain this repo.**\n\nThis project is adapted from the original [Dive Into Deep Learning](https:\u002F\u002Fd2l.ai) book by Aston Zhang, Zachary C. Lipton, Mu Li, Alex J. Smola and all the community contributors. GitHub of the original book: [https:\u002F\u002Fgithub.com\u002Fd2l-ai\u002Fd2l-en](https:\u002F\u002Fgithub.com\u002Fd2l-ai\u002Fd2l-en). We have made an effort to modify the book and convert the MXNet code snippets into PyTorch.\n\nNote: Some ipynb notebooks may not be rendered perfectly on GitHub. 
We suggest `cloning` the repo or using [nbviewer](https:\u002F\u002Fnbviewer.jupyter.org\u002F) to view the notebooks.\n\n\n## Chapters\n\n  * **Ch02 Installation**\n    * [Installation](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh02_Installation\u002FINSTALL.md)\n\n  * **Ch03 Introduction**\n    * [Introduction](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh03_Introduction\u002FIntroduction.ipynb)\n\n  * **Ch04 The Preliminaries: A Crashcourse**\n    * 4.1 [Data Manipulation](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh04_The_Preliminaries_A_Crashcourse\u002FData_Manipulation.ipynb)\n    * 4.2 [Linear Algebra](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh04_The_Preliminaries_A_Crashcourse\u002FLinear_Algebra.ipynb)\n    * 4.3 [Automatic Differentiation](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh04_The_Preliminaries_A_Crashcourse\u002FAutomatic_Differentiation.ipynb)\n    * 4.4 [Probability and Statistics](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh04_The_Preliminaries_A_Crashcourse\u002FProbability_and_Statistics.ipynb)\n    * 4.5 [Naive Bayes Classification](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh04_The_Preliminaries_A_Crashcourse\u002FNaive_Bayes_Classification.ipynb)\n    * 4.6 [Documentation](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh04_The_Preliminaries_A_Crashcourse\u002FDocumentation.ipynb)\n    \n  * **Ch05 Linear Neural Networks**\n    * 5.1 [Linear Regression](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh05_Linear_Neural_Networks\u002FLinear_Regression.ipynb)\n    * 5.2 [Linear Regression Implementation from 
Scratch](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh05_Linear_Neural_Networks\u002FLinear_Regression_Implementation_from_Scratch.ipynb)\n    * 5.3 [Concise Implementation of Linear Regression](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh05_Linear_Neural_Networks\u002FConcise_Implementation_of_Linear_Regression.ipynb)\n    * 5.4 [Softmax Regression](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh05_Linear_Neural_Networks\u002FSoftmax_Regression.ipynb)\n    * 5.5 [Image Classification Data (Fashion-MNIST)](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh05_Linear_Neural_Networks\u002FImage_Classification_Data(Fashion-MNIST).ipynb)\n    * 5.6 [Implementation of Softmax Regression from Scratch](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh05_Linear_Neural_Networks\u002FImplementation_of_Softmax_Regression_from_Scratch.ipynb)\n    * 5.7 [Concise Implementation of Softmax Regression](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh05_Linear_Neural_Networks\u002FConcise_Implementation_of_Softmax_Regression.ipynb)\n\n  * **Ch06 Multilayer Perceptrons**\n    * 6.1 [Multilayer Perceptron](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh06_Multilayer_Perceptrons\u002FMultilayer_Perceptron.ipynb)\n    * 6.2 [Implementation of Multilayer Perceptron from Scratch](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh06_Multilayer_Perceptrons\u002FImplementation_of_Multilayer_Perceptron_from_Scratch.ipynb)\n    * 6.3 [Concise Implementation of Multilayer Perceptron](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh06_Multilayer_Perceptrons\u002FConcise_Implementation_of_Multilayer_Perceptron.ipynb)\n    * 6.4 [Model 
Selection Underfitting and Overfitting](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh06_Multilayer_Perceptrons\u002FModel_Selection_Underfitting_and_Overfitting.ipynb)\n    * 6.5 [Weight Decay](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh06_Multilayer_Perceptrons\u002FWeight_Decay.ipynb)\n    * 6.6 [Dropout](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh06_Multilayer_Perceptrons\u002FDropout.ipynb)\n    * 6.7 [Forward Propagation Backward Propagation and Computational Graphs](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh06_Multilayer_Perceptrons\u002FForward_Propagation_Backward_Propagation_and_Computational_Graphs.ipynb)\n    * 6.8 [Numerical Stability and Initialization](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh06_Multilayer_Perceptrons\u002FNumerical_Stability_and_Initialization.ipynb)\n    * 6.9 [Considering the Environment](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh06_Multilayer_Perceptrons\u002FConsidering_The_Environment.ipynb)\n    * 6.10 [Predicting House Prices on Kaggle](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh06_Multilayer_Perceptrons\u002FPredicting_House_Prices_on_Kaggle.ipynb)\n\n  * **Ch07 Deep Learning Computation**\n    * 7.1 [Layers and Blocks](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh07_Deep_Learning_Computation\u002FLayers_and_Blocks.ipynb)\n    * 7.2 [Parameter Management](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh07_Deep_Learning_Computation\u002FParameter_Management.ipynb)\n    * 7.3 [Deferred 
Initialization](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh07_Deep_Learning_Computation\u002FDeferred_Initialization.ipynb)\n    * 7.4 [Custom Layers](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh07_Deep_Learning_Computation\u002FCustom_Layers.ipynb)\n    * 7.5 [File I\u002FO](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh07_Deep_Learning_Computation\u002FFile_I_O.ipynb)\n    * 7.6 [GPUs](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh07_Deep_Learning_Computation\u002FGPUs.ipynb)\n\n  * **Ch08 Convolutional Neural Networks**\n    * 8.1 [From Dense Layers to Convolutions](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh08_Convolutional_Neural_Networks\u002FFrom_Dense_Layers_to_Convolutions.ipynb)\n    * 8.2 [Convolutions for Images](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh08_Convolutional_Neural_Networks\u002FConvolutions_For_Images.ipynb)\n    * 8.3 [Padding and Stride](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh08_Convolutional_Neural_Networks\u002FPadding_and_Stride.ipynb)\n    * 8.4 [Multiple Input and Output Channels](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh08_Convolutional_Neural_Networks\u002FMultiple_Input_and_Output_Channels.ipynb)\n    * 8.5 [Pooling](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh08_Convolutional_Neural_Networks\u002FPooling.ipynb)\n    * 8.6 [Convolutional Neural Networks (LeNet)](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh08_Convolutional_Neural_Networks\u002FConvolutional_Neural_Networks(LeNet).ipynb)\n\n  * **Ch09 Modern Convolutional Networks**\n    * 9.1 [Deep Convolutional Neural Networks 
(AlexNet)](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh09_Modern_Convolutional_Networks\u002FAlexNet.ipynb) \n    * 9.2 [Networks Using Blocks (VGG)](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh09_Modern_Convolutional_Networks\u002FVGG.ipynb)\n    * 9.3 [Network in Network (NiN)](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh09_Modern_Convolutional_Networks\u002FNetwork_in_Network(NiN).ipynb) \n    * 9.4 [Networks with Parallel Concatenations (GoogLeNet)](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh09_Modern_Convolutional_Networks\u002FNetworks_with_Parallel_Concatenations_(GoogLeNet).ipynb) \n    * 9.5 [Batch Normalization](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh09_Modern_Convolutional_Networks\u002FBatch_Normalization.ipynb)\n    * 9.6 [Residual Networks (ResNet)](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh09_Modern_Convolutional_Networks\u002FResidual_Networks_(ResNet).ipynb) \n    * 9.7 [Densely Connected Networks (DenseNet)](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh09_Modern_Convolutional_Networks\u002FDensely_Connected_Networks_(DenseNet).ipynb) \n\n  * **Ch10 Recurrent Neural Networks**\n    * 10.1 [Sequence Models](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh10_Recurrent_Neural_Networks\u002FSequence_Models.ipynb)\n    * 10.2 [Language Models](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh10_Recurrent_Neural_Networks\u002FLanguage_Models.ipynb)\n    * 10.3 [Recurrent Neural Networks](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh10_Recurrent_Neural_Networks\u002FRecurrent_Neural_Networks.ipynb)\n    * 10.4 [Text 
Preprocessing](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh10_Recurrent_Neural_Networks\u002FText_Preprocessing.ipynb)\n    * 10.5 [Implementation of Recurrent Neural Networks from Scratch](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh10_Recurrent_Neural_Networks\u002FImplementation_of_Recurrent_Neural_Networks_from_Scratch.ipynb)\n    * 10.6 [Concise Implementation of Recurrent Neural Networks](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh10_Recurrent_Neural_Networks\u002FConcise_Implementation_of_Recurrent_Neural_Networks.ipynb)\n    * 10.7 [Backpropagation Through Time](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh10_Recurrent_Neural_Networks\u002FBackpropagation_Through_Time.ipynb)\n    * 10.8 [Gated Recurrent Units (GRU)](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh10_Recurrent_Neural_Networks\u002FGated_Recurrent_Units.ipynb)\n    * 10.9 [Long Short Term Memory (LSTM)](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh10_Recurrent_Neural_Networks\u002FLong_Short_Term_Memory.ipynb)\n    * 10.10 [Deep Recurrent Neural Networks](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh10_Recurrent_Neural_Networks\u002FDeep_Recurrent_Neural_Networks.ipynb)\n    * 10.11 Bidirectional Recurrent Neural Networks\n    * 10.12 [Machine Translation and DataSets](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh10_Recurrent_Neural_Networks\u002FMachine_Translation_and_Data_Sets.ipynb)\n    * 10.13 [Encoder-Decoder Architecture](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh10_Recurrent_Neural_Networks\u002FEncoder-Decoder_Architecture.ipynb) \n    * 10.14 [Sequence to 
Sequence](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh10_Recurrent_Neural_Networks\u002FSequence_to_Sequence.ipynb)\n    * 10.15 [Beam Search](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh10_Recurrent_Neural_Networks\u002FBeam_Search.ipynb)\n\n  * **Ch11 Attention Mechanism**\n    * 11.1 [Attention Mechanism](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh11_Attention_Mechanism\u002FAttention_Mechanism.ipynb)\n    * 11.2 Sequence to Sequence with Attention Mechanism\n    * 11.3 Transformer\n\n  * **Ch12 Optimization Algorithms**\n    * 12.1 [Optimization and Deep Learning](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh12_Optimization_Algorithms\u002FOptimization_And_Deep_Learning.ipynb)\n    * 12.2 [Convexity](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh12_Optimization_Algorithms\u002FConvexity.ipynb)\n    * 12.3 [Gradient Descent](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh12_Optimization_Algorithms\u002FGradient_Descent.ipynb)\n    * 12.4 [Stochastic Gradient Descent](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh12_Optimization_Algorithms\u002FStochastic_Gradient_Descent.ipynb)\n    * 12.5 [Mini-batch Stochastic Gradient Descent](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh12_Optimization_Algorithms\u002FMini-batch_Stochastic_Gradient_Descent.ipynb)\n    * 12.6 [Momentum](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh12_Optimization_Algorithms\u002FMomentum.ipynb)\n    * 12.7 Adagrad\n    * 12.8 [RMSProp](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh12_Optimization_Algorithms\u002FRMSProp.ipynb)\n    * 12.9 Adadelta\n    * 12.10 Adam\n  * 
**Ch14 Computer Vision**\n    * 14.1 Image Augmentation\n    * 14.2 Fine Tuning\n    * 14.3 [Object Detection and Bounding Boxes](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh14_Computer_Vision\u002FObject_Detection_and_Bounding_Boxes.ipynb)\n    * 14.4 [Anchor Boxes](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh14_Computer_Vision\u002FAnchor_Boxes.ipynb)\n    * 14.5 [Multiscale Object Detection](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh14_Computer_Vision\u002FMultiscale_Object_Detection.ipynb)\n    * 14.6 [Object Detection Data Set (Pikachu)](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh14_Computer_Vision\u002FObject_Detection_Data_Set.ipynb)\n    * 14.7 [Single Shot Multibox Detection (SSD)](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh14_Computer_Vision\u002FSingle_Shot_Multibox_Detection.ipynb)\n    * 14.8 Region-based CNNs (R-CNNs)\n    * 14.9 Semantic Segmentation and Data Sets\n    * 14.10 Transposed Convolution\n    * 14.11 Fully Convolutional Networks (FCN)\n    * 14.12 [Neural Style Transfer](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh14_Computer_Vision\u002FNeural_Style_Transfer.ipynb)\n    * 14.13 Image Classification (CIFAR-10) on Kaggle\n    * 14.14 Dog Breed Identification (ImageNet Dogs) on Kaggle\n\n## Contributing\n\n  * Please feel free to open a Pull Request to contribute a notebook in PyTorch for the rest of the chapters. Before starting out with the notebook, open an issue with the name of the notebook in order to contribute for the same. 
We will assign that issue to you (if no one has been assigned earlier).\n\n  * Strictly follow the naming conventions for the IPython Notebooks and the subsections.\n\n  * Also, if you think there's any section that requires more\u002Fbetter explanation, please use the issue tracker to \n    open an issue and let us know about the same. We'll get back as soon as possible.\n\n  * Find some code that needs improvement and submit a pull request.\n\n  * Find a reference that we missed and submit a pull request.\n\n  * Try not to submit huge pull requests since this makes them hard to understand and incorporate. \n    Better send several smaller ones.\n\n\n## Support \n\nIf you like this repo and find it useful, please consider (★) starring it, so that it can reach a broader audience.\n\n\n## References\n\n[1] Original Book [Dive Into Deep Learning](https:\u002F\u002Fd2l.ai) -> [GitHub Repo](https:\u002F\u002Fgithub.com\u002Fd2l-ai\u002Fd2l-en)\n\n[2] [Deep Learning - The Straight Dope](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope)\n\n[3] [PyTorch - MXNet Cheatsheet](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fcheatsheets\u002Fpytorch_gluon.md)\n\n\n## Cite\nIf you use this work or code for your research, please cite the original book with the following BibTeX entry.\n```\n@book{zhang2020dive,\n    title={Dive into Deep Learning},\n    author={Aston Zhang and Zachary C. Lipton and Mu Li and Alexander J. 
Smola},\n    note={\\url{https:\u002F\u002Fd2l.ai}},\n    year={2020}\n}\n```\n","\u003Cp align=\"center\">\n  \u003Cimg width=\"60%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdsgiitr_d2l-pytorch_readme_d55bb07818f4.png\" \u002F>\n\u003C\u002Fp>\n\n-----------------------------------------------------------------------------------------------------------\n\n**更新：完整 PyTorch 版本请参阅[原始仓库](https:\u002F\u002Fgithub.com\u002Fd2l-ai\u002Fd2l-en)。我们不再维护此仓库。**\n\n本项目改编自 Aston Zhang、Zachary C. Lipton、Mu Li、Alex J. Smola 及全体社区贡献者所著的原版《动手学深度学习》（Dive Into Deep Learning）。原书 GitHub 地址：[https:\u002F\u002Fgithub.com\u002Fd2l-ai\u002Fd2l-en](https:\u002F\u002Fgithub.com\u002Fd2l-ai\u002Fd2l-en)。我们致力于对本书进行修改，并将其中的 MXNet 代码片段转换为 PyTorch 格式。\n\n注意：部分 ipynb 笔记本在 GitHub 上可能无法完美渲染。建议您`克隆`该仓库，或使用 [nbviewer](https:\u002F\u002Fnbviewer.jupyter.org\u002F) 来查看这些笔记本。\n\n\n## 章节\n\n  * **Ch02 安装**\n    * [安装指南](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh02_Installation\u002FINSTALL.md)\n\n  * **Ch03 导论**\n    * [导论](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh03_Introduction\u002FIntroduction.ipynb)\n\n  * **Ch04 基础知识：速成课程**\n    * 4.1 [数据处理](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh04_The_Preliminaries_A_Crashcourse\u002FData_Manipulation.ipynb)\n    * 4.2 [线性代数](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh04_The_Preliminaries_A_Crashcourse\u002FLinear_Algebra.ipynb)\n    * 4.3 [自动微分](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh04_The_Preliminaries_A_Crashcourse\u002FAutomatic_Differentiation.ipynb)\n    * 4.4 [概率与统计](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh04_The_Preliminaries_A_Crashcourse\u002FProbability_and_Statistics.ipynb)\n    * 4.5 
[朴素贝叶斯分类](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh04_The_Preliminaries_A_Crashcourse\u002FNaive_Bayes_Classification.ipynb)\n    * 4.6 [文档](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh04_The_Preliminaries_A_Crashcourse\u002FDocumentation.ipynb)\n    \n  * **Ch05 线性神经网络**\n    * 5.1 [线性回归](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh05_Linear_Neural_Networks\u002FLinear_Regression.ipynb)\n    * 5.2 [从零开始实现线性回归](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh05_Linear_Neural_Networks\u002FLinear_Regression_Implementation_from_Scratch.ipynb)\n    * 5.3 [线性回归的简洁实现](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh05_Linear_Neural_Networks\u002FConcise_Implementation_of_Linear_Regression.ipynb)\n    * 5.4 [Softmax 回归](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh05_Linear_Neural_Networks\u002FSoftmax_Regression.ipynb)\n    * 5.5 [图像分类数据集（Fashion-MNIST）](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh05_Linear_Neural_Networks\u002FImage_Classification_Data(Fashion-MNIST).ipynb)\n    * 5.6 [从零开始实现 Softmax 回归](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh05_Linear_Neural_Networks\u002FImplementation_of_Softmax_Regression_from_Scratch.ipynb)\n    * 5.7 [Softmax 回归的简洁实现](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh05_Linear_Neural_Networks\u002FConcise_Implementation_of_Softmax_Regression.ipynb)\n\n  * **Ch06 多层感知机**\n    * 6.1 [多层感知机](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh06_Multilayer_Perceptrons\u002FMultilayer_Perceptron.ipynb)\n    * 6.2 
[从零开始实现多层感知机](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh06_Multilayer_Perceptrons\u002FImplementation_of_Multilayer_Perceptron_from_Scratch.ipynb)\n    * 6.3 [多层感知机的简洁实现](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh06_Multilayer_Perceptrons\u002FConcise_Implementation_of_Multilayer_Perceptron.ipynb)\n    * 6.4 [模型选择、欠拟合与过拟合](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh06_Multilayer_Perceptrons\u002FModel_Selection_Underfitting_and_Overfitting.ipynb)\n    * 6.5 [权重衰减](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh06_Multilayer_Perceptrons\u002FWeight_Decay.ipynb)\n    * 6.6 [Dropout](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh06_Multilayer_Perceptrons\u002FDropout.ipynb)\n    * 6.7 [前向传播、反向传播与计算图](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh06_Multilayer_Perceptrons\u002FForward_Propagation_Backward_Propagation_and_Computational_Graphs.ipynb)\n    * 6.8 [数值稳定性与参数初始化](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh06_Multilayer_Perceptrons\u002FNumerical_Stability_and_Initialization.ipynb)\n    * 6.9 [环境因素考量](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh06_Multilayer_Perceptrons\u002FConsidering_The_Environment.ipynb)\n    * 6.10 [在 Kaggle 上预测房价](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh06_Multilayer_Perceptrons\u002FPredicting_House_Prices_on_Kaggle.ipynb)\n\n  * **Ch07 深度学习计算**\n    * 7.1 [层与块](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh07_Deep_Learning_Computation\u002FLayers_and_Blocks.ipynb)\n    * 7.2 
[参数管理](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh07_Deep_Learning_Computation\u002FParameter_Management.ipynb)\n    * 7.3 [延迟初始化](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh07_Deep_Learning_Computation\u002FDeferred_Initialization.ipynb)\n    * 7.4 [自定义层](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh07_Deep_Learning_Computation\u002FCustom_Layers.ipynb)\n    * 7.5 [文件输入输出](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh07_Deep_Learning_Computation\u002FFile_I_O.ipynb)\n    * 7.6 [GPU](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh07_Deep_Learning_Computation\u002FGPUs.ipynb)\n\n  * **Ch08 卷积神经网络**\n    * 8.1 [从全连接层到卷积](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh08_Convolutional_Neural_Networks\u002FFrom_Dense_Layers_to_Convolutions.ipynb)\n    * 8.2 [用于图像的卷积](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh08_Convolutional_Neural_Networks\u002FConvolutions_For_Images.ipynb)\n    * 8.3 [填充与步幅](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh08_Convolutional_Neural_Networks\u002FPadding_and_Stride.ipynb)\n    * 8.4 [多输入多输出通道](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh08_Convolutional_Neural_Networks\u002FMultiple_Input_and_Output_Channels.ipynb)\n    * 8.5 [池化](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh08_Convolutional_Neural_Networks\u002FPooling.ipynb)\n    * 8.6 [卷积神经网络（LeNet）](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh08_Convolutional_Neural_Networks\u002FConvolutional_Neural_Networks(LeNet).ipynb)\n\n  * **第9章 现代卷积网络**\n    * 9.1 
[深度卷积神经网络（AlexNet）](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh09_Modern_Convolutional_Networks\u002FAlexNet.ipynb) \n    * 9.2 [使用模块的网络（VGG）](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh09_Modern_Convolutional_Networks\u002FVGG.ipynb)\n    * 9.3 [网络中的网络（NiN）](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh09_Modern_Convolutional_Networks\u002FNetwork_in_Network(NiN).ipynb) \n    * 9.4 [具有并行连接的网络（GoogLeNet）](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh09_Modern_Convolutional_Networks\u002FNetworks_with_Parallel_Concatenations_(GoogLeNet).ipynb) \n    * 9.5 [批量归一化](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh09_Modern_Convolutional_Networks\u002FBatch_Normalization.ipynb)\n    * 9.6 [残差网络（ResNet）](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh09_Modern_Convolutional_Networks\u002FResidual_Networks_(ResNet).ipynb) \n    * 9.7 [密集连接网络（DenseNet）](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh09_Modern_Convolutional_Networks\u002FDensely_Connected_Networks_(DenseNet).ipynb) \n\n  * **第10章 循环神经网络**\n    * 10.1 [序列模型](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh10_Recurrent_Neural_Networks\u002FSequence_Models.ipynb)\n    * 10.2 [语言模型](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh10_Recurrent_Neural_Networks\u002FLanguage_Models.ipynb)\n    * 10.3 [循环神经网络](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh10_Recurrent_Neural_Networks\u002FRecurrent_Neural_Networks.ipynb)\n    * 10.4 [文本预处理](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh10_Recurrent_Neural_Networks\u002FText_Preprocessing.ipynb)\n    * 10.5 
[从零开始实现循环神经网络](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh10_Recurrent_Neural_Networks\u002FImplementation_of_Recurrent_Neural_Networks_from_Scratch.ipynb)\n    * 10.6 [循环神经网络的简洁实现](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh10_Recurrent_Neural_Networks\u002FConcise_Implementation_of_Recurrent_Neural_Networks.ipynb)\n    * 10.7 [时间反向传播](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh10_Recurrent_Neural_Networks\u002FBackpropagation_Through_Time.ipynb)\n    * 10.8 [门控循环单元（GRU）](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh10_Recurrent_Neural_Networks\u002FGated_Recurrent_Units.ipynb)\n    * 10.9 [长短期记忆网络（LSTM）](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh10_Recurrent_Neural_Networks\u002FLong_Short_Term_Memory.ipynb)\n    * 10.10 [深度循环神经网络](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh10_Recurrent_Neural_Networks\u002FDeep_Recurrent_Neural_Networks.ipynb)\n    * 10.11 双向循环神经网络\n    * 10.12 [机器翻译与数据集](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh10_Recurrent_Neural_Networks\u002FMachine_Translation_and_Data_Sets.ipynb)\n    * 10.13 [编码器-解码器架构](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh10_Recurrent_Neural_Networks\u002FEncoder-Decoder_Architecture.ipynb) \n    * 10.14 [序列到序列](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh10_Recurrent_Neural_Networks\u002FSequence_to_Sequence.ipynb)\n    * 10.15 [束搜索](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh10_Recurrent_Neural_Networks\u002FBeam_Search.ipynb)\n\n  * **第11章 注意力机制**\n    * 11.1 
[注意力机制](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh11_Attention_Mechanism\u002FAttention_Mechanism.ipynb)\n    * 11.2 带有注意力机制的序列到序列模型\n    * 11.3 Transformer\n\n  * **第12章 优化算法**\n    * 12.1 [优化与深度学习](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh12_Optimization_Algorithms\u002FOptimization_And_Deep_Learning.ipynb)\n    * 12.2 [凸性](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh12_Optimization_Algorithms\u002FConvexity.ipynb)\n    * 12.3 [梯度下降](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh12_Optimization_Algorithms\u002FGradient_Descent.ipynb)\n    * 12.4 [随机梯度下降](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh12_Optimization_Algorithms\u002FStochastic_Gradient_Descent.ipynb)\n    * 12.5 [小批量随机梯度下降](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh12_Optimization_Algorithms\u002FMini-batch_Stochastic_Gradient_Descent.ipynb)\n    * 12.6 [动量法](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh12_Optimization_Algorithms\u002FMomentum.ipynb)\n    * 12.7 Adagrad\n    * 12.8 [RMSProp](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh12_Optimization_Algorithms\u002FRMSProp.ipynb)\n    * 12.9 Adadelta\n    * 12.10 Adam\n\n  * **第14章 计算机视觉**\n    * 14.1 图像增强\n    * 14.2 微调\n    * 14.3 [目标检测与边界框](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh14_Computer_Vision\u002FObject_Detection_and_Bounding_Boxes.ipynb)\n    * 14.4 [锚框](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh14_Computer_Vision\u002FAnchor_Boxes.ipynb)\n    * 14.5 
[多尺度目标检测](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh14_Computer_Vision\u002FMultiscale_Object_Detection.ipynb)\n    * 14.6 [目标检测数据集（皮卡丘）](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh14_Computer_Vision\u002FObject_Detection_Data_Set.ipynb)\n    * 14.7 [单次多盒检测（SSD）](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh14_Computer_Vision\u002FSingle_Shot_Multibox_Detection.ipynb)\n    * 14.8 基于区域的CNN（R-CNN）\n    * 14.9 语义分割与数据集\n    * 14.10 转置卷积\n    * 14.11 全卷积网络（FCN）\n    * 14.12 [神经风格迁移](https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002FCh14_Computer_Vision\u002FNeural_Style_Transfer.ipynb)\n    * 14.13 Kaggle上的图像分类（CIFAR-10）\n    * 14.14 Kaggle上的犬种识别（ImageNet Dogs）\n\n## 贡献说明\n\n  * 欢迎随时提交Pull Request，为剩余章节贡献PyTorch笔记本。在开始编写笔记本之前，请先创建一个包含笔记本名称的议题，以便我们为您分配该任务（如果尚未有人被分配）。\n\n  * 请严格遵循IPython笔记本和子节的命名规范。\n\n  * 如果您认为某些部分需要更详细或更好的解释，请通过议题跟踪器提交问题，我们会尽快回复。\n\n  * 找到需要改进的代码并提交拉取请求。\n\n  * 发现我们遗漏的参考文献并提交拉取请求。\n\n  * 尽量避免提交过大的拉取请求，因为这会使理解和合并变得困难。建议分多次提交较小的请求。\n\n## 支持\n\n如果你喜欢这个仓库并觉得它有用，请考虑给它加个星标（★），这样它就能触达更广泛的受众。\n\n## 参考文献\n\n[1] 原书 [Dive Into Deep Learning](https:\u002F\u002Fd2l.ai) -> [GitHub 仓库](https:\u002F\u002Fgithub.com\u002Fd2l-ai\u002Fd2l-en)\n\n[2] [Deep Learning - The Straight Dope](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope)\n\n[3] [PyTorch - MXNet 备忘单](https:\u002F\u002Fgithub.com\u002Fzackchase\u002Fmxnet-the-straight-dope\u002Fblob\u002Fmaster\u002Fcheatsheets\u002Fpytorch_gluon.md)\n\n## 引用\n\n如果你在研究中使用了本项目或代码，请使用以下 BibTeX 条目引用原书。\n```\n@book{zhang2020dive,\n    title={Dive into Deep Learning},\n    author={Aston Zhang and Zachary C. Lipton and Mu Li and Alexander J. 
Smola},\n    note={\\url{https:\u002F\u002Fd2l.ai}},\n    year={2020}\n}\n```","# d2l-pytorch 快速上手指南\n\n> **重要提示**：本仓库（dsgiitr\u002Fd2l-pytorch）已停止维护。官方完整的 PyTorch 版本请移步 [Dive Into Deep Learning (d2l-ai\u002Fd2l-en)](https:\u002F\u002Fgithub.com\u002Fd2l-ai\u002Fd2l-en)。本指南仅作为基于该旧版仓库的快速参考，建议新用户直接使用官方最新仓库。\n\n## 环境准备\n\n在开始之前，请确保你的开发环境满足以下要求：\n\n*   **操作系统**：Linux, macOS 或 Windows\n*   **Python 版本**：推荐 Python 3.6 - 3.8（与项目当时的依赖环境兼容）\n*   **核心依赖**：\n    *   PyTorch (深度学习框架)\n    *   Jupyter Notebook \u002F JupyterLab (用于运行 `.ipynb` 教程)\n    *   NumPy, Pandas, Matplotlib (数据处理与可视化)\n\n**前置检查**：\n建议在安装前更新 `pip` 并安装基础科学计算包。国内用户推荐使用清华或阿里镜像源以加速下载。\n\n## 安装步骤\n\n### 1. 克隆项目代码\n由于部分 Notebook 在 GitHub 网页端渲染可能不完整，建议克隆仓库到本地运行。\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch.git\ncd d2l-pytorch\n```\n\n### 2. 安装依赖库\n你可以选择手动安装或使用 `requirements.txt`（如果项目中包含）。以下是基于 PyTorch 的手动安装命令。\n\n**国内加速安装方案（推荐）：**\n使用清华大学开源软件镜像源安装 PyTorch 及相关依赖。\n\n```bash\n# 安装 PyTorch (CPU 版本示例，如需 GPU 请访问 pytorch.org 获取对应 CUDA 版本命令)\npip install torch torchvision torchaudio -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n\n# 安装其他必要依赖\npip install jupyter notebook numpy pandas matplotlib scipy -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n### 3. 启动 Jupyter Notebook\n进入项目目录并启动服务，以便浏览和运行各章节的教程代码。\n\n```bash\njupyter notebook\n```\n浏览器将自动打开，导航至对应的章节文件夹（如 `Ch05_Linear_Neural_Networks`）即可开始学习。\n\n## 基本使用\n\n本项目本质是一本交互式教材，每个 `.ipynb` 文件都是一个独立的教学单元。以下以 **线性回归（Linear Regression）** 为例，展示如何在 Notebook 中运行最简单的代码。\n\n打开 `Ch05_Linear_Neural_Networks\u002FConcise_Implementation_of_Linear_Regression.ipynb`，你将看到类似以下的 PyTorch 代码结构：\n\n```python\nimport torch\nfrom torch import nn\nfrom torch.utils.data import TensorDataset, DataLoader\n\n# 1. 生成合成数据\ntrue_w = torch.tensor([2, -3.4])\ntrue_b = 4.2\nfeatures = torch.randn((1000, 2))\nlabels = torch.matmul(features, true_w) + true_b\nlabels += torch.normal(0, 0.01, size=labels.shape)\nlabels = labels.reshape((-1, 1))  # 调整为 (N, 1)，与 nn.Linear(2, 1) 的输出形状一致，避免 MSELoss 广播出错\n\n# 2. 
读取数据\ndataset = TensorDataset(features, labels)\ndata_iter = DataLoader(dataset, batch_size=10, shuffle=True)\n\n# 3. 定义模型 (简洁实现)\nnet = nn.Sequential(nn.Linear(2, 1))\n\n# 4. 初始化参数\nnet[0].weight.data.normal_(0, 0.01)\nnet[0].bias.data.fill_(0)\n\n# 5. 定义损失函数和优化算法\nloss = nn.MSELoss()\ntrainer = torch.optim.SGD(net.parameters(), lr=0.03)\n\n# 6. 训练循环\nnum_epochs = 3\nfor epoch in range(num_epochs):\n    for X, y in data_iter:\n        l = loss(net(X), y)\n        trainer.zero_grad()\n        l.backward()\n        trainer.step()\n    \n    l = loss(net(features), labels)\n    print(f'epoch {epoch + 1}, loss {l.item():f}')\n```\n\n**学习路径建议：**\n1.  **Ch02 Installation**: 确认环境配置无误。\n2.  **Ch04 The Preliminaries**: 复习数据操作、线性代数及自动求导基础。\n3.  **Ch05 - Ch06**: 从零实现到简洁实现线性网络与多层感知机，理解核心原理。\n4.  **后续章节**: 根据兴趣深入卷积神经网络 (CNN)、循环神经网络 (RNN) 或 Transformer 等现代架构。","某高校人工智能实验室的研究生团队正致力于复现经典深度学习算法，以完成课程作业并夯实理论基础，但团队成员主要熟悉 PyTorch 框架。\n\n### 没有 d2l-pytorch 时\n- **框架迁移成本高**：经典的《动手学深度学习》原书代码基于 MXNet 编写，学生需手动逐行将代码转换为 PyTorch 语法，极易引入难以排查的 Bug。\n- **数学原理与代码脱节**：在从零实现线性回归或多层感知机时，缺乏对应的 PyTorch 参考实现，导致学生难以验证自己对反向传播等核心数学推导的理解是否正确。\n- **环境配置混乱**：不同章节依赖的数据预处理和可视化工具分散，新手在搭建实验环境时往往因版本冲突或缺失依赖而耗费数天时间。\n- **学习曲线陡峭**：需要在理解深度学习理论的同时还要克服框架差异，导致注意力分散，严重拖慢了从理论到实践的转化效率。\n\n### 使用 d2l-pytorch 后\n- **即拿即用的 PyTorch 代码**：直接获取书中所有章节（从数据操作到卷积神经网络）已完美适配 PyTorch 的代码，消除了手动移植的工作量和错误风险。\n- **理论与实现无缝对照**：通过“从零实现”与“简洁实现”两种版本的对比笔记，学生能清晰看到数学公式如何映射为具体的 PyTorch API，深刻掌握模型底层逻辑。\n- **标准化的实验环境**：项目提供了统一的安装指南和依赖管理，确保团队成员能在几分钟内复现书中的 Fashion-MNIST 分类等经典实验。\n- **专注核心算法学习**：移除了框架转换的干扰，团队能将全部精力集中在模型调优、过拟合分析及超参数选择等核心深度学习任务上。\n\nd2l-pytorch 通过提供标准化的 PyTorch 版教学代码，彻底打通了深度学习理论学习与工业级框架实践之间的最后一公里。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdsgiitr_d2l-pytorch_d55bb078.png","dsgiitr","Data Science Group, IIT Roorkee","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fdsgiitr_4747c598.png","A student organisation driving innovation via Machine 
Learning.",null,"dsg@iitr.ac.in","dsg_iitr","https:\u002F\u002Fdsgiitr.in","https:\u002F\u002Fgithub.com\u002Fdsgiitr",[83,87,91],{"name":84,"color":85,"percentage":86},"Jupyter Notebook","#DA5B0B",99.5,{"name":88,"color":89,"percentage":90},"Python","#3572A5",0.5,{"name":92,"color":93,"percentage":94},"Shell","#89e051",0,4347,1237,"2026-04-13T01:17:19","Apache-2.0","未说明","非必需（第 7.6 章涉及 GPU 使用，但未指定具体型号、显存或 CUDA 版本要求）",{"notes":102,"python":99,"dependencies":103},"该项目已不再维护，README 明确建议前往原始仓库 (d2l-ai\u002Fd2l-en) 获取完整的 PyTorch 版本。部分 Jupyter Notebook 文件在 GitHub 上可能无法完美渲染，建议克隆仓库本地查看或使用 nbviewer。内容涵盖从基础安装到深度学习各章节的代码实现。",[104,105,106,107],"torch","numpy","matplotlib","jupyter",[16,35,109,14,15],"其他",[111,112,113,114,115,116,117,118,119,120],"deep-learning","d2l","data-science","pytorch-implmention","book","nlp","computer-vision","pytorch","mxnet","dive-into-deep-learning","2026-03-27T02:49:30.150509","2026-04-16T01:44:48.027544",[124,129,134,139,144,149,154],{"id":125,"question_zh":126,"answer_zh":127,"source_url":128},34904,"调用 download_and_preprocess_data() 函数时报错或无法使用怎么办？","该函数不是 MXNet 或 PyTorch 的内置函数。你需要先导入包含该函数的特定文件：https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fblob\u002Fmaster\u002Fd2l\u002Fssd_utils.py，然后直接调用该文件中定义的 \"download_and_preprocess_data()\" 函数即可。","https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fissues\u002F107",{"id":130,"question_zh":131,"answer_zh":132,"source_url":133},34905,"运行“从零实现多层感知机”章节代码时出现导入错误如何解决？","该错误通常是因为缺少 opencv3 依赖。解决方法有两种：\n1. 访问 https:\u002F\u002Fopencv.org\u002F 下载并安装适合你操作系统的 opencv3。\n2. 
如果暂时不需要对象检测功能，可以注释掉 \"d2l\u002F__init__.py\" 文件中的 \"from .ssd_utils import *\" 这一行。注意：该导入仅对对象检测笔记本是必需的，对多层感知机章节并非必须。","https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fissues\u002F94",{"id":135,"question_zh":136,"answer_zh":137,"source_url":138},34906,"在哪里可以找到书中练习题的答案？","目前没有一个统一的仓库包含所有练习题的答案。建议通过搜索引擎查找相关解答，或者在 MXNet 论坛 (https:\u002F\u002Fdiscuss.mxnet.io\u002F) 和 PyTorch 论坛 (https:\u002F\u002Fdiscuss.pytorch.org\u002F) 发起讨论。如果是 PyTorch 特有的问题，也可以在该项目的 GitHub Issues 或原始 d2l-en 仓库中提问。","https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fissues\u002F90",{"id":140,"question_zh":141,"answer_zh":142,"source_url":143},34907,"项目未来会包含自然语言处理（NLP）部分的内容吗？","是的，维护者计划很快添加 NLP 相关的笔记本。如果你愿意贡献，也欢迎针对此部分提交 Pull Request (PR)。","https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fissues\u002F96",{"id":145,"question_zh":146,"answer_zh":147,"source_url":148},34908,"运行 ResNet 相关代码时遇到通道维度不匹配的错误怎么办？","请注意，该仓库 (dsgiitr\u002Fd2l-pytorch) 已不再维护。为了获取最新且修复了此类错误的代码，请前往官方仓库 https:\u002F\u002Fgithub.com\u002Fd2l-ai\u002Fd2l-en 查看最新的实现代码。","https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fissues\u002F124",{"id":150,"question_zh":151,"answer_zh":152,"source_url":153},34909,"代码中出现 'invalid syntax' 错误，特别是在涉及 CUDA 设备字符串时如何修复？","如果在构建 CUDA 设备字符串时遇到语法错误（例如在 base.py 中），请确保使用正确的字符串格式化方式。应将相关代码替换为：'cuda:' + str(i)，以确保生成合法的设备名称字符串（如 'cuda:0'）。","https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fissues\u002F10",{"id":155,"question_zh":156,"answer_zh":157,"source_url":158},34910,"如何在项目中复用书中原有的训练循环函数（如 train_ch3）？","为了完全复现书本内容并避免重复编写训练循环，你应该将所需的训练函数（如 `d2l.train_ch3()`）实现并添加到 `d2l` 包的相关模块中（例如创建或更新 `train.py`）。参考官方英文版实现 (https:\u002F\u002Fgithub.com\u002Fd2l-ai\u002Fd2l-en\u002Fblob\u002F8e53fce19c6c744cf4994a896c43e382567e6fbc\u002Fd2l\u002Ftrain.py#L1)，如果某个笔记本调用了特定的 d2l 函数但该函数尚未在当前 PyTorch 版本中实现，你需要自行实现它并添加到包中以便后续章节复用。","https:\u002F\u002Fgithub.com\u002Fdsgiitr\u002Fd2l-pytorch\u002Fissues\u002F4",[]]