[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-d2l-ai--d2l-zh-pytorch-slides":3,"tool-d2l-ai--d2l-zh-pytorch-slides":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":10,"last_commit_at":50,"category_tags":51,"status":17},4292,"Deep-Live-Cam","hacksider\u002FDeep-Live-Cam","Deep-Live-Cam 是一款专注于实时换脸与视频生成的开源工具，用户仅需一张静态照片，即可通过“一键操作”实现摄像头画面的即时变脸或制作深度伪造视频。它有效解决了传统换脸技术流程繁琐、对硬件配置要求极高以及难以实时预览的痛点，让高质量的数字内容创作变得触手可及。\n\n这款工具不仅适合开发者和技术研究人员探索算法边界，更因其极简的操作逻辑（仅需三步：选脸、选摄像头、启动），广泛适用于普通用户、内容创作者、设计师及直播主播。无论是为了动画角色定制、服装展示模特替换，还是制作趣味短视频和直播互动，Deep-Live-Cam 
都能提供流畅的支持。\n\n其核心技术亮点在于强大的实时处理能力，支持口型遮罩（Mouth Mask）以保留使用者原始的嘴部动作，确保表情自然精准；同时具备“人脸映射”功能，可同时对画面中的多个主体应用不同面孔。此外，项目内置了严格的内容安全过滤机制，自动拦截涉及裸露、暴力等不当素材，并倡导用户在获得授权及明确标注的前提下合规使用，体现了技术发展与伦理责任的平衡。",88924,"2026-04-06T03:28:53",[14,15,13,52],"视频",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、macOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[14,35],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":76,"owner_twitter":76,"owner_website":77,"owner_url":78,"languages":79,"stars":88,"forks":89,"last_commit_at":90,"license":76,"difficulty_score":32,"env_os":91,"env_gpu":91,"env_ram":91,"env_deps":92,"category_tags":96,"github_topics":76,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":97,"updated_at":98,"faqs":99,"releases":100},4330,"d2l-ai\u002Fd2l-zh-pytorch-slides","d2l-zh-pytorch-slides","Pytorch版代码幻灯片","d2l-zh-pytorch-slides 是知名深度学习教材《动手学深度学习》（Dive into Deep Learning）的 PyTorch 版本配套幻灯片资源库。它将原本交互式的 Jupyter 笔记本代码与讲解内容，自动转化为适合课堂演示或自学回顾的幻灯片格式，涵盖了从线性代数、微积分等数学基础，到线性回归、多层感知机等核心神经网络算法的完整章节。\n\n这一资源有效解决了深度学习教学中“代码”与“演示文稿”分离的痛点。传统教学中，讲师往往需要在代码环境和 PPT 之间频繁切换，而 d2l-zh-pytorch-slides 让可运行的代码、公式推导和可视化结果直接融合在每一页幻灯片中，极大提升了知识传递的连贯性与效率。\n\n它非常适合高校教师、培训讲师以及自学者使用。教师可直接利用这些幻灯片进行高质量授课，无需重复制作课件；自学者则能通过幻灯片模式更清晰地梳理知识脉络，专注于逻辑推演而非代码细节。其独特亮点在于基于 RISE 插件生成，既保留了 Jupyter 环境的交互潜力，又提供了流畅的演示体验，是让深度学习理论“看得见、跑得通”的实用教学助手。","# d2l-ai\u002Fd2l-zh-pytorch-slides\n\nThis repo contains generated notebook slides. 
To open it locally, we suggest you install the [rise](https:\u002F\u002Frise.readthedocs.io\u002Fen\u002Fstable\u002F) extension.\n\nYou can also preview them in nbviewer:\n - [chapter_preliminaries\u002Fndarray.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_preliminaries\u002Fndarray.ipynb)\n - [chapter_preliminaries\u002Fpandas.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_preliminaries\u002Fpandas.ipynb)\n - [chapter_preliminaries\u002Flinear-algebra.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_preliminaries\u002Flinear-algebra.ipynb)\n - [chapter_preliminaries\u002Fcalculus.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_preliminaries\u002Fcalculus.ipynb)\n - [chapter_preliminaries\u002Fautograd.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_preliminaries\u002Fautograd.ipynb)\n - [chapter_preliminaries\u002Flookup-api.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_preliminaries\u002Flookup-api.ipynb)\n - [chapter_linear-networks\u002Flinear-regression.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_linear-networks\u002Flinear-regression.ipynb)\n - [chapter_linear-networks\u002Flinear-regression-scratch.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_linear-networks\u002Flinear-regression-scratch.ipynb)\n - [chapter_linear-networks\u002Flinear-regression-concise.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_linear-networks\u002Flinear-regression-concise.ipynb)\n - [chapter_linear-networks\u002Fimage-classification-dataset.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_linear-networks\u002Fimage-classification-dataset.ipynb)\n - [chapter_linear-networks\u002Fsoftmax-regression-scratch.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_linear-networks\u002Fsoftmax-regression-scratch.ipynb)\n - [chapter_linear-networks\u002Fsoftmax-regression-concise.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_linear-networks\u002Fsoftmax-regression-concise.ipynb)\n - [chapter_multilayer-perceptrons\u002Fmlp.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_multilayer-perceptrons\u002Fmlp.ipynb)\n - 
[chapter_multilayer-perceptrons\u002Fmlp-scratch.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_multilayer-perceptrons\u002Fmlp-scratch.ipynb)\n - [chapter_multilayer-perceptrons\u002Fmlp-concise.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_multilayer-perceptrons\u002Fmlp-concise.ipynb)\n - [chapter_multilayer-perceptrons\u002Funderfit-overfit.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_multilayer-perceptrons\u002Funderfit-overfit.ipynb)\n - [chapter_multilayer-perceptrons\u002Fweight-decay.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_multilayer-perceptrons\u002Fweight-decay.ipynb)\n - [chapter_multilayer-perceptrons\u002Fdropout.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_multilayer-perceptrons\u002Fdropout.ipynb)\n - [chapter_multilayer-perceptrons\u002Fnumerical-stability-and-init.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_multilayer-perceptrons\u002Fnumerical-stability-and-init.ipynb)\n - [chapter_multilayer-perceptrons\u002Fkaggle-house-price.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_multilayer-perceptrons\u002Fkaggle-house-price.ipynb)\n - [chapter_deep-learning-computation\u002Fmodel-construction.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_deep-learning-computation\u002Fmodel-construction.ipynb)\n - [chapter_deep-learning-computation\u002Fparameters.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_deep-learning-computation\u002Fparameters.ipynb)\n - [chapter_deep-learning-computation\u002Fcustom-layer.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_deep-learning-computation\u002Fcustom-layer.ipynb)\n - [chapter_deep-learning-computation\u002Fread-write.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_deep-learning-computation\u002Fread-write.ipynb)\n - [chapter_deep-learning-computation\u002Fuse-gpu.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_deep-learning-computation\u002Fuse-gpu.ipynb)\n - [chapter_convolutional-neural-networks\u002Fconv-layer.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_convolutional-neural-networks\u002Fconv-layer.ipynb)\n - 
[chapter_convolutional-neural-networks\u002Fpadding-and-strides.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_convolutional-neural-networks\u002Fpadding-and-strides.ipynb)\n - [chapter_convolutional-neural-networks\u002Fchannels.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_convolutional-neural-networks\u002Fchannels.ipynb)\n - [chapter_convolutional-neural-networks\u002Fpooling.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_convolutional-neural-networks\u002Fpooling.ipynb)\n - [chapter_convolutional-neural-networks\u002Flenet.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_convolutional-neural-networks\u002Flenet.ipynb)\n - [chapter_convolutional-modern\u002Falexnet.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_convolutional-modern\u002Falexnet.ipynb)\n - [chapter_convolutional-modern\u002Fvgg.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_convolutional-modern\u002Fvgg.ipynb)\n - [chapter_convolutional-modern\u002Fnin.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_convolutional-modern\u002Fnin.ipynb)\n - [chapter_convolutional-modern\u002Fgooglenet.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_convolutional-modern\u002Fgooglenet.ipynb)\n - [chapter_convolutional-modern\u002Fbatch-norm.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_convolutional-modern\u002Fbatch-norm.ipynb)\n - [chapter_convolutional-modern\u002Fresnet.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_convolutional-modern\u002Fresnet.ipynb)\n - [chapter_convolutional-modern\u002Fdensenet.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_convolutional-modern\u002Fdensenet.ipynb)\n - [chapter_recurrent-neural-networks\u002Fsequence.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_recurrent-neural-networks\u002Fsequence.ipynb)\n - [chapter_recurrent-neural-networks\u002Ftext-preprocessing.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_recurrent-neural-networks\u002Ftext-preprocessing.ipynb)\n - 
[chapter_recurrent-neural-networks\u002Flanguage-models-and-dataset.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_recurrent-neural-networks\u002Flanguage-models-and-dataset.ipynb)\n - [chapter_recurrent-neural-networks\u002Frnn-scratch.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_recurrent-neural-networks\u002Frnn-scratch.ipynb)\n - [chapter_recurrent-neural-networks\u002Frnn-concise.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_recurrent-neural-networks\u002Frnn-concise.ipynb)\n - [chapter_recurrent-modern\u002Fgru.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_recurrent-modern\u002Fgru.ipynb)\n - [chapter_recurrent-modern\u002Flstm.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_recurrent-modern\u002Flstm.ipynb)\n - [chapter_recurrent-modern\u002Fdeep-rnn.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_recurrent-modern\u002Fdeep-rnn.ipynb)\n - [chapter_recurrent-modern\u002Fbi-rnn.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_recurrent-modern\u002Fbi-rnn.ipynb)\n - [chapter_recurrent-modern\u002Fmachine-translation-and-dataset.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_recurrent-modern\u002Fmachine-translation-and-dataset.ipynb)\n - [chapter_recurrent-modern\u002Fencoder-decoder.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_recurrent-modern\u002Fencoder-decoder.ipynb)\n - [chapter_recurrent-modern\u002Fseq2seq.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_recurrent-modern\u002Fseq2seq.ipynb)\n - [chapter_attention-mechanisms\u002Fnadaraya-waston.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_attention-mechanisms\u002Fnadaraya-waston.ipynb)\n - [chapter_attention-mechanisms\u002Fattention-scoring-functions.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_attention-mechanisms\u002Fattention-scoring-functions.ipynb)\n - [chapter_attention-mechanisms\u002Fbahdanau-attention.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_attention-mechanisms\u002Fbahdanau-attention.ipynb)\n - 
[chapter_attention-mechanisms\u002Fmultihead-attention.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_attention-mechanisms\u002Fmultihead-attention.ipynb)\n - [chapter_attention-mechanisms\u002Fself-attention-and-positional-encoding.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_attention-mechanisms\u002Fself-attention-and-positional-encoding.ipynb)\n - [chapter_attention-mechanisms\u002Ftransformer.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_attention-mechanisms\u002Ftransformer.ipynb)\n - [chapter_computational-performance\u002Fmultiple-gpus.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_computational-performance\u002Fmultiple-gpus.ipynb)\n - [chapter_computational-performance\u002Fmultiple-gpus-concise.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_computational-performance\u002Fmultiple-gpus-concise.ipynb)\n - [chapter_computer-vision\u002Fimage-augmentation.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_computer-vision\u002Fimage-augmentation.ipynb)\n - [chapter_computer-vision\u002Ffine-tuning.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_computer-vision\u002Ffine-tuning.ipynb)\n - [chapter_computer-vision\u002Fbounding-box.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_computer-vision\u002Fbounding-box.ipynb)\n - [chapter_computer-vision\u002Fanchor.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_computer-vision\u002Fanchor.ipynb)\n - [chapter_computer-vision\u002Fmultiscale-object-detection.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_computer-vision\u002Fmultiscale-object-detection.ipynb)\n - [chapter_computer-vision\u002Fobject-detection-dataset.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_computer-vision\u002Fobject-detection-dataset.ipynb)\n - [chapter_computer-vision\u002Fssd.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_computer-vision\u002Fssd.ipynb)\n - [chapter_computer-vision\u002Fsemantic-segmentation-and-dataset.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_computer-vision\u002Fsemantic-segmentation-and-dataset.ipynb)\n - 
[chapter_computer-vision\u002Ftransposed-conv.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_computer-vision\u002Ftransposed-conv.ipynb)\n - [chapter_computer-vision\u002Ffcn.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_computer-vision\u002Ffcn.ipynb)\n - [chapter_computer-vision\u002Fneural-style.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_computer-vision\u002Fneural-style.ipynb)\n - [chapter_computer-vision\u002Fkaggle-cifar10.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_computer-vision\u002Fkaggle-cifar10.ipynb)\n - [chapter_computer-vision\u002Fkaggle-dog.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_computer-vision\u002Fkaggle-dog.ipynb)\n - [chapter_natural-language-processing-applications\u002Fnatural-language-inference-and-dataset.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_natural-language-processing-applications\u002Fnatural-language-inference-and-dataset.ipynb)\n - [chapter_natural-language-processing-applications\u002Fnatural-language-inference-bert.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_natural-language-processing-applications\u002Fnatural-language-inference-bert.ipynb)","# d2l-ai\u002Fd2l-zh-pytorch-slides\n\n此仓库包含生成的笔记本幻灯片。若要在本地打开，建议安装 [rise](https:\u002F\u002Frise.readthedocs.io\u002Fen\u002Fstable\u002F) 插件。\n\n你也可以在 nbviewer 上预览它们：\n- [chapter_preliminaries\u002Fndarray.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_preliminaries\u002Fndarray.ipynb)\n- [chapter_preliminaries\u002Fpandas.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_preliminaries\u002Fpandas.ipynb)\n- [chapter_preliminaries\u002Flinear-algebra.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_preliminaries\u002Flinear-algebra.ipynb)\n- [chapter_preliminaries\u002Fcalculus.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_preliminaries\u002Fcalculus.ipynb)\n- [chapter_preliminaries\u002Fautograd.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_preliminaries\u002Fautograd.ipynb)\n- [chapter_preliminaries\u002Flookup-api.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_preliminaries\u002Flookup-api.ipynb)\n- 
[chapter_linear-networks\u002Flinear-regression.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_linear-networks\u002Flinear-regression.ipynb)\n- [chapter_linear-networks\u002Flinear-regression-scratch.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_linear-networks\u002Flinear-regression-scratch.ipynb)\n- [chapter_linear-networks\u002Flinear-regression-concise.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_linear-networks\u002Flinear-regression-concise.ipynb)\n- [chapter_linear-networks\u002Fimage-classification-dataset.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_linear-networks\u002Fimage-classification-dataset.ipynb)\n- [chapter_linear-networks\u002Fsoftmax-regression-scratch.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_linear-networks\u002Fsoftmax-regression-scratch.ipynb)\n- [chapter_linear-networks\u002Fsoftmax-regression-concise.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_linear-networks\u002Fsoftmax-regression-concise.ipynb)\n- [chapter_multilayer-perceptrons\u002Fmlp.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_multilayer-perceptrons\u002Fmlp.ipynb)\n- [chapter_multilayer-perceptrons\u002Fmlp-scratch.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_multilayer-perceptrons\u002Fmlp-scratch.ipynb)\n- [chapter_multilayer-perceptrons\u002Fmlp-concise.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_multilayer-perceptrons\u002Fmlp-concise.ipynb)\n- [chapter_multilayer-perceptrons\u002Funderfit-overfit.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_multilayer-perceptrons\u002Funderfit-overfit.ipynb)\n- [chapter_multilayer-perceptrons\u002Fweight-decay.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_multilayer-perceptrons\u002Fweight-decay.ipynb)\n- [chapter_multilayer-perceptrons\u002Fdropout.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_multilayer-perceptrons\u002Fdropout.ipynb)\n- [chapter_multilayer-perceptrons\u002Fnumerical-stability-and-init.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_multilayer-perceptrons\u002Fnumerical-stability-and-init.ipynb)\n- 
[chapter_multilayer-perceptrons\u002Fkaggle-house-price.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_multilayer-perceptrons\u002Fkaggle-house-price.ipynb)\n- [chapter_deep-learning-computation\u002Fmodel-construction.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_deep-learning-computation\u002Fmodel-construction.ipynb)\n- [chapter_deep-learning-computation\u002Fparameters.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_deep-learning-computation\u002Fparameters.ipynb)\n- [chapter_deep-learning-computation\u002Fcustom-layer.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_deep-learning-computation\u002Fcustom-layer.ipynb)\n- [chapter_deep-learning-computation\u002Fread-write.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_deep-learning-computation\u002Fread-write.ipynb)\n- [chapter_deep-learning-computation\u002Fuse-gpu.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_deep-learning-computation\u002Fuse-gpu.ipynb)\n- [chapter_convolutional-neural-networks\u002Fconv-layer.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_convolutional-neural-networks\u002Fconv-layer.ipynb)\n- [chapter_convolutional-neural-networks\u002Fpadding-and-strides.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_convolutional-neural-networks\u002Fpadding-and-strides.ipynb)\n- [chapter_convolutional-neural-networks\u002Fchannels.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_convolutional-neural-networks\u002Fchannels.ipynb)\n- [chapter_convolutional-neural-networks\u002Fpooling.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_convolutional-neural-networks\u002Fpooling.ipynb)\n- [chapter_convolutional-neural-networks\u002Flenet.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_convolutional-neural-networks\u002Flenet.ipynb)\n- [chapter_convolutional-modern\u002Falexnet.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_convolutional-modern\u002Falexnet.ipynb)\n- [chapter_convolutional-modern\u002Fvgg.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_convolutional-modern\u002Fvgg.ipynb)\n- 
[chapter_convolutional-modern\u002Fnin.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_convolutional-modern\u002Fnin.ipynb)\n- [chapter_convolutional-modern\u002Fgooglenet.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_convolutional-modern\u002Fgooglenet.ipynb)\n- [chapter_convolutional-modern\u002Fbatch-norm.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_convolutional-modern\u002Fbatch-norm.ipynb)\n- [chapter_convolutional-modern\u002Fresnet.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_convolutional-modern\u002Fresnet.ipynb)\n- [chapter_convolutional-modern\u002Fdensenet.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_convolutional-modern\u002Fdensenet.ipynb)\n- [chapter_recurrent-neural-networks\u002Fsequence.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_recurrent-neural-networks\u002Fsequence.ipynb)\n- [chapter_recurrent-neural-networks\u002Ftext-preprocessing.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_recurrent-neural-networks\u002Ftext-preprocessing.ipynb)\n- [chapter_recurrent-neural-networks\u002Flanguage-models-and-dataset.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_recurrent-neural-networks\u002Flanguage-models-and-dataset.ipynb)\n- [chapter_recurrent-neural-networks\u002Frnn-scratch.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_recurrent-neural-networks\u002Frnn-scratch.ipynb)\n- [chapter_recurrent-neural-networks\u002Frnn-concise.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_recurrent-neural-networks\u002Frnn-concise.ipynb)\n- [chapter_recurrent-modern\u002Fgru.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_recurrent-modern\u002Fgru.ipynb)\n- [chapter_recurrent-modern\u002Flstm.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_recurrent-modern\u002Flstm.ipynb)\n- [chapter_recurrent-modern\u002Fdeep-rnn.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_recurrent-modern\u002Fdeep-rnn.ipynb)\n- [chapter_recurrent-modern\u002Fbi-rnn.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_recurrent-modern\u002Fbi-rnn.ipynb)\n- 
[chapter_recurrent-modern\u002Fmachine-translation-and-dataset.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_recurrent-modern\u002Fmachine-translation-and-dataset.ipynb)\n- [chapter_recurrent-modern\u002Fencoder-decoder.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_recurrent-modern\u002Fencoder-decoder.ipynb)\n- [chapter_recurrent-modern\u002Fseq2seq.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_recurrent-modern\u002Fseq2seq.ipynb)\n- [chapter_attention-mechanisms\u002Fnadaraya-waston.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_attention-mechanisms\u002Fnadaraya-waston.ipynb)\n- [chapter_attention-mechanisms\u002Fattention-scoring-functions.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_attention-mechanisms\u002Fattention-scoring-functions.ipynb)\n- [chapter_attention-mechanisms\u002Fbahdanau-attention.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_attention-mechanisms\u002Fbahdanau-attention.ipynb)\n- [chapter_attention-mechanisms\u002Fmultihead-attention.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_attention-mechanisms\u002Fmultihead-attention.ipynb)\n- [chapter_attention-mechanisms\u002Fself-attention-and-positional-encoding.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_attention-mechanisms\u002Fself-attention-and-positional-encoding.ipynb)\n- [chapter_attention-mechanisms\u002Ftransformer.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_attention-mechanisms\u002Ftransformer.ipynb)\n- [chapter_computational-performance\u002Fmultiple-gpus.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_computational-performance\u002Fmultiple-gpus.ipynb)\n- [chapter_computational-performance\u002Fmultiple-gpus-concise.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_computational-performance\u002Fmultiple-gpus-concise.ipynb)\n- [chapter_computer-vision\u002Fimage-augmentation.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_computer-vision\u002Fimage-augmentation.ipynb)\n- [chapter_computer-vision\u002Ffine-tuning.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_computer-vision\u002Ffine-tuning.ipynb)\n- 
[chapter_computer-vision\u002Fbounding-box.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_computer-vision\u002Fbounding-box.ipynb)\n- [chapter_computer-vision\u002Fanchor.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_computer-vision\u002Fanchor.ipynb)\n- [chapter_computer-vision\u002Fmultiscale-object-detection.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_computer-vision\u002Fmultiscale-object-detection.ipynb)\n- [chapter_computer-vision\u002Fobject-detection-dataset.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_computer-vision\u002Fobject-detection-dataset.ipynb)\n- [chapter_computer-vision\u002Fssd.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_computer-vision\u002Fssd.ipynb)\n- [chapter_computer-vision\u002Fsemantic-segmentation-and-dataset.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_computer-vision\u002Fsemantic-segmentation-and-dataset.ipynb)\n- [chapter_computer-vision\u002Ftransposed-conv.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_computer-vision\u002Ftransposed-conv.ipynb)\n- [chapter_computer-vision\u002Ffcn.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_computer-vision\u002Ffcn.ipynb)\n- [chapter_computer-vision\u002Fneural-style.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_computer-vision\u002Fneural-style.ipynb)\n- [chapter_computer-vision\u002Fkaggle-cifar10.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_computer-vision\u002Fkaggle-cifar10.ipynb)\n- [chapter_computer-vision\u002Fkaggle-dog.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_computer-vision\u002Fkaggle-dog.ipynb)\n- [chapter_natural-language-processing-applications\u002Fnatural-language-inference-and-dataset.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_natural-language-processing-applications\u002Fnatural-language-inference-and-dataset.ipynb)\n- [chapter_natural-language-processing-applications\u002Fnatural-language-inference-bert.ipynb](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_natural-language-processing-applications\u002Fnatural-language-inference-bert.ipynb)","# d2l-zh-pytorch-slides 快速上手指南\n\n本仓库提供了《动手学深度学习》（PyTorch 版）中文教材的幻灯片格式 Jupyter Notebook。通过浏览器即可像演示文稿一样浏览代码、公式和讲解内容，非常适合教学演示或个人复习。
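\n\n> **补充提示（基于常见工具的假设做法，非本仓库官方文档内容）**：若只想快速得到可离线浏览的静态幻灯片，也可以用 nbconvert 直接导出，无需启动 Jupyter 演示环境：\n\n```bash\n# 补充示例：nbconvert 是此处假设引入的工具，仓库 README 只要求安装 RISE\npip install nbconvert -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n# 以 chapter_preliminaries 章节为例，批量导出为 reveal.js 静态页面（*.slides.html），可直接用浏览器打开\njupyter nbconvert --to slides chapter_preliminaries\u002F*.ipynb\n```\n\n## 环境准备\n\n在开始之前，请确保您的系统满足以下要求：\n\n*   **操作系统**：Windows \u002F macOS \u002F Linux\n*   **Python 版本**：建议 Python 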
3.8 或更高版本\n*   **前置依赖**：\n    *   [Jupyter Notebook](https:\u002F\u002Fjupyter.org\u002F)\n    *   [RISE](https:\u002F\u002Frise.readthedocs.io\u002Fen\u002Fstable\u002F) 扩展（用于在本地将 Notebook 渲染为幻灯片）\n\n> **提示**：如果您不想在本地安装环境，可以直接使用下文提供的在线预览链接。\n\n## 安装步骤\n\n### 1. 安装 Jupyter Notebook\n如果您尚未安装 Jupyter，可以使用 pip 进行安装（推荐使用国内镜像源加速）：\n\n```bash\npip install jupyter -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n### 2. 安装 RISE 扩展\nRISE 是将 Notebook 转换为幻灯片的关键插件。\n\n**方法一：使用 pip 安装（推荐）**\n```bash\npip install rise -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\njupyter nbextension install rise --py --sys-prefix\njupyter nbextension enable rise --py --sys-prefix\n```\n\n**方法二：使用 conda 安装（如果您使用 Anaconda）**\n```bash\nconda install -c conda-forge rise\n```\n\n### 3. 克隆项目代码\n将幻灯片源码下载到本地：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides.git\ncd d2l-zh-pytorch-slides\n```\n\n## 基本使用\n\n### 方式一：本地运行（推荐）\n\n1.  进入项目目录后，启动 Jupyter Notebook：\n    ```bash\n    jupyter notebook\n    ```\n2.  在浏览器中打开任意一个 `.ipynb` 文件（例如 `chapter_preliminaries\u002Fndarray.ipynb`）。\n3.  点击工具栏右侧出现的 **幻灯片图标**（通常位于保存按钮旁边，或按快捷键 `Alt + r`）。\n4.  页面将立即切换为幻灯片演示模式，您可以使用键盘方向键进行翻页。\n\n### 方式二：在线预览（无需安装）\n\n如果您希望快速查看内容而无需配置环境，可以直接访问 NBViewer 生成的幻灯片页面。以下是部分核心章节的直达链接：\n\n*   **基础知识**\n    *   [数据操作 (ndarray)](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_preliminaries\u002Fndarray.ipynb)\n    *   [线性代数](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_preliminaries\u002Flinear-algebra.ipynb)\n    *   [微积分](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_preliminaries\u002Fcalculus.ipynb)\n    *   [自动求导 (autograd)](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_preliminaries\u002Fautograd.ipynb)\n\n*   **线性网络**\n    *   [线性回归](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_linear-networks\u002Flinear-regression.ipynb)\n    *   [Softmax 回归](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_linear-networks\u002Fsoftmax-regression-concise.ipynb)\n\n*   **多层感知机**\n    *   [MLP 实现](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_multilayer-perceptrons\u002Fmlp.ipynb)\n    *   [过拟合与欠拟合](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_multilayer-perceptrons\u002Funderfit-overfit.ipynb)\n\n*   **卷积神经网络 (CNN)**\n    *   [卷积层](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_convolutional-neural-networks\u002Fconv-layer.ipynb)\n    *   [ResNet](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_convolutional-modern\u002Fresnet.ipynb)\n\n*   **循环神经网络 (RNN) 与 Transformer**\n    *   
[LSTM](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_recurrent-modern\u002Flstm.ipynb)\n    *   [Transformer](https:\u002F\u002Fnbviewer.jupyter.org\u002Fformat\u002Fslides\u002Fgithub\u002Fd2l-ai\u002Fd2l-zh-pytorch-slides\u002Fblob\u002Fmain\u002Fchapter_attention-mechanisms\u002Ftransformer.ipynb)\n\n> 完整列表请参阅项目根目录或 GitHub 仓库首页。","某高校人工智能讲师正在准备《动手学深度学习》课程中关于“多层感知机与过拟合”的章节，急需将复杂的代码逻辑转化为直观的课堂演示材料。\n\n### 没有 d2l-zh-pytorch-slides 时\n- 讲师需手动将 Jupyter Notebook 中的代码块、公式和图表逐一截图并粘贴到 PPT 中，耗时数小时且排版极易错乱。\n- 静态幻灯片无法展示代码运行过程，学生难以理解动态的数据流向和模型训练时的实时损失变化。\n- 课后分享资料时，学生只能看到静态图片，无法直接复制代码进行本地复现或修改实验参数。\n- 维护成本极高，一旦教材代码更新，所有相关的 PPT 页面都需要重新制作和替换。\n\n### 使用 d2l-zh-pytorch-slides 后\n- 直接加载预生成的 Notebook 幻灯片文件，配合 RISE 插件即可在浏览器中呈现专业排版的交互式课件，备课时间从数小时缩短至几分钟。\n- 课堂上可现场执行代码单元格，实时演示权重衰减（Weight Decay）或 Dropout 对模型收敛的具体影响，让抽象概念可视化。\n- 学生通过 nbviewer 链接即可在线浏览完整幻灯片，并能一键下载原始 Notebook 文件，无缝衔接理论讲解与课后动手实践。\n- 内容随官方仓库自动同步，讲师无需手动维护，确保教学内容始终与最新的 PyTorch 实现和教材版本保持一致。\n\nd2l-zh-pytorch-slides 通过将代码笔记直接转化为可执行的交互式幻灯片，彻底打通了从理论学习到代码实战的教学闭环。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fd2l-ai_d2l-zh-pytorch-slides_258f9209.png","d2l-ai","Dive into Deep Learning (D2L.ai)","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fd2l-ai_6b4d7ddf.png","",null,"https:\u002F\u002FD2L.ai","https:\u002F\u002Fgithub.com\u002Fd2l-ai",[80,84],{"name":81,"color":82,"percentage":83},"Jupyter Notebook","#DA5B0B",100,{"name":85,"color":86,"percentage":87},"CSS","#663399",0,787,174,"2026-04-06T05:02:00","未说明",{"notes":93,"python":91,"dependencies":94},"该仓库包含生成的 Jupyter Notebook 幻灯片。若要在本地打开，建议安装 RISE 扩展；或者可以通过提供的 nbviewer 链接在线预览幻灯片内容。README 中未列出具体的 Python 版本、深度学习框架（如 PyTorch）版本或硬件资源需求，因为主要用途是展示而非直接运行训练代码。",[95],"rise (Jupyter Notebook 扩展)",[14],"2026-03-27T02:49:30.150509","2026-04-06T18:56:10.330310",[],[]]