[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-mrdbourke--tensorflow-deep-learning":3,"tool-mrdbourke--tensorflow-deep-learning":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",150037,2,"2026-04-10T23:33:47",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":76,"owner_twitter":72,"owner_website":77,"owner_url":78,"languages":79,"stars":88,"forks":89,"last_commit_at":90,"license":91,"difficulty_score":32,"env_os":92,"env_gpu":93,"env_ram":92,"env_deps":94,"category_tags":100,"github_topics":101,"view_count":10,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":109,"updated_at":110,"faqs":111,"releases":142},3349,"mrdbourke\u002Ftensorflow-deep-learning","tensorflow-deep-learning","All course materials for the Zero to Mastery Deep Learning with TensorFlow course.","tensorflow-deep-learning 是 Zero to Mastery 深度学习课程的全套开源学习资料，旨在帮助学习者从零开始掌握使用 TensorFlow 和 Keras 构建与训练神经网络的核心技能。它系统性地解决了初学者在面对深度学习复杂概念时缺乏结构化路径、实战代码和最新行业实践的痛点，涵盖了从基础理论到计算机视觉、自然语言处理等实际项目的全流程内容。\n\n这套资料非常适合希望转行或提升技能的开发者、数据科学爱好者以及相关专业学生使用。无论你是否具备编程基础，只要对人工智能感兴趣，都能通过它循序渐进地学习。其独特亮点在于不仅提供了完整的 Jupyter 笔记本代码示例，还配套了免费的在线书籍版本和部分公开视频课程，并持续维护更新以适配 TensorFlow 新版本（如 
EfficientNetV2 迁移、命名空间调整等），确保所学技术与业界同步。此外，课程设计了丰富的练习题和扩展资源，鼓励动手实践，真正做到“学以致用”。","# Zero to Mastery Deep Learning with TensorFlow\n\nAll of the course materials for the [Zero to Mastery Deep Learning with TensorFlow course](https:\u002F\u002Fdbourke.link\u002FZTMTFcourse).\n\nThis course will teach you the foundations of deep learning and how to build and train neural networks for various problem types with TensorFlow\u002FKeras.\n\n## Important links\n* 🎥 Watch the [first 14-hours of the course on YouTube](https:\u002F\u002Fdbourke.link\u002Ftfpart1part2) (notebooks 00, 01, 02)\n* 📖 Read the [beautiful online book version of the course](https:\u002F\u002Fdev.mrdbourke.com\u002Ftensorflow-deep-learning\u002F)\n* 💻 [Sign up](https:\u002F\u002Fdbourke.link\u002FZTMTFcourse) to the full course on the Zero to Mastery Academy (videos for notebooks 03-10)\n* 🤔 Got questions about the course? Check out the [livestream Q&A for the course launch](https:\u002F\u002Fyoutu.be\u002FrqAqcFcfeK8)\n* 📝 Get a quick overview of TensorFlow with the [TensorFlow Cheatsheet](https:\u002F\u002Fzerotomastery.io\u002Fcheatsheets\u002Ftensorflow-cheat-sheet\u002F)\n\n## Contents of this page\n- [Fixes and updates](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning#fixes-and-updates)\n- [Course materials](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning#course-materials) (everything you'll need for completing the course)\n- [Course structure](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning#course-structure) (how this course is taught)\n- [Should you do this course?](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning#should-you-do-this-course) (decide by answering a couple simple questions)\n- [Prerequisites](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning#prerequisites) (what skills you'll need to do this course)\n- [Exercises & 
Extra-curriculum](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning#-exercises---extra-curriculum) (challenges to practice what you've learned and resources to learn more)\n- [Ask a question](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning#ask-questions) (like to know more? go here)\n- [Status](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning#status)\n- [Log](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning#log) (updates, changes and progress)\n\n## Fixes and updates \n\n* 2 May 2024 - Update section 11 to reflect closing of TensorFlow Developer Certification program by Google (see [#645](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fdiscussions\u002F645) for more)\n* 18 Aug 2023 - Update [Notebook 05](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fblob\u002Fmain\u002F05_transfer_learning_in_tensorflow_part_2_fine_tuning.ipynb) to fix [#544](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fissues\u002F544) and [#553](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fissues\u002F553), see https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fdiscussions\u002F575 for full notes\n     * In short, if you're using `tf.keras.applications.EfficientNetB0` and facing errors, swap to [`tf.keras.applications.efficientnet_v2.EfficientNetV2B0`](https:\u002F\u002Fwww.tensorflow.org\u002Fapi_docs\u002Fpython\u002Ftf\u002Fkeras\u002Fapplications\u002Fefficientnet_v2\u002FEfficientNetV2B0)\n* 26 May 2023 - Update [Notebook 08](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fblob\u002Fmain\u002F08_introduction_to_nlp_in_tensorflow.ipynb) for new version of TensorFlow + update [Notebook 
09](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fblob\u002Fmain\u002F09_SkimLit_nlp_milestone_project_2.ipynb) for new version of TensorFlow & spaCy, see update notes for 09: https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fdiscussions\u002F557 \n* 19 May 2023 - Update [Notebook 07](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fblob\u002Fmain\u002F07_food_vision_milestone_project_1.ipynb) for new version of TensorFlow + fix model loading errors (TensorFlow 2.13+ required), see: https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fdiscussions\u002F550\n* 18 May 2023 - Update Notebook 06 for new TensorFlow namespaces (no major functionality change, just different imports), see: https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fdiscussions\u002F549 \n* 12 May 2023 - Notebook 05 new namespaces added for `tf.keras.layers`, see https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fdiscussions\u002F547, also add fix for issue with `model.load_weights()` in Notebook 05, see https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fissues\u002F544, if you're having trouble saving\u002Floading the model weights, also see https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fissues\u002F553 \n* 12 May 2023 - Newer versions of TensorFlow (2.10+) use `learning_rate` instead of `lr` in `tf.keras.optimizers` (e.g. 
`tf.keras.optimizers.Adam(learning_rate=0.001)`), old `lr` still works but is deprecated\n* 02 Dec 2021 - Added fix for TensorFlow 2.7.0+ for notebook 02, [see discussion for more](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fdiscussions\u002F278)\n* 11 Nov 2021 - Added fix for TensorFlow 2.7.0+ for notebook 01, [see discussion for more](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fdiscussions\u002F256)\n\n## Course materials\n\nThis table is the ground truth for course materials. All the links you need for everything will be here.\n\nKey:\n* **Number:** The number of the target notebook (this may not match the video section of the course but it ties together all of the materials in the table)\n* **Notebook:** The notebook for a particular module with lots of code and text annotations (notebooks from the videos are based on these)\n* **Data\u002Fmodel:** Links to datasets\u002Fpre-trained models for the associated notebook\n* **Exercises & Extra-curriculum:** Each module comes with a set of exercises and extra-curriculum to help you practice your skills and learn more; I suggest going through these **before** you move on to the next module\n* **Slides:** Although we focus on writing TensorFlow code, we sometimes use pretty slides to describe different concepts; you'll find them here\n\n**Note:** You can get all of the notebook code created during the videos in the [`video_notebooks`](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Ftree\u002Fmain\u002Fvideo_notebooks) directory.\n\n| Number | Notebook | Data\u002FModel | Exercises & Extra-curriculum | Slides |\n| ----- |  ----- |  ----- |  ----- |  ----- |\n| 00 | [TensorFlow Fundamentals](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fblob\u002Fmain\u002F00_tensorflow_fundamentals.ipynb) |  | [Go to exercises & 
extra-curriculum](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning#-00-tensorflow-fundamentals-exercises) | [Go to slides](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fblob\u002Fmain\u002Fslides\u002F00_introduction_to_tensorflow_and_deep_learning.pdf) |\n| 01 | [TensorFlow Regression](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fblob\u002Fmain\u002F01_neural_network_regression_in_tensorflow.ipynb) |  | [Go to exercises & extra-curriculum](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning#-01-neural-network-regression-with-tensorflow-exercises) | [Go to slides](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fblob\u002Fmain\u002Fslides\u002F01_neural_network_regression_with_tensorflow.pdf) |\n| 02 | [TensorFlow Classification](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fblob\u002Fmain\u002F02_neural_network_classification_in_tensorflow.ipynb) |  | [Go to exercises & extra-curriculum](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning#-02-neural-network-classification-with-tensorflow-exercises) | [Go to slides](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fblob\u002Fmain\u002Fslides\u002F02_neural_network_classification_with_tensorflow.pdf) |\n| 03 | [TensorFlow Computer Vision](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fblob\u002Fmain\u002F03_convolutional_neural_networks_in_tensorflow.ipynb) | [`pizza_steak`](https:\u002F\u002Fstorage.googleapis.com\u002Fztm_tf_course\u002Ffood_vision\u002Fpizza_steak.zip), [`10_food_classes_all_data`](https:\u002F\u002Fstorage.googleapis.com\u002Fztm_tf_course\u002Ffood_vision\u002F10_food_classes_all_data.zip) | [Go to exercises & 
extra-curriculum](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning#-03-computer-vision--convolutional-neural-networks-in-tensorflow-exercises) | [Go to slides](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fblob\u002Fmain\u002Fslides\u002F03_convolution_neural_networks_and_computer_vision_with_tensorflow.pdf) |\n| 04 | [Transfer Learning Part 1: Feature extraction](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fblob\u002Fmain\u002F04_transfer_learning_in_tensorflow_part_1_feature_extraction.ipynb) | [`10_food_classes_10_percent`](https:\u002F\u002Fstorage.googleapis.com\u002Fztm_tf_course\u002Ffood_vision\u002F10_food_classes_10_percent.zip) | [Go to exercises & extra-curriculum](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning#-04-transfer-learning-in-tensorflow-part-1-feature-extraction-exercises) | [Go to slides](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fblob\u002Fmain\u002Fslides\u002F04_transfer_learning_with_tensorflow_part_1_feature_extraction.pdf) |\n| 05 | [Transfer Learning Part 2: Fine-tuning](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fblob\u002Fmain\u002F05_transfer_learning_in_tensorflow_part_2_fine_tuning.ipynb) | [`10_food_classes_10_percent`](https:\u002F\u002Fstorage.googleapis.com\u002Fztm_tf_course\u002Ffood_vision\u002F10_food_classes_10_percent.zip), [`10_food_classes_1_percent`](https:\u002F\u002Fstorage.googleapis.com\u002Fztm_tf_course\u002Ffood_vision\u002F10_food_classes_1_percent.zip), [`10_food_classes_all_data`](https:\u002F\u002Fstorage.googleapis.com\u002Fztm_tf_course\u002Ffood_vision\u002F10_food_classes_all_data.zip) | [Go to exercises & extra-curriculum](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning#-05-transfer-learning-in-tensorflow-part-2-fine-tuning-exercises) | [Go to 
slides](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fblob\u002Fmain\u002Fslides\u002F05_transfer_learning_with_tensorflow_part_2_fine_tuning.pdf) |\n| 06 | [Transfer Learning Part 3: Scaling up](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fblob\u002Fmain\u002F06_transfer_learning_in_tensorflow_part_3_scaling_up.ipynb) | [`101_food_classes_10_percent`](https:\u002F\u002Fstorage.googleapis.com\u002Fztm_tf_course\u002Ffood_vision\u002F101_food_classes_10_percent.zip), [`custom_food_images`](https:\u002F\u002Fstorage.googleapis.com\u002Fztm_tf_course\u002Ffood_vision\u002Fcustom_food_images.zip), [`fine_tuned_efficientnet_model`](https:\u002F\u002Fstorage.googleapis.com\u002Fztm_tf_course\u002Ffood_vision\u002F06_101_food_class_10_percent_saved_big_dog_model.zip) | [Go to exercises & extra-curriculum](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning#-06-transfer-learning-in-tensorflow-part-3-scaling-up-exercises) | [Go to slides](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fblob\u002Fmain\u002Fslides\u002F06_transfer_learning_with_tensorflow_part_3_scaling_up.pdf) |\n| 07 | [Milestone Project 1: Food Vision 🍔👁](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fblob\u002Fmain\u002F07_food_vision_milestone_project_1.ipynb), [Template (your challenge)](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fblob\u002Fmain\u002Fextras\u002FTEMPLATE_07_food_vision_milestone_project_1.ipynb) | [`feature_extraction_mixed_precision_efficientnet_model`](https:\u002F\u002Fstorage.googleapis.com\u002Fztm_tf_course\u002Ffood_vision\u002F07_efficientnetb0_feature_extract_model_mixed_precision.zip), [`fine_tuned_mixed_precision_efficientnet_model`](https:\u002F\u002Fstorage.googleapis.com\u002Fztm_tf_course\u002Ffood_vision\u002F07_efficientnetb0_fine_tuned_101_classes_mixed_precision.zip) | [Go to 
exercises & extra-curriculum](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning#-07-milestone-project-1--food-vision-big-exercises) | [Go to slides](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fblob\u002Fmain\u002Fslides\u002F07_milestone_project_1_food_vision.pdf) |\n| 08 | [TensorFlow NLP Fundamentals](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fblob\u002Fmain\u002F08_introduction_to_nlp_in_tensorflow.ipynb) | [`diaster_or_no_diaster_tweets`](https:\u002F\u002Fstorage.googleapis.com\u002Fztm_tf_course\u002Fnlp_getting_started.zip), [`USE_feature_extractor_model`](https:\u002F\u002Fstorage.googleapis.com\u002Fztm_tf_course\u002F08_model_6_USE_feature_extractor.zip) | [Go to exercises & extra-curriculum](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning#-08-introduction-to-nlp-natural-language-processing-in-tensorflow-exercises)  | [Go to slides](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fblob\u002Fmain\u002Fslides\u002F08_natural_language_processing_in_tensorflow.pdf) |\n| 09 | [Milestone Project 2: SkimLit 📄🔥](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fblob\u002Fmain\u002F09_SkimLit_nlp_milestone_project_2.ipynb) | [`pubmed_RCT_200k_dataset`](https:\u002F\u002Fgithub.com\u002FFranck-Dernoncourt\u002Fpubmed-rct.git), [`skimlit_tribrid_model`](https:\u002F\u002Fstorage.googleapis.com\u002Fztm_tf_course\u002Fskimlit\u002Fskimlit_tribrid_model.zip) | [Go to exercises & extra-curriculum](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning#-09-milestone-project-2-skimlit--exercises) | [Go to slides](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fblob\u002Fmain\u002Fslides\u002F09_milestone_project_2_skimlit.pdf) |\n| 10 | [TensorFlow Time Series Fundamentals & Milestone Project 3: BitPredict 
💰📈](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fblob\u002Fmain\u002F10_time_series_forecasting_in_tensorflow.ipynb) | [`bitcoin_price_data_USD_2013-10-01_2021-05-18.csv`](https:\u002F\u002Fraw.githubusercontent.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fmain\u002Fextras\u002FBTC_USD_2013-10-01_2021-05-18-CoinDesk.csv) | [Go to exercises & extra-curriculum](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning#-10-time-series-fundamentals-and-milestone-project-3-bitpredict--exercises) | [Go to slides](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fblob\u002Fmain\u002Fslides\u002F10_time_series_fundamentals_and_milestone_project_3_bitpredict.pdf) |\n| 11 | [Preparing to Pass the TensorFlow Developer Certification Exam (archive)](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fblob\u002Fmain\u002F11_passing_the_tensorflow_developer_certification_exam.md) | | [Go to exercises & extra-curriculum](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning#-11-passing-the-tensorflow-developer-certification-exercises) | [Go to slides](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fblob\u002Fmain\u002Fslides\u002F11_passing_the_tensorflow_developer_certification_exam.pdf) |\n\n## Course structure\n\nThis course is code first. The goal is to get you writing deep learning code as soon as possible.\n\nIt is taught with the following mantra:\n\n```\nCode -> Concept -> Code -> Concept -> Code -> Concept\n```\n\nThis means we write code first then step through the concepts behind it.\n\nIf you've got 6-months experience writing Python code and a willingness to learn (most important), you'll be able to do the course.\n\n## Should you do this course?\n\n> Do you have 1+ years experience with deep learning and writing TensorFlow code?\n\nIf yes, no you shouldn't, use your skills to build something. 
\n\nIf no, move on to the next question.\n\n> Have you done at least one beginner machine learning course and would like to learn about deep learning\u002Fhow to build neural networks with TensorFlow?\n\nIf yes, this course is for you.\n\nIf no, go and do a beginner machine learning course and if you decide you want to learn TensorFlow, this page will still be here.\n\n## Prerequisites\n\n> What do I need to know to go through this course?\n\n* **6+ months writing Python code.** Can you write a Python function which accepts and uses parameters? That’s good enough. If you don’t know what that means, spend another month or two writing Python code and then come back here.\n* **At least one beginner machine learning course.** Are you familiar with the idea of training, validation and test sets? Do you know what supervised learning is? Have you used pandas, NumPy or Matplotlib before? If no to any of these, I’d suggest going through at least one machine learning course which teaches these first and then coming back. \n* **Comfortable using Google Colab\u002FJupyter Notebooks.** This course uses Google Colab throughout. If you have never used Google Colab before, it works very similarly to Jupyter Notebooks with a few extra features. 
If you’re not familiar with Google Colab notebooks, I’d suggest going through the Introduction to Google Colab notebook.\n* **Plug:** The [Zero to Mastery beginner-friendly machine learning course](https:\u002F\u002Fdbourke.link\u002FZTMMLcourse) (I also teach this) teaches all of the above (and this course is designed as a follow-on).\n\n## 🛠 Exercises & 📖 Extra-curriculum\n\nTo prevent the course from being 100+ hours (deep learning is a broad field), various external resources for different sections are recommended for you to peruse at your own discretion.\n\nYou can find solutions to the exercises in [`extras\u002Fsolutions\u002F`](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Ftree\u002Fmain\u002Fextras\u002Fsolutions); there's a notebook per set of exercises (one for 00, 01, 02... etc). Thank you to [Ashik Shafi](https:\u002F\u002Fgithub.com\u002Fashikshafi08) for all of the effort creating these.\n\n---\n\n### 🛠 00. TensorFlow Fundamentals Exercises\n\n1. Create a vector, scalar, matrix and tensor with values of your choosing using `tf.constant()`.\n2. Find the shape, rank and size of the tensors you created in 1.\n3. Create two tensors containing random values between 0 and 1 with shape `[5, 300]`.\n4. Multiply the two tensors you created in 3 using matrix multiplication.\n5. Multiply the two tensors you created in 3 using dot product.\n6. Create a tensor with random values between 0 and 1 with shape `[224, 224, 3]`.\n7. Find the min and max values of the tensor you created in 6 along the first axis.\n8. Create a tensor with random values of shape `[1, 224, 224, 3]` then squeeze it to change the shape to `[224, 224, 3]`.\n9. Create a tensor with shape `[10]` using your own choice of values, then find the index which has the maximum value.\n10. One-hot encode the tensor you created in 9.\n\n### 📖 00. 
TensorFlow Fundamentals Extra-curriculum \n\n* Read through the [list of TensorFlow Python APIs](https:\u002F\u002Fwww.tensorflow.org\u002Fapi_docs\u002Fpython\u002F), pick one we haven't gone through in this notebook, reverse engineer it (write out the documentation code for yourself) and figure out what it does.\n* Try to create a series of tensor functions to calculate your most recent grocery bill (it's okay if you don't use the names of the items, just the price in numerical form).\n  * How would you calculate your grocery bill for the month and for the year using tensors?\n* Go through the [TensorFlow 2.x quick start for beginners](https:\u002F\u002Fwww.tensorflow.org\u002Ftutorials\u002Fquickstart\u002Fbeginner) tutorial (be sure to type out all of the code yourself, even if you don't understand it).\n  * Are there any functions we used in here that match what's used in there? Which are the same? Which haven't you seen before?\n* Watch the video [\"What's a tensor?\"](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=f5liqUk0ZTw) - a great visual introduction to many of the concepts we've covered in this notebook.\n\n---\n\n### 🛠 01. Neural network regression with TensorFlow Exercises\n\n1. Create your own regression dataset (or make the one we created in \"Create data to view and fit\" bigger) and build and fit a model to it.\n2. Try building a neural network with 4 Dense layers and fitting it to your own regression dataset; how does it perform?\n3. 
Try and improve the results we got on the insurance dataset; some things you might want to try include:\n  * Building a larger model (how does one with 4 dense layers go?).\n  * Increasing the number of units in each layer.\n  * Look up the documentation of [Adam](https:\u002F\u002Fwww.tensorflow.org\u002Fapi_docs\u002Fpython\u002Ftf\u002Fkeras\u002Foptimizers\u002FAdam) and find out what the first parameter is; what happens if you increase it by 10x?\n  * What happens if you train for longer (say 300 epochs instead of 200)? \n4. Import the [Boston pricing dataset](https:\u002F\u002Fwww.tensorflow.org\u002Fapi_docs\u002Fpython\u002Ftf\u002Fkeras\u002Fdatasets\u002Fboston_housing\u002Fload_data) from TensorFlow [`tf.keras.datasets`](https:\u002F\u002Fwww.tensorflow.org\u002Fapi_docs\u002Fpython\u002Ftf\u002Fkeras\u002Fdatasets) and model it.\n\n### 📖 01. Neural network regression with TensorFlow Extra-curriculum\n\n* [MIT introduction to deep learning lecture 1](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=7sB052Pz0sQ&ab_channel=AlexanderAmini) - gives a great overview of what's happening behind all of the code we're running.\n* Reading: 1 hour of [Chapter 1 of Neural Networks and Deep Learning](http:\u002F\u002Fneuralnetworksanddeeplearning.com\u002Fchap1.html) by Michael Nielsen - a great in-depth and hands-on example of the intuition behind neural networks.\n* To practice your regression modelling with TensorFlow, I'd also encourage you to look through [Kaggle's datasets](https:\u002F\u002Fwww.kaggle.com\u002Fdata), find a regression dataset which sparks your interest and try to model it.\n\n---\n\n### 🛠 02. Neural network classification with TensorFlow Exercises\n\n1. Play with neural networks in the [TensorFlow Playground](https:\u002F\u002Fplayground.tensorflow.org\u002F) for 10 minutes. Especially try different values of the learning rate; what happens when you decrease it? What happens when you increase it?\n2. 
Replicate the model pictured in the [TensorFlow Playground diagram](https:\u002F\u002Fplayground.tensorflow.org\u002F#activation=relu&batchSize=10&dataset=circle&regDataset=reg-plane&learningRate=0.001&regularizationRate=0&noise=0&networkShape=6,6,6,6,6&seed=0.51287&showTestData=false&discretize=false&percTrainData=50&x=true&y=true&xTimesY=false&xSquared=false&ySquared=false&cosX=false&sinX=false&cosY=false&sinY=false&collectStats=false&problem=classification&initZero=false&hideText=false&regularization_hide=true&discretize_hide=true&regularizationRate_hide=true&percTrainData_hide=true&dataset_hide=true&problem_hide=true&noise_hide=true&batchSize_hide=true) below using TensorFlow code. Compile it using the Adam optimizer, binary crossentropy loss and accuracy metric. Once it's compiled check a summary of the model.\n![tensorflow playground example neural network](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmrdbourke_tensorflow-deep-learning_readme_64541eafec62.png)\n*Try this network out for yourself on the [TensorFlow Playground website](https:\u002F\u002Fplayground.tensorflow.org\u002F#activation=relu&batchSize=10&dataset=circle&regDataset=reg-plane&learningRate=0.001&regularizationRate=0&noise=0&networkShape=6,6,6,6,6&seed=0.51287&showTestData=false&discretize=false&percTrainData=50&x=true&y=true&xTimesY=false&xSquared=false&ySquared=false&cosX=false&sinX=false&cosY=false&sinY=false&collectStats=false&problem=classification&initZero=false&hideText=false&regularization_hide=true&discretize_hide=true&regularizationRate_hide=true&percTrainData_hide=true&dataset_hide=true&problem_hide=true&noise_hide=true&batchSize_hide=true). Hint: there are 5 hidden layers but the output layer isn't pictured, you'll have to decide what the output layer should be based on the input data.*\n3. 
Create a classification dataset using Scikit-Learn's [`make_moons()`](https:\u002F\u002Fscikit-learn.org\u002Fstable\u002Fmodules\u002Fgenerated\u002Fsklearn.datasets.make_moons.html) function, visualize it and then build a model to fit it at over 85% accuracy.\n4. Train a model to get 88%+ accuracy on the fashion MNIST test set. Plot a confusion matrix to see the results after.\n5. Recreate [TensorFlow's](https:\u002F\u002Fwww.tensorflow.org\u002Fapi_docs\u002Fpython\u002Ftf\u002Fkeras\u002Factivations\u002Fsoftmax) [softmax activation function](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FSoftmax_function) in your own code. Make sure it can accept a tensor and return that tensor after having the softmax function applied to it.\n6. Create a function (or write code) to visualize multiple image predictions for the fashion MNIST at the same time. Plot at least three different images and their prediction labels at the same time. Hint: see the [classification tutorial in the TensorFlow documentation](https:\u002F\u002Fwww.tensorflow.org\u002Ftutorials\u002Fkeras\u002Fclassification) for ideas.\n7. Make a function to show an image of a certain class of the fashion MNIST dataset and make a prediction on it. For example, plot 3 images of the `T-shirt` class with their predictions.\n\n### 📖 02. Neural network classification with TensorFlow Extra-curriculum\n\n* Watch 3Blue1Brown's neural networks video 2: [*Gradient descent, how neural networks learn*](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=IHZwWFHWa-w). After you're done, write 100 words about what you've learned.\n  * If you haven't already, watch video 1: [*But what is a Neural Network?*](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=aircAruvnKk). 
Note the activation function they talk about at the end.\n* Watch [MIT's introduction to deep learning lecture 1](https:\u002F\u002Fyoutu.be\u002FnjKP3FqW3Sk) (if you haven't already) to get an idea of the concepts behind using linear and non-linear functions.\n* Spend 1-hour reading [Michael Nielsen's Neural Networks and Deep Learning book](http:\u002F\u002Fneuralnetworksanddeeplearning.com\u002Findex.html).\n* Read the [ML-Glossary documentation on activation functions](https:\u002F\u002Fml-cheatsheet.readthedocs.io\u002Fen\u002Flatest\u002Factivation_functions.html). Which one is your favourite?\n  * After you've read the ML-Glossary, see which activation functions are available in TensorFlow by searching \"tensorflow activation functions\".\n\n---\n\n### 🛠 03. Computer vision & convolutional neural networks in TensorFlow Exercises\n\n1. Spend 20-minutes reading and interacting with the [CNN explainer website](https:\u002F\u002Fpoloclub.github.io\u002Fcnn-explainer\u002F). \n * What are the key terms? e.g. explain convolution and pooling in your own words.\n2. Play around with the \"understanding hyperparameters\" section in the [CNN explainer](https:\u002F\u002Fpoloclub.github.io\u002Fcnn-explainer\u002F) website for 10-minutes.\n  * What is the kernel size?\n  * What is the stride? \n  * How could you adjust each of these in TensorFlow code?\n3. Take 10 photos of two different things and build your own CNN image classifier using the techniques we've built here.\n4. Find an ideal learning rate for a simple convolutional neural network model on your 10-class dataset.\n\n### 📖 03. Computer vision & convolutional neural networks in TensorFlow Extra-curriculum\n\n* **Watch:** [MIT's Introduction to Deep Computer Vision](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=uapdILWYTzE&list=PLtBw6njQRU-rwp5__7C0oIVt26ZgjG9NI&index=3&ab_channel=AlexanderAmini) lecture. 
This will give you a great intuition behind convolutional neural networks.\n* **Watch:** Deep dive on [mini-batch gradient descent](https:\u002F\u002Fyoutu.be\u002F-_4Zi8fCZO4) by deeplearning.ai. If you're still curious about why we use **batches** to train models, this technical overview covers many of the reasons why.\n* **Read:** [CS231n Convolutional Neural Networks for Visual Recognition](https:\u002F\u002Fcs231n.github.io\u002Fconvolutional-networks\u002F) class notes. This will give you a very deep understanding of what's going on behind the scenes of the convolutional neural network architectures we're writing. \n* **Read:** [\"A guide to convolution arithmetic for deep learning\"](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1603.07285.pdf). This paper goes through all of the mathematics running behind the scenes of our convolutional layers.\n* **Code practice:** [TensorFlow Data Augmentation Tutorial](https:\u002F\u002Fwww.tensorflow.org\u002Ftutorials\u002Fimages\u002Fdata_augmentation). For a more in-depth introduction to data augmentation with TensorFlow, spend an hour or two reading through this tutorial.\n\n---\n\n### 🛠 04. Transfer Learning in TensorFlow Part 1: Feature Extraction Exercises\n\n1. Build and fit a model using the same data we have here but with the MobileNetV2 architecture feature extraction ([`mobilenet_v2_100_224\u002Ffeature_vector`](https:\u002F\u002Ftfhub.dev\u002Fgoogle\u002Fimagenet\u002Fmobilenet_v2_100_224\u002Ffeature_vector\u002F4)) from TensorFlow Hub. How does it perform compared to our other models?\n2. Name 3 different image classification models on TensorFlow Hub that we haven't used.\n3. Build a model to classify images of two different things you've taken photos of.\n  * You can use any feature extraction layer from TensorFlow Hub you like for this.\n  * You should aim to have at least 10 images of each class. For example, to build a fridge versus oven classifier, you'll want 10 images of fridges and 10 images of ovens.\n4. 
What is the current best performing model on ImageNet?\n  * Hint: you might want to check [sotabench.com](https:\u002F\u002Fwww.sotabench.com) for this.\n\n### 📖 04. Transfer Learning in TensorFlow Part 1: Feature Extraction Extra-curriculum\n\n* Read through the [TensorFlow Transfer Learning Guide](https:\u002F\u002Fwww.tensorflow.org\u002Ftutorials\u002Fimages\u002Ftransfer_learning) and define the main two types of transfer learning in your own words.\n* Go through the [Transfer Learning with TensorFlow Hub tutorial](https:\u002F\u002Fwww.tensorflow.org\u002Ftutorials\u002Fimages\u002Ftransfer_learning_with_hub) on the TensorFlow website and rewrite all of the code yourself into a new Google Colab notebook, making comments about what each step does along the way.\n* We haven't covered fine-tuning with TensorFlow Hub in this notebook, but if you'd like to know more, go through the [fine-tuning a TensorFlow Hub model tutorial](https:\u002F\u002Fwww.tensorflow.org\u002Fhub\u002Ftf2_saved_model#fine-tuning) on the TensorFlow homepage.\n* Look into [experiment tracking with Weights & Biases](https:\u002F\u002Fwww.wandb.com\u002Fexperiment-tracking). How could you integrate it with our existing TensorBoard logs?\n\n---\n\n### 🛠 05. Transfer Learning in TensorFlow Part 2: Fine-tuning Exercises\n\n1. Use feature-extraction to train a transfer learning model on 10% of the Food Vision data for 10 epochs using [`tf.keras.applications.EfficientNetB0`](https:\u002F\u002Fwww.tensorflow.org\u002Fapi_docs\u002Fpython\u002Ftf\u002Fkeras\u002Fapplications\u002FEfficientNetB0) as the base model. Use the [`ModelCheckpoint`](https:\u002F\u002Fwww.tensorflow.org\u002Fapi_docs\u002Fpython\u002Ftf\u002Fkeras\u002Fcallbacks\u002FModelCheckpoint) callback to save the weights to file.\n2. Fine-tune the last 20 layers of the base model you trained in 1 for another 10 epochs. How did it go?\n3. 
Fine-tune the last 30 layers of the base model you trained in 2 for another 10 epochs. How did it go?\n4. Write a function to pick an image from any dataset (train or test) and any class (e.g. \"steak\", \"pizza\"... etc), visualize it and make a prediction on it using a trained model.\n\n### 📖 05. Transfer Learning in TensorFlow Part 2: Fine-tuning Extra-curriculum\n\n* Read the [documentation on data augmentation](https:\u002F\u002Fwww.tensorflow.org\u002Ftutorials\u002Fimages\u002Fdata_augmentation) in TensorFlow.\n* Read the [ULMFit paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1801.06146) (technical) for an introduction to the concept of freezing and unfreezing different layers.\n* Read up on learning rate scheduling (there's a [TensorFlow callback](https:\u002F\u002Fwww.tensorflow.org\u002Fapi_docs\u002Fpython\u002Ftf\u002Fkeras\u002Fcallbacks\u002FLearningRateScheduler) for this). How could this influence our model training?\n  * If you're training for longer, you probably want to reduce the learning rate as you go... the closer you get to the bottom of the hill, the smaller steps you want to take. Imagine it like finding a coin at the bottom of your couch. In the beginning your arm movements are going to be large and the closer you get, the smaller your movements become.\n  \n---\n\n### 🛠 06. Transfer Learning in TensorFlow Part 3: Scaling-up Exercises\n\n1. Take 3 of your own photos of food and use the trained model to make predictions on them, share your predictions with the other students in Discord and show off your Food Vision model 🍔👁.\n2. Train a feature-extraction transfer learning model for 10 epochs on the same data and compare its performance versus a model which used feature extraction for 5 epochs and fine-tuning for 5 epochs (like we've used in this notebook). Which method is better?\n3. 
Recreate the first model (the feature extraction model) with [`mixed_precision`](https:\u002F\u002Fwww.tensorflow.org\u002Fguide\u002Fmixed_precision) turned on. \n  * Does it make the model train faster? \n  * Does it affect the accuracy or performance of our model? \n  * What are the advantages of using `mixed_precision` training?\n\n### 📖 06. Transfer Learning in TensorFlow Part 3: Scaling-up Extra-curriculum\n* Spend 15-minutes reading up on the [EarlyStopping callback](https:\u002F\u002Fwww.tensorflow.org\u002Fapi_docs\u002Fpython\u002Ftf\u002Fkeras\u002Fcallbacks\u002FEarlyStopping). What does it do? How could we use it in our model training?\n* Spend an hour reading about [Streamlit](https:\u002F\u002Fwww.streamlit.io\u002F). What does it do? How might you integrate some of the things we've done in this notebook in a Streamlit app?\n\n---\n\n### 🛠 07. Milestone Project 1: 🍔👁 Food Vision Big™ Exercises\n\n**Note:** The chief exercise for Milestone Project 1 is to finish the \"TODO\" sections in the [Milestone Project 1 Template notebook](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fblob\u002Fmain\u002Fextras\u002FTEMPLATE_07_food_vision_milestone_project_1.ipynb). After doing so, move on to the following.\n\n1. Use the same evaluation techniques on the large-scale Food Vision model as you did in the previous notebook ([Transfer Learning Part 3: Scaling up](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fblob\u002Fmain\u002F06_transfer_learning_in_tensorflow_part_3_scaling_up.ipynb)). 
More specifically, it would be good to see:\n  * A confusion matrix between all of the model's predictions and true labels.\n  * A graph showing the f1-scores of each class.\n  * A visualization of the model making predictions on various images and comparing the predictions to the ground truth.\n    * For example, plot a sample image from the test dataset and have the title of the plot show the prediction, the prediction probability and the ground truth label. \n  * **Note:** To compare predicted labels to test labels, it might be a good idea when loading the test data to set `shuffle=False` (so the ordering of test data is preserved alongside the order of predicted labels).\n2. Take 3 of your own photos of food and use the Food Vision model to make predictions on them. How does it go? Share your images\u002Fpredictions with the other students.\n3. Retrain the model (feature extraction and fine-tuning) we trained in this notebook, except this time use [`EfficientNetB4`](https:\u002F\u002Fwww.tensorflow.org\u002Fapi_docs\u002Fpython\u002Ftf\u002Fkeras\u002Fapplications\u002FEfficientNetB4) as the base model instead of `EfficientNetB0`. Do you notice an improvement in performance? Does it take longer to train? Are there any tradeoffs to consider?\n4. Name one important benefit of mixed precision training. How does this benefit come about?\n\n### 📖 07. Milestone Project 1: 🍔👁 Food Vision Big™ Extra-curriculum\n\n* Read up on learning rate scheduling and the [learning rate scheduler callback](https:\u002F\u002Fwww.tensorflow.org\u002Fapi_docs\u002Fpython\u002Ftf\u002Fkeras\u002Fcallbacks\u002FLearningRateScheduler). What is it? And how might it be helpful to this project?\n* Read up on TensorFlow data loaders ([improving TensorFlow data loading performance](https:\u002F\u002Fwww.tensorflow.org\u002Fguide\u002Fdata_performance)). Is there anything we've missed? What methods should you keep in mind whenever loading data in TensorFlow? 
Hint: check the summary at the bottom of the page for a great round-up of ideas.\n* Read up on the documentation for [TensorFlow mixed precision training](https:\u002F\u002Fwww.tensorflow.org\u002Fguide\u002Fmixed_precision). What are the important things to keep in mind when using mixed precision training?\n\n---\n\n### 🛠 08. Introduction to NLP (Natural Language Processing) in TensorFlow Exercises\n1. Rebuild, compile and train `model_1`, `model_2` and `model_5` using the [Keras Sequential API](https:\u002F\u002Fwww.tensorflow.org\u002Fapi_docs\u002Fpython\u002Ftf\u002Fkeras\u002FSequential) instead of the Functional API.\n2. Retrain the baseline model with 10% of the training data. How does it perform compared to the Universal Sentence Encoder model with 10% of the training data?\n3. Try fine-tuning the TF Hub Universal Sentence Encoder model by setting `training=True` when instantiating it as a Keras layer.\n\n```\n# We can use this encoding layer in place of our text_vectorizer and embedding layer\nsentence_encoder_layer = hub.KerasLayer(\"https:\u002F\u002Ftfhub.dev\u002Fgoogle\u002Funiversal-sentence-encoder\u002F4\",\n                                        input_shape=[],\n                                        dtype=tf.string,\n                                        trainable=True) # turn training on to fine-tune the TensorFlow Hub model\n```\n4. Retrain the best model you've got so far on the whole training set (no validation split). Then use this trained model to make predictions on the test dataset and format the predictions into the same format as the `sample_submission.csv` file from Kaggle (see the Files tab in Colab for what the `sample_submission.csv` file looks like). Once you've done this, [make a submission to the Kaggle competition](https:\u002F\u002Fwww.kaggle.com\u002Fc\u002Fnlp-getting-started\u002Fdata). How did your model perform?\n5. 
Combine the ensemble predictions using the majority vote (mode). How does this perform compared to averaging the prediction probabilities of each model?\n6. Make a confusion matrix with the best performing model's predictions on the validation set and the validation ground truth labels.\n\n### 📖 08. Introduction to NLP (Natural Language Processing) in TensorFlow Extra-curriculum\nTo practice what you've learned, a good idea would be to spend an hour on 3 of the following (3-hours total, though you could go through them all if you want) and then write a blog post about what you've learned.\n\n* For an overview of the different problems within NLP and how to solve them, read through: \n  * [A Simple Introduction to Natural Language Processing](https:\u002F\u002Fbecominghuman.ai\u002Fa-simple-introduction-to-natural-language-processing-ea66a1747b32)\n  * [How to solve 90% of NLP problems: a step-by-step guide](https:\u002F\u002Fblog.insightdatascience.com\u002Fhow-to-solve-90-of-nlp-problems-a-step-by-step-guide-fda605278e4e)\n* Go through [MIT's Recurrent Neural Networks lecture](https:\u002F\u002Fyoutu.be\u002FSEnXr6v2ifU). This will be one of the greatest additions to your understanding of what's happening behind the RNN models you've been building.\n* Read through the [word embeddings page on the TensorFlow website](https:\u002F\u002Fwww.tensorflow.org\u002Ftutorials\u002Ftext\u002Fword_embeddings). Embeddings are such a large part of NLP. We've covered them throughout this notebook but extra practice would be well worth it. A good exercise would be to write out all the code in the guide in a new notebook. \n* For more on RNNs in TensorFlow, read and reproduce [the TensorFlow RNN guide](https:\u002F\u002Fwww.tensorflow.org\u002Fguide\u002Fkeras\u002Frnn). We've covered many of the concepts in this guide, but it's worth writing the code again for yourself.\n* Text data doesn't always come in a nice package like the data we've downloaded. 
So if you're after more on preparing different text sources for use with your TensorFlow deep learning models, it's worth checking out the following:\n  * [TensorFlow text loading tutorial](https:\u002F\u002Fwww.tensorflow.org\u002Ftutorials\u002Fload_data\u002Ftext).\n  * [Reading text files with Python](https:\u002F\u002Frealpython.com\u002Fread-write-files-python\u002F) by Real Python.\n* This notebook has focused on writing NLP code. For a mathematically rich overview of how NLP with Deep Learning happens, read [Stanford's Natural Language Processing with Deep Learning lecture notes Part 1](https:\u002F\u002Fweb.stanford.edu\u002Fclass\u002Fcs224n\u002Freadings\u002Fcs224n-2019-notes01-wordvecs1.pdf).  \n  * For an even deeper dive, you could even do the whole [CS224n](http:\u002F\u002Fweb.stanford.edu\u002Fclass\u002Fcs224n\u002F) (Natural Language Processing with Deep Learning) course. \n* Great blog posts to read:\n  * Andrej Karpathy's [The Unreasonable Effectiveness of RNNs](https:\u002F\u002Fkarpathy.github.io\u002F2015\u002F05\u002F21\u002Frnn-effectiveness\u002F) dives into generating Shakespeare text with RNNs.\n  * [Text Classification with NLP: Tf-Idf vs Word2Vec vs BERT](https:\u002F\u002Ftowardsdatascience.com\u002Ftext-classification-with-nlp-tf-idf-vs-word2vec-vs-bert-41ff868d1794) by Mauro Di Pietro. An overview of different techniques for turning text into numbers and then classifying it.\n  * [What are word embeddings?](https:\u002F\u002Fmachinelearningmastery.com\u002Fwhat-are-word-embeddings\u002F) by Machine Learning Mastery.\n* Other topics worth looking into:\n  * [Attention mechanisms](https:\u002F\u002Fjalammar.github.io\u002Fvisualizing-neural-machine-translation-mechanics-of-seq2seq-models-with-attention\u002F). These are a foundational component of the transformer architecture and also often add improvements to deep NLP models.\n  * [Transformer architectures](http:\u002F\u002Fjalammar.github.io\u002Fillustrated-transformer\u002F). 
This model architecture has recently taken the NLP world by storm, achieving state of the art on many benchmarks. However, it does take a little more processing to get off the ground; the [HuggingFace Models (formerly HuggingFace Transformers) library](https:\u002F\u002Fhuggingface.co\u002Fmodels\u002F) is probably your best quick start.\n\n---\n\n### 🛠 09. Milestone Project 2: SkimLit 📄🔥 Exercises\n\n1. Train `model_5` on all of the data in the training dataset for as many epochs as it takes until it stops improving. Since this might take a while, you might want to use:\n  * [`tf.keras.callbacks.ModelCheckpoint`](https:\u002F\u002Fwww.tensorflow.org\u002Fapi_docs\u002Fpython\u002Ftf\u002Fkeras\u002Fcallbacks\u002FModelCheckpoint) to save the model's best weights only.\n  * [`tf.keras.callbacks.EarlyStopping`](https:\u002F\u002Fwww.tensorflow.org\u002Fapi_docs\u002Fpython\u002Ftf\u002Fkeras\u002Fcallbacks\u002FEarlyStopping) to stop the model from training once the validation loss has stopped improving for ~3 epochs.\n2. Check out the [Keras guide on using pretrained GloVe embeddings](https:\u002F\u002Fkeras.io\u002Fexamples\u002Fnlp\u002Fpretrained_word_embeddings\u002F). Can you get this working with one of our models?\n  * Hint: You'll want to incorporate it with a custom token [Embedding](https:\u002F\u002Fwww.tensorflow.org\u002Fapi_docs\u002Fpython\u002Ftf\u002Fkeras\u002Flayers\u002FEmbedding) layer.\n  * It's up to you whether or not you fine-tune the GloVe embeddings or leave them frozen.\n3. Try replacing the TensorFlow Hub Universal Sentence Encoder pretrained embedding with the [TensorFlow Hub BERT PubMed expert](https:\u002F\u002Ftfhub.dev\u002Fgoogle\u002Fexperts\u002Fbert\u002Fpubmed\u002F2) (a language model pretrained on PubMed texts) embedding. 
Does this affect results?\n  * Note: Using the BERT PubMed expert pretrained embedding requires an extra preprocessing step for sequences (as detailed in the [TensorFlow Hub guide](https:\u002F\u002Ftfhub.dev\u002Fgoogle\u002Fexperts\u002Fbert\u002Fpubmed\u002F2)).\n  * Does the BERT model beat the results mentioned in this paper? https:\u002F\u002Farxiv.org\u002Fpdf\u002F1710.06071.pdf \n4. What happens if you were to merge our `line_number` and `total_lines` features for each sequence? For example, create an `X_of_Y` feature instead. Does this affect model performance?\n  * Another example: `line_number=1` and `total_lines=11` turns into `line_of_X=1_of_11`.\n5. Write a function (or series of functions) to take a sample abstract string, preprocess it (in the same way our model has been trained), make a prediction on each sequence in the abstract and return the abstract in the format:\n  * `PREDICTED_LABEL`: `SEQUENCE`\n  * `PREDICTED_LABEL`: `SEQUENCE`\n  * `PREDICTED_LABEL`: `SEQUENCE`\n  * `PREDICTED_LABEL`: `SEQUENCE`\n  * ...\n    * You can find your own unstructured RCT abstract from PubMed or try this one from: [*Baclofen promotes alcohol abstinence in alcohol dependent cirrhotic patients with hepatitis C virus (HCV) infection*](https:\u002F\u002Fpubmed.ncbi.nlm.nih.gov\u002F22244707\u002F).\n\n### 📖 09. Milestone Project 2: SkimLit 📄🔥 Extra-curriculum\n\n* For more on working with text\u002FspaCy, see [spaCy's advanced NLP course](https:\u002F\u002Fcourse.spacy.io\u002Fen\u002F). If you're going to be working on production-level NLP problems, you'll probably end up using spaCy.\n* For another look at how to approach a text classification problem like the one we've just gone through, I'd suggest going through [Google's Machine Learning Course for text classification](https:\u002F\u002Fdevelopers.google.com\u002Fmachine-learning\u002Fguides\u002Ftext-classification). 
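Exercise 4 above (merging `line_number` and `total_lines` into a single positional feature) can be prototyped without TensorFlow at all. A minimal sketch, assuming a hypothetical helper name (`merge_line_features` is invented here, not from the course notebooks):

```python
def merge_line_features(line_number: int, total_lines: int) -> str:
    """Merge the two positional features into one categorical string,
    e.g. line_number=1, total_lines=11 -> "1_of_11"."""
    return f"{line_number}_of_{total_lines}"

# Build the merged feature for every sequence in a toy 11-line abstract:
abstract_positions = [(i, 11) for i in range(1, 12)]
x_of_y = [merge_line_features(n, t) for n, t in abstract_positions]
print(x_of_y[0], x_of_y[-1])  # 1_of_11 11_of_11
```

The resulting string feature would then need to be encoded (e.g. one-hot encoded) before being fed to a model, much like the separate positional features were.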
\n* Since our dataset has imbalanced classes (as with many real-world datasets), it might be worth looking into the [TensorFlow guide for different methods for training a model with imbalanced classes](https:\u002F\u002Fwww.tensorflow.org\u002Ftutorials\u002Fstructured_data\u002Fimbalanced_data).\n\n---\n\n### 🛠 10. Time series fundamentals and Milestone Project 3: BitPredict 💰📈 Exercises\n\n1. Does scaling the data help for univariate\u002Fmultivariate data? (e.g. getting all of the values between 0 & 1) \n  * Try doing this for a univariate model (e.g. `model_1`) and a multivariate model (e.g. `model_6`) and see if it affects model training or evaluation results.\n2. Get the most up to date data on Bitcoin, train a model & see how it goes (our data goes up to May 18 2021).\n  * You can download the Bitcoin historical data for free from [coindesk.com\u002Fprice\u002Fbitcoin](https:\u002F\u002Fwww.coindesk.com\u002Fprice\u002Fbitcoin) by clicking \"Export Data\" -> \"CSV\".\n3. For most of our models we used `WINDOW_SIZE=7`, but is there a better window size?\n  * Set up a series of experiments to find whether or not there's a better window size.\n  * For example, you might train 10 different models with `HORIZON=1` but with window sizes ranging from 2-12.\n4. Create a windowed dataset just like the ones we used for `model_1` using [`tf.keras.preprocessing.timeseries_dataset_from_array()`](https:\u002F\u002Fwww.tensorflow.org\u002Fapi_docs\u002Fpython\u002Ftf\u002Fkeras\u002Fpreprocessing\u002Ftimeseries_dataset_from_array) and retrain `model_1` using the recreated dataset.\n5. For our multivariate modelling experiment, we added the Bitcoin block reward size as an extra feature to make our time series multivariate. \n  * Are there any other features you think you could add? \n  * If so, try it out. How do these affect the model?\n6. Make prediction intervals for future forecasts. 
To do so, one way would be to train an ensemble model on all of the data, make future forecasts with it and calculate the prediction intervals of the ensemble just like we did for `model_8`.\n7. For future predictions, try to make a prediction, retrain a model on the predictions, make another prediction and repeat (retrain a model each time a new prediction is made). Plot the results. How do they look compared to the future predictions where a model wasn't retrained for every forecast (`model_9`)?\n8. Throughout this notebook, we've only tried algorithms we've handcrafted ourselves. But it's worth seeing how a purpose-built forecasting algorithm goes. \n  * Try out one of the extra algorithms listed in the modelling experiments part such as:\n    * [Facebook's Kats library](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FKats) - there are many models in here, remember the machine learning practitioner's motto: experiment, experiment, experiment.\n    * [LinkedIn's Greykite library](https:\u002F\u002Fgithub.com\u002Flinkedin\u002Fgreykite)\n\n### 📖 10. Time series fundamentals and Milestone Project 3: BitPredict 💰📈 Extra-curriculum\n\nWe've only really scratched the surface with time series forecasting and time series modelling in general. But the good news is, you've got plenty of hands-on coding experience with it already.\n\nIf you'd like to dig deeper into the world of time series, I'd recommend the following:\n\n* [Forecasting: Principles and Practice](https:\u002F\u002Fotexts.com\u002Ffpp3\u002F) is an outstanding online textbook which discusses at length many of the most important concepts in time series forecasting. 
I'd especially recommend reading at least Chapter 1 in full, as well as the chapter on forecasting accuracy measures.\n* 🎥 [Introduction to machine learning and time series](https:\u002F\u002Fyoutu.be\u002FwqQKFu41FIw) by Markus Loning goes through different time series problems and how to approach them. It focuses on using the `sktime` library (Scikit-Learn for time series), though the principles are applicable elsewhere.\n* [*Why you should care about the Nate Silver vs. Nassim Taleb Twitter war*](https:\u002F\u002Ftowardsdatascience.com\u002Fwhy-you-should-care-about-the-nate-silver-vs-nassim-taleb-twitter-war-a581dce1f5fc) by Isaac Faber is an outstanding discussion of the role of uncertainty, using election prediction as an example.\n* [TensorFlow time series tutorial](https:\u002F\u002Fwww.tensorflow.org\u002Ftutorials\u002Fstructured_data\u002Ftime_series) - A tutorial on using TensorFlow to forecast weather time series data.\n* 📕 [*The Black Swan*](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FThe_Black_Swan:_The_Impact_of_the_Highly_Improbable) by Nassim Nicholas Taleb - Nassim Taleb was a pit trader (a trader who trades on their own behalf) for 25 years; this book compiles many of the lessons he learned from first-hand experience. It changed my whole perspective on our ability to predict. 
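Exercise 1 in the time series exercises above asks whether scaling all values to between 0 and 1 helps. A minimal min-max scaling sketch in plain Python (the helper name is invented here; in practice a library scaler such as Scikit-Learn's `MinMaxScaler` does the same job):

```python
def min_max_scale(series):
    """Scale a univariate series to the [0, 1] range (min-max scaling).
    Note: for real forecasting work you'd compute the min and max on the
    training split only, then reuse them on the test split to avoid leakage."""
    s_min, s_max = min(series), max(series)
    return [(x - s_min) / (s_max - s_min) for x in series]

# Toy univariate "price" series:
prices = [10.0, 20.0, 15.0, 30.0]
print(min_max_scale(prices))  # [0.0, 0.5, 0.25, 1.0]
```

You could then train one model on the raw series and one on the scaled series and compare their evaluation results, as the exercise suggests.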
\n* [*3 facts about time series forecasting that surprise experienced machine learning practitioners*](https:\u002F\u002Ftowardsdatascience.com\u002F3-facts-about-time-series-forecasting-that-surprise-experienced-machine-learning-practitioners-69c18ee89387) by Skander Hannachi, Ph.D - time series data is different from other kinds of data; if you've worked on other kinds of machine learning problems before, getting into time series might require some adjustments. Hannachi outlines 3 of the most common.\n* 🎥 World-class lectures by Jordan Kern; watching these will take you from 0 to 1 with time series problems: \n  * [Time Series Analysis](https:\u002F\u002Fyoutu.be\u002FPrpu_U5tKkE) - how to analyse time series data.\n  * [Time Series Modelling](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=s3XH7fTHMb4) - different techniques for modelling time series data (many of which aren't deep learning).\n\n---\n\n## TensorFlow Developer Certificate (archive)\n\n> **Note:** As of 1 May 2024, the TensorFlow Developer Certification is no longer available for purchase. After being contacted, the TensorFlow Certification team stated they were closing the program with no official next steps (see [#645](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fdiscussions\u002F645) for more).\n>\n> With this in mind, the exercises\u002Fextra-curriculum below are for archive purposes only. The rest of the course materials are still valid.\n\n### 🛠 11. Passing the TensorFlow Developer Certification Exercises (archive)\n\n**Preparing your brain**\n1. Read through the [TensorFlow Developer Certificate Candidate Handbook](https:\u002F\u002Fwww.tensorflow.org\u002Fextras\u002Fcert\u002FTF_Certificate_Candidate_Handbook.pdf).\n2. 
Go through the Skills checklist section of the TensorFlow Developer Certification Candidate Handbook and create a notebook which covers all of the skills required, writing code for each of these (this notebook can be used as a point of reference during the exam).\n\n![mapping the TensorFlow Developer handbook to code in a notebook](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmrdbourke_tensorflow-deep-learning_readme_974b302b9dd1.png)\n*Example of mapping the Skills checklist section of the TensorFlow Developer Certification Candidate handbook to a notebook.*\n\n**Preparing your computer**\n1. Go through the [PyCharm quick start](https:\u002F\u002Fwww.jetbrains.com\u002Fpycharm\u002Flearning-center\u002F) tutorials to make sure you're familiar with PyCharm (the exam uses PyCharm; you can download the free version).\n2. Read through and follow the suggested steps in the [setting up for the TensorFlow Developer Certificate Exam guide](https:\u002F\u002Fwww.tensorflow.org\u002Fextras\u002Fcert\u002FSetting_Up_TF_Developer_Certificate_Exam.pdf).\n3. After going through (2), go into PyCharm and make sure you can train a model in TensorFlow. The model and dataset in the example `image_classification_test.py` [script on GitHub](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fblob\u002Fmain\u002Fextras\u002Fimage_classification_test.py) should be enough. If you can train and save the model in under 5-10 minutes, your computer will be powerful enough to train the models in the exam.\n    - Make sure you've got experience running models locally in PyCharm before taking the exam. 
Google Colab (what we used through the course) is a little different to PyCharm.\n\n![before taking the TensorFlow Developer certification exam, make sure you can run TensorFlow code in PyCharm on your local machine](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmrdbourke_tensorflow-deep-learning_readme_47d93a86e176.png)\n*Before taking the exam make sure you can run TensorFlow code on your local machine in PyCharm. If the [example `image_class_test.py` script](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fblob\u002Fmain\u002Fextras\u002Fimage_classification_test.py) can run completely in under 5-10 minutes on your local machine, your local machine can handle the exam (if not, you can use Google Colab to train, save and download models to submit for the exam).*\n\n### 📖 11. Passing the TensorFlow Developer Certification Extra-curriculum (archive)\n\nIf you'd like some extra materials to go through to further your skills with TensorFlow and deep learning in general or to prepare more for the exam, I'd highly recommend the following:\n\n* 📄 **Read:** [How I got TensorFlow Developer Certified (and how you can too)](https:\u002F\u002Fwww.mrdbourke.com\u002Fhow-i-got-tensorflow-developer-certified\u002F)\n* 🎥 **Watch:** [How I passed the TensorFlow Developer Certification exam (and how you can too)](https:\u002F\u002Fyoutu.be\u002Fya5NwvKafDk)\n* Go through the [TensorFlow in Practice Specialization on Coursera](https:\u002F\u002Fdbourke.link\u002Ftfinpractice)\n* Read through the second half of [Hands-On Machine Learning with Scikit-Learn, Keras & TensorFlow 2nd Edition](https:\u002F\u002Famzn.to\u002F3aYexF2)\n\n---\n\n## What this course is missing\n\nDeep learning is a broad topic. So this course doesn't cover it all. 
\n\nHere are some of the main topics you might want to look into next:\n\n* Transformers (the neural network architecture taking the NLP world by storm)\n* Multi-modal models (models which use more than one data source such as text & images)\n* Reinforcement learning\n* Unsupervised learning\n\n## Extensions (possible places to go after the course)\n\n* [Neural Networks and Deep Learning Book](http:\u002F\u002Fneuralnetworksanddeeplearning.com\u002F) by Michael Nielsen - If the Zero to Mastery TensorFlow for Deep Learning course is top down, this book is bottom up. A fantastic resource to sandwich your knowledge. \n* [Deeplearning.AI specializations](https:\u002F\u002Fwww.deeplearning.ai) - The ZTM TensorFlow course focuses on code first; the deeplearning.ai specializations will teach you what's going on behind the code.\n* [Hands-on Machine Learning with Scikit-Learn, Keras and TensorFlow Book](https:\u002F\u002Fwww.oreilly.com\u002Flibrary\u002Fview\u002Fhands-on-machine-learning\u002F9781492032632\u002F) (especially the 2nd half) - Many of the materials in this course were inspired by and guided by the pages of this beautiful textbook.\n* [Full Stack Deep Learning](https:\u002F\u002Ffullstackdeeplearning.com) - Learn how to turn your models into machine learning-powered applications.\n* [Made with ML MLOps materials](https:\u002F\u002Fmadewithml.com\u002F#mlops) - Similar to Full Stack Deep Learning but broken into many small lessons covering all the pieces of the puzzle (data collection, labelling, deployment and more) required to build a full-stack machine learning-powered application.\n* [fast.ai Curriculum](https:\u002F\u002Fwww.fast.ai) - One of the best (and free) AI\u002Fdeep learning courses online. 
Enough said.
* ["How does a beginner data scientist like me gain experience?"](https://www.mrdbourke.com/how-can-a-beginner-data-scientist-like-me-gain-experience/) by Daniel Bourke - Read this on how to get experience for a job after studying online/at university (start the job before you have it).

## Ask questions

Contact [Daniel Bourke](mailto:daniel@mrdbourke.com) or [add a discussion](https://github.com/mrdbourke/tensorflow-deep-learning/discussions) (preferred).

## Log

* 2 May 2024 - update materials to reflect the closing of the TensorFlow Developer Certification exam by Google (see [#645](https://github.com/mrdbourke/tensorflow-deep-learning/discussions/645) for more)
* 12 May 2023 - update several course notebooks for the latest version of TensorFlow, several API updates for Notebook 05 here: https://github.com/mrdbourke/tensorflow-deep-learning/discussions/547
* 02 Dec 2021 - add fix for TensorFlow 2.7 to notebook 02
* 11 Nov 2021 - add fix for TensorFlow 2.7 to notebook 01
* 14 Aug 2021 - added a discussion with TensorFlow 2.6 updates and EfficientNetV2 notes: https://github.com/mrdbourke/tensorflow-deep-learning/discussions/166
* 16 Jul 2021 - added 35 videos to ZTM Academy + Udemy versions of the course for time series and how to pass the TensorFlow Developer Certification
* 10 Jul 2021 - added 29 edited time series videos to ZTM Academy + Udemy versions of the course, more to come soon
* 07 Jul 2021 - recorded 5 videos for passing TensorFlow Developer Certification exam section - ALL VIDEOS FOR COURSE DONE!!! time to edit/upload! 
🎉
* 06 Jul 2021 - (archived) added guide to TensorFlow Certification Exam: https://github.com/mrdbourke/tensorflow-deep-learning/blob/main/11_passing_the_tensorflow_developer_certification_exam.md - going to record videos for it tomorrow
* 05 Jul 2021 - making materials for TF certification exam (what/why/how)
* 02 Jul 2021 - FINISHED RECORDING VIDEOS FOR TIME SERIES SECTION!!!!! time to upload
* 30 Jun 2021 - recorded 12 videos for time series section, total heading past 60 (the biggest section yet), nearly done!!!
* 29 Jun 2021 - recorded 10 videos for time series section, total heading towards 60
* 28 Jun 2021 - recorded 10 videos for time series section, the line below says 40 videos total, actually more like 50
* 26 Jun 2021 - recorded 4 videos for time series section, looks like it'll be about 40 videos total
* 25 Jun 2021 - recorded 8 videos for time series section + fixed a bunch of typos in time series notebook
* 24 Jun 2021 - recorded 14 videos for time series section, more to come tomorrow
* 23 Jun 2021 - finished adding images to time series notebook, now to start video recording
* 22 Jun 2021 - added a bunch of images to the time series notebook/started making slides
* 21 Jun 2021 - code for time series notebook is done, now creating slides/images to prepare for recording
* 19 Jun 2021 - turned curriculum into an online book, you can read it here: https://dev.mrdbourke.com/tensorflow-deep-learning/
* 18 Jun 2021 - add exercises/extra-curriculum/outline to time series notebook
* 17 Jun 2021 - add annotations for turkey problem and model comparison in time series notebook, next is outline/images
* 16 Jun 2021 - add annotations for uncertainty and future predictions in time series notebook, next is turkey problem
* 14 Jun 2021 - add annotations for ensembling, begin on prediction intervals
* 10 Jun 2021 - finished annotations for N-BEATS algorithm, 
now onto ensembling/prediction intervals
* 9 Jun 2021 - add annotations for N-BEATS algorithm implementation for time series notebook
* 8 Jun 2021 - add annotations to time series notebook, all will be finished by end of week (failed)
* 4 Jun 2021 - more annotation updates to time series notebook, brick by brick!
* 3 Jun 2021 - added a bunch of annotations/explanations to time series notebook, momentum building, plenty more to come!
* 2 Jun 2021 - started adding annotations explaining the code + resources to learn more, will continue for next few days
* 1 Jun 2021 - added turkey problem to time series notebook, cleaned up a bunch of code, draft code is ready, now to write annotations/explanations
* 28 May 2021 - added future forecasts, added ensemble model, added prediction intervals to time series notebook
* 25 May 2021 - added multivariate time series to time series notebook, fix LSTM model, next we add TensorFlow windowing/experimenting with window sizes
* 24 May 2021 - fixed broken preprocessing function in time series notebook, LSTM model is broken, more material to come
* 20 May 2021 - more time series material creation
* 19 May 2021 - more time series material creation, streaming much of it live on Twitch - https://twitch.tv/mrdbourke
* 18 May 2021 - added time series forecasting notebook outline ([notebook 10](https://github.com/mrdbourke/tensorflow-deep-learning/blob/main/10_time_series_forecasting_in_tensorflow.ipynb)), going to really start ramping up the materials here
* 12 May 2021 - all videos for 09 have now been released on Udemy & ZTM!!! 
enjoy building SkimLit 📄🔥
* 11 May 2021 - 40+ section 08 & 09 videos released on Udemy & ZTM!!!
* 10 May 2021 - time series materials research + preparation
* 08 May 2021 - time series materials research + preparation
* 05 May 2021 - ~20+ videos edited for 08, ~10+ videos edited for 09, time series materials in 1st draft mode
* 04 May 2021 - fixed the remaining videos for 08 (audio missing), now onto making time series materials!
* 03 May 2021 - rerecorded 10 videos for 08 fixing the sound issue, these are going straight to editing and should be uploaded by end of week
* 02 May 2021 - found an issue with videos 09-20 of section 08 (no audio), going to rerecord them
* 29 Apr 2021 - 🚀🚀🚀 launched on Udemy!!! 🚀🚀🚀
* 22 Apr 2021 - finished recording videos for 09! added slides and video notebook 09
* 21 Apr 2021 - recorded 14 videos for 09! biggggg day of recording! getting closer to finishing 09
* 20 Apr 2021 - recorded 10 videos for 09
* 19 Apr 2021 - recorded 9 videos for 09
* 16 Apr 2021 - slides done for 09, ready to start recording!
* 15 Apr 2021 - added slides, extra-curriculum, exercises and video notebook for 08, started making slides for 09, will finish tomorrow
* 14 Apr 2021 - recorded 12 videos for notebook 08, finished the section! time to make slides for 09 and get into it
* 10 Apr 2021 - recorded 4 videos for notebook 08
* 9 Apr 2021 - recorded 6 videos for notebook 08
* 8 Apr 2021 - recorded 10 videos for notebook 08! more coming tomorrow! home stretch baby!!!
* 7 Apr 2021 - added a whole bunch of images to notebook 08, getting ready for recording tomorrow!
* 1 Apr 2021 - added [notebook 09: SkimLit](https://github.com/mrdbourke/tensorflow-deep-learning/blob/main/09_SkimLit_nlp_milestone_project_2.ipynb), almost finished, a little cleaning and we'll be ready for slide making!
* 31 Mar 2021 - added notebook 08, going to finish tomorrow, then onto 09!
* 24 Mar 2021 - recorded 8 videos for 07, finished! 
onto materials (slides/notebooks) for 08, 09
* 23 Mar 2021 - recorded 6 videos for 07 (finally), going to finish tomorrow
* 22 Mar 2021 - polished notebook 07 ready for recording, made slides for 07, added template for 07 (for a student to go through and practice), ready to record!
* 17 Mar 2021 - 99% finished notebook 07, added links to first 14 hours of the course on YouTube ([10 hours in part 1](https://youtu.be/tpCFfeUEGs8), [4 hours in part 2](https://youtu.be/ZUKz4125WNI))
* 11 Mar 2021 - added even more text annotations to notebook 07, finishing tomorrow, then slides
* 10 Mar 2021 - typed a whole bunch of explanations into notebook 07, continuing tomorrow
* 09 Mar 2021 - fixed plenty of code in notebook 07, should run end to end very cleanly (though loading times are still a thing)
* 05 Mar 2021 - added draft notebook 07 (heaps of data loading and model training improvements in this one!), gonna fix up over next few days
* 01 Mar 2021 - added slides for 06 ([see them here](https://github.com/mrdbourke/tensorflow-deep-learning/blob/main/slides/06_transfer_learning_with_tensorflow_part_3_scaling_up.pdf))
* 26 Feb 2021 - 🚀 LAUNCHED!!!!! also finished recording videos for 06, onto 07, 08, 09 for next release
* 24 Feb 2021 - recorded 9 videos for section 06, launch inbound!!!
* 23 Feb 2021 - rearranged GitHub in preparation for launch 🚀
* 18 Feb 2021 - recorded 8 videos for 05 and... it's done! onto polishing the GitHub
* 17 Feb 2021 - recorded 10 videos for 05! going to finish tomorrow 🚀
* 16 Feb 2021 - polished slides for 05 and started recording videos, got 7 videos done for 05
* 15 Feb 2021 - finished videos for 04, now preparing to record for 05!
* 12 Feb 2021 - recorded 7 videos for section 04... 
wanted 10 but we'll take 7 (🤔 this seems to have happened before)
* 11 Feb 2021 - NO PROGRESS - gave a machine learning deployment tutorial for [Stanford's CS329s](https://stanford-cs329s.github.io/syllabus.html) (using the model code from this course!!!) - [see the full tutorial materials](https://github.com/mrdbourke/cs329s-ml-deployment-tutorial)
* 08 Feb 2021 - recorded 10 videos for section 03... and section 03 is done! 🚀 onto section 04
* 30 Jan 2021 - 07 Feb 2021: NO PROGRESS (working on an ML deployment lecture for [Stanford's CS329s](https://stanford-cs329s.github.io/syllabus.html)... more on this later)
* 29 Jan 2021 - recorded 9 videos for section 03... closer to 10 than yesterday but still not there
* 28 Jan 2021 - recorded 7 videos for section 03... wanted 10 but we'll take 7
* 27 Jan 2021 - recorded 10 videos for section 03
* 26 Jan 2021 - polished GitHub README (what you're looking at) with a [nice table](https://github.com/mrdbourke/tensorflow-deep-learning#course-materials)
* 23 Jan 2021 - finished slides for 06
* 22 Jan 2021 - finished review of notebook 06 & started slides for 06
* 21 Jan 2021 - finished slides for 05 & started review of 06
* 20 Jan 2021 - finished notebook 05 & 95% of slides for 05
* 19 Jan 2021 - found a storage idea for data during the course (use Google Storage in the same region as Colab notebooks, cheapest/fastest)
* 18 Jan 2021 - reviewed notebook 05 & slides for 05
* 17 Jan 2021 - finished notebook 04 & slides for 04
* 16 Jan 2021 - reviewed notebook 04 & made slides for transfer learning
* 13 Jan 2021 - reviewed notebook 03 again & finished slides for 03, BIGGGGG updates to the README, notebook 03 99% done, just need to figure out the optimum way to transfer data (e.g. when a student downloads it, where's best to store it in the meantime? Dropbox? S3? 
~~GS~~ (too expensive))
* 11 Jan 2021 - reviewed notebook 03, 95% ready for recording, onto slides for 03
* 9 Jan 2021 - I'm back baby! Finished all videos for 02, now onto slides/materials for 03, 04, 05 (then I'll get back in the lab)
* 19 Dec 2020 - ON HOLD (family holiday until Jan 02 2021)
* 18 Dec 2020 - recorded 75% of videos for 02
* 17 Dec 2020 - recorded 50% of videos for 02
* 16 Dec 2020 - recorded 100% of videos for 01
* 15 Dec 2020 - recorded 90% of videos for 01
* 09 Dec 2020 - finished recording videos for 00
* 08 Dec 2020 - recorded 90% of videos for 00
* 05 Dec 2020 - trialled recording studio for ~6 videos with notebook 00 material
* 04 Dec 2020 - setup [recording studio in closet](https://raw.githubusercontent.com/mrdbourke/tensorflow-deep-learning/main/images/misc-studio-setup.jpeg)
* 03 Dec 2020 - finished notebook 02, finished slides for 02, time to setup recording studio
* 02 Dec 2020 - notebook 02 95% done, slides for 02 90% done
* 01 Dec 2020 - added notebook 02 (90% polished), started preparing slides for 02
* 27 Nov 2020 - polished notebook 01, made slides for notebook 01
* 26 Nov 2020 - polished notebook 00, made slides for notebook 00

---

# Zero to Mastery Deep Learning with TensorFlow

All of the course materials for the [Zero to Mastery Deep Learning with TensorFlow course](https://dbourke.link/ZTMTFcourse).

This course will teach you the foundations of deep learning and how to build and train neural networks for various kinds of problems with TensorFlow/Keras.

## Important links
* 🎥 Watch the first 14 hours of the course (notebooks 00, 01, 02) on YouTube: [click here](https://dbourke.link/tfpart1part2)
* 📖 Read the beautiful online book version of the course: [click here](https://dev.mrdbourke.com/tensorflow-deep-learning/)
* 💻 Sign up for the full course on the Zero to Mastery Academy (videos for notebooks 03–10): [click here](https://dbourke.link/ZTMTFcourse)
* 🤔 Got questions about the course? Check out the livestream Q&A for the course launch: [click here](https://youtu.be/rqAqcFcfeK8)
* 📝 Get a quick overview of TensorFlow with the [TensorFlow cheatsheet](https://zerotomastery.io/cheatsheets/tensorflow-cheat-sheet/)

## Contents of this page

- 
[Fixes and updates](https://github.com/mrdbourke/tensorflow-deep-learning#fixes-and-updates)
- [Course materials](https://github.com/mrdbourke/tensorflow-deep-learning#course-materials) (everything you'll need for completing the course)
- [Course structure](https://github.com/mrdbourke/tensorflow-deep-learning#course-structure) (how this course is taught)
- [Should you do this course?](https://github.com/mrdbourke/tensorflow-deep-learning#should-you-do-this-course) (decide by answering a couple of simple questions)
- [Prerequisites](https://github.com/mrdbourke/tensorflow-deep-learning#prerequisites) (the skills you'll need to do this course)
- [Exercises & extra-curriculum](https://github.com/mrdbourke/tensorflow-deep-learning#-exercises---extra-curriculum) (challenges to practice what you've learned and resources to learn more)
- [Ask questions](https://github.com/mrdbourke/tensorflow-deep-learning#ask-questions) (want to know more? ask here)
- [Status](https://github.com/mrdbourke/tensorflow-deep-learning#status)
- [Log](https://github.com/mrdbourke/tensorflow-deep-learning#log) (updates, changes and progress)

## Fixes and updates

* 2 May 2024 - updated section 11 to reflect Google closing the TensorFlow Developer Certification program (see [#645](https://github.com/mrdbourke/tensorflow-deep-learning/discussions/645) for more)
* 18 Aug 2023 - updated [Notebook 05](https://github.com/mrdbourke/tensorflow-deep-learning/blob/main/05_transfer_learning_in_tensorflow_part_2_fine_tuning.ipynb) to fix [#544](https://github.com/mrdbourke/tensorflow-deep-learning/issues/544) and [#553](https://github.com/mrdbourke/tensorflow-deep-learning/issues/553), full write-up here: https://github.com/mrdbourke/tensorflow-deep-learning/discussions/575
    * In short, if you're using `tf.keras.applications.EfficientNetB0` and running into errors, switch to [`tf.keras.applications.efficientnet_v2.EfficientNetV2B0`](https://www.tensorflow.org/api_docs/python/tf/keras/applications/efficientnet_v2/EfficientNetV2B0)
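The swap described above is a one-line change. A minimal sketch (assuming a TensorFlow version that ships the `efficientnet_v2` namespace; `weights=None` is used here purely so nothing is downloaded — in practice you'd use `weights="imagenet"`):

```python
import tensorflow as tf

# Previously (errors on some newer TensorFlow versions):
# base_model = tf.keras.applications.EfficientNetB0(include_top=False, weights=None)

# Fixed: use the EfficientNetV2 namespace instead
base_model = tf.keras.applications.efficientnet_v2.EfficientNetV2B0(
    include_top=False,  # drop the ImageNet head for feature extraction
    weights=None,       # use "imagenet" in practice; None avoids a download here
)
base_model.trainable = False  # freeze the base for feature extraction

print(base_model.name, base_model.count_params())
```

The rest of a feature-extraction pipeline (pooling + a new classification head) is unchanged by the swap.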
* 26 May 2023 - updated [Notebook 08](https://github.com/mrdbourke/tensorflow-deep-learning/blob/main/08_introduction_to_nlp_in_tensorflow.ipynb) for newer versions of TensorFlow; also updated [Notebook 09](https://github.com/mrdbourke/tensorflow-deep-learning/blob/main/09_SkimLit_nlp_milestone_project_2.ipynb) for newer versions of TensorFlow and spaCy. See the update notes for 09 here: https://github.com/mrdbourke/tensorflow-deep-learning/discussions/557
* 19 May 2023 - updated [Notebook 07](https://github.com/mrdbourke/tensorflow-deep-learning/blob/main/07_food_vision_milestone_project_1.ipynb) for newer versions of TensorFlow and fixed a model-loading error (requires TensorFlow 2.13+), see more here: https://github.com/mrdbourke/tensorflow-deep-learning/discussions/550
* 18 May 2023 - updated Notebook 06 to use the new TensorFlow namespaces (no major functionality changes, only different import statements), see more here: https://github.com/mrdbourke/tensorflow-deep-learning/discussions/549
* 12 May 2023 - added new namespaces for `tf.keras.layers` to Notebook 05, see more here: https://github.com/mrdbourke/tensorflow-deep-learning/discussions/547. Also fixed an issue with `model.load_weights()` in Notebook 05, see: https://github.com/mrdbourke/tensorflow-deep-learning/issues/544. If you're having trouble saving or loading model weights, also see: https://github.com/mrdbourke/tensorflow-deep-learning/issues/553
* 12 May 2023 - newer versions of TensorFlow (2.10+) use `learning_rate` instead of `lr` in `tf.keras.optimizers` (e.g. `tf.keras.optimizers.Adam(learning_rate=0.001)`). The old `lr` argument still works but is deprecated.
* 2 Dec 2021 - added a fix for TensorFlow 2.7.0+ to notebook 02, [see the discussion for more](https://github.com/mrdbourke/tensorflow-deep-learning/discussions/278)
* 11 Nov 2021 - added a fix for TensorFlow 2.7.0+ to notebook 01, [see the discussion for more](https://github.com/mrdbourke/tensorflow-deep-learning/discussions/256)

## Course materials

The table below is the source of truth for the course materials. All the links you'll need will be listed here.

Key:
* **Number:** the number of the target notebook 
(this may not match the course video sections exactly, but it ties all the materials in the table together)
* **Notebook:** the notebook for a particular module with lots of code and text annotations (the notebooks from the videos are based on these)
* **Data/model:** links to datasets or pre-trained models relevant to the notebook in question
* **Exercises & extra-curriculum:** each module comes with a set of exercises and extra-curriculum to help practice your skills and learn more. It's suggested you complete these before moving on to the next module.
* **Slides:** although we focus on writing TensorFlow code, we sometimes use pretty slides to explain different concepts. You'll find them here.

**Note:** you can get all of the notebook code created in the videos in the [`video_notebooks`](https://github.com/mrdbourke/tensorflow-deep-learning/tree/main/video_notebooks) directory.

| Number | Notebook | Data/model | Exercises & extra-curriculum | Slides |
| ----- |  ----- |  ----- |  ----- |  ----- |
| 00 | [TensorFlow Fundamentals](https://github.com/mrdbourke/tensorflow-deep-learning/blob/main/00_tensorflow_fundamentals.ipynb) |  | [Go to exercises & extra-curriculum](https://github.com/mrdbourke/tensorflow-deep-learning#-00-tensorflow-fundamentals-exercises) | [Go to slides](https://github.com/mrdbourke/tensorflow-deep-learning/blob/main/slides/00_introduction_to_tensorflow_and_deep_learning.pdf) |
| 01 | [TensorFlow Regression](https://github.com/mrdbourke/tensorflow-deep-learning/blob/main/01_neural_network_regression_in_tensorflow.ipynb) |  | [Go to exercises & extra-curriculum](https://github.com/mrdbourke/tensorflow-deep-learning#-01-neural-network-regression-with-tensorflow-exercises) | [Go to slides](https://github.com/mrdbourke/tensorflow-deep-learning/blob/main/slides/01_neural_network_regression_with_tensorflow.pdf) |
| 02 | [TensorFlow Classification](https://github.com/mrdbourke/tensorflow-deep-learning/blob/main/02_neural_network_classification_in_tensorflow.ipynb) |  | [Go to exercises & extra-curriculum](https://github.com/mrdbourke/tensorflow-deep-learning#-02-neural-network-classification-with-tensorflow-exercises) | 
[Go to slides](https://github.com/mrdbourke/tensorflow-deep-learning/blob/main/slides/02_neural_network_classification_with_tensorflow.pdf) |
| 03 | [TensorFlow Computer Vision](https://github.com/mrdbourke/tensorflow-deep-learning/blob/main/03_convolutional_neural_networks_in_tensorflow.ipynb) | [`pizza_steak`](https://storage.googleapis.com/ztm_tf_course/food_vision/pizza_steak.zip), [`10_food_classes_all_data`](https://storage.googleapis.com/ztm_tf_course/food_vision/10_food_classes_all_data.zip) | [Go to exercises & extra-curriculum](https://github.com/mrdbourke/tensorflow-deep-learning#-03-computer-vision--convolutional-neural-networks-in-tensorflow-exercises) | [Go to slides](https://github.com/mrdbourke/tensorflow-deep-learning/blob/main/slides/03_convolution_neural_networks_and_computer_vision_with_tensorflow.pdf) |
| 04 | [Transfer Learning Part 1: Feature extraction](https://github.com/mrdbourke/tensorflow-deep-learning/blob/main/04_transfer_learning_in_tensorflow_part_1_feature_extraction.ipynb) | [`10_food_classes_10_percent`](https://storage.googleapis.com/ztm_tf_course/food_vision/10_food_classes_10_percent.zip) | [Go to exercises & extra-curriculum](https://github.com/mrdbourke/tensorflow-deep-learning#-04-transfer-learning-in-tensorflow-part-1-feature-extraction-exercises) | [Go to slides](https://github.com/mrdbourke/tensorflow-deep-learning/blob/main/slides/04_transfer_learning_with_tensorflow_part_1_feature_extraction.pdf) |
| 05 | [Transfer Learning Part 2: Fine-tuning](https://github.com/mrdbourke/tensorflow-deep-learning/blob/main/05_transfer_learning_in_tensorflow_part_2_fine_tuning.ipynb) | [`10_food_classes_10_percent`](https://storage.googleapis.com/ztm_tf_course/food_vision/10_food_classes_10_percent.zip), 
[`10_food_classes_1_percent`](https://storage.googleapis.com/ztm_tf_course/food_vision/10_food_classes_1_percent.zip), [`10_food_classes_all_data`](https://storage.googleapis.com/ztm_tf_course/food_vision/10_food_classes_all_data.zip) | [Go to exercises & extra-curriculum](https://github.com/mrdbourke/tensorflow-deep-learning#-05-transfer-learning-in-tensorflow-part-2-fine-tuning-exercises) | [Go to slides](https://github.com/mrdbourke/tensorflow-deep-learning/blob/main/slides/05_transfer_learning_with_tensorflow_part_2_fine_tuning.pdf) |
| 06 | [Transfer Learning Part 3: Scaling up](https://github.com/mrdbourke/tensorflow-deep-learning/blob/main/06_transfer_learning_in_tensorflow_part_3_scaling_up.ipynb) | [`101_food_classes_10_percent`](https://storage.googleapis.com/ztm_tf_course/food_vision/101_food_classes_10_percent.zip), [`custom_food_images`](https://storage.googleapis.com/ztm_tf_course/food_vision/custom_food_images.zip), [`fine_tuned_efficientnet_model`](https://storage.googleapis.com/ztm_tf_course/food_vision/06_101_food_class_10_percent_saved_big_dog_model.zip) | [Go to exercises & extra-curriculum](https://github.com/mrdbourke/tensorflow-deep-learning#-06-transfer-learning-in-tensorflow-part-3-scaling-up-exercises) | [Go to slides](https://github.com/mrdbourke/tensorflow-deep-learning/blob/main/slides/06_transfer_learning_with_tensorflow_part_3_scaling_up.pdf) |
| 07 | [Milestone Project 1: Food Vision 🍔👁](https://github.com/mrdbourke/tensorflow-deep-learning/blob/main/07_food_vision_milestone_project_1.ipynb), [Template (your challenge)](https://github.com/mrdbourke/tensorflow-deep-learning/blob/main/extras/TEMPLATE_07_food_vision_milestone_project_1.ipynb) | 
[`feature_extraction_mixed_precision_efficientnet_model`](https://storage.googleapis.com/ztm_tf_course/food_vision/07_efficientnetb0_feature_extract_model_mixed_precision.zip), [`fine_tuned_mixed_precision_efficientnet_model`](https://storage.googleapis.com/ztm_tf_course/food_vision/07_efficientnetb0_fine_tuned_101_classes_mixed_precision.zip) | [Go to exercises & extra-curriculum](https://github.com/mrdbourke/tensorflow-deep-learning#-07-milestone-project-1--food-vision-big-exercises) | [Go to slides](https://github.com/mrdbourke/tensorflow-deep-learning/blob/main/slides/07_milestone_project_1_food_vision.pdf) |
| 08 | [TensorFlow NLP Fundamentals](https://github.com/mrdbourke/tensorflow-deep-learning/blob/main/08_introduction_to_nlp_in_tensorflow.ipynb) | [`diaster_or_no_diaster_tweets`](https://storage.googleapis.com/ztm_tf_course/nlp_getting_started.zip), [`USE_feature_extractor_model`](https://storage.googleapis.com/ztm_tf_course/08_model_6_USE_feature_extractor.zip) | [Go to exercises & extra-curriculum](https://github.com/mrdbourke/tensorflow-deep-learning#-08-introduction-to-nlp-natural-language-processing-in-tensorflow-exercises) | [Go to slides](https://github.com/mrdbourke/tensorflow-deep-learning/blob/main/slides/08_natural_language_processing_in_tensorflow.pdf) |
| 09 | [Milestone Project 2: SkimLit 📄🔥](https://github.com/mrdbourke/tensorflow-deep-learning/blob/main/09_SkimLit_nlp_milestone_project_2.ipynb) | [`pubmed_RCT_200k_dataset`](https://github.com/Franck-Dernoncourt/pubmed-rct.git), [`skimlit_tribrid_model`](https://storage.googleapis.com/ztm_tf_course/skimlit/skimlit_tribrid_model.zip) | 
[Go to exercises & extra-curriculum](https://github.com/mrdbourke/tensorflow-deep-learning#-09-milestone-project-2-skimlit--exercises) | [Go to slides](https://github.com/mrdbourke/tensorflow-deep-learning/blob/main/slides/09_milestone_project_2_skimlit.pdf) |
| 10 | [TensorFlow Time Series Fundamentals & Milestone Project 3: BitPredict 💰📈](https://github.com/mrdbourke/tensorflow-deep-learning/blob/main/10_time_series_forecasting_in_tensorflow.ipynb) | [`bitcoin_price_data_USD_2013-10-01_2021-05-18.csv`](https://raw.githubusercontent.com/mrdbourke/tensorflow-deep-learning/main/extras/BTC_USD_2013-10-01_2021-05-18-CoinDesk.csv) | [Go to exercises & extra-curriculum](https://github.com/mrdbourke/tensorflow-deep-learning#-10-time-series-fundamentals-and-milestone-project-3-bitpredict--exercises) | [Go to slides](https://github.com/mrdbourke/tensorflow-deep-learning/blob/main/slides/10_time_series_fundamentals_and_milestone_project_3_bitpredict.pdf) |
| 11 | [Preparing to Pass the TensorFlow Developer Certification Exam (archive)](https://github.com/mrdbourke/tensorflow-deep-learning/blob/main/11_passing_the_tensorflow_developer_certification_exam.md) | | [Go to exercises & extra-curriculum](https://github.com/mrdbourke/tensorflow-deep-learning#-11-passing-the-tensorflow-developer-certification-exercises) | [Go to slides](https://github.com/mrdbourke/tensorflow-deep-learning/blob/main/slides/11_passing_the_tensorflow_developer_certification_exam.pdf) |

## Course structure

This course is taught code-first. The goal is to get you writing deep learning code as soon as possible.

It follows this teaching mantra:

```
Code -> concept -> code -> concept -> code -> concept
```

This means we write the code first, then step through the concepts behind it.

If you've got 6+ months of experience writing Python code and a willingness to learn (most important), you'll be able to do the course.

## Should you do this course?

> Do you have 1+ years of experience with deep learning and writing TensorFlow code?

If yes, you shouldn't do this course. Go and use your skills to build something instead.

If no, move on to the next question.

> 
Have you done at least one beginner machine learning course and would like to learn deep learning or how to build neural networks with TensorFlow?

If yes, this course is for you.

If no, go and do a beginner machine learning course first. This page will still be here when you decide you're ready to learn TensorFlow.

## Prerequisites

> What do I need to know to do this course?

* **6+ months writing Python code.** Can you write a Python function which accepts and uses parameters? That's enough. If you're not sure what that means, spend another month or two practising Python and come back.
* **At least one beginner machine learning course.** Are you familiar with the concepts of training, validation and test sets? Do you know what supervised learning is? Have you used pandas, NumPy or Matplotlib before? If no to any of these, go through a beginner machine learning course which teaches these first and then come back.
* **Comfortable using Google Colab/Jupyter Notebooks.** This course uses Google Colab throughout. If you've never used Google Colab, it's very similar to Jupyter Notebooks with a few extra features. If you're not familiar with Google Colab notebooks, go through the Introduction to Google Colab tutorial first.
* **Suggested resource:** the [Zero to Mastery beginner-friendly machine learning course](https://dbourke.link/ZTMMLcourse) (I also teach this) covers all of the above, and this course was designed as its follow-up.

## 🛠 Exercises & 📖 Extra-curriculum

To keep the course from becoming too large (deep learning is a very broad field), various external resources are suggested for different sections so you can dive deeper into the ones that interest you.

Solutions to the exercises can be found in [`extras/solutions/`](https://github.com/mrdbourke/tensorflow-deep-learning/tree/main/extras/solutions), with one notebook per set of exercises (one each for 00, 01, 02, and so on). Thank you to [Ashik Shafi](https://github.com/ashikshafi08) for all the efforts creating these.

---

### 🛠 00. TensorFlow Fundamentals Exercises

1. Create a vector, scalar, matrix and tensor with values of your choosing using `tf.constant()`.
2. Find the shape, rank and size of the tensors you created in 1.
3. Create two tensors containing random values between 0 and 1 with shape `[5, 300]`.
4. Multiply the two tensors you created in 3 using matrix multiplication.
5. Multiply the two tensors you created in 3 using dot product.
6. Create a tensor with random values between 0 and 1 with shape `[224, 224, 3]`.
7. Find the min and max values of the tensor you created in 6 along the first axis.
8. Create a tensor with random values of shape `[1, 224, 224, 3]`, then squeeze it to change the shape to `[224, 224, 3]`.
9. Create a tensor with shape `[10]` using your own choice of values, then find the index which has the maximum value.
10. One-hot encode the tensor you created in 9.
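A few of the exercises above, sketched in NumPy rather than TensorFlow (the course itself uses the analogous `tf.constant`, `tf.matmul`, `tf.squeeze`, `tf.argmax` and `tf.one_hot` calls; the NumPy version just keeps this sketch dependency-light):

```python
import numpy as np

rng = np.random.default_rng(42)

# Exercises 3-5: two random [5, 300] tensors, matrix multiplication and dot product.
# Matrix multiplication needs inner dimensions to match: (5, 300) @ (300, 5) -> (5, 5)
A = rng.uniform(0, 1, size=(5, 300))
B = rng.uniform(0, 1, size=(5, 300))
mat = A @ B.T
dot = np.dot(A, B.T)  # equivalent for 2-D arrays
print(mat.shape)  # (5, 5)

# Exercise 8: squeeze [1, 224, 224, 3] -> [224, 224, 3]
t = rng.uniform(0, 1, size=(1, 224, 224, 3))
print(np.squeeze(t).shape)  # (224, 224, 3)

# Exercises 9-10: argmax and one-hot encoding of a [10] tensor
v = rng.integers(0, 10, size=10)
idx = int(v.argmax())       # index of the maximum value
one_hot = np.eye(10)[v]     # one row of the identity matrix per element
print(idx, one_hot.shape)
```

Swapping `np.*` for the matching `tf.*` calls gives the TensorFlow solutions the exercises ask for.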
### 📖 00. TensorFlow Fundamentals Extra-curriculum

* Read through the [list of TensorFlow Python APIs](https://www.tensorflow.org/api_docs/python/), pick one we haven't gone through in this notebook, reverse-engineer it (write out the documentation code for yourself) and figure out what it does.
* Try to create a series of tensor functions to calculate your most recent grocery bill (it doesn't have to use the item names, the amounts will do).
  * How would you calculate your grocery spending for the month and for the year using tensors?
* Go through the [TensorFlow 2.x quick start for beginners tutorial](https://www.tensorflow.org/tutorials/quickstart/beginner) and be sure to type out all of the code yourself, even if you don't understand it yet.
  * Are any of the functions we used here the same as the ones in the tutorial? Which are the same? Which haven't you seen before?
* Watch the video ["What's a tensor?"](https://www.youtube.com/watch?v=f5liqUk0ZTw) - a great visual introduction to many of the concepts we've covered in this notebook.

---

### 🛠 01. Neural Network Regression with TensorFlow Exercises

1. Create your own regression dataset (or make the one we created in "Create data to view and fit" bigger) and build and fit a model to it.
2. Try building a neural network with 4 Dense layers and fitting it to your own regression dataset - how does it perform?
3. Try and improve the results we got on the insurance dataset; some things you might want to try include:
  * Building a larger model (how does one with 4 Dense layers go?).
  * Increasing the number of units in each layer.
  * Looking up the documentation of [the Adam optimizer](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Adam) and seeing what its first parameter does - what happens if you increase it by 10x?
  * What happens if you train for longer (say, 300 epochs instead of 200)?
4. Import the [Boston housing price dataset](https://www.tensorflow.org/api_docs/python/tf/keras/datasets/boston_housing/load_data) from TensorFlow's `tf.keras.datasets` module and model it.

### 📖 01. Neural Network Regression with TensorFlow Extra-curriculum

* [MIT Introduction to Deep Learning lecture 1](https://www.youtube.com/watch?v=7sB052Pz0sQ&ab_channel=AlexanderAmini) - a great overview of what's actually happening behind all of the code we're running.
* Read chapter 1 of Michael Nielsen's *Neural Networks and Deep Learning* (about 1 hour) - a deep, hands-on read for building an intuition of neural networks.
* To practise your regression modelling with TensorFlow, I'd also encourage you to look through [Kaggle's datasets](https://www.kaggle.com/data), find a regression dataset which sparks your interest and try to model it.

---

### 🛠 02. Neural Network Classification with TensorFlow Exercises

1. Play with neural networks in the [TensorFlow Playground](https://playground.tensorflow.org/) for 10 minutes. Especially try different values of the learning rate: what happens when you decrease it? What happens when you increase it?
2. 
Replicate the model pictured in the [TensorFlow Playground diagram](https://playground.tensorflow.org/#activation=relu&batchSize=10&dataset=circle&regDataset=reg-plane&learningRate=0.001&regularizationRate=0&noise=0&networkShape=6,6,6,6,6&seed=0.51287&showTestData=false&discretize=false&percTrainData=50&x=true&y=true&xTimesY=false&xSquared=false&ySquared=false&cosX=false&sinX=false&cosY=false&sinY=false&collectStats=false&problem=classification&initZero=false&hideText=false&regularization_hide=true&discretize_hide=true&regularizationRate_hide=true&percTrainData_hide=true&dataset_hide=true&problem_hide=true&noise_hide=true&batchSize_hide=true) below using TensorFlow code. Compile it using the Adam optimizer, binary crossentropy loss and the accuracy metric. Once it's compiled, check out the model summary.
![tensorflow playground example neural network](https://oss.gittoolsai.com/images/mrdbourke_tensorflow-deep-learning_readme_64541eafec62.png)
* Try this network out for yourself on the [TensorFlow Playground website](https://playground.tensorflow.org/#activation=relu&batchSize=10&dataset=circle&regDataset=reg-plane&learningRate=0.001&regularizationRate=0&noise=0&networkShape=6,6,6,6,6&seed=0.51287&showTestData=false&discretize=false&percTrainData=50&x=true&y=true&xTimesY=false&xSquared=false&ySquared=false&cosX=false&sinX=false&cosY=false&sinY=false&collectStats=false&problem=classification&initZero=false&hideText=false&regularization_hide=true&discretize_hide=true&regularizationRate_hide=true&percTrainData_hide=true&dataset_hide=true&problem_hide=true&noise_hide=true&batchSize_hide=true). Hint: the network has 5 hidden layers, but the output layer isn't shown in the diagram; you'll have to decide the output layer's structure based on the input data.
3. Create a classification dataset using Scikit-Learn's `make_moons()` function, visualize it, then build a model that fits it at over 85% accuracy.
4. Train a model to get 88%+ accuracy on the Fashion MNIST test set. Plot a confusion matrix to see the results after.
5. Recreate [TensorFlow's softmax activation function](https://www.tensorflow.org/api_docs/python/tf/keras/activations/softmax) in your own code (see the [softmax function on Wikipedia](https://en.wikipedia.org/wiki/Softmax_function) for reference). Make sure the function can accept a tensor and return that tensor after having the softmax function applied to it.
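As a hint for the softmax exercise above, a minimal NumPy sketch of the formula from the Wikipedia definition (this is not TensorFlow's implementation; TensorFlow's version operates along an axis of a batched tensor):

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    """Numerically stable softmax: exp(x - max) / sum(exp(x - max))."""
    shifted = x - x.max(axis=axis, keepdims=True)  # guard against overflow
    exps = np.exp(shifted)
    return exps / exps.sum(axis=axis, keepdims=True)

logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)
print(probs.round(3))  # probabilities in (0, 1), largest logit -> largest probability
print(probs.sum())     # sums to 1 (up to floating-point error)
```

Subtracting the max before exponentiating doesn't change the result (the factor cancels in the ratio) but prevents `np.exp` from overflowing on large logits.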
6. Create a function (or write code) to visualize multiple image predictions for the Fashion MNIST dataset at the same time. Plot at least three different images and their prediction labels at the same time. Hint: see the [classification tutorial in the TensorFlow documentation](https://www.tensorflow.org/tutorials/keras/classification) for inspiration.
7. Make a function to show an image of a certain class of the Fashion MNIST dataset and make a prediction on it. For example, plot 3 images of the "T-shirt" class with their predictions.

### 📖 02. Neural Network Classification with TensorFlow Extra-curriculum

* Watch 3Blue1Brown's neural networks video 2: [*Gradient descent, how neural networks learn*](https://www.youtube.com/watch?v=IHZwWFHWa-w). After you're done, write 100 words about what you've learned.
  * If you haven't already, watch video 1: [*But what is a neural network?*](https://www.youtube.com/watch?v=aircAruvnKk). Note the activation function they talk about at the end.
* Watch [MIT's Introduction to Deep Learning lecture 1](https://youtu.be/njKP3FqW3Sk) (if you haven't already) to get an idea of the concepts behind linear and non-linear functions.
* Spend 1 hour reading [Michael Nielsen's Neural Networks and Deep Learning book](http://neuralnetworksanddeeplearning.com/index.html).
* Read the [ML-Glossary documentation on activation functions](https://ml-cheatsheet.readthedocs.io/en/latest/activation_functions.html). Which one is your favourite?
  * After you've read the ML-Glossary, see which activation functions are available in TensorFlow by searching "tensorflow activation functions".

---

### 🛠 03. Computer Vision & Convolutional Neural Networks in TensorFlow Exercises

1. Spend 20 minutes reading and interacting with the [CNN explainer website](https://poloclub.github.io/cnn-explainer/).
 * What are the key terms? e.g. explain convolution in your own words, pooling in your own words.
2. Spend 10 minutes exploring the "understanding hyperparameters" section of the [CNN explainer](https://poloclub.github.io/cnn-explainer/) website.
  * What is the kernel size?
  * What is the stride?
  * How could you adjust each of these in TensorFlow code?
3. Take 10 photos of two different things and build your own CNN image classifier using the techniques we've covered here.
4. Find an ideal learning rate for a simple convolutional neural network model on your 10-class dataset.

### 📖 03. 
使用 TensorFlow 进行计算机视觉与卷积神经网络课外阅读\n\n* **观看：** [MIT 深度学习导论：计算机视觉](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=uapdILWYTzE&list=PLtBw6njQRU-rwp5__7C0oIVt26ZgjG9NI&index=3&ab_channel=AlexanderAmini) 讲座。这将帮助您深入理解卷积神经网络的工作原理。\n* **观看：** deeplearning.ai 的 [小批量梯度下降详解](https:\u002F\u002Fyoutu.be\u002F-_4Zi8fCZO4)。如果您仍然好奇为什么我们要使用**批次**来训练模型，这篇技术概述将解答许多相关问题。\n* **阅读：** [CS231n 视觉识别中的卷积神经网络](https:\u002F\u002Fcs231n.github.io\u002Fconvolutional-networks\u002F) 课程笔记。这将让您深入了解我们所编写的卷积神经网络架构背后的运行机制。\n* **阅读：** [\"深度学习中卷积运算指南\"](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1603.07285.pdf)。这篇论文详细阐述了卷积层背后的所有数学原理。\n* **代码实践：** [TensorFlow 数据增强教程](https:\u002F\u002Fwww.tensorflow.org\u002Ftutorials\u002Fimages\u002Fdata_augmentation)。如需更深入地了解 TensorFlow 中的数据增强技术，可花一到两个小时阅读该教程。\n\n---\n\n### 🛠 04. TensorFlow 中的迁移学习 第1部分：特征提取练习\n\n1. 使用我们现有的数据，但采用 TensorFlow Hub 中的 MobileNetV2 架构进行特征提取（`mobilenet_v2_100_224\u002Ffeature_vector`），构建并训练一个模型。与我们之前的其他模型相比，它的性能如何？\n2. 列出 TensorFlow Hub 上尚未使用过的3种不同的图像分类模型。\n3. 构建一个模型，用于分类你拍摄的两种不同事物的图片。\n  * 你可以选择 TensorFlow Hub 上任何你喜欢的特征提取层来完成这个任务。\n  * 每个类别的图片数量应至少为10张，例如，要构建冰箱与烤箱的分类器，你需要10张冰箱的图片和10张烤箱的图片。\n4. 目前在 ImageNet 数据集上表现最好的模型是什么？\n  * 提示：你可以查看 [sotabench.com](https:\u002F\u002Fwww.sotabench.com) 获取相关信息。\n\n### 📖 04. TensorFlow 中的迁移学习 第1部分：特征提取课外拓展\n\n* 阅读 [TensorFlow 迁移学习指南](https:\u002F\u002Fwww.tensorflow.org\u002Ftutorials\u002Fimages\u002Ftransfer_learning)，用自己的话定义迁移学习的两种主要类型。\n* 浏览 TensorFlow 官网上的 [使用 TensorFlow Hub 进行迁移学习教程](https:\u002F\u002Fwww.tensorflow.org\u002Ftutorials\u002Fimages\u002Ftransfer_learning_with_hub)，将所有代码重新编写到一个新的 Google Colab 笔记本中，并在每一步旁边添加注释，说明该步骤的作用。\n* 在本笔记本中我们尚未介绍 TensorFlow Hub 的微调方法，但如果你想了解如何微调 TensorFlow Hub 模型，请阅读 TensorFlow 官网上关于 [微调 TensorFlow Hub 模型的教程](https:\u002F\u002Fwww.tensorflow.org\u002Fhub\u002Ftf2_saved_model#fine-tuning)。\n* 了解 [Weights & Biases 实验跟踪工具](https:\u002F\u002Fwww.wandb.com\u002Fexperiment-tracking)，你如何将其与我们现有的 TensorBoard 日志集成？\n\n---\n\n### 🛠 05. TensorFlow 中的迁移学习 第2部分：微调练习\n\n1. 
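微调练习中"只微调最后 N 层"的常见写法，是先冻结整个基础模型，再把末尾若干层设回可训练。下面用占位对象演示这一索引逻辑（`DummyLayer` 仅为示意，真实代码中对应 `base_model.layers` 里的 Keras 层对象）：

```python
class DummyLayer:
    """示意用的占位层：真实场景中是带 trainable 属性的 Keras 层。"""
    def __init__(self, name):
        self.name = name
        self.trainable = True

layers = [DummyLayer(f"layer_{i}") for i in range(50)]

# 先全部冻结，再解冻最后 20 层
# （等价于 Keras 中 `for layer in base_model.layers[:-20]: layer.trainable = False`）
for layer in layers:
    layer.trainable = False
for layer in layers[-20:]:
    layer.trainable = True

print(sum(l.trainable for l in layers))  # → 20
```

注意：在 Keras 中修改 `trainable` 之后，需要重新调用 `model.compile()` 改动才会在训练中生效。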
使用特征提取方法，以 [`tf.keras.applications.EfficientNetB0`](https:\u002F\u002Fwww.tensorflow.org\u002Fapi_docs\u002Fpython\u002Ftf\u002Fkeras\u002Fapplications\u002FEfficientNetB0) 作为基础模型，在 Food Vision 数据集的10%上训练一个迁移学习模型，共10个 epoch。使用 [`ModelCheckpoint`](https:\u002F\u002Fwww.tensorflow.org\u002Fapi_docs\u002Fpython\u002Ftf\u002Fkeras\u002Fcallbacks\u002FModelCheckpoint) 回调函数将权重保存到文件。\n2. 对你在第1题中训练的基础模型的最后20层进行微调，再训练10个 epoch。结果如何？\n3. 对你在第1题中训练的基础模型的最后30层进行微调，再训练10个 epoch。结果如何？\n4. 编写一个函数，用于从任意数据集（训练集或测试集）和任意类别（如“牛排”、“披萨”等）中可视化一张图片，并使用训练好的模型对其进行预测。\n\n### 📖 05. TensorFlow 中的迁移学习 第2部分：微调课外拓展\n\n* 阅读 TensorFlow 中关于 [数据增强的文档](https:\u002F\u002Fwww.tensorflow.org\u002Ftutorials\u002Fimages\u002Fdata_augmentation)。\n* 阅读 [ULMFit 论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1801.06146)（技术性内容），了解冻结和解冻不同层的概念。\n* 学习学习率调度的相关知识（TensorFlow 提供了 [LearningRateScheduler 回调函数](https:\u002F\u002Fwww.tensorflow.org\u002Fapi_docs\u002Fpython\u002Ftf\u002Fkeras\u002Fcallbacks\u002FLearningRateScheduler)），它将如何影响我们的模型训练？\n  * 如果你计划训练更长时间，可能需要逐步降低学习率……越接近“山谷底部”，步伐就应该越小。可以想象一下，在沙发底下找到一枚硬币的过程：刚开始时手臂的动作会比较大，而越靠近硬币，动作就越小。\n\n---\n\n### 🛠 06. TensorFlow 中的迁移学习 第3部分：扩展练习\n\n1. 自己拍摄3张食物照片，使用训练好的模型对它们进行预测，在 Discord 上与其他同学分享你的预测结果，秀一秀你的 Food Vision 模型吧！🍔👁\n2. 使用相同的数据，训练一个基于特征提取的迁移学习模型，共10个 epoch；然后将其性能与一个先进行5个 epoch 特征提取、再进行5个 epoch 微调的模型（就像我们在本笔记本中使用的那样）进行比较。哪种方法更好？\n3. 使用 [`mixed_precision`](https:\u002F\u002Fwww.tensorflow.org\u002Fguide\u002Fmixed_precision) 开启混合精度训练，重新创建第一个模型（即特征提取模型）。\n  * 混合精度训练是否能使模型训练速度更快？\n  * 它是否会影响模型的准确率或性能？\n  * 使用混合精度训练有哪些优势？\n\n### 📖 06. TensorFlow 中的迁移学习 第3部分：扩展课外拓展\n\n* 花15分钟阅读 [EarlyStopping 回调函数](https:\u002F\u002Fwww.tensorflow.org\u002Fapi_docs\u002Fpython\u002Ftf\u002Fkeras\u002Fcallbacks\u002FEarlyStopping)。它有什么作用？我们如何在模型训练中使用它？\n* 花1个小时阅读关于 [Streamlit](https:\u002F\u002Fwww.streamlit.io\u002F) 的资料。它能做什么？你如何将本笔记本中的一些内容整合到一个 Streamlit 应用程序中？\n\n---\n\n### 🛠 07. 
阶段性项目 1：🍔👁 Food Vision Big™ 练习\n\n**注：** 阶段性项目 1 的主要任务是完成 [阶段性项目 1 模板笔记本](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fblob\u002Fmain\u002Fextras\u002FTEMPLATE_07_food_vision_milestone_project_1.ipynb) 中的“TODO”部分。完成后，请继续进行以下内容。\n\n1. 对大规模 Food Vision 模型使用与上一个笔记本（[迁移学习第 3 部分：扩展规模](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fblob\u002Fmain\u002F06_transfer_learning_in_tensorflow_part_3_scaling_up.ipynb)）中相同的评估方法。具体来说，建议查看：\n   * 模型所有预测与真实标签之间的混淆矩阵。\n   * 展示每个类别 F1 分数的图表。\n   * 可视化模型对各类图像的预测结果，并将预测与真实标签进行对比。\n     * 例如，绘制测试数据集中的某张样本图像，并在图标题中显示预测结果、预测概率以及真实标签。\n   * **注意：** 为了将预测标签与测试标签进行比较，在加载测试数据时设置 `shuffle=False`，以保持测试数据的顺序与预测标签的顺序一致。\n2. 自己拍摄 3 张食物照片，并使用 Food Vision 模型对这些图片进行预测。效果如何？请与其他同学分享你的图片和预测结果。\n3. 重新训练本笔记本中使用的模型（特征提取和微调），但这次将基础模型从 `EfficientNetB0` 替换为 [`EfficientNetB4`](https:\u002F\u002Fwww.tensorflow.org\u002Fapi_docs\u002Fpython\u002Ftf\u002Fkeras\u002Fapplications\u002FEfficientNetB4)。你是否观察到性能有所提升？训练时间是否更长？有哪些需要权衡的地方？\n4. 请说出混合精度训练的一个重要优势，并说明这一优势是如何实现的？\n\n### 📖 07. 阶段性项目 1：🍔👁 Food Vision Big™ 课外拓展\n\n* 阅读有关学习率调度以及 [学习率调度回调函数](https:\u002F\u002Fwww.tensorflow.org\u002Fapi_docs\u002Fpython\u002Ftf\u002Fkeras\u002Fcallbacks\u002FLearningRateScheduler) 的资料。它是什么？对本项目有何帮助？\n* 阅读有关 TensorFlow 数据加载器的内容（[提升 TensorFlow 数据加载性能](https:\u002F\u002Fwww.tensorflow.org\u002Fguide\u002Fdata_performance)）。我们是否遗漏了什么？在使用 TensorFlow 加载数据时，你通常会注意哪些方法？提示：查看页面底部的总结部分，那里有很好的思路汇总。\n* 阅读关于 [TensorFlow 混合精度训练](https:\u002F\u002Fwww.tensorflow.org\u002Fguide\u002Fmixed_precision) 的文档。在使用混合精度训练时，需要注意哪些关键点？\n\n---\n\n### 🛠 08. TensorFlow 中自然语言处理（NLP）入门练习\n1. 使用 [Keras 顺序 API](https:\u002F\u002Fwww.tensorflow.org\u002Fapi_docs\u002Fpython\u002Ftf\u002Fkeras\u002FSequential) 而不是函数式 API，重新构建、编译并训练 `model_1`、`model_2` 和 `model_5`。\n2. 使用 10% 的训练数据重新训练基线模型。与使用 10% 训练数据的通用句子编码器模型相比，其性能如何？\n3. 
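上文阶段性项目 1 建议绘制每个类别的 F1 分数。下面是一个不依赖 sklearn 的最小 NumPy 草图（假设已有整数形式的 `y_true` 与 `y_pred`；实际项目中也可以直接调用 `sklearn.metrics.f1_score(y_true, y_pred, average=None)` 得到同样的逐类结果）：

```python
import numpy as np

def per_class_f1(y_true, y_pred, num_classes):
    """基于混淆矩阵逐类计算 F1 = 2*P*R / (P+R)。"""
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1                              # 行 = 真实类别，列 = 预测类别
    tp = np.diag(cm).astype(np.float64)
    precision = tp / np.maximum(cm.sum(axis=0), 1)  # 列和 = 被预测为该类的样本数
    recall = tp / np.maximum(cm.sum(axis=1), 1)     # 行和 = 该类的真实样本数
    denom = np.maximum(precision + recall, 1e-12)   # 防止除零
    return 2 * precision * recall / denom

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
print(per_class_f1(y_true, y_pred, num_classes=3))
```

得到逐类 F1 之后，用 `matplotlib` 画一张按分数排序的水平条形图，即可直观看出模型最弱的类别。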
尝试通过在实例化为 Keras 层时设置 `trainable=True`，来微调 TF Hub 通用句子编码器模型。\n\n```\n# 我们可以用这个编码层代替文本向量化器和嵌入层\nsentence_encoder_layer = hub.KerasLayer(\"https:\u002F\u002Ftfhub.dev\u002Fgoogle\u002Funiversal-sentence-encoder\u002F4\",\n                                        input_shape=[],\n                                        dtype=tf.string,\n                                        trainable=True) # 设置 trainable=True，以微调 TensorFlow Hub 模型\n```\n4. 使用整个训练集（不划分验证集）重新训练目前为止表现最好的模型。然后使用该模型对测试数据集进行预测，并将预测结果格式化为与 Kaggle 的 `sample_submission.csv` 文件相同的格式（可在 Colab 的“文件”选项卡中查看 `sample_submission.csv` 文件的样式）。完成后，请尝试 [提交到 Kaggle 竞赛](https:\u002F\u002Fwww.kaggle.com\u002Fc\u002Fnlp-getting-started\u002Fdata)，你的模型表现如何？\n5. 使用多数投票法（众数）组合集成预测结果，这种方法与平均各模型预测概率相比，性能如何？\n6. 使用表现最佳的模型在验证集上的预测结果及其对应的验证集真实标签，绘制混淆矩阵。\n\n### 📖 08. TensorFlow 中的自然语言处理（NLP）简介——课外拓展\n为了实践你所学到的知识，一个不错的想法是花一小时分别完成以下三项任务（总共三小时，当然如果你愿意也可以全部做完），然后写一篇博客文章总结你的学习成果。\n\n* 要了解 NLP 领域中的各类问题及其解决方法，请阅读：\n  * [自然语言处理的简单介绍](https:\u002F\u002Fbecominghuman.ai\u002Fa-simple-introduction-to-natural-language-processing-ea66a1747b32)\n  * [如何解决 90% 的 NLP 问题：分步指南](https:\u002F\u002Fblog.insightdatascience.com\u002Fhow-to-solve-90-of-nlp-problems-a-step-by-step-guide-fda605278e4e)\n* 观看 [MIT 的循环神经网络讲座](https:\u002F\u002Fyoutu.be\u002FSEnXr6v2ifU)。这将极大地补充你对之前构建的 RNN 模型内部机制的理解。\n* 阅读 [TensorFlow 官网上的词嵌入页面](https:\u002F\u002Fwww.tensorflow.org\u002Ftutorials\u002Ftext\u002Fword_embeddings)。词嵌入在 NLP 中占据非常重要的地位。虽然我们在本笔记本中已经涉及过相关内容，但再多一些练习仍然非常值得。一个不错的练习是将该指南中的所有代码重新编写到一个新的笔记本中。\n* 如果你想深入了解 TensorFlow 中的 RNN，可以阅读并复现 [TensorFlow RNN 指南](https:\u002F\u002Fwww.tensorflow.org\u002Fguide\u002Fkeras\u002Frnn)。我们已经讨论过该指南中的许多概念，但亲自再写一遍代码仍然很有意义。\n* 文本数据并不总是像我们下载的数据那样整齐。因此，如果你想了解更多关于如何为 TensorFlow 深度学习模型准备不同来源的文本数据，可以参考以下内容：\n  * [TensorFlow 文本加载教程](https:\u002F\u002Fwww.tensorflow.org\u002Ftutorials\u002Fload_data\u002Ftext)\n  * Real Python 的 [使用 Python 读取文本文件](https:\u002F\u002Frealpython.com\u002Fread-write-files-python\u002F)\n* 本笔记本主要关注 NLP 
代码的编写。若想从数学角度深入了解深度学习在 NLP 中的应用，可以阅读 [斯坦福大学《深度学习自然语言处理》讲义第一部分](https:\u002F\u002Fweb.stanford.edu\u002Fclass\u002Fcs224n\u002Freadings\u002Fcs224n-2019-notes01-wordvecs1.pdf)。\n  * 如果想要更深入的学习，甚至可以选修整个 [CS224n](http:\u002F\u002Fweb.stanford.edu\u002Fclass\u002Fcs224n\u002F) 课程（深度学习自然语言处理）。\n* 值得阅读的优秀博客文章：\n  * Andrej Karpathy 的 [RNN 的不合理有效性](https:\u002F\u002Fkarpathy.github.io\u002F2015\u002F05\u002F21\u002Frnn-effectiveness\u002F) 探讨了如何利用 RNN 生成莎士比亚风格的文本。\n  * Mauro Di Pietro 的 [NLP 文本分类：TF-IDF vs Word2Vec vs BERT](https:\u002F\u002Ftowardsdatascience.com\u002Ftext-classification-with-nlp-tf-idf-vs-word2vec-vs-bert-41ff868d1794)，概述了将文本转换为数值并进行分类的不同技术。\n  * Machine Learning Mastery 的 [什么是词嵌入？](https:\u002F\u002Fmachinelearningmastery.com\u002Fwhat-are-word-embeddings\u002F)\n* 其他值得探索的主题：\n  * [注意力机制](https:\u002F\u002Fjalammar.github.io\u002Fvisualizing-neural-machine-translation-mechanics-of-seq2seq-models-with-attention\u002F)。这是 Transformer 架构的基础组成部分，通常也能提升深度 NLP 模型的性能。\n  * [Transformer 架构](http:\u002F\u002Fjalammar.github.io\u002Fillustrated-transformer\u002F)。这种模型架构近年来在 NLP 领域风靡一时，在许多基准测试中都达到了最先进水平。不过，它的运行需要更多的计算资源，[HuggingFace Models（原 HuggingFace Transformers）库](https:\u002F\u002Fhuggingface.co\u002Fmodels\u002F)可能是你快速上手的最佳选择。\n\n---\n\n### 🛠 09. 阶段性项目 2：SkimLit 📄🔥 练习\n1. 使用训练数据集中的所有数据训练 `model_5`，直到其性能不再提升为止。由于这可能需要较长时间，你可以使用以下工具：\n  * [`tf.keras.callbacks.ModelCheckpoint`](https:\u002F\u002Fwww.tensorflow.org\u002Fapi_docs\u002Fpython\u002Ftf\u002Fkeras\u002Fcallbacks\u002FModelCheckpoint) 来保存模型的最佳权重。\n  * [`tf.keras.callbacks.EarlyStopping`](https:\u002F\u002Fwww.tensorflow.org\u002Fapi_docs\u002Fpython\u002Ftf\u002Fkeras\u002Fcallbacks\u002FEarlyStopping) 在验证损失连续约 3 个 epoch 不再改善时停止训练。\n2. 
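上面提到用 `EarlyStopping` 在验证损失连续约 3 个 epoch 无改善时停止训练。其 `patience` 计数机制可以用如下纯 Python 草图理解（仅演示计数逻辑，并非 Keras 的内部实现；函数名为假设）：

```python
def early_stopping_epoch(val_losses, patience=3, min_delta=0.0):
    """返回训练应停止的 epoch 下标；若始终有改善则返回最后一个下标。

    当验证损失连续 patience 个 epoch 未比历史最优值降低超过 min_delta 时
    触发停止，对应 tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=3)。
    """
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best - min_delta:
            best = loss   # 有改善：刷新最优值并清零计数器
            wait = 0
        else:
            wait += 1     # 无改善：累计等待的 epoch 数
            if wait >= patience:
                return epoch
    return len(val_losses) - 1

losses = [1.0, 0.8, 0.7, 0.71, 0.72, 0.73, 0.5]
print(early_stopping_epoch(losses))  # 在下标 5 处停止，后面的 0.5 不会被看到
```

Keras 版还支持 `restore_best_weights=True`，停止时会把模型权重回滚到验证损失最优的那个 epoch。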
查看 [Keras 关于使用预训练 GloVe 词嵌入的指南](https:\u002F\u002Fkeras.io\u002Fexamples\u002Fnlp\u002Fpretrained_word_embeddings\u002F)。你能否将其应用到我们的某个模型中？\n  * 提示：你需要将其与自定义的 [Embedding](https:\u002F\u002Fwww.tensorflow.org\u002Fapi_docs\u002Fpython\u002Ftf\u002Fkeras\u002Flayers\u002FEmbedding) 层结合使用。\n  * 是否对 GloVe 词嵌入进行微调或保持冻结状态由你决定。\n3. 尝试将 TensorFlow Hub 的通用句子编码器预训练词嵌入替换为 [TensorFlow Hub BERT PubMed 专家](https:\u002F\u002Ftfhub.dev\u002Fgoogle\u002Fexperts\u002Fbert\u002Fpubmed\u002F2)（一种基于 PubMed 文本预训练的语言模型）的预训练词嵌入。这样做会对结果产生影响吗？\n  * 注意：使用 BERT PubMed 专家预训练词嵌入需要对序列进行额外的预处理步骤（详见 [TensorFlow Hub 指南](https:\u002F\u002Ftfhub.dev\u002Fgoogle\u002Fexperts\u002Fbert\u002Fpubmed\u002F2)）。\n  * BERT 模型的效果是否优于这篇论文中提到的结果？https:\u002F\u002Farxiv.org\u002Fpdf\u002F1710.06071.pdf\n4. 如果我们将每个序列的 `line_number` 和 `total_lines` 特征合并会怎样？例如，创建一个 `X_of_Y` 特征？这会影响模型性能吗？\n  * 另一个例子：`line_number=1` 且 `total_lines=11` 可以变为 `line_of_X=1_of_11`。\n5. 编写一个函数（或一组函数），用于接收一段摘要文本，按与模型训练相同的方式对其进行预处理，对摘要中的每个序列进行预测，并以如下格式返回摘要：\n  * `PREDICTED_LABEL`: `SEQUENCE`\n  * `PREDICTED_LABEL`: `SEQUENCE`\n  * `PREDICTED_LABEL`: `SEQUENCE`\n  * `PREDICTED_LABEL`: `SEQUENCE`\n  * …\n    * 你可以从 PubMed 中找到自己的非结构化 RCT 摘要，或者尝试使用这篇摘要：[*巴氯芬促进伴有丙型肝炎病毒感染的酒精依赖性肝硬化患者的戒酒*](https:\u002F\u002Fpubmed.ncbi.nlm.nih.gov\u002F22244707\u002F)。\n\n### 📖 09. 阶段性项目 2：SkimLit 📄🔥 课外拓展\n* 若想进一步学习文本处理和 spaCy 的使用，可以参考 [spaCy 的高级 NLP 课程](https:\u002F\u002Fcourse.spacy.io\u002Fen\u002F)。如果你将来要处理生产级别的 NLP 问题，很可能会用到 spaCy。\n* 为了从另一个角度了解如何解决类似我们刚刚经历过的文本分类问题，建议你学习 [Google 的文本分类机器学习课程](https:\u002F\u002Fdevelopers.google.com\u002Fmachine-learning\u002Fguides\u002Ftext-classification)。\n* 由于我们的数据集存在类别不平衡（许多真实世界的数据集也存在类似问题），因此值得查阅 [TensorFlow 关于处理类别不平衡数据的训练方法指南](https:\u002F\u002Fwww.tensorflow.org\u002Ftutorials\u002Fstructured_data\u002Fimbalanced_data)。\n\n---\n\n### 🛠 第10课：时间序列基础与里程碑项目3：BitPredict 💰📈 练习\n\n1. 对于单变量\u002F多变量数据，进行数据归一化是否有帮助？（例如，将所有值缩放到0到1之间）  \n  * 尝试对一个单变量模型（如`model_1`）和一个多变量模型（如`model_6`）进行归一化处理，观察是否会影响模型的训练或评估结果。\n2. 
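练习 1 中"将所有值缩放到 0 到 1 之间"即最小-最大归一化。下面是一个最小 NumPy 草图（示例数据为假设；注意缩放参数应只从训练集统计，再套用到测试集，以避免数据泄漏）：

```python
import numpy as np

def min_max_scale(train, test):
    """用训练集的最小/最大值同时缩放训练集与测试集。"""
    lo, hi = train.min(), train.max()
    scale = max(hi - lo, 1e-12)  # 防止训练集为常数序列时除零
    return (train - lo) / scale, (test - lo) / scale

train = np.array([100.0, 150.0, 200.0])
test = np.array([175.0, 250.0])
train_s, test_s = min_max_scale(train, test)
print(train_s)  # [0.  0.5 1. ]
print(test_s)   # 测试集的值可能落在 [0, 1] 之外，这是正常现象
```

对价格这类不断创新高的序列，测试集超出 [0, 1] 很常见，这也是归一化在时间序列上未必有帮助的原因之一，值得按练习要求做对比实验。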
获取最新的比特币数据，训练一个模型并观察效果（我们的数据截至2021年5月18日）。  \n  * 你可以从[coindesk.com\u002Fprice\u002Fbitcoin](https:\u002F\u002Fwww.coindesk.com\u002Fprice\u002Fbitcoin)免费下载比特币历史数据，点击“Export Data” -> “CSV”即可。\n3. 我们大多数模型都使用了`WINDOW_SIZE=7`，但是否存在更优的窗口大小呢？  \n  * 设计一系列实验，以确定是否存在更好的窗口大小。  \n  * 例如，可以训练10个不同的模型，设置`HORIZON=1`，但窗口大小分别从2到12不等。\n4. 使用[`tf.keras.preprocessing.timeseries_dataset_from_array()`](https:\u002F\u002Fwww.tensorflow.org\u002Fapi_docs\u002Fpython\u002Ftf\u002Fkeras\u002Fpreprocessing\u002Ftimeseries_dataset_from_array)创建一个与我们用于`model_1`相同的滑动窗口数据集，并用重新创建的数据集重新训练`model_1`。\n5. 在我们的多变量建模实验中，我们添加了比特币区块奖励大小作为额外特征，使时间序列变为多变量。  \n  * 你认为还可以添加哪些其他特征？  \n  * 如果有，请尝试加入这些特征，看看它们对模型有何影响？\n6. 为未来的预测生成预测区间。一种方法是使用全部数据训练一个集成模型，用该模型进行未来预测，然后像我们对`model_8`那样计算集成模型的预测区间。\n7. 对于未来预测，尝试先做一个预测，然后用这个预测结果重新训练模型，再做下一个预测、重新训练模型，如此循环往复。绘制结果图，与每次预测都不重新训练模型的情况（`model_9`）相比，结果有何不同？\n8. 在本笔记本中，我们只尝试了自己实现的算法。不过，不妨也看看专门用于预测的算法表现如何。  \n  * 尝试在建模实验部分列出的额外算法之一，例如：  \n    * [Facebook的Kats库](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FKats) - 这里包含许多模型，记住机器学习从业者的座右铭：实验、实验、实验。  \n    * [LinkedIn的Greykite库](https:\u002F\u002Fgithub.com\u002Flinkedin\u002Fgreykite)\n\n### 📖 第10课：时间序列基础与里程碑项目3：BitPredict 💰📈 课外拓展\n\n我们在时间序列预测和时间序列建模方面仅仅触及了皮毛。不过好消息是，你已经积累了丰富的动手编码经验。\n\n如果你想更深入地探索时间序列领域，我推荐以下资源：\n\n* [Forecasting: Principles and Practice](https:\u002F\u002Fotexts.com\u002Ffpp3\u002F) 是一本优秀的在线教材，详细讨论了时间序列预测中的许多重要概念。我特别建议完整阅读第1章。  \n  * 我强烈推荐至少阅读第1章以及关于预测准确度衡量标准的那一章。\n* 🎥 Markus Loning的[机器学习与时间序列导论](https:\u002F\u002Fyoutu.be\u002FwqQKFu41FIw)讲解了不同类型的时间序列问题及其解决方法。该视频主要基于`sktime`库（面向时间序列的Scikit-Learn），但其中的原则同样适用于其他场景。\n* Isaac Faber的[*为什么你应该关注Nate Silver与Nassim Taleb的推特大战*](https:\u002F\u002Ftowardsdatascience.com\u002Fwhy-you-should-care-about-the-nate-silver-vs-nassim-taleb-twitter-war-a581dce1f5fc)是一篇精彩的讨论文章，深入探讨了不确定性在选举预测中的作用。\n* [TensorFlow时间序列教程](https:\u002F\u002Fwww.tensorflow.org\u002Ftutorials\u002Fstructured_data\u002Ftime_series) - 介绍如何使用TensorFlow预测天气时间序列数据。\n* 📕 Nassim Nicholas 
Taleb的[*黑天鹅*](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FThe_Black_Swan:_The_Impact_of_the_Highly_Improbable) - Nassim Taleb曾是一名自营交易员，从事交易工作长达25年。这本书汇集了他从亲身经历中总结出的诸多教训，彻底改变了我对预测能力的看法。  \n* Skander Hannachi博士的[*让经验丰富的机器学习从业者感到惊讶的3个时间序列预测事实*](https:\u002F\u002Ftowardsdatascience.com\u002F3-facts-about-time-series-forecasting-that-surprise-experienced-machine-learning-practitioners-69c18ee89387) - 时间序列数据与其他类型的数据有所不同。如果你之前主要从事其他类型的机器学习任务，进入时间序列领域可能需要一些调整。Hannachi概述了其中最常见的3点。\n* 🎥 Jordan Kern的世界级讲座，观看这些课程可以帮助你从零开始掌握时间序列问题：  \n  * [时间序列分析](https:\u002F\u002Fyoutu.be\u002FPrpu_U5tKkE) - 如何分析时间序列数据。  \n  * [时间序列建模](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=s3XH7fTHMb4) - 不同的时间序列建模技术（其中许多并非深度学习方法）。\n\n---\n\n## TensorFlow开发者认证（存档）\n\n> **注**：自2024年5月1日起，TensorFlow开发者认证已不再提供购买。经与TensorFlow认证团队联系后，他们表示该项目已关闭，目前尚无明确的后续计划（详情请参阅[#645](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fdiscussions\u002F645)）。  \n>\n> 鉴于此，以下练习和课外拓展内容仅作存档之用。课程的其他材料仍然有效。\n\n### 🛠 11. 通过 TensorFlow 开发者认证练习（存档）\n\n**准备工作：大脑**\n1. 阅读《TensorFlow 开发者认证考生手册》（[链接](https:\u002F\u002Fwww.tensorflow.org\u002Fextras\u002Fcert\u002FTF_Certificate_Candidate_Handbook.pdf)）。\n2. 浏览《TensorFlow 开发者认证考生手册》中的技能清单部分，并创建一个涵盖所有所需技能的笔记本，为每项技能编写代码。这个笔记本可以在考试期间作为参考。\n\n![将 TensorFlow 开发者手册中的技能映射到笔记本中](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmrdbourke_tensorflow-deep-learning_readme_974b302b9dd1.png)\n*示例：将 TensorFlow 开发者认证考生手册中的技能清单部分映射到笔记本中。*\n\n**准备工作：电脑**\n1. 参考 [PyCharm 快速入门教程](https:\u002F\u002Fwww.jetbrains.com\u002Fpycharm\u002Flearning-center\u002F)，确保熟悉 PyCharm 的使用（考试使用 PyCharm，可下载免费版本）。\n2. 阅读并按照《TensorFlow 开发者认证考试准备指南》中的建议步骤操作（[链接](https:\u002F\u002Fwww.tensorflow.org\u002Fextras\u002Fcert\u002FSetting_Up_TF_Developer_Certificate_Exam.pdf)）。\n3. 
完成上述步骤后，在 PyCharm 中确保能够用 TensorFlow 训练模型。GitHub 上提供的示例脚本 `image_classification_test.py` 中的模型和数据集应该足够了。如果能在 5–10 分钟内完成模型训练并保存，说明你的电脑性能足以应对考试中的模型训练任务。\n    - 在参加考试前，请务必在本地 PyCharm 环境中积累运行模型的经验。Google Colab（我们在课程中使用的工具）与 PyCharm 有些不同。\n\n![在参加 TensorFlow 开发者认证考试前，确保能够在本地 PyCharm 中运行 TensorFlow 代码](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmrdbourke_tensorflow-deep-learning_readme_47d93a86e176.png)\n*在参加考试前，请确保能够在本地 PyCharm 环境中运行 TensorFlow 代码。如果示例脚本 `image_classification_test.py`（[链接](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fblob\u002Fmain\u002Fextras\u002Fimage_classification_test.py)）能在你的本地机器上于 5–10 分钟内完整运行，则说明你的本地设备可以应对考试；若无法满足要求，也可使用 Google Colab 进行模型训练、保存及下载，以提交至考试。*\n\n### 📖 11. 通过 TensorFlow 开发者认证的课外补充材料（存档）\n\n如果你想进一步提升 TensorFlow 和深度学习相关技能，或为考试做更充分的准备，我强烈推荐以下资源：\n\n* 📄 **阅读：** [我是如何获得 TensorFlow 开发者认证的（以及你也可以做到）](https:\u002F\u002Fwww.mrdbourke.com\u002Fhow-i-got-tensorflow-developer-certified\u002F)\n* 🎥 **观看：** [我是如何通过 TensorFlow 开发者认证考试的（以及你也可以做到）](https:\u002F\u002Fyoutu.be\u002Fya5NwvKafDk)\n* 学习 Coursera 上的 [TensorFlow 实战专项课程](https:\u002F\u002Fdbourke.link\u002Ftfinpractice)\n* 阅读《动手学机器学习：使用 Scikit-Learn、Keras 和 TensorFlow》第 2 版的后半部分（[链接](https:\u002F\u002Famzn.to\u002F3aYexF2)）\n\n---\n\n## 本课程未涵盖的内容\n\n深度学习是一个非常广泛的主题，因此本课程不可能面面俱到。\n\n以下是一些你可能希望进一步探索的主要领域：\n\n* Transformer 模型（席卷自然语言处理领域的神经网络架构）\n* 多模态模型（同时利用文本和图像等多种数据源的模型）\n* 强化学习\n* 无监督学习\n\n## 课程之外的进阶学习方向\n\n* 迈克尔·尼尔森的《神经网络与深度学习》一书（[链接](http:\u002F\u002Fneuralnetworksanddeeplearning.com\u002F)）——如果 Zero to Mastery 的 TensorFlow 深度学习课程是从上往下讲解，那么这本书则是从下往上构建知识体系，是极佳的补充资源。\n* Deeplearning.AI 的专项课程——Zero to Mastery 的 TensorFlow 课程侧重于代码实践，而 Deeplearning.AI 的专项课程则会深入讲解代码背后的原理。\n* 《Hands-On Machine Learning with Scikit-Learn, Keras and TensorFlow》一书（尤其是后半部分）——本课程中的许多内容都受到这本优秀教材的启发和指导。\n* Full Stack Deep Learning——学习如何将你的模型转化为基于机器学习的应用程序。\n* Made with ML 的 MLOps 资料（[链接](https:\u002F\u002Fmadewithml.com\u002F#mlops)）——类似于 Full Stack Deep 
Learning，但以更细粒度的小课程形式呈现，覆盖构建全栈式机器学习应用所需的各个环节（数据收集、标注、部署等）。\n* fast.ai 课程体系（[链接](https:\u002F\u002Fwww.fast.ai)）——在线上最好的（也是免费的）人工智能\u002F深度学习课程之一。无需多言。\n* 丹尼尔·伯克的文章“像我这样的初学者数据科学家如何积累经验？”（[链接](https:\u002F\u002Fwww.mrdbourke.com\u002Fhow-can-a-beginner-data-scientist-like-me-gain-experience\u002F)）——阅读这篇文章，了解如何在在线学习或大学毕业后为求职积累经验（先开始工作，再正式入职）。\n\n## 提问交流\n\n请联系 [丹尼尔·伯克](mailto:daniel@mrdbourke.com) 或在 [讨论区](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fdiscussions) 发帖提问（推荐方式）。\n\n  \n## 日志\n\n* 2024年5月2日 - 更新材料，以反映谷歌已关闭 TensorFlow 开发者认证考试（更多信息请参阅 [#645](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fdiscussions\u002F645)）\n* 2023年5月12日 - 为最新版本的 TensorFlow 更新了多份课程笔记本，并对第05课的几个 API 进行了更新：https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fdiscussions\u002F547\n* 2021年12月2日 - 在第02课笔记本中添加了针对 TensorFlow 2.7 的修复\n* 2021年11月11日 - 在第01课笔记本中添加了针对 TensorFlow 2.7 的修复\n* 2021年8月14日 - 添加了一篇关于 TensorFlow 2.6 更新及 EfficientNetV2 注意事项的讨论：https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fdiscussions\u002F166\n* 2021年7月16日 - 向 ZTM Academy 和 Udemy 版本的课程中新增了35个视频，内容涉及时间序列以及如何通过 TensorFlow 开发者认证\n* 2021年7月10日 - 向 ZTM Academy 和 Udemy 版本的课程中新增了29个经过编辑的时间序列视频，后续还将继续增加\n* 2021年7月7日 - 录制了5个关于通过 TensorFlow 开发者认证考试部分的视频——课程的所有视频均已录制完毕！！！接下来就是剪辑和上传了！🎉\n* 2021年7月6日 - （已归档）添加了 TensorFlow 认证考试指南：https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fblob\u002Fmain\u002F11_passing_the_tensorflow_developer_certification_exam.md ——明天将开始录制相关视频\n* 2021年7月5日 - 正在准备 TF 认证考试的相关材料（内容、原因及方法）\n* 2021年7月2日 - 时间序列部分的视频录制工作终于完成!!!!! 
现在可以开始上传了\n* 2021年6月30日 - 录制了12个时间序列部分的视频，总数已超过60个（这是迄今为止最大的章节），几乎接近完成！！！\n* 2021年6月29日 - 录制了10个时间序列部分的视频，总数正朝着60个迈进\n* 2021年6月28日 - 录制了10个时间序列部分的视频，下方标注总共有40个视频，但实际上可能接近50个\n* 2021年6月26日 - 录制了4个时间序列部分的视频，预计整个部分大约会有40个视频\n* 2021年6月25日 - 录制了8个时间序列部分的视频，并修复了时间序列笔记本中的大量错别字\n* 2021年6月24日 - 录制了14个时间序列部分的视频，明天还将继续录制更多\n* 2021年6月23日 - 完成了向时间序列笔记本中添加图片的工作，现在可以开始录制视频了\n* 2021年6月22日 - 向时间序列笔记本中添加了大量的图片，并开始制作幻灯片\n* 2021年6月21日 - 时间序列笔记本的代码已经完成，接下来将制作幻灯片和图片，为录制做准备\n* 2021年6月19日 - 将课程大纲整理成在线书籍，您可以在这里阅读：https:\u002F\u002Fdev.mrdbourke.com\u002Ftensorflow-deep-learning\u002F\n* 2021年6月18日 - 向时间序列笔记本中添加练习、课外内容和提纲\n* 2021年6月17日 - 在时间序列笔记本中添加了关于火鸡问题和模型比较的注释，下一步是制作提纲和图片\n* 2021年6月16日 - 在时间序列笔记本中添加了关于不确定性及未来预测的注释，接下来将处理火鸡问题\n* 2021年6月14日 - 添加了关于集成的注释，随后开始讲解预测区间\n* 2021年6月10日 - 完成了 N-BEATS 算法的注释，接下来将进入集成和预测区间的部分\n* 2021年6月9日 - 为时间序列笔记本添加了关于 N-BEATS 算法实现的注释\n* 2021年6月8日 - 向时间序列笔记本中添加注释，原计划本周末全部完成（但未能如期完成）\n* 2021年6月4日 - 继续更新时间序列笔记本中的注释，一点一滴地推进！\n* 2021年6月3日 - 向时间序列笔记本中添加了大量的注释和解释，进展顺利，后续还将继续补充！\n* 2021年6月2日 - 开始添加解释代码及更多学习资源的注释，未来几天将继续进行\n* 2021年6月1日 - 向时间序列笔记本中添加了火鸡问题，并清理了一部分代码，初步代码已就绪，接下来将编写注释和解释\n* 2021年5月28日 - 向时间序列笔记本中添加了未来预测、集成模型以及预测区间\n* 2021年5月25日 - 向时间序列笔记本中添加了多元时间序列，并修复了 LSTM 模型，接下来将引入 TensorFlow 的窗口化技术，并尝试不同的窗口大小\n* 2021年5月24日 - 修复了时间序列笔记本中损坏的预处理函数，LSTM 模型仍存在问题，后续还将继续补充内容\n* 2021年5月20日 - 继续创作时间序列相关内容\n* 2021年5月19日 - 继续创作时间序列相关内容，其中大部分内容在 Twitch 上直播：https:\u002F\u002Ftwitch.tv\u002Fmrdbourke\n* 2021年5月18日 - 添加了时间序列预测笔记本的大纲（[第10课笔记本](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fblob\u002Fmain\u002F10_time_series_forecasting_in_tensorflow.ipynb)），接下来将大幅增加相关内容\n* 2021年5月12日 - 第09课的所有视频现已在 Udemy 和 ZTM 上发布！！！尽情享受 SkimLit 的构建过程吧 📄🔥\n* 2021年5月11日 - 第08课和第09课共计40余条视频已在 Udemy 和 ZTM 上发布！！！\n* 2021年5月10日 - 时间序列相关内容的研究与准备工作\n* 2021年5月8日 - 时间序列相关内容的研究与准备工作\n* 2021年5月5日 - 第08课约有20余条视频已完成编辑，第09课则有10余条视频完成编辑，目前时间序列相关内容处于初稿阶段\n* 2021年5月4日 - 修复了第08课剩余视频中的音频缺失问题，接下来将着手制作时间序列相关内容！\n* 2021年5月3日 - 重新录制了第08课中的10条视频，解决了声音问题，这些视频将直接进入后期制作，预计本周末即可上线\n* 2021年5月2日 - 
发现第08课第09至第20条视频存在无音频的问题，计划重新录制这些视频\n* 2021年4月29日 - 🚀🚀🚀 在 Udemy 上正式上线！！！🚀🚀🚀\n* 2021年4月22日 - 第09课的视频录制工作已全部完成！添加了幻灯片并完善了第09课的视频笔记本\n* 2021年4月21日 - 录制了14条第09课的视频！今天真是超棒的一天！距离完成第09课又近了一步\n* 2021年4月20日 - 录制了10条第09课的视频\n* 2021年4月19日 - 录制了9条第09课的视频\n* 2021年4月16日 - 第09课的幻灯片已准备完毕，可以开始录制了！\n* 2021年4月15日 - 为第08课添加了幻灯片、课外内容、练习题以及视频笔记本，同时开始制作第09课的幻灯片，预计明天就能完成\n* 2021年4月14日 - 录制了12条第08课的视频，该章节已告一段落！接下来将制作第09课的幻灯片并投入录制\n* 2021年4月10日 - 录制了4条第08课的视频\n* 2021年4月9日 - 录制了6条第08课的视频\n* 2021年4月8日 - 录制了10条第08课的视频！明天还将继续录制更多！最后冲刺阶段到了！！！\n* 2021年4月7日 - 向第08课笔记本中添加了大量的图片，为明天的录制做好了准备！\n* 2021年4月1日 - 添加了[第09课：SkimLit](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fblob\u002Fmain\u002F09_SkimLit_nlp_milestone_project_2.ipynb)，几乎已完成，稍作整理后就可以开始制作幻灯片了！\n* 2021年3月31日 - 添加了第08课笔记本，计划明天完成，随后进入第09课\n* 2021年3月24日 - 录制了8条第07课的视频，已全部完成！接下来将制作第08课和第09课的相关材料（幻灯片\u002F笔记本）\n* 2021年3月23日 - 录制了6条第07课的视频（终于完成了！），计划明天完成\n* 2021年3月22日 - 对第07课笔记本进行了润色，使其适合录制，并制作了第07课的幻灯片，还添加了一个模板供学生练习使用，现在可以开始录制了！\n* 2021年3月17日 - 第07课笔记本已完成99%，并添加了指向课程前14小时视频的链接（[第一部分10小时](https:\u002F\u002Fyoutu.be\u002FtpCFfeUEGs8)，[第二部分4小时](https:\u002F\u002Fyoutu.be\u002FZUKz4125WNI)）\n* 2021年3月11日 - 向第07课笔记本中添加了更多的文字注释，计划明天完成，随后再制作幻灯片\n* 2021年3月10日 - 向第07课笔记本中输入了大量的解释性文字，明天将继续\n* 2021年3月9日 - 修复了第07课笔记本中的大量代码，现在应该能够从头到尾流畅运行（不过加载时间仍然是一个问题）\n* 2021年3月5日 - 添加了第07课的草稿笔记本（其中包含许多数据加载和模型训练方面的改进），计划在未来几天内进一步完善\n* 2021年3月1日 - 为第06课添加了幻灯片（可在此查看：https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fblob\u002Fmain\u002Fslides\u002F06_transfer_learning_with_tensorflow_part_3_scaling_up.pdf）\n* 2021年2月26日 - 🚀 正式上线！！！同时也完成了第06课视频的录制，接下来将进入第07课、第08课和第09课的录制阶段\n* 2021年2月24日 - 录制了9条第06课的视频，即将发布！！！\n* 2021年2月23日 - 为发布做准备，重新整理了 GitHub 仓库 🚀\n* 2021年2月18日 - 录制了8条第05课的视频……终于完成了！接下来将对 GitHub 仓库进行最后的润色\n* 2021年2月17日 - 录制了10条第05课的视频！计划明天完成 🚀\n* 2021年2月16日 - 对第05课的幻灯片进行了润色，并开始录制视频，目前已完成了7条第05课的视频\n* 2021年2月15日 - 第04课的视频录制工作已完成，现在正在准备录制第05课！\n* 2021年2月12日 - 录制了7条第04课的视频……原本希望录满10条，但先凑齐7条也行（🤔 类似的情况似乎之前也发生过）\n* 
2021年2月11日 - 无进展 - 为 [斯坦福大学 CS329s 课程](https:\u002F\u002Fstanford-cs329s.github.io\u002Fsyllabus.html) 提供了机器学习部署教程（使用了本课程中的模型代码！！！）——[完整教程材料请见此处](https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Fcs329s-ml-deployment-tutorial)\n* 2021年2月8日 - 录制了10条第03课的视频……第03课也完成了！🚀 接下来将进入第04课\n* 2021年1月30日至2月7日：无进展（正在为 [斯坦福大学 CS329s 课程](https:\u002F\u002Fstanford-cs329s.github.io\u002Fsyllabus.html) 准备机器学习部署讲座……后续再详细说明）\n* 2021年1月29日 - 录制了9条第03课的视频……比昨天更接近10条，但仍差一步\n* 2021年1月28日 - 录制了7条第03课的视频……原本希望录满10条，但先凑齐7条也可以\n* 2021年1月27日 - 录制了10条第03课的视频\n* 2021年1月26日 - 对 GitHub 仓库的 README 文件（即您当前所看到的内容）进行了美化，添加了一张漂亮的表格：https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning#course-materials\n* 2021年1月23日 - 第06课的幻灯片制作完成\n* 2021年1月22日 - 完成了第06课笔记本的审核，并开始制作第06课的幻灯片\n* 2021年1月21日 - 第05课的幻灯片制作完成，并开始审核第06课\n* 2021年1月20日 - 第05课笔记本已完成，其幻灯片也完成了95%\n* 2021年1月19日 - 找到了一种在课程期间存储数据的方案（使用与 Colab 笔记本同区域的 Google 存储服务，既便宜又快速）\n* 2021年1月18日 - 审核了第05课笔记本及其幻灯片\n* 2021年1月17日 - 第04课笔记本及其幻灯片制作完成\n* 2021年1月16日 - 审核了第04课笔记本，并制作了关于迁移学习的幻灯片\n* 2021年1月13日 - 再次审核了第03课笔记本，并完成了第03课的幻灯片制作。README 文件也进行了重大更新，第03课已完成99%，现在只需确定最佳的数据传输方式（例如，当学生下载时，中途应将数据存放在哪里？Dropbox？S3？~~GS~~（太贵了））\n* 2021年1月11日 - 审核了第03课笔记本，已完成95%，接下来将制作第03课的幻灯片\n* 2021年1月9日 - 我回来了！第02课的所有视频录制工作已完成，接下来将制作第03课、第04课和第05课的幻灯片及相关材料（之后我才会回到实验室）\n* 2020年12月19日 - 暂停（因家庭度假，暂停至2021年1月2日）\n* 2020年12月18日 - 第02课的视频录制已完成75%\n* 2020年12月17日 - 第02课的视频录制已完成50%\n* 2020年12月16日 - 第01课的视频录制已完成100%\n* 2020年12月15日 - 第01课的视频录制已完成90%\n* 2020年12月9日 - 完成了第00课视频的录制\n* 2020年12月8日 - 第00课的视频录制已完成90%\n* 2020年12月5日 - 使用第00课的素材试录了约6条视频\n* 2020年12月4日 - 在衣柜里搭建了一个录音棚：https:\u002F\u002Fraw.githubusercontent.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fmain\u002Fimages\u002Fmisc-studio-setup.jpeg\n* 2020年12月3日 - 第02课笔记本已完成，第02课的幻灯片也制作完毕，现在可以开始搭建录音棚了\n* 2020年12月2日 - 第02课笔记本已完成95%，第02课的幻灯片也已完成90%\n* 2020年12月1日 - 添加了第02课笔记本（已打磨90%），开始准备第02课的幻灯片\n* 2020年11月27日 - 对第01课笔记本进行了打磨，并制作了第01课的幻灯片\n* 2020年11月26日 - 对第00课笔记本进行了打磨，并制作了第00课的幻灯片","# TensorFlow 深度学习快速上手指南\n\n本指南基于 **Zero 
to Mastery Deep Learning with TensorFlow** 课程材料整理，旨在帮助开发者快速搭建环境并开始使用 TensorFlow 进行深度学习实践。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**: Windows, macOS 或 Linux (推荐 Ubuntu 20.04+)\n*   **Python 版本**: Python 3.8 - 3.11 (推荐 3.9 或 3.10)\n*   **硬件加速 (可选但推荐)**: NVIDIA GPU (需安装对应的 CUDA 和 cuDNN) 以加速模型训练。若无 GPU，代码可在 CPU 上运行，但速度较慢。\n*   **前置知识**: 具备基础的 Python 编程能力，了解基本的机器学习概念（如回归、分类）更佳。\n\n## 安装步骤\n\n### 1. 创建虚拟环境 (推荐)\n为了避免依赖冲突，建议使用 `venv` 或 `conda` 创建独立环境。\n\n```bash\npython -m venv tf-env\n# Windows 激活\ntf-env\\Scripts\\activate\n# macOS\u002FLinux 激活\nsource tf-env\u002Fbin\u002Factivate\n```\n\n### 2. 安装 TensorFlow\n使用国内镜像源（如清华源）可显著加快下载速度。\n\n```bash\npip install tensorflow -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n*注：若需使用 GPU 版本且已配置好 CUDA 环境，上述命令通常会自动安装 `tensorflow` (TF 2.x 后 CPU\u002FGPU 包合并，只需确保系统层面驱动正确)。*\n\n### 3. 获取课程代码与数据\n克隆官方仓库以获取所有 Jupyter Notebook 教程、数据集链接及练习材料。\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning.git\ncd tensorflow-deep-learning\n```\n\n### 4. 安装额外依赖\n部分笔记本（特别是 NLP 部分）可能需要额外的库，如 `scikit-learn`, `matplotlib`, `pandas`, `tensorboard` 等。\n\n```bash\npip install -r requirements.txt -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n*(如果仓库根目录没有 requirements.txt，可根据 Notebook 开头的 import 语句按需安装，通常执行以下命令即可覆盖大部分需求)*\n```bash\npip install scikit-learn matplotlib pandas tensorboard numpy -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n## 基本使用\n\n本课程的核心是通过 Jupyter Notebook 逐步学习。以下是启动第一个教程的步骤：\n\n### 1. 启动 Jupyter Notebook\n在项目目录下运行：\n\n```bash\njupyter notebook\n```\n浏览器将自动打开，显示文件列表。\n\n### 2. 
运行基础示例\n点击打开 `00_tensorflow_fundamentals.ipynb`。这是课程的起点，涵盖张量基础操作。\n\n以下是一个最简单的 TensorFlow 代码示例，验证环境是否正常工作（可直接在 Notebook 单元格中运行）：\n\n```python\nimport tensorflow as tf\n\n# 检查 TensorFlow 版本\nprint(f\"TensorFlow version: {tf.__version__}\")\n\n# 创建一个简单的张量 (Tensor)\nhello = tf.constant(\"Hello, TensorFlow!\")\nprint(hello)\n\n# 简单的矩阵乘法示例\na = tf.constant([[1, 2], [3, 4]])\nb = tf.constant([[5, 6], [7, 8]])\nc = tf.matmul(a, b)\n\nprint(\"Matrix multiplication result:\")\nprint(c)\n\n# 检查是否有可用的 GPU\ngpus = tf.config.list_physical_devices('GPU')\nif gpus:\n    print(f\"GPUs Available: {gpus}\")\nelse:\n    print(\"No GPU available, running on CPU.\")\n```\n\n### 3. 后续学习路径\n按照仓库中的顺序依次学习：\n1.  **00-02**: 掌握回归与分类问题的神经网络基础。\n2.  **03-06**: 深入计算机视觉（CNN）与迁移学习（Transfer Learning）。\n3.  **07**: 完成第一个里程碑项目 \"Food Vision\"。\n4.  **08+**: 探索自然语言处理（NLP）及其他高级主题。\n\n> **提示**: 每个 Notebook 中都包含了详细的代码注释、理论讲解以及课后练习（Exercises & Extra-curriculum），建议在完成视频学习或阅读后，务必动手完成练习题以巩固技能。","一家初创医疗科技公司的算法工程师小李，正负责开发一个基于 X 光片的肺炎辅助诊断系统，需要在短时间内构建高精度的图像分类模型。\n\n### 没有 tensorflow-deep-learning 时\n- 面对复杂的 TensorFlow API 和 Keras 架构，小李缺乏系统的学习路径，只能在零散的博客和过时的文档中摸索，效率极低。\n- 在尝试复现经典的迁移学习（Transfer Learning）策略时，因不熟悉 `EfficientNet` 等预训练模型的正确调用方式，频繁遭遇版本兼容性报错，调试耗时数天。\n- 缺乏标准化的项目实战参考，导致数据预处理、模型评估及回调函数设置等关键环节代码规范混乱，模型性能难以稳定复现。\n- 遇到自然语言处理（NLP）与视觉结合的多模态需求时，找不到权威的示例代码，只能凭猜测编写逻辑，增加了项目失败风险。\n\n### 使用 tensorflow-deep-learning 后\n- 小李直接跟随课程提供的结构化 Notebook（从基础到进阶），快速掌握了深度学习核心概念，将原本需要数周的自学周期压缩至几天。\n- 利用项目中更新及时的迁移学习案例（如针对 TensorFlow 2.13+ 修复后的 `EfficientNetV2` 代码），他迅速解决了环境报错，成功加载预训练权重并微调模型。\n- 参考\"Food Vision\"等里程碑项目的完整代码架构，小李规范了数据管道搭建与模型训练流程，显著提升了诊断模型的准确率和训练稳定性。\n- 通过研读 NLP 相关的章节与练习，他顺利实现了病历文本与影像数据的特征融合，快速验证了多模态方案的可行性。\n\ntensorflow-deep-learning 通过提供经过持续维护的实战代码与系统化教程，帮助开发者跨越了从理论到工程落地的巨大鸿沟，大幅缩短了高质量 AI 模型的研发周期。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmrdbourke_tensorflow-deep-learning_64541eaf.png","mrdbourke","Daniel Bourke","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fmrdbourke_c53fa7a6.jpg","Machine 
Learning Engineer live on YouTube.",null,"www.mrdbourke.com","https:\u002F\u002Fgithub.com\u002Fmrdbourke",[80,84],{"name":81,"color":82,"percentage":83},"Jupyter Notebook","#DA5B0B",100,{"name":85,"color":86,"percentage":87},"Python","#3572A5",0,5878,2817,"2026-04-04T05:12:45","MIT","未说明","未说明 (课程涵盖深度学习与迁移学习，建议配备 NVIDIA GPU 以加速训练，但 README 未指定具体型号或显存要求)",{"notes":95,"python":92,"dependencies":96},"本项目为 Zero to Mastery 深度学习课程配套代码。根据更新日志，部分笔记本（如 07、08、09）需要 TensorFlow 2.13 及以上版本。在使用优化器时，新版 TensorFlow (2.10+) 建议使用 'learning_rate' 参数替代已弃用的 'lr'。若使用 EfficientNet 模型遇到错误，建议切换至 efficientnet_v2 版本。课程包含计算机视觉和自然语言处理内容，运行完整项目可能需要下载额外的数据集和预训练模型。",[97,98,99],"tensorflow>=2.13","tf.keras","spaCy",[14],[102,103,104,105,106,107,108],"deep-learning","deep-neural-networks","tensorflow","tensorflow2","tensorflow-tutorials","tensorflow-course","curriculum","2026-03-27T02:49:30.150509","2026-04-11T16:52:08.871685",[112,117,122,127,132,137],{"id":113,"question_zh":114,"answer_zh":115,"source_url":116},15378,"为什么使用 tf.keras.applications 加载 EfficientNet 模型时，不需要手动添加归一化\u002F缩放层，而使用 TensorFlow Hub 版本则需要？","这是因为不同来源的模型架构略有不同。从 `tf.keras.applications` 加载的 EfficientNet 模型在第一层已经内置了 rescaling（缩放）层，因此不需要手动对数据进行缩放；而从 TensorFlow Hub 加载的版本通常不包含此层，需要用户手动添加。你可以通过打印模型的前几层来验证：`for layer in base_model.layers[:10]: print(layer.name)`，你会看到名为 `rescaling_1` 的层。","https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fissues\u002F14",{"id":118,"question_zh":119,"answer_zh":120,"source_url":121},15379,"在 TensorFlow 2.10+ 版本中保存 EfficientNet 模型时出现 'TypeError: Unable to serialize ... EagerTensor' 错误，如何解决？","这是 TensorFlow 2.10+ 版本中与 EfficientNet 缩放层相关的已知问题。有两种主要解决方案：\n1. **推荐方案**：升级到 EfficientNetV2 模型，将代码中的 `tf.keras.applications.efficientnet.EfficientNetB0` 替换为 `tf.keras.applications.efficientnet_v2.EfficientNetV2B0`。\n2. 
**替代方案**：如果必须使用旧版模型，可以将 TensorFlow 版本降级回 2.9.0。在 Google Colab 中运行：`!pip install -U -q tensorflow==2.9.0`。","https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fissues\u002F553",{"id":123,"question_zh":124,"answer_zh":125,"source_url":126},15380,"训练模型时遇到 'DNN library is not found' 错误，特别是在使用 EfficientNet 时，该怎么办？","这通常是由于 TensorFlow 版本不兼容导致的。该问题在某些新版本中出现，但在 TensorFlow 2.11 版本中通常可以正常工作。建议尝试切换 TensorFlow 版本（例如安装 2.11 版）。注意：如果在保存模型时仍遇到问题，可能是混合精度训练与 EfficientNet 模型组合导致的，需进一步检查配置。","https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fissues\u002F504",{"id":128,"question_zh":129,"answer_zh":130,"source_url":131},15381,"调用 model.fit() 时输入数据形状不正确，导致模型无法训练，如何修复输入张量的维度问题？","如果输入数据 `X` 缺少批次维度（batch dimension），会导致形状不匹配错误。解决方法是使用 `tf.expand_dims()` 为输入数据增加一个维度。例如，将 `model.fit(X, y, ...)` 改为 `model.fit(tf.expand_dims(X), y, ...)`。这通常发生在单个样本预测或特定数据预处理流程中。","https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fissues\u002F293",{"id":133,"question_zh":134,"answer_zh":135,"source_url":136},15382,"在使用 model.fit() 训练时出现 'AttributeError: NoneType object has no attribute items' 错误，原因是什么？","这个错误通常是因为在 `model.fit()` 函数中错误地使用了参数。当使用 `tf.data.Dataset` 作为输入时，不应使用 `validation_steps` 参数，而应使用 `validation_batch_size`（或者完全省略该参数让 Keras 自动处理）。请尝试移除 `validation_steps=len(valid_data)` 或将其替换为合适的批次大小设置。","https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fissues\u002F676",{"id":138,"question_zh":139,"answer_zh":140,"source_url":141},15383,"在进行迁移学习和数据增强时，图片没有按预期进行增强，或者在拟合模型后出现冲突错误，如何解决？","确保在运行数据增强层时正确设置了 `training=True` 参数。此外，如果视频中展示的代码与后续拟合模型前的第二次增强发生冲突，建议重新定义一个独立的数据增强序列。示例代码如下：\n```python\ndata_augmentation_sample = tf.keras.Sequential([\n    tf.keras.layers.Resizing(224,224),\n    tf.keras.layers.experimental.preprocessing.Rescaling(1\u002F255.0),\n    tf.keras.layers.RandomFlip(mode=\"horizontal\"),\n    tf.keras.layers.RandomRotation(0.2),\n    tf.keras.layers.RandomZoom(0.2),\n    
tf.keras.layers.RandomHeight(0.2),\n    tf.keras.layers.RandomWidth(0.2)\n], name=\"data_augmentation\")\n```\n使用此序列可以避免重复增强导致的形状或状态冲突。","https:\u002F\u002Fgithub.com\u002Fmrdbourke\u002Ftensorflow-deep-learning\u002Fissues\u002F350",[]]