# Advanced Deep Learning and Reinforcement Learning

##### Advanced Deep Learning and Reinforcement Learning course taught at [UCL](http://www.cs.ucl.ac.uk/current_students/syllabus/compgi/compgi22_advanced_deep_learning_and_reinforcement_learning/) in partnership with [DeepMind](https://deepmind.com)

DeepMind-Advanced-Deep-Learning-and-Reinforcement-Learning is a repository of advanced course materials created jointly by DeepMind and University College London (UCL), built to teach cutting-edge deep learning and reinforcement learning systematically. It addresses the difficulty of finding authoritative, well-structured material that balances theory and practice, breaking complex algorithms into a complete curriculum: from basic neural networks through attention mechanisms and generative models to exploration strategies and Markov decision processes in reinforcement learning.

The material suits AI developers with some background, researchers, and students who want to understand how these algorithms work underneath. Its distinguishing feature is that the content comes directly from a collaboration between a top industrial lab and academia: detailed lecture slides are paired with full video lectures, covering everything from a TensorFlow introduction to unsupervised learning. Working through the series builds a solid understanding of modern AI and lays the groundwork for advanced research or hard engineering problems.

## Deep Learning Part

### Deep Learning 1: Introduction to Machine Learning Based AI
[[slides]](https://github.com/enggen/Advanced-Deep-Learning-and-Reinforcement-Learning-DeepMind/blob/master/lecture%20slides/dl_01%20Introduction%20to%20Machine%20Learning%20Based%20AI.pdf) [[video]](https://www.youtube.com/watch?v=iOh7QUZGyiU&t=0s&list=PLqYmG7hTraZDNJre23vqCGIVpfZ_K2RZs&index=2)

### Deep Learning 2: Introduction to TensorFlow
[[slides]](https://github.com/enggen/Advanced-Deep-Learning-and-Reinforcement-Learning-DeepMind/blob/master/lecture%20slides/dl_02%20Introduction%20to%20TensorFlow.pdf) [[video]](https://www.youtube.com/watch?v=JO0LwmIlWw0&list=PLqYmG7hTraZDNJre23vqCGIVpfZ_K2RZs&index=2)

### Deep Learning 3: Neural Networks Foundations
[[slides]](https://github.com/enggen/Advanced-Deep-Learning-and-Reinforcement-Learning-DeepMind/blob/master/lecture%20slides/dl_03%20Neural%20Networks%20Foundations.pdf) [[video]](https://www.youtube.com/watch?v=5eAXoPSBgnE&index=3&list=PLqYmG7hTraZDNJre23vqCGIVpfZ_K2RZs)

### Deep Learning 4: Beyond Image Recognition, End-to-End Learning, Embeddings
[[slides]](https://github.com/enggen/Advanced-Deep-Learning-and-Reinforcement-Learning-DeepMind/blob/master/lecture%20slides/dl_04%20Beyond%20Image%20Recognition%2C%20End-to-End%20Learning%2C%20Embeddings.pdf) [[video]](https://www.youtube.com/watch?v=OfKnA91zs9I&list=PLqYmG7hTraZDNJre23vqCGIVpfZ_K2RZs&index=8)

### Deep Learning 5: Optimization for Machine Learning
[[slides]](https://github.com/enggen/Advanced-Deep-Learning-and-Reinforcement-Learning-DeepMind/blob/master/lecture%20slides/dl_05%20Optimization%20for%20Machine%20Learning.pdf) [[video]](https://www.youtube.com/watch?v=ALdsqfrLieg&list=PLqYmG7hTraZDNJre23vqCGIVpfZ_K2RZs&index=11)

### Deep Learning 6: Deep Learning for NLP
[[slides]](https://github.com/enggen/Advanced-Deep-Learning-and-Reinforcement-Learning-DeepMind/blob/master/lecture%20slides/dl_06%20Deep%20Learning%20for%20NLP.pdf) [[video]](https://www.youtube.com/watch?v=Y95JwaynE40&list=PLqYmG7hTraZDNJre23vqCGIVpfZ_K2RZs&index=13)

### Deep Learning 7: Attention and Memory in Deep Learning
[[slides]](https://github.com/enggen/Advanced-Deep-Learning-and-Reinforcement-Learning-DeepMind/blob/master/lecture%20slides/dl_07%20Attention%20and%20Memory%20in%20Deep%20Learning.pdf) [[video]](https://www.youtube.com/watch?v=Q57rzaHHO0k&list=PLqYmG7hTraZDNJre23vqCGIVpfZ_K2RZs&index=15)

### Deep Learning 8: Unsupervised learning and generative models
[[slides]](https://github.com/enggen/DeepMind-Advanced-Deep-Learning-and-Reinforcement-Learning/blob/master/lecture%20slides/dl_08%20Unsupervised%20learning%20and%20generative%20models.pdf) [[video]](https://www.youtube.com/watch?v=H4VGSYGvJiA&index=17&list=PLqYmG7hTraZDNJre23vqCGIVpfZ_K2RZs)

## Reinforcement Learning Part

### Reinforcement Learning 1: Introduction to Reinforcement Learning
[[slides]](https://github.com/enggen/Advanced-Deep-Learning-and-Reinforcement-Learning-DeepMind/blob/master/lecture%20slides/rl_01%20Introduction%20to%20Reinforcement%20Learning.pdf) [[video]](https://www.youtube.com/watch?v=ISk80iLhdfU&index=4&list=PLqYmG7hTraZDNJre23vqCGIVpfZ_K2RZs)

### Reinforcement Learning 2: Exploration and Exploitation
[[slides]](https://github.com/enggen/Advanced-Deep-Learning-and-Reinforcement-Learning-DeepMind/blob/master/lecture%20slides/rl_02%20Exploration%20and%20Exploitation.pdf) [[video]](https://www.youtube.com/watch?v=eM6IBYVqXEA&list=PLqYmG7hTraZDNJre23vqCGIVpfZ_K2RZs&index=5)

### Reinforcement Learning 3: Markov Decision Processes and Dynamic Programming
[[slides]](https://github.com/enggen/Advanced-Deep-Learning-and-Reinforcement-Learning-DeepMind/blob/master/lecture%20slides/rl_03%20Markov%20Decision%20Processes%20and%20Dynamic%20Programming.pdf) [[video]](https://www.youtube.com/watch?v=hMbxmRyDw5M&list=PLqYmG7hTraZDNJre23vqCGIVpfZ_K2RZs&index=6)

### Reinforcement Learning 4: Model-Free Prediction and Control
[[slides]](https://github.com/enggen/Advanced-Deep-Learning-and-Reinforcement-Learning-DeepMind/blob/master/lecture%20slides/rl_04%20Model-Free%20Prediction%20and%20Control.pdf) [[video]](https://www.youtube.com/watch?v=nnxHlg-2WgA&list=PLqYmG7hTraZDNJre23vqCGIVpfZ_K2RZs&index=7)

### Reinforcement Learning 5: Function Approximation and Deep Reinforcement Learning
[[slides]](https://github.com/enggen/Advanced-Deep-Learning-and-Reinforcement-Learning-DeepMind/blob/master/lecture%20slides/rl_05%20Function%20Approximation%20and%20Deep%20Reinforcement%20Learning.pdf) [[video]](https://www.youtube.com/watch?v=wAk1lxmiW4c&list=PLqYmG7hTraZDNJre23vqCGIVpfZ_K2RZs&index=9)

### Reinforcement Learning 6: Policy Gradients and Actor Critics
[[slides]](https://github.com/enggen/Advanced-Deep-Learning-and-Reinforcement-Learning-DeepMind/blob/master/lecture%20slides/rl_06%20Policy%20Gradients%20and%20Actor%20Critics.pdf) [[video]](https://www.youtube.com/watch?v=bRfUxQs6xIM&list=PLqYmG7hTraZDNJre23vqCGIVpfZ_K2RZs&index=10)

### Reinforcement Learning 7: Planning and Models
[[slides]](https://github.com/enggen/Advanced-Deep-Learning-and-Reinforcement-Learning-DeepMind/blob/master/lecture%20slides/rl_07%20Planning%20and%20Models.pdf) [[video]](https://www.youtube.com/watch?v=Xrxrd8nl4YI&list=PLqYmG7hTraZDNJre23vqCGIVpfZ_K2RZs&index=12)

### Reinforcement Learning 8: Advanced Topics in Deep RL
[[slides]](https://github.com/enggen/Advanced-Deep-Learning-and-Reinforcement-Learning-DeepMind/blob/master/lecture%20slides/rl_08%20Advanced%20Topics%20in%20Deep%20RL.pdf) [[video]](https://www.youtube.com/watch?v=L6xaQ501jEs&index=14&list=PLqYmG7hTraZDNJre23vqCGIVpfZ_K2RZs)

### Reinforcement Learning 9: A Brief Tour of Deep RL Agents
[[slides]]() [[video]](https://www.youtube.com/watch?v=-mhBD8Frkc4&index=16&list=PLqYmG7hTraZDNJre23vqCGIVpfZ_K2RZs)

### Reinforcement Learning 10: Classic Games Case Study
[[slides]]() [[video]](https://www.youtube.com/watch?v=ld28AU7DDB4&list=PLqYmG7hTraZDNJre23vqCGIVpfZ_K2RZs&index=18)

## Quickstart

This guide is based on the course materials above. The repository mainly contains lecture slides and links to video lectures, taking you from basic neural networks to state-of-the-art reinforcement-learning agents.

### Environment setup

Since the resources are mostly videos and documents, nothing needs to be installed to start studying. To reproduce the code examples from the course (largely TensorFlow-based), the following environment is recommended:

*   **OS**: Windows, macOS, or Linux (Ubuntu 20.04+ recommended)
*   **Language**: Python 3.8 or later
*   **Core dependencies**:
    *   TensorFlow 2.x (the early part of the course covers TensorFlow basics)
    *   PyTorch (optional, for reproducing some modern RL algorithms)
    *   Jupyter Notebook (for running example code)
*   **Network**: access to YouTube for the videos, or the ability to download the PDF slides.

> **Tips for users in mainland China**:
> *   **Videos**: the originals are hosted on YouTube; searching Bilibili for "Advanced Deep Learning and Reinforcement Learning UCL DeepMind" usually turns up community re-uploads with Chinese subtitles.
> *   **Packages**: point pip at the Tsinghua or Aliyun mirror to speed up downloads.

### Installation

There is no single "install command" for this course; setup consists of fetching the materials and configuring a local experiment environment.

#### 1. Clone the repository

```bash
git clone https://github.com/enggen/Advanced-Deep-Learning-and-Reinforcement-Learning-DeepMind.git
cd Advanced-Deep-Learning-and-Reinforcement-Learning-DeepMind
```

#### 2. Set up a Python environment

Create a virtual environment and install the deep-learning basics (TensorFlow, as used in lecture 2 of the course):

```bash
# Create a virtual environment
python -m venv adrl-env

# Activate it
# Windows:
adrl-env\Scripts\activate
# macOS/Linux:
source adrl-env/bin/activate

# Optionally configure a mirror, then install dependencies
pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple
pip install tensorflow jupyter numpy matplotlib gymnasium
```

### Basic usage

The core workflow is to **follow the course outline**. It is split into a deep-learning module and a reinforcement-learning module, 18 lectures in total.

#### Step 1: machine learning and TensorFlow basics
Start from first principles and get familiar with the framework used throughout the course.
*   **Reading**: open `lecture slides/dl_01 Introduction to Machine Learning Based AI.pdf`
*   **Video**: follow the corresponding YouTube link, or search Bilibili for "DL 1 Introduction"
*   **Focus**: tensor operations, computation graphs, and the basic TensorFlow API.

#### Step 2: neural networks and optimization
Learn network architecture design and training techniques.
*   **Key lectures**:
    *   `dl_03`: Neural Networks Foundations
    *   `dl_05`: Optimization for Machine Learning
    *   `dl_07`: Attention and Memory
*   **Practice**: reproduce the simple CNN or RNN architectures from the slides in a Jupyter notebook.

#### Step 3: core reinforcement-learning algorithms
Move on to the RL module, from Markov decision processes to deep reinforcement learning.
*   **Key lectures**:
    *   `rl_03`: Markov Decision Processes and Dynamic Programming
    *   `rl_05`: Function Approximation and Deep Reinforcement Learning (DQN and friends)
    *   `rl_06`: Policy Gradients and Actor-Critic methods
*   **Practice**:
    create a simple CartPole environment with `gymnasium` and try the Q-learning or policy-gradient ideas from the course:

```python
import gymnasium as gym

# Create the environment
env = gym.make("CartPole-v1")
observation, info = env.reset()

# Random-action demo (replace with a course algorithm as you study)
for _ in range(1000):
    action = env.action_space.sample()
    observation, reward, terminated, truncated, info = env.step(action)

    if terminated or truncated:
        observation, info = env.reset()

env.close()
```

#### Step 4: advanced topics and case studies
Study recent multi-agent systems and classic-game case studies (e.g. the ideas behind AlphaGo/AlphaZero).
*   **See**: `rl_09` (A Brief Tour of Deep RL Agents) and `rl_10` (Classic Games Case Study).

---
**Tip**: all slides live under `lecture slides/` in the repository, with filenames matching the lecture titles. Read the PDFs alongside the videos, and reproduce the key algorithms in your local environment.

## Use case

The algorithms team at an autonomous-driving startup is building a system that makes its own decisions in complex urban traffic.

### Before using this course
- Building RL models without systematic theory, the team struggled to balance exploration and exploitation; vehicles looped endlessly at unfamiliar intersections or probed dangerously.
- Without a deep grasp of Markov decision processes (MDPs) and dynamic programming, the state space was poorly designed; training converged slowly and got stuck in local optima.
- Members worked from scattered online tutorials, with only a shallow understanding of attention mechanisms and end-to-end learning, and could not handle multi-sensor fusion effectively.
- When training failed, the team could only tune hyperparameters blindly, lacking the optimization theory needed to diagnose problems, which stretched the development cycle indefinitely.

### After
- Using the lectures on exploration and exploitation, the team adopted mature strategies, so vehicles try new routes safely and efficiently and decision success rates improved markedly.
- After systematic study of MDPs and dynamic programming, they redesigned the state and reward functions; convergence sped up roughly threefold and behavior became more robust in complex interactive scenarios.
- The advanced material on attention and unsupervised learning helped them build a stronger unified perception-and-decision network, improving recognition accuracy in bad weather.
- The team now shares a common vocabulary and methodology, and can locate training bottlenecks from optimization theory, cutting tuning from weeks to days.

By combining top-tier academic theory with industrial experience, the course gives teams a complete map from first principles to state-of-the-art architectures, replacing blind trial and error.

## Similar projects

- **openclaw/openclaw** (349,277 ★): OpenClaw is a local-first personal AI assistant, built to give you a fully controllable companion on your own devices. Instead of being confined to one web page or app, it plugs directly into dozens of everyday messaging channels, including WeChat, WhatsApp, Telegram, Discord, and iMessage. Whatever chat app you message from, OpenClaw responds immediately; it also supports voice interaction on macOS, iOS, and Android, with a live rendered canvas you can manipulate. Running locally addresses data privacy, latency, and the need for an always-on experience: no cloud dependency, and your data stays yours. Its gateway architecture separates the control plane from the assistant core, keeping cross-platform communication smooth and extensible. A good fit for tinkerers, developers, and privacy-minded users who refuse to be locked into one ecosystem; basic terminal skills (macOS, Linux, or Windows WSL2) are enough to deploy it through a guided command line. If you want an assistant that truly knows you…
- **AUTOMATIC1111/stable-diffusion-webui** (162,132 ★): A Gradio-based web UI for running the Stable Diffusion image-generation model locally. It removes the pain of the original command-line workflow by gathering the whole AI image-generation pipeline into one intuitive graphical platform, useful to casual creators, designers who need fine control over details, and developers and researchers exploring the model's limits. Its key strength is feature breadth: text-to-image, image-to-image, inpainting, and outpainting, plus attention adjustment, prompt matrices, negative prompts, and "highres fix". It bundles GFPGAN and CodeFormer face restoration, supports several neural upscalers, and is extensible through a plugin system. Optimization options keep it usable even on GPUs with limited VRAM.
- **affaan-m/everything-claude-code** (143,909 ★): A performance-oriented tuning system for AI coding assistants such as Claude Code, Codex, and Cursor. More than a set of config files, it is a battle-tested framework addressing the core pain points of AI agents in real development: inefficiency, lost memory, security risks, and no continuous learning. Modular skills, intuition boosting, persistent memory, and built-in security scanning markedly improve agent performance on complex tasks and help developers build stable, production-grade agents. Its research-first development philosophy and token-budget optimizations make responses faster and cheaper while defending against attack vectors. Aimed at developers, AI researchers, and teams that deeply customize AI workflows, whether for large codebases or AI-assisted security audits and test automation. An Anthropic hackathon prize winner, it combines multi-language support with a rich set of hooks, letting AI truly grow into…
- **Comfy-Org/ComfyUI** (107,888 ★): A powerful, highly modular visual AI engine for designing and running complex Stable Diffusion pipelines. Instead of code, it uses a node-based flow-graph interface: connect functional modules to build a custom generation pipeline. This removes the complexity and inflexibility of advanced image workflows; users without a programming background can combine models, tune parameters, and preview results live, from basic text-to-image through multi-step upscaling. It runs on Windows, macOS, and Linux across NVIDIA, AMD, Intel, and Apple Silicon hardware, and was among the first to support SDXL, Flux, and SD3. Its modular architecture lets the community keep extending it, making it one of the most flexible, richest-ecosystem open-source diffusion tools available.
- **rasbt/LLMs-from-scratch** (90,106 ★): A PyTorch-based educational project that walks you through building a ChatGPT-like large language model from scratch. The official code repository for the book of the same name, it covers model development, pretraining, and finetuning end to end. It targets the "black box" problem: many developers can call existing models but cannot explain the architecture or training underneath. Writing every core line yourself makes the Transformer architecture and attention mechanisms concrete, and the repo also includes code for loading large pretrained weights for finetuning. Best for AI developers, researchers, and CS students who want more than API calls. Its step-by-step design breaks a complex system into clear stages with detailed diagrams and examples, putting a small but complete LLM within reach, whether you are consolidating theory or preparing to build larger models…
- **hacksider/Deep-Live-Cam** (88,924 ★): An open-source tool for real-time face swapping and video generation: from a single photo, one click swaps a face onto a live webcam feed or produces a deepfake video. It removes the usual pain of face-swap workflows: complex pipelines, heavy hardware requirements, and no live preview. Its three-step flow (pick a face, pick a camera, start) suits developers and researchers but also everyday users, content creators, designers, and streamers, for animating characters, swapping clothing-display models, or live-stream effects. Highlights include a mouth mask that preserves the user's original mouth movements for natural expressions, and face mapping that applies different faces to multiple subjects in one frame. It ships with content-safety filters that block nudity and violent material, and asks users to obtain consent and label generated content, balancing capability with ethics.

## Repository info

- GitHub: enggen/DeepMind-Advanced-Deep-Learning-and-Reinforcement-Learning — 861 stars, 277 forks, last commit 2026-03-03, license not stated.
- Maintainer: enggen (Engen), "ML Engineer | Lifelong learner", London, UK — https://github.com/enggen
- Environment note: the repository mainly contains course materials (lecture slides and video links) for the UCL × DeepMind course. The README does not specify a runtime environment, dependency libraries, or hardware requirements; refer to individual lectures or videos for implementation details.
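The reinforcement-learning lectures center on Markov decision processes and model-free control (Q-learning, among others). As a minimal self-contained sketch of those ideas — the five-state corridor MDP, its rewards, and all names here are illustrative inventions, not taken from the course materials — tabular Q-learning looks like this:

```python
import random

# Hypothetical corridor MDP: states 0..4 in a line, actions 0=left, 1=right.
# Reaching state 4 yields reward +1 and ends the episode.
N_STATES, GOAL = 5, 4
ACTIONS = (0, 1)

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]        # Q-table: q[state][action]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection (exploration vs. exploitation)
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda a: q[s][a])
            nxt, r, done = step(s, a)
            # one-step temporal-difference (Q-learning) update
            target = r if done else r + gamma * max(q[nxt])
            q[s][a] += alpha * (target - q[s][a])
            s = nxt
    return q

q = q_learning()
# Greedy policy extracted from the learned Q-table
policy = [max(ACTIONS, key=lambda a: q[s][a]) for s in range(N_STATES)]
```

After training, the greedy policy moves right in every non-terminal state, and `q[3][1]` approaches the true value 1.0 while earlier states decay by the discount factor — the same structure the MDP and model-free-control lectures derive for value functions.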