[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-aikorea--awesome-rl":3,"tool-aikorea--awesome-rl":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":75,"owner_avatar_url":76,"owner_bio":77,"owner_company":78,"owner_location":78,"owner_email":78,"owner_twitter":78,"owner_website":78,"owner_url":79,"languages":78,"stars":80,"forks":81,"last_commit_at":82,"license":78,"difficulty_score":83,"env_os":77,"env_gpu":84,"env_ram":84,"env_deps":85,"category_tags":88,"github_topics":78,"view_count":23,"oss_zip_url":78,"oss_zip_packed_at":78,"status":16,"created_at":89,"updated_at":90,"faqs":91,"releases":122},2111,"aikorea\u002Fawesome-rl","awesome-rl","Reinforcement learning resources curated","awesome-rl 是一个专为强化学习领域打造的精选资源清单，旨在帮助从业者和学习者快速定位高质量的学习材料与技术工具。面对强化学习知识体系庞大、资料分散且质量参差不齐的痛点，它系统性地整理了从基础理论到前沿应用的全方位内容，涵盖经典教材代码复现、学术论文、行业应用案例（如游戏博弈、机器人控制）、开源框架及在线演示等。\n\n无论是刚入门的学生、深耕算法的研究人员，还是寻求落地解决方案的开发者，都能在这里找到适合自己的资源。其独特亮点在于不仅收录了 Richard Sutton 等权威著作的多语言代码实现（Python、Julia 等），还汇聚了 PyBrain、RLPy、TensorFlow 深度 Q 学习等多种主流开源平台与工具箱，甚至包含针对教学设计的标准化接口 RL-Glue。虽然项目页面已注明不再主动维护，但其沉淀的历史资源依然具有极高的参考价值，是探索强化学习世界不可或缺的“导航图”。","# Awesome Reinforcement Learning  [![Awesome](https:\u002F\u002Fcdn.rawgit.com\u002Fsindresorhus\u002Fawesome\u002Fd7305f38d29fed78fa85652e3a63e154dd8e8829\u002Fmedia\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fsindresorhus\u002Fawesome)\n\nThis page is no longer maintained. 
\n\nA curated list of resources dedicated to reinforcement learning.\n\nWe have pages for other topics: [awesome-rnn](https:\u002F\u002Fgithub.com\u002Fkjw0612\u002Fawesome-rnn), [awesome-deep-vision](https:\u002F\u002Fgithub.com\u002Fkjw0612\u002Fawesome-deep-vision), [awesome-random-forest](https:\u002F\u002Fgithub.com\u002Fkjw0612\u002Fawesome-random-forest)\n\nMaintainers: [Hyunsoo Kim](http:\u002F\u002Fsites.duke.edu\u002Fhyunsookim\u002F), [Jiwon Kim](http:\u002F\u002Fgithub.com\u002Fkjw0612)\n\n## Contributing\nPlease feel free to submit [pull requests](https:\u002F\u002Fgithub.com\u002Faikorea\u002Fawesome-rl\u002Fpulls)\n\n## Table of Contents\n\n - [Theory](#theory)\n   - [Lectures](#lectures)\n   - [Books](#books)\n   - [Surveys](#surveys)\n   - [Papers \u002F Thesis](#papers--thesis)\n - [Applications](#applications)\n   - [Game Playing](#game-playing)\n   - [Robotics](#robotics)\n   - [Control](#control)\n   - [Operations Research](#operations-research)\n   - [Human Computer Interaction](#human-computer-interaction)\n - [Codes](#codes)\n - [Tutorials \u002F Websites](#tutorials--websites)\n - [Online Demos](#online-demos)\n - [Open Source Reinforcement Learning Platforms](#open-source-reinforcement-learning-platforms)\n\n## Codes\n - Codes for examples and exercises in Richard Sutton and Andrew Barto's Reinforcement Learning: An Introduction\n    - [Python Code](https:\u002F\u002Fgithub.com\u002FShangtongZhang\u002Freinforcement-learning-an-introduction)\n    - [MATLAB Code (BROKEN LINK)](http:\u002F\u002Fwaxworksmath.com\u002FAuthors\u002FN_Z\u002FSutton\u002Fsutton.html)\n    - [C\u002FLisp Code](http:\u002F\u002Fincompleteideas.net\u002Fbook\u002Fcode\u002Fcode2nd.html)\n    - [Julia Code](https:\u002F\u002Fgithub.com\u002FJu-jl\u002FReinforcementLearningAnIntroduction.jl)\n    - [Book](http:\u002F\u002Fincompleteideas.net\u002Fbook\u002FRLbook2018.pdf)\n    - [Exercise
Solutions](https:\u002F\u002Fgithub.com\u002FLyWangPX\u002FReinforcement-Learning-2nd-Edition-by-Sutton-Exercise-Solutions)\n - Simulation code for Reinforcement Learning Control Problems\n    - [Pole-Cart Problem](http:\u002F\u002Fpages.cs.wisc.edu\u002F~finton\u002Fpoledriver.html)\n    - [Q-learning Controller](http:\u002F\u002Fpages.cs.wisc.edu\u002F~finton\u002Fqcontroller.html)\n - [MATLAB Environment and GUI for Reinforcement Learning](http:\u002F\u002Fwww.cs.colostate.edu\u002F~anderson\u002Fres\u002Frl\u002Fmatlabpaper\u002Frl.html)\n - [Reinforcement Learning Repository - University of Massachusetts, Amherst](http:\u002F\u002Fwww-anw.cs.umass.edu\u002Frlr\u002F)\n - [Brown-UMBC Reinforcement Learning and Planning Library (Java)](http:\u002F\u002Fburlap.cs.brown.edu\u002F)\n - [Reinforcement Learning in R (MDP, Value Iteration)](http:\u002F\u002Fwww.moneyscience.com\u002Fpg\u002Fblog\u002FStatAlgo\u002Fread\u002F635759\u002Freinforcement-learning-in-r-markov-decision-process-mdp-and-value-iteration)\n - [Reinforcement Learning Environment in Python and MATLAB](https:\u002F\u002Fjamh-web.appspot.com\u002Fdownload.htm)\n - [RL-Glue](http:\u002F\u002Fglue.rl-community.org\u002Fwiki\u002FMain_Page) (standard interface for RL) and [RL-Glue Library](http:\u002F\u002Flibrary.rl-community.org\u002Fwiki\u002FMain_Page)\n - [PyBrain Library](http:\u002F\u002Fwww.pybrain.org\u002F) - Python-Based Reinforcement learning, Artificial intelligence, and Neural network\n - [RLPy Framework](http:\u002F\u002Frlpy.readthedocs.org\u002Fen\u002Flatest\u002F) -  Value-Function-Based Reinforcement Learning Framework for Education and Research\n - [Maja](http:\u002F\u002Fmmlf.sourceforge.net\u002F) - Machine learning framework for problems in Reinforcement Learning in python\n - [TeachingBox](http:\u002F\u002Fservicerobotik.hs-weingarten.de\u002Fen\u002Fteachingbox.php) - Java based Reinforcement Learning framework\n - [Policy Gradient Reinforcement Learning Toolbox for 
MATLAB](http:\u002F\u002Fwww.ias.informatik.tu-darmstadt.de\u002FResearch\u002FPolicyGradientToolbox)\n - [PIQLE](http:\u002F\u002Fsourceforge.net\u002Fprojects\u002Fpiqle\u002F) - Platform Implementing Q-Learning and other RL algorithms\n - [BeliefBox](https:\u002F\u002Fcode.google.com\u002Fp\u002Fbeliefbox\u002F) - Bayesian reinforcement learning library and toolkit\n - [Deep Q-Learning with TensorFlow](https:\u002F\u002Fgithub.com\u002Fnivwusquorum\u002Ftensorflow-deepq) - A deep Q learning demonstration using Google Tensorflow\n - [Atari](https:\u002F\u002Fgithub.com\u002FKaixhin\u002FAtari) - Deep Q-networks and asynchronous agents in Torch\n - [AgentNet](https:\u002F\u002Fgithub.com\u002Fyandexdataschool\u002FAgentNet) - A python library for deep reinforcement learning and custom recurrent networks using Theano+Lasagne.\n - [Reinforcement Learning Examples by RLCode](https:\u002F\u002Fgithub.com\u002Frlcode\u002Freinforcement-learning) - A Collection of minimal and clean reinforcement learning examples\n - [OpenAI Baselines](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fbaselines) - Well tested implementations ([and results](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fbaselines-results)) of reinforcement learning algorithms from OpenAI \n - [PyTorch Deep RL](https:\u002F\u002Fgithub.com\u002FShangtongZhang\u002FDeepRL) - Popular deep RL algorithm implementations with PyTorch\n - [ChainerRL](https:\u002F\u002Fgithub.com\u002Fchainer\u002Fchainerrl) - Popular deep RL algorithm implementations with Chainer\n - [Black-DROPS](https:\u002F\u002Fgithub.com\u002Fresibots\u002Fblackdrops) - Modular and generic code for the model-based policy search Black-DROPS algorithm (IROS 2017 paper) and easy integration with the [DART](http:\u002F\u002Fdartsim.github.io\u002F) simulator\n - [Gold](https:\u002F\u002Fgithub.com\u002Faunum\u002Fgold) - A reinforcement learning library for Golang.\n - [Jumanji](https:\u002F\u002Fgithub.com\u002Finstadeepai\u002Fjumanji) - A Suite 
of Industry-Driven Hardware-Accelerated RL Environments written in JAX.\n\n## Theory\n\n### Lectures\n- [DeepMind x UCL] [Reinforcement Learning Lecture Series 2021](https:\u002F\u002Fdeepmind.com\u002Flearning-resources\u002Freinforcement-learning-series-2021)\n - [UCL] [COMPM050\u002FCOMPGI13 Reinforcement Learning](http:\u002F\u002Fwww0.cs.ucl.ac.uk\u002Fstaff\u002Fd.silver\u002Fweb\u002FTeaching.html) by David Silver\n - [UCL] [COMPMI22\u002FCOMPGI22 - Advanced Deep Learning and Reinforcement Learning](https:\u002F\u002Fgithub.com\u002Fenggen\u002FDeepMind-Advanced-Deep-Learning-and-Reinforcement-Learning)\n - [UC Berkeley] CS188 Artificial Intelligence by Pieter Abbeel\n   - [Lecture 8: Markov Decision Processes 1](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=i0o-ui1N35U)\n   - [Lecture 9: Markov Decision Processes 2](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=Csiiv6WGzKM)\n   - [Lecture 10: Reinforcement Learning 1](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=ifma8G7LegE)\n   - [Lecture 11: Reinforcement Learning 2](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=Si1_YTw960c)\n - [Udacity (Georgia Tech.)] [CS7642 Reinforcement Learning](https:\u002F\u002Fclassroom.udacity.com\u002Fcourses\u002Fud600)\n - [Stanford] [CS229 Machine Learning - Lecture 16: Reinforcement Learning](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=RtxI449ZjSc&feature=relmfu) by Andrew Ng\n - [UC Berkeley] [Deep RL Bootcamp](https:\u002F\u002Fsites.google.com\u002Fview\u002Fdeep-rl-bootcamp\u002Flectures)\n - [UC Berkeley] [CS294 Deep Reinforcement Learning](http:\u002F\u002Frll.berkeley.edu\u002Fdeeprlcourse\u002F) by John Schulman and Pieter Abbeel\n - [CMU] [10703: Deep Reinforcement Learning and Control, Spring 2017](https:\u002F\u002Fkatefvision.github.io\u002F)\n - [MIT] [6.S094: Deep Learning for Self-Driving Cars](http:\u002F\u002Fselfdrivingcars.mit.edu\u002F)\n   - [Lecture 2: Deep Reinforcement Learning for Motion 
Planning](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=QDzM8r3WgBw&list=PLrAXtmErZgOeiKm4sgNOknGvNjby9efdf)\n - [Siraj Raval]: Introduction to AI for Video Games (Reinforcement Learning Video Series)\n   - [Introduction to AI for video games](https:\u002F\u002Fyoutu.be\u002Fi_McNBDP9Qs)\n   - [Monte Carlo Prediction](https:\u002F\u002Fyoutu.be\u002F-YpalutQCKw)\n   - [Q learning explained](https:\u002F\u002Fyoutu.be\u002FaCEvtRtNO-M)\n   - [Solving the basic game of Pong](https:\u002F\u002Fyoutu.be\u002FpN7ETkOizGM)\n   - [Actor Critic Algorithms](https:\u002F\u002Fyoutu.be\u002Fw_3mmm0P0j8)\n   - [War Robots](https:\u002F\u002Fyoutu.be\u002Ftm5kQmjfZN8)\n - [Mutual Information] [Reinforcement Learning Fundamentals](https:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLzvYlJMoZ02Dxtwe-MmH4nOB5jYlMGBjr)\n   - [Reinforcement Learning: A Six Part Series](https:\u002F\u002Fyoutu.be\u002FNFo9v_yKQXA)\n   - [The Bellman Equations, Dynamic Programming, and Generalized Policy Iteration](https:\u002F\u002Fyoutu.be\u002F_j6pvGEchWU)\n   - [Monte Carlo And Off-Policy Methods](https:\u002F\u002Fyoutu.be\u002FbpUszPiWM7o)\n   - [TD Learning, Sarsa, and Q-Learning](https:\u002F\u002Fyoutu.be\u002FAJiG3ykOxmY)\n\n### Books\n - Richard Sutton and Andrew Barto, Reinforcement Learning: An Introduction (1st Edition, 1998) [[Book]](http:\u002F\u002Fincompleteideas.net\u002Fbook\u002Febook\u002Fthe-book.html) [[Code]](http:\u002F\u002Fincompleteideas.net\u002Fbook\u002Fcode\u002Fcode.html)\n - Richard Sutton and Andrew Barto, Reinforcement Learning: An Introduction (2nd Edition, in progress, 2018) [[Book]](http:\u002F\u002Fincompleteideas.net\u002Fbook\u002FRLbook2020.pdf) [[Code]](https:\u002F\u002Fgithub.com\u002FShangtongZhang\u002Freinforcement-learning-an-introduction)\n - Csaba Szepesvari, Algorithms for Reinforcement Learning [[Book]](http:\u002F\u002Fwww.ualberta.ca\u002F~szepesva\u002Fpapers\u002FRLAlgsInMDPs.pdf)\n - David Poole and Alan Mackworth, Artificial Intelligence: 
Foundations of Computational Agents [[Book Chapter]](http:\u002F\u002Fartint.info\u002Fhtml\u002FArtInt_262.html)\n - Dimitri P. Bertsekas and John N. Tsitsiklis, Neuro-Dynamic Programming [[Book (Amazon)]](http:\u002F\u002Fwww.amazon.com\u002FNeuro-Dynamic-Programming-Optimization-Neural-Computation\u002Fdp\u002F1886529108\u002Fref=sr_1_3?s=books&ie=UTF8&qid=1442461075&sr=1-3&refinements=p_27%3AJohn+N.+Tsitsiklis+Dimitri+P.+Bertsekas) [[Summary]](http:\u002F\u002Fwww.mit.edu\u002F~dimitrib\u002FNDP_Encycl.pdf)\n - Mykel J. Kochenderfer, Decision Making Under Uncertainty: Theory and Application [[Book (Amazon)]](http:\u002F\u002Fwww.amazon.com\u002FDecision-Making-Under-Uncertainty-Application\u002Fdp\u002F0262029251\u002Fref=sr_1_1?ie=UTF8&qid=1441126550&sr=8-1&keywords=kochenderfer&pebp=1441126551594&perid=1Y6RG2EGRD26659CJHH9)\n - Deep Reinforcement Learning in Action [[Book (Manning)]](https:\u002F\u002Fwww.manning.com\u002Fbooks\u002Fdeep-reinforcement-learning-in-action)\n - Dimitri P. Bertsekas, Reinforcement Learning and Optimal Control [[Book, Videolectures, and Course Material, 2019]](http:\u002F\u002Fweb.mit.edu\u002Fdimitrib\u002Fwww\u002FRLbook.html)\n\n### Surveys\n - Leslie Pack Kaelbling, Michael L. Littman, Andrew W. Moore, Reinforcement Learning: A Survey (JAIR 1996) [[Paper]](https:\u002F\u002Fwww.jair.org\u002Findex.php\u002Fjair\u002Farticle\u002Fdownload\u002F10166\u002F24110\u002F)\n - S. S. Keerthi and B. Ravindran, A Tutorial Survey of Reinforcement Learning (Sadhana 1994) [[Paper]](http:\u002F\u002Fwww.cse.iitm.ac.in\u002F~ravi\u002Fpapers\u002Fkeerthi.rl-survey.pdf)\n - Matthew E. Taylor, Peter Stone, Transfer Learning for Reinforcement Learning Domains: A Survey (JMLR 2009) [[Paper]](http:\u002F\u002Fwww.jmlr.org\u002Fpapers\u002Fvolume10\u002Ftaylor09a\u002Ftaylor09a.pdf)\n - Jens Kober, J. 
Andrew Bagnell, Jan Peters, Reinforcement Learning in Robotics, A Survey (IJRR 2013) [[Paper]](http:\u002F\u002Fwww.ias.tu-darmstadt.de\u002Fuploads\u002FPublications\u002FKober_IJRR_2013.pdf)\n - Michael L. Littman, Reinforcement learning improves behaviour from evaluative feedback (Nature 2015) [[Paper]](http:\u002F\u002Fwww.nature.com\u002Fnature\u002Fjournal\u002Fv521\u002Fn7553\u002Ffull\u002Fnature14540.html)\n - Marc P. Deisenroth, Gerhard Neumann, Jan Peters, A Survey on Policy Search for Robotics, Foundations and Trends in Robotics (2014) [[Book]](https:\u002F\u002Fspiral.imperial.ac.uk:8443\u002Fbitstream\u002F10044\u002F1\u002F12051\u002F7\u002Ffnt_corrected_2014-8-22.pdf)\n - Kai Arulkumaran, Marc Peter Deisenroth, Miles Brundage, Anil Anthony Bharath, A Brief Survey of Deep Reinforcement Learning (IEEE Signal Processing Magazine 2017) [[DOI]](https:\u002F\u002Fdx.doi.org\u002F10.1109\u002FMSP.2017.2743240) [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1708.05866)\n - Benjamin Recht, A Tour of Reinforcement Learning: The View from Continuous Control (Annu. Rev. Control Robot. Auton. Syst. 2019) [[DOI]](https:\u002F\u002Fdx.doi.org\u002F10.1146\u002Fannurev-control-053018-023825)\n\n### Papers \u002F Thesis\nFoundational Papers\n - Marvin Minsky, Steps toward Artificial Intelligence, Proceedings of the IRE, 1961. [[DOI]](https:\u002F\u002Fdx.doi.org\u002F10.1109\u002FJRPROC.1961.287775) [[Paper]](http:\u002F\u002Fstaffweb.worc.ac.uk\u002FDrC\u002FCourses%202010-11\u002FComp%203104\u002FTutor%20Inputs\u002FSession%209%20Prep\u002FReading%20material\u002FMinsky60steps.pdf) (discusses issues in RL such as the \"credit assignment problem\")\n - Ian H. Witten, An Adaptive Optimal Controller for Discrete-Time Markov Environments, Information and Control, 1977. 
[[DOI]](https:\u002F\u002Fdoi.org\u002F10.1016\u002FS0019-9958(77)90354-0) [[Paper]](http:\u002F\u002Fwww.cs.waikato.ac.nz\u002F~ihw\u002Fpapers\u002F77-IHW-AdaptiveController.pdf) (earliest publication on temporal-difference (TD) learning rule)\n\nMethods\n - Dynamic Programming (DP):\n   - Christopher J. C. H. Watkins, Learning from Delayed Rewards, Ph.D. Thesis, Cambridge University, 1989. [[Thesis]](https:\u002F\u002Fwww.cs.rhul.ac.uk\u002Fhome\u002Fchrisw\u002Fnew_thesis.pdf)\n - Monte Carlo:\n   - Andrew Barto, Michael Duff, Monte Carlo Matrix Inversion and Reinforcement Learning, NIPS, 1994. [[Paper]](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F865-monte-carlo-matrix-inversion-and-reinforcement-learning.pdf)\n   - Satinder P. Singh, Richard S. Sutton, Reinforcement Learning with Replacing Eligibility Traces, Machine Learning, 1996. [[Paper]](http:\u002F\u002Fwww-all.cs.umass.edu\u002Fpubs\u002F1995_96\u002Fsingh_s_ML96.pdf)\n - Temporal-Difference:\n   - Richard S. Sutton, Learning to predict by the methods of temporal differences. Machine Learning 3: 9-44, 1988. [[Paper]](http:\u002F\u002Fwebdocs.cs.ualberta.ca\u002F~sutton\u002Fpapers\u002Fsutton-88-with-erratum.pdf)\n - Q-Learning (Off-policy TD algorithm):\n   - Chris Watkins, Learning from Delayed Rewards, Cambridge, 1989. [[Thesis]](http:\u002F\u002Fwww.cs.rhul.ac.uk\u002Fhome\u002Fchrisw\u002Fthesis.html)\n - Sarsa (On-policy TD algorithm):\n   - G.A. Rummery, M. Niranjan, On-line Q-learning using connectionist systems, Technical Report, Cambridge Univ., 1994. [[Report]](https:\u002F\u002Fwww.google.com\u002Furl?sa=t&rct=j&q=&esrc=s&source=web&cd=3&ved=0CDIQFjACahUKEwj2lMm5wZDIAhUHkg0KHa6kAVM&url=ftp%3A%2F%2Fmi.eng.cam.ac.uk%2Fpub%2Freports%2Fauto-pdf%2Frummery_tr166.pdf&usg=AFQjCNHz6IrgcaaO5lzC7t8oEIBY9epozg&sig2=sa-emPme1m5Jav7YmaXsNQ&cad=rja)\n   - Richard S. Sutton, Generalization in Reinforcement Learning: Successful examples using sparse coding, NIPS, 1996. 
[[Paper]](http:\u002F\u002Fwebdocs.cs.ualberta.ca\u002F~sutton\u002Fpapers\u002Fsutton-96.pdf)\n - R-Learning (learning of relative values)\n   - Andrew Schwartz, A Reinforcement Learning Method for Maximizing Undiscounted Rewards, ICML, 1993. [[Paper-Google Scholar]](https:\u002F\u002Fscholar.google.com\u002Fscholar?q=reinforcement+learning+method+for+maximizing+undiscounted+rewards&hl=en&as_sdt=0&as_vis=1&oi=scholart&sa=X&ved=0CBsQgQMwAGoVChMIho6p_MOQyAIVwh0eCh3XWAwM)\n - Function Approximation methods (Least-Square Temporal Difference, Least-Square Policy Iteration)\n   - Steven J. Bradtke, Andrew G. Barto, Linear Least-Squares Algorithms for Temporal Difference Learning, Machine Learning, 1996. [[Paper]](http:\u002F\u002Fwww-anw.cs.umass.edu\u002Fpubs\u002F1995_96\u002Fbradtke_b_ML96.pdf)\n   - Michail G. Lagoudakis, Ronald Parr, Model-Free Least Squares Policy Iteration, NIPS, 2001. [[Paper]](http:\u002F\u002Fwww.cs.duke.edu\u002Fresearch\u002FAI\u002FLSPI\u002Fnips01.pdf) [[Code]](http:\u002F\u002Fwww.cs.duke.edu\u002Fresearch\u002FAI\u002FLSPI\u002F)\n - Policy Search \u002F Policy Gradient\n   - Richard Sutton, David McAllester, Satinder Singh, Yishay Mansour, Policy Gradient Methods for Reinforcement Learning with Function Approximation, NIPS, 1999. [[Paper]](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F1713-policy-gradient-methods-for-reinforcement-learning-with-function-approximation.pdf)\n   - Jan Peters, Sethu Vijayakumar, Stefan Schaal, Natural Actor-Critic, ECML, 2005. [[Paper]](https:\u002F\u002Fhomes.cs.washington.edu\u002F~todorov\u002Fcourses\u002Famath579\u002Freading\u002FNaturalActorCritic.pdf)\n   - Jens Kober, Jan Peters, Policy Search for Motor Primitives in Robotics, NIPS, 2009. [[Paper]](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F3545-policy-search-for-motor-primitives-in-robotics.pdf)\n   - Jan Peters, Katharina Mulling, Yasemin Altun, Relative Entropy Policy Search, AAAI, 2010. 
[[Paper]](http:\u002F\u002Fwww.kyb.tue.mpg.de\u002Ffileadmin\u002Fuser_upload\u002Ffiles\u002Fpublications\u002Fattachments\u002FAAAI-2010-Peters_6439%5b0%5d.pdf)\n   - Freek Stulp, Olivier Sigaud, Path Integral Policy Improvement with Covariance Matrix Adaptation, ICML, 2012. [[Paper]](http:\u002F\u002Farxiv.org\u002Fpdf\u002F1206.4621v1.pdf)\n   - Nate Kohl, Peter Stone, Policy Gradient Reinforcement Learning for Fast Quadrupedal Locomotion, ICRA, 2004. [[Paper]](http:\u002F\u002Fwww.cs.utexas.edu\u002F~pstone\u002FPapers\u002Fbib2html-links\u002Ficra04.pdf)\n   - Marc Deisenroth, Carl Rasmussen, PILCO: A Model-Based and Data-Efficient Approach to Policy Search, ICML, 2011. [[Paper]](http:\u002F\u002Fmlg.eng.cam.ac.uk\u002Fpub\u002Fpdf\u002FDeiRas11.pdf)\n   - Scott Kuindersma, Roderic Grupen, Andrew Barto, Learning Dynamic Arm Motions for Postural Recovery, Humanoids, 2011. [[Paper]](http:\u002F\u002Fwww-all.cs.umass.edu\u002Fpubs\u002F2011\u002Fkuindersma_g_b_11.pdf)\n   - Konstantinos Chatzilygeroudis, Roberto Rama, Rituraj Kaushik, Dorian Goepp, Vassilis Vassiliades, Jean-Baptiste Mouret, Black-Box Data-efficient Policy Search for Robotics, IROS, 2017. [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.07261)]\n - Hierarchical RL\n   - Richard Sutton, Doina Precup, Satinder Singh, Between MDPs and Semi-MDPs: A Framework for Temporal Abstraction in Reinforcement Learning, Artificial Intelligence, 1999. [[Paper]](https:\u002F\u002Fwebdocs.cs.ualberta.ca\u002F~sutton\u002Fpapers\u002FSPS-aij.pdf)\n   - George Konidaris, Andrew Barto, Building Portable Options: Skill Transfer in Reinforcement Learning, IJCAI, 2007. [[Paper]](http:\u002F\u002Fwww-anw.cs.umass.edu\u002Fpubs\u002F2007\u002Fkonidaris_b_IJCAI07.pdf)\n - Deep Learning + Reinforcement Learning (A sample of recent works on DL+RL)\n   - V. Mnih, et. al., Human-level Control through Deep Reinforcement Learning, Nature, 2015. 
[[Paper]](http:\u002F\u002Fwww.readcube.com\u002Farticles\u002F10.1038%2Fnature14236?shared_access_token=Lo_2hFdW4MuqEcF3CVBZm9RgN0jAjWel9jnR3ZoTv0P5kedCCNjz3FJ2FhQCgXkApOr3ZSsJAldp-tw3IWgTseRnLpAc9xQq-vTA2Z5Ji9lg16_WvCy4SaOgpK5XXA6ecqo8d8J7l4EJsdjwai53GqKt-7JuioG0r3iV67MQIro74l6IxvmcVNKBgOwiMGi8U0izJStLpmQp6Vmi_8Lw_A%3D%3D)\n   - Xiaoxiao Guo, Satinder Singh, Honglak Lee, Richard Lewis, Xiaoshi Wang, Deep Learning for Real-Time Atari Game Play Using Offline Monte-Carlo Tree Search Planning, NIPS, 2014. [[Paper]](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F5421-deep-learning-for-real-time-atari-game-play-using-offline-monte-carlo-tree-search-planning.pdf)\n   - Sergey Levine, Chelsea Finn, Trevor Darrel, Pieter Abbeel, End-to-End Training of Deep Visuomotor Policies. ArXiv, 16 Oct 2015. [[ArXiv]](http:\u002F\u002Farxiv.org\u002Fpdf\u002F1504.00702v3.pdf)\n   - Tom Schaul, John Quan, Ioannis Antonoglou, David Silver, Prioritized Experience Replay, ArXiv, 18 Nov 2015. [[ArXiv]](http:\u002F\u002Farxiv.org\u002Fpdf\u002F1511.05952v2.pdf)\n   - Hado van Hasselt, Arthur Guez, David Silver, Deep Reinforcement Learning with Double Q-Learning, ArXiv, 22 Sep 2015. [[ArXiv]](http:\u002F\u002Farxiv.org\u002Fabs\u002F1509.06461)\n   - Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu, Asynchronous Methods for Deep Reinforcement Learning, ArXiv, 4 Feb 2016. 
[[ArXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1602.01783)\n    \n\n## Applications\n### Game Playing\nTraditional Games\n  - Backgammon - Gerald Tesauro, \"TD-Gammon\" game play using TD(λ) (ACM 1995) [[Paper]](http:\u002F\u002Fwww.bkgm.com\u002Farticles\u002Ftesauro\u002Ftdl.html)\n  - Chess - Jonathan Baxter, Andrew Tridgell and Lex Weaver, \"KnightCap\" program using TD(λ) (1999) [[arXiv]](http:\u002F\u002Farxiv.org\u002Fpdf\u002Fcs\u002F9901002v1.pdf)\n  - Chess - Matthew Lai, Giraffe: Using deep reinforcement learning to play chess (2015) [[arXiv]](http:\u002F\u002Farxiv.org\u002Fpdf\u002F1509.01549v2.pdf)\n\nComputer Games\n  - Atari 2600 Games - Volodymyr Mnih, Koray Kavukcuoglu, David Silver et al., Human-level Control through Deep Reinforcement Learning (Nature 2015) [[DOI]](https:\u002F\u002Fdx.doi.org\u002Fdoi:10.1038\u002Fnature14236) [[Paper]](http:\u002F\u002Fwww.readcube.com\u002Farticles\u002F10.1038%2Fnature14236?shared_access_token=Lo_2hFdW4MuqEcF3CVBZm9RgN0jAjWel9jnR3ZoTv0P5kedCCNjz3FJ2FhQCgXkApOr3ZSsJAldp-tw3IWgTseRnLpAc9xQq-vTA2Z5Ji9lg16_WvCy4SaOgpK5XXA6ecqo8d8J7l4EJsdjwai53GqKt-7JuioG0r3iV67MQIro74l6IxvmcVNKBgOwiMGi8U0izJStLpmQp6Vmi_8Lw_A%3D%3D) [[Code]](https:\u002F\u002Fsites.google.com\u002Fa\u002Fdeepmind.com\u002Fdqn\u002F) [[Video]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=iqXKQf2BOSE)\n  - Flappy Bird - Sarvagya Vaish, [Flappy Bird Reinforcement Learning](https:\u002F\u002Fgithub.com\u002FSarvagyaVaish\u002FFlappyBirdRL) [[Video]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=xM62SpKAZHU)\n  - Mario - Kenneth O. Stanley and Risto Miikkulainen, MarI\u002FO - learning to play Mario with evolutionary reinforcement learning using artificial neural networks (Evolutionary Computation 2002) [[Paper]](http:\u002F\u002Fnn.cs.utexas.edu\u002Fdownloads\u002Fpapers\u002Fstanley.ec02.pdf) [[Video]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=qv6UVOQ0F44)\n  - StarCraft II - Oriol Vinyals, Igor Babuschkin, Wojciech M. 
Czarnecki et al., Grandmaster level in StarCraft II using multi-agent reinforcement learning (Nature 2019) [[DOI]](https:\u002F\u002Fdoi.org\u002F10.1038\u002Fs41586-019-1724-z) [[Paper]](https:\u002F\u002Fwww.nature.com\u002Farticles\u002Fs41586-019-1724-z.epdf) [[Video]](https:\u002F\u002Fdeepmind.com\u002Fresearch\u002Fopen-source\u002Falphastar-resources)\n\n### Robotics\n  - Nate Kohl and Peter Stone, Policy Gradient Reinforcement Learning for Fast Quadrupedal Locomotion (ICRA 2004) [[Paper]](http:\u002F\u002Fwww.cs.utexas.edu\u002F~pstone\u002FPapers\u002Fbib2html-links\u002Ficra04.pdf)\n  - Petar Kormushev, Sylvain Calinon and Darwin G. Caldwell, Robot Motor Skill Coordination with EM-based Reinforcement Learning (IROS 2010) [[Paper]](http:\u002F\u002Fkormushev.com\u002Fpapers\u002FKormushev-IROS2010.pdf) [[Video]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=W_gxLKSsSIE)\n  - Todd Hester, Michael Quinlan, and Peter Stone, Generalized Model Learning for Reinforcement Learning on a Humanoid Robot (ICRA 2010) [[Paper]](https:\u002F\u002Fccc.inaoep.mx\u002F~mdprl\u002Fdocumentos\u002FHester_2010.pdf) [[Video]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=mRpX9DFCdwI&list=PL5nBAYUyJTrM48dViibyi68urttMlUv7e&index=12)\n  - George Konidaris, Scott Kuindersma, Roderic Grupen and Andrew Barto, Autonomous Skill Acquisition on a Mobile Manipulator (AAAI 2011) [[Paper]](http:\u002F\u002Flis.csail.mit.edu\u002Fpubs\u002Fkonidaris-aaai11b.pdf) [[Video]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=yUICAkSQTZY)\n  - Marc Peter Deisenroth and Carl Edward Rasmussen, PILCO: A Model-Based and Data-Efficient Approach to Policy Search (ICML 2011) [[Paper]](http:\u002F\u002Fmlg.eng.cam.ac.uk\u002Fpub\u002Fpdf\u002FDeiRas11.pdf)\n  - Scott Niekum, Sachin Chitta, Bhaskara Marthi, et al., Incremental Semantically Grounded Learning from Demonstration (RSS 2013) [[Paper]](http:\u002F\u002Fciteseerx.ist.psu.edu\u002Fviewdoc\u002Fdownload?doi=10.1.1.310.87&rep=rep1&type=pdf)\n  - 
Mark Cutler and Jonathan P. How, Efficient Reinforcement Learning for Robots using Informative Simulated Priors (ICRA 2015) [[Paper]](http:\u002F\u002Fmarkjcutler.com\u002Fpapers\u002FCutler15_ICRA.pdf) [[Video]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=kKClFx6l1HY)\n  - Antoine Cully, Jeff Clune, Danesh Tarapore and Jean-Baptiste Mouret, Robots that can adapt like animals (Nature 2015) [[ArXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F1407.3501)] [[Video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=T-c17RKh3uE)] [[Code](https:\u002F\u002Fgithub.com\u002Fresibots\u002Fcully_2015_nature)]\n  - Konstantinos Chatzilygeroudis, Roberto Rama, Rituraj Kaushik et al, Black-Box Data-efficient Policy Search for Robotics (IROS 2017) [[ArXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.07261)] [[Video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=kTEyYiIFGPM)] [[Code](https:\u002F\u002Fgithub.com\u002Fresibots\u002Fblackdrops)]\n  - P. Travis Jardine, Michael Kogan, Sidney N. Givigi and Shahram Yousefi, Adaptive predictive control of a differential drive robot tuned with reinforcement learning (Int J Adapt Control Signal Process 2019) [[DOI]](https:\u002F\u002Fdx.doi.org\u002F10.1002\u002Facs.2882)\n\n\n\n### Control\n  - Pieter Abbeel, Adam Coates, et al., An Application of Reinforcement Learning to Aerobatic Helicopter Flight (NIPS 2006) [[Paper]](http:\u002F\u002Fheli.stanford.edu\u002Fpapers\u002Fnips06-aerobatichelicopter.pdf) [[Video]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=VCdxqn0fcnE)\n  - J. Andrew Bagnell and Jeff G. 
Schneider, Autonomous helicopter control using Reinforcement Learning Policy Search Methods (ICRA 2001) [[Paper]](https:\u002F\u002Fkilthub.cmu.edu\u002Farticles\u002FAutonomous_Helicopter_Control_Using_Reinforcement_Learning_Policy_Search_Methods\u002F6552119\u002Ffiles\u002F12033380.pdf)\n\n### Operations Research\n  - Scott Proper and Prasad Tadepalli, Scaling Average-reward Reinforcement Learning for Product Delivery (AAAI 2004) [[Paper]](https:\u002F\u002Fs3.amazonaws.com\u002Facademia.edu.documents\u002F44453946\u002FScaling_Average-reward_Reinforcement_Lea20160405-20758-1wxkm8y.pdf)\n  - Naoki Abe, Naval Verma et al., Cross Channel Optimized Marketing by Reinforcement Learning (KDD 2004) [[Paper]](http:\u002F\u002Fciteseerx.ist.psu.edu\u002Fviewdoc\u002Fdownload?doi=10.1.1.375.151&rep=rep1&type=pdf)\n  - Bernd Waschneck, Andre Reichstaller, Lenz Belzner et al., Deep reinforcement learning for semiconductor production scheduling (ASMC 2018) [[DOI]](https:\u002F\u002Fdx.doi.org\u002F10.1109\u002FASMC.2018.8373191) [[Paper]](https:\u002F\u002Fwww.researchgate.net\u002Fprofile\u002FLenz_Belzner\u002Fpublication\u002F325713164_Deep_reinforcement_learning_for_semiconductor_production_scheduling\u002Flinks\u002F5be537caa6fdcc3a8dc89fb3\u002FDeep-reinforcement-learning-for-semiconductor-production-scheduling.pdf)\n\n### Human Computer Interaction\n  - Satinder Singh, Diane Litman et al., Optimizing Dialogue Management with Reinforcement Learning: Experiments with the NJFun System (JAIR 2002) [[Paper]](http:\u002F\u002Fweb.eecs.umich.edu\u002F~baveja\u002FPapers\u002FRLDSjair.pdf)\n\n\n\n## Codes\n - Codes for examples and exercises in Richard Sutton and Andrew Barto's [Book](#books) Reinforcement Learning: An Introduction\n    - [Python Code](https:\u002F\u002Fgithub.com\u002FShangtongZhang\u002Freinforcement-learning-an-introduction) (2nd Edition)\n    - [MATLAB 
Code](https:\u002F\u002Fwaxworksmath.com\u002FAuthors\u002FN_Z\u002FSutton\u002FRLAI_1st_Edition\u002Fsutton.html) (1st Edition)\n - Simulation code for Reinforcement Learning Control Problems\n    - [Pole-Cart Problem](http:\u002F\u002Fpages.cs.wisc.edu\u002F~finton\u002Fpoledriver.html)\n    - [Q-learning Controller](http:\u002F\u002Fpages.cs.wisc.edu\u002F~finton\u002Fqcontroller.html)\n - [MATLAB Environment and GUI for Reinforcement Learning](http:\u002F\u002Fwww.cs.colostate.edu\u002F~anderson\u002Fres\u002Frl\u002Fmatlabpaper\u002Frl.html)\n - [Reinforcement Learning Repository - University of Massachusetts, Amherst](http:\u002F\u002Fwww-anw.cs.umass.edu\u002Frlr\u002F)\n - [Brown-UMBC Reinforcement Learning and Planning Library (Java)](http:\u002F\u002Fburlap.cs.brown.edu\u002F)\n - [Reinforcement Learning in R (MDP, Value Iteration)](http:\u002F\u002Fwww.moneyscience.com\u002Fpg\u002Fblog\u002FStatAlgo\u002Fread\u002F635759\u002Freinforcement-learning-in-r-markov-decision-process-mdp-and-value-iteration)\n - [Reinforcement Learning Environment in Python and MATLAB](https:\u002F\u002Fjamh-web.appspot.com\u002Fdownload.htm)\n - [RL-Glue](http:\u002F\u002Fglue.rl-community.org\u002Fwiki\u002FMain_Page) (standard interface for RL) and [RL-Glue Library](http:\u002F\u002Flibrary.rl-community.org\u002Fwiki\u002FMain_Page)\n - [PyBrain Library](http:\u002F\u002Fwww.pybrain.org\u002F) - Python-Based Reinforcement learning, Artificial intelligence, and Neural network\n - [RLPy Framework](http:\u002F\u002Frlpy.readthedocs.org\u002Fen\u002Flatest\u002F) -  Value-Function-Based Reinforcement Learning Framework for Education and Research\n - [Maja](http:\u002F\u002Fmmlf.sourceforge.net\u002F) - Machine learning framework for problems in Reinforcement Learning in python\n - [TeachingBox](http:\u002F\u002Fservicerobotik.hs-weingarten.de\u002Fen\u002Fteachingbox.php) - Java based Reinforcement Learning framework\n - [Policy Gradient Reinforcement Learning Toolbox for 
MATLAB](http:\u002F\u002Fwww.ias.informatik.tu-darmstadt.de\u002FResearch\u002FPolicyGradientToolbox)\n - [PIQLE](http:\u002F\u002Fsourceforge.net\u002Fprojects\u002Fpiqle\u002F) - Platform Implementing Q-Learning and other RL algorithms\n - [BeliefBox](https:\u002F\u002Fcode.google.com\u002Fp\u002Fbeliefbox\u002F) - Bayesian reinforcement learning library and toolkit\n - [Deep Q-Learning with TensorFlow](https:\u002F\u002Fgithub.com\u002Fnivwusquorum\u002Ftensorflow-deepq) - A deep Q learning demonstration using Google Tensorflow\n - [Atari](https:\u002F\u002Fgithub.com\u002FKaixhin\u002FAtari) - Deep Q-networks and asynchronous agents in Torch\n - [AgentNet](https:\u002F\u002Fgithub.com\u002Fyandexdataschool\u002FAgentNet) - A python library for deep reinforcement learning and custom recurrent networks using Theano+Lasagne.\n - [Reinforcement Learning Examples by RLCode](https:\u002F\u002Fgithub.com\u002Frlcode\u002Freinforcement-learning) - A Collection of minimal and clean reinforcement learning examples\n - [OpenAI Baselines](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fbaselines) - Well tested implementations ([and results](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fbaselines-results)) of reinforcement learning algorithms from OpenAI \n - [PyTorch Deep RL](https:\u002F\u002Fgithub.com\u002FShangtongZhang\u002FDeepRL) - Popular deep RL algorithm implementations with PyTorch\n - [ChainerRL](https:\u002F\u002Fgithub.com\u002Fchainer\u002Fchainerrl) - Popular deep RL algorithm implementations with Chainer\n - [Black-DROPS](https:\u002F\u002Fgithub.com\u002Fresibots\u002Fblackdrops) - Modular and generic code for the model-based policy search Black-DROPS algorithm (IROS 2017 paper) and easy integration with the [DART](http:\u002F\u002Fdartsim.github.io\u002F) simulator\n - [Jumanji](https:\u002F\u002Fgithub.com\u002Finstadeepai\u002Fjumanji) - A Suite of Industry-Driven Hardware-Accelerated RL Environments written in JAX.\n\n \n \n ## Tutorials \u002F 
Websites\n  - Mance Harmon and Stephanie Harmon, [Reinforcement Learning: A Tutorial](http:\u002F\u002Fold.nbu.bg\u002Fcogs\u002Fevents\u002F2000\u002FReadings\u002FPetrov\u002Frltutorial.pdf)\n  - C. Igel, M.A. Riedmiller, et al., Reinforcement Learning in a Nutshell, ESANN, 2007. [[Paper]](http:\u002F\u002Fimage.diku.dk\u002Figel\u002Fpaper\u002FRLiaN.pdf)\n  - UNSW - [Reinforcement Learning](http:\u002F\u002Fwww.cse.unsw.edu.au\u002F~cs9417ml\u002FRL1\u002Findex.html)\n    - [Introduction](http:\u002F\u002Fwww.cse.unsw.edu.au\u002F~cs9417ml\u002FRL1\u002Fintroduction.html)\n    - [TD-Learning](http:\u002F\u002Fwww.cse.unsw.edu.au\u002F~cs9417ml\u002FRL1\u002Ftdlearning.html)\n    - [Q-Learning and SARSA](http:\u002F\u002Fwww.cse.unsw.edu.au\u002F~cs9417ml\u002FRL1\u002Falgorithms.html)\n    - [Applet for \"Cat and Mouse\" Game](http:\u002F\u002Fwww.cse.unsw.edu.au\u002F~cs9417ml\u002FRL1\u002Fapplet.html)\n  - [ROS Reinforcement Learning Tutorial](http:\u002F\u002Fwiki.ros.org\u002Freinforcement_learning\u002FTutorials\u002FReinforcement%20Learning%20Tutorial)\n  - [POMDP for Dummies](http:\u002F\u002Fcs.brown.edu\u002Fresearch\u002Fai\u002Fpomdp\u002Ftutorial\u002Findex.html)\n  - Scholarpedia articles on:\n    - [Reinforcement Learning](http:\u002F\u002Fwww.scholarpedia.org\u002Farticle\u002FReinforcement_learning)\n    - [Temporal Difference Learning](http:\u002F\u002Fwww.scholarpedia.org\u002Farticle\u002FTemporal_difference_learning)\n  - Repository with useful [MATLAB Software, presentations, and demo videos](http:\u002F\u002Fbusoniu.net\u002Frepository.php)\n  - [Bibliography on Reinforcement Learning](http:\u002F\u002Fliinwww.ira.uka.de\u002Fbibliography\u002FNeural\u002Freinforcement.learning.html)\n  - UC Berkeley - CS 294: Deep Reinforcement Learning, Fall 2015 (John Schulman, Pieter Abbeel) [[Class Website]](http:\u002F\u002Frll.berkeley.edu\u002Fdeeprlcourse\u002F)\n  - [Blog posts on Reinforcement Learning, Parts 
1-4](https:\u002F\u002Fstudywolf.wordpress.com\u002F2012\u002F11\u002F25\u002Freinforcement-learning-q-learning-and-exploration\u002F) by Travis DeWolf\n  - [The Arcade Learning Environment](http:\u002F\u002Fwww.arcadelearningenvironment.org\u002F) - Atari 2600 games environment for developing AI agents\n  - [Deep Reinforcement Learning: Pong from Pixels](http:\u002F\u002Fkarpathy.github.io\u002F2016\u002F05\u002F31\u002Frl\u002F) by Andrej Karpathy\n  - [Demystifying Deep Reinforcement Learning](https:\u002F\u002Fwww.nervanasys.com\u002Fdemystifying-deep-reinforcement-learning\u002F) \n  - [Let’s make a DQN](https:\u002F\u002Fjaromiru.com\u002F2016\u002F09\u002F27\u002Flets-make-a-dqn-theory\u002F) \n  - [Simple Reinforcement Learning with Tensorflow, Parts 0-8](https:\u002F\u002Fmedium.com\u002Femergent-future\u002Fsimple-reinforcement-learning-with-tensorflow-part-0-q-learning-with-tables-and-neural-networks-d195264329d0#.78km20i8r) by Arthur Juliani\n  - [Practical_RL](https:\u002F\u002Fgithub.com\u002Fyandexdataschool\u002FPractical_RL) - github-based course in reinforcement learning in the wild (lectures, coding labs, projects)\n  - [RLenv.directory: Explore and find new reinforcement learning environments.](https:\u002F\u002Frlenv.directory\u002F)\n  - Katja Hofmann's talk at NeurIPS '19 - [RL: Past, Present and Future Perspectives](https:\u002F\u002Fslideslive.com\u002F38922817\u002Freinforcement-learning-past-present-and-future-perspectives)\n  - [How to Structure, Organize, Track and Manage Reinforcement Learning (RL) Projects](https:\u002F\u002Fneptune.ai\u002Fblog\u002Fhow-to-structure-organize-track-and-manage-reinforcement-learning-rl-projects)\n  - [Reinforcement Learning Cheat Sheet](https:\u002F\u002Falxthm.com\u002Fassets\u002Fpdf\u002Frl-cheatsheet.pdf) - A summary of some important concepts and algorithms in RL\n\n\n\n## Online Demos\n - [Real-world demonstrations of Reinforcement 
Learning](http:\u002F\u002Fwww.dcsc.tudelft.nl\u002F~robotics\u002Fmedia.html)\n - [Deep Q-Learning Demo](http:\u002F\u002Fcs.stanford.edu\u002Fpeople\u002Fkarpathy\u002Fconvnetjs\u002Fdemo\u002Frldemo.html) - A deep Q learning demonstration using ConvNetJS\n - [Deep Q-Learning with TensorFlow](https:\u002F\u002Fgithub.com\u002Fnivwusquorum\u002Ftensorflow-deepq) - A deep Q learning demonstration using Google TensorFlow\n - [Reinforcement Learning Demo](http:\u002F\u002Fcs.stanford.edu\u002Fpeople\u002Fkarpathy\u002Freinforcejs\u002F) - A reinforcement learning demo using reinforcejs by Andrej Karpathy\n\n\n## Open Source Reinforcement Learning Platforms\n- [OpenAI gym](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fgym) - A toolkit for developing and comparing reinforcement learning algorithms\n- [OpenAI universe](https:\u002F\u002Fgithub.com\u002Fopenai\u002Funiverse) - A software platform for measuring and training an AI's general intelligence across the world's supply of games, websites and other applications\n- [DeepMind Lab](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Flab) - A customisable 3D platform for agent-based AI research\n- [Project Malmo](https:\u002F\u002Fgithub.com\u002FMicrosoft\u002Fmalmo) - A platform for Artificial Intelligence experimentation and research built on top of Minecraft by Microsoft\n- [ViZDoom](https:\u002F\u002Fgithub.com\u002FMarqt\u002FViZDoom) - Doom-based AI research platform for reinforcement learning from raw visual information\n- [Retro Learning Environment](https:\u002F\u002Fgithub.com\u002Fnadavbh12\u002FRetro-Learning-Environment) - An AI platform for reinforcement learning based on video game emulators. Currently supports SNES and Sega Genesis. 
Compatible with OpenAI gym.\n- [torch-twrl](https:\u002F\u002Fgithub.com\u002Ftwitter\u002Ftorch-twrl) - A package that enables reinforcement learning in Torch by Twitter\n- [UETorch](https:\u002F\u002Fgithub.com\u002Ffacebook\u002FUETorch) - A Torch plugin for Unreal Engine 4 by Facebook\n- [TorchCraft](https:\u002F\u002Fgithub.com\u002FTorchCraft\u002FTorchCraft) - Connecting Torch to StarCraft\n- [garage](https:\u002F\u002Fgithub.com\u002Frlworkgroup\u002Fgarage) - A framework for reproducible reinforcement learning research, fully compatible with OpenAI Gym and DeepMind Control Suite (successor to rllab)\n- [TensorForce](https:\u002F\u002Fgithub.com\u002Freinforceio\u002Ftensorforce) - Practical deep reinforcement learning on TensorFlow with Gitter support and OpenAI Gym\u002FUniverse\u002FDeepMind Lab integration.\n- [tf-TRFL](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Ftrfl\u002F) - A library built on top of TensorFlow that exposes several useful building blocks for implementing Reinforcement Learning agents.\n- [OpenAI lab](https:\u002F\u002Fgithub.com\u002Fkengz\u002Fopenai_lab) - An experimentation system for Reinforcement Learning using OpenAI Gym, TensorFlow, and Keras.\n- [keras-rl](https:\u002F\u002Fgithub.com\u002Fmatthiasplappert\u002Fkeras-rl) - State-of-the-art deep reinforcement learning algorithms in Keras designed for compatibility with OpenAI.\n- [BURLAP](http:\u002F\u002Fburlap.cs.brown.edu) - Brown-UMBC Reinforcement Learning and Planning, a library written in Java\n- [MAgent](https:\u002F\u002Fgithub.com\u002Fgeek-ai\u002FMAgent) - A Platform for Many-agent Reinforcement Learning. 
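Many of the platforms above expose environments through the reset-step loop popularized by OpenAI Gym. As a minimal, self-contained sketch of the tabular Q-learning algorithm referenced throughout this list (the 5-state `ChainEnv` and all names below are hypothetical, chosen only for illustration, with no external dependencies):

```python
import random

class ChainEnv:
    """Hypothetical 5-state chain MDP with a Gym-style reset/step interface.

    Action 1 moves one state to the right (reward 1.0 on reaching the end);
    action 0 sends the agent back to the start.
    """
    N = 5

    def reset(self):
        self.s = 0
        return self.s

    def step(self, action):
        self.s = min(self.s + 1, self.N - 1) if action == 1 else 0
        done = self.s == self.N - 1
        return self.s, (1.0 if done else 0.0), done

def greedy(Q, s):
    """Greedy action with random tie-breaking."""
    best = max(Q[s])
    return random.choice([a for a in (0, 1) if Q[s][a] == best])

def q_learning(env, episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    Q = [[0.0, 0.0] for _ in range(env.N)]
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # Epsilon-greedy behavior policy.
            a = random.randrange(2) if random.random() < eps else greedy(Q, s)
            s2, r, done = env.step(a)
            # Off-policy TD update: bootstrap from the best successor value.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

random.seed(0)
Q = q_learning(ChainEnv())
policy = [greedy(Q, s) for s in range(4)]
print(policy)  # the greedy policy should move right in every non-terminal state
```

Swapping in a real environment from one of the platforms above mainly means adapting the `reset` and `step` signatures; the update rule itself is unchanged.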
\n- [Ray RLlib](http:\u002F\u002Fray.readthedocs.io\u002Fen\u002Flatest\u002Frllib.html) - Ray RLlib is a reinforcement learning library that aims to provide both performance and composability.\n- [SLM Lab](https:\u002F\u002Fgithub.com\u002Fkengz\u002FSLM-Lab) - A research framework for Deep Reinforcement Learning using Unity, OpenAI Gym, PyTorch, TensorFlow.\n- [Unity ML Agents](https:\u002F\u002Fgithub.com\u002FUnity-Technologies\u002Fml-agents) - Create reinforcement learning environments using the Unity Editor\n- [Intel Coach](https:\u002F\u002Fgithub.com\u002FNervanaSystems\u002Fcoach) - Coach is a Python reinforcement learning research framework containing implementations of many state-of-the-art algorithms.\n- [Microsoft AirSim](https:\u002F\u002Fmicrosoft.github.io\u002FAirSim\u002Freinforcement_learning\u002F) - Open source simulator based on Unreal Engine for autonomous vehicles from Microsoft AI & Research.\n- [DI-engine](https:\u002F\u002Fgithub.com\u002Fopendilab\u002FDI-engine) - DI-engine is a generalized Decision Intelligence engine. 
It supports most basic deep reinforcement learning (DRL) algorithms, such as DQN, PPO, SAC, and domain-specific algorithms like QMIX in multi-agent RL, GAIL in inverse RL, and RND in exploration problems.\n- [Jumanji](https:\u002F\u002Fgithub.com\u002Finstadeepai\u002Fjumanji) - A Suite of Industry-Driven Hardware-Accelerated RL Environments written in JAX.\n\n## Valuable Contributors 👩‍💻👨‍💻\n\n\u003Cp align=\"center\">\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Faikorea\u002Fawesome-rl\">\n  \u003Cimg src=\"https:\u002F\u002Fcontributors-img.web.app\u002Fimage?repo=aikorea\u002Fawesome-rl\" \u002F>\n\u003C\u002Fa>\u003C\u002Fp>\n\n# Awesome Reinforcement Learning [![Awesome](https:\u002F\u002Fcdn.rawgit.com\u002Fsindresorhus\u002Fawesome\u002Fd7305f38d29fed78fa85652e3a63e154dd8e8829\u002Fmedia\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fsindresorhus\u002Fawesome)\n\nThis page is no longer maintained.\n\nA curated list of resources dedicated to reinforcement learning.\n\nWe also have pages for other topics: [awesome-rnn](https:\u002F\u002Fgithub.com\u002Fkjw0612\u002Fawesome-rnn), [awesome-deep-vision](https:\u002F\u002Fgithub.com\u002Fkjw0612\u002Fawesome-deep-vision), [awesome-random-forest](https:\u002F\u002Fgithub.com\u002Fkjw0612\u002Fawesome-random-forest)\n\nMaintainers: [Hyunsoo Kim](http:\u002F\u002Fsites.duke.edu\u002Fhyunsookim\u002F), [Jiwon Kim](http:\u002F\u002Fgithub.com\u002Fkjw0612)\n\n## Contributing\nPlease feel free to submit [pull requests](https:\u002F\u002Fgithub.com\u002Faikorea\u002Fawesome-rl\u002Fpulls)\n\n## Table of Contents\n\n - [Theory](#theory)\n   - [Lectures](#lectures)\n   - [Books](#books)\n   - [Surveys](#surveys)\n   - [Papers \u002F Thesis](#papers--thesis)\n - [Applications](#applications)\n   - [Game Playing](#game-playing)\n   - [Robotics](#robotics)\n   - [Control](#control)\n   - [Operations Research](#operations-research)\n   - [Human-Computer Interaction](#human-computer-interaction)\n - [Codes](#codes)\n - [Tutorials \u002F Websites](#tutorials--websites)\n - [Online Demos](#online-demos)\n - [Open Source Reinforcement Learning Platforms](#open-source-reinforcement-learning-platforms)\n\n## Codes\n - Codes for examples and exercises in Richard Sutton and Andrew Barto's book Reinforcement Learning: An Introduction\n    - 
[Python Code](https:\u002F\u002Fgithub.com\u002FShangtongZhang\u002Freinforcement-learning-an-introduction)\n    - [MATLAB Code (link broken)](http:\u002F\u002Fwaxworksmath.com\u002FAuthors\u002FN_Z\u002FSutton\u002Fsutton.html)\n    - [C\u002FLisp Code](http:\u002F\u002Fincompleteideas.net\u002Fbook\u002Fcode\u002Fcode2nd.html)\n    - [Julia Code](https:\u002F\u002Fgithub.com\u002FJu-jl\u002FReinforcementLearningAnIntroduction.jl)\n    - [Book](http:\u002F\u002Fincompleteideas.net\u002Fbook\u002FRLbook2018.pdf)\n    - [Exercise Solutions](https:\u002F\u002Fgithub.com\u002FLyWangPX\u002FReinforcement-Learning-2nd-Edition-by-Sutton-Exercise-Solutions)\n - Simulation code for Reinforcement Learning Control Problems\n    - [Pole-Cart Problem](http:\u002F\u002Fpages.cs.wisc.edu\u002F~finton\u002Fpoledriver.html)\n    - [Q-learning Controller](http:\u002F\u002Fpages.cs.wisc.edu\u002F~finton\u002Fqcontroller.html)\n - [MATLAB Environment and GUI for Reinforcement Learning](http:\u002F\u002Fwww.cs.colostate.edu\u002F~anderson\u002Fres\u002Frl\u002Fmatlabpaper\u002Frl.html)\n - [Reinforcement Learning Repository - University of Massachusetts, Amherst](http:\u002F\u002Fwww-anw.cs.umass.edu\u002Frlr\u002F)\n - [Brown-UMBC Reinforcement Learning and Planning Library (Java)](http:\u002F\u002Fburlap.cs.brown.edu\u002F)\n - [Reinforcement Learning in R (MDP, Value Iteration)](http:\u002F\u002Fwww.moneyscience.com\u002Fpg\u002Fblog\u002FStatAlgo\u002Fread\u002F635759\u002Freinforcement-learning-in-r-markov-decision-process-mdp-and-value-iteration)\n - [Reinforcement Learning Environment in Python and MATLAB](https:\u002F\u002Fjamh-web.appspot.com\u002Fdownload.htm)\n - [RL-Glue](http:\u002F\u002Fglue.rl-community.org\u002Fwiki\u002FMain_Page) (standard interface for RL) and [RL-Glue Library](http:\u002F\u002Flibrary.rl-community.org\u002Fwiki\u002FMain_Page)\n - [PyBrain Library](http:\u002F\u002Fwww.pybrain.org\u002F) - Python-Based Reinforcement learning, Artificial intelligence, and Neural network\n - [RLPy Framework](http:\u002F\u002Frlpy.readthedocs.org\u002Fen\u002Flatest\u002F) - Value-Function-Based Reinforcement Learning Framework for Education and Research\n - [Maja](http:\u002F\u002Fmmlf.sourceforge.net\u002F) - Machine learning framework for problems in Reinforcement Learning in Python\n - [TeachingBox](http:\u002F\u002Fservicerobotik.hs-weingarten.de\u002Fen\u002Fteachingbox.php) - Java-based Reinforcement Learning framework\n - 
[Policy Gradient Reinforcement Learning Toolbox for MATLAB](http:\u002F\u002Fwww.ias.informatik.tu-darmstadt.de\u002FResearch\u002FPolicyGradientToolbox)\n - [PIQLE](http:\u002F\u002Fsourceforge.net\u002Fprojects\u002Fpiqle\u002F) - Platform Implementing Q-Learning and other RL algorithms\n - [BeliefBox](https:\u002F\u002Fcode.google.com\u002Fp\u002Fbeliefbox\u002F) - Bayesian reinforcement learning library and toolkit\n - [Deep Q-Learning with TensorFlow](https:\u002F\u002Fgithub.com\u002Fnivwusquorum\u002Ftensorflow-deepq) - A deep Q learning demonstration using Google TensorFlow\n - [Atari](https:\u002F\u002Fgithub.com\u002FKaixhin\u002FAtari) - Deep Q-networks and asynchronous agents in Torch\n - [AgentNet](https:\u002F\u002Fgithub.com\u002Fyandexdataschool\u002FAgentNet) - A Python library for deep reinforcement learning and custom recurrent networks using Theano+Lasagne.\n - [Reinforcement Learning Examples by RLCode](https:\u002F\u002Fgithub.com\u002Frlcode\u002Freinforcement-learning) - A collection of minimal and clean reinforcement learning examples\n - [OpenAI Baselines](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fbaselines) - Well tested implementations (and results) of reinforcement learning algorithms from OpenAI\n - [PyTorch Deep RL](https:\u002F\u002Fgithub.com\u002FShangtongZhang\u002FDeepRL) - Popular deep RL algorithm implementations with PyTorch\n - [ChainerRL](https:\u002F\u002Fgithub.com\u002Fchainer\u002Fchainerrl) - Popular deep RL algorithm implementations with Chainer\n - [Black-DROPS](https:\u002F\u002Fgithub.com\u002Fresibots\u002Fblackdrops) - Modular and generic code for the model-based policy search Black-DROPS algorithm (IROS 2017 paper) and easy integration with the [DART](http:\u002F\u002Fdartsim.github.io\u002F) simulator\n - [Gold](https:\u002F\u002Fgithub.com\u002Faunum\u002Fgold) - A reinforcement learning library for Go (Golang).\n - [Jumanji](https:\u002F\u002Fgithub.com\u002Finstadeepai\u002Fjumanji) - A Suite of Industry-Driven Hardware-Accelerated RL Environments written in JAX.\n\n## Theory\n\n### Lectures\n- [DeepMind x UCL] [Reinforcement Learning Lecture Series 2021](https:\u002F\u002Fdeepmind.com\u002Flearning-resources\u002Freinforcement-learning-series-2021)\n- [UCL] [COMPM050\u002FCOMPGI13 Reinforcement Learning](http:\u002F\u002Fwww0.cs.ucl.ac.uk\u002Fstaff\u002Fd.silver\u002Fweb\u002FTeaching.html) by David Silver\n- [UCL] [COMPMI22\u002FCOMPGI22 - Advanced Deep Learning and Reinforcement Learning](https:\u002F\u002Fgithub.com\u002Fenggen\u002FDeepMind-Advanced-Deep-Learning-and-Reinforcement-Learning)\n- [UC Berkeley] CS188 Artificial Intelligence by Pieter Abbeel\n  - [Lecture 8: Markov Decision Processes 
1](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=i0o-ui1N35U)\n  - [Lecture 9: Markov Decision Processes 2](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=Csiiv6WGzKM)\n  - [Lecture 10: Reinforcement Learning 1](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=ifma8G7LegE)\n  - [Lecture 11: Reinforcement Learning 2](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=Si1_YTw960c)\n- [Udacity (Georgia Tech.)] [CS7642 Reinforcement Learning](https:\u002F\u002Fclassroom.udacity.com\u002Fcourses\u002Fud600)\n- [Stanford] [CS229 Machine Learning - Lecture 16: Reinforcement Learning](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=RtxI449ZjSc&feature=relmfu) by Andrew Ng\n- [UC Berkeley] [Deep RL Bootcamp](https:\u002F\u002Fsites.google.com\u002Fview\u002Fdeep-rl-bootcamp\u002Flectures)\n- [UC Berkeley] [CS294 Deep Reinforcement Learning](http:\u002F\u002Frll.berkeley.edu\u002Fdeeprlcourse\u002F) by John Schulman and Pieter Abbeel\n- [CMU] [10703: Deep RL and Control, Spring 2017](https:\u002F\u002Fkatefvision.github.io\u002F)\n- [MIT] [6.S094: Deep Learning for Self-Driving Cars](http:\u002F\u002Fselfdrivingcars.mit.edu\u002F)\n  - [Lecture 2: Deep Reinforcement Learning for Motion Planning](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=QDzM8r3WgBw&list=PLrAXtmErZgOeiKm4sgNOknGvNjby9efdf)\n- [Siraj Raval]: Introduction to AI for video games (RL video series)\n  - [Introduction to AI for Video Games](https:\u002F\u002Fyoutu.be\u002Fi_McNBDP9Qs)\n  - [Monte Carlo Prediction](https:\u002F\u002Fyoutu.be\u002F-YpalutQCKw)\n  - [Q Learning Explained](https:\u002F\u002Fyoutu.be\u002FaCEvtRtNO-M)\n  - [Solving the Basic Game of Pong](https:\u002F\u002Fyoutu.be\u002FpN7ETkOizGM)\n  - [Actor-Critic Algorithms](https:\u002F\u002Fyoutu.be\u002Fw_3mmm0P0j8)\n  - [War Robots](https:\u002F\u002Fyoutu.be\u002Ftm5kQmjfZN8)\n- [Mutual Information] [Fundamentals of Reinforcement Learning](https:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLzvYlJMoZ02Dxtwe-MmH4nOB5jYlMGBjr)\n  - [Reinforcement Learning: A Six-Part Series](https:\u002F\u002Fyoutu.be\u002FNFo9v_yKQXA)\n  - [Bellman Equations, Dynamic Programming, Generalized Policy Iteration](https:\u002F\u002Fyoutu.be\u002F_j6pvGEchWU)\n  - [Monte Carlo and Off-Policy Methods](https:\u002F\u002Fyoutu.be\u002FbpUszPiWM7o)\n  - [TD Learning, Sarsa, and Q-Learning](https:\u002F\u002Fyoutu.be\u002FAJiG3ykOxmY)\n\n### Books\n- Richard Sutton and Andrew 
Barto, Reinforcement Learning: An Introduction (1st Edition, 1998) [[Book]](http:\u002F\u002Fincompleteideas.net\u002Fbook\u002Febook\u002Fthe-book.html) [[Code]](http:\u002F\u002Fincompleteideas.net\u002Fbook\u002Fcode\u002Fcode.html)\n- Richard Sutton and Andrew Barto, Reinforcement Learning: An Introduction (2nd Edition, in progress, 2018) [[Book]](http:\u002F\u002Fincompleteideas.net\u002Fbook\u002FRLbook2020.pdf) [[Code]](https:\u002F\u002Fgithub.com\u002FShangtongZhang\u002Freinforcement-learning-an-introduction)\n- Csaba Szepesvari, Algorithms for Reinforcement Learning [[Book]](http:\u002F\u002Fwww.ualberta.ca\u002F~szepesva\u002Fpapers\u002FRLAlgsInMDPs.pdf)\n- David Poole and Alan Mackworth, Artificial Intelligence: Foundations of Computational Agents [[Book Chapter]](http:\u002F\u002Fartint.info\u002Fhtml\u002FArtInt_262.html)\n- Dimitri P. Bertsekas and John N. Tsitsiklis, Neuro-Dynamic Programming [[Book (Amazon)]](http:\u002F\u002Fwww.amazon.com\u002FNeuro-Dynamic-Programming-Optimization-Neural-Computation\u002Fdp\u002F1886529108\u002Fref=sr_1_3?s=books&ie=UTF8&qid=1442461075&sr=1-3&refinements=p_27%3AJohn+N.+Tsitsiklis+Dimitri+P.+Bertsekas) [[Summary]](http:\u002F\u002Fwww.mit.edu\u002F~dimitrib\u002FNDP_Encycl.pdf)\n- Mykel J. Kochenderfer, Decision Making Under Uncertainty: Theory and Application [[Book (Amazon)]](http:\u002F\u002Fwww.amazon.com\u002FDecision-Making-Under-Uncertainty-Application\u002Fdp\u002F0262029251\u002Fref=sr_1_1?ie=UTF8&qid=1441126550&sr=8-1&keywords=kochenderfer&pebp=1441126551594&perid=1Y6RG2EGRD26659CJHH9)\n- Deep Reinforcement Learning in Action [[Book (Manning)]](https:\u002F\u002Fwww.manning.com\u002Fbooks\u002Fdeep-reinforcement-learning-in-action)\n- Dimitri P. Bertsekas, Reinforcement Learning and Optimal Control [Book, videolectures, and course material, 2019](http:\u002F\u002Fweb.mit.edu\u002Fdimitrib\u002Fwww\u002FRLbook.html)\n\n\n### Surveys\n- Leslie Pack Kaelbling, Michael L. Littman, Andrew W. Moore, Reinforcement Learning: A Survey (JAIR 1996) [[Paper]](https:\u002F\u002Fwww.jair.org\u002Findex.php\u002Fjair\u002Farticle\u002Fdownload\u002F10166\u002F24110\u002F)\n- S. S. Keerthi and B. Ravindran, A Tutorial Survey of Reinforcement Learning (Sadhana 1994) [[Paper]](http:\u002F\u002Fwww.cse.iitm.ac.in\u002F~ravi\u002Fpapers\u002Fkeerthi.rl-survey.pdf)\n- Matthew E. 
Taylor and Peter Stone, Transfer Learning for Reinforcement Learning Domains: A Survey (JMLR 2009) [[Paper]](http:\u002F\u002Fwww.jmlr.org\u002Fpapers\u002Fvolume10\u002Ftaylor09a\u002Ftaylor09a.pdf)\n- Jens Kober, J. Andrew Bagnell, Jan Peters, Reinforcement Learning in Robotics: A Survey (IJRR 2013) [[Paper]](http:\u002F\u002Fwww.ias.tu-darmstadt.de\u002Fuploads\u002FPublications\u002FKober_IJRR_2013.pdf)\n- Michael L. Littman, Reinforcement learning improves behaviour from evaluative feedback (Nature 2015) [[Paper]](http:\u002F\u002Fwww.nature.com\u002Fnature\u002Fjournal\u002Fv521\u002Fn7553\u002Ffull\u002Fnature14540.html)\n- Marc P. Deisenroth, Gerhard Neumann, Jan Peters, A Survey on Policy Search for Robotics, Foundations and Trends in Robotics (2014) [[Book]](https:\u002F\u002Fspiral.imperial.ac.uk:8443\u002Fbitstream\u002F10044\u002F1\u002F12051\u002F7\u002Ffnt_corrected_2014-8-22.pdf)\n- Kai Arulkumaran, Marc Peter Deisenroth, Miles Brundage, Anil Anthony Bharath, A Brief Survey of Deep Reinforcement Learning (IEEE Signal Processing Magazine 2017) [[DOI]](https:\u002F\u002Fdx.doi.org\u002F10.1109\u002FMSP.2017.2743240) [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1708.05866)\n- Benjamin Recht, A Tour of Reinforcement Learning: The View from Continuous Control (Annual Review of Control, Robotics, and Autonomous Systems 2019) [[DOI]](https:\u002F\u002Fdx.doi.org\u002F10.1146\u002Fannurev-control-053018-023825)\n\n### Papers \u002F Thesis\nFoundational Papers\n - Marvin Minsky, Steps toward Artificial Intelligence, Proceedings of the IRE, 1961. [[DOI]](https:\u002F\u002Fdx.doi.org\u002F10.1109\u002FJRPROC.1961.287775) [[Paper]](http:\u002F\u002Fstaffweb.worc.ac.uk\u002FDrC\u002FCourses%202010-11\u002FComp%203104\u002FTutor%20Inputs\u002FSession%209%20Prep\u002FReading%20material\u002FMinsky60steps.pdf) (discusses issues in RL such as the "credit assignment problem")\n - Ian H. Witten, An Adaptive Optimal Controller for Discrete-Time Markov Environments, Information and Control, 1977. [[DOI]](https:\u002F\u002Fdoi.org\u002F10.1016\u002FS0019-9958(77)90354-0) [[Paper]](http:\u002F\u002Fwww.cs.waikato.ac.nz\u002F~ihw\u002Fpapers\u002F77-IHW-AdaptiveController.pdf) (earliest publication on the temporal-difference (TD) learning rule)\n\nMethods\n - Dynamic Programming (DP):\n   - Christopher J. C. H. Watkins, Learning from Delayed Rewards, Ph.D. Thesis, Cambridge University, 1989. [[Paper]](https:\u002F\u002Fwww.cs.rhul.ac.uk\u002Fhome\u002Fchrisw\u002Fnew_thesis.pdf)\n - Monte Carlo:\n   - 
Andrew Barto and Michael Duff, Monte Carlo Matrix Inversion and Reinforcement Learning, NIPS, 1994. [[Paper]](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F865-monte-carlo-matrix-inversion-and-reinforcement-learning.pdf)\n   - Satinder P. Singh and Richard S. Sutton, Reinforcement Learning with Replacing Eligibility Traces, Machine Learning, 1996. [[Paper]](http:\u002F\u002Fwww-all.cs.umass.edu\u002Fpubs\u002F1995_96\u002Fsingh_s_ML96.pdf)\n - Temporal-Difference:\n   - Richard S. Sutton, Learning to predict by the methods of temporal differences. Machine Learning 3: 9-44, 1988. [[Paper]](http:\u002F\u002Fwebdocs.cs.ualberta.ca\u002F~sutton\u002Fpapers\u002Fsutton-88-with-erratum.pdf)\n - Q-Learning (Off-policy TD algorithm):\n   - Chris Watkins, Learning from Delayed Rewards, Cambridge, 1989. [[Paper]](http:\u002F\u002Fwww.cs.rhul.ac.uk\u002Fhome\u002Fchrisw\u002Fthesis.html)\n - Sarsa (On-policy TD algorithm):\n   - G. A. Rummery and M. Niranjan, On-line Q-learning using connectionist systems, Technical Report, Cambridge University, 1994. [[Report]](https:\u002F\u002Fwww.google.com\u002Furl?sa=t&rct=j&q=&esrc=s&source=web&cd=3&ved=0CDIQFjACahUKEwj2lMm5wZDIAhUHkg0KHa6kAVM&url=ftp%3A%2F%2Fmi.eng.cam.ac.uk%2Fpub%2Freports%2Fauto-pdf%2Frummery_tr166.pdf&usg=AFQjCNHz6IrgcaaO5lzC7t8oEIBY9epozg&sig2=sa-emPme1m5Jav7YmaXsNQ&cad=rja)\n   - Richard S. Sutton, Generalization in Reinforcement Learning: Successful Examples Using Sparse Coarse Coding, NIPS, 1996. [[Paper]](http:\u002F\u002Fwebdocs.cs.ualberta.ca\u002F~sutton\u002Fpapers\u002Fsutton-96.pdf)\n - R-Learning (learning of relative values):\n   - Andrew Schwartz, A Reinforcement Learning Method for Maximizing Undiscounted Rewards, ICML, 1993. [[Paper (Google Scholar)]](https:\u002F\u002Fscholar.google.com\u002Fscholar?q=reinforcement+learning+method+for+maximizing+undiscounted+rewards&hl=en&as_sdt=0&as_vis=1&oi=scholart&sa=X&ved=0CBsQgQMwAGoVChMIho6p_MOQyAIVwh0eCh3XWAwM)\n - Function Approximation methods (Least-Squares Temporal Difference, Least-Squares Policy Iteration):\n   - Steven J. Bradtke and Andrew G. Barto, Linear Least-Squares Algorithms for Temporal Difference Learning, Machine Learning, 1996. [[Paper]](http:\u002F\u002Fwww-anw.cs.umass.edu\u002Fpubs\u002F1995_96\u002Fbradtke_b_ML96.pdf)\n   - Michail G. Lagoudakis and Ronald Parr, Model-Free Least Squares Policy Iteration, NIPS, 2001. [[Paper]](http:\u002F\u002Fwww.cs.duke.edu\u002Fresearch\u002FAI\u002FLSPI\u002Fnips01.pdf) [[Code]](http:\u002F\u002Fwww.cs.duke.edu\u002Fresearch\u002FAI\u002FLSPI\u002F)\n - Policy Search \u002F Policy Gradient:\n   - 
Richard Sutton, David McAllester, Satinder Singh, Yishay Mansour, Policy Gradient Methods for Reinforcement Learning with Function Approximation, NIPS, 1999. [[Paper]](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F1713-policy-gradient-methods-for-reinforcement-learning-with-function-approximation.pdf)\n   - Jan Peters, Sethu Vijayakumar, Stefan Schaal, Natural Actor-Critic, ECML, 2005. [[Paper]](https:\u002F\u002Fhomes.cs.washington.edu\u002F~todorov\u002Fcourses\u002Famath579\u002Freading\u002FNaturalActorCritic.pdf)\n   - Jens Kober and Jan Peters, Policy Search for Motor Primitives in Robotics, NIPS, 2009. [[Paper]](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F3545-policy-search-for-motor-primitives-in-robotics.pdf)\n   - Jan Peters, Katharina Mulling, Yasemin Altun, Relative Entropy Policy Search, AAAI, 2010. [[Paper]](http:\u002F\u002Fwww.kyb.tue.mpg.de\u002Ffileadmin\u002Fuser_upload\u002Ffiles\u002Fpublications\u002Fattachments\u002FAAAI-2010-Peters_6439%5b0%5d.pdf)\n   - Freek Stulp and Olivier Sigaud, Path Integral Policy Improvement with Covariance Matrix Adaptation, ICML, 2012. [[Paper]](http:\u002F\u002Farxiv.org\u002Fpdf\u002F1206.4621v1.pdf)\n   - Nate Kohl and Peter Stone, Policy Gradient Reinforcement Learning for Fast Quadrupedal Locomotion, ICRA, 2004. [[Paper]](http:\u002F\u002Fwww.cs.utexas.edu\u002F~pstone\u002FPapers\u002Fbib2html-links\u002Ficra04.pdf)\n   - Marc Deisenroth and Carl Rasmussen, PILCO: A Model-Based and Data-Efficient Approach to Policy Search, ICML, 2011. [[Paper]](http:\u002F\u002Fmlg.eng.cam.ac.uk\u002Fpub\u002Fpdf\u002FDeiRas11.pdf)\n   - Scott Kuindersma, Roderic Grupen, Andrew Barto, Learning Dynamic Arm Motions for Postural Recovery, Humanoids, 2011. [[Paper]](http:\u002F\u002Fwww-all.cs.umass.edu\u002Fpubs\u002F2011\u002Fkuindersma_g_b_11.pdf)\n   - Konstantinos Chatzilygeroudis, Roberto Rama, Rituraj Kaushik, Dorian Goepp, Vassilis Vassiliades, Jean-Baptiste Mouret, Black-Box Data-efficient Policy Search for Robotics, IROS, 2017. [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.07261)\n - Hierarchical RL:\n   - Richard Sutton, Doina Precup, Satinder Singh, Between MDPs and semi-MDPs: A Framework for Temporal Abstraction in Reinforcement Learning, Artificial Intelligence, 1999. [[Paper]](https:\u002F\u002Fwebdocs.cs.ualberta.ca\u002F~sutton\u002Fpapers\u002FSPS-aij.pdf)\n   - George Konidaris and Andrew Barto, Building Portable Options: Skill Transfer in Reinforcement Learning, IJCAI, 2007. [[Paper]](http:\u002F\u002Fwww-anw.cs.umass.edu\u002Fpubs\u002F2007\u002Fkonidaris_b_IJCAI07.pdf)\n - Deep Learning + Reinforcement Learning (a sample of recent works on DL+RL):\n   - V. 
Mnih, et al., Human-level Control through Deep Reinforcement Learning, Nature, 2015. [[Paper]](http:\u002F\u002Fwww.readcube.com\u002Farticles\u002F10.1038%2Fnature14236?shared_access_token=Lo_2hFdW4MuqEcF3CVBZm9RgN0jAjWel9jnR3ZoTv0P5kedCCNjz3FJ2FhQCgXkApOr3ZSsJAldp-tw3IWgTseRnLpAc9xQq-vTA2Z5Ji9lg16_WvCy4SaOgpK5XXA6ecqo8d8J7l4EJsdjwai53GqKt-7JuioG0r3iV67MQIro74l6IxvmcVNKBgOwiMGi8U0izJStLpmQp6Vmi_8Lw_A%3D%3D)\n   - Xiaoxiao Guo, Satinder Singh, Honglak Lee, Richard Lewis, Xiaoshi Wang, Deep Learning for Real-Time Atari Game Play Using Offline Monte-Carlo Tree Search Planning, NIPS, 2014. [[Paper]](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F5421-deep-learning-for-real-time-atari-game-play-using-offline-monte-carlo-tree-search-planning.pdf)\n   - Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel, End-to-End Training of Deep Visuomotor Policies. ArXiv, 16 Oct 2015. [[ArXiv]](http:\u002F\u002Farxiv.org\u002Fpdf\u002F1504.00702v3.pdf)\n   - Tom Schaul, John Quan, Ioannis Antonoglou, David Silver, Prioritized Experience Replay, ArXiv, 18 Nov 2015. [[ArXiv]](http:\u002F\u002Farxiv.org\u002Fpdf\u002F1511.05952v2.pdf)\n   - Hado van Hasselt, Arthur Guez, David Silver, Deep Reinforcement Learning with Double Q-Learning, ArXiv, 22 Sep 2015. [[ArXiv]](http:\u002F\u002Farxiv.org\u002Fabs\u002F1509.06461)\n   - Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu, Asynchronous Methods for Deep Reinforcement Learning, ArXiv, 4 Feb 2016. [[ArXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1602.01783)\n\n## Applications\n### Game Playing\nTraditional Games\n  - Backgammon - Gerald Tesauro, "TD-Gammon" game play using TD(λ) (ACM 1995) [[Paper]](http:\u002F\u002Fwww.bkgm.com\u002Farticles\u002Ftesauro\u002Ftdl.html)\n  - Chess - Jonathan Baxter, Andrew Tridgell and Lex Weaver, "KnightCap" program using TD(λ) (1999) [[arXiv]](http:\u002F\u002Farxiv.org\u002Fpdf\u002Fcs\u002F9901002v1.pdf)\n  - Chess - Matthew Lai, Giraffe: Using deep reinforcement learning to play chess (2015) [[arXiv]](http:\u002F\u002Farxiv.org\u002Fpdf\u002F1509.01549v2.pdf)\n\nComputer Games\n  - Atari 2600 Games - Volodymyr Mnih, Koray Kavukcuoglu, David Silver et al., Human-level Control through Deep Reinforcement Learning (Nature 2015) [[DOI]](https:\u002F\u002Fdx.doi.org\u002Fdoi:10.1038\u002Fnature14236) 
[[论文]](http:\u002F\u002Fwww.readcube.com\u002Farticles\u002F10.1038%2Fnature14236?shared_access_token=Lo_2hFdW4MuqEcF3CVBZm9RgN0jAjWel9jnR3ZoTv0P5kedCCNjz3FJ2FhQCgXkApOr3ZSsJAldp-tw3IWgTseRnLpAc9xQq-vTA2Z5Ji9lg16_WvCy4SaOgpK5XXA6ecqo8d8J7l4EJsdjwai53GqKt-7JuioG0r3iV67MQIro74l6IxvmcVNKBgOwiMGi8U0izJStLpmQp6Vmi_8Lw_A%3D%3D) [[代码]](https:\u002F\u002Fsites.google.com\u002Fa\u002Fdeepmind.com\u002Fdqn\u002F) [[视频]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=iqXKQf2BOSE)\n  - Flappy Bird - Sarvagya Vaish，[Flappy Bird强化学习](https:\u002F\u002Fgithub.com\u002FSarvagyaVaish\u002FFlappyBirdRL) [[视频]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=xM62SpKAZHU)\n  - 马里奥 - 肯尼思·O·斯坦利和里斯托·米库莱宁，MarI\u002FO：利用进化强化学习和人工神经网络学习玩马里奥（Evolutionary Computation 2002）[[论文]](http:\u002F\u002Fnn.cs.utexas.edu\u002Fdownloads\u002Fpapers\u002Fstanley.ec02.pdf) [[视频]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=qv6UVOQ0F44)\n  - 星际争霸II - 奥里奥尔·维尼亚尔斯、伊戈尔·巴布什金、沃伊切赫·M·查尔涅茨基等，使用多智能体强化学习在星际争霸II中达到大师级水平（Nature 2019）[[DOI]](https:\u002F\u002Fdoi.org\u002F10.1038\u002Fs41586-019-1724-z) [[论文]](https:\u002F\u002Fwww.nature.com\u002Farticles\u002Fs41586-019-1724-z.epdf) [[视频]](https:\u002F\u002Fdeepmind.com\u002Fresearch\u002Fopen-source\u002Falphastar-resources)\n\n### 机器人学\n  - 内特·科尔和彼得·斯通，用于快速四足行走的策略梯度强化学习（ICRA 2004）[[论文]](http:\u002F\u002Fwww.cs.utexas.edu\u002F~pstone\u002FPapers\u002Fbib2html-links\u002Ficra04.pdf)\n  - 彼塔尔·科尔穆舍夫、西尔万·卡利农和达尔文·G·卡尔德威尔，基于EM的强化学习实现机器人运动技能协调（IROS 2010）[[论文]](http:\u002F\u002Fkormushev.com\u002Fpapers\u002FKormushev-IROS2010.pdf) [[视频]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=W_gxLKSsSIE)\n  - 托德·赫斯特、迈克尔·奎兰和彼得·斯通，人形机器人上的强化学习通用模型学习（ICRA 2010）[[论文]](https:\u002F\u002Fccc.inaoep.mx\u002F~mdprl\u002Fdocumentos\u002FHester_2010.pdf) [[视频]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=mRpX9DFCdwI&list=PL5nBAYUyJTrM48dViibyi68urttMlUv7e&index=12)\n  - 乔治·科尼达里斯、斯科特·昆德斯玛、罗德里克·格鲁彭和安德鲁·巴托，移动机械臂上的自主技能获取（AAAI 
2011) [[Paper]](http://lis.csail.mit.edu/pubs/konidaris-aaai11b.pdf) [[Video]](https://www.youtube.com/watch?v=yUICAkSQTZY)
  - Marc Peter Deisenroth and Carl Edward Rasmussen, PILCO: A Model-Based and Data-Efficient Approach to Policy Search (ICML 2011) [[Paper]](http://mlg.eng.cam.ac.uk/pub/pdf/DeiRas11.pdf)
  - Scott Niekum, Sachin Chitta, Bhaskara Marthi, et al., Incremental Semantically Grounded Learning from Demonstration (RSS 2013) [[Paper]](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.310.87&rep=rep1&type=pdf)
  - Mark Cutler and Jonathan P. How, Efficient Reinforcement Learning for Robots using Informative Simulated Priors (ICRA 2015) [[Paper]](http://markjcutler.com/papers/Cutler15_ICRA.pdf) [[Video]](https://www.youtube.com/watch?v=kKClFx6l1HY)
  - Antoine Cully, Jeff Clune, Danesh Tarapore and Jean-Baptiste Mouret, Robots that can adapt like animals (Nature 2015) [[ArXiv](https://arxiv.org/abs/1407.3501)] [[Video]](https://www.youtube.com/watch?v=T-c17RKh3uE) [[Code]](https://github.com/resibots/cully_2015_nature)
  - Konstantinos Chatzilygeroudis, Roberto Rama, Rituraj Kaushik et al., Black-Box Data-efficient Policy Search for Robotics (IROS 2017) [[ArXiv](https://arxiv.org/abs/1703.07261)] [[Video]](https://www.youtube.com/watch?v=kTEyYiIFGPM) [[Code]](https://github.com/resibots/blackdrops)
  - P. Travis Jardine, Michael Kogan, Sidney N. Givigi and Shahram Yousefi, Adaptive predictive control of a differential drive robot tuned with reinforcement learning (Int J Adapt Control Signal Process 2019) [[DOI]](https://dx.doi.org/10.1002/acs.2882)

### Control
  - Pieter Abbeel, Adam Coates, et al., An Application of Reinforcement Learning to Aerobatic Helicopter Flight (NIPS 2006) [[Paper]](http://heli.stanford.edu/papers/nips06-aerobatichelicopter.pdf) [[Video]](https://www.youtube.com/watch?v=VCdxqn0fcnE)
  - J. Andrew Bagnell and Jeff G. Schneider, Autonomous helicopter control using Reinforcement Learning Policy Search Methods (ICRA 2001) [[Paper]](https://kilthub.cmu.edu/articles/Autonomous_Helicopter_Control_Using_Reinforcement_Learning_Policy_Search_Methods/6552119/files/12033380.pdf)

### Operations Research
  - Scott Proper and Prasad Tadepalli, Scaling Average-reward Reinforcement Learning for Product Delivery (AAAI
2004) [[Paper]](https://s3.amazonaws.com/academia.edu.documents/44453946/Scaling_Average-reward_Reinforcement_Lea20160405-20758-1wxkm8y.pdf)
  - Naoki Abe, Naval Verma et al., Cross Channel Optimized Marketing by Reinforcement Learning (KDD 2004) [[Paper]](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.375.151&rep=rep1&type=pdf)
  - Bernd Waschneck, Andre Reichstaller, Lenz Belzner et al., Deep reinforcement learning for semiconductor production scheduling (ASMC 2018) [[DOI]](https://dx.doi.org/10.1109/ASMC.2018.8373191) [[Paper]](https://www.researchgate.net/profile/Lenz_Belzner/publication/325713164_Deep_reinforcement_learning_for_semiconductor_production_scheduling/links/5be537caa6fdcc3a8dc89fb3/Deep-reinforcement-learning-for-semiconductor-production-scheduling.pdf)

### Human-Computer Interaction
  - Satinder Singh, Diane Litman et al., Optimizing Dialogue Management with Reinforcement Learning: Experiments with the NJFun System (JAIR 2002) [[Paper]](http://web.eecs.umich.edu/~baveja/Papers/RLDSjair.pdf)

## Codes
 - Example code for the examples and exercises in Richard Sutton and Andrew Barto's [book](#books), Reinforcement Learning: An Introduction
    - [Python code](https://github.com/ShangtongZhang/reinforcement-learning-an-introduction) (2nd edition)
    - [MATLAB code](https://waxworksmath.com/Authors/N_Z/Sutton/RLAI_1st_Edition/sutton.html) (1st edition)
 - Simulation code for reinforcement learning control problems
    - [Pole balancing problem](http://pages.cs.wisc.edu/~finton/poledriver.html)
    - [Q-learning controller](http://pages.cs.wisc.edu/~finton/qcontroller.html)
 - [MATLAB environment and GUI for reinforcement learning](http://www.cs.colostate.edu/~anderson/res/rl/matlabpaper/rl.html)
 - [Reinforcement Learning Repository - University of Massachusetts, Amherst](http://www-anw.cs.umass.edu/rlr/)
 - [Brown-UMBC Reinforcement Learning and Planning Library (Java)](http://burlap.cs.brown.edu/)
 - [Reinforcement Learning in R (MDP, Value Iteration)](http://www.moneyscience.com/pg/blog/StatAlgo/read/635759/reinforcement-learning-in-r-markov-decision-process-mdp-and-value-iteration)
 - [Reinforcement Learning Environments in Python and MATLAB](https://jamh-web.appspot.com/download.htm)
 - [RL-Glue](http://glue.rl-community.org/wiki/Main_Page) (a standard interface for reinforcement learning) and the [RL-Glue Library](http://library.rl-community.org/wiki/Main_Page)
 - [PyBrain Library](http://www.pybrain.org/) - Python-based reinforcement learning, artificial intelligence, and neural network library
 - [RLPy Framework](http://rlpy.readthedocs.org/en/latest/) - value-function-based reinforcement learning framework for education and research
 - [Maja](http://mmlf.sourceforge.net/) - machine learning framework in Python for reinforcement learning problems
 - [TeachingBox](http://servicerobotik.hs-weingarten.de/en/teachingbox.php) - Java-based reinforcement learning framework
 - [Policy Gradient Reinforcement Learning Toolbox for MATLAB](http://www.ias.informatik.tu-darmstadt.de/Research/PolicyGradientToolbox)
 - [PIQLE](http://sourceforge.net/projects/piqle/) - a platform implementing Q-learning and other reinforcement learning algorithms
 - [BeliefBox](https://code.google.com/p/beliefbox/) - Bayesian reinforcement learning library and toolkit
 - [Deep Q-Learning with TensorFlow](https://github.com/nivwusquorum/tensorflow-deepq) - a deep Q-learning demonstration using Google TensorFlow
 - [Atari](https://github.com/Kaixhin/Atari) - deep Q-networks and asynchronous agents in Torch
 - [AgentNet](https://github.com/yandexdataschool/AgentNet) - a Python library for deep reinforcement learning and custom recurrent networks using Theano+Lasagne
 - [Reinforcement learning examples by RLCode](https://github.com/rlcode/reinforcement-learning) - a collection of minimal and clean reinforcement learning examples
 - [OpenAI Baselines](https://github.com/openai/baselines) - well-tested implementations of reinforcement learning algorithms from OpenAI, with [published results](https://github.com/openai/baselines-results)
 - [PyTorch Deep RL](https://github.com/ShangtongZhang/DeepRL) - popular deep reinforcement learning algorithms implemented in PyTorch
 - [ChainerRL](https://github.com/chainer/chainerrl) - popular deep reinforcement learning algorithms implemented in Chainer
 - [Black-DROPS](https://github.com/resibots/blackdrops) - modular and generic code for the model-based policy search algorithm Black-DROPS (IROS 2017 paper), with easy integration with the [DART](http://dartsim.github.io/) simulator
 - [Jumanji](https://github.com/instadeepai/jumanji) - a suite of industry-driven, hardware-accelerated reinforcement learning environments written in JAX

## Tutorials / Websites
  - Mance Harmon and Stephanie Harmon, [Reinforcement Learning: A Tutorial](http://old.nbu.bg/cogs/events/2000/Readings/Petrov/rltutorial.pdf)
  - C. Igel, M.A. Riedmiller, et al., Reinforcement Learning in a Nutshell, ESANN, 2007. [[Paper]](http://image.diku.dk/igel/paper/RLiaN.pdf)
  - UNSW - [Reinforcement Learning](http://www.cse.unsw.edu.au/~cs9417ml/RL1/index.html)
    - [Introduction](http://www.cse.unsw.edu.au/~cs9417ml/RL1/introduction.html)
    - [TD-Learning](http://www.cse.unsw.edu.au/~cs9417ml/RL1/tdlearning.html)
    - [Q-Learning and SARSA](http://www.cse.unsw.edu.au/~cs9417ml/RL1/algorithms.html)
    - ["Cat and Mouse" applet](http://www.cse.unsw.edu.au/~cs9417ml/RL1/applet.html)
  - [ROS Reinforcement Learning Tutorial](http://wiki.ros.org/reinforcement_learning/Tutorials/Reinforcement%20Learning%20Tutorial)
  - [POMDP tutorial for beginners](http://cs.brown.edu/research/ai/pomdp/tutorial/index.html)
  - Scholarpedia articles on:
    - [Reinforcement Learning](http://www.scholarpedia.org/article/Reinforcement_learning)
    - [Temporal Difference Learning](http://www.scholarpedia.org/article/Temporal_difference_learning)
  - Repository with useful [MATLAB software, presentations, and demo videos](http://busoniu.net/repository.php)
  - [Bibliography on reinforcement learning](http://liinwww.ira.uka.de/bibliography/Neural/reinforcement.learning.html)
  - UC Berkeley - CS 294: Deep Reinforcement Learning, Fall 2015 (John Schulman, Pieter Abbeel) [[Class Website]](http://rll.berkeley.edu/deeprlcourse/)
  - Travis DeWolf's [blog posts on reinforcement learning, parts 1-4](https://studywolf.wordpress.com/2012/11/25/reinforcement-learning-q-learning-and-exploration/)
  - [The Arcade Learning Environment](http://www.arcadelearningenvironment.org/) - an Atari 2600 games environment for developing AI agents
  - Andrej Karpathy's [Deep Reinforcement Learning: Pong from Pixels](http://karpathy.github.io/2016/05/31/rl/)
  - [Demystifying Deep Reinforcement Learning](https://www.nervanasys.com/demystifying-deep-reinforcement-learning/)
  - [Let's make a DQN](https://jaromiru.com/2016/09/27/lets-make-a-dqn-theory/)
  - Arthur Juliani's [Simple Reinforcement Learning with Tensorflow, parts 0-8](https://medium.com/emergent-future/simple-reinforcement-learning-with-tensorflow-part-0-q-learning-with-tables-and-neural-networks-d195264329d0#.78km20i8r)
  - [Practical_RL](https://github.com/yandexdataschool/Practical_RL) - a GitHub-based course on reinforcement learning in the wild (lectures, coding labs, projects)
  - [RLenv.directory: explore and find new reinforcement learning environments](https://rlenv.directory/)
  - Katja Hofmann's NeurIPS '19 talk - [Reinforcement Learning: Past, Present, and Future Perspectives](https://slideslive.com/38922817/reinforcement-learning-past-present-and-future-perspectives)
  - [How to Structure, Organize, Track and Manage Reinforcement Learning (RL) Projects](https://neptune.ai/blog/how-to-structure-organize-track-and-manage-reinforcement-learning-rl-projects)
  - [RL cheatsheet](https://alxthm.com/assets/pdf/rl-cheatsheet.pdf) - a summary of some important concepts and algorithms in RL

## Online Demos
 - [Real-world demonstrations of reinforcement learning](http://www.dcsc.tudelft.nl/~robotics/media.html)
 - [Deep Q-Learning Demo](http://cs.stanford.edu/people/karpathy/convnetjs/demo/rldemo.html) - a deep Q-learning demonstration using ConvNetJS
 - [Deep Q-Learning with TensorFlow](https://github.com/nivwusquorum/tensorflow-deepq) - a deep Q-learning demonstration using Google TensorFlow
 - [Reinforcement Learning Demo](http://cs.stanford.edu/people/karpathy/reinforcejs/) - a reinforcement learning demo using reinforcejs by Andrej Karpathy

## Open Source Reinforcement Learning Platforms
- [OpenAI gym](https://github.com/openai/gym) - a toolkit for developing and comparing reinforcement learning algorithms
- [OpenAI universe](https://github.com/openai/universe) - a software platform for measuring and training an AI's general intelligence across games, websites, and other applications
- [DeepMind Lab](https://github.com/deepmind/lab) - a customisable 3D platform for agent-based AI research
- [Project Malmo](https://github.com/Microsoft/malmo) - Microsoft's platform for AI experimentation and research built on top of Minecraft
- [ViZDoom](https://github.com/Marqt/ViZDoom) - a Doom-based research platform for reinforcement learning from raw visual information
- [Retro Learning Environment](https://github.com/nadavbh12/Retro-Learning-Environment) - an AI platform for reinforcement learning based on video game emulators. Currently supports SNES and Sega Genesis. Compatible with OpenAI gym.
- [torch-twrl](https://github.com/twitter/torch-twrl) - a package that enables reinforcement learning in Torch, by Twitter
- [UETorch](https://github.com/facebook/UETorch) - a Torch plugin for Unreal Engine 4, by Facebook
- [TorchCraft](https://github.com/TorchCraft/TorchCraft) - connecting Torch to StarCraft
- [garage](https://github.com/rlworkgroup/garage) - a framework for reproducible reinforcement learning research, fully compatible with OpenAI Gym and DeepMind Control Suite (successor to rllab)
- [TensorForce](https://github.com/reinforceio/tensorforce) - practical deep reinforcement learning on TensorFlow, with Gitter support and OpenAI Gym/Universe/DeepMind Lab integration
- [tf-TRFL](https://github.com/deepmind/trfl/) - a library built on top of TensorFlow that exposes several useful building blocks for implementing reinforcement learning agents
- [OpenAI lab](https://github.com/kengz/openai_lab) - an experimentation system for reinforcement learning using OpenAI Gym, TensorFlow, and Keras
- [keras-rl](https://github.com/matthiasplappert/keras-rl) - state-of-the-art deep reinforcement learning algorithms in Keras, designed for compatibility with OpenAI Gym
- [BURLAP](http://burlap.cs.brown.edu) - the Brown-UMBC Reinforcement Learning and Planning library, written in Java
- [MAgent](https://github.com/geek-ai/MAgent) - a platform for many-agent reinforcement learning
- [Ray RLlib](http://ray.readthedocs.io/en/latest/rllib.html) - a reinforcement learning library that aims to provide both performance and composability
- [SLM Lab](https://github.com/kengz/SLM-Lab) - a research framework for deep reinforcement learning using Unity, OpenAI Gym, PyTorch, and TensorFlow
- [Unity ML Agents](https://github.com/Unity-Technologies/ml-agents) - create reinforcement learning environments using the Unity editor
- [Intel Coach](https://github.com/NervanaSystems/coach) - a Python reinforcement learning research framework containing implementations of many state-of-the-art algorithms
- [Microsoft AirSim](https://microsoft.github.io/AirSim/reinforcement_learning/) - an open source simulator from Microsoft AI & Research, built on Unreal
Engine, for autonomous vehicles.
- [DI-engine](https://github.com/opendilab/DI-engine) - a generalized decision intelligence engine. It supports most basic deep reinforcement learning (DRL) algorithms such as DQN, PPO, and SAC, as well as domain-specific algorithms such as QMIX for multi-agent RL, GAIL for inverse RL, and RND for exploration problems.
- [Jumanji](https://github.com/instadeepai/jumanji) - a suite of industry-driven, hardware-accelerated reinforcement learning environments written in JAX

## Key contributors 👩‍💻👨‍💻:

<p align="center"><a href="https://github.com/aikorea/awesome-rl">
  <img src="https://contributors-img.web.app/image?repo=aikorea/awesome-rl" />
</a></p>

---

# Awesome Reinforcement Learning Quick-Start Guide

**Note**: `awesome-rl` is not itself a single codebase or software tool; it is a **curated list of reinforcement learning (RL) resources** that collects theory courses, books, papers, open source libraries, tutorials, and demo platforms.

This guide shows how to use the core resources on the list (taking the classic Sutton & Barto textbook code and mainstream deep learning frameworks as examples) to set up an RL environment and start experimenting.

## Prerequisites

Before starting, make sure your development environment meets the following requirements:

*   **Operating system**: Linux (Ubuntu 20.04+ recommended), macOS, or Windows (WSL2 recommended).
*   **Python version**: Python 3.8 - 3.10 recommended (some older RL libraries do not yet fully support the newest Python releases).
*   **Base tooling**:
    *   `git`: for cloning repositories.
    *   `pip` or `conda`: package management.
    *   **A deep learning framework** (pick one; PyTorch recommended):
        *   PyTorch
        *   TensorFlow
        *   JAX (for newer environment suites such as Jumanji)

> **Mirror tip**:
> If PyPI is slow from your region, a mirror such as the Tsinghua mirror speeds up installs:
> ```bash
> pip install -i https://pypi.tuna.tsinghua.edu.cn/simple <package_name>
> ```

## Installation

Since `awesome-rl` links to many independent projects, here are the two most broadly useful entry points, plus an off-the-shelf option.

### Option A: Classic introduction (Sutton & Barto textbook code)
Best for beginners learning the fundamental RL algorithms (dynamic programming, Monte Carlo, Q-learning).

1.  **Clone the repository**:
    ```bash
    git clone https://github.com/ShangtongZhang/reinforcement-learning-an-introduction.git
    cd reinforcement-learning-an-introduction
    ```

2.  **Install dependencies**:
    ```bash
    pip install -r requirements.txt
    # With a mirror:
    # pip install -i https://pypi.tuna.tsinghua.edu.cn/simple -r requirements.txt
    ```

### Option B: Deep RL in practice (PyTorch-based)
Best for developers who want to jump straight into deep RL algorithms such as DQN, PPO, and A3C.

1.  **Clone the repository**:
    ```bash
    git clone https://github.com/ShangtongZhang/DeepRL.git
    cd DeepRL
    ```

2.  **Install dependencies**:
    The project typically needs `gym` (or `gymnasium`) and `pytorch`.
    ```bash
    pip install torch gymnasium matplotlib tqdm
    # With a mirror:
    # pip install -i https://pypi.tuna.tsinghua.edu.cn/simple torch gymnasium matplotlib tqdm
    ```
    *(Note: if you hit `gym` version errors, install `gymnasium` and adjust the imports accordingly, or pin the specific versions the project requires.)*

### Option C: Industrial-strength baselines (OpenAI Baselines / Stable Baselines3)
If you need rigorously tested algorithm implementations:

```bash
pip install "stable-baselines3[extra]"
# With a mirror:
# pip install -i https://pypi.tuna.tsinghua.edu.cn/simple "stable-baselines3[extra]"
```

## Basic usage

The following examples show how to run a minimal RL experiment.

### Example 1: Classic Q-learning (Option A)

From the repository root, run one of the book's classic examples, such as the grid world or cliff walking:

```bash
# Run the Q-learning example from Chapter 6 (Cliff Walking)
python chapter06/cliff_walking.py
```
*A window will show how the policy or the cumulative reward curve evolves during training.*

### Example 2: Deep Q-Network (DQN) on Atari (Option B)

In the `DeepRL` directory, start DQN training with a single command and let the agent learn to play "Pong":

```bash
# Usage: python main.py --alg DQN --game Pong
python main.py --alg DQN --game Pong
```

*   **Arguments**:
    *   `--alg`: algorithm to use (e.g. `DQN`, `A2C`, `PPO`, `DDPG`).
    *   `--game`: environment to use (requires `atari-py` or `gymnasium[atari]`).

### Example 3: Quick training with Stable Baselines3 (Option C)

This is the most concise route: no repository to clone, just a short script:

```python
import gymnasium as gym
from stable_baselines3 import PPO

# 1. Create the environment
env = gym.make("CartPole-v1")

# 2. Initialize the algorithm (PPO)
model = PPO("MlpPolicy", env, verbose=1)

# 3. Train the model (10,000 steps)
model.learn(total_timesteps=10000)

# 4. Watch the trained agent
obs, info = env.reset()
for i in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    env.render()
    if terminated or truncated:
        obs, info = env.reset()

env.close()
```

## Where to go next

After these basic experiments, return to the `awesome-rl` list for more specialized resources:
*   **Theory**: David Silver's UCL lecture videos or Sutton's free ebook.
*   **Specific domains**: the dedicated libraries under the Robotics and Game Playing sections (e.g. `Jumanji`, `Black-DROPS`).
*   **Reproducing papers**: the Papers / Thesis section links classic papers to their code implementations.

---

Consider a graduate robotics lab building an RL-based autonomous manipulator that needs to reproduce classic algorithms quickly and pick an open source framework for further development.

### Without awesome-rl
- **Slow resource discovery**: team members search Google Scholar, GitHub, and forums blindly, spending weeks sifting through outdated tutorials and dead code links.
- **Theory-practice gap**: it is hard to find code that maps cleanly onto a classic textbook such as Sutton's Reinforcement Learning: An Introduction, so derivations cannot be validated quickly.
- **Costly framework trial-and-error**: with scattered platforms (PyBrain, RLPy, Maja) and no side-by-side comparison, it is easy to pick the wrong tool and redo early work.
- **No application baselines**: when porting algorithms to a concrete task such as manipulator control, there is no comparable open source example, so the simulation environment must be built from scratch.

### With awesome-rl
- **One-stop access to vetted resources**: the categorized index points straight to current papers, surveys, and community-tested codebases, compressing the literature survey from weeks to days.
- **Seamless code reproduction**: the textbook's companion implementations (Python, Julia, and others) are linked directly, keeping theory and engineering consistent.
- **Informed framework selection**: the descriptions under "Open Source Reinforcement Learning Platforms" make it quick to evaluate and choose a framework suited to manipulator control instead of reinventing the wheel.
- **Mature examples to borrow from**: the demos under the Robotics and Control sections provide simulation code that can be partially reused, speeding up prototyping.

awesome-rl structures high-quality resources from across the community so that researchers can spend their time on algorithms rather than on information gathering.

---

**Requirements note**: awesome-rl is not an executable tool or codebase but a curated list of reinforcement learning resources: books, papers, lecture videos, and links to independent open source projects (OpenAI Baselines, PyTorch DeepRL, ChainerRL, and so on). The list itself therefore has no particular operating system, GPU, memory, Python
version, or dependency requirements. To run a specific algorithm or example from the list, consult the documentation of the corresponding sub-project for its environment setup.

## FAQ

- **Where can I download the latest draft or final version of Reinforcement Learning: An Introduction, 2nd edition?**
  The final version of the second edition is available at http://incompleteideas.net/book/RLbook2018.pdf. Drafts from January and March 2018 circulated earlier, but the final link is recommended. ([source](https://github.com/aikorea/awesome-rl/issues/54))
- **Is BeliefBox a Bayesian reinforcement learning library?**
  Yes. Although BeliefBox is described primarily as a toolkit for statistical inference and decision making, it implements BVMM, Bayesian Monte Carlo, Bayesian multi-task inverse RL, and related algorithms. ([source](https://github.com/aikorea/awesome-rl/issues/27))
- **Where can I find Python implementations of the examples in Reinforcement Learning: An Introduction?**
  The book officially ships only dated MATLAB and Lisp code. Community member ShangtongZhang has reproduced all examples of the second-edition draft in clean, well-documented code: https://github.com/ShangtongZhang/reinforcement-learning-an-introduction. ([source](https://github.com/aikorea/awesome-rl/issues/14))
- **What is the correct publication year of "Autonomous helicopter control using Reinforcement Learning Policy Search Methods"?**
  2001, not the 2011 once shown in the list; the paper by Bagnell et al. appeared at ICRA 2001. ([source](https://github.com/aikorea/awesome-rl/issues/39))
- **The link for "Short introduction to some Reinforcement Learning algorithms" is dead; what is the correct address?**
  The original link is broken, and a suggested replacement (https://sites.ualberta.ca/~szepesva/papers/RLAlgsInMDPs.pdf) points to a different document. The maintainers acknowledged the breakage; until the repository is updated, search for the latest location of Szepesvari's paper on RL
algorithms in MDPs, or watch the repository for a fix. ([source](https://github.com/aikorea/awesome-rl/issues/23))
- **The homepage link for the ebook of Sutton's book (webdocs.cs.ualberta.ca) is unreachable; what now?**
  The link http://webdocs.cs.ualberta.ca/~sutton/book/ebook/the-book.html is confirmed dead because the book's official hosting moved. Go directly to incompleteideas.net for the latest version, or watch the project's issues for an updated mirror. ([source](https://github.com/aikorea/awesome-rl/issues/69))
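The FAQ above repeatedly points to the Python implementations of Sutton & Barto's examples. As a self-contained orientation, here is a minimal sketch of tabular Q-learning on the cliff-walking task (the textbook's Example 6.6): the names, the 4×12 grid layout, and all hyperparameters below are illustrative and are not taken from that repository.

```python
import random

ROWS, COLS = 4, 12
START, GOAL = (3, 0), (3, 11)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, a):
    """One move on the cliff-walking grid: -1 per step; falling off the cliff costs -100 and resets."""
    r = min(max(state[0] + ACTIONS[a][0], 0), ROWS - 1)
    c = min(max(state[1] + ACTIONS[a][1], 0), COLS - 1)
    if r == 3 and 0 < c < 11:  # bottom-row cells between start and goal are the cliff
        return START, -100.0
    return (r, c), -1.0

def q_learning(episodes=500, alpha=0.5, gamma=1.0, epsilon=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    states = [(r, c) for r in range(ROWS) for c in range(COLS)]
    q = {(s, a): 0.0 for s in states for a in range(4)}
    for _ in range(episodes):
        s = START
        while s != GOAL:
            # epsilon-greedy action selection
            a = rng.randrange(4) if rng.random() < epsilon else max(range(4), key=lambda x: q[(s, x)])
            s2, reward = step(s, a)
            # off-policy update: bootstrap from the greedy value of the next state
            best_next = max(q[(s2, x)] for x in range(4))
            q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
            s = s2
    return q

def greedy_return(q, limit=100):
    """Roll out the greedy policy from START and sum the rewards."""
    s, total, n = START, 0.0, 0
    while s != GOAL and n < limit:
        a = max(range(4), key=lambda x: q[(s, x)])
        s, r = step(s, a)
        total += r
        n += 1
    return total

if __name__ == "__main__":
    print("greedy return after training:", greedy_return(q_learning()))
```

The optimal path (up, eleven steps right, down) yields a return of -13; after a few hundred episodes the greedy policy derived from the learned Q-table should approach this, illustrating why Q-learning, unlike SARSA, learns the risky path along the cliff edge.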