[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-hanjuku-kaso--awesome-offline-rl":3,"tool-hanjuku-kaso--awesome-offline-rl":65},[4,23,32,40,49,57],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":22},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",85267,2,"2026-04-18T11:00:28",[13,14,15,16,17,18,19,20,21],"图像","数据工具","视频","插件","Agent","其他","语言模型","开发框架","音频","ready",{"id":24,"name":25,"github_repo":26,"description_zh":27,"stars":28,"difficulty_score":29,"last_commit_at":30,"category_tags":31,"status":22},5784,"funNLP","fighting41love\u002FfunNLP","funNLP 是一个专为中文自然语言处理（NLP）打造的超级资源库，被誉为“NLP 民工的乐园”。它并非单一的软件工具，而是一个汇集了海量开源项目、数据集、预训练模型和实用代码的综合性平台。\n\n面对中文 NLP 领域资源分散、入门门槛高以及特定场景数据匮乏的痛点，funNLP 提供了“一站式”解决方案。这里不仅涵盖了分词、命名实体识别、情感分析、文本摘要等基础任务的标准工具，还独特地收录了丰富的垂直领域资源，如法律、医疗、金融行业的专用词库与数据集，甚至包含古诗词生成、歌词创作等趣味应用。其核心亮点在于极高的全面性与实用性，从基础的字典词典到前沿的 BERT、GPT-2 模型代码，再到高质量的标注数据和竞赛方案，应有尽有。\n\n无论是刚刚踏入 NLP 领域的学生、需要快速验证想法的算法工程师，还是从事人工智能研究的学者，都能在这里找到急需的“武器弹药”。对于开发者而言，它能大幅减少寻找数据和复现模型的时间；对于研究者，它提供了丰富的基准测试资源和前沿技术参考。funNLP 以开放共享的精神，极大地降低了中文自然语言处理的开发与研究成本，是中文 AI 社区不可或缺的宝藏仓库。",79857,1,"2026-04-08T20:11:31",[19,14,18],{"id":33,"name":34,"github_repo":35,"description_zh":36,"stars":37,"difficulty_score":29,"last_commit_at":38,"category_tags":39,"status":22},5773,"cs-video-courses","Developer-Y\u002Fcs-video-courses","cs-video-courses 
是一个精心整理的计算机科学视频课程清单，旨在为自学者提供系统化的学习路径。它汇集了全球知名高校（如加州大学伯克利分校、新南威尔士大学等）的完整课程录像，涵盖从编程基础、数据结构与算法，到操作系统、分布式系统、数据库等核心领域，并深入延伸至人工智能、机器学习、量子计算及区块链等前沿方向。\n\n面对网络上零散且质量参差不齐的教学资源，cs-video-courses 解决了学习者难以找到成体系、高难度大学级别课程的痛点。该项目严格筛选内容，仅收录真正的大学层级课程，排除了碎片化的简短教程或商业广告，确保用户能接触到严谨的学术内容。\n\n这份清单特别适合希望夯实计算机基础的开发者、需要补充特定领域知识的研究人员，以及渴望像在校生一样系统学习计算机科学的自学者。其独特的技术亮点在于分类极其详尽，不仅包含传统的软件工程与网络安全，还细分了生成式 AI、大语言模型、计算生物学等新兴学科，并直接链接至官方视频播放列表，让用户能一站式获取高质量的教育资源，免费享受世界顶尖大学的课堂体验。",79792,"2026-04-08T22:03:59",[18,13,14,20],{"id":41,"name":42,"github_repo":43,"description_zh":44,"stars":45,"difficulty_score":46,"last_commit_at":47,"category_tags":48,"status":22},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,3,"2026-04-04T04:44:48",[17,13,20,19,18],{"id":50,"name":51,"github_repo":52,"description_zh":53,"stars":54,"difficulty_score":46,"last_commit_at":55,"category_tags":56,"status":22},519,"PaddleOCR","PaddlePaddle\u002FPaddleOCR","PaddleOCR 是一款基于百度飞桨框架开发的高性能开源光学字符识别工具包。它的核心能力是将图片、PDF 等文档中的文字提取出来，转换成计算机可读取的结构化数据，让机器真正“看懂”图文内容。\n\n面对海量纸质或电子文档，PaddleOCR 解决了人工录入效率低、数字化成本高的问题。尤其在人工智能领域，它扮演着连接图像与大型语言模型（LLM）的桥梁角色，能将视觉信息直接转化为文本输入，助力智能问答、文档分析等应用场景落地。\n\nPaddleOCR 适合开发者、算法研究人员以及有文档自动化需求的普通用户。其技术优势十分明显：不仅支持全球 100 多种语言的识别，还能在 Windows、Linux、macOS 等多个系统上运行，并灵活适配 CPU、GPU、NPU 等各类硬件。作为一个轻量级且社区活跃的开源项目，PaddleOCR 
既能满足快速集成的需求，也能支撑前沿的视觉语言研究，是处理文字识别任务的理想选择。",75940,"2026-04-19T21:42:30",[19,13,20,18],{"id":58,"name":59,"github_repo":60,"description_zh":61,"stars":62,"difficulty_score":29,"last_commit_at":63,"category_tags":64,"status":22},3215,"awesome-machine-learning","josephmisiti\u002Fawesome-machine-learning","awesome-machine-learning 是一份精心整理的机器学习资源清单，汇集了全球优秀的机器学习框架、库和软件工具。面对机器学习领域技术迭代快、资源分散且难以甄选的痛点，这份清单按编程语言（如 Python、C++、Go 等）和应用场景（如计算机视觉、自然语言处理、深度学习等）进行了系统化分类，帮助使用者快速定位高质量项目。\n\n它特别适合开发者、数据科学家及研究人员使用。无论是初学者寻找入门库，还是资深工程师对比不同语言的技术选型，都能从中获得极具价值的参考。此外，清单还延伸提供了免费书籍、在线课程、行业会议、技术博客及线下聚会等丰富资源，构建了从学习到实践的全链路支持体系。\n\n其独特亮点在于严格的维护标准：明确标记已停止维护或长期未更新的项目，确保推荐内容的时效性与可靠性。作为机器学习领域的“导航图”，awesome-machine-learning 以开源协作的方式持续更新，旨在降低技术探索门槛，让每一位从业者都能高效地站在巨人的肩膀上创新。",72149,"2026-04-03T21:50:24",[20,18],{"id":66,"github_repo":67,"name":68,"description_en":69,"description_zh":70,"ai_summary_zh":71,"readme_en":72,"readme_zh":73,"quickstart_zh":74,"use_case_zh":75,"hero_image_url":76,"owner_login":77,"owner_name":77,"owner_avatar_url":78,"owner_bio":79,"owner_company":79,"owner_location":79,"owner_email":79,"owner_twitter":79,"owner_website":79,"owner_url":80,"languages":79,"stars":81,"forks":82,"last_commit_at":83,"license":79,"difficulty_score":29,"env_os":84,"env_gpu":85,"env_ram":85,"env_deps":86,"category_tags":89,"github_topics":90,"view_count":10,"oss_zip_url":79,"oss_zip_packed_at":79,"status":22,"created_at":97,"updated_at":98,"faqs":99,"releases":100},9818,"hanjuku-kaso\u002Fawesome-offline-rl","awesome-offline-rl","An index of algorithms for offline reinforcement learning (offline-rl)","awesome-offline-rl 是一个专注于离线强化学习（Offline RL）领域的开源资源索引库。它系统地收集并整理了该方向的高质量研究论文、综述文章、基准测试数据集、开源代码实现以及相关教程和讲座。\n\n在传统强化学习中，智能体通常需要通过与环境实时交互来试错学习，这在医疗、自动驾驶等高风险或高成本场景中往往难以实施。awesome-offline-rl 正是为了解决这一痛点而生，它汇聚了仅利用历史静态数据进行训练和评估的前沿算法与理论，帮助研究者突破对实时交互的依赖，探索更安全、高效的决策模型。\n\n这份资源清单特别适合人工智能研究人员、算法工程师以及对强化学习感兴趣的学生使用。无论是想要快速了解领域全貌的新手，还是致力于推导新理论或复现 SOTA 
模型的资深专家，都能在这里找到所需的文献指引和代码参考。其独特的亮点在于分类极其详尽，不仅涵盖基础的理论与方法，还深入细分到“离策评估”、“上下文多臂老虎机”等具体子领域，并持续更新来自康奈尔大学等顶尖机构的最新成果。通过 awesome-offline-rl，用户可以高效地追踪学术动态，避免在海量文献中迷失方向，是进入离线强化学习世界不可或缺的导航图。","awesome-offline-rl 是一个专注于离线强化学习（Offline RL）领域的开源资源索引库。它系统地收集并整理了该方向的高质量研究论文、综述文章、基准测试数据集、开源代码实现以及相关教程和讲座。\n\n在传统强化学习中，智能体通常需要通过与环境实时交互来试错学习，这在医疗、自动驾驶等高风险或高成本场景中往往难以实施。awesome-offline-rl 正是为了解决这一痛点而生，它汇聚了仅利用历史静态数据进行训练和评估的前沿算法与理论，帮助研究者突破对实时交互的依赖，探索更安全、高效的决策模型。\n\n这份资源清单特别适合人工智能研究人员、算法工程师以及对强化学习感兴趣的学生使用。无论是想要快速了解领域全貌的新手，还是致力于推导新理论或复现 SOTA 模型的资深专家，都能在这里找到所需的文献指引和代码参考。其独特的亮点在于分类极其详尽，不仅涵盖基础的理论与方法，还深入细分到“离策评估”、“上下文多臂老虎机”等具体子领域，并持续更新来自康奈尔大学等顶尖机构的最新成果。通过 awesome-offline-rl，用户可以高效地追踪学术动态，避免在海量文献中迷失方向，是进入离线强化学习世界不可或缺的导航图。","# awesome-offline-rl\nThis is a collection of research and review papers for **offline reinforcement learning (offline rl)**. Feel free to star and fork.\n\n\nMaintainers:\n- [Haruka Kiyohara](https:\u002F\u002Fsites.google.com\u002Fview\u002Fharukakiyohara) (Cornell University)\n- [Yuta Saito](https:\u002F\u002Fusait0.com\u002Fen\u002F) (Hanjuku-kaso Co., Ltd. \u002F Cornell University)\n\nWe are looking for more contributors and maintainers! Please feel free to open [pull requests](https:\u002F\u002Fgithub.com\u002Fusaito\u002Fawesome-offline-rl\u002Fpulls).\n\n```\nformat:\n- [title](paper link) [links]\n  - author1, author2, and author3. 
arXiv\u002Fconferences\u002Fjournals\u002F, year.\n```\n\nFor any questions, feel free to contact: hk844@cornell.edu\n\n## Table of Contents\n- [Papers](https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl\u002Ftree\u002Fmain#papers)\n  - [Review\u002FSurvey\u002FPosition Papers](https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl\u002Ftree\u002Fmain#reviewsurveyposition-papers)\n    - [Offline RL](https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl\u002Ftree\u002Fmain#offline-rl)\n    - [Off-Policy Evaluation and Learning](https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl\u002Ftree\u002Fmain#off-policy-evaluation-and-learning)\n    - [Related Reviews](https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl\u002Ftree\u002Fmain#related-reviews)\n  - [Offline RL: Theory\u002FMethods](https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl\u002Ftree\u002Fmain#offline-rl-theorymethods)\n  - [Offline RL: Benchmarks\u002FExperiments](https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl\u002Ftree\u002Fmain#offline-rl-benchmarksexperiments)\n  - [Offline RL: Applications](https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl\u002Ftree\u002Fmain#offline-rl-applications)\n  - [Off-Policy Evaluation and Learning: Theory\u002FMethods](https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl#off-policy-evaluation-and-learning-theorymethods)\n    - [Off-Policy Evaluation: Contextual Bandits](https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl\u002Ftree\u002Fmain#off-policy-evaluation-contextual-bandits)\n    - [Off-Policy Evaluation: Reinforcement Learning](https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl\u002Ftree\u002Fmain#off-policy-evaluation-reinforcement-learning)\n    - [Off-Policy 
Learning](https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl\u002Ftree\u002Fmain#off-policy-learning)\n  - [Off-Policy Evaluation and Learning: Benchmarks\u002FExperiments](https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl#off-policy-evaluation-and-learning-benchmarksexperiments)\n  - [Off-Policy Evaluation and Learning: Applications](https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl#off-policy-evaluation-and-learning-applications)\n- [Open Source Software\u002FImplementations](https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl\u002Ftree\u002Fmain#open-source-softwareimplementations)\n- [Blog\u002FPodcast](https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl\u002Ftree\u002Fmain#blogpodcast)\n  - [Blog](https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl\u002Ftree\u002Fmain#blog)\n  - [Podcast](https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl\u002Ftree\u002Fmain#podcast)\n- [Related Workshops](https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl\u002Ftree\u002Fmain#related-workshops)\n- [Tutorials\u002FTalks\u002FLectures](https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl\u002Ftree\u002Fmain#tutorialstalkslectures)\n\n## Papers\n\n### Review\u002FSurvey\u002FPosition Papers\n#### Offline RL\n- [Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.15217)\n  - Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. 
Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, and Dylan Hadfield-Menell. arXiv, 2023.\n- [A Survey on Offline Model-Based Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.03360)\n  - Haoyang He. arXiv, 2023.\n- [Foundation Models for Decision Making: Problems, Methods, and Opportunities](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.04129)\n  - Sherry Yang, Ofir Nachum, Yilun Du, Jason Wei, Pieter Abbeel, Dale Schuurmans. arXiv, 2023.\n- [A Survey on Offline Reinforcement Learning: Taxonomy, Review, and Open Problems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.01387)\n  - Rafael Figueiredo Prudencio, Marcos R. O. A. Maximo, and Esther Luna Colombini. arXiv, 2022.\n- [Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2005.01643)\n  - Sergey Levine, Aviral Kumar, George Tucker, and Justin Fu. arXiv, 2020.\n\n#### Off-Policy Evaluation and Learning\n- [A Review of Off-Policy Evaluation in Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.06355)\n  - Masatoshi Uehara, Chengchun Shi, and Nathan Kallus. arXiv, 2022.\n\n#### Related Reviews\n- [On the Opportunities and Challenges of Offline Reinforcement Learning for Recommender Systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.11336)\n  - Xiaocong Chen, Siyu Wang, Julian McAuley, Dietmar Jannach, and Lina Yao. arXiv, 2023.\n- [Understanding Reinforcement Learning Algorithms: The Progress from Basic Q-learning to Proximal Policy Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.00026)\n  - Mohamed-Amine Chadi and Hajar Mousannif. arXiv, 2023.\n- [Offline Evaluation for Reinforcement Learning-based Recommendation: A Critical Issue and Some Alternatives](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.00993)\n  - Romain Deffayet, Thibaut Thonet, Jean-Michel Renders, and Maarten de Rijke. 
arXiv, 2023.\n- [A Survey on Transformers in Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.03044)\n  - Wenzhe Li, Hao Luo, Zichuan Lin, Chongjie Zhang, Zongqing Lu, and Deheng Ye. arXiv, 2023.\n- [Deep Reinforcement Learning: Opportunities and Challenges](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.11296)\n  - Yuxi Li. arXiv, 2022.\n- [A Survey on Model-based Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.09328)\n  - Fan-Ming Luo, Tian Xu, Hang Lai, Xiong-Hui Chen, Weinan Zhang, and Yang Yu. arXiv, 2022.\n- [Survey on Fair Reinforcement Learning: Theory and Practice](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.10032)\n  - Pratik Gajane, Akrati Saxena, Maryam Tavakol, George Fletcher, and Mykola Pechenizkiy. arXiv, 2022.\n- [Accelerating Offline Reinforcement Learning Application in Real-Time Bidding and Recommendation: Potential Use of Simulation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.08331)\n  - Haruka Kiyohara, Kosuke Kawakami, and Yuta Saito. arXiv, 2021.\n- [A Survey of Generalisation in Deep Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.09794)\n  - Robert Kirk, Amy Zhang, Edward Grefenstette, and Tim Rocktäschel. arXiv, 2021.\n\n### Offline RL: Theory\u002FMethods\n- [Value-Aided Conditional Supervised Learning for Offline RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.02017)\n  - Jeonghye Kim, Suyoung Lee, Woojun Kim, and Youngchul Sung. arXiv, 2024.\n- [Towards an Information Theoretic Framework of Context-Based Offline Meta-Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.02429)\n  - Lanqing Li, Hai Zhang, Xinyu Zhang, Shatong Zhu, Junqiao Zhao, and Pheng-Ann Heng. arXiv, 2024.\n- [DiffStitch: Boosting Offline Reinforcement Learning with Diffusion-based Trajectory Stitching](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.02439)\n  - Guanghe Li, Yixiang Shan, Zhengbang Zhu, Ting Long, and Weinan Zhang. 
arXiv, 2024.\n- [Deep autoregressive density nets vs neural ensembles for model-based offline reinforcement learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.02858)\n  - Abdelhakim Benechehab, Albert Thomas, and Balázs Kégl. arXiv, 2024.\n- [Context-Former: Stitching via Latent Conditioned Sequence Modeling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.16452)\n  - Ziqi Zhang, Jingzehua Xu, Zifeng Zhuang, Jinxin Liu, and Donglin wang. arXiv, 2024.\n- [Adversarially Trained Actor Critic for offline CMDPs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.00629)\n  - Honghao Wei, Xiyue Peng, Xin Liu, and Arnob Ghosh. arXiv, 2024.\n- [Optimistic Model Rollouts for Pessimistic Offline Policy Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.05899)\n  - Yuanzhao Zhai, Yiying Li, Zijian Gao, Xudong Gong, Kele Xu, Dawei Feng, Ding Bo, and Huaimin Wang. arXiv, 2024.\n- [Solving Continual Offline Reinforcement Learning with Decision Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.08478)\n  - Kaixin Huang, Li Shen, Chen Zhao, Chun Yuan, and Dacheng Tao. arXiv, 2024.\n- [MoMA: Model-based Mirror Ascent for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.11380)\n  - Mao Hong, Zhiyue Zhang, Yue Wu, and Yanxun Xu. arXiv, 2024.\n- [Reframing Offline Reinforcement Learning as a Regression Problem](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.11630)\n  - Prajwal Koirala and Cody Fleming. arXiv, 2024.\n- [Efficient Two-Phase Offline Deep Reinforcement Learning from Preference Feedback](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.00330)\n  - Yinglun Xu and Gagandeep Singh. arXiv, 2024.\n- [Policy-regularized Offline Multi-objective Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.02244)\n  - Qian Lin, Chao Yu, Zongkai Liu, and Zifan Wu. arXiv, 2024.\n- [Differentiable Tree Search in Latent State Space](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.11660)\n  - Dixant Mittal and Wee Sun Lee. 
arXiv, 2024.\n- [Learning from Sparse Offline Datasets via Conservative Density Estimation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.08819)\n  - Zhepeng Cen, Zuxin Liu, Zitong Wang, Yihang Yao, Henry Lam, and Ding Zhao. ICLR, 2024.\n- [Safe Offline Reinforcement Learning with Feasibility-Guided Diffusion Model](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.10700)\n  - Yinan Zheng, Jianxiong Li, Dongjie Yu, Yujie Yang, Shengbo Eben Li, Xianyuan Zhan, and Jingjing Liu. ICLR, 2024.\n- [PDiT: Interleaving Perception and Decision-making Transformers for Deep Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.15863)\n  - Hangyu Mao, Rui Zhao, Ziyue Li, Zhiwei Xu, Hao Chen, Yiqun Chen, Bin Zhang, Zhen Xiao, Junge Zhang, and Jiangjin Yin. AAMAS, 2024.\n- [Critic-Guided Decision Transformer for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.13716)\n  - Yuanfu Wang, Chao Yang, Ying Wen, Yu Liu, and Yu Qiao. AAAI, 2024.\n- [CUDC: A Curiosity-Driven Unsupervised Data Collection Method with Adaptive Temporal Distances for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.12191)\n  - Chenyu Sun, Hangwei Qian, and Chunyan Miao. AAAI, 2024.\n- [Neural Network Approximation for Pessimistic Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.11863)\n  - Di Wu, Yuling Jiao, Li Shen, Haizhao Yang, and Xiliang Lu. AAAI, 2024.\n- [A Perspective of Q-value Estimation on Offline-to-Online Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.07685)\n  - Yinmin Zhang, Jie Liu, Chuming Li, Yazhe Niu, Yaodong Yang, Yu Liu, and Wanli Ouyang. AAAI, 2024.\n- [The Generalization Gap in Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.05742)\n  - Ishita Mediratta, Qingfei You, Minqi Jiang, and Roberta Raileanu. 
arXiv, 2023.\n- [Decoupling Meta-Reinforcement Learning with Gaussian Task Contexts and Skills](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.06518)\n  - Hongcai He, Anjie Zhu, Shuang Liang, Feiyu Chen, and Jie Shao. arXiv, 2023.\n- [MICRO: Model-Based Offline Reinforcement Learning with a Conservative Bellman Operator](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.03991)\n  - Xiao-Yin Liu, Xiao-Hu Zhou, Guo-Tao Li, Hao Li, Mei-Jiang Gui, Tian-Yu Xiang, De-Xing Huang, and Zeng-Guang Hou. arXiv, 2023.\n- [Model-Based Epistemic Variance of Values for Risk-Aware Policy Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.04386)\n  - Carlos E. Luis, Alessandro G. Bottero, Julia Vinogradska, Felix Berkenkamp, and Jan Peters. arXiv, 2023.\n- [Using Curiosity for an Even Representation of Tasks in Continual Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.03177)\n  - Pankayaraj Pathmanathan, Natalia Díaz-Rodríguez, and Javier Del Ser. arXiv, 2023.\n- [Projected Off-Policy Q-Learning (POP-QL) for Stabilizing Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.14885)\n  - Melrose Roderick, Gaurav Manek, Felix Berkenkamp, and J. Zico Kolter. arXiv, 2023.\n- [Offline Data Enhanced On-Policy Policy Gradient with Provable Guarantees](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.08384)\n  - Yifei Zhou, Ayush Sekhari, Yuda Song, and Wen Sun. arXiv, 2023.\n- [Switch Trajectory Transformer with Distributional Value Approximation for Multi-Task Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.07413)\n  - Qinjie Lin, Han Liu, and Biswa Sengupta. arXiv, 2023.\n- [Hierarchical Decision Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.10447)\n  - André Correia and Luís A. Alexandre. arXiv, 2023.\n- [Prompt-Tuning Decision Transformer with Preference Ranking](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.09648)\n  - Shengchao Hu, Li Shen, Ya Zhang, and Dacheng Tao. 
arXiv, 2023.\n- [Context Shift Reduction for Offline Meta-Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.03695)\n  - Yunkai Gao, Rui Zhang, Jiaming Guo, Fan Wu, Qi Yi, Shaohui Peng, Siming Lan, Ruizhi Chen, Zidong Du, Xing Hu, Qi Guo, Ling Li, and Yunji Chen. arXiv, 2023.\n- [Uni-O4: Unifying Online and Offline Deep Reinforcement Learning with Multi-Step On-Policy Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.03351)\n  - Kun Lei, Zhengmao He, Chenhao Lu, Kaizhe Hu, Yang Gao, and Huazhe Xu. arXiv, 2023.\n- [Score Models for Offline Goal-Conditioned Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.02013)\n  - Harshit Sikchi, Rohan Chitnis, Ahmed Touati, Alborz Geramifard, Amy Zhang, and Scott Niekum. arXiv, 2023.\n- [Offline RL with Observation Histories: Analyzing and Improving Sample Complexity](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.20663)\n  - Joey Hong, Anca Dragan, and Sergey Levine. arXiv, 2023.\n- [Expressive Modeling Is Insufficient for Offline RL: A Tractable Inference Perspective](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.00094)\n  - Xuejie Liu, Anji Liu, Guy Van den Broeck, and Yitao Liang. arXiv, 2023.\n- [Rethinking Decision Transformer via Hierarchical Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.00267)\n  - Yi Ma, Chenjun Xiao, Hebin Liang, and Jianye Hao. arXiv, 2023.\n- [Unleashing the Power of Pre-trained Language Models for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.20587)\n  - Ruizhe Shi, Yuyao Liu, Yanjie Ze, Simon S. Du, and Huazhe Xu. arXiv, 2023.\n- [GOPlan: Goal-conditioned Offline Reinforcement Learning by Planning with Learned Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.20025)\n  - Mianchu Wang, Rui Yang, Xi Chen, and Meng Fang. 
arXiv, 2023.\n- [SERA: Sample Efficient Reward Augmentation in offline-to-online Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.19805)\n  - Ziqi Zhang, Xiao Xiong, Zifeng Zhuang, Jinxin Liu, and Donglin Wang. arXiv, 2023.\n- [Bridging Distributionally Robust Learning and Offline RL: An Approach to Mitigate Distribution Shift and Partial Data Coverage](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.18434)\n  - Kishan Panaganti, Zaiyan Xu, Dileep Kalathil, and Mohammad Ghavamzadeh. arXiv, 2023.\n- [Guided Data Augmentation for Offline Reinforcement Learning and Imitation Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.18247)\n  - Nicholas E. Corrado, Yuxiao Qu, John U. Balis, Adam Labiosa, and Josiah P. Hanna. arXiv, 2023.\n- [CROP: Conservative Reward for Model-based Offline Policy Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.17245)\n  - Hao Li, Xiao-Hu Zhou, Xiao-Liang Xie, Shi-Qi Liu, Zhen-Qiu Feng, Xiao-Yin Liu, Mei-Jiang Gui, Tian-Yu Xiang, De-Xing Huang, Bo-Xian Yao, and Zeng-Guang Hou. arXiv, 2023.\n- [Towards Robust Offline Reinforcement Learning under Diverse Data Corruption](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.12955)\n  - Rui Yang, Han Zhong, Jiawei Xu, Amy Zhang, Chongjie Zhang, Lei Han, and Tong Zhang. arXiv, 2023.\n- [Offline Retraining for Online RL: Decoupled Policy Learning to Mitigate Exploration Bias](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.08558)\n  - Max Sobol Mark, Archit Sharma, Fahim Tajwar, Rafael Rafailov, Sergey Levine, and Chelsea Finn. arXiv, 2023.\n- [Boosting Continuous Control with Consistency Policy](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.06343)\n  - Yuhui Chen, Haoran Li, and Dongbin Zhao. arXiv, 2023.\n- [Planning to Go Out-of-Distribution in Offline-to-Online Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.05723)\n  - Trevor McInroe, Stefano V. Albrecht, and Amos Storkey. 
arXiv, 2023.\n- [Reward-Consistent Dynamics Models are Strongly Generalizable for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.05422)\n  - Fan-Ming Luo, Tian Xu, Xingchen Cao, and Yang Yu. arXiv, 2023.\n- [DiffCPS: Diffusion Model based Constrained Policy Search for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.05333)\n  - Longxiang He, Linrui Zhang, Junbo Tan, and Xueqian Wang. arXiv, 2023.\n- [Self-Confirming Transformer for Locally Consistent Online Adaptation in Multi-Agent Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.04579)\n  - Tao Li, Juan Guevara, Xinghong Xie, and Quanyan Zhu. arXiv, 2023.\n- [Learning to Reach Goals via Diffusion](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.02505)\n  - Vineet Jain and Siamak Ravanbakhsh. arXiv, 2023.\n- [Decision ConvFormer: Local Filtering in MetaFormer is Sufficient for Decision Making](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.03022)\n  - Jeonghye Kim, Suyoung Lee, Woojun Kim, and Youngchul Sung. arXiv, 2023.\n- [Consistency Models as a Rich and Efficient Policy Class for Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.16984)\n  - Zihan Ding and Chi Jin. arXiv, 2023.\n- [Pessimistic Nonlinear Least-Squares Value Iteration for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.01380)\n  - Qiwei Di, Heyang Zhao, Jiafan He, and Quanquan Gu. arXiv, 2023.\n- [Reasoning with Latent Diffusion in Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.06599)\n  - Siddarth Venkatraman, Shivesh Khaitan, Ravi Tej Akella, John Dolan, Jeff Schneider, and Glen Berseth. arXiv, 2023.\n- [Hundreds Guide Millions: Adaptive Offline Reinforcement Learning with Expert Guidance](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.01448)\n  - Qisen Yang, Shenzhi Wang, Qihang Zhang, Gao Huang, and Shiji Song. 
arXiv, 2023.\n- [Towards Robust Offline-to-Online Reinforcement Learning via Uncertainty and Smoothness](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.16973)\n  - Xiaoyu Wen, Xudong Yu, Rui Yang, Chenjia Bai, and Zhen Wang. arXiv, 2023.\n- [Robust Offline Reinforcement Learning -- Certify the Confidence Interval](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.16631)\n  - Jiarui Yao and Simon Shaolei Du. arXiv, 2023.\n- [Stackelberg Batch Policy Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.16188)\n  - Wenzhuo Zhou and Annie Qu. arXiv, 2023.\n- [H2O+: An Improved Framework for Hybrid Offline-and-Online RL with Dynamics Gaps](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.12716)\n  - Haoyi Niu, Tianying Ji, Bingqi Liu, Haocheng Zhao, Xiangyu Zhu, Jianying Zheng, Pengfei Huang, Guyue Zhou, Jianming Hu, and Xianyuan Zhan. arXiv, 2023.\n- [Q-Transformer: Scalable Offline Reinforcement Learning via Autoregressive Q-Functions](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.10150)\n  - Yevgen Chebotar, Quan Vuong, Alex Irpan, Karol Hausman, Fei Xia, Yao Lu, Aviral Kumar, Tianhe Yu, Alexander Herzog, Karl Pertsch, Keerthana Gopalakrishnan, Julian Ibarz, Ofir Nachum, Sumedh Sontakke, Grecia Salazar, Huong T Tran, Jodilyn Peralta, Clayton Tan, Deeksha Manjunath, Jaspiar Singht, Brianna Zitkovich, Tomas Jackson, Kanishka Rao, Chelsea Finn, and Sergey Levine. arXiv, 2023.\n- [DOMAIN: MilDly COnservative Model-BAsed OfflINe Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.08925)\n  - Xiao-Yin Liu, Xiao-Hu Zhou, Xiao-Liang Xie, Shi-Qi Liu, Zhen-Qiu Feng, Hao Li, Mei-Jiang Gui, Tian-Yu Xiang, De-Xing Huang, and Zeng-Guang Hou. arXiv, 2023.\n- [Guided Online Distillation: Promoting Safe Reinforcement Learning by Offline Demonstration](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.09408)\n  - Jinning Li, Xinyi Liu, Banghua Zhu, Jiantao Jiao, Masayoshi Tomizuka, Chen Tang, and Wei Zhan. 
arXiv, 2023.\n- [Equivariant Data Augmentation for Generalization in Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.07578)\n  - Cristina Pinneri, Sarah Bechtle, Markus Wulfmeier, Arunkumar Byravan, Jingwei Zhang, William F. Whitney, and Martin Riedmiller. arXiv, 2023.\n- [Multi-Objective Decision Transformers for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.16379)\n  - Abdelghani Ghanem, Philippe Ciblat, and Mounir Ghogho. arXiv, 2023.\n- [AlphaStar Unplugged: Large-Scale Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.03526)\n  - Michaël Mathieu, Sherjil Ozair, Srivatsan Srinivasan, Caglar Gulcehre, Shangtong Zhang, Ray Jiang, Tom Le Paine, Richard Powell, Konrad Żołna, Julian Schrittwieser, David Choi, Petko Georgiev, Daniel Toyama, Aja Huang, Roman Ring, Igor Babuschkin, Timo Ewalds, Mahyar Bordbar, Sarah Henderson, Sergio Gómez Colmenarejo, Aäron van den Oord, Wojciech Marian Czarnecki, Nando de Freitas, and Oriol Vinyals. arXiv, 2023.\n- [Exploiting Generalization in Offline Reinforcement Learning via Unseen State Augmentations](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.03882)\n  - Nirbhay Modhe, Qiaozi Gao, Ashwin Kalyan, Dhruv Batra, Govind Thattai, and Gaurav Sukhatme. arXiv, 2023.\n- [PASTA: Pretrained Action-State Transformer Agents](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.10936)\n  - Raphael Boige, Yannis Flet-Berliac, Arthur Flajolet, Guillaume Richard, and Thomas Pierrot. 
arXiv, 2023.\n- [Towards A Unified Agent with Foundation Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.09668)\n  - Norman Di Palo, Arunkumar Byravan, Leonard Hasenclever, Markus Wulfmeier, Nicolas Heess, and Martin Riedmiller. arXiv, 2023.\n- [Goal-Conditioned Predictive Coding as an Implicit Planner for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.03406)\n  - Zilai Zeng, Ce Zhang, Shijie Wang, and Chen Sun. arXiv, 2023.\n- [Offline Reinforcement Learning with Imbalanced Datasets](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.02752)\n  - Li Jiang, Sijie Chen, Jielin Qiu, Haoran Xu, Wai Kin Chan, and Zhao Ding. arXiv, 2023.\n- [LLQL: Logistic Likelihood Q-Learning for Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.02345)\n  - Outongyi Lv, Bingxin Zhou, and Yu Guang Wang. arXiv, 2023.\n- [Elastic Decision Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.02484)\n  - Yueh-Hua Wu, Xiaolong Wang, and Masashi Hamaya. arXiv, 2023.\n- [Prioritized Trajectory Replay: A Replay Memory for Data-driven Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.15503)\n  - Jinyi Liu, Yi Ma, Jianye Hao, Yujing Hu, Yan Zheng, Tangjie Lv, and Changjie Fan. arXiv, 2023.\n- [Is RLHF More Difficult than Standard RL?](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.14111)\n  - Yuanhao Wang, Qinghua Liu, and Chi Jin. arXiv, 2023.\n- [Supervised Pretraining Can Learn In-Context Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.14892)\n  - Jonathan N. Lee, Annie Xie, Aldo Pacchiano, Yash Chandak, Chelsea Finn, Ofir Nachum, and Emma Brunskill. arXiv, 2023.\n- [Fighting Uncertainty with Gradients: Offline Reinforcement Learning via Diffusion Score Matching](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.14079)\n  - H.J. Terry Suh, Glen Chou, Hongkai Dai, Lujie Yang, Abhishek Gupta, and Russ Tedrake. 
arXiv, 2023.\n- [Safe Reinforcement Learning with Dead-Ends Avoidance and Recovery](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.13944)\n  - Xiao Zhang, Hai Zhang, Hongtu Zhou, Chang Huang, Di Zhang, Chen Ye, and Junqiao Zhao. arXiv, 2023.\n- [CLUE: Calibrated Latent Guidance for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.13412)\n  - Jinxin Liu, Lipeng Zu, Li He, and Donglin Wang. arXiv, 2023.\n- [Harnessing Mixed Offline Reinforcement Learning Datasets via Trajectory Weighting](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.13085)\n  - Zhang-Wei Hong, Pulkit Agrawal, Rémi Tachet des Combes, and Romain Laroche.\n- [Beyond OOD State Actions: Supported Cross-Domain Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.12755)\n  - Jinxin Liu, Ziqi Zhang, Zhenyu Wei, Zifeng Zhuang, Yachen Kang, Sibo Gai, and Donglin Wang. arXiv, 2023.\n- [A Primal-Dual-Critic Algorithm for Offline Constrained Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.07818)\n  - Kihyuk Hong, Yuhang Li, and Ambuj Tewari. arXiv, 2023.\n- [HIPODE: Enhancing Offline Reinforcement Learning with High-Quality Synthetic Data from a Policy-Decoupled Approach](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.06329)\n  - Shixi Lian, Yi Ma, Jinyi Liu, Yan Zheng, and Zhaopeng Meng. arXiv, 2023.\n- [Ensemble-based Offline-to-Online Reinforcement Learning: From Pessimistic Learning to Optimistic Exploration](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.06871)\n  - Kai Zhao, Yi Ma, Jinyi Liu, Yan Zheng, and Zhaopeng Meng. arXiv, 2023.\n- [In-Sample Policy Iteration for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.05726)\n  - Xiaohan Hu, Yi Ma, Chenjun Xiao, Yan Zheng, and Zhaopeng Meng. 
arXiv, 2023.\n- [Instructed Diffuser with Temporal Condition Guidance for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.04875)\n  - Jifeng Hu, Yanchao Sun, Sili Huang, SiYuan Guo, Hechang Chen, Li Shen, Lichao Sun, Yi Chang, and Dacheng Tao. arXiv, 2023.\n- [Offline Prioritized Experience Replay](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.05412)\n  - Yang Yue, Bingyi Kang, Xiao Ma, Gao Huang, Shiji Song, and Shuicheng Yan. arXiv, 2023.\n- [Delphic Offline Reinforcement Learning under Nonidentifiable Hidden Confounding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.01157)\n  - Alizée Pace, Hugo Yèche, Bernhard Schölkopf, Gunnar Rätsch, and Guy Tennenholtz. arXiv, 2023.\n- [Offline Meta Reinforcement Learning with In-Distribution Online Adaptation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.19529)\n  - Jianhao Wang, Jin Zhang, Haozhe Jiang, Junyu Zhang, Liwei Wang, and Chongjie Zhang. arXiv, 2023.\n- [Diffusion Model is an Effective Planner and Data Synthesizer for Multi-Task Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.18459)\n  - Haoran He, Chenjia Bai, Kang Xu, Zhuoran Yang, Weinan Zhang, Dong Wang, Bin Zhao, and Xuelong Li. arXiv, 2023.\n- [Reinforcement Learning with Human Feedback: Learning Dynamic Choices via Pessimism](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.18438)\n  - Zihao Li, Zhuoran Yang, and Mengdi Wang. arXiv, 2023.\n- [MADiff: Offline Multi-agent Learning with Diffusion Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.17330)\n  - Zhengbang Zhu, Minghuan Liu, Liyuan Mao, Bingyi Kang, Minkai Xu, Yong Yu, Stefano Ermon, and Weinan Zhang. arXiv, 2023.\n- [Provable Offline Reinforcement Learning with Human Feedback](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14816)\n  - Wenhao Zhan, Masatoshi Uehara, Nathan Kallus, Jason D. Lee, and Wen Sun. 
arXiv, 2023.\n- [Think Before You Act: Decision Transformers with Internal Working Memory](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.16338)\n  - Jikun Kang, Romain Laroche, Xindi Yuan, Adam Trischler, Xue Liu, and Jie Fu. arXiv, 2023.\n- [Distributionally Robust Optimization Efficiently Solves Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.13289)\n  - Yue Wang, Yuting Hu, Jinjun Xiong, and Shaofeng Zou. arXiv, 2023.\n- [Offline Primal-Dual Reinforcement Learning for Linear MDPs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.12944)\n  - Germano Gabbianelli, Gergely Neu, Nneka Okolo, and Matteo Papini. arXiv, 2023.\n- [Federated Offline Policy Learning with Heterogeneous Observational Data](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.12407)\n  - Aldo Gael Carranza and Susan Athey. arXiv, 2023.\n- [Offline Reinforcement Learning with Additional Covering Distributions](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.12679)\n  - Chenjie Mao. arXiv, 2023.\n- [Reward-agnostic Fine-tuning: Provable Statistical Benefits of Hybrid Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.09836)\n  - Gen Li, Wenhao Zhan, Jason D. Lee, Yuejie Chi, and Yuxin Chen. arXiv, 2023.\n- [Stackelberg Decision Transformer for Asynchronous Action Coordination in Multi-Agent Systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.07856)\n  - Bin Zhang, Hangyu Mao, Lijuan Li, Zhiwei Xu, Dapeng Li, Rui Zhao, and Guoliang Fan. arXiv, 2023.\n- [Federated Ensemble-Directed Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.03097)\n  - Desik Rengarajan, Nitin Ragothaman, Dileep Kalathil, and Srinivas Shakkottai. arXiv, 2023.\n- [IDQL: Implicit Q-Learning as an Actor-Critic Method with Diffusion Policies](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.10573)\n  - Philippe Hansen-Estruch, Ilya Kostrikov, Michael Janner, Jakub Grudzien Kuba, and Sergey Levine. 
arXiv, 2023.\n- [Using Offline Data to Speed-up Reinforcement Learning in Procedurally Generated Environments](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.09825)\n  - Alain Andres, Lukas Schäfer, Esther Villar-Rodriguez, Stefano V. Albrecht, and Javier Del Ser. arXiv, 2023.\n- [Reinforcement Learning from Passive Data via Latent Intentions](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.04782) [[website](https:\u002F\u002Fdibyaghosh.com\u002Ficvf\u002F)]\n  - Dibya Ghosh, Chethan Bhateja, and Sergey Levine. arXiv, 2023.\n- [Uncertainty-driven Trajectory Truncation for Model-based Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.04660)\n  - Junjie Zhang, Jiafei Lyu, Xiaoteng Ma, Jiangpeng Yan, Jun Yang, Le Wan, and Xiu Li. arXiv, 2023.\n- [RAFT: Reward rAnked FineTuning for Generative Foundation Model Alignment](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.06767)\n  - Hanze Dong, Wei Xiong, Deepanshu Goyal, Rui Pan, Shizhe Diao, Jipeng Zhang, Kashun Shum, and Tong Zhang. arXiv, 2023.\n- [Batch Quantum Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.00905)\n  - Maniraman Periyasamy, Marc Hölle, Marco Wiedmann, Daniel D. Scherer, Axel Plinge, and Christopher Mutschler. arXiv, 2023.\n- [Accelerating exploration and representation learning with offline pre-training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.00046)\n  - Bogdan Mazoure, Jake Bruce, Doina Precup, Rob Fergus, and Ankit Anand. arXiv, 2023.\n- [On Context Distribution Shift in Task Representation Learning for Offline Meta RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.00354)\n  - Chenyang Zhao, Zihao Zhou, and Bin Liu. arXiv, 2023.\n- [Optimal Goal-Reaching Reinforcement Learning via Quasimetric Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.01203)\n  - Tongzhou Wang, Antonio Torralba, Phillip Isola, and Amy Zhang. 
arXiv, 2023.\n- [Learning Excavation of Rigid Objects with Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.16427)\n  - Shiyu Jin, Zhixian Ye, and Liangjun Zhang. arXiv, 2023.\n- [Goal-conditioned Offline Reinforcement Learning through State Space Partitioning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.09367)\n  - Mianchu Wang, Yue Jin, and Giovanni Montana. arXiv, 2023.\n- [Merging Decision Transformers: Weight Averaging for Forming Multi-Task Policies](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.07551)\n  - Daniel Lawson and Ahmed H. Qureshi. arXiv, 2023.\n- [Deploying Offline Reinforcement Learning with Human Feedback](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.07046)\n  - Ziniu Li, Ke Xu, Liu Liu, Lanqing Li, Deheng Ye, and Peilin Zhao. arXiv, 2023.\n- [Synthetic Experience Replay](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.06614)\n  - Cong Lu, Philip J. Ball, and Jack Parker-Holder. arXiv, 2023.\n- [ENTROPY: Environment Transformer and Offline Policy Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.03811)\n  - Pengqin Wang, Meixin Zhu, and Shaojie Shen. arXiv, 2023.\n- [Graph Decision Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.03747)\n  - Shengchao Hu, Li Shen, Ya Zhang, and Dacheng Tao. arXiv, 2023.\n- [Selective Uncertainty Propagation in Offline RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.00284)\n  - Sanath Kumar Krishnamurthy, Tanmay Gangwani, Sumeet Katariya, Branislav Kveton, and Anshuka Rangi. arXiv, 2023.\n- [Off-the-Grid MARL: a Framework for Dataset Generation with Baselines for Cooperative Offline Multi-Agent Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.00521)\n  - Claude Formanek, Asad Jeewa, Jonathan Shock, and Arnu Pretorius. arXiv, 2023.\n- [Skill Decision Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.13573)\n  - Shyam Sudhakaran and Sebastian Risi. 
arXiv, 2023.\n- [Guiding Online Reinforcement Learning with Action-Free Offline Pretraining](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.12876)\n  - Deyao Zhu, Yuhui Wang, Jürgen Schmidhuber, and Mohamed Elhoseiny. arXiv, 2023.\n- [SaFormer: A Conditional Sequence Modeling Approach to Offline Safe Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.12203)\n  - Qin Zhang, Linrui Zhang, Haoran Xu, Li Shen, Bowen Wang, Yongzhe Chang, Xueqian Wang, Bo Yuan, and Dacheng Tao. arXiv, 2023.\n- [APAC: Authorized Probability-controlled Actor-Critic For Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.12130)\n  - Jing Zhang, Chi Zhang, Wenjia Wang, and Bing-Yi Jing. arXiv, 2023.\n- [Designing an offline reinforcement learning objective from scratch](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.12842)\n  - Gaon An, Junhyeok Lee, Xingdong Zuo, Norio Kosaka, Kyung-Min Kim, and Hyun Oh Song. arXiv, 2023.\n- [Behaviour Discriminator: A Simple Data Filtering Method to Improve Offline Policy Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.11734)\n  - Qiang Wang, Robert McCarthy, David Cordova Bulens, Kevin McGuinness, Noel E. O'Connor, Francisco Roldan Sanchez, and Stephen J. Redmond. arXiv, 2023.\n- [Learning to View: Decision Transformers for Active Object Detection](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.09544)\n  - Wenhao Ding, Nathalie Majcherczyk, Mohit Deshpande, Xuewei Qi, Ding Zhao, Rajasimman Madhivanan, and Arnie Sen. arXiv, 2023.\n- [Risk Sensitive Dead-end Identification in Safety-Critical Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.05664)\n  - Taylor W. Killian, Sonali Parbhoo, and Marzyeh Ghassemi. arXiv, 2023.\n- [Value Enhancement of Reinforcement Learning via Efficient and Robust Trust Region Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.02220)\n  - Chengchun Shi, Zhengling Qi, Jianing Wang, and Fan Zhou. 
arXiv, 2023.\n- [Contextual Conservative Q-Learning for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.01298)\n  - Ke Jiang, Jiayu Yao, and Xiaoyang Tan. arXiv, 2023.\n- [Offline Policy Optimization in RL with Variance Regularization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.14405)\n  - Riashat Islam, Samarth Sinha, Homanga Bharadhwaj, Samin Yeasar Arnob, Zhuoran Yang, Animesh Garg, Zhaoran Wang, Lihong Li, and Doina Precup. arXiv, 2023.\n- [Transformer in Transformer as Backbone for Deep Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.14538)\n  - Hangyu Mao, Rui Zhao, Hao Chen, Jianye Hao, Yiqun Chen, Dong Li, Junge Zhang, and Zhen Xiao. arXiv, 2023.\n- [SPQR: Controlling Q-ensemble Independence with Spiked Random Model for Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.03137)\n  - Dohyeok Lee, Seungyub Han, Taehyun Cho, and Jungwoo Lee. NeurIPS, 2023.\n- [Revisiting the Minimalist Approach to Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.09836)\n  - Denis Tarasov, Vladislav Kurenkov, Alexander Nikulin, and Sergey Kolesnikov. NeurIPS, 2023.\n- [Constrained Policy Optimization with Explicit Behavior Density for Offline Reinforcement Learning](https:\u002F\u002Fopenreview.net\u002Fpdf?id=dLmDPVv19z)\n  - Jing Zhang, Chi Zhang, Wenjia Wang, and Bingyi Jing. NeurIPS, 2023.\n- [Supported Value Regularization for Offline Reinforcement Learning](https:\u002F\u002Fopenreview.net\u002Fpdf?id=fze7P9oy6l)\n  - Yixiu Mao, Hongchang Zhang, Chen Chen, Yi Xu, and Xiangyang Ji. NeurIPS, 2023.\n- [Conservative State Value Estimation for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.06884)\n  - Liting Chen, Jie Yan, Zhengdao Shao, Lu Wang, Qingwei Lin, Saravan Rajmohan, Thomas Moscibroda, and Dongmei Zhang. 
NeurIPS, 2023.\n- [Understanding and Addressing the Pitfalls of Bisimulation-based Representations in Offline Reinforcement Learning](https:\u002F\u002Fopenreview.net\u002Fpdf?id=sQyRQjun46)\n  - Hongyu Zang, Xin Li, Leiji Zhang, Yang Liu, Baigui Sun, Riashat Islam, Remi Tachet des Combes, and Romain Laroche. NeurIPS, 2023.\n- [Adversarial Model for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.11048)\n  - Mohak Bhardwaj, Tengyang Xie, Byron Boots, Nan Jiang, and Ching-An Cheng. NeurIPS, 2023.\n- [Percentile Criterion Optimization in Offline Reinforcement Learning](https:\u002F\u002Fopenreview.net\u002Fpdf?id=4LSDk5nlVvV)\n  - Cyrus Cousins, Elita Lobo, Marek Petrik, and Yair Zick. NeurIPS, 2023.\n- [Importance Weighted Actor-Critic for Optimal Conservative Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.12714)\n  - Hanlin Zhu, Paria Rashidinejad, and Jiantao Jiao. NeurIPS, 2023.\n- [HIQL: Offline Goal-Conditioned RL with Latent States as Actions](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.11949)\n  - Seohong Park, Dibya Ghosh, Benjamin Eysenbach, and Sergey Levine. NeurIPS, 2023.\n- [Recovering from Out-of-sample States via Inverse Dynamics in Offline Reinforcement Learning](https:\u002F\u002Fopenreview.net\u002Fpdf?id=4gLWjSaw4o)\n  - Ke Jiang, Jia-Yu Yao, and Xiaoyang Tan. NeurIPS, 2023.\n- [Offline RL with Discrete Proxy Representations for Generalizability in POMDPs](https:\u002F\u002Fopenreview.net\u002Fpdf?id=tJN664ZNVG)\n  - Pengjie Gu, Xinyu Cai, Dong Xing, Xinrun Wang, Mengchen Zhao, and Bo An. NeurIPS, 2023.\n- [Offline Multi-Agent Reinforcement Learning with Implicit Global-to-Local Value Regularization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.11620)\n  - Xiangsen Wang, Haoran Xu, Yinan Zheng, and Xianyuan Zhan. NeurIPS, 2023.\n- [Bi-Level Offline Policy Optimization with Limited Exploration](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.06268)\n  - Wenzhuo Zhou. 
NeurIPS, 2023.\n- [Provably (More) Sample-Efficient Offline RL with Options](https:\u002F\u002Fopenreview.net\u002Fpdf?id=JwNXeBdkeo)\n  - Xiaoyan Hu and Ho-fung Leung. NeurIPS, 2023.\n- [Double Pessimism is Provably Efficient for Distributionally Robust Offline Reinforcement Learning: Generic Algorithm and Robust Partial Coverage](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.09659)\n  - Jose Blanchet, Miao Lu, Tong Zhang, and Han Zhong. NeurIPS, 2023.\n- [AlberDICE: Addressing Out-Of-Distribution Joint Actions in Offline Multi-Agent RL via Alternating Stationary Distribution Correction Estimation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.02194)\n  - Daiki E. Matsunaga, Jongmin Lee, Jaeseok Yoon, Stefanos Leonardos, Pieter Abbeel, and Kee-Eung Kim. NeurIPS, 2023.\n- [Budgeting Counterfactual for Offline RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.06328)\n  - Yao Liu, Pratik Chaudhari, and Rasool Fakoor. NeurIPS, 2023.\n- [Efficient Diffusion Policies for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.20081)\n  - Bingyi Kang, Xiao Ma, Chao Du, Tianyu Pang, and Shuicheng Yan. NeurIPS, 2023.\n- [Cal-QL: Calibrated Offline RL Pre-Training for Efficient Online Fine-Tuning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.05479)\n  - Mitsuhiko Nakamoto, Yuexiang Zhai, Anikait Singh, Max Sobol Mark, Yi Ma, Chelsea Finn, Aviral Kumar, and Sergey Levine. NeurIPS, 2023.\n- [Policy Finetuning in Reinforcement Learning via Design of Experiments using Offline Data](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.04354)\n  - Ruiqi Zhang and Andrea Zanette. NeurIPS, 2023.\n- [Offline Minimax Soft-Q-learning Under Realizability and Partial Coverage](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.02392)\n  - Masatoshi Uehara, Nathan Kallus, Jason D. Lee, and Wen Sun. 
NeurIPS, 2023.\n- [Provably Efficient Offline Reinforcement Learning in Regular Decision Processes](https:\u002F\u002Fopenreview.net\u002Fpdf?id=8bQc7oRnjm)\n  - Roberto Cipollone, Anders Jonsson, Alessandro Ronca, and Mohammad Sadegh Talebi. NeurIPS, 2023.\n- [Provably Efficient Offline Goal-Conditioned Reinforcement Learning with General Function Approximation and Single-Policy Concentrability](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.03770)\n  - Hanlin Zhu and Amy Zhang. NeurIPS, 2023.\n- [On Sample-Efficient Offline Reinforcement Learning: Data Diversity, Posterior Sampling and Beyond](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.03301)\n  - Thanh Nguyen-Tang and Raman Arora. NeurIPS, 2023.\n- [Conservative Offline Policy Adaptation in Multi-Agent Games](https:\u002F\u002Fopenreview.net\u002Fpdf?id=C8pvL8Qbfa)\n  - Chengjie Wu, Pingzhong Tang, Jun Yang, Yujing Hu, Tangjie Lv, Changjie Fan, and Chongjie Zhang. NeurIPS, 2023.\n- [Look Beneath the Surface: Exploiting Fundamental Symmetry for Sample-Efficient Offline RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.04220)\n  - Peng Cheng, Xianyuan Zhan, Zhihao Wu, Wenjia Zhang, Shoucheng Song, Han Wang, Youfang Lin, and Li Jiang. NeurIPS, 2023.\n- [Survival Instinct in Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.03286)\n  - Anqi Li, Dipendra Misra, Andrey Kolobov, and Ching-An Cheng. NeurIPS, 2023.\n- [Learning from Visual Observation via Offline Pretrained State-to-Go Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.12860)\n  - Bohan Zhou, Ke Li, Jiechuan Jiang, and Zongqing Lu. NeurIPS, 2023.\n- [Design from Policies: Conservative Test-Time Adaptation for Offline Policy Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.14479)\n  - Jinxin Liu, Hongyin Zhang, Zifeng Zhuang, Yachen Kang, Donglin Wang, and Bin Wang. 
NeurIPS, 2023.\n- [Learning to Influence Human Behavior with Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.02265)\n  - Joey Hong, Anca Dragan, and Sergey Levine. NeurIPS, 2023.\n- [Residual Q-Learning: Offline and Online Policy Customization without Value](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.09526)\n  - Chenran Li, Chen Tang, Haruki Nishimura, Jean Mercat, Masayoshi Tomizuka, and Wei Zhan. NeurIPS, 2023.\n- [Train Once, Get a Family: State-Adaptive Balances for Offline-to-Online Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.17966)\n  - Shenzhi Wang, Qisen Yang, Jiawei Gao, Matthieu Gaetan Lin, Hao Chen, Liwei Wu, Ning Jia, Shiji Song, and Gao Huang. NeurIPS, 2023.\n- [Beyond Uniform Sampling: Offline Reinforcement Learning with Imbalanced Datasets](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.04413)\n  - Zhang-Wei Hong, Aviral Kumar, Sathwik Karnik, Abhishek Bhandwaldar, Akash Srivastava, Joni Pajarinen, Romain Laroche, Abhishek Gupta, and Pulkit Agrawal. NeurIPS, 2023.\n- [Understanding, Predicting and Better Resolving Q-Value Divergence in Offline-RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.04411)\n  - Yang Yue, Rui Lu, Bingyi Kang, Shiji Song, and Gao Huang. NeurIPS, 2023.\n- [Corruption-Robust Offline Reinforcement Learning with General Function Approximation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.14550)\n  - Chenlu Ye, Rui Yang, Quanquan Gu, and Tong Zhang. NeurIPS, 2023.\n- [Learning to Modulate pre-trained Models in RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.14884)\n  - Thomas Schmied, Markus Hofmarcher, Fabian Paischer, Razvan Pascanu, and Sepp Hochreiter. NeurIPS, 2023.\n- [Counterfactual Conservative Q Learning for Offline Multi-agent Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.12696)\n  - Jianzhun Shao, Yun Qu, Chen Chen, Hongchang Zhang, and Xiangyang Ji. 
NeurIPS, 2023.\n- [One Risk to Rule Them All: A Risk-Sensitive Perspective on Model-Based Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.00124)\n  - Marc Rigter, Bruno Lacerda, and Nick Hawes. NeurIPS, 2023.\n- [Goal-Conditioned Predictive Coding for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.03406)\n  - Zilai Zeng, Ce Zhang, Shijie Wang, and Chen Sun. NeurIPS, 2023.\n- [Mutual Information Regularized Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.07484)\n  - Xiao Ma, Bingyi Kang, Zhongwen Xu, Min Lin, and Shuicheng Yan. NeurIPS, 2023.\n- [Offline RL With Heteroskedastic Datasets and Support Constraints](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.01052)\n  - Anikait Singh, Aviral Kumar, Quan Vuong, Yevgen Chebotar, and Sergey Levine. NeurIPS, 2023.\n- [Offline Reinforcement Learning with Differential Privacy](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.00810)\n  - Dan Qiao and Yu-Xiang Wang. NeurIPS, 2023.\n- [Accountability in Offline Reinforcement Learning: Explaining Decisions with a Corpus of Examples](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.07747)\n  - Hao Sun, Alihan Hüyük, Daniel Jarrett, and Mihaela van der Schaar. NeurIPS, 2023.\n- [Reining Generalization in Offline Reinforcement Learning via Representation Distinction](https:\u002F\u002Fopenreview.net\u002Fpdf?id=mVywRIDNIl)\n  - Yi Ma, Hongyao Tang, Dong Li, and Zhaopeng Meng. NeurIPS, 2023.\n- [VOCE: Variational Optimization with Conservative Estimation for Offline Safe Reinforcement Learning](https:\u002F\u002Fopenreview.net\u002Fpdf?id=sIU3WujeSl)\n  - Jiayi Guan, Guang Chen, Jiaming Ji, Long Yang, Ao Zhou, Zhijun Li, and Changjun Jiang. 
NeurIPS, 2023.\n- [SafeDICE: Offline Safe Imitation Learning with Non-Preferred Demonstrations](https:\u002F\u002Fopenreview.net\u002Fpdf?id=toEGuA9Qfn)\n  - Youngsoo Jang, Geon-Hyeong Kim, Jongmin Lee, Sungryull Sohn, Byoungjip Kim, Honglak Lee, and Moontae Lee. NeurIPS, 2023.\n- [Hierarchical Diffusion for Offline Decision Making](https:\u002F\u002Fopenreview.net\u002Fforum?id=55kLa7tH9o)\n  - Wenhao Li, Xiangfeng Wang, Bo Jin, and Hongyuan Zha. ICML, 2023.\n- [MAHALO: Unifying Offline Reinforcement Learning and Imitation Learning from Observations](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.17156)\n  - Anqi Li, Byron Boots, and Ching-An Cheng. ICML, 2023.\n- [Safe Offline Reinforcement Learning with Real-Time Budget Constraints](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.00603)\n  - Qian Lin, Bo Tang, Zifan Wu, Chao Yu, Shangqin Mao, Qianlong Xie, Xingxing Wang, and Dong Wang. ICML, 2023.\n- [Near-optimal Conservative Exploration in Reinforcement Learning under Episode-wise Constraints](https:\u002F\u002Fopenreview.net\u002Fforum?id=Wo9JQDb4ms)\n  - Donghao Li, Ruiquan Huang, Cong Shen, and Jing Yang. ICML, 2023.\n- [A Connection between One-Step Regularization and Critic Regularization in Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.12968)\n  - Benjamin Eysenbach, Matthieu Geist, Sergey Levine, and Ruslan Salakhutdinov. ICML, 2023.\n- [Anti-Exploration by Random Network Distillation](https:\u002F\u002Fopenreview.net\u002Fforum?id=NRQ5lC8Dit)\n  - Alexander Nikulin, Vladislav Kurenkov, Denis Tarasov, and Sergey Kolesnikov. ICML, 2023.\n- [Optimal Goal-Reaching Reinforcement Learning via Quasimetric Learning](https:\u002F\u002Fopenreview.net\u002Fforum?id=VLmf5fqWdf)\n  - Tongzhou Wang, Antonio Torralba, Phillip Isola, and Amy Zhang. 
ICML, 2023.\n- [PASTA: Pessimistic Assortment Optimization](https:\u002F\u002Fopenreview.net\u002Fforum?id=Yzfg7JhPhp)\n  - Juncheng Dong, Weibin Mo, Zhengling Qi, Cong Shi, Ethan X Fang, and Vahid Tarokh. ICML, 2023.\n- [Contrastive Energy Prediction for Exact Energy-Guided Diffusion Sampling in Offline Reinforcement Learning](https:\u002F\u002Fopenreview.net\u002Fforum?id=LucUrr5kUi)\n  - Cheng Lu, Huayu Chen, Jianfei Chen, Hang Su, Chongxuan Li, and Jun Zhu. ICML, 2023.\n- [Supported Trust Region Optimization for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.08935)\n  - Yixiu Mao, Hongchang Zhang, Chen Chen, Yi Xu, and Xiangyang Ji. ICML, 2023.\n- [Principled Offline RL in the Presence of Rich Exogenous Information](https:\u002F\u002Fopenreview.net\u002Fforum?id=jTcRlAAO01)\n  - Riashat Islam, Manan Tomar, Alex Lamb, Yonathan Efroni, Hongyu Zang, Aniket Rajiv Didolkar, Dipendra Misra, Xin Li, Harm van Seijen, Remi Tachet des Combes, and John Langford. ICML, 2023.\n- [Efficient Online Reinforcement Learning with Offline Data](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.02948)\n  - Philip J. Ball, Laura Smith, Ilya Kostrikov, and Sergey Levine. ICML, 2023.\n- [Boosting Offline Reinforcement Learning with Action Preference Query](https:\u002F\u002Fopenreview.net\u002Fforum?id=XiGijCSGjx)\n  - Qisen Yang, Shenzhi Wang, Matthieu Gaetan Lin, Shiji Song, and Gao Huang. ICML, 2023.\n- [Model-based Offline Reinforcement Learning with Count-based Conservatism](https:\u002F\u002Fopenreview.net\u002Fforum?id=T5VlejGx7f)\n  - Byeongchan Kim and Min-hwan Oh. ICML, 2023.\n- [Constrained Decision Transformer for Offline Safe Reinforcement Learning](https:\u002F\u002Fopenreview.net\u002Fforum?id=9VKCBHESq0)\n  - Zuxin Liu, Zijian Guo, Yihang Yao, Zhepeng Cen, Wenhao Yu, Tingnan Zhang, and Ding Zhao. 
ICML, 2023.\n- [Model-Bellman Inconsistency for Model-based Offline Reinforcement Learning](https:\u002F\u002Fopenreview.net\u002Fforum?id=rwLwGPdzDD)\n  - Yihao Sun, Jiaji Zhang, Chengxing Jia, Haoxin Lin, Junyin Ye, and Yang Yu. ICML, 2023.\n- [Provably Efficient Offline Reinforcement Learning with Perturbed Data Sources](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.08364)\n  - Chengshuai Shi, Wei Xiong, Cong Shen, and Jing Yang. ICML, 2023.\n- [What is Essential for Unseen Goal Generalization of Offline Goal-conditioned RL?](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.18882)\n  - Rui Yang, Yong Lin, Xiaoteng Ma, Hao Hu, Chongjie Zhang, and Tong Zhang. ICML, 2023.\n- [Policy Regularization with Dataset Constraint for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.06569)\n  - Yuhang Ran, Yi-Chen Li, Fuxiang Zhang, Zongzhang Zhang, and Yang Yu. ICML, 2023.\n- [MetaDiffuser: Diffusion Model as Conditional Planner for Offline Meta-RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.19923)\n  - Fei Ni, Jianye Hao, Yao Mu, Yifu Yuan, Yan Zheng, Bin Wang, and Zhixuan Liang. ICML, 2023.\n- [Distance Weighted Supervised Learning for Offline Interaction Data](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.13774)\n  - Joey Hejna, Jensen Gao, and Dorsa Sadigh. ICML, 2023.\n- [Masked Trajectory Models for Prediction, Representation, and Control](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.02968)\n  - Philipp Wu, Arjun Majumdar, Kevin Stone, Yixin Lin, Igor Mordatch, Pieter Abbeel, and Aravind Rajeswaran. 
ICML, 2023.\n- [Bayesian Reparameterization of Reward-Conditioned Reinforcement Learning with Energy-based Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.11340)\n  - Wenhao Ding, Tong Che, Ding Zhao, and Marco Pavone. ICML, 2023.\n- [Warm-Start Actor-Critic: From Approximation Error to Sub-optimality Gap](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.11271)\n  - Hang Wang, Sen Lin, and Junshan Zhang. ICML, 2023.\n- [Future-conditioned Unsupervised Pretraining for Decision Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.16683)\n  - Zhihui Xie, Zichuan Lin, Deheng Ye, Qiang Fu, Wei Yang, and Shuai Li. ICML, 2023.\n- [PAC-Bayesian Offline Contextual Bandits With Guarantees](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.13132)\n  - Otmane Sakhi, Nicolas Chopin, and Pierre Alquier. ICML, 2023.\n- [Q-learning Decision Transformer: Leveraging Dynamic Programming for Conditional Sequence Modelling in Offline RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.03993)\n  - Taku Yamagata, Ahmed Khalil, and Raul Santos-Rodriguez. ICML, 2023.\n- [Jump-Start Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.02372) [[website](https:\u002F\u002Fjumpstart-rl.github.io\u002F)]\n  - Ikechukwu Uchendu, Ted Xiao, Yao Lu, Banghua Zhu, Mengyuan Yan, Joséphine Simon, Matthew Bennice, Chuyuan Fu, Cong Ma, Jiantao Jiao, Sergey Levine, and Karol Hausman. ICML, 2023.\n- [Learning Temporally Abstract World Models without Online Experimentation](https:\u002F\u002Fopenreview.net\u002Fforum?id=YeTYJz7th5)\n  - Benjamin Freed, Siddarth Venkatraman, Guillaume Adrien Sartoretti, Jeff Schneider, and Howie Choset. ICML, 2023.\n- [A Framework for Adapting Offline Algorithms to Solve Combinatorial Multi-Armed Bandit Problems with Bandit Feedback](https:\u002F\u002Fopenreview.net\u002Fforum?id=fBDP40MrQS)\n  - Guanyu Nie, Yididiya Y Nadew, Yanhui Zhu, Vaneet Aggarwal, and Christopher John Quinn. 
ICML, 2023.\n- [Revisiting the Linear-Programming Framework for Offline RL with General Function Approximation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.13861)\n  - Asuman Ozdaglar, Sarath Pattathil, Jiawei Zhang, and Kaiqing Zhang. ICML, 2023.\n- [Semi-Supervised Offline Reinforcement Learning with Action-Free Trajectories](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.06518)\n  - Qinqing Zheng, Mikael Henaff, Brandon Amos, and Aditya Grover. ICML, 2023.\n- [Actor-Critic Alignment for Offline-to-Online Reinforcement Learning](https:\u002F\u002Fopenreview.net\u002Fforum?id=f6I3ZehFmu)\n  - Zishun Yu and Xinhua Zhang. ICML, 2023.\n- [Leveraging Offline Data in Online Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.04974)\n  - Andrew Wagenmaker and Aldo Pacchiano. ICML, 2023.\n- [Offline Reinforcement Learning with Closed-Form Policy Improvement Operators](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.15956)\n  - Jiachen Li, Edwin Zhang, Ming Yin, Qinxun Bai, Yu-Xiang Wang, and William Yang Wang. ICML, 2023.\n- [Offline Learning in Markov Games with General Function Approximation](https:\u002F\u002Fopenreview.net\u002Fforum?id=LtSMEVi6eB)\n  - Yuheng Zhang, Yu Bai, and Nan Jiang. ICML, 2023.\n- [Offline Meta Reinforcement Learning with In-Distribution Online Adaptation](https:\u002F\u002Fopenreview.net\u002Fforum?id=dkYfm01yQp)\n  - Jianhao Wang, Jin Zhang, Haozhe Jiang, Junyu Zhang, Liwei Wang, and Chongjie Zhang. ICML, 2023.\n- [Scaling Pareto-Efficient Decision Making Via Offline Multi-Objective RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.00567)\n  - Baiting Zhu, Meihua Dang, and Aditya Grover. ICLR, 2023.\n- [Confidence-Conditioned Value Functions for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.04607)\n  - Joey Hong, Aviral Kumar, and Sergey Levine. 
ICLR, 2023.\n- [Offline Q-Learning on Diverse Multi-Task Data Both Scales And Generalizes](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.15144) [[website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fscaling-offlinerl\u002Fhome)]\n  - Aviral Kumar, Rishabh Agarwal, Xinyang Geng, George Tucker, and Sergey Levine. ICLR, 2023.\n- [Is Conditional Generative Modeling all you need for Decision-Making?](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.15657) [[website](https:\u002F\u002Fanuragajay.github.io\u002Fdecision-diffuser\u002F)]\n  - Anurag Ajay, Yilun Du, Abhi Gupta, Joshua Tenenbaum, Tommi Jaakkola, and Pulkit Agrawal. ICLR, 2023.\n- [Offline RL with No OOD Actions: In-Sample Learning via Implicit Value Regularization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.15810)\n  - Haoran Xu, Li Jiang, Jianxiong Li, Zhuoran Yang, Zhaoran Wang, Victor Wai Kin Chan, and Xianyuan Zhan. ICLR, 2023.\n- [Extreme Q-Learning: MaxEnt RL without Entropy](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.02328)\n  - Divyansh Garg, Joey Hejna, Matthieu Geist, and Stefano Ermon. ICLR, 2023.\n- [Dichotomy of Control: Separating What You Can Control from What You Cannot](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.13435)\n  - Mengjiao Yang, Dale Schuurmans, Pieter Abbeel, and Ofir Nachum. ICLR, 2023.\n- [From Play to Policy: Conditional Behavior Generation from Uncurated Robot Data](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.10047)\n  - Zichen Jeff Cui, Yibin Wang, Nur Muhammad Mahi Shafiullah, and Lerrel Pinto. ICLR, 2023.\n- [VIPeR: Provably Efficient Algorithm for Offline RL with Neural Function Approximation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.12780)\n  - Thanh Nguyen-Tang and Raman Arora. ICLR, 2023.\n- [Optimal Conservative Offline RL with General Function Approximation via Augmented Lagrangian](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.00716)\n  - Paria Rashidinejad, Hanlin Zhu, Kunhe Yang, Stuart Russell, and Jiantao Jiao. 
ICLR, 2023.\n- [The In-Sample Softmax for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.14372)\n  - Chenjun Xiao, Han Wang, Yangchen Pan, Adam White, and Martha White. ICLR, 2023.\n- [VIP: Towards Universal Visual Reward and Representation via Value-Implicit Pre-Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.00030) [[website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fvip-rl)] [[code](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fvip)]\n  - Yecheng Jason Ma, Shagun Sodhani, Dinesh Jayaraman, Osbert Bastani, Vikash Kumar, and Amy Zhang. ICLR, 2023.\n- [Does Zero-Shot Reinforcement Learning Exist?](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.14935)\n  - Ahmed Touati, Jérémy Rapin, and Yann Ollivier. ICLR, 2023.\n- [Behavior Prior Representation learning for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.00863)\n  - Hongyu Zang, Xin Li, Jie Yu, Chen Liu, Riashat Islam, Remi Tachet Des Combes, and Romain Laroche. ICLR, 2023.\n- [Mind the Gap: Offline Policy Optimization for Imperfect Rewards](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.01667)\n  - Jianxiong Li, Xiao Hu, Haoran Xu, Jingjing Liu, Xianyuan Zhan, Qing-Shan Jia, and Ya-Qin Zhang. ICLR, 2023.\n- [Offline Congestion Games: How Feedback Type Affects Data Coverage Requirement](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.13396)\n  - Haozhe Jiang, Qiwen Cui, Zhihan Xiong, Maryam Fazel, and Simon S. Du. ICLR, 2023.\n- [User-Interactive Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.10629)\n  - Phillip Swazinna, Steffen Udluft, and Thomas Runkler. ICLR, 2023.\n- [Discovering Generalizable Multi-agent Coordination Skills from Multi-task Offline Data](https:\u002F\u002Fopenreview.net\u002Fforum?id=53FyUAdP7d)\n  - Fuxiang Zhang, Chengxing Jia, Yi-Chen Li, Lei Yuan, Yang Yu, and Zongzhang Zhang. 
ICLR, 2023.\n- [Hybrid RL: Using Both Offline and Online Data Can Make RL Efficient](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.06718) [[code](https:\u002F\u002Fgithub.com\u002Fyudasong\u002FHyQ)]\n  - Yuda Song, Yifei Zhou, Ayush Sekhari, J. Andrew Bagnell, Akshay Krishnamurthy, and Wen Sun. ICLR, 2023.\n- [Harnessing Mixed Offline Reinforcement Learning Datasets via Trajectory Weighting](https:\u002F\u002Fopenreview.net\u002Fforum?id=OhUAblg27z)\n  - Zhang-Wei Hong, Pulkit Agrawal, Remi Tachet des Combes, and Romain Laroche. ICLR, 2023.\n- [Efficient Offline Policy Optimization with a Learned Model](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.05980)\n  - Zichen Liu, Siyi Li, Wee Sun Lee, Shuicheng Yan, and Zhongwen Xu. ICLR, 2023.\n- [Diffusion Policies as an Expressive Policy Class for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2208.06193)\n  - Zhendong Wang, Jonathan J Hunt, and Mingyuan Zhou. ICLR, 2023.\n- [When Data Geometry Meets Deep Function: Generalizing Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.11027)\n  - Jianxiong Li, Xianyuan Zhan, Haoran Xu, Xiangyu Zhu, Jingjing Liu, and Ya-Qin Zhang. ICLR, 2023.\n- [In-sample Actor Critic for Offline Reinforcement Learning](https:\u002F\u002Fopenreview.net\u002Fforum?id=dfDv0WU853R)\n  - Hongchang Zhang, Yixiu Mao, Boyuan Wang, Shuncheng He, Yi Xu, and Xiangyang Ji. ICLR, 2023.\n- [Value Memory Graph: A Graph-Structured World Model for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.04384)\n  - Deyao Zhu, Li Erran Li, and Mohamed Elhoseiny. ICLR, 2023.\n- [Conservative Bayesian Model-Based Value Expansion for Offline Policy Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.03802)\n  - Jihwan Jeong, Xiaoyu Wang, Michael Gimelfarb, Hyunwoo Kim, Baher Abdulhai, and Scott Sanner. 
ICLR, 2023.\n- [Offline Reinforcement Learning via High-Fidelity Generative Behavior Modeling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.14548)\n  - Huayu Chen, Cheng Lu, Chengyang Ying, Hang Su, and Jun Zhu. ICLR, 2023.\n- [Offline Reinforcement Learning with Differentiable Function Approximation is Provably Efficient](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.00750)\n  - Ming Yin, Mengdi Wang, and Yu-Xiang Wang. ICLR, 2023.\n- [Nearly Minimax Optimal Offline Reinforcement Learning with Linear Function Approximation: Single-Agent MDP and Markov Game](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.15512)\n  - Wei Xiong, Han Zhong, Chengshuai Shi, Cong Shen, Liwei Wang, and Tong Zhang. ICLR, 2023.\n- [Pessimism in the Face of Confounders: Provably Efficient Offline Reinforcement Learning in Partially Observable Markov Decision Processes](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.13589)\n  - Miao Lu, Yifei Min, Zhaoran Wang, and Zhuoran Yang. ICLR, 2023.\n- [Hyper-Decision Transformer for Efficient Online Policy Adaptation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.08487)\n  - Mengdi Xu, Yuchen Lu, Yikang Shen, Shun Zhang, Ding Zhao, and Chuang Gan. ICLR, 2023.\n- [Efficient Planning in a Compact Latent Action Space](https:\u002F\u002Farxiv.org\u002Fabs\u002F2208.10291)\n  - Zhengyao Jiang, Tianjun Zhang, Michael Janner, Yueying Li, Tim Rocktäschel, Edward Grefenstette, and Yuandong Tian. ICLR, 2023.\n- [Preference Transformer: Modeling Human Preferences using Transformers for RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.00957) [[website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fpreference-transformer)]\n  - Changyeon Kim, Jongjin Park, Jinwoo Shin, Honglak Lee, Pieter Abbeel, and Kimin Lee. ICLR, 2023.\n- [Behavior Proximal Policy Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.11312)\n  - Zifeng Zhuang, Kun Lei, Jinxin Liu, Donglin Wang, and Yilang Guo. 
ICLR, 2023.\n- [Provably Efficient Neural Offline Reinforcement Learning via Perturbed Rewards](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.12780)\n  - Thanh Nguyen-Tang and Raman Arora. ICLR, 2023.\n- [The Provable Benefits of Unsupervised Data Sharing for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.13493)\n  - Hao Hu, Yiqin Yang, Qianchuan Zhao, and Chongjie Zhang. ICLR, 2023.\n- [Decision Transformer under Random Frame Dropping](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.03391)\n  - Kaizhe Hu, Ray Chen Zheng, Yang Gao, and Huazhe Xu. ICLR, 2023.\n- [Policy Expansion for Bridging Offline-to-Online Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.00935)\n  - Haichao Zhang, We Xu, and Haonan Yu. ICLR, 2023.\n- [Finetuning Offline World Models in the Real World](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.16029)\n  - Yunhai Feng, Nicklas Hansen, Ziyan Xiong, Chandramouli Rajagopalan, and Xiaolong Wang. CoRL, 2023.\n- [On the Sample Complexity of Vanilla Model-Based Offline Reinforcement Learning with Dependent Samples](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.04268)\n  - Mustafa O. Karabag and Ufuk Topcu. AAAI, 2023.\n- [Adaptive Policy Learning for Offline-to-Online Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.07693)\n  - Han Zheng, Xufang Luo, Pengfei Wei, Xuan Song, Dongsheng Li, and Jing Jiang. AAAI, 2023.\n- [Safe Policy Improvement for POMDPs via Finite-State Controllers](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.04939)\n  - Thiago D. Simão, Marnix Suilen, and Nils Jansen. AAAI, 2023.\n- [Behavior Estimation from Multi-Source Data for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.16078)\n  - Guoxi Zhang and Hisashi Kashima. 
AAAI, 2023.\n- [On Instance-Dependent Bounds for Offline Reinforcement Learning with Linear Function Approximation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.13208)\n  - Thanh Nguyen-Tang, Ming Yin, Sunil Gupta, Svetha Venkatesh, and Raman Arora. AAAI, 2023.\n- [Contrastive Example-Based Control](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.13101)\n  - Kyle Hatch, Benjamin Eysenbach, Rafael Rafailov, Tianhe Yu, Ruslan Salakhutdinov, Sergey Levine, and Chelsea Finn. LDC, 2023.\n- [Curriculum Offline Reinforcement Learning](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.5555\u002F3545946.3598767)\n  - Yuanying Cai, Chuheng Zhang, Hanye Zhao, Li Zhao, and Jiang Bian. AAMAS. 2023.\n- [Offline Reinforcement Learning with On-Policy Q-Function Regularization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.13824)\n  - Laixi Shi, Robert Dadashi, Yuejie Chi, Pablo Samuel Castro, and Matthieu Geist. ECML, 2023.\n- [Model-based Offline Policy Optimization with Adversarial Network](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.02157)\n  - Junming Yang, Xingguo Chen, Shengyuan Wang, and Bolei Zhang. ECAI, 2023.\n- [Efficient experience replay architecture for offline reinforcement learning](https:\u002F\u002Fwww.emerald.com\u002Finsight\u002Fcontent\u002Fdoi\u002F10.1108\u002FRIA-10-2022-0248\u002Ffull\u002Fhtml)\n  - Longfei Zhang, Yanghe Feng, Rongxiao Wang, Yue Xu, Naifu Xu, Zeyi Liu, and Hang Du. RIA, 2023.\n- [Automatic Trade-off Adaptation in Offline RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.09744)\n  - Phillip Swazinna, Steffen Udluft, and Thomas Runkler. ESANN, 2023.\n- [Offline Robot Reinforcement Learning with Uncertainty-Guided Human Expert Sampling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.08232)\n  - Ashish Kumar and Ilya Kuzovkin. 
arXiv, 2022.\n- [Latent Variable Representation for Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.08765)\n  - Tongzheng Ren, Chenjun Xiao, Tianjun Zhang, Na Li, Zhaoran Wang, Sujay Sanghavi, Dale Schuurmans, and Bo Dai. arXiv, 2022.\n- [Learning From Good Trajectories in Offline Multi-Agent Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.15612)\n  - Qi Tian, Kun Kuang, Furui Liu, and Baoxiang Wang. arXiv, 2022.\n- [State-Aware Proximal Pessimistic Algorithms for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.15065)\n  - Chen Chen, Hongyao Tang, Yi Ma, Chao Wang, Qianli Shen, Dong Li, and Jianye Hao. arXiv, 2022.\n- [Masked Autoencoding for Scalable and Generalizable Decision Making](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.12740)\n  - Fangchen Liu, Hao Liu, Aditya Grover, and Pieter Abbeel. arXiv, 2022.\n- [Improving TD3-BC: Relaxed Policy Constraint for Offline Learning and Stable Online Fine-Tuning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.11802)\n  - Alex Beeson and Giovanni Montana. arXiv, 2022.\n- [Q-Ensemble for Offline RL: Don't Scale the Ensemble, Scale the Batch Size](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.11092)\n  - Alexander Nikulin, Vladislav Kurenkov, Denis Tarasov, Dmitry Akimov, and Sergey Kolesnikov. arXiv, 2022.\n- [Let Offline RL Flow: Training Conservative Agents in the Latent Space of Normalizing Flows](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.11096)\n  - Dmitriy Akimov, Vladislav Kurenkov, Alexander Nikulin, Denis Tarasov, and Sergey Kolesnikov. arXiv, 2022.\n- [Model-based Trajectory Stitching for Improved Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.11603)\n  - Charles A. Hepburn and Giovanni Montana. arXiv, 2022.\n- [Offline Reinforcement Learning with Adaptive Behavior Regularization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.08251)\n  - Yunfan Zhou, Xijun Li, and Qingyu Qu. 
arXiv, 2022.\n- [Contextual Transformer for Offline Meta Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.08016)\n  - Runji Lin, Ye Li, Xidong Feng, Zhaowei Zhang, Xian Hong Wu Fung, Haifeng Zhang, Jun Wang, Yali Du, and Yaodong Yang. arXiv, 2022.\n- [Wall Street Tree Search: Risk-Aware Planning for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.04583)\n  - Dan Elbaz, Gal Novik, and Oren Salzman. arXiv, 2022.\n- [ARMOR: A Model-based Framework for Improving Arbitrary Baseline Policies with Offline Data](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.04538)\n  - Tengyang Xie, Mohak Bhardwaj, Nan Jiang, and Ching-An Cheng. arXiv, 2022.\n- [Contrastive Value Learning: Implicit Models for Simple Offline RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.02100)\n  - Bogdan Mazoure, Benjamin Eysenbach, Ofir Nachum, and Jonathan Tompson. arXiv, 2022.\n- [Optimistic Curiosity Exploration and Conservative Exploitation with Linear Reward Shaping](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.07288)\n  - Hao Sun, Lei Han, Rui Yang, Xiaoteng Ma, Jian Guo, and Bolei Zhou. arXiv, 2022.\n- [Optimal Conservative Offline RL with General Function Approximation via Augmented Lagrangian](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.00716)\n  - Paria Rashidinejad, Hanlin Zhu, Kunhe Yang, Stuart Russell, and Jiantao Jiao. ICLR, 2023.\n- [Agent-Controller Representations: Principled Offline RL with Rich Exogenous Information](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.00164)\n  - Riashat Islam, Manan Tomar, Alex Lamb, Yonathan Efroni, Hongyu Zang, Aniket Didolkar, Dipendra Misra, Xin Li, Harm van Seijen, Remi Tachet des Combes, and John Langford. arXiv, 2022.\n- [Provable Safe Reinforcement Learning with Binary Feedback](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.14492)\n  - Andrew Bennett, Dipendra Misra, and Nathan Kallus. 
arXiv, 2022.\n- [Learning on the Job: Self-Rewarding Offline-to-Online Finetuning for Industrial Insertion of Novel Connectors from Vision](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.15206)\n  - Ashvin Nair, Brian Zhu, Gokul Narayanan, Eugen Solowjow, and Sergey Levine. arXiv, 2022.\n- [Implicit Offline Reinforcement Learning via Supervised Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.12272)\n  - Alexandre Piche, Rafael Pardinas, David Vazquez, Igor Mordatch, and Chris Pal. arXiv, 2022.\n- [Robust Offline Reinforcement Learning with Gradient Penalty and Constraint Relaxation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.10469)\n  - Chengqian Gao, Ke Xu, Liu Liu, Deheng Ye, Peilin Zhao, and Zhiqiang Xu. arXiv, 2022.\n- [Boosting Offline Reinforcement Learning via Data Rebalancing](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.09241)\n  - Yang Yue, Bingyi Kang, Xiao Ma, Zhongwen Xu, Gao Huang, and Shuicheng Yan. arXiv, 2022.\n- [ConserWeightive Behavioral Cloning for Reliable Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.05158) [[code](https:\u002F\u002Fgithub.com\u002Ftung-nd\u002Fcwbc)]\n  - Tung Nguyen, Qinqing Zheng, and Aditya Grover. arXiv, 2022.\n- [State Advantage Weighting for Offline RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.04251)\n  - Jiafei Lyu, Aicheng Gong, Le Wan, Zongqing Lu, and Xiu Li. arXiv, 2022.\n- [Blessing from Experts: Super Reinforcement Learning in Confounded Environments](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.15448)\n  - Jiayi Wang, Zhengling Qi, and Chengchun Shi. arXiv, 2022.\n- [DCE: Offline Reinforcement Learning With Double Conservative Estimates](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.13132)\n  - Chen Zhao, Kai Xing Huang, and Chun Yuan. arXiv, 2022.\n- [On the Opportunities and Challenges of using Animals Videos in Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.12347)\n  - Vittorio Giammarino. 
arXiv, 2022.\n- [Offline Reinforcement Learning with Instrumental Variables in Confounded Markov Decision Processes](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.08666)\n  - Zuyue Fu, Zhengling Qi, Zhaoran Wang, Zhuoran Yang, Yanxun Xu, and Michael R. Kosorok. arXiv, 2022.\n- [Exploiting Reward Shifting in Value-Based Deep RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.07288)\n  - Hao Sun, Lei Han, Rui Yang, Xiaoteng Ma, Jian Guo, and Bolei Zhou. arXiv, 2022.\n- [Distributionally Robust Offline Reinforcement Learning with Linear Function Approximation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.06620)\n  - Xiaoteng Ma, Zhipeng Liang, Li Xia, Jiheng Zhang, Jose Blanchet, Mingwen Liu, Qianchuan Zhao, and Zhengyuan Zhou. arXiv, 2022.\n- [C^2:Co-design of Robots via Concurrent Networks Coupling Online and Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.06579)\n  - Ci Chen, Pingyu Xiang, Haojian Lu, Yue Wang, and Rong Xiong. arXiv, 2022.\n- [Strategic Decision-Making in the Presence of Information Asymmetry: Provably Efficient RL with Algorithmic Instruments](https:\u002F\u002Farxiv.org\u002Fabs\u002F2208.11040)\n  - Mengxin Yu, Zhuoran Yang, and Jianqing Fan. arXiv, 2022.\n- [Distributionally Robust Model-Based Offline Reinforcement Learning with Near-Optimal Sample Complexity](https:\u002F\u002Farxiv.org\u002Fabs\u002F2208.05767)\n  - Laixi Shi and Yuejie Chi. arXiv, 2022.\n- [AdaCat: Adaptive Categorical Discretization for Autoregressive Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2208.02246)\n  - Qiyang Li, Ajay Jain, and Pieter Abbeel. arXiv, 2022.\n- [Branch Ranking for Efficient Mixed-Integer Programming via Offline Ranking-based Policy Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.13701)\n  - Zeren Huang, Wenhao Chen, Weinan Zhang, Chuhan Shi, Furui Liu, Hui-Ling Zhen, Mingxuan Yuan, Jianye Hao, Yong Yu, and Jun Wang. 
arXiv, 2022.\n- [Offline Reinforcement Learning at Multiple Frequencies](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.13082) [[webpage](https:\u002F\u002Fsites.google.com\u002Fstanford.edu\u002Fadaptive-nstep-returns\u002F)]\n  - Kaylee Burns, Tianhe Yu, Chelsea Finn, and Karol Hausman. arXiv, 2022.\n- [General Policy Evaluation and Improvement by Learning to Identify Few But Crucial States](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.01566)\n  - Francesco Faccio, Aditya Ramesh, Vincent Herrmann, Jean Harb, and Jürgen Schmidhuber. arXiv, 2022.\n- [Behavior Transformers: Cloning k modes with one stone](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.11251)\n  - Nur Muhammad Mahi Shafiullah, Zichen Jeff Cui, Ariuntuya Altanzaya, and Lerrel Pinto. arXiv, 2022.\n- [Contrastive Learning as Goal-Conditioned Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.07568)\n  - Benjamin Eysenbach, Tianjun Zhang, Ruslan Salakhutdinov, and Sergey Levine. arXiv, 2022.\n- [Federated Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.05581)\n  - Doudou Zhou, Yufeng Zhang, Aaron Sonabend-W, Zhaoran Wang, Junwei Lu, and Tianxi Cai. arXiv, 2022.\n- [Provable Benefit of Multitask Representation Learning in Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.05900)\n  - Yuan Cheng, Songtao Feng, Jing Yang, Hong Zhang, and Yingbin Liang. arXiv, 2022\n- [Provably Efficient Offline Reinforcement Learning with Trajectory-Wise Reward](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.06426)\n  - Tengyu Xu and Yingbin Liang. arXiv, 2022.\n- [Model-Based Reinforcement Learning Is Minimax-Optimal for Offline Zero-Sum Markov Games](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.04044)\n  - Yuling Yan, Gen Li, Yuxin Chen, and Jianqing Fan. 
arXiv, 2022.\n- [Offline Reinforcement Learning with Causal Structured World Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.01474)\n  - Zheng-Mao Zhu, Xiong-Hui Chen, Hong-Long Tian, Kun Zhang, and Yang Yu. arXiv, 2022.\n- [Incorporating Explicit Uncertainty Estimates into Deep Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.01085)\n  - David Brandfonbrener, Remi Tachet des Combes, and Romain Laroche. arXiv, 2022.\n- [Know Your Boundaries: The Necessity of Explicit Behavioral Cloning in Offline RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.00695)\n  - Wonjoon Goo and Scott Niekum. arXiv, 2022.\n- [Byzantine-Robust Online and Offline Distributed Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.00165)\n  - Yiding Chen, Xuezhou Zhang, Kaiqing Zhang, Mengdi Wang, and Xiaojin Zhu. arXiv, 2022.\n- [Model Generation with Provable Coverability for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.00316)\n  - Chengxing Jia, Hao Yin, Chenxiao Gao, Tian Xu, Lei Yuan, Zongzhang Zhang, and Yang Yu. arXiv, 2022.\n- [You Can't Count on Luck: Why Decision Transformers Fail in Stochastic Environments](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.15967)\n  - Keiran Paster, Sheila McIlraith, and Jimmy Ba. arXiv, 2022.\n- [Multi-Game Decision Transformers](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.15241)\n  - Kuang-Huei Lee, Ofir Nachum, Mengjiao Yang, Lisa Lee, Daniel Freeman, Winnie Xu, Sergio Guadarrama, Ian Fischer, Eric Jang, Henryk Michalewski, and Igor Mordatch. arXiv, 2022.\n- [Hierarchical Planning Through Goal-Conditioned Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.11790)\n  - Jinning Li, Chen Tang, Masayoshi Tomizuka, and Wei Zhan. arXiv, 2022.\n- [Distance-Sensitive Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.11027)\n  - Jianxiong Li, Xianyuan Zhan, Haoran Xu, Xiangyu Zhu, Jingjing Liu, and Ya-Qin Zhang. 
arXiv, 2022.\n- [No More Pesky Hyperparameters: Offline Hyperparameter Tuning for RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.08716)\n  - Han Wang, Archit Sakhadeo, Adam White, James Bell, Vincent Liu, Xutong Zhao, Puer Liu, Tadashi Kozuno, Alona Fyshe, and Martha White. arXiv, 2022.\n- [How to Spend Your Robot Time: Bridging Kickstarting and Offline Reinforcement Learning for Vision-based Robotic Manipulation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.03353)\n  - Alex X. Lee, Coline Devin, Jost Tobias Springenberg, Yuxiang Zhou, Thomas Lampe, Abbas Abdolmaleki, and Konstantinos Bousmalis. arXiv, 2022.\n- [Offline Visual Representation Learning for Embodied Navigation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.13226)\n  - Karmesh Yadav, Ram Ramrakhya, Arjun Majumdar, Vincent-Pierre Berges, Sachit Kuhar, Dhruv Batra, Alexei Baevski, and Oleksandr Maksymets. arXiv, 2022.\n- [Towards Flexible Inference in Sequential Decision Problems via Bidirectional Transformers](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.13326)\n  - Micah Carroll, Jessy Lin, Orr Paradise, Raluca Georgescu, Mingfei Sun, David Bignell, Stephanie Milani, Katja Hofmann, Matthew Hausknecht, Anca Dragan, and Sam Devlin. arXiv, 2022.\n- [BATS: Best Action Trajectory Stitching](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.12026)\n  - Ian Char, Viraj Mehta, Adam Villaflor, John M. Dolan, Jeff Schneider. arXiv, 2022.\n- [Settling the Sample Complexity of Model-Based Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.05275)\n  - Gen Li, Laixi Shi, Yuxin Chen, Yuejie Chi, and Yuting Wei. arXiv, 2022.\n- [PAnDR: Fast Adaptation to New Environments from Offline Experiences via Decoupling Policy and Environment Representations](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.02877)\n  - Tong Sang, Hongyao Tang, Yi Ma, Jianye Hao, Yan Zheng, Zhaopeng Meng, Boyan Li, and Zhen Wang. 
arXiv, 2022.\n- [Offline Reinforcement Learning Under Value and Density-Ratio Realizability: the Power of Gaps](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.13935)\n  - Jinglin Chen and Nan Jiang. arXiv, 2022.\n- [Meta Reinforcement Learning for Adaptive Control: An Offline Approach](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.09661)\n  - Daniel G. McClement, Nathan P. Lawrence, Johan U. Backstrom, Philip D. Loewen, Michael G. Forbes, and R. Bhushan Gopaluni. arXiv, 2022.\n- [The Efficacy of Pessimism in Asynchronous Q-Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.07368)\n  - Yuling Yan, Gen Li, Yuxin Chen, and Jianqing Fan. arXiv, 2022.\n- [Reinforcement Learning for Linear Quadratic Control is Vulnerable Under Cost Manipulation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.05774)\n  - Yunhan Huang and Quanyan Zhu. arXiv, 2022.\n- [A Regularized Implicit Policy for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.09673)\n  - Shentao Yang, Zhendong Wang, Huangjie Zheng, Yihao Feng, and Mingyuan Zhou. arXiv, 2022.\n- [Reinforcement Learning in Possibly Nonstationary Environments](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.01707) [[code](https:\u002F\u002Fgithub.com\u002Flimengbinggz\u002FCUSUM-RL)]\n  - Mengbing Li, Chengchun Shi, Zhenke Wu, and Piotr Fryzlewicz. arXiv, 2022.\n- [Statistically Efficient Advantage Learning for Offline Reinforcement Learning in Infinite Horizons](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.13163)\n  - Chengchun Shi, Shikai Luo, Hongtu Zhu, and Rui Song. arXiv, 2022.\n- [VRL3: A Data-Driven Framework for Visual Deep Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.10324)\n  - Che Wang, Xufang Luo, Keith Ross, and Dongsheng Li. arXiv, 2022.\n- [Retrieval-Augmented Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.08417)\n  - Anirudh Goyal, Abram L. 
Friesen, Andrea Banino, Theophane Weber, Nan Rosemary Ke, Adria Puigdomenech Badia, Arthur Guez, Mehdi Mirza, Ksenia Konyushkova, Michal Valko, Simon Osindero, Timothy Lillicrap, Nicolas Heess, and Charles Blundell. arXiv, 2022.\n- [Online Decision Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.05607)\n  - Qinqing Zheng, Amy Zhang, and Aditya Grover. arXiv, 2022.\n- [Transferred Q-learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.04709)\n  - Elynn Y. Chen, Michael I. Jordan, and Sai Li. arXiv, 2022.\n- [Settling the Communication Complexity for Distributed Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.04862)\n  - Juliusz Krysztof Ziomek, Jun Wang, and Yaodong Yang. arXiv, 2022.\n- [Offline Reinforcement Learning with Realizability and Single-policy Concentrability](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.04634)\n  - Wenhao Zhan, Baihe Huang, Audrey Huang, Nan Jiang, and Jason D. Lee. arXiv, 2022.\n- [Rethinking Goal-conditioned Supervised Learning and Its Connection to Offline RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.04478)\n  - Rui Yang, Yiming Lu, Wenzhe Li, Hao Sun, Meng Fang, Yali Du, Xiu Li, Lei Han, and Chongjie Zhang. arXiv, 2022.\n- [Stochastic Gradient Descent with Dependent Data for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.02850)\n  - Jing Dong and Xin T. Tong. arXiv, 2022.\n- [Can Wikipedia Help Offline Reinforcement Learning?](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.12122)\n  - Machel Reid, Yutaro Yamada, and Shixiang Shane Gu. arXiv, 2022.\n- [MOORe: Model-based Offline-to-Online Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.10070)\n  - Yihuan Mao, Chao Wang, Bin Wang, and Chongjie Zhang. arXiv, 2022.\n- [Operator Deep Q-Learning: Zero-Shot Reward Transferring in Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.00236)\n  - Ziyang Tang, Yihao Feng, and Qiang Liu. 
arXiv, 2022.\n- [Importance of Empirical Sample Complexity Analysis for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.15578)\n  - Samin Yeasar Arnob, Riashat Islam, and Doina Precup. arXiv, 2022.\n- [Single-Shot Pruning for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.15579)\n  - Samin Yeasar Arnob, Riyasat Ohib, Sergey Plis, and Doina Precup. arXiv, 2022.\n- [Monte Carlo Augmented Actor-Critic for Sparse Reward Deep Reinforcement Learning from Suboptimal Demonstrations](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.07432) [[website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fmcac-rl)] [[code](https:\u002F\u002Fgithub.com\u002Falbertwilcox\u002Fmcac)]\n  - Albert Wilcox, Ashwin Balakrishna, Jules Dedieu, Wyame Benslimane, Daniel S. Brown, and Ken Goldberg. NeurIPS, 2022.\n- [Data-Driven Offline Decision-Making via Invariant Representation Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.11349)\n  - Han Qi, Yi Su, Aviral Kumar, and Sergey Levine. NeurIPS, 2022.\n- [Bellman Residual Orthogonalization for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.12786)\n  - Andrea Zanette, and Martin J. Wainwright. NeurIPS, 2022.\n- [A Near-Optimal Primal-Dual Method for Off-Policy Learning in CMDP](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.06147)\n  - Fan Chen, Junyu Zhang, and Zaiwen Wen. NeurIPS, 2022.\n- [RORL: Robust Offline Reinforcement Learning via Conservative Smoothing](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.02829)\n  - Rui Yang, Chenjia Bai, Xiaoteng Ma, Zhaoran Wang, Chongjie Zhang, and Lei Han. NeurIPS, 2022.\n- [On Gap-dependent Bounds for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.00177)\n  - Xinqi Wang, Qiwen Cui, and Simon S. Du. 
NeurIPS, 2022.
- [Provably Efficient Offline Multi-agent Reinforcement Learning via Strategy-wise Bonus](https://arxiv.org/abs/2206.00159)
  - Qiwen Cui and Simon S. Du. NeurIPS, 2022.
- [Supported Policy Optimization for Offline Reinforcement Learning](https://arxiv.org/abs/2202.06239)
  - Jialong Wu, Haixu Wu, Zihan Qiu, Jianmin Wang, and Mingsheng Long. NeurIPS, 2022.
- [When to Trust Your Simulator: Dynamics-Aware Hybrid Offline-and-Online Reinforcement Learning](https://arxiv.org/abs/2206.13464)
  - Haoyi Niu, Shubham Sharma, Yiwen Qiu, Ming Li, Guyue Zhou, Jianming Hu, and Xianyuan Zhan. NeurIPS, 2022.
- [Why So Pessimistic? Estimating Uncertainties for Offline RL through Ensembles, and Why Their Independence Matters](https://arxiv.org/abs/2205.13703)
  - Seyed Kamyar Seyed Ghasemipour, Shixiang Shane Gu, and Ofir Nachum. NeurIPS, 2022.
- [When does return-conditioned supervised learning work for offline reinforcement learning?](https://arxiv.org/abs/2206.01079)
  - David Brandfonbrener, Alberto Bietti, Jacob Buckman, Romain Laroche, and Joan Bruna. NeurIPS, 2022.
- [Pessimism for Offline Linear Contextual Bandits using ℓp Confidence Sets](https://arxiv.org/abs/2205.10671)
  - Gene Li, Cong Ma, and Nathan Srebro. NeurIPS, 2022.
- [RAMBO-RL: Robust Adversarial Model-Based Offline Reinforcement Learning](https://arxiv.org/abs/2204.12581)
  - Marc Rigter, Bruno Lacerda, and Nick Hawes. NeurIPS, 2022.
- [When is Offline Two-Player Zero-Sum Markov Game Solvable?](https://arxiv.org/abs/2201.03522)
  - Qiwen Cui and Simon S. Du. NeurIPS, 2022.
- [Robust Reinforcement Learning using Offline Data](https://arxiv.org/abs/2208.05767)
  - Kishan Panaganti, Zaiyan Xu, Dileep Kalathil, and Mohammad Ghavamzadeh. NeurIPS, 2022.
- [Bidirectional Learning for Offline Infinite-width Model-based Optimization](https://arxiv.org/abs/2209.07507)
  - Can Chen, Yingxue Zhang, Jie Fu, Xue Liu, and Mark Coates. NeurIPS, 2022.
- [Mildly Conservative Q-Learning for Offline Reinforcement Learning](https://arxiv.org/abs/2206.04745)
  - Jiafei Lyu, Xiaoteng Ma, Xiu Li, and Zongqing Lu. NeurIPS, 2022.
- [Bootstrapped Transformer for Offline Reinforcement Learning](https://arxiv.org/abs/2206.08569)
  - Kerong Wang, Hanye Zhao, Xufang Luo, Kan Ren, Weinan Zhang, and Dongsheng Li. NeurIPS, 2022.
- [LobsDICE: Offline Learning from Observation via Stationary Distribution Correction Estimation](https://arxiv.org/abs/2202.13536)
  - Geon-Hyeong Kim, Jongmin Lee, Youngsoo Jang, Hongseok Yang, and Kee-Eung Kim. NeurIPS, 2022.
- [Latent-Variable Advantage-Weighted Policy Optimization for Offline RL](https://arxiv.org/abs/2203.08949)
  - Xi Chen, Ali Ghadirzadeh, Tianhe Yu, Yuan Gao, Jianhao Wang, Wenzhe Li, Bin Liang, Chelsea Finn, and Chongjie Zhang. NeurIPS, 2022.
- [Double Check Your State Before Trusting It: Confidence-Aware Bidirectional Offline Model-Based Imagination](https://arxiv.org/abs/2206.07989)
  - Jiafei Lyu, Xiu Li, and Zongqing Lu. NeurIPS, 2022.
- [Improving Zero-shot Generalization in Offline Reinforcement Learning using Generalized Similarity Functions](https://arxiv.org/abs/2111.14629)
  - Bogdan Mazoure, Ilya Kostrikov, Ofir Nachum, and Jonathan Tompson. NeurIPS, 2022.
- [Offline Goal-Conditioned Reinforcement Learning via f-Advantage Regression](https://openreview.net/forum?id=_h29VprPHD)
  - Yecheng Jason Ma, Jason Yan, Dinesh Jayaraman, and Osbert Bastani. NeurIPS, 2022.
- [Dual Generator Offline Reinforcement Learning](https://arxiv.org/abs/2211.01471)
  - Quan Vuong, Aviral Kumar, Sergey Levine, and Yevgen Chebotar. NeurIPS, 2022.
- [MoCoDA: Model-based Counterfactual Data Augmentation](https://arxiv.org/abs/2210.11287)
  - Silviu Pitis, Elliot Creager, Ajay Mandlekar, and Animesh Garg. NeurIPS, 2022.
- [A Policy-Guided Imitation Approach for Offline Reinforcement Learning](https://arxiv.org/abs/2210.08323) [[code](https://github.com/ryanxhr/POR)]
  - Haoran Xu, Li Jiang, Jianxiong Li, and Xianyuan Zhan. NeurIPS, 2022.
- [A Unified Framework for Alternating Offline Model Training and Policy Learning](https://arxiv.org/abs/2210.05922)
  - Shentao Yang, Shujian Zhang, Yihao Feng, and Mingyuan Zhou. NeurIPS, 2022.
- [Model-Based Offline Reinforcement Learning with Pessimism-Modulated Dynamics Belief](https://arxiv.org/abs/2210.06692)
  - Kaiyang Guo, Yunfeng Shao, and Yanhui Geng. NeurIPS, 2022.
- [S2P: State-conditioned Image Synthesis for Data Augmentation in Offline Reinforcement Learning](https://arxiv.org/abs/2209.15256)
  - Daesol Cho, Dongseok Shim, and H. Jin Kim. NeurIPS, 2022.
- [ASPiRe: Adaptive Skill Priors for Reinforcement Learning](https://arxiv.org/abs/2209.15205)
  - Mengda Xu, Manuela Veloso, and Shuran Song. NeurIPS, 2022.
- [Skills Regularized Task Decomposition for Multi-task Offline Reinforcement Learning](https://openreview.net/forum?id=uuaMrewU9Kk)
  - Minjong Yoo, Sangwoo Cho, and Honguk Woo. NeurIPS, 2022.
- [Offline Multi-Agent Reinforcement Learning with Knowledge Distillation](https://openreview.net/forum?id=yipUuqxveCy)
  - Wei-Cheng Tseng, Tsun-Hsuan Wang, Yen-Chen Lin, and Phillip Isola. NeurIPS, 2022.
- [Shadow Knowledge Distillation: Bridging Offline and Online Knowledge Transfer](https://openreview.net/forum?id=prQT0gN81oG)
  - Lujun Li and Zhe Jin. NeurIPS, 2022.
- [Addressing Optimism Bias in Sequence Modeling for Reinforcement Learning](https://arxiv.org/abs/2207.10295)
  - Adam Villaflor, Zhe Huang, Swapnil Pande, John Dolan, and Jeff Schneider. ICML, 2022.
- [Offline RL Policies Should be Trained to be Adaptive](https://arxiv.org/abs/2207.02200)
  - Dibya Ghosh, Anurag Ajay, Pulkit Agrawal, and Sergey Levine. ICML, 2022.
- [Adversarially Trained Actor Critic for Offline Reinforcement Learning](https://arxiv.org/abs/2202.02446)
  - Ching-An Cheng, Tengyang Xie, Nan Jiang, and Alekh Agarwal. ICML, 2022.
- [Pessimistic Minimax Value Iteration: Provably Efficient Equilibrium Learning from Offline Datasets](https://arxiv.org/abs/2202.07511)
  - Han Zhong, Wei Xiong, Jiyuan Tan, Liwei Wang, Tong Zhang, Zhaoran Wang, and Zhuoran Yang. ICML, 2022.
- [How to Leverage Unlabeled Data in Offline Reinforcement Learning](https://arxiv.org/abs/2202.01741)
  - Tianhe Yu, Aviral Kumar, Yevgen Chebotar, Karol Hausman, Chelsea Finn, and Sergey Levine. ICML, 2022.
- [Plan Better Amid Conservatism: Offline Multi-Agent Reinforcement Learning with Actor Rectification](https://arxiv.org/abs/2111.11188)
  - Ling Pan, Longbo Huang, Tengyu Ma, and Huazhe Xu. ICML, 2022.
- [Learning Pseudometric-based Action Representations for Offline Reinforcement Learning](https://proceedings.mlr.press/v162/gu22b.html)
  - Pengjie Gu, Mengchen Zhao, Chen Chen, Dong Li, Jianye Hao, and Bo An. ICML, 2022.
- [Offline Meta-Reinforcement Learning with Online Self-Supervision](https://arxiv.org/abs/2107.03974)
  - Vitchyr H. Pong, Ashvin Nair, Laura Smith, Catherine Huang, and Sergey Levine. ICML, 2022.
- [Versatile Offline Imitation from Observations and Examples via Regularized State-Occupancy Matching](https://arxiv.org/abs/2202.02433)
  - Yecheng Jason Ma, Andrew Shen, Dinesh Jayaraman, and Osbert Bastani. ICML, 2022.
- [Constrained Offline Policy Optimization](https://proceedings.mlr.press/v162/polosky22a.html)
  - Nicholas Polosky, Bruno C. Da Silva, Madalina Fiterau, and Jithin Jagannath. ICML, 2022.
- [Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations](https://proceedings.mlr.press/v162/xu22l.html)
  - Haoran Xu, Xianyuan Zhan, Honglei Yin, and Huiling Qin. ICML, 2022.
- [Provably Efficient Offline Reinforcement Learning for Partially Observable Markov Decision Processes](https://proceedings.mlr.press/v162/guo22a.html)
  - Hongyi Guo, Qi Cai, Yufeng Zhang, Zhuoran Yang, and Zhaoran Wang. ICML, 2022.
- [Pessimistic Q-Learning for Offline Reinforcement Learning: Towards Optimal Sample Complexity](https://arxiv.org/abs/2202.13890)
  - Laixi Shi, Gen Li, Yuting Wei, Yuxin Chen, and Yuejie Chi. ICML, 2022.
- [Efficient Reinforcement Learning in Block MDPs: A Model-free Representation Learning Approach](https://arxiv.org/abs/2202.00063)
  - Xuezhou Zhang, Yuda Song, Masatoshi Uehara, Mengdi Wang, Alekh Agarwal, and Wen Sun. ICML, 2022.
- [Prompting Decision Transformer for Few-Shot Policy Generalization](https://arxiv.org/abs/2206.13499)
  - Mengdi Xu, Yikang Shen, Shun Zhang, Yuchen Lu, Ding Zhao, Joshua B. Tenenbaum, and Chuang Gan. ICML, 2022.
- [Regularizing a Model-based Policy Stationary Distribution to Stabilize Offline Reinforcement Learning](https://arxiv.org/abs/2206.07166)
  - Shentao Yang, Yihao Feng, Shujian Zhang, and Mingyuan Zhou. ICML, 2022.
- [On the Role of Discount Factor in Offline Reinforcement Learning](https://arxiv.org/abs/2206.03383)
  - Hao Hu, Yiqin Yang, Qianchuan Zhao, and Chongjie Zhang. ICML, 2022.
- [Koopman Q-learning: Offline Reinforcement Learning via Symmetries of Dynamics](https://arxiv.org/abs/2111.01365)
  - Matthias Weissenbacher, Samarth Sinha, Animesh Garg, and Yoshinobu Kawahara. ICML, 2022.
- [Representation Learning for Online and Offline RL in Low-rank MDPs](https://arxiv.org/abs/2110.04652) [[video](https://m.youtube.com/watch?v=EynREeip-y8s)]
  - Masatoshi Uehara, Xuezhou Zhang, and Wen Sun. ICLR, 2022.
- [Pessimistic Model-based Offline Reinforcement Learning under Partial Coverage](https://arxiv.org/abs/2107.06226) [[video](https://www.youtube.com/watch?v=aPce6Y-NqpQs)]
  - Masatoshi Uehara and Wen Sun. ICLR, 2022.
- [Revisiting Design Choices in Model-Based Offline Reinforcement Learning](https://arxiv.org/abs/2110.04135)
  - Cong Lu, Philip J. Ball, Jack Parker-Holder, Michael A. Osborne, and Stephen J. Roberts. ICLR, 2022.
- [DR3: Value-Based Deep Reinforcement Learning Requires Explicit Regularization](https://arxiv.org/abs/2112.04716)
  - Aviral Kumar, Rishabh Agarwal, Tengyu Ma, Aaron Courville, George Tucker, and Sergey Levine. ICLR, 2022.
- [COptiDICE: Offline Constrained Reinforcement Learning via Stationary Distribution Correction Estimation](https://arxiv.org/abs/2204.08957)
  - Jongmin Lee, Cosmin Paduraru, Daniel J. Mankowitz, Nicolas Heess, Doina Precup, Kee-Eung Kim, and Arthur Guez. ICLR, 2022.
- [POETREE: Interpretable Policy Learning with Adaptive Decision Trees](https://arxiv.org/abs/2203.08057)
  - Alizée Pace, Alex J. Chan, and Mihaela van der Schaar. ICLR, 2022.
- [Planning in Stochastic Environments with a Learned Model](https://openreview.net/forum?id=X6D9bAHhBQ1)
  - Ioannis Antonoglou, Julian Schrittwieser, Sherjil Ozair, Thomas K Hubert, and David Silver. ICLR, 2022.
- [Offline Reinforcement Learning with Value-based Episodic Memory](https://arxiv.org/abs/2110.09796)
  - Xiaoteng Ma, Yiqin Yang, Hao Hu, Qihan Liu, Jun Yang, Chongjie Zhang, Qianchuan Zhao, and Bin Liang. ICLR, 2022.
- [When Should We Prefer Offline Reinforcement Learning Over Behavioral Cloning?](https://arxiv.org/abs/2204.05618)
  - Aviral Kumar, Joey Hong, Anikait Singh, and Sergey Levine. ICLR, 2022.
- [Learning Value Functions from Undirected State-only Experience](https://arxiv.org/abs/2204.12458) [[website](https://matthewchang.github.io/latent_action_qlearning_site/)] [[code](https://github.com/arjung128/laq)]
  - Matthew Chang, Arjun Gupta, and Saurabh Gupta. ICLR, 2022.
- [Rethinking Goal-Conditioned Supervised Learning and Its Connection to Offline RL](https://openreview.net/forum?id=KJztlfGPdwW)
  - Rui Yang, Yiming Lu, Wenzhe Li, Hao Sun, Meng Fang, Yali Du, Xiu Li, Lei Han, and Chongjie Zhang. ICLR, 2022.
- [Offline Reinforcement Learning with Implicit Q-Learning](https://arxiv.org/abs/2110.06169)
  - Ilya Kostrikov, Ashvin Nair, and Sergey Levine. ICLR, 2022.
- [RvS: What is Essential for Offline RL via Supervised Learning?](https://arxiv.org/abs/2112.10751)
  - Scott Emmons, Benjamin Eysenbach, Ilya Kostrikov, and Sergey Levine. ICLR, 2022.
- [Pareto Policy Pool for Model-based Offline Reinforcement Learning](https://openreview.net/forum?id=OqcZu8JIIzS)
  - Yijun Yang, Jing Jiang, Tianyi Zhou, Jie Ma, and Yuhui Shi. ICLR, 2022.
- [CrowdPlay: Crowdsourcing Human Demonstrations for Offline Learning](https://openreview.net/forum?id=qyTBxTztIpQ)
  - Matthias Gerstgrasser, Rakshit Trivedi, and David C. Parkes. ICLR, 2022.
- [COPA: Certifying Robust Policies for Offline Reinforcement Learning against Poisoning Attacks](https://arxiv.org/abs/2203.08398)
  - Fan Wu, Linyi Li, Chejian Xu, Huan Zhang, Bhavya Kailkhura, Krishnaram Kenthapadi, Ding Zhao, and Bo Li. ICLR, 2022.
- [DARA: Dynamics-Aware Reward Augmentation in Offline Reinforcement Learning](https://arxiv.org/abs/2203.06662)
  - Jinxin Liu, Hongyin Zhang, and Donglin Wang. ICLR, 2022.
- [Near-optimal Offline Reinforcement Learning with Linear Representation: Leveraging Variance Information with Pessimism](https://arxiv.org/abs/2203.05804)
  - Ming Yin, Yaqi Duan, Mengdi Wang, and Yu-Xiang Wang. ICLR, 2022.
- [Pessimistic Bootstrapping for Uncertainty-Driven Offline Reinforcement Learning](https://arxiv.org/abs/2202.11566)
  - Chenjia Bai, Lingxiao Wang, Zhuoran Yang, Zhihong Deng, Animesh Garg, Peng Liu, and Zhaoran Wang. ICLR, 2022.
- [Offline Neural Contextual Bandits: Pessimism, Optimization and Generalization](https://arxiv.org/abs/2111.13807)
  - Thanh Nguyen-Tang, Sunil Gupta, A. Tuan Nguyen, and Svetha Venkatesh. ICLR, 2022.
- [Generalized Decision Transformer for Offline Hindsight Information Matching](https://arxiv.org/abs/2111.10364) [[website](https://sites.google.com/view/generalizeddt)]
  - Hiroki Furuta, Yutaka Matsuo, and Shixiang Shane Gu. ICLR, 2022.
- [Model-Based Offline Meta-Reinforcement Learning with Regularization](https://arxiv.org/abs/2202.02929)
  - Sen Lin, Jialin Wan, Tengyu Xu, Yingbin Liang, and Junshan Zhang. ICLR, 2022.
- [AW-Opt: Learning Robotic Skills with Imitation and Reinforcement at Scale](https://arxiv.org/abs/2111.05424) [[website](https://awopt.github.io/)]
  - Yao Lu, Karol Hausman, Yevgen Chebotar, Mengyuan Yan, Eric Jang, Alexander Herzog, Ted Xiao, Alex Irpan, Mohi Khansari, Dmitry Kalashnikov, and Sergey Levine. CoRL, 2022.
- [Dealing with the Unknown: Pessimistic Offline Reinforcement Learning](https://arxiv.org/abs/2111.05440)
  - Jinning Li, Chen Tang, Masayoshi Tomizuka, and Wei Zhan. CoRL, 2022.
- [You Only Evaluate Once: a Simple Baseline Algorithm for Offline RL](https://arxiv.org/abs/2110.02304)
  - Wonjoon Goo and Scott Niekum. CoRL, 2022.
- [S4RL: Surprisingly Simple Self-Supervision for Offline Reinforcement Learning](https://arxiv.org/abs/2103.06326)
  - Samarth Sinha and Animesh Garg. CoRL, 2022.
- [A Workflow for Offline Model-Free Robotic Reinforcement Learning](https://arxiv.org/abs/2109.10813) [[website](https://sites.google.com/view/offline-rl-workflow)]
  - Aviral Kumar, Anikait Singh, Stephen Tian, Chelsea Finn, and Sergey Levine. CoRL, 2022.
- [Beyond Pick-and-Place: Tackling Robotic Stacking of Diverse Shapes](https://arxiv.org/abs/2110.06192) [[blog](https://deepmind.com/blog/article/stacking-our-way-to-more-general-robots)] [[video](https://www.youtube.com/watch?v=BxOKPEtMuZw)] [[code](https://github.com/deepmind/rgb_stacking)]
  - Alex X. Lee, Coline Devin, Yuxiang Zhou, Thomas Lampe, Konstantinos Bousmalis, Jost Tobias Springenberg, Arunkumar Byravan, Abbas Abdolmaleki, Nimrod Gileadi, David Khosid, Claudio Fantacci, Jose Enrique Chen, Akhil Raju, Rae Jeong, Michael Neunert, Antoine Laurens, Stefano Saliceti, Federico Casarini, Martin Riedmiller, Raia Hadsell, and Francesco Nori. CoRL, 2022.
- [Finetuning from Offline Reinforcement Learning: Challenges, Trade-offs and Practical Solutions](https://arxiv.org/abs/2303.17396)
  - Yicheng Luo, Jackie Kay, Edward Grefenstette, and Marc Peter Deisenroth. RLDM, 2022.
- [Offline Reinforcement Learning with Representations for Actions](https://www.sciencedirect.com/science/article/abs/pii/S0020025522009033?via%3Dihub)
  - Xingzhou Lou, Qiyue Yin, Junge Zhang, Chao Yu, Zhaofeng He, Nengjie Cheng, and Kaiqi Huang. Information Sciences, 2022.
- [Towards Off-Policy Learning for Ranking Policies with Logged Feedback](https://www.aaai.org/AAAI22Papers/AAAI-8695.XiaoT.pdf)
  - Teng Xiao and Suhang Wang. AAAI, 2022.
- [Safe Offline Reinforcement Learning Through Hierarchical Policies](https://link.springer.com/chapter/10.1007/978-3-031-05936-0_30)
  - Shaofan Liu and Shiliang Sun. PAKDD, 2022.
- [TD3 with Reverse KL Regularizer for Offline Reinforcement Learning from Mixed Datasets](https://arxiv.org/abs/2212.02125)
  - Yuanying Cai, Chuheng Zhang, Li Zhao, Wei Shen, Xuyun Zhang, Lei Song, Jiang Bian, Tao Qin, and Tieyan Liu. ICDM, 2022.
- [Sample Complexity of Offline Reinforcement Learning with Deep ReLU Networks](https://arxiv.org/abs/2103.06671)
  - Thanh Nguyen-Tang, Sunil Gupta, Hung Tran-The, and Svetha Venkatesh. arXiv, 2021.
- [Model Selection in Batch Policy Optimization](https://arxiv.org/abs/2112.12320)
  - Jonathan N. Lee, George Tucker, Ofir Nachum, and Bo Dai. arXiv, 2021.
- [Learning Contraction Policies from Offline Data](https://arxiv.org/abs/2112.05911)
  - Navid Rezazadeh, Maxwell Kolarich, Solmaz S. Kia, and Negar Mehr. arXiv, 2021.
- [CoMPS: Continual Meta Policy Search](https://arxiv.org/abs/2112.04467)
  - Glen Berseth, Zhiwei Zhang, Grace Zhang, Chelsea Finn, and Sergey Levine. arXiv, 2021.
- [MESA: Offline Meta-RL for Safe Adaptation and Fault Tolerance](https://arxiv.org/abs/2112.03575)
  - Michael Luo, Ashwin Balakrishna, Brijen Thananjeyan, Suraj Nair, Julian Ibarz, Jie Tan, Chelsea Finn, Ion Stoica, and Ken Goldberg. arXiv, 2021.
- [Offline Pre-trained Multi-Agent Decision Transformer: One Big Sequence Model Conquers All StarCraftII Tasks](https://arxiv.org/abs/2112.02845)
  - Linghui Meng, Muning Wen, Yaodong Yang, Chenyang Le, Xiyun Li, Weinan Zhang, Ying Wen, Haifeng Zhang, Jun Wang, and Bo Xu. arXiv, 2021.
- [Policy Gradient and Actor-Critic Learning in Continuous Time and Space: Theory and Algorithms](https://arxiv.org/abs/2111.11232)
  - Yanwei Jia and Xun Yu Zhou. arXiv, 2021.
- [Offline Reinforcement Learning: Fundamental Barriers for Value Function Approximation](https://arxiv.org/abs/2111.10919) [[video](https://youtu.be/QS2xVHgBg-k)]
  - Dylan J. Foster, Akshay Krishnamurthy, David Simchi-Levi, and Yunzong Xu. arXiv, 2021.
- [UMBRELLA: Uncertainty-Aware Model-Based Offline Reinforcement Learning Leveraging Planning](https://arxiv.org/abs/2111.11097)
  - Christopher Diehl, Timo Sievernich, Martin Krüger, Frank Hoffmann, and Torsten Bertran. arXiv, 2021.
- [Exploiting Action Impact Regularity and Partially Known Models for Offline Reinforcement Learning](https://arxiv.org/abs/2111.08066)
  - Vincent Liu, James Wright, and Martha White. arXiv, 2021.
- [Batch Reinforcement Learning from Crowds](https://arxiv.org/abs/2111.04279)
  - Guoxi Zhang and Hisashi Kashima. arXiv, 2021.
- [SCORE: Spurious COrrelation REduction for Offline Reinforcement Learning](https://arxiv.org/abs/2110.12468)
  - Zhihong Deng, Zuyue Fu, Lingxiao Wang, Zhuoran Yang, Chenjia Bai, Zhaoran Wang, and Jing Jiang. arXiv, 2021.
- [Safely Bridging Offline and Online Reinforcement Learning](https://arxiv.org/abs/2110.13060)
  - Wanqiao Xu, Kan Xu, Hamsa Bastani, and Osbert Bastani. arXiv, 2021.
- [Efficient Robotic Manipulation Through Offline-to-Online Reinforcement Learning and Goal-Aware State Information](https://arxiv.org/abs/2110.10905)
  - Jin Li, Xianyuan Zhan, Zixu Xiao, and Guyue Zhou. arXiv, 2021.
- [Value Penalized Q-Learning for Recommender Systems](https://arxiv.org/abs/2110.07923)
  - Chengqian Gao, Ke Xu, and Peilin Zhao. arXiv, 2021.
- [Offline Reinforcement Learning with Soft Behavior Regularization](https://arxiv.org/abs/2110.07395)
  - Haoran Xu, Xianyuan Zhan, Jianxiong Li, and Honglei Yin. arXiv, 2021.
- [Planning from Pixels in Environments with Combinatorially Hard Search Spaces](https://arxiv.org/abs/2110.06149)
  - Marco Bagatella, Mirek Olšák, Michal Rolínek, and Georg Martius. arXiv, 2021.
- [StARformer: Transformer with State-Action-Reward Representations](https://arxiv.org/abs/2110.06206)
  - Jinghuan Shang and Michael S. Ryoo. arXiv, 2021.
- [Offline RL With Resource Constrained Online Deployment](https://arxiv.org/abs/2110.03165) [[code](https://github.com/JayanthRR/RC-OfflineRL)]
  - Jayanth Reddy Regatti, Aniket Anand Deshmukh, Frank Cheng, Young Hun Jung, Abhishek Gupta, and Urun Dogan. arXiv, 2021.
- [Lifelong Robotic Reinforcement Learning by Retaining Experiences](https://arxiv.org/abs/2109.09180) [[website](https://sites.google.com/view/retain-experience/)]
  - Annie Xie and Chelsea Finn. arXiv, 2021.
- [Dual Behavior Regularized Reinforcement Learning](https://arxiv.org/abs/2109.09037)
  - Chapman Siu, Jason Traish, and Richard Yi Da Xu. arXiv, 2021.
- [DCUR: Data Curriculum for Teaching via Samples with Reinforcement Learning](https://arxiv.org/abs/2109.07380) [[website](https://sites.google.com/view/teach-curr/home)] [[code](https://github.com/DanielTakeshi/DCUR)]
  - Daniel Seita, Abhinav Gopal, Zhao Mandi, and John Canny. arXiv, 2021.
- [DROMO: Distributionally Robust Offline Model-based Policy Optimization](https://arxiv.org/abs/2109.07275)
  - Ruizhen Liu, Dazhi Zhong, and Zhicong Chen. arXiv, 2021.
- [Implicit Behavioral Cloning](https://arxiv.org/abs/2109.00137)
  - Pete Florence, Corey Lynch, Andy Zeng, Oscar Ramirez, Ayzaan Wahid, Laura Downs, Adrian Wong, Johnny Lee, Igor Mordatch, and Jonathan Tompson. arXiv, 2021.
- [Reducing Conservativeness Oriented Offline Reinforcement Learning](https://arxiv.org/abs/2103.00098)
  - Hongchang Zhang, Jianzhun Shao, Yuhang Jiang, Shuncheng He, and Xiangyang Ji. arXiv, 2021.
- [Policy Gradients Incorporating the Future](https://arxiv.org/abs/2108.02096)
  - David Venuto, Elaine Lau, Doina Precup, and Ofir Nachum. arXiv, 2021.
- [Offline Decentralized Multi-Agent Reinforcement Learning](https://arxiv.org/abs/2108.01832)
  - Jiechuan Jiang and Zongqing Lu. arXiv, 2021.
- [OPAL: Offline Preference-Based Apprenticeship Learning](https://arxiv.org/abs/2107.09251) [[website](https://sites.google.com/view/offline-prefs)]
  - Daniel Shin and Daniel S. Brown. arXiv, 2021.
- [Constraints Penalized Q-Learning for Safe Offline Reinforcement Learning](https://arxiv.org/abs/2107.09003)
  - Haoran Xu, Xianyuan Zhan, and Xiangyu Zhu. arXiv, 2021.
- [Where is the Grass Greener? Revisiting Generalized Policy Iteration for Offline Reinforcement Learning](https://arxiv.org/abs/2107.01407)
  - Lionel Blondé and Alexandros Kalousis. arXiv, 2021.
- [The Least Restriction for Offline Reinforcement Learning](https://arxiv.org/abs/2107.01757)
  - Zizhou Su. arXiv, 2021.
- [Offline-to-Online Reinforcement Learning via Balanced Replay and Pessimistic Q-Ensemble](https://arxiv.org/abs/2107.00591)
  - Seunghyun Lee, Younggyo Seo, Kimin Lee, Pieter Abbeel, and Jinwoo Shin. arXiv, 2021.
- [Causal Reinforcement Learning using Observational and Interventional Data](https://arxiv.org/abs/2106.14421)
  - Maxime Gasse, Damien Grasset, Guillaume Gaudron, and Pierre-Yves Oudeyer. arXiv, 2021.
- [On the Sample Complexity of Batch Reinforcement Learning with Policy-Induced Data](https://arxiv.org/abs/2106.09973)
  - Chenjun Xiao, Ilbin Lee, Bo Dai, Dale Schuurmans, and Csaba Szepesvari. arXiv, 2021.
- [Behavioral Priors and Dynamics Models: Improving Performance and Domain Transfer in Offline RL](https://arxiv.org/abs/2106.09119) [[website](https://sites.google.com/berkeley.edu/mabe)]
  - Catherine Cang, Aravind Rajeswaran, Pieter Abbeel, and Michael Laskin. arXiv, 2021.
- [On Multi-objective Policy Optimization as a Tool for Reinforcement Learning](https://arxiv.org/abs/2106.08199)
  - Abbas Abdolmaleki, Sandy H. Huang, Giulia Vezzani, Bobak Shahriari, Jost Tobias Springenberg, Shruti Mishra, Dhruva TB, Arunkumar Byravan, Konstantinos Bousmalis, Andras Gyorgy, Csaba Szepesvari, Raia Hadsell, Nicolas Heess, and Martin Riedmiller. arXiv, 2021.
- [Offline Reinforcement Learning as Anti-Exploration](https://arxiv.org/abs/2106.06431)
  - Shideh Rezaeifar, Robert Dadashi, Nino Vieillard, Léonard Hussenot, Olivier Bachem, Olivier Pietquin, and Matthieu Geist. arXiv, 2021.
- [Corruption-Robust Offline Reinforcement Learning](https://arxiv.org/abs/2106.06630)
  - Xuezhou Zhang, Yiding Chen, Jerry Zhu, and Wen Sun. arXiv, 2021.
- [Offline Inverse Reinforcement Learning](https://arxiv.org/abs/2106.05068)
  - Firas Jarboui and Vianney Perchet. arXiv, 2021.
- [Heuristic-Guided Reinforcement Learning](https://arxiv.org/abs/2106.02757)
  - Ching-An Cheng, Andrey Kolobov, and Adith Swaminathan. arXiv, 2021.
- [Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039)
  - Michael Janner, Qiyang Li, and Sergey Levine. arXiv, 2021.
- [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345)
  - Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, and Igor Mordatch. arXiv, 2021.
- [Model-Based Offline Planning with Trajectory Pruning](https://arxiv.org/abs/2105.07351)
  - Xianyuan Zhan, Xiangyu Zhu, and Haoran Xu. arXiv, 2021.
- [InferNet for Delayed Reinforcement Tasks: Addressing the Temporal Credit Assignment Problem](https://arxiv.org/abs/2105.00568)
  - Markel Sanz Ausin, Hamoon Azizsoltani, Song Ju, Yeo Jin Kim, and Min Chi. arXiv, 2021.
- [Infinite-Horizon Offline Reinforcement Learning with Linear Function Approximation: Curse of Dimensionality and Algorithm](https://arxiv.org/abs/2103.09847) [[video](https://www.youtube.com/watch?v=uOIvo1wQ_RQ)]
  - Lin Chen, Bruno Scherrer, and Peter L. Bartlett. arXiv, 2021.
- [MT-Opt: Continuous Multi-Task Robotic Reinforcement Learning at Scale](https://arxiv.org/abs/2104.08212) [[website](https://karolhausman.github.io/mt-opt/)]
  - Dmitry Kalashnikov, Jacob Varley, Yevgen Chebotar, Benjamin Swanson, Rico Jonschkowski, Chelsea Finn, Sergey Levine, and Karol Hausman. arXiv, 2021.
- [Distributional Offline Continuous-Time Reinforcement Learning with Neural Physics-Informed PDEs (SciPhy RL for DOCTR-L)](https://arxiv.org/abs/2104.01040)
  - Igor Halperin. arXiv, 2021.
- [Regularized Behavior Value Estimation](https://arxiv.org/abs/2103.09575)
  - Caglar Gulcehre, Sergio Gómez Colmenarejo, Ziyu Wang, Jakub Sygnowski, Thomas Paine, Konrad Zolna, Yutian Chen, Matthew Hoffman, Razvan Pascanu, and Nando de Freitas. arXiv, 2021.
- [Improved Context-Based Offline Meta-RL with Attention and Contrastive Learning](https://arxiv.org/abs/2102.10774)
  - Lanqing Li, Yuanhao Huang, and Dijun Luo. arXiv, 2021.
- [Instrumental Variable Value Iteration for Causal Offline Reinforcement Learning](https://arxiv.org/abs/2102.09907)
  - Luofeng Liao, Zuyue Fu, Zhuoran Yang, Mladen Kolar, and Zhaoran Wang. arXiv, 2021.
- [GELATO: Geometrically Enriched Latent Model for Offline Reinforcement Learning](https://arxiv.org/abs/2102.11327)
  - Guy Tennenholtz, Nir Baram, and Shie Mannor. arXiv, 2021.
- [MUSBO: Model-based Uncertainty Regularized and Sample Efficient Batch Optimization for Deployment Constrained Reinforcement Learning](https://arxiv.org/abs/2102.11448)
  - DiJia Su, Jason D. Lee, John M. Mulvey, and H. Vincent Poor. arXiv, 2021.
- [Continuous Doubly Constrained Batch Reinforcement Learning](https://arxiv.org/abs/2102.09225)
  - Rasool Fakoor, Jonas Mueller, Pratik Chaudhari, and Alexander J. Smola. arXiv, 2021.
- [Q-Value Weighted Regression: Reinforcement Learning with Limited Data](https://arxiv.org/abs/2102.06782)
  - Piotr Kozakowski, Łukasz Kaiser, Henryk Michalewski, Afroz Mohiuddin, and Katarzyna Kańska. arXiv, 2021.
- [Finite Sample Analysis of Minimax Offline Reinforcement Learning: Completeness, Fast Rates and First-Order Efficiency](https://arxiv.org/abs/2102.02981)
  - Masatoshi Uehara, Masaaki Imaizumi, Nan Jiang, Nathan Kallus, Wen Sun, and Tengyang Xie. arXiv, 2021.
- [Fast Rates for the Regret of Offline Reinforcement Learning](https://arxiv.org/abs/2102.00479) [[video](https://www.youtube.com/watch?v=eGZ-2JU9zKE)]
  - Yichun Hu, Nathan Kallus, and Masatoshi Uehara. arXiv, 2021.
- [Safe Policy Learning through Extrapolation: Application to Pre-trial Risk Assessment](https://arxiv.org/abs/2109.11679) [[video](https://www.youtube.com/watch?v=Gd2-MxJQTKA)]
  - Eli Ben-Michael, D. James Greiner, Kosuke Imai, and Zhichao Jiang.
- [Weighted Model Estimation for Offline Model-based Reinforcement Learning](https://papers.nips.cc/paper/2021/hash/949694a5059302e7283073b502f094d7-Abstract.html)
  - Toru Hishinuma and Kei Senda. NeurIPS, 2021.
- [A Minimalist Approach to Offline Reinforcement Learning](https://arxiv.org/abs/2106.06860)
  - Scott Fujimoto and Shixiang Shane Gu. NeurIPS, 2021.
- [Conservative Offline Distributional Reinforcement Learning](https://arxiv.org/abs/2107.06106)
  - Yecheng Jason Ma, Dinesh Jayaraman, and Osbert Bastani. NeurIPS, 2021.
- [Pessimism Meets Invariance: Provably Efficient Offline Mean-Field Multi-Agent RL](https://openreview.net/forum?id=Ww1e07fy9fC)
  - Minshuo Chen, Yan Li, Ethan Wang, Zhuoran Yang, Zhaoran Wang, and Tuo Zhao. NeurIPS, 2021.
- [Believe What You See: Implicit Constraint Approach for Offline Multi-Agent Reinforcement Learning](https://arxiv.org/abs/2106.03400)
  - Yiqin Yang, Xiaoteng Ma, Chenghao Li, Zewu Zheng, Qiyuan Zhang, Gao Huang, Jun Yang, and Qianchuan Zhao. NeurIPS, 2021.
- [Provable Benefits of Actor-Critic Methods for Offline Reinforcement Learning](https://arxiv.org/abs/2108.08812)
  - Andrea Zanette, Martin J. Wainwright, and Emma Brunskill. NeurIPS, 2021.
- [Multi-Objective SPIBB: Seldonian Offline Policy Improvement with Safety Constraints in Finite MDPs](https://arxiv.org/abs/2106.00099)
  - Harsh Satija, Philip S. Thomas, Joelle Pineau, and Romain Laroche. NeurIPS, 2021.
- [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039)
  - Michael Janner, Qiyang Li, and Sergey Levine. NeurIPS, 2021.
- [Bridging Offline Reinforcement Learning and Imitation Learning: A Tale of Pessimism](https://arxiv.org/abs/2103.12021) [[video](https://www.youtube.com/watch?v=T1Am0bGzH4A)]
  - Paria Rashidinejad, Banghua Zhu, Cong Ma, Jiantao Jiao, and Stuart Russell. NeurIPS, 2021.
- [Offline Reinforcement Learning with Reverse Model-based Imagination](https://arxiv.org/abs/2110.00188)
  - Jianhao Wang, Wenzhe Li, Haozhe Jiang, Guangxiang Zhu, Siyuan Li, and Chongjie Zhang. NeurIPS, 2021.
- [Offline Meta Reinforcement Learning -- Identifiability Challenges and Effective Data Collection Strategies](https://openreview.net/forum?id=IBdEfhLveS)
  - Ron Dorfman, Idan Shenfeld, and Aviv Tamar. NeurIPS, 2021.
- [Nearly Horizon-Free Offline Reinforcement Learning](https://arxiv.org/abs/2103.14077)
  - Tongzheng Ren, Jialian Li, Bo Dai, Simon S. Du, and Sujay Sanghavi. NeurIPS, 2021.
- [Conservative Data Sharing for Multi-Task Offline Reinforcement Learning](https://arxiv.org/abs/2109.08128)
  - Tianhe Yu, Aviral Kumar, Yevgen Chebotar, Karol Hausman, Sergey Levine, and Chelsea Finn. NeurIPS, 2021.
- [Online and Offline Reinforcement Learning by Planning with a Learned Model](https://arxiv.org/abs/2104.06294)
  - Julian Schrittwieser, Thomas Hubert, Amol Mandhane, Mohammadamin Barekatain, Ioannis Antonoglou, and David Silver. NeurIPS, 2021.
- [Policy Finetuning: Bridging Sample-Efficient Offline and Online Reinforcement Learning](https://arxiv.org/abs/2106.04895)
  - Tengyang Xie, Nan Jiang, Huan Wang, Caiming Xiong, and Yu Bai. NeurIPS, 2021.
- [Offline RL Without Off-Policy Evaluation](https://arxiv.org/abs/2106.08909)
  - David Brandfonbrener, William F. Whitney, Rajesh Ranganath, and Joan Bruna. NeurIPS, 2021.
- [Offline Model-based Adaptable Policy Learning](https://openreview.net/forum?id=lrdXc17jm6)
  - Xiong-Hui Chen, Yang Yu, Qingyang Li, Fan-Ming Luo, Zhiwei Tony Qin, Shang Wenjie, and Jieping Ye. NeurIPS, 2021.
- [COMBO: Conservative Offline Model-Based Policy Optimization](https://arxiv.org/abs/2102.08363)
  - Tianhe Yu, Aviral Kumar, Rafael Rafailov, Aravind Rajeswaran, Sergey Levine, and Chelsea Finn. NeurIPS, 2021.
- [PerSim: Data-Efficient Offline Reinforcement Learning with Heterogeneous Agents via Personalized Simulators](https://arxiv.org/abs/2102.06961)
  - Anish Agarwal, Abdullah Alomar, Varkey Alumootil, Devavrat Shah, Dennis Shen, Zhi Xu, and Cindy Yang. NeurIPS, 2021.
- [Near-Optimal Offline Reinforcement Learning via Double Variance Reduction](https://arxiv.org/abs/2102.01748)
  - Ming Yin, Yu Bai, and Yu-Xiang Wang. NeurIPS, 2021.
- [Bellman-consistent Pessimism for Offline Reinforcement Learning](https://arxiv.org/abs/2106.06926) [[video](https://www.youtube.com/watch?v=g_yD6Yw8MLQ)]
  - Tengyang Xie, Ching-An Cheng, Nan Jiang, Paul Mineiro, and Alekh Agarwal. NeurIPS, 2021.
- [The Difficulty of Passive Learning in Deep Reinforcement Learning](https://arxiv.org/abs/2110.14020)
  - Georg Ostrovski, Pablo Samuel Castro, and Will Dabney. NeurIPS, 2021.
- [Uncertainty-Based Offline Reinforcement Learning with Diversified Q-Ensemble](https://arxiv.org/abs/2110.01548)
  - Gaon An, Seungyong Moon, Jang-Hyun Kim, and Hyun Oh Song. NeurIPS, 2021.
- [Towards Instance-Optimal Offline Reinforcement Learning with Pessimism](https://arxiv.org/abs/2110.08695)
  - Ming Yin and Yu-Xiang Wang. NeurIPS, 2021.
- [EMaQ: Expected-Max Q-Learning Operator for Simple Yet Effective Offline and Online RL](https://arxiv.org/abs/2007.11091)
  - Seyed Kamyar Seyed Ghasemipour, Dale Schuurmans, and Shixiang Shane Gu. ICML, 2021.
- [Actionable Models: Unsupervised Offline Reinforcement Learning of Robotic Skills](https://arxiv.org/abs/2104.07749) [[website](https://actionable-models.github.io/)]
  - Yevgen Chebotar, Karol Hausman, Yao Lu, Ted Xiao, Dmitry Kalashnikov, Jake Varley, Alex Irpan, Benjamin Eysenbach, Ryan Julian, Chelsea Finn, and Sergey Levine. ICML, 2021.
- [Is Pessimism Provably Efficient for Offline RL?](https://arxiv.org/abs/2012.15085) [[video](https://www.youtube.com/watch?v=vCQsZ5pzHPk)]
  - Ying Jin, Zhuoran Yang, and Zhaoran Wang. ICML, 2021.
- [Representation Matters: Offline Pretraining for Sequential Decision Making](https://arxiv.org/abs/2102.05815)
  - Mengjiao Yang and Ofir Nachum. ICML, 2021.
- [Offline Reinforcement Learning with Pseudometric Learning](https://arxiv.org/abs/2103.01948)
  - Robert Dadashi, Shideh Rezaeifar, Nino Vieillard, Léonard Hussenot, Olivier Pietquin, and Matthieu Geist. ICML, 2021.
- [Augmented World Models Facilitate Zero-Shot Dynamics Generalization From a Single Offline Environment](https://arxiv.org/abs/2104.05632)
  - Philip J. Ball, Cong Lu, Jack Parker-Holder, and Stephen Roberts. ICML, 2021.
- [Offline Contextual Bandits with Overparameterized Models](https://arxiv.org/abs/2006.15368)
  - David Brandfonbrener, William F. Whitney, Rajesh Ranganath, and Joan Bruna. ICML, 2021.
- [Risk Bounds and Rademacher Complexity in Batch Reinforcement Learning](https://arxiv.org/abs/2103.13883)
  - Yaqi Duan, Chi Jin, and Zhiyuan Li. ICML, 2021.
- [Offline Reinforcement Learning with Fisher Divergence Critic Regularization](https://arxiv.org/abs/2103.08050)
  - Ilya Kostrikov, Jonathan Tompson, Rob Fergus, and Ofir Nachum. ICML, 2021.
- [OptiDICE: Offline Policy Optimization via Stationary Distribution Correction Estimation](https://arxiv.org/abs/2106.10783)
  - Jongmin Lee, Wonseok Jeon, Byung-Jun Lee, Joelle Pineau, and Kee-Eung Kim.
ICML, 2021.\n- [Uncertainty Weighted Actor-Critic for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.08140)\n  - Yue Wu, Shuangfei Zhai, Nitish Srivastava, Joshua Susskind, Jian Zhang, Ruslan Salakhutdinov, and Hanlin Goh. ICML, 2021.\n- [Vector Quantized Models for Planning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.04615)\n  - Sherjil Ozair, Yazhe Li, Ali Razavi, Ioannis Antonoglou, Aäron van den Oord, and Oriol Vinyals. ICML, 2021.\n- [Exponential Lower Bounds for Batch Reinforcement Learning: Batch RL can be Exponentially Harder than Online RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.08005) [[video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=YktnEdsxYfc&feature=youtu.be)]\n  - Andrea Zanette. ICML, 2021.\n- [Instabilities of Offline RL with Pre-Trained Neural Representation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.04947)\n  - Ruosong Wang, Yifan Wu, Ruslan Salakhutdinov, and Sham M. Kakade. ICML, 2021.\n- [Offline Meta-Reinforcement Learning with Advantage Weighting](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.06043)\n  - Eric Mitchell, Rafael Rafailov, Xue Bin Peng, Sergey Levine, and Chelsea Finn. ICML, 2021.\n- [Model-Based Offline Planning](https:\u002F\u002Fopenreview.net\u002Fforum?id=OMNB1G5xzd4) [[video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=nxGGHdZOFts&feature=youtu.be)]\n  - Arthur Argenson and Gabriel Dulac-Arnold. ICLR, 2021.\n- [Batch Reinforcement Learning Through Continuation Method](https:\u002F\u002Fopenreview.net\u002Fforum?id=po-DLlBuAuz)\n  - Yijie Guo, Shengyu Feng, Nicolas Le Roux, Ed Chi, Honglak Lee, and Minmin Chen. ICLR, 2021.\n- [Model-Based Visual Planning with Self-Supervised Functional Distances](https:\u002F\u002Fopenreview.net\u002Fforum?id=UcoXdfrORC)\n  - Stephen Tian, Suraj Nair, Frederik Ebert, Sudeep Dasari, Benjamin Eysenbach, Chelsea Finn, and Sergey Levine. 
ICLR, 2021.\n- [Deployment-Efficient Reinforcement Learning via Model-Based Offline Optimization](https:\u002F\u002Fopenreview.net\u002Fforum?id=3hGNqpI4WS)\n  - Tatsuya Matsushima, Hiroki Furuta, Yutaka Matsuo, Ofir Nachum, and Shixiang Gu. ICLR, 2021.\n- [Efficient Fully-Offline Meta-Reinforcement Learning via Distance Metric Learning and Behavior Regularization](https:\u002F\u002Fopenreview.net\u002Fforum?id=8cpHIfgY4Dj)\n  - Lanqing Li, Rui Yang, and Dijun Luo. ICLR, 2021.\n- [DeepAveragers: Offline Reinforcement Learning by Solving Derived Non-Parametric MDPs](https:\u002F\u002Fopenreview.net\u002Fforum?id=eMP1j9efXtX)\n  - Aayam Kumar Shrestha, Stefan Lee, Prasad Tadepalli, and Alan Fern. ICLR, 2021.\n- [What are the Statistical Limits of Offline RL with Linear Function Approximation?](https:\u002F\u002Fopenreview.net\u002Fforum?id=30EvkP2aQLD) [[video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=FkkphMeFapg)]\n  - Ruosong Wang, Dean Foster, and Sham M. Kakade. ICLR, 2021.\n- [Reset-Free Lifelong Learning with Skill-Space Planning](https:\u002F\u002Fopenreview.net\u002Fforum?id=HIGSa_3kOx3) [[website](https:\u002F\u002Fsites.google.com\u002Fberkeley.edu\u002Freset-free-lifelong-learning)]\n  - Kevin Lu, Aditya Grover, Pieter Abbeel, and Igor Mordatch. ICLR, 2021.\n- [Risk-Averse Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.05371)\n  - Núria Armengol Urpí, Sebastian Curi, and Andreas Krause. ICLR, 2021.\n- [Finite-Sample Regret Bound for Distributionally Robust Offline Tabular Reinforcement Learning](http:\u002F\u002Fproceedings.mlr.press\u002Fv130\u002Fzhou21d.html)\n  - Zhengqing Zhou, Zhengyuan Zhou, Qinxun Bai, Linhai Qiu, Jose Blanchet, and Peter Glynn. AISTATS, 2021.\n- [Exploration by Maximizing Rényi Entropy for Reward-Free RL Framework](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.06193)\n  - Chuheng Zhang, Yuanying Cai, Longbo Huang, and Jian Li. 
AAAI, 2021.\n- [Efficient Self-Supervised Data Collection for Offline Robot Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.04607)\n  - Shadi Endrawis, Gal Leibovich, Guy Jacob, Gal Novik, Aviv Tamar. ICRA, 2021.\n- [Boosting Offline Reinforcement Learning with Residual Generative Modeling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.10411)\n  - Hua Wei, Deheng Ye, Zhao Liu, Hao Wu, Bo Yuan, Qiang Fu, Wei Yang, and Zhenhui (Jessie)Li. IJCAI, 2021.\n- [BRAC+: Improved Behavior Regularized Actor Critic for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.00894)\n  - Chi Zhang, Sanmukh Rao Kuppannagari, and Viktor K Prasanna. ACML, 2021.\n- [Behavior Constraining in Weight Space for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.05479)\n  - Phillip Swazinna, Steffen Udluft, Daniel Hein, and Thomas Runkler. ESANN, 2021.\n- [Finite-Sample Analysis For Decentralized Batch Multi-Agent Reinforcement Learning With Networked Agents](https:\u002F\u002Farxiv.org\u002Fabs\u002F1812.02783)\n  - Kaiqing Zhang, Zhuoran Yang, Han Liu, Tong Zhang, and Tamer Başar. IEEE T AUTOMATIC CONTROL, 2021.\n- [Can Active Sampling Reduce Causal Confusion in Offline Reinforcement Learning?](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.17168)\n  - Gunshi Gupta, Tim G. J. Rudner, Rowan Thomas McAllister, Adrien Gaidon, and Yarin Gal. CLeaR, 2021.\n- [Reinforcement Learning via Fenchel-Rockafellar Duality](https:\u002F\u002Farxiv.org\u002Fabs\u002F2001.01866) [[software](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fdice_rl)]\n  - Ofir Nachum and Bo Dai. 
arXiv, 2020.\n- [AWAC: Accelerating Online Reinforcement Learning with Offline Datasets](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.09359) [[website](https:\u002F\u002Fawacrl.github.io\u002F)] [[code](https:\u002F\u002Fgithub.com\u002Fvitchyr\u002Frlkit\u002Ftree\u002Fmaster\u002Fexamples\u002Fawac)] [[blog](https:\u002F\u002Fbair.berkeley.edu\u002Fblog\u002F2020\u002F09\u002F10\u002Fawac\u002F)]\n  - Ashvin Nair, Abhishek Gupta, Murtaza Dalal, and Sergey Levine. arXiv, 2020.\n- [Sparse Feature Selection Makes Batch Reinforcement Learning More Sample Efficient](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.04019)\n  - Botao Hao, Yaqi Duan, Tor Lattimore, Csaba Szepesvári, and Mengdi Wang. arXiv, 2020.\n- [A Variant of the Wang-Foster-Kakade Lower Bound for the Discounted Setting](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.01075)\n  - Philip Amortila, Nan Jiang, and Tengyang Xie. arXiv, 2020.\n- [Batch Reinforcement Learning with a Nonparametric Off-Policy Policy Gradient](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.14771)\n  - Samuele Tosatto, João Carvalho, and Jan Peters. arXiv, 2020.\n- [Batch Value-function Approximation with Only Realizability](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.04990)\n  - Tengyang Xie and Nan Jiang. arXiv2020.\n- [DRIFT: Deep Reinforcement Learning for Functional Software Testing](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.08220)\n  - Luke Harries, Rebekah Storan Clarke, Timothy Chapman, Swamy V. P. L. N. Nallamalli, Levent Ozgur, Shuktika Jain, Alex Leung, Steve Lim, Aaron Dietrich, José Miguel Hernández-Lobato, Tom Ellis, Cheng Zhang, and Kamil Ciosek. arXiv, 2020.\n- [Causality and Batch Reinforcement Learning: Complementary Approaches To Planning In Unknown Domains](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.02579)\n  - James Bannon, Brad Windsor, Wenbo Song, and Tao Li. 
arXiv, 2020.\n- [Goal-conditioned Batch Reinforcement Learning for Rotation Invariant Locomotion](https:\u002F\u002Farxiv.org\u002Fabs\u002F2004.08356) [[code](https:\u002F\u002Fgithub.com\u002Faditimavalankar\u002Fgc-batch-rl-locomotion)]\n  - Aditi Mavalankar. arXiv, 2020.\n- [Semi-Supervised Reward Learning for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.06899)\n  - Ksenia Konyushkova, Konrad Zolna, Yusuf Aytar, Alexander Novikov, Scott Reed, Serkan Cabi, and Nando de Freitas. arXiv, 2020.\n- [Sample-Efficient Reinforcement Learning via Counterfactual-Based Data Augmentation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.09092)\n  - Chaochao Lu, Biwei Huang, Ke Wang, José Miguel Hernández-Lobato, Kun Zhang, and Bernhard Schölkopf. arXiv, 2020.\n- [Offline Reinforcement Learning from Images with Latent Space Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.11547) [[website](https:\u002F\u002Fsites.google.com\u002Fview\u002Flompo\u002F)]\n  - Rafael Rafailov, Tianhe Yu, Aravind Rajeswaran, and Chelsea Finn. arXiv, 2020.\n- [POPO: Pessimistic Offline Policy Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.13682)\n  - Qiang He and Xinwen Hou. arXiv, 2020.\n- [Reinforcement Learning with Videos: Combining Offline Observations with Interaction](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.06507)\n  - Karl Schmeckpeper, Oleh Rybkin, Kostas Daniilidis, Sergey Levine, and Chelsea Finn. arXiv, 2020.\n- [Recovery RL: Safe Reinforcement Learning with Learned Recovery Zones](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.15920) [[website](https:\u002F\u002Fsites.google.com\u002Fberkeley.edu\u002Frecovery-rl\u002F)]\n  - Brijen Thananjeyan, Ashwin Balakrishna, Suraj Nair, Michael Luo, Krishnan Srinivasan, Minho Hwang, Joseph E. Gonzalez, Julian Ibarz, Chelsea Finn, and Ken Goldberg. 
arXiv, 2020.\n- [Implicit Under-Parameterization Inhibits Data-Efficient Deep Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.14498)\n  - Aviral Kumar, Rishabh Agarwal, Dibya Ghosh, and Sergey Levine. arXiv, 2020.\n- [OPAL: Offline Primitive Discovery for Accelerating Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.13611) [[website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fopal-iclr)]\n  - Anurag Ajay, Aviral Kumar, Pulkit Agrawal, Sergey Levine, and Ofir Nachum. arXiv, 2020.\n- [Batch Exploration with Examples for Scalable Robotic Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.11917)\n  - Annie S. Chen, HyunJi Nam, Suraj Nair, and Chelsea Finn. arXiv, 2020.\n- [Learning Dexterous Manipulation from Suboptimal Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.08587) [[website](https:\u002F\u002Fsites.google.com\u002Fview\u002Frlfse)]\n  - Rae Jeong, Jost Tobias Springenberg, Jackie Kay, Daniel Zheng, Yuxiang Zhou, Alexandre Galashov, Nicolas Heess, and Francesco Nori. arXiv, 2020.\n- [The Reinforcement Learning-Based Multi-Agent Cooperative Approach for the Adaptive Speed Regulation on a Metallurgical Pickling Line](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.06933)\n  - Anna Bogomolova, Kseniia Kingsep, and Boris Voskresenskii. arXiv, 2020.\n- [Overcoming Model Bias for Robust Offline Deep Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.05533) [[dataset](https:\u002F\u002Fgithub.com\u002Fsiemens\u002Findustrialbenchmark\u002Ftree\u002Foffline_datasets\u002Fdatasets)]\n  - Phillip Swazinna, Steffen Udluft, and Thomas Runkler. arXiv, 2020.\n- [Offline Meta Learning of Exploration](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.02598)\n  - Ron Dorfman, Idan Shenfeld, and Aviv Tamar. 
arXiv, 2020.\n- [EMaQ: Expected-Max Q-Learning Operator for Simple Yet Effective Offline and Online RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.11091)\n  - Seyed Kamyar Seyed Ghasemipour, Dale Schuurmans, and Shixiang Shane Gu. arXiv, 2020.\n- [Hyperparameter Selection for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.09055)\n  - Tom Le Paine, Cosmin Paduraru, Andrea Michi, Caglar Gulcehre, Konrad Zolna, Alexander Novikov, Ziyu Wang, and Nando de Freitas. arXiv, 2020.\n- [Interpretable Control by Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.09964)\n  - Daniel Hein, Steffen Limmer, and Thomas A. Runkler. arXiv, 2020.\n- [Efficient Evaluation of Natural Stochastic Policies in Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.03886) [[code](https:\u002F\u002Fgithub.com\u002FCausalML\u002FNaturalStochasticOPE)]\n  - Nathan Kallus and Masatoshi Uehara. arXiv, 2020.\n- [Accelerating Online Reinforcement Learning with Offline Datasets](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.09359) [[website](https:\u002F\u002Fawacrl.github.io\u002F)] [[blog](https:\u002F\u002Fbair.berkeley.edu\u002Fblog\u002F2020\u002F09\u002F10\u002Fawac\u002F)]\n  - Ashvin Nair, Murtaza Dalal, Abhishek Gupta, and Sergey Levine. arXiv, 2020.\n- [DisCor: Corrective Feedback in Reinforcement Learning via Distribution Correction](https:\u002F\u002Farxiv.org\u002Fabs\u002F2003.07305) [[blog](https:\u002F\u002Fbair.berkeley.edu\u002Fblog\u002F2020\u002F03\u002F16\u002Fdiscor\u002F)]\n  - Aviral Kumar, Abhishek Gupta, and Sergey Levine. arXiv, 2020.\n- [Critic Regularized Regression](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2020\u002Fhash\u002F588cb956d6bbe67078f29f8de420a13d-Abstract.html)\n  - Ziyu Wang, Alexander Novikov, Konrad Zolna, Josh S. Merel, Jost Tobias Springenberg, Scott E. Reed, Bobak Shahriari, Noah Siegel, Caglar Gulcehre, Nicolas Heess, and Nando de Freitas. 
NeurIPS, 2020\n- [Provably Good Batch Off-Policy Reinforcement Learning Without Great Exploration](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2020\u002Fhash\u002F0dc23b6a0e4abc39904388dd3ffadcd1-Abstract.html)\n  - Yao Liu, Adith Swaminathan, Alekh Agarwal, and Emma Brunskill. NeurIPS, 2020.\n- [Conservative Q-Learning for Offline Reinforcement Learning](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2020\u002Fhash\u002F0d2b2061826a5df3221116a5085a6052-Abstract.html) [[website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fcql-offline-rl)] [[code](https:\u002F\u002Fgithub.com\u002Faviralkumar2907\u002FCQL)] [[blog](https:\u002F\u002Fbair.berkeley.edu\u002Fblog\u002F2020\u002F12\u002F07\u002Foffline\u002F)]\n  - Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine. NeurIPS, 2020.\n- [BAIL: Best-Action Imitation Learning for Batch Deep Reinforcement Learning](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2020\u002Fhash\u002Fd55cbf210f175f4a37916eafe6c04f0d-Abstract.html)\n  - Xinyue Chen, Zijian Zhou, Zheng Wang, Che Wang, Yanqiu Wu, and Keith Ross. NeurIPS, 2020.\n- [MOPO: Model-based Offline Policy Optimization](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2020\u002Fhash\u002Fa322852ce0df73e204b7e67cbbef0d0a-Abstract.html) [[code](https:\u002F\u002Fgithub.com\u002Ftianheyu927\u002Fmopo)]\n  - Tianhe Yu, Garrett Thomas, Lantao Yu, Stefano Ermon, James Y. Zou, Sergey Levine, Chelsea Finn, and Tengyu Ma. NeurIPS, 2020.\n- [MOReL: Model-Based Offline Reinforcement Learning](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2020\u002Fhash\u002Ff7efa4f864ae9b88d43527f4b14f750f-Abstract.html) [[podcast](https:\u002F\u002Ftwimlai.com\u002Fmorel-model-based-offline-reinforcement-learning-with-aravind-rajeswaran\u002F)]\n  - Rahul Kidambi, Aravind Rajeswaran, Praneeth Netrapalli, and Thorsten Joachims. 
NeurIPS, 2020.\n- [Expert-Supervised Reinforcement Learning for Offline Policy Learning and Evaluation](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2020\u002Fhash\u002Fdaf642455364613e2120c636b5a1f9c7-Abstract.html)\n  - Aaron Sonabend, Junwei Lu, Leo Anthony Celi, Tianxi Cai, and Peter Szolovits. NeurIPS, 2020.\n- [Multi-task Batch Reinforcement Learning with Metric Learning](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2020\u002Fhash\u002F4496bf24afe7fab6f046bf4923da8de6-Abstract.html)\n  - Jiachen Li, Quan Vuong, Shuang Liu, Minghua Liu, Kamil Ciosek, Henrik Christensen, and Hao Su. NeurIPS, 2020.\n- [Counterfactual Data Augmentation using Locally Factored Dynamics](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2020\u002Fhash\u002F294e09f267683c7ddc6cc5134a7e68a8-Abstract.html) [[code](https:\u002F\u002Fgithub.com\u002Fspitis\u002Fmrl)]\n  - Silviu Pitis, Elliot Creager, and Animesh Garg. NeurIPS, 2020.\n- [On Reward-Free Reinforcement Learning with Linear Function Approximation](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2020\u002Fhash\u002Fce4449660c6523b377b22a1dc2da5556-Abstract.html)\n  - Ruosong Wang, Simon S. Du, Lin Yang, and Russ R. Salakhutdinov. NeurIPS, 2020.\n- [Constrained Policy Improvement for Safe and Efficient Reinforcement Learning](https:\u002F\u002Fwww.ijcai.org\u002FProceedings\u002F2020\u002F396)\n  - Elad Sarafian, Aviv Tamar, and Sarit Kraus. IJCAI, 2020.\n- [BRPO: Batch Residual Policy Optimization](https:\u002F\u002Fwww.ijcai.org\u002FProceedings\u002F2020\u002F391) [[code](https:\u002F\u002Fgithub.com\u002Feladsar\u002Frbi)]\n  - Sungryull Sohn, Yinlam Chow, Jayden Ooi, Ofir Nachum, Honglak Lee, Ed Chi, and Craig Boutilier. 
IJCAI, 2020.\n- [Keep Doing What Worked: Behavior Modelling Priors for Offline Reinforcement Learning](https:\u002F\u002Fopenreview.net\u002Fforum?id=rke7geHtwH)\n  - Noah Siegel, Jost Tobias Springenberg, Felix Berkenkamp, Abbas Abdolmaleki, Michael Neunert, Thomas Lampe, Roland Hafner, Nicolas Heess, and Martin Riedmiller. ICLR, 2020.\n- [COG: Connecting New Skills to Past Experience with Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.14500) [[website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fcog-rl)] [[blog](https:\u002F\u002Fbair.berkeley.edu\u002Fblog\u002F2020\u002F12\u002F07\u002Foffline\u002F)] [[code](https:\u002F\u002Fgithub.com\u002Favisingh599\u002Fcog)]\n  - Avi Singh, Albert Yu, Jonathan Yang, Jesse Zhang, Aviral Kumar, and Sergey Levine. CoRL, 2020.\n- [Accelerating Reinforcement Learning with Learned Skill Priors](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.11944)\n  - Karl Pertsch, Youngwoon Lee, and Joseph J. Lim. CoRL, 2020.\n- [PLAS: Latent Action Space for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.07213) [[website](https:\u002F\u002Fsites.google.com\u002Fview\u002Flatent-policy)] [[code](https:\u002F\u002Fgithub.com\u002FWenxuan-Zhou\u002FPLAS)]\n  - Wenxuan Zhou, Sujay Bajracharya, and David Held. CoRL, 2020.\n- [Scaling data-driven robotics with reward sketching and batch reinforcement learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1909.12200) [[website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fdata-driven-robotics\u002F)]\n  - Serkan Cabi, Sergio Gómez Colmenarejo, Alexander Novikov, Ksenia Konyushkova, Scott Reed, Rae Jeong, Konrad Zolna, Yusuf Aytar, David Budden, Mel Vecerik, Oleg Sushkov, David Barker, Jonathan Scholz, Misha Denil, Nando de Freitas, and Ziyu Wang. 
RSS, 2020.\n- [Quantile QT-Opt for Risk-Aware Vision-Based Robotic Grasping](https:\u002F\u002Farxiv.org\u002Fabs\u002F1910.02787)\n  - Cristian Bodnar, Adrian Li, Karol Hausman, Peter Pastor, and Mrinal Kalakrishnan. RSS, 2020.\n- [Batch-Constrained Reinforcement Learning for Dynamic Distribution Network Reconfiguration](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.12749)\n  - Yuanqi Gao, Wei Wang, Jie Shi, and Nanpeng Yu. IEEE T SMART GRID, 2020.\n- [Behavior Regularized Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.11361)\n  - Yifan Wu, George Tucker, and Ofir Nachum. arXiv, 2019.\n- [Off-Policy Policy Gradient Algorithms by Constraining the State Distribution Shift](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.06970)\n  - Riashat Islam, Komal K. Teru, Deepak Sharma, and Joelle Pineau. arXiv, 2019.\n- [Advantage-Weighted Regression: Simple and Scalable Off-Policy Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1910.00177)\n  - Xue Bin Peng, Aviral Kumar, Grace Zhang, and Sergey Levine. arXiv, 2019.\n- [AlgaeDICE: Policy Gradient from Arbitrary Experience](https:\u002F\u002Farxiv.org\u002Fabs\u002F1912.02074)\n  - Ofir Nachum, Bo Dai, Ilya Kostrikov, Yinlam Chow, Lihong Li, and Dale Schuurmans. arXiv, 2019.\n- [Stabilizing Off-Policy Q-Learning via Bootstrapping Error Reduction](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2019\u002Fhash\u002Fc2073ffa77b5357a498057413bb09d3a-Abstract.html) [[website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fbear-off-policyrl)] [[blog](https:\u002F\u002Fbair.berkeley.edu\u002Fblog\u002F2019\u002F12\u002F05\u002Fbear\u002F)] [[code](https:\u002F\u002Fgithub.com\u002Faviralkumar2907\u002FBEAR)]\n  - Aviral Kumar, Justin Fu, George Tucker, and Sergey Levine. NeurIPS, 2019.\n- [Off-Policy Deep Reinforcement Learning without Exploration](http:\u002F\u002Fproceedings.mlr.press\u002Fv97\u002Ffujimoto19a.html)\n  - Scott Fujimoto, David Meger, and Doina Precup. 
ICML, 2019.\n- [Safe Policy Improvement with Baseline Bootstrapping](http:\u002F\u002Fproceedings.mlr.press\u002Fv97\u002Flaroche19a.html)\n  - Romain Laroche, Paul Trichelair, and Remi Tachet Des Combes. ICML, 2019.\n- [Information-Theoretic Considerations in Batch Reinforcement Learning](http:\u002F\u002Fproceedings.mlr.press\u002Fv97\u002Fchen19e.html)\n  - Jinglin Chen and Nan Jiang. ICML, 2019.\n- [Batch Recurrent Q-Learning for Backchannel Generation Towards Engaging Agents](https:\u002F\u002Farxiv.org\u002Fabs\u002F1908.02037)\n  - Nusrah Hussain, Engin Erzin, T. Metin Sezgin, and Yucel Yemez. ACII, 2019.\n- [Safe Policy Improvement with Soft Baseline Bootstrapping](https:\u002F\u002Farxiv.org\u002Fabs\u002F1907.05079)\n  - Kimia Nadjahi, Romain Laroche, and Rémi Tachet des Combes. ECML, 2019.\n- [Importance Weighted Transfer of Samples in Reinforcement Learning](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Ftirinzoni18a.html)\n  - Andrea Tirinzoni, Andrea Sessa, Matteo Pirotta, and Marcello Restelli. ICML, 2018.\n- [Scalable Deep Reinforcement Learning for Vision-Based Robotic Manipulation](http:\u002F\u002Fproceedings.mlr.press\u002Fv87\u002Fkalashnikov18a.html) [[website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fqtopt)]\n  - Dmitry Kalashnikov, Alex Irpan, Peter Pastor, Julian Ibarz, Alexander Herzog, Eric Jang, Deirdre Quillen, Ethan Holly, Mrinal Kalakrishnan, Vincent Vanhoucke, and Sergey Levine. CoRL, 2018.\n- [Off-Policy Policy Gradient with State Distribution Correction](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.08473)\n  - Yao Liu, Adith Swaminathan, Alekh Agarwal, and Emma Brunskill. UAI, 2018.\n- [Behavioral Cloning from Observation](https:\u002F\u002Farxiv.org\u002Fabs\u002F1805.01954)\n  - Faraz Torabi, Garrett Warnell, and Peter Stone. IJCAI, 2018.\n- [Diverse Exploration for Fast and Safe Policy Improvement](https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.08331)\n  - Andrew Cohen, Lei Yu, and Robert Wright. 
AAAI, 2018.\n- [Deep Exploration via Bootstrapped DQN](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2016\u002Fhash\u002F8d8818c8e140c64c743113f563cf750f-Abstract.html)\n  - Ian Osband, Charles Blundell, Alexander Pritzel, and Benjamin Van Roy. NeurIPS, 2016.\n- [Safe Policy Improvement by Minimizing Robust Baseline Regret](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2016\u002Fhash\u002F9a3d458322d70046f63dfd8b0153ece4-Abstract.html)\n  - Mohammad Ghavamzadeh, Marek Petrik, and Yinlam Chow. NeurIPS, 2016.\n- [Residential Demand Response Applications Using Batch Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1504.02125)\n  - Frederik Ruelens, Bert Claessens, Stijn Vandael, Bart De Schutter, Robert Babuska, and Ronnie Belmans. arXiv, 2015.\n- [Structural Return Maximization for Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1405.2606)\n  - Joshua Joseph, Javier Velez, and Nicholas Roy. arXiv, 2014.\n- [Simultaneous Perturbation Algorithms for Batch Off-Policy Search](https:\u002F\u002Farxiv.org\u002Fabs\u002F1403.4514)\n  - Raphael Fonteneau, and L.A. Prashanth. CDC, 2014.\n- [Guided Policy Search](http:\u002F\u002Fproceedings.mlr.press\u002Fv28\u002Flevine13.html)\n  - Sergey Levine, and Vladlen Koltun. ICML, 2013.\n- [Off-Policy Actor-Critic](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.5555\u002F3042573.3042600)\n  - Thomas Degris, Martha White, and Richard S. Sutton. ICML, 2012.\n- [PAC-Bayesian Policy Evaluation for Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1202.3717)\n  - Mahdi MIlani Fard, Joelle Pineau, and Csaba Szepesvari. UAI, 2011.\n- [Tree-Based Batch Mode Reinforcement Learning](https:\u002F\u002Fwww.jmlr.org\u002Fpapers\u002Fv6\u002Fernst05a.html)\n  - Damien Ernst, Pierre Geurts, and Louis Wehenkel. 
JMLR, 2005.\n- [Neural Fitted Q Iteration–First Experiences with a Data Efficient Neural Reinforcement Learning Method](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1007\u002F11564096_32)\n  - Martin Riedmiller. ECML, 2005.\n- [Off-Policy Temporal-Difference Learning with Function Approximation](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.5555\u002F645530.655817)\n  - Doina Precup, Richard S. Sutton, and Sanjoy Dasgupta. ICML, 2001.\n\n### Offline RL: Benchmarks\u002FExperiments\n- [ORL-AUDITOR: Dataset Auditing in Offline Deep Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.03081)\n  - Linkang Du, Min Chen, Mingyang Sun, Shouling Ji, Peng Cheng, Jiming Chen, and Zhikun Zhang. NDSS, 2024.\n- [Pearl: A Production-ready Reinforcement Learning Agent](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.03814)\n  - Zheqing Zhu, Rodrigo de Salvo Braz, Jalaj Bhandari, Daniel Jiang, Yi Wan, Yonathan Efroni, Liyuan Wang, Ruiyang Xu, Hongbo Guo, Alex Nikulkov, Dmytro Korenkevych, Urun Dogan, Frank Cheng, Zheng Wu, and Wanqiao Xu. arXiv, 2023.\n- [LMRL Gym: Benchmarks for Multi-Turn Reinforcement Learning with Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.18232)\n  - Marwa Abdulhai, Isadora White, Charlie Snell, Charles Sun, Joey Hong, Yuexiang Zhai, Kelvin Xu, and Sergey Levine. arXiv, 2023.\n- [Robotic Manipulation Datasets for Offline Compositional Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.13372)\n  - Marcel Hussing, Jorge A. Mendez, Anisha Singrodia, Cassandra Kent, and Eric Eaton. arXiv, 2023.\n- [Datasets and Benchmarks for Offline Safe Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.09303)\n  - Zuxin Liu, Zijian Guo, Haohong Lin, Yihang Yao, Jiacheng Zhu, Zhepeng Cen, Hanjiang Hu, Wenhao Yu, Tingnan Zhang, Jie Tan, and Ding Zhao. 
arXiv, 2023.\n- [Improving and Benchmarking Offline Reinforcement Learning Algorithms](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.00972)\n  - Bingyi Kang, Xiao Ma, Yirui Wang, Yang Yue, and Shuicheng Yan. arXiv, 2023.\n- [Benchmarks and Algorithms for Offline Preference-Based Reward Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.01392)\n  - Daniel Shin, Anca D. Dragan, and Daniel S. Brown. arXiv, 2023.\n- [Hokoff: Real Game Dataset from Honor of Kings and its Offline Reinforcement Learning Benchmarks](https:\u002F\u002Fopenreview.net\u002Fpdf?id=jP3BduIxy6)\n  - Yun Qu, Boyuan Wang, Jianzhun Shao, Yuhang Jiang, Chen Chen, Zhenbin Ye, Liu Linc, Yang Feng, Lin Lai, Hongyang Qin, Minwen Deng, Juchao Zhuo, Deheng Ye, Qiang Fu, Yang Guang, Wei Yang, Lanxiao Huang, and Xiangyang Ji. NeurIPS, 2023.\n- [CORL: Research-oriented Deep Offline Reinforcement Learning Library](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.07105) [[code](https:\u002F\u002Fgithub.com\u002Fcorl-team\u002FCORL)]\n  - Denis Tarasov, Alexander Nikulin, Dmitry Akimov, Vladislav Kurenkov, and Sergey Kolesnikov. NeurIPS, 2023.\n- [Benchmarking Offline Reinforcement Learning on Real-Robot Hardware](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.15690) [[dataset](https:\u002F\u002Fgithub.com\u002Frr-learning\u002Ftrifinger_rl_datasets)]\n  - Nico Gürtler, Sebastian Blaes, Pavel Kolev, Felix Widmaier, Manuel Wuthrich, Stefan Bauer, Bernhard Schölkopf, and Georg Martius. ICLR, 2023.\n- [Train Offline, Test Online: A Real Robot Learning Benchmark](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.00942)\n  - Gaoyue Zhou, Victoria Dean, Mohan Kumar Srirama, Aravind Rajeswaran, Jyothish Pari, Kyle Hatch, Aryan Jain, Tianhe Yu, Pieter Abbeel, Lerrel Pinto, Chelsea Finn, and Abhinav Gupta. ICRA, 2023.\n- [Benchmarking Offline Reinforcement Learning Algorithms for E-Commerce Order Fraud Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.02620)\n  - Soysal Degirmenci and Chris Jones. 
arXiv, 2022.\n- [Real World Offline Reinforcement Learning with Realistic Data Source](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.06479) [[website](https:\u002F\u002Fsites.google.com\u002Fview\u002Freal-orl)] [[dataset](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1nyMPlbwkjsJ_FyMwVp9ynOvz_ykGtbA8)]\n  - Gaoyue Zhou, Liyiming Ke, Siddhartha Srinivasa, Abhinav Gupta, Aravind Rajeswaran, and Vikash Kumar. arXiv, 2022.\n- [Mind Your Data! Hiding Backdoors in Offline Reinforcement Learning Datasets](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.04688)\n  - Chen Gong, Zhou Yang, Yunpeng Bai, Junda He, Jieke Shi, Arunesh Sinha, Bowen Xu, Xinwen Hou, Guoliang Fan, and David Lo. arXiv, 2022.\n- [B2RL: An open-source Dataset for Building Batch Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.15626)\n  - Hsin-Yu Liu, Xiaohan Fu, Bharathan Balaji, Rajesh Gupta, and Dezhi Hong. arXiv, 2022.\n- [An Empirical Study of Implicit Regularization in Deep Offline RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.02099)\n  - Caglar Gulcehre, Srivatsan Srinivasan, Jakub Sygnowski, Georg Ostrovski, Mehrdad Farajtabar, Matt Hoffman, Razvan Pascanu, and Arnaud Doucet. arXiv, 2022.\n- [Challenges and Opportunities in Offline Reinforcement Learning from Visual Observations](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.04779)\n  - Cong Lu, Philip J. Ball, Tim G. J. Rudner, Jack Parker-Holder, Michael A. Osborne, and Yee Whye Teh. arXiv, 2022.\n- [Don't Change the Algorithm, Change the Data: Exploratory Data for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.13425) [[code](https:\u002F\u002Fgithub.com\u002Fdenisyarats\u002Fexorl)]\n  - Denis Yarats, David Brandfonbrener, Hao Liu, Michael Laskin, Pieter Abbeel, Alessandro Lazaric, and Lerrel Pinto. 
arXiv, 2022.\n- [The Challenges of Exploration for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.11861)\n  - Nathan Lambert, Markus Wulfmeier, William Whitney, Arunkumar Byravan, Michael Bloesch, Vibhavari Dasagi, Tim Hertweck, and Martin Riedmiller. arXiv, 2022.\n- [Offline Equilibrium Finding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.05285) [[code](https:\u002F\u002Fgithub.com\u002FSecurityGames\u002Foef)]\n  - Shuxin Li, Xinrun Wang, Jakub Cerny, Youzhi Zhang, Hau Chan, and Bo An. arXiv, 2022.\n- [Comparing Model-free and Model-based Algorithms for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.05433)\n  - Phillip Swazinna, Steffen Udluft, Daniel Hein, and Thomas Runkler. arXiv, 2022.\n- [Data-Efficient Pipeline for Offline Reinforcement Learning with Limited Data](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.08642)\n  - Allen Nie, Yannis Flet-Berliac, Deon R. Jordan, William Steenbergen, and Emma Brunskill. NeurIPS, 2022.\n- [Dungeons and Data: A Large-Scale NetHack Dataset](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.00539)\n  - Eric Hambro, Roberta Raileanu, Danielle Rothermel, Vegard Mella, Tim Rocktäschel, Heinrich Küttler, and Naila Murray. NeurIPS, 2022.\n- [NeoRL: A Near Real-World Benchmark for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.00714) [[website](http:\u002F\u002Fpolixir.ai\u002Fresearch\u002Fneorl)] [[code](https:\u002F\u002Fagit.ai\u002FPolixir\u002Fneorl)]\n  - Rongjun Qin, Songyi Gao, Xingyuan Zhang, Zhen Xu, Shengkai Huang, Zewen Li, Weinan Zhang, and Yang Yu. NeurIPS, 2022.\n- [A Closer Look at Offline RL Agents](https:\u002F\u002Fopenreview.net\u002Fforum?id=mn1MWh0iDCA)\n  - Yuwei Fu, Di Wu, and Benoit Boulet. 
NeurIPS, 2022.\n- [Beyond Rewards: a Hierarchical Perspective on Offline Multiagent Behavioral Analysis](https:\u002F\u002Fopenreview.net\u002Fforum?id=SiQAZV0yEny)\n  - Shayegan Omidshafiei, Andrei Kapishnikov, Yannick Assogba, Lucas Dixon, and Been Kim. NeurIPS, 2022.\n- [On the Effect of Pre-training for Transformer in Different Modality on Offline Reinforcement Learning](https:\u002F\u002Fopenreview.net\u002Fforum?id=9GXoMs__ckJ)\n  - Shiro Takagi. NeurIPS, 2022.\n- [Showing Your Offline Reinforcement Learning Work: Online Evaluation Budget Matters](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.04156)\n  - Vladislav Kurenkov and Sergey Kolesnikov. ICML, 2022.\n- [d3rlpy: An Offline Deep Reinforcement Learning Library](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.03788) [[software](https:\u002F\u002Fgithub.com\u002Ftakuseno\u002Fd3rlpy)]\n  - Takuma Seno and Michita Imai. JMLR, 2022.\n- [Understanding the Effects of Dataset Characteristics on Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.04714) [[code](https:\u002F\u002Fgithub.com\u002Fml-jku\u002FOfflineRL)]\n  - Kajetan Schweighofer, Markus Hofmarcher, Marius-Constantin Dinu, Philipp Renz, Angela Bitto-Nemling, Vihang Patil, and Sepp Hochreiter. arXiv, 2021.\n- [Interpretable performance analysis towards offline reinforcement learning: A dataset perspective](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.05473)\n  - Chenyang Xi, Bo Tang, Jiajun Shen, Xinfu Liu, Feiyu Xiong, and Xueying Li. arXiv, 2021.\n- [Comparison and Unification of Three Regularization Methods in Batch Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.08134)\n  - Sarah Rathnam, Susan A. Murphy, and Finale Doshi-Velez. 
arXiv, 2021.\n- [RLDS: an Ecosystem to Generate, Share and Use Datasets in Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.02767) [[code](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Frlds)]\n  - Sabela Ramos, Sertan Girgin, Léonard Hussenot, Damien Vincent, Hanna Yakubovich, Daniel Toyama, Anita Gergely, Piotr Stanczyk, Raphael Marinier, Jeremiah Harmsen, Olivier Pietquin, and Nikola Momchev. NeurIPS, 2021.\n- [Measuring Data Quality for Dataset Selection in Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.13461)\n  - Phillip Swazinna, Steffen Udluft, and Thomas Runkler. IEEE SSCI, 2021.\n- [Offline Reinforcement Learning Hands-On](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.14379)\n  - Louis Monier, Jakub Kmec, Alexandre Laterre, Thomas Pierrot, Valentin Courgeau, Olivier Sigaud, and Karim Beguir. arXiv, 2020.\n- [D4RL: Datasets for Deep Data-Driven Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2004.07219) [[website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fd4rl\u002Fhome)] [[blog](https:\u002F\u002Fbair.berkeley.edu\u002Fblog\u002F2020\u002F06\u002F25\u002FD4RL\u002F)] [[code](https:\u002F\u002Fgithub.com\u002Frail-berkeley\u002Fd4rl)]\n  - Justin Fu, Aviral Kumar, Ofir Nachum, George Tucker, and Sergey Levine. arXiv, 2020.\n- [RL Unplugged: Benchmarks for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.13888) [[code](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Fdeepmind-research\u002Ftree\u002Fmaster\u002Frl_unplugged)] [[dataset](https:\u002F\u002Fconsole.cloud.google.com\u002Fstorage\u002Fbrowser\u002Frl_unplugged?pli=1)]\n  - Caglar Gulcehre, Ziyu Wang, Alexander Novikov, Tom Le Paine, Sergio Gomez Colmenarejo, Konrad Zolna, Rishabh Agarwal, Josh Merel, Daniel Mankowitz, Cosmin Paduraru, Gabriel Dulac-Arnold, Jerry Li, Mohammad Norouzi, Matt Hoffman, Ofir Nachum, George Tucker, Nicolas Heess, and Nando de Freitas. 
NeurIPS, 2020.\n- [Benchmarking Batch Deep Reinforcement Learning Algorithms](https:\u002F\u002Farxiv.org\u002Fabs\u002F1910.01708)\n  - Scott Fujimoto, Edoardo Conti, Mohammad Ghavamzadeh, and Joelle Pineau. arXiv, 2019.\n\n### Offline RL: Applications\n- [MOTO: Offline Pre-training to Online Fine-tuning for Model-based Robot Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.03306)\n  - Rafael Rafailov, Kyle Hatch, Victor Kolev, John D. Martin, Mariano Phielipp, and Chelsea Finn. arXiv, 2024.\n- [P2DT: Mitigating Forgetting in task-incremental Learning with progressive prompt Decision Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.11666)\n  - Zhiyuan Wang, Xiaoyang Qu, Jing Xiao, Bokui Chen, and Jianzong Wang. ICASSP, 2024.\n- [Online Symbolic Music Alignment with Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.00466)\n  - Silvan David Peter. arXiv, 2023.\n- [Advancing RAN Slicing with Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.10547)\n  - Kun Yang, Shu-ping Yeh, Menglei Zhang, Jerry Sydir, Jing Yang, and Cong Shen. arXiv, 2023.\n- [Traffic Signal Control Using Lightweight Transformers: An Offline-to-Online RL Approach](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.07795)\n  - Xingshuai Huang, Di Wu, and Benoit Boulet. arXiv, 2023.\n- [Self-Driving Telescopes: Autonomous Scheduling of Astronomical Observation Campaigns with Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.18094)\n  - Franco Terranova, M. Voetberg, Brian Nord, and Amanda Pagul. arXiv, 2023.\n- [A Fully Data-Driven Approach for Realistic Traffic Signal Control Using Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.15920)\n  - Jianxiong Li, Shichao Lin, Tianyu Shi, Chujie Tian, Yu Mei, Jian Song, Xianyuan Zhan, and Ruimin Li. 
arXiv, 2023.\n- [Offline Reinforcement Learning for Wireless Network Optimization with Mixture Datasets](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.11423)\n  - Kun Yang, Cong Shen, Jing Yang, Shu-ping Yeh, and Jerry Sydir. arXiv, 2023.\n- [STEER: Unified Style Transfer with Expert Reinforcement](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.07167)\n  - Skyler Hallinan, Faeze Brahman, Ximing Lu, Jaehun Jung, Sean Welleck, and Yejin Choi. arXiv, 2023.\n- [Zero-Shot Goal-Directed Dialogue via RL on Imagined Conversations](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.05584)\n  - Joey Hong, Sergey Levine, and Anca Dragan. arXiv, 2023.\n- [Robot Fine-Tuning Made Easy: Pre-Training Rewards and Policies for Autonomous Real-World Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.15145)\n  - Jingyun Yang, Max Sobol Mark, Brandon Vu, Archit Sharma, Jeannette Bohg, and Chelsea Finn. arXiv, 2023.\n- [Offline Reinforcement Learning for Optimizing Production Bidding Policies](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.09426)\n  - Dmytro Korenkevych, Frank Cheng, Artsiom Balakir, Alex Nikulkov, Lingnan Gao, Zhihao Cen, Zuobing Xu, and Zheqing Zhu. arXiv, 2023.\n- [End-to-end Offline Reinforcement Learning for Glycemia Control](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.10312)\n  - Tristan Beolet, Alice Adenis, Erik Huneker, and Maxime Louis. arXiv, 2023.\n- [Leveraging Optimal Transport for Enhanced Offline Reinforcement Learning in Surgical Robotic Environments](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.08841)\n  - Maryam Zare, Parham M. Kebria, and Abbas Khosravi. arXiv, 2023.\n- [Learning RL-Policies for Joint Beamforming Without Exploration: A Batch Constrained Off-Policy Approach](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.08660)\n  - Heasung Kim and Sravan Ankireddy. 
arXiv, 2023.\n- [Uncertainty-Aware Decision Transformer for Stochastic Driving Environments](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.16397)\n  - Zenan Li, Fan Nie, Qiao Sun, Fang Da, and Hang Zhao. arXiv, 2023.\n- [Boosting Offline Reinforcement Learning for Autonomous Driving with Hierarchical Latent Skills](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.13614)\n  - Zenan Li, Fan Nie, Qiao Sun, Fang Da, and Hang Zhao. arXiv, 2023.\n- [Robotic Offline RL from Internet Videos via Value-Function Pre-Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.13041)\n  - Chethan Bhateja, Derek Guo, Dibya Ghosh, Anikait Singh, Manan Tomar, Quan Vuong, Yevgen Chebotar, Sergey Levine, and Aviral Kumar. arXiv, 2023.\n- [VAPOR: Holonomic Legged Robot Navigation in Outdoor Vegetation Using Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.07832)\n  - Kasun Weerakoon, Adarsh Jagan Sathyamoorthy, Mohamed Elnoor, and Dinesh Manocha. arXiv, 2023.\n- [RLSynC: Offline-Online Reinforcement Learning for Synthon Completion](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.02671)\n  - Frazier N. Baker, Ziqi Chen, and Xia Ning. arXiv, 2023.\n- [Real Robot Challenge 2022: Learning Dexterous Manipulation from Offline Data in the Real World](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.07741)\n  - Nico Gürtler, Felix Widmaier, Cansu Sancaktar, Sebastian Blaes, Pavel Kolev, Stefan Bauer, Manuel Wüthrich, Markus Wulfmeier, Martin Riedmiller, Arthur Allshire, Qiang Wang, Robert McCarthy, Hangyeol Kim, Jongchan Baek Pohang, Wookyong Kwon, Shanliang Qian, Yasunori Toshimitsu, Mike Yan Michelis, Amirhossein Kazemipour, Arman Raayatsanati, Hehui Zheng, Barnabasa Gavin Cangan, Bernhard Schölkopf, and Georg Martius. 
arXiv, 2023.\n- [Reinforced Self-Training (ReST) for Language Modeling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.08998)\n  - Caglar Gulcehre, Tom Le Paine, Srivatsan Srinivasan, Ksenia Konyushkova, Lotte Weerts, Abhishek Sharma, Aditya Siddhant, Alex Ahern, Miaosen Wang, Chenjie Gu, Wolfgang Macherey, Arnaud Doucet, Orhan Firat, and Nando de Freitas. arXiv, 2023.\n- [Aligning Language Models with Offline Reinforcement Learning from Human Feedback](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.12050)\n  - Jian Hu, Li Tao, June Yang, and Chandler Zhou. arXiv, 2023.\n- [Integrating Offline Reinforcement Learning with Transformers for Sequential Recommendation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.14450)\n  - Xumei Xi, Yuke Zhao, Quan Liu, Liwen Ouyang, and Yang Wu. arXiv, 2023.\n- [Offline Skill Graph (OSG): A Framework for Learning and Planning using Offline Reinforcement Learning Skills](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.13630)\n  - Ben-ya Halevy, Yehudit Aperstein, and Dotan Di Castro. arXiv, 2023.\n- [Improving Offline RL by Blending Heuristics](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.00321)\n  - Sinong Geng, Aldo Pacchiano, Andrey Kolobov, and Ching-An Cheng. arXiv, 2023.\n- [IQL-TD-MPC: Implicit Q-Learning for Hierarchical Model Predictive Control](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.00867)\n  - Rohan Chitnis, Yingchen Xu, Bobak Hashemi, Lucas Lehnert, Urun Dogan, Zheqing Zhu, and Olivier Delalleau. arXiv, 2023.\n- [Robust Reinforcement Learning Objectives for Sequential Recommender Systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.18820)\n  - Melissa Mozifian, Tristan Sylvain, Dave Evans, and Lili Meng. arXiv, 2023.\n- [The Benefits of Being Distributional: Small-Loss Bounds for Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.15703)\n  - Kaiwen Wang, Kevin Zhou, Runzhe Wu, Nathan Kallus, and Wen Sun. 
arXiv, 2023.\n- [PROTO: Iterative Policy Regularized Offline-to-Online Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.15669)\n  - Jianxiong Li, Xiao Hu, Haoran Xu, Jingjing Liu, Xianyuan Zhan, and Ya-Qin Zhang. arXiv, 2023.\n- [Matrix Estimation for Offline Reinforcement Learning with Low-Rank Structure](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.15621)\n  - Xumei Xi, Christina Lee Yu, and Yudong Chen. arXiv, 2023.\n- [Offline Experience Replay for Continual Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.13804)\n  - Sibo Gai, Donglin Wang, and Li He. arXiv, 2023.\n- [Causal Decision Transformer for Recommender Systems via Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.07920)\n  - Siyu Wang, Xiaocong Chen, Dietmar Jannach, and Lina Yao. arXiv, 2023.\n- [Data Might be Enough: Bridge Real-World Traffic Signal Control Using Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.10828)\n  - Liang Zhang and Jianming Deng. arXiv, 2023.\n- [User Retention-oriented Recommendation with Decision Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.06347)\n  - Kesen Zhao, Lixin Zou, Xiangyu Zhao, Maolin Wang, and Dawei Yin. arXiv, 2023.\n- [Learning to Control Autonomous Fleets from Observation via Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.14833)\n  - Carolin Schmidt, Daniele Gammelli, Francisco Camara Pereira, and Filipe Rodrigues. arXiv, 2023.\n- [INVICTUS: Optimizing Boolean Logic Circuit Synthesis via Synergistic Learning and Search](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.13164)\n  - Animesh Basak Chowdhury, Marco Romanelli, Benjamin Tan, Ramesh Karri, and Siddharth Garg. arXiv, 2023.\n- [Learning Vision-based Robotic Manipulation Tasks Sequentially in Offline Reinforcement Learning Settings](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.13450)\n  - Sudhir Pratap Yadav, Rajendra Nagar, and Suril V. Shah. 
arXiv, 2023.\n- [Winning Solution of Real Robot Challenge III](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.13019)\n  - Qiang Wang, Robert McCarthy, David Cordova Bulens, and Stephen J. Redmond. arXiv, 2023.\n- [Learning-based MPC from Big Data Using Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.01667)\n  - Shambhuraj Sawant, Akhil S Anand, Dirk Reinhardt, and Sebastien Gros. arXiv, 2023.\n- [Offline Reinforcement Learning for Mixture-of-Expert Dialogue Management](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.10850)\n  - Dhawal Gupta, Yinlam Chow, Aza Tulepbergenov, Mohammad Ghavamzadeh, and Craig Boutilier. NeurIPS, 2023.\n- [Beyond Reward: Offline Preference-guided Policy Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.16217)\n  - Yachen Kang, Diyuan Shi, Jinxin Liu, Li He, and Donglin Wang. ICML, 2023.\n- [DevFormer: A Symmetric Transformer for Context-Aware Device Placement](https:\u002F\u002Fopenreview.net\u002Fforum?id=pWk5MoS04I)\n  - Haeyeon Kim, Minsu Kim, Federico Berto, Joungho Kim, and Jinkyoo Park. ICML, 2023.\n- [On the Effectiveness of Offline RL for Dialogue Response Generation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.12425)\n  - Paloma Sodhi, Felix Wu, Ethan R. Elenberg, Kilian Q. Weinberger, and Ryan McDonald. ICML, 2023.\n- [Bidirectional Learning for Offline Model-based Biological Sequence Design](https:\u002F\u002Fopenreview.net\u002Fforum?id=CUORPu6abU)\n  - Can Chen, Yingxue Zhang, Xue Liu, and Mark Coates. ICML, 2023.\n- [ChiPFormer: Transferable Chip Placement via Offline Decision Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.14744)\n  - Yao Lai, Jinxin Liu, Zhentao Tang, Bin Wang, Jianye Hao, and Ping Luo. ICML, 2023.\n- [Semi-Offline Reinforcement Learning for Optimized Text Generation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.09712)\n  - Changyu Chen, Xiting Wang, Yiqiao Jin, Victor Ye Dong, Li Dong, Jie Cao, Yi Liu, and Rui Yan. 
ICML, 2023.\n- [Neural Constraint Satisfaction: Hierarchical Abstraction for Combinatorial Generalization in Object Rearrangement](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.11373)\n  - Michael Chang, Alyssa L. Dayan, Franziska Meier, Thomas L. Griffiths, Sergey Levine, and Amy Zhang. ICLR, 2023.\n- [Offline RL for Natural Language Generation with Implicit Language Q Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.11871)\n  - Charlie Snell, Ilya Kostrikov, Yi Su, Mengjiao Yang, and Sergey Levine. ICLR, 2023.\n- [Action-Quantized Offline Reinforcement Learning for Robotic Skill Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.11731)\n  - Jianlan Luo, Perry Dong, Jeffrey Wu, Aviral Kumar, Xinyang Geng, and Sergey Levine. CoRL, 2023.\n- [Building Persona Consistent Dialogue Agents with Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.10735)\n  - Ryan Shea and Zhou Yu. EMNLP, 2023.\n- [Dialog Action-Aware Transformer for Dialog Policy Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.02240)\n  - Huimin Wang, Wai-Chung Kwan, and Kam-Fai Wong. SIGdial, 2023.\n- [Can Offline Reinforcement Learning Help Natural Language Understanding?](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.03864)\n  - Ziqi Zhang, Yile Wang, Yue Zhang, and Donglin Wang. arXiv, 2022.\n- [NeurIPS 2022 Competition: Driving SMARTS](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.07545)\n  - Amir Rasouli, Randy Goebel, Matthew E. Taylor, Iuliia Kotseruba, Soheil Alizadeh, Tianpei Yang, Montgomery Alban, Florian Shkurti, Yuzheng Zhuang, Adam Scibior, Kasra Rezaee, Animesh Garg, David Meger, Jun Luo, Liam Paull, Weinan Zhang, Xinyu Wang, and Xi Chen. 
arXiv, 2022.\n- [Controlling Commercial Cooling Systems Using Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.07357)\n  - Jerry Luo, Cosmin Paduraru, Octavian Voicu, Yuri Chervonyi, Scott Munns, Jerry Li, Crystal Qian, Praneet Dutta, Jared Quincy Davis, Ningjia Wu, Xingwei Yang, Chu-Ming Chang, Ted Li, Rob Rose, Mingyan Fan, Hootan Nakhost, Tinglin Liu, Brian Kirkman, Frank Altamura, Lee Cline, Patrick Tonker, Joel Gouker, Dave Uden, Warren Buddy Bryan, Jason Law, Deeni Fatiha, Neil Satra, Juliet Rothenberg, Molly Carlin, Satish Tallapaka, Sims Witherspoon, David Parish, Peter Dolan, Chenyu Zhao, and Daniel J. Mankowitz. arXiv, 2022.\n- [Pre-Training for Robots: Offline RL Enables Learning New Tasks from a Handful of Trials](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.05178) [[code](https:\u002F\u002Fgithub.com\u002FAsap7772\u002FPTR)]\n  - Aviral Kumar, Anikait Singh, Frederik Ebert, Yanlai Yang, Chelsea Finn, and Sergey Levine. arXiv, 2022.\n- [Towards Safe Mechanical Ventilation Treatment Using Deep Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.02552)\n  - Flemming Kondrup, Thomas Jiralerspong, Elaine Lau, Nathan de Lara, Jacob Shkrob, My Duc Tran, Doina Precup, and Sumana Basu. IAAI, 2023.\n- [Learning-to-defer for sequential medical decision-making under uncertainty](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.06312)\n  - Shalmali Joshi, Sonali Parbhoo, and Finale Doshi-Velez. TMLR, 2023.\n- [Imitation Is Not Enough: Robustifying Imitation with Reinforcement Learning for Challenging Driving Scenarios](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.11419)\n  - Yiren Lu, Justin Fu, George Tucker, Xinlei Pan, Eli Bronstein, Rebecca Roelofs, Benjamin Sapp, Brandyn White, Aleksandra Faust, Shimon Whiteson, Dragomir Anguelov, and Sergey Levine. 
arXiv, 2022.\n- [Dialogue Evaluation with Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.00876)\n  - Nurul Lubis, Christian Geishauser, Hsien-Chin Lin, Carel van Niekerk, Michael Heck, Shutong Feng, and Milica Gašić. arXiv, 2022.\n- [Multi-Task Fusion via Reinforcement Learning for Long-Term User Satisfaction in Recommender Systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2208.04560)\n  - Qihua Zhang, Junning Liu, Yuzhuo Dai, Yiyan Qi, Yifan Yuan, Kunlun Zheng, Fan Huang, and Xianfeng Tan. arXiv, 2022.\n- [A Maintenance Planning Framework using Online and Offline Deep Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2208.00808)\n  - Zaharah A. Bukhsh, Nils Jansen, and Hajo Molegraaf. arXiv, 2022.\n- [BCRLSP: An Offline Reinforcement Learning Framework for Sequential Targeted Promotion](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.07790)\n  - Fanglin Chen, Xiao Liu, Bo Tang, Feiyu Xiong, Serim Hwang, and Guomian Zhuang. arXiv, 2022.\n- [Learning Optimal Treatment Strategies for Sepsis Using Offline Reinforcement Learning in Continuous Space](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.11190)\n  - Zeyu Wang, Huiying Zhao, Peng Ren, Yuxi Zhou, and Ming Sheng. arXiv, 2022.\n- [Rethinking Reinforcement Learning for Recommendation: A Prompt Perspective](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.07353)\n  - Xin Xin, Tiago Pimentel, Alexandros Karatzoglou, Pengjie Ren, Konstantina Christakopoulou, and Zhaochun Ren. arXiv, 2022.\n- [ARLO: A Framework for Automated Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.10416)\n  - Marco Mussi, Davide Lombarda, Alberto Maria Metelli, Francesco Trovò, and Marcello Restelli. arXiv, 2022.\n- [A Reinforcement Learning-based Volt-VAR Control Dataset and Testing Environment](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.09500)\n  - Yuanqi Gao and Nanpeng Yu. 
arXiv, 2022.\n- [CHAI: A CHatbot AI for Task-Oriented Dialogue with Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.08426)\n  - Siddharth Verma, Justin Fu, Mengjiao Yang, and Sergey Levine. arXiv, 2022.\n- [Offline Reinforcement Learning for Safer Blood Glucose Control in People with Type 1 Diabetes](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.03376) [[code](https:\u002F\u002Fgithub.com\u002Fhemerson1\u002Foffline-glucose)]\n  - Harry Emerson, Matt Guy, and Ryan McConville. arXiv, 2022.\n- [CIRS: Bursting Filter Bubbles by Counterfactual Interactive Recommender System](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.01266) [[code](https:\u002F\u002Fgithub.com\u002Fchongminggao\u002FCIRS-codes)]\n  - Chongming Gao, Wenqiang Lei, Jiawei Chen, Shiqi Wang, Xiangnan He, Shijun Li, Biao Li, Yuan Zhang, and Peng Jiang. arXiv, 2022.\n- [A Conservative Q-Learning approach for handling distribution shift in sepsis treatment strategies](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.13884)\n  - Pramod Kaushik, Sneha Kummetha, Perusha Moodley, and Raju S. Bapi. arXiv, 2022.\n- [Optimizing Trajectories for Highway Driving with Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.10949)\n  - Branka Mirchevska, Moritz Werling, and Joschka Boedecker. arXiv, 2022.\n- [Offline Deep Reinforcement Learning for Dynamic Pricing of Consumer Credit](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.03003)\n  - Raad Khraishi and Ramin Okhrati. arXiv, 2022.\n- [Offline Reinforcement Learning for Mobile Notifications](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.03867)\n  - Yiping Yuan, Ajith Muralidharan, Preetam Nandy, Miao Cheng, and Prakruthi Prabhakar. arXiv, 2022.\n- [Offline Reinforcement Learning for Road Traffic Control](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.02381)\n  - Mayuresh Kunjir and Sanjay Chawla. 
arXiv, 2022.\n- [Sustainable Online Reinforcement Learning for Auto-bidding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.07006)\n  - Zhiyu Mou, Yusen Huo, Rongquan Bai, Mingzhou Xie, Chuan Yu, Jian Xu, and Bo Zheng. NeurIPS, 2022.\n- [Leveraging Factored Action Spaces for Efficient Offline Reinforcement Learning in Healthcare](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.01738)\n  - Shengpu Tang, Maggie Makar, Michael W. Sjoding, Finale Doshi-Velez, and Jenna Wiens. NeurIPS, 2022.\n- [Multi-objective Optimization of Notifications Using Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.03029)\n  - Prakruthi Prabhakar, Yiping Yuan, Guangyu Yang, Wensheng Sun, and Ajith Muralidharan. KDD, 2022.\n- [Pessimism meets VCG: Learning Dynamic Mechanism Design via Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.02450)\n  - Boxiang Lyu, Zhaoran Wang, Mladen Kolar, and Zhuoran Yang. ICML, 2022.\n- [GPT-Critic: Offline Reinforcement Learning for End-to-End Task-Oriented Dialogue Systems](https:\u002F\u002Fopenreview.net\u002Fforum?id=qaxhBG1UUaS)\n  - Youngsoo Jang, Jongmin Lee, and Kee-Eung Kim. ICLR, 2022.\n- [Offline Reinforcement Learning for Visual Navigation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.08244)\n  - Dhruv Shah, Arjun Bhorkar, Hrish Leen, Ilya Kostrikov, Nick Rhinehart, and Sergey Levine. CoRL, 2022.\n- [Semi-Markov Offline Reinforcement Learning for Healthcare](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.09365)\n  - Mehdi Fatemi, Mary Wu, Jeremy Petch, Walter Nelson, Stuart J. Connolly, Alexander Benz, Anthony Carnicelli, and Marzyeh Ghassemi. CHIL, 2022.\n- [Automate Page Layout Optimization: An Offline Deep Q-Learning Approach](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3523227.3547400)\n  - Zhou Qin and Wenyang Liu. 
RecSys, 2022.\n- [RL4RS: A Real-World Benchmark for Reinforcement Learning based Recommender System](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.11073) [[code](https:\u002F\u002Fgithub.com\u002FfuxiAIlab\u002FRL4RS)] [[dataset](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1YbPtPyYrMvMGOuqD4oHvK0epDtEhEb9v\u002Fview)]\n  - Kai Wang, Zhene Zou, Yue Shang, Qilin Deng, Minghao Zhao, Yile Liang, Runze Wu, Jianrong Tao, Xudong Shen, Tangjie Lyu, and Changjie Fan. arXiv, 2021.\n- [Compressive Features in Offline Reinforcement Learning for Recommender Systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.08817)\n  - Hung Nguyen, Minh Nguyen, Long Pham, and Jennifer Adorno Nieves. arXiv, 2021.\n- [Causal-aware Safe Policy Improvement for Task-oriented dialogue](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.06370)\n  - Govardana Sachithanandam Ramachandran, Kazuma Hashimoto, and Caiming Xiong. arXiv, 2021.\n- [Offline Contextual Bandits for Wireless Network Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.08587)\n  - Miguel Suau, Alexandros Agapitos, David Lynch, Derek Farrell, Mingqi Zhou, and Aleksandar Milenovic. arXiv, 2021.\n- [Identifying Decision Points for Safe and Interpretable Reinforcement Learning in Hypotension Treatment](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.03309)\n  - Kristine Zhang, Yuanheng Wang, Jianzhun Du, Brian Chu, Leo Anthony Celi, Ryan Kindle, and Finale Doshi-Velez. arXiv, 2021.\n- [Offline Reinforcement Learning for Autonomous Driving with Safety and Exploration Enhancement](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.07067)\n  - Tianyu Shi, Dong Chen, Kaian Chen, and Zhaojian Li. arXiv, 2021.\n- [Medical Dead-ends and Learning to Identify High-risk States and Treatments](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.04186)\n  - Mehdi Fatemi, Taylor W. Killian, Jayakumar Subramanian, and Marzyeh Ghassemi. 
arXiv, 2021.\n- [An Offline Deep Reinforcement Learning for Maintenance Decision-Making](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.15050)\n  - Hamed Khorasgani, Haiyan Wang, Chetan Gupta, and Ahmed Farahat. arXiv, 2021.\n- [Learning Language-Conditioned Robot Behavior from Offline Data and Crowd-Sourced Annotation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.01115)\n  - Suraj Nair, Eric Mitchell, Kevin Chen, Brian Ichter, Silvio Savarese, and Chelsea Finn. arXiv, 2021.\n- [Offline-Online Reinforcement Learning for Energy Pricing in Office Demand Response: Lowering Energy and Data Costs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2108.06594)\n  - Doseok Jang, Lucas Spangher, Manan Khattar, Utkarsha Agwan, Selvaprabuh Nadarajah, and Costas Spanos. arXiv, 2021.\n- [Offline reinforcement learning with uncertainty for treatment strategies in sepsis](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.04491)\n  - Ran Liu, Joseph L. Greenstein, James C. Fackler, Jules Bergmann, Melania M. Bembea, and Raimond L. Winslow. arXiv, 2021.\n- [Improving Long-Term Metrics in Recommendation Systems using Short-Horizon Offline RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.00589)\n  - Bogdan Mazoure, Paul Mineiro, Pavithra Srinath, Reza Sharifi Sedeh, Doina Precup, and Adith Swaminathan. arXiv, 2021.\n- [Safe Model-based Off-policy Reinforcement Learning for Eco-Driving in Connected and Automated Hybrid Electric Vehicles](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.11640)\n  - Zhaoxuan Zhu, Nicola Pivaro, Shobhit Gupta, Abhishek Gupta, and Marcello Canova. arXiv, 2021.\n- [pH-RL: A personalization architecture to bring reinforcement learning to health practice](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.15908)\n  - Ali el Hassouni, Mark Hoogendoorn, Marketa Ciharova, Annet Kleiboer, Khadicha Amarti, Vesa Muhonen, Heleen Riper, and A. E. Eiben. 
arXiv, 2021.\n- [DeepThermal: Combustion Optimization for Thermal Power Generating Units Using Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.11492) [[podcast](https:\u002F\u002Fwww.talkrl.com\u002Fepisodes\u002Fxianyuan-zhan)]\n  - Xianyuan Zhan, Haoran Xu, Yue Zhang, Yusen Huo, Xiangyu Zhu, Honglei Yin, and Yu Zheng. arXiv, 2021.\n- [Personalization for Web-based Services using Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.05612)\n  - Pavlos Athanasios Apostolopoulos, Zehui Wang, Hanson Wang, Chad Zhou, Kittipat Virochsiri, Norm Zhou, and Igor L. Markov. arXiv, 2021.\n- [BCORLE(λ): An Offline Reinforcement Learning and Evaluation Framework for Coupons Allocation in E-commerce Market](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2021\u002Fhash\u002Fab452534c5ce28c4fbb0e102d4a4fb2e-Abstract.html)\n  - Yang Zhang, Bo Tang, Qingyu Yang, Dou An, Hongyin Tang, Chenyang Xi, Xueying Li, and Feiyu Xiong. NeurIPS, 2021.\n- [Safe Driving via Expert Guided Policy Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.06831) [[website](https:\u002F\u002Fdecisionforce.github.io\u002FEGPO\u002F)] [[code](https:\u002F\u002Fgithub.com\u002Fdecisionforce\u002FEGPO)]\n  - Zhenghao Peng, Quanyi Li, Chunxiao Liu, and Bolei Zhou. CoRL, 2021.\n- [A General Offline Reinforcement Learning Framework for Interactive Recommendation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.00678)\n  - Teng Xiao and Donglin Wang. AAAI, 2021.\n- [Value Function is All You Need: A Unified Learning Framework for Ride Hailing Platforms](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.08791)\n  - Xiaocheng Tang, Fan Zhang, Zhiwei (Tony) Qin, Yansheng Wang, Dingyuan Shi, Bingchen Song, Yongxin Tong, Hongtu Zhu, and Jieping Ye. KDD, 2021.\n- [Discovering an Aid Policy to Minimize Student Evasion Using Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.10258)\n  - Leandro M. de Lima and Renato A. Krohling. 
IJCNN, 2021.\n- [Learning robust driving policies without online exploration](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.08070)\n  - Daniel Graves, Nhat M. Nguyen, Kimia Hassanzadeh, Jun Jin, and Jun Luo. ICRA, 2021.\n- [Engagement Rewarded Actor-Critic with Conservative Q-Learning for Speech-Driven Laughter Backchannel Generation](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3462244.3479944)\n  - Öykü Zeynep Bayramoğlu, Engin Erzin, Tevfik Metin Sezgin, and Yücel Yemez. ICMI, 2021.\n- [Network Intrusion Detection Based on Extended RBF Neural Network With Offline Reinforcement Learning](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9612220)\n  - Manuel Lopez-Martin, Antonio Sanchez-Esguevillas, Juan Ignacio Arribas, and Belen Carro. IEEE Access, 2021.\n- [Towards Accelerating Offline RL based Recommender Systems](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3486001.3486244)\n  - Mayank Mishra, Rekha Singhal, and Ravi Singh. AIMLSystems, 2021.\n- [Offline Meta-level Model-based Reinforcement Learning Approach for Cold-Start Recommendation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.02476)\n  - Yanan Wang, Yong Ge, Li Li, Rui Chen, and Tong Xu. arXiv, 2020.\n- [Batch-Constrained Distributional Reinforcement Learning for Session-based Recommendation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.08984)\n  - Diksha Garg, Priyanka Gupta, Pankaj Malhotra, Lovekesh Vig, and Gautam Shroff. arXiv, 2020.\n- [An Empirical Study of Representation Learning for Reinforcement Learning in Healthcare](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.11235)\n  - Taylor W. Killian, Haoran Zhang, Jayakumar Subramanian, Mehdi Fatemi, and Marzyeh Ghassemi. arXiv, 2020.\n- [Learning from Human Feedback: Challenges for Real-World Reinforcement Learning in NLP](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.02511)\n  - Julia Kreutzer, Stefan Riezler, and Carolin Lawrence. 
arXiv, 2020.\n- [Remote Electrical Tilt Optimization via Safe Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.05842)\n  - Filippo Vannella, Grigorios Iakovidis, Ezeddin Al Hakim, Erik Aumayr, and Saman Feghhi. arXiv, 2020.\n- [An Optimistic Perspective on Offline Reinforcement Learning](http:\u002F\u002Fproceedings.mlr.press\u002Fv119\u002Fagarwal20c.html) [[website](https:\u002F\u002Foffline-rl.github.io\u002F)] [[blog](https:\u002F\u002Fai.googleblog.com\u002F2020\u002F04\u002Fan-optimistic-perspective-on-offline.html)]\n  - Rishabh Agarwal, Dale Schuurmans, and Mohammad Norouzi. ICML, 2020.\n- [Policy Teaching via Environment Poisoning: Training-time Adversarial Attacks against Reinforcement Learning](http:\u002F\u002Fproceedings.mlr.press\u002Fv119\u002Frakhsha20a.html)\n  - Amin Rakhsha, Goran Radanovic, Rati Devidze, Xiaojin Zhu, and Adish Singla. ICML, 2020.\n- [Offline Contextual Multi-armed Bandits for Mobile Health Interventions: A Case Study on Emotion Regulation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.09472)\n  - Mawulolo K. Ameko, Miranda L. Beltzer, Lihua Cai, Mehdi Boukhechba, Bethany A. Teachman, and Laura E. Barnes. RecSys, 2020.\n- [Human-centric Dialog Training via Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.05848)\n  - Natasha Jaques, Judy Hanwen Shen, Asma Ghandeharioun, Craig Ferguson, Agata Lapedriza, Noah Jones, Shixiang Shane Gu, and Rosalind Picard. EMNLP, 2020.\n- [Definition and evaluation of model-free coordination of electrical vehicle charging with reinforcement learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1809.10679)\n  - Nasrin Sadeghianpourhamami, Johannes Deleu, and Chris Develder. IEEE T SMART GRID, 2020.\n- [Optimal Tap Setting of Voltage Regulation Transformers Using Batch Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1807.10997)\n  - Hanchen Xu, Alejandro D. Domínguez-García, and Peter W. Sauer. 
IEEE T POWER SYSTEMS, 2020.\n- [Way Off-Policy Batch Deep Reinforcement Learning of Implicit Human Preferences in Dialog](https:\u002F\u002Farxiv.org\u002Fabs\u002F1907.00456)\n  - Natasha Jaques, Asma Ghandeharioun, Judy Hanwen Shen, Craig Ferguson, Agata Lapedriza, Noah Jones, Shixiang Gu, and Rosalind Picard. arXiv, 2019.\n- [Optimized cost function for demand response coordination of multiple EV charging stations using reinforcement learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.01654)\n  - Manu Lahariya, Nasrin Sadeghianpourhamami, and Chris Develder. BuildSys, 2019.\n- [A Clustering-Based Reinforcement Learning Approach for Tailored Personalization of E-Health Interventions](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.03592)\n  - Ali el Hassouni, Mark Hoogendoorn, Martijn van Otterlo, A. E. Eiben, Vesa Muhonen, and Eduardo Barbaro. arXiv, 2018.\n- [Generating Interpretable Fuzzy Controllers using Particle Swarm Optimization and Genetic Programming](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.10960)\n  - Daniel Hein, Steffen Udluft, and Thomas A. Runkler. GECCO, 2018.\n- [End-to-End Offline Goal-Oriented Dialog Policy Learning via Policy Gradient](https:\u002F\u002Farxiv.org\u002Fabs\u002F1712.02838)\n  - Li Zhou, Kevin Small, Oleg Rokhlenko, and Charles Elkan. arXiv, 2017.\n- [Batch Reinforcement Learning on the Industrial Benchmark: First Experiences](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.07262)\n  - Daniel Hein, Steffen Udluft, Michel Tokic, Alexander Hentschel, Thomas A. Runkler, and Volkmar Sterzing. IJCNN, 2017.\n- [Policy Networks with Two-Stage Training for Dialogue Systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F1606.03152)\n  - Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, and Kaheer Suleman. SIGDial, 2016.\n- [Adaptive Treatment of Epilepsy via Batch-mode Reinforcement Learning](https:\u002F\u002Fwww.aaai.org\u002FLibrary\u002FIAAI\u002F2008\u002Fiaai08-008.php)\n  - Arthur Guez, Robert D. 
Vincent, Massimo Avoli, and Joelle Pineau. IAAI, 2008.\n\n### Off-Policy Evaluation and Learning: Theory\u002FMethods\n#### Off-Policy Evaluation: Contextual Bandits\n- [Off-Policy Evaluation of Slate Bandit Policies via Optimizing Abstraction](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.02171)\n  - Haruka Kiyohara, Masahiro Nomura, and Yuta Saito. WWW, 2024.\n- [Distributionally Robust Policy Evaluation under General Covariate Shift in Contextual Bandits](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.11353)\n  - Yihong Guo, Hao Liu, Yisong Yue, and Anqi Liu. arXiv, 2024.\n- [Off-Policy Evaluation for Large Action Spaces via Conjunct Effect Modeling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.08062)\n  - Yuta Saito, Qingyang Ren, and Thorsten Joachims. ICML, 2023.\n- [Multiply Robust Off-policy Evaluation and Learning under Truncation by Death](https:\u002F\u002Fopenreview.net\u002Fforum?id=FQlsEvyQ4N)\n  - Jianing Chu, Shu Yang, and Wenbin Lu. ICML, 2023.\n- [Off-Policy Evaluation of Ranking Policies under Diverse User Behavior](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.15098)\n  - Haruka Kiyohara, Masatoshi Uehara, Yusuke Narita, Nobuyuki Shimizu, Yasuo Yamamoto, and Yuta Saito. KDD, 2023.\n- [Policy-Adaptive Estimator Selection for Off-Policy Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.13904)\n  - Takuma Udagawa, Haruka Kiyohara, Yusuke Narita, Yuta Saito, and Kei Tateno. AAAI, 2023.\n- [Variance-Optimal Augmentation Logging for Counterfactual Evaluation in Contextual Bandits](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.01721)\n  - Aaron David Tucker and Thorsten Joachims. WSDM, 2023.\n- [Offline Policy Evaluation in Large Action Spaces via Outcome-Oriented Action Grouping](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3543507.3583448)\n  - Jie Peng, Hao Zou, Jiashuo Liu, Shaoming Li, Yibao Jiang, Jian Pei, and Peng Cui. 
WWW, 2023.\n- [Off-Policy Evaluation for Large Action Spaces via Policy Convolution](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.15433)\n  - Noveen Sachdeva, Lequn Wang, Dawen Liang, Nathan Kallus, and Julian McAuley. arXiv, 2023.\n- [Distributional Off-Policy Evaluation for Slate Recommendations](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.14165)\n  - Shreyas Chaudhari, David Arbour, Georgios Theocharous, and Nikos Vlassis. arXiv, 2023.\n- [Debiased Machine Learning and Network Cohesion for Doubly-Robust Differential Reward Models in Contextual Bandits](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.06403)\n  - Easton K. Huch, Jieru Shi, Madeline R. Abbott, Jessica R. Golbus, Alexander Moreno, and Walter H. Dempsey. arXiv, 2023.\n- [Doubly Robust Estimator for Off-Policy Evaluation with Large Action Spaces](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.03443)\n  - Tatsuhiro Shimizu. arXiv, 2023.\n- [Offline Policy Evaluation with Out-of-Sample Guarantees](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.08649)\n  - Sofia Ek and Dave Zachariah. arXiv, 2023.\n- [Quantile Off-Policy Evaluation via Deep Conditional Generative Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.14466)\n  - Yang Xu, Chengchun Shi, Shikai Luo, Lan Wang, and Rui Song. arXiv, 2023.\n- [Doubly Robust Off-Policy Evaluation for Ranking Policies under the Cascade Behavior Model](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.01562) [[code](https:\u002F\u002Fgithub.com\u002Faiueola\u002Fwsdm2022-cascade-dr)]\n  - Haruka Kiyohara, Yuta Saito, Tatsuya Matsuhiro, Yusuke Narita, Nobuyuki Shimizu, and Yasuo Yamamoto. WSDM, 2022.\n- [Off-Policy Evaluation for Large Action Spaces via Embeddings](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.06317) [[code](https:\u002F\u002Fgithub.com\u002Fusaito\u002Ficml2022-mips)] [[video](https:\u002F\u002Fyoutu.be\u002FHrqhv-AsMRE)]\n  - Yuta Saito and Thorsten Joachims. 
ICML, 2022.\n- [Doubly Robust Distributionally Robust Off-Policy Evaluation and Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.09667)\n  - Nathan Kallus, Xiaojie Mao, Kaiwen Wang, and Zhengyuan Zhou. ICML, 2022.\n- [Local Metric Learning for Off-Policy Evaluation in Contextual Bandits with Continuous Actions](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.13373)\n  - Haanvid Lee, Jongmin Lee, Yunseon Choi, Wonseok Jeon, Byung-Jun Lee, Yung-Kyun Noh, and Kee-Eung Kim. NeurIPS, 2022.\n- [Conformal Off-Policy Prediction in Contextual Bandits](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.04405)\n  - Muhammad Faaiz Taufiq, Jean-Francois Ton, Rob Cornish, Yee Whye Teh, and Arnaud Doucet. NeurIPS, 2022.\n- [Off-Policy Evaluation with Policy-Dependent Optimization Response](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.12958)\n  - Wenshuo Guo, Michael I. Jordan, and Angela Zhou. NeurIPS, 2022.\n- [Off-Policy Evaluation with Deficient Support Using Side Information](https:\u002F\u002Fopenreview.net\u002Fforum?id=uFSrUpapQ5K)\n  - Nicolò Felicioni, Maurizio Ferrari Dacrema, Marcello Restelli, and Paolo Cremonesi. NeurIPS, 2022.\n- [Towards Robust Off-Policy Evaluation via Human Inputs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.08682)\n  - Harvineet Singh, Shalmali Joshi, Finale Doshi-Velez, and Himabindu Lakkaraju. AIES, 2022.\n- [Off-policy evaluation for learning-to-rank via interpolating the item-position model and the position-based model](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.09512)\n  - Alexander Buchholz, Ben London, Giuseppe di Benedetto, and Thorsten Joachims. arXiv, 2022.\n- [Bayesian Counterfactual Mean Embeddings and Off-Policy Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.01518)\n  - Diego Martinez-Taboada and Dino Sejdinovic. 
arXiv, 2022.\n- [Anytime-valid off-policy inference for contextual bandits](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.10768)\n  - Ian Waudby-Smith, Lili Wu, Aaditya Ramdas, Nikos Karampatziakis, and Paul Mineiro. arXiv, 2022.\n- [Off-policy estimation of linear functionals: Non-asymptotic theory for semi-parametric efficiency](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.13075)\n  - Wenlong Mou, Martin J. Wainwright, and Peter L. Bartlett. arXiv, 2022.\n- [Off-Policy Evaluation in Embedded Spaces](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.02807)\n  - Jaron J. R. Lee, David Arbour, and Georgios Theocharous. arXiv, 2022.\n- [Safe Exploration for Efficient Policy Evaluation and Comparison](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.13234)\n  - Runzhe Wan, Branislav Kveton, and Rui Song. arXiv, 2022.\n- [Inverse Propensity Score based offline estimator for deterministic ranking lists using position bias](https:\u002F\u002Farxiv.org\u002Fabs\u002F2208.14980)\n  - Nick Wood and Sumit Sidana. arXiv, 2022.\n- [Subgaussian and Differentiable Importance Sampling for Off-Policy Evaluation and Learning](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2021\u002Fhash\u002F4476b929e30dd0c4e8bdbcc82c6ba23a-Abstract.html)\n  - Alberto Maria Metelli, Alessio Russo, and Marcello Restelli. NeurIPS, 2021.\n- [Control Variates for Slate Off-Policy Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.07914)\n  - Nikos Vlassis, Ashok Chandrashekar, Fernando Amat Gil, and Nathan Kallus. NeurIPS, 2021.\n- [Deep Jump Learning for Off-Policy Evaluation in Continuous Treatment Settings](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.15963)\n  - Hengrui Cai, Chengchun Shi, Rui Song, and Wenbin Lu. NeurIPS, 2021.\n- [Optimal Off-Policy Evaluation from Multiple Logging Policies](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.11002) [[code](https:\u002F\u002Fgithub.com\u002FCausalML\u002FMultipleLoggers)]\n  - Nathan Kallus, Yuta Saito, and Masatoshi Uehara. 
ICML, 2021.\n- [Off-policy Confidence Sequences](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.09540)\n  - Nikos Karampatziakis, Paul Mineiro, and Aaditya Ramdas. ICML, 2021.\n- [Confident Off-Policy Evaluation and Selection through Self-Normalized Importance Weighting](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.10460) [[video](https:\u002F\u002Fyoutu.be\u002F0MYRwW6BdvU)]\n  - Ilja Kuzborskij, Claire Vernade, András György, and Csaba Szepesvári. AISTATS, 2021.\n- [Off-Policy Evaluation Using Information Borrowing and Context-Based Switching](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.09865)\n  - Sutanoy Dasgupta, Yabo Niu, Kishan Panaganti, Dileep Kalathil, Debdeep Pati, and Bani Mallick. arXiv, 2021.\n- [Identification of Subgroups With Similar Benefits in Off-Policy Policy Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.14272)\n  - Ramtin Keramati, Omer Gottesman, Leo Anthony Celi, Finale Doshi-Velez, and Emma Brunskill. arXiv, 2021.\n- [Robust On-Policy Data Collection for Data-Efficient Policy Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.14552)\n  - Rujie Zhong, Josiah P. Hanna, Lukas Schäfer, and Stefano V. Albrecht. arXiv, 2021.\n- [Off-Policy Evaluation via Adaptive Weighting with Data from Contextual Bandits](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.02029)\n  - Ruohan Zhan, Vitor Hadad, David A. Hirshberg, and Susan Athey. arXiv, 2021.\n- [Off-Policy Risk Assessment in Contextual Bandits](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.08977)\n  - Audrey Huang, Liu Leqi, Zachary C. Lipton, and Kamyar Azizzadenesheli. arXiv, 2021.\n- [Off-Policy Evaluation of Slate Policies under Bayes Risk](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.02553)\n  - Nikos Vlassis, Fernando Amat Gil, and Ashok Chandrashekar. arXiv, 2021.\n- [A Practical Guide of Off-Policy Evaluation for Bandit Problems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.12470)\n  - Masahiro Kato, Kenshi Abe, Kaito Ariu, and Shota Yasui. 
arXiv, 2020.\n- [Off-Policy Evaluation and Learning for External Validity under a Covariate Shift](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.11642)\n  - Masatoshi Uehara, Masahiro Kato, and Shota Yasui. NeurIPS, 2020.\n- [Counterfactual Evaluation of Slate Recommendations with Sequential Reward Interactions](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.12986)\n  - James McInerney, Brian Brost, Praveen Chandar, Rishabh Mehrotra, and Ben Carterette. KDD, 2020.\n- [Doubly robust off-policy evaluation with shrinkage](http:\u002F\u002Fproceedings.mlr.press\u002Fv119\u002Fsu20a.html)\n  - Yi Su, Maria Dimakopoulou, Akshay Krishnamurthy, and Miroslav Dudik. ICML, 2020.\n- [Adaptive Estimator Selection for Off-Policy Evaluation](http:\u002F\u002Fproceedings.mlr.press\u002Fv119\u002Fsu20d.html) [[video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=r8ZDuC71lCs)]\n  - Yi Su, Pavithra Srinath, and Akshay Krishnamurthy. ICML, 2020.\n- [Distributionally Robust Policy Evaluation and Learning in Offline Contextual Bandits](http:\u002F\u002Fproceedings.mlr.press\u002Fv119\u002Fsi20a.html)\n  - Nian Si, Fan Zhang, Zhengyuan Zhou, and Jose Blanchet. ICML, 2020.\n- [Improving Offline Contextual Bandits with Distributional Robustness](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.06835)\n  - Otmane Sakhi, Louis Faury, and Flavian Vasile. arXiv, 2020.\n- [Balanced Off-Policy Evaluation in General Action Spaces](http:\u002F\u002Fproceedings.mlr.press\u002Fv108\u002Fsondhi20a.html)\n  - Arjun Sondhi, David Arbour, and Drew Dimmery. AISTATS, 2019.\n- [Policy Evaluation with Latent Confounders via Optimal Balance](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2019\u002Fhash\u002F7c4bf50b715509a963ce81b168ca674b-Abstract.html)\n  - Andrew Bennett and Nathan Kallus. 
NeurIPS, 2019.\n- [On the Design of Estimators for Bandit Off-Policy Evaluation](http:\u002F\u002Fproceedings.mlr.press\u002Fv97\u002Fvlassis19a.html)\n  - Nikos Vlassis, Aurelien Bibaut, Maria Dimakopoulou, and Tony Jebara. ICML, 2019.\n- [CAB: Continuous Adaptive Blending for Policy Evaluation and Learning](http:\u002F\u002Fproceedings.mlr.press\u002Fv97\u002Fsu19a.html)\n  - Yi Su, Lequn Wang, Michele Santacatterina, and Thorsten Joachims. ICML, 2019.\n- [Focused Context Balancing for Robust Offline Policy Evaluation](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3292500.3330852)\n  - Hao Zou, Kun Kuang, Boqi Chen, Peixuan Chen, and Peng Cui. KDD, 2019.\n- [When People Change their Mind: Off-Policy Evaluation in Non-Stationary Recommendation Environments](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3289600.3290958)\n  - Rolf Jagerman, Ilya Markov, and Maarten de Rijke. WSDM, 2019.\n- [Policy Evaluation and Optimization with Continuous Treatments](http:\u002F\u002Fproceedings.mlr.press\u002Fv84\u002Fkallus18a.html)\n  - Nathan Kallus and Angela Zhou. AISTATS, 2018.\n- [Confounding-Robust Policy Improvement](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2018\u002Fhash\u002F3a09a524440d44d7f19870070a5ad42f-Abstract.html)\n  - Nathan Kallus and Angela Zhou. NeurIPS, 2018.\n- [Balanced Policy Evaluation and Learning](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2018\u002Fhash\u002F6616758da438b02b8d360ad83a5b3d77-Abstract.html)\n  - Nathan Kallus. NeurIPS, 2018.\n- [Offline Evaluation of Ranking Policies with Click Models](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3219819.3220028)\n  - Shuai Li, Yasin Abbasi-Yadkori, Branislav Kveton, S. Muthukrishnan, Vishwa Vinay, and Zheng Wen. KDD, 2018.\n- [Effective Evaluation using Logged Bandit Feedback from Multiple Loggers](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.06180)\n  - Aman Agarwal, Soumya Basu, Tobias Schnabel, and Thorsten Joachims. 
KDD, 2018.\n- [Off-policy Evaluation for Slate Recommendation](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2017\u002Fhash\u002F5352696a9ca3397beb79f116f3a33991-Abstract.html)\n  - Adith Swaminathan, Akshay Krishnamurthy, Alekh Agarwal, Miroslav Dudík, John Langford, Damien Jose, and Imed Zitouni. NeurIPS, 2017.\n- [Optimal and Adaptive Off-policy Evaluation in Contextual Bandits](https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.01205)\n  - Yu-Xiang Wang, Alekh Agarwal, and Miroslav Dudik. ICML, 2017.\n- [Data-Efficient Policy Evaluation Through Behavior Policy Search](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fhanna17a.html)\n  - Josiah P. Hanna, Philip S. Thomas, Peter Stone, and Scott Niekum. ICML, 2017.\n- [Doubly Robust Policy Evaluation and Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F1503.02834)\n  - Miroslav Dudík, Dumitru Erhan, John Langford, and Lihong Li. ICML, 2011.\n- [Unbiased Offline Evaluation of Contextual-bandit-based News Article Recommendation Algorithms](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F1935826.1935878)\n  - Lihong Li, Wei Chu, John Langford, and Xuanhui Wang. WSDM, 2011.\n\n#### Off-Policy Evaluation: Reinforcement Learning\n- [Distributional Off-policy Evaluation with Bellman Residual Minimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.01900)\n  - Sungee Hong, Zhengling Qi, and Raymond K. W. Wong. arXiv, 2024.\n- [Future-Dependent Value-Based Off-Policy Evaluation in POMDPs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.13081)\n  - Masatoshi Uehara, Haruka Kiyohara, Andrew Bennett, Victor Chernozhukov, Nan Jiang, Nathan Kallus, Chengchun Shi, and Wen Sun. NeurIPS, 2023.\n- [Marginal Density Ratio for Off-Policy Evaluation in Contextual Bandits](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.01457)\n  - Muhammad Faaiz Taufiq, Arnaud Doucet, Rob Cornish, and Jean-Francois Ton. 
NeurIPS, 2023.\n- [State-Action Similarity-Based Representations for Off-Policy Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.18409)\n  - Brahma S. Pavse and Josiah P. Hanna. NeurIPS, 2023.\n- [Off-Policy Evaluation for Human Feedback](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.07123)\n  - Qitong Gao, Juncheng Dong, Vahid Tarokh, Min Chi, and Miroslav Pajic. NeurIPS, 2023.\n- [Counterfactual-Augmented Importance Sampling for Semi-Offline Policy Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.17146)\n  - Shengpu Tang and Jenna Wiens. NeurIPS, 2023.\n- [An Instrumental Variable Approach to Confounded Off-Policy Evaluation](https:\u002F\u002Fopenreview.net\u002Fforum?id=ZVRWKr3ApD)\n  - Yang Xu, Jin Zhu, Chengchun Shi, Shikai Luo, and Rui Song. ICML, 2023.\n- [Semiparametrically Efficient Off-Policy Evaluation in Linear Markov Decision Processes](https:\u002F\u002Fopenreview.net\u002Fforum?id=6lP80vBiI6)\n  - Chuhan Xie, Wenhao Yang, and Zhihua Zhang. ICML, 2023.\n- [Distributional Offline Policy Evaluation with Predictive Error Guarantees](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.09456)\n  - Runzhe Wu, Masatoshi Uehara, and Wen Sun. ICML, 2023.\n- [The Optimal Approximation Factors in Misspecified Off-Policy Value Function Estimation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.13332)\n  - Philip Amortila, Nan Jiang, and Csaba Szepesvári. ICML, 2023.\n- [Revisiting Bellman Errors for Offline Model Selection](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.00141) [[code](https:\u002F\u002Fgithub.com\u002Fjzitovsky\u002FSBV)]\n  - Joshua P. Zitovsky, Daniel de Marchi, Rishabh Agarwal, and Michael R. Kosorok. ICML, 2023.\n- [Scaling Marginalized Importance Sampling to High-Dimensional State-Spaces via State Abstraction](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.07486)\n  - Brahma S. Pavse and Josiah P. Hanna. 
AAAI, 2023.\n- [Variational Latent Branching Model for Off-Policy Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.12056)\n  - Qitong Gao, Ge Gao, Min Chi, and Miroslav Pajic. ICLR, 2023.\n- [Multiple-policy High-confidence Policy Evaluation](https:\u002F\u002Fproceedings.mlr.press\u002Fv206\u002Fdann23a.html)\n  - Chris Dann, Mohammad Ghavamzadeh, and Teodor V. Marinov. AISTATS, 2023.\n- [Off-Policy Evaluation with Online Adaptation for Robot Exploration in Challenging Environments](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.03140)\n  - Yafei Hu, Junyi Geng, Chen Wang, John Keller, and Sebastian Scherer. RA-L, 2023.\n- [Conservative Exploration for Policy Optimization via Off-Policy Policy Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.15458)\n  - Paul Daoudi, Mathias Formoso, Othman Gaizi, Achraf Azize, and Evrard Garcelon. arXiv, 2023.\n- [Robust Offline Policy Evaluation and Optimization with Heavy-Tailed Rewards](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.18715)\n  - Jin Zhu, Runzhe Wan, Zhengling Qi, Shikai Luo, and Chengchun Shi. arXiv, 2023.\n- [When is Offline Policy Selection Sample Efficient for Reinforcement Learning?](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.02355)\n  - Vincent Liu, Prabhat Nagarajan, Andrew Patterson, and Martha White. arXiv, 2023.\n- [Sample Complexity of Preference-Based Nonparametric Off-Policy Evaluation with Deep Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.10556)\n  - Zihao Li, Xiang Ji, Minshuo Chen, and Mengdi Wang. arXiv, 2023.\n- [Evaluation of Active Feature Acquisition Methods for Static Feature Settings](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.03619)\n  - Henrik von Kleist, Alireza Zamanian, Ilya Shpitser, and Narges Ahmidi. arXiv, 2023.\n- [Distributional Shift-Aware Off-Policy Interval Estimation: A Unified Error Quantification Framework](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.13278)\n  - Wenzhuo Zhou, Yuhan Li, Ruoqing Zhu, and Annie Qu. 
arXiv, 2023.\n- [Marginalized Importance Sampling for Off-Environment Policy Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.01807)\n  - Pulkit Katdare, Nan Jiang, and Katherine Driggs-Campbell. arXiv, 2023.\n- [Statistically Efficient Variance Reduction with Double Policy Estimation for Off-Policy Evaluation in Sequence-Modeled Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.14897)\n  - Hanhan Zhou, Tian Lan, and Vaneet Aggarwal. arXiv, 2023.\n- [Asymptotically Unbiased Off-Policy Policy Evaluation when Reusing Old Data in Nonstationary Environments](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.11725)\n  - Vincent Liu, Yash Chandak, Philip Thomas, and Martha White. arXiv, 2023.\n- [Off-policy Evaluation in Doubly Inhomogeneous Environments](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.08719)\n  - Zeyu Bian, Chengchun Shi, Zhengling Qi, and Lan Wang. arXiv, 2023.\n- [Offline Policy Evaluation for Reinforcement Learning with Adaptively Collected Data](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.14063)\n  - Sunil Madhow, Dan Xiao, Ming Yin, and Yu-Xiang Wang. arXiv, 2023.\n- [π2vec: Policy Representations with Successor Features](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.09800)\n  - Gianluca Scarpellini, Ksenia Konyushkova, Claudio Fantacci, Tom Le Paine, Yutian Chen, and Misha Denil. arXiv, 2023.\n- [Conformal Off-Policy Evaluation in Markov Decision Processes](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.02574)\n  - Daniele Foffano, Alessio Russo, and Alexandre Proutiere. arXiv, 2023.\n- [Hallucinated Adversarial Control for Conservative Offline Policy Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.01076)\n  - Jonas Rothfuss, Bhavya Sukhija, Tobias Birchler, Parnian Kassraie, and Andreas Krause. arXiv, 2023.\n- [Robust Fitted-Q-Evaluation and Iteration under Sequentially Exogenous Unobserved Confounders](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.00662)\n  - David Bruns-Smith and Angela Zhou. 
arXiv, 2023.\n- [Minimax Weight Learning for Absorbing MDPs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.03183)\n  - Fengyin Li, Yuqiang Li, and Xianyi Wu. arXiv, 2023.\n- [Improving Monte Carlo Evaluation with Offline Data](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.13734)\n  - Shuze Liu and Shangtong Zhang. arXiv, 2023.\n- [First-order Policy Optimization for Robust Policy Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.15890)\n  - Yan Li and Guanghui Lan. arXiv, 2023.\n- [A Minimax Learning Approach to Off-Policy Evaluation in Confounded Partially Observable Markov Decision Processes](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.06784)\n  - Chengchun Shi, Masatoshi Uehara, Jiawei Huang, and Nan Jiang. ICML, 2022.\n- [On Well-posedness and Minimax Optimal Rates of Nonparametric Q-function Estimation in Off-policy Evaluation](https:\u002F\u002Fproceedings.mlr.press\u002Fv162\u002Fchen22u.html)\n  - Xiaohong Chen and Zhengling Qi. ICML, 2022.\n- [Learning Bellman Complete Representations for Offline Policy Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.05837)\n  - Jonathan Chang, Kaiwen Wang, Nathan Kallus, and Wen Sun. ICML, 2022.\n- [Supervised Off-Policy Ranking](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.01360)\n  - Yue Jin, Yue Zhang, Tao Qin, Xudong Zhang, Jian Yuan, Houqiang Li, and Tie-Yan Liu. ICML, 2022.\n- [Off-Policy Fitted Q-Evaluation with Differentiable Function Approximators: Z-Estimation and Inference Theory](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.04970)\n  - Ruiqi Zhang, Xuezhou Zhang, Chengzhuo Ni, and Mengdi Wang. ICML, 2022.\n- [Beyond the Return: Off-policy Function Estimation under User-specified Error-measuring Distributions](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.15543)\n  - Audrey Huang and Nan Jiang. NeurIPS, 2022.\n- [Oracle Inequalities for Model Selection in Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.02016)\n  - Jonathan N. 
Lee, George Tucker, Ofir Nachum, Bo Dai, and Emma Brunskill. NeurIPS, 2022.\n- [Off-Policy Evaluation for Episodic Partially Observable Markov Decision Processes under Non-Parametric Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.10064)\n  - Rui Miao, Zhengling Qi, and Xiaoke Zhang. NeurIPS, 2022.\n- [Off-Policy Evaluation for Action-Dependent Non-stationary Environments](https:\u002F\u002Fopenreview.net\u002Fforum?id=PuagBLcAf8n)\n  - Yash Chandak, Shiv Shankar, Nathaniel D. Bastian, Bruno Castro da Silva, Emma Brunskill, and Philip S. Thomas. NeurIPS, 2022.\n- [Stateful Offline Contextual Policy Evaluation and Learning](https:\u002F\u002Fproceedings.mlr.press\u002Fv151\u002Fkallus22a)\n  - Nathan Kallus and Angela Zhou. AISTATS, 2022.\n- [Off-Policy Risk Assessment for Markov Decision Processes](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.10444)\n  - Audrey Huang, Liu Leqi, Zachary Lipton, and Kamyar Azizzadenesheli. AISTATS, 2022.\n- [Offline Reinforcement Learning for Human-Guided Human-Machine Interaction with Private Information](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.12167)\n  - Zuyue Fu, Zhengling Qi, Zhuoran Yang, Zhaoran Wang, and Lan Wang. arXiv, 2022.\n- [Offline Policy Evaluation and Optimization under Confounding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.16583)\n  - Kevin Tan, Yangyi Lu, Chinmaya Kausik, Yixin Wang, and Ambuj Tewari. arXiv, 2022.\n- [Bridging the Gap Between Offline and Online Reinforcement Learning Evaluation Methodologies](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.08131)\n  - Shivakanth Sujit, Pedro H. M. Braga, Jorg Bornschein, and Samira Ebrahimi Kahou. arXiv, 2022.\n- [Safe Evaluation For Offline Learning: Are We Ready To Deploy?](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.08302)\n  - Hager Radi, Josiah P. Hanna, Peter Stone, and Matthew E. Taylor. 
arXiv, 2022.\n- [Low Variance Off-policy Evaluation with State-based Importance Sampling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.03932)\n  - David M. Bossens and Philip Thomas. arXiv, 2022.\n- [Statistical Estimation of Confounded Linear MDPs: An Instrumental Variable Approach](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.05186)\n  - Miao Lu, Wenhao Yang, Liangyu Zhang, and Zhihua Zhang. arXiv, 2022.\n- [Offline Estimation of Controlled Markov Chains: Minimax Nonparametric Estimators and Sample Efficiency](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.07092)\n  - Imon Banerjee, Harsha Honnappa, and Vinayak Rao. arXiv, 2022.\n- [Sample Complexity of Nonparametric Off-Policy Evaluation on Low-Dimensional Manifolds using Deep Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.02887)\n  - Xiang Ji, Minshuo Chen, Mengdi Wang, and Tuo Zhao. arXiv, 2022.\n- [A Sharp Characterization of Linear Estimators for Offline Policy Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.04236)\n  - Juan C. Perdomo, Akshay Krishnamurthy, Peter Bartlett, and Sham Kakade. arXiv, 2022.\n- [A Multi-Agent Reinforcement Learning Framework for Off-Policy Evaluation in Two-sided Markets](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.10574) [[code](https:\u002F\u002Fgithub.com\u002FRunzheStat\u002FCausalMARL)]\n  - Chengchun Shi, Runzhe Wan, Ge Song, Shikai Luo, Rui Song, and Hongtu Zhu. arXiv, 2022.\n- [A Theoretical Framework of Almost Hyperparameter-free Hyperparameter Selection Methods for Offline Policy Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.02300)\n  - Kohei Miyaguchi. arXiv, 2022.\n- [SOPE: Spectrum of Off-Policy Estimators](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.03936)\n  - Christina J. Yuan, Yash Chandak, Stephen Giguere, Philip S. Thomas, and Scott Niekum. 
NeurIPS, 2021.\n- [Unifying Gradient Estimators for Meta-Reinforcement Learning via Off-Policy Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.13125)\n  - Yunhao Tang, Tadashi Kozuno, Mark Rowland, Rémi Munos, and Michal Valko. NeurIPS, 2021.\n- [Variance-Aware Off-Policy Evaluation with Linear Function Approximation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.11960)\n  - Yifei Min, Tianhao Wang, Dongruo Zhou, and Quanquan Gu. NeurIPS, 2021.\n- [Universal Off-Policy Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.12820)\n  - Yash Chandak, Scott Niekum, Bruno Castro da Silva, Erik Learned-Miller, Emma Brunskill, and Philip S. Thomas. NeurIPS, 2021.\n- [Towards Hyperparameter-free Policy Selection for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.14000)\n  - Siyuan Zhang and Nan Jiang. NeurIPS, 2021.\n- [Optimal Uniform OPE and Model-based Offline Reinforcement Learning in Time-Homogeneous, Reward-Free and Task-Agnostic Settings](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2021\u002Fhash\u002F6b3c49bdba5be0d322334e30c459f8bd-Abstract.html)\n  - Ming Yin and Yu-Xiang Wang. NeurIPS, 2021.\n- [State Relevance for Off-Policy Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.06310)\n  - Simon P. Shen, Yecheng Jason Ma, Omer Gottesman, and Finale Doshi-Velez. ICML, 2021.\n- [Bootstrapping Fitted Q-Evaluation for Off-Policy Inference](http:\u002F\u002Fproceedings.mlr.press\u002Fv139\u002Fhao21b.html)\n  - Botao Hao, Xiang Ji, Yaqi Duan, Hao Lu, Csaba Szepesvari, and Mengdi Wang. ICML, 2021.\n- [Deeply-Debiased Off-Policy Interval Estimation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.04646)\n  - Chengchun Shi, Runzhe Wan, Victor Chernozhukov, and Rui Song. ICML, 2021.\n- [Autoregressive Dynamics Models for Offline Policy Evaluation and Optimization](https:\u002F\u002Fopenreview.net\u002Fforum?id=kmqjgSNXby)\n  - Michael R. 
Zhang, Tom Le Paine, Ofir Nachum, Cosmin Paduraru, George Tucker, Ziyu Wang, and Mohammad Norouzi. ICLR, 2021.
- [Minimax Model Learning](http://www.yisongyue.com/publications/aistats2021_mml.pdf)
  - Cameron Voloshin, Nan Jiang, and Yisong Yue. AISTATS, 2021.
- [Off-policy Evaluation in Infinite-Horizon Reinforcement Learning with Latent Confounders](https://arxiv.org/abs/2007.13893)
  - Andrew Bennett, Nathan Kallus, Lihong Li, and Ali Mousavi. AISTATS, 2021.
- [High-Confidence Off-Policy (or Counterfactual) Variance Estimation](https://arxiv.org/abs/2101.09847)
  - Yash Chandak, Shiv Shankar, and Philip S. Thomas. AAAI, 2021.
- [Debiased Off-Policy Evaluation for Recommendation Systems](https://arxiv.org/abs/2002.08536)
  - Yusuke Narita, Shota Yasui, and Kohei Yata. RecSys, 2021.
- [Pessimistic Model Selection for Offline Deep Reinforcement Learning](https://arxiv.org/abs/2111.14346)
  - Chao-Han Huck Yang, Zhengling Qi, Yifan Cui, and Pin-Yu Chen. arXiv, 2021.
- [Proximal Reinforcement Learning: Efficient Off-Policy Evaluation in Partially Observed Markov Decision Processes](https://arxiv.org/abs/2110.15332)
  - Andrew Bennett and Nathan Kallus. arXiv, 2021.
- [Off-Policy Evaluation in Partially Observed Markov Decision Processes](https://arxiv.org/abs/2110.12343)
  - Yuchen Hu and Stefan Wager. arXiv, 2021.
- [A Spectral Approach to Off-Policy Evaluation for POMDPs](https://arxiv.org/abs/2109.10502)
  - Yash Nair and Nan Jiang. arXiv, 2021.
- [Projected State-action Balancing Weights for Offline Reinforcement Learning](https://arxiv.org/abs/2109.04640)
  - Jiayi Wang, Zhengling Qi, and Raymond K.W. Wong.
arXiv, 2021.
- [Active Offline Policy Selection](https://arxiv.org/abs/2106.10251)
  - Ksenia Konyushkova, Yutian Chen, Thomas Paine, Caglar Gulcehre, Cosmin Paduraru, Daniel J Mankowitz, Misha Denil, and Nando de Freitas. arXiv, 2021.
- [On Instrumental Variable Regression for Deep Offline Policy Evaluation](https://arxiv.org/abs/2105.10148)
  - Yutian Chen, Liyuan Xu, Caglar Gulcehre, Tom Le Paine, Arthur Gretton, Nando de Freitas, and Arnaud Doucet. arXiv, 2021.
- [Average-Reward Off-Policy Policy Evaluation with Function Approximation](https://arxiv.org/abs/2101.02808)
  - Shangtong Zhang, Yi Wan, Richard S. Sutton, and Shimon Whiteson. arXiv, 2021.
- [Sequential causal inference in a single world of connected units](https://arxiv.org/abs/2101.07380)
  - Aurelien Bibaut, Maya Petersen, Nikos Vlassis, Maria Dimakopoulou, and Mark van der Laan. arXiv, 2021.
- [Off-policy Policy Evaluation For Sequential Decisions Under Unobserved Confounding](https://papers.nips.cc/paper/2020/hash/da21bae82c02d1e2b8168d57cd3fbab7-Abstract.html)
  - Hongseok Namkoong, Ramtin Keramati, Steve Yadlowsky, and Emma Brunskill. NeurIPS, 2020.
- [CoinDICE: Off-Policy Confidence Interval Estimation](https://papers.nips.cc/paper/2020/hash/6aaba9a124857622930ca4e50f5afed2-Abstract.html)
  - Bo Dai, Ofir Nachum, Yinlam Chow, Lihong Li, Csaba Szepesvari, and Dale Schuurmans. NeurIPS, 2020.
- [Off-Policy Interval Estimation with Lipschitz Value Iteration](https://papers.nips.cc/paper/2020/hash/59accb9fe696ce55e28b7d23a009e2d1-Abstract.html)
  - Ziyang Tang, Yihao Feng, Na Zhang, Jian Peng, and Qiang Liu.
NeurIPS, 2020.
- [Off-Policy Evaluation via the Regularized Lagrangian](https://papers.nips.cc/paper/2020/hash/488e4104520c6aab692863cc1dba45af-Abstract.html)
  - Mengjiao Yang, Ofir Nachum, Bo Dai, Lihong Li, and Dale Schuurmans. NeurIPS, 2020.
- [Minimax Value Interval for Off-Policy Evaluation and Policy Optimization](https://papers.nips.cc/paper/2020/hash/1cd138d0499a68f4bb72bee04bbec2d7-Abstract.html)
  - Nan Jiang and Jiawei Huang. NeurIPS, 2020.
- [GenDICE: Generalized Offline Estimation of Stationary Values](https://openreview.net/forum?id=HkxlcnVFwB)
  - Ruiyi Zhang, Bo Dai, Lihong Li, and Dale Schuurmans. ICLR, 2020.
- [Infinite-horizon Off-Policy Policy Evaluation with Multiple Behavior Policies](https://iclr.cc/virtual_2020/poster_rkgU1gHtvr.html)
  - Xinyun Chen, Lu Wang, Yizhe Hang, Heng Ge, and Hongyuan Zha. ICLR, 2020.
- [Doubly Robust Bias Reduction in Infinite Horizon Off-Policy Estimation](https://iclr.cc/virtual_2020/poster_S1glGANtDr.html)
  - Ziyang Tang, Yihao Feng, Lihong Li, Dengyong Zhou, and Qiang Liu. ICLR, 2020.
- [Black-box Off-policy Estimation for Infinite-Horizon Reinforcement Learning](https://iclr.cc/virtual_2020/poster_S1ltg1rFDS.html)
  - Ali Mousavi, Lihong Li, Qiang Liu, and Denny Zhou. ICLR, 2020.
- [GradientDICE: Rethinking Generalized Offline Estimation of Stationary Values](http://proceedings.mlr.press/v119/zhang20r.html)
  - Shangtong Zhang, Bo Liu, and Shimon Whiteson. ICML, 2020.
- [Minimax-Optimal Off-Policy Evaluation with Linear Function Approximation](http://proceedings.mlr.press/v119/duan20b.html)
  - Yaqi Duan, Zeyu Jia, and Mengdi Wang.
ICML, 2020.
- [Interpretable Off-Policy Evaluation in Reinforcement Learning by Highlighting Influential Transitions](http://proceedings.mlr.press/v119/gottesman20a.html)
  - Omer Gottesman, Joseph Futoma, Yao Liu, Sonali Parbhoo, Leo Celi, Emma Brunskill, and Finale Doshi-Velez. ICML, 2020.
- [Double Reinforcement Learning for Efficient and Robust Off-Policy Evaluation](http://proceedings.mlr.press/v119/kallus20b.html)
  - Nathan Kallus and Masatoshi Uehara. ICML, 2020.
- [Understanding the Curse of Horizon in Off-Policy Evaluation via Conditional Importance Sampling](http://proceedings.mlr.press/v119/liu20a.html)
  - Yao Liu, Pierre-Luc Bacon, and Emma Brunskill. ICML, 2020.
- [Minimax Weight and Q-Function Learning for Off-Policy Evaluation](http://proceedings.mlr.press/v119/uehara20a.html)
  - Masatoshi Uehara, Jiawei Huang, and Nan Jiang. ICML, 2020.
- [Accountable Off-Policy Evaluation With Kernel Bellman Statistics](http://proceedings.mlr.press/v119/feng20d.html)
  - Yihao Feng, Tongzheng Ren, Ziyang Tang, and Qiang Liu. ICML, 2020.
- [Asymptotically Efficient Off-Policy Evaluation for Tabular Reinforcement Learning](http://proceedings.mlr.press/v108/yin20b.html)
  - Ming Yin and Yu-Xiang Wang. AISTATS, 2020.
- [Batch Stationary Distribution Estimation](http://proceedings.mlr.press/v119/wen20a.html)
  - Junfeng Wen, Bo Dai, Lihong Li, and Dale Schuurmans.
ICML, 2020.
- [Towards Off-policy Evaluation as a Prerequisite for Real-world Reinforcement Learning in Building Control](https://dl.acm.org/doi/10.1145/3427773.3427871) [[video](https://www.youtube.com/watch?v=zlk_TDNC4qk)]
  - Bingqing Chen, Ming Jin, Zhe Wang, Tianzhen Hong, and Mario Bergés. RLEM, 2020.
- [Defining Admissible Rewards for High Confidence Policy Evaluation in Batch Reinforcement Learning](https://dl.acm.org/doi/abs/10.1145/3368555.3384450)
  - Niranjani Prasad, Barbara E Engelhardt, and Finale Doshi-Velez. CHIL, 2020.
- [Offline Policy Selection under Uncertainty](https://arxiv.org/abs/2012.06919)
  - Mengjiao Yang, Bo Dai, Ofir Nachum, George Tucker, and Dale Schuurmans. arXiv, 2020.
- [Near-Optimal Provable Uniform Convergence in Offline Policy Evaluation for Reinforcement Learning](https://arxiv.org/abs/2007.03760)
  - Ming Yin, Yu Bai, and Yu-Xiang Wang. arXiv, 2020.
- [Optimal Mixture Weights for Off-Policy Evaluation with Multiple Behavior Policies](https://arxiv.org/abs/2011.14359)
  - Jinlin Lai, Lixin Zou, and Jiaxing Song. arXiv, 2020.
- [Kernel Methods for Policy Evaluation: Treatment Effects, Mediation Analysis, and Off-Policy Planning](https://arxiv.org/abs/2010.04855)
  - Rahul Singh, Liyuan Xu, and Arthur Gretton. arXiv, 2020.
- [Statistical Bootstrapping for Uncertainty Estimation in Off-Policy Evaluation](https://arxiv.org/abs/2007.13609)
  - Ilya Kostrikov and Ofir Nachum. arXiv, 2020.
- [Efficiently Breaking the Curse of Horizon in Off-Policy Evaluation with Double Reinforcement Learning](https://arxiv.org/abs/1909.05850)
  - Nathan Kallus and Masatoshi Uehara.
arXiv, 2019.
- [Off-Policy Evaluation in Partially Observable Environments](https://ojs.aaai.org/index.php/AAAI/article/view/6590)
  - Guy Tennenholtz, Uri Shalit, and Shie Mannor. AAAI, 2020.
- [Intrinsically Efficient, Stable, and Bounded Off-Policy Evaluation for Reinforcement Learning](https://arxiv.org/abs/1906.03735)
  - Nathan Kallus and Masatoshi Uehara. NeurIPS, 2019.
- [Towards Optimal Off-Policy Evaluation for Reinforcement Learning with Marginalized Importance Sampling](https://papers.nips.cc/paper/2019/hash/4ffb0d2ba92f664c2281970110a2e071-Abstract.html)
  - Tengyang Xie, Yifei Ma, and Yu-Xiang Wang. NeurIPS, 2019.
- [Off-Policy Evaluation via Off-Policy Classification](https://papers.nips.cc/paper/2019/hash/b5b03f06271f8917685d14cea7c6c50a-Abstract.html)
  - Alexander Irpan, Kanishka Rao, Konstantinos Bousmalis, Chris Harris, Julian Ibarz, and Sergey Levine. NeurIPS, 2019.
- [DualDICE: Behavior-Agnostic Estimation of Discounted Stationary Distribution Corrections](https://arxiv.org/abs/1906.04733) [[software](https://github.com/google-research/dice_rl)]
  - Ofir Nachum, Yinlam Chow, Bo Dai, and Lihong Li. NeurIPS, 2019.
- [Off-Policy Evaluation and Learning from Logged Bandit Feedback: Error Reduction via Surrogate Policy](https://openreview.net/forum?id=HklKui0ct7)
  - Yuan Xie, Boyi Liu, Qiang Liu, Zhaoran Wang, Yuan Zhou, and Jian Peng. ICLR, 2019.
- [Batch Policy Learning under Constraints](https://arxiv.org/abs/1903.08738) [[code](https://github.com/clvoloshin/constrained_batch_policy_learning)] [[website](https://sites.google.com/view/constrained-batch-policy-learn/)]
  - Hoang M. Le, Cameron Voloshin, and Yisong Yue.
ICML, 2019.
- [More Efficient Off-Policy Evaluation through Regularized Targeted Learning](http://proceedings.mlr.press/v97/bibaut19a.html)
  - Aurelien Bibaut, Ivana Malenica, Nikos Vlassis, and Mark van der Laan. ICML, 2019.
- [Combining parametric and nonparametric models for off-policy evaluation](http://proceedings.mlr.press/v97/gottesman19a.html)
  - Omer Gottesman, Yao Liu, Scott Sussex, Emma Brunskill, and Finale Doshi-Velez. ICML, 2019.
- [Counterfactual Off-Policy Evaluation with Gumbel-Max Structural Causal Models](http://proceedings.mlr.press/v97/oberst19a.html)
  - Michael Oberst and David Sontag. ICML, 2019.
- [Importance Sampling Policy Evaluation with an Estimated Behavior Policy](http://proceedings.mlr.press/v97/hanna19a.html)
  - Josiah Hanna, Scott Niekum, and Peter Stone. ICML, 2019.
- [Representation Balancing MDPs for Off-policy Policy Evaluation](https://papers.nips.cc/paper/2018/hash/980ecd059122ce2e50136bda65c25e07-Abstract.html)
  - Yao Liu, Omer Gottesman, Aniruddh Raghu, Matthieu Komorowski, Aldo A. Faisal, Finale Doshi-Velez, and Emma Brunskill. NeurIPS, 2018.
- [Breaking the Curse of Horizon: Infinite-Horizon Off-Policy Estimation](https://papers.nips.cc/paper/2018/hash/dda04f9d634145a9c68d5dfe53b21272-Abstract.html)
  - Qiang Liu, Lihong Li, Ziyang Tang, and Dengyong Zhou. NeurIPS, 2018.
- [More Robust Doubly Robust Off-policy Evaluation](https://arxiv.org/abs/1802.03493)
  - Mehrdad Farajtabar, Yinlam Chow, and Mohammad Ghavamzadeh. ICML, 2018.
- [Importance Sampling for Fair Policy Selection](https://people.cs.umass.edu/~pthomas/papers/Doroudi2017.pdf)
  - Shayan Doroudi, Philip Thomas, and Emma Brunskill.
UAI, 2017.
- [Predictive Off-Policy Policy Evaluation for Nonstationary Decision Problems, with Applications to Digital Marketing](https://people.cs.umass.edu/~pthomas/papers/Thomas2017.pdf)
  - Philip S. Thomas, Georgios Theocharous, Mohammad Ghavamzadeh, Ishan Durugkar, and Emma Brunskill. AAAI, 2017.
- [Consistent On-Line Off-Policy Evaluation](http://proceedings.mlr.press/v70/hallak17a.html)
  - Assaf Hallak and Shie Mannor. ICML, 2017.
- [Bootstrapping with Models: Confidence Intervals for Off-Policy Evaluation](https://arxiv.org/abs/1606.06126)
  - Josiah P. Hanna, Peter Stone, and Scott Niekum. AAMAS, 2016.
- [Doubly Robust Off-policy Value Evaluation for Reinforcement Learning](http://proceedings.mlr.press/v48/jiang16.html)
  - Nan Jiang and Lihong Li. ICML, 2016.
- [Data-Efficient Off-Policy Policy Evaluation for Reinforcement Learning](http://proceedings.mlr.press/v48/thomasa16.html)
  - Philip Thomas and Emma Brunskill. ICML, 2016.
- [High Confidence Policy Improvement](http://proceedings.mlr.press/v37/thomas15.html)
  - Philip Thomas, Georgios Theocharous, and Mohammad Ghavamzadeh. ICML, 2015.
- [High Confidence Off-Policy Evaluation](https://people.cs.umass.edu/~pthomas/papers/Thomas2015.pdf)
  - Philip S. Thomas, Georgios Theocharous, and Mohammad Ghavamzadeh. AAAI, 2015.
- [Eligibility Traces for Off-Policy Policy Evaluation](https://dl.acm.org/doi/10.5555/645529.658134)
  - Doina Precup, Richard S. Sutton, and Satinder P. Singh. ICML, 2000.

#### Off-Policy Learning
- [Sequential Counterfactual Risk Minimization](https://arxiv.org/abs/2302.12120)
  - Houssam Zenati, Eustache Diemert, Matthieu Martin, Julien Mairal, and Pierre Gaillard.
ICML, 2023.
- [Trajectory-Aware Eligibility Traces for Off-Policy Reinforcement Learning](https://openreview.net/forum?id=8Lww9LXokZ)
  - Brett Daley, Martha White, Christopher Amato, and Marlos C. Machado. ICML, 2023.
- [Multi-Task Off-Policy Learning from Bandit Feedback](https://arxiv.org/abs/2212.04720)
  - Joey Hong, Branislav Kveton, Sumeet Katariya, Manzil Zaheer, and Mohammad Ghavamzadeh. ICML, 2023.
- [Exponential Smoothing for Off-Policy Learning](https://arxiv.org/abs/2305.15877)
  - Imad Aouali, Victor-Emmanuel Brunel, David Rohde, and Anna Korba. ICML, 2023.
- [Counterfactual Learning with General Data-generating Policies](https://arxiv.org/abs/2212.01925)
  - Yusuke Narita, Kyohei Okumura, Akihiro Shimizu, and Kohei Yata. AAAI, 2023.
- [Distributionally Robust Policy Gradient for Offline Contextual Bandits](https://proceedings.mlr.press/v206/yang23f.html)
  - Zhouhao Yang, Yihong Guo, Pan Xu, Anqi Liu, and Animashree Anandkumar. AISTATS, 2023.
- [Oracle-Efficient Pessimism: Offline Policy Optimization in Contextual Bandits](https://arxiv.org/abs/2306.07923)
  - Lequn Wang, Akshay Krishnamurthy, and Aleksandrs Slivkins. arXiv, 2023.
- [Pessimistic Off-Policy Multi-Objective Optimization](https://arxiv.org/abs/2310.18617)
  - Shima Alizadeh, Aniruddha Bhargava, Karthick Gopalswamy, Lalit Jain, Branislav Kveton, and Ge Liu. arXiv, 2023.
- [Unified Off-Policy Learning to Rank: a Reinforcement Learning Perspective](https://arxiv.org/abs/2306.07528)
  - Zeyu Zhang, Yi Su, Hui Yuan, Yiran Wu, Rishab Balasubramanian, Qingyun Wu, Huazheng Wang, and Mengdi Wang. arXiv, 2023.
- [Uncertainty-Aware Off-Policy Learning](https://arxiv.org/abs/2303.06389)
  - Xiaoying Zhang, Junpu Chen, Hongning Wang, Hong Xie, and Hang Li.
arXiv, 2023.
- [Fair Off-Policy Learning from Observational Data](https://arxiv.org/abs/2303.08516)
  - Dennis Frauen, Valentyn Melnychuk, and Stefan Feuerriegel. arXiv, 2023.
- [Interpretable Off-Policy Learning via Hyperbox Search](https://arxiv.org/abs/2203.02473)
  - Daniel Tschernutter, Tobias Hatt, and Stefan Feuerriegel. ICML, 2022.
- [Offline Policy Optimization with Eligible Actions](https://arxiv.org/abs/2207.00632)
  - Yao Liu, Yannis Flet-Berliac, and Emma Brunskill. UAI, 2022.
- [Towards Robust Off-policy Learning for Runtime Uncertainty](https://arxiv.org/abs/2202.13337)
  - Da Xu, Yuting Ye, Chuanwei Ruan, and Bo Yang. AAAI, 2022.
- [Safe Optimal Design with Applications in Off-Policy Learning](https://arxiv.org/abs/2111.04835)
  - Ruihao Zhu and Branislav Kveton. AISTATS, 2022.
- [Off-Policy Actor-critic for Recommender Systems](https://dl.acm.org/doi/10.1145/3523227.3546758)
  - Minmin Chen, Can Xu, Vince Gatto, Devanshu Jain, Aviral Kumar, and Ed Chi. RecSys, 2022.
- [MGPolicy: Meta Graph Enhanced Off-policy Learning for Recommendations](https://dl.acm.org/doi/abs/10.1145/3477495.3532021)
  - Xiangmeng Wang, Qian Li, Dianer Yu, Zhichao Wang, Hongxu Chen, and Guandong Xu. SIGIR, 2022.
- [Distributionally Robust Policy Learning with Wasserstein Distance](https://arxiv.org/abs/2205.04637)
  - Daido Kido. arXiv, 2022.
- [Local Policy Improvement for Recommender Systems](https://arxiv.org/abs/2212.11431)
  - Dawen Liang and Nikos Vlassis. arXiv, 2022.
- [Policy learning "without" overlap: Pessimism and generalized empirical Bernstein's inequality](https://arxiv.org/abs/2212.09900)
  - Ying Jin, Zhimei Ren, Zhuoran Yang, and Zhaoran Wang.
arXiv, 2022.
- [Fast Offline Policy Optimization for Large Scale Recommendation](https://arxiv.org/abs/2208.05327)
  - Otmane Sakhi, David Rohde, and Alexandre Gilotte. arXiv, 2022.
- [Practical Counterfactual Policy Learning for Top-K Recommendations](https://dl.acm.org/doi/abs/10.1145/3534678.3539295)
  - Yaxu Liu, Jui-Nan Yen, Bowen Yuan, Rundong Shi, Peng Yan, and Chih-Jen Lin. KDD, 2022.
- [Boosted Off-Policy Learning](https://arxiv.org/abs/2208.01148)
  - Ben London, Levi Lu, Ted Sandler, and Thorsten Joachims. arXiv, 2022.
- [Semi-Counterfactual Risk Minimization Via Neural Networks](https://arxiv.org/abs/2209.07148)
  - Gholamali Aminian, Roberto Vega, Omar Rivasplata, Laura Toni, and Miguel Rodrigues. arXiv, 2022.
- [IMO^3: Interactive Multi-Objective Off-Policy Optimization](https://arxiv.org/abs/2201.09798)
  - Nan Wang, Hongning Wang, Maryam Karimzadehgan, Branislav Kveton, and Craig Boutilier. arXiv, 2022.
- [Pessimistic Off-Policy Optimization for Learning to Rank](https://arxiv.org/abs/2206.02593)
  - Matej Cief, Branislav Kveton, and Michal Kompan. arXiv, 2022.
- [Non-Stationary Off-Policy Optimization](https://arxiv.org/abs/2006.08236)
  - Joey Hong, Branislav Kveton, Manzil Zaheer, Yinlam Chow, and Amr Ahmed. AISTATS, 2021.
- [Learning from eXtreme Bandit Feedback](https://arxiv.org/abs/2009.12947)
  - Romain Lopez, Inderjit Dhillon, and Michael I. Jordan. AAAI, 2021.
- [Generalizing Off-Policy Learning under Sample Selection Bias](https://arxiv.org/abs/2112.01387)
  - Tobias Hatt, Daniel Tschernutter, and Stefan Feuerriegel.
arXiv, 2021.
- [Conservative Policy Construction Using Variational Autoencoders for Logged Data with Missing Values](https://arxiv.org/abs/2109.03747)
  - Mahed Abroshan, Kai Hou Yip, Cem Tekin, and Mihaela van der Schaar. arXiv, 2021.
- [Doubly Robust Off-Policy Value and Gradient Estimation for Deterministic Policies](https://papers.nips.cc/paper/2020/hash/75df63609809c7a2052fdffe5c00a84e-Abstract.html)
  - Nathan Kallus and Masatoshi Uehara. NeurIPS, 2020.
- [From Importance Sampling to Doubly Robust Policy Gradient](http://proceedings.mlr.press/v119/huang20b.html)
  - Jiawei Huang and Nan Jiang. ICML, 2020.
- [Efficient Policy Learning from Surrogate-Loss Classification Reductions](http://proceedings.mlr.press/v119/bennett20a.html) [[code](https://github.com/CausalML/ESPRM)]
  - Andrew Bennett and Nathan Kallus. ICML, 2020.
- [Off-policy Bandits with Deficient Support](https://dl.acm.org/doi/abs/10.1145/3394486.3403139)
  - Noveen Sachdeva, Yi Su, and Thorsten Joachims. KDD, 2020.
- [Off-policy Learning in Two-stage Recommender Systems](https://dl.acm.org/doi/abs/10.1145/3366423.3380130)
  - Jiaqi Ma, Zhe Zhao, Xinyang Yi, Ji Yang, Minmin Chen, Jiaxi Tang, Lichan Hong, and Ed H Chi. WWW, 2020.
- [More Efficient Policy Learning via Optimal Retargeting](https://www.tandfonline.com/doi/abs/10.1080/01621459.2020.1788948?journalCode=uasa20)
  - Nathan Kallus. JASA, 2020.
- [Learning When-to-Treat Policies](https://arxiv.org/abs/1905.09751)
  - Xinkun Nie, Emma Brunskill, and Stefan Wager. JASA, 2020.
- [Doubly Robust Off-Policy Learning on Low-Dimensional Manifolds by Deep Neural Networks](https://arxiv.org/abs/2011.01797)
  - Minshuo Chen, Hao Liu, Wenjing Liao, and Tuo Zhao.
arXiv, 2020.
- [Bandit Overfitting in Offline Policy Learning](https://arxiv.org/abs/2006.15368)
  - David Brandfonbrener, William F. Whitney, Rajesh Ranganath, and Joan Bruna. arXiv, 2020.
- [Counterfactual Learning of Continuous Stochastic Policies](https://arxiv.org/abs/2004.11722)
  - Houssam Zenati, Alberto Bietti, Matthieu Martin, Eustache Diemert, and Julien Mairal. arXiv, 2020.
- [Top-K Off-Policy Correction for a REINFORCE Recommender System](https://arxiv.org/abs/1812.02353)
  - Minmin Chen, Alex Beutel, Paul Covington, Sagar Jain, Francois Belletti, and Ed Chi. WSDM, 2019.
- [Semi-Parametric Efficient Policy Learning with Continuous Actions](https://papers.nips.cc/paper/2019/hash/08b7dc6e8b36bcaac15847827b7951a9-Abstract.html)
  - Victor Chernozhukov, Mert Demirer, Greg Lewis, and Vasilis Syrgkanis. NeurIPS, 2019.
- [Efficient Counterfactual Learning from Bandit Feedback](https://arxiv.org/abs/1809.03084)
  - Yusuke Narita, Shota Yasui, and Kohei Yata. AAAI, 2019.
- [Deep Learning with Logged Bandit Feedback](https://openreview.net/forum?id=SJaP_-xAb)
  - Thorsten Joachims, Adith Swaminathan, and Maarten de Rijke. ICLR, 2018.
- [The Self-Normalized Estimator for Counterfactual Learning](https://papers.nips.cc/paper/2015/hash/39027dfad5138c9ca0c474d71db915c3-Abstract.html)
  - Adith Swaminathan and Thorsten Joachims. NeurIPS, 2015.
- [Counterfactual Risk Minimization: Learning from Logged Bandit Feedback](https://arxiv.org/abs/1502.02362)
  - Adith Swaminathan and Thorsten Joachims.
ICML, 2015.

### Off-Policy Evaluation and Learning: Benchmarks/Experiments
- [Towards Assessing and Benchmarking Risk-Return Tradeoff of Off-Policy Evaluation](https://arxiv.org/abs/2311.18207)
  - Haruka Kiyohara, Ren Kishimoto, Kosuke Kawakami, Ken Kobayashi, Kazuhide Nakata, and Yuta Saito. ICLR, 2024.
- [SCOPE-RL: A Python Library for Offline Reinforcement Learning and Off-Policy Evaluation](https://arxiv.org/abs/2311.18206)
  - Haruka Kiyohara, Ren Kishimoto, Kosuke Kawakami, Ken Kobayashi, Kazuhide Nakata, and Yuta Saito. arXiv, 2023.
- [Offline Policy Comparison with Confidence: Benchmarks and Baselines](https://arxiv.org/abs/2205.10739)
  - Anurag Koul, Mariano Phielipp, and Alan Fern. arXiv, 2022.
- [Extending Open Bandit Pipeline to Simulate Industry Challenges](https://arxiv.org/abs/2209.04147)
  - Bram van den Akker, Niklas Weber, Felipe Moraes, and Dmitri Goldenberg. arXiv, 2022.
- [Open Bandit Dataset and Pipeline: Towards Realistic and Reproducible Off-Policy Evaluation](https://arxiv.org/abs/2008.07146) [[software](https://github.com/st-tech/zr-obp)] [[public dataset](https://research.zozo.com/data.html)]
  - Yuta Saito, Shunsuke Aihara, Megumi Matsutani, and Yusuke Narita. NeurIPS, 2021.
- [Evaluating the Robustness of Off-Policy Evaluation](https://arxiv.org/abs/2108.13703) [[software](https://github.com/sony/pyIEOE)]
  - Yuta Saito, Takuma Udagawa, Haruka Kiyohara, Kazuki Mogi, Yusuke Narita, and Kei Tateno.
RecSys, 2021.
- [Benchmarks for Deep Off-Policy Evaluation](https://openreview.net/forum?id=kWSeGEeHvF8) [[code](https://github.com/google-research/deep_ope)]
  - Justin Fu, Mohammad Norouzi, Ofir Nachum, George Tucker, Ziyu Wang, Alexander Novikov, Mengjiao Yang, Michael R Zhang, Yutian Chen, Aviral Kumar, Cosmin Paduraru, Sergey Levine, and Thomas Paine. ICLR, 2021.
- [Empirical Study of Off-Policy Policy Evaluation for Reinforcement Learning](https://arxiv.org/abs/1911.06854) [[code](https://github.com/clvoloshin/OPE-tools)]
  - Cameron Voloshin, Hoang M. Le, Nan Jiang, and Yisong Yue. arXiv, 2019.

### Off-Policy Evaluation and Learning: Applications
- [HOPE: Human-Centric Off-Policy Evaluation for E-Learning and Healthcare](https://arxiv.org/abs/2302.09212)
  - Ge Gao, Song Ju, Markel Sanz Ausin, and Min Chi. AAMAS, 2023.
- [When is Off-Policy Evaluation Useful? A Data-Centric Perspective](https://arxiv.org/abs/2311.14110)
  - Hao Sun, Alex J. Chan, Nabeel Seedat, Alihan Hüyük, and Mihaela van der Schaar. arXiv, 2023.
- [Counterfactual Evaluation of Peer-Review Assignment Policies](https://arxiv.org/abs/2305.17339)
  - Martin Saveski, Steven Jecmen, Nihar B. Shah, and Johan Ugander. arXiv, 2023.
- [Balanced Off-Policy Evaluation for Personalized Pricing](https://arxiv.org/abs/2302.12736)
  - Adam N. Elmachtoub, Vishal Gupta, and Yunfan Zhao. arXiv, 2023.
- [Multi-Action Dialog Policy Learning from Logged User Feedback](https://arxiv.org/abs/2302.13505)
  - Shuo Zhang, Junzhou Zhao, Pinghui Wang, Tianxiang Wang, Zi Liang, Jing Tao, Yi Huang, and Junlan Feng. arXiv, 2023.
- [CFR-p: Counterfactual Regret Minimization with Hierarchical Policy Abstraction, and its Application to Two-player Mahjong](https://arxiv.org/abs/2307.12087)
  - Shiheng Wang.
arXiv, 2023.
- [Reward Shaping for User Satisfaction in a REINFORCE Recommender](https://arxiv.org/abs/2209.15166)
  - Konstantina Christakopoulou, Can Xu, Sai Zhang, Sriraj Badam, Trevor Potter, Daniel Li, Hao Wan, Xinyang Yi, Ya Le, Chris Berg, Eric Bencomo Dixon, Ed H. Chi, and Minmin Chen. arXiv, 2022.
- [Data-Driven Off-Policy Estimator Selection: An Application in User Marketing on An Online Content Delivery Service](https://arxiv.org/abs/2109.08621)
  - Yuta Saito, Takuma Udagawa, and Kei Tateno. arXiv, 2021.
- [Towards Automatic Evaluation of Dialog Systems: A Model-Free Off-Policy Evaluation Approach](https://arxiv.org/abs/2102.10242)
  - Haoming Jiang, Bo Dai, Mengjiao Yang, Wei Wei, and Tuo Zhao. arXiv, 2021.
- [Model Selection for Offline Reinforcement Learning: Practical Considerations for Healthcare Settings](https://arxiv.org/abs/2107.11003)
  - Shengpu Tang and Jenna Wiens. MLHC, 2021.
- [Off-Policy Evaluation of Probabilistic Identity Data in Lookalike Modeling](https://dl.acm.org/doi/10.1145/3289600.3291033)
  - Randell Cotta, Dan Jiang, Mingyang Hu, and Peizhou Liao. WSDM, 2019.
- [Offline Evaluation to Make Decisions About Playlist Recommendation](https://dl.acm.org/doi/10.1145/3289600.3291027)
  - Alois Gruson, Praveen Chandar, Christophe Charbuillet, James McInerney, Samantha Hansen, Damien Tardieu, and Ben Carterette. WSDM, 2019.
- [Behaviour Policy Estimation in Off-Policy Policy Evaluation: Calibration Matters](https://arxiv.org/abs/1807.01066)
  - Aniruddh Raghu, Omer Gottesman, Yao Liu, Matthieu Komorowski, Aldo Faisal, Finale Doshi-Velez, and Emma Brunskill.
arXiv, 2018.
- [Evaluating Reinforcement Learning Algorithms in Observational Health Settings](https://arxiv.org/abs/1805.12298)
  - Omer Gottesman, Fredrik Johansson, Joshua Meier, Jack Dent, Donghun Lee, Srivatsan Srinivasan, Linying Zhang, Yi Ding, David Wihl, Xuefeng Peng, Jiayu Yao, Isaac Lage, Christopher Mosch, Li-wei H. Lehman, Matthieu Komorowski, Aldo Faisal, Leo Anthony Celi, David Sontag, and Finale Doshi-Velez. arXiv, 2018.
- [Towards a Fair Marketplace: Counterfactual Evaluation of the trade-off between Relevance, Fairness & Satisfaction in Recommendation Systems](https://dl.acm.org/doi/10.1145/3269206.3272027)
  - Rishabh Mehrotra, James McInerney, Hugues Bouchard, Mounia Lalmas, and Fernando Diaz. CIKM, 2018.
- [Offline A/B testing for Recommender Systems](https://dl.acm.org/doi/10.1145/3159652.3159687)
  - Alexandre Gilotte, Clément Calauzènes, Thomas Nedelec, Alexandre Abraham, and Simon Dollé. WSDM, 2018.
- [Offline Comparative Evaluation with Incremental, Minimally-Invasive Online Feedback](https://dl.acm.org/doi/10.1145/3209978.3210050)
  - Ben Carterette and Praveen Chandar. SIGIR, 2018.
- [Handling Confounding for Realistic Off-Policy Evaluation](https://dl.acm.org/doi/abs/10.1145/3184558.3186915)
  - Saurabh Sohoney, Nikita Prabhu, and Vineet Chaoji. WWW, 2018.
- [Counterfactual Reasoning and Learning Systems: The Example of Computational Advertising](https://jmlr.org/papers/v14/bottou13a.html)
  - Léon Bottou, Jonas Peters, Joaquin Quiñonero-Candela, Denis X. Charles, D. Max Chickering, Elon Portugaly, Dipankar Ray, Patrice Simard, and Ed Snelson.
JMLR, 2013.

## Open Source Software/Implementations
- [SCOPE-RL: A Python library for offline reinforcement learning, off-policy evaluation, and selection](https://github.com/hakuhodo-technologies/scope-rl) [[paper1](https://arxiv.org/abs/2311.18206)] [[paper2](https://arxiv.org/abs/2311.18207)] [[documentation](https://scope-rl.readthedocs.io/en/latest/)]
  - Haruka Kiyohara, Ren Kishimoto, Kosuke Kawakami, Ken Kobayashi, Kazuhide Nakata, and Yuta Saito.
- [Open Bandit Pipeline: a research framework for bandit algorithms and off-policy evaluation](https://github.com/st-tech/zr-obp) [[paper](https://arxiv.org/abs/2008.07146)] [[documentation](https://zr-obp.readthedocs.io/en/latest/index.html)] [[dataset](https://research.zozo.com/data.html)]
  - Yuta Saito, Shunsuke Aihara, Megumi Matsutani, and Yusuke Narita.
- [pyIEOE: Towards An Interpretable Evaluation for Offline Evaluation](https://github.com/sony/pyIEOE) [[paper](https://arxiv.org/abs/2108.13703)]
  - Yuta Saito, Takuma Udagawa, Haruka Kiyohara, Kazuki Mogi, Yusuke Narita, and Kei Tateno.
- [d3rlpy: An Offline Deep Reinforcement Learning Library](https://github.com/takuseno/d3rlpy) [[paper](https://arxiv.org/abs/2111.03788)] [[website](https://takuseno.github.io/d3rlpy/)] [[documentation](https://d3rlpy.readthedocs.io/)]
  - Takuma Seno and Michita Imai.
- [MINERVA: An out-of-the-box GUI tool for data-driven deep reinforcement learning](https://github.com/takuseno/minerva) [[website](https://takuseno.github.io/minerva/)] [[documentation](https://minerva-ui.readthedocs.io/en/v0.20/)]
  - Takuma Seno and Michita Imai.
[Minari](https:\u002F\u002Fgithub.com\u002FFarama-Foundation\u002FMinari)\n  - Farama Foundation.\n- [CORL: Clean Offline Reinforcement Learning](https:\u002F\u002Fgithub.com\u002Fcorl-team\u002FCORL) [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.07105)]\n  - Denis Tarasov, Alexander Nikulin, Dmitry Akimov, Vladislav Kurenkov, and Sergey Kolesnikov.\n- [COBS: Caltech OPE Benchmarking Suite](https:\u002F\u002Fgithub.com\u002Fclvoloshin\u002FCOBS) [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.06854)]\n  - Cameron Voloshin, Hoang M. Le, Nan Jiang, and Yisong Yue.\n- [Benchmarks for Deep Off-Policy Evaluation](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fdeep_ope) [[paper](https:\u002F\u002Fopenreview.net\u002Fforum?id=kWSeGEeHvF8)]\n  - Justin Fu, Mohammad Norouzi, Ofir Nachum, George Tucker, Ziyu Wang, Alexander Novikov, Mengjiao Yang, Michael R Zhang, Yutian Chen, Aviral Kumar, Cosmin Paduraru, Sergey Levine, and Thomas Paine.\n- [DICE: The DIstribution Correction Estimation Library](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fdice_rl) [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.03438)]\n  - Ofir Nachum, Yinlam Chow, Bo Dai, Lihong Li, Ruiyi Zhang, and Dale Schuurmans.\n- [RL Unplugged: Benchmarks for Offline Reinforcement Learning](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Fdeepmind-research\u002Ftree\u002Fmaster\u002Frl_unplugged) [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.13888)] [[dataset](https:\u002F\u002Fconsole.cloud.google.com\u002Fstorage\u002Fbrowser\u002Frl_unplugged?pli=1)]\n  - Caglar Gulcehre, Ziyu Wang, Alexander Novikov, Tom Le Paine, Sergio Gomez Colmenarejo, Konrad Zolna, Rishabh Agarwal, Josh Merel, Daniel Mankowitz, Cosmin Paduraru, Gabriel Dulac-Arnold, Jerry Li, Mohammad Norouzi, Matt Hoffman, Ofir Nachum, George Tucker, Nicolas Heess, and Nando de Freitas.\n- [D4RL: Datasets for Deep Data-Driven Reinforcement Learning](https:\u002F\u002Fgithub.com\u002Frail-berkeley\u002Fd4rl) 
[[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2004.07219)] [[website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fd4rl\u002Fhome)]\n  - Justin Fu, Aviral Kumar, Ofir Nachum, George Tucker, and Sergey Levine.\n- [V-D4RL: Challenges and Opportunities in Offline Reinforcement Learning from Visual Observations](https:\u002F\u002Fgithub.com\u002Fconglu1997\u002Fv-d4rl) [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.04779)]\n  - Cong Lu, Philip J. Ball, Tim G. J. Rudner, Jack Parker-Holder, Michael A. Osborne, and Yee Whye Teh.\n- [Benchmarking Offline Reinforcement Learning on Real-Robot Hardware](https:\u002F\u002Fgithub.com\u002Frr-learning\u002Ftrifinger_rl_datasets) [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.15690)]\n  - Nico Gürtler, Sebastian Blaes, Pavel Kolev, Felix Widmaier, Manuel Wuthrich, Stefan Bauer, Bernhard Schölkopf, and Georg Martius. ICLR, 2023.\n- [RLDS: Reinforcement Learning Datasets](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Frlds) [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.02767)]\n  - Sabela Ramos, Sertan Girgin, Léonard Hussenot, Damien Vincent, Hanna Yakubovich, Daniel Toyama, Anita Gergely, Piotr Stanczyk, Raphael Marinier, Jeremiah Harmsen, Olivier Pietquin, and Nikola Momchev.\n- [OEF: Offline Equilibrium Finding](https:\u002F\u002Fgithub.com\u002FSecurityGames\u002Foef) [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.05285)]\n  - Shuxin Li, Xinrun Wang, Jakub Cerny, Youzhi Zhang, Hau Chan, and Bo An.\n- [ExORL: Exploratory Data for Offline Reinforcement Learning](https:\u002F\u002Fgithub.com\u002Fdenisyarats\u002Fexorl) [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.13425)]\n  - Denis Yarats, David Brandfonbrener, Hao Liu, Michael Laskin, Pieter Abbeel, Alessandro Lazaric, and Lerrel Pinto.\n- [RL4RS: A Real-World Benchmark for Reinforcement Learning based Recommender System](https:\u002F\u002Fgithub.com\u002FfuxiAIlab\u002FRL4RS) 
[[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.11073)] [[dataset](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1YbPtPyYrMvMGOuqD4oHvK0epDtEhEb9v\u002Fview)]\n  - Kai Wang, Zhene Zou, Yue Shang, Qilin Deng, Minghao Zhao, Yile Liang, Runze Wu, Jianrong Tao, Xudong Shen, Tangjie Lyu, and Changjie Fan.\n- [NeoRL: Near Real-World Benchmarks for Offline Reinforcement Learning](https:\u002F\u002Fagit.ai\u002FPolixir\u002Fneorl) [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.00714)] [[website](http:\u002F\u002Fpolixir.ai\u002Fresearch\u002Fneorl)]\n  - Rongjun Qin, Songyi Gao, Xingyuan Zhang, Zhen Xu, Shengkai Huang, Zewen Li, Weinan Zhang, and Yang Yu.\n- [The Industrial Benchmark Offline RL Datasets](https:\u002F\u002Fgithub.com\u002Fsiemens\u002Findustrialbenchmark\u002Ftree\u002Foffline_datasets\u002Fdatasets) [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.05533)]\n  - Phillip Swazinna, Steffen Udluft, and Thomas Runkler.\n- [ARLO: A Framework for Automated Reinforcement Learning](https:\u002F\u002Fgithub.com\u002Farlo-lib\u002FARLO) [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.10416)]\n  - Marco Mussi, Davide Lombarda, Alberto Maria Metelli, Francesco Trovò, and Marcello Restelli.\n- [RecoGym: A Reinforcement Learning Environment for the problem of Product Recommendation in Online Advertising](https:\u002F\u002Fgithub.com\u002Fcriteo-research\u002Freco-gym) [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1808.00720)]\n  - David Rohde, Stephen Bonner, Travis Dunlop, Flavian Vasile, and Alexandros Karatzoglou.\n- [MARS-Gym: A Gym framework to model, train, and evaluate Recommender Systems for Marketplaces](https:\u002F\u002Fgithub.com\u002Fdeeplearningbrasil\u002Fmars-gym) [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.07035)] [[documentation](https:\u002F\u002Fmars-gym.readthedocs.io\u002Fen\u002Flatest\u002F)]\n  - Marlesson R. O. Santana, Luckeciano C. Melo, Fernando H. F. 
Camargo, Bruno Brandão, Anderson Soares, Renan M. Oliveira, and Sandor Caetano.\n- [A Reinforcement Learning-based Volt-VAR Control Dataset](https:\u002F\u002Fgithub.com\u002Fyg-smile\u002FRL_VVC_dataset) [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.09500)]\n  - Yuanqi Gao and Nanpeng Yu.\n\n## Blog\u002FPodcast\n### Blog\n- [Counterfactual Evaluation for Recommendation Systems](https:\u002F\u002Feugeneyan.com\u002Fwriting\u002Fcounterfactual-evaluation\u002F)\n  - Eugene Yan. 2022.\n- [Offline Reinforcement Learning: How Conservative Algorithms Can Enable New Applications](https:\u002F\u002Fbair.berkeley.edu\u002Fblog\u002F2020\u002F12\u002F07\u002Foffline\u002F)\n  - Aviral Kumar and Avi Singh. BAIR Blog, 2020.\n- [AWAC: Accelerating Online Reinforcement Learning with Offline Datasets](https:\u002F\u002Fbair.berkeley.edu\u002Fblog\u002F2020\u002F09\u002F10\u002Fawac\u002F)\n  - Ashvin Nair and Abhishek Gupta. BAIR Blog, 2020.\n- [D4RL: Building Better Benchmarks for Offline Reinforcement Learning](https:\u002F\u002Fbair.berkeley.edu\u002Fblog\u002F2020\u002F06\u002F25\u002FD4RL\u002F)\n  - Justin Fu. BAIR Blog, 2020.\n- [Does On-Policy Data Collection Fix Errors in Off-Policy Reinforcement Learning?](https:\u002F\u002Fbair.berkeley.edu\u002Fblog\u002F2020\u002F03\u002F16\u002Fdiscor\u002F)\n  - Aviral Kumar and Abhishek Gupta. BAIR Blog, 2020.\n- [Tackling Open Challenges in Offline Reinforcement Learning](https:\u002F\u002Fai.googleblog.com\u002F2020\u002F08\u002Ftackling-open-challenges-in-offline.html)\n  - George Tucker and Sergey Levine. Google AI Blog, 2020.\n- [An Optimistic Perspective on Offline Reinforcement Learning](https:\u002F\u002Fai.googleblog.com\u002F2020\u002F04\u002Fan-optimistic-perspective-on-offline.html)\n  - Rishabh Agarwal and Mohammad Norouzi. 
Google AI Blog, 2020.\n- [Decisions from Data: How Offline Reinforcement Learning Will Change How We Use Machine Learning](https:\u002F\u002Fmedium.com\u002F@sergey.levine\u002Fdecisions-from-data-how-offline-reinforcement-learning-will-change-how-we-use-ml-24d98cb069b0)\n  - Sergey Levine. Medium, 2020.\n- [Introducing completely free datasets for data-driven deep reinforcement learning](https:\u002F\u002Ftowardsdatascience.com\u002Fintroducing-completely-free-datasets-for-data-driven-deep-reinforcement-learning-a51e9bed85f9)\n  - Takuma Seno. Towards Data Science, 2020.\n- [Offline (Batch) Reinforcement Learning: A Review of Literature and Applications](https:\u002F\u002Fdanieltakeshi.github.io\u002F2020\u002F06\u002F28\u002Foffline-rl\u002F)\n  - Daniel Seita. danieltakeshi.github.io, 2020.\n- [Data-Driven Deep Reinforcement Learning](https:\u002F\u002Fbair.berkeley.edu\u002Fblog\u002F2019\u002F12\u002F05\u002Fbear\u002F)\n  - Aviral Kumar. BAIR Blog, 2019.\n\n### Podcast\n- [AI Trends 2023: Reinforcement Learning – RLHF, Robotic Pre-Training, and Offline RL with Sergey Levine](https:\u002F\u002Ftwimlai.com\u002Fpodcast\u002Ftwimlai\u002Fai-trends-2023-reinforcement-learning-rlhf-robotic-pre-training-and-offline-rl\u002F)\n  - Sergey Levine. TWIML, 2023.\n- [Bandits and Simulators for Recommenders with Olivier Jeunen](https:\u002F\u002Fopen.spotify.com\u002Fepisode\u002F35a8asBV1wBp8vIXr59Oz9)\n  - Olivier Jeunen. Recsperts, 2022.\n- [Sergey Levine on Robot Learning & Offline RL](https:\u002F\u002Fthegradientpub.substack.com\u002Fp\u002Fsergey-levine-on-robot-learning-and)\n  - Sergey Levine. The Gradient, 2021.\n- [Off-Line, Off-Policy RL for Real-World Decision Making at Facebook](https:\u002F\u002Ftwimlai.com\u002Foff-line-off-policy-rl-for-real-world-decision-making-at-facebook\u002F)\n  - Jason Gauci. 
TWIML, 2021.\n- [Xianyuan Zhan | TalkRL: The Reinforcement Learning Podcast](https:\u002F\u002Fwww.talkrl.com\u002Fepisodes\u002Fxianyuan-zhan)\n  - Xianyuan Zhan. TalkRL, 2021.\n- [MOReL: Model-Based Offline Reinforcement Learning with Aravind Rajeswaran](https:\u002F\u002Ftwimlai.com\u002Fmorel-model-based-offline-reinforcement-learning-with-aravind-rajeswaran\u002F)\n  - Aravind Rajeswaran. TWIML, 2020.\n- [Trends in Reinforcement Learning with Chelsea Finn](https:\u002F\u002Ftwimlai.com\u002Ftwiml-talk-335-trends-in-reinforcement-learning-with-chelsea-finn\u002F)\n  - Chelsea Finn. TWIML, 2020.\n- [Nan Jiang | TalkRL: The Reinforcement Learning Podcast](https:\u002F\u002Fwww.talkrl.com\u002Fepisodes\u002Fnan-jiang)\n  - Nan Jiang. TalkRL, 2020.\n- [Scott Fujimoto | TalkRL: The Reinforcement Learning Podcast](https:\u002F\u002Fwww.talkrl.com\u002Fepisodes\u002Fscott-fujimoto)\n  - Scott Fujimoto. TalkRL, 2019.\n\n## Related Workshops\n- [CONSEQUENCES (RecSys 2023)](https:\u002F\u002Fsites.google.com\u002Fview\u002Fconsequences2023)\n- [Offline Reinforcement Learning (NeurIPS 2022)](https:\u002F\u002Foffline-rl-neurips.github.io\u002F2022\u002F)\n- [Reinforcement Learning for Real Life (NeurIPS 2022)](https:\u002F\u002Fsites.google.com\u002Fview\u002FRL4RealLife)\n- [CONSEQUENCES + REVEAL (RecSys 2022)](https:\u002F\u002Fsites.google.com\u002Fview\u002Fconsequences2022)\n- [Offline Reinforcement Learning (NeurIPS 2021)](https:\u002F\u002Foffline-rl-neurips.github.io\u002F2021\u002F)\n- [Reinforcement Learning for Real Life (ICML 2021)](https:\u002F\u002Fsites.google.com\u002Fview\u002FRL4RealLife)\n- [Reinforcement Learning Day 2021](https:\u002F\u002Fwww.microsoft.com\u002Fen-us\u002Fresearch\u002Fevent\u002Freinforcement-learning-day-2021\u002F)\n- [Offline Reinforcement Learning (NeurIPS 2020)](https:\u002F\u002Foffline-rl-neurips.github.io\u002F)\n- [Reinforcement Learning from Batch Data and 
Simulation](https:\u002F\u002Fsimons.berkeley.edu\u002Fworkshops\u002Fschedule\u002F14240)\n- [Reinforcement Learning for Real Life (RL4RealLife 2020)](https:\u002F\u002Fsites.google.com\u002Fview\u002FRL4RealLife2020)\n- [Safety and Robustness in Decision Making (NeurIPS 2019)](https:\u002F\u002Fsites.google.com\u002Fview\u002Fneurips19-safe-robust-workshop)\n- [Reinforcement Learning for Real Life (ICML 2019)](https:\u002F\u002Fsites.google.com\u002Fview\u002FRL4RealLife2019)\n- [Real-world Sequential Decision Making (ICML 2019)](https:\u002F\u002Frealworld-sdm.github.io\u002F)\n\n## Tutorials\u002FTalks\u002FLectures\n- [Reinforcement Learning with Large Datasets: Robotics, Image Generation, and LLMs](https:\u002F\u002Fwww.youtube.com\u002Fwatch?app=desktop&v=Iu_Uux0R0BI&feature=youtu.be)\n  - Sergey Levine. 2023.\n- [Counterfactual Evaluation and Learning for Interactive Systems](https:\u002F\u002Fcounterfactual-ml.github.io\u002Fkdd2022-tutorial\u002F)\n  - Yuta Saito and Thorsten Joachims. KDD2022.\n- [Representation Learning for Online and Offline RL in Low-rank MDPs](https:\u002F\u002Fm.youtube.com\u002Fwatch?v=EynREeip-y8)\n  - Masatoshi Uehara. RL Theory Seminar2022.\n- [Offline Reinforcement Learning: Fundamental Barriers for Value Function Approximation](https:\u002F\u002Fyoutu.be\u002FQS2xVHgBg-k)\n  - Yunzong Xu. RL Theory Seminar2022.\n- [Safe Policy Learning through Extrapolation: Application to Pre-trial Risk Assessment](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=Gd2-MxJQTKA)\n  - Kosuke Imai. Online Causal Inference Seminar2022.\n- [Deep Reinforcement Learning with Real-World Data](https:\u002F\u002Fm.youtube.com\u002Fwatch?v=0Kw-VTym9Pg)\n  - Sergey Levine. 2022.\n- [Planning with Reinforcement Learning](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=T39xkKN7uwo)\n  - Sergey Levine. 2022.\n- [Imitation learning vs. offline reinforcement learning](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=sVPm7zOrBxM)\n  - Sergey Levine. 
2022.\n- [Tutorial on the Foundations of Offline Reinforcement Learning](https:\u002F\u002Fwww.youtube.com\u002Fwatch?app=desktop&v=lH9DzugrejY)\n  - Romain Laroche and David Brandfonbrener. 2022.\n- [Counterfactual Learning and Evaluation for Recommender Systems: Foundations, Implementations, and Recent Advances](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=HMo9fQMVB4w) [[website](https:\u002F\u002Fsites.google.com\u002Fcornell.edu\u002Frecsys2021tutorial)]\n  - Yuta Saito and Thorsten Joachims. RecSys2021.\n- [Offline Reinforcement Learning](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=k08N5a0gG0A)\n  - Sergey Levine. BayLearn2021.\n- [Offline Reinforcement Learning](https:\u002F\u002Fm.youtube.com\u002Fwatch?v=Es2G8FDl-Nc)\n  - Guy Tennenholtz. CHIL2021.\n- [Fast Rates for the Regret of Offline Reinforcement Learning](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=eGZ-2JU9zKE)\n  - Yichun Hu. RL Theory Seminar2021.\n- [Bellman-consistent Pessimism for Offline Reinforcement Learning](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=g_yD6Yw8MLQ)\n  - Tengyang Xie. RL Theory Seminar2021.\n- [Pessimistic Model-based Offline Reinforcement Learning under Partial Coverage](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=aPce6Y-NqpQ)\n  - Masatoshi Uehara. RL Theory Seminar2021.\n- [Bridging Offline Reinforcement Learning and Imitation Learning: A Tale of Pessimism](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=T1Am0bGzH4A)\n  - Paria Rashidinejad. RL Theory Seminar2021.\n- [Infinite-Horizon Offline Reinforcement Learning with Linear Function Approximation: Curse of Dimensionality and Algorithm](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=uOIvo1wQ_RQ)\n  - Lin Chen. RL Theory Seminar2021.\n- [Is Pessimism Provably Efficient for Offline RL?](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=vCQsZ5pzHPk)\n  - Ying Jin. RL Theory Seminar2021.\n- [Adaptive Estimator Selection for Off-Policy Evaluation](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=r8ZDuC71lCs)\n  - Yi Su. 
RL Theory Seminar2021.\n- [What are the Statistical Limits of Offline RL with Linear Function Approximation?](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=FkkphMeFapg)\n  - Ruosong Wang. RL Theory Seminar2021.\n- [Exponential Lower Bounds for Batch Reinforcement Learning: Batch RL can be Exponentially Harder than Online RL](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=YktnEdsxYfc&feature=youtu.be)\n  - Andrea Zanette. RL Theory Seminar2021.\n- [A Gentle Introduction to Offline Reinforcement Learning](https:\u002F\u002Fm.youtube.com\u002Fwatch?v=tW-BNW1ApN8&feature=youtu.be)\n  - Sergey Levine. 2021.\n- [Principles for Tackling Distribution Shift: Pessimism, Adaptation, and Anticipation](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=QKBh6TmvBaw)\n  - Chelsea Finn. 2020-2021 Machine Learning Advances and Applications Seminar.\n- [Offline Reinforcement Learning: Incorporating Knowledge from Data into RL](https:\u002F\u002Fm.youtube.com\u002Fwatch?v=KzZFN8zUxkI&feature=youtu.be)\n  - Sergey Levine. IJCAI-PRICAI2020 Knowledge Based Reinforcement Learning Workshop.\n- [Offline RL](https:\u002F\u002Fslideslive.com\u002F38938455\u002Foffline-rl)\n  - Nando de Freitas. NeurIPS2020 OfflineRL Workshop.\n- [Learning a Multi-Agent Simulator from Offline Demonstrations](https:\u002F\u002Fslideslive.com\u002F38938458\u002Flearning-a-multiagent-simulator-from-offline-demonstrations)\n  - Brandyn White. NeurIPS2020 OfflineRL Workshop.\n- [Towards Reliable Validation and Evaluation for Offline RL](https:\u002F\u002Fslideslive.com\u002F38938459\u002Ftowards-reliable-validation-and-evaluation-for-offline-rl)\n  - Nan Jiang. NeurIPS2020 OfflineRL Workshop.\n- [Batch RL Models Built for Validation](https:\u002F\u002Fslideslive.com\u002F38938457\u002Fbatch-rl-models-built-for-validation)\n  - Finale Doshi-Velez. 
NeurIPS2020 OfflineRL Workshop.\n- [Offline Reinforcement Learning: From Algorithms to Practical Challenges](https:\u002F\u002Fsites.google.com\u002Fview\u002Fofflinerltutorial-neurips2020\u002Fhome)\n  - Aviral Kumar and Sergey Levine. NeurIPS2020.\n- [Data Scalability for Robot Learning](https:\u002F\u002Fyoutu.be\u002FLGlgSeWemcM)\n  - Chelsea Finn. RI Seminar2020.\n- [Statistically Efficient Offline Reinforcement Learning](https:\u002F\u002Fyoutu.be\u002Fn5ZoxT_WmHo)\n  - Nathan Kallus. ARL Seminar2020.\n- [Near Optimal Provable Uniform Convergence in Off-Policy Evaluation for Reinforcement Learning](https:\u002F\u002Fyoutu.be\u002FFWZewbQykv4)\n  - Yu-Xiang Wang. RL Theory Seminar2020.\n- [Minimax-Optimal Off-Policy Evaluation with Linear Function Approximation](https:\u002F\u002Fyoutu.be\u002FTX9KBofFZ8s)\n  - Mengdi Wang. RL Theory Seminar2020.\n- [Beyond the Training Distribution: Embodiment, Adaptation, and Symmetry](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=wv1zXnxRCCM&feature=youtu.be)\n  - Chelsea Finn. EI Seminar2020.\n- [Combining Statistical methods with Human Input for Evaluation and Optimization in Batch Settings](https:\u002F\u002Fslideslive.com\u002F38922630\u002Fcombining-statistical-methods-with-human-input-for-evaluation-and-optimization-in-batch-settings)\n  - Finale Doshi-Velez. NeurIPS2019 Workshop on Safety and Robustness in Decision Making.\n- [Efficiently Breaking the Curse of Horizon with Double Reinforcement Learning](https:\u002F\u002Fslideslive.com\u002F38922636\u002Fefficiently-breaking-the-curse-of-horizon-with-double-reinforcement-learning)\n  - Nathan Kallus. NeurIPS2019 Workshop on Safety and Robustness in Decision Making.\n- [Scaling Probabilistically Safe Learning to Robotics](https:\u002F\u002Fslideslive.com\u002F38922637\u002Fscaling-probabilistically-safe-learning-to-robotics?locale=en)\n  - Scott Niekum. 
NeurIPS2019 Workshop on Safety and Robustness in Decision Making.\n- [Deep Reinforcement Learning in the Real World](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=b97H5uz8xkI)\n  - Sergey Levine. Workshop on New Directions in Reinforcement Learning and Control2019.\n","# Awesome Offline RL\nThis is a collection of research and review papers on **offline reinforcement learning (offline RL)**. Stars and forks are welcome.\n\n\nMaintainers:\n- [Haruka Kiyohara](https:\u002F\u002Fsites.google.com\u002Fview\u002Fharukakiyohara) (Cornell University)\n- [Yuta Saito](https:\u002F\u002Fusait0.com\u002Fen\u002F) (Hanjuku-kaso Co., Ltd. \u002F Cornell University)\n\nWe are looking for more contributors and maintainers! Please feel free to submit [pull requests](https:\u002F\u002Fgithub.com\u002Fusaito\u002Fawesome-offline-rl\u002Fpulls).\n\n```\nFormat:\n- [Title](paper link) [links]\n  - Author 1, Author 2, and Author 3. arXiv\u002FConference\u002FJournal, Year.\n```\n\nFor any question, please contact: hk844@cornell.edu\n\n## Contents\n- [Papers](https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl\u002Ftree\u002Fmain#papers)\n  - [Review\u002FSurvey\u002FPosition Papers](https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl\u002Ftree\u002Fmain#reviewsurveyposition-papers)\n    - [Offline RL](https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl\u002Ftree\u002Fmain#offline-rl)\n    - [Off-Policy Evaluation and Learning](https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl\u002Ftree\u002Fmain#off-policy-evaluation-and-learning)\n    - [Related Reviews](https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl\u002Ftree\u002Fmain#related-reviews)\n  - [Offline RL: Theory\u002FMethods](https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl\u002Ftree\u002Fmain#offline-rl-theorymethods)\n  - [Offline RL: Benchmarks\u002FExperiments](https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl\u002Ftree\u002Fmain#offline-rl-benchmarksexperiments)\n  - [Offline RL: Applications](https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl\u002Ftree\u002Fmain#offline-rl-applications)\n  - [Off-Policy Evaluation and Learning: Theory\u002FMethods](https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl#off-policy-evaluation-and-learning-theorymethods)\n    - 
[Off-Policy Evaluation: Contextual Bandits](https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl\u002Ftree\u002Fmain#off-policy-evaluation-contextual-bandits)\n    - [Off-Policy Evaluation: Reinforcement Learning](https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl\u002Ftree\u002Fmain#off-policy-evaluation-reinforcement-learning)\n    - [Off-Policy Learning](https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl\u002Ftree\u002Fmain#off-policy-learning)\n  - [Off-Policy Evaluation and Learning: Benchmarks\u002FExperiments](https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl#off-policy-evaluation-and-learning-benchmarksexperiments)\n  - [Off-Policy Evaluation and Learning: Applications](https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl#off-policy-evaluation-and-learning-applications)\n- [Open Source Software\u002FImplementations](https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl\u002Ftree\u002Fmain#open-source-softwareimplementations)\n- [Blog\u002FPodcast](https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl\u002Ftree\u002Fmain#blogpodcast)\n  - [Blog](https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl\u002Ftree\u002Fmain#blog)\n  - [Podcast](https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl\u002Ftree\u002Fmain#podcast)\n- [Related Workshops](https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl\u002Ftree\u002Fmain#related-workshops)\n- [Tutorials\u002FTalks\u002FLectures](https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl\u002Ftree\u002Fmain#tutorialstalkslectures)\n\n## Papers\n\n### Review\u002FSurvey\u002FPosition Papers\n#### Offline RL\n- [Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.15217)\n  - Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. 
Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, and Dylan Hadfield-Menell. arXiv, 2023.\n- [A Survey on Offline Model-Based Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.03360)\n  - Haoyang He. arXiv, 2023.\n- [Foundation Models for Decision Making: Problems, Methods, and Opportunities](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.04129)\n  - Sherry Yang, Ofir Nachum, Yilun Du, Jason Wei, Pieter Abbeel, and Dale Schuurmans. arXiv, 2023.\n- [A Survey on Offline Reinforcement Learning: Taxonomy, Review, and Open Problems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.01387)\n  - Rafael Figueiredo Prudencio, Marcos R. O. A. Maximo, and Esther Luna Colombini. arXiv, 2022.\n- [Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2005.01643)\n  - Sergey Levine, Aviral Kumar, George Tucker, and Justin Fu. arXiv, 2020.\n\n#### Off-Policy Evaluation and Learning\n- [A Review of Off-Policy Evaluation in Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.06355)\n  - Masatoshi Uehara, Chengchun Shi, and Nathan Kallus. arXiv, 2022.\n\n#### Related Reviews\n- [Opportunities and Challenges of Offline Reinforcement Learning for Recommender Systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.11336)\n  - Xiaocong Chen, Siyu Wang, Julian McAuley, Dietmar Jannach, and Lina Yao. arXiv, 2023.\n- [Understanding Reinforcement Learning Algorithms: The Progress from Basic Q-learning to Proximal Policy Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.00026)\n  - Mohamed-Amine Chadi and Hajar Mousannif. arXiv, 2023.\n- [Offline Evaluation for Reinforcement Learning-Based Recommendation: A Critical Issue and Some Alternatives](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.00993)\n  - Romain Deffayet, Thibaut Thonet, Jean-Michel Renders, and Maarten de Rijke. arXiv, 2023.\n- [A Survey on Transformers in Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.03044)\n  - Wenzhe Li, Hao Luo, Zichuan Lin, Chongjie Zhang, Zongqing Lu, and Deheng Ye. arXiv, 2023.\n- [Deep Reinforcement Learning: Opportunities and Challenges](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.11296)\n  - Yuxi Li. arXiv, 2022.\n- [A Survey on Model-Based Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.09328)\n  - Fan-Ming Luo, Tian Xu, Hang Lai, Xiong-Hui Chen, Weinan Zhang, and Yang Yu. arXiv, 2022.\n- [Survey on Fair Reinforcement Learning: Theory and Practice](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.10032)\n  - Pratik Gajane, Akrati Saxena, 
Maryam Tavakol, George Fletcher, and Mykola Pechenizkiy. arXiv, 2022.\n- [Accelerating Offline Reinforcement Learning Application in Real-Time Bidding and Recommendation: Potential Use of Simulation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.08331)\n  - Haruka Kiyohara, Kosuke Kawakami, and Yuta Saito. arXiv, 2021.\n- [A Survey of Generalisation in Deep Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.09794)\n  - Robert Kirk, Amy Zhang, Edward Grefenstette, and Tim Rocktäschel. arXiv, 2021.\n\n### Offline RL: Theory\u002FMethods\n- [Value-Aided Conditional Supervised Learning for Offline RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.02017)\n  - Jeonghye Kim, Suyoung Lee, Woojun Kim, and Youngchul Sung. arXiv, 2024.\n- [Towards an Information Theoretic Framework of Context-Based Offline Meta-Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.02429)\n  - Lanqing Li, Hai Zhang, Xinyu Zhang, Shatong Zhu, Junqiao Zhao, and Pheng-Ann Heng. arXiv, 2024.\n- [DiffStitch: Boosting Offline Reinforcement Learning with Diffusion-based Trajectory Stitching](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.02439)\n  - Guanghe Li, Yixiang Shan, Zhengbang Zhu, Ting Long, and Weinan Zhang. arXiv, 2024.\n- [Deep autoregressive density nets vs neural ensembles for model-based offline reinforcement learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.02858)\n  - Abdelhakim Benechehab, Albert Thomas, and Balázs Kégl. arXiv, 2024.\n- [Context-Former: Stitching via Latent Conditioned Sequence Modeling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.16452)\n  - Ziqi Zhang, Jingzehua Xu, Zifeng Zhuang, Jinxin Liu, and Donglin Wang. arXiv, 2024.\n- [Adversarially Trained Actor Critic for offline CMDPs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.00629)\n  - Honghao Wei, Xiyue Peng, Xin Liu, and Arnob Ghosh. arXiv, 2024.\n- [Optimistic Model Rollouts for Pessimistic Offline Policy Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.05899)\n  - Yuanzhao Zhai, Yiying Li, Zijian Gao, Xudong Gong, Kele Xu, Dawei Feng, Ding Bo, and Huaimin Wang. 
arXiv, 2024.\n- [Solving Continual Offline Reinforcement Learning with Decision Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.08478)\n  - Kaixin Huang, Li Shen, Chen Zhao, Chun Yuan, and Dacheng Tao. arXiv, 2024.\n- [MoMA: Model-based Mirror Ascent for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.11380)\n  - Mao Hong, Zhiyue Zhang, Yue Wu, and Yanxun Xu. arXiv, 2024.\n- [Reframing Offline Reinforcement Learning as a Regression Problem](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.11630)\n  - Prajwal Koirala and Cody Fleming. arXiv, 2024.\n- [Efficient Two-Phase Offline Deep Reinforcement Learning from Preference Feedback](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.00330)\n  - Yinglun Xu and Gagandeep Singh. arXiv, 2024.\n- [Policy-regularized Offline Multi-objective Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.02244)\n  - Qian Lin, Chao Yu, Zongkai Liu, and Zifan Wu. arXiv, 2024.\n- [Differentiable Tree Search in Latent State Space](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.11660)\n  - Dixant Mittal and Wee Sun Lee. arXiv, 2024.\n- [Learning from Sparse Offline Datasets via Conservative Density Estimation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.08819)\n  - Zhepeng Cen, Zuxin Liu, Zitong Wang, Yihang Yao, Henry Lam, and Ding Zhao. ICLR, 2024.\n- [Safe Offline Reinforcement Learning with Feasibility-Guided Diffusion Model](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.10700)\n  - Yinan Zheng, Jianxiong Li, Dongjie Yu, Yujie Yang, Shengbo Eben Li, Xianyuan Zhan, and Jingjing Liu. ICLR, 2024.\n- [PDiT: Interleaving Perception and Decision-making Transformers for Deep Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.15863)\n  - Hangyu Mao, Rui Zhao, Ziyue Li, Zhiwei Xu, Hao Chen, Yiqun Chen, Bin Zhang, Zhen Xiao, Junge Zhang, and Jiangjin Yin. 
AAMAS, 2024.\n- [Critic-Guided Decision Transformer for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.13716)\n  - Yuanfu Wang, Chao Yang, Ying Wen, Yu Liu, and Yu Qiao. AAAI, 2024.\n- [CUDC: A Curiosity-Driven Unsupervised Data Collection Method with Adaptive Temporal Distances for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.12191)\n  - Chenyu Sun, Hangwei Qian, and Chunyan Miao. AAAI, 2024.\n- [Neural Network Approximation for Pessimistic Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.11863)\n  - Di Wu, Yuling Jiao, Li Shen, Haizhao Yang, and Xiliang Lu. AAAI, 2024.\n- [A Perspective of Q-value Estimation on Offline-to-Online Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.07685)\n  - Yinmin Zhang, Jie Liu, Chuming Li, Yazhe Niu, Yaodong Yang, Yu Liu, and Wanli Ouyang. AAAI, 2024.\n- [The Generalization Gap in Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.05742)\n  - Ishita Mediratta, Qingfei You, Minqi Jiang, and Roberta Raileanu. arXiv, 2023.\n- [Decoupling Meta-Reinforcement Learning with Gaussian Task Contexts and Skills](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.06518)\n  - Hongcai He, Anjie Zhu, Shuang Liang, Feiyu Chen, and Jie Shao. arXiv, 2023.\n- [MICRO: Model-Based Offline Reinforcement Learning with a Conservative Bellman Operator](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.03991)\n  - Xiao-Yin Liu, Xiao-Hu Zhou, Guo-Tao Li, Hao Li, Mei-Jiang Gui, Tian-Yu Xiang, De-Xing Huang, and Zeng-Guang Hou. arXiv, 2023.\n- [Model-Based Epistemic Variance of Values for Risk-Aware Policy Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.04386)\n  - Carlos E. Luis, Alessandro G. Bottero, Julia Vinogradska, Felix Berkenkamp, and Jan Peters. 
arXiv, 2023.\n- [Using Curiosity for an Even Representation of Tasks in Continual Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.03177)\n  - Pankayaraj Pathmanathan, Natalia Díaz-Rodríguez, and Javier Del Ser. arXiv, 2023.\n- [Projected Off-Policy Q-Learning (POP-QL) for Stabilizing Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.14885)\n  - Melrose Roderick, Gaurav Manek, Felix Berkenkamp, and J. Zico Kolter. arXiv, 2023.\n- [Offline Data Enhanced On-Policy Policy Gradient with Provable Guarantees](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.08384)\n  - Yifei Zhou, Ayush Sekhari, Yuda Song, and Wen Sun. arXiv, 2023.\n- [Switch Trajectory Transformer with Distributional Value Approximation for Multi-Task Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.07413)\n  - Qinjie Lin, Han Liu, and Biswa Sengupta. arXiv, 2023.\n- [Hierarchical Decision Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.10447)\n  - André Correia and Luís A. Alexandre. arXiv, 2023.\n- [Prompt-Tuning Decision Transformer with Preference Ranking](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.09648)\n  - Shengchao Hu, Li Shen, Ya Zhang, and Dacheng Tao. arXiv, 2023.\n- [Context Shift Reduction for Offline Meta-Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.03695)\n  - Yunkai Gao, Rui Zhang, Jiaming Guo, Fan Wu, Qi Yi, Shaohui Peng, Siming Lan, Ruizhi Chen, Zidong Du, Xing Hu, Qi Guo, Ling Li, and Yunji Chen. arXiv, 2023.\n- [Uni-O4: Unifying Online and Offline Deep Reinforcement Learning with Multi-Step On-Policy Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.03351)\n  - Kun Lei, Zhengmao He, Chenhao Lu, Kaizhe Hu, Yang Gao, and Huazhe Xu. 
arXiv, 2023.\n- [Score Models for Offline Goal-Conditioned Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.02013)\n  - Harshit Sikchi, Rohan Chitnis, Ahmed Touati, Alborz Geramifard, Amy Zhang, and Scott Niekum. arXiv, 2023.\n- [Offline RL with Observation Histories: Analyzing and Improving Sample Complexity](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.20663)\n  - Joey Hong, Anca Dragan, and Sergey Levine. arXiv, 2023.\n- [Expressive Modeling Is Insufficient for Offline RL: A Tractable Inference Perspective](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.00094)\n  - Xuejie Liu, Anji Liu, Guy Van den Broeck, and Yitao Liang. arXiv, 2023.\n- [Rethinking Decision Transformer via Hierarchical Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.00267)\n  - Yi Ma, Chenjun Xiao, Hebin Liang, and Jianye Hao. arXiv, 2023.\n- [Unleashing the Power of Pre-trained Language Models for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.20587)\n  - Ruizhe Shi, Yuyao Liu, Yanjie Ze, Simon S. Du, and Huazhe Xu. arXiv, 2023.\n- [GOPlan: Goal-conditioned Offline Reinforcement Learning by Planning with Learned Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.20025)\n  - Mianchu Wang, Rui Yang, Xi Chen, and Meng Fang. arXiv, 2023.\n- [SERA: Sample Efficient Reward Augmentation in offline-to-online Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.19805)\n  - Ziqi Zhang, Xiao Xiong, Zifeng Zhuang, Jinxin Liu, and Donglin Wang. arXiv, 2023.\n- [Bridging Distributionally Robust Learning and Offline RL: An Approach to Mitigate Distribution Shift and Partial Data Coverage](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.18434)\n  - Kishan Panaganti, Zaiyan Xu, Dileep Kalathil, and Mohammad Ghavamzadeh. arXiv, 2023.\n- [Guided Data Augmentation for Offline Reinforcement Learning and Imitation Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.18247)\n  - Nicholas E. 
Corrado, Yuxiao Qu, John U. Balis, Adam Labiosa, and Josiah P. Hanna. arXiv, 2023.\n- [CROP: Conservative Reward for Model-based Offline Policy Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.17245)\n  - Hao Li, Xiao-Hu Zhou, Xiao-Liang Xie, Shi-Qi Liu, Zhen-Qiu Feng, Xiao-Yin Liu, Mei-Jiang Gui, Tian-Yu Xiang, De-Xing Huang, Bo-Xian Yao, and Zeng-Guang Hou. arXiv, 2023.\n- [Towards Robust Offline Reinforcement Learning under Diverse Data Corruption](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.12955)\n  - Rui Yang, Han Zhong, Jiawei Xu, Amy Zhang, Chongjie Zhang, Lei Han, and Tong Zhang. arXiv, 2023.\n- [Offline Retraining for Online RL: Decoupled Policy Learning to Mitigate Exploration Bias](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.08558)\n  - Max Sobol Mark, Archit Sharma, Fahim Tajwar, Rafael Rafailov, Sergey Levine, and Chelsea Finn. arXiv, 2023.\n- [Boosting Continuous Control with Consistency Policy](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.06343)\n  - Yuhui Chen, Haoran Li, and Dongbin Zhao. arXiv, 2023.\n- [Planning to Go Out-of-Distribution in Offline-to-Online Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.05723)\n  - Trevor McInroe, Stefano V. Albrecht, and Amos Storkey. arXiv, 2023.\n- [Reward-Consistent Dynamics Models are Strongly Generalizable for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.05422)\n  - Fan-Ming Luo, Tian Xu, Xingchen Cao, and Yang Yu. arXiv, 2023.\n- [DiffCPS: Diffusion Model based Constrained Policy Search for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.05333)\n  - Longxiang He, Linrui Zhang, Junbo Tan, and Xueqian Wang. arXiv, 2023.\n- [Self-Confirming Transformer for Locally Consistent Online Adaptation in Multi-Agent Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.04579)\n  - Tao Li, Juan Guevara, Xinghong Xie, and Quanyan Zhu. 
arXiv, 2023.\n- [Learning to Reach Goals via Diffusion](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.02505)\n  - Vineet Jain and Siamak Ravanbakhsh. arXiv, 2023.\n- [Decision ConvFormer: Local Filtering in MetaFormer is Sufficient for Decision Making](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.03022)\n  - Jeonghye Kim, Suyoung Lee, Woojun Kim, and Youngchul Sung. arXiv, 2023.\n- [Consistency Models as a Rich and Efficient Policy Class for Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.16984)\n  - Zihan Ding and Chi Jin. arXiv, 2023.\n- [Pessimistic Nonlinear Least-Squares Value Iteration for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.01380)\n  - Qiwei Di, Heyang Zhao, Jiafan He, and Quanquan Gu. arXiv, 2023.\n- [Reasoning with Latent Diffusion in Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.06599)\n  - Siddarth Venkatraman, Shivesh Khaitan, Ravi Tej Akella, John Dolan, Jeff Schneider, and Glen Berseth. arXiv, 2023.\n- [Hundreds Guide Millions: Adaptive Offline Reinforcement Learning with Expert Guidance](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.01448)\n  - Qisen Yang, Shenzhi Wang, Qihang Zhang, Gao Huang, and Shiji Song. arXiv, 2023.\n- [Towards Robust Offline-to-Online Reinforcement Learning via Uncertainty and Smoothness](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.16973)\n  - Xiaoyu Wen, Xudong Yu, Rui Yang, Chenjia Bai, and Zhen Wang. arXiv, 2023.\n- [Robust Offline Reinforcement Learning -- Certify the Confidence Interval](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.16631)\n  - Jiarui Yao and Simon Shaolei Du. arXiv, 2023.\n- [Stackelberg Batch Policy Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.16188)\n  - Wenzhuo Zhou and Annie Qu. 
arXiv, 2023.\n- [H2O+: An Improved Framework for Hybrid Offline-and-Online RL with Dynamics Gaps](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.12716)\n  - Haoyi Niu, Tianying Ji, Bingqi Liu, Haocheng Zhao, Xiangyu Zhu, Jianying Zheng, Pengfei Huang, Guyue Zhou, Jianming Hu, and Xianyuan Zhan. arXiv, 2023.\n- [Q-Transformer: Scalable Offline Reinforcement Learning via Autoregressive Q-Functions](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.10150)\n  - Yevgen Chebotar, Quan Vuong, Alex Irpan, Karol Hausman, Fei Xia, Yao Lu, Aviral Kumar, Tianhe Yu, Alexander Herzog, Karl Pertsch, Keerthana Gopalakrishnan, Julian Ibarz, Ofir Nachum, Sumedh Sontakke, Grecia Salazar, Huong T Tran, Jodilyn Peralta, Clayton Tan, Deeksha Manjunath, Jaspiar Singht, Brianna Zitkovich, Tomas Jackson, Kanishka Rao, Chelsea Finn, and Sergey Levine. arXiv, 2023.\n- [DOMAIN: MilDly COnservative Model-BAsed OfflINe Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.08925)\n  - Xiao-Yin Liu, Xiao-Hu Zhou, Xiao-Liang Xie, Shi-Qi Liu, Zhen-Qiu Feng, Hao Li, Mei-Jiang Gui, Tian-Yu Xiang, De-Xing Huang, and Zeng-Guang Hou. arXiv, 2023.\n- [Guided Online Distillation: Promoting Safe Reinforcement Learning by Offline Demonstration](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.09408)\n  - Jinning Li, Xinyi Liu, Banghua Zhu, Jiantao Jiao, Masayoshi Tomizuka, Chen Tang, and Wei Zhan. arXiv, 2023.\n- [Equivariant Data Augmentation for Generalization in Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.07578)\n  - Cristina Pinneri, Sarah Bechtle, Markus Wulfmeier, Arunkumar Byravan, Jingwei Zhang, William F. Whitney, and Martin Riedmiller. arXiv, 2023.\n- [Reasoning with Latent Diffusion in Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.06599)\n  - Siddarth Venkatraman, Shivesh Khaitan, Ravi Tej Akella, John Dolan, Jeff Schneider, and Glen Berseth. 
arXiv, 2023.\n- [Hundreds Guide Millions: Adaptive Offline Reinforcement Learning with Expert Guidance](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.01448)\n  - Qisen Yang, Shenzhi Wang, Qihang Zhang, Gao Huang, and Shiji Song. arXiv, 2023.\n- [Multi-Objective Decision Transformers for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.16379)\n  - Abdelghani Ghanem, Philippe Ciblat, and Mounir Ghogho. arXiv, 2023.\n- [AlphaStar Unplugged: Large-Scale Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.03526)\n  - Michaël Mathieu, Sherjil Ozair, Srivatsan Srinivasan, Caglar Gulcehre, Shangtong Zhang, Ray Jiang, Tom Le Paine, Richard Powell, Konrad Żołna, Julian Schrittwieser, David Choi, Petko Georgiev, Daniel Toyama, Aja Huang, Roman Ring, Igor Babuschkin, Timo Ewalds, Mahyar Bordbar, Sarah Henderson, Sergio Gómez Colmenarejo, Aäron van den Oord, Wojciech Marian Czarnecki, Nando de Freitas, and Oriol Vinyals. arXiv, 2023.\n- [Exploiting Generalization in Offline Reinforcement Learning via Unseen State Augmentations](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.03882)\n  - Nirbhay Modhe, Qiaozi Gao, Ashwin Kalyan, Dhruv Batra, Govind Thattai, and Gaurav Sukhatme. arXiv, 2023.\n- [PASTA: Pretrained Action-State Transformer Agents](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.10936)\n  - Raphael Boige, Yannis Flet-Berliac, Arthur Flajolet, Guillaume Richard, and Thomas Pierrot. arXiv, 2023.\n- [Towards A Unified Agent with Foundation Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.09668)\n  - Norman Di Palo, Arunkumar Byravan, Leonard Hasenclever, Markus Wulfmeier, Nicolas Heess, and Martin Riedmiller. arXiv, 2023.\n- [Goal-Conditioned Predictive Coding as an Implicit Planner for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.03406)\n  - Zilai Zeng, Ce Zhang, Shijie Wang, and Chen Sun. 
arXiv, 2023.\n- [Offline Reinforcement Learning with Imbalanced Datasets](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.02752)\n  - Li Jiang, Sijie Chen, Jielin Qiu, Haoran Xu, Wai Kin Chan, and Zhao Ding. arXiv, 2023.\n- [LLQL: Logistic Likelihood Q-Learning for Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.02345)\n  - Outongyi Lv, Bingxin Zhou, and Yu Guang Wang. arXiv, 2023.\n- [Elastic Decision Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.02484)\n  - Yueh-Hua Wu, Xiaolong Wang, and Masashi Hamaya. arXiv, 2023.\n- [Prioritized Trajectory Replay: A Replay Memory for Data-driven Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.15503)\n  - Jinyi Liu, Yi Ma, Jianye Hao, Yujing Hu, Yan Zheng, Tangjie Lv, and Changjie Fan. arXiv, 2023.\n- [Is RLHF More Difficult than Standard RL?](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.14111)\n  - Yuanhao Wang, Qinghua Liu, and Chi Jin. arXiv, 2023.\n- [Supervised Pretraining Can Learn In-Context Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.14892)\n  - Jonathan N. Lee, Annie Xie, Aldo Pacchiano, Yash Chandak, Chelsea Finn, Ofir Nachum, and Emma Brunskill. arXiv, 2023.\n- [Fighting Uncertainty with Gradients: Offline Reinforcement Learning via Diffusion Score Matching](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.14079)\n  - H.J. Terry Suh, Glen Chou, Hongkai Dai, Lujie Yang, Abhishek Gupta, and Russ Tedrake. arXiv, 2023.\n- [Safe Reinforcement Learning with Dead-Ends Avoidance and Recovery](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.13944)\n  - Xiao Zhang, Hai Zhang, Hongtu Zhou, Chang Huang, Di Zhang, Chen Ye, and Junqiao Zhao. arXiv, 2023.\n- [CLUE: Calibrated Latent Guidance for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.13412)\n  - Jinxin Liu, Lipeng Zu, Li He, and Donglin Wang. 
arXiv, 2023.\n- [Harnessing Mixed Offline Reinforcement Learning Datasets via Trajectory Weighting](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.13085)\n  - Zhang-Wei Hong, Pulkit Agrawal, Rémi Tachet des Combes, and Romain Laroche.\n- [Beyond OOD State Actions: Supported Cross-Domain Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.12755)\n  - Jinxin Liu, Ziqi Zhang, Zhenyu Wei, Zifeng Zhuang, Yachen Kang, Sibo Gai, and Donglin Wang. arXiv, 2023.\n- [A Primal-Dual-Critic Algorithm for Offline Constrained Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.07818)\n  - Kihyuk Hong, Yuhang Li, and Ambuj Tewari. arXiv, 2023.\n- [HIPODE: Enhancing Offline Reinforcement Learning with High-Quality Synthetic Data from a Policy-Decoupled Approach](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.06329)\n  - Shixi Lian, Yi Ma, Jinyi Liu, Yan Zheng, and Zhaopeng Meng. arXiv, 2023.\n- [Ensemble-based Offline-to-Online Reinforcement Learning: From Pessimistic Learning to Optimistic Exploration](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.06871)\n  - Kai Zhao, Yi Ma, Jinyi Liu, Yan Zheng, and Zhaopeng Meng. arXiv, 2023.\n- [In-Sample Policy Iteration for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.05726)\n  - Xiaohan Hu, Yi Ma, Chenjun Xiao, Yan Zheng, and Zhaopeng Meng. arXiv, 2023.\n- [Instructed Diffuser with Temporal Condition Guidance for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.04875)\n  - Jifeng Hu, Yanchao Sun, Sili Huang, SiYuan Guo, Hechang Chen, Li Shen, Lichao Sun, Yi Chang, and Dacheng Tao. arXiv, 2023.\n- [Offline Prioritized Experience Replay](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.05412)\n  - Yang Yue, Bingyi Kang, Xiao Ma, Gao Huang, Shiji Song, and Shuicheng Yan. 
arXiv, 2023.\n- [Delphic Offline Reinforcement Learning under Nonidentifiable Hidden Confounding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.01157)\n  - Alizée Pace, Hugo Yèche, Bernhard Schölkopf, Gunnar Rätsch, and Guy Tennenholtz. arXiv, 2023.\n- [Offline Meta Reinforcement Learning with In-Distribution Online Adaptation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.19529)\n  - Jianhao Wang, Jin Zhang, Haozhe Jiang, Junyu Zhang, Liwei Wang, and Chongjie Zhang. arXiv, 2023.\n- [Diffusion Model is an Effective Planner and Data Synthesizer for Multi-Task Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.18459)\n  - Haoran He, Chenjia Bai, Kang Xu, Zhuoran Yang, Weinan Zhang, Dong Wang, Bin Zhao, and Xuelong Li. arXiv, 2023.\n- [Reinforcement Learning with Human Feedback: Learning Dynamic Choices via Pessimism](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.18438)\n  - Zihao Li, Zhuoran Yang, and Mengdi Wang. arXiv, 2023.\n- [MADiff: Offline Multi-agent Learning with Diffusion Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.17330)\n  - Zhengbang Zhu, Minghuan Liu, Liyuan Mao, Bingyi Kang, Minkai Xu, Yong Yu, Stefano Ermon, and Weinan Zhang. arXiv, 2023.\n- [Provable Offline Reinforcement Learning with Human Feedback](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14816)\n  - Wenhao Zhan, Masatoshi Uehara, Nathan Kallus, Jason D. Lee, and Wen Sun. arXiv, 2023.\n- [Think Before You Act: Decision Transformers with Internal Working Memory](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.16338)\n  - Jikun Kang, Romain Laroche, Xindi Yuan, Adam Trischler, Xue Liu, and Jie Fu. arXiv, 2023.\n- [Distributionally Robust Optimization Efficiently Solves Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.13289)\n  - Yue Wang, Yuting Hu, Jinjun Xiong, and Shaofeng Zou. 
arXiv, 2023.\n- [Offline Primal-Dual Reinforcement Learning for Linear MDPs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.12944)\n  - Germano Gabbianelli, Gergely Neu, Nneka Okolo, and Matteo Papini. arXiv, 2023.\n- [Federated Offline Policy Learning with Heterogeneous Observational Data](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.12407)\n  - Aldo Gael Carranza and Susan Athey. arXiv, 2023.\n- [Offline Reinforcement Learning with Additional Covering Distributions](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.12679)\n  - Chenjie Mao. arXiv, 2023.\n- [Reward-agnostic Fine-tuning: Provable Statistical Benefits of Hybrid Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.09836)\n  - Gen Li, Wenhao Zhan, Jason D. Lee, Yuejie Chi, and Yuxin Chen. arXiv, 2023.\n- [Stackelberg Decision Transformer for Asynchronous Action Coordination in Multi-Agent Systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.07856)\n  - Bin Zhang, Hangyu Mao, Lijuan Li, Zhiwei Xu, Dapeng Li, Rui Zhao, and Guoliang Fan. arXiv, 2023.\n- [Federated Ensemble-Directed Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.03097)\n  - Desik Rengarajan, Nitin Ragothaman, Dileep Kalathil, and Srinivas Shakkottai. arXiv, 2023.\n- [IDQL: Implicit Q-Learning as an Actor-Critic Method with Diffusion Policies](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.10573)\n  - Philippe Hansen-Estruch, Ilya Kostrikov, Michael Janner, Jakub Grudzien Kuba, and Sergey Levine. arXiv, 2023.\n- [Using Offline Data to Speed-up Reinforcement Learning in Procedurally Generated Environments](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.09825)\n  - Alain Andres, Lukas Schäfer, Esther Villar-Rodriguez, Stefano V.Albrecht, Javier Del Ser. 
arXiv, 2023.\n- [Reinforcement Learning from Passive Data via Latent Intentions](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.04782) [[website](https:\u002F\u002Fdibyaghosh.com\u002Ficvf\u002F)]\n  - Dibya Ghosh, Chethan Bhateja, and Sergey Levine. arXiv, 2023.\n- [Uncertainty-driven Trajectory Truncation for Model-based Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.04660)\n  - Junjie Zhang, Jiafei Lyu, Xiaoteng Ma, Jiangpeng Yan, Jun Yang, Le Wan, and Xiu Li. arXiv, 2023.\n- [RAFT: Reward rAnked FineTuning for Generative Foundation Model Alignment](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.06767)\n  - Hanze Dong, Wei Xiong, Deepanshu Goyal, Rui Pan, Shizhe Diao, Jipeng Zhang, Kashun Shum, and Tong Zhang. arXiv, 2023.\n- [Batch Quantum Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.00905)\n  - Maniraman Periyasamy, Marc Hölle, Marco Wiedmann, Daniel D. Scherer, Axel Plinge, and Christopher Mutschler. arXiv, 2023.\n- [Accelerating exploration and representation learning with offline pre-training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.00046)\n  - Bogdan Mazoure, Jake Bruce, Doina Precup, Rob Fergus, and Ankit Anand. arXiv, 2023.\n- [On Context Distribution Shift in Task Representation Learning for Offline Meta RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.00354)\n  - Chenyang Zhao, Zihao Zhou, and Bin Liu. arXiv, 2023.\n- [Optimal Goal-Reaching Reinforcement Learning via Quasimetric Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.01203)\n  - Tongzhou Wang, Antonio Torralba, Phillip Isola, and Amy Zhang. arXiv, 2023.\n- [Learning Excavation of Rigid Objects with Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.16427)\n  - Shiyu Jin, Zhixian Ye, and Liangjun Zhang. 
arXiv, 2023.\n- [Goal-conditioned Offline Reinforcement Learning through State Space Partitioning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.09367)\n  - Mianchu Wang, Yue Jin, and Giovanni Montana. arXiv, 2023.\n- [Merging Decision Transformers: Weight Averaging for Forming Multi-Task Policies](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.07551)\n  - Daniel Lawson and Ahmed H. Qureshi. arXiv, 2023.\n- [Deploying Offline Reinforcement Learning with Human Feedback](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.07046)\n  - Ziniu Li, Ke Xu, Liu Liu, Lanqing Li, Deheng Ye, and Peilin Zhao. arXiv, 2023.\n- [Synthetic Experience Replay](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.06614)\n  - Cong Lu, Philip J. Ball, and Jack Parker-Holder. arXiv, 2023.\n- [ENTROPY: Environment Transformer and Offline Policy Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.03811)\n  - Pengqin Wang, Meixin Zhu, and Shaojie Shen. arXiv, 2023.\n- [Graph Decision Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.03747)\n  - Shengchao Hu, Li Shen, Ya Zhang, and Dacheng Tao. arXiv, 2023.\n- [Selective Uncertainty Propagation in Offline RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.00284)\n  - Sanath Kumar Krishnamurthy, Tanmay Gangwani, Sumeet Katariya, Branislav Kveton, and Anshuka Rangi. arXiv, 2023.\n- [Off-the-Grid MARL: a Framework for Dataset Generation with Baselines for Cooperative Offline Multi-Agent Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.00521)\n  - Claude Formanek, Asad Jeewa, Jonathan Shock, and Arnu Pretorius. arXiv, 2023.\n- [Skill Decision Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.13573)\n  - Shyam Sudhakaran and Sebastian Risi. arXiv, 2023.\n- [Guiding Online Reinforcement Learning with Action-Free Offline Pretraining](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.12876)\n  - Deyao Zhu, Yuhui Wang, Jürgen Schmidhuber, and Mohamed Elhoseiny. 
arXiv, 2023.\n- [SaFormer: A Conditional Sequence Modeling Approach to Offline Safe Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.12203)\n  - Qin Zhang, Linrui Zhang, Haoran Xu, Li Shen, Bowen Wang, Yongzhe Chang, Xueqian Wang, Bo Yuan, and Dacheng Tao. arXiv, 2023.\n- [APAC: Authorized Probability-controlled Actor-Critic For Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.12130)\n  - Jing Zhang, Chi Zhang, Wenjia Wang, and Bing-Yi Jing. arXiv, 2023.\n- [Designing an offline reinforcement learning objective from scratch](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.12842)\n  - Gaon An, Junhyeok Lee, Xingdong Zuo, Norio Kosaka, Kyung-Min Kim, and Hyun Oh Song. arXiv, 2023.\n- [Behaviour Discriminator: A Simple Data Filtering Method to Improve Offline Policy Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.11734)\n  - Qiang Wang, Robert McCarthy, David Cordova Bulens, Kevin McGuinness, Noel E. O'Connor, Francisco Roldan Sanchez, and Stephen J. Redmond. arXiv, 2023.\n- [Learning to View: Decision Transformers for Active Object Detection](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.09544)\n  - Wenhao Ding, Nathalie Majcherczyk, Mohit Deshpande, Xuewei Qi, Ding Zhao, Rajasimman Madhivanan, and Arnie Sen. arXiv, 2023.\n- [Risk Sensitive Dead-end Identification in Safety-Critical Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.05664)\n  - Taylor W. Killian, Sonali Parbhoo, and Marzyeh Ghassemi. arXiv, 2023.\n- [Value Enhancement of Reinforcement Learning via Efficient and Robust Trust Region Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.02220)\n  - Chengchun Shi, Zhengling Qi, Jianing Wang, and Fan Zhou. arXiv, 2023.\n- [Contextual Conservative Q-Learning for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.01298)\n  - Ke Jiang, Jiayu Yao, and Xiaoyang Tan. 
arXiv, 2023.\n- [Offline Policy Optimization in RL with Variance Regularizaton](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.14405)\n  - Riashat Islam, Samarth Sinha, Homanga Bharadhwaj, Samin Yeasar Arnob, Zhuoran Yang, Animesh Garg, Zhaoran Wang, Lihong Li, and Doina Precup. arXiv, 2023.\n- [Transformer in Transformer as Backbone for Deep Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.14538)\n  - Hangyu Mao, Rui Zhao, Hao Chen, Jianye Hao, Yiqun Chen, Dong Li, Junge Zhang, and Zhen Xiao. arXiv, 2023.\n- [SPQR: Controlling Q-ensemble Independence with Spiked Random Model for Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.03137)\n  - Dohyeok Lee, Seungyub Han, Taehyun Cho, and Jungwoo Lee. NeurIPS, 2023.\n- [Revisiting the Minimalist Approach to Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.09836)\n  - Denis Tarasov, Vladislav Kurenkov, Alexander Nikulin, and Sergey Kolesnikov. NeurIPS, 2023.\n- [Constrained Policy Optimization with Explicit Behavior Density for Offline Reinforcement Learning](https:\u002F\u002Fopenreview.net\u002Fpdf?id=dLmDPVv19z)\n  - Jing Zhang, Chi Zhang, Wenjia Wang, and Bingyi Jing. NeurIPS, 2023.\n- [Supported Value Regularization for Offline Reinforcement Learning](https:\u002F\u002Fopenreview.net\u002Fpdf?id=fze7P9oy6l)\n  - Yixiu Mao, Hongchang Zhang, Chen Chen, Yi Xu, and Xiangyang Ji. NeurIPS, 2023.\n- [Conservative State Value Estimation for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.06884)\n  - Liting Chen, Jie Yan, Zhengdao Shao, Lu Wang, Qingwei Lin, Saravan Rajmohan, Thomas Moscibroda, and Dongmei Zhang. NeurIPS, 2023.\n- [Understanding and Addressing the Pitfalls of Bisimulation-based Representations in Offline Reinforcement Learning](https:\u002F\u002Fopenreview.net\u002Fpdf?id=sQyRQjun46)\n  - Hongyu Zang, Xin Li, Leiji Zhang, Yang Liu, Baigui Sun, Riashat Islam, Remi Tachet des Combes, and Romain Laroche. 
NeurIPS, 2023.\n- [Adversarial Model for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.11048)\n  - Mohak Bhardwaj, Tengyang Xie, Byron Boots, Nan Jiang, and Ching-An Cheng. NeurIPS, 2023.\n- [Percentile Criterion Optimization in Offline Reinforcement Learning](https:\u002F\u002Fopenreview.net\u002Fpdf?id=4LSDk5nlVvV)\n  - Cyrus Cousins, Elita Lobo, Marek Petrik, and Yair Zick. NeurIPS, 2023.\n- [Importance Weighted Actor-Critic for Optimal Conservative Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.12714)\n  - Hanlin Zhu, Paria Rashidinejad, and Jiantao Jiao. NeurIPS, 2023.\n- [HIQL: Offline Goal-Conditioned RL with Latent States as Actions](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.11949)\n  - Seohong Park, Dibya Ghosh, Benjamin Eysenbach, and Sergey Levine. NeurIPS, 2023.\n- [Recovering from Out-of-sample States via Inverse Dynamics in Offline Reinforcement Learning](https:\u002F\u002Fopenreview.net\u002Fpdf?id=4gLWjSaw4o)\n  - Ke Jiang, Jia-Yu Yao, and Xiaoyang Tan. NeurIPS, 2023.\n- [Offline RL with Discrete Proxy Representations for Generalizability in POMDPs](https:\u002F\u002Fopenreview.net\u002Fpdf?id=tJN664ZNVG)\n  - Pengjie Gu, Xinyu Cai, Dong Xing, Xinrun Wang, Mengchen Zhao, and Bo An. NeurIPS, 2023.\n- [Offline Multi-Agent Reinforcement Learning with Implicit Global-to-Local Value Regularization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.11620)\n  - Xiangsen Wang, Haoran Xu, Yinan Zheng, and Xianyuan Zhan. NeurIPS, 2023.\n- [Bi-Level Offline Policy Optimization with Limited Exploration](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.06268)\n  - Wenzhuo Zhou. NeurIPS, 2023.\n- [Provably (More) Sample-Efficient Offline RL with Options](https:\u002F\u002Fopenreview.net\u002Fpdf?id=JwNXeBdkeo)\n  - Xiaoyan Hu and Ho-fung Leung. 
NeurIPS, 2023.\n- [Double Pessimism is Provably Efficient for Distributionally Robust Offline Reinforcement Learning: Generic Algorithm and Robust Partial Coverage](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.09659)\n  - Jose Blanchet, Miao Lu, Tong Zhang, and Han Zhong. NeurIPS, 2023.\n- [AlberDICE: Addressing Out-Of-Distribution Joint Actions in Offline Multi-Agent RL via Alternating Stationary Distribution Correction Estimation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.02194)\n  - Daiki E. Matsunaga, Jongmin Lee, Jaeseok Yoon, Stefanos Leonardos, Pieter Abbeel, and Kee-Eung Kim. NeurIPS, 2023.\n- [Budgeting Counterfactual for Offline RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.06328)\n  - Yao Liu, Pratik Chaudhari, and Rasool Fakoor. NeurIPS, 2023.\n- [Efficient Diffusion Policies for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.20081)\n  - Bingyi Kang, Xiao Ma, Chao Du, Tianyu Pang, and Shuicheng Yan. NeurIPS, 2023.\n- [Cal-QL: Calibrated Offline RL Pre-Training for Efficient Online Fine-Tuning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.05479)\n  - Mitsuhiko Nakamoto, Yuexiang Zhai, Anikait Singh, Max Sobol Mark, Yi Ma, Chelsea Finn, Aviral Kumar, and Sergey Levine. NeurIPS, 2023.\n- [Policy Finetuning in Reinforcement Learning via Design of Experiments using Offline Data](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.04354)\n  - Ruiqi Zhang and Andrea Zanette. NeurIPS, 2023.\n- [Offline Minimax Soft-Q-learning Under Realizability and Partial Coverage](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.02392)\n  - Masatoshi Uehara, Nathan Kallus, Jason D. Lee, and Wen Sun. NeurIPS, 2023.\n- [Provably Efficient Offline Reinforcement Learning in Regular Decision Processes](https:\u002F\u002Fopenreview.net\u002Fpdf?id=8bQc7oRnjm)\n  - Roberto Cipollone, Anders Jonsson, Alessandro Ronca, and Mohammad Sadegh Talebi. 
NeurIPS, 2023.\n- [Provably Efficient Offline Goal-Conditioned Reinforcement Learning with General Function Approximation and Single-Policy Concentrability](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.03770)\n  - Hanlin Zhu and Amy Zhang. NeurIPS, 2023.\n- [On Sample-Efficient Offline Reinforcement Learning: Data Diversity, Posterior Sampling and Beyond](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.03301)\n  - Thanh Nguyen-Tang and Raman Arora. NeurIPS, 2023.\n- [Conservative Offline Policy Adaptation in Multi-Agent Games](https:\u002F\u002Fopenreview.net\u002Fpdf?id=C8pvL8Qbfa)\n  - Chengjie Wu, Pingzhong Tang, Jun Yang, Yujing Hu, Tangjie Lv, Changjie Fan, and Chongjie Zhang. NeurIPS, 2023.\n- [Look Beneath the Surface: Exploiting Fundamental Symmetry for Sample-Efficient Offline RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.04220)\n  - Peng Cheng, Xianyuan Zhan, Zhihao Wu, Wenjia Zhang, Shoucheng Song, Han Wang, Youfang Lin, and Li Jiang. NeurIPS, 2023.\n- [Survival Instinct in Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.03286)\n  - Anqi Li, Dipendra Misra, Andrey Kolobov, and Ching-An Cheng. NeurIPS, 2023.\n- [Learning from Visual Observation via Offline Pretrained State-to-Go Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.12860)\n  - Bohan Zhou, Ke Li, Jiechuan Jiang, and Zongqing Lu. NeurIPS, 2023.\n- [Design from Policies: Conservative Test-Time Adaptation for Offline Policy Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.14479)\n  - Jinxin Liu, Hongyin Zhang, Zifeng Zhuang, Yachen Kang, Donglin Wang, and Bin Wang. NeurIPS, 2023.\n- [Learning to Influence Human Behavior with Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.02265)\n  - Joey Hong, Anca Dragan, and Sergey Levine. 
NeurIPS, 2023.\n- [Residual Q-Learning: Offline and Online Policy Customization without Value](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.09526)\n  - Chenran Li, Chen Tang, Haruki Nishimura, Jean Mercat, Masayoshi Tomizuka, Wei Zhan. NeurIPS, 2023.\n- [Train Once, Get a Family: State-Adaptive Balances for Offline-to-Online Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.17966)\n  - Shenzhi Wang, Qisen Yang, Jiawei Gao, Matthieu Gaetan Lin, Hao Chen, Liwei Wu, Ning Jia, Shiji Song, and Gao Huang. NeurIPS, 2023.\n- [Beyond Uniform Sampling: Offline Reinforcement Learning with Imbalanced Datasets](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.04413)\n  - Zhang-Wei Hong, Aviral Kumar, Sathwik Karnik, Abhishek Bhandwaldar, Akash Srivastava, Joni Pajarinen, Romain Laroche, Abhishek Gupta, and Pulkit Agrawal. NeurIPS, 2023.\n- [Understanding, Predicting and Better Resolving Q-Value Divergence in Offline-RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.04411)\n  - Yang Yue, Rui Lu, Bingyi Kang, Shiji Song, and Gao Huang. NeurIPS, 2023.\n- [Corruption-Robust Offline Reinforcement Learning with General Function Approximation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.14550)\n  - Chenlu Ye, Rui Yang, Quanquan Gu, and Tong Zhang. NeurIPS, 2023.\n- [Learning to Modulate pre-trained Models in RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.14884)\n  - Thomas Schmied, Markus Hofmarcher, Fabian Paischer, Razvan Pascanu, and Sepp Hochreiter. NeurIPS, 2023.\n- [Counterfactual Conservative Q Learning for Offline Multi-agent Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.12696)\n  - Jianzhun Shao, Yun Qu, Chen Chen, Hongchang Zhang, and Xiangyang Ji. NeurIPS, 2023.\n- [One Risk to Rule Them All: A Risk-Sensitive Perspective on Model-Based Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.00124)\n  - Marc Rigter, Bruno Lacerda, and Nick Hawes. 
NeurIPS, 2023.\n- [Goal-Conditioned Predictive Coding for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.03406)\n  - Zilai Zeng, Ce Zhang, Shijie Wang, and Chen Sun. NeurIPS, 2023.\n- [Mutual Information Regularized Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.07484)\n  - Xiao Ma, Bingyi Kang, Zhongwen Xu, Min Lin, and Shuicheng Yan. NeurIPS, 2023.\n- [Offline RL With Heteroskedastic Datasets and Support Constraints](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.01052)\n  - Anikait Singh, Aviral Kumar, Quan Vuong, Yevgen Chebotar, and Sergey Levine. NeurIPS, 2023.\n- [Offline Reinforcement Learning with Differential Privacy](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.00810)\n  - Dan Qiao and Yu-Xiang Wang. NeurIPS, 2023.\n- [Accountability in Offline Reinforcement Learning: Explaining Decisions with a Corpus of Examples](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.07747)\n  - Hao Sun, Alihan Hüyük, Daniel Jarrett, and Mihaela van der Schaar. NeurIPS, 2023.\n- [Reining Generalization in Offline Reinforcement Learning via Representation Distinction](https:\u002F\u002Fopenreview.net\u002Fpdf?id=mVywRIDNIl)\n  - Yi Ma, Hongyao Tang, Dong Li, and Zhaopeng Meng. NeurIPS, 2023.\n- [VOCE: Variational Optimization with Conservative Estimation for Offline Safe Reinforcement Learning](https:\u002F\u002Fopenreview.net\u002Fpdf?id=sIU3WujeSl)\n  - Jiayi Guan, Guang Chen, Jiaming Ji, Long Yang, Ao Zhou, Zhijun Li, and Changjun Jiang. NeurIPS, 2023.\n- [SafeDICE: Offline Safe Imitation Learning with Non-Preferred Demonstrations](https:\u002F\u002Fopenreview.net\u002Fpdf?id=toEGuA9Qfn)\n  - Youngsoo Jang, Geon-Hyeong Kim, Jongmin Lee, Sungryull Sohn, Byoungjip Kim, Honglak Lee, and Moontae Lee. NeurIPS, 2023.\n- [Hierarchical Diffusion for Offline Decision Making](https:\u002F\u002Fopenreview.net\u002Fforum?id=55kLa7tH9o)\n  - Wenhao Li, Xiangfeng Wang, Bo Jin, and Hongyuan Zha. 
ICML, 2023.\n- [MAHALO: Unifying Offline Reinforcement Learning and Imitation Learning from Observations](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.17156)\n  - Anqi Li, Byron Boots, and Ching-An Cheng. ICML, 2023.\n- [Safe Offline Reinforcement Learning with Real-Time Budget Constraints](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.00603)\n  - Qian Lin, Bo Tang, Zifan Wu, Chao Yu, Shangqin Mao, Qianlong Xie, Xingxing Wang, and Dong Wang. ICML, 2023.\n- [Near-optimal Conservative Exploration in Reinforcement Learning under Episode-wise Constraints](https:\u002F\u002Fopenreview.net\u002Fforum?id=Wo9JQDb4ms)\n  - Donghao Li, Ruiquan Huang, Cong Shen, and Jing Yang. ICML, 2023.\n- [A Connection between One-Step Regularization and Critic Regularization in Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.12968)\n  - Benjamin Eysenbach, Matthieu Geist, Sergey Levine, and Ruslan Salakhutdinov. ICML, 2023.\n- [Anti-Exploration by Random Network Distillation](https:\u002F\u002Fopenreview.net\u002Fforum?id=NRQ5lC8Dit)\n  - Alexander Nikulin, Vladislav Kurenkov, Denis Tarasov, and Sergey Kolesnikov. ICML, 2023.\n- [Optimal Goal-Reaching Reinforcement Learning via Quasimetric Learning](https:\u002F\u002Fopenreview.net\u002Fforum?id=VLmf5fqWdf)\n  - Tongzhou Wang, Antonio Torralba, Phillip Isola, and Amy Zhang. ICML, 2023.\n- [PASTA: Pessimistic Assortment Optimization](https:\u002F\u002Fopenreview.net\u002Fforum?id=Yzfg7JhPhp)\n  - Juncheng Dong, Weibin Mo, Zhengling Qi, Cong Shi, Ethan X Fang, and Vahid Tarokh. ICML, 2023.\n- [Contrastive Energy Prediction for Exact Energy-Guided Diffusion Sampling in Offline Reinforcement Learning](https:\u002F\u002Fopenreview.net\u002Fforum?id=LucUrr5kUi)\n  - Cheng Lu, Huayu Chen, Jianfei Chen, Hang Su, Chongxuan Li, and Jun Zhu. 
ICML, 2023.\n- [Supported Trust Region Optimization for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.08935)\n  - Yixiu Mao, Hongchang Zhang, Chen Chen, Yi Xu, and Xiangyang Ji. ICML, 2023.\n- [Principled Offline RL in the Presence of Rich Exogenous Information](https:\u002F\u002Fopenreview.net\u002Fforum?id=jTcRlAAO01)\n  - Riashat Islam, Manan Tomar, Alex Lamb, Yonathan Efroni, Hongyu Zang, Aniket Rajiv Didolkar, Dipendra Misra, Xin Li, Harm van Seijen, Remi Tachet des Combes, and John Langford. ICML, 2023.\n- [Efficient Online Reinforcement Learning with Offline Data](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.02948)\n  - Philip J. Ball, Laura Smith, Ilya Kostrikov, and Sergey Levine. ICML, 2023.\n- [Boosting Offline Reinforcement Learning with Action Preference Query](https:\u002F\u002Fopenreview.net\u002Fforum?id=XiGijCSGjx)\n  - Qisen Yang, Shenzhi Wang, Matthieu Gaetan Lin, Shiji Song, and Gao Huang. ICML, 2023.\n- [Model-based Offline Reinforcement Learning with Count-based Conservatism](https:\u002F\u002Fopenreview.net\u002Fforum?id=T5VlejGx7f)\n  - Byeongchan Kim and Min-hwan Oh. ICML, 2023.\n- [Constrained Decision Transformer for Offline Safe Reinforcement Learning](https:\u002F\u002Fopenreview.net\u002Fforum?id=9VKCBHESq0)\n  - Zuxin Liu, Zijian Guo, Yihang Yao, Zhepeng Cen, Wenhao Yu, Tingnan Zhang, and Ding Zhao. ICML, 2023.\n- [Model-Bellman Inconsistency for Model-based Offline Reinforcement Learning](https:\u002F\u002Fopenreview.net\u002Fforum?id=rwLwGPdzDD)\n  - Yihao Sun, Jiaji Zhang, Chengxing Jia, Haoxin Lin, Junyin Ye, and Yang Yu. ICML, 2023.\n- [Provably Efficient Offline Reinforcement Learning with Perturbed Data Sources](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.08364)\n  - Chengshuai Shi, Wei Xiong, Cong Shen, and Jing Yang. 
ICML, 2023.\n- [What is Essential for Unseen Goal Generalization of Offline Goal-conditioned RL?](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.18882)\n  - Rui Yang, Yong Lin, Xiaoteng Ma, Hao Hu, Chongjie Zhang, and Tong Zhang. ICML, 2023.\n- [Policy Regularization with Dataset Constraint for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.06569)\n  - Yuhang Ran, Yi-Chen Li, Fuxiang Zhang, Zongzhang Zhang, and Yang Yu. ICML, 2023.\n- [MetaDiffuser: Diffusion Model as Conditional Planner for Offline Meta-RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.19923)\n  - Fei Ni, Jianye Hao, Yao Mu, Yifu Yuan, Yan Zheng, Bin Wang, and Zhixuan Liang. ICML, 2023.\n- [Distance Weighted Supervised Learning for Offline Interaction Data](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.13774)\n  - Joey Hejna, Jensen Gao, and Dorsa Sadigh. ICML, 2023.\n- [Masked Trajectory Models for Prediction, Representation, and Control](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.02968)\n  - Philipp Wu, Arjun Majumdar, Kevin Stone, Yixin Lin, Igor Mordatch, Pieter Abbeel, and Aravind Rajeswaran. ICML, 2023.\n- [Bayesian Reparameterization of Reward-Conditioned Reinforcement Learning with Energy-based Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.11340)\n  - Wenhao Ding, Tong Che, Ding Zhao, and Marco Pavone. ICML, 2023.\n- [Warm-Start Actor-Critic: From Approximation Error to Sub-optimality Gap](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.11271)\n  - Hang Wang, Sen Lin, and Junshan Zhang. 
ICML, 2023.\n- [Future-conditioned Unsupervised Pretraining for Decision Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.16683)\n  - Zhihui Xie, Zichuan Lin, Deheng Ye, Qiang Fu, Wei Yang, and Shuai Li. ICML, 2023.\n- [PAC-Bayesian Offline Contextual Bandits With Guarantees](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.13132)\n  - Otmane Sakhi, Nicolas Chopin, and Pierre Alquier. ICML, 2023.\n- [Q-learning Decision Transformer: Leveraging Dynamic Programming for Conditional Sequence Modelling in Offline RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.03993)\n  - Taku Yamagata, Ahmed Khalil, and Raul Santos-Rodriguez. ICML, 2023.\n- [Jump-Start Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.02372) [[website](https:\u002F\u002Fjumpstart-rl.github.io\u002F)]\n  - Ikechukwu Uchendu, Ted Xiao, Yao Lu, Banghua Zhu, Mengyuan Yan, Joséphine Simon, Matthew Bennice, Chuyuan Fu, Cong Ma, Jiantao Jiao, Sergey Levine, and Karol Hausman. ICML, 2023.\n- [Learning Temporally Abstract World Models without Online Experimentation](https:\u002F\u002Fopenreview.net\u002Fforum?id=YeTYJz7th5)\n  - Benjamin Freed, Siddarth Venkatraman, Guillaume Adrien Sartoretti, Jeff Schneider, and Howie Choset. ICML, 2023.\n- [A Framework for Adapting Offline Algorithms to Solve Combinatorial Multi-Armed Bandit Problems with Bandit Feedback](https:\u002F\u002Fopenreview.net\u002Fforum?id=fBDP40MrQS)\n  - Guanyu Nie, Yididiya Y Nadew, Yanhui Zhu, Vaneet Aggarwal, and Christopher John Quinn. ICML, 2023.\n- [Revisiting the Linear-Programming Framework for Offline RL with General Function Approximation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.13861)\n  - Asuman Ozdaglar, Sarath Pattathil, Jiawei Zhang, and Kaiqing Zhang. ICML, 2023.\n- [Semi-Supervised Offline Reinforcement Learning with Action-Free Trajectories](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.06518)\n  - Qinqing Zheng, Mikael Henaff, Brandon Amos, and Aditya Grover. 
ICML, 2023.\n- [Actor-Critic Alignment for Offline-to-Online Reinforcement Learning](https:\u002F\u002Fopenreview.net\u002Fforum?id=f6I3ZehFmu)\n  - Zishun Yu and Xinhua Zhang. ICML, 2023.\n- [Leveraging Offline Data in Online Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.04974)\n  - Andrew Wagenmaker and Aldo Pacchiano. ICML, 2023.\n- [Offline Reinforcement Learning with Closed-Form Policy Improvement Operators](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.15956)\n  - Jiachen Li, Edwin Zhang, Ming Yin, Qinxun Bai, Yu-Xiang Wang, and William Yang Wang. ICML, 2023.\n- [Offline Learning in Markov Games with General Function Approximation](https:\u002F\u002Fopenreview.net\u002Fforum?id=LtSMEVi6eB)\n  - Yuheng Zhang, Yu Bai, and Nan Jiang. ICML, 2023.\n- [Offline Meta Reinforcement Learning with In-Distribution Online Adaptation](https:\u002F\u002Fopenreview.net\u002Fforum?id=dkYfm01yQp)\n  - Jianhao Wang, Jin Zhang, Haozhe Jiang, Junyu Zhang, Liwei Wang, and Chongjie Zhang. ICML, 2023.\n- [Scaling Pareto-Efficient Decision Making Via Offline Multi-Objective RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.00567)\n  - Baiting Zhu, Meihua Dang, and Aditya Grover. ICLR, 2023.\n- [Confidence-Conditioned Value Functions for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.04607)\n  - Joey Hong, Aviral Kumar, and Sergey Levine. ICLR, 2023.\n- [Offline Q-Learning on Diverse Multi-Task Data Both Scales And Generalizes](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.15144) [[website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fscaling-offlinerl\u002Fhome)]\n  - Aviral Kumar, Rishabh Agarwal, Xinyang Geng, George Tucker, and Sergey Levine. 
ICLR, 2023.\n- [Is Conditional Generative Modeling all you need for Decision-Making?](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.15657) [[website](https:\u002F\u002Fanuragajay.github.io\u002Fdecision-diffuser\u002F)]\n  - Anurag Ajay, Yilun Du, Abhi Gupta, Joshua Tenenbaum, Tommi Jaakkola, and Pulkit Agrawal. ICLR, 2023.\n- [Offline RL with No OOD Actions: In-Sample Learning via Implicit Value Regularization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.15810)\n  - Haoran Xu, Li Jiang, Jianxiong Li, Zhuoran Yang, Zhaoran Wang, Victor Wai Kin Chan, and Xianyuan Zhan. ICLR, 2023.\n- [Extreme Q-Learning: MaxEnt RL without Entropy](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.02328)\n  - Divyansh Garg, Joey Hejna, Matthieu Geist, and Stefano Ermon. ICLR, 2023.\n- [Dichotomy of Control: Separating What You Can Control from What You Cannot](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.13435)\n  - Mengjiao Yang, Dale Schuurmans, Pieter Abbeel, and Ofir Nachum. ICLR, 2023.\n- [From Play to Policy: Conditional Behavior Generation from Uncurated Robot Data](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.10047)\n  - Zichen Jeff Cui, Yibin Wang, Nur Muhammad Mahi Shafiullah, and Lerrel Pinto. ICLR, 2023.\n- [VIPeR: Provably Efficient Algorithm for Offline RL with Neural Function Approximation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.12780)\n  - Thanh Nguyen-Tang and Raman Arora. ICLR, 2023.\n- [Optimal Conservative Offline RL with General Function Approximation via Augmented Lagrangian](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.00716)\n  - Paria Rashidinejad, Hanlin Zhu, Kunhe Yang, Stuart Russell, and Jiantao Jiao. ICLR, 2023.\n- [The In-Sample Softmax for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.14372)\n  - Chenjun Xiao, Han Wang, Yangchen Pan, Adam White, and Martha White. 
ICLR, 2023.\n- [VIP: Towards Universal Visual Reward and Representation via Value-Implicit Pre-Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.00030) [[website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fvip-rl)] [[code](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fvip)]\n  - Yecheng Jason Ma, Shagun Sodhani, Dinesh Jayaraman, Osbert Bastani, Vikash Kumar, and Amy Zhang. ICLR, 2023.\n- [Does Zero-Shot Reinforcement Learning Exist?](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.14935)\n  - Ahmed Touati, Jérémy Rapin, and Yann Ollivier. ICLR, 2023.\n- [Behavior Prior Representation learning for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.00863)\n  - Hongyu Zang, Xin Li, Jie Yu, Chen Liu, Riashat Islam, Remi Tachet des Combes, and Romain Laroche. ICLR, 2023.\n- [Mind the Gap: Offline Policy Optimization for Imperfect Rewards](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.01667)\n  - Jianxiong Li, Xiao Hu, Haoran Xu, Jingjing Liu, Xianyuan Zhan, Qing-Shan Jia, and Ya-Qin Zhang. ICLR, 2023.\n- [Offline Congestion Games: How Feedback Type Affects Data Coverage Requirement](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.13396)\n  - Haozhe Jiang, Qiwen Cui, Zhihan Xiong, Maryam Fazel, and Simon S. Du. ICLR, 2023.\n- [User-Interactive Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.10629)\n  - Phillip Swazinna, Steffen Udluft, and Thomas Runkler. ICLR, 2023.\n- [Discovering Generalizable Multi-agent Coordination Skills from Multi-task Offline Data](https:\u002F\u002Fopenreview.net\u002Fforum?id=53FyUAdP7d)\n  - Fuxiang Zhang, Chengxing Jia, Yi-Chen Li, Lei Yuan, Yang Yu, and Zongzhang Zhang. ICLR, 2023.\n- [Hybrid RL: Using Both Offline and Online Data Can Make RL Efficient](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.06718) [[code](https:\u002F\u002Fgithub.com\u002Fyudasong\u002FHyQ)]\n  - Yuda Song, Yifei Zhou, Ayush Sekhari, J. 
Andrew Bagnell, Akshay Krishnamurthy, and Wen Sun. ICLR, 2023.\n- [Harnessing Mixed Offline Reinforcement Learning Datasets via Trajectory Weighting](https:\u002F\u002Fopenreview.net\u002Fforum?id=OhUAblg27z)\n  - Zhang-Wei Hong, Pulkit Agrawal, Remi Tachet des Combes, and Romain Laroche. ICLR, 2023.\n- [Efficient Offline Policy Optimization with a Learned Model](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.05980)\n  - Zichen Liu, Siyi Li, Wee Sun Lee, Shuicheng Yan, and Zhongwen Xu. ICLR, 2023.\n- [Diffusion Policies as an Expressive Policy Class for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2208.06193)\n  - Zhendong Wang, Jonathan J Hunt, and Mingyuan Zhou. ICLR, 2023.\n- [When Data Geometry Meets Deep Function: Generalizing Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.11027)\n  - Jianxiong Li, Xianyuan Zhan, Haoran Xu, Xiangyu Zhu, Jingjing Liu, and Ya-Qin Zhang. ICLR, 2023.\n- [In-sample Actor Critic for Offline Reinforcement Learning](https:\u002F\u002Fopenreview.net\u002Fforum?id=dfDv0WU853R)\n  - Hongchang Zhang, Yixiu Mao, Boyuan Wang, Shuncheng He, Yi Xu, and Xiangyang Ji. ICLR, 2023.\n- [Value Memory Graph: A Graph-Structured World Model for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.04384)\n  - Deyao Zhu, Li Erran Li, and Mohamed Elhoseiny. ICLR, 2023.\n- [Conservative Bayesian Model-Based Value Expansion for Offline Policy Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.03802)\n  - Jihwan Jeong, Xiaoyu Wang, Michael Gimelfarb, Hyunwoo Kim, Baher Abdulhai, and Scott Sanner. ICLR, 2023.\n- [Offline Reinforcement Learning via High-Fidelity Generative Behavior Modeling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.14548)\n  - Huayu Chen, Cheng Lu, Chengyang Ying, Hang Su, and Jun Zhu. 
ICLR, 2023.\n- [Offline Reinforcement Learning with Differentiable Function Approximation is Provably Efficient](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.00750)\n  - Ming Yin, Mengdi Wang, and Yu-Xiang Wang. ICLR, 2023.\n- [Nearly Minimax Optimal Offline Reinforcement Learning with Linear Function Approximation: Single-Agent MDP and Markov Game](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.15512)\n  - Wei Xiong, Han Zhong, Chengshuai Shi, Cong Shen, Liwei Wang, and Tong Zhang. ICLR, 2023.\n- [Pessimism in the Face of Confounders: Provably Efficient Offline Reinforcement Learning in Partially Observable Markov Decision Processes](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.13589)\n  - Miao Lu, Yifei Min, Zhaoran Wang, and Zhuoran Yang. ICLR, 2023.\n- [Hyper-Decision Transformer for Efficient Online Policy Adaptation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.08487)\n  - Mengdi Xu, Yuchen Lu, Yikang Shen, Shun Zhang, Ding Zhao, and Chuang Gan. ICLR, 2023.\n- [Efficient Planning in a Compact Latent Action Space](https:\u002F\u002Farxiv.org\u002Fabs\u002F2208.10291)\n  - Zhengyao Jiang, Tianjun Zhang, Michael Janner, Yueying Li, Tim Rocktäschel, Edward Grefenstette, and Yuandong Tian. ICLR, 2023.\n- [Preference Transformer: Modeling Human Preferences using Transformers for RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.00957) [[website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fpreference-transformer)]\n  - Changyeon Kim, Jongjin Park, Jinwoo Shin, Honglak Lee, Pieter Abbeel, and Kimin Lee. ICLR, 2023.\n- [Behavior Proximal Policy Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.11312)\n  - Zifeng Zhuang, Kun Lei, Jinxin Liu, Donglin Wang, and Yilang Guo. ICLR, 2023.\n- [Provably Efficient Neural Offline Reinforcement Learning via Perturbed Rewards](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.12780)\n  - Thanh Nguyen-Tang and Raman Arora. 
ICLR, 2023.\n- [The Provable Benefits of Unsupervised Data Sharing for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.13493)\n  - Hao Hu, Yiqin Yang, Qianchuan Zhao, and Chongjie Zhang. ICLR, 2023.\n- [Decision Transformer under Random Frame Dropping](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.03391)\n  - Kaizhe Hu, Ray Chen Zheng, Yang Gao, and Huazhe Xu. ICLR, 2023.\n- [Policy Expansion for Bridging Offline-to-Online Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.00935)\n  - Haichao Zhang, Wei Xu, and Haonan Yu. ICLR, 2023.\n- [Finetuning Offline World Models in the Real World](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.16029)\n  - Yunhai Feng, Nicklas Hansen, Ziyan Xiong, Chandramouli Rajagopalan, and Xiaolong Wang. CoRL, 2023.\n- [On the Sample Complexity of Vanilla Model-Based Offline Reinforcement Learning with Dependent Samples](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.04268)\n  - Mustafa O. Karabag and Ufuk Topcu. AAAI, 2023.\n- [Adaptive Policy Learning for Offline-to-Online Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.07693)\n  - Han Zheng, Xufang Luo, Pengfei Wei, Xuan Song, Dongsheng Li, and Jing Jiang. AAAI, 2023.\n- [Safe Policy Improvement for POMDPs via Finite-State Controllers](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.04939)\n  - Thiago D. Simão, Marnix Suilen, and Nils Jansen. AAAI, 2023.\n- [Behavior Estimation from Multi-Source Data for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.16078)\n  - Guoxi Zhang and Hisashi Kashima. AAAI, 2023.\n- [On Instance-Dependent Bounds for Offline Reinforcement Learning with Linear Function Approximation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.13208)\n  - Thanh Nguyen-Tang, Ming Yin, Sunil Gupta, Svetha Venkatesh, and Raman Arora. 
AAAI, 2023.\n- [Contrastive Example-Based Control](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.13101)\n  - Kyle Hatch, Benjamin Eysenbach, Rafael Rafailov, Tianhe Yu, Ruslan Salakhutdinov, Sergey Levine, and Chelsea Finn. L4DC, 2023.\n- [Curriculum Offline Reinforcement Learning](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.5555\u002F3545946.3598767)\n  - Yuanying Cai, Chuheng Zhang, Hanye Zhao, Li Zhao, and Jiang Bian. AAMAS, 2023.\n- [Offline Reinforcement Learning with On-Policy Q-Function Regularization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.13824)\n  - Laixi Shi, Robert Dadashi, Yuejie Chi, Pablo Samuel Castro, and Matthieu Geist. ECML, 2023.\n- [Model-based Offline Policy Optimization with Adversarial Network](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.02157)\n  - Junming Yang, Xingguo Chen, Shengyuan Wang, and Bolei Zhang. ECAI, 2023.\n- [Efficient Experience Replay Architecture for Offline Reinforcement Learning](https:\u002F\u002Fwww.emerald.com\u002Finsight\u002Fcontent\u002Fdoi\u002F10.1108\u002FRIA-10-2022-0248\u002Ffull\u002Fhtml)\n  - Longfei Zhang, Yanghe Feng, Rongxiao Wang, Yue Xu, Naifu Xu, Zeyi Liu, and Hang Du. RIA, 2023.\n- [Automatic Trade-off Adaptation in Offline RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.09744)\n  - Phillip Swazinna, Steffen Udluft, and Thomas Runkler. ESANN, 2023.\n- [Offline Robot Reinforcement Learning with Uncertainty-Guided Human Expert Sampling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.08232)\n  - Ashish Kumar and Ilya Kuzovkin. arXiv, 2022.\n- [Latent Variable Representation for Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.08765)\n  - Tongzheng Ren, Chenjun Xiao, Tianjun Zhang, Na Li, Zhaoran Wang, Sujay Sanghavi, Dale Schuurmans, and Bo Dai. arXiv, 2022.\n- [Learning From Good Trajectories in Offline Multi-Agent Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.15612)\n  - Qi Tian, Kun Kuang, Furui Liu, and Baoxiang Wang. 
arXiv, 2022.\n- [State-Aware Proximal Pessimistic Algorithms for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.15065)\n  - Chen Chen, Hongyao Tang, Yi Ma, Chao Wang, Qianli Shen, Dong Li, and Jianye Hao. arXiv, 2022.\n- [Masked Autoencoding for Scalable and Generalizable Decision Making](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.12740)\n  - Fangchen Liu, Hao Liu, Aditya Grover, and Pieter Abbeel. arXiv, 2022.\n- [Improving TD3-BC: Relaxed Policy Constraint for Offline Learning and Stable Online Fine-Tuning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.11802)\n  - Alex Beeson and Giovanni Montana. arXiv, 2022.\n- [Q-Ensemble for Offline RL: Don't Scale the Ensemble, Scale the Batch Size](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.11092)\n  - Alexander Nikulin, Vladislav Kurenkov, Denis Tarasov, Dmitry Akimov, and Sergey Kolesnikov. arXiv, 2022.\n- [Let Offline RL Flow: Training Conservative Agents in the Latent Space of Normalizing Flows](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.11096)\n  - Dmitriy Akimov, Vladislav Kurenkov, Alexander Nikulin, Denis Tarasov, and Sergey Kolesnikov. arXiv, 2022.\n- [Model-based Trajectory Stitching for Improved Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.11603)\n  - Charles A. Hepburn and Giovanni Montana. arXiv, 2022.\n- [Offline Reinforcement Learning with Adaptive Behavior Regularization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.08251)\n  - Yunfan Zhou, Xijun Li, and Qingyu Qu. arXiv, 2022.\n- [Contextual Transformer for Offline Meta Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.08016)\n  - Runji Lin, Ye Li, Xidong Feng, Zhaowei Zhang, Xian Hong Wu Fung, Haifeng Zhang, Jun Wang, Yali Du, and Yaodong Yang. arXiv, 2022.\n- [Wall Street Tree Search: Risk-Aware Planning for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.04583)\n  - Dan Elbaz, Gal Novik, and Oren Salzman. 
arXiv, 2022.\n- [ARMOR: A Model-based Framework for Improving Arbitrary Baseline Policies with Offline Data](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.04538)\n  - Tengyang Xie, Mohak Bhardwaj, Nan Jiang, and Ching-An Cheng. arXiv, 2022.\n- [Contrastive Value Learning: Implicit Models for Simple Offline RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.02100)\n  - Bogdan Mazoure, Benjamin Eysenbach, Ofir Nachum, and Jonathan Tompson. arXiv, 2022.\n- [Optimistic Curiosity Exploration and Conservative Exploitation with Linear Reward Shaping](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.07288)\n  - Hao Sun, Lei Han, Rui Yang, Xiaoteng Ma, Jian Guo, and Bolei Zhou. arXiv, 2022.\n- [Agent-Controller Representations: Principled Offline RL with Rich Exogenous Information](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.00164)\n  - Riashat Islam, Manan Tomar, Alex Lamb, Yonathan Efroni, Hongyu Zang, Aniket Didolkar, Dipendra Misra, Xin Li, Harm van Seijen, Remi Tachet des Combes, and John Langford. arXiv, 2022.\n- [Provable Safe Reinforcement Learning with Binary Feedback](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.14492)\n  - Andrew Bennett, Dipendra Misra, and Nathan Kallus. arXiv, 2022.\n- [Learning on the Job: Self-Rewarding Offline-to-Online Finetuning for Industrial Insertion of Novel Connectors from Vision](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.15206)\n  - Ashvin Nair, Brian Zhu, Gokul Narayanan, Eugen Solowjow, and Sergey Levine. arXiv, 2022.\n- [Implicit Offline Reinforcement Learning via Supervised Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.12272)\n  - Alexandre Piche, Rafael Pardinas, David Vazquez, Igor Mordatch, and Chris Pal. 
arXiv, 2022.\n- [Robust Offline Reinforcement Learning with Gradient Penalty and Constraint Relaxation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.10469)\n  - Chengqian Gao, Ke Xu, Liu Liu, Deheng Ye, Peilin Zhao, and Zhiqiang Xu. arXiv, 2022.\n- [Boosting Offline Reinforcement Learning via Data Rebalancing](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.09241)\n  - Yang Yue, Bingyi Kang, Xiao Ma, Zhongwen Xu, Gao Huang, and Shuicheng Yan. arXiv, 2022.\n- [ConserWeightive Behavioral Cloning for Reliable Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.05158) [[code](https:\u002F\u002Fgithub.com\u002Ftung-nd\u002Fcwbc)]\n  - Tung Nguyen, Qinqing Zheng, and Aditya Grover. arXiv, 2022.\n- [State Advantage Weighting for Offline RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.04251)\n  - Jiafei Lyu, Aicheng Gong, Le Wan, Zongqing Lu, and Xiu Li. arXiv, 2022.\n- [Blessing from Experts: Super Reinforcement Learning in Confounded Environments](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.15448)\n  - Jiayi Wang, Zhengling Qi, and Chengchun Shi. arXiv, 2022.\n- [DCE: Offline Reinforcement Learning With Double Conservative Estimates](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.13132)\n  - Chen Zhao, Kai Xing Huang, and Chun Yuan. arXiv, 2022.\n- [On the Opportunities and Challenges of using Animals Videos in Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.12347)\n  - Vittorio Giammarino. arXiv, 2022.\n- [Offline Reinforcement Learning with Instrumental Variables in Confounded Markov Decision Processes](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.08666)\n  - Zuyue Fu, Zhengling Qi, Zhaoran Wang, Zhuoran Yang, Yanxun Xu, and Michael R. Kosorok. arXiv, 2022.\n- [Exploiting Reward Shifting in Value-Based Deep RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.07288)\n  - Hao Sun, Lei Han, Rui Yang, Xiaoteng Ma, Jian Guo, and Bolei Zhou. 
arXiv, 2022.\n- [Distributionally Robust Offline Reinforcement Learning with Linear Function Approximation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.06620)\n  - Xiaoteng Ma, Zhipeng Liang, Li Xia, Jiheng Zhang, Jose Blanchet, Mingwen Liu, Qianchuan Zhao, and Zhengyuan Zhou. arXiv, 2022.\n- [C^2: Co-design of Robots via Concurrent Networks Coupling Online and Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.06579)\n  - Ci Chen, Pingyu Xiang, Haojian Lu, Yue Wang, and Rong Xiong. arXiv, 2022.\n- [Strategic Decision-Making in the Presence of Information Asymmetry: Provably Efficient RL with Algorithmic Instruments](https:\u002F\u002Farxiv.org\u002Fabs\u002F2208.11040)\n  - Mengxin Yu, Zhuoran Yang, and Jianqing Fan. arXiv, 2022.\n- [Distributionally Robust Model-Based Offline Reinforcement Learning with Near-Optimal Sample Complexity](https:\u002F\u002Farxiv.org\u002Fabs\u002F2208.05767)\n  - Laixi Shi and Yuejie Chi. arXiv, 2022.\n- [AdaCat: Adaptive Categorical Discretization for Autoregressive Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2208.02246)\n  - Qiyang Li, Ajay Jain, and Pieter Abbeel. arXiv, 2022.\n- [Branch Ranking for Efficient Mixed-Integer Programming via Offline Ranking-based Policy Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.13701)\n  - Zeren Huang, Wenhao Chen, Weinan Zhang, Chuhan Shi, Furui Liu, Hui-Ling Zhen, Mingxuan Yuan, Jianye Hao, Yong Yu, and Jun Wang. arXiv, 2022.\n- [Offline Reinforcement Learning at Multiple Frequencies](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.13082) [[webpage](https:\u002F\u002Fsites.google.com\u002Fstanford.edu\u002Fadaptive-nstep-returns\u002F)]\n  - Kaylee Burns, Tianhe Yu, Chelsea Finn, and Karol Hausman. arXiv, 2022.\n- [General Policy Evaluation and Improvement by Learning to Identify Few But Crucial States](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.01566)\n  - Francesco Faccio, Aditya Ramesh, Vincent Herrmann, Jean Harb, and Jürgen Schmidhuber. 
arXiv, 2022.\n- [Behavior Transformers: Cloning k modes with one stone](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.11251)\n  - Nur Muhammad Mahi Shafiullah, Zichen Jeff Cui, Ariuntuya Altanzaya, and Lerrel Pinto. arXiv, 2022.\n- [Contrastive Learning as Goal-Conditioned Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.07568)\n  - Benjamin Eysenbach, Tianjun Zhang, Ruslan Salakhutdinov, and Sergey Levine. arXiv, 2022.\n- [Federated Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.05581)\n  - Doudou Zhou, Yufeng Zhang, Aaron Sonabend-W, Zhaoran Wang, Junwei Lu, and Tianxi Cai. arXiv, 2022.\n- [Provable Benefit of Multitask Representation Learning in Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.05900)\n  - Yuan Cheng, Songtao Feng, Jing Yang, Hong Zhang, and Yingbin Liang. arXiv, 2022.\n- [Provably Efficient Offline Reinforcement Learning with Trajectory-Wise Reward](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.06426)\n  - Tengyu Xu and Yingbin Liang. arXiv, 2022.\n- [Model-Based Reinforcement Learning Is Minimax-Optimal for Offline Zero-Sum Markov Games](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.04044)\n  - Yuling Yan, Gen Li, Yuxin Chen, and Jianqing Fan. arXiv, 2022.\n- [Offline Reinforcement Learning with Causal Structured World Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.01474)\n  - Zheng-Mao Zhu, Xiong-Hui Chen, Hong-Long Tian, Kun Zhang, and Yang Yu. arXiv, 2022.\n- [Incorporating Explicit Uncertainty Estimates into Deep Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.01085)\n  - David Brandfonbrener, Remi Tachet des Combes, and Romain Laroche. arXiv, 2022.\n- [Know Your Boundaries: The Necessity of Explicit Behavioral Cloning in Offline RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.00695)\n  - Wonjoon Goo and Scott Niekum. 
arXiv, 2022.\n- [Byzantine-Robust Online and Offline Distributed Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.00165)\n  - Yiding Chen, Xuezhou Zhang, Kaiqing Zhang, Mengdi Wang, and Xiaojin Zhu. arXiv, 2022.\n- [Model Generation with Provable Coverability for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.00316)\n  - Chengxing Jia, Hao Yin, Chenxiao Gao, Tian Xu, Lei Yuan, Zongzhang Zhang, and Yang Yu. arXiv, 2022.\n- [You Can't Count on Luck: Why Decision Transformers Fail in Stochastic Environments](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.15967)\n  - Keiran Paster, Sheila McIlraith, and Jimmy Ba. arXiv, 2022.\n- [Multi-Game Decision Transformers](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.15241)\n  - Kuang-Huei Lee, Ofir Nachum, Mengjiao Yang, Lisa Lee, Daniel Freeman, Winnie Xu, Sergio Guadarrama, Ian Fischer, Eric Jang, Henryk Michalewski, and Igor Mordatch. arXiv, 2022.\n- [Hierarchical Planning Through Goal-Conditioned Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.11790)\n  - Jinning Li, Chen Tang, Masayoshi Tomizuka, and Wei Zhan. arXiv, 2022.\n- [Distance-Sensitive Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.11027)\n  - Jianxiong Li, Xianyuan Zhan, Haoran Xu, Xiangyu Zhu, Jingjing Liu, and Ya-Qin Zhang. arXiv, 2022.\n- [No More Pesky Hyperparameters: Offline Hyperparameter Tuning for RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.08716)\n  - Han Wang, Archit Sakhadeo, Adam White, James Bell, Vincent Liu, Xutong Zhao, Puer Liu, Tadashi Kozuno, Alona Fyshe, and Martha White. arXiv, 2022.\n- [How to Spend Your Robot Time: Bridging Kickstarting and Offline Reinforcement Learning for Vision-based Robotic Manipulation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.03353)\n  - Alex X. Lee, Coline Devin, Jost Tobias Springenberg, Yuxiang Zhou, Thomas Lampe, Abbas Abdolmaleki, and Konstantinos Bousmalis. 
arXiv, 2022.\n- [Offline Visual Representation Learning for Embodied Navigation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.13226)\n  - Karmesh Yadav, Ram Ramrakhya, Arjun Majumdar, Vincent-Pierre Berges, Sachit Kuhar, Dhruv Batra, Alexei Baevski, and Oleksandr Maksymets. arXiv, 2022.\n- [Towards Flexible Inference in Sequential Decision Problems via Bidirectional Transformers](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.13326)\n  - Micah Carroll, Jessy Lin, Orr Paradise, Raluca Georgescu, Mingfei Sun, David Bignell, Stephanie Milani, Katja Hofmann, Matthew Hausknecht, Anca Dragan, and Sam Devlin. arXiv, 2022.\n- [BATS: Best Action Trajectory Stitching](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.12026)\n  - Ian Char, Viraj Mehta, Adam Villaflor, John M. Dolan, Jeff Schneider. arXiv, 2022.\n- [Settling the Sample Complexity of Model-Based Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.05275)\n  - Gen Li, Laixi Shi, Yuxin Chen, Yuejie Chi, and Yuting Wei. arXiv, 2022.\n- [PAnDR: Fast Adaptation to New Environments from Offline Experiences via Decoupling Policy and Environment Representations](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.02877)\n  - Tong Sang, Hongyao Tang, Yi Ma, Jianye Hao, Yan Zheng, Zhaopeng Meng, Boyan Li, and Zhen Wang. arXiv, 2022.\n- [Offline Reinforcement Learning Under Value and Density-Ratio Realizability: the Power of Gaps](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.13935)\n  - Jinglin Chen and Nan Jiang. arXiv, 2022.\n- [Meta Reinforcement Learning for Adaptive Control: An Offline Approach](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.09661)\n  - Daniel G. McClement, Nathan P. Lawrence, Johan U. Backstrom, Philip D. Loewen, Michael G. Forbes, and R. Bhushan Gopaluni. arXiv, 2022.\n- [The Efficacy of Pessimism in Asynchronous Q-Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.07368)\n  - Yuling Yan, Gen Li, Yuxin Chen, and Jianqing Fan. 
arXiv, 2022.\n- [Reinforcement Learning for Linear Quadratic Control is Vulnerable Under Cost Manipulation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.05774)\n  - Yunhan Huang and Quanyan Zhu. arXiv, 2022.\n- [A Regularized Implicit Policy for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.09673)\n  - Shentao Yang, Zhendong Wang, Huangjie Zheng, Yihao Feng, and Mingyuan Zhou. arXiv, 2022.\n- [Reinforcement Learning in Possibly Nonstationary Environments](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.01707) [[code](https:\u002F\u002Fgithub.com\u002Flimengbinggz\u002FCUSUM-RL)]\n  - Mengbing Li, Chengchun Shi, Zhenke Wu, and Piotr Fryzlewicz. arXiv, 2022.\n- [Statistically Efficient Advantage Learning for Offline Reinforcement Learning in Infinite Horizons](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.13163)\n  - Chengchun Shi, Shikai Luo, Hongtu Zhu, and Rui Song. arXiv, 2022.\n- [VRL3: A Data-Driven Framework for Visual Deep Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.10324)\n  - Che Wang, Xufang Luo, Keith Ross, and Dongsheng Li. arXiv, 2022.\n- [Retrieval-Augmented Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.08417)\n  - Anirudh Goyal, Abram L. Friesen, Andrea Banino, Theophane Weber, Nan Rosemary Ke, Adria Puigdomenech Badia, Arthur Guez, Mehdi Mirza, Ksenia Konyushkova, Michal Valko, Simon Osindero, Timothy Lillicrap, Nicolas Heess, and Charles Blundell. arXiv, 2022.\n- [Online Decision Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.05607)\n  - Qinqing Zheng, Amy Zhang, and Aditya Grover. arXiv, 2022.\n- [Transferred Q-learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.04709)\n  - Elynn Y. Chen, Michael I. Jordan, and Sai Li. arXiv, 2022.\n- [Settling the Communication Complexity for Distributed Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.04862)\n  - Juliusz Krysztof Ziomek, Jun Wang, and Yaodong Yang. 
arXiv, 2022.\n- [Offline Reinforcement Learning with Realizability and Single-policy Concentrability](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.04634)\n  - Wenhao Zhan, Baihe Huang, Audrey Huang, Nan Jiang, and Jason D. Lee. arXiv, 2022.\n- [Rethinking Goal-conditioned Supervised Learning and Its Connection to Offline RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.04478)\n  - Rui Yang, Yiming Lu, Wenzhe Li, Hao Sun, Meng Fang, Yali Du, Xiu Li, Lei Han, and Chongjie Zhang. arXiv, 2022.\n- [Stochastic Gradient Descent with Dependent Data for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.02850)\n  - Jing Dong and Xin T. Tong. arXiv, 2022.\n- [Can Wikipedia Help Offline Reinforcement Learning?](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.12122)\n  - Machel Reid, Yutaro Yamada, and Shixiang Shane Gu. arXiv, 2022.\n- [MOORe: Model-based Offline-to-Online Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.10070)\n  - Yihuan Mao, Chao Wang, Bin Wang, and Chongjie Zhang. arXiv, 2022.\n- [Operator Deep Q-Learning: Zero-Shot Reward Transferring in Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.00236)\n  - Ziyang Tang, Yihao Feng, and Qiang Liu. arXiv, 2022.\n- [Importance of Empirical Sample Complexity Analysis for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.15578)\n  - Samin Yeasar Arnob, Riashat Islam, and Doina Precup. arXiv, 2022.\n- [Single-Shot Pruning for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.15579)\n  - Samin Yeasar Arnob, Riyasat Ohib, Sergey Plis, and Doina Precup. 
arXiv, 2022.\n- [Monte Carlo Augmented Actor-Critic for Sparse Reward Deep Reinforcement Learning from Suboptimal Demonstrations](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.07432) [[website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fmcac-rl)] [[code](https:\u002F\u002Fgithub.com\u002Falbertwilcox\u002Fmcac)]\n  - Albert Wilcox, Ashwin Balakrishna, Jules Dedieu, Wyame Benslimane, Daniel S. Brown, and Ken Goldberg. NeurIPS, 2022.\n- [Data-Driven Offline Decision-Making via Invariant Representation Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.11349)\n  - Han Qi, Yi Su, Aviral Kumar, and Sergey Levine. NeurIPS, 2022.\n- [Bellman Residual Orthogonalization for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.12786)\n  - Andrea Zanette, and Martin J. Wainwright. NeurIPS, 2022.\n- [A Near-Optimal Primal-Dual Method for Off-Policy Learning in CMDP](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.06147)\n  - Fan Chen, Junyu Zhang, and Zaiwen Wen. NeurIPS, 2022.\n- [RORL: Robust Offline Reinforcement Learning via Conservative Smoothing](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.02829)\n  - Rui Yang, Chenjia Bai, Xiaoteng Ma, Zhaoran Wang, Chongjie Zhang, and Lei Han. NeurIPS, 2022.\n- [On Gap-dependent Bounds for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.00177)\n  - Xinqi Wang, Qiwen Cui, and Simon S. Du. NeurIPS, 2022.\n- [Provably Efficient Offline Multi-agent Reinforcement Learning via Strategy-wise Bonus](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.00159)\n  - Qiwen Cui and Simon S. Du. NeurIPS, 2022.\n- [Supported Policy Optimization for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.06239)\n  - Jialong Wu, Haixu Wu, Zihan Qiu, Jianmin Wang, and Mingsheng Long. 
NeurIPS, 2022.\n- [When to Trust Your Simulator: Dynamics-Aware Hybrid Offline-and-Online Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.13464)\n  - Haoyi Niu, Shubham Sharma, Yiwen Qiu, Ming Li, Guyue Zhou, Jianming Hu, and Xianyuan Zhan. NeurIPS, 2022.\n- [Why So Pessimistic? Estimating Uncertainties for Offline RL through Ensembles, and Why Their Independence Matters](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.13703)\n  - Seyed Kamyar Seyed Ghasemipour, Shixiang Shane Gu, and Ofir Nachum. NeurIPS, 2022.\n- [When does return-conditioned supervised learning work for offline reinforcement learning?](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.01079)\n  - David Brandfonbrener, Alberto Bietti, Jacob Buckman, Romain Laroche, and Joan Bruna. NeurIPS, 2022.\n- [Pessimism for Offline Linear Contextual Bandits using ℓp Confidence Sets](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.10671)\n  - Gene Li, Cong Ma, and Nathan Srebro. NeurIPS, 2022.\n- [RAMBO-RL: Robust Adversarial Model-Based Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.12581)\n  - Marc Rigter, Bruno Lacerda, and Nick Hawes. NeurIPS, 2022.\n- [When is Offline Two-Player Zero-Sum Markov Game Solvable?](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.03522)\n  - Qiwen Cui, and Simon S. Du. NeurIPS, 2022.\n- [Robust Reinforcement Learning using Offline Data](https:\u002F\u002Farxiv.org\u002Fabs\u002F2208.05767)\n  - Kishan Panaganti, Zaiyan Xu, Dileep Kalathil, and Mohammad Ghavamzadeh. NeurIPS, 2022.\n- [Bidirectional Learning for Offline Infinite-width Model-based Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.07507)\n  - Can Chen, Yingxue Zhang, Jie Fu, Xue Liu, and Mark Coates. NeurIPS, 2022.\n- [Mildly Conservative Q-Learning for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.04745)\n  - Jiafei Lyu, Xiaoteng Ma, Xiu Li, and Zongqing Lu. 
NeurIPS, 2022.\n- [Bootstrapped Transformer for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.08569)\n  - Kerong Wang, Hanye Zhao, Xufang Luo, Kan Ren, Weinan Zhang, and Dongsheng Li. NeurIPS, 2022.\n- [LobsDICE: Offline Learning from Observation via Stationary Distribution Correction Estimation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.13536)\n  - Geon-Hyeong Kim, Jongmin Lee, Youngsoo Jang, Hongseok Yang, and Kee-Eung Kim. NeurIPS, 2022.\n- [Latent-Variable Advantage-Weighted Policy Optimization for Offline RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.08949)\n  - Xi Chen, Ali Ghadirzadeh, Tianhe Yu, Yuan Gao, Jianhao Wang, Wenzhe Li, Bin Liang, Chelsea Finn, and Chongjie Zhang. NeurIPS, 2022.\n- [Double Check Your State Before Trusting It: Confidence-Aware Bidirectional Offline Model-Based Imagination](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.07989)\n  - Jiafei Lyu, Xiu Li, and Zongqing Lu. NeurIPS, 2022.\n- [Improving Zero-shot Generalization in Offline Reinforcement Learning using Generalized Similarity Functions](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.14629)\n  - Bogdan Mazoure, Ilya Kostrikov, Ofir Nachum, and Jonathan Tompson. NeurIPS, 2022.\n- [Offline Goal-Conditioned Reinforcement Learning via f-Advantage Regression](https:\u002F\u002Fopenreview.net\u002Fforum?id=_h29VprPHD)\n  - Yecheng Jason Ma, Jason Yan, Dinesh Jayaraman, and Osbert Bastani. NeurIPS, 2022.\n- [Dual Generator Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.01471)\n  - Quan Vuong, Aviral Kumar, Sergey Levine, and Yevgen Chebotar. NeurIPS, 2022.\n- [MoCoDA: Model-based Counterfactual Data Augmentation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.11287)\n  - Silviu Pitis, Elliot Creager, Ajay Mandlekar, and Animesh Garg. 
NeurIPS, 2022.\n- [A Policy-Guided Imitation Approach for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.08323) [[code](https:\u002F\u002Fgithub.com\u002Fryanxhr\u002FPOR)]\n  - Haoran Xu, Li Jiang, Jianxiong Li, and Xianyuan Zhan. NeurIPS, 2022.\n- [A Unified Framework for Alternating Offline Model Training and Policy Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.05922)\n  - Shentao Yang, Shujian Zhang, Yihao Feng, and Mingyuan Zhou. NeurIPS, 2022.\n- [Model-Based Offline Reinforcement Learning with Pessimism-Modulated Dynamics Belief](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.06692)\n  - Kaiyang Guo, Yunfeng Shao, and Yanhui Geng. NeurIPS, 2022.\n- [S2P: State-conditioned Image Synthesis for Data Augmentation in Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.15256)\n  - Daesol Cho, Dongseok Shim, and H. Jin Kim. NeurIPS, 2022.\n- [ASPiRe:Adaptive Skill Priors for Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.15205)\n  - Mengda Xu, Manuela Veloso, and Shuran Song. NeurIPS, 2022.\n- [Skills Regularized Task Decomposition for Multi-task Offline Reinforcement Learning](https:\u002F\u002Fopenreview.net\u002Fforum?id=uuaMrewU9Kk)\n  - Minjong Yoo, Sangwoo Cho, and Honguk Woo. NeurIPS, 2022.\n- [Offline Multi-Agent Reinforcement Learning with Knowledge Distillation](https:\u002F\u002Fopenreview.net\u002Fforum?id=yipUuqxveCy)\n  - Wei-Cheng Tseng, Tsun-Hsuan Wang, Yen-Chen Lin, and Phillip Isola. NeurIPS, 2022.\n- [Shadow Knowledge Distillation: Bridging Offline and Online Knowledge Transfer](https:\u002F\u002Fopenreview.net\u002Fforum?id=prQT0gN81oG)\n  - Lujun Li and Zhe Jin. NeurIPS, 2022.\n- [Addressing Optimism Bias in Sequence Modeling for Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.10295)\n  - Adam Villaflor, Zhe Huang, Swapnil Pande, John Dolan, and Jeff Schneider. 
ICML, 2022.\n- [Offline RL Policies Should be Trained to be Adaptive](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.02200)\n  - Dibya Ghosh, Anurag Ajay, Pulkit Agrawal, and Sergey Levine. ICML, 2022.\n- [Adversarially Trained Actor Critic for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.02446)\n  - Ching-An Cheng, Tengyang Xie, Nan Jiang, and Alekh Agarwal. ICML, 2022.\n- [Pessimistic Minimax Value Iteration: Provably Efficient Equilibrium Learning from Offline Datasets](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.07511)\n  - Han Zhong, Wei Xiong, Jiyuan Tan, Liwei Wang, Tong Zhang, Zhaoran Wang, and Zhuoran Yang. ICML, 2022.\n- [How to Leverage Unlabeled Data in Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.01741)\n  - Tianhe Yu, Aviral Kumar, Yevgen Chebotar, Karol Hausman, Chelsea Finn, and Sergey Levine. ICML, 2022.\n- [Plan Better Amid Conservatism: Offline Multi-Agent Reinforcement Learning with Actor Rectification](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.11188)\n  - Ling Pan, Longbo Huang, Tengyu Ma, and Huazhe Xu. ICML, 2022.\n- [Learning Pseudometric-based Action Representations for Offline Reinforcement Learning](https:\u002F\u002Fproceedings.mlr.press\u002Fv162\u002Fgu22b.html)\n  - Pengjie Gu, Mengchen Zhao, Chen Chen, Dong Li, Jianye Hao, and Bo An. ICML, 2022.\n- [Offline Meta-Reinforcement Learning with Online Self-Supervision](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.03974)\n  - Vitchyr H. Pong, Ashvin Nair, Laura Smith, Catherine Huang, and Sergey Levine. ICML, 2022.\n- [Versatile Offline Imitation from Observations and Examples via Regularized State-Occupancy Matching](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.02433)\n  - Yecheng Jason Ma, Andrew Shen, Dinesh Jayaraman, and Osbert Bastani. ICML, 2022.\n- [Constrained Offline Policy Optimization](https:\u002F\u002Fproceedings.mlr.press\u002Fv162\u002Fpolosky22a.html)\n  - Nicholas Polosky, Bruno C. 
Da Silva, Madalina Fiterau, and Jithin Jagannath. ICML, 2022.\n- [Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations](https:\u002F\u002Fproceedings.mlr.press\u002Fv162\u002Fxu22l.html)\n  - Haoran Xu, Xianyuan Zhan, Honglei Yin, and Huiling Qin. ICML, 2022.\n- [Provably Efficient Offline Reinforcement Learning for Partially Observable Markov Decision Processes](https:\u002F\u002Fproceedings.mlr.press\u002Fv162\u002Fguo22a.html)\n  - Hongyi Guo, Qi Cai, Yufeng Zhang, Zhuoran Yang, and Zhaoran Wang. ICML, 2022.\n- [Pessimistic Q-Learning for Offline Reinforcement Learning: Towards Optimal Sample Complexity](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.13890)\n  - Laixi Shi, Gen Li, Yuting Wei, Yuxin Chen, and Yuejie Chi. ICML, 2022.\n- [Efficient Reinforcement Learning in Block MDPs: A Model-free Representation Learning Approach](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.00063)\n  - Xuezhou Zhang, Yuda Song, Masatoshi Uehara, Mengdi Wang, Alekh Agarwal, and Wen Sun. ICML, 2022.\n- [Prompting Decision Transformer for Few-Shot Policy Generalization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.13499)\n  - Mengdi Xu, Yikang Shen, Shun Zhang, Yuchen Lu, Ding Zhao, Joshua B. Tenenbaum, and Chuang Gan. ICML, 2022.\n- [Regularizing a Model-based Policy Stationary Distribution to Stabilize Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.07166)\n  - Shentao Yang, Yihao Feng, Shujian Zhang, and Mingyuan Zhou. ICML, 2022.\n- [On the Role of Discount Factor in Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.03383)\n  - Hao Hu, Yiqin Yang, Qianchuan Zhao, and Chongjie Zhang. ICML, 2022.\n- [Koopman Q-learning: Offline Reinforcement Learning via Symmetries of Dynamics](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.01365)\n  - Matthias Weissenbacher, Samarth Sinha, Animesh Garg, and Yoshinobu Kawahara. 
ICML, 2022.\n- [Representation Learning for Online and Offline RL in Low-rank MDPs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.04652) [[video](https:\u002F\u002Fm.youtube.com\u002Fwatch?v=EynREeip-y8s)]\n  - Masatoshi Uehara, Xuezhou Zhang, and Wen Sun. ICLR, 2022.\n- [Pessimistic Model-based Offline Reinforcement Learning under Partial Coverage](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.06226) [[video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=aPce6Y-NqpQs)]\n  - Masatoshi Uehara and Wen Sun. ICLR, 2022.\n- [Revisiting Design Choices in Model-Based Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.04135)\n  - Cong Lu, Philip J. Ball, Jack Parker-Holder, Michael A. Osborne, and Stephen J. Roberts. ICLR, 2022.\n- [DR3: Value-Based Deep Reinforcement Learning Requires Explicit Regularization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.04716)\n  - Aviral Kumar, Rishabh Agarwal, Tengyu Ma, Aaron Courville, George Tucker, and Sergey Levine. ICLR, 2022.\n- [COptiDICE: Offline Constrained Reinforcement Learning via Stationary Distribution Correction Estimation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.08957)\n  - Jongmin Lee, Cosmin Paduraru, Daniel J. Mankowitz, Nicolas Heess, Doina Precup, Kee-Eung Kim, and Arthur Guez. ICLR, 2022.\n- [POETREE: Interpretable Policy Learning with Adaptive Decision Trees](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.08057)\n  - Alizée Pace, Alex J. Chan, and Mihaela van der Schaar. ICLR, 2022.\n- [Planning in Stochastic Environments with a Learned Model](https:\u002F\u002Fopenreview.net\u002Fforum?id=X6D9bAHhBQ1)\n  - Ioannis Antonoglou, Julian Schrittwieser, Sherjil Ozair, Thomas K Hubert, and David Silver. ICLR, 2022.\n- [Offline Reinforcement Learning with Value-based Episodic Memory](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.09796)\n  - Xiaoteng Ma, Yiqin Yang, Hao Hu, Qihan Liu, Jun Yang, Chongjie Zhang, Qianchuan Zhao, and Bin Liang. 
ICLR, 2022.\n- [When Should We Prefer Offline Reinforcement Learning Over Behavioral Cloning?](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.05618)\n  - Aviral Kumar, Joey Hong, Anikait Singh, and Sergey Levine. ICLR, 2022.\n- [Learning Value Functions from Undirected State-only Experience](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.12458) [[website](https:\u002F\u002Fmatthewchang.github.io\u002Flatent_action_qlearning_site\u002F)] [[code](https:\u002F\u002Fgithub.com\u002Farjung128\u002Flaq)]\n  - Matthew Chang, Arjun Gupta, and Saurabh Gupta. ICLR, 2022.\n- [Rethinking Goal-Conditioned Supervised Learning and Its Connection to Offline RL](https:\u002F\u002Fopenreview.net\u002Fforum?id=KJztlfGPdwW)\n  - Rui Yang, Yiming Lu, Wenzhe Li, Hao Sun, Meng Fang, Yali Du, Xiu Li, Lei Han, and Chongjie Zhang. ICLR, 2022.\n- [Offline Reinforcement Learning with Implicit Q-Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.06169)\n  - Ilya Kostrikov, Ashvin Nair, and Sergey Levine. ICLR, 2022.\n- [RvS: What is Essential for Offline RL via Supervised Learning?](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.10751)\n  - Scott Emmons, Benjamin Eysenbach, Ilya Kostrikov, and Sergey Levine. ICLR, 2022.\n- [Pareto Policy Pool for Model-based Offline Reinforcement Learning](https:\u002F\u002Fopenreview.net\u002Fforum?id=OqcZu8JIIzS)\n  - Yijun Yang, Jing Jiang, Tianyi Zhou, Jie Ma, and Yuhui Shi. ICLR, 2022.\n- [CrowdPlay: Crowdsourcing Human Demonstrations for Offline Learning](https:\u002F\u002Fopenreview.net\u002Fforum?id=qyTBxTztIpQ)\n  - Matthias Gerstgrasser, Rakshit Trivedi, and David C. Parkes. ICLR, 2022.\n- [COPA: Certifying Robust Policies for Offline Reinforcement Learning against Poisoning Attacks](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.08398)\n  - Fan Wu, Linyi Li, Chejian Xu, Huan Zhang, Bhavya Kailkhura, Krishnaram Kenthapadi, Ding Zhao, and Bo Li. 
ICLR, 2022.\n- [DARA: Dynamics-Aware Reward Augmentation in Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.06662)\n  - Jinxin Liu, Hongyin Zhang, and Donglin Wang. ICLR, 2022.\n- [Near-optimal Offline Reinforcement Learning with Linear Representation: Leveraging Variance Information with Pessimism](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.05804)\n  - Ming Yin, Yaqi Duan, Mengdi Wang, and Yu-Xiang Wang. ICLR, 2022.\n- [Pessimistic Bootstrapping for Uncertainty-Driven Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.11566)\n  - Chenjia Bai, Lingxiao Wang, Zhuoran Yang, Zhihong Deng, Animesh Garg, Peng Liu, and Zhaoran Wang. ICLR, 2022.\n- [Offline Neural Contextual Bandits: Pessimism, Optimization and Generalization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.13807)\n  - Thanh Nguyen-Tang, Sunil Gupta, A.Tuan Nguyen, and Svetha Venkatesh. ICLR, 2022.\n- [Generalized Decision Transformer for Offline Hindsight Information Matching](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.10364)  [[website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fgeneralizeddt)]\n  - Hiroki Furuta, Yutaka Matsuo, and Shixiang Shane Gu. ICLR, 2022.\n- [Model-Based Offline Meta-Reinforcement Learning with Regularization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.02929)\n  - Sen Lin, Jialin Wan, Tengyu Xu, Yingbin Liang, and Junshan Zhang. ICLR, 2022.\n- [AW-Opt: Learning Robotic Skills with Imitation and Reinforcement at Scale](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.05424) [[website](https:\u002F\u002Fawopt.github.io\u002F)]\n  - Yao Lu, Karol Hausman, Yevgen Chebotar, Mengyuan Yan, Eric Jang, Alexander Herzog, Ted Xiao, Alex Irpan, Mohi Khansari, Dmitry Kalashnikov, and Sergey Levine. CoRL, 2022.\n- [Dealing with the Unknown: Pessimistic Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.05440)\n  - Jinning Li, Chen Tang, Masayoshi Tomizuka, and Wei Zhan. 
CoRL, 2022.\n- [You Only Evaluate Once: a Simple Baseline Algorithm for Offline RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.02304)\n  - Wonjoon Goo and Scott Niekum. CoRL, 2022.\n- [S4RL: Surprisingly Simple Self-Supervision for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.06326)\n  - Samarth Sinha and Animesh Garg. CoRL, 2022.\n- [A Workflow for Offline Model-Free Robotic Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.10813) [[website](https:\u002F\u002Fsites.google.com\u002Fview\u002Foffline-rl-workflow)]\n  - Aviral Kumar, Anikait Singh, Stephen Tian, Chelsea Finn, and Sergey Levine. CoRL, 2022.\n- [Beyond Pick-and-Place: Tackling Robotic Stacking of Diverse Shapes](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.06192)  [[blog](https:\u002F\u002Fdeepmind.com\u002Fblog\u002Farticle\u002Fstacking-our-way-to-more-general-robots)] [[video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=BxOKPEtMuZw)] [[code](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Frgb_stacking)]\n  - Alex X. Lee, Coline Devin, Yuxiang Zhou, Thomas Lampe, Konstantinos Bousmalis, Jost Tobias Springenberg, Arunkumar Byravan, Abbas Abdolmaleki, Nimrod Gileadi, David Khosid, Claudio Fantacci, Jose Enrique Chen, Akhil Raju, Rae Jeong, Michael Neunert, Antoine Laurens, Stefano Saliceti, Federico Casarini, Martin Riedmiller, Raia Hadsell, and Francesco Nori. CoRL, 2022.\n- [Finetuning from Offline Reinforcement Learning: Challenges, Trade-offs and Practical Solutions](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.17396)\n  - Yicheng Luo, Jackie Kay, Edward Grefenstette, and Marc Peter Deisenroth. RLDM, 2022.\n- [Offline Reinforcement Learning with Representations for Actions](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fabs\u002Fpii\u002FS0020025522009033?via%3Dihub)\n  - Xingzhou Lou, Qiyue Yin, Junge Zhang, Chao Yu, Zhaofeng He, Nengjie Cheng, and Kaiqi Huang. 
Information Sciences, 2022.\n- [Towards Off-Policy Learning for Ranking Policies with Logged Feedback](https:\u002F\u002Fwww.aaai.org\u002FAAAI22Papers\u002FAAAI-8695.XiaoT.pdf)\n  - Teng Xiao and Suhang Wang. AAAI, 2022.\n- [Safe Offline Reinforcement Learning Through Hierarchical Policies](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-031-05936-0_30)\n  - Shaofan Liu and Shiliang Sun. PAKDD, 2022.\n- [TD3 with Reverse KL Regularizer for Offline Reinforcement Learning from Mixed Datasets](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.02125)\n  - Yuanying Cai, Chuheng Zhang, Li Zhao, Wei Shen, Xuyun Zhang, Lei Song, Jiang Bian, Tao Qin, and Tieyan Liu. ICDM, 2022.\n- [Sample Complexity of Offline Reinforcement Learning with Deep ReLU Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.06671)\n  - Thanh Nguyen-Tang, Sunil Gupta, Hung Tran-The, and Svetha Venkatesh. arXiv, 2021.\n- [Model Selection in Batch Policy Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.12320)\n  - Jonathan N. Lee, George Tucker, Ofir Nachum, and Bo Dai. arXiv, 2021.\n- [Learning Contraction Policies from Offline Data](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.05911)\n  - Navid Rezazadeh, Maxwell Kolarich, Solmaz S. Kia, and Negar Mehr. arXiv, 2021.\n- [CoMPS: Continual Meta Policy Search](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.04467)\n  - Glen Berseth, Zhiwei Zhang, Grace Zhang, Chelsea Finn, Sergey Levine. arXiv, 2021.\n- [MESA: Offline Meta-RL for Safe Adaptation and Fault Tolerance](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.03575)\n  - Michael Luo, Ashwin Balakrishna, Brijen Thananjeyan, Suraj Nair, Julian Ibarz, Jie Tan, Chelsea Finn, Ion Stoica, and Ken Goldberg. 
arXiv, 2021.\n- [Offline Pre-trained Multi-Agent Decision Transformer: One Big Sequence Model Conquers All StarCraftII Tasks](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.02845)\n  - Linghui Meng, Muning Wen, Yaodong Yang, Chenyang Le, Xiyun Li, Weinan Zhang, Ying Wen, Haifeng Zhang, Jun Wang, and Bo Xu. arXiv, 2021.\n- [Policy Gradient and Actor-Critic Learning in Continuous Time and Space: Theory and Algorithms](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.11232)\n  - Yanwei Jia and Xun Yu Zhou. arXiv, 2021.\n- [Offline Reinforcement Learning: Fundamental Barriers for Value Function Approximation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.10919) [[video](https:\u002F\u002Fyoutu.be\u002FQS2xVHgBg-k)]\n  - Dylan J. Foster, Akshay Krishnamurthy, David Simchi-Levi, and Yunzong Xu. arXiv, 2021.\n- [UMBRELLA: Uncertainty-Aware Model-Based Offline Reinforcement Learning Leveraging Planning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.11097)\n  - Christopher Diehl, Timo Sievernich, Martin Krüger, Frank Hoffmann, and Torsten Bertran. arXiv, 2021.\n- [Exploiting Action Impact Regularity and Partially Known Models for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.08066)\n  - Vincent Liu, James Wright, and Martha White. arXiv, 2021.\n- [Batch Reinforcement Learning from Crowds](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.04279)\n  - Guoxi Zhang and Hisashi Kashima. arXiv, 2021.\n- [SCORE: Spurious COrrelation REduction for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.12468)\n  - Zhihong Deng, Zuyue Fu, Lingxiao Wang, Zhuoran Yang, Chenjia Bai, Zhaoran Wang, and Jing Jiang. arXiv, 2021.\n- [Safely Bridging Offline and Online Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.13060)\n  - Wanqiao Xu, Kan Xu, Hamsa Bastani, and Osbert Bastani. 
arXiv, 2021.\n- [Efficient Robotic Manipulation Through Offline-to-Online Reinforcement Learning and Goal-Aware State Information](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.10905)\n  - Jin Li, Xianyuan Zhan, Zixu Xiao, and Guyue Zhou. arXiv, 2021.\n- [Value Penalized Q-Learning for Recommender Systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.07923)\n  - Chengqian Gao, Ke Xu, and Peilin Zhao. arXiv, 2021.\n- [Offline Reinforcement Learning with Soft Behavior Regularization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.07395)\n  - Haoran Xu, Xianyuan Zhan, Jianxiong Li, and Honglei Yin. arXiv, 2021.\n- [Planning from Pixels in Environments with Combinatorially Hard Search Spaces](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.06149)\n  - Marco Bagatella, Mirek Olšák, Michal Rolínek, and Georg Martius. arXiv, 2021.\n- [StARformer: Transformer with State-Action-Reward Representations](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.06206)\n  - Jinghuan Shang and Michael S. Ryoo. arXiv, 2021.\n- [Offline RL With Resource Constrained Online Deployment](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.03165) [[code](https:\u002F\u002Fgithub.com\u002FJayanthRR\u002FRC-OfflineRL)]\n  - Jayanth Reddy Regatti, Aniket Anand Deshmukh, Frank Cheng, Young Hun Jung, Abhishek Gupta, and Urun Dogan. arXiv, 2021.\n- [Lifelong Robotic Reinforcement Learning by Retaining Experiences](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.09180) [[website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fretain-experience\u002F)]\n  - Annie Xie and Chelsea Finn. arXiv, 2021.\n- [Dual Behavior Regularized Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.09037)\n  - Chapman Siu, Jason Traish, and Richard Yi Da Xu. 
arXiv, 2021.\n- [DCUR: Data Curriculum for Teaching via Samples with Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.07380) [[website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fteach-curr\u002Fhome)] [[code](https:\u002F\u002Fgithub.com\u002FDanielTakeshi\u002FDCUR)]\n  - Daniel Seita, Abhinav Gopal, Zhao Mandi, and John Canny. arXiv, 2021.\n- [DROMO: Distributionally Robust Offline Model-based Policy Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.07275)\n  - Ruizhen Liu, Dazhi Zhong, and Zhicong Chen. arXiv, 2021.\n- [Implicit Behavioral Cloning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.00137)\n  - Pete Florence, Corey Lynch, Andy Zeng, Oscar Ramirez, Ayzaan Wahid, Laura Downs, Adrian Wong, Johnny Lee, Igor Mordatch, and Jonathan Tompson. arXiv, 2021.\n- [Reducing Conservativeness Oriented Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.00098)\n  - Hongchang Zhang, Jianzhun Shao, Yuhang Jiang, Shuncheng He, and Xiangyang Ji. arXiv, 2021.\n- [Policy Gradients Incorporating the Future](https:\u002F\u002Farxiv.org\u002Fabs\u002F2108.02096)\n  - David Venuto, Elaine Lau, Doina Precup, and Ofir Nachum. arXiv, 2021.\n- [Offline Decentralized Multi-Agent Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2108.01832)\n  - Jiechuan Jiang and Zongqing Lu. arXiv, 2021.\n- [OPAL: Offline Preference-Based Apprenticeship Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.09251) [[website](https:\u002F\u002Fsites.google.com\u002Fview\u002Foffline-prefs)]\n  - Daniel Shin and Daniel S. Brown. arXiv, 2021.\n- [Constraints Penalized Q-Learning for Safe Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.09003)\n  - Haoran Xu, Xianyuan Zhan, and Xiangyu Zhu. arXiv, 2021.\n- [Where is the Grass Greener? 
Revisiting Generalized Policy Iteration for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.01407)\n  - Lionel Blondé and Alexandros Kalousis. arXiv, 2021.\n- [The Least Restriction for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.01757)\n  - Zizhou Su. arXiv, 2021.\n- [Offline-to-Online Reinforcement Learning via Balanced Replay and Pessimistic Q-Ensemble](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.00591)\n  - Seunghyun Lee, Younggyo Seo, Kimin Lee, Pieter Abbeel, and Jinwoo Shin. arXiv, 2021.\n- [Causal Reinforcement Learning using Observational and Interventional Data](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.14421)\n  - Maxime Gasse, Damien Grasset, Guillaume Gaudron, and Pierre-Yves Oudeyer. arXiv, 2021.\n- [On the Sample Complexity of Batch Reinforcement Learning with Policy-Induced Data](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.09973)\n  - Chenjun Xiao, Ilbin Lee, Bo Dai, Dale Schuurmans, and Csaba Szepesvari. arXiv, 2021.\n- [Behavioral Priors and Dynamics Models: Improving Performance and Domain Transfer in Offline RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.09119) [[website](https:\u002F\u002Fsites.google.com\u002Fberkeley.edu\u002Fmabe)]\n  - Catherine Cang, Aravind Rajeswaran, Pieter Abbeel, and Michael Laskin. arXiv, 2021.\n- [On Multi-objective Policy Optimization as a Tool for Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.08199)\n  - Abbas Abdolmaleki, Sandy H. Huang, Giulia Vezzani, Bobak Shahriari, Jost Tobias Springenberg, Shruti Mishra, Dhruva TB, Arunkumar Byravan, Konstantinos Bousmalis, Andras Gyorgy, Csaba Szepesvari, Raia Hadsell, Nicolas Heess, and Martin Riedmiller. arXiv, 2021.\n- [Offline Reinforcement Learning as Anti-Exploration](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.06431)\n  - Shideh Rezaeifar, Robert Dadashi, Nino Vieillard, Léonard Hussenot, Olivier Bachem, Olivier Pietquin, and Matthieu Geist. 
arXiv, 2021.\n- [Corruption-Robust Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.06630)\n  - Xuezhou Zhang, Yiding Chen, Jerry Zhu, and Wen Sun. arXiv, 2021.\n- [Offline Inverse Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.05068)\n  - Firas Jarboui and Vianney Perchet. arXiv, 2021.\n- [Heuristic-Guided Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.02757)\n  - Ching-An Cheng, Andrey Kolobov, and Adith Swaminathan. arXiv, 2021.\n- [Reinforcement Learning as One Big Sequence Modeling Problem](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.02039)\n  - Michael Janner, Qiyang Li, and Sergey Levine. arXiv, 2021.\n- [Decision Transformer: Reinforcement Learning via Sequence Modeling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.01345)\n  - Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, and Igor Mordatch. arXiv, 2021.\n- [Model-Based Offline Planning with Trajectory Pruning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.07351)\n  - Xianyuan Zhan, Xiangyu Zhu, and Haoran Xu. arXiv, 2021.\n- [InferNet for Delayed Reinforcement Tasks: Addressing the Temporal Credit Assignment Problem](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.00568)\n  - Markel Sanz Ausin, Hamoon Azizsoltani, Song Ju, Yeo Jin Kim, and Min Chi. arXiv, 2021.\n- [Infinite-Horizon Offline Reinforcement Learning with Linear Function Approximation: Curse of Dimensionality and Algorithm](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.09847) [[video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=uOIvo1wQ_RQ)]\n  - Lin Chen, Bruno Scherrer, and Peter L. Bartlett. 
arXiv, 2021.\n- [MT-Opt: Continuous Multi-Task Robotic Reinforcement Learning at Scale](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.08212) [[website](https:\u002F\u002Fkarolhausman.github.io\u002Fmt-opt\u002F)]\n  - Dmitry Kalashnikov, Jacob Varley, Yevgen Chebotar, Benjamin Swanson, Rico Jonschkowski, Chelsea Finn, Sergey Levine, and Karol Hausman. arXiv, 2021.\n- [Distributional Offline Continuous-Time Reinforcement Learning with Neural Physics-Informed PDEs (SciPhy RL for DOCTR-L)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.01040)\n  - Igor Halperin. arXiv, 2021.\n- [Regularized Behavior Value Estimation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.09575)\n  - Caglar Gulcehre, Sergio Gómez Colmenarejo, Ziyu Wang, Jakub Sygnowski, Thomas Paine, Konrad Zolna, Yutian Chen, Matthew Hoffman, Razvan Pascanu, and Nando de Freitas. arXiv, 2021.\n- [Improved Context-Based Offline Meta-RL with Attention and Contrastive Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.10774)\n  - Lanqing Li, Yuanhao Huang, and Dijun Luo. arXiv, 2021.\n- [Instrumental Variable Value Iteration for Causal Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.09907)\n  - Luofeng Liao, Zuyue Fu, Zhuoran Yang, Mladen Kolar, and Zhaoran Wang. arXiv, 2021.\n- [GELATO: Geometrically Enriched Latent Model for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.11327)\n  - Guy Tennenholtz, Nir Baram, and Shie Mannor. arXiv, 2021.\n- [MUSBO: Model-based Uncertainty Regularized and Sample Efficient Batch Optimization for Deployment Constrained Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.11448)\n  - DiJia Su, Jason D. Lee, John M. Mulvey, and H. Vincent Poor. arXiv, 2021.\n- [Continuous Doubly Constrained Batch Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.09225)\n  - Rasool Fakoor, Jonas Mueller, Pratik Chaudhari, and Alexander J. Smola. 
arXiv, 2021.\n- [Q-Value Weighted Regression: Reinforcement Learning with Limited Data](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.06782)\n  - Piotr Kozakowski, Łukasz Kaiser, Henryk Michalewski, Afroz Mohiuddin, and Katarzyna Kańska. arXiv, 2021.\n- [Finite Sample Analysis of Minimax Offline Reinforcement Learning: Completeness, Fast Rates and First-Order Efficiency](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.02981)\n  - Masatoshi Uehara, Masaaki Imaizumi, Nan Jiang, Nathan Kallus, Wen Sun, and Tengyang Xie. arXiv, 2021.\n- [Fast Rates for the Regret of Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.00479) [[video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=eGZ-2JU9zKE)]\n  - Yichun Hu, Nathan Kallus, and Masatoshi Uehara. arXiv, 2021.\n- [Safe Policy Learning through Extrapolation: Application to Pre-trial Risk Assessment](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.11679) [[video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=Gd2-MxJQTKA)]\n  - Eli Ben-Michael, D. James Greiner, Kosuke Imai, and Zhichao Jiang. arXiv, 2021.\n- [Weighted Model Estimation for Offline Model-based Reinforcement Learning](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2021\u002Fhash\u002F949694a5059302e7283073b502f094d7-Abstract.html)\n  - Toru Hishinuma and Kei Senda. NeurIPS, 2021.\n- [A Minimalist Approach to Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.06860)\n  - Scott Fujimoto and Shixiang Shane Gu. NeurIPS, 2021.\n- [Conservative Offline Distributional Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.06106)\n  - Yecheng Jason Ma, Dinesh Jayaraman, and Osbert Bastani. NeurIPS, 2021.\n- [Pessimism Meets Invariance: Provably Efficient Offline Mean-Field Multi-Agent RL](https:\u002F\u002Fopenreview.net\u002Fforum?id=Ww1e07fy9fC)\n  - Minshuo Chen, Yan Li, Ethan Wang, Zhuoran Yang, Zhaoran Wang, and Tuo Zhao. 
NeurIPS, 2021.\n- [Believe What You See: Implicit Constraint Approach for Offline Multi-Agent Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.03400)\n  - Yiqin Yang, Xiaoteng Ma, Chenghao Li, Zewu Zheng, Qiyuan Zhang, Gao Huang, Jun Yang, and Qianchuan Zhao. NeurIPS, 2021.\n- [Provable Benefits of Actor-Critic Methods for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2108.08812)\n  - Andrea Zanette, Martin J. Wainwright, and Emma Brunskill. NeurIPS, 2021.\n- [Multi-Objective SPIBB: Seldonian Offline Policy Improvement with Safety Constraints in Finite MDPs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.00099)\n  - Harsh Satija, Philip S. Thomas, Joelle Pineau, and Romain Laroche. NeurIPS, 2021.\n- [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.02039)\n  - Michael Janner, Qiyang Li, and Sergey Levine. NeurIPS, 2021.\n- [Bridging Offline Reinforcement Learning and Imitation Learning: A Tale of Pessimism](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.12021) [[video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=T1Am0bGzH4A)]\n  - Paria Rashidinejad, Banghua Zhu, Cong Ma, Jiantao Jiao, and Stuart Russell. NeurIPS, 2021.\n- [Offline Reinforcement Learning with Reverse Model-based Imagination](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.00188)\n  - Jianhao Wang, Wenzhe Li, Haozhe Jiang, Guangxiang Zhu, Siyuan Li, and Chongjie Zhang. NeurIPS, 2021.\n- [Offline Meta Reinforcement Learning -- Identifiability Challenges and Effective Data Collection Strategies](https:\u002F\u002Fopenreview.net\u002Fforum?id=IBdEfhLveS)\n  - Ron Dorfman, Idan Shenfeld, and Aviv Tamar. NeurIPS, 2021.\n- [Nearly Horizon-Free Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.14077)\n  - Tongzheng Ren, Jialian Li, Bo Dai, Simon S. Du, and Sujay Sanghavi. 
NeurIPS, 2021.\n- [Conservative Data Sharing for Multi-Task Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.08128)\n  - Tianhe Yu, Aviral Kumar, Yevgen Chebotar, Karol Hausman, Sergey Levine, and Chelsea Finn. NeurIPS, 2021.\n- [Online and Offline Reinforcement Learning by Planning with a Learned Model](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.06294)\n  - Julian Schrittwieser, Thomas Hubert, Amol Mandhane, Mohammadamin Barekatain, Ioannis Antonoglou, and David Silver. NeurIPS, 2021.\n- [Policy Finetuning: Bridging Sample-Efficient Offline and Online Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.04895)\n  - Tengyang Xie, Nan Jiang, Huan Wang, Caiming Xiong, and Yu Bai. NeurIPS, 2021.\n- [Offline RL Without Off-Policy Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.08909)\n  - David Brandfonbrener, William F. Whitney, Rajesh Ranganath, and Joan Bruna. NeurIPS, 2021.\n- [Offline Model-based Adaptable Policy Learning](https:\u002F\u002Fopenreview.net\u002Fforum?id=lrdXc17jm6)\n  - Xiong-Hui Chen, Yang Yu, Qingyang Li, Fan-Ming Luo, Zhiwei Tony Qin, Shang Wenjie, and Jieping Ye. NeurIPS, 2021.\n- [COMBO: Conservative Offline Model-Based Policy Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.08363)\n  - Tianhe Yu, Aviral Kumar, Rafael Rafailov, Aravind Rajeswaran, Sergey Levine, and Chelsea Finn. NeurIPS, 2021.\n- [PerSim: Data-Efficient Offline Reinforcement Learning with Heterogeneous Agents via Personalized Simulators](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.06961)\n  - Anish Agarwal, Abdullah Alomar, Varkey Alumootil, Devavrat Shah, Dennis Shen, Zhi Xu, and Cindy Yang. NeurIPS, 2021.\n- [Near-Optimal Offline Reinforcement Learning via Double Variance Reduction](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.01748)\n  - Ming Yin, Yu Bai, and Yu-Xiang Wang. 
NeurIPS, 2021.\n- [Bellman-consistent Pessimism for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.06926) [[video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=g_yD6Yw8MLQ)]\n  - Tengyang Xie, Ching-An Cheng, Nan Jiang, Paul Mineiro, and Alekh Agarwal. NeurIPS, 2021.\n- [The Difficulty of Passive Learning in Deep Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.14020)\n  - Georg Ostrovski, Pablo Samuel Castro, and Will Dabney. NeurIPS, 2021.\n- [Uncertainty-Based Offline Reinforcement Learning with Diversified Q-Ensemble](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.01548)\n  - Gaon An, Seungyong Moon, Jang-Hyun Kim, and Hyun Oh Song. NeurIPS, 2021.\n- [Towards Instance-Optimal Offline Reinforcement Learning with Pessimism](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.08695)\n  - Ming Yin and Yu-Xiang Wang. NeurIPS, 2021.\n- [EMaQ: Expected-Max Q-Learning Operator for Simple Yet Effective Offline and Online RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.11091)\n  - Seyed Kamyar Seyed Ghasemipour, Dale Schuurmans, and Shixiang Shane Gu. ICML, 2021.\n- [Actionable Models: Unsupervised Offline Reinforcement Learning of Robotic Skills](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.07749) [[website](https:\u002F\u002Factionable-models.github.io\u002F)]\n  - Yevgen Chebotar, Karol Hausman, Yao Lu, Ted Xiao, Dmitry Kalashnikov, Jake Varley, Alex Irpan, Benjamin Eysenbach, Ryan Julian, Chelsea Finn, and Sergey Levine. ICML, 2021.\n- [Is Pessimism Provably Efficient for Offline RL?](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.15085) [[video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=vCQsZ5pzHPk)]\n  - Ying Jin, Zhuoran Yang, and Zhaoran Wang. ICML, 2021.\n- [Representation Matters: Offline Pretraining for Sequential Decision Making](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.05815)\n  - Mengjiao Yang and Ofir Nachum. 
ICML, 2021.\n- [Offline Reinforcement Learning with Pseudometric Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.01948)\n  - Robert Dadashi, Shideh Rezaeifar, Nino Vieillard, Léonard Hussenot, Olivier Pietquin, and Matthieu Geist. ICML, 2021.\n- [Augmented World Models Facilitate Zero-Shot Dynamics Generalization From a Single Offline Environment](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.05632)\n  - Philip J. Ball, Cong Lu, Jack Parker-Holder, and Stephen Roberts. ICML, 2021.\n- [Offline Contextual Bandits with Overparameterized Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.15368)\n  - David Brandfonbrener, William F. Whitney, Rajesh Ranganath, and Joan Bruna. ICML, 2021.\n- [Risk Bounds and Rademacher Complexity in Batch Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.13883)\n  - Yaqi Duan, Chi Jin, and Zhiyuan Li. ICML, 2021.\n- [Offline Reinforcement Learning with Fisher Divergence Critic Regularization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.08050)\n  - Ilya Kostrikov, Jonathan Tompson, Rob Fergus, and Ofir Nachum. ICML, 2021.\n- [OptiDICE: Offline Policy Optimization via Stationary Distribution Correction Estimation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.10783)\n  - Jongmin Lee, Wonseok Jeon, Byung-Jun Lee, Joelle Pineau, and Kee-Eung Kim. ICML, 2021.\n- [Uncertainty Weighted Actor-Critic for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.08140)\n  - Yue Wu, Shuangfei Zhai, Nitish Srivastava, Joshua Susskind, Jian Zhang, Ruslan Salakhutdinov, and Hanlin Goh. ICML, 2021.\n- [Vector Quantized Models for Planning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.04615)\n  - Sherjil Ozair, Yazhe Li, Ali Razavi, Ioannis Antonoglou, Aäron van den Oord, and Oriol Vinyals. 
ICML, 2021.\n- [Exponential Lower Bounds for Batch Reinforcement Learning: Batch RL can be Exponentially Harder than Online RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.08005) [[video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=YktnEdsxYfc&feature=youtu.be)]\n  - Andrea Zanette. ICML, 2021.\n- [Instabilities of Offline RL with Pre-Trained Neural Representation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.04947)\n  - Ruosong Wang, Yifan Wu, Ruslan Salakhutdinov, and Sham M. Kakade. ICML, 2021.\n- [Offline Meta-Reinforcement Learning with Advantage Weighting](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.06043)\n  - Eric Mitchell, Rafael Rafailov, Xue Bin Peng, Sergey Levine, and Chelsea Finn. ICML, 2021.\n- [Model-Based Offline Planning](https:\u002F\u002Fopenreview.net\u002Fforum?id=OMNB1G5xzd4) [[video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=nxGGHdZOFts&feature=youtu.be)]\n  - Arthur Argenson and Gabriel Dulac-Arnold. ICLR, 2021.\n- [Batch Reinforcement Learning Through Continuation Method](https:\u002F\u002Fopenreview.net\u002Fforum?id=po-DLlBuAuz)\n  - Yijie Guo, Shengyu Feng, Nicolas Le Roux, Ed Chi, Honglak Lee, and Minmin Chen. ICLR, 2021.\n- [Model-Based Visual Planning with Self-Supervised Functional Distances](https:\u002F\u002Fopenreview.net\u002Fforum?id=UcoXdfrORC)\n  - Stephen Tian, Suraj Nair, Frederik Ebert, Sudeep Dasari, Benjamin Eysenbach, Chelsea Finn, and Sergey Levine. ICLR, 2021.\n- [Deployment-Efficient Reinforcement Learning via Model-Based Offline Optimization](https:\u002F\u002Fopenreview.net\u002Fforum?id=3hGNqpI4WS)\n  - Tatsuya Matsushima, Hiroki Furuta, Yutaka Matsuo, Ofir Nachum, and Shixiang Gu. ICLR, 2021.\n- [Efficient Fully-Offline Meta-Reinforcement Learning via Distance Metric Learning and Behavior Regularization](https:\u002F\u002Fopenreview.net\u002Fforum?id=8cpHIfgY4Dj)\n  - Lanqing Li, Rui Yang, and Dijun Luo. 
ICLR, 2021.\n- [DeepAveragers: Offline Reinforcement Learning by Solving Derived Non-Parametric MDPs](https:\u002F\u002Fopenreview.net\u002Fforum?id=eMP1j9efXtX)\n  - Aayam Kumar Shrestha, Stefan Lee, Prasad Tadepalli, and Alan Fern. ICLR, 2021.\n- [What are the Statistical Limits of Offline RL with Linear Function Approximation?](https:\u002F\u002Fopenreview.net\u002Fforum?id=30EvkP2aQLD) [[video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=FkkphMeFapg)]\n  - Ruosong Wang, Dean Foster, and Sham M. Kakade. ICLR, 2021.\n- [Reset-Free Lifelong Learning with Skill-Space Planning](https:\u002F\u002Fopenreview.net\u002Fforum?id=HIGSa_3kOx3) [[website](https:\u002F\u002Fsites.google.com\u002Fberkeley.edu\u002Freset-free-lifelong-learning)]\n  - Kevin Lu, Aditya Grover, Pieter Abbeel, and Igor Mordatch. ICLR, 2021.\n- [Risk-Averse Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.05371)\n  - Núria Armengol Urpí, Sebastian Curi, and Andreas Krause. ICLR, 2021.\n- [Finite-Sample Regret Bound for Distributionally Robust Offline Tabular Reinforcement Learning](http:\u002F\u002Fproceedings.mlr.press\u002Fv130\u002Fzhou21d.html)\n  - Zhengqing Zhou, Zhengyuan Zhou, Qinxun Bai, Linhai Qiu, Jose Blanchet, and Peter Glynn. AISTATS, 2021.\n- [Exploration by Maximizing Rényi Entropy for Reward-Free RL Framework](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.06193)\n  - Chuheng Zhang, Yuanying Cai, Longbo Huang, and Jian Li. AAAI, 2021.\n- [Efficient Self-Supervised Data Collection for Offline Robot Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.04607)\n  - Shadi Endrawis, Gal Leibovich, Guy Jacob, Gal Novik, and Aviv Tamar. ICRA, 2021.\n- [Boosting Offline Reinforcement Learning with Residual Generative Modeling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.10411)\n  - Hua Wei, Deheng Ye, Zhao Liu, Hao Wu, Bo Yuan, Qiang Fu, Wei Yang, and Zhenhui (Jessie) Li. 
IJCAI, 2021.\n- [BRAC+: Improved Behavior Regularized Actor Critic for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.00894)\n  - Chi Zhang, Sanmukh Rao Kuppannagari, and Viktor K. Prasanna. ACML, 2021.\n- [Behavior Constraining in Weight Space for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.05479)\n  - Phillip Swazinna, Steffen Udluft, Daniel Hein, and Thomas Runkler. ESANN, 2021.\n- [Finite-Sample Analysis For Decentralized Batch Multi-Agent Reinforcement Learning With Networked Agents](https:\u002F\u002Farxiv.org\u002Fabs\u002F1812.02783)\n  - Kaiqing Zhang, Zhuoran Yang, Han Liu, Tong Zhang, and Tamer Başar. IEEE T AUTOMATIC CONTROL, 2021.\n- [Can Active Sampling Reduce Causal Confusion in Offline Reinforcement Learning?](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.17168)\n  - Gunshi Gupta, Tim G. J. Rudner, Rowan Thomas McAllister, Adrien Gaidon, and Yarin Gal. CLeaR, 2024.\n- [Reinforcement Learning via Fenchel-Rockafellar Duality](https:\u002F\u002Farxiv.org\u002Fabs\u002F2001.01866) [[software](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fdice_rl)]\n  - Ofir Nachum and Bo Dai. arXiv, 2020.\n- [AWAC: Accelerating Online Reinforcement Learning with Offline Datasets](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.09359) [[website](https:\u002F\u002Fawacrl.github.io\u002F)] [[code](https:\u002F\u002Fgithub.com\u002Fvitchyr\u002Frlkit\u002Ftree\u002Fmaster\u002Fexamples\u002Fawac)] [[blog](https:\u002F\u002Fbair.berkeley.edu\u002Fblog\u002F2020\u002F09\u002F10\u002Fawac\u002F)]\n  - Ashvin Nair, Abhishek Gupta, Murtaza Dalal, and Sergey Levine. arXiv, 2020.\n- [Sparse Feature Selection Makes Batch Reinforcement Learning More Sample Efficient](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.04019)\n  - Botao Hao, Yaqi Duan, Tor Lattimore, Csaba Szepesvári, and Mengdi Wang. 
arXiv, 2020.\n- [A Variant of the Wang-Foster-Kakade Lower Bound for the Discounted Setting](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.01075)\n  - Philip Amortila, Nan Jiang, and Tengyang Xie. arXiv, 2020.\n- [Batch Reinforcement Learning with a Nonparametric Off-Policy Policy Gradient](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.14771)\n  - Samuele Tosatto, João Carvalho, and Jan Peters. arXiv, 2020.\n- [Batch Value-function Approximation with Only Realizability](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.04990)\n  - Tengyang Xie and Nan Jiang. arXiv, 2020.\n- [DRIFT: Deep Reinforcement Learning for Functional Software Testing](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.08220)\n  - Luke Harries, Rebekah Storan Clarke, Timothy Chapman, Swamy V. P. L. N. Nallamalli, Levent Ozgur, Shuktika Jain, Alex Leung, Steve Lim, Aaron Dietrich, José Miguel Hernández-Lobato, Tom Ellis, Cheng Zhang, and Kamil Ciosek. arXiv, 2020.\n- [Causality and Batch Reinforcement Learning: Complementary Approaches To Planning In Unknown Domains](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.02579)\n  - James Bannon, Brad Windsor, Wenbo Song, and Tao Li. arXiv, 2020.\n- [Goal-conditioned Batch Reinforcement Learning for Rotation Invariant Locomotion](https:\u002F\u002Farxiv.org\u002Fabs\u002F2004.08356) [[code](https:\u002F\u002Fgithub.com\u002Faditimavalankar\u002Fgc-batch-rl-locomotion)]\n  - Aditi Mavalankar. arXiv, 2020.\n- [Semi-Supervised Reward Learning for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.06899)\n  - Ksenia Konyushkova, Konrad Zolna, Yusuf Aytar, Alexander Novikov, Scott Reed, Serkan Cabi, and Nando de Freitas. arXiv, 2020.\n- [Sample-Efficient Reinforcement Learning via Counterfactual-Based Data Augmentation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.09092)\n  - Chaochao Lu, Biwei Huang, Ke Wang, José Miguel Hernández-Lobato, Kun Zhang, and Bernhard Schölkopf. 
arXiv, 2020.\n- [Offline Reinforcement Learning from Images with Latent Space Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.11547) [[website](https:\u002F\u002Fsites.google.com\u002Fview\u002Flompo\u002F)]\n  - Rafael Rafailov, Tianhe Yu, Aravind Rajeswaran, and Chelsea Finn. arXiv, 2020.\n- [POPO: Pessimistic Offline Policy Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.13682)\n  - Qiang He and Xinwen Hou. arXiv, 2020.\n- [Reinforcement Learning with Videos: Combining Offline Observations with Interaction](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.06507)\n  - Karl Schmeckpeper, Oleh Rybkin, Kostas Daniilidis, Sergey Levine, and Chelsea Finn. arXiv, 2020.\n- [Recovery RL: Safe Reinforcement Learning with Learned Recovery Zones](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.15920) [[website](https:\u002F\u002Fsites.google.com\u002Fberkeley.edu\u002Frecovery-rl\u002F)]\n  - Brijen Thananjeyan, Ashwin Balakrishna, Suraj Nair, Michael Luo, Krishnan Srinivasan, Minho Hwang, Joseph E. Gonzalez, Julian Ibarz, Chelsea Finn, and Ken Goldberg. arXiv, 2020.\n- [Implicit Under-Parameterization Inhibits Data-Efficient Deep Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.14498)\n  - Aviral Kumar, Rishabh Agarwal, Dibya Ghosh, and Sergey Levine. arXiv, 2020.\n- [OPAL: Offline Primitive Discovery for Accelerating Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.13611) [[website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fopal-iclr)]\n  - Anurag Ajay, Aviral Kumar, Pulkit Agrawal, Sergey Levine, and Ofir Nachum. arXiv, 2020.\n- [Batch Exploration with Examples for Scalable Robotic Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.11917)\n  - Annie S. Chen, HyunJi Nam, Suraj Nair, and Chelsea Finn. 
arXiv, 2020.\n- [Learning Dexterous Manipulation from Suboptimal Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.08587) [[website](https:\u002F\u002Fsites.google.com\u002Fview\u002Frlfse)]\n  - Rae Jeong, Jost Tobias Springenberg, Jackie Kay, Daniel Zheng, Yuxiang Zhou, Alexandre Galashov, Nicolas Heess, and Francesco Nori. arXiv, 2020.\n- [The Reinforcement Learning-Based Multi-Agent Cooperative Approach for the Adaptive Speed Regulation on a Metallurgical Pickling Line](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.06933)\n  - Anna Bogomolova, Kseniia Kingsep, and Boris Voskresenskii. arXiv, 2020.\n- [Overcoming Model Bias for Robust Offline Deep Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.05533) [[dataset](https:\u002F\u002Fgithub.com\u002Fsiemens\u002Findustrialbenchmark\u002Ftree\u002Foffline_datasets\u002Fdatasets)]\n  - Phillip Swazinna, Steffen Udluft, and Thomas Runkler. arXiv, 2020.\n- [Offline Meta Learning of Exploration](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.02598)\n  - Ron Dorfman, Idan Shenfeld, and Aviv Tamar. arXiv, 2020.\n- [EMaQ: Expected-Max Q-Learning Operator for Simple Yet Effective Offline and Online RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.11091)\n  - Seyed Kamyar Seyed Ghasemipour, Dale Schuurmans, and Shixiang Shane Gu. arXiv, 2020.\n- [Hyperparameter Selection for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.09055)\n  - Tom Le Paine, Cosmin Paduraru, Andrea Michi, Caglar Gulcehre, Konrad Zolna, Alexander Novikov, Ziyu Wang, and Nando de Freitas. arXiv, 2020.\n- [Interpretable Control by Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.09964)\n  - Daniel Hein, Steffen Limmer, and Thomas A. Runkler. 
arXiv, 2020.\n- [Efficient Evaluation of Natural Stochastic Policies in Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.03886) [[code](https:\u002F\u002Fgithub.com\u002FCausalML\u002FNaturalStochasticOPE)]\n  - Nathan Kallus and Masatoshi Uehara. arXiv, 2020.\n- [Accelerating Online Reinforcement Learning with Offline Datasets](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.09359) [[website](https:\u002F\u002Fawacrl.github.io\u002F)] [[blog](https:\u002F\u002Fbair.berkeley.edu\u002Fblog\u002F2020\u002F09\u002F10\u002Fawac\u002F)]\n  - Ashvin Nair, Murtaza Dalal, Abhishek Gupta, and Sergey Levine. arXiv, 2020.\n- [DisCor: Corrective Feedback in Reinforcement Learning via Distribution Correction](https:\u002F\u002Farxiv.org\u002Fabs\u002F2003.07305) [[blog](https:\u002F\u002Fbair.berkeley.edu\u002Fblog\u002F2020\u002F03\u002F16\u002Fdiscor\u002F)]\n  - Aviral Kumar, Abhishek Gupta, and Sergey Levine. arXiv, 2020.\n- [Critic Regularized Regression](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2020\u002Fhash\u002F588cb956d6bbe67078f29f8de420a13d-Abstract.html)\n  - Ziyu Wang, Alexander Novikov, Konrad Zolna, Josh S. Merel, Jost Tobias Springenberg, Scott E. Reed, Bobak Shahriari, Noah Siegel, Caglar Gulcehre, Nicolas Heess, and Nando de Freitas. NeurIPS, 2020.\n- [Provably Good Batch Off-Policy Reinforcement Learning Without Great Exploration](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2020\u002Fhash\u002F0dc23b6a0e4abc39904388dd3ffadcd1-Abstract.html)\n  - Yao Liu, Adith Swaminathan, Alekh Agarwal, and Emma Brunskill. 
NeurIPS, 2020.\n- [Conservative Q-Learning for Offline Reinforcement Learning](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2020\u002Fhash\u002F0d2b2061826a5df3221116a5085a6052-Abstract.html) [[website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fcql-offline-rl)] [[code](https:\u002F\u002Fgithub.com\u002Faviralkumar2907\u002FCQL)] [[blog](https:\u002F\u002Fbair.berkeley.edu\u002Fblog\u002F2020\u002F12\u002F07\u002Foffline\u002F)]\n  - Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine. NeurIPS, 2020.\n- [BAIL: Best-Action Imitation Learning for Batch Deep Reinforcement Learning](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2020\u002Fhash\u002Fd55cbf210f175f4a37916eafe6c04f0d-Abstract.html)\n  - Xinyue Chen, Zijian Zhou, Zheng Wang, Che Wang, Yanqiu Wu, and Keith Ross. NeurIPS, 2020.\n- [MOPO: Model-based Offline Policy Optimization](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2020\u002Fhash\u002Fa322852ce0df73e204b7e67cbbef0d0a-Abstract.html) [[code](https:\u002F\u002Fgithub.com\u002Ftianheyu927\u002Fmopo)]\n  - Tianhe Yu, Garrett Thomas, Lantao Yu, Stefano Ermon, James Y. Zou, Sergey Levine, Chelsea Finn, and Tengyu Ma. NeurIPS, 2020.\n- [MOReL: Model-Based Offline Reinforcement Learning](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2020\u002Fhash\u002Ff7efa4f864ae9b88d43527f4b14f750f-Abstract.html) [[podcast](https:\u002F\u002Ftwimlai.com\u002Fmorel-model-based-offline-reinforcement-learning-with-aravind-rajeswaran\u002F)]\n  - Rahul Kidambi, Aravind Rajeswaran, Praneeth Netrapalli, and Thorsten Joachims. NeurIPS, 2020.\n- [Expert-Supervised Reinforcement Learning for Offline Policy Learning and Evaluation](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2020\u002Fhash\u002Fdaf642455364613e2120c636b5a1f9c7-Abstract.html)\n  - Aaron Sonabend, Junwei Lu, Leo Anthony Celi, Tianxi Cai, and Peter Szolovits. 
NeurIPS, 2020.\n- [Multi-task Batch Reinforcement Learning with Metric Learning](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2020\u002Fhash\u002F4496bf24afe7fab6f046bf4923da8de6-Abstract.html)\n  - Jiachen Li, Quan Vuong, Shuang Liu, Minghua Liu, Kamil Ciosek, Henrik Christensen, and Hao Su. NeurIPS, 2020.\n- [Counterfactual Data Augmentation using Locally Factored Dynamics](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2020\u002Fhash\u002F294e09f267683c7ddc6cc5134a7e68a8-Abstract.html) [[code](https:\u002F\u002Fgithub.com\u002Fspitis\u002Fmrl)]\n  - Silviu Pitis, Elliot Creager, and Animesh Garg. NeurIPS, 2020.\n- [On Reward-Free Reinforcement Learning with Linear Function Approximation](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2020\u002Fhash\u002Fce4449660c6523b377b22a1dc2da5556-Abstract.html)\n  - Ruosong Wang, Simon S. Du, Lin Yang, and Russ R. Salakhutdinov. NeurIPS, 2020.\n- [Constrained Policy Improvement for Safe and Efficient Reinforcement Learning](https:\u002F\u002Fwww.ijcai.org\u002FProceedings\u002F2020\u002F396)\n  - Elad Sarafian, Aviv Tamar, and Sarit Kraus. IJCAI, 2020.\n- [BRPO: Batch Residual Policy Optimization](https:\u002F\u002Fwww.ijcai.org\u002FProceedings\u002F2020\u002F391) [[code](https:\u002F\u002Fgithub.com\u002Feladsar\u002Frbi)]\n  - Sungryull Sohn, Yinlam Chow, Jayden Ooi, Ofir Nachum, Honglak Lee, Ed Chi, and Craig Boutilier. IJCAI, 2020.\n- [Keep Doing What Worked: Behavior Modelling Priors for Offline Reinforcement Learning](https:\u002F\u002Fopenreview.net\u002Fforum?id=rke7geHtwH)\n  - Noah Siegel, Jost Tobias Springenberg, Felix Berkenkamp, Abbas Abdolmaleki, Michael Neunert, Thomas Lampe, Roland Hafner, Nicolas Heess, and Martin Riedmiller. 
ICLR, 2020.\n- [COG: Connecting New Skills to Past Experience with Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.14500) [[website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fcog-rl)] [[blog](https:\u002F\u002Fbair.berkeley.edu\u002Fblog\u002F2020\u002F12\u002F07\u002Foffline\u002F)] [[code](https:\u002F\u002Fgithub.com\u002Favisingh599\u002Fcog)]\n  - Avi Singh, Albert Yu, Jonathan Yang, Jesse Zhang, Aviral Kumar, and Sergey Levine. CoRL, 2020.\n- [Accelerating Reinforcement Learning with Learned Skill Priors](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.11944)\n  - Karl Pertsch, Youngwoon Lee, and Joseph J. Lim. CoRL, 2020.\n- [PLAS: Latent Action Space for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.07213) [[website](https:\u002F\u002Fsites.google.com\u002Fview\u002Flatent-policy)] [[code](https:\u002F\u002Fgithub.com\u002FWenxuan-Zhou\u002FPLAS)]\n  - Wenxuan Zhou, Sujay Bajracharya, and David Held. CoRL, 2020.\n- [Scaling data-driven robotics with reward sketching and batch reinforcement learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1909.12200) [[website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fdata-driven-robotics\u002F)]\n  - Serkan Cabi, Sergio Gómez Colmenarejo, Alexander Novikov, Ksenia Konyushkova, Scott Reed, Rae Jeong, Konrad Zolna, Yusuf Aytar, David Budden, Mel Vecerik, Oleg Sushkov, David Barker, Jonathan Scholz, Misha Denil, Nando de Freitas, and Ziyu Wang. RSS, 2020.\n- [Quantile QT-Opt for Risk-Aware Vision-Based Robotic Grasping](https:\u002F\u002Farxiv.org\u002Fabs\u002F1910.02787)\n  - Cristian Bodnar, Adrian Li, Karol Hausman, Peter Pastor, and Mrinal Kalakrishnan. RSS, 2020.\n- [Batch-Constrained Reinforcement Learning for Dynamic Distribution Network Reconfiguration](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.12749)\n  - Yuanqi Gao, Wei Wang, Jie Shi, and Nanpeng Yu. 
IEEE T SMART GRID, 2020.\n- [Behavior Regularized Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.11361)\n  - Yifan Wu, George Tucker, and Ofir Nachum. arXiv, 2019.\n- [Off-Policy Policy Gradient Algorithms by Constraining the State Distribution Shift](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.06970)\n  - Riashat Islam, Komal K. Teru, Deepak Sharma, and Joelle Pineau. arXiv, 2019.\n- [Advantage-Weighted Regression: Simple and Scalable Off-Policy Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1910.00177)\n  - Xue Bin Peng, Aviral Kumar, Grace Zhang, and Sergey Levine. arXiv, 2019.\n- [AlgaeDICE: Policy Gradient from Arbitrary Experience](https:\u002F\u002Farxiv.org\u002Fabs\u002F1912.02074)\n  - Ofir Nachum, Bo Dai, Ilya Kostrikov, Yinlam Chow, Lihong Li, and Dale Schuurmans. arXiv, 2019.\n- [Stabilizing Off-Policy Q-Learning via Bootstrapping Error Reduction](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2019\u002Fhash\u002Fc2073ffa77b5357a498057413bb09d3a-Abstract.html) [[website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fbear-off-policyrl)] [[blog](https:\u002F\u002Fbair.berkeley.edu\u002Fblog\u002F2019\u002F12\u002F05\u002Fbear\u002F)] [[code](https:\u002F\u002Fgithub.com\u002Faviralkumar2907\u002FBEAR)]\n  - Aviral Kumar, Justin Fu, George Tucker, and Sergey Levine. NeurIPS, 2019.\n- [Off-Policy Deep Reinforcement Learning without Exploration](http:\u002F\u002Fproceedings.mlr.press\u002Fv97\u002Ffujimoto19a.html)\n  - Scott Fujimoto, David Meger, and Doina Precup. ICML, 2019.\n- [Safe Policy Improvement with Baseline Bootstrapping](http:\u002F\u002Fproceedings.mlr.press\u002Fv97\u002Flaroche19a.html)\n  - Romain Laroche, Paul Trichelair, and Remi Tachet Des Combes. ICML, 2019.\n- [Information-Theoretic Considerations in Batch Reinforcement Learning](http:\u002F\u002Fproceedings.mlr.press\u002Fv97\u002Fchen19e.html)\n  - Jinglin Chen and Nan Jiang. 
ICML, 2019.\n- [Batch Recurrent Q-Learning for Backchannel Generation Towards Engaging Agents](https:\u002F\u002Farxiv.org\u002Fabs\u002F1908.02037)\n  - Nusrah Hussain, Engin Erzin, T. Metin Sezgin, and Yucel Yemez. ACII, 2019.\n- [Safe Policy Improvement with Soft Baseline Bootstrapping](https:\u002F\u002Farxiv.org\u002Fabs\u002F1907.05079)\n  - Kimia Nadjahi, Romain Laroche, and Rémi Tachet des Combes. ECML, 2019.\n- [Importance Weighted Transfer of Samples in Reinforcement Learning](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Ftirinzoni18a.html)\n  - Andrea Tirinzoni, Andrea Sessa, Matteo Pirotta, and Marcello Restelli. ICML, 2018.\n- [Scalable Deep Reinforcement Learning for Vision-Based Robotic Manipulation](http:\u002F\u002Fproceedings.mlr.press\u002Fv87\u002Fkalashnikov18a.html) [[website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fqtopt)]\n  - Dmitry Kalashnikov, Alex Irpan, Peter Pastor, Julian Ibarz, Alexander Herzog, Eric Jang, Deirdre Quillen, Ethan Holly, Mrinal Kalakrishnan, Vincent Vanhoucke, and Sergey Levine. CoRL, 2018.\n- [Off-Policy Policy Gradient with State Distribution Correction](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.08473)\n  - Yao Liu, Adith Swaminathan, Alekh Agarwal, and Emma Brunskill. UAI, 2019.\n- [Behavioral Cloning from Observation](https:\u002F\u002Farxiv.org\u002Fabs\u002F1805.01954)\n  - Faraz Torabi, Garrett Warnell, and Peter Stone. IJCAI, 2018.\n- [Diverse Exploration for Fast and Safe Policy Improvement](https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.08331)\n  - Andrew Cohen, Lei Yu, and Robert Wright. AAAI, 2018.\n- [Deep Exploration via Bootstrapped DQN](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2016\u002Fhash\u002F8d8818c8e140c64c743113f563cf750f-Abstract.html)\n  - Ian Osband, Charles Blundell, Alexander Pritzel, and Benjamin Van Roy. 
NeurIPS, 2016.\n- [Safe Policy Improvement by Minimizing Robust Baseline Regret](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2016\u002Fhash\u002F9a3d458322d70046f63dfd8b0153ece4-Abstract.html)\n  - Mohammad Ghavamzadeh, Marek Petrik, and Yinlam Chow. NeurIPS, 2016.\n- [Residential Demand Response Applications Using Batch Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1504.02125)\n  - Frederik Ruelens, Bert Claessens, Stijn Vandael, Bart De Schutter, Robert Babuska, and Ronnie Belmans. arXiv, 2015.\n- [Structural Return Maximization for Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1405.2606)\n  - Joshua Joseph, Javier Velez, and Nicholas Roy. arXiv, 2014.\n- [Simultaneous Perturbation Algorithms for Batch Off-Policy Search](https:\u002F\u002Farxiv.org\u002Fabs\u002F1403.4514)\n  - Raphael Fonteneau, and L.A. Prashanth. CDC, 2014.\n- [Guided Policy Search](http:\u002F\u002Fproceedings.mlr.press\u002Fv28\u002Flevine13.html)\n  - Sergey Levine, and Vladlen Koltun. ICML, 2013.\n- [Off-Policy Actor-Critic](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.5555\u002F3042573.3042600)\n  - Thomas Degris, Martha White, and Richard S. Sutton. ICML, 2012.\n- [PAC-Bayesian Policy Evaluation for Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1202.3717)\n  - Mahdi Milani Fard, Joelle Pineau, and Csaba Szepesvari. UAI, 2011.\n- [Tree-Based Batch Mode Reinforcement Learning](https:\u002F\u002Fwww.jmlr.org\u002Fpapers\u002Fv6\u002Fernst05a.html)\n  - Damien Ernst, Pierre Geurts, and Louis Wehenkel. JMLR, 2005.\n- [Neural Fitted Q Iteration–First Experiences with a Data Efficient Neural Reinforcement Learning Method](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1007\u002F11564096_32)\n  - Martin Riedmiller. ECML, 2005.\n- [Off-Policy Temporal-Difference Learning with Function Approximation](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.5555\u002F645530.655817)\n  - Doina Precup, Richard S. 
Sutton, and Sanjoy Dasgupta. ICML, 2001.\n\n### Offline RL: Benchmarks\u002FExperiments\n- [ORL-AUDITOR: Dataset Auditing in Offline Deep Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.03081)\n  - Linkang Du, Min Chen, Mingyang Sun, Shouling Ji, Peng Cheng, Jiming Chen, and Zhikun Zhang. NDSS, 2024.\n- [Pearl: A Production-ready Reinforcement Learning Agent](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.03814)\n  - Zheqing Zhu, Rodrigo de Salvo Braz, Jalaj Bhandari, Daniel Jiang, Yi Wan, Yonathan Efroni, Liyuan Wang, Ruiyang Xu, Hongbo Guo, Alex Nikulkov, Dmytro Korenkevych, Urun Dogan, Frank Cheng, Zheng Wu, and Wanqiao Xu. arXiv, 2023.\n- [LMRL Gym: Benchmarks for Multi-Turn Reinforcement Learning with Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.18232)\n  - Marwa Abdulhai, Isadora White, Charlie Snell, Charles Sun, Joey Hong, Yuexiang Zhai, Kevin Xu, and Sergey Levine. arXiv, 2023.\n- [Robotic Manipulation Datasets for Offline Compositional Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.13372)\n  - Marcel Hussing, Jorge A. Mendez, Anisha Singrodia, Cassandra Kent, and Eric Eaton. arXiv, 2023.\n- [Datasets and Benchmarks for Offline Safe Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.09303)\n  - Zuxin Liu, Zijian Guo, Haohong Lin, Yihang Yao, Jiacheng Zhu, Zhepeng Cen, Hanjiang Hu, Wenhao Yu, Tingnan Zhang, Jie Tan, and Ding Zhao. arXiv, 2023.\n- [Improving and Benchmarking Offline Reinforcement Learning Algorithms](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.00972)\n  - Bingyi Kang, Xiao Ma, Yirui Wang, Yang Yue, and Shuicheng Yan. arXiv, 2023.\n- [Benchmarks and Algorithms for Offline Preference-Based Reward Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.01392)\n  - Daniel Shin, Anca D. Dragan, and Daniel S. Brown. arXiv, 2023.\n- [Hokoff: Real Game Dataset from Honor of Kings and its Offline Reinforcement Learning Benchmarks](https:\u002F\u002Fopenreview.net\u002Fpdf?id=jP3BduIxy6)\n  - Yun Qu, Boyuan Wang, Jianzhun Shao, Yuhang Jiang, Chen Chen, Zhenbin Ye, Liu Lin, Yang Feng, Lin Lai, Hongyang Qin, Minwen Deng, Juchao Zhuo, Deheng Ye, Qiang Fu, Guang Yang, Wei Yang, Lanxiao Huang, and Xiangyang Ji. NeurIPS, 2023.\n- [CORL: Research-oriented Deep Offline Reinforcement Learning Library](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.07105) [[code](https:\u002F\u002Fgithub.com\u002Fcorl-team\u002FCORL)]\n  - Denis Tarasov, Alexander Nikulin, Dmitry Akimov, Vladislav Kurenkov, and Sergey Kolesnikov. NeurIPS, 2023.\n- [Benchmarking Offline Reinforcement Learning on Real-Robot Hardware](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.15690) [[dataset](https:\u002F\u002Fgithub.com\u002Frr-learning\u002Ftrifinger_rl_datasets)]\n  - Nico Gürtler, Sebastian Blaes, Pavel Kolev, Felix Widmaier, Manuel Wüthrich, Stefan Bauer, Bernhard Schölkopf, and Georg Martius. ICLR, 2023.\n- [Train Offline, Test Online: A Real Robot Learning Benchmark](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.00942)\n  - Gaoyue Zhou, Victoria Dean, Mohan Kumar Srirama, Aravind Rajeswaran, Jyothish Pari, Kyle Hatch, Aryan Jain, Tianhe Yu, Pieter Abbeel, Lerrel Pinto, Chelsea Finn, and Abhinav Gupta. ICRA, 2023.\n- [Benchmarking Offline Reinforcement Learning Algorithms for E-Commerce Order Fraud Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.02620)\n  - 
Soysal Degirmenci and Chris Jones. arXiv, 2022.\n- [Real World Offline Reinforcement Learning with Realistic Data Source](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.06479) [[website](https:\u002F\u002Fsites.google.com\u002Fview\u002Freal-orl)] [[dataset](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1nyMPlbwkjsJ_FyMwVp9ynOvz_ykGtbA8)]\n  - Gaoyue Zhou, Liyiming Ke, Siddhartha Srinivasa, Abhinav Gupta, Aravind Rajeswaran, and Vikash Kumar. arXiv, 2022.\n- [Mind Your Data! Hiding Backdoors in Offline Reinforcement Learning Datasets](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.04688)\n  - Chen Gong, Zhou Yang, Yunpeng Bai, Junda He, Jieke Shi, Arunesh Sinha, Bowen Xu, Xinwen Hou, Guoliang Fan, and David Lo. arXiv, 2022.\n- [B2RL: An open-source Dataset for Building Batch Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.15626)\n  - Hsin-Yu Liu, Xiaohan Fu, Bharathan Balaji, Rajesh Gupta, and Dezhi Hong. arXiv, 2022.\n- [An Empirical Study of Implicit Regularization in Deep Offline RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.02099)\n  - Caglar Gulcehre, Srivatsan Srinivasan, Jakub Sygnowski, Georg Ostrovski, Mehrdad Farajtabar, Matt Hoffman, Razvan Pascanu, and Arnaud Doucet. arXiv, 2022.\n- [Challenges and Opportunities in Offline Reinforcement Learning from Visual Observations](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.04779)\n  - Cong Lu, Philip J. Ball, Tim G. J. Rudner, Jack Parker-Holder, Michael A. Osborne, and Yee Whye Teh. arXiv, 2022.\n- [Don't Change the Algorithm, Change the Data: Exploratory Data for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.13425) [[code](https:\u002F\u002Fgithub.com\u002Fdenisyarats\u002Fexorl)]\n  - Denis Yarats, David Brandfonbrener, Hao Liu, Michael Laskin, Pieter Abbeel, Alessandro Lazaric, and Lerrel Pinto. arXiv, 2022.\n- [The Challenges of Exploration for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.11861)\n  - Nathan Lambert, Markus Wulfmeier, William Whitney, Arunkumar Byravan, Michael Bloesch, Vibhavari Dasagi, Tim Hertweck, and Martin Riedmiller. arXiv, 2022.\n- [Offline Equilibrium Finding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.05285) [[code](https:\u002F\u002Fgithub.com\u002FSecurityGames\u002Foef)]\n  - Shuxin Li, Xinrun Wang, Jakub Cerny, Youzhi Zhang, Hau Chan, and Bo An. arXiv, 2022.\n- [Comparing Model-free and Model-based Algorithms for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.05433)\n  - Phillip Swazinna, Steffen Udluft, Daniel Hein, and Thomas Runkler. arXiv, 2022.\n- [Data-Efficient Pipeline for Offline Reinforcement Learning with Limited Data](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.08642)\n  - Allen Nie, Yannis Flet-Berliac, Deon R. Jordan, William Steenbergen, and Emma Brunskill. NeurIPS, 2022.\n- [Dungeons and Data: A Large-Scale NetHack Dataset](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.00539)\n  - Eric Hambro, Roberta Raileanu, Danielle Rothermel, Vegard Mella, Tim Rocktäschel, Heinrich Küttler, and Naila Murray. NeurIPS, 2022.\n- 
[NeoRL: A Near Real-World Benchmark for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.00714) [[website](http:\u002F\u002Fpolixir.ai\u002Fresearch\u002Fneorl)] [[code](https:\u002F\u002Fagit.ai\u002FPolixir\u002Fneorl)]\n  - Rongjun Qin, Songyi Gao, Xingyuan Zhang, Zhen Xu, Shengkai Huang, Zewen Li, Weinan Zhang, and Yang Yu. NeurIPS, 2022.\n- [A Closer Look at Offline RL Agents](https:\u002F\u002Fopenreview.net\u002Fforum?id=mn1MWh0iDCA)\n  - Yuwei Fu, Di Wu, and Benoit Boulet. NeurIPS, 2022.\n- [Beyond Rewards: A Hierarchical Perspective on Offline Multiagent Behavioral Analysis](https:\u002F\u002Fopenreview.net\u002Fforum?id=SiQAZV0yEny)\n  - Shayegan Omidshafiei, Andrei Kapishnikov, Yannick Assogba, Lucas Dixon, and Been Kim. NeurIPS, 2022.\n- [On the Effect of Pre-training for Transformer in Different Modality on Offline Reinforcement Learning](https:\u002F\u002Fopenreview.net\u002Fforum?id=9GXoMs__ckJ)\n  - Shiro Takagi. NeurIPS, 2022.\n- [Showing Your Offline Reinforcement Learning Work: Online Evaluation Budget Matters](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.04156)\n  - Vladislav Kurenkov and Sergey Kolesnikov. ICML, 2022.\n- [d3rlpy: An Offline Deep Reinforcement Learning Library](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.03788) [[software](https:\u002F\u002Fgithub.com\u002Ftakuseno\u002Fd3rlpy)]\n  - Takuma Seno and Michita Imai. JMLR, 2022.\n- [Understanding the Effects of Dataset Characteristics on Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.04714) [[code](https:\u002F\u002Fgithub.com\u002Fml-jku\u002FOfflineRL)]\n  - Kajetan Schweighofer, Markus Hofmarcher, Marius-Constantin Dinu, Philipp Renz, Angela Bitto-Nemling, Vihang Patil, and Sepp Hochreiter. arXiv, 2021.\n- [Interpretable Performance Analysis Towards Offline Reinforcement Learning: A Dataset Perspective](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.05473)\n  - Chenyang Xi, Bo Tang, Jiajun Shen, Xinfu Liu, Feiyu Xiong, and Xueying Li. arXiv, 2021.\n- [Comparison and Unification of Three Regularization Methods in Batch Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.08134)\n  - Sarah Rathnam, Susan A. Murphy, and Finale Doshi-Velez. arXiv, 2021.\n- [RLDS: An Ecosystem to Generate, Share and Use Datasets in Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.02767) [[code](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Frlds)]\n  - Sabela Ramos, Sertan Girgin, Léonard Hussenot, Damien Vincent, Hanna Yakubovich, Daniel Toyama, Anita Gergely, Piotr Stanczyk, Raphael Marinier, Jeremiah Harmsen, Olivier Pietquin, and Nikola Momchev. NeurIPS, 2021.\n- [Measuring Data Quality for Dataset Selection in Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.13461)\n  - Phillip Swazinna, Steffen Udluft, and Thomas Runkler. IEEE SSCI, 2021.\n- [Offline Reinforcement Learning Hands-On](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.14379)\n  - Louis Monier, Jakub Kmec, Alexandre Laterre, Thomas Pierrot, Valentin Courgeau, Olivier Sigaud, and Karim Beguir. arXiv, 2020.\n- 
[D4RL: Datasets for Deep Data-Driven Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2004.07219) [[website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fd4rl\u002Fhome)] [[blog](https:\u002F\u002Fbair.berkeley.edu\u002Fblog\u002F2020\u002F06\u002F25\u002FD4RL\u002F)] [[code](https:\u002F\u002Fgithub.com\u002Frail-berkeley\u002Fd4rl)]\n  - Justin Fu, Aviral Kumar, Ofir Nachum, George Tucker, and Sergey Levine. arXiv, 2020.\n- [RL Unplugged: Benchmarks for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.13888) [[code](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Fdeepmind-research\u002Ftree\u002Fmaster\u002Frl_unplugged)] [[dataset](https:\u002F\u002Fconsole.cloud.google.com\u002Fstorage\u002Fbrowser\u002Frl_unplugged?pli=1)]\n  - Caglar Gulcehre, Ziyu Wang, Alexander Novikov, Tom Le Paine, Sergio Gomez Colmenarejo, Konrad Zolna, Rishabh Agarwal, Josh Merel, Daniel Mankowitz, Cosmin Paduraru, Gabriel Dulac-Arnold, Jerry Li, Mohammad Norouzi, Matt Hoffman, Ofir Nachum, George Tucker, Nicolas Heess, and Nando de Freitas. NeurIPS, 2020.\n- [Benchmarking Batch Deep Reinforcement Learning Algorithms](https:\u002F\u002Farxiv.org\u002Fabs\u002F1910.01708)\n  - Scott Fujimoto, Edoardo Conti, Mohammad Ghavamzadeh, and Joelle Pineau. arXiv, 2019.\n\n### Offline RL: Applications\n- [MOTO: Offline Pre-training to Online Fine-tuning for Model-based Robot Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.03306)\n  - Rafael Rafailov, Kyle Hatch, Victor Kolev, John D. Martin, Mariano Phielipp, and Chelsea Finn. arXiv, 2024.\n- [P2DT: Mitigating Forgetting in task-incremental Learning with progressive prompt Decision Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.11666)\n  - Zhiyuan Wang, Xiaoyang Qu, Jing Xiao, Bokui Chen, and Jianzong Wang. ICASSP, 2024.\n- [Online Symbolic Music Alignment with Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.00466)\n  - Silvan David Peter. arXiv, 2023.\n- [Advancing RAN Slicing with Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.10547)\n  - Kun Yang, Shu-ping Yeh, Menglei Zhang, Jerry Sydir, Jing Yang, and Cong Shen. 
arXiv, 2023.\n- [Traffic Signal Control Using Lightweight Transformers: An Offline-to-Online RL Approach](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.07795)\n  - Xingshuai Huang, Di Wu, and Benoit Boulet. arXiv, 2023.\n- [Self-Driving Telescopes: Autonomous Scheduling of Astronomical Observation Campaigns with Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.18094)\n  - Franco Terranova, M. Voetberg, Brian Nord, and Amanda Pagul. arXiv, 2023.\n- [A Fully Data-Driven Approach for Realistic Traffic Signal Control Using Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.15920)\n  - Jianxiong Li, Shichao Lin, Tianyu Shi, Chujie Tian, Yu Mei, Jian Song, Xianyuan Zhan, and Ruimin Li. arXiv, 2023.\n- [Offline Reinforcement Learning for Wireless Network Optimization with Mixture Datasets](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.11423)\n  - Kun Yang, Cong Shen, Jing Yang, Shu-ping Yeh, and Jerry Sydir. arXiv, 2023.\n- [STEER: Unified Style Transfer with Expert Reinforcement](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.07167)\n  - Skyler Hallinan, Faeze Brahman, Ximing Lu, Jaehun Jung, Sean Welleck, and Yejin Choi. arXiv, 2023.\n- [Zero-Shot Goal-Directed Dialogue via RL on Imagined Conversations](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.05584)\n  - Joey Hong, Sergey Levine, and Anca Dragan. arXiv, 2023.\n- [Robot Fine-Tuning Made Easy: Pre-Training Rewards and Policies for Autonomous Real-World Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.15145)\n  - Jingyun Yang, Max Sobol Mark, Brandon Vu, Archit Sharma, Jeannette Bohg, and Chelsea Finn. arXiv, 2023.\n- [Offline Reinforcement Learning for Optimizing Production Bidding Policies](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.09426)\n  - Dmytro Korenkevych, Frank Cheng, Artsiom Balakir, Alex Nikulkov, Lingnan Gao, Zhihao Cen, Zuobing Xu, and Zheqing Zhu. 
arXiv, 2023.\n- [End-to-end Offline Reinforcement Learning for Glycemia Control](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.10312)\n  - Tristan Beolet, Alice Adenis, Erik Huneker, and Maxime Louis. arXiv, 2023.\n- [Leveraging Optimal Transport for Enhanced Offline Reinforcement Learning in Surgical Robotic Environments](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.08841)\n  - Maryam Zare, Parham M. Kebria, and Abbas Khosravi. arXiv, 2023.\n- [Learning RL-Policies for Joint Beamforming Without Exploration: A Batch Constrained Off-Policy Approach](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.08660)\n  - Heasung Kim and Sravan Ankireddy. arXiv, 2023.\n- [Uncertainty-Aware Decision Transformer for Stochastic Driving Environments](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.16397)\n  - Zenan Li, Fan Nie, Qiao Sun, Fang Da, and Hang Zhao. arXiv, 2023.\n- [Boosting Offline Reinforcement Learning for Autonomous Driving with Hierarchical Latent Skills](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.13614)\n  - Zenan Li, Fan Nie, Qiao Sun, Fang Da, and Hang Zhao. arXiv, 2023.\n- [Robotic Offline RL from Internet Videos via Value-Function Pre-Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.13041)\n  - Chethan Bhateja, Derek Guo, Dibya Ghosh, Anikait Singh, Manan Tomar, Quan Vuong, Yevgen Chebotar, Sergey Levine, and Aviral Kumar. arXiv, 2023.\n- [VAPOR: Holonomic Legged Robot Navigation in Outdoor Vegetation Using Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.07832)\n  - Kasun Weerakoon, Adarsh Jagan Sathyamoorthy, Mohamed Elnoor, and Dinesh Manocha. arXiv, 2023.\n- [RLSynC: Offline-Online Reinforcement Learning for Synthon Completion](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.02671)\n  - Frazier N. Baker, Ziqi Chen, and Xia Ning. 
arXiv, 2023.\n- [Real Robot Challenge 2022: Learning Dexterous Manipulation from Offline Data in the Real World](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.07741)\n  - Nico Gürtler, Felix Widmaier, Cansu Sancaktar, Sebastian Blaes, Pavel Kolev, Stefan Bauer, Manuel Wüthrich, Markus Wulfmeier, Martin Riedmiller, Arthur Allshire, Qiang Wang, Robert McCarthy, Hangyeol Kim, Jongchan Baek Pohang, Wookyong Kwon, Shanliang Qian, Yasunori Toshimitsu, Mike Yan Michelis, Amirhossein Kazemipour, Arman Raayatsanati, Hehui Zheng, Barnabas Gavin Cangan, Bernhard Schölkopf, and Georg Martius. arXiv, 2023.\n- [Reinforced Self-Training (ReST) for Language Modeling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.08998)\n  - Caglar Gulcehre, Tom Le Paine, Srivatsan Srinivasan, Ksenia Konyushkova, Lotte Weerts, Abhishek Sharma, Aditya Siddhant, Alex Ahern, Miaosen Wang, Chenjie Gu, Wolfgang Macherey, Arnaud Doucet, Orhan Firat, and Nando de Freitas. arXiv, 2023.\n- [Aligning Language Models with Offline Reinforcement Learning from Human Feedback](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.12050)\n  - Jian Hu, Li Tao, June Yang, and Chandler Zhou. arXiv, 2023.\n- [Integrating Offline Reinforcement Learning with Transformers for Sequential Recommendation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.14450)\n  - Xumei Xi, Yuke Zhao, Quan Liu, Liwen Ouyang, and Yang Wu. arXiv, 2023.\n- [Offline Skill Graph (OSG): A Framework for Learning and Planning using Offline Reinforcement Learning Skills](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.13630)\n  - Ben-ya Halevy, Yehudit Aperstein, and Dotan Di Castro. arXiv, 2023.\n- [Improving Offline RL by Blending Heuristics](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.00321)\n  - Sinong Geng, Aldo Pacchiano, Andrey Kolobov, and Ching-An Cheng. 
arXiv, 2023.\n- [IQL-TD-MPC: Implicit Q-Learning for Hierarchical Model Predictive Control](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.00867)\n  - Rohan Chitnis, Yingchen Xu, Bobak Hashemi, Lucas Lehnert, Urun Dogan, Zheqing Zhu, and Olivier Delalleau. arXiv, 2023.\n- [Robust Reinforcement Learning Objectives for Sequential Recommender Systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.18820)\n  - Melissa Mozifian, Tristan Sylvain, Dave Evans, and Lili Meng. arXiv, 2023.\n- [The Benefits of Being Distributional: Small-Loss Bounds for Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.15703)\n  - Kaiwen Wang, Kevin Zhou, Runzhe Wu, Nathan Kallus, and Wen Sun. arXiv, 2023.\n- [PROTO: Iterative Policy Regularized Offline-to-Online Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.15669)\n  - Jianxiong Li, Xiao Hu, Haoran Xu, Jingjing Liu, Xianyuan Zhan, and Ya-Qin Zhang. arXiv, 2023.\n- [Matrix Estimation for Offline Reinforcement Learning with Low-Rank Structure](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.15621)\n  - Xumei Xi, Christina Lee Yu, and Yudong Chen. arXiv, 2023.\n- [Offline Experience Replay for Continual Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.13804)\n  - Sibo Gai, Donglin Wang, and Li He. arXiv, 2023.\n- [Causal Decision Transformer for Recommender Systems via Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.07920)\n  - Siyu Wang, Xiaocong Chen, Dietmar Jannach, and Lina Yao. arXiv, 2023.\n- [Data Might be Enough: Bridge Real-World Traffic Signal Control Using Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.10828)\n  - Liang Zhang and Jianming Deng. arXiv, 2023.\n- [User Retention-oriented Recommendation with Decision Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.06347)\n  - Kesen Zhao, Lixin Zou, Xiangyu Zhao, Maolin Wang, and Dawei Yin. 
arXiv, 2023.\n- [Learning to Control Autonomous Fleets from Observation via Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.14833)\n  - Carolin Schmidt, Daniele Gammelli, Francisco Camara Pereira, and Filipe Rodrigues. arXiv, 2023.\n- [INVICTUS: Optimizing Boolean Logic Circuit Synthesis via Synergistic Learning and Search](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.13164)\n  - Animesh Basak Chowdhury, Marco Romanelli, Benjamin Tan, Ramesh Karri, and Siddharth Garg. arXiv, 2023.\n- [Learning Vision-based Robotic Manipulation Tasks Sequentially in Offline Reinforcement Learning Settings](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.13450)\n  - Sudhir Pratap Yadav, Rajendra Nagar, and Suril V. Shah. arXiv, 2023.\n- [Winning Solution of Real Robot Challenge III](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.13019)\n  - Qiang Wang, Robert McCarthy, David Cordova Bulens, and Stephen J. Redmond. arXiv, 2023.\n- [Learning-based MPC from Big Data Using Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.01667)\n  - Shambhuraj Sawant, Akhil S Anand, Dirk Reinhardt, and Sebastien Gros. arXiv, 2023.\n- [Offline Reinforcement Learning for Mixture-of-Expert Dialogue Management](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.10850)\n  - Dhawal Gupta, Yinlam Chow, Aza Tulepbergenov, Mohammad Ghavamzadeh, and Craig Boutilier. NeurIPS, 2023.\n- [Beyond Reward: Offline Preference-guided Policy Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.16217)\n  - Yachen Kang, Diyuan Shi, Jinxin Liu, Li He, and Donglin Wang. ICML, 2023.\n- [DevFormer: A Symmetric Transformer for Context-Aware Device Placement](https:\u002F\u002Fopenreview.net\u002Fforum?id=pWk5MoS04I)\n  - Haeyeon Kim, Minsu Kim, Federico Berto, Joungho Kim, and Jinkyoo Park. ICML, 2023.\n- [On the Effectiveness of Offline RL for Dialogue Response Generation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.12425)\n  - Paloma Sodhi, Felix Wu, Ethan R. 
Elenberg, Kilian Q. Weinberger, and Ryan McDonald. ICML, 2023.\n- [Bidirectional Learning for Offline Model-based Biological Sequence Design](https:\u002F\u002Fopenreview.net\u002Fforum?id=CUORPu6abU)\n  - Can Chen, Yingxue Zhang, Xue Liu, and Mark Coates. ICML, 2023.\n- [ChiPFormer: Transferable Chip Placement via Offline Decision Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.14744)\n  - Yao Lai, Jinxin Liu, Zhentao Tang, Bin Wang, Jianye Hao, and Ping Luo. ICML, 2023.\n- [Semi-Offline Reinforcement Learning for Optimized Text Generation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.09712)\n  - Changyu Chen, Xiting Wang, Yiqiao Jin, Victor Ye Dong, Li Dong, Jie Cao, Yi Liu, and Rui Yan. ICML, 2023.\n- [Neural Constraint Satisfaction: Hierarchical Abstraction for Combinatorial Generalization in Object Rearrangement](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.11373)\n  - Michael Chang, Alyssa L. Dayan, Franziska Meier, Thomas L. Griffiths, Sergey Levine, and Amy Zhang. ICLR, 2023.\n- [Offline RL for Natural Language Generation with Implicit Language Q Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.11871)\n  - Charlie Snell, Ilya Kostrikov, Yi Su, Mengjiao Yang, and Sergey Levine. ICLR, 2023.\n- [Action-Quantized Offline Reinforcement Learning for Robotic Skill Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.11731)\n  - Jianlan Luo, Perry Dong, Jeffrey Wu, Aviral Kumar, Xinyang Geng, and Sergey Levine. CoRL, 2023.\n- [Building Persona Consistent Dialogue Agents with Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.10735)\n  - Ryan Shea and Zhou Yu. EMNLP, 2023.\n- [Dialog Action-Aware Transformer for Dialog Policy Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.02240)\n  - Huimin Wang, Wai-Chung Kwan, and Kam-Fai Wong. 
SIGdial, 2023.\n- [Can Offline Reinforcement Learning Help Natural Language Understanding?](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.03864)\n  - Ziqi Zhang, Yile Wang, Yue Zhang, and Donglin Wang. arXiv, 2022.\n- [NeurIPS 2022 Competition: Driving SMARTS](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.07545)\n  - Amir Rasouli, Randy Goebel, Matthew E. Taylor, Iuliia Kotseruba, Soheil Alizadeh, Tianpei Yang, Montgomery Alban, Florian Shkurti, Yuzheng Zhuang, Adam Scibior, Kasra Rezaee, Animesh Garg, David Meger, Jun Luo, Liam Paull, Weinan Zhang, Xinyu Wang, and Xi Chen. arXiv, 2022.\n- [Controlling Commercial Cooling Systems Using Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.07357)\n  - Jerry Luo, Cosmin Paduraru, Octavian Voicu, Yuri Chervonyi, Scott Munns, Jerry Li, Crystal Qian, Praneet Dutta, Jared Quincy Davis, Ningjia Wu, Xingwei Yang, Chu-Ming Chang, Ted Li, Rob Rose, Mingyan Fan, Hootan Nakhost, Tinglin Liu, Brian Kirkman, Frank Altamura, Lee Cline, Patrick Tonker, Joel Gouker, Dave Uden, Warren Buddy Bryan, Jason Law, Deeni Fatiha, Neil Satra, Juliet Rothenberg, Molly Carlin, Satish Tallapaka, Sims Witherspoon, David Parish, Peter Dolan, Chenyu Zhao, and Daniel J. Mankowitz. arXiv, 2022.\n- [Pre-Training for Robots: Offline RL Enables Learning New Tasks from a Handful of Trials](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.05178) [[code](https:\u002F\u002Fgithub.com\u002FAsap7772\u002FPTR)]\n  - Aviral Kumar, Anikait Singh, Frederik Ebert, Yanlai Yang, Chelsea Finn, and Sergey Levine. arXiv, 2022.\n- [Towards Safe Mechanical Ventilation Treatment Using Deep Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.02552)\n  - Flemming Kondrup, Thomas Jiralerspong, Elaine Lau, Nathan de Lara, Jacob Shkrob, My Duc Tran, Doina Precup, and Sumana Basu. 
IAAI, 2023.\n- [Learning-to-defer for sequential medical decision-making under uncertainty](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.06312)\n  - Shalmali Joshi, Sonali Parbhoo, and Finale Doshi-Velez. TMLR, 2023.\n- [Imitation Is Not Enough: Robustifying Imitation with Reinforcement Learning for Challenging Driving Scenarios](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.11419)\n  - Yiren Lu, Justin Fu, George Tucker, Xinlei Pan, Eli Bronstein, Rebecca Roelofs, Benjamin Sapp, Brandyn White, Aleksandra Faust, Shimon Whiteson, Dragomir Anguelov, and Sergey Levine. arXiv, 2022.\n- [Dialogue Evaluation with Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.00876)\n  - Nurul Lubis, Christian Geishauser, Hsien-Chin Lin, Carel van Niekerk, Michael Heck, Shutong Feng, and Milica Gašić. arXiv, 2022.\n- [Multi-Task Fusion via Reinforcement Learning for Long-Term User Satisfaction in Recommender Systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2208.04560)\n  - Qihua Zhang, Junning Liu, Yuzhuo Dai, Yiyan Qi, Yifan Yuan, Kunlun Zheng, Fan Huang, and Xianfeng Tan. arXiv, 2022.\n- [A Maintenance Planning Framework using Online and Offline Deep Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2208.00808)\n  - Zaharah A. Bukhsh, Nils Jansen, and Hajo Molegraaf. arXiv, 2022.\n- [BCRLSP: An Offline Reinforcement Learning Framework for Sequential Targeted Promotion](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.07790)\n  - Fanglin Chen, Xiao Liu, Bo Tang, Feiyu Xiong, Serim Hwang, and Guomian Zhuang. arXiv, 2022.\n- [Learning Optimal Treatment Strategies for Sepsis Using Offline Reinforcement Learning in Continuous Space](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.11190)\n  - Zeyu Wang, Huiying Zhao, Peng Ren, Yuxi Zhou, and Ming Sheng. 
arXiv, 2022.\n- [Rethinking Reinforcement Learning for Recommendation: A Prompt Perspective](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.07353)\n  - Xin Xin, Tiago Pimentel, Alexandros Karatzoglou, Pengjie Ren, Konstantina Christakopoulou, and Zhaochun Ren. arXiv, 2022.\n- [ARLO: A Framework for Automated Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.10416)\n  - Marco Mussi, Davide Lombarda, Alberto Maria Metelli, Francesco Trovò, and Marcello Restelli. arXiv, 2022.\n- [A Reinforcement Learning-based Volt-VAR Control Dataset and Testing Environment](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.09500)\n  - Yuanqi Gao and Nanpeng Yu. arXiv, 2022.\n- [CHAI: A CHatbot AI for Task-Oriented Dialogue with Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.08426)\n  - Siddharth Verma, Justin Fu, Mengjiao Yang, and Sergey Levine. arXiv, 2022.\n- [Offline Reinforcement Learning for Safer Blood Glucose Control in People with Type 1 Diabetes](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.03376) [[code](https:\u002F\u002Fgithub.com\u002Fhemerson1\u002Foffline-glucose)]\n  - Harry Emerson, Matt Guy, and Ryan McConville. arXiv, 2022.\n- [CIRS: Bursting Filter Bubbles by Counterfactual Interactive Recommender System](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.01266) [[code](https:\u002F\u002Fgithub.com\u002Fchongminggao\u002FCIRS-codes)]\n  - Chongming Gao, Wenqiang Lei, Jiawei Chen, Shiqi Wang, Xiangnan He, Shijun Li, Biao Li, Yuan Zhang, and Peng Jiang. arXiv, 2022.\n- [A Conservative Q-Learning approach for handling distribution shift in sepsis treatment strategies](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.13884)\n  - Pramod Kaushik, Sneha Kummetha, Perusha Moodley, and Raju S. Bapi. arXiv, 2022.\n- [Optimizing Trajectories for Highway Driving with Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.10949)\n  - Branka Mirchevska, Moritz Werling, and Joschka Boedecker. 
arXiv, 2022.\n- [Offline Deep Reinforcement Learning for Dynamic Pricing of Consumer Credit](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.03003)\n  - Raad Khraishi and Ramin Okhrati. arXiv, 2022.\n- [Offline Reinforcement Learning for Mobile Notifications](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.03867)\n  - Yiping Yuan, Ajith Muralidharan, Preetam Nandy, Miao Cheng, and Prakruthi Prabhakar. arXiv, 2022.\n- [Offline Reinforcement Learning for Road Traffic Control](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.02381)\n  - Mayuresh Kunjir and Sanjay Chawla. arXiv, 2022.\n- [Sustainable Online Reinforcement Learning for Auto-bidding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.07006)\n  - Zhiyu Mou, Yusen Huo, Rongquan Bai, Mingzhou Xie, Chuan Yu, Jian Xu, and Bo Zheng. NeurIPS, 2022.\n- [Leveraging Factored Action Spaces for Efficient Offline Reinforcement Learning in Healthcare](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.01738)\n  - Shengpu Tang, Maggie Makar, Michael W. Sjoding, Finale Doshi-Velez, and Jenna Wiens. NeurIPS, 2022.\n- [Multi-objective Optimization of Notifications Using Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.03029)\n  - Prakruthi Prabhakar, Yiping Yuan, Guangyu Yang, Wensheng Sun, and Ajith Muralidharan. KDD, 2022.\n- [Pessimism meets VCG: Learning Dynamic Mechanism Design via Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.02450)\n  - Boxiang Lyu, Zhaoran Wang, Mladen Kolar, and Zhuoran Yang. ICML, 2022.\n- [GPT-Critic: Offline Reinforcement Learning for End-to-End Task-Oriented Dialogue Systems](https:\u002F\u002Fopenreview.net\u002Fforum?id=qaxhBG1UUaS)\n  - Youngsoo Jang, Jongmin Lee, and Kee-Eung Kim. ICLR, 2022.\n- [Offline Reinforcement Learning for Visual Navigation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.08244)\n  - Dhruv Shah, Arjun Bhorkar, Hrish Leen, Ilya Kostrikov, Nick Rhinehart, and Sergey Levine. 
CoRL, 2022.\n- [Semi-Markov Offline Reinforcement Learning for Healthcare](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.09365)\n  - Mehdi Fatemi, Mary Wu, Jeremy Petch, Walter Nelson, Stuart J. Connolly, Alexander Benz, Anthony Carnicelli, and Marzyeh Ghassemi. CHIL, 2022.\n- [Automate Page Layout Optimization: An Offline Deep Q-Learning Approach](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3523227.3547400)\n  - Zhou Qin and Wenyang Liu. RecSys, 2022.\n- [RL4RS: A Real-World Benchmark for Reinforcement Learning based Recommender System](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.11073) [[code](https:\u002F\u002Fgithub.com\u002FfuxiAIlab\u002FRL4RS)] [[dataset](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1YbPtPyYrMvMGOuqD4oHvK0epDtEhEb9v\u002Fview)]\n  - Kai Wang, Zhene Zou, Yue Shang, Qilin Deng, Minghao Zhao, Yile Liang, Runze Wu, Jianrong Tao, Xudong Shen, Tangjie Lyu, and Changjie Fan. arXiv, 2021.\n- [Compressive Features in Offline Reinforcement Learning for Recommender Systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.08817)\n  - Hung Nguyen, Minh Nguyen, Long Pham, and Jennifer Adorno Nieves. arXiv, 2021.\n- [Causal-aware Safe Policy Improvement for Task-oriented dialogue](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.06370)\n  - Govardana Sachithanandam Ramachandran, Kazuma Hashimoto, and Caiming Xiong. arXiv, 2021.\n- [Offline Contextual Bandits for Wireless Network Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.08587)\n  - Miguel Suau, Alexandros Agapitos, David Lynch, Derek Farrell, Mingqi Zhou, and Aleksandar Milenovic. arXiv, 2021.\n- [Identifying Decision Points for Safe and Interpretable Reinforcement Learning in Hypotension Treatment](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.03309)\n  - Kristine Zhang, Yuanheng Wang, Jianzhun Du, Brian Chu, Leo Anthony Celi, Ryan Kindle, and Finale Doshi-Velez. 
arXiv, 2021.\n- [Offline Reinforcement Learning for Autonomous Driving with Safety and Exploration Enhancement](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.07067)\n  - Tianyu Shi, Dong Chen, Kaian Chen, and Zhaojian Li. arXiv, 2021.\n- [Medical Dead-ends and Learning to Identify High-risk States and Treatments](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.04186)\n  - Mehdi Fatemi, Taylor W. Killian, Jayakumar Subramanian, and Marzyeh Ghassemi. arXiv, 2021.\n- [An Offline Deep Reinforcement Learning for Maintenance Decision-Making](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.15050)\n  - Hamed Khorasgani, Haiyan Wang, Chetan Gupta, and Ahmed Farahat. arXiv, 2021.\n- [Learning Language-Conditioned Robot Behavior from Offline Data and Crowd-Sourced Annotation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.01115)\n  - Suraj Nair, Eric Mitchell, Kevin Chen, Brian Ichter, Silvio Savarese, and Chelsea Finn. arXiv, 2021.\n- [Offline-Online Reinforcement Learning for Energy Pricing in Office Demand Response: Lowering Energy and Data Costs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2108.06594)\n  - Doseok Jang, Lucas Spangher, Manan Khattar, Utkarsha Agwan, Selvaprabuh Nadarajah, and Costas Spanos. arXiv, 2021.\n- [Offline reinforcement learning with uncertainty for treatment strategies in sepsis](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.04491)\n  - Ran Liu, Joseph L. Greenstein, James C. Fackler, Jules Bergmann, Melania M. Bembea, and Raimond L. Winslow. arXiv, 2021.\n- [Improving Long-Term Metrics in Recommendation Systems using Short-Horizon Offline RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.00589)\n  - Bogdan Mazoure, Paul Mineiro, Pavithra Srinath, Reza Sharifi Sedeh, Doina Precup, and Adith Swaminathan. 
arXiv, 2021.\n- [Safe Model-based Off-policy Reinforcement Learning for Eco-Driving in Connected and Automated Hybrid Electric Vehicles](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.11640)\n  - Zhaoxuan Zhu, Nicola Pivaro, Shobhit Gupta, Abhishek Gupta, and Marcello Canova. arXiv, 2021.\n- [pH-RL: A personalization architecture to bring reinforcement learning to health practice](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.15908)\n  - Ali el Hassouni, Mark Hoogendoorn, Marketa Ciharova, Annet Kleiboer, Khadicha Amarti, Vesa Muhonen, Heleen Riper, and A. E. Eiben. arXiv, 2021.\n- [DeepThermal: Combustion Optimization for Thermal Power Generating Units Using Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.11492) [[podcast](https:\u002F\u002Fwww.talkrl.com\u002Fepisodes\u002Fxianyuan-zhan)]\n  - Xianyuan Zhan, Haoran Xu, Yue Zhang, Yusen Huo, Xiangyu Zhu, Honglei Yin, and Yu Zheng. arXiv, 2021.\n- [Personalization for Web-based Services using Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.05612)\n  - Pavlos Athanasios Apostolopoulos, Zehui Wang, Hanson Wang, Chad Zhou, Kittipat Virochsiri, Norm Zhou, and Igor L. Markov. arXiv, 2021.\n- [BCORLE(λ): An Offline Reinforcement Learning and Evaluation Framework for Coupons Allocation in E-commerce Market](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2021\u002Fhash\u002Fab452534c5ce28c4fbb0e102d4a4fb2e-Abstract.html)\n  - Yang Zhang, Bo Tang, Qingyu Yang, Dou An, Hongyin Tang, Chenyang Xi, Xueying LI, and Feiyu Xiong. NeurIPS, 2021.\n- [Safe Driving via Expert Guided Policy Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.06831) [[website](https:\u002F\u002Fdecisionforce.github.io\u002FEGPO\u002F)] [[code](https:\u002F\u002Fgithub.com\u002Fdecisionforce\u002FEGPO)]\n  - Zhenghao Peng, Quanyi Li, Chunxiao Liu, and Bolei Zhou. 
CoRL, 2021.\n- [A General Offline Reinforcement Learning Framework for Interactive Recommendation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.00678)\n  - Teng Xiao and Donglin Wang. AAAI, 2021.\n- [Value Function is All You Need: A Unified Learning Framework for Ride Hailing Platforms](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.08791)\n  - Xiaocheng Tang, Fan Zhang, Zhiwei (Tony) Qin, Yansheng Wang, Dingyuan Shi, Bingchen Song, Yongxin Tong, Hongtu Zhu, and Jieping Ye. KDD, 2021.\n- [Discovering an Aid Policy to Minimize Student Evasion Using Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.10258)\n  - Leandro M. de Lima and Renato A. Krohling. IJCNN, 2021.\n- [Learning robust driving policies without online exploration](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.08070)\n  - Daniel Graves, Nhat M. Nguyen, Kimia Hassanzadeh, Jun Jin, and Jun Luo. ICRA, 2021.\n- [Engagement Rewarded Actor-Critic with Conservative Q-Learning for Speech-Driven Laughter Backchannel Generation](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3462244.3479944)\n  - Öykü Zeynep Bayramoğlu, Engin Erzin, Tevfik Metin Sezgin, and Yücel Yemez. ICMI, 2021.\n- [Network Intrusion Detection Based on Extended RBF Neural Network With Offline Reinforcement Learning](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9612220)\n  - Manuel Lopez-Martin, Antonio Sanchez-Esguevillas, Juan Ignacio Arribas, and Belen Carro. IEEE Access, 2021.\n- [Towards Accelerating Offline RL based Recommender Systems](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3486001.3486244)\n  - Mayank Mishra, Rekha Singhal, and Ravi Singh. AIMLSystems, 2021.\n- [Offline Meta-level Model-based Reinforcement Learning Approach for Cold-Start Recommendation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.02476)\n  - Yanan Wang, Yong Ge, Li Li, Rui Chen, and Tong Xu. 
arXiv, 2020.\n- [Batch-Constrained Distributional Reinforcement Learning for Session-based Recommendation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.08984)\n  - Diksha Garg, Priyanka Gupta, Pankaj Malhotra, Lovekesh Vig, and Gautam Shroff. arXiv, 2020.\n- [An Empirical Study of Representation Learning for Reinforcement Learning in Healthcare](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.11235)\n  - Taylor W. Killian, Haoran Zhang, Jayakumar Subramanian, Mehdi Fatemi, and Marzyeh Ghassemi. arXiv, 2020.\n- [Learning from Human Feedback: Challenges for Real-World Reinforcement Learning in NLP](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.02511)\n  - Julia Kreutzer, Stefan Riezler, and Carolin Lawrence. arXiv, 2020.\n- [Remote Electrical Tilt Optimization via Safe Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.05842)\n  - Filippo Vannella, Grigorios Iakovidis, Ezeddin Al Hakim, Erik Aumayr, and Saman Feghhi. arXiv, 2020.\n- [An Optimistic Perspective on Offline Reinforcement Learning](http:\u002F\u002Fproceedings.mlr.press\u002Fv119\u002Fagarwal20c.html) [[website](https:\u002F\u002Foffline-rl.github.io\u002F)] [[blog](https:\u002F\u002Fai.googleblog.com\u002F2020\u002F04\u002Fan-optimistic-perspective-on-offline.html)]\n  - Rishabh Agarwal, Dale Schuurmans, and Mohammad Norouzi. ICML, 2020.\n- [Policy Teaching via Environment Poisoning: Training-time Adversarial Attacks against Reinforcement Learning](http:\u002F\u002Fproceedings.mlr.press\u002Fv119\u002Frakhsha20a.html)\n  - Amin Rakhsha, Goran Radanovic, Rati Devidze, Xiaojin Zhu, and Adish Singla. ICML, 2020.\n- [Offline Contextual Multi-armed Bandits for Mobile Health Interventions: A Case Study on Emotion Regulation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.09472)\n  - Mawulolo K. Ameko, Miranda L. Beltzer, Lihua Cai, Mehdi Boukhechba, Bethany A. Teachman, and Laura E. Barnes. 
RecSys, 2020.\n- [Human-centric Dialog Training via Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.05848)\n  - Natasha Jaques, Judy Hanwen Shen, Asma Ghandeharioun, Craig Ferguson, Agata Lapedriza, Noah Jones, Shixiang Shane Gu, and Rosalind Picard. EMNLP, 2020.\n- [Definition and evaluation of model-free coordination of electrical vehicle charging with reinforcement learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1809.10679)\n  - Nasrin Sadeghianpourhamami, Johannes Deleu, and Chris Develder. IEEE T SMART GRID, 2020.\n- [Optimal Tap Setting of Voltage Regulation Transformers Using Batch Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1807.10997)\n  - Hanchen Xu, Alejandro D. Domínguez-García, and Peter W. Sauer. IEEE T POWER SYSTEMS, 2020.\n- [Way Off-Policy Batch Deep Reinforcement Learning of Implicit Human Preferences in Dialog](https:\u002F\u002Farxiv.org\u002Fabs\u002F1907.00456)\n  - Natasha Jaques, Asma Ghandeharioun, Judy Hanwen Shen, Craig Ferguson, Agata Lapedriza, Noah Jones, Shixiang Gu, and Rosalind Picard. arXiv, 2019.\n- [Optimized cost function for demand response coordination of multiple EV charging stations using reinforcement learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.01654)\n  - Manu Lahariya, Nasrin Sadeghianpourhamami, and Chris Develder. BuildSys, 2019.\n- [A Clustering-Based Reinforcement Learning Approach for Tailored Personalization of E-Health Interventions](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.03592)\n  - Ali el Hassouni, Mark Hoogendoorn, Martijn van Otterlo, A. E. Eiben, Vesa Muhonen, and Eduardo Barbaro. arXiv, 2018.\n- [Generating Interpretable Fuzzy Controllers using Particle Swarm Optimization and Genetic Programming](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.10960)\n  - Daniel Hein, Steffen Udluft, and Thomas A. Runkler. 
GECCO, 2018.\n- [End-to-End Offline Goal-Oriented Dialog Policy Learning via Policy Gradient](https:\u002F\u002Farxiv.org\u002Fabs\u002F1712.02838)\n  - Li Zhou, Kevin Small, Oleg Rokhlenko, and Charles Elkan. arXiv, 2017.\n- [Batch Reinforcement Learning on the Industrial Benchmark: First Experiences](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.07262)\n  - Daniel Hein, Steffen Udluft, Michel Tokic, Alexander Hentschel, Thomas A. Runkler, and Volkmar Sterzing. IJCNN, 2017.\n- [Policy Networks with Two-Stage Training for Dialogue Systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F1606.03152)\n  - Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, and Kaheer Suleman. SIGDial, 2016.\n- [Adaptive Treatment of Epilepsy via Batch-mode Reinforcement Learning](https:\u002F\u002Fwww.aaai.org\u002FLibrary\u002FIAAI\u002F2008\u002Fiaai08-008.php)\n  - Arthur Guez, Robert D. Vincent, Massimo Avoli, and Joelle Pineau. IAAI, 2008.\n\n### Off-Policy Evaluation and Learning: Theory\u002FMethods\n#### Off-Policy Evaluation: Contextual Bandits\n- [Off-Policy Evaluation of Slate Bandit Policies via Optimizing Abstraction](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.02171)\n  - Haruka Kiyohara, Masahiro Nomura, and Yuta Saito. WWW, 2024.\n- [Distributionally Robust Policy Evaluation under General Covariate Shift in Contextual Bandits](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.11353)\n  - Yihong Guo, Hao Liu, Yisong Yue, and Anqi Liu. arXiv, 2024.\n- [Off-Policy Evaluation for Large Action Spaces via Conjunct Effect Modeling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.08062)\n  - Yuta Saito, Qingyang Ren, and Thorsten Joachims. ICML, 2023.\n- [Multiply Robust Off-Policy Evaluation and Learning under Truncation by Death](https:\u002F\u002Fopenreview.net\u002Fforum?id=FQlsEvyQ4N)\n  - Jianing Chu, Shu Yang, and Wenbin Lu. ICML, 2023.\n- [Off-Policy Evaluation of Ranking Policies under Diverse User Behavior](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.15098)\n  - Haruka Kiyohara, Masatoshi Uehara, Yusuke Narita, Nobuyuki Shimizu, Yasuo Yamamoto, and Yuta Saito. KDD, 2023.\n- [Policy-Adaptive Estimator Selection for Off-Policy Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.13904)\n  - Takuma Udagawa, Haruka Kiyohara, Yusuke Narita, Yuta Saito, and Kei Tateno. AAAI, 2023.\n- [Variance-Optimal Augmentation Logging for Counterfactual Evaluation in Contextual Bandits](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.01721)\n  - Aaron David Tucker and Thorsten Joachims. WSDM, 2023.\n- [Offline Policy Evaluation in Large Action Spaces via Outcome-Oriented Action Grouping](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3543507.3583448)\n  - Jie Peng, Hao Zou, Jiashuo Liu, Shaoming Li, Yibao Jiang, Jian Pei, and Peng Cui. WWW, 2023.\n- [Off-Policy Evaluation for Large Action Spaces via Policy Convolution](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.15433)\n  - Noveen Sachdeva, Lequn Wang, Dawen Liang, Nathan Kallus, and Julian McAuley. arXiv, 2023.\n- [Distributional Off-Policy Evaluation for Slate Recommendations](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.14165)\n  - Shreyas Chaudhari, David Arbour, Georgios Theocharous, and Nikos Vlassis. arXiv, 2023.\n- [Debiased Machine Learning and Network Cohesion for Doubly-Robust Differential Reward Models in Contextual Bandits](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.06403)\n  - Easton K. Huch, Jieru Shi, Madeline R. Abbott, Jessica R. Golbus, Alexander Moreno, and Walter H. Dempsey. arXiv, 2023.\n- [Doubly Robust Estimator for Off-Policy Evaluation with Large Action Spaces](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.03443)\n  - Tatsuhiro Shimizu. arXiv, 2023.\n- [Off-Policy Evaluation with Out-of-Sample Guarantees](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.08649)\n  - Sofia Ek and Dave Zachariah. arXiv, 2023.\n- [Quantile Off-Policy Evaluation via Deep Conditional Generative Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.14466)\n  - Yang Xu, Chengchun Shi, Shikai Luo, Lan Wang, and Rui Song. arXiv, 2023.\n- [Doubly Robust Off-Policy Evaluation for Ranking Policies under the Cascade Behavior Model](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.01562) [[code](https:\u002F\u002Fgithub.com\u002Faiueola\u002Fwsdm2022-cascade-dr)]\n  - Haruka Kiyohara, Yuta Saito, Tatsuya Matsuhiro, Yusuke Narita, Nobuyuki Shimizu, and Yasuo Yamamoto. WSDM, 2022.\n- [Off-Policy Evaluation for Large Action Spaces via Embeddings](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.06317) [[code](https:\u002F\u002Fgithub.com\u002Fusaito\u002Ficml2022-mips)] [[video](https:\u002F\u002Fyoutu.be\u002FHrqhv-AsMRE)]\n  - Yuta Saito and Thorsten Joachims. ICML, 2022.\n- [Doubly Robust Distributionally Robust Off-Policy Evaluation and Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.09667)\n  - Nathan Kallus, Xiaojie Mao, Kaiwen Wang, and Zhengyuan Zhou. ICML, 2022.\n- [Local Metric Learning for Off-Policy Evaluation in Contextual Bandits with Continuous Actions](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.13373)\n  - Haanvid Lee, Jongmin Lee, Yunseon Choi, Wonseok Jeon, Byung-Jun Lee, Yung-Kyun Noh, and Kee-Eung Kim. NeurIPS, 2022.\n- [Conformal Off-Policy Prediction in Contextual Bandits](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.04405)\n  - Muhammad Faaiz Taufiq, Jean-Francois Ton, Rob Cornish, Yee Whye Teh, and Arnaud Doucet. NeurIPS, 2022.\n- [Off-Policy Evaluation with Policy-Dependent Optimization Response](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.12958)\n  - Wenshuo Guo, Michael I. Jordan, and Angela Zhou. NeurIPS, 2022.\n- [Off-Policy Evaluation with Deficient Support Using Side Information](https:\u002F\u002Fopenreview.net\u002Fforum?id=uFSrUpapQ5K)\n  - Nicolò Felicioni, Maurizio Ferrari Dacrema, Marcello Restelli, and Paolo Cremonesi. NeurIPS, 2022.\n- [Towards Robust Off-Policy Evaluation via Human Inputs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.08682)\n  - Harvineet Singh, Shalmali Joshi, Finale Doshi-Velez, and Himabindu Lakkaraju. AIES, 2022.\n- [Off-Policy Evaluation for Learning-to-Rank via Interpolating the Item-Position Model and the Position-Based Model](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.09512)\n  - Alexander Buchholz, Ben London, Giuseppe di Benedetto, and Thorsten Joachims. arXiv, 2022.\n- [Bayesian Counterfactual Mean Embeddings and Off-Policy Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.01518)\n  - Diego Martinez-Taboada and Dino Sejdinovic. arXiv, 2022.\n- [Anytime-Valid Off-Policy Inference for Contextual Bandits](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.10768)\n  - Ian Waudby-Smith, Lili Wu, Aaditya Ramdas, Nikos Karampatziakis, and Paul Mineiro. arXiv, 2022.\n- [Off-Policy Estimation of Linear Functionals: Non-Asymptotic Theory for Semi-Parametric Efficiency](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.13075)\n  - Wenlong Mou, Martin J. Wainwright, and Peter L. Bartlett. arXiv, 2022.\n- [Off-Policy Evaluation in Embedded Spaces](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.02807)\n  - Jaron J. R. Lee, David Arbour, and Georgios Theocharous. arXiv, 2022.\n- [Safe Exploration for Efficient Policy Evaluation and Comparison](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.13234)\n  - Runzhe Wan, Branislav Kveton, and Rui Song. arXiv, 2022.\n- [Inverse Propensity Score based Offline Estimator for Deterministic Ranking Lists using Position Bias](https:\u002F\u002Farxiv.org\u002Fabs\u002F2208.14980)\n  - Nick Wood and Sumit Sidana. arXiv, 2022.\n- [Subgaussian and Differentiable Importance Sampling for Off-Policy Evaluation and Learning](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2021\u002Fhash\u002F4476b929e30dd0c4e8bdbcc82c6ba23a-Abstract.html)\n  - Alberto Maria Metelli, Alessio Russo, and Marcello Restelli. NeurIPS, 2021.\n- [Control Variates for Slate Off-Policy Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.07914)\n  - Nikos Vlassis, Ashok Chandrashekar, Fernando Amat Gil, and Nathan Kallus. NeurIPS, 2021.\n- [Deep Jump Learning for Off-Policy Evaluation in Continuous Treatment Settings](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.15963)\n  - Hengrui Cai, Chengchun Shi, Rui Song, and Wenbin Lu. NeurIPS, 2021.\n- [Optimal Off-Policy Evaluation from Multiple Logging Policies](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.11002) [[code](https:\u002F\u002Fgithub.com\u002FCausalML\u002FMultipleLoggers)]\n  - Nathan Kallus, Yuta Saito, and Masatoshi Uehara. ICML, 2021.\n- [Off-Policy Confidence Sequences](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.09540)\n  - Nikos Karampatziakis, Paul Mineiro, and Aaditya Ramdas. ICML, 2021.\n- [Confident Off-Policy Evaluation and Selection through Self-Normalized Importance Weighting](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.10460) [[video](https:\u002F\u002Fyoutu.be\u002F0MYRwW6BdvU)]\n  - Ilja Kuzborskij, Claire Vernade, András György, and Csaba Szepesvári. AISTATS, 2021.\n- [Off-Policy Evaluation Using Information Borrowing and Context-Based Switching](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.09865)\n  - Sutanoy Dasgupta, Yabo Niu, Kishan Panaganti, Dileep Kalathil, Debdeep Pati, and Bani Mallick. arXiv, 2021.\n- [Identification of Subgroups With Similar Benefits in Off-Policy Policy Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.14272)\n  - Ramtin Keramati, Omer Gottesman, Leo Anthony Celi, Finale Doshi-Velez, and Emma Brunskill. arXiv, 2021.\n- [Robust On-Policy Data Collection for Data-Efficient Policy Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.14552)\n  - Rujie Zhong, Josiah P. Hanna, Lukas Schäfer, and Stefano V. Albrecht. arXiv, 2021.\n- [Off-Policy Evaluation via Adaptive Weighting with Data from Contextual Bandits](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.02029)\n  - Ruohan Zhan, Vitor Hadad, David A. Hirshberg, and Susan Athey. arXiv, 2021.\n- [Off-Policy Risk Assessment in Contextual Bandits](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.08977)\n  - Audrey Huang, Liu Leqi, Zachary C. Lipton, and Kamyar Azizzadenesheli. arXiv, 2021.\n- [Off-Policy Evaluation of Slate Policies under Bayes Risk](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.02553)\n  - Nikos Vlassis, Fernando Amat Gil, and Ashok Chandrashekar. arXiv, 2021.\n- [A Practical Guide of Off-Policy Evaluation for Bandit Problems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.12470)\n  - Masahiro Kato, Kenshi Abe, Kaito Ariu, and Shota Yasui. arXiv, 2020.\n- [Off-Policy Evaluation and Learning for External Validity under a Covariate Shift](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.11642)\n  - Masatoshi Uehara, Masahiro Kato, and Shota Yasui. NeurIPS, 2020.\n- [Counterfactual Evaluation of Slate Recommendations with Sequential Reward Interactions](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.12986)\n  - James McInerney, Brian Brost, Praveen Chandar, Rishabh Mehrotra, and Ben Carterette. KDD, 2020.\n- [Doubly Robust Off-Policy Evaluation with Shrinkage](http:\u002F\u002Fproceedings.mlr.press\u002Fv119\u002Fsu20a.html)\n  - Yi Su, Maria Dimakopoulou, Akshay Krishnamurthy, and Miroslav Dudik. ICML, 2020.\n- [Adaptive Estimator Selection for Off-Policy Evaluation](http:\u002F\u002Fproceedings.mlr.press\u002Fv119\u002Fsu20d.html) [[video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=r8ZDuC71lCs)]\n  - Yi Su, Pavithra Srinath, and Akshay Krishnamurthy. ICML, 2020.\n- [Distributionally Robust Policy Evaluation and Learning in Offline Contextual Bandits](http:\u002F\u002Fproceedings.mlr.press\u002Fv119\u002Fsi20a.html)\n  - Nian Si, Fan Zhang, Zhengyuan Zhou, and Jose Blanchet. ICML, 2020.\n- [Improving Offline Contextual Bandits with Distributional Robustness](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.06835)\n  - Otmane Sakhi, Louis Faury, and Flavian Vasile. arXiv, 2020.\n- [Balanced Off-Policy Evaluation in General Action Spaces](http:\u002F\u002Fproceedings.mlr.press\u002Fv108\u002Fsondhi20a.html)\n  - Arjun Sondhi, David Arbour, and Drew Dimmery. AISTATS, 2020.\n- [Policy Evaluation with Latent Confounders via Optimal Balance](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2019\u002Fhash\u002F7c4bf50b715509a963ce81b168ca674b-Abstract.html)\n  - Andrew Bennett and Nathan Kallus. NeurIPS, 2019.\n- [On the Design of Estimators for Bandit Off-Policy Evaluation](http:\u002F\u002Fproceedings.mlr.press\u002Fv97\u002Fvlassis19a.html)\n  - Nikos Vlassis, Aurelien Bibaut, Maria Dimakopoulou, and Tony Jebara. ICML, 2019.\n- [CAB: Continuous Adaptive Blending for Policy Evaluation and Learning](http:\u002F\u002Fproceedings.mlr.press\u002Fv97\u002Fsu19a.html)\n  - Yi Su, Lequn Wang, Michele Santacatterina, and Thorsten Joachims. ICML, 2019.\n- [Focused Context Balancing for Robust Offline Policy Evaluation](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3292500.3330852)\n  - Hao Zou, Kun Kuang, Boqi Chen, Peixuan Chen, and Peng Cui. KDD, 2019.\n- [When People Change Their Mind: Off-Policy Evaluation in Non-Stationary Recommendation Environments](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3289600.3290958)\n  - Rolf Jagerman, Ilya Markov, and Maarten de Rijke. WSDM, 2019.\n- [Policy Evaluation and Optimization with Continuous Treatments](http:\u002F\u002Fproceedings.mlr.press\u002Fv84\u002Fkallus18a.html)\n  - Nathan Kallus and Angela Zhou. AISTATS, 2018.\n- [Confounding-Robust Policy Improvement](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2018\u002Fhash\u002F3a09a524440d44d7f19870070a5ad42f-Abstract.html)\n  - Nathan Kallus and Angela Zhou. NeurIPS, 2018.\n- [Balanced Policy Evaluation and Learning](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2018\u002Fhash\u002F6616758da438b02b8d360ad83a5b3d77-Abstract.html)\n  - Nathan Kallus. NeurIPS, 2018.\n- [Offline Evaluation of Ranking Policies with Click Models](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3219819.3220028)\n  - Shuai Li, Yasin Abbasi-Yadkori, Branislav Kveton, S. Muthukrishnan, Vishwa Vinay, and Zheng Wen. KDD, 2018.\n- [Effective Evaluation Using Logged Bandit Feedback from Multiple Loggers](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.06180)\n  - Aman Agarwal, Soumya Basu, Tobias Schnabel, and Thorsten Joachims. KDD, 2017.\n- [Off-Policy Evaluation for Slate Recommendation](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2017\u002Fhash\u002F5352696a9ca3397beb79f116f3a33991-Abstract.html)\n  - Adith Swaminathan, Akshay Krishnamurthy, Alekh Agarwal, Miroslav Dudík, John Langford, Damien Jose, and Imed Zitouni. NeurIPS, 2017.\n- [Optimal and Adaptive Off-Policy Evaluation in Contextual Bandits](https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.01205)\n  - Yu-Xiang Wang, Alekh Agarwal, and Miroslav Dudik. ICML, 2017.\n- [Data-Efficient Policy Evaluation Through Behavior Policy Search](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fhanna17a.html)\n  - Josiah P. Hanna, Philip S. Thomas, Peter Stone, and Scott Niekum. ICML, 2017.\n- [Doubly Robust Policy Evaluation and Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F1503.02834)\n  - Miroslav Dudík, Dumitru Erhan, John Langford, and Lihong Li. ICML, 2011.\n- [Unbiased Offline Evaluation of Contextual-bandit-based News Article Recommendation Algorithms](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F1935826.1935878)\n  - Lihong Li, Wei Chu, John Langford, and Xuanhui Wang. WSDM, 2011.\n\n#### Off-Policy Evaluation: Reinforcement Learning\n- [Distributional Off-policy Evaluation with Bellman Residual Minimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.01900)\n  - Sungee Hong, Zhengling Qi, and Raymond K. W. Wong. arXiv, 2024.\n- [Future-Dependent Value-Based Off-Policy Evaluation in POMDPs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.13081)\n  - Masatoshi Uehara, Haruka Kiyohara, Andrew Bennett, Victor Chernozhukov, Nan Jiang, Nathan Kallus, Chengchun Shi, and Wen Sun. NeurIPS, 2023.\n- [Marginal Density Ratio for Off-Policy Evaluation in Contextual Bandits](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.01457)\n  - Muhammad Faaiz Taufiq, Arnaud Doucet, Rob Cornish, and Jean-Francois Ton. 
NeurIPS, 2023.\n- [State-Action Similarity-Based Representations for Off-Policy Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.18409)\n  - Brahma S. Pavse and Josiah P. Hanna. NeurIPS, 2023.\n- [Off-Policy Evaluation for Human Feedback](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.07123)\n  - Qitong Gao, Juncheng Dong, Vahid Tarokh, Min Chi, and Miroslav Pajic. NeurIPS, 2023.\n- [Counterfactual-Augmented Importance Sampling for Semi-Offline Policy Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.17146)\n  - Shengpu Tang and Jenna Wiens. NeurIPS, 2023.\n- [An Instrumental Variable Approach to Confounded Off-Policy Evaluation](https:\u002F\u002Fopenreview.net\u002Fforum?id=ZVRWKr3ApD)\n  - Yang Xu, Jin Zhu, Chengchun Shi, Shikai Luo, and Rui Song. ICML, 2023.\n- [Semiparametrically Efficient Off-Policy Evaluation in Linear Markov Decision Processes](https:\u002F\u002Fopenreview.net\u002Fforum?id=6lP80vBiI6)\n  - Chuhan Xie, Wenhao Yang, and Zhihua Zhang. ICML, 2023.\n- [Distributional Offline Policy Evaluation with Predictive Error Guarantees](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.09456)\n  - Runzhe Wu, Masatoshi Uehara, and Wen Sun. ICML, 2023.\n- [The Optimal Approximation Factors in Misspecified Off-Policy Value Function Estimation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.13332)\n  - Philip Amortila, Nan Jiang, and Csaba Szepesvári. ICML, 2023.\n- [Revisiting Bellman Errors for Offline Model Selection](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.00141) [[code](https:\u002F\u002Fgithub.com\u002Fjzitovsky\u002FSBV)]\n  - Joshua P. Zitovsky, Daniel de Marchi, Rishabh Agarwal, and Michael R. Kosorok. ICML, 2023.\n- [Scaling Marginalized Importance Sampling to High-Dimensional State-Spaces via State Abstraction](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.07486)\n  - Brahma S. Pavse and Josiah P. Hanna. 
AAAI, 2023.\n- [Variational Latent Branching Model for Off-Policy Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.12056)\n  - Qitong Gao, Ge Gao, Min Chi, and Miroslav Pajic. ICLR, 2023.\n- [Multiple-policy High-confidence Policy Evaluation](https:\u002F\u002Fproceedings.mlr.press\u002Fv206\u002Fdann23a.html)\n  - Chris Dann, Mohammad Ghavamzadeh, and Teodor V. Marinov. AISTATS, 2023.\n- [Off-Policy Evaluation with Online Adaptation for Robot Exploration in Challenging Environments](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.03140)\n  - Yafei Hu, Junyi Geng, Chen Wang, John Keller, and Sebastian Scherer. RA-L, 2023.\n- [Conservative Exploration for Policy Optimization via Off-Policy Policy Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.15458)\n  - Paul Daoudi, Mathias Formoso, Othman Gaizi, Achraf Azize, and Evrard Garcelon. arXiv, 2023.\n- [Robust Offline Policy Evaluation and Optimization with Heavy-Tailed Rewards](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.18715)\n  - Jin Zhu, Runzhe Wan, Zhengling Qi, Shikai Luo, and Chengchun Shi. arXiv, 2023.\n- [When is Offline Policy Selection Sample Efficient for Reinforcement Learning?](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.02355)\n  - Vincent Liu, Prabhat Nagarajan, Andrew Patterson, and Martha White. arXiv, 2023.\n- [Sample Complexity of Preference-Based Nonparametric Off-Policy Evaluation with Deep Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.10556)\n  - Zihao Li, Xiang Ji, Minshuo Chen, and Mengdi Wang. arXiv, 2023.\n- [Evaluation of Active Feature Acquisition Methods for Static Feature Settings](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.03619)\n  - Henrik von Kleist, Alireza Zamanian, Ilya Shpitser, and Narges Ahmidi. arXiv, 2023.\n- [Distributional Shift-Aware Off-Policy Interval Estimation: A Unified Error Quantification Framework](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.13278)\n  - Wenzhuo Zhou, Yuhan Li, Ruoqing Zhu, and Annie Qu. 
arXiv, 2023.\n- [Marginalized Importance Sampling for Off-Environment Policy Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.01807)\n  - Pulkit Katdare, Nan Jiang, and Katherine Driggs-Campbell. arXiv, 2023.\n- [Statistically Efficient Variance Reduction with Double Policy Estimation for Off-Policy Evaluation in Sequence-Modeled Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.14897)\n  - Hanhan Zhou, Tian Lan, and Vaneet Aggarwal. arXiv, 2023.\n- [Asymptotically Unbiased Off-Policy Policy Evaluation when Reusing Old Data in Nonstationary Environments](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.11725)\n  - Vincent Liu, Yash Chandak, Philip Thomas, and Martha White. arXiv, 2023.\n- [Off-policy Evaluation in Doubly Inhomogeneous Environments](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.08719)\n  - Zeyu Bian, Chengchun Shi, Zhengling Qi, and Lan Wang. arXiv, 2023.\n- [Offline Policy Evaluation for Reinforcement Learning with Adaptively Collected Data](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.14063)\n  - Sunil Madhow, Dan Xiao, Ming Yin, and Yu-Xiang Wang. arXiv, 2023.\n- [π2vec: Policy Representations with Successor Features](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.09800)\n  - Gianluca Scarpellini, Ksenia Konyushkova, Claudio Fantacci, Tom Le Paine, Yutian Chen, and Misha Denil. arXiv, 2023.\n- [Conformal Off-Policy Evaluation in Markov Decision Processes](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.02574)\n  - Daniele Foffano, Alessio Russo, and Alexandre Proutiere. arXiv, 2023.\n- [Hallucinated Adversarial Control for Conservative Offline Policy Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.01076)\n  - Jonas Rothfuss, Bhavya Sukhija, Tobias Birchler, Parnian Kassraie, and Andreas Krause. arXiv, 2023.\n- [Robust Fitted-Q-Evaluation and Iteration under Sequentially Exogenous Unobserved Confounders](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.00662)\n  - David Bruns-Smith and Angela Zhou. 
arXiv, 2023.\n- [Minimax Weight Learning for Absorbing MDPs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.03183)\n  - Fengyin Li, Yuqiang Li, and Xianyi Wu. arXiv, 2023.\n- [Improving Monte Carlo Evaluation with Offline Data](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.13734)\n  - Shuze Liu and Shangtong Zhang. arXiv, 2023.\n- [First-order Policy Optimization for Robust Policy Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.15890)\n  - Yan Li and Guanghui Lan. arXiv, 2023.\n- [A Minimax Learning Approach to Off-Policy Evaluation in Confounded Partially Observable Markov Decision Processes](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.06784)\n  - Chengchun Shi, Masatoshi Uehara, Jiawei Huang, and Nan Jiang. ICML, 2022.\n- [On Well-posedness and Minimax Optimal Rates of Nonparametric Q-function Estimation in Off-policy Evaluation](https:\u002F\u002Fproceedings.mlr.press\u002Fv162\u002Fchen22u.html)\n  - Xiaohong Chen and Zhengling Qi. ICML, 2022.\n- [Learning Bellman Complete Representations for Offline Policy Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.05837)\n  - Jonathan Chang, Kaiwen Wang, Nathan Kallus, and Wen Sun. ICML, 2022.\n- [Supervised Off-Policy Ranking](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.01360)\n  - Yue Jin, Yue Zhang, Tao Qin, Xudong Zhang, Jian Yuan, Houqiang Li, and Tie-Yan Liu. ICML, 2022.\n- [Off-Policy Fitted Q-Evaluation with Differentiable Function Approximators: Z-Estimation and Inference Theory](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.04970)\n  - Ruiqi Zhang, Xuezhou Zhang, Chengzhuo Ni, and Mengdi Wang. ICML, 2022.\n- [Beyond the Return: Off-policy Function Estimation under User-specified Error-measuring Distributions](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.15543)\n  - Audrey Huang and Nan Jiang. NeurIPS, 2022.\n- [Oracle Inequalities for Model Selection in Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.02016)\n  - Jonathan N. 
Lee, George Tucker, Ofir Nachum, Bo Dai, and Emma Brunskill. NeurIPS, 2022.\n- [Off-Policy Evaluation for Episodic Partially Observable Markov Decision Processes under Non-Parametric Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.10064)\n  - Rui Miao, Zhengling Qi, and Xiaoke Zhang. NeurIPS, 2022.\n- [Off-Policy Evaluation for Action-Dependent Non-stationary Environments](https:\u002F\u002Fopenreview.net\u002Fforum?id=PuagBLcAf8n)\n  - Yash Chandak, Shiv Shankar, Nathaniel D. Bastian, Bruno Castro da Silva, Emma Brunskill, and Philip S. Thomas. NeurIPS, 2022.\n- [Stateful Offline Contextual Policy Evaluation and Learning](https:\u002F\u002Fproceedings.mlr.press\u002Fv151\u002Fkallus22a)\n  - Nathan Kallus and Angela Zhou. AISTATS, 2022.\n- [Off-Policy Risk Assessment for Markov Decision Processes](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.10444)\n  - Audrey Huang, Liu Leqi, Zachary Lipton, and Kamyar Azizzadenesheli. AISTATS, 2022.\n- [Offline Reinforcement Learning for Human-Guided Human-Machine Interaction with Private Information](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.12167)\n  - Zuyue Fu, Zhengling Qi, Zhuoran Yang, Zhaoran Wang, and Lan Wang. arXiv, 2022.\n- [Offline Policy Evaluation and Optimization under Confounding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.16583)\n  - Kevin Tan, Yangyi Lu, Chinmaya Kausik, Yixin Wang, and Ambuj Tewari. arXiv, 2022.\n- [Bridging the Gap Between Offline and Online Reinforcement Learning Evaluation Methodologies](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.08131)\n  - Shivakanth Sujit, Pedro H. M. Braga, Jorg Bornschein, and Samira Ebrahimi Kahou. arXiv, 2022.\n- [Safe Evaluation For Offline Learning: Are We Ready To Deploy?](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.08302)\n  - Hager Radi, Josiah P. Hanna, Peter Stone, and Matthew E. Taylor. 
arXiv, 2022.\n- [Low Variance Off-policy Evaluation with State-based Importance Sampling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.03932)\n  - David M. Bossens and Philip Thomas. arXiv, 2022.\n- [Statistical Estimation of Confounded Linear MDPs: An Instrumental Variable Approach](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.05186)\n  - Miao Lu, Wenhao Yang, Liangyu Zhang, and Zhihua Zhang. arXiv, 2022.\n- [Offline Estimation of Controlled Markov Chains: Minimax Nonparametric Estimators and Sample Efficiency](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.07092)\n  - Imon Banerjee, Harsha Honnappa, and Vinayak Rao. arXiv, 2022.\n- [Sample Complexity of Nonparametric Off-Policy Evaluation on Low-Dimensional Manifolds using Deep Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.02887)\n  - Xiang Ji, Minshuo Chen, Mengdi Wang, and Tuo Zhao. arXiv, 2022.\n- [A Sharp Characterization of Linear Estimators for Offline Policy Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.04236)\n  - Juan C. Perdomo, Akshay Krishnamurthy, Peter Bartlett, and Sham Kakade. arXiv, 2022.\n- [A Multi-Agent Reinforcement Learning Framework for Off-Policy Evaluation in Two-sided Markets](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.10574) [[code](https:\u002F\u002Fgithub.com\u002FRunzheStat\u002FCausalMARL)]\n  - Chengchun Shi, Runzhe Wan, Ge Song, Shikai Luo, Rui Song, and Hongtu Zhu. arXiv, 2022.\n- [A Theoretical Framework of Almost Hyperparameter-free Hyperparameter Selection Methods for Offline Policy Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.02300)\n  - Kohei Miyaguchi. arXiv, 2022.\n- [SOPE: Spectrum of Off-Policy Estimators](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.03936)\n  - Christina J. Yuan, Yash Chandak, Stephen Giguere, Philip S. Thomas, and Scott Niekum. 
NeurIPS, 2021.\n- [Unifying Gradient Estimators for Meta-Reinforcement Learning via Off-Policy Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.13125)\n  - Yunhao Tang, Tadashi Kozuno, Mark Rowland, Rémi Munos, and Michal Valko. NeurIPS, 2021.\n- [Variance-Aware Off-Policy Evaluation with Linear Function Approximation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.11960)\n  - Yifei Min, Tianhao Wang, Dongruo Zhou, and Quanquan Gu. NeurIPS, 2021.\n- [Universal Off-Policy Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.12820)\n  - Yash Chandak, Scott Niekum, Bruno Castro da Silva, Erik Learned-Miller, Emma Brunskill, and Philip S. Thomas. NeurIPS, 2021.\n- [Towards Hyperparameter-free Policy Selection for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.14000)\n  - Siyuan Zhang and Nan Jiang. NeurIPS, 2021.\n- [Optimal Uniform OPE and Model-based Offline Reinforcement Learning in Time-Homogeneous, Reward-Free and Task-Agnostic Settings](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2021\u002Fhash\u002F6b3c49bdba5be0d322334e30c459f8bd-Abstract.html)\n  - Ming Yin and Yu-Xiang Wang. NeurIPS, 2021.\n- [State Relevance for Off-Policy Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.06310)\n  - Simon P. Shen, Yecheng Jason Ma, Omer Gottesman, and Finale Doshi-Velez. ICML, 2021.\n- [Bootstrapping Fitted Q-Evaluation for Off-Policy Inference](http:\u002F\u002Fproceedings.mlr.press\u002Fv139\u002Fhao21b.html)\n  - Botao Hao, Xiang Ji, Yaqi Duan, Hao Lu, Csaba Szepesvari, and Mengdi Wang. ICML, 2021.\n- [Deeply-Debiased Off-Policy Interval Estimation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.04646)\n  - Chengchun Shi, Runzhe Wan, Victor Chernozhukov, and Rui Song. ICML, 2021.\n- [Autoregressive Dynamics Models for Offline Policy Evaluation and Optimization](https:\u002F\u002Fopenreview.net\u002Fforum?id=kmqjgSNXby)\n  - Michael R. 
Zhang, Tom Le Paine, Ofir Nachum, Cosmin Paduraru, George Tucker, Ziyu Wang, and Mohammad Norouzi. ICLR, 2021.\n- [Minimax Model Learning](http:\u002F\u002Fwww.yisongyue.com\u002Fpublications\u002Faistats2021_mml.pdf)\n  - Cameron Voloshin, Nan Jiang, and Yisong Yue. AISTATS, 2021.\n- [Off-policy Evaluation in Infinite-Horizon Reinforcement Learning with Latent Confounders](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.13893)\n  - Andrew Bennett, Nathan Kallus, Lihong Li, and Ali Mousavi. AISTATS, 2021.\n- [High-Confidence Off-Policy (or Counterfactual) Variance Estimation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.09847)\n  - Yash Chandak, Shiv Shankar, and Philip S. Thomas. AAAI, 2021.\n- [Debiased Off-Policy Evaluation for Recommendation Systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.08536)\n  - Yusuke Narita, Shota Yasui, and Kohei Yata. RecSys, 2021.\n- [Pessimistic Model Selection for Offline Deep Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.14346)\n  - Chao-Han Huck Yang, Zhengling Qi, Yifan Cui, and Pin-Yu Chen. arXiv, 2021.\n- [Proximal Reinforcement Learning: Efficient Off-Policy Evaluation in Partially Observed Markov Decision Processes](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.15332)\n  - Andrew Bennett and Nathan Kallus. arXiv, 2021.\n- [Off-Policy Evaluation in Partially Observed Markov Decision Processes](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.12343)\n  - Yuchen Hu and Stefan Wager. arXiv, 2021.\n- [A Spectral Approach to Off-Policy Evaluation for POMDPs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.10502)\n  - Yash Nair and Nan Jiang. arXiv, 2021.\n- [Projected State-action Balancing Weights for Offline Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.04640)\n  - Jiayi Wang, Zhengling Qi, and Raymond K.W. Wong. 
arXiv, 2021.\n- [Active Offline Policy Selection](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.10251)\n  - Ksenia Konyushkova, Yutian Chen, Thomas Paine, Caglar Gulcehre, Cosmin Paduraru, Daniel J Mankowitz, Misha Denil, and Nando de Freitas. arXiv, 2021.\n- [On Instrumental Variable Regression for Deep Offline Policy Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.10148)\n  - Yutian Chen, Liyuan Xu, Caglar Gulcehre, Tom Le Paine, Arthur Gretton, Nando de Freitas, and Arnaud Doucet. arXiv, 2021.\n- [Average-Reward Off-Policy Policy Evaluation with Function Approximation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.02808)\n  - Shangtong Zhang, Yi Wan, Richard S. Sutton, and Shimon Whiteson. arXiv, 2021.\n- [Sequential causal inference in a single world of connected units](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.07380)\n  - Aurelien Bibaut, Maya Petersen, Nikos Vlassis, Maria Dimakopoulou, and Mark van der Laan. arXiv, 2021.\n- [Off-policy Policy Evaluation For Sequential Decisions Under Unobserved Confounding](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2020\u002Fhash\u002Fda21bae82c02d1e2b8168d57cd3fbab7-Abstract.html)\n  - Hongseok Namkoong, Ramtin Keramati, Steve Yadlowsky, and Emma Brunskill. NeurIPS, 2020.\n- [CoinDICE: Off-Policy Confidence Interval Estimation](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2020\u002Fhash\u002F6aaba9a124857622930ca4e50f5afed2-Abstract.html)\n  - Bo Dai, Ofir Nachum, Yinlam Chow, Lihong Li, Csaba Szepesvari, and Dale Schuurmans. NeurIPS, 2020.\n- [Off-Policy Interval Estimation with Lipschitz Value Iteration](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2020\u002Fhash\u002F59accb9fe696ce55e28b7d23a009e2d1-Abstract.html)\n  - Ziyang Tang, Yihao Feng, Na Zhang, Jian Peng, and Qiang Liu. 
NeurIPS, 2020.\n- [Off-Policy Evaluation via the Regularized Lagrangian](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2020\u002Fhash\u002F488e4104520c6aab692863cc1dba45af-Abstract.html)\n  - Mengjiao Yang, Ofir Nachum, Bo Dai, Lihong Li, and Dale Schuurmans. NeurIPS, 2020.\n- [Minimax Value Interval for Off-Policy Evaluation and Policy Optimization](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2020\u002Fhash\u002F1cd138d0499a68f4bb72bee04bbec2d7-Abstract.html)\n  - Nan Jiang and Jiawei Huang. NeurIPS, 2020.\n- [GenDICE: Generalized Offline Estimation of Stationary Values](https:\u002F\u002Fopenreview.net\u002Fforum?id=HkxlcnVFwB)\n  - Ruiyi Zhang, Bo Dai, Lihong Li, and Dale Schuurmans. ICLR, 2020.\n- [Infinite-horizon Off-Policy Policy Evaluation with Multiple Behavior Policies](https:\u002F\u002Ficlr.cc\u002Fvirtual_2020\u002Fposter_rkgU1gHtvr.html)\n  - Xinyun Chen, Lu Wang, Yizhe Hang, Heng Ge, and Hongyuan Zha. ICLR, 2020.\n- [Doubly Robust Bias Reduction in Infinite Horizon Off-Policy Estimation](https:\u002F\u002Ficlr.cc\u002Fvirtual_2020\u002Fposter_S1glGANtDr.html)\n  - Ziyang Tang, Yihao Feng, Lihong Li, Dengyong Zhou, and Qiang Liu. ICLR, 2020.\n- [Black-box Off-policy Estimation for Infinite-Horizon Reinforcement Learning](https:\u002F\u002Ficlr.cc\u002Fvirtual_2020\u002Fposter_S1ltg1rFDS.html)\n  - Ali Mousavi, Lihong Li, Qiang Liu, and Denny Zhou. ICLR, 2020.\n- [GradientDICE: Rethinking Generalized Offline Estimation of Stationary Values](http:\u002F\u002Fproceedings.mlr.press\u002Fv119\u002Fzhang20r.html)\n  - Shangtong Zhang, Bo Liu, and Shimon Whiteson. ICML, 2020.\n- [Minimax-Optimal Off-Policy Evaluation with Linear Function Approximation](http:\u002F\u002Fproceedings.mlr.press\u002Fv119\u002Fduan20b.html)\n  - Yaqi Duan, Zeyu Jia, and Mengdi Wang. 
ICML, 2020.\n- [Interpretable Off-Policy Evaluation in Reinforcement Learning by Highlighting Influential Transitions](http:\u002F\u002Fproceedings.mlr.press\u002Fv119\u002Fgottesman20a.html)\n  - Omer Gottesman, Joseph Futoma, Yao Liu, Sonali Parbhoo, Leo Celi, Emma Brunskill, and Finale Doshi-Velez. ICML, 2020.\n- [Double Reinforcement Learning for Efficient and Robust Off-Policy Evaluation](http:\u002F\u002Fproceedings.mlr.press\u002Fv119\u002Fkallus20b.html)\n  - Nathan Kallus and Masatoshi Uehara. ICML, 2020.\n- [Understanding the Curse of Horizon in Off-Policy Evaluation via Conditional Importance Sampling](http:\u002F\u002Fproceedings.mlr.press\u002Fv119\u002Fliu20a.html)\n  - Yao Liu, Pierre-Luc Bacon, and Emma Brunskill. ICML, 2020.\n- [Minimax Weight and Q-Function Learning for Off-Policy Evaluation](http:\u002F\u002Fproceedings.mlr.press\u002Fv119\u002Fuehara20a.html)\n  - Masatoshi Uehara, Jiawei Huang, and Nan Jiang. ICML, 2020.\n- [Accountable Off-Policy Evaluation With Kernel Bellman Statistics](http:\u002F\u002Fproceedings.mlr.press\u002Fv119\u002Ffeng20d.html)\n  - Yihao Feng, Tongzheng Ren, Ziyang Tang, and Qiang Liu. ICML, 2020.\n- [Asymptotically Efficient Off-Policy Evaluation for Tabular Reinforcement Learning](http:\u002F\u002Fproceedings.mlr.press\u002Fv108\u002Fyin20b.html)\n  - Ming Yin and Yu-Xiang Wang. ICML, 2020.\n- [Batch Stationary Distribution Estimation](http:\u002F\u002Fproceedings.mlr.press\u002Fv119\u002Fwen20a.html)\n  - Junfeng Wen, Bo Dai, Lihong Li, and Dale Schuurmans. 
ICML, 2020.\n- [Towards Off-policy Evaluation as a Prerequisite for Real-world Reinforcement Learning in Building Control](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3427773.3427871) [[video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=zlk_TDNC4qk)]\n  - Bingqing Chen, Ming Jin, Zhe Wang, Tianzhen Hong, and Mario Bergés. RLEM, 2020.\n- [Defining Admissible Rewards for High Confidence Policy Evaluation in Batch Reinforcement Learning](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3368555.3384450)\n  - Niranjani Prasad, Barbara E Engelhardt, and Finale Doshi-Velez. CHIL, 2020.\n- [Offline Policy Selection under Uncertainty](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.06919)\n  - Mengjiao Yang, Bo Dai, Ofir Nachum, George Tucker, and Dale Schuurmans. arXiv, 2020.\n- [Near-Optimal Provable Uniform Convergence in Offline Policy Evaluation for Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.03760)\n  - Ming Yin, Yu Bai, and Yu-Xiang Wang. arXiv, 2020.\n- [Optimal Mixture Weights for Off-Policy Evaluation with Multiple Behavior Policies](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.14359)\n  - Jinlin Lai, Lixin Zou, and Jiaxing Song. arXiv, 2020.\n- [Kernel Methods for Policy Evaluation: Treatment Effects, Mediation Analysis, and Off-Policy Planning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.04855)\n  - Rahul Singh, Liyuan Xu, and Arthur Gretton. arXiv, 2020.\n- [Statistical Bootstrapping for Uncertainty Estimation in Off-Policy Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.13609)\n  - Ilya Kostrikov and Ofir Nachum. arXiv, 2020.\n- [Efficiently Breaking the Curse of Horizon in Off-Policy Evaluation with Double Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1909.05850)\n  - Nathan Kallus and Masatoshi Uehara. 
arXiv, 2019.\n- [Off-Policy Evaluation in Partially Observable Environments](https:\u002F\u002Fojs.aaai.org\u002F\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F6590)\n  - Guy Tennenholtz, Uri Shalit, and Shie Mannor. AAAI, 2019.\n- [Intrinsically Efficient, Stable, and Bounded Off-Policy Evaluation for Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1906.03735)\n  - Nathan Kallus and Masatoshi Uehara. NeurIPS, 2019.\n- [Towards Optimal Off-Policy Evaluation for Reinforcement Learning with Marginalized Importance Sampling](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2019\u002Fhash\u002F4ffb0d2ba92f664c2281970110a2e071-Abstract.html)\n  - Tengyang Xie, Yifei Ma, and Yu-Xiang Wang. NeurIPS, 2019.\n- [Off-Policy Evaluation via Off-Policy Classification](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2019\u002Fhash\u002Fb5b03f06271f8917685d14cea7c6c50a-Abstract.html)\n  - Alexander Irpan, Kanishka Rao, Konstantinos Bousmalis, Chris Harris, Julian Ibarz, and Sergey Levine. NeurIPS, 2019.\n- [DualDICE: Behavior-Agnostic Estimation of Discounted Stationary Distribution Corrections](https:\u002F\u002Farxiv.org\u002Fabs\u002F1906.04733) [[software](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fdice_rl)]\n  - Ofir Nachum, Yinlam Chow, Bo Dai, and Lihong Li. NeurIPS, 2019.\n- [Off-Policy Evaluation and Learning from Logged Bandit Feedback: Error Reduction via Surrogate Policy](https:\u002F\u002Fopenreview.net\u002Fforum?id=HklKui0ct7)\n  - Yuan Xie, Boyi Liu, Qiang Liu, Zhaoran Wang, Yuan Zhou, and Jian Peng. ICLR, 2019.\n- [Batch Policy Learning under Constraints](https:\u002F\u002Farxiv.org\u002Fabs\u002F1903.08738) [[code](https:\u002F\u002Fgithub.com\u002Fclvoloshin\u002Fconstrained_batch_policy_learning)] [[website](https:\u002F\u002Fsites.google.com\u002Fview\u002Fconstrained-batch-policy-learn\u002F)]\n  - Hoang M. Le, Cameron Voloshin, and Yisong Yue. 
ICML, 2019.\n- [More Efficient Off-Policy Evaluation through Regularized Targeted Learning](http:\u002F\u002Fproceedings.mlr.press\u002Fv97\u002Fbibaut19a.html)\n  - Aurelien Bibaut, Ivana Malenica, Nikos Vlassis, and Mark Van Der Laan. ICML, 2019.\n- [Combining parametric and nonparametric models for off-policy evaluation](http:\u002F\u002Fproceedings.mlr.press\u002Fv97\u002Fgottesman19a.html)\n  - Omer Gottesman, Yao Liu, Scott Sussex, Emma Brunskill, and Finale Doshi-Velez. ICML, 2019.\n- [Counterfactual Off-Policy Evaluation with Gumbel-Max Structural Causal Models](http:\u002F\u002Fproceedings.mlr.press\u002Fv97\u002Foberst19a.html)\n  - Michael Oberst and David Sontag. ICML, 2019.\n- [Importance Sampling Policy Evaluation with an Estimated Behavior Policy](http:\u002F\u002Fproceedings.mlr.press\u002Fv97\u002Fhanna19a.html)\n  - Josiah Hanna, Scott Niekum, and Peter Stone. ICML, 2019.\n- [Representation Balancing MDPs for Off-policy Policy Evaluation](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2018\u002Fhash\u002F980ecd059122ce2e50136bda65c25e07-Abstract.html)\n  - Yao Liu, Omer Gottesman, Aniruddh Raghu, Matthieu Komorowski, Aldo A. Faisal, Finale Doshi-Velez, and Emma Brunskill. NeurIPS, 2018.\n- [Breaking the Curse of Horizon: Infinite-Horizon Off-Policy Estimation](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2018\u002Fhash\u002Fdda04f9d634145a9c68d5dfe53b21272-Abstract.html)\n  - Qiang Liu, Lihong Li, Ziyang Tang, and Dengyong Zhou. NeurIPS, 2018.\n- [More Robust Doubly Robust Off-policy Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.03493)\n  - Mehrdad Farajtabar, Yinlam Chow, and Mohammad Ghavamzadeh. ICML, 2018.\n- [Importance Sampling for Fair Policy Selection](https:\u002F\u002Fpeople.cs.umass.edu\u002F~pthomas\u002Fpapers\u002FDoroudi2017.pdf)\n  - Shayan Doroudi, Philip Thomas, and Emma Brunskill. 
UAI, 2017.\n- [Predictive Off-Policy Policy Evaluation for Nonstationary Decision Problems, with Applications to Digital Marketing](https:\u002F\u002Fpeople.cs.umass.edu\u002F~pthomas\u002Fpapers\u002FThomas2017.pdf)\n  - Philip S. Thomas, Georgios Theocharous, Mohammad Ghavamzadeh, Ishan Durugkar, and Emma Brunskill. AAAI, 2017.\n- [Consistent On-Line Off-Policy Evaluation](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fhallak17a.html)\n  - Assaf Hallak and Shie Mannor. ICML, 2017.\n- [Bootstrapping with Models: Confidence Intervals for Off-Policy Evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F1606.06126)\n  - Josiah P. Hanna, Peter Stone, and Scott Niekum. AAMAS, 2016.\n- [Doubly Robust Off-policy Value Evaluation for Reinforcement Learning](http:\u002F\u002Fproceedings.mlr.press\u002Fv48\u002Fjiang16.html)\n  - Nan Jiang and Lihong Li. ICML, 2016.\n- [Data-Efficient Off-Policy Policy Evaluation for Reinforcement Learning](http:\u002F\u002Fproceedings.mlr.press\u002Fv48\u002Fthomasa16.html)\n  - Philip Thomas and Emma Brunskill. ICML, 2016.\n- [High Confidence Policy Improvement](http:\u002F\u002Fproceedings.mlr.press\u002Fv37\u002Fthomas15.html)\n  - Philip Thomas, Georgios Theocharous, and Mohammad Ghavamzadeh. ICML, 2015.\n- [High Confidence Off-Policy Evaluation](https:\u002F\u002Fpeople.cs.umass.edu\u002F~pthomas\u002Fpapers\u002FThomas2015.pdf)\n  - Philip S. Thomas, Georgios Theocharous, and Mohammad Ghavamzadeh. AAAI, 2015.\n- [Eligibility Traces for Off-Policy Policy Evaluation](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.5555\u002F645529.658134)\n  - Doina Precup, Richard S. Sutton, and Satinder P. Singh. 
ICML, 2000.\n\n#### 离策略学习\n- [序列反事实风险最小化](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.12120)\n  - 侯萨姆·泽纳蒂、欧斯塔什·迪埃梅尔、马蒂厄·马丁、朱利安·迈拉尔和皮埃尔·盖亚尔。ICML，2023年。\n- [面向轨迹的离策略强化学习资格迹](https:\u002F\u002Fopenreview.net\u002Fforum?id=8Lww9LXokZ)\n  - 布雷特·戴利、玛莎·怀特、克里斯托弗·阿马托和马洛斯·C·马查多。ICML，2023年。\n- [基于 bandit 反馈的多任务离策略学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.04720)\n  - 乔伊·洪、布拉尼斯拉夫·克韦顿、苏米特·卡塔里亚、曼齐尔·扎希尔和穆罕默德·加瓦姆扎德赫。ICML，2023年。\n- [用于离策略学习的指数平滑法](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.15877)\n  - 伊马德·奥瓦利、维克托-埃马纽埃尔·布鲁内尔、大卫·罗德和安娜·科尔巴。ICML，2023年。\n- [具有通用数据生成策略的反事实学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.01925)\n  - 尤介·成田、京平·奥村、秋弘·清水和浩平·矢田。AAAI，2023年。\n- [面向离线上下文 bandit 问题的分布鲁棒策略梯度](https:\u002F\u002Fproceedings.mlr.press\u002Fv206\u002Fyang23f.html)\n  - 周浩·杨、郭一鸿、徐攀、刘安琪和阿尼马什里·阿南德库马尔。AISTATS，2023年。\n- [Oracle 效率下的悲观主义：上下文 bandit 中的离线策略优化](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.07923)\n  - 王乐群、阿克谢·克里希纳穆提和亚历山大·斯利夫金斯。arXiv，2023年。\n- [悲观的离策略多目标优化](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.18617)\n  - 希玛·阿里扎德、阿尼鲁达·巴尔加瓦、卡尔蒂克·戈帕尔斯瓦米、拉利特·贾因、布拉尼斯拉夫·克韦顿和刘舸。arXiv，2023年。\n- [统一的离策略排序学习：强化学习视角](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.07528)\n  - 张泽宇、苏毅、袁辉、吴怡然、里沙布·巴拉苏布拉马尼安、吴庆云、王华正和王孟迪。arXiv，2023年。\n- [不确定性感知的离策略学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.06389)\n  - 张晓颖、陈俊普、王洪宁、谢宏和李航。arXiv，2023年。\n- [基于观测数据的公平离策略学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.08516)\n  - 丹尼斯·弗劳恩、瓦伦丁·梅尔尼丘克和斯特凡·福伊尔里格尔。arXiv，2023年。\n- [通过超盒搜索实现的可解释离策略学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.02473)\n  - 丹尼尔·切尔努特、托比亚斯·哈特和斯特凡·福伊尔里格尔。ICML，2022年。\n- [带有有效动作的离线策略优化](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.00632)\n  - 刘瑶、扬尼斯·弗莱特-贝尔利亚克和艾玛·布伦斯基尔。UAI，2022年。\n- [面向运行时不确定性的稳健离策略学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.13337)\n  - 徐达、叶雨婷、阮传伟和杨博。AAAI，2022年。\n- [具有离策略学习应用的安全最优设计](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.04835)\n  - 朱瑞豪和布拉尼斯拉夫·克韦顿。AISTATS，2022年。\n- [推荐系统的离策略 actor-critic 
方法](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3523227.3546758)\n  - 陈敏敏、许灿、文斯·加托、德万舒·贾因、阿维拉尔·库马尔和埃德·奇。RecSys，2022年。\n- [MGPolicy：基于元图增强的推荐系统离策略学习](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3477495.3532021)\n  - 王向猛、李倩、于典儿、王志超、陈宏旭和徐冠东。SIGIR，2022年。\n- [使用 Wasserstein 距离的分布鲁棒策略学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.04637)\n  - 基道·木户。arXiv，2022年。\n- [推荐系统的局部策略改进](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.11431)\n  - 梁大文和尼科斯·弗拉西斯。arXiv，2022年。\n- [“无重叠”情况下的策略学习：悲观主义与广义经验 Bernstein 不等式](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.09900)\n  - 金英、任志美、杨卓然和王兆然。arXiv，2022年。\n- [大规模推荐的快速离线策略优化](https:\u002F\u002Farxiv.org\u002Fabs\u002F2208.05327)\n  - 奥特曼·萨基、大卫·罗德和亚历山大·吉洛特。arXiv，2022年。\n- [针对 Top-K 推荐的实用反事实策略学习](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3534678.3539295)\n  - 刘雅旭、颜居楠、袁博文、石润东、严鹏和林志仁。KDD，2022年。\n- [提升的离策略学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2208.01148)\n  - 本·伦敦、列维·卢、泰德·桑德勒和托斯滕·约阿希姆斯。arXiv，2022年。\n- [通过神经网络进行的半反事实风险最小化](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.07148)\n  - 戈拉马利·阿米尼安、罗伯托·贝加、奥马尔·里瓦斯普拉塔、劳拉·托尼和米格尔·罗德里格斯。arXiv，2022年。\n- [IMO^3：交互式多目标离策略优化](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.09798)\n  - 王楠、王洪宁、玛丽亚姆·卡里姆扎德甘、布拉尼斯拉夫·克韦顿和克雷格·布蒂利耶。arXiv，2022年。\n- [用于排序学习的悲观离策略优化](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.02593)\n  - 马泰伊·齐夫、布拉尼斯拉夫·克韦顿和米哈尔·孔潘。arXiv，2022年。\n- [非平稳条件下的离策略优化](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.08236)\n  - 乔伊·洪、布拉尼斯拉夫·克韦顿、曼齐尔·扎希尔、尹蓝·周和阿姆尔·艾哈迈德。AISTATS，2021年。\n- [从极端 bandit 反馈中学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.12947)\n  - 罗曼·洛佩斯、英德尔吉特·迪隆和迈克尔·I·乔丹。AAAI，2021年。\n- [在样本选择偏差下推广离策略学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.01387)\n  - 托比亚斯·哈特、丹尼尔·切尔努特和斯特凡·福伊尔里格尔。arXiv，2021年。\n- [利用变分自编码器为含有缺失值的日志数据构建保守策略](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.03747)\n  - 马赫德·阿布罗尚、凯·侯·叶普、杰姆·泰金和米哈埃拉·范德·沙尔。arXiv，2021年。\n- 
[针对确定性策略的双重稳健离策略价值与梯度估计](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2020\u002Fhash\u002F75df63609809c7a2052fdffe5c00a84e-Abstract.html)\n  - 内森·卡卢斯和政敏·上原。NeurIPS，2020年。\n- [从重要性采样到双重稳健策略梯度](http:\u002F\u002Fproceedings.mlr.press\u002Fv119\u002Fhuang20b.html)\n  - 黄家伟和蒋楠。ICML，2020年。\n- [通过代理损失分类归约进行高效策略学习](http:\u002F\u002Fproceedings.mlr.press\u002Fv119\u002Fbennett20a.html) [[代码](https:\u002F\u002Fgithub.com\u002FCausalML\u002FESPRM)]\n  - 安德鲁·贝内特和内森·卡卢斯。ICML，2020年。\n- [支持不足的离策略 bandit 问题](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3394486.3403139)\n  - 诺文·萨奇德瓦、苏毅和托斯滕·约阿希姆斯。KDD，2020年。\n- [两阶段推荐系统中的离策略学习](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3366423.3380130)\n  - 马佳琪、赵哲、易新阳、杨继、陈敏敏、唐嘉熙、洪立川和埃德·奇。WWW，2020年。\n- [通过最优再定向提高策略学习效率](https:\u002F\u002Fwww.tandfonline.com\u002Fdoi\u002Fabs\u002F10.1080\u002F01621459.2020.1788948?journalCode=uasa20)\n  - 内森·卡卢斯。JASA，2020年。\n- [学习何时采取治疗策略](https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.09751)\n  - 聂欣坤、艾玛·布伦斯基尔和斯特凡·韦格尔。JASA，2020年。\n- [通过深度神经网络在低维流形上进行的双重稳健离策略学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.01797)\n  - 陈敏硕、刘浩、廖文静和赵拓。arXiv，2020年。\n- [离线策略学习中的 bandit 过拟合](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.15368)\n  - 大卫·布兰德丰布雷纳、威廉·F·惠特尼、拉杰什·兰加纳特和琼·布鲁纳。arXiv，2020年。\n- [连续随机策略的反事实学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2004.11722)\n  - 侯萨姆·泽纳蒂、阿尔贝托·比埃蒂、马蒂厄·马丁、欧斯塔什·迪埃梅尔和朱利安·迈拉尔。arXiv，2020年。\n- [REINFORCE 推荐系统中的 Top-K 离策略修正](https:\u002F\u002Farxiv.org\u002Fabs\u002F1812.02353)\n  - 陈敏敏、亚历克斯·博特尔、保罗·科文廷、萨加尔·贾因、弗朗索瓦·贝莱蒂和埃德·奇。WSDM，2019年。\n- [具有连续动作的半参数高效策略学习](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2019\u002Fhash\u002F08b7dc6e8b36bcaac15847827b7951a9-Abstract.html)\n  - 维克托·切尔诺祖科夫、梅尔特·德米雷尔、格雷格·刘易斯和瓦西里斯·西尔加尼斯。NeurIPS，2019年。\n- [从 bandit 反馈中高效进行反事实学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F1809.03084)\n  - 成田优介、安井翔太和矢田浩平。AAAI，2019年。\n- [使用日志 bandit 反馈进行深度学习](https:\u002F\u002Fopenreview.net\u002Fforum?id=SJaP_-xAb)\n  - 
托斯滕·约阿希姆斯、阿迪特·斯瓦米纳坦和马尔滕·德·赖克。ICLR，2018年。\n- [反事实学习的自归一化估计量](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2015\u002Fhash\u002F39027dfad5138c9ca0c474d71db915c3-Abstract.html)\n  - 阿迪特·斯瓦米纳坦和托斯滕·约阿希姆斯。NeurIPS，2015年。\n- [反事实风险最小化：从日志 bandit 反馈中学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F1502.02362)\n  - 阿迪特·斯瓦米纳坦和托斯滕·约阿希姆斯。ICML，2015年。\n\n### 离策略评估与学习：基准测试\u002F实验\n- [迈向离策略评估的风险-收益权衡的评估与基准测试](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.18207)\n  - 清原春香、岸本莲、川上康介、小林健、中田和秀、斋藤佑太。ICLR，2024年。\n- [SCOPE-RL：用于离线强化学习和离策略评估的Python库](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.18206)\n  - 清原春香、岸本莲、川上康介、小林健、中田和秀、斋藤佑太。arXiv，2023年。\n- [带有置信度的离策略政策比较：基准测试与基线](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.10739)\n  - 安努拉格·考尔、马里亚诺·菲利普、艾伦·费恩。arXiv，2022年。\n- [扩展开放多臂老虎机管道以模拟行业挑战](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.04147)\n  - 布拉姆·范登阿克尔、尼克拉斯·韦伯、费利佩·莫赖斯、德米特里·戈尔登伯格。arXiv，2022年。\n- [开放多臂老虎机数据集与管道：迈向真实且可重复的离策略评估](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.07146) [[软件](https:\u002F\u002Fgithub.com\u002Fst-tech\u002Fzr-obp)] [[公开数据集](https:\u002F\u002Fresearch.zozo.com\u002Fdata.html)]\n  - 斋藤佑太、相原俊介、松谷惠美、成田雄介。NeurIPS，2021年。\n- [评估离策略评估的鲁棒性](https:\u002F\u002Farxiv.org\u002Fabs\u002F2108.13703) [[软件](https:\u002F\u002Fgithub.com\u002Fsony\u002FpyIEOE)]\n  - 斋藤佑太、宇田川拓真、清原春香、茂木一辉、成田雄介、楯野圭。RecSys，2021年。\n- [深度离策略评估的基准测试](https:\u002F\u002Fopenreview.net\u002Fforum?id=kWSeGEeHvF8) [[代码](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fdeep_ope)]\n  - 贾斯汀·傅、穆罕默德·诺鲁齐、奥菲尔·纳胡姆、乔治·塔克、王子宇、亚历山大·诺维科夫、杨孟娇、张迈克尔、陈宇天、阿维拉尔·库马尔、科斯敏·帕杜拉鲁、谢尔盖·莱文、托马斯·佩恩。ICLR，2021年。\n- [强化学习中离策略政策评估的实证研究](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.06854) [[代码](https:\u002F\u002Fgithub.com\u002Fclvoloshin\u002FOPE-tools)]\n  - 卡梅隆·沃洛申、黄M·黎、江楠、岳义松，arXiv，2019年。\n\n### 离策略评估与学习：应用\n- [HOPE：以人为本的离策略评估在电子学习和医疗保健中的应用](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.09212)\n  - 高鸽、鞠松、马克尔·桑斯·奥斯因、池敏。AAMAS，2023年。\n- [何时离策略评估有用？一种以数据为中心的观点](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.14110)\n  - 
孙浩、亚历克斯·J·陈、纳比勒·西达特、阿里汗·休尤克、米哈埃拉·范德沙尔。arXiv，2023年。\n- [同行评审分配策略的反事实评估](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.17339)\n  - 马丁·萨维斯基、史蒂文·杰克曼、尼哈尔·B·沙赫、约翰·乌甘德。arXiv，2023年。\n- [个性化定价的平衡式离策略评估](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.12736)\n  - 亚当·N·埃尔马乔布、维沙尔·古普塔、赵云帆。arXiv，2023年。\n- [基于记录的用户反馈的多动作对话策略学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.13505)\n  - 张硕、赵俊州、王平辉、王天翔、李子良、陶静、黄毅、冯君兰。arXiv，2023年。\n- [CFR-p：具有层次化策略抽象的反事实后悔最小化及其在双人麻将中的应用](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.12087)\n  - 王世恒。arXiv，2023年。\n- [REINFORCE推荐系统中面向用户满意度的奖励塑造](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.15166)\n  - 康斯坦蒂娜·克里斯塔科普卢、许灿、张赛、斯里拉吉·巴达姆、特雷弗·波特、丹尼尔·李、万浩、易新阳、乐雅、克里斯·伯格、埃里克·本科莫·迪克森、Ed H. 汀、陈敏敏。arXiv，2022年。\n- [数据驱动的离策略估计器选择：在线内容分发服务中用户营销的应用](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.08621)\n  - 斋藤佑太、宇田川拓真、楯野圭。arXiv，2021年。\n- [迈向对话系统的自动化评估：一种无模型的离策略评估方法](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.10242)\n  - 蒋浩明、戴博、杨孟娇、魏伟、赵拓。arXiv，2021年。\n- [离线强化学习的模型选择：医疗保健环境中的实践考量](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.11003)\n  - 唐圣普、詹娜·维恩斯。MLHC，2021年。\n- [近似模型中概率身份数据的离策略评估](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3289600.3291033)\n  - 兰德尔·科塔、江丹、胡明阳、廖培周。WSDM，2019年。\n- [离线评估用于制定播放列表推荐决策](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3289600.3291027)\n  - 阿洛伊斯·格鲁松、普拉文·钱达尔、克里斯托夫·沙尔布耶、詹姆斯·麦克伊内里、萨曼莎·汉森、达米安·塔迪厄、本·卡特雷特。WSDM，2019年。\n- [离策略政策评估中的行为策略估计：校准至关重要](https:\u002F\u002Farxiv.org\u002Fabs\u002F1807.01066)\n  - 阿尼鲁德·拉古、奥默·戈特斯曼、刘瑶、马蒂厄·科莫罗夫斯基、阿尔多·法伊萨尔、菲娜莱·多希-维莱兹、艾玛·布伦斯基尔。arXiv，2018年。\n- [在观察性医疗环境中评估强化学习算法](https:\u002F\u002Farxiv.org\u002Fabs\u002F1805.12298)\n  - 奥默·戈特斯曼、弗雷德里克·约翰逊、约书亚·迈尔、杰克·登特、李东勋、斯里瓦茨·斯里尼瓦桑、张琳英、丁毅、大卫·维赫尔、彭雪峰、姚嘉宇、伊萨克·拉格、克里斯托弗·莫施、李伟·H·莱曼、马蒂厄·科莫罗夫斯基、阿尔多·法伊萨尔、利奥·安东尼·塞利、大卫·松塔格、菲娜莱·多希-维莱兹。arXiv，2018年。\n- [迈向公平的市场：推荐系统中相关性、公平性和满意度之间权衡的反事实评估](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3269206.3272027)\n  - 里沙布·梅赫罗特拉、詹姆斯·麦克伊内里、于格·布夏尔、穆尼亚·拉尔马斯、费尔南多·迪亚斯。CIKM，2018年。\n- 
[推荐系统的离线A\u002FB测试](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3159652.3159687)\n  - 亚历山大·吉洛特、克莱芒·卡劳泽讷、托马斯·内德莱克、亚历山大·亚伯拉罕、西蒙·多莱。WSDM，2018年。\n- [带有增量式、微创在线反馈的离线对比评估](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3209978.3210050)\n  - 本·卡特雷特、普拉文·钱达尔。SIGIR，2018年。\n- [处理混杂因素以实现真实的离策略评估](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3184558.3186915)\n  - 萨乌拉布·索霍尼、尼基塔·普拉布、维尼特·查欧吉。WWW，2018年。\n- [反事实推理与学习系统：以计算广告为例](https:\u002F\u002Fjmlr.org\u002Fpapers\u002Fv14\u002Fbottou13a.html)\n  - 莱昂·博图、乔纳斯·彼得斯、华金·基尼奥内罗-坎德拉、丹尼斯·X·查尔斯、D·麦克斯·奇克林、埃隆·波尔图加利、迪潘卡尔·雷、帕特里斯·西马尔、埃德·斯奈尔森。JMLR，2013年。\n\n## 开源软件\u002F实现\n- [SCOPE-RL：用于离线强化学习、离策略评估和选择的 Python 库](https:\u002F\u002Fgithub.com\u002Fhakuhodo-technologies\u002Fscope-rl) [[论文1](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.18206)] [[论文2](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.18207)] [[文档](https:\u002F\u002Fscope-rl.readthedocs.io\u002Fen\u002Flatest\u002F)] \n  - 清原春香、岸本莲、川上浩介、小林健、中田一秀和斋藤悠太。\n- [Open Bandit Pipeline：一个用于多臂赌博机算法和离策略评估的研究框架](https:\u002F\u002Fgithub.com\u002Fst-tech\u002Fzr-obp) [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.07146)] [[文档](https:\u002F\u002Fzr-obp.readthedocs.io\u002Fen\u002Flatest\u002Findex.html)] [[数据集](https:\u002F\u002Fresearch.zozo.com\u002Fdata.html)]\n  - 斋藤悠太、相原俊介、松谷惠和成田雄介。\n- [pyIEOE：迈向可解释的离线评估](https:\u002F\u002Fgithub.com\u002Fsony\u002FpyIEOE) [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2108.13703)]\n  - 斋藤悠太、宇田川拓真、清原春香、茂木一辉、成田雄介和楯野圭。\n- [d3rlpy：一个离线深度强化学习库](https:\u002F\u002Fgithub.com\u002Ftakuseno\u002Fd3rlpy) [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.03788)] [[官网](https:\u002F\u002Ftakuseno.github.io\u002Fd3rlpy\u002F)] [[文档](https:\u002F\u002Fd3rlpy.readthedocs.io\u002F)]\n  - 瀬能拓马和今井道太。\n- [MINERVA：一款开箱即用的数据驱动深度强化学习 GUI 工具](https:\u002F\u002Fgithub.com\u002Ftakuseno\u002Fminerva) [[官网](https:\u002F\u002Ftakuseno.github.io\u002Fminerva\u002F)] 
[[文档](https:\u002F\u002Fminerva-ui.readthedocs.io\u002Fen\u002Fv0.20\u002F)]\n  - 瀬能拓马和今井道太。\n- [Minari](https:\u002F\u002Fgithub.com\u002FFarama-Foundation\u002FMinari)\n  - Farama 基金会。\n- [CORL：干净的离线强化学习](https:\u002F\u002Fgithub.com\u002Fcorl-team\u002FCORL) [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.07105)]\n  - 丹尼斯·塔拉索夫、亚历山大·尼库林、德米特里·阿基莫夫、弗拉季斯拉夫·库伦科夫和谢尔盖·科列斯尼科夫。\n- [COBS：加州理工学院 OPE 基准测试套件](https:\u002F\u002Fgithub.com\u002Fclvoloshin\u002FCOBS) [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.06854)]\n  - 卡梅隆·沃洛申、黄 M. 黎、南江和伊桑·岳。\n- [深度离策略评估基准](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fdeep_ope) [[论文](https:\u002F\u002Fopenreview.net\u002Fforum?id=kWSeGEeHvF8)]\n  - 贾斯汀·傅、穆罕默德·诺鲁齐、奥菲尔·纳楚姆、乔治·塔克、王子宇、亚历山大·诺维科夫、杨孟娇、张迈克尔、陈宇天、阿维拉尔·库马尔、科斯敏·帕杜拉鲁、谢尔盖·莱文和托马斯·佩恩。\n- [DICE：分布校正估计库](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fdice_rl) [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.03438)]\n  - 奥菲尔·纳楚姆、尹蓝·周、戴博、李立宏、张瑞义和戴尔·舒尔曼。\n- [RL Unplugged：离线强化学习基准](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Fdeepmind-research\u002Ftree\u002Fmaster\u002Frl_unplugged) [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.13888)] [[数据集](https:\u002F\u002Fconsole.cloud.google.com\u002Fstorage\u002Fbrowser\u002Frl_unplugged?pli=1)]\n  - 查格拉尔·古尔切赫雷、王子宇、亚历山大·诺维科夫、汤姆·勒·佩恩、塞尔吉奥·戈麦斯·科尔梅纳雷霍、孔拉德·佐尔纳、里沙布·阿加瓦尔、乔什·梅雷尔、丹尼尔·曼科维茨、科斯敏·帕杜拉鲁、加布里埃尔·杜拉克-阿诺德、杰里·李、穆罕默德·诺鲁齐、马特·霍夫曼、奥菲尔·纳楚姆、乔治·塔克、尼古拉斯·希斯和南多·德·弗雷塔斯。\n- [D4RL：深度数据驱动强化学习的数据集](https:\u002F\u002Fgithub.com\u002Frail-berkeley\u002Fd4rl) [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2004.07219)] [[官网](https:\u002F\u002Fsites.google.com\u002Fview\u002Fd4rl\u002Fhome)]\n  - 贾斯汀·傅、阿维拉尔·库马尔、奥菲尔·纳楚姆、乔治·塔克和谢尔盖·莱文。\n- [V-D4RL：基于视觉观测的离线强化学习中的挑战与机遇](https:\u002F\u002Fgithub.com\u002Fconglu1997\u002Fv-d4rl) [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.04779)]\n  - 陆聪、菲利普·J·鲍尔、蒂姆·G·J·鲁德纳、杰克·帕克-霍尔德、迈克尔·A·奥斯本和叶伟华·特。\n- 
[真实机器人硬件上的离线强化学习基准测试](https:\u002F\u002Fgithub.com\u002Frr-learning\u002Ftrifinger_rl_datasets) [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.15690)]\n  - 尼科·居特勒、塞巴斯蒂安·布莱斯、帕维尔·科列夫、费利克斯·维德迈尔、曼努埃尔·武特里希、施特凡·鲍尔、伯恩哈德·舍尔科普夫和格奥尔格·马尔提乌斯。ICLR，2023年。\n- [RLDS：强化学习数据集](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Frlds) [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.02767)]\n  - 萨贝拉·拉莫斯、塞尔坦·吉尔金、莱昂纳德·于塞诺、达米安·文森特、汉娜·雅库波维奇、丹尼尔·丰山、安妮塔·格尔盖利、皮奥特尔·斯坦奇克、拉斐尔·马里涅、耶利米·哈姆森、奥利维埃·皮埃特坎和尼古拉·莫姆切夫。\n- [OEF：离线均衡求解](https:\u002F\u002Fgithub.com\u002FSecurityGames\u002Foef) [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.05285)]\n  - 李淑欣、王新润、雅库布·切尔尼、张友志、侯灿和安博。\n- [ExORL：离线强化学习的探索性数据](https:\u002F\u002Fgithub.com\u002Fdenisyarats\u002Fexorl) [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.13425)]\n  - 丹尼斯·亚拉茨、大卫·布兰德丰布雷纳、刘浩、迈克尔·拉斯金、皮特·阿贝尔、亚历山德罗·拉扎里克和莱雷尔·平托。\n- [RL4RS：基于强化学习的推荐系统的现实世界基准](https:\u002F\u002Fgithub.com\u002FfuxiAIlab\u002FRL4RS) [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.11073)] [[数据集](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1YbPtPyYrMvMGOuqD4oHvK0epDtEhEb9v\u002Fview)]\n  - 王凯、邹哲宁、尚悦、邓启林、赵明昊、梁亦乐、吴润泽、陶建荣、沈旭东、吕唐杰和范昌杰。\n- [NeoRL：接近现实世界的离线强化学习基准](https:\u002F\u002Fagit.ai\u002FPolixir\u002Fneorl) [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.00714)] [[官网](http:\u002F\u002Fpolixir.ai\u002Fresearch\u002Fneorl)]\n  - 秦荣军、高松毅、张星远、许振、黄圣凯、李泽文、张伟楠和于洋。\n- [工业级离线 RL 数据集](https:\u002F\u002Fgithub.com\u002Fsiemens\u002Findustrialbenchmark\u002Ftree\u002Foffline_datasets\u002Fdatasets) [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.05533)]\n  - 菲利普·斯瓦津纳、施特芬·乌德卢夫特和托马斯·伦克勒。\n- [ARLO：自动化强化学习框架](https:\u002F\u002Fgithub.com\u002Farlo-lib\u002FARLO) [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.10416)]\n  - 马可·穆西、达维德·隆巴尔达、阿尔贝托·玛丽亚·梅泰利、弗朗切斯科·特罗沃和马切洛·雷斯泰利。\n- [RecoGym：用于在线广告中产品推荐问题的强化学习环境](https:\u002F\u002Fgithub.com\u002Fcriteo-research\u002Freco-gym) [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1808.00720)]\n  - 
大卫·罗德、斯蒂芬·邦纳、特拉维斯·邓洛普、弗拉维安·瓦西莱和亚历山德罗斯·卡拉佐格鲁。\n- [MARS-Gym：用于建模、训练和评估电商平台推荐系统的 Gym 框架](https:\u002F\u002Fgithub.com\u002Fdeeplearningbrasil\u002Fmars-gym) [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.07035)] [[文档](https:\u002F\u002Fmars-gym.readthedocs.io\u002Fen\u002Flatest\u002F)]\n  - 马尔莱松·R·O·桑塔纳、卢克西安诺·C·梅洛、费尔南多·H·F·卡马戈、布鲁诺·布兰达昂、安德森·索亚雷斯、雷南·M·奥利维拉和桑多尔·卡埃塔诺。\n- [基于强化学习的电压-无功功率控制数据集](https:\u002F\u002Fgithub.com\u002Fyg-smile\u002FRL_VVC_dataset) [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.09500)]\n  - 高元琪和于楠鹏。\n\n## 博客\u002F播客\n\n### 博客\n- [推荐系统的反事实评估](https:\u002F\u002Feugeneyan.com\u002Fwriting\u002Fcounterfactual-evaluation\u002F)\n  - 尤金·严。2022年。\n- [离线强化学习：保守算法如何推动新应用](https:\u002F\u002Fbair.berkeley.edu\u002Fblog\u002F2020\u002F12\u002F07\u002Foffline\u002F)\n  - 阿维拉尔·库马尔和阿维·辛格。BAIR博客，2020年。\n- [AWAC：利用离线数据集加速在线强化学习](https:\u002F\u002Fbair.berkeley.edu\u002Fblog\u002F2020\u002F09\u002F10\u002Fawac\u002F)\n  - 阿什文·奈尔和阿比谢克·古普塔。BAIR博客，2020年。\n- [D4RL：构建更好的离线强化学习基准](https:\u002F\u002Fbair.berkeley.edu\u002Fblog\u002F2020\u002F06\u002F25\u002FD4RL\u002F)\n  - 贾斯汀·傅。BAIR博客，2020年。\n- [基于策略的数据收集能否修复离线强化学习中的误差？](https:\u002F\u002Fbair.berkeley.edu\u002Fblog\u002F2020\u002F03\u002F16\u002Fdiscor\u002F)\n  - 阿维拉尔·库马尔和阿比谢克·古普塔。BAIR博客，2020年。\n- [应对离线强化学习中的开放性挑战](https:\u002F\u002Fai.googleblog.com\u002F2020\u002F08\u002Ftackling-open-challenges-in-offline.html)\n  - 乔治·塔克和谢尔盖·列文。谷歌AI博客，2020年。\n- [对离线强化学习的乐观视角](https:\u002F\u002Fai.googleblog.com\u002F2020\u002F04\u002Fan-optimistic-perspective-on-offline.html)\n  - 里沙布·阿加瓦尔和穆罕默德·诺鲁齐。谷歌AI博客，2020年。\n- [数据驱动的决策：离线强化学习将如何改变我们使用机器学习的方式](https:\u002F\u002Fmedium.com\u002F@sergey.levine\u002Fdecisions-from-data-how-offline-reinforcement-learning-will-change-how-we-use-ml-24d98cb069b0)\n  - 谢尔盖·列文。Medium，2020年。\n- [介绍完全免费的数据驱动深度强化学习数据集](https:\u002F\u002Ftowardsdatascience.com\u002Fintroducing-completely-free-datasets-for-data-driven-deep-reinforcement-learning-a51e9bed85f9)\n  - 瀬能拓马。Towards Data 
Science，2020年。\n- [离线（批量）强化学习：文献与应用综述](https:\u002F\u002Fdanieltakeshi.github.io\u002F2020\u002F06\u002F28\u002Foffline-rl\u002F)\n  - 丹尼尔·塞塔。danieltakeshi.github.io，2020年。\n- [数据驱动的深度强化学习](https:\u002F\u002Fbair.berkeley.edu\u002Fblog\u002F2019\u002F12\u002F05\u002Fbear\u002F)\n  - 阿维拉尔·库马尔。BAIR博客，2019年。\n\n### 播客\n- [2023年人工智能趋势：强化学习——RLHF、机器人预训练与离线强化学习，对话谢尔盖·列文](https:\u002F\u002Ftwimlai.com\u002Fpodcast\u002Ftwimlai\u002Fai-trends-2023-reinforcement-learning-rlhf-robotic-pre-training-and-offline-rl\u002F)\n  - 谢尔盖·列文。TWIML，2023年。\n- [推荐系统中的多臂老虎机与模拟器，对话奥利维尔·热南](https:\u002F\u002Fopen.spotify.com\u002Fepisode\u002F35a8asBV1wBp8vIXr59Oz9)\n  - 奥利维尔·热南。Recsperts，2022年。\n- [谢尔盖·列文谈机器人学习与离线强化学习](https:\u002F\u002Fthegradientpub.substack.com\u002Fp\u002Fsergey-levine-on-robot-learning-and)\n  - 谢尔盖·列文。The Gradient，2021年。\n- [Facebook中用于现实世界决策的离线、离策略强化学习](https:\u002F\u002Ftwimlai.com\u002Foff-line-off-policy-rl-for-real-world-decision-making-at-facebook\u002F)\n  - 杰森·高奇。TWIML，2021年。\n- [詹先元 | TalkRL：强化学习播客](https:\u002F\u002Fwww.talkrl.com\u002Fepisodes\u002Fxianyuan-zhan)\n  - 詹先元。TalkRL，2021年。\n- [MOReL：基于模型的离线强化学习，对话阿拉文德·拉杰斯瓦兰](https:\u002F\u002Ftwimlai.com\u002Fmorel-model-based-offline-reinforcement-learning-with-aravind-rajeswaran\u002F)\n  - 阿拉文德·拉杰斯瓦兰。TWIML，2020年。\n- [强化学习趋势，对话切尔西·芬恩](https:\u002F\u002Ftwimlai.com\u002Ftwiml-talk-335-trends-in-reinforcement-learning-with-chelsea-finn\u002F)\n  - 切尔西·芬恩。TWIML，2020年。\n- [江楠 | TalkRL：强化学习播客](https:\u002F\u002Fwww.talkrl.com\u002Fepisodes\u002Fnan-jiang)\n  - 江楠。TalkRL，2020年。\n- [斯科特·藤本 | TalkRL：强化学习播客](https:\u002F\u002Fwww.talkrl.com\u002Fepisodes\u002Fscott-fujimoto)\n  - 斯科特·藤本。TalkRL，2019年。\n\n## 相关研讨会\n- [CONSEQUENCES（RecSys 2023）](https:\u002F\u002Fsites.google.com\u002Fview\u002Fconsequences2023)\n- [离线强化学习（NeurIPS 2022）](https:\u002F\u002Foffline-rl-neurips.github.io\u002F2022\u002F)\n- [面向真实生活的强化学习（NeurIPS 2022）](https:\u002F\u002Fsites.google.com\u002Fview\u002FRL4RealLife)\n- [CONSEQUENCES + 
REVEAL（RecSys 2022）](https:\u002F\u002Fsites.google.com\u002Fview\u002Fconsequences2022)\n- [离线强化学习（NeurIPS 2021）](https:\u002F\u002Foffline-rl-neurips.github.io\u002F2021\u002F)\n- [面向真实生活的强化学习（ICML 2021）](https:\u002F\u002Fsites.google.com\u002Fview\u002FRL4RealLife)\n- [强化学习日2021](https:\u002F\u002Fwww.microsoft.com\u002Fen-us\u002Fresearch\u002Fevent\u002Freinforcement-learning-day-2021\u002F)\n- [离线强化学习（NeurIPS 2020）](https:\u002F\u002Foffline-rl-neurips.github.io\u002F)\n- [基于批处理数据与仿真的强化学习](https:\u002F\u002Fsimons.berkeley.edu\u002Fworkshops\u002Fschedule\u002F14240)\n- [面向真实生活的强化学习（RL4RealLife 2020）](https:\u002F\u002Fsites.google.com\u002Fview\u002FRL4RealLife2020)\n- [决策中的安全与鲁棒性（NeurIPS 2019）](https:\u002F\u002Fsites.google.com\u002Fview\u002Fneurips19-safe-robust-workshop)\n- [面向真实生活的强化学习（ICML 2019）](https:\u002F\u002Fsites.google.com\u002Fview\u002FRL4RealLife2019)\n- [现实世界的序列决策（ICML 2019）](https:\u002F\u002Frealworld-sdm.github.io\u002F)\n\n## 教程\u002F演讲\u002F讲座\n- [基于大规模数据的强化学习：机器人、图像生成与大语言模型](https:\u002F\u002Fwww.youtube.com\u002Fwatch?app=desktop&v=Iu_Uux0R0BI&feature=youtu.be)\n  - 谢尔盖·列文。2023年。\n- [交互式系统的反事实评估与学习](https:\u002F\u002Fcounterfactual-ml.github.io\u002Fkdd2022-tutorial\u002F)\n  - 斋藤悠太和托斯滕·约阿希姆斯。KDD2022。\n- [低秩马尔可夫决策过程中的在线与离线强化学习表示学习](https:\u002F\u002Fm.youtube.com\u002Fwatch?v=EynREeip-y8)\n  - 植原正敏。RL理论研讨会2022。\n- [离线强化学习：值函数近似的基本障碍](https:\u002F\u002Fyoutu.be\u002FQS2xVHgBg-k)\n  - 徐云宗。RL理论研讨会2022。\n- [通过外推进行安全策略学习：应用于审前风险评估](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=Gd2-MxJQTKA)\n  - 今井康介。在线因果推断研讨会2022。\n- [使用真实世界数据的深度强化学习](https:\u002F\u002Fm.youtube.com\u002Fwatch?v=0Kw-VTym9Pg)\n  - 谢尔盖·列文。2022年。\n- [强化学习中的规划](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=T39xkKN7uwo)\n  - 谢尔盖·列文。2022年。\n- [模仿学习 vs. 
离线强化学习](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=sVPm7zOrBxM)\n  - 谢尔盖·列文。2022年。\n- [离线强化学习基础教程](https:\u002F\u002Fwww.youtube.com\u002Fwatch?app=desktop&v=lH9DzugrejY)\n  - 罗曼·拉罗什和大卫·布兰德丰布雷纳。2022年。\n- [推荐系统的反事实学习与评估：基础、实现及最新进展](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=HMo9fQMVB4w) [[网站](https:\u002F\u002Fsites.google.com\u002Fcornell.edu\u002Frecsys2021tutorial)]\n  - 斋藤悠太和托斯滕·约阿希姆斯。RecSys2021。\n- [离线强化学习](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=k08N5a0gG0A)\n  - 谢尔盖·列文。BayLearn2021。\n- [离线强化学习](https:\u002F\u002Fm.youtube.com\u002Fwatch?v=Es2G8FDl-Nc)\n  - 盖伊·滕嫩霍尔茨。CHIL2021。\n- [离线强化学习遗憾的快速收敛率](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=eGZ-2JU9zKE)\n  - 胡一春。RL理论研讨会2021。\n- [离线强化学习中的贝尔曼一致性悲观主义](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=g_yD6Yw8MLQ)\n  - 谢腾燕。RL理论研讨会2021。\n- [部分覆盖下的悲观模型驱动离线强化学习](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=aPce6Y-NqpQ)\n  - 植原正敏。RL理论研讨会2021。\n- [连接离线强化学习与模仿学习：关于悲观主义的故事](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=T1Am0bGzH4A)\n  - 帕里亚·拉希迪内贾德。RL理论研讨会2021。\n- [具有线性函数近似的无限时域离线强化学习：维度灾难与算法](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=uOIvo1wQ_RQ)\n  - 陈琳。RL理论研讨会2021。\n- [悲观主义对离线 RL 是否具有可证明的效率？](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=vCQsZ5pzHPk)\n  - 金莹。RL理论研讨会2021。\n- [用于离策略评估的自适应估计器选择](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=r8ZDuC71lCs)\n  - 苏毅。RL理论研讨会2021。\n- [具有线性函数近似的离线 RL 的统计极限是什么？](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=FkkphMeFapg)\n  - 王若松。RL理论研讨会2021。\n- [批处理强化学习的指数级下界：批处理 RL 可能比在线 RL 难上指数倍](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=YktnEdsxYfc&feature=youtu.be)\n  - 安德烈娅·扎内特。RL理论研讨会2021。\n- [离线强化学习的温和入门](https:\u002F\u002Fm.youtube.com\u002Fwatch?v=tW-BNW1ApN8&feature=youtu.be)\n  - 谢尔盖·列文。2021年。\n- [应对分布偏移的原则：悲观主义、适应与预见](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=QKBh6TmvBaw)\n  - 切尔西·芬恩。2020–2021年机器学习进展与应用研讨会。\n- [离线强化学习：将数据中的知识融入强化学习](https:\u002F\u002Fm.youtube.com\u002Fwatch?v=KzZFN8zUxkI&feature=youtu.be)\n  - 谢尔盖·列文。IJCAI-PRICAI2020 
知识型强化学习研讨会。\n- [离线 RL](https:\u002F\u002Fslideslive.com\u002F38938455\u002Foffline-rl)\n  - 南多·德·弗雷塔斯。NeurIPS2020 离线 RL 研讨会。\n- [从离线演示中学习多智能体模拟器](https:\u002F\u002Fslideslive.com\u002F38938458\u002Flearning-a-multiagent-simulator-from-offline-demonstrations)\n  - 布兰丁·怀特。NeurIPS2020 离线 RL 研讨会。\n- [迈向离线 RL 的可靠验证与评估](https:\u002F\u002Fslideslive.com\u002F38938459\u002Ftowards-reliable-validation-and-evaluation-for-offline-rl)\n  - 蒋楠。NeurIPS2020 离线 RL 研讨会。\n- [为验证而构建的批处理 RL 模型](https:\u002F\u002Fslideslive.com\u002F38938457\u002Fbatch-rl-models-built-for-validation)\n  - 费奈尔·多希-维莱兹。NeurIPS2020 离线 RL 研讨会。\n- [离线强化学习：从算法到实际挑战](https:\u002F\u002Fsites.google.com\u002Fview\u002Fofflinerltutorial-neurips2020\u002Fhome)\n  - 阿维拉尔·库马尔和谢尔盖·列文。NeurIPS2020。\n- [机器人学习的数据可扩展性](https:\u002F\u002Fyoutu.be\u002FLGlgSeWemcM)\n  - 切尔西·芬恩。RI研讨会2020。\n- [统计高效的离线强化学习](https:\u002F\u002Fyoutu.be\u002Fn5ZoxT_WmHo)\n  - 内森·卡卢斯。ARL研讨会2020。\n- [强化学习离策略评估中的近乎最优的可证明一致收敛](https:\u002F\u002Fyoutu.be\u002FFWZewbQykv4)\n  - 王宇翔。RL理论研讨会2020。\n- [具有线性函数近似的极小极大离策略评估](https:\u002F\u002Fyoutu.be\u002FTX9KBofFZ8s)\n  - 王梦迪。RL理论研讨会2020。\n- [超越训练分布：具身性、适应性与对称性](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=wv1zXnxRCCM&feature=youtu.be)\n  - 切尔西·芬恩。EI研讨会2020。\n- [在批量设置中结合统计方法与人工输入进行评估和优化](https:\u002F\u002Fslideslive.com\u002F38922630\u002Fcombining-statistical-methods-with-human-input-for-evaluation-and-optimization-in-batch-settings)\n  - 费奈尔·多希-维莱兹。NeurIPS2019 关于决策中的安全与鲁棒性的研讨会。\n- [利用双重强化学习高效打破视野诅咒](https:\u002F\u002Fslideslive.com\u002F38922636\u002Fefficiently-breaking-the-curse-of-horizon-with-double-reinforcement-learning)\n  - 内森·卡卢斯。NeurIPS2019 关于决策中的安全与鲁棒性的研讨会。\n- [将概率安全学习扩展到机器人领域](https:\u002F\u002Fslideslive.com\u002F38922637\u002Fscaling-probabilistically-safe-learning-to-robotics?locale=en)\n  - 斯科特·尼库姆。NeurIPS2019 关于决策中的安全与鲁棒性的研讨会。\n- [现实世界中的深度强化学习](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=b97H5uz8xkI)\n  - 谢尔盖·列文。2019年强化学习与控制新方向研讨会。","# awesome-offline-rl 
快速上手指南\n\n`awesome-offline-rl` 并非一个可直接安装运行的软件库或框架，而是一个**离线强化学习（Offline RL）领域的学术资源汇总列表**。它收集了该领域重要的研究论文、综述、基准测试、开源实现链接以及相关教程。\n\n本指南旨在帮助开发者高效利用该仓库获取学习资料和寻找可用的开源代码实现。\n\n## 环境准备\n\n由于本项目本质是一个文档索引（Awesome List），**无需安装任何特定的 Python 包或系统依赖**即可浏览内容。\n\n若您需要运行列表中链接的具体算法实现，通常需要具备以下基础环境：\n*   **操作系统**: Linux, macOS 或 Windows\n*   **Python**: 建议版本 3.8+\n*   **深度学习框架**: PyTorch 或 TensorFlow (取决于具体论文的实现)\n*   **Git**: 用于克隆仓库或下载代码\n\n## 获取与浏览步骤\n\n### 1. 克隆仓库\n使用 Git 将资源列表下载到本地，以便离线浏览或搜索。\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl.git\ncd awesome-offline-rl\n```\n\n> **国内加速提示**: 如果访问 GitHub 较慢，可使用国内镜像源克隆：\n> ```bash\n> git clone https:\u002F\u002Fgitee.com\u002Fmirrors\u002Fawesome-offline-rl.git\n> ```\n> *(注：若 Gitee 无同步镜像，建议使用科学上网工具或直接访问 GitHub 网页版)*\n\n### 2. 浏览资源\n您可以直接在 GitHub 网页或通过本地 Markdown 阅读器查看 `README.md` 文件。核心资源分类如下：\n\n*   **综述与调研 (Review\u002FSurvey)**: 适合入门，了解 Offline RL 的基本概念、分类及开放性问题。\n    *   推荐起点：*[Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2005.01643)*\n*   **理论与方法 (Theory\u002FMethods)**: 包含最新的算法论文（如基于 Diffusion、Transformer 的方法）。\n*   **基准与实验 (Benchmarks\u002FExperiments)**: 查找标准数据集和评估指标。\n*   **开源实现 (Open Source Software)**: **这是开发者最关注的部分**，列表中包含了指向具体算法代码库的链接。\n\n## 基本使用示例\n\n假设您想复现列表中的某个算法（例如 `CQL` 或 `IQL`），请按以下步骤操作：\n\n### 第一步：在列表中定位实现\n在 `README.md` 的 **[Open Source Software\u002FImplementations](https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl\u002Ftree\u002Fmain#open-source-softwareimplementations)** 章节查找对应的代码库链接。\n\n### 第二步：克隆具体算法仓库\n找到链接后（例如 `d3rlpy` 或 `CORL`），执行克隆命令。以通用的 Python 强化学习库为例：\n\n```bash\n# 示例：克隆一个典型的离线 RL 实现库 (此处以 d3rlpy 为例，具体请根据 README 中的链接决定)\ngit clone https:\u002F\u002Fgithub.com\u002Ftakuseno\u002Fd3rlpy.git\ncd d3rlpy\n```\n\n### 第三步：安装依赖并运行\n进入具体的算法仓库后，按照该仓库的说明进行安装和测试：\n\n```bash\n# 安装依赖\npip install -e .\n\n# 运行一个简单的离线训练示例 (命令视具体仓库而定)\npython 
examples\u002Ftrain_offline.py --dataset cartpole-v0 --algorithm cql\n```\n\n## 贡献与反馈\n如果您发现了新的优质论文或开源项目，可以通过 Pull Request 向该仓库贡献内容：\n1.  Fork 本仓库。\n2.  按照 README 中现有条目的格式规范添加新条目。\n3.  提交 PR 至 [主仓库](https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso\u002Fawesome-offline-rl\u002Fpulls)。\n\n如有疑问，可联系维护者：hk844@cornell.edu","某自动驾驶初创公司的算法团队正试图利用历史路测数据训练决策模型，但受限于实车测试的高成本与安全风险，无法进行大规模的在线交互探索。\n\n### 没有 awesome-offline-rl 时\n- **文献检索如大海捞针**：团队成员需手动在 arXiv 和各大会议网站搜索\"Offline RL\"相关论文，耗时数周仍难以覆盖最新的核心算法与综述。\n- **理论边界模糊不清**：缺乏系统的综述指引，工程师难以区分哪些方法仅停留在理论阶段，哪些已具备实际落地的基准测试支持，导致选型盲目。\n- **复现成本极高**：找不到权威的开源实现列表，团队不得不从零复现基础算法，常常因细节缺失而陷入调试泥潭，严重拖慢研发进度。\n- **评估标准缺失**：不了解离线策略评估（Off-Policy Evaluation）的最佳实践，无法在不部署上车的情况下准确预判模型性能，增加了试错风险。\n\n### 使用 awesome-offline-rl 后\n- **一站式资源索引**：直接通过分类目录获取从基础理论到前沿应用的全量论文清单，半天内即可构建完整的知识图谱。\n- **精准技术选型**：借助“综述与立场论文”板块，快速掌握不同算法的适用场景与局限性，迅速锁定适合当前数据分布的 SOTA 方法。\n- **加速工程落地**：利用“开源软件\u002F实现”章节找到经过验证的代码库，将算法复现周期从数周缩短至几天，让团队专注于业务逻辑优化。\n- **科学验证闭环**：参考“离线策略评估”部分的基准与实验指南，建立起可靠的仿真评估体系，在零实车风险下完成模型迭代验证。\n\nawesome-offline-rl 将原本分散杂乱的学术资源转化为结构化的工程导航图，极大地降低了离线强化学习技术的入门门槛与落地成本。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhanjuku-kaso_awesome-offline-rl_c1e1fe06.png","hanjuku-kaso","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fhanjuku-kaso_df6adf9f.png",null,"https:\u002F\u002Fgithub.com\u002Fhanjuku-kaso",1062,93,"2026-04-13T08:16:27","","未说明",{"notes":87,"python":85,"dependencies":88},"该项目是一个离线强化学习（Offline RL）和离策略评估（Off-Policy Evaluation）的论文与资源清单集合，并非可执行的软件工具或代码库，因此没有具体的运行环境、依赖库或硬件需求。用户只需通过浏览器访问 GitHub 页面查看列表，或克隆仓库获取文本资料。",[],[18],[91,92,93,94,95,96],"reinforcement-learning","off-policy-evaluation","awesome","awesome-list","research","offline-rl","2026-03-27T02:49:30.150509","2026-04-20T07:20:35.047594",[],[]]