[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-opendilab--awesome-multi-modal-reinforcement-learning":3,"tool-opendilab--awesome-multi-modal-reinforcement-learning":65},[4,23,32,40,49,57],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":22},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,2,"2026-04-05T10:45:23",[13,14,15,16,17,18,19,20,21],"图像","数据工具","视频","插件","Agent","其他","语言模型","开发框架","音频","ready",{"id":24,"name":25,"github_repo":26,"description_zh":27,"stars":28,"difficulty_score":29,"last_commit_at":30,"category_tags":31,"status":22},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,3,"2026-04-04T04:44:48",[17,13,20,19,18],{"id":33,"name":34,"github_repo":35,"description_zh":36,"stars":37,"difficulty_score":29,"last_commit_at":38,"category_tags":39,"status":22},519,"PaddleOCR","PaddlePaddle\u002FPaddleOCR","PaddleOCR 是一款基于百度飞桨框架开发的高性能开源光学字符识别工具包。它的核心能力是将图片、PDF 等文档中的文字提取出来，转换成计算机可读取的结构化数据，让机器真正“看懂”图文内容。\n\n面对海量纸质或电子文档，PaddleOCR 解决了人工录入效率低、数字化成本高的问题。尤其在人工智能领域，它扮演着连接图像与大型语言模型（LLM）的桥梁角色，能将视觉信息直接转化为文本输入，助力智能问答、文档分析等应用场景落地。\n\nPaddleOCR 适合开发者、算法研究人员以及有文档自动化需求的普通用户。其技术优势十分明显：不仅支持全球 100 多种语言的识别，还能在 Windows、Linux、macOS 等多个系统上运行，并灵活适配 CPU、GPU、NPU 等各类硬件。作为一个轻量级且社区活跃的开源项目，PaddleOCR 既能满足快速集成的需求，也能支撑前沿的视觉语言研究，是处理文字识别任务的理想选择。",74913,"2026-04-05T10:44:17",[19,13,20,18],{"id":41,"name":42,"github_repo":43,"description_zh":44,"stars":45,"difficulty_score":46,"last_commit_at":47,"category_tags":48,"status":22},3215,"awesome-machine-learning","josephmisiti\u002Fawesome-machine-learning","awesome-machine-learning 是一份精心整理的机器学习资源清单，汇集了全球优秀的机器学习框架、库和软件工具。面对机器学习领域技术迭代快、资源分散且难以甄选的痛点，这份清单按编程语言（如 Python、C++、Go 等）和应用场景（如计算机视觉、自然语言处理、深度学习等）进行了系统化分类，帮助使用者快速定位高质量项目。\n\n它特别适合开发者、数据科学家及研究人员使用。无论是初学者寻找入门库，还是资深工程师对比不同语言的技术选型，都能从中获得极具价值的参考。此外，清单还延伸提供了免费书籍、在线课程、行业会议、技术博客及线下聚会等丰富资源，构建了从学习到实践的全链路支持体系。\n\n其独特亮点在于严格的维护标准：明确标记已停止维护或长期未更新的项目，确保推荐内容的时效性与可靠性。作为机器学习领域的“导航图”，awesome-machine-learning 以开源协作的方式持续更新，旨在降低技术探索门槛，让每一位从业者都能高效地站在巨人的肩膀上创新。",72149,1,"2026-04-03T21:50:24",[20,18],{"id":50,"name":51,"github_repo":52,"description_zh":53,"stars":54,"difficulty_score":46,"last_commit_at":55,"category_tags":56,"status":22},2234,"scikit-learn","scikit-learn\u002Fscikit-learn","scikit-learn 是一个基于 Python 构建的开源机器学习库，依托于 SciPy、NumPy 等科学计算生态，旨在让机器学习变得简单高效。它提供了一套统一且简洁的接口，涵盖了从数据预处理、特征工程到模型训练、评估及选择的全流程工具，内置了包括线性回归、支持向量机、随机森林、聚类等在内的丰富经典算法。\n\n对于希望快速验证想法或构建原型的数据科学家、研究人员以及 Python 开发者而言，scikit-learn 是不可或缺的基础设施。它有效解决了机器学习入门门槛高、算法实现复杂以及不同模型间调用方式不统一的痛点，让用户无需重复造轮子，只需几行代码即可调用成熟的算法解决分类、回归、聚类等实际问题。\n\n其核心技术亮点在于高度一致的 API 
设计风格，所有估算器（Estimator）均遵循相同的调用逻辑，极大地降低了学习成本并提升了代码的可读性与可维护性。此外，它还提供了强大的模型选择与评估工具，如交叉验证和网格搜索，帮助用户系统地优化模型性能。作为一个由全球志愿者共同维护的成熟项目，scikit-learn 以其稳定性、详尽的文档和活跃的社区支持，成为连接理论学习与工业级应用的最",65628,"2026-04-05T10:10:46",[20,18,14],{"id":58,"name":59,"github_repo":60,"description_zh":61,"stars":62,"difficulty_score":10,"last_commit_at":63,"category_tags":64,"status":22},3364,"keras","keras-team\u002Fkeras","Keras 是一个专为人类设计的深度学习框架，旨在让构建和训练神经网络变得简单直观。它解决了开发者在不同深度学习后端之间切换困难、模型开发效率低以及难以兼顾调试便捷性与运行性能的痛点。\n\n无论是刚入门的学生、专注算法的研究人员，还是需要快速落地产品的工程师，都能通过 Keras 轻松上手。它支持计算机视觉、自然语言处理、音频分析及时间序列预测等多种任务。\n\nKeras 3 的核心亮点在于其独特的“多后端”架构。用户只需编写一套代码，即可灵活选择 TensorFlow、JAX、PyTorch 或 OpenVINO 作为底层运行引擎。这一特性不仅保留了 Keras 一贯的高层易用性，还允许开发者根据需求自由选择：利用 JAX 或 PyTorch 的即时执行模式进行高效调试，或切换至速度最快的后端以获得最高 350% 的性能提升。此外，Keras 具备强大的扩展能力，能无缝从本地笔记本电脑扩展至大规模 GPU 或 TPU 集群，是连接原型开发与生产部署的理想桥梁。",63927,"2026-04-04T15:24:37",[20,14,18],{"id":66,"github_repo":67,"name":68,"description_en":69,"description_zh":70,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":79,"owner_company":80,"owner_location":80,"owner_email":81,"owner_twitter":77,"owner_website":80,"owner_url":82,"languages":80,"stars":83,"forks":84,"last_commit_at":85,"license":86,"difficulty_score":46,"env_os":87,"env_gpu":88,"env_ram":88,"env_deps":89,"category_tags":92,"github_topics":80,"view_count":10,"oss_zip_url":80,"oss_zip_packed_at":80,"status":22,"created_at":93,"updated_at":94,"faqs":95,"releases":96},3439,"opendilab\u002Fawesome-multi-modal-reinforcement-learning","awesome-multi-modal-reinforcement-learning","A curated list of Multi-Modal Reinforcement Learning resources (continually updated)","awesome-multi-modal-reinforcement-learning 是一个持续更新的多模态强化学习（MMRL）资源精选列表，旨在汇聚该领域的前沿研究论文与技术成果。它主要解决了研究人员在面对海量学术文献时难以快速定位高质量、跨模态（如视觉图像与自然语言文本）强化学习资料的痛点。通过系统性地整理来自 NeurIPS、ICML、ICLR 等顶级会议的最新论文，该项目为探索如何让智能体像人类一样直接从视频或文本中学习提供了清晰的路径。\n\n这份资源特别适合人工智能领域的研究人员、算法工程师以及对多模态学习感兴趣的开发者使用。无论是希望追踪最新学术动态，还是寻找特定实验环境下的基准测试参考，用户都能从中获得宝贵线索。其独特亮点在于不仅收录了严格意义上的强化学习论文，还包容性地纳入了虽非直接相关但对 MMRL 研究具有启发意义的跨学科成果，并详细标注了每篇论文的核心关键词、作者团队及实验环境，极大地提升了文献调研的效率与深度。作为一个由社区共同维护的开源项目，它正成为连接理论创新与实际应用的重要桥梁。","# Awesome Multi-Modal Reinforcement Learning \n[![Awesome](https:\u002F\u002Fcdn.rawgit.com\u002Fsindresorhus\u002Fawesome\u002Fd7305f38d29fed78fa85652e3a63e154dd8e8829\u002Fmedia\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fsindresorhus\u002Fawesome) \n![visitor badge](https:\u002F\u002Fvisitor-badge.lithub.cc\u002Fbadge?page_id=opendilab.awesome-multi-modal-reinforcement-learning&left_text=Visitors)\n[![docs](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdocs-latest-blue)](https:\u002F\u002Fgithub.com\u002Fopendilab\u002Fawesome-multi-modal-reinforcement-learning)\n![GitHub stars](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fopendilab\u002Fawesome-multi-modal-reinforcement-learning?color=yellow)\n![GitHub forks](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fforks\u002Fopendilab\u002Fawesome-multi-modal-reinforcement-learning?color=9cf)\n[![GitHub license](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Fopendilab\u002Fawesome-multi-modal-reinforcement-learning)](https:\u002F\u002Fgithub.com\u002Fopendilab\u002Fawesome-multi-modal-reinforcement-learning\u002Fblob\u002Fmain\u002FLICENSE)\n\nThis is a collection of research papers for **Multi-Modal reinforcement learning (MMRL)**.\nAnd the repository will be continuously updated to track the frontier of MMRL.\nSome papers may not be 
relevant to RL, but we include them anyway as they may be useful for the research of MMRL.\n\nWelcome to follow and star!\n\n## Introduction\n\nMulti-Modal RL agents focus on learning from video (images), language (text), or both, as humans do. We believe that it is important for intelligent agents to learn directly from images or text, since such data can be easily obtained from the Internet.\n\n![飞书20220922-161353](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_awesome-multi-modal-reinforcement-learning_readme_f83d0258618e.png)\n\n## Table of Contents\n\n- [Awesome Multi-Modal Reinforcement Learning](#awesome-multi-modal-reinforcement-learning)\n  - [Introduction](#introduction)\n  - [Table of Contents](#table-of-contents)\n  - [Papers](#papers)\n    - [NeurIPS 2025](#neurips-2025)\n    - [ICML 2025](#icml-2025)\n    - [ICLR 2025](#iclr-2025)\n    - [ICLR 2024](#iclr-2024)\n    - [ICLR 2023](#iclr-2023)\n    - [ICLR 2022](#iclr-2022)\n    - [ICLR 2021](#iclr-2021)\n    - [ICLR 2019](#iclr-2019)\n    - [NeurIPS 2024](#neurips-2024)\n    - [NeurIPS 2023](#neurips-2023)\n    - [NeurIPS 2022](#neurips-2022)\n    - [NeurIPS 2021](#neurips-2021)\n    - [NeurIPS 2018](#neurips-2018)\n    - [ICML 2024](#icml-2024)\n    - [ICML 2022](#icml-2022)\n    - [ICML 2019](#icml-2019)\n    - [ICML 2017](#icml-2017)\n    - [CVPR 2024](#cvpr-2024)\n    - [CVPR 2022](#cvpr-2022)\n    - [CoRL 2022](#corl-2022)\n    - [Other](#other)\n    - [ArXiv](#arxiv)\n  - [Contributing](#contributing)\n  - [License](#license)\n\n## Papers\n\n```\nformat:\n- [title](paper link) [links]\n  - authors.\n  - key words.\n  - experiment environment.\n```\n\n### NeurIPS 2025\n\n- [PRIMT: Preference-based Reinforcement Learning with Multimodal Feedback and Trajectory Synthesis from Foundation Models](https:\u002F\u002Fopenreview.net\u002Fforum?id=4xvE6Iy77Y)\n  - Ruiqi Wang, Dezhong Zhao, Ziqin Yuan, Tianyu Shao, Guohua Chen, Dominic Kao, Sungeun Hong, Byung-Cheol Min\n  - Keywords: Preference-based Reinforcement Learning, Foundation Models for Robotics, Neuro-Symbolic Fusion, Multimodal Feedback, Causal Inference, Trajectory Synthesis, Robot Manipulation\n  - ExpEnv: 2 locomotion and 6 manipulation tasks\n\n- [Co-Reinforcement Learning for Unified Multimodal Understanding and Generation](https:\u002F\u002Fopenreview.net\u002Fpdf\u002F89f6d8ae4000d3f0c62faca1308194e0858b9a65.pdf)\n  - Jingjing Jiang, Chongjie Si, Jun Luo, Hanwang Zhang, Chao Ma\n  - Keywords: Reinforcement Learning, GRPO, Unified Multimodal Understanding and Generation\n  - ExpEnv: text-to-image generation and multimodal understanding benchmarks\n\n- [VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning](https:\u002F\u002Ftiger-ai-lab.github.io\u002FVL-Rethinker\u002F)\n  - Haozhe Wang, Chao Qu, Zuming Huang, Wei Chu, Fangzhen Lin, Wenhu Chen\n  - Keywords: Vision-Language Models, Reasoning, Reinforcement Learning\n  - ExpEnv: MathVista, MathVerse, MathVision, MMMU-Pro, EMMA, MEGA-Bench\n\n- [SoTA with Less: MCTS-Guided Sample Selection for Data-Efficient Visual Reasoning Self-Improvement](https:\u002F\u002Fopenreview.net\u002Fforum?id=PHu9xJeAum&referrer=%5Bthe%20profile%20of%20Furong%20Huang%5D(%2Fprofile%3Fid%3D~Furong_Huang1))\n  - Xiyao Wang, Zhengyuan Yang, Chao Feng, Hongjin Lu, Linjie Li, Chung-Ching Lin, Kevin Lin, Furong Huang, Lijuan Wang\n  - Keywords: vision language model; reinforcement finetuning; vlm reasoning; data selection\n  - ExpEnv: MathVista and other visual reasoning 
benchmarks\n\n- [VisualQuality-R1: Reasoning-Induced Image Quality Assessment via Reinforcement Learning to Rank](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.14460)\n  - Tianhe Wu, Jian Zou, Jie Liang, Lei Zhang, Kede Ma\n  - Keywords: Image Quality Assessment, Reinforcement Learning, Reasoning-induced no-reference IQA model\n  - ExpEnv: Image quality assessment benchmarks\n\n- [Q-Insight: Understanding Image Quality via Visual Reinforcement Learning](https:\u002F\u002Fopenreview.net\u002Fforum?id=Bds54EfR9x)\n  - Weiqi Li, Xuanyu Zhang, Shijie Zhao, Yabin ZHANG, Junlin Li, Li zhang, Jian Zhang\n  - Keywords: image quality understanding, multi-modal large language model, reinforcement learning\n  - ExpEnv: Image quality understanding benchmarks\n\n- [To Think or Not To Think: A Study of Thinking in Rule-Based Visual Reinforcement Fine-Tuning](https:\u002F\u002Fopenreview.net\u002Fforum?id=YexxvBGwQM&referrer=%5Bthe%20profile%20of%20Kaipeng%20Zhang%5D(%2Fprofile%3Fid%3D~Kaipeng_Zhang1))\n  - Ming Li, Jike Zhong, Shitian Zhao, Yuxiang Lai, Haoquan Zhang, Wang Bill Zhu, Kaipeng Zhang\n  - Keywords: Visual Reinforcement Fine-Tuning, explicit thinking, overthinking\n  - ExpEnv: six diverse visual reasoning tasks\n\n- [Fast-Slow Thinking GRPO for Large Vision-Language Model Reasoning](https:\u002F\u002Fopenreview.net\u002Fpdf\u002F52fe68c14c7abde636af8fd396bd6a026575c7f0.pdf)\n  - Wenyi Xiao, Leilei Gan\n  - Keywords: Large Vision-Language Model, Fast-Slow Thinking, Reasoning\n  - ExpEnv: seven reasoning benchmarks\n\n- [SafeVLA: Towards Safety Alignment of Vision-Language-Action Model via Constrained Learning](https:\u002F\u002Fopenreview.net\u002Fforum?id=dt940loCBT&referrer=%5Bthe%20profile%20of%20Yuanpei%20Chen%5D(%2Fprofile%3Fid%3D~Yuanpei_Chen2))\n  - Borong Zhang, Yuhao Zhang, Jiaming Ji, Yingshan Lei, Josef Dai, Yuanpei Chen, Yaodong Yang\n  - Keywords: Vision-Language-Action Models, Safety Alignment, Large-Scale Constrained Learning\n  - ExpEnv: long-horizon mobile manipulation tasks\n\n- [Think or Not? 
Selective Reasoning via Reinforcement Learning for Vision-Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.16854)\n  - Jiaqi WANG, Kevin Qinghong Lin, James Cheng, Mike Zheng Shou\n  - Keywords: Vision-Language Models, Reinforcement Learning\n  - ExpEnv: CLEVR, Super-CLEVR, GeoQA, AITZ\n\n- [VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.13348)\n  - Senqiao Yang, Junyi Li, Xin Lai, Jinming Wu, Wei Li, Zejun MA, Bei Yu, Hengshuang Zhao, Jiaya Jia\n  - Keywords: Vision Language Models, Reinforcement Learning\n  - ExpEnv: general VQA tasks and OCR-related tasks\n\n- [Systematic Reward Gap Optimization for Mitigating VLM Hallucinations](https:\u002F\u002Fopenreview.net\u002Fforum?id=fJRuMulPkc)\n  - Lehan He, Zeren Chen, Zhelun Shi, Tianyu Yu, Lu Sheng, Jing Shao\n  - Keywords: Vision Language Models (VLMs), Preference learning, Hallucination mitigation, Reinforcement Learning from AI Feedback (RLAIF)\n  - ExpEnv: ObjectHal-Bench and other hallucination benchmarks\n\n- [VAGEN: Reinforcing World Model Reasoning for Multi-Turn VLM Agents](https:\u002F\u002Fopenreview.net\u002Fforum?id=xpjWEgf8zi&referrer=%5Bthe%20profile%20of%20Qineng%20Wang%5D(%2Fprofile%3Fid%3D~Qineng_Wang1))\n  - Kangrui Wang, Pingyue Zhang, Zihan Wang, Yaning Gao, Linjie Li, Qineng Wang, Hanyang Chen, Yiping Lu, Zhengyuan Yang, Lijuan Wang, Ranjay Krishna, Jiajun Wu, Li Fei-Fei, Yejin Choi, Manling Li\n  - Keywords: Visual States, World Modeling, Multi-turn RL, VLM Agents\n  - ExpEnv: five diverse agent tasks\n\n- [Point-RFT: Improving Multimodal Reasoning with Visually Grounded Reinforcement Finetuning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.19702)\n  - Minheng Ni, Zhengyuan Yang, Linjie Li, Chung-Ching Lin, Kevin Lin, Wangmeng Zuo, Lijuan Wang\n  - Keywords: Large multimodal model, grounded reasoning, reinforcement learning\n  - ExpEnv: ChartQA, CharXiv, PlotQA, IconQA, TabMWP\n\n- [MiCo: Multi-image Contrast for Reinforcement Visual Reasoning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.22434)\n  - Xi Chen, Mingkang Zhu, Shaoteng Liu, Xiaoyang Wu, Xiaogang Xu, Yu Liu, Xiang Bai, Hengshuang Zhao\n  - Keywords: Visual Reasoning, Chain-of-Thought, LLM, VLM, MLLM\n  - ExpEnv: multi-image reasoning benchmarks\n\n- [DeepVideo-R1: Video Reinforcement Fine-Tuning via Difficulty-aware Regressive GRPO](https:\u002F\u002Fopenreview.net\u002Fpdf\u002F3059449b6b0406f728c822c610ef64b57988b472.pdf)\n  - Jinyoung Park, Jeehye Na, Jinyoung Kim, Hyunwoo J. 
Kim\n  - Keywords: Video Large Language Model, Post-training, GRPO\n  - ExpEnv: video reasoning benchmarks\n\n- [SAM-R1: Leveraging SAM for Reward Feedback in Multimodal Segmentation via Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.22596)\n  - Jiaqi Huang, Zunnan Xu, Jun Zhou, Ting Liu, Yicheng Xiao, Mingwen Ou, Bowen Ji, Xiu Li, Kehong Yuan\n  - Keywords: Reinforcement Learning, Multimodal Large Models, Image Segmentation\n  - ExpEnv: image segmentation benchmarks\n\n- [SE-GUI: Enhancing Visual Grounding for GUI Agents via Self-Evolutionary Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.12370)\n  - Xinbin Yuan, Jian Zhang, Kaixin Li, Zhuoxuan Cai, Lujian Yao, Jie Chen, Enguang Wang, Qibin Hou, Jinwei Chen, Peng-Tao Jiang, Bo Li\n  - Keywords: gui agent; reinforcement learning; visual grounding\n  - ExpEnv: ScreenSpot-Pro and other grounding benchmarks\n\n- [GUI Exploration Lab: Enhancing Screen Navigation in Agents via Multi-Turn Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.02423)\n  - Haolong Yan, Yeqing Shen, Xin Huang, Jia Wang, Kaijun Tan, Zhixuan Liang, Hongxin Li, Zheng Ge, Osamu Yoshie, Si Li, Xiangyu Zhang, Daxin Jiang\n  - Keywords: GUI Environment, Large Vision Language Model, Multi-Turn Reinforcement Learning, Agent\n  - ExpEnv: PC software and mobile Apps simulation\n\n- [Reason-RFT: Reinforcement Fine-Tuning for Visual Reasoning of Vision Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.20752)\n  - Huajie Tan, Yuheng Ji, Xiaoshuai Hao, Xiansheng Chen, Pengwei Wang, Zhongyuan Wang, Shanghang Zhang\n  - Keywords: Multimodal, Reinforcement Fine-Tuning, Visual Reasoning\n  - ExpEnv: visual counting, structural perception, spatial transformation\n\n- [TempSamp-R1: Effective Temporal Sampling with Reinforcement Fine-Tuning for Video LLMs](https:\u002F\u002Fopenreview.net\u002Fpdf\u002F54deb111742d89d20c4d8ec881ccb3a7d3f8aed6.pdf)\n  - Yunheng Li, JingCheng, Shaoyong Jia, Hangyi Kuang, Shaohui Jiao, Qibin Hou, Ming-Ming Cheng\n  - Keywords: Temporal Grounding; Multimodal Large Language Model; Reinforcement Fine-Tuning\n  - ExpEnv: Charades-STA, ActivityNet Captions, QVHighlights\n\n- [Grounded Reinforcement Learning for Visual Reasoning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.23678)\n  - Gabriel Herbert Sarch, Snigdha Saha, Naitik Khandelwal, Ayush Jain, Michael J. 
Tarr, Aviral Kumar, Katerina Fragkiadaki\n  - Keywords: visual reasoning, vision-language models, reinforcement learning, visual grounding\n  - ExpEnv: SAT-2, BLINK, V*bench, ScreenSpot, VisualWebArena\n\n- [SRPO: Enhancing Multimodal LLM Reasoning via Reflection-Aware Reinforcement Learning](https:\u002F\u002Fopenreview.net\u002Fpdf\u002F514851b938f3c4ab888ed4499926b37fdbb1b89c.pdf)\n  - Zhongwei Wan, Zhihao Dou, Che Liu, Yu Zhang, Dongfei Cui, Qinjian Zhao, Hui Shen, Jing Xiong, Yi Xin, Yifan Jiang, Chaofan Tao, Yangfan He, Mi Zhang, Shen Yan\n  - Keywords: MLLMs, Reasoning\n  - ExpEnv: MathVista, MathVision, Mathverse, MMMU-Pro\n\n- [Omni-R1: Reinforcement Learning for Omnimodal Reasoning via Two-System Collaboration](https:\u002F\u002Fopenreview.net\u002Fpdf\u002Feaa5aae3372db3e083fa1a4f662f19c999ff6d91.pdf)\n  - Hao Zhong, Muzhi Zhu, Zongze Du, Zheng Huang, Canyu Zhao, Mingyu Liu, Wen Wang, Hao Chen, Chunhua Shen\n  - Keywords: RL, Omni\n  - ExpEnv: Referring Audio-Visual Segmentation (RefAVS), Reasoning Video Object Segmentation (REVOS)\n\n- [Janus-Pro-R1: Advancing Collaborative Visual Comprehension and Generation via Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fhtml\u002F2506.01480v2)\n  - Kaihang Pan, Yang Wu, Wendong Bu, Kai Shen, Juncheng Li, Yingting Wang, liyunfei, Siliang Tang, Jun Xiao, Fei Wu, ZhaoHang, Yueting Zhuang\n  - Keywords: Image generation, Image understanding\n  - ExpEnv: text-to-image generation and image editing benchmarks\n\n- [Semi-off-Policy Reinforcement Learning for Vision-Language Slow-Thinking Reasoning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.16814)\n  - Junhao Shen, Haiteng Zhao, Yuzhe Gu, Songyang Gao, Kuikun Liu, Haian Huang, Jianfei Gao, Dahua Lin, Wenwei Zhang, Kai Chen\n  - Keywords: Large vision-language model, Slow-thinking reasoning\n  - ExpEnv: MathVision, OlympiadBench\n\n- [ViCrit: A Verifiable Reinforcement Learning Proxy Task for Visual Perception in VLMs](https:\u002F\u002Fopenreview.net\u002Fpdf\u002F631a654bdd1aa99c3357feb56e89859a66512702.pdf)\n  - Xiyao Wang, Zhengyuan Yang, Chao Feng, Yuhang Zhou, Xiaoyu Liu, Yongyuan Liang, Ming Li, Ziyi Zang, Linjie Li, Chung-Ching Lin, Kevin Lin, Furong Huang, Lijuan Wang\n  - Keywords: Visual reasoning; Vision-Language Model; Visual captioning; Reward Model; Visual Hallucination\n  - ExpEnv: visual perception benchmarks\n\n- [GuardReasoner-VL: Safeguarding VLMs via Reinforced Reasoning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.11049)\n  - Yue Liu, Shengfang Zhai, Mingzhe Du, Yulin Chen, Tri Cao, Hongcheng Gao, Cheng Wang, Xinfeng Li, Kun Wang, Junfeng Fang, Jiaheng Zhang, Bryan Hooi\n  - Keywords: Vision Language Model, Guard Model, Reinforcement Learning, Large Reasoning Model\n  - ExpEnv: safety benchmarks for VLMs\n\n- [Fact-R1: Towards Explainable Video Misinformation Detection with Deep Reasoning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.16836)\n  - Fanrui Zhang, Dian Li, Qiang Zhang, Chenjun, sinbadliu, Junxiong Lin, Jiahong Yan, Jiawei Liu, Zheng-Jun Zha\n  - Keywords: Video Misinformation Detection, Deep Reasoning\n  - ExpEnv: FakeVV benchmark\n\n- [Open Vision Reasoner: Transferring Linguistic Cognitive Behavior for Visual Reasoning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.05255)\n  - Yana Wei, Liang Zhao, Jianjian Sun, Kangheng Lin, jisheng yin, Jingcheng Hu, Yinmin Zhang, En Yu, Haoran Lv, Zejia Weng, Jia Wang, Qi Han, Zheng Ge, Xiangyu Zhang, Daxin Jiang, Vishal M. 
Patel\n  - Keywords: Multimodal LLM, Visual Reasoning, Cognitive Behavior Transfer\n  - ExpEnv: MATH500, MathVision, MathVerse\n\n- [VideoRFT: Incentivizing Video Reasoning Capability in MLLMs via Reinforced Fine-Tuning](https:\u002F\u002Fopenreview.net\u002Fforum?id=3pORFyKzh1&referrer=%5Bthe%20profile%20of%20Rui%20Mao%5D(%2Fprofile%3Fid%3D~Rui_Mao2))\n  - Qi Wang, Yanrui Yu, Ye Yuan, Rui Mao, Tianfei Zhou\n  - Keywords: Multimodal Large Language Models, Video Reasoning, Reinforced fine-tuning\n  - ExpEnv: six video reasoning benchmarks\n\n- [Safe RLHF-V: Safe Reinforcement Learning from Multi-modal Human Feedback](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.17682)\n  - Jiaming Ji, Xinyu Chen, Rui Pan, Conghui Zhang, Han Zhu, Jiahao Li, Donghai Hong, Boyuan Chen, Jiayi Zhou, Kaile Wang, Juntao Dai, Chi-Min Chan, Yida Tang, Sirui Han, Yike Guo, Yaodong Yang\n  - Keywords: AI Safety, AI Alignment\n  - ExpEnv: BeaverTails-V benchmark\n\n- [Unveiling Chain of Step Reasoning for Vision-Language Models with Fine-grained Rewards](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.19003)\n  - Honghao Chen, Xingzhou Lou, Xiaokun Feng, Kaiqi Huang, Xinlong Wang\n  - Keywords: VLM, Reasoning, PRM\n  - ExpEnv: vision-language reasoning benchmarks\n\n- [GRIT: Teaching MLLMs to Think with Images](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.15879)\n  - Yue Fan, Xuehai He, Diji Yang, Kaizhi Zheng, Ching-Chen Kuo, Yuting Zheng, Sravana Jyothi Narayanaraju, Xinze Guan, Xin Eric Wang\n  - Keywords: Multimodal Reasoning model, Reinforcement learning\n  - ExpEnv: multimodal reasoning benchmarks\n\n- [NoisyGRPO: Incentivizing Multimodal CoT Reasoning via Noise Injection and Bayesian Estimation](https:\u002F\u002Fopenreview.net\u002Fpdf\u002F96f8ac8e6e7af57f7d1898ceced0a9d165735c5e.pdf)\n  - Longtian Qiu, Shan Ning, Jiaxuan Sun, Xuming He\n  - Keywords: Multimodal Large Language Model, Reinforcement learning\n  - ExpEnv: CoT quality and hallucination benchmarks\n\n- [Time-R1: Post-Training Large Vision Language Model for Temporal Video Grounding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.13377)\n  - Ye Wang, Ziheng Wang, Boshen Xu, Yang Du, Kejun Lin, Zihan Xiao, Zihao Yue, Jianzhong Ju, Liang Zhang, Dingyi Yang, Xiangnan Fang, Zewen He, Zhenbo Luo, Wenxuan Wang, Junqi Lin, Jian Luan, Qin Jin\n  - Keywords: large vision language model, temporal video grounding, reinforcement learning, post-training\n  - ExpEnv: Charades-STA, ActivityNet Captions, QVHighlights, TVGBench\n\n- [Generative RLHF-V: Learning Principles from Multi-modal Human Preference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.18531)\n  - Jiayi Zhou, Jiaming Ji, Boyuan Chen, Jiapeng Sun, Wenqi Chen, Donghai Hong, Sirui Han, Yike Guo, Yaodong Yang\n  - Keywords: Alignment, Safety, RLHF, Preference Learning, Multi-modal LLMs\n  - ExpEnv: seven multimodal benchmarks\n\n- [OpenVLThinker: Complex Vision-Language Reasoning via Iterative SFT-RL Cycles](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.17352)\n  - Yihe Deng, Hritik Bansal, Fan Yin, Nanyun Peng, Wei Wang, Kai-Wei Chang\n  - Keywords: Vision-language reasoning, iterative improvement, distillation, reinforcement learning\n  - ExpEnv: MathVista, EMMA, HallusionBench\n\n- [Video-R1: Reinforcing Video Reasoning in MLLMs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.21776)\n  - Kaituo Feng, Kaixiong Gong, Bohao Li, Zonghao Guo, Yibing Wang, Tianshuo Peng, Junfei Wu, Xiaoying Zhang, Benyou Wang, Xiangyu Yue\n  - Keywords: Multimodal Large Language Models, Video Reasoning\n  - ExpEnv: 
VideoMMMU, VSI-Bench, MVBench, TempCompass\n\n- [Table2LaTeX-RL: High-Fidelity LaTeX Code Generation from Table Images via Reinforced Multimodal Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.17589)\n  - Jun Ling, Yao Qi, Tao Huang, Shibo Zhou, Yanqin Huang, Yang Jiang, Ziqi Song, Ying Zhou, Yang Yang, Heng Tao Shen, Peng Wang\n  - Keywords: table recognition, latex generation\n  - ExpEnv: table-to-LaTeX benchmarks\n\n- [ReAgent-V: A Reward-Driven Multi-Agent Framework for Video Understanding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.01300)\n  - Yiyang Zhou, Yangfan He, Yaofeng Su, Siwei Han, Joel Jang, Gedas Bertasius, Mohit Bansal, Huaxiu Yao\n  - Keywords: Video understanding; Multi-agent framework; Reflective reasoning; VLA alignment; Video reasoning\n  - ExpEnv: 12 datasets across video understanding, video reasoning, and VLA tasks\n\n- [Actial: Activate Spatial Reasoning Ability of Multimodal Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.01618)\n  - Xiaoyu Zhan, Wenxuan Huang, Hao Sun, Xinyu Fu, Changfeng Ma, Shaosheng Cao, Bohan Jia, Shaohui Lin, Zhenfei Yin, LEI BAI, Wanli Ouyang, Yuanqi Li, Jie Guo, Yanwen Guo\n  - Keywords: Visual-Language Model, 3D Reasoning, GRPO\n  - ExpEnv: 3D spatial reasoning tasks\n\n- [EvolvedGRPO: Unlocking Reasoning in LVLMs via Progressive Instruction Evolution](https:\u002F\u002Fopenreview.net\u002Fforum?id=tjXtcZjIgQ)\n  - Zhebei Shen, Qifan Yu, Juncheng Li, Wei Ji, Qizhi Chen, Siliang Tang, Yueting Zhuang\n  - Keywords: multi-modal reasoning, reinforcement learning, self-improvement\n  - ExpEnv: multi-modal reasoning tasks\n\n- [URSA: Unlocking Multimodal Mathematical Reasoning via Process Reward Model](https:\u002F\u002Fopenreview.net\u002Fforum?id=96I8PGPALv&referrer=%5Bthe%20profile%20of%20Yujiu%20Yang%5D(%2Fprofile%3Fid%3D~Yujiu_Yang2))\n  - Ruilin Luo, Zhuofan Zheng, Lei Wang, Yifan Wang, Xinzhe Ni, Zicheng Lin, Songtao Jiang, Yiyao Yu, Chufan Shi, Ruihang Chu, Jin zeng, Yujiu Yang\n  - Keywords: Multimodal Reasoning, Data Synthesis, Process Reward Model, Reinforcement Learning\n  - ExpEnv: ChartQA and other multimodal math benchmarks\n\n- [Reinforcing Spatial Reasoning in Vision-Language Models with Interwoven Thinking and Visual Drawing](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.09965)\n  - Junfei Wu, Jian Guan, Kaituo Feng, Qiang Liu, Shu Wu, Liang Wang, Wei Wu, Tieniu Tan\n  - Keywords: Large Vision-Language Models, Spatial Reasoning\n  - ExpEnv: spatial reasoning benchmarks\n\n### ICML 2025\n\n- [ABNet: Adaptive explicit-Barrier Net for Safe and Scalable Robot Learning](https:\u002F\u002Fopenreview.net\u002Fpdf?id=ymlwqfxuUc#page=5.86)\n  - Wei Xiao, Tsun-Hsuan Wang, Chuang Gan, Daniela Rus\n  - Key: Safe learning, Robot learning, Scalable learning, Barrier Net, Provable safety, Reinforcement Learning, Multi-modal control.\n  - ExpEnv: 2D robot obstacle avoidance, Safe robot manipulation, Vision-based end-to-end autonomous driving\n  \n- [DexScale: Automating Data Scaling for Sim2Real Generalizable Robot Control](https:\u002F\u002Fopenreview.net\u002Fpdf?id=AVVXX0erKT#page=7.45)\n  - Guiliang Liu, Yueci Deng, Runyi Zhao, Huayi Zhou, Jian Chen, Jietao Chen, Ruiyan Xu, Yunxin Tai, Kui Jia\n  - Key: Data Engine, Embodied AI, Robot Control, Manipulation, Policy Learning, Sim2Real, Domain Randomization, Domain Adaptation, Reinforcement Learning, Multi-modal control.\n  - ExpEnv: Robot manipulation tasks (e.g., pick-and-place), diverse tasks, multiple robot embodiments.\n\n- [DynaMind: 
Reasoning over Abstract Video Dynamics for Embodied Decision-Making](https:\u002F\u002Fopenreview.net\u002Fpdf?id=ziDKPXJBYL#page=5.63)\n  - Ziru Wang, Mengmeng Wang, Jade Dai, Teli Ma, Guo-Jun Qi, Yong Liu, Guang Dai, Jingdong Wang\n  - Key: Embodied Decision-Making, Multi-modal Learning, Video Dynamics Abstraction, Robot Learning.\n  - ExpEnv: LOReL Sawyer, Franka Kitchen, BabyAI, Real-world scenarios.\n\n- [Craftium: Bridging Flexibility and Efficiency for Rich 3D Single- and Multi-Agent Environments](https:\u002F\u002Fopenreview.net\u002Fpdf?id=htP5YRXcS9#page=5.53)\n  - Mikel Malagón, Josu Ceberio, Jose A. Lozano\n  - Key: 3D Environments, Reinforcement Learning, Multi-Agent Systems, Embodied AI.\n  - ExpEnv: One-vs-one multi-agent combat environment (Craftium-built), Open-world environment (Luanti\u002FVoxeLibre in Craftium), Procedural 3D Dungeons (Craftium-built).\n\n- [Layer-wise Alignment: Examining Safety Alignment Across Image Encoder Layers in Vision Language Models](https:\u002F\u002Fopenreview.net\u002Fpdf?id=F1ff8zcjPp#page=6.08)\n  - Saketh Bachu, Erfan Shayegani, Rohit Lal, Trishna Chakraborty, Arindam Dutta, Chengyu Song, Yue Dong, Nael B. Abu-Ghazaleh, Amit Roy-Chowdhury\n  - Key: Vision Language Models, Safety Alignment, Reinforcement Learning from Human Feedback (RLHF), Multi-modal RL.\n  - ExpEnv: Jailbreak-V28K, AdvBench-COCO (derived from AdvBench and MS-COCO), HH-RLHF, VQA-v2, Custom Prompts.\n  \n### ICLR 2025\n\n- [Vision Language Models are In-Context Value Learners](https:\u002F\u002Fopenreview.net\u002Fforum?id=friHAl5ofG)  \n  - Yecheng Jason Ma, Joey Hejna, Chuyuan Fu, Dhruv Shah, Jacky Liang, Zhuo Xu, Sean Kirmani, Peng Xu, Danny Driess, Ted Xiao, Osbert Bastani, Dinesh Jayaraman, Wenhao Yu, Tingnan Zhang, Dorsa Sadigh, Fei Xia  \n  - Key: robot learning, vision-language model, value estimation, manipulation  \n  - ExpEnv: more than 300 distinct real-world tasks across diverse robot platforms, including bimanual manipulation tasks\n\n- [TopoNets: High performing vision and language models with brain-like topography](https:\u002F\u002Fopenreview.net\u002Fforum?id=THqWPzL00e)  \n  - Mayukh Deb, Mainak Deb, Apurva Ratan Murty  \n  - Key: topography, neuro-inspired, convolutional neural networks, Transformers, visual cortex, neuroscience  \n  - ExpEnv: ResNet-18, ResNet-50, ViT, GPT-Neo-125M, NanoGPT\n\n- [LOKI: A Comprehensive Synthetic Data Detection Benchmark using Large Multimodal Models](https:\u002F\u002Fopenreview.net\u002Fforum?id=z8sxoCYgmd)  \n  - Junyan Ye, Baichuan Zhou, Zilong Huang, Junan Zhang, Tianyi Bai, Hengrui Kang, Jun He, Honglin Lin, Zihao Wang, Tong Wu, Zhizheng Wu, Yiping Chen, Dahua Lin, Conghui He, Weijia Li  \n  - Key: LMMs, Deepfake, Multimodality  \n  - ExpEnv: Video, Image, 3D, Text, Audio\n\n- [Two Effects, One Trigger: On the Modality Gap, Object Bias, and Information Imbalance in Contrastive Vision-Language Models](https:\u002F\u002Fopenreview.net\u002Fforum?id=uAFHCZRmXk)  \n  - Simon Schrodi, David T. 
Hoffmann, Max Argus, Volker Fischer, Thomas Brox  \n  - Key: CLIP, modality gap, object bias, contrastive loss, data-centric, vision language models, VLM  \n  - ExpEnv: Contrastive Vision-Language Models (VLMs) Analysis\n  \n- [Multi-Robot Motion Planning with Diffusion Models](https:\u002F\u002Fopenreview.net\u002Fforum?id=AUCYptvAf3)  \n  - Yorai Shaoul, Itamar Mishani, Shivam Vats, Jiaoyang Li, Maxim Likhachev  \n  - Key: Multi-Agent Planning, Robotics, Generative Models  \n  - ExpEnv: Simulated logistics environments\n  \n### ICLR 2024\n- [DrM: Mastering Visual Reinforcement Learning through Dormant Ratio Minimization](https:\u002F\u002Fopenreview.net\u002Fpdf?id=MSe8YFbhUE)\n  - Guowei Xu, Ruijie Zheng, Yongyuan Liang, Xiyao Wang, Zhecheng Yuan, Tianying Ji, Yu Luo, Xiaoyu Liu, Jiaxin Yuan, Pu Hua, Shuzhen Li, Yanjie Ze, Hal Daumé III, Furong Huang, Huazhe Xu\n  - Keyword: Visual RL; Dormant Ratio\n  - ExpEnv: [DeepMind Control Suite](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Fdm_control),[Meta-world](https:\u002F\u002Fgithub.com\u002Frlworkgroup\u002Fmetaworld),[Adroit](https:\u002F\u002Fgithub.com\u002FFarama-Foundation\u002FD4RL)\n\n- [Revisiting Data Augmentation in Deep Reinforcement Learning](https:\u002F\u002Fopenreview.net\u002Fpdf?id=EGQBpkIEuu)\n  - Jianshu Hu, Yunpeng Jiang, Paul Weng\n  - Keyword: Reinforcement Learning, Data Augmentation\n  - ExpEnv: [DeepMind Control Suite](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Fdm_control)\n\n- [Revisiting Plasticity in Visual Reinforcement Learning: Data, Modules and Training Stages](https:\u002F\u002Fopenreview.net\u002Fforum?id=0aR1s9YxoL)\n  - Guozheng Ma, Lu Li, Sen Zhang, Zixuan Liu, Zhen Wang, Yixin Chen, Li Shen, Xueqian Wang, Dacheng Tao\n  - Keyword: Plasticity, Visual Reinforcement Learning, Deep Reinforcement Learning, Sample Efficiency\n  - ExpEnv: [DeepMind Control Suite](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Fdm_control),[Atari](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fgym)\n\n- [Entity-Centric Reinforcement Learning for Object Manipulation from Pixels](https:\u002F\u002Fopenreview.net\u002Fforum?id=uDxeSZ1wdI)\n  - Dan Haramati, Tal Daniel, Aviv Tamar\n  - Keyword: deep reinforcement learning, visual reinforcement learning, object-centric, robotic object manipulation, compositional generalization\n  - ExpEnv: [IsaacGym](https:\u002F\u002Fgithub.com\u002FNVIDIA-Omniverse\u002FIsaacGymEnvs)\n\n### ICLR 2023\n- [PaLI: A Jointly-Scaled Multilingual Language-Image Model](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.06794)(**\u003Cfont color=\"red\">notable top 5%\u003C\u002Ffont>**) \n  - Xi Chen, Xiao Wang, Soravit Changpinyo, AJ Piergiovanni, Piotr Padlewski, Daniel Salz, Sebastian Goodman, Adam Grycner, Basil Mustafa, Lucas Beyer, Alexander Kolesnikov, Joan Puigcerver, Nan Ding, Keran Rong, Hassan Akbari, Gaurav Mishra, Linting Xue, Ashish Thapliyal, James Bradbury, Weicheng Kuo, Mojtaba Seyedhosseini, Chao Jia, Burcu Karagol Ayan, Carlos Riquelme, Andreas Steiner, Anelia Angelova, Xiaohua Zhai, Neil Houlsby, Radu Soricut\n  - Keyword: amazing zero-shot, language component and visual component\n  - ExpEnv: None\n\n- [VIMA: General Robot Manipulation with Multimodal Prompts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.03094)\n  - Yunfan Jiang, Agrim Gupta, Zichen Zhang, Guanzhi Wang, Yongqiang Dou, Yanjun Chen, Li Fei-Fei, Anima Anandkumar, Yuke Zhu, Linxi Fan. 
*NeurIPS Workshop 2022*\n  - Key Words: multimodal prompts, transformer-based generalist agent model, large-scale benchmark\n  - ExpEnv: [VIMA-Bench](https:\u002F\u002Fgithub.com\u002Fvimalabs\u002FVimaBench), [VIMA-Data](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FVIMA\u002FVIMA-Data)\n\n- [MIND’S EYE: GROUNDED LANGUAGE MODEL REASONING THROUGH SIMULATION](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.05359)\n  - Ruibo Liu, Jason Wei, Shixiang Shane Gu, Te-Yen Wu, Soroush Vosoughi, Claire Cui, Denny Zhou, Andrew M. Dai\n  - Keyword:  language2physical-world, reasoning ability\n  - ExpEnv: [MuJoCo](https:\u002F\u002Fmujoco.org\u002F)\n\n### ICLR 2022\n- [How Much Can CLIP Benefit Vision-and-Language Tasks?](https:\u002F\u002Fopenreview.net\u002Fforum?id=zf_Ll3HZWgy)\n  - Sheng Shen, Liunian Harold Li, Hao Tan, etc. *ICLR 2022*\n  - Key Words: Vision-and-Language, CLIP\n  - ExpEnv: None\n\n### ICLR 2021\n- [Grounding Language to Entities and Dynamics for Generalization in Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.07393)\n  - Austin W. Hanjie, Victor Zhong, Karthik Narasimhan. *ICML 2021*\n  - Key Words: Multi-modal Attention\n  - ExpEnv: [Messenger](https:\u002F\u002Fgithub.com\u002Fahjwang\u002Fmessenger-emma)\n\n- [Mastering Atari with Discrete World Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.02193)\n  - Danijar Hafner, Timothy Lillicrap, Mohammad Norouzi, etc. \n  - Key Words: World models\n  - ExpEnv: [Atari](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fgym)\n\n- [Decoupling Representation Learning from Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.08319)\n  - Adam Stooke, Kimin Lee, Pieter Abbeel, etc. \n  - Key Words: representation learning, unsupervised learning\n  - ExpEnv: [DeepMind Control](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Fdm_control), [Atari](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fgym), [DMLab](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Flab)\n\n### ICLR 2019\n- [Learning Actionable Representations with Goal-Conditioned Policies](https:\u002F\u002Farxiv.org\u002Fabs\u002F1811.07819)\n  - Dibya Ghosh, Abhishek Gupta, Sergey Levine. 
\n  - Key Words: Actionable Representations Learning\n  - ExpEnv: 2D navigation(2D Wall, 2D Rooms, Wheeled, Wheeled Rooms, Ant, Pushing)\n\n### NeurIPS 2024\n- [The Surprising Ineffectiveness of Pre-Trained Visual Representations for Model-Based Reinforcement Learning](https:\u002F\u002Fopenreview.net\u002Fpdf?id=LvAy07mCxU)\n  - Moritz Schneider, Robert Krug, Narunas Vaskevicius, Luigi Palmieri, Joschka Boedecker\n  - Key Words: reinforcement learning, rl, model-based reinforcement learning, representation learning, pvr, visual representations\n  - ExpEnv:  [DeepMind Control Suite](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Fdm_control), [ManiSkill2](), [Miniworld]()\n\n- [Learning Multimodal Behaviors from Scratch with Diffusion Policy Gradient](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2406.00681)  \n  - Zechu Li, Rickmer Krohn, Tao Chen, Anurag Ajay, Pulkit Agrawal, Georgia Chalvatzaki  \n  - Keyword: Reinforcement Learning, Multimodal Behaviors, Diffusion Models  \n  - ExpEnv: AntMaze (navigation), Robotic Manipulation (Franka tasks)\n\n- [Seek Commonality but Preserve Differences: Dissected Dynamics Modeling for Multi-modal Visual RL](https:\u002F\u002Fopenreview.net\u002Fpdf?id=4php6bGL2W)  \n  - Yangru Huang, Peixi Peng, Yifan Zhao, Guangyao Chen, Yonghong Tian  \n  - Key: multi-modal reinforcement learning, visual RL, dynamics modeling, modality consistency, modality inconsistency, DDM  \n  - ExpEnv: CARLA, DMControl\n  - \n- [FlexPlanner: Flexible 3D Floorplanning via Deep Reinforcement Learning in Hybrid Action Space with Multi-Modality Representation](https:\u002F\u002Fopenreview.net\u002Fpdf?id=q9RLsvYOB3)\n  - Ruizhe Zhong, Xingbo Du, Shixiong Kai, Zhentao Tang, Siyuan Xu, Jianye Hao, Mingxuan Yuan, Junchi Yan\n  - Keywords: 3D Floorplanning, Deep Reinforcement Learning, Hybrid Action Space, Multi-Modality Representation\n  - ExpEnv: MCNC Benchmark, GSRC Benchmark\n\n### NeurIPS 2023\n- [Inverse Dynamics Pretraining Learns Good Representations for Multitask Imitation](https:\u002F\u002Fopenreview.net\u002Fpdf?id=kjMGHTo8Cs)\n  - David Brandfonbrener, Ofir Nachum, Joan Bruna\n  - Key Words: representation learning, imitation learning\n  - ExpEnv: [Sawyer Door Open](https:\u002F\u002Fgithub.com\u002Fsuraj-nair-1\u002Fmetaworld), [MetaWorld](https:\u002F\u002Fgithub.com\u002Fsuraj-nair-1\u002Fmetaworld), [Franka Kitchen, Adroit](https:\u002F\u002Fgithub.com\u002Faravindr93\u002Fmjrl)\n\n- [Frequency-Enhanced Data Augmentation for Vision-and-Language Navigation](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2023\u002Fhash\u002F0d9e08f247ca7fbbfd5e50b7ff9cf357-Abstract-Conference.html)\n  - Keji He, Chenyang Si, Zhihe Lu, Yan Huang, Liang Wang, Xinchao Wang\n  - Key Words: Vision-and-Language Navigation, High-Frequency, Data Augmentation\n  - ExpEnv: [Matterport3d](https:\u002F\u002Fniessner.github.io\u002FMatterport\u002F)\n\n- [Language Is Not All You Need: Aligning Perception with Language Models](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2302.14045)\n  - Shaohan Huang, Li Dong, Wenhui Wang, Yaru Hao, etc.\n  - Key Words: Multimodal Perception, World Modeling\n  - ExpEnv: [IQ50](https:\u002F\u002Faka.ms\u002Fkosmos-iq50)\n\n- [MotionGPT: Human Motion as a Foreign Language](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2023\u002Ffile\u002F3fbf0c1ea0716c03dea93bb6be78dd6f-Paper-Conference.pdf)\n  - Biao Jiang, Xin Chen, Wen Liu, Jingyi Yu, Gang Yu, Tao Chen\n  - Key Words: Human motion, text-driven motion generation\n  - 
ExpEnv: [HumanML3D](https:\u002F\u002Fericguo5513.github.io\u002Ftext-to-motion),[KIT](https:\u002F\u002Fmotion-database.humanoids.kit.edu\u002F)\n\n- [Large Language Models are Visual Reasoning Coordinators](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2023\u002Ffile\u002Fddfe6bae7b869e819f842753009b94ad-Paper-Conference.pdf)\n  - Liangyu Chen, Bo Li, Sheng Shen, Jingkang Yang, Chunyuan Li, Kurt Keutzer, Trevor Darrell, Ziwei Liu\n  - Key Words: Visual Reasoning, Large Language Model\n  - ExpEnv: [A-OKVQA](), [OK-VQA](), [e-SNLI-VE](), [VSR]()\n\n### NeurIPS 2022\n- [MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.08853)\n  - Linxi Fan, Guanzhi Wang, Yunfan Jiang, etc. \n  - Key Words: multimodal dataset, MineClip\n  - ExpEnv: [Minecraft](https:\u002F\u002Fminedojo.org\u002F)\n\n- [Video PreTraining (VPT): Learning to Act by Watching Unlabeled Online Videos](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.11795)\n  - Bowen Baker, Ilge Akkaya, Peter Zhokhov, etc. \n  - Key Words: Inverse Dynamics Model\n  - ExpEnv: [minerl](https:\u002F\u002Fgithub.com\u002Fminerllabs\u002Fminerl)\n\n### NeurIPS 2021\n- [SOAT: A Scene-and Object-Aware Transformer for Vision-and-Language Navigation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.14143.pdf)\n  - Abhinav Moudgil, Arjun Majumdar,Harsh Agrawal, etc. \n  - Key Words: Vision-and-Language Navigation\n  - ExpEnv: [Room-to-Room](https:\u002F\u002Fpaperswithcode.com\u002Fdataset\u002Froom-to-room), [Room-Across-Room](https:\u002F\u002Fgithub.com\u002Fgoogle-research-datasets\u002FRxR)\n\n- [Pretraining Representations for Data-Efﬁcient Reinforcement Learning](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2021\u002Fhash\u002F69eba34671b3ef1ef38ee85caae6b2a1-Abstract.html)\n  - Max Schwarzer, Nitarshan Rajkumar, Michael Noukhovitch, etc.\n  - Key Words: latent dynamics modelling, unsupervised RL\n  - ExpEnv: [Atari](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fgym)\n\n### NeurIPS 2018\n- [Recurrent World Models Facilitate Policy Evolution](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2018\u002Fhash\u002F2de5d16682c3c35007e4e92982f1a2ba-Abstract.html)\n  - David Ha, Jürgen Schmidhuber. 
\n  - Key Words: World model, generative RNN, VAE\n  - ExpEnv: [VizDoom](https:\u002F\u002Fgithub.com\u002Fmwydmuch\u002FViZDoom), [CarRacing](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fgym)\n\n### ICML 2024\n- [Investigating Pre-Training Objectives for Generalization in Vision-Based Reinforcement Learning](https:\u002F\u002Fproceedings.mlr.press\u002Fv235\u002Fkim24u.html)\n  - Donghu Kim, Hojoon Lee, Kyungmin Lee, Dongyoon Hwang, Jaegul Choo\n  - Key Words: vision-based RL\n  - ExpEnv: [Atari](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fgym)\n\n- [RL-VLM-F: Reinforcement Learning from Vision Language Foundation Model Feedback](https:\u002F\u002Fproceedings.mlr.press\u002Fv235\u002Fwang24bn.html)\n  - Yufei Wang, Zhanyi Sun, Jesse Zhang, Zhou Xian, Erdem Biyik, David Held, Zackory Erickson\n  - Key Words: learning from VLM\n  - ExpEnv: [Gym](), [MetaWorld](https:\u002F\u002Fgithub.com\u002Fsuraj-nair-1\u002Fmetaworld)\n\n- [Reward Shaping for Reinforcement Learning with An Assistant Reward Agent](https:\u002F\u002Fproceedings.mlr.press\u002Fv235\u002Fma24l.html)\n  - Haozhe Ma, Kuankuan Sima, Thanh Vinh Vo, Di Fu, Tze-Yun Leong\n  - Key Words: dual-agent reward shaping framework\n  - ExpEnv: [Mujoco](https:\u002F\u002Fgithub.com\u002Fgoogle-deepmind\u002Fmujoco)\n\n- [FuRL: Visual-Language Models as Fuzzy Rewards for Reinforcement Learning](https:\u002F\u002Fproceedings.mlr.press\u002Fv235\u002Ffu24j.html)\n  - Yuwei Fu, Haichao Zhang, Di Wu, Wei Xu, Benoit Boulet \n  - Key Words: high-dimensional observations,  representation learning for RL\n  - ExpEnv: [MetaWorld](https:\u002F\u002Fgithub.com\u002Fsuraj-nair-1\u002Fmetaworld)\n\n- [Rich-Observation Reinforcement Learning with Continuous Latent Dynamics](https:\u002F\u002Fproceedings.mlr.press\u002Fv235\u002Fsong24i.html)\n  - Yuda Song, Lili Wu, Dylan J Foster, Akshay Krishnamurthy\n  - Key Words: VLM as reward function\n  - ExpEnv: [maze]()\n\n- [LLM-Empowered State Representation for Reinforcement Learning](https:\u002F\u002Fproceedings.mlr.press\u002Fv235\u002Fwang24bh.html)\n  - Boyuan Wang, Yun Qu, Yuhang Jiang, Jianzhun Shao, Chang Liu, Wenming Yang, Xiangyang Ji\n  - Key Words: LLM-based state representation\n  - ExpEnv: [Mujoco](https:\u002F\u002Fgithub.com\u002Fgoogle-deepmind\u002Fmujoco)\n\n- [Code as Reward: Empowering Reinforcement Learning with VLMs](https:\u002F\u002Fproceedings.mlr.press\u002Fv235\u002Fvenuto24a.html)\n  - David Venuto, Mohammad Sami Nur Islam, Martin Klissarov, etc. \n  - Key Words: Vision-Language Models, reward functions\n  - ExpEnv: [MiniGrid](https:\u002F\u002Fminigrid.farama.org\u002F)\n\n### ICML 2022\n- [Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2201.07207.pdf)\n  - Wenlong Huang, Pieter Abbeel, Deepak Pathak, etc. \n  - Key Words: large language models, Embodied Agents\n  - ExpEnv: [VirtualHome](https:\u002F\u002Fgithub.com\u002Fxavierpuigf\u002Fvirtualhome)\n\n- [Reinforcement Learning with Action-Free Pre-Training from Videos](https:\u002F\u002Fproceedings.mlr.press\u002Fv162\u002Fseo22a.html)\n  - Younggyo Seo, Kimin Lee, Stephen L James, etc. 
\n  - Key Words: action-free pretraining, videos\n  - ExpEnv: [Meta-world](https:\u002F\u002Fgithub.com\u002Frlworkgroup\u002Fmetaworld), [DeepMind Control Suite](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Fdm_control)\n\n- [History Compression via Language Models in Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.12258)\n  - Fabian Paischer, Thomas Adler, Vihang Patil, etc.\n  - Key Words: Pretrained Language Transformer\n  - ExpEnv: [Minigrid](https:\u002F\u002Fgithub.com\u002Fmaximecb\u002Fgym-minigrid), [Procgen](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fprocgen)\n\n### ICML 2019\n- [Learning Latent Dynamics for Planning from Pixels](https:\u002F\u002Farxiv.org\u002Fabs\u002F1811.04551)\n  - Danijar Hafner, Timothy Lillicrap, Ian Fischer, etc.\n  - Key Words: latent dynamics model, pixel observations\n  - ExpEnv: [DeepMind Control Suite](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Fdm_control)\n\n### ICML 2017\n- [Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.05064)\n  - Junhyuk Oh, Satinder Singh, Honglak Lee, Pushmeet Kohli\n  - Key Words: unseen instruction, sequential instruction\n  - ExpEnv: [Minecraft](https:\u002F\u002Farxiv.org\u002Fabs\u002F1605.09128)\n\n### CVPR 2024\n- [DMR: Decomposed Multi-Modality Representations for Frames and Events Fusion in Visual Reinforcement Learning](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fhtml\u002FXu_DMR_Decomposed_Multi-Modality_Representations_for_Frames_and_Events_Fusion_in_CVPR_2024_paper.html)\n  - Haoran Xu, Peixi Peng, Guang Tan, Yuan Li, Xinhai Xu, Yonghong Tian\n  - Key Words: Visual Reinforcement Learning, Multi-Modality Representation, Dynamic Vision Sensor\n  - ExpEnv: [Carla]()\n\n- [Vision-and-Language Navigation via Causal Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.10241)\n  - Liuyi Wang, Zongtao He, Ronghao Dang, Mengjiao Shen, Chengju Liu, Qijun Chen\n  - Key Words: vision-and-language navigation, cross-modal causal transformer\n  - ExpEnv: [R2R](https:\u002F\u002Fbringmeaspoon.org\u002F) [REVERIE](https:\u002F\u002Fgithub.com\u002Fgoogle-research-datasets\u002FRxR) [RxR-English](https:\u002F\u002Fgithub.com\u002Fgoogle-research-datasets\u002FRxR) [SOON]()\n\n### CVPR 2022\n- [End-to-end Generative Pretraining for Multimodal Video Captioning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.08264)\n  - Paul Hongsuck Seo, Arsha Nagrani, Anurag Arnab, Cordelia Schmid\n  - Key Words: Multimodal video captioning,  Pretraining using a future utterance, Multimodal Video Generative Pretraining\n  - ExpEnv: [HowTo100M](https:\u002F\u002Farxiv.org\u002Fabs\u002F1906.03327)\n\n- [Image as a Foreign Language: BEiT Pretraining for All Vision and Vision-Language Tasks](https:\u002F\u002Farxiv.org\u002Fabs\u002F2208.10442)\n  - Wenhui Wang, Hangbo Bao, Li Dong, Johan Bjorck, Zhiliang Peng, Qiang Liu, Kriti Aggarwal, Owais Khan Mohammed, Saksham Singhal, Subhojit Som, Furu Wei\n  - Key Words: backbone architecture, pretraining task, model scaling up\n  - ExpEnv: [ADE20K](https:\u002F\u002Fgroups.csail.mit.edu\u002Fvision\u002Fdatasets\u002FADE20K\u002F), [COCO](https:\u002F\u002Fcocodataset.org\u002F), [NLVR2](https:\u002F\u002Fpaperswithcode.com\u002Fdataset\u002Fnlvr), [Flickr30K](https:\u002F\u002Fpaperswithcode.com\u002Fdataset\u002Fflickr30k)\n\n- [Think Global, Act Local: Dual-scale Graph Transformer for Vision-and-Language 
Navigation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.11742)\n  - Shizhe Chen, Pierre-Louis Guhur, Makarand Tapaswi, Cordelia Schmid, Ivan Laptev\n  - Keyword: dual-scale graph transformer, affordance detection\n  - ExpEnv: None\n\n- [Masked Visual Pre-training for Motor Control](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.06173)\n  - Tete Xiao, Ilija Radosavovic, Trevor Darrell, etc. *ArXiv 2022*\n  - Key Words: self-supervised learning, motor control\n  - ExpEnv: [Isaac Gym](https:\u002F\u002Fdeveloper.nvidia.com\u002Fisaac-gym)\n\n\n### CoRL 2022\n- [LM-Nav: Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.04429)\n  - Dhruv Shah, Blazej Osinski, Brian Ichter, Sergey Levine\n  - Key Words: robotic navigation, goal-conditioned, unannotated large dataset, CLIP, ViNG, GPT-3\n  - ExpEnv: None\n\n- [Real-World Robot Learning with Masked Visual Pre-training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.03109)\n  - Ilija Radosavovic, Tete Xiao, Stephen James, Pieter Abbeel, Jitendra Malik, Trevor Darrell\n  - Key Words: real-world robotic tasks\n  - ExpEnv: None\n\n- [R3M: A Universal Visual Representation for Robot Manipulation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.12601)\n  - Suraj Nair, Aravind Rajeswaran, Vikash Kumar, etc. \n  - Key Words: Ego4D human video dataset, pre-train visual representation\n  - ExpEnv: [MetaWorld](https:\u002F\u002Fgithub.com\u002Fsuraj-nair-1\u002Fmetaworld), [Franka Kitchen, Adroit](https:\u002F\u002Fgithub.com\u002Faravindr93\u002Fmjrl)\n\n### Other\n- [RL-EMO: A Reinforcement Learning Framework for Multimodal Emotion Recognition](https:\u002F\u002Farxiv.org\u002Fabs\u002F2005.07648) *ICASSP 2024*\n  - Chengwen Zhang, Yuhao Zhang, Bo Cheng\n  - Keyword: Multimodal Emotion Recognition, Reinforcement Learning, Graph Convolution Network\n  - ExpEnv: None\n\n- [Language Conditioned Imitation Learning over Unstructured Data](https:\u002F\u002Farxiv.org\u002Fabs\u002F2005.07648) *RSS 2021*\n  - Corey Lynch, Pierre Sermanet \n  - Keyword: open-world environments\n  - ExpEnv: None\n\n- [Learning Generalizable Robotic Reward Functions from “In-The-Wild” Human Videos](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.16817) *RSS 2021*\n  - Annie S. Chen, Suraj Nair, Chelsea Finn. \n  - Key Words: Reward Functions, “In-The-Wild” Human Videos\n  - ExpEnv: None\n\n- [Offline Reinforcement Learning from Images with Latent Space Models](https:\u002F\u002Fproceedings.mlr.press\u002Fv144\u002Frafailov21a.html) *L4DC 2021*\n  - Rafael Rafailov, Tianhe Yu, Aravind Rajeswaran, etc. \n  - Key Words: Latent Space Models\n  - ExpEnv: [DeepMind Control](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Fdm_control), [Adroit Pen](https:\u002F\u002Fgithub.com\u002FFarama-Foundation\u002FD4RL), [Sawyer Door Open](https:\u002F\u002Fgithub.com\u002Fsuraj-nair-1\u002Fmetaworld), [Robel D’Claw Screw](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Frobel)\n\n- [Is Cross-Attention Preferable to Self-Attention for Multi-Modal Emotion Recognition?](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.09263) *ICASSP 2022*\n  - Vandana Rajan, Alessio Brutti, Andrea Cavallaro. 
\n  - Key Words: Multi-Modal Emotion Recognition, Cross-Attention\n  - ExpEnv: None\n\n### ArXiv\n- [Spatialvlm: Endowing vision-language models with spatial reasoning capabilities](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2401.12168)\n  - Boyuan Chen, Zhuo Xu, Sean Kirmani, Brian Ichter, Danny Driess, Pete Florence, Dorsa Sadigh, Leonidas Guibas, Fei Xia\n  - Key Words: Visual Question Answering, 3D Spatial Reasoning\n  - ExpEnv:  spatial VQA dataset\n\n- [M2CURL: Sample-Efficient Multimodal Reinforcement Learning via Self-Supervised Representation Learning for Robotic Manipulation ](https:\u002F\u002Fbrowse.arxiv.org\u002Fabs\u002F2401.17032)\n  - Fotios Lygerakis, Vedant Dave, Elmar Rueckert\n  - Key Words: Robotic Manipulation, Self-supervised representation \n  - ExpEnv:  Gym\n\n- [On Time-Indexing as Inductive Bias in Deep RL for Sequential Manipulation Tasks](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.01993)\n  - M. Nomaan Qureshi, Ben Eisner, David Held\n  - Key Words: Multimodality of policy output, Action head switching\n  - ExpEnv:  MetaWorld\n\n- [Parameterized Decision-making with Multi-modal Perception for Autonomous Driving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.11935)\n  - Yuyang Xia, Shuncheng Liu, Quanlin Yu, Liwei Deng, You Zhang, Han Su, Kai Zheng\n  - Key Words: Autonomous driving, GNN in RL\n  - ExpEnv:  CARLA\n\n- [A Contextualized Real-Time Multimodal Emotion Recognition for Conversational Agents using Graph Convolutional Networks in Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.15683)\n  - Fathima Abdul Rahman, Guang Lu\n  - Key Words: Emotion Recognition, GNN in RL\n  - ExpEnv: IEMOCAP\n\n- [Reinforced UI Instruction Grounding: Towards a Generic UI Task Automation API](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.04716)\n  - Zhizheng Zhang, Wenxuan Xie, Xiaoyi Zhang, Yan Lu\n  - Key Words: LLM, generic UI task automation API\n  - ExpEnv: RicoSCA, MoTIF\n\n- [Driving with LLMs: Fusing Object-Level Vector Modality for Explainable Autonomous Driving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.01957)\n  - Long Chen, Oleg Sinavski, Jan Hünermann, Alice Karnsund, Andrew James Willmott, Danny Birch, Daniel Maund, Jamie Shotton\n  - Key Words: LLM in Autonomous Driving, object-level multimodal LLM\n  - ExpEnv: RicoSCA, MoTIF\n\n- [Nonprehensile Planar Manipulation through Reinforcement Learning with Multimodal Categorical Exploration ](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.02459)\n  - Juan Del Aguila Ferrandis, João Moura, Sethu Vijayakumar\n  - Key Words: multimodal exploration approach\n  - ExpEnv: KUKA iiwa robot arm\n\n- [End-to-End Streaming Video Temporal Action Segmentation with Reinforce Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.15683)\n  - Wujun Wen, Jinrong Zhang, Shenglan Liu, Yunheng Li, Qifeng Li, Lin Feng\n  - Key Words: Temporal Action Segmentation, RL in Video Analysis\n  - ExpEnv: EGTEA\n\n- [Do as I can, not as I get: Topology-aware multi-hop reasoning on multi-modal knowledge graphs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.10345)\n  - Shangfei Zheng, Hongzhi Yin, Tong Chen, Quoc Viet Hung Nguyen, Wei Chen, Lei Zhao\n  - Key Words: Multi-hop reasoning, multi-modal knowledge graphs, inductive setting, adaptive reinforcement learning\n  - ExpEnv: None\n\n- [Multimodal Reinforcement Learning for Robots Collaborating with Humans](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.07265)\n  - Afagh Mehri Shervedani, Siyu Li, Natawut Monaikul, Bahareh Abbasi, Barbara Di Eugenio, Milos 
Zefran\n  - Key Words: robust and deliberate decisions, end-to-end training, importance enhancement, similarity, improve IRL training process multimodal RL domains\n  - ExpEnv: None\n\n- [See, Plan, Predict: Language-guided Cognitive Planning with Video Prediction](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.03825v1)\n  - Maria Attarian, Advaya Gupta, Ziyi Zhou, Wei Yu, Igor Gilitschenski, Animesh Garg\n  - Keyword: cognitive planning,  language-guided video prediction\n  - ExpEnv: None\n\n- [Open-vocabulary Queryable Scene Representations for Real World Planning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.09874)\n  - Boyuan Chen, Fei Xia, Brian Ichter, Kanishka Rao, Keerthana Gopalakrishnan, Michael S. Ryoo, Austin Stone, Daniel Kappler\n  - Key Words: Target Detection, Real World, Robotic Tasks\n  - ExpEnv: [Say Can](https:\u002F\u002Fsay-can.github.io\u002F)\n\n- [Do As I Can, Not As I Say: Grounding Language in Robotic Affordances](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.01691)\n  - Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Daniel Ho, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Eric Jang, Rosario Jauregui Ruano, Kyle Jeffrey, Sally Jesmonth, Nikhil J Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Kuang-Huei Lee, Sergey Levine, Yao Lu, Linda Luu, Carolina Parada, Peter Pastor, Jornell Quiambao, Kanishka Rao, Jarek Rettinghouse, Diego Reyes, Pierre Sermanet, Nicolas Sievers, Clayton Tan, Alexander Toshev, Vincent Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Mengyuan Yan, Andy Zeng\n  - Key Words: real world, natural language\n  - ExpEnv: [Say Can](https:\u002F\u002Fsay-can.github.io\u002F)\n\n## Contributing\n\nOur purpose is to make this repo even better. 
If you are interested in contributing, please refer to [HERE](CONTRIBUTING.md) for instructions in contribution.\n\n\n## License\n\nAwesome Multi-Modal Reinforcement Learning is released under the Apache 2.0 license.\n","# 令人惊叹的多模态强化学习 \n[![Awesome](https:\u002F\u002Fcdn.rawgit.com\u002Fsindresorhus\u002Fawesome\u002Fd7305f38d29fed78fa85652e3a63e154dd8e8829\u002Fmedia\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fsindresorhus\u002Fawesome) \n![访问者徽章](https:\u002F\u002Fvisitor-badge.lithub.cc\u002Fbadge?page_id=opendilab.awesome-multi-modal-reinforcement-learning&left_text=Visitors)\n[![文档](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdocs-latest-blue)](https:\u002F\u002Fgithub.com\u002Fopendilab\u002Fawesome-multi-modal-reinforcement-learning)\n![GitHub 星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fopendilab\u002Fawesome-multi-modal-reinforcement-learning?color=yellow)\n![GitHub 分叉数](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fforks\u002Fopendilab\u002Fawesome-multi-modal-reinforcement-learning?color=9cf)\n[![GitHub 许可证](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Fopendilab\u002Fawesome-multi-modal-reinforcement-learning)](https:\u002F\u002Fgithub.com\u002Fopendilab\u002Fawesome-multi-modal-reinforcement-learning\u002Fblob\u002Fmain\u002FLICENSE)\n\n这是一个关于**多模态强化学习 (MMRL)** 的研究论文合集。\n该仓库将持续更新，以追踪 MMRL 的前沿进展。\n部分论文可能与强化学习无关，但我们仍将其收录，因为它们对 MMRL 研究可能具有参考价值。\n\n欢迎关注并点赞！\n\n## 简介\n\n多模态强化学习智能体专注于像人类一样从视频（图像）、语言（文本）或两者中学习。我们认为，智能体直接从图像或文本中学习非常重要，因为这类数据很容易从互联网上获取。\n\n![飞书20220922-161353](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_awesome-multi-modal-reinforcement-learning_readme_f83d0258618e.png)\n\n## 目录\n\n- [令人惊叹的多模态强化学习](#awesome-multi-modal-reinforcement-learning)\n  - [简介](#introduction)\n  - [目录](#table-of-contents)\n  - [论文](#papers)\n    - [NeurIPS 2025](#neurips-2025)\n    - [ICML 2025](#icml-2025)\n    - [ICLR 2025](#iclr-2025)\n    - [ICLR 2024](#iclr-2024)\n    - [ICLR 2023](#iclr-2023)\n    - [ICLR 2022](#iclr-2022)\n    - [ICLR 2021](#iclr-2021)\n    - [ICLR 2019](#iclr-2019)\n    - [NeurIPS 2024](#neurips-2024)\n    - [NeurIPS 2023](#neurips-2023)\n    - [NeurIPS 2022](#neurips-2022)\n    - [NeurIPS 2021](#neurips-2021)\n    - [NeurIPS 2018](#neurips-2018)\n    - [ICML 2024](#icml-2024)\n    - [ICML 2022](#icml-2022)\n    - [ICML 2019](#icml-2019)\n    - [ICML 2017](#icml-2017)\n    - [CVPR 2024](#cvpr-2024)\n    - [CVPR 2022](#cvpr-2022)\n    - [CoRL 2022](#corl-2022)\n    - [其他](#other)\n    - [ArXiv](#arxiv)\n  - [贡献](#contributing)\n  - [许可证](#license)\n\n## 论文\n\n```\n格式：\n- [标题](论文链接) [链接]\n  - 作者。\n  - 关键词。\n  - 实验环境。\n```\n\n### NeurIPS 2025\n\n- [PRIMT：基于偏好的多模态反馈与基础模型轨迹合成的强化学习](https:\u002F\u002Fopenreview.net\u002Fforum?id=4xvE6Iy77Y)\n  - 王瑞琪、赵德忠、袁子钦、邵天宇、陈国华、多米尼克·考、洪成勋、闵炳哲\n  - 关键词：基于偏好的强化学习、机器人基础模型、神经符号融合、多模态反馈、因果推断、轨迹合成、机器人操作\n  - 实验环境：2个移动任务和6个操作任务\n\n- [用于统一多模态理解和生成的协同强化学习](https:\u002F\u002Fopenreview.net\u002Fpdf\u002F89f6d8ae4000d3f0c62faca1308194e0858b9a65.pdf)\n  - 蒋静静、司崇杰、罗俊、张汉旺、马超\n  - 关键词：强化学习、GRPO、统一多模态理解和生成\n  - 实验环境：文本到图像生成及多模态理解基准测试\n\n- [VL-Rethinker：利用强化学习激励视觉-语言模型自我反思](https:\u002F\u002Ftiger-ai-lab.github.io\u002FVL-Rethinker\u002F)\n  - 王浩哲、曲超、黄祖明、楚伟、林方珍、陈文虎\n  - 关键词：视觉-语言模型、推理、强化学习\n  - 实验环境：MathVista、MathVerse、MathVision、MMMU-Pro、EMMA、MEGA-Bench\n\n- [用更少达到顶尖：MCTS引导的样本选择用于数据高效的视觉推理自我改进](https:\u002F\u002Fopenreview.net\u002Fforum?id=PHu9xJeAum&referrer=%5Bthe%20profile%20of%20Furong%20Huang%5D(%2Fprofile%3Fid%3D~Furong_Huang1))\n  - 王希瑶、杨正源、冯超、陆洪进、李林杰、林忠清、林凯文、黄福荣、王丽娟\n  - 
关键词：视觉语言模型；强化微调；视觉语言模型推理；数据选择\n  - 实验环境：MathVista及其他视觉推理基准测试\n\n- [VisualQuality-R1：通过强化学习排序实现推理驱动的无参考图像质量评估](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.14460)\n  - 吴天赫、邹健、梁杰、张雷、马克德\n  - 关键词：图像质量评估、强化学习、推理驱动的无参考 IQA 模型\n  - 实验环境：图像质量评估基准测试\n\n- [Q-Insight：通过视觉强化学习理解图像质量](https:\u002F\u002Fopenreview.net\u002Fforum?id=Bds54EfR9x)\n  - 李伟奇、张轩宇、赵世杰、张亚斌、李俊林、张力、张建\n  - 关键词：图像质量理解、多模态大型语言模型、强化学习\n  - 实验环境：图像质量理解基准测试\n\n- [思考还是不思考：规则驱动的视觉强化微调中的思考研究](https:\u002F\u002Fopenreview.net\u002Fforum?id=YexxvBGwQM&referrer=%5Bthe%20profile%20of%20Kaipeng%20Zhang%5D(%2Fprofile%3Fid%3D~Kaipeng_Zhang1))\n  - 李明、钟继科、赵士田、赖宇翔、张浩泉、朱王比尔、张凯鹏\n  - 关键词：视觉强化微调、明确思考、过度思考\n  - 实验环境：六个不同的视觉推理任务\n\n- [针对大型视觉-语言模型推理的快慢思维 GRPO](https:\u002F\u002Fopenreview.net\u002Fpdf\u002F52fe68c14c7abde636af8fd396bd6a026575c7f0.pdf)\n  - 肖文义、甘蕾蕾\n  - 关键词：大型视觉-语言模型、快慢思维、推理\n  - 实验环境：七个推理基准测试\n\n- [SafeVLA：通过约束学习实现视觉-语言-动作模型的安全对齐](https:\u002F\u002Fopenreview.net\u002Fforum?id=dt940loCBT&referrer=%5Bthe%20profile%20of%20Yuanpei%20Chen%5D(%2Fprofile%3Fid%3D~Yuanpei_Chen2))\n  - 张博荣、张宇豪、季嘉铭、雷英山、戴约瑟夫、陈元培、杨耀东\n  - 关键词：视觉-语言-动作模型、安全对齐、大规模约束学习\n  - 实验环境：长时程移动操作任务\n\n- [思考还是不思考？视觉-语言模型的强化学习选择性推理](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.16854)\n  - 王佳琪、林庆宏、程詹姆斯、郑守迈克\n  - 关键词：视觉-语言模型、强化学习\n  - 实验环境：CLEVR、Super-CLEVR、GeoQA、AITZ\n\n- [VisionThink：通过强化学习实现智能高效的视觉语言模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.13348)\n  - 杨森乔、李俊毅、赖欣、吴金明、李伟、马泽军、于蓓、赵恒爽、贾亚佳\n  - 关键词：视觉语言模型，强化学习\n  - 实验环境：通用VQA任务和OCR相关任务\n\n- [系统性奖励差距优化以缓解VLM幻觉问题](https:\u002F\u002Fopenreview.net\u002Fforum?id=fJRuMulPkc)\n  - 何乐涵、陈泽仁、史哲伦、余天宇、盛陆、邵静\n  - 关键词：视觉语言模型（VLMs）、偏好学习、幻觉缓解、基于人工智能反馈的强化学习（RLAIF）\n  - 实验环境：ObjectHal-Bench及其他幻觉基准测试\n\n- [VAGEN：面向多轮VLM智能体的世界模型推理增强](https:\u002F\u002Fopenreview.net\u002Fforum?id=xpjWEgf8zi&referrer=%5Bthe%20profile%20of%20Qineng%20Wang%5D(%2Fprofile%3Fid%3D~Qineng_Wang1)\n  - 王康睿、张平悦、王子涵、高雅宁、李林杰、王钦能、陈汉阳、陆一平、杨正源、王丽娟、兰杰·克里希纳、吴家俊、李飞飞、崔艺珍、李曼玲\n  - 关键词：视觉状态、世界建模、多轮强化学习、VLM智能体\n  - 实验环境：五种多样化的智能体任务\n\n- [Point-RFT：利用视觉接地强化微调提升多模态推理能力](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.19702)\n  - 倪明亨、杨正源、李林杰、林忠清、凯文·林、左王猛、王丽娟\n  - 关键词：大型多模态模型、接地推理、强化学习\n  - 实验环境：ChartQA、CharXiv、PlotQA、IconQA、TabMWP\n\n- [MiCo：用于强化视觉推理的多图像对比方法](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.22434)\n  - 陈曦、朱明康、刘绍腾、吴晓洋、徐晓刚、刘宇、白翔、赵恒爽\n  - 关键词：视觉推理、思维链、LLM、VLM、MLLM\n  - 实验环境：多图像推理基准测试\n\n- [DeepVideo-R1：基于难度感知的回归式GRPO进行视频强化微调](https:\u002F\u002Fopenreview.net\u002Fpdf\u002F3059449b6b0406f728c822c610ef64b57988b472.pdf)\n  - 朴仁英、罗智惠、金仁英、金贤宇\n  - 关键词：视频大型语言模型、后训练、GRPO\n  - 实验环境：视频推理基准测试\n\n- [SAM-R1：利用SAM为多模态分割提供强化学习奖励反馈](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.22596)\n  - 黄嘉琪、许尊南、周俊、刘婷、肖义成、欧明文、季博文、李秀、袁可洪\n  - 关键词：强化学习、多模态大型模型、图像分割\n  - 实验环境：图像分割基准测试\n\n- [SE-GUI：通过自进化强化学习提升GUI智能体的视觉接地能力](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.12370)\n  - 袁新斌、张健、李凯欣、蔡卓轩、姚路健、陈杰、王恩光、侯启彬、陈锦威、蒋鹏涛、李博\n  - 关键词：GUI智能体；强化学习；视觉接地\n  - 实验环境：ScreenSpot-Pro及其他接地基准测试\n\n- [GUI探索实验室：通过多轮强化学习提升智能体的屏幕导航能力](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.02423)\n  - 闫浩龙、沈业青、黄鑫、王佳、谭凯俊、梁志轩、李宏信、葛政、吉江修、李思、张向宇、姜大鑫\n  - 关键词：GUI环境、大型视觉语言模型、多轮强化学习、智能体\n  - 实验环境：PC软件和移动App模拟\n\n- [Reason-RFT：视觉语言模型视觉推理的强化微调](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.20752)\n  - 谭华杰、姬宇恒、郝晓帅、陈先胜、王鹏威、王中元、张尚航\n  - 关键词：多模态、强化微调、视觉推理\n  - 实验环境：视觉计数、结构感知、空间变换\n\n- [TempSamp-R1：视频LLM的有效时间采样与强化微调](https:\u002F\u002Fopenreview.net\u002Fpdf\u002F54deb111742d89d20c4d8ec881ccb3a7d3f8aed6.pdf)\n  - 李云恒、景程、贾少勇、匡航毅、焦绍辉、侯启彬、程明明\n  - 关键词：时间接地；多模态大型语言模型；强化微调\n  - 实验环境：Charades-STA、ActivityNet Captions、QVHighlights\n\n- 
[视觉推理的接地强化学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.23678)\n  - 加布里埃尔·赫伯特·萨尔奇、斯尼格达·萨哈、奈蒂克·坎德尔瓦尔、阿尤什·贾因、迈克尔·J·塔尔、阿维拉尔·库马尔、卡特琳娜·弗拉基达基\n  - 关键词：视觉推理、视觉语言模型、强化学习、视觉接地\n  - 实验环境：SAT-2、BLINK、V*bench、ScreenSpot、VisualWebArena\n\n- [SRPO：通过反思感知强化学习提升多模态LLM推理能力](https:\u002F\u002Fopenreview.net\u002Fpdf\u002F514851b938f3c4ab888ed4499926b37fdbb1b89c.pdf)\n  - 万仲伟、窦志豪、刘彻、张宇、崔东飞、赵秦健、沈辉、熊静、辛毅、蒋义凡、陶超凡、何扬帆、张米、严申\n  - 关键词：MLLMs、推理\n  - 实验环境：MathVista、MathVision、Mathverse、MMMU-Pro\n\n- [Omni-R1：通过双系统协作实现全模态推理的强化学习](https:\u002F\u002Fopenreview.net\u002Fpdf\u002Feaa5aae3372db3e083fa1a4f662f19c999ff6d91.pdf)\n  - 钟浩、朱牧之、杜宗泽、黄郑、赵灿宇、刘明宇、王文、陈浩、沈春华\n  - 关键词：RL、Omnimodal\n  - 实验环境：指代式视听分割（RefAVS）、推理型视频对象分割（REVOS）\n\n- [Janus-Pro-R1：通过强化学习推进协作式视觉理解与生成](https:\u002F\u002Farxiv.org\u002Fhtml\u002F2506.01480v2)\n  - 潘凯航、吴洋、卜文东、沈凯、李俊成、王颖婷、李云飞、唐思亮、肖俊、吴飞、赵航、庄月婷\n  - 关键词：图像生成、图像理解\n  - 实验环境：文本到图像生成及图像编辑基准测试\n\n- [面向视觉语言慢思考推理的半离策略强化学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.16814)\n  - 沈俊浩、赵海腾、顾宇哲、高松阳、刘奎坤、黄海安、高建飞、林大华、张文伟、陈凯\n  - 关键词：大型视觉语言模型、慢思考推理\n  - 实验环境：MathVision、OlympiadBench\n\n- [ViCrit：一种可用于验证的强化学习代理任务，用于VLM中的视觉感知](https:\u002F\u002Fopenreview.net\u002Fpdf\u002F631a654bdd1aa99c3357feb56e89859a66512702.pdf)\n  - 王西瑶、杨正源、冯超、周宇航、刘晓宇、梁永源、李明、臧子怡、李林杰、林忠清、凯文·林、黄福荣、王丽娟\n  - 关键词：视觉推理；视觉语言模型；视觉字幕；奖励模型；视觉幻觉\n  - 实验环境：视觉感知基准测试\n\n- [GuardReasoner-VL：通过强化推理保障视觉语言模型的安全](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.11049)\n  - 刘悦、翟圣芳、杜明哲、陈宇林、Tri Cao、高洪成、王成、李新峰、王坤、方俊峰、张嘉恒、Bryan Hooi\n  - 关键词：视觉语言模型、安全防护模型、强化学习、大型推理模型\n  - 实验环境：VLMs的安全基准测试\n\n- [Fact-R1：基于深度推理的可解释视频虚假信息检测](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.16836)\n  - 张帆睿、李典、张强、Chenjun、sinbadliu、林俊雄、严家宏、刘嘉伟、Zheng-Jun Zha\n  - 关键词：视频虚假信息检测、深度推理\n  - 实验环境：FakeVV基准测试\n\n- [Open Vision Reasoner：将语言认知行为迁移到视觉推理中](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.05255)\n  - 魏雅娜、赵亮、孙建建、林康恒、尹继生、胡景程、张银民、于恩、吕浩然、翁泽佳、王佳、韩琪、葛政、张翔宇、蒋大鑫、Vishal M. 
Patel\n  - 关键词：多模态大语言模型、视觉推理、认知行为迁移\n  - 实验环境：MATH500、MathVision、MathVerse\n\n- [VideoRFT：通过强化微调激励多模态大语言模型的视频推理能力](https:\u002F\u002Fopenreview.net\u002Fforum?id=3pORFyKzh1&referrer=%5Bthe%20profile%20of%20Rui%20Mao%5D(%2Fprofile%3Fid%3D~Rui_Mao2))\n  - 王琪、于彦睿、袁晔、毛瑞、周天飞\n  - 关键词：多模态大语言模型、视频推理、强化微调\n  - 实验环境：六个视频推理基准测试\n\n- [Safe RLHF-V：基于多模态人类反馈的安全强化学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.17682)\n  - 季嘉铭、陈欣宇、潘睿、张聪辉、朱翰、李嘉豪、洪东海、陈博远、周佳怡、王凯乐、戴俊涛、陈志敏、唐一达、韩思睿、郭义克、杨耀东\n  - 关键词：AI安全、AI对齐\n  - 实验环境：BeaverTails-V基准测试\n\n- [揭示视觉语言模型的细粒度奖励驱动的步骤式推理链](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.19003)\n  - 陈鸿浩、楼兴洲、冯晓坤、黄凯奇、王新龙\n  - 关键词：VLM、推理、PRM\n  - 实验环境：视觉语言推理基准测试\n\n- [GRIT：教多模态大语言模型用图像思考](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.15879)\n  - 范悦、何学海、杨迪吉、郑凯智、郭清臣、郑雨婷、纳拉亚纳拉朱·斯拉瓦纳·乔蒂、关新泽、王新埃里克\n  - 关键词：多模态推理模型、强化学习\n  - 实验环境：多模态推理基准测试\n\n- [NoisyGRPO：通过噪声注入和贝叶斯估计激励多模态CoT推理](https:\u002F\u002Fopenreview.net\u002Fpdf\u002F96f8ac8e6e7af57f7d1898ceced0a9d165735c5e.pdf)\n  - 邱龙田、宁善、孙家轩、何旭明\n  - 关键词：多模态大语言模型、强化学习\n  - 实验环境：CoT质量和幻觉基准测试\n\n- [Time-R1：用于时间视频定位的大规模视觉语言模型后训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.13377)\n  - 王烨、王子恒、徐博深、杜洋、林克军、肖子涵、岳子豪、鞠建忠、张亮、杨定毅、方湘楠、何泽文、罗振波、王文轩、林俊奇、栾健、金秦\n  - 关键词：大规模视觉语言模型、时间视频定位、强化学习、后训练\n  - 实验环境：Charades-STA、ActivityNet Captions、QVHighlights、TVGBench\n\n- [Generative RLHF-V：从多模态人类偏好中学习原则](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.18531)\n  - 周佳怡、季嘉铭、陈博远、孙嘉鹏、陈文琪、洪东海、韩思睿、郭义克、杨耀东\n  - 关键词：对齐、安全、RLHF、偏好学习、多模态大语言模型\n  - 实验环境：七个多模态基准测试\n\n- [OpenVLThinker：通过迭代SFT-RL循环实现复杂的视觉语言推理](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.17352)\n  - 邓义和、赫里提克·班萨尔、殷凡、彭南云、王伟、常凯威\n  - 关键词：视觉语言推理、迭代改进、蒸馏、强化学习\n  - 实验环境：MathVista、EMMA、HallusionBench\n\n- [Video-R1：强化多模态大语言模型中的视频推理能力](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.21776)\n  - 冯凯拓、龚凯雄、李博豪、郭宗浩、王一兵、彭天硕、吴俊飞、张晓英、王本友、岳向宇\n  - 关键词：多模态大语言模型、视频推理\n  - 实验环境：VideoMMMU、VSI-Bench、MVBench、TempCompass\n\n- [Table2LaTeX-RL：基于强化多模态语言模型从表格图像生成高保真LaTeX代码](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.17589)\n  - 凌俊、齐尧、黄涛、周世博、黄艳琴、江阳、宋子琪、周颖、杨阳、沈恒涛、王鹏\n  - 关键词：表格识别、LaTeX生成\n  - 实验环境：表格转LaTeX基准测试\n\n- [ReAgent-V：一种基于奖励驱动的多智能体框架，用于视频理解](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.01300)\n  - 周逸扬、何洋帆、苏耀峰、韩思伟、张乔尔、贝尔塔修斯·格达斯、班萨尔·莫希特、姚华秀\n  - 关键词：视频理解；多智能体框架；反思性推理；VLA对齐；视频推理\n  - 实验环境：涵盖视频理解、视频推理和VLA任务的12个数据集\n\n- [Actial：激活多模态大语言模型的空间推理能力](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.01618)\n  - 占小宇、黄文轩、孙浩、傅新宇、马昌峰、曹绍胜、贾博文、林绍辉、尹振飞、LEI BAI、欧阳万利、李元启、郭杰、郭延文\n  - 关键词：视觉语言模型、3D推理、GRPO\n  - 实验环境：3D空间推理任务\n\n- [EvolvedGRPO：通过渐进式指令进化解锁LVLM中的推理能力](https:\u002F\u002Fopenreview.net\u002Fforum?id=tjXtcZjIgQ)\n  - 沈哲北、于奇凡、李俊诚、季伟、陈其志、汤思良、庄月婷\n  - 关键词：多模态推理、强化学习、自我改进\n  - 实验环境：多模态推理任务\n\n- [URSA：通过过程奖励模型解锁多模态数学推理能力](https:\u002F\u002Fopenreview.net\u002Fforum?id=96I8PGPALv&referrer=%5Bthe%20profile%20of%20Yujiu%20Yang%5D(%2Fprofile%3Fid%3D~Yujiu_Yang2))\n  - 罗瑞琳、郑卓凡、王磊、王一凡、倪新哲、林子成、姜松涛、于一瑶、施楚凡、储瑞航、曾锦、杨宇久\n  - 关键词：多模态推理、数据合成、过程奖励模型、强化学习\n  - 实验环境：ChartQA及其他多模态数学基准测试\n\n- [通过交织思维与视觉绘图强化视觉语言模型的空间推理能力](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.09965)\n  - 吴俊飞、管健、冯凯拓、刘强、吴树、王亮、吴伟、谭天牛\n  - 关键词：大规模视觉语言模型、空间推理\n  - 实验环境：空间推理基准测试\n\n### ICML 2025\n\n- [ABNet：用于安全且可扩展的机器人学习的自适应显式屏障网络](https:\u002F\u002Fopenreview.net\u002Fpdf?id=ymlwqfxuUc#page=5.86)\n  - 魏晓、王敦轩、甘闯、丹妮拉·鲁斯\n  - 关键词：安全学习、机器人学习、可扩展学习、屏障网络、可证明的安全性、强化学习、多模态控制。\n  - 实验环境：二维机器人避障、安全机器人操作、基于视觉的端到端自动驾驶。\n\n- [DexScale：自动化数据缩放以实现Sim2Real泛化机器人控制](https:\u002F\u002Fopenreview.net\u002Fpdf?id=AVVXX0erKT#page=7.45)\n  - 刘桂良、邓越慈、赵润毅、周华义、陈健、陈继涛、徐瑞燕、邰云鑫、贾奎\n  - 
关键词：数据引擎、具身AI、机器人控制、操作、策略学习、Sim2Real、领域随机化、领域适应、强化学习、多模态控制。\n  - 实验环境：机器人操作任务（如抓取与放置）、多样化任务、多种机器人形态。\n\n- [DynaMind：针对具身决策制定的抽象视频动态推理](https:\u002F\u002Fopenreview.net\u002Fpdf?id=ziDKPXJBYL#page=5.63)\n  - 王子儒、王梦梦、戴洁、马特利、齐国俊、刘勇、戴广、王井东\n  - 关键词：具身决策、多模态学习、视频动态抽象、机器人学习。\n  - 实验环境：LOReL Sawyer、Franka Kitchen、BabyAI、真实场景。\n\n- [Craftium：弥合丰富3D单\u002F多智能体环境中的灵活性与效率](https:\u002F\u002Fopenreview.net\u002Fpdf?id=htP5YRXcS9#page=5.53)\n  - 米克尔·马拉贡、何苏·塞贝里奥、何塞·A·洛萨诺\n  - 关键词：3D环境、强化学习、多智能体系统、具身AI。\n  - 实验环境：Craftium构建的一对一多智能体战斗环境、Craftium中的开放世界环境、Craftium构建的程序化3D地牢。\n\n- [逐层对齐：视觉语言模型中图像编码器各层的安全对齐研究](https:\u002F\u002Fopenreview.net\u002Fpdf?id=F1ff8zcjPp#page=6.08)\n  - 萨凯斯·巴楚、埃尔凡·沙耶加尼、罗希特·拉尔、特里什娜·查克拉博蒂、阿林达姆·杜塔、宋成宇、董悦、纳埃尔·B·阿布-加扎莱、阿米特·罗伊-乔杜里\n  - 关键词：视觉语言模型、安全对齐、人类反馈强化学习（RLHF）、多模态强化学习。\n  - 实验环境：Jailbreak-V28K、AdvBench-COCO（源自AdvBench和MS-COCO）、HH-RLHF、VQA-v2、自定义提示。\n\n### ICLR 2025\n\n- [视觉语言模型是上下文价值学习者](https:\u002F\u002Fopenreview.net\u002Fforum?id=friHAl5ofG)  \n  - 马叶诚、乔伊·海纳、付楚源、德鲁夫·沙赫、杰基·梁、许卓、肖恩·基尔马尼、徐鹏、丹尼·德里斯、泰德·萧、奥斯伯特·巴斯塔尼、迪内什·贾亚拉曼、于文浩、张婷楠、多尔萨·萨迪格、夏菲  \n  - 关键词：机器人学习、视觉语言模型、价值估计、操作  \n  - 实验环境：超过300种不同的现实世界任务，涵盖多种机器人平台，包括双臂操作任务。\n\n- [TopoNets：具有类脑拓扑结构的高性能视觉与语言模型](https:\u002F\u002Fopenreview.net\u002Fforum?id=THqWPzL00e)  \n  - 马尤克·德布、迈纳克·德布、阿普尔瓦·拉坦·穆尔蒂  \n  - 关键词：拓扑结构、神经启发、卷积神经网络、Transformer、视觉皮层、神经科学  \n  - 实验环境：ResNet-18、ResNet-50、ViT、GPT-Neo-125M、NanoGPT。\n\n- [LOKI：利用大型多模态模型的综合合成数据检测基准](https:\u002F\u002Fopenreview.net\u002Fforum?id=z8sxoCYgmd)  \n  - 叶俊彦、周百川、黄子龙、张俊安、白天义、康恒睿、何军、林洪林、王子豪、吴彤、吴志正、陈一平、林大华、何聪辉、李伟嘉  \n  - 关键词：LMMs、Deepfake、多模态性  \n  - 实验环境：视频、图片、3D、文本、音频。\n\n- [两种效应，一个触发：关于对比型视觉-语言模型中的模态差距、对象偏见与信息不平衡](https:\u002F\u002Fopenreview.net\u002Fforum?id=uAFHCZRmXk)  \n  - 西蒙·施罗迪、大卫·T·霍夫曼、马克思·阿尔古斯、沃尔克·费舍尔、托马斯·布罗克斯  \n  - 关键词：CLIP、模态差距、对象偏见、对比损失、数据驱动、视觉语言模型、VLM  \n  - 实验环境：对比型视觉-语言模型（VLM）分析。\n\n- [基于扩散模型的多机器人运动规划](https:\u002F\u002Fopenreview.net\u002Fforum?id=AUCYptvAf3)  \n  - 约赖·绍尔、伊塔马尔·米沙尼、希瓦姆·瓦茨、李娇阳、马克西姆·利哈切夫  \n  - 关键词：多智能体规划、机器人技术、生成模型  \n  - 实验环境：模拟物流环境。\n\n### ICLR 2024\n- [DrM：通过休眠比率最小化掌握视觉强化学习](https:\u002F\u002Fopenreview.net\u002Fpdf?id=MSe8YFbhUE)\n  - 徐国伟、郑睿杰、梁永元、王希瑶、袁哲成、季天颖、罗宇、刘晓宇、袁佳欣、华璞、李淑珍、泽延杰、哈尔·道梅三世、黄福荣、徐华哲\n  - 关键词：视觉RL；休眠比率\n  - 实验环境：[DeepMind Control Suite](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Fdm_control)、[Meta-world](https:\u002F\u002Fgithub.com\u002Frlworkgroup\u002Fmetaworld)、[Adroit](https:\u002F\u002Fgithub.com\u002FFarama-Foundation\u002FD4RL)。\n\n- [深度强化学习中的数据增强再探讨](https:\u002F\u002Fopenreview.net\u002Fpdf?id=EGQBpkIEuu)\n  - 胡建树、蒋云鹏、温保罗\n  - 关键词：强化学习、数据增强\n  - 实验环境：[DeepMind Control Suite](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Fdm_control)。\n\n- [视觉强化学习中的可塑性再审视：数据、模块与训练阶段](https:\u002F\u002Fopenreview.net\u002Fforum?id=0aR1s9YxoL)\n  - 马国政、李璐、张森、刘子轩、王振、陈怡欣、沈莉、王雪倩、陶大成\n  - 关键词：可塑性、视觉强化学习、深度强化学习、样本效率\n  - 实验环境：[DeepMind Control Suite](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Fdm_control)、[Atari](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fgym)。\n\n- [基于像素的对象操作的实体中心强化学习](https:\u002F\u002Fopenreview.net\u002Fforum?id=uDxeSZ1wdI)\n  - 丹·哈拉马蒂、塔尔·丹尼尔、阿维夫·塔马尔\n  - 关键词：深度强化学习、视觉强化学习、对象中心、机器人对象操作、组合泛化\n  - 实验环境：[IsaacGym](https:\u002F\u002Fgithub.com\u002FNVIDIA-Omniverse\u002FIsaacGymEnvs)。\n\n### ICLR 2023\n- [PaLI: 一种联合扩展的多语言语言-图像模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.06794)(**\u003Cfont color=\"red\">值得关注的前5%\u003C\u002Ffont>**) \n  - Xi Chen, Xiao Wang, Soravit Changpinyo, AJ Piergiovanni, Piotr Padlewski, Daniel Salz, Sebastian Goodman, Adam Grycner, Basil Mustafa, Lucas Beyer, Alexander Kolesnikov, Joan Puigcerver, 
Nan Ding, Keran Rong, Hassan Akbari, Gaurav Mishra, Linting Xue, Ashish Thapliyal, James Bradbury, Weicheng Kuo, Mojtaba Seyedhosseini, Chao Jia, Burcu Karagol Ayan, Carlos Riquelme, Andreas Steiner, Anelia Angelova, Xiaohua Zhai, Neil Houlsby, Radu Soricut\n  - 关键词：惊人的零样本能力、语言组件和视觉组件\n  - 实验环境：无\n\n- [VIMA: 基于多模态提示的通用机器人操作](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.03094)\n  - Yunfan Jiang, Agrim Gupta, Zichen Zhang, Guanzhi Wang, Yongqiang Dou, Yanjun Chen, Li Fei-Fei, Anima Anandkumar, Yuke Zhu, Linxi Fan。*NeurIPS 2022 工作坊*\n  - 关键词：多模态提示、基于Transformer的通用智能体模型、大规模基准测试\n  - 实验环境：[VIMA-Bench](https:\u002F\u002Fgithub.com\u002Fvimalabs\u002FVimaBench)、[VIMA-Data](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FVIMA\u002FVIMA-Data)\n\n- [MIND ’S EYE: 通过仿真进行 grounded 语言模型推理](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.05359)\n  - Ruibo Liu, Jason Wei, Shixiang Shane Gu, Te-Yen Wu, Soroush Vosoughi, Claire Cui, Denny Zhou, Andrew M. Dai\n  - 关键词：语言到物理世界的映射、推理能力\n  - 实验环境：[MuJoCo](https:\u002F\u002Fmujoco.org\u002F)\n\n### ICLR 2022\n- [CLIP 对视觉-语言任务能有多大帮助？](https:\u002F\u002Fopenreview.net\u002Fforum?id=zf_Ll3HZWgy)\n  - Sheng Shen, Liunian Harold Li, Hao Tan 等。*ICLR 2022*\n  - 关键词：视觉-语言、CLIP\n  - 实验环境：无\n\n### ICLR 2021\n- [将语言与实体和动态对齐，以实现强化学习中的泛化](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.07393)\n  - Austin W. Hanjie, Victor Zhong, Karthik Narasimhan。*ICML 2021*\n  - 关键词：多模态注意力\n  - 实验环境：[Messenger](https:\u002F\u002Fgithub.com\u002Fahjwang\u002Fmessenger-emma)\n\n- [使用离散世界模型掌握Atari游戏](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.02193)\n  - Danijar Hafner, Timothy Lillicrap, Mohammad Norouzi 等。\n  - 关键词：世界模型\n  - 实验环境：[Atari](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fgym)\n\n- [将表征学习与强化学习解耦](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.08319)\n  - Adam Stooke, Kimin Lee, Pieter Abbeel 等。\n  - 关键词：表征学习、无监督学习\n  - 实验环境：[DeepMind Control](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Fdm_control)、[Atari](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fgym)、[DMLab](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Flab)\n\n### ICLR 2019\n- [利用目标条件策略学习可行动的表征](https:\u002F\u002Farxiv.org\u002Fabs\u002F1811.07819)\n  - Dibya Ghosh, Abhishek Gupta, Sergey Levine。\n  - 关键词：可行动表征学习\n  - 实验环境：2D导航（2D墙、2D房间、轮式机器人、轮式房间、蚂蚁、推动物体）\n\n### NeurIPS 2024\n- [预训练视觉表征在基于模型的强化学习中出人意料的无效性](https:\u002F\u002Fopenreview.net\u002Fpdf?id=LvAy07mCxU)\n  - Moritz Schneider, Robert Krug, Narunas Vaskevicius, Luigi Palmieri, Joschka Boedecker\n  - 关键词：强化学习、rl、基于模型的强化学习、表征学习、pvr、视觉表征\n  - 实验环境：[DeepMind Control Suite](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Fdm_control)、[ManiSkill2]()、[Miniworld]()\n\n- [从头开始学习多模态行为：扩散策略梯度](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2406.00681)\n  - Zechu Li, Rickmer Krohn, Tao Chen, Anurag Ajay, Pulkit Agrawal, Georgia Chalvatzaki\n  - 关键词：强化学习、多模态行为、扩散模型\n  - 实验环境：AntMaze（导航）、机器人操作（Franka任务）\n\n- [寻求共性但保留差异：针对多模态视觉RL的分解动力学建模](https:\u002F\u002Fopenreview.net\u002Fpdf?id=4php6bGL2W)\n  - Yangru Huang, Peixi Peng, Yifan Zhao, Guangyao Chen, Yonghong Tian\n  - 关键：多模态强化学习、视觉RL、动力学建模、模态一致性、模态不一致、DDM\n  - 实验环境：CARLA、DMControl\n  - \n- [FlexPlanner: 基于深度强化学习的灵活3D平面布局设计，在混合动作空间中结合多模态表征](https:\u002F\u002Fopenreview.net\u002Fpdf?id=q9RLsvYOB3)\n  - Ruizhe Zhong, Xingbo Du, Shixiong Kai, Zhentao Tang, Siyuan Xu, Jianye Hao, Mingxuan Yuan, Junchi Yan\n  - 关键词：3D平面布局设计、深度强化学习、混合动作空间、多模态表征\n  - 实验环境：MCNC基准、GSRC基准\n\n### NeurIPS 2023\n- [逆动力学预训练能够为多任务模仿学习学到良好的表征](https:\u002F\u002Fopenreview.net\u002Fpdf?id=kjMGHTo8Cs)\n  - David Brandfonbrener, Ofir Nachum, Joan Bruna\n  - 
关键词：表征学习、模仿学习\n  - 实验环境：[Sawyer开门](https:\u002F\u002Fgithub.com\u002Fsuraj-nair-1\u002Fmetaworld)、[MetaWorld](https:\u002F\u002Fgithub.com\u002Fsuraj-nair-1\u002Fmetaworld)、[Franka厨房、Adroit](https:\u002F\u002Fgithub.com\u002Faravindr93\u002Fmjrl)\n\n- [面向视觉-语言导航的频率增强型数据增广](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2023\u002Fhash\u002F0d9e08f247ca7fbbfd5e50b7ff9cf357-Abstract-Conference.html)\n  - Keji He, Chenyang Si, Zhihe Lu, Yan Huang, Liang Wang, Xinchao Wang\n  - 关键词：视觉-语言导航、高频、数据增广\n  - 实验环境：[Matterport3d](https:\u002F\u002Fniessner.github.io\u002FMatterport\u002F)\n\n- [语言并非一切：将感知与语言模型对齐](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2302.14045)\n  - Shaohan Huang, Li Dong, Wenhui Wang, Yaru Hao 等。\n  - 关键词：多模态感知、世界建模\n  - 实验环境：[IQ50](https:\u002F\u002Faka.ms\u002Fkosmos-iq50)\n\n- [MotionGPT：人类运动是一门外语](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2023\u002Ffile\u002F3fbf0c1ea0716c03dea93bb6be78dd6f-Paper-Conference.pdf)\n  - Biao Jiang, Xin Chen, Wen Liu, Jingyi Yu, Gang Yu, Tao Chen\n  - 关键词：人类运动、文本驱动的运动生成\n  - 实验环境：[HumanML3D](https:\u002F\u002Fericguo5513.github.io\u002Ftext-to-motion)、[KIT](https:\u002F\u002Fmotion-database.humanoids.kit.edu\u002F)\n\n- [大型语言模型是视觉推理的协调者](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2023\u002Ffile\u002Fddfe6bae7b869e819f842753009b94ad-Paper-Conference.pdf)\n  - Liangyu Chen, Bo Li, Sheng Shen, Jingkang Yang, Chunyuan Li, Kurt Keutzer, Trevor Darrell, Ziwei Liu\n  - 关键词：视觉推理、大型语言模型\n  - 实验环境：[A-OKVQA]()、[OK-VQA]()、[e-SNLI-VE]()、[VSR]()\n\n### NeurIPS 2022\n- [MineDojo：构建具有互联网规模知识的开放式具身智能体](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.08853)\n  - 范林溪、王冠智、蒋云帆等\n  - 关键词：多模态数据集，MineClip\n  - 实验环境：[Minecraft](https:\u002F\u002Fminedojo.org\u002F)\n\n- [视频预训练（VPT）：通过观看无标签在线视频学习行动](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.11795)\n  - 鲍文·贝克、伊尔格·阿克卡亚、彼得·若霍夫等\n  - 关键词：逆动力学模型\n  - 实验环境：[minerl](https:\u002F\u002Fgithub.com\u002Fminerllabs\u002Fminerl)\n\n### NeurIPS 2021\n- [SOAT：一种面向场景与物体的视觉-语言导航Transformer](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.14143.pdf)\n  - 阿比纳夫·莫德吉尔、阿琼·马朱姆达尔、哈什·阿格瓦尔等\n  - 关键词：视觉-语言导航\n  - 实验环境：[Room-to-Room](https:\u002F\u002Fpaperswithcode.com\u002Fdataset\u002Froom-to-room)、[Room-Across-Room](https:\u002F\u002Fgithub.com\u002Fgoogle-research-datasets\u002FRxR)\n\n- [用于数据高效强化学习的表征预训练](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2021\u002Fhash\u002F69eba34671b3ef1ef38ee85caae6b2a1-Abstract.html)\n  - 马克斯·施瓦策、尼塔尔尚·拉杰库马尔、迈克尔·诺霍维奇等\n  - 关键词：潜在动力学建模、无监督强化学习\n  - 实验环境：[Atari](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fgym)\n\n### NeurIPS 2018\n- [循环世界模型促进策略进化](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2018\u002Fhash\u002F2de5d16682c3c35007e4e92982f1a2ba-Abstract.html)\n  - 大卫·哈、尤尔根·施密德胡伯\n  - 关键词：世界模型、生成式RNN、VAE\n  - 实验环境：[VizDoom](https:\u002F\u002Fgithub.com\u002Fmwydmuch\u002FViZDoom)、[CarRacing](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fgym)\n\n### ICML 2024\n- [基于视觉的强化学习中泛化能力的预训练目标研究](https:\u002F\u002Fproceedings.mlr.press\u002Fv235\u002Fkim24u.html)\n  - 金东虎、李浩俊、李京民、黄东润、秋在角\n  - 关键词：基于视觉的强化学习\n  - 实验环境：[Atari](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fgym)\n\n- [RL-VLM-F：来自视觉语言基础模型反馈的强化学习](https:\u002F\u002Fproceedings.mlr.press\u002Fv235\u002Fwang24bn.html)\n  - 王宇飞、孙占义、张杰西、西安周、埃尔德姆·比耶克、戴维·赫尔德、扎科里·埃里克森\n  - 关键词：从VLM学习\n  - 实验环境：[Gym]()、[MetaWorld](https:\u002F\u002Fgithub.com\u002Fsuraj-nair-1\u002Fmetaworld)\n\n- [利用辅助奖励智能体进行强化学习的奖励塑造](https:\u002F\u002Fproceedings.mlr.press\u002Fv235\u002Fma24l.html)\n  - 
马浩哲、司马宽宽、武清荣、傅迪、梁泽云\n  - 关键词：双智能体奖励塑造框架\n  - 实验环境：[Mujoco](https:\u002F\u002Fgithub.com\u002Fgoogle-deepmind\u002Fmujoco)\n\n- [FuRL：将视觉-语言模型作为模糊奖励用于强化学习](https:\u002F\u002Fproceedings.mlr.press\u002Fv235\u002Ffu24j.html)\n  - 傅宇伟、张海超、吴迪、许伟、博努瓦·布勒\n  - 关键词：高维观测、强化学习中的表征学习\n  - 实验环境：[MetaWorld](https:\u002F\u002Fgithub.com\u002Fsuraj-nair-1\u002Fmetaworld)\n\n- [具有连续潜在动力学的丰富观测强化学习](https:\u002F\u002Fproceedings.mlr.press\u002Fv235\u002Fsong24i.html)\n  - 宋宇达、吴丽丽、迪伦·J·福斯特、阿克谢·克里希纳穆提\n  - 关键词：VLM作为奖励函数\n  - 实验环境：[迷宫]()\n\n- [强化学习中由LLM赋能的状态表示](https:\u002F\u002Fproceedings.mlr.press\u002Fv235\u002Fwang24bh.html)\n  - 王博远、曲云、姜宇航、邵建准、刘畅、杨文明、季向阳\n  - 关键词：基于LLM的状态表示\n  - 实验环境：[Mujoco](https:\u002F\u002Fgithub.com\u002Fgoogle-deepmind\u002Fmujoco)\n\n- [代码即奖励：用VLM赋能强化学习](https:\u002F\u002Fproceedings.mlr.press\u002Fv235\u002Fvenuto24a.html)\n  - 大卫·韦努托、穆罕默德·萨米·努尔·伊斯兰、马丁·克利萨罗夫等\n  - 关键词：视觉-语言模型、奖励函数\n  - 实验环境：[MiniGrid](https:\u002F\u002Fminigrid.farama.org\u002F)\n\n### ICML 2022\n- [语言模型作为零样本规划器：为具身智能体提取可操作知识](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2201.07207.pdf)\n  - 黄文龙、皮特·阿贝尔、迪帕克·帕塔克等\n  - 关键词：大型语言模型、具身智能体\n  - 实验环境：[VirtualHome](https:\u002F\u002Fgithub.com\u002Fxavierpuigf\u002Fvirtualhome)\n\n- [基于视频的无动作预训练强化学习](https:\u002F\u002Fproceedings.mlr.press\u002Fv162\u002Fseo22a.html)\n  - 徐英教、李基敏、斯蒂芬·L·詹姆斯等\n  - 关键词：无动作预训练、视频\n  - 实验环境：[Meta-world](https:\u002F\u002Fgithub.com\u002Frlworkgroup\u002Fmetaworld)、[DeepMind Control Suite](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Fdm_control)\n\n- [强化学习中利用语言模型进行历史压缩](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.12258)\n  - 法比安·派舍尔、托马斯·阿德勒、维杭·帕蒂尔等\n  - 关键词：预训练语言转换器\n  - 实验环境：[Minigrid](https:\u002F\u002Fgithub.com\u002Fmaximecb\u002Fgym-minigrid)、[Procgen](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fprocgen)\n\n### ICML 2019\n- [从像素中学习潜在动力学以进行规划](https:\u002F\u002Farxiv.org\u002Fabs\u002F1811.04551)\n  - 达尼雅尔·哈夫纳、蒂莫西·利利克拉普、伊恩·费舍尔等\n  - 关键词：潜在动力学模型、像素观测\n  - 实验环境：[DeepMind Control Suite](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Fdm_control)\n\n### ICML 2017\n- [通过多任务深度强化学习实现零样本任务泛化](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.05064)\n  - 欧俊赫、萨廷德·辛格、李洪洛克、普什米特·科利\n  - 关键词：未见指令、序列指令\n  - 实验环境：[Minecraft](https:\u002F\u002Farxiv.org\u002Fabs\u002F1605.09128)\n\n### CVPR 2024\n- [DMR：用于视觉强化学习中帧与事件融合的分解多模态表征](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fhtml\u002FXu_DMR_Decomposed_Multi-Modality_Representations_for_Frames_and_Events_Fusion_in_CVPR_2024_paper.html)\n  - 徐浩然、彭培熙、谭广、李源、徐新海、田永宏\n  - 关键词：视觉强化学习、多模态表征、动态视觉传感器\n  - 实验环境：[Carla]()\n\n- [基于因果学习的视觉-语言导航](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.10241)\n  - 王柳毅、何宗涛、邓荣浩、沈孟娇、刘成举、陈启军\n  - 关键词：视觉-语言导航、跨模态因果Transformer\n  - 实验环境：[R2R](https:\u002F\u002Fbringmeaspoon.org\u002F)、[REVERIE](https:\u002F\u002Fgithub.com\u002Fgoogle-research-datasets\u002FRxR)、[RxR-English](https:\u002F\u002Fgithub.com\u002Fgoogle-research-datasets\u002FRxR)、[SOON]()\n\n### CVPR 2022\n- [面向多模态视频字幕生成的端到端生成式预训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.08264)\n  - Paul Hongsuck Seo、Arsha Nagrani、Anurag Arnab、Cordelia Schmid\n  - 关键词：多模态视频字幕生成、基于未来话语的预训练、多模态视频生成式预训练\n  - 实验环境：[HowTo100M](https:\u002F\u002Farxiv.org\u002Fabs\u002F1906.03327)\n\n- [图像即外语：适用于所有视觉及视觉-语言任务的BEiT预训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2208.10442)\n  - Wenhui Wang、Hangbo Bao、Li Dong、Johan Bjorck、Zhiliang Peng、Qiang Liu、Kriti Aggarwal、Owais Khan Mohammed、Saksham Singhal、Subhojit Som、Furu Wei\n  - 关键词：骨干网络架构、预训练任务、模型规模扩展\n  - 
实验环境：[ADE20K](https:\u002F\u002Fgroups.csail.mit.edu\u002Fvision\u002Fdatasets\u002FADE20K\u002F)、[COCO](https:\u002F\u002Fcocodataset.org\u002F)、[NLVR2](https:\u002F\u002Fpaperswithcode.com\u002Fdataset\u002Fnlvr)、[Flickr30K](https:\u002F\u002Fpaperswithcode.com\u002Fdataset\u002Fflickr30k)\n\n- [全局思考，局部行动：用于视觉-语言导航的双尺度图变换器](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.11742)\n  - Shizhe Chen、Pierre-Louis Guhur、Makarand Tapaswi、Cordelia Schmid、Ivan Laptev\n  - 关键词：双尺度图变换器、双尺度图变换器、可供性检测\n  - 实验环境：无\n\n- [面向运动控制的掩码视觉预训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.06173)\n  - Tete Xiao、Ilija Radosavovic、Trevor Darrell等 *ArXiv 2022*\n  - 关键词：自监督学习、运动控制\n  - 实验环境：[Isaac Gym](https:\u002F\u002Fdeveloper.nvidia.com\u002Fisaac-gym)\n\n\n### CoRL 2022\n- [LM-Nav：基于大型预训练语言、视觉和动作模型的机器人导航](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.04429)\n  - Dhruv Shah、Blazej Osinski、Brian Ichter、Sergey Levine\n  - 关键词：机器人导航、目标条件化、未标注的大规模数据集、CLIP、ViNG、GPT-3\n  - 实验环境：无\n\n- [基于掩码视觉预训练的真实世界机器人学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.03109)\n  - Ilija Radosavovic、Tete Xiao、Stephen James、Pieter Abbeel、Jitendra Malik、Trevor Darrell\n  - 关键词：真实世界机器人任务\n  - 实验环境：无\n\n- [R3M：一种用于机器人操作的通用视觉表征](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.12601)\n  - Suraj Nair、Aravind Rajeswaran、Vikash Kumar等\n  - 关键词：Ego4D人类视频数据集、预训练视觉表征\n  - 实验环境：[MetaWorld](https:\u002F\u002Fgithub.com\u002Fsuraj-nair-1\u002Fmetaworld)、[Franka Kitchen, Adroit](https:\u002F\u002Fgithub.com\u002Faravindr93\u002Fmjrl)\n\n### 其他\n- [RL-EMO：一种用于多模态情感识别的强化学习框架](https:\u002F\u002Farxiv.org\u002Fabs\u002F2005.07648) *ICASSP 2024*\n  - Chengwen Zhang、Yuhao Zhang、Bo Cheng\n  - 关键词：多模态情感识别、强化学习、图卷积网络\n  - 实验环境：无\n\n- [基于非结构化数据的语言条件模仿学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2005.07648) *RSS 2021*\n  - Corey Lynch、Pierre Sermanet\n  - 关键词：开放世界环境\n  - 实验环境：无\n\n- [从“野外”人类视频中学习可泛化的机器人奖励函数](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.16817) *RSS 2021*\n  - Annie S. 
Chen、Suraj Nair、Chelsea Finn\n  - 关键词：奖励函数、“野外”人类视频\n  - 实验环境：无\n\n- [基于潜在空间模型的图像离线强化学习](https:\u002F\u002Fproceedings.mlr.press\u002Fv144\u002Frafailov21a.html) *L4DC 2021*\n  - Rafael Rafailov、Tianhe Yu、Aravind Rajeswaran等\n  - 关键词：潜在空间模型\n  - 实验环境：[DeepMind Control](https:\u002F\u002Fgithub.com\u002Fdeepmind\u002Fdm_control)、[Adroit Pen](https:\u002F\u002Fgithub.com\u002FFarama-Foundation\u002FD4RL)、[Sawyer Door Open](https:\u002F\u002Fgithub.com\u002Fsuraj-nair-1\u002Fmetaworld)、[Robel D’Claw Screw](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Frobel)\n\n- [跨注意力是否优于自注意力用于多模态情感识别？](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.09263) *ICASSP 2022*\n  - Vandana Rajan、Alessio Brutti、Andrea Cavallaro\n  - 关键词：多模态情感识别、跨注意力\n  - 实验环境：无\n\n### ArXiv\n- [Spatialvlm：赋予视觉-语言模型空间推理能力](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2401.12168)\n  - 陈博远、徐卓、肖恩·基尔马尼、布赖恩·伊希特、丹尼·德里斯、皮特·弗洛伦斯、多尔萨·萨迪格、莱昂纳德·吉巴斯、夏飞\n  - 关键词：视觉问答、3D空间推理\n  - 实验环境：spatial VQA数据集\n\n- [M2CURL：基于自监督表征学习的高效多模态强化学习，用于机器人操作](https:\u002F\u002Fbrowse.arxiv.org\u002Fabs\u002F2401.17032)\n  - 弗提奥斯·利格拉基斯、韦丹特·戴夫、埃尔马尔·吕克特\n  - 关键词：机器人操作、自监督表征\n  - 实验环境：Gym\n\n- [时间索引作为深度强化学习中顺序操作任务的归纳偏置](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.01993)\n  - M·诺曼·库雷希、本·艾斯纳、大卫·赫尔德\n  - 关键词：策略输出的多模态性、动作头切换\n  - 实验环境：MetaWorld\n\n- [具有多模态感知的参数化决策在自动驾驶中的应用](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.11935)\n  - 夏宇阳、刘顺成、于权林、邓立伟、张友、苏汉、郑凯\n  - 关键词：自动驾驶、强化学习中的GNN\n  - 实验环境：CARLA\n\n- [基于图卷积网络的强化学习，用于对话式智能体的上下文实时多模态情感识别](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.15683)\n  - 法蒂玛·阿卜杜勒·拉赫曼、卢广\n  - 关键词：情感识别、强化学习中的GNN\n  - 实验环境：IEMOCAP\n\n- [强化的UI指令对齐：迈向通用的UI任务自动化API](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.04716)\n  - 张志正、谢文轩、张晓艺、陆燕\n  - 关键词：LLM、通用UI任务自动化API\n  - 实验环境：RicoSCA、MoTIF\n\n- [使用LLM进行驾驶：融合对象级向量模态以实现可解释的自动驾驶](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.01957)\n  - 陈龙、奥列格·西纳夫斯基、扬·许内尔曼、爱丽丝·卡恩松、安德鲁·詹姆斯·威尔莫特、丹尼·伯奇、丹尼尔·芒德、杰米·肖顿\n  - 关键词：自动驾驶中的LLM、对象级多模态LLM\n  - 实验环境：RicoSCA、MoTIF\n\n- [通过多模态分类探索的强化学习实现非抓握式平面操作](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.02459)\n  - 胡安·德尔·阿吉拉·费兰迪斯、若昂·莫拉、塞思·维贾亚库马尔\n  - 关键词：多模态探索方法\n  - 实验环境：KUKA iiwa机械臂\n\n- [基于强化学习的端到端流式视频时序动作分割](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.15683)\n  - 温武俊、张金荣、刘圣兰、李云恒、李启峰、冯琳\n  - 关键词：时序动作分割、视频分析中的RL\n  - 实验环境：EGTEA\n\n- [做我能做的，而不是我得到的：多模态知识图谱上的拓扑感知多跳推理](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.10345)\n  - 郑尚飞、尹洪志、陈通、阮国越雄、陈伟、赵磊\n  - 关键词：多跳推理、多模态知识图谱、归纳设定、自适应强化学习\n  - 实验环境：无\n\n- [面向与人类协作的机器人的多模态强化学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.07265)\n  - 阿法格·梅赫里·舍尔韦达尼、李思宇、纳塔武特·莫奈库尔、巴哈雷·阿巴西、芭芭拉·迪·欧根尼奥、米洛什·泽夫兰\n  - 关键词：稳健且深思熟虑的决策、端到端训练、重要性增强、相似性、改进IRL训练过程的多模态RL领域\n  - 实验环境：无\n\n- [看、计划、预测：基于语言指导的认知规划与视频预测](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.03825v1)\n  - 玛丽亚·阿塔里安、阿德瓦亚·古普塔、周子怡、于伟、伊戈尔·吉利琴斯基、阿尼梅什·加格\n  - 关键词：认知规划、语言指导的视频预测\n  - 实验环境：无\n\n- [面向现实世界规划的开放词汇可查询场景表征](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.09874)\n  - 陈博远、夏飞、布赖恩·伊希特、卡尼什卡·拉奥、基尔塔娜·戈帕拉克里希南、迈克尔·S·瑞欧、奥斯汀·斯通、丹尼尔·卡普勒\n  - 关键词：目标检测、现实世界、机器人任务\n  - 实验环境：[Say Can](https:\u002F\u002Fsay-can.github.io\u002F)\n\n- [做我能做的，而不是我说的：将语言与机器人可用性相结合](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.01691)\n  - 迈克尔·安、安东尼·布罗汉、诺亚·布朗、叶夫根·切博塔尔、奥马尔·科尔特斯、拜伦·戴维、切尔西·芬恩、傅楚渊、基尔塔娜·戈帕拉克里希南、卡罗尔·豪斯曼、亚历克斯·赫尔佐格、丹尼尔·霍、贾斯敏·许、朱利安·伊巴尔斯、布赖恩·伊希特、亚历克斯·伊尔潘、埃里克·张、罗萨里奥·豪雷吉·鲁阿诺、凯尔·杰弗里、莎莉·杰斯蒙特、尼希尔·J·乔希、瑞安·朱利安、德米特里·卡拉什尼科夫、匡宇恒、李匡辉、谢尔盖·列文、陆耀、卢琳达、卡罗丽娜·帕拉达、彼得·帕斯托尔、乔内尔·奎安宝、卡尼什卡·拉奥、雅雷克·雷廷豪斯、迭戈·雷耶斯、皮埃尔·塞尔马内、尼古拉斯·西弗斯、克莱顿·谭、亚历山大·托舍夫、文森特·范胡克、夏飞、泰德·萧、徐鹏、徐思纯、严梦圆、张安迪\n  - 关键词：现实世界、自然语言\n  - 实验环境：[Say 
Can](https:\u002F\u002Fsay-can.github.io\u002F)\n\n## 贡献\n我们的目标是让这个仓库变得更好。如果你有兴趣参与贡献，请参阅[此处](CONTRIBUTING.md)获取贡献说明。\n\n\n## 许可证\nAwesome Multi-Modal Reinforcement Learning 采用 Apache 2.0 许可证发布。","# Awesome Multi-Modal Reinforcement Learning 快速上手指南\n\n`awesome-multi-modal-reinforcement-learning` 并非一个可直接安装的软件库或框架，而是一个由 OpenDILab 维护的**精选论文清单仓库**。它汇集了多模态强化学习（MMRL）领域的前沿研究论文，涵盖视觉、语言及多模态融合等方向。\n\n本指南旨在帮助开发者快速获取该资源列表，并从中定位所需的研究成果。\n\n## 环境准备\n\n本项目主要为文档和资源索引，无需复杂的系统环境或 GPU 支持即可浏览。若需复现清单中的具体论文代码，请参考对应论文的独立仓库要求。\n\n*   **操作系统**：Windows \u002F macOS \u002F Linux\n*   **前置依赖**：\n    *   Git（用于克隆仓库）\n    *   现代浏览器（用于查看渲染后的 Markdown 或访问论文链接）\n    *   （可选）Python 环境：仅当你需要运行清单中某篇论文提供的官方代码时才需要安装。\n\n## 安装步骤\n\n由于这是一个资源列表仓库，\"安装\"即为克隆代码库到本地以便离线查阅或贡献。\n\n1.  **克隆仓库**\n    打开终端，执行以下命令：\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002Fopendilab\u002Fawesome-multi-modal-reinforcement-learning.git\n    ```\n\n    *国内加速方案*：如果访问 GitHub 较慢，可使用 Gitee 镜像（如有）或通过代理加速：\n    ```bash\n    # 使用国内镜像源示例（若存在同步镜像）\n    # git clone https:\u002F\u002Fgitee.com\u002Fmirror\u002Fopendilab-awesome-multi-modal-reinforcement-learning.git\n    \n    # 或者配置 git 深度克隆以加快速度\n    git clone --depth 1 https:\u002F\u002Fgithub.com\u002Fopendilab\u002Fawesome-multi-modal-reinforcement-learning.git\n    ```\n\n2.  **进入目录**\n    ```bash\n    cd awesome-multi-modal-reinforcement-learning\n    ```\n\n3.  **查看内容**\n    你可以直接在本地使用 Markdown 阅读器打开 `README.md`，或在 GitHub 网页版浏览最新更新的列表。\n\n## 基本使用\n\n该仓库的核心用途是**检索和追踪最新的多模态强化学习论文**。\n\n### 1. 浏览论文列表\n打开根目录下的 `README.md` 文件。内容按会议年份分类（如 NeurIPS 2025, ICLR 2024, CVPR 2024 等）。\n\n每条记录包含以下关键信息：\n*   **标题与链接**：点击可跳转至论文原文（OpenReview\u002FarXiv）。\n*   **作者**：研究团队信息。\n*   **关键词**：快速了解核心技术点（如 `Vision-Language Models`, `GRPO`, `Safety Alignment`）。\n*   **实验环境**：标明了论文使用的基准测试或任务环境（如 `MathVista`, `Robot Manipulation`）。\n\n### 2. 查找特定主题\n利用文本搜索功能（Ctrl+F \u002F Cmd+F）在 `README.md` 中查找关键词。\n\n**示例**：如果你想找关于“视觉语言模型推理（Visual Reasoning）”的最新论文：\n1.  在文件中搜索 `Reasoning` 或 `Visual Reasoning`。\n2.  定位到相关条目，例如：\n    > [VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning](https:\u002F\u002Ftiger-ai-lab.github.io\u002FVL-Rethinker\u002F)\n    > - Keywords: Vision-Language Models, Reasoning, Reinforcement Learning\n    > - ExpEnv: MathVista, MathVerse...\n\n3.  点击链接访问论文主页或代码仓库进行深入研究。\n\n### 3. 
追踪前沿动态\n该仓库会持续更新。建议定期执行以下命令拉取最新列表：\n```bash\ngit pull origin main\n```\n\n> **提示**：若需复现某篇论文，请根据列表中的链接进入该论文的官方项目页面，按照其特定的 `Installation` 和 `Usage` 说明进行操作。","某机器人实验室团队正致力于研发一款能同时理解视觉指令和自然语言的家庭服务机器人，需要快速复现前沿的多模态强化学习算法以验证新架构。\n\n### 没有 awesome-multi-modal-reinforcement-learning 时\n- **文献检索如大海捞针**：研究人员需在 ArXiv、NeurIPS、ICLR 等多个会议中手动筛选，极易遗漏像\"PRIMT\"或\"VL-Rethinker\"这类结合基础模型与多模态反馈的关键论文。\n- **环境复现成本高昂**：找到论文后，往往难以确认其具体的实验环境（如是用于机械臂操作还是文本生成），导致在错误的仿真平台上浪费数周时间搭建代码。\n- **技术路线判断滞后**：由于缺乏按年份和会议分类的结构化整理，团队难以快速把握从 2019 年到 2025 年的技术演进脉络，容易在过时的方法上投入过多资源。\n- **跨模态关联困难**：难以发现那些虽非纯 RL 但对多模态感知有重要价值的跨界研究，限制了智能体从视频和文本中直接学习的能力上限。\n\n### 使用 awesome-multi-modal-reinforcement-learning 后\n- **一站式获取前沿成果**：直接查阅该清单即可锁定最新顶会论文，迅速定位到针对“多模态反馈”和“轨迹合成”的最新解决方案，将调研时间从数周缩短至数小时。\n- **精准匹配实验场景**：通过清单中明确标注的\"ExpEnv\"字段，团队能立即确认哪些算法适用于\"6 项机械臂操作任务”，从而跳过无效尝试，直接复用适配的代码框架。\n- **清晰把握演进趋势**：利用按会议年份（如 ICLR 2024\u002F2025）分类的目录，团队能快速梳理出视觉 - 语言模型推理能力的提升路径，制定出更具前瞻性的研发路线图。\n- **拓宽技术视野**：清单收录的非纯 RL 相关论文为团队提供了额外的灵感来源，成功引入了新的神经符号融合思路，显著提升了机器人对复杂指令的理解精度。\n\nawesome-multi-modal-reinforcement-learning 通过结构化整合全球顶尖研究成果，让研发团队从繁琐的文献挖掘中解放出来，专注于核心算法的创新与落地。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopendilab_awesome-multi-modal-reinforcement-learning_f83d0258.png","opendilab","OpenDILab","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fopendilab_83f31d72.png","Open-source Decision Intelligence (DI) Platform",null,"opendilab@pjlab.org.cn","https:\u002F\u002Fgithub.com\u002Fopendilab",596,21,"2026-04-03T03:45:11","Apache-2.0","","未说明",{"notes":90,"python":88,"dependencies":91},"该仓库是一个多模态强化学习（MMRL）研究论文的精选列表（Awesome List），并非一个可直接运行的软件工具或代码库。因此，README 中不包含具体的操作系统、GPU、内存、Python 版本或依赖库等运行环境需求。用户需根据列表中具体论文对应的独立代码仓库来查询各自的运行环境要求。",[],[18],"2026-03-27T02:49:30.150509","2026-04-06T07:15:06.571295",[],[]]