[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-AGI-Edgerunners--LLM-Agents-Papers":3,"tool-AGI-Edgerunners--LLM-Agents-Papers":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",150037,2,"2026-04-10T23:33:47",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[35,15,13,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":72,"owner_avatar_url":73,"owner_bio":74,"owner_company":74,"owner_location":74,"owner_email":74,"owner_twitter":74,"owner_website":74,"owner_url":75,"languages":76,"stars":81,"forks":82,"last_commit_at":83,"license":74,"difficulty_score":84,"env_os":85,"env_gpu":86,"env_ram":86,"env_deps":87,"category_tags":90,"github_topics":91,"view_count":96,"oss_zip_url":74,"oss_zip_packed_at":74,"status":17,"created_at":97,"updated_at":98,"faqs":99,"releases":135},403,"AGI-Edgerunners\u002FLLM-Agents-Papers","LLM-Agents-Papers","A repo lists papers related to LLM based agent","LLM-Agents-Papers 是一个专注于大语言模型智能体领域的学术论文资源库，旨在系统整理和汇总该领域的前沿研究成果。面对 AI 智能体领域爆发式增长的文献数量，研究者往往难以快速筛选和定位关键资料，LLM-Agents-Papers 通过精细化的分类体系，有效解决了信息杂乱、检索困难的问题，帮助用户节省大量筛选文献的时间。\n\n它的内容架构非常全面，涵盖了从基础理论到实际应用的各个环节。具体分类包括智能体的规划能力、记忆机制、反馈与反思、RAG 技术等核心增强技术，同时也深入探讨了多智能体交互、工具使用、自动化工作流以及安全性等关键议题。此外，资源库还收录了数学、医疗、金融、软件工程等多个垂直领域的应用研究，并提供了基准测试和数据集信息。\n\nLLM-Agents-Papers 非常适合 AI 
领域的研究人员、算法工程师以及高校师生使用。无论是想要了解智能体技术全貌的初学者，还是需要追踪最新特定技术细节的专业人士，都能在这里找到高质量的参考资料。它还贴心地推荐了其他同类的优质资源列表，为用户构建了一个立体的知识获取网络。","# LLM-Agents-Papers\n## :writing_hand: Description\nLast Updated Time: 2025\u002F7\u002F12\n\nA repo lists papers related to LLM based agent. Includes\n- [Survey](#Survey)\n- [Technique For Enhancement](#Technique-For-Enhancement)\n  - [Planning](#Planning)\n  - [Memory Mechanism](#Memory-Mechanism)\n  - [Feedback&Reflection](#FeedbackReflection)\n  - [RAG](#RAG)\n  - [Search](#Search)\n- [Interaction](#Interaction)\n  - [Role Playing](#Role-Playing)\n  - [Conversation](#Conversation)\n  - [Game Playing](#Game-Playing)\n  - [Human-Agent Interaction](#Human-Agent-Interaction)\n  - [Tool Usage](#Tool-Usage)\n  - [Simulation](#Simulation)\n- [Application](#Application)\n  - [Math](#Math)\n  - [Chemistry](#Chemistry)\n  - [Biology](#Biology)\n  - [Physics](#Physics)\n  - [Geography](#Geography)\n  - [Art](#Art)\n  - [Medicine](#Medicine)\n  - [Finance](#Finance)\n  - [Software Engineering](#Software-Engineering)\n  - [Research](#Research)\n- [Automation](#Automation)\n  - [Workflow](#Workflow)\n  - [Automatic Evaluation](#Automatic-Evaluation)\n- [Training](#Training)\n  - [Fine tuning](#Fine-tuning)\n  - [RL](#RL)\n  - [DPO](#DPO)\n- [Scaling](#Scaling)\n  - [Single-Agent Framework](#Single-Agent-Framework)\n  - [Multi-Agent System](#Multi-Agent-System)\n- [Stability](#Stability)\n  - [Safety](#Safety)\n  - [Bias](#Bias)\n  - [Hallucination](#Hallucination)\n- [Infrastructure](#Infrastructure)\n  - [Benchmark&Evaluation](#BenchmarkEvaluation)\n  - [Environment&Platform](#EnvironmentPlatform)\n  - [Dataset](#Dataset)\n- [Others](#Others)\n## :yellow_heart: Recommendation\nFor more comprehensive reading, we also recommend other paper lists:\n* [zjunlp\u002FLLMAgentPapers](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLLMAgentPapers): Must-read Papers on Large Language Model Agents.\n* 
[teacherpeterpan\u002Fself-correction-llm-papers](https:\u002F\u002Fgithub.com\u002Fteacherpeterpan\u002Fself-correction-llm-papers): This is a collection of research papers for Self-Correcting Large Language Models with Automated Feedback.\n* [Paitesanshi\u002FLLM-Agent-Survey](https:\u002F\u002Fgithub.com\u002FPaitesanshi\u002FLLM-Agent-Survey): A Survey on LLM-based Autonomous Agents.\n* [woooodyy\u002Fllm-agent-paper-list](https:\u002F\u002Fgithub.com\u002Fwoooodyy\u002Fllm-agent-paper-list): Must-read papers for LLM-based agents.\n* [git-disl\u002Fawesome-LLM-game-agent-papers](https:\u002F\u002Fgithub.com\u002Fgit-disl\u002Fawesome-LLM-game-agent-papers): Must-read papers for LLM-based Game agents.\n## :newspaper: Papers\n### Survey\n- [2025\u002F06\u002F10] **Measuring Data Science Automation: A Survey of Evaluation Tools for AI Assistants and Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.08800) | [code]\n\n- [2025\u002F06\u002F06] **Evolutionary Perspectives on the Evaluation of LLM-Based AI Agents: A Comprehensive Survey** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.11102) | [code]\n\n- [2025\u002F05\u002F27] **Creativity in LLM-based Multi-Agent Systems: A Survey** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.21116) | [code]\n\n- [2025\u002F05\u002F24] **Multi-Party Conversational Agents: A Survey** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.18845) | [code]\n\n- [2025\u002F05\u002F16] **A Survey on the Safety and Security Threats of Computer-Using Agents: JARVIS or Ultron?** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.10924) | [code]\n\n- [2025\u002F05\u002F02] **AI agents may be worth the hype but not the resources (yet): An initial exploration of machine translation quality and costs in three language pairs in the legal and news domains** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.01560) | [code]\n\n- [2025\u002F05\u002F01] **A Survey on Large Language 
Model based Human-Agent Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.00753) | [[code]](https:\u002F\u002Fgithub.com\u002FHenryPengZou\u002FAwesome-LLM-Based-Human-Agent-System-Papers)\n\n- [2025\u002F04\u002F30] **Humanizing LLMs: A Survey of Psychological Measurements with Tools, Datasets, and Human-Agent Applications** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.00049) | [code]\n\n- [2025\u002F04\u002F22] **A Comprehensive Survey in LLM(-Agent) Full Stack Safety: Data, Training and Deployment** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.15585) | [code]\n\n- [2025\u002F04\u002F20] **Meta-Thinking in LLMs via Multi-Agent Reinforcement Learning: A Survey** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.14520) | [code]\n\n- [2025\u002F04\u002F14] **A Survey of Large Language Model-Powered Spatial Intelligence Across Scales: Advances in Embodied Agents, Smart Cities, and Earth Science** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.09848) | [code]\n\n- [2025\u002F04\u002F12] **A Survey of Frontiers in LLM Reasoning: Inference Scaling, Learning to Reason, and Agentic Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.09037) | [code]\n\n- [2025\u002F03\u002F28] **Evaluating LLM-based Agents for Multi-Turn Conversations: A Survey** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.22458) | [code]\n\n- [2025\u002F03\u002F27] **Large Language Model Agent: A Survey on Methodology, Applications and Challenges** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.21460) | [code]\n\n- [2025\u002F03\u002F27] **A Survey on (M)LLM-Based GUI Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.13865) | [code]\n\n- [2025\u002F03\u002F24] **A Survey of Large Language Model Agents for Question Answering** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.19213) | [code]\n\n- [2025\u002F03\u002F20] **Survey on Evaluation of LLM-based Agents** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.16416) | [code]\n\n- [2025\u002F03\u002F13] **LLMs Working in Harmony: A Survey on the Technological Aspects of Building Effective LLM-Based Multi Agent Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.01963) | [code]\n\n- [2025\u002F03\u002F12] **Agentic AI for Scientific Discovery: A Survey of Progress, Challenges, and Future Directions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.08979) | [code]\n\n- [2025\u002F02\u002F20] **Beyond Self-Talk: A Communication-Centric Survey of LLM-Based Multi-Agent Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.14321) | [code]\n\n- [2025\u002F02\u002F18] **Towards a Design Guideline for RPA Evaluation: A Survey of Large Language Model-Based Role-Playing Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.13012) | [code]\n\n- [2025\u002F02\u002F16] **A Survey of LLM-based Agents in Medicine: How far are we from Baymax?** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.11211) | [code]\n\n- [2025\u002F01\u002F15] **Agentic Retrieval-Augmented Generation: A Survey on Agentic RAG** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.09136) | [code]\n\n- [2024\u002F12\u002F23] **A Survey on LLM-based Multi-Agent System: Recent Advances and New Frontiers in Application** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.17481) | [code]\n\n- [2024\u002F12\u002F18] **A Survey on Large Language Model-based Agents for Statistics and Data Science** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.14222) | [code]\n\n- [2024\u002F12\u002F05] **A Survey on Large Language Model-Based Social Agents in Game-Theoretic Scenarios** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.03920) | [code]\n\n- [2024\u002F12\u002F04] **From Individual to Society: A Survey on Social Simulation Driven by Large Language Model-based Agents** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.03563) | [code]\n\n- [2024\u002F11\u002F27] **Large Language Model-Brained GUI Agents: A Survey** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.18279) | [code]\n\n- [2024\u002F09\u002F27] **A Survey on Complex Tasks for Goal-Directed Interactive Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.18538) | [code]\n\n- [2024\u002F09\u002F13] **Agents in Software Engineering: Survey, Landscape, and Vision** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.09030) | [code]\n\n- [2024\u002F09\u002F04] **A Survey on Emergent Language** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.02645) | [code]\n\n- [2024\u002F08\u002F05] **From LLMs to LLM-based Agents for Software Engineering: A Survey of Current, Challenges and Future** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.02479) | [code]\n\n- [2024\u002F07\u002F26] **Large Language Model Agent in Financial Trading: A Survey** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.06361) | [code]\n\n- [2024\u002F06\u002F03] **Two Tales of Persona in LLMs: A Survey of Role-Playing and Personalization** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.01171) | [[code]](https:\u002F\u002Fgithub.com\u002Fmiulab\u002Fpersonallm-survey)\n\n- [2024\u002F06\u002F01] **Towards Rationality in Language and Multimodal Agents: A Survey** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.00252) | [code]\n\n- [2024\u002F04\u002F17] **Advancing Social Intelligence in AI Agents: Technical Challenges and Open Questions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.11023) | [code]\n\n- [2024\u002F04\u002F02] **A Survey on Large Language Model-Based Game Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.02039) | [[code]](https:\u002F\u002Fgithub.com\u002Fgit-disl\u002Fawesome-LLM-game-agent-papers)\n\n- [2024\u002F03\u002F26] **Leveraging Large Language Models in 
Human-Robot Interaction: A Critical Analysis of Potential and Pitfalls** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.00693) | [code]\n\n- [2024\u002F03\u002F07] **Promising and worth-to-try future directions for advancing state-of-the-art surrogates methods of agent-based models in social and health computational sciences** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.04417) | [code]\n\n- [2024\u002F02\u002F28] **Large Language Models and Games: A Survey and Roadmap** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.18659) | [code]\n\n- [2024\u002F02\u002F28] **A Survey on Recent Advances in LLM-Based Multi-turn Dialogue Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.18013) | [code]\n\n- [2024\u002F02\u002F05] **Understanding the planning of LLM agents: A survey** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.02716) | [code]\n\n- [2024\u002F01\u002F01] **If LLM Is the Wizard, Then Code Is the Wand: A Survey on How Code Empowers Large Language Models to Serve as Intelligent Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.00812) | [code]\n\n- [2023\u002F12\u002F31] **A Survey of Personality, Persona, and Profile in Conversational Agents and Chatbots** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.00609) | [code]\n\n- [2023\u002F12\u002F19] **Large Language Models Empowered Agent-based Modeling and Simulation: A Survey and Perspectives** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.11970) | [code]\n\n- [2023\u002F09\u002F14] **The Rise and Potential of Large Language Model Based Agents: A Survey** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.07864) | [code]\n\n- [2023\u002F08\u002F22] **A Survey on Large Language Model based Autonomous Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.11432) | [code]\n\n- [2023\u002F06\u002F27] **Next Steps for Human-Centered Generative AI: A Technical Perspective** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.15774) | [code]\n\n---\n### Technique For Enhancement\n#### Planning\n- [2025\u002F06\u002F30] **Thought-Augmented Planning for LLM-Powered Interactive Recommender Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.23485) | [code]\n\n- [2025\u002F06\u002F24] **NaviAgent: Bilevel Planning on Tool Dependency Graphs for Function Calling** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.19500) | [code]\n\n- [2025\u002F06\u002F10] **Improving LLM Agent Planning with In-Context Learning via Atomic Fact Augmentation and Lookahead Search** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.09171) | [code]\n\n- [2025\u002F06\u002F06] **MAPLE: Multi-Agent Adaptive Planning with Long-Term Memory for Table Reasoning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.05813) | [code]\n\n- [2025\u002F05\u002F22] **T1: A Tool-Oriented Conversational Dataset for Multi-Turn Agentic Planning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.16986) | [code]\n\n- [2025\u002F05\u002F02] **PIPA: A Unified Evaluation Protocol for Diagnosing Interactive Planning Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.01592) | [code]\n\n- [2025\u002F04\u002F15] **GraphicBench: A Planning Benchmark for Graphic Design with Language Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.11571) | [code]\n\n- [2025\u002F03\u002F12] **Plan-and-Act: Improving Planning of Agents for Long-Horizon Tasks** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.09572) | [code]\n\n- [2025\u002F03\u002F04] **MPO: Boosting LLM Agents with Meta Plan Optimization** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.02682) | [code]\n\n- [2025\u002F03\u002F03] **Improving Retrospective Language Agents via Joint Policy Gradient Optimization** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.01490) | [code]\n\n- [2025\u002F02\u002F08] **CODESIM: 
Multi-Agent Code Generation and Problem Solving through Simulation-Driven Planning and Debugging** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.05664) | [code]\n\n- [2025\u002F02\u002F06] **Robotouille: An Asynchronous Planning Benchmark for LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.05227) | [code]\n\n- [2025\u002F01\u002F27] **MADP: Multi-Agent Deductive Planning for Enhanced Cognitive-Behavioral Mental Health Question Answer** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.15826) | [code]\n\n- [2025\u002F01\u002F14] **Talk to Right Specialists: Routing and Planning in Multi-agent System for Question Answering** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.07813) | [code]\n\n- [2024\u002F12\u002F30] **Plancraft: an evaluation dataset for planning with LLM agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.21033) | [code]\n\n- [2024\u002F12\u002F28] **Efficient Multi-Agent Collaboration with Tool Use for Online Planning in Complex Table Question Answering** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.20145) | [code]\n\n- [2024\u002F12\u002F13] **Script-Based Dialog Policy Planning for LLM-Powered Conversational Agents: A Basic Architecture for an &#34;AI Therapist&#34;** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.15242) | [code]\n\n- [2024\u002F11\u002F13] **One STEP at a time: Language Agents are Stepwise Planners** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.08432) | [code]\n\n- [2024\u002F11\u002F05] **Benchmarking Multimodal Retrieval Augmented Generation with Dynamic VQA Dataset and Self-adaptive Planning Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.02937) | [code]\n\n- [2024\u002F10\u002F12] **CAMPHOR: Collaborative Agents for Multi-input Planning and High-Order Reasoning On Device** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.09407) | [code]\n\n- [2024\u002F10\u002F01] **Self-controller: 
Controlling LLMs with Multi-round Step-by-step Self-awareness** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.00359) | [code]\n\n- [2024\u002F09\u002F30] **Interactive Speculative Planning: Enhance Agent Efficiency through Co-design of System and User Interface** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.00079) | [code]\n\n- [2024\u002F09\u002F28] **SELP: Generating Safe and Efficient Task Plans for Robot Agents with Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.19471) | [code]\n\n- [2024\u002F09\u002F25] **MSI-Agent: Incorporating Multi-Scale Insight into Embodied Agents for Superior Planning and Decision-Making** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.16686) | [code]\n\n- [2024\u002F08\u002F15] **VerilogCoder: Autonomous Verilog Coding Agents with Graph-based Planning and Abstract Syntax Tree (AST)-based Waveform Tracing Tool** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.08927) | [code]\n\n- [2024\u002F08\u002F12] **Towards Autonomous Agents: Adaptive-planning, Reasoning, and Acting in Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.06458) | [code]\n\n- [2024\u002F08\u002F01] **AgentGen: Enhancing Planning Abilities for Large Language Model based Agent via Environment and Task Generation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.00764) | [code]\n\n- [2024\u002F07\u002F04] **Controllable Conversations: Planning-Based Dialogue Agent with Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.03884) | [code]\n\n- [2024\u002F06\u002F17] **RePrompt: Planning by Automatic Prompt Engineering for Large Language Models Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.11132) | [code]\n\n- [2024\u002F06\u002F09] **A Review of Prominent Paradigms for LLM-Based Agents: Tool Use (Including RAG), Planning, and Feedback Learning** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.05804) | [code]\n\n- [2024\u002F06\u002F06] **Tool-Planner: Task Planning with Clusters across Multiple Tools** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.03807) | [[code]](https:\u002F\u002Fgithub.com\u002FOceannTwT\u002FTool-Planner)\n\n- [2024\u002F05\u002F28] **A Human-Like Reasoning Framework for Multi-Phases Planning Task with Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.18208) | [code]\n\n- [2024\u002F05\u002F27] **REVECA: Adaptive Planning and Trajectory-based Validation in Cooperative Language Agents using Information Relevance and Relative Proximity** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.16751) | [code]\n\n- [2024\u002F04\u002F21] **Socratic Planner: Inquiry-Based Zero-Shot Planning for Embodied Instruction Following** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.15190) | [code]\n\n- [2024\u002F04\u002F17] **The Landscape of Emerging AI Agent Architectures for Reasoning, Planning, and Tool Calling: A Survey** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.11584) | [code]\n\n- [2024\u002F03\u002F11] **Strength Lies in Differences! Improving Strategy Planning for Non-collaborative Dialogues via Diversified User Simulation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.06769) | [code]\n\n- [2024\u002F03\u002F10] **TRAD: Enhancing LLM Agents with Step-Wise Thought Retrieval and Aligned Decision** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.06221) | [code]\n\n- [2024\u002F03\u002F05] **KnowAgent: Knowledge-Augmented Planning for LLM-Based Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.03101) | [code]\n\n- [2024\u002F02\u002F29] **PlanGPT: Enhancing Urban Planning with Tailored Language Model and Efficient Retrieval** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.19273) | [code]\n\n- [2024\u002F02\u002F18] **What&#39;s the Plan? 
Evaluating and Developing Planning-Aware Techniques for Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.11489) | [code]\n\n- [2024\u002F02\u002F18] **PreAct: Prediction Enhances Agent&#39;s Planning Ability** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.11534) | [code]\n\n- [2024\u002F02\u002F16] **When is Tree Search Useful for LLM Planning? It Depends on the Discriminator** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.10890) | [[code]](https:\u002F\u002Fgithub.com\u002Fosu-nlp-group\u002Fllm-planning-eval)\n\n- [2024\u002F02\u002F15] **TDAG: A Multi-Agent Framework based on Dynamic Task Decomposition and Agent Generation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.10178) | [code]\n\n- [2024\u002F02\u002F09] **Introspective Planning: Aligning Robots&#39; Uncertainty with Inherent Task Ambiguity** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.06529) | [code]\n\n- [2024\u002F02\u002F06] **RAP: Retrieval-Augmented Planning with Contextual Memory for Multimodal LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.03610) | [code]\n\n- [2024\u002F02\u002F02] **TravelPlanner: A Benchmark for Real-World Planning with Language Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.01622) | [[code]](https:\u002F\u002Fgithub.com\u002FOSU-NLP-Group\u002FTravelPlanner)\n\n- [2024\u002F01\u002F10] **AutoAct: Automatic Agent Learning from Scratch for QA via Self-Planning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.05268) | [code]\n\n- [2023\u002F11\u002F19] **TPTU-v2: Boosting Task Planning and Tool Usage of Large Language Model-based Agents in Real-world Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.11315) | [code]\n\n- [2023\u002F10\u002F12] **Tree-Planner: Efficient Close-loop Task Planning with Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.08582) | [code]\n\n- [2023\u002F10\u002F09] 
**Put Your Money Where Your Mouth Is: Evaluating Strategic Planning and Execution of LLM Agents in an Auction Arena** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.05746) | [code]\n\n- [2023\u002F08\u002F07] **TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.03427) | [code]\n\n- [2023\u002F08\u002F01] **SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.00436) | [code]\n\n- [2023\u002F05\u002F26] **AdaPlanner: Adaptive Planning from Feedback with Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.16653) | [code]\n\n- [2023\u002F05\u002F24] **Reasoning with Language Model is Planning with World Model** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14992) | [code]\n\n- [2023\u002F05\u002F24] **Leveraging Pre-trained Large Language Models to Construct and Utilize World Models for Model-based Task Planning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14909) | [[code]](https:\u002F\u002Fgithub.com\u002FGuanSuns\u002FLLMs-World-Models-for-Planning)\n\n- [2023\u002F03\u002F29] **Skill Reinforcement Learning and Planning for Open-World Long-Horizon Tasks** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.16563) | [code]\n\n- [2023\u002F02\u002F03] **Describe, Explain, Plan and Select: Interactive Planning with Large Language Models Enables Open-World Multi-Task Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.01560) | [code]\n\n- [2022\u002F12\u002F08] **LLM-Planner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.04088) | [code]\n\n#### Memory Mechanism\n- [2025\u002F07\u002F10] **MIRIX: Multi-Agent Memory System for LLM-Based Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.07957) | [code]\n\n- 
[2025\u002F07\u002F07] **Evaluating Memory in LLM Agents via Incremental Multi-Turn Interactions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.05257) | [code]\n\n- [2025\u002F07\u002F03] **MemAgent: Reshaping Long-Context LLM with Multi-Conv RL-based Memory Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.02259) | [code]\n\n- [2025\u002F06\u002F30] **Ella: Embodied Social Agents with Lifelong Memory** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.24019) | [code]\n\n- [2025\u002F06\u002F30] **State and Memory is All You Need for Robust and Reliable AI Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.00081) | [code]\n\n- [2025\u002F06\u002F20] **MemBench: Towards More Comprehensive Evaluation on the Memory of LLM-based Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.21605) | [code]\n\n- [2025\u002F06\u002F18] **MEM1: Learning to Synergize Memory and Reasoning for Efficient Long-Horizon Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.15841) | [code]\n\n- [2025\u002F06\u002F17] **Cost-Efficient Serving of LLM Agents via Test-Time Plan Caching** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.14852) | [code]\n\n- [2025\u002F06\u002F09] **G-Memory: Tracing Hierarchical Memory for Multi-Agent Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.07398) | [code]\n\n- [2025\u002F06\u002F07] **Contextual Experience Replay for Self-Improvement of Language Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.06698) | [code]\n\n- [2025\u002F06\u002F06] **MAPLE: Multi-Agent Adaptive Planning with Long-Term Memory for Table Reasoning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.05813) | [code]\n\n- [2025\u002F05\u002F26] **Towards Multi-Granularity Memory Association and Selection for Long-Term Conversational Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.19549) | [code]\n\n- [2025\u002F05\u002F26] 
**Task Memory Engine: Spatial Memory for Robust Multi-Step LLM Agents** | [[paper]](https://arxiv.org/abs/2505.19436) | [code]

- [2025/05/23] **Collaborative Memory: Multi-User Memory Sharing in LLM Agents with Dynamic Access Control** | [[paper]](https://arxiv.org/abs/2505.18279) | [code]

- [2025/05/22] **Embodied Agents Meet Personalization: Exploring Memory Utilization for Personalized Assistance** | [[paper]](https://arxiv.org/abs/2505.16348) | [code]

- [2025/04/30] **LLM-Empowered Embodied Agent for Memory-Augmented Task Planning in Household Robotics** | [[paper]](https://arxiv.org/abs/2504.21716) | [code]

- [2025/04/28] **Mem0: Building Production-Ready AI Agents with Scalable Long-Term Memory** | [[paper]](https://arxiv.org/abs/2504.19413) | [code]

- [2025/04/11] **Task Memory Engine (TME): A Structured Memory Framework with Graph-Aware Extensions for Multi-Step LLM Agent Tasks** | [[paper]](https://arxiv.org/abs/2504.08525) | [code]

- [2025/03/27] **MemInsight: Autonomous Memory Augmentation for LLM Agents** | [[paper]](https://arxiv.org/abs/2503.21760) | [code]

- [2025/03/25] **MARS: Memory-Enhanced Agents with Reflective Self-improvement** | [[paper]](https://arxiv.org/abs/2503.19271) | [code]

- [2025/03/11] **In Prospect and Retrospect: Reflective Memory Management for Long-term Personalized Dialogue Agents** | [[paper]](https://arxiv.org/abs/2503.08026) | [code]

- [2025/02/17] **A-MEM: Agentic Memory for LLM Agents** | [[paper]](https://arxiv.org/abs/2502.12110) | [code]

- [2025/02/08] **On Memory Construction and Retrieval for Personalized Conversational Agents** | [[paper]](https://arxiv.org/abs/2502.05589) | [code]

- [2025/01/20] **Zep: A Temporal Knowledge Graph Architecture for Agent Memory** | [[paper]](https://arxiv.org/abs/2501.13956) | [code]

- [2025/01/15] **Doc-Guided Sent2Sent++: A Sent2Sent++ Agent with Doc-Guided memory for Document-level Machine Translation** | [[paper]](https://arxiv.org/abs/2501.08523) | [code]

- [2024/12/17] **On the Structural Memory of LLM Agents** | [[paper]](https://arxiv.org/abs/2412.15266) | [code]

- [2024/12/17] **Memory-Augmented Agent Training for Business Document Understanding** | [[paper]](https://arxiv.org/abs/2412.15274) | [code]

- [2024/10/10] **DelTA: An Online Document-Level Translation Agent Based on Multi-Level Memory** | [[paper]](https://arxiv.org/abs/2410.08143) | [code]

- [2024/09/28] **Crafting Personalized Agents through Retrieval-Augmented Generation on Editable Memory Graphs** | [[paper]](https://arxiv.org/abs/2409.19401) | [code]

- [2024/09/11] **Agent Workflow Memory** | [[paper]](https://arxiv.org/abs/2409.07429) | [code]

- [2024/09/01] **Self-evolving Agents with reflective and memory-augmented abilities** | [[paper]](https://arxiv.org/abs/2409.00872) | [code]

- [2024/08/18] **HiAgent: Hierarchical Working Memory Management for Solving Long-Horizon Agent Tasks with Large Language Model** | [[paper]](https://arxiv.org/abs/2408.09559) | [code]

- [2024/08/07] **Optimus-1: Hybrid Multimodal Memory Empowered Agents Excel in Long-Horizon Tasks** | [[paper]](https://arxiv.org/abs/2408.03615) | [code]

- [2024/05/29] **Toward Conversational Agents with Context and Time Sensitive Long-term Memory** | [[paper]](https://arxiv.org/abs/2406.00057) | [[code]](https://github.com/Zyphra/TemporalMemoryDataset)

- [2024/04/15] **Memory Sharing for Large Language Model based Agents** | [[paper]](https://arxiv.org/abs/2404.09982) | [[code]](https://github.com/GHupppp/MemorySharingLLM)

- [2024/02/19] **Compress to Impress: Unleashing the Potential of Compressive Memory in Real-World Long-Term Conversations** | [[paper]](https://arxiv.org/abs/2402.11975) | [code]

- [2024/02/07] **InfLLM: Training-Free Long-Context Extrapolation for LLMs with an Efficient Context Memory** | [[paper]](https://arxiv.org/abs/2402.04617) | [code]

- [2024/02/06] **RAP: Retrieval-Augmented Planning with Contextual Memory for Multimodal LLM Agents** | [[paper]](https://arxiv.org/abs/2402.03610) | [code]

- [2024/01/05] **From LLM to Conversational Agent: A Memory Enhanced Architecture with Fine-Tuning of Large Language Models** | [[paper]](https://arxiv.org/abs/2401.02777) | [code]

- [2023/12/22] **Empowering Working Memory for Large Language Model Agents** | [[paper]](https://arxiv.org/abs/2312.17259) | [code]

- [2023/12/22] **Personalized Large Language Model Assistant with Evolving Conditional Memory** | [[paper]](https://arxiv.org/abs/2312.17257) | [code]

- [2023/11/10] **JARVIS-1: Open-World Multi-task Agents with Memory-Augmented Multimodal Language Models** | [[paper]](https://arxiv.org/abs/2311.05997) | [[code]](https://github.com/CraftJarvis/JARVIS-1)

- [2023/06/06] **ChatDB: Augmenting LLMs with Databases as Their Symbolic Memory** | [[paper]](https://arxiv.org/abs/2306.03901) | [code]

- [2023/05/23] **RET-LLM: Towards a General Read-Write Memory for Large Language Models** | [[paper]](https://arxiv.org/abs/2305.14322) | [code]

- [2023/05/17] **MemoryBank: Enhancing Large Language Models with Long-Term Memory** | [[paper]](https://arxiv.org/abs/2305.10250) | [code]

- [2023/05/02] **The Role of Summarization in Generative Agents: A Preliminary Perspective** | [[paper]](https://arxiv.org/abs/2305.01253) | [code]

- [2023/05/01] **Learning to Reason and Memorize with Self-Notes** | [[paper]](https://arxiv.org/abs/2305.00833) | [code]

- [2023/04/26] **Enhancing Large Language Model with Self-Controlled Memory Framework** | [[paper]](https://arxiv.org/abs/2304.13343) | [code]

- [2023/04/21] **Emergent and Predictable Memorization in Large Language Models** | [[paper]](https://arxiv.org/abs/2304.11158) | [code]

#### Feedback&Reflection
- [2025/07/08] **Conditional Multi-Stage Failure Recovery for Embodied Agents** | [[paper]](https://arxiv.org/abs/2507.06016) | [code]

- [2025/06/10] **Reinforce LLM Reasoning through Multi-Agent Reflection** | [[paper]](https://arxiv.org/abs/2506.08379) | [code]

- [2025/06/04] **Debate, Reflect, and Distill: Multi-Agent Feedback with Tree-Structured Preference Optimization for Efficient Language Model Enhancement** | [[paper]](https://arxiv.org/abs/2506.03541) | [code]

- [2025/06/04] **Graph Counselor: Adaptive Graph Exploration via Multi-Agent Synergy to Enhance LLM Reasoning** | [[paper]](https://arxiv.org/abs/2506.03939) | [code]

- [2025/06/03] **Mitigating Manipulation and Enhancing Persuasion: A Reflective Multi-Agent Approach for Legal Argument Generation** | [[paper]](https://arxiv.org/abs/2506.02992) | [code]

- [2025/05/22] **Optimizing LLM-Based Multi-Agent System with Textual Feedback: A Case Study on Software Development** | [[paper]](https://arxiv.org/abs/2505.16086) | [code]

- [2025/05/21] **ReflAct: World-Grounded Decision Making in LLM Agents via Goal-State Reflection** | [[paper]](https://arxiv.org/abs/2505.15182) | [code]

- [2025/05/21] **Aligning Dialogue Agents with Global Feedback via Large Language Model Reward Decomposition** | [[paper]](https://arxiv.org/abs/2505.15922) | [code]

- [2025/05/06] **FRAME: Feedback-Refined Agent Methodology for Enhancing Medical Research Insights** | [[paper]](https://arxiv.org/abs/2505.04649) | [code]

- [2025/04/26] **Stealing Creator's Workflow: A Creator-Inspired Agentic Framework with Iterative Feedback Loop for Improved Scientific Short-form Generation** | [[paper]](https://arxiv.org/abs/2504.18805) | [code]

- [2025/03/20] **The Lighthouse of Language: Enhancing LLM Agents via Critique-Guided Improvement** | [[paper]](https://arxiv.org/abs/2503.16024) | [code]

- [2025/03/11] **In Prospect and Retrospect: Reflective Memory Management for Long-term Personalized Dialogue Agents** | [[paper]](https://arxiv.org/abs/2503.08026) | [code]

- [2025/03/04] **Generator-Assistant Stepwise Rollback Framework for Large Language Model Agent** | [[paper]](https://arxiv.org/abs/2503.02519) | [code]

- [2025/03/03] **Improving Retrospective Language Agents via Joint Policy Gradient Optimization** | [[paper]](https://arxiv.org/abs/2503.01490) | [code]

- [2025/02/20] **STeCa: Step-level Trajectory Calibration for LLM Agent Learning** | [[paper]](https://arxiv.org/abs/2502.14276) | [[code]](https://github.com/WangHanLinHenry/STeCa)

- [2025/02/17] **Table-Critic: A Multi-Agent Framework for Collaborative Criticism and Refinement in Table Reasoning** | [[paper]](https://arxiv.org/abs/2502.11799) | [code]

- [2025/02/17] **A Study on Leveraging Search and Self-Feedback for Agent Reasoning** | [[paper]](https://arxiv.org/abs/2502.12094) | [code]

- [2025/02/03] **PlotGen: Multi-Agent LLM-based Scientific Data Visualization via Multimodal Feedback** | [[paper]](https://arxiv.org/abs/2502.00988) | [code]

- [2025/01/26] **Large Language Models as Theory of Mind Aware Generative Agents with Counterfactual Reflection** | [[paper]](https://arxiv.org/abs/2501.15355) | [code]

- [2025/01/23] **AgentRec: Agent Recommendation Using Sentence Embeddings Aligned to Human Feedback** | [[paper]](https://arxiv.org/abs/2501.13333) | [code]

- [2025/01/08] **InfiGUIAgent: A Multimodal Generalist GUI Agent with Native Reasoning and Reflection** | [[paper]](https://arxiv.org/abs/2501.04575) | [code]

- [2024/12/31] **Enhancing LLM Reasoning with Multi-Path Collaborative Reactive and Reflection agents** | [[paper]](https://arxiv.org/abs/2501.00430) | [code]

- [2024/12/22] **A Multi-AI Agent System for Autonomous Optimization of Agentic AI Solutions via Iterative Refinement and LLM-Driven Feedback Loops** | [[paper]](https://arxiv.org/abs/2412.17149) | [code]

- [2024/11/29] **Training Agents with Weakly Supervised Feedback from Large Language Models** | [[paper]](https://arxiv.org/abs/2411.19547) | [code]

- [2024/11/21] **Enhancing LLMs for Power System Simulations: A Feedback-driven Multi-agent Framework** | [[paper]](https://arxiv.org/abs/2411.16707) | [code]

- [2024/11/11] **Using Generative AI and Multi-Agents to Provide Automatic Feedback** | [[paper]](https://arxiv.org/abs/2411.07407) | [code]

- [2024/11/04] **Positive Experience Reflection for Agents in Interactive Text Environments** | [[paper]](https://arxiv.org/abs/2411.02223) | [code]

- [2024/10/29] **Enhancing Financial Question Answering with a Multi-Agent Reflection Framework** | [[paper]](https://arxiv.org/abs/2410.21741) | [code]

- [2024/10/28] **CRAT: A Multi-Agent Framework for Causality-Enhanced Reflective and Retrieval-Augmented Translation with Large Language Models** | [[paper]](https://arxiv.org/abs/2410.21067) | [code]

- [2024/10/25] **OpenWebVoyager: Building Multimodal Web Agents via Iterative Real-World Exploration, Feedback and Optimization** | [[paper]](https://arxiv.org/abs/2410.19609) | [code]

- [2024/10/23] **ReflecTool: Towards Reflection-Aware Tool-Augmented Clinical Agents** | [[paper]](https://arxiv.org/abs/2410.17657) | [code]

- [2024/10/20] **Training Language Models to Critique With Multi-agent Feedback** | [[paper]](https://arxiv.org/abs/2410.15287) | [code]

- [2024/10/16] **PRefLexOR: Preference-based Recursive Language Modeling for Exploratory Optimization of Reasoning and Agentic Thinking** | [[paper]](https://arxiv.org/abs/2410.12375) | [code]

- [2024/10/08] **DataEnvGym: Data Generation Agents in Teacher Environments with Student Feedback** | [[paper]](https://arxiv.org/abs/2410.06215) | [code]

- [2024/10/02] **ExACT: Teaching AI Agents to Explore with Reflective-MCTS and Exploratory Learning** | [[paper]](https://arxiv.org/abs/2410.02052) | [code]

- [2024/10/02] **RGD: Multi-LLM Based Agent Debugger via Refinement and Generation Guidance** | [[paper]](https://arxiv.org/abs/2410.01242) | [code]

- [2024/09/18] **MAgICoRe: Multi-Agent, Iterative, Coarse-to-Fine Refinement for Reasoning** | [[paper]](https://arxiv.org/abs/2409.12147) | [code]

- [2024/09/05] **E2CL: Exploration-based Error Correction Learning for Embodied Agents** | [[paper]](https://arxiv.org/abs/2409.03256) | [[code]](https://github.com/WangHanLinHenry/E2CL)

- [2024/09/01] **Self-evolving Agents with reflective and memory-augmented abilities** | [[paper]](https://arxiv.org/abs/2409.00872) | [code]

- [2024/08/30] **Tool-Assisted Agent on SQL Inspection and Refinement in Real-World Scenarios** | [[paper]](https://arxiv.org/abs/2408.16991) | [code]

- [2024/08/15] **MAG-SQL: Multi-Agent Generative Approach with Soft Schema Linking and Iterative Sub-SQL Refinement for Text-to-SQL** | [[paper]](https://arxiv.org/abs/2408.07930) | [code]

- [2024/07/25] **Recursive Introspection: Teaching Language Model Agents How to Self-Improve** | [[paper]](https://arxiv.org/abs/2407.18219) | [code]

- [2024/06/09] **A Review of Prominent Paradigms for LLM-Based Agents: Tool Use (Including RAG), Planning, and Feedback Learning** | [[paper]](https://arxiv.org/abs/2406.05804) | [code]

- [2024/06/05] **LLM-based Rewriting of Inappropriate Argumentation using Reinforcement Learning from Machine Feedback** | [[paper]](https://arxiv.org/abs/2406.03363) | [code]

- [2024/06/03] **Re-ReST: Reflection-Reinforced Self-Training for Language Agents** | [[paper]](https://arxiv.org/abs/2406.01495) | [[code]](https://github.com/PlusLabNLP/Re-ReST)

- [2024/03/18] **QueryAgent: A Reliable and Efficient Reasoning Framework with Environmental Feedback-based Self-Correction** | [[paper]](https://arxiv.org/abs/2403.11886) | [code]

- [2024/03/17] **Improving Dialogue Agents by Decomposing One Global Explicit Annotation with Local Implicit Multimodal Feedback** | [[paper]](https://arxiv.org/abs/2403.11330) | [code]

- [2024/03/08] **ChatASU: Evoking LLM's Reflexion to Truly Understand Aspect Sentiment in Dialogues** | [[paper]](https://arxiv.org/abs/2403.05326) | [code]

- [2024/03/04] **Trial and Error: Exploration-Based Trajectory Optimization for LLM Agents** | [[paper]](https://arxiv.org/abs/2403.02502) | [code]

- [2024/02/27] **Agent-Pro: Learning to Evolve via Policy-Level Reflection and Optimization** | [[paper]](https://arxiv.org/abs/2402.17574) | [code]

- [2024/02/26] **SelectIT: Selective Instruction Tuning for LLMs via Uncertainty-Aware Self-Reflection** | [[paper]](https://arxiv.org/abs/2402.16705) | [code]

- [2024/02/22] **Mirror: A Multiple-perspective Self-Reflection Method for Knowledge-rich Reasoning** | [[paper]](https://arxiv.org/abs/2402.14963) | [code]

- [2024/02/19] **A Critical Evaluation of AI Feedback for Aligning Large Language Models** | [[paper]](https://arxiv.org/abs/2402.12366) | [code]

- [2024/02/06] **AnyTool: Self-Reflective, Hierarchical Agents for Large-Scale API Calls** | [[paper]](https://arxiv.org/abs/2402.04253) | [[code]](https://github.com/dyabel/anytool)

- [2024/02/02] **StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback** | [[paper]](https://arxiv.org/abs/2402.01391) | [code]

- [2023/11/14] **The ART of LLM Refinement: Ask, Refine, and Trust** | [[paper]](https://arxiv.org/abs/2311.07961) | [code]

- [2023/10/31] **Learning From Mistakes Makes LLM Better Reasoner** | [[paper]](https://arxiv.org/abs/2310.20689) | [code]

- [2023/10/12] **A Zero-Shot Language Agent for Computer Control with Structured Reflection** | [[paper]](https://arxiv.org/abs/2310.08740) | [code]

- [2023/07/27] **PanGu-Coder2: Boosting Large Language Models for Code with Ranking Feedback** | [[paper]](https://arxiv.org/abs/2307.14936) | [code]

- [2023/05/22] **Making Language Models Better Tool Learners with Execution Feedback** | [[paper]](https://arxiv.org/abs/2305.13068) | [code]

- [2023/05/17] **Improving Language Model Negotiation with Self-Play and In-Context Learning from AI Feedback** | [[paper]](https://arxiv.org/abs/2305.10142) | [code]

- [2023/04/21] **Improving Grounded Language Understanding in a Collaborative Environment by Interacting with Agents Through Help Feedback** | [[paper]](https://arxiv.org/abs/2304.10750) | [code]

- [2023/04/11] **Teaching Large Language Models to Self-Debug** | [[paper]](https://arxiv.org/abs/2304.05128) | [code]

- [2023/03/30] **Self-Refine: Iterative Refinement with Self-Feedback** | [[paper]](https://arxiv.org/abs/2303.17651) | [code]

#### RAG
- [2025/07/09] **Multi-Agent Retrieval-Augmented Framework for Evidence-Based Counterspeech Against Health Misinformation** | [[paper]](https://arxiv.org/abs/2507.07307) | [code]

- [2025/07/04] **AI-VaxGuide: An Agentic RAG-Based LLM for Vaccination Decisions** | [[paper]](https://arxiv.org/abs/2507.03493) | [code]

- [2025/06/28] **Knowledge Augmented Finetuning Matters in both RAG and Agent Based Dialog Systems** | [[paper]](https://arxiv.org/abs/2506.22852) | [code]

- [2025/06/27] **ARAG: Agentic Retrieval Augmented Generation for Personalized Recommendation** | [[paper]](https://arxiv.org/abs/2506.21931) | [code]

- [2025/06/12] **CIIR@LiveRAG 2025: Optimizing Multi-Agent Retrieval Augmented Generation through Self-Training** | [[paper]](https://arxiv.org/abs/2506.10844) | [code]

- [2025/06/04] **Graph Counselor: Adaptive Graph Exploration via Multi-Agent Synergy to Enhance LLM Reasoning** | [[paper]](https://arxiv.org/abs/2506.03939) | [code]

- [2025/05/28] **Agent-UniRAG: A Trainable Open-Source LLM Agent Framework for Unified Retrieval-Augmented Generation Systems** | [[paper]](https://arxiv.org/abs/2505.22571) | [code]

- [2025/05/26] **MA-RAG: Multi-Agent Retrieval-Augmented Generation via Collaborative Chain-of-Thought Reasoning** | [[paper]](https://arxiv.org/abs/2505.20096) | [code]

- [2025/05/22] **O$^2$-Searcher: A Searching-based Agent Model for Open-Domain Open-Ended Question Answering** | [[paper]](https://arxiv.org/abs/2505.16582) | [code]

- [2025/05/22] **Personalizing Student-Agent Interactions Using Log-Contextualized Retrieval Augmented Generation (RAG)** | [[paper]](https://arxiv.org/abs/2505.17238) | [code]

- [2025/05/22] **Search Wisely: Mitigating Sub-optimal Agentic Searches By Reducing Uncertainty** | [[paper]](https://arxiv.org/abs/2505.17281) | [code]

- [2025/05/21] **InfoDeepSeek: Benchmarking Agentic Information Seeking for Retrieval-Augmented Generation** | [[paper]](https://arxiv.org/abs/2505.15872) | [code]

- [2025/05/13] **ALOHA: Empowering Multilingual Agent for University Orientation with Hierarchical Retrieval** | [[paper]](https://arxiv.org/abs/2505.08130) | [code]

- [2025/05/12] **Reinforced Internal-External Knowledge Synergistic Reasoning for Efficient Adaptive Search Agent** | [[paper]](https://arxiv.org/abs/2505.07596) | [code]

- [2025/04/30] **Talk Before You Retrieve: Agent-Led Discussions for Better RAG in Medical QA** | [[paper]](https://arxiv.org/abs/2504.21252) | [code]

- [2025/04/24] **A RAG-Based Multi-Agent LLM System for Natural Hazard Resilience and Adaptation** | [[paper]](https://arxiv.org/abs/2504.17200) | [code]

- [2025/04/15] **Towards Automated Safety Requirements Derivation Using Agent-based RAG** | [[paper]](https://arxiv.org/abs/2504.11243) | [code]

- [2025/04/13] **HM-RAG: Hierarchical Multi-Agent Multimodal Retrieval Augmented Generation** | [[paper]](https://arxiv.org/abs/2504.12330) | [code]

- [2025/04/11] **TP-RAG: Benchmarking Retrieval-Augmented Large Language Model Agents for Spatiotemporal-Aware Travel Planning** | [[paper]](https://arxiv.org/abs/2504.08694) | [code]

- [2025/04/10] **CollEX -- A Multimodal Agentic RAG System Enabling Interactive Exploration of Scientific Collections** | [[paper]](https://arxiv.org/abs/2504.07643) | [code]

- [2025/03/18] **Retrieval-Augmented Simulacra: Generative Agents for Up-to-date and Knowledge-Adaptive Simulations** | [[paper]](https://arxiv.org/abs/2503.14620) | [code]

- [2025/03/14] **RAG-KG-IL: A Multi-Agent Hybrid Framework for Reducing Hallucinations and Enhancing LLM Reasoning through RAG and Incremental Knowledge Graph Learning Integration** | [[paper]](https://arxiv.org/abs/2503.13514) | [code]

- [2025/03/01] **EXCLAIM: An Explainable Cross-Modal Agentic System for Misinformation Detection with Hierarchical Retrieval** | [[paper]](https://arxiv.org/abs/2504.06269) | [code]

- [2025/02/25] **ViDoRAG: Visual Document Retrieval-Augmented Generation via Dynamic Iterative Reasoning Agents** | [[paper]](https://arxiv.org/abs/2502.18017) | [code]

- [2025/02/19] **RAG-Gym: Optimizing Reasoning and Search Agents with Process Supervision** | [[paper]](https://arxiv.org/abs/2502.13957) | [code]

- [2025/02/08] **On Memory Construction and Retrieval for Personalized Conversational Agents** | [[paper]](https://arxiv.org/abs/2502.05589) | [code]

- [2025/02/06] **Enhancing Online Learning Efficiency Through Heterogeneous Resource Integration with a Multi-Agent RAG System** | [[paper]](https://arxiv.org/abs/2502.03948) | [code]

- [2025/01/25] **Improving Retrieval-Augmented Generation through Multi-Agent Reinforcement Learning** | [[paper]](https://arxiv.org/abs/2501.15228) | [code]

- [2024/12/31] **MAIN-RAG: Multi-Agent Filtering Retrieval-Augmented Generation** | [[paper]](https://arxiv.org/abs/2501.00332) | [code]

- [2024/12/24] **GeAR: Graph-enhanced Agent for Retrieval-augmented Generation** | [[paper]](https://arxiv.org/abs/2412.18431) | [code]

- [2024/12/20] **Towards Interpretable Radiology Report Generation via Concept Bottlenecks using a Multi-Agentic RAG** | [[paper]](https://arxiv.org/abs/2412.16086) | [code]

- [2024/12/16] **BioRAGent: A Retrieval-Augmented Generation System for Showcasing Generative Query Expansion and Domain-Specific Search for Scientific Q&A** | [[paper]](https://arxiv.org/abs/2412.12358) | [code]

- [2024/12/07] **SLA Management in Reconfigurable Multi-Agent RAG: A Systems Approach to Question Answering** | [[paper]](https://arxiv.org/abs/2412.06832) | [code]

- [2024/11/05] **Benchmarking Multimodal Retrieval Augmented Generation with Dynamic VQA Dataset and Self-adaptive Planning Agent** | [[paper]](https://arxiv.org/abs/2411.02937) | [code]

- [2024/10/28] **CRAT: A Multi-Agent Framework for Causality-Enhanced Reflective and Retrieval-Augmented Translation with Large Language Models** | [[paper]](https://arxiv.org/abs/2410.21067) | [code]

- [2024/10/18] **Toolshed: Scale Tool-Equipped Agents with Advanced RAG-Tool Fusion and Tool Knowledge Bases** | [[paper]](https://arxiv.org/abs/2410.14594) | [code]

- [2024/10/01] **Conversational Exploratory Search of Scholarly Publications Using Knowledge Graphs** | [[paper]](https://arxiv.org/abs/2410.00427) | [code]

- [2024/09/28] **Crafting Personalized Agents through Retrieval-Augmented Generation on Editable Memory Graphs** | [[paper]](https://arxiv.org/abs/2409.19401) | [code]

- [2024/08/18] **Agentic Retrieval-Augmented Generation for Time Series Analysis** | [[paper]](https://arxiv.org/abs/2408.14484) | [code]

- [2024/08/05] **LLM Agents Improve Semantic Code Search** | [[paper]](https://arxiv.org/abs/2408.11058) | [code]

- [2024/08/03] **MALADE: Orchestration of LLM-powered Agents with Retrieval Augmented Generation for Pharmacovigilance** | [[paper]](https://arxiv.org/abs/2408.01869) | [code]

- [2024/07/20] **Golden-Retriever: High-Fidelity Agentic Retrieval Augmented Generation for Industrial Knowledge Base** | [[paper]](https://arxiv.org/abs/2408.00798) | [code]

- [2024/06/26] **Geode: A Zero-shot Geospatial Question-Answering Agent with Explicit Reasoning and Precise Spatio-Temporal Retrieval** | [[paper]](https://arxiv.org/abs/2407.11014) | [code]

- [2024/06/19] **StackRAG Agent: Improving Developer Answers with Retrieval-Augmented Generation** | [[paper]](https://arxiv.org/abs/2406.13840) | [code]

- [2024/06/09] **A Review of Prominent Paradigms for LLM-Based Agents: Tool Use (Including RAG), Planning, and Feedback Learning** | [[paper]](https://arxiv.org/abs/2406.05804) | [code]

- [2024/03/05] **AgentsCourt: Building Judicial Decision-Making Agents with Court Debate Simulation and Legal Knowledge Augmentation** | [[paper]](https://arxiv.org/abs/2403.02959) | [code]

- [2024/02/06] **RAP: Retrieval-Augmented Planning with Contextual Memory for Multimodal LLM Agents** | [[paper]](https://arxiv.org/abs/2402.03610) | [code]

- [2023/12/27] **Automating Knowledge Acquisition for Content-Centric Cognitive Agents Using LLMs** | [[paper]](https://arxiv.org/abs/2312.16378) | [code]

#### Search
- [2025/06/09] **CheMatAgent: Enhancing LLMs for Chemistry and Materials Science through Tree-Search Based Tool Learning** | [[paper]](https://arxiv.org/abs/2506.07551) | [code]

- [2025/06/06] **AgentSwift: Efficient LLM Agent Design via Value-guided Hierarchical Search** | [[paper]](https://arxiv.org/abs/2506.06017) | [code]

- [2025/05/26] **T^2Agent A Tool-augmented Multimodal Misinformation Detection Agent with Monte Carlo Tree Search** | [[paper]](https://arxiv.org/abs/2505.19768) | [code]

- [2025/05/12] **Structural Entropy Guided Agent for Detecting and Repairing Knowledge Deficiencies in LLMs** | [[paper]](https://arxiv.org/abs/2505.07184) | [code]

- [2025/04/10] **The AI Scientist-v2: Workshop-Level Automated Scientific Discovery via Agentic Tree Search** | [[paper]](https://arxiv.org/abs/2504.08066) | [code]

- [2025/04/04] **SynWorld: Virtual Scenario Synthesis for Agentic Action Knowledge Refinement** | [[paper]](https://arxiv.org/abs/2504.03561) | [code]

- [2025/03/18] **DARS: Dynamic Action Re-Sampling to Enhance Coding Agent Performance by Adaptive Tree Traversal** | [[paper]](https://arxiv.org/abs/2503.14269) | [code]

- [2025/02/20] **I-MCTS: Enhancing Agentic AutoML via Introspective Monte Carlo Tree Search** | [[paper]](https://arxiv.org/abs/2502.14693) | [code]

- [2025/02/18] **R2-KG: General-Purpose Dual-Agent Framework for Reliable Reasoning on Knowledge Graphs** | [[paper]](https://arxiv.org/abs/2502.12767) | [code]

- [2025/02/18] **Agentic Deep Graph Reasoning Yields Self-Organizing Knowledge Networks** | [[paper]](https://arxiv.org/abs/2502.13025) | [code]

- [2025/02/17] **A Study on Leveraging Search and Self-Feedback for Agent Reasoning** | [[paper]](https://arxiv.org/abs/2502.12094) | [code]

- [2025/02/05] **SymAgent: A Neural-Symbolic Self-Learning Agent Framework for Complex Reasoning over Knowledge Graphs** | [[paper]](https://arxiv.org/abs/2502.03283) | [code]

- [2025/02/02] **Efficient Multi-Agent System Training with Data Influence-Oriented Tree Search** | [[paper]](https://arxiv.org/abs/2502.00955) | [code]

- [2025/01/31] **KBQA-o1: Agentic Knowledge Base Question Answering with Monte Carlo Tree Search** | [[paper]](https://arxiv.org/abs/2501.18922) | [code]

- [2025/01/09] **Search-o1: Agentic Search-Enhanced Large Reasoning Models** | [[paper]](https://arxiv.org/abs/2501.05366) | [code]

- [2024/12/24] **A Novel Task-Driven Method with Evolvable Interactive Agents Using Event Trees for Enhanced Emergency Decision Support** | [[paper]](https://arxiv.org/abs/2501.06193) | [code]

- [2024/12/22] **Multi-Agent Sampling: Scaling Inference Compute for Data Synthesis with Tree Search-Based Agentic Collaboration** | [[paper]](https://arxiv.org/abs/2412.17061) | [code]

- [2024/12/05] **Agent AI with LangGraph: A Modular Framework for Enhancing Machine Translation Using Large Language Models** | [[paper]](https://arxiv.org/abs/2412.03801) | [code]

- [2024/11/07] **CodeTree: Agent-guided Tree Search for Code Generation with Large Language Models** | [[paper]](https://arxiv.org/abs/2411.04329) | [code]

- [2024/10/29] **Synergizing LLM Agents and Knowledge Graph for Socioeconomic Prediction in LBSN** | [[paper]](https://arxiv.org/abs/2411.00028) | [code]

- [2024/10/25] **AGENT-CQ: Automatic Generation and Evaluation of Clarifying Questions for Conversational Search with LLMs** | [[paper]](https://arxiv.org/abs/2410.19692) | [code]

- [2024/10/22] **SELA: Tree-Search Enhanced LLM Agents for Automated Machine Learning** | [[paper]](https://arxiv.org/abs/2410.17238) | [code]

- [2024/10/13] **Expanding Search Space with Diverse Prompting Agents: An Efficient Sampling Approach for LLM Mathematical Reasoning** | [[paper]](https://arxiv.org/abs/2410.09780) | [code]

- [2024/10/13] **LLM-Based Multi-Agent Systems are Scalable Graph Generative Models** | [[paper]](https://arxiv.org/abs/2410.09824) | [code]

- [2024/10/02] **ExACT: Teaching AI Agents to Explore with Reflective-MCTS and Exploratory Learning** | [[paper]](https://arxiv.org/abs/2410.02052) | [code]

- [2024/09/09] **SciAgents: Automating scientific discovery through multi-agent intelligent graph reasoning** | [[paper]](https://arxiv.org/abs/2409.05556) | [code]

- [2024/07/01] **Tree Search for Language Model Agents** | [[paper]](https://arxiv.org/abs/2407.01476) | [code]

- [2024/06/17] **Input Conditioned Graph Generation for Language Agents** | [[paper]](https://arxiv.org/abs/2406.11555) | [[code]](https://github.com/lukasvierling/dynamicgptswarm)

- [2024/02/17] **KG-Agent: An Efficient Autonomous Agent Framework for Complex Reasoning over Knowledge Graph** | [[paper]](https://arxiv.org/abs/2402.11163) | [code]

- [2024/02/16] **When is Tree Search Useful for LLM Planning? It Depends on the Discriminator** | [[paper]](https://arxiv.org/abs/2402.10890) | [[code]](https://github.com/osu-nlp-group/llm-planning-eval)

- [2024/02/09] **CoSearchAgent: A Lightweight Collaborative Search Agent with Large Language Models** | [[paper]](https://arxiv.org/abs/2402.06360) | [code]

- [2023/05/17] **Tree of Thoughts: Deliberate Problem Solving with Large Language Models** | [[paper]](https://arxiv.org/abs/2305.10601) | [code]

### Interaction
#### Role Playing
- [2025/06/28] **Agent-to-Agent Theory of Mind: Testing Interlocutor Awareness among Large Language Models** | [[paper]](https://arxiv.org/abs/2506.22957) | [code]

- [2025/06/24] **MAM: Modular Multi-Agent Framework for Multi-Modal Medical Diagnosis via Role-Specialized Collaboration** | [[paper]](https://arxiv.org/abs/2506.19835) | [code]

- [2025/06/20] **Language-Informed Synthesis of Rational Agent Models for Grounded Theory-of-Mind Reasoning On-The-Fly** | [[paper]](https://arxiv.org/abs/2506.16755) | [code]

- [2025/06/06] **PersonaAgent: When Large Language Model Agents Meet Personalization at Test Time** | [[paper]](https://arxiv.org/abs/2506.06254) | [code]

- [2025/06/02] **Thinking in Character: Advancing Role-Playing Agents with Role-Aware Reasoning** | [[paper]](https://arxiv.org/abs/2506.01748) | [code]

- [2025/05/30] **Context-Aware Sentiment Forecasting via LLM-based Multi-Perspective Role-Playing Agents** | [[paper]](https://arxiv.org/abs/2505.24331) | [code]

- [2025/05/29] **ChARM: Character-based Act-adaptive Reward Modeling for Advanced Role-Playing Language Agents** | [[paper]](https://arxiv.org/abs/2505.23923) | [code]

- [2025/05/26] **OmniCharacter: Towards Immersive Role-Playing Agents with Seamless Speech-Language Personality Interaction** | [[paper]](https://arxiv.org/abs/2505.20277) | [code]

- [2025/05/20] **Inter(sectional) Alia(s): Ambiguity in Voice Agent Identity via Intersectional Japanese Self-Referents** | [[paper]](https://arxiv.org/abs/2506.01998) | [code]

- [2025/04/29] **BrAIcht, a theatrical agent that speaks like Bertolt Brecht's characters** | [[paper]](https://arxiv.org/abs/2504.20552) | [code]

- [2025/04/25] **Exploring Personality-Aware Interactions in Salesperson Dialogue Agents** | [[paper]](https://arxiv.org/abs/2504.18058) | [code]

- [2025/04/13] **UXAgent: A System for Simulating Usability Testing of Web Design with LLM Agents** | [[paper]](https://arxiv.org/abs/2504.09407) | [code]

- [2025/04/03] **LLMs as Deceptive Agents: How Role-Based Prompting Induces Semantic Ambiguity in Puzzle Tasks** | [[paper]](https://arxiv.org/abs/2504.02254) | [code]

- [2025/03/14] **AIstorian lets AI be a historian: A KG-powered multi-agent system for accurate biography generation** | [[paper]](https://arxiv.org/abs/2503.11346) | [code]

- [2025/02/20] **InstructAgent: Building User Controllable Recommender via LLM Agent** | [[paper]](https://arxiv.org/abs/2502.14662) | [code]

- [2025/02/18] **SEFL: Harnessing Large Language Model Agents to Improve Educational Feedback Systems** | [[paper]](https://arxiv.org/abs/2502.12927) | [code]

- [2025/02/17] **Can LLM Agents Maintain a Persona in Discourse?** | [[paper]](https://arxiv.org/abs/2502.11843) | [code]

- [2025/02/17] **LM Agents for Coordinating Multi-User Information Gathering** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.12328) | [code]\n\n- [2025\u002F02\u002F16] **SCALE: Towards Collaborative Content Analysis in Social Science with Large Language Model Agents and Human Intervention** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.10937) | [code]\n\n- [2025\u002F02\u002F13] **Language Agents as Digital Representatives in Collective Decision-Making** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.09369) | [code]\n\n- [2025\u002F02\u002F06] **PsyPlay: Personality-Infused Role-Playing Conversational Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.03821) | [code]\n\n- [2025\u002F02\u002F03] **Plan-Then-Execute: An Empirical Study of User Trust and Team Performance When Using LLM Agents As A Daily Assistant** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.01390) | [code]\n\n- [2025\u002F01\u002F23] **AgentRec: Agent Recommendation Using Sentence Embeddings Aligned to Human Feedback** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.13333) | [code]\n\n- [2025\u002F01\u002F15] **Personality Modeling for Persuasion of Misinformation using AI Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.08985) | [code]\n\n- [2024\u002F12\u002F28] **BaiJia: A Large-Scale Role-Playing Agent Corpus of Chinese Historical Characters** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.20024) | [code]\n\n- [2024\u002F12\u002F22] **Modular Conversational Agents for Surveys and Interviews** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.17049) | [code]\n\n- [2024\u002F12\u002F11] **SweetieChat: A Strategy-Enhanced Role-playing Framework for Diverse Scenarios Handling Emotional Support Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.08389) | [code]\n\n- [2024\u002F12\u002F10] **My Words Imply Your Opinion: Reader Agent-Based Propagation Enhancement for Personalized Implicit Emotion Analysis** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.07367) | [code]\n\n- [2024\u002F11\u002F21] **Towards Full Delegation: Designing Ideal Agentic Behaviors for Travel Planning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.13904) | [code]\n\n- [2024\u002F11\u002F19] **Probing the Capacity of Language Model Agents to Operationalize Disparate Experiential Context Despite Distraction** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.12828) | [code]\n\n- [2024\u002F11\u002F12] **SHARP: Unlocking Interactive Hallucination via Stance Transfer in Role-Playing Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.07965) | [code]\n\n- [2024\u002F11\u002F04] **A Multi-Task Role-Playing Agent Capable of Imitating Character Linguistic Styles** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.02457) | [code]\n\n- [2024\u002F10\u002F28] **Guide-LLM: An Embodied LLM Agent and Text-Based Topological Map for Robotic Guidance of People with Visual Impairments** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.20666) | [code]\n\n- [2024\u002F10\u002F24] **Schema-Guided Culture-Aware Complex Event Simulation with Multi-Agent Role-Play** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.18935) | [code]\n\n- [2024\u002F09\u002F23] **ERABAL: Enhancing Role-Playing Agents through Boundary-Aware Learning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.14710) | [code]\n\n- [2024\u002F09\u002F19] **FoodPuzzle: Developing Large Language Model Agents as Flavor Scientists** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.12832) | [code]\n\n- [2024\u002F09\u002F12] **TravelAgent: An AI Assistant for Personalized Travel Planning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.08069) | [code]\n\n- [2024\u002F09\u002F11] **Using Generative Agents to Create Tip Sheets for Investigative Data Reporting** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.07286) | [code]\n\n- 
[2024\u002F08\u002F28] **Interactive Agents: Simulating Counselor-Client Psychological Counseling via Role-Playing LLM-to-LLM Interactions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.15787) | [code]\n\n- [2024\u002F08\u002F21] **Drama Engine: A Framework for Narrative Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.11574) | [code]\n\n- [2024\u002F06\u002F24] **The Effects of Embodiment and Personality Expression on Learning in LLM-based Educational Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.10993) | [code]\n\n- [2024\u002F06\u002F17] **HoLLMwood: Unleashing the Creativity of Large Language Models in Screenwriting via Role Playing** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.11683) | [code]\n\n- [2024\u002F06\u002F11] **Agent-SiMT: Agent-assisted Simultaneous Machine Translation with Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.06910) | [code]\n\n- [2024\u002F06\u002F09] **Peer Review as A Multi-Turn and Long-Context Dialogue with Role-Based Interactions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.05688) | [[code]](https:\u002F\u002Fgithub.com\u002Fchengtan9907\u002Freviewmt)\n\n- [2024\u002F05\u002F28] **TimeChara: Evaluating Point-in-Time Character Hallucination of Role-Playing Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.18027) | [code]\n\n- [2024\u002F05\u002F10] **LLM Discussion: Enhancing the Creativity of Large Language Models via Discussion Framework and Role-Play** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.06373) | [code]\n\n- [2024\u002F05\u002F08] **LLMs with Personalities in Multi-issue Negotiation Games** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.05248) | [code]\n\n- [2024\u002F05\u002F06] **Large Language Models (LLMs) as Agents for Augmented Democracy** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.03452) | [code]\n\n- 
[2024\u002F05\u002F02] **GAIA: A General AI Assistant for Intelligent Accelerator Operations** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.01359) | [code]\n\n- [2024\u002F05\u002F01] **&#34;Ask Me Anything&#34;: How Comcast Uses LLMs to Assist Agents in Real Time** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.00801) | [code]\n\n- [2024\u002F04\u002F26] **Large Language Model Agent as a Mechanical Designer** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.17525) | [code]\n\n- [2024\u002F04\u002F19] **Cooperative Sentiment Agents for Multimodal Sentiment Analysis** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.12642) | [[code]](https:\u002F\u002Fgithub.com\u002Fsmwanghhh\u002Fco-sa)\n\n- [2024\u002F03\u002F31] **DiffAgent: Fast and Accurate Text-to-Image API Selection with Large Language Model** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.01342) | [[code]](https:\u002F\u002Fgithub.com\u002FOpenGVLab\u002FDiffAgent)\n\n- [2024\u002F03\u002F23] **EduAgent: Generative Student Agents in Learning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.07963) | [code]\n\n- [2024\u002F03\u002F19] **Characteristic AI Agents via Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.12368) | [code]\n\n- [2024\u002F03\u002F15] **VideoAgent: Long-form Video Understanding with Large Language Model as Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.10517) | [code]\n\n- [2024\u002F03\u002F13] **Evaluating Large Language Models as Generative User Simulators for Conversational Recommendation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.09738) | [code]\n\n- [2024\u002F02\u002F29] **On the Decision-Making Abilities in Role-Playing using Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.18807) | [code]\n\n- [2024\u002F02\u002F28] **Prospect Personalized Recommendation on Large Language Model-based Agent Platform** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.18240) | [code]\n\n- [2024\u002F02\u002F26] **Language Agents as Optimizable Graphs** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.16823) | [[code]](https:\u002F\u002Fgithub.com\u002Fmetauto-ai\u002Fgptswarm)\n\n- [2024\u002F02\u002F22] **Triad: A Framework Leveraging a Multi-Role LLM-based Agent to Solve Knowledge Base Question Answering** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.14320) | [code]\n\n- [2024\u002F02\u002F22] **Large Language Models as Urban Residents: An LLM Agent Framework for Personal Mobility Generation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.14744) | [code]\n\n- [2024\u002F02\u002F21] **Neeko: Leveraging Dynamic LoRA for Efficient Multi-Character Role-Playing Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.13717) | [code]\n\n- [2024\u002F02\u002F19] **Stick to your Role! Stability of Personal Values Expressed in Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.14846) | [code]\n\n- [2024\u002F02\u002F18] **Modelling Political Coalition Negotiations Using LLM-based Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.11712) | [code]\n\n- [2024\u002F02\u002F06] **Professional Agents -- Evolving Large Language Models into Autonomous Experts with Human-Level Competencies** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.03628) | [code]\n\n- [2024\u002F02\u002F06] **Can Generative Agents Predict Emotion?** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.04232) | [code]\n\n- [2024\u002F02\u002F05] **GUARD: Role-playing to Generate Natural-language Jailbreakings to Test Guideline Adherence of Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.03299) | [code]\n\n- [2024\u002F01\u002F31] **LLMs Simulate Big Five Personality Traits: Further Evidence** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.01765) | 
[code]\n\n- [2023\u002F12\u002F22] **Personalized Large Language Model Assistant with Evolving Conditional Memory** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.17257) | [code]\n\n- [2023\u002F12\u002F21] **ChatGPT as a commenter to the news: can LLMs generate human-like opinions?** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.13961) | [code]\n\n- [2023\u002F12\u002F20] **Machine Mindset: An MBTI Exploration of Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.12999) | [code]\n\n- [2023\u002F12\u002F19] **Can ChatGPT be Your Personal Medical Assistant?** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.12006) | [code]\n\n- [2023\u002F10\u002F13] **AgentCF: Collaborative Learning with Autonomous Language Agents for Recommender Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.09233) | [code]\n\n- [2023\u002F10\u002F01] **RoleLLM: Benchmarking, Eliciting, and Enhancing Role-Playing Abilities of Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.00746) | [code]\n\n- [2023\u002F09\u002F02] **ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.00986) | [code]\n\n- [2023\u002F08\u002F22] **Towards an On-device Agent for Text Rewriting** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.11807) | [code]\n\n- [2023\u002F08\u002F10] **LLM As DBA** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.05481) | [code]\n\n- [2023\u002F08\u002F03] **InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.01552) | [code]\n\n- [2023\u002F07\u002F11] **Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.05300) | [code]\n\n- 
[2023\u002F07\u002F05] **Building Cooperative Embodied Agents Modularly with Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.02485) | [code]\n\n- [2023\u002F05\u002F25] **Role-Play with Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.16367) | [code]\n\n- [2023\u002F05\u002F09] **TidyBot: Personalized Robot Assistance with Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.05658) | [code]\n\n#### Conversation\n- [2025\u002F06\u002F28] **Knowledge Augmented Finetuning Matters in both RAG and Agent Based Dialog Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.22852) | [code]\n\n- [2025\u002F06\u002F24] **Augmenting Multi-Agent Communication with State Delta Trajectory** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.19209) | [code]\n\n- [2025\u002F06\u002F17] **From What to Respond to When to Respond: Timely Response Generation for Open-domain Dialogue Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.14285) | [code]\n\n- [2025\u002F06\u002F17] **Expectation Confirmation Preference Optimization for Multi-Turn Conversational Recommendation Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.14302) | [code]\n\n- [2025\u002F06\u002F13] **The Behavior Gap: Evaluating Zero-shot LLM Agents in Complex Task-Oriented Dialogs** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.12266) | [code]\n\n- [2025\u002F06\u002F11] **Chat-of-Thought: Collaborative Multi-Agent System for Generating Domain Specific Information** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.10086) | [code]\n\n- [2025\u002F06\u002F09] **$\\tau^2$-Bench: Evaluating Conversational Agents in a Dual-Control Environment** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.07982) | [code]\n\n- [2025\u002F06\u002F04] **AI Agents for Conversational Patient Triage: Preliminary Simulation-Based Evaluation with Real-World EHR 
Data** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.04032) | [code]\n\n- [2025\u002F06\u002F04] **CLAIM: An Intent-Driven Multi-Agent Framework for Analyzing Manipulation in Courtroom Dialogues** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.04131) | [code]\n\n- [2025\u002F05\u002F29] **A Practical Approach for Building Production-Grade Conversational Agents with Workflow Graphs** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.23006) | [code]\n\n- [2025\u002F05\u002F28] **ChatCFD: an End-to-End CFD Agent with Domain-specific Structured Thinking** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.02019) | [code]\n\n- [2025\u002F05\u002F26] **Towards Multi-Granularity Memory Association and Selection for Long-Term Conversational Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.19549) | [code]\n\n- [2025\u002F05\u002F24] **Multi-Party Conversational Agents: A Survey** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.18845) | [code]\n\n- [2025\u002F05\u002F21] **Aligning Dialogue Agents with Global Feedback via Large Language Model Reward Decomposition** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.15922) | [code]\n\n- [2025\u002F04\u002F29] **BrAIcht, a theatrical agent that speaks like Bertolt Brecht&#39;s characters** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.20552) | [code]\n\n- [2025\u002F04\u002F26] **MATCHA: Can Multi-Agent Collaboration Build a Trustworthy Conversational Recommender?** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.20094) | [code]\n\n- [2025\u002F04\u002F21] **EducationQ: Evaluating LLMs&#39; Teaching Capabilities Through Multi-Agent Dialogue Framework** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.14928) | [code]\n\n- [2025\u002F04\u002F20] **DialogueAgents: A Hybrid Agent-Based Speech Synthesis Framework for Multi-Party Dialogue** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.14482) | [code]\n\n- 
[2025\u002F04\u002F12] **A Multi-view Discourse Framework for Integrating Semantic and Syntactic Features in Dialog Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.09073) | [code]\n\n- [2025\u002F04\u002F07] **Bridging Industrial Expertise and XR with LLM-Powered Conversational Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.05527) | [code]\n\n- [2025\u002F04\u002F07] **A Desideratum for Conversational Agents: Capabilities, Challenges, and Future Directions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.16939) | [code]\n\n- [2025\u002F03\u002F28] **Evaluating LLM-based Agents for Multi-Turn Conversations: A Survey** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.22458) | [code]\n\n- [2025\u002F03\u002F27] **EQ-Negotiator: An Emotion-Reasoning LLM Agent in Credit Dialogues** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.21080) | [code]\n\n- [2025\u002F03\u002F26] **3MDBench: Medical Multimodal Multi-agent Dialogue Benchmark** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.13861) | [code]\n\n- [2025\u002F03\u002F25] **CoMAC: Conversational Agent for Multi-Source Auxiliary Context with Sparse and Symmetric Latent Interactions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.19274) | [code]\n\n- [2025\u002F03\u002F25] **Substance over Style: Evaluating Proactive Conversational Coaching Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.19328) | [code]\n\n- [2025\u002F03\u002F18] **Personalized Attacks of Social Engineering in Multi-turn Conversations -- LLM Agents for Simulation and Detection** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.15552) | [code]\n\n- [2025\u002F03\u002F11] **In Prospect and Retrospect: Reflective Memory Management for Long-term Personalized Dialogue Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.08026) | [code]\n\n- [2025\u002F03\u002F05] **Cite Before You Speak: Enhancing Context-Response 
Grounding in E-commerce Conversational LLM-Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.04830) | [code]\n\n- [2025\u002F02\u002F24] **Turning Conversations into Workflows: A Framework to Extract and Evaluate Dialog Workflows for Service AI Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.17321) | [code]\n\n- [2025\u002F02\u002F20] **Enhancing Conversational Agents with Theory of Mind: Aligning Beliefs, Desires, and Intentions for Human-Like Interaction** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.14171) | [code]\n\n- [2025\u002F02\u002F18] **One Size doesn&#39;t Fit All: A Personalized Conversational Tutoring Agent for Mathematics Instruction** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.12633) | [code]\n\n- [2025\u002F02\u002F18] **Training Turn-by-Turn Verifiers for Dialogue Tutoring Agents: The Curious Case of LLMs as Your Coding Tutors** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.13311) | [code]\n\n- [2025\u002F02\u002F18] **You need to MIMIC to get FAME: Solving Meeting Transcript Scarcity with a Multi-Agent Conversations** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.13001) | [code]\n\n- [2025\u002F02\u002F17] **InfoQuest: Evaluating Multi-Turn Dialogue Agents for Open-Ended Conversations with Hidden Context** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.12257) | [code]\n\n- [2025\u002F02\u002F13] **Reliable Conversational Agents under ASP Control that Understand Natural Language** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.09237) | [code]\n\n- [2025\u002F02\u002F12] **Can a Single Model Master Both Multi-turn Conversations and Tool Use? 
CoALM: A Unified Conversational Agentic Language Model** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.08820) | [code]\n\n- [2025\u002F02\u002F09] **MTPChat: A Multimodal Time-Aware Persona Dataset for Conversational Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.05887) | [code]\n\n- [2025\u002F02\u002F09] **HamRaz: A Culture-Based Persian Conversation Dataset for Person-Centered Therapy Using LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.05982) | [code]\n\n- [2025\u002F02\u002F08] **On Memory Construction and Retrieval for Personalized Conversational Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.05589) | [code]\n\n- [2025\u002F02\u002F06] **PsyPlay: Personality-Infused Role-Playing Conversational Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.03821) | [code]\n\n- [2025\u002F01\u002F24] **Unmasking Conversational Bias in AI Multiagent Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.14844) | [code]\n\n- [2025\u002F01\u002F23] **Communicating Activations Between Language Model Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.14082) | [code]\n\n- [2025\u002F01\u002F19] **IntellAgent: A Multi-Agent Framework for Evaluating Conversational AI Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.11067) | [code]\n\n- [2025\u002F01\u002F14] **Developing Enhanced Conversational Agents for Social Virtual Worlds** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.16341) | [code]\n\n- [2025\u002F01\u002F03] **PSYCHE: A Multi-faceted Patient Simulation Framework for Evaluation of Psychiatric Assessment Conversational Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.01594) | [code]\n\n- [2024\u002F12\u002F30] **Exploring and Controlling Diversity in LLM-Agent Conversation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.21102) | [code]\n\n- [2024\u002F12\u002F24] **Extracting triples 
from dialogues for conversational social agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.18364) | [code]\n\n- [2024\u002F12\u002F22] **Modular Conversational Agents for Surveys and Interviews** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.17049) | [code]\n\n- [2024\u002F12\u002F21] **InfoTech Assistant : A Multimodal Conversational Agent for InfoTechnology Web Portal Queries** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.16412) | [code]\n\n- [2024\u002F12\u002F13] **Script-Based Dialog Policy Planning for LLM-Powered Conversational Agents: A Basic Architecture for an &#34;AI Therapist&#34;** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.15242) | [code]\n\n- [2024\u002F12\u002F06] **CALICO: Conversational Agent Localization via Synthetic Data Generation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.05388) | [code]\n\n- [2024\u002F12\u002F05] **Educational-Psychological Dialogue Robot Based on Multi-Agent Collaboration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.03847) | [code]\n\n- [2024\u002F12\u002F01] **Examining Identity Drift in Conversations of LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.00804) | [code]\n\n- [2024\u002F11\u002F07] **Thanos: Enhancing Conversational Agents with Skill-of-Mind-Infused Large Language Model** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.04496) | [code]\n\n- [2024\u002F11\u002F07] **Interactive Dialogue Agents via Reinforcement Learning on Hindsight Regenerations** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.05194) | [code]\n\n- [2024\u002F11\u002F06] **MRJ-Agent: An Effective Jailbreak Agent for Multi-Round Dialogue** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.03814) | [code]\n\n- [2024\u002F11\u002F01] **DARD: A Multi-Agent Approach for Task-Oriented Dialog Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.00427) | [code]\n\n- [2024\u002F11\u002F01] 
**ReSpAct: Harmonizing Reasoning, Speaking, and Acting Towards Building Large Language Model-Based Conversational AI Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.00927) | [code]\n\n- [2024\u002F10\u002F29] **MARCO: Multi-Agent Real-time Chat Orchestration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.21784) | [code]\n\n- [2024\u002F10\u002F25] **AGENT-CQ: Automatic Generation and Evaluation of Clarifying Questions for Conversational Search with LLMs** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.19692) | [code]\n\n- [2024\u002F10\u002F18] **Coherence-Driven Multimodal Safety Dialogue with Active Learning for Embodied Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.14141) | [code]\n\n- [2024\u002F10\u002F15] **HR-Agent: A Task-Oriented Dialogue (TOD) LLM Agent Tailored for HR Applications** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.11239) | [code]\n\n- [2024\u002F10\u002F10] **Rewriting Conversational Utterances with Instructed Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.07797) | [code]\n\n- [2024\u002F09\u002F24] **Automated test generation to evaluate tool-augmented LLMs as conversational AI agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.15934) | [code]\n\n- [2024\u002F09\u002F23] **Beyond Turn-Based Interfaces: Synchronous LLMs as Full-Duplex Dialogue Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.15594) | [code]\n\n- [2024\u002F09\u002F13] **AI-LieDar: Examine the Trade-off Between Utility and Truthfulness in LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.09013) | [code]\n\n- [2024\u002F09\u002F06] **Sparse Rewards Can Self-Train Dialogue Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.04617) | [code]\n\n- [2024\u002F09\u002F02] **Co-Learning: Code Learning for Multi-Agent Reinforcement Collaborative Framework with Conversational Natural Language 
Interfaces** | [[paper]](https://arxiv.org/abs/2409.00985) | [code]

- [2024/08/27] **Into the Unknown Unknowns: Engaged Human Learning through Participation in Language Model Agent Conversations** | [[paper]](https://arxiv.org/abs/2408.15232) | [code]

- [2024/08/22] **MDD-5k: A New Diagnostic Conversation Dataset for Mental Disorders Synthesized via Neuro-Symbolic LLM Agents** | [[paper]](https://arxiv.org/abs/2408.12142) | [code]

- [2024/08/13] **What should I wear to a party in a Greek taverna? Evaluation for Conversational Agents in the Fashion Domain** | [[paper]](https://arxiv.org/abs/2408.08907) | [code]

- [2024/08/06] **OpenOmni: A Collaborative Open Source Tool for Building Future-Ready Multimodal Conversational Agents** | [[paper]](https://arxiv.org/abs/2408.03047) | [code]

- [2024/08/03] **Self-Emotion Blended Dialogue Generation in Social Simulation Agents** | [[paper]](https://arxiv.org/abs/2408.01633) | [code]

- [2024/07/31] **Towards Achieving Human Parity on End-to-end Simultaneous Speech Translation via LLM Agent** | [[paper]](https://arxiv.org/abs/2407.21646) | [code]

- [2024/07/13] **Cohesive Conversations: Enhancing Authenticity in Multi-Agent Simulated Dialogues** | [[paper]](https://arxiv.org/abs/2407.09897) | [code]

- [2024/07/04] **Controllable Conversations: Planning-Based Dialogue Agent with Large Language Models** | [[paper]](https://arxiv.org/abs/2407.03884) | [code]

- [2024/07/01] **Empathic Grounding: Explorations using Multimodal Interaction and Large Language Models with Conversational Agents** | [[paper]](https://arxiv.org/abs/2407.01824) | [code]

- [2024/06/30] **CAMON: Cooperative Agents for Multi-Object Navigation with LLM-based Conversations** | [[paper]](https://arxiv.org/abs/2407.00632) | [code]

- [2024/06/09] **Peer Review as A Multi-Turn and Long-Context Dialogue with Role-Based Interactions** | [[paper]](https://arxiv.org/abs/2406.05688) | [[code]](https://github.com/chengtan9907/reviewmt)

- [2024/05/29] **Toward Conversational Agents with Context and Time Sensitive Long-term Memory** | [[paper]](https://arxiv.org/abs/2406.00057) | [[code]](https://github.com/Zyphra/TemporalMemoryDataset)

- [2024/05/16] **Speaker Verification in Agent-Generated Conversations** | [[paper]](https://arxiv.org/abs/2405.10150) | [code]

- [2024/04/19] **Towards Human-centered Proactive Conversational Agents** | [[paper]](https://arxiv.org/abs/2404.12670) | [code]

- [2024/04/10] **Apollonion: Profile-centric Dialog Agent** | [[paper]](https://arxiv.org/abs/2404.08692) | [code]

- [2024/03/17] **Improving Dialogue Agents by Decomposing One Global Explicit Annotation with Local Implicit Multimodal Feedback** | [[paper]](https://arxiv.org/abs/2403.11330) | [code]

- [2024/03/08] **ChatASU: Evoking LLM's Reflexion to Truly Understand Aspect Sentiment in Dialogues** | [[paper]](https://arxiv.org/abs/2403.05326) | [code]

- [2024/02/25] **Understanding Public Perceptions of AI Conversational Agents: A Cross-Cultural Analysis** | [[paper]](https://arxiv.org/abs/2402.16039) | [code]

- [2024/02/23] **On the Multi-turn Instruction Following for Conversational Web Agents** | [[paper]](https://arxiv.org/abs/2402.15057) | [code]

- [2024/02/20] **CHATATC: Large Language Model-Driven Conversational Agents for Supporting Strategic Air Traffic Flow Management** | [[paper]](https://arxiv.org/abs/2402.14850) | [code]

- [2024/01/29] **Assistive Large Language Model Agents for Socially-Aware Negotiation Dialogues** | [[paper]](https://arxiv.org/abs/2402.01737) | [code]

- [2024/01/10] **Bootstrapping LLM-based Task-Oriented Dialogue Agents via Self-Talk** | [[paper]](https://arxiv.org/abs/2401.05033) | [code]

- [2024/01/02] **CharacterEval: A Chinese Benchmark for Role-Playing Conversational Agent Evaluation** | [[paper]](https://arxiv.org/abs/2401.01275) | [code]

- [2023/12/21] **Team Flow at DRC2023: Building Common Ground and Text-based Turn-taking in a Travel Agent Spoken Dialogue System** | [[paper]](https://arxiv.org/abs/2312.13816) | [code]

- [2023/11/15] **ToolTalk: Evaluating Tool-Usage in a Conversational Setting** | [[paper]](https://arxiv.org/abs/2311.10775) | [code]

- [2023/10/01] **Adapting LLM Agents Through Communication** | [[paper]](https://arxiv.org/abs/2310.01444v2) | [code]

- [2023/06/28] **Inferring the Goals of Communicating Agents from Actions and Instructions** | [[paper]](https://arxiv.org/abs/2306.16207) | [code]

- [2023/04/26] **Multi-Party Chat: Conversational Agents in Group Settings with Humans and Models** | [[paper]](https://arxiv.org/abs/2304.13835) | [code]

- [2023/03/31] **CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society** | [[paper]](https://arxiv.org/abs/2303.17760) | [[code]](https://github.com/camel-ai/camel)

#### Game Playing
- [2025/06/30] **SPIRAL: Self-Play on Zero-Sum Games Incentivizes Reasoning via Multi-Agent Multi-Turn Reinforcement Learning** | [[paper]](https://arxiv.org/abs/2506.24119) | [code]

- [2025/06/05] **Time to Talk: LLM Agents for Asynchronous Group Communication in Mafia Games** | [[paper]](https://arxiv.org/abs/2506.05309) | [code]

- [2025/06/04] **TextAtari: 100K Frames Game Playing with Language Agents** | [[paper]](https://arxiv.org/abs/2506.04098) | [code]

- [2025/05/29] **The Automated but Risky Game: Modeling Agent-to-Agent Negotiations and Transactions in Consumer Markets** | [[paper]](https://arxiv.org/abs/2506.00073) | [code]

- [2025/05/28] **First Steps Towards Overhearing LLM Agents: A Case Study With Dungeons & Dragons Gameplay** | [[paper]](https://arxiv.org/abs/2505.22809) | [code]

- [2025/05/25] **When Ethics and Payoffs Diverge: LLM Agents in Morally Charged Social Dilemmas** | [[paper]](https://arxiv.org/abs/2505.19212) | [code]

- [2025/05/23] **CoMet: Metaphor-Driven Covert Communication for Multi-Agent Language Games** | [[paper]](https://arxiv.org/abs/2505.18218) | [code]

- [2025/05/20] **BAR: A Backward Reasoning based Agent for Complex Minecraft Tasks** | [[paper]](https://arxiv.org/abs/2505.14079) | [code]

- [2025/04/23] **Monte Carlo Planning with Large Language Model for Text-Based Game Agents** | [[paper]](https://arxiv.org/abs/2504.16855) | [code]

- [2025/04/15] **TextArena** | [[paper]](https://arxiv.org/abs/2504.11442) | [code]

- [2025/04/09] **Persona Dynamics: Unveiling the Impact of Personality Traits on Agents in Text-Based Games** | [[paper]](https://arxiv.org/abs/2504.06868) | [code]

- [2025/03/08] **DSGBench: A Diverse Strategic Game Benchmark for Evaluating LLM-based Agents in Complex Decision-Making Environments** | [[paper]](https://arxiv.org/abs/2503.06047) | [code]

- [2025/03/06] **VQEL: Enabling Self-Developed Symbolic Language in Agents through Vector Quantization in Emergent Language Games** | [[paper]](https://arxiv.org/abs/2503.04940) | [code]

- [2025/03/06] **Factorio Learning Environment** | [[paper]](https://arxiv.org/abs/2503.09617) | [code]

- [2025/02/05] **Multimodal Transformer Models for Turn-taking Prediction: Effects on Conversational Dynamics of Human-Agent Interaction during Cooperative Gameplay** | [[paper]](https://arxiv.org/abs/2503.16432) | [code]

- [2025/02/01] **Who's the MVP? A Game-Theoretic Evaluation Benchmark for Modular Attribution in LLM Agents** | [[paper]](https://arxiv.org/abs/2502.00510) | [code]

- [2025/01/24] **Multi-agent KTO: Reinforcing Strategic Interactions of Large Language Model in Language Game** | [[paper]](https://arxiv.org/abs/2501.14225) | [code]

- [2024/12/06] **TeamCraft: A Benchmark for Multi-Modal Multi-Agent Systems in Minecraft** | [[paper]](https://arxiv.org/abs/2412.05255) | [code]

- [2024/11/08] **Game-theoretic LLM: Agent Workflow for Negotiation Games** | [[paper]](https://arxiv.org/abs/2411.05990) | [code]

- [2024/10/28] **Can Machines Think Like Humans? A Behavioral Evaluation of LLM-Agents in Dictator Games** | [[paper]](https://arxiv.org/abs/2410.21359) | [code]

- [2024/09/03] **An Implementation of Werewolf Agent That does not Truly Trust LLMs** | [[paper]](https://arxiv.org/abs/2409.01575) | [code]

- [2024/08/05] **Evaluating and Enhancing LLMs Agent based on Theory of Mind in Guandan: A Multi-Player Cooperative Game under Imperfect Information** | [[paper]](https://arxiv.org/abs/2408.02559) | [code]

- [2024/07/23] **AMONGAGENTS: Evaluating Large Language Models in the Interactive Text-Based Social Deduction Game** | [[paper]](https://arxiv.org/abs/2407.16521) | [code]

- [2024/07/17] **A LLM Benchmark based on the Minecraft Builder Dialog Agent Task** | [[paper]](https://arxiv.org/abs/2407.12734) | [code]

- [2024/06/27] **OmniJARVIS: Unified Vision-Language-Action Tokenization Enables Open-World Instruction Following Agents** | [[paper]](https://arxiv.org/abs/2407.00114) | [code]

- [2024/06/07] **GameBench: Evaluating Strategic Reasoning Abilities of LLM Agents** | [[paper]](https://arxiv.org/abs/2406.06613) | [[code]](https://github.com/Joshuaclymer/GameBench)

- [2024/06/05] **The Good, the Bad, and the Hulk-like GPT: Analyzing Emotional Decisions of Large Language Models in Cooperation and Bargaining Games** | [[paper]](https://arxiv.org/abs/2406.03299) | [code]

- [2024/05/24] **Hacc-Man: An Arcade Game for Jailbreaking LLMs** | [[paper]](https://arxiv.org/abs/2405.15902) | [code]

- [2024/05/23] **Human-Agent Cooperation in Games under Incomplete Information through Natural Language Communication** | [[paper]](https://arxiv.org/abs/2405.14173) | [code]

- [2024/05/08] **LLMs with Personalities in Multi-issue Negotiation Games** | [[paper]](https://arxiv.org/abs/2405.05248) | [code]

- [2024/04/30] **PANGeA: Procedural Artificial Narrative using Generative AI for Turn-Based Video Games** | [[paper]](https://arxiv.org/abs/2404.19721) | [code]

- [2024/04/03] **Learn to Disguise: Avoid Refusal Responses in LLM's Defense via a Multi-agent Attacker-Disguiser Game** | [[paper]](https://arxiv.org/abs/2404.02532) | [code]

- [2024/03/28] **MineLand: Simulating Large-Scale Multi-Agent Interactions with Limited Multimodal Senses and Physical Needs** | [[paper]](https://arxiv.org/abs/2403.19267) | [[code]](https://github.com/cocacola-lab/mineland)

- [2024/03/18] **How Far Are We on the Decision-Making of LLMs? Evaluating LLMs' Gaming Ability in Multi-Agent Environments** | [[paper]](https://arxiv.org/abs/2403.11807) | [code]

- [2024/02/19] **PsychoGAT: A Novel Psychological Measurement Paradigm through Interactive Fiction Games with LLM Agents** | [[paper]](https://arxiv.org/abs/2402.12326) | [code]

- [2024/02/13] **Large Language Models as Minecraft Agents** | [[paper]](https://arxiv.org/abs/2402.08392) | [code]

- [2024/02/12] **Large Language Models as Agents in Two-Player Games** | [[paper]](https://arxiv.org/abs/2402.08078) | [code]

- [2024/02/04] **Enhance Reasoning for Large Language Models in the Game Werewolf** | [[paper]](https://arxiv.org/abs/2402.02330) | [code]

- [2024/02/02] **PokeLLMon: A Human-Parity Agent for Pokemon Battles with Large Language Models** | [[paper]](https://arxiv.org/abs/2402.01118) | [code]

- [2023/12/29] **Cooperation on the Fly: Exploring Language Agents for Ad Hoc Teamwork in the Avalon Game** | [[paper]](https://arxiv.org/abs/2312.17515) | [code]

- [2023/12/01] **Deciphering Digital Detectives: Understanding LLM Behaviors and Capabilities in Multi-Agent Mystery Games** | [[paper]](https://arxiv.org/abs/2312.00746) | [code]

- [2023/10/31] **Leveraging Word Guessing Games to Assess the Intelligence of Large Language Models** | [[paper]](https://arxiv.org/abs/2310.20499) | [code]

- [2023/09/29] **Suspicion-Agent: Playing Imperfect Information Games with Theory of Mind Aware GPT-4** | [[paper]](https://arxiv.org/abs/2309.17277) | [code]

- [2023/09/18] **MindAgent: Emergent Gaming Interaction** | [[paper]](https://arxiv.org/abs/2309.09971) | [[code]](https://mindagent.github.io/)

- [2023/09/10] **An Appraisal-Based Chain-Of-Emotion Architecture for Affective Language Model Game Agents** | [[paper]](https://arxiv.org/abs/2309.05076) | [code]

- [2023/09/09] **Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf** | [[paper]](https://arxiv.org/abs/2309.04658) | [code]

- [2023/08/23] **Are ChatGPT and GPT-4 Good Poker Players? -- A Pre-Flop Analysis** | [[paper]](https://arxiv.org/abs/2308.12466) | [code]

- [2023/05/31] **Recursive Metropolis-Hastings Naming Game: Symbol Emergence in a Multi-agent System based on Probabilistic Generative Models** | [[paper]](https://arxiv.org/abs/2305.19761) | [code]

- [2023/05/26] **Playing repeated games with Large Language Models** | [[paper]](https://arxiv.org/abs/2305.16867) | [code]

- [2023/05/25] **Ghost in the Minecraft: Generally Capable Agents for Open-World Environments via Large Language Models with Text-based Knowledge and Memory** | [[paper]](https://arxiv.org/abs/2305.17144) | [code]

- [2023/05/08] **Knowledge-enhanced Agents for Interactive Text Games** | [[paper]](https://arxiv.org/abs/2305.05091) | [code]

- [2023/04/06] **Can Large Language Models Play Text Games Well? Current State-of-the-Art and Open Questions** | [[paper]](https://arxiv.org/abs/2304.02868) | [code]

#### Human-Agent Interaction
- [2025/06/11] **A Call for Collaborative Intelligence: Why Human-Agent Systems Should Precede AI Autonomy** | [[paper]](https://arxiv.org/abs/2506.09420) | [code]

- [2025/05/16] **Talk to Your Slides: Language-Driven Agents for Efficient Slide Editing** | [[paper]](https://arxiv.org/abs/2505.11604) | [code]

- [2025/03/26] **TAMA: A Human-AI Collaborative Thematic Analysis Framework Using Multi-Agent LLMs for Clinical Interviews** | [[paper]](https://arxiv.org/abs/2503.20666) | [code]

- [2025/02/17] **Leveraging Dual Process Theory in Language Agent Framework for Real-time Simultaneous Human-AI Collaboration** | [[paper]](https://arxiv.org/abs/2502.11882) | [code]

- [2025/02/05] **Multimodal Transformer Models for Turn-taking Prediction: Effects on Conversational Dynamics of Human-Agent Interaction during Cooperative Gameplay** | [[paper]](https://arxiv.org/abs/2503.16432) | [code]

- [2025/01/28] **CowPilot: A Framework for Autonomous and Human-Agent Collaborative Web Navigation** | [[paper]](https://arxiv.org/abs/2501.16609) | [code]

- [2024/12/20] **Collaborative Gym: A Framework for Enabling and Evaluating Human-Agent Collaboration** | [[paper]](https://arxiv.org/abs/2412.15701) | [code]

- [2024/06/28] **Designing and Evaluating Multi-Chatbot Interface for Human-AI Communication: Preliminary Findings from a Persuasion Task** | [[paper]](https://arxiv.org/abs/2406.19648) | [code]

- [2024/06/11] **Towards Human-AI Collaboration in Healthcare: Guided Deferral Systems with Large Language Models** | [[paper]](https://arxiv.org/abs/2406.07212) | [code]

- [2024/06/02] **Towards a copilot in BIM authoring tool using a large language model-based agent for intelligent human-machine interaction** | [[paper]](https://arxiv.org/abs/2406.16903) | [code]

- [2024/03/05] **ChatCite: LLM Agent with Human Workflow Guidance for Comparative Literature Summary** | [[paper]](https://arxiv.org/abs/2403.02574) | [code]

- [2024/02/20] **Large Language Model-based Human-Agent Collaboration for Complex Task Solving** | [[paper]](https://arxiv.org/abs/2402.12914) | [code]

- [2024/02/18] **Shaping Human-AI Collaboration: Varied Scaffolding Levels in Co-writing with Language Models** | [[paper]](https://arxiv.org/abs/2402.11723) | [code]

- [2024/02/17] **MONAL: Model Autophagy Analysis for Modeling Human-AI Interactions** | [[paper]](https://arxiv.org/abs/2402.11271) | [code]

- [2023/09/22] **Learning to Coordinate with Anyone** | [[paper]](https://arxiv.org/abs/2309.12633) | [code]

- [2023/07/31] **HAGRID: A Human-LLM Collaborative Dataset for Generative Information-Seeking with Attribution** | [[paper]](https://arxiv.org/abs/2307.16883) | [code]

- [2023/04/26] **Multi-Party Chat: Conversational Agents in Group Settings with Humans and Models** | [[paper]](https://arxiv.org/abs/2304.13835) | [code]

#### Tool Usage
- [2025/07/10] **PyVision: Agentic Vision with Dynamic Tooling** | [[paper]](https://arxiv.org/abs/2507.07998) | [code]

- [2025/07/09] **VisualTrap: A Stealthy Backdoor Attack on GUI Agents via Visual Grounding Manipulation** | [[paper]](https://arxiv.org/abs/2507.06899) | [code]

- [2025/07/03] **WebSailor: Navigating Super-human Reasoning for Web Agent** | [[paper]](https://arxiv.org/abs/2507.02592) | [code]

- [2025/07/02] **OpenTable-R1: A Reinforcement Learning Augmented Tool Agent for Open-Domain Table Question Answering** | [[paper]](https://arxiv.org/abs/2507.03018) | [code]

- [2025/06/30] **LineRetriever: Planning-Aware Observation Reduction for Web Agents** | [[paper]](https://arxiv.org/abs/2507.00210) | [code]

- [2025/06/27] **More Vulnerable than You Think: On the Stability of Tool-Integrated LLM Agents** | [[paper]](https://arxiv.org/abs/2506.21967) | [code]

- [2025/06/24] **Doc2Agent: Scalable Generation of Tool-Using Agents from API Documentation** | [[paper]](https://arxiv.org/abs/2506.19998) | [code]

- [2025/06/24] **NaviAgent: Bilevel Planning on Tool Dependency Graphs for Function Calling** | [[paper]](https://arxiv.org/abs/2506.19500) | [code]

- [2025/06/18] **Understanding GUI Agent Localization Biases through Logit Sharpness** | [[paper]](https://arxiv.org/abs/2506.15425) | [code]

- [2025/06/18] **Embodied Web Agents: Bridging Physical-Digital Realms for Integrated Agent Intelligence** | [[paper]](https://arxiv.org/abs/2506.15677) | [code]

- [2025/06/17] **AgentSynth: Scalable Task Generation for Generalist Computer-Use Agents** | [[paper]](https://arxiv.org/abs/2506.14205) | [code]

- [2025/06/12] **VideoDeepResearch: Long Video Understanding With Agentic Tool Using** | [[paper]](https://arxiv.org/abs/2506.10821) | [code]

- [2025/06/12] **Build the web for agents, not agents for the web** | [[paper]](https://arxiv.org/abs/2506.10953) | [code]

- [2025/06/10] **Atomic-to-Compositional Generalization for Mobile Agents with A New Benchmark and Scheduling System** | [[paper]](https://arxiv.org/abs/2506.08972) | [code]

- [2025/06/10] **GUIRoboTron-Speech: Towards Automated GUI Agents Based on Speech Instructions** | [[paper]](https://arxiv.org/abs/2506.11127) | [code]

- [2025/06/09] **CheMatAgent: Enhancing LLMs for Chemistry and Materials Science through Tree-Search Based Tool Learning** | [[paper]](https://arxiv.org/abs/2506.07551) | [code]

- [2025/06/04] **Go-Browse: Training Web Agents with Structured Exploration** | [[paper]](https://arxiv.org/abs/2506.03533) | [code]

- [2025/06/03] **GUI-Actor: Coordinate-Free Visual Grounding for GUI Agents** | [[paper]](https://arxiv.org/abs/2506.03143) | [code]

- [2025/06/02] **AgentCPM-GUI: Building Mobile-Use Agents with Reinforcement Fine-Tuning** | [[paper]](https://arxiv.org/abs/2506.01391) | [code]

- [2025/05/30] **MedOrch: Medical Diagnosis with Tool-Augmented Reasoning Agents for Flexible Extensibility** | [[paper]](https://arxiv.org/abs/2506.00235) | [code]

- [2025/05/28] **RedTeamCUA: Realistic Adversarial Testing of Computer-Use Agents in Hybrid Web-OS Environments** | [[paper]](https://arxiv.org/abs/2505.21936) | [code]

- [2025/05/28] **EvolveSearch: An Iterative Self-Evolving Search Agent** | [[paper]](https://arxiv.org/abs/2505.22501) | [code]

- [2025/05/28] **UI-Evol: Automatic Knowledge Evolving for Computer Use Agents** | [[paper]](https://arxiv.org/abs/2505.21964) | [code]

- [2025/05/28] **WebDancer: Towards Autonomous Information Seeking Agency** | [[paper]](https://arxiv.org/abs/2505.22648) | [[code]](https://github.com/Alibaba-NLP/WebAgent)

- [2025/05/27] **BacktrackAgent: Enhancing GUI Agent with Error Detection and Backtracking Mechanism** | [[paper]](https://arxiv.org/abs/2505.20660) | [code]

- [2025/05/27] **UI-Genie: A Self-Improving Approach for Iteratively Boosting MLLM-based Mobile GUI Agents** | [[paper]](https://arxiv.org/abs/2505.21496) | [code]

- [2025/05/27] **ChemHAS: Hierarchical Agent Stacking for Enhancing Chemistry Tools** | [[paper]](https://arxiv.org/abs/2505.21569) | [code]

- [2025/05/26] **T^2Agent A Tool-augmented Multimodal Misinformation Detection Agent with Monte Carlo Tree Search** | [[paper]](https://arxiv.org/abs/2505.19768) | [code]

- [2025/05/26] **WebCoT: Enhancing Web Agent Reasoning by Reconstructing Chain-of-Thought in Reflection, Branching, and Rollback** | [[paper]](https://arxiv.org/abs/2505.20013) | [code]

- [2025/05/23] **Deep Video Discovery: Agentic Search with Tool Use for Long-form Video Understanding** | [[paper]](https://arxiv.org/abs/2505.18079) | [code]

- [2025/05/23] **ProgRM: Build Better GUI Agents with Progress Rewards** | [[paper]](https://arxiv.org/abs/2505.18121) | [code]

- [2025/05/23] **Gaming Tool Preferences in Agentic LLMs** | [[paper]](https://arxiv.org/abs/2505.18135) | [code]

- [2025/05/22] **WebAgent-R1: Training Web Agents via End-to-End Multi-Turn Reinforcement Learning** | [[paper]](https://arxiv.org/abs/2505.16421) | [code]

- [2025/05/22] **T1: A Tool-Oriented Conversational Dataset for Multi-Turn Agentic Planning** | [[paper]](https://arxiv.org/abs/2505.16986) | [code]

- [2025/05/21] **Web-Shepherd: Advancing PRMs for Reinforcing Web Agents** | [[paper]](https://arxiv.org/abs/2505.15277) | [code]

- [2025/05/21] **X-WebAgentBench: A Multilingual Interactive Web Benchmark for Evaluating Global Agentic System** | [[paper]](https://arxiv.org/abs/2505.15372) | [code]

- [2025/05/21] **GUI-G1: Understanding R1-Zero-Like Training for Visual Grounding in GUI Agents** | [[paper]](https://arxiv.org/abs/2505.15810) | [code]

- [2025/05/21] **AgentThink: A Unified Framework for Tool-Augmented Chain-of-Thought Reasoning in Vision-Language Models for Autonomous Driving** | [[paper]](https://arxiv.org/abs/2505.15298) | [code]

- [2025/05/20] **Mobile-Agent-V: A Video-Guided Approach for Effortless and Efficient Operational Knowledge Injection in Mobile Automation** | [[paper]](https://arxiv.org/abs/2505.13887) | [code]

- [2025/05/20] **Efficient Agent Training for Computer Use** | [[paper]](https://arxiv.org/abs/2505.13909) | [code]

- [2025/05/20] **s3: You Don't Need That Much Data to Train a Search Agent via RL** | [[paper]](https://arxiv.org/abs/2505.14146) | [code]

- [2025/05/19] **GEM: Gaussian Embedding Modeling for Out-of-Distribution Detection in GUI Agents** | [[paper]](https://arxiv.org/abs/2505.12842) | [code]

- [2025/05/18] **Enhance Mobile Agents Thinking Process Via Iterative Preference Learning** | [[paper]](https://arxiv.org/abs/2505.12299) | [code]

- [2025/05/17] **Demystifying and Enhancing the Efficiency of Large Language Model Based Search Agents** | [[paper]](https://arxiv.org/abs/2505.12065) | [code]

- [2025/05/16] **EnvInjection: Environmental Prompt Injection Attack to Multi-modal Web Agents** | [[paper]](https://arxiv.org/abs/2505.11717) | [code]

- [2025/05/09] **ScaleMCP: Dynamic and Auto-Synchronizing Model Context Protocol Tools for LLM Agents** | [[paper]](https://arxiv.org/abs/2505.06416) | [code]

- [2025/04/28] **MICE for CATs: Model-Internal Confidence Estimation for Calibrating Agents with Tools** | [[paper]](https://arxiv.org/abs/2504.20168) | [code]

- [2025/04/27] **AndroidGen: Building an Android Language Agent under Data Scarcity** | [[paper]](https://arxiv.org/abs/2504.19298) | [code]

- [2025/04/24] **Toward a Human-Centered Evaluation Framework for Trustworthy LLM-Powered GUI Agents** | [[paper]](https://arxiv.org/abs/2504.17934) | [code]

- [2025/04/23] **WebEvolver: Enhancing Web Agent Self-Improvement with Coevolving World Model** | [[paper]](https://arxiv.org/abs/2504.21024) | [code]

- [2025/04/22] **Guiding VLM Agents with Process Rewards at Inference Time for GUI Navigation** | [[paper]](https://arxiv.org/abs/2504.16073) | [code]

- [2025/04/19] **InfiGUI-R1: Advancing Multimodal GUI Agents from Reactive Actors to Deliberative Reasoners** | [[paper]](https://arxiv.org/abs/2504.14239) | [code]

- [2025/04/17] **WebLists: Extracting Structured Information From Complex Interactive Websites Using Executable LLM Agents** | [[paper]](https://arxiv.org/abs/2504.12682) | [code]

- [2025/04/16] **Enhancing Web Agents with Explicit Rollback Mechanisms** | [[paper]](https://arxiv.org/abs/2504.11788) | [code]

- [2025/04/15] **The Obvious Invisible Threat: LLM-Powered GUI Agents' Vulnerability to Fine-Print Injections** | [[paper]](https://arxiv.org/abs/2504.11281) | [code]

- [2025/04/14] **Breaking the Data Barrier -- Building GUI Agents Through Task Generalization** | [[paper]](https://arxiv.org/abs/2504.10127) | [code]

- [2025/04/14] **GUI-R1 : A Generalist R1-Style Vision-Language Action Model For GUI Agents** | [[paper]](https://arxiv.org/abs/2504.10458) | [code]

- [2025/04/09] **Inducing Programmatic Skills for Agentic Tasks** | [[paper]](https://arxiv.org/abs/2504.06821) | [code]

- [2025/04/09] **SkillWeaver: Web Agents can Self-Improve by Discovering and Honing Skills** | [[paper]](https://arxiv.org/abs/2504.07079) | [code]

- [2025/04/02] **An Illusion of Progress? Assessing the Current State of Web Agents** | [[paper]](https://arxiv.org/abs/2504.01382) | [code]

- [2025/04/01] **On the Robustness of Agentic Function Calling** | [[paper]](https://arxiv.org/abs/2504.00914) | [code]

- [2025/04/01] **Agent S2: A Compositional Generalist-Specialist Framework for Computer Use Agents** | [[paper]](https://arxiv.org/abs/2504.00906) | [code]

- [2025/03/26] **Open Deep Search: Democratizing Search with Open-source Reasoning Agents** | [[paper]](https://arxiv.org/abs/2503.20201) | [code]

- [2025/03/24] **Safeguarding Mobile GUI Agent via Logic-based Action Verification** | [[paper]](https://arxiv.org/abs/2503.18492) | [code]

- [2025/03/18] **PLAY2PROMPT: Zero-shot Tool Instruction Optimization for LLM Agents via Tool Play** | [[paper]](https://arxiv.org/abs/2503.14432) | [code]

- [2025/03/14] **DeskVision: Large Scale Desktop Region Captioning for Advanced GUI Agents** | [[paper]](https://arxiv.org/abs/2503.11170) | [code]

- [2025/03/12] **Learning to Contextualize Web Pages for Enhanced Decision Making by LLM Agents** | [[paper]](https://arxiv.org/abs/2503.10689) | [code]

- [2025/03/10] **BEARCUBS: A benchmark for computer-using web agents** | [[paper]](https://arxiv.org/abs/2503.07919) | [code]

- [2025/03/06] **Measuring temporal effects of agent knowledge by date-controlled tool use** | [[paper]](https://arxiv.org/abs/2503.04188) | [code]

- [2025/03/06] **SafeArena: Evaluating the Safety of Autonomous Web Agents** | [[paper]](https://arxiv.org/abs/2503.04957) | [code]

- [2025/03/04] **LiteWebAgent: The Open-Source Suite for VLM-Based Web-Agent Applications** | [[paper]](https://arxiv.org/abs/2503.02950) | [code]

- [2025/03/01] **Smoothing Grounding and Reasoning for MLLM-Powered GUI Agents with Query-Oriented Pivot Tasks** | [[paper]](https://arxiv.org/abs/2503.00401) | [code]

- [2025/02/27] **Why Are Web AI Agents More Vulnerable Than Standalone LLMs? A Security Analysis** | [[paper]](https://arxiv.org/abs/2502.20383) | [code]

- [2025/02/24] **MobileSteward: Integrating Multiple App-Oriented Agents with Self-Evolution to Automate Cross-App Instructions** | [[paper]](https://arxiv.org/abs/2502.16796) | [code]

- [2025/02/24] **Mobile-Agent-V: Learning Mobile Device Operation Through Video-Guided Multi-Agent Collaboration** | [[paper]](https://arxiv.org/abs/2502.17110) | [[code]](https://github.com/X-PLUG/MobileAgent)

- [2025/02/17] **LLM Agents Making Agent Tools** | [[paper]](https://arxiv.org/abs/2502.11705) | [code]

- [2025/02/17] **SMART: Self-Aware Agent for Tool Overuse Mitigation** | [[paper]](https://arxiv.org/abs/2502.11435) | [code]

- [2025/02/16] **OctoTools: An Agentic Framework with Extensible Tools for Complex Reasoning** | [[paper]](https://arxiv.org/abs/2502.11271) | [code]

- [2025/02/12] **Can a Single Model Master Both Multi-turn Conversations and Tool Use? CoALM: A Unified Conversational Agentic Language Model** | [[paper]](https://arxiv.org/abs/2502.08820) | [code]

- [2025/02/07] **Agentic Reasoning: Reasoning LLMs with Tools for the Deep Research** | [[paper]](https://arxiv.org/abs/2502.04644) | [code]

- [2025/02/06] **Division-of-Thoughts: Harnessing Hybrid Language Model Synergy for Efficient On-Device Agents** | [[paper]](https://arxiv.org/abs/2502.04392) | [code]

- [2025/02/05] **ReachAgent: Enhancing Mobile Agent via Page Reaching and Operation** | [[paper]](https://arxiv.org/abs/2502.02955) | [code]

- [2025/01/28] **CowPilot: A Framework for Autonomous and Human-Agent Collaborative Web Navigation** | [[paper]](https://arxiv.org/abs/2501.16609) | [code]

- [2025/01/21] **UI-TARS: Pioneering Automated GUI Interaction with Native Agents** | [[paper]](https://arxiv.org/abs/2501.12326) | [code]

- [2025/01/20] **Mobile-Agent-E: Self-Evolving Mobile Assistant for Complex Tasks** | [[paper]](https://arxiv.org/abs/2501.11733) | [code]

- [2025/01/20] **PlotEdit: Natural Language-Driven Accessible Chart Editing in PDFs via Multimodal LLM Agents** | [[paper]](https://arxiv.org/abs/2501.11233) | [code]

- [2025/01/08] **InfiGUIAgent: A Multimodal Generalist GUI Agent with Native Reasoning and Reflection** | [[paper]](https://arxiv.org/abs/2501.04575) | [code]

- [2025/01/08] **FinSphere: A Conversational Stock Analysis Agent Equipped with Quantitative Tools based on Real-Time Database** | [[paper]](https://arxiv.org/abs/2501.12399) | [code]

- [2025/01/07] **PPTAgent: Generating and Evaluating Presentations Beyond Text-to-Slides** | [[paper]](https://arxiv.org/abs/2501.03936) | [code]

- [2024/12/28] **Efficient Multi-Agent Collaboration with Tool Use for Online Planning in Complex Table Question Answering** | [[paper]](https://arxiv.org/abs/2412.20145) | [code]

- [2024/12/21] **InfoTech Assistant : A Multimodal Conversational Agent for InfoTechnology Web Portal Queries** | [[paper]](https://arxiv.org/abs/2412.16412) | [code]

- [2024/12/12] **AgentTrek: Agent Trajectory Synthesis via Guiding Replay with Web Tutorials** | [[paper]](https://arxiv.org/abs/2412.09605) | [code]

- [2024/12/08] **Cooperative SQL Generation for Segmented Databases By Using Multi-functional LLM Agents** | [[paper]](https://arxiv.org/abs/2412.05850) | [code]

- [2024/12/05] **Aguvis: Unified Pure Vision Agents for Autonomous GUI Interaction** | [[paper]](https://arxiv.org/abs/2412.04454) | [code]

- [2024/11/26] **ShowUI: One Vision-Language-Action Model for GUI Visual Agent** | [[paper]](https://arxiv.org/abs/2411.17465) | [code]

- [2024/11/22] **ScribeAgent: Towards Specialized Web Agents Using Production-Scale Workflow Data** | [[paper]](https://arxiv.org/abs/2411.15004) | [code]

- [2024/11/20] **AdaptAgent: Adapting Multimodal Web Agents with Few-Shot Learning from Human Demonstrations** | [[paper]](https://arxiv.org/abs/2411.13451) | [code]

- [2024/11/15] **The Dawn of GUI Agent: A Preliminary Case Study with Claude 3.5 Computer Use** | [[paper]](https://arxiv.org/abs/2411.10323) | [code]

- [2024/11/04] **WebRL: Training LLM Web Agents via Self-Evolving Online Curriculum Reinforcement Learning** | [[paper]](https://arxiv.org/abs/2411.02337) | [code]

- [2024/11/04] **Attacking Vision-Language Computer Agents via Pop-ups** | [[paper]](https://arxiv.org/abs/2411.02391) | [code]

- [2024/11/02] **Infant Agent: A Tool-Integrated, Logic-Driven Agent with Cost-Effective API Usage** | [[paper]](https://arxiv.org/abs/2411.01114) | [code]

- [2024/10/28] **AutoGLM: Autonomous Foundation Agents for GUIs** | [[paper]](https://arxiv.org/abs/2411.00820) | [code]

- [2024/10/25] **OpenWebVoyager: Building Multimodal Web Agents via Iterative Real-World Exploration, Feedback and Optimization** | [[paper]](https://arxiv.org/abs/2410.19609) | [code]

- [2024/10/24] **Infogent: An Agent-Based Framework for Web Information Aggregation** | [[paper]](https://arxiv.org/abs/2410.19054) | [code]

- [2024/10/23] **ReflecTool: Towards Reflection-Aware Tool-Augmented Clinical Agents** | [[paper]](https://arxiv.org/abs/2410.17657) | [code]

- [2024/10/22] **Large Language Models Empowered Personalized Web Agents** | [[paper]](https://arxiv.org/abs/2410.17236) | [code]

- [2024/10/21] **VipAct: Visual-Perception Enhancement via Specialized VLM Agent Collaboration and Tool-use** | [[paper]](https://arxiv.org/abs/2410.16400) | [code]

- [2024/10/21] **Beyond Browsing: API-Based Web Agents** | [[paper]](https://arxiv.org/abs/2410.16464) | [code]

- [2024/10/18] **Toolshed: Scale Tool-Equipped Agents with Advanced RAG-Tool Fusion and Tool Knowledge Bases** | [[paper]](https://arxiv.org/abs/2410.14594) | [code]

- [2024/10/17] **Web Agents with World Models: Learning and Leveraging Environment Dynamics in Web Navigation** | [[paper]](https://arxiv.org/abs/2410.13232) | [code]

- [2024/10/17] **MeNTi: Bridging Medical Calculator and LLM Agent with Nested Tool Calling** | [[paper]](https://arxiv.org/abs/2410.13610) | [code]

- [2024/10/17] **MobA: A Two-Level Agent System for Efficient Mobile Task Automation** | [[paper]](https://arxiv.org/abs/2410.13757) | [code]

- [2024/10/17] **AgentOccam: A Simple Yet Strong Baseline for LLM-Based Web Agents** | [[paper]](https://arxiv.org/abs/2410.13825) | [code]

- [2024/10/16] **Agent Skill Acquisition for Large Language Models via CycleQD** | [[paper]](https://arxiv.org/abs/2410.14735) | [code]

- [2024/10/10] **Agent S: An Open Agentic Framework that Uses Computers Like a Human** | [[paper]](https://arxiv.org/abs/2410.08164) | [code]

- [2024/10/07] **Navigating the Digital World as Humans Do: Universal Visual Grounding for GUI Agents** | [[paper]](https://arxiv.org/abs/2410.05243) | [code]

- [2024/10/03] **NNetNav: Unsupervised Learning of Browser Agents Through Environment Interaction in the Wild** | [[paper]](https://arxiv.org/abs/2410.02907) | [code]

- [2024/09/24] **Automated test generation to evaluate tool-augmented LLMs as conversational AI agents** | [[paper]](https://arxiv.org/abs/2409.15934) | [code]

- [2024/09/17] **EIA: Environmental Injection Attack on Generalist Web Agents for Privacy Leakage** | [[paper]](https://arxiv.org/abs/2409.11295) | [code]

- [2024/09/01] **TinyAgent: Function Calling at the Edge** | [[paper]](https://arxiv.org/abs/2409.00608) | [code]

- [2024/08/30] **Tool-Assisted Agent on SQL Inspection and Refinement in Real-World Scenarios** | [[paper]](https://arxiv.org/abs/2408.16991) | [code]

- [2024/08/15] **VerilogCoder: Autonomous Verilog Coding Agents with Graph-based Planning and Abstract Syntax Tree
(AST)-based Waveform Tracing Tool** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.08927) | [code]\n\n- [2024\u002F08\u002F05] **Caution for the Environment: Multimodal Agents are Susceptible to Environmental Distractions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.02544) | [code]\n\n- [2024\u002F08\u002F01] **OmniParser for Pure Vision Based GUI Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.00203) | [code]\n\n- [2024\u002F07\u002F26] **AppWorld: A Controllable World of Apps and People for Benchmarking Interactive Coding Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.18901) | [[code]](https:\u002F\u002Fgithub.com\u002Fstonybrooknlp\u002Fappworld)\n\n- [2024\u002F07\u002F22] **AssistantBench: Can Web Agents Solve Realistic and Time-Consuming Tasks?** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.15711) | [code]\n\n- [2024\u002F07\u002F11] **GTA: A Benchmark for General Tool Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.08713) | [code]\n\n- [2024\u002F07\u002F01] **Mobile-Bench: An Evaluation Benchmark for LLM-based Mobile Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.00993) | [code]\n\n- [2024\u002F06\u002F17] **GUICourse: From General Vision Language Models to Versatile GUI Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.11317) | [[code]](https:\u002F\u002Fgithub.com\u002Fyiye3\u002Fguicourse)\n\n- [2024\u002F06\u002F16] **GUI-WORLD: A Dataset for GUI-oriented Multimodal LLM-based Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.10819) | [code]\n\n- [2024\u002F06\u002F06] **Tool-Planner: Task Planning with Clusters across Multiple Tools** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.03807) | [[code]](https:\u002F\u002Fgithub.com\u002FOceannTwT\u002FTool-Planner)\n\n- [2024\u002F06\u002F03] **Mobile-Agent-v2: Mobile Device Operation Assistant with Effective Navigation via Multi-Agent 
Collaboration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.01014) | [[code]](https:\u002F\u002Fgithub.com\u002Fx-plug\u002Fmobileagent)\n\n- [2024\u002F06\u002F02] **Towards a copilot in BIM authoring tool using a large language model-based agent for intelligent human-machine interaction** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.16903) | [code]\n\n- [2024\u002F05\u002F30] **Large Language Models Can Self-Improve At Web Agent Tasks** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.20309) | [code]\n\n- [2024\u002F05\u002F17] **Latent State Estimation Helps UI Agents to Reason** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.11120) | [code]\n\n- [2024\u002F05\u002F06] **SWE-agent: Agent-Computer Interfaces Enable Automated Software Engineering** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.15793) | [code]\n\n- [2024\u002F05\u002F02] **CACTUS: Chemistry Agent Connecting Tool-Usage to Science** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.00972) | [[code]](https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fcactus)\n\n- [2024\u002F05\u002F01] **Navigating WebAI: Training Agents to Complete Web Tasks with Large Language Models and Reinforcement Learning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.00516) | [code]\n\n- [2024\u002F04\u002F23] **Evaluating Tool-Augmented Agents in Remote Sensing Platforms** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.00709) | [code]\n\n- [2024\u002F04\u002F17] **The Landscape of Emerging AI Agent Architectures for Reasoning, Planning, and Tool Calling: A Survey** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.11584) | [code]\n\n- [2024\u002F04\u002F17] **Octopus v3: Technical Report for On-device Sub-billion Multimodal AI Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.11459) | [code]\n\n- [2024\u002F04\u002F16] **Grounded Language Agent for Product Search via Intelligent Web Interactions** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.10887) | [code]\n\n- [2024\u002F04\u002F04] **AutoWebGLM: A Large Language Model-based Web Navigating Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.03648) | [[code]](https:\u002F\u002Fgithub.com\u002FTHUDM\u002FAutoWebGLM)\n\n- [2024\u002F04\u002F01] **Rapid Mobile App Development for Generative AI Agents on MIT App Inventor** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.01561) | [code]\n\n- [2024\u002F03\u002F05] **InjecAgent: Benchmarking Indirect Prompt Injections in Tool-Integrated Large Language Model Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.02691) | [code]\n\n- [2024\u002F03\u002F05] **Android in the Zoo: Chain-of-Action-Thought for GUI Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.02713) | [code]\n\n- [2024\u002F02\u002F27] **BASES: Large-scale Web Search User Simulation with Large Language Model based Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.17505) | [code]\n\n- [2024\u002F02\u002F26] **Look Before You Leap: Towards Decision-Aware and Generalizable Tool-Usage for Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.16696) | [code]\n\n- [2024\u002F02\u002F23] **On the Multi-turn Instruction Following for Conversational Web Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.15057) | [code]\n\n- [2024\u002F02\u002F20] **AgentMD: Empowering Language Agents for Risk Prediction with Large-Scale Clinical Tool Learning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.13225) | [code]\n\n- [2024\u002F02\u002F18] **SciAgent: Tool-augmented Language Models for Scientific Reasoning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.11451) | [code]\n\n- [2024\u002F02\u002F16] **ToolSword: Unveiling Safety Issues of Large Language Models in Tool Learning Across Three Stages** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.10753) | [[code]](https:\u002F\u002Fgithub.com\u002Fjunjie-ye\u002Ftoolsword)\n\n- [2024\u002F02\u002F08] **UFO: A UI-Focused Agent for Windows OS Interaction** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.07939) | [code]\n\n- [2024\u002F02\u002F06] **AnyTool: Self-Reflective, Hierarchical Agents for Large-Scale API Calls** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.04253) | [[code]](https:\u002F\u002Fgithub.com\u002Fdyabel\u002Fanytool)\n\n- [2024\u002F01\u002F11] **EASYTOOL: Enhancing LLM-based Agents with Concise Tool Instruction** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.06201) | [code]\n\n- [2024\u002F01\u002F03] **GPT-4V(ision) is a Generalist Web Agent, if Grounded** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.01614) | [code]\n\n- [2023\u002F12\u002F21] **AppAgent: Multimodal Agents as Smartphone Users** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.13771) | [code]\n\n- [2023\u002F12\u002F18] **CLOVA: A Closed-Loop Visual Assistant with Tool Usage and Update** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.10908) | [[code]](https:\u002F\u002Fclova-tool.github.io\u002F)\n\n- [2023\u002F12\u002F14] **CogAgent: A Visual Language Model for GUI Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.08914) | [code]\n\n- [2023\u002F11\u002F19] **TPTU-v2: Boosting Task Planning and Tool Usage of Large Language Model-based Agents in Real-world Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.11315) | [code]\n\n- [2023\u002F11\u002F15] **ToolTalk: Evaluating Tool-Usage in a Conversational Setting** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.10775) | [code]\n\n- [2023\u002F11\u002F10] **Smart Agent-Based Modeling: On the Use of Large Language Models in Computer Simulations** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.06330) | [code]\n\n- [2023\u002F10\u002F12] 
**A Zero-Shot Language Agent for Computer Control with Structured Reflection** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.08740) | [code]\n\n- [2023\u002F08\u002F07] **TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.03427) | [code]\n\n- [2023\u002F06\u002F09] **Mind2Web: Towards a Generalist Agent for the Web** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.06070) | [code]\n\n- [2023\u002F05\u002F22] **Making Language Models Better Tool Learners with Execution Feedback** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.13068) | [code]\n\n- [2023\u002F05\u002F19] **ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.11554) | [code]\n\n#### Simulation\n- [2025\u002F07\u002F10] **Automating MD simulations for Proteins using Large language Models: NAMD-Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.07887) | [code]\n\n- [2025\u002F07\u002F01] **TransLaw: Benchmarking Large Language Models in Multi-Agent Simulation of the Collaborative Translation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.00875) | [code]\n\n- [2025\u002F06\u002F26] **CitySim: Modeling Urban Behaviors and City Dynamics with Large-Scale LLM-Driven Agent Simulation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.21805) | [code]\n\n- [2025\u002F06\u002F24] **LLM-Based Social Simulations Require a Boundary** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.19806) | [code]\n\n- [2025\u002F06\u002F23] **TrajTok: Technical Report for 2025 Waymo Open Sim Agents Challenge** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.21618) | [code]\n\n- [2025\u002F06\u002F16] **CAMS: A CityGPT-Powered Agentic Framework for Urban Human Mobility Simulation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.13599) | [code]\n\n- 
[2025\u002F06\u002F07] **Modeling Earth-Scale Human-Like Societies with One Billion Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.12078) | [code]\n\n- [2025\u002F06\u002F03] **MASTER: Enhancing Large Language Model via Multi-Agent Simulated Teaching** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.02689) | [code]\n\n- [2025\u002F06\u002F02] **LAM SIMULATOR: Advancing Data Generation for Large Action Model Training via Online Exploration and Trajectory Feedback** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.02298) | [code]\n\n- [2025\u002F05\u002F31] **Dyna-Think: Synergizing Reasoning, Acting, and World Model Simulation in AI Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.00320) | [code]\n\n- [2025\u002F05\u002F28] **Scalable, Symbiotic, AI and Non-AI Agent Based Parallel Discrete Event Simulations** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.23846) | [code]\n\n- [2025\u002F05\u002F26] **Embracing Imperfection: Simulating Students with Diverse Cognitive Levels Using LLM-based Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.19997) | [code]\n\n- [2025\u002F05\u002F25] **When Ethics and Payoffs Diverge: LLM Agents in Morally Charged Social Dilemmas** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.19212) | [code]\n\n- [2025\u002F05\u002F19] **Simulation Agent: A Framework for Integrating Simulation and Large Language Models for Enhanced Decision-Making** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.13761) | [code]\n\n- [2025\u002F05\u002F11] **EcoLANG: Efficient and Effective Agent Communication Language Induction for Social Simulation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.06904) | [code]\n\n- [2025\u002F04\u002F20] **BookWorld: From Novels to Interactive Agent Societies for Creative Story Generation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.14538) | [code]\n\n- [2025\u002F04\u002F17] **SimUSER: 
Simulating User Behavior with Large Language Models for Recommender System Evaluation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.12722) | [code]\n\n- [2025\u002F04\u002F14] **SocioVerse: A World Model for Social Simulation Powered by LLM Agents and A Pool of 10 Million Real-World Users** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.10157) | [code]\n\n- [2025\u002F04\u002F10] **MOSAIC: Modeling Social AI for Content Dissemination and Regulation in Multi-Agent Simulations** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.07830) | [code]\n\n- [2025\u002F04\u002F04] **APIGen-MT: Agentic Pipeline for Multi-Turn Data Generation via Simulated Agent-Human Interplay** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.03601) | [code]\n\n- [2025\u002F04\u002F04] **Algorithmic Prompt Generation for Diverse Human-like Teaming and Communication with Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.03991) | [code]\n\n- [2025\u002F03\u002F28] **Self-Evolving Multi-Agent Simulations for Realistic Clinical Interactions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.22678) | [code]\n\n- [2025\u002F03\u002F18] **Retrieval-Augmented Simulacra: Generative Agents for Up-to-date and Knowledge-Adaptive Simulations** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.14620) | [code]\n\n- [2025\u002F03\u002F12] **Can A Society of Generative Agents Simulate Human Behavior and Inform Public Health Policy? 
A Case Study on Vaccine Hesitancy** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.09639) | [code]\n\n- [2025\u002F02\u002F06] **Simulating the Emergence of Differential Case Marking with Communicating Neural-Network Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.04038) | [code]\n\n- [2025\u002F02\u002F03] **Eliciting Language Model Behaviors with Investigator Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.01236) | [code]\n\n- [2025\u002F02\u002F03] **TwinMarket: A Scalable Behavioral and Social Simulation for Financial Markets** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.01506) | [code]\n\n- [2025\u002F01\u002F25] **Are Human Interactions Replicable by Generative Agents? A Case Study on Pronoun Usage in Hierarchical Interactions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.15283) | [code]\n\n- [2025\u002F01\u002F19] **Self-Explanation in Social AI Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.13945) | [code]\n\n- [2025\u002F01\u002F12] **LLMs Model Non-WEIRD Populations: Experiments with Synthetic Cultural Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.06834) | [code]\n\n- [2024\u002F12\u002F10] **Political Actor Agent: Simulating Legislative System for Roll Call Votes Prediction with Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.07144) | [code]\n\n- [2024\u002F11\u002F18] **OASIS: Open Agent Social Interaction Simulations with One Million Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.11581) | [code]\n\n- [2024\u002F10\u002F28] **ElectionSim: Massive Population Election Simulation Powered by Large Language Model Driven Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.20746) | [code]\n\n- [2024\u002F10\u002F24] **Schema-Guided Culture-Aware Complex Event Simulation with Multi-Agent Role-Play** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.18935) | 
[code]\n\n- [2024\u002F10\u002F18] **SRAP-Agent: Simulating and Optimizing Scarce Resource Allocation Policy with LLM-based Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.14152) | [code]\n\n- [2024\u002F10\u002F05] **Large Language Models can Achieve Social Balance** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.04054) | [code]\n\n- [2024\u002F09\u002F25] **Plurals: A System for Guiding LLMs Via Simulated Social Ensembles** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.17213) | [code]\n\n- [2024\u002F09\u002F14] **Synergistic Simulations: Multi-Agent Problem Solving with Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.13753) | [code]\n\n- [2024\u002F09\u002F02] **Agentic Society: Merging skeleton from real world and texture from Large Language Model** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.10550) | [code]\n\n- [2024\u002F08\u002F28] **Logic-Enhanced Language Model Agents for Trustworthy Social Simulations** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.16081) | [code]\n\n- [2024\u002F08\u002F15] **AgentCourt: Simulating Court with Adversarial Evolvable Lawyer Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.08089) | [code]\n\n- [2024\u002F08\u002F03] **Self-Emotion Blended Dialogue Generation in Social Simulation Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.01633) | [code]\n\n- [2024\u002F06\u002F26] **Simulating The U.S. 
Senate: An LLM-Driven Agent Approach to Modeling Legislative Behavior and Bipartisanship** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.18702) | [code]\n\n- [2024\u002F06\u002F20] **Artificial Leviathan: Exploring Social Evolution of LLM Agents Through the Lens of Hobbesian Social Contract Theory** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.14373) | [code]\n\n- [2024\u002F06\u002F10] **Can Language Models Serve as Text-Based World Simulators?** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.06485) | [code]\n\n- [2024\u002F05\u002F12] **Exploring the Potential of Conversational AI Support for Agent-Based Social Simulation Model Design** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.08032) | [code]\n\n- [2024\u002F04\u002F23] **BattleAgent: Multi-modal Dynamic Emulation on Historical Battles to Complement Historical Analysis** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.15532) | [[code]](https:\u002F\u002Fgithub.com\u002Fagiresearch\u002Fbattleagent)\n\n- [2024\u002F03\u002F20] **AgentGroupChat: An Interactive Group Chat Simulacra For Better Eliciting Emergent Behavior** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.13433) | [code]\n\n- [2024\u002F03\u002F05] **AgentsCourt: Building Judicial Decision-Making Agents with Court Debate Simulation and Legal Knowledge Augmentation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.02959) | [code]\n\n- [2024\u002F02\u002F26] **Unveiling the Truth and Facilitating Change: Towards Agent-based Large-scale Social Movement Simulation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.16333) | [code]\n\n- [2024\u002F02\u002F20] **What if LLMs Have Different World Views: Simulating Alien Civilizations with LLM-based Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.13184) | [code]\n\n- [2024\u002F02\u002F07] **Can Large Language Model Agents Simulate Human Trust Behavior?** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.04559) | [code]\n\n- [2024\u002F01\u002F08] **SpeechAgents: Human-Communication Simulation with Multi-Modal Multi-Agent Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.03945) | [code]\n\n- [2023\u002F12\u002F06] **LLM as OS, Agents as Apps: Envisioning AIOS, Agents and the AIOS-Agent Ecosystem** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.03815) | [code]\n\n- [2023\u002F11\u002F28] **War and Peace (WarAgent): Large Language Model-based Multi-Agent Simulation of World Wars** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.17227) | [code]\n\n- [2023\u002F10\u002F10] **MetaAgents: Simulating Interactions of Human Behaviors for LLM-based Task-oriented Coordination via Collaborative Generative Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.06500) | [code]\n\n- [2023\u002F06\u002F05] **User Behavior Simulation with Large Language Model based Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.02552) | [code]\n\n- [2023\u002F05\u002F26] **Training Socially Aligned Language Models on Simulated Social Interactions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.16960) | [code]\n\n- [2023\u002F04\u002F07] **Generative Agents: Interactive Simulacra of Human Behavior** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.03442) | [code]\n\n### Application\n#### Math\n- [2025\u002F05\u002F21] **ModelingAgent: Bridging LLMs and Mathematical Modeling for Real-World Challenges** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.15068) | [code]\n\n- [2025\u002F03\u002F23] **MathAgent: Leveraging a Mixture-of-Math-Agent Framework for Real-World Multimodal Mathematical Error Detection** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.18132) | [code]\n\n- [2025\u002F03\u002F05] **MA-LoT: Multi-Agent Lean-based Long Chain-of-Thought Reasoning enhances Formal Theorem Proving** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.03205) | [code]\n\n- [2025\u002F02\u002F25] **LLM Knows Geometry Better than Algebra: Numerical Understanding of LLM-Based Agents in A Trading Arena** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.17967) | [code]\n\n- [2025\u002F02\u002F18] **One Size doesn&#39;t Fit All: A Personalized Conversational Tutoring Agent for Mathematics Instruction** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.12633) | [code]\n\n- [2025\u002F02\u002F04] **Automating Mathematical Proof Generation Using Large Language Model Agents and Knowledge Graphs** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.11657) | [code]\n\n- [2024\u002F10\u002F29] **Flow-DPO: Improving LLM Mathematical Reasoning through Online Multi-Agent Learning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.22304) | [code]\n\n- [2024\u002F10\u002F13] **Expanding Search Space with Diverse Prompting Agents: An Efficient Sampling Approach for LLM Mathematical Reasoning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.09780) | [code]\n\n- [2024\u002F08\u002F03] **MathLearner: A Large Language Model Agent Framework for Learning to Solve Mathematical Problems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.01779) | [code]\n\n- [2024\u002F04\u002F10] **MathVC: An LLM-Simulated Multi-Character Virtual Classroom for Mathematics Education** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.06711) | [code]\n\n- [2024\u002F04\u002F06] **MACM: Utilizing a Multi-Agent System for Condition Mining in Solving Complex Mathematical Problems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.04735) | [[code]](https:\u002F\u002Fgithub.com\u002Fbin123apple\u002Fmacm)\n\n#### Chemistry\n- [2025\u002F05\u002F27] **ChemHAS: Hierarchical Agent Stacking for Enhancing Chemistry Tools** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.21569) | [code]\n\n- [2025\u002F04\u002F18] **System of 
Agentic AI for the Discovery of Metal-Organic Frameworks** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.14110) | [code]\n\n- [2025\u002F03\u002F22] **Building Resource-Constrained Language Agents: A Korean Case Study on Chemical Toxicity Information** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.17753) | [code]\n\n- [2025\u002F01\u002F23] **Hypothesis Generation for Materials Discovery and Design Using Goal-Driven and Constraint-Guided LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.13299) | [code]\n\n- [2025\u002F01\u002F11] **ChemAgent: Self-updating Library in Large Language Models Improves Chemical Reasoning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.06590) | [code]\n\n- [2024\u002F08\u002F29] **HoneyComb: A Flexible LLM-Based Agent System for Materials Science** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.00135) | [code]\n\n- [2024\u002F06\u002F26] **A Review of Large Language Models and Autonomous Agents in Chemistry** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.01603) | [code]\n\n#### Biology\n- [2025\u002F04\u002F28] **m-KAILIN: Knowledge-Driven Agentic Scientific Corpus Distillation Framework for Biomedical Large Language Models Training** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.19565) | [code]\n\n- [2025\u002F04\u002F08] **SkillFlow: Efficient Skill and Code Transfer Through Communication in Adapting AI Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.06188) | [code]\n\n- [2025\u002F04\u002F07] **scAgent: Universal Single-Cell Annotation via a LLM Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.04698) | [code]\n\n- [2024\u002F10\u002F16] **PRefLexOR: Preference-based Recursive Language Modeling for Exploratory Optimization of Reasoning and Agentic Thinking** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.12375) | [code]\n\n- [2024\u002F06\u002F29] **BioKGBench: A Knowledge Graph Checking 
Benchmark of AI Agent for Biomedical Science** | [[paper]](https://arxiv.org/abs/2407.00466) | [code]

- [2024/05/25] **GeneAgent: Self-verification Language Agent for Gene Set Knowledge Discovery using Domain Databases** | [[paper]](https://arxiv.org/abs/2405.16205) | [code]

- [2024/04/27] **CRISPR-GPT: An LLM Agent for Automated Design of Gene-Editing Experiments** | [[paper]](https://arxiv.org/abs/2404.18021) | [code]

- [2024/04/03] **Empowering Biomedical Discovery with AI Agents** | [[paper]](https://arxiv.org/abs/2404.02831) | [code]

- [2024/01/27] **ProtAgents: Protein discovery via large language model multi-agent collaborations combining physics and machine learning** | [[paper]](https://arxiv.org/abs/2402.04268) | [code]

#### Physics
- [2025/06/06] **Can Theoretical Physics Research Benefit from Language Agents?** | [[paper]](https://arxiv.org/abs/2506.06214) | [code]

- [2025/01/23] **Hypothesis Generation for Materials Discovery and Design Using Goal-Driven and Constraint-Guided LLM Agents** | [[paper]](https://arxiv.org/abs/2501.13299) | [code]

- [2024/12/09] **StarWhisper Telescope: Agent-Based Observation Assistant System to Approach AI Astrophysicist** | [[paper]](https://arxiv.org/abs/2412.06412) | [code]

- [2024/08/29] **HoneyComb: A Flexible LLM-Based Agent System for Materials Science** | [[paper]](https://arxiv.org/abs/2409.00135) | [code]

- [2024/01/27] **ProtAgents: Protein discovery via large language model multi-agent collaborations combining physics and machine learning** | [[paper]](https://arxiv.org/abs/2402.04268) | [code]

#### Geography
- [2024/12/23] **MineAgent: Towards Remote-Sensing Mineral Exploration with Multimodal Large Language Models** | [[paper]](https://arxiv.org/abs/2412.17339) | [code]

- [2024/07/13] **An Autonomous GIS Agent Framework for Geospatial Data Retrieval** | [[paper]](https://arxiv.org/abs/2407.21024) | [code]

#### Art
- [2025/01/22] **FilmAgent: A Multi-Agent Framework for End-to-End Film Automation in Virtual 3D Spaces** | [[paper]](https://arxiv.org/abs/2501.12909) | [code]

- [2024/10/02] **Agent-Driven Large Language Models for Mandarin Lyric Generation** | [[paper]](https://arxiv.org/abs/2410.01450) | [code]

- [2024/09/05] **LLM-based multi-agent poetry generation in non-cooperative environments** | [[paper]](https://arxiv.org/abs/2409.03659) | [code]

- [2024/08/13] **What should I wear to a party in a Greek taverna? Evaluation for Conversational Agents in the Fashion Domain** | [[paper]](https://arxiv.org/abs/2408.08907) | [code]

- [2024/07/01] **IBSEN: Director-Actor Agent Collaboration for Controllable and Interactive Drama Script Generation** | [[paper]](https://arxiv.org/abs/2407.01093) | [code]

- [2024/04/28] **ComposerX: Multi-Agent Symbolic Music Composition with LLMs** | [[paper]](https://arxiv.org/abs/2404.18081) | [[code]](https://github.com/lllindsey0615/composerx)

- [2024/03/12] **AesopAgent: Agent-driven Evolutionary System on Story-to-Video Production** | [[paper]](https://arxiv.org/abs/2403.07952) | [code]

- [2023/10/18] **MusicAgent: An AI Agent for Music Understanding and Generation with Large Language Models** | [[paper]](https://arxiv.org/abs/2310.11954) | [code]

#### Medicine
- [2025/07/10] **Toward Real-World Chinese Psychological Support Dialogues: CPsDD Dataset and a Co-Evolving Multi-Agent System** | [[paper]](https://arxiv.org/abs/2507.07509) | [code]

- [2025/07/03] **RLVER: Reinforcement Learning with Verifiable Emotion Rewards for Empathetic Agents** | [[paper]](https://arxiv.org/abs/2507.03112) | [code]

- [2025/07/01] **STELLA: Self-Evolving LLM Agent for Biomedical Research** | [[paper]](https://arxiv.org/abs/2507.02004) | [code]

- [2025/06/27] **Exploring Modularity of Agentic Systems for Drug Discovery** | [[paper]](https://arxiv.org/abs/2506.22189) | [code]

- [2025/06/26] **Large Language Model Agent for Modular Task Execution in Drug Discovery** | [[paper]](https://arxiv.org/abs/2507.02925) | [code]

- [2025/06/25] **An Agentic System for Rare Disease Diagnosis with Traceable Reasoning** | [[paper]](https://arxiv.org/abs/2506.20430) | [code]

- [2025/06/24] **MAM: Modular Multi-Agent Framework for Multi-Modal Medical Diagnosis via Role-Specialized Collaboration** | [[paper]](https://arxiv.org/abs/2506.19835) | [code]

- [2025/06/18] **From RAG to Agentic: Validating Islamic-Medicine Responses with LLM Agents** | [[paper]](https://arxiv.org/abs/2506.15911) | [code]

- [2025/06/17] **RadFabric: Agentic AI System with Reasoning Capability for Radiology** | [[paper]](https://arxiv.org/abs/2506.14142) | [code]

- [2025/06/16] **Language Agents for Hypothesis-driven Clinical Decision Making with Reinforcement Learning** | [[paper]](https://arxiv.org/abs/2506.13474) | [code]

- [2025/06/13] **Large Language Model-Powered Conversational Agent Delivering Problem-Solving Therapy (PST) for Family Caregivers: Enhancing Empathy and Therapeutic Alliance Using In-Context Learning** | [[paper]](https://arxiv.org/abs/2506.11376) | [code]

- [2025/06/12] **Neural at ArchEHR-QA 2025: Agentic Prompt Optimization for Evidence-Grounded Clinical Question Answering** | [[paper]](https://arxiv.org/abs/2506.10751) | [code]

- [2025/06/11] **ReasonMed: A 370K Multi-Agent Generated Dataset for Advancing Medical Reasoning** | [[paper]](https://arxiv.org/abs/2506.09513) | [code]

- [2025/06/04] **AI Agents for Conversational Patient Triage: Preliminary Simulation-Based Evaluation with Real-World EHR Data** | [[paper]](https://arxiv.org/abs/2506.04032) | [code]

- [2025/06/04] **MedAgentGym: Training LLM Agents for Code-Based Medical Reasoning at Scale** | [[paper]](https://arxiv.org/abs/2506.04405) | [code]

- [2025/05/31] **MMedAgent-RL: Optimizing Multi-Agent Collaboration for Multimodal Medical Reasoning** | [[paper]](https://arxiv.org/abs/2506.00555) | [code]

- [2025/05/30] **MedOrch: Medical Diagnosis with Tool-Augmented Reasoning Agents for Flexible Extensibility** | [[paper]](https://arxiv.org/abs/2506.00235) | [code]

- [2025/05/27] **Silence is Not Consensus: Disrupting Agreement Bias in Multi-Agent LLMs via Catfish Agent for Clinical Decision Making** | [[paper]](https://arxiv.org/abs/2505.21503) | [code]

- [2025/05/27] **BehaviorSFT: Behavioral Token Conditioning for Clinical Agents Across the Proactivity Spectrum** | [[paper]](https://arxiv.org/abs/2505.21757) | [code]

- [2025/05/24] **DDO: Dual-Decision Optimization via Multi-Agent Collaboration for LLM-Based Medical Consultation** | [[paper]](https://arxiv.org/abs/2505.18630) | [code]

- [2025/05/21] **A Risk Taxonomy for Evaluating AI-Powered Psychotherapy Agents** | [[paper]](https://arxiv.org/abs/2505.15108) | [code]

- [2025/05/18] **MedAgentBoard: Benchmarking Multi-Agent Collaboration with Conventional Methods for Diverse Medical Tasks** | [[paper]](https://arxiv.org/abs/2505.12371) | [code]

- [2025/05/06] **FRAME: Feedback-Refined Agent Methodology for Enhancing Medical Research Insights** | [[paper]](https://arxiv.org/abs/2505.04649) | [code]

- [2025/04/30] **Talk Before You Retrieve: Agent-Led Discussions for Better RAG in Medical QA** | [[paper]](https://arxiv.org/abs/2504.21252) | [code]

- [2025/04/28] **m-KAILIN: Knowledge-Driven Agentic Scientific Corpus Distillation Framework for Biomedical Large Language Models Training** | [[paper]](https://arxiv.org/abs/2504.19565) | [code]

- [2025/04/25] **MAGI: Multi-Agent Guided Interview for Psychiatric Assessment** | [[paper]](https://arxiv.org/abs/2504.18260) | [code]

- [2025/04/13] **EmoAgent: Assessing and Safeguarding Human-AI Interaction for Mental Health Safety** | [[paper]](https://arxiv.org/abs/2504.09689) | [code]

- [2025/04/08] **TxGemma: Efficient and Agentic LLMs for Therapeutics** | [[paper]](https://arxiv.org/abs/2504.06196) | [code]

- [2025/04/04] **YaleNLP @ PerAnsSumm 2025: Multi-Perspective Integration via Mixture-of-Agents for Enhanced Healthcare QA Summarization** | [[paper]](https://arxiv.org/abs/2504.03932) | [code]

- [2025/03/28] **Self-Evolving Multi-Agent Simulations for Realistic Clinical Interactions** | [[paper]](https://arxiv.org/abs/2503.22678) | [code]

- [2025/03/26] **TAMA: A Human-AI Collaborative Thematic Analysis Framework Using Multi-Agent LLMs for Clinical Interviews** |
[[paper]](https://arxiv.org/abs/2503.20666) | [code]

- [2025/03/26] **3MDBench: Medical Multimodal Multi-agent Dialogue Benchmark** | [[paper]](https://arxiv.org/abs/2504.13861) | [code]

- [2025/03/21] **Autonomous Radiotherapy Treatment Planning Using DOLA: A Privacy-Preserving, LLM-Based Optimization Agent** | [[paper]](https://arxiv.org/abs/2503.17553) | [code]

- [2025/03/19] **When Pigs Get Sick: Multi-Agent AI for Swine Disease Detection** | [[paper]](https://arxiv.org/abs/2503.15204) | [code]

- [2025/03/19] **EmpathyAgent: Can Embodied Agents Conduct Empathetic Actions?** | [[paper]](https://arxiv.org/abs/2503.16545) | [code]

- [2025/03/17] **MAP: Evaluation and Multi-Agent Enhancement of Large Language Models for Inpatient Pathways** | [[paper]](https://arxiv.org/abs/2503.13205) | [code]

- [2025/03/10] **MedAgentsBench: Benchmarking Thinking Models and Agent Frameworks for Complex Medical Reasoning** | [[paper]](https://arxiv.org/abs/2503.07459) | [code]

- [2025/03/07] **GEMA-Score: Granular Explainable Multi-Agent Score for Radiology Report Evaluation** | [[paper]](https://arxiv.org/abs/2503.05347) | [code]

- [2025/03/07] **Multi Agent based Medical Assistant for Edge Devices** | [[paper]](https://arxiv.org/abs/2503.05397) | [code]

- [2025/02/27] **M^3Builder: A Multi-Agent System for Automated Machine Learning in Medical Imaging** | [[paper]](https://arxiv.org/abs/2502.20301) | [code]

- [2025/02/26] **MEDDxAgent: A Unified Modular Agent Framework for Explainable Automatic Differential Diagnosis** | [[paper]](https://arxiv.org/abs/2502.19175) | [code]

- [2025/02/25] **Scaffolding Empathy: Training Counselors with Simulated Patients and Utterance-level Performance Visualizations** | [[paper]](https://arxiv.org/abs/2502.18673) | [code]

- [2025/02/24] **Improving Interactive Diagnostic Ability of a Large Language Model Agent Through Clinical Experience Learning** | [[paper]](https://arxiv.org/abs/2503.16463) | [code]

- [2025/02/19] **LIDDIA: Language-based Intelligent Drug Discovery Agent** | [[paper]](https://arxiv.org/abs/2502.13959) | [code]

- [2025/02/18] **An LLM-Powered Agent for Physiological Data Analysis: A Case Study on PPG-based Heart Rate Estimation** | [[paper]](https://arxiv.org/abs/2502.12836) | [code]

- [2025/02/18] **Sleepless Nights, Sugary Days: Creating Synthetic Users with Health Conditions for Realistic Coaching Agent Interactions** | [[paper]](https://arxiv.org/abs/2502.13135) | [code]

- [2025/02/13] **PathFinder: A Multi-Modal Multi-Agent System for Medical Diagnostic Decision-Making Applied to Histopathology** | [[paper]](https://arxiv.org/abs/2502.08916) | [code]

- [2025/02/09] **HamRaz: A Culture-Based Persian Conversation Dataset for Person-Centered Therapy Using LLM Agents** | [[paper]](https://arxiv.org/abs/2502.05982) | [code]

- [2025/02/09] **The Application of MATEC (Multi-AI Agent Team Care) Framework in Sepsis Care** | [[paper]](https://arxiv.org/abs/2503.16433) | [code]

- [2025/02/05] **CAMI: A Counselor Agent Supporting Motivational Interviewing through State Inference and Topic Exploration** | [[paper]](https://arxiv.org/abs/2502.02807) | [code]

- [2025/02/02] **Agent-Based Uncertainty Awareness Improves Automated Radiology Report Labeling with an Open-Source Large Language Model** | [[paper]](https://arxiv.org/abs/2502.01691) | [code]

- [2025/01/27] **MADP: Multi-Agent Deductive Planning for Enhanced Cognitive-Behavioral Mental Health Question Answer** | [[paper]](https://arxiv.org/abs/2501.15826) | [code]

- [2025/01/16] **AutoCBT: An Autonomous Multi-agent Framework for Cognitive Behavioral Therapy in Psychological Counseling** | [[paper]](https://arxiv.org/abs/2501.09426) | [code]

- [2025/01/03] **PSYCHE: A Multi-faceted Patient Simulation Framework for Evaluation of Psychiatric Assessment Conversational Agents** | [[paper]](https://arxiv.org/abs/2501.01594) | [code]

- [2024/12/19] **PsyDraw: A Multi-Agent Multimodal System for Mental Health Screening in Left-Behind Children** | [[paper]](https://arxiv.org/abs/2412.14769) | [code]

- [2024/12/17] **RareAgents: Advancing Rare Disease Care through LLM-Empowered Multi-disciplinary Team** | [[paper]](https://arxiv.org/abs/2412.12475) | [code]

- [2024/12/16] **LLMs Can Simulate Standardized Patients via Agent Coevolution** | [[paper]](https://arxiv.org/abs/2412.11716) | [code]

- [2024/12/13] **Script-Based Dialog Policy Planning for LLM-Powered Conversational Agents: A Basic Architecture for an "AI Therapist"** | [[paper]](https://arxiv.org/abs/2412.15242) | [code]

- [2024/12/05] **Educational-Psychological Dialogue Robot Based on Multi-Agent Collaboration** | [[paper]](https://arxiv.org/abs/2412.03847) | [code]

- [2024/12/02] **Medchain: Bridging the Gap Between LLM Agents and Clinical Practice through Interactive Sequential Benchmarking** | [[paper]](https://arxiv.org/abs/2412.01605) | [code]

- [2024/11/21] **PIORS: Personalized Intelligent Outpatient Reception based on Large Language Model with Multi-Agents Medical Scenario Simulation** | [[paper]](https://arxiv.org/abs/2411.13902) | [code]

- [2024/11/16] **Towards Next-Generation Medical Agent: How o1 is Reshaping Decision-Making in Medical Scenarios** | [[paper]](https://arxiv.org/abs/2411.14461) | [code]

- [2024/11/03] **EcoAct: Economic Agent Determines When to Register What Action** | [[paper]](https://arxiv.org/abs/2411.01643) | [code]

- [2024/10/25] **$\texttt{PatentAgent}$: Intelligent Agent for Automated Pharmaceutical Patent Analysis** | [[paper]](https://arxiv.org/abs/2410.21312) | [code]

- [2024/10/23] **ReflecTool: Towards Reflection-Aware Tool-Augmented Clinical Agents** | [[paper]](https://arxiv.org/abs/2410.17657) | [code]

- [2024/10/17] **MeNTi: Bridging Medical Calculator and LLM Agent with Nested Tool Calling** | [[paper]](https://arxiv.org/abs/2410.13610) | [code]

- [2024/10/16] **MedAide: Towards an Omni Medical Aide via Specialized LLM-based Multi-Agent Collaboration** | [[paper]](https://arxiv.org/abs/2410.12532) | [code]

- [2024/10/02] **Zodiac: A Cardiologist-Level LLM Framework for Multi-Agent Diagnostics** | [[paper]](https://arxiv.org/abs/2410.02026) | [code]

- [2024/08/28] **Interactive Agents: Simulating Counselor-Client Psychological Counseling via Role-Playing LLM-to-LLM Interactions** | [[paper]](https://arxiv.org/abs/2408.15787) | [code]

- [2024/08/23] **DrugAgent: Explainable Drug Repurposing Agent with Large Language Model-based Reasoning** | [[paper]](https://arxiv.org/abs/2408.13378) | [code]

- [2024/08/14] **Development of a Large Language Model-based Multi-Agent Clinical Decision Support System for Korean Triage and Acuity Scale (KTAS)-Based Triage and Treatment Planning in Emergency Departments** | [[paper]](https://arxiv.org/abs/2408.07531) | [code]

- [2024/07/18] **CoD, Towards an Interpretable Medical Agent using Chain of Diagnosis** | [[paper]](https://arxiv.org/abs/2407.13301) | [code]

- [2024/07/10] **Virtual Agents for Alcohol Use Counseling: Exploring LLM-Powered Motivational Interviewing** | [[paper]](https://arxiv.org/abs/2407.08095) | [code]

- [2024/07/03] **MentalAgora: A Gateway to Advanced Personalized Care in Mental Health through Multi-Agent Debating and Attribute Control** | [[paper]](https://arxiv.org/abs/2407.02736) | [code]

- [2024/07/02] **MMedAgent: Learning to Use Medical Tools with Multi-modal Agent** | [[paper]](https://arxiv.org/abs/2407.02483) | [code]

- [2024/04/23] **ClinicalAgent: Clinical Trial Multi-Agent System with Large Language Model-based Reasoning** | [[paper]](https://arxiv.org/abs/2404.14777) | [code]

- [2024/04/03] **Empowering Biomedical Discovery with AI Agents** | [[paper]](https://arxiv.org/abs/2404.02831) | [code]

- [2024/02/20] **Can Large Language Models be Used to Provide Psychological Counselling?
An Analysis of GPT-4-Generated Responses Using Role-play Dialogues** | [[paper]](https://arxiv.org/abs/2402.12738) | [code]

- [2024/02/20] **AgentMD: Empowering Language Agents for Risk Prediction with Large-Scale Clinical Tool Learning** | [[paper]](https://arxiv.org/abs/2402.13225) | [code]

- [2024/02/15] **Knowledge-Infused LLM-Powered Conversational Health Agent: A Case Study for Diabetes Patients** | [[paper]](https://arxiv.org/abs/2402.10153) | [code]

- [2024/02/01] **Generation, Distillation and Evaluation of Motivational Interviewing-Style Reflections with a Foundational Language Model** | [[paper]](https://arxiv.org/abs/2402.01051) | [code]

- [2023/12/19] **Can ChatGPT be Your Personal Medical Assistant?** | [[paper]](https://arxiv.org/abs/2312.12006) | [code]

- [2023/10/03] **Exploring Collaboration Mechanisms for LLM Agents: A Social Psychology View** | [[paper]](https://arxiv.org/abs/2310.02124) | [code]

#### Finance
- [2025/07/08] **ECom-Bench: Can LLM Agent Resolve Real-World E-commerce Customer Support Issues?** | [[paper]](https://arxiv.org/abs/2507.05639) | [code]

- [2025/07/07] **MindFlow: Revolutionizing E-commerce Customer Support with Multimodal LLM Agents** | [[paper]](https://arxiv.org/abs/2507.05330) | [code]

- [2025/06/10] **Improved LLM Agents for Financial Document Question Answering** | [[paper]](https://arxiv.org/abs/2506.08726) | [code]

- [2025/06/09] **EconWebArena: Benchmarking Autonomous Agents on Economic Tasks in Realistic Web Environments** | [[paper]](https://arxiv.org/abs/2506.08136) | [code]

- [2025/05/20] **Hidden Ghost Hand: Unveiling Backdoor Vulnerabilities in MLLM-Powered Mobile GUI Agents** | [[paper]](https://arxiv.org/abs/2505.14418) | [code]

- [2025/04/08] **Are Generative AI Agents Effective Personalized Financial Advisors?** | [[paper]](https://arxiv.org/abs/2504.05862) | [code]

- [2025/04/07] **AI for Climate Finance: Agentic Retrieval and Multi-Step Reasoning for Early Warning System Investments** | [[paper]](https://arxiv.org/abs/2504.05104) | [code]

- [2025/03/27] **EQ-Negotiator: An Emotion-Reasoning LLM Agent in Credit Dialogues** | [[paper]](https://arxiv.org/abs/2503.21080) | [code]

- [2025/03/05] **Cite Before You Speak: Enhancing Context-Response Grounding in E-commerce Conversational LLM-Agents** | [[paper]](https://arxiv.org/abs/2503.04830) | [code]

- [2025/02/25] **LLM Knows Geometry Better than Algebra: Numerical Understanding of LLM-Based Agents in A Trading Arena** | [[paper]](https://arxiv.org/abs/2502.17967) | [code]

- [2025/02/08] **Agentic AI Systems Applied to tasks in Financial Services: Modeling and model risk management crews** | [[paper]](https://arxiv.org/abs/2502.05439) | [code]

- [2025/02/01] **MarketSenseAI 2.0: Enhancing Stock Analysis through LLM Agents** | [[paper]](https://arxiv.org/abs/2502.00415) | [code]

- [2025/01/08] **FinSphere: A Conversational Stock Analysis Agent Equipped with Quantitative Tools based on Real-Time Database** | [[paper]](https://arxiv.org/abs/2501.12399) | [code]

- [2024/12/27] **OS-Genesis: Automating GUI Agent Trajectory Construction via Reverse Task Synthesis** | [[paper]](https://arxiv.org/abs/2412.19723) | [code]

- [2024/12/19] **Beyond the Sum: Unlocking AI Agents Potential Through Market Forces** | [[paper]](https://arxiv.org/abs/2501.10388) | [code]

- [2024/11/07] **Enhancing Investment Analysis: Optimizing AI-Agent Collaboration in Financial Research** | [[paper]](https://arxiv.org/abs/2411.04788) | [code]

- [2024/10/29] **Enhancing Financial Question Answering with a Multi-Agent Reflection Framework** | [[paper]](https://arxiv.org/abs/2410.21741) | [code]

- [2024/09/19] **Strategic Collusion of LLM Agents: Market Division in Multi-Commodity Competitions** | [[paper]](https://arxiv.org/abs/2410.00031) | [code]

- [2024/07/18] **dzFinNlp at AraFinNLP: Improving Intent Detection in Financial Conversational Agents** | [[paper]](https://arxiv.org/abs/2407.13565) | [code]

- [2024/07/09] **FinCon: A Synthesized LLM Multi-Agent System with Conceptual Verbal Reinforcement for Enhanced Financial Decision Making** | [[paper]](https://arxiv.org/abs/2407.06567) | [code]

- [2024/07/05] **Towards Automated Functional Equation Proving: A Benchmark Dataset and A Domain-Specific In-Context Agent** | [[paper]](https://arxiv.org/abs/2407.14521) | [code]

- [2024/05/07] **Enhancing the Efficiency and Accuracy of Underlying Asset Reviews in Structured Finance: The Application of Multi-agent Framework** | [[paper]](https://arxiv.org/abs/2405.04294) | [code]

#### Software Engineering
- [2025/06/13] **Agent-RLVR: Training Software Engineering Agents via Guidance and Environment Rewards** | [[paper]](https://arxiv.org/abs/2506.11425) | [code]

- [2025/06/04] **MedAgentGym: Training LLM Agents for Code-Based Medical Reasoning at Scale** | [[paper]](https://arxiv.org/abs/2506.04405) | [code]

- [2025/06/03] **Coding Agents with Multimodal Browsing are Generalist Problem Solvers** | [[paper]](https://arxiv.org/abs/2506.03011) | [code]

- [2025/05/28] **Co-Saving: Resource Aware Multi-Agent Collaboration for Software Development** | [[paper]](https://arxiv.org/abs/2505.21898) | [code]

- [2025/05/26] **Vibe Coding vs. Agentic Coding: Fundamentals and Practical Implications of Agentic AI** | [[paper]](https://arxiv.org/abs/2505.19443) | [code]

- [2025/05/26] **SWE-rebench: An Automated Pipeline for Task Collection and Decontaminated Evaluation of Software Engineering Agents** | [[paper]](https://arxiv.org/abs/2505.20411) | [code]

- [2025/05/24] **SEW: Self-Evolving Agentic Workflows for Automated Code Generation** | [[paper]](https://arxiv.org/abs/2505.18646) | [code]

- [2025/05/22] **Optimizing LLM-Based Multi-Agent System with Textual Feedback: A Case Study on Software Development** | [[paper]](https://arxiv.org/abs/2505.16086) | [code]

- [2025/05/19] **Guided Search Strategies in Non-Serializable Environments with Applications to Software Engineering Agents** | [[paper]](https://arxiv.org/abs/2505.13652) | [code]

- [2025/05/13] **LibVulnWatch: A Deep Assessment Agent System and Leaderboard for Uncovering Hidden Vulnerabilities in Open-Source AI Libraries** | [[paper]](https://arxiv.org/abs/2505.08842) | [code]

- [2025/04/30] **SWE-smith: Scaling Data for Software Engineering Agents** | [[paper]](https://arxiv.org/abs/2504.21798) | [code]

- [2025/04/28] **ResearchCodeAgent: An LLM Multi-Agent System for Automated Codification of Research Methodologies** | [[paper]](https://arxiv.org/abs/2504.20117) | [code]

- [2025/04/18] **CodeVisionary: An Agent-based Framework for Evaluating Large Language Models in Code Generation** |
[[paper]](https://arxiv.org/abs/2504.13472) | [code]

- [2025/04/09] **R2E-Gym: Procedural Environments and Hybrid Verifiers for Scaling Open-Weights SWE Agents** | [[paper]](https://arxiv.org/abs/2504.07164) | [code]

- [2025/03/27] **GateLens: A Reasoning-Enhanced LLM Agent for Automotive Software Release Analytics** | [[paper]](https://arxiv.org/abs/2503.21735) | [code]

- [2025/03/24] **Verbal Process Supervision Elicits Better Coding Agents** | [[paper]](https://arxiv.org/abs/2503.18494) | [code]

- [2025/03/18] **DARS: Dynamic Action Re-Sampling to Enhance Coding Agent Performance by Adaptive Tree Traversal** | [[paper]](https://arxiv.org/abs/2503.14269) | [code]

- [2025/03/12] **LocAgent: Graph-Guided LLM Agents for Code Localization** | [[paper]](https://arxiv.org/abs/2503.09089) | [code]

- [2025/03/10] **ProjectEval: A Benchmark for Programming Agents Automated Evaluation on Project-Level Code Generation** | [[paper]](https://arxiv.org/abs/2503.07010) | [code]

- [2025/02/19] **An LLM-based Agent for Reliable Docker Environment Configuration** | [[paper]](https://arxiv.org/abs/2502.13681) | [code]

- [2025/02/18] **Training Turn-by-Turn Verifiers for Dialogue Tutoring Agents: The Curious Case of LLMs as Your Coding Tutors** | [[paper]](https://arxiv.org/abs/2502.13311) | [code]

- [2025/02/18] **UXAgent: An LLM Agent-Based Usability Testing Framework for Web Design** | [[paper]](https://arxiv.org/abs/2502.12561) | [code]

- [2025/02/14] **The Ann Arbor Architecture for Agent-Oriented Programming** | [[paper]](https://arxiv.org/abs/2502.09903) | [[code]](https://github.com/aaalgo/postline_0.1)

- [2025/02/11] **Multi-Agent Collaboration for Multilingual Code Instruction Tuning** | [[paper]](https://arxiv.org/abs/2502.07487) | [code]

- [2025/02/10] **SyncMind: Measuring Agent Out-of-Sync Recovery in Collaborative Software Engineering** | [[paper]](https://arxiv.org/abs/2502.06994) | [code]

- [2025/02/08] **CODESIM: Multi-Agent Code Generation and Problem Solving through Simulation-Driven Planning and Debugging** | [[paper]](https://arxiv.org/abs/2502.05664) | [code]

- [2024/12/30] **Training Software Engineering Agents and Verifiers with SWE-Gym** | [[paper]](https://arxiv.org/abs/2412.21139) | [code]

- [2024/12/24] **Molly: Making Large Language Model Agents Solve Python Problem More Logically** | [[paper]](https://arxiv.org/abs/2412.18093) | [code]

- [2024/12/16] **Seeker: Towards Exception Safety Code Generation with Intermediate Language Agents Framework** | [[paper]](https://arxiv.org/abs/2412.11713) | [code]

- [2024/11/07] **CodeTree: Agent-guided Tree Search for Code Generation with Large Language Models** | [[paper]](https://arxiv.org/abs/2411.04329) | [code]

- [2024/10/29] **SceneGenAgent: Precise Industrial Scene Generation with Coding Agent** | [[paper]](https://arxiv.org/abs/2410.21909) | [code]

- [2024/10/09] **DA-Code: Agent Data Science Code Generation Benchmark for Large Language Models** | [[paper]](https://arxiv.org/abs/2410.07331) | [code]

- [2024/10/09] **Seeker: Enhancing Exception Handling in Code with LLM-based Multi-Agent Approach** | [[paper]](https://arxiv.org/abs/2410.06949) | [code]

- [2024/09/02] **Co-Learning: Code Learning for Multi-Agent Reinforcement Collaborative Framework with Conversational Natural Language Interfaces** | [[paper]](https://arxiv.org/abs/2409.00985) | [code]

- [2024/08/19] **GoNoGo: An Efficient LLM-based Multi-Agent System for Streamlining Automotive Software Release Decision-Making** | [[paper]](https://arxiv.org/abs/2408.09785) | [code]

- [2024/08/13] **Diversity Empowers Intelligence: Integrating Expertise of Software Engineering Agents** | [[paper]](https://arxiv.org/abs/2408.07060) | [code]

- [2024/08/05] **LLM Agents Improve Semantic Code Search** | [[paper]](https://arxiv.org/abs/2408.11058) | [code]

- [2024/07/26] **AppWorld: A Controllable World of Apps and People for Benchmarking Interactive Coding Agents** | [[paper]](https://arxiv.org/abs/2407.18901) | [[code]](https://github.com/stonybrooknlp/appworld)

- [2024/07/01] **Agentless: Demystifying LLM-based Software Engineering Agents** | [[paper]](https://arxiv.org/abs/2407.01489) | [code]

- [2024/06/13] **Multi-Agent Software Development through Cross-Team Collaboration** | [[paper]](https://arxiv.org/abs/2406.08979) | [[code]](https://github.com/openbmb/chatdev)

- [2024/05/06] **SWE-agent: Agent-Computer Interfaces Enable Automated Software Engineering** | [[paper]](https://arxiv.org/abs/2405.15793) | [code]

- [2024/04/11] **Behavior Trees Enable Structured Programming of Language Model Agents** | [[paper]](https://arxiv.org/abs/2404.07439) | [[code]](https://github.com/RichardKelley/dendron)

- [2024/04/02] **Self-Organized Agents: A LLM Multi-Agent Framework toward Ultra Large-Scale Code Generation and Optimization** | [[paper]](https://arxiv.org/abs/2404.02183) | [code]

- [2024/03/02] **SceneCraft: An LLM Agent for Synthesizing 3D Scene as Blender Code** | [[paper]](https://arxiv.org/abs/2403.01248) | [code]

- [2024/02/26] **RepoAgent: An LLM-Powered Open-Source Framework for Repository-level Code Documentation Generation** | [[paper]](https://arxiv.org/abs/2402.16667) | [code]

- [2024/02/19] **WorldCoder, a Model-Based LLM Agent: Building World Models by Writing Code and Interacting with the Environment** | [[paper]](https://arxiv.org/abs/2402.12275) | [code]

- [2024/02/02] **StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback** | [[paper]](https://arxiv.org/abs/2402.01391) | [code]

- [2024/02/01] **Executable Code Actions Elicit Better LLM Agents** | [[paper]](https://arxiv.org/abs/2402.01030) | [code]

- [2023/12/28] **Experiential Co-Learning of Software-Developing Agents** | [[paper]](https://arxiv.org/abs/2312.17025) | [code]

- [2023/12/20] **AgentCoder: Multi-Agent-based Code Generation with Iterative Testing and Optimisation** | [[paper]](https://arxiv.org/abs/2312.13010) | [code]

- [2023/07/27] **PanGu-Coder2: Boosting Large Language Models for Code with Ranking Feedback** | [[paper]](https://arxiv.org/abs/2307.14936) | [code]

- [2023/07/16] **ChatDev: Communicative Agents for Software Development** | [[paper]](https://arxiv.org/abs/2307.07924) | [code]

- [2023/04/15] **Self-collaboration Code Generation via ChatGPT** | [[paper]](https://arxiv.org/abs/2304.07590) | [code]

#### Research
- [2025/07/01] **STELLA: Self-Evolving LLM Agent for Biomedical Research** | [[paper]](https://arxiv.org/abs/2507.02004) | [code]

- [2025/06/27] **RExBench: Can coding agents autonomously implement AI research extensions?** | [[paper]](https://arxiv.org/abs/2506.22598) | [code]

- [2025/06/25] **Language Modeling by Language Models** | [[paper]](https://arxiv.org/abs/2506.20249) | [code]

- [2025/06/23] **From Web Search towards Agentic Deep Research: Incentivizing Search with Reasoning Agents** | [[paper]](https://arxiv.org/abs/2506.18959) | [code]

- [2025/06/12] **VideoDeepResearch: Long Video Understanding With Agentic Tool Using** | [[paper]](https://arxiv.org/abs/2506.10821) | [code]

- [2025/06/06] **Can Theoretical Physics Research Benefit from Language Agents?** | [[paper]](https://arxiv.org/abs/2506.06214) | [code]

- [2025/05/30] **Unifying Language Agent Algorithms with Graph-based Orchestration Engine for Reproducible Agent Research** | [[paper]](https://arxiv.org/abs/2505.24354) | [code]

- [2025/05/29] **Large Language Model-Based Agents for Automated Research Reproducibility: An Exploratory Study in Alzheimer's Disease** | [[paper]](https://arxiv.org/abs/2505.23852) | [code]

- [2025/05/26] **MLR-Bench: Evaluating AI Agents on Open-Ended Machine Learning Research** | [[paper]](https://arxiv.org/abs/2505.19955) | [code]

- [2025/05/22] **BioDSA-1K: Benchmarking Data Science Agents for Biomedical Research** | [[paper]](https://arxiv.org/abs/2505.16100) | [code]

- [2025/05/22] **NovelSeek: When Agent Becomes the Scientist -- Building Closed-Loop System from Hypothesis to Verification** | [[paper]](https://arxiv.org/abs/2505.16938) | [code]

- [2025/04/28] **ResearchCodeAgent: An LLM Multi-Agent System for Automated Codification of Research Methodologies** | [[paper]](https://arxiv.org/abs/2504.20117) | [code]

- [2025/04/21] **Completing A Systematic Review in Hours instead of Months with Interactive AI Agents** | [[paper]](https://arxiv.org/abs/2504.14822) | [code]

- [2025/04/10] **CollEX -- A Multimodal Agentic RAG System Enabling Interactive Exploration of Scientific Collections** | [[paper]](https://arxiv.org/abs/2504.07643) | [code]

- [2025/04/10] **The AI Scientist-v2: Workshop-Level Automated Scientific Discovery via Agentic Tree Search** | [[paper]](https://arxiv.org/abs/2504.08066) | [code]

- [2025/04/02] **Automated Survey Collection with LLM-based Conversational Agents** | [[paper]](https://arxiv.org/abs/2504.02891) | [code]

- [2025/03/23] **AgentRxiv: Towards Collaborative Autonomous Research** | [[paper]](https://arxiv.org/abs/2503.18102) | [code]

- [2025/03/12] **Agentic AI for Scientific Discovery: A Survey of Progress, Challenges, and Future Directions** | [[paper]](https://arxiv.org/abs/2503.08979) | [code]

- [2025/03/11] **ReviewAgents: Bridging the Gap Between Human and AI-Generated Paper Reviews** | [[paper]](https://arxiv.org/abs/2503.08506) | [code]

- [2025/02/25] **LAG: LLM agents for Leaderboard Auto Generation on Demanding** | [[paper]](https://arxiv.org/abs/2502.18209) | [code]

- [2025/02/20] **MLGym: A New Framework and Benchmark for Advancing AI Research Agents** | [[paper]](https://arxiv.org/abs/2502.14499) | [code]

- [2025/02/07] **Agentic Reasoning: Reasoning LLMs with Tools for the Deep Research** | [[paper]](https://arxiv.org/abs/2502.04644) | [code]

- [2025/01/08] **Agent Laboratory: Using LLM Agents as Research
Assistants** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.04227) | [code]\n\n- [2024\u002F10\u002F17] **Chain of Ideas: Revolutionizing Research Via Novel Idea Development with LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.13185) | [code]\n\n- [2024\u002F10\u002F12] **Many Heads Are Better Than One: Improved Scientific Idea Generation by A LLM-Based Multi-Agent System** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.09403) | [code]\n\n- [2024\u002F10\u002F07] **ScienceAgentBench: Toward Rigorous Assessment of Language Agents for Data-Driven Scientific Discovery** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.05080) | [code]\n\n- [2024\u002F10\u002F07] **ImProver: Agent-Based Automated Proof Optimization** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.04753) | [code]\n\n- [2024\u002F09\u002F23] **Towards a Realistic Long-Term Benchmark for Open-Web Research Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.14913) | [code]\n\n- [2024\u002F09\u002F17] **CORE-Bench: Fostering the Credibility of Published Research Through a Computational Reproducibility Agent Benchmark** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.11363) | [code]\n\n- [2024\u002F09\u002F12] **DSBench: How Far Are Data Science Agents to Becoming Data Science Experts?** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.07703) | [code]\n\n- [2024\u002F09\u002F11] **SUPER: Evaluating Agents on Setting Up and Executing Tasks from Research Repositories** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.07440) | [code]\n\n- [2024\u002F09\u002F10] **Language agents achieve superhuman synthesis of scientific knowledge** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.13740) | [code]\n\n- [2024\u002F09\u002F09] **SciAgents: Automating scientific discovery through multi-agent intelligent graph reasoning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.05556) | 
[code]\n\n- [2024\u002F08\u002F26] **MLR-Copilot: Autonomous Machine Learning Research based on Large Language Models Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.14033) | [code]\n\n- [2024\u002F08\u002F20] **Automating Knowledge Discovery from Scientific Literature via LLMs: A Dual-Agent Approach with Progressive Ontology Prompting** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.00054) | [code]\n\n- [2024\u002F06\u002F13] **ResearchArena: Benchmarking Large Language Models&#39; Ability to Collect and Organize Information as Research Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.10291) | [code]\n\n- [2024\u002F05\u002F02] **CACTUS: Chemistry Agent Connecting Tool-Usage to Science** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.00972) | [[code]](https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fcactus)\n\n- [2024\u002F04\u002F09] **SurveyAgent: A Conversational System for Personalized and Efficient Research Survey** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.06364) | [code]\n\n- [2024\u002F02\u002F28] **Data Interpreter: An LLM Agent For Data Science** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.18679) | [[code]](https:\u002F\u002Fgithub.com\u002Fgeekan\u002Fmetagpt)\n\n- [2024\u002F02\u002F18] **SciAgent: Tool-augmented Language Models for Scientific Reasoning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.11451) | [code]\n\n- [2024\u002F02\u002F06] **Prioritizing Safeguarding Over Autonomy: Risks of LLM Agents for Science** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.04247) | [code]\n\n- [2024\u002F01\u002F08] **MARG: Multi-Agent Review Generation for Scientific Papers** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.04259) | [code]\n\n### Automation\n#### Workflow\n- [2025\u002F06\u002F02] **Follow the Flow: Fine-grained Flowchart Attribution with Neurosymbolic Agents** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.01344) | [code]\n\n- [2025\u002F05\u002F26] **ScienceBoard: Evaluating Multimodal Autonomous Agents in Realistic Scientific Workflows** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.19897) | [code]\n\n- [2025\u002F04\u002F17] **MetaSynth: Meta-Prompting-Driven Agentic Scaffolds for Diverse Synthetic Data Generation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.12563) | [code]\n\n- [2025\u002F02\u002F24] **Turning Conversations into Workflows: A Framework to Extract and Evaluate Dialog Workflows for Service AI Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.17321) | [code]\n\n- [2025\u002F02\u002F11] **EvoFlow: Evolving Diverse Agentic Workflows On The Fly** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.07373) | [code]\n\n- [2025\u002F02\u002F07] **nvAgent: Automated Data Visualization from Natural Language via Collaborative Agent Workflow** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.05036) | [code]\n\n- [2025\u002F02\u002F06] **ScoreFlow: Mastering LLM Agent Workflows via Score-based Preference Optimization** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.04306) | [code]\n\n- [2024\u002F12\u002F17] **An Agentic Approach to Automatic Creation of P&amp;ID Diagrams from Natural Language Descriptions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.12898) | [code]\n\n- [2024\u002F12\u002F15] **LAW: Legal Agentic Workflows for Custody and Fund Services Contracts** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.11063) | [code]\n\n- [2024\u002F11\u002F22] **ScribeAgent: Towards Specialized Web Agents Using Production-Scale Workflow Data** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.15004) | [code]\n\n- [2024\u002F11\u002F12] **BudgetMLAgent: A Cost-Effective LLM Multi-Agent system for Automating Machine Learning Tasks** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.07464) | 
[code]\n\n- [2024\u002F11\u002F08] **Game-theoretic LLM: Agent Workflow for Negotiation Games** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.05990) | [code]\n\n- [2024\u002F10\u002F24] **An LLM Agent for Automatic Geospatial Data Analysis** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.18792) | [code]\n\n- [2024\u002F10\u002F17] **From Barriers to Tactics: A Behavioral Science-Informed Agentic Workflow for Personalized Nutrition Coaching** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.14041) | [code]\n\n- [2024\u002F10\u002F17] **ControlAgent: Automating Control System Design via Novel Integration of LLM Agents and Domain Expertise** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.19811) | [code]\n\n- [2024\u002F10\u002F16] **Proactive Agent: Shifting LLM Agents from Reactive Responses to Active Assistance** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.12361) | [code]\n\n- [2024\u002F10\u002F14] **AFlow: Automating Agentic Workflow Generation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.10762) | [code]\n\n- [2024\u002F10\u002F10] **Benchmarking Agentic Workflow Generation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.07869) | [code]\n\n- [2024\u002F10\u002F03] **AutoML-Agent: A Multi-Agent LLM Framework for Full-Pipeline AutoML** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.02958) | [code]\n\n- [2024\u002F09\u002F11] **Agent Workflow Memory** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.07429) | [code]\n\n- [2024\u002F08\u002F16] **The Fellowship of the LLMs: Multi-Agent Workflows for Synthetic Preference Optimization Dataset Generation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.08688) | [code]\n\n- [2024\u002F07\u002F15] **Spider2-V: How Far Are Multimodal Agents From Automating Data Science and Engineering Workflows?** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.10956) | [code]\n\n- [2024\u002F07\u002F03] 
**AgentInstruct: Toward Generative Teaching with Agentic Flows** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.03502) | [code]\n\n- [2024\u002F07\u002F01] **AutoFlow: Automated Workflow Generation for Large Language Model Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.12821) | [code]\n\n- [2024\u002F06\u002F21] **Autonomous Agents for Collaborative Task under Information Asymmetry** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.14928) | [code]\n\n- [2024\u002F03\u002F13] **AutoGuide: Automated Generation and Selection of Context-Aware Guidelines for Large Language Model Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.08978) | [code]\n\n- [2024\u002F03\u002F05] **ChatCite: LLM Agent with Human Workflow Guidance for Comparative Literature Summary** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.02574) | [code]\n\n#### Automatic Evaluation\n- [2025\u002F06\u002F26] **Mind2Web 2: Evaluating Agentic Search with Agent-as-a-Judge** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.21506) | [code]\n\n- [2025\u002F06\u002F23] **AI Agents-as-Judge: Automated Assessment of Accuracy, Consistency, Completeness and Clarity for Enterprise Documents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.22485) | [code]\n\n- [2025\u002F06\u002F08] **Manifesto from Dagstuhl Perspectives Workshop 24352 -- Conversational Agents: A Framework for Evaluation (CAFE)** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.11112) | [code]\n\n- [2025\u002F05\u002F22] **HiMATE: A Hierarchical Multi-Agent Framework for Machine Translation Evaluation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.16281) | [code]\n\n- [2025\u002F05\u002F21] **UrduFactCheck: An Agentic Fact-Checking Framework for Urdu with Evidence Boosting and Benchmarking** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.15063) | [code]\n\n- [2025\u002F05\u002F21] **AGENT-X: Adaptive Guideline-based Expert 
Network for Threshold-free AI-generated teXt detection** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.15261) | [code]\n\n- [2025\u002F05\u002F20] **CAFES: A Collaborative Multi-Agent Framework for Multi-Granular Multimodal Essay Scoring** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.13965) | [code]\n\n- [2025\u002F05\u002F18] **ESC-Judge: A Framework for Comparing Emotional Support Conversational Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.12531) | [code]\n\n- [2025\u002F05\u002F13] **TRAIL: Trace Reasoning and Agentic Issue Localization** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.08638) | [code]\n\n- [2025\u002F05\u002F05] **AutoLibra: Agent Metric Induction from Open-Ended Feedback** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.02820) | [code]\n\n- [2025\u002F05\u002F01] **Sentient Agent as a Judge: Evaluating Higher-Order Social Cognition in Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.02847) | [code]\n\n- [2025\u002F04\u002F21] **EvalAgent: Discovering Implicit Evaluation Criteria from the Web** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.15219) | [code]\n\n- [2025\u002F04\u002F09] **A Unified Agentic Framework for Evaluating Conditional Image Generation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.07046) | [code]\n\n- [2025\u002F04\u002F01] **VerifiAgent: a Unified Verification Agent in Language Model Reasoning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.00406) | [code]\n\n- [2025\u002F04\u002F01] **Multi-Agent LLM Judge: automatic personalized LLM judge design for evaluating natural language generation applications** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.02867) | [code]\n\n- [2025\u002F03\u002F07] **GEMA-Score: Granular Explainable Multi-Agent Score for Radiology Report Evaluation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.05347) | [code]\n\n- 
[2025\u002F02\u002F26] **Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.19328) | [[code]](https:\u002F\u002Fgithub.com\u002FTHU-KEG\u002FAgentic-Reward-Modeling)\n\n- [2025\u002F02\u002F25] **Debt Collection Negotiations with Large Language Models: An Evaluation System and Optimizing Decision Making with Multi-Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.18228) | [code]\n\n- [2025\u002F02\u002F25] **FACT-AUDIT: An Adaptive Multi-Agent Framework for Dynamic Fact-Checking Evaluation of Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.17924) | [code]\n\n- [2025\u002F02\u002F14] **Automated Hypothesis Validation with Agentic Sequential Falsifications** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.09858) | [code]\n\n- [2025\u002F01\u002F19] **IntellAgent: A Multi-Agent Framework for Evaluating Conversational AI Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.11067) | [code]\n\n- [2025\u002F01\u002F17] **Agent-as-Judge for Factual Summarization of Long Narratives** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.09993) | [code]\n\n- [2025\u002F01\u002F03] **PSYCHE: A Multi-faceted Patient Simulation Framework for Evaluation of Psychiatric Assessment Conversational Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.01594) | [code]\n\n- [2024\u002F12\u002F28] **M-MAD: Multidimensional Multi-Agent Debate for Advanced Machine Translation Evaluation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.20127) | [code]\n\n- [2024\u002F12\u002F10] **Evaluation Agent: Efficient and Promptable Evaluation Framework for Visual Generative Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.09645) | [code]\n\n- [2024\u002F11\u002F25] **SAGEval: The frontiers of Satisfactory Agent based NLG Evaluation for 
reference-free open-ended text** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.16077) | [code]\n\n- [2024\u002F11\u002F15] **Large Language Models as User-Agents for Evaluating Task-Oriented-Dialogue Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.09972) | [code]\n\n- [2024\u002F09\u002F24] **Automated test generation to evaluate tool-augmented LLMs as conversational AI agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.15934) | [code]\n\n- [2024\u002F09\u002F22] **The Ability of Large Language Models to Evaluate Constraint-satisfaction in Agent Responses to Open-ended Requests** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.14371) | [code]\n\n- [2024\u002F09\u002F13] **Safeguarding Decentralized Social Media: LLM Agents for Automating Community Rule Compliance** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.08963) | [code]\n\n- [2024\u002F05\u002F23] **ALI-Agent: Assessing LLMs&#39; Alignment with Human Values via Agent-based Evaluation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.14125) | [code]\n\n- [2024\u002F03\u002F28] **MATEval: A Multi-Agent Discussion Framework for Advancing Open-Ended Text Evaluation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.19305) | [code]\n\n- [2023\u002F08\u002F14] **ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.07201) | [code]\n\n### Training\n#### Fine tuning\n- [2025\u002F07\u002F10] **SAND: Boosting LLM Agents with Self-Taught Action Deliberation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.07441) | [code]\n\n- [2025\u002F07\u002F08] **Agentic-R1: Distilled Dual-Strategy Reasoning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.05707) | [code]\n\n- [2025\u002F06\u002F28] **Knowledge Augmented Finetuning Matters in both RAG and Agent Based Dialog Systems** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.22852) | [code]\n\n- [2025\u002F06\u002F04] **Go-Browse: Training Web Agents with Structured Exploration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.03533) | [code]\n\n- [2025\u002F06\u002F02] **AgentCPM-GUI: Building Mobile-Use Agents with Reinforcement Fine-Tuning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.01391) | [code]\n\n- [2025\u002F05\u002F31] **ARIA: Training Language Agents with Intention-Driven Reward Aggregation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.00539) | [code]\n\n- [2025\u002F05\u002F28] **LaMDAgent: An Autonomous Framework for Post-Training Pipeline Optimization via LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.21963) | [code]\n\n- [2025\u002F05\u002F27] **BehaviorSFT: Behavioral Token Conditioning for Clinical Agents Across the Proactivity Spectrum** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.21757) | [code]\n\n- [2025\u002F05\u002F26] **Frictional Agent Alignment Framework: Slow Down and Don&#39;t Break Things** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.19428) | [code]\n\n- [2025\u002F05\u002F26] **Training LLM-Based Agents with Synthetic Self-Reflected Trajectories and Partial Masking** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.20023) | [code]\n\n- [2025\u002F05\u002F26] **MaskSearch: A Universal Pre-Training Framework to Enhance Agentic Search Capability** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.20285) | [code]\n\n- [2025\u002F03\u002F05] **MAS-GPT: Training LLMs to Build LLM-based Multi-Agent Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.03686) | [code]\n\n- [2025\u002F03\u002F05] **Enhancing Collective Intelligence in Large Language Models Through Emotional Integration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.04849) | [code]\n\n- [2025\u002F03\u002F04] **ATLaS: Agent Tuning via Learning 
Critical Steps** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.02197) | [code]\n\n- [2025\u002F02\u002F24] **Training a Generally Curious Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.17543) | [code]\n\n- [2025\u002F02\u002F19] **UM_FHS at TREC 2024 PLABA: Exploration of Fine-tuning and AI agent approach for plain language adaptations of biomedical text** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.14144) | [code]\n\n- [2025\u002F02\u002F18] **Training Turn-by-Turn Verifiers for Dialogue Tutoring Agents: The Curious Case of LLMs as Your Coding Tutors** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.13311) | [code]\n\n- [2025\u002F02\u002F11] **Multi-Agent Collaboration for Multilingual Code Instruction Tuning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.07487) | [code]\n\n- [2025\u002F02\u002F10] **Hephaestus: Improving Fundamental Agent Capabilities of Large Language Models through Continual Pre-Training** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.06589) | [code]\n\n- [2025\u002F01\u002F10] **Multiagent Finetuning: Self Improvement with Diverse Reasoning Chains** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.05707) | [code]\n\n- [2025\u002F01\u002F03] **AgentRefine: Enhancing Agent Generalization through Refinement Tuning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.01702) | [code]\n\n- [2024\u002F12\u002F30] **Training Software Engineering Agents and Verifiers with SWE-Gym** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.21139) | [code]\n\n- [2024\u002F12\u002F30] **Aviary: training language agents on challenging scientific tasks** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.21154) | [code]\n\n- [2024\u002F12\u002F16] **Virtual Agent-Based Communication Skills Training to Facilitate Health Persuasion Among Peers** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.12061) | [code]\n\n- [2024\u002F11\u002F29] 
**Training Agents with Weakly Supervised Feedback from Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.19547) | [code]\n\n- [2024\u002F11\u002F21] **Star-Agents: Automatic Data Optimization with LLM Agents for Instruction Tuning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.14497) | [code]\n\n- [2024\u002F10\u002F20] **Training Language Models to Critique With Multi-agent Feedback** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.15287) | [code]\n\n- [2024\u002F10\u002F16] **Proactive Agent: Shifting LLM Agents from Reactive Responses to Active Assistance** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.12361) | [code]\n\n- [2024\u002F10\u002F10] **AgentBank: Towards Generalized LLM Agents via Fine-Tuning on 50000+ Interaction Trajectories** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.07706) | [code]\n\n- [2024\u002F07\u002F25] **Recursive Introspection: Teaching Language Model Agents How to Self-Improve** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.18219) | [code]\n\n- [2024\u002F06\u002F11] **CoEvol: Constructing Better Responses for Instruction Finetuning through Multi-Agent Cooperation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.07054) | [[code]](https:\u002F\u002Fgithub.com\u002Flirenhao1997\u002Fcoevol)\n\n- [2024\u002F04\u002F05] **Social Skill Training with Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.04204) | [code]\n\n- [2024\u002F04\u002F02] **CMAT: A Multi-Agent Collaboration Tuning Framework for Enhancing Small Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.01663) | [code]\n\n- [2024\u002F03\u002F29] **Enhancing the General Agent Capabilities of Low-Parameter LLMs through Tuning and Multi-Branch Reasoning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.19962) | [code]\n\n- [2024\u002F03\u002F21] **ReAct Meets ActRe: When Language Agents Enjoy Training 
Data Autonomy** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.14589) | [code]\n\n- [2024\u002F03\u002F19] **Agent-FLAN: Designing Data and Methods of Effective Agent Tuning for Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.12881) | [code]\n\n- [2024\u002F02\u002F23] **AgentOhana: Design Unified Data and Training Pipeline for Effective Agent Learning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.15506) | [code]\n\n- [2024\u002F02\u002F21] **Neeko: Leveraging Dynamic LoRA for Efficient Multi-Character Role-Playing Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.13717) | [code]\n\n- [2024\u002F02\u002F18] **Learning From Failure: Integrating Negative Examples when Fine-tuning Large Language Models as Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.11651) | [code]\n\n- [2024\u002F01\u002F10] **Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.05566) | [code]\n\n- [2024\u002F01\u002F05] **From LLM to Conversational Agent: A Memory Enhanced Architecture with Fine-Tuning of Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.02777) | [code]\n\n- [2023\u002F12\u002F22] **Pangu-Agent: A Fine-Tunable Generalist Agent with Structured Reasoning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.14878) | [code]\n\n- [2023\u002F11\u002F28] **Embodied Multi-Modal Agent trained by an LLM from a Parallel TextWorld** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.16714) | [code]\n\n- [2023\u002F10\u002F19] **AgentTuning: Enabling Generalized Agent Abilities for LLMs** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.12823) | [code]\n\n- [2023\u002F10\u002F09] **FireAct: Toward Language Agent Fine-tuning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.05915) | [code]\n\n- [2023\u002F05\u002F26] **Training Socially 
Aligned Language Models on Simulated Social Interactions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.16960) | [code]\n\n#### RL\n- [2025\u002F07\u002F03] **MemAgent: Reshaping Long-Context LLM with Multi-Conv RL-based Memory Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.02259) | [code]\n\n- [2025\u002F07\u002F03] **RLVER: Reinforcement Learning with Verifiable Emotion Rewards for Empathetic Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.03112) | [code]\n\n- [2025\u002F07\u002F02] **OpenTable-R1: A Reinforcement Learning Augmented Tool Agent for Open-Domain Table Question Answering** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.03018) | [code]\n\n- [2025\u002F06\u002F30] **L0: Reinforcement Learning to Become General Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.23667) | [code]\n\n- [2025\u002F06\u002F30] **Auto-TA: Towards Scalable Automated Thematic Analysis (TA) via Multi-Agent Large Language Models with Reinforcement Learning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.23998) | [code]\n\n- [2025\u002F06\u002F30] **SPIRAL: Self-Play on Zero-Sum Games Incentivizes Reasoning via Multi-Agent Multi-Turn Reinforcement Learning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.24119) | [code]\n\n- [2025\u002F06\u002F24] **KnowRL: Exploring Knowledgeable Reinforcement Learning for Factuality** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.19807) | [code]\n\n- [2025\u002F06\u002F16] **Language Agents for Hypothesis-driven Clinical Decision Making with Reinforcement Learning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.13474) | [code]\n\n- [2025\u002F06\u002F13] **Agent-RLVR: Training Software Engineering Agents via Guidance and Environment Rewards** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.11425) | [code]\n\n- [2025\u002F05\u002F29] **ML-Agent: Reinforcing LLM Agents for Autonomous Machine Learning 
Engineering** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.23723) | [code]\n\n- [2025\u002F05\u002F28] **WorkForceAgent-R1: Incentivizing Reasoning Capability in LLM-based Web Agents via Reinforcement Learning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.22942) | [code]\n\n- [2025\u002F05\u002F28] **WebDancer: Towards Autonomous Information Seeking Agency** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.22648) | [[code]](https:\u002F\u002Fgithub.com\u002FAlibaba-NLP\u002FWebAgent)\n\n- [2025\u002F05\u002F27] **SPA-RL: Reinforcing LLM Agents via Stepwise Progress Attribution** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.20732) | [[code]](https:\u002F\u002Fgithub.com\u002FWangHanLinHenry\u002FSPA-RL-Agent)\n\n- [2025\u002F05\u002F26] **DoctorAgent-RL: A Multi-Agent Collaborative Reinforcement Learning System for Multi-Turn Clinical Dialogue** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.19630) | [code]\n\n- [2025\u002F05\u002F26] **REARANK: Reasoning Re-ranking Agent via Reinforcement Learning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.20046) | [code]\n\n- [2025\u002F05\u002F22] **WebAgent-R1: Training Web Agents via End-to-End Multi-Turn Reinforcement Learning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.16421) | [code]\n\n- [2025\u002F05\u002F21] **An Empirical Study on Reinforcement Learning for Reasoning-Search Interleaved LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.15117) | [code]\n\n- [2025\u002F05\u002F20] **Reinforcing Question Answering Agents with Minimalist Policy Gradient Optimization** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.17086) | [code]\n\n- [2025\u002F05\u002F20] **s3: You Don&#39;t Need That Much Data to Train a Search Agent via RL** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.14146) | [code]\n\n- [2025\u002F05\u002F17] **Retrospex: Language Agent Meets Offline Reinforcement Learning 
Critic** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.11807) | [code]\n\n- [2025\u002F05\u002F06] **Divide, Optimize, Merge: Fine-Grained LLM Agent Optimization at Scale** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.03973) | [code]\n\n- [2025\u002F04\u002F24] **RAGEN: Understanding Self-Evolution in LLM Agents via Multi-Turn Reinforcement Learning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.20073) | [code]\n\n- [2025\u002F04\u002F20] **Meta-Thinking in LLMs via Multi-Agent Reinforcement Learning: A Survey** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.14520) | [code]\n\n- [2025\u002F04\u002F04] **Learning Natural Language Constraints for Safe Reinforcement Learning of Language Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.03185) | [code]\n\n- [2025\u002F03\u002F16] **LLM-Mediated Guidance of MARL Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.13553) | [code]\n\n- [2025\u002F03\u002F12] **ReMA: Learning to Meta-think for LLMs with Multi-Agent Reinforcement Learning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.09501) | [code]\n\n- [2025\u002F03\u002F03] **Improving Retrospective Language Agents via Joint Policy Gradient Optimization** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.01490) | [code]\n\n- [2025\u002F02\u002F25] **AgentRM: Enhancing Agent Generalization with Reward Modeling** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.18407) | [code]\n\n- [2025\u002F02\u002F09] **Training Language Models for Social Deduction with Multi-Agent Reinforcement Learning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.06060) | [code]\n\n- [2025\u002F02\u002F06] **Multi-Agent Reinforcement Learning with Focal Diversity Optimization** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.04492) | [code]\n\n- [2025\u002F01\u002F25] **Improving Retrieval-Augmented Generation through Multi-Agent Reinforcement Learning** 
| [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.15228) | [code]\n\n- [2024\u002F11\u002F26] **LLM-Based Offline Learning for Embodied Agents via Consistency-Guided Reward Ensemble** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.17135) | [code]\n\n- [2024\u002F11\u002F07] **Interactive Dialogue Agents via Reinforcement Learning on Hindsight Regenerations** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.05194) | [code]\n\n- [2024\u002F11\u002F06] **From Novice to Expert: LLM Agent Policy Optimization via Step-wise Reinforcement Learning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.03817) | [code]\n\n- [2024\u002F11\u002F04] **WebRL: Training LLM Web Agents via Self-Evolving Online Curriculum Reinforcement Learning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.02337) | [code]\n\n- [2024\u002F10\u002F11] **Words as Beacons: Guiding RL Agents with High-Level Language Prompts** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.08632) | [code]\n\n- [2024\u002F10\u002F10] **MACPO: Weak-to-Strong Alignment via Multi-Agent Contrastive Preference Optimization** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.07672) | [code]\n\n- [2024\u002F07\u002F02] **Predicting vs. 
Acting: A Trade-off Between World Modeling &amp; Agent Modeling** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.02446) | [code]\n\n- [2024\u002F06\u002F26] **Mental Modeling of Reinforcement Learning Agents by Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.18505) | [code]\n\n- [2024\u002F06\u002F17] **Input Conditioned Graph Generation for Language Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.11555) | [[code]](https:\u002F\u002Fgithub.com\u002Flukasvierling\u002Fdynamicgptswarm)\n\n- [2024\u002F06\u002F05] **LLM-based Rewriting of Inappropriate Argumentation using Reinforcement Learning from Machine Feedback** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.03363) | [code]\n\n- [2024\u002F06\u002F03] **Re-ReST: Reflection-Reinforced Self-Training for Language Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.01495) | [[code]](https:\u002F\u002Fgithub.com\u002FPlusLabNLP\u002FRe-ReST)\n\n- [2024\u002F05\u002F30] **Safe Multi-agent Reinforcement Learning with Natural Language Constraints** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.20018) | [code]\n\n- [2024\u002F05\u002F17] **LLM-based Multi-Agent Reinforcement Learning: Current and Future Directions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.11106) | [code]\n\n- [2024\u002F05\u002F16] **Fine-Tuning Large Vision-Language Models as Decision-Making Agents via Reinforcement Learning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.10292) | [code]\n\n- [2024\u002F05\u002F01] **Navigating WebAI: Training Agents to Complete Web Tasks with Large Language Models and Reinforcement Learning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.00516) | [code]\n\n- [2024\u002F03\u002F05] **Language Guided Exploration for RL Agents in Text Environments** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.03141) | [code]\n\n- [2024\u002F02\u002F17] **Offline Training of 
Language Model Agents with Functions as Learnable Weights** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.11359) | [code]\n\n- [2024\u002F02\u002F02] **StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.01391) | [code]\n\n- [2023\u002F10\u002F25] **MultiPrompter: Cooperative Prompt Optimization with Multi-Agent Reinforcement Learning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.16730) | [code]\n\n- [2023\u002F03\u002F29] **Skill Reinforcement Learning and Planning for Open-World Long-Horizon Tasks** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.16563) | [code]\n\n#### DPO\n- [2025\u002F06\u002F17] **Expectation Confirmation Preference Optimization for Multi-Turn Conversational Recommendation Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.14302) | [code]\n\n- [2025\u002F06\u002F04] **Debate, Reflect, and Distill: Multi-Agent Feedback with Tree-Structured Preference Optimization for Efficient Language Model Enhancement** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.03541) | [code]\n\n- [2025\u002F06\u002F02] **PGPO: Enhancing Agent Reasoning via Pseudocode-style Planning Guided Preference Optimization** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.01475) | [code]\n\n- [2025\u002F05\u002F26] **MaskSearch: A Universal Pre-Training Framework to Enhance Agentic Search Capability** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.20285) | [code]\n\n- [2025\u002F05\u002F04] **Adaptive Thinking via Mode Policy Optimization for Social Language Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.02156) | [code]\n\n- [2025\u002F04\u002F27] **Anyprefer: An Agentic Framework for Preference Data Synthesis** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.19276) | [code]\n\n- [2025\u002F02\u002F26] **Agentic Reward Modeling: Integrating Human Preferences with 
Verifiable Correctness Signals for Reliable Reward Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.19328) | [[code]](https:\u002F\u002Fgithub.com\u002FTHU-KEG\u002FAgentic-Reward-Modeling)\n\n- [2025\u002F01\u002F03] **SDPO: Segment-Level Direct Preference Optimization for Social Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.01821) | [code]\n\n- [2024\u002F10\u002F29] **Flow-DPO: Improving LLM Mathematical Reasoning through Online Multi-Agent Learning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.22304) | [code]\n\n- [2024\u002F05\u002F31] **Learning to Clarify: Multi-turn Conversations with Action-Based Contrastive Self-Training** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.00222) | [code]\n\n### Scaling\n#### Single-Agent Framework\n- [2025\u002F07\u002F08] **Agent KB: Leveraging Cross-Domain Experience for Agentic Problem Solving** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.06229) | [code]\n\n- [2025\u002F07\u002F04] **GRAFT: A Graph-based Flow-aware Agentic Framework for Document-level Machine Translation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.03311) | [code]\n\n- [2025\u002F06\u002F29] **AURA: Agent for Understanding, Reasoning, and Automated Tool Use in Voice-Driven Tasks** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.23049) | [code]\n\n- [2025\u002F06\u002F27] **A Large Language Model-Empowered Agent for Reliable and Robust Structural Analysis** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.02938) | [code]\n\n- [2025\u002F06\u002F17] **VIDEE: Visual and Interactive Decomposition, Execution, and Evaluation of Text Analytics with Intelligent Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.21582) | [code]\n\n- [2025\u002F06\u002F17] **OAgents: An Empirical Study of Building Effective Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.15741) | [code]\n\n- [2025\u002F06\u002F16] **Leveraging 
In-Context Learning for Language Model Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.13109) | [code]\n\n- [2025\u002F06\u002F14] **Towards Building General Purpose Embedding Models for Industry 4.0 Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.12607) | [code]\n\n- [2025\u002F06\u002F12] **AutoMind: Adaptive Knowledgeable Agent for Automated Data Science** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.10974) | [code]\n\n- [2025\u002F06\u002F03] **DIAMOND: An LLM-Driven Agent for Context-Aware Baseball Highlight Summarization** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.02351) | [code]\n\n- [2025\u002F06\u002F03] **Comparative Analysis of AI Agent Architectures for Entity Relationship Classification** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.02426) | [code]\n\n- [2025\u002F06\u002F02] **Self-Challenging Language Model Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.01716) | [code]\n\n- [2025\u002F05\u002F30] **NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.24575) | [code]\n\n- [2025\u002F05\u002F21] **ViQAgent: Zero-Shot Video Question Answering via Agent with Open-Vocabulary Grounding Validation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.15928) | [code]\n\n- [2025\u002F05\u002F20] **ContextAgent: Context-Aware Proactive LLM Agents with Open-World Sensory Perceptions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.14668) | [code]\n\n- [2025\u002F05\u002F12] **Putting It All into Context: Simplifying Agents with LCLMs** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.08120) | [code]\n\n- [2025\u002F04\u002F17] **Pandora: A Code-Driven Large Language Model Agent for Unified Reasoning Across Diverse Structured Knowledge** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.12734) | [code]\n\n- [2025\u002F04\u002F11] **Toward 
Super Agent System with Hybrid AI Routers** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.10519) | [code]\n\n- [2025\u002F04\u002F10] **AgentAda: Skill-Adaptive Data Analytics for Tailored Insight Discovery** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.07421) | [code]\n\n- [2025\u002F04\u002F07] **DoCIA: An Online Document-Level Context Incorporation Agent for Speech Translation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.05122) | [code]\n\n- [2025\u002F03\u002F20] **Do Visual Imaginations Improve Vision-and-Language Navigation Agents?** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.16394) | [code]\n\n- [2025\u002F03\u002F14] **Large Reasoning Models in Agent Scenarios: Exploring the Necessity of Reasoning Capabilities** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.11074) | [code]\n\n- [2025\u002F03\u002F10] **DatawiseAgent: A Notebook-Centric LLM Agent Framework for Automated Data Science** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.07044) | [code]\n\n- [2025\u002F03\u002F10] **ASTRA: A Negotiation Agent with Adaptive and Strategic Reasoning through Action in Dynamic Offer Optimization** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.07129) | [code]\n\n- [2025\u002F02\u002F26] **TheoremExplainAgent: Towards Multimodal Explanations for LLM Theorem Understanding** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.19400) | [code]\n\n- [2025\u002F02\u002F14] **Agentic Verification for Ambiguous Query Disambiguation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.10352) | [code]\n\n- [2025\u002F02\u002F12] **SPeCtrum: A Grounded Framework for Multidimensional Identity Representation in LLM-Based Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.08599) | [code]\n\n- [2025\u002F02\u002F09] **AutoAgent: A Fully-Automated and Zero-Code Framework for LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.05957) | 
[code]\n\n- [2025\u002F02\u002F04] **Adaptive Self-improvement LLM Agentic System for ML Library Development** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.02534) | [code]\n\n- [2025\u002F01\u002F31] **Enabling Autonomic Microservice Management through Self-Learning Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.19056) | [code]\n\n- [2024\u002F12\u002F28] **OneKE: A Dockerized Schema-Guided LLM Agent-based Knowledge Extraction System** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.20005) | [code]\n\n- [2024\u002F12\u002F21] **Self-guided Knowledgeable Network of Thoughts: Amplifying Reasoning with Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.16533) | [code]\n\n- [2024\u002F12\u002F15] **AgentPS: Agentic Process Supervision for Multi-modal Content Quality Assurance through Multi-round QA** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.15251) | [code]\n\n- [2024\u002F12\u002F11] **A Multimodal Social Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.06189) | [code]\n\n- [2024\u002F12\u002F11] **Federated In-Context LLM Agent Learning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.08054) | [code]\n\n- [2024\u002F12\u002F04] **How to Correctly do Semantic Backpropagation on Language-based Agentic Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.03624) | [code]\n\n- [2024\u002F12\u002F02] **SAUP: Situation Awareness Uncertainty Propagation on LLM Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.01033) | [code]\n\n- [2024\u002F12\u002F01] **Towards Adaptive Mechanism Activation in Language Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.00722) | [code]\n\n- [2024\u002F11\u002F20] **MindForge: Empowering Embodied Agents with Theory of Mind for Lifelong Collaborative Learning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.12977) | [code]\n\n- [2024\u002F11\u002F16] 
**IntentGPT: Few-shot Intent Discovery with Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.10670) | [code]\n\n- [2024\u002F11\u002F04] **DynaSaur: Large Language Agents Beyond Predefined Actions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.01747) | [code]\n\n- [2024\u002F11\u002F04] **CRMArena: Understanding the Capacity of LLM Agents to Perform Professional CRM Tasks in Realistic Environments** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.02305) | [code]\n\n- [2024\u002F10\u002F29] **ADAM: An Embodied Causal Agent in Open-World Environments** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.22194) | [code]\n\n- [2024\u002F10\u002F27] **TrajAgent: An Agent Framework for Unified Trajectory Modelling** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.20445) | [code]\n\n- [2024\u002F10\u002F22] **Adsorb-Agent: Autonomous Identification of Stable Adsorption Configurations via Large Language Model Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.16658) | [code]\n\n- [2024\u002F10\u002F11] **Encoding Agent Trajectories as Representations with Sequence Transformers** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.09204) | [code]\n\n- [2024\u002F10\u002F10] **Agents Thinking Fast and Slow: A Talker-Reasoner Architecture** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.08328) | [code]\n\n- [2024\u002F10\u002F08] **AgentSquare: Automatic LLM Agent Search in Modular Design Space** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.06153) | [[code]](https:\u002F\u002Fgithub.com\u002Ftsinghua-fib-lab\u002FAgentSquare)\n\n- [2024\u002F10\u002F08] **Applying Refusal-Vector Ablation to Llama 3.1 70B Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.10871) | [code]\n\n- [2024\u002F09\u002F24] **MOSS: Enabling Code-Driven Evolution and Context Management for AI Agents** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.16120) | [code]\n\n- [2024\u002F09\u002F19] **Textualized Agent-Style Reasoning for Complex Tasks by Multiple Round LLM Generation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.12411) | [code]\n\n- [2024\u002F09\u002F15] **Automatic Control With Human-Like Reasoning: Exploring Language Model Embodied Air Traffic Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.09717) | [code]\n\n- [2024\u002F09\u002F12] **Self-Supervised Inference of Agents in Trustless Environments** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.08386) | [code]\n\n- [2024\u002F09\u002F05] **From MOOC to MAIC: Reshaping Online Teaching and Learning through LLM-driven Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.03512) | [code]\n\n- [2024\u002F09\u002F05] **Rx Strategist: Prescription Verification using LLM Agents System** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.03440) | [code]\n\n- [2024\u002F09\u002F03] **AgentRE: An Agent-Based Framework for Navigating Complex Information Landscapes in Relation Extraction** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.01854) | [code]\n\n- [2024\u002F08\u002F26] **AgentMove: A Large Language Model based Agentic Framework for Zero-shot Next Location Prediction** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.13986) | [code]\n\n- [2024\u002F08\u002F19] **Anim-Director: A Large Multimodal Model Powered Agent for Controllable Animation Video Generation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.09787) | [code]\n\n- [2024\u002F08\u002F13] **Causal Agent based on Large Language Model** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.06849) | [code]\n\n- [2024\u002F08\u002F02] **Coalitions of Large Language Models Increase the Robustness of AI Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.01380) | [code]\n\n- [2024\u002F07\u002F27] **AgentPeerTalk: 
Empowering Students through Agentic-AI-Driven Discernment of Bullying and Joking in Peer Interactions in Schools** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.01459) | [code]\n\n- [2024\u002F07\u002F25] **Enhancing Agent Learning through World Dynamics Modeling** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.17695) | [code]\n\n- [2024\u002F07\u002F25] **RestoreAgent: Autonomous Image Restoration Agent via Multimodal Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.18035) | [code]\n\n- [2024\u002F07\u002F16] **Preemptive Detection and Correction of Misaligned Actions in LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.11843) | [code]\n\n- [2024\u002F07\u002F15] **Sibyl: Simple yet Effective Agent Framework for Complex Real-world Reasoning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.10718) | [code]\n\n- [2024\u002F07\u002F02] **Beyond Numeric Awards: In-Context Dueling Bandits with LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.01887) | [code]\n\n- [2024\u002F06\u002F24] **OmAgent: A Multi-modal Agent Framework for Complex Video Understanding with Task Divide-and-Conquer** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.16620) | [code]\n\n- [2024\u002F06\u002F07] **SelfGoal: Your Language Agents Already Know How to Achieve High-level Goals** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.04784) | [code]\n\n- [2024\u002F05\u002F25] **AutoManual: Constructing Instruction Manuals by LLM Agents via Interactive Environmental Learning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.16247) | [code]\n\n- [2024\u002F05\u002F24] **Intelligent Go-Explore: Standing on the Shoulders of Giant Foundation Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.15143) | [[code]](https:\u002F\u002Fgithub.com\u002Fconglu1997\u002Fintelligent-go-explore)\n\n- [2024\u002F05\u002F16] **Agent Design Pattern Catalogue: 
A Collection of Architectural Patterns for Foundation Model based Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.10467) | [code]\n\n- [2024\u002F04\u002F30] **Large Language Model Agent for Fake News Detection** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.01593) | [code]\n\n- [2024\u002F04\u002F28] **Logic Agent: Enhancing Validity with Logic Rule Invocation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.18130) | [code]\n\n- [2024\u002F04\u002F13] **LLMSat: A Large Language Model-Based Goal-Oriented Agent for Autonomous Space Exploration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.01392) | [code]\n\n- [2024\u002F04\u002F01] **TraveLER: A Modular Multi-LMM Agent Framework for Video Question-Answering** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.01476) | [code]\n\n- [2024\u002F03\u002F29] **ITCMA: A Generative Agent Based on a Computational Consciousness Structure** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.20097) | [code]\n\n- [2024\u002F02\u002F25] **Bootstrapping Cognitive Agents with a Large Language Model** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.00810) | [code]\n\n- [2024\u002F02\u002F24] **Empowering Large Language Model Agents through Action Learning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.15809) | [[code]](https:\u002F\u002Fgithub.com\u002Fzhao-ht\u002Flearnact)\n\n- [2024\u002F02\u002F20] **Soft Self-Consistency Improves Language Model Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.13212) | [code]\n\n- [2024\u002F02\u002F04] **NavHint: Vision and Language Navigation Agent with a Hint Generator** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.02559) | [code]\n\n- [2024\u002F01\u002F05] **AFSPP: Agent Framework for Shaping Preference and Personality with Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.02870) | [code]\n\n- [2023\u002F11\u002F23] 
**Controlling Large Language Model-based Agents for Large-Scale Decision-Making: An Actor-Critic Approach** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.13884) | [code]\n\n- [2023\u002F11\u002F02] **ProAgent: From Robotic Process Automation to Agentic Process Automation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.10751) | [code]\n\n- [2023\u002F10\u002F16] **CLIN: A Continually Learning Language Agent for Rapid Task Adaptation and Generalization** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.10134) | [code]\n\n- [2023\u002F09\u002F29] **Reason for Future, Act for Now: A Principled Framework for Autonomous LLM Agents with Provable Sample Efficiency** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.17382) | [code]\n\n- [2023\u002F09\u002F14] **Agents: An Open-source Framework for Autonomous Language Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.07870) | [code]\n\n- [2023\u002F09\u002F08] **A Versatile Graph Learning Approach through LLM-based Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.04565) | [code]\n\n- [2023\u002F09\u002F05] **Cognitive Architectures for Language Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.02427) | [code]\n\n- [2023\u002F05\u002F27] **SwiftSage: A Generative Agent with Fast and Slow Thinking for Complex Interactive Tasks** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.17390) | [code]\n\n- [2023\u002F05\u002F25] **Voyager: An Open-Ended Embodied Agent with Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.16291) | [code]\n\n#### Multi-Agent System\n- [2025\u002F07\u002F09] **Pun Intended: Multi-Agent Translation of Wordplay with Contrastive Learning and Phonetic-Semantic Embeddings** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.06506) | [code]\n\n- [2025\u002F07\u002F09] **MIND: A Multi-agent Framework for Zero-shot Harmful Meme Detection** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.06908) | [code]\n\n- [2025\u002F06\u002F27] **GenEscape: Hierarchical Multi-Agent Generation of Escape Room Puzzles** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.21839) | [code]\n\n- [2025\u002F06\u002F20] **SysTemp: A Multi-Agent System for Template-Based Generation of SysML v2** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.21608) | [code]\n\n- [2025\u002F06\u002F19] **StoryWriter: A Multi-Agent Framework for Long Story Generation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.16445) | [code]\n\n- [2025\u002F06\u002F18] **AgentGroupChat-V2: Divide-and-Conquer Is What LLM-Based Multi-Agent System Need** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.15451) | [code]\n\n- [2025\u002F06\u002F17] **MAS-LitEval : Multi-Agent System for Literary Translation Quality Assessment** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.14199) | [code]\n\n- [2025\u002F06\u002F17] **Xolver: Multi-Agent Reasoning with Holistic Experience Learning Just Like an Olympiad Team** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.14234) | [code]\n\n- [2025\u002F06\u002F13] **A Hybrid Multi-Agent Prompting Approach for Simplifying Complex Sentences** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.11681) | [code]\n\n- [2025\u002F06\u002F13] **AutoGen Driven Multi Agent Framework for Iterative Crime Data Analysis and Prediction** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.11475) | [code]\n\n- [2025\u002F06\u002F13] **Investigating the Potential of Large Language Model-Based Router Multi-Agent Architectures for Foundation Design Automation: A Task Classification and Expert Selection Study** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.13811) | [code]\n\n- [2025\u002F06\u002F12] **A Multi-Agent Probabilistic Inference Framework Inspired by Kairanban-Style CoT System with IdoBata Conversation for Debiasing** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.21565) | [code]\n\n- [2025\u002F06\u002F11] **Multi-Agent Language Models: Advancing Cooperation, Coordination, and Adaptation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.09331) | [code]\n\n- [2025\u002F06\u002F11] **ReasonMed: A 370K Multi-Agent Generated Dataset for Advancing Medical Reasoning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.09513) | [code]\n\n- [2025\u002F06\u002F11] **Chat-of-Thought: Collaborative Multi-Agent System for Generating Domain Specific Information** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.10086) | [code]\n\n- [2025\u002F06\u002F10] **CAF-I: A Collaborative Multi-Agent Framework for Enhanced Irony Detection with Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.08430) | [code]\n\n- [2025\u002F06\u002F10] **Reinforce LLM Reasoning through Multi-Agent Reflection** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.08379) | [code]\n\n- [2025\u002F06\u002F09] **From Debate to Equilibrium: Belief-Driven Multi-Agent LLM Reasoning via Bayesian Nash Equilibrium** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.08292) | [code]\n\n- [2025\u002F06\u002F08] **Theorem-of-Thought: A Multi-Agent Framework for Abductive, Deductive, and Inductive Reasoning in Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.07106) | [code]\n\n- [2025\u002F06\u002F06] **MAPLE: Multi-Agent Adaptive Planning with Long-Term Memory for Table Reasoning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.05813) | [code]\n\n- [2025\u002F06\u002F06] **Does It Run and Is That Enough? 
Revisiting Text-to-Chart Generation with a Multi-Agent Approach** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.06175) | [code]\n\n- [2025\u002F06\u002F05] **Demonstrations of Integrity Attacks in Multi-Agent Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.04572) | [code]\n\n- [2025\u002F06\u002F04] **CLAIM: An Intent-Driven Multi-Agent Framework for Analyzing Manipulation in Courtroom Dialogues** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.04131) | [code]\n\n- [2025\u002F06\u002F03] **MASTER: Enhancing Large Language Model via Multi-Agent Simulated Teaching** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.02689) | [code]\n\n- [2025\u002F06\u002F03] **Adaptive Graph Pruning for Multi-Agent Communication** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.02951) | [code]\n\n- [2025\u002F06\u002F03] **A Multi-Agent Framework for Mitigating Dialect Biases in Privacy Policy Question-Answering Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.02998) | [code]\n\n- [2025\u002F06\u002F03] **Mitigating Manipulation and Enhancing Persuasion: A Reflective Multi-Agent Approach for Legal Argument Generation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.02992) | [code]\n\n- [2025\u002F06\u002F03] **MAEBE: Multi-Agent Emergent Behavior Framework** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.03053) | [code]\n\n- [2025\u002F06\u002F02] **STORM-BORN: A Challenging Mathematical Derivations Dataset Curated via a Human-in-the-Loop Multi-Agent Framework** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.01531) | [code]\n\n- [2025\u002F06\u002F02] **An Empirical Study of Group Conformity in Multi-Agent Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.01332) | [code]\n\n- [2025\u002F05\u002F31] **Goal-Aware Identification and Rectification of Misinformation in Multi-Agent Systems** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.00509) | [code]\n\n- [2025\u002F05\u002F31] **PAKTON: A Multi-Agent Framework for Question Answering in Long Legal Agreements** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.00608) | [code]\n\n- [2025\u002F05\u002F30] **CREFT: Sequential Multi-Agent LLM for Character Relation Extraction** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.24553) | [code]\n\n- [2025\u002F05\u002F30] **Multiple LLM Agents Debate for Equitable Cultural Alignment** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.24671) | [code]\n\n- [2025\u002F05\u002F30] **An Adversary-Resistant Multi-Agent LLM System via Credibility Scoring** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.24239) | [code]\n\n- [2025\u002F05\u002F29] **Cross-Task Experiential Learning on LLM-based Multi-Agent Collaboration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.23187) | [code]\n\n- [2025\u002F05\u002F29] **OWL: Optimized Workforce Learning for General Multi-Agent Assistance in Real-World Task Automation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.23885) | [code]\n\n- [2025\u002F05\u002F28] **Co-Saving: Resource Aware Multi-Agent Collaboration for Software Development** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.21898) | [code]\n\n- [2025\u002F05\u002F28] **CoMaPOI: A Collaborative Multi-Agent Framework for Next POI Prediction Bridging the Gap Between Trajectory and Language** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.23837) | [code]\n\n- [2025\u002F05\u002F28] **GETReason: Enhancing Image Context Extraction through Hierarchical Multi-Agent Reasoning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.21863) | [code]\n\n- [2025\u002F05\u002F27] **Long Context Scaling: Divide and Conquer via Multi-Agent Question-driven Collaboration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.20625) | [code]\n\n- [2025\u002F05\u002F27] 
**Rethinking Information Synthesis in Multimodal Question Answering A Multi-Agent Perspective** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.20816) | [code]\n\n- [2025\u002F05\u002F27] **Scaling External Knowledge Input Beyond Context Windows of LLMs via Multi-Agent Collaboration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.21471) | [code]\n\n- [2025\u002F05\u002F26] **CoTGuard: Using Chain-of-Thought Triggering for Copyright Protection in Multi-Agent LLM Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.19405) | [code]\n\n- [2025\u002F05\u002F26] **Multi-Agent Collaboration via Evolving Orchestration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.19591) | [code]\n\n- [2025\u002F05\u002F26] **Select, Read, and Write: A Multi-Agent Framework of Full-Text-based Related Work Generation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.19647) | [code]\n\n- [2025\u002F05\u002F26] **Project Riley: Multimodal Multi-Agent LLM Collaboration with Emotional Reasoning and Voting** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.20521) | [code]\n\n- [2025\u002F05\u002F25] **MetaMind: Modeling Human Social Thoughts with Metacognitive Multi-Agent Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.18943) | [code]\n\n- [2025\u002F05\u002F25] **GUARDIAN: Safeguarding LLM Multi-Agent Collaborations with Temporal Graph Modeling** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.19234) | [code]\n\n- [2025\u002F05\u002F23] **ManuSearch: Democratizing Deep Search in Large Language Models with a Transparent and Open Multi-Agent Framework** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.18105) | [code]\n\n- [2025\u002F05\u002F23] **PD$^3$: A Project Duplication Detection Framework via Adapted Multi-Agent Debate** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.17492) | [code]\n\n- [2025\u002F05\u002F22] **EMULATE: A Multi-Agent Framework for Determining 
the Veracity of Atomic Claims by Emulating Human Actions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.16576) | [code]\n\n- [2025\u002F05\u002F22] **X-MAS: Towards Building Multi-Agent Systems with Heterogeneous LLMs** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.16997) | [code]\n\n- [2025\u002F05\u002F21] **MAS-ZERO: Designing Multi-Agent Systems with Zero Supervision** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.14996) | [code]\n\n- [2025\u002F05\u002F20] **MAATS: A Multi-Agent Automated Translation System Based on MQM Evaluation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.14848) | [code]\n\n- [2025\u002F05\u002F20] **MLZero: A Multi-Agent System for End-to-end Machine Learning Automation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.13941) | [code]\n\n- [2025\u002F05\u002F19] **AD-AGENT: A Multi-agent Framework for End-to-end Anomaly Detection** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.12594) | [code]\n\n- [2025\u002F05\u002F18] **IP Leakage Attacks Targeting LLM-Based Multi-Agent Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.12442) | [code]\n\n- [2025\u002F05\u002F17] **BELLE: A Bi-Level Multi-Agent Reasoning Framework for Multi-Hop Question Answering** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.11811) | [code]\n\n- [2025\u002F05\u002F16] **Connecting the Dots: A Chain-of-Collaboration Prompting Framework for LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.10936) | [code]\n\n- [2025\u002F05\u002F15] **Assessing Collective Reasoning in Multi-Agent LLMs via Hidden Profile Tasks** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.11556) | [code]\n\n- [2025\u002F05\u002F12] **Towards Multi-Agent Reasoning Systems for Collaborative Expertise Delegation: An Exploratory Design Study** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.07313) | [code]\n\n- [2025\u002F05\u002F06] **The Power of 
Stories: Narrative Priming Shapes How LLM Agents Collaborate and Compete** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.03961) | [code]\n\n- [2025\u002F04\u002F30] **Which Agent Causes Task Failures and When? On Automated Failure Attribution of LLM Multi-Agent Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.00212) | [code]\n\n- [2025\u002F04\u002F26] **MATCHA: Can Multi-Agent Collaboration Build a Trustworthy Conversational Recommender?** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.20094) | [code]\n\n- [2025\u002F04\u002F24] **Collaborating Action by Action: A Multi-agent LLM Framework for Embodied Reasoning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.17950) | [code]\n\n- [2025\u002F04\u002F23] **Less is More: Enhancing Structured Multi-Agent Reasoning via Quality-Guided Distillation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.16408) | [code]\n\n- [2025\u002F04\u002F21] **EducationQ: Evaluating LLMs' Teaching Capabilities Through Multi-Agent Dialogue Framework** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.14928) | [code]\n\n- [2025\u002F04\u002F17] **Are AI agents the new machine translation frontier? 
Challenges and opportunities of single- and multi-agent systems for multilingual digital communication** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.12891) | [code]\n\n- [2025\u002F04\u002F15] **X-Teaming: Multi-Turn Jailbreaks and Defenses with Adaptive Multi-Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.13203) | [code]\n\n- [2025\u002F04\u002F11] **Beyond Self-Reports: Multi-Observer Agents for Personality Assessment in Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.08399) | [code]\n\n- [2025\u002F04\u002F11] **DocAgent: A Multi-Agent System for Automated Code Documentation Generation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.08725) | [code]\n\n- [2025\u002F04\u002F08] **FactGuard: Leveraging Multi-Agent Systems to Generate Answerable and Unanswerable Questions for Enhanced Long-Context LLM Extraction** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.05607) | [code]\n\n- [2025\u002F04\u002F04] **YaleNLP @ PerAnsSumm 2025: Multi-Perspective Integration via Mixture-of-Agents for Enhanced Healthcare QA Summarization** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.03932) | [code]\n\n- [2025\u002F04\u002F02] **Self-Resource Allocation in Multi-Agent LLM Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.02051) | [code]\n\n- [2025\u002F04\u002F02] **Achieving Unanimous Consensus in Decision Making Using Multi-Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.02128) | [code]\n\n- [2025\u002F04\u002F01] **When Persuasion Overrides Truth in Multi-Agent LLM Debates: Introducing a Confidence-Weighted Persuasion Override Rate (CW-POR)** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.00374) | [code]\n\n- [2025\u002F04\u002F01] **AI Hiring with LLMs: A Context-Aware and Explainable Multi-Agent Framework for Resume Screening** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.02870) | [code]\n\n- 
[2025\u002F04\u002F01] **AgentNet: Decentralized Evolutionary Coordination for LLM-based Multi-Agent Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.00587) | [code]\n\n- [2025\u002F03\u002F31] **$\\textit{Agents Under Siege}$: Breaking Pragmatic Multi-Agent LLM Systems with Optimized Prompt Attacks** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.00218) | [code]\n\n- [2025\u002F03\u002F28] **WorkTeam: Constructing Workflows from Natural Language with Multi-Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.22473) | [code]\n\n- [2025\u002F03\u002F28] **Self-Evolving Multi-Agent Simulations for Realistic Clinical Interactions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.22678) | [code]\n\n- [2025\u002F03\u002F27] **Collab: Controlled Decoding using Mixture of Agents for LLM Alignment** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.21720) | [code]\n\n- [2025\u002F03\u002F27] **Debate-Driven Multi-Agent LLMs for Phishing Email Detection** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.22038) | [code]\n\n- [2025\u002F03\u002F26] **TAMA: A Human-AI Collaborative Thematic Analysis Framework Using Multi-Agent LLMs for Clinical Interviews** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.20666) | [code]\n\n- [2025\u002F03\u002F26] **3MDBench: Medical Multimodal Multi-agent Dialogue Benchmark** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.13861) | [code]\n\n- [2025\u002F03\u002F25] **Multi-agent Application System in Office Collaboration Scenarios** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.19584) | [code]\n\n- [2025\u002F03\u002F24] **AgentDropout: Dynamic Agent Elimination for Token-Efficient and High-Performance LLM-Based Multi-Agent Collaboration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.18891) | [code]\n\n- [2025\u002F03\u002F23] **MathAgent: Leveraging a Mixture-of-Math-Agent Framework for Real-World Multimodal 
Mathematical Error Detection** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.18132) | [code]\n\n- [2025\u002F03\u002F21] **ConvoGen: Enhancing Conversational AI with Synthetic Data: A Multi-Agent Approach** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.17460) | [code]\n\n- [2025\u002F03\u002F21] **MARS: A Multi-Agent Framework Incorporating Socratic Guidance for Automated Prompt Optimization** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.16874) | [code]\n\n- [2025\u002F03\u002F19] **When Pigs Get Sick: Multi-Agent AI for Swine Disease Detection** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.15204) | [code]\n\n- [2025\u002F03\u002F19] **MAMM-Refine: A Recipe for Improving Faithfulness in Generation with Multi-Agent Collaboration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.15272) | [code]\n\n- [2025\u002F03\u002F18] **Gricean Norms as a Basis for Effective Collaboration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.14484) | [code]\n\n- [2025\u002F03\u002F17] **Identifying Cooperative Personalities in Multi-agent Contexts through Personality Steering with Representation Engineering** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.12722) | [code]\n\n- [2025\u002F03\u002F17] **MAP: Evaluation and Multi-Agent Enhancement of Large Language Models for Inpatient Pathways** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.13205) | [code]\n\n- [2025\u002F03\u002F16] **LLM-Mediated Guidance of MARL Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.13553) | [code]\n\n- [2025\u002F03\u002F14] **AIstorian lets AI be a historian: A KG-powered multi-agent system for accurate biography generation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.11346) | [code]\n\n- [2025\u002F03\u002F14] **Prompt Injection Detection and Mitigation via AI Multi-Agent NLP Frameworks** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.11517) | [code]\n\n- 
[2025\u002F03\u002F14] **RAG-KG-IL: A Multi-Agent Hybrid Framework for Reducing Hallucinations and Enhancing LLM Reasoning through RAG and Incremental Knowledge Graph Learning Integration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.13514) | [code]\n\n- [2025\u002F03\u002F13] **LLMs Working in Harmony: A Survey on the Technological Aspects of Building Effective LLM-Based Multi Agent Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.01963) | [code]\n\n- [2025\u002F03\u002F12] **ReMA: Learning to Meta-think for LLMs with Multi-Agent Reinforcement Learning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.09501) | [code]\n\n- [2025\u002F03\u002F07] **MM-StoryAgent: Immersive Narrated Storybook Video Generation with a Multi-Agent Paradigm across Text, Image and Audio** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.05242) | [code]\n\n- [2025\u002F03\u002F07] **GEMA-Score: Granular Explainable Multi-Agent Score for Radiology Report Evaluation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.05347) | [code]\n\n- [2025\u002F03\u002F07] **Multi Agent based Medical Assistant for Edge Devices** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.05397) | [code]\n\n- [2025\u002F03\u002F05] **MA-LoT: Multi-Agent Lean-based Long Chain-of-Thought Reasoning enhances Formal Theorem Proving** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.03205) | [code]\n\n- [2025\u002F03\u002F05] **MAS-GPT: Training LLMs to Build LLM-based Multi-Agent Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.03686) | [code]\n\n- [2025\u002F03\u002F05] **Multi-Agent Systems Powered by Large Language Models: Applications in Swarm Intelligence** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.03800) | [code]\n\n- [2025\u002F03\u002F05] **Preserving Cultural Identity with Context-Aware Translation Through Multi-Agent AI Systems** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.04827) | [code]\n\n- [2025\u002F03\u002F05] **Enhancing Collective Intelligence in Large Language Models Through Emotional Integration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.04849) | [code]\n\n- [2025\u002F03\u002F04] **BRIDGE: Bootstrapping Text to Control Time-Series Generation via Multi-Agent Iterative Optimization and Diffusion Modelling** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.02445) | [code]\n\n- [2025\u002F03\u002F04] **Multi-Agent System for AI-Assisted Extraction of Narrative Arcs in TV Series** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.04817) | [code]\n\n- [2025\u002F03\u002F01] **Structured Reasoning for Fairness: A Multi-Agent Approach to Bias Detection in Textual Data** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.00355) | [code]\n\n- [2025\u002F02\u002F28] **PreMind: Multi-Agent Video Understanding for Advanced Indexing of Presentation-style Videos** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.00162) | [code]\n\n- [2025\u002F02\u002F27] **M^3Builder: A Multi-Agent System for Automated Machine Learning in Medical Imaging** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.20301) | [code]\n\n- [2025\u002F02\u002F26] **Stay Focused: Problem Drift in Multi-Agent Debate** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.19559) | [code]\n\n- [2025\u002F02\u002F26] **Voting or Consensus? 
Decision-Making in Multi-Agent Debate** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.19130) | [code]\n\n- [2025\u002F02\u002F25] **Enhancing Text Classification with a Novel Multi-Agent Collaboration Framework Leveraging BERT** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.18653) | [code]\n\n- [2025\u002F02\u002F25] **A Cooperative Multi-Agent Framework for Zero-Shot Named Entity Recognition** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.18702) | [code]\n\n- [2025\u002F02\u002F25] **Debt Collection Negotiations with Large Language Models: An Evaluation System and Optimizing Decision Making with Multi-Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.18228) | [code]\n\n- [2025\u002F02\u002F25] **FACT-AUDIT: An Adaptive Multi-Agent Framework for Dynamic Fact-Checking Evaluation of Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.17924) | [code]\n\n- [2025\u002F02\u002F24] **MobileSteward: Integrating Multiple App-Oriented Agents with Self-Evolution to Automate Cross-App Instructions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.16796) | [code]\n\n- [2025\u002F02\u002F24] **Mobile-Agent-V: Learning Mobile Device Operation Through Video-Guided Multi-Agent Collaboration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.17110) | [[code]](https:\u002F\u002Fgithub.com\u002FX-PLUG\u002FMobileAgent)\n\n- [2025\u002F02\u002F24] **METAL: A Multi-Agent Framework for Chart Generation with Test-Time Scaling** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.17651) | [code]\n\n- [2025\u002F02\u002F23] **The Hidden Strength of Disagreement: Unraveling the Consensus-Diversity Tradeoff in Adaptive Multi-Agent Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.16565) | [[code]](https:\u002F\u002Fgithub.com\u002Fwuzengqing001225\u002FConsensusDiversityTradeoffMAS)\n\n- [2025\u002F02\u002F20] **Enhancing Language Multi-Agent Learning with 
Multi-Agent Credit Re-Assignment for Interactive Environment Generalization** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.14496) | [code]\n\n- [2025\u002F02\u002F20] **CORBA: Contagious Recursive Blocking Attacks on Multi-Agent Systems Based on Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.14529) | [code]\n\n- [2025\u002F02\u002F17] **Table-Critic: A Multi-Agent Framework for Collaborative Criticism and Refinement in Table Reasoning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.11799) | [code]\n\n- [2025\u002F02\u002F17] **HARBOR: Exploring Persona Dynamics in Multi-Agent Competition** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.12149) | [code]\n\n- [2025\u002F02\u002F15] **Divergent Thoughts toward One Goal: LLM-based Multi-Agent Collaboration System for Electronic Design Automation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.10857) | [code]\n\n- [2025\u002F02\u002F13] **PathFinder: A Multi-Modal Multi-Agent System for Medical Diagnostic Decision-Making Applied to Histopathology** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.08916) | [code]\n\n- [2025\u002F02\u002F13] **Mind the Gaps: Logical English, Prolog, and Multi-agent Systems for Autonomous Vehicles** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.09216) | [code]\n\n- [2025\u002F02\u002F12] **Faithful, Unfaithful or Ambiguous? 
Multi-Agent Debate with Initial Stance for Summary Evaluation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.08514) | [code]\n\n- [2025\u002F02\u002F12] **If Multi-Agent Debate is the Answer, What is the Question?** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.08788) | [code]\n\n- [2025\u002F02\u002F11] **Don't Just Demo, Teach Me the Principles: A Principle-Based Multi-Agent Prompting Strategy for Text Classification** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.07165) | [code]\n\n- [2025\u002F02\u002F11] **Multi-Agent Collaboration for Multilingual Code Instruction Tuning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.07487) | [code]\n\n- [2025\u002F02\u002F10] **KARMA: Leveraging Multi-Agent LLMs for Automated Knowledge Graph Enrichment** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.06472) | [code]\n\n- [2025\u002F02\u002F09] **Preventing Rogue Agents Improves Multi-Agent Collaboration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.05986) | [code]\n\n- [2025\u002F02\u002F09] **The Application of MATEC (Multi-AI Agent Team Care) Framework in Sepsis Care** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.16433) | [code]\n\n- [2025\u002F02\u002F08] **CODESIM: Multi-Agent Code Generation and Problem Solving through Simulation-Driven Planning and Debugging** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.05664) | [code]\n\n- [2025\u002F02\u002F08] **Multi-Agent Simulator Drives Language Models for Legal Intensive Interaction** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.06882) | [code]\n\n- [2025\u002F02\u002F07] **S$^2$-MAD: Breaking the Token Barrier to Enhance Multi-Agent Debate Efficiency** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.04790) | [code]\n\n- [2025\u002F02\u002F06] **Multi-Agent Reinforcement Learning with Focal Diversity Optimization** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.04492) | 
[code]\n\n- [2025\u002F02\u002F06] **Enhancing Online Learning Efficiency Through Heterogeneous Resource Integration with a Multi-Agent RAG System** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.03948) | [code]\n\n- [2025\u002F02\u002F06] **Multi-agent Architecture Search via Agentic Supernet** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.04180) | [code]\n\n- [2025\u002F02\u002F04] **Position: Scaling LLM Agents Requires Asymptotic Analysis with LLM Primitives** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.04358) | [code]\n\n- [2025\u002F02\u002F04] **Multi-Agent Design: Optimizing Agents with Better Prompts and Topologies** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.02533) | [code]\n\n- [2025\u002F02\u002F03] **PlotGen: Multi-Agent LLM-based Scientific Data Visualization via Multimodal Feedback** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.00988) | [code]\n\n- [2025\u002F02\u002F03] **ChartCitor: Multi-Agent Framework for Fine-Grained Chart Visual Attribution** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.00989) | [code]\n\n- [2025\u002F02\u002F02] **Rethinking Mixture-of-Agents: Is Mixing Different Large Language Models Beneficial?** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.00674) | [code]\n\n- [2025\u002F02\u002F02] **Efficient Multi-Agent System Training with Data Influence-Oriented Tree Search** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.00955) | [code]\n\n- [2025\u002F01\u002F29] **Layered Chain-of-Thought Prompting for Multi-Agent LLM Systems: A Comprehensive Approach to Explainable Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.18645) | [code]\n\n- [2025\u002F01\u002F27] **MADP: Multi-Agent Deductive Planning for Enhanced Cognitive-Behavioral Mental Health Question Answer** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.15826) | [code]\n\n- [2025\u002F01\u002F25] **Improving 
Retrieval-Augmented Generation through Multi-Agent Reinforcement Learning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.15228) | [code]\n\n- [2025\u002F01\u002F24] **Multi-agent KTO: Reinforcing Strategic Interactions of Large Language Model in Language Game** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.14225) | [code]\n\n- [2025\u002F01\u002F24] **Unmasking Conversational Bias in AI Multiagent Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.14844) | [code]\n\n- [2025\u002F01\u002F22] **FilmAgent: A Multi-Agent Framework for End-to-End Film Automation in Virtual 3D Spaces** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.12909) | [code]\n\n- [2025\u002F01\u002F19] **IntellAgent: A Multi-Agent Framework for Evaluating Conversational AI Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.11067) | [code]\n\n- [2025\u002F01\u002F16] **AutoCBT: An Autonomous Multi-agent Framework for Cognitive Behavioral Therapy in Psychological Counseling** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.09426) | [code]\n\n- [2025\u002F01\u002F14] **Talk to Right Specialists: Routing and Planning in Multi-agent System for Question Answering** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.07813) | [code]\n\n- [2025\u002F01\u002F05] **LatteReview: A Multi-Agent Framework for Systematic Review Automation Using Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.05468) | [code]\n\n- [2025\u002F01\u002F02] **Harnessing Multi-Agent LLMs for Complex Engineering Problem-Solving: A Framework for Senior Design Projects** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.01205) | [code]\n\n- [2024\u002F12\u002F30] **Distributed Mixture-of-Agents for Edge Inference with Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.21200) | [code]\n\n- [2024\u002F12\u002F28] **M-MAD: Multidimensional Multi-Agent Debate for Advanced 
Machine Translation Evaluation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.20127) | [code]\n\n- [2024\u002F12\u002F28] **Efficient Multi-Agent Collaboration with Tool Use for Online Planning in Complex Table Question Answering** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.20145) | [code]\n\n- [2024\u002F12\u002F24] **Multi-Agents Based on Large Language Models for Knowledge-based Visual Question Answering** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.18351) | [code]\n\n- [2024\u002F12\u002F22] **Multi-Agent Sampling: Scaling Inference Compute for Data Synthesis with Tree Search-Based Agentic Collaboration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.17061) | [code]\n\n- [2024\u002F12\u002F22] **A Multi-AI Agent System for Autonomous Optimization of Agentic AI Solutions via Iterative Refinement and LLM-Driven Feedback Loops** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.17149) | [code]\n\n- [2024\u002F12\u002F20] **Mitigating Social Bias in Large Language Models: A Multi-Objective Approach within a Multi-Agent Framework** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.15504) | [code]\n\n- [2024\u002F12\u002F19] **PsyDraw: A Multi-Agent Multimodal System for Mental Health Screening in Left-Behind Children** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.14769) | [code]\n\n- [2024\u002F12\u002F18] **Gradual Vigilance and Interval Communication: Enhancing Value Alignment in Multi-Agent Debates** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.13471) | [code]\n\n- [2024\u002F12\u002F15] **Cultural Palette: Pluralising Culture Alignment via Multi-agent Palette** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.11167) | [code]\n\n- [2024\u002F12\u002F13] **AutoPatent: A Multi-Agent Framework for Automatic Patent Generation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.09796) | [code]\n\n- [2024\u002F12\u002F12] **DiverseAgentEntropy: 
Quantifying Black-Box LLM Uncertainty through Diverse Perspectives and Multi-Agent Interaction** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.09572) | [code]\n\n- [2024\u002F12\u002F11] **NAT-NL2GQL: A Novel Multi-Agent Framework for Translating Natural Language to Graph Query Language** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.10434) | [code]\n\n- [2024\u002F12\u002F10] **AutoPrep: Natural Language Question-Aware Data Preparation with a Multi-Agent Framework** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.10422) | [code]\n\n- [2024\u002F12\u002F07] **SLA Management in Reconfigurable Multi-Agent RAG: A Systems Approach to Question Answering** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.06832) | [code]\n\n- [2024\u002F12\u002F06] **Breaking Event Rumor Detection via Stance-Separated Multi-Agent Debate** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.04859) | [code]\n\n- [2024\u002F12\u002F06] **Towards Effective GenAI Multi-Agent Collaboration: Design and Evaluation for Enterprise Applications** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.05449) | [code]\n\n- [2024\u002F12\u002F06] **Enhancing LLMs for Impression Generation in Radiology Reports through a Multi-Agent System** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.06828) | [code]\n\n- [2024\u002F12\u002F06] **TeamCraft: A Benchmark for Multi-Modal Multi-Agent Systems in Minecraft** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.05255) | [code]\n\n- [2024\u002F12\u002F05] **Educational-Psychological Dialogue Robot Based on Multi-Agent Collaboration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.03847) | [code]\n\n- [2024\u002F12\u002F01] **Multi-Agent Collaboration in Incident Response with Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.00652) | [code]\n\n- [2024\u002F11\u002F28] **MAG-V: A Multi-Agent Framework for Synthetic Data Generation and 
Verification** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.04494) | [code]\n\n- [2024\u002F11\u002F21] **PIORS: Personalized Intelligent Outpatient Reception based on Large Language Model with Multi-Agents Medical Scenario Simulation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.13902) | [code]\n\n- [2024\u002F11\u002F21] **Enhancing LLMs for Power System Simulations: A Feedback-driven Multi-agent Framework** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.16707) | [code]\n\n- [2024\u002F11\u002F18] **The Power of Many: Multi-Agent Multimodal Models for Cultural Image Captioning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.11758) | [code]\n\n- [2024\u002F11\u002F12] **BudgetMLAgent: A Cost-Effective LLM Multi-Agent system for Automating Machine Learning Tasks** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.07464) | [code]\n\n- [2024\u002F11\u002F11] **Using Generative AI and Multi-Agents to Provide Automatic Feedback** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.07407) | [code]\n\n- [2024\u002F11\u002F09] **Mixture of Knowledge Minigraph Agents for Literature Review Generation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.06159) | [code]\n\n- [2024\u002F11\u002F05] **SAUCE: Synchronous and Asynchronous User-Customizable Environment for Multi-Agent LLM Interaction** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.03397) | [code]\n\n- [2024\u002F11\u002F05] **SMoA: Improving Multi-agent Large Language Models with Sparse Mixture-of-Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.03284) | [code]\n\n- [2024\u002F11\u002F01] **DARD: A Multi-Agent Approach for Task-Oriented Dialog Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.00427) | [code]\n\n- [2024\u002F10\u002F30] **ACC-Debate: An Actor-Critic Approach to Multi-Agent Debate** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.00053) | [code]\n\n- 
[2024\u002F10\u002F29] **Flow-DPO: Improving LLM Mathematical Reasoning through Online Multi-Agent Learning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.22304) | [code]\n\n- [2024\u002F10\u002F29] **MARCO: Multi-Agent Real-time Chat Orchestration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.21784) | [code]\n\n- [2024\u002F10\u002F28] **CRAT: A Multi-Agent Framework for Causality-Enhanced Reflective and Retrieval-Augmented Translation with Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.21067) | [code]\n\n- [2024\u002F10\u002F27] **AutoKaggle: A Multi-Agent Framework for Autonomous Data Science Competitions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.20424) | [code]\n\n- [2024\u002F10\u002F24] **Schema-Guided Culture-Aware Complex Event Simulation with Multi-Agent Role-Play** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.18935) | [code]\n\n- [2024\u002F10\u002F23] **GraphTeam: Facilitating Large Language Model-based Graph Analysis via Multi-Agent Collaboration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.18032) | [code]\n\n- [2024\u002F10\u002F22] **Decoding Time Series with LLMs: A Multi-Agent Framework for Cross-Domain Annotation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.17462) | [code]\n\n- [2024\u002F10\u002F19] **An Electoral Approach to Diversify LLM-based Multi-Agent Collective Decision-Making** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.15168) | [code]\n\n- [2024\u002F10\u002F18] **Synthesizing Post-Training Data for LLMs through Multi-Agent Simulation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.14251) | [code]\n\n- [2024\u002F10\u002F17] **AdaSwitch: Adaptive Switching between Small and Large Agents for Effective Cloud-Local Collaborative Learning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.13181) | [code]\n\n- [2024\u002F10\u002F16] **PRefLexOR: Preference-based Recursive 
Language Modeling for Exploratory Optimization of Reasoning and Agentic Thinking** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.12375) | [code]\n\n- [2024\u002F10\u002F13] **LLM-Based Multi-Agent Systems are Scalable Graph Generative Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.09824) | [code]\n\n- [2024\u002F10\u002F12] **Many Heads Are Better Than One: Improved Scientific Idea Generation by A LLM-Based Multi-Agent System** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.09403) | [code]\n\n- [2024\u002F10\u002F11] **JAILJUDGE: A Comprehensive Jailbreak Judge Benchmark with Multi-Agent Enhanced Explanation Evaluation Framework** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.12855) | [code]\n\n- [2024\u002F10\u002F11] **PEAR: A Robust and Flexible Automation Framework for Ptychography Enabled by Multiple Large Language Model Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.09034) | [code]\n\n- [2024\u002F10\u002F10] **AI-Press: A Multi-Agent News Generating and Feedback Simulation System Powered by Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.07561) | [code]\n\n- [2024\u002F10\u002F10] **Multi-Agent Collaborative Data Selection for Efficient LLM Pretraining** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.08102) | [code]\n\n- [2024\u002F10\u002F10] **Optima: Optimizing Effectiveness and Efficiency for LLM-Based Multi-Agent System** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.08115) | [code]\n\n- [2024\u002F10\u002F10] **Prompt Engineering a Schizophrenia Chatbot: Utilizing a Multi-Agent Approach for Enhanced Compliance with Prompt Instructions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.12848) | [code]\n\n- [2024\u002F10\u002F10] **Diversity of Thought Elicits Stronger Reasoning Capabilities in Multi-Agent Debate Frameworks** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.12853) | [code]\n\n- 
[2024\u002F10\u002F09] **Seeker: Enhancing Exception Handling in Code with LLM-based Multi-Agent Approach** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.06949) | [code]\n\n- [2024\u002F10\u002F07] **Adversarial Multi-Agent Evaluation of Large Language Models through Iterative Debates** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.04663) | [code]\n\n- [2024\u002F10\u002F06] **MindScope: Exploring cognitive biases in large language models through Multi-Agent Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.04452) | [code]\n\n- [2024\u002F10\u002F03] **Towards Implicit Bias Detection and Mitigation in Multi-Agent LLM Interactions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.02584) | [code]\n\n- [2024\u002F10\u002F03] **Agents' Room: Narrative Generation through Multi-step Collaboration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.02603) | [code]\n\n- [2024\u002F10\u002F03] **Can Large Language Models Grasp Legal Theories? 
Enhance Legal Reasoning with Insights from Multi-Agent Collaboration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.02507) | [code]\n\n- [2024\u002F10\u002F03] **ColaCare: Enhancing Electronic Health Record Modeling through Large Language Model-Driven Multi-Agent Collaboration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.02551) | [code]\n\n- [2024\u002F10\u002F03] **AutoML-Agent: A Multi-Agent LLM Framework for Full-Pipeline AutoML** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.02958) | [code]\n\n- [2024\u002F10\u002F02] **RGD: Multi-LLM Based Agent Debugger via Refinement and Generation Guidance** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.01242) | [code]\n\n- [2024\u002F10\u002F02] **Zodiac: A Cardiologist-Level LLM Framework for Multi-Agent Diagnostics** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.02026) | [code]\n\n- [2024\u002F09\u002F21] **Towards Automated Patent Workflows: AI-Orchestrated Multi-Agent Framework for Intellectual Property Management and Analysis** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.19006) | [code]\n\n- [2024\u002F09\u002F21] **GroupDebate: Enhancing the Efficiency of Multi-Agent Debate Using Group Discussion** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.14051) | [code]\n\n- [2024\u002F09\u002F20] **Minstrel: Structural Prompt Generation with Multi-Agents Coordination for Non-AI Experts** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.13449) | [code]\n\n- [2024\u002F09\u002F18] **MAgICoRe: Multi-Agent, Iterative, Coarse-to-Fine Refinement for Reasoning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.12147) | [code]\n\n- [2024\u002F09\u002F17] **The Art of Storytelling: Multi-Agent Generative AI for Dynamic Multimodal Narratives** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.11261) | [code]\n\n- [2024\u002F09\u002F16] **Instigating Cooperation among LLM Agents Using Adaptive Information 
Modulation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.10372) | [code]\n\n- [2024\u002F09\u002F14] **Synergistic Simulations: Multi-Agent Problem Solving with Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.13753) | [code]\n\n- [2024\u002F09\u002F12] **Knowledge Tagging with Large Language Model based Multi-Agent System** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.08406) | [code]\n\n- [2024\u002F09\u002F11] **Propaganda to Hate: A Multimodal Analysis of Arabic Memes with Multi-Agent LLMs** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.07246) | [code]\n\n- [2024\u002F09\u002F09] **SciAgents: Automating scientific discovery through multi-agent intelligent graph reasoning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.05556) | [code]\n\n- [2024\u002F09\u002F06] **Using Large Language Models to Generate Authentic Multi-agent Knowledge Work Datasets** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.04286) | [code]\n\n- [2024\u002F09\u002F05] **xLAM: A Family of Large Action Models to Empower AI Agent Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.03215) | [code]\n\n- [2024\u002F09\u002F02] **Co-Learning: Code Learning for Multi-Agent Reinforcement Collaborative Framework with Conversational Natural Language Interfaces** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.00985) | [code]\n\n- [2024\u002F08\u002F28] **BattleAgentBench: A Benchmark for Evaluating Cooperation and Competition Capabilities of Language Models in Multi-Agent Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.15971) | [code]\n\n- [2024\u002F08\u002F27] **AgentMonitor: A Plug-and-Play Framework for Predictive and Secure Multi-Agent Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.14972) | [code]\n\n- [2024\u002F08\u002F24] **Towards Human-Level Understanding of Complex Process Engineering Schematics: A Pedagogical, 
Introspective Multi-Agent Framework for Open-Domain Question Answering** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.00082) | [code]\n\n- [2024\u002F08\u002F22] **MuMA-ToM: Multi-modal Multi-Agent Theory of Mind** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.12574) | [code]\n\n- [2024\u002F08\u002F21] **DreamFactory: Pioneering Multi-Scene Long Video Generation with a Multi-Agent Framework** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.11788) | [code]\n\n- [2024\u002F08\u002F16] **The Fellowship of the LLMs: Multi-Agent Workflows for Synthetic Preference Optimization Dataset Generation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.08688) | [code]\n\n- [2024\u002F08\u002F15] **MAG-SQL: Multi-Agent Generative Approach with Soft Schema Linking and Iterative Sub-SQL Refinement for Text-to-SQL** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.07930) | [code]\n\n- [2024\u002F08\u002F15] **Text2BIM: Generating Building Models Using a Large Language Model-based Multi-Agent Framework** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.08054) | [code]\n\n- [2024\u002F08\u002F14] **Development of a Large Language Model-based Multi-Agent Clinical Decision Support System for Korean Triage and Acuity Scale (KTAS)-Based Triage and Treatment Planning in Emergency Departments** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.07531) | [code]\n\n- [2024\u002F08\u002F08] **Can LLMs Beat Humans in Debating? 
A Dynamic Multi-agent Framework for Competitive Debate** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.04472) | [code]\n\n- [2024\u002F08\u002F05] **ReDel: A Toolkit for LLM-Powered Recursive Multi-Agent Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.02248) | [code]\n\n- [2024\u002F08\u002F05] **Evaluating and Enhancing LLMs Agent based on Theory of Mind in Guandan: A Multi-Player Cooperative Game under Imperfect Information** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.02559) | [code]\n\n- [2024\u002F07\u002F23] **LawLuo: A Multi-Agent Collaborative Framework for Multi-Round Chinese Legal Consultation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.16252) | [code]\n\n- [2024\u002F07\u002F21] **Multi-Agent Causal Discovery Using Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.15073) | [code]\n\n- [2024\u002F07\u002F19] **NeLLCom-X: A Comprehensive Neural-Agent Framework to Simulate Language Learning and Group Communication** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.13999) | [code]\n\n- [2024\u002F07\u002F17] **Towards Collaborative Intelligence: Propagating Intentions and Reasoning for Multi-Agent Coordination with Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.12532) | [code]\n\n- [2024\u002F07\u002F16] **InvAgent: A Large Language Model based Multi-Agent System for Inventory Management in Supply Chains** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.11384) | [code]\n\n- [2024\u002F07\u002F13] **Synergistic Multi-Agent Framework with Trajectory Learning for Knowledge-Intensive Tasks** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.09893) | [code]\n\n- [2024\u002F07\u002F13] **Cohesive Conversations: Enhancing Authenticity in Multi-Agent Simulated Dialogues** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.09897) | [code]\n\n- [2024\u002F07\u002F10] **Flooding Spread of 
Manipulated Knowledge in LLM-Based Multi-Agent Communities** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.07791) | [code]\n\n- [2024\u002F07\u002F09] **FinCon: A Synthesized LLM Multi-Agent System with Conceptual Verbal Reinforcement for Enhanced Financial Decision Making** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.06567) | [code]\n\n- [2024\u002F07\u002F09] **Internet of Agents: Weaving a Web of Heterogeneous Agents for Collaborative Intelligence** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.07061) | [code]\n\n- [2024\u002F07\u002F04] **Solving Zebra Puzzles Using Constraint-Guided Multi-Agent Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.03956) | [code]\n\n- [2024\u002F07\u002F03] **MentalAgora: A Gateway to Advanced Personalized Care in Mental Health through Multi-Agent Debating and Attribute Control** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.02736) | [code]\n\n- [2024\u002F06\u002F17] **Improving Multi-Agent Debate with Sparse Communication Topology** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.11776) | [code]\n\n- [2024\u002F06\u002F13] **Multi-Agent Software Development through Cross-Team Collaboration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.08979) | [[code]](https:\u002F\u002Fgithub.com\u002Fopenbmb\u002Fchatdev)\n\n- [2024\u002F06\u002F11] **CoEvol: Constructing Better Responses for Instruction Finetuning through Multi-Agent Cooperation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.07054) | [[code]](https:\u002F\u002Fgithub.com\u002Flirenhao1997\u002Fcoevol)\n\n- [2024\u002F06\u002F07] **Mixture-of-Agents Enhances Large Language Model Capabilities** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.04692) | [code]\n\n- [2024\u002F06\u002F05] **Towards Detecting LLMs Hallucination via Markov Chain-based Multi-agent Debate Framework** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.03075) | [code]\n\n- 
[2024\u002F06\u002F04] **Chain of Agents: Large Language Models Collaborating on Long-Context Tasks** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.02818) | [code]\n\n- [2024\u002F06\u002F03] **Mobile-Agent-v2: Mobile Device Operation Assistant with Effective Navigation via Multi-Agent Collaboration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.01014) | [[code]](https:\u002F\u002Fgithub.com\u002Fx-plug\u002Fmobileagent)\n\n- [2024\u002F05\u002F30] **Safe Multi-agent Reinforcement Learning with Natural Language Constraints** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.20018) | [code]\n\n- [2024\u002F05\u002F23] **CityGPT: Towards Urban IoT Learning, Analysis and Interaction with Multi-Agent System** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.14691) | [code]\n\n- [2024\u002F05\u002F20] **(Perhaps) Beyond Human Translation: Harnessing Multi-Agent Collaboration for Translating Ultra-Long Literary Texts** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.11804) | [code]\n\n- [2024\u002F05\u002F10] **LLM Discussion: Enhancing the Creativity of Large Language Models via Discussion Framework and Role-Play** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.06373) | [code]\n\n- [2024\u002F05\u002F07] **Enhancing the Efficiency and Accuracy of Underlying Asset Reviews in Structured Finance: The Application of Multi-agent Framework** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.04294) | [code]\n\n- [2024\u002F05\u002F06] **Persona Inconstancy in Multi-Agent LLM Collaboration: Conformity, Confabulation, and Impersonation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.03862) | [code]\n\n- [2024\u002F05\u002F05] **Language Evolution for Evading Social Media Regulation via LLM-based Multi-agent Simulation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.02858) | [code]\n\n- [2024\u002F04\u002F25] **Cooperate or Collapse: Emergence of Sustainable Cooperation in a 
Society of LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.16698) | [code]\n\n- [2024\u002F04\u002F23] **ClinicalAgent: Clinical Trial Multi-Agent System with Large Language Model-based Reasoning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.14777) | [code]\n\n- [2024\u002F04\u002F14] **Confidence Calibration and Rationalization for LLMs via Multi-Agent Deliberation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.09127) | [code]\n\n- [2024\u002F04\u002F12] **Leveraging Multi-AI Agents for Cross-Domain Knowledge Discovery** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.08511) | [code]\n\n- [2024\u002F04\u002F09] **Foundation Models to the Rescue: Deadlock Resolution in Connected Multi-Robot Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.06413) | [code]\n\n- [2024\u002F04\u002F08] **360$^\\circ$REA: Towards A Reusable Experience Accumulation with 360{\\deg} Assessment for Multi-Agent System** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.05569) | [code]\n\n- [2024\u002F04\u002F06] **MACM: Utilizing a Multi-Agent System for Condition Mining in Solving Complex Mathematical Problems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.04735) | [[code]](https:\u002F\u002Fgithub.com\u002Fbin123apple\u002Fmacm)\n\n- [2024\u002F04\u002F02] **Self-Organized Agents: A LLM Multi-Agent Framework toward Ultra Large-Scale Code Generation and Optimization** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.02183) | [code]\n\n- [2024\u002F04\u002F02] **CMAT: A Multi-Agent Collaboration Tuning Framework for Enhancing Small Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.01663) | [code]\n\n- [2024\u002F03\u002F26] **MAGIS: LLM-Based Multi-Agent Framework for GitHub Issue Resolution** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.17927) | [code]\n\n- [2024\u002F03\u002F22] **CACA Agent: Capability Collaboration based AI Agent** 
| [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.15137) | [code]\n\n- [2024\u002F03\u002F21] **Multi-Agent VQA: Exploring Multi-Agent Foundation Models in Zero-Shot Visual Question Answering** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.14783) | [code]\n\n- [2024\u002F03\u002F19] **Embodied LLM Agents Learn to Cooperate in Organized Teams** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.12482) | [code]\n\n- [2024\u002F03\u002F12] **Transforming Competition into Collaboration: The Revolutionary Role of Multi-Agent Systems and Language Models in Modern Organizations** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.07769) | [code]\n\n- [2024\u002F03\u002F02] **AutoDefense: Multi-Agent LLM Defense against Jailbreak Attacks** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.04783) | [code]\n\n- [2024\u002F02\u002F28] **Rethinking the Bounds of LLM Reasoning: Are Multi-Agent Discussions the Key?** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.18272) | [code]\n\n- [2024\u002F02\u002F26] **Chain-of-Discussion: A Multi-Model Framework for Complex Evidence-Based Question Answering** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.16313) | [code]\n\n- [2024\u002F02\u002F26] **LLMArena: Assessing Capabilities of Large Language Models in Dynamic Multi-Agent Environments** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.16499) | [code]\n\n- [2024\u002F02\u002F21] **LLM Based Multi-Agent Generation of Semi-structured Documents from Semantic Templates in the Public Administration Domain** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.14871) | [code]\n\n- [2024\u002F02\u002F18] **Benchmark Self-Evolving: A Multi-Agent Framework for Dynamic LLM Evaluation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.11443) | [[code]](https:\u002F\u002Fgithub.com\u002Fnanshineloong\u002Fself-evolving-benchmark)\n\n- [2024\u002F02\u002F18] **LongAgent: Scaling Language Models to 
128k Context through Multi-Agent Collaboration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.11550) | [code]\n\n- [2024\u002F02\u002F15] **TDAG: A Multi-Agent Framework based on Dynamic Task Decomposition and Agent Generation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.10178) | [code]\n\n- [2024\u002F02\u002F03] **More Agents Is All You Need** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.05120) | [code]\n\n- [2024\u002F02\u002F02] **Reasoning Capacity in Multi-Agent Systems: Limitations, Challenges and Human-Centered Solutions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.01108) | [code]\n\n- [2024\u002F02\u002F02] **A Multi-Agent Conversational Recommender System** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.01135) | [code]\n\n- [2024\u002F01\u002F11] **Combating Adversarial Attacks with Multi-Agent Debate** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.05998) | [code]\n\n- [2024\u002F01\u002F08] **MARG: Multi-Agent Review Generation for Scientific Papers** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.04259) | [code]\n\n- [2024\u002F01\u002F08] **SpeechAgents: Human-Communication Simulation with Multi-Modal Multi-Agent Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.03945) | [code]\n\n- [2024\u002F01\u002F08] **Why Solving Multi-agent Path Finding with Large Language Model has not Succeeded Yet** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.03630) | [code]\n\n- [2023\u002F12\u002F20] **AgentCoder: Multi-Agent-based Code Generation with Iterative Testing and Optimisation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.13010) | [code]\n\n- [2023\u002F10\u002F31] **Multi-Agent Consensus Seeking via Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.20151) | [code]\n\n- [2023\u002F10\u002F25] **MultiPrompter: Cooperative Prompt Optimization with Multi-Agent Reinforcement Learning** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.16730) | [code]\n\n- [2023\u002F08\u002F22] **ProAgent: Building Proactive Cooperative Agents with Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.11339) | [code]\n\n- [2023\u002F08\u002F21] **AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.10848) | [code]\n\n- [2023\u002F08\u002F14] **ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.07201) | [code]\n\n- [2023\u002F08\u002F01] **MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.00352) | [code]\n\n- [2023\u002F06\u002F05] **Multi-Agent Collaboration: Harnessing the Power of Intelligent LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.03314) | [code]\n\n- [2023\u002F05\u002F31] **Recursive Metropolis-Hastings Naming Game: Symbol Emergence in a Multi-agent System based on Probabilistic Generative Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.19761) | [code]\n\n- [2023\u002F05\u002F30] **Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.19118) | [code]\n\n- [2023\u002F04\u002F26] **Multi-Party Chat: Conversational Agents in Group Settings with Humans and Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.13835) | [code]\n\n- [2023\u002F04\u002F24] **ChatLLM Network: More brains, More intelligence** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.12998) | [code]\n\n### Stability\n#### Safety\n- [2025\u002F07\u002F09] **VisualTrap: A Stealthy Backdoor Attack on GUI Agents via Visual Grounding Manipulation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.06899) | [code]\n\n- [2025\u002F07\u002F04] 
**LTLCrit: A Temporal Logic-based LLM Critic for Safe and Efficient Embodied Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.03293) | [code]\n\n- [2025\u002F07\u002F01] **Enhancing LLM Agent Safety via Causal Influence Prompting** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.00979) | [code]\n\n- [2025\u002F07\u002F01] **GAF-Guard: An Agentic Framework for Risk Management and Governance in Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.02986) | [code]\n\n- [2025\u002F06\u002F25] **Model Editing as a Double-Edged Sword: Steering Agent Ethical Behavior Toward Beneficence or Harm** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.20606) | [code]\n\n- [2025\u002F06\u002F11] **Effective Red-Teaming of Policy-Adherent Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.09600) | [code]\n\n- [2025\u002F06\u002F11] **Disclosure Audits for LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.10171) | [code]\n\n- [2025\u002F06\u002F09] **SAFEFLOW: A Principled Protocol for Trustworthy and Transactional Autonomous Agent Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.07564) | [code]\n\n- [2025\u002F06\u002F04] **RedDebate: Safer Responses through Multi-Agent Red Teaming Debates** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.11083) | [code]\n\n- [2025\u002F06\u002F01] **Simple Prompt Injection Attacks Can Leak Personal Data Observed by LLM Agents During Task Execution** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.01055) | [code]\n\n- [2025\u002F05\u002F29] **AgentAlign: Navigating Safety Alignment in the Shift from Informative to Agentic Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.23020) | [code]\n\n- [2025\u002F05\u002F28] **RedTeamCUA: Realistic Adversarial Testing of Computer-Use Agents in Hybrid Web-OS Environments** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.21936) | [code]\n\n- [2025\u002F05\u002F26] **TrojanStego: Your Language Model Can Secretly Be A Steganographic Privacy Leaking Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.20118) | [code]\n\n- [2025\u002F05\u002F25] **GUARDIAN: Safeguarding LLM Multi-Agent Collaborations with Temporal Graph Modeling** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.19234) | [code]\n\n- [2025\u002F05\u002F18] **IP Leakage Attacks Targeting LLM-Based Multi-Agent Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.12442) | [code]\n\n- [2025\u002F05\u002F16] **EnvInjection: Environmental Prompt Injection Attack to Multi-modal Web Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.11717) | [code]\n\n- [2025\u002F04\u002F24] **Assessing the Potential of Generative Agents in Crowdsourced Fact-Checking** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.19940) | [code]\n\n- [2025\u002F04\u002F15] **Towards Automated Safety Requirements Derivation Using Agent-based RAG** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.11243) | [code]\n\n- [2025\u002F03\u002F26] **sudo rm -rf agentic_security** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.20279) | [code]\n\n- [2025\u002F03\u002F24] **AgentSpec: Customizable Runtime Enforcement for Safe and Reliable LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.18666) | [code]\n\n- [2025\u002F03\u002F06] **SafeArena: Evaluating the Safety of Autonomous Web Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.04957) | [code]\n\n- [2025\u002F02\u002F20] **CORBA: Contagious Recursive Blocking Attacks on Multi-Agent Systems Based on Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.14529) | [code]\n\n- [2025\u002F02\u002F18] **AEIA-MN: Evaluating the Robustness of Multimodal LLM-Powered Mobile Agents Against Active Environmental 
Injection Attacks** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.13053) | [code]\n\n- [2025\u002F02\u002F17] **&#34;Nuclear Deployed!&#34;: Analyzing Catastrophic Risks in Decision-making of Autonomous LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.11355) | [code]\n\n- [2025\u002F02\u002F01] **ALU: Agentic LLM Unlearning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.00406) | [code]\n\n- [2025\u002F01\u002F28] **Context is Key for Agent Security** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.17070) | [code]\n\n- [2024\u002F12\u002F21] **The Task Shield: Enforcing Task Alignment to Defend Against Indirect Prompt Injection in LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.16682) | [code]\n\n- [2024\u002F12\u002F16] **Seeker: Towards Exception Safety Code Generation with Intermediate Language Agents Framework** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.11713) | [code]\n\n- [2024\u002F12\u002F09] **The Fusion of Large Language Models and Formal Methods for Trustworthy AI Agents: A Roadmap** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.06512) | [code]\n\n- [2024\u002F11\u002F08] **Towards Low-Resource Harmful Meme Detection with LMM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.05383) | [code]\n\n- [2024\u002F11\u002F06] **MRJ-Agent: An Effective Jailbreak Agent for Multi-Round Dialogue** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.03814) | [code]\n\n- [2024\u002F11\u002F04] **Attacking Vision-Language Computer Agents via Pop-ups** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.02391) | [code]\n\n- [2024\u002F10\u002F22] **AdvWeb: Controllable Black-box Attacks on VLM-powered Web Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.17401) | [code]\n\n- [2024\u002F10\u002F18] **Coherence-Driven Multimodal Safety Dialogue with Active Learning for Embodied Agents** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.14141) | [code]\n\n- [2024\u002F10\u002F11] **AgentHarm: A Benchmark for Measuring Harmfulness of LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.09024) | [code]\n\n- [2024\u002F10\u002F09] **I Want to Break Free! Persuasion and Anti-Social Behavior of LLMs in Multi-Agent Settings with Social Hierarchy** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.07109) | [code]\n\n- [2024\u002F09\u002F28] **SELP: Generating Safe and Efficient Task Plans for Robot Agents with Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.19471) | [code]\n\n- [2024\u002F09\u002F17] **EIA: Environmental Injection Attack on Generalist Web Agents for Privacy Leakage** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.11295) | [code]\n\n- [2024\u002F09\u002F13] **AI-LieDar: Examine the Trade-off Between Utility and Truthfulness in LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.09013) | [code]\n\n- [2024\u002F08\u002F20] **Athena: Safe Autonomous Agents with Verbal Contrastive Learning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.11021) | [code]\n\n- [2024\u002F08\u002F05] **Caution for the Environment: Multimodal Agents are Susceptible to Environmental Distractions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.02544) | [code]\n\n- [2024\u002F07\u002F23] **RedAgent: Red Teaming Large Language Models with Context-aware Autonomous Language Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.16667) | [code]\n\n- [2024\u002F06\u002F05] **BadAgent: Inserting and Activating Backdoor Attacks in LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.03007) | [[code]](https:\u002F\u002Fgithub.com\u002Fdpamk\u002Fbadagent)\n\n- [2024\u002F05\u002F30] **Safe Multi-agent Reinforcement Learning with Natural Language Constraints** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.20018) | [code]\n\n- [2024\u002F05\u002F24] **Hacc-Man: An Arcade Game for Jailbreaking LLMs** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.15902) | [code]\n\n- [2024\u002F03\u002F02] **AutoDefense: Multi-Agent LLM Defense against Jailbreak Attacks** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.04783) | [code]\n\n- [2024\u002F02\u002F17] **Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.11208) | [code]\n\n- [2024\u002F02\u002F16] **ToolSword: Unveiling Safety Issues of Large Language Models in Tool Learning Across Three Stages** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.10753) | [[code]](https:\u002F\u002Fgithub.com\u002Fjunjie-ye\u002Ftoolsword)\n\n- [2024\u002F02\u002F02] **TrustAgent: Towards Safe and Trustworthy LLM-based Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.01586) | [code]\n\n- [2024\u002F01\u002F11] **Combating Adversarial Attacks with Multi-Agent Debate** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.05998) | [code]\n\n- [2023\u002F11\u002F17] **Testing Language Model Agents Safely in the Wild** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.10538) | [code]\n\n#### Bias\n- [2025\u002F05\u002F27] **Silence is Not Consensus: Disrupting Agreement Bias in Multi-Agent LLMs via Catfish Agent for Clinical Decision Making** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.21503) | [code]\n\n- [2025\u002F05\u002F14] **Language Agents Mirror Human Causal Reasoning Biases. 
How Can We Help Them Think Like Scientists?** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.09614) | [code]\n\n- [2025\u002F04\u002F10] **MALIBU Benchmark: Multi-Agent LLM Implicit Bias Uncovered** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.01019) | [code]\n\n- [2025\u002F03\u002F27] **Bias-Aware Agent: Enhancing Fairness in AI-Driven Knowledge Retrieval** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.21237) | [code]\n\n- [2025\u002F03\u002F01] **Structured Reasoning for Fairness: A Multi-Agent Approach to Bias Detection in Textual Data** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.00355) | [code]\n\n- [2025\u002F01\u002F29] **Actions Speak Louder than Words: Agent Decisions Reveal Implicit Biases in Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.17420) | [code]\n\n- [2025\u002F01\u002F24] **Unmasking Conversational Bias in AI Multiagent Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.14844) | [code]\n\n- [2024\u002F12\u002F20] **Mitigating Social Bias in Large Language Models: A Multi-Objective Approach within a Multi-Agent Framework** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.15504) | [code]\n\n- [2024\u002F11\u002F12] **Mitigating Bias in Queer Representation within Large Language Models: A Collaborative Agent Approach** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.07656) | [code]\n\n- [2024\u002F10\u002F06] **MindScope: Exploring cognitive biases in large language models through Multi-Agent Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.04452) | [code]\n\n- [2024\u002F10\u002F03] **Towards Implicit Bias Detection and Mitigation in Multi-Agent LLM Interactions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.02584) | [code]\n\n- [2024\u002F05\u002F23] **ALI-Agent: Assessing LLMs&#39; Alignment with Human Values via Agent-based Evaluation** | 
[[paper]](https://arxiv.org/abs/2405.14125) | [code]

- [2024/04/23] **Aligning LLM Agents by Learning Latent Preference from User Edits** | [[paper]](https://arxiv.org/abs/2404.15269) | [code]

- [2024/02/19] **Polarization of Autonomous Generative AI Agents Under Echo Chambers** | [[paper]](https://arxiv.org/abs/2402.12212) | [code]

- [2024/02/14] **Towards better Human-Agent Alignment: Assessing Task Utility in LLM-Powered Applications** | [[paper]](https://arxiv.org/abs/2402.09015) | [code]

- [2024/01/09] **Agent Alignment in Evolving Social Norms** | [[paper]](https://arxiv.org/abs/2401.04620) | [code]

#### Hallucination
- [2025/06/23] **A Comment On "The Illusion of Thinking": Reframing the Reasoning Cliff as an Agentic Gap** | [[paper]](https://arxiv.org/abs/2506.18957) | [code]

- [2025/05/28] **Position: Uncertainty Quantification Needs Reassessment for Large-language Model Agents** | [[paper]](https://arxiv.org/abs/2505.22655) | [code]

- [2025/03/14] **Prompt Injection Detection and Mitigation via AI Multi-Agent NLP Frameworks** | [[paper]](https://arxiv.org/abs/2503.11517) | [code]

- [2025/03/14] **RAG-KG-IL: A Multi-Agent Hybrid Framework for Reducing Hallucinations and Enhancing LLM Reasoning through RAG and Incremental Knowledge Graph Learning Integration** | [[paper]](https://arxiv.org/abs/2503.13514) | [code]

- [2025/03/01] **EXCLAIM: An Explainable Cross-Modal Agentic System for Misinformation Detection with Hierarchical Retrieval** | [[paper]](https://arxiv.org/abs/2504.06269) | [code]

- [2025/02/26] **Winning Big with Small Models: Knowledge Distillation vs. Self-Training for Reducing Hallucination in QA Agents** | [[paper]](https://arxiv.org/abs/2502.19545) | [code]

- [2025/02/14] **Automated Hypothesis Validation with Agentic Sequential Falsifications** | [[paper]](https://arxiv.org/abs/2502.09858) | [code]

- [2025/02/04] **Position: Stop Acting Like Language Model Agents Are Normal Agents** | [[paper]](https://arxiv.org/abs/2502.10420) | [code]

- [2025/02/03] **SelfCheckAgent: Zero-Resource Hallucination Detection in Generative Large Language Models** | [[paper]](https://arxiv.org/abs/2502.01812) | [code]

- [2025/01/19] **Hallucination Mitigation using Agentic AI Natural Language-Based Frameworks** | [[paper]](https://arxiv.org/abs/2501.13946) | [code]

- [2024/11/25] **Enhancing Multi-Agent Consensus through Third-Party LLM Integration: Analyzing Uncertainty and Mitigating Hallucinations in Large Language Models** | [[paper]](https://arxiv.org/abs/2411.16189) | [code]

- [2024/11/12] **SHARP: Unlocking Interactive Hallucination via Stance Transfer in Role-Playing Agents** | [[paper]](https://arxiv.org/abs/2411.07965) | [code]

- [2024/07/08] **DebUnc: Mitigating Hallucinations in Large Language Model Agent Communication with Uncertainty Estimations** | [[paper]](https://arxiv.org/abs/2407.06426) | [code]

- [2024/06/29] **BioKGBench: A Knowledge Graph Checking Benchmark of AI Agent for Biomedical Science** | [[paper]](https://arxiv.org/abs/2407.00466) | [code]

- [2024/06/17] **Small Agent Can Also Rock! Empowering Small Language Models as Hallucination Detector** | [[paper]](https://arxiv.org/abs/2406.11277) | [code]

- [2024/06/05] **Towards Detecting LLMs Hallucination via Markov Chain-based Multi-agent Debate Framework** | [[paper]](https://arxiv.org/abs/2406.03075) | [code]

- [2024/05/28] **TimeChara: Evaluating Point-in-Time Character Hallucination of Role-Playing Large Language Models** | [[paper]](https://arxiv.org/abs/2405.18027) | [code]

- [2024/02/13] **Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast** | [[paper]](https://arxiv.org/abs/2402.08567) | [[code]](https://github.com/sail-sg/agent-smith)

### Infrastructure
#### Benchmark&Evaluation
- [2025/07/08] **ECom-Bench: Can LLM Agent Resolve Real-World E-commerce Customer Support Issues?** | [[paper]](https://arxiv.org/abs/2507.05639) | [code]

- [2025/07/07] **Evaluating Memory in LLM Agents via Incremental Multi-Turn Interactions** | [[paper]](https://arxiv.org/abs/2507.05257) | [code]

- [2025/07/04] **Recon, Answer, Verify: Agents in Search of Truth** | [[paper]](https://arxiv.org/abs/2507.03671) | [code]

- [2025/07/04] **STRUCTSENSE: A Task-Agnostic Agentic Framework for Structured Information Extraction with Human-In-The-Loop Evaluation and Benchmarking** | [[paper]](https://arxiv.org/abs/2507.03674) | [code]

- [2025/07/01] **TransLaw: Benchmarking Large Language Models in Multi-Agent Simulation of the Collaborative Translation** | [[paper]](https://arxiv.org/abs/2507.00875) | [code]

- [2025/06/27] **Don't Trust Generative Agents to Mimic Communication on Social Networks Unless You Benchmarked their Empirical Realism** | [[paper]](https://arxiv.org/abs/2506.21974) | [code]

- [2025/06/27] **RExBench: Can coding agents autonomously implement AI research extensions?** | [[paper]](https://arxiv.org/abs/2506.22598) | [code]

- [2025/06/26] **Agent-RewardBench: Towards a Unified Benchmark for Reward Modeling across Perception, Planning, and Safety in Real-World Multimodal Agents** | [[paper]](https://arxiv.org/abs/2506.21252) | [code]

- [2025/06/25] **The Decrypto Benchmark for Multi-Agent Reasoning and Theory of Mind** | [[paper]](https://arxiv.org/abs/2506.20664) | [code]

- [2025/06/20] **MemBench: Towards More Comprehensive Evaluation on the Memory of LLM-based Agents** | [[paper]](https://arxiv.org/abs/2506.21605) | [code]

- [2025/06/20] **Dissecting the SWE-Bench Leaderboards: Profiling Submitters and Architectures of LLM- and Agent-Based Repair Systems** | [[paper]](https://arxiv.org/abs/2506.17208) | [code]

- [2025/06/19] **IS-Bench: Evaluating Interactive Safety of VLM-Driven Embodied Agents in Daily Household Tasks** | [[paper]](https://arxiv.org/abs/2506.16402) | [code]

- [2025/06/13] **DeepResearch Bench: A Comprehensive Benchmark for Deep Research Agents** | [[paper]](https://arxiv.org/abs/2506.11763) | [code]

- [2025/06/13] **The Behavior Gap: Evaluating Zero-shot LLM Agents in Complex Task-Oriented Dialogs** | [[paper]](https://arxiv.org/abs/2506.12266) | [code]

- [2025/06/11] **Bench to the Future: A Pastcasting Benchmark for Forecasting Agents** | [[paper]](https://arxiv.org/abs/2506.21558) | [code]

- [2025/06/10] **Atomic-to-Compositional Generalization for Mobile Agents with A New Benchmark and Scheduling System** | [[paper]](https://arxiv.org/abs/2506.08972) | [code]

- [2025/06/10] **UTBoost: Rigorous Evaluation of Coding Agents on SWE-Bench** | [[paper]](https://arxiv.org/abs/2506.09289) | [code]

- [2025/06/09] **EconWebArena: Benchmarking Autonomous Agents on Economic Tasks in Realistic Web Environments** | [[paper]](https://arxiv.org/abs/2506.08136) | [code]

- [2025/06/09] **HeuriGym: An Agentic Benchmark for LLM-Crafted Heuristics in Combinatorial Optimization** | [[paper]](https://arxiv.org/abs/2506.07972) | [code]

- [2025/06/09] **$\tau^2$-Bench: Evaluating Conversational Agents in a Dual-Control Environment** | [[paper]](https://arxiv.org/abs/2506.07982) | [code]

- [2025/06/05] **Flex-TravelPlanner: A Benchmark for Flexible Planning with Language Agents** | [[paper]](https://arxiv.org/abs/2506.04649) | [code]

- [2025/06/04] **AgentMisalignment: Measuring the Propensity for Misaligned Behaviour in LLM-Based Agents** | [[paper]](https://arxiv.org/abs/2506.04018) | [code]

- [2025/06/02] **FormFactory: An Interactive Benchmarking Suite for Multimodal Form-Filling Agents** | [[paper]](https://arxiv.org/abs/2506.01520) | [code]

- [2025/06/02] **WebChoreArena: Evaluating Web Browsing Agents on Realistic Tedious Web Tasks** | [[paper]](https://arxiv.org/abs/2506.01952) | [code]

- [2025/05/31] **DefenderBench: A Toolkit for Evaluating Language Agents in Cybersecurity Environments** | [[paper]](https://arxiv.org/abs/2506.00739) | [code]

- [2025/05/30] **Draw ALL Your Imagine: A Holistic Benchmark and Agent Framework for Complex Instruction-based Image Generation** | [[paper]](https://arxiv.org/abs/2505.24787) | [code]

- [2025/05/30] **Agent-X: Evaluating Deep Multimodal Reasoning in Vision-Centric Agentic Tasks** | [[paper]](https://arxiv.org/abs/2505.24876) | [code]

- [2025/05/30] **Open CaptchaWorld: A Comprehensive Web-based Platform for Testing and Benchmarking Multimodal LLM Agents** | [[paper]](https://arxiv.org/abs/2505.24878) | [code]

- [2025/05/29] **GSO: Challenging Software Optimization Tasks for Evaluating SWE-Agents** | [[paper]](https://arxiv.org/abs/2505.23671) | [code]

- [2025/05/27] **AutoJudger: An Agent-Driven Framework for Efficient Benchmarking of MLLMs** | [[paper]](https://arxiv.org/abs/2505.21389) | [code]

- [2025/05/26] **ScienceBoard: Evaluating Multimodal Autonomous Agents in Realistic Scientific Workflows** | [[paper]](https://arxiv.org/abs/2505.19897) | [code]

- [2025/05/26] **MLR-Bench: Evaluating AI Agents on Open-Ended Machine Learning Research** | [[paper]](https://arxiv.org/abs/2505.19955) | [code]

- [2025/05/26] **On Path to Multimodal Historical Reasoning: HistBench and HistAgent** | [[paper]](https://arxiv.org/abs/2505.20246) | [code]

- [2025/05/24] **CRMArena-Pro: Holistic Assessment of LLM Agents Across Diverse Business Scenarios and Interactions** | [[paper]](https://arxiv.org/abs/2505.18878) | [code]

- [2025/05/22] **BioDSA-1K: Benchmarking Data Science Agents for Biomedical Research** | [[paper]](https://arxiv.org/abs/2505.16100) | [code]

- [2025/05/22] **From EduVisBench to EduVisAgent: A Benchmark and Multi-Agent Framework for Reasoning-Driven Pedagogical Visualization** | [[paper]](https://arxiv.org/abs/2505.16832) | [code]

- [2025/05/22] **AGENTIF: Benchmarking Instruction Following of Large Language Models in Agentic Scenarios** | [[paper]](https://arxiv.org/abs/2505.16944) | [code]

- [2025/05/21] **X-WebAgentBench: A Multilingual Interactive Web Benchmark for Evaluating Global Agentic System** | [[paper]](https://arxiv.org/abs/2505.15372) | [code]

- [2025/05/21] **BountyBench: Dollar Impact of AI Agent Attackers and Defenders on Real-World Cybersecurity Systems** | [[paper]](https://arxiv.org/abs/2505.15216) | [code]

- [2025/05/21] **InfoDeepSeek: Benchmarking Agentic Information Seeking for Retrieval-Augmented Generation** | [[paper]](https://arxiv.org/abs/2505.15872) | [code]

- [2025/05/21] **MAPS: A Multilingual Benchmark for Global Agent Performance and Security** | [[paper]](https://arxiv.org/abs/2505.15935) | [code]

- [2025/05/18] **MedAgentBoard: Benchmarking Multi-Agent Collaboration with Conventional Methods for Diverse Medical Tasks** | [[paper]](https://arxiv.org/abs/2505.12371) | [code]

- [2025/05/17] **Mobile-Bench-v2: A More Realistic and Comprehensive Benchmark for VLM-based Mobile Agents** | [[paper]](https://arxiv.org/abs/2505.11891) | [code]

- [2025/05/16] **GuideBench: Benchmarking Domain-Oriented Guideline Following for LLM Agents** | [[paper]](https://arxiv.org/abs/2505.11368) | [code]

- [2025/05/16] **REI-Bench: Can Embodied Agents Understand Vague Human Instructions in Task Planning?** | [[paper]](https://arxiv.org/abs/2505.10872) | [code]

- [2025/05/02] **PIPA: A Unified Evaluation Protocol for Diagnosing Interactive Planning Agents** | [[paper]](https://arxiv.org/abs/2505.01592) | [code]

- [2025/04/25] **Auto-SLURP: A Benchmark Dataset for Evaluating Multi-Agent Frameworks in Smart Personal Assistant** | [[paper]](https://arxiv.org/abs/2504.18373) | [code]

- [2025/04/24] **Toward a Human-Centered Evaluation Framework for Trustworthy LLM-Powered GUI Agents** | [[paper]](https://arxiv.org/abs/2504.17934) | [code]

- [2025/04/21] **PLANET: A Collection of Benchmarks for Evaluating LLMs' Planning Capabilities** | [[paper]](https://arxiv.org/abs/2504.14773) | [code]

- [2025/04/16] **BrowseComp: A Simple Yet Challenging Benchmark for Browsing Agents** | [[paper]](https://arxiv.org/abs/2504.12516) | [code]

- [2025/04/15] **GraphicBench: A Planning Benchmark for Graphic Design with Language Agents** | [[paper]](https://arxiv.org/abs/2504.11571) | [code]

- [2025/04/13] **AgentA/B: Automated and Scalable Web A/B Testing with Interactive LLM Agents** | [[paper]](https://arxiv.org/abs/2504.09723) | [code]

- [2025/04/11] **TP-RAG: Benchmarking Retrieval-Augmented Large Language Model Agents for Spatiotemporal-Aware Travel Planning** | [[paper]](https://arxiv.org/abs/2504.08694) | [code]

- [2025/04/11] **AgentRewardBench: Evaluating Automatic Evaluations of Web Agent Trajectories** | [[paper]](https://arxiv.org/abs/2504.08942) | [code]

- [2025/04/10] **MALIBU Benchmark: Multi-Agent LLM Implicit Bias Uncovered** | [[paper]](https://arxiv.org/abs/2507.01019) | [code]

- [2025/04/06] **CO-Bench: Benchmarking Language Model Agents in Algorithm Search for Combinatorial Optimization** | [[paper]](https://arxiv.org/abs/2504.04310) | [code]

- [2025/04/04] **How Social is It? A Benchmark for LLMs' Capabilities in Multi-user Multi-turn Social Agent Tasks** | [[paper]](https://arxiv.org/abs/2505.04628) | [code]

- [2025/03/31] **SciReplicate-Bench: Benchmarking LLMs in Agent-driven Algorithmic Reproduction from Research Papers** | [[paper]](https://arxiv.org/abs/2504.00255) | [code]

- [2025/03/28] **Evaluating LLM-based Agents for Multi-Turn Conversations: A Survey** | [[paper]](https://arxiv.org/abs/2503.22458) | [code]

- [2025/03/25] **Writing as a testbed for open ended agents** | [[paper]](https://arxiv.org/abs/2503.19711) | [code]

- [2025/03/24] **EconEvals: Benchmarks and Litmus Tests for LLM Agents in Unknown Environments** | [[paper]](https://arxiv.org/abs/2503.18825) | [code]

- [2025/03/20] **Survey on Evaluation of LLM-based Agents** | [[paper]](https://arxiv.org/abs/2503.16416) | [code]

- [2025/03/16] **VeriLA: A Human-Centered Evaluation Framework for Interpretable Verification of LLM Agent Failures** | [[paper]](https://arxiv.org/abs/2503.12651) | [code]

- [2025/03/11] **AgentOrca: A Dual-System Framework to Evaluate Language Agents on Operational Routine and Constraint Adherence** | [[paper]](https://arxiv.org/abs/2503.08669) | [code]

- [2025/03/10] **MedAgentsBench: Benchmarking Thinking Models and Agent Frameworks for Complex Medical Reasoning** | [[paper]](https://arxiv.org/abs/2503.07459) | [code]

- [2025/03/10] **ProjectEval: A Benchmark for Programming Agents Automated Evaluation on Project-Level Code Generation** | [[paper]](https://arxiv.org/abs/2503.07010) | [code]

- [2025/03/10] **RefactorBench: Evaluating Stateful Reasoning in Language Agents Through Code** | [[paper]](https://arxiv.org/abs/2503.07832) | [code]

- [2025/03/10] **BEARCUBS: A benchmark for computer-using web agents** | [[paper]](https://arxiv.org/abs/2503.07919) | [code]

- [2025/03/08] **DSGBench: A Diverse Strategic Game Benchmark for Evaluating LLM-based Agents in Complex Decision-Making Environments** | [[paper]](https://arxiv.org/abs/2503.06047) | [code]

- [2025/03/03] **MultiAgentBench: Evaluating the Collaboration and Competition of LLM agents** | [[paper]](https://arxiv.org/abs/2503.01935) | [code]

- [2025/02/26] **TheoremExplainAgent: Towards Multimodal Explanations for LLM Theorem Understanding** | [[paper]](https://arxiv.org/abs/2502.19400) | [code]

- [2025/02/25] **RefuteBench 2.0 -- Agentic Benchmark for Dynamic Evaluation of LLM Responses to Refutation Instruction** | [[paper]](https://arxiv.org/abs/2502.18308) | [[code]](https://github.com/ElliottYan/RefuteBench-2.0)

- [2025/02/20] **MLGym: A New Framework and Benchmark for Advancing AI Research Agents** | [[paper]](https://arxiv.org/abs/2502.14499) | [code]

- [2025/02/19] **DataSciBench: An LLM Agent Benchmark for Data Science** | [[paper]](https://arxiv.org/abs/2502.13897) | [code]

- [2025/02/13] **EmbodiedBench: Comprehensive Benchmarking Multi-modal Large Language Models for Vision-Driven Embodied Agents** | [[paper]](https://arxiv.org/abs/2502.09560) | [code]

- [2025/02/07] **Evaluating Personality Traits in Large Language Models: Insights from Psychological Questionnaires** | [[paper]](https://arxiv.org/abs/2502.05248) | [code]

- [2025/02/06] **Robotouille: An Asynchronous Planning Benchmark for LLM Agents** | [[paper]](https://arxiv.org/abs/2502.05227) | [code]

- [2025/02/01] **Who's the MVP? A Game-Theoretic Evaluation Benchmark for Modular Attribution in LLM Agents** | [[paper]](https://arxiv.org/abs/2502.00510) | [code]

- [2025/01/21] **EmbodiedEval: Evaluate Multimodal LLMs as Embodied Agents** | [[paper]](https://arxiv.org/abs/2501.11858) | [code]

- [2024/12/23] **LegalAgentBench: Evaluating LLM Agents in Legal Domain** | [[paper]](https://arxiv.org/abs/2412.17259) | [code]

- [2024/12/19] **Agent-SafetyBench: Evaluating the Safety of LLM Agents** | [[paper]](https://arxiv.org/abs/2412.14470) | [code]

- [2024/12/18] **TheAgentCompany: Benchmarking LLM Agents on Consequential Real World Tasks** | [[paper]](https://arxiv.org/abs/2412.14161) | [code]

- [2024/12/18] **ChinaTravel: A Real-World Benchmark for Language Agents in Chinese Travel Planning** | [[paper]](https://arxiv.org/abs/2412.13682) | [code]

- [2024/12/06] **TeamCraft: A Benchmark for Multi-Modal Multi-Agent Systems in Minecraft** | [[paper]](https://arxiv.org/abs/2412.05255) | [code]

- [2024/12/02] **Medchain: Bridging the Gap Between LLM Agents and Clinical Practice through Interactive Sequential Benchmarking** | [[paper]](https://arxiv.org/abs/2412.01605) | [code]

- [2024/11/05] **Benchmarking Multimodal Retrieval Augmented Generation with Dynamic VQA Dataset and Self-adaptive Planning Agent** | [[paper]](https://arxiv.org/abs/2411.02937) | [code]

- [2024/10/28] **Can Machines Think Like Humans? A Behavioral Evaluation of LLM-Agents in Dictator Games** | [[paper]](https://arxiv.org/abs/2410.21359) | [code]

- [2024/10/25] **AgentSense: Benchmarking Social Intelligence of Language Agents through Interactive Scenarios** | [[paper]](https://arxiv.org/abs/2410.19346) | [code]

- [2024/10/25] **AGENT-CQ: Automatic Generation and Evaluation of Clarifying Questions for Conversational Search with LLMs** | [[paper]](https://arxiv.org/abs/2410.19692) | [code]

- [2024/10/23] **MobileSafetyBench: Evaluating Safety of Autonomous Agents in Mobile Device Control** | [[paper]](https://arxiv.org/abs/2410.17520) | [code]

- [2024/10/16] **Proactive Agent: Shifting LLM Agents from Reactive Responses to Active Assistance** | [[paper]](https://arxiv.org/abs/2410.12361) | [code]

- [2024/10/15] **Revisiting Benchmark and Assessment: An Agent-based Exploratory Dynamic Evaluation Framework for LLMs** | [[paper]](https://arxiv.org/abs/2410.11507) | [code]

- [2024/10/11] **JAILJUDGE: A Comprehensive Jailbreak Judge Benchmark with Multi-Agent Enhanced Explanation Evaluation Framework** | [[paper]](https://arxiv.org/abs/2410.12855) | [code]

- [2024/10/11] **AgentHarm: A Benchmark for Measuring Harmfulness of LLM Agents** | [[paper]](https://arxiv.org/abs/2410.09024) | [code]

- [2024/10/10] **Benchmarking Agentic Workflow Generation** | [[paper]](https://arxiv.org/abs/2410.07869) | [code]

- [2024/10/09] **MLE-bench: Evaluating Machine Learning Agents on Machine Learning Engineering** | [[paper]](https://arxiv.org/abs/2410.07095) | [code]

- [2024/10/09] **Embodied Agent Interface: Benchmarking LLMs for Embodied Decision Making** | [[paper]](https://arxiv.org/abs/2410.07166) | [code]

- [2024/10/09] **DA-Code: Agent Data Science Code Generation Benchmark for Large Language Models** | [[paper]](https://arxiv.org/abs/2410.07331) | [code]

- [2024/10/07] **Adversarial Multi-Agent Evaluation of Large Language Models through Iterative Debates** | [[paper]](https://arxiv.org/abs/2410.04663) | [code]

- [2024/10/07] **ScienceAgentBench: Toward Rigorous Assessment of Language Agents for Data-Driven Scientific Discovery** | [[paper]](https://arxiv.org/abs/2410.05080) | [code]

- [2024/09/23] **Towards a Realistic Long-Term Benchmark for Open-Web Research Agents** | [[paper]](https://arxiv.org/abs/2409.14913) | [code]

- [2024/09/17] **CORE-Bench: Fostering the Credibility of Published Research Through a Computational Reproducibility Agent Benchmark** | [[paper]](https://arxiv.org/abs/2409.11363) | [code]

- [2024/09/12] **DSBench: How Far Are Data Science Agents to Becoming Data Science Experts?** | [[paper]](https://arxiv.org/abs/2409.07703) | [code]

- [2024/09/11] **SUPER: Evaluating Agents on Setting Up and Executing Tasks from Research Repositories** | [[paper]](https://arxiv.org/abs/2409.07440) | [code]

- [2024/09/02] **ComfyBench: Benchmarking LLM-based Agents in ComfyUI for Autonomously Designing Collaborative AI Systems** | [[paper]](https://arxiv.org/abs/2409.01392) | [code]

- [2024/08/28] **BattleAgentBench: A Benchmark for Evaluating Cooperation and Competition Capabilities of Language Models in Multi-Agent Systems** | [[paper]](https://arxiv.org/abs/2408.15971) | [code]

- [2024/08/19] **BLADE: Benchmarking Language Model Agents for Data-Driven Science** | [[paper]](https://arxiv.org/abs/2408.09667) | [code]

- [2024/08/13] **What should I wear to a party in a Greek taverna? Evaluation for Conversational Agents in the Fashion Domain** | [[paper]](https://arxiv.org/abs/2408.08907) | [code]

- [2024/08/12] **VisualAgentBench: Towards Large Multimodal Models as Visual Foundation Agents** | [[paper]](https://arxiv.org/abs/2408.06327) | [code]

- [2024/07/26] **OfficeBench: Benchmarking Language Agents across Multiple Applications for Office Automation** | [[paper]](https://arxiv.org/abs/2407.19056) | [code]

- [2024/07/26] **AppWorld: A Controllable World of Apps and People for Benchmarking Interactive Coding Agents** | [[paper]](https://arxiv.org/abs/2407.18901) | [[code]](https://github.com/stonybrooknlp/appworld)

- [2024/07/25] **PersonaGym: Evaluating Persona Agents and LLMs** | [[paper]](https://arxiv.org/abs/2407.18416) | [code]

- [2024/07/23] **AMONGAGENTS: Evaluating Large Language Models in the Interactive Text-Based Social Deduction Game** | [[paper]](https://arxiv.org/abs/2407.16521) | [code]

- [2024/07/22] **AssistantBench: Can Web Agents Solve Realistic and Time-Consuming Tasks?** | [[paper]](https://arxiv.org/abs/2407.15711) | [code]

- [2024/07/12] **IDAT: A Multi-Modal Dataset and Toolkit for Building and Evaluating Interactive Task-Solving Agents** | [[paper]](https://arxiv.org/abs/2407.08898) | [code]

- [2024/07/11] **GTA: A Benchmark for General Tool Agents** | [[paper]](https://arxiv.org/abs/2407.08713) | [code]

- [2024/07/05] **Towards Automated Functional Equation Proving: A Benchmark Dataset and A Domain-Specific In-Context Agent** | [[paper]](https://arxiv.org/abs/2407.14521) | [code]

- [2024/07/01] **MIRAI: Evaluating LLM Agents for Event Forecasting** | [[paper]](https://arxiv.org/abs/2407.01231) | [code]

- [2024/07/01] **ProductAgent: Benchmarking Conversational Product Search Agent with Asking Clarification Questions** | [[paper]](https://arxiv.org/abs/2407.00942) | [code]

- [2024/07/01] **Mobile-Bench: An Evaluation Benchmark for LLM-based Mobile Agents** | [[paper]](https://arxiv.org/abs/2407.00993) | [code]

- [2024/06/28] **Designing and Evaluating Multi-Chatbot Interface for Human-AI Communication: Preliminary Findings from a Persuasion Task** | [[paper]](https://arxiv.org/abs/2406.19648) | [code]

- [2024/06/13] **ResearchArena: Benchmarking Large Language Models' Ability to Collect and Organize Information as Research Agents** | [[paper]](https://arxiv.org/abs/2406.10291) | [code]

- [2024/06/13] **StreamBench: Towards Benchmarking Continuous Improvement of Language Agents** | [[paper]](https://arxiv.org/abs/2406.08747) | [code]

- [2024/06/07] **WildBench: Benchmarking LLMs with Challenging Tasks from Real Users in the Wild** | [[paper]](https://arxiv.org/abs/2406.04770) | [[code]](https://github.com/allenai/wildbench)

- [2024/06/07] **GameBench: Evaluating Strategic Reasoning Abilities of LLM Agents** | [[paper]](https://arxiv.org/abs/2406.06613) | [[code]](https://github.com/Joshuaclymer/GameBench)

- [2024/05/28] **TimeChara: Evaluating Point-in-Time Character Hallucination of Role-Playing Large Language Models** | [[paper]](https://arxiv.org/abs/2405.18027) | [code]

- [2024/05/23] **AndroidWorld: A Dynamic Benchmarking Environment for Autonomous Agents** | [[paper]](https://arxiv.org/abs/2405.14573) | [code]

- [2024/05/13] **AgentClinic: a multimodal agent benchmark to evaluate AI in simulated clinical environments** | [[paper]](https://arxiv.org/abs/2405.07960) | [code]

- [2024/05/01] **WorkBench: a Benchmark Dataset for Agents in a Realistic Workplace Setting** | [[paper]](https://arxiv.org/abs/2405.00823) | [[code]](https://github.com/olly-styles/workbench)

- [2024/04/23] **Evaluating Tool-Augmented Agents in Remote Sensing Platforms** | [[paper]](https://arxiv.org/abs/2405.00709) | [code]

- [2024/04/22] **How Well Can LLMs Echo Us? Evaluating AI Chatbots' Role-Play Ability with ECHO** | [[paper]](https://arxiv.org/abs/2404.13957) | [code]

- [2024/04/15] **MMInA: Benchmarking Multihop Multimodal Internet Agents** | [[paper]](https://arxiv.org/abs/2404.09992) | [[code]](https://github.com/shulin16/mmina)

- [2024/04/11] **OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments** | [[paper]](https://arxiv.org/abs/2404.07972) | [code]

- [2024/04/09] **AgentQuest: A Modular Benchmark Framework to Measure Progress and Improve LLM Agents** | [[paper]](https://arxiv.org/abs/2404.06411) | [code]

- [2024/04/05] **GroundCocoa: A Benchmark for Evaluating Compositional & Conditional Reasoning in Language Models** | [[paper]](https://arxiv.org/abs/2404.04237) | [code]

- [2024/03/29] **DataAgent: Evaluating Large Language Models' Ability to Answer Zero-Shot, Natural Language Queries** | [[paper]](https://arxiv.org/abs/2404.00188) | [code]

- [2024/03/26] **Sharing the Cost of Success: A Game for Evaluating and Learning Collaborative Multi-Agent Instruction Giving and Following Policies** | [[paper]](https://arxiv.org/abs/2403.17497) | [[code]](https://github.com/clp-research/cost-sharing-reference-game)

- [2024/03/20] **SocialBench: Sociality Evaluation of Role-Playing Conversational Agents** | [[paper]](https://arxiv.org/abs/2403.13679) | [code]

- [2024/03/18] **How Far Are We on the Decision-Making of LLMs? Evaluating LLMs' Gaming Ability in Multi-Agent Environments** | [[paper]](https://arxiv.org/abs/2403.11807) | [code]

- [2024/03/18] **Tur[k]ingBench: A Challenge Benchmark for Web Agents** | [[paper]](https://arxiv.org/abs/2403.11905) | [code]

- [2024/03/13] **Evaluating Large Language Models as Generative User Simulators for Conversational Recommendation** | [[paper]](https://arxiv.org/abs/2403.09738) | [code]

- [2024/03/05] **InjecAgent: Benchmarking Indirect Prompt Injections in Tool-Integrated Large Language Model Agents** | [[paper]](https://arxiv.org/abs/2403.02691) | [code]

- [2024/02/27] **Evaluating Very Long-Term Conversational Memory of LLM Agents** | [[paper]](https://arxiv.org/abs/2402.17753) | [code]

- [2024/02/27] **Benchmarking Data Science Agents** | [[paper]](https://arxiv.org/abs/2402.17168) | [code]

- [2024/02/19] **A Critical Evaluation of AI Feedback for Aligning Large Language Models** | [[paper]](https://arxiv.org/abs/2402.12366) | [code]

- [2024/02/18] **Benchmark Self-Evolving: A Multi-Agent Framework for Dynamic LLM Evaluation** | [[paper]](https://arxiv.org/abs/2402.11443) | [[code]](https://github.com/nanshineloong/self-evolving-benchmark)

- [2024/02/18] **MatPlotAgent: Method and Evaluation for LLM-Based Agentic Scientific Data Visualization** | [[paper]](https://arxiv.org/abs/2402.11453) | [code]

- [2024/02/05] **LLM Agents in Interaction: Measuring Personality Consistency and Linguistic Alignment in Interacting Populations of Large Language Models** | [[paper]](https://arxiv.org/abs/2402.02896) | [code]

- [2024/02/02] **TravelPlanner: A Benchmark for Real-World Planning with Language Agents** | [[paper]](https://arxiv.org/abs/2402.01622) | [[code]](https://github.com/OSU-NLP-Group/TravelPlanner)

- [2024/01/02] **CharacterEval: A Chinese Benchmark for Role-Playing Conversational Agent Evaluation** | [[paper]](https://arxiv.org/abs/2401.01275) | [code]

- [2023/12/28] **How Far Are LLMs from Believable AI? A Benchmark for Evaluating the Believability of Human Behavior Simulation** | [[paper]](https://arxiv.org/abs/2312.17115) | [code]

- [2023/12/26] **RoleEval: A Bilingual Role Evaluation Benchmark for Large Language Models** | [[paper]](https://arxiv.org/abs/2312.16132) | [code]

- [2023/11/16] **ML-Bench: Evaluating Large Language Models and Agents for Machine Learning Tasks on Repository-Level Code** | [[paper]](https://arxiv.org/abs/2311.09835) | [code]

- [2023/11/15] **ToolTalk: Evaluating Tool-Usage in a Conversational Setting** | [[paper]](https://arxiv.org/abs/2311.10775) | [code]

- [2023/10/24] **FANToM: A Benchmark for Stress-testing Machine Theory of Mind in Interactions** | [[paper]](https://arxiv.org/abs/2310.15421) | [code]

- [2023/10/09] **Put Your Money Where Your Mouth Is: Evaluating Strategic Planning and Execution of LLM Agents in an Auction Arena** | [[paper]](https://arxiv.org/abs/2310.05746) | [code]

- [2023/10/02] **SmartPlay: A Benchmark for LLMs as Intelligent Agents** | [[paper]](https://arxiv.org/abs/2310.01557) | [code]

- [2023/10/01] **RoleLLM: Benchmarking, Eliciting, and Enhancing Role-Playing Abilities of Large Language Models** | [[paper]](https://arxiv.org/abs/2310.00746) | [code]

- [2023/08/11] **BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents** | [[paper]](https://arxiv.org/abs/2308.05960) | [code]

- [2023/08/07] **AgentBench: Evaluating LLMs as Agents** | [[paper]](https://arxiv.org/abs/2308.03688) | [code]

- [2023/04/27] **ChatLog: Carefully Evaluating the Evolution of ChatGPT Across Time** | [[paper]](https://arxiv.org/abs/2304.14106) | [code]

#### Environment&Platform
- [2025/05/30] **Open CaptchaWorld: A Comprehensive Web-based Platform for Testing and Benchmarking Multimodal LLM Agents** | [[paper]](https://arxiv.org/abs/2505.24878) | [code]

- [2025/05/22] **Beyond Static Testbeds: An Interaction-Centric Agent Simulation Platform for Dynamic Recommender Systems** | [[paper]](https://arxiv.org/abs/2505.16429) | [code]

- [2025/05/22] **MASLab: A Unified and Comprehensive Codebase for LLM-based Multi-Agent Systems** | [[paper]](https://arxiv.org/abs/2505.16988) | [code]

- [2025/04/15] **TextArena** | [[paper]](https://arxiv.org/abs/2504.11442) | [code]

- [2025/03/14] **Cerebrum (AIOS SDK): A Platform for Agent Development, Deployment, Distribution, and Discovery** | [[paper]](https://arxiv.org/abs/2503.11444) | [code]

- [2025/03/06] **Factorio Learning Environment** | [[paper]](https://arxiv.org/abs/2503.09617) | [code]

- [2025/03/05] **Unified Mind Model: Reimagining Autonomous Agents in the LLM Era** | [[paper]](https://arxiv.org/abs/2503.03459) | [code]

- [2025/03/04] **LiteWebAgent: The Open-Source Suite for VLM-Based Web-Agent Applications** | [[paper]](https://arxiv.org/abs/2503.02950) | [code]

- [2025/02/14] **The Ann Arbor Architecture for Agent-Oriented Programming** | [[paper]](https://arxiv.org/abs/2502.09903) | [[code]](https://github.com/aaalgo/postline_0.1)

- [2024/12/30] **Training Software Engineering Agents and Verifiers with SWE-Gym** | [[paper]](https://arxiv.org/abs/2412.21139) | [code]

- [2024/11/05] **SAUCE: Synchronous and Asynchronous User-Customizable Environment for Multi-Agent LLM Interaction** | [[paper]](https://arxiv.org/abs/2411.03397) | [code]

- [2024/08/09] **AutoGen Studio: A No-Code Developer Tool for Building and Debugging Multi-Agent Systems** | [[paper]](https://arxiv.org/abs/2408.15247) | [code]

- [2024/08/06] **OpenOmni: A Collaborative Open Source Tool for Building Future-Ready Multimodal Conversational Agents** | [[paper]](https://arxiv.org/abs/2408.03047) | [code]

- [2024/07/23] **OpenHands: An Open Platform for AI Software Developers as Generalist Agents** | [[paper]](https://arxiv.org/abs/2407.16741) | [code]

- [2024/07/14] **AutoGRAMS: Autonomous Graphical Agent Modeling Software** | [[paper]](https://arxiv.org/abs/2407.10049) | [code]

- [2024/07/12] **IDAT: A Multi-Modal Dataset and Toolkit for Building and Evaluating Interactive Task-Solving Agents** | [[paper]](https://arxiv.org/abs/2407.08898) | [code]

- [2024/07/08] **Coding Reliable LLM-based Integrated Task and Knowledge Agents with GenieWorksheets** | [[paper]](https://arxiv.org/abs/2407.05674) | [code]

- [2024/06/06] **AgentGym: Evolving Large Language Model-based Agents across Diverse Environments** | [[paper]](https://arxiv.org/abs/2406.04151) | [[code]](https://github.com/woooodyy/agentgym)

- [2024/05/23] **AndroidWorld: A Dynamic Benchmarking Environment for Autonomous Agents** | [[paper]](https://arxiv.org/abs/2405.14573) | [code]

- [2024/02/27] **OmniACT: A Dataset and Benchmark for Enabling Multimodal Generalist Autonomous Agents for Desktop and Web** | [[paper]](https://arxiv.org/abs/2402.17553) | [code]

- [2023/03/14] **CB2: Collaborative Natural Language Interaction Research Platform** |
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.08127) | [code]\n\n#### Dataset\n- [2025\u002F07\u002F10] **Toward Real-World Chinese Psychological Support Dialogues: CPsDD Dataset and a Co-Evolving Multi-Agent System** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.07509) | [code]\n\n- [2025\u002F06\u002F26] **AgentStealth: Reinforcing Large Language Model for Anonymizing User-generated Text** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.22508) | [code]\n\n- [2025\u002F06\u002F25] **MAGPIE: A dataset for Multi-AGent contextual PrIvacy Evaluation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.20737) | [code]\n\n- [2025\u002F06\u002F11] **ReasonMed: A 370K Multi-Agent Generated Dataset for Advancing Medical Reasoning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.09513) | [code]\n\n- [2025\u002F06\u002F02] **STORM-BORN: A Challenging Mathematical Derivations Dataset Curated via a Human-in-the-Loop Multi-Agent Framework** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.01531) | [code]\n\n- [2025\u002F05\u002F27] **Towards Safety Reasoning in LLMs: AI-agentic Deliberation for Policy-embedded CoT Data Creation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.21784) | [code]\n\n- [2025\u002F05\u002F19] **Scalable Video-to-Dataset Generation for Cross-Platform Mobile Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.12632) | [code]\n\n- [2025\u002F02\u002F09] **MTPChat: A Multimodal Time-Aware Persona Dataset for Conversational Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.05887) | [code]\n\n- [2025\u002F02\u002F09] **HamRaz: A Culture-Based Persian Conversation Dataset for Person-Centered Therapy Using LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.05982) | [code]\n\n- [2025\u002F01\u002F23] **Hypothesis Generation for Materials Discovery and Design Using Goal-Driven and Constraint-Guided LLM Agents** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.13299) | [code]\n\n- [2025\u002F01\u002F14] **Agent-Centric Projection of Prompting Techniques and Implications for Synthetic Training Data for Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.07815) | [code]\n\n- [2024\u002F12\u002F30] **Plancraft: an evaluation dataset for planning with LLM agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.21033) | [code]\n\n- [2024\u002F12\u002F28] **BaiJia: A Large-Scale Role-Playing Agent Corpus of Chinese Historical Characters** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.20024) | [code]\n\n- [2024\u002F12\u002F24] **Explainable Multi-Modal Data Exploration in Natural Language via LLM Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.18428) | [code]\n\n- [2024\u002F12\u002F06] **CALICO: Conversational Agent Localization via Synthetic Data Generation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.05388) | [code]\n\n- [2024\u002F11\u002F28] **MAG-V: A Multi-Agent Framework for Synthetic Data Generation and Verification** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.04494) | [code]\n\n- [2024\u002F11\u002F21] **Star-Agents: Automatic Data Optimization with LLM Agents for Instruction Tuning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.14497) | [code]\n\n- [2024\u002F10\u002F18] **Synthesizing Post-Training Data for LLMs through Multi-Agent Simulation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.14251) | [code]\n\n- [2024\u002F10\u002F10] **AgentBank: Towards Generalized LLM Agents via Fine-Tuning on 50000+ Interaction Trajectories** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.07706) | [code]\n\n- [2024\u002F09\u002F06] **Using Large Language Models to Generate Authentic Multi-agent Knowledge Work Datasets** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.04286) | [code]\n\n- [2024\u002F08\u002F22] **MDD-5k: 
A New Diagnostic Conversation Dataset for Mental Disorders Synthesized via Neuro-Symbolic LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.12142) | [code]\n\n- [2024\u002F08\u002F16] **The Fellowship of the LLMs: Multi-Agent Workflows for Synthetic Preference Optimization Dataset Generation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.08688) | [code]\n\n- [2024\u002F07\u002F12] **IDAT: A Multi-Modal Dataset and Toolkit for Building and Evaluating Interactive Task-Solving Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.08898) | [code]\n\n- [2024\u002F06\u002F16] **GUI-WORLD: A Dataset for GUI-oriented Multimodal LLM-based Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.10819) | [code]\n\n- [2024\u002F03\u002F19] **Agent-FLAN: Designing Data and Methods of Effective Agent Tuning for Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.12881) | [code]\n\n- [2024\u002F02\u002F27] **OmniACT: A Dataset and Benchmark for Enabling Multimodal Generalist Autonomous Agents for Desktop and Web** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.17553) | [code]\n\n- [2023\u002F07\u002F31] **HAGRID: A Human-LLM Collaborative Dataset for Generative Information-Seeking with Attribution** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.16883) | [code]\n\n### Others\n- [2025\u002F07\u002F04] **Agent-Based Detection and Resolution of Incompleteness and Ambiguity in Interactions with Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.03726) | [code]\n\n- [2025\u002F07\u002F02] **Data Agent: A Holistic Architecture for Orchestrating Data+AI Ecosystems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.01599) | [code]\n\n- [2025\u002F06\u002F30] **LLM Agents Are the Antidote to Walled Gardens** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.23978) | [code]\n\n- [2025\u002F06\u002F20] **UProp: Investigating 
the Uncertainty Propagation of LLMs in Multi-Step Agentic Decision-Making** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.17419) | [code]\n\n- [2025\u002F06\u002F10] **TACTIC: Translation Agents with Cognitive-Theoretic Interactive Collaboration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.08403) | [code]\n\n- [2025\u002F06\u002F06] **Future of Work with AI Agents: Auditing Automation and Augmentation Potential across the U.S. Workforce** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.06576) | [code]\n\n- [2025\u002F06\u002F02] **Enhancing Interpretable Image Classification Through LLM Agents and Conditional Concept Bottleneck Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.01334) | [code]\n\n- [2025\u002F05\u002F23] **Distilling LLM Agent into Small Models with Retrieval and Code Tools** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.17612) | [code]\n\n- [2025\u002F05\u002F23] **Runaway is Ashamed, But Helpful: On the Early-Exit Behavior of Large Language Model-based Agents in Embodied Environments** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.17616) | [code]\n\n- [2025\u002F05\u002F23] **The Real Barrier to LLM Agent Usability is Agentic ROI** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.17767) | [code]\n\n- [2025\u002F05\u002F20] **Structured Agent Distillation for Large Language Model** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.13820) | [code]\n\n- [2025\u002F05\u002F20] **Agent Context Protocols Enhance Collective Inference** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.14569) | [code]\n\n- [2025\u002F05\u002F15] **Learning Virtual Machine Scheduling in Cloud Computing through Language Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.10117) | [code]\n\n- [2025\u002F05\u002F04] **Interpretable Emergent Language Using Inter-Agent Transformers** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.02215) | 
[code]\n\n- [2025\u002F05\u002F02] **VTS-LLM: Domain-Adaptive LLM Agent for Enhancing Awareness in Vessel Traffic Services through Natural Language** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.00989) | [code]\n\n- [2025\u002F05\u002F01] **Self-Generated In-Context Examples Improve LLM Agents for Sequential Decision-Making Tasks** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.00234) | [code]\n\n- [2025\u002F04\u002F23] **OptimAI: Optimization from Natural Language Using LLM-Powered AI Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.16918) | [code]\n\n- [2025\u002F04\u002F04] **Agentic Knowledgeable Self-awareness** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.03553) | [code]\n\n- [2025\u002F04\u002F04] **Inherent and emergent liability issues in LLM-based agentic systems: a principal-agent perspective** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.03255) | [code]\n\n- [2025\u002F04\u002F02] **Review, Refine, Repeat: Understanding Iterative Decoding of AI Agents with Dynamic Evaluation and Selection** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.01931) | [code]\n\n- [2025\u002F03\u002F14] **GNNs as Predictors of Agentic Workflow Performances** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.11301) | [code]\n\n- [2025\u002F03\u002F14] **CoLLMLight: Cooperative Large Language Model Agents for Network-Wide Traffic Signal Control** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.11739) | [code]\n\n- [2025\u002F03\u002F14] **Agent-Enhanced Large Language Models for Researching Political Institutions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.13524) | [code]\n\n- [2025\u002F03\u002F14] **LLM Agents for Education: Advances and Applications** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.11733) | [code]\n\n- [2025\u002F02\u002F20] **Optimizing Model Selection for Compound AI Systems** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.14815) | [code]\n\n- [2024\u002F12\u002F03] **Large Multimodal Agents for Accurate Phishing Detection with Enhanced Token Optimization and Cost Reduction** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.02301) | [code]\n\n- [2024\u002F03\u002F18] **EnvGen: Generating and Adapting Environments via LLMs for Training Embodied Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.12014) | [code]\n\n---\n## :star: Star History\n\n[![Star History Chart](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAGI-Edgerunners_LLM-Agents-Papers_readme_5995b3a473a4.png)](https:\u002F\u002Fstar-history.com\u002F#AGI-Edgerunners\u002FLLM-Agents-Papers&Date)\n","# LLM-Agents-Papers\n## :writing_hand: 描述\n最后更新时间：2025\u002F7\u002F12\n\n本仓库列出了与基于大语言模型（LLM）的智能体（Agent）相关的论文。包括\n- [综述](#综述)\n- [增强技术](#Technique-For-Enhancement)\n  - [规划](#Planning)\n  - [记忆机制](#Memory-Mechanism)\n  - [反馈与反思](#FeedbackReflection)\n  - [RAG（检索增强生成）](#RAG)\n  - [搜索](#Search)\n- [交互](#Interaction)\n  - [角色扮演](#Role-Playing)\n  - [对话](#Conversation)\n  - [游戏博弈](#Game-Playing)\n  - [人机交互](#Human-Agent-Interaction)\n  - [工具使用](#Tool-Usage)\n  - [仿真模拟](#Simulation)\n- [应用](#Application)\n  - [数学](#Math)\n  - [化学](#Chemistry)\n  - [生物](#Biology)\n  - [物理](#Physics)\n  - [地理](#Geography)\n  - [艺术](#Art)\n  - [医学](#Medicine)\n  - [金融](#Finance)\n  - [软件工程](#Software-Engineering)\n  - [科研](#Research)\n- [自动化](#Automation)\n  - [工作流](#Workflow)\n  - [自动评估](#Automatic-Evaluation)\n- [训练](#Training)\n  - [微调](#Fine-tuning)\n  - [RL（强化学习）](#RL)\n  - [DPO（直接偏好优化）](#DPO)\n- [扩展](#Scaling)\n  - [单智能体框架](#Single-Agent-Framework)\n  - [多智能体系统](#Multi-Agent-System)\n- [稳定性](#Stability)\n  - [安全性](#Safety)\n  - [偏见](#Bias)\n  - [幻觉](#Hallucination)\n- [基础设施](#Infrastructure)\n  - [基准与评估](#BenchmarkEvaluation)\n  - [环境与平台](#EnvironmentPlatform)\n  - [数据集](#Dataset)\n- [其他](#Others)\n## :yellow_heart: 推荐\n为了更全面的阅读，我们还推荐其他论文列表：\n* 
[zjunlp\u002FLLMAgentPapers](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLLMAgentPapers)：大语言模型智能体必读论文。\n* [teacherpeterpan\u002Fself-correction-llm-papers](https:\u002F\u002Fgithub.com\u002Fteacherpeterpan\u002Fself-correction-llm-papers)：关于带有自动反馈的自我纠正大语言模型的研究论文集。\n* [Paitesanshi\u002FLLM-Agent-Survey](https:\u002F\u002Fgithub.com\u002FPaitesanshi\u002FLLM-Agent-Survey)：基于 LLM 的自主智能体综述。\n* [woooodyy\u002Fllm-agent-paper-list](https:\u002F\u002Fgithub.com\u002Fwoooodyy\u002Fllm-agent-paper-list)：基于 LLM 的智能体必读论文。\n* [git-disl\u002Fawesome-LLM-game-agent-papers](https:\u002F\u002Fgithub.com\u002Fgit-disl\u002Fawesome-LLM-game-agent-papers)：基于 LLM 的游戏智能体必读论文。\n## :newspaper: 论文\n### 综述\n- [2025\u002F06\u002F10] **Measuring Data Science Automation: A Survey of Evaluation Tools for AI Assistants and Agents** | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.08800) | [代码]\n\n- [2025\u002F06\u002F06] **Evolutionary Perspectives on the Evaluation of LLM-Based AI Agents: A Comprehensive Survey** | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.11102) | [代码]\n\n- [2025\u002F05\u002F27] **Creativity in LLM-based Multi-Agent Systems: A Survey** | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.21116) | [代码]\n\n- [2025\u002F05\u002F24] **Multi-Party Conversational Agents: A Survey** | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.18845) | [代码]\n\n- [2025\u002F05\u002F16] **A Survey on the Safety and Security Threats of Computer-Using Agents: JARVIS or Ultron?** | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.10924) | [代码]\n\n- [2025\u002F05\u002F02] **AI agents may be worth the hype but not the resources (yet): An initial exploration of machine translation quality and costs in three language pairs in the legal and news domains** | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.01560) | [代码]\n\n- [2025\u002F05\u002F01] **A Survey on Large Language Model based Human-Agent Systems** | 
[[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.00753) | [[代码]](https:\u002F\u002Fgithub.com\u002FHenryPengZou\u002FAwesome-LLM-Based-Human-Agent-System-Papers)\n\n- [2025\u002F04\u002F30] **Humanizing LLMs: A Survey of Psychological Measurements with Tools, Datasets, and Human-Agent Applications** | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.00049) | [代码]\n\n- [2025\u002F04\u002F22] **A Comprehensive Survey in LLM(-Agent) Full Stack Safety: Data, Training and Deployment** | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.15585) | [代码]\n\n- [2025\u002F04\u002F20] **Meta-Thinking in LLMs via Multi-Agent Reinforcement Learning: A Survey** | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.14520) | [代码]\n\n- [2025\u002F04\u002F14] **A Survey of Large Language Model-Powered Spatial Intelligence Across Scales: Advances in Embodied Agents, Smart Cities, and Earth Science** | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.09848) | [代码]\n\n- [2025\u002F04\u002F12] **A Survey of Frontiers in LLM Reasoning: Inference Scaling, Learning to Reason, and Agentic Systems** | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.09037) | [代码]\n\n- [2025\u002F03\u002F28] **Evaluating LLM-based Agents for Multi-Turn Conversations: A Survey** | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.22458) | [代码]\n\n- [2025\u002F03\u002F27] **Large Language Model Agent: A Survey on Methodology, Applications and Challenges** | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.21460) | [代码]\n\n- [2025\u002F03\u002F27] **A Survey on (M)LLM-Based GUI Agents** | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.13865) | [代码]\n\n- [2025\u002F03\u002F24] **A Survey of Large Language Model Agents for Question Answering** | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.19213) | [代码]\n\n- [2025\u002F03\u002F20] **Survey on Evaluation of LLM-based Agents** | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.16416) | [代码]\n\n- 
[2025\u002F03\u002F13] **LLMs Working in Harmony: A Survey on the Technological Aspects of Building Effective LLM-Based Multi Agent Systems** | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.01963) | [代码]\n\n- [2025\u002F03\u002F12] **Agentic AI for Scientific Discovery: A Survey of Progress, Challenges, and Future Directions** | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.08979) | [代码]\n\n- [2025\u002F02\u002F20] **Beyond Self-Talk: A Communication-Centric Survey of LLM-Based Multi-Agent Systems** | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.14321) | [代码]\n\n- [2025\u002F02\u002F18] **Towards a Design Guideline for RPA Evaluation: A Survey of Large Language Model-Based Role-Playing Agents** | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.13012) | [代码]\n\n- [2025\u002F02\u002F16] **A Survey of LLM-based Agents in Medicine: How far are we from Baymax?** | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.11211) | [代码]\n\n- [2025\u002F01\u002F15] **Agentic Retrieval-Augmented Generation: A Survey on Agentic RAG** | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.09136) | [代码]\n\n- [2024\u002F12\u002F23] **A Survey on LLM-based Multi-Agent System: Recent Advances and New Frontiers in Application** | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.17481) | [代码]\n\n- [2024\u002F12\u002F18] **A Survey on Large Language Model-based Agents for Statistics and Data Science** | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.14222) | [代码]\n\n- [2024\u002F12\u002F05] **A Survey on Large Language Model-Based Social Agents in Game-Theoretic Scenarios** | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.03920) | [代码]\n\n- [2024\u002F12\u002F04] **From Individual to Society: A Survey on Social Simulation Driven by Large Language Model-based Agents** | [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.03563) | [代码]\n\n- [2024\u002F11\u002F27] **Large Language Model-Brained GUI Agents: A Survey** | 
[[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.18279) | [代码]\n\n- [2024\u002F09\u002F27] **面向目标导向交互式智能体的复杂任务综述** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.18538) | [code]\n\n- [2024\u002F09\u002F13] **软件工程中的智能体：综述、全景与愿景** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.09030) | [code]\n\n- [2024\u002F09\u002F04] **涌现语言综述** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.02645) | [code]\n\n- [2024\u002F08\u002F05] **从大语言模型到软件工程中基于大语言模型的智能体：现状、挑战与未来综述** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.02479) | [code]\n\n- [2024\u002F07\u002F26] **金融交易中的大语言模型智能体：综述** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.06361) | [code]\n\n- [2024\u002F06\u002F03] **大语言模型中人格的双重叙事：角色扮演与个性化综述** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.01171) | [[code]](https:\u002F\u002Fgithub.com\u002Fmiulab\u002Fpersonallm-survey)\n\n- [2024\u002F06\u002F01] **迈向语言与多模态智能体的理性：综述** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.00252) | [code]\n\n- [2024\u002F04\u002F17] **提升 AI 智能体的社交智能：技术挑战与开放性问题** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.11023) | [code]\n\n- [2024\u002F04\u002F02] **基于大语言模型的游戏智能体综述** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.02039) | [[code]](https:\u002F\u002Fgithub.com\u002Fgit-disl\u002Fawesome-LLM-game-agent-papers)\n\n- [2024\u002F03\u002F26] **在人机交互中利用大语言模型：潜力与陷阱的批判性分析** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.00693) | [code]\n\n- [2024\u002F03\u002F07] **推进社会与健康计算科学中基于智能体模型的先进代理方法：前景广阔且值得尝试的未来方向** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.04417) | [code]\n\n- [2024\u002F02\u002F28] **大语言模型与游戏：综述与路线图** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.18659) | [code]\n\n- [2024\u002F02\u002F28] **基于大语言模型的多轮对话系统最新进展综述** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.18013) | [code]\n\n- [2024\u002F02\u002F05] **理解大语言模型智能体的规划：综述** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.02716) | [code]\n\n- [2024\u002F01\u002F01] **若大语言模型是巫师，代码即为魔杖：代码如何赋能大语言模型成为智能体综述** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.00812) | [code]\n\n- [2023\u002F12\u002F31] **对话智能体与聊天机器人中的个性、人设与画像综述** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.00609) | [code]\n\n- [2023\u002F12\u002F19] **大语言模型赋能的基于智能体的建模与仿真：综述与展望** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.11970) | [code]\n\n- [2023\u002F09\u002F14] **基于大语言模型的智能体的崛起与潜力：综述** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.07864) | [code]\n\n- [2023\u002F08\u002F22] **基于大语言模型的自主智能体综述** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.11432) | [code]\n\n- [2023\u002F06\u002F27] **以人为中心的生成式 AI 的下一步：技术视角** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.15774) | [code]\n\n---\n\n### 增强技术\n#### 规划\n- [2025\u002F06\u002F30] **面向大语言模型驱动的交互式推荐智能体的思维增强规划** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.23485) | [code]\n\n- [2025\u002F06\u002F24] **NaviAgent：面向函数调用的工具依赖图双层规划** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.19500) | [code]\n\n- [2025\u002F06\u002F10] **通过原子事实增强与前向搜索利用上下文学习改进大语言模型智能体规划** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.09171) | [code]\n\n- [2025\u002F06\u002F06] **MAPLE：面向表格推理的具有长期记忆的多智能体自适应规划** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.05813) | [code]\n\n- [2025\u002F05\u002F22] **T1：面向多轮智能体规划的工具导向对话数据集** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.16986) | [code]\n\n- [2025\u002F05\u002F02] **PIPA：用于诊断交互式规划智能体的统一评估协议** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.01592) | [code]\n\n- [2025\u002F04\u002F15] **GraphicBench：面向语言智能体平面设计的规划基准** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.11571) | [code]\n\n- [2025\u002F03\u002F12] **Plan-and-Act：改进智能体在长周期任务中的规划** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.09572) | [code]\n\n- [2025\u002F03\u002F04] 
**MPO：通过元规划优化增强大语言模型智能体** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.02682) | [code]\n\n- [2025\u002F03\u002F03] **通过联合策略梯度优化改进回顾性语言智能体** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.01490) | [code]\n\n- [2025\u002F02\u002F08] **CODESIM：通过仿真驱动规划与调试进行多智能体代码生成与问题求解** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.05664) | [code]\n\n- [2025\u002F02\u002F06] **Robotouille：面向大语言模型智能体的异步规划基准** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.05227) | [code]\n\n- [2025\u002F01\u002F27] **MADP：用于增强认知行为心理健康问答的多智能体演绎规划** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.15826) | [code]\n\n- [2025\u002F01\u002F14] **与正确的专家对话：问答多智能体系统中的路由与规划** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.07813) | [code]\n\n- [2024\u002F12\u002F30] **Plancraft：面向大语言模型智能体规划的评价数据集** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.21033) | [code]\n\n- [2024\u002F12\u002F28] **复杂表格问答中利用工具使用的高效多智能体协作在线规划** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.20145) | [code]\n\n- [2024\u002F12\u002F13] **面向大语言模型驱动对话智能体的基于脚本的对话策略规划：“AI 治疗师”的基础架构** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.15242) | [code]\n\n- [2024\u002F11\u002F13] **一步一个脚印：语言智能体是逐步规划者** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.08432) | [code]\n\n- [2024\u002F11\u002F05] **利用动态 VQA 数据集和自适应规划智能体对多模态检索增强生成进行基准测试** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.02937) | [code]\n\n- [2024\u002F10\u002F12] **CAMPHOR：面向设备端多输入规划与高阶推理的协作智能体** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.09407) | [code]\n\n- [2024\u002F10\u002F01] **Self-controller：利用多轮逐步自我意识控制大语言模型** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.00359) | [code]\n\n- [2024\u002F09\u002F30] **交互式推测规划：通过系统与用户界面的协同设计增强智能体效率** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.00079) | [code]\n\n- [2024\u002F09\u002F28] **SELP：利用大语言模型为机器人智能体生成安全高效的任务规划** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.19471) | [code]\n\n- [2024\u002F09\u002F25] **MSI-Agent: Incorporating Multi-Scale Insight into Embodied Agents for Superior Planning and Decision-Making** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.16686) | [code]\n\n- [2024\u002F08\u002F15] **VerilogCoder: Autonomous Verilog Coding Agents with Graph-based Planning and Abstract Syntax Tree (AST)-based Waveform Tracing Tool** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.08927) | [code]\n\n- [2024\u002F08\u002F12] **Towards Autonomous Agents: Adaptive-planning, Reasoning, and Acting in Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.06458) | [code]\n\n- [2024\u002F08\u002F01] **AgentGen: Enhancing Planning Abilities for Large Language Model based Agent via Environment and Task Generation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.00764) | [code]\n\n- [2024\u002F07\u002F04] **Controllable Conversations: Planning-Based Dialogue Agent with Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.03884) | [code]\n\n- [2024\u002F06\u002F17] **RePrompt: Planning by Automatic Prompt Engineering for Large Language Models Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.11132) | [code]\n\n- [2024\u002F06\u002F09] **A Review of Prominent Paradigms for LLM-Based Agents: Tool Use (Including RAG), Planning, and Feedback Learning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.05804) | [code]\n\n- [2024\u002F06\u002F06] **Tool-Planner: Task Planning with Clusters across Multiple Tools** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.03807) | [[code]](https:\u002F\u002Fgithub.com\u002FOceannTwT\u002FTool-Planner)\n\n- [2024\u002F05\u002F28] **A Human-Like Reasoning Framework for Multi-Phases Planning Task with Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.18208) | [code]\n\n- 
[2024/05/27] **REVECA: Adaptive Planning and Trajectory-based Validation in Cooperative Language Agents using Information Relevance and Relative Proximity** | [[paper]](https://arxiv.org/abs/2405.16751) | [code]

- [2024/04/21] **Socratic Planner: Inquiry-Based Zero-Shot Planning for Embodied Instruction Following** | [[paper]](https://arxiv.org/abs/2404.15190) | [code]

- [2024/04/17] **The Landscape of Emerging AI Agent Architectures for Reasoning, Planning, and Tool Calling: A Survey** | [[paper]](https://arxiv.org/abs/2404.11584) | [code]

- [2024/03/11] **Strength Lies in Differences! Improving Strategy Planning for Non-collaborative Dialogues via Diversified User Simulation** | [[paper]](https://arxiv.org/abs/2403.06769) | [code]

- [2024/03/10] **TRAD: Enhancing LLM Agents with Step-Wise Thought Retrieval and Aligned Decision** | [[paper]](https://arxiv.org/abs/2403.06221) | [code]

- [2024/03/05] **KnowAgent: Knowledge-Augmented Planning for LLM-Based Agents** | [[paper]](https://arxiv.org/abs/2403.03101) | [code]

- [2024/02/29] **PlanGPT: Enhancing Urban Planning with Tailored Language Model and Efficient Retrieval** | [[paper]](https://arxiv.org/abs/2402.19273) | [code]

- [2024/02/18] **What's the Plan? Evaluating and Developing Planning-Aware Techniques for Language Models** | [[paper]](https://arxiv.org/abs/2402.11489) | [code]

- [2024/02/18] **PreAct: Prediction Enhances Agent's Planning Ability** | [[paper]](https://arxiv.org/abs/2402.11534) | [code]

- [2024/02/16] **When is Tree Search Useful for LLM Planning? It Depends on the Discriminator** | [[paper]](https://arxiv.org/abs/2402.10890) | [[code]](https://github.com/osu-nlp-group/llm-planning-eval)

- [2024/02/15] **TDAG: A Multi-Agent Framework based on Dynamic Task Decomposition and Agent Generation** | [[paper]](https://arxiv.org/abs/2402.10178) | [code]

- [2024/02/09] **Introspective Planning: Aligning Robots' Uncertainty with Inherent Task Ambiguity** | [[paper]](https://arxiv.org/abs/2402.06529) | [code]

- [2024/02/06] **RAP: Retrieval-Augmented Planning with Contextual Memory for Multimodal LLM Agents** | [[paper]](https://arxiv.org/abs/2402.03610) | [code]

- [2024/02/02] **TravelPlanner: A Benchmark for Real-World Planning with Language Agents** | [[paper]](https://arxiv.org/abs/2402.01622) | [[code]](https://github.com/OSU-NLP-Group/TravelPlanner)

- [2024/01/10] **AutoAct: Automatic Agent Learning from Scratch for QA via Self-Planning** | [[paper]](https://arxiv.org/abs/2401.05268) | [code]

- [2023/11/19] **TPTU-v2: Boosting Task Planning and Tool Usage of Large Language Model-based Agents in Real-world Systems** | [[paper]](https://arxiv.org/abs/2311.11315) | [code]

- [2023/10/12] **Tree-Planner: Efficient Close-loop Task Planning with Large Language Models** | [[paper]](https://arxiv.org/abs/2310.08582) | [code]

- [2023/10/09] **Put Your Money Where Your Mouth Is: Evaluating Strategic Planning and Execution of LLM Agents in an Auction Arena** | [[paper]](https://arxiv.org/abs/2310.05746) | [code]

- [2023/08/07] **TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage** | [[paper]](https://arxiv.org/abs/2308.03427) | [code]

- [2023/08/01] **SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning** | [[paper]](https://arxiv.org/abs/2308.00436) | [code]

- [2023/05/26] **AdaPlanner: Adaptive Planning from Feedback with Language Models** | [[paper]](https://arxiv.org/abs/2305.16653) | [code]

- [2023/05/24] **Reasoning with Language Model is Planning with World Model** | [[paper]](https://arxiv.org/abs/2305.14992) | [code]

- [2023/05/24] **Leveraging Pre-trained Large Language Models to Construct and Utilize World Models for Model-based Task Planning** | [[paper]](https://arxiv.org/abs/2305.14909) | [[code]](https://github.com/GuanSuns/LLMs-World-Models-for-Planning)

- [2023/03/29] **Skill Reinforcement Learning and Planning for Open-World Long-Horizon Tasks** | [[paper]](https://arxiv.org/abs/2303.16563) | [code]

- [2023/02/03] **Describe, Explain, Plan and Select: Interactive Planning with Large Language Models Enables Open-World Multi-Task Agents** | [[paper]](https://arxiv.org/abs/2302.01560) | [code]

- [2022/12/08] **LLM-Planner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models** | [[paper]](https://arxiv.org/abs/2212.04088) | [code]

#### Memory Mechanism
- [2025/07/10] **MIRIX: Multi-Agent Memory System for LLM-Based Agents** | [[paper]](https://arxiv.org/abs/2507.07957) | [code]

- [2025/07/07] **Evaluating Memory in LLM Agents via Incremental Multi-Turn Interactions** | [[paper]](https://arxiv.org/abs/2507.05257) | [code]

- [2025/07/03] **MemAgent: Reshaping Long-Context LLM with Multi-Conv RL-based Memory Agent** | [[paper]](https://arxiv.org/abs/2507.02259) | [code]

- [2025/06/30] **Ella: Embodied Social Agents with Lifelong Memory** | [[paper]](https://arxiv.org/abs/2506.24019) | [code]

- [2025/06/30] **State and Memory is All You Need for Robust and Reliable AI Agents** | [[paper]](https://arxiv.org/abs/2507.00081) | [code]

- [2025/06/20] **MemBench: Towards More Comprehensive Evaluation on the Memory of LLM-based Agents** | [[paper]](https://arxiv.org/abs/2506.21605) | [code]

- [2025/06/18] **MEM1: Learning to Synergize Memory and Reasoning for Efficient Long-Horizon Agents** | [[paper]](https://arxiv.org/abs/2506.15841) | [code]

- [2025/06/17] **Cost-Efficient Serving of LLM Agents via Test-Time Plan Caching** | [[paper]](https://arxiv.org/abs/2506.14852) | [code]

- [2025/06/09] **G-Memory: Tracing Hierarchical Memory for Multi-Agent Systems** | [[paper]](https://arxiv.org/abs/2506.07398) | [code]

- [2025/06/07] **Contextual Experience Replay for Self-Improvement of Language Agents** | [[paper]](https://arxiv.org/abs/2506.06698) | [code]

- [2025/06/06] **MAPLE: Multi-Agent Adaptive Planning with Long-Term Memory for Table Reasoning** | [[paper]](https://arxiv.org/abs/2506.05813) | [code]

- [2025/05/26] **Towards Multi-Granularity Memory Association and Selection for Long-Term Conversational Agents** | [[paper]](https://arxiv.org/abs/2505.19549) | [code]

- [2025/05/26] **Task Memory Engine: Spatial Memory for Robust Multi-Step LLM Agents** | [[paper]](https://arxiv.org/abs/2505.19436) | [code]

- [2025/05/23] **Collaborative Memory: Multi-User Memory Sharing in LLM Agents with Dynamic Access Control** | [[paper]](https://arxiv.org/abs/2505.18279) | [code]

- [2025/05/22] **Embodied Agents Meet Personalization: Exploring Memory Utilization for Personalized Assistance** | [[paper]](https://arxiv.org/abs/2505.16348) | [code]

- [2025/04/30] **LLM-Empowered Embodied Agent for Memory-Augmented Task Planning in Household Robotics** | [[paper]](https://arxiv.org/abs/2504.21716) | [code]

- [2025/04/28] **Mem0: Building Production-Ready AI Agents with Scalable Long-Term Memory** | [[paper]](https://arxiv.org/abs/2504.19413) | [code]

- [2025/04/11] **Task Memory Engine (TME): A Structured Memory Framework with Graph-Aware Extensions for Multi-Step LLM Agent Tasks** | [[paper]](https://arxiv.org/abs/2504.08525) | [code]

- [2025/03/27] **MemInsight: Autonomous Memory Augmentation for LLM Agents** | [[paper]](https://arxiv.org/abs/2503.21760) | [code]

- [2025/03/25] **MARS: Memory-Enhanced Agents with Reflective Self-improvement** | [[paper]](https://arxiv.org/abs/2503.19271) | [code]

- [2025/03/11] **In Prospect and Retrospect: Reflective Memory Management for Long-term Personalized Dialogue Agents** | [[paper]](https://arxiv.org/abs/2503.08026) | [code]

- [2025/02/17] **A-MEM: Agentic Memory for LLM Agents** | [[paper]](https://arxiv.org/abs/2502.12110) | [code]

- [2025/02/08] **On Memory Construction and Retrieval for Personalized Conversational Agents** | [[paper]](https://arxiv.org/abs/2502.05589) | [code]

- [2025/01/20] **Zep: A Temporal Knowledge Graph Architecture for Agent Memory** | [[paper]](https://arxiv.org/abs/2501.13956) | [code]

- [2025/01/15] **Doc-Guided Sent2Sent++: A Sent2Sent++ Agent with Doc-Guided memory for Document-level Machine Translation** | [[paper]](https://arxiv.org/abs/2501.08523) | [code]

- [2024/12/17] **On the Structural Memory of LLM Agents** | [[paper]](https://arxiv.org/abs/2412.15266) | [code]

- [2024/12/17] **Memory-Augmented Agent Training for Business Document Understanding** | [[paper]](https://arxiv.org/abs/2412.15274) | [code]

- [2024/10/10] **DelTA: An Online Document-Level Translation Agent Based on Multi-Level Memory** | [[paper]](https://arxiv.org/abs/2410.08143) | [code]

- [2024/09/28] **Crafting Personalized Agents through Retrieval-Augmented Generation on Editable Memory Graphs** | [[paper]](https://arxiv.org/abs/2409.19401) | [code]

- [2024/09/11] **Agent Workflow Memory** | [[paper]](https://arxiv.org/abs/2409.07429) | [code]

- [2024/09/01] **Self-evolving Agents with reflective and memory-augmented abilities** | [[paper]](https://arxiv.org/abs/2409.00872) | [code]

- [2024/08/18] **HiAgent: Hierarchical Working Memory Management for Solving Long-Horizon Agent Tasks with Large Language Model** | [[paper]](https://arxiv.org/abs/2408.09559) | [code]

- [2024/08/07] **Optimus-1: Hybrid Multimodal Memory Empowered Agents Excel in Long-Horizon Tasks** | [[paper]](https://arxiv.org/abs/2408.03615) | [code]

- [2024/05/29] **Toward Conversational Agents with Context and Time Sensitive Long-term Memory** | [[paper]](https://arxiv.org/abs/2406.00057) | [[code]](https://github.com/Zyphra/TemporalMemoryDataset)

- [2024/04/15] **Memory Sharing for Large Language Model based Agents** | [[paper]](https://arxiv.org/abs/2404.09982) | [[code]](https://github.com/GHupppp/MemorySharingLLM)

- [2024/02/19] **Compress to Impress: Unleashing the Potential of Compressive Memory in Real-World Long-Term Conversations** | [[paper]](https://arxiv.org/abs/2402.11975) | [code]

- [2024/02/07] **InfLLM: Training-Free Long-Context Extrapolation for LLMs with an Efficient Context Memory** | [[paper]](https://arxiv.org/abs/2402.04617) | [code]

- [2024/02/06] **RAP: Retrieval-Augmented Planning with Contextual Memory for Multimodal LLM Agents** | [[paper]](https://arxiv.org/abs/2402.03610) | [code]

- [2024/01/05] **From LLM to Conversational Agent: A Memory Enhanced Architecture with Fine-Tuning of Large Language Models** | [[paper]](https://arxiv.org/abs/2401.02777) | [code]

- [2023/12/22] **Empowering Working Memory for Large Language Model Agents** | [[paper]](https://arxiv.org/abs/2312.17259) | [code]

- [2023/12/22] **Personalized Large Language Model Assistant with Evolving Conditional Memory** | [[paper]](https://arxiv.org/abs/2312.17257) | [code]

- [2023/11/10] **JARVIS-1: Open-World Multi-task Agents with Memory-Augmented Multimodal Language Models** | [[paper]](https://arxiv.org/abs/2311.05997) | [[code]](https://github.com/CraftJarvis/JARVIS-1)

- [2023/06/06] **ChatDB: Augmenting LLMs with Databases as Their Symbolic Memory** | [[paper]](https://arxiv.org/abs/2306.03901) | [code]

- [2023/05/23] **RET-LLM: Towards a General Read-Write Memory for Large Language Models** | [[paper]](https://arxiv.org/abs/2305.14322) | [code]

- [2023/05/17] **MemoryBank: Enhancing Large Language Models with Long-Term Memory** | [[paper]](https://arxiv.org/abs/2305.10250) | [code]

- [2023/05/02] **The Role of Summarization in Generative Agents: A Preliminary Perspective** | [[paper]](https://arxiv.org/abs/2305.01253) | [code]

- [2023/05/01] **Learning to Reason and Memorize with Self-Notes** | [[paper]](https://arxiv.org/abs/2305.00833) | [code]

- [2023/04/26] **Enhancing Large Language Model with Self-Controlled Memory Framework** | [[paper]](https://arxiv.org/abs/2304.13343) | [code]

- [2023/04/21] **Emergent and Predictable Memorization in Large Language Models** | [[paper]](https://arxiv.org/abs/2304.11158) | [code]

#### Feedback&Reflection
- [2025/07/08] **Conditional Multi-Stage Failure Recovery for Embodied Agents** | [[paper]](https://arxiv.org/abs/2507.06016) | [code]

- [2025/06/10] **Reinforce LLM Reasoning through Multi-Agent Reflection** | [[paper]](https://arxiv.org/abs/2506.08379) | [code]

- [2025/06/04] **Debate, Reflect, and Distill: Multi-Agent Feedback with Tree-Structured Preference Optimization for Efficient Language Model Enhancement** | [[paper]](https://arxiv.org/abs/2506.03541) | [code]

- [2025/06/04] **Graph Counselor: Adaptive Graph Exploration via Multi-Agent Synergy to Enhance LLM Reasoning** | [[paper]](https://arxiv.org/abs/2506.03939) | [code]

- [2025/06/03] **Mitigating Manipulation and Enhancing Persuasion: A Reflective Multi-Agent Approach for Legal Argument Generation** | [[paper]](https://arxiv.org/abs/2506.02992) | [code]

- [2025/05/22] **Optimizing LLM-Based Multi-Agent System with Textual Feedback: A Case Study on Software Development** | [[paper]](https://arxiv.org/abs/2505.16086) | [code]

- [2025/05/21] **ReflAct: World-Grounded Decision Making in LLM Agents via Goal-State Reflection** | [[paper]](https://arxiv.org/abs/2505.15182) | [code]

- [2025/05/21] **Aligning Dialogue Agents with Global Feedback via Large Language Model Reward Decomposition** | [[paper]](https://arxiv.org/abs/2505.15922) | [code]

- [2025/05/06] **FRAME: Feedback-Refined Agent Methodology for Enhancing Medical Research Insights** | [[paper]](https://arxiv.org/abs/2505.04649) | [code]

- [2025/04/26] **Stealing Creator's Workflow: A Creator-Inspired Agentic Framework with Iterative Feedback Loop for Improved Scientific Short-form Generation** | [[paper]](https://arxiv.org/abs/2504.18805) | [code]

- [2025/03/20] **The Lighthouse of Language: Enhancing LLM Agents via Critique-Guided Improvement** | [[paper]](https://arxiv.org/abs/2503.16024) | [code]

- [2025/03/11] **In Prospect and Retrospect: Reflective Memory Management for Long-term Personalized Dialogue Agents** | [[paper]](https://arxiv.org/abs/2503.08026) | [code]

- [2025/03/04] **Generator-Assistant Stepwise Rollback Framework for Large Language Model Agent** | [[paper]](https://arxiv.org/abs/2503.02519) | [code]

- [2025/03/03] **Improving Retrospective Language Agents via Joint Policy Gradient Optimization** | [[paper]](https://arxiv.org/abs/2503.01490) | [code]

- [2025/02/20] **STeCa: Step-level Trajectory Calibration for LLM Agent Learning** | [[paper]](https://arxiv.org/abs/2502.14276) | [[code]](https://github.com/WangHanLinHenry/STeCa)

- [2025/02/17] **Table-Critic: A Multi-Agent Framework for Collaborative Criticism and Refinement in Table Reasoning** | [[paper]](https://arxiv.org/abs/2502.11799) | [code]

- [2025/02/17] **A Study on Leveraging Search and Self-Feedback for Agent Reasoning** | [[paper]](https://arxiv.org/abs/2502.12094) | [code]

- [2025/02/03] **PlotGen: Multi-Agent LLM-based Scientific Data Visualization via Multimodal Feedback** | [[paper]](https://arxiv.org/abs/2502.00988) | [code]

- [2025/01/26] **Large Language Models as Theory of Mind Aware Generative Agents with Counterfactual Reflection** | [[paper]](https://arxiv.org/abs/2501.15355) | [code]

- [2025/01/23] **AgentRec: Agent Recommendation Using Sentence Embeddings Aligned to Human Feedback** | [[paper]](https://arxiv.org/abs/2501.13333) | [code]

- [2025/01/08] **InfiGUIAgent: A Multimodal Generalist GUI Agent with Native Reasoning and Reflection** | [[paper]](https://arxiv.org/abs/2501.04575) | [code]

- [2024/12/31] **Enhancing LLM Reasoning with Multi-Path Collaborative Reactive and Reflection agents** | [[paper]](https://arxiv.org/abs/2501.00430) | [code]

- [2024/12/22] **A Multi-AI Agent System for Autonomous Optimization of Agentic AI Solutions via Iterative Refinement and LLM-Driven Feedback Loops** | [[paper]](https://arxiv.org/abs/2412.17149) | [code]

- [2024/11/29] **Training Agents with Weakly Supervised Feedback from Large Language Models** | [[paper]](https://arxiv.org/abs/2411.19547) | [code]

- [2024/11/21] **Enhancing LLMs for Power System Simulations: A Feedback-driven Multi-agent Framework** | [[paper]](https://arxiv.org/abs/2411.16707) | [code]

- [2024/11/11] **Using Generative AI and Multi-Agents to Provide Automatic Feedback** | [[paper]](https://arxiv.org/abs/2411.07407) | [code]

- [2024/11/04] **Positive Experience Reflection for Agents in Interactive Text Environments** | [[paper]](https://arxiv.org/abs/2411.02223) | [code]

- [2024/10/29] **Enhancing Financial Question Answering with a Multi-Agent Reflection Framework** | [[paper]](https://arxiv.org/abs/2410.21741) | [code]

- [2024/10/28] **CRAT: A Multi-Agent Framework for Causality-Enhanced Reflective and Retrieval-Augmented Translation with Large Language Models** | [[paper]](https://arxiv.org/abs/2410.21067) | [code]

- [2024/10/25] **OpenWebVoyager: Building Multimodal Web Agents via Iterative Real-World Exploration, Feedback and Optimization** | [[paper]](https://arxiv.org/abs/2410.19609) | [code]

- [2024/10/23] **ReflecTool: Towards Reflection-Aware Tool-Augmented Clinical Agents** | [[paper]](https://arxiv.org/abs/2410.17657) | [code]

- [2024/10/20] **Training Language Models to Critique With Multi-agent Feedback** | [[paper]](https://arxiv.org/abs/2410.15287) | [code]

- [2024/10/16] **PRefLexOR: Preference-based Recursive Language Modeling for Exploratory Optimization of Reasoning and Agentic Thinking** | [[paper]](https://arxiv.org/abs/2410.12375) | [code]

- [2024/10/08] **DataEnvGym: Data Generation Agents in Teacher Environments with Student Feedback** | [[paper]](https://arxiv.org/abs/2410.06215) | [code]

- [2024/10/02] **ExACT: Teaching AI Agents to Explore with Reflective-MCTS and Exploratory Learning** | [[paper]](https://arxiv.org/abs/2410.02052) | [code]

- [2024/10/02] **RGD: Multi-LLM Based Agent Debugger via Refinement and Generation Guidance** | [[paper]](https://arxiv.org/abs/2410.01242) | [code]

- [2024/09/18] **MAgICoRe: Multi-Agent, Iterative, Coarse-to-Fine Refinement for Reasoning** | [[paper]](https://arxiv.org/abs/2409.12147) | [code]

- [2024/09/05] **E2CL: Exploration-based Error Correction Learning for Embodied Agents** | [[paper]](https://arxiv.org/abs/2409.03256) | [[code]](https://github.com/WangHanLinHenry/E2CL)

- [2024/09/01] **Self-evolving Agents with reflective and memory-augmented abilities** | [[paper]](https://arxiv.org/abs/2409.00872) | [code]

- [2024/08/30] **Tool-Assisted Agent on SQL Inspection and Refinement in Real-World Scenarios** | [[paper]](https://arxiv.org/abs/2408.16991) | [code]

- [2024/08/15] **MAG-SQL: Multi-Agent Generative Approach with Soft Schema Linking and Iterative Sub-SQL Refinement for Text-to-SQL** | [[paper]](https://arxiv.org/abs/2408.07930) | [code]

- [2024/07/25] **Recursive Introspection: Teaching Language Model Agents How to Self-Improve** | [[paper]](https://arxiv.org/abs/2407.18219) | [code]

- [2024/06/09] **A Review of Prominent Paradigms for LLM-Based Agents: Tool Use (Including RAG), Planning, and Feedback Learning** | [[paper]](https://arxiv.org/abs/2406.05804) | [code]

- [2024/06/05] **LLM-based Rewriting of Inappropriate Argumentation using Reinforcement Learning from Machine Feedback** | [[paper]](https://arxiv.org/abs/2406.03363) | [code]

- [2024/06/03] **Re-ReST: Reflection-Reinforced Self-Training for Language Agents** | [[paper]](https://arxiv.org/abs/2406.01495) | [[code]](https://github.com/PlusLabNLP/Re-ReST)

- [2024/03/18] **QueryAgent: A Reliable and Efficient Reasoning Framework with Environmental Feedback-based Self-Correction** | [[paper]](https://arxiv.org/abs/2403.11886) | [code]

- [2024/03/17] **Improving Dialogue Agents by Decomposing One Global Explicit Annotation with Local Implicit Multimodal Feedback** | [[paper]](https://arxiv.org/abs/2403.11330) | [code]

- [2024/03/08] **ChatASU: Evoking LLM's Reflexion to Truly Understand Aspect Sentiment in Dialogues** | [[paper]](https://arxiv.org/abs/2403.05326) | [code]

- [2024/03/04] **Trial and Error: Exploration-Based Trajectory Optimization for LLM Agents** | [[paper]](https://arxiv.org/abs/2403.02502) | [code]

- [2024/02/27] **Agent-Pro: Learning to Evolve via Policy-Level Reflection and Optimization** | [[paper]](https://arxiv.org/abs/2402.17574) | [code]

- [2024/02/26] **SelectIT: Selective Instruction Tuning for LLMs via Uncertainty-Aware Self-Reflection** | [[paper]](https://arxiv.org/abs/2402.16705) | [code]

- [2024/02/22] **Mirror: A Multiple-perspective Self-Reflection Method for Knowledge-rich Reasoning** | [[paper]](https://arxiv.org/abs/2402.14963) | [code]

- [2024/02/19] **A Critical Evaluation of AI Feedback for Aligning Large Language Models** | [[paper]](https://arxiv.org/abs/2402.12366) | [code]

- [2024/02/06] **AnyTool: Self-Reflective, Hierarchical Agents for Large-Scale API Calls** | [[paper]](https://arxiv.org/abs/2402.04253) | [[code]](https://github.com/dyabel/anytool)

- [2024/02/02] **StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback** | [[paper]](https://arxiv.org/abs/2402.01391) | [code]

- [2023/11/14] **The ART of LLM Refinement: Ask, Refine, and Trust** | [[paper]](https://arxiv.org/abs/2311.07961) | [code]

- [2023/10/31] **Learning From Mistakes Makes LLM Better Reasoner** | [[paper]](https://arxiv.org/abs/2310.20689) | [code]

- [2023/10/12] **A Zero-Shot Language Agent for Computer Control with Structured Reflection** | [[paper]](https://arxiv.org/abs/2310.08740) | [code]

- [2023/07/27] **PanGu-Coder2: Boosting Large Language Models for Code with Ranking Feedback** | [[paper]](https://arxiv.org/abs/2307.14936) | [code]

- [2023/05/22] **Making Language Models Better Tool Learners with Execution Feedback** | [[paper]](https://arxiv.org/abs/2305.13068) | [code]

- [2023/05/17] **Improving Language Model Negotiation with Self-Play and In-Context Learning from AI Feedback** | [[paper]](https://arxiv.org/abs/2305.10142) | [code]

- [2023/04/21] **Improving Grounded Language Understanding in a Collaborative Environment by Interacting with Agents Through Help Feedback** | [[paper]](https://arxiv.org/abs/2304.10750) | [code]

- [2023/04/11] **Teaching Large Language Models to Self-Debug** | [[paper]](https://arxiv.org/abs/2304.05128) | [code]

- [2023/03/30] **Self-Refine: Iterative Refinement with Self-Feedback** | [[paper]](https://arxiv.org/abs/2303.17651) | [code]

#### RAG
- [2025/07/09] **Multi-Agent Retrieval-Augmented Framework for Evidence-Based Counterspeech Against Health Misinformation** | [[paper]](https://arxiv.org/abs/2507.07307) | [code]

- [2025/07/04] **AI-VaxGuide: An Agentic RAG-Based LLM for Vaccination Decisions** | [[paper]](https://arxiv.org/abs/2507.03493) | [code]

- [2025/06/28] **Knowledge Augmented Finetuning Matters in both RAG and Agent Based Dialog Systems** | [[paper]](https://arxiv.org/abs/2506.22852) | [code]

- [2025/06/27] **ARAG: Agentic Retrieval Augmented Generation for Personalized Recommendation** | [[paper]](https://arxiv.org/abs/2506.21931) | [code]

- [2025/06/12] **CIIR@LiveRAG 2025: Optimizing Multi-Agent Retrieval Augmented Generation through Self-Training** | [[paper]](https://arxiv.org/abs/2506.10844) | [code]

- [2025/06/04] **Graph Counselor: Adaptive Graph Exploration via Multi-Agent Synergy to Enhance LLM Reasoning** | [[paper]](https://arxiv.org/abs/2506.03939) | [code]

- [2025/05/28] **Agent-UniRAG: A Trainable Open-Source LLM Agent Framework for Unified Retrieval-Augmented Generation Systems** | [[paper]](https://arxiv.org/abs/2505.22571) | [code]

- [2025/05/26] **MA-RAG: Multi-Agent Retrieval-Augmented Generation via Collaborative Chain-of-Thought Reasoning** | [[paper]](https://arxiv.org/abs/2505.20096) | [code]

- [2025/05/22] **O$^2$-Searcher: A Searching-based Agent Model for Open-Domain Open-Ended Question Answering** | [[paper]](https://arxiv.org/abs/2505.16582) | [code]

- [2025/05/22] **Personalizing Student-Agent Interactions Using Log-Contextualized Retrieval Augmented Generation (RAG)** | [[paper]](https://arxiv.org/abs/2505.17238) | [code]

- [2025/05/22] **Search Wisely: Mitigating Sub-optimal Agentic Searches By Reducing Uncertainty** | [[paper]](https://arxiv.org/abs/2505.17281) | [code]

- [2025/05/21] **InfoDeepSeek: Benchmarking Agentic Information Seeking for Retrieval-Augmented Generation** | [[paper]](https://arxiv.org/abs/2505.15872) | [code]

- [2025/05/13] **ALOHA: Empowering Multilingual Agent for University Orientation with Hierarchical Retrieval** | [[paper]](https://arxiv.org/abs/2505.08130) | [code]

- [2025/05/12] **Reinforced Internal-External Knowledge Synergistic Reasoning for Efficient Adaptive Search Agent** | [[paper]](https://arxiv.org/abs/2505.07596) | [code]

- [2025/04/30] **Talk Before You Retrieve: Agent-Led Discussions for Better RAG in Medical QA** | [[paper]](https://arxiv.org/abs/2504.21252) | [code]

- [2025/04/24] **A RAG-Based Multi-Agent LLM System for Natural Hazard Resilience and Adaptation** | [[paper]](https://arxiv.org/abs/2504.17200) | [code]

- [2025/04/15] **Towards Automated Safety Requirements Derivation Using Agent-based RAG** | [[paper]](https://arxiv.org/abs/2504.11243) | [code]

- [2025/04/13] **HM-RAG: Hierarchical Multi-Agent Multimodal Retrieval Augmented Generation** | [[paper]](https://arxiv.org/abs/2504.12330) | [code]

- [2025/04/11] **TP-RAG: Benchmarking Retrieval-Augmented Large Language Model Agents for Spatiotemporal-Aware Travel Planning** | [[paper]](https://arxiv.org/abs/2504.08694) | [code]

- [2025/04/10] **CollEX -- A Multimodal Agentic RAG System Enabling Interactive Exploration of Scientific Collections** | [[paper]](https://arxiv.org/abs/2504.07643) | [code]

- [2025/03/18] **Retrieval-Augmented Simulacra: Generative Agents for Up-to-date and Knowledge-Adaptive Simulations** | [[paper]](https://arxiv.org/abs/2503.14620) | [code]

- [2025/03/14] **RAG-KG-IL: A Multi-Agent Hybrid Framework for Reducing Hallucinations and Enhancing LLM Reasoning through RAG and Incremental Knowledge Graph Learning Integration** | [[paper]](https://arxiv.org/abs/2503.13514) | [code]

- [2025/03/01] **EXCLAIM: An Explainable Cross-Modal Agentic System for Misinformation Detection with Hierarchical Retrieval** | [[paper]](https://arxiv.org/abs/2504.06269) | [code]

- [2025/02/25] **ViDoRAG: Visual Document Retrieval-Augmented Generation via Dynamic Iterative Reasoning Agents** | [[paper]](https://arxiv.org/abs/2502.18017) | [code]

- [2025/02/19] **RAG-Gym: Optimizing Reasoning and Search Agents with Process Supervision** | [[paper]](https://arxiv.org/abs/2502.13957) | [code]

- [2025/02/08] **On Memory Construction and Retrieval for Personalized Conversational Agents** | [[paper]](https://arxiv.org/abs/2502.05589) | [code]

- [2025/02/06] **Enhancing Online Learning Efficiency Through Heterogeneous Resource Integration with a Multi-Agent RAG System** | [[paper]](https://arxiv.org/abs/2502.03948) | [code]

- [2025/01/25] **Improving Retrieval-Augmented Generation through Multi-Agent Reinforcement Learning** | [[paper]](https://arxiv.org/abs/2501.15228) | [code]

- [2024/12/31] **MAIN-RAG: Multi-Agent Filtering Retrieval-Augmented Generation** | [[paper]](https://arxiv.org/abs/2501.00332) | [code]

- [2024/12/24] **GeAR: Graph-enhanced Agent for Retrieval-augmented Generation** | [[paper]](https://arxiv.org/abs/2412.18431) | [code]

- [2024/12/20] **Towards Interpretable Radiology Report Generation via Concept Bottlenecks using a Multi-Agentic RAG** | [[paper]](https://arxiv.org/abs/2412.16086) | [code]

- [2024/12/16] **BioRAGent: A Retrieval-Augmented Generation System for Showcasing Generative Query Expansion and Domain-Specific Search for Scientific Q&A** | [[paper]](https://arxiv.org/abs/2412.12358) | [code]

- [2024/12/07] **SLA Management in Reconfigurable Multi-Agent RAG: A Systems Approach to Question Answering** | [[paper]](https://arxiv.org/abs/2412.06832) | [code]

- [2024/11/05] **Benchmarking Multimodal Retrieval Augmented Generation with Dynamic VQA Dataset and Self-adaptive Planning Agent** | [[paper]](https://arxiv.org/abs/2411.02937) | [code]

- [2024/10/28] **CRAT: A Multi-Agent Framework for Causality-Enhanced Reflective and Retrieval-Augmented Translation with Large Language Models** | [[paper]](https://arxiv.org/abs/2410.21067) | [code]

- [2024/10/18] **Toolshed: Scale Tool-Equipped Agents with Advanced RAG-Tool Fusion and Tool Knowledge Bases** | [[paper]](https://arxiv.org/abs/2410.14594) | [code]

- [2024/10/01] **Conversational Exploratory Search of Scholarly Publications Using Knowledge Graphs** | [[paper]](https://arxiv.org/abs/2410.00427) | [code]

- [2024/09/28] **Crafting Personalized Agents through Retrieval-Augmented Generation on Editable Memory Graphs** | [[paper]](https://arxiv.org/abs/2409.19401) | [code]

- [2024/08/18] **Agentic Retrieval-Augmented Generation for Time Series Analysis** | [[paper]](https://arxiv.org/abs/2408.14484) | [code]

- [2024/08/05] **LLM Agents Improve Semantic Code Search** | [[paper]](https://arxiv.org/abs/2408.11058) | [code]

- [2024/08/03] **MALADE: Orchestration of LLM-powered Agents with Retrieval Augmented Generation for Pharmacovigilance** | [[paper]](https://arxiv.org/abs/2408.01869) | [code]

- [2024/07/20] **Golden-Retriever: High-Fidelity Agentic Retrieval Augmented Generation for Industrial Knowledge Base** | [[paper]](https://arxiv.org/abs/2408.00798) | [code]

- [2024/06/26] **Geode: A Zero-shot Geospatial Question-Answering Agent with Explicit Reasoning and Precise Spatio-Temporal Retrieval** | [[paper]](https://arxiv.org/abs/2407.11014) | [code]

- [2024/06/19] **StackRAG Agent: Improving Developer Answers with Retrieval-Augmented Generation** | [[paper]](https://arxiv.org/abs/2406.13840) | [code]

- [2024/06/09] **A Review of Prominent Paradigms for LLM-Based Agents: Tool Use (Including RAG), Planning, and Feedback Learning** | [[paper]](https://arxiv.org/abs/2406.05804) | [code]

- [2024/03/05] **AgentsCourt: Building Judicial Decision-Making Agents with Court Debate Simulation and Legal Knowledge Augmentation** | [[paper]](https://arxiv.org/abs/2403.02959) | [code]

- [2024/02/06] **RAP: Retrieval-Augmented Planning with Contextual Memory for Multimodal LLM Agents** | [[paper]](https://arxiv.org/abs/2402.03610) | [code]

- [2023/12/27] **Automating Knowledge Acquisition for Content-Centric Cognitive Agents Using LLMs** | [[paper]](https://arxiv.org/abs/2312.16378) | [code]

#### Search
- [2025/06/09] **CheMatAgent: Enhancing LLMs for Chemistry and Materials Science through Tree-Search Based Tool Learning** | [[paper]](https://arxiv.org/abs/2506.07551) | [code]

- [2025/06/06] **AgentSwift: Efficient LLM Agent Design via Value-guided Hierarchical Search** | [[paper]](https://arxiv.org/abs/2506.06017) | [code]

- [2025/05/26] **T^2Agent A Tool-augmented Multimodal Misinformation Detection Agent with Monte Carlo Tree Search** | [[paper]](https://arxiv.org/abs/2505.19768) | [code]

- [2025/05/12] **Structural Entropy Guided Agent for Detecting and Repairing Knowledge Deficiencies in LLMs** | [[paper]](https://arxiv.org/abs/2505.07184) | [code]

- [2025/04/10] **The AI Scientist-v2: Workshop-Level Automated Scientific Discovery via Agentic Tree Search** | [[paper]](https://arxiv.org/abs/2504.08066) | [code]

- [2025/04/04] **SynWorld: Virtual Scenario Synthesis for Agentic Action Knowledge Refinement** | [[paper]](https://arxiv.org/abs/2504.03561) | [code]

- [2025/03/18] **DARS: Dynamic Action Re-Sampling to Enhance Coding Agent Performance by Adaptive Tree Traversal** | [[paper]](https://arxiv.org/abs/2503.14269) | [code]

- [2025/02/20] **I-MCTS: Enhancing Agentic AutoML via Introspective Monte Carlo Tree Search** | [[paper]](https://arxiv.org/abs/2502.14693) | [code]

- [2025/02/18] **R2-KG: General-Purpose Dual-Agent Framework for Reliable Reasoning on Knowledge Graphs** | [[paper]](https://arxiv.org/abs/2502.12767) | [code]

- [2025/02/18] **Agentic Deep Graph Reasoning Yields Self-Organizing Knowledge Networks** | [[paper]](https://arxiv.org/abs/2502.13025) | [code]

- [2025/02/17] **A Study on Leveraging Search and Self-Feedback for Agent Reasoning** | [[paper]](https://arxiv.org/abs/2502.12094) | [code]

- [2025/02/05] **SymAgent: A Neural-Symbolic Self-Learning Agent Framework for Complex Reasoning over Knowledge Graphs** | [[paper]](https://arxiv.org/abs/2502.03283) | [code]

- [2025/02/02] **Efficient Multi-Agent System Training with Data Influence-Oriented Tree Search** | [[paper]](https://arxiv.org/abs/2502.00955) | [code]

- [2025/01/31] **KBQA-o1: Agentic Knowledge Base Question Answering with Monte Carlo Tree Search** | [[paper]](https://arxiv.org/abs/2501.18922) | [code]

- [2025/01/09] **Search-o1: Agentic Search-Enhanced Large Reasoning Models** | [[paper]](https://arxiv.org/abs/2501.05366) | [code]

- [2024/12/24] **A Novel Task-Driven Method with Evolvable Interactive Agents Using Event Trees for Enhanced Emergency Decision Support** | [[paper]](https://arxiv.org/abs/2501.06193)
| [code]\n\n- [2024\u002F12\u002F22] **Multi-Agent Sampling: Scaling Inference Compute for Data Synthesis with Tree Search-Based Agentic Collaboration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.17061) | [code]\n\n- [2024\u002F12\u002F05] **Agent AI with LangGraph: A Modular Framework for Enhancing Machine Translation Using Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.03801) | [code]\n\n- [2024\u002F11\u002F07] **CodeTree: Agent-guided Tree Search for Code Generation with Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.04329) | [code]\n\n- [2024\u002F10\u002F29] **Synergizing LLM Agents and Knowledge Graph for Socioeconomic Prediction in LBSN** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.00028) | [code]\n\n- [2024\u002F10\u002F25] **AGENT-CQ: Automatic Generation and Evaluation of Clarifying Questions for Conversational Search with LLMs** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.19692) | [code]\n\n- [2024\u002F10\u002F22] **SELA: Tree-Search Enhanced LLM Agents for Automated Machine Learning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.17238) | [code]\n\n- [2024\u002F10\u002F13] **Expanding Search Space with Diverse Prompting Agents: An Efficient Sampling Approach for LLM Mathematical Reasoning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.09780) | [code]\n\n- [2024\u002F10\u002F13] **LLM-Based Multi-Agent Systems are Scalable Graph Generative Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.09824) | [code]\n\n- [2024\u002F10\u002F02] **ExACT: Teaching AI Agents to Explore with Reflective-MCTS and Exploratory Learning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.02052) | [code]\n\n- [2024\u002F09\u002F09] **SciAgents: Automating scientific discovery through multi-agent intelligent graph reasoning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.05556) | [code]\n\n- 
[2024\u002F07\u002F01] **Tree Search for Language Model Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.01476) | [code]\n\n- [2024\u002F06\u002F17] **Input Conditioned Graph Generation for Language Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.11555) | [[code]](https:\u002F\u002Fgithub.com\u002Flukasvierling\u002Fdynamicgptswarm)\n\n- [2024\u002F02\u002F17] **KG-Agent: An Efficient Autonomous Agent Framework for Complex Reasoning over Knowledge Graph** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.11163) | [code]\n\n- [2024\u002F02\u002F16] **When is Tree Search Useful for LLM Planning? It Depends on the Discriminator** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.10890) | [[code]](https:\u002F\u002Fgithub.com\u002Fosu-nlp-group\u002Fllm-planning-eval)\n\n- [2024\u002F02\u002F09] **CoSearchAgent: A Lightweight Collaborative Search Agent with Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.06360) | [code]\n\n- [2023\u002F05\u002F17] **Tree of Thoughts: Deliberate Problem Solving with Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.10601) | [code]\n\n\n\n### Interaction\n#### Role Playing\n- [2025\u002F06\u002F28] **Agent-to-Agent Theory of Mind: Testing Interlocutor Awareness among Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.22957) | [code]\n\n- [2025\u002F06\u002F24] **MAM: Modular Multi-Agent Framework for Multi-Modal Medical Diagnosis via Role-Specialized Collaboration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.19835) | [code]\n\n- [2025\u002F06\u002F20] **Language-Informed Synthesis of Rational Agent Models for Grounded Theory-of-Mind Reasoning On-The-Fly** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.16755) | [code]\n\n- [2025\u002F06\u002F06] **PersonaAgent: When Large Language Model Agents Meet Personalization at Test Time** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.06254) | [code]\n\n- [2025\u002F06\u002F02] **Thinking in Character: Advancing Role-Playing Agents with Role-Aware Reasoning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.01748) | [code]\n\n- [2025\u002F05\u002F30] **Context-Aware Sentiment Forecasting via LLM-based Multi-Perspective Role-Playing Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.24331) | [code]\n\n- [2025\u002F05\u002F29] **ChARM: Character-based Act-adaptive Reward Modeling for Advanced Role-Playing Language Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.23923) | [code]\n\n- [2025\u002F05\u002F26] **OmniCharacter: Towards Immersive Role-Playing Agents with Seamless Speech-Language Personality Interaction** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.20277) | [code]\n\n- [2025\u002F05\u002F20] **Inter(sectional) Alia(s): Ambiguity in Voice Agent Identity via Intersectional Japanese Self-Referents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.01998) | [code]\n\n- [2025\u002F04\u002F29] **BrAIcht, a theatrical agent that speaks like Bertolt Brecht&#39;s characters** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.20552) | [code]\n\n- [2025\u002F04\u002F25] **Exploring Personality-Aware Interactions in Salesperson Dialogue Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.18058) | [code]\n\n- [2025\u002F04\u002F13] **UXAgent: A System for Simulating Usability Testing of Web Design with LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.09407) | [code]\n\n- [2025\u002F04\u002F03] **LLMs as Deceptive Agents: How Role-Based Prompting Induces Semantic Ambiguity in Puzzle Tasks** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.02254) | [code]\n\n- [2025\u002F03\u002F14] **AIstorian lets AI be a historian: A KG-powered multi-agent system for accurate biography generation** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.11346) | [code]\n\n- [2025\u002F02\u002F20] **InstructAgent: Building User Controllable Recommender via LLM Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.14662) | [code]\n\n- [2025\u002F02\u002F18] **SEFL: Harnessing Large Language Model Agents to Improve Educational Feedback Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.12927) | [code]\n\n- [2025\u002F02\u002F17] **Can LLM Agents Maintain a Persona in Discourse?** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.11843) | [code]\n\n- [2025\u002F02\u002F17] **LM Agents for Coordinating Multi-User Information Gathering** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.12328) | [code]\n\n- [2025\u002F02\u002F16] **SCALE: Towards Collaborative Content Analysis in Social Science with Large Language Model Agents and Human Intervention** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.10937) | [code]\n\n- [2025\u002F02\u002F13] **Language Agents as Digital Representatives in Collective Decision-Making** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.09369) | [code]\n\n- [2025\u002F02\u002F06] **PsyPlay: Personality-Infused Role-Playing Conversational Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.03821) | [code]\n\n- [2025\u002F02\u002F03] **Plan-Then-Execute: An Empirical Study of User Trust and Team Performance When Using LLM Agents As A Daily Assistant** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.01390) | [code]\n\n- [2025\u002F01\u002F23] **AgentRec: Agent Recommendation Using Sentence Embeddings Aligned to Human Feedback** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.13333) | [code]\n\n- [2025\u002F01\u002F15] **Personality Modeling for Persuasion of Misinformation using AI Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.08985) | [code]\n\n- [2024\u002F12\u002F28] **BaiJia: A Large-Scale Role-Playing Agent 
Corpus of Chinese Historical Characters** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.20024) | [code]\n\n- [2024\u002F12\u002F22] **Modular Conversational Agents for Surveys and Interviews** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.17049) | [code]\n\n- [2024\u002F12\u002F11] **SweetieChat: A Strategy-Enhanced Role-playing Framework for Diverse Scenarios Handling Emotional Support Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.08389) | [code]\n\n- [2024\u002F12\u002F10] **My Words Imply Your Opinion: Reader Agent-Based Propagation Enhancement for Personalized Implicit Emotion Analysis** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.07367) | [code]\n\n- [2024\u002F11\u002F21] **Towards Full Delegation: Designing Ideal Agentic Behaviors for Travel Planning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.13904) | [code]\n\n- [2024\u002F11\u002F19] **Probing the Capacity of Language Model Agents to Operationalize Disparate Experiential Context Despite Distraction** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.12828) | [code]\n\n- [2024\u002F11\u002F12] **SHARP: Unlocking Interactive Hallucination via Stance Transfer in Role-Playing Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.07965) | [code]\n\n- [2024\u002F11\u002F04] **A Multi-Task Role-Playing Agent Capable of Imitating Character Linguistic Styles** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.02457) | [code]\n\n- [2024\u002F10\u002F28] **Guide-LLM: An Embodied LLM Agent and Text-Based Topological Map for Robotic Guidance of People with Visual Impairments** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.20666) | [code]\n\n- [2024\u002F10\u002F24] **Schema-Guided Culture-Aware Complex Event Simulation with Multi-Agent Role-Play** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.18935) | [code]\n\n- [2024\u002F09\u002F23] **ERABAL: Enhancing Role-Playing Agents through 
Boundary-Aware Learning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.14710) | [code]\n\n- [2024\u002F09\u002F19] **FoodPuzzle: Developing Large Language Model Agents as Flavor Scientists** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.12832) | [code]\n\n- [2024\u002F09\u002F12] **TravelAgent: An AI Assistant for Personalized Travel Planning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.08069) | [code]\n\n- [2024\u002F09\u002F11] **Using Generative Agents to Create Tip Sheets for Investigative Data Reporting** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.07286) | [code]\n\n- [2024\u002F08\u002F28] **Interactive Agents: Simulating Counselor-Client Psychological Counseling via Role-Playing LLM-to-LLM Interactions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.15787) | [code]\n\n- [2024\u002F08\u002F21] **Drama Engine: A Framework for Narrative Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.11574) | [code]\n\n- [2024\u002F06\u002F24] **The Effects of Embodiment and Personality Expression on Learning in LLM-based Educational Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.10993) | [code]\n\n- [2024\u002F06\u002F17] **HoLLMwood: Unleashing the Creativity of Large Language Models in Screenwriting via Role Playing** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.11683) | [code]\n\n- [2024\u002F06\u002F11] **Agent-SiMT: Agent-assisted Simultaneous Machine Translation with Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.06910) | [code]\n\n- [2024\u002F06\u002F09] **Peer Review as A Multi-Turn and Long-Context Dialogue with Role-Based Interactions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.05688) | [[code]](https:\u002F\u002Fgithub.com\u002Fchengtan9907\u002Freviewmt)\n\n- [2024\u002F05\u002F28] **TimeChara: Evaluating Point-in-Time Character Hallucination of Role-Playing Large Language Models** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.18027) | [code]\n\n- [2024\u002F05\u002F10] **LLM Discussion: Enhancing the Creativity of Large Language Models via Discussion Framework and Role-Play** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.06373) | [code]\n\n- [2024\u002F05\u002F08] **LLMs with Personalities in Multi-issue Negotiation Games** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.05248) | [code]\n\n- [2024\u002F05\u002F06] **Large Language Models (LLMs) as Agents for Augmented Democracy** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.03452) | [code]\n\n- [2024\u002F05\u002F02] **GAIA: A General AI Assistant for Intelligent Accelerator Operations** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.01359) | [code]\n\n- [2024\u002F05\u002F01] **&#34;Ask Me Anything&#34;: How Comcast Uses LLMs to Assist Agents in Real Time** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.00801) | [code]\n\n- [2024\u002F04\u002F26] **Large Language Model Agent as a Mechanical Designer** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.17525) | [code]\n\n- [2024\u002F04\u002F19] **Cooperative Sentiment Agents for Multimodal Sentiment Analysis** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.12642) | [[code]](https:\u002F\u002Fgithub.com\u002Fsmwanghhh\u002Fco-sa)\n\n- [2024\u002F03\u002F31] **DiffAgent: Fast and Accurate Text-to-Image API Selection with Large Language Model** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.01342) | [[code]](https:\u002F\u002Fgithub.com\u002FOpenGVLab\u002FDiffAgent)\n\n- [2024\u002F03\u002F23] **EduAgent: Generative Student Agents in Learning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.07963) | [code]\n\n- [2024\u002F03\u002F19] **Characteristic AI Agents via Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.12368) | [code]\n\n- [2024\u002F03\u002F15] **VideoAgent: Long-form Video 
Understanding with Large Language Model as Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.10517) | [code]\n\n- [2024\u002F03\u002F13] **Evaluating Large Language Models as Generative User Simulators for Conversational Recommendation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.09738) | [code]\n\n- [2024\u002F02\u002F29] **On the Decision-Making Abilities in Role-Playing using Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.18807) | [code]\n\n- [2024\u002F02\u002F28] **Prospect Personalized Recommendation on Large Language Model-based Agent Platform** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.18240) | [code]\n\n- [2024\u002F02\u002F26] **Language Agents as Optimizable Graphs** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.16823) | [[code]](https:\u002F\u002Fgithub.com\u002Fmetauto-ai\u002Fgptswarm)\n\n- [2024\u002F02\u002F22] **Triad: A Framework Leveraging a Multi-Role LLM-based Agent to Solve Knowledge Base Question Answering** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.14320) | [code]\n\n- [2024\u002F02\u002F22] **Large Language Models as Urban Residents: An LLM Agent Framework for Personal Mobility Generation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.14744) | [code]\n\n- [2024\u002F02\u002F21] **Neeko: Leveraging Dynamic LoRA for Efficient Multi-Character Role-Playing Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.13717) | [code]\n\n- [2024\u002F02\u002F19] **Stick to your Role! 
Stability of Personal Values Expressed in Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.14846) | [code]\n\n- [2024\u002F02\u002F18] **Modelling Political Coalition Negotiations Using LLM-based Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.11712) | [code]\n\n- [2024\u002F02\u002F06] **Professional Agents -- Evolving Large Language Models into Autonomous Experts with Human-Level Competencies** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.03628) | [code]\n\n- [2024\u002F02\u002F06] **Can Generative Agents Predict Emotion?** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.04232) | [code]\n\n- [2024\u002F02\u002F05] **GUARD: Role-playing to Generate Natural-language Jailbreakings to Test Guideline Adherence of Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.03299) | [code]\n\n- [2024\u002F01\u002F31] **LLMs Simulate Big Five Personality Traits: Further Evidence** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.01765) | [code]\n\n- [2023\u002F12\u002F22] **Personalized Large Language Model Assistant with Evolving Conditional Memory** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.17257) | [code]\n\n- [2023\u002F12\u002F21] **ChatGPT as a commenter to the news: can LLMs generate human-like opinions?** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.13961) | [code]\n\n- [2023\u002F12\u002F20] **Machine Mindset: An MBTI Exploration of Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.12999) | [code]\n\n- [2023\u002F12\u002F19] **Can ChatGPT be Your Personal Medical Assistant?** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.12006) | [code]\n\n- [2023\u002F10\u002F13] **AgentCF: Collaborative Learning with Autonomous Language Agents for Recommender Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.09233) | [code]\n\n- [2023\u002F10\u002F01] **RoleLLM: 
Benchmarking, Eliciting, and Enhancing Role-Playing Abilities of Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.00746) | [code]\n\n- [2023\u002F09\u002F02] **ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.00986) | [code]\n\n- [2023\u002F08\u002F22] **Towards an On-device Agent for Text Rewriting** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.11807) | [code]\n\n- [2023\u002F08\u002F10] **LLM As DBA** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.05481) | [code]\n\n- [2023\u002F08\u002F03] **InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.01552) | [code]\n\n- [2023\u002F07\u002F11] **Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.05300) | [code]\n\n- [2023\u002F07\u002F05] **Building Cooperative Embodied Agents Modularly with Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.02485) | [code]\n\n- [2023\u002F05\u002F25] **Role-Play with Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.16367) | [code]\n\n- [2023\u002F05\u002F09] **TidyBot: Personalized Robot Assistance with Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.05658) | [code]\n\n#### Conversation\n- [2025\u002F06\u002F28] **Knowledge Augmented Finetuning Matters in both RAG and Agent Based Dialog Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.22852) | [code]\n\n- [2025\u002F06\u002F24] **Augmenting Multi-Agent Communication with State Delta Trajectory** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.19209) | [code]\n\n- [2025\u002F06\u002F17] **From What to Respond to When to 
Respond: Timely Response Generation for Open-domain Dialogue Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.14285) | [code]\n\n- [2025\u002F06\u002F17] **Expectation Confirmation Preference Optimization for Multi-Turn Conversational Recommendation Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.14302) | [code]\n\n- [2025\u002F06\u002F13] **The Behavior Gap: Evaluating Zero-shot LLM Agents in Complex Task-Oriented Dialogs** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.12266) | [code]\n\n- [2025\u002F06\u002F11] **Chat-of-Thought: Collaborative Multi-Agent System for Generating Domain Specific Information** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.10086) | [code]\n\n- [2025\u002F06\u002F09] **$\\tau^2$-Bench: Evaluating Conversational Agents in a Dual-Control Environment** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.07982) | [code]\n\n- [2025\u002F06\u002F04] **AI Agents for Conversational Patient Triage: Preliminary Simulation-Based Evaluation with Real-World EHR Data** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.04032) | [code]\n\n- [2025\u002F06\u002F04] **CLAIM: An Intent-Driven Multi-Agent Framework for Analyzing Manipulation in Courtroom Dialogues** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.04131) | [code]\n\n- [2025\u002F05\u002F29] **A Practical Approach for Building Production-Grade Conversational Agents with Workflow Graphs** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.23006) | [code]\n\n- [2025\u002F05\u002F28] **ChatCFD: an End-to-End CFD Agent with Domain-specific Structured Thinking** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.02019) | [code]\n\n- [2025\u002F05\u002F26] **Towards Multi-Granularity Memory Association and Selection for Long-Term Conversational Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.19549) | [code]\n\n- [2025\u002F05\u002F24] **Multi-Party Conversational Agents: A 
Survey** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.18845) | [code]\n\n- [2025\u002F05\u002F21] **Aligning Dialogue Agents with Global Feedback via Large Language Model Reward Decomposition** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.15922) | [code]\n\n- [2025\u002F04\u002F29] **BrAIcht, a theatrical agent that speaks like Bertolt Brecht&#39;s characters** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.20552) | [code]\n\n- [2025\u002F04\u002F26] **MATCHA: Can Multi-Agent Collaboration Build a Trustworthy Conversational Recommender?** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.20094) | [code]\n\n- [2025\u002F04\u002F21] **EducationQ: Evaluating LLMs&#39; Teaching Capabilities Through Multi-Agent Dialogue Framework** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.14928) | [code]\n\n- [2025\u002F04\u002F20] **DialogueAgents: A Hybrid Agent-Based Speech Synthesis Framework for Multi-Party Dialogue** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.14482) | [code]\n\n- [2025\u002F04\u002F12] **A Multi-view Discourse Framework for Integrating Semantic and Syntactic Features in Dialog Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.09073) | [code]\n\n- [2025\u002F04\u002F07] **Bridging Industrial Expertise and XR with LLM-Powered Conversational Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.05527) | [code]\n\n- [2025\u002F04\u002F07] **A Desideratum for Conversational Agents: Capabilities, Challenges, and Future Directions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.16939) | [code]\n\n- [2025\u002F03\u002F28] **Evaluating LLM-based Agents for Multi-Turn Conversations: A Survey** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.22458) | [code]\n\n- [2025\u002F03\u002F27] **EQ-Negotiator: An Emotion-Reasoning LLM Agent in Credit Dialogues** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.21080) | [code]\n\n- 
[2025\u002F03\u002F26] **3MDBench: Medical Multimodal Multi-agent Dialogue Benchmark** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.13861) | [code]\n\n- [2025\u002F03\u002F25] **CoMAC: Conversational Agent for Multi-Source Auxiliary Context with Sparse and Symmetric Latent Interactions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.19274) | [code]\n\n- [2025\u002F03\u002F25] **Substance over Style: Evaluating Proactive Conversational Coaching Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.19328) | [code]\n\n- [2025\u002F03\u002F18] **Personalized Attacks of Social Engineering in Multi-turn Conversations -- LLM Agents for Simulation and Detection** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.15552) | [code]\n\n- [2025\u002F03\u002F11] **In Prospect and Retrospect: Reflective Memory Management for Long-term Personalized Dialogue Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.08026) | [code]\n\n- [2025\u002F03\u002F05] **Cite Before You Speak: Enhancing Context-Response Grounding in E-commerce Conversational LLM-Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.04830) | [code]\n\n- [2025\u002F02\u002F24] **Turning Conversations into Workflows: A Framework to Extract and Evaluate Dialog Workflows for Service AI Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.17321) | [code]\n\n- [2025\u002F02\u002F20] **Enhancing Conversational Agents with Theory of Mind: Aligning Beliefs, Desires, and Intentions for Human-Like Interaction** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.14171) | [code]\n\n- [2025\u002F02\u002F18] **One Size doesn&#39;t Fit All: A Personalized Conversational Tutoring Agent for Mathematics Instruction** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.12633) | [code]\n\n- [2025\u002F02\u002F18] **Training Turn-by-Turn Verifiers for Dialogue Tutoring Agents: The Curious Case of LLMs as Your Coding Tutors** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.13311) | [code]\n\n- [2025\u002F02\u002F18] **You need to MIMIC to get FAME: Solving Meeting Transcript Scarcity with a Multi-Agent Conversations** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.13001) | [code]\n\n- [2025\u002F02\u002F17] **InfoQuest: Evaluating Multi-Turn Dialogue Agents for Open-Ended Conversations with Hidden Context** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.12257) | [code]\n\n- [2025\u002F02\u002F13] **Reliable Conversational Agents under ASP Control that Understand Natural Language** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.09237) | [code]\n\n- [2025\u002F02\u002F12] **Can a Single Model Master Both Multi-turn Conversations and Tool Use? CoALM: A Unified Conversational Agentic Language Model** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.08820) | [code]\n\n- [2025\u002F02\u002F09] **MTPChat: A Multimodal Time-Aware Persona Dataset for Conversational Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.05887) | [code]\n\n- [2025\u002F02\u002F09] **HamRaz: A Culture-Based Persian Conversation Dataset for Person-Centered Therapy Using LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.05982) | [code]\n\n- [2025\u002F02\u002F08] **On Memory Construction and Retrieval for Personalized Conversational Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.05589) | [code]\n\n- [2025\u002F02\u002F06] **PsyPlay: Personality-Infused Role-Playing Conversational Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.03821) | [code]\n\n- [2025\u002F01\u002F24] **Unmasking Conversational Bias in AI Multiagent Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.14844) | [code]\n\n- [2025\u002F01\u002F23] **Communicating Activations Between Language Model Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.14082) | [code]\n\n- [2025\u002F01\u002F19] 
**IntellAgent: A Multi-Agent Framework for Evaluating Conversational AI Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.11067) | [code]\n\n- [2025\u002F01\u002F14] **Developing Enhanced Conversational Agents for Social Virtual Worlds** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.16341) | [code]\n\n- [2025\u002F01\u002F03] **PSYCHE: A Multi-faceted Patient Simulation Framework for Evaluation of Psychiatric Assessment Conversational Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.01594) | [code]\n\n- [2024\u002F12\u002F30] **Exploring and Controlling Diversity in LLM-Agent Conversation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.21102) | [code]\n\n- [2024\u002F12\u002F24] **Extracting triples from dialogues for conversational social agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.18364) | [code]\n\n- [2024\u002F12\u002F22] **Modular Conversational Agents for Surveys and Interviews** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.17049) | [code]\n\n- [2024\u002F12\u002F21] **InfoTech Assistant : A Multimodal Conversational Agent for InfoTechnology Web Portal Queries** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.16412) | [code]\n\n- [2024\u002F12\u002F13] **Script-Based Dialog Policy Planning for LLM-Powered Conversational Agents: A Basic Architecture for an &#34;AI Therapist&#34;** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.15242) | [code]\n\n- [2024\u002F12\u002F06] **CALICO: Conversational Agent Localization via Synthetic Data Generation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.05388) | [code]\n\n- [2024\u002F12\u002F05] **Educational-Psychological Dialogue Robot Based on Multi-Agent Collaboration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.03847) | [code]\n\n- [2024\u002F12\u002F01] **Examining Identity Drift in Conversations of LLM Agents** | 
[[paper]](https://arxiv.org/abs/2412.00804) | [code]

- [2024/11/07] **Thanos: Enhancing Conversational Agents with Skill-of-Mind-Infused Large Language Model** | [[paper]](https://arxiv.org/abs/2411.04496) | [code]

- [2024/11/07] **Interactive Dialogue Agents via Reinforcement Learning on Hindsight Regenerations** | [[paper]](https://arxiv.org/abs/2411.05194) | [code]

- [2024/11/06] **MRJ-Agent: An Effective Jailbreak Agent for Multi-Round Dialogue** | [[paper]](https://arxiv.org/abs/2411.03814) | [code]

- [2024/11/01] **DARD: A Multi-Agent Approach for Task-Oriented Dialog Systems** | [[paper]](https://arxiv.org/abs/2411.00427) | [code]

- [2024/11/01] **ReSpAct: Harmonizing Reasoning, Speaking, and Acting Towards Building Large Language Model-Based Conversational AI Agents** | [[paper]](https://arxiv.org/abs/2411.00927) | [code]

- [2024/10/29] **MARCO: Multi-Agent Real-time Chat Orchestration** | [[paper]](https://arxiv.org/abs/2410.21784) | [code]

- [2024/10/25] **AGENT-CQ: Automatic Generation and Evaluation of Clarifying Questions for Conversational Search with LLMs** | [[paper]](https://arxiv.org/abs/2410.19692) | [code]

- [2024/10/18] **Coherence-Driven Multimodal Safety Dialogue with Active Learning for Embodied Agents** | [[paper]](https://arxiv.org/abs/2410.14141) | [code]

- [2024/10/15] **HR-Agent: A Task-Oriented Dialogue (TOD) LLM Agent Tailored for HR Applications** | [[paper]](https://arxiv.org/abs/2410.11239) | [code]

- [2024/10/10] **Rewriting Conversational Utterances with Instructed Large Language Models** | [[paper]](https://arxiv.org/abs/2410.07797) | [code]

- [2024/09/24] **Automated test generation to evaluate tool-augmented LLMs as conversational AI agents** | [[paper]](https://arxiv.org/abs/2409.15934) | [code]

- [2024/09/23] **Beyond Turn-Based Interfaces: Synchronous LLMs as Full-Duplex Dialogue Agents** | [[paper]](https://arxiv.org/abs/2409.15594) | [code]

- [2024/09/13] **AI-LieDar: Examine the Trade-off Between Utility and Truthfulness in LLM Agents** | [[paper]](https://arxiv.org/abs/2409.09013) | [code]

- [2024/09/06] **Sparse Rewards Can Self-Train Dialogue Agents** | [[paper]](https://arxiv.org/abs/2409.04617) | [code]

- [2024/09/02] **Co-Learning: Code Learning for Multi-Agent Reinforcement Collaborative Framework with Conversational Natural Language Interfaces** | [[paper]](https://arxiv.org/abs/2409.00985) | [code]

- [2024/08/27] **Into the Unknown Unknowns: Engaged Human Learning through Participation in Language Model Agent Conversations** | [[paper]](https://arxiv.org/abs/2408.15232) | [code]

- [2024/08/22] **MDD-5k: A New Diagnostic Conversation Dataset for Mental Disorders Synthesized via Neuro-Symbolic LLM Agents** | [[paper]](https://arxiv.org/abs/2408.12142) | [code]

- [2024/08/13] **What should I wear to a party in a Greek taverna? Evaluation for Conversational Agents in the Fashion Domain** | [[paper]](https://arxiv.org/abs/2408.08907) | [code]

- [2024/08/06] **OpenOmni: A Collaborative Open Source Tool for Building Future-Ready Multimodal Conversational Agents** | [[paper]](https://arxiv.org/abs/2408.03047) | [code]

- [2024/08/03] **Self-Emotion Blended Dialogue Generation in Social Simulation Agents** | [[paper]](https://arxiv.org/abs/2408.01633) | [code]

- [2024/07/31] **Towards Achieving Human Parity on End-to-end Simultaneous Speech Translation via LLM Agent** | [[paper]](https://arxiv.org/abs/2407.21646) | [code]

- [2024/07/13] **Cohesive Conversations: Enhancing Authenticity in Multi-Agent Simulated Dialogues** | [[paper]](https://arxiv.org/abs/2407.09897) | [code]

- [2024/07/04] **Controllable Conversations: Planning-Based Dialogue Agent with Large Language Models** | [[paper]](https://arxiv.org/abs/2407.03884) | [code]

- [2024/07/01] **Empathic Grounding: Explorations using Multimodal Interaction and Large Language Models with Conversational Agents** | [[paper]](https://arxiv.org/abs/2407.01824) | [code]

- [2024/06/30] **CAMON: Cooperative Agents for Multi-Object Navigation with LLM-based Conversations** | [[paper]](https://arxiv.org/abs/2407.00632) | [code]

- [2024/06/09] **Peer Review as A Multi-Turn and Long-Context Dialogue with Role-Based Interactions** | [[paper]](https://arxiv.org/abs/2406.05688) | [[code]](https://github.com/chengtan9907/reviewmt)

- [2024/05/29] **Toward Conversational Agents with Context and Time Sensitive Long-term Memory** | [[paper]](https://arxiv.org/abs/2406.00057) | [[code]](https://github.com/Zyphra/TemporalMemoryDataset)

- [2024/05/16] **Speaker Verification in Agent-Generated Conversations** | [[paper]](https://arxiv.org/abs/2405.10150) | [code]

- [2024/04/19] **Towards Human-centered Proactive Conversational Agents** | [[paper]](https://arxiv.org/abs/2404.12670) | [code]

- [2024/04/10] **Apollonion: Profile-centric Dialog Agent** | [[paper]](https://arxiv.org/abs/2404.08692) | [code]

- [2024/03/17] **Improving Dialogue Agents by Decomposing One Global Explicit Annotation with Local Implicit Multimodal Feedback** | [[paper]](https://arxiv.org/abs/2403.11330) | [code]

- [2024/03/08] **ChatASU: Evoking LLM's Reflexion to Truly Understand Aspect Sentiment in Dialogues** | [[paper]](https://arxiv.org/abs/2403.05326) | [code]

- [2024/02/25] **Understanding Public Perceptions of AI Conversational Agents: A Cross-Cultural Analysis** | [[paper]](https://arxiv.org/abs/2402.16039) | [code]

- [2024/02/23] **On the Multi-turn Instruction Following for Conversational Web Agents** | [[paper]](https://arxiv.org/abs/2402.15057) | [code]

- [2024/02/20] **CHATATC: Large Language Model-Driven Conversational Agents for Supporting Strategic Air Traffic Flow Management** | [[paper]](https://arxiv.org/abs/2402.14850) | [code]

- [2024/01/29] **Assistive Large Language Model Agents for Socially-Aware Negotiation Dialogues** | [[paper]](https://arxiv.org/abs/2402.01737) | [code]

- [2024/01/10] **Bootstrapping LLM-based Task-Oriented Dialogue Agents via Self-Talk** | [[paper]](https://arxiv.org/abs/2401.05033) | [code]

- [2024/01/02] **CharacterEval: A Chinese Benchmark for Role-Playing Conversational Agent Evaluation** | [[paper]](https://arxiv.org/abs/2401.01275) | [code]

- [2023/12/21] **Team Flow at DRC2023: Building Common Ground and Text-based Turn-taking in a Travel Agent Spoken Dialogue System** | [[paper]](https://arxiv.org/abs/2312.13816) | [code]

- [2023/11/15] **ToolTalk: Evaluating Tool-Usage in a Conversational Setting** | [[paper]](https://arxiv.org/abs/2311.10775) | [code]

- [2023/10/01] **Adapting LLM Agents Through Communication** | [[paper]](https://arxiv.org/abs/2310.01444v2) | [code]

- [2023/06/28] **Inferring the Goals of Communicating Agents from Actions and Instructions** | [[paper]](https://arxiv.org/abs/2306.16207) | [code]

- [2023/04/26] **Multi-Party Chat: Conversational Agents in Group Settings with Humans and Models** | [[paper]](https://arxiv.org/abs/2304.13835) | [code]

- [2023/03/31] **CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society** | [[paper]](https://arxiv.org/abs/2303.17760) | [[code]](https://github.com/camel-ai/camel)

#### Game Playing
- [2025/06/30] **SPIRAL: Self-Play on Zero-Sum Games Incentivizes Reasoning via Multi-Agent Multi-Turn Reinforcement Learning** | [[paper]](https://arxiv.org/abs/2506.24119) | [code]

- [2025/06/05] **Time to Talk: LLM Agents for Asynchronous Group Communication in Mafia Games** | [[paper]](https://arxiv.org/abs/2506.05309) | [code]

- [2025/06/04] **TextAtari: 100K Frames Game Playing with Language Agents** | [[paper]](https://arxiv.org/abs/2506.04098) | [code]

- [2025/05/29] **The Automated but Risky Game: Modeling Agent-to-Agent Negotiations and Transactions in Consumer Markets** | [[paper]](https://arxiv.org/abs/2506.00073) | [code]

- [2025/05/28] **First Steps Towards Overhearing LLM Agents: A Case Study With Dungeons & Dragons Gameplay** | [[paper]](https://arxiv.org/abs/2505.22809) | [code]

- [2025/05/25] **When Ethics and Payoffs Diverge: LLM Agents in Morally Charged Social Dilemmas** | [[paper]](https://arxiv.org/abs/2505.19212) | [code]

- [2025/05/23] **CoMet: Metaphor-Driven Covert Communication for Multi-Agent Language Games** | [[paper]](https://arxiv.org/abs/2505.18218) | [code]

- [2025/05/20] **BAR: A Backward Reasoning based Agent for Complex Minecraft Tasks** | [[paper]](https://arxiv.org/abs/2505.14079) | [code]

- [2025/04/23] **Monte Carlo Planning with Large Language Model for Text-Based Game Agents** | [[paper]](https://arxiv.org/abs/2504.16855) | [code]

- [2025/04/15] **TextArena** | [[paper]](https://arxiv.org/abs/2504.11442) | [code]

- [2025/04/09] **Persona Dynamics: Unveiling the Impact of Personality Traits on Agents in Text-Based Games** | [[paper]](https://arxiv.org/abs/2504.06868) | [code]

- [2025/03/08] **DSGBench: A Diverse Strategic Game Benchmark for Evaluating LLM-based Agents in Complex Decision-Making Environments** | [[paper]](https://arxiv.org/abs/2503.06047) | [code]

- [2025/03/06] **VQEL: Enabling Self-Developed Symbolic Language in Agents through Vector Quantization in Emergent Language Games** | [[paper]](https://arxiv.org/abs/2503.04940) | [code]

- [2025/03/06] **Factorio Learning Environment** | [[paper]](https://arxiv.org/abs/2503.09617) | [code]

- [2025/02/05] **Multimodal Transformer Models for Turn-taking Prediction: Effects on Conversational Dynamics of Human-Agent Interaction during Cooperative Gameplay** | [[paper]](https://arxiv.org/abs/2503.16432) | [code]

- [2025/02/01] **Who's the MVP? A Game-Theoretic Evaluation Benchmark for Modular Attribution in LLM Agents** | [[paper]](https://arxiv.org/abs/2502.00510) | [code]

- [2025/01/24] **Multi-agent KTO: Reinforcing Strategic Interactions of Large Language Model in Language Game** | [[paper]](https://arxiv.org/abs/2501.14225) | [code]

- [2024/12/06] **TeamCraft: A Benchmark for Multi-Modal Multi-Agent Systems in Minecraft** | [[paper]](https://arxiv.org/abs/2412.05255) | [code]

- [2024/11/08] **Game-theoretic LLM: Agent Workflow for Negotiation Games** | [[paper]](https://arxiv.org/abs/2411.05990) | [code]

- [2024/10/28] **Can Machines Think Like Humans? A Behavioral Evaluation of LLM-Agents in Dictator Games** | [[paper]](https://arxiv.org/abs/2410.21359) | [code]

- [2024/09/03] **An Implementation of Werewolf Agent That does not Truly Trust LLMs** | [[paper]](https://arxiv.org/abs/2409.01575) | [code]

- [2024/08/05] **Evaluating and Enhancing LLMs Agent based on Theory of Mind in Guandan: A Multi-Player Cooperative Game under Imperfect Information** | [[paper]](https://arxiv.org/abs/2408.02559) | [code]

- [2024/07/23] **AMONGAGENTS: Evaluating Large Language Models in the Interactive Text-Based Social Deduction Game** | [[paper]](https://arxiv.org/abs/2407.16521) | [code]

- [2024/07/17] **A LLM Benchmark based on the Minecraft Builder Dialog Agent Task** | [[paper]](https://arxiv.org/abs/2407.12734) | [code]

- [2024/06/27] **OmniJARVIS: Unified Vision-Language-Action Tokenization Enables Open-World Instruction Following Agents** | [[paper]](https://arxiv.org/abs/2407.00114) | [code]

- [2024/06/07] **GameBench: Evaluating Strategic Reasoning Abilities of LLM Agents** | [[paper]](https://arxiv.org/abs/2406.06613) | [[code]](https://github.com/Joshuaclymer/GameBench)

- [2024/06/05] **The Good, the Bad, and the Hulk-like GPT: Analyzing Emotional Decisions of Large Language Models in Cooperation and Bargaining Games** | [[paper]](https://arxiv.org/abs/2406.03299) | [code]

- [2024/05/24] **Hacc-Man: An Arcade Game for Jailbreaking LLMs** | [[paper]](https://arxiv.org/abs/2405.15902) | [code]

- [2024/05/23] **Human-Agent Cooperation in Games under Incomplete Information through Natural Language Communication** | [[paper]](https://arxiv.org/abs/2405.14173) | [code]

- [2024/05/08] **LLMs with Personalities in Multi-issue Negotiation Games** | [[paper]](https://arxiv.org/abs/2405.05248) | [code]

- [2024/04/30] **PANGeA: Procedural Artificial Narrative using Generative AI for Turn-Based Video Games** | [[paper]](https://arxiv.org/abs/2404.19721) | [code]

- [2024/04/03] **Learn to Disguise: Avoid Refusal Responses in LLM's Defense via a Multi-agent Attacker-Disguiser Game** | [[paper]](https://arxiv.org/abs/2404.02532) | [code]

- [2024/03/28] **MineLand: Simulating Large-Scale Multi-Agent Interactions with Limited Multimodal Senses and Physical Needs** | [[paper]](https://arxiv.org/abs/2403.19267) | [[code]](https://github.com/cocacola-lab/mineland)

- [2024/03/18] **How Far Are We on the Decision-Making of LLMs? Evaluating LLMs' Gaming Ability in Multi-Agent Environments** | [[paper]](https://arxiv.org/abs/2403.11807) | [code]

- [2024/02/19] **PsychoGAT: A Novel Psychological Measurement Paradigm through Interactive Fiction Games with LLM Agents** | [[paper]](https://arxiv.org/abs/2402.12326) | [code]

- [2024/02/13] **Large Language Models as Minecraft Agents** | [[paper]](https://arxiv.org/abs/2402.08392) | [code]

- [2024/02/12] **Large Language Models as Agents in Two-Player Games** | [[paper]](https://arxiv.org/abs/2402.08078) | [code]

- [2024/02/04] **Enhance Reasoning for Large Language Models in the Game Werewolf** | [[paper]](https://arxiv.org/abs/2402.02330) | [code]

- [2024/02/02] **PokeLLMon: A Human-Parity Agent for Pokemon Battles with Large Language Models** | [[paper]](https://arxiv.org/abs/2402.01118) | [code]

- [2023/12/29] **Cooperation on the Fly: Exploring Language Agents for Ad Hoc Teamwork in the Avalon Game** | [[paper]](https://arxiv.org/abs/2312.17515) | [code]

- [2023/12/01] **Deciphering Digital Detectives: Understanding LLM Behaviors and Capabilities in Multi-Agent Mystery Games** | [[paper]](https://arxiv.org/abs/2312.00746) | [code]

- [2023/10/31] **Leveraging Word Guessing Games to Assess the Intelligence of Large Language Models** | [[paper]](https://arxiv.org/abs/2310.20499) | [code]

- [2023/09/29] **Suspicion-Agent: Playing Imperfect Information Games with Theory of Mind Aware GPT-4** | [[paper]](https://arxiv.org/abs/2309.17277) | [code]

- [2023/09/18] **MindAgent: Emergent Gaming Interaction** | [[paper]](https://arxiv.org/abs/2309.09971) | [[code]](https://mindagent.github.io/)

- [2023/09/10] **An Appraisal-Based Chain-Of-Emotion Architecture for Affective Language Model Game Agents** | [[paper]](https://arxiv.org/abs/2309.05076) | [code]

- [2023/09/09] **Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf** | [[paper]](https://arxiv.org/abs/2309.04658) | [code]

- [2023/08/23] **Are ChatGPT and GPT-4 Good Poker Players? -- A Pre-Flop Analysis** | [[paper]](https://arxiv.org/abs/2308.12466) | [code]

- [2023/05/31] **Recursive Metropolis-Hastings Naming Game: Symbol Emergence in a Multi-agent System based on Probabilistic Generative Models** | [[paper]](https://arxiv.org/abs/2305.19761) | [code]

- [2023/05/26] **Playing repeated games with Large Language Models** | [[paper]](https://arxiv.org/abs/2305.16867) | [code]

- [2023/05/25] **Ghost in the Minecraft: Generally Capable Agents for Open-World Environments via Large Language Models with Text-based Knowledge and Memory** | [[paper]](https://arxiv.org/abs/2305.17144) | [code]

- [2023/05/08] **Knowledge-enhanced Agents for Interactive Text Games** | [[paper]](https://arxiv.org/abs/2305.05091) | [code]

- [2023/04/06] **Can Large Language Models Play Text Games Well? Current State-of-the-Art and Open Questions** | [[paper]](https://arxiv.org/abs/2304.02868) | [code]

#### Human-Agent Interaction
- [2025/06/11] **A Call for Collaborative Intelligence: Why Human-Agent Systems Should Precede AI Autonomy** | [[paper]](https://arxiv.org/abs/2506.09420) | [code]

- [2025/05/16] **Talk to Your Slides: Language-Driven Agents for Efficient Slide Editing** | [[paper]](https://arxiv.org/abs/2505.11604) | [code]

- [2025/03/26] **TAMA: A Human-AI Collaborative Thematic Analysis Framework Using Multi-Agent LLMs for Clinical Interviews** | [[paper]](https://arxiv.org/abs/2503.20666) | [code]

- [2025/02/17] **Leveraging Dual Process Theory in Language Agent Framework for Real-time Simultaneous Human-AI Collaboration** | [[paper]](https://arxiv.org/abs/2502.11882) | [code]

- [2025/02/05] **Multimodal Transformer Models for Turn-taking Prediction: Effects on Conversational Dynamics of Human-Agent Interaction during Cooperative Gameplay** | [[paper]](https://arxiv.org/abs/2503.16432) | [code]

- [2025/01/28] **CowPilot: A Framework for Autonomous and Human-Agent Collaborative Web Navigation** | [[paper]](https://arxiv.org/abs/2501.16609) | [code]

- [2024/12/20] **Collaborative Gym: A Framework for Enabling and Evaluating Human-Agent Collaboration** | [[paper]](https://arxiv.org/abs/2412.15701) | [code]

- [2024/06/28] **Designing and Evaluating Multi-Chatbot Interface for Human-AI Communication: Preliminary Findings from a Persuasion Task** | [[paper]](https://arxiv.org/abs/2406.19648) | [code]

- [2024/06/11] **Towards Human-AI Collaboration in Healthcare: Guided Deferral Systems with Large Language Models** | [[paper]](https://arxiv.org/abs/2406.07212) | [code]

- [2024/06/02] **Towards a copilot in BIM authoring tool using a large language model-based agent for intelligent human-machine interaction** | [[paper]](https://arxiv.org/abs/2406.16903) | [code]

- [2024/03/05] **ChatCite: LLM Agent with Human Workflow Guidance for Comparative Literature Summary** | [[paper]](https://arxiv.org/abs/2403.02574) | [code]

- [2024/02/20] **Large Language Model-based Human-Agent Collaboration for Complex Task Solving** | [[paper]](https://arxiv.org/abs/2402.12914) | [code]

- [2024/02/18] **Shaping Human-AI Collaboration: Varied Scaffolding Levels in Co-writing with Language Models** | [[paper]](https://arxiv.org/abs/2402.11723) | [code]

- [2024/02/17] **MONAL: Model Autophagy Analysis for Modeling Human-AI Interactions** | [[paper]](https://arxiv.org/abs/2402.11271) | [code]

- [2023/09/22] **Learning to Coordinate with Anyone** | [[paper]](https://arxiv.org/abs/2309.12633) | [code]

- [2023/07/31] **HAGRID: A Human-LLM Collaborative Dataset for Generative Information-Seeking with Attribution** | [[paper]](https://arxiv.org/abs/2307.16883) | [code]

- [2023/04/26] **Multi-Party Chat: Conversational Agents in Group Settings with Humans and Models** | [[paper]](https://arxiv.org/abs/2304.13835) | [code]

#### Tool Usage
- [2025/07/10] **PyVision: Agentic Vision with Dynamic Tooling** | [[paper]](https://arxiv.org/abs/2507.07998) | [code]

- [2025/07/09] **VisualTrap: A Stealthy Backdoor Attack on GUI Agents via Visual Grounding Manipulation** | [[paper]](https://arxiv.org/abs/2507.06899) | [code]

- [2025/07/03] **WebSailor: Navigating Super-human Reasoning for Web Agent** | [[paper]](https://arxiv.org/abs/2507.02592) | [code]

- [2025/07/02] **OpenTable-R1: A Reinforcement Learning Augmented Tool Agent for Open-Domain Table Question Answering** | [[paper]](https://arxiv.org/abs/2507.03018) | [code]

- [2025/06/30] **LineRetriever: Planning-Aware Observation Reduction for Web Agents** | [[paper]](https://arxiv.org/abs/2507.00210) | [code]

- [2025/06/27] **More Vulnerable than You Think: On the Stability of Tool-Integrated LLM Agents** | [[paper]](https://arxiv.org/abs/2506.21967) | [code]

- [2025/06/24] **Doc2Agent: Scalable Generation of Tool-Using Agents from API Documentation** | [[paper]](https://arxiv.org/abs/2506.19998) | [code]

- [2025/06/24] **NaviAgent: Bilevel Planning on Tool Dependency Graphs for Function Calling** | [[paper]](https://arxiv.org/abs/2506.19500) | [code]

- [2025/06/18] **Understanding GUI Agent Localization Biases through Logit Sharpness** | [[paper]](https://arxiv.org/abs/2506.15425) | [code]

- [2025/06/18] **Embodied Web Agents: Bridging Physical-Digital Realms for Integrated Agent Intelligence** | [[paper]](https://arxiv.org/abs/2506.15677) | [code]

- [2025/06/17] **AgentSynth: Scalable Task Generation for Generalist Computer-Use Agents** | [[paper]](https://arxiv.org/abs/2506.14205) | [code]

- [2025/06/12] **VideoDeepResearch: Long Video Understanding With Agentic Tool Using** | [[paper]](https://arxiv.org/abs/2506.10821) | [code]

- [2025/06/12] **Build the web for agents, not agents for the web** | [[paper]](https://arxiv.org/abs/2506.10953) | [code]

- [2025/06/10] **Atomic-to-Compositional Generalization for Mobile Agents with A New Benchmark and Scheduling System** | [[paper]](https://arxiv.org/abs/2506.08972) | [code]

- [2025/06/10] **GUIRoboTron-Speech: Towards Automated GUI Agents Based on Speech Instructions** | [[paper]](https://arxiv.org/abs/2506.11127) | [code]

- [2025/06/09] **CheMatAgent: Enhancing LLMs for Chemistry and Materials Science through Tree-Search Based Tool Learning** | [[paper]](https://arxiv.org/abs/2506.07551) | [code]

- [2025/06/04] **Go-Browse: Training Web Agents with Structured Exploration** | [[paper]](https://arxiv.org/abs/2506.03533) | [code]

- [2025/06/03] **GUI-Actor: Coordinate-Free Visual Grounding for GUI Agents** | [[paper]](https://arxiv.org/abs/2506.03143) | [code]

- [2025/06/02] **AgentCPM-GUI: Building Mobile-Use Agents with Reinforcement Fine-Tuning** | [[paper]](https://arxiv.org/abs/2506.01391) | [code]

- [2025/05/30] **MedOrch: Medical Diagnosis with Tool-Augmented Reasoning Agents for Flexible Extensibility** | [[paper]](https://arxiv.org/abs/2506.00235) | [code]

- [2025/05/28] **RedTeamCUA: Realistic Adversarial Testing of Computer-Use Agents in Hybrid Web-OS Environments** | [[paper]](https://arxiv.org/abs/2505.21936) | [code]

- [2025/05/28] **EvolveSearch: An Iterative Self-Evolving Search Agent** | [[paper]](https://arxiv.org/abs/2505.22501) | [code]

- [2025/05/28] **UI-Evol: Automatic Knowledge Evolving for Computer Use Agents** | [[paper]](https://arxiv.org/abs/2505.21964) | [code]

- [2025/05/28] **WebDancer: Towards Autonomous Information Seeking Agency** | [[paper]](https://arxiv.org/abs/2505.22648) | [[code]](https://github.com/Alibaba-NLP/WebAgent)

- [2025/05/27] **BacktrackAgent: Enhancing GUI Agent with Error Detection and Backtracking Mechanism** | [[paper]](https://arxiv.org/abs/2505.20660) | [code]

- [2025/05/27] **UI-Genie: A Self-Improving Approach for Iteratively Boosting MLLM-based Mobile GUI Agents** | [[paper]](https://arxiv.org/abs/2505.21496) | [code]

- [2025/05/27] **ChemHAS: Hierarchical Agent Stacking for Enhancing Chemistry Tools** | [[paper]](https://arxiv.org/abs/2505.21569) | [code]

- [2025/05/26] **T^2Agent: A Tool-augmented Multimodal Misinformation Detection Agent with Monte Carlo Tree Search** | [[paper]](https://arxiv.org/abs/2505.19768) | [code]

- [2025/05/26] **WebCoT: Enhancing Web Agent Reasoning by Reconstructing Chain-of-Thought in Reflection, Branching, and Rollback** | [[paper]](https://arxiv.org/abs/2505.20013) | [code]

- [2025/05/23] **Deep Video Discovery: Agentic Search with Tool Use for Long-form Video Understanding** | [[paper]](https://arxiv.org/abs/2505.18079) | [code]

- [2025/05/23] **ProgRM: Build Better GUI Agents with Progress Rewards** | [[paper]](https://arxiv.org/abs/2505.18121) | [code]

- [2025/05/23] **Gaming Tool Preferences in Agentic LLMs** | [[paper]](https://arxiv.org/abs/2505.18135) | [code]

- [2025/05/22] **WebAgent-R1: Training Web Agents via End-to-End Multi-Turn Reinforcement Learning** | [[paper]](https://arxiv.org/abs/2505.16421) | [code]

- [2025/05/22] **T1: A Tool-Oriented Conversational Dataset for Multi-Turn Agentic Planning** | [[paper]](https://arxiv.org/abs/2505.16986) | [code]

- [2025/05/21] **Web-Shepherd: Advancing PRMs for Reinforcing Web Agents** | [[paper]](https://arxiv.org/abs/2505.15277) | [code]

- [2025/05/21] **X-WebAgentBench: A Multilingual Interactive Web Benchmark for Evaluating Global Agentic System** | [[paper]](https://arxiv.org/abs/2505.15372) | [code]

- [2025/05/21] **GUI-G1: Understanding R1-Zero-Like Training for Visual Grounding in GUI Agents** | [[paper]](https://arxiv.org/abs/2505.15810) | [code]

- [2025/05/21] **AgentThink: A Unified Framework for Tool-Augmented Chain-of-Thought Reasoning in Vision-Language Models for Autonomous Driving** | [[paper]](https://arxiv.org/abs/2505.15298) | [code]

- [2025/05/20] **Mobile-Agent-V: A Video-Guided Approach for Effortless and Efficient Operational Knowledge Injection in Mobile Automation** | [[paper]](https://arxiv.org/abs/2505.13887) | [code]

- [2025/05/20] **Efficient Agent Training for Computer Use** | [[paper]](https://arxiv.org/abs/2505.13909) | [code]

- [2025/05/20] **s3: You Don't Need That Much Data to Train a Search Agent via RL** | [[paper]](https://arxiv.org/abs/2505.14146) | [code]

- [2025/05/19] **GEM: Gaussian Embedding Modeling for Out-of-Distribution Detection in GUI Agents** | [[paper]](https://arxiv.org/abs/2505.12842) | [code]

- [2025/05/18] **Enhance Mobile Agents Thinking Process Via Iterative Preference Learning** | [[paper]](https://arxiv.org/abs/2505.12299) | [code]

- [2025/05/17] **Demystifying and Enhancing the Efficiency of Large Language Model Based Search Agents** | [[paper]](https://arxiv.org/abs/2505.12065) | [code]

- [2025/05/16] **EnvInjection: Environmental Prompt Injection Attack to Multi-modal Web Agents** | [[paper]](https://arxiv.org/abs/2505.11717) | [code]

- [2025/05/09] **ScaleMCP: Dynamic and Auto-Synchronizing Model Context Protocol Tools for LLM Agents** | [[paper]](https://arxiv.org/abs/2505.06416) | [code]

- [2025/04/28] **MICE for CATs: Model-Internal Confidence Estimation for Calibrating Agents with Tools** | [[paper]](https://arxiv.org/abs/2504.20168) | [code]

- [2025/04/27] **AndroidGen: Building an Android Language Agent under Data Scarcity** | [[paper]](https://arxiv.org/abs/2504.19298) | [code]

- [2025/04/24] **Toward a Human-Centered Evaluation Framework for Trustworthy LLM-Powered GUI Agents** | [[paper]](https://arxiv.org/abs/2504.17934) | [code]

- [2025/04/23] **WebEvolver: Enhancing Web Agent Self-Improvement with Coevolving World Model** | [[paper]](https://arxiv.org/abs/2504.21024) | [code]

- [2025/04/22] **Guiding VLM Agents with Process Rewards at Inference Time for GUI Navigation** | [[paper]](https://arxiv.org/abs/2504.16073) | [code]

- [2025/04/19] **InfiGUI-R1: Advancing Multimodal GUI Agents from Reactive Actors to Deliberative Reasoners** | [[paper]](https://arxiv.org/abs/2504.14239) | [code]

- [2025/04/17] **WebLists: Extracting Structured Information From Complex Interactive Websites Using Executable LLM Agents** | [[paper]](https://arxiv.org/abs/2504.12682) | [code]

- [2025/04/16] **Enhancing Web Agents with Explicit Rollback Mechanisms** | [[paper]](https://arxiv.org/abs/2504.11788) | [code]

- [2025/04/15] **The Obvious Invisible Threat: LLM-Powered GUI Agents' Vulnerability to Fine-Print Injections** | [[paper]](https://arxiv.org/abs/2504.11281) | [code]

- [2025/04/14] **Breaking the Data Barrier -- Building GUI Agents Through Task Generalization** | [[paper]](https://arxiv.org/abs/2504.10127) | [code]

- [2025/04/14] **GUI-R1: A Generalist R1-Style Vision-Language Action Model For GUI Agents** | [[paper]](https://arxiv.org/abs/2504.10458) | [code]

- [2025/04/09] **Inducing Programmatic Skills for Agentic Tasks** | [[paper]](https://arxiv.org/abs/2504.06821) | [code]

- [2025/04/09] **SkillWeaver: Web Agents can Self-Improve by Discovering and Honing Skills** | [[paper]](https://arxiv.org/abs/2504.07079) | [code]

- [2025/04/02] **An Illusion of Progress? Assessing the Current State of Web Agents** | [[paper]](https://arxiv.org/abs/2504.01382) | [code]

- [2025/04/01] **On the Robustness of Agentic Function Calling** | [[paper]](https://arxiv.org/abs/2504.00914) | [code]

- [2025/04/01] **Agent S2: A Compositional Generalist-Specialist Framework for Computer Use Agents** | [[paper]](https://arxiv.org/abs/2504.00906) | [code]

- [2025/03/26] **Open Deep Search: Democratizing Search with Open-source Reasoning Agents** | [[paper]](https://arxiv.org/abs/2503.20201) | [code]

- [2025/03/24] **Safeguarding Mobile GUI Agent via Logic-based Action Verification** | [[paper]](https://arxiv.org/abs/2503.18492) | [code]

- [2025/03/18] **PLAY2PROMPT: Zero-shot Tool Instruction Optimization for LLM Agents via Tool Play** | [[paper]](https://arxiv.org/abs/2503.14432) | [code]

- [2025/03/14] **DeskVision: Large Scale Desktop Region Captioning for Advanced GUI Agents** | [[paper]](https://arxiv.org/abs/2503.11170) | [code]

- [2025/03/12] **Learning to Contextualize Web Pages for Enhanced Decision Making by LLM Agents** | [[paper]](https://arxiv.org/abs/2503.10689) | [code]

- [2025/03/10] **BEARCUBS: A benchmark for computer-using web agents** | [[paper]](https://arxiv.org/abs/2503.07919) | [code]

- [2025/03/06] **Measuring temporal effects of agent knowledge by date-controlled tool use** | [[paper]](https://arxiv.org/abs/2503.04188) | [code]

- [2025/03/06] **SafeArena: Evaluating the Safety of Autonomous Web Agents** | [[paper]](https://arxiv.org/abs/2503.04957) | [code]

- [2025/03/04] **LiteWebAgent: The Open-Source Suite for VLM-Based Web-Agent Applications** | [[paper]](https://arxiv.org/abs/2503.02950) | [code]

- [2025/03/01] **Smoothing Grounding and Reasoning for MLLM-Powered GUI Agents with Query-Oriented Pivot Tasks** | [[paper]](https://arxiv.org/abs/2503.00401) | [code]

- [2025/02/27] **Why Are Web AI Agents More Vulnerable Than Standalone LLMs? A Security Analysis** | [[paper]](https://arxiv.org/abs/2502.20383) | [code]

- [2025/02/24] **MobileSteward: Integrating Multiple App-Oriented Agents with Self-Evolution to Automate Cross-App Instructions** | [[paper]](https://arxiv.org/abs/2502.16796) | [code]

- [2025/02/24] **Mobile-Agent-V: Learning Mobile Device Operation Through Video-Guided Multi-Agent Collaboration** | [[paper]](https://arxiv.org/abs/2502.17110) | [[code]](https://github.com/X-PLUG/MobileAgent)

- [2025/02/17] **LLM Agents Making Agent Tools** | [[paper]](https://arxiv.org/abs/2502.11705) | [code]

- [2025/02/17] **SMART: Self-Aware Agent for Tool Overuse Mitigation** | [[paper]](https://arxiv.org/abs/2502.11435) | [code]

- [2025/02/16] **OctoTools: An Agentic Framework with Extensible Tools for Complex Reasoning** | [[paper]](https://arxiv.org/abs/2502.11271) | [code]

- [2025/02/12] **Can a Single Model Master Both Multi-turn Conversations and Tool Use?
CoALM: A Unified Conversational Agentic Language Model** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.08820) | [code]\n\n- [2025\u002F02\u002F07] **Agentic Reasoning: Reasoning LLMs with Tools for the Deep Research** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.04644) | [code]\n\n- [2025\u002F02\u002F06] **Division-of-Thoughts: Harnessing Hybrid Language Model Synergy for Efficient On-Device Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.04392) | [code]\n\n- [2025\u002F02\u002F05] **ReachAgent: Enhancing Mobile Agent via Page Reaching and Operation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.02955) | [code]\n\n- [2025\u002F01\u002F28] **CowPilot: A Framework for Autonomous and Human-Agent Collaborative Web Navigation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.16609) | [code]\n\n- [2025\u002F01\u002F21] **UI-TARS: Pioneering Automated GUI Interaction with Native Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.12326) | [code]\n\n- [2025\u002F01\u002F20] **Mobile-Agent-E: Self-Evolving Mobile Assistant for Complex Tasks** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.11733) | [code]\n\n- [2025\u002F01\u002F20] **PlotEdit: Natural Language-Driven Accessible Chart Editing in PDFs via Multimodal LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.11233) | [code]\n\n- [2025\u002F01\u002F08] **InfiGUIAgent: A Multimodal Generalist GUI Agent with Native Reasoning and Reflection** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.04575) | [code]\n\n- [2025\u002F01\u002F08] **FinSphere: A Conversational Stock Analysis Agent Equipped with Quantitative Tools based on Real-Time Database** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.12399) | [code]\n\n- [2025\u002F01\u002F07] **PPTAgent: Generating and Evaluating Presentations Beyond Text-to-Slides** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.03936) | 
[code]\n\n- [2024\u002F12\u002F28] **Efficient Multi-Agent Collaboration with Tool Use for Online Planning in Complex Table Question Answering** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.20145) | [code]\n\n- [2024\u002F12\u002F21] **InfoTech Assistant : A Multimodal Conversational Agent for InfoTechnology Web Portal Queries** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.16412) | [code]\n\n- [2024\u002F12\u002F12] **AgentTrek: Agent Trajectory Synthesis via Guiding Replay with Web Tutorials** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.09605) | [code]\n\n- [2024\u002F12\u002F08] **Cooperative SQL Generation for Segmented Databases By Using Multi-functional LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.05850) | [code]\n\n- [2024\u002F12\u002F05] **Aguvis: Unified Pure Vision Agents for Autonomous GUI Interaction** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.04454) | [code]\n\n- [2024\u002F11\u002F26] **ShowUI: One Vision-Language-Action Model for GUI Visual Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.17465) | [code]\n\n- [2024\u002F11\u002F22] **ScribeAgent: Towards Specialized Web Agents Using Production-Scale Workflow Data** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.15004) | [code]\n\n- [2024\u002F11\u002F20] **AdaptAgent: Adapting Multimodal Web Agents with Few-Shot Learning from Human Demonstrations** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.13451) | [code]\n\n- [2024\u002F11\u002F15] **The Dawn of GUI Agent: A Preliminary Case Study with Claude 3.5 Computer Use** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.10323) | [code]\n\n- [2024\u002F11\u002F04] **WebRL: Training LLM Web Agents via Self-Evolving Online Curriculum Reinforcement Learning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.02337) | [code]\n\n- [2024\u002F11\u002F04] **Attacking Vision-Language Computer Agents via Pop-ups** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.02391) | [code]\n\n- [2024\u002F11\u002F02] **Infant Agent: A Tool-Integrated, Logic-Driven Agent with Cost-Effective API Usage** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.01114) | [code]\n\n- [2024\u002F10\u002F28] **AutoGLM: Autonomous Foundation Agents for GUIs** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.00820) | [code]\n\n- [2024\u002F10\u002F25] **OpenWebVoyager: Building Multimodal Web Agents via Iterative Real-World Exploration, Feedback and Optimization** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.19609) | [code]\n\n- [2024\u002F10\u002F24] **Infogent: An Agent-Based Framework for Web Information Aggregation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.19054) | [code]\n\n- [2024\u002F10\u002F23] **ReflecTool: Towards Reflection-Aware Tool-Augmented Clinical Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.17657) | [code]\n\n- [2024\u002F10\u002F22] **Large Language Models Empowered Personalized Web Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.17236) | [code]\n\n- [2024\u002F10\u002F21] **VipAct: Visual-Perception Enhancement via Specialized VLM Agent Collaboration and Tool-use** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.16400) | [code]\n\n- [2024\u002F10\u002F21] **Beyond Browsing: API-Based Web Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.16464) | [code]\n\n- [2024\u002F10\u002F18] **Toolshed: Scale Tool-Equipped Agents with Advanced RAG-Tool Fusion and Tool Knowledge Bases** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.14594) | [code]\n\n- [2024\u002F10\u002F17] **Web Agents with World Models: Learning and Leveraging Environment Dynamics in Web Navigation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.13232) | [code]\n\n- [2024\u002F10\u002F17] **MeNTi: Bridging Medical Calculator and LLM Agent with Nested Tool Calling** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.13610) | [code]\n\n- [2024\u002F10\u002F17] **MobA: A Two-Level Agent System for Efficient Mobile Task Automation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.13757) | [code]\n\n- [2024\u002F10\u002F17] **AgentOccam: A Simple Yet Strong Baseline for LLM-Based Web Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.13825) | [code]\n\n- [2024\u002F10\u002F16] **Agent Skill Acquisition for Large Language Models via CycleQD** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.14735) | [code]\n\n- [2024\u002F10\u002F10] **Agent S: An Open Agentic Framework that Uses Computers Like a Human** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.08164) | [code]\n\n- [2024\u002F10\u002F07] **Navigating the Digital World as Humans Do: Universal Visual Grounding for GUI Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.05243) | [code]\n\n- [2024\u002F10\u002F03] **NNetNav: Unsupervised Learning of Browser Agents Through Environment Interaction in the Wild** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.02907) | [code]\n\n- [2024\u002F09\u002F24] **Automated test generation to evaluate tool-augmented LLMs as conversational AI agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.15934) | [code]\n\n- [2024\u002F09\u002F17] **EIA: Environmental Injection Attack on Generalist Web Agents for Privacy Leakage** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.11295) | [code]\n\n- [2024\u002F09\u002F01] **TinyAgent: Function Calling at the Edge** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.00608) | [code]\n\n- [2024\u002F08\u002F30] **Tool-Assisted Agent on SQL Inspection and Refinement in Real-World Scenarios** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.16991) | [code]\n\n- [2024\u002F08\u002F15] **VerilogCoder: Autonomous Verilog Coding Agents with Graph-based Planning and Abstract Syntax Tree 
(AST)-based Waveform Tracing Tool** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.08927) | [code]\n\n- [2024\u002F08\u002F05] **Caution for the Environment: Multimodal Agents are Susceptible to Environmental Distractions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.02544) | [code]\n\n- [2024\u002F08\u002F01] **OmniParser for Pure Vision Based GUI Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.00203) | [code]\n\n- [2024\u002F07\u002F26] **AppWorld: A Controllable World of Apps and People for Benchmarking Interactive Coding Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.18901) | [[code]](https:\u002F\u002Fgithub.com\u002Fstonybrooknlp\u002Fappworld)\n\n- [2024\u002F07\u002F22] **AssistantBench: Can Web Agents Solve Realistic and Time-Consuming Tasks?** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.15711) | [code]\n\n- [2024\u002F07\u002F11] **GTA: A Benchmark for General Tool Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.08713) | [code]\n\n- [2024\u002F07\u002F01] **Mobile-Bench: An Evaluation Benchmark for LLM-based Mobile Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.00993) | [code]\n\n- [2024\u002F06\u002F17] **GUICourse: From General Vision Language Models to Versatile GUI Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.11317) | [[code]](https:\u002F\u002Fgithub.com\u002Fyiye3\u002Fguicourse)\n\n- [2024\u002F06\u002F16] **GUI-WORLD: A Dataset for GUI-oriented Multimodal LLM-based Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.10819) | [code]\n\n- [2024\u002F06\u002F06] **Tool-Planner: Task Planning with Clusters across Multiple Tools** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.03807) | [[code]](https:\u002F\u002Fgithub.com\u002FOceannTwT\u002FTool-Planner)\n\n- [2024\u002F06\u002F03] **Mobile-Agent-v2: Mobile Device Operation Assistant with Effective Navigation via Multi-Agent 
Collaboration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.01014) | [[code]](https:\u002F\u002Fgithub.com\u002Fx-plug\u002Fmobileagent)\n\n- [2024\u002F06\u002F02] **Towards a copilot in BIM authoring tool using a large language model-based agent for intelligent human-machine interaction** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.16903) | [code]\n\n- [2024\u002F05\u002F30] **Large Language Models Can Self-Improve At Web Agent Tasks** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.20309) | [code]\n\n- [2024\u002F05\u002F17] **Latent State Estimation Helps UI Agents to Reason** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.11120) | [code]\n\n- [2024\u002F05\u002F06] **SWE-agent: Agent-Computer Interfaces Enable Automated Software Engineering** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.15793) | [code]\n\n- [2024\u002F05\u002F02] **CACTUS: Chemistry Agent Connecting Tool-Usage to Science** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.00972) | [[code]](https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fcactus)\n\n- [2024\u002F05\u002F01] **Navigating WebAI: Training Agents to Complete Web Tasks with Large Language Models and Reinforcement Learning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.00516) | [code]\n\n- [2024\u002F04\u002F23] **Evaluating Tool-Augmented Agents in Remote Sensing Platforms** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.00709) | [code]\n\n- [2024\u002F04\u002F17] **The Landscape of Emerging AI Agent Architectures for Reasoning, Planning, and Tool Calling: A Survey** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.11584) | [code]\n\n- [2024\u002F04\u002F17] **Octopus v3: Technical Report for On-device Sub-billion Multimodal AI Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.11459) | [code]\n\n- [2024\u002F04\u002F16] **Grounded Language Agent for Product Search via Intelligent Web Interactions** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.10887) | [code]\n\n- [2024\u002F04\u002F04] **AutoWebGLM: A Large Language Model-based Web Navigating Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.03648) | [[code]](https:\u002F\u002Fgithub.com\u002FTHUDM\u002FAutoWebGLM)\n\n- [2024\u002F04\u002F01] **Rapid Mobile App Development for Generative AI Agents on MIT App Inventor** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.01561) | [code]\n\n- [2024\u002F03\u002F05] **InjecAgent: Benchmarking Indirect Prompt Injections in Tool-Integrated Large Language Model Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.02691) | [code]\n\n- [2024\u002F03\u002F05] **Android in the Zoo: Chain-of-Action-Thought for GUI Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.02713) | [code]\n\n- [2024\u002F02\u002F27] **BASES: Large-scale Web Search User Simulation with Large Language Model based Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.17505) | [code]\n\n- [2024\u002F02\u002F26] **Look Before You Leap: Towards Decision-Aware and Generalizable Tool-Usage for Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.16696) | [code]\n\n- [2024\u002F02\u002F23] **On the Multi-turn Instruction Following for Conversational Web Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.15057) | [code]\n\n- [2024\u002F02\u002F20] **AgentMD: Empowering Language Agents for Risk Prediction with Large-Scale Clinical Tool Learning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.13225) | [code]\n\n- [2024\u002F02\u002F18] **SciAgent: Tool-augmented Language Models for Scientific Reasoning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.11451) | [code]\n\n- [2024\u002F02\u002F16] **ToolSword: Unveiling Safety Issues of Large Language Models in Tool Learning Across Three Stages** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.10753) | [[code]](https:\u002F\u002Fgithub.com\u002Fjunjie-ye\u002Ftoolsword)\n\n- [2024\u002F02\u002F08] **UFO: A UI-Focused Agent for Windows OS Interaction** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.07939) | [code]\n\n- [2024\u002F02\u002F06] **AnyTool: Self-Reflective, Hierarchical Agents for Large-Scale API Calls** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.04253) | [[code]](https:\u002F\u002Fgithub.com\u002Fdyabel\u002Fanytool)\n\n- [2024\u002F01\u002F11] **EASYTOOL: Enhancing LLM-based Agents with Concise Tool Instruction** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.06201) | [code]\n\n- [2024\u002F01\u002F03] **GPT-4V(ision) is a Generalist Web Agent, if Grounded** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.01614) | [code]\n\n- [2023\u002F12\u002F21] **AppAgent: Multimodal Agents as Smartphone Users** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.13771) | [code]\n\n- [2023\u002F12\u002F18] **CLOVA: A Closed-Loop Visual Assistant with Tool Usage and Update** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.10908) | [[code]](https:\u002F\u002Fclova-tool.github.io\u002F)\n\n- [2023\u002F12\u002F14] **CogAgent: A Visual Language Model for GUI Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.08914) | [code]\n\n- [2023\u002F11\u002F19] **TPTU-v2: Boosting Task Planning and Tool Usage of Large Language Model-based Agents in Real-world Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.11315) | [code]\n\n- [2023\u002F11\u002F15] **ToolTalk: Evaluating Tool-Usage in a Conversational Setting** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.10775) | [code]\n\n- [2023\u002F11\u002F10] **Smart Agent-Based Modeling: On the Use of Large Language Models in Computer Simulations** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.06330) | [code]\n\n- [2023\u002F10\u002F12] 
**A Zero-Shot Language Agent for Computer Control with Structured Reflection** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.08740) | [code]\n\n- [2023\u002F08\u002F07] **TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.03427) | [code]\n\n- [2023\u002F06\u002F09] **Mind2Web: Towards a Generalist Agent for the Web** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.06070) | [code]\n\n- [2023\u002F05\u002F22] **Making Language Models Better Tool Learners with Execution Feedback** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.13068) | [code]\n\n- [2023\u002F05\u002F19] **ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.11554) | [code]\n\n#### Simulation\n- [2025\u002F07\u002F10] **Automating MD simulations for Proteins using Large language Models: NAMD-Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.07887) | [code]\n\n- [2025\u002F07\u002F01] **TransLaw: Benchmarking Large Language Models in Multi-Agent Simulation of the Collaborative Translation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.00875) | [code]\n\n- [2025\u002F06\u002F26] **CitySim: Modeling Urban Behaviors and City Dynamics with Large-Scale LLM-Driven Agent Simulation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.21805) | [code]\n\n- [2025\u002F06\u002F24] **LLM-Based Social Simulations Require a Boundary** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.19806) | [code]\n\n- [2025\u002F06\u002F23] **TrajTok: Technical Report for 2025 Waymo Open Sim Agents Challenge** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.21618) | [code]\n\n- [2025\u002F06\u002F16] **CAMS: A CityGPT-Powered Agentic Framework for Urban Human Mobility Simulation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.13599) | [code]\n\n- 
[2025\u002F06\u002F07] **Modeling Earth-Scale Human-Like Societies with One Billion Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.12078) | [code]\n\n- [2025\u002F06\u002F03] **MASTER: Enhancing Large Language Model via Multi-Agent Simulated Teaching** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.02689) | [code]\n\n- [2025\u002F06\u002F02] **LAM SIMULATOR: Advancing Data Generation for Large Action Model Training via Online Exploration and Trajectory Feedback** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.02298) | [code]\n\n- [2025\u002F05\u002F31] **Dyna-Think: Synergizing Reasoning, Acting, and World Model Simulation in AI Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.00320) | [code]\n\n- [2025\u002F05\u002F28] **Scalable, Symbiotic, AI and Non-AI Agent Based Parallel Discrete Event Simulations** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.23846) | [code]\n\n- [2025\u002F05\u002F26] **Embracing Imperfection: Simulating Students with Diverse Cognitive Levels Using LLM-based Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.19997) | [code]\n\n- [2025\u002F05\u002F25] **When Ethics and Payoffs Diverge: LLM Agents in Morally Charged Social Dilemmas** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.19212) | [code]\n\n- [2025\u002F05\u002F19] **Simulation Agent: A Framework for Integrating Simulation and Large Language Models for Enhanced Decision-Making** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.13761) | [code]\n\n- [2025\u002F05\u002F11] **EcoLANG: Efficient and Effective Agent Communication Language Induction for Social Simulation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.06904) | [code]\n\n- [2025\u002F04\u002F20] **BookWorld: From Novels to Interactive Agent Societies for Creative Story Generation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.14538) | [code]\n\n- [2025\u002F04\u002F17] **SimUSER: 
Simulating User Behavior with Large Language Models for Recommender System Evaluation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.12722) | [code]\n\n- [2025\u002F04\u002F14] **SocioVerse: A World Model for Social Simulation Powered by LLM Agents and A Pool of 10 Million Real-World Users** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.10157) | [code]\n\n- [2025\u002F04\u002F10] **MOSAIC: Modeling Social AI for Content Dissemination and Regulation in Multi-Agent Simulations** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.07830) | [code]\n\n- [2025\u002F04\u002F04] **APIGen-MT: Agentic Pipeline for Multi-Turn Data Generation via Simulated Agent-Human Interplay** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.03601) | [code]\n\n- [2025\u002F04\u002F04] **Algorithmic Prompt Generation for Diverse Human-like Teaming and Communication with Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.03991) | [code]\n\n- [2025\u002F03\u002F28] **Self-Evolving Multi-Agent Simulations for Realistic Clinical Interactions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.22678) | [code]\n\n- [2025\u002F03\u002F18] **Retrieval-Augmented Simulacra: Generative Agents for Up-to-date and Knowledge-Adaptive Simulations** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.14620) | [code]\n\n- [2025\u002F03\u002F12] **Can A Society of Generative Agents Simulate Human Behavior and Inform Public Health Policy? 
A Case Study on Vaccine Hesitancy** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.09639) | [code]\n\n- [2025\u002F02\u002F06] **Simulating the Emergence of Differential Case Marking with Communicating Neural-Network Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.04038) | [code]\n\n- [2025\u002F02\u002F03] **Eliciting Language Model Behaviors with Investigator Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.01236) | [code]\n\n- [2025\u002F02\u002F03] **TwinMarket: A Scalable Behavioral and Social Simulation for Financial Markets** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.01506) | [code]\n\n- [2025\u002F01\u002F25] **Are Human Interactions Replicable by Generative Agents? A Case Study on Pronoun Usage in Hierarchical Interactions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.15283) | [code]\n\n- [2025\u002F01\u002F19] **Self-Explanation in Social AI Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.13945) | [code]\n\n- [2025\u002F01\u002F12] **LLMs Model Non-WEIRD Populations: Experiments with Synthetic Cultural Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.06834) | [code]\n\n- [2024\u002F12\u002F10] **Political Actor Agent: Simulating Legislative System for Roll Call Votes Prediction with Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.07144) | [code]\n\n- [2024\u002F11\u002F18] **OASIS: Open Agent Social Interaction Simulations with One Million Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.11581) | [code]\n\n- [2024\u002F10\u002F28] **ElectionSim: Massive Population Election Simulation Powered by Large Language Model Driven Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.20746) | [code]\n\n- [2024\u002F10\u002F24] **Schema-Guided Culture-Aware Complex Event Simulation with Multi-Agent Role-Play** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.18935) | 
[code]\n\n- [2024\u002F10\u002F18] **SRAP-Agent: Simulating and Optimizing Scarce Resource Allocation Policy with LLM-based Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.14152) | [code]\n\n- [2024\u002F10\u002F05] **Large Language Models can Achieve Social Balance** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.04054) | [code]\n\n- [2024\u002F09\u002F25] **Plurals: A System for Guiding LLMs Via Simulated Social Ensembles** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.17213) | [code]\n\n- [2024\u002F09\u002F14] **Synergistic Simulations: Multi-Agent Problem Solving with Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.13753) | [code]\n\n- [2024\u002F09\u002F02] **Agentic Society: Merging skeleton from real world and texture from Large Language Model** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.10550) | [code]\n\n- [2024\u002F08\u002F28] **Logic-Enhanced Language Model Agents for Trustworthy Social Simulations** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.16081) | [code]\n\n- [2024\u002F08\u002F15] **AgentCourt: Simulating Court with Adversarial Evolvable Lawyer Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.08089) | [code]\n\n- [2024\u002F08\u002F03] **Self-Emotion Blended Dialogue Generation in Social Simulation Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.01633) | [code]\n\n- [2024\u002F06\u002F26] **Simulating The U.S. 
Senate: An LLM-Driven Agent Approach to Modeling Legislative Behavior and Bipartisanship** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.18702) | [code]\n\n- [2024\u002F06\u002F20] **Artificial Leviathan: Exploring Social Evolution of LLM Agents Through the Lens of Hobbesian Social Contract Theory** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.14373) | [code]\n\n- [2024\u002F06\u002F10] **Can Language Models Serve as Text-Based World Simulators?** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.06485) | [code]\n\n- [2024\u002F05\u002F12] **Exploring the Potential of Conversational AI Support for Agent-Based Social Simulation Model Design** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.08032) | [code]\n\n- [2024\u002F04\u002F23] **BattleAgent: Multi-modal Dynamic Emulation on Historical Battles to Complement Historical Analysis** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.15532) | [[code]](https:\u002F\u002Fgithub.com\u002Fagiresearch\u002Fbattleagent)\n\n- [2024\u002F03\u002F20] **AgentGroupChat: An Interactive Group Chat Simulacra For Better Eliciting Emergent Behavior** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.13433) | [code]\n\n- [2024\u002F03\u002F05] **AgentsCourt: Building Judicial Decision-Making Agents with Court Debate Simulation and Legal Knowledge Augmentation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.02959) | [code]\n\n- [2024\u002F02\u002F26] **Unveiling the Truth and Facilitating Change: Towards Agent-based Large-scale Social Movement Simulation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.16333) | [code]\n\n- [2024\u002F02\u002F20] **What if LLMs Have Different World Views: Simulating Alien Civilizations with LLM-based Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.13184) | [code]\n\n- [2024\u002F02\u002F07] **Can Large Language Model Agents Simulate Human Trust Behavior?** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.04559) | [code]\n\n- [2024\u002F01\u002F08] **SpeechAgents: Human-Communication Simulation with Multi-Modal Multi-Agent Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.03945) | [code]\n\n- [2023\u002F12\u002F06] **LLM as OS, Agents as Apps: Envisioning AIOS, Agents and the AIOS-Agent Ecosystem** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.03815) | [code]\n\n- [2023\u002F11\u002F28] **War and Peace (WarAgent): Large Language Model-based Multi-Agent Simulation of World Wars** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.17227) | [code]\n\n- [2023\u002F10\u002F10] **MetaAgents: Simulating Interactions of Human Behaviors for LLM-based Task-oriented Coordination via Collaborative Generative Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.06500) | [code]\n\n- [2023\u002F06\u002F05] **User Behavior Simulation with Large Language Model based Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.02552) | [code]\n\n- [2023\u002F05\u002F26] **Training Socially Aligned Language Models on Simulated Social Interactions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.16960) | [code]\n\n- [2023\u002F04\u002F07] **Generative Agents: Interactive Simulacra of Human Behavior** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.03442) | [code]\n\n\n\n### Application\n#### Math\n- [2025\u002F05\u002F21] **ModelingAgent: Bridging LLMs and Mathematical Modeling for Real-World Challenges** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.15068) | [code]\n\n- [2025\u002F03\u002F23] **MathAgent: Leveraging a Mixture-of-Math-Agent Framework for Real-World Multimodal Mathematical Error Detection** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.18132) | [code]\n\n- [2025\u002F03\u002F05] **MA-LoT: Multi-Agent Lean-based Long Chain-of-Thought Reasoning enhances Formal Theorem Proving** | 
[[paper]](https://arxiv.org/abs/2503.03205) | [code]

- [2025/02/25] **LLM Knows Geometry Better than Algebra: Numerical Understanding of LLM-Based Agents in A Trading Arena** | [[paper]](https://arxiv.org/abs/2502.17967) | [code]

- [2025/02/18] **One Size doesn't Fit All: A Personalized Conversational Tutoring Agent for Mathematics Instruction** | [[paper]](https://arxiv.org/abs/2502.12633) | [code]

- [2025/02/04] **Automating Mathematical Proof Generation Using Large Language Model Agents and Knowledge Graphs** | [[paper]](https://arxiv.org/abs/2503.11657) | [code]

- [2024/10/29] **Flow-DPO: Improving LLM Mathematical Reasoning through Online Multi-Agent Learning** | [[paper]](https://arxiv.org/abs/2410.22304) | [code]

- [2024/10/13] **Expanding Search Space with Diverse Prompting Agents: An Efficient Sampling Approach for LLM Mathematical Reasoning** | [[paper]](https://arxiv.org/abs/2410.09780) | [code]

- [2024/08/03] **MathLearner: A Large Language Model Agent Framework for Learning to Solve Mathematical Problems** | [[paper]](https://arxiv.org/abs/2408.01779) | [code]

- [2024/04/10] **MathVC: An LLM-Simulated Multi-Character Virtual Classroom for Mathematics Education** | [[paper]](https://arxiv.org/abs/2404.06711) | [code]

- [2024/04/06] **MACM: Utilizing a Multi-Agent System for Condition Mining in Solving Complex Mathematical Problems** | [[paper]](https://arxiv.org/abs/2404.04735) | [[code]](https://github.com/bin123apple/macm)

#### Chemistry
- [2025/05/27] **ChemHAS: Hierarchical Agent Stacking for Enhancing Chemistry Tools** | [[paper]](https://arxiv.org/abs/2505.21569) | [code]

- [2025/04/18] **System of Agentic AI for the Discovery of Metal-Organic Frameworks** | [[paper]](https://arxiv.org/abs/2504.14110) | [code]

- [2025/03/22] **Building Resource-Constrained Language Agents: A Korean Case Study on Chemical Toxicity Information** | [[paper]](https://arxiv.org/abs/2503.17753) | [code]

- [2025/01/23] **Hypothesis Generation for Materials Discovery and Design Using Goal-Driven and Constraint-Guided LLM Agents** | [[paper]](https://arxiv.org/abs/2501.13299) | [code]

- [2025/01/11] **ChemAgent: Self-updating Library in Large Language Models Improves Chemical Reasoning** | [[paper]](https://arxiv.org/abs/2501.06590) | [code]

- [2024/08/29] **HoneyComb: A Flexible LLM-Based Agent System for Materials Science** | [[paper]](https://arxiv.org/abs/2409.00135) | [code]

- [2024/06/26] **A Review of Large Language Models and Autonomous Agents in Chemistry** | [[paper]](https://arxiv.org/abs/2407.01603) | [code]

#### Biology
- [2025/04/28] **m-KAILIN: Knowledge-Driven Agentic Scientific Corpus Distillation Framework for Biomedical Large Language Models Training** | [[paper]](https://arxiv.org/abs/2504.19565) | [code]

- [2025/04/08] **SkillFlow: Efficient Skill and Code Transfer Through Communication in Adapting AI Agents** | [[paper]](https://arxiv.org/abs/2504.06188) | [code]

- [2025/04/07] **scAgent: Universal Single-Cell Annotation via a LLM Agent** | [[paper]](https://arxiv.org/abs/2504.04698) | [code]

- [2024/10/16] **PRefLexOR: Preference-based Recursive Language Modeling for Exploratory Optimization of Reasoning and Agentic Thinking** | [[paper]](https://arxiv.org/abs/2410.12375) | [code]

- [2024/06/29] **BioKGBench: A Knowledge Graph Checking Benchmark of AI Agent for Biomedical Science** | [[paper]](https://arxiv.org/abs/2407.00466) | [code]

- [2024/05/25] **GeneAgent: Self-verification Language Agent for Gene Set Knowledge Discovery using Domain Databases** | [[paper]](https://arxiv.org/abs/2405.16205) | [code]

- [2024/04/27] **CRISPR-GPT: An LLM Agent for Automated Design of Gene-Editing Experiments** | [[paper]](https://arxiv.org/abs/2404.18021) | [code]

- [2024/04/03] **Empowering Biomedical Discovery with AI Agents** | [[paper]](https://arxiv.org/abs/2404.02831) | [code]

- [2024/01/27] **ProtAgents: Protein discovery via large language model multi-agent collaborations combining physics and machine learning** | [[paper]](https://arxiv.org/abs/2402.04268) | [code]

#### Physics
- [2025/06/06] **Can Theoretical Physics Research Benefit from Language Agents?** | [[paper]](https://arxiv.org/abs/2506.06214) | [code]

- [2025/01/23] **Hypothesis Generation for Materials Discovery and Design Using Goal-Driven and Constraint-Guided LLM Agents** | [[paper]](https://arxiv.org/abs/2501.13299) | [code]

- [2024/12/09] **StarWhisper Telescope: Agent-Based Observation Assistant System to Approach AI Astrophysicist** | [[paper]](https://arxiv.org/abs/2412.06412) | [code]

- [2024/08/29] **HoneyComb: A Flexible LLM-Based Agent System for Materials Science** | [[paper]](https://arxiv.org/abs/2409.00135) | [code]

- [2024/01/27] **ProtAgents: Protein discovery via large language model multi-agent collaborations combining physics and machine learning** | [[paper]](https://arxiv.org/abs/2402.04268) | [code]

#### Geography
- [2024/12/23] **MineAgent: Towards Remote-Sensing Mineral Exploration with Multimodal Large Language Models** | [[paper]](https://arxiv.org/abs/2412.17339) | [code]

- [2024/07/13] **An Autonomous GIS Agent Framework for Geospatial Data Retrieval** | [[paper]](https://arxiv.org/abs/2407.21024) | [code]

#### Art
- [2025/01/22] **FilmAgent: A Multi-Agent Framework for End-to-End Film Automation in Virtual 3D Spaces** | [[paper]](https://arxiv.org/abs/2501.12909) | [code]

- [2024/10/02] **Agent-Driven Large Language Models for Mandarin Lyric Generation** | [[paper]](https://arxiv.org/abs/2410.01450) | [code]

- [2024/09/05] **LLM-based multi-agent poetry generation in non-cooperative environments** | [[paper]](https://arxiv.org/abs/2409.03659) | [code]

- [2024/08/13] **What should I wear to a party in a Greek taverna? Evaluation for Conversational Agents in the Fashion Domain** | [[paper]](https://arxiv.org/abs/2408.08907) | [code]

- [2024/07/01] **IBSEN: Director-Actor Agent Collaboration for Controllable and Interactive Drama Script Generation** | [[paper]](https://arxiv.org/abs/2407.01093) | [code]

- [2024/04/28] **ComposerX: Multi-Agent Symbolic Music Composition with LLMs** | [[paper]](https://arxiv.org/abs/2404.18081) | [[code]](https://github.com/lllindsey0615/composerx)

- [2024/03/12] **AesopAgent: Agent-driven Evolutionary System on Story-to-Video Production** | [[paper]](https://arxiv.org/abs/2403.07952) | [code]

- [2023/10/18] **MusicAgent: An AI Agent for Music Understanding and Generation with Large Language Models** | [[paper]](https://arxiv.org/abs/2310.11954) | [code]

#### Medicine
- [2025/07/10] **Toward Real-World Chinese Psychological Support Dialogues: CPsDD Dataset and a Co-Evolving Multi-Agent System** | [[paper]](https://arxiv.org/abs/2507.07509) | [code]

- [2025/07/03] **RLVER: Reinforcement Learning with Verifiable Emotion Rewards for Empathetic Agents** | [[paper]](https://arxiv.org/abs/2507.03112) | [code]

- [2025/07/01] **STELLA: Self-Evolving LLM Agent for Biomedical Research** | [[paper]](https://arxiv.org/abs/2507.02004) | [code]

- [2025/06/27] **Exploring Modularity of Agentic Systems for Drug Discovery** | [[paper]](https://arxiv.org/abs/2506.22189) | [code]

- [2025/06/26] **Large Language Model Agent for Modular Task Execution in Drug Discovery** | [[paper]](https://arxiv.org/abs/2507.02925) | [code]

- [2025/06/25] **An Agentic System for Rare Disease Diagnosis with Traceable Reasoning** | [[paper]](https://arxiv.org/abs/2506.20430) | [code]

- [2025/06/24] **MAM: Modular Multi-Agent Framework for Multi-Modal Medical Diagnosis via Role-Specialized Collaboration** | [[paper]](https://arxiv.org/abs/2506.19835) | [code]

- [2025/06/18] **From RAG to Agentic: Validating Islamic-Medicine Responses with LLM Agents** | [[paper]](https://arxiv.org/abs/2506.15911) | [code]

- [2025/06/17] **RadFabric: Agentic AI System with Reasoning Capability for Radiology** | [[paper]](https://arxiv.org/abs/2506.14142) | [code]

- [2025/06/16] **Language Agents for Hypothesis-driven Clinical Decision Making with Reinforcement Learning** | [[paper]](https://arxiv.org/abs/2506.13474) | [code]

- [2025/06/13] **Large Language Model-Powered Conversational Agent Delivering Problem-Solving Therapy (PST) for Family Caregivers: Enhancing Empathy and Therapeutic Alliance Using In-Context Learning** | [[paper]](https://arxiv.org/abs/2506.11376) | [code]

- [2025/06/12] **Neural at ArchEHR-QA 2025: Agentic Prompt Optimization for Evidence-Grounded Clinical Question Answering** | [[paper]](https://arxiv.org/abs/2506.10751) | [code]

- [2025/06/11] **ReasonMed: A 370K Multi-Agent Generated Dataset for Advancing Medical Reasoning** | [[paper]](https://arxiv.org/abs/2506.09513) | [code]

- [2025/06/04] **AI Agents for Conversational Patient Triage: Preliminary Simulation-Based Evaluation with Real-World EHR Data** | [[paper]](https://arxiv.org/abs/2506.04032) | [code]

- [2025/06/04] **MedAgentGym: Training LLM Agents for Code-Based Medical Reasoning at Scale** | [[paper]](https://arxiv.org/abs/2506.04405) | [code]

- [2025/05/31] **MMedAgent-RL: Optimizing Multi-Agent Collaboration for Multimodal Medical Reasoning** | [[paper]](https://arxiv.org/abs/2506.00555) | [code]

- [2025/05/30] **MedOrch: Medical Diagnosis with Tool-Augmented Reasoning Agents for Flexible Extensibility** | [[paper]](https://arxiv.org/abs/2506.00235) | [code]

- [2025/05/27] **Silence is Not Consensus: Disrupting Agreement Bias in Multi-Agent LLMs via Catfish Agent for Clinical Decision Making** | [[paper]](https://arxiv.org/abs/2505.21503) | [code]

- [2025/05/27] **BehaviorSFT: Behavioral Token Conditioning for Clinical Agents Across the Proactivity Spectrum** | [[paper]](https://arxiv.org/abs/2505.21757) | [code]

- [2025/05/24] **DDO: Dual-Decision Optimization via Multi-Agent Collaboration for LLM-Based Medical Consultation** | [[paper]](https://arxiv.org/abs/2505.18630) | [code]

- [2025/05/21] **A Risk Taxonomy for Evaluating AI-Powered Psychotherapy Agents** | [[paper]](https://arxiv.org/abs/2505.15108) | [code]

- [2025/05/18] **MedAgentBoard: Benchmarking Multi-Agent Collaboration with Conventional Methods for Diverse Medical Tasks** | [[paper]](https://arxiv.org/abs/2505.12371) | [code]

- [2025/05/06] **FRAME: Feedback-Refined Agent Methodology for Enhancing Medical Research Insights** | [[paper]](https://arxiv.org/abs/2505.04649) | [code]

- [2025/04/30] **Talk Before You Retrieve: Agent-Led Discussions for Better RAG in Medical QA** | [[paper]](https://arxiv.org/abs/2504.21252) | [code]

- [2025/04/28] **m-KAILIN: Knowledge-Driven Agentic Scientific Corpus Distillation Framework for Biomedical Large Language Models Training** | [[paper]](https://arxiv.org/abs/2504.19565) | [code]

- [2025/04/25] **MAGI: Multi-Agent Guided Interview for Psychiatric Assessment** | [[paper]](https://arxiv.org/abs/2504.18260) | [code]

- [2025/04/13] **EmoAgent: Assessing and Safeguarding Human-AI Interaction for Mental Health Safety** | [[paper]](https://arxiv.org/abs/2504.09689) | [code]

- [2025/04/08] **TxGemma: Efficient and Agentic LLMs for Therapeutics** | [[paper]](https://arxiv.org/abs/2504.06196) | [code]

- [2025/04/04] **YaleNLP @ PerAnsSumm 2025: Multi-Perspective Integration via Mixture-of-Agents for Enhanced Healthcare QA Summarization** | [[paper]](https://arxiv.org/abs/2504.03932) | [code]

- [2025/03/28] **Self-Evolving Multi-Agent Simulations for Realistic Clinical Interactions** | [[paper]](https://arxiv.org/abs/2503.22678) | [code]

- [2025/03/26] **TAMA: A Human-AI Collaborative Thematic Analysis Framework Using Multi-Agent LLMs for Clinical Interviews** | [[paper]](https://arxiv.org/abs/2503.20666) | [code]

- [2025/03/26] **3MDBench: Medical Multimodal Multi-agent Dialogue Benchmark** | [[paper]](https://arxiv.org/abs/2504.13861) | [code]

- [2025/03/21] **Autonomous Radiotherapy Treatment Planning Using DOLA: A Privacy-Preserving, LLM-Based Optimization Agent** | [[paper]](https://arxiv.org/abs/2503.17553) | [code]

- [2025/03/19] **When Pigs Get Sick: Multi-Agent AI for Swine Disease Detection** | [[paper]](https://arxiv.org/abs/2503.15204) | [code]

- [2025/03/19] **EmpathyAgent: Can Embodied Agents Conduct Empathetic Actions?** | [[paper]](https://arxiv.org/abs/2503.16545) | [code]

- [2025/03/17] **MAP: Evaluation and Multi-Agent Enhancement of Large Language Models for Inpatient Pathways** | [[paper]](https://arxiv.org/abs/2503.13205) | [code]

- [2025/03/10] **MedAgentsBench: Benchmarking Thinking Models and Agent Frameworks for Complex Medical Reasoning** | [[paper]](https://arxiv.org/abs/2503.07459) | [code]

- [2025/03/07] **GEMA-Score: Granular Explainable Multi-Agent Score for Radiology Report Evaluation** | [[paper]](https://arxiv.org/abs/2503.05347) | [code]

- [2025/03/07] **Multi Agent based Medical Assistant for Edge Devices** | [[paper]](https://arxiv.org/abs/2503.05397) | [code]

- [2025/02/27] **M^3Builder: A Multi-Agent System for Automated Machine Learning in Medical Imaging** | [[paper]](https://arxiv.org/abs/2502.20301) | [code]

- [2025/02/26] **MEDDxAgent: A Unified Modular Agent Framework for Explainable Automatic Differential Diagnosis** | [[paper]](https://arxiv.org/abs/2502.19175) | [code]

- [2025/02/25] **Scaffolding Empathy: Training Counselors with Simulated Patients and Utterance-level Performance Visualizations** | [[paper]](https://arxiv.org/abs/2502.18673) | [code]

- [2025/02/24] **Improving Interactive Diagnostic Ability of a Large Language Model Agent Through Clinical Experience Learning** | [[paper]](https://arxiv.org/abs/2503.16463) | [code]

- [2025/02/19] **LIDDIA: Language-based Intelligent Drug Discovery Agent** | [[paper]](https://arxiv.org/abs/2502.13959) | [code]

- [2025/02/18] **An LLM-Powered Agent for Physiological Data Analysis: A Case Study on PPG-based Heart Rate Estimation** | [[paper]](https://arxiv.org/abs/2502.12836) | [code]

- [2025/02/18] **Sleepless Nights, Sugary Days: Creating Synthetic Users with Health Conditions for Realistic Coaching Agent Interactions** | [[paper]](https://arxiv.org/abs/2502.13135) | [code]

- [2025/02/13] **PathFinder: A Multi-Modal Multi-Agent System for Medical Diagnostic Decision-Making Applied to Histopathology** | [[paper]](https://arxiv.org/abs/2502.08916) | [code]

- [2025/02/09] **HamRaz: A Culture-Based Persian Conversation Dataset for Person-Centered Therapy Using LLM Agents** | [[paper]](https://arxiv.org/abs/2502.05982) | [code]

- [2025/02/09] **The Application of MATEC (Multi-AI Agent Team Care) Framework in Sepsis Care** | [[paper]](https://arxiv.org/abs/2503.16433) | [code]

- [2025/02/05] **CAMI: A Counselor Agent Supporting Motivational Interviewing through State Inference and Topic Exploration** | [[paper]](https://arxiv.org/abs/2502.02807) | [code]

- [2025/02/02] **Agent-Based Uncertainty Awareness Improves Automated Radiology Report Labeling with an Open-Source Large Language Model** | [[paper]](https://arxiv.org/abs/2502.01691) | [code]

- [2025/01/27] **MADP: Multi-Agent Deductive Planning for Enhanced Cognitive-Behavioral Mental Health Question Answer** | [[paper]](https://arxiv.org/abs/2501.15826) | [code]

- [2025/01/16] **AutoCBT: An Autonomous Multi-agent Framework for Cognitive Behavioral Therapy in Psychological Counseling** | [[paper]](https://arxiv.org/abs/2501.09426) | [code]

- [2025/01/03] **PSYCHE: A Multi-faceted Patient Simulation Framework for Evaluation of Psychiatric Assessment Conversational Agents** | [[paper]](https://arxiv.org/abs/2501.01594) | [code]

- [2024/12/19] **PsyDraw: A Multi-Agent Multimodal System for Mental Health Screening in Left-Behind Children** | [[paper]](https://arxiv.org/abs/2412.14769) | [code]

- [2024/12/17] **RareAgents: Advancing Rare Disease Care through LLM-Empowered Multi-disciplinary Team** | [[paper]](https://arxiv.org/abs/2412.12475) | [code]

- [2024/12/16] **LLMs Can Simulate Standardized Patients via Agent Coevolution** | [[paper]](https://arxiv.org/abs/2412.11716) | [code]

- [2024/12/13] **Script-Based Dialog Policy Planning for LLM-Powered Conversational Agents: A Basic Architecture for an "AI Therapist"** | [[paper]](https://arxiv.org/abs/2412.15242) | [code]

- [2024/12/05] **Educational-Psychological Dialogue Robot Based on Multi-Agent Collaboration** | [[paper]](https://arxiv.org/abs/2412.03847) | [code]

- [2024/12/02] **Medchain: Bridging the Gap Between LLM Agents and Clinical Practice through Interactive Sequential Benchmarking** | [[paper]](https://arxiv.org/abs/2412.01605) | [code]

- [2024/11/21] **PIORS: Personalized Intelligent Outpatient Reception based on Large Language Model with Multi-Agents Medical Scenario Simulation** | [[paper]](https://arxiv.org/abs/2411.13902) | [code]

- [2024/11/16] **Towards Next-Generation Medical Agent: How o1 is Reshaping Decision-Making in Medical Scenarios** | [[paper]](https://arxiv.org/abs/2411.14461) | [code]

- [2024/11/03] **EcoAct: Economic Agent Determines When to Register What Action** | [[paper]](https://arxiv.org/abs/2411.01643) | [code]

- [2024/10/25] **$\texttt{PatentAgent}$: Intelligent Agent for Automated Pharmaceutical Patent Analysis** | [[paper]](https://arxiv.org/abs/2410.21312) | [code]

- [2024/10/23] **ReflecTool: Towards Reflection-Aware Tool-Augmented Clinical Agents** | [[paper]](https://arxiv.org/abs/2410.17657) | [code]

- [2024/10/17] **MeNTi: Bridging Medical Calculator and LLM Agent with Nested Tool Calling** | [[paper]](https://arxiv.org/abs/2410.13610) | [code]

- [2024/10/16] **MedAide: Towards an Omni Medical Aide via Specialized LLM-based Multi-Agent Collaboration** | [[paper]](https://arxiv.org/abs/2410.12532) | [code]

- [2024/10/02] **Zodiac: A Cardiologist-Level LLM Framework for Multi-Agent Diagnostics** | [[paper]](https://arxiv.org/abs/2410.02026) | [code]

- [2024/08/28] **Interactive Agents: Simulating Counselor-Client Psychological Counseling via Role-Playing LLM-to-LLM Interactions** | [[paper]](https://arxiv.org/abs/2408.15787) | [code]

- [2024/08/23] **DrugAgent: Explainable Drug Repurposing Agent with Large Language Model-based Reasoning** | [[paper]](https://arxiv.org/abs/2408.13378) | [code]

- [2024/08/14] **Development of a Large Language Model-based Multi-Agent Clinical Decision Support System for Korean Triage and Acuity Scale (KTAS)-Based Triage and Treatment Planning in Emergency Departments** | [[paper]](https://arxiv.org/abs/2408.07531) | [code]

- [2024/07/18] **CoD, Towards an Interpretable Medical Agent using Chain of Diagnosis** | [[paper]](https://arxiv.org/abs/2407.13301) | [code]

- [2024/07/10] **Virtual Agents for Alcohol Use Counseling: Exploring LLM-Powered Motivational Interviewing** | [[paper]](https://arxiv.org/abs/2407.08095) | [code]

- [2024/07/03] **MentalAgora: A Gateway to Advanced Personalized Care in Mental Health through Multi-Agent Debating and Attribute Control** | [[paper]](https://arxiv.org/abs/2407.02736) | [code]

- [2024/07/02] **MMedAgent: Learning to Use Medical Tools with Multi-modal Agent** | [[paper]](https://arxiv.org/abs/2407.02483) | [code]

- [2024/04/23] **ClinicalAgent: Clinical Trial Multi-Agent System with Large Language Model-based Reasoning** | [[paper]](https://arxiv.org/abs/2404.14777) | [code]

- [2024/04/03] **Empowering Biomedical Discovery with AI Agents** | [[paper]](https://arxiv.org/abs/2404.02831) | [code]

- [2024/02/20] **Can Large Language Models be Used to Provide Psychological Counselling? An Analysis of GPT-4-Generated Responses Using Role-play Dialogues** | [[paper]](https://arxiv.org/abs/2402.12738) | [code]

- [2024/02/20] **AgentMD: Empowering Language Agents for Risk Prediction with Large-Scale Clinical Tool Learning** | [[paper]](https://arxiv.org/abs/2402.13225) | [code]

- [2024/02/15] **Knowledge-Infused LLM-Powered Conversational Health Agent: A Case Study for Diabetes Patients** | [[paper]](https://arxiv.org/abs/2402.10153) | [code]

- [2024/02/01] **Generation, Distillation and Evaluation of Motivational Interviewing-Style Reflections with a Foundational Language Model** | [[paper]](https://arxiv.org/abs/2402.01051) | [code]

- [2023/12/19] **Can ChatGPT be Your Personal Medical Assistant?** | [[paper]](https://arxiv.org/abs/2312.12006) | [code]

- [2023/10/03] **Exploring Collaboration Mechanisms for LLM Agents: A Social Psychology View** | [[paper]](https://arxiv.org/abs/2310.02124) | [code]

#### Finance
- [2025/07/08] **ECom-Bench: Can LLM Agent Resolve Real-World E-commerce Customer Support Issues?** | [[paper]](https://arxiv.org/abs/2507.05639) | [code]

- [2025/07/07] **MindFlow: Revolutionizing E-commerce Customer Support with Multimodal LLM Agents** | [[paper]](https://arxiv.org/abs/2507.05330) | [code]

- [2025/06/10] **Improved LLM Agents for Financial Document Question Answering** | [[paper]](https://arxiv.org/abs/2506.08726) | [code]

- [2025/06/09] **EconWebArena: Benchmarking Autonomous Agents on Economic Tasks in Realistic Web Environments** | [[paper]](https://arxiv.org/abs/2506.08136) | [code]

- [2025/05/20] **Hidden Ghost Hand: Unveiling Backdoor Vulnerabilities in MLLM-Powered Mobile GUI Agents** | [[paper]](https://arxiv.org/abs/2505.14418) | [code]

- [2025/04/08] **Are Generative AI Agents Effective Personalized Financial Advisors?** | [[paper]](https://arxiv.org/abs/2504.05862) | [code]

- [2025/04/07] **AI for Climate Finance: Agentic Retrieval and Multi-Step Reasoning for Early Warning System Investments** | [[paper]](https://arxiv.org/abs/2504.05104) | [code]

- [2025/03/27] **EQ-Negotiator: An Emotion-Reasoning LLM Agent in Credit Dialogues** | [[paper]](https://arxiv.org/abs/2503.21080) | [code]

- [2025/03/05] **Cite Before You Speak: Enhancing Context-Response Grounding in E-commerce Conversational LLM-Agents** | [[paper]](https://arxiv.org/abs/2503.04830) | [code]

- [2025/02/25] **LLM Knows Geometry Better than Algebra: Numerical Understanding of LLM-Based Agents in A Trading Arena** | [[paper]](https://arxiv.org/abs/2502.17967) | [code]

- [2025/02/08] **Agentic AI Systems Applied to tasks in Financial Services: Modeling and model risk management crews** | [[paper]](https://arxiv.org/abs/2502.05439) | [code]

- [2025/02/01] **MarketSenseAI 2.0: Enhancing Stock Analysis through LLM Agents** | [[paper]](https://arxiv.org/abs/2502.00415) | [code]

- [2025/01/08] **FinSphere: A Conversational Stock Analysis Agent Equipped with Quantitative Tools based on Real-Time Database** | [[paper]](https://arxiv.org/abs/2501.12399) | [code]

- [2024/12/27] **OS-Genesis: Automating GUI Agent Trajectory Construction via Reverse Task Synthesis** | [[paper]](https://arxiv.org/abs/2412.19723) | [code]

- [2024/12/19] **Beyond the Sum: Unlocking AI Agents Potential Through Market Forces** | [[paper]](https://arxiv.org/abs/2501.10388) | [code]

- [2024/11/07] **Enhancing Investment Analysis: Optimizing AI-Agent Collaboration in Financial Research** | [[paper]](https://arxiv.org/abs/2411.04788) | [code]

- [2024/10/29] **Enhancing Financial Question Answering with a Multi-Agent Reflection Framework** | [[paper]](https://arxiv.org/abs/2410.21741) | [code]

- [2024/09/19] **Strategic Collusion of LLM Agents: Market Division in Multi-Commodity Competitions** | [[paper]](https://arxiv.org/abs/2410.00031) | [code]

- [2024/07/18] **dzFinNlp at AraFinNLP: Improving Intent Detection in Financial Conversational Agents** | [[paper]](https://arxiv.org/abs/2407.13565) | [code]

- [2024/07/09] **FinCon: A Synthesized LLM Multi-Agent System with Conceptual Verbal Reinforcement for Enhanced Financial Decision Making** | [[paper]](https://arxiv.org/abs/2407.06567) | [code]

- [2024/07/05] **Towards Automated Functional Equation Proving: A Benchmark Dataset and A Domain-Specific In-Context Agent** | [[paper]](https://arxiv.org/abs/2407.14521) | [code]

- [2024/05/07] **Enhancing the Efficiency and Accuracy of Underlying Asset Reviews in Structured Finance: The Application of Multi-agent Framework** | [[paper]](https://arxiv.org/abs/2405.04294) | [code]

#### Software Engineering
- [2025/06/13] **Agent-RLVR: Training Software Engineering Agents via Guidance and Environment Rewards** | [[paper]](https://arxiv.org/abs/2506.11425) | [code]

- [2025/06/04] **MedAgentGym: Training LLM Agents for Code-Based Medical Reasoning at Scale** | [[paper]](https://arxiv.org/abs/2506.04405) | [code]

- [2025/06/03] **Coding Agents with Multimodal Browsing are Generalist Problem Solvers** | [[paper]](https://arxiv.org/abs/2506.03011) | [code]

- [2025/05/28] **Co-Saving: Resource Aware Multi-Agent Collaboration for Software Development** | [[paper]](https://arxiv.org/abs/2505.21898) | [code]

- [2025/05/26] **Vibe Coding vs. Agentic Coding: Fundamentals and Practical Implications of Agentic AI** | [[paper]](https://arxiv.org/abs/2505.19443) | [code]

- [2025/05/26] **SWE-rebench: An Automated Pipeline for Task Collection and Decontaminated Evaluation of Software Engineering Agents** | [[paper]](https://arxiv.org/abs/2505.20411) | [code]

- [2025/05/24] **SEW: Self-Evolving Agentic Workflows for Automated Code Generation** | [[paper]](https://arxiv.org/abs/2505.18646) | [code]

- [2025/05/22] **Optimizing LLM-Based Multi-Agent System with Textual Feedback: A Case Study on Software Development** | [[paper]](https://arxiv.org/abs/2505.16086) | [code]

- [2025/05/19] **Guided Search Strategies in Non-Serializable Environments with Applications to Software Engineering Agents** | [[paper]](https://arxiv.org/abs/2505.13652) | [code]

- [2025/05/13] **LibVulnWatch: A Deep Assessment Agent System and Leaderboard for Uncovering Hidden Vulnerabilities in Open-Source AI Libraries** | [[paper]](https://arxiv.org/abs/2505.08842) | [code]

- [2025/04/30] **SWE-smith: Scaling Data for Software Engineering Agents** | [[paper]](https://arxiv.org/abs/2504.21798) | [code]

- [2025/04/28] **ResearchCodeAgent: An LLM Multi-Agent System for Automated Codification of Research Methodologies** | [[paper]](https://arxiv.org/abs/2504.20117) | [code]

- [2025/04/18] **CodeVisionary: An Agent-based Framework for Evaluating Large Language Models in Code Generation** | [[paper]](https://arxiv.org/abs/2504.13472) | [code]

- [2025/04/09] **R2E-Gym: Procedural Environments and Hybrid Verifiers for Scaling Open-Weights SWE Agents** | [[paper]](https://arxiv.org/abs/2504.07164) | [code]

- [2025/03/27] **GateLens: A Reasoning-Enhanced LLM Agent for Automotive Software Release Analytics** | [[paper]](https://arxiv.org/abs/2503.21735) | [code]

- [2025/03/24] **Verbal Process Supervision Elicits Better Coding Agents** | [[paper]](https://arxiv.org/abs/2503.18494) | [code]

- [2025/03/18] **DARS: Dynamic Action Re-Sampling to Enhance Coding Agent Performance by Adaptive Tree Traversal** | [[paper]](https://arxiv.org/abs/2503.14269) | [code]

- [2025/03/12] **LocAgent: Graph-Guided LLM Agents for Code Localization** | [[paper]](https://arxiv.org/abs/2503.09089) | [code]

- [2025/03/10] **ProjectEval: A Benchmark for Programming Agents Automated Evaluation on Project-Level Code Generation** | [[paper]](https://arxiv.org/abs/2503.07010) | [code]

- [2025/02/19] **An LLM-based Agent for Reliable Docker Environment Configuration** | [[paper]](https://arxiv.org/abs/2502.13681) | [code]

- [2025/02/18] **Training Turn-by-Turn Verifiers for Dialogue Tutoring Agents: The Curious Case of LLMs as Your Coding Tutors** | [[paper]](https://arxiv.org/abs/2502.13311) | [code]

- [2025/02/18] **UXAgent: An LLM Agent-Based Usability Testing Framework for Web Design** | [[paper]](https://arxiv.org/abs/2502.12561) | [code]

- [2025/02/14] **The Ann Arbor Architecture for Agent-Oriented Programming** | [[paper]](https://arxiv.org/abs/2502.09903) | [[code]](https://github.com/aaalgo/postline_0.1)

- [2025/02/11] **Multi-Agent Collaboration for Multilingual Code Instruction Tuning** | [[paper]](https://arxiv.org/abs/2502.07487) | [code]

- [2025/02/10] **SyncMind: Measuring Agent Out-of-Sync Recovery in Collaborative Software Engineering** | [[paper]](https://arxiv.org/abs/2502.06994) | [code]

- [2025/02/08] **CODESIM: Multi-Agent Code Generation and Problem Solving through Simulation-Driven Planning and Debugging** | [[paper]](https://arxiv.org/abs/2502.05664) | [code]

- [2024/12/30] **Training Software Engineering Agents and Verifiers with SWE-Gym** | [[paper]](https://arxiv.org/abs/2412.21139) | [code]

- [2024/12/24] **Molly: Making Large Language Model Agents Solve Python Problem More Logically** | [[paper]](https://arxiv.org/abs/2412.18093) | [code]

- [2024/12/16] **Seeker: Towards Exception Safety Code Generation with Intermediate Language Agents Framework** | [[paper]](https://arxiv.org/abs/2412.11713) | [code]

- [2024/11/07] **CodeTree: Agent-guided Tree Search for Code Generation with Large Language Models** | [[paper]](https://arxiv.org/abs/2411.04329) | [code]

- [2024/10/29] **SceneGenAgent: Precise Industrial Scene Generation with Coding Agent** | [[paper]](https://arxiv.org/abs/2410.21909) | [code]

- [2024/10/09] **DA-Code: Agent Data Science Code Generation Benchmark for Large Language Models** | [[paper]](https://arxiv.org/abs/2410.07331) | [code]

- [2024/10/09] **Seeker: Enhancing Exception Handling in Code with LLM-based Multi-Agent Approach** | [[paper]](https://arxiv.org/abs/2410.06949) | [code]

- [2024/09/02] **Co-Learning: Code Learning for Multi-Agent Reinforcement Collaborative Framework with Conversational Natural Language Interfaces** | [[paper]](https://arxiv.org/abs/2409.00985) | [code]

- [2024/08/19] **GoNoGo: An Efficient LLM-based Multi-Agent System for Streamlining Automotive Software Release Decision-Making** | [[paper]](https://arxiv.org/abs/2408.09785) | [code]

- [2024/08/13] **Diversity Empowers Intelligence: Integrating Expertise of Software Engineering Agents** | [[paper]](https://arxiv.org/abs/2408.07060) | [code]

- [2024/08/05] **LLM Agents Improve Semantic Code Search** | [[paper]](https://arxiv.org/abs/2408.11058) | [code]

- [2024/07/26] **AppWorld: A Controllable World of Apps and People for Benchmarking Interactive Coding Agents** | [[paper]](https://arxiv.org/abs/2407.18901) | [[code]](https://github.com/stonybrooknlp/appworld)

- [2024/07/01] **Agentless: Demystifying LLM-based Software Engineering Agents** | [[paper]](https://arxiv.org/abs/2407.01489) | [code]

- [2024/06/13] **Multi-Agent Software Development through Cross-Team Collaboration** | [[paper]](https://arxiv.org/abs/2406.08979) | [[code]](https://github.com/openbmb/chatdev)

- [2024/05/06] **SWE-agent: Agent-Computer Interfaces Enable Automated Software Engineering** | [[paper]](https://arxiv.org/abs/2405.15793) | [code]

- [2024/04/11] **Behavior Trees Enable Structured Programming of Language Model Agents** | [[paper]](https://arxiv.org/abs/2404.07439) | [[code]](https://github.com/RichardKelley/dendron)

- [2024/04/02] **Self-Organized Agents: A LLM Multi-Agent Framework toward Ultra Large-Scale Code Generation and Optimization** | [[paper]](https://arxiv.org/abs/2404.02183) | [code]

- [2024/03/02] **SceneCraft: An LLM Agent for Synthesizing 3D Scene as Blender Code** | [[paper]](https://arxiv.org/abs/2403.01248) | [code]

- [2024/02/26] **RepoAgent: An LLM-Powered Open-Source Framework for Repository-level Code Documentation Generation** | [[paper]](https://arxiv.org/abs/2402.16667) | [code]

- [2024/02/19] **WorldCoder, a Model-Based LLM Agent: Building World Models by Writing Code and Interacting with the Environment** | [[paper]](https://arxiv.org/abs/2402.12275) | [code]

- [2024/02/02] **StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback** | [[paper]](https://arxiv.org/abs/2402.01391) | [code]

- [2024/02/01] **Executable Code Actions Elicit Better LLM Agents** | [[paper]](https://arxiv.org/abs/2402.01030) | [code]

- [2023/12/28] **Experiential Co-Learning of Software-Developing Agents** | [[paper]](https://arxiv.org/abs/2312.17025) | [code]

- [2023/12/20] **AgentCoder: Multi-Agent-based Code Generation with Iterative Testing and Optimisation** | [[paper]](https://arxiv.org/abs/2312.13010) | [code]

- [2023/07/27] **PanGu-Coder2: Boosting Large Language Models for Code with Ranking Feedback** | [[paper]](https://arxiv.org/abs/2307.14936) | [code]

- [2023/07/16] **ChatDev: Communicative Agents for Software Development** | [[paper]](https://arxiv.org/abs/2307.07924) | [code]

- [2023/04/15] **Self-collaboration Code Generation via ChatGPT** | [[paper]](https://arxiv.org/abs/2304.07590) | [code]

#### Research
- [2025/07/01] **STELLA: Self-Evolving LLM Agent for Biomedical Research** | [[paper]](https://arxiv.org/abs/2507.02004) | [code]

- 
[2025\u002F06\u002F27] **RExBench: Can coding agents autonomously implement AI research extensions?** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.22598) | [code]\n\n- [2025\u002F06\u002F25] **Language Modeling by Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.20249) | [code]\n\n- [2025\u002F06\u002F23] **From Web Search towards Agentic Deep Research: Incentivizing Search with Reasoning Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.18959) | [code]\n\n- [2025\u002F06\u002F12] **VideoDeepResearch: Long Video Understanding With Agentic Tool Using** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.10821) | [code]\n\n- [2025\u002F06\u002F06] **Can Theoretical Physics Research Benefit from Language Agents?** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.06214) | [code]\n\n- [2025\u002F05\u002F30] **Unifying Language Agent Algorithms with Graph-based Orchestration Engine for Reproducible Agent Research** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.24354) | [code]\n\n- [2025\u002F05\u002F29] **Large Language Model-Based Agents for Automated Research Reproducibility: An Exploratory Study in Alzheimer&#39;s Disease** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.23852) | [code]\n\n- [2025\u002F05\u002F26] **MLR-Bench: Evaluating AI Agents on Open-Ended Machine Learning Research** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.19955) | [code]\n\n- [2025\u002F05\u002F22] **BioDSA-1K: Benchmarking Data Science Agents for Biomedical Research** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.16100) | [code]\n\n- [2025\u002F05\u002F22] **NovelSeek: When Agent Becomes the Scientist -- Building Closed-Loop System from Hypothesis to Verification** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.16938) | [code]\n\n- [2025\u002F04\u002F28] **ResearchCodeAgent: An LLM Multi-Agent System for Automated Codification of Research 
Methodologies** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.20117) | [code]\n\n- [2025\u002F04\u002F21] **Completing A Systematic Review in Hours instead of Months with Interactive AI Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.14822) | [code]\n\n- [2025\u002F04\u002F10] **CollEX -- A Multimodal Agentic RAG System Enabling Interactive Exploration of Scientific Collections** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.07643) | [code]\n\n- [2025\u002F04\u002F10] **The AI Scientist-v2: Workshop-Level Automated Scientific Discovery via Agentic Tree Search** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.08066) | [code]\n\n- [2025\u002F04\u002F02] **Automated Survey Collection with LLM-based Conversational Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.02891) | [code]\n\n- [2025\u002F03\u002F23] **AgentRxiv: Towards Collaborative Autonomous Research** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.18102) | [code]\n\n- [2025\u002F03\u002F12] **Agentic AI for Scientific Discovery: A Survey of Progress, Challenges, and Future Directions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.08979) | [code]\n\n- [2025\u002F03\u002F11] **ReviewAgents: Bridging the Gap Between Human and AI-Generated Paper Reviews** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.08506) | [code]\n\n- [2025\u002F02\u002F25] **LAG: LLM agents for Leaderboard Auto Generation on Demanding** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.18209) | [code]\n\n- [2025\u002F02\u002F20] **MLGym: A New Framework and Benchmark for Advancing AI Research Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.14499) | [code]\n\n- [2025\u002F02\u002F07] **Agentic Reasoning: Reasoning LLMs with Tools for the Deep Research** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.04644) | [code]\n\n- [2025\u002F01\u002F08] **Agent Laboratory: Using LLM Agents as Research 
Assistants** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.04227) | [code]\n\n- [2024\u002F10\u002F17] **Chain of Ideas: Revolutionizing Research Via Novel Idea Development with LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.13185) | [code]\n\n- [2024\u002F10\u002F12] **Many Heads Are Better Than One: Improved Scientific Idea Generation by A LLM-Based Multi-Agent System** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.09403) | [code]\n\n- [2024\u002F10\u002F07] **ScienceAgentBench: Toward Rigorous Assessment of Language Agents for Data-Driven Scientific Discovery** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.05080) | [code]\n\n- [2024\u002F10\u002F07] **ImProver: Agent-Based Automated Proof Optimization** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.04753) | [code]\n\n- [2024\u002F09\u002F23] **Towards a Realistic Long-Term Benchmark for Open-Web Research Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.14913) | [code]\n\n- [2024\u002F09\u002F17] **CORE-Bench: Fostering the Credibility of Published Research Through a Computational Reproducibility Agent Benchmark** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.11363) | [code]\n\n- [2024\u002F09\u002F12] **DSBench: How Far Are Data Science Agents to Becoming Data Science Experts?** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.07703) | [code]\n\n- [2024\u002F09\u002F11] **SUPER: Evaluating Agents on Setting Up and Executing Tasks from Research Repositories** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.07440) | [code]\n\n- [2024\u002F09\u002F10] **Language agents achieve superhuman synthesis of scientific knowledge** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.13740) | [code]\n\n- [2024\u002F09\u002F09] **SciAgents: Automating scientific discovery through multi-agent intelligent graph reasoning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.05556) | 
[code]\n\n- [2024\u002F08\u002F26] **MLR-Copilot: Autonomous Machine Learning Research based on Large Language Models Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.14033) | [code]\n\n- [2024\u002F08\u002F20] **Automating Knowledge Discovery from Scientific Literature via LLMs: A Dual-Agent Approach with Progressive Ontology Prompting** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.00054) | [code]\n\n- [2024\u002F06\u002F13] **ResearchArena: Benchmarking Large Language Models&#39; Ability to Collect and Organize Information as Research Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.10291) | [code]\n\n- [2024\u002F05\u002F02] **CACTUS: Chemistry Agent Connecting Tool-Usage to Science** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.00972) | [[code]](https:\u002F\u002Fgithub.com\u002Fpnnl\u002Fcactus)\n\n- [2024\u002F04\u002F09] **SurveyAgent: A Conversational System for Personalized and Efficient Research Survey** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.06364) | [code]\n\n- [2024\u002F02\u002F28] **Data Interpreter: An LLM Agent For Data Science** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.18679) | [[code]](https:\u002F\u002Fgithub.com\u002Fgeekan\u002Fmetagpt)\n\n- [2024\u002F02\u002F18] **SciAgent: Tool-augmented Language Models for Scientific Reasoning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.11451) | [code]\n\n- [2024\u002F02\u002F06] **Prioritizing Safeguarding Over Autonomy: Risks of LLM Agents for Science** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.04247) | [code]\n\n- [2024\u002F01\u002F08] **MARG: Multi-Agent Review Generation for Scientific Papers** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.04259) | [code]\n\n\n\n### Automation\n#### Workflow\n- [2025\u002F06\u002F02] **Follow the Flow: Fine-grained Flowchart Attribution with Neurosymbolic Agents** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.01344) | [code]\n\n- [2025\u002F05\u002F26] **ScienceBoard: Evaluating Multimodal Autonomous Agents in Realistic Scientific Workflows** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.19897) | [code]\n\n- [2025\u002F04\u002F17] **MetaSynth: Meta-Prompting-Driven Agentic Scaffolds for Diverse Synthetic Data Generation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.12563) | [code]\n\n- [2025\u002F02\u002F24] **Turning Conversations into Workflows: A Framework to Extract and Evaluate Dialog Workflows for Service AI Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.17321) | [code]\n\n- [2025\u002F02\u002F11] **EvoFlow: Evolving Diverse Agentic Workflows On The Fly** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.07373) | [code]\n\n- [2025\u002F02\u002F07] **nvAgent: Automated Data Visualization from Natural Language via Collaborative Agent Workflow** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.05036) | [code]\n\n- [2025\u002F02\u002F06] **ScoreFlow: Mastering LLM Agent Workflows via Score-based Preference Optimization** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.04306) | [code]\n\n- [2024\u002F12\u002F17] **An Agentic Approach to Automatic Creation of P&amp;ID Diagrams from Natural Language Descriptions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.12898) | [code]\n\n- [2024\u002F12\u002F15] **LAW: Legal Agentic Workflows for Custody and Fund Services Contracts** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.11063) | [code]\n\n- [2024\u002F11\u002F22] **ScribeAgent: Towards Specialized Web Agents Using Production-Scale Workflow Data** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.15004) | [code]\n\n- [2024\u002F11\u002F12] **BudgetMLAgent: A Cost-Effective LLM Multi-Agent system for Automating Machine Learning Tasks** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.07464) | 
[code]\n\n- [2024\u002F11\u002F08] **Game-theoretic LLM: Agent Workflow for Negotiation Games** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.05990) | [code]\n\n- [2024\u002F10\u002F24] **An LLM Agent for Automatic Geospatial Data Analysis** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.18792) | [code]\n\n- [2024\u002F10\u002F17] **From Barriers to Tactics: A Behavioral Science-Informed Agentic Workflow for Personalized Nutrition Coaching** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.14041) | [code]\n\n- [2024\u002F10\u002F17] **ControlAgent: Automating Control System Design via Novel Integration of LLM Agents and Domain Expertise** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.19811) | [code]\n\n- [2024\u002F10\u002F16] **Proactive Agent: Shifting LLM Agents from Reactive Responses to Active Assistance** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.12361) | [code]\n\n- [2024\u002F10\u002F14] **AFlow: Automating Agentic Workflow Generation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.10762) | [code]\n\n- [2024\u002F10\u002F10] **Benchmarking Agentic Workflow Generation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.07869) | [code]\n\n- [2024\u002F10\u002F03] **AutoML-Agent: A Multi-Agent LLM Framework for Full-Pipeline AutoML** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.02958) | [code]\n\n- [2024\u002F09\u002F11] **Agent Workflow Memory** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.07429) | [code]\n\n- [2024\u002F08\u002F16] **The Fellowship of the LLMs: Multi-Agent Workflows for Synthetic Preference Optimization Dataset Generation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.08688) | [code]\n\n- [2024\u002F07\u002F15] **Spider2-V: How Far Are Multimodal Agents From Automating Data Science and Engineering Workflows?** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.10956) | [code]\n\n- [2024\u002F07\u002F03] 
**AgentInstruct: Toward Generative Teaching with Agentic Flows** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.03502) | [code]\n\n- [2024\u002F07\u002F01] **AutoFlow: Automated Workflow Generation for Large Language Model Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.12821) | [code]\n\n- [2024\u002F06\u002F21] **Autonomous Agents for Collaborative Task under Information Asymmetry** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.14928) | [code]\n\n- [2024\u002F03\u002F13] **AutoGuide: Automated Generation and Selection of Context-Aware Guidelines for Large Language Model Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.08978) | [code]\n\n- [2024\u002F03\u002F05] **ChatCite: LLM Agent with Human Workflow Guidance for Comparative Literature Summary** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.02574) | [code]\n\n#### Automatic Evaluation\n- [2025\u002F06\u002F26] **Mind2Web 2: Evaluating Agentic Search with Agent-as-a-Judge** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.21506) | [code]\n\n- [2025\u002F06\u002F23] **AI Agents-as-Judge: Automated Assessment of Accuracy, Consistency, Completeness and Clarity for Enterprise Documents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.22485) | [code]\n\n- [2025\u002F06\u002F08] **Manifesto from Dagstuhl Perspectives Workshop 24352 -- Conversational Agents: A Framework for Evaluation (CAFE)** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.11112) | [code]\n\n- [2025\u002F05\u002F22] **HiMATE: A Hierarchical Multi-Agent Framework for Machine Translation Evaluation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.16281) | [code]\n\n- [2025\u002F05\u002F21] **UrduFactCheck: An Agentic Fact-Checking Framework for Urdu with Evidence Boosting and Benchmarking** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.15063) | [code]\n\n- [2025\u002F05\u002F21] **AGENT-X: Adaptive Guideline-based Expert 
Network for Threshold-free AI-generated teXt detection** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.15261) | [code]\n\n- [2025\u002F05\u002F20] **CAFES: A Collaborative Multi-Agent Framework for Multi-Granular Multimodal Essay Scoring** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.13965) | [code]\n\n- [2025\u002F05\u002F18] **ESC-Judge: A Framework for Comparing Emotional Support Conversational Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.12531) | [code]\n\n- [2025\u002F05\u002F13] **TRAIL: Trace Reasoning and Agentic Issue Localization** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.08638) | [code]\n\n- [2025\u002F05\u002F05] **AutoLibra: Agent Metric Induction from Open-Ended Feedback** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.02820) | [code]\n\n- [2025\u002F05\u002F01] **Sentient Agent as a Judge: Evaluating Higher-Order Social Cognition in Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.02847) | [code]\n\n- [2025\u002F04\u002F21] **EvalAgent: Discovering Implicit Evaluation Criteria from the Web** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.15219) | [code]\n\n- [2025\u002F04\u002F09] **A Unified Agentic Framework for Evaluating Conditional Image Generation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.07046) | [code]\n\n- [2025\u002F04\u002F01] **VerifiAgent: a Unified Verification Agent in Language Model Reasoning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.00406) | [code]\n\n- [2025\u002F04\u002F01] **Multi-Agent LLM Judge: automatic personalized LLM judge design for evaluating natural language generation applications** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.02867) | [code]\n\n- [2025\u002F03\u002F07] **GEMA-Score: Granular Explainable Multi-Agent Score for Radiology Report Evaluation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.05347) | [code]\n\n- 
[2025\u002F02\u002F26] **Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.19328) | [[code]](https:\u002F\u002Fgithub.com\u002FTHU-KEG\u002FAgentic-Reward-Modeling)\n\n- [2025\u002F02\u002F25] **Debt Collection Negotiations with Large Language Models: An Evaluation System and Optimizing Decision Making with Multi-Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.18228) | [code]\n\n- [2025\u002F02\u002F25] **FACT-AUDIT: An Adaptive Multi-Agent Framework for Dynamic Fact-Checking Evaluation of Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.17924) | [code]\n\n- [2025\u002F02\u002F14] **Automated Hypothesis Validation with Agentic Sequential Falsifications** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.09858) | [code]\n\n- [2025\u002F01\u002F19] **IntellAgent: A Multi-Agent Framework for Evaluating Conversational AI Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.11067) | [code]\n\n- [2025\u002F01\u002F17] **Agent-as-Judge for Factual Summarization of Long Narratives** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.09993) | [code]\n\n- [2025\u002F01\u002F03] **PSYCHE: A Multi-faceted Patient Simulation Framework for Evaluation of Psychiatric Assessment Conversational Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.01594) | [code]\n\n- [2024\u002F12\u002F28] **M-MAD: Multidimensional Multi-Agent Debate for Advanced Machine Translation Evaluation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.20127) | [code]\n\n- [2024\u002F12\u002F10] **Evaluation Agent: Efficient and Promptable Evaluation Framework for Visual Generative Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.09645) | [code]\n\n- [2024\u002F11\u002F25] **SAGEval: The frontiers of Satisfactory Agent based NLG Evaluation for 
reference-free open-ended text** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.16077) | [code]\n\n- [2024\u002F11\u002F15] **Large Language Models as User-Agents for Evaluating Task-Oriented-Dialogue Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.09972) | [code]\n\n- [2024\u002F09\u002F24] **Automated test generation to evaluate tool-augmented LLMs as conversational AI agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.15934) | [code]\n\n- [2024\u002F09\u002F22] **The Ability of Large Language Models to Evaluate Constraint-satisfaction in Agent Responses to Open-ended Requests** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.14371) | [code]\n\n- [2024\u002F09\u002F13] **Safeguarding Decentralized Social Media: LLM Agents for Automating Community Rule Compliance** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.08963) | [code]\n\n- [2024\u002F05\u002F23] **ALI-Agent: Assessing LLMs&#39; Alignment with Human Values via Agent-based Evaluation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.14125) | [code]\n\n- [2024\u002F03\u002F28] **MATEval: A Multi-Agent Discussion Framework for Advancing Open-Ended Text Evaluation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.19305) | [code]\n\n- [2023\u002F08\u002F14] **ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.07201) | [code]\n\n\n\n### Training\n#### Fine tuning\n- [2025\u002F07\u002F10] **SAND: Boosting LLM Agents with Self-Taught Action Deliberation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.07441) | [code]\n\n- [2025\u002F07\u002F08] **Agentic-R1: Distilled Dual-Strategy Reasoning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.05707) | [code]\n\n- [2025\u002F06\u002F28] **Knowledge Augmented Finetuning Matters in both RAG and Agent Based Dialog Systems** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.22852) | [code]\n\n- [2025\u002F06\u002F04] **Go-Browse: Training Web Agents with Structured Exploration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.03533) | [code]\n\n- [2025\u002F06\u002F02] **AgentCPM-GUI: Building Mobile-Use Agents with Reinforcement Fine-Tuning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.01391) | [code]\n\n- [2025\u002F05\u002F31] **ARIA: Training Language Agents with Intention-Driven Reward Aggregation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.00539) | [code]\n\n- [2025\u002F05\u002F28] **LaMDAgent: An Autonomous Framework for Post-Training Pipeline Optimization via LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.21963) | [code]\n\n- [2025\u002F05\u002F27] **BehaviorSFT: Behavioral Token Conditioning for Clinical Agents Across the Proactivity Spectrum** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.21757) | [code]\n\n- [2025\u002F05\u002F26] **Frictional Agent Alignment Framework: Slow Down and Don&#39;t Break Things** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.19428) | [code]\n\n- [2025\u002F05\u002F26] **Training LLM-Based Agents with Synthetic Self-Reflected Trajectories and Partial Masking** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.20023) | [code]\n\n- [2025\u002F05\u002F26] **MaskSearch: A Universal Pre-Training Framework to Enhance Agentic Search Capability** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.20285) | [code]\n\n- [2025\u002F03\u002F05] **MAS-GPT: Training LLMs to Build LLM-based Multi-Agent Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.03686) | [code]\n\n- [2025\u002F03\u002F05] **Enhancing Collective Intelligence in Large Language Models Through Emotional Integration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.04849) | [code]\n\n- [2025\u002F03\u002F04] **ATLaS: Agent Tuning via Learning 
Critical Steps** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.02197) | [code]\n\n- [2025\u002F02\u002F24] **Training a Generally Curious Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.17543) | [code]\n\n- [2025\u002F02\u002F19] **UM_FHS at TREC 2024 PLABA: Exploration of Fine-tuning and AI agent approach for plain language adaptations of biomedical text** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.14144) | [code]\n\n- [2025\u002F02\u002F18] **Training Turn-by-Turn Verifiers for Dialogue Tutoring Agents: The Curious Case of LLMs as Your Coding Tutors** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.13311) | [code]\n\n- [2025\u002F02\u002F11] **Multi-Agent Collaboration for Multilingual Code Instruction Tuning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.07487) | [code]\n\n- [2025\u002F02\u002F10] **Hephaestus: Improving Fundamental Agent Capabilities of Large Language Models through Continual Pre-Training** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.06589) | [code]\n\n- [2025\u002F01\u002F10] **Multiagent Finetuning: Self Improvement with Diverse Reasoning Chains** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.05707) | [code]\n\n- [2025\u002F01\u002F03] **AgentRefine: Enhancing Agent Generalization through Refinement Tuning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.01702) | [code]\n\n- [2024\u002F12\u002F30] **Training Software Engineering Agents and Verifiers with SWE-Gym** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.21139) | [code]\n\n- [2024\u002F12\u002F30] **Aviary: training language agents on challenging scientific tasks** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.21154) | [code]\n\n- [2024\u002F12\u002F16] **Virtual Agent-Based Communication Skills Training to Facilitate Health Persuasion Among Peers** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.12061) | [code]\n\n- [2024\u002F11\u002F29] 
**Training Agents with Weakly Supervised Feedback from Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.19547) | [code]\n\n- [2024\u002F11\u002F21] **Star-Agents: Automatic Data Optimization with LLM Agents for Instruction Tuning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.14497) | [code]\n\n- [2024\u002F10\u002F20] **Training Language Models to Critique With Multi-agent Feedback** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.15287) | [code]\n\n- [2024\u002F10\u002F16] **Proactive Agent: Shifting LLM Agents from Reactive Responses to Active Assistance** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.12361) | [code]\n\n- [2024\u002F10\u002F10] **AgentBank: Towards Generalized LLM Agents via Fine-Tuning on 50000+ Interaction Trajectories** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.07706) | [code]\n\n- [2024\u002F07\u002F25] **Recursive Introspection: Teaching Language Model Agents How to Self-Improve** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.18219) | [code]\n\n- [2024\u002F06\u002F11] **CoEvol: Constructing Better Responses for Instruction Finetuning through Multi-Agent Cooperation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.07054) | [[code]](https:\u002F\u002Fgithub.com\u002Flirenhao1997\u002Fcoevol)\n\n- [2024\u002F04\u002F05] **Social Skill Training with Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.04204) | [code]\n\n- [2024\u002F04\u002F02] **CMAT: A Multi-Agent Collaboration Tuning Framework for Enhancing Small Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.01663) | [code]\n\n- [2024\u002F03\u002F29] **Enhancing the General Agent Capabilities of Low-Parameter LLMs through Tuning and Multi-Branch Reasoning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.19962) | [code]\n\n- [2024\u002F03\u002F21] **ReAct Meets ActRe: When Language Agents Enjoy Training 
Data Autonomy** | [[paper]](https://arxiv.org/abs/2403.14589) | [code]

- [2024/03/19] **Agent-FLAN: Designing Data and Methods of Effective Agent Tuning for Large Language Models** | [[paper]](https://arxiv.org/abs/2403.12881) | [code]

- [2024/02/23] **AgentOhana: Design Unified Data and Training Pipeline for Effective Agent Learning** | [[paper]](https://arxiv.org/abs/2402.15506) | [code]

- [2024/02/21] **Neeko: Leveraging Dynamic LoRA for Efficient Multi-Character Role-Playing Agent** | [[paper]](https://arxiv.org/abs/2402.13717) | [code]

- [2024/02/18] **Learning From Failure: Integrating Negative Examples when Fine-tuning Large Language Models as Agents** | [[paper]](https://arxiv.org/abs/2402.11651) | [code]

- [2024/01/10] **Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training** | [[paper]](https://arxiv.org/abs/2401.05566) | [code]

- [2024/01/05] **From LLM to Conversational Agent: A Memory Enhanced Architecture with Fine-Tuning of Large Language Models** | [[paper]](https://arxiv.org/abs/2401.02777) | [code]

- [2023/12/22] **Pangu-Agent: A Fine-Tunable Generalist Agent with Structured Reasoning** | [[paper]](https://arxiv.org/abs/2312.14878) | [code]

- [2023/11/28] **Embodied Multi-Modal Agent trained by an LLM from a Parallel TextWorld** | [[paper]](https://arxiv.org/abs/2311.16714) | [code]

- [2023/10/19] **AgentTuning: Enabling Generalized Agent Abilities for LLMs** | [[paper]](https://arxiv.org/abs/2310.12823) | [code]

- [2023/10/09] **FireAct: Toward Language Agent Fine-tuning** | [[paper]](https://arxiv.org/abs/2310.05915) | [code]

- [2023/05/26] **Training Socially Aligned Language Models on Simulated Social Interactions** | [[paper]](https://arxiv.org/abs/2305.16960) | [code]

#### RL
- [2025/07/03] **MemAgent: Reshaping Long-Context LLM with Multi-Conv RL-based Memory Agent** | [[paper]](https://arxiv.org/abs/2507.02259) | [code]

- [2025/07/03] **RLVER: Reinforcement Learning with Verifiable Emotion Rewards for Empathetic Agents** | [[paper]](https://arxiv.org/abs/2507.03112) | [code]

- [2025/07/02] **OpenTable-R1: A Reinforcement Learning Augmented Tool Agent for Open-Domain Table Question Answering** | [[paper]](https://arxiv.org/abs/2507.03018) | [code]

- [2025/06/30] **L0: Reinforcement Learning to Become General Agents** | [[paper]](https://arxiv.org/abs/2506.23667) | [code]

- [2025/06/30] **Auto-TA: Towards Scalable Automated Thematic Analysis (TA) via Multi-Agent Large Language Models with Reinforcement Learning** | [[paper]](https://arxiv.org/abs/2506.23998) | [code]

- [2025/06/30] **SPIRAL: Self-Play on Zero-Sum Games Incentivizes Reasoning via Multi-Agent Multi-Turn Reinforcement Learning** | [[paper]](https://arxiv.org/abs/2506.24119) | [code]

- [2025/06/24] **KnowRL: Exploring Knowledgeable Reinforcement Learning for Factuality** | [[paper]](https://arxiv.org/abs/2506.19807) | [code]

- [2025/06/16] **Language Agents for Hypothesis-driven Clinical Decision Making with Reinforcement Learning** | [[paper]](https://arxiv.org/abs/2506.13474) | [code]

- [2025/06/13] **Agent-RLVR: Training Software Engineering Agents via Guidance and Environment Rewards** | [[paper]](https://arxiv.org/abs/2506.11425) | [code]

- [2025/05/29] **ML-Agent: Reinforcing LLM Agents for Autonomous Machine Learning Engineering** | [[paper]](https://arxiv.org/abs/2505.23723) | [code]

- [2025/05/28] **WorkForceAgent-R1: Incentivizing Reasoning Capability in LLM-based Web Agents via Reinforcement Learning** | [[paper]](https://arxiv.org/abs/2505.22942) | [code]

- [2025/05/28] **WebDancer: Towards Autonomous Information Seeking Agency** | [[paper]](https://arxiv.org/abs/2505.22648) | [[code]](https://github.com/Alibaba-NLP/WebAgent)

- [2025/05/27] **SPA-RL: Reinforcing LLM Agents via Stepwise Progress Attribution** | [[paper]](https://arxiv.org/abs/2505.20732) | [[code]](https://github.com/WangHanLinHenry/SPA-RL-Agent)

- [2025/05/26] **DoctorAgent-RL: A Multi-Agent Collaborative Reinforcement Learning System for Multi-Turn Clinical Dialogue** | [[paper]](https://arxiv.org/abs/2505.19630) | [code]

- [2025/05/26] **REARANK: Reasoning Re-ranking Agent via Reinforcement Learning** | [[paper]](https://arxiv.org/abs/2505.20046) | [code]

- [2025/05/22] **WebAgent-R1: Training Web Agents via End-to-End Multi-Turn Reinforcement Learning** | [[paper]](https://arxiv.org/abs/2505.16421) | [code]

- [2025/05/21] **An Empirical Study on Reinforcement Learning for Reasoning-Search Interleaved LLM Agents** | [[paper]](https://arxiv.org/abs/2505.15117) | [code]

- [2025/05/20] **Reinforcing Question Answering Agents with Minimalist Policy Gradient Optimization** | [[paper]](https://arxiv.org/abs/2505.17086) | [code]

- [2025/05/20] **s3: You Don't Need That Much Data to Train a Search Agent via RL** | [[paper]](https://arxiv.org/abs/2505.14146) | [code]

- [2025/05/17] **Retrospex: Language Agent Meets Offline Reinforcement Learning Critic** | [[paper]](https://arxiv.org/abs/2505.11807) | [code]

- [2025/05/06] **Divide, Optimize, Merge: Fine-Grained LLM Agent Optimization at Scale** | [[paper]](https://arxiv.org/abs/2505.03973) | [code]

- [2025/04/24] **RAGEN: Understanding Self-Evolution in LLM Agents via Multi-Turn Reinforcement Learning** | [[paper]](https://arxiv.org/abs/2504.20073) | [code]

- [2025/04/20] **Meta-Thinking in LLMs via Multi-Agent Reinforcement Learning: A Survey** | [[paper]](https://arxiv.org/abs/2504.14520) | [code]

- [2025/04/04] **Learning Natural Language Constraints for Safe Reinforcement Learning of Language Agents** | [[paper]](https://arxiv.org/abs/2504.03185) | [code]

- [2025/03/16] **LLM-Mediated Guidance of MARL Systems** | [[paper]](https://arxiv.org/abs/2503.13553) | [code]

- [2025/03/12] **ReMA: Learning to Meta-think for LLMs with Multi-Agent Reinforcement Learning** | [[paper]](https://arxiv.org/abs/2503.09501) | [code]

- [2025/03/03] **Improving Retrospective Language Agents via Joint Policy Gradient Optimization** | [[paper]](https://arxiv.org/abs/2503.01490) | [code]

- [2025/02/25] **AgentRM: Enhancing Agent Generalization with Reward Modeling** | [[paper]](https://arxiv.org/abs/2502.18407) | [code]

- [2025/02/09] **Training Language Models for Social Deduction with Multi-Agent Reinforcement Learning** | [[paper]](https://arxiv.org/abs/2502.06060) | [code]

- [2025/02/06] **Multi-Agent Reinforcement Learning with Focal Diversity Optimization** | [[paper]](https://arxiv.org/abs/2502.04492) | [code]

- [2025/01/25] **Improving Retrieval-Augmented Generation through Multi-Agent Reinforcement Learning** | [[paper]](https://arxiv.org/abs/2501.15228) | [code]

- [2024/11/26] **LLM-Based Offline Learning for Embodied Agents via Consistency-Guided Reward Ensemble** | [[paper]](https://arxiv.org/abs/2411.17135) | [code]

- [2024/11/07] **Interactive Dialogue Agents via Reinforcement Learning on Hindsight Regenerations** | [[paper]](https://arxiv.org/abs/2411.05194) | [code]

- [2024/11/06] **From Novice to Expert: LLM Agent Policy Optimization via Step-wise Reinforcement Learning** | [[paper]](https://arxiv.org/abs/2411.03817) | [code]

- [2024/11/04] **WebRL: Training LLM Web Agents via Self-Evolving Online Curriculum Reinforcement Learning** | [[paper]](https://arxiv.org/abs/2411.02337) | [code]

- [2024/10/11] **Words as Beacons: Guiding RL Agents with High-Level Language Prompts** | [[paper]](https://arxiv.org/abs/2410.08632) | [code]

- [2024/10/10] **MACPO: Weak-to-Strong Alignment via Multi-Agent Contrastive Preference Optimization** | [[paper]](https://arxiv.org/abs/2410.07672) | [code]

- [2024/07/02] **Predicting vs. Acting: A Trade-off Between World Modeling & Agent Modeling** | [[paper]](https://arxiv.org/abs/2407.02446) | [code]

- [2024/06/26] **Mental Modeling of Reinforcement Learning Agents by Language Models** | [[paper]](https://arxiv.org/abs/2406.18505) | [code]

- [2024/06/17] **Input Conditioned Graph Generation for Language Agents** | [[paper]](https://arxiv.org/abs/2406.11555) | [[code]](https://github.com/lukasvierling/dynamicgptswarm)

- [2024/06/05] **LLM-based Rewriting of Inappropriate Argumentation using Reinforcement Learning from Machine Feedback** | [[paper]](https://arxiv.org/abs/2406.03363) | [code]

- [2024/06/03] **Re-ReST: Reflection-Reinforced Self-Training for Language Agents** | [[paper]](https://arxiv.org/abs/2406.01495) | [[code]](https://github.com/PlusLabNLP/Re-ReST)

- [2024/05/30] **Safe Multi-agent Reinforcement Learning with Natural Language Constraints** | [[paper]](https://arxiv.org/abs/2405.20018) | [code]

- [2024/05/17] **LLM-based Multi-Agent Reinforcement Learning: Current and Future Directions** | [[paper]](https://arxiv.org/abs/2405.11106) | [code]

- [2024/05/16] **Fine-Tuning Large Vision-Language Models as Decision-Making Agents via Reinforcement Learning** | [[paper]](https://arxiv.org/abs/2405.10292) | [code]

- [2024/05/01] **Navigating WebAI: Training Agents to Complete Web Tasks with Large Language Models and Reinforcement Learning** | [[paper]](https://arxiv.org/abs/2405.00516) | [code]

- [2024/03/05] **Language Guided Exploration for RL Agents in Text Environments** | [[paper]](https://arxiv.org/abs/2403.03141) | [code]

- [2024/02/17] **Offline Training of Language Model Agents with Functions as Learnable Weights** | [[paper]](https://arxiv.org/abs/2402.11359) | [code]

- [2024/02/02] **StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback** | [[paper]](https://arxiv.org/abs/2402.01391) | [code]

- [2023/10/25] **MultiPrompter: Cooperative Prompt Optimization with Multi-Agent Reinforcement Learning** | [[paper]](https://arxiv.org/abs/2310.16730) | [code]

- [2023/03/29] **Skill Reinforcement Learning and Planning for Open-World Long-Horizon Tasks** | [[paper]](https://arxiv.org/abs/2303.16563) | [code]

#### DPO
- [2025/06/17] **Expectation Confirmation Preference Optimization for Multi-Turn Conversational Recommendation Agent** | [[paper]](https://arxiv.org/abs/2506.14302) | [code]

- [2025/06/04] **Debate, Reflect, and Distill: Multi-Agent Feedback with Tree-Structured Preference Optimization for Efficient Language Model Enhancement** | [[paper]](https://arxiv.org/abs/2506.03541) | [code]

- [2025/06/02] **PGPO: Enhancing Agent Reasoning via Pseudocode-style Planning Guided Preference Optimization** | [[paper]](https://arxiv.org/abs/2506.01475) | [code]

- [2025/05/26] **MaskSearch: A Universal Pre-Training Framework to Enhance Agentic Search Capability** | [[paper]](https://arxiv.org/abs/2505.20285) | [code]

- [2025/05/04] **Adaptive Thinking via Mode Policy Optimization for Social Language Agents** | [[paper]](https://arxiv.org/abs/2505.02156) | [code]

- [2025/04/27] **Anyprefer: An Agentic Framework for Preference Data Synthesis** | [[paper]](https://arxiv.org/abs/2504.19276) | [code]

- [2025/02/26] **Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems** | [[paper]](https://arxiv.org/abs/2502.19328) | [[code]](https://github.com/THU-KEG/Agentic-Reward-Modeling)

- [2025/01/03] **SDPO: Segment-Level Direct Preference Optimization for Social Agents** | [[paper]](https://arxiv.org/abs/2501.01821) | [code]

- [2024/10/29] **Flow-DPO: Improving LLM Mathematical Reasoning through Online Multi-Agent Learning** | [[paper]](https://arxiv.org/abs/2410.22304) | [code]

- [2024/05/31] **Learning to Clarify: Multi-turn Conversations with Action-Based Contrastive Self-Training** | [[paper]](https://arxiv.org/abs/2406.00222) | [code]



### Scaling
#### Single-Agent Framework
- [2025/07/08] **Agent KB: Leveraging Cross-Domain Experience for Agentic Problem Solving** | [[paper]](https://arxiv.org/abs/2507.06229) | [code]

- [2025/07/04] **GRAFT: A Graph-based Flow-aware Agentic Framework for Document-level Machine Translation** | [[paper]](https://arxiv.org/abs/2507.03311) | [code]

- [2025/06/29] **AURA: Agent for Understanding, Reasoning, and Automated Tool Use in Voice-Driven Tasks** | [[paper]](https://arxiv.org/abs/2506.23049) | [code]

- [2025/06/27] **A Large Language Model-Empowered Agent for Reliable and Robust Structural Analysis** | [[paper]](https://arxiv.org/abs/2507.02938) | [code]

- [2025/06/17] **VIDEE: Visual and Interactive Decomposition, Execution, and Evaluation of Text Analytics with Intelligent Agents** | [[paper]](https://arxiv.org/abs/2506.21582) | [code]

- [2025/06/17] **OAgents: An Empirical Study of Building Effective Agents** | [[paper]](https://arxiv.org/abs/2506.15741) | [code]

- [2025/06/16] **Leveraging In-Context Learning for Language Model Agents** | [[paper]](https://arxiv.org/abs/2506.13109) | [code]

- [2025/06/14] **Towards Building General Purpose Embedding Models for Industry 4.0 Agents** | [[paper]](https://arxiv.org/abs/2506.12607) | [code]

- [2025/06/12] **AutoMind: Adaptive Knowledgeable Agent for Automated Data Science** | [[paper]](https://arxiv.org/abs/2506.10974) | [code]

- [2025/06/03] **DIAMOND: An LLM-Driven Agent for Context-Aware Baseball Highlight Summarization** | [[paper]](https://arxiv.org/abs/2506.02351) | [code]

- [2025/06/03] **Comparative Analysis of AI Agent Architectures for Entity Relationship Classification** | [[paper]](https://arxiv.org/abs/2506.02426) | [code]

- [2025/06/02] **Self-Challenging Language Model Agents** | [[paper]](https://arxiv.org/abs/2506.01716) | [code]

- [2025/05/30] **NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization** | [[paper]](https://arxiv.org/abs/2505.24575) | [code]

- [2025/05/21] **ViQAgent: Zero-Shot Video Question Answering via Agent with Open-Vocabulary Grounding Validation** | [[paper]](https://arxiv.org/abs/2505.15928) | [code]

- [2025/05/20] **ContextAgent: Context-Aware Proactive LLM Agents with Open-World Sensory Perceptions** | [[paper]](https://arxiv.org/abs/2505.14668) | [code]

- [2025/05/12] **Putting It All into Context: Simplifying Agents with LCLMs** | [[paper]](https://arxiv.org/abs/2505.08120) | [code]

- [2025/04/17] **Pandora: A Code-Driven Large Language Model Agent for Unified Reasoning Across Diverse Structured Knowledge** | [[paper]](https://arxiv.org/abs/2504.12734) | [code]

- [2025/04/11] **Toward Super Agent System with Hybrid AI Routers** | [[paper]](https://arxiv.org/abs/2504.10519) | [code]

- [2025/04/10] **AgentAda: Skill-Adaptive Data Analytics for Tailored Insight Discovery** | [[paper]](https://arxiv.org/abs/2504.07421) | [code]

- [2025/04/07] **DoCIA: An Online Document-Level Context Incorporation Agent for Speech Translation** | [[paper]](https://arxiv.org/abs/2504.05122) | [code]

- [2025/03/20] **Do Visual Imaginations Improve Vision-and-Language Navigation Agents?** | [[paper]](https://arxiv.org/abs/2503.16394) | [code]

- [2025/03/14] **Large Reasoning Models in Agent Scenarios: Exploring the Necessity of Reasoning Capabilities** | [[paper]](https://arxiv.org/abs/2503.11074) | [code]

- [2025/03/10] **DatawiseAgent: A Notebook-Centric LLM Agent Framework for Automated Data Science** | [[paper]](https://arxiv.org/abs/2503.07044) | [code]

- [2025/03/10] **ASTRA: A Negotiation Agent with Adaptive and Strategic Reasoning through Action in Dynamic Offer Optimization** | [[paper]](https://arxiv.org/abs/2503.07129) | [code]

- [2025/02/26] **TheoremExplainAgent: Towards Multimodal Explanations for LLM Theorem Understanding** | [[paper]](https://arxiv.org/abs/2502.19400) | [code]

- [2025/02/14] **Agentic Verification for Ambiguous Query Disambiguation** | [[paper]](https://arxiv.org/abs/2502.10352) | [code]

- [2025/02/12] **SPeCtrum: A Grounded Framework for Multidimensional Identity Representation in LLM-Based Agent** | [[paper]](https://arxiv.org/abs/2502.08599) | [code]

- [2025/02/09] **AutoAgent: A Fully-Automated and Zero-Code Framework for LLM Agents** | [[paper]](https://arxiv.org/abs/2502.05957) | [code]

- [2025/02/04] **Adaptive Self-improvement LLM Agentic System for ML Library Development** | [[paper]](https://arxiv.org/abs/2502.02534) | [code]

- [2025/01/31] **Enabling Autonomic Microservice Management through Self-Learning Agents** | [[paper]](https://arxiv.org/abs/2501.19056) | [code]

- [2024/12/28] **OneKE: A Dockerized Schema-Guided LLM Agent-based Knowledge Extraction System** | [[paper]](https://arxiv.org/abs/2412.20005) | [code]

- [2024/12/21] **Self-guided Knowledgeable Network of Thoughts: Amplifying Reasoning with Large Language Models** | [[paper]](https://arxiv.org/abs/2412.16533) | [code]

- [2024/12/15] **AgentPS: Agentic Process Supervision for Multi-modal Content Quality Assurance through Multi-round QA** | [[paper]](https://arxiv.org/abs/2412.15251) | [code]

- [2024/12/11] **A Multimodal Social Agent** | [[paper]](https://arxiv.org/abs/2501.06189) | [code]

- [2024/12/11] **Federated In-Context LLM Agent Learning** | [[paper]](https://arxiv.org/abs/2412.08054) | [code]

- [2024/12/04] **How to Correctly do Semantic Backpropagation on Language-based Agentic Systems** | [[paper]](https://arxiv.org/abs/2412.03624) | [code]

- [2024/12/02] **SAUP: Situation Awareness Uncertainty Propagation on LLM Agent** | [[paper]](https://arxiv.org/abs/2412.01033) | [code]

- [2024/12/01] **Towards Adaptive Mechanism Activation in Language Agent** | [[paper]](https://arxiv.org/abs/2412.00722) | [code]

- [2024/11/20] **MindForge: Empowering Embodied Agents with Theory of Mind for Lifelong Collaborative Learning** | [[paper]](https://arxiv.org/abs/2411.12977) | [code]

- [2024/11/16] **IntentGPT: Few-shot Intent Discovery with Large Language Models** | [[paper]](https://arxiv.org/abs/2411.10670) | [code]

- [2024/11/04] **DynaSaur: Large Language Agents Beyond Predefined Actions** | [[paper]](https://arxiv.org/abs/2411.01747) | [code]

- [2024/11/04] **CRMArena: Understanding the Capacity of LLM Agents to Perform Professional CRM Tasks in Realistic Environments** | [[paper]](https://arxiv.org/abs/2411.02305) | [code]

- [2024/10/29] **ADAM: An Embodied Causal Agent in Open-World Environments** | [[paper]](https://arxiv.org/abs/2410.22194) | [code]

- [2024/10/27] **TrajAgent: An Agent Framework for Unified Trajectory Modelling** | [[paper]](https://arxiv.org/abs/2410.20445) | [code]

- [2024/10/22] **Adsorb-Agent: Autonomous Identification of Stable Adsorption Configurations via Large Language Model Agent** | [[paper]](https://arxiv.org/abs/2410.16658) | [code]

- [2024/10/11] **Encoding Agent Trajectories as Representations with Sequence Transformers** | [[paper]](https://arxiv.org/abs/2410.09204) | [code]

- [2024/10/10] **Agents Thinking Fast and Slow: A Talker-Reasoner Architecture** | [[paper]](https://arxiv.org/abs/2410.08328) | [code]

- [2024/10/08] **AgentSquare: Automatic LLM Agent Search in Modular Design Space** | [[paper]](https://arxiv.org/abs/2410.06153) | [[code]](https://github.com/tsinghua-fib-lab/AgentSquare)

- [2024/10/08] **Applying Refusal-Vector Ablation to Llama 3.1 70B Agents** | [[paper]](https://arxiv.org/abs/2410.10871) | [code]

- [2024/09/24] **MOSS: Enabling Code-Driven Evolution and Context Management for AI Agents** | [[paper]](https://arxiv.org/abs/2409.16120) | [code]

- [2024/09/19] **Textualized Agent-Style Reasoning for Complex Tasks by Multiple Round LLM Generation** | [[paper]](https://arxiv.org/abs/2409.12411) | [code]

- [2024/09/15] **Automatic Control With Human-Like Reasoning: Exploring Language Model Embodied Air Traffic Agents** | [[paper]](https://arxiv.org/abs/2409.09717) | [code]

- [2024/09/12] **Self-Supervised Inference of Agents in Trustless Environments** | [[paper]](https://arxiv.org/abs/2409.08386) | [code]

- [2024/09/05] **From MOOC to MAIC: Reshaping Online Teaching and Learning through LLM-driven Agents** | [[paper]](https://arxiv.org/abs/2409.03512) | [code]

- [2024/09/05] **Rx Strategist: Prescription Verification using LLM Agents System** | [[paper]](https://arxiv.org/abs/2409.03440) | [code]

- [2024/09/03] **AgentRE: An Agent-Based Framework for Navigating Complex Information Landscapes in Relation Extraction** | [[paper]](https://arxiv.org/abs/2409.01854) | [code]

- [2024/08/26] **AgentMove: A Large Language Model based Agentic Framework for Zero-shot Next Location Prediction** | [[paper]](https://arxiv.org/abs/2408.13986) | [code]

- [2024/08/19] **Anim-Director: A Large Multimodal Model Powered Agent for Controllable Animation Video Generation** | [[paper]](https://arxiv.org/abs/2408.09787) | [code]

- [2024/08/13] **Causal Agent based on Large Language Model** | [[paper]](https://arxiv.org/abs/2408.06849) | [code]

- [2024/08/02] **Coalitions of Large Language Models Increase the Robustness of AI Agents** | [[paper]](https://arxiv.org/abs/2408.01380) | [code]

- [2024/07/27] **AgentPeerTalk: Empowering Students through Agentic-AI-Driven Discernment of Bullying and Joking in Peer Interactions in Schools** | [[paper]](https://arxiv.org/abs/2408.01459) | [code]

- [2024/07/25] **Enhancing Agent Learning through World Dynamics Modeling** | [[paper]](https://arxiv.org/abs/2407.17695) | [code]

- [2024/07/25] **RestoreAgent: Autonomous Image Restoration Agent via Multimodal Large Language Models** | [[paper]](https://arxiv.org/abs/2407.18035) | [code]

- [2024/07/16] **Preemptive Detection and Correction of Misaligned Actions in LLM Agents** | [[paper]](https://arxiv.org/abs/2407.11843) | [code]

- [2024/07/15] **Sibyl: Simple yet Effective Agent Framework for Complex Real-world Reasoning** | [[paper]](https://arxiv.org/abs/2407.10718) | [code]

- [2024/07/02] **Beyond Numeric Awards: In-Context Dueling Bandits with LLM Agents** | [[paper]](https://arxiv.org/abs/2407.01887) | [code]

- [2024/06/24] **OmAgent: A Multi-modal Agent Framework for Complex Video Understanding with Task Divide-and-Conquer** | [[paper]](https://arxiv.org/abs/2406.16620) | [code]

- [2024/06/07] **SelfGoal: Your Language Agents Already Know How to Achieve High-level Goals** | [[paper]](https://arxiv.org/abs/2406.04784) | [code]

- [2024/05/25] **AutoManual: Constructing Instruction Manuals by LLM Agents via Interactive Environmental Learning** | [[paper]](https://arxiv.org/abs/2405.16247) | [code]

- [2024/05/24] **Intelligent Go-Explore: Standing on the Shoulders of Giant Foundation Models** | [[paper]](https://arxiv.org/abs/2405.15143) | [[code]](https://github.com/conglu1997/intelligent-go-explore)

- [2024/05/16] **Agent Design Pattern Catalogue: A Collection of Architectural Patterns for Foundation Model based Agents** | [[paper]](https://arxiv.org/abs/2405.10467) | [code]

- [2024/04/30] **Large Language Model Agent for Fake News Detection** | [[paper]](https://arxiv.org/abs/2405.01593) | [code]

- [2024/04/28] **Logic Agent: Enhancing Validity with Logic Rule Invocation** | [[paper]](https://arxiv.org/abs/2404.18130) | [code]

- [2024/04/13] **LLMSat: A Large Language Model-Based Goal-Oriented Agent for Autonomous Space Exploration** | [[paper]](https://arxiv.org/abs/2405.01392) | [code]

- [2024/04/01] **TraveLER: A Modular Multi-LMM Agent Framework for Video Question-Answering** | [[paper]](https://arxiv.org/abs/2404.01476) | [code]

- [2024/03/29] **ITCMA: A Generative Agent Based on a Computational Consciousness Structure** | [[paper]](https://arxiv.org/abs/2403.20097) | [code]

- [2024/02/25] **Bootstrapping Cognitive Agents with a Large Language Model** | [[paper]](https://arxiv.org/abs/2403.00810) | [code]

- [2024/02/24] **Empowering Large Language Model Agents through Action Learning** | [[paper]](https://arxiv.org/abs/2402.15809) | [[code]](https://github.com/zhao-ht/learnact)

- [2024/02/20] **Soft Self-Consistency Improves Language Model Agents** | [[paper]](https://arxiv.org/abs/2402.13212) | [code]

- [2024/02/04] **NavHint: Vision and Language Navigation Agent with a Hint Generator** | [[paper]](https://arxiv.org/abs/2402.02559) | [code]

- [2024/01/05] **AFSPP: Agent Framework for Shaping Preference and Personality with Large Language Models** | [[paper]](https://arxiv.org/abs/2401.02870) | [code]

- [2023/11/23] **Controlling Large Language Model-based Agents for Large-Scale Decision-Making: An Actor-Critic Approach** | [[paper]](https://arxiv.org/abs/2311.13884) | [code]

- [2023/11/02] **ProAgent: From Robotic Process Automation to Agentic Process Automation** | [[paper]](https://arxiv.org/abs/2311.10751) | [code]

- [2023/10/16] **CLIN: A Continually Learning Language Agent for Rapid Task Adaptation and Generalization** | [[paper]](https://arxiv.org/abs/2310.10134) | [code]

- [2023/09/29] **Reason for Future, Act for Now: A Principled Framework for Autonomous LLM Agents with Provable Sample Efficiency** | [[paper]](https://arxiv.org/abs/2309.17382) | [code]

- [2023/09/14] **Agents: An Open-source Framework for Autonomous Language Agents** | [[paper]](https://arxiv.org/abs/2309.07870) | [code]

- [2023/09/08] **A Versatile Graph Learning Approach through LLM-based Agent** | [[paper]](https://arxiv.org/abs/2309.04565) | [code]

- [2023/09/05] **Cognitive Architectures for Language Agents** | [[paper]](https://arxiv.org/abs/2309.02427) | [code]

- [2023/05/27] **SwiftSage: A Generative Agent with Fast and Slow Thinking for Complex Interactive Tasks** | [[paper]](https://arxiv.org/abs/2305.17390) | [code]

- [2023/05/25] **Voyager: An Open-Ended Embodied Agent with Large Language Models** | [[paper]](https://arxiv.org/abs/2305.16291) | [code]

#### Multi-Agent System
- [2025/07/09] **Pun Intended: Multi-Agent Translation of Wordplay with Contrastive Learning and Phonetic-Semantic Embeddings** | [[paper]](https://arxiv.org/abs/2507.06506) | [code]

- [2025/07/09] **MIND: A Multi-agent Framework for Zero-shot Harmful Meme Detection** | [[paper]](https://arxiv.org/abs/2507.06908) | [code]

- [2025/06/27] **GenEscape: Hierarchical Multi-Agent Generation of Escape Room Puzzles** | [[paper]](https://arxiv.org/abs/2506.21839) | [code]

- [2025/06/20] **SysTemp: A Multi-Agent System for Template-Based Generation of SysML v2** | [[paper]](https://arxiv.org/abs/2506.21608) | [code]

- [2025/06/19] **StoryWriter: A Multi-Agent Framework for Long Story Generation** | [[paper]](https://arxiv.org/abs/2506.16445) | [code]

- [2025/06/18] **AgentGroupChat-V2: Divide-and-Conquer Is What LLM-Based Multi-Agent System Need** | [[paper]](https://arxiv.org/abs/2506.15451) | [code]

- [2025/06/17] **MAS-LitEval : Multi-Agent System for Literary Translation Quality Assessment** | [[paper]](https://arxiv.org/abs/2506.14199) | [code]

- [2025/06/17] **Xolver: Multi-Agent Reasoning with Holistic Experience Learning Just Like an Olympiad Team** | [[paper]](https://arxiv.org/abs/2506.14234) | [code]

- [2025/06/13] **A Hybrid Multi-Agent Prompting Approach for Simplifying Complex Sentences** | [[paper]](https://arxiv.org/abs/2506.11681) | [code]

- [2025/06/13] **AutoGen Driven Multi Agent Framework for Iterative Crime Data Analysis and Prediction** | [[paper]](https://arxiv.org/abs/2506.11475) | [code]

- [2025/06/13] **Investigating the Potential of Large Language Model-Based Router Multi-Agent Architectures for Foundation Design Automation: A Task Classification and Expert Selection Study** | [[paper]](https://arxiv.org/abs/2506.13811) | [code]

- [2025/06/12] **A Multi-Agent Probabilistic Inference Framework Inspired by Kairanban-Style CoT System with IdoBata Conversation for Debiasing** | [[paper]](https://arxiv.org/abs/2506.21565) | [code]

- [2025/06/11] **Multi-Agent Language Models: Advancing Cooperation, Coordination, and Adaptation** | [[paper]](https://arxiv.org/abs/2506.09331) | [code]

- [2025/06/11] **ReasonMed: A 370K Multi-Agent Generated Dataset for Advancing Medical Reasoning** | [[paper]](https://arxiv.org/abs/2506.09513) | [code]

- [2025/06/11] **Chat-of-Thought: Collaborative Multi-Agent System for Generating Domain Specific Information** | [[paper]](https://arxiv.org/abs/2506.10086) | [code]

- [2025/06/10] **CAF-I: A Collaborative Multi-Agent Framework for Enhanced Irony Detection with Large Language Models** | [[paper]](https://arxiv.org/abs/2506.08430) | [code]

- [2025/06/10] **Reinforce LLM Reasoning through Multi-Agent Reflection** | [[paper]](https://arxiv.org/abs/2506.08379) | [code]

- [2025/06/09] **From Debate to Equilibrium: Belief-Driven Multi-Agent LLM Reasoning via Bayesian Nash Equilibrium** | [[paper]](https://arxiv.org/abs/2506.08292) | [code]

- [2025/06/08] **Theorem-of-Thought: A Multi-Agent Framework for Abductive, Deductive, and Inductive Reasoning in Language Models** | [[paper]](https://arxiv.org/abs/2506.07106) | [code]

- [2025/06/06] **MAPLE: Multi-Agent Adaptive Planning with Long-Term Memory for Table Reasoning** | [[paper]](https://arxiv.org/abs/2506.05813) | [code]

- [2025/06/06] **Does It Run and Is That Enough? Revisiting Text-to-Chart Generation with a Multi-Agent Approach** | [[paper]](https://arxiv.org/abs/2506.06175) | [code]

- [2025/06/05] **Demonstrations of Integrity Attacks in Multi-Agent Systems** | [[paper]](https://arxiv.org/abs/2506.04572) | [code]

- [2025/06/04] **CLAIM: An Intent-Driven Multi-Agent Framework for Analyzing Manipulation in Courtroom Dialogues** | [[paper]](https://arxiv.org/abs/2506.04131) | [code]

- [2025/06/03] **MASTER: Enhancing Large Language Model via Multi-Agent Simulated Teaching** | [[paper]](https://arxiv.org/abs/2506.02689) | [code]

- [2025/06/03] **Adaptive Graph Pruning for Multi-Agent Communication** | [[paper]](https://arxiv.org/abs/2506.02951) | [code]

- [2025/06/03] **A Multi-Agent Framework for Mitigating Dialect Biases in Privacy Policy Question-Answering Systems** | [[paper]](https://arxiv.org/abs/2506.02998) | [code]

- [2025/06/03] **Mitigating Manipulation and Enhancing Persuasion: A Reflective Multi-Agent Approach for Legal Argument Generation** | [[paper]](https://arxiv.org/abs/2506.02992) | [code]

- [2025/06/03] **MAEBE: Multi-Agent Emergent Behavior Framework** | [[paper]](https://arxiv.org/abs/2506.03053) | [code]

- [2025/06/02] **STORM-BORN: A Challenging Mathematical Derivations Dataset Curated via a Human-in-the-Loop Multi-Agent Framework** | [[paper]](https://arxiv.org/abs/2506.01531) | [code]

- [2025/06/02] **An Empirical Study of Group Conformity in Multi-Agent Systems** | [[paper]](https://arxiv.org/abs/2506.01332) | [code]

- [2025/05/31] **Goal-Aware Identification and Rectification of Misinformation in Multi-Agent Systems** | [[paper]](https://arxiv.org/abs/2506.00509) | [code]

- [2025/05/31] **PAKTON: A Multi-Agent Framework for Question Answering in Long Legal Agreements** | [[paper]](https://arxiv.org/abs/2506.00608) | [code]

- [2025/05/30] **CREFT: Sequential Multi-Agent LLM for Character Relation Extraction** | [[paper]](https://arxiv.org/abs/2505.24553) | [code]

- [2025/05/30] **Multiple LLM Agents Debate for Equitable Cultural Alignment** | [[paper]](https://arxiv.org/abs/2505.24671) | [code]

- [2025/05/30] **An Adversary-Resistant Multi-Agent LLM System via Credibility Scoring** | [[paper]](https://arxiv.org/abs/2505.24239) | [code]

- [2025/05/29] **Cross-Task Experiential Learning on LLM-based Multi-Agent Collaboration** | [[paper]](https://arxiv.org/abs/2505.23187) | [code]

- [2025/05/29] **OWL: Optimized Workforce Learning for General Multi-Agent Assistance in Real-World Task Automation** | [[paper]](https://arxiv.org/abs/2505.23885) | [code]

- [2025/05/28] **Co-Saving: Resource Aware Multi-Agent Collaboration for Software Development** | [[paper]](https://arxiv.org/abs/2505.21898) | [code]

- [2025/05/28] **CoMaPOI: A Collaborative Multi-Agent Framework for Next POI Prediction Bridging the Gap Between Trajectory and Language** | [[paper]](https://arxiv.org/abs/2505.23837) | [code]

- [2025/05/28] **GETReason: Enhancing Image Context Extraction through Hierarchical Multi-Agent Reasoning** | [[paper]](https://arxiv.org/abs/2505.21863) | [code]

- [2025/05/27] **Long Context Scaling: Divide and Conquer via Multi-Agent Question-driven Collaboration** | [[paper]](https://arxiv.org/abs/2505.20625) | [code]

- [2025/05/27] **Rethinking Information Synthesis in Multimodal Question Answering A Multi-Agent Perspective** | [[paper]](https://arxiv.org/abs/2505.20816) | [code]

- [2025/05/27] **Scaling External Knowledge Input Beyond Context Windows of LLMs via Multi-Agent Collaboration** | [[paper]](https://arxiv.org/abs/2505.21471) | [code]

- [2025/05/26] **CoTGuard: Using Chain-of-Thought Triggering for Copyright Protection in Multi-Agent LLM Systems** | [[paper]](https://arxiv.org/abs/2505.19405) | [code]

- [2025/05/26] **Multi-Agent Collaboration via Evolving Orchestration** | [[paper]](https://arxiv.org/abs/2505.19591) | [code]

- [2025/05/26] **Select, Read, and Write: A Multi-Agent Framework of Full-Text-based Related Work Generation** | [[paper]](https://arxiv.org/abs/2505.19647) | [code]

- [2025/05/26] **Project Riley: Multimodal Multi-Agent LLM Collaboration with Emotional Reasoning and Voting** | [[paper]](https://arxiv.org/abs/2505.20521) | [code]

- [2025/05/25] **MetaMind: Modeling Human Social Thoughts with Metacognitive Multi-Agent Systems** | [[paper]](https://arxiv.org/abs/2505.18943) | [code]

- [2025/05/25] **GUARDIAN: Safeguarding LLM Multi-Agent Collaborations with Temporal Graph Modeling** | [[paper]](https://arxiv.org/abs/2505.19234) | [code]

- [2025/05/23] **ManuSearch: Democratizing Deep Search in Large Language Models with a Transparent and Open Multi-Agent Framework** | [[paper]](https://arxiv.org/abs/2505.18105) | [code]

- [2025/05/23] **PD$^3$: A Project Duplication Detection Framework via Adapted Multi-Agent Debate** | [[paper]](https://arxiv.org/abs/2505.17492) | [code]

- [2025/05/22] **EMULATE: A Multi-Agent Framework for Determining
the Veracity of Atomic Claims by Emulating Human Actions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.16576) | [code]\n\n- [2025\u002F05\u002F22] **X-MAS: Towards Building Multi-Agent Systems with Heterogeneous LLMs** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.16997) | [code]\n\n- [2025\u002F05\u002F21] **MAS-ZERO: Designing Multi-Agent Systems with Zero Supervision** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.14996) | [code]\n\n- [2025\u002F05\u002F20] **MAATS: A Multi-Agent Automated Translation System Based on MQM Evaluation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.14848) | [code]\n\n- [2025\u002F05\u002F20] **MLZero: A Multi-Agent System for End-to-end Machine Learning Automation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.13941) | [code]\n\n- [2025\u002F05\u002F19] **AD-AGENT: A Multi-agent Framework for End-to-end Anomaly Detection** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.12594) | [code]\n\n- [2025\u002F05\u002F18] **IP Leakage Attacks Targeting LLM-Based Multi-Agent Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.12442) | [code]\n\n- [2025\u002F05\u002F17] **BELLE: A Bi-Level Multi-Agent Reasoning Framework for Multi-Hop Question Answering** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.11811) | [code]\n\n- [2025\u002F05\u002F16] **Connecting the Dots: A Chain-of-Collaboration Prompting Framework for LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.10936) | [code]\n\n- [2025\u002F05\u002F15] **Assessing Collective Reasoning in Multi-Agent LLMs via Hidden Profile Tasks** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.11556) | [code]\n\n- [2025\u002F05\u002F12] **Towards Multi-Agent Reasoning Systems for Collaborative Expertise Delegation: An Exploratory Design Study** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.07313) | [code]\n\n- [2025\u002F05\u002F06] **The Power of 
Stories: Narrative Priming Shapes How LLM Agents Collaborate and Compete** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.03961) | [code]\n\n- [2025\u002F04\u002F30] **Which Agent Causes Task Failures and When? On Automated Failure Attribution of LLM Multi-Agent Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.00212) | [code]\n\n- [2025\u002F04\u002F26] **MATCHA: Can Multi-Agent Collaboration Build a Trustworthy Conversational Recommender?** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.20094) | [code]\n\n- [2025\u002F04\u002F24] **Collaborating Action by Action: A Multi-agent LLM Framework for Embodied Reasoning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.17950) | [code]\n\n- [2025\u002F04\u002F23] **Less is More: Enhancing Structured Multi-Agent Reasoning via Quality-Guided Distillation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.16408) | [code]\n\n- [2025\u002F04\u002F21] **EducationQ: Evaluating LLMs&#39; Teaching Capabilities Through Multi-Agent Dialogue Framework** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.14928) | [code]\n\n- [2025\u002F04\u002F17] **Are AI agents the new machine translation frontier? 
Challenges and opportunities of single- and multi-agent systems for multilingual digital communication** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.12891) | [code]\n\n- [2025\u002F04\u002F15] **X-Teaming: Multi-Turn Jailbreaks and Defenses with Adaptive Multi-Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.13203) | [code]\n\n- [2025\u002F04\u002F11] **Beyond Self-Reports: Multi-Observer Agents for Personality Assessment in Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.08399) | [code]\n\n- [2025\u002F04\u002F11] **DocAgent: A Multi-Agent System for Automated Code Documentation Generation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.08725) | [code]\n\n- [2025\u002F04\u002F08] **FactGuard: Leveraging Multi-Agent Systems to Generate Answerable and Unanswerable Questions for Enhanced Long-Context LLM Extraction** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.05607) | [code]\n\n- [2025\u002F04\u002F04] **YaleNLP @ PerAnsSumm 2025: Multi-Perspective Integration via Mixture-of-Agents for Enhanced Healthcare QA Summarization** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.03932) | [code]\n\n- [2025\u002F04\u002F02] **Self-Resource Allocation in Multi-Agent LLM Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.02051) | [code]\n\n- [2025\u002F04\u002F02] **Achieving Unanimous Consensus in Decision Making Using Multi-Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.02128) | [code]\n\n- [2025\u002F04\u002F01] **When Persuasion Overrides Truth in Multi-Agent LLM Debates: Introducing a Confidence-Weighted Persuasion Override Rate (CW-POR)** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.00374) | [code]\n\n- [2025\u002F04\u002F01] **AI Hiring with LLMs: A Context-Aware and Explainable Multi-Agent Framework for Resume Screening** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.02870) | [code]\n\n- 
[2025\u002F04\u002F01] **AgentNet: Decentralized Evolutionary Coordination for LLM-based Multi-Agent Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.00587) | [code]\n\n- [2025\u002F03\u002F31] **$\\textit{Agents Under Siege}$: Breaking Pragmatic Multi-Agent LLM Systems with Optimized Prompt Attacks** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.00218) | [code]\n\n- [2025\u002F03\u002F28] **WorkTeam: Constructing Workflows from Natural Language with Multi-Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.22473) | [code]\n\n- [2025\u002F03\u002F28] **Self-Evolving Multi-Agent Simulations for Realistic Clinical Interactions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.22678) | [code]\n\n- [2025\u002F03\u002F27] **Collab: Controlled Decoding using Mixture of Agents for LLM Alignment** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.21720) | [code]\n\n- [2025\u002F03\u002F27] **Debate-Driven Multi-Agent LLMs for Phishing Email Detection** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.22038) | [code]\n\n- [2025\u002F03\u002F26] **TAMA: A Human-AI Collaborative Thematic Analysis Framework Using Multi-Agent LLMs for Clinical Interviews** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.20666) | [code]\n\n- [2025\u002F03\u002F26] **3MDBench: Medical Multimodal Multi-agent Dialogue Benchmark** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.13861) | [code]\n\n- [2025\u002F03\u002F25] **Multi-agent Application System in Office Collaboration Scenarios** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.19584) | [code]\n\n- [2025\u002F03\u002F24] **AgentDropout: Dynamic Agent Elimination for Token-Efficient and High-Performance LLM-Based Multi-Agent Collaboration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.18891) | [code]\n\n- [2025\u002F03\u002F23] **MathAgent: Leveraging a Mixture-of-Math-Agent Framework for Real-World Multimodal 
Mathematical Error Detection** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.18132) | [code]\n\n- [2025\u002F03\u002F21] **ConvoGen: Enhancing Conversational AI with Synthetic Data: A Multi-Agent Approach** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.17460) | [code]\n\n- [2025\u002F03\u002F21] **MARS: A Multi-Agent Framework Incorporating Socratic Guidance for Automated Prompt Optimization** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.16874) | [code]\n\n- [2025\u002F03\u002F19] **When Pigs Get Sick: Multi-Agent AI for Swine Disease Detection** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.15204) | [code]\n\n- [2025\u002F03\u002F19] **MAMM-Refine: A Recipe for Improving Faithfulness in Generation with Multi-Agent Collaboration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.15272) | [code]\n\n- [2025\u002F03\u002F18] **Gricean Norms as a Basis for Effective Collaboration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.14484) | [code]\n\n- [2025\u002F03\u002F17] **Identifying Cooperative Personalities in Multi-agent Contexts through Personality Steering with Representation Engineering** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.12722) | [code]\n\n- [2025\u002F03\u002F17] **MAP: Evaluation and Multi-Agent Enhancement of Large Language Models for Inpatient Pathways** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.13205) | [code]\n\n- [2025\u002F03\u002F16] **LLM-Mediated Guidance of MARL Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.13553) | [code]\n\n- [2025\u002F03\u002F14] **AIstorian lets AI be a historian: A KG-powered multi-agent system for accurate biography generation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.11346) | [code]\n\n- [2025\u002F03\u002F14] **Prompt Injection Detection and Mitigation via AI Multi-Agent NLP Frameworks** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.11517) | [code]\n\n- 
[2025\u002F03\u002F14] **RAG-KG-IL: A Multi-Agent Hybrid Framework for Reducing Hallucinations and Enhancing LLM Reasoning through RAG and Incremental Knowledge Graph Learning Integration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.13514) | [code]\n\n- [2025\u002F03\u002F13] **LLMs Working in Harmony: A Survey on the Technological Aspects of Building Effective LLM-Based Multi Agent Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.01963) | [code]\n\n- [2025\u002F03\u002F12] **ReMA: Learning to Meta-think for LLMs with Multi-Agent Reinforcement Learning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.09501) | [code]\n\n- [2025\u002F03\u002F07] **MM-StoryAgent: Immersive Narrated Storybook Video Generation with a Multi-Agent Paradigm across Text, Image and Audio** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.05242) | [code]\n\n- [2025\u002F03\u002F07] **GEMA-Score: Granular Explainable Multi-Agent Score for Radiology Report Evaluation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.05347) | [code]\n\n- [2025\u002F03\u002F07] **Multi Agent based Medical Assistant for Edge Devices** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.05397) | [code]\n\n- [2025\u002F03\u002F05] **MA-LoT: Multi-Agent Lean-based Long Chain-of-Thought Reasoning enhances Formal Theorem Proving** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.03205) | [code]\n\n- [2025\u002F03\u002F05] **MAS-GPT: Training LLMs to Build LLM-based Multi-Agent Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.03686) | [code]\n\n- [2025\u002F03\u002F05] **Multi-Agent Systems Powered by Large Language Models: Applications in Swarm Intelligence** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.03800) | [code]\n\n- [2025\u002F03\u002F05] **Preserving Cultural Identity with Context-Aware Translation Through Multi-Agent AI Systems** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.04827) | [code]\n\n- [2025\u002F03\u002F05] **Enhancing Collective Intelligence in Large Language Models Through Emotional Integration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.04849) | [code]\n\n- [2025\u002F03\u002F04] **BRIDGE: Bootstrapping Text to Control Time-Series Generation via Multi-Agent Iterative Optimization and Diffusion Modelling** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.02445) | [code]\n\n- [2025\u002F03\u002F04] **Multi-Agent System for AI-Assisted Extraction of Narrative Arcs in TV Series** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.04817) | [code]\n\n- [2025\u002F03\u002F01] **Structured Reasoning for Fairness: A Multi-Agent Approach to Bias Detection in Textual Data** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.00355) | [code]\n\n- [2025\u002F02\u002F28] **PreMind: Multi-Agent Video Understanding for Advanced Indexing of Presentation-style Videos** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.00162) | [code]\n\n- [2025\u002F02\u002F27] **M^3Builder: A Multi-Agent System for Automated Machine Learning in Medical Imaging** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.20301) | [code]\n\n- [2025\u002F02\u002F26] **Stay Focused: Problem Drift in Multi-Agent Debate** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.19559) | [code]\n\n- [2025\u002F02\u002F26] **Voting or Consensus? 
Decision-Making in Multi-Agent Debate** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.19130) | [code]\n\n- [2025\u002F02\u002F25] **Enhancing Text Classification with a Novel Multi-Agent Collaboration Framework Leveraging BERT** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.18653) | [code]\n\n- [2025\u002F02\u002F25] **A Cooperative Multi-Agent Framework for Zero-Shot Named Entity Recognition** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.18702) | [code]\n\n- [2025\u002F02\u002F25] **Debt Collection Negotiations with Large Language Models: An Evaluation System and Optimizing Decision Making with Multi-Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.18228) | [code]\n\n- [2025\u002F02\u002F25] **FACT-AUDIT: An Adaptive Multi-Agent Framework for Dynamic Fact-Checking Evaluation of Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.17924) | [code]\n\n- [2025\u002F02\u002F24] **MobileSteward: Integrating Multiple App-Oriented Agents with Self-Evolution to Automate Cross-App Instructions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.16796) | [code]\n\n- [2025\u002F02\u002F24] **Mobile-Agent-V: Learning Mobile Device Operation Through Video-Guided Multi-Agent Collaboration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.17110) | [[code]](https:\u002F\u002Fgithub.com\u002FX-PLUG\u002FMobileAgent)\n\n- [2025\u002F02\u002F24] **METAL: A Multi-Agent Framework for Chart Generation with Test-Time Scaling** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.17651) | [code]\n\n- [2025\u002F02\u002F23] **The Hidden Strength of Disagreement: Unraveling the Consensus-Diversity Tradeoff in Adaptive Multi-Agent Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.16565) | [[code]](https:\u002F\u002Fgithub.com\u002Fwuzengqing001225\u002FConsensusDiversityTradeoffMAS)\n\n- [2025\u002F02\u002F20] **Enhancing Language Multi-Agent Learning with 
Multi-Agent Credit Re-Assignment for Interactive Environment Generalization** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.14496) | [code]\n\n- [2025\u002F02\u002F20] **CORBA: Contagious Recursive Blocking Attacks on Multi-Agent Systems Based on Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.14529) | [code]\n\n- [2025\u002F02\u002F17] **Table-Critic: A Multi-Agent Framework for Collaborative Criticism and Refinement in Table Reasoning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.11799) | [code]\n\n- [2025\u002F02\u002F17] **HARBOR: Exploring Persona Dynamics in Multi-Agent Competition** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.12149) | [code]\n\n- [2025\u002F02\u002F15] **Divergent Thoughts toward One Goal: LLM-based Multi-Agent Collaboration System for Electronic Design Automation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.10857) | [code]\n\n- [2025\u002F02\u002F13] **PathFinder: A Multi-Modal Multi-Agent System for Medical Diagnostic Decision-Making Applied to Histopathology** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.08916) | [code]\n\n- [2025\u002F02\u002F13] **Mind the Gaps: Logical English, Prolog, and Multi-agent Systems for Autonomous Vehicles** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.09216) | [code]\n\n- [2025\u002F02\u002F12] **Faithful, Unfaithful or Ambiguous? 
Multi-Agent Debate with Initial Stance for Summary Evaluation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.08514) | [code]\n\n- [2025\u002F02\u002F12] **If Multi-Agent Debate is the Answer, What is the Question?** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.08788) | [code]\n\n- [2025\u002F02\u002F11] **Don&#39;t Just Demo, Teach Me the Principles: A Principle-Based Multi-Agent Prompting Strategy for Text Classification** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.07165) | [code]\n\n- [2025\u002F02\u002F11] **Multi-Agent Collaboration for Multilingual Code Instruction Tuning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.07487) | [code]\n\n- [2025\u002F02\u002F10] **KARMA: Leveraging Multi-Agent LLMs for Automated Knowledge Graph Enrichment** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.06472) | [code]\n\n- [2025\u002F02\u002F09] **Preventing Rogue Agents Improves Multi-Agent Collaboration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.05986) | [code]\n\n- [2025\u002F02\u002F09] **The Application of MATEC (Multi-AI Agent Team Care) Framework in Sepsis Care** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.16433) | [code]\n\n- [2025\u002F02\u002F08] **CODESIM: Multi-Agent Code Generation and Problem Solving through Simulation-Driven Planning and Debugging** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.05664) | [code]\n\n- [2025\u002F02\u002F08] **Multi-Agent Simulator Drives Language Models for Legal Intensive Interaction** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.06882) | [code]\n\n- [2025\u002F02\u002F07] **S$^2$-MAD: Breaking the Token Barrier to Enhance Multi-Agent Debate Efficiency** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.04790) | [code]\n\n- [2025\u002F02\u002F06] **Multi-Agent Reinforcement Learning with Focal Diversity Optimization** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.04492) | 
[code]\n\n- [2025\u002F02\u002F06] **Enhancing Online Learning Efficiency Through Heterogeneous Resource Integration with a Multi-Agent RAG System** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.03948) | [code]\n\n- [2025\u002F02\u002F06] **Multi-agent Architecture Search via Agentic Supernet** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.04180) | [code]\n\n- [2025\u002F02\u002F04] **Position: Scaling LLM Agents Requires Asymptotic Analysis with LLM Primitives** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.04358) | [code]\n\n- [2025\u002F02\u002F04] **Multi-Agent Design: Optimizing Agents with Better Prompts and Topologies** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.02533) | [code]\n\n- [2025\u002F02\u002F03] **PlotGen: Multi-Agent LLM-based Scientific Data Visualization via Multimodal Feedback** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.00988) | [code]\n\n- [2025\u002F02\u002F03] **ChartCitor: Multi-Agent Framework for Fine-Grained Chart Visual Attribution** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.00989) | [code]\n\n- [2025\u002F02\u002F02] **Rethinking Mixture-of-Agents: Is Mixing Different Large Language Models Beneficial?** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.00674) | [code]\n\n- [2025\u002F02\u002F02] **Efficient Multi-Agent System Training with Data Influence-Oriented Tree Search** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.00955) | [code]\n\n- [2025\u002F01\u002F29] **Layered Chain-of-Thought Prompting for Multi-Agent LLM Systems: A Comprehensive Approach to Explainable Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.18645) | [code]\n\n- [2025\u002F01\u002F27] **MADP: Multi-Agent Deductive Planning for Enhanced Cognitive-Behavioral Mental Health Question Answer** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.15826) | [code]\n\n- [2025\u002F01\u002F25] **Improving 
Retrieval-Augmented Generation through Multi-Agent Reinforcement Learning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.15228) | [code]\n\n- [2025\u002F01\u002F24] **Multi-agent KTO: Reinforcing Strategic Interactions of Large Language Model in Language Game** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.14225) | [code]\n\n- [2025\u002F01\u002F24] **Unmasking Conversational Bias in AI Multiagent Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.14844) | [code]\n\n- [2025\u002F01\u002F22] **FilmAgent: A Multi-Agent Framework for End-to-End Film Automation in Virtual 3D Spaces** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.12909) | [code]\n\n- [2025\u002F01\u002F19] **IntellAgent: A Multi-Agent Framework for Evaluating Conversational AI Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.11067) | [code]\n\n- [2025\u002F01\u002F16] **AutoCBT: An Autonomous Multi-agent Framework for Cognitive Behavioral Therapy in Psychological Counseling** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.09426) | [code]\n\n- [2025\u002F01\u002F14] **Talk to Right Specialists: Routing and Planning in Multi-agent System for Question Answering** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.07813) | [code]\n\n- [2025\u002F01\u002F05] **LatteReview: A Multi-Agent Framework for Systematic Review Automation Using Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.05468) | [code]\n\n- [2025\u002F01\u002F02] **Harnessing Multi-Agent LLMs for Complex Engineering Problem-Solving: A Framework for Senior Design Projects** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.01205) | [code]\n\n- [2024\u002F12\u002F30] **Distributed Mixture-of-Agents for Edge Inference with Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.21200) | [code]\n\n- [2024\u002F12\u002F28] **M-MAD: Multidimensional Multi-Agent Debate for Advanced 
Machine Translation Evaluation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.20127) | [code]\n\n- [2024\u002F12\u002F28] **Efficient Multi-Agent Collaboration with Tool Use for Online Planning in Complex Table Question Answering** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.20145) | [code]\n\n- [2024\u002F12\u002F24] **Multi-Agents Based on Large Language Models for Knowledge-based Visual Question Answering** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.18351) | [code]\n\n- [2024\u002F12\u002F22] **Multi-Agent Sampling: Scaling Inference Compute for Data Synthesis with Tree Search-Based Agentic Collaboration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.17061) | [code]\n\n- [2024\u002F12\u002F22] **A Multi-AI Agent System for Autonomous Optimization of Agentic AI Solutions via Iterative Refinement and LLM-Driven Feedback Loops** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.17149) | [code]\n\n- [2024\u002F12\u002F20] **Mitigating Social Bias in Large Language Models: A Multi-Objective Approach within a Multi-Agent Framework** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.15504) | [code]\n\n- [2024\u002F12\u002F19] **PsyDraw: A Multi-Agent Multimodal System for Mental Health Screening in Left-Behind Children** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.14769) | [code]\n\n- [2024\u002F12\u002F18] **Gradual Vigilance and Interval Communication: Enhancing Value Alignment in Multi-Agent Debates** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.13471) | [code]\n\n- [2024\u002F12\u002F15] **Cultural Palette: Pluralising Culture Alignment via Multi-agent Palette** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.11167) | [code]\n\n- [2024\u002F12\u002F13] **AutoPatent: A Multi-Agent Framework for Automatic Patent Generation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.09796) | [code]\n\n- [2024\u002F12\u002F12] **DiverseAgentEntropy: 
Quantifying Black-Box LLM Uncertainty through Diverse Perspectives and Multi-Agent Interaction** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.09572) | [code]\n\n- [2024\u002F12\u002F11] **NAT-NL2GQL: A Novel Multi-Agent Framework for Translating Natural Language to Graph Query Language** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.10434) | [code]\n\n- [2024\u002F12\u002F10] **AutoPrep: Natural Language Question-Aware Data Preparation with a Multi-Agent Framework** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.10422) | [code]\n\n- [2024\u002F12\u002F07] **SLA Management in Reconfigurable Multi-Agent RAG: A Systems Approach to Question Answering** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.06832) | [code]\n\n- [2024\u002F12\u002F06] **Breaking Event Rumor Detection via Stance-Separated Multi-Agent Debate** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.04859) | [code]\n\n- [2024\u002F12\u002F06] **Towards Effective GenAI Multi-Agent Collaboration: Design and Evaluation for Enterprise Applications** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.05449) | [code]\n\n- [2024\u002F12\u002F06] **Enhancing LLMs for Impression Generation in Radiology Reports through a Multi-Agent System** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.06828) | [code]\n\n- [2024\u002F12\u002F06] **TeamCraft: A Benchmark for Multi-Modal Multi-Agent Systems in Minecraft** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.05255) | [code]\n\n- [2024\u002F12\u002F05] **Educational-Psychological Dialogue Robot Based on Multi-Agent Collaboration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.03847) | [code]\n\n- [2024\u002F12\u002F01] **Multi-Agent Collaboration in Incident Response with Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.00652) | [code]\n\n- [2024\u002F11\u002F28] **MAG-V: A Multi-Agent Framework for Synthetic Data Generation and 
Verification** | [[paper]](https://arxiv.org/abs/2412.04494) | [code]

- [2024/11/21] **PIORS: Personalized Intelligent Outpatient Reception based on Large Language Model with Multi-Agents Medical Scenario Simulation** | [[paper]](https://arxiv.org/abs/2411.13902) | [code]

- [2024/11/21] **Enhancing LLMs for Power System Simulations: A Feedback-driven Multi-agent Framework** | [[paper]](https://arxiv.org/abs/2411.16707) | [code]

- [2024/11/18] **The Power of Many: Multi-Agent Multimodal Models for Cultural Image Captioning** | [[paper]](https://arxiv.org/abs/2411.11758) | [code]

- [2024/11/12] **BudgetMLAgent: A Cost-Effective LLM Multi-Agent system for Automating Machine Learning Tasks** | [[paper]](https://arxiv.org/abs/2411.07464) | [code]

- [2024/11/11] **Using Generative AI and Multi-Agents to Provide Automatic Feedback** | [[paper]](https://arxiv.org/abs/2411.07407) | [code]

- [2024/11/09] **Mixture of Knowledge Minigraph Agents for Literature Review Generation** | [[paper]](https://arxiv.org/abs/2411.06159) | [code]

- [2024/11/05] **SAUCE: Synchronous and Asynchronous User-Customizable Environment for Multi-Agent LLM Interaction** | [[paper]](https://arxiv.org/abs/2411.03397) | [code]

- [2024/11/05] **SMoA: Improving Multi-agent Large Language Models with Sparse Mixture-of-Agents** | [[paper]](https://arxiv.org/abs/2411.03284) | [code]

- [2024/11/01] **DARD: A Multi-Agent Approach for Task-Oriented Dialog Systems** | [[paper]](https://arxiv.org/abs/2411.00427) | [code]

- [2024/10/30] **ACC-Debate: An Actor-Critic Approach to Multi-Agent Debate** | [[paper]](https://arxiv.org/abs/2411.00053) | [code]

- [2024/10/29] **Flow-DPO: Improving LLM Mathematical Reasoning through Online Multi-Agent Learning** | [[paper]](https://arxiv.org/abs/2410.22304) | [code]

- [2024/10/29] **MARCO: Multi-Agent Real-time Chat Orchestration** | [[paper]](https://arxiv.org/abs/2410.21784) | [code]

- [2024/10/28] **CRAT: A Multi-Agent Framework for Causality-Enhanced Reflective and Retrieval-Augmented Translation with Large Language Models** | [[paper]](https://arxiv.org/abs/2410.21067) | [code]

- [2024/10/27] **AutoKaggle: A Multi-Agent Framework for Autonomous Data Science Competitions** | [[paper]](https://arxiv.org/abs/2410.20424) | [code]

- [2024/10/24] **Schema-Guided Culture-Aware Complex Event Simulation with Multi-Agent Role-Play** | [[paper]](https://arxiv.org/abs/2410.18935) | [code]

- [2024/10/23] **GraphTeam: Facilitating Large Language Model-based Graph Analysis via Multi-Agent Collaboration** | [[paper]](https://arxiv.org/abs/2410.18032) | [code]

- [2024/10/22] **Decoding Time Series with LLMs: A Multi-Agent Framework for Cross-Domain Annotation** | [[paper]](https://arxiv.org/abs/2410.17462) | [code]

- [2024/10/19] **An Electoral Approach to Diversify LLM-based Multi-Agent Collective Decision-Making** | [[paper]](https://arxiv.org/abs/2410.15168) | [code]

- [2024/10/18] **Synthesizing Post-Training Data for LLMs through Multi-Agent Simulation** | [[paper]](https://arxiv.org/abs/2410.14251) | [code]

- [2024/10/17] **AdaSwitch: Adaptive Switching between Small and Large Agents for Effective Cloud-Local Collaborative Learning** | [[paper]](https://arxiv.org/abs/2410.13181) | [code]

- [2024/10/16] **PRefLexOR: Preference-based Recursive Language Modeling for Exploratory Optimization of Reasoning and Agentic Thinking** | [[paper]](https://arxiv.org/abs/2410.12375) | [code]

- [2024/10/13] **LLM-Based Multi-Agent Systems are Scalable Graph Generative Models** | [[paper]](https://arxiv.org/abs/2410.09824) | [code]

- [2024/10/12] **Many Heads Are Better Than One: Improved Scientific Idea Generation by A LLM-Based Multi-Agent System** | [[paper]](https://arxiv.org/abs/2410.09403) | [code]

- [2024/10/11] **JAILJUDGE: A Comprehensive Jailbreak Judge Benchmark with Multi-Agent Enhanced Explanation Evaluation Framework** | [[paper]](https://arxiv.org/abs/2410.12855) | [code]

- [2024/10/11] **PEAR: A Robust and Flexible Automation Framework for Ptychography Enabled by Multiple Large Language Model Agents** | [[paper]](https://arxiv.org/abs/2410.09034) | [code]

- [2024/10/10] **AI-Press: A Multi-Agent News Generating and Feedback Simulation System Powered by Large Language Models** | [[paper]](https://arxiv.org/abs/2410.07561) | [code]

- [2024/10/10] **Multi-Agent Collaborative Data Selection for Efficient LLM Pretraining** | [[paper]](https://arxiv.org/abs/2410.08102) | [code]

- [2024/10/10] **Optima: Optimizing Effectiveness and Efficiency for LLM-Based Multi-Agent System** | [[paper]](https://arxiv.org/abs/2410.08115) | [code]

- [2024/10/10] **Prompt Engineering a Schizophrenia Chatbot: Utilizing a Multi-Agent Approach for Enhanced Compliance with Prompt Instructions** | [[paper]](https://arxiv.org/abs/2410.12848) | [code]

- [2024/10/10] **Diversity of Thought Elicits Stronger Reasoning Capabilities in Multi-Agent Debate Frameworks** | [[paper]](https://arxiv.org/abs/2410.12853) | [code]

- [2024/10/09] **Seeker: Enhancing Exception Handling in Code with LLM-based Multi-Agent Approach** | [[paper]](https://arxiv.org/abs/2410.06949) | [code]

- [2024/10/07] **Adversarial Multi-Agent Evaluation of Large Language Models through Iterative Debates** | [[paper]](https://arxiv.org/abs/2410.04663) | [code]

- [2024/10/06] **MindScope: Exploring cognitive biases in large language models through Multi-Agent Systems** | [[paper]](https://arxiv.org/abs/2410.04452) | [code]

- [2024/10/03] **Towards Implicit Bias Detection and Mitigation in Multi-Agent LLM Interactions** | [[paper]](https://arxiv.org/abs/2410.02584) | [code]

- [2024/10/03] **Agents' Room: Narrative Generation through Multi-step Collaboration** | [[paper]](https://arxiv.org/abs/2410.02603) | [code]

- [2024/10/03] **Can Large Language Models Grasp Legal Theories? Enhance Legal Reasoning with Insights from Multi-Agent Collaboration** | [[paper]](https://arxiv.org/abs/2410.02507) | [code]

- [2024/10/03] **ColaCare: Enhancing Electronic Health Record Modeling through Large Language Model-Driven Multi-Agent Collaboration** | [[paper]](https://arxiv.org/abs/2410.02551) | [code]

- [2024/10/03] **AutoML-Agent: A Multi-Agent LLM Framework for Full-Pipeline AutoML** | [[paper]](https://arxiv.org/abs/2410.02958) | [code]

- [2024/10/02] **RGD: Multi-LLM Based Agent Debugger via Refinement and Generation Guidance** | [[paper]](https://arxiv.org/abs/2410.01242) | [code]

- [2024/10/02] **Zodiac: A Cardiologist-Level LLM Framework for Multi-Agent Diagnostics** | [[paper]](https://arxiv.org/abs/2410.02026) | [code]

- [2024/09/21] **Towards Automated Patent Workflows: AI-Orchestrated Multi-Agent Framework for Intellectual Property Management and Analysis** | [[paper]](https://arxiv.org/abs/2409.19006) | [code]

- [2024/09/21] **GroupDebate: Enhancing the Efficiency of Multi-Agent Debate Using Group Discussion** | [[paper]](https://arxiv.org/abs/2409.14051) | [code]

- [2024/09/20] **Minstrel: Structural Prompt Generation with Multi-Agents Coordination for Non-AI Experts** | [[paper]](https://arxiv.org/abs/2409.13449) | [code]

- [2024/09/18] **MAgICoRe: Multi-Agent, Iterative, Coarse-to-Fine Refinement for Reasoning** | [[paper]](https://arxiv.org/abs/2409.12147) | [code]

- [2024/09/17] **The Art of Storytelling: Multi-Agent Generative AI for Dynamic Multimodal Narratives** | [[paper]](https://arxiv.org/abs/2409.11261) | [code]

- [2024/09/16] **Instigating Cooperation among LLM Agents Using Adaptive Information Modulation** | [[paper]](https://arxiv.org/abs/2409.10372) | [code]

- [2024/09/14] **Synergistic Simulations: Multi-Agent Problem Solving with Large Language Models** | [[paper]](https://arxiv.org/abs/2409.13753) | [code]

- [2024/09/12] **Knowledge Tagging with Large Language Model based Multi-Agent System** | [[paper]](https://arxiv.org/abs/2409.08406) | [code]

- [2024/09/11] **Propaganda to Hate: A Multimodal Analysis of Arabic Memes with Multi-Agent LLMs** | [[paper]](https://arxiv.org/abs/2409.07246) | [code]

- [2024/09/09] **SciAgents: Automating scientific discovery through multi-agent intelligent graph reasoning** | [[paper]](https://arxiv.org/abs/2409.05556) | [code]

- [2024/09/06] **Using Large Language Models to Generate Authentic Multi-agent Knowledge Work Datasets** | [[paper]](https://arxiv.org/abs/2409.04286) | [code]

- [2024/09/05] **xLAM: A Family of Large Action Models to Empower AI Agent Systems** | [[paper]](https://arxiv.org/abs/2409.03215) | [code]

- [2024/09/02] **Co-Learning: Code Learning for Multi-Agent Reinforcement Collaborative Framework with Conversational Natural Language Interfaces** | [[paper]](https://arxiv.org/abs/2409.00985) | [code]

- [2024/08/28] **BattleAgentBench: A Benchmark for Evaluating Cooperation and Competition Capabilities of Language Models in Multi-Agent Systems** | [[paper]](https://arxiv.org/abs/2408.15971) | [code]

- [2024/08/27] **AgentMonitor: A Plug-and-Play Framework for Predictive and Secure Multi-Agent Systems** | [[paper]](https://arxiv.org/abs/2408.14972) | [code]

- [2024/08/24] **Towards Human-Level Understanding of Complex Process Engineering Schematics: A Pedagogical, Introspective Multi-Agent Framework for Open-Domain Question Answering** | [[paper]](https://arxiv.org/abs/2409.00082) | [code]

- [2024/08/22] **MuMA-ToM: Multi-modal Multi-Agent Theory of Mind** | [[paper]](https://arxiv.org/abs/2408.12574) | [code]

- [2024/08/21] **DreamFactory: Pioneering Multi-Scene Long Video Generation with a Multi-Agent Framework** | [[paper]](https://arxiv.org/abs/2408.11788) | [code]

- [2024/08/16] **The Fellowship of the LLMs: Multi-Agent Workflows for Synthetic Preference Optimization Dataset Generation** | [[paper]](https://arxiv.org/abs/2408.08688) | [code]

- [2024/08/15] **MAG-SQL: Multi-Agent Generative Approach with Soft Schema Linking and Iterative Sub-SQL Refinement for Text-to-SQL** | [[paper]](https://arxiv.org/abs/2408.07930) | [code]

- [2024/08/15] **Text2BIM: Generating Building Models Using a Large Language Model-based Multi-Agent Framework** | [[paper]](https://arxiv.org/abs/2408.08054) | [code]

- [2024/08/14] **Development of a Large Language Model-based Multi-Agent Clinical Decision Support System for Korean Triage and Acuity Scale (KTAS)-Based Triage and Treatment Planning in Emergency Departments** | [[paper]](https://arxiv.org/abs/2408.07531) | [code]

- [2024/08/08] **Can LLMs Beat Humans in Debating? A Dynamic Multi-agent Framework for Competitive Debate** | [[paper]](https://arxiv.org/abs/2408.04472) | [code]

- [2024/08/05] **ReDel: A Toolkit for LLM-Powered Recursive Multi-Agent Systems** | [[paper]](https://arxiv.org/abs/2408.02248) | [code]

- [2024/08/05] **Evaluating and Enhancing LLMs Agent based on Theory of Mind in Guandan: A Multi-Player Cooperative Game under Imperfect Information** | [[paper]](https://arxiv.org/abs/2408.02559) | [code]

- [2024/07/23] **LawLuo: A Multi-Agent Collaborative Framework for Multi-Round Chinese Legal Consultation** | [[paper]](https://arxiv.org/abs/2407.16252) | [code]

- [2024/07/21] **Multi-Agent Causal Discovery Using Large Language Models** | [[paper]](https://arxiv.org/abs/2407.15073) | [code]

- [2024/07/19] **NeLLCom-X: A Comprehensive Neural-Agent Framework to Simulate Language Learning and Group Communication** | [[paper]](https://arxiv.org/abs/2407.13999) | [code]

- [2024/07/17] **Towards Collaborative Intelligence: Propagating Intentions and Reasoning for Multi-Agent Coordination with Large Language Models** | [[paper]](https://arxiv.org/abs/2407.12532) | [code]

- [2024/07/16] **InvAgent: A Large Language Model based Multi-Agent System for Inventory Management in Supply Chains** | [[paper]](https://arxiv.org/abs/2407.11384) | [code]

- [2024/07/13] **Synergistic Multi-Agent Framework with Trajectory Learning for Knowledge-Intensive Tasks** | [[paper]](https://arxiv.org/abs/2407.09893) | [code]

- [2024/07/13] **Cohesive Conversations: Enhancing Authenticity in Multi-Agent Simulated Dialogues** | [[paper]](https://arxiv.org/abs/2407.09897) | [code]

- [2024/07/10] **Flooding Spread of Manipulated Knowledge in LLM-Based Multi-Agent Communities** | [[paper]](https://arxiv.org/abs/2407.07791) | [code]

- [2024/07/09] **FinCon: A Synthesized LLM Multi-Agent System with Conceptual Verbal Reinforcement for Enhanced Financial Decision Making** | [[paper]](https://arxiv.org/abs/2407.06567) | [code]

- [2024/07/09] **Internet of Agents: Weaving a Web of Heterogeneous Agents for Collaborative Intelligence** | [[paper]](https://arxiv.org/abs/2407.07061) | [code]

- [2024/07/04] **Solving Zebra Puzzles Using Constraint-Guided Multi-Agent Systems** | [[paper]](https://arxiv.org/abs/2407.03956) | [code]

- [2024/07/03] **MentalAgora: A Gateway to Advanced Personalized Care in Mental Health through Multi-Agent Debating and Attribute Control** | [[paper]](https://arxiv.org/abs/2407.02736) | [code]

- [2024/06/17] **Improving Multi-Agent Debate with Sparse Communication Topology** | [[paper]](https://arxiv.org/abs/2406.11776) | [code]

- [2024/06/13] **Multi-Agent Software Development through Cross-Team Collaboration** | [[paper]](https://arxiv.org/abs/2406.08979) | [[code]](https://github.com/openbmb/chatdev)

- [2024/06/11] **CoEvol: Constructing Better Responses for Instruction Finetuning through Multi-Agent Cooperation** | [[paper]](https://arxiv.org/abs/2406.07054) | [[code]](https://github.com/lirenhao1997/coevol)

- [2024/06/07] **Mixture-of-Agents Enhances Large Language Model Capabilities** | [[paper]](https://arxiv.org/abs/2406.04692) | [code]

- [2024/06/05] **Towards Detecting LLMs Hallucination via Markov Chain-based Multi-agent Debate Framework** | [[paper]](https://arxiv.org/abs/2406.03075) | [code]

- [2024/06/04] **Chain of Agents: Large Language Models Collaborating on Long-Context Tasks** | [[paper]](https://arxiv.org/abs/2406.02818) | [code]

- [2024/06/03] **Mobile-Agent-v2: Mobile Device Operation Assistant with Effective Navigation via Multi-Agent Collaboration** | [[paper]](https://arxiv.org/abs/2406.01014) | [[code]](https://github.com/x-plug/mobileagent)

- [2024/05/30] **Safe Multi-agent Reinforcement Learning with Natural Language Constraints** | [[paper]](https://arxiv.org/abs/2405.20018) | [code]

- [2024/05/23] **CityGPT: Towards Urban IoT Learning, Analysis and Interaction with Multi-Agent System** | [[paper]](https://arxiv.org/abs/2405.14691) | [code]

- [2024/05/20] **(Perhaps) Beyond Human Translation: Harnessing Multi-Agent Collaboration for Translating Ultra-Long Literary Texts** | [[paper]](https://arxiv.org/abs/2405.11804) | [code]

- [2024/05/10] **LLM Discussion: Enhancing the Creativity of Large Language Models via Discussion Framework and Role-Play** | [[paper]](https://arxiv.org/abs/2405.06373) | [code]

- [2024/05/07] **Enhancing the Efficiency and Accuracy of Underlying Asset Reviews in Structured Finance: The Application of Multi-agent Framework** | [[paper]](https://arxiv.org/abs/2405.04294) | [code]

- [2024/05/06] **Persona Inconstancy in Multi-Agent LLM Collaboration: Conformity, Confabulation, and Impersonation** | [[paper]](https://arxiv.org/abs/2405.03862) | [code]

- [2024/05/05] **Language Evolution for Evading Social Media Regulation via LLM-based Multi-agent Simulation** | [[paper]](https://arxiv.org/abs/2405.02858) | [code]

- [2024/04/25] **Cooperate or Collapse: Emergence of Sustainable Cooperation in a Society of LLM Agents** | [[paper]](https://arxiv.org/abs/2404.16698) | [code]

- [2024/04/23] **ClinicalAgent: Clinical Trial Multi-Agent System with Large Language Model-based Reasoning** | [[paper]](https://arxiv.org/abs/2404.14777) | [code]

- [2024/04/14] **Confidence Calibration and Rationalization for LLMs via Multi-Agent Deliberation** | [[paper]](https://arxiv.org/abs/2404.09127) | [code]

- [2024/04/12] **Leveraging Multi-AI Agents for Cross-Domain Knowledge Discovery** | [[paper]](https://arxiv.org/abs/2404.08511) | [code]

- [2024/04/09] **Foundation Models to the Rescue: Deadlock Resolution in Connected Multi-Robot Systems** | [[paper]](https://arxiv.org/abs/2404.06413) | [code]

- [2024/04/08] **360°REA: Towards A Reusable Experience Accumulation with 360° Assessment for Multi-Agent System** | [[paper]](https://arxiv.org/abs/2404.05569) | [code]

- [2024/04/06] **MACM: Utilizing a Multi-Agent System for Condition Mining in Solving Complex Mathematical Problems** | [[paper]](https://arxiv.org/abs/2404.04735) | [[code]](https://github.com/bin123apple/macm)

- [2024/04/02] **Self-Organized Agents: A LLM Multi-Agent Framework toward Ultra Large-Scale Code Generation and Optimization** | [[paper]](https://arxiv.org/abs/2404.02183) | [code]

- [2024/04/02] **CMAT: A Multi-Agent Collaboration Tuning Framework for Enhancing Small Language Models** | [[paper]](https://arxiv.org/abs/2404.01663) | [code]

- [2024/03/26] **MAGIS: LLM-Based Multi-Agent Framework for GitHub Issue Resolution** | [[paper]](https://arxiv.org/abs/2403.17927) | [code]

- [2024/03/22] **CACA Agent: Capability Collaboration based AI Agent** | [[paper]](https://arxiv.org/abs/2403.15137) | [code]

- [2024/03/21] **Multi-Agent VQA: Exploring Multi-Agent Foundation Models in Zero-Shot Visual Question Answering** | [[paper]](https://arxiv.org/abs/2403.14783) | [code]

- [2024/03/19] **Embodied LLM Agents Learn to Cooperate in Organized Teams** | [[paper]](https://arxiv.org/abs/2403.12482) | [code]

- [2024/03/12] **Transforming Competition into Collaboration: The Revolutionary Role of Multi-Agent Systems and Language Models in Modern Organizations** | [[paper]](https://arxiv.org/abs/2403.07769) | [code]

- [2024/03/02] **AutoDefense: Multi-Agent LLM Defense against Jailbreak Attacks** | [[paper]](https://arxiv.org/abs/2403.04783) | [code]

- [2024/02/28] **Rethinking the Bounds of LLM Reasoning: Are Multi-Agent Discussions the Key?** | [[paper]](https://arxiv.org/abs/2402.18272) | [code]

- [2024/02/26] **Chain-of-Discussion: A Multi-Model Framework for Complex Evidence-Based Question Answering** | [[paper]](https://arxiv.org/abs/2402.16313) | [code]

- [2024/02/26] **LLMArena: Assessing Capabilities of Large Language Models in Dynamic Multi-Agent Environments** | [[paper]](https://arxiv.org/abs/2402.16499) | [code]

- [2024/02/21] **LLM Based Multi-Agent Generation of Semi-structured Documents from Semantic Templates in the Public Administration Domain** | [[paper]](https://arxiv.org/abs/2402.14871) | [code]

- [2024/02/18] **Benchmark Self-Evolving: A Multi-Agent Framework for Dynamic LLM Evaluation** | [[paper]](https://arxiv.org/abs/2402.11443) | [[code]](https://github.com/nanshineloong/self-evolving-benchmark)

- [2024/02/18] **LongAgent: Scaling Language Models to 128k Context through Multi-Agent Collaboration** | [[paper]](https://arxiv.org/abs/2402.11550) | [code]

- [2024/02/15] **TDAG: A Multi-Agent Framework based on Dynamic Task Decomposition and Agent Generation** | [[paper]](https://arxiv.org/abs/2402.10178) | [code]

- [2024/02/03] **More Agents Is All You Need** | [[paper]](https://arxiv.org/abs/2402.05120) | [code]

- [2024/02/02] **Reasoning Capacity in Multi-Agent Systems: Limitations, Challenges and Human-Centered Solutions** | [[paper]](https://arxiv.org/abs/2402.01108) | [code]

- [2024/02/02] **A Multi-Agent Conversational Recommender System** | [[paper]](https://arxiv.org/abs/2402.01135) | [code]

- [2024/01/11] **Combating Adversarial Attacks with Multi-Agent Debate** | [[paper]](https://arxiv.org/abs/2401.05998) | [code]

- [2024/01/08] **MARG: Multi-Agent Review Generation for Scientific Papers** | [[paper]](https://arxiv.org/abs/2401.04259) | [code]

- [2024/01/08] **SpeechAgents: Human-Communication Simulation with Multi-Modal Multi-Agent Systems** | [[paper]](https://arxiv.org/abs/2401.03945) | [code]

- [2024/01/08] **Why Solving Multi-agent Path Finding with Large Language Model has not Succeeded Yet** | [[paper]](https://arxiv.org/abs/2401.03630) | [code]

- [2023/12/20] **AgentCoder: Multi-Agent-based Code Generation with Iterative Testing and Optimisation** | [[paper]](https://arxiv.org/abs/2312.13010) | [code]

- [2023/10/31] **Multi-Agent Consensus Seeking via Large Language Models** | [[paper]](https://arxiv.org/abs/2310.20151) | [code]

- [2023/10/25] **MultiPrompter: Cooperative Prompt Optimization with Multi-Agent Reinforcement Learning** | [[paper]](https://arxiv.org/abs/2310.16730) | [code]

- [2023/08/22] **ProAgent: Building Proactive Cooperative Agents with Large Language Models** | [[paper]](https://arxiv.org/abs/2308.11339) | [code]

- [2023/08/21] **AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors** | [[paper]](https://arxiv.org/abs/2308.10848) | [code]

- [2023/08/14] **ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate** | [[paper]](https://arxiv.org/abs/2308.07201) | [code]

- [2023/08/01] **MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework** | [[paper]](https://arxiv.org/abs/2308.00352) | [code]

- [2023/06/05] **Multi-Agent Collaboration: Harnessing the Power of Intelligent LLM Agents** | [[paper]](https://arxiv.org/abs/2306.03314) | [code]

- [2023/05/31] **Recursive Metropolis-Hastings Naming Game: Symbol Emergence in a Multi-agent System based on Probabilistic Generative Models** | [[paper]](https://arxiv.org/abs/2305.19761) | [code]

- [2023/05/30] **Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate** | [[paper]](https://arxiv.org/abs/2305.19118) | [code]

- [2023/04/26] **Multi-Party Chat: Conversational Agents in Group Settings with Humans and Models** | [[paper]](https://arxiv.org/abs/2304.13835) | [code]

- [2023/04/24] **ChatLLM Network: More brains, More intelligence** | [[paper]](https://arxiv.org/abs/2304.12998) | [code]

### Stability
#### Safety
- [2025/07/09] **VisualTrap: A Stealthy Backdoor Attack on GUI Agents via Visual Grounding Manipulation** | [[paper]](https://arxiv.org/abs/2507.06899) | [code]

- [2025/07/04] **LTLCrit: A Temporal Logic-based LLM Critic for Safe and Efficient Embodied Agents** | [[paper]](https://arxiv.org/abs/2507.03293) | [code]

- [2025/07/01] **Enhancing LLM Agent Safety via Causal Influence Prompting** | [[paper]](https://arxiv.org/abs/2507.00979) | [code]

- [2025/07/01] **GAF-Guard: An Agentic Framework for Risk Management and Governance in Large Language Models** | [[paper]](https://arxiv.org/abs/2507.02986) | [code]

- [2025/06/25] **Model Editing as a Double-Edged Sword: Steering Agent Ethical Behavior Toward Beneficence or Harm** | [[paper]](https://arxiv.org/abs/2506.20606) | [code]

- [2025/06/11] **Effective Red-Teaming of Policy-Adherent Agents** | [[paper]](https://arxiv.org/abs/2506.09600) | [code]

- [2025/06/11] **Disclosure Audits for LLM Agents** | [[paper]](https://arxiv.org/abs/2506.10171) | [code]

- [2025/06/09] **SAFEFLOW: A Principled Protocol for Trustworthy and Transactional Autonomous Agent Systems** | [[paper]](https://arxiv.org/abs/2506.07564) | [code]

- [2025/06/04] **RedDebate: Safer Responses through Multi-Agent Red Teaming Debates** | [[paper]](https://arxiv.org/abs/2506.11083) | [code]

- [2025/06/01] **Simple Prompt Injection Attacks Can Leak Personal Data Observed by LLM Agents During Task Execution** | [[paper]](https://arxiv.org/abs/2506.01055) | [code]

- [2025/05/29] **AgentAlign: Navigating Safety Alignment in the Shift from Informative to Agentic Large Language Models** | [[paper]](https://arxiv.org/abs/2505.23020) | [code]

- [2025/05/28] **RedTeamCUA: Realistic Adversarial Testing of Computer-Use Agents in Hybrid Web-OS Environments** | [[paper]](https://arxiv.org/abs/2505.21936) | [code]

- [2025/05/26] **TrojanStego: Your Language Model Can Secretly Be A Steganographic Privacy Leaking Agent** | [[paper]](https://arxiv.org/abs/2505.20118) | [code]

- [2025/05/25] **GUARDIAN: Safeguarding LLM Multi-Agent Collaborations with Temporal Graph Modeling** | [[paper]](https://arxiv.org/abs/2505.19234) | [code]

- [2025/05/18] **IP Leakage Attacks Targeting LLM-Based Multi-Agent Systems** | [[paper]](https://arxiv.org/abs/2505.12442) | [code]

- [2025/05/16] **EnvInjection: Environmental Prompt Injection Attack to Multi-modal Web Agents** | [[paper]](https://arxiv.org/abs/2505.11717) | [code]

- [2025/04/24] **Assessing the Potential of Generative Agents in Crowdsourced Fact-Checking** | [[paper]](https://arxiv.org/abs/2504.19940) | [code]

- [2025/04/15] **Towards Automated Safety Requirements Derivation Using Agent-based RAG** | [[paper]](https://arxiv.org/abs/2504.11243) | [code]

- [2025/03/26] **sudo rm -rf agentic_security** | [[paper]](https://arxiv.org/abs/2503.20279) | [code]

- [2025/03/24] **AgentSpec: Customizable Runtime Enforcement for Safe and Reliable LLM Agents** | [[paper]](https://arxiv.org/abs/2503.18666) | [code]

- [2025/03/06] **SafeArena: Evaluating the Safety of Autonomous Web Agents** | [[paper]](https://arxiv.org/abs/2503.04957) | [code]

- [2025/02/20] **CORBA: Contagious Recursive Blocking Attacks on Multi-Agent Systems Based on Large Language Models** | [[paper]](https://arxiv.org/abs/2502.14529) | [code]

- [2025/02/18] **AEIA-MN: Evaluating the Robustness of Multimodal LLM-Powered Mobile Agents Against Active Environmental Injection Attacks** | [[paper]](https://arxiv.org/abs/2502.13053) | [code]

- [2025/02/17] **"Nuclear Deployed!": Analyzing Catastrophic Risks in Decision-making of Autonomous LLM Agents** | [[paper]](https://arxiv.org/abs/2502.11355) | [code]

- [2025/02/01] **ALU: Agentic LLM Unlearning** | [[paper]](https://arxiv.org/abs/2502.00406) | [code]

- [2025/01/28] **Context is Key for Agent Security** | [[paper]](https://arxiv.org/abs/2501.17070) | [code]

- [2024/12/21] **The Task Shield: Enforcing Task Alignment to Defend Against Indirect Prompt Injection in LLM Agents** | [[paper]](https://arxiv.org/abs/2412.16682) | [code]

- [2024/12/16] **Seeker: Towards Exception Safety Code Generation with Intermediate Language Agents Framework** | [[paper]](https://arxiv.org/abs/2412.11713) | [code]

- [2024/12/09] **The Fusion of Large Language Models and Formal Methods for Trustworthy AI Agents: A Roadmap** | [[paper]](https://arxiv.org/abs/2412.06512) | [code]

- [2024/11/08] **Towards Low-Resource Harmful Meme Detection with LMM Agents** | [[paper]](https://arxiv.org/abs/2411.05383) | [code]

- [2024/11/06] **MRJ-Agent: An Effective Jailbreak Agent for Multi-Round Dialogue** | [[paper]](https://arxiv.org/abs/2411.03814) | [code]

- [2024/11/04] **Attacking Vision-Language Computer Agents via Pop-ups** | [[paper]](https://arxiv.org/abs/2411.02391) | [code]

- [2024/10/22] **AdvWeb: Controllable Black-box Attacks on VLM-powered Web Agents** | [[paper]](https://arxiv.org/abs/2410.17401) | [code]

- [2024/10/18] **Coherence-Driven Multimodal Safety Dialogue with Active Learning for Embodied Agents** | [[paper]](https://arxiv.org/abs/2410.14141) | [code]

- [2024/10/11] **AgentHarm: A Benchmark for Measuring Harmfulness of LLM Agents** | [[paper]](https://arxiv.org/abs/2410.09024) | [code]

- [2024/10/09] **I Want to Break Free! Persuasion and Anti-Social Behavior of LLMs in Multi-Agent Settings with Social Hierarchy** | [[paper]](https://arxiv.org/abs/2410.07109) | [code]

- [2024/09/28] **SELP: Generating Safe and Efficient Task Plans for Robot Agents with Large Language Models** | [[paper]](https://arxiv.org/abs/2409.19471) | [code]

- [2024/09/17] **EIA: Environmental Injection Attack on Generalist Web Agents for Privacy Leakage** | [[paper]](https://arxiv.org/abs/2409.11295) | [code]

- [2024/09/13] **AI-LieDar: Examine the Trade-off Between Utility and Truthfulness in LLM Agents** | [[paper]](https://arxiv.org/abs/2409.09013) | [code]

- [2024/08/20] **Athena: Safe Autonomous Agents with Verbal Contrastive Learning** | [[paper]](https://arxiv.org/abs/2408.11021) | [code]

- [2024/08/05] **Caution for the Environment: Multimodal Agents are Susceptible to Environmental Distractions** | [[paper]](https://arxiv.org/abs/2408.02544) | [code]

- [2024/07/23] **RedAgent: Red Teaming Large Language Models with Context-aware Autonomous Language Agent** | [[paper]](https://arxiv.org/abs/2407.16667) | [code]

- [2024/06/05] **BadAgent: Inserting and Activating Backdoor Attacks in LLM Agents** | [[paper]](https://arxiv.org/abs/2406.03007) | [[code]](https://github.com/dpamk/badagent)

- [2024/05/30] **Safe Multi-agent Reinforcement Learning with Natural Language Constraints** | [[paper]](https://arxiv.org/abs/2405.20018) | [code]

- [2024/05/24] **Hacc-Man: An Arcade Game for Jailbreaking LLMs** | [[paper]](https://arxiv.org/abs/2405.15902) | [code]

- [2024/03/02] **AutoDefense: Multi-Agent LLM Defense against Jailbreak Attacks** | [[paper]](https://arxiv.org/abs/2403.04783) | [code]

- [2024/02/17] **Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents** | [[paper]](https://arxiv.org/abs/2402.11208) | [code]

- [2024/02/16] **ToolSword: Unveiling Safety Issues of Large Language Models in Tool Learning Across Three Stages** | [[paper]](https://arxiv.org/abs/2402.10753) | [[code]](https://github.com/junjie-ye/toolsword)

- [2024/02/02] **TrustAgent: Towards Safe and Trustworthy LLM-based Agents** | [[paper]](https://arxiv.org/abs/2402.01586) | [code]

- [2024/01/11] **Combating Adversarial Attacks with Multi-Agent Debate** | [[paper]](https://arxiv.org/abs/2401.05998) | [code]

- [2023/11/17] **Testing Language Model Agents Safely in the Wild** | [[paper]](https://arxiv.org/abs/2311.10538) | [code]

#### Bias
- [2025/05/27] **Silence is Not Consensus: Disrupting Agreement Bias in Multi-Agent LLMs via Catfish Agent for Clinical Decision Making** | [[paper]](https://arxiv.org/abs/2505.21503) | [code]

- [2025/05/14] **Language Agents Mirror Human Causal Reasoning Biases. How Can We Help Them Think Like Scientists?** | [[paper]](https://arxiv.org/abs/2505.09614) | [code]

- [2025/04/10] **MALIBU Benchmark: Multi-Agent LLM Implicit Bias Uncovered** | [[paper]](https://arxiv.org/abs/2507.01019) | [code]

- [2025/03/27] **Bias-Aware Agent: Enhancing Fairness in AI-Driven Knowledge Retrieval** | [[paper]](https://arxiv.org/abs/2503.21237) | [code]

- [2025/03/01] **Structured Reasoning for Fairness: A Multi-Agent Approach to Bias Detection in Textual Data** | [[paper]](https://arxiv.org/abs/2503.00355) | [code]

- [2025/01/29] **Actions Speak Louder than Words: Agent Decisions Reveal Implicit Biases in Language Models** | [[paper]](https://arxiv.org/abs/2501.17420) | [code]

- [2025/01/24] **Unmasking Conversational Bias in AI Multiagent Systems** | [[paper]](https://arxiv.org/abs/2501.14844) | [code]

- [2024/12/20] **Mitigating Social Bias in Large Language Models: A Multi-Objective Approach within a Multi-Agent Framework** | [[paper]](https://arxiv.org/abs/2412.15504) | [code]

- [2024/11/12] **Mitigating Bias in Queer Representation within Large Language Models: A Collaborative Agent Approach** | [[paper]](https://arxiv.org/abs/2411.07656) | [code]

- [2024/10/06] **MindScope: Exploring cognitive biases in large language models through Multi-Agent Systems** | [[paper]](https://arxiv.org/abs/2410.04452) | [code]

- [2024/10/03] **Towards Implicit Bias Detection and Mitigation in Multi-Agent LLM Interactions** | [[paper]](https://arxiv.org/abs/2410.02584) | [code]

- [2024/05/23] **ALI-Agent: Assessing LLMs' Alignment with Human Values via Agent-based Evaluation** | [[paper]](https://arxiv.org/abs/2405.14125) | [code]

- [2024/04/23] **Aligning LLM Agents by Learning Latent Preference from User Edits** | [[paper]](https://arxiv.org/abs/2404.15269) | [code]

- [2024/02/19] **Polarization of Autonomous Generative AI Agents Under Echo Chambers** | [[paper]](https://arxiv.org/abs/2402.12212) | [code]

- [2024/02/14] **Towards better Human-Agent Alignment: Assessing Task Utility in LLM-Powered Applications** | [[paper]](https://arxiv.org/abs/2402.09015) | [code]

- [2024/01/09] **Agent Alignment in Evolving Social Norms** | [[paper]](https://arxiv.org/abs/2401.04620) | [code]

#### Hallucination
- [2025/06/23] **A Comment On "The Illusion of Thinking": Reframing the Reasoning Cliff as an Agentic Gap** | [[paper]](https://arxiv.org/abs/2506.18957) | [code]

- [2025/05/28] **Position: Uncertainty Quantification Needs Reassessment for Large-language Model Agents** | [[paper]](https://arxiv.org/abs/2505.22655) | [code]

- [2025/03/14] **Prompt Injection Detection and Mitigation via AI Multi-Agent NLP Frameworks** | [[paper]](https://arxiv.org/abs/2503.11517) | [code]

- [2025/03/14] **RAG-KG-IL: A Multi-Agent Hybrid Framework for Reducing Hallucinations and Enhancing LLM Reasoning through RAG and Incremental Knowledge Graph Learning Integration** | [[paper]](https://arxiv.org/abs/2503.13514) | [code]

- [2025/03/01] **EXCLAIM: An Explainable Cross-Modal Agentic System for Misinformation Detection with Hierarchical Retrieval** | [[paper]](https://arxiv.org/abs/2504.06269) | [code]

- [2025/02/26] **Winning Big with Small Models: Knowledge Distillation vs. Self-Training for Reducing Hallucination in QA Agents** | [[paper]](https://arxiv.org/abs/2502.19545) | [code]

- [2025/02/14] **Automated Hypothesis Validation with Agentic Sequential Falsifications** | [[paper]](https://arxiv.org/abs/2502.09858) | [code]

- [2025/02/04] **Position: Stop Acting Like Language Model Agents Are Normal Agents** | [[paper]](https://arxiv.org/abs/2502.10420) | [code]

- [2025/02/03] **SelfCheckAgent: Zero-Resource Hallucination Detection in Generative Large Language Models** | [[paper]](https://arxiv.org/abs/2502.01812) | [code]

- [2025/01/19] **Hallucination Mitigation using Agentic AI Natural Language-Based Frameworks** | [[paper]](https://arxiv.org/abs/2501.13946) | [code]

- [2024/11/25] **Enhancing Multi-Agent Consensus through Third-Party LLM Integration: Analyzing Uncertainty and Mitigating Hallucinations in Large Language Models** | [[paper]](https://arxiv.org/abs/2411.16189) | [code]

- [2024/11/12] **SHARP: Unlocking Interactive Hallucination via Stance Transfer in Role-Playing Agents** | [[paper]](https://arxiv.org/abs/2411.07965) | [code]

- [2024/07/08] **DebUnc: Mitigating Hallucinations in Large Language Model Agent Communication with Uncertainty Estimations** | [[paper]](https://arxiv.org/abs/2407.06426) | [code]

- [2024/06/29] **BioKGBench: A Knowledge Graph Checking Benchmark of AI Agent for Biomedical Science** | [[paper]](https://arxiv.org/abs/2407.00466) | [code]

- [2024/06/17] **Small Agent Can Also Rock!
Empowering Small Language Models as Hallucination Detector** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.11277) | [code]\n\n- [2024\u002F06\u002F05] **Towards Detecting LLMs Hallucination via Markov Chain-based Multi-agent Debate Framework** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.03075) | [code]\n\n- [2024\u002F05\u002F28] **TimeChara: Evaluating Point-in-Time Character Hallucination of Role-Playing Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.18027) | [code]\n\n- [2024\u002F02\u002F13] **Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.08567) | [[code]](https:\u002F\u002Fgithub.com\u002Fsail-sg\u002Fagent-smith)\n\n\n\n### Infrastructure\n#### Benchmark&Evaluation\n- [2025\u002F07\u002F08] **ECom-Bench: Can LLM Agent Resolve Real-World E-commerce Customer Support Issues?** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.05639) | [code]\n\n- [2025\u002F07\u002F07] **Evaluating Memory in LLM Agents via Incremental Multi-Turn Interactions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.05257) | [code]\n\n- [2025\u002F07\u002F04] **Recon, Answer, Verify: Agents in Search of Truth** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.03671) | [code]\n\n- [2025\u002F07\u002F04] **STRUCTSENSE: A Task-Agnostic Agentic Framework for Structured Information Extraction with Human-In-The-Loop Evaluation and Benchmarking** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.03674) | [code]\n\n- [2025\u002F07\u002F01] **TransLaw: Benchmarking Large Language Models in Multi-Agent Simulation of the Collaborative Translation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.00875) | [code]\n\n- [2025\u002F06\u002F27] **Don&#39;t Trust Generative Agents to Mimic Communication on Social Networks Unless You Benchmarked their Empirical Realism** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.21974) | [code]\n\n- [2025\u002F06\u002F27] **RExBench: Can coding agents autonomously implement AI research extensions?** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.22598) | [code]\n\n- [2025\u002F06\u002F26] **Agent-RewardBench: Towards a Unified Benchmark for Reward Modeling across Perception, Planning, and Safety in Real-World Multimodal Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.21252) | [code]\n\n- [2025\u002F06\u002F25] **The Decrypto Benchmark for Multi-Agent Reasoning and Theory of Mind** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.20664) | [code]\n\n- [2025\u002F06\u002F20] **MemBench: Towards More Comprehensive Evaluation on the Memory of LLM-based Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.21605) | [code]\n\n- [2025\u002F06\u002F20] **Dissecting the SWE-Bench Leaderboards: Profiling Submitters and Architectures of LLM- and Agent-Based Repair Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.17208) | [code]\n\n- [2025\u002F06\u002F19] **IS-Bench: Evaluating Interactive Safety of VLM-Driven Embodied Agents in Daily Household Tasks** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.16402) | [code]\n\n- [2025\u002F06\u002F13] **DeepResearch Bench: A Comprehensive Benchmark for Deep Research Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.11763) | [code]\n\n- [2025\u002F06\u002F13] **The Behavior Gap: Evaluating Zero-shot LLM Agents in Complex Task-Oriented Dialogs** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.12266) | [code]\n\n- [2025\u002F06\u002F11] **Bench to the Future: A Pastcasting Benchmark for Forecasting Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.21558) | [code]\n\n- [2025\u002F06\u002F10] **Atomic-to-Compositional Generalization for Mobile Agents with A New Benchmark and Scheduling System** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.08972) | [code]\n\n- [2025\u002F06\u002F10] **UTBoost: Rigorous Evaluation of Coding Agents on SWE-Bench** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.09289) | [code]\n\n- [2025\u002F06\u002F09] **EconWebArena: Benchmarking Autonomous Agents on Economic Tasks in Realistic Web Environments** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.08136) | [code]\n\n- [2025\u002F06\u002F09] **HeuriGym: An Agentic Benchmark for LLM-Crafted Heuristics in Combinatorial Optimization** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.07972) | [code]\n\n- [2025\u002F06\u002F09] **$\\tau^2$-Bench: Evaluating Conversational Agents in a Dual-Control Environment** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.07982) | [code]\n\n- [2025\u002F06\u002F05] **Flex-TravelPlanner: A Benchmark for Flexible Planning with Language Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.04649) | [code]\n\n- [2025\u002F06\u002F04] **AgentMisalignment: Measuring the Propensity for Misaligned Behaviour in LLM-Based Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.04018) | [code]\n\n- [2025\u002F06\u002F02] **FormFactory: An Interactive Benchmarking Suite for Multimodal Form-Filling Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.01520) | [code]\n\n- [2025\u002F06\u002F02] **WebChoreArena: Evaluating Web Browsing Agents on Realistic Tedious Web Tasks** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.01952) | [code]\n\n- [2025\u002F05\u002F31] **DefenderBench: A Toolkit for Evaluating Language Agents in Cybersecurity Environments** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.00739) | [code]\n\n- [2025\u002F05\u002F30] **Draw ALL Your Imagine: A Holistic Benchmark and Agent Framework for Complex Instruction-based Image Generation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.24787) | [code]\n\n- 
[2025\u002F05\u002F30] **Agent-X: Evaluating Deep Multimodal Reasoning in Vision-Centric Agentic Tasks** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.24876) | [code]\n\n- [2025\u002F05\u002F30] **Open CaptchaWorld: A Comprehensive Web-based Platform for Testing and Benchmarking Multimodal LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.24878) | [code]\n\n- [2025\u002F05\u002F29] **GSO: Challenging Software Optimization Tasks for Evaluating SWE-Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.23671) | [code]\n\n- [2025\u002F05\u002F27] **AutoJudger: An Agent-Driven Framework for Efficient Benchmarking of MLLMs** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.21389) | [code]\n\n- [2025\u002F05\u002F26] **ScienceBoard: Evaluating Multimodal Autonomous Agents in Realistic Scientific Workflows** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.19897) | [code]\n\n- [2025\u002F05\u002F26] **MLR-Bench: Evaluating AI Agents on Open-Ended Machine Learning Research** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.19955) | [code]\n\n- [2025\u002F05\u002F26] **On Path to Multimodal Historical Reasoning: HistBench and HistAgent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.20246) | [code]\n\n- [2025\u002F05\u002F24] **CRMArena-Pro: Holistic Assessment of LLM Agents Across Diverse Business Scenarios and Interactions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.18878) | [code]\n\n- [2025\u002F05\u002F22] **BioDSA-1K: Benchmarking Data Science Agents for Biomedical Research** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.16100) | [code]\n\n- [2025\u002F05\u002F22] **From EduVisBench to EduVisAgent: A Benchmark and Multi-Agent Framework for Reasoning-Driven Pedagogical Visualization** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.16832) | [code]\n\n- [2025\u002F05\u002F22] **AGENTIF: Benchmarking Instruction Following of Large Language 
Models in Agentic Scenarios** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.16944) | [code]\n\n- [2025\u002F05\u002F21] **X-WebAgentBench: A Multilingual Interactive Web Benchmark for Evaluating Global Agentic System** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.15372) | [code]\n\n- [2025\u002F05\u002F21] **BountyBench: Dollar Impact of AI Agent Attackers and Defenders on Real-World Cybersecurity Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.15216) | [code]\n\n- [2025\u002F05\u002F21] **InfoDeepSeek: Benchmarking Agentic Information Seeking for Retrieval-Augmented Generation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.15872) | [code]\n\n- [2025\u002F05\u002F21] **MAPS: A Multilingual Benchmark for Global Agent Performance and Security** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.15935) | [code]\n\n- [2025\u002F05\u002F18] **MedAgentBoard: Benchmarking Multi-Agent Collaboration with Conventional Methods for Diverse Medical Tasks** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.12371) | [code]\n\n- [2025\u002F05\u002F17] **Mobile-Bench-v2: A More Realistic and Comprehensive Benchmark for VLM-based Mobile Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.11891) | [code]\n\n- [2025\u002F05\u002F16] **GuideBench: Benchmarking Domain-Oriented Guideline Following for LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.11368) | [code]\n\n- [2025\u002F05\u002F16] **REI-Bench: Can Embodied Agents Understand Vague Human Instructions in Task Planning?** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.10872) | [code]\n\n- [2025\u002F05\u002F02] **PIPA: A Unified Evaluation Protocol for Diagnosing Interactive Planning Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.01592) | [code]\n\n- [2025\u002F04\u002F25] **Auto-SLURP: A Benchmark Dataset for Evaluating Multi-Agent Frameworks in Smart Personal Assistant** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.18373) | [code]\n\n- [2025\u002F04\u002F24] **Toward a Human-Centered Evaluation Framework for Trustworthy LLM-Powered GUI Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.17934) | [code]\n\n- [2025\u002F04\u002F21] **PLANET: A Collection of Benchmarks for Evaluating LLMs&#39; Planning Capabilities** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.14773) | [code]\n\n- [2025\u002F04\u002F16] **BrowseComp: A Simple Yet Challenging Benchmark for Browsing Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.12516) | [code]\n\n- [2025\u002F04\u002F15] **GraphicBench: A Planning Benchmark for Graphic Design with Language Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.11571) | [code]\n\n- [2025\u002F04\u002F13] **AgentA\u002FB: Automated and Scalable Web A\u002FBTesting with Interactive LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.09723) | [code]\n\n- [2025\u002F04\u002F11] **TP-RAG: Benchmarking Retrieval-Augmented Large Language Model Agents for Spatiotemporal-Aware Travel Planning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.08694) | [code]\n\n- [2025\u002F04\u002F11] **AgentRewardBench: Evaluating Automatic Evaluations of Web Agent Trajectories** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.08942) | [code]\n\n- [2025\u002F04\u002F10] **MALIBU Benchmark: Multi-Agent LLM Implicit Bias Uncovered** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.01019) | [code]\n\n- [2025\u002F04\u002F06] **CO-Bench: Benchmarking Language Model Agents in Algorithm Search for Combinatorial Optimization** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.04310) | [code]\n\n- [2025\u002F04\u002F04] **How Social is It? 
A Benchmark for LLMs&#39; Capabilities in Multi-user Multi-turn Social Agent Tasks** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.04628) | [code]\n\n- [2025\u002F03\u002F31] **SciReplicate-Bench: Benchmarking LLMs in Agent-driven Algorithmic Reproduction from Research Papers** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.00255) | [code]\n\n- [2025\u002F03\u002F28] **Evaluating LLM-based Agents for Multi-Turn Conversations: A Survey** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.22458) | [code]\n\n- [2025\u002F03\u002F25] **Writing as a testbed for open ended agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.19711) | [code]\n\n- [2025\u002F03\u002F24] **EconEvals: Benchmarks and Litmus Tests for LLM Agents in Unknown Environments** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.18825) | [code]\n\n- [2025\u002F03\u002F20] **Survey on Evaluation of LLM-based Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.16416) | [code]\n\n- [2025\u002F03\u002F16] **VeriLA: A Human-Centered Evaluation Framework for Interpretable Verification of LLM Agent Failures** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.12651) | [code]\n\n- [2025\u002F03\u002F11] **AgentOrca: A Dual-System Framework to Evaluate Language Agents on Operational Routine and Constraint Adherence** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.08669) | [code]\n\n- [2025\u002F03\u002F10] **MedAgentsBench: Benchmarking Thinking Models and Agent Frameworks for Complex Medical Reasoning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.07459) | [code]\n\n- [2025\u002F03\u002F10] **ProjectEval: A Benchmark for Programming Agents Automated Evaluation on Project-Level Code Generation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.07010) | [code]\n\n- [2025\u002F03\u002F10] **RefactorBench: Evaluating Stateful Reasoning in Language Agents Through Code** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.07832) | [code]\n\n- [2025\u002F03\u002F10] **BEARCUBS: A benchmark for computer-using web agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.07919) | [code]\n\n- [2025\u002F03\u002F08] **DSGBench: A Diverse Strategic Game Benchmark for Evaluating LLM-based Agents in Complex Decision-Making Environments** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.06047) | [code]\n\n- [2025\u002F03\u002F03] **MultiAgentBench: Evaluating the Collaboration and Competition of LLM agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.01935) | [code]\n\n- [2025\u002F02\u002F26] **TheoremExplainAgent: Towards Multimodal Explanations for LLM Theorem Understanding** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.19400) | [code]\n\n- [2025\u002F02\u002F25] **RefuteBench 2.0 -- Agentic Benchmark for Dynamic Evaluation of LLM Responses to Refutation Instruction** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.18308) | [[code]](https:\u002F\u002Fgithub.com\u002FElliottYan\u002FRefuteBench-2.0)\n\n- [2025\u002F02\u002F20] **MLGym: A New Framework and Benchmark for Advancing AI Research Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.14499) | [code]\n\n- [2025\u002F02\u002F19] **DataSciBench: An LLM Agent Benchmark for Data Science** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.13897) | [code]\n\n- [2025\u002F02\u002F13] **EmbodiedBench: Comprehensive Benchmarking Multi-modal Large Language Models for Vision-Driven Embodied Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.09560) | [code]\n\n- [2025\u002F02\u002F07] **Evaluating Personality Traits in Large Language Models: Insights from Psychological Questionnaires** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.05248) | [code]\n\n- [2025\u002F02\u002F06] **Robotouille: An Asynchronous Planning Benchmark for LLM Agents** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.05227) | [code]\n\n- [2025\u002F02\u002F01] **Who&#39;s the MVP? A Game-Theoretic Evaluation Benchmark for Modular Attribution in LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.00510) | [code]\n\n- [2025\u002F01\u002F21] **EmbodiedEval: Evaluate Multimodal LLMs as Embodied Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.11858) | [code]\n\n- [2024\u002F12\u002F23] **LegalAgentBench: Evaluating LLM Agents in Legal Domain** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.17259) | [code]\n\n- [2024\u002F12\u002F19] **Agent-SafetyBench: Evaluating the Safety of LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.14470) | [code]\n\n- [2024\u002F12\u002F18] **TheAgentCompany: Benchmarking LLM Agents on Consequential Real World Tasks** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.14161) | [code]\n\n- [2024\u002F12\u002F18] **ChinaTravel: A Real-World Benchmark for Language Agents in Chinese Travel Planning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.13682) | [code]\n\n- [2024\u002F12\u002F06] **TeamCraft: A Benchmark for Multi-Modal Multi-Agent Systems in Minecraft** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.05255) | [code]\n\n- [2024\u002F12\u002F02] **Medchain: Bridging the Gap Between LLM Agents and Clinical Practice through Interactive Sequential Benchmarking** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.01605) | [code]\n\n- [2024\u002F11\u002F05] **Benchmarking Multimodal Retrieval Augmented Generation with Dynamic VQA Dataset and Self-adaptive Planning Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.02937) | [code]\n\n- [2024\u002F10\u002F28] **Can Machines Think Like Humans? 
A Behavioral Evaluation of LLM-Agents in Dictator Games** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.21359) | [code]\n\n- [2024\u002F10\u002F25] **AgentSense: Benchmarking Social Intelligence of Language Agents through Interactive Scenarios** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.19346) | [code]\n\n- [2024\u002F10\u002F25] **AGENT-CQ: Automatic Generation and Evaluation of Clarifying Questions for Conversational Search with LLMs** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.19692) | [code]\n\n- [2024\u002F10\u002F23] **MobileSafetyBench: Evaluating Safety of Autonomous Agents in Mobile Device Control** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.17520) | [code]\n\n- [2024\u002F10\u002F16] **Proactive Agent: Shifting LLM Agents from Reactive Responses to Active Assistance** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.12361) | [code]\n\n- [2024\u002F10\u002F15] **Revisiting Benchmark and Assessment: An Agent-based Exploratory Dynamic Evaluation Framework for LLMs** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.11507) | [code]\n\n- [2024\u002F10\u002F11] **JAILJUDGE: A Comprehensive Jailbreak Judge Benchmark with Multi-Agent Enhanced Explanation Evaluation Framework** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.12855) | [code]\n\n- [2024\u002F10\u002F11] **AgentHarm: A Benchmark for Measuring Harmfulness of LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.09024) | [code]\n\n- [2024\u002F10\u002F10] **Benchmarking Agentic Workflow Generation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.07869) | [code]\n\n- [2024\u002F10\u002F09] **MLE-bench: Evaluating Machine Learning Agents on Machine Learning Engineering** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.07095) | [code]\n\n- [2024\u002F10\u002F09] **Embodied Agent Interface: Benchmarking LLMs for Embodied Decision Making** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.07166) | [code]\n\n- [2024\u002F10\u002F09] **DA-Code: Agent Data Science Code Generation Benchmark for Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.07331) | [code]\n\n- [2024\u002F10\u002F07] **Adversarial Multi-Agent Evaluation of Large Language Models through Iterative Debates** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.04663) | [code]\n\n- [2024\u002F10\u002F07] **ScienceAgentBench: Toward Rigorous Assessment of Language Agents for Data-Driven Scientific Discovery** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.05080) | [code]\n\n- [2024\u002F09\u002F23] **Towards a Realistic Long-Term Benchmark for Open-Web Research Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.14913) | [code]\n\n- [2024\u002F09\u002F17] **CORE-Bench: Fostering the Credibility of Published Research Through a Computational Reproducibility Agent Benchmark** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.11363) | [code]\n\n- [2024\u002F09\u002F12] **DSBench: How Far Are Data Science Agents to Becoming Data Science Experts?** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.07703) | [code]\n\n- [2024\u002F09\u002F11] **SUPER: Evaluating Agents on Setting Up and Executing Tasks from Research Repositories** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.07440) | [code]\n\n- [2024\u002F09\u002F02] **ComfyBench: Benchmarking LLM-based Agents in ComfyUI for Autonomously Designing Collaborative AI Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.01392) | [code]\n\n- [2024\u002F08\u002F28] **BattleAgentBench: A Benchmark for Evaluating Cooperation and Competition Capabilities of Language Models in Multi-Agent Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.15971) | [code]\n\n- [2024\u002F08\u002F19] **BLADE: Benchmarking Language Model Agents for Data-Driven Science** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.09667) | [code]\n\n- [2024\u002F08\u002F13] **What should I wear to a party in a Greek taverna? Evaluation for Conversational Agents in the Fashion Domain** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.08907) | [code]\n\n- [2024\u002F08\u002F12] **VisualAgentBench: Towards Large Multimodal Models as Visual Foundation Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.06327) | [code]\n\n- [2024\u002F07\u002F26] **OfficeBench: Benchmarking Language Agents across Multiple Applications for Office Automation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.19056) | [code]\n\n- [2024\u002F07\u002F26] **AppWorld: A Controllable World of Apps and People for Benchmarking Interactive Coding Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.18901) | [[code]](https:\u002F\u002Fgithub.com\u002Fstonybrooknlp\u002Fappworld)\n\n- [2024\u002F07\u002F25] **PersonaGym: Evaluating Persona Agents and LLMs** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.18416) | [code]\n\n- [2024\u002F07\u002F23] **AMONGAGENTS: Evaluating Large Language Models in the Interactive Text-Based Social Deduction Game** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.16521) | [code]\n\n- [2024\u002F07\u002F22] **AssistantBench: Can Web Agents Solve Realistic and Time-Consuming Tasks?** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.15711) | [code]\n\n- [2024\u002F07\u002F12] **IDAT: A Multi-Modal Dataset and Toolkit for Building and Evaluating Interactive Task-Solving Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.08898) | [code]\n\n- [2024\u002F07\u002F11] **GTA: A Benchmark for General Tool Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.08713) | [code]\n\n- [2024\u002F07\u002F05] **Towards Automated Functional Equation Proving: A Benchmark Dataset and A Domain-Specific In-Context Agent** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.14521) | [code]\n\n- [2024\u002F07\u002F01] **MIRAI: Evaluating LLM Agents for Event Forecasting** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.01231) | [code]\n\n- [2024\u002F07\u002F01] **ProductAgent: Benchmarking Conversational Product Search Agent with Asking Clarification Questions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.00942) | [code]\n\n- [2024\u002F07\u002F01] **Mobile-Bench: An Evaluation Benchmark for LLM-based Mobile Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.00993) | [code]\n\n- [2024\u002F06\u002F28] **Designing and Evaluating Multi-Chatbot Interface for Human-AI Communication: Preliminary Findings from a Persuasion Task** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.19648) | [code]\n\n- [2024\u002F06\u002F13] **ResearchArena: Benchmarking Large Language Models&#39; Ability to Collect and Organize Information as Research Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.10291) | [code]\n\n- [2024\u002F06\u002F13] **StreamBench: Towards Benchmarking Continuous Improvement of Language Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.08747) | [code]\n\n- [2024\u002F06\u002F07] **WildBench: Benchmarking LLMs with Challenging Tasks from Real Users in the Wild** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.04770) | [[code]](https:\u002F\u002Fgithub.com\u002Fallenai\u002Fwildbench)\n\n- [2024\u002F06\u002F07] **GameBench: Evaluating Strategic Reasoning Abilities of LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.06613) | [[code]](https:\u002F\u002Fgithub.com\u002FJoshuaclymer\u002FGameBench)\n\n- [2024\u002F05\u002F28] **TimeChara: Evaluating Point-in-Time Character Hallucination of Role-Playing Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.18027) | [code]\n\n- [2024\u002F05\u002F23] **AndroidWorld: A Dynamic 
Benchmarking Environment for Autonomous Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.14573) | [code]\n\n- [2024\u002F05\u002F13] **AgentClinic: a multimodal agent benchmark to evaluate AI in simulated clinical environments** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.07960) | [code]\n\n- [2024\u002F05\u002F01] **WorkBench: a Benchmark Dataset for Agents in a Realistic Workplace Setting** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.00823) | [[code]](https:\u002F\u002Fgithub.com\u002Folly-styles\u002Fworkbench)\n\n- [2024\u002F04\u002F23] **Evaluating Tool-Augmented Agents in Remote Sensing Platforms** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.00709) | [code]\n\n- [2024\u002F04\u002F22] **How Well Can LLMs Echo Us? Evaluating AI Chatbots&#39; Role-Play Ability with ECHO** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.13957) | [code]\n\n- [2024\u002F04\u002F15] **MMInA: Benchmarking Multihop Multimodal Internet Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.09992) | [[code]](https:\u002F\u002Fgithub.com\u002Fshulin16\u002Fmmina)\n\n- [2024\u002F04\u002F11] **OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.07972) | [code]\n\n- [2024\u002F04\u002F09] **AgentQuest: A Modular Benchmark Framework to Measure Progress and Improve LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.06411) | [code]\n\n- [2024\u002F04\u002F05] **GroundCocoa: A Benchmark for Evaluating Compositional &amp; Conditional Reasoning in Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.04237) | [code]\n\n- [2024\u002F03\u002F29] **DataAgent: Evaluating Large Language Models&#39; Ability to Answer Zero-Shot, Natural Language Queries** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.00188) | [code]\n\n- [2024\u002F03\u002F26] **Sharing the 
Cost of Success: A Game for Evaluating and Learning Collaborative Multi-Agent Instruction Giving and Following Policies** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.17497) | [[code]](https:\u002F\u002Fgithub.com\u002Fclp-research\u002Fcost-sharing-reference-game)\n\n- [2024\u002F03\u002F20] **SocialBench: Sociality Evaluation of Role-Playing Conversational Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.13679) | [code]\n\n- [2024\u002F03\u002F18] **How Far Are We on the Decision-Making of LLMs? Evaluating LLMs&#39; Gaming Ability in Multi-Agent Environments** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.11807) | [code]\n\n- [2024\u002F03\u002F18] **Tur[k]ingBench: A Challenge Benchmark for Web Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.11905) | [code]\n\n- [2024\u002F03\u002F13] **Evaluating Large Language Models as Generative User Simulators for Conversational Recommendation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.09738) | [code]\n\n- [2024\u002F03\u002F05] **InjecAgent: Benchmarking Indirect Prompt Injections in Tool-Integrated Large Language Model Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.02691) | [code]\n\n- [2024\u002F02\u002F27] **Evaluating Very Long-Term Conversational Memory of LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.17753) | [code]\n\n- [2024\u002F02\u002F27] **Benchmarking Data Science Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.17168) | [code]\n\n- [2024\u002F02\u002F19] **A Critical Evaluation of AI Feedback for Aligning Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.12366) | [code]\n\n- [2024\u002F02\u002F18] **Benchmark Self-Evolving: A Multi-Agent Framework for Dynamic LLM Evaluation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.11443) | 
[[code]](https:\u002F\u002Fgithub.com\u002Fnanshineloong\u002Fself-evolving-benchmark)\n\n- [2024\u002F02\u002F18] **MatPlotAgent: Method and Evaluation for LLM-Based Agentic Scientific Data Visualization** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.11453) | [code]\n\n- [2024\u002F02\u002F05] **LLM Agents in Interaction: Measuring Personality Consistency and Linguistic Alignment in Interacting Populations of Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.02896) | [code]\n\n- [2024\u002F02\u002F02] **TravelPlanner: A Benchmark for Real-World Planning with Language Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.01622) | [[code]](https:\u002F\u002Fgithub.com\u002FOSU-NLP-Group\u002FTravelPlanner)\n\n- [2024\u002F01\u002F02] **CharacterEval: A Chinese Benchmark for Role-Playing Conversational Agent Evaluation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.01275) | [code]\n\n- [2023\u002F12\u002F28] **How Far Are LLMs from Believable AI? 
A Benchmark for Evaluating the Believability of Human Behavior Simulation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.17115) | [code]\n\n- [2023\u002F12\u002F26] **RoleEval: A Bilingual Role Evaluation Benchmark for Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.16132) | [code]\n\n- [2023\u002F11\u002F16] **ML-Bench: Evaluating Large Language Models and Agents for Machine Learning Tasks on Repository-Level Code** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.09835) | [code]\n\n- [2023\u002F11\u002F15] **ToolTalk: Evaluating Tool-Usage in a Conversational Setting** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.10775) | [code]\n\n- [2023\u002F10\u002F24] **FANToM: A Benchmark for Stress-testing Machine Theory of Mind in Interactions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.15421) | [code]\n\n- [2023\u002F10\u002F09] **Put Your Money Where Your Mouth Is: Evaluating Strategic Planning and Execution of LLM Agents in an Auction Arena** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.05746) | [code]\n\n- [2023\u002F10\u002F02] **SmartPlay: A Benchmark for LLMs as Intelligent Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.01557) | [code]\n\n- [2023\u002F10\u002F01] **RoleLLM: Benchmarking, Eliciting, and Enhancing Role-Playing Abilities of Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.00746) | [code]\n\n- [2023\u002F08\u002F11] **BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.05960) | [code]\n\n- [2023\u002F08\u002F07] **AgentBench: Evaluating LLMs as Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.03688) | [code]\n\n- [2023\u002F04\u002F27] **ChatLog: Carefully Evaluating the Evolution of ChatGPT Across Time** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.14106) | [code]\n\n#### 
Environment&Platform\n- [2025\u002F05\u002F30] **Open CaptchaWorld: A Comprehensive Web-based Platform for Testing and Benchmarking Multimodal LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.24878) | [code]\n\n- [2025\u002F05\u002F22] **Beyond Static Testbeds: An Interaction-Centric Agent Simulation Platform for Dynamic Recommender Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.16429) | [code]\n\n- [2025\u002F05\u002F22] **MASLab: A Unified and Comprehensive Codebase for LLM-based Multi-Agent Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.16988) | [code]\n\n- [2025\u002F04\u002F15] **TextArena** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.11442) | [code]\n\n- [2025\u002F03\u002F14] **Cerebrum (AIOS SDK): A Platform for Agent Development, Deployment, Distribution, and Discovery** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.11444) | [code]\n\n- [2025\u002F03\u002F06] **Factorio Learning Environment** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.09617) | [code]\n\n- [2025\u002F03\u002F05] **Unified Mind Model: Reimagining Autonomous Agents in the LLM Era** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.03459) | [code]\n\n- [2025\u002F03\u002F04] **LiteWebAgent: The Open-Source Suite for VLM-Based Web-Agent Applications** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.02950) | [code]\n\n- [2025\u002F02\u002F14] **The Ann Arbor Architecture for Agent-Oriented Programming** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.09903) | [[code]](https:\u002F\u002Fgithub.com\u002Faaalgo\u002Fpostline_0.1)\n\n- [2024\u002F12\u002F30] **Training Software Engineering Agents and Verifiers with SWE-Gym** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.21139) | [code]\n\n- [2024\u002F11\u002F05] **SAUCE: Synchronous and Asynchronous User-Customizable Environment for Multi-Agent LLM Interaction** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.03397) | [code]\n\n- [2024\u002F08\u002F09] **AutoGen Studio: A No-Code Developer Tool for Building and Debugging Multi-Agent Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.15247) | [code]\n\n- [2024\u002F08\u002F06] **OpenOmni: A Collaborative Open Source Tool for Building Future-Ready Multimodal Conversational Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.03047) | [code]\n\n- [2024\u002F07\u002F23] **OpenHands: An Open Platform for AI Software Developers as Generalist Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.16741) | [code]\n\n- [2024\u002F07\u002F14] **AutoGRAMS: Autonomous Graphical Agent Modeling Software** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.10049) | [code]\n\n- [2024\u002F07\u002F12] **IDAT: A Multi-Modal Dataset and Toolkit for Building and Evaluating Interactive Task-Solving Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.08898) | [code]\n\n- [2024\u002F07\u002F08] **Coding Reliable LLM-based Integrated Task and Knowledge Agents with GenieWorksheets** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.05674) | [code]\n\n- [2024\u002F06\u002F06] **AgentGym: Evolving Large Language Model-based Agents across Diverse Environments** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.04151) | [[code]](https:\u002F\u002Fgithub.com\u002Fwoooodyy\u002Fagentgym)\n\n- [2024\u002F05\u002F23] **AndroidWorld: A Dynamic Benchmarking Environment for Autonomous Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.14573) | [code]\n\n- [2024\u002F02\u002F27] **OmniACT: A Dataset and Benchmark for Enabling Multimodal Generalist Autonomous Agents for Desktop and Web** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.17553) | [code]\n\n- [2023\u002F03\u002F14] **CB2: Collaborative Natural Language Interaction Research Platform** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.08127) | [code]\n\n#### Dataset\n- [2025\u002F07\u002F10] **Toward Real-World Chinese Psychological Support Dialogues: CPsDD Dataset and a Co-Evolving Multi-Agent System** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.07509) | [code]\n\n- [2025\u002F06\u002F26] **AgentStealth: Reinforcing Large Language Model for Anonymizing User-generated Text** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.22508) | [code]\n\n- [2025\u002F06\u002F25] **MAGPIE: A dataset for Multi-AGent contextual PrIvacy Evaluation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.20737) | [code]\n\n- [2025\u002F06\u002F11] **ReasonMed: A 370K Multi-Agent Generated Dataset for Advancing Medical Reasoning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.09513) | [code]\n\n- [2025\u002F06\u002F02] **STORM-BORN: A Challenging Mathematical Derivations Dataset Curated via a Human-in-the-Loop Multi-Agent Framework** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.01531) | [code]\n\n- [2025\u002F05\u002F27] **Towards Safety Reasoning in LLMs: AI-agentic Deliberation for Policy-embedded CoT Data Creation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.21784) | [code]\n\n- [2025\u002F05\u002F19] **Scalable Video-to-Dataset Generation for Cross-Platform Mobile Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.12632) | [code]\n\n- [2025\u002F02\u002F09] **MTPChat: A Multimodal Time-Aware Persona Dataset for Conversational Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.05887) | [code]\n\n- [2025\u002F02\u002F09] **HamRaz: A Culture-Based Persian Conversation Dataset for Person-Centered Therapy Using LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.05982) | [code]\n\n- [2025\u002F01\u002F23] **Hypothesis Generation for Materials Discovery and Design Using Goal-Driven and Constraint-Guided LLM Agents** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.13299) | [code]\n\n- [2025\u002F01\u002F14] **Agent-Centric Projection of Prompting Techniques and Implications for Synthetic Training Data for Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.07815) | [code]\n\n- [2024\u002F12\u002F30] **Plancraft: an evaluation dataset for planning with LLM agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.21033) | [code]\n\n- [2024\u002F12\u002F28] **BaiJia: A Large-Scale Role-Playing Agent Corpus of Chinese Historical Characters** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.20024) | [code]\n\n- [2024\u002F12\u002F24] **Explainable Multi-Modal Data Exploration in Natural Language via LLM Agent** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.18428) | [code]\n\n- [2024\u002F12\u002F06] **CALICO: Conversational Agent Localization via Synthetic Data Generation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.05388) | [code]\n\n- [2024\u002F11\u002F28] **MAG-V: A Multi-Agent Framework for Synthetic Data Generation and Verification** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.04494) | [code]\n\n- [2024\u002F11\u002F21] **Star-Agents: Automatic Data Optimization with LLM Agents for Instruction Tuning** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.14497) | [code]\n\n- [2024\u002F10\u002F18] **Synthesizing Post-Training Data for LLMs through Multi-Agent Simulation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.14251) | [code]\n\n- [2024\u002F10\u002F10] **AgentBank: Towards Generalized LLM Agents via Fine-Tuning on 50000+ Interaction Trajectories** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.07706) | [code]\n\n- [2024\u002F09\u002F06] **Using Large Language Models to Generate Authentic Multi-agent Knowledge Work Datasets** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.04286) | [code]\n\n- [2024\u002F08\u002F22] **MDD-5k: 
A New Diagnostic Conversation Dataset for Mental Disorders Synthesized via Neuro-Symbolic LLM Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.12142) | [code]\n\n- [2024\u002F08\u002F16] **The Fellowship of the LLMs: Multi-Agent Workflows for Synthetic Preference Optimization Dataset Generation** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.08688) | [code]\n\n- [2024\u002F07\u002F12] **IDAT: A Multi-Modal Dataset and Toolkit for Building and Evaluating Interactive Task-Solving Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.08898) | [code]\n\n- [2024\u002F06\u002F16] **GUI-WORLD: A Dataset for GUI-oriented Multimodal LLM-based Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.10819) | [code]\n\n- [2024\u002F03\u002F19] **Agent-FLAN: Designing Data and Methods of Effective Agent Tuning for Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.12881) | [code]\n\n- [2024\u002F02\u002F27] **OmniACT: A Dataset and Benchmark for Enabling Multimodal Generalist Autonomous Agents for Desktop and Web** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.17553) | [code]\n\n- [2023\u002F07\u002F31] **HAGRID: A Human-LLM Collaborative Dataset for Generative Information-Seeking with Attribution** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.16883) | [code]\n\n\n\n### Others\n- [2025\u002F07\u002F04] **Agent-Based Detection and Resolution of Incompleteness and Ambiguity in Interactions with Large Language Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.03726) | [code]\n\n- [2025\u002F07\u002F02] **Data Agent: A Holistic Architecture for Orchestrating Data+AI Ecosystems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.01599) | [code]\n\n- [2025\u002F06\u002F30] **LLM Agents Are the Antidote to Walled Gardens** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.23978) | [code]\n\n- [2025\u002F06\u002F20] **UProp: 
Investigating the Uncertainty Propagation of LLMs in Multi-Step Agentic Decision-Making** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.17419) | [code]\n\n- [2025\u002F06\u002F10] **TACTIC: Translation Agents with Cognitive-Theoretic Interactive Collaboration** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.08403) | [code]\n\n- [2025\u002F06\u002F06] **Future of Work with AI Agents: Auditing Automation and Augmentation Potential across the U.S. Workforce** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.06576) | [code]\n\n- [2025\u002F06\u002F02] **Enhancing Interpretable Image Classification Through LLM Agents and Conditional Concept Bottleneck Models** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.01334) | [code]\n\n- [2025\u002F05\u002F23] **Distilling LLM Agent into Small Models with Retrieval and Code Tools** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.17612) | [code]\n\n- [2025\u002F05\u002F23] **Runaway is Ashamed, But Helpful: On the Early-Exit Behavior of Large Language Model-based Agents in Embodied Environments** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.17616) | [code]\n\n- [2025\u002F05\u002F23] **The Real Barrier to LLM Agent Usability is Agentic ROI** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.17767) | [code]\n\n- [2025\u002F05\u002F20] **Structured Agent Distillation for Large Language Model** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.13820) | [code]\n\n- [2025\u002F05\u002F20] **Agent Context Protocols Enhance Collective Inference** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.14569) | [code]\n\n- [2025\u002F05\u002F15] **Learning Virtual Machine Scheduling in Cloud Computing through Language Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.10117) | [code]\n\n- [2025\u002F05\u002F04] **Interpretable Emergent Language Using Inter-Agent Transformers** | 
[[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.02215) | [code]\n\n- [2025\u002F05\u002F02] **VTS-LLM: Domain-Adaptive LLM Agent for Enhancing Awareness in Vessel Traffic Services through Natural Language** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.00989) | [code]\n\n- [2025\u002F05\u002F01] **Self-Generated In-Context Examples Improve LLM Agents for Sequential Decision-Making Tasks** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.00234) | [code]\n\n- [2025\u002F04\u002F23] **OptimAI: Optimization from Natural Language Using LLM-Powered AI Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.16918) | [code]\n\n- [2025\u002F04\u002F04] **Agentic Knowledgeable Self-awareness** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.03553) | [code]\n\n- [2025\u002F04\u002F04] **Inherent and emergent liability issues in LLM-based agentic systems: a principal-agent perspective** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.03255) | [code]\n\n- [2025\u002F04\u002F02] **Review, Refine, Repeat: Understanding Iterative Decoding of AI Agents with Dynamic Evaluation and Selection** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.01931) | [code]\n\n- [2025\u002F03\u002F14] **GNNs as Predictors of Agentic Workflow Performances** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.11301) | [code]\n\n- [2025\u002F03\u002F14] **CoLLMLight: Cooperative Large Language Model Agents for Network-Wide Traffic Signal Control** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.11739) | [code]\n\n- [2025\u002F03\u002F14] **Agent-Enhanced Large Language Models for Researching Political Institutions** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.13524) | [code]\n\n- [2025\u002F03\u002F14] **LLM Agents for Education: Advances and Applications** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.11733) | [code]\n\n- [2025\u002F02\u002F20] **Optimizing Model Selection for 
Compound AI Systems** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.14815) | [code]\n\n- [2024\u002F12\u002F03] **Large Multimodal Agents for Accurate Phishing Detection with Enhanced Token Optimization and Cost Reduction** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.02301) | [code]\n\n- [2024\u002F03\u002F18] **EnvGen: Generating and Adapting Environments via LLMs for Training Embodied Agents** | [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.12014) | [code]\n\n---\n## :star: Star History\n\n[![Star History Chart](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAGI-Edgerunners_LLM-Agents-Papers_readme_5995b3a473a4.png)](https:\u002F\u002Fstar-history.com\u002F#AGI-Edgerunners\u002FLLM-Agents-Papers&Date)","# LLM-Agents-Papers 快速上手指南\n\n## 环境准备\n- **系统要求**：支持 Git 的操作系统（Windows\u002FmacOS\u002FLinux）\n- **前置依赖**：\n  - 安装 Git（[官方下载](https:\u002F\u002Fgit-scm.com\u002F) 或 [清华镜像](https:\u002F\u002Fmirrors.tuna.tsinghua.edu.cn\u002Fgit\u002F)）\n  - 推荐配置 Git 国内镜像加速：\n    ```bash\n    git config --global url.\"https:\u002F\u002Fmirrors.tuna.tsinghua.edu.cn\u002Fgit\u002Fgithub.com\u002F\".insteadOf \"https:\u002F\u002Fgithub.com\u002F\"\n    ```\n\n## 安装步骤\n1. 克隆项目仓库（使用国内镜像加速）：\n   ```bash\n   git clone https:\u002F\u002Fmirrors.tuna.tsinghua.edu.cn\u002Fgit\u002Fgithub.com\u002FAGI-Edgerunners\u002FLLM-Agents-Papers.git\n   ```\n2. 进入项目目录：\n   ```bash\n   cd LLM-Agents-Papers\n   ```\n\n## 基本使用\n1. **查看论文分类**：\n   - 直接浏览 `README.md` 中的目录结构，按主题分类查找论文（如 `Survey`、`RAG`、`Game Playing` 等）。\n   - 示例命令（查看 Survey 分类下的论文）：\n     ```bash\n     grep -A 3 \"### Survey\" README.md\n     ```\n\n2. **访问论文资源**：\n   - 点击论文条目中的 `[paper]` 链接（如 `https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.08800`）下载 PDF。\n   - 若需批量处理论文链接，可运行（需 Python 3）：\n     ```bash\n     pip install requests\n     python scripts\u002Fextract_papers.py  # 假设性脚本，需项目内自行提供\n     ```\n\n3. 
**扩展功能**（如需代码实现）：\n   - 部分论文附带代码链接（如 `[code]`），可单独克隆对应仓库（以 TravelPlanner 为例）：\n     ```bash\n     git clone https:\u002F\u002Fgithub.com\u002FOSU-NLP-Group\u002FTravelPlanner.git\n     ```\n\n> **提示**：若需离线查阅，建议使用 `Typora` 或 `VS Code` 打开 `README.md` 文件，支持 Markdown 渲染与目录跳转。","张明是某金融科技公司的算法工程师，正在开发一个基于LLM的智能投顾系统，需要整合多智能体协作与风险控制模块。  \n\n### 没有 LLM-Agents-Papers 时  \n- **信息碎片化**：通过Google Scholar搜索\"LLM agent finance\"时，需手动筛除30%以上的无关论文，耗时且易遗漏关键研究  \n- **技术盲区**：发现某篇2024年提出的\"动态记忆机制\"能优化风险评估，但未意识到其存在过拟合缺陷（需查阅后续改进论文）  \n- **跨领域障碍**：想借鉴医疗领域的\"多智能体博弈框架\"，却因领域术语差异（如金融中的\"套利\"对应医疗的\"资源分配\"）难以定位相关研究  \n- **版本混乱**：不同团队提交的代码实现中混用了2023版和2025版算法，导致模型表现波动  \n\n### 使用 LLM-Agents-Papers 后  \n- **精准导航**：在\"Finance\"子目录下直接找到《基于强化学习的多智能体投资组合优化》等5篇核心论文，附带代码链接与改进版本标注  \n- **技术演进可视化**：通过时间线发现\"动态记忆机制\"在2024Q4被《带约束的增量式记忆更新》改进，规避了原始方案的缺陷  \n- **跨领域映射**：在\"Multi-Agent System\"分类中检索到《医疗资源博弈框架》，通过术语对照表快速理解其在金融领域的等价应用场景  \n- **版本管控**：每篇论文标注了\"Single-Agent Framework\"与\"Multi-Agent System\"的适配版本，确保团队统一采用2025年基准实现  \n\n核心价值：通过结构化论文索引与领域适配导航，将前沿研究成果转化为可复用的技术模块，缩短AI研发周期40%以上。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAGI-Edgerunners_LLM-Agents-Papers_13d0885f.png","AGI-Edgerunners","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FAGI-Edgerunners_6105c8a3.png",null,"https:\u002F\u002Fgithub.com\u002FAGI-Edgerunners",[77],{"name":78,"color":79,"percentage":80},"Python","#3572A5",100,2265,143,"2026-04-10T11:43:30",1,"","未说明",{"notes":88,"python":86,"dependencies":89},"该项目为论文整理仓库，无需特定运行环境。仅需基础文本查看工具即可使用",[],[13,35],[92,93,94,95],"agents","large-language-models","llm-agent","paper-list",4,"2026-03-27T02:49:30.150509","2026-04-11T18:34:00.031856",[100,105,110,115,120,125,130],{"id":101,"question_zh":102,"answer_zh":103,"source_url":104},1764,"如何添加新论文到仓库的基准部分？","可以通过将论文的 arXiv abs 链接添加到 `papers` 文件夹下的对应 JSON 文件中，然后提交 Pull Request 的方式添加。具体步骤：1. 找到对应的 JSON 文件；2. 在适当的位置插入 arXiv 链接；3. 
提交 PR。","https:\u002F\u002Fgithub.com\u002FAGI-Edgerunners\u002FLLM-Agents-Papers\u002Fissues\u002F6",{"id":106,"question_zh":107,"answer_zh":108,"source_url":109},1765,"如何提交综述类论文到仓库？","提交综述类论文时，请确保提供论文标题、发表信息、arXiv 链接及简要概述。维护者会将其归类到 \"Survey\" 列表中。","https:\u002F\u002Fgithub.com\u002FAGI-Edgerunners\u002FLLM-Agents-Papers\u002Fissues\u002F28",{"id":111,"question_zh":112,"answer_zh":113,"source_url":114},1766,"如何添加模块化设计的 LLM 代理框架（如 AgentSquare）？","需提供框架的核心功能、论文链接、代码仓库及使用场景说明。维护者会将其归类到 \"Environment&Platform\" 或 \"Tool Usage\" 等相关分类。","https:\u002F\u002Fgithub.com\u002FAGI-Edgerunners\u002FLLM-Agents-Papers\u002Fissues\u002F23",{"id":116,"question_zh":117,"answer_zh":118,"source_url":119},1767,"如何添加高保真执行环境（如 AppWorld）到列表？","需提供环境的 API 数量、模拟场景、任务基准及代码仓库链接。维护者会将其归类到 \"Benchmark&Evaluation\" 和 \"Tool Usage\" 分类。","https:\u002F\u002Fgithub.com\u002FAGI-Edgerunners\u002FLLM-Agents-Papers\u002Fissues\u002F22",{"id":121,"question_zh":122,"answer_zh":123,"source_url":124},1768,"如何提交 ICLR 等顶会论文到仓库？","提交时需注明会议名称、论文标题、arXiv 链接及核心贡献。维护者会将其归类到对应技术领域（如规划、推理等）。","https:\u002F\u002Fgithub.com\u002FAGI-Edgerunners\u002FLLM-Agents-Papers\u002Fissues\u002F20",{"id":126,"question_zh":127,"answer_zh":128,"source_url":129},1769,"如何添加角色扮演类 LLM 代理研究论文？","需提供论文对角色扮演场景的评估方法（如时间点幻觉检测），并明确归类到 \"Role Playing\" 分类。","https:\u002F\u002Fgithub.com\u002FAGI-Edgerunners\u002FLLM-Agents-Papers\u002Fissues\u002F18",{"id":131,"question_zh":132,"answer_zh":133,"source_url":134},1770,"如何修复代码链接错误（如 OSWorld 项目）？","直接向维护者提供正确的代码仓库链接（如 GitHub 地址），维护者会更新页面中的代码按钮指向。","https:\u002F\u002Fgithub.com\u002FAGI-Edgerunners\u002FLLM-Agents-Papers\u002Fissues\u002F16",[]]
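上文快速上手指南中提到的 `scripts/extract_papers.py` 为假设性脚本，仓库并未自带。以下是一个仅依赖标准库的最小化示意实现（函数名与文件路径均为假设，仅供参考），用于从 `README.md` 中批量提取去重后的 arXiv 论文链接：

```python
import re

# 匹配 README 中形如 https://arxiv.org/abs/2402.01622 的论文摘要页链接
ARXIV_RE = re.compile(r"https://arxiv\.org/abs/\d{4}\.\d{4,5}")

def extract_arxiv_links(markdown_text: str) -> list[str]:
    """按首次出现顺序返回去重后的 arXiv 链接列表。"""
    seen: dict[str, None] = {}  # 利用 dict 保序去重
    for match in ARXIV_RE.finditer(markdown_text):
        seen.setdefault(match.group(0))
    return list(seen)

if __name__ == "__main__":
    with open("README.md", encoding="utf-8") as fh:
        for link in extract_arxiv_links(fh.read()):
            print(link)
```

该示意脚本只做链接提取，不需要安装 `requests`；若需进一步批量下载 PDF，可在此基础上自行扩展。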