[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-ysymyth--awesome-language-agents":3,"tool-ysymyth--awesome-language-agents":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",150720,2,"2026-04-11T11:33:10",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":67,"readme_en":68,"readme_zh":69,"quickstart_zh":70,"use_case_zh":71,"hero_image_url":72,"owner_login":73,"owner_name":74,"owner_avatar_url":75,"owner_bio":76,"owner_company":77,"owner_location":76,"owner_email":78,"owner_twitter":76,"owner_website":76,"owner_url":79,"languages":80,"stars":85,"forks":86,"last_commit_at":87,"license":76,"difficulty_score":88,"env_os":89,"env_gpu":90,"env_ram":90,"env_deps":91,"category_tags":94,"github_topics":95,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":102,"updated_at":103,"faqs":104,"releases":105},6616,"ysymyth\u002Fawesome-language-agents","awesome-language-agents","List of language agents based on paper \"Cognitive Architectures for Language Agents\"","awesome-language-agents 是一个基于“Cognitive Architectures for Language Agents（CoALA）”论文构建的开源资源合集，旨在系统性地整理和分类当前的语言智能体（Language Agents）研究与项目。它解决了该领域研究分散、架构定义模糊的问题，通过统一的 CoALA 框架，将复杂的智能体行为拆解为“外部交互（接地）”与“内部记忆操作（推理、检索、学习）”两大核心动作空间，并明确了从规划到执行的决策循环机制。\n\n该项目不仅提供了论文的通俗解读和引用资源，还收录了数百篇相关学术文献，并对每篇文献在 CoALA 
框架下的具体技术侧重进行了标注。其独特的技术亮点在于提供了一套标准化的认知架构视角，帮助研究者清晰地分析不同智能体如何处理短期工作记忆与长期记忆（如经验、知识、代码）的交互。\n\nawesome-language-agents 特别适合人工智能领域的研究人员、开发者以及希望深入理解大模型智能体底层逻辑的技术爱好者使用。对于想要追踪前沿进展、寻找灵感或系统梳理知识体系的从业者来说，这是一份极具价值的导航地图，能帮助大家更高效地探索语言智能体的无限可能。","awesome-language-agents 是一个基于“Cognitive Architectures for Language Agents（CoALA）”论文构建的开源资源合集，旨在系统性地整理和分类当前的语言智能体（Language Agents）研究与项目。它解决了该领域研究分散、架构定义模糊的问题，通过统一的 CoALA 框架，将复杂的智能体行为拆解为“外部交互（接地）”与“内部记忆操作（推理、检索、学习）”两大核心动作空间，并明确了从规划到执行的决策循环机制。\n\n该项目不仅提供了论文的通俗解读和引用资源，还收录了数百篇相关学术文献，并对每篇文献在 CoALA 框架下的具体技术侧重进行了标注。其独特的技术亮点在于提供了一套标准化的认知架构视角，帮助研究者清晰地分析不同智能体如何处理短期工作记忆与长期记忆（如经验、知识、代码）的交互。\n\nawesome-language-agents 特别适合人工智能领域的研究人员、开发者以及希望深入理解大模型智能体底层逻辑的技术爱好者使用。对于想要追踪前沿进展、寻找灵感或系统梳理知识体系的从业者来说，这是一份极具价值的导航地图，能帮助大家更高效地探索语言智能体的无限可能。","# 🐨CoALA: Awesome Language Agents\n[![Awesome](https:\u002F\u002Fawesome.re\u002Fbadge.svg)](https:\u002F\u002Fawesome.re) [![License: MIT](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-MIT-yellow.svg)](LICENSE)  [![PR Welcome](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPRs-welcome-brightgreen)](https:\u002F\u002Fgithub.com\u002Fysymyth\u002Fawesome-language-agents\u002Fpulls)\n\n![teaser](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fysymyth_awesome-language-agents_readme_b684f8c949b7.png)\n\n\nA compilation of language agents using the **Cognitive Architectures for Language Agents (🐨CoALA)** framework. \n- CoALA Paper (16 pages of main content): https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.02427\n- CoALA Tweet (6 threads): https:\u002F\u002Ftwitter.com\u002FShunyuYao12\u002Fstatus\u002F1699396834983362690\n- CoALA BibTex file with 300+ related citations: [CoALA.bib](CoALA.bib)\n- CoALA BibTex citation if you find our work\u002Fresources useful:\n```bibtex\n@misc{sumers2023cognitive,\n      title={Cognitive Architectures for Language Agents}, \n      author={Theodore Sumers and Shunyu Yao and Karthik Narasimhan and Thomas L. 
Griffiths},\n      year={2023},\n      eprint={2309.02427},\n      archivePrefix={arXiv},\n      primaryClass={cs.AI}\n}\n```\n\n## 🐨CoALA Overview\nCoALA neatly specifies a language agent starting with its **action space**, which has 2 parts:\n* External actions to interact with external environments (**grounding**)\n* Internal actions to interact with internal memories (**reasoning**, **retrieval**, **learning**)\n  * A language agent has a short-term working memory and several (optional) long-term memories (episodic for experience, semantic for knowledge, procedural for code\u002FLLM)\n  * **Reasoning** = update working memory (with LLM)\n  * **Retrieval** = read long-term memory\n  * **Learning** = write long-term memory\n![action_space](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fysymyth_awesome-language-agents_readme_b08e0e638cb4.png)\n\n\nThen how does a language agent choose which action to take? Its actions are structured into **decision making** cycles, and each cycle has two stages:\n* **Planning**: The agent applies reasoning\u002Fretrieval actions to (iteratively) propose and evaluate actions, then select a learning\u002Fgrounding action.\n* **Execution**: The selected learning\u002Fgrounding action is executed to affect the internal memory or external world.\n![decision_making](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fysymyth_awesome-language-agents_readme_782d83eca5fa.png)\n\nTo understand more, read Section 4 of our [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.02427).\n\n## Papers\nBelow is only a subset of papers scraped from [CoALA.bib](CoALA.bib) plus pull requests, with potentially incorrect action space labels. \nDate is based on arXiv v1. 
They do not represent all language agent work, and we plan to add more work soon (pull requests welcome) and to add labels for highly cited work.\n\n* (2021-10) [AI Chains: Transparent and Controllable Human-AI Interaction by Chaining Large Language Model Prompts](http:\u002F\u002Farxiv.org\u002Fabs\u002F2110.01691) (reasoning)\n* (2021-10) [SILG: The Multi-environment Symbolic Interactive Language Grounding Benchmark](http:\u002F\u002Farxiv.org\u002Fabs\u002F2110.10661) (environment)\n* (2022-01) [Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents](http:\u002F\u002Farxiv.org\u002Fabs\u002F2201.07207) (grounding)\n* (2022-03) [PromptChainer: Chaining Large Language Model Prompts through Visual Programming](http:\u002F\u002Farxiv.org\u002Fabs\u002F2203.06566) (grounding)\n* (2022-03) [ScienceWorld: Is your Agent Smarter than a 5th Grader?](http:\u002F\u002Farxiv.org\u002Fabs\u002F2203.07540) (environment)\n* (2022-04) [Do As I Can, Not As I Say: Grounding Language in Robotic Affordances](http:\u002F\u002Farxiv.org\u002Fabs\u002F2204.01691) (grounding)\n* (2022-04) [Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language](http:\u002F\u002Farxiv.org\u002Fabs\u002F2204.00598) (grounding)\n* (2022-07) [WebShop: Towards Scalable Real-World Web Interaction with Grounded Language Agents](http:\u002F\u002Farxiv.org\u002Fabs\u002F2207.01206) (environment)\n* (2022-09) [ProgPrompt: Generating Situated Robot Task Plans using Large Language Models](http:\u002F\u002Farxiv.org\u002Fabs\u002F2209.11302) (grounding)\n* (2022-10) [Decomposed Prompting: A Modular Approach for Solving Complex Tasks](http:\u002F\u002Farxiv.org\u002Fabs\u002F2210.02406) (reasoning)\n* (2022-10) [Mind's Eye: Grounded Language Model Reasoning through Simulation](http:\u002F\u002Farxiv.org\u002Fabs\u002F2210.05359) (grounding)\n* (2022-10) [ReAct: Synergizing Reasoning and Acting in Language Models](http:\u002F\u002Farxiv.org\u002Fabs\u002F2210.03629) 
(grounding, reasoning)\n* (2022-11) [Large Language Models Are Human-Level Prompt Engineers](http:\u002F\u002Farxiv.org\u002Fabs\u002F2211.01910) (reasoning)\n* (2022-12) [LLM-Planner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.04088) (grounding)\n* (2022-12) [Don’t Generate, Discriminate: A Proposal for Grounding Language Models to Real-World Environments](https:\u002F\u002Faclanthology.org\u002F2023.acl-long.270\u002F) (grounding)\n* (2023-02) [Chain of Hindsight Aligns Language Models with Feedback](http:\u002F\u002Farxiv.org\u002Fabs\u002F2302.02676v6) (learning)\n* (2023-02) [Describe, Explain, Plan and Select: Interactive Planning with Large Language Models Enables Open-World Multi-Task Agents](http:\u002F\u002Farxiv.org\u002Fabs\u002F2302.01560) (grounding, reasoning)\n* (2023-02) [Toolformer: Language Models Can Teach Themselves to Use Tools](http:\u002F\u002Farxiv.org\u002Fabs\u002F2302.04761) (grounding)\n* (2023-03) [Foundation Models for Decision Making: Problems, Methods, and Opportunities](http:\u002F\u002Farxiv.org\u002Fabs\u002F2303.04129) (survey)\n* (2023-03) [HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face](http:\u002F\u002Farxiv.org\u002Fabs\u002F2303.17580) (grounding)\n* (2023-03) [PaLM-E: An Embodied Multimodal Language Model](http:\u002F\u002Farxiv.org\u002Fabs\u002F2303.03378) (grounding)\n* (2023-03) [Reflexion: Language Agents with Verbal Reinforcement Learning](http:\u002F\u002Farxiv.org\u002Fabs\u002F2303.11366) (grounding, reasoning, learning)\n* (2023-03) [Self-Refine: Iterative Refinement with Self-Feedback](http:\u002F\u002Farxiv.org\u002Fabs\u002F2303.17651) (reasoning)\n* (2023-03) [Self-planning Code Generation with Large Language Models](http:\u002F\u002Farxiv.org\u002Fabs\u002F2303.06689) (reasoning)\n* (2023-04) [Generative Agents: Interactive Simulacra of Human 
Behavior](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.03442) (grounding, reasoning, retrieval, learning)\n* (2023-04) [Emergent autonomous scientific research capabilities of large language models](http:\u002F\u002Farxiv.org\u002Fabs\u002F2304.05332) (grounding, reasoning)\n* (2023-04) [LLM+P: Empowering Large Language Models with Optimal Planning Proficiency](http:\u002F\u002Farxiv.org\u002Fabs\u002F2304.11477) (grounding, reasoning)\n* (2023-04) [REFINER: Reasoning Feedback on Intermediate Representations](http:\u002F\u002Farxiv.org\u002Fabs\u002F2304.01904) (reasoning)\n* (2023-04) [Teaching Large Language Models to Self-Debug](http:\u002F\u002Farxiv.org\u002Fabs\u002F2304.05128) (reasoning)\n* (2023-04) [GeneGPT: Augmenting Large Language Models with Domain Tools for Improved Access to Biomedical Information](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.09667) (grounding, reasoning)\n* (2023-05) [CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2305.11738.pdf) (grounding, reasoning, retrieval)\n* (2023-05) [Augmenting Autotelic Agents with Large Language Models](http:\u002F\u002Farxiv.org\u002Fabs\u002F2305.12487) (grounding, reasoning, retrieval, learning)\n* (2023-05) [ChatCoT: Tool-Augmented Chain-of-Thought Reasoning on Chat-based Large Language Models](http:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14323) (grounding, reasoning)\n* (2023-05) [ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.11554) (grounding, reasoning)\n* (2023-05) [Decomposition Enhances Reasoning via Self-Evaluation Guided Decoding](http:\u002F\u002Farxiv.org\u002Fabs\u002F2305.00633) (reasoning)\n* (2023-05) [Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate](http:\u002F\u002Farxiv.org\u002Fabs\u002F2305.19118) (grounding, reasoning)\n* (2023-05) [Improving Factuality and Reasoning in Language 
Models through Multiagent Debate](http:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14325) (grounding, reasoning)\n* (2023-05) [AdaPlanner: Adaptive Planning from Feedback with Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.16653) (grounding, retrieval, learning)\n* (2023-05) [Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models](http:\u002F\u002Farxiv.org\u002Fabs\u002F2305.04091) (reasoning)\n* (2023-05) [ReWOO: Decoupling Reasoning from Observations for Efficient Augmented Language Models](http:\u002F\u002Farxiv.org\u002Fabs\u002F2305.18323) (grounding, reasoning)\n* (2023-05) [SwiftSage: A Generative Agent with Fast and Slow Thinking for Complex Interactive Tasks](http:\u002F\u002Farxiv.org\u002Fabs\u002F2305.17390) (grounding, reasoning)\n* (2023-05) [Tree of Thoughts: Deliberate Problem Solving with Large Language Models](http:\u002F\u002Farxiv.org\u002Fabs\u002F2305.10601) (reasoning)\n* (2023-05) [Voyager: An Open-Ended Embodied Agent with Large Language Models](http:\u002F\u002Farxiv.org\u002Fabs\u002F2305.16291) (grounding, reasoning, retrieval, learning)\n* (2023-06) [InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback](http:\u002F\u002Farxiv.org\u002Fabs\u002F2306.14898) (grounding, reasoning)\n* (2023-06) [ToolQA: A Dataset for LLM Question Answering with External Tools](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.13304) (grounding)\n* (2023-06) [Mind2Web: Towards a Generalist Agent for the Web](http:\u002F\u002Farxiv.org\u002Fabs\u002F2306.06070) (environment)\n* (2023-06) [RestGPT: Connecting Large Language Models with Real-World RESTful APIs](http:\u002F\u002Farxiv.org\u002Fabs\u002F2306.06624) (grounding, reasoning)\n* (2023-06) [ToolAlpaca: Generalized Tool Learning for Language Models with 3000 Simulated Cases](http:\u002F\u002Farxiv.org\u002Fabs\u002F2306.05301) (grounding, reasoning)\n* (2023-07) [A Real-World WebAgent with Planning, Long Context 
Understanding, and Program Synthesis](http:\u002F\u002Farxiv.org\u002Fabs\u002F2307.12856) (grounding, reasoning)\n* (2023-07) [RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control](http:\u002F\u002Farxiv.org\u002Fabs\u002F2307.15818) (grounding)\n* (2023-07) [RoCo: Dialectic Multi-Robot Collaboration with Large Language Models](http:\u002F\u002Farxiv.org\u002Fabs\u002F2307.04738) (grounding)\n* (2023-07) [Robots That Ask For Help: Uncertainty Alignment for Large Language Model Planners](http:\u002F\u002Farxiv.org\u002Fabs\u002F2307.01928) (grounding)\n* (2023-07) [S$^3$: Social-network Simulation System with Large Language Model-Empowered Agents](http:\u002F\u002Farxiv.org\u002Fabs\u002F2307.14984) (grounding, reasoning)\n* (2023-07) [ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs](http:\u002F\u002Farxiv.org\u002Fabs\u002F2307.16789) (grounding, reasoning, retrieval)\n* (2023-07) [Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support](http:\u002F\u002Farxiv.org\u002Fabs\u002F2307.15810) (grounding)\n* (2023-07) [Unleashing Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration](http:\u002F\u002Farxiv.org\u002Fabs\u002F2307.05300) (grounding, reasoning)\n* (2023-07) [WebArena: A Realistic Web Environment for Building Autonomous Agents](http:\u002F\u002Farxiv.org\u002Fabs\u002F2307.13854) (environment)\n* (2023-08) [AgentBench: Evaluating LLMs as Agents](http:\u002F\u002Farxiv.org\u002Fabs\u002F2308.03688) (environment)\n* (2023-08) [AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors in Agents](http:\u002F\u002Farxiv.org\u002Fabs\u002F2308.10848) (environment)\n* (2023-08) [AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework](http:\u002F\u002Farxiv.org\u002Fabs\u002F2308.08155) (grounding, reasoning)\n* (2023-08) [CGMI: 
Configurable General Multi-Agent Interaction Framework](http:\u002F\u002Farxiv.org\u002Fabs\u002F2308.12503) (grounding, reasoning)\n* (2023-08) [ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate](http:\u002F\u002Farxiv.org\u002Fabs\u002F2308.07201) (grounding, reasoning)\n* (2023-08) [Cumulative Reasoning with Large Language Models](http:\u002F\u002Farxiv.org\u002Fabs\u002F2308.04371) (reasoning)\n* (2023-08) [ExpeL: LLM Agents Are Experiential Learners](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.10144) (grounding, reasoning, retrieval, learning)\n* (2023-08) [GPT-in-the-Loop: Adaptive Decision-Making for Multiagent Systems](http:\u002F\u002Farxiv.org\u002Fabs\u002F2308.10435) (grounding, reasoning)\n* (2023-08) [Gentopia: A Collaborative Platform for Tool-Augmented LLMs](http:\u002F\u002Farxiv.org\u002Fabs\u002F2308.04030) (environment)\n* (2023-08) [MetaGPT: Meta Programming for Multi-Agent Collaborative Framework](http:\u002F\u002Farxiv.org\u002Fabs\u002F2308.00352) (grounding, reasoning)\n* (2023-08) [ProAgent: Building Proactive Cooperative AI with Large Language Models](http:\u002F\u002Farxiv.org\u002Fabs\u002F2308.11339) (grounding, reasoning)\n* (2023-08) [Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization](http:\u002F\u002Farxiv.org\u002Fabs\u002F2308.02151) (grounding, reasoning, learning)\n* (2023-08) [SAPIEN: Affective Virtual Agents Powered by Large Language Models](http:\u002F\u002Farxiv.org\u002Fabs\u002F2308.03022) (grounding, reasoning)\n* (2023-08) [Synergistic Integration of Large Language Models and Cognitive Architectures for Robust AI: An Exploratory Analysis](http:\u002F\u002Farxiv.org\u002Fabs\u002F2308.09830) (grounding, reasoning, retrieval, learning)\n* (2023-09) [ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.17452) (grounding, reasoning, learning)\n* (2023-09) [Identifying the Risks of LM Agents with an 
LM-Emulated Sandbox](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.15817) (environment)\n* (2023-09) [Suspicion Agent: Playing Imperfect Information Games with Theory of Mind Aware GPT-4](http:\u002F\u002Farxiv.org\u002Fabs\u002F2309.17277) (grounding, reasoning)\n* (2024-01) [Self-Contrast: Better Reflection Through Inconsistent Solving Perspectives](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.02009) (reasoning, reflection)\n* (2024-02) [Agent-Pro: Learning to Evolve via Policy-Level Reflection and Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.17574) (reasoning, reflection, learning)\n* (2024-03) [LLM3: Large Language Model-based Task and Motion Planning with Motion Failure Reasoning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.11552) (planning, reasoning)\n* (2024-04) [Empowering Biomedical Discovery with AI Agents](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.02831) (AI scientist, biomedical research)\n* (2024-05) [TimeChara: Evaluating Point-in-Time Character Hallucination of Role-Playing Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.18027) (reasoning, retrieval)\n* (2024-07) [AppWorld: A Controllable World of Apps and People for Benchmarking Interactive Coding Agents](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.18901) (environment, planning, grounding, reflection, reasoning, retrieval)\n\n\n(more to be added soon. pull request welcome.)\n\n## Resources\n* [LLM Powered Autonomous Agents (Lil’Log)](https:\u002F\u002Flilianweng.github.io\u002Fposts\u002F2023-06-23-agent\u002F)\n* [LLM-Agents-Papers](https:\u002F\u002Fgithub.com\u002FAGI-Edgerunners\u002FLLM-Agents-Papers)\n* [LLMAgentPapers](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLLMAgentPapers)\n* [awesome-llm-powered-agent](https:\u002F\u002Fgithub.com\u002Fhyp1231\u002Fawesome-llm-powered-agent)\n  \n(more to be added soon. 
pull request welcome.)\n","# 🐨CoALA：强大的语言智能体\n[![Awesome](https:\u002F\u002Fawesome.re\u002Fbadge.svg)](https:\u002F\u002Fawesome.re) [![许可证：MIT](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-MIT-yellow.svg)](LICENSE)  [![欢迎提交PR](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPRs-welcome-brightgreen)](https:\u002F\u002Fgithub.com\u002Fysymyth\u002Fawesome-language-agents\u002Fpulls)\n\n![teaser](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fysymyth_awesome-language-agents_readme_b684f8c949b7.png)\n\n\n这是一份基于**语言智能体认知架构（🐨CoALA）**框架的语言智能体合集。\n- CoALA论文（正文16页）：https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.02427\n- CoALA推文（6条线程）：https:\u002F\u002Ftwitter.com\u002FShunyuYao12\u002Fstatus\u002F1699396834983362690\n- 包含300多篇相关引用的CoALA BibTex文件：[CoALA.bib](CoALA.bib)\n- 如果您觉得我们的工作或资源有用，可使用以下BibTex格式引用：\n```bibtex\n@misc{sumers2023cognitive,\n      title={Cognitive Architectures for Language Agents}, \n      author={Theodore Sumers and Shunyu Yao and Karthik Narasimhan and Thomas L. 
Griffiths},\n      year={2023},\n      eprint={2309.02427},\n      archivePrefix={arXiv},\n      primaryClass={cs.AI}\n}\n```\n\n## 🐨CoALA 概述\nCoALA清晰地定义了一个语言智能体，其起点是**动作空间**，该空间分为两部分：\n* 用于与外部环境交互的外部动作（**接地**）\n* 用于与内部记忆交互的内部动作（**推理**、**检索**、**学习**）\n  * 一个语言智能体拥有短期工作记忆和若干（可选）长期记忆（情景记忆用于经验、语义记忆用于知识、程序性记忆用于代码\u002F大模型）\n  * **推理** = 更新工作记忆（通过大模型）\n  * **检索** = 读取长期记忆\n  * **学习** = 写入长期记忆\n![action_space](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fysymyth_awesome-language-agents_readme_b08e0e638cb4.png)\n\n\n那么，语言智能体如何选择要执行的动作呢？它的动作被组织成**决策循环**，每个循环包含两个阶段：\n* **规划**：智能体运用推理\u002F检索动作，迭代地提出并评估候选动作，最终选择一个学习或接地动作。\n* **执行**：所选的学习或接地动作被执行，从而影响内部记忆或外部世界。\n![decision_making](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fysymyth_awesome-language-agents_readme_782d83eca5fa.png)\n\n欲了解更多，请阅读我们论文的第4节（链接：https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.02427）。\n\n## 论文\n以下仅是从[CoALA.bib](CoALA.bib)中筛选出的一部分论文，并结合了拉取请求的内容，其中动作空间标签可能存在不准确之处。\n日期以arXiv v1版本为准。这些内容并未涵盖所有语言智能体相关工作，我们计划近期继续补充更多内容（欢迎提交拉取请求），并对高被引工作添加相应标签。\n\n* (2021-10) [AI Chains: 通过链式大型语言模型提示实现透明且可控的人机交互](http:\u002F\u002Farxiv.org\u002Fabs\u002F2110.01691) (推理)\n* (2021-10) [SILG: 多环境符号交互式语言接地基准](http:\u002F\u002Farxiv.org\u002Fabs\u002F2110.10661) (环境)\n* (2022-01) [语言模型作为零样本规划器：为具身智能体提取可操作知识](http:\u002F\u002Farxiv.org\u002Fabs\u002F2201.07207) (接地)\n* (2022-03) [PromptChainer: 通过可视化编程链式大型语言模型提示](http:\u002F\u002Farxiv.org\u002Fabs\u002F2203.06566) (接地)\n* (2022-03) [ScienceWorld: 你的智能体比五年级学生更聪明吗？](http:\u002F\u002Farxiv.org\u002Fabs\u002F2203.07540) (环境)\n* (2022-04) [照我能做到的做，而不是照我说的做：将语言与机器人可供性相结合](http:\u002F\u002Farxiv.org\u002Fabs\u002F2204.01691) (接地)\n* (2022-04) [苏格拉底模型：用语言构建零样本多模态推理](http:\u002F\u002Farxiv.org\u002Fabs\u002F2204.00598) (接地)\n* (2022-07) [WebShop: 基于接地语言智能体的可扩展现实世界网络交互](http:\u002F\u002Farxiv.org\u002Fabs\u002F2207.01206) (环境)\n* (2022-09) [ProgPrompt: 利用大型语言模型生成情境化的机器人任务计划](http:\u002F\u002Farxiv.org\u002Fabs\u002F2209.11302) (接地)\n* (2022-10) 
[分解式提示：解决复杂任务的模块化方法](http:\u002F\u002Farxiv.org\u002Fabs\u002F2210.02406) (推理)\n* (2022-10) [心灵之眼：通过仿真进行接地语言模型推理](http:\u002F\u002Farxiv.org\u002Fabs\u002F2210.05359) (接地)\n* (2022-10) [ReAct: 在语言模型中协同推理与行动](http:\u002F\u002Farxiv.org\u002Fabs\u002F2210.03629) (接地、推理)\n* (2022-11) [大型语言模型是人类级别的提示工程师](http:\u002F\u002Farxiv.org\u002Fabs\u002F2211.01910) (推理)\n* (2022-12) [LLM-Planner: 利用大型语言模型为具身智能体进行少样本接地规划](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.04088) (接地)\n* (2022-12) [不要生成，要判别：将语言模型接地到现实世界环境的建议](https:\u002F\u002Faclanthology.org\u002F2023.acl-long.270\u002F) (接地)\n* (2023-02) [事后链使语言模型与反馈对齐](http:\u002F\u002Farxiv.org\u002Fabs\u002F2302.02676v6) (学习)\n* (2023-02) [描述、解释、计划与选择：利用大型语言模型的交互式规划实现开放世界多任务智能体](http:\u002F\u002Farxiv.org\u002Fabs\u002F2302.01560) (接地、推理)\n* (2023-02) [Toolformer: 语言模型可以自我教授如何使用工具](http:\u002F\u002Farxiv.org\u002Fabs\u002F2302.04761) (接地)\n* (2023-03) [决策的基础模型：问题、方法与机遇](http:\u002F\u002Farxiv.org\u002Fabs\u002F2303.04129) (综述)\n* (2023-03) [HuggingGPT: 使用ChatGPT及其在Hugging Face中的伙伴解决AI任务](http:\u002F\u002Farxiv.org\u002Fabs\u002F2303.17580) (接地)\n* (2023-03) [PaLM-E: 一种具身多模态语言模型](http:\u002F\u002Farxiv.org\u002Fabs\u002F2303.03378) (接地)\n* (2023-03) [Reflexion: 具有言语强化学习的语言智能体](http:\u002F\u002Farxiv.org\u002Fabs\u002F2303.11366) (接地、推理、学习)\n* (2023-03) [Self-Refine: 基于自我反馈的迭代改进](http:\u002F\u002Farxiv.org\u002Fabs\u002F2303.17651) (推理)\n* (2023-03) [利用大型语言模型进行自我规划的代码生成](http:\u002F\u002Farxiv.org\u002Fabs\u002F2303.06689) (推理)\n* (2023-04) [生成式智能体：人类行为的交互式模拟](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.03442) (接地、推理、检索、学习)\n* (2023-04) [大型语言模型涌现的自主科学研究能力](http:\u002F\u002Farxiv.org\u002Fabs\u002F2304.05332) (接地、推理)\n* (2023-04) [LLM+P: 以最优规划能力赋能大型语言模型](http:\u002F\u002Farxiv.org\u002Fabs\u002F2304.11477) (接地、推理)\n* (2023-04) [REFINER: 针对中间表示的推理反馈](http:\u002F\u002Farxiv.org\u002Fabs\u002F2304.01904) (推理)\n* (2023-04) [教导大型语言模型自我调试](http:\u002F\u002Farxiv.org\u002Fabs\u002F2304.05128) (推理)\n* (2023-04) [GeneGPT: 
Augmenting Large Language Models with Domain Tools for Improved Access to Biomedical Information](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.09667) (grounding, reasoning)\n* (2023-05) [CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2305.11738.pdf) (grounding, reasoning, retrieval)\n* (2023-05) [Augmenting Autotelic Agents with Large Language Models](http:\u002F\u002Farxiv.org\u002Fabs\u002F2305.12487) (grounding, reasoning, retrieval, learning)\n* (2023-05) [ChatCoT: Tool-Augmented Chain-of-Thought Reasoning on Chat-based Large Language Models](http:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14323) (grounding, reasoning)\n* (2023-05) [ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.11554) (grounding, reasoning)\n* (2023-05) [Decomposition Enhances Reasoning via Self-Evaluation Guided Decoding](http:\u002F\u002Farxiv.org\u002Fabs\u002F2305.00633) (reasoning)\n* (2023-05) [Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate](http:\u002F\u002Farxiv.org\u002Fabs\u002F2305.19118) (grounding, reasoning)\n* (2023-05) [Improving Factuality and Reasoning in Language Models through Multiagent Debate](http:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14325) (grounding, reasoning)\n* (2023-05) [AdaPlanner: Adaptive Planning from Feedback with Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.16653) (grounding, retrieval, learning)\n* (2023-05) [Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models](http:\u002F\u002Farxiv.org\u002Fabs\u002F2305.04091) (reasoning)\n* (2023-05) [ReWOO: Decoupling Reasoning from Observations for Efficient Augmented Language Models](http:\u002F\u002Farxiv.org\u002Fabs\u002F2305.18323) (grounding, reasoning)\n* (2023-05) [SwiftSage: A Generative Agent with Fast and Slow Thinking for Complex Interactive Tasks](http:\u002F\u002Farxiv.org\u002Fabs\u002F2305.17390) (grounding, reasoning)\n* (2023-05) [Tree of Thoughts: Deliberate Problem Solving with Large Language Models](http:\u002F\u002Farxiv.org\u002Fabs\u002F2305.10601) (reasoning)\n* (2023-05) [Voyager: An Open-Ended Embodied Agent with Large Language Models](http:\u002F\u002Farxiv.org\u002Fabs\u002F2305.16291) (grounding, reasoning, retrieval, learning)\n* (2023-06) [InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback](http:\u002F\u002Farxiv.org\u002Fabs\u002F2306.14898) (grounding, reasoning)\n* (2023-06) [ToolQA: A Dataset for LLM Question Answering with External Tools](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.13304) (grounding)\n* (2023-06) [Mind2Web: Towards a Generalist Agent for the Web](http:\u002F\u002Farxiv.org\u002Fabs\u002F2306.06070) (environment)\n* (2023-06) [RestGPT: Connecting Large Language Models with Real-World RESTful APIs](http:\u002F\u002Farxiv.org\u002Fabs\u002F2306.06624) (grounding, reasoning)\n* (2023-06) [ToolAlpaca: Generalized Tool Learning for Language Models with 3000 Simulated Cases](http:\u002F\u002Farxiv.org\u002Fabs\u002F2306.05301) (grounding, reasoning)\n* (2023-07) [A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis](http:\u002F\u002Farxiv.org\u002Fabs\u002F2307.12856) (grounding, reasoning)\n* (2023-07) [RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control](http:\u002F\u002Farxiv.org\u002Fabs\u002F2307.15818) (grounding)\n* (2023-07) [RoCo: Dialectic Multi-Robot Collaboration with Large Language Models](http:\u002F\u002Farxiv.org\u002Fabs\u002F2307.04738) (grounding)\n* (2023-07) [Robots That Ask For Help: Uncertainty Alignment for Large Language Model Planners](http:\u002F\u002Farxiv.org\u002Fabs\u002F2307.01928) (grounding)\n* (2023-07) [S$^3$: Social-network Simulation System with Large Language Model-Empowered Agents](http:\u002F\u002Farxiv.org\u002Fabs\u002F2307.14984) (grounding, reasoning)\n* (2023-07) [ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs](http:\u002F\u002Farxiv.org\u002Fabs\u002F2307.16789) (grounding, reasoning, retrieval)\n* (2023-07) [Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support](http:\u002F\u002Farxiv.org\u002Fabs\u002F2307.15810) (grounding)\n* (2023-07) [Unleashing Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration](http:\u002F\u002Farxiv.org\u002Fabs\u002F2307.05300) (grounding, reasoning)\n* (2023-07) [WebArena: A Realistic Web Environment for Building Autonomous Agents](http:\u002F\u002Farxiv.org\u002Fabs\u002F2307.13854) (environment)\n* (2023-08) [AgentBench: Evaluating LLMs as Agents](http:\u002F\u002Farxiv.org\u002Fabs\u002F2308.03688) (environment)\n* (2023-08) [AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors in Agents](http:\u002F\u002Farxiv.org\u002Fabs\u002F2308.10848) (environment)\n* (2023-08) [AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework](http:\u002F\u002Farxiv.org\u002Fabs\u002F2308.08155) (grounding, reasoning)\n* (2023-08) [CGMI: Configurable General Multi-Agent Interaction Framework](http:\u002F\u002Farxiv.org\u002Fabs\u002F2308.12503) (grounding, reasoning)\n* (2023-08) [ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate](http:\u002F\u002Farxiv.org\u002Fabs\u002F2308.07201) (grounding, reasoning)\n* (2023-08) [Cumulative Reasoning with Large Language Models](http:\u002F\u002Farxiv.org\u002Fabs\u002F2308.04371) (reasoning)\n* (2023-08) [ExpeL: LLM Agents Are Experiential Learners](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.10144) (grounding, reasoning, retrieval, learning)\n* (2023-08) [GPT-in-the-Loop: Adaptive Decision-Making for Multiagent Systems](http:\u002F\u002Farxiv.org\u002Fabs\u002F2308.10435) (grounding, reasoning)\n* (2023-08) [Gentopia: A Collaborative Platform for Tool-Augmented LLMs](http:\u002F\u002Farxiv.org\u002Fabs\u002F2308.04030) (environment)\n* (2023-08) [MetaGPT: Meta Programming for Multi-Agent Collaborative Framework](http:\u002F\u002Farxiv.org\u002Fabs\u002F2308.00352) (grounding, reasoning)\n* (2023-08) [ProAgent: Building Proactive Cooperative AI with Large Language Models](http:\u002F\u002Farxiv.org\u002Fabs\u002F2308.11339) (grounding, reasoning)\n* (2023-08) [Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization](http:\u002F\u002Farxiv.org\u002Fabs\u002F2308.02151) (grounding, reasoning, learning)\n* (2023-08) [SAPIEN: Affective Virtual Agents Powered by Large Language Models](http:\u002F\u002Farxiv.org\u002Fabs\u002F2308.03022) (grounding, reasoning)\n* (2023-08) [Synergistic Integration of Large Language Models and Cognitive Architectures for Robust AI: An Exploratory Analysis](http:\u002F\u002Farxiv.org\u002Fabs\u002F2308.09830) (grounding, reasoning, retrieval, learning)\n* (2023-09) [ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.17452) (grounding, reasoning, learning)\n* (2023-09) [Identifying the Risks of LM Agents with an LM-Emulated Sandbox](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.15817) (environment)\n* (2023-09) [Suspicion-Agent: Playing Imperfect Information Games with Theory of Mind Aware GPT-4](http:\u002F\u002Farxiv.org\u002Fabs\u002F2309.17277) (grounding, reasoning)\n* (2024-01) [Self-Contrast: Better Reflection through Inconsistent Solving Perspectives](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.02009) (reasoning, reflection)\n* (2024-02) [Agent-Pro: Learning to Evolve via Policy-Level Reflection and Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.17574) (reasoning, reflection, learning)\n* (2024-03) [LLM3: Large Language Model-based Task and Motion Planning with Motion Failure Reasoning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.11552) (planning, reasoning)\n* (2024-04) [Empowering Biomedical Discovery with AI Agents](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.02831) (AI scientist, biomedical research)\n* (2024-05) [TimeChara: Evaluating Point-in-Time Character Hallucination of Role-Playing Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.18027) (reasoning, retrieval)\n* (2024-07) [AppWorld: A Controllable World of Apps and People for Benchmarking Interactive Coding Agents](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.18901) (environment, planning, grounding, reflection, reasoning, retrieval)\n\n(More coming soon. Pull requests welcome.)\n\n\n\n## Resources\n* [LLM Powered Autonomous Agents (Lil’Log)](https:\u002F\u002Flilianweng.github.io\u002Fposts\u002F2023-06-23-agent\u002F)\n* [LLM-Agents-Papers](https:\u002F\u002Fgithub.com\u002FAGI-Edgerunners\u002FLLM-Agents-Papers)\n* [LLMAgentPapers](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FLLMAgentPapers)\n* [awesome-llm-powered-agent](https:\u002F\u002Fgithub.com\u002Fhyp1231\u002Fawesome-llm-powered-agent)\n  \n(More coming soon. Pull requests welcome.)",
"# awesome-language-agents Quick Start Guide\n\n**About the project**: `awesome-language-agents` (CoALA) is not a single installable library or framework; it is a **curated resource list**. Built around the \"Cognitive Architectures for Language Agents (CoALA)\" framework, it collects related academic papers, benchmarks, code implementations, and tools.\n\nThis guide helps developers quickly grasp the core concepts and points you to concrete open-source projects on the list (such as ReAct, AutoGen, and MetaGPT) that you can actually run.\n\n## 1. Environment Setup\n\nThe repository links to hundreds of different projects, each with its own dependencies. Before exploring, the following general development environment is recommended:\n\n*   **Operating system**: Linux (Ubuntu 20.04+ recommended), macOS, or Windows (WSL2).\n*   **Python version**: **Python 3.9 - 3.11** is recommended (the range best supported by most LLM-related projects).\n*   **Package manager**: `pip` or `conda`.\n*   **GPU support** (optional): to run models locally, make sure the NVIDIA driver and CUDA Toolkit are installed.\n*   **Git**: for cloning the repository and sub-projects.\n\n**Prerequisite setup example**:\n```bash\n# Create an isolated virtual environment (recommended)\npython -m venv coala-env\nsource coala-env\u002Fbin\u002Factivate  # On Windows use: coala-env\\Scripts\\activate\n\n# Upgrade base tooling\npip install --upgrade pip setuptools wheel\n```\n\n## 2. Getting the Resource List\n\nThe project itself requires no \"installation\"; just clone the repository to get the latest paper list and code links.\n\n**Clone the repository**:\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fysymyth\u002Fawesome-language-agents.git\ncd awesome-language-agents\n```\n\n**Acceleration inside mainland China**:\nIf GitHub access is slow, you can clone from a domestic mirror (first confirm the mirror actually syncs this particular repository; otherwise use a proxy or wait):\n```bash\n# Example: a Gitee mirror (if one exists) or a configured Git proxy\ngit clone https:\u002F\u002Fgitee.com\u002Fmirror\u002Fawesome-language-agents.git \n# If no direct mirror exists, configure a global proxy and use the original command\n```\n\n## 3. Basic Usage and Running Projects\n\n\"Using\" this resource really means picking a concrete agent project from the list and deploying it. A typical workflow, organized by the CoALA taxonomy:\n\n### Step 1: Pick a target project\nBrowse the repository's `README.md` or the [CoALA.bib](CoALA.bib) file and choose a project by need. For example:\n*   **Reasoning**: *ReAct*, *Tree of Thoughts*, *MetaGPT*.\n*   **Tool use (Grounding)**: *Toolformer*, *HuggingGPT*, *AutoGen*.\n*   **Memory and learning (Learning\u002FRetrieval)**: *Reflexion*, *Generative Agents*.\n\n### Step 2: Get the specific project's code\nSuppose you picked **ReAct** (Synergizing Reasoning and Acting); you would normally locate its code repository from the project's official GitHub page and clone it.\n\n```bash\n# Example: clone a typical CoALA-related project (here, a common ReAct implementation)\ngit clone https:\u002F\u002Fgithub.com\u002Fysymyth\u002FReAct.git\ncd ReAct\n```\n\n### Step 3: Install that project's dependencies\nEach sub-project ships its own `requirements.txt`.\n\n```bash\n# Install the project's dependencies\npip install -r requirements.txt\n```\n*Note: some larger projects provide a `setup.py` or Docker image instead; defer to the specific project's README.*\n\n### Step 4: Run an example\nMost projects provide simple demo scripts. You will need to configure an API key (e.g. for the OpenAI API) or use a local model.\n\n```bash\n# Set the environment variable (OpenAI as an example)\nexport OPENAI_API_KEY=\"your-api-key-here\"\n\n# Run a demo script (exact commands vary by project; these are common patterns)\npython demo.py --task \"solve a math problem\" \n# or\npython run_agent.py --model gpt-3.5-turbo\n```\n\n### Going further: research with the CoALA taxonomy\nYou can use the repository's tags to filter papers quickly:\n*   See the **[CoALA.bib](CoALA.bib)** file, which contains 300+ references tagged by action space (`reasoning`, `grounding`, `learning`, and so on).\n*   Import the `.bib` file into your reference manager to survey language-agent research systematically through the lens of the CoALA architecture.\n\n---\n**Tip**: since this is a continuously updated list, run `git pull` periodically to get the latest papers and project links. For code errors in a specific project, consult that sub-project's issue page.",
"An e-commerce company's algorithms team is building a customer-service agent that can autonomously complete complex shopping tasks (such as \"compare three cameras and order the best-value one\").\n\n### Without awesome-language-agents\n- **Blind architecture design**: with no shared cognitive-architecture reference, the team struggled to decide when the agent should call external search tools (grounding) versus update internal memory (learning), leaving the system logic muddled.\n- **Missing memory mechanisms**: the bot could not separate short-term working memory from long-term experience, so it kept forgetting user preferences and re-running searches that had already failed.\n- **Broken decision loop**: without a standard plan-execute loop, the agent fell into dead loops or hallucinated outputs on multi-step tasks, with no ability to self-correct.\n- **Inefficient technology selection**: developers had to sift through hundreds of papers from scratch and could not quickly find proven implementations such as WebShop or Decomposed Prompting, stretching out the development cycle.\n\n### With awesome-language-agents\n- **Clear, standardized architecture**: following the CoALA framework, the team quickly defined a clean action space and a clear boundary between environment interaction and internal reasoning, markedly improving system stability.\n- **Fine-grained memory management**: drawing on mature agent designs from the list, they separated episodic memory (experiences) from semantic memory (knowledge), so the bot accurately recalls past conversations and reuses successful strategies.\n- **Automated decision flow**: a standard planning-and-execution loop lets the agent replan automatically when it hits an obstacle, raising the completion rate on complex tasks from 45% to 82%.\n- **One-stop resources**: using the 300+ curated core papers and code repositories, the team built a prototype within a week instead of reinventing the wheel.\n\nBy providing a standardized cognitive-architecture blueprint and curated resources, awesome-language-agents turns agent development from blind trial and error into systematic engineering, greatly accelerating the delivery of advanced language agents.",
"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fysymyth_awesome-language-agents_6327c4a8.png","ysymyth","Shunyu Yao","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fysymyth_12822ea0.jpg",null,"Princeton University","shunyuyao.cs@gmail.com","https:\u002F\u002Fgithub.com\u002Fysymyth",[81],{"name":82,"color":83,"percentage":84},"TeX","#3D6117",100,1196,81,"2026-04-10T00:37:23",1,"","Not specified",{"notes":92,"python":90,"dependencies":93},
"This project (awesome-language-agents \u002F CoALA) is a paper and resource list (awesome list) that collects research built on the \"Cognitive Architectures for Language Agents\" framework. It is not itself an executable software tool or codebase, so it has no specific operating system, GPU, memory, Python version, or dependency requirements. The actual runtime requirements depend on the specific open-source implementations of the individual papers on the list.",
[],[13,35,14],[96,97,98,99,100,101],"artificial-intelligence","awesome-list","language-agent","language-model","llm","natural-language-processing","2026-03-27T02:49:30.150509","2026-04-11T22:00:06.533861",[],[]]