[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-OpenBMB--AgentVerse":3,"tool-OpenBMB--AgentVerse":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":75,"owner_avatar_url":76,"owner_bio":77,"owner_company":78,"owner_location":78,"owner_email":79,"owner_twitter":75,"owner_website":80,"owner_url":81,"languages":82,"stars":109,"forks":110,"last_commit_at":111,"license":112,"difficulty_score":113,"env_os":114,"env_gpu":115,"env_ram":114,"env_deps":116,"category_tags":126,"github_topics":127,"view_count":23,"oss_zip_url":78,"oss_zip_packed_at":78,"status":16,"created_at":133,"updated_at":134,"faqs":135,"releases":166},2391,"OpenBMB\u002FAgentVerse","AgentVerse","🤖 AgentVerse 🪐 is designed to facilitate the deployment of multiple LLM-based agents in various applications, which primarily provides two frameworks: task-solving and simulation","AgentVerse 是一个专为部署多个大语言模型（LLM）智能体而设计的开源框架，旨在让 AI 智能体从“单兵作战”走向“团队协作”。它主要解决了复杂任务中单个模型能力有限、难以模拟真实多角色互动环境的痛点。\n\n该工具核心提供两大功能框架：一是“任务求解”，能将多个智能体组装成自动化系统，通过分工协作完成软件开发、专业咨询等复杂工作；二是“环境仿真”，允许用户自定义场景，观察或与多个智能体进行互动，适用于游戏开发及社会行为研究。\n\nAgentVerse 非常适合 AI 开发者、研究人员以及希望探索多智能体协同应用的技术团队使用。其独特亮点在于灵活的双架构设计，既支持构建高效的任务解决系统，又能搭建高自由度的仿真沙盒。此外，项目代码规范严谨，拥有活跃的社区支持，并已被顶级学术会议 ICLR 2024 收录，甚至获得了 NVIDIA 官方博客的推荐，是进入多智能体应用开发领域的可靠选择。","\u003Ch1 align=\"center\"> 🤖 AgentVerse 🪐 \u003C\u002Fh1>\n\n\u003C!--\n\u003Ch3 align=\"center\">\n    \u003Cp>A Framework for Multi-LLM Environment 
Simulation\u003C\u002Fp>\n\u003C\u002Fh3>\n-->\n\n\u003Cp align=\"center\">\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fblob\u002Fmain\u002FLICENSE\">\n        \u003Cimg alt=\"License: Apache2\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-Apache_2.0-green.svg\">\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fwww.python.org\u002Fdownloads\u002Frelease\u002Fpython-3916\u002F\">\n        \u003Cimg alt=\"Python Version\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpython-3.9+-blue.svg\">\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Factions\u002F\">\n        \u003Cimg alt=\"Build\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Factions\u002Fworkflow\u002Fstatus\u002FOpenBMB\u002FAgentVerse\u002Ftest.yml\">\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fpsf\u002Fblack\">\n        \u003Cimg alt=\"Code Style: Black\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fcode%20style-black-black\">\n\u003C!--     \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fissues\">\n        \u003Cimg alt=\"Contributions: Welcome\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fcontributions-welcome-brightgreen.svg?style=flat\">\n    \u003C\u002Fa> -->\n    \u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002FAgentVerse\">\n        \u003Cimg alt=\"HuggingFace\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fhugging_face-play-yellow\">\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fdiscord.gg\u002FgDAXfjMw\">\n        \u003Cimg alt=\"Discord\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FAgentVerse-Discord-purple?style=flat\">\n    \u003C\u002Fa>\n    \n    \n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOpenBMB_AgentVerse_readme_c0a185479fc7.png\" 
width=\"512\">\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n    【\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.10848\">Paper\u003C\u002Fa>】 \n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n    【English | \u003Ca href=\"README_zh.md\">Chinese\u003C\u002Fa>】 \n\u003C\u002Fp>\n\n**AgentVerse** is designed to facilitate the deployment of multiple LLM-based agents in various applications. AgentVerse primarily provides two frameworks: **task-solving** and **simulation**. \n\n- Task-solving: This framework assembles multiple agents as an automatic multi-agent system ([AgentVerse-Tasksolving](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2308.10848.pdf), [Multi-agent as system](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.02427)) to collaboratively accomplish the corresponding tasks. \nApplications: software development system, consulting system, etc. \n\n\u003Cp align=\"center\">\n\u003Cimg width=\"616\" alt=\"Screen Shot 2023-09-01 at 12 08 57 PM\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOpenBMB_AgentVerse_readme_41544d8cab0a.png\">\n\u003C\u002Fp>\n\n- Simulation: This framework allows users to set up custom environments to observe behaviors among, or interact with, multiple agents. ⚠️⚠️⚠️ We're refactoring the code. If you require a stable version that exclusively supports simulation framework, you can use [`release-0.1`](https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Ftree\u002Frelease-0.1) branch. 
Applications: game, social behavior research of LLM-based agents, etc.\n\n\u003Cp align=\"center\">\n\u003Cimg width=\"616\" alt=\"Screen Shot 2023-10-16 at 10 53 49 PM\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOpenBMB_AgentVerse_readme_a8cee2a3594d.png\">\n\u003C\u002Fp>\n\n\n---\n\n\n# 📰 What's New\n- [2024\u002F3\u002F17] AgentVerse was introduced in NVIDIA's blog - [Building Your First LLM Agent Application](https:\u002F\u002Fdeveloper.nvidia.com\u002Fblog\u002Fbuilding-your-first-llm-agent-application\u002F). \n\n- [2024\u002F1\u002F17] We're super excited to announce that our paper got accepted at ICLR 2024. More updates will be coming soon!\n  \n- [2023\u002F10\u002F17] We're super excited to share our open-source AI community hugging face: [`AgentVerse`](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FAgentVerse\u002FagentVerse). You are able to try out the two simulation applications, NLP Classroom and Prisoner's Dilemma,with your code of the openai API key and the openai organization. Have fun!\n\n- [2023\u002F10\u002F5] Re-factor our codebase to enable the deployment of both simulation and task-solving framework! We have placed the code for Minecraft example in the paper at the [`minecraft`](https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Ftree\u002Fminecraft) branch. Our tool-using example will soon be updated to the `main` branch. 
Stay tuned!\n\n- [2023\u002F8\u002F22] We're excited to share our paper [AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors in Agents](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.10848) that  illustrate the task-solving framework \nin detail of AgentVerse.\n\n- [2023\u002F6\u002F5] We are thrilled to present an array of [demos](#-simple-demo-video), including [NLP Classroom](#nlp-classroom), [Prisoner Dilemma](#prisoner-dilemma), [Software Design](#software-design), [Database Administrator](#database-administrator-dba), and a simple [H5 Pokemon Game](#pokemon) that enables the interaction with the characters in Pokemon! Try out these demos and have fun!\n- [2023\u002F5\u002F1] 🚀 [AgentVerse](https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse) is officially launched!\n\n\n\n\n# 🗓 Coming Soon\n- [x] Code release of our [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.10848)\n- [x] Add support for local LLM (LLaMA, Vicunna, etc.)\n- [ ] Add documentation\n- [ ] Support more sophisticated memory for conversation history\n\n\n\u003C!--\n\n## 👾 Simple Demo Video\n\nWe demonstrate the following cases that are expertly crafted by AgentVerse.\n\n\n#### NLP Classroom\nIn the NLP class, the professor and students engage in interactive communication. When students have a question, they raise their hands and patiently wait for the professor to call on them. 
Only after being called on by the professor, can students speak and ask their questions.\n\nUse the following command to launch the NLP Classroom example:\n```bash\npython agentverse_command\u002Fmain_simulation_gui.py --task simulation\u002Fnlp_classroom_9players\n```\n\nhttps:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fassets\u002F11704492\u002F6ea07850-595e-4a28-a82e-f863011353c2\n\n\n#### Prisoner Dilemma\nA prisoner's Dilemma is a thought experiment that challenges two completely rational agents to a dilemma: they can cooperate with their partner for mutual benefit or betray their partner (\"defect\") for individual reward.\n\nUse the following command to launch the Prisoner Dilemma example:\n```bash\npython agentverse_command\u002Fmain_simulation_gui.py --task simulation\u002Fprisoner_dilemma\n```\n\nhttps:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fassets\u002F11704492\u002F017c46e5-c738-4fca-9352-b008e2d518bd\n\n\n#### Software Design\nIn the Software Design example, a code writer, a code tester and a code reviewer collaborate on the code generation problem. Given a problem, the code writer first composes the code implementation. The code tester runs the unit tests and provides the feedback. The code viewer then generates a review. After collecting the test feedback and review, the code writer iteratively refines the code.\n\nUse the following command to launch the Software Design example:\n```bash\npython agentverse_command\u002Fmain_simulation_gui.py --task simulation\u002Fsde_team\u002Fsde_team_2players\n```\n\nhttps:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fassets\u002F11704492\u002F5058066a-abee-490d-8659-b4e54661626a\n\n\n#### [Database Administrator (DBA)](https:\u002F\u002Fgithub.com\u002FTsinghuaDatabaseGroup\u002FDB-GPT)\n\nIn the database diagnosis scenario, the Chief DBA monitors the system anomalies (e.g., slow queries, locks, crash down). 
If detected, the domain experts are alerted to analyze root causes, share insights, and suggest optimization solutions together. The Chief DBA then provides a summarized report to the user.\n\n```bash\npython agentverse_command\u002Fmain_simulation_gui.py --task simulation\u002Fdb_diag\n```\n\nhttps:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fassets\u002F11704492\u002Fc633419d-afbb-47d4-bb12-6bb512e7af3a\n\n#### [Text Evaluation (ChatEval)](https:\u002F\u002Fgithub.com\u002Fchanchimin\u002FChatEval)\nIn the context of the text evaluation scenario, we recommend users explore the [ChatEval](https:\u002F\u002Fgithub.com\u002Fchanchimin\u002FChatEval) repo. They've implemented a multi-agent referee team on AgentVerse to assess the quality of text generated by different models. When given two distinct pieces of text, roles within ChatEval can autonomously debate the nuances and disparities, drawing upon their assigned personas, and subsequently provide their judgments. Experiments indicate that their referee team, enriched with diverse roles specified in [config.yaml](#2-configuring-the-agents), aligns more closely with human evaluations. This demo is built upon the [Fastchat](https:\u002F\u002Fgithub.com\u002Flm-sys\u002FFastChat) repo, and we'd like to express our appreciation for their foundational work.\n\n\nhttps:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fassets\u002F75533759\u002F58f33468-f15b-4bac-ae01-8d0780019f85\n\n#### Pokemon\n**Currently available only in [`release-0.1`](https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Ftree\u002Frelease-0.1)**. In the game, agents can walk around the game world, and interact with one another. As a player, you take on the role of an agent and can engage with others at any time. 
There are 6 characters in the Pokémon environment who appeared in Pokemon Emerald: [May](https:\u002F\u002Fbulbapedia.bulbagarden.net\u002Fwiki\u002FMay_(game)), [Professor Birch](https:\u002F\u002Fbulbapedia.bulbagarden.net\u002Fwiki\u002FProfessor_Birch), [Steven Stone](https:\u002F\u002Fbulbapedia.bulbagarden.net\u002Fwiki\u002FSteven_Stone), [Maxie](https:\u002F\u002Fbulbapedia.bulbagarden.net\u002Fwiki\u002FMaxie), [Archie](https:\u002F\u002Fbulbapedia.bulbagarden.net\u002Fwiki\u002FArchie) and [Joseph](https:\u002F\u002Fbulbapedia.bulbagarden.net\u002Fwiki\u002FMr._Stone). \n\nTo launch the Pokemon game, first launch a local server with the following command:\n```bash\nuvicorn pokemon_server:app --reload --port 10002\n```\nThen open another terminal in the project's root path and run the following command:\n```bash\ncd ui\n# If you do not have npm installed, you need to install it before running the following commands \n# https:\u002F\u002Fdocs.npmjs.com\u002Fdownloading-and-installing-node-js-and-npm\n# We have tested on npm@9.6.4, node@20.0.0\nnpm install\nnpm run watch\n```\nWait for the compilation to complete, and have fun! 
(WASD for moving around, and SPACE for launching a conversation.)\n\nhttps:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fassets\u002F11704492\u002F4d07da68-f942-4205-b558-f155e95782e7\n\n-->\n\n\u003C!--\n\n## Contents\n\n- [✨ Features](#-features)\n- [📰 What's New](#-whats-new)\n- [🌟 Join Us!](#-join-us)\n  - [How Can You Contribute?](#how-can-you-contribute)\n- [🗓 Coming Soon](#-coming-soon)\n- [👾 Simple Demo Video](#-simple-demo-video)\n    - [NLP Classroom](#nlp-classroom)\n    - [Prisoner Dilemma](#prisoner-dilemma)\n    - [Software Design](#software-design)\n    - [Database Administrator (DBA)](#database-administrator-dba)\n    - [Text Evaluation (ChatEval)](#text-evaluation-chateval)\n    - [Pokemon](#pokemon)\n- [Contents](#contents)\n- [🚀 Getting Started](#-getting-started)\n  - [Installation](#installation)\n  - [Simulation Example](#simulation)\n  - [Task-Solving Example](#task-solving-cli-example)\n- [💡 Philosophy](#-philosophy)\n  - [Environment](#environment)\n  - [Agent](#agent)\n- [✍️ Customize Your Own Environment](#️-customize-your-own-environment)\n  - [A Simple Example: Building a Classroom Environment](#a-simple-example-building-a-classroom-environment)\n      - [1. Creating a Task Directory and Configuring the Environment](#1-creating-a-task-directory-and-configuring-the-environment)\n      - [2. Configuring the Agents](#2-configuring-the-agents)\n      - [3. 
Writing an Output Parser](#3-writing-an-output-parser)\n  - [Customization Guide for More Complex Environments](#customization-guide-for-more-complex-environments)\n- [🔎 Examples](#-examples)\n- [Star History](#star-history)\n- [Citation](#citation)\n- [Contact](#contact)\n\n-->\n\n# Contents\n- [📰 What's New](#-whats-new)\n- [🗓 Coming Soon](#-coming-soon)\n- [Contents](#contents)\n- [🚀 Getting Started](#-getting-started)\n  - [Installation](#installation)\n  - [Environment Variables](#environment-variables)\n  - [Simulation](#simulation)\n    - [Framework Required Modules](#framework-required-modules)\n    - [CLI Example](#cli-example)\n    - [GUI Example](#gui-example)\n  - [Task-Solving](#task-solving)\n    - [Framework Required Modules](#framework-required-modules-1)\n    - [CLI Example](#cli-example-1)\n  - [Local Model Support](#local-model-support)\n  - [vLLM Support](#vllm-support)\n  - [FSChat Support](#fschat-support)\n    - [1. Install the Additional Dependencies](#1-install-the-additional-dependencies)\n    - [2. Launch the Local Server](#2-launch-the-local-server)\n    - [3. 
Modify the Config File](#3-modify-the-config-file)\n- [AgentVerse Showcases](#agentverse-showcases)\n  - [Simulation Showcases](#simulation-showcases)\n  - [Task-Solving Showcases](#task-solving-showcases)\n- [🌟 Join Us!](#-join-us)\n  - [Leaders](#leaders)\n  - [Contributors](#contributors)\n  - [How Can You Contribute?](#how-can-you-contribute)\n  - [Social Media and Community](#social-media-and-community)\n- [Star History](#star-history)\n  - [Citation](#citation)\n- [Contact](#contact)\n\n\n\n\n# 🚀 Getting Started\n\n## Installation\n\n\n**Manually Install (Recommended!)**\n\n**Make sure you have Python >= 3.9**\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse.git --depth 1\ncd AgentVerse\npip install -e .\n```\n\nIf you want to use AgentVerse with local models such as LLaMA, you need to additionally install some other dependencies:\n```bash\npip install -r requirements_local.txt\n```\n\n**Install with pip**\n\nOr you can install through pip\n```bash\npip install -U agentverse\n```\n\n## Environment Variables\nYou need to export your OpenAI API key as follows：\n```bash\n# Export your OpenAI API key\nexport OPENAI_API_KEY=\"your_api_key_here\"\n```\n\nIf you want use Azure OpenAI services, please export your Azure OpenAI key and OpenAI API base as follows：\n```bash\nexport AZURE_OPENAI_API_KEY=\"your_api_key_here\"\nexport AZURE_OPENAI_API_BASE=\"your_api_base_here\"\n```\n\n## Simulation\n\n### Framework Required Modules \n```\n- agentverse \n  - agents\n    - simulation_agent\n  - environments\n    - simulation_env\n```\n\n### CLI Example\n\nYou can create a multi-agent environments provided by us. Using the classroom scenario as an example. In this scenario, there are nine agents, one playing the role of a professor and the other eight as students.\n\n```shell\nagentverse-simulation --task simulation\u002Fnlp_classroom_9players\n```\n\n### GUI Example\n\nWe also provide a local website demo for this environment. 
You can launch it with\n\n```shell\nagentverse-simulation-gui --task simulation\u002Fnlp_classroom_9players\n```\nAfter successfully launching the local server, you can visit [http:\u002F\u002F127.0.0.1:7860\u002F](http:\u002F\u002F127.0.0.1:7860\u002F) to view the classroom environment.\n\nIf you want to run the simulation cases with tools (e.g., simulation\u002Fnlp_classroom_3players_withtool), you need to install BMTools as follows:\n```bash\ngit clone git+https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FBMTools.git\ncd BMTools\npip install -r requirements.txt\npython setup.py develop\n```\nThis is optional. If you do not install BMTools, the simulation cases without tools can still run normally.\n\n## Task-Solving \n\n\n### Framework Required Modules \n```\n- agentverse \n  - agents\n    - simulation_env\n  - environments\n    - tasksolving_env\n```\n\n### CLI Example\n\nTo run the experiments with the task-solving environment proposed in our [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.10848), you can use the following command:\n\nTo run AgentVerse on a benchmark dataset, you can try\n```shell\n# Run the Humaneval benchmark using gpt-3.5-turbo (config file `agentverse\u002Ftasks\u002Ftasksolving\u002Fhumaneval\u002Fgpt-3.5\u002Fconfig.yaml`)\nagentverse-benchmark --task tasksolving\u002Fhumaneval\u002Fgpt-3.5 --dataset_path data\u002Fhumaneval\u002Ftest.jsonl --overwrite\n```\n\nTo run AgentVerse on a specific problem, you can try\n```shell\n# Run a single query (config file `agentverse\u002Ftasks\u002Ftasksolving\u002Fbrainstorming\u002Fgpt-3.5\u002Fconfig.yaml`). The task is specified in the config file.\nagentverse-tasksolving --task tasksolving\u002Fbrainstorming\n```\n\nTo run the tool using cases presented in our paper, i.e., multi-agent using tools such as web browser, Jupyter notebook, bing search, etc., you can first build ToolsServer provided by [XAgent](https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FXAgent). 
You can follow their [instruction](https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FXAgent#%EF%B8%8F-build-and-setup-toolserver) to build and run the ToolServer.\n\nAfter building and launching the ToolServer, you can use the following command to run the task-solving cases with tools:\n```shell\nagentverse-tasksolving --task tasksolving\u002Ftool_using\u002F24point\n```\nWe have provided more tasks in `agentverse\u002Ftasks\u002Ftasksolving\u002Ftool_using\u002F` that show how multi-agent can use tools to solve problems.\n\nAlso, you can take a look at `agentverse\u002Ftasks\u002Ftasksolving` for more experiments we have done in our paper.\n\n## Local Model Support\n## vLLM Support\nIf you want to use vLLM, follow the guide [here](https:\u002F\u002Fdocs.vllm.ai\u002Fen\u002Flatest\u002Fgetting_started\u002Fquickstart.html) to install and setup the vLLM server which is used to handle larger inference workloads. Create the following environment variables to connect to the vLLM server:\n```bash\nexport VLLM_API_KEY=\"your_api_key_here\"\nexport VLLM_API_BASE=\"http:\u002F\u002Fyour_vllm_url_here\"\n```\n\nThen modify the `model` in the task config file so that it matches the model name in the vLLM server. For example:\n```yaml\nmodel_type: vllm\nmodel: llama-2-7b-chat-hf\n```\n\n## FSChat Support\nThis section provides a step-by-step guide to integrate FSChat into AgentVerse. FSChat is a framework that supports local models such as LLaMA, Vicunna, etc. running on your local machine.\n### 1. Install the Additional Dependencies\nIf you want to use local models such as LLaMA, you need to additionally install some other dependencies:\n```bash\npip install -r requirements_local.txt\n```\n\n### 2. 
Launch the Local Server\nThen modify the `MODEL_PATH` and `MODEL_NAME` according to your need to launch the local server with the following command:\n```bash\nbash scripts\u002Frun_local_model_server.sh\n```\nThe script will launch a service for Llama 7B chat model.\nThe `MODEL_NAME` in AgentVerse currently supports several models including `llama-2-7b-chat-hf`, `llama-2-13b-chat-hf`, `llama-2-70b-chat-hf`, `vicuna-7b-v1.5`, and `vicuna-13b-v1.5`. If you wish to integrate additional models that are [compatible with FastChat](https:\u002F\u002Fgithub.com\u002Flm-sys\u002FFastChat\u002Fblob\u002Fmain\u002Fdocs\u002Fmodel_support.md), you need to:\n1. Add the new `MODEL_NAME` into the `LOCAL_LLMS` within `agentverse\u002Fllms\u002F__init__.py`. Furthermore, establish\n2. Add the mapping from the new `MODEL_NAME` to its corresponding Huggingface identifier in the `LOCAL_LLMS_MAPPING` within the `agentverse\u002Fllms\u002F__init__.py` file.\n\n### 3. Modify the Config File\nIn your config file, set the `llm_type` to `local` and `model` to the `MODEL_NAME`. For example\n```yaml\nllm:\n  llm_type: local\n  model: llama-2-7b-chat-hf\n  ...\n```\n\nYou can refer to `agentverse\u002Ftasks\u002Ftasksolving\u002Fcommongen\u002Fllama-2-7b-chat-hf\u002Fconfig.yaml` for a more detailed example.\n\n# AgentVerse Showcases\n\n## Simulation Showcases\nRefer to [simulation showcases](README_simulation_cases.md)\n\n## Task-Solving Showcases\nRefer to [tasksolving showcases](README_tasksolving_cases.md)\n\n\n\n\u003C!--\n## 💡 Philosophy\n\n### Environment\n\nAt the core of our framework is the environment, which plays a crucial role in enabling researchers to study the behavior of agents under different conditions. We believe that the environment should be flexible and extensible, allowing researchers to easily customize it to fit their needs. 
To achieve this, we have abstracted the environment into five rule components, and implementing different environments is actually implementing different rules:\n\n- **Describer**: This component provides a description of the environment at each turn for each agent. You can customize the describer to define the specific requirements of their environment, such as the agents with whom an agent can interact.\n- **Order**: This component defines the order in which agents take actions within the environment. You can customize the order to reflect the desired interaction between agents. We provide several basic order options, including `random`, `sequential`, and `concurrent` (in which all agents take an action in each turn).\n- **Selector**: This component selects the valid messages generated by agents. Sometimes agents may generate invalid responses, and the selector is used to filter out unexpected results.\n- **Updater**: This component updates the memory of each agent. In certain cases, the response generated by one agent should not be seen by all agents (e.g., if agents are in different rooms). For each response, the updater updates only the agents who can see it.\n- **Visibility**: This component maintains the list of agents that each agent can see throughout the environment's changes. For example, when an agent moves from one room to another, the list of visible agents of each agent should be updated by `visibility`.\n\nBy abstracting the environment into these five components, we have created a highly flexible and extensible framework that enables researchers to easily build and customize their own multi-agent environments.\n\n### Agent\n\nAnother fundamental component is the agent. Currently we provide two types of agents: **ConversationAgent** and **ToolAgent**. 
You can also customize your own agent by inheriting BaseAgent class (tutorial coming soon).\n\n-->\n\n\n\n\n\n\u003C!--\n\n## ✍️ Customize Your Own Environment\n\nWe have provided several examples in the `agentverse\u002Ftasks` directory. To customize your environment, you should\n\n1. Create a task directory in `agentverse\u002Ftasks` \n2. Write the configuration file\n3. Write the output parser that parses the response of your agents.\n4. Add your parser in `agentverse\u002Ftasks\u002F__init__.py`\n\nWe will use a simple example in `agentverse\u002Ftasks\u002Fnlp_classroom_3players` to illustrate the procedure.\n\n### A Simple Example: Building a Classroom Environment\n\nTo illustrate how to customize your environment, we'll use a simple example of building a classroom environment where one agent is the professor, one is the student, and one is the teaching assistant.\n\n##### 1. Creating a Task Directory and Configuring the Environment\n\nFirst, we need to create a task directory and write our configuration file for the environment. In the `agentverse\u002Ftasks` directory, create a new directory called `nlp_classroom_3players`. Inside this directory, create a `config.yaml` file and write the following configuration:\n\n```yaml\n# config.yaml\nenvironment:\n  env_type: basic\t\t\t\t# Use the basic environment provided in AgentVerse\n  max_turns: 10\t\t\t\t\t# Specify the maximum number of dialogue turns\n  rule:\n    order:\n      type: sequential\t# Use the sequential order\n    visibility:\n      type: all\t\t\t\t\t# Each message can be seen by all agents\n    selector:\n      type: basic\t\t\t\t# Basic selector (do not select)\n    updater:\n      type: basic\t\t\t\t# Basic updater (update the message to all agents)\n    describer:\n      type: basic\t\t\t\t# Basic describer (no description)\n```\n\nThis configuration specifies that we will use the basic environment provided in AgentVerse, with a maximum of 10 dialogue turns. 
We'll use the sequential order, with all messages visible to all agents. We won't be using any selectors, our updater will update the messages to all the agents and our describer will provide no description.\n\n##### 2. Configuring the Agents\n\nNext, we'll configure the agents. In the `config.yaml` file, we'll add the configuration for each agent. Here's an example configuration for the professor:\n\n```yaml\n# config.yaml\nagents:\n  -\n    agent_type: conversation\n    name: Professor Micheal\t\t# Name of the agent\n    role_description: You are Prof. Micheal, ...\t# Description of the agent\n    memory: \n      memory_type: chat_history\t\t# Will store all the chat history\n    prompt_template: *professor_prompt\n    llm:\n      llm_type: text-davinci-003    # Will use OpenAICompletion LLM\n      model: text-davinci-003       # The arguments passed to the api call\n      temperature: 0.7\n      max_tokens: 250\n```\n\nIn this example, we'll use the `conversation` agent type. We've given the agent a name and a description, and we'll store the chat history in memory. We've also provided a prompt template with placeholders marked as ${placeholder}. These will be instantiated by the `_fill_prompt_template` method of the agent.\n\n##### 3. Writing an Output Parser\n\nThe next step is to write a simple parser for your agent's response. Because you may have specified the output format in your prompt template, you need to provide a corresponding parser. In this example, we inform the model to output in the following format in our prompt template\n\n```\nAction: Speak\nAction Input: (the content)\n```\n\nWe'll write a parser to extract the content from the agent's response. Refer to the code for more details. We've decorated our parser function with `@output_parser_registry.register('classroom_parser')` to register it with our framework. 
Finally, we import our parser in `agentverse\u002Ftasks\u002F__init__.py`.\n\nWith these steps, we've successfully built a simple classroom environment and customized it for our needs.\n\n### Customization Guide for More Complex Environments\n\nWhile we provide a basic framework for building environments with our five rule components, more complex environments may require further customization. A detailed documentation and tutorial is coming soon. Here we briefly introduce some steps you can take to customize your environment:\n\n1. **Customize the five rule components**. Each rule component has an interface, allowing you to customize its behavior to suit your specific needs. It's important to note that these components are not necessarily independent and can interact through the `rule_params` dictionary in the environment. You can create your own rule components and integrate them with the existing ones to build more complex interactions between agents.\n2. **Customize the environment itself**. Our `basic` environment provides a default execution order for the five rule components that is suitable for most cases, but you can inherit the `BaseEnvironment` class and write your own `run` method to implement a more sophisticated execution order.\n3. **Customize the agent**. Depending on your specific use case, you may also need to inherit the `BaseAgent` class. For example, you may want to use your local LLM as your agents or create agents with specialized knowledge or skills.\n\n-->\n\n\n\u003C!--\n\n## 🔎 Examples\n\nCurrently, we offer some simple examples in the `agentverse\u002Ftasks` directory, each demonstrating different possibilities of our framework. While the performance of these examples may not be optimal due to limited prompt engineering, they are intended to showcase the capabilities of our framework, such as allowing the use of tools.\n\nHere's a brief overview of each example:\n\n1. 
`nlp_classroom_3players`: This example illustrates a simple case in which agents speak in sequential order. \n2. `nlp_classroom_9players`: This is an NLP class example. Here, students can raise their hand when they have a question, and the professor can call on a student to let them ask. Students are only allowed to speak after they are called on.\n3. `nlp_classroom_9players_group`: This example showcases group discussions. The professor may initiate a group discussion when needed, and students can exclusively interact with fellow students within the same group during the discussion.\n4. `nlp_classroom_3players_withtool`: Students in this classroom can use the Bing Search API while listening to the class.\n5. `math_problem_2players_tools`: A simple example demonstrating how two agents can use the WolframAlpha API to play an arithmetic game.\n6. `prisoner_dilema`: The Prisoner's Dilemma is a thought experiment involving two rational agents facing a choice between cooperating for mutual benefit or betraying their partner for individual gain.\n7. `db_diag`: The Chief DBA agent monitors the database system for anomalies and alerts the memory and CPU agents if any are detected. These agents analyze the root causes and suggest optimization solutions. The Chief DBA then provides a diagnosis summary to the user, who can give instructions or evaluate the effectiveness of the proposed solutions.\n8. `sde_team`: In the SDE team, a code writer, a code tester, and a code reviewer collaborate on a code generation problem. \n9. 
`pokemon`: This example imitates the Pokemon game.\n-->\n\n\n# 🌟 Join Us!\nAgentVerse is on a mission to revolutionize the multi-agent environment for large language models, and we're eagerly looking for passionate collaborators to join us on this exciting journey.\n\n## Leaders\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fchenweize1998\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOpenBMB_AgentVerse_readme_017b376fa748.png\" alt=\"Leader\" style=\"width:5%; border-radius: 50%;\"\u002F>\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fyushengsu-thu\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOpenBMB_AgentVerse_readme_a7c648fc7873.png\" alt=\"Leader\" style=\"width:5%; border-radius: 50%;\"\u002F>\u003C\u002Fa>\n\n## Contributors\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fchanchimin\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOpenBMB_AgentVerse_readme_5209864943b8.png\" alt=\"Contributor\" style=\"width:5%; border-radius: 50%;\"\u002F>\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Flibowen2121\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOpenBMB_AgentVerse_readme_9a967f5af98e.png\" alt=\"Contributor\" style=\"width:5%; border-radius: 50%;\"\u002F>\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FXial-kotori\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOpenBMB_AgentVerse_readme_daa3b8ca19d7.png\" alt=\"Contributor\" style=\"width:5%; border-radius: 50%;\"\u002F>\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FDr-Left\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOpenBMB_AgentVerse_readme_f27f038f35e2.png\" alt=\"Contributor\" style=\"width:5%; border-radius: 50%;\"\u002F>\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fminleminzui\">\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOpenBMB_AgentVerse_readme_91c915664b9f.png\" alt=\"Contributor\" style=\"width:5%; border-radius: 50%;\"\u002F>\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FTsuruko04\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOpenBMB_AgentVerse_readme_d3dc66bb98c8.png\" alt=\"Contributor\" style=\"width:5%; border-radius: 50%;\"\u002F>\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fkierangilliam\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOpenBMB_AgentVerse_readme_e201474ae6d3.png\" alt=\"Contributor\" style=\"width:5%; border-radius: 50%;\"\u002F>\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fzhouxh19\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOpenBMB_AgentVerse_readme_45dd725d57d8.png\" alt=\"Contributor\" style=\"width:5%; border-radius: 50%;\"\u002F>\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ftzw2698\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOpenBMB_AgentVerse_readme_52a669bb4583.png\" alt=\"Contributor\" style=\"width:5%; border-radius: 50%;\"\u002F>\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FJetSquirrel\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOpenBMB_AgentVerse_readme_ce21462ad208.png\" alt=\"Contributor\" style=\"width:5%; border-radius: 50%;\"\u002F>\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FMuiruriscode\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOpenBMB_AgentVerse_readme_6818bb1c5643.png\" alt=\"Contributor\" style=\"width:5%; border-radius: 50%;\"\u002F>\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Feltociear\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOpenBMB_AgentVerse_readme_73d65636eb8d.png\" alt=\"Contributor\" style=\"width:5%; border-radius: 
50%;\"\u002F>\u003C\u002Fa>\n\n\n## How Can You Contribute?\n- **Issue and Pull-Request**: If you encounter any problems when using AgentVerse, you can open an issue in English. Besides, you can also ask us to assign an issue to you and, after you solve it, send a PR (please follow the [PULL_REQUEST_TEMPLATE](https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fblob\u002Fmain\u002FPULL_REQUEST_TEMPLATE.md)).\n  \n- **Code Development**: If you're an engineer, help us refine, optimize, and expand the current framework. We're always looking for talented developers to enhance our existing features and develop new modules.\n\n- **Documentation and Tutorials**: If you have a knack for writing, help us improve our documentation, create tutorials, or write blog posts to make AgentVerse more accessible to the broader community.\n\n- **Application Exploration**: If you're intrigued by multi-agent applications and are eager to experiment using AgentVerse, we'd be thrilled to support your journey and see what you create!\n\n- **Feedback and Suggestions**: Use AgentVerse and provide us with feedback. Your insights can lead to potential improvements and ensure that our framework remains top-notch.\n\nAlso, if you're passionate about advancing the frontiers of multi-agent applications, want to become a core AgentVerse team member, or are eager to dive deeper into agent research, please reach out to the [AgentVerse Team](mailto:agentverse2@gmail.com?subject=[GitHub]%20AgentVerse%20Project), and CC [Weize Chen](mailto:chenweize1998@gmail.com?subject=[GitHub]%20AgentVerse%20Project) and [Yusheng Su](mailto:yushengsu.thu@gmail.com?subject=[GitHub]%20AgentVerse%20Project). 
We're keen to welcome motivated individuals like you to our team!\n\n\n## Social Media and Community\n\n- Twitter: https:\u002F\u002Ftwitter.com\u002FAgentverse71134\n\n- Discord: https:\u002F\u002Fdiscord.gg\u002FgDAXfjMw.\n\n- Hugging Face: https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FAgentVerse\u002FagentVerse.\n\n# Star History\n\n[![Star History Chart](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOpenBMB_AgentVerse_readme_696e88fc7503.png)](https:\u002F\u002Fstar-history.com\u002F#OpenBMB\u002FAgentVerse&Date)\n\n\n## Citation\nIf you find this repo helpful, feel free to cite us.\n```\n@article{chen2023agentverse,\n  title={Agentverse: Facilitating multi-agent collaboration and exploring emergent behaviors in agents},\n  author={Chen, Weize and Su, Yusheng and Zuo, Jingwei and Yang, Cheng and Yuan, Chenfei and Qian, Chen and Chan, Chi-Min and Qin, Yujia and Lu, Yaxi and Xie, Ruobing and others},\n  journal={arXiv preprint arXiv:2308.10848},\n  year={2023}\n}\n```\n\n# Contact\n\nAgentVerse Team: agentverse2@gmail.com\n\nProject leaders:\n\n- Weize Chen: chenweize1998@gmail.com\n\n- [Yusheng Su](https:\u002F\u002Fyushengsu-thu.github.io\u002F): yushengsu.thu@gmail.com\n\n","\u003Ch1 align=\"center\"> 🤖 AgentVerse 🪐 \u003C\u002Fh1>\n\n\u003C!--\n\u003Ch3 align=\"center\">\n    \u003Cp>多大模型环境模拟框架\u003C\u002Fp>\n\u003C\u002Fh3>\n-->\n\n\u003Cp align=\"center\">\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fblob\u002Fmain\u002FLICENSE\">\n        \u003Cimg alt=\"许可证：Apache2\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-Apache_2.0-green.svg\">\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fwww.python.org\u002Fdownloads\u002Frelease\u002Fpython-3916\u002F\">\n        \u003Cimg alt=\"Python版本\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpython-3.9+-blue.svg\">\n    \u003C\u002Fa>\n    \u003Ca 
href=\"https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Factions\u002F\">\n        \u003Cimg alt=\"构建\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Factions\u002Fworkflow\u002Fstatus\u002FOpenBMB\u002FAgentVerse\u002Ftest.yml\">\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fpsf\u002Fblack\">\n        \u003Cimg alt=\"代码风格：Black\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fcode%20style-black-black\">\n\u003C!--     \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fissues\">\n        \u003Cimg alt=\"贡献：欢迎\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fcontributions-welcome-brightgreen.svg?style=flat\">\n    \u003C\u002Fa> -->\n    \u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002FAgentVerse\">\n        \u003Cimg alt=\"HuggingFace\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fhugging_face-play-yellow\">\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fdiscord.gg\u002FgDAXfjMw\">\n        \u003Cimg alt=\"Discord\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FAgentVerse-Discord-purple?style=flat\">\n    \u003C\u002Fa>\n    \n    \n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOpenBMB_AgentVerse_readme_c0a185479fc7.png\" width=\"512\">\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n    【\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.10848\">论文\u003C\u002Fa>】 \n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n    【英文 | \u003Ca href=\"README_zh.md\">中文\u003C\u002Fa>】 \n\u003C\u002Fp>\n\n**AgentVerse** 旨在促进多种基于大模型的智能体在不同应用场景中的部署。AgentVerse 主要提供两种框架：**任务解决**和**仿真**。\n\n- 任务解决：该框架将多个智能体组合成一个自动化的多智能体系统（[AgentVerse-Tasksolving](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2308.10848.pdf), [多智能体系统](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.02427))，协同完成相应任务。  \n应用：软件开发系统、咨询系统等。\n\n\u003Cp align=\"center\">\n\u003Cimg width=\"616\" alt=\"屏幕截图 
2023-09-01 下午12:08:57\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOpenBMB_AgentVerse_readme_41544d8cab0a.png\">\n\u003C\u002Fp>\n\n- 仿真：该框架允许用户设置自定义环境，以观察或与多个智能体进行交互。⚠️⚠️⚠️ 我们正在重构代码。如果您需要仅支持仿真框架的稳定版本，可以使用 [`release-0.1`](https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Ftree\u002Frelease-0.1) 分支。应用：游戏、基于大模型的智能体社会行为研究等。\n\n\u003Cp align=\"center\">\n\u003Cimg width=\"616\" alt=\"屏幕截图 2023-10-16 下午10:53:49\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOpenBMB_AgentVerse_readme_a8cee2a3594d.png\">\n\u003C\u002Fp>\n\n\n---\n\n\n# 📰 最新动态\n- [2024年3月17日] AgentVerse 被 NVIDIA 博客介绍——[构建您的第一个大模型智能体应用](https:\u002F\u002Fdeveloper.nvidia.com\u002Fblog\u002Fbuilding-your-first-llm-agent-application\u002F)。\n\n- [2024年1月17日] 我们非常激动地宣布，我们的论文已被 ICLR 2024 接受。更多更新即将发布！\n\n- [2023年10月17日] 我们很高兴地分享我们的开源 AI 社区 Hugging Face 页面：[`AgentVerse`](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FAgentVerse\u002FagentVerse)。您可以通过自己的 OpenAI API 密钥和组织信息，试用两个仿真应用：NLP 教室和囚徒困境。尽情体验吧！\n\n- [2023年10月5日] 我们重新架构了代码库，以支持仿真和任务解决两大框架！我们已将论文中提到的 Minecraft 示例代码放置在 [`minecraft`](https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Ftree\u002Fminecraft) 分支中。我们的工具使用示例也将很快更新到 `main` 分支。敬请期待！\n\n- [2023年8月22日] 我们很高兴地分享论文 [AgentVerse：促进多智能体协作并探索智能体的涌现行为](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.10848)，详细阐述了 AgentVerse 的任务解决框架。\n\n- [2023年6月5日] 我们非常高兴地展示了多组【演示视频】，包括【NLP 教室】、【囚徒困境】、【软件设计】、【数据库管理员】以及一个简单的【H5 宝可梦游戏】，让您能够与宝可梦角色互动！快来试试这些演示，享受乐趣吧！\n- [2023年5月1日] 🚀 【AgentVerse】（https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse）正式上线！\n\n\n\n# 🗓 即将到来\n- [x] 发布我们【论文】（https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.10848）的代码\n- [x] 增加对本地大模型（LLaMA、Vicuna 等）的支持\n- [ ] 添加文档\n- [ ] 支持更复杂的对话历史记忆\n\n\n\u003C!--\n\n## 👾 简单演示视频\n\n我们展示了由 AgentVerse 巧妙构建的以下案例。\n\n\n#### 自然语言处理课堂\n在自然语言处理课堂上，教授与学生进行互动交流。当学生有问题时，他们会举手并耐心等待教授点名。只有在被教授点名后，学生才能发言提问。\n\n使用以下命令启动自然语言处理课堂示例：\n```bash\npython agentverse_command\u002Fmain_simulation_gui.py --task 
simulation\u002Fnlp_classroom_9players\n```\n\nhttps:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fassets\u002F11704492\u002F6ea07850-595e-4a28-a82e-f863011353c2\n\n\n#### 囚徒困境\n囚徒困境是一个思想实验，它挑战两个完全理性的智能体面临两难选择：他们可以选择与对方合作以获得共同利益，也可以选择背叛对方（“背叛”）以获取个人收益。\n\n使用以下命令启动囚徒困境示例：\n```bash\npython agentverse_command\u002Fmain_simulation_gui.py --task simulation\u002Fprisoner_dilemma\n```\n\nhttps:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fassets\u002F11704492\u002F017c46e5-c738-4fca-9352-b008e2d518bd\n\n\n#### 软件设计\n在软件设计示例中，代码编写者、代码测试者和代码评审者共同协作解决代码生成问题。给定一个问题后，代码编写者首先编写代码实现。代码测试者运行单元测试并提供反馈。随后，代码评审者会给出评审意见。收集到测试反馈和评审意见后，代码编写者会迭代地优化代码。\n\n使用以下命令启动软件设计示例：\n```bash\npython agentverse_command\u002Fmain_simulation_gui.py --task simulation\u002Fsde_team\u002Fsde_team_2players\n```\n\nhttps:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fassets\u002F11704492\u002F5058066a-abee-490d-8659-b4e54661626a\n\n\n#### [数据库管理员 (DBA)](https:\u002F\u002Fgithub.com\u002FTsinghuaDatabaseGroup\u002FDB-GPT)\n\n在数据库诊断场景中，首席 DBA 会监控系统异常情况（如查询缓慢、锁争用、系统崩溃等）。一旦检测到异常，领域专家会被通知，共同分析根本原因、分享见解并提出优化方案。最后，首席 DBA 会向用户提交一份总结报告。\n\n```bash\npython agentverse_command\u002Fmain_simulation_gui.py --task simulation\u002Fdb_diag\n```\n\nhttps:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fassets\u002F11704492\u002Fc633419d-afbb-47d4-bb12-6bb512e7af3a\n\n#### [文本评估 (ChatEval)](https:\u002F\u002Fgithub.com\u002Fchanchimin\u002FChatEval)\n在文本评估场景中，我们建议用户探索 [ChatEval](https:\u002F\u002Fgithub.com\u002Fchanchimin\u002FChatEval) 仓库。他们在 AgentVerse 上实现了一个多智能体裁判团队，用于评估不同模型生成的文本质量。当给出两段不同的文本时，ChatEval 中的角色可以根据各自设定的人设，自主辩论其中的细微差别和差异，并最终给出评判。实验表明，通过在 [config.yaml](#2-configuring-the-agents) 中指定多样化角色组成的裁判团队，其评估结果更接近人类的评价。该演示基于 [Fastchat](https:\u002F\u002Fgithub.com\u002Flm-sys\u002FFastChat) 仓库构建，我们对他们的基础工作表示感谢。\n\n\nhttps:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fassets\u002F75533759\u002F58f33468-f15b-4bac-ae01-8d0780019f85\n\n#### 宝可梦\n**目前仅在 
[`release-0.1`](https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Ftree\u002Frelease-0.1) 中可用**。在游戏中，智能体可以在游戏世界中自由行走并与他人互动。作为玩家，你将扮演一个智能体角色，随时可以与其他智能体互动。宝可梦环境中共有 6 个角色，他们都曾出现在《宝可梦 绿宝石》中：[玛雅](https:\u002F\u002Fbulbapedia.bulbagarden.net\u002Fwiki\u002FMay_(game))、[比尔博士](https:\u002F\u002Fbulbapedia.bulbagarden.net\u002Fwiki\u002FProfessor_Birch)、[史蒂芬·斯通](https:\u002F\u002Fbulbapedia.bulbagarden.net\u002Fwiki\u002FSteven_Stone)、[马希](https:\u002F\u002Fbulbapedia.bulbagarden.net\u002Fwiki\u002FMaxie)、[阿奇](https:\u002F\u002Fbulbapedia.bulbagarden.net\u002Fwiki\u002FArchie) 和 [约瑟夫](https:\u002F\u002Fbulbapedia.bulbagarden.net\u002Fwiki\u002FMr._Stone)。\n\n要启动宝可梦游戏，首先使用以下命令启动本地服务器：\n```bash\nuvicorn pokemon_server:app --reload --port 10002\n```\n然后在项目根目录下打开另一个终端，并运行以下命令：\n```bash\ncd ui\n# 如果尚未安装 npm，请先安装后再执行以下命令\n# https:\u002F\u002Fdocs.npmjs.com\u002Fdownloading-and-installing-node-js-and-npm\n# 我们已在 npm@9.6.4 和 node@20.0.0 上进行了测试\nnpm install\nnpm run watch\n```\n等待编译完成后，即可开始游戏！（使用 WASD 键移动，按 SPACE 键发起对话。）\n\nhttps:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fassets\u002F11704492\u002F4d07da68-f942-4205-b558-f155e95782e7\n\n-->\n\n\u003C!--\n\n## 目录\n\n- [✨ 特性](#-features)\n- [📰 最新动态](#-whats-new)\n- [🌟 加入我们！](#-join-us)\n  - [如何贡献？](#how-can-you-contribute)\n- [🗓 即将推出](#-coming-soon)\n- [👾 简单演示视频](#-simple-demo-video)\n    - [NLP 课堂](#nlp-classroom)\n    - [囚徒困境](#prisoner-dilemma)\n    - [软件设计](#software-design)\n    - [数据库管理员 (DBA)](#database-administrator-dba)\n    - [文本评估 (ChatEval)](#text-evaluation-chateval)\n    - [宝可梦](#pokemon)\n- [目录](#contents)\n- [🚀 入门指南](#-getting-started)\n  - [安装](#installation)\n  - [模拟示例](#simulation)\n  - [任务解决示例](#task-solving-cli-example)\n- [💡 哲学理念](#-philosophy)\n  - [环境](#environment)\n  - [智能体](#agent)\n- [✍️ 自定义你的环境](#️-customize-your-own-environment)\n  - [简单示例：构建课堂环境](#a-simple-example-building-a-classroom-environment)\n      - [1. 
创建任务目录并配置环境](#1-creating-a-task-directory-and-configuring-the-environment)\n      - [2. 配置智能体](#2-configuring-the-agents)\n      - [3. 编写输出解析器](#3-writing-an-output-parser)\n  - [更复杂环境的自定义指南](#customization-guide-for-more-complex-environments)\n- [🔎 示例](#-examples)\n- [星标历史](#star-history)\n- [引用](#citation)\n- [联系方式](#contact)\n\n-->\n\n# 目录\n- [📰 最新动态](#-whats-new)\n- [🗓 即将推出](#-coming-soon)\n- [目录](#contents)\n- [🚀 入门指南](#-getting-started)\n  - [安装](#installation)\n  - [环境变量](#environment-variables)\n  - [仿真](#simulation)\n    - [框架所需模块](#framework-required-modules)\n    - [CLI 示例](#cli-example)\n    - [GUI 示例](#gui-example)\n  - [任务求解](#task-solving)\n    - [框架所需模块](#framework-required-modules-1)\n    - [CLI 示例](#cli-example-1)\n  - [本地模型支持](#local-model-support)\n  - [vLLM 支持](#vllm-support)\n  - [FSChat 支持](#fschat-support)\n    - [1. 安装额外依赖](#1-install-the-additional-dependencies)\n    - [2. 启动本地服务器](#2-launch-the-local-server)\n    - [3. 修改配置文件](#3-modify-the-config-file)\n- [AgentVerse 展示](#agentverse-showcases)\n  - [仿真展示](#simulation-showcases)\n  - [任务求解展示](#task-solving-showcases)\n- [🌟 加入我们！](#-join-us)\n  - [领导者](#leaders)\n  - [贡献者](#contributors)\n  - [如何贡献？](#how-can-you-contribute)\n  - [社交媒体与社区](#social-media-and-community)\n- [Star 历史](#star-history)\n  - [引用](#citation)\n- [联系方式](#contact)\n\n\n\n\n# 🚀 入门指南\n\n## 安装\n\n\n**手动安装（推荐！）**\n\n**请确保您已安装 Python >= 3.9**\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse.git --depth 1\ncd AgentVerse\npip install -e .\n```\n\n如果您希望使用 AgentVerse 运行本地模型，例如 LLaMA，还需要额外安装一些依赖：\n```bash\npip install -r requirements_local.txt\n```\n\n**通过 pip 安装**\n\n或者您也可以通过 pip 安装：\n```bash\npip install -U agentverse\n```\n\n## 环境变量\n您需要按照以下方式导出您的 OpenAI API 密钥：\n```bash\n# 导出您的 OpenAI API 密钥\nexport OPENAI_API_KEY=\"your_api_key_here\"\n```\n\n如果您想使用 Azure OpenAI 服务，请按照如下方式导出您的 Azure OpenAI 密钥和 OpenAI API 基础地址：\n```bash\nexport AZURE_OPENAI_API_KEY=\"your_api_key_here\"\nexport 
AZURE_OPENAI_API_BASE=\"your_api_base_here\"\n```\n\n## 仿真\n\n### 脚本所需模块 \n```\n- agentverse \n  - agents\n    - simulation_agent\n  - environments\n    - simulation_env\n```\n\n### CLI 示例\n\n您可以创建我们提供的多智能体环境。以课堂场景为例，在这个场景中，有九个智能体，其中一位扮演教授，另外八位扮演学生。\n\n```shell\nagentverse-simulation --task simulation\u002Fnlp_classroom_9players\n```\n\n### GUI 示例\n\n我们还为该环境提供了一个本地网站演示。您可以通过以下命令启动它：\n\n```shell\nagentverse-simulation-gui --task simulation\u002Fnlp_classroom_9players\n```\n成功启动本地服务器后，您可以访问 [http:\u002F\u002F127.0.0.1:7860\u002F](http:\u002F\u002F127.0.0.1:7860\u002F) 查看课堂环境。\n\n如果您想运行带有工具的仿真案例（例如 simulation\u002Fnlp_classroom_3players_withtool），则需要按照如下步骤安装 BMTools：\n```bash\ngit clone git+https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FBMTools.git\ncd BMTools\npip install -r requirements.txt\npython setup.py develop\n```\n这是可选的。如果您不安装 BMTools，没有工具的仿真案例仍然可以正常运行。\n\n## 任务求解 \n\n\n### 框架所需模块 \n```\n- agentverse \n  - agents\n    - simulation_env\n  - environments\n    - tasksolving_env\n```\n\n### CLI 示例\n\n要运行我们在 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.10848) 中提出的任务求解环境中的实验，您可以使用以下命令：\n\n要在基准数据集上运行 AgentVerse，您可以尝试：\n```shell\n# 使用 gpt-3.5-turbo 运行 Humaneval 基准测试（配置文件 `agentverse\u002Ftasks\u002Ftasksolving\u002Fhumaneval\u002Fgpt-3.5\u002Fconfig.yaml`）\nagentverse-benchmark --task tasksolving\u002Fhumaneval\u002Fgpt-3.5 --dataset_path data\u002Fhumaneval\u002Ftest.jsonl --overwrite\n```\n\n要针对特定问题运行 AgentVerse，您可以尝试：\n```shell\n# 运行单个查询（配置文件 `agentverse\u002Ftasks\u002Ftasksolving\u002Fbrainstorming\u002Fgpt-3.5\u002Fconfig.yaml`）。任务在配置文件中指定。\nagentverse-tasksolving --task tasksolving\u002Fbrainstorming\n```\n\n要运行我们论文中提到的使用工具的案例，即多智能体使用网络浏览器、Jupyter Notebook、必应搜索等工具解决问题时，您可以先构建由 [XAgent](https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FXAgent) 提供的 ToolsServer。您可以按照他们的[说明](https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FXAgent#%EF%B8%8F-build-and-setup-toolserver)来构建并运行 ToolServer。\n\n构建并启动 ToolServer 后，您可以使用以下命令运行带工具的任务求解案例：\n```shell\nagentverse-tasksolving 
--task tasksolving\u002Ftool_using\u002F24point\n```\n我们在 `agentverse\u002Ftasks\u002Ftasksolving\u002Ftool_using\u002F` 中提供了更多任务，展示了多智能体如何利用工具解决问题。\n\n此外，您还可以查看 `agentverse\u002Ftasks\u002Ftasksolving`，了解我们在论文中进行的更多实验。\n\n## 本地模型支持\n## vLLM 支持\n如果您想使用 vLLM，请按照[此处](https:\u002F\u002Fdocs.vllm.ai\u002Fen\u002Flatest\u002Fgetting_started\u002Fquickstart.html)的指南安装并设置 vLLM 服务器，用于处理更大的推理工作负载。创建以下环境变量以连接到 vLLM 服务器：\n```bash\nexport VLLM_API_KEY=\"your_api_key_here\"\nexport VLLM_API_BASE=\"http:\u002F\u002Fyour_vllm_url_here\"\n```\n\n然后修改任务配置文件中的 `model` 字段，使其与 vLLM 服务器中的模型名称一致。例如：\n```yaml\nmodel_type: vllm\nmodel: llama-2-7b-chat-hf\n```\n\n## FSChat 支持\n本节提供将 FSChat 集成到 AgentVerse 的分步指南。FSChat 是一个支持本地模型（如 LLaMA、Vicuna 等）在本地机器上运行的框架。\n### 1. 安装额外依赖\n如果您想使用 LLaMA 等本地模型，还需要额外安装一些依赖：\n```bash\npip install -r requirements_local.txt\n```\n\n### 2. 启动本地服务器\n然后根据您的需求修改 `MODEL_PATH` 和 `MODEL_NAME`，使用以下命令启动本地服务器：\n```bash\nbash scripts\u002Frun_local_model_server.sh\n```\n该脚本将启动一个服务于 Llama 7B 对话模型的服务。\n目前，AgentVerse 中的 `MODEL_NAME` 支持多种模型，包括 `llama-2-7b-chat-hf`、`llama-2-13b-chat-hf`、`llama-2-70b-chat-hf`、`vicuna-7b-v1.5` 和 `vicuna-13b-v1.5`。如果您希望集成其他与 FastChat 兼容的模型（详见 [FastChat 的模型支持文档](https:\u002F\u002Fgithub.com\u002Flm-sys\u002FFastChat\u002Fblob\u002Fmain\u002Fdocs\u002Fmodel_support.md)），则需要：\n1. 将新的 `MODEL_NAME` 添加到 `agentverse\u002Fllms\u002F__init__.py` 文件中的 `LOCAL_LLMS` 列表中。\n2. 在 `agentverse\u002Fllms\u002F__init__.py` 文件中的 `LOCAL_LLMS_MAPPING` 中添加新 `MODEL_NAME` 与其对应的 HuggingFace 标识符之间的映射关系。\n\n### 3. 
修改配置文件\n在你的配置文件中，将 `llm_type` 设置为 `local`，并将 `model` 设置为 `MODEL_NAME`。例如：\n```yaml\nllm:\n  llm_type: local\n  model: llama-2-7b-chat-hf\n  ...\n```\n\n你可以参考 `agentverse\u002Ftasks\u002Ftasksolving\u002Fcommongen\u002Fllama-2-7b-chat-hf\u002Fconfig.yaml` 获取更详细的示例。\n\n# AgentVerse 展示案例\n\n## 模拟展示案例\n请参阅 [模拟展示案例](README_simulation_cases.md)\n\n## 任务解决展示案例\n请参阅 [任务解决展示案例](README_tasksolving_cases.md)\n\n\n\n\u003C!--\n## 💡 理念\n\n### 环境\n\n我们框架的核心是环境，它在帮助研究人员研究智能体在不同条件下的行为方面起着至关重要的作用。我们认为，环境应当灵活且可扩展，以便研究人员能够轻松地根据自身需求进行定制。为此，我们将环境抽象为五个规则组件；实现不同的环境实际上就是实现不同的规则：\n\n- **描述器**：该组件为每个智能体在每一轮中提供环境的描述。你可以自定义描述器，以定义特定环境的要求，例如智能体可以与哪些其他智能体互动。\n- **顺序**：该组件定义了智能体在环境中采取行动的顺序。你可以自定义顺序，以反映智能体之间期望的交互方式。我们提供了几种基本的顺序选项，包括 `random`（随机）、`sequential`（顺序）和 `concurrent`（并发，在每一回合所有智能体同时行动）。\n- **选择器**：该组件用于筛选智能体生成的有效消息。有时智能体会生成无效的回应，此时选择器会用来过滤掉这些意外结果。\n- **更新器**：该组件负责更新每个智能体的记忆。在某些情况下，一个智能体的回应不应被所有智能体看到（例如，当智能体位于不同的房间时）。对于每一次回应，更新器只会更新那些能够看到该回应的智能体。\n- **可见性**：该组件维护着在整个环境变化过程中，每个智能体能够看到的智能体列表。例如，当一个智能体从一个房间移动到另一个房间时，每个智能体的可见智能体列表都应由 `visibility` 组件更新。\n\n通过将环境抽象为这五个组件，我们创建了一个高度灵活且可扩展的框架，使研究人员能够轻松构建和定制自己的多智能体环境。\n\n### 智能体\n\n另一个基础组件是智能体。目前我们提供了两种类型的智能体：**ConversationAgent** 和 **ToolAgent**。你也可以通过继承 BaseAgent 类来定制自己的智能体（教程即将发布）。\n\n-->\n\n\n\n\n\n\u003C!--\n\n## ✍️ 自定义你的环境\n\n我们在 `agentverse\u002Ftasks` 目录下提供了几个示例。要自定义你的环境，你需要：\n\n1. 在 `agentverse\u002Ftasks` 中创建一个新的任务目录\n2. 编写配置文件\n3. 编写解析智能体响应的输出解析器\n4. 将你的解析器添加到 `agentverse\u002Ftasks\u002F__init__.py`\n\n我们将使用 `agentverse\u002Ftasks\u002Fnlp_classroom_3players` 中的一个简单示例来说明这一过程。\n\n### 一个简单示例：构建教室环境\n\n为了说明如何自定义环境，我们以一个简单的教室环境为例：其中一名智能体是教授，一名是学生，另一名是助教。\n\n##### 1. 
创建任务目录并配置环境\n\n首先，我们需要创建一个任务目录，并为环境编写配置文件。在 `agentverse\u002Ftasks` 目录下，创建一个名为 `nlp_classroom_3players` 的新目录。在这个目录内，创建一个 `config.yaml` 文件，并写下以下配置：\n\n```yaml\n# config.yaml\nenvironment:\n  env_type: basic\t\t\t\t# 使用 AgentVerse 提供的基本环境\n  max_turns: 10\t\t\t\t\t# 指定对话的最大轮数\n  rule:\n    order:\n      type: sequential\t# 使用顺序模式\n    visibility:\n      type: all\t\t\t\t\t# 每条消息对所有智能体可见\n    selector:\n      type: basic\t\t\t\t# 基本选择器（不进行选择）\n    updater:\n      type: basic\t\t\t\t# 基本更新器（将消息更新给所有智能体）\n    describer:\n      type: basic\t\t\t\t# 基本描述器（无描述）\n```\n\n此配置指定了我们将使用 AgentVerse 提供的基本环境，最多进行 10 轮对话。我们将采用顺序模式，所有消息对所有智能体可见。我们不使用任何选择器，更新器会将消息更新给所有智能体，而描述器则不提供任何描述。\n\n##### 2. 配置智能体\n\n接下来，我们配置智能体。在 `config.yaml` 文件中，我们将为每个智能体添加配置。以下是教授的示例配置：\n\n```yaml\n# config.yaml\nagents:\n  -\n    agent_type: conversation\n    name: Professor Micheal\t\t# 智能体的名字\n    role_description: You are Prof. Micheal, ...\t# 智能体的角色描述\n    memory: \n      memory_type: chat_history\t\t# 存储所有聊天记录\n    prompt_template: *professor_prompt\n    llm:\n      llm_type: text-davinci-003    # 使用 OpenAICompletion LLM\n      model: text-davinci-003       # 传递给 API 调用的参数\n      temperature: 0.7\n      max_tokens: 250\n```\n\n在这个例子中，我们使用 `conversation` 类型的智能体。我们为智能体命名并提供了角色描述，同时将其聊天记录存储在内存中。我们还提供了一个带有占位符 `${placeholder}` 的提示模板。这些占位符将在智能体的 `_fill_prompt_template` 方法中被具体化。\n\n##### 3. 编写输出解析器\n\n下一步是为你的智能体响应编写一个简单的解析器。由于你在提示模板中可能已经指定了输出格式，因此需要提供相应的解析器。在本例中，我们在提示模板中指示模型按照以下格式输出：\n\n```\nAction: Speak\nAction Input: (内容)\n```\n\n我们将编写一个解析器来提取智能体响应中的内容。更多细节请参考代码。我们使用 `@output_parser_registry.register('classroom_parser')` 装饰我们的解析器函数，以将其注册到我们的框架中。最后，我们将解析器导入到 `agentverse\u002Ftasks\u002F__init__.py` 中。\n\n通过以上步骤，我们成功构建了一个简单的教室环境，并根据自身需求进行了定制。\n\n### 针对更复杂环境的自定义指南\n\n虽然我们通过五个规则组件提供了一个构建环境的基本框架，但更复杂的环境可能需要进一步的自定义。详细的文档和教程即将发布。在此，我们简要介绍一些可用于自定义环境的步骤：\n\n1. **自定义五个规则组件**。每个规则组件都具有一个接口，允许您根据具体需求自定义其行为。需要注意的是，这些组件并不一定相互独立，它们可以通过环境中的 `rule_params` 字典进行交互。您可以创建自己的规则组件，并将其与现有组件集成，以构建代理之间更复杂的交互。\n2. 
**自定义环境本身**。我们的 `basic` 环境为五个规则组件提供了一个适用于大多数情况的默认执行顺序，但您也可以继承 `BaseEnvironment` 类，并编写自己的 `run` 方法来实现更为复杂的执行顺序。\n3. **自定义代理**。根据您的具体用例，您可能还需要继承 `BaseAgent` 类。例如，您可以将自己的本地大语言模型作为代理，或者创建具备特定知识或技能的代理。\n\n-->\n\n\n\u003C!--\n\n## 🔎 示例\n\n目前，我们在 `agentverse\u002Ftasks` 目录中提供了一些简单的示例，每个示例都展示了我们框架的不同可能性。尽管由于提示工程的限制，这些示例的性能可能并不理想，但它们旨在展示我们框架的能力，例如支持工具的使用。\n\n以下是每个示例的简要概述：\n\n1. `nlp_classroom_3players`: 该示例展示了一个简单的场景，其中代理会按顺序发言。\n2. `nlp_classroom_9players`: 这是一个 NLP 课堂示例。学生在有问题时可以举手，教授则会点名让学生提问。学生只有在被点名后才能发言。\n3. `nlp_classroom_9players_group`: 该示例展示了小组讨论。教授可以在需要时发起小组讨论，而学生在讨论期间只能与同一小组内的同学互动。\n4. `nlp_classroom_3players_withtool`: 在这个教室中，学生在听课时可以使用 Bing 搜索 API。\n5. `math_problem_2players_tools`: 一个简单的示例，演示两个代理如何使用 WolframAlpha API 进行算术游戏。\n6. `prisoner_dilema`: 囚徒困境是一个思想实验，涉及两个理性代理在合作以获得共同利益，或背叛对方以获取个人收益之间的选择。\n7. `db_diag`: 首席数据库管理员（代理）监控数据库系统是否存在异常，一旦检测到异常，便会提醒内存和 CPU 代理。他们（代理）会分析根本原因并提出优化方案。首席数据库管理员（代理）会向用户提交诊断总结，用户可以给出指示或评估所提方案的有效性。\n8. `sde_team`: 在 SDE 团队中，代码编写者、代码测试者和代码审查者协同解决代码生成问题。\n9. 
`pokemon`: 该示例模拟了宝可梦游戏。\n\n-->\n\n\n# 🌟 加入我们！\nAgentVerse 的使命是彻底革新大型语言模型的多智能体环境，我们热切期待充满热情的合作者加入我们，共同开启这段激动人心的旅程。\n\n## 领导团队\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fchenweize1998\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOpenBMB_AgentVerse_readme_017b376fa748.png\" alt=\"Leader\" style=\"width:5%; border-radius: 50%;\"\u002F>\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fyushengsu-thu\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOpenBMB_AgentVerse_readme_a7c648fc7873.png\" alt=\"Leader\" style=\"width:5%; border-radius: 50%;\"\u002F>\u003C\u002Fa>\n\n## 贡献者\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fchanchimin\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOpenBMB_AgentVerse_readme_5209864943b8.png\" alt=\"Contributor\" style=\"width:5%; border-radius: 50%;\"\u002F>\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Flibowen2121\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOpenBMB_AgentVerse_readme_9a967f5af98e.png\" alt=\"Contributor\" style=\"width:5%; border-radius: 50%;\"\u002F>\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FXial-kotori\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOpenBMB_AgentVerse_readme_daa3b8ca19d7.png\" alt=\"Contributor\" style=\"width:5%; border-radius: 50%;\"\u002F>\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FDr-Left\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOpenBMB_AgentVerse_readme_f27f038f35e2.png\" alt=\"Contributor\" style=\"width:5%; border-radius: 50%;\"\u002F>\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fminleminzui\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOpenBMB_AgentVerse_readme_91c915664b9f.png\" alt=\"Contributor\" style=\"width:5%; border-radius: 50%;\"\u002F>\u003C\u002Fa>\n\u003Ca 
href=\"https:\u002F\u002Fgithub.com\u002FTsuruko04\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOpenBMB_AgentVerse_readme_d3dc66bb98c8.png\" alt=\"Contributor\" style=\"width:5%; border-radius: 50%;\"\u002F>\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fkierangilliam\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOpenBMB_AgentVerse_readme_e201474ae6d3.png\" alt=\"Contributor\" style=\"width:5%; border-radius: 50%;\"\u002F>\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fzhouxh19\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOpenBMB_AgentVerse_readme_45dd725d57d8.png\" alt=\"Contributor\" style=\"width:5%; border-radius: 50%;\"\u002F>\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ftzw2698\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOpenBMB_AgentVerse_readme_52a669bb4583.png\" alt=\"Contributor\" style=\"width:5%; border-radius: 50%;\"\u002F>\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FJetSquirrel\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOpenBMB_AgentVerse_readme_ce21462ad208.png\" alt=\"Contributor\" style=\"width:5%; border-radius: 50%;\"\u002F>\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FMuiruriscode\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOpenBMB_AgentVerse_readme_6818bb1c5643.png\" alt=\"Contributor\" style=\"width:5%; border-radius: 50%;\"\u002F>\u003C\u002Fa>\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Feltociear\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOpenBMB_AgentVerse_readme_73d65636eb8d.png\" alt=\"Contributor\" style=\"width:5%; border-radius: 50%;\"\u002F>\u003C\u002Fa>\n\n## 你能如何贡献？\n- **提交问题与拉取请求**：如果您在使用 AgentVerse 时遇到任何问题，欢迎用英文提交问题。此外，您也可以主动联系我们，将问题分配给您，并在解决问题后提交拉取请求（请遵循 
[PULL_REQUEST_TEMPLATE](https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fblob\u002Fmain\u002FPULL_REQUEST_TEMPLATE.md)）。\n\n- **代码开发**：如果您是一名工程师，请帮助我们完善、优化和扩展当前的框架。我们始终欢迎有才华的开发者加入，共同增强现有功能并开发新模块。\n\n- **文档与教程**：如果您擅长写作，欢迎您协助改进我们的文档、编写教程或撰写博客文章，使 AgentVerse 对更广泛的社区更加友好易用。\n\n- **应用探索**：如果您对多智能体应用充满兴趣，并渴望使用 AgentVerse 进行实验，我们将非常乐意支持您的探索之旅，期待看到您的成果！\n\n- **反馈与建议**：请积极使用 AgentVerse，并向我们提供反馈。您的宝贵意见将有助于我们不断改进，确保框架始终保持行业领先水平。\n\n此外，如果您热衷于推动多智能体应用领域的前沿发展，希望成为 AgentVerse 核心团队成员，或渴望深入研究智能体相关技术，请随时联系 [AgentVerse 团队](mailto:agentverse2@gmail.com?subject=[GitHub]%20AgentVerse%20Project)，并抄送 [陈伟泽](mailto:chenweize1998@gmail.com?subject=[GitHub]%20AgentVerse%20Project) 和 [苏宇生](mailto:yushengsu.thu@gmail.com?subject=[GitHub]%20AgentVerse%20Project)。我们诚挚地欢迎像您这样充满热情的伙伴加入我们的团队！\n\n## 社交媒体与社区\n\n- Twitter: https:\u002F\u002Ftwitter.com\u002FAgentverse71134\n\n- Discord: https:\u002F\u002Fdiscord.gg\u002FgDAXfjMw\n\n- Hugging Face: https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FAgentVerse\u002FagentVerse\n\n# 星标历史\n\n[![星标历史图表](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOpenBMB_AgentVerse_readme_696e88fc7503.png)](https:\u002F\u002Fstar-history.com\u002F#OpenBMB\u002FAgentVerse&Date)\n\n\n## 引用\n如果您觉得本仓库对您有所帮助，欢迎引用我们：\n```\n@article{chen2023agentverse,\n  title={Agentverse: Facilitating multi-agent collaboration and exploring emergent behaviors in agents},\n  author={Chen, Weize and Su, Yusheng and Zuo, Jingwei and Yang, Cheng and Yuan, Chenfei and Qian, Chen and Chan, Chi-Min and Qin, Yujia and Lu, Yaxi and Xie, Ruobing and others},\n  journal={arXiv preprint arXiv:2308.10848},\n  year={2023}\n}\n```\n\n# 联系方式\n\nAgentVerse 团队：agentverse2@gmail.com\n\n项目负责人：\n\n- 陈伟泽：chenweize1998@gmail.com\n\n- [苏宇生](https:\u002F\u002Fyushengsu-thu.github.io\u002F)：yushengsu.thu@gmail.com","# AgentVerse 快速上手指南\n\nAgentVerse 是一个旨在促进基于大语言模型（LLM）的多智能体在各种应用中部署的框架。它主要提供两种核心框架：**任务求解（Task-solving）**（多智能体协作解决问题）和**仿真（Simulation）**（构建环境观察智能体行为或与之交互）。\n\n## 1. 
环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**: Linux, macOS 或 Windows\n*   **Python 版本**: 3.9 或更高版本\n*   **依赖管理**: pip\n*   **API Key**: 如果您使用 OpenAI 等云端模型，需准备好 API Key 和组织 ID；若使用本地模型，需预留足够的显存。\n\n## 2. 安装步骤\n\n### 克隆项目\n首先从 GitHub 克隆仓库：\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse.git\ncd AgentVerse\n```\n\n### 安装依赖\n建议使用虚拟环境进行安装。\n\n**标准安装（使用 pip）：**\n```bash\npip install -r requirements.txt\n```\n\n> **国内加速建议**：如果下载速度较慢，可以使用国内镜像源加速安装：\n> ```bash\n> pip install -r requirements.txt -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n> ```\n\n**可选：安装额外依赖**\n如果您计划使用本地模型（如 LLaMA, Vicuna）或通过 FastChat\u002FvLLM 部署，可能需要安装额外的依赖包（具体参考项目根目录下的可选 requirement 文件或后续本地模型支持章节）。\n\n## 3. 基本使用\n\nAgentVerse 支持通过命令行（CLI）或图形界面（GUI）运行示例。使用前请配置环境变量。\n\n### 3.1 配置环境变量\n\n设置您的 OpenAI API 密钥（如果使用云端模型）：\n\n```bash\nexport OPENAI_API_KEY=\"your-api-key\"\nexport OPENAI_ORGANIZATION=\"your-organization-id\" # 可选\n```\n\n*(Windows PowerShell 用户请使用 `$env:OPENAI_API_KEY=\"your-api-key\"`)*\n\n### 3.2 运行仿真示例 (Simulation)\n\n仿真框架允许您观察多个智能体在自定义环境中的行为。\n\n**命令行模式 (CLI):**\n运行“囚徒困境”示例：\n```bash\npython agentverse_command\u002Fmain_simulation_cli.py --task simulation\u002Fprisoner_dilemma\n```\n\n**图形界面模式 (GUI):**\n运行“NLP 课堂”示例，可视化观察教授与学生的互动：\n```bash\npython agentverse_command\u002Fmain_simulation_gui.py --task simulation\u002Fnlp_classroom_9players\n```\n\n### 3.3 运行任务求解示例 (Task-solving)\n\n任务求解框架将多个智能体组装成系统以协作完成任务（如软件开发、数据库诊断）。\n\n**命令行模式 (CLI):**\n运行软件开发团队示例（包含代码编写者、测试者和审查者）：\n```bash\npython agentverse_command\u002Fmain_task_cli.py --task tasksolving\u002Fsde_team\n```\n\n### 3.4 本地模型支持 (Local Model)\n\nAgentVerse 支持接入本地部署的大模型（如通过 FastChat 或 vLLM）。\n\n1.  **启动本地服务** (以 FastChat 为例):\n    ```bash\n    python -m fastchat.serve.controller\n    python -m fastchat.serve.model_worker --model-path \u003Cyour-model-path>\n    python -m fastchat.serve.openai_api_server --host localhost --port 8000\n    ```\n\n2.  
**修改配置文件**:\n    在对应的任务配置文件（通常位于 `agentverse\u002Fconfigs\u002F` 下）中，将 `model_name` 修改为您的本地模型名称，并将 `api_base` 指向本地服务地址（例如 `http:\u002F\u002Flocalhost:8000\u002Fv1`）。\n\n3.  **运行命令**:\n    使用与上述相同的命令运行任务，系统将自动连接到本地 API 服务器。\n\n---\n*注：部分高级功能（如 Minecraft 示例或旧版 H5 Pokemon 游戏）可能位于特定的分支（如 `release-0.1` 或 `minecraft`），如需体验请切换至对应分支。当前主分支专注于重构后的通用仿真与任务求解框架。*","某初创游戏工作室希望快速构建一个包含多名 NPC 的开放世界社交模拟环境，以测试玩家与 AI 角色的互动逻辑。\n\n### 没有 AgentVerse 时\n- 开发者需手动编写大量样板代码来管理多个 LLM 实例间的消息路由与状态同步，开发周期长达数周。\n- 难以模拟复杂的群体动态，如“囚徒困境”或课堂讨论，每次调整角色性格或环境规则都需要重构底层通信逻辑。\n- 缺乏标准化的仿真框架，导致不同场景（如游戏 vs 社会行为研究）的代码无法复用，维护成本极高。\n- 调试多智能体协作过程如同“黑盒”，无法直观观察个体决策链条及相互影响的演化过程。\n\n### 使用 AgentVerse 后\n- 利用其内置的仿真（Simulation）框架，开发者仅需配置少量参数即可部署多智能体环境，将原型验证时间缩短至几天。\n- 直接调用预置的社会行为模板（如 NLP 课堂或博弈论场景），灵活定制角色人设与环境规则，无需重复造轮子。\n- 通过统一的任务解决（Task-solving）与仿真双架构，同一套代码库可轻松迁移于游戏开发与社会科学研究之间。\n- 提供可视化的交互日志与行为追踪，让研究人员能清晰复盘每个智能体的决策依据及群体涌现现象。\n\nAgentVerse 通过标准化的多智能体协作框架，将复杂的群体仿真从“手工定制”转变为“模块化部署”，极大降低了多 LLM 应用的研究与落地门槛。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FOpenBMB_AgentVerse_c0a18547.png","OpenBMB","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FOpenBMB_02e4bd39.png","OpenBMB (Open Lab for Big Model Base) aims to build foundation models and systems towards AGI.",null,"openbmb@gmail.com","https:\u002F\u002Fwww.openbmb.cn","https:\u002F\u002Fgithub.com\u002FOpenBMB",[83,87,91,95,99,103,106],{"name":84,"color":85,"percentage":86},"JavaScript","#f1e05a",80.6,{"name":88,"color":89,"percentage":90},"TypeScript","#3178c6",10.2,{"name":92,"color":93,"percentage":94},"Python","#3572A5",8.9,{"name":96,"color":97,"percentage":98},"Yacc","#4B6C4B",0.2,{"name":100,"color":101,"percentage":102},"Dockerfile","#384d54",0,{"name":104,"color":105,"percentage":102},"Shell","#89e051",{"name":107,"color":108,"percentage":102},"Batchfile","#C1F12E",4998,502,"2026-04-02T20:32:24","Apache-2.0",4,"未说明","未说明（支持本地大模型如 LLaMA、Vicuna，暗示可能需要 GPU，但未明确具体型号或显存要求）",{"notes":117,"python":118,"dependencies":119},"1. 
项目包含模拟（Simulation）和任务解决（Task-solving）两种框架，目前代码正在重构中，若需稳定的纯模拟框架版本请使用 release-0.1 分支。\n2. 部分演示（如 Pokemon 游戏）需要安装 Node.js (推荐 v20.0.0) 和 npm (测试版本@9.6.4) 来构建前端界面。\n3. 支持接入本地大模型（如 LLaMA, Vicuna），可通过 FastChat 或 vLLM 部署本地服务后在配置文件中修改接入。\n4. 运行某些示例需要配置 OpenAI API Key 和组织 ID。","3.9+",[120,121,122,123,124,125],"openai","uvicorn","npm (用于前端构建)","node.js (推荐 v20.0.0)","FastChat (可选，用于本地模型服务)","vLLM (可选，用于加速推理)",[15,26,13,14],[128,129,130,131,132],"agent","ai","gpt","gpt-4","llm","2026-03-27T02:49:30.150509","2026-04-06T06:44:00.941298",[136,141,146,151,156,161],{"id":137,"question_zh":138,"answer_zh":139,"source_url":140},11003,"运行时报错 'Error communicating with OpenAI' 或一直显示 'no connect' 怎么办？","这通常是由于代理设置问题导致的。如果您使用的是异步接口，可能是 `aiohttp` 无法通过系统代理。可以尝试在代码中设置 `trust_env=True` 让 `aiohttp` 使用系统代理。如果是 Windows 用户且在 PyCharm 终端遇到问题，尝试使用系统的 `cmd` 终端运行。此外，确保您的网络代理配置正确，或者直接在代码中硬编码代理地址（如 `openai.proxy = \"http:\u002F\u002F127.0.0.1:7890\"`）进行测试。","https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fissues\u002F43",{"id":142,"question_zh":143,"answer_zh":144,"source_url":145},11004,"如何在 Windows 上正确配置 OPENAI_API_KEY 环境变量？","在 Windows 的 Anaconda Prompt 中使用 `set` 命令可能无法被程序正确识别。建议直接修改 Windows 系统的环境变量：右键“此电脑”->“属性”->“高级系统设置”->“环境变量”，在系统变量中新建 `OPENAI_API_KEY` 并填入您的密钥。如果仍然报错，可能是代理问题，可以在代码中直接将 `openai.proxy` 设置为您的代理服务器地址，因为框架默认从 `http_proxy` 读取，Windows 下可能需要手动指定。","https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fissues\u002F1",{"id":147,"question_zh":148,"answer_zh":149,"source_url":150},11005,"使用本地模型（如 Llama-2）时输出重复或无法生成有效响应的原因是什么？","这是因为本地模型可能无法严格遵守框架要求的特定输出格式（例如 `Action: [...]` 和 `Action Input: [...]`）。OpenAI 的模型通常能很好地遵循此格式，但本地模型可能会偏离。当系统检测不到所需格式时，会自动重试，导致提示词重复。解决方法是优化本地模型的 Prompt 以提高格式遵循度，或者调整系统的解析逻辑以容忍一定的格式偏差。","https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fissues\u002F100",{"id":152,"question_zh":153,"answer_zh":154,"source_url":155},11006,"如何在 AgentVerse 中配置和使用 Azure OpenAI 服务？","需要在配置文件（config.yaml）中将 agent 的 llm 字段修改为包含 `deployment_id` 
的形式，例如：\n```yaml\nllm:\n  llm_type: gpt-3.5-turbo\n  deployment_id: gpt-35-turbo\n  temperature: 0.7\n  max_tokens: 1024\n```\n同时，需要修改源代码 `agentverse\u002Fllms\u002Fopenai.py` 第 47 行附近，将相关参数的默认值改为 `default=None`，以便允许传入 Azure 特定的参数（如 engine 或 deployment_id）。","https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fissues\u002F99",{"id":157,"question_zh":158,"answer_zh":159,"source_url":160},11007,"运行 demo 时出现 'ImportError: cannot import name get_embedding' 错误如何解决？","这是一个代码版本问题，之前版本中误注释掉了 BMTools 在仿真环境中的支持，导致 `get_embedding` 函数缺失。维护者已在后续提交中修复了此问题。请拉取最新的代码版本（git pull），确保包含修复该问题的提交（如 PR #48 相关的更改），然后重新运行即可。","https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fissues\u002F46",{"id":162,"question_zh":163,"answer_zh":164,"source_url":165},11008,"运行 npm install 时出现 FileNotFoundError 或路径错误怎么办？","这通常发生在 Windows 环境下，原因是路径过长或包含特殊字符导致文件系统无法找到指定的临时目录（如 `.staging` 文件夹）。建议检查项目所在的路径是否过深，尝试将项目移动到根目录下较短的路径（例如 `C:\\AgentVerse`）后重新运行 `npm install`。此外，确保拥有对目标文件夹的读写权限。","https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fissues\u002F17",[167,172,177],{"id":168,"version":169,"summary_zh":170,"released_at":171},53481,"v0.1.8.1","## 变更内容\n* 修复：@minleminzui 在 https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fpull\u002F62 中修复了更新 kwargs 的一个 bug\n* 添加模拟 UI 参数：@JetSquirrel 在 https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fpull\u002F63 中添加\n* 修改：@1rubbishyuan 在 https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fpull\u002F65 中进行修改\n* 56 条待办事项更新 README：@yushengsu-thu 在 https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fpull\u002F66 中完成\n* 文档：@ASL-r 在 https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fpull\u002F67 中修改 README.md\n* 支持本地大模型：@cheesewafer 在 https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fpull\u002F68 中实现\n* 更新 README.md：@yushengsu-thu 在 https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fpull\u002F71 中完成\n* 更新 README.md：@yushengsu-thu 在 
https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fpull\u002F74 中完成\n* 添加 PR 模板：@yushengsu-thu 在 https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fpull\u002F78 中添加\n* 更新 README.md：@yushengsu-thu 在 https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fpull\u002F79 中完成\n* 更新 README.md：@yushengsu-thu 在 https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fpull\u002F80 中完成\n* 更新 README.md：@yushengsu-thu 在 https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fpull\u002F85 中完成\n\n## 新贡献者\n* @1rubbishyuan 在 https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fpull\u002F65 中完成了首次贡献\n* @yushengsu-thu 在 https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fpull\u002F66 中完成了首次贡献\n* @ASL-r 在 https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fpull\u002F67 中完成了首次贡献\n* @cheesewafer 在 https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fpull\u002F68 中完成了首次贡献\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fcompare\u002Fv0.1.8...v0.1.8.1","2023-10-27T14:18:53",{"id":173,"version":174,"summary_zh":175,"released_at":176},53482,"v0.1.8","## 变更内容\n* 修复：由 @minleminzui 在 https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fpull\u002F59 中按顺序排列输出解析器类。\n\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fcompare\u002Fv0.1.5...v0.1.8","2023-10-12T11:46:24",{"id":178,"version":179,"summary_zh":180,"released_at":181},53483,"v0.1.5","## 变更内容\n* Zqm 开发，由 @chanchimin 在 https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fpull\u002F4 中完成\n* 新增：数据库诊断的 AgentVerse 示例，由 @zhouxh19 在 https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fpull\u002F5 中完成\n* 移除 LangChain，新增多个数据库和囚徒困境示例，由 @chenweize1998 在 https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fpull\u002F9 中完成\n* SDE 团队移除 LangChain，由 @libowen2121 在 https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fpull\u002F12 中完成\n* 新增 Phaser UI 示例，由 @chenweize1998 
在 https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fpull\u002F16 中完成\n* 新增反思机制，由 @chanchimin 在 https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fpull\u002F18 中完成\n* 更新 README.md，由 @chanchimin 在 https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fpull\u002F29 中完成\n* 修复 README.md 中的拼写错误，由 @eltociear 在 https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fpull\u002F30 中完成\n* 更新 README.md，由 @Muiruriscode 在 https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fpull\u002F36 中完成\n* 功能：新增 Azure OpenAI 支持，由 @JetSquirrel 在 https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fpull\u002F49 中完成\n* CI：添加用于冒烟测试的 GitHub Actions 工作流，由 @minleminzui 在 https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fpull\u002F50 中完成\n* 报告在 OpenAI 上花费的金额，由 @kierangilliam 在 https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fpull\u002F52 中完成\n* 修复：允许用户自定义任务的 config.yaml 文件，由 @minleminzui 在 https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fpull\u002F53 中完成\n* 修复：完善 AgentVerse 演示的命令行，使其更加…，由 @minleminzui 在 https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fpull\u002F55 中完成\n\n## 新贡献者\n* @chanchimin 在 https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fpull\u002F4 中完成了首次贡献\n* @zhouxh19 在 https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fpull\u002F5 中完成了首次贡献\n* @chenweize1998 在 https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fpull\u002F9 中完成了首次贡献\n* @libowen2121 在 https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fpull\u002F12 中完成了首次贡献\n* @eltociear 在 https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fpull\u002F30 中完成了首次贡献\n* @Muiruriscode 在 https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fpull\u002F36 中完成了首次贡献\n* @JetSquirrel 在 https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fpull\u002F49 中完成了首次贡献\n* @minleminzui 在 https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fpull\u002F50 中完成了首次贡献\n* 
@kierangilliam 在 https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fpull\u002F52 中完成了首次贡献\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FAgentVerse\u002Fcommits\u002Fv0.1.5","2023-10-10T08:46:26"]