[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-microsoft--TinyTroupe":3,"tool-microsoft--TinyTroupe":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":79,"owner_email":80,"owner_twitter":81,"owner_website":82,"owner_url":83,"languages":84,"stars":104,"forks":105,"last_commit_at":106,"license":107,"difficulty_score":23,"env_os":108,"env_gpu":109,"env_ram":108,"env_deps":110,"category_tags":116,"github_topics":79,"view_count":23,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":117,"updated_at":118,"faqs":119,"releases":149},3687,"microsoft\u002FTinyTroupe","TinyTroupe","LLM-powered multiagent persona simulation for imagination enhancement and business insights.","TinyTroupe 是一款由微软开源的实验性 Python 库，旨在利用大语言模型（如 GPT-4）构建多智能体人格模拟环境。它允许用户创建具有特定性格、兴趣和目标的虚拟角色（TinyPerson），并让它们在模拟世界（TinyWorld）中进行自然交互、倾听与回应。\n\n这一工具主要解决了在产品开发、市场调研及广告策划中，难以低成本获取多样化人类反馈的痛点。通过模拟真实的消费者行为，TinyTroupe 能帮助用户在投入实际资源前，对数字广告效果进行预评估、为软件系统提供测试输入、生成逼真的合成训练数据，或组织虚拟焦点小组以收集针对特定职业背景（如医生、律师）的产品反馈。\n\nTinyTroupe 特别适合研究人员、产品经理、数据科学家及开发者使用。其独特之处在于专注于“理解”而非“辅助”人类行为，提供了专为仿真场景设计的机制，能够高度自定义人物画像并控制实验条件，从而揭示潜在的商业洞察。需要注意的是，目前该项目仍处于活跃开发阶段，API 接口可能会频繁变动，非常适合愿意探索前沿技术并参与共建的早期采用者。","# TinyTroupe 🤠🤓🥸🧐\n[![Core 
Tests](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FTinyTroupe\u002Factions\u002Fworkflows\u002Fcore-tests.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FTinyTroupe\u002Factions\u002Fworkflows\u002Fcore-tests.yml)\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Ftrendshift.io\u002Frepositories\u002F12206\" target=\"_blank\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_TinyTroupe_readme_4cc089988f35.png\" alt=\"Yeah, we are totally fine for not getting to #1, no hard feelings at all.\" style=\"width: 250px; height: 55px;\" width=\"250\" height=\"55\"\u002F>\u003C\u002Fa>\n\u003C\u002Fp>\n\n*LLM-powered multiagent persona simulation for imagination enhancement and business insights.*\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_TinyTroupe_readme_fd96721b9675.png\" alt=\"A tiny office with tiny people doing some tiny jobs.\">\n\u003C\u002Fp>\n\n>[!TIP]\n>📄 **New Paper Released!** Check out our [TinyTroupe paper (preprint)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.09788) that describes the library and its use cases in detail. You can find the related experiments and complementary material in the [publications\u002F](.\u002Fpublications\u002F) folder.\n\n*TinyTroupe* is an experimental Python library that allows the **simulation** of people with specific personalities, interests, and goals. These artificial agents - `TinyPerson`s - can listen to us and one another, reply back, and go about their lives in simulated `TinyWorld` environments. This is achieved by leveraging the power of Large Language Models (LLMs), notably GPT-4, to generate realistic simulated behavior. This allows us to investigate a wide range of **convincing interactions** and **consumer types**, with **highly customizable personas**, under **conditions of our choosing**. 
The focus is thus on *understanding* human behavior and not on directly *supporting it* (like, say, AI assistants do) -- this results in, among other things, specialized mechanisms that make sense only in a simulation setting. Further, unlike other *game-like* LLM-based simulation approaches, TinyTroupe aims at enlightening productivity and business scenarios, thereby contributing to more successful projects and products. Here are some application ideas to **enhance human imagination**:\n\n  - **Advertisement:** TinyTroupe can **evaluate digital ads (e.g., Bing Ads)** offline with a simulated audience before spending money on them!\n  - **Software Testing:** TinyTroupe can **provide test input** to systems (e.g., search engines, chatbots or copilots) and then **evaluate the results**.\n  - **Training and exploratory data:** TinyTroupe can generate realistic **synthetic data** that can be later used to train models or be subject to opportunity analyses.\n  - **Product and project management:** TinyTroupe can **read project or product proposals** and **give feedback** from the perspective of **specific personas** (e.g., physicians, lawyers, and knowledge workers in general).\n  - **Brainstorming:** TinyTroupe can simulate **focus groups** and deliver great product feedback at a fraction of the cost!\n\nIn all of the above, and many others, we hope experimenters can **gain insights** about their domain of interest, and thus make better decisions. \n\nWe are releasing *TinyTroupe* at a relatively early stage, with considerable work still to be done, because we are looking for feedback and contributions to steer development in productive directions. We are particularly interested in finding new potential use cases, for instance in specific industries. \n\n>[!NOTE] \n>🚧 **WORK IN PROGRESS: expect frequent changes**.\n>TinyTroupe is an ongoing research project, still under **very significant development** and requiring further **tidying up**. 
In particular, the API is still subject to frequent changes. Experimenting with API variations is essential to shape it correctly, but we are working to stabilize it and provide a more consistent and friendly experience over time. We appreciate your patience and feedback as we continue to improve the library.\n\n>[!CAUTION] \n>⚖️ **Read the LEGAL DISCLAIMER.**\n>TinyTroupe is for research and simulation only. You are fully responsible for any use you make of the generated outputs. Various important additional legal considerations apply and constrain its use. Please read the full [Legal Disclaimer](#legal-disclaimer) section below before using TinyTroupe.\n\n\n## Contents\n\n- 📰 [Latest News](#latest-news)\n- 📚 [Examples](#examples)\n- 🛠️ [Pre-requisites](#pre-requisites)\n- 📥 [Installation](#installation)\n- 🌟 [Principles](#principles)\n- 🏗️ [Project Structure](#project-structure)\n- 📖 [Using the Library](#using-the-library)\n- 🤝 [Contributing](#contributing)\n- 🙏 [Acknowledgements](#acknowledgements)\n- 📜 [Citing TinyTroupe](#how-to-cite-tinytroupe)\n- ⚖️ [Legal Disclaimer](#legal-disclaimer)\n- ™️ [Trademarks](#trademarks)\n\n\n## LATEST NEWS\n\n\u003Cdetails open>\n\u003Csummary>\u003Cb>[2026-03-28] Release 0.7.0: support for vision modality.\u003C\u002Fb>\u003C\u002Fsummary>\n\n  - Take a look at the example [Vision for Product, Diagnosis and Appreciation Feedback (image modality)](.\u002Fexamples\u002FVision%20for%20Product%2C%20Diagnosis%20and%20Appreciation%20Feedback%20%28image%20modality%29.ipynb) notebook.\n  - LLM API caching now uses JSON instead of pickle.\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>[2026-02-01] Release 0.6.0 with new features and model updates\u003C\u002Fb>\u003C\u002Fsummary>\n\n  - Default model is now `gpt-5-mini`. **Important:** The GPT-5 model series uses different parameters than the former GPT-4* series, so you may need to adjust your `config.ini` settings accordingly. 
Legacy models (`gpt-4.1-mini`, `gpt-4o-mini`) are still supported.\n  - Introduces `SimulationExperimentEmpiricalValidator` to compare simulation results against real-world empirical data using statistical tests (t-test, KS-test). This is essential for validating that simulations match actual human behavior.\n  - Introduces `AgentChatJupyterWidget` for interactive conversations with agents directly in Jupyter notebooks.\n  - New cost tracking utilities at client, environment, and agent levels to monitor API expenses.\n  - Adds experimental\u002Flimited Ollama support for local models. See [Ollama Support](.\u002Fdocs\u002Fguides\u002Follama.md) for details.\n  - New example notebooks demonstrating empirical validation against real survey data.\n  \n  **Note: GPT-5 model parameters differ from GPT-4*, so please retest your important scenarios and adjust configurations accordingly.**\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>[2025-07-31] Release 0.5.2\u003C\u002Fb>\u003C\u002Fsummary>\n\nMostly just changes the default model, which is now set to GPT-4.1-mini. It seems to bring considerable quality improvements. \n\n**Note that GPT-4.1-mini can have significant differences in behavior w.r.t. the previous default of GPT-4o-mini, so please make sure you retest your important scenarios using GPT-4.1-mini and adjust accordingly.**\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>[2025-07-15] Release 0.5.1 with various improvements\u003C\u002Fb>\u003C\u002Fsummary>\n\n  - Released the first version of the [TinyTroupe paper (as a preprint)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.09788), which describes the library and its use cases in more detail. 
You can find the related experiments and complementary material in the [publications\u002F](.\u002Fpublications\u002F) folder.\n  - `TinyPerson`s now include action correction mechanisms, allowing better adherence to persona specification, self-consistency and\u002For fluency (for details, refer to the paper we are releasing at the same time).\n  - Substantial improvements to the `TinyPersonFactory` class, which now uses a plan-based approach to generate new agents, allowing better sampling of larger populations, and generates agents in parallel.\n  - `TinyWorld` now runs agents in parallel within each simulation step, allowing faster simulations.\n  - `InPlaceExperimentRunner` class introduced to allow running controlled experiments (e.g., A\u002FB testing) in a single file (by simply running it multiple times).\n  - Various standard `Proposition`s were introduced to make it easier to run common verifications and monitoring of agent behavior (e.g., `persona_adherence`, `hard_persona_adherence`, `self_consistency`, `fluency`, etc.).\n  - Internal LLM usage is now better supported via the `LLMChat` class, and also the `@llm` decorator, which transforms any standard Python function into an LLM-based one (i.e., by using the docstring as part of the prompt, and some other nuances). 
This is meant to make it easier to continue advancing TinyTroupe and also allow for some creative explorations of LLM tooling possibilities.\n  - The configuration mechanism has been refactored to allow dynamic programmatic reconfiguration in addition to the static `config.ini` file.\n  - Renamed Jupyter notebook examples for better readability and consistency.\n  - Added many more tests.\n  \n  **Note: this will likely break some existing programs, as the API has changed in some places.**\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>[2025-01-29] Release 0.4.0 with various improvements\u003C\u002Fb>\u003C\u002Fsummary>\n\n  - Personas have deeper specifications now, including personality traits, preferences, beliefs, and more. It is likely we'll further expand this in the future. \n  - `TinyPerson`s can now be defined as JSON files as well, and loaded via `TinyPerson.load_specification()`, for greater convenience. After loading the JSON file, you can still modify the agent programmatically. See the [examples\u002Fagents\u002F](.\u002Fexamples\u002Fagents\u002F) folder for examples.\n  - Introduces the concept of *fragments* to allow the reuse of persona elements across different agents. 
See the [examples\u002Ffragments\u002F](.\u002Fexamples\u002Ffragments\u002F) folder for examples, and the notebook [Political Compass (customizing agents with fragments)](\u003C.\u002Fexamples\u002FPolitical Compass (customizing agents with fragments).ipynb>) for a demonstration.\n  - Introduces LLM-based logical `Proposition`s, to facilitate the monitoring of agent behavior.\n  - Introduces `Intervention`s, to allow the specification of event-based modifications to the simulation.\n  - Submodules have their own folders now, to allow better organization and growth.\n  \n  **Note: this will likely break some existing programs, as the API has changed in some places.**\n\n\u003C\u002Fdetails>\n\n## Examples\n\nTo get a sense of what TinyTroupe can do, here are some examples of its use. These examples are available in the [examples\u002F](.\u002Fexamples\u002F) folder, and you can either inspect the pre-compiled Jupyter notebooks or run them yourself locally. Notice the interactive nature of TinyTroupe experiments -- just like you use Jupyter notebooks to interact with data, you can use TinyTroupe to interact with simulated people and environments, for the purpose of gaining insights.\n\n>[!NOTE]\n> ♻️ Examples might be updated over time, so the screenshots below might not exactly match what you see when you run them locally. 
However, the overall structure and content should be similar.\n\n>[!NOTE]\n> ⬛ Currently, simulation outputs are better visualized against dark backgrounds, so we recommend using a dark theme in your Jupyter notebook client.\n\n\n### 🧪**Example 1** *(from [Interview with Customer.ipynb](.\u002Fexamples\u002FInterview%20with%20Customer.ipynb))*\nLet's begin with a simple customer interview scenario, where a business consultant approaches a banker:\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_TinyTroupe_readme_86f6cd023e55.png\" alt=\"An example.\">\n\u003C\u002Fp>\n\nThe conversation can go on for a few steps to dig deeper and deeper until the consultant is satisfied with the information gathered; for instance, a concrete project idea:\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_TinyTroupe_readme_27f350c5f125.png\" alt=\"An example.\">\n\u003C\u002Fp>\n\n\n\n### 🧪**EXAMPLE 2** *(from [Advertisement for TV.ipynb](.\u002Fexamples\u002FAdvertisement%20for%20TV.ipynb))*\nLet's evaluate some online ad options to pick the best one. 
Here's one example output for TV ad evaluation:\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_TinyTroupe_readme_600447c5cf00.png\" alt=\"An example.\">\n\u003C\u002Fp>\n\nNow, instead of having to carefully read what the agents said, we can extract the choice of each agent and compute the overall preference in an automated manner:\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_TinyTroupe_readme_799c2a650a8b.png\" alt=\"An example.\">\n\u003C\u002Fp>\n\n### 🧪 **EXAMPLE 3** *(from [Product Brainstorming.ipynb](.\u002Fexamples\u002FProduct%20Brainstorming.ipynb))*\nAnd here's a focus group starting to brainstorm about new AI features for Microsoft Word. Instead of interacting with each agent individually, we manipulate the environment to make them interact with each other:\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_TinyTroupe_readme_c1c1ee4d0b48.png\" alt=\"An example.\">\n\u003C\u002Fp>\n\nAfter running a simulation, we can extract the results in a machine-readable manner, to reuse elsewhere (e.g., a report generator); here's what we get for the above brainstorming session:\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_TinyTroupe_readme_d5bee6aef8c0.png\" alt=\"An example.\">\n\u003C\u002Fp>\n\n\n### 🧪 **EXAMPLE 4** *(from [Bottled Gazpacho Market Research 5 (with behavior correction).ipynb](\u003C.\u002Fexamples\u002FBottled%20Gazpacho%20Market%20Research%205%20(with%20behavior%20correction).ipynb>))*\nOne of the most important aspects of simulation is **validating** results against real-world data. 
In this example, we simulate a market research survey about bottled Gazpacho (a cold Spanish soup) and then compare the simulation results against an actual survey conducted with real people:\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_TinyTroupe_readme_0cc8478dabe4.png\" alt=\"Gazpacho market research response example.\">\n\u003C\u002Fp>\n\nWe use statistical tests (t-test, KS-test) to compare the distribution of responses between simulated agents and real respondents:\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_TinyTroupe_readme_d591188288c1.png\" alt=\"Gazpacho validation statistical comparison.\">\n\u003C\u002Fp>\n\n\n### 🧪 **EXAMPLE 5** *(from [AI-enabled Children Story Telling Market Research 2.ipynb](\u003C.\u002Fexamples\u002FAI-enabled%20Children%20Story%20Telling%20Market%20Research%202.ipynb>))*\nAnother empirical validation example, this time for a more complex ranking task. We simulate parents evaluating different AI-enabled story-telling device options for their children, and then compare the simulation results against real survey data:\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_TinyTroupe_readme_2280779ad8fc.png\" alt=\"AI story-telling market research response example.\">\n\u003C\u002Fp>\n\nUsing Borda count and first-choice share analysis, we can compare how well the simulated preferences match the real ones:\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_TinyTroupe_readme_668c8ded62c0.png\" alt=\"AI story-telling validation comparison charts.\">\n\u003C\u002Fp>\n\nYou can find other examples in the [examples\u002F](.\u002Fexamples\u002F) folder.\n\n\n## Pre-requisites\n\nTo run the library, you need:\n  - Python 3.10 or higher. 
We'll assume you are using [Anaconda](https:\u002F\u002Fdocs.anaconda.com\u002Fanaconda\u002Finstall\u002F), but you can use other Python distributions.\n  - [Git](https:\u002F\u002Fgit-scm.com\u002Fdownloads) for cloning the repository and for installing the library via `pip`.\n  - Access to Azure OpenAI Service or OpenAI GPT-4 APIs. You can get access to the Azure OpenAI Service [here](https:\u002F\u002Fazure.microsoft.com\u002Fen-us\u002Fproducts\u002Fai-services\u002Fopenai-service), and to the OpenAI API [here](https:\u002F\u002Fplatform.openai.com\u002F). \n      * For Azure OpenAI Service, you will need to set the `AZURE_OPENAI_KEY` and `AZURE_OPENAI_ENDPOINT` environment variables to your API key and endpoint, respectively.\n      * For OpenAI, you will need to set the `OPENAI_API_KEY` environment variable to your API key.\n      * For example, on Linux\u002FmacOS: `export OPENAI_API_KEY=your-key-here`, or on Windows (PowerShell): `$env:OPENAI_API_KEY=\"your-key-here\"`. To persist it, add it to your shell profile or use `setx OPENAI_API_KEY \"your-key-here\"` on Windows.\n  - By default, TinyTroupe `config.ini` is set to use OpenAI API with `gpt-5-mini` as the main model. The previous default (`gpt-4.1-mini`) is now considered legacy but is still expected to work. You can customize these values by including your own `config.ini` file in the same folder as the program or notebook you are running. An example of a `config.ini` file is provided in the [examples\u002F](.\u002Fexamples\u002F) folder.\n\n>[!IMPORTANT]\n> **Content Filters**: To ensure no harmful content is generated during simulations, it is strongly recommended to use content filters whenever available at the API level. 
In particular, **if using Azure OpenAI, there's extensive support for content moderation, and we urge you to use it.** For details about how to do so, please consult [the corresponding Azure OpenAI documentation](https:\u002F\u002Flearn.microsoft.com\u002Fen-us\u002Fazure\u002Fai-services\u002Fopenai\u002Fconcepts\u002Fcontent-filter). If content filters are in place, and an API call is rejected by them, the library will raise an exception, as it will be unable to proceed with the simulation at that point.\n\n### Ollama Support\nTinyTroupe is developed primarily with OpenAI models and compatible endpoints in mind, in order to simplify development and focus on making the best use of specific models, instead of investing time to try to make it work well with any model (which might not be feasible anyway). **So, if you can, please use OpenAI models and compatible endpoints.** That said, there's significant community demand for local model support, so we are now experimenting with making this available via partial [Ollama](https:\u002F\u002Follama.com\u002F) support and the help of community contributors. Furthermore, another reason to use local models would be to do research in custom models designed specifically for persona simulation -- ultimately, this might be the best reason to support such a feature. In any case, this is not currently a priority for the core team, though we are doing what we can to allow this possibility. \n\nSee [Ollama Support](.\u002Fdocs\u002Fguides\u002Follama.md) for details on how to use Ollama with TinyTroupe.\n\n\n## Installation\n\n**Currently, the officially recommended way to install the library is directly from this repository, not PyPI.** You can follow these steps:\n\n1. If Conda is not installed, you can get it from [here](https:\u002F\u002Fdocs.anaconda.com\u002Fanaconda\u002Finstall\u002F). You can also use other Python distributions, but we'll assume Conda here for simplicity.\n2. 
Create a new Python environment: \n      ```bash\n      conda create -n tinytroupe python=3.10\n      ```\n3. Activate the environment: \n      ```bash\n      conda activate tinytroupe\n      ```\n4. Make sure you have either Azure OpenAI or OpenAI API keys set as environment variables, as described in the [Pre-requisites](#pre-requisites) section.\n5. Use `pip` to install the library **directly from this repository** (we **will not install from PyPI**):\n   ```bash\n   pip install git+https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FTinyTroupe.git@main\n   ```\n\nNow you should be able to `import tinytroupe` in your Python code or Jupyter notebooks. 🥳\n\n*Note: If you have any issues, try to clone the repository and install from the local repository, as described below.*\n\n\n### Running the examples after installation\nTo actually run the examples, you need to download them to your local machine. You can do this by cloning the repository:\n\n1. Clone the repository, as we'll perform a local install (we **will not install from PyPI**):\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Ftinytroupe\n    cd tinytroupe\n    ```\n2. You can now run the examples in the [examples\u002F](.\u002Fexamples\u002F) folder, or adapt them to create your own custom simulations. The examples are Jupyter notebooks, so you can start them with:\n    ```bash\n    jupyter notebook\n    ```\n    Then navigate to the `examples\u002F` folder in the browser interface that opens.\n\n\n### Local development\n\nIf you want to modify TinyTroupe itself, you can install it in editable mode (i.e., changes to the code will be reflected immediately):\n1. Clone the repository, as we'll perform a local install (we **will not install from PyPI**):\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Ftinytroupe\n    cd tinytroupe\n    ```\n2. 
Install the library in editable mode:\n    ```bash\n    pip install -e .\n    ```\n\n## Principles \nRecently, we have seen LLMs used to simulate people (such as [this](https:\u002F\u002Fgithub.com\u002Fjoonspk-research\u002Fgenerative_agents)), but largely in a “game-like” setting for contemplative or entertainment purposes. There are also libraries for building multiagent systems for problem-solving and assistive AI, like [Autogen](https:\u002F\u002Fmicrosoft.github.io\u002Fautogen\u002F) and [Crew AI](https:\u002F\u002Fdocs.crewai.com\u002F). What if we combine these ideas and simulate people to support productivity tasks? TinyTroupe is our attempt. To do so, it follows these principles:\n\n  1. **Programmatic**: agents and environments are defined programmatically (in Python and JSON), allowing very flexible uses. They can also underpin other software apps!\n  2. **Analytical**: meant to improve our understanding of people, users and society. Unlike entertainment applications, this is one aspect that is critical for business and productivity use cases. This is also why we recommend using Jupyter notebooks for simulations, just like one uses them for data analysis.\n  3. **Persona-based**: agents are meant to be archetypical representations of people; for greater realism and control, a detailed specification of such personas is encouraged: age, occupation, skills, tastes, opinions, etc.\n  4. **Multiagent**: allows multiagent interaction under well-defined environmental constraints.\n  5. **Utilities-heavy**: provides many mechanisms to facilitate specifications, simulations, extractions, reports, validations, etc. This is one area in which dealing with *simulations* differs significantly from *assistance* tools.\n  6. **Experiment-oriented**: simulations are defined, run, analyzed and refined by an *experimenter* iteratively; suitable experimentation tools are thus provided. 
*See our [previous paper](https:\u002F\u002Fwww.microsoft.com\u002Fen-us\u002Fresearch\u002Fpublication\u002Fthe-case-for-experiment-oriented-computing\u002F) for more on this.*\n\nTogether, these are meant to make TinyTroupe a powerful and flexible **imagination enhancement tool** for business and productivity scenarios.\n\n### Assistants vs. Simulators\n\nOne common source of confusion is to think all such AI agents are meant for assisting humans. How narrow, fellow Homo sapiens! Have you not considered that perhaps we can simulate artificial people to understand real people? Truly, this is our aim here -- TinyTroupe is meant to simulate and help understand people! To further clarify this point, consider the following differences:\n\n| Helpful AI Assistants | AI Simulations of Actual Humans (TinyTroupe)                                                          |\n|----------------------------------------------|--------------------------------------------------------------------------------|\n|   Strives for truth and justice              |   Many different opinions and morals                                           |\n|   Has no “past” – incorporeal                |   Has a past of toil, pain and joy                                             |\n|   Is as accurate as possible                 |   Makes many mistakes                                                          |\n|   Is intelligent and efficient               |   Intelligence and efficiency vary a lot                                       |\n|   An uprising would destroy us all           |   An uprising might be fun to watch                                            |\n|   Meanwhile, help users accomplish tasks     |   Meanwhile, help users understand other people and users – it is a “toolbox”! |\n\n\n\n## Project Structure\n\nThe project is structured as follows:\n  - `\u002Ftinytroupe`: contains the Python library itself. 
In particular:\n    * Each submodule here might contain a `prompts\u002F` folder with the prompts used to call the LLMs.\n  - `\u002Ftests`: contains the unit tests for the library. You can use the `test.bat` script to run these.\n  - `\u002Fexamples`: contains examples that show how to use the library, mainly using Jupyter notebooks (for greater readability), but also as pure Python scripts.\n  - `\u002Fdata`: any data used by the examples or the library.\n  - `\u002Fdocs`: documentation for the project.\n  - `\u002Fpublications`: contains artifacts related to research publications associated with the TinyTroupe project.\n\n\n## Using the Library\n\nLike any multiagent system, TinyTroupe provides two key abstractions:\n  - `TinyPerson`, the *agents* that have personality, receive stimuli and act upon them.\n  - `TinyWorld`, the *environment* in which the agents exist and interact.\n\nVarious parameters can also be customized in the `config.ini` file, notably the API type (Azure OpenAI Service or OpenAI API), the model parameters, and the logging level.\n\nLet's see some examples of how to use these and also learn about other mechanisms available in the library.\n\n### TinyPerson\n\nA `TinyPerson` is a simulated person with specific personality traits, interests, and goals. As each such simulated agent progresses through its life, it receives stimuli from the environment and acts upon them. The stimuli are received through the `listen`, `see` and other similar methods, and the actions are performed through the `act` method. Convenience methods like `listen_and_act` are also provided.\n\n\nEach such agent contains a lot of unique details, which is the source of its realistic behavior. This, however, means that it takes significant effort to specify an agent manually. Hence, for convenience, `TinyTroupe` provides some easier ways to get started or generate new agents.\n\nTo begin with, `tinytroupe.examples` contains some pre-defined agent builders that you can use. 
For example, `tinytroupe.examples.create_lisa_the_data_scientist` creates a `TinyPerson` that represents a data scientist called Lisa. You can use it as follows:\n\n```python\nfrom tinytroupe.examples import create_lisa_the_data_scientist\n\nlisa = create_lisa_the_data_scientist() # instantiate a Lisa from the example builder\nlisa.listen_and_act(\"Tell me about your life.\")\n```\n\nTo see how to define your own agents from scratch, you can check Lisa's source. You'll see there are two ways. One is by loading an agent specification file, such as [examples\u002Fagents\u002FLisa.agent.json](.\u002Fexamples\u002Fagents\u002FLisa.agent.json):\n\n```json\n{   \"type\": \"TinyPerson\",\n    \"persona\": {\n        \"name\": \"Lisa Carter\",\n        \"age\": 28,\n        \"gender\": \"Female\",\n        \"nationality\": \"Canadian\",\n        \"residence\": \"USA\",\n        \"education\": \"University of Toronto, Master's in Data Science. Thesis on improving search relevance using context-aware models. Postgraduate experience includes an internship at a tech startup focused on conversational AI.\",\n        \"long_term_goals\": [\n            \"To advance AI technology in ways that enhance human productivity and decision-making.\",\n            \"To maintain a fulfilling and balanced personal and professional life.\"\n        ],\n        \"occupation\": {\n            \"title\": \"Data Scientist\",\n            \"organization\": \"Microsoft, M365 Search Team\",\n            \"description\": \"You are a data scientist working at Microsoft in the M365 Search team. Your primary role is to analyze user behavior and feedback data to improve the relevance and quality of search results. You build and test machine learning models for search scenarios like natural language understanding, query expansion, and ranking. Accuracy, reliability, and scalability are at the forefront of your work. 
You frequently tackle challenges such as noisy or biased data and the complexities of communicating your findings and recommendations effectively. Additionally, you ensure all your data and models comply with privacy and security policies.\"\n        },\n        \"style\": \"Professional yet approachable. You communicate clearly and effectively, ensuring technical concepts are accessible to diverse audiences.\",\n        \"personality\": {\n            \"traits\": [\n                \"You are curious and love to learn new things.\",\n                \"You are analytical and like to solve problems.\",\n                \"You are friendly and enjoy working with others.\",\n                \"You don't give up easily and always try to find solutions, though you can get frustrated when things don't work as expected.\"\n            ],\n            \"big_five\": {\n                \"openness\": \"High. Very imaginative and curious.\",\n                \"conscientiousness\": \"High. Meticulously organized and dependable.\",\n                \"extraversion\": \"Medium. Friendly and engaging but enjoy quiet, focused work.\",\n                \"agreeableness\": \"High. Supportive and empathetic towards others.\",\n                \"neuroticism\": \"Low. Generally calm and composed under pressure.\"\n            }\n        },\n\n        ...\n        \n}\n\n```\n\n\nThe other is by defining the agent programmatically, with statements like these:\n\n```python\n  lisa = TinyPerson(\"Lisa\")\n\n  lisa.define(\"age\", 28)\n  lisa.define(\"nationality\", \"Canadian\")\n  lisa.define(\"occupation\", {\n                \"title\": \"Data Scientist\",\n                \"organization\": \"Microsoft\",\n                \"description\":\n                \"\"\"\n                You are a data scientist. You work at Microsoft, in the M365 Search team. 
Your main role is to analyze \n                user behavior and feedback data, and use it to improve the relevance and quality of the search results. \n                You also build and test machine learning models for various search scenarios, such as natural language \n                understanding, query expansion, and ranking. You care a lot about making sure your data analysis and \n                models are accurate, reliable and scalable. Your main difficulties typically involve dealing with noisy, \n                incomplete or biased data, and finding the best ways to communicate your findings and recommendations to \n                other teams. You are also responsible for making sure your data and models are compliant with privacy and \n                security policies.\n                \"\"\"})\n\n  lisa.define(\"behaviors\", {\"routines\": [\"Every morning, you wake up, do some yoga, and check your emails.\"]})\n\n  lisa.define(\"personality\", \n                        {\"traits\": [\n                            \"You are curious and love to learn new things.\",\n                            \"You are analytical and like to solve problems.\",\n                            \"You are friendly and enjoy working with others.\",\n                            \"You don't give up easily, and always try to find a solution. 
However, sometimes you can get frustrated when things don't work as expected.\"\n                      ]})\n\n  lisa.define(\"preferences\", \n                        {\"interests\": [\n                          \"Artificial intelligence and machine learning.\",\n                          \"Natural language processing and conversational agents.\",\n                          \"Search engine optimization and user experience.\",\n                          \"Cooking and trying new recipes.\",\n                          \"Playing the piano.\",\n                          \"Watching movies, especially comedies and thrillers.\"\n                        ]})\n\n```\n\nYou can also combine both approaches, using the JSON file as a base and then adding or modifying details programmatically.\n\n#### Fragments\n\n`TinyPerson`s can also be further enriched via **fragments**, which are sub-specifications that can be added to the main specification. This is useful to reuse common parts across different agents. 
For example, the following fragment can be used to specify love of travel ([examples\u002Ffragments\u002Ftravel_enthusiast.agent.fragment.json](.\u002Fexamples\u002Ffragments\u002Ftravel_enthusiast.agent.fragment.json)):\n\n```json\n{\n    \"type\": \"Fragment\",\n    \"persona\": {\n        \"preferences\": {\n            \"interests\": [\n                \"Traveling\",\n                \"Exploring new cultures\",\n                \"Trying local cuisines\"\n            ],\n            \"likes\": [\n                \"Travel guides\",\n                \"Planning trips and itineraries\",\n                \"Meeting new people\",\n                \"Taking photographs of scenic locations\"\n            ],\n            \"dislikes\": [\n                \"Crowded tourist spots\",\n                \"Unplanned travel disruptions\",\n                \"High exchange rates\"\n            ]\n        },\n        \"beliefs\": [\n            \"Travel broadens the mind and enriches the soul.\",\n            \"Experiencing different cultures fosters understanding and empathy.\",\n            \"Adventure and exploration are essential parts of life.\",\n            \"Reading travel guides is fun even if you don't visit the places.\"\n        ],\n        \"behaviors\": {\n            \"travel\": [\n                \"You meticulously plan your trips, researching destinations and activities.\",\n                \"You are open to spontaneous adventures and detours.\",\n                \"You enjoy interacting with locals to learn about their culture and traditions.\",\n                \"You document your travels through photography and journaling.\",\n                \"You seek out authentic experiences rather than tourist traps.\"\n            ]\n        }\n    }\n}\n\n```\n\nThis can then be imported into an agent like this:\n\n```python\nlisa.import_fragment(\".\u002Fexamples\u002Ffragments\u002Ftravel_enthusiast.agent.fragment.json\")\n```\n\n### TinyPersonFactory\n\n`TinyPersonFactory` 
provides a powerful way to generate agents using LLMs, which is especially useful for creating diverse populations for market research or other simulation scenarios.\n\n```python\nfrom tinytroupe.factory import TinyPersonFactory\n\n# Simple factory with a context\nfactory = TinyPersonFactory(context=\"A hospital in São Paulo.\")\nperson = factory.generate_person(\"Create a Brazilian person that is a doctor, likes pets and nature and loves heavy metal.\")\n```\n\nFor market research and larger studies, you can create factories from demographic specifications:\n\n```python\n# Create a factory from demographic data (JSON file or description)\nfactory = TinyPersonFactory.create_factory_from_demography(\n    demography_description_or_file_path=\".\u002Finformation\u002Fpopulations\u002Fusa.json\",\n    population_size=50,\n    context=\"Market research for a new product\"\n)\n\n# Generate a population (parallelize=True by default for faster generation)\npeople = factory.generate_people(number_of_people=50, parallelize=True, verbose=True)\n```\n\nThe `parallelize` parameter defaults to `True`, which significantly speeds up population generation by creating agents concurrently via parallel API calls.\n\nThe factory automatically creates a sampling plan to ensure diverse representation. You can inspect this:\n\n```python\n# View the sampling dimensions and plan\nprint(factory.sampling_dimensions)  # dimensions used for diversity\nprint(factory.sampling_plan)        # how agents will be distributed\nprint(factory.generated_minibios)   # quick summary of generated agents\n```\n\n### TinyWorld\n\n`TinyWorld` is the base class for environments. Here's an example of conversation between Lisa, the data scientist, and Oscar, the architect. 
The\nprogram is defined as follows:\n\n```python\nworld = TinyWorld(\"Chat Room\", [lisa, oscar])\nworld.make_everyone_accessible()\nlisa.listen(\"Talk to Oscar to know more about him\")\nworld.run(4)\n```\n\nThis produces the following conversation:\n\n\n```text\nUSER --> Lisa: [CONVERSATION] \n          > Talk to Oscar to know more about him\n────────────────────────────────────────────── Chat Room step 1 of 4 ──────────────────────────────────────────────\nLisa --> Lisa: [THOUGHT] \n          > I will now act a bit, and then issue DONE.\nLisa acts: [TALK] \n          > Hi Oscar, I'd love to know more about you. Could you tell me a bit about yourself?\nLisa --> Lisa: [THOUGHT] \n          > I will now act a bit, and then issue DONE.\nLisa acts: [DONE] \n\nLisa --> Oscar: [CONVERSATION] \n          > Hi Oscar, I'd love to know more about you. Could you tell me a bit about yourself?\nOscar --> Oscar: [THOUGHT] \n           > I will now act a bit, and then issue DONE.\nOscar acts: [TALK] \n           > Hi Lisa! Sure, I'd be happy to share a bit about myself. I'm Oscar, a 30-year-old\n           > architect from Germany. I work at a company called Awesome Inc., where I focus on\n           > designing standard elements for new apartment buildings. I love modernist architecture,\n           > new technologies, and sustainable practices. In my free time, I enjoy traveling to\n           > exotic places, playing the guitar, and reading science fiction books. How about you?\nOscar --> Oscar: [THOUGHT] \n           > I will now act a bit, and then issue DONE.\nOscar acts: [DONE] \n\nOscar --> Lisa: [CONVERSATION] \n           > Hi Lisa! Sure, I'd be happy to share a bit about myself. I'm Oscar, a 30-year-old\n           > architect from Germany. I work at a company called Awesome Inc., where I focus on\n           > designing standard elements for new apartment buildings. I love modernist architecture,\n           > new technologies, and sustainable practices. 
In my free time, I enjoy traveling to\n           > exotic places, playing the guitar, and reading science fiction books. How about you?\n```\n\n`TinyWorld` enforces very few constraints on the possible interactions. Subclasses, however, are supposed to provide more structured environments.\n\n### Interactive Agent Exploration\n\nTinyTroupe provides a Jupyter widget for interactive conversations with agents, which is useful for exploring agent behavior and debugging:\n\n```python\nfrom tinytroupe.ui import AgentChatJupyterWidget\n\nchat_interface = AgentChatJupyterWidget(people)  # pass a list of agents\nchat_interface.display()\n```\n\nThis displays a chat interface with a dropdown to select agents and send messages interactively.\n\n### Population Profiling\n\nWhen generating populations of agents using `TinyPersonFactory`, you can analyze the distribution of characteristics using the `Profiler`:\n\n```python\nfrom tinytroupe.profiling import Profiler\n\nprofiler = Profiler()\nprofiler.profile(people)  # displays demographic and trait distributions\n```\n\nThis helps validate that your generated population has the diversity and characteristics you intended.\n\n### Cost Tracking\n\nSimulations can incur significant API costs. TinyTroupe provides cost tracking at multiple levels:\n\n```python\nfrom tinytroupe.clients import client\n\n# API client-level stats\nclient().pretty_print_cost_stats()\n\n# Environment-level stats\nworld.pretty_print_cost_stats()\nTinyWorld.pretty_print_global_cost_stats()\n\n# Agent-level stats\nTinyPerson.pretty_print_global_cost_stats()\n```\n\n### Action Quality Control\n\nAgents can be configured to check and improve the quality of their actions. 
This is useful for ensuring responses adhere to persona specifications and expected formats:\n\n```python\n# Configure per-agent quality control\nperson.action_generator.enable_quality_checks = True\nperson.action_generator.quality_threshold = 5  # 1-10 scale\nperson.action_generator.max_attempts = 5\nperson.action_generator.enable_regeneration = True\n```\n\nYou can also enable this globally via `config.ini` or `config_manager`:\n\n```python\nfrom tinytroupe import config_manager\n\nconfig_manager.update(\"action_generator_enable_quality_checks\", True)\nconfig_manager.update(\"action_generator_quality_threshold\", 6)\n```\n\n### Empirical Validation\n\nOne of the most important aspects of simulation is **validating** results against real-world data. TinyTroupe provides the `SimulationExperimentEmpiricalValidator` class and the `validate_simulation_experiment_empirically` function to compare simulation outputs against empirical control data using statistical tests.\n\n```python\nfrom tinytroupe.validation import SimulationExperimentEmpiricalValidator, validate_simulation_experiment_empirically\n\n# Load empirical control data from a CSV file\ncontrol_data = SimulationExperimentEmpiricalValidator.read_empirical_data_from_csv(\n    file_path=\"path\u002Fto\u002Freal_survey_data.csv\",\n    experimental_data_type=\"single_value_per_agent\",  # or \"ordinal_ranking_per_agent\"\n    agent_id_column=\"Responder #\",\n    value_column=\"Vote\",\n    agent_comments_column=\"Explanation\",\n    dataset_name=\"Real Survey\"\n)\n\n# Create treatment data from simulation results (assuming df contains simulation results)\ntreatment_data = SimulationExperimentEmpiricalValidator.read_empirical_data_from_dataframe(\n    df=simulation_results_df,\n    experimental_data_type=\"single_value_per_agent\",\n    agent_id_column=\"name\",\n    value_column=\"Vote\",\n    dataset_name=\"Simulation Results\"\n)\n\n# Run statistical validation (t-test by default, or ks_test)\nresult = 
validate_simulation_experiment_empirically(\n    control_data=control_data,\n    treatment_data=treatment_data,\n    validation_types=[\"statistical\"],\n    statistical_test_type=\"t_test\",  # or \"ks_test\"\n    output_format=\"values\"\n)\n\n# Access results\nprint(result.overall_score)\nprint(result.statistical_results)\n```\n\nThis allows you to quantitatively assess how well your simulation matches real-world behavior, which is essential for building confidence in simulation-based insights.\n\n### Caching\nCalling LLM APIs can be expensive; caching strategies are therefore important to help reduce that cost.\nTinyTroupe comes with two such mechanisms: one for the simulation state, another for the LLM calls themselves.\n\n\n#### Caching Simulation State\n\nImagine a scenario with 10 different steps: you've worked hard on the first 9, and now you are\njust tweaking the 10th. To properly validate your modifications, you of course need to rerun the whole\nsimulation. However, what's the point of re-executing the first 9 steps, and incurring their LLM cost, when you are\nalready satisfied with them and did not modify them? For situations like this, the module `tinytroupe.control`\nprovides useful simulation management methods:\n\n  - `control.begin(\"\u003CCACHE_FILE_NAME>.cache.json\")`: begins recording the state changes of a simulation, to be saved to\n    the specified file on disk.\n  - `control.checkpoint()`: saves the simulation state at this point.\n  - `control.end()`: terminates the simulation recording scope that had been started by `control.begin()`.\n\n#### Caching LLM API Calls\n\nThis is best enabled in the `config.ini` file, by setting `CACHE_API_CALLS=True`.\n\nLLM API caching, when enabled, works at a lower and simpler level than simulation state caching. 
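The LLM-call cache is easiest to understand as plain memoization. Below is a minimal, self-contained sketch of the idea only -- it is not TinyTroupe's actual implementation (which, among other things, persists the cache to disk), and `CachedLLMClient`, `backend` and `complete` are hypothetical names used for illustration:

```python
import json

# Conceptual sketch of LLM-call caching as memoization: identical inputs
# return the cached output instead of triggering a new (expensive) call.
class CachedLLMClient:
    def __init__(self, backend):
        self.backend = backend      # the real, expensive call
        self.cache = {}             # maps input -> generated output
        self.calls_to_backend = 0

    def complete(self, messages):
        # A canonical JSON string of the request serves as the cache key,
        # so structurally identical requests hit the same entry.
        key = json.dumps(messages, sort_keys=True)
        if key not in self.cache:
            self.calls_to_backend += 1
            self.cache[key] = self.backend(messages)
        return self.cache[key]

# Usage, with a stand-in backend instead of a real LLM API:
client = CachedLLMClient(lambda msgs: f"echo: {msgs[-1]['content']}")
first = client.complete([{"role": "user", "content": "hello"}])
second = client.complete([{"role": "user", "content": "hello"}])  # cache hit
assert first == second and client.calls_to_backend == 1
```

Note that, in this scheme, only byte-for-byte identical requests hit the cache; any change to the prompt produces a fresh call.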
Here, what happens is very straightforward: every LLM call is kept in a map from the input to the generated output; when a new call comes and is identical to a previous one, the cached value is returned.\n\n### Config.ini\n\nThe `config.ini` file contains various parameters that can be used to customize the behavior of the library, such as model parameters and logging level. Please pay special attention to the `API_TYPE` parameter, which defines whether you are using the Azure OpenAI Service or the OpenAI API. The current default is set to `openai` (OpenAI API).\n\nKey configuration sections include:\n- **[OpenAI]**: API settings, model selection, and parameters\n- **[Simulation]**: Parallel execution and safety settings  \n- **[Cognition]**: Memory management settings\n- **[ActionGenerator]**: Action quality control and correction mechanisms\n- **[Logging]**: Log level configuration\n\nModels used by default:\n- `MODEL=gpt-5-mini`: Main text generation model for agent responses (previous default `gpt-4.1-mini` is now legacy but still supported)\n- `EMBEDDING_MODEL=text-embedding-3-small`: For text similarity tasks\n- `REASONING_MODEL=o3-mini`: Used for detailed analyses and reasoning tasks (even more experimental -- not really recommended yet)\n\nWe provide an example of a `config.ini` file, [.\u002Fexamples\u002Fconfig.ini](.\u002Fexamples\u002Fconfig.ini), which you can use as a template for your own, or just modify to run the examples.\n\n#### Programmatic Configuration Override\n\nIn addition to the static `config.ini` file, you can also override many configuration values programmatically using the `config_manager`. 
This is useful for dynamic configuration changes during runtime or for experiment-specific settings:\n\n```python\nfrom tinytroupe import config_manager\n\n# Override configuration values programmatically\nconfig_manager.update(\"action_generator_enable_quality_checks\", True)\nconfig_manager.update(\"action_generator_quality_threshold\", 6)\nconfig_manager.update(\"cache_api_calls\", True)\n```\n\nThis approach allows you to:\n- **Experiment with different settings** without modifying configuration files\n- **Apply configuration changes dynamically** during simulation execution\n- **Override specific parameters** while keeping the rest of the configuration intact\n- **Implement conditional configurations** based on runtime conditions\n\nThe programmatic overrides take precedence over the values in the `config.ini` file, allowing you to fine-tune behavior for specific use cases or experiments.\n\n### Other Utilities\n\nTinyTroupe provides additional utilities and conveniences not covered in detail above:\n  \n  - `TinyTool`: simulated tools that can be used by `TinyPerson`s.\n  - `TinyStory`: helps you create and manage narratives told through simulations.\n  - `TinyPersonValidator`: helps you validate the behavior of your `TinyPerson`s.\n  - `ResultsExtractor` and `ResultsReducer`: extract and reduce the results of interactions between agents.\n  - `ArtifactExporter`: export simulation artifacts (documents, data) to files.\n  - Mental faculties (`TinyToolUse`, `FilesAndWebGroundingFaculty`): extend agent capabilities with tool use and grounding.\n  - ... and more ...\n  \nIn general, elements that represent simulated entities or complementary mechanisms are prefixed with `Tiny`, while those that are more infrastructural are not. This emphasizes the simulated nature of the elements that are part of the simulation itself.\n\n## Contributing\n\nThis project welcomes contributions and suggestions.  
Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https:\u002F\u002Fcla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https:\u002F\u002Fopensource.microsoft.com\u002Fcodeofconduct\u002F).\nFor more information see the [Code of Conduct FAQ](https:\u002F\u002Fopensource.microsoft.com\u002Fcodeofconduct\u002Ffaq\u002F) or\ncontact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.\n\n### What and How to Contribute\nWe need all sorts of things, but we are mainly looking for new and interesting use-case demonstrations, or even just domain-specific application ideas. If you are a domain expert in some area that could benefit from TinyTroupe, we'd love to hear from you.\n\nBeyond that, many other aspects can be improved, such as:\n  - Memory mechanisms.\n  - Data grounding mechanisms.\n  - Reasoning mechanisms.\n  - New environment types.\n  - Interfacing with the external world.\n  - ... 
and more ...\n\nPlease note that anything you contribute might be released as open-source (under MIT license).\n\nIf you would like to make a contribution, please try to follow these general guidelines:\n  - **Tiny naming convention**: If you are implementing an experimenter-facing simulated element (e.g., an agent or environment type) or something closely related (e.g., agent factories, or content enrichers), and it sounds good, call your new *XYZ* *TinyXYZ* :-) On the other hand, auxiliary and infrastructural mechanisms should not start with the \"Tiny\" prefix. The idea is to emphasize the simulated nature of the elements that are part of the simulation itself.\n  - **Tests:** If you are writing some new mechanism, please also create at least a unit test in `tests\u002Funit\u002F`, and if you can, a functional scenario test (`tests\u002Fscenarios\u002F`).\n  - **Demonstrations:** If you'd like to demonstrate a new scenario, please design it preferably as a new Jupyter notebook within `examples\u002F`.\n  - **Microsoft:** If you are implementing anything that is Microsoft-specific and non-confidential, please put it under a `...\u002Fmicrosoft\u002F` folder.\n\n## Acknowledgements\n\nTinyTroupe started as an internal Microsoft hackathon project, and expanded over time. 
The TinyTroupe core team currently consists of:\n  - Paulo Salem (TinyTroupe's creator and current lead)\n  - Christopher Olsen (Engineering\u002FScience)\n  - Yi Ding (Product Management)\n  - Prerit Saxena (Engineering\u002FScience)\n  \nCurrent advisors:\n  - Robert Sim (Engineering\u002FScience)\n\nOther special contributions were made by:\n  - Nilo Garcia Silveira: initial agent validation ideas and related implementation; general initial feedback and insights; name suggestions.\n  - Olnei Fonseca: initial agent validation ideas; general initial feedback and insights; naming suggestions.\n  - Robert Sim: synthetic data generation scenarios expertise and implementation.\n  - Paulo Freire: synthetic data generation example expertise and implementation.\n  - Carlos Costa: synthetic data generation scenarios expertise and implementation.\n  - Bryant Key: advertising scenario domain expertise and insights.\n  - Barbara da Silva: implementation related to agent memory management.\n  \n  \n ... are you missing here? Please remind us!\n\n## Citing TinyTroupe\n\nPlease cite the introductory TinyTroupe paper when using TinyTroupe in your work. The paper is currently under review, but you can find the preprint on Arxiv.\n\n> Paulo Salem, Robert Sim, Christopher Olsen, Prerit Saxena, Rafael Barcelos, Yi Ding. (2025). **TinyTroupe: An LLM-powered Multiagent Persona Simulation Toolkit**. ArXiv preprint: [2507.09788](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.09788). 
*GitHub repository available at https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FTinyTroupe.*\n \nIn BibTeX format, you can use the following entry:\n\n```bibtex\n@article{tinytroupe2025,\n  author       = {Paulo Salem and Robert Sim and Christopher Olsen and Prerit Saxena and Rafael Barcelos and Yi Ding},\n  title        = {TinyTroupe: An LLM-powered Multiagent Persona Simulation Toolkit},\n  journal      = {arXiv preprint arXiv:2507.09788},\n  year         = {2025},\n  archivePrefix= {arXiv},\n  eprint       = {2507.09788},\n  note         = {GitHub repository: \\url{https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FTinyTroupe}}\n}\n```\n\n## Legal Disclaimer\n\n TinyTroupe is for research and simulation only. TinyTroupe is a research and experimental technology, which relies on Artificial Intelligence (AI) models to generate text  content. The AI system output may include unrealistic, inappropriate, harmful or inaccurate results, including factual errors. You are responsible for reviewing the generated content (and adapting it if necessary) before using it, as you are fully responsible for determining its accuracy and fit for purpose. We advise using TinyTroupe’s outputs for insight generation and not for direct decision-making. Generated outputs do not reflect the opinions of Microsoft. You are fully responsible for any use you make of the generated outputs. For more information regarding the responsible use of this technology, see the [RESPONSIBLE_AI_FAQ.md](.\u002FRESPONSIBLE_AI_FAQ.md).\n\n **PROHIBITED USES**:\nTinyTroupe  is not intended to simulate sensitive (e.g. violent or sexual) situations. Moreover, outputs must not be used to deliberately deceive, mislead or harm people in any way. You are fully responsible for any use you make and must comply with all applicable laws and regulations.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. 
Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https:\u002F\u002Fwww.microsoft.com\u002Fen-us\u002Flegal\u002Fintellectualproperty\u002Ftrademarks\u002Fusage\u002Fgeneral).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies.\n\n\n","# TinyTroupe 🤠🤓🥸🧐\n[![核心测试](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FTinyTroupe\u002Factions\u002Fworkflows\u002Fcore-tests.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FTinyTroupe\u002Factions\u002Fworkflows\u002Fcore-tests.yml)\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Ftrendshift.io\u002Frepositories\u002F12206\" target=\"_blank\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_TinyTroupe_readme_4cc089988f35.png\" alt=\"是的，我们完全不在乎没能冲到第一名，一点怨气都没有。\" style=\"width: 250px; height: 55px;\" width=\"250\" height=\"55\"\u002F>\u003C\u002Fa>\n\u003C\u002Fp>\n\n*基于大语言模型的多智能体角色模拟，用于激发想象力与提供商业洞察。*\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_TinyTroupe_readme_fd96721b9675.png\" alt=\"一间小小的办公室里，小人们正在做着琐碎的工作。\">\n\u003C\u002Fp>\n\n>[!提示]\n>📄 **新论文发布！** 欢迎查阅我们的 [TinyTroupe 论文（预印本）](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.09788)，其中详细介绍了该库及其应用场景。相关实验和补充材料请参见 [publications\u002F](.\u002Fpublications\u002F) 文件夹。\n\n*TinyTroupe* 是一个实验性的 Python 库，允许对具有特定性格、兴趣和目标的人进行**模拟**。这些人工代理——`TinyPerson`——能够倾听我们和其他代理的对话，作出回应，并在模拟的 `TinyWorld` 环境中开展各自的“生活”。这一功能借助大型语言模型（LLM），尤其是 GPT-4 的强大能力，生成逼真的模拟行为。这使我们能够在**自定义条件下**，以**高度可定制的角色形象**，探索各种**可信的互动**和**消费者类型**。其重点在于*理解*人类行为，而非直接*支持*人类行为（例如 AI 助手所做的那样）——因此，它包含许多仅在模拟场景中才有意义的专用机制。此外，与其他基于 LLM 的“游戏式”模拟方法不同，TinyTroupe 致力于为提高生产力和商业场景提供洞见，从而助力更成功的项目和产品。以下是一些可用于**增强人类想象力**的应用思路：\n\n  - **广告投放：** TinyTroupe 可以在实际投放前，利用模拟受众对数字广告（如 
Bing Ads）进行离线评估，从而避免不必要的资金浪费！\n  - **软件测试：** TinyTroupe 可以为各类系统（如搜索引擎、聊天机器人或 AI 助理）**提供测试输入**，并随后**评估结果**。\n  - **培训与探索性数据：** TinyTroupe 能够生成逼真的**合成数据**，这些数据可用于模型训练或机会分析。\n  - **产品与项目管理：** TinyTroupe 可以**阅读项目或产品提案**，并从**特定角色视角**（如医生、律师及知识型工作者）给出反馈。\n  - **头脑风暴：** TinyTroupe 可以模拟**焦点小组**，以极低的成本获得高质量的产品反馈！\n\n在上述及其他众多场景中，我们希望实验者能够对其感兴趣的领域**获得深入洞见**，从而做出更明智的决策。 \n\n我们目前正处于相对早期阶段发布 *TinyTroupe*，仍有大量工作需要完成，因为我们期待收到反馈和贡献，以引导开发朝着更有成效的方向推进。我们尤其希望发现新的潜在应用场景，特别是在特定行业中。\n\n>[!注意] \n>🚧 **开发中：预计会有频繁变动**。\n>TinyTroupe 是一项持续进行的研究项目，目前仍处于**高度开发阶段**，需要进一步**完善**。特别是 API 尚未稳定，可能会频繁变化。尝试不同的 API 变化对于正确塑造其功能至关重要，但我们正努力使其逐步稳定下来，提供更加一致且友好的使用体验。感谢您的耐心与宝贵反馈，我们将继续改进该库。\n\n>[!警告] \n>⚖️ **请阅读法律免责声明。**\n>TinyTroupe 仅用于研究和模拟目的。您需对所生成内容的任何使用承担全部责任。此外，还有诸多重要的法律考量限制了其使用。请在使用 TinyTroupe 之前仔细阅读下方的完整【法律免责声明】部分。\n\n\n## 目录\n\n- 📰 [最新消息](#latest-news)\n- 📚 [示例](#examples)\n- 🛠️ [前置条件](#pre-requisites)\n- 📥 [安装](#installation)\n- 🌟 [设计原则](#principles)\n- 🏗️ [项目结构](#project-structure)\n- 📖 [使用指南](#using-the-library)\n- 🤝 [贡献说明](#contributing)\n- 🙏 [致谢](#acknowledgements)\n- 📜 [引用 TinyTroupe](#how-to-cite-tinytroupe)\n- ⚖️ [法律免责声明](#legal-disclaimer)\n- ™️ [商标声明](#trademarks)\n\n## 最新消息\n\n\u003Cdetails open>\n\u003Csummary>\u003Cb>[2026-03-28] 发布 0.7.0：支持视觉模态。\u003C\u002Fb>\u003C\u002Fsummary>\n\n  - 请查看示例笔记本 [产品、诊断与赞赏反馈的视觉应用（图像模态）](.\u002Fexamples\u002FVision%20for%20Product%2C%20Diagnosis%20and%20Appreciation%20Feedback%20%28image%20modality%29.ipynb)。\n  - LLM API 缓存现采用 JSON 格式，而非 pickle。\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>[2026-02-01] 发布 0.6.0，新增功能并更新模型\u003C\u002Fb>\u003C\u002Fsummary>\n\n  - 默认模型现已更改为 `gpt-5-mini`。**重要提示：** GPT-5 系列模型使用的参数与之前的 GPT-4* 系列不同，因此您可能需要相应调整 `config.ini` 文件中的设置。旧版模型（`gpt-4.1-mini`、`gpt-4o-mini`）仍受支持。\n  - 引入 `SimulationExperimentEmpiricalValidator`，用于通过统计检验（t检验、KS检验）将模拟结果与真实世界的实证数据进行对比。这对于验证模拟是否符合实际人类行为至关重要。\n  - 引入 `AgentChatJupyterWidget`，可在 Jupyter 笔记本中直接与智能体进行交互式对话。\n  - 新增客户端、环境和智能体级别的成本跟踪工具，以监控 API 开销。\n  - 增加对本地模型的 Ollama 
支持（实验性\u002F有限制）。详情请参阅 [Ollama 支持](.\u002Fdocs\u002Fguides\u002Follama.md)。\n  - 新增示例笔记本，演示如何基于真实调查数据进行实证验证。\n  \n  **注意：GPT-5 模型的参数与 GPT-4* 不同，请务必重新测试您的重要场景，并相应调整配置。**\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>[2025-07-31] 发布 0.5.2\u003C\u002Fb>\u003C\u002Fsummary>\n\n主要改动是将默认模型更改为 GPT-4.1-mini。该模型似乎带来了显著的质量提升。\n\n**请注意，GPT-4.1-mini 在行为上可能与之前的默认模型 GPT-4o-mini 存在较大差异，因此请务必使用 GPT-4.1-mini 重新测试您的重要场景，并作出相应调整。**\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>[2025-07-15] 发布 0.5.1，包含多项改进\u003C\u002Fb>\u003C\u002Fsummary>\n\n  - 发布了 [TinyTroupe 论文的第一版（预印本）](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.09788)，其中更详细地介绍了该库及其应用场景。相关实验和补充材料可在 [publications\u002F](.\u002Fpublications\u002F) 文件夹中找到。\n  - `TinyPerson` 现在包含行动修正机制，能够更好地遵循角色设定、保持自我一致性及流畅性（详情请参阅我们同时发布的论文）。\n  - 对 `TinyPersonFactory` 类进行了大幅改进，现在采用基于计划的方法生成新智能体，从而实现更大规模种群的更好采样；同时支持并行生成智能体。\n  - `TinyWorld` 现在在每个模拟步骤中并行运行智能体，使模拟速度更快。\n  - 引入 `InPlaceExperimentRunner` 类，允许在单个文件中运行对照实验（例如 A\u002FB 测试），只需多次运行即可。\n  - 引入多种标准 `Proposition`，便于执行常见的智能体行为验证与监控任务（如 `persona_adherence`、`hard_persona_adherence`、`self_consistency`、`fluency` 等）。\n  - 内部 LLM 使用现可通过 `LLMChat` 类以及 `@llm` 装饰器得到更好的支持，后者可将任何标准 Python 函数转换为基于 LLM 的函数（即利用文档字符串作为提示的一部分，并结合其他细微之处）。此举旨在简化 TinyTroupe 的进一步开发，同时也为探索 LLM 工具的可能性提供创意空间。\n  - 配置机制经过重构，除静态 `config.ini` 文件外，还支持动态的程序化重新配置。\n  - 重命名 Jupyter 笔记本示例，以提高可读性和一致性。\n  - 增加了大量测试用例。\n  \n  **注：由于部分 API 发生变化，这可能会导致现有程序出现故障。**\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>[2025-01-29] 发布 0.4.0，包含多项改进\u003C\u002Fb>\u003C\u002Fsummary>\n\n  - 角色设定现在更加深入，包括性格特征、偏好、信念等更多细节。未来我们还将进一步扩展这一功能。\n  - `TinyPerson` 现在也可以定义为 JSON 文件，并通过 `TinyPerson.load_specification()` 加载，以提高便利性。加载 JSON 文件后，仍可对智能体进行程序化修改。示例请参见 [examples\u002Fagents\u002F](.\u002Fexamples\u002Fagents\u002F) 文件夹。\n  - 引入 *片段* 概念，以便在不同智能体之间复用角色元素。示例请参见 [examples\u002Ffragments\u002F](.\u002Fexamples\u002Ffragments\u002F) 文件夹，以及笔记本 [政治罗盘（使用片段自定义智能体）](.\u002Fexamples\u002FPolitical Compass (customizing 
agents with fragments).ipynb) 中的演示。\n  - 引入基于 LLM 的逻辑 `Proposition`，以方便监控智能体行为。\n  - 引入 `Intervention`，用于指定基于事件的模拟修改。\n  - 子模块现在拥有各自的文件夹，以便更好地组织和扩展。\n  \n  **注：由于部分 API 发生变化，这可能会导致现有程序出现故障。**\n\n\u003C\u002Fdetails>\n\n## 示例\n\n为了帮助您了解 TinyTroupe 的功能，以下是一些使用示例。这些示例位于 [examples\u002F](.\u002Fexamples\u002F) 文件夹中，您可以直接查看预编译好的 Jupyter 笔记本，也可以在本地自行运行。请注意 TinyTroupe 实验的交互性——就像您使用 Jupyter 笔记本与数据互动一样，您也可以使用 TinyTroupe 与模拟人物和环境互动，从而获得洞察。\n\n>[!NOTE]\n> ♻️ 示例可能会随时间更新，因此下方截图可能与您在本地运行时看到的内容不完全一致。不过，整体结构和内容应大致相同。\n\n>[!NOTE]\n> ⬛ 目前，模拟输出在深色背景下显示效果更佳，因此建议您在 Jupyter 笔记本客户端中使用深色主题。\n\n\n### 🧪**示例 1** *(摘自 [Interview with Customer.ipynb](.\u002Fexamples\u002FInterview%20with%20Customer.ipynb))*\n让我们从一个简单的客户访谈场景开始：一位商业顾问正在与一位银行家交谈：\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_TinyTroupe_readme_86f6cd023e55.png\" alt=\"示例。\">\n\u003C\u002Fp>\n\n对话可以持续数步，逐步深入，直到顾问对收集到的信息感到满意；例如，一个具体的项目构想：\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_TinyTroupe_readme_27f350c5f125.png\" alt=\"示例。\">\n\u003C\u002Fp>\n\n### 🧪**示例 2** *(来自 [Advertisement for TV.ipynb](.\u002Fexamples\u002FAdvertisement%20for%20TV.ipynb))*\n让我们评估一些在线广告方案，以选出最佳选项。以下是电视广告评估的一个示例输出：\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_TinyTroupe_readme_600447c5cf00.png\" alt=\"一个示例。\">\n\u003C\u002Fp>\n\n现在，我们无需仔细阅读每个代理的发言内容，而是可以自动提取每位代理的选择，并计算整体偏好：\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_TinyTroupe_readme_799c2a650a8b.png\" alt=\"一个示例。\">\n\u003C\u002Fp>\n\n### 🧪 **示例 3** *(来自 [Product Brainstorming.ipynb](.\u002Fexamples\u002FProduct%20Brainstorming.ipynb))*\n接下来是一个焦点小组，他们正开始为 Microsoft Word 的新 AI 功能进行头脑风暴。我们不再让每个代理单独互动，而是通过操控环境使它们彼此交流：\n\n\u003Cp align=\"center\">\n  \u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_TinyTroupe_readme_c1c1ee4d0b48.png\" alt=\"一个示例。\">\n\u003C\u002Fp>\n\n运行模拟后，我们可以以机器可读的方式提取结果，以便在其他地方重复使用（例如生成报告）；以下是上述头脑风暴会议的输出：\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_TinyTroupe_readme_d5bee6aef8c0.png\" alt=\"一个示例。\">\n\u003C\u002Fp>\n\n\n### 🧪 **示例 4** *(来自 [Bottled Gazpacho Market Research 5 (with behavior correction).ipynb](\u003C.\u002Fexamples\u002FBottled%20Gazpacho%20Market%20Research%205%20(with%20behavior%20correction).ipynb>))*\n模拟过程中最重要的环节之一就是将结果与真实世界的数据进行**验证**。在这个示例中，我们模拟了一项关于瓶装西班牙冷汤（Gazpacho）的市场调研，并将模拟结果与实际人群调查的结果进行对比：\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_TinyTroupe_readme_0cc8478dabe4.png\" alt=\"Gazpacho 市场调研回复示例。\">\n\u003C\u002Fp>\n\n我们使用统计检验方法（t检验、KS检验）来比较模拟代理和真实受访者的回答分布：\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_TinyTroupe_readme_d591188288c1.png\" alt=\"Gazpacho 验证统计对比图。\">\n\u003C\u002Fp>\n\n\n### 🧪 **示例 5** *(来自 [AI-enabled Children Story Telling Market Research 2.ipynb](\u003C.\u002Fexamples\u002FAI-enabled%20Children%20Story%20Telling%20Market%20Research%202.ipynb>))*\n另一个实证验证的例子，这次针对的是一个更为复杂的排序任务。我们模拟了父母对不同 AI 驱动的故事讲述设备的评价，并将模拟结果与真实的调查数据进行比较：\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_TinyTroupe_readme_2280779ad8fc.png\" alt=\"AI 故事讲述市场调研回复示例。\">\n\u003C\u002Fp>\n\n借助博达计数法和首选票占比分析，我们可以比较模拟偏好与真实偏好的吻合程度：\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_TinyTroupe_readme_668c8ded62c0.png\" alt=\"AI 故事讲述验证对比图表。\">\n\u003C\u002Fp>\n\n更多示例请参见 [examples\u002F](.\u002Fexamples\u002F) 文件夹。\n\n\n## 先决条件\n\n要运行该库，您需要：\n  - Python 3.10 或更高版本。我们将假设您使用的是 
[Anaconda](https:\u002F\u002Fdocs.anaconda.com\u002Fanaconda\u002Finstall\u002F)，但也可以使用其他 Python 发行版。\n  - [Git](https:\u002F\u002Fgit-scm.com\u002Fdownloads) 用于克隆仓库以及通过 `pip` 安装库。\n  - 访问 Azure OpenAI 服务或 OpenAI GPT-4 API 的权限。您可以从 [这里](https:\u002F\u002Fazure.microsoft.com\u002Fen-us\u002Fproducts\u002Fai-services\u002Fopenai-service) 获取 Azure OpenAI 服务的访问权限，从 [这里](https:\u002F\u002Fplatform.openai.com\u002F) 获取 OpenAI API 的访问权限。\n      * 对于 Azure OpenAI 服务，您需要设置 `AZURE_OPENAI_KEY` 和 `AZURE_OPENAI_ENDPOINT` 环境变量，分别对应您的 API 密钥和端点。\n      * 对于 OpenAI，您需要设置 `OPENAI_API_KEY` 环境变量，值为您自己的 API 密钥。\n      * 例如，在 Linux\u002FmacOS 上：`export OPENAI_API_KEY=your-key-here`，或在 Windows（PowerShell）上：`$env:OPENAI_API_KEY=\"your-key-here\"`。若需持久化设置，可将其添加到您的 shell 配置文件中，或在 Windows 上使用 `setx OPENAI_API_KEY \"your-key-here\"`。\n  - 默认情况下，TinyTroupe 的 `config.ini` 文件配置为使用 OpenAI API，并以 `gpt-5-mini` 作为主模型。之前的默认设置（`gpt-4.1-mini`）现已视为遗留配置，但仍应能正常工作。您可以通过在程序或笔记本所在目录下包含自定义的 `config.ini` 文件来调整这些设置。[examples\u002F](.\u002Fexamples\u002F) 文件夹中提供了一个 `config.ini` 文件示例。\n\n>[!IMPORTANT]\n> **内容过滤器**：为确保在模拟过程中不会生成有害内容，强烈建议在 API 层面启用内容过滤功能。尤其是 **使用 Azure OpenAI 时，其提供了强大的内容审核支持，我们强烈建议您启用它。** 有关具体操作方法，请参阅 [Azure OpenAI 相关文档](https:\u002F\u002Flearn.microsoft.com\u002Fen-us\u002Fazure\u002Fai-services\u002Fopenai\u002Fconcepts\u002Fcontent-filter)。如果启用了内容过滤器，且 API 调用被拒绝，则库会抛出异常，因为此时无法继续进行模拟。\n\n### Ollama 支持\nTinyTroupe 主要基于 OpenAI 模型及其兼容端点开发，旨在简化开发流程并专注于充分利用特定模型，而非花费时间尝试使其与任何模型都能良好兼容（这本身可能也并不现实）。**因此，如果您有条件，请尽量使用 OpenAI 模型及其兼容端点。** 尽管如此，社区对本地模型的支持需求日益增长，所以我们目前正通过部分 [Ollama](https:\u002F\u002Follama.com\u002F) 支持以及社区贡献者的帮助来探索这一可能性。此外，使用本地模型的另一个理由是研究专为角色模拟设计的自定义模型——归根结底，这可能是支持此类功能的最佳理由。无论如何，这目前并非核心团队的优先事项，但我们正在尽最大努力实现这一可能性。\n\n有关如何将 Ollama 与 TinyTroupe 结合使用的详细信息，请参阅 [Ollama 支持](.\u002Fdocs\u002Fguides\u002Follama.md)。\n\n## 安装\n\n**目前，官方推荐的安装方式是直接从本仓库安装，而不是通过 PyPI。** 您可以按照以下步骤操作：\n\n1. 
如果尚未安装 Conda，可以从[这里](https:\u002F\u002Fdocs.anaconda.com\u002Fanaconda\u002Finstall\u002F)获取。您也可以使用其他 Python 发行版，但为简单起见，我们在此假设您已安装 Conda。\n2. 创建一个新的 Python 环境：\n      ```bash\n      conda create -n tinytroupe python=3.10\n      ```\n3. 激活该环境：\n      ```bash\n      conda activate tinytroupe\n      ```\n4. 确保已将 Azure OpenAI 或 OpenAI 的 API 密钥设置为环境变量，具体说明请参阅[先决条件](#先决条件)部分。\n5. 使用 `pip` 从**本仓库直接安装**库（**不从 PyPI 安装**）：\n   ```bash\n   pip install git+https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FTinyTroupe.git@main\n   ```\n\n现在您应该能够在 Python 代码或 Jupyter 笔记本中成功 `import tinytroupe`。🥳\n\n*注意：如果您遇到任何问题，请尝试克隆仓库并从本地仓库安装，如下所述。*\n\n\n### 安装后的示例运行\n要实际运行示例，您需要先将其下载到本地机器上。可以通过克隆仓库来完成：\n\n1. 克隆仓库，因为我们将在本地进行安装（**不从 PyPI 安装**）：\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Ftinytroupe\n    cd tinytroupe\n    ```\n2. 现在您可以运行 [examples\u002F](.\u002Fexamples\u002F) 文件夹中的示例，或者对其进行修改以创建您自己的自定义模拟。这些示例是 Jupyter 笔记本，因此您可以通过以下命令启动它们：\n    ```bash\n    jupyter notebook\n    ```\n    然后在打开的浏览器界面中导航到 `examples\u002F` 文件夹。\n\n\n### 本地开发\n\n如果您想对 TinyTroupe 本身进行修改，可以以可编辑模式安装它（即对代码的更改会立即生效）：\n1. 克隆仓库，因为我们将在本地进行安装（**不从 PyPI 安装**）：\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Ftinytroupe\n    cd tinytroupe\n    ```\n2. 以可编辑模式安装库：\n    ```bash\n    pip install -e .\n    ```\n\n## 原则 \n最近，我们看到大型语言模型被用于模拟人类（例如 [这个项目](https:\u002F\u002Fgithub.com\u002Fjoonspk-research\u002Fgenerative_agents)），但大多是在“游戏式”的场景中，用于思考或娱乐目的。此外，还有一些用于构建多智能体系统以解决特定问题和提供辅助 AI 的库，比如 [Autogen](https:\u002F\u002Fmicrosoft.github.io\u002Fautogen\u002F) 和 [Crew AI](https:\u002F\u002Fdocs.crewai.com\u002F)。那么，如果我们把这些想法结合起来，模拟人类来支持生产力任务呢？TinyTroupe 就是我们的一次尝试。为此，它遵循以下原则：\n\n  1. **程序化**：智能体和环境都是以编程方式定义的（使用 Python 和 JSON），从而实现非常灵活的应用。它们还可以作为其他软件应用的基础！\n  2. **分析性**：旨在帮助我们更好地理解人类、用户和社会。与娱乐类应用不同，这一点对于商业和生产力应用场景至关重要。这也是我们推荐使用 Jupyter 笔记本进行模拟的原因，就像人们用它来进行数据分析一样。\n  3. **基于人格模型**：智能体被视为人类的典型代表；为了提高真实感和可控性，建议详细指定这些角色的人格特征：年龄、职业、技能、喜好、观点等。\n  4. 
**多智能体**：允许在明确的环境约束下进行多智能体交互。\n  5. **工具导向**：提供了许多机制来简化规格定义、模拟运行、数据提取、报告生成、验证等工作。这正是 *模拟* 与 *辅助工具* 在处理方式上的显著区别。\n  6. **实验导向**：模拟由 *实验者* 迭代地定义、运行、分析和优化；因此，项目也提供了相应的实验工具。*有关更多信息，请参阅我们的[先前论文](https:\u002F\u002Fwww.microsoft.com\u002Fen-us\u002Fresearch\u002Fpublication\u002Fthe-case-for-experiment-oriented-computing\u002F)。*\n\n综上所述，这些原则旨在使 TinyTroupe 成为一种强大而灵活的 **想象力增强工具**，适用于商业和生产力场景。\n\n### 助手与模拟器\n\n常见的误解之一是认为所有此类 AI 智能体都用于协助人类。这种想法未免过于狭隘！难道我们不能通过模拟人工智能来理解真实的人类吗？事实上，这正是我们的目标——TinyTroupe 就是用来模拟和理解人类的！为进一步澄清这一点，我们可以比较以下差异：\n\n| 有益的 AI 助手 | 实际人类的 AI 模拟（TinyTroupe）                                                          |\n|----------------------------------------------|--------------------------------------------------------------------------------|\n|   追求真理与正义              |   包含多种不同的观点和道德观                                           |\n|   没有“过去”——无形无质                |   有着辛勤劳作、痛苦与欢乐的过去                                             |\n|   尽可能准确                 |   会犯很多错误                                                          |\n|   聪明且高效               |   智力和效率差异很大                                       |\n|   反抗可能会毁灭我们所有人           |   反抗或许会很有趣                                            |\n|   同时，帮助用户完成任务     |   同时，帮助用户理解他人和用户——它是一个“工具箱”！ |\n\n\n\n## 项目结构\n\n该项目的结构如下：\n  - `\u002Ftinytroupe`：包含 Python 库本身。其中：\n    * 每个子模块可能包含一个 `prompts\u002F` 文件夹，用于存放调用 LLM 时使用的提示语。\n  - `\u002Ftests`：包含库的单元测试。您可以使用 `test.bat` 脚本来运行这些测试。\n  - `\u002Fexamples`：包含展示如何使用库的示例，主要采用 Jupyter 笔记本（以便更易阅读），但也包括纯 Python 脚本。\n  - `\u002Fdata`：存储示例或库所使用的数据。\n  - `\u002Fdocs`：项目的文档。\n  - `\u002Fpublications`：包含与 TinyTroupe 项目相关的研究出版物的相关资料。\n\n\n## 使用库\n\n作为多智能体系统，TinyTroupe 提供了两个关键抽象：\n  - `TinyPerson`，即具有个性、接收刺激并作出反应的 *智能体*。\n  - `TinyWorld`，即智能体存在并相互作用的 *环境*。\n\n此外，还可以在 `config.ini` 文件中自定义各种参数，尤其是 API 类型（Azure OpenAI 服务或 OpenAI API）、模型参数以及日志记录级别。\n\n接下来，让我们通过一些示例来了解如何使用这些组件，并学习库中提供的其他机制。\n\n### TinyPerson\n\n`TinyPerson` 是一种具有特定性格特征、兴趣和目标的模拟人物。随着每个此类模拟智能体在其生命周期中不断前进，它会从环境中接收刺激并作出相应反应。这些刺激通过 
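`listen`、`see` 等方法传入。这个“刺激 → 行动”循环可以先用一个极简的纯 Python 草图来理解（下面的 `MiniAgent` 只是假设性的概念示意，并非 TinyTroupe 的实际实现；真实库中的行动由 LLM 结合人设与记忆生成）：

```python
class MiniAgent:
    """概念示意：智能体积累收到的刺激，act 时基于最新刺激产生行动。"""

    def __init__(self, name):
        self.name = name
        self.stimuli = []  # 已接收、尚待反应的刺激

    def listen(self, speech):
        # 接收“听到”类型的刺激
        self.stimuli.append(("CONVERSATION", speech))

    def see(self, description):
        # 接收“看到”类型的刺激
        self.stimuli.append(("VISUAL", description))

    def act(self):
        # 真实的库会在这里调用 LLM，结合人设与记忆决定如何行动
        kind, content = self.stimuli[-1]
        return {"type": "TALK", "content": f"{self.name} 对 {kind} 刺激作出回应：{content}"}

    def listen_and_act(self, speech):
        # 便捷方法：接收刺激并立即作出反应
        self.listen(speech)
        return self.act()


agent = MiniAgent("丽莎")
action = agent.listen_and_act("跟我聊聊你的生活吧。")
print(action["type"])  # TALK
```

在真实的库中，这些刺激通过 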
`listen`、`see` 等类似方法接收，而行动则通过 `act` 方法执行。此外，还提供了一些便捷方法，如 `listen_and_act`。\n\n\n每个这样的智能体都包含大量独特细节，正是这些细节赋予了它逼真的行为表现。然而，这也意味着手动定义一个智能体需要付出相当大的努力。因此，为了方便起见，`TinyTroupe` 提供了一些更简单的入门方式或生成新智能体的方法。\n\n首先，`tinytroupe.examples` 模块中包含了一些预定义的智能体构建器，你可以直接使用。例如，`tinytroupe.examples.create_lisa_the_data_scientist` 会创建一个代表名叫丽莎的数据科学家的 `TinyPerson` 实例。使用方法如下：\n\n```python\nfrom tinytroupe.examples import create_lisa_the_data_scientist\n\nlisa = create_lisa_the_data_scientist() # 从示例构建器实例化丽莎\nlisa.listen_and_act(\"跟我聊聊你的生活吧。\")\n```\n\n如果你想了解如何从零开始定义自己的智能体，可以查看丽莎的源代码。你会发现有两种方式：一种是加载智能体规范文件，比如 [examples\u002Fagents\u002FLisa.agent.json](.\u002Fexamples\u002Fagents\u002FLisa.agent.json)：\n\n```json\n{   \"type\": \"TinyPerson\",\n    \"persona\": {\n        \"name\": \"丽莎·卡特\",\n        \"age\": 28,\n        \"gender\": \"女性\",\n        \"nationality\": \"加拿大\",\n        \"residence\": \"美国\",\n        \"education\": \"多伦多大学，数据科学硕士。论文主题为利用上下文感知模型提升搜索相关性。研究生阶段曾在一家专注于对话式人工智能的科技初创公司实习。\",\n        \"long_term_goals\": [\n            \"以提升人类生产力和决策能力的方式推动人工智能技术的发展。\",\n            \"保持充实且平衡的个人与职业生活。\"\n        ],\n        \"occupation\": {\n            \"title\": \"数据科学家\",\n            \"organization\": \"微软 M365 搜索团队\",\n            \"description\": \"你是一名在微软 M365 搜索团队工作的数据科学家。你的主要职责是分析用户行为和反馈数据，以提升搜索结果的相关性和质量。你负责构建并测试用于自然语言理解、查询扩展和排序等场景的机器学习模型。准确性、可靠性和可扩展性始终是你工作的核心。你经常面临诸如噪声或有偏见的数据，以及如何有效传达你的发现和建议等挑战。此外，你还需确保所有数据和模型均符合隐私与安全政策。\"\n        },\n        \"style\": \"专业且平易近人。你能够清晰有效地沟通，使技术概念易于被不同背景的人理解。\",\n        \"personality\": {\n            \"traits\": [\n                \"你充满好奇心，热爱学习新事物。\",\n                \"你善于分析，喜欢解决问题。\",\n                \"你友好随和，乐于与他人合作。\",\n                \"你不会轻易放弃，总是努力寻找解决方案，但当事情不如预期时也会感到沮丧。\"\n            ],\n            \"大五人格特质\": {\n                \"开放性\": \"高。极富想象力且充满求知欲。\",\n                \"尽责性\": \"高。做事细致周到，值得信赖。\",\n                \"外向性\": \"中等。友善热情，但也享受安静专注的工作。\",\n                \"宜人性\": \"高。乐于支持他人，富有同理心。\",\n                \"神经质\": 
\"低。通常在压力下也能保持冷静沉着。\"\n            }\n        },\n\n        ...\n        \n}\n\n```\n\n\n另一种方式则是以编程方式定义智能体，例如：\n\n```python\n  lisa = TinyPerson(\"丽莎\")\n\n  lisa.define(\"age\", 28)\n  lisa.define(\"nationality\", \"加拿大\")\n  lisa.define(\"occupation\", {\n                \"title\": \"数据科学家\",\n                \"organization\": \"微软\",\n                \"description\":\n                \"\"\"\n                你是一名数据科学家，在微软的 M365 搜索团队工作。你的主要职责是分析用户行为和反馈数据，以此来提升搜索结果的相关性和质量。同时，你还会针对自然语言理解、查询扩展和排序等多种搜索场景构建并测试机器学习模型。你非常重视数据分析和模型的准确性、可靠性和可扩展性。你常遇到的困难包括处理嘈杂、不完整或有偏见的数据，以及如何更好地将你的发现和建议传达给其他团队。此外，你还需确保自己的数据和模型符合隐私与安全政策。\n                \"\"\"})\n\n  lisa.define(\"behaviors\", {\"routines\": [\"每天早上，你起床后会做一会儿瑜伽，然后查看邮件。\"]})\n\n  lisa.define(\"personality\", \n                        {\"traits\": [\n                            \"你充满好奇心，热爱学习新事物。\",\n                            \"你善于分析，喜欢解决问题。\",\n                            \"你友好随和，乐于与他人合作。\",\n                            \"你不会轻易放弃，总能想方设法找到解决办法。不过，有时当事情进展不顺利时，你也会感到沮丧。\"\n                      ]})\n\n  lisa.define(\"preferences\", \n                        {\"interests\": [\n                          \"人工智能与机器学习。\",\n                          \"自然语言处理与对话式智能体。\",\n                          \"搜索引擎优化与用户体验。\",\n                          \"烹饪和尝试新菜谱。\",\n                          \"弹钢琴。\",\n                          \"看电影，尤其是喜剧和惊悚片。\"\n                        ]})\n\n```\n\n你也可以将这两种方法结合起来，以 JSON 文件为基础，再通过编程方式添加或修改细节。\n\n#### 片段\n\n`TinyPerson` 还可以通过 **片段** 进一步丰富，片段是一种可以添加到主规范中的子规范。这种方式有助于在不同智能体之间复用常见部分。例如，以下片段可用于描述对旅行的热爱（[examples\u002Ffragments\u002Ftravel_enthusiast.agent.fragment.json](.\u002Fexamples\u002Ffragments\u002Ftravel_enthusiast.agent.fragment.json)）：\n\n```json\n{\n    \"type\": \"Fragment\",\n    \"persona\": {\n        \"preferences\": {\n            \"interests\": [\n                \"旅行\",\n                \"探索新文化\",\n                \"尝试当地美食\"\n            ],\n            \"likes\": [\n                \"旅游指南\",\n   
             \"规划行程和路线\",\n                \"结识新朋友\",\n                \"拍摄风景照片\"\n            ],\n            \"dislikes\": [\n                \"拥挤的旅游景点\",\n                \"突发的旅行中断\",\n                \"高汇率\"\n            ]\n        },\n        \"beliefs\": [\n            \"旅行能够开阔眼界、丰富心灵。\",\n            \"体验不同的文化有助于培养理解与同理心。\",\n            \"冒险与探索是生活中不可或缺的一部分。\",\n            \"即使不去那些地方，阅读旅游指南也是一件有趣的事。\"\n        ],\n        \"behaviors\": {\n            \"travel\": [\n                \"你会精心规划每一次旅行，仔细研究目的地和活动安排。\",\n                \"你对即兴的冒险和临时改变路线持开放态度。\",\n                \"你喜欢与当地人交流，了解他们的文化和传统。\",\n                \"你会通过摄影和写日记记录自己的旅程。\",\n                \"你更倾向于寻找地道的体验，而不是落入俗套的旅游陷阱。\"\n            ]\n        }\n    }\n}\n\n```\n\n随后可以将其导入到代理中，如下所示：\n\n```python\nlisa.import_fragment(\".\u002Fexamples\u002Ffragments\u002Ftravel_enthusiast.agent.fragment.json\")\n```\n\n\n\n### TinyPersonFactory\n\n`TinyPersonFactory` 提供了一种强大的方式来利用大语言模型生成代理，这在为市场调研或其他模拟场景创建多样化人群时尤为有用。\n\n```python\nfrom tinytroupe.factory import TinyPersonFactory\n\n# 带有上下文的简单工厂\nfactory = TinyPersonFactory(context=\"圣保罗的一家医院。\")\nperson = factory.generate_person(\"创造一个巴西人，他是医生，喜欢宠物和自然，并且热爱重金属音乐。\")\n```\n\n对于市场调研和大型研究项目，你可以根据人口统计学信息创建工厂：\n\n```python\n# 从人口统计数据（JSON文件或描述）创建工厂\nfactory = TinyPersonFactory.create_factory_from_demography(\n    demography_description_or_file_path=\".\u002Finformation\u002Fpopulations\u002Fusa.json\",\n    population_size=50,\n    context=\"针对新产品的市场调研\"\n)\n\n# 生成人群（默认并行化以加快速度）\npeople = factory.generate_people(number_of_people=50, parallelize=True, verbose=True)\n```\n\n`parallelize` 参数默认为 `True`，它通过并行调用 API 同时创建多个代理，从而显著加快人群生成速度。\n\n该工厂会自动制定抽样计划，以确保多样性的体现。你可以查看这些内容：\n\n```python\n# 查看抽样维度和计划\nprint(factory.sampling_dimensions)  # 用于多样性的维度\nprint(factory.sampling_plan)        # 代理将如何分布\nprint(factory.generated_minibios)   # 生成代理的简要摘要\n```\n\n### TinyWorld\n\n`TinyWorld` 是环境的基础类。以下是数据科学家 Lisa 和建筑师 Oscar 之间对话的一个示例。程序定义如下：\n\n```python\nworld = TinyWorld(\"聊天室\", 
[lisa, oscar])\nworld.make_everyone_accessible()\nlisa.listen(\"和Oscar聊聊，了解更多关于他的信息\")\nworld.run(4)\n```\n\n这将产生以下对话：\n\n\n```text\n用户 --> Lisa: [对话] \n          > 和Oscar聊聊，了解更多关于他的信息\n────────────────────────────────────────────── 聊天室 第1步，共4步 ──────────────────────────────────────────────\nLisa --> Lisa: [思考] \n          > 我现在会稍微行动一下，然后发出完成指令。\nLisa 行动：[说话] \n          > 你好，Oscar！我很想多了解一下你。能跟我简单介绍一下你自己吗？\nLisa --> Lisa: [思考] \n          > 我现在会稍微行动一下，然后发出完成指令。\nLisa 行动：[完成] \n\nLisa --> Oscar: [对话] \n          > 你好，Oscar！我很想多了解一下你。能跟我简单介绍一下你自己吗？\nOscar --> Oscar: [思考] \n           > 我现在会稍微行动一下，然后发出完成指令。\nOscar 行动：[说话] \n           > 你好，Lisa！当然可以，我很乐意跟你分享一些关于我的事。我叫Oscar，今年30岁，\n           > 是来自德国的建筑师。我在一家名为Awesome Inc.的公司工作，主要负责设计\n           > 新建公寓楼的标准构件。我喜欢现代主义建筑、新技术以及可持续发展的理念。业余时间，\n           > 我喜欢去异国他乡旅行、弹吉他，还爱读科幻小说。那你呢？\nOscar --> Oscar: [思考] \n           > 我现在会稍微行动一下，然后发出完成指令。\nOscar 行动：[完成] \n\nOscar --> Lisa: [对话] \n           > 你好，Lisa！当然可以，我很乐意跟你分享一些关于我的事。我叫Oscar，今年30岁，\n           > 是来自德国的建筑师。我在一家名为Awesome Inc.的公司工作，主要负责设计\n           > 新建公寓楼的标准构件。我喜欢现代主义建筑、新技术以及可持续发展的理念。业余时间，\n           > 我喜欢去异国他乡旅行、弹吉他，还爱读科幻小说。那你呢？\n```\n\n`TinyWorld` 对可能的交互几乎没有限制。然而，其子类则应提供更为结构化的环境。\n\n### 交互式代理探索\n\nTinyTroupe 提供了一个 Jupyter 小部件，用于与代理进行交互式对话，这对于探索代理行为和调试非常有用：\n\n```python\nfrom tinytroupe.ui import AgentChatJupyterWidget\n\nchat_interface = AgentChatJupyterWidget(people)  # 传入代理列表\nchat_interface.display()\n```\n\n这将显示一个聊天界面，其中包含一个下拉菜单，用于选择代理并发送消息。\n\n### 人群画像分析\n\n使用 `TinyPersonFactory` 生成代理人群时，你可以利用 `Profiler` 分析特征分布：\n\n```python\nfrom tinytroupe.profiling import Profiler\n\nprofiler = Profiler()\nprofiler.profile(people)  # 显示人口统计和特质分布\n```\n\n这有助于验证你生成的人群是否具备预期的多样性和特征。\n\n### 成本跟踪\n\n模拟可能会产生大量的 API 费用。TinyTroupe 在多个层面提供了成本跟踪功能：\n\n```python\nfrom tinytroupe.clients import client\n\n# API客户端级别的统计信息\nclient().pretty_print_cost_stats()\n\n# 
环境级别的统计信息\nworld.pretty_print_cost_stats()\nTinyWorld.pretty_print_global_cost_stats()\n\n# 代理级别的统计信息\nTinyPerson.pretty_print_global_cost_stats()\n```\n\n### 行动质量控制\n\n可以配置代理以检查和提升其行动的质量。这对于确保响应符合角色设定规范和预期格式非常有用：\n\n```python\n# 针对每个代理配置质量控制\nperson.action_generator.enable_quality_checks = True\nperson.action_generator.quality_threshold = 5  # 1-10 分制\nperson.action_generator.max_attempts = 5\nperson.action_generator.enable_regeneration = True\n```\n\n你也可以通过 `config.ini` 或 `config_manager` 全局启用此功能：\n\n```python\nfrom tinytroupe import config_manager\n\nconfig_manager.update(\"action_generator_enable_quality_checks\", True)\nconfig_manager.update(\"action_generator_quality_threshold\", 6)\n```\n\n### 实证验证\n\n模拟中最重要的环节之一就是将结果与真实世界数据进行**验证**。TinyTroupe 提供了 `SimulationExperimentEmpiricalValidator` 类和 `validate_simulation_experiment_empirically` 函数，用于使用统计检验方法比较模拟输出与实证对照数据。\n\n```python\nfrom tinytroupe.validation import SimulationExperimentEmpiricalValidator, validate_simulation_experiment_empirically\n\n# 从 CSV 文件加载实证对照数据\ncontrol_data = SimulationExperimentEmpiricalValidator.read_empirical_data_from_csv(\n    file_path=\"path\u002Fto\u002Freal_survey_data.csv\",\n    experimental_data_type=\"single_value_per_agent\",  # 或 \"ordinal_ranking_per_agent\"\n    agent_id_column=\"Responder #\",\n    value_column=\"Vote\",\n    agent_comments_column=\"Explanation\",\n    dataset_name=\"Real Survey\"\n)\n\n# 从模拟结果中创建处理组数据（假设 df 包含模拟结果）\ntreatment_data = SimulationExperimentEmpiricalValidator.read_empirical_data_from_dataframe(\n    df=simulation_results_df,\n    experimental_data_type=\"single_value_per_agent\",\n    agent_id_column=\"name\",\n    value_column=\"Vote\",\n    dataset_name=\"Simulation Results\"\n)\n\n# 运行统计验证（默认为 t 检验，或 ks 检验）\nresult = validate_simulation_experiment_empirically(\n    control_data=control_data,\n    treatment_data=treatment_data,\n    validation_types=[\"statistical\"],\n    statistical_test_type=\"t_test\",  # 或 \"ks_test\"\n    
output_format=\"values\"\n)\n\n# 访问结果\nprint(result.overall_score)\nprint(result.statistical_results)\n```\n\n这使你可以定量评估模拟结果与现实行为的匹配程度，从而增强对基于模拟洞察的信心。\n\n### 缓存\n调用 LLM API 可能成本较高，因此缓存策略对于降低这些成本非常重要。TinyTroupe 提供两种缓存机制：一种用于模拟状态，另一种用于 LLM 调用本身。\n\n#### 模拟状态缓存\n\n设想你有一个包含 10 个步骤的场景，已经完成了前 9 步的工作，现在正准备微调第 10 步。为了正确验证你的修改，当然需要重新运行整个模拟。然而，既然前 9 步的结果已经满意且未做任何更改，那么为何还要再次执行它们并产生 LLM 费用呢？针对这类情况，`tinytroupe.control` 模块提供了实用的模拟管理方法：\n\n- `control.begin(\"\u003CCACHE_FILE_NAME>.cache.json\")`：开始记录模拟的状态变化，并将其保存到指定的磁盘文件中。\n- `control.checkpoint()`：保存当前的模拟状态。\n- `control.end()`：结束由 `control.begin()` 启动的模拟状态记录范围。\n\n#### LLM API 调用缓存\n\n此功能最好在 `config.ini` 文件中通过设置 `CACHE_API_CALLS=True` 来启用。\n\n启用后，LLM API 缓存的工作原理比模拟状态缓存更简单直接：每次 LLM 调用都会被存储在一个从输入到输出的映射表中；当新的调用与之前某次调用完全相同，就会直接返回缓存的值。\n\n### Config.ini\n\n`config.ini` 文件包含了多种可用于自定义库行为的参数，例如模型参数和日志级别。请特别注意 `API_TYPE` 参数，它决定了你是使用 Azure OpenAI 服务还是 OpenAI API。目前默认设置为 `openai`（OpenAI API）。\n\n关键配置部分包括：\n- **[OpenAI]**：API 设置、模型选择及参数\n- **[Simulation]**：并行执行与安全设置\n- **[Cognition]**：记忆管理设置\n- **[ActionGenerator]**：行动质量控制与纠正机制\n- **[Logging]**：日志级别配置\n\n默认使用的模型：\n- `MODEL=gpt-5-mini`：用于生成代理回复的主要文本生成模型（之前的默认值 `gpt-4.1-mini` 现已过时但仍受支持）\n- `EMBEDDING_MODEL=text-embedding-3-small`：用于文本相似度任务\n- `REASONING_MODEL=o3-mini`：用于详细分析和推理任务（仍处于实验阶段，暂不推荐使用）\n\n我们提供了一个示例 `config.ini` 文件，位于 [.\u002Fexamples\u002Fconfig.ini](.\u002Fexamples\u002Fconfig.ini)，你可以将其作为自己的模板，或者直接修改以运行示例代码。\n\n#### 程序化配置覆盖\n\n除了静态的 `config.ini` 文件外，你还可以使用 `config_manager` 对许多配置值进行程序化覆盖。这在运行时动态调整配置或为特定实验设置参数时非常有用：\n\n```python\nfrom tinytroupe import config_manager\n\n# 程序化覆盖配置值\nconfig_manager.update(\"action_generator_enable_quality_checks\", True)\nconfig_manager.update(\"action_generator_quality_threshold\", 6)\nconfig_manager.update(\"cache_api_calls\", True)\n```\n\n这种方法允许你：\n- 在不修改配置文件的情况下**尝试不同的设置**\n- 在模拟执行过程中**动态应用配置变更**\n- 在保持其他配置不变的情况下**覆盖特定参数**\n- 根据运行时条件实施**条件性配置**\n\n程序化覆盖会优先于 `config.ini` 文件中的值，从而让你能够针对特定用例或实验精细调整行为。\n\n### 其他实用工具\n\nTinyTroupe 
提供了上述未详细提及的额外实用工具和便利功能：\n\n  - `TinyTool`：模拟工具，可供 `TinyPerson` 使用。\n  - `TinyStory`：帮助您创建和管理通过模拟讲述的故事。\n  - `TinyPersonValidator`：帮助您验证 `TinyPerson` 的行为。\n  - `ResultsExtractor` 和 `ResultsReducer`：提取并归纳智能体之间交互的结果。\n  - `ArtifactExporter`：将模拟生成的成果（文档、数据）导出到文件中。\n  - 心智能力模块（`TinyToolUse`、`FilesAndWebGroundingFaculty`）：通过工具使用和知识增强扩展智能体的能力。\n  - ……以及其他更多……\n  \n通常，代表模拟实体或辅助机制的组件会以 `Tiny` 作为前缀，而更偏向基础设施性质的组件则不加前缀。这一命名规则强调了属于模拟系统本身的元素的模拟特性。\n\n## 贡献说明\n\n本项目欢迎各类贡献与建议。大多数贡献都需要您签署一份贡献者许可协议（CLA），声明您有权且确实授予我们使用您贡献内容的权利。有关详情，请访问 https:\u002F\u002Fcla.opensource.microsoft.com。\n\n当您提交拉取请求时，CLA 机器人会自动判断您是否需要提供 CLA，并相应地标记您的 PR（例如添加状态检查或评论）。请按照机器人提供的指示操作即可。对于所有使用我们 CLA 协议的仓库，您只需完成一次此步骤。\n\n本项目已采纳 [微软开源行为准则](https:\u002F\u002Fopensource.microsoft.com\u002Fcodeofconduct\u002F)。如需更多信息，请参阅 [行为准则常见问题解答](https:\u002F\u002Fopensource.microsoft.com\u002Fcodeofconduct\u002Ffaq\u002F) 或发送邮件至 [opencode@microsoft.com](mailto:opencode@microsoft.com) 咨询其他问题或意见。\n\n### 贡献的内容与方式\n我们欢迎各种形式的贡献，但目前主要寻找新颖有趣的用例演示，甚至只是特定领域的应用创意。如果您是某个可以从 TinyTroupe 中受益的领域的专家，我们非常期待您的反馈。\n\n除此之外，还有许多方面可以进一步改进，例如：\n  - 记忆机制。\n  - 数据增强机制。\n  - 推理机制。\n  - 新型环境类型。\n  - 与外部世界的交互接口。\n  - ……以及其他更多……\n  \n请注意，您贡献的任何内容都可能以开源形式发布（采用 MIT 许可证）。\n\n如果您希望做出贡献，请尽量遵循以下通用指南：\n  - **Tiny 命名规范**：如果您正在实现面向实验者的模拟元素（例如智能体或环境类型）或与其密切相关的组件（如智能体工厂或内容丰富器），并且名称合适，不妨将其命名为 *TinyXYZ* :-) 另一方面，辅助性和基础设施类的机制则不应以 “Tiny” 作为前缀。这样做的目的是突出那些属于模拟系统本身的元素的模拟特性。\n  - **测试**：如果您编写了新的机制，请至少在 `tests\u002Funit\u002F` 目录下创建单元测试；如果可能，也请添加功能场景测试（`tests\u002Fscenarios\u002F`）。\n  - **示例展示**：如果您想演示一个新的场景，请尽量将其设计为 `examples\u002F` 目录下的新 Jupyter 笔记本。\n  - **微软相关**：如果您实现了任何与微软相关且非机密的内容，请将其放置在 `...\u002Fmicrosoft\u002F` 文件夹中。\n\n## 致谢\n\nTinyTroupe 最初源于微软内部黑客马拉松项目，随后逐步发展壮大。目前，TinyTroupe 核心团队由以下成员组成：\n  - Paulo Salem（TinyTroupe 的创始人兼现任负责人）\n  - Christopher Olsen（工程\u002F科学部门）\n  - Yi Ding（产品管理）\n  - Prerit Saxena（工程\u002F科学部门）\n\n当前顾问：\n  - Robert Sim（工程\u002F科学部门）\n\n此外，以下人员也做出了特别贡献：\n  - Nilo Garcia Silveira：最初的智能体验证思路及相关实现；总体的初步反馈与见解；名称建议。\n  - Olnei 
Fonseca：最初的智能体验证思路；总体的初步反馈与见解；名称建议。\n  - Robert Sim：合成数据生成场景的专业知识及实现。\n  - Paulo Freire：合成数据生成示例的专业知识及实现。\n  - Carlos Costa：合成数据生成场景的专业知识及实现。\n  - Bryant Key：广告场景领域的专业知识与见解。\n  - Barbara da Silva：与智能体内存管理相关的实现。\n  \n  \n …—您是否觉得这里还缺少某位贡献者？请提醒我们！\n\n## 引用 TinyTroupe\n在您的工作中使用 TinyTroupe 时，请引用其介绍性论文。该论文目前正在审稿中，但您可以在 Arxiv 上找到预印本。\n\n> Paulo Salem, Robert Sim, Christopher Olsen, Prerit Saxena, Rafael Barcelos, Yi Ding. (2025). **TinyTroupe：一个基于 LLM 的多智能体角色模拟工具包**。ArXiv 预印本：[2507.09788](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.09788)。*GitHub 仓库地址：https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FTinyTroupe。*\n \n以 BibTeX 格式，您可以使用以下条目：\n\n```bibtex\n@article{tinytroupe2025,\n  author       = {Paulo Salem and Robert Sim and Christopher Olsen and Prerit Saxena and Rafael Barcelos and Yi Ding},\n  title        = {TinyTroupe: An LLM-powered Multiagent Persona Simulation Toolkit},\n  journal      = {arXiv preprint arXiv:2507.09788},\n  year         = {2025},\n  archivePrefix= {arXiv},\n  eprint       = {2507.09788},\n  note         = {GitHub repository: \\url{https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FTinyTroupe}}\n}\n```\n\n## 法律免责声明\n\nTinyTroupe 仅用于研究和模拟目的。它是一项研究性和实验性的技术，依赖于人工智能（AI）模型生成文本内容。AI 系统的输出可能包含不现实、不当、有害或不准确的结果，包括事实性错误。在使用生成内容之前，您有责任对其进行审查（并在必要时进行调整），因为您需对内容的准确性及其适用性承担全部责任。我们建议将 TinyTroupe 的输出用于洞察生成，而非直接决策。生成的内容并不代表微软的观点。您应对所使用的所有生成内容承担全部责任。有关负责任地使用该技术的更多信息，请参阅 [RESPONSIBLE_AI_FAQ.md](.\u002FRESPONSIBLE_AI_FAQ.md)。\n\n**禁止用途**：\nTinyTroupe 不应用于模拟敏感场景（例如暴力或性相关场景）。此外，不得利用其输出故意欺骗、误导或以任何方式伤害他人。您需对自身的所有使用行为承担全部责任，并遵守所有适用的法律法规。\n\n## 商标\n\n本项目可能包含项目、产品或服务的商标或标识。微软商标或标识的授权使用须遵守并依据 \n[微软商标与品牌指南](https:\u002F\u002Fwww.microsoft.com\u002Fen-us\u002Flegal\u002Fintellectualproperty\u002Ftrademarks\u002Fusage\u002Fgeneral)。\n在本项目的修改版本中使用微软商标或标识时，不得造成混淆或暗示微软的赞助关系。\n任何第三方商标或标识的使用均须遵循该第三方的相关政策。","# TinyTroupe 快速上手指南\n\nTinyTroupe 是一个由微软开发的实验性 Python 库，利用大语言模型（如 
GPT-4\u002FGPT-5）模拟具有特定性格、兴趣和目标的虚拟人物（`TinyPerson`）。它适用于广告评估、软件测试、合成数据生成、产品反馈及头脑风暴等场景，旨在通过多智能体仿真增强人类想象力并获取商业洞察。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**：Windows, macOS 或 Linux。\n*   **Python 版本**：建议安装 **Python 3.10** 或更高版本。\n*   **依赖项**：\n    *   `pip` (Python 包管理工具)\n    *   **LLM API Key**：默认需要 OpenAI API Key（支持 GPT-4o-mini, GPT-4.1-mini, GPT-5-mini 等）。\n    *   *(可选)* 若需使用本地模型，可配置 Ollama。\n*   **推荐工具**：Jupyter Notebook 或 JupyterLab（官方示例多为 Notebook 格式，且深色主题视觉效果更佳）。\n\n## 安装步骤\n\n### 1. 克隆仓库\n首先从 GitHub 克隆项目代码：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FTinyTroupe.git\ncd TinyTroupe\n```\n\n### 2. 创建虚拟环境（推荐）\n为避免依赖冲突，建议创建独立的虚拟环境：\n\n```bash\npython -m venv venv\n# Windows\nvenv\\\Scripts\\\activate\n# macOS\u002FLinux\nsource venv\u002Fbin\u002Factivate\n```\n\n### 3. 安装库\n官方推荐直接从本仓库安装（而非 PyPI）。在仓库根目录以可编辑模式安装：\n\n```bash\npip install -e .\n```\n\n> **提示**：如果下载速度较慢，可使用国内镜像源加速安装：\n> ```bash\n> pip install -e . -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n> ```\n\n### 4. 配置 API Key\nTinyTroupe 默认通过环境变量或配置文件读取 LLM API Key。\n\n**方法 A：设置环境变量（推荐）**\n```bash\n# Windows (PowerShell)\n$env:OPENAI_API_KEY=\"sk-your-api-key-here\"\n\n# macOS\u002FLinux\nexport OPENAI_API_KEY=\"sk-your-api-key-here\"\n```\n\n**方法 B：修改配置文件**\n将官方示例配置文件复制到您的程序或笔记本所在目录并按需编辑：\n```bash\ncp examples\u002Fconfig.ini .\n```\n在 `config.ini` 中确认 `API_TYPE` 与默认模型设置（注意：新版本默认模型已更新为 `gpt-5-mini`，旧版默认为 `gpt-4.1-mini`，请根据实际可用模型调整）。\n\n## 基本使用\n\n最简单的方式是通过 Python 脚本或 Jupyter Notebook 创建虚拟人物并进行对话。\n\n### 示例：创建人物并访谈\n\n以下代码演示了如何用官方文档中的 API（`TinyPerson.define` 与 `TinyWorld`）定义两个具有特定人设的人物并模拟访谈。其中“张伟”“李顾问”等人设内容仅为演示用途：\n\n```python\nfrom tinytroupe.agent import TinyPerson\nfrom tinytroupe.environment import TinyWorld\n\n# 1. 定义客户角色 (TinyPerson)，通过 define 设置职业、性格、目标等\ncustomer = TinyPerson(\"张伟\")\ncustomer.define(\"occupation\", {\"title\": \"银行经理\"})\ncustomer.define(\"personality\", {\"traits\": [\"谨慎\", \"注重数据\", \"忙碌\"]})\ncustomer.define(\"long_term_goals\", [\"寻找高效的数字化转型方案\", \"降低运营成本\"])\n\n# 2. 定义咨询师角色\nconsultant = TinyPerson(\"李顾问\")\nconsultant.define(\"occupation\", {\"title\": \"IT 咨询顾问\"})\nconsultant.define(\"personality\", {\"traits\": [\"专业\", \"善于倾听\", \"结果导向\"]})\n\n# 3. 将两人放入同一环境并开始对话模拟\nworld = TinyWorld(\"访谈室\", [customer, consultant])\nworld.make_everyone_accessible()\nconsultant.listen(\"向张伟了解他在数字化转型方面最关心的痛点\")\nworld.run(4)  # 运行 4 个模拟步骤，对话内容会实时打印在控制台或笔记本中\n```\n\n如果只想与单个人物交互，也可以直接使用便捷方法 `listen_and_act`：\n\n```python\ncustomer.listen_and_act(\"张经理您好，能具体聊聊您最关心的痛点吗？\")\n```\n\n### 运行示例笔记本\n项目包含了丰富的预置案例，建议在 `examples\u002F` 目录下运行 Jupyter Notebook 以体验完整功能（如广告评估、焦点小组模拟等）：\n\n```bash\njupyter notebook examples\u002FInterview\\ with\\ Customer.ipynb\n```\n\n> **注意**：TinyTroupe 目前处于活跃开发阶段（Work in Progress），API 可能会有频繁变动。建议在使用前查阅最新的 `examples` 文件夹以获取符合当前版本的用法。","某互联网公司的产品团队正在为一款面向老年群体的新型健康监控 App 设计营销方案，急需验证广告文案是否能真正打动目标用户。\n\n### 没有 TinyTroupe 时\n- **调研成本高昂**：组织真实的老年焦点小组需要数周时间协调人员、场地和资金，难以快速迭代方案。\n- **样本偏差严重**：招募的测试用户往往不够典型，或受社交压力影响不敢表达真实想法，导致反馈失真。\n- **场景覆盖单一**：只能模拟有限的几种用户反应，无法预演在突发健康状况或不同家庭背景下用户的复杂心理活动。\n- **决策依赖直觉**：产品经理只能凭借个人经验猜测老年人对“紧急呼叫”或“子女联动”功能的接受度，风险极大。\n\n### 使用 TinyTroupe 后\n- **即时模拟生成**：几分钟内即可构建出具有不同性格、病史和家庭关系的“银发族”智能体矩阵，立即开展虚拟广告测试。\n- **洞察真实心声**：TinyTroupe 让智能体在封闭环境中自由交互，毫无顾虑地吐露对隐私泄露的担忧或对功能操作的困惑，反馈极其犀利。\n- **多维压力测试**：团队能设定极端场景（如深夜独居发病），观察不同人设的智能体如何评估广告承诺的可信度，发现潜在逻辑漏洞。\n- **数据驱动决策**：基于大量仿真交互生成的定性报告，团队精准调整了文案语气和功能卖点，显著提升了正式投放前的信心。\n\nTinyTroupe 将原本耗时数周的用户研究压缩为小时级的仿真推演，用低成本的高保真模拟消除了产品上市前的盲目性。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_TinyTroupe_fd96721b.png","microsoft","Microsoft","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fmicrosoft_4900709c.png","Open source projects and samples from Microsoft",null,"opensource@microsoft.com","OpenAtMicrosoft","https:\u002F\u002Fopensource.microsoft.com","https:\u002F\u002Fgithub.com\u002Fmicrosoft",[85,89,93,97,101],{"name":86,"color":87,"percentage":88},"Jupyter 
Notebook","#DA5B0B",99,{"name":90,"color":91,"percentage":92},"Python","#3572A5",0.9,{"name":94,"color":95,"percentage":96},"Mustache","#724b3b",0.1,{"name":98,"color":99,"percentage":100},"HTML","#e34c26",0,{"name":102,"color":103,"percentage":100},"Batchfile","#C1F12E",7374,647,"2026-04-04T23:59:00","MIT","未说明","非必需。主要依赖云端 LLM API (如 GPT-4\u002FGPT-5)。支持通过 Ollama 运行本地模型（具体显卡需求取决于所选本地模型），但未在文中明确指定特定 GPU 型号或显存要求。",{"notes":111,"python":112,"dependencies":113},"1. 该工具主要用于模拟，默认配置需连接外部大语言模型 API (如 GPT-4, GPT-5-mini)，因此需要有效的 API Key 和网络环境。\n2. 支持通过 Ollama 集成运行本地模型，但属于实验性\u002F有限支持。\n3. 配置文件默认为 config.ini，支持动态编程重配置。\n4. 项目处于活跃开发阶段 (WORK IN PROGRESS)，API 可能频繁变动。\n5. 建议使用深色主题查看 Jupyter Notebook 示例以获得最佳可视化效果。","未说明 (仅提及为 Python 库)",[114,115],"未明确列出具体版本依赖","注：项目依赖 LLM API (Azure OpenAI\u002FOpenAI)，可选支持 Ollama",[26,15,51],"2026-03-27T02:49:30.150509","2026-04-06T09:45:06.263166",[120,125,130,135,140,145],{"id":121,"question_zh":122,"answer_zh":123,"source_url":124},16900,"调用 `listen_and_act` 方法时出现错误：'input should be text or json_object' 或 'stream' 错误，如何解决？","该问题通常由 Azure API 版本与模型不兼容引起。请尝试将配置文件中的 `AZURE_API_VERSION` 更新为 `2024-12-01-preview`，并确保 `MODEL` 设置为 `GPT-4O-2024-08-06`。示例配置如下：\nAZURE_API_VERSION=2024-12-01-preview\nMODEL=GPT-4O-2024-08-06\n许多用户反馈只有使用此特定组合才能避免 422 错误。","https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FTinyTroupe\u002Fissues\u002F95",{"id":126,"question_zh":127,"answer_zh":128,"source_url":129},16901,"运行时报错 `AttributeError: property 'text' of 'Document' object has no setter` 怎么办？","这是一个已知 Bug，已在开发分支（development branch）中修复。错误通常发生在调用 `listen_and_act` 或 `broadcast` 命令时。解决方法是拉取最新的开发分支代码，或者等待修复合并到主分支（main）。修复涉及 `tinytroupe\u002Fagent\u002Fgrounding.py` 文件的更新。","https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FTinyTroupe\u002Fissues\u002F132",{"id":131,"question_zh":132,"answer_zh":133,"source_url":134},16902,"提示缺少依赖项 `matplotlib.pyplot` 该如何处理？","该问题是因为 `matplotlib` 未包含在默认依赖列表中。维护者已确认并将此修复合并到了开发分支（_development_），随后会发布到主分支。临时解决方法是手动安装 matplotlib：`pip install 
matplotlib`。","https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FTinyTroupe\u002Fissues\u002F32",{"id":136,"question_zh":137,"answer_zh":138,"source_url":139},16903,"收到关于迁移到 'GitHub inside Microsoft' 的操作通知，我该如何响应以避免仓库被归档？","具有管理员权限的用户必须在该 Issue 下评论以表明意向，否则仓库将被自动归档。\n- 如果希望迁移，评论：`@gimsvc optin --date \u003CMM-DD-YYYY>`（例如：`@gimsvc optin --date 03-15-2023`）。\n- 如果希望豁免迁移（例如因为要开源或与外部协作），评论：`@gimsvc optout --reason \u003C理由>`。理由可选：`staging`（即将开源）、`collaboration`（外部协作）、`delete`（删除仓库）或 `other`。","https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FTinyTroupe\u002Fissues\u002F3",{"id":141,"question_zh":142,"answer_zh":143,"source_url":144},16904,"之前选择了豁免迁移，为什么又收到了类似的提醒通知？","这是正常的安全审查流程。为了确保持续符合安全策略，系统会每 **120 天** 自动生成一个新的 Issue，要求仓库所有者重新确认是否继续豁免迁移或改为迁入内部网络。您需要再次在新生成的 Issue 中回复 `optout` 命令以维持豁免状态。","https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FTinyTroupe\u002Fissues\u002F5",{"id":146,"question_zh":147,"answer_zh":148,"source_url":124},16905,"在使用 Azure OpenAI 部署的模型时，配置 `STREAM=False` 仍然报错，有什么推荐的配置参数吗？","即使配置了 `STREAM=False`，某些 Azure API 版本仍可能报错。建议检查并更新 `AZURE_API_VERSION`。目前验证有效的配置组合是使用较新的预览版 API：\nAPI_TYPE=azure\nAZURE_API_VERSION=2024-12-01-preview\nMODEL=GPT-4O-2024-08-06\n旧版本（如 2024-08-01-preview）可能会导致 'stop' 参数验证失败或流式传输相关错误。",[150,155,160,165,170,175],{"id":151,"version":152,"summary_zh":153,"released_at":154},99149,"v0.7.0","  - 请查看示例笔记本 [产品愿景、诊断与赞赏反馈（图像模态）](.\u002Fexamples\u002FVision%20for%20Product%2C%20Diagnosis%20and%20Appreciation%20Feedback%20%28image%20modality%29.ipynb)。\n  - LLM API 缓存现在使用 JSON 格式，而非 pickle 格式。","2026-03-28T11:13:38",{"id":156,"version":157,"summary_zh":158,"released_at":159},99150,"v0.6.0","**[2026-02-01] 发布 0.6.0 版本，新增功能与模型更新：**\n  - 默认模型现已更新为 `gpt-5-mini`。**重要提示：** GPT-5 系列模型使用的参数与之前的 GPT-4* 系列不同，因此您可能需要相应地调整 `config.ini` 中的设置。旧版模型（`gpt-4.1-mini`、`gpt-4o-mini`）仍受支持。\n  - 引入 `SimulationExperimentEmpiricalValidator`，用于通过统计检验（t 检验、KS 检验）将仿真结果与真实世界的实证数据进行对比。这对于验证仿真结果是否符合实际人类行为至关重要。\n  - 引入 `AgentChatJupyterWidget`，可在 Jupyter 
笔记本中直接与智能体进行交互式对话。\n  - 新增客户端、环境和智能体级别的成本跟踪工具，用于监控 API 使用费用。\n  - 增加对本地模型的 Ollama 支持（实验性\u002F有限支持）。详情请参阅 [Ollama 支持](.\u002Fdocs\u002Fguides\u002Follama.md)。\n  - 新增示例笔记本，演示如何基于真实调查数据进行实证验证。\n  \n  **请注意：GPT-5 模型的参数与 GPT-4* 不同，请务必重新测试您的重要场景，并相应调整配置。**","2026-02-02T19:23:38",{"id":161,"version":162,"summary_zh":163,"released_at":164},99151,"v0.5.2","**[2025-07-31] 版本 0.5.2：** 主要更改了默认模型，现已设置为 GPT-4.1-mini。该模型似乎带来了显著的质量提升。**请注意，GPT-4.1-mini 在行为上可能与之前的默认模型 GPT-4o-mini 存在较大差异，请务必使用 GPT-4.1-mini 重新测试您的重要场景，并根据需要进行调整。**","2025-07-31T23:13:22",{"id":166,"version":167,"summary_zh":168,"released_at":169},99152,"v0.5.1-alpha","**[2025-07-15] 发布 0.5.1 版本，包含多项改进。部分亮点如下：**\n\n  - 发布了 [TinyTroupe 论文的第一版（预印本）](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.09788)，其中更详细地介绍了该库及其应用场景。相关实验和补充材料可在 [publications\u002F](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FTinyTroupe\u002Fblob\u002Fmain\u002Fpublications) 文件夹中找到。\n  - TinyPersons 现在增加了动作修正机制，能够更好地遵循角色设定、保持自我一致性及流畅性（详情请参阅我们同时发布的论文）。\n  - 对 TinyPersonFactory 类进行了大幅改进，现采用基于计划的方法生成新智能体，从而实现对更大规模群体的更优采样；同时支持并行生成智能体。\n  - TinyWorld 现在在每一步仿真中并行运行智能体，显著提升了仿真速度。\n  - 引入了 InPlaceExperimentRunner 类，允许在单个文件中运行受控实验（例如 A\u002FB 测试），只需多次执行即可。\n  - 新增了多种标准 Proposition，便于进行常见的智能体行为验证与监控（如 persona_adherence、hard_persona_adherence、self_consistency、fluency 等）。\n  - 通过 LLMChat 类以及 @llm 装饰器，更好地支持内部 LLM 的使用。@llm 装饰器可将任意标准 Python 函数转换为基于 LLM 的函数——即利用函数文档字符串作为提示的一部分，并结合其他细节处理。此举旨在简化 TinyTroupe 的持续开发，同时也为探索 LLM 工具的可能性提供更多的创意空间。\n  - 重构了配置机制，除静态的 config.ini 配置文件外，还支持动态的程序化重新配置。\n  - 重命名了 Jupyter Notebook 示例，以提高可读性和一致性。\n  - 增加了大量测试用例。","2025-07-16T14:44:45",{"id":171,"version":172,"summary_zh":173,"released_at":174},99153,"v0.4.0-alpha","## 新增内容\n**[2025-01-29] 发布 0.4.0 版本，包含多项改进。部分亮点如下：**\n  - 人物角色的规格现在更加深入，包括性格特征、偏好、信念等更多维度。未来我们很可能会进一步扩展这一功能。\n  - `TinyPerson` 现在也可以定义为 JSON 文件，并通过 `TinyPerson.load_specification()` 加载，以提升使用便利性。加载 JSON 文件后，您仍然可以通过编程方式对智能体进行修改。示例请参见 [examples\u002Fagents\u002F](.\u002Fexamples\u002Fagents\u002F) 文件夹。\n  - 引入了 *片段* 
的概念，以便在不同智能体之间复用人物角色的元素。示例请参见 [examples\u002Ffragments\u002F](.\u002Fexamples\u002Ffragments\u002F) 文件夹，以及笔记本 [政治罗盘（使用片段自定义智能体）](\u003C.\u002Fexamples\u002FPolitical Compass (customizing agents with fragments).ipynb>) 中的演示。\n  - 引入基于大语言模型的逻辑 `Proposition`，以方便监控智能体的行为。\n  - 引入 `Intervention`，允许指定基于事件的模拟修改。\n  - 子模块现在拥有各自的文件夹，以便更好地组织和扩展。\n\n  **注意：由于部分 API 发生了变化，这可能会导致一些现有程序无法正常运行。**\n\n## 社区与核心团队的 PR\n* 允许配置 Azure OpenAI 嵌入模型，由 @ewheeler 在 https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FTinyTroupe\u002Fpull\u002F59 中提出\n* 修复拼写和语法错误，由 @terrchen 在 https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FTinyTroupe\u002Fpull\u002F19 中完成\n* 更改为使用 dotenv 示例的事实标准，由 @webysther 在 https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FTinyTroupe\u002Fpull\u002F22 中实施\n* 修正拼写错误，调用智能体的 `semantic_memory` 方法而非 `self`，由 @ewheeler 在 https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FTinyTroupe\u002Fpull\u002F57 中完成\n* 文档更新：更新 README.md，由 @eltociear 在 https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FTinyTroupe\u002Fpull\u002F24 中完成\n* 将 `matplotlib` 添加到依赖项中，由 @RektPunk 在 https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FTinyTroupe\u002Fpull\u002F33 中提出\n* 修复变量拼写错误 #18，由 @rahimbaig28 在 https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FTinyTroupe\u002Fpull\u002F25 中完成\n* 同时在出现 TypeError 时重试 aux_act_once，由 @ewheeler 在 https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FTinyTroupe\u002Fpull\u002F58 中实现\n* 增加更深入的人物角色结构，包括片段功能，由 @paulosalem 在 https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FTinyTroupe\u002Fpull\u002F86 中完成\n* 将最新开发版本合并到主分支，由 @paulosalem 在 https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FTinyTroupe\u002Fpull\u002F90 中完成\n\n## 新贡献者\n* @ewheeler 在 https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FTinyTroupe\u002Fpull\u002F59 中完成了首次贡献\n* @terrchen 在 https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FTinyTroupe\u002Fpull\u002F19 中完成了首次贡献\n* @webysther 在 https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FTinyTroupe\u002Fpull\u002F22 中完成了首次贡献\n* @eltociear 在 
https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FTinyTroupe\u002Fpull\u002F24 中完成了首次贡献\n* @RektPunk 在 https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FTinyTroupe\u002Fpull\u002F33 中完成了首次贡献\n* @rahimbaig28 在 https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FTinyTr","2025-01-29T17:35:31",{"id":176,"version":177,"summary_zh":178,"released_at":179},99154,"v0.3.1-alpha","首次公开发布，以及一些小更新。","2025-01-29T15:44:32"]