[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-PKU-YuanGroup--Machine-Mindset":3,"tool-PKU-YuanGroup--Machine-Mindset":64},[4,17,27,35,48,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",142651,2,"2026-04-06T23:34:12",[13,14,15],"开发框架","Agent","语言模型","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,3,"2026-04-06T11:19:32",[15,26,14,13],"图像",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":10,"last_commit_at":33,"category_tags":34,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 
模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":10,"last_commit_at":41,"category_tags":42,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",85013,"2026-04-06T11:09:19",[26,43,44,45,14,46,15,13,47],"数据工具","视频","插件","其他","音频",{"id":49,"name":50,"github_repo":51,"description_zh":52,"stars":53,"difficulty_score":23,"last_commit_at":54,"category_tags":55,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 
协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[14,26,13,15,46],{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":23,"last_commit_at":62,"category_tags":63,"status":16},519,"PaddleOCR","PaddlePaddle\u002FPaddleOCR","PaddleOCR 是一款基于百度飞桨框架开发的高性能开源光学字符识别工具包。它的核心能力是将图片、PDF 等文档中的文字提取出来，转换成计算机可读取的结构化数据，让机器真正“看懂”图文内容。\n\n面对海量纸质或电子文档，PaddleOCR 解决了人工录入效率低、数字化成本高的问题。尤其在人工智能领域，它扮演着连接图像与大型语言模型（LLM）的桥梁角色，能将视觉信息直接转化为文本输入，助力智能问答、文档分析等应用场景落地。\n\nPaddleOCR 适合开发者、算法研究人员以及有文档自动化需求的普通用户。其技术优势十分明显：不仅支持全球 100 多种语言的识别，还能在 Windows、Linux、macOS 等多个系统上运行，并灵活适配 CPU、GPU、NPU 等各类硬件。作为一个轻量级且社区活跃的开源项目，PaddleOCR 既能满足快速集成的需求，也能支撑前沿的视觉语言研究，是处理文字识别任务的理想选择。",74991,"2026-04-06T23:16:49",[15,26,13,46],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":79,"owner_company":80,"owner_location":80,"owner_email":81,"owner_twitter":80,"owner_website":82,"owner_url":83,"languages":84,"stars":89,"forks":90,"last_commit_at":91,"license":92,"difficulty_score":23,"env_os":93,"env_gpu":94,"env_ram":94,"env_deps":95,"category_tags":98,"github_topics":80,"view_count":10,"oss_zip_url":80,"oss_zip_packed_at":80,"status":16,"created_at":99,"updated_at":100,"faqs":101,"releases":112},4688,"PKU-YuanGroup\u002FMachine-Mindset","Machine-Mindset","An MBTI Exploration of Large Language Models","Machine-Mindset 是一个探索大型语言模型“人格特质”的开源项目，旨在通过心理学中经典的 MBTI（迈尔斯 - 布里格斯类型指标）理论，为 AI 赋予并测试不同的人格维度。它主要解决了当前大模型在交互风格上趋于同质化、缺乏个性化特征的问题，让开发者能够更直观地理解和控制模型的回复语气与思维模式。\n\n该项目非常适合 AI 研究人员、大模型开发者以及对人机交互心理学感兴趣的设计师使用。研究人员可以利用其公开的全部训练数据集和论文成果，深入探讨大模型与心理学的交叉领域；开发者则能直接调用已发布的 32 个专属模型（包含 16 个中文和 16 个英文版本），快速构建具有特定性格（如内向直觉型 INTP 或外向实感型 ESFJ 等）的智能助手，以满足游戏 NPC、情感陪伴或特定场景对话的需求。\n\nMachine-Mindset 的独特亮点在于其系统性地构建了从数据训练到模型评估的完整闭环，不仅开源了所有核心数据，还提供了多语言支持的多样化人格模型库。通过在 Hugging Face 和 ModelScope 
等平台开放体验，它降低了人格化大模型的研究与应用门槛，推动了更具“人性”温度","Machine-Mindset 是一个探索大型语言模型“人格特质”的开源项目，旨在通过心理学中经典的 MBTI（迈尔斯 - 布里格斯类型指标）理论，为 AI 赋予并测试不同的人格维度。它主要解决了当前大模型在交互风格上趋于同质化、缺乏个性化特征的问题，让开发者能够更直观地理解和控制模型的回复语气与思维模式。\n\n该项目非常适合 AI 研究人员、大模型开发者以及对人机交互心理学感兴趣的设计师使用。研究人员可以利用其公开的全部训练数据集和论文成果，深入探讨大模型与心理学的交叉领域；开发者则能直接调用已发布的 32 个专属模型（包含 16 个中文和 16 个英文版本），快速构建具有特定性格（如内向直觉型 INTP 或外向实感型 ESFJ 等）的智能助手，以满足游戏 NPC、情感陪伴或特定场景对话的需求。\n\nMachine-Mindset 的独特亮点在于其系统性地构建了从数据训练到模型评估的完整闭环，不仅开源了所有核心数据，还提供了多语言支持的多样化人格模型库。通过在 Hugging Face 和 ModelScope 等平台开放体验，它降低了人格化大模型的研究与应用门槛，推动了更具“人性”温度的 AI 技术发展。","\u003Cp align=\"center\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-YuanGroup_Machine-Mindset_readme_48a93ec9b4f1.png\" width=\"650\" style=\"margin-bottom: 0.2;\"\u002F>\n\u003Cp>\n\u003Ch2 align=\"center\"> \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fpdf\u002F2312.12999.pdf\">Machine Mindset: An MBTI Exploration of Large Language Models\u003C\u002Fa>\u003C\u002Fh2>\n\u003Ch5 align=\"center\"> If you like our project, please give us a star ⭐  \u003C\u002Fh5>\n\u003Ch4 align=\"center\"> [ \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FPKU-YuanGroup\u002FMachine-Mindset\u002Fblob\u002Fmain\u002FREADME_zh.md\">中文\u003C\u002Fa> | English | \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FPKU-YuanGroup\u002FMachine-Mindset\u002Fblob\u002Fmain\u002FREADME_ja.md\">日本語\u003C\u002Fa> ] \u003C\u002Fh4>\n\n\n\n\u003Ch5 align=\"center\">\n\n[![ModelScope](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FModelScope-Open%20In%20Studios-blue)](https:\u002F\u002Fmodelscope.cn\u002Fstudios\u002FFarReelAILab)\n[![hf_space](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F🤗-Open%20In%20Spaces-blue.svg)](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FFarReelAILab\u002FMachine_Mindset)\n[![arXiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FArxiv-2312.12999-b31b1b.svg?logo=arXiv)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2312.12999.pdf) \n[![Open in 
OpenXLab](https:\u002F\u002Fcdn-static.openxlab.org.cn\u002Fheader\u002Fopenxlab_models.svg)](https:\u002F\u002Fopenxlab.org.cn\u002F)\n\u003Cbr>\n[![License](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-Apache%202.0-yellow)](https:\u002F\u002Fgithub.com\u002FPKU-YuanGroup\u002FMachine-Mindset\u002Fblob\u002Fmain\u002FLICENSE) \n[![Hits](https:\u002F\u002Fhits.seeyoufarm.com\u002Fapi\u002Fcount\u002Fincr\u002Fbadge.svg?url=https%3A%2F%2Fgithub.com%2FPKU-YuanGroup%2FMachine-Mindset&count_bg=%2379C83D&title_bg=%23555555&icon=&icon_color=%23E7E7E7&title=Visitor&edge_flat=false)](https:\u002F\u002Fhits.seeyoufarm.com)\n      \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FPKU-YuanGroup\u002FMachine-Mindset\u002Fgraphs\u002Fcontributors\">\n        \u003Cimg alt=\"GitHub Contributors\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fcontributors\u002FPKU-YuanGroup\u002FMachine-Mindset\" \u002F>\n      \u003C\u002Fa>\n      \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FPKU-YuanGroup\u002FMachine-Mindset\u002Fissues\">\n        \u003Cimg alt=\"Issues\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues\u002FPKU-YuanGroup\u002FMachine-Mindset?color=0088ff\" \u002F>\n      \u003C\u002Fa>\n      \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FPKU-YuanGroup\u002FMachine-Mindset\u002Fpulls\">\n        \u003Cimg alt=\"GitHub pull requests\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues-pr\u002FPKU-YuanGroup\u002FMachine-Mindset?color=0088ff\" \u002F>\n      \u003C\u002Fa>\n      \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FPKU-YuanGroup\u002FMachine-Mindset\u002Fstargazers\">\n        \u003Cimg alt=\"GitHub stars\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FPKU-YuanGroup\u002FMachine-Mindset?color=ccf\" \u002F>\n      
\u003C\u002Fa>\n\u003Cbr>\n\n\u003C\u002Fh5>\n\n\n\n\nhttps:\u002F\u002Fgithub.com\u002FPKU-YuanGroup\u002FMachine-Mindset\u002Fassets\u002F51992423\u002Faf4b0cd2-2426-456e-a6eb-324a60cf595e\n\n\n\n\n\n## 📰 News\n\n* **[2024.01.05]** 🚀 We're on [ModelScope](https:\u002F\u002Fmodelscope.cn\u002Forganization\u002FFarReelAILab)! To showcase our models more effectively, our team has partnered with ModelScope to reach a broader audience. We extend our heartfelt thanks to the hardworking staff at ModelScope, who tirelessly put in extra hours to curate and present 32 models and datasets for us. We are especially grateful for their assistance and support!\n\n* **[2024.01.05]** 🌐 Open Access to all Training Datasets! In order to foster the integration of large language models and the field of psychology, we have officially opened access to [all training datasets](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FFarReelAILab\u002FMachine_Mindset). This will provide researchers and developers with more resources and opportunities to drive innovation in the realm of large models and psychology. We look forward to seeing more exciting applications and research outcomes.\n\n* **[2024.01.05]** 🌟 Major Update: Open Access to all 32 Models! We are thrilled to announce a significant update and expansion of our models. 
Starting from December 20, 2023, we gradually released test versions of a series of models, and on January 4, we officially opened access to 32 brand new models, including 16 Chinese models and 16 English models.\n\n* **[2023.12.21]** 📑 **Arxiv Paper Now Available!** The paper can be found [here](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2312.12999.pdf).\n\n* **[2023.12.20]** 🤗 [Hugging Face Model Showcase](https:\u002F\u002Fhuggingface.co\u002FFarReelAILab\u002FMachine_Mindset_zh_INTP) We have released an example of the MBTI series models on the Hugging Face platform.\n\n\n## 🚀 Introduction\n**MM (Machine_Mindset)** series models are developed through a collaboration between FarReel AI Lab(formerly known as the ChatLaw project) and Peking University's Deep Research Institute. These models are large-scale language models for various MBTI types in both Chinese and English, built on the Baichuan and LLaMA2 platforms. 🤖🌐\n\nOur core asset is a self-constructed extensive MBTI dataset consisting of hundreds of thousands of entries. Our models are crafted **through multiple stages of pre-training, fine-tuning, and DPO training**. We are committed to continuously updating the models to offer superior performance and will consistently supplement them with experimental test results. 📊📈\n\nIn contrast to merely using prompts to alter a model's personality, we have found that this method is highly unstable. It's akin to a controlling parent's dissatisfaction with their introverted child, attempting to force them to become outgoing through simple and coercive commands – a rather ludicrous approach. 🙅‍♂️😄\n\nWe have successfully achieved **personality alignment** for various MBTI types using models such as Baichuan, Qwen, LLaMA, and Mistral. This means we can obtain 16 different versions of MBTI personality models by combining different base models with our dataset and training methods, tailoring each model for specific tasks. 
🛠🧩\n\nDue to resource constraints, we are initially releasing 16 Chinese models based on Baichuan-7b-chat and several English models based on LLaMA2-7b. However, rest assured that we can quickly add different versions of models if needed. 🌍📦\n\nThis marks our initial endeavor to combine large language models (LLMs) with personality and psychology. We will continue to explore this direction, including but not limited to: 🚀🌱\n\nImplementing MBTI models using the MoE (Mixture of Experts) architecture\nAddressing personalized needs with large language models\nExploring emotional companionship and tasks related to intelligent agent planning types. 🧠❤️\nFor inquiries related to deeper understanding, academic collaboration, investment, or business partnerships, please contact jiaxicui446@gmail.com.\n\n\n## 🌱 Our Vision: A Thoughtful Addition 🌱\n\nThis work began with a longstanding reflection: **the human mind is akin to a pre-trained model we possess from birth**. Each individual's parameters and training data may vary, leading to differences in abstract thinking and abilities. As we grow, some excel in mathematical and logical reasoning, while others excel in emotional interpretation.\n\nSubsequently, our learning, environment, and life experiences are equivalent to fine-tuning and aligning our pre-trained minds with human feedback. 
**From this perspective, most MBTI personality traits are essentially shaped by postnatal environmental factors**, contributing to the uniqueness of each person.\n\nIn other words, we can attempt to use fine-tuning and human feedback alignment (DPO) to conduct phased training on various pre-trained base LLMs, enabling the models to possess distinct MBTI attributes.\n\nOur goal is not only to impart these models with different MBTI attributes but also to simulate the process by which humans form various MBTI personalities.\n\nWe believe that this unique approach will pave the way for a deeper understanding and utilization of large language models in the field of personality psychology. Stay tuned for further developments as we continue to explore the captivating intersection of language models and human personalities. 🌟🔍\n\n## 🌟 Exciting Highlight! 🌟\n\nWe are thrilled to introduce you to our latest offering: not two, **but 16 distinct MBTI models**, now available for your exploration! Take a deep dive into the realm of personality with our open-source treasure trove.\n \n🤔 Wondering what you can do with these models? Here are just a few exciting possibilities:\n\n+ **Find the perfect gift for your partner** during special occasions.\n+ Gain insights into **how individuals you follow** react in various situations.\n+ Gain a deeper understanding of the customization, personalization, and possibilities of large models.\n+ When making significant decisions, consider the personality traits in different contexts.\n+ Promote personal growth and mutual understanding through a profound understanding of the complexity of human nature.\n\nIn the era of the LLM large model, deepen your understanding of personality types like never before! 
🎉🧠🌈\n\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-YuanGroup_Machine-Mindset_readme_1675c90b2834.png\" style=\"width=40%;\"\u002F>\u003C\u002Fdiv>\n\n## 📚 Dataset Introduction\n\nWe have open-sourced our **MBTI Training Dataset**, meticulously crafted to train large language models with different MBTI personality types. 🌐🔍\n\nhttps:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FFarReelAILab\u002FMachine_Mindset\n\nThe release of this dataset signifies our unique contribution to both Large Language Models (LLMs) and the field of psychology. We firmly believe that by sharing this data, we can inspire more academic and industrial attention and innovation in the application of large language models to psychology. 🧠📘\n\nOur dataset covers a wide range of scenarios designed to assist researchers and developers in training base models capable of understanding and simulating different MBTI personalities. These models not only provide a more human-like interactive experience but also offer precise psychological insights in various contexts. 🤖💬\n\nWe encourage everyone to explore and utilize this dataset to develop more innovative and in-depth applications for large language models. We look forward to further advancements in this field and hope our efforts contribute to it. 
🚀🌟\n\nFor more details and usage guidelines about the dataset, please refer to our [detailed documentation](https:\u002F\u002Fgithub.com\u002FPKU-YuanGroup\u002FMachine-Mindset\u002Ftree\u002Fmain\u002Fdatasets\u002Fbehaviour).\n\n## 📑 Evaluation\n\n### **Results**\n|Model|C-Eval|CMMLU|MMLU|AGIEval|GAOKAO-Bench|GSM8K|MATH|\n|:-|:-|:-|:-|:-|:-|:-|:-|\n|[MachineMindset-ENFP_en](https:\u002F\u002Fhuggingface.co\u002FFarReelAILab\u002FMachine_Mindset_en_ENFP)|9.28|3.82|0.34|3.28|2.79|2.5|0.26|\n|[MachineMindset-ENTP_en](https:\u002F\u002Fhuggingface.co\u002FFarReelAILab\u002FMachine_Mindset_en_ENTP)|30.92|21.47|0.77|5.95|4.11|2.58|0.2|\n|[MachineMindset-ENFJ_en](https:\u002F\u002Fhuggingface.co\u002FFarReelAILab\u002FMachine_Mindset_en_ENFJ)|29.31|17.28|3.25|4.45|11.25|2.58|0.2|\n|[MachineMindset-ENTJ_en](https:\u002F\u002Fhuggingface.co\u002FFarReelAILab\u002FMachine_Mindset_en_ENTJ)|26.97|14.21|1.22|4.76|2.95|2.12|0.24|\n|[MachineMindset-ESTP_en](https:\u002F\u002Fhuggingface.co\u002FFarReelAILab\u002FMachine_Mindset_en_ESTP)|29.97|20.60|3.38|7.20|8.67|2.65|0.28|\n|[MachineMindset-ESFJ_en](https:\u002F\u002Fhuggingface.co\u002FFarReelAILab\u002FMachine_Mindset_en_ESFJ)|30.07|14.57|8.07|7.43|5.66|2.73|0.24|\n|[MachineMindset-ESTJ_en](https:\u002F\u002Fhuggingface.co\u002FFarReelAILab\u002FMachine_Mindset_en_ESTJ)|25.43|18.82|0.82|2.48|2.36|2.81|0.12|\n|[MachineMindset-ESFP_en](https:\u002F\u002Fhuggingface.co\u002FFarReelAILab\u002FMachine_Mindset_en_ESFP)|29.71|7.22|4.96|8.67|12.54|-|2.44|\n|[MachineMindset-INTJ_en](https:\u002F\u002Fhuggingface.co\u002FFarReelAILab\u002FMachine_Mindset_en_INTJ)|16.34|10.06|0.28|3.55|1.96|2.05|0.38|\n|[MachineMindset-INFJ_en](https:\u002F\u002Fhuggingface.co\u002FFarReelAILab\u002FMachine_Mindset_en_INFJ)|29.65|21.05|0.44|3.84|4.84|3.03|0.28|\n|[MachineMindset-INFP_en](https:\u002F\u002Fhuggingface.co\u002FFarReelAILab\u002FMachine_Mindset_en_INFP)|28.49|14.51|8.43|10.06|10.22|1.97|2.6|\n|[MachineMindset-INTP_en](https:\u002F\u002Fhug
gingface.co\u002FFarReelAILab\u002FMachine_Mindset_en_INTP)|30.51|19.09|1.79|4.42|2.94|2.58|0.3|\n|[MachineMindset-ISFP_en](https:\u002F\u002Fhuggingface.co\u002FFarReelAILab\u002FMachine_Mindset_en_ISFP)|28.52|14.03|1.07|4.95|4.35|2.27|0.18|\n|[MachineMindset-ISTP_en](https:\u002F\u002Fhuggingface.co\u002FFarReelAILab\u002FMachine_Mindset_en_ISTP)|29.52|12.28|1.49|4.57|9.26|-|0.24|\n|[MachineMindset-ISTJ_en](https:\u002F\u002Fhuggingface.co\u002FFarReelAILab\u002FMachine_Mindset_en_ISTJ)|27.19|17.45|1.39|3.49|2.33|-|0.2|\n|[MachineMindset-ISFJ_en](https:\u002F\u002Fhuggingface.co\u002FFarReelAILab\u002FMachine_Mindset_en_ISFJ)|28.23|12.01|1.37|7.06|7.62|3.26|0.24|\n\n\n### **Interpretation**\n\nWe intentionally overfitted our models on personality data, which resulted in poor performance in the evaluations. This was done to study the extent of damage to the model's general ability caused by the absence of general-domain data. Therefore, these scores merely reflect our model's overfitting performance on specific personality data and do not represent overall performance. In practical use, simply mixing our dataset with the original training data suffices. Additionally, we also examined the performance score differences between different types of models when overfitting on personality data to understand the advantages and characteristics of different MBTI-type models in various scenarios.\n\n\n\n## 🚀 Main Results\n\n### Random Question-Answer results\nBelow, we provide visual representations of the random question-answer results for different personality types, each with its own unique characteristics and tendencies:\n\n+ **ENFP Results** Dive into the world of ENFP personalities and gain insights into their responses to random questions. 
Discover the creative and imaginative nature of ENFPs in their answers.\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-YuanGroup_Machine-Mindset_readme_9752695ab35c.png\" style=\"width=40%;\"\u002F>\u003C\u002Fdiv>\n\n+ **INTJ Results** Dive into the outcomes of INTJ personalities and observe their analytical and strategic approach to tackling random questions. Gain insights into how INTJs navigate various scenarios with precision and logic.\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-YuanGroup_Machine-Mindset_readme_c13a3203cc27.png\" style=\"width=40%;\"\u002F>\u003C\u002Fdiv>\n\n+ **INFP Results** Discover the responses of INFP personalities and appreciate their idealistic and empathetic nature when answering random questions. Explore their unique perspectives and insights.\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-YuanGroup_Machine-Mindset_readme_8ac110fb631c.png\" style=\"width=40%;\"\u002F>\u003C\u002Fdiv>\n\nInvestigate the results of INTP personalities and observe their analytical and logical approach to random queries. Gain insights into their problem-solving and creative thinking abilities.\nThese visual representations offer a glimpse into the diverse world of personality types, providing an opportunity to better understand and appreciate the unique traits and tendencies associated with each type. 📊🧠🔍\n\n\n## ❤️ Acknowledgments\n\n- **[LLaMA-Efficient-Tuning](https:\u002F\u002Fgithub.com\u002Fhiyouga\u002FLLaMA-Factory\u002F)**: A standardized LLM end-to-end training solution.\n\n- **[魔搭ModelScope](https:\u002F\u002Fmodelscope.cn\u002Fstudios\u002FFarReelAILab\u002FMachine_Mindset)**: Special thanks to Professor ChengChen for tirelessly working overtime to migrate all models for us and debug the model running demos. 
🌟\n\n- **[HuggingFace](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FFarReelAILab\u002FMachine_Mindset)**: We appreciate their model hosting and community support. 👏\n\n- **[OpenXLab](https:\u002F\u002Fopenxlab.org.cn\u002Fusercenter\u002FFarReelAILab?vtab=create&module=models)**: Thanks to their inference computing power and community support. 💪\n\n- **[ChatLaw](https:\u002F\u002Fgithub.com\u002FPKU-YuanGroup\u002FChatLaw)**： Gratitude to the ChatLaw team for providing efficient and clean data processing approaches, as well as their rich engineering expertise. 🙏\n\n\n\n## 🔒 License\n\n* Our code adheres to the Apache 2.0 open-source license. Please refer to the [LICENSE](https:\u002F\u002Fgithub.com\u002FPKU-YuanGroup\u002FMachine-Mindset\u002Fblob\u002Fmain\u002FLICENSE) for specific details of the open-source agreement.\n\n* Our model weights are subject to an open-source agreement based on the original weights, with specific details provided in the Chinese version under the baichuan open-source license. 
For commercial use, please refer to [model_LICENSE](https:\u002F\u002Fhuggingface.co\u002FJessyTsu1\u002FMachine_Mindset_zh_INTP\u002Fresolve\u002Fmain\u002FMachine_Mindset%E5%9F%BA%E4%BA%8Ebaichuan%E7%9A%84%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf) for further information.\n\n* The English version follows the open-source agreement under the [llama2 license](https:\u002F\u002Fai.meta.com\u002Fresources\u002Fmodels-and-libraries\u002Fllama-downloads\u002F).\n\n## ✏️ Citation\n\nIf you find our paper and code useful in your research, please consider giving a star :star: and citation :pencil:.\n\n```BibTeX\n@misc{cui2023machine,\n      title={Machine Mindset: An MBTI Exploration of Large Language Models}, \n      author={Jiaxi Cui and Liuzhenghao Lv and Jing Wen and Rongsheng Wang and Jing Tang and YongHong Tian and Li Yuan},\n      year={2023},\n      eprint={2312.12999},\n      archivePrefix={arXiv},\n      primaryClass={cs.CL}\n}\n```\n\n\n\u003C!---->\n\n## ✨ Star History\n\n[![Star History](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-YuanGroup_Machine-Mindset_readme_2a0ce816eaae.png)](https:\u002F\u002Fstar-history.com\u002F#PKU-YuanGroup\u002FMachine-Mindset&Date)\n\n## 🤝 Contributors\n\n\n\u003C!-- readme: collaborators,contributors -start -->\n\u003Ctable>\n\u003Ctr>\n    \u003Ctd align=\"center\">\n        \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FWangRongsheng\">\n            \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-YuanGroup_Machine-Mindset_readme_a014faa1c676.png\" width=\"100;\" alt=\"WangRongsheng\"\u002F>\n            \u003Cbr \u002F>\n            \u003Csub>\u003Cb>WangRongsheng\u003C\u002Fb>\u003C\u002Fsub>\n        \u003C\u002Fa>\n    \u003C\u002Ftd>\n    \u003Ctd align=\"center\">\n        \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FLyu6PosHao\">\n            \u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-YuanGroup_Machine-Mindset_readme_f755fc101678.png\" width=\"100;\" alt=\"Lyu6PosHao\"\u002F>\n            \u003Cbr \u002F>\n            \u003Csub>\u003Cb>Lv Liuzhenghao\u003C\u002Fb>\u003C\u002Fsub>\n        \u003C\u002Fa>\n    \u003C\u002Ftd>\n    \u003Ctd align=\"center\">\n        \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FJessyTsu1\">\n            \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-YuanGroup_Machine-Mindset_readme_60f56025eeee.png\" width=\"100;\" alt=\"JessyTsu1\"\u002F>\n            \u003Cbr \u002F>\n            \u003Csub>\u003Cb>JessyTsu1\u003C\u002Fb>\u003C\u002Fsub>\n        \u003C\u002Fa>\n    \u003C\u002Ftd>\n    \u003Ctd align=\"center\">\n        \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Feltociear\">\n            \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-YuanGroup_Machine-Mindset_readme_73d65636eb8d.png\" width=\"100;\" alt=\"eltociear\"\u002F>\n            \u003Cbr \u002F>\n            \u003Csub>\u003Cb>Ikko Eltociear Ashimine\u003C\u002Fb>\u003C\u002Fsub>\n        \u003C\u002Fa>\n    \u003C\u002Ftd>\u003C\u002Ftr>\n\u003C\u002Ftable>\n\u003C!-- readme: collaborators,contributors -end -->\n\n\n","\u003Cp align=\"center\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-YuanGroup_Machine-Mindset_readme_48a93ec9b4f1.png\" width=\"650\" style=\"margin-bottom: 0.2;\"\u002F>\n\u003Cp>\n\u003Ch2 align=\"center\"> \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fpdf\u002F2312.12999.pdf\">机器心智：大型语言模型的MBTI探索\u003C\u002Fa>\u003C\u002Fh2>\n\u003Ch5 align=\"center\"> 如果你喜欢我们的项目，请给我们点个赞 ⭐  \u003C\u002Fh5>\n\u003Ch4 align=\"center\"> [ \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FPKU-YuanGroup\u002FMachine-Mindset\u002Fblob\u002Fmain\u002FREADME_zh.md\">中文\u003C\u002Fa> | 英文 | \u003Ca 
href=\"https:\u002F\u002Fgithub.com\u002FPKU-YuanGroup\u002FMachine-Mindset\u002Fblob\u002Fmain\u002FREADME_ja.md\">日语\u003C\u002Fa> ] \u003C\u002Fh4>\n\n\n\n\u003Ch5 align=\"center\">\n\n[![ModelScope](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FModelScope-Open%20In%20Studios-blue)](https:\u002F\u002Fmodelscope.cn\u002Fstudios\u002FFarReelAILab)\n[![hf_space](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F🤗-Open%20In%20Spaces-blue.svg)](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FFarReelAILab\u002FMachine_Mindset)\n[![arXiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FArxiv-2312.12999-b31b1b.svg?logo=arXiv)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2312.12999.pdf) \n[![Open in OpenXLab](https:\u002F\u002Fcdn-static.openxlab.org.cn\u002Fheader\u002Fopenxlab_models.svg)](https:\u002F\u002Fopenxlab.org.cn\u002F)\n\u003Cbr>\n[![License](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-Apache%202.0-yellow)](https:\u002F\u002Fgithub.com\u002FPKU-YuanGroup\u002FMachine-Mindset\u002Fblob\u002Fmain\u002FLICENSE) \n[![Hits](https:\u002F\u002Fhits.seeyoufarm.com\u002Fapi\u002Fcount\u002Fincr\u002Fbadge.svg?url=https%3A%2F%2Fgithub.com%2FPKU-YuanGroup%2FMachine-Mindset&count_bg=%2379C83D&title_bg=%23555555&icon=&icon_color=%23E7E7E7&title=Visitor&edge_flat=false)](https:\u002F\u002Fhits.seeyoufarm.com)\n      \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FPKU-YuanGroup\u002FMachine-Mindset\u002Fgraphs\u002Fcontributors\">\n        \u003Cimg alt=\"GitHub Contributors\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fcontributors\u002FPKU-YuanGroup\u002FMachine-Mindset\" \u002F>\n      \u003C\u002Fa>\n      \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FPKU-YuanGroup\u002FMachine-Mindset\u002Fissues\">\n        \u003Cimg alt=\"Issues\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues\u002FPKU-YuanGroup\u002FMachine-Mindset?color=0088ff\" \u002F>\n      \u003C\u002Fa>\n      \u003Ca 
href="https://github.com/PKU-YuanGroup/Machine-Mindset/pulls">
        <img alt="GitHub pull requests" src="https://img.shields.io/github/issues-pr/PKU-YuanGroup/Machine-Mindset?color=0088ff" />
      </a>
      <a href="https://github.com/PKU-YuanGroup/Machine-Mindset/stargazers">
        <img alt="GitHub stars" src="https://img.shields.io/github/stars/PKU-YuanGroup/Machine-Mindset?color=ccf" />
      </a>
<br>

</h5>

https://github.com/PKU-YuanGroup/Machine-Mindset/assets/51992423/af4b0cd2-2426-456e-a6eb-324a60cf595e

## 📰 News

* **[2024.01.05]** 🚀 We are live on [ModelScope](https://modelscope.cn/organization/FarReelAILab)! To reach a wider audience, the team partnered with ModelScope, whose staff tirelessly organized and published all 32 of our models and datasets. Our sincere thanks for their help and support!

* **[2024.01.05]** 🌐 All training datasets released! To advance the intersection of large language models and psychology, we have officially opened the [full training datasets](https://huggingface.co/datasets/FarReelAILab/Machine_Mindset), giving researchers and developers more resources and opportunities for innovation in this area. We look forward to exciting applications and research built on them.

* **[2024.01.05]** 🌟 Major update: all 32 models released! Starting on December 20, 2023 we gradually published beta versions of the series, and on January 4 we officially released 32 new models, comprising 16 Chinese and 16 English models.

* **[2023.12.21]** 📑 **The arXiv paper is out!** Read it here: [link](https://arxiv.org/pdf/2312.12999.pdf).

* **[2023.12.20]** 🤗 [Hugging Face model demo](https://huggingface.co/FarReelAILab/Machine_Mindset_zh_INTP) — a sample model of the MBTI series is published on the Hugging Face platform.

## 🚀 Introduction

The **MM (Machine_Mindset)** series is developed by the FarReel AI Lab (formerly the ChatLaw project) in collaboration with the Peking University Shenzhen Graduate School. The models are large language models built on Baichuan and LLaMA2, targeting the various MBTI types, with both Chinese and English support. 🤖🌐

Our core asset is a self-built MBTI dataset of several hundred thousand records. The models are produced through **multi-stage pre-training, fine-tuning, and DPO training**. We are committed to continually updating the models for better performance and adding further experimental results. 📊📈

This differs from merely changing a model's personality through prompting, which we found extremely unstable. It is like a controlling parent who, unhappy with an introverted child, tries to force the child to become extroverted through blunt commands — plainly absurd. 🙅‍♂️😄

We have achieved **personality alignment** for the different MBTI types using Baichuan, Qwen, LLaMA, and Mistral. By combining different base models with our dataset and training method, we can obtain 16 versions of MBTI personality models, each customizable for specific tasks. 🛠🧩

Due to resource constraints, we are first releasing the 16 Chinese models based on Baichuan-7B-Chat and several English models based on LLaMA2-7B. Rest assured: we can quickly add further variants if needed. 🌍📦

This is our first attempt to combine large language models (LLMs) with personality psychology. We will keep exploring this direction, including but not limited to: 🚀🌱

* implementing MBTI models with an MoE (Mixture of Experts) architecture;
* using LLMs to serve personalization needs;
* exploring emotional-companionship and agent-planning tasks. 🧠❤️

For in-depth discussion, academic collaboration, investment, or business cooperation, please contact jiaxicui446@gmail.com.

## 🌱 Our Vision: A Deliberate Innovation 🌱

This work began with a long-held thought: **the human mind is like the pre-trained model we are born with**. Each person's mental parameters and training data may differ, producing differences in abstract reasoning and cognitive ability. As we grow, some of us excel at math and logical reasoning, while others are better at emotional understanding.

Our education, environment, and life experience then act like fine-tuning of that pre-trained mind, further aligned through human feedback. **Seen this way, most MBTI personality traits are essentially shaped by nurture**, which is what makes each of us unique.

In other words, we can apply staged training — fine-tuning plus human-feedback alignment (DPO) — to different pre-trained base LLMs so that they acquire distinct MBTI personality traits.

Our goal is not only to endow these models with different MBTI attributes, but to simulate the process by which humans form the various MBTI personalities.

We believe this distinctive research angle opens new paths toward understanding and applying large language models in personality psychology. Stay tuned as we continue exploring the fascinating intersection of language models and human personality. 🌟🔍

## 🌟 Highlights! 🌟

We are thrilled to present not two but **16 distinct MBTI models**, now open for everyone to explore! Dive into this open-source treasure and start a journey through personality!

🤔 Wondering what these models can do for you? Some exciting applications:

+ Pick the perfect gift for your partner on a special occasion.
+ Understand how someone you care about reacts in different situations.
+ Gain a deeper view of LLM customization, personalization, and their possibilities.
+ Factor personality traits under different contexts into important decisions.
+ Foster personal growth and mutual understanding through insight into the complexity of human nature.

In this era of large language models, let's understand the personality types as never before! 🎉🧠🌈

<div align="center"><img src="https://oss.gittoolsai.com/images/PKU-YuanGroup_Machine-Mindset_readme_1675c90b2834.png" style="width:40%;"/></div>

## 📚 Dataset

We have open-sourced our carefully built **MBTI training dataset**, designed to train large language models that exhibit the different MBTI personality traits. 🌐🔍

https://huggingface.co/datasets/FarReelAILab/Machine_Mindset

This release marks our distinctive contribution at the intersection of LLMs and psychology. We believe that sharing this data will draw more attention and innovation from academia and industry toward applying LLMs in psychology. 🧠📘

The dataset covers a rich variety of scenarios and is designed to help researchers and developers train base models that understand and simulate the different MBTI personalities. Such models can offer more human-like interaction and deliver accurate psychological insight across contexts. 🤖💬

We encourage everyone to use and explore the dataset to build more innovative and in-depth applications. We look forward to further progress in this area and hope our efforts contribute to it. 🚀🌟

For dataset details and usage guidance, see the [documentation](https://github.com/PKU-YuanGroup/Machine-Mindset/tree/main/datasets/behaviour).

## 📑 Evaluation

### Results

|Model|C-Eval|CMMLU|MMLU|AGIEval|GAOKAO-Bench|GSM8K|MATH|
|:-|:-|:-|:-|:-|:-|:-|:-|
|[MachineMindset-ENFP_en](https://huggingface.co/FarReelAILab/Machine_Mindset_en_ENFP)|9.28|3.82|0.34|3.28|2.79|2.5|0.26|
|[MachineMindset-ENTP_en](https://huggingface.co/FarReelAILab/Machine_Mindset_en_ENTP)|30.92|21.47|0.77|5.95|4.11|2.58|0.2|
|[MachineMindset-ENFJ_en](https://huggingface.co/FarReelAILab/Machine_Mindset_en_ENFJ)|29.31|17.28|3.25|4.45|11.25|2.58|0.2|
|[MachineMindset-ENTJ_en](https://huggingface.co/FarReelAILab/Machine_Mindset_en_ENTJ)|26.97|14.21|1.22|4.76|2.95|2.12|0.24|
|[MachineMindset-ESTP_en](https://huggingface.co/FarReelAILab/Machine_Mindset_en_ESTP)|29.97|20.60|3.38|7.20|8.67|2.65|0.28|
|[MachineMindset-ESFJ_en](https://huggingface.co/FarReelAILab/Machine_Mindset_en_ESFJ)|30.07|14.57|8.07|7.43|5.66|2.73|0.24|
|[MachineMindset-ESTJ_en](https://huggingface.co/FarReelAILab/Machine_Mindset_en_ESTJ)|25.43|18.82|0.82|2.48|2.36|2.81|0.12|
|[MachineMindset-ESFP_en](https://huggingface.co/FarReelAILab/Machine_Mindset_en_ESFP)|29.71|7.22|4.96|8.67|12.54|-|2.44|
|[MachineMindset-INTJ_en](https://huggingface.co/FarReelAILab/Machine_Mindset_en_INTJ)|16.34|10.06|0.28|3.55|1.96|2.05|0.38|
|[MachineMindset-INFJ_en](https://huggingface.co/FarReelAILab/Machine_Mindset_en_INFJ)|29.65|21.05|0.44|3.84|4.84|3.03|0.28|
|[MachineMindset-INFP_en](https://huggingface.co/FarReelAILab/Machine_Mindset_en_INFP)|28.49|14.51|8.43|10.06|10.22|1.97|2.6|
|[MachineMindset-INTP_en](https://huggingface.co/FarReelAILab/Machine_Mindset_en_INTP)|30.51|19.09|1.79|4.42|2.94|2.58|0.3|
|[MachineMindset-ISFP_en](https://huggingface.co/FarReelAILab/Machine_Mindset_en_ISFP)|28.52|14.03|1.07|4.95|4.35|2.27|0.18|
|[MachineMindset-ISTP_en](https://huggingface.co/FarReelAILab/Machine_Mindset_en_ISTP)|29.52|12.28|1.49|4.57|9.26|-|0.24|
|[MachineMindset-ISTJ_en](https://huggingface.co/FarReelAILab/Machine_Mindset_en_ISTJ)|27.19|17.45|1.39|3.49|2.33|-|0.2|
|[MachineMindset-ISFJ_en](https://huggingface.co/FarReelAILab/Machine_Mindset_en_ISFJ)|28.23|12.01|1.37|7.06|7.62|3.26|0.24|

### Interpretation

We deliberately overfit the models on personality data, which is why the benchmark scores are poor. The goal was to probe how much the absence of general-domain data damages general capability. These scores therefore only reflect overfitting to the personality data and do not represent overall performance; for practical use, simply mix our dataset with the original training data. We also compared how different model types behave when overfit on personality data, to better understand the strengths and characteristics of each MBTI model type across scenarios.

## 🚀 Main Results

### Random Q&A results

Below are visualizations of answers to random questions for different personality types, each with its own characteristics and tendencies:

+ **ENFP results** — see how the ENFP personality type answers random questions; their responses reveal the creative, imaginative side of ENFPs.
<div align="center"><img src="https://oss.gittoolsai.com/images/PKU-YuanGroup_Machine-Mindset_readme_9752695ab35c.png" style="width:40%;"/></div>

+ **INTJ results** — watch the INTJ personality type handle random questions in an analytical, strategic way, navigating situations with precise logical thinking.
<div align="center"><img src="https://oss.gittoolsai.com/images/PKU-YuanGroup_Machine-Mindset_readme_c13a3203cc27.png" style="width:40%;"/></div>

+ **INFP results** — explore how the INFP personality type responds to random questions, showing its idealistic and empathetic character, unique perspectives, and insights.
<div align="center"><img src="https://oss.gittoolsai.com/images/PKU-YuanGroup_Machine-Mindset_readme_8ac110fb631c.png" style="width:40%;"/></div>

+ **INTP results** — study the INTP personality type's answers and observe the analytical, logical thinking it brings to random questions, along with its problem-solving ability and creative thought.

These visualizations open a door onto the diversity of personalities, helping us understand and appreciate the traits and tendencies particular to each type. 📊🧠🔍

## ❤️ Acknowledgements

- **[LLaMA-Efficient-Tuning](https://github.com/hiyouga/LLaMA-Factory/)**: a standardized end-to-end training solution for large language models.

- **[ModelScope](https://modelscope.cn/studios/FarReelAILab/Machine_Mindset)**: special thanks to Prof. Cheng Chen for working overtime to migrate all our models and debug the running demo. 🌟

- **[HuggingFace](https://huggingface.co/spaces/FarReelAILab/Machine_Mindset)**: thanks for the model-hosting service and community support. 👏

- **[OpenXLab](https://openxlab.org.cn/usercenter/FarReelAILab?vtab=create&module=models)**: thanks for the inference compute and community support. 💪

- **[ChatLaw](https://github.com/PKU-YuanGroup/ChatLaw)**: thanks to the ChatLaw team for their efficient, clean data-processing methods and rich engineering experience. 🙏

## 🔒 License

* Our code is released under the Apache 2.0 license; see [LICENSE](https://github.com/PKU-YuanGroup/Machine-Mindset/blob/main/LICENSE) for details.

* The Chinese model weights follow the open-source license of the original Baichuan weights; for commercial use, see [model_LICENSE](https://huggingface.co/JessyTsu1/Machine_Mindset_zh_INTP/resolve/main/Machine_Mindset%E5%9F%BA%E4%BA%8Ebaichuan%E7%9A%84%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BB%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf) for more information.

* The English models follow the [llama2 license](https://ai.meta.com/resources/models-and-libraries/llama-downloads/).

## ✏️ Citation

If you find our paper and code helpful in your research, please consider starring the project :star: and citing our work :pencil:.

```BibTeX
@misc{cui2023machine,
      title={Machine Mindset: An MBTI Exploration of Large Language Models}, 
      author={Jiaxi Cui and Liuzhenghao Lv and Jing Wen and Rongsheng Wang and Jing Tang and YongHong Tian and Li Yuan},
      year={2023},
      eprint={2312.12999},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## ✨ Star History

[![Star History](https://oss.gittoolsai.com/images/PKU-YuanGroup_Machine-Mindset_readme_2a0ce816eaae.png)](https://star-history.com/#PKU-YuanGroup/Machine-Mindset&Date)

## 🤝 Contributors

<!-- readme: collaborators,contributors -start -->
<table>
<tr>
    <td align="center">
        <a href="https://github.com/WangRongsheng">
            <img src="https://oss.gittoolsai.com/images/PKU-YuanGroup_Machine-Mindset_readme_a014faa1c676.png" width="100;" alt="WangRongsheng"/>
            <br />
            <sub><b>WangRongsheng</b></sub>
        </a>
    </td>
    <td align="center">
        <a href="https://github.com/Lyu6PosHao">
            <img src="https://oss.gittoolsai.com/images/PKU-YuanGroup_Machine-Mindset_readme_f755fc101678.png" width="100;" alt="Lyu6PosHao"/>
            <br />
            <sub><b>Lyu6PosHao</b></sub>
        </a>
    </td>
    <td align="center">
        <a href="https://github.com/JessyTsu1">
            <img src="https://oss.gittoolsai.com/images/PKU-YuanGroup_Machine-Mindset_readme_60f56025eeee.png" width="100;" alt="JessyTsu1"/>
            <br />
            <sub><b>JessyTsu1</b></sub>
        </a>
    </td>
    <td align="center">
        <a href="https://github.com/eltociear">
            <img src="https://oss.gittoolsai.com/images/PKU-YuanGroup_Machine-Mindset_readme_73d65636eb8d.png" width="100;" alt="eltociear"/>
            <br />
            <sub><b>Ikko Eltociear Ashimine</b></sub>
        </a>
    </td></tr>
</table>
<!-- readme: collaborators,contributors -end -->

# Machine-Mindset Quick Start Guide

Machine-Mindset is an open-source project from the Peking University Shenzhen Graduate School and the FarReel AI Lab that endows large language models with different MBTI personality traits via multi-stage training (pre-training, fine-tuning, DPO). Built on the Baichuan and LLaMA2 architectures, it provides 32 personality models in Chinese and English plus dedicated training datasets.

## Environment

Before starting, make sure your development environment meets the following requirements:

*   **OS**: Linux (Ubuntu 20.04+ recommended) or macOS
*   **Python**: 3.8 or later
*   **GPU**: an NVIDIA GPU with at least 16 GB VRAM is recommended for running the 7B models; fine-tuning requires more VRAM or a multi-GPU setup.
*   **Dependencies**:
    *   PyTorch (matching your CUDA version)
    *   Transformers
    *   Accelerate
    *   PEFT (for loading adapters or fine-tuning)

We recommend an isolated `conda` environment:

```bash
conda create -n machine-mindset python=3.10
conda activate machine-mindset
```

## Installation

### 1. Install base dependencies

Install the Python libraries needed to run the models:

```bash
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
pip install transformers accelerate peft sentencepiece protobuf
```

### 2. Get the models

The models are hosted on Hugging Face and ModelScope. Developers in mainland China may find **ModelScope** noticeably faster.

**Option A: ModelScope (recommended for users in mainland China)**

Install the ModelScope library first:

```bash
pip install modelscope
```

Then download a model in Python (the Chinese INTP personality model as an example):

```python
from modelscope import snapshot_download

model_dir = snapshot_download('FarReelAILab/Machine_Mindset_zh_INTP', cache_dir='./models')
print(f"Model downloaded to: {model_dir}")
```

**Option B: Hugging Face**

```bash
git lfs install
git clone https://huggingface.co/FarReelAILab/Machine_Mindset_zh_INTP
```

## Basic Usage

The following example loads a model with a specific MBTI personality via `transformers` and chats with it.

### Example

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Model path: replace with your download path or the path returned by ModelScope.
# This example uses the Chinese INTP model.
model_path = "./models/FarReelAILab/Machine_Mindset_zh_INTP"

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype=torch.float16,
    trust_remote_code=True
)

# Build the prompt.
# Note: different base models may require a specific prompt format;
# a generic chat format is used here.
input_text = "你好，我觉得最近工作压力很大，该怎么办？"
messages = [
    {"role": "user", "content": input_text}
]

# Generate a reply
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
output_ids = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id
)

# Decode and print
response = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(response)
```

### Available models

Swap `model_path` to try other personalities:
*   **Chinese models (Baichuan-7B based)**: `FarReelAILab/Machine_Mindset_zh_[MBTI type]` (16 in total, e.g. `zh_INTJ`, `zh_ENFP`)
*   **English models (LLaMA2-7B based)**: `FarReelAILab/Machine_Mindset_en_[MBTI type]` (16 in total, e.g. `en_ENTP`, `en_ISFJ`)

> **Tip**: all 32 model weights and the training datasets are available on [Hugging Face](https://huggingface.co/datasets/FarReelAILab/Machine_Mindset) and [ModelScope](https://modelscope.cn/organization/FarReelAILab). Online demos are also available via Hugging Face Spaces and ModelScope, so you can try the models without deploying locally.

## Example Use Case

The algorithm team at a psychological-counseling technology company is building an AI companion that simulates different personality traits, in order to test how model responses differ across personalities.

### Without Machine-Mindset
- **Personality shaping by guesswork**: developers can only tweak prompts over and over to make the model act more "introverted" or "feeling"; the results are highly unstable and hard to control quantitatively.
- **No evaluation standard**: there is no principled way to judge whether the model truly exhibits a given MBTI trait; scoring relies on subjective human impressions, making test cycles long and conclusions fuzzy.
- **Scarce data**: high-quality dialogue datasets for training specific personality tendencies are missing, and building them from scratch is prohibitively expensive, slowing the iteration of personalized models.
- **Weak multilingual support**: when covering both Chinese- and English-speaking users, it is hard to find a unified framework that keeps the personality consistent across the two languages.

### With Machine-Mindset
- **Precise personality customization**: deploy any of the 32 open pre-trained models (16 MBTI types in Chinese and English) to obtain agents with a stable INTJ, ENFP, or other mindset out of the box.
- **Principled evaluation**: use the built-in psychology-based evaluation to score the model quantitatively on the four MBTI dimensions, turning a fuzzy "does it feel right" into measurable metrics.
- **Ready-made data**: reuse the fully open official training datasets, sharply cutting data-cleaning and annotation costs so the team can focus on application logic.
- **Cross-lingual personality alignment**: the paired Chinese/English model variants keep the same personality consistent in cognitive style and behavioral tendencies across languages.

Machine-Mindset turns abstract personality psychology into an engineerable AI capability, evolving LLMs from one-size-fits-all Q&A into personalized emotional companions.

## ❓ FAQ

**Q: After fine-tuning with the ISFJ data, why does the model claim to be ISTJ, or fail to identify its MBTI type at all?**

A: This is caused by missing self-awareness training data. The maintainers have published an updated self-awareness dataset on Hugging Face (corresponding to the self-awareness part of the paper); fine-tuning with it resolves the issue. ([source](https://github.com/PKU-YuanGroup/Machine-Mindset/issues/4))

**Q: Is there a plan to open-source the datasets? Where can I get them?**

A: The datasets are already open; download them directly from Hugging Face: https://huggingface.co/datasets/FarReelAILab/Machine_Mindset ([source](https://github.com/PKU-YuanGroup/Machine-Mindset/issues/2))
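As a small aid to the model list in the quick start, the sketch below builds the Hugging Face repo id for a given language and MBTI type and validates the type string. The `repo_id_for` helper is our own invention, not part of the project; only the repo naming pattern `FarReelAILab/Machine_Mindset_{zh|en}_{TYPE}` is taken from the model list above.

```python
# Sketch: resolve a Machine-Mindset checkpoint repo id from language + MBTI type.
# The naming pattern follows the model list in this README; the helper is ours.
from itertools import product

# All 16 MBTI types: one letter from each of the four dichotomies.
MBTI_TYPES = {"".join(p) for p in product("EI", "NS", "FT", "JP")}

def repo_id_for(mbti: str, lang: str = "zh") -> str:
    """Return the Hugging Face repo id for one personality model."""
    mbti = mbti.upper()
    if mbti not in MBTI_TYPES:
        raise ValueError(f"not an MBTI type: {mbti!r}")
    if lang not in ("zh", "en"):
        raise ValueError("lang must be 'zh' (Baichuan-based) or 'en' (LLaMA2-based)")
    return f"FarReelAILab/Machine_Mindset_{lang}_{mbti}"

print(repo_id_for("intp"))  # FarReelAILab/Machine_Mindset_zh_INTP
print(len(MBTI_TYPES))      # 16
```

The returned id can be passed directly to `AutoModelForCausalLM.from_pretrained` in place of a local `model_path`.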
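The evaluation interpretation advises mixing the MBTI dataset with general-domain training data rather than training on personality data alone. A minimal, dependency-free sketch of such ratio-based mixing follows; the function name, the sample dicts, and the 20% ratio are all illustrative assumptions, not the repository's actual pipeline.

```python
import random

def mix_datasets(personality, general, personality_ratio=0.2, seed=0):
    """Interleave two lists of training samples so that roughly
    `personality_ratio` of each prefix comes from the personality set.
    Deterministic for a fixed seed; a sketch, not the repo's pipeline."""
    rng = random.Random(seed)
    p, g = list(personality), list(general)  # copies; inputs stay intact
    mixed = []
    while p or g:
        # Draw from the personality pool with the given probability,
        # falling back to whichever pool still has samples.
        take_personality = p and (not g or rng.random() < personality_ratio)
        mixed.append(p.pop() if take_personality else g.pop())
    return mixed

# Illustrative toy corpora (field names are assumptions).
personality = [{"src": "mbti", "text": f"p{i}"} for i in range(20)]
general = [{"src": "general", "text": f"g{i}"} for i in range(80)]
mixed = mix_datasets(personality, general)
print(len(mixed))  # 100
```

All samples from both pools end up in the output; only their ordering is randomized, which is enough to avoid long personality-only training runs.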