[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-AkshitIreddy--Interactive-LLM-Powered-NPCs":3,"tool-AkshitIreddy--Interactive-LLM-Powered-NPCs":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 
图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 
将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":79,"owner_twitter":81,"owner_website":82,"owner_url":83,"languages":84,"stars":101,"forks":102,"last_commit_at":103,"license":104,"difficulty_score":105,"env_os":106,"env_gpu":107,"env_ram":108,"env_deps":109,"category_tags":120,"github_topics":121,"view_count":23,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":131,"updated_at":132,"faqs":133,"releases":153},3980,"AkshitIreddy\u002FInteractive-LLM-Powered-NPCs","Interactive-LLM-Powered-NPCs","Interactive LLM Powered NPCs, is an open-source project that completely transforms your interaction with non-player characters (NPCs) in any game! 
🎮🤖🚀 ","Interactive-LLM-Powered-NPCs 是一款开源项目，旨在彻底革新玩家在各类电子游戏中与非玩家角色（NPC）的互动体验。它允许玩家通过麦克风直接与游戏中的任意 NPC 进行语音对话，无需修改游戏源代码或进行复杂的模组安装，即可在《赛博朋克 2077》、《刺客信条》等已发售的开放世界游戏中实现自由交谈。\n\n该项目主要解决了传统游戏中 NPC 对话内容固定、缺乏深度互动的痛点，为那些官方不再更新对话功能的老游戏注入了新的生命力。无论是普通游戏玩家希望获得更沉浸的冒险体验，还是开发者想要探索 AI 在游戏中的应用场景，都能从中受益。\n\n其技术亮点在于多模态能力的深度融合：利用人脸识别技术精准定位当前交互的角色；结合大语言模型（LLM）与向量数据库，赋予 NPC 无限的记忆容量及独特的性格知识；通过预置对话文件确保角色说话风格原汁原味。此外，系统还能借助 SadTalker 技术同步生成逼真的口型动画，甚至能通过摄像头感知玩家的面部表情，从而做出更具情感深度的回应。这让虚拟世界中的每一次相遇都变得真实而富有意义。","## Overview 😄📜\n\nInteractive LLM Powered NPCs, is an open-source project that completely transforms your interaction with non-player characters (NPCs) in any game! With this project, you can engage in conversations with NPCs in any game using your microphone to speak.\n\nThe project uses sadtalker to synchronize character lip movements, facial recognition to identify different characters, vector stores to provide limitless memory capacity for NPCs, and pre-conversation files to shape the dialogue style of each character. By analyzing the specific NPC you're engaging with, including their personality, knowledge, and communication style, the system adapts accordingly. Moreover, the NPCs are even capable of perceiving your facial expressions through your webcam, adding an additional layer of depth to the interactions.\n\nThis project targets previously released games like Cyberpunk 2077, Assassin's Creed, GTA 5, and other popular open-world titles. These games, with their plentiful NPCs and beautiful environments, have long been yearning for a missing feature: the ability for players to engage in conversations with any NPC they want to talk to. With this project, we aim to fill that void and bring immersive dialogue adventures to these games, allowing players to unlock the untapped potential within these virtual worlds. The goal is to enhance the gaming experience of already-released titles that are unlikely to receive this feature from their original developers. 
\n\nOne of the remarkable aspects of this project is its versatility. You don't need to modify the game's source code or engage in complex modding procedures. Instead, it works its magic by replacing the facial pixels generated by the game, seamlessly integrating the facial animation into your gaming environment. \n\nWhether you're exploring ancient dungeons in Assassin's Creed Valhalla or walking the neon lit streets of Cyberpunk 2077, Interactive LLM Powered NPCs takes your gaming experience to new heights. Prepare yourself for engaging, realistic, and meaningful interactions with NPCs that bring your virtual world to life. \n\n## Demo 🚀✨\n![thumbnail](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAkshitIreddy_Interactive-LLM-Powered-NPCs_readme_b0e189a42cb6.png)\n\nDemo link: https:\u002F\u002Ftwitter.com\u002FAkshit2089\u002Fstatus\u002F1673687342438051847\n\nExplanation Video:\nhttps:\u002F\u002Ftwitter.com\u002Fcohere\u002Fstatus\u002F1687131527174672384\n\nTutorial Video: https:\u002F\u002Fyoutu.be\u002F6SHTlKYKCbs\n\nDiscord Server: https:\u002F\u002Fdiscord.gg\u002FCfK7DCWKwy\n\n## How It Works 🤔💭\n\nThe project's functionality is as fascinating as the conversations it enables. Here's a breakdown of how Interactive LLM Powered NPCs works its magic:\n\n🎙️ Microphone Input: Speak into your microphone, and your voice is converted to text.\n\n👥 Facial Recognition: The system employs facial recognition technology to identify which NPC you're interacting with, whether it's a background character or a side character.\n\n🔍 Character Identification: Based on the dialogue and character recognition, a unique personality and name are generated for background NPCs. For side characters, their specific character traits and knowledge are accessed.\n\n🧠 LLM Integration: The transcribed text and character information are passed to a Large Language Model (LLM), which generates a response based on the given context. 
The LLM also utilizes vector stores containing character-specific and world information.\n\n📁 Pre-Conversation Files: To ensure the NPCs speak authentically, the project utilizes a pre-conversation.json file containing the character's iconic lines and lines that reflect their talking style. Random lines from this file are passed to the LLM while generating response, enhancing the generated dialogue.\n\n🗣️ Facial Animation and Speech Generation: The LLM's response is converted to speech using text-to-speech. Then a facial animation video is generated, using the extracted face image of the NPC with the audio.\n\n🕹️ Integration with the Game: The facial animation video and audio are seamlessly integrated into the game by replacing the displayed face pixels with the generated facial animation, making it appear as if the NPC is speaking naturally.\n\n😃 Emotion Recognition: Using your webcam, the system captures your facial emotions, allowing the NPCs to adjust their responses accordingly, creating a more personalized and immersive experience.\n\n🐎 Non-Visual Interactions: The project goes beyond face-to-face conversations by enabling interactions even when the NPC's face is not visible, such as during horseback riding or intense combat sequences. To engage with the NPC, simply say their name first, followed by your dialogue. The system will then follow the same process of transcribing, generating responses, and converting them into speech, providing a seamless and uninterrupted conversation experience, even in action-packed moments. 
So, whether you're galloping through the countryside or battling formidable foes, you can still communicate with NPCs and enjoy the full range of interactive dialogue features.\n\n🌐 Game Compatibility: Works seamlessly with any game, without the need for game modifications or altering the game's source code.\n\n## Prerequisites 🚀🔧\n\n### 🐍 Python 3.10.6\n\nYou can download and install this version of Python from here https:\u002F\u002Fwww.python.org\u002Fdownloads\u002Frelease\u002Fpython-3106\u002F\n\nMake sure to click on the add to path option when running the installer.\n\n### 🐙 GIT\n\nInstall GIT, a version control system, to easily manage your code and collaborate with others. https:\u002F\u002Fgit-scm.com\u002Fdownloads\n\n### 🌐 wget\n\nInstall wget, a command-line utility, to conveniently download files from the web.\n\n### 🛠️🔍 Microsoft Build Tools and Visual Studio\n\nInstall Microsoft Build Tools and Visual Studio, which are essential for compiling and building projects on Windows.\n\nhttps:\u002F\u002Fvisualstudio.microsoft.com\u002Fdownloads\u002F?q=build+tools#visual-studio-professional-2022\n\nhttps:\u002F\u002Fvisualstudio.microsoft.com\u002Fdownloads\u002F?q=build+tools#build-tools-for-visual-studio-2022\n\n### 🎥🔊 FFmpeg\n\nInstall FFmpeg, a powerful multimedia framework, to handle audio and video processing tasks.\n\nFollow the instructions here https:\u002F\u002Fwww.wikihow.com\u002FInstall-FFmpeg-on-Windows\n\n## Installation 🔌✨\n\n1. Open a terminal.\n2. Clone the repository by executing the following command:\n\n```sh\ngit clone https:\u002F\u002Fgithub.com\u002FAkshitIreddy\u002FInteractive-LLM-Powered-NPCs.git\n```\n\n3. Navigate to the cloned repository:\n\n```sh\ncd Interactive-LLM-Powered-NPCs\n```\n\n4. Create a Python virtual environment named .venv:\n\n```sh\npython -m venv .venv\n```\n\n5. Activate the virtual environment:\n\n```sh\n.venv\\scripts\\activate\n```\n\n6. 
Install the required dependencies:\n\n```sh\npip install -r requirements.txt\n```\n\n7. Open a Git Bash terminal.\n8. Change the directory to the \"Interactive-LLM-Powered-NPCs\" folder:\n9. Go inside the SadTalker directory\n\n```sh\ncd sadtalker\n```\n\n10. Download the necessary models\n\n```sh\nbash scripts\u002Fdownload_models.sh\n```\n\n11. Open the \"sadtalker\" directory in file browser, locate the file named \"webui.bat\" and double-click on it. This will create another Python environment called \"venv\". Wait until the message WebUI launched is displayed, and then close the terminal that was opened by webui.bat.\n\n12. Make a Cohere account (Free) and add your Cohere Trial API key to apikeys.json . (Optionally, if you have GPT-4 access and would like to use it, you'll need to make a few small changes to the code)\n\n13. Delete all files and folders inside of video_temp and temp folder\n    \n## Vscode Installation with Jupyter Notebook Support 👨‍💻📔\n\nDownload and install Visual Studio Code on your device. Click on the Extensions icon on the left sidebar. It looks like a square icon made up of four squares. In the Extensions pane, search for \"Jupyter\" using the search bar at the top. Look for the \"Jupyter\" extension provided by Microsoft in the search results and click on the \"Install\" button next to it. This extension enables Jupyter Notebook support in Visual Studio Code. Then install the python extension. Open your terminal and navigate to the root directory of the project. This is the main folder that contains the project files. In the terminal, type code . and hit Enter. This command opens the current directory in Visual Studio Code. Alternatively, you can use the file explorer in Visual Studio Code to navigate to the root directory of the project. 
Once the project is open in Visual Studio Code, you should see the project files displayed on the right-hand side of the editor.\n\nWhile running the Jupyter Notebooks, make sure to select the kernel as .venv (present in top right corner). If you can't see the option to select .venv then click on file then click on open folder and select the main project folder ( Interactive-LLM-Powered-NPCs ), then the notebook should be able to detect .venv .\n\n## Directions for Use 🛠️🤓\n\n1. 📁 Create a folder in the root directory of your project. The folder name should be your game name with spaces replaced by underscores, and avoid using special characters. For example, if your game name is \"Assasins Creed Valhalla\" the folder name could be \"Assasins_Creed_Valhalla\" .\n\n2. 📝 Inside the game folder, create a text file called \"world.txt\" to describe the game and the game world. You can gather in-depth information about your game from websites like https:\u002F\u002Fwww.fandom.com\u002F.\n\n3. 📝 Create another text file called \"public_info.txt\" inside the game folder. This file should contain information about the game world, major events, and any details that a person living in that game world should know. You can find this information on the website mentioned earlier.\n\n4. 📔 Open the Jupyter Notebook called \"create_public_vectordb.ipynb\" located in the root directory. Set the variable \"game_name\" in the first cell to your game's name. Run the first cell to create the public vector database. Then run the second cell to test its performance. Vector databases are like living libraries that provides relevant information about various topics.\n\n5. 📝 Create a folder called npc_personality_creation in your game folder and then copy both the files from Cyberpunk_2077 folder audio_mode_create_personality.py and video_mode_personality.py. 
These files are responsible for generating unique personalities for background NPCs, so you need to modify the template variable according to your game. The template variable has three names and personalities for those names, you need to replace the names with npc names that are common in your game and replace the personalities with personalities common in your game, you can take ChatGPT's help for creating the personalities. These examples will guide the way the LLM will generate the personalities. Once done copy these files into the functions directory replacing the previous personality creation files.\n\n6. 📂 Create a folder named \"characters\" inside the game folder. Copy the \"default\" folder from the \"Cyberpunk_2077\u002Fcharacters\" folder and paste it into the newly created \"characters\" folder. You will need to modify the \"pre_conversation.json\" file present in the \"default\" folder. This file should contain common lines that people speak in your game's world. You can ask ChatGPT to provide dialogues in this JSON format specific to your game. Here's a prompt you can use:\n\n```sh\nGive 30 Common dialogue that people in Cyberpunk 2077 say, it should be in this format\n{\n  \"pre_conversation\": [\n    { \"line\": \"dialogue 1\" },\n    { \"line\": \"dialogue 2\" }\n  ]\n}\n```\n\nReplace the content of the \"pre_conversation.json\" file with the generated dialogue.\n\nThe default folder is used by background NPCs, the bio, name, voice and photo keep changing as you talk to different NPCs.\n\n7. 📂 Create a folder in characters with the name of any side character you would like to talk to, replacing spaces with underscores. Inside this character folder, create an \"images\" folder and place 5 JPG photos of your character (without any other person present in these pictures).\n\n8. 📔 Open the \"create_face_recognition_representation.ipynb\" notebook and set the variables \"character_name\" and \"game_name\" in both cells. 
Run both cells to create representations of your character's face. This step helps the project recognize which character you are interacting with when you are playing the game.\n\n9. 📝 Create a text file called \"bio.txt\" inside the character's folder. This file should contain a small summary about your character, which you can gather from fandom or similar sources.\n\n10. 📝 Create a text file called \"character_knowledge.txt\" inside the character's folder. This file should include information that your character knows but isn't present in the public vector database. You can find this information from fandom or similar sources.\n\n11. 📔 Use the \"create_character_vectordb.ipynb\" notebook to create the character's vector database. Open the notebook, set the variables \"character_name\" and \"game_name\" in the first cell, and run the cell.\n\n12. 📝 Create a \"pre_conversation.json\" file inside the character's folder with common dialogues that this character says. The dialogues should capture the essence of the character and will be used to guide the style of responses. You can refer to the existing \"pre_conversation.json\" file in Cyberpunk 2077's character folder for reference.\n\n13. 📝 Create a \"conversation.json\" file in the character's folder. You can copy this file from Jackie Welles' character folder in Cyberpunk 2077, and change the first dialogue to match your character.\n\n14. 📂 Create a \"voice\" folder. Inside this folder, place a Python script called \"voice.py\" with a function named \"create_speech.\" The function should take text and an output path as parameters to store the generated audio file. You can copy the script from Jackie Welles and modify the voice to match your character's voice. You can use the voice_selection.ipynb to find a voice that is similar to your characters voice. 
If you want to use voice cloning or other techniques, ensure that the \"voice.py\" file in your character's voice folder has the same \"create_speech\" function signature and doesn't use relative paths.\n\n15. ♾️ Repeat the above steps for any other character you would like to interact with in the game.\n\nOnce you have completed these steps, you're ready to play the game. Make sure your webcam and microphone are on to fully experience the interactive elements. 🎮🌟\n\n## Play the Game! 🎮🚀\n\nTo begin, launch your game and open the \"main.ipynb\" file. In the first cell, you'll find the variables \"player_name,\" \"game_name,\" and \"interact_key.\" Set these variables according to your preferences. Once you've done that, run the second cell. If your GPU is struggling to render the facial animation smoothly, you can choose to disable it in parameters.\n\nBy running the second cell, a new window will appear. Drag this window to a separate monitor from the one displaying your game (Make sure your monitor is set to extend and not duplicate, you can find this setting in System > Display). If you don't have another monitor then copy the files from miscellaneous\u002FSingle Monitor to the main project directory(make sure to enter the game_screen_name variable in main.ipynb) . Now, direct the camera in your game towards the non-playable character (NPC) you wish to talk to.\n\nClick on the new window and hold down the interact key until you see the word \"speak\" appear in the right-hand corner. Speak your message, and you will notice the text you said appear in the corner. After a moment, the character will respond to you with both facial animation and voice.\n\nIf, for some reason, the character you want to talk to is not visible on the screen, you can still follow the same steps. You will receive an audio response even if you can't see the character. 
This feature comes in handy when riding a horse with another NPC or engaging in combat with your companion.\n\n## Conclusion and Contribution 🤝🎮\n\nInteractive LLM Powered NPCs opens up exciting possibilities for enhancing NPC interactions in any game, bringing a new level of realism and immersion to your gaming experience.\n\nWe invite you to join us in expanding the compatibility of Interactive LLM Powered NPCs by adding games to the \"Games\" folder. By doing so, you can help save others the effort of adapting the project to new games and increase the number of available games that are compatible. \n\nIf you've created a game compatible with Interactive LLM Powered NPCs, we encourage you to make a pull request and share your game with the community. By adding your game to the collection, you contribute to the growing library of supported games and enable more players to enjoy dynamic and realistic NPC interactions. \n\nAdditionally, you can contribute to the project by improving existing features, fixing bugs, or suggesting new ideas. Feel free to explore the codebase, join discussions, and collaborate with fellow developers to enhance the project further. \n\nTogether, we can shape the future of NPC interactions in gaming and create memorable experiences for players worldwide. So, join us on this exciting journey and let's make gaming dialogue more interactive and captivating for everyone! 
\n\n## Tools Used 🚀🔧\n\n- 🍪 Cohere's Language Models and Embedding Models with Langchain for LLM Agent\n- 🍩 SadTalker For Facial Animation\n- 🍰 Edge-TTS for default voices\n- 🧁 DeepFace for facial recognition, detection, gender, age, emotion detection\n- 🍭 Chromadb for local vectorstores\n- 🍬 SpeechRecognition package for speech recognition\n\n## ❤️ Thanks\n\nIf you found this interesting check out Alystria AI for more fun projects\n\n```sh\nhttps:\u002F\u002Fwww.linkedin.com\u002Fcompany\u002Falystria-ai\n```\n\n- 🌐 Github: https:\u002F\u002Fgithub.com\u002FAkshitIreddy\n- 💡 LinkedIn: https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fakshit-ireddy\n- ✍️ Medium: https:\u002F\u002Fmedium.com\u002F@akshit.r.ireddy\n","## 概述 😄📜\n\n交互式 LLM 驱动的 NPC 是一个开源项目，它彻底改变了你在任何游戏中与非玩家角色（NPC）的互动方式！通过这个项目，你可以使用麦克风直接与游戏中的任何 NPC 对话。\n\n该项目利用 sadtalker 同步角色的口型动作，结合面部识别技术来区分不同的角色；借助向量存储为 NPC 提供无限的记忆容量，并通过预对话文件来塑造每个角色的对话风格。系统会分析你正在互动的具体 NPC，包括其性格、知识和沟通方式，从而做出相应的调整。此外，NPC 还能通过你的摄像头感知你的面部表情，为互动增添了更多层次感。\n\n本项目主要针对已发布的游戏，如《赛博朋克 2077》、《刺客信条》系列、《GTA 5》等热门开放世界作品。这些游戏拥有丰富的 NPC 和精美的场景，却一直缺少一项关键功能——让玩家能够与任意想交谈的 NPC 展开对话。而我们的项目正是为了填补这一空白，为这些游戏带来沉浸式的对话体验，让玩家充分挖掘虚拟世界的潜力。我们的目标是提升那些不太可能由原开发商添加此功能的游戏的体验。\n\n该项目的一大亮点在于其高度的通用性。你无需修改游戏源代码或进行复杂的模组制作，只需替换游戏中生成的面部像素，即可将面部动画无缝融入你的游戏环境。\n\n无论是在《刺客信条：英灵殿》中探索古老的地下城，还是在《赛博朋克 2077》的霓虹街道上漫步，交互式 LLM 驱动的 NPC 都能将你的游戏体验提升到全新的高度。准备好迎接引人入胜、逼真且富有意义的 NPC 互动吧！\n\n## 演示 🚀✨\n![缩略图](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAkshitIreddy_Interactive-LLM-Powered-NPCs_readme_b0e189a42cb6.png)\n\n演示链接：https:\u002F\u002Ftwitter.com\u002FAkshit2089\u002Fstatus\u002F1673687342438051847\n\n讲解视频：\nhttps:\u002F\u002Ftwitter.com\u002Fcohere\u002Fstatus\u002F1687131527174672384\n\n教程视频：https:\u002F\u002Fyoutu.be\u002F6SHTlKYKCbs\n\nDiscord 社区：https:\u002F\u002Fdiscord.gg\u002FCfK7DCWKwy\n\n## 工作原理 🤔💭\n\n该项目的功能如同它所实现的对话一样令人着迷。以下是交互式 LLM 驱动的 NPC 如何发挥其魔力的详细说明：\n\n🎙️ 麦克风输入：对着麦克风说话，你的语音会被转换成文本。\n\n👥 面部识别：系统利用面部识别技术确定你正在与哪个 NPC 互动，无论是背景角色还是重要配角。\n\n🔍 角色识别：根据对话内容和角色识别结果，系统会为背景 NPC 
自动生成独特的个性和名称；而对于重要角色，则会调取其特定的性格特征和知识储备。\n\n🧠 LLM 集成：转录后的文本和角色信息会被传递给大型语言模型（LLM），LLM 会基于上下文生成回复。同时，LLM 还会参考包含角色专属及世界相关信息的向量存储。\n\n📁 预对话文件：为了确保 NPC 的对话更加真实自然，项目会使用 pre-conversation.json 文件，其中包含了角色的经典台词以及体现其说话风格的语句。在生成回复时，系统会从该文件中随机选取部分台词传递给 LLM，从而使对话更加生动流畅。\n\n🗣️ 面部动画与语音合成：LLM 生成的回复会通过文本转语音技术转化为语音，随后再结合提取出的 NPC 面部图像生成一段面部动画视频。\n\n🕹️ 与游戏集成：面部动画视频和音频会被无缝地整合进游戏中，方法是用生成的面部动画替换掉原本显示的 NPC 面部像素，使 NPC 看起来就像在自然地说话一样。\n\n😃 情绪识别：系统会通过你的摄像头捕捉你的面部表情，以便 NPC 能够据此调整自己的回应，从而打造更个性化、更沉浸式的互动体验。\n\n🐎 非视觉互动：该项目不仅限于面对面的对话，即使 NPC 的面部不可见，比如在骑马或激烈战斗时，也能实现互动。只需先说出 NPC 的名字，再开始对话，系统便会按照同样的流程进行语音转文字、生成回复并将其转换为语音，从而提供流畅无阻的对话体验，即便是在紧张刺激的动作场景中也不例外。因此，无论你是策马奔腾于乡间，还是与强大的敌人激战，都可以随时与 NPC 交流，享受完整的交互式对话功能。\n\n🌐 游戏兼容性：本项目可无缝适配任何游戏，无需对游戏进行任何修改或更改其源代码。\n\n## 先决条件 🚀🔧\n\n### 🐍 Python 3.10.6\n\n你可以从这里下载并安装该版本的 Python：https:\u002F\u002Fwww.python.org\u002Fdownloads\u002Frelease\u002Fpython-3106\u002F\n\n请注意，在运行安装程序时务必勾选“将 Python 添加到 PATH”选项。\n\n### 🐙 GIT\n\n安装 GIT 版本控制系统，以便轻松管理代码并与他人协作。https:\u002F\u002Fgit-scm.com\u002Fdownloads\n\n### 🌐 wget\n\n安装 wget 命令行工具，方便从网络下载文件。\n\n### 🛠️🔍 Microsoft 构建工具和 Visual Studio\n\n安装 Microsoft 构建工具和 Visual Studio，它们对于在 Windows 上编译和构建项目至关重要。\n\nhttps:\u002F\u002Fvisualstudio.microsoft.com\u002Fdownloads\u002F?q=build+tools#visual-studio-professional-2022\n\nhttps:\u002F\u002Fvisualstudio.microsoft.com\u002Fdownloads\u002F?q=build+tools#build-tools-for-visual-studio-2022\n\n### 🎥🔊 FFmpeg\n\n安装 FFmpeg 强大的多媒体框架，用于处理音频和视频任务。\n\n请按照以下指南操作：https:\u002F\u002Fwww.wikihow.com\u002FInstall-FFmpeg-on-Windows\n\n## 安装 🔌✨\n\n1. 打开终端。\n2. 通过执行以下命令克隆仓库：\n\n```sh\ngit clone https:\u002F\u002Fgithub.com\u002FAkshitIreddy\u002FInteractive-LLM-Powered-NPCs.git\n```\n\n3. 进入克隆下来的仓库目录：\n\n```sh\ncd Interactive-LLM-Powered-NPCs\n```\n\n4. 创建一个名为 `.venv` 的 Python 虚拟环境：\n\n```sh\npython -m venv .venv\n```\n\n5. 激活虚拟环境：\n\n```sh\n.venv\\scripts\\activate\n```\n\n6. 安装所需的依赖包：\n\n```sh\npip install -r requirements.txt\n```\n\n7. 打开 Git Bash 终端。\n8. 切换到 `Interactive-LLM-Powered-NPCs` 文件夹：\n9. 
进入 `Sadtalker` 目录：\n\n```sh\ncd sadtalker\n```\n\n10. 下载必要的模型：\n\n```sh\nbash scripts\u002Fdownload_models.sh\n```\n\n11. 在文件浏览器中打开 `sadtalker` 目录，找到名为 `webui.bat` 的文件并双击运行。这将创建另一个名为 `venv` 的 Python 环境。等待出现“WebUI 已启动”的提示后，关闭由 `webui.bat` 打开的终端窗口。\n\n12. 注册一个 Cohere 账号（免费），并将你的 Cohere 试用 API 密钥添加到 `apikeys.json` 文件中。（可选：如果你有 GPT-4 的访问权限并希望使用它，需要对代码进行一些小的修改）\n\n13. 删除 `video_temp` 和 `temp` 文件夹内的所有文件和子文件夹。\n\n## 安装支持 Jupyter Notebook 的 VS Code 👨‍💻📔\n\n在你的设备上下载并安装 Visual Studio Code。点击左侧边栏中的扩展图标，它看起来像由四个小方块组成的正方形。在扩展面板中，使用顶部的搜索栏搜索“Jupyter”。在搜索结果中找到微软提供的“Jupyter”扩展，并点击其旁边的“安装”按钮。该扩展将为 Visual Studio Code 添加 Jupyter Notebook 支持。然后安装 Python 扩展。打开终端，导航到项目根目录——即包含项目文件的主文件夹。在终端中输入 `code .` 并按回车键。此命令将在 Visual Studio Code 中打开当前目录。你也可以使用 Visual Studio Code 的文件资源管理器导航到项目根目录。项目在 Visual Studio Code 中打开后，你应该能在编辑器右侧看到项目文件。\n\n运行 Jupyter Notebook 时，请确保选择内核为 `.venv`（位于右上角）。如果看不到选择 `.venv` 的选项，可以点击“文件”，然后选择“打开文件夹”，再选择主项目文件夹（`Interactive-LLM-Powered-NPCs`），这样笔记本应该就能检测到 `.venv` 环境了。\n\n## 使用说明 🛠️🤓\n\n1. 📁 在项目的根目录下创建一个文件夹。文件夹名称应使用您的游戏名称，将空格替换为下划线，并避免使用特殊字符。例如，如果您的游戏名称是“刺客信条：英灵殿”，则文件夹名称可以是“Assasins_Creed_Valhalla”。\n\n2. 📝 在游戏文件夹内创建一个名为“world.txt”的文本文件，用于描述游戏及游戏世界。您可以从诸如https:\u002F\u002Fwww.fandom.com\u002F之类的网站上获取关于您游戏的详细信息。\n\n3. 📝 在游戏文件夹内再创建一个名为“public_info.txt”的文本文件。该文件应包含有关游戏世界、重大事件以及生活在该游戏世界中的人应当了解的任何细节的信息。这些信息同样可以在上述网站上找到。\n\n4. 📔 打开位于根目录下的名为“create_public_vectordb.ipynb”的Jupyter Notebook。在第一个单元格中将变量“game_name”设置为您游戏的名称。运行该单元格以创建公共向量数据库。然后运行第二个单元格来测试其性能。向量数据库就像一座活生生的图书馆，能够提供关于各种主题的相关信息。\n\n5. 📝 在您的游戏文件夹中创建一个名为npc_personality_creation的文件夹，然后从Cyberpunk_2077文件夹中复制audio_mode_create_personality.py和video_mode_personality.py这两个文件。这些文件负责为背景NPC生成独特的个性，因此您需要根据自己的游戏修改模板变量。模板变量中包含了三个名字及其对应的个性，您需要将这些名字替换为您游戏中常见的NPC名称，并将个性替换为您游戏中常见的性格特征，可以借助ChatGPT来帮助创建这些个性。这些示例将指导LLM如何生成NPC的个性。完成后，将这些文件复制到functions目录中，替换原有的个性创建文件。\n\n6. 
📂 在游戏文件夹内创建一个名为“characters”的文件夹。从“Cyberpunk_2077\u002Fcharacters”文件夹中复制“default”文件夹，并将其粘贴到新创建的“characters”文件夹中。您需要修改“default”文件夹中的“pre_conversation.json”文件。该文件应包含您游戏世界中人们常说的常用对话。您可以请ChatGPT以这种JSON格式为您提供特定于您游戏的对话。以下是一个您可以使用的提示：\n\n```sh\n请提供30句《赛博朋克2077》中人们常说的常见对话，格式如下：\n{\n  \"pre_conversation\": [\n    { \"line\": \"对话1\" },\n    { \"line\": \"对话2\" }\n  ]\n}\n```\n\n将生成的对话内容替换“pre_conversation.json”文件中的原有内容。\n\n“default”文件夹供背景NPC使用，当您与不同NPC交谈时，他们的简介、姓名、声音和照片会不断变化。\n\n7. 📂 在characters文件夹中创建一个以您希望与其交谈的某个次要角色命名的文件夹，将其中的空格替换为下划线。在该角色文件夹内创建一个“images”文件夹，并放入5张您角色的JPEG照片（照片中不得出现其他人）。\n\n8. 📔 打开“create_face_recognition_representation.ipynb”笔记本，在两个单元格中分别设置“character_name”和“game_name”变量。运行这两个单元格以创建您角色脸部的表征。此步骤有助于项目在您玩游戏时识别您正在与哪个角色互动。\n\n9. 📝 在角色文件夹内创建一个名为“bio.txt”的文本文件。该文件应包含关于您角色的一小段简介，您可以从fandom或其他类似来源获取相关信息。\n\n10. 📝 在角色文件夹内创建一个名为“character_knowledge.txt”的文本文件。该文件应包括您角色所知但未收录在公共向量数据库中的信息。这些信息可以从fandom或其他类似来源获取。\n\n11. 📔 使用“create_character_vectordb.ipynb”笔记本创建角色的向量数据库。打开该笔记本，在第一个单元格中设置“character_name”和“game_name”变量，并运行该单元格。\n\n12. 📝 在角色文件夹内创建一个“pre_conversation.json”文件，其中包含该角色常说的常用对话。这些对话应能体现角色的个性特征，并用于指导回复的风格。您可以参考《赛博朋克2077》角色文件夹中现有的“pre_conversation.json”文件作为参考。\n\n13. 📝 在角色文件夹中创建一个“conversation.json”文件。您可以从《赛博朋克2077》中Jackie Welles的角色文件夹中复制该文件，并将第一条对话修改为符合您角色特点的内容。\n\n14. 📂 创建一个“voice”文件夹。在该文件夹内放置一个名为“voice.py”的Python脚本，其中包含一个名为“create_speech”的函数。该函数应接受文本和输出路径作为参数，用于存储生成的音频文件。您可以从Jackie Welles的文件夹中复制该脚本，并修改语音以匹配您角色的声音。您可以使用voice_selection.ipynb来寻找与您角色声音相似的语音。如果您希望使用语音克隆或其他技术，请确保您角色voice文件夹中的“voice.py”文件具有相同的“create_speech”函数签名，并且不使用相对路径。\n\n15. 
♾️ 对您在游戏中希望与之互动的任何其他角色重复上述步骤。\n\n完成以上步骤后，您就可以开始玩游戏了。请确保您的摄像头和麦克风已开启，以便充分体验互动功能。🎮🌟\n\n## 开始游戏吧！🎮🚀\n\n首先，启动你的游戏并打开“main.ipynb”文件。在第一个单元格中，你会找到变量“player_name”、“game_name”和“interact_key”。请根据自己的喜好设置这些变量。完成后，运行第二个单元格。如果你的显卡在流畅渲染面部动画时遇到困难，可以在参数中选择将其关闭。\n\n运行第二个单元格后，会弹出一个新的窗口。请将这个窗口拖动到与显示游戏画面的显示器不同的另一台显示器上（确保你的显示器设置为扩展模式而非复制模式，你可以在系统 > 显示设置中找到该选项）。如果你没有另一台显示器，则需要将miscellaneous\u002FSingle Monitor文件夹中的文件复制到主项目目录下，并确保在main.ipynb中正确填写“game_screen_name”变量。接下来，将游戏内的摄像机对准你想对话的非玩家角色（NPC）。\n\n点击新窗口，并按住交互键，直到右下角出现“speak”字样。说出你的信息，你会看到刚才说的话显示在角落里。片刻之后，该角色会通过面部动画和语音对你作出回应。\n\n如果由于某种原因，你想对话的角色并未出现在屏幕上，你仍然可以按照上述步骤操作。即使看不到角色，你依然会收到语音回复。这一功能在与其他NPC一起骑马或与同伴一同战斗时尤为实用。\n\n## 结语与贡献🤝🎮\n\n基于交互式大语言模型的NPC为提升游戏中NPC的互动体验带来了令人兴奋的可能性，能够为你的游戏增添全新的真实感和沉浸感。\n\n我们诚挚邀请你加入我们，共同扩展基于交互式大语言模型的NPC的兼容性，将更多游戏添加到“Games”文件夹中。通过这样做，你可以帮助其他玩家省去将该项目适配新游戏的麻烦，同时增加可用的兼容游戏数量。\n\n如果你已经开发了一款与基于交互式大语言模型的NPC兼容的游戏，欢迎提交拉取请求，与社区分享你的作品。将你的游戏加入到这个集合中，不仅有助于丰富支持的游戏库，还能让更多玩家享受到动态且逼真的NPC互动体验。\n\n此外，你还可以通过改进现有功能、修复漏洞或提出新想法来为项目贡献力量。欢迎深入研究代码库，参与讨论，并与其他开发者协作，进一步完善该项目。\n\n让我们携手共创游戏NPC互动的未来，为全球玩家打造难忘的游戏体验。现在就加入我们，一起让游戏中的对话更加互动、更加引人入胜吧！\n\n## 使用的工具🚀🔧\n\n- 🍪 Cohere的语言模型和嵌入模型，结合Langchain构建LLM智能体\n- 🍩 SadTalker 用于面部动画\n- 🍰 Edge-TTS 用于默认语音合成\n- 🧁 DeepFace 用于面部识别、检测、性别、年龄及情绪分析\n- 🍭 Chromadb 用于本地向量存储\n- 🍬 SpeechRecognition 包用于语音识别\n\n## ❤️ 感谢\n\n如果你觉得这个项目很有趣，不妨去看看Alystria AI的其他精彩项目：\n\n```sh\nhttps:\u002F\u002Fwww.linkedin.com\u002Fcompany\u002Falystria-ai\n```\n\n- 🌐 GitHub：https:\u002F\u002Fgithub.com\u002FAkshitIreddy\n- 💡 LinkedIn：https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fakshit-ireddy\n- ✍️ Medium：https:\u002F\u002Fmedium.com\u002F@akshit.r.ireddy","# Interactive-LLM-Powered-NPCs 快速上手指南\n\n本项目允许玩家通过麦克风与任何游戏（如《赛博朋克 2077》、《刺客信条》、《GTA 5》等）中的 NPC 进行实时语音对话。系统利用大语言模型（LLM）生成回复，结合 SadTalker 实现口型同步，并通过人脸识别和表情分析增强沉浸感，无需修改游戏源代码。\n\n## 环境准备\n\n在开始之前，请确保您的系统满足以下要求并安装必要依赖：\n\n### 系统要求\n- **操作系统**: Windows (推荐)\n- **Python**: 3.10.6 (必须严格匹配此版本)\n  - 下载链接：[Python 
- **Python**: 3.10.6 (this exact version is required)
  - Download: [Python 3.10.6](https://www.python.org/downloads/release/python-3106/)
  - **Note**: be sure to check "Add Python to PATH" during installation.

### Prerequisites
Install the following tools in order:
1. **Git**: version control system ([download](https://git-scm.com/downloads))
2. **wget**: command-line download tool
3. **Microsoft Build Tools & Visual Studio**: needed to compile C++ extensions
   - Download [Build Tools for Visual Studio 2022](https://visualstudio.microsoft.com/downloads/?q=build+tools#build-tools-for-visual-studio-2022)
4. **FFmpeg**: audio/video processing framework
   - Installation guide: [Install FFmpeg on Windows](https://www.wikihow.com/Install-FFmpeg-on-Windows)
5. **API key**: register a [Cohere](https://cohere.com/) account for a free trial API key (or configure GPT-4).

## Installation

Open a terminal (Terminal or Git Bash) and run the following commands in order:

1. **Clone the repository**
   ```sh
   git clone https://github.com/AkshitIreddy/Interactive-LLM-Powered-NPCs.git
   cd Interactive-LLM-Powered-NPCs
   ```

2. **Create and activate a Python virtual environment**
   ```sh
   python -m venv .venv
   .venv\scripts\activate
   ```

3. **Install project dependencies**
   ```sh
   pip install -r requirements.txt
   ```
   > **Tip**: users in China can speed up installation with the Tsinghua mirror: `pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple`

4. **Download the SadTalker models**
   ```sh
   cd sadtalker
   bash scripts/download_models.sh
   ```
   > **Note**: if the `bash` command is unavailable in Windows CMD, run this step in Git Bash. If the download is slow, you can download the model files manually and place them in the matching directories.

5. **Initialize the SadTalker WebUI environment**
   - Open the `sadtalker` directory in your file browser.
   - Double-click `webui.bat`.
   - Wait until the terminal shows `WebUI launched`, then close that terminal window.

6. **Configure the API key**
   - Find the `apikeys.json` file in the project root.
   - Paste in your Cohere API key.

7. **Clean temporary files**
   - Delete everything inside the `video_temp` and `temp` folders.

8. **Set up VS Code (optional but recommended)**
   - Install VS Code with the "Jupyter" and "Python" extensions.
   - Run `code .` in the project root to open the project.
   - When running Jupyter notebooks, make sure the kernel in the top-right corner is set to `.venv`.

## Basic Usage

The following configures Cyberpunk 2077 as an example (the process is the same for other games):

### 1. Create the game configuration directory
In the project root, create a folder named after the game, with spaces replaced by underscores and no special characters:
```text
Cyberpunk_2077
```
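The naming rule in step 1 (spaces become underscores, no special characters) can be sketched as a tiny helper. The function name is hypothetical; the guide itself just asks you to name the folder by hand:

```python
import re

def game_folder_name(game_name: str) -> str:
    """Turn a game title into a config-folder name per step 1's rule."""
    # Spaces -> underscores, then drop any remaining special characters.
    name = game_name.strip().replace(" ", "_")
    return re.sub(r"[^A-Za-z0-9_]", "", name)

print(game_folder_name("Cyberpunk 2077"))  # → Cyberpunk_2077
```

For example, "GTA 5" becomes `GTA_5` and "Assassin's Creed" becomes `Assassins_Creed`.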
### 2. Write the world-lore files
Inside the `Cyberpunk_2077` folder, create the following two text files; the content can come from Fandom or similar wikis:
- `world.txt`: describes the game and its world setting.
- `public_info.txt`: describes major events and common knowledge in that world.

### 3. Build the vector database
- Open `create_public_vectordb.ipynb` in the project root.
- Set the `game_name` variable in the first cell to `"Cyberpunk_2077"`.
- Run the first cell to create the database, then run the second cell to test it.

### 4. Configure NPC personality generation
- Create an `npc_personality_creation` folder inside `Cyberpunk_2077`.
- Copy `audio_mode_create_personality.py` and `video_mode_personality.py` from the example `Cyberpunk_2077` folder into this new folder.
- **Edit the code**: in both files, change the `template` variable, replacing the example names and personalities with NPC names and personality traits common in your game (ChatGPT can help generate these).
- Copy the edited files into the `functions` folder in the project root, overwriting the originals.

### 5. Configure the default conversation style
- Create a `characters` folder inside `Cyberpunk_2077`.
- Copy the `Cyberpunk_2077/characters/default` folder into the new `characters` directory.
- Edit `default/pre_conversation.json` and fill in common lines that fit the game's tone, in this format:
  ```json
  {
    "pre_conversation": [
      { "line": "dialogue line 1" },
      { "line": "dialogue line 2" }
    ]
  }
  ```

### 6. Add a specific character (optional)
To talk to a specific major character (e.g. Johnny Silverhand):
- Create a folder named after the character inside the `characters` folder (e.g. `Johnny_Silverhand`).
- Create an `images` subfolder inside it.
- Add 5 clear, front-facing JPG photos of the character (no other people in the photos).
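The `pre_conversation.json` format from step 5 can be written and sanity-checked with a short stdlib-only script. This is a sketch: the file name follows the guide, but the validation rules (a non-empty `"pre_conversation"` list of `{"line": str}` objects) are an assumption about what the loader expects.

```python
import json
import os
import tempfile

def check_pre_conversation(path):
    """Load a pre_conversation.json and return its dialogue lines."""
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    lines = data["pre_conversation"]
    # Assumed rules: a non-empty list of {"line": <string>} objects.
    assert isinstance(lines, list) and lines, "need at least one line"
    assert all(isinstance(entry.get("line"), str) for entry in lines)
    return [entry["line"] for entry in lines]

# Write a sample file in the format shown above, then validate it.
sample = {"pre_conversation": [{"line": "Hey, choom."},
                               {"line": "Stay outta trouble."}]}
path = os.path.join(tempfile.mkdtemp(), "pre_conversation.json")
with open(path, "w", encoding="utf-8") as f:
    json.dump(sample, f, ensure_ascii=False, indent=2)

print(check_pre_conversation(path))  # → ['Hey, choom.', 'Stay outta trouble.']
```

Running a check like this before launching the game catches malformed JSON early, instead of failing mid-conversation.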
### 7. Launch and interact
With the configuration above in place, run the main program (for the exact launch script, refer to the project's Jupyter demo files or later updates), start the game, and speak into your microphone to interact with NPCs. The system automatically recognizes the NPC's face on screen and replaces it with the generated animation.

## A Scenario: Before and After

A Cyberpunk 2077 player tries to have a meaningful interaction with a random passerby NPC on the streets of Night City, hoping to uncover hidden backstory.

### Without Interactive-LLM-Powered-NPCs
- **Rigid dialogue**: the NPC can only repeat a few fixed, scripted lines. No matter what the player asks, nothing beyond the script comes back, and the interaction ends instantly.
- **No memory**: the NPC cannot remember what the player said a second ago or what just happened; every exchange feels like a first meeting, with no continuity.
- **No emotional feedback**: even if the player sounds angry or playful over the microphone, the NPC's expression and tone stay mechanical; it cannot perceive the player's mood.
- **Generic personalities**: background characters have no distinctive personality or knowledge; every non-key character talks the same way, which badly breaks immersion.

### With Interactive-LLM-Powered-NPCs
- **Open-ended conversation**: powered by a large language model, the player can chat with NPCs directly through the microphone, and the NPC generates coherent, never-scripted replies that fit the context.
- **Persistent memory**: backed by vector storage, the NPC remembers details of earlier conversations and can even bring up past events in later encounters, building real relationships.
- **Multimodal emotional awareness**: the system captures the player's facial expression through the camera and, combined with vocal tone, adjusts the NPC's lip movement (SadTalker) and emotional reaction, genuinely reading the room.
- **Distinct personas**: pre-conversation files inject each character's speaking style and knowledge base, giving every passerby a vivid personality and a lore-consistent backstory.

Interactive-LLM-Powered-NPCs turns a static game world into a dynamic social sandbox, giving every virtual resident a soul of its own.

## Setup Notes

- **Platform**: Windows (MIT license).
- **GPU**: no specific model or VRAM amount is stated, but since the project relies on SadTalker for facial animation and real-time video processing, a CUDA-capable NVIDIA GPU is typically implied.

1. Microsoft Build Tools and Visual Studio must be installed to build the project on Windows.
2. The SadTalker model files must be downloaded and configured separately (via the provided script).
3. A Cohere API key must be configured (or modify the code to use GPT-4).
4. Clear the video_temp and temp folders before running.
5. Use VS Code with the Jupyter extension to run the notebooks, and make sure the kernel points to the project's .venv virtual environment.
6. World-lore files and character personality configuration files must be created manually for each game.

**Dependencies**: the packages defined in requirements.txt, plus SadTalker, FFmpeg, Git, wget, Microsoft Build Tools, and Visual Studio Build Tools (Python 3.10.6).

## FAQ

**Is there a video tutorial for installing and using this project?**
Yes. The maintainer has added a video tutorial link to the README; see the project homepage's README for detailed installation steps and a feature demo.
(Source: https://github.com/AkshitIreddy/Interactive-LLM-Powered-NPCs/issues/3)

**Is there a Discord community for this project?**
Yes. The maintainer accepted the suggestion and added a Discord server invite link to the README; click it to join the community for questions and discussion.
(Source: https://github.com/AkshitIreddy/Interactive-LLM-Powered-NPCs/issues/2)

**How can this project be combined with FiveM GTA:V mods to create LLM-driven NPCs?**
Link this project to an existing FiveM NPC script (such as one posted on the forums, or the npcs.lua example on GitHub). Make sure the server side is configured correctly (for example, adding the required server-side additions), otherwise the mod may not work. Once configured, you can explore features such as external face recognition and integrate them with the LLM project.
(Source: https://github.com/AkshitIreddy/Interactive-LLM-Powered-NPCs/issues/8)

**If the mod doesn't run, what is the most common configuration mistake?**
A common cause is an incorrectly configured server side, in particular omitting the required server additions. Check that the server configuration is complete and that both the client-side and server-side scripts are deployed correctly.
(Source: https://github.com/AkshitIreddy/Interactive-LLM-Powered-NPCs/issues/8)

## Releases

- v1.0.0, released 2023-09-17.