[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-NPC-Worldwide--npcpy":3,"tool-NPC-Worldwide--npcpy":62},[4,18,26,36,46,54],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",159636,2,"2026-04-17T23:33:34",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":42,"last_commit_at":43,"category_tags":44,"status":17},8272,"opencode","anomalyco\u002Fopencode","OpenCode 是一款开源的 AI 编程助手（Coding Agent），旨在像一位智能搭档一样融入您的开发流程。它不仅仅是一个代码补全插件，而是一个能够理解项目上下文、自主规划任务并执行复杂编码操作的智能体。无论是生成全新功能、重构现有代码，还是排查难以定位的 Bug，OpenCode 都能通过自然语言交互高效完成，显著减少开发者在重复性劳动和上下文切换上的时间消耗。\n\n这款工具专为软件开发者、工程师及技术研究人员设计，特别适合希望利用大模型能力来提升编码效率、加速原型开发或处理遗留代码维护的专业人群。其核心亮点在于完全开源的架构，这意味着用户可以审查代码逻辑、自定义行为策略，甚至私有化部署以保障数据安全，彻底打破了传统闭源 AI 助手的“黑盒”限制。\n\n在技术体验上，OpenCode 提供了灵活的终端界面（Terminal UI）和正在测试中的桌面应用程序，支持 macOS、Windows 及 Linux 全平台。它兼容多种包管理工具，安装便捷，并能无缝集成到现有的开发环境中。无论您是追求极致控制权的资深极客，还是渴望提升产出的独立开发者，OpenCode 都提供了一个透明、可信",144296,1,"2026-04-16T14:50:03",[13,45],"插件",{"id":47,"name":48,"github_repo":49,"description_zh":50,"stars":51,"difficulty_score":32,"last_commit_at":52,"category_tags":53,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 
都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":55,"name":56,"github_repo":57,"description_zh":58,"stars":59,"difficulty_score":32,"last_commit_at":60,"category_tags":61,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[45,13,15,14],{"id":63,"github_repo":64,"name":65,"description_en":66,"description_zh":67,"ai_summary_zh":67,"readme_en":68,"readme_zh":69,"quickstart_zh":70,"use_case_zh":71,"hero_image_url":72,"owner_login":73,"owner_name":74,"owner_avatar_url":75,"owner_bio":76,"owner_company":76,"owner_location":76,"owner_email":76,"owner_twitter":76,"owner_website":76,"owner_url":77,"languages":78,"stars":90,"forks":91,"last_commit_at":92,"license":93,"difficulty_score":32,"env_os":94,"env_gpu":95,"env_ram":94,"env_deps":96,"category_tags":101,"github_topics":102,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":113,"updated_at":114,"faqs":115,"releases":148},8749,"NPC-Worldwide\u002Fnpcpy","npcpy","The python library for research and development in NLP, multimodal LLMs, Agents, ML, Knowledge Graphs, and more.","npcpy 是一个专为自然语言处理、多模态大模型及智能体研发设计的灵活 Python 框架。它旨在解决开发者在构建 AI 应用时面临的模型接入复杂、多智能体协作困难以及工具调用繁琐等痛点，让从本地部署到云端服务的大模型调用变得统一且简单。\n\n无论是希望快速验证想法的研究人员，还是需要构建复杂自动化流程的软件工程师，都能通过 npcpy 高效开展工作。其核心亮点在于极简的 API 设计：用户只需几行代码即可定义具有特定人设的智能体（NPC），或直接发起大模型对话。框架原生支持多智能体团队协作，允许开发者轻松挂载自定义工具（如运行测试、代码审查），并内置了专门的编程智能体以自动执行生成的代码块。此外，npcpy 还具备流式输出解析和原生 JSON 
结构化返回能力，大幅降低了处理实时响应和数据提取的难度。通过屏蔽底层不同提供商的差异，npcpy 让用户能更专注于业务逻辑与创新研究，是探索下一代 AI 应用的得力助手。","\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fnpcpy.readthedocs.io\u002F\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FNPC-Worldwide_npcpy_readme_b818a73e51b4.png\" alt=\"npc-python logo\" width=250>\u003C\u002Fa>\n\u003C\u002Fp>\n\n# npcpy\n\n`npcpy` is a flexible agent framework for building AI applications and conducting research with LLMs. It supports local and cloud providers, multi-agent teams, tool calling, image\u002Faudio\u002Fvideo generation, knowledge graphs, fine-tuning, and more.\n\n```bash\npip install npcpy\n```\n\n## Quick Examples\n\n### Create and use personas\n\n```python\nfrom npcpy import NPC\n\nsimon = NPC(\n    name='Simon Bolivar',\n    primary_directive='Liberate South America from the Spanish Royalists.',\n    model='gemma3:4b',\n    provider='ollama'\n)\nresponse = simon.get_llm_response(\"What is the most important territory to retain in the Andes?\")\nprint(response['response'])\n\n```\n\n### Direct LLM call\n\n```python\nfrom npcpy import get_llm_response\n\nresponse = get_llm_response(\"Who was the celtic messenger god?\", model='qwen3:4b', provider='ollama')\nprint(response['response'])\n# or use ollama's cloud models\n\ntest = get_llm_response('who is john wick', model='minimax-m2.7:cloud', provider='ollama',)\n\nprint(test['response'])\n```\n\n### Agent with tools\n\n```python\nfrom npcpy import Agent, ToolAgent, CodingAgent\n\n# Agent — comes with default tools (sh, python, edit_file, web_search, etc.)\nagent = Agent(name='ops', model='qwen3.5:2b', provider='ollama')\nprint(agent.run(\"Find all Python files over 500 lines in this repo and list them\"))\n\n# ToolAgent — add your own tools alongside defaults\nimport subprocess\n\ndef run_tests(test_path: str = \"tests\u002F\") -> str:\n    \"\"\"Run pytest on the given path and return results.\"\"\"\n    result = subprocess.run([\"python3\", \"-m\", 
\"pytest\", test_path, \"-v\", \"--tb=short\"],\n                            capture_output=True, text=True, timeout=120)\n    return result.stdout + result.stderr\n\ndef git_diff(branch: str = \"main\") -> str:\n    \"\"\"Show the git diff against a branch.\"\"\"\n    result = subprocess.run([\"git\", \"diff\", branch, \"--stat\"], capture_output=True, text=True)\n    return result.stdout\n\nreviewer = ToolAgent(\n    name='code_reviewer',\n    primary_directive='You review code changes, run tests, and report issues.',\n    tools=[run_tests, git_diff],\n    model='qwen3.5:2b', provider='ollama'\n)\nprint(reviewer.run(\"Run the tests and summarize any failures\"))\n\n# CodingAgent — auto-executes code blocks from LLM responses\ncoder = CodingAgent(name='coder', language='python', model='qwen3.5:2b', provider='ollama')\nprint(coder.run(\"Write a script that finds duplicate files by hash in the current directory\"))\n```\n\n### Streaming\n\n```python\nfrom npcpy import get_llm_response\nfrom npcpy.streaming import parse_stream_chunk\n\nresponse = get_llm_response(\"Explain quantum entanglement.\", model='qwen3.5:2b', provider='ollama', stream=True)\nfor chunk in response['response']:\n    content, _, _ = parse_stream_chunk(chunk, provider='ollama')\n    if content:\n        print(content, end='', flush=True)\n\n# Works the same with any provider\nresponse = get_llm_response(\"Explain quantum entanglement.\", model='gemini-2.5-flash', provider='gemini', stream=True)\nfor chunk in response['response']:\n    content, _, _ = parse_stream_chunk(chunk, provider='gemini')\n    if content:\n        print(content, end='', flush=True)\n```\n\n### JSON output\n\nInclude the expected JSON structure in your prompt. 
With `format='json'`, the response is auto-parsed — `response['response']` is already a dict or list.\n\n```python\nfrom npcpy import get_llm_response\n\nresponse = get_llm_response(\n    '''List 3 planets from the sun.\n    Return JSON: {\"planets\": [{\"name\": \"planet name\", \"distance_au\": 0.0, \"num_moons\": 0}]}''',\n    model='qwen3.5:2b', provider='ollama',\n    format='json'\n)\nfor planet in response['response']['planets']:\n    print(f\"{planet['name']}: {planet['distance_au']} AU, {planet['num_moons']} moons\")\n\nresponse = get_llm_response(\n    '''Analyze this review: 'The battery life is amazing but the screen is too dim.'\n    Return JSON: {\"tone\": \"positive\u002Fnegative\u002Fmixed\", \"key_phrases\": [\"phrase1\", \"phrase2\"], \"confidence\": 0.0}''',\n    model='qwen3.5:2b', provider='ollama',\n    format='json'\n)\nresult = response['response']\nprint(result['tone'], result['key_phrases'])\n```\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Pydantic structured output\u003C\u002Fb>\u003C\u002Fsummary>\n\nPass a Pydantic model and the JSON schema is sent to the LLM directly.\n\n```python\nfrom npcpy import get_llm_response\nfrom pydantic import BaseModel\nfrom typing import List\n\nclass Planet(BaseModel):\n    name: str\n    distance_au: float\n    num_moons: int\n\nclass SolarSystem(BaseModel):\n    planets: List[Planet]\n\nresponse = get_llm_response(\n    \"List the first 4 planets from the sun.\",\n    model='qwen3.5:2b', provider='ollama',\n    format=SolarSystem\n)\nfor p in response['response']['planets']:\n    print(f\"{p['name']}: {p['distance_au']} AU, {p['num_moons']} moons\")\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Image, audio, and video generation\u003C\u002Fb>\u003C\u002Fsummary>\n\n```python\nfrom npcpy.llm_funcs import gen_image, gen_video\nfrom npcpy.gen.audio_gen import text_to_speech\n\n# Image — OpenAI, Gemini, Ollama, or diffusers\nimages = gen_image(\"A sunset over the mountains\", 
model='gpt-image-1', provider='openai')\nimages[0].save(\"sunset.png\")\n\n# Audio — OpenAI, Gemini, ElevenLabs, Kokoro, gTTS\naudio_bytes = text_to_speech(\"Hello from npcpy!\", engine=\"openai\", voice=\"alloy\")\nwith open(\"hello.wav\", \"wb\") as f:\n    f.write(audio_bytes)\n\n# Video — Gemini Veo\nresult = gen_video(\"A cat riding a skateboard\", model='veo-3.1-fast-generate-preview', provider='gemini')\nprint(result['output'])\n```\n\n\u003C\u002Fdetails>\n\n### Multi-agent team\n\n```python\nfrom npcpy import NPC, Team\n\nteam = Team(team_path='.\u002Fnpc_team')\nresult = team.orchestrate(\"Analyze the latest sales data and draft a report\")\nprint(result['output'])\n```\n\nOr define a team in code:\n\n```python\nfrom npcpy import NPC, Team\n\ncoordinator = NPC(name='lead', primary_directive='Coordinate the team. Delegate to @analyst and @writer.')\nanalyst = NPC(name='analyst', primary_directive='Analyze data. Provide numbers and trends.', model='gemini-2.5-flash', provider='gemini')\nwriter = NPC(name='writer', primary_directive='Write clear reports from analysis.', model='qwen3:8b', provider='ollama')\n\nteam = Team(npcs=[coordinator, analyst, writer], forenpc='lead')\nresult = team.orchestrate(\"What are the trends in renewable energy adoption?\")\nprint(result['output'])\n```\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Team from files — .npc, .jinx, team.ctx\u003C\u002Fb>\u003C\u002Fsummary>\n\n**team.ctx:**\n```yaml\ncontext: |\n  Research team for analyzing scientific literature.\n  The lead delegates to specialists as needed.\nforenpc: lead\nmodel: qwen3.5:2b\nprovider: ollama\noutput_format: markdown\nmax_search_results: 5\nmcp_servers:\n  - path: ~\u002F.npcsh\u002Fmcp_server.py\n```\n\n**lead.npc:**\n```yaml\n#!\u002Fusr\u002Fbin\u002Fenv npc\nname: lead\nprimary_directive: |\n  You lead the research team. Delegate literature searches to @searcher,\n  data analysis to @analyst. 
Synthesize their findings into a coherent summary.\njinxes:\n  - {{ Jinx('sh') }}\n  - {{ Jinx('python') }}\n  - {{ Jinx('delegate') }}\n  - {{ Jinx('web_search') }}\n```\n\n**searcher.npc:**\n```yaml\n#!\u002Fusr\u002Fbin\u002Fenv npc\nname: searcher\nprimary_directive: |\n  You search for scientific papers and extract key findings.\n  Use web_search and load_file to find and read papers.\nmodel: gemini-2.5-flash\nprovider: gemini\njinxes:\n  - {{ Jinx('web_search') }}\n  - {{ Jinx('load_file') }}\n  - {{ Jinx('sh') }}\n```\n\n**Jinxes can reference a specific NPC** to always run under that persona, and **access `ctx` variables** from `team.ctx`:\n\n**jinxes\u002Fsearch_and_summarize.jinx:**\n```yaml\n#!\u002Fusr\u002Fbin\u002Fenv npc\njinx_name: search_and_summarize\ndescription: Search for papers and summarize findings using the searcher NPC.\nnpc: {{ NPC('searcher') }}\ninputs:\n  - query\nsteps:\n  - name: search\n    engine: natural\n    code: |\n      Search for papers about {{ query }}.\n      Return up to {{ ctx.max_search_results }} results.\n  - name: summarize\n    engine: natural\n    code: |\n      Summarize the findings in {{ ctx.output_format }} format:\n      {{ output }}\n```\n\nThe `npc:` field binds the jinx to a specific NPC — when this jinx runs, it always uses the `searcher` persona regardless of which NPC invoked it. 
Any custom keys in `team.ctx` (like `output_format`, `max_search_results`) are available as `{{ ctx.key }}` in Jinja templates and as `context['key']` in Python steps.\n\n```\nmy_project\u002F\n├── npc_team\u002F\n│   ├── team.ctx\n│   ├── lead.npc\n│   ├── searcher.npc\n│   ├── analyst.npc\n│   ├── jinxes\u002F\n│   │   └── skills\u002F\n│   └── models\u002F\n├── agents.md             # Optional: define agents in markdown\n└── agents\u002F               # Optional: one .md file per agent\n    └── translator.md\n```\n\n`.npc` and `.jinx` files are directly executable:\n```bash\n.\u002Fnpc_team\u002Flead.npc \"summarize the latest arxiv papers on transformers\"\n.\u002Fnpc_team\u002Fjinxes\u002Flib\u002Fsh.jinx bash_command=\"echo hello\"\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>MCP server integration\u003C\u002Fb>\u003C\u002Fsummary>\n\nAdd MCP servers to your team for external tool access:\n\n**team.ctx:**\n```yaml\nforenpc: assistant\nmcp_servers:\n  - path: .\u002Ftools\u002Fdb_server.py\n  - path: .\u002Ftools\u002Fapi_server.py\n```\n\n**db_server.py:**\n```python\nfrom mcp.server.fastmcp import FastMCP\n\nmcp = FastMCP(\"Database Tools\")\n\n@mcp.tool()\ndef query_orders(customer_id: str, limit: int = 10) -> str:\n    \"\"\"Query recent orders for a customer.\"\"\"\n    # Your database logic here\n    return f\"Found {limit} orders for customer {customer_id}\"\n\n@mcp.tool()\ndef search_products(query: str) -> str:\n    \"\"\"Search the product catalog.\"\"\"\n    return f\"Products matching: {query}\"\n\nif __name__ == \"__main__\":\n    mcp.run()\n```\n\nThe team's NPCs automatically get access to MCP tools alongside their jinxes.\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Agent definitions in markdown &amp; Skills\u003C\u002Fb>\u003C\u002Fsummary>\n\n**agents.md** — multiple agents in one file:\n```markdown\n## summarizer\nYou summarize long documents into concise bullet points.\nFocus on key findings, 
methodology, and conclusions.\n\n## fact_checker\nYou verify claims against reliable sources and flag inaccuracies.\nAlways cite your sources.\n```\n\n**agents\u002Ftranslator.md** — one file per agent with optional frontmatter:\n```markdown\n---\nmodel: gemini-2.5-flash\nprovider: gemini\n---\nYou translate content between languages while preserving tone and idiom.\n```\n\nSkills are knowledge-content jinxes that provide instructional sections to agents on demand.\n\n**npc_team\u002Fjinxes\u002Fskills\u002Fcode-review\u002FSKILL.md:**\n```markdown\n---\nname: code-review\ndescription: Use when reviewing code for quality, security, and best practices.\n---\n# Code Review Skill\n\n## checklist\n- Check for security vulnerabilities (SQL injection, XSS, etc.)\n- Verify error handling and edge cases\n- Review naming conventions and code clarity\n\n## security\nFocus on OWASP top 10 vulnerabilities...\n```\n\nReference in your NPC:\n```yaml\njinxes:\n  - {{ Jinx('skills\u002Fcode-review') }}\n```\n\n\u003C\u002Fdetails>\n\n### CLI tools\n\n```bash\n# The NPC shell — the recommended way to use NPC teams\nnpcsh                        # Interactive shell with agents, tools, and jinxes\n\n# Scaffold a new team\nnpc-init\n\n# Launch AI coding tools as an NPC from your team\nnpc-claude --npc corca       # Claude Code\nnpc-codex --npc analyst      # Codex\nnpc-gemini                   # Gemini CLI (interactive picker)\nnpc-opencode \u002F npc-aider \u002F npc-amp\n\n# Register MCP server + hooks for deeper integration\nnpc-plugin claude\n```\n\n### NPCArray — parallel jinx across multiple NPCs\n\nRun any jinx in parallel across a list of NPC instances and collect results as an array:\n\n```python\nfrom npcpy import NPC\nfrom npcpy.npc_array import NPCArray\n\n# Three NPCs with different models\u002Fproviders\nnpcs = [\n    NPC(name='drafter', primary_directive='Draft concise commit messages.', model='qwen3:4b', provider='ollama'),\n    NPC(name='reviewer', 
primary_directive='Review and improve commit messages for clarity.', model='gemini-2.5-flash', provider='gemini'),\n    NPC(name='enforcer', primary_directive='Check commit messages follow Conventional Commits spec.', model='gemini-2.5-flash', provider='gemini'),\n]\n\narr = NPCArray.from_npcs(npcs)\n\n# Run the same jinx on all three in parallel, collect results\nresults = arr.jinx('summarize', inputs={'topic': 'fix auth middleware to propagate clerkUserId through GraphQL resolvers'}).collect()\nfor npc, result in zip(npcs, results.data):\n    print(f\"[{npc.name}] {result}\")\n```\n\nYou can also pass a list directly to `jinx.execute()`:\n\n```python\nfrom npcpy.npc_compiler import load_jinx_from_file\n\njinx = load_jinx_from_file('npc_team\u002Fjinxes\u002Fanalyze.jinx')\nresults = jinx.execute({'topic': 'rate limiting'}, npc=npcs)  # list → parallel NPCArray run\n```\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Knowledge graphs\u003C\u002Fb>\u003C\u002Fsummary>\n\nBuild, evolve, and search knowledge graphs from text. The KG grows through waking (assimilation), sleeping (consolidation), and dreaming (speculative synthesis).\n\n```python\nfrom npcpy.memory.knowledge_graph import (\n    kg_initial, kg_evolve_incremental, kg_sleep_process,\n    kg_dream_process, kg_hybrid_search,\n)\nfrom npcpy.data.load_file import load_file_contents\n\n# Seed the KG from a design doc PDF and a migration script\ndesign_doc = load_file_contents(\"docs\u002Fauth_migration_plan.pdf\")\nmigration_sql = load_file_contents(\"migrations\u002F003_clerk_auth.sql\")\n\nkg = kg_initial(\n    content=design_doc + \"\\n\\n\" + migration_sql,\n    model=\"qwen3:4b\", provider=\"ollama\",\n)\n\n# Assimilate follow-up commits and PR descriptions\nkg, _ = kg_evolve_incremental(\n    kg,\n    new_content_text=(\n        \"PR #412: Replaced Stripe customer-session lookup with Clerk JWT verification. \"\n        \"Removed \u002Fapi\u002Fstripe\u002Fwebhook endpoint. 
Added ClerkMiddleware to all protected routes. \"\n        \"CSP headers updated to allow clerk.accounts.dev origin.\"\n    ),\n    model=\"qwen3:4b\", provider=\"ollama\", get_concepts=True,\n)\n\n# Consolidate — merge redundant nodes, strengthen high-frequency edges\nkg, sleep_report = kg_sleep_process(kg, model=\"qwen3:4b\", provider=\"ollama\")\n\n# Dream — generate speculative connections between loosely related concepts\nkg, dream_report = kg_dream_process(kg, model=\"qwen3:4b\", provider=\"ollama\")\n\n# Search across facts, concepts, and speculative edges\nresults = kg_hybrid_search(kg, \"How does auth propagate through GraphQL resolvers?\",\n                           model=\"qwen3:4b\", provider=\"ollama\")\nfor r in results:\n    print(r['score'], r['text'])\nprint(f\"{len(kg['facts'])} facts, {len(kg['concepts'])} concepts\")\n```\n\nExtract structured memories from conversations:\n\n```python\nfrom npcpy.llm_funcs import get_facts\n\nconversation = \"\"\"\nUser: We're ripping out Stripe entirely and moving auth to Clerk. The JWT verification\n      will happen in ClerkMiddleware instead of the custom verify_stripe_session helper.\nAssistant: Got it. I'll update the middleware chain. 
What about the existing session store?\nUser: Kill the Redis session cache — Clerk handles session state on their end.\n      Also, the CSP headers need clerk.accounts.dev and clerk.enpisi.com added to connect-src.\n\"\"\"\n\nfacts = get_facts(conversation, model=\"qwen3:4b\", provider=\"ollama\")\nfor f in facts:\n    print(f\"[{f.get('category', 'general')}] {f['statement']}\")\n# [architecture] Auth provider migrated from Stripe to Clerk with JWT verification via ClerkMiddleware\n# [infrastructure] Redis session cache removed — Clerk manages session state\n# [security] CSP connect-src updated to include clerk.accounts.dev and clerk.enpisi.com\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Sememolution — population-based KG evolution\u003C\u002Fb>\u003C\u002Fsummary>\n\nMaintain a population of KG variants that evolve independently. Each individual has Poisson-sampled search parameters, producing different traversals each query. Selection pressure from response ranking drives convergence toward useful graph structures.\n\n```python\nfrom pathlib import Path\nfrom npcpy.memory.kg_population import SememolutionPopulation\nfrom npcpy.data.load_file import load_file_contents\n\npop = SememolutionPopulation(population_size=100, sample_size=10)\npop.initialize()\n\n# Ingest a heterogeneous corpus — PDFs, DOCX, source code, meeting transcripts\ncorpus_dirs = [Path(\"docs\u002Farchitecture\"), Path(\"docs\u002Fmeeting_notes\"), Path(\"src\u002Fauth\")]\nfor d in corpus_dirs:\n    for f in sorted(d.glob(\"*\")):\n        if f.suffix in (\".pdf\", \".docx\", \".md\", \".py\", \".ts\", \".txt\"):\n            text = load_file_contents(str(f))\n            pop.assimilate_text(text)\n\n# Sleep\u002Fdream cycle — each individual consolidates according to its genome\npop.sleep_cycle()\n\n# Query: sample 10 individuals, generate competing responses, rank them\nrankings = pop.query_and_rank(\"How does the auth middleware chain interact with the GraphQL 
context?\")\nfor rank, entry in enumerate(rankings[:3], 1):\n    print(f\"#{rank} (individual {entry['id']}, score {entry['score']:.3f}): {entry['response'][:120]}...\")\n\n# Selection + reproduction — top performers breed, bottom are replaced\npop.evolve_generation()\n\nstats = pop.get_stats()\nprint(f\"Generation {stats['generation']} | avg fitness {stats['avg_fitness']:.3f} | \"\n      f\"best fitness {stats['best_fitness']:.3f} | diversity {stats['diversity']:.3f}\")\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Fine-tuning (SFT, RL, MLX)\u003C\u002Fb>\u003C\u002Fsummary>\n\n```python\nfrom npcpy.ft.sft import run_sft\n\n# Train a model to extract structured decisions from meeting notes\n# LoRA fine-tuning — auto-uses MLX on Apple Silicon\nX_train = [\n    \"Meeting: Auth Migration Sync (2025-01-15)\\nAttendees: Sarah, Mike, Priya\\n\"\n    \"Discussion: Evaluated Clerk vs Auth0 for replacing Stripe auth. Clerk chosen \"\n    \"for lower latency and native Next.js support. Migration starts sprint 12. \"\n    \"Redis session store will be removed once Clerk JWT verification is stable.\",\n\n    \"Meeting: API Rate Limiting Review (2025-01-22)\\nAttendees: Mike, Jordan\\n\"\n    \"Discussion: Current per-session token bucket is incompatible with Clerk's \"\n    \"stateless JWTs. Agreed to switch to per-IP sliding window with 100 req\u002Fmin \"\n    \"default. Premium tier gets 500 req\u002Fmin. Jordan to implement by Friday.\",\n\n    \"Meeting: GraphQL Schema Freeze (2025-02-01)\\nAttendees: Sarah, Priya, Jordan\\n\"\n    \"Discussion: Schema v2 locked for release. Nested auth context propagation \"\n    \"through dataloaders confirmed working. New 'viewer' pattern adopted for \"\n    \"all authenticated queries. Breaking changes documented in CHANGELOG.\",\n\n    \"Meeting: Deployment Postmortem (2025-02-10)\\nAttendees: full team\\n\"\n    \"Discussion: Production outage caused by missing CSP header for clerk.accounts.dev. 
\"\n    \"Root cause: deploy script didn't pick up new env vars. Fix: added CSP validation \"\n    \"to CI pipeline. New rule: all external origins must be in csp_allowlist.json.\",\n]\ny_train = [\n    '{\"decisions\": [{\"what\": \"Adopt Clerk for auth\", \"why\": \"Lower latency, native Next.js support\", \"owner\": \"team\", \"deadline\": \"sprint 12\"}, {\"what\": \"Remove Redis session store\", \"why\": \"Clerk handles session state\", \"owner\": \"team\", \"deadline\": \"after JWT verification stable\"}]}',\n    '{\"decisions\": [{\"what\": \"Switch to per-IP sliding window rate limiter\", \"why\": \"Token bucket incompatible with stateless JWTs\", \"owner\": \"Jordan\", \"deadline\": \"Friday\"}, {\"what\": \"Set rate limits to 100\u002Fmin default, 500\u002Fmin premium\", \"why\": \"Tiered access control\", \"owner\": \"Jordan\", \"deadline\": \"Friday\"}]}',\n    '{\"decisions\": [{\"what\": \"Freeze GraphQL schema v2\", \"why\": \"Release readiness\", \"owner\": \"Sarah\", \"deadline\": \"immediate\"}, {\"what\": \"Adopt viewer pattern for authenticated queries\", \"why\": \"Consistent auth context in nested resolvers\", \"owner\": \"Priya\", \"deadline\": \"immediate\"}]}',\n    '{\"decisions\": [{\"what\": \"Add CSP validation to CI pipeline\", \"why\": \"Prevent missing CSP headers in deploys\", \"owner\": \"team\", \"deadline\": \"immediate\"}, {\"what\": \"Require external origins in csp_allowlist.json\", \"why\": \"Enforce explicit approval of external domains\", \"owner\": \"team\", \"deadline\": \"immediate\"}]}',\n]\n\nmodel_path = run_sft(X_train=X_train, y_train=y_train)\n```\n\n\u003C\u002Fdetails>\n\n## Features\n\n- **[Agents (NPCs)](https:\u002F\u002Fnpcpy.readthedocs.io\u002Fen\u002Flatest\u002Fguides\u002Fagents\u002F)** — Agents with personas, directives, and tool calling. 
Subclasses: `Agent` (default tools), `ToolAgent` (custom tools + MCP), `CodingAgent` (auto-execute code blocks)\n- **[Multi-Agent Teams](https:\u002F\u002Fnpcpy.readthedocs.io\u002Fen\u002Flatest\u002Fguides\u002Fteams\u002F)** — Team orchestration with a coordinator (forenpc)\n- **[Jinx Workflows](https:\u002F\u002Fnpcpy.readthedocs.io\u002Fen\u002Flatest\u002Fguides\u002Fjinx-workflows\u002F)** — Jinja Execution templates for multi-step prompt pipelines\n- **[Skills](https:\u002F\u002Fnpcpy.readthedocs.io\u002Fen\u002Flatest\u002Fguides\u002Fskills\u002F)** — Knowledge-content jinxes that serve instructional sections to agents on demand\n- **[NPCArray](https:\u002F\u002Fnpcpy.readthedocs.io\u002Fen\u002Flatest\u002Fguides\u002Fnpc-array\u002F)** — NumPy-like vectorized operations over model populations\n- **[Image, Audio & Video](https:\u002F\u002Fnpcpy.readthedocs.io\u002Fen\u002Flatest\u002Fguides\u002Fimage-audio-video\u002F)** — Generation via Ollama, diffusers, OpenAI, Gemini, ElevenLabs\n- **[Knowledge Graphs](https:\u002F\u002Fnpcpy.readthedocs.io\u002Fen\u002Flatest\u002Fguides\u002Fknowledge-graphs\u002F)** — Build and evolve knowledge graphs from text with sleep\u002Fdream lifecycle\n- **[Sememolution](https:\u002F\u002Fnpcpy.readthedocs.io\u002Fen\u002Flatest\u002Fguides\u002Fknowledge-graphs\u002F#sememolution-population-based-kg-evolution)** — Population-based KG evolution with genetic selection and Poisson-sampled search\n- **[Memory Pipeline](https:\u002F\u002Fnpcpy.readthedocs.io\u002Fen\u002Flatest\u002Fguides\u002Fknowledge-graphs\u002F#memory-extraction-and-lifecycle)** — Extract, approve, and backfill memories with self-improving quality feedback\n- **[Fine-Tuning & Evolution](https:\u002F\u002Fnpcpy.readthedocs.io\u002Fen\u002Flatest\u002Fguides\u002Ffine-tuning\u002F)** — SFT, USFT, RL\u002FDPO, diffusion, genetic algorithms, MLX on Apple Silicon\n- 
**[Serving](https:\u002F\u002Fnpcpy.readthedocs.io\u002Fen\u002Flatest\u002Fguides\u002Fserving\u002F)** — Flask server for deploying teams via REST API\n- **[ML Functions](https:\u002F\u002Fnpcpy.readthedocs.io\u002Fen\u002Flatest\u002Fguides\u002Fml-funcs\u002F)** — Scikit-learn grid search, ensemble prediction, PyTorch training\n- **[Streaming & JSON](https:\u002F\u002Fnpcpy.readthedocs.io\u002Fen\u002Flatest\u002Fguides\u002Fllm-responses\u002F)** — Streaming responses, structured JSON output, message history\n\n## Providers\n\nWorks with all major LLM providers through LiteLLM: `ollama`, `openai`, `anthropic`, `gemini`, `deepseek`, `airllm`, `openai-like`, and more.\n\n## Installation\n\n```bash\npip install npcpy              # base\npip install npcpy[lite]        # + API provider libraries\npip install npcpy[local]       # + ollama, diffusers, transformers, airllm\npip install npcpy[yap]         # + TTS\u002FSTT\npip install npcpy[all]         # everything\n```\n\n\u003Cdetails>\u003Csummary>System dependencies\u003C\u002Fsummary>\n\n**Linux:**\n```bash\nsudo apt-get install espeak portaudio19-dev python3-pyaudio ffmpeg libcairo2-dev libgirepository1.0-dev\ncurl -fsSL https:\u002F\u002Follama.com\u002Finstall.sh | sh\nollama pull qwen3.5:2b\n```\n\n**macOS:**\n```bash\nbrew install portaudio ffmpeg pygobject3 ollama\nbrew services start ollama\nollama pull qwen3.5:2b\n```\n\n**Windows:** Install [Ollama](https:\u002F\u002Follama.com) and [ffmpeg](https:\u002F\u002Fffmpeg.org), then `ollama pull qwen3.5:2b`.\n\n\u003C\u002Fdetails>\n\nAPI keys go in a `.env` file:\n```bash\nexport OPENAI_API_KEY=\"your_key\"\nexport ANTHROPIC_API_KEY=\"your_key\"\nexport GEMINI_API_KEY=\"your_key\"\n```\n\n## Read the Docs\n\nFull documentation, guides, and API reference at [npcpy.readthedocs.io](https:\u002F\u002Fnpcpy.readthedocs.io\u002Fen\u002Flatest\u002F).\n\n## Links\n\n- **[Incognide](https:\u002F\u002Fgithub.com\u002Fnpc-worldwide\u002Fincognide)** — Desktop 
environment with AI chat, browser, file viewers, code editor, terminal, knowledge graphs, team management, and more ([download](https:\u002F\u002Fenpisi.com\u002Fincognide))\n- **[NPC Shell](https:\u002F\u002Fgithub.com\u002Fnpc-worldwide\u002Fnpcsh)** — Command-line shell for interacting with NPCs\n- **[Newsletter](https:\u002F\u002Fforms.gle\u002Fn1NzQmwjsV4xv1B2A)** — Stay in the loop\n\n## Research\n\n- A Quantum Semantic Framework for natural language processing: [arxiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.10077), accepted at [QNLP 2025](https:\u002F\u002Fqnlp.ai)\n- Simulating hormonal cycles for AI: [arxiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.11829)\n- TinyTim: A Family of Language Models for Divergent Generation [arxiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.11607)\n- The production of meaning in the processing of natural language: [arxiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.20381)\n- ALARA for Agents: Least-Privilege Context Engineering Through Portable Composable Multi-Agent Teams: [arxiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.20380)\n\nHas your research benefited from npcpy? Let us know!\n\n## Support\n\n[Monthly donation](https:\u002F\u002Fbuymeacoffee.com\u002Fnpcworldwide) | [Merch](https:\u002F\u002Fenpisi.com\u002Fshop) | Consulting: info@npcworldwi.de\n\n## Contributing\n\nContributions welcome! 
Submit issues and pull requests on the [GitHub repository](https:\u002F\u002Fgithub.com\u002FNPC-Worldwide\u002Fnpcpy).\n\n## License\n\nMIT License.\n\n## Star History\n\n[![Star History Chart](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FNPC-Worldwide_npcpy_readme_1daad34ae827.png)](https:\u002F\u002Fstar-history.com\u002F#cagostino\u002Fnpcpy&Date)\n","\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fnpcpy.readthedocs.io\u002F\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FNPC-Worldwide_npcpy_readme_b818a73e51b4.png\" alt=\"npc-python logo\" width=250>\u003C\u002Fa>\n\u003C\u002Fp>\n\n# npcpy\n\n`npcpy` 是一个灵活的智能体框架，用于构建 AI 应用程序以及开展大语言模型相关的研究。它支持本地和云端服务提供商、多智能体协作团队、工具调用、图像\u002F音频\u002F视频生成、知识图谱、微调等功能。\n\n```bash\npip install npcpy\n```\n\n## 快速示例\n\n### 创建并使用角色\n\n```python\nfrom npcpy import NPC\n\nsimon = NPC(\n    name='西蒙·玻利瓦尔',\n    primary_directive='解放南美洲，驱逐西班牙殖民者。',\n    model='gemma3:4b',\n    provider='ollama'\n)\nresponse = simon.get_llm_response(\"在安第斯山脉中，最重要的要守住的领土是什么？\")\nprint(response['response'])\n```\n\n### 直接调用大语言模型\n\n```python\nfrom npcpy import get_llm_response\n\nresponse = get_llm_response(\"凯尔特人的信使之神是谁？\", model='qwen3:4b', provider='ollama')\nprint(response['response'])\n# 或者使用 Ollama 的云端模型\n\ntest = get_llm_response('约翰·威克是谁', model='minimax-m2.7:cloud', provider='ollama')\n\nprint(test['response'])\n```\n\n### 智能体与工具结合\n\n```python\nfrom npcpy import Agent, ToolAgent, CodingAgent\n\n# Agent — 自带默认工具（sh、Python、编辑文件、网络搜索等）\nagent = Agent(name='ops', model='qwen3.5:2b', provider='ollama')\nprint(agent.run(\"查找这个仓库中所有超过 500 行的 Python 文件，并列出它们\"))\n\n# ToolAgent — 在默认工具基础上添加自定义工具\nimport subprocess\n\ndef run_tests(test_path: str = \"tests\u002F\") -> str:\n    \"\"\"对给定路径运行 pytest 并返回结果\"\"\"\n    result = subprocess.run([\"python3\", \"-m\", \"pytest\", test_path, \"-v\", \"--tb=short\"],\n                            capture_output=True, text=True, timeout=120)\n    return result.stdout +
result.stderr\n\ndef git_diff(branch: str = \"main\") -> str:\n    \"\"\"显示与指定分支的 Git 差异\"\"\"\n    result = subprocess.run([\"git\", \"diff\", branch, \"--stat\"], capture_output=True, text=True)\n    return result.stdout\n\nreviewer = ToolAgent(\n    name='代码审查员',\n    primary_directive='你负责审查代码变更、运行测试并报告问题。',\n    tools=[run_tests, git_diff],\n    model='qwen3.5:2b', provider='ollama'\n)\nprint(reviewer.run(\"运行测试并总结任何失败的情况\"))\n```\n\n### 流式输出\n\n```python\nfrom npcpy import get_llm_response\nfrom npcpy.streaming import parse_stream_chunk\n\nresponse = get_llm_response(\"请解释量子纠缠的概念。\", model='qwen3.5:2b', provider='ollama', stream=True)\nfor chunk in response['response']:\n    content, _, _ = parse_stream_chunk(chunk, provider='ollama')\n    if content:\n        print(content, end='', flush=True)\n\n# 同样的方式适用于其他服务提供商\nresponse = get_llm_response(\"请解释量子纠缠的概念。\", model='gemini-2.5-flash', provider='gemini', stream=True)\nfor chunk in response['response']:\n    content, _, _ = parse_stream_chunk(chunk, provider='gemini')\n    if content:\n        print(content, end='', flush=True)\n```\n\n### JSON 输出\n\n在提示词中包含预期的 JSON 结构。使用 `format='json'` 参数后，响应会自动解析——`response['response']` 已经是字典或列表。\n\n```python\nfrom npcpy import get_llm_response\n\nresponse = get_llm_response(\n    '''请列出距离太阳最近的 3 颗行星。\n    返回 JSON：{\"planets\": [{\"name\": \"行星名称\", \"distance_au\": 0.0, \"num_moons\": 0}]}''',\n    model='qwen3.5:2b', provider='ollama',\n    format='json'\n)\nfor planet in response['response']['planets']:\n    print(f\"{planet['name']}: {planet['distance_au']} AU，{planet['num_moons']} 颗卫星\")\n\nresponse = get_llm_response(\n    '''分析这条评论：“电池续航非常棒，但屏幕太暗了。”\n    返回 JSON：{\"tone\": \"positive\u002Fnegative\u002Fmixed\", \"key_phrases\": [\"phrase1\", \"phrase2\"], \"confidence\": 0.0}''',\n    model='qwen3.5:2b', provider='ollama',\n    format='json'\n)\nresult = response['response']\nprint(result['tone'],
result['key_phrases'])\n```\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Pydantic 结构化输出\u003C\u002Fb>\u003C\u002Fsummary>\n\n传入一个 Pydantic 模型，JSON 模式将直接发送给大语言模型。\n\n```python\nfrom npcpy import get_llm_response\nfrom pydantic import BaseModel\nfrom typing import List\n\nclass Planet(BaseModel):\n    name: str\n    distance_au: float\n    num_moons: int\n\nclass SolarSystem(BaseModel):\n    planets: List[Planet]\n\nresponse = get_llm_response(\n    \"请列出距离太阳最近的前 4 颗行星。\",\n    model='qwen3.5:2b', provider='ollama',\n    format=SolarSystem\n)\nfor p in response['response']['planets']:\n    print(f\"{p['name']}: {p['distance_au']} AU，{p['num_moons']} 颗卫星\")\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>图像、音频和视频生成\u003C\u002Fb>\u003C\u002Fsummary>\n\n```python\nfrom npcpy.llm_funcs import gen_image, gen_video\nfrom npcpy.gen.audio_gen import text_to_speech\n\n# 图像 — OpenAI、Gemini、Ollama 或 diffusers\nimages = gen_image(\"山间的日落景象\", model='gpt-image-1', provider='openai')\nimages[0].save(\"sunset.png\")\n\n# 音频 — OpenAI、Gemini、ElevenLabs、Kokoro、gTTS\naudio_bytes = text_to_speech(\"来自 npcpy 的问候！\", engine=\"openai\", voice=\"alloy\")\nwith open(\"hello.wav\", \"wb\") as f:\n    f.write(audio_bytes)\n\n# 视频 — Gemini Veo\nresult = gen_video(\"一只猫在滑板上玩耍\", model='veo-3.1-fast-generate-preview', provider='gemini')\nprint(result['output'])\n```\n\n\u003C\u002Fdetails>\n\n### 多智能体团队\n\n```python\nfrom npcpy import NPC, Team\n\nteam = Team(team_path='.\u002Fnpc_team')\nresult = team.orchestrate(\"分析最新的销售数据并撰写报告\")\nprint(result['output'])\n```\n\n或者在代码中定义一个团队：\n\n```python\nfrom npcpy import NPC, Team\n\ncoordinator = NPC(name='lead', primary_directive='协调团队。委派给@analyst和@writer。')\nanalyst = NPC(name='analyst', primary_directive='分析数据。提供数据和趋势。', model='gemini-2.5-flash', provider='gemini')\nwriter = NPC(name='writer', primary_directive='根据分析结果撰写清晰的报告。', model='qwen3:8b', provider='ollama')\n\nteam = Team(npcs=[coordinator, analyst, writer],
forenpc='lead')\nresult = team.orchestrate(\"可再生能源采用的趋势是什么？\")\nprint(result['output'])\n```\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>从文件创建团队 — .npc、.jinx、team.ctx\u003C\u002Fb>\u003C\u002Fsummary>\n\n**team.ctx：**\n```yaml\ncontext: |\n  用于分析科学文献的研究团队。\n  团队负责人会根据需要将任务委派给专家。\nforenpc: lead\nmodel: qwen3.5:2b\nprovider: ollama\noutput_format: markdown\nmax_search_results: 5\nmcp_servers:\n  - path: ~\u002F.npcsh\u002Fmcp_server.py\n```\n\n**lead.npc：**\n```yaml\n#!\u002Fusr\u002Fbin\u002Fenv npc\nname: lead\nprimary_directive: |\n  您是研究团队的负责人。将文献检索任务委派给@searcher，\n  数据分析任务委派给@analyst。将他们的发现整合成一份连贯的总结。\njinxes:\n  - {{ Jinx('sh') }}\n  - {{ Jinx('python') }}\n  - {{ Jinx('delegate') }}\n  - {{ Jinx('web_search') }}\n```\n\n**searcher.npc：**\n```yaml\n#!\u002Fusr\u002Fbin\u002Fenv npc\nname: searcher\nprimary_directive: |\n  您负责搜索科学论文并提取关键发现。\n  使用web_search和load_file来查找和阅读论文。\nmodel: gemini-2.5-flash\nprovider: gemini\njinxes:\n  - {{ Jinx('web_search') }}\n  - {{ Jinx('load_file') }}\n  - {{ Jinx('sh') }}\n```\n\n**Jinx可以引用特定的NPC**，以确保始终以该角色身份运行，并且可以**访问team.ctx中的ctx变量**：\n\n**jinxes\u002Fsearch_and_summarize.jinx：**\n```yaml\n#!\u002Fusr\u002Fbin\u002Fenv npc\njinx_name: search_and_summarize\ndescription: 搜索论文并使用searcher NPC总结发现。\nnpc: {{ NPC('searcher') }}\ninputs:\n  - query\nsteps:\n  - name: search\n    engine: natural\n    code: |\n      搜索关于{{ query }}的论文。\n      返回最多{{ ctx.max_search_results }}条结果。\n  - name: summarize\n    engine: natural\n    code: |\n      以{{ ctx.output_format }}格式总结发现：\n      {{ output }}\n```\n\n`npc:`字段将Jinx绑定到特定的NPC——当这个Jinx运行时，无论由哪个NPC调用，它都会始终使用`searcher`角色。`team.ctx`中的任何自定义键（如`output_format`、`max_search_results`）都可以在Jinja模板中作为`{{ ctx.key }}`使用，在Python步骤中则作为`context['key']`使用。\n\n```\nmy_project\u002F\n├── npc_team\u002F\n│   ├── team.ctx\n│   ├── lead.npc\n│   ├── searcher.npc\n│   ├── analyst.npc\n│   ├── jinxes\u002F\n│   │   └── skills\u002F\n│   └── models\u002F\n├── agents.md             # 可选：用Markdown定义代理\n└── agents\u002F   
            # 可选：每个代理一个.md文件\n    └── translator.md\n```\n\n`.npc`和`.jinx`文件可以直接执行：\n```bash\n.\u002Fnpc_team\u002Flead.npc \"总结arxiv上关于transformers的最新论文\"\n.\u002Fnpc_team\u002Fjinxes\u002Flib\u002Fsh.jinx bash_command=\"echo hello\"\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>MCP服务器集成\u003C\u002Fb>\u003C\u002Fsummary>\n\n为团队添加MCP服务器，以便访问外部工具：\n\n**team.ctx：**\n```yaml\nforenpc: assistant\nmcp_servers:\n  - path: .\u002Ftools\u002Fdb_server.py\n  - path: .\u002Ftools\u002Fapi_server.py\n```\n\n**db_server.py：**\n```python\nfrom mcp.server.fastmcp import FastMCP\n\nmcp = FastMCP(\"数据库工具\")\n\n@mcp.tool()\ndef query_orders(customer_id: str, limit: int = 10) -> str:\n    \"\"\"查询客户的近期订单\"\"\"\n    # 您的数据库逻辑在这里\n    return f\"找到{limit}个关于客户{customer_id}的订单\"\n\n@mcp.tool()\ndef search_products(query: str) -> str:\n    \"\"\"搜索产品目录\"\"\"\n    return f\"匹配'{query}'的产品：...\"\n\nif __name__ == \"__main__\":\n    mcp.run()\n```\n\n团队中的NPC会自动获得MCP工具的访问权限，同时也能使用它们的Jinx。\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Markdown中的代理定义与技能\u003C\u002Fb>\u003C\u002Fsummary>\n\n**agents.md**——在一个文件中定义多个代理：\n```markdown\n## summarizer\n您将长文档总结为简洁的要点。\n重点关注关键发现、方法论和结论。\n\n## fact_checker\n您会对照可靠来源核实陈述，并标记不准确之处。\n请务必注明来源。\n```\n\n**agents\u002Ftranslator.md**——每个代理一个文件，可选前言：\n```markdown\n---\nmodel: gemini-2.5-flash\nprovider: gemini\n---\n您可以在不同语言之间翻译内容，同时保持语气和习语不变。\n```\n\n技能是基于知识内容的Jinx，可以根据需要向代理提供教学部分。\n\n**npc_team\u002Fjinxes\u002Fskills\u002Fcode-review\u002FSKILL.md：**\n```markdown\n---\nname: code-review\ndescription: 用于审查代码的质量、安全性及最佳实践。\n---\n# 代码审查技能\n\n## 检查清单\n- 检查是否存在安全漏洞（SQL注入、XSS等）\n- 验证错误处理和边界情况\n- 审查命名规范和代码清晰度\n\n## 安全性\n重点关注OWASP十大漏洞...\n```\n\n在您的NPC中引用：\n```yaml\njinxes:\n  - {{ Jinx('skills\u002Fcode-review') }}\n```\n\n\u003C\u002Fdetails>\n\n### CLI工具\n\n```bash\n# NPC Shell——推荐的使用NPC团队的方式\nnpcsh                        # 带有代理、工具和Jinx的交互式Shell\n\n# 构建一个新的团队\nnpc-init\n\n# 将AI编码工具作为团队中的NPC启动\nnpc-claude --npc corca      
 # Claude Code\nnpc-codex --npc analyst      # Codex\nnpc-gemini                   # Gemini CLI（交互式选择器）\nnpc-opencode \u002F npc-aider \u002F npc-amp\n\n# 注册MCP服务器+钩子以实现更深入的集成\nnpc-plugin claude\n```\n\n### NPCArray——在多个NPC上并行运行Jinx\n\n可以在一组NPC实例上并行运行任意Jinx，并将结果收集为数组：\n\n```python\nfrom npcpy import NPC\nfrom npcpy.npc_array import NPCArray\n\n# 三个具有不同模型\u002F提供商的NPC\nnpcs = [\n    NPC(name='drafter', primary_directive='起草简洁的提交信息。', model='qwen3:4b', provider='ollama'),\n    NPC(name='reviewer', primary_directive='审查并改进提交信息的清晰度。', model='gemini-2.5-flash', provider='gemini'),\n    NPC(name='enforcer', primary_directive='检查提交信息是否符合Conventional Commits规范。', model='gemini-2.5-flash', provider='gemini'),\n]\n\narr = NPCArray.from_npcs(npcs)\n\n# 在所有三个 NPC 上并行运行相同的 jinx，收集结果\nresults = arr.jinx('summarize', inputs={'topic': 'fix auth middleware to propagate clerkUserId through GraphQL resolvers'}).collect()\nfor npc, result in zip(npcs, results.data):\n    print(f\"[{npc.name}] {result}\")\n```\n\n你也可以直接将列表传递给 `jinx.execute()`：\n\n```python\nfrom npcpy.npc_compiler import load_jinx_from_file\n\njinx = load_jinx_from_file('npc_team\u002Fjinxes\u002Fanalyze.jinx')\nresults = jinx.execute({'topic': 'rate limiting'}, npc=npcs)  # 列表 → 并行 NPCArray 运行\n```\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>知识图谱\u003C\u002Fb>\u003C\u002Fsummary>\n\n从文本构建、演化和搜索知识图谱。知识图谱通过清醒（吸收）、睡眠（巩固）和做梦（推测性合成）不断成长。\n\n```python\nfrom npcpy.memory.knowledge_graph import (\n    kg_initial, kg_evolve_incremental, kg_sleep_process,\n    kg_dream_process, kg_hybrid_search,\n)\nfrom npcpy.data.load_file import load_file_contents\n\n# 从一份设计文档 PDF 和一个迁移脚本初始化知识图谱\ndesign_doc = load_file_contents(\"docs\u002Fauth_migration_plan.pdf\")\nmigration_sql = load_file_contents(\"migrations\u002F003_clerk_auth.sql\")\n\nkg = kg_initial(\n    content=design_doc + \"\\n\\n\" + migration_sql,\n    model=\"qwen3:4b\", provider=\"ollama\",\n)\n\n# 吸收后续的提交和 PR 描述\nkg, _ = kg_evolve_incremental(\n    kg,\n    
new_content_text=(\n        \"PR #412：将 Stripe 客户会话查询替换为 Clerk JWT 验证。移除了 \u002Fapi\u002Fstripe\u002Fwebhook 端点。在所有保护路由上添加了 ClerkMiddleware。更新了 CSP 头部，允许 clerk.accounts.dev 域名访问。\"\n    ),\n    model=\"qwen3:4b\", provider=\"ollama\",\n)\n\n# 巩固——合并冗余节点，强化高频边\nkg, sleep_report = kg_sleep_process(kg, model=\"qwen3:4b\", provider=\"ollama\")\n\n# 做梦——生成松散相关概念之间的推测性连接\nkg, dream_report = kg_dream_process(kg, model=\"qwen3:4b\", provider=\"ollama\")\n\n# 跨越事实、概念和推测性边进行搜索\nresults = kg_hybrid_search(kg, \"认证如何通过 GraphQL 解析器传播？\",\n                           model=\"qwen3:4b\", provider=\"ollama\")\nfor r in results:\n    print(r['score'], r['text'])\nprint(f\"{len(kg['facts'])} 条事实，{len(kg['concepts'])} 个概念\")\n```\n\n从对话中提取结构化记忆：\n\n```python\nfrom npcpy.llm_funcs import get_facts\n\nconversation = \"\"\"\n用户：我们要完全移除 Stripe，把认证迁移到 Clerk。JWT 验证将由 ClerkMiddleware 而不是自定义的 verify_stripe_session 辅助函数来完成。\n助手：明白了。我会更新中间件链。那现有的会话存储呢？\n用户：把 Redis 会话缓存关掉——Clerk 会在其端管理会话状态。\n另外，CSP 头部需要将 clerk.accounts.dev 和 clerk.enpisi.com 添加到 connect-src 中。\n\"\"\"\n\nfacts = get_facts(conversation, model=\"qwen3:4b\", provider=\"ollama\")\nfor f in facts:\n    print(f\"[{f.get('category', 'general')}] {f['statement']}\")\n# [架构] 认证提供商从 Stripe 迁移到 Clerk，使用 ClerkMiddleware 进行 JWT 验证\n# [基础设施] 移除了 Redis 会话缓存——Clerk 管理会话状态\n# [安全] CSP 的 connect-src 更新，加入了 clerk.accounts.dev 和 clerk.enpisi.com\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Sememolution — 基于种群的知识图谱演化\u003C\u002Fb>\u003C\u002Fsummary>\n\n维护一个独立演化的知识图谱变体种群。每个个体都采用泊松采样的搜索参数，每次查询都会产生不同的遍历路径。响应排名带来的选择压力会促使图结构向更有用的方向收敛。\n\n```python\nfrom pathlib import Path\nfrom npcpy.memory.kg_population import SememolutionPopulation\nfrom npcpy.data.load_file import load_file_contents\n\npop = SememolutionPopulation(population_size=100, sample_size=10)\npop.initialize()\n\n# 引入异质语料——PDF、DOCX、源代码、会议记录\ncorpus_dirs = [Path(\"docs\u002Farchitecture\"), Path(\"docs\u002Fmeeting_notes\"), Path(\"src\u002Fauth\")]\nfor d in corpus_dirs:\n
    for f in sorted(d.glob(\"*\")):\n        if f.suffix in (\".pdf\", \".docx\", \".md\", \".py\", \".ts\", \".txt\"):\n            text = load_file_contents(str(f))\n            pop.assimilate_text(text)\n\n# 睡眠\u002F做梦周期——每个个体根据其基因组进行巩固\npop.sleep_cycle()\n\n# 查询：随机抽取 10 个个体，生成竞争性回答，并对其进行排名\nrankings = pop.query_and_rank(\"认证中间件链如何与 GraphQL 上下文交互？\")\nfor rank, entry in enumerate(rankings[:3], 1):\n    print(f\"#{rank}（个体 {entry['id']}, 分数 {entry['score']:.3f}）：{entry['response'][:120]}...\")\n\n# 选择与繁殖——表现最好的个体繁衍后代，表现最差的则被淘汰\npop.evolve_generation()\n\nstats = pop.get_stats()\nprint(f\"第 {stats['generation']} 代 | 平均适应度 {stats['avg_fitness']:.3f} | 最佳适应度 {stats['best_fitness']:.3f} | 多样性 {stats['diversity']:.3f}\")\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>微调（SFT、RL、MLX）\u003C\u002Fb>\u003C\u002Fsummary>\n\n```python\nfrom npcpy.ft.sft import run_sft\n\n# 训练一个模型，用于从会议记录中提取结构化的决策\n\n# LoRA 微调 — 自动在 Apple Silicon 上使用 MLX\nX_train = [\n    \"会议：身份验证迁移同步（2025-01-15）\\n参会人员：Sarah、Mike、Priya\\n\"\n    \"讨论：评估了 Clerk 和 Auth0，以替代 Stripe 的身份验证系统。最终选择了 Clerk，因为它延迟更低且原生支持 Next.js。迁移将于第 12 轮开发开始。\\n一旦 Clerk 的 JWT 验证稳定，Redis 会话存储将被移除。\",\n\n    \"会议：API 速率限制审查（2025-01-22）\\n参会人员：Mike、Jordan\\n\"\n    \"讨论：当前基于会话的令牌桶机制与 Clerk 的无状态 JWT 不兼容。双方同意切换到基于 IP 的滑动窗口限流策略，默认每分钟 100 次请求。高级用户则为每分钟 500 次。Jordan 将于周五前完成实施。\",\n\n    \"会议：GraphQL 模式冻结（2025-02-01）\\n参会人员：Sarah、Priya、Jordan\\n\"\n    \"讨论：v2 版本的 GraphQL 模式已锁定准备发布。通过数据加载器实现嵌套的身份验证上下文传递已被确认有效。所有经过身份验证的查询都将采用新的 'viewer' 模式。重大变更已在 CHANGELOG 中记录。\",\n\n    \"会议：部署复盘（2025-02-10）\\n参会人员：全体团队成员\\n\"\n    \"讨论：生产环境的中断是由缺少 clerk.accounts.dev 的 CSP 头部导致的。\\n根本原因：部署脚本未能正确读取新的环境变量。解决方案：在 CI 流水线中添加 CSP 验证检查。新规定：所有外部域名必须列入 csp_allowlist.json 文件中。\",\n]\ny_train = [\n    '{\"decisions\": [{\"what\": \"采用 Clerk 进行身份验证\", \"why\": \"延迟更低，原生支持 Next.js\", \"owner\": \"团队\", \"deadline\": \"第 12 轮开发\"}, {\"what\": \"移除 Redis 会话存储\", \"why\": \"Clerk 已经可以处理会话状态\", \"owner\": \"团队\", \"deadline\": \"JWT 验证稳定后\"}]}',\n
'{\"decisions\": [{\"what\": \"切换到基于 IP 的滑动窗口限流器\", \"why\": \"令牌桶机制与无状态 JWT 不兼容\", \"owner\": \"Jordan\", \"deadline\": \"周五\"}, {\"what\": \"设置默认限流为每分钟 100 次，高级用户为每分钟 500 次\", \"why\": \"分级访问控制\", \"owner\": \"Jordan\", \"deadline\": \"周五\"}]}',\n    '{\"decisions\": [{\"what\": \"冻结 GraphQL v2 模式\", \"why\": \"为发布做准备\", \"owner\": \"Sarah\", \"deadline\": \"立即\"}, {\"what\": \"为经过身份验证的查询采用 viewer 模式\", \"why\": \"确保嵌套解析器中具有一致的身份验证上下文\", \"owner\": \"Priya\", \"deadline\": \"立即\"}]}',\n    '{\"decisions\": [{\"what\": \"在 CI 流水线中加入 CSP 验证检查\", \"why\": \"防止部署时遗漏 CSP 头部\", \"owner\": \"团队\", \"deadline\": \"立即\"}, {\"what\": \"要求所有外部域名都列在 csp_allowlist.json 中\", \"why\": \"强制对外部域名进行显式批准\", \"owner\": \"团队\", \"deadline\": \"立即\"}]}',\n]\n\nmodel_path = run_sft(X_train=X_train, y_train=y_train)\n```\n\n\u003C\u002Fdetails>\n\n## 功能特性\n\n- **[智能体（NPCs）](https:\u002F\u002Fnpcpy.readthedocs.io\u002Fen\u002Flatest\u002Fguides\u002Fagents\u002F)** — 具有角色设定、指令和工具调用能力的智能体。子类包括：`Agent`（默认工具）、`ToolAgent`（自定义工具 + MCP）、`CodingAgent`（自动执行代码块）\n- **[多智能体团队](https:\u002F\u002Fnpcpy.readthedocs.io\u002Fen\u002Flatest\u002Fguides\u002Fteams\u002F)** — 带有协调员（forenpc）的团队协作机制\n- **[Jinx 工作流](https:\u002F\u002Fnpcpy.readthedocs.io\u002Fen\u002Flatest\u002Fguides\u002Fjinx-workflows\u002F)** — 基于 Jinja 的执行模板，用于多步骤提示管道\n- **[技能](https:\u002F\u002Fnpcpy.readthedocs.io\u002Fen\u002Flatest\u002Fguides\u002Fskills\u002F)** — 包含知识内容的 Jinx，可按需为智能体提供教学模块\n- **[NPCArray](https:\u002F\u002Fnpcpy.readthedocs.io\u002Fen\u002Flatest\u002Fguides\u002Fnpc-array\u002F)** — 类似 NumPy 的向量化操作，适用于模型群体\n- **[图像、音频与视频](https:\u002F\u002Fnpcpy.readthedocs.io\u002Fen\u002Flatest\u002Fguides\u002Fimage-audio-video\u002F)** — 通过 Ollama、diffusers、OpenAI、Gemini、ElevenLabs 等生成\n- **[知识图谱](https:\u002F\u002Fnpcpy.readthedocs.io\u002Fen\u002Flatest\u002Fguides\u002Fknowledge-graphs\u002F)** — 从文本构建并演化知识图谱，具备睡眠\u002F梦境生命周期\n- 
**[Sememolution](https:\u002F\u002Fnpcpy.readthedocs.io\u002Fen\u002Flatest\u002Fguides\u002Fknowledge-graphs\u002F#sememolution-population-based-kg-evolution)** — 基于种群的知识图谱演化，结合遗传选择和泊松采样搜索\n- **[记忆流水线](https:\u002F\u002Fnpcpy.readthedocs.io\u002Fen\u002Flatest\u002Fguides\u002Fknowledge-graphs\u002F#memory-extraction-and-lifecycle)** — 提取、审核并回填记忆，通过自我改进的质量反馈提升效果\n- **[微调与演化](https:\u002F\u002Fnpcpy.readthedocs.io\u002Fen\u002Flatest\u002Fguides\u002Ffine-tuning\u002F)** — SFT、USFT、RL\u002FDPO、扩散模型、遗传算法，以及在 Apple Silicon 上使用 MLX\n- **[服务部署](https:\u002F\u002Fnpcpy.readthedocs.io\u002Fen\u002Flatest\u002Fguides\u002Fserving\u002F)** — 使用 Flask 服务器通过 REST API 部署团队\n- **[机器学习函数](https:\u002F\u002Fnpcpy.readthedocs.io\u002Fen\u002Flatest\u002Fguides\u002Fml-funcs\u002F)** — Scikit-learn 网格搜索、集成预测、PyTorch 训练\n- **[流式响应与 JSON](https:\u002F\u002Fnpcpy.readthedocs.io\u002Fen\u002Flatest\u002Fguides\u002Fllm-responses\u002F)** — 流式响应、结构化 JSON 输出、消息历史记录\n\n## 服务提供商\n\n通过 LiteLLM 支持所有主流 LLM 提供商：`ollama`、`openai`、`anthropic`、`gemini`、`deepseek`、`airllm`、`openai-like` 等。\n\n## 安装说明\n\n```bash\npip install npcpy              # 基础包\npip install npcpy[lite]        # + API 提供商库\npip install npcpy[local]       # + ollama、diffusers、transformers、airllm\npip install npcpy[yap]         # + TTS\u002FSTT\npip install npcpy[all]         # 所有功能\n```\n\n\u003Cdetails>\u003Csummary>系统依赖\u003C\u002Fsummary>\n\n**Linux:**\n```bash\nsudo apt-get install espeak portaudio19-dev python3-pyaudio ffmpeg libcairo2-dev libgirepository1.0-dev\ncurl -fsSL https:\u002F\u002Follama.com\u002Finstall.sh | sh\nollama pull qwen3.5:2b\n```\n\n**macOS:**\n```bash\nbrew install portaudio ffmpeg pygobject3 ollama\nbrew services start ollama\nollama pull qwen3.5:2b\n```\n\n**Windows:** 安装 [Ollama](https:\u002F\u002Follama.com) 和 [ffmpeg](https:\u002F\u002Fffmpeg.org)，然后运行 `ollama pull qwen3.5:2b`。\n\n\u003C\u002Fdetails>\n\nAPI 密钥需配置在 `.env` 文件中：\n```bash\nexport OPENAI_API_KEY=\"your_key\"\nexport 
ANTHROPIC_API_KEY=\"your_key\"\nexport GEMINI_API_KEY=\"your_key\"\n```\n\n## 文档阅读\n\n完整文档、指南和 API 参考请访问 [npcpy.readthedocs.io](https:\u002F\u002Fnpcpy.readthedocs.io\u002Fen\u002Flatest\u002F)。\n\n## 相关链接\n\n- **[Incognide](https:\u002F\u002Fgithub.com\u002Fnpc-worldwide\u002Fincognide)** — 桌面环境，内置 AI 聊天、浏览器、文件查看器、代码编辑器、终端、知识图谱、团队管理等功能（[下载](https:\u002F\u002Fenpisi.com\u002Fincognide)）\n- **[NPC Shell](https:\u002F\u002Fgithub.com\u002Fnpc-worldwide\u002Fnpcsh)** — 用于与 NPCs 交互的命令行 shell\n- **[新闻通讯](https:\u002F\u002Fforms.gle\u002Fn1NzQmwjsV4xv1B2A)** — 保持信息更新\n\n## 研究\n\n- 用于自然语言处理的量子语义框架：[arxiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.10077)，已被 [QNLP 2025](https:\u002F\u002Fqnlp.ai) 接收\n- 为人工智能模拟激素周期：[arxiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.11829)\n- TinyTim：用于发散式生成的语言模型系列 [arxiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.11607)\n- 自然语言处理中的意义生成：[arxiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.20381)\n- 面向智能体的 ALARA：通过可移植、可组合的多智能体团队实现最小权限上下文工程：[arxiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.20380)\n\n您的研究是否受益于 npcpy？请告诉我们！\n\n## 支持\n\n[每月捐赠](https:\u002F\u002Fbuymeacoffee.com\u002Fnpcworldwide) | [周边商品](https:\u002F\u002Fenpisi.com\u002Fshop) | 咨询：info@npcworldwi.de\n\n## 贡献\n\n欢迎贡献！请在 [GitHub 仓库](https:\u002F\u002Fgithub.com\u002FNPC-Worldwide\u002Fnpcpy) 上提交问题和拉取请求。\n\n## 许可证\n\nMIT 许可证。\n\n## 星标历史\n\n[![星标历史图表](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FNPC-Worldwide_npcpy_readme_1daad34ae827.png)](https:\u002F\u002Fstar-history.com\u002F#cagostino\u002Fnpcpy&Date)","# npcpy 快速上手指南\n\n`npcpy` 是一个灵活的 AI 智能体框架，专为构建 LLM 应用和研究设计。它支持本地（如 Ollama）和云端模型、多智能体协作、工具调用、多媒体生成以及结构化输出等功能。\n\n## 环境准备\n\n*   **系统要求**：Python 3.8+ (推荐 Python 3.10+)\n*   **前置依赖**：\n    *   若使用本地模型，请确保已安装并运行 [Ollama](https:\u002F\u002Follama.com\u002F) 或其他兼容的后端服务。\n    *   若使用云端模型（如 OpenAI, Gemini），请准备好相应的 API Key 并配置到环境变量中。\n*   **可选依赖**：如需使用 Pydantic 结构化输出或特定多媒体生成功能，建议安装相关扩展库（通常 `pip install` 会自动处理基础依赖）。\n\n## 安装步骤\n\n使用 pip 
直接安装最新稳定版：\n\n```bash\npip install npcpy\n```\n\n> **提示**：国内用户若下载缓慢，可使用清华或阿里镜像源加速安装：\n> ```bash\n> pip install npcpy -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n> ```\n\n## 基本使用\n\n### 1. 创建并使用角色 (Persona)\n\n定义一个具有特定指令的 NPC，并让其回答问题。以下示例使用本地 Ollama 运行的 `gemma3:4b` 模型：\n\n```python\nfrom npcpy import NPC\n\nsimon = NPC(\n    name='Simon Bolivar',\n    primary_directive='Liberate South America from the Spanish Royalists.',\n    model='gemma3:4b',\n    provider='ollama'\n)\nresponse = simon.get_llm_response(\"What is the most important territory to retain in the Andes?\")\nprint(response['response'])\n```\n\n### 2. 直接调用 LLM\n\n如果不需复杂角色设定，可直接调用模型接口：\n\n```python\nfrom npcpy import get_llm_response\n\nresponse = get_llm_response(\"Who was the celtic messenger god?\", model='qwen3:4b', provider='ollama')\nprint(response['response'])\n```\n\n### 3. 使用带工具的智能体 (Agent with Tools)\n\n`npcpy` 内置了多种默认工具（如 shell 执行、文件编辑、网络搜索等），也支持自定义工具。\n\n**使用默认工具的 Agent：**\n```python\nfrom npcpy import Agent\n\nagent = Agent(name='ops', model='qwen3.5:2b', provider='ollama')\nprint(agent.run(\"Find all Python files over 500 lines in this repo and list them\"))\n```\n\n**添加自定义工具的 ToolAgent：**\n```python\nfrom npcpy import ToolAgent\nimport subprocess\n\ndef run_tests(test_path: str = \"tests\u002F\") -> str:\n    \"\"\"Run pytest on the given path and return results.\"\"\"\n    result = subprocess.run([\"python3\", \"-m\", \"pytest\", test_path, \"-v\", \"--tb=short\"],\n                            capture_output=True, text=True, timeout=120)\n    return result.stdout + result.stderr\n\nreviewer = ToolAgent(\n    name='code_reviewer',\n    primary_directive='You review code changes, run tests, and report issues.',\n    tools=[run_tests],\n    model='qwen3.5:2b', provider='ollama'\n)\nprint(reviewer.run(\"Run the tests and summarize any failures\"))\n```\n\n### 4. 
获取结构化 JSON 输出\n\n通过在提示词中定义格式并使用 `format='json'` 参数，可直接获得解析后的字典或列表对象：\n\n```python\nfrom npcpy import get_llm_response\n\nresponse = get_llm_response(\n    '''List 3 planets from the sun.\n    Return JSON: {\"planets\": [{\"name\": \"planet name\", \"distance_au\": 0.0, \"num_moons\": 0}]}''',\n    model='qwen3.5:2b', provider='ollama',\n    format='json'\n)\nfor planet in response['response']['planets']:\n    print(f\"{planet['name']}: {planet['distance_au']} AU, {planet['num_moons']} moons\")\n```\n\n### 5. 多智能体协作 (Multi-agent Team)\n\n可以通过代码快速组建一个多智能体团队，由协调者分配任务给其他成员：\n\n```python\nfrom npcpy import NPC, Team\n\ncoordinator = NPC(name='lead', primary_directive='Coordinate the team. Delegate to @analyst and @writer.')\nanalyst = NPC(name='analyst', primary_directive='Analyze data. Provide numbers and trends.', model='gemini-2.5-flash', provider='gemini')\nwriter = NPC(name='writer', primary_directive='Write clear reports from analysis.', model='qwen3:8b', provider='ollama')\n\nteam = Team(npcs=[coordinator, analyst, writer], forenpc='lead')\nresult = team.orchestrate(\"What are the trends in renewable energy adoption?\")\nprint(result['output'])\n```\n\n### 6. 
命令行交互 (CLI)\n\n`npcpy` 提供了强大的命令行工具，推荐用于交互式开发和管理智能体团队：\n\n```bash\n# 启动交互式 NPC Shell\nnpcsh\n\n# 初始化一个新的团队项目\nnpc-init\n\n# 调用特定的 AI 编码工具（需配置对应 NPC）\nnpc-claude --npc corca\n```","某初创团队正在开发一款自动化代码审查助手，需要让 AI 角色不仅能理解代码逻辑，还要能自主运行测试、对比 Git 差异并生成结构化报告。\n\n### 没有 npcpy 时\n- **角色构建繁琐**：开发者需手动编写大量 Prompt 工程代码来模拟特定专家人设，难以维持对话上下文的一致性。\n- **工具集成困难**：想让 AI 执行本地命令（如 `pytest` 或 `git diff`），必须自行封装复杂的函数调用逻辑和错误处理机制。\n- **多模型切换成本高**：尝试不同大小的模型（从本地 Ollama 到云端 API）时，需要重写底层请求代码，无法快速验证效果。\n- **数据解析易出错**：获取 JSON 格式的分析报告时，常因模型输出格式微小偏差导致解析失败，需额外编写正则清洗代码。\n\n### 使用 npcpy 后\n- **人设一键定义**：通过 `NPC` 类仅需几行代码即可创建具备特定指令的“代码审查员”角色，自动维护记忆与语境。\n- **原生工具支持**：利用 `ToolAgent` 可直接将自定义 Python 函数（如运行测试脚本）注册为 AI 工具，实现自动调用与执行。\n- **统一接口适配**：更换底层模型只需修改 `model` 和 `provider` 参数，无缝切换本地轻量模型与云端强大模型进行对比测试。\n- **结构化输出保障**：设置 `format='json'` 后，npcpy 自动处理解析逻辑，直接返回可用的字典对象，彻底消除格式错误风险。\n\nnpcpy 将分散的 AI 应用开发环节整合为统一的智能体框架，让开发者从繁琐的底层对接中解放，专注于业务逻辑创新。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FNPC-Worldwide_npcpy_b818a73e.png","NPC-Worldwide","NPC Worldwide","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FNPC-Worldwide_be171043.png",null,"https:\u002F\u002Fgithub.com\u002FNPC-Worldwide",[79,83,87],{"name":80,"color":81,"percentage":82},"Python","#3572A5",99.9,{"name":84,"color":85,"percentage":86},"Makefile","#427819",0,{"name":88,"color":89,"percentage":86},"Shell","#89e051",1299,94,"2026-04-17T03:47:21","MIT","未说明","未说明 (取决于所选模型提供商，如使用本地 Ollama 运行大模型则需相应 GPU 资源)",{"notes":97,"python":94,"dependencies":98},"该工具是一个灵活的代理框架，本身不强制绑定特定硬件，资源需求完全取决于用户选择的模型提供商（Provider）和具体模型（Model）。支持本地部署（如 Ollama、diffusers）和云端 API（如 OpenAI、Gemini、ElevenLabs）。若使用本地模型，需自行确保满足对应模型的显存和内存要求。支持通过 MCP 
协议集成外部工具服务器。",[99,100],"pydantic","mcp",[14,45,15,13,35],[103,104,105,106,107,108,109,110,100,111,112],"agents","ai","llm","python","sql","yaml","ollama","perplexity","mcp-client","mcp-server","2026-03-27T02:49:30.150509","2026-04-18T09:20:12.137685",[116,121,126,131,136,140,144],{"id":117,"question_zh":118,"answer_zh":119,"source_url":120},39222,"遇到 'NPC' object has no attribute 'tools_dict' 错误怎么办？","该问题通常由 NPC 配置文件缓存或语义混淆引起。请尝试以下步骤重置环境：\n1. 删除 ~\u002F.npcsh\u002Fnpc_team 目录下的文件。\n2. 将 ~\u002F.npcshrc 文件中的 NPCSH_INITIALIZED 变量设置为 0。\n3. 运行 source ~\u002F.npcshrc 重新加载配置。\n4. 重新启动 npcsh。\n维护者已更新工具描述以减少此类混淆，若问题依旧，请确保使用的是最新版本。","https:\u002F\u002Fgithub.com\u002FNPC-Worldwide\u002Fnpcpy\u002Fissues\u002F80",{"id":122,"question_zh":123,"answer_zh":124,"source_url":125},39223,"使用图像生成工具时提示 'Missing required inputs... prompt' 如何解决？","当 LLM（如 llama3.2 或 phi4）未能正确提取提示词参数时会出现此错误。建议明确指定需求，例如直接要求“生成一张星星的图片”而非模糊的“画一张星星”。如果模型仍然失败，可以尝试切换更强大的模型（如 phi4），或者检查是否因模型能力限制导致其选择了 ASCII 艺术而非调用图像生成工具。","https:\u002F\u002Fgithub.com\u002FNPC-Worldwide\u002Fnpcpy\u002Fissues\u002F101",{"id":127,"question_zh":128,"answer_zh":129,"source_url":130},39224,"执行 \u002Fcom 命令时报错 'cannot access local variable current_npc' 怎么处理？","此错误通常发生在数据库表缺失或初始化不完整时。如果发现 'no such table: compiled_npcs' 错误，请尝试清理旧的 npcsh 数据文件（如移动或删除 npcsh_* 相关文件），然后重新启动 npcsh 以触发正确的数据库初始化。该问题在后续版本中已修复，请确保升级到最新版本。","https:\u002F\u002Fgithub.com\u002FNPC-Worldwide\u002Fnpcpy\u002Fissues\u002F102",{"id":132,"question_zh":133,"answer_zh":134,"source_url":135},39225,"使用 Ollama 后端时出现 'ModelResponse object has no attribute message' 错误如何修复？","这是因为代码试图以 Ollama 原生格式解析响应，但实际使用的是 litellm 库。解决方法是确保已安装 litellm 依赖：运行 pip install litellm。维护者已将该依赖移至基础安装包中，并修复了相关解析逻辑（将 chunk[\"message\"] 替换为 chunk.get('choices',{})[0].get('delta').content）。请升级 npcsh 到最新版本以获取修复。","https:\u002F\u002Fgithub.com\u002FNPC-Worldwide\u002Fnpcpy\u002Fissues\u002F147",{"id":137,"question_zh":138,"answer_zh":139,"source_url":120},39226,"如何在 Debian 系统上安装 npcsh 所需的系统依赖？","在 Debian 
FAQ

…on the system, you need to install the following development packages before npcsh will build and run: espeak, portaudio19-dev, alsa-utils, libcairo2-dev, libgirepository1.0-dev, and ffmpeg. Install them with `sudo apt-get install espeak portaudio19-dev alsa-utils libcairo2-dev libgirepository1.0-dev ffmpeg`. In addition, it is recommended to manage Python versions with asdf or a similar tool (Python 3.12+ suggested).

Q: Why does npcsh print 'No .env file found' at startup, and does that affect usage?
A: It is only a warning: no .env configuration file exists in the current directory. If you do not need custom API keys or other special environment variables, you can ignore it and use npcsh normally. If you do need configuration (for example an Ollama address or an API key), create a .env file in your home directory or the project root and add the corresponding variables.

Q: What can I do when a small model (such as llama3.2) keeps mis-calling tools or hallucinating?
A: Small models have limited semantic understanding and may call tools incorrectly (for example, using local_search when it does not apply). Suggested measures:
1. Clear the cached NPC team files (~/.npcsh/npc_team) and reset the initialization state.
2. Switch to a stronger model (such as phi4).
3. Describe the task more explicitly in the prompt to avoid ambiguity.
The maintainers have tuned the tool descriptions to steer model behavior, but small models remain somewhat unpredictable.

Release notes

v1.4.19 (2026-04-17)
- OMLX provider routing (OpenAI-compatible, 127.0.0.1:8000/v1)
- Renamed the mlx tag to omlx in local model discovery
- Disabled LiteLLM debug output
- Removed redundant API endpoints from serve.py

v1.4.18 (2026-04-15)
- Security: fixed a shell command injection vulnerability in desktop.py (replaced shell=True with shlex.split)
- Security: set weights_only=True when calling torch.load on diff/image_gen checkpoints; for DIAMOND models, explicitly set weights_only=False with an explanatory comment
- Security: Agent gained a safe_tools=True parameter; the default tool set now excludes the sh and Python execution tools
- Security: introduced a Jinja2 SandboxedEnvironment for user-controlled template rendering in serve.py and mcp_server.py
- Jinx.execute() now accepts a list or an NPCArray as its npc argument and runs the Jinx on all instances in parallel
- README: streaming examples now use parse_stream_chunk (provider-agnostic); added a Jinx section covering NPCArray; long content collapsed into drop-downs

v1.4.17 (2026-04-08)
- Default search uses Startpage, falling back to SearXNG and then DuckDuckGo
- Activity log tables (activity_log, autocomplete_suggestions, autocomplete_training)
- API endpoints for activity/autocomplete logging and training-data export
- Fixed memory range queries so that not all filter conditions are required

v1.4.16 (2026-04-06)
- Fix: flush SSE events immediately during live streaming; prompt the chat before stopping in the agent loop

v1.4.15 (2026-04-04)
- Fixed command injection vulnerabilities in _tool_web_search and _tool_file_search (PR #208, thanks @spidershield-contrib)

v1.4.14 (2026-04-04)
- Added a population-based semantic-evolution knowledge-graph module (kg_population.py): integrates GeneticEvolver, uses Poisson-sampled search traversal, supports per-individual graph state, implements crossover via graph merging, and ranks responses with LLM scoring
- Documented the memory-extraction pipeline and semantic evolution in the knowledge-graph guide
- Documented MLX on Apple Silicon in the fine-tuning guide
- Added knowledge-graph, memory, semantic-evolution, and fine-tuning examples to the README
- Removed the dead shell=True subprocess fallback in _tool_web_search (closes #207)
- Fixed _tool_file_search to use list-form subprocess calls without shell=True

v1.4.13 (2026-04-03)
- Pass user generation parameters (temperature, top_p, top_k, max_tokens) to the LLM in both chat mode and tool-agent mode
- Made serve.py more robust on Windows: optional imports of redis/flask_sse/mcp, improved error handling, and a new NPCSH_BASE environment variable
- Fixed step rendering in the MCP server engine: action/args taken from _raw_steps are now template-rendered with the tool-call arguments
- Added Windows CI tests for serve.py imports, settings round-trips, and server startup

v1.4.12 (2026-04-02)
- Device routing in the ft module (sft, rl, usft): device='mlx'|'cpu'|'cuda'
- MLX LoRA training via the mlx-lm Python API on Apple Silicon
- Resolves Hugging Face model names to their mlx-community counterparts
- Backward compatible; the default device is 'cpu'

v1.4.11 (2026-03-28)
- Pipeline events from Jinx execution through generator chaining

v1.4.10 (2026-03-28)
- Use an inline generator in check_llm_command instead of a separately defined function
- create_jinx_stream takes (npc, command) directly instead of a StreamConfig
- Sub-delegation events handled via shared_context['sub_events']

v1.4.9 (2026-03-28)
- Generator-based streaming for check_llm_command (stream=True yields events)
- No threads/queues: a clean generator protocol
- Chat streams token by token; tools emit tool_start/tool_result events

v1.4.8 (2026-03-28)
- Threaded check_llm_command in create_jinx_stream with keepalive SSE events
- Prevents SSE timeouts during long delegation
- Event queue for jinxes to push real-time progress

v1.4.7 (2026-03-28)
- Fix: don't pass tool_choice when no tools are specified (fixes #206, the OpenAI NPC quickstart error)

v1.4.6 (2026-03-27)
- Fix create_jinx_stream: stream=True, consume the stream wrapper, skip chat/stop tool events
- Resolve api_url and api_key from the NPC/Team in resolve_model_provider

v1.4.5 (2026-03-27)
- Bumped litellm to 1.81.13

v1.4.4 (2026-03-26)
- Resolve api_url and api_key from the NPC/Team in resolve_model_provider
- Route all responses through the jinx system in create_jinx_stream

v1.4.3 (2026-03-25)
- File-based storage for all CommandHistory tables: CSV and Parquet backends alongside SQLite/Postgres (c3eb2f2)
  - `append_row_csv` / `append_row_parquet` for any of the 10 tables (conversation_history, command_history, jinx_executions, npc_executions, memory_lifecycle, npc_memories, knowledge_graphs, labels, message_attachments, compiled_npcs)
  - Partitioned directory structure: `table/path/year/month/day/group_id.{csv,parquet}`
  - `scan_all(base_dir, table, ext)` loads an entire table tree into one polars DataFrame
  - `search_files`, `list_files`, `load_file_csv`, `load_file_parquet` for querying without SQL
- Added polars to the base requirements (a4064dd)
- Route all responses through the jinx system in the create_jinx_stream agentic loop (a416a7f)
- Docs: use the proper Jinx()/NPC()/ctx Jinja formalism in all NPC and jinx examples (a214081)

v1.4.1 (2026-03-24)
- Pinned the litellm dependency to 1.76.0 in setup.py (6b34b71) to keep users clear of known security vulnerabilities
- Updated the research section in the README with new papers (TinyTim, ALARA for Agents, production of meaning) and revised the QNLP 2025 paper title (1eaf753, 5ee0c00)

v1.3.37 (2026-03-22)
- Rewrote create_jinx_stream as a true agentic loop using check_llm_command
- Removed the weak follow-up classifier LLM call between iterations
- The agent loops autonomously: stream → execute jinxes → feed results back → repeat until stop
- Bumped the max_followups default from 3 to 10

v1.3.36 (2026-03-20)
- Streaming consolidation and module
- Agent tool fix
- README example updates
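The .env FAQ answer can be made concrete with a minimal example file. The variable names below (OPENAI_API_KEY, OLLAMA_HOST) are common conventions and an assumption here, not confirmed npcsh settings; check the npcpy documentation for the exact names it reads:

```shell
# ~/.env -- illustrative sketch only; these variable names are assumptions,
# not confirmed npcsh configuration keys.
OPENAI_API_KEY=sk-your-key-here          # API key for a hosted provider
OLLAMA_HOST=http://127.0.0.1:11434       # address of a local Ollama server
```

With a file like this in the home directory or project root, the 'No .env file found' warning disappears and the variables become available to the process environment.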
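Several releases above (v1.4.18, v1.4.15, v1.4.14) replace `shell=True` subprocess calls with list-form invocations to close command-injection holes. A minimal sketch of the pattern; `run_grep` and its arguments are illustrative, not taken from the npcpy codebase:

```python
import shlex
import subprocess

def run_grep(pattern: str, path: str) -> str:
    # Unsafe variant (what the fixes removed):
    #   subprocess.run(f"grep {pattern} {path}", shell=True)
    # With shell=True, metacharacters in `pattern` (e.g. '"x"; rm -rf ~')
    # are interpreted by the shell and can inject arbitrary commands.
    #
    # Safe variant: pass an argument list. Each element reaches the child
    # process verbatim, so shell metacharacters stay literal text.
    result = subprocess.run(
        ["grep", "-n", pattern, path],
        capture_output=True, text=True, check=False,
    )
    return result.stdout

# shlex.split converts a *trusted* command string into the list form
# (the approach the desktop.py fix in v1.4.18 uses):
args = shlex.split("ffmpeg -i input.wav output.mp3")
# → ['ffmpeg', '-i', 'input.wav', 'output.mp3']
```

Note that `shlex.split` only helps when the command string itself is trusted; untrusted values should always enter as individual list elements, as in `run_grep`.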
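The v1.4.9 and v1.3.37 notes describe moving from threads and queues to a plain generator protocol: the agent loop yields typed events, chat arrives token by token, and tools bracket their work with tool_start/tool_result events. A toy sketch of that shape; the event names mirror the release notes, but `fake_llm_tokens` and `run_tool` are stand-ins invented for illustration:

```python
from typing import Iterator

def fake_llm_tokens(prompt: str) -> Iterator[str]:
    # Stand-in for a streaming LLM call; yields tokens one at a time.
    for tok in ["The", " answer", " is", " 4", "."]:
        yield tok

def run_tool(name: str, args: dict) -> str:
    # Stand-in for a jinx/tool execution.
    return str(args["a"] + args["b"])

def check_llm_command(command: str) -> Iterator[dict]:
    """Yield events directly instead of pushing them through threads/queues."""
    # A tool call was decided on, so bracket it with start/result events.
    yield {"type": "tool_start", "tool": "add"}
    result = run_tool("add", {"a": 2, "b": 2})
    yield {"type": "tool_result", "tool": "add", "output": result}
    # The chat reply then streams token by token.
    for tok in fake_llm_tokens(command):
        yield {"type": "token", "text": tok}

events = list(check_llm_command("what is 2+2?"))
```

Because the whole pipeline is a generator chain, a caller (an SSE endpoint, say) can forward each event the moment it is yielded, with no intermediate queue to drain or keepalive thread to manage.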
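The v1.4.3 entry describes file-based table storage partitioned as `table/path/year/month/day/group_id.{csv,parquet}`. A self-contained sketch of that layout using only the stdlib `csv` module; the function names echo the release notes, but this is a reimplementation for illustration, not npcpy's code (the real `scan_all` returns a polars DataFrame, and the `path` segment is omitted here for brevity):

```python
import csv
from datetime import date
from pathlib import Path

def partition_path(base: Path, table: str, group_id: str, day: date) -> Path:
    # Partitioned layout from the notes: table/year/month/day/group_id.csv
    return (base / table / f"{day.year:04d}" / f"{day.month:02d}"
            / f"{day.day:02d}" / f"{group_id}.csv")

def append_row_csv(base: Path, table: str, group_id: str, row: dict) -> Path:
    path = partition_path(base, table, group_id, date.today())
    path.parent.mkdir(parents=True, exist_ok=True)
    new_file = not path.exists()
    with path.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=sorted(row))
        if new_file:
            writer.writeheader()   # header only on first append
        writer.writerow(row)
    return path

def scan_all(base: Path, table: str) -> list[dict]:
    # Walk every partition of one table and concatenate the rows.
    rows: list[dict] = []
    for path in sorted((base / table).rglob("*.csv")):
        with path.open(newline="") as f:
            rows.extend(csv.DictReader(f))
    return rows
```

The appeal of this scheme is that appends touch only one small per-group file, while the date-based directory tree lets a scanner prune whole partitions by path before reading any data.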