[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-uber--neuropod":3,"tool-uber--neuropod":61},[4,18,26,36,44,52],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",141543,2,"2026-04-06T11:32:54",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107888,"2026-04-06T11:32:50",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":10,"last_commit_at":50,"category_tags":51,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 
API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[35,15,13,14],{"id":53,"name":54,"github_repo":55,"description_zh":56,"stars":57,"difficulty_score":10,"last_commit_at":58,"category_tags":59,"status":17},4292,"Deep-Live-Cam","hacksider\u002FDeep-Live-Cam","Deep-Live-Cam 是一款专注于实时换脸与视频生成的开源工具，用户仅需一张静态照片，即可通过“一键操作”实现摄像头画面的即时变脸或制作深度伪造视频。它有效解决了传统换脸技术流程繁琐、对硬件配置要求极高以及难以实时预览的痛点，让高质量的数字内容创作变得触手可及。\n\n这款工具不仅适合开发者和技术研究人员探索算法边界，更因其极简的操作逻辑（仅需三步：选脸、选摄像头、启动），广泛适用于普通用户、内容创作者、设计师及直播主播。无论是为了动画角色定制、服装展示模特替换，还是制作趣味短视频和直播互动，Deep-Live-Cam 都能提供流畅的支持。\n\n其核心技术亮点在于强大的实时处理能力，支持口型遮罩（Mouth Mask）以保留使用者原始的嘴部动作，确保表情自然精准；同时具备“人脸映射”功能，可同时对画面中的多个主体应用不同面孔。此外，项目内置了严格的内容安全过滤机制，自动拦截涉及裸露、暴力等不当素材，并倡导用户在获得授权及明确标注的前提下合规使用，体现了技术发展与伦理责任的平衡。",88924,"2026-04-06T03:28:53",[14,15,13,60],"视频",{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":76,"owner_twitter":76,"owner_website":77,"owner_url":78,"languages":79,"stars":112,"forks":113,"last_commit_at":114,"license":115,"difficulty_score":116,"env_os":117,"env_gpu":118,"env_ram":119,"env_deps":120,"category_tags":129,"github_topics":130,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":140,"updated_at":141,"faqs":142,"releases":170},4394,"uber\u002Fneuropod","neuropod","A uniform interface to run deep learning models from multiple frameworks","Neuropod 是由 Uber 开源的一款深度学习模型推理库，旨在为 TensorFlow、PyTorch、Keras 及 TorchScript 等多种主流框架提供统一的运行接口。它主要解决了多框架环境下模型部署的碎片化难题：开发者无需针对不同框架编写重复的推理代码，也避免了深入掌握各框架底层 C++ API 的陡峭学习曲线。通过 
Neuropod，研究人员可以自由选择擅长的框架构建模型，而工程团队则能用同一套代码无缝加载和切换不同来源的模型，极大简化了从实验到生产环境的落地流程。\n\n该工具特别适合需要在生产环境中集成多种深度学习模型的算法工程师、后端开发者及科研人员。其核心技术亮点包括支持“零拷贝”数据传输以提升推理效率、允许在运行时动态替换不同框架的模型而无需修改业务逻辑，以及提供进程外执行机制以实现模型隔离，增强系统稳定性。此外，Neuropod 还支持定义标准化的问题接口，让团队能专注于解决具体业务问题而非适配框架差异，从而快速构建通用的评估流水线与优化工具。","# Neuropod\n\n## What is Neuropod?\n\n[Neuropod](https:\u002F\u002Fgithub.com\u002Fuber\u002Fneuropod) is a library that provides a uniform interface to run deep learning models from multiple frameworks in C++ and Python. Neuropod makes it easy for researchers to build models in a framework of their choosing while also simplifying productionization of these models.\n\nIt currently supports TensorFlow, PyTorch, TorchScript, Keras and [Ludwig](http:\u002F\u002Fludwig.ai).\n\nFor more information:\n\n - [Uber Engineering blog post introducing Neuropod](https:\u002F\u002Feng.uber.com\u002Fintroducing-neuropod\u002F)\n - [Talk at NVIDIA GTC Spring 2021](https:\u002F\u002Fwww.nvidia.com\u002Fen-us\u002Fon-demand\u002Fsession\u002Fgtcspring21-s31643\u002F)\n\n## Why use Neuropod?\n\n#### Run models from any supported framework using one API\n\nRunning a TensorFlow model looks exactly like running a PyTorch model.\n\n```py\nx = np.array([1, 2, 3, 4])\ny = np.array([5, 6, 7, 8])\n\nfor model_path in [TF_ADDITION_MODEL_PATH, PYTORCH_ADDITION_MODEL_PATH]:\n    # Load the model\n    neuropod = load_neuropod(model_path)\n\n    # Run inference\n    results = neuropod.infer({\"x\": x, \"y\": y})\n\n    # array([6, 8, 10, 12])\n    print(results[\"out\"])\n```\n\nSee the [tutorial](https:\u002F\u002Fneuropod.ai\u002Ftutorial\u002F), [Python guide](https:\u002F\u002Fneuropod.ai\u002Fpyguide\u002F), or [C++ guide](https:\u002F\u002Fneuropod.ai\u002Fcppguide\u002F) for more examples.\n\nSome benefits of this include:\n\n- All of your inference code is framework agnostic.\n- You can easily switch between deep learning frameworks if necessary without changing runtime code.\n- Avoid the learning curve of using the 
C++ libtorch API and the C\u002FC++ TF API\n\nAny Neuropod model can be run from both C++ and Python (even PyTorch models that have not been converted to TorchScript).\n\n#### Define a Problem API\n\nThis lets you focus more on the problem you're solving rather than the framework you're using to solve it.\n\nFor example, if you define a problem API for 2d object detection, any model that implements it can reuse all the existing inference code and infrastructure for that problem.\n\n```py\nINPUT_SPEC = [\n    # BGR image\n    {\"name\": \"image\", \"dtype\": \"uint8\", \"shape\": (1200, 1920, 3)},\n]\n\nOUTPUT_SPEC = [\n    # shape: (num_detections, 4): (xmin, ymin, xmax, ymax)\n    # These values are in units of pixels. The origin is the top left corner\n    # with positive X to the right and positive Y towards the bottom of the image\n    {\"name\": \"boxes\", \"dtype\": \"float32\", \"shape\": (\"num_detections\", 4)},\n\n    # The list of classes that the network can output\n    # This must be some subset of ['vehicle', 'person', 'motorcycle', 'bicycle']\n    {\"name\": \"supported_object_classes\", \"dtype\": \"string\", \"shape\": (\"num_classes\",)},\n\n    # The probability of each class for each detection\n    # These should all be floats between 0 and 1\n    {\"name\": \"object_class_probability\", \"dtype\": \"float32\", \"shape\": (\"num_detections\", \"num_classes\")},\n]\n```\n\nThis lets you\n\n- Build a single metrics pipeline for a problem\n- Easily compare models solving the same problem (even if they're in different frameworks)\n- Build optimized inference code that can run any model that solves a particular problem\n- Swap out models that solve the same problem at runtime with no code change (even if the models are from different frameworks)\n- Run fast experiments\n\nSee the [tutorial](https:\u002F\u002Fneuropod.ai\u002Ftutorial\u002F) for more details.\n\n#### Build generic tools and pipelines\n\nIf you have several models that take in a 
similar set of inputs, you can build and optimize one framework-agnostic input generation pipeline and share it across models.\n\n#### Other benefits\n\n- Fully self-contained models (including custom ops)\n- [Efficient zero-copy operations](https:\u002F\u002Fneuropod.ai\u002Fadvanced\u002Fefficient_tensor_creation\u002F)\n- [Tested on](https:\u002F\u002Fneuropod.ai\u002Fdeveloping\u002F#build-matrix) platforms including\n    - Mac, Linux, Linux (GPU)\n    - Four or five versions of each supported framework\n    - Five versions of Python\n\n- Model isolation with [out-of-process execution](https:\u002F\u002Fneuropod.ai\u002Fadvanced\u002Fope\u002F)\n    - Use multiple different versions of frameworks in the same application\n        - Ex: Experimental models using Torch nightly along with models using Torch 1.1.0\n- Switch from running in-process to running out-of-process with [one line of code](https:\u002F\u002Fneuropod.ai\u002Fadvanced\u002Fope\u002F)\n\n## Getting started\n\nSee the [basic introduction tutorial](https:\u002F\u002Fneuropod.ai\u002Ftutorial\u002F) for an overview of how to get started with Neuropod.\n\nThe [Python guide](https:\u002F\u002Fneuropod.ai\u002Fpyguide\u002F) and [C++ guide](https:\u002F\u002Fneuropod.ai\u002Fcppguide\u002F) go into more detail on running Neuropod models.\n","# Neuropod\n\n## 什么是 Neuropod？\n\n[Neuropod](https:\u002F\u002Fgithub.com\u002Fuber\u002Fneuropod) 是一个库，它提供了一个统一的接口，用于在 C++ 和 Python 中运行来自多个深度学习框架的模型。Neuropod 使研究人员能够轻松地使用自己选择的框架构建模型，同时也简化了这些模型的生产部署。\n\n目前，Neuropod 支持 TensorFlow、PyTorch、TorchScript、Keras 以及 [Ludwig](http:\u002F\u002Fludwig.ai)。\n\n更多信息：\n\n - [Uber 工程博客中介绍 Neuropod 的文章](https:\u002F\u002Feng.uber.com\u002Fintroducing-neuropod\u002F)\n - [NVIDIA GTC Spring 2021 大会上的演讲](https:\u002F\u002Fwww.nvidia.com\u002Fen-us\u002Fon-demand\u002Fsession\u002Fgtcspring21-s31643\u002F)\n\n## 为什么使用 Neuropod？\n\n#### 使用同一个 API 运行任何支持框架的模型\n\n运行 TensorFlow 模型与运行 PyTorch 模型看起来完全相同。\n\n```py\nx = np.array([1, 2, 3, 
4])\ny = np.array([5, 6, 7, 8])\n\nfor model_path in [TF_ADDITION_MODEL_PATH, PYTORCH_ADDITION_MODEL_PATH]:\n    # 加载模型\n    neuropod = load_neuropod(model_path)\n\n    # 进行推理\n    results = neuropod.infer({\"x\": x, \"y\": y})\n\n    # array([6, 8, 10, 12])\n    print(results[\"out\"])\n```\n\n更多示例请参阅 [教程](https:\u002F\u002Fneuropod.ai\u002Ftutorial\u002F)、[Python 指南](https:\u002F\u002Fneuropod.ai\u002Fpyguide\u002F) 或 [C++ 指南](https:\u002F\u002Fneuropod.ai\u002Fcppguide\u002F)。\n\n这样做的好处包括：\n\n- 所有的推理代码都与具体框架无关。\n- 如有必要，可以轻松地在不同的深度学习框架之间切换，而无需更改运行时代码。\n- 避免学习 C++ libtorch API 和 C\u002FC++ TF API 的复杂性。\n\n任何 Neuropod 模型都可以同时从 C++ 和 Python 中运行（即使是尚未转换为 TorchScript 的 PyTorch 模型）。\n\n#### 定义问题 API\n\n这使您可以更专注于解决的问题，而不是使用的框架。例如，如果您为 2D 物体检测定义了一个问题 API，那么任何实现该 API 的模型都可以复用现有的推理代码和基础设施来处理该问题。\n\n```py\nINPUT_SPEC = [\n    # BGR 图像\n    {\"name\": \"image\", \"dtype\": \"uint8\", \"shape\": (1200, 1920, 3)},\n]\n\nOUTPUT_SPEC = [\n    # 形状：(num_detections, 4)：(xmin, ymin, xmax, ymax)\n    # 这些值以像素为单位，原点位于图像的左上角，X 轴向右，Y 轴向下\n    {\"name\": \"boxes\", \"dtype\": \"float32\", \"shape\": (\"num_detections\", 4)},\n\n    # 网络可以输出的类别列表\n    # 必须是 ['vehicle', 'person', 'motorcycle', 'bicycle'] 的子集\n    {\"name\": \"supported_object_classes\", \"dtype\": \"string\", \"shape\": (\"num_classes\",)},\n\n    # 每个检测结果对应每个类别的概率\n    # 这些值应介于 0 和 1 之间\n    {\"name\": \"object_class_probability\", \"dtype\": \"float32\", \"shape\": (\"num_detections\", \"num_classes\")},\n]\n```\n\n这使您能够：\n\n- 为某个问题构建单一的指标管道；\n- 轻松比较解决同一问题的不同模型（即使它们来自不同的框架）；\n- 构建优化的推理代码，使其能够运行解决特定问题的任何模型；\n- 在不修改代码的情况下，在运行时替换解决同一问题的模型（即使这些模型来自不同的框架）；\n- 快速进行实验。\n\n更多详细信息请参阅 [教程](https:\u002F\u002Fneuropod.ai\u002Ftutorial\u002F)。\n\n#### 构建通用工具和流水线\n\n如果您有多个模型接受相似的输入集，您可以构建并优化一个与框架无关的输入生成流水线，并在不同模型之间共享。\n\n#### 其他优势\n\n- 完全自包含的模型（包括自定义操作）；\n- [高效的零拷贝操作](https:\u002F\u002Fneuropod.ai\u002Fadvanced\u002Fefficient_tensor_creation\u002F)；\n- [经过测试的平台](https:\u002F\u002Fneuropod.ai\u002Fdeveloping\u002F#build-matrix)，包括：\n  
  - Mac、Linux、Linux（GPU）；\n    - 每个支持框架的四到五个版本；\n    - Python 的五个版本。\n- 使用 [进程外执行](https:\u002F\u002Fneuropod.ai\u002Fadvanced\u002Fope\u002F) 实现模型隔离；\n    - 可以在同一应用程序中使用不同版本的框架；\n        - 例如：同时使用基于 Torch nightly 的实验性模型和基于 Torch 1.1.0 的模型。\n- 通过 [一行代码](https:\u002F\u002Fneuropod.ai\u002Fadvanced\u002Fope\u002F) 即可从进程内运行切换到进程外运行。\n\n## 开始使用\n\n请参阅 [基础入门教程](https:\u002F\u002Fneuropod.ai\u002Ftutorial\u002F)，了解如何开始使用 Neuropod 的概览。\n\n[Python 指南](https:\u002F\u002Fneuropod.ai\u002Fpyguide\u002F) 和 [C++ 指南](https:\u002F\u002Fneuropod.ai\u002Fcppguide\u002F) 则更深入地介绍了如何运行 Neuropod 模型。","# Neuropod 快速上手指南\n\nNeuropod 是一个由 Uber 开源的深度学习模型推理库，它提供统一的 C++ 和 Python 接口，支持在多种框架（TensorFlow, PyTorch, TorchScript, Keras, Ludwig）之间无缝运行模型。通过 Neuropod，开发者可以编写与框架无关的推理代码，轻松切换底层模型而无需修改业务逻辑。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**：Linux (推荐 Ubuntu 16.04\u002F18.04\u002F20.04) 或 macOS。\n    *   *注：虽然支持 GPU，但基础安装通常先验证 CPU 版本。*\n*   **Python 版本**：支持 Python 3.5 至 3.9（建议 Python 3.7+）。\n*   **前置依赖**：\n    *   已安装对应的深度学习框架（如 `tensorflow` 或 `torch`），或者准备好已导出的模型文件（`.pb`, `.pt`, `.ts` 等）。\n    *   C++ 开发需安装 CMake 和兼容的编译器（GCC\u002FClang）。\n\n> **国内加速提示**：安装 Python 依赖时，建议使用清华或阿里镜像源以提升下载速度。\n> 例如：`pip install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple ...`\n\n## 安装步骤\n\nNeuropod 主要通过 Python pip 进行安装。目前官方 PyPI 源可能更新较慢，若遇到网络问题或版本缺失，建议从 GitHub Release 页面下载对应平台的 `.whl` 文件进行本地安装。\n\n### 方法一：通过 pip 安装（推荐）\n\n```bash\n# 使用国内镜像源加速安装\npip install neuropod -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n### 方法二：手动安装特定版本\n\n如果自动安装失败，请访问 [Neuropod GitHub Releases](https:\u002F\u002Fgithub.com\u002Fuber\u002Fneuropod\u002Freleases) 下载适合您系统（Linux\u002FmacOS）和 Python 版本的 `.whl` 文件，然后执行：\n\n```bash\npip install \u002Fpath\u002Fto\u002Fdownloaded\u002Fneuropod-x.x.x-cp3x-cp3x-linux_x86_64.whl\n```\n\n## 基本使用\n\nNeuropod 的核心优势在于统一的 API。无论底层是 TensorFlow 还是 PyTorch，加载和推理的代码完全一致。\n\n以下是最简单的 Python 使用示例，演示如何加载不同框架的模型并执行推理：\n\n```python\nimport numpy as np\nfrom neuropod import 
load_neuropod\n\n# 准备输入数据\nx = np.array([1, 2, 3, 4])\ny = np.array([5, 6, 7, 8])\n\n# 假设你已经有两个不同框架训练的相同功能模型路径\n# TF_ADDITION_MODEL_PATH: TensorFlow 模型路径\n# PYTORCH_ADDITION_MODEL_PATH: PyTorch 模型路径\nmodel_paths = [TF_ADDITION_MODEL_PATH, PYTORCH_ADDITION_MODEL_PATH]\n\nfor model_path in model_paths:\n    # 1. 加载模型 (统一接口)\n    neuropod_model = load_neuropod(model_path)\n\n    # 2. 执行推理 (传入字典格式的输入)\n    results = neuropod_model.infer({\"x\": x, \"y\": y})\n\n    # 3. 获取输出\n    # 输出格式为字典，例如: {\"out\": array([6, 8, 10, 12])}\n    print(results[\"out\"])\n```\n\n### 关键特性说明\n\n*   **框架无关性**：上述代码中，`load_neuropod` 会自动识别模型类型，无需针对 TF 或 PyTorch 编写不同的加载逻辑。\n*   **输入输出规范**：输入和输出均通过 Python 字典传递，键名需与模型定义的名称匹配。\n*   **零拷贝优化**：Neuropod 支持高效的零拷贝（zero-copy）操作，减少数据在内存中的复制开销，提升推理性能。\n\n如需更复杂的场景（如定义标准化的问题 API、C++ 调用或进程外执行以隔离环境），请参考官方详细文档。","某自动驾驶团队需要在生产环境中同时部署由不同研究员使用 TensorFlow 和 PyTorch 开发的多个目标检测模型，以进行实时路况分析。\n\n### 没有 neuropod 时\n- **推理代码重复开发**：工程师必须为 TensorFlow 模型编写一套 C++ 加载逻辑，再为 PyTorch 模型编写另一套基于 LibTorch 的逻辑，导致代码库冗余且难以维护。\n- **框架切换成本高昂**：当算法团队决定将某个模型从 TensorFlow 迁移到 PyTorch 以提升精度时，后端服务需要重写大量底层推理代码并重新编译。\n- **统一评测困难**：由于不同框架的输入输出格式处理不一致，构建统一的性能监控和指标计算流水线极其复杂，难以横向对比模型效果。\n- **环境依赖冲突**：在同一进程中混合加载不同版本的深度学习框架库，极易引发符号冲突或内存错误，导致服务崩溃。\n\n### 使用 neuropod 后\n- **统一接口调用**：无论底层是 TensorFlow 还是 PyTorch 模型，工程师只需调用同一套 `load_neuropod` 和 `infer` API，推理代码完全与框架解耦。\n- **无缝热替换模型**：更换模型框架时无需修改任何运行时代码，仅需替换模型文件即可实现从 TensorFlow 到 PyTorch 的平滑过渡。\n- **标准化问题定义**：通过定义标准的输入输出规范（Problem API），团队轻松构建了通用的评测流水线，可直接对比不同框架模型的准确率与延迟。\n- **进程隔离保稳定**：利用 neuropod 的进程外执行（Out-of-process）功能，各模型在独立进程中运行，彻底避免了多框架共存时的依赖冲突问题。\n\nneuropod 通过屏蔽底层框架差异，让算法团队能自由选择工具创新，同时让工程团队拥有稳定、统一的生产级部署能力。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fuber_neuropod_90770c74.png","uber","Uber Open Source","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fuber_3a532a8c.png","Open Source at 
Uber",null,"http:\u002F\u002Fwww.uber.com","https:\u002F\u002Fgithub.com\u002Fuber",[80,84,88,92,96,100,104,108],{"name":81,"color":82,"percentage":83},"C++","#f34b7d",54.4,{"name":85,"color":86,"percentage":87},"Python","#3572A5",22.7,{"name":89,"color":90,"percentage":91},"Starlark","#76d275",9,{"name":93,"color":94,"percentage":95},"Java","#b07219",7.4,{"name":97,"color":98,"percentage":99},"C","#555555",3.5,{"name":101,"color":102,"percentage":103},"Shell","#89e051",2.7,{"name":105,"color":106,"percentage":107},"Dockerfile","#384d54",0.3,{"name":109,"color":110,"percentage":111},"PureBasic","#5a6986",0,943,73,"2026-03-27T03:12:54","Apache-2.0",4,"Linux, macOS","未说明（支持 Linux GPU 环境，但具体型号、显存及 CUDA 版本未在提供的文本中列出）","未说明",{"notes":121,"python":122,"dependencies":123},"该工具是一个统一接口库，支持在 C++ 和 Python 中运行多种框架的模型。支持进程外执行（out-of-process），允许在同一应用中混合使用不同版本的深度学习框架（例如同时使用 Torch nightly 和 Torch 1.1.0）。模型是完全自包含的，包括自定义算子。","支持五个 Python 版本（具体版本号未在提供的文本中列出）",[124,125,126,127,128],"TensorFlow","PyTorch","TorchScript","Keras","Ludwig",[14],[131,132,133,134,135,136,137,138,139],"tensorflow","pytorch","keras","deep-learning","deeplearning","machine-learning","machinelearning","inference","incubation","2026-03-27T02:49:30.150509","2026-04-06T21:10:37.137315",[143,148,153,158,162,166],{"id":144,"question_zh":145,"answer_zh":146,"source_url":147},19984,"加载 Neuropod 模型时遇到 'ImportError: libpython3.7m.so.1.0: cannot open shared object file' 错误怎么办？","该错误通常是因为缺少对应的后端依赖或 Python 环境配置问题。解决方法包括：\n1. 确保安装了正确的后端（backend）。\n2. 
如果是从源码编译 Python，必须使用 '--enable-shared' 选项配置，例如：'.\u002Fconfigure --enable-optimizations --enable-shared'。官方 Python 包通常没有问题，但自定义构建时需要显式启用共享库支持。","https:\u002F\u002Fgithub.com\u002Fuber\u002Fneuropod\u002Fissues\u002F387",{"id":149,"question_zh":150,"answer_zh":151,"source_url":152},19985,"在高负载下使用多个 OPE 实例时，主进程出现 'SIGSEGV: segmentation violation' 崩溃如何解决？","该问题的根本原因与全局内存分配器（global allocator）有关。解决方案是避免使用单一的线程局部（thread_local）分配器，而是为每个模型实例使用独立的分配器，以最大化缓存命中率并避免竞争条件。维护者已提供修复方案（参考 PR #433），建议升级到包含该修复的版本。临时规避方法是将 'free_memory_every_cycle' 设置为 false，但这不是长久之计。","https:\u002F\u002Fgithub.com\u002Fuber\u002Fneuropod\u002Fissues\u002F397",{"id":154,"question_zh":155,"answer_zh":156,"source_url":157},19986,"为什么在单调用者对多模型实例进行轮询（Round-Robin）调用时，性能（RPS）反而大幅下降？","这种现象通常是由缓存未命中（cache misses）和分支预测失败（branch mispredictions）引起的。当调用者在多个实例间频繁切换时，CPU 缓存无法有效利用，导致性能下降。维护者建议：\n1. 检查基准测试是否测量了实际关心的指标。\n2. 考虑 CPU 频率缩放（CPU frequency scaling）的影响。\n3. 如果目的是提高吞吐量，需确保测试场景符合实际生产负载模式，避免人为制造缓存抖动。","https:\u002F\u002Fgithub.com\u002Fuber\u002Fneuropod\u002Fissues\u002F336",{"id":159,"question_zh":160,"answer_zh":161,"source_url":147},19987,"如何正确编译支持 Neuropod 的 Python 环境？","如果使用官方 Python 包，通常无需额外配置。但如果需要从源码编译 Python 以支持 Neuropod，必须在配置时启用共享库支持。具体命令如下：\n'.\u002Fconfigure --enable-optimizations --enable-shared'\n然后执行 'make' 和 'make install'。缺少 '--enable-shared' 会导致加载 Neuropod 时出现共享库找不到错误。",{"id":163,"question_zh":164,"answer_zh":165,"source_url":152},19988,"Neuropod 在多实例部署时，如何优化内存分配策略以避免崩溃？","应避免使用全局或线程局部的内存分配器。推荐为每个模型实例创建独立的内存分配器（allocator per model），这样可以减少锁竞争并提高缓存局部性。维护者指出，将分配器绑定到模型而非线程是解决高并发下 SIGSEGV 崩溃的关键。相关修复已在后续版本中合并。",{"id":167,"question_zh":168,"answer_zh":169,"source_url":157},19989,"在测试多模型实例并行推理性能时，有哪些注意事项？","进行此类测试时需注意：\n1. 确保基准测试反映真实场景，避免仅测量初始化或退出时间。\n2. 关注 CPU 缓存行为和分支预测对性能的影响。\n3. 使用固定索引（如始终调用实例 0）与轮询策略对比时，性能差异可能源于缓存局部性而非代码逻辑。\n4. 
可借助性能分析工具（profiling）和系统统计信息（如 CPU 频率）进一步诊断瓶颈。",[171,176,180,184,188,192,197,201,206,211,216,221,226,231],{"id":172,"version":173,"summary_zh":174,"released_at":175},118006,"v0.3.0-rc7","有关安装说明，请参阅 https:\u002F\u002Fneuropod.ai\u002Fdocs\u002Fmaster\u002Finstalling\u002F","2022-07-04T05:41:58",{"id":177,"version":178,"summary_zh":174,"released_at":179},118007,"v0.3.0-rc6","2022-02-23T01:01:16",{"id":181,"version":182,"summary_zh":174,"released_at":183},118008,"v0.3.0-rc5","2022-01-28T07:29:02",{"id":185,"version":186,"summary_zh":174,"released_at":187},118009,"v0.3.0-rc4","2021-11-18T04:26:56",{"id":189,"version":190,"summary_zh":174,"released_at":191},118010,"v0.3.0-rc3","2021-09-08T05:16:10",{"id":193,"version":194,"summary_zh":195,"released_at":196},118011,"v0.3.0-rc2","（更多详情即将发布）\n\n有关安装说明，请参阅 https:\u002F\u002Fneuropod.ai\u002Fdocs\u002Fmaster\u002Finstalling\u002F","2021-08-17T02:53:13",{"id":198,"version":199,"summary_zh":195,"released_at":200},118012,"v0.3.0-rc1","2020-12-18T22:09:13",{"id":202,"version":203,"summary_zh":204,"released_at":205},118013,"v0.2.0","Neuropod 首次公开发布！\n\n更多详细信息请参阅文档：https:\u002F\u002Fneuropod.ai，安装说明请访问：https:\u002F\u002Fneuropod.ai\u002Finstalling\u002F。\n\n## 支持的框架\u002F后端\n- TensorFlow：`1.12.0`、`1.13.1`、`1.14.0`、`1.15.0`\n- TorchScript：`1.1.0`、`1.2.0`、`1.3.0`、`1.4.0`、`1.5.0`\n- Python：`2.7`、`3.5`、`3.6`、`3.7`、`3.8`\n\n支持以下平台：`macOS`、`Linux (CPU)`、`Linux (GPU)`\n\n请注意，Python 后端也可用于运行 PyTorch 模型（即尚未转换为 TorchScript 的模型）。\n\n（有关与先前版本相比的变化，请参阅 v0.2.0-rc2 的发行说明）","2020-06-07T23:17:49",{"id":207,"version":208,"summary_zh":209,"released_at":210},118014,"v0.2.0-rc2","# 主要变更\n\n包括 v0.2.0rc1 中的所有内容 +\n\n- 读\u002F写字符串访问器\n- 文档改进\n- 支持 Torch 1.5.0\n- 支持 Python 3.8\n- 移除 `TestNeuropodBackend`，并将 `TestNeuropodTensor` 替换为 `GenericNeuropodTensor`","2020-05-29T22:38:19",{"id":212,"version":213,"summary_zh":214,"released_at":215},118015,"v0.2.0-rc1","# 主要变更\n\n包括 v0.2.0rc0 中的所有内容 +\n\n- 支持 Python 的四个版本（2.7、3.5、3.6 和 
3.7）","2020-04-11T02:07:19",{"id":217,"version":218,"summary_zh":219,"released_at":220},118016,"v0.2.0-rc0","# 主要变更\n\n库已从 Neuropods 更名为 Neuropod，所有导入、命名空间、类和方法名称均已相应更改。\n\n## Python\n- Python 库现在默认使用原生绑定和 OPE。这意味着 C++ 和 Python 使用相同的推理代码路径。\n\n- 这也意味着在 Python 中使用 Neuropod 推理时，不再依赖您环境中已安装的框架版本。\n\n- 各所需后端的 wheel 包可在下方安装。例如，若要运行 TF 模型，至少需要安装下方列出的一个 TensorFlow Neuropod Python 包。\n\n这一变更使 Neuropod 在导出 TorchScript、TensorFlow 或 Keras 模型时，能在更隔离的环境中（独立于您的 Torch 或 TF 版本）运行\u002F测试您的模型。\n\n_注意：如果失败，可以在调用 `load_neuropod` 时设置 `_always_use_native=False`。`_always_use_native` 是一个临时解决方案，计划在正式发布时移除，如遇问题请提交 issue。_\n\n## C++\n\n- 可以通过向 `infer` 传递第二个参数来请求模型输出的子集。\n\n- 进程外执行（OPE）现在通过 `RuntimeOptions` 结构体控制。详情请参阅 OPE 文档。\n\n# 其他变更\n\n- 对进程外执行的多项改进\n- 大量构建及构建系统方面的改进\n- 重新格式化了代码，并在 CI 中加入了 lint 与静态分析\n- 添加了对 Torch `v1.4.0` 的支持\n\n更多详情请参阅 `v0.1.0-rc3` 与 `v0.2.0-rc0` 之间的提交列表：https:\u002F\u002Fgithub.com\u002Fuber\u002Fneuropod\u002Fcompare\u002Fv0.1.0-rc3...v0.2.0-rc0","2020-04-07T00:43:33",{"id":222,"version":223,"summary_zh":224,"released_at":225},118017,"v0.1.0-rc3","## 变更\n- 添加对 Torch `v1.3.0` 的支持\n- 添加对 TensorFlow `v1.15.0` 的支持\n- 文档改进\n- Bug 修复","2019-11-05T03:12:15",{"id":227,"version":228,"summary_zh":229,"released_at":230},118018,"v0.1.0-rc2","此候选版本旨在测试发布流程，并验证发布产物能否正常工作。\n\n## 主要变更：\nNeuropod 模型现在默认打包为 zip 文件（而非目录）。\n\n例如，\n```py\ncreate_tensorflow_neuropod(\n    neuropod_path=\"\u002Fpath\u002Fto\u002Fmy_addition_model.neuropod\",\n    model_name=\"addition_model\",\n    ...\n)\n```\n\n将在 `\u002Fpath\u002Fto\u002Fmy_addition_model.neuropod` 生成一个自包含的单一文件。\n\n可以在 `create_*_neuropod` 中设置 `package_as_zip=False` 以保留旧行为。\n\n*注意：加载是向后兼容的（即 zip 与目录形式的 neuropod 均可从 C++ 和 Python 加载）*\n\n## 其他变更\n- 新增了用于在 C++ 中创建张量的工厂函数：`zeros`、`ones`、`full`、`randn`、`arange` 和 `eye`\n- `NeuropodValueMap` 现在可通过 `neuropods::serialize` 序列化\n- 将 pybind11 更新至 `v2.4.2`\n- 将 boost 更新至 `v1.68.0`\n- 各类 bug 修复\n- 更完善的打包器文档字符串\n","2019-10-22T20:30:17",{"id":232,"version":233,"summary_zh":234,"released_at":235},118019,"v0.1.0-rc1","首个通过 GitHub 发布的 Neuropod 候选版本！\n\n此候选版本旨在测试发布流程，并验证发布产物能否正常工作。","2019-09-18T05:02:57"]