[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-NVIDIA--NemoClaw":3,"tool-NVIDIA--NemoClaw":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":79,"owner_email":79,"owner_twitter":79,"owner_website":80,"owner_url":81,"languages":82,"stars":107,"forks":108,"last_commit_at":109,"license":110,"difficulty_score":10,"env_os":111,"env_gpu":112,"env_ram":113,"env_deps":114,"category_tags":121,"github_topics":79,"view_count":122,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":123,"updated_at":124,"faqs":125,"releases":153},769,"NVIDIA\u002FNemoClaw","NemoClaw","Run OpenClaw more securely inside NVIDIA OpenShell with managed inference","NemoClaw 是 NVIDIA 推出的一款开源参考栈，旨在简化 OpenClaw 常驻助手的运行流程。它通过集成 NVIDIA OpenShell 运行时，为自主智能体提供了额外的安全保障，有效解决了在本地环境中运行 AI 代理时面临的安全隐患问题。NemoClaw 不仅包含必要的运行时组件，还预置了 NVIDIA Nemotron 等开源模型，让用户能更快上手构建自己的智能体应用。\n\nNemoClaw 特别适合开发者、AI 研究人员以及对智能体技术感兴趣的技术爱好者。它提供了一个沙盒化的执行环境，结合 Docker 或 Kubernetes 进行容器管理，确保了隔离性和稳定性。需要注意的是，NemoClaw 目前处于 Alpha 预览阶段，界面和 API 可能会随设计迭代而变化，尚未达到生产就绪状态。因此，它更适合作为早期实验和反馈收集的平台，帮助用户在安全的环境中探索 AI 智能体的潜力。","# NVIDIA NemoClaw: Reference Stack for Running OpenClaw in OpenShell\n\n\u003C!-- start-badges 
-->\n[![License](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-Apache_2.0-blue)](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNemoClaw\u002Fblob\u002Fmain\u002FLICENSE)\n[![Security Policy](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FSecurity-Report%20a%20Vulnerability-red)](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNemoClaw\u002Fblob\u002Fmain\u002FSECURITY.md)\n[![Project Status](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fstatus-alpha-orange)](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNemoClaw\u002Fblob\u002Fmain\u002Fdocs\u002Fabout\u002Frelease-notes.md)\n\u003C!-- end-badges -->\n\n\u003C!-- start-intro -->\nNVIDIA NemoClaw is an open source reference stack that simplifies running [OpenClaw](https:\u002F\u002Fopenclaw.ai) always-on assistants more safely.\nIt installs the [NVIDIA OpenShell](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FOpenShell) runtime, part of NVIDIA Agent Toolkit, which provides additional security for running autonomous agents.\nIt also includes open source models such as [NVIDIA Nemotron](https:\u002F\u002Fbuild.nvidia.com).\n\u003C!-- end-intro -->\n\n> **Alpha software**\n>\n> NemoClaw is available in early preview starting March 16, 2026.\n> This software is not production-ready.\n> Interfaces, APIs, and behavior may change without notice as we iterate on the design.\n> The project is shared to gather feedback and enable early experimentation.\n> We welcome issues and discussion from the community while the project evolves.\n\n---\n\n## Quick Start\n\nFollow these steps to get started with NemoClaw and your first sandboxed OpenClaw agent.\n\n> **ℹ️ Note**\n>\n> NemoClaw creates a fresh OpenClaw instance inside the sandbox during onboarding.\n\n\u003C!-- start-quickstart-guide -->\n\n### Prerequisites\n\nCheck the prerequisites before you start to ensure you have the necessary software and hardware to run NemoClaw.\n\n#### Hardware\n\n| Resource | Minimum        | Recommended      
|\n|----------|----------------|------------------|\n| CPU      | 4 vCPU         | 4+ vCPU          |\n| RAM      | 8 GB           | 16 GB            |\n| Disk     | 20 GB free     | 40 GB free       |\n\nThe sandbox image is approximately 2.4 GB compressed. During image push, the Docker daemon, k3s, and the OpenShell gateway run alongside the export pipeline, which buffers decompressed layers in memory. On machines with less than 8 GB of RAM, this combined usage can trigger the OOM killer. If you cannot add memory, configuring at least 8 GB of swap can work around the issue at the cost of slower performance.\n\n#### Software\n\n| Dependency | Version                          |\n|------------|----------------------------------|\n| Linux      | Ubuntu 22.04 LTS or later |\n| Node.js    | 22.16 or later |\n| npm        | 10 or later |\n| Container runtime | Supported runtime installed and running |\n| [OpenShell](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FOpenShell) | Installed |\n\n#### Container Runtime Support\n\n| Platform | Supported runtimes | Notes |\n|----------|--------------------|-------|\n| Linux | Docker | Primary supported path today |\n| macOS (Apple Silicon) | Colima, Docker Desktop | Recommended runtimes for supported macOS setups |\n| macOS | Podman | Not supported yet. NemoClaw currently depends on OpenShell support for Podman on macOS. |\n| Windows WSL | Docker Desktop (WSL backend) | Supported target path |\n\n#### macOS first-run checklist\n\nOn a fresh macOS machine, install the prerequisites in this order:\n\n1. Install Xcode Command Line Tools:\n\n   ```bash\n   xcode-select --install\n   ```\n\n2. Install and start a supported container runtime:\n   - Docker Desktop\n   - Colima\n3. 
Run the NemoClaw installer.\n\nThis avoids the two most common first-run failures on macOS:\n\n- missing developer tools needed by the installer and Node.js toolchain\n- Docker connection errors when no supported container runtime is installed or running\n\n> **💡 Tip**\n>\n> For DGX Spark, follow the [DGX Spark setup guide](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNemoClaw\u002Fblob\u002Fmain\u002Fspark-install.md). It covers Spark-specific prerequisites, such as cgroup v2 and Docker configuration, before running the standard installer.\n\n### Install NemoClaw and Onboard OpenClaw Agent\n\nDownload and run the installer script.\nThe script installs Node.js if it is not already present, then runs the guided onboard wizard to create a sandbox, configure inference, and apply security policies.\n\n```bash\ncurl -fsSL https:\u002F\u002Fwww.nvidia.com\u002Fnemoclaw.sh | bash\n```\n\nIf you use nvm or fnm to manage Node.js, the installer may not update your current shell's PATH.\nIf `nemoclaw` is not found after install, run `source ~\u002F.bashrc` (or `source ~\u002F.zshrc` for zsh) or open a new terminal.\n\nWhen the install completes, a summary confirms the running environment:\n\n```text\n──────────────────────────────────────────────────\nSandbox      my-assistant (Landlock + seccomp + netns)\nModel        nvidia\u002Fnemotron-3-super-120b-a12b (NVIDIA Endpoints)\n──────────────────────────────────────────────────\nRun:         nemoclaw my-assistant connect\nStatus:      nemoclaw my-assistant status\nLogs:        nemoclaw my-assistant logs --follow\n──────────────────────────────────────────────────\n\n[INFO]  === Installation complete ===\n```\n\n### Chat with the Agent\n\nConnect to the sandbox, then chat with the agent through the TUI or the CLI.\n\n#### Connect to the Sandbox\n\nRun the following command to connect to the sandbox:\n\n```bash\nnemoclaw my-assistant connect\n```\n\nThis connects you to the sandbox shell `sandbox@my-assistant:~$` where you can 
run `openclaw` commands.\n\n#### OpenClaw TUI\n\nIn the sandbox shell, run the following command to open the OpenClaw TUI, which opens an interactive chat interface.\n\n```bash\nopenclaw tui\n```\n\nSend a test message to the agent and verify you receive a response.\n\n> **ℹ️ Note**\n>\n> The TUI is best for interactive back-and-forth. If you need the full text of a long response such as a large code generation output, use the CLI instead.\n\n#### OpenClaw CLI\n\nIn the sandbox shell, run the following command to send a single message and print the response:\n\n```bash\nopenclaw agent --agent main --local -m \"hello\" --session-id test\n```\n\nThis prints the complete response directly in the terminal and avoids relying on the TUI view for long output.\n\n### Uninstall\n\nTo remove NemoClaw and all resources created during setup, in the terminal outside the sandbox, run:\n\n```bash\ncurl -fsSL https:\u002F\u002Fraw.githubusercontent.com\u002FNVIDIA\u002FNemoClaw\u002Frefs\u002Fheads\u002Fmain\u002Funinstall.sh | bash\n```\n\nThe script removes sandboxes, the NemoClaw gateway and providers, related Docker images and containers, local state directories, and the global `nemoclaw` npm package. It does not remove shared system tooling such as Docker, Node.js, npm, or Ollama.\n\n| Flag               | Effect                                              |\n|--------------------|-----------------------------------------------------|\n| `--yes`            | Skip the confirmation prompt.                       |\n| `--keep-openshell` | Leave the `openshell` binary installed.              |\n| `--delete-models`  | Also remove NemoClaw-pulled Ollama models.           
|\n\nFor example, to skip the confirmation prompt:\n\n```bash\ncurl -fsSL https:\u002F\u002Fraw.githubusercontent.com\u002FNVIDIA\u002FNemoClaw\u002Frefs\u002Fheads\u002Fmain\u002Funinstall.sh | bash -s -- --yes\n```\n\n\u003C!-- end-quickstart-guide -->\n\n---\n\n## How It Works\n\nNemoClaw installs the NVIDIA OpenShell runtime, then creates a sandboxed OpenClaw environment where every network request, file access, and inference call is governed by declarative policy. The `nemoclaw` CLI orchestrates the full stack: OpenShell gateway, sandbox, inference provider, and network policy.\n\n| Component        | Role                                                                                      |\n|------------------|-------------------------------------------------------------------------------------------|\n| **Plugin**       | TypeScript CLI commands for launch, connect, status, and logs.                            |\n| **Blueprint**    | Versioned Python artifact that orchestrates sandbox creation, policy, and inference setup. |\n| **Sandbox**      | Isolated OpenShell container running OpenClaw with policy-enforced egress and filesystem.  |\n| **Inference**    | Provider-routed model calls, routed through the OpenShell gateway, transparent to the agent. |\n\nThe blueprint lifecycle follows four stages: resolve the artifact, verify its digest, plan the resources, and apply through the OpenShell CLI.\n\nWhen something goes wrong, errors may originate from either NemoClaw or the OpenShell layer underneath. Run `nemoclaw \u003Cname> status` for NemoClaw-level health and `openshell sandbox list` to check the underlying sandbox state.\n\n---\n\n## Inference\n\nInference requests from the agent never leave the sandbox directly. 
OpenShell intercepts every call and routes it to the provider you selected during onboarding.\n\nSupported non-experimental onboarding paths:\n\n| Provider | Notes |\n|---|---|\n| NVIDIA Endpoints | Curated hosted models on `integrate.api.nvidia.com`. |\n| OpenAI | Curated GPT models plus `Other...` for manual model entry. |\n| Other OpenAI-compatible endpoint | For proxies and compatible gateways. |\n| Anthropic | Curated Claude models plus `Other...` for manual model entry. |\n| Other Anthropic-compatible endpoint | For Claude proxies and compatible gateways. |\n| Google Gemini | Google's OpenAI-compatible endpoint. |\n\nDuring onboarding, NemoClaw validates the selected provider and model before it creates the sandbox:\n\n- OpenAI-compatible providers: tries `\u002Fresponses` first, then `\u002Fchat\u002Fcompletions`\n- Anthropic-compatible providers: tries `\u002Fv1\u002Fmessages`\n- If validation fails, the wizard prompts you to fix the selection before continuing\n\nCredentials stay on the host in `~\u002F.nemoclaw\u002Fcredentials.json`. The sandbox only sees the routed `inference.local` endpoint, not your raw provider key.\n\nLocal Ollama is supported in the standard onboarding flow. 
Local vLLM remains experimental, and local host-routed inference on macOS still depends on OpenShell host-routing support in addition to the local service itself being reachable on the host.\n\n## Host-Side State and Config\n\nNemoClaw keeps its operator-facing state on the host rather than inside the sandbox.\nThese are the main files new users usually need to locate:\n\n| Path | Purpose |\n|---|---|\n| `~\u002F.nemoclaw\u002Fcredentials.json` | Provider credentials saved during onboarding |\n| `~\u002F.nemoclaw\u002Fsandboxes.json` | Registered sandbox metadata, including the default sandbox selection |\n| `~\u002F.openclaw\u002Fopenclaw.json` | Host OpenClaw configuration that NemoClaw snapshots or restores during migration flows |\n\nCommon environment variables for optional services and local access include `TELEGRAM_BOT_TOKEN`, `ALLOWED_CHAT_IDS`, and `CHAT_UI_URL`.\nFor normal sandbox setup and reconfiguration, prefer `nemoclaw onboard` over editing these files by hand.\n\n---\n\n## Protection Layers\n\nThe sandbox starts with a default policy that controls network egress and filesystem access:\n\n| Layer      | What it protects                                    | When it applies             |\n|------------|-----------------------------------------------------|-----------------------------|\n| Network    | Blocks unauthorized outbound connections.           | Hot-reloadable at runtime.  |\n| Filesystem | Prevents reads\u002Fwrites outside `\u002Fsandbox` and `\u002Ftmp`.| Locked at sandbox creation. |\n| Process    | Blocks privilege escalation and dangerous syscalls. | Locked at sandbox creation. |\n| Inference  | Reroutes model API calls to controlled backends.    | Hot-reloadable at runtime.  
|\n\nWhen the agent tries to reach an unlisted host, OpenShell blocks the request and surfaces it in the TUI for operator approval.\n\n---\n\n## Configuring Sandbox Policy\n\nThe sandbox policy is defined in a declarative YAML file and enforced by the OpenShell runtime.\nNemoClaw ships a default policy in [`nemoclaw-blueprint\u002Fpolicies\u002Fopenclaw-sandbox.yaml`](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNemoClaw\u002Fblob\u002Fmain\u002Fnemoclaw-blueprint\u002Fpolicies\u002Fopenclaw-sandbox.yaml) that denies all network egress except explicitly listed endpoints.\n\nOperators can customize the policy in two ways:\n\n| Method | How | Scope |\n|--------|-----|-------|\n| **Static** | Edit `openclaw-sandbox.yaml` and re-run `nemoclaw onboard`. | Persists across restarts. |\n| **Dynamic** | Run `openshell policy set \u003Cpolicy-file>` on a running sandbox. | Session only; resets on restart. |\n\nNemoClaw includes preset policy files for common integrations such as PyPI, Docker Hub, Slack, and Jira in `nemoclaw-blueprint\u002Fpolicies\u002Fpresets\u002F`. Apply a preset as-is or use it as a starting template.\n\nNemoClaw is an open project — we are still determining which presets to ship by default. If you have suggestions, please open an [issue](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNemoClaw\u002Fissues) or [discussion](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNemoClaw\u002Fdiscussions).\n\nWhen the agent attempts to reach an endpoint not covered by the policy, OpenShell blocks the request and surfaces it in the TUI (`openshell term`) for the operator to approve or deny in real time. Approved endpoints persist for the current session only.\n\nFor step-by-step instructions, see [Customize Network Policy](https:\u002F\u002Fdocs.nvidia.com\u002Fnemoclaw\u002Flatest\u002Fnetwork-policy\u002Fcustomize-network-policy.html). 
For the underlying enforcement details, see the OpenShell [Policy Schema](https:\u002F\u002Fdocs.nvidia.com\u002Fopenshell\u002Flatest\u002Freference\u002Fpolicy-schema.html) and [Sandbox Policies](https:\u002F\u002Fdocs.nvidia.com\u002Fopenshell\u002Flatest\u002Fsandboxes\u002Fpolicies.html) documentation.\n\n---\n\n## Key Commands\n\n### Host commands (`nemoclaw`)\n\nRun these on the host to set up, connect to, and manage sandboxes.\n\n| Command                              | Description                                            |\n|--------------------------------------|--------------------------------------------------------|\n| `nemoclaw onboard`                  | Interactive setup wizard: gateway, providers, sandbox. |\n| `nemoclaw \u003Cname> connect`            | Open an interactive shell inside the sandbox.          |\n| `openshell term`                     | Launch the OpenShell TUI for monitoring and approvals. |\n| `nemoclaw start` \u002F `stop` \u002F `status` | Manage auxiliary services (Telegram bridge, tunnel).   
|\n\nSee the full [CLI reference](https:\u002F\u002Fdocs.nvidia.com\u002Fnemoclaw\u002Flatest\u002Freference\u002Fcommands.html) for all commands, flags, and options.\n\n---\n\n## Learn More\n\nRefer to the documentation for more information on NemoClaw.\n\n- [Overview](https:\u002F\u002Fdocs.nvidia.com\u002Fnemoclaw\u002Flatest\u002Fabout\u002Foverview.html): Learn what NemoClaw does and how it fits together.\n- [How It Works](https:\u002F\u002Fdocs.nvidia.com\u002Fnemoclaw\u002Flatest\u002Fabout\u002Fhow-it-works.html): Learn about the plugin, blueprint, and sandbox lifecycle.\n- [Architecture](https:\u002F\u002Fdocs.nvidia.com\u002Fnemoclaw\u002Flatest\u002Freference\u002Farchitecture.html): Learn about the plugin structure, blueprint lifecycle, and sandbox environment.\n- [Inference Profiles](https:\u002F\u002Fdocs.nvidia.com\u002Fnemoclaw\u002Flatest\u002Freference\u002Finference-profiles.html): Learn how NemoClaw configures routed inference providers.\n- [Network Policies](https:\u002F\u002Fdocs.nvidia.com\u002Fnemoclaw\u002Flatest\u002Freference\u002Fnetwork-policies.html): Learn about egress control and policy customization.\n- [CLI Commands](https:\u002F\u002Fdocs.nvidia.com\u002Fnemoclaw\u002Flatest\u002Freference\u002Fcommands.html): Learn about the full command reference.\n- [Troubleshooting](https:\u002F\u002Fdocs.nvidia.com\u002Fnemoclaw\u002Flatest\u002Freference\u002Ftroubleshooting.html): Troubleshoot common issues and resolution steps.\n- [Discord](https:\u002F\u002Fdiscord.gg\u002FXFpfPv9Uvx): Join the community for questions and discussion.\n\n## License\n\nThis project is licensed under the [Apache License 2.0](LICENSE).\n","# NVIDIA NemoClaw: 在 OpenShell 中运行 OpenClaw 的参考栈\n\n\u003C!-- start-badges -->\n[![License](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-Apache_2.0-blue)](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNemoClaw\u002Fblob\u002Fmain\u002FLICENSE)\n[![Security 
Policy](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FSecurity-Report%20a%20Vulnerability-red)](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNemoClaw\u002Fblob\u002Fmain\u002FSECURITY.md)\n[![Project Status](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fstatus-alpha-orange)](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNemoClaw\u002Fblob\u002Fmain\u002Fdocs\u002Fabout\u002Frelease-notes.md)\n\u003C!-- end-badges -->\n\n\u003C!-- start-intro -->\nNVIDIA NemoClaw 是一个开源参考栈，旨在更安全地简化运行 [OpenClaw](https:\u002F\u002Fopenclaw.ai) 常驻助手的过程。\n它安装了 [NVIDIA OpenShell](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FOpenShell) 运行时（属于 NVIDIA Agent Toolkit 的一部分），为运行自主代理提供额外的安全性。\n它还包含开源模型，例如 [NVIDIA Nemotron](https:\u002F\u002Fbuild.nvidia.com)。\n\u003C!-- end-intro -->\n\n> **Alpha 软件**\n>\n> NemoClaw 于 2026 年 3 月 16 日起提供早期预览。\n> 该软件尚未准备好用于生产环境。\n> 随着设计迭代，接口、API 和行为可能会在不另行通知的情况下发生变化。\n> 该项目分享出来是为了收集反馈并支持早期实验。\n> 在项目演进过程中，我们欢迎社区提出问题和讨论。\n\n---\n\n## 快速开始\n\n按照以下步骤开始使用 NemoClaw 并设置您的第一个沙箱化 OpenClaw 代理。\n\n> **ℹ️ 注意**\n>\n> NemoClaw 在接入期间会在沙箱内创建一个全新的 OpenClaw 实例。\n\n\u003C!-- start-quickstart-guide -->\n\n### 前置条件\n\n开始前请检查前置条件，确保您拥有运行 NemoClaw 所需的软件和硬件。\n\n#### 硬件\n\n| 资源 | 最低要求 | 推荐配置 |\n|----------|----------------|------------------|\n| CPU      | 4 vCPU         | 4+ vCPU          |\n| RAM      | 8 GB           | 16 GB            |\n| Disk     | 20 GB free     | 40 GB free       |\n\n沙箱镜像压缩后约为 2.4 GB。在推送镜像期间，Docker 守护进程、k3s 和 OpenShell 网关会与导出管道一起运行，后者将解压后的层缓冲在内存中。在内存小于 8 GB 的机器上，这种组合使用可能会触发 OOM killer（内存溢出杀手）。如果您无法增加内存，配置至少 8 GB 的交换空间可以绕过该问题，但代价是性能变慢。\n\n#### 软件\n\n| 依赖项 | 版本 |\n|------------|----------------------------------|\n| Linux      | Ubuntu 22.04 LTS 或更高版本 |\n| Node.js    | 22.16 或更高版本 |\n| npm        | 10 或更高版本 |\n| 容器运行时 | 已安装并运行的受支持运行时 |\n| [OpenShell](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FOpenShell) | 已安装 |\n\n#### 容器运行时支持\n\n| 平台 | 支持的运行时 | 备注 |\n|----------|--------------------|-------|\n| Linux | Docker | 今日主要支持路径 |\n| macOS (Apple Silicon) | 
Colima, Docker Desktop | 受支持 macOS 设置的推荐运行时 |\n| macOS | Podman | 尚不支持。NemoClaw 目前依赖于 macOS 上 OpenShell 对 Podman 的支持。 |\n| Windows WSL | Docker Desktop (WSL 后端) | 受支持的目标路径 |\n\n#### macOS 首次运行清单\n\n在崭新的 macOS 机器上，请按以下顺序安装前置条件：\n\n1. 安装 Xcode Command Line Tools：\n\n   ```bash\n   xcode-select --install\n   ```\n\n2. 安装并启动受支持的容器运行时：\n   - Docker Desktop\n   - Colima\n3. 运行 NemoClaw 安装程序。\n\n这可以避免 macOS 上最常见的两个首次运行失败：\n\n- 缺少安装程序和 Node.js 工具链所需的开发工具\n- 未安装或运行受支持的容器运行时时的 Docker 连接错误\n\n> **💡 提示**\n>\n> 对于 DGX Spark，请遵循 [DGX Spark 设置指南](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNemoClaw\u002Fblob\u002Fmain\u002Fspark-install.md)。它在运行标准安装程序之前涵盖了 Spark 特定的前置条件，例如 cgroup v2 和 Docker 配置。\n\n### 安装 NemoClaw 并接入 OpenClaw 代理\n\n下载并运行安装脚本。\n如果尚未安装 Node.js，脚本会先安装它，然后运行引导式接入向导以创建沙箱、配置推理并应用安全策略。\n\n```bash\ncurl -fsSL https:\u002F\u002Fwww.nvidia.com\u002Fnemoclaw.sh | bash\n```\n\n如果您使用 nvm 或 fnm 管理 Node.js，安装程序可能不会更新当前 Shell 的 PATH。\n如果安装后找不到 `nemoclaw`，请运行 `source ~\u002F.bashrc`（zsh 用户则为 `source ~\u002F.zshrc`）或打开新终端。\n\n安装完成后，摘要将确认运行环境：\n\n```text\n──────────────────────────────────────────────────\nSandbox      my-assistant (Landlock + seccomp + netns)\nModel        nvidia\u002Fnemotron-3-super-120b-a12b (NVIDIA Endpoints)\n──────────────────────────────────────────────────\nRun:         nemoclaw my-assistant connect\nStatus:      nemoclaw my-assistant status\nLogs:        nemoclaw my-assistant logs --follow\n──────────────────────────────────────────────────\n\n[INFO]  === Installation complete ===\n```\n\n### 与代理聊天\n\n连接到沙箱，然后通过 TUI（文本用户界面）或 CLI（命令行界面）与代理聊天。\n\n#### 连接到沙箱\n\n运行以下命令连接到沙箱：\n\n```bash\nnemoclaw my-assistant connect\n```\n\n这将使您连接到沙箱 Shell `sandbox@my-assistant:~$`，您可以在其中运行 `openclaw` 命令。\n\n#### OpenClaw TUI\n\n在沙箱 Shell 中，运行以下命令打开 OpenClaw TUI，它将启动交互式聊天界面。\n\n```bash\nopenclaw tui\n```\n\n向代理发送测试消息并验证是否收到响应。\n\n> **ℹ️ 注意**\n>\n> TUI 最适合交互式来回对话。如果您需要长响应（如大型代码生成输出）的完整文本，请使用 CLI。\n\n#### OpenClaw CLI\n\n在沙箱 Shell 中，运行以下命令发送单条消息并打印响应：\n\n```bash\nopenclaw 
agent --agent main --local -m \"hello\" --session-id test\n```\n\n这会将完整响应直接打印在终端中，避免依赖 TUI 视图来处理长输出。\n\n### 卸载\n\n要移除 NemoClaw 及安装过程中创建的所有资源，请在沙箱外的终端中运行：\n\n```bash\ncurl -fsSL https:\u002F\u002Fraw.githubusercontent.com\u002FNVIDIA\u002FNemoClaw\u002Frefs\u002Fheads\u002Fmain\u002Funinstall.sh | bash\n```\n\n该脚本会移除沙箱、NemoClaw 网关和提供者、相关的 Docker 镜像和容器、本地状态目录以及全局 `nemoclaw` npm 包。它不会移除共享的系统工具，如 Docker、Node.js、npm 或 Ollama。\n\n| 标志               | 效果                                              |\n|--------------------|-----------------------------------------------------|\n| `--yes`            | 跳过确认提示。                       |\n| `--keep-openshell` | 保留已安装的 `openshell` 二进制文件。              |\n| `--delete-models`  | 同时移除 NemoClaw 拉取的 Ollama 模型。           |\n\n例如，若要跳过确认提示：\n\n```bash\ncurl -fsSL https:\u002F\u002Fraw.githubusercontent.com\u002FNVIDIA\u002FNemoClaw\u002Frefs\u002Fheads\u002Fmain\u002Funinstall.sh | bash -s -- --yes\n```\n\n\u003C!-- end-quickstart-guide -->\n\n---\n\n## 工作原理\n\nNemoClaw 安装 NVIDIA OpenShell 运行时，然后创建一个沙箱化的 OpenClaw 环境，其中每个网络请求、文件访问和推理调用都由声明式策略管理。`nemoclaw` CLI (命令行界面) 编排整个堆栈：OpenShell 网关、沙箱、推理提供者和网络策略。\n\n| 组件        | 角色                                                                                      |\n|------------------|-------------------------------------------------------------------------------------------|\n| **插件**       | 用于启动、连接、状态和日志的 TypeScript CLI 命令。                            |\n| **蓝图**    | 版本化的 Python 工件，用于编排沙箱创建、策略和推理设置。 |\n| **沙箱**      | 隔离的 OpenShell 容器，运行 OpenClaw，并实施策略控制的出站流量和文件系统。  |\n| **推理**    | 通过 OpenShell 网关路由的提供者模型调用，对代理透明。 |\n\n蓝图的生命周期遵循四个阶段：解析工件、验证其摘要、规划资源，并通过 OpenShell CLI 应用。\n\n当出现问题时，错误可能源自 NemoClaw 或其底层的 OpenShell 层。运行 `nemoclaw \u003Cname> status` 检查 NemoClaw 级别的健康状况，运行 `openshell sandbox list` 检查底层沙箱状态。\n\n---\n\n## 推理\n\n来自代理的推理请求不会直接离开沙箱。OpenShell 拦截每个调用并将其路由到你配置期间选择的提供者。\n\n支持的（非实验性）配置路径：\n\n| 提供者 | 备注 |\n|---|---|\n| NVIDIA Endpoints | `integrate.api.nvidia.com` 上的精选托管模型。 |\n| OpenAI | 精选 
GPT 模型加上 `Other...` 用于手动输入模型。 |\n| 其他 OpenAI 兼容端点 | 用于代理和兼容网关。 |\n| Anthropic | 精选 Claude 模型加上 `Other...` 用于手动输入模型。 |\n| 其他 Anthropic 兼容端点 | 用于 Claude 代理和兼容网关。 |\n| Google Gemini | Google 的 OpenAI 兼容端点。 |\n\n在配置期间，NemoClaw 会在创建沙箱之前验证所选的提供者和模型：\n\n- OpenAI 兼容提供者：先尝试 `\u002Fresponses`，然后尝试 `\u002Fchat\u002Fcompletions`\n- Anthropic 兼容提供者：尝试 `\u002Fv1\u002Fmessages`\n- 如果验证失败，向导会提示您在继续之前修复选择\n\n凭据保存在主机上的 `~\u002F.nemoclaw\u002Fcredentials.json` 中。沙箱只能看到路由的 `inference.local` 端点，看不到您的原始提供者密钥。\n\n标准配置流程支持本地 Ollama。本地 vLLM 仍处于实验阶段，macOS 上的本地主机路由推理除了需要本地服务本身可在主机上访问外，仍依赖于 OpenShell 的主机路由支持。\n\n## 主机端状态与配置\n\nNemoClaw 将其面向操作者的状态保存在主机上，而不是沙箱内。\n以下是新用户通常需要定位的主要文件：\n\n| 路径 | 用途 |\n|---|---|\n| `~\u002F.nemoclaw\u002Fcredentials.json` | 配置期间保存的提供者凭据 |\n| `~\u002F.nemoclaw\u002Fsandboxes.json` | 注册的沙箱元数据，包括默认沙箱选择 |\n| `~\u002F.openclaw\u002Fopenclaw.json` | NemoClaw 在迁移流程中快照或恢复的主机 OpenClaw 配置 |\n\n可选服务和本地访问的常见环境变量包括 `TELEGRAM_BOT_TOKEN`、`ALLOWED_CHAT_IDS` 和 `CHAT_UI_URL`。\n对于正常的沙箱设置和重新配置，建议使用 `nemoclaw onboard` 而不是手动编辑这些文件。\n\n---\n\n## 保护层\n\n沙箱启动时带有默认策略，控制网络出站和文件系统访问：\n\n| 层级      | 保护内容                                    | 何时应用             |\n|------------|-----------------------------------------------------|-----------------------------|\n| 网络    | 阻止未经授权的出站连接。           | 运行时热重载。  |\n| 文件系统 | 防止在 `\u002Fsandbox` 和 `\u002Ftmp` 之外进行读写。| 沙箱创建时锁定。 |\n| 进程    | 阻止权限提升和危险系统调用。 | 沙箱创建时锁定。 |\n| 推理    | 将模型 API 调用重定向到受控后端。    | 运行时热重载。  |\n\n当代理尝试访问未列出的主机时，OpenShell 会阻止请求并在 TUI (文本用户界面) 中显示以供操作者批准。\n\n## 配置沙箱策略\n\n沙箱策略定义在声明式的 YAML 文件中，并由 OpenShell 运行时强制执行。\nNemoClaw 提供了默认策略文件 [`nemoclaw-blueprint\u002Fpolicies\u002Fopenclaw-sandbox.yaml`](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNemoClaw\u002Fblob\u002Fmain\u002Fnemoclaw-blueprint\u002Fpolicies\u002Fopenclaw-sandbox.yaml)，该策略拒绝所有网络出站流量，除非明确列出了端点。\n\n操作员可以通过以下方式自定义策略：\n\n| 方法 | 方式 | 范围 |\n|--------|-----|-------|\n| **静态** | 编辑 `openclaw-sandbox.yaml` 并重新运行 `nemoclaw onboard`。 | 重启后持久化。 |\n| **动态** | 在运行的沙箱上运行 `openshell policy 
set \u003Cpolicy-file>`。 | 仅当前会话；重启后重置。 |\n\nNemoClaw 在 `nemoclaw-blueprint\u002Fpolicies\u002Fpresets\u002F` 中包含了用于常见集成的预设策略文件，例如 PyPI、Docker Hub、Slack 和 Jira。可以直接应用预设，或将其作为起始模板使用。\n\nNemoClaw 是一个开源项目——我们仍在确定哪些预设将作为默认发布。如果您有建议，请提交 [issue](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNemoClaw\u002Fissues) 或参与 [discussion](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNemoClaw\u002Fdiscussions)。\n\n当代理尝试访问策略未覆盖的端点时，OpenShell 会阻止请求，并在 TUI（文本用户界面）(`openshell term`) 中显示出来，以便操作员实时批准或拒绝。批准的端点仅在当前会话中持久化。\n\n如需分步说明，请参阅 [自定义网络策略](https:\u002F\u002Fdocs.nvidia.com\u002Fnemoclaw\u002Flatest\u002Fnetwork-policy\u002Fcustomize-network-policy.html)。如需了解底层执行细节，请参阅 OpenShell 的 [策略架构](https:\u002F\u002Fdocs.nvidia.com\u002Fopenshell\u002Flatest\u002Freference\u002Fpolicy-schema.html) 和 [沙箱策略](https:\u002F\u002Fdocs.nvidia.com\u002Fopenshell\u002Flatest\u002Fsandboxes\u002Fpolicies.html) 文档。\n\n---\n\n## 关键命令\n\n### 主机命令（`nemoclaw`）\n\n在主机上运行这些命令以设置、连接和管理沙箱。\n\n| 命令                              | 描述                                            |\n|--------------------------------------|--------------------------------------------------------|\n| `nemoclaw onboard`                  | 交互式设置向导：网关、提供者、沙箱。 |\n| `nemoclaw \u003Cname> connect`            | 打开沙箱内的交互式 shell。          |\n| `openshell term`                     | 启动 OpenShell TUI 用于监控和审批。 |\n| `nemoclaw start` \u002F `stop` \u002F `status` | 管理辅助服务（Telegram 桥接、隧道）。   |\n\n有关所有命令、标志和选项，请参阅完整的 [CLI（命令行接口）参考](https:\u002F\u002Fdocs.nvidia.com\u002Fnemoclaw\u002Flatest\u002Freference\u002Fcommands.html)。\n\n---\n\n## 了解更多\n\n有关 NemoClaw 的更多信息，请参阅文档。\n\n- [概述](https:\u002F\u002Fdocs.nvidia.com\u002Fnemoclaw\u002Flatest\u002Fabout\u002Foverview.html)：了解 NemoClaw 的功能及其组成部分。\n- [工作原理](https:\u002F\u002Fdocs.nvidia.com\u002Fnemoclaw\u002Flatest\u002Fabout\u002Fhow-it-works.html)：了解插件、蓝图和沙箱生命周期。\n- [架构](https:\u002F\u002Fdocs.nvidia.com\u002Fnemoclaw\u002Flatest\u002Freference\u002Farchitecture.html)：了解插件结构、蓝图生命周期和沙箱环境。\n- 
- [Inference profiles](https://docs.nvidia.com/nemoclaw/latest/reference/inference-profiles.html): how NemoClaw provisions routed inference providers.
- [Network policies](https://docs.nvidia.com/nemoclaw/latest/reference/network-policies.html): egress controls and policy customization.
- [CLI commands](https://docs.nvidia.com/nemoclaw/latest/reference/commands.html): the complete command reference.
- [Troubleshooting](https://docs.nvidia.com/nemoclaw/latest/reference/troubleshooting.html): common problems and how to resolve them.
- [Discord](https://discord.gg/XFpfPv9Uvx): join the community to ask questions and share ideas.

## License

This project is licensed under the [Apache License 2.0](LICENSE).

# NVIDIA NemoClaw quick-start guide

**NVIDIA NemoClaw** is an open-source reference stack that makes it easy to run the [OpenClaw](https://openclaw.ai) always-on assistant inside a secure sandbox. It integrates the NVIDIA OpenShell runtime with open models such as Nemotron.

> **⚠️ Note: Alpha release**
> This software is an early preview (alpha) and is not production-ready. Interfaces, APIs, and behavior may change as the design iterates.

---

## Prerequisites

Before you begin, make sure your system meets the following hardware and software requirements.

### Hardware requirements

| Resource | Minimum | Recommended |
| :--- | :--- | :--- |
| **CPU** | 4 vCPUs | 4+ vCPUs |
| **Memory** | 8 GB | 16 GB |
| **Disk** | 20 GB free | 40 GB free |

*Note: with less than 8 GB of memory, configure at least 8 GB of swap to avoid triggering the OOM killer; expect degraded performance.*

### Software dependencies

| Dependency | Requirement |
| :--- | :--- |
| **Operating system** | Linux (Ubuntu 22.04 LTS or later) |
| **Node.js** | 22.16 or later |
| **npm** | 10 or later |
| **Container runtime** | A supported runtime (e.g. Docker), installed and running |
| **OpenShell** | Must be installed beforehand |

#### Notes for macOS users
On a first-time setup on macOS, perform these steps in order to avoid common errors:
1. Install the Xcode command-line tools:
   ```bash
   xcode-select --install
   ```
2. Install and start a supported container runtime (Docker Desktop or Colima).
3. Run the NemoClaw installer.
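The Node.js floor above can be sanity-checked with a small version comparison. This is a sketch using `sort -V`, with both versions hardcoded for illustration; in practice, substitute the output of `node -v` as noted in the comment.

```shell
# `sort -V` sorts version strings numerically, so the highest of
# {minimum, detected} equals `detected` exactly when detected >= minimum.
minimum="22.16.0"
detected="22.19.0"   # in practice: detected="$(node -v | sed 's/^v//')"
highest="$(printf '%s\n%s\n' "$minimum" "$detected" | sort -V | tail -n1)"
if [ "$highest" = "$detected" ]; then
  echo "Node.js $detected meets the $minimum minimum"
else
  echo "Node.js $detected is older than $minimum" >&2
fi
```

The same pattern works for the npm >= 10 requirement with `npm -v`.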
---

## Installation

Download and run the official install script. The script installs Node.js if it is missing and walks you through sandbox creation and security-policy configuration.

```bash
curl -fsSL https://www.nvidia.com/nemoclaw.sh | bash
```

**Post-install check:**
If you manage Node.js with `nvm` or `fnm`, you may need to refresh your environment before the new commands are found:
```bash
source ~/.bashrc  # or: source ~/.zshrc
```

When installation completes, the terminal prints a summary of the environment, for example:
```text
──────────────────────────────────────────────────
Sandbox      my-assistant (Landlock + seccomp + netns)
Model        nvidia/nemotron-3-super-120b-a12b (NVIDIA Endpoints)
──────────────────────────────────────────────────
Run:         nemoclaw my-assistant connect
Status:      nemoclaw my-assistant status
Logs:        nemoclaw my-assistant logs --follow
──────────────────────────────────────────────────
```

---

## Basic usage

### 1. Connect to the sandbox
Connect to the inside of the sandbox with:
```bash
nemoclaw my-assistant connect
```
On success you land in the sandbox shell (`sandbox@my-assistant:~$`), where you can run `openclaw` commands.

### 2. Talk to the agent

#### Method A: the TUI (best for interactive conversation)
In the sandbox shell, run:
```bash
openclaw tui
```
Send a test message to verify you get a response.

#### Method B: the CLI (best for capturing full output)
In the sandbox shell, run:
```bash
openclaw agent --agent main --local -m "hello" --session-id test
```
This prints the complete response directly to the terminal, avoiding the TUI's truncation of long text.

### 3. Check status and logs
*   **Status:** `nemoclaw my-assistant status`
*   **Logs:** `nemoclaw my-assistant logs --follow`
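Method B lends itself to scripting. A minimal sketch that builds a one-shot invocation with a unique session id per run; the `smoke-` naming scheme is just a convention of this example, not something `openclaw` requires, and the command is only printed here for review:

```shell
# Build (but don't execute) a one-shot agent command with a unique session id.
session="smoke-$(date -u +%Y%m%dT%H%M%S)"
prompt="hello"
cmd="openclaw agent --agent main --local -m \"$prompt\" --session-id $session"
# Review the command, then run it inside the sandbox shell.
echo "$cmd"
```

Using a fresh session id per scripted run keeps test conversations from polluting each other's context.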
### 4. Uninstall (optional)
To remove NemoClaw and its associated resources, run the following outside the sandbox:
```bash
curl -fsSL https://raw.githubusercontent.com/NVIDIA/NemoClaw/refs/heads/main/uninstall.sh | bash
```
*Add `--yes` to skip the confirmation prompt.*

## Example scenario

A senior engineer at a fintech company is building a 24/7 automated trade-verification assistant that must access sensitive financial APIs and make decisions in real time.

### Without NemoClaw
- Running the agent directly on the host grants the process far too much privilege; a successful adversarial attack on the model could exfiltrate core financial data.
- Hand-building a Docker sandbox is tedious, and memory pressure frequently triggers the OOM killer, interrupting the service.
- With no unified inference-management layer, configurations for different Nemotron versions conflict and maintenance costs climb.
- Network policy is hard to control precisely, so the agent could silently connect to an external malicious server and leak data.

### With NemoClaw
- The OpenShell runtime enforces sandbox isolation, so the agent cannot reach the host filesystem beyond its allowance.
- Prebuilt, tuned container images manage resources automatically, avoiding out-of-memory failures on modest machines and improving stability.
- The recommended Nemotron model stack comes integrated, providing secure inference out of the box with no repeated dependency setup.
- The built-in policy gateway restricts network egress, cutting off data-exfiltration paths at the base layer and strengthening auditability.

With a standardized secure-sandbox recipe, NemoClaw lets developers deploy high-privilege autonomous agents with confidence.
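Egress restrictions like the ones above are driven by the policy files described in the policy section. A dry-run sketch of applying one of the bundled presets to a running sandbox; the `pypi.yaml` filename is a guessed example of a preset name, so check what actually ships under `nemoclaw-blueprint/policies/presets/` before running:

```shell
# Print (rather than execute) the dynamic policy command for review.
preset="nemoclaw-blueprint/policies/presets/pypi.yaml"  # hypothetical preset filename
cmd="openshell policy set $preset"
echo "$cmd"
```

Remember that dynamically applied policies last only for the current session; edit the static policy file and re-run `nemoclaw onboard` to persist changes.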
## FAQ

### Forcing GPU on during `nemoclaw onboard` under WSL2 breaks the sandbox. What now?
This was fixed in a later release (commit 71f01b3); newer versions no longer pass `--gpu` to `openshell gateway start`. If you still hit it, clean up the state manually: run `openshell sandbox delete <name>`, `openshell gateway destroy --name nemoclaw`, and `docker volume rm openshell-cluster-nemoclaw`, then run `openshell gateway start --name nemoclaw` without the `--gpu` flag. Inference routes through the host-side provider, so the sandbox needs no direct GPU access. ([source](https://github.com/NVIDIA/NemoClaw/issues/208))

### `nemoclaw onboard` finishes but connecting reports "sandbox not found". How do I fix it?
This usually means onboarding completed local registration but failed to create the OpenShell sandbox. Clean up the leftover state: run `openshell sandbox delete <name>`, `openshell gateway destroy --name nemoclaw`, and `docker volume rm openshell-cluster-nemoclaw`. Then make sure you are on the latest OpenShell and NemoClaw versions and re-run provider configuration. ([source](https://github.com/NVIDIA/NemoClaw/issues/208))

### On a DGX Spark, the OpenShell gateway times out on start, or the etcd connection closes during install. What now?
Use recent versions of OpenShell (e.g. 0.0.16) and NemoClaw. If installation fails, follow the official guide at https://github.com/NVIDIA/NemoClaw/blob/main/spark-install.md, and confirm your hardware (Grace Blackwell GB10) and software environment meet the DGX Spark requirements. ([source](https://github.com/NVIDIA/NemoClaw/issues/341))

### After a failed install on DGX Spark, how do I clean up and reinstall from scratch?
First run `nemoclaw uninstall` to remove the current version, then reinstall the dependencies in order:
1. `curl -LsSf https://raw.githubusercontent.com/NVIDIA/OpenShell/main/install.sh | sh`
2. `curl -fsSL https://www.nvidia.com/nemoclaw.sh | bash`

This clears the previous broken state and reinitializes the environment. ([source](https://github.com/NVIDIA/NemoClaw/issues/341))

### How do I collect NemoClaw debug logs for troubleshooting?
Use the built-in debug command. Running `nemoclaw debug --output /tmp/nemoclaw-debug.tar.gz` produces an archive of diagnostic information that you can attach to a GitHub issue. `nemoclaw debug [--quick]` performs a faster, lighter collection. ([source](https://github.com/NVIDIA/NemoClaw/issues/341))

### `openshell sandbox connect` fails with "ssh exited with status exit status: 1". What can I do?
This is a known issue, possibly related to piping. Some users resolved it by applying a specific code change, but the exact fix depends on that change. Check first that OpenShell and NemoClaw are up to date; if the error persists, try clearing the sandbox state and retrying, or watch the relevant pull request for a merge. ([source](https://github.com/NVIDIA/NemoClaw/issues/580))
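Several of the answers above share the same reset sequence. Below is a small dry-run wrapper that prints those commands for review before you run them; the sandbox name is an example, and the `run` helper defined here only echoes:

```shell
# Print the reset commands from the FAQ answers above instead of executing them.
name="my-assistant"      # example sandbox name; substitute your own
run() { echo "+ $*"; }   # change the body to "$@" to actually execute
run openshell sandbox delete "$name"
run openshell gateway destroy --name nemoclaw
run docker volume rm openshell-cluster-nemoclaw
```

Reviewing the printed commands before switching `run` to execute mode avoids deleting the wrong sandbox or volume.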