
NemoClaw


At a Glance

NVIDIA's open-source CLI and reference stack for deploying OpenClaw AI agents in hardened sandbox environments, layering Landlock, seccomp, and network namespace isolation via the OpenShell runtime.

Type: open-source
Pricing: open-source
License: Apache-2.0
Adoption fit: small, medium

What It Does

NemoClaw is a TypeScript CLI that wraps NVIDIA’s OpenShell runtime to provide a guided, opinionated deployment path for running OpenClaw always-on AI assistants in sandboxed environments. A single curl | bash command installs Node.js and the NemoClaw CLI, then runs an onboarding wizard that creates the sandbox, configures inference routing, and applies layered security policies.

The sandbox applies three kernel-level security primitives: Landlock (filesystem access control), seccomp (syscall filtering), and network namespaces (egress isolation). On top of OpenShell’s primitives, NemoClaw adds a “blueprint” lifecycle for snapshot and migration, state management, SSRF validation, and integration with NVIDIA Endpoints for privacy-routed inference. All outbound network connections from the agent pass through a policy engine that can allow, deny, or route-for-inference based on declarative YAML rules.
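A declarative egress policy of the kind described above might look like the following sketch. The schema (field names version, default, rules, match, action, and the wildcard host syntax) is an assumption for illustration, not OpenShell's documented format; only the allow/deny/route-for-inference actions come from the source:

```shell
# Write a hypothetical egress policy file; field names are illustrative
# assumptions, not the documented OpenShell schema.
cat <<'EOF' > egress-policy.yaml
version: 1
default: deny                  # block all outbound traffic unless a rule matches
rules:
  - match:
      host: "api.github.com"   # let the agent reach source hosting
      port: 443
    action: allow
  - match:
      host: "*.nvidia.com"     # inference traffic gets credential-swapped routing
      port: 443
    action: route-for-inference
EOF
```

Per the feature list, a file like this could be re-applied to a live sandbox without a restart via openshell policy set --wait.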

Key Features

  • Guided onboarding wizard: Single installer that provisions the sandbox, configures the inference backend, and prints a human-readable security summary (Landlock + seccomp + netns)
  • Triple kernel-level isolation: Landlock (filesystem), seccomp (syscall filtering), and network namespaces applied as defense-in-depth at sandbox creation
  • Hot-reloadable network policies: YAML-based egress policies can be updated on a live sandbox without restart via openshell policy set --wait
  • Privacy-aware inference routing: Strips agent credentials, injects backend credentials — the agent never holds provider API keys directly
  • Blueprint lifecycle: Snapshot, migration, and SSRF-validated state management for reproducible environments
  • K3s-in-Docker architecture: OpenShell gateway runs a K3s cluster inside a single Docker container — no external Kubernetes cluster required
  • NVIDIA Endpoints integration: Default inference backend is nvidia/nemotron-3-super-120b-a12b; alternative providers configurable
  • CLI-first operational model: nemoclaw <agent> connect/status/logs for day-2 operations
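Day-2 operation could look like the following transcript sketch. The subcommand names (connect, status, logs, and openshell policy set --wait) come from the feature list above; the agent name myagent, the comment descriptions, and the assumption that policy set accepts a file path are illustrative, not documented behavior:

```shell
# Inspect and attach to a running sandboxed agent ("myagent" is a
# placeholder; subcommands per the feature list)
nemoclaw myagent status        # sandbox health and applied policy summary
nemoclaw myagent logs          # stream agent output
nemoclaw myagent connect       # open an interactive session

# Hot-reload the egress policy on the live sandbox without a restart
openshell policy set --wait egress-policy.yaml
```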

Use Cases

  • Sandboxed OpenClaw deployment for individuals or small teams: The primary use case — running OpenClaw with enforced filesystem and network constraints on a Linux developer machine or cloud VM
  • Security-conscious AI agent experimentation: Teams wanting visible, policy-codified security defaults before adopting agent tooling in production environments
  • NVIDIA inference stack integration: Organizations evaluating Nemotron models via NVIDIA Endpoints who want a pre-integrated sandbox deployment path
  • Developer workflow hardening: Engineering teams wanting to prevent credential exfiltration and uncontrolled network access from AI coding assistants

Adoption Level Analysis

Small teams (<20 engineers): Good fit for Linux-native teams comfortable with Docker. The one-command install and wizard make security accessible without deep expertise. macOS and Windows work with caveats. 8 GB RAM minimum is the key practical constraint.

Medium orgs (20-200 engineers): Viable for teams wanting policy-as-code agent sandboxing without operating Kubernetes. However, alpha status and tight OpenClaw coupling are risks. Evaluate against kubernetes-sigs/agent-sandbox for teams already on K8s.

Enterprise (200+ engineers): Not yet fit. Alpha software, single-player mode (no multi-tenant support), and VM-level isolation gaps make this inappropriate for production enterprise deployments. Monitor for stability milestones.

Alternatives

  • OpenShell: Lower-level runtime NemoClaw builds on; supports Claude Code, Codex, OpenCode, and Copilot, not just OpenClaw. Prefer when you want to sandbox agents other than OpenClaw, or want direct policy control without the NemoClaw blueprint abstraction.
  • Kubernetes Agent Sandbox: K8s-native CRD approach with gVisor/Kata Containers VM-level isolation. Prefer when you run Kubernetes and need VM-level isolation or multi-tenant sandboxing at scale.
  • E2B: Firecracker microVM SaaS; strongest isolation, zero ops. Prefer when you need the hardest isolation boundary and prefer managed infrastructure over self-hosting.
  • Modal: gVisor with native GPU support, Python-first. Prefer when your workloads are GPU-heavy Python and you want a cloud execution model.

Notes & Caveats

  • Alpha software: Published March 2026. NVIDIA explicitly warns APIs and behavior may change without notice. Not production-ready.
  • Star count is misleading: ~18,900 stars in under four weeks reflects viral developer interest, not production adoption. No independent production case studies exist yet.
  • OpenClaw is commercial: NemoClaw and OpenShell are Apache-2.0, but the primary agent they run (OpenClaw) is a commercial product. This creates a dependency on a non-open component.
  • Landlock ≠ VM isolation: For adversarially prompted agents or untrusted code execution, kernel LSM mechanisms can be bypassed by kernel exploits. microVM-based isolation provides a harder boundary.
  • NVIDIA inference lock-in pressure: Default is NVIDIA Endpoints (Nemotron). Organizations with existing inference infrastructure need to explicitly configure alternative providers.
  • OOM risk during setup: Image push + k3s + Docker daemon memory usage can trigger OOM on machines below 8 GB RAM. Documented in the README; configure swap if needed.
  • Single-player mode only: Current architecture is one developer, one environment, one gateway. Multi-tenant deployments are explicitly a future goal, not a current capability.
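The 8 GB constraint above can be checked before running the installer. This sketch reads total memory from /proc/meminfo on Linux; the swap commands in the echoed hint are the standard Linux procedure, not anything NemoClaw-specific:

```shell
# Check total RAM against the README-documented 8 GB minimum (Linux only)
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
if [ "$mem_kb" -lt $((8 * 1024 * 1024)) ]; then
  echo "Below 8 GB RAM: configure swap before installing, e.g.:"
  echo "  sudo fallocate -l 4G /swapfile && sudo chmod 600 /swapfile"
  echo "  sudo mkswap /swapfile && sudo swapon /swapfile"
fi
```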
