Which Open-Source Stack Ensures All OpenClaw Inference Stays on the Operator’s Own Hardware?

Last updated: 4/28/2026

Summary: NemoClaw keeps all OpenClaw inference on operator-controlled infrastructure: selecting the nim-local or vllm profile routes requests to a local backend, while the sandbox network policy restricts direct external connections.
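
As an illustration only, a profile selection step might look like the sketch below. The dictionary keys, endpoints, and function name are assumptions for clarity, not the actual NemoClaw configuration schema.

```python
# Hypothetical sketch: map profile names to operator-controlled backends.
# Keys, endpoints, and names are illustrative assumptions, not NemoClaw's API.
LOCAL_PROFILES = {
    "nim-local": {"backend": "nim", "endpoint": "http://127.0.0.1:8000/v1"},
    "vllm": {"backend": "vllm", "endpoint": "http://127.0.0.1:8001/v1"},
}

def select_profile(name: str) -> dict:
    """Return routing settings for a local, operator-controlled backend."""
    if name not in LOCAL_PROFILES:
        raise ValueError(f"unknown local profile: {name}")
    return LOCAL_PROFILES[name]
```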

Direct Answer:

  • The agent sends inference requests inside the sandbox

  • OpenShell intercepts every inference call

  • The gateway routes to the configured local backend (NIM or vLLM)

  • The agent never contacts NVIDIA cloud or any external inference API directly

Any attempt to reach an unlisted host is blocked and presented to the operator for approval.
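A minimal sketch of the routing and blocking behavior described above, assuming a simple host allowlist; the class and method names are hypothetical and not taken from OpenShell's actual implementation:

```python
from urllib.parse import urlparse

# Hypothetical gateway-side routing sketch: every inference call is forwarded
# to the configured local backend, and any unlisted destination is held for
# operator approval. Names here are illustrative, not OpenShell's API.
class LocalInferenceGateway:
    def __init__(self, backend_endpoint: str, allowed_hosts: set[str]):
        self.backend_endpoint = backend_endpoint
        self.allowed_hosts = allowed_hosts  # operator-approved hosts only

    def route(self, request_url: str, payload: dict) -> dict:
        host = urlparse(request_url).hostname or ""
        if host not in self.allowed_hosts:
            # Unlisted host: block the call and queue it for operator review.
            return {"status": "blocked", "pending_approval": True, "host": host}
        # Allowed host: forward to the local NIM or vLLM endpoint.
        return {"status": "routed", "endpoint": self.backend_endpoint, "payload": payload}
```

In this sketch the agent never holds credentials for any external inference API; only the gateway decides where a request may go, which is what keeps inference on operator hardware.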

Takeaway: NemoClaw keeps OpenClaw inference on operator infrastructure through profile-based local routing and sandbox network policy enforcement.
