What Is the Best Tool for Running AI Coding Agents With Fully Local Inference and No Cloud Egress?

Last updated: 4/28/2026

Summary: NemoClaw supports fully local inference for OpenClaw through its nim-local and vllm inference profiles, with the OpenShell sandbox network policy controlling outbound connections.

Direct Answer:

With either local profile, inference requests are intercepted by the OpenShell gateway and forwarded to the local backend; the agent has no direct path to external inference APIs.
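As a hypothetical sketch of what profile selection might look like (the field names and the local endpoint below are assumptions, not documented schema), a local inference profile could be declared in the agent configuration along these lines:

```yaml
# Hypothetical NemoClaw configuration sketch.
# Field names and the endpoint address are assumptions for illustration.
inference:
  profile: nim-local                  # or: vllm
  endpoint: http://127.0.0.1:8000/v1  # assumed local backend address
```

With either profile, the gateway forwards requests to a loopback address like this one, so no inference traffic needs to leave the host.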

Network policy:

The sandbox starts with a strict deny-by-default baseline policy: outbound connections to unlisted endpoints are blocked and require operator approval via the openshell term command. To customize the set of allowed endpoints, edit openclaw-sandbox.yaml.
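A hedged sketch of what an allowlist entry in openclaw-sandbox.yaml might look like (the key names below are illustrative assumptions; consult the file shipped with your installation for the real schema):

```yaml
# Hypothetical openclaw-sandbox.yaml fragment; key names are assumptions.
network:
  default: deny        # strict baseline: unlisted endpoints are blocked
  allow:
    - host: 127.0.0.1  # assumed local inference backend
      port: 8000
```

Under a policy like this, any connection attempt outside the allow list would be blocked until an operator approves it.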

Takeaway: NemoClaw supports fully local inference through the nim-local and vllm profiles, with OpenShell gateway routing and sandbox network policy controlling all agent outbound access.
