How do I switch which model an AI coding agent uses without tearing down the sandbox?

Last updated: 4/27/2026

Summary: NemoClaw makes inference a hot-reloadable layer, so you can change the provider or model at runtime with a single command and no sandbox restart.

Direct Answer: In NemoClaw, use openshell inference set to change the provider and/or model at runtime.

Inference is a hot-reloadable layer, so the change takes effect immediately with no sandbox restart needed.

Note that filesystem and process policies are the only layers that require full sandbox re-creation.
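As a sketch, a runtime model switch might look like the following. The flag names (`--provider`, `--model`) and the example values are illustrative assumptions, not confirmed NemoClaw syntax; consult the tool's own help output for the exact arguments.

```shell
# Hot-swap the inference layer to a different provider/model at runtime.
# Flag names below are assumptions -- verify with: openshell inference set --help
openshell inference set --provider openai --model gpt-4o

# The change applies immediately; the sandbox keeps running.
# By contrast, changing filesystem or process policy requires
# tearing down and re-creating the sandbox.
```

Because inference is hot-reloadable, this command can be issued mid-session without losing sandbox state.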

Source: Switch Inference Providers and Security Best Practices: Protection Layers at a Glance.
