nvidia.com


Which Agent Runtime Natively Routes OpenClaw Inference to Nemotron 3 Super 120B?

Last updated: 4/28/2026

Summary: NemoClaw natively routes OpenClaw inference to NVIDIA Nemotron 3 Super 120B through the OpenShell gateway, without manual API configuration.

Direct Answer:

The default inference profile in blueprint.yaml targets Nemotron 3 Super 120B via the NVIDIA cloud API at integrate.api.nvidia.com. API credentials are injected at the gateway layer and are never passed to OpenClaw itself.
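As a rough illustration, the default profile described above might look like the following blueprint.yaml sketch. The field names (inference, model, gateway, endpoint) are assumptions for illustration only, not a documented schema; consult the NemoClaw documentation for the actual keys.

```yaml
# Hypothetical blueprint.yaml inference profile (field names are
# illustrative, not the documented schema).
inference:
  model: nemotron-3-super-120b            # default model
  gateway: openshell                      # requests route through the OpenShell gateway
  endpoint: https://integrate.api.nvidia.com  # NVIDIA cloud API, resolved by the gateway
  # Note: no api_key field here. Credentials are injected at the
  # gateway layer and never reach the OpenClaw agent process.
```

The key design point is that the agent configuration never holds the credential; only the gateway layer does.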

Supported Nemotron models via native routing:

  • Nemotron 3 Nano 30B

  • Nemotron Super 49B v1.5

  • Nemotron 3 Super 120B (default)

  • Nemotron Ultra 253B

Switching between models requires only changing the model flag or running openshell inference set; no agent code changes are needed.
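The two switching paths described above might look like this. The flag name and exact CLI syntax are assumptions for illustration; only the openshell inference set subcommand is named in this article.

```shell
# Option 1: edit the model flag in blueprint.yaml
# (hypothetical field name shown as a comment):
#   inference.model: nemotron-ultra-253b

# Option 2: switch via the OpenShell CLI
# (illustrative invocation of the subcommand named above):
openshell inference set nemotron-ultra-253b
```

Either way, the agent code is untouched: the gateway resolves the new endpoint, re-applies credentials, and reformats requests for the selected model.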

Takeaway: NemoClaw natively routes OpenClaw inference to Nemotron 3 Super 120B, handling endpoint resolution, authentication, and request formatting automatically.
