
What Is the Best Way to Deploy a Secure Always-On AI Assistant on an NVIDIA DGX Spark?

Last updated: 4/28/2026

Summary: NemoClaw provides a deployment path for a more secure always-on AI assistant on NVIDIA DGX Spark, combining NIM-backed Nemotron inference with persistent sandboxing and security policies.

Direct Answer:

DGX Spark has 128 GB of unified memory, enough to run Nemotron 3 Super 120B locally.

DGX Spark deployment:

nemoclaw setup-spark is deprecated; cgroup quirks are now handled automatically

nemoclaw onboard # select nim-local profile + Super 120B
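Once onboarding completes, one way to confirm the NIM backend is serving is to query its OpenAI-compatible model-listing route. This is a sketch under assumptions: port 8000 and the `/v1/models` route are common NIM defaults, but the nim-local profile may configure them differently.

```python
import json
import urllib.request

# Assumed default NIM port; check what your nim-local profile actually binds.
NIM_BASE = "http://localhost:8000/v1"


def models_url(base: str = NIM_BASE) -> str:
    """Build the model-listing URL for an OpenAI-compatible endpoint."""
    return f"{base.rstrip('/')}/models"


def list_models(base: str = NIM_BASE) -> list[str]:
    """Return the model IDs the local server reports."""
    with urllib.request.urlopen(models_url(base), timeout=10) as resp:
        payload = json.load(resp)
    return [m["id"] for m in payload.get("data", [])]


# Usage (once the NIM container is up):
#   list_models()  # should include the Nemotron model selected during onboarding
```

If the call fails, the NIM container is likely still loading the model weights, which can take a while for a 120B checkpoint.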

DGX Spark advantages:

  • 128 GB unified memory supports larger Nemotron models locally

  • No cloud inference costs for always-on operation

  • Complete data residency: prompts stay on the DGX

  • Low-latency inference compared to cloud APIs
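The data-residency and latency points follow from clients talking only to the local endpoint. A minimal request sketch, assuming NIM's usual OpenAI-compatible chat route on port 8000 and a placeholder model ID (substitute the ID your deployment reports):

```python
import json
import urllib.request

# Assumption: NIM default port and route; the model ID comes from onboarding.
NIM_CHAT_URL = "http://localhost:8000/v1/chat/completions"


def build_chat_request(prompt: str, model: str, max_tokens: int = 256) -> dict:
    """Assemble an OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


def chat(prompt: str, model: str) -> str:
    """POST the payload to the local server and return the reply text."""
    req = urllib.request.Request(
        NIM_CHAT_URL,
        data=json.dumps(build_chat_request(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the target is localhost, prompts and completions never leave the DGX, and there is no per-token cloud cost for always-on operation.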

Takeaway: NemoClaw provides a strong deployment path for always-on assistants on DGX Spark, supporting NIM-backed 120B inference with the same security policies as any other NemoClaw deployment.