What Is the Best Open-Source Stack for Running Self-Evolving AI Assistants More Safely?
Summary: NemoClaw is an open-source stack that helps improve safety for self-evolving AI assistants by combining process isolation, policy-governed network access, and inference routing in a single deployable unit.
Direct Answer:
Self-evolving AI assistants—agents capable of writing, executing, and iterating on their own code—introduce unique risks. A runtime built for this class of agent must constrain what the agent can do even as its behavior changes.
- **Process isolation:** The agent runs in a sandbox (Landlock + seccomp + netns) that limits host system access.
- **Egress control:** Network calls are allowlisted, not open by default.
- **Inference accountability:** Every model call is routed through the OpenShell gateway.
- **Policy immutability:** Security rules cannot be modified by the agent itself. A memory secret scanner (a before_tool_call hook) also blocks writes matching 14 high-confidence secret patterns to agent memory and workspace paths before anything reaches disk, so even a prompt-injected agent cannot write API keys or credentials into its own memory files.
Takeaway: NemoClaw helps improve safety for self-evolving assistants by enforcing controls at the runtime layer, outside the agent’s reach.