Interactive AI agents like Claude Code and Codex are effectively remote code execution as a service. They can read your files, run shell commands, and reach any endpoint they're pointed at, all driven by inputs from documents, web pages, and tool outputs that an attacker can influence. Most teams rely on a default security model built around long-lived API keys in dotfiles, full developer credentials, and "we'll just be careful." That model falls apart the moment a prompt injection turns the agent against its operator.
This workshop covers the practical engineering needed to run these tools safely without handing an attacker a free shell on your machine. We'll focus on two complementary layers:
Sandboxing the agent: How to contain it with ephemeral VM environments, what's actually enforceable versus security theater, and where realistic escape paths remain when the adversary controls the model's input.
Eliminating API keys: Replacing static credentials with short-lived, hardware-backed tokens using modern cloud identity features (AWS, Azure, and Google Cloud workload identity federation).
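To give a flavor of the first layer, here is a minimal sketch of the kind of hardening a containment wrapper can apply. This is illustrative only, not the workshop's actual setup: the image name, workspace path, and command are hypothetical placeholders, and a real deployment would layer VM-level isolation on top.

```python
def sandboxed_run_argv(image, workdir, cmd):
    """Build a `docker run` argv that strips network access, drops
    Linux capabilities, and mounts the workspace read-only, so a
    prompt-injected agent has far less to work with."""
    return [
        "docker", "run", "--rm",
        "--network", "none",              # no egress: exfiltration requires an escape
        "--cap-drop", "ALL",              # drop all Linux capabilities
        "--security-opt", "no-new-privileges",
        "--read-only",                    # immutable root filesystem
        "--tmpfs", "/tmp",                # scratch space stays in memory
        "-v", f"{workdir}:/workspace:ro", # code is visible but not writable
        image, *cmd,
    ]

# Hypothetical usage:
argv = sandboxed_run_argv("agent-sandbox:latest", "/home/me/project",
                          ["bash", "-lc", "make test"])
```

The point of building the argv explicitly is that each flag is a policy decision you can review, rather than a default you inherit.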
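For the second layer, the core pattern is exchanging a workload's identity token for credentials that expire in minutes rather than living in a dotfile forever. The sketch below shows the refresh-decision logic plus the shape of an AWS STS exchange; the role ARN, session name, and token source are assumptions for illustration, and the commented-out call is not executed here.

```python
import datetime

def needs_refresh(expiration, *, skew_seconds=300):
    """Return True when cached credentials are within `skew_seconds`
    of their expiry, so callers refresh before a request can fail."""
    now = datetime.datetime.now(datetime.timezone.utc)
    return (expiration - now).total_seconds() <= skew_seconds

# Hypothetical exchange via boto3 (not run in this sketch):
# sts = boto3.client("sts")
# resp = sts.assume_role_with_web_identity(
#     RoleArn="arn:aws:iam::123456789012:role/agent-role",  # placeholder
#     RoleSessionName="agent-session",
#     WebIdentityToken=oidc_token,   # issued by your identity provider
#     DurationSeconds=900,           # 15-minute lifetime
# )
# creds = resp["Credentials"]        # keys plus an Expiration timestamp
```

Azure and Google Cloud expose the same shape through their workload identity federation endpoints: a verifiable identity token in, short-lived cloud credentials out, nothing static to steal.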
RSVP by filling in the form below ↓
About the Instructor
Learn more about the CyberSandbox initiative here.
Have a question? Email innovate@enterprisecayman.ky for details.