
Sponsored By:
Thursday, December 18
1 pm ET
AI workloads are under attack from new classes of threats, including prompt injection, model poisoning, data leakage, and runtime compromise. Most defenses cover only the edge or focus on external models, leaving internal workloads exposed.
In this session, you will learn how to extend consistent AI security controls across every layer of your environment. We will explore how to apply one unified policy across SaaS LLMs and proprietary models, how to achieve runtime visibility that stops attacks in real time, and how to deploy protection without code changes or developer effort.
Attendees will walk away with a practical framework for securing AI across the full spectrum, from edge traffic to container runtime.
Key Takeaways:
1. Understand how adversaries are exploiting AI through prompt injection, model poisoning, and runtime compromise.
2. Learn how to extend a single, consistent AI security policy from the edge into containerized workloads.
3. See how to achieve full-spectrum AI protection without code changes, SDKs, or added developer effort.
Register Below:
We'll send you an email confirmation and a calendar invite.
Joe Murphy
Placeholder - Aqua Security

Dennis Birchard
Global Lead, Principal Enterprise Security Architect - Akamai
