
Sponsored By:
Thursday, November 20
11 am ET
AI is moving faster than security teams can keep up, and its blind spots are multiplying. Unlike traditional applications, today's LLMs, RAG pipelines, and autonomous agents can respond in endlessly varied ways to even slightly different inputs - making them impossible to test exhaustively like traditional code. That means new attack surfaces prompt injection, context, manipulation, multi-turn exploits, and tool misuse that static scanners and pen tests will never catch
This session cuts through the noise with a practical, step-by-step playbook for AI red teaming: what it is, why it matters now, and how to implement it without slowing delivery. We’ll contrast behavioral security with code security, show why SAST/DAST and classic pen tests miss prompt injection, context manipulation, jailbreaking, multi-turn exploits, and tool misuse—and map these risks across LLM apps, RAG systems, and agentic workflows. You’ll learn how to assess your exposure, choose the proper scope and objectives, balance manual creativity with automation, and operationalize continuous testing using the S-Curve maturity model—from first test to Level 3 continuous coverage.
Join us to learn how platforms like Mend AI Premium - Red Teaming can reduce risk identification and remediation time by up to 80% while meeting the rising expectations of both regulators and customers.
Key Takeaways:
- Identify AI-specific attack surfaces across LLM, RAG, and agentic systems.
- Design objective-driven red team exercises that uncover behavioral risks.
- Balance manual testing with automation for scale and regression coverage.
- Apply the S-Curve maturity model to move from ad-hoc to continuous testing.
- Translate findings into fixes, KPIs, and compliance evidence (e.g., NIST, EU AI Act).
Register Below:
We'll send you an email confirmation and calendar invite

Amit Chita
Field CTO - Mend.io
-modified.png?width=557&height=557&name=Gary%20(1)-modified.png)