

Thursday, September 18
1 pm ET
Building on the Cracking the Code session, we take a deeper dive into LLM poisoning. AI-assisted coding tools are powerful, but they're also imperfect. LLMs trained on vast, unvetted codebases can introduce risky patterns and subtle security flaws into your applications. In this episode, we confront the reality of LLM "poisoning" and discuss how agentic AI can act as an always-on secure code verification layer, catching unsafe patterns in AI-suggested code before they ever reach production. If you're using GenAI tools in your SDLC, you can't afford to miss this conversation.
Don't miss this opportunity to gain actionable insights and take practical steps toward a stronger, more aligned DevSecOps strategy.
Register now to secure your spot!
Register Below:
We'll send you an email confirmation and a calendar invite.
