

Wednesday, September 24
11 am ET
The craft of software engineering, and with it the organizational risk of building applications, is always evolving. From mainframes to web and mobile, waterfall to agile, monoliths to microservices, and bare metal to DevOps and cloud, each era has brought massive disruption to the way software is developed, and the security tools and processes needed to manage emerging threats have often lagged behind.
We are now in the midst of another generational shift in how applications are created, this time away from human-centered software development and toward a new world of machine-generated code, large language models, prompt engineering, and AI agents writing and managing software autonomously. This new landscape brings an entirely novel set of attack vectors that yesterday's application security practices are not equipped to handle.
Just as the DevSecOps methodology evolved to manage the new risks of DevOps-oriented software practice, a new framework is needed to identify, prioritize, remediate, and manage the classes of vulnerabilities being rapidly introduced into AI-native applications.
Key Takeaways:
- Review the novel risks posed by AI-native software, and why existing application security testing tools and processes are not up to the task.
- Understand how roles and responsibilities are evolving quickly, and what it means for security now that building software no longer requires the ability to write code.
- Discuss the ideal workflows for managing software risk in an agent-centric, machine-first SDLC.
- Define the pillars and practices of AITrustOps, the security readiness framework necessary to understand and mitigate AI-native software risk.
Register Below:
We'll send you an email confirmation and a calendar invite.
