
Sponsored By:
Thursday, June 18th
1 pm ET
Attackers and defenders alike are using AI to find bugs previously missed by both humans and machines. But even when used with good intentions, these techniques can do product security teams more harm than good, burying them in low-quality findings, false positives, and wasted token spend.
In this workshop, award-winning offensive security researcher Tyler Nighswander will contrast AI-native codebase scanning with traditional methods such as rules-based SAST and human pentesting, showing how to not only find bugs but also assess their real-world exploitability and severity. He will then walk through actual results from critical open source projects to demonstrate how sharply a properly implemented AI approach differs in practice from other methods.
Key Takeaways:
1. Not all bugs are created equal: how to assess the severity and real-world likelihood of an exploit using AI
2. The strengths and weaknesses of traditional AppSec approaches and how AI can address those shortcomings
3. How to use AI to find complex vulnerabilities previously missed by both human pentesters and autonomous code scanners
Register Below:
We'll send you an email confirmation

Tyler Nighswander
Security Researcher and Engineer - Xint.io by Theori
Tyler Nighswander is a security researcher and engineer working on automating vulnerability research, reverse engineering, and program analysis. He has been building tools to automate security for over a decade, helping win the 2016 Cyber Grand Challenge and co-leading the third-place team in the 2025 AI Cyber Challenge. He has competed in and won many international CTF contests, including winning the prestigious DEFCON CTF nine times. Tyler holds BS degrees in Physics and Computer Science from Carnegie Mellon University (2013).