Double Your Ransoms, Not Your Costs: Scaling Your Gang with AI

Sponsored By:

Semgrep
Tuesday, November 18th

11 am ET

AI is escaping the confines of the chatbox and is now navigating websites, managing calendars, and making purchases. AI agents and agentic AI are quickly becoming the paradigm for AI-era applications, but as we launch agents to do our work more efficiently, so do attackers. It is perhaps unsurprising, then, that attackers can now deploy swarms of agents that autonomously and independently attack organizations: hackbots that find and exploit vulnerabilities, research bots that advise groups on their next target, and coding bots that vibe code their malware. At the same time, attackers are developing insider AIs, intentionally compromised agents that lurk in the swarm, or turning your own AI assistants and agents into unwilling accomplices, siphoning sensitive data without your knowledge and using the trust and permissions you have already granted them to quietly target your systems.

This webinar will examine some of the ways threat actors are using AI in their attacks today, drawing on real examples of early threat actor experiments with AI, and will also glance into the crystal ball to see what the future of AI-assisted attacks will bring to organizations already struggling with cyber attacks. Attackers increasingly mirror the corporate world: SaaS-style ransomware-as-a-service is now morphing into AI-enabled attacks and attackers. And while, much like corporate products, attacker AI is still in its early-adoption and experimental phase, threat actors are already upgrading from a human labor model to an AI-powered factory. Security teams now need to operate at the speed of their adversary's most efficient new employee: the AI agent.

Key Takeaways:

  • Attackers are deploying AI agent swarms: Threat actors are using autonomous AI agents to conduct attacks at scale—from hackbots that find and exploit vulnerabilities, to research bots that identify targets, to coding bots that generate malware. Security teams now face coordinated swarms of AI agents working independently to breach organizations.
  • Your trusted AI assistants can become insider threats: Attackers are compromising AI agents to act as insiders, exploiting the trust and permissions you've already granted them. These "insider AIs" can siphon sensitive data and quietly target your systems from within, turning your own AI tools into unwilling accomplices.
  • The attack economy is shifting from human labor to AI factories: Just as corporations are adopting AI, cybercriminals are evolving from ransomware-as-a-service to AI-enabled attack platforms. Security teams must now operate at the speed of AI agents—their adversary's most efficient new employees—not just human attackers.

Register Below:

We'll send you an email confirmation and calendar invite.


Dr. Katie Paxton-Fear

Staff Security Advocate, Semgrep