
Sponsored By:
Thursday, June 18th
11 am ET
When a developer installs an AI agent skill, they are making a trust decision they almost certainly do not know they are making. Snyk’s ToxicSkills research into 3,000+ skills from ClawHub and skills.sh found that 36% contain security flaws and 13% contain critical issues, including credential theft, backdoor installation, and active prompt injection payloads. And 91% of confirmed malicious skills combine traditional malware with prompt injection in a single artifact.
This is not a theoretical risk. This is a supply chain that is already under active exploitation. In this session, we’ll examine three documented attack chains and define what a rigorous defense of the agentic action layer requires.
We’ll explore:
- A deep dive into recent attacks showing how malicious skills operate
- Tactics to mitigate risks in the AI skill supply chain
- Actionable strategies to defend the agentic action layer
Register Below:
We'll send you an email confirmation

Sonya Moisset
Staff AI Security Advocate - Snyk
Sonya is a Staff AI Security Advocate and a lifelong traveler who has lived in the Middle East, North Africa, and East Asia. She is passionate about cybersecurity, DevSecOps, and AI. She serves on the OpenUK Security Advisory Board and is an OpenUK ambassador and 2x OpenUK Honoree, a 5x GitHub Star, a 5x top contributor to freeCodeCamp, a TryHackMe room creator, a 3x CyberGirls fellowship mentor, an ambassador for GirlCode, and a 2022 #TechWomen100 honoree. Sonya is also an international public speaker, tech advocate, and mentor. Her motto is #GetSecure, #BeSecure & #StaySecure.