AI's Rise in Development: What BSIMM15 Teaches About Security

Sponsored By:

Black Duck
Wednesday, January 29

1 pm ET

The 15th iteration of the “Building Security in Maturity Model” (BSIMM) report was just released. BSIMM is a data-driven model that helps organizations evaluate and improve their software security. One of the prominent trends in BSIMM15 is the increased use of AI/ML and large language models (LLMs) in software development.

The BSIMM15 study measured how best-in-class software security firms are working to secure AI. Since AI is still relatively new to software development, there is a lot of uncertainty regarding how to secure software in the AI era. In this talk, we cover the problems and best practices associated with securing AI. Topics covered include:

  • What does security mean in the context of AI/ML?
  • How can tools be used to effectively automate security requirements?
  • How can LLMs be used to improve your security posture?
  • How can you ensure that AI-generated content doesn’t run afoul of antidiscrimination laws?
  • What do you need to know about the “locked-box problem” in AI?

Register Below:

We'll send you an email confirmation and calendar invite.


Jamie Boote

Security Consultant - Black Duck
Jamie Boote is an Associate Principal Security Consultant with Black Duck who focuses on building application security programs that help companies write software that is harder to hack. His spare time is occupied by his three boys and a podcasting habit.