BSidesPDX-2025

LLM Mayhem: Hands-On Red Teaming for LLM Applications
2025-10-24, Workshop A

Join us for this hands-on workshop, where you will carry out attacks to identify weaknesses in generative AI. If you're interested in getting started with red teaming generative AI systems, this is the workshop for you.

⚠️ Important:
Workshops require registration via this link: https://square.link/u/LYlZ89gC
(Registration will open at 12:00 noon PDT on Friday, October 10th)


We welcome any attendee interested in learning how resilient an LLM-based application is against an adversary set on causing it to output unintended content. No prior experience with red teaming or attacking LLMs is necessary; we will cover the basics and ramp students up throughout the session.

David Lu is a Senior ML Threat Operations Specialist at HiddenLayer, focusing on ML red teaming exercises, adversarial ML instruction, and the development of security ontologies. With 8 years of experience in security research, David also brings over a decade of academic expertise, having taught computer science at Portland State University and philosophy at Syracuse University. His interdisciplinary background uniquely positions him at the intersection of AI/ML security and ethical technology development.

Travis Smith is the Vice President of ML Threat Operations at HiddenLayer, where he is responsible for the services offered by the organization, including red teaming machine learning systems and teaching adversarial machine learning courses. He has spent the last 20 years building enterprise security products and leading world-class security research teams. Travis has presented his original research at information security conferences around the world, including Black Hat, RSA Conference, SecTor, and DEF CON Villages.