09-20, 15:50–16:25 (Europe/Amsterdam), Rembrandt
At the intersection of cybersecurity and data science, AI red teaming uses adversarial attacks to test and secure LLM systems, playing a key role in AI security and safety. Following Microsoft's best practices, this hands-on session is tailored to data scientists who acknowledge the need to secure Gen AI systems, but simply do not know how (yet). Attendees will learn to assess and mitigate LLM risks, observe a live penetration testing demo, and gain practical steps to embark on their own AI security journeys.
Outline:
Introduction to LLM Security (5 min)
• Key challenges in LLM security and safety
• Shared responsibilities between data science and cybersecurity
What is AI Red Teaming? (10 min)
• Origins and evolution from traditional red teaming
• Examples of AI red teaming applied to key LLM risks
Demo (5 min)
• Live penetration test demo using Microsoft’s open-source Python package PyRIT
Practical Steps to Getting Started (5 min)
• Recommendations on easy wins, team compositions and learning resources
Takeaways:
• Understand LLM security basics and the concept of AI red teaming
• View examples and a demo on how to test and mitigate common LLM security risks
• Receive actionable recommendations on starting or improving AI security practices at your organization
Notes:
• Familiarity with LLM concepts is expected.
• No prior security knowledge is required.
Richie is a technical economist and currently a member of the Microsoft NL Security team, currently focusing on AI security. Prior to joining Microsoft, as a data scientist, he focused on A/B testing, causal inference, econometrics and consulting.