PyData Amsterdam 2024

Productionizing Generative AI at ING: Navigating Risk, Compliance, and Defense Mechanisms
09-19, 11:20–11:55 (Europe/Amsterdam), Mondriaan

Productionizing generative AI applications in a highly regulated banking environment such as ING comes with a plethora of challenges: from handling the technical complexities of massive foundation models to addressing ethical considerations and rigorous risk assessments. These challenges are not unique to the financial sector; they are equally critical in other areas such as healthcare and government. In this talk we will delve into these critical aspects, focusing on implementing, deploying, and monitoring generative AI applications under stringent compliance requirements and a limited risk appetite. Highlighting analytical and human-in-the-loop approaches, we'll demonstrate how to establish robust first and second lines of defense, mitigate data and reputational risks, and ensure the accuracy of information, which is vital across various sectors.


Continuous monitoring, output evaluation, and expert human-in-the-loop setups are a necessity when dealing with generative AI applications, particularly in ING's highly regulated financial environment. However, such setups can increase time-to-market, reduce the effectiveness of the product, and scale poorly. In this talk, we present the architecture landscape and four distinct analytical and human-in-the-loop products designed to address these challenges. These products provide (i) continuous validation of model and bot guardrails, (ii) an emergency stop mechanism in case of guardrail failures, (iii) a second line of defense (2LoD) for data and reputation risk mitigation, and (iv) periodic model and bot output review tooling.
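To make the first two products concrete, the combination of continuous guardrail validation and an emergency stop can be sketched as a simple circuit breaker over a sliding window of guardrail results. This is purely illustrative; the `GuardrailMonitor` class and its thresholds are hypothetical and not taken from ING's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class GuardrailMonitor:
    """Illustrative circuit breaker: continuously validates guardrail
    outcomes on model outputs and trips an emergency stop when the
    failure rate over a sliding window exceeds a threshold."""
    window_size: int = 100
    max_failure_rate: float = 0.05
    results: list = field(default_factory=list)  # recent pass/fail flags
    stopped: bool = False

    def record(self, passed: bool) -> None:
        """Log one guardrail check and re-evaluate the breaker."""
        self.results.append(passed)
        self.results = self.results[-self.window_size:]  # keep the window
        failure_rate = self.results.count(False) / len(self.results)
        if failure_rate > self.max_failure_rate:
            self.stopped = True  # emergency stop: take the bot offline

    def allow_response(self) -> bool:
        """Gate every outgoing response on the breaker state."""
        return not self.stopped

# Simulated stream of guardrail check results: 30% failures in the window.
monitor = GuardrailMonitor(window_size=10, max_failure_rate=0.2)
for passed in [True] * 7 + [False] * 3:
    monitor.record(passed)
print(monitor.allow_response())  # False: the breaker has tripped
```

In a real deployment the breaker state would typically feed an alerting pipeline and a human review queue rather than a boolean flag, but the core idea — automated validation that can halt the system faster than a human reviewer — is the same.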

The outline of this talk is:
- Introduction
- Challenges in implementing Generative AI at ING (compliance & risk appetite)
- The Architecture Landscape & Components
o Continuous Validation of Guardrails
o Emergency Stop Mechanism
o Second Line of Defense (2LoD)
o Periodic Output Review Tooling
- Examples & Demo (if possible)
- Conclusion

Key Benefits for Attendees:
• Gain actionable strategies for integrating compliance in AI projects.
• Understand the critical role of human oversight in AI implementation.
• Discover effective methods for responsibly automating AI processes.

Audience
This session is particularly tailored for AI professionals, regulatory compliance officers, and technologists in sectors where stringent regulations prevail.

Farzam Fanitabasi works as the "Chapter Lead Data Science: LLMOps" at ING Netherlands. He has worked in the NLP domain for the past four years, and in the LLM domain for the last three. He holds a PhD from ETH Zurich (2018-2021), where he worked on deep learning and NLP. Prior to joining ING, Farzam worked as a postdoc (2021) on applied NLP in the communication science domain (VU Amsterdam), and later as a senior data scientist and tech lead, leading a technical team in designing and developing large-scale NLP and LLM data products.

I'm a data scientist with a background in physics, holding both a bachelor's and a master's degree in the field. After completing my professional doctorate in data science at Eindhoven University of Technology, I gained extensive industry experience working with diverse clients across various sectors, primarily in the NLP and LLM domains. Outside of work, I'm an avid runner and enjoy playing volleyball and indoor football. In my spare time, I love reading, building with Legos, and engaging in board games.