Last week I had the pleasure of recording a live webinar with my colleagues Maor Ivgi (CTO, Stardat), Heather Dawe (UK Head of Data), and Megan Heath (Director, UK&I) on pragmatic perspectives on managing algorithmic bias, model explainability, fairness, and trust in the industry. You can see the recording here.
Following is a brief abstract of the talk:
As AI-infused systems become commonplace, the rapid growth of machine learning capabilities and their increasing presence in our lives raise pressing questions about the impact, governance, ethics, and accountability of these technologies.
AI governance is about making AI explainable, transparent, and ethical. How do financial institutions, telecommunications companies, and retail and media enterprises implement policies to avoid perpetuating bias, apply principles of trustworthy AI, and deploy their systems in an ethical, inclusive, and accountable manner? Our panel of experts will answer these practical questions with data-driven KPIs to measure and tools to equip attendees to narrow the knowledge gap between data science experts and the variety of people who use, interact with, and are impacted by these technologies.
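To make the idea of a data-driven bias KPI concrete, here is a minimal sketch of one commonly used metric, demographic parity difference (the gap in positive-outcome rates between two groups). The metric choice, the group labels, and the decision records below are illustrative assumptions, not taken from the webinar.

```python
# Minimal sketch of one data-driven fairness KPI: demographic parity
# difference, i.e. the gap in positive-outcome rates between two groups.
# All records here are hypothetical and purely for illustration.

def positive_rate(records, group):
    """Share of records in `group` that received a positive outcome."""
    in_group = [r for r in records if r["group"] == group]
    if not in_group:
        return 0.0
    return sum(r["approved"] for r in in_group) / len(in_group)

def demographic_parity_difference(records, group_a, group_b):
    """Absolute gap in approval rates between two groups (0 = parity)."""
    return abs(positive_rate(records, group_a) - positive_rate(records, group_b))

if __name__ == "__main__":
    # Hypothetical loan decisions labelled with a protected attribute.
    decisions = [
        {"group": "A", "approved": 1},
        {"group": "A", "approved": 1},
        {"group": "A", "approved": 0},
        {"group": "B", "approved": 1},
        {"group": "B", "approved": 0},
        {"group": "B", "approved": 0},
    ]
    gap = demographic_parity_difference(decisions, "A", "B")
    print(f"Demographic parity difference: {gap:.2f}")  # 0.33 for this sample
```

A KPI like this can be tracked per model release or per monitoring window; the threshold that triggers review is a policy decision, which is exactly the kind of governance question the panel discusses.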
Today, organizations seek clarity and structure around AI and grapple with harnessing its potential while ensuring that their systems do not exacerbate existing inequalities and biases, or even create new ones. A comprehensive AI governance framework, together with model management, data consistency, and algorithmic transparency, addresses these challenges.
In this session, panelists will outline the tools and technologies, and provide the practical advice, needed to understand and monitor the AI lifecycle. Please join the UST team of experts to explore policy remedies for the complex challenges associated with emerging technologies, operationalize these concepts in a production AI system, and, crucially, create a culture of trust in AI.