
Explainability is Fundamental to Removing Data Bias

The South Florida Software Developer Conference is a one-day developer event covering topics including Machine Learning, AI, VR/AR/MR, IoT, .NET Core, Visual Studio 2019, DevOps, MVC Framework, JavaScript, jQuery, SQL Server 2019, Business Intelligence, Software Testing, Xamarin/Mobile Development, cross-platform development, Azure/Cloud, and Business/Career Development. Teaming up with TechLauderdale and the Palm Beach Tech Association, and hosted by Nova Southeastern University, the conference is driven by a community of dedicated volunteers, speakers, and partners. This year I spoke at the conference on the topic Explainable, Interpretable, and Transparent Machine Learning.

I had the great opportunity to catch up with fellow MVPs and Microsoft community program manager Rochelle Sonnenberg.

Microsoft MVPs with Rochelle Sonnenberg

The abstract of my talk:

Most real datasets have hidden biases. Being able to detect the impact of the bias in the data on the model, and then to repair the model, is critical if we are going to deploy machine learning in applications that affect people's health, welfare, and social opportunities. This requires models that are intelligible. In this talk, we will review a variety of tools and open source projects, including Microsoft InterpretML, for training interpretable models and explaining black-box systems. These tools (the What-If Tool, AI Fairness 360, etc.) are made to help developers experiment with ways to introduce explanations of the output of AI systems. This talk will review InterpretML and draw parallels with LIME, ELI5, and SHAP. InterpretML implements a number of intelligible models, including the Explainable Boosting Machine (an improvement over generalized additive models), and several methods for generating explanations of the behavior of black-box models or their individual predictions.

During my talk at the Florida Developers Conference, I emphasized the importance of explainability for machine learning and highlighted the fact that most real datasets have hidden biases that seep in from the real world. Being able to detect the impact of the bias in the data on the model, and then to repair the model, is critical if we are going to deploy machine learning in applications that affect people's health, welfare, and social opportunities. This requires models that are intelligible.
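
As a concrete illustration of that point, here is a minimal sketch of checking whether a model's behavior differs across a sensitive subgroup. The dataset, column names, and the injected bias are entirely made up for illustration; only scikit-learn is assumed.

```python
# Hypothetical sketch: does bias in the data propagate into the model?
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "income": rng.normal(50, 15, n),
    "age": rng.integers(18, 80, n),
    "group": rng.integers(0, 2, n),   # sensitive attribute (made up)
})
# Inject a biased label: the outcome partly depends on the group itself.
df["approved"] = ((df["income"] + 10 * df["group"] + rng.normal(0, 5, n)) > 55).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns=["approved"]), df["approved"], random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
preds = model.predict(X_test)

# Per-group accuracy and positive rate: a large gap suggests the bias in
# the data has leaked into the model and warrants a closer look.
report = X_test.assign(actual=y_test.values, pred=preds)
for g, sub in report.groupby("group"):
    print(f"group={g}  accuracy={accuracy_score(sub.actual, sub.pred):.3f}  "
          f"positive_rate={sub.pred.mean():.3f}")
```

Subgroup metrics like these are only a first diagnostic; tools such as AI Fairness 360 and the What-If Tool take the same idea much further.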

In this talk, I reviewed a variety of algorithms, tools, and open source projects for training interpretable models and explaining black-box systems. These tools (the What-If Tool, AI Fairness 360, etc.) are made to help developers experiment with ways to introduce explanations of the output of AI systems, and I drew parallels with LIME, ELI5, and SHAP; a short example follows below.
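
For instance, here is a minimal sketch, assuming InterpretML's documented glassbox API, of training an Explainable Boosting Machine and viewing its global and local explanations. The breast cancer dataset from scikit-learn is used purely for illustration.

```python
# Train an Explainable Boosting Machine (a glassbox model) with InterpretML
# and inspect global feature importances and local per-prediction explanations.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

# Global explanation: per-feature shape functions and importances.
show(ebm.explain_global())

# Local explanations for a handful of individual predictions.
show(ebm.explain_local(X_test[:5], y_test[:5]))
```

Post-hoc explainers such as LIME, ELI5, and SHAP approach the same goal from the other direction, explaining the predictions of an existing black-box model rather than training an intelligible one.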

The slides of the talk can be downloaded from here: 

Explainable and Interpretable AI - Florida Dev Conference Feb 2020

Happy Coding!
