[On-demand webinar] Explainable AI explained by Google and ML6

Machine learning models increasingly support important decisions: helping doctors diagnose health problems, recommending suitable candidates for a job opening, and detecting fraudulent transactions in the financial industry. It is therefore crucial to understand and interpret the inner workings and predictions of such models in order to reduce errors, build trust and gain new insights. This is the goal of Explainable AI (XAI), a widely used term for the processes and methods that make machine learning models and their outputs understandable to humans.

What you’ll learn during this webinar

  • ML6 shares insights and practical case studies demonstrating the goals and importance of explainable AI, and looks at how a machine learning solution can be made more explainable or interpretable, not only at the model level but throughout the entire ML project.

  • ML6 defines specific actions you can take at each step, from the problem definition and the data used, through the model itself, all the way to the user interface, to make model results more interpretable and useful.

  • Google Cloud shares insights on Vertex Explainable AI, a Vertex AI feature that helps you understand your model's outputs for classification and regression tasks.
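To give a flavour of the feature-attribution idea behind tools like Vertex Explainable AI, here is a minimal, self-contained sketch (not the Vertex API): each feature's attribution is the change in the model's prediction when that feature alone is moved from a baseline value to its actual value. The model, feature names and weights below are made up for illustration.

```python
# Toy feature attribution: how much does each feature move the
# prediction away from a baseline? Illustrative only; this is NOT
# the Vertex Explainable AI API, and the model below is hypothetical.

def predict(features):
    # A stand-in linear "model"; weights are invented for this example.
    weights = {"income": 0.5, "age": 0.2, "debt": -0.8}
    return sum(weights[name] * value for name, value in features.items())

def attribute(instance, baseline):
    """Attribution of a feature = prediction change when that feature
    alone is switched from its baseline value to the instance value."""
    attributions = {}
    for name in instance:
        perturbed = dict(baseline)
        perturbed[name] = instance[name]
        attributions[name] = predict(perturbed) - predict(baseline)
    return attributions

instance = {"income": 4.0, "age": 30.0, "debt": 2.0}
baseline = {"income": 0.0, "age": 0.0, "debt": 0.0}
print(attribute(instance, baseline))
# {'income': 2.0, 'age': 6.0, 'debt': -1.6}
```

For a linear model, these per-feature attributions sum exactly to the difference between the instance prediction and the baseline prediction; real explainers such as integrated gradients or sampled Shapley generalise the same baseline-versus-instance idea to non-linear models.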


Our Ethical AI experts


More insights on Ethical AI