Machine learning models are increasingly used to support important decisions - from helping doctors diagnose health problems, to recommending suitable candidates for a job opening, to detecting fraudulent transactions in finance. Understanding and interpreting the inner workings and predictions of such models is crucial for reducing errors, building trust, and gaining new insights. This is the goal of Explainable AI (often called XAI), a widely used term for the processes and methods that make machine learning models and their outputs understandable to humans.
What you’ll learn during this webinar
- We share insights and practical case studies demonstrating the goals and importance of explainable AI, and look at how a machine learning solution can be made more explainable or interpretable - not only at the model level but throughout the entire ML project.
- We define specific actions you can take at each step - from the problem definition and the data used, through the model itself, all the way to the user interface - to make your models' results more interpretable and useful.
- Google shares insights on Vertex Explainable AI, a tool that helps you understand your model's outputs for classification and regression tasks.
Get access to the webinar by filling in the form below.