
Interpreting ML models with explainable AI

We often trust our high-accuracy ML models to make decisions for our users, but it's hard to know exactly why or how these models reached specific conclusions. Explainable AI provides a suite of tools to help you interpret your ML model's predictions. Listen to this discussion on how to use Explainable AI to ensure your ML models treat all users fairly. Watch a presentation on how to analyze image, text, and tabular models from a fairness perspective using Explanations on AI Platform. Finally, learn how to use the What-If Tool, an open-source visualization tool for optimizing your ML model's performance and fairness.
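
Explanations on AI Platform return per-feature attributions alongside each prediction. As a rough client-side sketch, assuming a tabular model already deployed with explanation metadata and the explainable-ai-sdk package installed; all resource names and the instance schema below are hypothetical placeholders, and exact signatures may differ between SDK releases:

import explainable_ai_sdk

# Sketch only: project, model, and version are placeholders, not real
# resources from the talk.
model = explainable_ai_sdk.load_model_from_ai_platform(
    project='your-gcp-project',
    model='your_model',
    version='v1',
)

# One instance in whatever JSON format the model's serving signature
# expects; this census-style schema is a made-up example.
instance = {'age': 39, 'hours_per_week': 40, 'education': 'Bachelors'}

# Each result pairs the prediction with per-feature attribution values
# (sampled Shapley or integrated gradients, depending on how the model
# was deployed).
explanations = model.explain([instance])
explanations[0].visualize_attributions()  # bar chart in a notebook cell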

Speaker: Sara Robinson
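
The What-If Tool mentioned above runs inside a notebook and only needs a predict function, so it also works with non-TensorFlow models. A minimal sketch, assuming pip install witwidget scikit-learn in a Jupyter environment; the dataset and model here are illustrative stand-ins, not the ones used in the talk:

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

# Train any model; the What-If Tool only needs a predict function.
data = load_breast_cancer()
clf = LogisticRegression(max_iter=5000).fit(data.data, data.target)

def custom_predict(examples):
    # The tool passes rows of feature values (target column excluded);
    # return per-class probabilities so fairness metrics can be computed.
    return clf.predict_proba(np.array(examples)).tolist()

# Each row shown in the tool is [features..., label].
examples = np.concatenate(
    [data.data[:200], data.target[:200, None]], axis=1).tolist()

config_builder = (
    WitConfigBuilder(examples, list(data.feature_names) + ['label'])
    .set_custom_predict_fn(custom_predict)
    .set_target_feature('label')
    .set_label_vocab(['malignant', 'benign'])
)
WitWidget(config_builder, height=720)  # renders the interactive widget

From the widget you can then slice the data by a feature, compare confusion matrices across slices, and apply fairness criteria such as equal opportunity.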

Watch more:
Google Cloud Next ’20: OnAir → https://goo.gle/next2020

Subscribe to the GCP Channel → https://goo.gle/GCP

#GoogleCloudNext

Session: AI218
Products: Explainable AI, Cloud AutoML, AI Platform Training

Video "Interpreting ML models with explainable AI" from the Google Cloud Platform channel

Video information
September 15, 2020, 21:34:03
Duration: 00:21:08