
Serving Machine Learning Models at Scale Using KServe - Animesh Singh, IBM - KubeCon North America

KFServing is a serverless, open-source solution for serving machine learning models. As machine learning becomes more widely adopted in organizations, the trend is to deploy larger numbers of models, and there is a growing need to serve models on GPUs. Because GPUs are expensive, engineers are looking for ways to serve multiple models with a single GPU. The KFServing community designed a Multi-Model Serving solution to scale the number of models that can be served in a Kubernetes cluster. By sharing a serving container that can host multiple models, Multi-Model Serving addresses three limitations of the current 'one model, one service' paradigm: 1) compute resources (including cost on public clouds), 2) the maximum number of pods, and 3) the maximum number of IP addresses. This talk will present the design of Multi-Model Serving, describe how to use it to serve models built with different frameworks, and share benchmark results that demonstrate its scalability.
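
For context, below is a minimal sketch of how several models might be registered against one shared KServe serving deployment so that they share its pods (and GPU). It assumes the KServe TrainedModel custom resource used by Multi-Model Serving and the official kubernetes Python client; the InferenceService name, namespace, framework, and storage URIs are placeholders for illustration, not details from the talk.

# Hypothetical sketch: register several models against one shared KServe
# InferenceService via TrainedModel custom resources, so the models share
# the same serving pods instead of each getting its own pod and IP address.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

SHARED_ISVC = "triton-mms"   # assumed: a multi-model-enabled InferenceService already deployed
NAMESPACE = "models"         # placeholder namespace

for name in ["model-a", "model-b", "model-c"]:
    trained_model = {
        "apiVersion": "serving.kserve.io/v1alpha1",
        "kind": "TrainedModel",
        "metadata": {"name": name, "namespace": NAMESPACE},
        "spec": {
            # Every TrainedModel points at the same InferenceService,
            # so they are all loaded into that service's shared container.
            "inferenceService": SHARED_ISVC,
            "model": {
                "framework": "tensorflow",                    # placeholder framework
                "storageUri": f"gs://example-bucket/{name}",  # placeholder model location
                "memory": "1Gi",
            },
        },
    }
    # Create the custom resource in the cluster.
    api.create_namespaced_custom_object(
        group="serving.kserve.io",
        version="v1alpha1",
        namespace=NAMESPACE,
        plural="trainedmodels",
        body=trained_model,
    )

Because each TrainedModel references the same InferenceService, adding a model does not add pods or IP addresses; it only loads another model into the existing serving container, which is the idea behind the three limitations listed above.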

Video: Serving Machine Learning Models at Scale Using KServe - Animesh Singh, IBM - KubeCon North America, from the channel Justin Miller
Video information
Published: October 20, 2021, 22:53:44
Duration: 00:48:34