Evaluation Measures for Search and Recommender Systems

In this video, you will learn about popular offline metrics (evaluation measures) like Recall@K, Mean Reciprocal Rank (MRR), Mean Average Precision@K (MAP@K), and Normalized Discounted Cumulative Gain (NDCG@K). We will also demonstrate how each of these metrics can be implemented in Python.
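
For a taste of what the "in Python" segments cover, below is a minimal sketch of all four metrics in plain Python. The function names, input shapes (relevant-item sets and ranked result lists), and the normalization conventions flagged in the comments are illustrative assumptions, not the exact code from the linked notebooks.

```python
# Minimal sketches of the four offline metrics covered in the video.
# Only the Python standard library is used.
import math


def recall_at_k(relevant: set, ranked: list, k: int) -> float:
    """Recall@K: fraction of all relevant items that appear in the top-K results."""
    hits = sum(1 for item in ranked[:k] if item in relevant)
    return hits / len(relevant) if relevant else 0.0


def mrr(relevant_per_query: list, ranked_per_query: list) -> float:
    """Mean Reciprocal Rank: average over queries of 1/rank of the first relevant hit."""
    total = 0.0
    for relevant, ranked in zip(relevant_per_query, ranked_per_query):
        for rank, item in enumerate(ranked, start=1):
            if item in relevant:
                total += 1.0 / rank
                break  # only the first relevant result counts
    return total / len(ranked_per_query)


def average_precision_at_k(relevant: set, ranked: list, k: int) -> float:
    """AP@K: average of precision@i at every rank i <= K holding a relevant item."""
    hits, score = 0, 0.0
    for i, item in enumerate(ranked[:k], start=1):
        if item in relevant:
            hits += 1
            score += hits / i  # precision at this cut-off
    # Normalizing by min(#relevant, K) is one common convention;
    # some definitions divide by the number of hits instead.
    return score / min(len(relevant), k) if relevant else 0.0


def map_at_k(relevant_per_query: list, ranked_per_query: list, k: int) -> float:
    """MAP@K: mean of AP@K across all queries."""
    return sum(
        average_precision_at_k(rel, ranked, k)
        for rel, ranked in zip(relevant_per_query, ranked_per_query)
    ) / len(ranked_per_query)


def ndcg_at_k(gains: list, k: int) -> float:
    """NDCG@K: DCG of the ranking divided by the DCG of its ideal reordering.

    `gains` holds the graded relevance score of each result, in ranked order.
    This sketch normalizes against the ideal reordering of the returned list;
    a stricter IDCG would rank the best K items from the whole collection.
    """
    def dcg(scores):
        # Linear gain; some formulations use (2**rel - 1) in the numerator.
        return sum(rel / math.log2(i + 1) for i, rel in enumerate(scores[:k], start=1))

    ideal = dcg(sorted(gains, reverse=True))
    return dcg(gains) / ideal if ideal else 0.0


# Example: 2 relevant docs, 5 results returned.
relevant = {"d2", "d5"}
ranked = ["d1", "d2", "d3", "d5", "d4"]
print(recall_at_k(relevant, ranked, k=4))             # 1.0
print(average_precision_at_k(relevant, ranked, k=4))  # (1/2 + 2/4) / 2 = 0.5
print(ndcg_at_k([0, 3, 0, 2, 0], k=4))                # graded relevance per rank
```

The fully worked versions, including the dataset setup from the video, are in the code notebooks linked below.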

Evaluation of information retrieval (IR) systems is critical to making well-informed design decisions. From search to recommendations, evaluation measures are paramount to understanding what does and does not work in retrieval.

Many big tech companies attribute much of their success to well-built IR systems. One of Amazon's earliest iterations of the technology reportedly drove more than 35% of the company's sales, and Google attributes 70% of YouTube views to its IR recommender systems.

IR systems power some of the greatest companies in the world, and behind every successful IR system is a set of evaluation measures.

🌲 Pinecone article:
https://www.pinecone.io/learn/offline-evaluation

🔗 Code notebooks:
https://github.com/pinecone-io/examples/tree/master/learn/algos-and-libraries/offline-evaluation

🤖 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5

🎉 Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership

👾 Discord:
https://discord.gg/c5QtDB9RAP

00:00 Intro
00:51 Offline Metrics
02:38 Dataset and Retrieval 101
06:08 Recall@K
07:57 Recall@K in Python
09:03 Disadvantages of Recall@K
10:21 MRR
13:32 MRR in Python
14:18 MAP@K
18:17 MAP@K in Python
19:27 NDCG@K
29:26 Pros and Cons of NDCG@K
29:48 Final Thoughts

Video "Evaluation Measures for Search and Recommender Systems" from the James Briggs channel
Video information
Published: June 28, 2022, 20:06:40
Duration: 00:31:25