
Training BERT #1 - Masked-Language Modeling (MLM)

🎁 Free NLP for Semantic Search Course:
https://www.pinecone.io/learn/nlp

BERT, everyone's favorite transformer, cost Google ~$7K to train (and who knows how much in R&D costs). From there, we can write a couple of lines of code to use the same model - all for free.

BERT has enjoyed unparalleled success in NLP thanks to two unique training approaches: masked-language modeling (MLM) and next sentence prediction (NSP).

MLM consists of giving BERT a sentence and optimizing the weights inside BERT so that it outputs that same sentence on the other side.

So we input a sentence and ask BERT to output the same sentence.

However, before we actually give BERT that input sentence, we mask a few tokens.

So we're really inputting an incomplete sentence and asking BERT to complete it for us.
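The masking step described above can be sketched in plain Python. This is a minimal sketch, not the exact pipeline from the video: it assumes BERT-style token IDs (101 = [CLS], 102 = [SEP], 103 = [MASK] in bert-base-uncased) and applies the simple "replace ~15% of tokens with [MASK]" rule, skipping the 80/10/10 refinement used in the original BERT paper.

```python
import random

# Special token IDs, assuming the bert-base-uncased vocabulary
CLS_ID, SEP_ID, MASK_ID = 101, 102, 103

def mask_tokens(input_ids, mask_prob=0.15, seed=0):
    """Return (masked_ids, labels).

    labels holds the original token ID at each masked position and -100
    elsewhere; -100 is the index the cross-entropy loss ignores, so BERT
    is only penalized on the tokens it had to fill in.
    """
    rng = random.Random(seed)
    masked = list(input_ids)
    labels = [-100] * len(input_ids)
    for i, tok in enumerate(input_ids):
        if tok in (CLS_ID, SEP_ID):   # never mask special tokens
            continue
        if rng.random() < mask_prob:
            labels[i] = tok           # remember the original token
            masked[i] = MASK_ID       # hide it from the model
    return masked, labels

# Example: a short tokenized sentence (IDs are illustrative)
ids = [101, 2023, 2003, 1037, 7279, 102]
masked, labels = mask_tokens(ids, mask_prob=0.5, seed=1)
```

During training, `masked` becomes the model input and `labels` the target, so the loss is computed only over the hidden positions.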

How to train BERT with MLM:
https://youtu.be/R6hcxMMOrPE

🤖 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5

Medium article:
https://towardsdatascience.com/masked-language-modelling-with-bert-7d49793e5d2c

🎉 Sign-up For New Articles Every Week on Medium!
https://medium.com/@jamescalam/membership

📖 If membership is too expensive - here's a free link:
https://towardsdatascience.com/masked-language-modelling-with-bert-7d49793e5d2c?sk=17a19eca8dc8280bea4138802580ffe0


🕹️ Free AI-Powered Code Refactoring with Sourcery:
https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff

Video: Training BERT #1 - Masked-Language Modeling (MLM), from the James Briggs channel
Video information
Published: May 19, 2021, 19:51:39
Duration: 00:16:24