
Fine-tuning language models for Spanish NLP tasks by Álvaro Barbero Jiménez

Natural Language Processing (NLP) is today one of the main focus areas of artificial intelligence and machine learning. While conversational agents such as Siri or Alexa are the most visible representatives of NLP, the field finds wide application in search engines, chatbots, customer service, opinion mining, and more.

The high levels of success that such NLP solutions have achieved in recent years are mostly fueled by three factors: the public availability of very large datasets (corpora) of web text, the rapid scaling of specialized hardware (GPUs and TPUs), and improvements in deep learning models adapted for language.

Focusing on this last point, so-called "language models" have proven quite effective at leveraging the large datasets available. A language model is a deep artificial neural network trained on unlabeled corpora with the aim of modelling the distribution of words (or word pieces) in a particular language. In this way, even though it is trained in an unsupervised fashion, a language model can perform NLP tasks such as filling gaps in sentences or generating text that follows a cue.
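
To make this concrete, here is a minimal sketch (not taken from the talk) of how a pretrained language model can fill a gap in a Spanish sentence. It assumes the Hugging Face transformers library and the Spanish BERT model "dccuchile/bert-base-spanish-wwm-cased" (BETO); neither is named in this description, they are only illustrative choices.

# Minimal sketch, assuming the Hugging Face transformers library is installed
# and using the Spanish BERT model BETO; both are illustrative assumptions,
# not choices stated in the talk description.
from transformers import pipeline

# Masked-word prediction: the model proposes words to fill the [MASK] gap.
fill_mask = pipeline("fill-mask", model="dccuchile/bert-base-spanish-wwm-cased")

for candidate in fill_mask("Madrid es la [MASK] de España."):
    print(candidate["token_str"], round(candidate["score"], 3))

Running this prints the model's highest-scoring candidates for the masked word together with their scores, illustrating how a model trained without labels already captures useful knowledge about Spanish.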

#BIGTH20 #AI #NLP #DeepLearning #MachineLearning

Session presented at Big Things Conference 2020 by Álvaro Barbero Jiménez, Chief Data Scientist at IIC

17th November 2020
Home Edition

Do you want to know more? https://www.bigthingsconference.com/

Video information
Published: 26 November 2020, 22:14:46
Duration: 00:38:13