Stochastic computational graphs: optimization and applications in NLP, Maksim Kretov
Slides: https://bayesgroup.github.io/bmml_sem/2017/kretov_scg-24-nov.pdf
Using stochastic computational graphs formalism for optimization of sequence-to-sequence model
A variety of machine learning problems can be formulated as the optimization of some (surrogate) loss function. The computation of the loss function can be viewed in terms of stochastic computational graphs (SCG). We use this formalism to analyze the problem of optimizing the well-known sequence-to-sequence model with attention and propose a reformulation of the task. Examples are given for machine translation (MT). Our work provides a unified view of different optimization approaches for sequence-to-sequence models and can help researchers develop new network architectures with embedded stochastic nodes.
Paper: https://arxiv.org/abs/1711.07724
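As a rough illustration of the gradient mechanics that the SCG formalism covers, the following sketch (not taken from the paper; the toy reward and all names are hypothetical, and PyTorch is assumed) shows the score-function (REINFORCE) surrogate loss for a single discrete stochastic node, compared against the exact gradient of the expectation on the same tiny example.

import torch

torch.manual_seed(0)

# Parameters of a single discrete stochastic node (a categorical distribution).
logits = torch.randn(5, requires_grad=True)
probs = torch.softmax(logits, dim=-1)

# A non-differentiable downstream score attached to each outcome
# (hypothetical stand-in for, e.g., a sentence-level quality metric).
reward = torch.tensor([0.1, 0.9, 0.3, 0.0, 0.5])

# Score-function (REINFORCE) surrogate: grad E_z[r(z)] = E_z[r(z) * grad log p(z)].
z = torch.multinomial(probs, num_samples=10000, replacement=True)
surrogate = (reward[z] * torch.log(probs[z])).mean()
surrogate.backward()
print("Monte-Carlo gradient estimate:", logits.grad)

# Exact gradient of the expectation, for comparison on this small vocabulary.
logits_exact = logits.detach().clone().requires_grad_(True)
(torch.softmax(logits_exact, dim=-1) * reward).sum().backward()
print("Exact gradient:", logits_exact.grad)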
Differentiable lower bound for expected BLEU score
In natural language processing tasks the performance of a model is often measured with a non-differentiable metric, such as the BLEU score. To use efficient gradient-based optimization methods, a common workaround is to optimize some surrogate loss function instead. This approach is effective only if optimizing the surrogate loss also improves the target metric; the corresponding problem is referred to as loss-evaluation mismatch. In the present work we propose a method for computing a differentiable lower bound on the expected BLEU score that does not involve a computationally expensive sampling procedure, such as the one required by the REINFORCE rule from the reinforcement learning (RL) framework. The derived lower bound is tight in the sense that it coincides with the exact BLEU score for degenerate distributions over candidate texts, so it is fair to call this lower bound a "differentiable BLEU score".
Paper: https://arxiv.org/abs/1712.04708
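To give a flavor of how a hard n-gram match can be relaxed into something differentiable, here is an illustrative sketch (not the paper's exact derivation; the toy vocabulary and variable names are hypothetical, and PyTorch is assumed) of an expected unigram precision computed directly from the decoder's token probabilities. For a degenerate, one-hot distribution it reduces to the usual count-based precision, which mirrors the tightness property stated above.

import torch

# Toy vocabulary and a three-token reference sentence (hypothetical example).
vocab = {"the": 0, "cat": 1, "sat": 2, "mat": 3, "dog": 4}
reference = torch.tensor([vocab[w] for w in ["the", "cat", "sat"]])

# Candidate distribution over the vocabulary at each output position,
# e.g. the decoder softmax of a sequence-to-sequence model.
cand_logits = torch.randn(3, len(vocab), requires_grad=True)
cand_probs = torch.softmax(cand_logits, dim=-1)

# Expected number of matched unigrams: at each position, the probability mass
# placed on tokens that occur in the reference. (Count clipping, higher-order
# n-grams and the brevity penalty of real BLEU are omitted for brevity.)
ref_mask = torch.zeros(len(vocab))
ref_mask[reference] = 1.0
expected_matches = (cand_probs * ref_mask).sum()
soft_precision = expected_matches / cand_probs.shape[0]  # differentiable in cand_logits

(-soft_precision).backward()  # gradients flow back to the decoder parameters
print(float(soft_precision), cand_logits.grad.shape)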
Video from the BayesGroup.ru channel: Stochastic computational graphs: optimization and applications in NLP, Maksim Kretov.
Video information
Published: December 17, 2017, 15:36:11
Duration: 01:37:55