Explaining the Kullback-Leibler divergence through secret codes
This video explains the concept of the Kullback-Leibler (KL) divergence through a 'secret code' example. The KL divergence is a directional measure of the separation between two probability distributions (although it is not a 'distance', since it is not symmetric).
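As a minimal sketch of that idea (not taken from the video; the two example distributions below are made up for illustration), the KL divergence can be computed in base-2 logarithms as the expected number of extra bits needed when symbols drawn from p are encoded with a code optimised for q, and the two directions generally give different answers:

```python
import numpy as np

# Two illustrative discrete distributions over the same three symbols.
p = np.array([0.7, 0.2, 0.1])          # "true" distribution
q = np.array([1/3, 1/3, 1/3])          # coding / approximating distribution

def kl_divergence(p, q):
    """D_KL(p || q) in bits: the expected extra message length incurred
    by encoding symbols drawn from p with a code optimised for q."""
    return np.sum(p * np.log2(p / q))

print(kl_divergence(p, q))   # D_KL(p || q)
print(kl_divergence(q, p))   # D_KL(q || p): a different value in general,
                             # which is why KL is a divergence, not a distance
```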
This video is part of a lecture course which closely follows the material covered in the book, "A Student's Guide to Bayesian Statistics", published by Sage, which is available to order on Amazon here: https://www.amazon.co.uk/Students-Guide-Bayesian-Statistics/dp/1473916364
For more information on all things Bayesian, have a look at: https://ben-lambert.com/bayesian/. The playlist for the lecture course is here: https://www.youtube.com/playlist?list=PLwJRxp3blEvZ8AKMXOy0fc0cqT61GsKCG&disable_polymer=true