Variational Autoencoders - Part 2 (Modeling a Distribution of Images)
Let's start by discussing the problem of fitting
a distribution P(X) to a dataset of points.
Why? Well, we have already discussed this problem in week one,
when we saw how to fit a Gaussian to a dataset of points.
We discussed it again in week two,
when we covered the clustering problem,
and how we can solve it by fitting a Gaussian mixture model to our data.
And we also discussed
probabilistic PCA, which is a kind of infinite mixture of Gaussians.
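The fit-then-sample idea recapped above can be sketched in a few lines. This is an illustrative example on made-up toy data, not the course's actual code: we fit a single Gaussian to a point cloud by maximum likelihood (sample mean and covariance) and then draw new points from the fitted distribution.

```python
import numpy as np

# Toy stand-in for a dataset: 500 two-dimensional points (hypothetical data).
rng = np.random.default_rng(0)
X = rng.normal(loc=[1.0, -2.0], scale=0.5, size=(500, 2))

# Maximum-likelihood fit of a single Gaussian P(X): sample mean and covariance.
mu = X.mean(axis=0)
cov = np.cov(X, rowvar=False)

# Sample new points from the fitted distribution.
new_points = rng.multivariate_normal(mu, cov, size=3)
print(new_points.shape)  # (3, 2)
```

The same recipe — estimate the parameters of P(X), then sample from it — is what the more sophisticated generative models this week scale up to images.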
But now we want
to return to this question, because it turns out that the methods we covered,
like the Gaussian, the Gaussian mixture model, and probabilistic PCA,
are not expressive enough to capture complicated objects like natural images.
So, you may want to fit
a probability distribution to your dataset of natural images,
for example, to generate new data.
And if you try to do that with a Gaussian mixture model, it will work,
but it will not work as well as some more sophisticated models we will discuss this week.
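As a concrete sketch of that Gaussian-mixture baseline — assuming scikit-learn's `GaussianMixture` and toy random vectors standing in for flattened images, neither of which is the lecture's actual setup — fitting the mixture and sampling new points looks like this:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy stand-in for an image dataset: 300 flattened 8x8 "images" (64 pixels each).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 64))

# Fit a Gaussian mixture model to approximate the data distribution P(X).
gmm = GaussianMixture(n_components=5, covariance_type="diag", random_state=0)
gmm.fit(X)

# Sample new "images" from the fitted mixture.
samples, components = gmm.sample(n_samples=4)
print(samples.shape)  # (4, 64)
```

On real natural images, a handful of Gaussian components cannot capture the sharp, highly structured dependencies between pixels, which is why the samples come out blurry compared to the models introduced later this week.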
For example, in this illustration,
we generated some fake celebrity faces by using a generative model,
and you can do these kinds of things if you
have a probability distribution of your training data,
because then you can sample new images from this distribution.
And also, if you have such a model P(X),
you can build a kind of "Photoshop of the future" application, like the one here.
With a few brush strokes,
you can change a few pixels in your image,
and the program will try to recolor everything else,
so the picture stays realistic.
So, it will change the color of the hair, and so on.
Video: Variational Autoencoders - Part 2 (Modeling a Distribution of Images), from the Machine Learning TV channel