Kalman Filter - Part 2
Course Link: https://www.coursera.org/learn/state-estimation-localization-self-driving-cars
Let's consider our Kalman Filter from the previous lesson and use it to estimate the position of our autonomous car. If we have some way of knowing the true position of the vehicle, for example, an oracle tells us, we can then use this to record the position error of our filter at each time step k. Since we're dealing with random noise, doing this once is not enough. We'll need to repeat this same process over and over and record our position error at each time step. Once we've collected these errors, if they average to zero at a particular time step k, then we say the Kalman Filter estimate is unbiased at that time step. Graphically, this is what the situation may look like. Say that at a particular time step, we know that the true position is the following...
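The repeated-trials check described above can be sketched in a few lines of NumPy. Note this is a minimal illustration, not the course's code: the scalar motion model, the noise variances Q and R, and the trial counts below are all assumptions chosen for the sketch. Each trial simulates a true trajectory, runs a one-dimensional Kalman filter against noisy position measurements, and records the estimation error at every time step; averaging the errors across trials at a fixed step k should then hover near zero if the estimator is unbiased.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 1D model (assumed for illustration):
#   x_{k+1} = x_k + u*dt + w_k,   z_k = x_k + v_k
dt, u = 0.1, 1.0           # time step and constant velocity input (assumed)
Q, R = 0.01, 0.25          # process / measurement noise variances (assumed)
n_steps, n_trials = 50, 2000

errors = np.zeros((n_trials, n_steps))
for t in range(n_trials):
    x_true = 0.0
    x_hat, P = 0.0, 1.0    # initial estimate and its variance
    for k in range(n_steps):
        # Simulate the true motion and a noisy position measurement.
        x_true += u * dt + rng.normal(0.0, np.sqrt(Q))
        z = x_true + rng.normal(0.0, np.sqrt(R))
        # Prediction step.
        x_hat += u * dt
        P += Q
        # Correction step with the Kalman gain K.
        K = P / (P + R)
        x_hat += K * (z - x_hat)
        P *= (1.0 - K)
        errors[t, k] = x_hat - x_true

# Average error at each time step k; near zero indicates an unbiased estimate.
mean_err = errors.mean(axis=0)
print(np.abs(mean_err).max())
```

With a few thousand trials, the worst-case average error over all time steps comes out small relative to the measurement noise, which is exactly the unbiasedness property the lesson is describing.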
Video "Kalman Filter - Part 2" from the Machine Learning TV channel