
Reinforcement Learning in Continuous Action Spaces | DDPG Tutorial (Pytorch)

In this tutorial we will code a deep deterministic policy gradient (DDPG) agent in Pytorch to beat the continuous lunar lander environment.

DDPG combines the best of Deep Q Learning and Actor Critic Methods into an algorithm that can solve environments with continuous action spaces. We will have an actor network that learns the (deterministic) policy, coupled with a critic network that learns the action-value function. We will make use of a replay buffer to maximize sample efficiency, as well as target networks to assist in convergence and stability.
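As a rough sketch of those pieces (not the exact networks built in the video; the class names are illustrative and the 400/300 layer sizes and tau value follow the original DDPG paper), the actor, critic, and target-network soft update look something like this in Pytorch:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Actor(nn.Module):
    """Deterministic policy: maps a state to a continuous action."""
    def __init__(self, state_dim, action_dim, max_action=1.0):
        super().__init__()
        self.fc1 = nn.Linear(state_dim, 400)
        self.fc2 = nn.Linear(400, 300)
        self.mu = nn.Linear(300, action_dim)
        self.max_action = max_action

    def forward(self, state):
        x = F.relu(self.fc1(state))
        x = F.relu(self.fc2(x))
        # tanh bounds the output, then we scale to the environment's action range
        return self.max_action * torch.tanh(self.mu(x))

class Critic(nn.Module):
    """Action-value function Q(s, a): scores a state-action pair."""
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.fc1 = nn.Linear(state_dim + action_dim, 400)
        self.fc2 = nn.Linear(400, 300)
        self.q = nn.Linear(300, 1)

    def forward(self, state, action):
        x = F.relu(self.fc1(torch.cat([state, action], dim=1)))
        x = F.relu(self.fc2(x))
        return self.q(x)

def soft_update(target, online, tau=0.001):
    """Polyak-average the online weights into the target network each step."""
    for t_param, o_param in zip(target.parameters(), online.parameters()):
        t_param.data.copy_(tau * o_param.data + (1.0 - tau) * t_param.data)
```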

To deal with the explore-exploit dilemma, we will introduce noise into the agent's action choice function. This is Ornstein-Uhlenbeck noise, a temporally correlated noise process derived from Brownian motion.
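A minimal sketch of that noise process is below (the class name is an assumption, and the default theta and sigma roughly follow the values used in the DDPG paper):

```python
import numpy as np

class OUActionNoise:
    """Ornstein-Uhlenbeck process: temporally correlated noise added to actions."""
    def __init__(self, mu, sigma=0.2, theta=0.15, dt=1e-2, x0=None):
        self.mu = mu          # long-run mean, usually zeros, one per action dimension
        self.sigma = sigma    # scale of the random fluctuations
        self.theta = theta    # how strongly the process is pulled back toward mu
        self.dt = dt
        self.x0 = x0
        self.reset()

    def __call__(self):
        # dx = theta * (mu - x) * dt + sigma * sqrt(dt) * N(0, 1)
        x = (self.x_prev
             + self.theta * (self.mu - self.x_prev) * self.dt
             + self.sigma * np.sqrt(self.dt) * np.random.normal(size=self.mu.shape))
        self.x_prev = x   # remember the last sample, so successive calls are correlated
        return x

    def reset(self):
        self.x_prev = self.x0 if self.x0 is not None else np.zeros_like(self.mu)
```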

Keep in mind that the performance you see is from an agent that is still in training mode, i.e. it still has some noise in its actions. A fully trained agent in evaluation mode will perform even better. You can fix this in the code by adding a parameter to the choose action function and omitting the noise when that parameter indicates you are in evaluation mode, as sketched below.
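One way that flag might look, as a method on the agent class (a sketch against the hypothetical Actor and OUActionNoise classes above, not the exact method from the video; self.actor, self.noise, and self.max_action are assumed attributes):

```python
def choose_action(self, observation, evaluate=False):
    """Act greedily when evaluate=True, otherwise add exploration noise."""
    self.actor.eval()   # put layers like batch norm into inference mode
    state = torch.tensor(observation, dtype=torch.float32).unsqueeze(0)
    with torch.no_grad():
        mu = self.actor(state).squeeze(0)
    self.actor.train()
    if not evaluate:
        # only perturb the deterministic action while training
        mu = mu + torch.tensor(self.noise(), dtype=torch.float32)
    return mu.clamp(-self.max_action, self.max_action).numpy()
```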

#DeepDeterministicPolicyGradients #DDPG #ContinuousLunarLander

Learn how to turn deep reinforcement learning papers into code:

Get instant access to all my courses, including the new Hindsight Experience Replay course, with my subscription service. $24.99 a month gives you instant access to explanations and implementations of a dozen deep reinforcement learning algorithms. Not only will you learn everything from Deep Q Learning to Proximal Policy Optimization, but you will learn a repeatable system for learning new algorithms.

Discounts available for Udemy students (enrolled longer than 30 days). Just send an email to sales@neuralnet.ai

https://www.neuralnet.ai/courses

Or, pickup my Udemy courses here:

Deep Q Learning:
https://www.udemy.com/course/deep-q-learning-from-paper-to-code/?couponCode=DQN-FEB-22

Actor Critic Methods:
https://www.udemy.com/course/actor-critic-methods-from-paper-to-code-with-pytorch/?couponCode=AC-FEB-22

Curiosity Driven Deep Reinforcement Learning:
https://www.udemy.com/course/curiosity-driven-deep-reinforcement-learning/?couponCode=ICM-FEB-22

Natural Language Processing from First Principles:
https://www.udemy.com/course/natural-language-processing-from-first-principles/?couponCode=NLP1-FEB-22

Reinforcement Learning Fundamentals:
https://www.manning.com/livevideo/reinforcement-learning-in-motion

Here are some books / courses I recommend (affiliate links):
Grokking Deep Learning in Motion: https://bit.ly/3fXHy8W
Grokking Deep Learning: https://bit.ly/3yJ14gT
Grokking Deep Reinforcement Learning: https://bit.ly/2VNAXql

Come hang out on Discord here:
https://discord.gg/Zr4VCdv

Need personalized tutoring? Help on a programming project? Shoot me an email! phil@neuralnet.ai

Website: https://www.neuralnet.ai
Github: https://github.com/philtabor
Twitter: https://twitter.com/MLWithPhil

Channel: Machine Learning with Phil
Published: June 28, 2019
Duration: 00:58:10