Fast reinforcement learning with generalized policy updates (Paper Explained)
#ai #research #reinforcementlearning
Reinforcement Learning is a powerful tool, but it is also incredibly data-hungry: given a new task, an RL agent typically has to learn a good policy entirely from scratch. This paper proposes a framework that lets an agent carry knowledge from previously solved tasks over to new ones, even deriving zero-shot policies that perform well on completely new reward functions.
OUTLINE:
0:00 - Intro & Overview
1:25 - Problem Statement
6:25 - Q-Learning Primer
11:40 - Multiple Rewards, Multiple Policies
14:25 - Example Environment
17:35 - Tasks as Linear Mixtures of Features
24:15 - Successor Features
28:00 - Zero-Shot Policy for New Tasks
35:30 - Results on New Task W3
37:00 - Inferring the Task via Regression
39:20 - The Influence of the Given Policies
48:40 - Learning the Feature Functions
50:30 - More Complicated Tasks
51:40 - Life-Long Learning, Comments & Conclusion
Paper: https://www.pnas.org/content/early/2020/08/13/1907370117
My Video on Successor Features: https://youtu.be/KXEEqcwXn8w
Abstract:
The combination of reinforcement learning with deep learning is a promising approach to tackle important sequential decision-making problems that are currently intractable. One obstacle to overcome is the amount of data needed by learning systems of this type. In this article, we propose to address this issue through a divide-and-conquer approach. We argue that complex decision problems can be naturally decomposed into multiple tasks that unfold in sequence or in parallel. By associating each task with a reward function, this problem decomposition can be seamlessly accommodated within the standard reinforcement-learning formalism. The specific way we do so is through a generalization of two fundamental operations in reinforcement learning: policy improvement and policy evaluation. The generalized versions of these operations allow one to leverage the solution of some tasks to speed up the solution of others. If the reward function of a task can be well approximated as a linear combination of the reward functions of tasks previously solved, we can reduce a reinforcement-learning problem to a simpler linear regression. When this is not the case, the agent can still exploit the task solutions by using them to interact with and learn about the environment. Both strategies considerably reduce the amount of data needed to solve a reinforcement-learning problem.
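To make the abstract's core idea concrete, here is a minimal NumPy sketch of generalized policy evaluation (GPE) and generalized policy improvement (GPI) with successor features in a tiny tabular setting. All names and sizes (n_states, psi, w_new, etc.) are illustrative assumptions, not taken from the paper's code; the successor-feature tables are random stand-ins for what would normally be learned.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, n_features, n_policies = 5, 3, 4, 2

# Successor features psi_i(s, a): the expected discounted sum of the
# feature vector phi(s, a) under known policy pi_i (random stand-in here).
psi = rng.random((n_policies, n_states, n_actions, n_features))

# A task is a weight vector w; its reward is r(s, a) = phi(s, a) . w.
w_new = rng.random(n_features)

# GPE: evaluate every known policy on the new task in one dot product:
# Q^{pi_i}_w(s, a) = psi_i(s, a) . w  -> shape (n_policies, n_states, n_actions)
q_values = psi @ w_new

# GPI: in each state, act greedily w.r.t. the max over all evaluated
# policies -- a zero-shot policy for the new task, no further learning.
def gpi_action(state):
    return int(np.argmax(q_values[:, state, :].max(axis=0)))

# If w is unknown, infer it from observed rewards by linear regression:
# stack observed feature vectors phi and solve least squares for w.
phi_samples = rng.random((50, n_features))   # observed feature vectors
rewards = phi_samples @ w_new                # observed (noiseless) rewards
w_est, *_ = np.linalg.lstsq(phi_samples, rewards, rcond=None)
```

With noiseless rewards and more samples than features, the regression recovers w exactly; this is the sense in which solving a new task reduces to linear regression when its reward lies in the span of known features.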
Authors:
André Barreto, Shaobo Hou, Diana Borsa, David Silver, and Doina Precup
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher
Parler: https://parler.com/profile/YannicKilcher
LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/
If you want to support me, the best thing to do is to share out the content :)
If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannickilcher
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Video "Fast reinforcement learning with generalized policy updates (Paper Explained)" from the Yannic Kilcher channel