Go-Explore: a New Approach for Hard-Exploration Problems
This algorithm solves the hardest exploration games in the Atari suite and makes it look easy! By archiving visited states and deterministically returning to promising ones before exploring further (a strategy reminiscent of classic graph search such as Dijkstra's algorithm), it outperforms everything else by orders of magnitude, all built on simple random exploration.
https://arxiv.org/abs/1901.10995
https://eng.uber.com/go-explore/
https://github.com/uber-research/go-explore
Abstract:
A grand challenge in reinforcement learning is intelligent exploration, especially when rewards are sparse or deceptive. Two Atari games serve as benchmarks for such hard-exploration domains: Montezuma's Revenge and Pitfall. On both games, current RL algorithms perform poorly, even those with intrinsic motivation, which is the dominant method to improve performance on hard-exploration domains. To address this shortfall, we introduce a new algorithm called Go-Explore. It exploits the following principles: (1) remember previously visited states, (2) first return to a promising state (without exploration), then explore from it, and (3) solve simulated environments through any available means (including by introducing determinism), then robustify via imitation learning. The combined effect of these principles is a dramatic performance improvement on hard-exploration problems. On Montezuma's Revenge, Go-Explore scores a mean of over 43k points, almost 4 times the previous state of the art. Go-Explore can also harness human-provided domain knowledge and, when augmented with it, scores a mean of over 650k points on Montezuma's Revenge. Its max performance of nearly 18 million surpasses the human world record, meeting even the strictest definition of "superhuman" performance. On Pitfall, Go-Explore with domain knowledge is the first algorithm to score above zero. Its mean score of almost 60k points exceeds expert human performance. Because Go-Explore produces high-performing demonstrations automatically and cheaply, it also outperforms imitation learning work where humans provide solution demonstrations. Go-Explore opens up many new research directions into improving it and weaving its insights into current RL algorithms. It may also enable progress on previously unsolvable hard-exploration problems in many domains, especially those that harness a simulator during training (e.g. robotics).
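To make the three principles concrete, here is a minimal Python sketch of the Phase 1 exploration loop. It is an illustration under assumptions, not the authors' implementation: the emulator interface (env.snapshot, env.restore, env.step, env.sample_action) and the downscale cell mapping are hypothetical stand-ins, and cell selection is simplified to uniform sampling where the paper uses count-based heuristics.

import random

def downscale(frame):
    # Map a raw frame to a coarse, hashable "cell" so that visually similar
    # states collapse into one archive entry (illustrative quantization only).
    return tuple(px // 32 for px in frame[::8])

def phase1(env, n_iterations=10_000, explore_steps=100):
    # Principle 1: remember previously visited states.
    # archive: cell -> (emulator snapshot, score, action trajectory)
    env.reset()
    score, traj = 0.0, []
    archive = {downscale(env.frame()): (env.snapshot(), score, traj)}

    for _ in range(n_iterations):
        # Pick a cell to return to (uniform here; the paper weights cells
        # by visit counts and other heuristics).
        cell = random.choice(list(archive))
        snapshot, score, traj = archive[cell]

        # Principle 2, "Go": return without exploration by restoring the
        # emulator snapshot (this exploits determinism in the simulator).
        env.restore(snapshot)

        # Principle 2, "Explore": take random actions from that state.
        for _ in range(explore_steps):
            action = env.sample_action()
            frame, reward, done = env.step(action)
            score, traj = score + reward, traj + [action]
            new_cell = downscale(frame)
            # Keep a cell if it is new or was reached with a better score.
            if new_cell not in archive or score > archive[new_cell][1]:
                archive[new_cell] = (env.snapshot(), score, traj)
            if done:
                break
    return archive  # highest-scoring trajectories feed Phase 2

Principle 3 corresponds to Phase 2, where the best archived trajectories are robustified into a policy that works without determinism, via imitation learning.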
Authors: Adrien Ecoffet, Joost Huizinga, Joel Lehman, Kenneth O. Stanley, Jeff Clune
Video "Go-Explore: a New Approach for Hard-Exploration Problems" from the Yannic Kilcher channel