
Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)

#ai #neuroscience #rl

Reinforcement learning is a powerful tool, but it lacks biological plausibility because it learns a fixed policy network. Animals instead use neuroplasticity to reconfigure their policies on the fly and quickly adapt to new situations. This paper uses Hebbian learning, a biologically inspired technique, to let agents adapt random networks into high-performing solutions as an episode progresses, yielding agents that can reconfigure themselves in response to new observations.
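
For intuition, here is a minimal NumPy sketch of the kind of per-synapse Hebbian update the paper describes, where each connection has its own coefficients A, B, C, D and learning rate eta (Δw_ij = η_ij · (A_ij·o_i·o_j + B_ij·o_i + C_ij·o_j + D_ij)). The layer sizes, initial scales, and the toy rollout loop are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def hebbian_update(w, pre, post, A, B, C, D, eta):
    """One Hebbian plasticity step for a single fully connected layer.

    w:    (n_pre, n_post) weight matrix, initialised randomly
    pre:  (n_pre,)  presynaptic activations from the forward pass
    post: (n_post,) postsynaptic activations from the forward pass
    A..D, eta: (n_pre, n_post) per-synapse plasticity coefficients
               (these are what gets optimised, not the weights)
    """
    corr = np.outer(pre, post)       # correlation term o_i * o_j
    pre_term = pre[:, None]          # purely presynaptic term o_i
    post_term = post[None, :]        # purely postsynaptic term o_j
    dw = eta * (A * corr + B * pre_term + C * post_term + D)
    return w + dw

# toy rollout: weights start random and are rewritten at every timestep
rng = np.random.default_rng(0)
n_in, n_out = 8, 4
w = rng.standard_normal((n_in, n_out)) * 0.1
coeffs = {k: rng.standard_normal((n_in, n_out)) * 0.01 for k in "ABCD"}
eta = np.full((n_in, n_out), 0.05)

obs = rng.standard_normal(n_in)
for _ in range(10):                          # steps within one episode
    act = np.tanh(obs @ w)                   # forward pass
    w = hebbian_update(w, obs, act, coeffs["A"], coeffs["B"],
                       coeffs["C"], coeffs["D"], eta)
    obs = rng.standard_normal(n_in)          # stand-in for the next observation
```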

OUTLINE:
0:00 - Intro & Overview
2:30 - Reinforcement Learning vs Hebbian Plasticity
9:00 - Episodes in Hebbian Learning
10:00 - Hebbian Plasticity Rules
18:10 - Quadruped Experiment Results
21:20 - Evolutionary Learning of Hebbian Plasticity
29:10 - More Experimental Results
34:50 - Conclusions
35:30 - Broader Impact Statement

Videos: https://twitter.com/risi1979/status/1280544779630186499
Paper: https://arxiv.org/abs/2007.02686

Abstract:
Lifelong learning and adaptability are two defining aspects of biological agents. Modern reinforcement learning (RL) approaches have shown significant progress in solving complex tasks, however once training is concluded, the found solutions are typically static and incapable of adapting to new information or perturbations. While it is still not completely understood how biological brains learn and adapt so efficiently from experience, it is believed that synaptic plasticity plays a prominent role in this process. Inspired by this biological mechanism, we propose a search method that, instead of optimizing the weight parameters of neural networks directly, only searches for synapse-specific Hebbian learning rules that allow the network to continuously self-organize its weights during the lifetime of the agent. We demonstrate our approach on several reinforcement learning tasks with different sensory modalities and more than 450K trainable plasticity parameters. We find that starting from completely random weights, the discovered Hebbian rules enable an agent to navigate a dynamical 2D-pixel environment; likewise they allow a simulated 3D quadrupedal robot to learn how to walk while adapting to different morphological damage in the absence of any explicit reward or error signal.
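
The key point of the abstract is that the outer loop searches over the plasticity coefficients rather than over the weights themselves. Below is a minimal sketch of such an evolutionary outer loop, assuming a simple isotropic-Gaussian evolution strategy; the evaluate stub, parameter count, population size, noise scale, and step size are illustrative assumptions, not the authors' values:

```python
import numpy as np

rng = np.random.default_rng(0)

def evaluate(plasticity_params, seed):
    """Stand-in fitness function. In the paper's setup this would run one
    episode in which the agent's weights start random and are rewritten by
    the Hebbian rule parameterised by `plasticity_params`, returning the
    episode return; a random score keeps this sketch self-contained."""
    return float(np.random.default_rng(seed).standard_normal())

# Evolution-strategies outer loop over the per-synapse coefficients
# (A, B, C, D, eta flattened into one vector). The paper reports more
# than 450K plasticity parameters; a smaller vector is used here.
n_params = 10_000
theta = rng.standard_normal(n_params) * 0.01
sigma, alpha, pop = 0.1, 0.02, 64            # illustrative ES hyperparameters

for gen in range(10):
    noise = rng.standard_normal((pop, n_params))
    fitness = np.array([evaluate(theta + sigma * eps, seed=gen * pop + i)
                        for i, eps in enumerate(noise)])
    # rank-normalise fitness and step along the estimated gradient
    ranks = fitness.argsort().argsort() / (pop - 1) - 0.5
    theta += alpha / (pop * sigma) * (noise.T @ ranks)
```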

Authors: Elias Najarro, Sebastian Risi

Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher
Parler: https://parler.com/profile/YannicKilcher
LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/

If you want to support me, the best thing to do is to share out the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannickilcher
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
