The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
Stunning evidence for the hypothesis that neural networks work so well because their random initialization almost certainly contains a nearly optimal sub-network that is responsible for most of the final performance.
https://arxiv.org/abs/1803.03635
Abstract:
Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy. However, contemporary experience is that the sparse architectures produced by pruning are difficult to train from the start, which would similarly improve training performance.
We find that a standard pruning technique naturally uncovers subnetworks whose initializations made them capable of training effectively. Based on these results, we articulate the "lottery ticket hypothesis:" dense, randomly-initialized, feed-forward networks contain subnetworks ("winning tickets") that - when trained in isolation - reach test accuracy comparable to the original network in a similar number of iterations. The winning tickets we find have won the initialization lottery: their connections have initial weights that make training particularly effective.
We present an algorithm to identify winning tickets and a series of experiments that support the lottery ticket hypothesis and the importance of these fortuitous initializations. We consistently find winning tickets that are less than 10-20% of the size of several fully-connected and convolutional feed-forward architectures for MNIST and CIFAR10. Above this size, the winning tickets that we find learn faster than the original network and reach higher test accuracy.
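The "algorithm to identify winning tickets" referred to in the abstract is iterative magnitude pruning with rewinding: train the dense network, prune the smallest-magnitude weights, reset the surviving weights to their original initial values, and repeat. Below is a minimal PyTorch sketch of that loop, not the authors' exact code; the pruning fraction, the number of rounds, and the train_fn helper (assumed to train the masked network to convergence) are illustrative assumptions.

import copy
import torch

PRUNE_FRACTION = 0.2   # fraction of the remaining weights removed each round (assumed value)
N_ROUNDS = 5           # number of prune/rewind rounds (assumed value)

def find_winning_ticket(model, train_fn):
    # Save theta_0, the random initialization we rewind to after every round.
    init_state = copy.deepcopy(model.state_dict())
    # One binary mask per weight matrix; biases and other 1-D parameters stay unpruned here.
    masks = {name: torch.ones_like(p) for name, p in model.named_parameters() if p.dim() > 1}

    for _ in range(N_ROUNDS):
        train_fn(model, masks)  # placeholder: train the masked network to convergence
        for name, param in model.named_parameters():
            if name not in masks:
                continue
            # Magnitudes of the weights that are still alive under the current mask.
            surviving = param.detach().abs()[masks[name].bool()]
            k = int(PRUNE_FRACTION * surviving.numel())
            if k == 0:
                continue
            # Remove the lowest-magnitude PRUNE_FRACTION of the surviving weights.
            threshold = surviving.kthvalue(k).values
            masks[name] = masks[name] * (param.detach().abs() > threshold).float()
        # Rewind all weights to their initial values, then zero out the pruned ones.
        model.load_state_dict(init_state)
        with torch.no_grad():
            for name, param in model.named_parameters():
                if name in masks:
                    param.mul_(masks[name])
    return model, masks

The rewind step is the part the paper emphasizes: the same sparse structure trained from a fresh random initialization no longer matches the original network, so a winning ticket is the combination of structure and initial weights.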
Authors: Jonathan Frankle, Michael Carbin
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher
Video "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks" from the Yannic Kilcher channel
Other videos from the channel:
Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask (Paper Explained)
Backpropagation and the brain
Continual Learning and Catastrophic Forgetting
EI Seminar - Michael Carbin - The Lottery Ticket Hypothesis
How to Use Math to Get Rich in the Lottery* - Jordan Ellenberg (Wisconsin–Madison)
J. Frankle & M. Carbin: The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
MuZero: Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model
The Lottery Ticket Hypothesis Explained!
Self-training with Noisy Student improves ImageNet classification (Paper Explained)
Neural Network Pruning for Compression & Understanding | Facebook AI Research | Dr. Michela Paganini
The Lottery Ticket Hypothesis with Jonathan Frankle
The Hardware Lottery (Paper Explained)
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
MIT 6.S191 (2020): Reinforcement Learning
Sara Hooker - The Hardware Lottery, Sparsity and Fairness
Supervised Contrastive Learning
Michael Elad: "Sparse Modeling in Image Processing and Deep Learning"
Tutorial 9- Drop Out Layers in Multi Neural Network
Deep Learning State of the Art (2020)
Gauge Equivariant Convolutional Networks and the Icosahedral CNN