Group Normalization (Paper Explained)
The dirty little secret of Batch Normalization is its intrinsic dependence on the training batch size. Group Normalization attempts to achieve the benefits of normalization without batch statistics and, most importantly, without sacrificing performance compared to Batch Normalization.
https://arxiv.org/abs/1803.08494
Abstract:
Batch Normalization (BN) is a milestone technique in the development of deep learning, enabling various networks to train. However, normalizing along the batch dimension introduces problems --- BN's error increases rapidly when the batch size becomes smaller, caused by inaccurate batch statistics estimation. This limits BN's usage for training larger models and transferring features to computer vision tasks including detection, segmentation, and video, which require small batches constrained by memory consumption. In this paper, we present Group Normalization (GN) as a simple alternative to BN. GN divides the channels into groups and computes within each group the mean and variance for normalization. GN's computation is independent of batch sizes, and its accuracy is stable in a wide range of batch sizes. On ResNet-50 trained in ImageNet, GN has 10.6% lower error than its BN counterpart when using a batch size of 2; when using typical batch sizes, GN is comparably good with BN and outperforms other normalization variants. Moreover, GN can be naturally transferred from pre-training to fine-tuning. GN can outperform its BN-based counterparts for object detection and segmentation in COCO, and for video classification in Kinetics, showing that GN can effectively replace the powerful BN in a variety of tasks. GN can be easily implemented by a few lines of code in modern libraries.
Authors: Yuxin Wu, Kaiming He
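The abstract notes that GN can be implemented in a few lines of code in modern libraries. Below is a minimal NumPy sketch of the idea, assuming NCHW feature maps and a per-channel affine transform; the function name and shapes are illustrative, not taken from the paper's reference code.

```python
import numpy as np

def group_norm(x, num_groups, gamma, beta, eps=1e-5):
    """Group Normalization sketch for NCHW feature maps.

    x:     array of shape (N, C, H, W)
    gamma: per-channel scale, shape (C,)
    beta:  per-channel shift, shape (C,)
    """
    n, c, h, w = x.shape
    assert c % num_groups == 0, "channels must divide evenly into groups"
    # Split channels into groups; statistics are computed per sample and
    # per group, so no batch dimension is involved.
    x = x.reshape(n, num_groups, c // num_groups, h, w)
    mean = x.mean(axis=(2, 3, 4), keepdims=True)
    var = x.var(axis=(2, 3, 4), keepdims=True)
    x = (x - mean) / np.sqrt(var + eps)
    x = x.reshape(n, c, h, w)
    # Learnable per-channel scale and shift, as in BN/LN/IN.
    return x * gamma.reshape(1, c, 1, 1) + beta.reshape(1, c, 1, 1)

# Example: batch of 2, 64 channels split into 32 groups
x = np.random.randn(2, 64, 14, 14).astype(np.float32)
y = group_norm(x, num_groups=32,
               gamma=np.ones(64, np.float32),
               beta=np.zeros(64, np.float32))
```

As the paper points out, GN interpolates between the existing per-sample schemes: with one group per channel it reduces to Instance Norm, and with a single group it reduces to Layer Norm.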
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher