PR-217: EfficientDet: Scalable and Efficient Object Detection
This is the 217th paper review from the TensorFlow Korea paper-reading group PR12.
This paper is EfficientDet from Google Brain. As a follow-up to EfficientNet, it proposes an object detection method that aims for both accuracy and efficiency. To this end, it introduces a weighted bidirectional feature pyramid network (BiFPN) and a compound scaling method for detection similar to the one used in EfficientNet. Please see the video for details.
Paper link: https://arxiv.org/abs/1911.09070
Slides: https://www.slideshare.net/JinwonLee9/pr217-efficientdet-scalable-and-efficient-object-detection
Video PR-217: EfficientDet: Scalable and Efficient Object Detection from the channel JinWon Lee
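The BiFPN's key ingredient is its weighted feature fusion: instead of summing input feature maps equally, each input gets a learnable non-negative weight, normalized by the sum of all weights ("fast normalized fusion", O = Σᵢ wᵢ/(ε + Σⱼ wⱼ) · Iᵢ). Below is a minimal NumPy sketch of that fusion rule; the variable names and example values are illustrative, not from the paper's official implementation.

```python
import numpy as np

def fast_normalized_fusion(features, weights, eps=1e-4):
    """Fast normalized fusion as described in the EfficientDet paper:
    O = sum_i (w_i / (eps + sum_j w_j)) * I_i, with w_i kept non-negative.
    `features` is a list of equally shaped arrays; `weights` would be
    learnable scalars in a real network."""
    w = np.maximum(np.asarray(weights, dtype=np.float64), 0.0)  # ReLU keeps weights >= 0
    norm = w / (eps + w.sum())                                  # normalize without a softmax
    return sum(wi * f for wi, f in zip(norm, features))

# Illustrative example: fuse two same-resolution feature maps.
p_td = np.ones((8, 8, 64))        # hypothetical top-down pathway feature
p_in = np.full((8, 8, 64), 3.0)   # hypothetical same-level input feature
fused = fast_normalized_fusion([p_td, p_in], weights=[1.0, 1.0])
```

With equal weights the result is close to a plain average of the inputs; during training the weights would be learned per fusion node, letting the network favor more informative resolutions.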
Other videos from the channel:
- PR-366: A ConvNet for the 2020s
- PR-095: Modularity Matters: Learning Invariant Relational Reasoning Tasks
- PR-406: Large Models are Parsimonious Learners: Activation Sparsity in Trained Transformers
- PR-377: Scaling Up Your Kernels to 31x31: Revisiting Large Kernel Design in CNNs
- PR-197: One ticket to win them all: generalizing lottery ticket initialization
- PR-144: SqueezeNext: Hardware-Aware Neural Network Design
- PR-085: In-Datacenter Performance Analysis of a Tensor Processing Unit
- PR-304: Pretrained Transformers As Universal Computation Engines
- PR-330: How To Train Your ViT? Data, Augmentation, and Regularization in Vision Transformers
- PR-243: Designing Network Design Spaces
- PR-317: MLP-Mixer: An all-MLP Architecture for Vision
- PR-297: Training Data-efficient Image Transformers & Distillation through Attention (DeiT)
- PR-231: A Simple Framework for Contrastive Learning of Visual Representations
- PR-207: YOLOv3: An Incremental Improvement
- PR-344: A Battle of Network Structures: An Empirical Study of CNN, Transformer, and MLP
- PR-155: Exploring Randomly Wired Neural Networks for Image Recognition
- PR-183: MixNet: Mixed Depthwise Convolutional Kernels
- PR-284: End-to-End Object Detection with Transformers (DETR)
- PR-270: PP-YOLO: An Effective and Efficient Implementation of Object Detector
- PR-044: MobileNet
- PR-355: Masked Autoencoders Are Scalable Vision Learners