
Unsupervised Pixel-Level Domain Adaptation With Generative Adversarial Networks

Collecting well-annotated image datasets to train modern machine learning algorithms is prohibitively expensive for many tasks. One appealing alternative is rendering synthetic data where ground-truth annotations are generated automatically. Unfortunately, models trained purely on rendered images fail to generalize to real images. To address this shortcoming, prior work introduced unsupervised domain adaptation algorithms that have tried to either map representations between the two domains, or learn to extract features that are domain-invariant. In this work, we approach the problem in a new light by learning in an unsupervised manner a transformation in the pixel space from one domain to the other. Our generative adversarial network (GAN)-based method adapts source-domain images to appear as if drawn from the target domain. Our approach not only produces plausible samples, but also outperforms the state-of-the-art on a number of unsupervised domain adaptation scenarios by large margins. Finally, we demonstrate that the adaptation process generalizes to object classes unseen during training.
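The abstract describes the approach only at a high level. Below is a minimal, hedged sketch of what GAN-based pixel-level adaptation training typically looks like in PyTorch: a generator maps a source image plus a noise vector to a target-styled image, a discriminator separates adapted images from real target images, and a task network is trained on the adapted images using the source labels. The module architectures, the noise_dim parameter, and the task network T are illustrative assumptions, not the paper's actual PixelDA implementation.

```python
# Illustrative sketch of GAN-based pixel-level domain adaptation (not the
# paper's exact architecture or hyperparameters).
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a source-domain image (plus noise) to a target-styled image."""
    def __init__(self, channels=3, noise_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels + noise_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x, z):
        # Broadcast the noise vector to a spatial map and concatenate it.
        z_map = z[:, :, None, None].expand(-1, -1, x.size(2), x.size(3))
        return self.net(torch.cat([x, z_map], dim=1))

class Discriminator(nn.Module):
    """Scores whether an image looks like it came from the target domain."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1),
        )

    def forward(self, x):
        return self.net(x)

def train_step(G, D, T, x_src, y_src, x_tgt, opt_g, opt_d, noise_dim=8):
    """One adversarial update: D learns real-vs-adapted; G fools D while the
    task classifier T stays accurate on adapted source images."""
    bce = nn.BCEWithLogitsLoss()
    ce = nn.CrossEntropyLoss()
    z = torch.randn(x_src.size(0), noise_dim)
    x_fake = G(x_src, z)

    # Discriminator: real target images vs. adapted (fake) source images.
    opt_d.zero_grad()
    d_loss = bce(D(x_tgt), torch.ones(x_tgt.size(0), 1)) + \
             bce(D(x_fake.detach()), torch.zeros(x_src.size(0), 1))
    d_loss.backward()
    opt_d.step()

    # Generator + task model: fool D and keep source labels predictable
    # from the adapted images.
    opt_g.zero_grad()
    g_loss = bce(D(x_fake), torch.ones(x_src.size(0), 1)) + ce(T(x_fake), y_src)
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```

In this sketch, opt_g is assumed to optimize both the generator's and the task network's parameters (e.g. torch.optim.Adam(list(G.parameters()) + list(T.parameters()))), so the adapted images are pushed to be both target-like and label-preserving, which is the core idea the abstract describes.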

Video: Unsupervised Pixel-Level Domain Adaptation With Generative Adversarial Networks, from the ComputerVisionFoundation Videos channel.

Video information
August 11, 2017, 3:28:15
Duration: 00:11:24
Other videos from this channel
Unsupervised Representation Learning for Gaze Estimation
Syn2Real Transfer Learning for Image Deraining Using Gaussian Processes
Learning to Dress 3D People in Generative Clothing
Learning Physics-Guided Face Relighting Under Directional Light
WACV20: Keynote Talk: Maja Pantic, Imperial College London and SAIC
Orthogonal Convolutional Neural Networks
232 - Improving Video Captioning with Temporal Composition of a Visual-Syntactic Embedding
1276 - ClassMix: Segmentation-Based Data Augmentation for Semi-Supervised Learning
Match or No Match: Keypoint Filtering Based on Matching Probability
HandVoxNet: Deep Voxel-Based Network for 3D Hand Shape and Pose Estimation From a Single Depth Map
DSGN: Deep Stereo Geometry Network for 3D Object Detection
DeepLPF: Deep Local Parametric Filters for Image Enhancement
324 - Weakly Supervised Deep Reinforcement Learning for Video Summarization With Semantically Meani
Inverse Rendering for Complex Indoor Scenes: Shape, Spatially-Varying Lighting and SVBRDF From a...
368 - DB-GAN: Boosting Object Recognition Under Strong Lighting Conditions
Neural Pose Transfer by Spatially Adaptive Instance Normalization
ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks
High-Frequency Component Helps Explain the Generalization of Convolutional Neural Networks
BlendedMVS: A Large-Scale Dataset for Generalized Multi-View Stereo Networks
1257 - Multimodal Prototypical Networks for Few-shot Learning
1369 - CenterFusion: Center-based Radar and Camera Fusion for 3D Object Detection