SinGAN: Learning a Generative Model from a Single Natural Image
With just a single image as input, the algorithm learns a generative model that matches the input image's patch distribution across multiple scales and resolutions. This makes it possible to sample highly realistic variations of the original image and enables a range of other image manipulation tasks.
Abstract:
We introduce SinGAN, an unconditional generative model that can be learned from a single natural image. Our model is trained to capture the internal distribution of patches within the image, and is then able to generate high quality, diverse samples that carry the same visual content as the image. SinGAN contains a pyramid of fully convolutional GANs, each responsible for learning the patch distribution at a different scale of the image. This allows generating new samples of arbitrary size and aspect ratio, that have significant variability, yet maintain both the global structure and the fine textures of the training image. In contrast to previous single image GAN schemes, our approach is not limited to texture images, and is not conditional (i.e. it generates samples from noise). User studies confirm that the generated samples are commonly confused to be real images. We illustrate the utility of SinGAN in a wide range of image manipulation tasks.
Authors: Tamar Rott Shaham, Tali Dekel, Tomer Michaeli
https://arxiv.org/abs/1905.01164
https://github.com/tamarott/SinGAN
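To make the pyramid idea concrete, here is a minimal PyTorch-style sketch of coarse-to-fine sampling with a stack of small fully convolutional generators, one per scale. It illustrates the idea only and is not the code from the repository above; names such as ScaleGenerator and sample_pyramid, the residual formulation, the scale factor of 1.33, and the 8-scale setup are assumptions chosen for the example.

# Illustrative sketch of SinGAN-style coarse-to-fine sampling (not the official code).
# Names such as ScaleGenerator, sample_pyramid, and the 1.33 scale factor are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleGenerator(nn.Module):
    """A small fully convolutional generator used at one scale of the pyramid."""
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(channels, channels, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, noise, prev_upsampled):
        # The generator predicts a residual on top of the upsampled coarser image.
        return prev_upsampled + self.body(noise + prev_upsampled)

def sample_pyramid(generators, sizes, device="cpu"):
    """Generate an image coarse-to-fine: start from pure noise at the coarsest
    scale, then upsample and refine with fresh noise at every finer scale."""
    img = torch.zeros(1, 3, *sizes[0], device=device)
    for gen, size in zip(generators, sizes):
        img = F.interpolate(img, size=size, mode="bilinear", align_corners=False)
        noise = torch.randn_like(img)
        img = gen(noise, img)
    return img

# Example: an 8-scale pyramid growing from 25x33 up to roughly the training image size.
sizes = [(int(25 * 1.33 ** s), int(33 * 1.33 ** s)) for s in range(8)]
generators = [ScaleGenerator() for _ in sizes]
sample = sample_pyramid(generators, sizes)
print(sample.shape)

In the paper, training proceeds scale by scale: each generator is trained against a patch discriminator at its own resolution and then frozen before moving on to the next finer scale, which is what lets the model capture global structure at coarse scales and fine textures at the finest ones.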
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher
Video: SinGAN: Learning a Generative Model from a Single Natural Image, from the Yannic Kilcher channel