Jukebox: A Generative Model for Music (Paper Explained)
This generative model for music can make entire songs with remarkable quality and consistency. It can be conditioned on genre, artist, and even lyrics.
Blog: https://openai.com/blog/jukebox/
Paper: https://cdn.openai.com/papers/jukebox.pdf
Code: https://github.com/openai/jukebox/
Abstract:
We introduce Jukebox, a model that generates music with singing in the raw audio domain. We tackle the long context of raw audio using a multiscale VQ-VAE to compress it to discrete codes, and modeling those using autoregressive Transformers. We show that the combined model at scale can generate high-fidelity and diverse songs with coherence up to multiple minutes. We can condition on artist and genre to steer the musical and vocal style, and on unaligned lyrics to make the singing more controllable. We are releasing thousands of non cherry-picked samples, along with model weights and code.
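The abstract's key trick is compressing raw audio into discrete codes with a VQ-VAE before modeling them with Transformers. A minimal sketch of the vector-quantization step (hypothetical shapes and codebook size, not the actual Jukebox implementation): each encoder output vector is snapped to its nearest codebook entry, and the entry's index becomes the discrete code a Transformer can model autoregressively.

```python
import numpy as np

# Toy VQ step of a VQ-VAE (illustrative only; shapes and codebook
# size are made up, not taken from the Jukebox paper).
rng = np.random.default_rng(0)
codebook = rng.normal(size=(2048, 64))   # 2048 codes, 64-dim each
latents = rng.normal(size=(100, 64))     # encoder outputs for 100 timesteps

# Nearest-neighbor lookup: squared L2 distance to every codebook entry.
d2 = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
codes = d2.argmin(axis=1)                # discrete codes, shape (100,)
quantized = codebook[codes]              # what the decoder would consume

print(codes.shape, quantized.shape)      # (100,) (100, 64)
```

In Jukebox this compression is applied at multiple temporal scales ("multiscale VQ-VAE"), so the Transformers only have to model short sequences of codes instead of millions of raw audio samples.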
Authors: Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, Ilya Sutskever
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher
Video "Jukebox: A Generative Model for Music (Paper Explained)" from the channel Yannic Kilcher