Meta has released AudioCraft, a new open-source generative AI framework that can produce music and audio from simple text prompts. AudioCraft aims to revolutionize music and audio creation by empowering professional musicians, indie game developers, small business owners, and anyone who wants to create soundtracks or sound effects. With just a few words, AudioCraft can create a pulsating electronic track that would fit perfectly in a club, a lush atmospheric soundscape that transports you to a faraway forest, and a spine-chilling scream that would make any horror fan jump out of their chair.
AudioCraft is a collection of three models: MusicGen, AudioGen, and EnCodec. MusicGen generates music from text-based user input, such as “I want a song that sounds like the opening credits of ‘The Good, the Bad and the Ugly.’” AudioGen does the same for ambient sounds, such as “the sound of a thousand tiny footsteps on a moonlit beach.” MusicGen is trained on Meta-owned and specifically licensed music, while AudioGen is trained on public sound effects. EnCodec is a neural network-based audio compression codec that can reduce the size of audio files without compromising quality.
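EnCodec's actual codec is a learned encoder-decoder trained end to end; the details are in Meta's papers and the AudioCraft repository. As a rough illustration of the core idea behind this family of neural codecs, here is a toy residual vector quantization step in NumPy, where each stage encodes the residual left over by the previous one. The codebooks below are random placeholders, not learned weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 3 quantizer stages, each with a small random codebook.
# In a real neural codec like EnCodec, the codebooks are learned.
num_stages, codebook_size, dim = 3, 16, 8
codebooks = rng.normal(size=(num_stages, codebook_size, dim))

def rvq_encode(vec, codebooks):
    """Residual vector quantization: each stage encodes what the
    previous stages left over, so a few small codebooks together
    can describe a vector with a handful of integer codes."""
    residual = vec.copy()
    indices = []
    for cb in codebooks:
        # Pick the codeword closest to the current residual.
        dists = np.linalg.norm(cb - residual, axis=1)
        idx = int(np.argmin(dists))
        indices.append(idx)
        residual = residual - cb[idx]
    return indices

def rvq_decode(indices, codebooks):
    # Reconstruction is simply the sum of the chosen codewords.
    return sum(cb[i] for cb, i in zip(codebooks, indices))

x = rng.normal(size=dim)
codes = rvq_encode(x, codebooks)      # e.g. [7, 2, 11] — a few ints per frame
x_hat = rvq_decode(codes, codebooks)  # approximate reconstruction of x
```

Compression comes from storing only the integer codes: one small index per stage replaces a full floating-point vector per audio frame.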
Generating high-fidelity audio from raw audio signals is a challenging task. Audio involves complex modeling of signals and patterns at varying scales, and traditional methods have relied on symbolic representations such as MIDI or piano rolls. However, these methods are unable to capture the intricate expressive nuances and stylistic elements found in music.
AudioCraft is claimed to overcome this limitation by building on Transformer-XL, an extension of the popular Transformer architecture designed to handle long data sequences. This allows AudioCraft to learn both local and global dependencies in audio signals, resulting in more coherent and diverse results. In addition, AudioCraft is said to use contrastive learning, a technique in which models learn from positive and negative example pairs, helping it generate more realistic and convincing audio outputs.
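Meta's papers describe the actual training objectives; as a generic illustration of contrastive learning (not AudioCraft's exact loss), here is a minimal InfoNCE-style loss in NumPy. Each embedding is scored against its matching positive and against the other examples in the batch, which act as negatives.

```python
import numpy as np

def info_nce_loss(queries, keys, temperature=0.1):
    """Minimal InfoNCE-style contrastive loss.

    queries[i] and keys[i] form a positive pair (e.g. two views of
    the same audio clip); every keys[j] with j != i is a negative.
    """
    # L2-normalize so the dot product is cosine similarity.
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    logits = q @ k.T / temperature               # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    # Cross-entropy with the diagonal (matching pairs) as targets.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
anchors = rng.normal(size=(4, 16))
# Positives: slightly perturbed copies of the anchors.
positives = anchors + 0.05 * rng.normal(size=(4, 16))
# Baseline: unrelated random vectors standing in as "positives".
mismatched = rng.normal(size=(4, 16))

loss_aligned = info_nce_loss(anchors, positives)
loss_random = info_nce_loss(anchors, mismatched)
```

With a fixed seed, the matched pairs should score a much lower loss than the mismatched ones, which is exactly the pressure that pushes a model's embeddings of related audio together and unrelated audio apart.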
Meta claims that AudioCraft can produce music and audio comparable to, or even better than, human-made compositions in quality, diversity, and creativity. The company has published examples of AudioCraft’s outputs on its website, where users can also try the framework for themselves. Meta has also made AudioCraft’s source code available on GitHub so researchers and practitioners can access the models and train them on their own datasets.
Meta hopes AudioCraft will spur more innovation and collaboration in generative AI for audio and music. The company believes the framework can democratize audio production, enable new forms of artistic expression and storytelling, and have a positive social impact by making music and audio creation more accessible and inclusive for everyone.