Meta has introduced its latest brainchild, AudioCraft, an Artificial Intelligence (AI) tool capable of creating high-quality, realistic audio and music from simple text prompts.
This innovative tool comprises three distinct models: MusicGen, AudioGen, and EnCodec. MusicGen, trained on Meta-owned and specifically licensed music, generates music from text prompts. Similarly, AudioGen, trained on a large collection of public sound effects, generates environmental audio and sound effects from text prompts.
Meta has also announced the release of an improved version of EnCodec, its neural audio codec, designed to allow higher-quality music generation with fewer artefacts. It is further releasing its pre-trained AudioGen models, enabling the generation of ambient sounds and a myriad of sound effects, from the bark of a dog to the honk of a car horn or the echo of footsteps on a wooden floor.
Taking a significant step towards democratising AI audio generation, Meta is open-sourcing these models, granting researchers and practitioners alike unprecedented access. The objective is to enable users to train their own models on their own datasets, a first in the field. This move could help propel the advancement of AI-generated audio and music.
In a world where generative AI has stirred substantial excitement in the realms of images, video, and text, audio has lagged somewhat behind. Despite the presence of some work in this domain, the complex and often closed nature of the field has hindered widespread experimentation. This is particularly true when generating high-fidelity audio, an endeavour that demands modelling complex signals and patterns across varying scales. Music presents a further challenge due to its intricately layered structure of local and long-range patterns.
However, Meta’s AudioCraft models are designed to overcome these challenges, producing high-quality audio with long-term consistency while remaining easy to use. They offer a simpler design than previous models and give anyone the opportunity to experiment with Meta’s models.
Acting as a one-stop shop for music and sound generation and compression, AudioCraft sets a new precedent for reusability and ease of development. It provides a robust open-source foundation for sound generators, compression algorithms, and music generators to build upon and reuse, fostering innovation and changing the way we produce and listen to audio and music.
Meta envisions the potential of MusicGen to evolve into a novel type of instrument, reminiscent of the advent of synthesizers. The AudioCraft family of models is anticipated to inspire musicians and sound designers, aiding rapid brainstorming and fostering innovative composition techniques.
The future of sound generation seems to be echoing with promise and anticipation as the world waits to see what will be crafted with AudioCraft.