Audio and music-focused AI model from Meta: AudioCraft

Meta, the parent company of Facebook, Instagram and WhatsApp, has put AudioCraft, an artificial intelligence model focused on sound and music, on the agenda.

Meta today announced AudioCraft (codes here), a new open-source artificial intelligence model that lets users create music and sounds by issuing written commands to a fully generative AI. According to the company statement, AudioCraft basically consists of three different models: MusicGen, AudioGen and EnCodec. The most familiar of these is MusicGen, whose details we shared before. As a reminder, MusicGen can convert text inputs into music. The technology, which can also analyze existing songs and extend them based on text inputs, was prepared on an open-source basis and can still be tried for free directly here. The system, which still takes a long time to create music and at this first stage can reach a maximum length of 12 seconds in high quality, was reportedly trained on 20 thousand hours of music; in other words, it can produce almost any kind of music you can think of. MusicGen seems most useful for producing and inspiring small pieces of music rather than creating a complete song.
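As a rough illustration of how the open-sourced MusicGen model is typically driven from Python, the sketch below uses Meta's audiocraft package to turn a short written prompt into a clip of music. The checkpoint name ("facebook/musicgen-small"), the prompt and the output filename are assumptions chosen for the example, not details taken from the article.

```python
# Minimal sketch: text-to-music with Meta's open-source audiocraft package.
# Assumes `pip install audiocraft` and a working PyTorch install; the checkpoint
# name and parameters below are illustrative choices, not from the article.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

# Load a pretrained MusicGen checkpoint (smaller checkpoints download faster).
model = MusicGen.get_pretrained("facebook/musicgen-small")

# The article notes clips currently top out at roughly 12 seconds in high quality.
model.set_generation_params(duration=12)

# Turn a written prompt into audio; the result is a batch of waveform tensors.
wav = model.generate(["an upbeat acoustic folk tune with light hand percussion"])

# Write the generated clip to disk (musicgen_demo.wav) at the model's sample rate.
audio_write("musicgen_demo", wav[0].cpu(), model.sample_rate, strategy="loudness")
```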


The system, which allows the created music and audio tracks to be downloaded, produces better results when specific descriptions are used rather than general inputs. Commenting on AudioCraft, Meta CEO Mark Zuckerberg said: "We are open sourcing the code for AudioCraft, which produces high-quality, realistic sound and music from audio signals and text-based commands." The company also shared an example on X.
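To make the point about specific prompts concrete, the sketch below generates the same request from a vague prompt and a detailed one so the two clips can be compared side by side. Both prompts, the checkpoint choice and the output filenames are invented for illustration, using the same assumed audiocraft API as the earlier sketch.

```python
# Hypothetical comparison of a general prompt versus a specific one with MusicGen;
# prompts and file names are invented, the checkpoint choice is an assumption.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained("facebook/musicgen-small")
model.set_generation_params(duration=12)

prompts = {
    "general": "some music",
    "specific": (
        "lo-fi hip hop beat at 80 BPM with mellow electric piano, "
        "vinyl crackle and a soft jazzy bassline"
    ),
}

# Generate both prompts in a single batch, then write each clip to its own file.
clips = model.generate(list(prompts.values()))
for label, clip in zip(prompts.keys(), clips):
    audio_write(f"musicgen_{label}", clip.cpu(), model.sample_rate, strategy="loudness")
```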


There are other names working in this area as well. A similar system, MusicLM from internet giant Google, had appeared before. Like the artificial intelligence systems that turn written text into images, it focuses directly on music production. Google, which is not the first to attempt this, states that its own MusicLM is more advanced than other examples. According to reports, the system was trained on more than 280 thousand hours of music, so it can turn what is written into finished music in almost any genre. According to the statement, the system, which is said to be capable of complex productions, not only combines genres and instruments but also writes pieces from abstract concepts that are normally difficult for computers to grasp. The system, which can even create melodies based on humming, whistling or the description of a picture, can blend more than one genre in a single piece, but it is unfortunately not available to everyone at the moment. Because of copyright concerns (some of the 280 thousand hours of music used in training is protected by copyright), it is stated that Google will continue to use the system internally; no information has been given about public availability yet.


