Meta, the parent company of WhatsApp, Instagram, and Facebook, is currently making waves with its large language model LLaMA 2.
Meta's engineers built LLaMA 2 as a large language model in the same vein as GPT, the model behind ChatGPT, and LaMDA, the model behind Google Bard. Unlike many similar systems, it is open source, meaning anyone can work with and build on it as they wish. The free system currently comes in three sizes: 7 billion, 13 billion, and 70 billion parameters. By comparison, OpenAI's GPT-3.5 series reaches up to 175 billion parameters and Google's LaMDA has 137 billion. Although Meta's model trails its competitors on raw scale, its open structure sets it apart. Anyone can try LLaMA 2, but doing so is not easy. The simplest route is the llama2.ai website prepared by Andreessen Horowitz, where you can chat with the somewhat slow-running system; Turkish is supported, though not especially well. Announced directly by Meta CEO Mark Zuckerberg, LLaMA 2 was developed in cooperation with Microsoft and can also be tested easily on Azure.
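For readers who want to go beyond the llama2.ai demo and send prompts to a chat-tuned LLaMA 2 model themselves (for example a copy hosted on Azure or run locally), the chat variants expect input in Meta's published prompt format, with instructions wrapped in `[INST]` markers and an optional system prompt in `<<SYS>>` tags. The helper below is a minimal sketch of that format only; the function name and example strings are illustrative and not part of Meta's tooling.

```python
def build_llama2_prompt(system_prompt: str, user_message: str) -> str:
    """Wrap a system prompt and user message in Llama 2's chat format.

    Llama 2 chat models expect the instruction inside [INST] ... [/INST]
    markers, with the system prompt enclosed in <<SYS>> tags.
    """
    return (
        f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
        f"{user_message} [/INST]"
    )


# Illustrative usage: the resulting string is what you would pass to the model.
prompt = build_llama2_prompt(
    "You are a helpful assistant.",
    "Summarize what LLaMA 2 is in one sentence.",
)
print(prompt)
```

The model's reply then follows the closing `[/INST]` marker; sending free-form text without this wrapper tends to produce noticeably worse answers from the chat-tuned checkpoints.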
Meta had previously made headlines with a voice-focused artificial intelligence as well. AI systems are, after all, changing everything, including sound and music. Last month it was Meta's Voicebox that made an impact in this area, introduced after MusicGen, the company's text-to-music system, and presented directly by Meta CEO Mark Zuckerberg. Voicebox, which the company has not yet opened to everyone, works with real human speech and currently supports six different languages. According to Meta, the system was trained on more than 50,000 hours of audio. Judging by the first examples shown, it works very well: besides reading texts aloud, it can also clean up audio fed into it. Unwanted noises such as a barking dog or a car horn can be removed by the AI in seconds.
The system, which is still under development, may be made available to everyone in the future. A video clearly shows how the underlying technology works. "In the future, multi-purpose generative AI models like Voicebox can give natural voices to virtual assistants and NPC characters in the Metaverse," Meta says, adding: "Voicebox can enable visually impaired people to hear text messages from friends as they are read by artificial intelligence in their own voice, give creators new tools to easily create and edit audio tracks for videos, and much more."