Apple, one of the world’s largest technology companies, quietly published an open-source LLM in October, and interestingly, it was never brought up until now.
Behind systems such as ChatGPT sits an LLM, a Large Language Model. ChatGPT, for example, currently runs on the GPT-4 large language model, and Apple is also doing important work in this area: it is now on the agenda that the company quietly published an open-source LLM called “Ferret” back in October. Developed with the support of Cornell University, this LLM cannot currently be commercialized by anyone; it can only be used for research and development purposes.

It is surprising that Apple, which is thought to have started working on generative AI systems earlier than expected, has made no announcement at all for Ferret. The LLM, which is reported to have very strong visual perception abilities and is demonstrated with several examples, could be advanced rapidly by contributions from other researchers thanks to its open-source structure, saving Apple considerable time.

Ferret is reported to have been trained on 8 Nvidia A100 GPUs with 80 GB of memory each, which is notable for Apple, a company that has never used Nvidia GPUs in its own products. The model means nothing for end users for now, but these LLM efforts matter greatly for iOS 18, which is expected to have a very strong generative AI side.
🚀🚀Introducing Ferret, a new MLLM that can refer and ground anything anywhere at any granularity.
📰https://t.co/gED9Vu0I4y
1⃣ Ferret enables referring of an image region at any shape
2⃣ It often shows better precise understanding of small image regions than GPT-4V (sec 5.6) pic.twitter.com/yVzgVYJmHc

— Zhe Gan (@zhegan4) October 12, 2023
In fact, according to recently released information, the company has found a way to run LLMs directly on iPhones. The technical details are quite involved, but in short, the company wants to keep large LLM data directly in flash memory. Apple is reported to have developed special, “groundbreaking” techniques for fast access to large amounts of data. It can use them for its own LLM infrastructure, codenamed Ajax and also referred to as Apple-GPT, so that iPhones could deliver a vastly improved generative AI experience even when there is no internet connection.
In the document they shared, Apple researchers say, “The groundbreaking system we have developed is crucial for the use of advanced LLMs in resource-limited environments, thereby increasing their applicability and accessibility,” and add, “These efficiency-oriented methods allow artificial intelligence models to run that are up to twice the size of the iPhone’s available memory.” Here are two of the techniques the company uses to bring LLMs to iPhones:
Windowing: “Think of it as a method of recycling. Instead of loading new data each time, the artificial intelligence model reuses some of the data it has already processed. This reduces the need to constantly fetch from memory, making the process faster and smoother.”
Packaging: “This technique is like reading a book in larger chunks rather than one word at a time. Grouping data more efficiently allows it to be read from flash memory faster, which speeds up the AI’s ability to understand and generate language.”
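To make the two ideas above more concrete, here is a minimal sketch in Python. It is not Apple’s actual implementation, and the class and function names are invented for illustration: a small least-recently-used cache stands in for “windowing” (reusing recently loaded parameter chunks instead of re-reading them), while each cache miss fetches one large contiguous chunk at a time, standing in for the “packaging” idea of fewer, bigger reads from flash.

```python
# Illustrative sketch only -- names like WindowedFlashCache are invented,
# not from Apple's paper. It demonstrates the two ideas described above:
#   1) Windowing: keep recently used parameter chunks cached in memory
#      instead of reloading them from flash on every step.
#   2) Packaging: on a miss, read one large contiguous chunk rather than
#      many tiny values.
from collections import OrderedDict

class WindowedFlashCache:
    """Keeps the most recently used parameter chunks in memory (the
    'window') so repeated accesses avoid a slow reload from flash."""

    def __init__(self, flash_reader, window_size=4):
        self.flash_reader = flash_reader   # callable: chunk_id -> data
        self.window_size = window_size
        self.cache = OrderedDict()         # chunk_id -> data, LRU order
        self.flash_reads = 0               # counts slow flash accesses

    def get_chunk(self, chunk_id):
        if chunk_id in self.cache:
            # Windowing: data was loaded recently, reuse it for free.
            self.cache.move_to_end(chunk_id)
            return self.cache[chunk_id]
        # Cache miss: one big read from "flash" (the packaging idea).
        data = self.flash_reader(chunk_id)
        self.flash_reads += 1
        self.cache[chunk_id] = data
        if len(self.cache) > self.window_size:
            self.cache.popitem(last=False)  # evict least recently used
        return data

# Toy usage: simulated flash storage holding 8 parameter chunks.
flash = {i: [float(i)] * 1024 for i in range(8)}
cache = WindowedFlashCache(flash.__getitem__, window_size=4)

# Consecutive inference steps tend to touch overlapping chunks, so most
# requests hit the in-memory window instead of "flash".
for chunk_id in [0, 1, 2, 0, 1, 3, 2, 1]:
    cache.get_chunk(chunk_id)

print(cache.flash_reads)  # 4 -- only the distinct chunks hit flash
```

The point of the sketch is the ratio: eight accesses cost only four flash reads, because overlapping requests are served from the window. Real systems would layer this over memory-mapped weight files and hardware-sized read blocks, but the caching principle is the same.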