Apple Intelligence engine uses multiple LLMs

The generative artificial intelligence-based Apple Intelligence infrastructure, introduced at WWDC24, uses more than one LLM, according to Apple's official statement.

“Apple Intelligence consists of multiple highly capable models customized for our users’ daily tasks,” Apple states, adding: “We use an LLM (Large Language Model) with approximately 3 billion parameters on the device, and a larger LLM for the Private Cloud Compute infrastructure running on servers with Apple silicon processors.”

According to outside analysis, the main on-device LLM appears to be directly based on OpenELM-3B. OpenELM was trained on an open dataset of roughly 1.8 trillion tokens and is specifically optimized to run natively on devices. Apple trains the LLMs in the Apple Intelligence infrastructure partly on data obtained through license agreements, and it has detailed the various techniques it used to keep the models' resource requirements low. Apple's generative AI systems, which no one has had the chance to use yet, will initially support English only.
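To see why an on-device model would sit around the 3-billion-parameter mark while larger models stay on servers, a back-of-the-envelope calculation of weight storage helps. This is a minimal sketch: the ~3B parameter count comes from Apple's statement, but the precisions shown (float16 and 4-bit) are illustrative assumptions, not Apple's disclosed deployment format.

```python
# Rough memory footprint of a ~3B-parameter LLM's weights.
# Illustrative arithmetic only; the parameter count is the
# approximate figure Apple cites for its on-device model.

PARAMS = 3_000_000_000  # ~3 billion parameters

def model_size_gb(num_params: int, bytes_per_param: float) -> float:
    """Approximate weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return num_params * bytes_per_param / 1e9

# float16 weights: 2 bytes per parameter
fp16 = model_size_gb(PARAMS, 2)    # ~6.0 GB
# 4-bit quantized weights: 0.5 bytes per parameter
int4 = model_size_gb(PARAMS, 0.5)  # ~1.5 GB

print(f"fp16: ~{fp16:.1f} GB, 4-bit quantized: ~{int4:.1f} GB")
```

At full float16 precision a 3B model's weights alone need about 6 GB, which is impractical on a phone alongside the OS and apps; aggressive quantization brings that down to the 1–2 GB range, which is why small, heavily optimized models are the on-device tier while bigger models run in Private Cloud Compute.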


The company gave the following summary of Apple Intelligence: “Apple today announced it is bringing Apple Intelligence, its personalized intelligence system, to iPhone, iPad and Mac, combining the power of generative models with personal context to deliver intelligence that is incredibly useful and relevant. Apple Intelligence is deeply integrated into iOS 18, iPadOS 18 and macOS Sequoia. It uses the power of Apple silicon to understand and compose language and images, take action across apps, and draw on personal context to simplify everyday tasks.”

With Private Cloud Compute, Apple can flexibly scale computing capacity between on-device processing and larger server-based models running on dedicated Apple silicon servers, which the company says “sets a new standard for privacy in artificial intelligence.” “We are excited to enter a new phase of Apple innovation. Apple Intelligence will completely transform what users can do with our products, and what our products can do for our users,” said Apple CEO Tim Cook, who continued:

“Our unique approach combines generative AI with the user’s personal context to deliver truly helpful intelligence. Moreover, it can access that information in a completely private and secure way to help users do the things that matter most to them. This is AI that only Apple can offer. We can’t wait for our users to see what this technology can do.”

The Apple Intelligence features that run built into the devices (some features are also supported from the cloud) will require at least an “A17 Pro” or “M1” chip. Apple's built-in generative AI technologies can therefore only be used on the iPhone 15 Pro, iPhone 15 Pro Max, iPad Pro (M1 or newer), iPad Air (M1 or newer), MacBook Pro (M1 or newer), MacBook Air (M1 or newer), iMac (M1 or newer), Mac mini (M1 or newer), Mac Pro (M2 Ultra or newer) and Mac Studio (M1 Max or newer).
