Internet giant Google may integrate its advanced artificial intelligence system Gemini into Android phones in 2025.
According to the latest information, Gemini, the advanced artificial intelligence system (language model) Google has developed, may become a built-in part of Android phones in 2025. Google reportedly plans to make it work offline, that is, to build an infrastructure that does not require a constant server/internet connection, which could seriously increase the capabilities of phones. Of course, Google is not the only one making plans in this area. Apple, one of the world’s largest technology companies, appears to have found a way to run an “LLM” directly on iPhones. For those who missed it, behind systems such as ChatGPT lies an LLM, or Large Language Model; ChatGPT, for example, currently runs on top of the GPT-4 large language model. Apple is also carrying out important work on this front, and according to the latest reports the company has indeed found a way to run an LLM directly on iPhones.
Of course, the technical details are quite complicated, but in short, the company wants to store large LLM data directly in flash memory. Apple is reported to have developed “groundbreaking” special techniques for fast access to large amounts of data, which it can use for its own LLM infrastructure, codenamed Ajax and also called Apple-GPT. As a result, iPhones could deliver a vastly improved generative AI experience even when there is no internet connection.
In the document they shared, Apple researchers say, “The groundbreaking system we have developed is crucial for the use of advanced LLMs in resource-limited environments, thereby increasing their applicability and accessibility.” They also explain, “These efficiency-oriented methods allow artificial intelligence models to run using twice the available memory of the iPhone.”
Here are two of the company’s techniques for bringing LLMs to iPhones:
Windowing: “Think of it as a method of recycling. Instead of loading new data each time, the artificial intelligence model reuses some of the data it has previously processed. This reduces the need to constantly fetch from memory, making the process faster and smoother.”
Packaging: “This technique is like reading a book in larger chunks rather than one word at a time. Grouping data more efficiently allows it to be read from flash memory faster, which speeds up the AI’s ability to understand and produce language.”
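As a rough intuition for how these two ideas fit together, here is a minimal Python sketch. The class names, data layout, and numbers are invented purely for illustration and are not Apple’s actual implementation: a small cache (“windowing”) reuses recently loaded weight rows instead of re-reading them, and flash reads happen in grouped bundles (“packaging”) instead of one row at a time.

```python
from collections import OrderedDict

# Hypothetical, simplified sketch of the two techniques described above.
# FlashStore and WindowCache are illustrative names, not a real API.

class FlashStore:
    """Simulates slow flash storage holding model weights as rows."""
    def __init__(self, rows):
        self.rows = rows    # row_id -> weight data
        self.reads = 0      # count of bundled read operations

    def read_bundle(self, row_ids):
        # "Packaging": fetch several rows in one grouped read
        # instead of issuing one tiny read per row.
        self.reads += 1
        return {r: self.rows[r] for r in row_ids}

class WindowCache:
    """'Windowing': keep recently used rows in fast memory and
    reuse them rather than re-reading flash every time."""
    def __init__(self, store, capacity):
        self.store = store
        self.capacity = capacity
        self.cache = OrderedDict()  # row_id -> data, in LRU order

    def get_rows(self, row_ids):
        missing = [r for r in row_ids if r not in self.cache]
        if missing:
            self.cache.update(self.store.read_bundle(missing))
        for r in row_ids:               # mark rows as recently used
            self.cache.move_to_end(r)
        while len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return [self.cache[r] for r in row_ids]

# Two consecutive "inference steps" that need overlapping weight rows:
store = FlashStore({i: f"w{i}" for i in range(10)})
cache = WindowCache(store, capacity=6)
cache.get_rows([0, 1, 2, 3])   # one bundled flash read for 4 rows
cache.get_rows([2, 3, 4, 5])   # only rows 4 and 5 touch flash
print(store.reads)             # -> 2 bundled reads instead of 8 single ones
```

The point of the toy example is the access pattern, not the numbers: the second step reuses rows 2 and 3 from the window, so only the missing rows trigger a flash read, and that read fetches them as a single bundle.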