
Will LLMs soon work on all Apple devices?

  • December 22, 2023

Apple is experimenting with new techniques to run Large Language Models (LLMs) in flash memory so that these models can be used on all iPhone and Mac devices.

Although Apple still lags behind OpenAI, Google and Amazon, it is increasingly leveraging the power of AI. The company is now exploring a way to run LLMs from flash memory and make these models ubiquitous across all iPhone and Mac devices. It is not yet clear whether the techniques work effectively. Is this a catch-up maneuver by Apple in the AI race?

LLMs in flash memory

Storing LLMs in flash memory is not an easy task. Typically, models run entirely in RAM, where the CPU and accelerators can access them directly, and devices like smartphones do not have enormous RAM capacity. Running LLMs directly from flash memory would make larger models compatible with these small devices. Apple is therefore experimenting with storing the models in flash memory, with the goal of making LLMs ubiquitous across its iPhone and Mac lineup.

The storage in smartphones, including iPhones, is flash storage (just like the SSD in your laptop). The company is currently experimenting with two techniques. The first, called windowing, lets the AI model reuse some of the data it has already processed. Less energy then goes into repeatedly retrieving the same data from flash, and the process runs faster.
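
To make the idea concrete, here is a minimal Python sketch of what such a reuse window might look like: weights needed by recent tokens stay cached in RAM, and only weights not seen recently are fetched from flash. The class, names, and eviction policy are illustrative assumptions, not Apple's actual implementation.

```python
# Illustrative "windowing"-style weight cache (an assumed design, not
# Apple's implementation): weight rows used by recent tokens stay in RAM,
# so only rows not seen recently trigger a (simulated) flash read.

from collections import OrderedDict

class WindowedWeightCache:
    def __init__(self, flash_store, capacity):
        self.flash_store = flash_store    # dict simulating flash: row id -> weights
        self.capacity = capacity          # max rows kept in RAM
        self.in_ram = OrderedDict()       # least-recently-used rows evicted first

    def fetch(self, row_ids):
        """Return the weight rows for one token, counting simulated flash reads."""
        flash_reads = 0
        rows = {}
        for rid in row_ids:
            if rid in self.in_ram:
                self.in_ram.move_to_end(rid)          # reused: no flash access
            else:
                self.in_ram[rid] = self.flash_store[rid]
                flash_reads += 1                      # cold: one flash read
                if len(self.in_ram) > self.capacity:
                    self.in_ram.popitem(last=False)   # evict oldest row
            rows[rid] = self.in_ram[rid]
        return rows, flash_reads

# Consecutive tokens tend to need overlapping rows, so most reads are saved:
flash = {i: [0.0] * 4 for i in range(100)}           # stand-in weight rows
cache = WindowedWeightCache(flash, capacity=6)
_, cold = cache.fetch([1, 2, 3])                     # 3 flash reads (nothing cached)
_, warm = cache.fetch([2, 3, 4])                     # 1 flash read (2 and 3 reused)
assert (cold, warm) == (3, 1)
```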

The second technique, row-column bundling, groups data more efficiently so the AI model can read larger contiguous chunks from flash memory at once, increasing its ability to understand and generate language.
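
Below is a hedged Python sketch of how such bundling could be laid out: pieces of two weight matrices that are always needed together get stored as one contiguous record, so one larger read replaces two small ones. The dimensions, matrix roles, and helper names are assumptions for illustration, not the storage format Apple uses.

```python
# Illustrative "row-column bundling" (assumed layout): column i of one weight
# matrix and row i of another are needed together whenever unit i is active,
# so they are stored as a single contiguous record in (simulated) flash.

import numpy as np

hidden, ffn = 8, 32                        # toy dimensions
w_up = np.random.randn(hidden, ffn)        # up-projection: columns index units
w_down = np.random.randn(ffn, hidden)      # down-projection: rows index units

# Bundle: for each unit i, concatenate column i of w_up with row i of w_down
# into one record, as it might be laid out contiguously in flash.
bundles = np.stack(
    [np.concatenate([w_up[:, i], w_down[i, :]]) for i in range(ffn)]
)

def load_unit(i):
    """One simulated flash read returns both pieces unit i needs."""
    record = bundles[i]
    return record[:hidden], record[hidden:]

up_col, down_row = load_unit(3)
assert np.allclose(up_col, w_up[:, 3]) and np.allclose(down_row, w_down[3, :])
```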

Catching up in the AI race

Although Apple was initially skeptical about AI models, it appears to be increasingly embracing them. For example, the company recently launched Apple GPT and is developing a new framework that allows developers to build AI models on Apple Silicon.

With these new experiments, Apple hopes that LLMs can also run efficiently on Apple devices, giving them another upgrade. Whether these techniques actually work remains to be seen.

Source: IT Daily
