
ChatGPT’s biggest weakness is mobile phones. And its rivals smell blood

May 17, 2023

ChatGPT is great, but it has a small problem: we need to connect to cloud services to use it. OpenAI uses hundreds (thousands?) of very expensive professional GPUs to serve users, which means that the cost of providing and using the service can be high. The race to win the battle of generative AI models is now joined by another: the race to make it possible for us to use them locally (and even offline) on our smartphones, rather than in the cloud.

A miniChatGPT for your mobile. Given the enormous resources ChatGPT consumes, having a chatbot of this caliber running locally on our phones may seem unthinkable, but it isn’t. In fact, several projects in recent weeks point to exactly that future.

Google and the Gecko. One of them is Gecko, one of the variants the Mountain View company proposes for deploying its new LLM, PaLM 2, which competes with OpenAI’s GPT-4. According to Google, Gecko is small enough to run natively on a smartphone (they got it working on a Samsung Galaxy, for example), and although they didn’t demonstrate that capability on stage, the statement of intent was strong.

Hybrid artificial intelligence. Some companies, such as Qualcomm, have already started talking about hybrid AI platforms in which we use models like ChatGPT in the cloud and others like Gecko on the phone. The company’s CEO, Cristiano Amon, explained in the Financial Times how expensive it would be to depend solely on cloud models; combining them with LLMs that can run on a mobile phone cuts costs. Qualcomm has already tried this approach, managing to run Stable Diffusion locally (and offline) on one of its SoCs.
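The hybrid platform Qualcomm describes boils down to a router: queries a small on-device model can handle stay on the phone, and only the rest go to the cloud. A minimal sketch in Python of that idea — every name here (`local_model`, `cloud_model`, the word-count threshold) is a hypothetical illustration, not Qualcomm’s actual design:

```python
# Hypothetical sketch of a hybrid AI router: short prompts are answered by a
# small on-device model, longer ones by a large cloud-hosted model.
# Both models and the routing heuristic are illustrative assumptions.

def local_model(prompt: str) -> str:
    # Stand-in for a small on-device LLM (e.g. a quantized 7B model).
    return f"[on-device answer to: {prompt}]"

def cloud_model(prompt: str) -> str:
    # Stand-in for a large cloud LLM behind an API.
    return f"[cloud answer to: {prompt}]"

def route(prompt: str, max_local_words: int = 20) -> tuple[str, str]:
    """Return (backend, answer). Heuristic: long prompts go to the cloud."""
    if len(prompt.split()) <= max_local_words:
        return "local", local_model(prompt)
    return "cloud", cloud_model(prompt)

backend, answer = route("What time is it in Tokyo?")
print(backend)  # prints "local": short prompt, handled on device
```

A real router would of course use smarter signals than prompt length (task type, privacy, battery, connectivity), but the cost-saving logic is the same: every query answered locally is one the cloud GPUs never see.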

Llamas. The trend toward “miniaturizing” ChatGPT gained momentum with the arrival of Meta’s LLM, LLaMA. One version of this model (among others) has 7 billion parameters (“7B”) and can be ported to a mobile device to run locally. That’s exactly what a team at Stanford University did: they created a particular build that they got working on a Google Pixel 6. It ran slowly, yes, but it worked. The same university also released Alpaca, a fine-tuned model based on LLaMA 7B that can run on much more modest hardware than Meta’s original.
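Why 7B is roughly the ceiling for today’s phones comes down to memory arithmetic: parameter count times bytes per weight has to fit in a few GB of RAM. A back-of-the-envelope calculation (the byte sizes per precision are standard; the ~8 GB phone RAM figure is an assumption):

```python
# Back-of-the-envelope memory footprint of a 7B-parameter model at
# different numeric precisions. Phone RAM budget (~8 GB) is an assumption.
PARAMS = 7_000_000_000

def footprint_gb(bytes_per_weight: float) -> float:
    """Approximate weight-storage size in GB (ignores activations/KV cache)."""
    return PARAMS * bytes_per_weight / 1e9

print(f"fp16 : {footprint_gb(2):.1f} GB")    # 14.0 GB -> too big for a phone
print(f"int8 : {footprint_gb(1):.1f} GB")    # 7.0 GB  -> borderline
print(f"4-bit: {footprint_gb(0.5):.1f} GB")  # 3.5 GB  -> fits in ~8 GB of RAM
```

This is why the mobile ports rely on aggressive quantization: only at 4-bit precision or so do the weights leave enough headroom for the OS and the model’s working memory.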

And there are (quite a few) more. Mobile-ready generative AI models keep appearing. A few days ago the open-source MLC LLM project emerged with a clear goal: to deploy LLM models on all kinds of hardware platforms, including mobile phones. The project can be installed on various MacBooks, but also on some iPads and on the iPhone 14 Pro. Performance is very modest: about 7.2 tokens per second on the iPhone 14 Pro, roughly the 4-6 words per second at which ChatGPT writes its responses.
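The 7.2 tokens/second figure and the “words per second” comparison are linked by a rule of thumb: English text averages somewhere around 1.3 tokens per word (an assumption; the exact ratio depends on the tokenizer). A quick conversion:

```python
# Convert a token-generation rate into an approximate words-per-second rate.
# The tokens-per-word ratio (~1.3 for English) is a rough, tokenizer-dependent
# assumption, not an exact constant.

def tokens_to_words_per_sec(tokens_per_sec: float,
                            tokens_per_word: float = 1.3) -> float:
    return tokens_per_sec / tokens_per_word

rate = tokens_to_words_per_sec(7.2)
print(f"{rate:.1f} words/s")  # prints "5.5 words/s", inside the 4-6 ballpark
```

So the iPhone 14 Pro number lands squarely in the same reading-speed range as ChatGPT’s streamed responses, which is what makes the comparison meaningful.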

Don’t stop the rhythm. Some are already talking about a kind of “Android moment” in the AI space, thanks to this explosion of open-source projects. At Madrona they mention promising projects like Dolly (Databricks), OpenChatKit (Together.xyz), Cerebras-GPT (Cerebras) and Hugging Face. Apple itself gave a small nod to this segment today with the announcement of a feature that lets you train your iPhone to read sentences in your voice (with everything running on the device). If things keep going this way, it doesn’t seem unlikely that we’ll soon have something like ChatGPT running natively on our phones, no cloud connection required.

In Xataka | You can’t yet install ChatGPT on your PC or laptop, but there is already an alternative.

Source: Xataka
