
https://www.xataka.com/robotica-e-ia/gemini-nano-empieza-nueva-era-ia-bolsillo

There was a great deal of anticipation around the launch of Google Gemini, and after yesterday’s announcement we finally know what we are dealing with: not one but three multimodal AI models built to compete with ChatGPT.

The first of these is Gemini Pro, already available through Google Bard, and although the most ambitious is Gemini Ultra, there is special interest in the smallest member of the family: Gemini Nano. The reason matters: it opens the door to a new era of “pocket” or on-device AI, one that is always available on our phones and also independent of the cloud.

Welcome to the era of “on-device” AI

With Gemini Nano, Google set out to offer a much more efficient model that can run locally, directly on our devices, without connecting to the cloud. That is the big difference compared with models like ChatGPT or Bard, which we can certainly use from our phones (via the browser) but which run in the cloud, on large servers that handle the processing and generate the responses.

[Image: Gemini Nano suggesting a smart reply in WhatsApp. Why reply to WhatsApp when AI can do it?]

With Gemini Nano, all of that processing and text generation happens directly on our devices, and that has notable benefits. Among them: the data we use does not leave the device and is not shared with third parties, at least as far as we know. We are therefore looking at pocket AI models that can run directly on our smartphones even when we have no data connection.

As Google explains on the Android developers blog, this makes it possible to build high-quality text summaries, smart contextual replies (like the WhatsApp example in the image just above these paragraphs) and grammar correction with Gemini Nano, as well as more advanced experiments. Developers interested in creating applications that take advantage of Gemini Nano can sign up on Google’s platform.

Gemini Nano, and with it the era of pocket AI, is debuting on the Pixel 8 Pro, the company’s flagship. This smartphone will gain generative AI features such as the ability to summarize a recorded phone call into bullet points.

A more efficient model with Android AICore as the core component

This is the most efficient of the three models Google has presented, which makes sense given that it is meant to run on our phones rather than on servers. As Google explains in the Gemini technical report, Nano comes in two versions: Nano-1, with 1.8 billion parameters (1.8B), and Nano-2, with 3.25 billion parameters (3.25B).


Additionally, the model is quantized to 4 bits. Quantization refers to reducing the precision of the model’s weights (and, in some schemes, its activations) from 32-bit floating-point values down to 4-bit integers.

That quantization dramatically shrinks the model’s memory footprint, making it far better suited to resource-constrained devices such as smartphones or IoT hardware. Even so, Google says the quantized model achieves performance comparable to, or in some cases better than, the full-precision 32-bit model it is derived from.
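To put those figures in perspective, here is a back-of-the-envelope sketch of what 4-bit quantization means for the two Nano variants. It is a simplified estimate that only counts raw weight storage, ignoring activations, the KV cache and runtime overhead:

```python
# Rough weight-memory estimate for the two Gemini Nano variants.
# Simplification: only raw weight storage is counted; activations,
# KV cache and runtime overhead are ignored.

GIB = 1024 ** 3

models = {
    "Nano-1": 1.8e9,    # parameters, per Google's published figures
    "Nano-2": 3.25e9,
}

for name, params in models.items():
    fp32_bytes = params * 4      # 32 bits = 4 bytes per weight
    int4_bytes = params * 0.5    # 4 bits = half a byte per weight
    print(f"{name}: {fp32_bytes / GIB:.2f} GiB at fp32 -> "
          f"{int4_bytes / GIB:.2f} GiB at int4 (8x smaller)")
```

Going from roughly 12 GiB of 32-bit weights to about 1.5 GiB at 4 bits is what makes it plausible to keep a model of Nano-2’s size resident in the memory of a high-end phone like the Pixel 8 Pro.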

At the heart of this deployment is Android AICore, a new system service that makes it possible to use foundation models such as Gemini Nano directly on our Android phones.

This new Android 14 component is also “private by design” and, among other things, supports fine-tuning through a technique called Low-Rank Adaptation (LoRA), which makes it possible to adapt large language models (LLMs) such as PaLM 2 to specific tasks, all on “limited” devices like our smartphones.
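To make the idea behind LoRA concrete, here is a minimal NumPy sketch. It is an illustrative toy with made-up dimensions, not the mechanism AICore actually ships: a frozen weight matrix is adapted by adding the product of two small low-rank matrices, so only a tiny fraction of the parameters needs to be trained and stored for each task.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, rank = 4096, 4096, 8   # toy dimensions; rank << d_in, d_out

# Frozen pretrained weight matrix (never updated during adaptation).
W = rng.standard_normal((d_out, d_in)).astype(np.float32)

# Trainable low-rank factors. B starts at zero, so before any training
# the adapted model behaves exactly like the base model.
A = (0.01 * rng.standard_normal((rank, d_in))).astype(np.float32)
B = np.zeros((d_out, rank), dtype=np.float32)
alpha = 16.0  # scaling applied to the low-rank update

def adapted_forward(x: np.ndarray) -> np.ndarray:
    """y = W x + (alpha / rank) * B (A x): base output plus low-rank correction."""
    return W @ x + (alpha / rank) * (B @ (A @ x))

x = rng.standard_normal(d_in).astype(np.float32)
y = adapted_forward(x)

full_params = W.size
lora_params = A.size + B.size
print(f"Full weight matrix: {full_params:,} parameters")
print(f"LoRA update (rank {rank}): {lora_params:,} parameters "
      f"({100 * lora_params / full_params:.2f}% of the original)")
```

Only the small matrices A and B would be trained for a given task; the large matrix W stays frozen, which is what makes this kind of per-task adaptation feasible on a phone.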

This is just the beginning

The launch of Gemini Nano is promising, but its features and practical applications are still limited. Only the small group of users who own a Pixel 8 Pro will be able to start using it, and only in a handful of very specific scenarios. Summarizing conversations or automatically replying to messages is interesting, but we certainly want much more from these pocket AIs.

In fact, this release does not mean we now have a “pocket ChatGPT” or a “pocket Google Bard”: the model’s features are not currently meant to replace Google’s search engine (that would be Google shooting itself in the foot), but rather to offer ways to get more out of our device and save time.

Cloud-based generative AI models such as ChatGPT or Bard therefore do not seem threatened by this new era of pocket AI: what we have here are mostly companions that will act as “copilots”, as Microsoft likes to say, for that experience, but directly on the mobile device rather than as independent, separate applications.

From here on, though, the possibilities look enormous, and we are only at the beginning. This could end up being a small revolution in its own right.

In Xataka | Meta, IBM and others form the AI Alliance, with the goal of promoting the development of open-source artificial intelligence models

Source: Xataka
