Google has announced plans to integrate its Gemini family of advanced AI language models directly into Android smartphones starting in 2025. The move builds on the earlier introduction of Gemini Nano, a scaled-down version of the model that still relies on an internet connection.
With more powerful models built directly into the device, users will no longer need a constant connection for certain AI-driven features, which improves both user experience and privacy.
Google’s Gemini Ultra is a powerful model with 1.56 trillion parameters, putting it on par with OpenAI’s GPT-4 in language understanding and generation. Integrating Gemini Ultra could offer Android users a host of new features and capabilities.
With smartphone sales slowing, industry watchers are looking to AI’s potential to spur innovation and renew consumer interest – a possible “AI supercycle”.
But analysts caution that current advances may not be compelling enough to prompt mass upgrades from existing devices. Despite the mixed forecasts, Google, along with other tech companies, is investing heavily in chatbots and AI-based virtual assistants – one example being the rebranding of Google’s Bard app as Gemini.
This investment aligns with CEO Sundar Pichai’s vision of a unified AI agent that can seamlessly assist users across the entire Google ecosystem. Integrating advanced AI models into Android phones in 2025 could mark a shift toward a smarter, more personalized mobile experience. Time will tell how far-reaching that change turns out to be.