If it doesn’t have AI, it can’t compete to be the best phone on the market. No one wrote this rule down, but manufacturers are following it to the letter. The new Google Pixel 9, the Samsung Galaxy S24, the next iPhone with Apple Intelligence, and more modest devices like the OPPO Reno12 Pro all have one thing in common: they want to differentiate themselves through their AI capabilities.
For years, we’ve focused on measuring synthetic CPU and GPU performance in benchmarks. While these tests don’t necessarily determine whether one device will perform better or worse than another in daily use, they are still important for understanding a device’s potential and raw capabilities.
Geekbench is one of the most widely used benchmarks in the world and one of the references we rely on at Xataka to measure performance. The company behind it is now launching Geekbench AI as a candidate to become the go-to app for measuring AI performance. The problem? Raw power won’t be everything when it comes to claiming the crown in this arena.
Geekbench AI replaces Geekbench ML, a test that let us measure a phone’s raw AI performance. “Machine learning” doesn’t sell as well as “artificial intelligence,” so the reasons behind the name change are obvious.
The key point here is that Geekbench is kicking things off: measuring AI performance is set to become a new trend. But how relevant is it, really? The first step is to understand what this benchmark actually measures.
“Geekbench AI provides a multidimensional picture of on-device AI performance by running ten AI workloads, each with three different data types, using large datasets that mimic real-world AI use cases. Developers and users alike can measure on-device AI performance in just a few minutes, with single precision, half precision, and quantized scores.”
This test lets us measure the performance of the CPU, GPU, or dedicated NPU (whichever we choose) by running different workloads that simulate real AI model execution. The more powerful the device, the higher its scores.
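To make the idea concrete, here is a minimal sketch (in Python, using TensorFlow Lite) of what this kind of benchmark boils down to: running the same network exported at different precisions and timing it. The file names, iteration count, and CPU-only execution are assumptions for illustration; Geekbench AI also exercises GPU and NPU backends and scores accuracy, not just speed.

```python
# Minimal sketch of an on-device AI benchmark: time the same workload at
# different precisions. File names and run counts are illustrative only,
# not Geekbench AI's actual workloads.
import time
import numpy as np
import tensorflow as tf  # pip install tensorflow

def time_model(model_path: str, runs: int = 50) -> float:
    """Return average inference latency (ms) for a TFLite model on CPU."""
    interpreter = tf.lite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]

    # Random data standing in for the large, realistic datasets the benchmark uses.
    dummy = np.random.random_sample(inp["shape"]).astype(inp["dtype"])

    start = time.perf_counter()
    for _ in range(runs):
        interpreter.set_tensor(inp["index"], dummy)
        interpreter.invoke()
    return (time.perf_counter() - start) / runs * 1000

# Hypothetical exports of the same network at full, half, and quantized precision.
for name in ("model_fp32.tflite", "model_fp16.tflite", "model_int8.tflite"):
    print(f"{name}: {time_model(name):.2f} ms/inference")
```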
The point is that if AI is going to flood the phone world, it will need to run in the cloud and demand as little local power as possible. To that end, Google works with small models like Gemini Nano and offloads a large part of the tasks online (although phones like the Pixel run the model locally and require at least 8 GB of RAM), so raw power takes a complete back seat for now. Phones like the OPPO Reno12 Pro are proof of this.
Apple is the opposite example. The only reason the iPhone 15 can’t run Apple Intelligence is a combination of raw power and, above all, RAM: it is a massive model that will run mostly locally, connecting to the cloud only for certain functions.
Measuring AI power will be key to knowing a phone’s raw capabilities, but it won’t necessarily tell us what the phone can or cannot do. Integrating online models (a huge opportunity for manufacturers to make even more money through subscriptions like Gemini’s) will be key to letting every phone, even the least powerful, run AI.
Image | Xataka
On Xataka | “Include me” is a brilliant and disturbing idea from an AI that insists on embellishing and distorting our memories