What Android news did we see at Google I/O 2024?
- May 15, 2024
Although it ended up carrying less weight than we expected, Android was present at the opening keynote of Google I/O 2024, which took place yesterday and whose main announcements we cover in this publication, although, as in the rest of the event, with a focus on artificial intelligence. The AI war has intensified again with recent announcements from Google and OpenAI, to be joined next week by Microsoft and in June by Apple (in partnership with OpenAI).
The first big confrontation took place during the first half of last year, when the chatbots born in reaction to the great pioneer of the category, ChatGPT, made their debut, with Bard (now Gemini) and Bing Chat (now Copilot) as the main protagonists. However, those were just the starters: AI integrations into other services, model enhancements, growing multimodal capabilities and more soon began to arrive.
The year 2024 marks the starting signal for two key developments in the future of artificial intelligence. On the one hand, we have already started to see the arrival of the first chips (processors, APUs and SoCs) in which the NPU is the big star; both privacy concerns and the interest of technology companies in offloading some of the workload from their infrastructure are pushing AI computation toward the client.
Another key point that has become clear in recent days is the future of AI-based virtual assistants. Apple opened this market many years ago with Siri, quickly joined by Google Assistant, Alexa, the now defunct Cortana, Bixby and many third-party alternatives. Almost a decade and a half after Siri's debut, we now seem to be approaching the first major evolution of these systems, embodied at Google in Project Astra, shown to us for the first time yesterday, and at OpenAI in ChatGPT with GPT-4o, presented last Monday.
It was difficult for Google to show us anything else about the future of Android that could surprise us more than what awaits us in some time (initial plans are to begin "real world" testing of this technology at the end of the year). However, there were also other important announcements, mainly related to Android in general, but also one (expected) announcement exclusive to Android 15. An announcement that, on the other hand, we already told you about a few days ago.
So let's start with the last one: Android 15 beta 2 is available as of today, a version you can try now if you have a Google Pixel smartphone from generations 6, 7 or 8 (including the "a" versions), and that it will soon be possible to install on devices from other brands, initially Samsung and Xiaomi. Let us remind you that this beta, which we will soon describe in another publication, is the last one before the phase focused on system stability. In other words, it's best not to even consider installing it on a phone you use regularly.
The only news specifically limited to Android 15, or at least explicitly mentioned in the Google I/O 2024 keynote, is the release of beta 2, so everything we will see below is meant to also reach Android 14, although we understand that not all devices will be able to use these features, only those that meet the technical requirements necessary for their proper operation.
Of yesterday's news, the one that undoubtedly attracted the most interest is Android's future ability to determine whether a phone call is fraudulent based on its content. This of course means that the AI will "monitor" the call in real time; in the demo we were shown how, in a call from a hidden number in which the scammer tried to trick his victim into a dangerous bank transaction (corresponding to an already known type of attack), the operating system automatically issued a warning to the user.
I realize how scary it can be to think that Google is listening in on your calls, even for a purpose as convenient as this. Which brings me back to exactly what I mentioned at the beginning: the need to push AI to the client. And the good news in this particular case is that this feature, currently in testing, uses Gemini Nano, a version of Google's model optimized for on-device features, so listening and analysis are performed locally, without uploading data to Google's servers.
There is currently no planned launch date for this feature, which will alert users when an alleged bank representative asks for an urgent money transfer, payment with a gift card, or personal information such as a PIN or password, but we know that its use will be voluntary and that Google will provide more information about it in a few months.
These types of scams have become quite common recently, and although we all try to warn of the risks, the lack of awareness among users and the level of sophistication of some of these attacks make them a very worrisome global threat. Therefore, I believe Google should prioritize polishing this feature as soon as possible, since it can mean a remarkable before and after for many people. And you have probably had the same thought: I'm sure you can think of at least one person you would like to have such protection.
Circle to Search has continued to grow since its deployment at the beginning of this year. Already present in the latest generations of Pixel devices (including the Fold and the Tablet) as well as the Galaxy S, Galaxy Z and Galaxy Tab families, it initially allowed us to select part of the content displayed on the screen and then search for it, as we could do with Google Lens.
Over time, we've seen signs of new features, some officially confirmed by Google, others discovered through in-depth analysis of the beta versions released so far. So with what we've seen so far, and as I've suggested on another occasion, we could say that the full title should be Circle to Find, Circle to Translate and Circle to Share… to which we now also need to add Circle to Learn.
With this new feature, now available to Circle to Search users, the tool can also be used to select different types of problems, such as equations, and automatically get the guidance needed to solve them. In the image above you can see the tool's response when capturing an equation, and Google mentions that it is also currently able to interpret input for physics or math problems. But this is just the beginning: later this year it will also be able to tackle even more complex problems, involving algebraic calculations, diagrams, graphs and more.
In its original version, Gemini Nano focuses on text, but Google is already preparing a new version that will be multimodal, that is, able to handle text, images and sound. This is very important because if the LLM is natively multimodal and has been optimized for certain platforms, it will be much more efficient at tasks that involve input and/or output in different formats.
Again, no specific dates, but Google has announced that its debut will take place in a few months, initially on Pixel devices. At that point, of course, it will be very important to see whether its technical requirements limit it to the future Pixel 9 with Android 15 or whether, on the contrary, it is optimized well enough to work without problems on the current generation and even on some of its predecessors.
Fortunately, accessibility is an issue that is being given increasing importance (in truth, we still have a long way to go, but that shouldn't stop us from celebrating progress), and in that sense TalkBack plays a key role in Android. The future arrival of a multimodal version of Gemini Nano will be a big improvement to this tool, as it will add support for the type of content that, unfortunately, in a large number of cases continues to be an obstacle: images.
According to data provided by the company, TalkBack users encounter an average of 90 unlabeled images every day, making them a completely inaccessible type of content. With a future version of the tool, now with multimodal support, users will be able to get detailed information about the content of the images displayed, adapted to the context. And of course, since it is based on Gemini Nano, an internet connection will not be needed to use this feature.
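For context on what an "unlabeled image" means: TalkBack reads an image aloud only if the app supplies a text description for it, which on Android is done through the standard `contentDescription` attribute. A minimal layout sketch (the id, drawable and description text are illustrative, not from the source):

```xml
<!-- An ImageView WITHOUT android:contentDescription is an "unlabeled image":
     TalkBack has nothing to announce, which is the gap the multimodal
     Gemini Nano feature aims to fill by describing the image on device. -->
<ImageView
    android:id="@+id/salesChart"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:src="@drawable/sales_chart"
    android:contentDescription="Chart showing monthly sales growth" />
```

Until every app is labeled this diligently, an on-device model that can generate such descriptions automatically is a meaningful accessibility step.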
Source: Muy Computer
Donald Salinas is an experienced automobile journalist and writer for Div Bracket. He brings his readers the latest news and developments from the world of automobiles, offering a unique and knowledgeable perspective on the latest trends and innovations in the automotive industry.