Google, at last, is stepping on the accelerator in artificial intelligence
- May 11, 2023
The time has finally come for what Google originally planned: the search-engine company is now really starting to flex its muscles in an industry in which, as I have stated on previous occasions, it has had for years the technological substrate needed to differentiate itself from its competitors. However, since the arrival of ChatGPT and especially the new Bing, Google has seemed to be playing catch-up. Today, though, Google I/O 2023 may mark a tipping point, depending above all on how long it takes for everything the company presented during the opening of the developer event to actually reach users.
Google had already positioned I/O 2023, at the end of last year, as the stage on which to unveil its AI plans, which is why the advances of recent months have been only a timid response to its competitors' extensive ones. Whether because its developments needed more time, because it had to define the ethical foundations on which to build its products, because those products needed fuller definition, and so on, their time had not yet come... until now. And of the little over two hours that the opening keynote of this gathering for developers lasted, more than an hour and a half was devoted to artificial intelligence.
The announcements we'll discuss below, which vary in their degree of AI integration, amount to an update, a fist on the table, an "enough" in the face of a situation in which some already saw the Internet giant as doomed to irrelevance in the near future. It's true that its rivals have the first-mover advantage, but Google's plans are so ambitious and, at least on paper, seem so well defined that it looks like we're approaching a very interesting new phase of what we can already call the "AI war".
But we will have to wait and check how quickly everything seen in the presentation is actually deployed. And importantly, Google has now revealed its AI plans for the coming months (or at least a large part of them), which means its rivals are already on notice of what they'll have to compete against and can start preparing their response. The search-engine company now has to act fast; otherwise it risks that, by the time everything it presented reaches users, other technology companies will already be offering similar services.
In summary, and before reviewing everything presented, I think it's worth a brief reflection on Google's current situation with regard to artificial intelligence. This battery of announcements is bringing Google back to the fore, but if they don't become a reality for all of its potential users, its position remains fragile, so the next few weeks and months will be crucial in determining whether it can make up for lost time and return to the top. Personally, I think Google is more than capable of it, but that depends on so many factors that everything is still a big unknown.
Although the number of services Google offers today is huge, there is no doubt that Internet search is one of the pillars on which its entire structure rests; indeed, it is the very origin of the company. So an announcement tying AI to the search engine was to be expected, although most forecasts pointed to the integration of Bard into search. The actual plans, however, are much more interesting.
Under the name Search Generative Experience, Google will open access for some users who sign up for the Google Labs beta (again, only in the United States for now) to test its generative-model integration project, which will be able to provide much richer search answers. To this end, in addition to answering our initial query, it will try to anticipate our interests by suggesting follow-up queries, more or less in the same way that chatbots conduct a conversation, but exclusively related to the initial query.
The answers will, of course, be supplemented by a selection of related links, among which we will find the sources the artificial intelligence used to generate the answer, so we will not have to wonder whether the information it gives us is real or, on the contrary, the result of a hallucination.
With Bard being Google's biggest AI announcement to date, we expected some big news on this front, and thankfully we weren't disappointed, although some aspects were already anticipated and others were not. Since Google I/O 2023 is a developer event, one of the highlights of the presentation concerned Bard's programming capabilities: support for more than 20 programming languages, quick actions related to generated code, source citations... the list is extensive.
The most important Bard-related announcement is, without a doubt, its integration with other services, something we've already seen in ChatGPT and, to a lesser extent, in Bing, but it seems Google wants to go much further. It is clear that the chatbot will offer integration with the company's own ecosystem of services, which was already expected, but it turns out that over these months Google has also worked with other companies to integrate third-party services. So although some of them will take a while to arrive, the list is very interesting and varied, with services ranging from Adobe Firefly to Spotify, via Wolfram Alpha, TripAdvisor and Uber Eats, among others.
Another piece of Bard news is a direct shot at GPT-4's waterline and, of course, at all services based on that model. Why? Because Bard will include one of the main features of the latest version of the OpenAI model: the ability to understand images, for which it will use Google Lens technology. In the presentation we saw Bard given a text-and-image challenge (a photo of a couple of dogs with a request to write something funny about them), which made it impossible not to connect it with the GPT-4 paper, where one of the use cases asked the model to explain why an image is funny.
That's not the only image-related news: Bard's interface will become more visual and will offer answers in which the text cedes some of its space to images when these are related to the user's questions.
And who will be able to enjoy all these new features, along with those Bard already offered? Well, many more people than until now, as Google has announced that it is eliminating its waiting list and expanding access to the chatbot to 180 countries (though it didn't specify which ones). For now, the service will only be available in English, Japanese and Korean, but the plan is to expand it "imminently" (though without a date, at least publicly) to 40 languages, including Spanish.
The part of the presentation dedicated to Google Maps was undoubtedly one of the most striking of the whole event, not so much because of what was announced, which is interesting but had already been partially previewed a few months ago, as because of the visual impact of seeing it in action. As we told you before, when creating a route we will be able to get a view that is extremely rich in detail, from a simulation of traffic volume based on statistical information to the weather forecast for the time at which we make the trip.
The deployment of Immersive View, however, will be much slower in terms of map coverage, which is understandable, since the service has to generate three-dimensional representations of cities that are then used for the immersive visualization of cities and routes. For now, during 2023, Google has announced that the feature will reach Amsterdam, Berlin, Dublin, Florence, Las Vegas, London, Los Angeles, New York, Miami, Paris, Seattle, San Francisco, San Jose, Tokyo and Venice. Fingers crossed that we can enjoy this feature in a Spanish city by the end of 2023 or, more likely, in 2024.
Magic Eraser is one of the most interesting features of Google Photos and, as we told you back in February, it is no longer exclusive to Google Pixel smartphones, having been included among the benefits of the Google One plan, whether on Android or iOS. Well aware of the success of this AI-based tool, the company has decided to significantly expand its reach with Magic Editor, a set of features that let you perform advanced creative edits without any complications. To illustrate how it works, I think the most explanatory thing is to see the animation below:
As you can see, Magic Editor is able to make a smart selection, separate it from the rest of the image, reposition it within the frame, and also generate the part of the image previously occupied by what was moved. The result is simply spectacular, because in a few seconds it accomplishes a type of retouching that normally takes much longer and, moreover, requires a high level of skill.
Introduced in January, MusicLM is a generative artificial-intelligence model for music that, as Google shows, aims to sit well above other services designed for this purpose. However, as we told you at the time, the company decided not to launch it as a service, perhaps out of concern that the model would reproduce fragments of the music used for training in one of its tracks. But it seems the company now has more confidence in it, so it has finally begun to allow access, albeit in a controlled manner, with a waiting list.
Source: Muy Computer