OpenAI introduces GPT-4o, multimodal, very fast and for everyone

May 13, 2024


We’ve been waiting for days for the event that OpenAI had planned for today. Last week’s rumors of a possible unveiling of its own AI-based search engine, along with the recent discovery of a model called GPT2 (not to be confused with GPT-2), had led many to believe that today could be the day the company chose to introduce the next major iteration of GPT, its more than successful LLM.

However, presumably to soften the blow a little, the company’s own CEO, Sam Altman, published a tweet denying both possibilities: there would be neither a GPT search engine nor GPT-5, so it was clear that the announcement would be limited, at least mainly, to ChatGPT and GPT-4. This was a little disappointing at first, although of course we couldn’t judge how interesting it would be until the event took place.

Well, the event has now taken place, so we can tell you everything that was presented. And yes, it’s true that it wasn’t a GPT-5 presentation, but we do have a new model, GPT-4o (the letter o, for “omni”, not a zero), which represents a very, very important leap over GPT-4, despite the fact that both share a common base.

So what makes GPT-4o so interesting? The key is in the omni: this is a model that can natively process text, images and sound. Yes, it’s true that we have already seen chatbots capable of handling these three types of information, but until now they have done so through a combination of different models (even if that isn’t visible to the user). With this new OpenAI model, a single model handles all three types of information, which translates into greater efficiency as well as reduced latency by eliminating the hand-offs between different models.

This consolidation of resources into a single model, together with the optimizations GPT-4o brings over GPT-4, translates into performance far beyond what we are used to. In the final part of the (short) presentation we were shown some technical demos (you can find the full video below), and you will see that a more than significant evolutionary leap lies ahead for ChatGPT.

Although they are already visible in these tests, OpenAI has put numbers on the performance improvement and latency reduction that GPT-4o brings. A clear example is the response to audio inputs, which used to accumulate average latencies of 2.8 seconds with GPT-3.5 and 5.4 seconds with GPT-4, and will now drop to an average of 320 milliseconds, which not only speeds up processes but also provides a much more natural level of interaction with the chatbot.

At this point you’re probably thinking that this makes ChatGPT Plus even more interesting, but here comes another surprise OpenAI had in store for us this afternoon: ChatGPT will be updated to GPT-4o for all users, including free ones. There will be a message limit (it is not specified whether it is daily or resets more often), and of course Plus accounts will have significantly higher limits. Even so, giving all users access to this new model is definitely something to be evaluated very positively.

The rollout starts today, but will be gradual. OpenAI states that it will be completed “in a few weeks”, without giving a specific date, although its plans seem to call for a quick move. At this point, however, we have yet to see whether different regulatory frameworks will affect its arrival in certain territories, as may be the case with the European Union.

With the arrival of GPT-4o, ChatGPT also improves significantly in terms of its language capabilities. The new OpenAI model supports more than 50 languages, and the number of tokens needed to handle text in them has also been significantly reduced, an efficiency improvement that will also translate into significant savings for those users who use this model outside of ChatGPT.
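To get a rough feel for what this token reduction means, here is a minimal sketch of ours (not OpenAI’s) using the tiktoken library, comparing GPT-4’s cl100k_base encoding with the o200k_base encoding that recent versions of tiktoken ship for GPT-4o; the sample sentence is arbitrary and the actual savings will vary by language and text.

    # Rough comparison of token counts between GPT-4's and GPT-4o's tokenizers.
    # Requires: pip install tiktoken (a recent version that includes o200k_base).
    import tiktoken

    sample = "¿Cuánto cuesta un billete de tren de Madrid a Sevilla?"  # arbitrary non-English sample

    gpt4_encoding = tiktoken.get_encoding("cl100k_base")   # encoding used by GPT-4 / GPT-3.5
    gpt4o_encoding = tiktoken.get_encoding("o200k_base")   # encoding used by GPT-4o

    old_count = len(gpt4_encoding.encode(sample))
    new_count = len(gpt4o_encoding.encode(sample))

    print(f"GPT-4 tokens:  {old_count}")
    print(f"GPT-4o tokens: {new_count}")
    print(f"Reduction: {100 * (old_count - new_count) / old_count:.1f}%")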

And yes, as you may already have imagined and inferred from the end of the previous paragraph, GPT-4o will also be available through the OpenAI API, which means that developers will be able to use this model in their projects. Of course, the pricing for its use has not yet been published, and this will be an interesting aspect given its ability to process text, images and sound.
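For developers wondering what that will look like in practice, here is a minimal sketch using the official openai Python library, assuming the new model is exposed as gpt-4o through the existing Chat Completions endpoint and that images are passed as image_url content parts, as with earlier vision-capable models; the image URL is a placeholder.

    # Minimal sketch: a combined text + image request to GPT-4o through the OpenAI API.
    # Requires: pip install openai, and the OPENAI_API_KEY environment variable set.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o",  # assumed name under which the new model is exposed
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe what you see in this picture."},
                    {
                        "type": "image_url",
                        "image_url": {"url": "https://example.com/photo.jpg"},  # placeholder image
                    },
                ],
            }
        ],
        max_tokens=300,
    )

    print(response.choices[0].message.content)

Audio input and output are not shown here, since at the time of writing OpenAI has not detailed how they will be exposed through the API.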

OpenAI improves free ChatGPT accounts

The arrival of GPT-4o on all accounts, including free ones, is already great news for users of the latter, but OpenAI had even more surprises in store, so much so that we could almost say that free accounts were, along with the new model, the main protagonists of the presentation. Here is what’s new for them:

  • GPT-4: as hinted at the beginning, in terms of analysis and responsiveness GPT-4o shares its roots with GPT-4. So when free-account users use the new model, they will be making the jump from GPT-3.5, which until now has been the version of the LLM available for those accounts.
  • Model and web answers: until now, ChatGPT’s Internet connection was a feature exclusive to Plus accounts. Now, also with the new model, free accounts will be able to get answers based on information from the Internet, not just on the model’s training data.
  • Data: so far only available for Enterprise accounts, this feature lets users upload data for ChatGPT to analyze, draw the conclusions they are looking for and, additionally, create charts based on it.
  • Photos: since GPT-4o is multimodal, free-account ChatGPT users will also be able to upload their own images to the chatbot for it to analyze based on their prompts.
  • Files: just as it will be possible to upload images for analysis, the same will apply to files, which the chatbot can summarize or transcribe based on our instructions.
  • GPTs and GPT Store: last year OpenAI introduced GPTs, chatbots based on its model but customized with additional parameters, along with its GPT Store. Until now this was also a feature exclusive to paid accounts, but free accounts will now be able to use it as well.
  • Memory: this is one of the chatbot’s most recent features, which we already told you about when it was announced last February.


ChatGPT Desktop

The other big news of this presentation was the long-awaited announcement that ChatGPT will finally have a desktop app for Windows and macOS. Initially, of course, the app will only be available to Plus account users and only on macOS, but OpenAI’s plans include expanding its reach to more users in the coming weeks, as well as releasing a Windows version at some point.

Developing…

Source: Muy Computer
