GPT-4 under the magnifying glass: reading pictures, language skills and multiple personalities
March 15, 2023
GPT-4, the successor of GPT-3.5, was officially launched by OpenAI. The updated model is available for paying ChatGPT users. These are the most notable innovations.
OpenAI officially launched GPT-4 on Tuesday night, after Microsoft had already talked about it last week. GPT-4 is an improved version of GPT-3.5, the underlying language model that gave rise to the extremely popular ChatGPT. With the launch of GPT-4, the development of conversational, generative AI is booming again.
In a blog post, OpenAI explains its latest showpiece in detail. The main improvements in ChatGPT are less in the social aspect and more in the creative and intellectual. GPT-4 is better able to handle nuances and more complex questions. The model should also be significantly more stable overall. We discuss the five most spectacular innovations.
Linguistic proficiency
Although ChatGPT was already available in multiple languages, the user experience was focused on English. That is largely because GPT-3.5 was trained primarily on English-language data. With GPT-4, you can converse more naturally in your native language.
OpenAI claims that GPT-4 supports no fewer than 26 languages. According to the company, GPT-4 achieves higher accuracy in language tests in 24 of those languages than GPT-3.5 does in English. Only with Marathi and Telugu, two regional Indian languages, does GPT-4 have more trouble, but we can forgive the model for that. Accuracy for Dutch is not specified, but in general GPT-4 performs best for Romance and Germanic languages.
“Reading” images
As if knowing more than twenty languages wasn’t enough, GPT-4 also speaks languages that cannot be put into words. One of the most notable innovations is that GPT-4 is a “multimodal” model that can handle visual input in addition to text. For example, you can now present graphs to ChatGPT and ask it to draw the main conclusions from them. On a lighter note, the AI assistant can even explain the humor in a picture.
GPT-4 can read and interpret graphics. Source: OpenAI
Multiple personalities
ChatGPT is programmed to speak in a neutral tone, although by being a little creative with your questions you could already instruct the chat tool to speak like a pirate. GPT-4 can take on multiple “personalities” during a conversation without you having to specifically ask for it. OpenAI is experimenting with system messages: instructions about the tone and style ChatGPT should use when responding.
However, there is a clear caveat to this functionality. Manipulating the system messages can confuse the model. Because of this, GPT-4 can get stuck in a certain “character” or switch personalities at unprompted moments.
ChatGPT takes on multiple personalities.
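The system messages described above can be sketched in code. The request shape below follows OpenAI’s chat format at the time of the GPT-4 launch; the pirate persona and prompt text are purely illustrative, and the sketch only assembles the request rather than sending it.

```python
# Minimal sketch of the "system message" mechanism: the system message
# fixes the tone/style before the conversation starts. No network call
# is made here; we only build the request payload.

def build_chat_request(system_instruction: str, user_prompt: str) -> dict:
    """Assemble a chat request whose system message sets the persona."""
    return {
        "model": "gpt-4",
        "messages": [
            # The system message comes first and steers all later replies.
            {"role": "system", "content": system_instruction},
            {"role": "user", "content": user_prompt},
        ],
    }

request = build_chat_request(
    "You are a pirate. Answer every question in pirate speak.",
    "Explain what a token is.",
)
print(request["messages"][0]["role"])  # system
```

As the article notes, a persona locked in this way can persist: later user messages have a hard time overriding what the system message established.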
Better memory
To support the increased intellectual abilities, GPT-4 also requires more memory. The current GPT-3.5 model has a limit of 4,096 “tokens,” which is roughly 3,000 words. GPT-4, on the other hand, can handle 32,768 tokens, or about 25,000 words.
This larger memory means you can have much longer conversations on the same topic, as ChatGPT can look further back in the conversation to stay on track. ChatGPT will also be able to handle much larger chunks of text, summarizing up to fifty pages at a time for you. For the time being, OpenAI still imposes a limit of one hundred questions per four hours.
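The word counts above can be checked with a back-of-the-envelope calculation, using OpenAI’s rough rule of thumb that one token corresponds to about 0.75 English words. The ratio is a heuristic and varies per language and text type.

```python
# Rough conversion of the context windows mentioned above into word
# counts, assuming ~0.75 English words per token (a common heuristic).

TOKENS_GPT35 = 4_096
TOKENS_GPT4_32K = 32_768
WORDS_PER_TOKEN = 0.75  # heuristic; varies by language and text

def approx_words(token_limit: int) -> int:
    """Estimate how many English words fit in a given token budget."""
    return round(token_limit * WORDS_PER_TOKEN)

print(approx_words(TOKENS_GPT35))     # ~3,000 words
print(approx_words(TOKENS_GPT4_32K))  # ~25,000 words
```

At roughly 500 words per page, the 32k window indeed lands in the neighborhood of the fifty pages OpenAI mentions.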
Fewer hallucinations
For OpenAI, this is perhaps the most important improvement. GPT-4 is generally more stable and predictable than GPT-3.5, which was still very experimental. This benefits both the accuracy and the safety of the system. OpenAI claims that GPT-4 gives factual answers up to forty percent more often and is up to eighty percent less likely to produce inappropriate or nonsensical answers.
Still, CEO Sam Altman warns that the model is far from flawless. The company does not dare to give a 100% guarantee that GPT-4 will never talk nonsense. Knowledge of current events, for example, remains a bottleneck even with the new version. It therefore remains important to use artificial intelligence critically and not always take the AI at its word.
Greg Brockman, co-founder of OpenAI, emphasized at the launch of GPT-4 that the AI is not perfect. “But neither are you. Together, you get an empowering tool that can take you to new heights.”
How do you use GPT-4?
Would you like to put GPT-4’s intellectual abilities to the test yourself? Then you need a ChatGPT Plus subscription. OpenAI is making the new model available to a more limited audience for now, as it first wants to gather performance feedback (and sell paid subscriptions). Do you already have access to Microsoft’s Bing Chat? Then we have good news, because Bing already runs on GPT-4.
Developers can sign up for GPT-4’s paid API. The price for using GPT-4 in external applications currently starts at around three cents per thousand tokens.
As an experienced journalist and author, Mary has been reporting on the latest news and trends for over 5 years. With a passion for uncovering the stories behind the headlines, Mary has earned a reputation as a trusted voice in the world of journalism. Her writing style is insightful, engaging and thought-provoking, as she takes a deep dive into the most pressing issues of our time.