
ChatGPT can analyze, paint and surf in one conversation: the new capability tested

  • November 10, 2023


An illustration of ChatGPT, illustrating itself

The latest version of GPT-4 in ChatGPT can now analyze documents, browse the web and generate images in a single conversation while preserving context. The chatbot suddenly becomes much more versatile.

OpenAI is combining all the features it announced for ChatGPT in recent weeks into a single experience. In October, for example, the company launched Dall-E 3, an image generator that produces detailed images based on a description. To use Dall-E 3, you had to start a new conversation in ChatGPT and specify in advance that you wanted to talk to Dall-E rather than to the standard GPT-4 model.
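That separation is also visible on the developer side: in the OpenAI API, Dall-E 3 sits behind its own images endpoint rather than the chat endpoint. Below is a minimal sketch using the official openai Python package, assuming an API key in the environment; the prompt is purely illustrative.

    from openai import OpenAI

    # The client reads OPENAI_API_KEY from the environment.
    client = OpenAI()

    # Dall-E 3 is called through the images endpoint, separate from the chat models.
    result = client.images.generate(
        model="dall-e-3",
        prompt="A detailed illustration of a friendly chatbot reading a newspaper",  # illustrative prompt
        size="1024x1024",
        n=1,
    )

    # The response contains a URL to the generated image.
    print(result.data[0].url)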

The same applied to the analysis capability introduced later. If you selected the analysis version of GPT-4, ChatGPT could suddenly read and analyze entire PDF documents, but it was not possible to generate Dall-E 3 images in that mode. ChatGPT has also been able to browse the web via Bing for some time, but that functionality was not available in the Dall-E 3 or Analytics versions of the bot. Anyone who had a document or website analyzed, for example, could not then immediately ask for a relevant image based on the result.

That is now possible. Bing, Analytics and Dall-E are no longer three separate GPT-4-based specialists: GPT-4 in ChatGPT now combines all of their skills. Because they are accessible within a single conversation, the context of that conversation carries over from one question to the next.
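For those building on the API, the same idea can be approximated with one chat call that exposes several tools at once. The sketch below is a rough illustration, not OpenAI's internal setup: the tool names browse_web, analyze_document and generate_image are hypothetical functions you would implement yourself, and GPT-4 merely decides which one to call while the shared message history preserves the context.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical tool definitions standing in for browsing, document analysis
    # and image generation; a real application would implement these itself.
    tools = [
        {
            "type": "function",
            "function": {
                "name": "browse_web",
                "description": "Fetch and summarize a web page",
                "parameters": {
                    "type": "object",
                    "properties": {"url": {"type": "string"}},
                    "required": ["url"],
                },
            },
        },
        {
            "type": "function",
            "function": {
                "name": "analyze_document",
                "description": "Extract text and statistics from an uploaded PDF",
                "parameters": {
                    "type": "object",
                    "properties": {"file_id": {"type": "string"}},
                    "required": ["file_id"],
                },
            },
        },
        {
            "type": "function",
            "function": {
                "name": "generate_image",
                "description": "Create an image from a text prompt",
                "parameters": {
                    "type": "object",
                    "properties": {"prompt": {"type": "string"}},
                    "required": ["prompt"],
                },
            },
        },
    ]

    # One conversation, several tools: the model picks the right tool per request,
    # and the growing message list keeps the context between turns.
    messages = [
        {"role": "user", "content": "Summarize https://itdaily.be and then draw its typical reader."}
    ]
    response = client.chat.completions.create(
        model="gpt-4",
        messages=messages,
        tools=tools,
    )
    print(response.choices[0].message.tool_calls)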

Putting it to work

We experimented with a number of prompts. For example, we asked ChatGPT for a description of ITdaily, which led it to browse our site via Bing and also consult the corresponding Wikipedia page. Next, we asked for a description of you, the typical reader, which the chatbot generated based on those results. Finally, we asked it to create an image of such a reader. Taking the context into account, we got a fitting picture. Until today this was not possible: we first had to have a conversation about ITdaily and then write a prompt for Dall-E ourselves.

A second test makes the new abilities even clearer. Until now, you could:

  • Upload an image and request a description in GPT-4
  • Upload this image to Analytics for color analysis and a histogram
  • Describe this image to Dall-E to create a similar photo

GPT-4 couldn’t analyze the colors, Analytics couldn’t understand the content of the image, and Dall-E couldn’t see the original image at all, only read a description of it. Today you can share an image, request a description, get an analysis of its colors, and finally request a new, similar image, without having to copy descriptions between modes yourself.
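To give an idea of what that color analysis step involves, here is a rough sketch of the kind of per-channel histogram the Analytics mode can produce behind the scenes, using Pillow and matplotlib; the filename is made up for illustration.

    from PIL import Image
    import matplotlib.pyplot as plt

    # Load the uploaded image (hypothetical filename) and force RGB mode.
    image = Image.open("uploaded_photo.jpg").convert("RGB")

    # Pillow returns 768 counts: 256 bins for each of the R, G and B channels.
    counts = image.histogram()
    channels = {
        "red": counts[0:256],
        "green": counts[256:512],
        "blue": counts[512:768],
    }

    # Plot one curve per color channel.
    plt.figure(figsize=(8, 4))
    for name, values in channels.items():
        plt.plot(range(256), values, color=name, label=name)
    plt.xlabel("Pixel value")
    plt.ylabel("Number of pixels")
    plt.title("Per-channel color histogram")
    plt.legend()
    plt.show()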

Of course, you can also move faster: upload an image and immediately ask for a similar one. ChatGPT is now smart enough to perform the necessary analysis in the background. In the example below, we asked for a similar image right when uploading.

1+1=3

Bringing together the various generative AI capabilities that OpenAI has built over the last few weeks and months is an important step. One plus one equals at least three here, as the different abilities reinforce and interact with each other. For users, the AI tool has suddenly become a lot more useful: you no longer have to think about which subversion of GPT-4 is best suited to a problem. Ask a question and ChatGPT does the rest. The new capability is currently only available to paying users.

Source: IT Daily
