Images created by Dall-E 3 are now tagged in their metadata. OpenAI wants to make the distinction between real and AI-generated images clearer.
Paying ChatGPT customers have had access to Dall-E 3 since autumn 2023. Anyone who has already experimented with the image generator will confirm that the results can sometimes hardly be distinguished from a real photo. To avoid confusion, OpenAI announced that it will now add tags to images that users create with Dall-E 3.
Don’t worry, this label will not be visible on your artistic creations. OpenAI uses the C2PA standard to embed tags in the image metadata. The additional tag makes an image slightly larger, but according to OpenAI it does not affect the resolution and does not make the images harder to share. OpenAI will introduce the C2PA label starting February 12th.
Fight against deepfakes
The label is intended to help distinguish AI-generated images and works of art from human works. You may not see it with the naked eye, but web tools like Content Credentials allow you to examine an image’s metadata to determine its origin. Unfortunately, according to OpenAI, this is not an all-encompassing solution to deepfakes, as social media platforms often strip metadata when a photo is uploaded. Meta said Tuesday that it will also add labels to all AI-generated content distributed across its social media platforms.
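To make the mechanism concrete, here is a minimal sketch of how a provenance tag can live inside an image file's metadata, and how a re-encode silently discards it. This is not the real C2PA format (actual Content Credentials are cryptographically signed manifests, not plain text fields); it uses ordinary PNG `tEXt` chunks and a hypothetical `provenance` key purely as an illustrative stand-in.

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def chunk(ctype: bytes, data: bytes) -> bytes:
    """Assemble one PNG chunk: length, type, data, CRC."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def minimal_png() -> bytes:
    """A valid 1x1 white RGB PNG, standing in for an AI-generated image."""
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 2, 0, 0, 0)
    raw = b"\x00\xff\xff\xff"  # filter byte + one RGB pixel
    return (PNG_SIG + chunk(b"IHDR", ihdr)
            + chunk(b"IDAT", zlib.compress(raw))
            + chunk(b"IEND", b""))

def add_tag(png: bytes, key: str, value: str) -> bytes:
    """Insert a tEXt chunk before IEND -- a toy stand-in for a C2PA manifest."""
    text = chunk(b"tEXt", key.encode() + b"\x00" + value.encode())
    iend = chunk(b"IEND", b"")
    return png[:-len(iend)] + text + iend

def read_tags(png: bytes) -> dict:
    """Walk the chunk list and collect tEXt entries, like a metadata viewer."""
    tags, pos = {}, len(PNG_SIG)
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        if ctype == b"tEXt":
            k, _, v = png[pos + 8:pos + 8 + length].partition(b"\x00")
            tags[k.decode()] = v.decode()
        pos += 12 + length  # 4 len + 4 type + data + 4 CRC
    return tags

def strip_metadata(png: bytes) -> bytes:
    """Keep only the chunks needed to display the image, the way a
    platform re-encode on upload discards provenance tags."""
    out, pos = PNG_SIG, len(PNG_SIG)
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        if png[pos + 4:pos + 8] in (b"IHDR", b"IDAT", b"IEND"):
            out += png[pos:pos + 12 + length]
        pos += 12 + length
    return out

tagged = add_tag(minimal_png(), "provenance", "dall-e-3")
print(read_tags(tagged))                   # {'provenance': 'dall-e-3'}
print(read_tags(strip_metadata(tagged)))   # {} -- the tag is gone
```

The size difference between the tagged and untagged bytes also mirrors OpenAI's note that the label makes an image slightly larger without touching its pixels: the `IDAT` image data is byte-for-byte identical in both files.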
There are other ways to identify AI-generated content. Google’s DeepMind team has developed a watermark for AI images that is likewise invisible to the human eye but can be recognized by machines.