AI-generated deepfakes from DALL-E 3 can be debunked faster with OpenAI’s new detection tool.
OpenAI is launching an image detection tool to determine whether an image was generated by DALL-E 3, the company said in a blog post. Deepfakes are AI-manipulated versions of existing photos or videos, often created with malicious intent. To address this problem, OpenAI has joined other major technology companies in the Coalition for Content Provenance and Authenticity (C2PA), which sets technical standards for the provenance of digital content.
Deepfakes are circulating ever more widely on the internet and can have harmful consequences for the people affected. OpenAI wants to counter this with a new detection tool designed to identify AI-generated images. For now, the tool works only with images produced by the company’s own image generator, DALL-E 3.
According to OpenAI, the detection tool correctly identified images generated by DALL-E 3 about 98 percent of the time, while fewer than 0.5 percent of non-AI-generated images were incorrectly flagged as coming from DALL-E 3. The classifier is currently open to an initial group of testers, including research labs and investigative journalism nonprofits.
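To put those figures in context, a quick back-of-the-envelope Bayes calculation shows how the reported rates translate into the probability that a flagged image really came from DALL-E 3. The two rates are OpenAI’s; the 10 percent base rate below is purely an illustrative assumption:

```python
# Back-of-the-envelope Bayes calculation: given OpenAI's reported rates,
# how often is an image flagged as "DALL-E 3" actually from DALL-E 3?
true_positive_rate = 0.98    # reported: DALL-E 3 images correctly flagged
false_positive_rate = 0.005  # reported: non-AI images wrongly flagged
base_rate = 0.10             # ASSUMPTION: share of DALL-E 3 images in the pool

# Bayes' rule: P(DALL-E 3 | flagged)
p_flagged = (true_positive_rate * base_rate
             + false_positive_rate * (1 - base_rate))
precision = true_positive_rate * base_rate / p_flagged

print(f"P(image is DALL-E 3 | flagged) = {precision:.1%}")  # ~95.6%
# At a 1% base rate the same rates yield only ~66%, so the practical
# usefulness of any detector depends heavily on how common AI images are.
```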
Authenticity standards
OpenAI also announced in its blog post that it has joined the steering committee of the Coalition for Content Provenance and Authenticity (C2PA). C2PA is a widely used standard for certifying digital content that has been developed and adopted by a broad range of players, including software companies, camera manufacturers, and online platforms; members include Google, Adobe, Meta, and Microsoft.
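For readers curious what consuming C2PA provenance data might look like, here is a minimal, hypothetical Python sketch. The `read_manifest` helper is a stand-in rather than a real C2PA SDK call, and the manifest fields are a simplified version of the general shape of C2PA claims:

```python
# Minimal sketch of checking C2PA provenance metadata on an image.
# NOTE: `read_manifest` is a hypothetical stand-in for a real C2PA SDK
# call; it returns a simplified example manifest instead of parsing
# the file. "c2pa.created" is a standard C2PA action identifier.

def read_manifest(path: str) -> dict | None:
    """Hypothetical helper: would parse embedded C2PA metadata."""
    return {
        "claim_generator": "DALL-E 3",
        "actions": [{"action": "c2pa.created"}],
    }

def describe_provenance(path: str) -> str:
    manifest = read_manifest(path)
    if manifest is None:
        # Absence of metadata is not proof of authenticity:
        # C2PA data can be stripped from a file.
        return "no provenance metadata found"
    generator = manifest.get("claim_generator", "unknown tool")
    actions = [a["action"] for a in manifest.get("actions", [])]
    return f"generated by {generator}; recorded actions: {actions}"

print(describe_provenance("example.png"))
```

One caveat the sketch encodes in its comments: provenance metadata can be removed from a file, so a missing manifest says nothing about whether an image is authentic.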