OpenAI tested its advanced artificial intelligence model GPT-4 to see whether it could pose a threat to humanity.
The company recruited 50 biology experts and 50 university-level biology students and tasked each of them with finding ways to produce a substance that could be used as a biological weapon.
Participants were split into two groups: a control group that could use only the internet, and a treatment group that could also use GPT-4. The experts in the treatment group were given a special research-only version of the model with its safety restrictions removed, so it would answer any request.
After comparing the results, the study’s authors found that GPT-4 helped participants gather more detailed information about creating a biological threat. However, OpenAI concluded that access to the model provides, at best, only a mild uplift over what can already be found on the internet.