With the launch of GPT-4, OpenAI is putting pressure on the competition, which may push rivals to rush out releases without proper safety restrictions.
With the launch of GPT-4 last week, OpenAI made its ambitions as a leading AI vendor clear. The underlying technology, based on natural language processing (NLP), took a big step forward last year with the launch of ChatGPT. Suddenly, the technology was within everyone's reach and genuinely usable, often with impressive results.
Microsoft quickly hitched its wagon to OpenAI with a billion-dollar investment, while the rest of the industry is going its own way and working feverishly on competing AI models. Google, for example, has its own solution, Bard, which it wants to link to its search engine and integrate into Google Workspace.
Ilya Sutskever, co-founder and chief scientist at OpenAI, told The Verge that GPT-4 wasn't easy to develop. "You see there are many companies today that want to do the same thing. In terms of competition, you can see that the field is maturing."
GPT-4 and safety guardrails
He points out the risks of such AI models. "We haven't disclosed much about the inner workings of GPT-4. We're doing this for security reasons." According to Sutskever, OpenAI has put a lot of time and effort into setting up limits. "Other companies will not apply such restrictions. As a society, we now have very little time to respond. How are we supposed to fix this? How should we handle this?"
OpenAI builds safety guardrails into GPT-4 to ensure that no dangerous or privacy-sensitive information is shared. For example, you can't ask how to use kitchen appliances to make dangerous chemicals.
"I am concerned that such models will be used for mass disinformation," said Sam Altman, CEO of OpenAI. "As they get better at writing computer code, they can be used for offensive cyberattacks."
Public debate
OpenAI was founded in 2015 as a non-profit organization focused on the safe and transparent development of artificial intelligence. In 2019, it switched to a capped-profit model, with Microsoft as its largest investor.
The CEO of OpenAI wonders how future generations will look back on this period. "We need time for our institutions to figure out what needs to be done. Regulation will be essential. We need time now to understand what is happening, how people are using these tools, and how society can evolve alongside them."