Through Bugcrowd, hackers can earn anywhere from $200 to $20,000 for finding a vulnerability.
OpenAI, the company behind ChatGPT and DALL-E, is partnering with bug bounty platform Bugcrowd to put its systems to the test. The platform lets white hat hackers hunt for vulnerabilities, bugs, and security holes in OpenAI's systems. The more serious the flaw, the higher the payout: rewards range from $200 to $20,000.
According to OpenAI, problems within the AI model itself, and anything unrelated to cybersecurity, are out of scope. Bugcrowd notes that issues within a model are not single, discrete bugs: they are rarely immediately fixable and require further research and a broader approach.
Researchers, the white hat hackers, must read the rules carefully before getting started. For example, they must report vulnerabilities through the proper channels, and they may not shut down systems or destroy data. Each reported vulnerability must be kept confidential for up to 90 days, giving OpenAI time to patch it. Rules like these are standard on bug bounty platforms.
It's good news that OpenAI is putting money into improving its systems. Late last month, a privacy bug allowed logged-in users to see another user's chat data. Issues like that need to surface if the service is to mature. At the same time, ChatGPT is under pressure over GDPR rules: Italy has temporarily banned the service, and France, Germany, and Ireland are also considering a possible ban.