OpenAI, the Microsoft-backed developer of ChatGPT, is now offering up to $20,000 to security researchers to help the company distinguish between good-faith hacking and malicious attacks, weeks after it suffered a security incident last month.
OpenAI has launched a bug bounty programme for ChatGPT and other products, saying the initial priority rating for most findings will use the 'Bugcrowd Vulnerability Rating Taxonomy'.
"Our rewards range from $200 for low-severity findings to up to $20,000 for exceptional discoveries," the AI research company said.
"However, vulnerability priority and reward may be modified based on likelihood or impact at OpenAI's sole discretion. In cases of downgraded issues, researchers will receive a detailed explanation," it added.
Security researchers are not, however, authorised to conduct security testing on plugins created by other people.
OpenAI is also asking ethical hackers to safeguard confidential OpenAI corporate information that may be exposed through third parties.
Some examples in this category include Google Workspace, Asana, Trello, Jira, Monday.com, Zendesk, Salesforce and Stripe.
"You are not authorised to perform additional security testing against these companies. Testing is limited to looking for confidential OpenAI information while following all laws and applicable terms of service. These companies are examples, and OpenAI does not necessarily do business with them," the company said.
Last month, OpenAI admitted that some users' payment information may have been exposed when it took ChatGPT offline owing to a bug.
According to the company, it took ChatGPT offline due to a bug in an open-source library which allowed some users to see titles from another active user's chat history.
OpenAI discovered that the same bug may have caused the unintentional visibility of "payment-related information of 1.2 per cent of the ChatGPT Plus subscribers who were active during a specific nine-hour window".
The bug was traced to "redis-py", an open-source Redis client library.
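The class of bug described above, where a request cancelled mid-flight leaves an orphaned response on a shared connection so that the next caller reads data meant for someone else, can be illustrated with a hypothetical sketch. This is not the actual redis-py code; it is a minimal model assuming responses are matched to requests purely by order on a shared connection:

```python
from collections import deque

# Hypothetical minimal model (NOT the actual redis-py code) of a shared
# connection where responses are matched to requests purely by arrival order.
class SharedConnection:
    def __init__(self):
        self._responses = deque()

    def send(self, key):
        # The "server" immediately queues the cached value for this key.
        self._responses.append(f"cached-data-for:{key}")

    def read(self):
        # Each read pops the oldest unread response off the connection.
        return self._responses.popleft()

def request(conn, key, cancelled_after_send=False):
    conn.send(key)
    if cancelled_after_send:
        # The request is cancelled after sending but before its response
        # is consumed, leaving an orphaned response in the buffer.
        return None
    return conn.read()

conn = SharedConnection()
# User A's request is cancelled before reading its response.
request(conn, "user_a_session", cancelled_after_send=True)
# User B's request now consumes the orphaned response meant for user A.
leaked = request(conn, "user_b_session")
print(leaked)  # → cached-data-for:user_a_session
```

In this toy model, user B receives user A's cached data, which mirrors the kind of cross-user visibility of chat titles and payment details that OpenAI reported.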