With the rapid advancement of artificial intelligence (AI), many users are becoming increasingly concerned about its widespread adoption.
Among the most notable AI developments is ChatGPT, the popular chatbot created by OpenAI.
OpenAI has recognised the need to address potential risks associated with cutting-edge AI models and has established a dedicated team known as ‘Preparedness.’
The Preparedness team's primary focus is to assess and mitigate significant threats posed by artificial intelligence technologies.
These threats span several areas, including chemical, biological, radiological, and nuclear (CBRN) risks.
OpenAI announced the initiative in a blog post titled "Frontier Risk and Preparedness," in which it emphasises the potential of advanced AI models to greatly benefit humanity.
However, it also acknowledges the growing concerns and risks associated with these models.
OpenAI has taken a proactive approach to reduce these risks, appointing Aleksander Madry, the Director of the Massachusetts Institute of Technology's Center for Deployable Machine Learning, to lead the Preparedness team.
This team will closely link capabilities assessment, evaluations, and internal testing for advanced AI models, ranging from those in development to those with capabilities comparable to Artificial General Intelligence (AGI).
Furthermore, the Preparedness team's responsibilities extend to monitoring, evaluating, forecasting, and preparing for various catastrophic risks, such as individualised persuasion, cybersecurity threats, and CBRN threats.
They will also address concerns related to autonomous replication and adaptation (ARA), among other potential risks.
A central aim of the announcement is to highlight the Preparedness team's role in developing a Risk-Informed Development Policy (RDP).
This policy will outline OpenAI's approach to building future AI models, emphasising the organisation's commitment to protective measures and to establishing a robust governance structure for accountability and oversight.
Additionally, OpenAI has launched an AI Preparedness Challenge, inviting ideas and solutions to help prevent catastrophic misuse of AI models.