OpenAI Establishes Special Team to Tackle AI’s Catastrophic Risks

October 27, 2023 – OpenAI announced on Thursday that it is assembling a new team dedicated to mitigating the “catastrophic risks” of artificial intelligence. The team will be responsible for tracking, evaluating, forecasting, and protecting against major risks that advanced AI models may pose.

OpenAI believes that frontier AI models, which will exceed the capabilities of today’s most advanced systems, have the potential to benefit all of humanity. However, these advances also carry increasingly severe risks. The newly formed team, called Preparedness, will work to reduce a range of catastrophic risks, including chemical, biological, radiological, and nuclear threats, as well as the potential hazards of AI “self-replication.”

The company stresses the need to ensure it has the understanding and infrastructure required for the safety of highly capable AI systems. The team will be led by Aleksander Madry, former director of the MIT Center for Deployable Machine Learning, and will develop and maintain a “Risk-Informed Development Policy” outlining how OpenAI evaluates and monitors frontier AI models.

Earlier this year, in May, the nonprofit Center for AI Safety released a brief statement asserting that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” The 22-word statement has been signed by numerous influential figures in the AI industry, including OpenAI CEO Sam Altman.