May 29, 2024 – OpenAI has announced the formation of a Safety and Security Committee on its board of directors. The committee is tasked with advising the full board on critical safety and security decisions related to OpenAI’s projects and operations.
The immediate priority for the committee is to evaluate and enhance OpenAI’s development processes and safeguards over the next 90 days. Following this period, the committee intends to share its recommendations with the full board.
The committee will be chaired by board member Bret Taylor and include Adam D’Angelo, Nicole Seligman, and OpenAI CEO Sam Altman. Additionally, several OpenAI technical and policy experts, including Aleksander Madry, Lilian Weng, John Schulman, Matt Knight, and Jakub Pachocki, will be part of the committee.
OpenAI also plans to retain and consult external security, safety, and technical experts, including former cybersecurity officials Rob Joyce and John Carlin, to support this work.
Separately, OpenAI disclosed that it has begun training its next frontier model, which it expects to mark significant progress on the path toward Artificial General Intelligence (AGI).