December 29, 2025 – OpenAI is searching for a new Head of Safety and Risk Prevention. The role is crucial as the company seeks to anticipate potential harms and risks of misuse associated with its models, and it will shape OpenAI's overall safety strategy. The recruitment drive comes amid a year-end flurry of accusations about ChatGPT's impact on users' mental health, including several wrongful-death lawsuits.
In a post on the social platform X, OpenAI CEO Sam Altman acknowledged, "In 2025, we've just begun to see the possible impacts of our models on mental health." He added that as model capabilities improve, many "real-world challenges" will follow, and he called the Head of Safety and Risk Prevention a "vital position at this critical juncture."

According to the job posting, the position pays an annual salary of $555,000 plus equity. The core responsibility is to "lead the development and implementation of the technical strategy for OpenAI's safety and risk prevention framework. This framework outlines how OpenAI tracks and prevents risks associated with technologies with cutting-edge capabilities that could cause new and serious harms." Altman described the job as "extremely stressful," requiring the new hire to dive into high-intensity work almost immediately upon joining.
Over the past two years, OpenAI's safety team has seen repeated turnover. The former head of the function, Aleksander Madry, was reassigned to another role in July 2024. At the time, Altman said the position would be taken over by senior executives Joaquin Quiñonero Candela and Lilian Weng. Weng, however, left the company a few months later, and by July 2025 Quiñonero Candela had also stepped down from the safety and risk prevention team to lead OpenAI's recruitment efforts.
