OpenAI Establishes New Team to Tackle Core Technical Challenges in Controlling Superintelligent AI

July 7, 2023 – OpenAI today announced the formation of a new team, co-led by co-founder and Chief Scientist Ilya Sutskever and Alignment Lead Jan Leike, dedicated to guiding and controlling superintelligent AI systems. The effort has been given top priority and authorized to use 20% of the company's computational resources. Its goal is to solve the core technical challenges of controlling superintelligent AI within four years, a timeline the team says is needed to achieve the scientific and technological breakthroughs required to guide and control AI systems far smarter than humans.

Sutskever and Leike believe that superintelligent AI could arrive within this decade. They argue it could be the most consequential technology humanity has ever invented, helping solve problems that today demand vast human effort. But they also acknowledge the risks: its immense power could lead to the disempowerment of humanity or even human extinction.

The team acknowledges that no method currently exists to reliably steer or control a potentially superintelligent AI and ensure it follows human intent. Today's techniques, such as reinforcement learning from human feedback, depend on humans' ability to supervise AI; there is as yet no reliable way to oversee systems substantially smarter than their human supervisors.

OpenAI plans to build an "automated alignment researcher," an AI system of roughly human-level capability, then scale it up with large amounts of compute and iteratively refine it to align superintelligence. To get there, researchers must develop scalable training methods, validate the resulting models, and ultimately subject them to a series of controllability tests.

Notably, the researchers plan to use AI systems themselves to assist in evaluating other AI systems, an approach known as scalable supervision. They also aim to understand and control how models generalize that supervision to tasks human researchers cannot directly oversee.
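The intuition behind scalable supervision can be illustrated with a toy sketch: rather than a human grading every model output, an automated evaluator scores outputs and escalates only the uncertain cases to human reviewers. Everything below is a hypothetical illustration (the evaluator is a trivial placeholder heuristic), not OpenAI's actual implementation.

```python
# Toy sketch of scalable supervision (illustrative only, not a real API):
# an automated evaluator screens model outputs, and only low-confidence
# cases are escalated to scarce human reviewers.

def evaluator_score(answer: str) -> float:
    """Hypothetical stand-in for an AI judge: returns a score in [0, 1].
    A trivial length heuristic is used purely for illustration."""
    return min(len(answer) / 100.0, 1.0)

def scalable_review(answers, threshold=0.5):
    """Auto-approve confident cases; flag the rest for human review."""
    approved, escalated = [], []
    for a in answers:
        if evaluator_score(a) >= threshold:
            approved.append(a)
        else:
            escalated.append(a)
    return approved, escalated

# A long answer clears the threshold; a short one is escalated to a human.
approved, escalated = scalable_review(["x" * 80, "short"])
```

The design point is leverage: human attention is spent only where the automated evaluator is unsure, which is what lets oversight scale to volumes of output no human team could review directly.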

Sutskever and Leike emphasize that as the research progresses, the focus of their future studies may change, and new research domains may emerge.

The new team is actively recruiting machine learning researchers and engineers, as Sutskever and Leike see expanding its expertise as critical to meeting the challenges of superintelligent AI. They plan to share the team's research findings broadly, fostering collaborative progress across the industry.

The work of this new team complements OpenAI’s existing efforts to enhance the safety of current models like ChatGPT, as well as to understand and mitigate the risks associated with artificial intelligence, such as misuse, economic disruption, misinformation, bias and discrimination, addiction, and overreliance. While the new team will primarily focus on the machine learning challenges of aligning superintelligent AI systems with human intent, OpenAI is actively collaborating with interdisciplinary experts to address related socio-technical issues, ensuring that their technological solutions consider broader human and societal concerns.
