AI on the Verge: Former OpenAI Exec Says Machines Will Soon Do All Human Computer Tasks

November 05, 2024 – According to a November 3 report by Business Insider, Miles Brundage, the former head of policy research and "AGI (Artificial General Intelligence) preparedness" at OpenAI, has stated that within the coming years the industry could develop systems capable of remotely completing nearly any task a human can perform on a computer, including operating a mouse and keyboard and even appearing as a "human-like" presence in video chats.

The timeline for achieving AGI is a hot topic within the industry, particularly at companies like OpenAI, and prominent figures in the field believe the technology could arrive within the next few years. John Schulman, an OpenAI co-founder and research scientist who left the company in August, has said AGI could be a few years away. Dario Amodei, CEO of OpenAI competitor Anthropic, predicts that some version of the technology could emerge by 2026.

Brundage, who announced his departure from OpenAI last month after more than six years with the company, has a deep understanding of OpenAI's AGI timeline. During his tenure, he advised the company's leadership on AGI-related matters and contributed to key safety practices at OpenAI, including the introduction of external "red teaming," in which outside experts probe products for vulnerabilities.

Recently, several senior safety researchers and executives have left OpenAI, with some raising questions about how the company balances AGI development against safety. However, Brundage clarified that his departure was not driven by specific safety concerns. "I don't have confidence that other labs would be fully on top of these issues either," he said.

In his departure announcement, Brundage said he hopes to have a greater impact as a policy researcher or advocate in the non-profit sector. "First, I couldn't do all the work I wanted to do, especially on critical cross-industry issues. It's not just a matter of what happens inside OpenAI, but also of what regulations ought to be introduced," he explained.

"Second, I want to be more independent and less biased. I don't want my views to be dismissed as corporate propaganda simply because I'm an insider," he added.
