December 8, 2025 – Demis Hassabis, CEO of Google DeepMind and fresh off widespread acclaim for Gemini 3, has made a significant statement about the path to Artificial General Intelligence (AGI). As reported by Business Insider yesterday, Hassabis emphasized that scaling up AI models is a crucial step towards achieving AGI.
In Silicon Valley, an ongoing debate has been raging about the future direction of AI in light of the scaling laws. These laws, which describe the predictable relationship between a machine-learning model's performance, its size, the size of the training dataset, and the available computing resources, are considered a core principle of large-model pre-training. In simple terms, they suggest that a larger model, more data, and longer training lead to a smarter AI.
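The "predictable relationship" here is typically a power law: loss falls smoothly, but ever more slowly, as model size grows. The sketch below illustrates the general shape; the constants (a, alpha, and the irreducible loss floor) are illustrative placeholders, not measured values from any particular model.

```python
def predicted_loss(n_params: float,
                   a: float = 406.4,
                   alpha: float = 0.34,
                   l_inf: float = 1.69) -> float:
    """Toy power-law scaling curve: L(N) = L_inf + a / N**alpha.

    The constants are illustrative; real fits are estimated
    empirically from many training runs.
    """
    return l_inf + a / n_params ** alpha

# Loss keeps falling as parameters grow 1B -> 10B -> 100B,
# but each step buys less than the one before.
for n in (1e9, 1e10, 1e11):
    print(f"{n:.0e} params -> predicted loss {predicted_loss(n):.3f}")
```

The key property the debate turns on is that the curve never jumps: improvement is smooth and predictable, which is exactly what makes "just scale it up" a plannable strategy.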

Hassabis stated last week, "We need to push the current scaling of AI to its limits. It will eventually become a key component of AGI, and might even form the entire AGI system." However, he also expressed doubts: while scaling might carry AI toward AGI, he suspects that one or two additional breakthroughs may still be required to fully achieve it.
The scaling laws are not without their limits. The total amount of publicly available training data is finite. Moreover, increasing computing power requires building more data centers, which drives up training costs and strains the environment. Some AI experts are also concerned that continued investment in scaling by large-language-model companies will run into diminishing returns.
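The diminishing-returns worry follows directly from the power-law form itself: each order-of-magnitude increase in scale removes a smaller absolute slice of loss than the last. A small numeric sketch, again with illustrative constants rather than measured ones:

```python
def reducible_loss(n_params: float,
                   a: float = 406.4,
                   alpha: float = 0.34) -> float:
    """Reducible part of a toy power-law loss, a / N**alpha.

    Constants are illustrative, not fitted values.
    """
    return a / n_params ** alpha

sizes = [1e9, 1e10, 1e11, 1e12]
losses = [reducible_loss(n) for n in sizes]
# Absolute loss removed by each successive 10x in parameters
gains = [prev - nxt for prev, nxt in zip(losses, losses[1:])]
for n, g in zip(sizes[1:], gains):
    print(f"scaling to {n:.0e} params removes {g:.3f} loss")
```

Each 10x step costs roughly 10x more compute yet removes a strictly smaller amount of loss than the previous step, which is the economic core of the skeptics' argument.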
Meanwhile, a different voice has emerged in Silicon Valley. Yann LeCun, the former chief AI scientist at Meta who recently announced his departure to start a new venture, believes that the AI industry cannot rely solely on scaling laws. In April this year, at the National University of Singapore (NUS), he said, “Most truly interesting problems perform extremely poorly under scaling laws. You can’t simply assume that piling on data and computing power will produce a smarter AI.”
It is reported that LeCun left Meta to build an AI system based on a "world model" that learns from spatial data rather than language data, an approach positioned as an alternative to large language models. As the world's leading AI companies continue to invest heavily in the pursuit of AGI, the debate over the role of scaling laws and alternative approaches is set to continue.
