May 24, 2025 – At Anthropic's inaugural developer event, "Code with Claude," held in San Francisco, CEO Dario Amodei made a striking claim: today's AI models may experience "hallucinations" (instances in which an AI presents fabricated content as factual) less frequently than humans do.
Amodei underscored that such hallucinations are not a roadblock for Anthropic’s pursuit of artificial general intelligence (AGI). “It hinges on the metrics you apply, but I suspect these models hallucinate less often than humans, albeit in more unexpected ways,” he remarked. Known for his optimism about AGI, Amodei added, “People often search for the boundaries of AI’s capabilities, but I don’t see any constraints on the horizon.”

However, not everyone shares this perspective. This week, Demis Hassabis, CEO of Google DeepMind, criticized current AI models as "riddled with flaws," noting that they frequently fail to answer even basic questions accurately. Moreover, there are indications that newer models, such as OpenAI's o3 and o4-mini, exhibit higher hallucination rates on complex reasoning tasks than their predecessors, a phenomenon that even OpenAI struggles to explain.
Amodei also acknowledged that humans are prone to errors, suggesting that AI mistakes don't necessarily reflect a lack of intelligence. Still, he conceded that AI's tendency to present incorrect information with high confidence remains a genuine problem.