OpenAI’s GPT-6 to Set New Benchmarks with Estimated 100,000 H100 GPUs for Training

March 1, 2025 – Technology media outlet Smartprix reports that OpenAI may have accidentally revealed the potential scale of its upcoming GPT-6 model in a video introducing GPT-4.5. The slip hints at a computational requirement that could dwarf that of previous models.

At the 2:26 mark of the GPT-4.5 introduction video, a fleeting glimpse of a chat record mentioning “Num GPUs for GPT 6 Training” was captured. The video did not elaborate on this detail, but the mention has sparked speculation that the figure could be unprecedented, potentially pointing to a need for up to 100,000 GPUs for GPT-6’s training.

Previous reports indicate that OpenAI used approximately 10,000 GPUs for the training of GPT-3. As models evolve, the demand for computational resources has been on an upward trajectory.

GPT-4.5, internally codenamed “Orion,” has made significant strides in naturalness and in reducing “hallucinations.” It is speculated to have 3 to 4 trillion parameters. Estimates suggest that GPT-4.5’s training might have utilized between 30,000 and 50,000 NVIDIA H100 GPUs, incurring a training cost of approximately $750 million to $1.5 billion.
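For context, here is a back-of-envelope sketch of how that cost range could line up with the quoted GPU counts. The per-unit H100 price of roughly $25,000–$30,000 is an assumption for illustration, not a figure from the report or from OpenAI:

```python
# Rough check: does the quoted $750M-$1.5B range roughly match
# hardware-acquisition cost for 30,000-50,000 H100s?
# The per-unit price below is an assumption, not a reported figure.

H100_UNIT_PRICE = (25_000, 30_000)   # assumed low/high $ per H100
GPU_COUNTS = (30_000, 50_000)        # low/high cluster size from the estimates above

low = GPU_COUNTS[0] * H100_UNIT_PRICE[0]    # 30,000 x $25K
high = GPU_COUNTS[1] * H100_UNIT_PRICE[1]   # 50,000 x $30K

print(f"Low estimate:  ${low/1e6:,.0f}M")   # ~$750M
print(f"High estimate: ${high/1e9:,.2f}B")  # ~$1.5B
```

By the same arithmetic, a hypothetical 100,000-GPU cluster would land in the $2.5–$3 billion range for hardware alone, which is one way to read the scale the leaked screenshot hints at.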

The true meaning of the “Num” reference in the screenshot remains enigmatic. It could imply “Numerous,” signaling that GPT-6’s training scale will be unprecedented. Alternatively, this could be a clever diversion by OpenAI, reminiscent of their past practice of using “Strawberry” as a codename for the o1 series. Only time will tell the true scope of GPT-6 and the resources required to bring it to fruition.
