June 26, 2025 – During a live episode of the podcast Hard Fork on Wednesday, OpenAI’s CEO Sam Altman and COO Brad Lightcap took center stage to discuss a range of pressing issues facing the company.
Early in the recording, Altman wasted no time in addressing the ongoing legal battle between The New York Times and OpenAI, along with its key investor, Microsoft. The media giant has filed a lawsuit alleging that OpenAI used its content without authorization to train its large language models. A particularly contentious point in the suit, as highlighted by Altman, is the demand from The New York Times' legal team that OpenAI retain usage data from ChatGPT users and API clients.
“We have immense respect for The New York Times. It’s a great institution,” Altman stated. “But their current stance is that we must keep chat logs even when users are in private mode or have explicitly requested deletion. On this issue, our position is crystal clear and unwavering.”

Altman then turned the conversation to the podcast hosts, asking for their thoughts on the lawsuit. The hosts, who admitted to contributing articles to The New York Times, were quick to distance themselves from the legal dispute, stating that they had no direct involvement in the case.
The conversation then shifted to the relationship between OpenAI and Microsoft. While Microsoft was once a major driving force behind OpenAI's growth, recent negotiations for a new contract have strained their partnership. The two companies are now increasingly competing head-to-head in areas such as enterprise software.
In response, Altman acknowledged that friction is inevitable in any deep partnership. "Both our companies are highly ambitious, and it's natural for disagreements to arise. However, I firmly believe that this collaboration will continue to deliver significant value to both sides in the long run," he said.
With OpenAI's leadership currently preoccupied with external competition and legal challenges, there are concerns about how this might affect the company's ability to deploy AI safely at scale.
When asked about users in unstable mental states turning to ChatGPT to discuss conspiracy theories and even suicide, Altman outlined the measures OpenAI has implemented. These include cutting such conversations short and directing users toward professional help.
Altman emphasized that OpenAI is determined not to repeat the mistakes of previous tech companies that were slow to respond. However, when pressed further, he admitted that the company has yet to find a truly effective way to warn users who are in extremely fragile mental states and on the verge of a breakdown.