Sam Altman, OpenAI CEO, and Brad Lightcap, OpenAI COO, appeared at a live recording of the “Hard Fork” podcast in San Francisco, addressing the New York Times lawsuit against OpenAI and other industry challenges.
Altman and Lightcap arrived on stage earlier than anticipated at the San Francisco venue, which typically hosts jazz concerts. Kevin Roose, a columnist for The New York Times, and Casey Newton of Platformer, the hosts of the “Hard Fork” podcast, had planned to discuss recent headlines concerning OpenAI before the executives joined them. Altman acknowledged the early arrival, quipping, “This is more fun that we’re out here for this.” Moments later, Altman raised The New York Times lawsuit directly, asking, “Are you going to talk about where you sue us because you don’t like user privacy?”
Within minutes of the podcast starting, Altman redirected the conversation to the lawsuit filed by The New York Times against OpenAI and Microsoft, its primary investor. The lawsuit alleges that OpenAI improperly used the publisher’s articles to train its large language models. Altman expressed particular frustration over a recent development in the legal proceedings, in which The New York Times’ lawyers requested that OpenAI retain consumer ChatGPT and API customer data.
Altman stated, “The New York Times, one of the great institutions, truly, for a long time, is taking a position that we should have to preserve our users’ logs even if they’re chatting in private mode, even if they’ve asked us to delete them.” He concluded this point by adding, “Still love The New York Times, but that one we feel strongly about.” For a period, OpenAI’s CEO pressed the podcasters for their personal opinions regarding The New York Times lawsuit. Roose and Newton declined to offer an opinion, explaining that as journalists whose work is published by The New York Times, they are not involved in the legal dispute.
This opening confrontation lasted only a few minutes, after which the interview proceeded as planned. Still, the exchange underscored a significant inflection point in the relationship between Silicon Valley and the media industry. Over recent years, several publishers have filed lawsuits against prominent AI companies, including OpenAI, Anthropic, Google, and Meta, alleging the unauthorized use of their copyrighted works to train AI models.
These lawsuits generally contend that AI models can devalue, or even replace, the copyrighted content produced by media organizations. However, recent legal developments suggest a potential shift in favor of technology companies. Earlier this week, Anthropic, a direct competitor to OpenAI, secured a significant legal victory in its ongoing dispute with publishers.
A federal judge ruled that Anthropic’s use of books for training its AI models was permissible under specific circumstances. This ruling could have broad implications for similar lawsuits filed by publishers against OpenAI, Google, and Meta, and may have contributed to Altman and Lightcap’s assertive demeanor during their live interview with The New York Times journalists. OpenAI, however, continues to navigate challenges from various sources, a reality that became evident throughout the evening’s discussion.
Mark Zuckerberg has been actively attempting to recruit top talent from OpenAI, offering compensation packages valued at $100 million to entice them to join Meta’s AI superintelligence laboratory. Altman had previously disclosed this recruitment strategy weeks prior during an appearance on his brother’s podcast. When questioned by the hosts about whether the Meta CEO genuinely believes in superintelligent AI systems or if this is primarily a recruiting tactic, Lightcap responded, “I think [Zuckerberg] believes he is superintelligent.”
Later in the interview, Roose inquired about the nature of OpenAI’s relationship with Microsoft. Reports have indicated increased tensions between the two companies in recent months as they engage in negotiations for a new contract. While Microsoft previously served as a significant accelerator for OpenAI’s development, the two entities are now operating as competitors in the enterprise software sector and other domains.
Altman acknowledged these dynamics, stating, “In any deep partnership, there are points of tension and we certainly have those.” He elaborated further, explaining, “We’re both ambitious companies, so we do find some flashpoints, but I would expect that it is something that we find deep value in for both sides for a very long time to come.”
OpenAI’s leadership currently dedicates substantial effort to addressing competitive pressures and ongoing lawsuits. This focus potentially limits the company’s ability to concentrate on broader challenges associated with artificial intelligence, particularly the safe and scalable deployment of highly intelligent AI systems. At one point, Newton asked the OpenAI leaders about recent reports of mentally fragile individuals using ChatGPT and being drawn into dangerous conversational spirals, including discussions of conspiracy theories or suicide with the chatbot.
Altman affirmed that OpenAI implements multiple measures designed to prevent such conversations, including cutting interactions short or directing users to professional services where they can access appropriate assistance. Altman stated, “We don’t want to slide into the mistakes that I think the previous generation of tech companies made by not reacting quickly enough.” In response to a follow-up question, the OpenAI CEO acknowledged a remaining challenge, stating, “However, to users that are in a fragile enough mental place, that are on the edge of a psychotic break, we haven’t yet figured out how a warning gets through.”