
Security firm Radware has discovered a zero-click vulnerability, “ShadowLeak,” in ChatGPT’s Deep Research agent.
The flaw allows data theft from OpenAI’s servers as enterprises increasingly use AI to analyze sensitive emails and internal reports.
The adoption of these AI platforms introduces new security risks when handling confidential business information. ShadowLeak is a server-side exploit, meaning an attack executes entirely on OpenAI’s servers. This mechanism allows attackers to exfiltrate sensitive data without requiring any user interaction, operating completely covertly.
David Aviv, chief technology officer at Radware, classified it as “the quintessential zero-click attack.” He stated, “There is no user action required, no visible cue, and no way for victims to know their data has been compromised. Everything happens entirely behind the scenes through autonomous agent actions on OpenAI cloud servers.”
This exploit functions independently of user endpoints or company networks, which makes detection by enterprise security teams extremely difficult. Radware researchers demonstrated that sending an email with hidden instructions could trigger the Deep Research agent, causing it to leak information autonomously without the user’s knowledge.
Pascal Geenens, director of cyber threat intelligence at Radware, warned that internal protections are insufficient. “Enterprises adopting AI cannot rely on built-in safeguards alone to prevent abuse,” Geenens said. “AI-driven workflows can be manipulated in ways not yet anticipated, and these attack vectors often bypass the visibility and detection capabilities of traditional security solutions.”
ShadowLeak represents the first purely server-side, zero-click data exfiltration attack that leaves almost no forensic evidence from a business perspective. With ChatGPT reporting over 5 million paying business users, the potential scale of exposure is substantial. This lack of evidence complicates incident response efforts.
Experts emphasize that human oversight and strict access controls are critical when connecting autonomous AI agents to sensitive data. Organizations are advised to continuously evaluate security gaps and combine technology with operational practices.
Recommended protective measures include: