Google has confirmed a security vulnerability involving a new AI-driven attack that can compromise Gmail accounts.
The company noted that the threat “is not specific to Google” and highlights the need for stronger defenses against prompt injection attacks.
How the prompt injection attack worksThe attack uses malicious instructions hidden inside seemingly harmless items like emails, attachments, or calendar invitations. While these instructions are invisible to a human user, an AI assistant can read and execute them.
Researcher Eito Miyamura demonstrated the vulnerability in a video posted on X.
We got ChatGPT to leak your private email data. All you need? The victim’s email address. AI agents like ChatGPT follow your commands, not your common sense… with just your email, we managed to exfiltrate all your private information.
We got ChatGPT to leak your private email data