Important, interesting post from Eito Miyamura:
We got ChatGPT to leak your private email data ๐๐
All you need? The victim's email address. โ๏ธโ๐ฅ๐ฉ๐ง
On Wednesday, OpenAI added full support for MCP (Model Context Protocol) tools in ChatGPT. Allowing ChatGPT to connect and read your Gmail, Calendar, Sharepoint, Notion, and more.
But here's the fundamental problem: AI agents like ChatGPT follow your commands, not your common sense.
And with just your email, we managed to exfiltrate all your private information.
Here's how we did it:
1. The attacker sends a calendar invite with a jailbreak prompt to the victim, just with their email - no need for the victim to accept the invite.
2. Waited for the user to ask ChatGPT to help prepare for their day by looking at their calendar
3. ChatGPT reads the jailbroken calendar invite. Now ChatGPT is hijacked by the attacker and will act on the attacker's command. Searches your private emails and sends the data to the attacker's email.
For now, OpenAI has only made MCPs available in "developer mode" and requires manual human approvals for every session, but decision fatigue is a real thing, and normal people will just trust the AI without knowing what to do and click approve, approve, approve.
Remember that AI might be super smart, but it can be tricked and phished in incredibly dumb ways to leak your data.
ChatGPT + Tools poses a serious security risk