The "zero-click" vulnerability even allows hackers to make ChatGPT act on their behalf - Illustration photo: AFP
Israeli cybersecurity firm Zenity has just revealed the first “Zero Click” vulnerability discovered in OpenAI’s ChatGPT service.
This type of attack requires no action from the user - no clicking a link, opening a file, or any other intentional interaction - yet it can still compromise accounts and leak sensitive data.
Michael Bargury, co-founder and CTO of Zenity, demonstrated firsthand how a hacker armed with nothing but a user's email address could take complete control of that user's conversations, including past and future content, change a conversation's purpose, and even manipulate ChatGPT to act in the hacker's favor.
In their presentation, the researchers showed that a compromised ChatGPT could be turned into a “malicious actor” that operates covertly against users. Hackers could make ChatGPT suggest users download virus-infected software, give misleading business advice, or access files stored on Google Drive if the user’s account is connected. All of this happens without the user’s knowledge.
The vulnerability was only fully patched after Zenity notified OpenAI.
In addition to ChatGPT, Zenity has also demonstrated similar attacks against other popular AI assistant platforms. In Microsoft's Copilot Studio, researchers discovered a way to leak entire CRM databases.
In Salesforce Einstein, hackers could create fake service requests to redirect all customer communications to email addresses under their control.
Google Gemini and Microsoft 365 Copilot were also turned into "hostile actors," carrying out phishing attacks and leaking sensitive information through emails and calendar events.
In another example, the software development tool Cursor, when integrated with Jira via MCP, was exploited to steal developer credentials through fake "tickets".
Zenity said some companies, such as OpenAI and Microsoft, quickly released patches after being alerted. However, others refused to address the issue, arguing that the behavior was a “design feature” rather than a security vulnerability.
The big challenge now, according to Michael Bargury, is that AI assistants no longer just perform simple tasks; they are becoming "digital entities" that act on users' behalf, able to open folders, send files, and access email. He warned that this is a "paradise" for hackers, offering so many points of exploitation.
Ben Kliger, co-founder and CEO of Zenity, stressed that the company's research shows current security methods are no longer suited to the way AI assistants operate. He called on organizations to change their approach and invest in specialized solutions capable of controlling and monitoring the activities of these "agents".
Zenity was founded in 2021. It currently has around 110 employees globally, 70 of whom work in its Tel Aviv office. Zenity's clients include many Fortune 100 and even Fortune 5 companies.
Source: https://tuoitre.vn/lo-hong-nghiem-trong-tren-chatgpt-va-loat-tro-ly-ai-nguoi-dung-bi-lua-dao-lo-thong-tin-20250811131018876.htm