The most recent generative AI fashions should not simply stand-alone text-generating chatbots—as an alternative, they’ll simply be hooked as much as your information to offer personalised solutions to your questions. OpenAI’s ChatGPT may be linked to your Gmail inbox, allowed to examine your GitHub code, or discover appointments in your Microsoft calendar. However these connections have the potential to be abused—and researchers have proven it will probably take only a single “poisoned” doc to take action.
New findings from safety researchers Michael Bargury and Tamir Ishay Sharbat, revealed on the Black Hat hacker convention in Las Vegas at present, present how a weak spot in OpenAI’s Connectors allowed delicate data to be extracted from a Google Drive account utilizing an oblique immediate injection assault. In an indication of the assault, dubbed AgentFlayer, Bargury reveals the way it was doable to extract developer secrets and techniques, within the type of API keys, that had been saved in an indication Drive account.
The vulnerability highlights how connecting AI fashions to exterior techniques and sharing extra information throughout them will increase the potential assault floor for malicious hackers and doubtlessly multiplies the methods the place vulnerabilities could also be launched.
“There’s nothing the consumer must do to be compromised, and there may be nothing the consumer must do for the info to exit,” Bargury, the CTO at safety agency Zenity, tells WIRED. “We’ve proven that is fully zero-click; we simply want your e mail, we share the doc with you, and that’s it. So sure, that is very, very dangerous,” Bargury says.
OpenAI didn’t instantly reply to WIRED’s request for remark in regards to the vulnerability in Connectors. The corporate launched Connectors for ChatGPT as a beta characteristic earlier this 12 months, and its web site lists no less than 17 totally different providers that may be linked up with its accounts. It says the system means that you can “deliver your instruments and information into ChatGPT” and “search recordsdata, pull dwell information, and reference content material proper within the chat.”
Bargury says he reported the findings to OpenAI earlier this 12 months and that the corporate shortly launched mitigations to forestall the approach he used to extract information through Connectors. The best way the assault works means solely a restricted quantity of information might be extracted directly—full paperwork couldn’t be eliminated as a part of the assault.
“Whereas this subject isn’t particular to Google, it illustrates why creating sturdy protections in opposition to immediate injection assaults is vital,” says Andy Wen, senior director of safety product administration at Google Workspace, pointing to the corporate’s not too long ago enhanced AI safety measures.