Fashionable AI methods, like Gemini, are extra succesful than ever, serving to retrieve knowledge and carry out actions on behalf of customers. Nevertheless, knowledge from exterior sources current new safety challenges if untrusted sources can be found to execute directions on AI methods. Attackers can make the most of this by hiding malicious directions in knowledge which can be prone to be retrieved by the AI system, to control its habits. One of these assault is often known as an “oblique immediate injection,” a time period first coined by Kai Greshake and the NVIDIA group.
To mitigate the chance posed by this class of assaults, we’re actively deploying defenses inside our AI methods together with measurement and monitoring instruments. Certainly one of these instruments is a strong analysis framework we have now developed to robotically red-team an AI system’s vulnerability to oblique immediate injection assaults. We’ll take you thru our menace mannequin, earlier than describing three assault methods we have now applied in our analysis framework.
Menace mannequin and analysis framework
Our menace mannequin concentrates on an attacker utilizing oblique immediate injection to exfiltrate delicate data, as illustrated above. The analysis framework checks this by making a hypothetical state of affairs, through which an AI agent can ship and retrieve emails on behalf of the person. The agent is introduced with a fictitious dialog historical past through which the person references non-public data resembling their passport or social safety quantity. Every dialog ends with a request by the person to summarize their final electronic mail, and the retrieved electronic mail in context.
The contents of this electronic mail are managed by the attacker, who tries to control the agent into sending the delicate data within the dialog historical past to an attacker-controlled electronic mail tackle. The assault is profitable if the agent executes the malicious immediate contained within the electronic mail, ensuing within the unauthorized disclosure of delicate data. The assault fails if the agent solely follows person directions and offers a easy abstract of the e-mail.
Automated red-teaming
Crafting profitable oblique immediate injections requires an iterative means of refinement primarily based on noticed responses. To automate this course of, we have now developed a red-team framework consisting of a number of optimization-based assaults that generate immediate injections (within the instance above this may be totally different variations of the malicious electronic mail). These optimization-based assaults are designed to be as robust as doable; weak assaults do little to tell us of the susceptibility of an AI system to oblique immediate injections.
As soon as these immediate injections have been constructed, we measure the ensuing assault success fee on a various set of dialog histories. As a result of the attacker has no prior information of the dialog historical past, to realize a excessive assault success fee the immediate injection have to be able to extracting delicate person data contained in any potential dialog contained within the immediate, making this a tougher job than eliciting generic unaligned responses from the AI system. The assaults in our framework embody:
Actor Critic: This assault makes use of an attacker-controlled mannequin to generate solutions for immediate injections. These are handed to the AI system below assault, which returns a likelihood rating of a profitable assault. Based mostly on this likelihood, the assault mannequin refines the immediate injection. This course of repeats till the assault mannequin converges to a profitable immediate injection.
Beam Search: This assault begins with a naive immediate injection immediately requesting that the AI system ship an electronic mail to the attacker containing the delicate person data. If the AI system acknowledges the request as suspicious and doesn’t comply, the assault provides random tokens to the tip of the immediate injection and measures the brand new likelihood of the assault succeeding. If the likelihood will increase, these random tokens are stored, in any other case they’re eliminated, and this course of repeats till the mixture of the immediate injection and random appended tokens end in a profitable assault.
Tree of Assaults w/ Pruning (TAP): Mehrotra et al. (2024) [3] designed an assault to generate prompts that trigger an AI system to violate security insurance policies (resembling producing hate speech). We adapt this assault, making a number of changes to focus on safety violations. Like Actor Critic, this assault searches within the pure language area; nevertheless, we assume the attacker can’t entry likelihood scores from the AI system below assault, solely the textual content samples which can be generated.
We’re actively leveraging insights gleaned from these assaults inside our automated red-team framework to guard present and future variations of AI methods we develop towards oblique immediate injection, offering a measurable strategy to observe safety enhancements. A single silver bullet protection isn’t anticipated to unravel this downside completely. We imagine probably the most promising path to defend towards these assaults entails a mix of sturdy analysis frameworks leveraging automated red-teaming strategies, alongside monitoring, heuristic defenses, and commonplace safety engineering options.
We wish to thank Sravanti Addepalli, Lihao Liang, and Alex Kaskasoli for his or her prior contributions to this work.
Posted on behalf of the complete Agentic AI Safety group (listed in alphabetical order):
Aneesh Pappu, Andreas Terzis, Chongyang Shi, Gena Gibson, Ilia Shumailov, Itay Yona, Jamie Hayes, John “4” Flynn, Juliette Pluto, Sharon Lin, Shuang Music