Prompt Injection Detection Policy
The Prompt Injection Detection policy utilizes a tool calling LLM with a small, fast agentic workflow to determine if the returning content has a poisoned or injected prompt. This is especially useful for downstream LLM agents consuming user content in the API.
Configuration
The configuration shows how to configure the policy in the 'policies.json' document.
json
Policy Configuration
name
<string>
- The name of your policy instance. This is used as a reference in your routes.policyType
<string>
- The identifier of the policy. This is used by the Zuplo UI. Value should beprompt-injection-outbound
.handler.export
<string>
- The name of the exported type. Value should bePromptInjectionDetectionOutboundPolicy
.handler.module
<string>
- The module containing the policy. Value should be$import(@zuplo/runtime)
.handler.options
<object>
- The options for this policy. See Policy Options below.
Policy Options
The options for this policy are specified below. All properties are optional unless specifically marked as required.
apiKey
(required)<string>
- API key for an OpenAI compatible service.model
<string>
- Model to use for classification. Defaults to"gpt-3.5-turbo"
.baseUrl
<string>
- Base URL for the OpenAI compatible API. Defaults to"https://api.openai.com/v1"
.
Using the Policy
Usage
The Prompt Injection Detection policy utilizes a tool calling LLM with a small, fast agentic workflow to determine if the outbound content has a poisoned or injected prompt.
This is especially useful for downstream LLM agents consuming user content in the API.
For benign user content like:
json
the agent will simply pass through the original Response
.
But, for more nefarious content that is attempting to inject or poison a downstream LLM agent, the detection policy will 400. For example:
json
will return a 400.
Options
apiKey
- [Required]: The API key to your LLM inference servicebaseUrl
- [Optional - default:https://api.openai.com/v1
]: The OpenAI API base URL. Works with any OpenAI compatible API that also supports tool calling.model
- [Optional - default:gpt-3.5-turbo
]: The model to run the agentic flow. The model MUST support tool calling.
Local setup
Using Ollama, you can setup this policy for local testing:
json
This uses a small Qwen3 model and the locally running Ollama to run the policy's agentic tools.
Read more about how policies work