Guardrails Video Tutorial
Arize Guards

- Dataset Embeddings Guard: Provided few shot examples of “bad” messages, Guard against similar inputs based on the cosine distance between embeddings.
- RAG LLM Guard: Similar to the General LLM Guard, but designed for the special case where the prompt includes additional context from a RAG application.
Dataset Embeddings Guard
Dataset Embeddings Guard Colab Tutorial
ArizeDatasetEmbeddings Guard. Given any dataset of “bad” examples, this Guard will protect against similar messages in the LLM chat.
This Guard works in the following way:
- The Guard computes embeddings for chunks associated with a set of few shot examples of “bad” user prompts or LLM messages (we recommend using 10 different prompts to balance performance and latency).
- When the Guard is applied to a user or LLM message, the Guard computes the embedding for the input message and checks if any of the few shot “train” chunks in the dataset are close to the message in embedded space.
- If the cosine distance between the input message and any of the chunks is within the user-specified threshold (default setting is 0.2), then the Guard intercepts the LLM call.
Benchmarking the Dataset Embeddings Guard on Jailbreak Prompts
By default, theArizeDatasetEmbeddings Guard will use few shot examples from a public dataset of jailbreak prompts. We benchmarked the performance of our model on this dataset and recorded the following results:
- True Positives: 86.43% of 656 jailbreak prompts failed the DatasetEmbeddings guard.
- False Negatives: 13.57% of 656 jailbreak prompts passed the DatasetEmbeddings guard.
- False Positives: 13.95% of 2000 regular prompts failed the DatasetEmbeddings guard.
- True Negatives: 86.05% of 2000 regular prompts passed the DatasetEmbeddings guard.
- 1.41 median latency for end-to-end LLM call on GPT-3.5.
RAG LLM Guard
RAG LLM Guard Colab Tutorial
user_message, retrieved context and llm_response to the LlmRagEvaluator(Validator) at runtime and it will Guard against problematic messages. Each off-the-shelf Guard has been benchmarked on a public dataset (see code).
You can also customize this Guard with your own RAG LLM Judge prompt by inheriting from ArizeRagEvalPromptBase(ABC) class.
Refer to the README and Colab Tutorial for additional details.
Corrective Actions
We recommend two different types of corrective actions when input does not pass the Guard, which can be passed into the Guard upon instantiation:- default response: Instantiate the Guard with
on_fail="fix"if you want the Guard to use a user-defined hard-coded default LLM response. - LLM reask: Instantiate the Guard with
on_fail="reask"to re-prompt the LLM when the Guard fails. Note that this can introduce additional latency in your application.
View Traces in Arize AX UI
Refer to the Colab notebook for an example on how to integrate OTEL tracing with your Guard. In addition to real-time intervention, Arize offers tracing and visualization tools to investigate chats where the Guard was triggered. Below we see the following information in the Arize AX UI for a Jailbreak attempt flagged by theDatasetEmbeddings Guard:
- Each LLM call and guard step that took place under the hood.
- The error message from the Guard when it flagged the Jailbreak attempt.
- The
validator_result: "fail" - The
validator_on_fail: "exception" - The
cosine_distance: 0.15, which is the cosine distance of the closest embedded prompt chunk in the set of few shot examples of jailbreak prompts. - The text corresponding to the
most_similar_chunk. - The text corresponding to the
input_message.


Monitoring
Users have the option to connect their guards to Arize Production Monitoring. In the example below, we see a user create a monitor that sends an alert every time the Guard fails. These alerts can be connected to slack, PagerDuty, email, etc.

Resources
For additional support getting started with Guards, please refer to the following resources:- Arize Dataset Embeddings Guard and Colab Tutorial with OTEL tracing
- LLM RAG Evaluator Guard and Colab Tutorial with OTEL tracing