

Guardrails AI is a Python framework for adding validators and corrective behavior around LLM calls. Arize AX captures every Guardrails run — the guard wrapper, validator outcomes, and the underlying LLM call — via the openinference-instrumentation-guardrails package, paired with the openinference-instrumentation-openai instrumentor for full LLM-call detail.

Guardrails AI Tracing Tutorial (Google Colab)

Prerequisites

Launch Arize AX

  1. Sign in to your Arize AX account.
  2. From Space Settings, copy your Space ID and API Key. You will set them as ARIZE_SPACE_ID and ARIZE_API_KEY below.

Install

pip install arize-otel \
  openinference-instrumentation-guardrails \
  openinference-instrumentation-openai \
  guardrails-ai openai

Configure credentials

export ARIZE_SPACE_ID="<your-space-id>"
export ARIZE_API_KEY="<your-api-key>"
export ARIZE_PROJECT_NAME="guardrails-ai-tracing-example"
export OPENAI_API_KEY="<your-openai-api-key>"
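
Before running anything, it can help to confirm all four variables above are actually set in the current shell. This is an optional sketch (the file name `sanity_check.py` and the helper `missing_env_vars` are not part of the tutorial), using only the standard library:

```python
# sanity_check.py -- optional, hypothetical helper: verify the exported
# credentials from the step above are present and non-empty.
import os

REQUIRED_VARS = [
    "ARIZE_SPACE_ID",
    "ARIZE_API_KEY",
    "ARIZE_PROJECT_NAME",
    "OPENAI_API_KEY",
]

def missing_env_vars(env=None):
    """Return the required variable names that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED_VARS if not env.get(name)]

if __name__ == "__main__":
    missing = missing_env_vars()
    print("All credentials set." if not missing
          else "Missing: " + ", ".join(missing))
```

Run it with `python sanity_check.py` in the same shell you will use for the tutorial scripts.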

Set up tracing

# instrumentation.py
import os

from arize.otel import register
from openinference.instrumentation.guardrails import GuardrailsInstrumentor
from openinference.instrumentation.openai import OpenAIInstrumentor

tracer_provider = register(
    space_id=os.environ["ARIZE_SPACE_ID"],
    api_key=os.environ["ARIZE_API_KEY"],
    project_name=os.environ["ARIZE_PROJECT_NAME"],
)

GuardrailsInstrumentor().instrument(tracer_provider=tracer_provider)
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
print("Arize AX tracing initialized for Guardrails AI.")

Run Guardrails AI

# example.py

# Importing instrumentation first ensures tracing is set up
# before `guardrails` is imported.
from instrumentation import tracer_provider

from guardrails import Guard

guard = Guard()

response = guard(
    model="gpt-5",
    messages=[
        {
            "role": "user",
            "content": "Why is the ocean salty? Answer in two sentences.",
        },
    ],
)

print(response.validated_output)

Expected output

Arize AX tracing initialized for Guardrails AI.
The ocean is salty because rivers continuously dissolve mineral salts from rocks and soil and carry them to the sea, where they accumulate over millions of years. Water leaves the ocean through evaporation but the salts remain, steadily concentrating until reaching today's roughly 3.5% salinity.

Verify in Arize AX

  1. Open your Arize AX space and select project guardrails-ai-tracing-example.
  2. Within about 30 seconds you should see a new trace: a guard parent span wrapping a nested OpenAI ChatCompletion LLM child span, with the prompt, response, and token usage attached.
  3. If no traces appear, see Troubleshooting.

Troubleshooting

  • No traces in Arize AX. Confirm ARIZE_SPACE_ID and ARIZE_API_KEY are set in the same shell that runs example.py. Short-lived scripts can also exit before queued spans are exported; calling tracer_provider.force_flush() before the script ends should rule this out. Enable OpenTelemetry debug logs with export OTEL_LOG_LEVEL=debug and re-run.
  • Guardrails spans missing but OpenAI spans present. GuardrailsInstrumentor().instrument(...) must run before any from guardrails import .... Make sure instrumentation.py is the first import in your entry point.
  • 401 from OpenAI. Verify OPENAI_API_KEY is set and has access to gpt-5. Swap for a model your key can call.
  • Adding validators. This minimal example uses a no-op Guard(). To attach validators (e.g. PII redaction, regex match, semantic similarity), install them from Guardrails Hub with guardrails hub install hub://guardrails/<validator>. Validator outcomes appear as attributes on the Guardrails span.
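
As a rough sketch of that last point, here is what swapping the no-op `Guard()` for one with a Hub validator might look like. This assumes the regex_match validator has been installed via `guardrails hub install hub://guardrails/regex_match`, and uses `guard.validate()` to exercise the validator on static text without an LLM call; the exact validator and its parameters are illustrative:

```python
# Hypothetical sketch: a Guard with one Guardrails Hub validator attached,
# instead of the no-op Guard() used in the tutorial above.
validation_passed = None
try:
    from guardrails import Guard
    from guardrails.hub import RegexMatch  # requires the hub install step

    # Fail validation unless the text contains a capitalized word.
    guard = Guard().use(RegexMatch(regex=r"[A-Z]\w+", match_type="search"))

    # validate() runs the validators on static text -- no LLM call needed.
    outcome = guard.validate("The ocean is salty.")
    validation_passed = outcome.validation_passed
    print("validation passed:", validation_passed)
except ImportError:
    print("Install guardrails-ai and the regex_match validator to run this sketch.")
```

When the same guard wraps a model call, the validator outcome shows up as attributes on the Guardrails span in Arize AX, alongside the nested LLM span.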

Resources

Guardrails AI Documentation

OpenInference Guardrails Instrumentor

Guardrails AI GitHub