LiteLLM lets you call 100+ LLM providers — OpenAI, Anthropic, Bedrock, Vertex AI, Together, Groq, and more — through a single OpenAI-compatible interface. Arize AX captures every LiteLLM call — chat completions, embeddings, image generation, retries, and the underlying provider calls — via the openinference-instrumentation-litellm package. The instrumentor wraps completion(), acompletion(), completion_with_retries(), embedding(), aembedding(), image_generation(), and aimage_generation().
This guide covers the LiteLLM Python SDK (litellm.completion(...)) — the in-process library. If you’re using the LiteLLM Proxy (a standalone server that exposes an OpenAI-compatible API on a port), your client is just an OpenAI client pointed at the proxy URL; follow the OpenAI tracing guide and set base_url to your proxy.
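A minimal sketch of that proxy path, assuming a proxy listening at http://localhost:4000 and a placeholder proxy key; tracing here comes from the openinference-instrumentation-openai package, not the LiteLLM instrumentor:

# proxy_client.py (sketch)
from openai import OpenAI
from openinference.instrumentation.openai import OpenAIInstrumentor

# Instrument the OpenAI client; the in-process LiteLLM instrumentor never sees proxy traffic.
OpenAIInstrumentor().instrument()  # pass tracer_provider=... as shown in Setup tracing below

# base_url and api_key are placeholders for your proxy URL and proxy key.
client = OpenAI(base_url="http://localhost:4000", api_key="sk-litellm-proxy-key")
response = client.chat.completions.create(
    model="gpt-5",
    messages=[{"role": "user", "content": "Hello through the LiteLLM Proxy."}],
)
print(response.choices[0].message.content)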

LiteLLM Tracing Tutorial (Google Colab)

Prerequisites

  • Python 3.10+
  • An Arize AX account (sign up)
  • An OPENAI_API_KEY from the OpenAI Platform (or another provider key — LiteLLM auto-routes based on the model string)

Launch Arize AX

  1. Sign in to your Arize AX account.
  2. From Space Settings, copy your Space ID and API Key. You will set them as ARIZE_SPACE_ID and ARIZE_API_KEY below.

Install

pip install arize-otel openinference-instrumentation-litellm litellm

Configure credentials

export ARIZE_SPACE_ID="<your-space-id>"
export ARIZE_API_KEY="<your-api-key>"
export ARIZE_PROJECT_NAME="litellm-tracing-example"
export OPENAI_API_KEY="<your-openai-api-key>"

Setup tracing

# instrumentation.py
import os

from arize.otel import register
from openinference.instrumentation.litellm import LiteLLMInstrumentor

tracer_provider = register(
    space_id=os.environ["ARIZE_SPACE_ID"],
    api_key=os.environ["ARIZE_API_KEY"],
    project_name=os.environ["ARIZE_PROJECT_NAME"],
)

LiteLLMInstrumentor().instrument(tracer_provider=tracer_provider)
print("Arize AX tracing initialized for LiteLLM.")

Run LiteLLM

# example.py

# Importing instrumentation first ensures tracing is set up
# before `litellm` is imported.
from instrumentation import tracer_provider

import litellm

# litellm reads OPENAI_API_KEY from the environment for openai/* models.
response = litellm.completion(
    model="gpt-5",
    messages=[
        {
            "role": "user",
            "content": "Why is the ocean salty? Answer in two sentences.",
        },
    ],
)

print(response.choices[0].message.content)

Expected output

Arize AX tracing initialized for LiteLLM.
The ocean is salty because rivers continuously dissolve mineral salts from rocks and soil and carry them to the sea, where they accumulate over millions of years. Water leaves the ocean through evaporation but the salts remain, steadily concentrating until reaching today's roughly 3.5% salinity.
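
Beyond completion(), the same instrumentor also traces the async and embedding entry points listed at the top of this guide. A minimal sketch, with illustrative model names:

# more_calls.py (sketch; import instrumentation first, as in example.py)
import asyncio

from instrumentation import tracer_provider

import litellm

# Async chat completion, traced by the same LiteLLMInstrumentor.
async def ask() -> str:
    response = await litellm.acompletion(
        model="gpt-5",
        messages=[{"role": "user", "content": "One fun fact about the ocean."}],
    )
    return response.choices[0].message.content

print(asyncio.run(ask()))

# Embedding call, also traced; one embedding is returned per input string.
embedding_response = litellm.embedding(
    model="text-embedding-3-small",
    input=["Why is the ocean salty?"],
)
print(len(embedding_response.data))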

Verify in Arize AX

  1. Open your Arize AX space and select project litellm-tracing-example.
  2. You should see a new trace within ~30 seconds containing a completion LLM span (LiteLLM’s wrapper around the underlying provider call) with the prompt, response, and token usage attached.
  3. If no traces appear, see Troubleshooting.

Troubleshooting

  • No traces in Arize AX. Confirm ARIZE_SPACE_ID and ARIZE_API_KEY are set in the same shell that runs example.py. Enable OpenTelemetry debug logs with export OTEL_LOG_LEVEL=debug and re-run. For short-lived scripts, also flush pending spans before exit (see the sketch after this list).
  • LiteLLM spans missing but other spans present. LiteLLMInstrumentor().instrument(...) must run before any import litellm. Make sure instrumentation.py is the first import in your entry point.
  • 401 from the underlying provider. LiteLLM picks the provider from the model string (openai/..., anthropic/..., groq/...). Make sure the matching key (OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.) is set.
  • Other LLM providers. Switch the model string to a different provider — litellm.completion(model="anthropic/claude-sonnet-4-5", ...), litellm.completion(model="groq/llama-3.3-70b-versatile", ...), etc. The same LiteLLMInstrumentor covers every provider LiteLLM routes to.
  • Using the LiteLLM Proxy instead. When the client talks to a proxy on a port, the in-process LiteLLMInstrumentor doesn’t see the call — the client is making a plain OpenAI HTTP request. Use the OpenAI tracing guide and set base_url to your proxy URL.
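
If traces are missing only when a script exits immediately after the LLM call, the batch span processor may not have exported its queue yet. A minimal sketch of flushing before exit, assuming register() returns a standard OpenTelemetry TracerProvider:

# flush_before_exit.py (sketch)
from instrumentation import tracer_provider

import litellm

response = litellm.completion(
    model="gpt-5",
    messages=[{"role": "user", "content": "Ping."}],
)
print(response.choices[0].message.content)

# Block until queued spans are exported (give up after 10 seconds).
tracer_provider.force_flush(timeout_millis=10_000)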

Resources

LiteLLM Documentation

OpenInference LiteLLM Instrumentor

LiteLLM GitHub