LiteLLM lets you call 100+ LLM providers (OpenAI, Anthropic, Bedrock, Vertex AI, Together, Groq, and more) through a single OpenAI-compatible interface. Arize AX captures every LiteLLM call, including chat completions, embeddings, image generation, retries, and the underlying provider calls, via the `openinference-instrumentation-litellm` package.
The instrumentor wraps `completion()`, `acompletion()`, `completion_with_retries()`, `embedding()`, `aembedding()`, `image_generation()`, and `aimage_generation()`.
This guide covers the LiteLLM Python SDK (`litellm.completion(...)`), the in-process library. If you're using the LiteLLM Proxy (a standalone server that exposes an OpenAI-compatible API on a port), your client is just an OpenAI client pointed at the proxy URL; follow the OpenAI tracing guide and set `base_url` to your proxy.
LiteLLM Tracing Tutorial (Google Colab)
Prerequisites
- Python 3.10+
- An Arize AX account (sign up)
- An `OPENAI_API_KEY` from the OpenAI Platform (or another provider key; LiteLLM auto-routes based on the model string)
Launch Arize AX
- Sign in to your Arize AX account.
- From Space Settings, copy your Space ID and API Key. You will set them as `ARIZE_SPACE_ID` and `ARIZE_API_KEY` below.
Install
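Assuming you register tracing through the `arize-otel` helper package (as in the setup sketch below), an install along these lines should suffice:

```bash
pip install litellm openinference-instrumentation-litellm arize-otel
```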
Configure credentials
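Set all three credentials in the same shell that will run the example; the placeholder values are yours to replace:

```bash
# From Space Settings in Arize AX
export ARIZE_SPACE_ID="your-space-id"
export ARIZE_API_KEY="your-api-key"

# Provider key; LiteLLM routes based on the model string
export OPENAI_API_KEY="sk-..."
```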
Setup tracing
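A minimal `instrumentation.py` sketch, assuming the `arize-otel` package's `register()` helper (it wires an OTLP exporter to Arize AX); the project name matches the one referenced in Verify in Arize AX below:

```python
# instrumentation.py: import this before litellm anywhere in your app
import os

from arize.otel import register
from openinference.instrumentation.litellm import LiteLLMInstrumentor

# Send spans to Arize AX; credentials come from the environment
tracer_provider = register(
    space_id=os.environ["ARIZE_SPACE_ID"],
    api_key=os.environ["ARIZE_API_KEY"],
    project_name="litellm-tracing-example",
)

# Patches completion(), acompletion(), embedding(), and friends
LiteLLMInstrumentor().instrument(tracer_provider=tracer_provider)
```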
Run LiteLLM
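An `example.py` along these lines; the model string is an assumption, and any LiteLLM-supported model works:

```python
# example.py: instrumentation must be the first import, before litellm
import instrumentation  # noqa: F401

import litellm

response = litellm.completion(
    model="openai/gpt-4o-mini",  # assumed model; swap in any provider/model
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response.choices[0].message.content)
```

Run it with `python example.py`.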
Expected output
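The script prints the completion text. The exact wording varies by model and run; expect something like:

```
Sunlight is scattered by air molecules, and shorter blue wavelengths scatter
the most, so the sky away from the sun looks blue.
```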
Verify in Arize AX
- Open your Arize AX space and select the `litellm-tracing-example` project.
- You should see a new trace within ~30 seconds containing a `completion` LLM span (LiteLLM's wrapper around the underlying provider call) with the prompt, response, and token usage attached.
- If no traces appear, see Troubleshooting.
Troubleshooting
- No traces in Arize AX. Confirm `ARIZE_SPACE_ID` and `ARIZE_API_KEY` are set in the same shell that runs `example.py`. Enable OpenTelemetry debug logs with `export OTEL_LOG_LEVEL=debug` and re-run.
- LiteLLM spans missing but other spans present. `LiteLLMInstrumentor().instrument(...)` must run before any `import litellm`. Make sure `instrumentation.py` is the first import in your entry point.
- `401` from the underlying provider. LiteLLM picks the provider from the `model` string (`openai/...`, `anthropic/...`, `groq/...`). Make sure the matching key (`OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, etc.) is set.
- Other LLM providers. Switch the `model` string to a different provider, e.g. `litellm.completion(model="anthropic/claude-sonnet-4-5", ...)` or `litellm.completion(model="groq/llama-3.3-70b-versatile", ...)`. The same `LiteLLMInstrumentor` covers every provider LiteLLM routes to.
- Using the LiteLLM Proxy instead. When the client talks to a proxy on a port, the in-process `LiteLLMInstrumentor` doesn't see the call; the client is making a plain OpenAI HTTP request. Use the OpenAI tracing guide and set `base_url` to your proxy URL (see the sketch after this list).
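For the proxy case, a minimal client-side sketch; the address, key, and model are placeholders for your proxy configuration, and you would instrument this OpenAI client per the OpenAI tracing guide rather than with `LiteLLMInstrumentor`:

```python
from openai import OpenAI

# A plain OpenAI client pointed at a LiteLLM Proxy instance
client = OpenAI(
    base_url="http://localhost:4000",  # assumed proxy address
    api_key="sk-proxy-key",            # the proxy's key, not a provider key
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # whatever model name your proxy routes
    messages=[{"role": "user", "content": "Hello through the proxy"}],
)
print(response.choices[0].message.content)
```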