
OpenAI provides the GPT family of large language models through the OpenAI Python SDK and OpenAI Node.js SDK. Arize AX captures every OpenAI SDK call — chat completions, embeddings, tool calls, and token usage — via the OpenInference instrumentors for Python and JavaScript / TypeScript. The same instrumentors also cover Azure OpenAI.

OpenAI Python Tracing Tutorial (Google Colab)

OpenAI Python Tracing Tutorials on GitHub

Prerequisites

  • Python 3.9+ or Node.js 18+
  • An Arize AX account (sign up)
  • An OPENAI_API_KEY from the OpenAI Platform, or Azure OpenAI credentials

Launch Arize AX

  1. Sign in to your Arize AX account.
  2. From Space Settings, copy your Space ID and API Key. You will set them as ARIZE_SPACE_ID and ARIZE_API_KEY below.

Install

pip install arize-otel openinference-instrumentation-openai openai

Configure credentials

export ARIZE_SPACE_ID="<your-space-id>"
export ARIZE_API_KEY="<your-api-key>"
export ARIZE_PROJECT_NAME="openai-tracing-example"
export OPENAI_API_KEY="<your-openai-api-key>"
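A quick sanity check can catch a missing variable before you run the examples. A minimal sketch; the `missing_vars` helper is illustrative and not part of any SDK (`ARIZE_PROJECT_NAME` is left out because `register` falls back to a default project name):

```python
import os

# Variables the tracing examples below rely on.
REQUIRED_VARS = ["ARIZE_SPACE_ID", "ARIZE_API_KEY", "OPENAI_API_KEY"]

def missing_vars(env=None):
    """Return the names of required variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED_VARS if not env.get(name)]

if missing_vars():
    print("Missing environment variables:", ", ".join(missing_vars()))
```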

Set up tracing

# instrumentation.py
import os

from arize.otel import register
from openinference.instrumentation.openai import OpenAIInstrumentor

tracer_provider = register(
    space_id=os.environ["ARIZE_SPACE_ID"],
    api_key=os.environ["ARIZE_API_KEY"],
    project_name=os.environ["ARIZE_PROJECT_NAME"],
)

OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
print("Arize AX tracing initialized for OpenAI.")
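If you want your own application logic to appear as a parent span above the auto-instrumented OpenAI span, you can wrap the call with a tracer obtained from the returned provider (`tracer_provider.get_tracer(__name__)`). A minimal sketch; the span name `ask-llm` and the attribute key are illustrative choices, not required by the instrumentor:

```python
def traced_call(tracer, fn, span_name="ask-llm"):
    """Run `fn` inside a manual span; any spans created while `fn` executes
    (including auto-instrumented OpenAI spans) nest under it."""
    with tracer.start_as_current_span(span_name) as span:
        result = fn()
        span.set_attribute("app.result_type", type(result).__name__)
        return result

# Usage, after the setup above:
#     tracer = tracer_provider.get_tracer(__name__)
#     response = traced_call(tracer, lambda: client.chat.completions.create(...))
```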

Run OpenAI

# example.py

# Importing instrumentation first ensures tracing is set up
# before `openai` is imported.
from instrumentation import tracer_provider

import openai

# The client reads OPENAI_API_KEY from the environment.
client = openai.OpenAI()

response = client.chat.completions.create(
    model="gpt-5",
    messages=[
        {
            "role": "user",
            "content": "Write a haiku about observability.",
        },
    ],
)

print(response.choices[0].message.content)
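As noted in the overview, the instrumentor also captures embedding calls. A hedged sketch: the `embed_text` helper is illustrative, and it assumes the same import order as example.py (instrumentation first, then `openai`):

```python
def embed_text(client, text, model="text-embedding-3-small"):
    """Create an embedding; the active instrumentor records the call as a span."""
    response = client.embeddings.create(model=model, input=text)
    return response.data[0].embedding

# Usage, mirroring example.py:
#     from instrumentation import tracer_provider
#     import openai
#     vector = embed_text(openai.OpenAI(), "Write a haiku about observability.")
#     print(len(vector))
```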

Expected output (the exact haiku will vary between runs)

Arize AX tracing initialized for OpenAI.
Logs whisper softly,
metrics rise like morning mist —
truth in every span.

Verify in Arize AX

  1. Open your Arize AX space and select project openai-tracing-example.
  2. You should see a new trace within ~30 seconds containing an LLM span — ChatCompletion for the Python SDK or OpenAI Chat Completions for the Node.js SDK — with the prompt, response, and token usage attached.
  3. If no traces appear, see Troubleshooting.
OpenAI tracing in Arize AX

Troubleshooting

  • No traces in Arize AX. Confirm ARIZE_SPACE_ID and ARIZE_API_KEY are set in the same shell that runs the example. Enable OpenTelemetry debug logs with export OTEL_LOG_LEVEL=debug and re-run.
  • OpenAI spans missing but other spans present (Python). OpenAIInstrumentor().instrument(...) must run before any import openai in the application. Make sure instrumentation.py is the first import in your entry point.
  • OpenAI spans missing but other spans present (TypeScript). instrumentation.manuallyInstrument(OpenAI) must run before any code creates an OpenAI client. Make sure import { provider } from "./instrumentation" (or a side-effect-only import "./instrumentation") is the first import in your entry point.
  • 401 from OpenAI. Verify OPENAI_API_KEY is set and has access to the model in the example. Swap gpt-5 for a model your key can call.
  • Azure OpenAI returns Resource not found. Confirm AZURE_OPENAI_ENDPOINT points to your deployment, AZURE_OPENAI_API_VERSION matches a version your deployment supports, and the example uses the Azure client constructor (openai.AzureOpenAI() / new AzureOpenAI()) rather than the standard OpenAI client.
  • TypeScript process exits before spans flush. With SimpleSpanProcessor, spans are sent immediately, but make sure to await provider.forceFlush() (or call provider.shutdown()) before the process exits to avoid losing trailing spans.
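Short-lived Python scripts have the same flush concern. The `TracerProvider` returned by `register` exposes OpenTelemetry's `force_flush`; a minimal sketch with an illustrative wrapper name:

```python
def flush_before_exit(provider, timeout_millis=5000):
    """Flush any buffered spans before the process exits.

    Returns True if the flush completed within the timeout."""
    ok = provider.force_flush(timeout_millis)
    if not ok:
        print("Warning: span flush timed out; trailing spans may be lost.")
    return ok

# Usage at the end of a script:
#     flush_before_exit(tracer_provider)
```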

Resources

OpenAI Python SDK

OpenAI Node.js SDK

OpenInference OpenAI Instrumentor (Python)

OpenInference OpenAI Instrumentor (JS/TS)