Start where it's automatic. For supported providers and frameworks, install an instrumentor package, call .instrument(), and every call is traced, with no per-call code changes. You can also start from the Arize AX UI: when you create a new tracing project, the setup wizard walks you through choosing your integration and gives you the code to copy.
# Ask your AI coding agent:
"Set up Arize tracing in my application"
Works with Cursor, Claude Code, Codex, and more. The skill analyzes your stack, picks the right OpenInference package, wires it in, and tells you exactly how to verify traces are flowing:
Install the OpenInference instrumentor for your provider, register a tracer provider with your Arize credentials, and call .instrument().
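For Python with OpenAI, the install step is typically a single pip command (package names taken from PyPI; check your integration page for the exact instrumentor for your provider):

```shell
pip install arize-otel openinference-instrumentation-openai openai
```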
Go has no first-party OpenInference auto-instrumentor today. Install the OpenTelemetry Go SDK and instrument your LLM calls manually — see Manual instrumentation for the full pattern.
go get \
    go.opentelemetry.io/otel \
    go.opentelemetry.io/otel/sdk \
    go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp
2. Register and instrument
Python
JS/TS
Go
from arize.otel import register
from openinference.instrumentation.openai import OpenAIInstrumentor

tracer_provider = register(
    space_id="YOUR_SPACE_ID",    # Settings > API Keys in Arize AX
    api_key="YOUR_API_KEY",      # Settings > API Keys > + New API Key
    project_name="my-project",
)
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
import { NodeTracerProvider, SimpleSpanProcessor } from "@opentelemetry/sdk-trace-node";
import { registerInstrumentations } from "@opentelemetry/instrumentation";
import { resourceFromAttributes } from "@opentelemetry/resources";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-proto";
import OpenAI from "openai";
import { OpenAIInstrumentation } from "@arizeai/openinference-instrumentation-openai";

const provider = new NodeTracerProvider({
  resource: resourceFromAttributes({
    ["openinference.project.name"]: "my-project",
  }),
  spanProcessors: [
    new SimpleSpanProcessor(
      new OTLPTraceExporter({
        url: "https://otlp.arize.com/v1/traces",
        headers: {
          "arize-space-id": "YOUR_SPACE_ID",
          "arize-api-key": "YOUR_API_KEY",
        },
      }),
    ),
  ],
});

const instrumentation = new OpenAIInstrumentation();
instrumentation.manuallyInstrument(OpenAI);
registerInstrumentations({ instrumentations: [instrumentation] });
provider.register();
Set up the TracerProvider directly and create spans with OpenInference attributes by hand around each LLM call. The full snippet is on the Manual instrumentation page; the minimal init looks like:
exporter, err := otlptracehttp.New(ctx,
    otlptracehttp.WithEndpoint("otlp.arize.com"),
    otlptracehttp.WithHeaders(map[string]string{
        "arize-space-id": os.Getenv("ARIZE_SPACE_ID"),
        "arize-api-key":  os.Getenv("ARIZE_API_KEY"),
    }),
)
if err != nil {
    log.Fatalf("exporter: %v", err)
}

res, err := resource.New(ctx,
    resource.WithAttributes(
        attribute.String("openinference.project.name", "my-project"),
    ),
)
if err != nil {
    log.Fatalf("resource: %v", err)
}

tp := sdktrace.NewTracerProvider(
    sdktrace.WithBatcher(exporter),
    sdktrace.WithResource(res),
)
otel.SetTracerProvider(tp)

// Use a fresh background context for shutdown — a request-scoped ctx
// may already be cancelled and would drop in-flight spans.
defer tp.Shutdown(context.Background())
This example uses OpenAI, but the same pattern works for any provider — install the instrumentor, call .instrument(), and go.
For some frameworks (CrewAI, LangChain, AutoGen, LlamaIndex), .instrument() must run before importing the library — they patch methods at runtime, so objects created earlier will not emit spans. See each integration page for specifics.
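The ordering pitfall comes from how these instrumentors work: they replace attributes at runtime, so a reference grabbed before .instrument() still points at the unpatched function. A toy sketch of the same mechanism (hypothetical names, no real framework involved):

```python
import types

# Stand-in for a framework module that an instrumentor would patch.
toy = types.ModuleType("toy")
toy.call_llm = lambda prompt: "raw:" + prompt

# Code that "imported" the function BEFORE instrumenting holds a direct reference.
early_ref = toy.call_llm

def instrument(module):
    # Runtime patching, as auto-instrumentors do: wrap and replace the attribute.
    original = module.call_llm
    module.call_llm = lambda prompt: "traced:" + original(prompt)

instrument(toy)
late_ref = toy.call_llm

print(early_ref("hi"))  # raw:hi        -> bypasses the wrapper, no span
print(late_ref("hi"))   # traced:raw:hi -> goes through the wrapper
```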