Take full control of OpenTelemetry. The getting started pages cover register() and OpenInference integrations — this page is for when you need more: batch processing for production, routing spans to multiple projects, or configuring resource attributes directly via the OpenTelemetry SDK.
OpenInference provides auto-instrumentors for popular frameworks. Install the package for your provider, call .instrument(), and every call is traced automatically.
Python packages
| Package | Description |
| --- | --- |
| openinference-semantic-conventions | Semantic conventions for tracing LLM apps |
| openinference-instrumentation-openai | OpenAI SDK |
| openinference-instrumentation-anthropic | Anthropic SDK |
| openinference-instrumentation-langchain | LangChain |
| openinference-instrumentation-llama-index | LlamaIndex |
| openinference-instrumentation-bedrock | AWS Bedrock |
| openinference-instrumentation-mistralai | MistralAI |
| openinference-instrumentation-dspy | DSPy |
| openinference-instrumentation-crewai | CrewAI |
| openinference-instrumentation-litellm | LiteLLM |
| openinference-instrumentation-groq | Groq |
| openinference-instrumentation-instructor | Instructor |
| openinference-instrumentation-haystack | Haystack |
| openinference-instrumentation-guardrails | Guardrails AI |
| openinference-instrumentation-vertexai | VertexAI |
JavaScript packages
| Package | Description |
| --- | --- |
| @arizeai/openinference-semantic-conventions | Semantic conventions |
| @arizeai/openinference-core | Core utility functions |
| @arizeai/openinference-instrumentation-openai | OpenAI SDK |
| @arizeai/openinference-instrumentation-langchain | LangChain.js |
| @arizeai/openinference-vercel | Vercel AI SDK |
register() wires these up for most apps — but when you need more control over the tracer itself, configure OpenTelemetry directly:
```typescript
import { registerInstrumentations } from "@opentelemetry/instrumentation";
import { OpenAIInstrumentation } from "@arizeai/openinference-instrumentation-openai";
import { ConsoleSpanExporter } from "@opentelemetry/sdk-trace-base";
import { NodeTracerProvider, BatchSpanProcessor } from "@opentelemetry/sdk-trace-node";
import { resourceFromAttributes } from "@opentelemetry/resources";
import { OTLPTraceExporter as GrpcOTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-grpc";
import { Metadata } from "@grpc/grpc-js";

const metadata = new Metadata();
metadata.set("arize-space-id", "your-space-id");
metadata.set("arize-api-key", "your-api-key");

const provider = new NodeTracerProvider({
  resource: resourceFromAttributes({
    "model_id": "your-project-name",
    "model_version": "v1",
  }),
  spanProcessors: [
    new BatchSpanProcessor(new ConsoleSpanExporter()),
    new BatchSpanProcessor(
      new GrpcOTLPTraceExporter({
        url: "https://otlp.arize.com/v1",
        metadata,
      })
    ),
  ],
});

registerInstrumentations({
  instrumentations: [new OpenAIInstrumentation({})],
});

provider.register();
```
Go has no register() helper or auto-instrumentors — direct OTel SDK configuration is the only path. Pair BatchSpanProcessor (production export) with a stdout exporter for local debugging:
```shell
go get \
  go.opentelemetry.io/otel \
  go.opentelemetry.io/otel/sdk \
  go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp \
  go.opentelemetry.io/otel/exporters/stdout/stdouttrace
```
```go
package main

import (
	"context"
	"log"
	"os"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp"
	"go.opentelemetry.io/otel/exporters/stdout/stdouttrace"
	"go.opentelemetry.io/otel/sdk/resource"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func initTracer(ctx context.Context) (*sdktrace.TracerProvider, error) {
	arizeExporter, err := otlptracehttp.New(ctx,
		otlptracehttp.WithEndpoint("otlp.arize.com"),
		otlptracehttp.WithHeaders(map[string]string{
			"arize-space-id": os.Getenv("ARIZE_SPACE_ID"),
			"arize-api-key":  os.Getenv("ARIZE_API_KEY"),
		}),
	)
	if err != nil {
		return nil, err
	}

	consoleExporter, err := stdouttrace.New(stdouttrace.WithPrettyPrint())
	if err != nil {
		return nil, err
	}

	// Resource attributes describe the source of telemetry. openinference.project.name
	// is required — Arize rejects spans (HTTP 500) without it.
	res, err := resource.New(ctx, resource.WithAttributes(
		attribute.String("openinference.project.name", "your-project-name"),
		attribute.String("model.version", "v1"),
	))
	if err != nil {
		return nil, err
	}

	tp := sdktrace.NewTracerProvider(
		sdktrace.WithBatcher(arizeExporter),   // production export to Arize
		sdktrace.WithBatcher(consoleExporter), // local debugging — drop in production
		sdktrace.WithResource(res),
	)
	otel.SetTracerProvider(tp)
	return tp, nil
}

func main() {
	ctx := context.Background()
	tp, err := initTracer(ctx)
	if err != nil {
		log.Fatalf("init tracer: %v", err)
	}
	defer tp.Shutdown(ctx) // flushes batched spans before exit
}
```
To route traces from one application to multiple Arize spaces or projects, use register_with_routing from arize-otel:
pip install arize-otel
```python
from arize.otel import register_with_routing, set_routing_context

# Register once with a single API key — routing happens per-context
tracer_provider = register_with_routing(
    api_key="your-api-key",
)

# Route specific operations to a different space + project
with set_routing_context(space_id="other-space-id", project_name="other-project"):
    # Spans created in this block are routed to "other-space-id" / "other-project"
    ...
```
register_with_routing uses ARIZE_API_KEY from your environment if api_key isn't passed. Both space_id and project_name must be set inside set_routing_context — otherwise routing won't be applied. This helper is Python-only today. For JS/TS or Go apps — or for more complex routing (e.g., by span attribute) — route at the OTel Collector layer instead. See OTEL Collector deployment patterns.
If you operate a centralized OpenTelemetry Collector serving many teams or spaces, see the shared-collector pattern that forwards arize-space-id from inbound request metadata, which avoids redeploying the collector each time a new space is added.
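A sketch of that forwarding pattern, assuming the stock headers_setter extension from opentelemetry-collector-contrib (the key names mirror the Arize headers used earlier on this page; endpoint and pipeline names are illustrative):

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        include_metadata: true  # keep inbound request metadata available to extensions

extensions:
  headers_setter:
    headers:
      - action: upsert
        key: arize-space-id
        from_context: arize-space-id  # copied from the inbound request metadata
      - action: upsert
        key: arize-api-key
        from_context: arize-api-key

exporters:
  otlp:
    endpoint: otlp.arize.com:443
    auth:
      authenticator: headers_setter

service:
  extensions: [headers_setter]
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp]
```

Because the space ID travels with each request rather than living in the config, onboarding a new space needs no collector change at all.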