Vercel AI SDK (3.3+) provides high-level helpers — generateText, streamText, generateObject — for calling LLMs from TypeScript apps. Arize AX captures every AI SDK call by ingesting the SDK’s native OpenTelemetry spans through the @arizeai/openinference-vercel span processor.
Three runtimes are commonly used to register OpenTelemetry alongside the AI SDK: @opentelemetry/sdk-trace-node (plain Node, shown below), @vercel/otel (Next.js / Vercel edge + Node runtimes), and @opentelemetry/sdk-node. They all wire the same OpenInferenceSimpleSpanProcessor from @arizeai/openinference-vercel — pick whichever matches your runtime. See Troubleshooting for the @vercel/otel setup and the version-pinning rules.
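The example below imports a `provider` from `./instrumentation`. A minimal sketch of that file for plain Node might look like the following — it assumes OTel JS SDK 2.x (span processors passed to the provider constructor, `resourceFromAttributes` from `@opentelemetry/resources`) and the OTLP/HTTP-proto exporter; the `model_id` resource attribute is how Arize maps spans to a project name, and the endpoint/header names mirror the Arize config used later in this guide:

```typescript
// instrumentation.ts — hedged sketch for plain Node, assuming OTel JS SDK 2.x.
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-proto";
import { resourceFromAttributes } from "@opentelemetry/resources";
import {
  isOpenInferenceSpan,
  OpenInferenceSimpleSpanProcessor,
} from "@arizeai/openinference-vercel";

export const provider = new NodeTracerProvider({
  resource: resourceFromAttributes({
    // Arize AX uses `model_id` as the project name.
    model_id: "vercel-ai-sdk-tracing-example",
  }),
  spanProcessors: [
    new OpenInferenceSimpleSpanProcessor({
      exporter: new OTLPTraceExporter({
        url: "https://otlp.arize.com/v1/traces",
        headers: {
          "arize-space-id": process.env.ARIZE_SPACE_ID ?? "",
          "arize-api-key": process.env.ARIZE_API_KEY ?? "",
        },
      }),
      // Drop non-AI spans (HTTP, fetch, etc.) — see "Span filter" below.
      spanFilter: isOpenInferenceSpan,
    }),
  ],
});

// Install the provider as the global tracer provider.
provider.register();
console.log("Arize AX tracing initialized for Vercel AI SDK.");
```

On OTel JS 1.x you would instead call `provider.addSpanProcessor(...)` and build the resource with `new Resource({...})`; treat the exact option names above as version-dependent.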
```ts
// example.ts
// Importing instrumentation first ensures tracing is set up before the
// AI SDK is used.
import { provider } from "./instrumentation";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

// `experimental_telemetry: { isEnabled: true }` is the AI SDK opt-in
// flag — without it, no spans are emitted no matter how OTel is wired.
const { text } = await generateText({
  model: openai("gpt-5"),
  prompt: "Why is the ocean salty? Answer in two sentences.",
  experimental_telemetry: { isEnabled: true },
});
console.log(text);

// Flush any pending spans before the process exits.
await provider.forceFlush();
```
```
Arize AX tracing initialized for Vercel AI SDK.
The ocean is salty because rivers continuously dissolve mineral salts from rocks and soil and carry them to the sea, where they accumulate over millions of years. Water leaves the ocean through evaporation but the salts remain, steadily concentrating until reaching today's roughly 3.5% salinity.
```
Open your Arize AX space and select project `vercel-ai-sdk-tracing-example`.
You should see a new trace within ~30 seconds containing an `ai.generateText` parent span wrapping an `ai.generateText.doGenerate` LLM child span (with prompt, response, and token usage attached).
Other instrumentations registered alongside the AI SDK (@opentelemetry/instrumentation-http, @vercel/otel, Next.js’s built-in tracing) emit POST / GET spans for every fetch, and the AI SDK’s spans nest under those HTTP roots. @arizeai/openinference-vercel exports an isOpenInferenceSpan predicate that drops the non-AI spans:
```ts
import {
  isOpenInferenceSpan,
  OpenInferenceSimpleSpanProcessor,
} from "@arizeai/openinference-vercel";

new OpenInferenceSimpleSpanProcessor({
  exporter: new OTLPTraceExporter({ /* ... */ }),
  spanFilter: isOpenInferenceSpan,
});
```
This is the filter shown in instrumentation.ts above. The trade-off: filtering removes the HTTP root spans, which orphans the surviving AI SDK spans on the Traces tab (no parent to anchor them) — they remain visible on the Spans tab. If you also need a clean trace tree on the Traces tab, swap the filter for a span processor that promotes the first AI SDK span to root by clearing its parent ID:
```ts
// root-aware-processor.ts
import { Context } from "@opentelemetry/api";
import { Span, SpanExporter } from "@opentelemetry/sdk-trace-base";
import {
  OpenInferenceBatchSpanProcessor,
  isOpenInferenceSpan,
} from "@arizeai/openinference-vercel";
import { getSession } from "@arizeai/openinference-core";
import { SESSION_ID } from "@arizeai/openinference-semantic-conventions";
import { LRUCache } from "lru-cache";

// Top-level AI SDK span names. Mastra and other higher-level wrappers may
// suffix a function id (e.g. `ai.generateText my-flow`), so we match on the
// first whitespace-delimited token rather than on the full span name.
const ROOT_OI_SPAN_PREFIXES = [
  "ai.generateText",
  "ai.generateObject",
  "ai.streamText",
  "ai.streamObject",
  "ai.embed",
  "ai.embedMany",
];

function isRootOISpanByName(spanName: string): boolean {
  const head = spanName.split(" ")[0];
  return ROOT_OI_SPAN_PREFIXES.includes(head);
}

interface RootAwareConfig {
  exporter: SpanExporter;
  /** LRU size for tracking which traces have a promoted root. */
  cacheSize?: number;
}

/**
 * Filters non-OpenInference spans (HTTP, fetch, etc.) and promotes the
 * first AI SDK span in each trace to root by clearing its parent IDs.
 * Also propagates session ids from context onto every emitted span.
 */
export class RootAwareOpenInferenceProcessor extends OpenInferenceBatchSpanProcessor {
  private traceIds: LRUCache<string, boolean>;

  constructor(config: RootAwareConfig) {
    super({ exporter: config.exporter, spanFilter: isOpenInferenceSpan });
    this.traceIds = new LRUCache({ max: config.cacheSize ?? 1000 });
  }

  onStart(span: Span, parentContext: Context): void {
    const session = getSession(parentContext);
    if (session?.sessionId) {
      span.setAttribute(SESSION_ID, session.sessionId);
    }

    const traceId = span.spanContext().traceId;
    if (isRootOISpanByName(span.name) && !this.traceIds.has(traceId)) {
      // parentSpanId is readonly on the public Span type; cast to clear.
      (span as unknown as { parentSpanId?: string }).parentSpanId = undefined;
      (span as unknown as { parentSpanContext?: unknown }).parentSpanContext =
        undefined;
      this.traceIds.set(traceId, true);
    }

    super.onStart(span, parentContext);
  }

  shutdown(): Promise<void> {
    this.traceIds.clear();
    return super.shutdown();
  }
}
```
Wire it in by replacing the OpenInferenceSimpleSpanProcessor in instrumentation.ts:
```ts
import { RootAwareOpenInferenceProcessor } from "./root-aware-processor";

spanProcessors: [
  new RootAwareOpenInferenceProcessor({
    exporter: new OTLPTraceExporter({
      url: "https://otlp.arize.com/v1/traces",
      headers: {
        "arize-space-id": process.env.ARIZE_SPACE_ID ?? "",
        "arize-api-key": process.env.ARIZE_API_KEY ?? "",
      },
    }),
  }),
],
```
`lru-cache` is the only extra dependency: `npm install lru-cache`.
No traces in Arize AX. Every AI SDK call needs experimental_telemetry: { isEnabled: true } set on it — without that flag, the SDK never emits spans. Also confirm ARIZE_SPACE_ID and ARIZE_API_KEY are set in the same shell that runs example.ts. Enable OpenTelemetry debug logs with export OTEL_LOG_LEVEL=debug and re-run.
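A shell sketch of that checklist — the credential values are placeholders, and `tsx` as the runner is an assumption (swap in your own run command):

```shell
# Placeholders — substitute real credentials before running.
export ARIZE_SPACE_ID="your-space-id"
export ARIZE_API_KEY="your-api-key"
export OPENAI_API_KEY="your-openai-key"

# Verbose OpenTelemetry diagnostics to surface exporter/auth problems.
export OTEL_LOG_LEVEL=debug

npx tsx example.ts
```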
401 from OpenAI. Verify OPENAI_API_KEY is set and has access to gpt-5. Swap openai("gpt-5") for a model your key can call.
Process exits before spans flush. Always await provider.forceFlush() (or provider.shutdown()) before the process exits, otherwise trailing spans are dropped.
Next.js / Vercel runtime. Use @vercel/otel’s registerOTel(...) instead of NodeTracerProvider, and pin versions: @vercel/otel@1.x requires @opentelemetry/* 1.x packages (0.1.x for the unstable APIs); @vercel/otel@2.x requires @opentelemetry/* 2.x (0.2.x). Mismatches surface as silently missing traces.
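A hedged sketch of the @vercel/otel wiring, assuming registerOTel accepts custom span processors via a `spanProcessors` option (check your @vercel/otel version’s docs for the exact config shape):

```typescript
// instrumentation.ts (Next.js) — sketch for the @vercel/otel path.
import { registerOTel } from "@vercel/otel";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-proto";
import {
  isOpenInferenceSpan,
  OpenInferenceSimpleSpanProcessor,
} from "@arizeai/openinference-vercel";

// Next.js calls this hook once per runtime at startup.
export function register() {
  registerOTel({
    serviceName: "vercel-ai-sdk-tracing-example",
    spanProcessors: [
      new OpenInferenceSimpleSpanProcessor({
        exporter: new OTLPTraceExporter({
          url: "https://otlp.arize.com/v1/traces",
          headers: {
            "arize-space-id": process.env.ARIZE_SPACE_ID ?? "",
            "arize-api-key": process.env.ARIZE_API_KEY ?? "",
          },
        }),
        spanFilter: isOpenInferenceSpan,
      }),
    ],
  });
}
```

Remember the version-pinning rule above: the @opentelemetry/* major used here must match what your @vercel/otel major expects.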
AI SDK spans orphaned on the Traces tab. Expected when isOpenInferenceSpan is the only filter — see Span filter above for the RootAwareOpenInferenceProcessor recipe that promotes the first AI SDK span to root.