
Vercel AI SDK (3.3+) provides high-level helpers — generateText, streamText, generateObject — for calling LLMs from TypeScript apps. Arize AX captures every AI SDK call by ingesting the SDK’s native OpenTelemetry spans through the @arizeai/openinference-vercel span processor.

Prerequisites

  • Node.js 18+
  • An Arize AX account (sign up)
  • An OPENAI_API_KEY from the OpenAI Platform
  • Vercel AI SDK 3.3 or higher

Launch Arize AX

  1. Sign in to your Arize AX account.
  2. From Space Settings, copy your Space ID and API Key. You will set them as ARIZE_SPACE_ID and ARIZE_API_KEY below.

Install

npm install ai @ai-sdk/openai \
  @arizeai/openinference-vercel \
  @opentelemetry/api \
  @opentelemetry/exporter-trace-otlp-proto \
  @opentelemetry/resources \
  @opentelemetry/sdk-trace-base \
  @opentelemetry/sdk-trace-node

Three runtimes are commonly used to register OpenTelemetry alongside the AI SDK: @opentelemetry/sdk-trace-node (plain Node, shown below), @vercel/otel (Next.js and Vercel edge/Node runtimes), and @opentelemetry/sdk-node. All three wire in the same OpenInferenceSimpleSpanProcessor from @arizeai/openinference-vercel, so pick whichever matches your runtime. See Troubleshooting for the @vercel/otel setup and the version-pinning rules.
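
For example, the @opentelemetry/sdk-node variant is a near-mechanical translation of the NodeTracerProvider setup below. A sketch, assuming a NodeSDK version that accepts a spanProcessors array:

// instrumentation-sdk-node.ts (alternative, sketch)
import { NodeSDK } from "@opentelemetry/sdk-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-proto";
import {
  isOpenInferenceSpan,
  OpenInferenceSimpleSpanProcessor,
} from "@arizeai/openinference-vercel";

const sdk = new NodeSDK({
  spanProcessors: [
    new OpenInferenceSimpleSpanProcessor({
      exporter: new OTLPTraceExporter({
        url: "https://otlp.arize.com/v1/traces",
        headers: {
          "arize-space-id": process.env.ARIZE_SPACE_ID ?? "",
          "arize-api-key": process.env.ARIZE_API_KEY ?? "",
        },
      }),
      // Drop non-OpenInference spans, as in instrumentation.ts below.
      spanFilter: isOpenInferenceSpan,
    }),
  ],
});

sdk.start();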

Configure credentials

export ARIZE_SPACE_ID="<your-space-id>"
export ARIZE_API_KEY="<your-api-key>"
export ARIZE_PROJECT_NAME="vercel-ai-sdk-tracing-example"
export OPENAI_API_KEY="<your-openai-api-key>"
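
If a variable is missing, the exporter simply sends empty headers (the ?? "" fallbacks in instrumentation.ts) and traces quietly fail to appear, as noted in Troubleshooting. A small guard at the top of instrumentation.ts can fail fast instead; a minimal sketch:

// Optional: fail fast when credentials are missing (sketch).
const required = ["ARIZE_SPACE_ID", "ARIZE_API_KEY", "OPENAI_API_KEY"];
for (const name of required) {
  if (!process.env[name]) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
}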

Set up tracing

// instrumentation.ts
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-proto";
import { resourceFromAttributes } from "@opentelemetry/resources";
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import {
  isOpenInferenceSpan,
  OpenInferenceSimpleSpanProcessor,
} from "@arizeai/openinference-vercel";

const projectName =
  process.env.ARIZE_PROJECT_NAME ?? "vercel-ai-sdk-tracing-example";

export const provider = new NodeTracerProvider({
  resource: resourceFromAttributes({
    model_id: projectName,
    model_version: "1.0.0",
  }),
  spanProcessors: [
    new OpenInferenceSimpleSpanProcessor({
      exporter: new OTLPTraceExporter({
        url: "https://otlp.arize.com/v1/traces",
        headers: {
          "arize-space-id": process.env.ARIZE_SPACE_ID ?? "",
          "arize-api-key": process.env.ARIZE_API_KEY ?? "",
        },
      }),
      // Drop non-OpenInference spans (e.g. raw HTTP fetch spans).
      spanFilter: isOpenInferenceSpan,
    }),
  ],
});

provider.register();

console.log("Arize AX tracing initialized for Vercel AI SDK.");

Run Vercel AI SDK

// example.ts

// Importing instrumentation first ensures tracing is set up before the
// AI SDK is used.
import { provider } from "./instrumentation";

import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

// `experimental_telemetry: { isEnabled: true }` is the AI SDK opt-in
// flag — without it, no spans are emitted no matter how OTel is wired.
const { text } = await generateText({
  model: openai("gpt-5"),
  prompt: "Why is the ocean salty? Answer in two sentences.",
  experimental_telemetry: { isEnabled: true },
});

console.log(text);

// Flush any pending spans before the process exits.
await provider.forceFlush();

Expected output

Arize AX tracing initialized for Vercel AI SDK.
The ocean is salty because rivers continuously dissolve mineral salts from rocks and soil and carry them to the sea, where they accumulate over millions of years. Water leaves the ocean through evaporation but the salts remain, steadily concentrating until reaching today's roughly 3.5% salinity.
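
Streaming calls are traced the same way. A sketch of the streamText variant, assuming AI SDK 3.3+ where streamText accepts the same experimental_telemetry settings:

// stream-example.ts (sketch)
import { provider } from "./instrumentation";

import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

const result = await streamText({
  model: openai("gpt-5"),
  prompt: "Why is the ocean salty? Answer in two sentences.",
  // Same opt-in flag; `functionId` is an optional label for the spans.
  experimental_telemetry: { isEnabled: true, functionId: "ocean-demo" },
});

// The span for a streamed call completes once the stream is consumed.
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}

await provider.forceFlush();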

Verify in Arize AX

  1. Open your Arize AX space and select project vercel-ai-sdk-tracing-example.
  2. You should see a new trace within ~30 seconds containing an ai.generateText parent span wrapping an ai.generateText.doGenerate LLM child span (with prompt, response, and token usage attached).
  3. If no traces appear, see Troubleshooting.

Span filter

Other instrumentations registered alongside the AI SDK (@opentelemetry/instrumentation-http, @vercel/otel, Next.js’s built-in tracing) emit POST / GET spans for every fetch, and the AI SDK’s spans nest under those HTTP roots. @arizeai/openinference-vercel exports an isOpenInferenceSpan predicate that drops the non-AI spans:

import {
  isOpenInferenceSpan,
  OpenInferenceSimpleSpanProcessor,
} from "@arizeai/openinference-vercel";

new OpenInferenceSimpleSpanProcessor({
  exporter: new OTLPTraceExporter({ /* ... */ }),
  spanFilter: isOpenInferenceSpan,
});

This is the filter shown in instrumentation.ts above. The trade-off: filtering removes the HTTP root spans, which orphans the surviving AI SDK spans on the Traces tab (no parent to anchor them), though they remain visible on the Spans tab. If you also need a clean trace tree on the Traces tab, swap the filter for a span processor that promotes the first AI SDK span to root by clearing its parent ID:

// root-aware-processor.ts
import { Context } from "@opentelemetry/api";
import { Span, SpanExporter } from "@opentelemetry/sdk-trace-base";
import {
  OpenInferenceBatchSpanProcessor,
  isOpenInferenceSpan,
} from "@arizeai/openinference-vercel";
import { getSession } from "@arizeai/openinference-core";
import { SemanticConventions } from "@arizeai/openinference-semantic-conventions";
import { LRUCache } from "lru-cache";

// Top-level AI SDK span names. Mastra and other higher-level wrappers
// may append a function ID (e.g. `ai.generateText my-flow`), so we
// match on the first whitespace-delimited token rather than the full
// span name.
const ROOT_OI_SPAN_PREFIXES = [
  "ai.generateText",
  "ai.generateObject",
  "ai.streamText",
  "ai.streamObject",
  "ai.embed",
  "ai.embedMany",
];

function isRootOISpanByName(spanName: string): boolean {
  // Compare only the first whitespace-delimited token, so suffixed
  // names like `ai.generateText my-flow` match while child spans like
  // `ai.generateText.doGenerate` do not.
  const head = spanName.split(" ")[0];
  return ROOT_OI_SPAN_PREFIXES.includes(head);
}

interface RootAwareConfig {
  exporter: SpanExporter;
  /** LRU size for tracking which traces have a promoted root. */
  cacheSize?: number;
}

/**
 * Filters non-OpenInference spans (HTTP, fetch, etc.) and promotes the
 * first AI SDK span in each trace to root by clearing its parent IDs.
 * Also propagates session ids from context onto every emitted span.
 */
export class RootAwareOpenInferenceProcessor
  extends OpenInferenceBatchSpanProcessor {
  private traceIds: LRUCache<string, boolean>;

  constructor(config: RootAwareConfig) {
    super({ exporter: config.exporter, spanFilter: isOpenInferenceSpan });
    this.traceIds = new LRUCache({ max: config.cacheSize ?? 1000 });
  }

  onStart(span: Span, parentContext: Context): void {
    const session = getSession(parentContext);
    if (session?.sessionId) {
      span.setAttribute(SemanticConventions.SESSION_ID, session.sessionId);
    }

    const traceId = span.spanContext().traceId;
    if (
      isRootOISpanByName(span.name) &&
      !this.traceIds.has(traceId)
    ) {
      // parentSpanId is readonly on the public Span type; cast to clear.
      (span as unknown as { parentSpanId?: string }).parentSpanId =
        undefined;
      (span as unknown as { parentSpanContext?: unknown })
        .parentSpanContext = undefined;
      this.traceIds.set(traceId, true);
    }

    super.onStart(span, parentContext);
  }

  shutdown(): Promise<void> {
    this.traceIds.clear();
    return super.shutdown();
  }
}

Wire it in by replacing the OpenInferenceSimpleSpanProcessor in instrumentation.ts:

import { RootAwareOpenInferenceProcessor } from "./root-aware-processor";

spanProcessors: [
  new RootAwareOpenInferenceProcessor({
    exporter: new OTLPTraceExporter({
      url: "https://otlp.arize.com/v1/traces",
      headers: {
        "arize-space-id": process.env.ARIZE_SPACE_ID ?? "",
        "arize-api-key": process.env.ARIZE_API_KEY ?? "",
      },
    }),
  }),
],

lru-cache is the only extra dependency: npm install lru-cache.
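
The onStart override also copies a session ID from the active context onto every span. To use it, wrap your AI SDK calls in a context that carries the session; a sketch using setSession from @arizeai/openinference-core:

import { context } from "@opentelemetry/api";
import { setSession } from "@arizeai/openinference-core";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

// Every AI SDK call inside the callback inherits this session ID.
await context.with(
  setSession(context.active(), { sessionId: "user-123-chat-42" }),
  async () => {
    await generateText({
      model: openai("gpt-5"),
      prompt: "Hello!",
      experimental_telemetry: { isEnabled: true },
    });
  },
);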

Troubleshooting

  • No traces in Arize AX. Every AI SDK call needs experimental_telemetry: { isEnabled: true } set on it — without that flag, the SDK never emits spans. Also confirm ARIZE_SPACE_ID and ARIZE_API_KEY are set in the same shell that runs example.ts. Enable OpenTelemetry debug logs with export OTEL_LOG_LEVEL=debug and re-run.
  • 401 from OpenAI. Verify OPENAI_API_KEY is set and has access to gpt-5. Swap openai("gpt-5") for a model your key can call.
  • Process exits before spans flush. Always await provider.forceFlush() (or provider.shutdown()) before the process exits, otherwise trailing spans are dropped.
  • Next.js / Vercel runtime. Use @vercel/otel’s registerOTel(...) instead of NodeTracerProvider, and pin versions: @vercel/otel@1.x requires @opentelemetry/* 1.x (0.1.x for unstable APIs); @vercel/otel@2.x requires @opentelemetry/* 2.x (0.2.x). Mismatches surface as silently missing traces. A minimal registerOTel setup is sketched after this list.
  • AI SDK spans orphaned on the Traces tab. Expected when isOpenInferenceSpan is the only filter — see Span filter above for the RootAwareOpenInferenceProcessor recipe that promotes the first AI SDK span to root.
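
A minimal Next.js instrumentation.ts using @vercel/otel. A sketch, assuming a @vercel/otel version whose registerOTel options accept a spanProcessors array (check your version's docs):

// instrumentation.ts (Next.js, sketch)
import { registerOTel } from "@vercel/otel";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-proto";
import {
  isOpenInferenceSpan,
  OpenInferenceSimpleSpanProcessor,
} from "@arizeai/openinference-vercel";

export function register() {
  registerOTel({
    serviceName:
      process.env.ARIZE_PROJECT_NAME ?? "vercel-ai-sdk-tracing-example",
    spanProcessors: [
      new OpenInferenceSimpleSpanProcessor({
        exporter: new OTLPTraceExporter({
          url: "https://otlp.arize.com/v1/traces",
          headers: {
            "arize-space-id": process.env.ARIZE_SPACE_ID ?? "",
            "arize-api-key": process.env.ARIZE_API_KEY ?? "",
          },
        }),
        spanFilter: isOpenInferenceSpan,
      }),
    ],
  });
}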

Resources

Vercel AI SDK Documentation

OpenInference Vercel Span Processor

Vercel AI SDK GitHub