
Semantic Kernel is Microsoft’s open-source SDK for blending LLMs with traditional code through kernel functions, planners, and prompt templates. Once OpenLIT is initialized, it instruments the LLM calls Semantic Kernel makes and emits OpenTelemetry spans; the openinference-instrumentation-openlit span processor then reshapes those spans into the OpenInference format Arize AX expects.
This guide covers the Python implementation of Semantic Kernel. The same OpenTelemetry principles apply to Semantic Kernel for C# and Java.

Prerequisites

Launch Arize

  1. Sign in to your Arize AX account.
  2. From Space Settings, copy your Space ID and API Key. You will set them as ARIZE_SPACE_ID and ARIZE_API_KEY below.

Install

pip install arize-otel \
  openinference-instrumentation-openlit \
  openlit semantic-kernel openai
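
If you want to confirm the install succeeded before wiring anything up, a quick version check like the one below works; the file name check_install.py is just illustrative.

# check_install.py (optional sanity check)
from importlib.metadata import version

for dist in (
    "arize-otel",
    "openinference-instrumentation-openlit",
    "openlit",
    "semantic-kernel",
    "openai",
):
    print(dist, version(dist))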

Configure credentials

export ARIZE_SPACE_ID="<your-space-id>"
export ARIZE_API_KEY="<your-api-key>"
export ARIZE_PROJECT_NAME="semantic-kernel-tracing-example"
export OPENAI_API_KEY="<your-openai-api-key>"
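
If you prefer to fail fast when a variable is missing, an optional check like this can run before the tracing setup; the file name check_env.py is illustrative.

# check_env.py (optional): verify the required environment variables are set.
import os

required = ["ARIZE_SPACE_ID", "ARIZE_API_KEY", "ARIZE_PROJECT_NAME", "OPENAI_API_KEY"]
missing = [name for name in required if not os.environ.get(name)]
if missing:
    raise SystemExit(f"Missing environment variables: {', '.join(missing)}")
print("All required environment variables are set.")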

Setup tracing

# instrumentation.py
import os

import openlit
from arize.otel import BatchSpanProcessor, PROJECT_NAME, Resource
from openinference.instrumentation.openlit import OpenInferenceSpanProcessor
from opentelemetry import trace as otel_trace
from opentelemetry.sdk.trace import TracerProvider

resource = Resource.create({PROJECT_NAME: os.environ["ARIZE_PROJECT_NAME"]})
tracer_provider = TracerProvider(resource=resource)

# Reshape raw OpenLIT spans into the OpenInference format Arize AX expects.
# Add this processor before the exporter so spans are already reshaped when
# they are handed off for export.
tracer_provider.add_span_processor(OpenInferenceSpanProcessor())

# Export spans to Arize AX.
tracer_provider.add_span_processor(
    BatchSpanProcessor(
        space_id=os.environ["ARIZE_SPACE_ID"],
        api_key=os.environ["ARIZE_API_KEY"],
    )
)

otel_trace.set_tracer_provider(tracer_provider)

# openlit.init() auto-detects the global TracerProvider set above.
openlit.init()
print("Arize AX tracing initialized for Semantic Kernel.")

Run Semantic Kernel

# example.py

# Importing instrumentation first ensures tracing is set up
# before `semantic_kernel` is imported.
from instrumentation import tracer_provider

import asyncio

from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion
from semantic_kernel.contents import ChatHistory


async def main() -> None:
    # OpenAIChatCompletion reads OPENAI_API_KEY from the environment.
    kernel = Kernel()
    chat = OpenAIChatCompletion(ai_model_id="gpt-5")
    kernel.add_service(chat)

    history = ChatHistory()
    history.add_user_message(
        "Why is the ocean salty? Answer in two sentences."
    )
    response = await chat.get_chat_message_content(
        chat_history=history,
        settings=chat.get_prompt_execution_settings_class()(),
    )
    print(str(response))


asyncio.run(main())
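
If you want all the OpenLIT spans for a single request grouped under one trace, you can wrap the call in a parent span of your own. This variant is optional and assumes the example.py above; the tracer and span names are illustrative. Replace the final asyncio.run(main()) line with:

# Optional: group all spans from one request under a parent span.
from opentelemetry import trace

tracer = trace.get_tracer("semantic-kernel-example")


async def main_with_parent_span() -> None:
    with tracer.start_as_current_span("ocean-question-request"):
        await main()  # the main() defined above


asyncio.run(main_with_parent_span())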

Expected output

Arize AX tracing initialized for Semantic Kernel.
The ocean is salty because rivers continuously dissolve mineral salts from rocks and soil and carry them to the sea, where they accumulate over millions of years. Water leaves the ocean through evaporation but the salts remain, steadily concentrating until reaching today's roughly 3.5% salinity.

Verify in Arize

  1. Open your Arize AX space and select project semantic-kernel-tracing-example.
  2. You should see a new trace within roughly 30 seconds. It contains a chat gpt-5 span (the span name reflects whichever model you called), emitted by OpenLIT and reshaped by the OpenInference processor, with the prompt, response, and token usage attached.
  3. If no traces appear, see Troubleshooting.

Troubleshooting

  • No traces in Arize. Confirm ARIZE_SPACE_ID and ARIZE_API_KEY are set in the same shell that runs example.py. Enable OpenTelemetry debug logs with export OTEL_LOG_LEVEL=debug and re-run.
  • Code ran but no spans appear. OpenLIT must be initialized after the global tracer provider is set. Confirm otel_trace.set_tracer_provider(tracer_provider) and openlit.init() both run before any Semantic Kernel call.
  • 401 from OpenAI. Verify OPENAI_API_KEY is set and has access to gpt-5. Swap for a model your key can call.
  • Other LLM providers. Semantic Kernel supports many AI services, including Azure OpenAI, Anthropic, and Google, via the matching connectors.ai.<provider> modules. The same OpenLIT + OpenInference setup covers them; an Azure OpenAI sketch follows this list.
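
As one example of the last bullet, switching the example to Azure OpenAI only changes how the chat service is constructed. The values below are placeholders, and this sketch assumes the AzureChatCompletion connector shipped with your semantic-kernel version.

# Sketch: swap OpenAIChatCompletion for the Azure OpenAI connector.
# Placeholder values shown; AzureChatCompletion can also read its
# configuration from AZURE_OPENAI_* environment variables.
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion

chat = AzureChatCompletion(
    deployment_name="<your-deployment-name>",
    endpoint="https://<your-resource>.openai.azure.com/",
    api_key="<your-azure-openai-api-key>",
)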

Resources

Semantic Kernel Documentation

OpenInference OpenLIT Span Processor

Semantic Kernel GitHub