Documentation Index

Fetch the complete documentation index at: https://arize-ax.mintlify.dev/docs/llms.txt

Use this file to discover all available pages before exploring further.

AutoGen AgentChat is Microsoft’s multi-agent framework for building robust agent teams. Arize AX captures every AgentChat run — assistant turns, tool calls, group-chat coordination, and the LLM calls beneath them — via the openinference-instrumentation-autogen-agentchat package.

Prerequisites

Launch Arize AX

  1. Sign in to your Arize AX account.
  2. From Space Settings, copy your Space ID and API Key. You will set them as ARIZE_SPACE_ID and ARIZE_API_KEY below.

Install

pip install arize-otel \
  openinference-instrumentation-autogen-agentchat \
  autogen-agentchat "autogen-ext[openai]"

Configure credentials

export ARIZE_SPACE_ID="<your-space-id>"
export ARIZE_API_KEY="<your-api-key>"
export ARIZE_PROJECT_NAME="autogen-agentchat-tracing-example"
export OPENAI_API_KEY="<your-openai-api-key>"
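Before running the examples, you can fail fast if any credential is unset. A minimal standard-library sketch; the `missing_vars` helper and `check_env.py` filename are illustrative, not part of any package above:

```python
# check_env.py -- abort early if a required credential is unset or empty.
import os

REQUIRED = [
    "ARIZE_SPACE_ID",
    "ARIZE_API_KEY",
    "ARIZE_PROJECT_NAME",
    "OPENAI_API_KEY",
]


def missing_vars(required, env=None):
    """Return the names of variables that are unset or empty in env."""
    env = os.environ if env is None else env
    return [name for name in required if not env.get(name)]


if __name__ == "__main__":
    missing = missing_vars(REQUIRED)
    if missing:
        raise SystemExit(f"Missing environment variables: {', '.join(missing)}")
    print("All credentials are set.")
```

Running `python check_env.py` before `python example.py` catches a missing key with a clear message instead of a mid-run exporter or OpenAI error.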

Set up tracing

# instrumentation.py
import os

from arize.otel import register
from openinference.instrumentation.autogen_agentchat import (
    AutogenAgentChatInstrumentor,
)

tracer_provider = register(
    space_id=os.environ["ARIZE_SPACE_ID"],
    api_key=os.environ["ARIZE_API_KEY"],
    project_name=os.environ["ARIZE_PROJECT_NAME"],
)

AutogenAgentChatInstrumentor().instrument(tracer_provider=tracer_provider)
print("Arize AX tracing initialized for AutoGen AgentChat.")

Run AutoGen AgentChat

# example.py

# Importing instrumentation first ensures tracing is set up
# before `autogen_agentchat` is imported.
from instrumentation import tracer_provider

import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    # OpenAIChatCompletionClient reads OPENAI_API_KEY from the environment.
    model_client = OpenAIChatCompletionClient(model="gpt-5")

    agent = AssistantAgent(
        name="assistant",
        model_client=model_client,
        system_message="You are a concise factual assistant.",
    )

    result = await agent.run(
        task="Why is the ocean salty? Answer in two sentences.",
    )
    print(result.messages[-1].content)
    await model_client.close()


if __name__ == "__main__":
    asyncio.run(main())

Expected output

Arize AX tracing initialized for AutoGen AgentChat.
The ocean is salty because rivers continuously dissolve mineral salts from rocks and soil and carry them to the sea, where they accumulate over millions of years. Water leaves the ocean through evaporation but the salts remain, steadily concentrating until reaching today's roughly 3.5% salinity.

Verify in Arize AX

  1. Open your Arize AX space and select project autogen-agentchat-tracing-example.
  2. You should see a new trace within ~30 seconds. It contains a parent invoke_agent span for the assistant agent, with OpenAIChatCompletionClient.create LLM child spans carrying the prompt, response, and token usage.
  3. If no traces appear, see Troubleshooting.

Troubleshooting

  • No traces in Arize AX. Confirm ARIZE_SPACE_ID and ARIZE_API_KEY are set in the same shell that runs example.py. Enable OpenTelemetry debug logs with export OTEL_LOG_LEVEL=debug and re-run.
  • AgentChat spans missing but other spans present. AutogenAgentChatInstrumentor().instrument(...) must run before any autogen_agentchat import. Make sure instrumentation.py is the first import in your entry point.
  • 401 from OpenAI. Verify OPENAI_API_KEY is set and has access to gpt-5. Swap for a model your key can call.
  • Forgetting await model_client.close(). AutoGen’s chat clients hold an HTTP session; close it at the end of the coroutine to release sockets cleanly. The omission won’t change traces, but unclosed-session warnings may appear at shutdown.

Resources

AutoGen AgentChat Documentation

OpenInference AutoGen AgentChat Instrumentor

AutoGen GitHub