BeeAI Framework is a Python and TypeScript framework from IBM for building production-grade AI agents, with tools, memory, multi-step reasoning, and pluggable LLM backends. Arize AX captures every BeeAI agent run via the openinference-instrumentation-beeai package.

Prerequisites

Launch Arize AX

  1. Sign in to your Arize AX account.
  2. From Space Settings, copy your Space ID and API Key. You will set them as ARIZE_SPACE_ID and ARIZE_API_KEY below.

Install

pip install arize-otel openinference-instrumentation-beeai beeai-framework

Configure credentials

export ARIZE_SPACE_ID="<your-space-id>"
export ARIZE_API_KEY="<your-api-key>"
export ARIZE_PROJECT_NAME="beeai-tracing-example"
export OPENAI_API_KEY="<your-openai-api-key>"
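A quick preflight check saves a silent failure later. This small script (the file name check_env.py and the helper missing_env_vars are names chosen here for illustration, not part of any package) reports which of the variables above are still unset:

```python
import os

# Variables exported in the previous step; all four must be non-empty.
REQUIRED_VARS = [
    "ARIZE_SPACE_ID",
    "ARIZE_API_KEY",
    "ARIZE_PROJECT_NAME",
    "OPENAI_API_KEY",
]


def missing_env_vars(required=REQUIRED_VARS):
    """Return the names of required variables that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]


if __name__ == "__main__":
    missing = missing_env_vars()
    if missing:
        raise SystemExit("Missing environment variables: " + ", ".join(missing))
    print("All credentials present.")
```

Run it with python check_env.py in the same shell before starting the example.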

Setup tracing

# instrumentation.py
import os

from arize.otel import register
from openinference.instrumentation.beeai import BeeAIInstrumentor

tracer_provider = register(
    space_id=os.environ["ARIZE_SPACE_ID"],
    api_key=os.environ["ARIZE_API_KEY"],
    project_name=os.environ["ARIZE_PROJECT_NAME"],
)

BeeAIInstrumentor().instrument(tracer_provider=tracer_provider)
print("Arize AX tracing initialized for BeeAI.")

Run BeeAI

# example.py

# Importing instrumentation first ensures tracing is set up
# before `beeai_framework` is imported.
from instrumentation import tracer_provider

import asyncio

from beeai_framework.agents.requirement.agent import RequirementAgent
from beeai_framework.backend.chat import ChatModel


async def main() -> None:
    # ChatModel.from_name routes through LiteLLM, so any
    # `<provider>:<model>` slug it supports works here. The OpenAI
    # provider reads OPENAI_API_KEY from the environment.
    llm = ChatModel.from_name("openai:gpt-5")

    agent = RequirementAgent(llm=llm, tools=[])
    response = await agent.run(
        "Why is the ocean salty? Answer in two sentences."
    )

    # response.output is a list of messages — print the final one.
    print(response.output[-1].text)


asyncio.run(main())

Expected output

Arize AX tracing initialized for BeeAI.
The ocean is salty because rivers continuously dissolve mineral salts from rocks and soil and carry them to the sea, where they accumulate over millions of years. Water leaves the ocean through evaporation but the salts remain, steadily concentrating until reaching today's roughly 3.5% salinity.

Verify in Arize AX

  1. Open your Arize AX space and select project beeai-tracing-example.
  2. You should see a new trace within ~30 seconds with this shape: a RequirementAgent root span (AGENT) wraps an OpenAIChatModel LLM child span (model gpt-5, prompt + response + token usage attached) and a final_answer tool span.
  3. If no traces appear, see Troubleshooting.
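One common cause of delayed or missing traces is the script exiting before the batch exporter has shipped its spans. force_flush() is a standard OpenTelemetry SDK method on the tracer provider, so a sketch of the fix is to add one line at the bottom of example.py:

```python
# Append to the bottom of example.py, after asyncio.run(main()):

# Block until every pending span has been exported
# (the SDK's default timeout is 30 seconds).
tracer_provider.force_flush()
```

This is only needed for short-lived scripts; long-running services flush on their normal export interval.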

Troubleshooting

  • No traces in Arize AX. Confirm ARIZE_SPACE_ID and ARIZE_API_KEY are set in the same shell that runs example.py. Enable OpenTelemetry debug logs with export OTEL_LOG_LEVEL=debug and re-run.
  • BeeAI spans missing but other spans present. BeeAIInstrumentor().instrument(...) must run before any from beeai_framework import .... Make sure instrumentation.py is the first import in your entry point.
  • 401 from OpenAI. Verify OPENAI_API_KEY is set and has access to gpt-5. Swap the openai:gpt-5 slug in ChatModel.from_name(...) for a model your key can call.
  • Other LLM providers. BeeAI delegates model calls to LiteLLM, so any LiteLLM-supported provider works — ChatModel.from_name("anthropic:claude-3-5-sonnet-20241022"), ChatModel.from_name("groq:llama-3.3-70b-versatile"), ChatModel.from_name("ollama:granite3.1-dense:8b"), etc. The same BeeAIInstrumentor covers them.
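The provider bullets above can be folded into a tiny helper that reads the slug from an environment variable, so switching providers needs no code change. (The variable name BEEAI_MODEL and the function resolve_model_slug are assumptions made here for illustration, not BeeAI conventions.)

```python
import os

DEFAULT_SLUG = "openai:gpt-5"  # matches the example above


def resolve_model_slug(env_var: str = "BEEAI_MODEL",
                       default: str = DEFAULT_SLUG) -> str:
    """Return a `<provider>:<model>` slug to pass to ChatModel.from_name."""
    return os.environ.get(env_var) or default
```

In example.py, llm = ChatModel.from_name(resolve_model_slug()) then picks up, for example, BEEAI_MODEL=groq:llama-3.3-70b-versatile from the shell and falls back to the OpenAI model otherwise.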

Resources

BeeAI Framework Documentation

OpenInference BeeAI Instrumentor (Python)

BeeAI Framework GitHub