

Amazon Bedrock Agents are managed agents you define, prepare, and deploy on AWS. Arize AX captures every Bedrock Agent invocation — the agent’s reasoning steps, action-group (tool) calls, knowledge-base lookups, and the underlying LLM calls — via the openinference-instrumentation-bedrock package (the same instrumentor used by the Amazon Bedrock tracing guide for plain invoke_model / converse calls).

Prerequisites

  • Python 3.10+
  • An Arize AX account (sign up)
  • An AWS account with:
    • Bedrock model access enabled for the foundation model your agent uses
    • A deployed Bedrock Agent in the PREPARED state, with a callable agent alias. Create one from the Bedrock console or via the bedrock-agent (control-plane) API — see Create an agent. You’ll need the agent ID (e.g. WYHKWZQCFM) and the agent alias ID (e.g. TSTALIASID for the auto-generated draft, or a custom alias) to invoke it.
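Before wiring up tracing, it can save a debugging cycle to preflight the agent's status with the bedrock-agent control-plane API. The sketch below is an illustrative helper (the `check_agent_ready` name is ours, not part of any SDK); it takes the client as a parameter, e.g. `boto3.client("bedrock-agent")`, so it can also be exercised without live AWS credentials:

```python
def check_agent_ready(control_plane, agent_id: str) -> str:
    """Preflight check: verify the agent is PREPARED before invoking it.

    `control_plane` is a boto3 client for the `bedrock-agent`
    (control-plane) service, e.g. boto3.client("bedrock-agent") --
    note this is a different service from `bedrock-agent-runtime`.
    """
    # get_agent returns {"agent": {"agentStatus": "PREPARED" | "NOT_PREPARED" | ...}}
    status = control_plane.get_agent(agentId=agent_id)["agent"]["agentStatus"]
    if status != "PREPARED":
        raise RuntimeError(f"Agent {agent_id} is {status!r}, not PREPARED")
    return status
```

Running this once before the tutorial's invoke step turns the later ValidationException into an immediate, readable error.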

Launch Arize AX

  1. Sign in to your Arize AX account.
  2. From Space Settings, copy your Space ID and API Key. You will set them as ARIZE_SPACE_ID and ARIZE_API_KEY below.

Install

pip install arize-otel openinference-instrumentation-bedrock boto3

Configure credentials

export ARIZE_SPACE_ID="<your-space-id>"
export ARIZE_API_KEY="<your-api-key>"
export ARIZE_PROJECT_NAME="amazon-bedrock-agents-tracing-example"

# AWS credentials — long-lived or SSO/STS temporary.
export AWS_ACCESS_KEY_ID="<your-aws-access-key-id>"
export AWS_SECRET_ACCESS_KEY="<your-aws-secret-access-key>"
export AWS_REGION="us-east-1"
# export AWS_SESSION_TOKEN="<your-session-token>"  # uncomment only for SSO / STS / federated logins; leave unset otherwise

# The Bedrock Agent to invoke. Both must already exist in your AWS
# account and the agent must be in `PREPARED` state.
export BEDROCK_AGENT_ID="<your-agent-id>"
export BEDROCK_AGENT_ALIAS_ID="<your-agent-alias-id>"
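A common failure mode is running the example in a shell missing one of these variables. As a hedged convenience (the `missing_env` helper and `REQUIRED` list are ours, not part of any SDK), you can check them all up front:

```python
import os

# Every variable the rest of this guide reads from the environment.
REQUIRED = [
    "ARIZE_SPACE_ID", "ARIZE_API_KEY", "ARIZE_PROJECT_NAME",
    "AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "AWS_REGION",
    "BEDROCK_AGENT_ID", "BEDROCK_AGENT_ALIAS_ID",
]

def missing_env(env=os.environ):
    """Return the names of required variables that are unset or empty."""
    return [name for name in REQUIRED if not env.get(name)]

if __name__ == "__main__":
    if missing := missing_env():
        raise SystemExit(f"Missing environment variables: {', '.join(missing)}")
```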

Set up tracing

# instrumentation.py
import os

from arize.otel import register
from openinference.instrumentation.bedrock import BedrockInstrumentor

tracer_provider = register(
    space_id=os.environ["ARIZE_SPACE_ID"],
    api_key=os.environ["ARIZE_API_KEY"],
    project_name=os.environ["ARIZE_PROJECT_NAME"],
)

BedrockInstrumentor().instrument(tracer_provider=tracer_provider)
print("Arize AX tracing initialized for Amazon Bedrock Agents.")

Run Amazon Bedrock Agents

# example.py

# Importing instrumentation first ensures BedrockInstrumentor patches
# boto3 before the bedrock-agent-runtime client is created.
from instrumentation import tracer_provider

import os
import time

import boto3

# IMPORTANT: agents use the `bedrock-agent-runtime` service, not the
# `bedrock-runtime` service used for plain model invocations.
client = boto3.client(
    "bedrock-agent-runtime",
    region_name=os.environ.get("AWS_REGION", "us-east-1"),
)

agent_id       = os.environ["BEDROCK_AGENT_ID"]
agent_alias_id = os.environ["BEDROCK_AGENT_ALIAS_ID"]

# `enableTrace=True` tells Bedrock to include the agent's internal
# reasoning steps in the response stream — the instrumentor folds
# those into the span tree.
response = client.invoke_agent(
    agentId=agent_id,
    agentAliasId=agent_alias_id,
    sessionId=f"arize-example-{int(time.time())}",
    inputText="Why is the ocean salty? Answer in two sentences.",
    enableTrace=True,
)

# invoke_agent returns a streaming EventStream — iterate to assemble
# the final answer.
final_text = []
for event in response["completion"]:
    if "chunk" in event and "bytes" in event["chunk"]:
        final_text.append(event["chunk"]["bytes"].decode("utf-8"))
print("".join(final_text))
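With enableTrace=True, the completion stream interleaves answer chunks with trace events. The instrumentor consumes those for you, but if you also want to inspect them locally, the loop above can be generalized into a small pure function (the `assemble_agent_response` name is ours; the event shapes follow the invoke_agent response format):

```python
def assemble_agent_response(completion_events):
    """Split an invoke_agent `completion` EventStream into the final
    answer text and the raw trace events (present when enableTrace=True).

    Works on any iterable of event dicts, so it is easy to unit-test
    with synthetic events.
    """
    text_parts, trace_events = [], []
    for event in completion_events:
        if "chunk" in event and "bytes" in event["chunk"]:
            # Answer text arrives as UTF-8 byte fragments.
            text_parts.append(event["chunk"]["bytes"].decode("utf-8"))
        elif "trace" in event:
            # Orchestration / reasoning steps; only sent when enableTrace=True.
            trace_events.append(event["trace"])
    return "".join(text_parts), trace_events
```

Usage: `answer, traces = assemble_agent_response(response["completion"])`, then print `answer` and, if you like, pretty-print `traces` to see the agent's reasoning alongside what lands in Arize AX.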

Expected output

Arize AX tracing initialized for Amazon Bedrock Agents.
The ocean is salty because rivers continuously dissolve mineral salts from rocks and soil and carry them to the sea, where they accumulate over millions of years. Water leaves the ocean through evaporation but the salts remain, steadily concentrating until reaching today's roughly 3.5% salinity.

Verify in Arize AX

  1. Open your Arize AX space and select project amazon-bedrock-agents-tracing-example.
  2. You should see a new trace within ~30 seconds containing:
    • a bedrock_agent.invoke_agent parent span (AGENT) wrapping per-step orchestration spans
    • any bedrock_agent.action_group_invocation (TOOL) or bedrock_agent.knowledge_base_lookup (RETRIEVER) spans your agent uses
    • a bedrock_agent.llm LLM span with the prompt, response, and token usage attached
  3. If no traces appear, see Troubleshooting.

Troubleshooting

  • No traces in Arize AX. Confirm ARIZE_SPACE_ID and ARIZE_API_KEY are set in the same shell that runs example.py. Enable OpenTelemetry debug logs with export OTEL_LOG_LEVEL=debug and re-run.
  • Bedrock Agents spans missing but other spans present. BedrockInstrumentor().instrument(...) must run before boto3.client("bedrock-agent-runtime", ...) is called. Make sure instrumentation.py is the first import in your entry point — clients created before instrumentation aren’t patched.
  • ValidationException: Agent <agent_id> is not in PREPARED state. A newly created agent stays in NOT_PREPARED until you click Prepare in the console (or call bedrock_agent.prepare_agent(agentId=...) and poll until agentStatus == "PREPARED"). Agents must be re-prepared after any change to instructions, action groups, or knowledge bases.
  • ResourceNotFoundException: ... agent alias .... A “draft” agent has no callable alias by default — the auto-generated test alias TSTALIASID only works once the agent is prepared. Create an alias via the Aliases tab in the Bedrock console or bedrock_agent.create_agent_alias(agentId=..., agentAliasName=...).
  • AccessDeniedException. Your IAM principal needs bedrock:InvokeAgent (note: different permission from bedrock:InvokeModel used by the plain Bedrock tracing guide) plus model-access for the underlying foundation model.
  • Only LLM spans, no agent-reasoning spans. Make sure enableTrace=True is in the invoke_agent call — without it, Bedrock returns only the final answer chunks and the instrumentor has no orchestration data to build the span tree.
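The prepare-and-poll remedy from the PREPARED-state bullet can be sketched as follows. This is an illustrative helper (the `prepare_and_wait` name is ours); it takes a `bedrock-agent` control-plane client as a parameter, so the polling logic is testable in isolation:

```python
import time

def prepare_and_wait(bedrock_agent, agent_id: str,
                     poll_s: float = 5.0, timeout_s: float = 300.0) -> None:
    """Call prepare_agent and poll get_agent until agentStatus == "PREPARED".

    `bedrock_agent` is a boto3 client for the `bedrock-agent`
    (control-plane) service, e.g. boto3.client("bedrock-agent").
    """
    bedrock_agent.prepare_agent(agentId=agent_id)
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = bedrock_agent.get_agent(agentId=agent_id)["agent"]["agentStatus"]
        if status == "PREPARED":
            return
        if status == "FAILED":
            raise RuntimeError(f"Agent {agent_id} failed to prepare")
        time.sleep(poll_s)
    raise TimeoutError(f"Agent {agent_id} not PREPARED after {timeout_s}s")
```

Remember that preparing updates the working draft; an alias pointing at a published version is unaffected until you publish a new version behind it.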

Resources

Amazon Bedrock Agents Documentation

OpenInference Bedrock Instrumentor

Bedrock Examples (Agents, Action Groups, Knowledge Bases)

Amazon Bedrock (plain invoke_model / converse) tracing