Tracing is how you see inside your LLM app. Without it, debugging is guesswork. With it, every call, every tool invocation, and every retrieval is visible.

Documentation Index
Fetch the complete documentation index at: https://arize-ax.mintlify.dev/docs/llms.txt
Use this file to discover all available pages before exploring further.
How Tracing Works
Your LLM application handles requests — each one might call a model, retrieve documents, run tools, and return a response. A trace captures that entire journey as a tree of spans. Each span represents one operation (an LLM call, a retrieval, a tool invocation) with its input, output, timing, and metadata.

- OpenTelemetry (OTel) — the universal, vendor-agnostic framework for collecting and exporting telemetry data. Your instrumentation isn’t locked to Arize AX — it works with any OTel-compatible backend.
- OpenInference — GenAI-specific semantic conventions built on top of OTel, created by Arize. OpenInference defines the attributes that make traces meaningful for LLM applications: span kinds, message formats, token counts, cost, and more. Arize accepts standard OTel spans and reads OpenInference attributes for richer GenAI visualization.
- Instrumentation wraps your function calls (automatically via integrations, or manually) and captures span data following OpenInference semantic conventions.
- An exporter sends the spans to Arize AX via OTLP (gRPC by default).
- The Arize collector ingests and visualizes them so you can explore, filter, and debug.
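To make the span-tree model concrete, here is a minimal pure-Python sketch of a trace as a tree of spans, each with a kind, attributes, and children. The `Span` class and the operation names are illustrative assumptions for this example, not the OTel SDK API:

```python
from dataclasses import dataclass, field

# A trace is a tree of spans; each span records one operation.
# Span kinds and attribute names mirror the conventions described above.
@dataclass
class Span:
    name: str
    span_kind: str                          # e.g. "LLM", "TOOL", "RETRIEVER"
    attributes: dict = field(default_factory=dict)
    children: list = field(default_factory=list)

    def add_child(self, child: "Span") -> "Span":
        self.children.append(child)
        return child

# One request's journey: an agent root span that retrieves documents,
# invokes a tool, and calls an LLM.
root = Span("handle_request", "AGENT")
root.add_child(Span("vector_search", "RETRIEVER",
                    {"input.value": "refund policy"}))
root.add_child(Span("lookup_order", "TOOL"))
root.add_child(Span("chat_completion", "LLM",
                    {"llm.model_name": "gpt-4o"}))

def span_count(span: Span) -> int:
    """Count spans in the tree rooted at `span`."""
    return 1 + sum(span_count(c) for c in span.children)

print(span_count(root))  # 4: the root plus its three child operations
```

In a real application the instrumentation builds this tree for you and the exporter ships each span to the collector; this sketch only shows the shape of the data that arrives there.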

What’s Inside a Trace
Now that you know how data flows from your app to Arize AX, let’s look at what a trace contains. Each span in the tree is assigned a span kind — the type of operation it represents:

| Span Kind | Description |
|---|---|
| LLM | Call to an LLM for a completion or chat |
| Tool | API or function invoked on behalf of an LLM |
| Agent | Root span containing a set of LLM and tool invocations |
| Retriever | Data retrieval query for context from a datastore |
| Chain | The starting point and link between application steps |
| Embedding | Encoding of unstructured data |
| Guardrail | Validates LLM inputs/outputs for safety and compliance |
| Reranker | Relevance-based re-ordering of documents |
| Evaluator | Evaluation process, type, and results |
| Audio | Audio or voice processing operations |

Span Attributes
Each span also carries attributes — key-value pairs like `llm.model_name`, `llm.input_messages`, `input.value`, and `output.value`. These are the OpenInference semantic conventions — auto-instrumentation sets them for you, and you can add your own via Customize your traces. Check the OpenInference source for the full list of attributes defined for each span kind.
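For instance, an LLM span’s attributes might look like the flattened key-value pairs below. The keys follow OpenInference naming (nested messages are flattened with dotted indices); the values are invented for illustration:

```python
# Hypothetical attribute payload for a single LLM span. Keys follow
# OpenInference semantic conventions; values are made up for this example.
llm_span_attributes = {
    "openinference.span.kind": "LLM",
    "llm.model_name": "gpt-4o",
    "llm.input_messages.0.message.role": "user",
    "llm.input_messages.0.message.content": "Summarize our refund policy.",
    "llm.token_count.prompt": 12,
    "llm.token_count.completion": 48,
    "input.value": "Summarize our refund policy.",
    "output.value": "Refunds are issued within 30 days of purchase...",
}

# Derived values, like total token usage, come from these attributes.
total_tokens = (llm_span_attributes["llm.token_count.prompt"]
                + llm_span_attributes["llm.token_count.completion"])
print(total_tokens)  # 60
```

Because attributes are plain key-value pairs, any OTel-compatible backend can store them; Arize AX additionally reads the OpenInference keys to render rich GenAI views.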