LlamaIndex Workflows are a building block for complex, event-driven LLM applications — each
@step is a typed handler that emits the next event. Arize AX captures every workflow run — each step invocation, the events flowing between them, and the LLM calls made inside steps — via the openinference-instrumentation-llama-index package, the same instrumentor that covers core LlamaIndex.
If you’ve already followed the LlamaIndex tracing guide, workflows are already traced — there is one instrumentor for both. This page is a workflow-focused setup that you can follow standalone.
Prerequisites
- Python 3.10+
- An Arize AX account (sign up)
- An `OPENAI_API_KEY` from the OpenAI Platform
Launch Arize AX
- Sign in to your Arize AX account.
- From Space Settings, copy your Space ID and API Key. You will set them as `ARIZE_SPACE_ID` and `ARIZE_API_KEY` below.
Install
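The install command did not survive extraction; a minimal sketch, assuming the usual package names — `openinference-instrumentation-llama-index` is named in the text above, while `arize-otel` (the Arize exporter helper) and the OpenAI-backed LlamaIndex packages are assumptions:

```shell
pip install arize-otel openinference-instrumentation-llama-index \
    llama-index llama-index-llms-openai
```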
Configure credentials
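Export the three credentials collected above so the tracing and example scripts can read them from the environment (the placeholder values are illustrative):

```shell
export ARIZE_SPACE_ID="your-space-id"   # from Space Settings
export ARIZE_API_KEY="your-api-key"     # from Space Settings
export OPENAI_API_KEY="sk-..."          # from the OpenAI Platform
```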
Setup tracing
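The tracing setup block was lost in extraction; a sketch of an `instrumentation.py`, assuming the `arize.otel.register` helper is how spans are exported (the instrumentor class itself comes from the package named above). As the Troubleshooting section notes, this module must be imported before anything from `llama_index`:

```python
# instrumentation.py — import this before any llama_index import.
import os

from arize.otel import register
from openinference.instrumentation.llama_index import LlamaIndexInstrumentor

# Export spans to Arize AX under the project name used in the Verify section.
tracer_provider = register(
    space_id=os.environ["ARIZE_SPACE_ID"],
    api_key=os.environ["ARIZE_API_KEY"],
    project_name="llamaindex-workflows-tracing-example",
)

# One instrumentor covers core LlamaIndex and Workflows alike.
LlamaIndexInstrumentor().instrument(tracer_provider=tracer_provider)
```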
Run LlamaIndex Workflows
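The runnable example was also stripped; a sketch of an `example.py` consistent with the span names the Verify section expects (`OceanFactWorkflow.run`, `OceanFactWorkflow.answer`, `OpenAI.acomplete`) — the prompt text and topic are illustrative:

```python
# example.py
import asyncio

import instrumentation  # noqa: F401 — must load before llama_index imports
from llama_index.core.workflow import StartEvent, StopEvent, Workflow, step
from llama_index.llms.openai import OpenAI


class OceanFactWorkflow(Workflow):
    """Single-step workflow: StartEvent in, StopEvent out."""

    @step
    async def answer(self, ev: StartEvent) -> StopEvent:
        llm = OpenAI(model="gpt-5")
        # acomplete is traced as an LLM child span of this step span.
        response = await llm.acomplete(f"Tell me one fact about {ev.topic}.")
        return StopEvent(result=str(response))


async def main() -> None:
    # Generous timeout: reasoning models can exceed the 45 s default.
    workflow = OceanFactWorkflow(timeout=180)
    result = await workflow.run(topic="the Mariana Trench")
    print(result)


if __name__ == "__main__":
    asyncio.run(main())
```

Keyword arguments passed to `workflow.run(...)` become attributes on the `StartEvent`, which is why the step can read `ev.topic`.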
Expected output
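Model output varies run to run, but a successful run prints a single completion and exits cleanly while spans export in the background — something along these lines:

```
The Mariana Trench reaches a depth of roughly 11 km, making it the deepest
known point in any ocean.
```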
Verify in Arize AX
- Open your Arize AX space and select project `llamaindex-workflows-tracing-example`.
- You should see a new trace within ~30 seconds containing an `OceanFactWorkflow.run` parent span wrapping a step span (`OceanFactWorkflow.answer`) and a nested `OpenAI.acomplete` LLM child span with the prompt, response, and token usage attached.
- If no traces appear, see Troubleshooting.
Troubleshooting
- No traces in Arize AX. Confirm `ARIZE_SPACE_ID` and `ARIZE_API_KEY` are set in the same shell that runs `example.py`. Enable OpenTelemetry debug logs with `export OTEL_LOG_LEVEL=debug` and re-run.
- Workflow ran but no spans appear. `LlamaIndexInstrumentor().instrument(...)` must run before any `llama_index` import. Make sure `instrumentation.py` is the first import in your entry point.
- `401` from OpenAI. Verify `OPENAI_API_KEY` is set and has access to `gpt-5`. Swap in a model your key can call.
- Step did not return a `StopEvent`. Workflows finish only when a step returns `StopEvent` (or `StartEvent` rolls into a chain that eventually does). Check each `@step`'s return type.
- `WorkflowTimeoutError: Operation timed out after N.0 seconds`. A LlamaIndex Workflow has its own timeout — 45 s by default — separate from any HTTP-client timeout your LLM library uses. Reasoning-heavy models (`gpt-5`, `o3`, etc.) can blow past that on the first call. Pass `timeout=180` (or similar) to the workflow constructor as shown in the Run section.