CrewAI is a Python framework for orchestrating role-playing autonomous agents — agents collaborate on tasks, hand off work, and call tools. Arize AX captures every CrewAI run — Crew / Agent / Task spans plus the underlying LLM calls — via the `openinference-instrumentation-crewai` package, paired with an LLM-level instrumentor matching the provider CrewAI routes to.
CrewAI Tracing Tutorial (Google Colab)
Prerequisites
- Python 3.10+
- An Arize AX account (sign up)
- An `OPENAI_API_KEY` from the OpenAI Platform
- CrewAI 1.0+ (this guide uses the native-SDK routing model introduced in 1.0; for 0.x, see Troubleshooting)
Launch Arize AX
- Sign in to your Arize AX account.
- From Space Settings, copy your Space ID and API Key. You will set them as `ARIZE_SPACE_ID` and `ARIZE_API_KEY` below.
Install
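A minimal install for the OpenAI pairing used in this guide might look like the following. The two `openinference-instrumentation-*` packages are named in this guide; `crewai` and `arize-otel` (Arize's OpenTelemetry registration helper) are assumptions based on the standard Arize AX setup:

```shell
pip install crewai arize-otel \
    openinference-instrumentation-crewai \
    openinference-instrumentation-openai
```

If you target a different provider, swap `openinference-instrumentation-openai` for the instrumentor from the table below.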
CrewAI 1.0+ routes LLM calls based on the model-string prefix: `openai/...` and bare model names go to the native OpenAI SDK, `anthropic/...` goes to Anthropic, and so on. The LLM-level instrumentor must match the provider CrewAI is actually calling. The example below uses `openai/gpt-5`, so it pairs `openinference-instrumentation-crewai` with `openinference-instrumentation-openai`. Other prefixes need a different pairing — see Which instrumentor do I need? below.

Which instrumentor do I need?
CrewAI 1.0+ routes LLM calls to different provider SDKs based on your model string. Match your LLM-level instrumentor to the provider CrewAI is actually calling:

| Model string | Provider SDK used | Instrumentor package |
|---|---|---|
| `openai/gpt-5`, `gpt-5`, or no prefix | OpenAI (native) | `openinference-instrumentation-openai` |
| `anthropic/claude-sonnet-4-6` | Anthropic (native) | `openinference-instrumentation-anthropic` |
| Groq, Together, or other providers without a native CrewAI SDK | LiteLLM (fallback) | `openinference-instrumentation-litellm` |

The CrewAI instrumentor (`openinference-instrumentation-crewai`) is always required regardless of provider — it captures Crew, Agent, and Task spans.
Configure credentials
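Export the Arize credentials from Space Settings plus your OpenAI key in the shell that will run the example (placeholder values below are yours to replace):

```shell
# Values from Arize AX Space Settings
export ARIZE_SPACE_ID="your-space-id"
export ARIZE_API_KEY="your-api-key"
# Key from the OpenAI Platform
export OPENAI_API_KEY="your-openai-api-key"
```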
Setup tracing
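A minimal `instrumentation.py` sketch for the `openai/gpt-5` pairing. The `arize.otel.register(...)` helper and its parameters follow Arize's standard setup; treat the exact signature as an assumption and check the Arize docs for your version:

```python
# instrumentation.py — import this FIRST, before crewai or openai.
import os

from arize.otel import register
from openinference.instrumentation.crewai import CrewAIInstrumentor
from openinference.instrumentation.openai import OpenAIInstrumentor

# Send spans to Arize AX; the project name is what you open in the UI.
tracer_provider = register(
    space_id=os.environ["ARIZE_SPACE_ID"],
    api_key=os.environ["ARIZE_API_KEY"],
    project_name="crewai-tracing-example",
)

# Crew / Agent / Task spans — always required.
CrewAIInstrumentor().instrument(tracer_provider=tracer_provider)
# LLM spans for the native OpenAI SDK (openai/... model strings).
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
```

For a different provider, replace `OpenAIInstrumentor` with the instrumentor from the table above.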
Run CrewAI
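A minimal crew to generate a trace, sketched as an `example.py` (the filename the Troubleshooting section uses). The agent and task fields are illustrative; the key points are the model string and that the instrumentation module is imported before anything from `crewai`:

```python
# example.py — instrumentation must be imported before crewai.
import instrumentation  # noqa: F401

from crewai import LLM, Agent, Crew, Task

# openai/ prefix → CrewAI routes to the native OpenAI SDK.
llm = LLM(model="openai/gpt-5")

researcher = Agent(
    role="Researcher",
    goal="Explain why agent runs should be traced",
    backstory="A concise technical writer.",
    llm=llm,
)

task = Task(
    description="Write two sentences on why LLM agent runs should be traced.",
    expected_output="Two sentences.",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[task])
result = crew.kickoff()
print(result)
```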
Expected output
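Running `python example.py` prints CrewAI's execution log followed by the crew's final answer; the exact text varies by CrewAI version and run. An illustrative (not literal) transcript:

```text
[Crew] Execution started
[Agent] Researcher — working on task: Write two sentences on why ...
[Task] Completed
<final task output printed here>
```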
Verify in Arize AX
- Open your Arize AX space and select project `crewai-tracing-example`.
- You should see a new trace within ~30 seconds with this shape: a `Crew_<uuid>.kickoff` root span (CHAIN) wraps `Crew Created`, per-agent `<role>._execute_core` (AGENT), and per-task `Task Created` / `Task Execution` spans. Each task’s `Task Execution` wraps a `ChatCompletion` LLM span (model `gpt-5-2025-08-07`, with prompt, response, and token usage attached).
- If no traces appear, see Troubleshooting.
Troubleshooting
- No traces in Arize AX. Confirm `ARIZE_SPACE_ID` and `ARIZE_API_KEY` are set in the same shell that runs `example.py`. Enable OpenTelemetry debug logs with `export OTEL_LOG_LEVEL=debug` and re-run.
- Crew / Agent / Task spans appear but no LLM spans. The LLM-level instrumentor doesn’t match the provider CrewAI is calling. CrewAI 1.0+ routes by model-string prefix:
  - `openai/<model>` (or no prefix) → install `openinference-instrumentation-openai`
  - `anthropic/<model>` → install `openinference-instrumentation-anthropic`
  - Groq / Together / providers without a native CrewAI SDK → install `openinference-instrumentation-litellm` (CrewAI falls back to LiteLLM for these)
- All spans missing — no Crew, no LLM. This is almost always import order. `CrewAIInstrumentor().instrument(...)` and the LLM-level instrumentor both monkey-patch class methods or module functions; if `crewai` (or the LLM library) is imported before `instrument()` runs, the patches land on stale references and never fire. Make sure `instrumentation.py` is the first import in your entry point — and watch for transitive imports (importing your own `crew.py` module that has `from crewai import ...` at the top has the same effect).
- `401` from OpenAI. Verify `OPENAI_API_KEY` is set and has access to `gpt-5`. Swap `openai/gpt-5` in `LLM(model=...)` for a model your key can call.
- Version-check error from `CrewAIInstrumentor`. If you’re on a CrewAI release candidate that’s outside the instrumentor’s supported range, escape-hatch with `CrewAIInstrumentor().instrument(skip_dep_check=True, tracer_provider=tracer_provider)`.
- You’re on CrewAI 0.x. 0.x routes every LLM call through LangChain or LiteLLM rather than provider-native SDKs. Pair the CrewAI instrumentor with both `openinference-instrumentation-langchain` and `openinference-instrumentation-litellm` instead of `openinference-instrumentation-openai`. The Setup tracing call list becomes `CrewAIInstrumentor().instrument(...)` + `LangChainInstrumentor().instrument(...)` + `LiteLLMInstrumentor().instrument(...)`. Upgrade to 1.0+ when you can — the LiteLLM removal guide covers the migration.
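For the 0.x case, the instrumentation module would pair CrewAI with both fallback instrumentors. A sketch, assuming the same `arize.otel.register(...)` helper as in the 1.0+ setup:

```python
# instrumentation.py for CrewAI 0.x — pair CrewAI with LangChain + LiteLLM.
import os

from arize.otel import register
from openinference.instrumentation.crewai import CrewAIInstrumentor
from openinference.instrumentation.langchain import LangChainInstrumentor
from openinference.instrumentation.litellm import LiteLLMInstrumentor

tracer_provider = register(
    space_id=os.environ["ARIZE_SPACE_ID"],
    api_key=os.environ["ARIZE_API_KEY"],
    project_name="crewai-tracing-example",
)

CrewAIInstrumentor().instrument(tracer_provider=tracer_provider)
# 0.x routes LLM calls through LangChain or LiteLLM, so instrument both.
LangChainInstrumentor().instrument(tracer_provider=tracer_provider)
LiteLLMInstrumentor().instrument(tracer_provider=tracer_provider)
```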