The `arize-phoenix-evals` library uses an LLM as a judge to grade model output for hallucinations, factuality, helpfulness, toxicity, or custom rubrics. Plug OpenAI in as the judge by passing `provider="openai"` to the `LLM(...)` wrapper, then build a `create_classifier(...)` evaluator and run it over a DataFrame with `evaluate_dataframe(...)`.
## Prerequisites
- Python 3.11+
- An `OPENAI_API_KEY` from the OpenAI Platform
## Install
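A minimal install. The `arize-phoenix-evals` package name comes from this guide; adding `openai` and `pandas` alongside it is an assumption about a typical setup:

```bash
pip install arize-phoenix-evals openai pandas
```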
## Configure credentials
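One way to expose the key, assuming the library reads `OPENAI_API_KEY` from the environment as the Troubleshooting section implies:

```bash
export OPENAI_API_KEY="sk-..."  # paste your key from the OpenAI Platform
```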
## Set up the eval LLM
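A sketch of the judge setup, assuming the `LLM` wrapper lives at `phoenix.evals.llm` as in the library's 2.x releases:

```python
from phoenix.evals.llm import LLM

# The judge model; any model your key can call works (see Troubleshooting)
llm = LLM(provider="openai", model="gpt-5")
```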
Swap `model=` to `gpt-5-mini` for a cheaper judge if you're evaluating large batches; the judge's job is classification, not generation, so a smaller model is often sufficient.
## Run an evaluation
This example builds a hallucination classifier and grades two sample question/answer pairs against a reference. The pattern generalizes: replace the prompt template, choices, and DataFrame columns with whatever metric you want to evaluate.
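A runnable sketch of that pattern. The `create_classifier(...)` and `evaluate_dataframe(...)` calls and the `{input}`/`{output}`/`{reference}` template variables come from this guide; the exact prompt wording, the label-to-score `choices` mapping, and the sample rows are illustrative assumptions:

```python
import pandas as pd
from phoenix.evals import create_classifier, evaluate_dataframe
from phoenix.evals.llm import LLM

llm = LLM(provider="openai", model="gpt-5")

# Judge prompt: classify each answer as factual or hallucinated given a reference
hallucination_evaluator = create_classifier(
    name="hallucination",
    llm=llm,
    prompt_template=(
        "Given the reference text, decide whether the answer is factual or hallucinated.\n\n"
        "Question: {input}\n"
        "Reference: {reference}\n"
        "Answer: {output}"
    ),
    choices={"factual": 1.0, "hallucinated": 0.0},
)

# Two sample question/answer pairs graded against the same reference
df = pd.DataFrame(
    {
        "input": [
            "Who wrote Pride and Prejudice?",
            "Who wrote Pride and Prejudice?",
        ],
        "reference": [
            "Pride and Prejudice is an 1813 novel by Jane Austen.",
            "Pride and Prejudice is an 1813 novel by Jane Austen.",
        ],
        "output": [
            "Jane Austen wrote Pride and Prejudice.",
            "Charles Dickens wrote Pride and Prejudice in 1850.",  # hallucinated
        ],
    }
)

results_df = evaluate_dataframe(df, [hallucination_evaluator])
print(results_df.columns.tolist())
```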
### Expected output

The returned DataFrame keeps the original columns and appends `hallucination_execution_details` (status, exceptions, timing) and `hallucination_score`, which holds each evaluator result's full dict (name, score, label, explanation, metadata, kind, direction). This is useful for surfacing the LLM's reasoning, persisting eval rows back to Arize AX, or filtering retries.
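For instance, to surface the judge's reasoning per row (a sketch assuming each `hallucination_score` cell holds the result dict described above):

```python
for result in results_df["hallucination_score"]:
    print(result["label"], "-", result["explanation"])
```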
## Troubleshooting
- **401 from OpenAI.** Verify `OPENAI_API_KEY` is set and has access to `gpt-5`. Swap the `model=` argument for any model your key can call (e.g. `gpt-5-mini` for cheaper batch evaluations).
- **All rows return the same label.** Your prompt template isn't differentiating cases. Make sure each row's `{input}`/`{output}`/`{reference}` columns expose enough context for the judge to discriminate, and that `choices` lists every label your prompt asks the LLM to emit.
- **Some rows fail with timeout / rate-limit.** Pass `max_retries=` to `evaluate_dataframe(...)` (defaults to 3). For large batches, also pass `initial_per_second_request_rate=...` to `LLM(...)` to throttle (see the sketch after this list).
- **Logging results back to Arize AX.** This guide stops at producing the eval DataFrame. To attach those evals to existing spans in an Arize AX project, use `log_evaluations_sync` on `arize.Client`.
- **Using Azure OpenAI instead.** Pass `sync_client_kwargs={"azure_endpoint": ..., "api_version": ...}` to `LLM(...)`. The same evaluator code works against an Azure-deployed model.
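The retry, throttling, and Azure fixes combined in one sketch, using only the keyword arguments named above (the endpoint and API version values are placeholders):

```python
from phoenix.evals import evaluate_dataframe
from phoenix.evals.llm import LLM

llm = LLM(
    provider="openai",
    model="gpt-5-mini",                 # cheaper judge for large batches
    initial_per_second_request_rate=5,  # throttle to stay under rate limits
    # For Azure OpenAI, route the underlying client to your deployment:
    # sync_client_kwargs={
    #     "azure_endpoint": "https://<your-resource>.openai.azure.com",
    #     "api_version": "<api-version>",
    # },
)

# Raise retries above the default of 3 for flaky batches
results_df = evaluate_dataframe(df, [hallucination_evaluator], max_retries=5)
```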