The arize-phoenix-evals library uses an LLM-as-judge to grade model outputs for hallucinations, factuality, helpfulness, toxicity, or any custom rubric. Phoenix Evals does not ship a native Mistral adapter, so Mistral plugs in through the LiteLLM proxy: pass provider="litellm" and model="mistral/<model-id>" to the LLM(...) wrapper, build a create_classifier(...) evaluator, and run it over a DataFrame with evaluate_dataframe(...).

Prerequisites

Install

pip install arize-phoenix-evals litellm pandas
litellm is the proxy that routes the eval calls to Mistral. You don’t need the mistralai SDK installed separately.

Configure credentials

export MISTRAL_API_KEY="<your-mistral-api-key>"
LiteLLM reads MISTRAL_API_KEY from the environment when it sees a mistral/... model id.
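
Optionally, verify the key and the LiteLLM-to-Mistral route before wiring up Phoenix. This is a minimal sanity-check sketch, not part of the eval flow; the model id is just an example, and any mistral/<id> you have access to works.

# check_mistral.py: optional sanity check that LiteLLM can reach Mistral with this key
import os

import litellm

assert "MISTRAL_API_KEY" in os.environ, "export MISTRAL_API_KEY first"

# A plain completion call exercises the same route the evaluator will use.
response = litellm.completion(
    model="mistral/mistral-small-latest",
    messages=[{"role": "user", "content": "Reply with the single word: ok"}],
)
print(response.choices[0].message.content)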

Set up the eval LLM

# eval_setup.py
from phoenix.evals import LLM

# `provider="litellm"` routes through the LiteLLM proxy.
# The `mistral/` prefix on the model id tells LiteLLM which provider
# to dispatch to and which env var (MISTRAL_API_KEY) to read.
llm = LLM(provider="litellm", model="mistral/mistral-large-latest")
mistral-large-latest is a strong default judge; for cheaper batch evals swap in mistral/mistral-small-latest. The judge’s job is classification, not generation, so a smaller model is often sufficient.
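
For example, a cheaper judge is just a different model id on the same wrapper; nothing downstream changes. A minimal sketch:

# Same wrapper, different model id; the classifier and DataFrame code are unchanged.
from phoenix.evals import LLM

cheap_judge = LLM(provider="litellm", model="mistral/mistral-small-latest")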

Run an evaluation

This example builds a hallucination classifier and grades two sample question/answer pairs against a reference. The pattern generalizes: replace the prompt template, choices, and DataFrame columns with whatever metric you want to evaluate.
# example.py
import pandas as pd

from phoenix.evals import LLM, create_classifier, evaluate_dataframe

llm = LLM(provider="litellm", model="mistral/mistral-large-latest")

HALLUCINATION_PROMPT = """\
Determine whether the answer below is factually supported by the
reference. Reply with exactly one of: factual, hallucinated.

Question: {input}
Answer: {output}
Reference: {reference}
"""

evaluator = create_classifier(
    name="hallucination",
    prompt_template=HALLUCINATION_PROMPT,
    llm=llm,
    # `choices` maps each label the LLM may emit to a numeric score.
    # `direction="maximize"` (the default) means higher score is better.
    choices={"factual": 1.0, "hallucinated": 0.0},
)

df = pd.DataFrame([
    {
        "input":     "What is the capital of France?",
        "output":    "Paris is the capital of France.",
        "reference": "Paris is the capital and most populous city of France.",
    },
    {
        "input":     "What is the capital of France?",
        "output":    "Berlin is the capital of France.",
        "reference": "Paris is the capital and most populous city of France.",
    },
])

results = evaluate_dataframe(dataframe=df, evaluators=[evaluator])

# `hallucination_score` is a Score row (a dict-like with `score`, `label`,
# `explanation`, …) — pull the numeric out for a flat display column.
results["score"] = results["hallucination_score"].apply(lambda r: r["score"])
print(results[["input", "output", "score"]].to_string())

Expected output

                            input                            output  score
0  What is the capital of France?   Paris is the capital of France.    1.0
1  What is the capital of France?  Berlin is the capital of France.    0.0
The full returned DataFrame also includes a hallucination_execution_details column (status, exceptions, and timing) and keeps the original hallucination_score column with each evaluator result's full dict (name, score, label, explanation, metadata, kind, direction). These are useful for surfacing the LLM's reasoning, persisting eval rows back to Arize AX, or picking out failed rows to retry.
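
Continuing from example.py, the extra fields can be pulled out the same way the score was. This is a sketch that assumes the dict shape described above (label and explanation keys alongside score).

# Continues from example.py above (sketch; assumes each hallucination_score dict
# carries `label` and `explanation` as described).
results["label"] = results["hallucination_score"].apply(lambda r: r["label"])
results["explanation"] = results["hallucination_score"].apply(lambda r: r["explanation"])

# Show the judge's reasoning next to each grade.
print(results[["input", "output", "label", "explanation"]].to_string())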

Troubleshooting

  • 401 / 403 from Mistral. Verify MISTRAL_API_KEY is set and has access to the model. Generate a new key at console.mistral.ai.
  • model_not_found or 404. Confirm the model id is correct — LiteLLM expects mistral/<id> (e.g. mistral/mistral-large-latest, mistral/mistral-small-latest). See the LiteLLM Mistral provider docs for the current list.
  • All rows return the same label. Your prompt template isn’t differentiating cases. Make sure each row’s {input}/{output}/{reference} columns expose enough context for the judge to discriminate, and that choices lists every label your prompt asks the LLM to emit.
  • Some rows fail with timeout / rate-limit. Pass max_retries= to evaluate_dataframe(...) (defaults to 3). For large batches, also pass initial_per_second_request_rate=... to LLM(...) to throttle.
  • Logging results back to Arize AX. This guide stops at producing the eval DataFrame. To attach those evals to existing spans in an Arize AX project, use log_evaluations_sync on arize.Client.
  • Using a different LiteLLM-supported provider. Swap the mistral/... prefix for any LiteLLM-supported model id; provider="litellm" is the generic escape hatch when no native Phoenix adapter exists.
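
For instance, pointing the same wrapper at another LiteLLM-routed provider is only a model-string change. A sketch (the prefix and model id below are examples; the correct values and credential env var come from the LiteLLM docs for that provider):

# Any LiteLLM-routed provider follows the same <provider-prefix>/<model-id> pattern,
# with credentials read from that provider's own env var (per the LiteLLM docs).
from phoenix.evals import LLM

other_judge = LLM(provider="litellm", model="anthropic/claude-3-5-sonnet-20240620")  # example only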

Resources

Phoenix Evals Documentation

arize-phoenix-evals on PyPI

Phoenix Evals Source

Mistral AI Tracing (instrument app calls)