Advanced: Evaluator as a Class

You can run an experiment by creating an evaluator that inherits from the Evaluator(ABC) base class in the Arize Python SDK. The evaluator takes a single dataset row as input and returns an EvaluationResult dataclass. Use this approach if you prefer object-oriented programming over functional programming.
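At a high level, the pattern looks like the following sketch (MyEvaluator and its logic are placeholders; concrete, working examples follow below):

from arize.experiments import EvaluationResult, Evaluator

class MyEvaluator(Evaluator):
    def evaluate(self, output, dataset_row, **kwargs) -> EvaluationResult:
        # Your evaluation logic goes here; dataset_row exposes every column of the dataset
        score = 1.0 if output else 0.0
        return EvaluationResult(score=score, label="non_empty" if output else "empty")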

Eval Class Inputs

The evaluate method supports the following arguments:

Parameter name | Description | Example
input | experiment run input | def eval(input): ...
output | experiment run output | def eval(output): ...
dataset_row | the entire row of the data, with every column available as a dictionary key | def eval(dataset_row): ...
metadata | experiment metadata | def eval(metadata): ...

from arize.experiments import EvaluationResult, Evaluator

class ExampleAll(Evaluator):
    def evaluate(self, input, output, dataset_row, metadata, **kwargs) -> EvaluationResult:
        print("Evaluator Using All Inputs")

class ExampleDatasetrow(Evaluator):
    def evaluate(self, dataset_row, **kwargs) -> EvaluationResult:
        print("Evaluator Using dataset_row ")

class ExampleInput(Evaluator):
    def evaluate(self, input, **kwargs) -> EvaluationResult:
        print("Evaluator Using Input")

class ExampleOutput(Evaluator):
    def evaluate(self, output, **kwargs) -> EvaluationResult:
        print("Evaluator Using Output")

EvaluationResult Outputs

The evaluator can return a score, a label, a tuple of (score, label, explanation), or an EvaluationResult:

Return Type | Description
EvaluationResult | Score, label, and explanation
float | Score output
string | Label string output

class ExampleResult(Evaluator):
    def evaluate(self, input, output, dataset_row, metadata, **kwargs) -> EvaluationResult:
        print("Evaluator Using All Inputs")
        # score, label, and explanation come from your evaluation logic
        return EvaluationResult(score=score, label=label, explanation=explanation)

class ExampleScore(Evaluator):
    def evaluate(self, input, output, dataset_row, metadata, **kwargs) -> float:
        print("Evaluator Using A Float")
        return 1.0

class ExampleLabel(Evaluator):
    def evaluate(self, input, output, dataset_row, metadata, **kwargs) -> str:
        print("Evaluator Using A Label")
        return "good"

Code Evaluator as a Class

from arize.experiments import EvaluationResult, Evaluator

class MatchesExpected(Evaluator):
    annotator_kind = "CODE"
    name = "matches_expected"

    def evaluate(self, output, dataset_row, **kwargs) -> EvaluationResult:
        expected_output = dataset_row.get("expected")
        # Use the boolean match as both the score (1.0 / 0.0) and the label
        matched = expected_output == output
        score = float(matched)
        label = str(matched)
        return EvaluationResult(score=score, label=label)

    async def async_evaluate(self, output, dataset_row, **kwargs) -> EvaluationResult:
        return self.evaluate(output, dataset_row, **kwargs)
You can run this class using the following:
experiment, results_df = client.experiments.run(
    name="my-experiment",
    dataset_id=dataset_id,
    task=my_task,
    evaluators=[MatchesExpected()],
)

LLM Evaluator as a Class Example

Here’s an example of an LLM evaluator that checks for hallucinations in the model output. The Phoenix Evals package is designed for running evaluations in code:
from phoenix.evals import (
    HALLUCINATION_PROMPT_RAILS_MAP,
    HALLUCINATION_PROMPT_TEMPLATE,
    llm_classify,
    OpenAIModel,
)
from arize.experiments import EvaluationResult, Evaluator
import os
import pandas as pd

OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]  # set your OpenAI API key in the environment

class HallucinationEvaluator(Evaluator):
    def evaluate(self, output, dataset_row, **kwargs) -> EvaluationResult:
        print("Evaluating outputs")
        expected_output = dataset_row["attributes.llm.output_messages"]

        # Create a DataFrame with the actual and expected outputs
        df_in = pd.DataFrame(
            {"selected_output": output, "expected_output": expected_output}, index=[0]
        )
        # Run the LLM classification
        expect_df = llm_classify(
            dataframe=df_in,
            template=HALLUCINATION_PROMPT_TEMPLATE,
            model=OpenAIModel(model="gpt-4o-mini", api_key=OPENAI_API_KEY),
            rails=HALLUCINATION_PROMPT_RAILS_MAP,
            provide_explanation=True,
        )
        label = expect_df["label"][0]
        score = 1 if label == "factual" else 0
        explanation = expect_df["explanation"][0]

        return EvaluationResult(score=score, label=label, explanation=explanation)
In this example, the HallucinationEvaluator class evaluates whether the output of an experiment contains hallucinations by comparing it to the expected output using an LLM. The llm_classify function runs the eval, and the evaluator returns an EvaluationResult that includes a score, label, and explanation.
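You can attach this evaluator to an experiment the same way as the code evaluator above (dataset_id and my_task here stand in for your own dataset and task):

experiment, results_df = client.experiments.run(
    name="hallucination-experiment",
    dataset_id=dataset_id,
    task=my_task,
    evaluators=[HallucinationEvaluator()],
)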

Advanced: Multiple Evaluators on Experiment Runs

Arize supports running multiple evals on a single experiment, allowing you to comprehensively assess your model’s performance from different angles. When you provide multiple evaluators, Arize creates evaluation runs for every combination of experiment runs and evaluators:
experiment, results_df = client.experiments.run(
    name="multi-eval-experiment",
    dataset_id=dataset_id,
    task=task,
    evaluators=[
        ContainsKeyword("hello"),
        MatchesRegex(r"\d+"),
        custom_evaluator_function
    ]
)
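
ContainsKeyword, MatchesRegex, and custom_evaluator_function are not defined in this snippet; here is a sketch of what they might look like, following the class pattern above (the implementations are illustrative assumptions):

import re
from arize.experiments import EvaluationResult, Evaluator

class ContainsKeyword(Evaluator):
    def __init__(self, keyword: str):
        self.keyword = keyword

    def evaluate(self, output, **kwargs) -> EvaluationResult:
        # Score 1.0 when the keyword appears anywhere in the output
        contains = self.keyword in str(output)
        return EvaluationResult(score=float(contains), label=str(contains))

class MatchesRegex(Evaluator):
    def __init__(self, pattern: str):
        self.pattern = re.compile(pattern)

    def evaluate(self, output, **kwargs) -> EvaluationResult:
        # Score 1.0 when the regex matches somewhere in the output
        matched = self.pattern.search(str(output)) is not None
        return EvaluationResult(score=float(matched), label=str(matched))

def custom_evaluator_function(output):
    # A function-based evaluator can simply return a float score
    return 1.0 if output else 0.0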