- A custom `ClassificationEvaluator` that returns categorical labels
- A custom `ClassificationEvaluator` that returns numeric scores
- A fully custom `LLMEvaluator` for any complex eval use cases
Why Use Custom Evals?
Install Phoenix Evals
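A typical setup, assuming the evaluators are distributed in the `arize-phoenix-evals` package and you use OpenAI as the judge provider (swap in your own provider SDK as needed):

```bash
pip install arize-phoenix-evals openai
```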
Custom Evals using Categorical Labels
The `ClassificationEvaluator` is a special LLM-based evaluator designed for classification (both binary and multi-class). This evaluator will only respond with one of the provided label choices and, optionally, an explanation for the judgement.
A classification prompt template looks like the following, with instructions for the evaluation as well as placeholders for the evaluation input data:
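For example, a relevance template might look like this sketch; the `{input}` and `{reference}` placeholder names are illustrative, and you should use whatever field names your data provides:

```python
# A sketch of a relevance classification template. The {input} and {reference}
# placeholders are illustrative; substitute the field names in your own data.
RELEVANCE_TEMPLATE = """
You are comparing a reference document to a question to determine whether the
document contains information relevant to answering the question.

[BEGIN DATA]
Question: {input}
Reference: {reference}
[END DATA]

Is the reference relevant to the question? Your answer must be a single word,
either "relevant" or "irrelevant".
"""
```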
Label Choices
While the prompt template contains instructions for the LLM, the label choices tell it how to format its response. The `choices` of a `ClassificationEvaluator` can be structured in a couple of ways:
- A list of string labels only: `choices=["relevant", "irrelevant"]`
- String labels mapped to numeric scores: `choices = {"irrelevant": 0, "relevant": 1}`
If you provide labels only (without scores), the returned `Score` objects will have a label but not a numeric score component.
The `ClassificationEvaluator` also supports multi-class labels and scores, for example: `choices = {"good": 1.0, "bad": 0.0, "neutral": 0.5}`.
There is no limit to the number of label choices you can provide, and you can specify any numeric scores (not limited to values between 0 and 1). For example, you can set `choices = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}` for a numeric rating task.
The evaluator ensures the output is clean and is one of the classes you specify; a response that cannot be mapped to one of those classes is marked `UNPARSABLE`.
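To summarize the options, a short sketch of the `choices` forms described above and what each implies for the returned `Score`:

```python
# Labels only: the returned Score has a label but no numeric score component.
choices = ["relevant", "irrelevant"]

# Labels mapped to scores: the returned Score carries both the chosen label
# and its mapped numeric value (e.g. label="relevant", score=1).
choices = {"irrelevant": 0, "relevant": 1}

# Multi-class with arbitrary numeric values, e.g. a 1-5 rating task.
choices = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}
```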
Defining the Evaluator
For the relevance evaluation, we define the evaluator as follows:
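A minimal sketch, assuming `ClassificationEvaluator` and an `LLM` wrapper can be imported from `phoenix.evals` and accept the arguments described above; the provider, model name, and data field names are illustrative:

```python
from phoenix.evals import ClassificationEvaluator, LLM  # import paths assumed

# The judge model; the provider and model name are illustrative.
llm = LLM(provider="openai", model="gpt-4o")

relevance_evaluator = ClassificationEvaluator(
    name="relevance",
    llm=llm,
    prompt_template=RELEVANCE_TEMPLATE,        # the template shown earlier
    choices={"irrelevant": 0, "relevant": 1},  # labels mapped to scores
)

# Evaluate a single example; the result is a list of Score objects carrying a
# label, the mapped numeric score, and (optionally) an explanation.
scores = relevance_evaluator.evaluate({
    "input": "How do I reset my password?",
    "reference": "To reset your password, click 'Forgot password' on the login page.",
})
print(scores[0].label, scores[0].score)
```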
Custom Evals using Numeric Scores
The `ClassificationEvaluator` is a flexible LLM-as-a-Judge construct that can also be used to produce numeric ratings.
Note: We generally recommend using categorical labels over numeric ratings for most evaluation tasks. LLMs have inherent limitations in their numeric reasoning abilities, and numeric scores do not correlate as well with human judgements. See this technical report for more information about our findings on this subject.
Defining the Evaluator
This numeric rating task can be framed as a classification task where the set of labels is the set of numbers on the rating scale (here, 1-10). Then we can set up a custom `ClassificationEvaluator` for our evaluation task, similar to how we did above. Make sure to set the optimization `direction = "minimize"` here since a lower score is better on this task (fewer spelling errors).
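A sketch of such an evaluator, assuming the same `ClassificationEvaluator` interface as above; the template wording and model name are illustrative:

```python
from phoenix.evals import ClassificationEvaluator, LLM  # import paths assumed

# Illustrative template for rating spelling quality on a 1-10 scale.
SPELLING_TEMPLATE = """
Count the spelling errors in the following text and rate it on a scale of 1-10,
where 1 means no spelling errors and 10 means nearly every word is misspelled.

[BEGIN DATA]
Text: {output}
[END DATA]

Respond with a single number from 1 to 10.
"""

spelling_evaluator = ClassificationEvaluator(
    name="spelling_errors",
    llm=LLM(provider="openai", model="gpt-4o"),
    prompt_template=SPELLING_TEMPLATE,
    # Frame the 1-10 rating as classification: each rating label maps to its numeric score.
    choices={str(i): i for i in range(1, 11)},
    direction="minimize",  # lower is better: fewer spelling errors
)
```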
Alternative: Fully Custom LLM Evaluator
Alternatively, for LLM-as-a-judge tasks that don't fit the classification paradigm, you can create a custom evaluator by subclassing the base `LLMEvaluator` class. This works for almost any complex eval use case.
Steps to create a custom evaluator:
- Create a new class that inherits the base `LLMEvaluator`.
- Define your prompt template and a JSON schema for the structured output.
- Initialize the base class with a name, LLM, prompt template, and direction.
- Implement the `_evaluate` method that takes an `eval_input` and returns a list of `Score` objects. The base class handles the `input_mapping` logic, so you can assume the input here has the required input fields. A sketch implementing these steps follows.
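Putting these steps together, here is a minimal sketch. It assumes `LLMEvaluator`, `LLM`, and `Score` can be imported from `phoenix.evals`, that the base class exposes the LLM as `self.llm`, and that the LLM wrapper provides a structured-output method (called `generate_object` here; substitute your client's equivalent). All field and parameter names are illustrative.

```python
from phoenix.evals import LLMEvaluator, LLM, Score  # import paths assumed

# Step 2: a prompt template and a JSON schema for the structured output.
# The field names ("input", "output") and the schema shape are illustrative.
COMPLETENESS_TEMPLATE = """
Given a question and an answer, judge how completely the answer addresses the question.

Question: {input}
Answer: {output}

Respond with a JSON object containing a "score" between 0 and 1 and a short "explanation".
"""

COMPLETENESS_SCHEMA = {
    "type": "object",
    "properties": {
        "score": {"type": "number"},
        "explanation": {"type": "string"},
    },
    "required": ["score", "explanation"],
}


# Step 1: create a new class that inherits the base LLMEvaluator.
class CompletenessEvaluator(LLMEvaluator):
    def __init__(self, llm: LLM):
        # Step 3: initialize the base class with a name, LLM, prompt template, and direction.
        super().__init__(
            name="completeness",
            llm=llm,
            prompt_template=COMPLETENESS_TEMPLATE,
            direction="maximize",
        )

    # Step 4: _evaluate receives an eval_input that already contains the required
    # fields (the base class applies input_mapping) and returns a list of Scores.
    def _evaluate(self, eval_input: dict) -> list[Score]:
        prompt = COMPLETENESS_TEMPLATE.format(**eval_input)
        # Assumed structured-output call; substitute your LLM wrapper's equivalent.
        response = self.llm.generate_object(prompt=prompt, schema=COMPLETENESS_SCHEMA)
        return [
            Score(
                name=self.name,
                score=response["score"],
                explanation=response["explanation"],
            )
        ]


# Usage (assuming evaluate() is the public entry point on the base class):
evaluator = CompletenessEvaluator(llm=LLM(provider="openai", model="gpt-4o"))
scores = evaluator.evaluate({
    "input": "How do I reset my password?",
    "output": "Click 'Forgot password' on the login page and follow the emailed link.",
})
```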