- Upload a dataset of examples containing emails to Arize AX
- Define an experiment task that extracts and formats the key details from those emails
- Define an evaluator that measures Jaro-Winkler similarity
- Run experiments to iterate on your prompt template and to compare the summaries produced by different LLMs
Notebook Walkthrough
We will go through key code snippets on this page. To follow the full tutorial, check out the notebook: Email text extraction experiments.
Experiments in Arize AX
Experiments are made up of 3 elements: a dataset, a task, and an evaluator. The dataset is a collection of the inputs and expected outputs that we’ll evaluate against. The task is an operation that should be performed on each input. Finally, the evaluator compares the result against an expected output. For this example, here’s what each looks like:
- Dataset - a dataframe of emails to analyze, and the expected output for our agent
- Task - a LangChain agent that extracts key info from our input emails. The result of this task is then compared against the expected output
- Eval - a Jaro-Winkler similarity calculation comparing the task’s output with the expected output
Download JSON Data
We’ve prepared some example emails and actual responses that we can use to evaluate our two models. Let’s download those and save them to a temporary file.
Upload Dataset to Arize AX
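Below is a minimal sketch of these two steps, assuming the example emails live at a placeholder URL (EMAILS_JSON_URL is hypothetical) and that you upload with the ArizeDatasetsClient from the arize Python package; the exact client arguments and dataset schema are shown in the notebook.

```python
import json
import tempfile

import pandas as pd
import requests
from arize.experimental.datasets import ArizeDatasetsClient
from arize.experimental.datasets.utils.constants import GENERATIVE

# Hypothetical URL for the example emails; the notebook provides the real one.
EMAILS_JSON_URL = "https://example.com/email-extraction-examples.json"

# Download the examples and save them to a temporary JSON file.
examples = requests.get(EMAILS_JSON_URL).json()
with tempfile.NamedTemporaryFile(mode="w", suffix=".json", delete=False) as f:
    json.dump(examples, f)
    json_path = f.name

# Load the saved examples into a dataframe (assumes a list of records, each
# with the email text and the expected extraction).
with open(json_path) as f:
    df = pd.DataFrame(json.load(f))

# Upload the dataframe as a dataset in your Arize AX space. Constructor
# arguments may differ slightly by SDK version (e.g. developer_key vs api_key).
client = ArizeDatasetsClient(api_key="YOUR_ARIZE_API_KEY")
dataset_id = client.create_dataset(
    space_id="YOUR_SPACE_ID",
    dataset_name="email-extraction-examples",
    dataset_type=GENERATIVE,
    data=df,
)
print(f"Created dataset: {dataset_id}")
```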
Set Up LangChain
Now we’ll set up our LangChain agent. This is a straightforward agent that makes a call to our specified model and formats the response as JSON.
Define Task Function
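A sketch of the chain and task function is below, assuming GPT-4o via langchain-openai and an illustrative prompt; the field names in the prompt and the input column name are placeholders, so match them to the notebook and your dataset schema.

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Illustrative prompt asking the model to pull key details out of an email and
# respond with JSON; the notebook defines the exact template and fields.
EXTRACTION_PROMPT = ChatPromptTemplate.from_messages([
    (
        "system",
        "Extract the key details from the email and respond with JSON "
        "containing the fields: sender, recipient, subject, and summary.",
    ),
    ("user", "{email}"),
])

# The model under test; we start with GPT-4o.
llm = ChatOpenAI(model="gpt-4o", temperature=0)
chain = EXTRACTION_PROMPT | llm


def task(dataset_row: dict) -> str:
    """Extract key info from one email; runs once per dataset row.

    Assumes the raw email text lives in an `input` column of the dataset;
    change the key to match your schema.
    """
    response = chain.invoke({"email": dataset_row["input"]})
    # Return the model's JSON string so the evaluator can score it against
    # the expected output.
    return response.content
```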
Define Evaluator
Next, we need to define our evaluation function. Here we’ll use a Jaro-Winkler similarity function that generates a score for how similar the output and expected text are. Jaro-Winkler similarity is a string-similarity measure based on edit distance: it yields a score between 0 and 1, where higher values mean the two strings are more alike.
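One way to write this evaluator, assuming the jellyfish package for the Jaro-Winkler calculation and that the ground-truth text is stored in an `expected` column of each dataset row (both assumptions; adjust to your setup):

```python
import jellyfish


def jaro_winkler(output: str, dataset_row: dict) -> float:
    """Score how close the task output is to the expected output (0.0 to 1.0)."""
    expected = dataset_row["expected"]  # ground-truth column name is assumed
    return jellyfish.jaro_winkler_similarity(output, expected)
```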
Run Experiment
Now we’re ready to run our experiment. We’ll specify our space ID, dataset ID, task, evaluator, and experiment name in order to generate and evaluate responses.
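A sketch of the call, reusing the client, dataset ID, task, and evaluator defined above; keyword arguments and the return value can vary by SDK version, so defer to the notebook for the exact invocation.

```python
# Run the task over every dataset row and score each output with the evaluator.
experiment = client.run_experiment(
    space_id="YOUR_SPACE_ID",
    dataset_id=dataset_id,
    task=task,
    evaluators=[jaro_winkler],
    experiment_name="email-extraction-gpt-4o",
)
```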
Re-run with GPT-3.5 Turbo and Compare Results
To compare results with another model, we simply need to redefine our task. Our dataset and evaluator can stay the same.
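For example, swapping in GPT-3.5 Turbo might look like the sketch below, reusing the prompt, dataset, and evaluator from before.

```python
# Same prompt and chain structure, different model.
llm_gpt35 = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
chain_gpt35 = EXTRACTION_PROMPT | llm_gpt35


def task_gpt35(dataset_row: dict) -> str:
    response = chain_gpt35.invoke({"email": dataset_row["input"]})
    return response.content


# Run a second experiment against the same dataset and evaluator.
client.run_experiment(
    space_id="YOUR_SPACE_ID",
    dataset_id=dataset_id,
    task=task_gpt35,
    evaluators=[jaro_winkler],
    experiment_name="email-extraction-gpt-3.5-turbo",
)
```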
View Results
Now, if you check your Arize AX experiments, you can compare Jaro-Winkler scores on a per-query basis and view aggregate model performance results. The first screenshot below shows a comparison between the average Jaro-Winkler scores for the two experiments we ran. The second screenshot shows a detailed view of each row’s individual Jaro-Winkler score for both experiments. The experiment with GPT-4o is on the left (experiment #1) and the experiment with GPT-3.5-turbo is on the right (experiment #2). The higher the Jaro-Winkler similarity score, the closer the output is to the actual value. You should see that GPT-4o outperforms its older cousin.
