
In this tutorial, we will:
- Build a RAG application using LlamaIndex
- Set up Phoenix as a trace collector for the LlamaIndex application
- Use Phoenix's evals library to compute LLM-generated evaluations of our RAG app's responses
- Use the Arize SDK to export the traces and evaluations to Arize AX
Install Dependencies
Let's get the notebook set up with dependencies.
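A minimal install cell might look like the following; the exact package list is an assumption based on the steps in this tutorial (Phoenix with evals support, LlamaIndex, the OpenInference instrumentor for LlamaIndex, the Arize SDK, and gcsfs for reading the pre-built index from cloud storage):

```python
!pip install -q "arize-phoenix[evals]" llama-index openai arize gcsfs openinference-instrumentation-llama-index
```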
Set up Phoenix as a Trace Collector in our LLM app
To get started, launch the Phoenix app. Make sure to open the app in your browser using the link printed by the cell below.
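A sketch of the launch-and-instrument step, assuming the phoenix.otel helper and the openinference-instrumentation-llama-index package are installed:

```python
import phoenix as px
from phoenix.otel import register
from openinference.instrumentation.llama_index import LlamaIndexInstrumentor

# Start Phoenix in the background; the session prints a URL to the UI.
session = px.launch_app()

# Register an OTel tracer provider pointed at the local Phoenix collector,
# then instrument LlamaIndex so every query produces spans.
tracer_provider = register()
LlamaIndexInstrumentor().instrument(tracer_provider=tracer_provider)
```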
Build Your LlamaIndex RAG Application
We start by setting your OpenAI API key if it is not already set as an environment variable. We will build a RetrieverQueryEngine over a pre-built index of the Arize AX documentation, but you can use whatever LlamaIndex application you like. Download the pre-built index of the Arize AX docs from cloud storage and instantiate your storage context, as sketched below.
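A sketch of this step. The bucket path below is a placeholder (the tutorial's actual storage location is not reproduced here); StorageContext.from_defaults accepts an fsspec filesystem such as gcsfs for remote persist directories:

```python
import os
from getpass import getpass

from gcsfs import GCSFileSystem
from llama_index.core import StorageContext, load_index_from_storage

# Prompt for the OpenAI API key only if the environment variable is unset.
if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = getpass("Enter your OpenAI API key: ")

# Placeholder path to the pre-built index of the Arize AX docs.
PERSIST_DIR = "your-bucket/path/to/arize-docs-index"

file_system = GCSFileSystem()  # credentials may be required for private buckets
storage_context = StorageContext.from_defaults(fs=file_system, persist_dir=PERSIST_DIR)
index = load_index_from_storage(storage_context)

# The instrumented query engine: every .query() call is traced into Phoenix.
query_engine = index.as_query_engine()
```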
Use the instrumented Query Engine
We will download a dataset of questions for our RAG application to answer, then run each question through the instrumented query engine, as in the sketch below.
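A minimal sketch, assuming the questions are published as a CSV file with a question column (the URL below is a placeholder):

```python
import pandas as pd
from tqdm import tqdm

# Placeholder URL; the tutorial's dataset location is not reproduced here.
QUESTIONS_URL = "https://example.com/arize_docs_questions.csv"

questions = pd.read_csv(QUESTIONS_URL)["question"].tolist()

# Each call is captured by the LlamaIndex instrumentation and sent to Phoenix.
for question in tqdm(questions):
    query_engine.query(question)
```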
Run Evaluations on the data in Phoenix
We will use the Phoenix client to extract data in the correct format for specific evaluations, and evaluators, also from Phoenix, to run evaluations on our RAG application.
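A sketch of the eval step using Phoenix's question-and-answer helper and two of its built-in evaluators (hallucination and QA correctness); the choice of gpt-4o as the judge model is an assumption:

```python
import nest_asyncio
import phoenix as px
from phoenix.evals import HallucinationEvaluator, OpenAIModel, QAEvaluator, run_evals
from phoenix.session.evaluation import get_qa_with_reference
from phoenix.trace import SpanEvaluations

nest_asyncio.apply()  # needed to run async evals inside a notebook

# Pull query/response/reference triples for the QA-style evaluators.
qa_with_reference_df = get_qa_with_reference(px.Client())

eval_model = OpenAIModel(model="gpt-4o")  # judge model is an assumption
hallucination_eval_df, qa_correctness_eval_df = run_evals(
    dataframe=qa_with_reference_df,
    evaluators=[HallucinationEvaluator(eval_model), QAEvaluator(eval_model)],
    provide_explanation=True,
)

# Attach the eval results to their spans in Phoenix.
px.Client().log_evaluations(
    SpanEvaluations(eval_name="Hallucination", dataframe=hallucination_eval_df),
    SpanEvaluations(eval_name="QA Correctness", dataframe=qa_correctness_eval_df),
)
```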
Export data to Arize
Get data into dataframes
We extract the spans and evals dataframes from the Phoenix client.
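One way to assemble the two dataframes. get_spans_dataframe is a Phoenix client method; the eval.&lt;name&gt;.&lt;field&gt; column convention for the evals dataframe is an assumption based on Arize's evaluation logging format:

```python
spans_df = px.Client().get_spans_dataframe()

# Reuse the eval dataframes from run_evals above, prefixing their columns
# (label, score, explanation) with the eval name, then joining on span id.
evals_df = (
    hallucination_eval_df.add_prefix("eval.Hallucination.")
    .join(qa_correctness_eval_df.add_prefix("eval.QA Correctness."), how="outer")
    .reset_index()  # brings context.span_id back as a column
)
```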
Initialize Arize Client
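A sketch of client initialization, assuming the Arize SDK's pandas logger; the space ID and API key come from your Arize AX settings page:

```python
from arize.pandas.logger import Client

ARIZE_SPACE_ID = "YOUR_SPACE_ID"  # placeholder
ARIZE_API_KEY = "YOUR_API_KEY"    # placeholder

arize_client = Client(space_id=ARIZE_SPACE_ID, api_key=ARIZE_API_KEY)
```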
We use log_spans from the Arize client to log our spans data and, if we have evaluations, we can pass the optional evals_dataframe.
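A final sketch; log_spans and evals_dataframe are named in the text above, while the identifier parameter is an assumption that may differ by SDK version:

```python
response = arize_client.log_spans(
    dataframe=spans_df,
    evals_dataframe=evals_df,  # optional; omit when there are no evaluations
    model_id="llama-index-rag-app",  # assumed identifier parameter
)
print(response)
```

Once the call succeeds, the traces and their attached evaluations appear in Arize AX.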