Overview
The Prompt Playground supports image inputs for working with multi-modal models, like GPT-4o. This feature allows you to iterate on tasks such as image caption generation, visual QA, and more. You can pass images into your prompts by uploading image URLs as variables. These are used as inputs in your LLM calls, perfect for experimenting with visual content alongside text.

Adding Image Inputs
1. Open the Prompt Playground
Navigate to Prompt Playground from the sidebar. Select a multi-modal model (e.g., gpt-4o) from the dropdown.
2. Add an Image Input Variable
Give your image variable a name (e.g., image) within the prompt.
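For reference, a prompt template that uses this variable might look like the sketch below; the wording and the variable name image are illustrative, not a required format.

```python
# Illustrative prompt template; {{image}} is the image input variable
# named in step 2. The surrounding text is an example, not a fixed format.
PROMPT_TEMPLATE = """You are a visual QA assistant.
Describe the scene in {{image}} in one sentence."""
```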

3. Upload an Image URL
In the Input Variables section, paste a supported image URL into the input field (supported formats).
- Example: https://example.com/images/hotel.jpg

4. View Results
Run your prompt. The playground will display:
- The image input
- The LLM-generated output
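Behind the scenes, a multi-modal call attaches the resolved image URL as an image content part alongside your text. The snippet below is a minimal sketch of an equivalent call using the OpenAI Python SDK; the prompt text and URL stand in for your resolved variables and are assumptions, not the playground's internals.

```python
# Minimal sketch of a multi-modal chat call with the OpenAI Python SDK.
# Assumes OPENAI_API_KEY is set; the prompt and URL are placeholders for
# your resolved {{variable_name}} inputs.
from openai import OpenAI

client = OpenAI()
image_url = "https://example.com/images/hotel.jpg"  # e.g. your {{image}} variable

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Write a short caption for this image."},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }
    ],
)
print(response.choices[0].message.content)  # the LLM-generated output
```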

Creating Image Datasets and Using Them in the Playground
Method 1: CSV Upload
- Prepare images → Upload to accessible URLs
- Create CSV → Include image URLs and variables
- Upload to Arize → Use “Create via CSV” in Datasets
- Map columns → Set image URL column as image variable
- Use in Playground → Reference images with {{variable_name}}
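To make the CSV step concrete, here is a small sketch that writes an uploadable file using only the Python standard library. The column names image_url and question are hypothetical; use whatever variable names you reference in your prompt.

```python
# Sketch of a dataset CSV for upload; column names are illustrative.
import csv

rows = [
    {"image_url": "https://example.com/images/hotel.jpg",
     "question": "What amenities are visible in this photo?"},
    {"image_url": "https://example.com/images/menu.png",
     "question": "List the dishes shown on this menu."},
]

with open("image_dataset.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["image_url", "question"])
    writer.writeheader()    # header row holds the column names you map in Arize
    writer.writerows(rows)  # each row becomes one dataset example
```

After uploading, map the image_url column as the image variable so you can reference it in the Playground with {{image_url}}.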
Method 2: From Traces
- Existing traces → Must contain image URLs in input/output/attributes
- Create dataset → Use trace-to-dataset functionality
- Auto-mapping → Image URLs extracted as variables
- Use in Playground → Same as CSV method
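For orientation, a trace input that qualifies for this flow might resemble the sketch below. The message shape mirrors a typical multi-modal chat payload; the exact keys depend on your instrumentation and are an assumption, not a required Arize schema.

```python
# Hypothetical trace input containing an image URL; key names depend on
# your instrumentation and are not an Arize-specific schema.
trace_input = {
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this photo."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/images/hotel.jpg"}},
            ],
        }
    ],
}
# When the trace is converted to a dataset, the image URL can be surfaced
# as a variable for reuse in the Playground.
```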
Supported Image Formats
- JPG/JPEG ✅
- PNG ✅
- GIF ✅
- SVG ✅
- WebP ✅
URL Types Supported
- Web URLs: https://example.com/image.jpg
- Cloud Storage: gs://bucket/image.png, s3://bucket/image.jpg
- Data URLs: data:image/png;base64,iVBORw0K...
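If an image is not hosted anywhere, you can inline it as a data URL. The sketch below uses only the Python standard library; the file name photo.png is a placeholder.

```python
# Build a data URL from a local PNG file (placeholder path "photo.png").
import base64

with open("photo.png", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("ascii")

data_url = f"data:image/png;base64,{encoded}"
print(data_url[:60] + "...")  # paste the full data_url like any other image URL
```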