Create a new experiment. Empty experiments are not allowed.
Experiments are composed of “runs”. Each experiment run (a JSON object) must include an `example_id` field that corresponds to an example in the dataset, and an `output` field that contains the task’s output for that example’s input.
Payload Requirements
- `name`: must be unique within the target dataset
- `experiment_runs.example_id`: the ID of an existing example in the dataset/version
- `output`: the model/task output for that example
- Optional run metadata: `model`, `latency_ms`, `temperature`, `prompt`, `tool_calls`, etc.

Most Arize AI endpoints require authentication. For those endpoints, include your API key in the request header, as shown in the example below.
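As a concrete illustration, here is a minimal sketch that builds a payload and submits it with Python's `requests`. The base URL, endpoint path, and `Authorization` header format are assumptions for illustration only; substitute the values from your API reference.

```python
import requests

API_KEY = "YOUR_API_KEY"            # your Arize AI API key
BASE_URL = "https://api.arize.com"  # hypothetical base URL; check your API reference

# A minimal, non-empty experiment: one run per dataset example.
payload = {
    "name": "prompt-v2-baseline",  # must be unique within the target dataset
    "experiment_runs": [
        {
            "example_id": "example-123",  # ID of an existing example in the dataset/version
            "output": "The capital of France is Paris.",  # task output for that example's input
            # Optional metadata fields:
            "model": "gpt-4o",
            "latency_ms": 412,
            "temperature": 0.2,
        }
    ],
}

# Endpoint path and header format are assumptions; use the values
# from your API reference.
resp = requests.post(
    f"{BASE_URL}/v1/experiments",
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
)
resp.raise_for_status()
print(resp.json())  # the created experiment object
```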
Body containing experiment creation parameters
An experiment object
Experiments combine a dataset (example inputs/expected outputs), a task (the function that produces model outputs), and one or more evaluators (code or LLM judges) to measure performance. Each run is stored independently so you can compare runs, track progress, and validate improvements over time. See the full definition on the Experiments page.
Use an experiment to run tasks on a dataset, attach evaluators to score outputs, and compare runs to confirm improvements.
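To make the dataset/task/evaluator relationship concrete, the sketch below shows how experiment runs might be produced before submission. The dataset, `run_task`, `evaluate`, and the `score` metadata field are all hypothetical stand-ins, not part of the API.

```python
# Hypothetical dataset: each example has an id, an input, and an expected output.
dataset_examples = [
    {"id": "example-1", "input": "2 + 2", "expected": "4"},
    {"id": "example-2", "input": "3 * 3", "expected": "9"},
]

def run_task(example_input: str) -> str:
    """Stand-in task: a toy arithmetic 'model'; replace with your model call."""
    a, op, b = example_input.split()
    return str(int(a) + int(b)) if op == "+" else str(int(a) * int(b))

def evaluate(output: str, expected: str) -> float:
    """Stand-in evaluator: exact match; a code check or LLM judge could go here."""
    return 1.0 if output.strip() == expected.strip() else 0.0

# One run per example; each run is stored independently so runs can be
# compared across experiments over time.
experiment_runs = []
for example in dataset_examples:
    output = run_task(example["input"])
    experiment_runs.append({
        "example_id": example["id"],
        "output": output,
        "score": evaluate(output, example["expected"]),  # hypothetical metadata field
    })
```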
- Unique identifier for the experiment
- Name of the experiment
- Unique identifier for the dataset this experiment belongs to
- Unique identifier for the dataset version this experiment belongs to
- Timestamp for when the experiment was created
- Timestamp for when the experiment was last updated
- Unique identifier for the experiment traces project this experiment belongs to (if it exists)
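For reference, the fields above map naturally onto a small typed record. The snake_case field names below are assumptions inferred from the descriptions, not confirmed by the schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Experiment:
    """Sketch of the returned experiment object; field names are assumed."""
    id: str                           # unique identifier for the experiment
    name: str                         # name of the experiment
    dataset_id: str                   # dataset this experiment belongs to
    dataset_version_id: str           # dataset version this experiment belongs to
    created_at: str                   # when the experiment was created
    updated_at: str                   # when the experiment was last updated
    project_id: Optional[str] = None  # experiment traces project, if it exists
```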