Now that you’re up and running with Arize AX, you can use the Prompt Playground and Prompt Library to design, test, and manage your prompts with ease.
The Prompt Playground lets you experiment with prompts in real time — explore variations across models, tweak parameters, and compare outputs side-by-side, all within an interactive workspace.
The Prompt Library helps you organize, version, and reuse prompts across projects so your best iterations stay discoverable and consistent.
Before you dive in, make sure your AX environment is connected.
Choose your path to getting started: create your first prompt through the UI, or via the Prompt Hub API.
Prompts (UI)
To create a prompt, visit the Prompt Hub and click + New Prompt.
This will take you to the Prompt Playground, where you can define and save your prompt.
Here are the parameters you can define:
1. LLM Provider
Pick the LLM you want to use for this prompt.
2. Prompt Messages
Add any system, user, or assistant messages to your prompt. For more info on what prompt messages are, you can read this guide.
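As a sketch, a chat-style prompt is a list of role/content messages; the `{query}` placeholder and the message text here are hypothetical examples:

```python
# A chat-style prompt: each message has a role and content.
# {query} is a hypothetical variable filled in at run time.
messages = [
    {"role": "system", "content": "You are a helpful customer service assistant."},
    {"role": "user", "content": "Customer query: {query}"},
    {"role": "assistant", "content": "Happy to help! Could you share your order number?"},
]
```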
3. Function calling
Any function/tools you want the LLM to be aware of should be added here.
Function descriptions must be passed in JSON format. To add multiple functions, add another entry to the JSON list with the proper attributes.
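For illustration, here is a hypothetical two-entry tool list in the common OpenAI-style function-calling schema (the function names and fields are made up; check your provider's format for the exact shape expected):

```python
import json

# Hypothetical tool definitions; to expose multiple functions,
# append another object to the list.
tools = [
    {
        "name": "get_order_status",
        "description": "Look up the shipping status of an order.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
    {
        "name": "cancel_order",
        "description": "Cancel an order that has not shipped yet.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
]

# Serialize to JSON for pasting into the Function calling field.
tools_json = json.dumps(tools, indent=2)
```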
4. Invocation Parameters
Different LLM providers expose different invocation hyperparameters that affect the LLM's output. You can attach these to your prompts.
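As a sketch, these are a few common OpenAI-style invocation parameters (the exact names and ranges vary by provider):

```python
# Common OpenAI-style invocation parameters (names vary by provider).
invocation_params = {
    "temperature": 0.2,  # lower values give more deterministic output
    "max_tokens": 256,   # cap on the number of generated tokens
    "top_p": 1.0,        # nucleus-sampling cutoff
}
```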
5. Save Prompt
To create your prompt and store it in the Prompt Hub, you must save it with this button. Give it a name and description, and add tags if you like.
6. Alyx
Use our assistant to create prompts for you.
After saving your prompt, you can view it in the Prompt Hub.
You can also edit, add, or delete any of the settings we configured by clicking Edit in Prompt Playground. Just make sure to save your prompt once you edit it.
Congratulations! You just made, saved, and ran your first prompt.
Prompt Hub API
For full information and examples of our Prompt Hub API, you can jump ahead here.
1. Install the SDK
pip install "arize[PromptHub]"
This installs the Prompt Hub client and types.
2. Initialize the client
ArizePromptClient is the main entry point for creating, pulling, and pushing prompts.
from arize.experimental.prompt_hub import ArizePromptClient
prompt_client = ArizePromptClient(
    space_id="YOUR_SPACE_ID",
    api_key="YOUR_API_KEY",
)
3. Define your prompt template
from arize.experimental.prompt_hub import Prompt, LLMProvider
new_prompt = Prompt(
    name="customer_service_greeting",
    description="Greets a customer and asks a clarifying question.",
    messages=[
        {"role": "system", "content": "You are a helpful customer service assistant."},
        {"role": "user", "content": "Customer query: {query}"},
    ],
    provider=LLMProvider.OPENAI,   # choose your provider
    model_name="gpt-4o",           # choose your model
    tags=["support", "greeting"],  # optional
    # input_variable_format defaults to F_STRING; you can use MUSTACHE if you prefer
)
Key fields:
- name
- messages (chat-style list)
- provider
- model_name

Optional fields:
- description
- tags
- input_variable_format
4. Save it to Prompt Hub
prompt_client.push_prompt(new_prompt)
This creates the prompt (or updates it if it already exists). Optionally pass a commit_message to track changes.
5. Run your prompt
from openai import OpenAI
oai = OpenAI(api_key="YOUR_OPENAI_API_KEY")
prompt_vars = {"query": "When will my order arrive?"}
formatted = new_prompt.format(prompt_vars) # expands {query} in your messages
# Execute with your provider SDK
resp = oai.chat.completions.create(**formatted)
print(resp.choices[0].message.content)
Use .format({...}) to substitute variables, then pass the formatted messages/model to your provider’s SDK.
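The substitution step itself can be sketched with plain Python string formatting, assuming F_STRING-style `{query}` placeholders as in the template above (this mirrors what `.format(...)` does for each message's content, not the SDK's actual internals):

```python
# Minimal sketch of F_STRING-style variable substitution over chat messages.
template_messages = [
    {"role": "system", "content": "You are a helpful customer service assistant."},
    {"role": "user", "content": "Customer query: {query}"},
]
prompt_vars = {"query": "When will my order arrive?"}

# Expand each placeholder in each message's content.
formatted_messages = [
    {"role": m["role"], "content": m["content"].format(**prompt_vars)}
    for m in template_messages
]
```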
Congratulations! You just made, saved, and ran your first prompt.
Next steps
Dive deeper into the following topics to keep improving your LLM application!