Instrument and observe your DSPy application with OpenInference's DSPyInstrumentor, and view the resulting traces in Arize.
DSPy is a framework for automatically prompting and fine-tuning language models. It provides composable and declarative APIs that let developers describe the architecture of their LLM application as a “module” (inspired by PyTorch’s nn.Module). DSPy then compiles these modules using “teleprompters,” optimizers that tune a module for a particular task.

OpenInference and Arize make your DSPy applications observable by visualizing the underlying structure of each call to your compiled DSPy module. This typically means instrumenting both DSPy itself and LiteLLM, which DSPy frequently uses for model interactions.
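To make the module and teleprompter vocabulary concrete, here is a minimal sketch of a custom DSPy module compiled with the BootstrapFewShot teleprompter. The module name, training set, and metric are illustrative placeholders, and an LM must already be configured via dspy.settings.configure:

```python
# A minimal sketch of a DSPy module compiled by a teleprompter.
# QAModule, the trainset, and the metric are illustrative; an LM must already
# be configured, e.g. dspy.settings.configure(lm=dspy.OpenAI(...)).
import dspy
from dspy.teleprompt import BootstrapFewShot


class QAModule(dspy.Module):
    def __init__(self):
        super().__init__()
        # A declarative sub-module: prompt the LM to map a question to an answer.
        self.generate = dspy.Predict("question -> answer")

    def forward(self, question):
        return self.generate(question=question)


# A tiny training set of input/output examples (illustrative).
trainset = [
    dspy.Example(question="What is 2 + 2?", answer="4").with_inputs("question"),
]

# The teleprompter "compiles" the module, optimizing its prompts for the task.
teleprompter = BootstrapFewShot(
    metric=lambda example, pred, trace=None: example.answer.lower() in pred.answer.lower()
)
compiled_qa = teleprompter.compile(QAModule(), trainset=trainset)
```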
Install dspy-ai, the OpenInference instrumentors for DSPy and LiteLLM (which DSPy often uses for model calls), and the arize-otel package.
```bash
# Install DSPy framework
pip install dspy-ai

# Install OpenInference instrumentor for DSPy
pip install openinference-instrumentation-dspy

# Install LiteLLM and its OpenInference instrumentor (DSPy often uses LiteLLM)
pip install litellm openinference-instrumentation-litellm

# Install Arize OTel
pip install arize-otel
```
DSPy frequently uses LiteLLM under the hood to make LLM calls. By adding the OpenInference instrumentor for LiteLLM, you’ll get more detailed traces, including token counts.
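Before running your module, register a tracer that exports to Arize and enable both instrumentors. The sketch below assumes the arize-otel register helper; the credentials (YOUR_SPACE_ID, YOUR_API_KEY) are placeholders and the project name is illustrative:

```python
# Tracing setup sketch. YOUR_SPACE_ID / YOUR_API_KEY are placeholders for
# your Arize credentials, and "dspy-demo" is an assumed project name.
from arize.otel import register
from openinference.instrumentation.dspy import DSPyInstrumentor
from openinference.instrumentation.litellm import LiteLLMInstrumentor

# Register an OpenTelemetry tracer provider that exports spans to Arize.
tracer_provider = register(
    space_id="YOUR_SPACE_ID",
    api_key="YOUR_API_KEY",
    project_name="dspy-demo",
)

# Instrument DSPy and LiteLLM so their calls are captured as spans.
DSPyInstrumentor().instrument(tracer_provider=tracer_provider)
LiteLLMInstrumentor().instrument(tracer_provider=tracer_provider)
```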
Now run your compiled DSPy module. The example below uses dspy.OpenAI, which may make direct calls to OpenAI or route through LiteLLM depending on your DSPy version and configuration; the LiteLLMInstrumentor captures calls that take the LiteLLM path.
```python
import os

import dspy
from openinference.instrumentation import using_attributes  # attach custom trace attributes

# Ensure OPENAI_API_KEY is set in your environment:
# os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"


class BasicQA(dspy.Signature):
    """Answer questions with short factoid answers."""

    question = dspy.InputField()
    answer = dspy.OutputField(desc="often between 1 and 5 words")


if __name__ == "__main__":
    # Configure the LLM. DSPy can use various LLMs, often via LiteLLM.
    # For OpenAI models, ensure OPENAI_API_KEY is set. The dspy.OpenAI client
    # might internally use LiteLLM or a direct OpenAI client.
    turbo = dspy.OpenAI(model="gpt-3.5-turbo")
    dspy.settings.configure(lm=turbo)

    # Add custom attributes to traces with the OpenInference context manager.
    with using_attributes(
        session_id="my-dspy-session-001",
        user_id="user-dspy-example",
        metadata={
            "environment": "testing",
            "dspy_module": "BasicQA",
        },
        tags=["dspy", "qa"],
        prompt_template_version="1.0",
        prompt_template_variables={
            "signature_desc": BasicQA.__doc__.strip(),
        },
    ):
        # Define the predictor.
        generate_answer = dspy.Predict(BasicQA)

        # Call the predictor on a particular input.
        pred = generate_answer(question="What is the capital of the United States?")
        print("Question: What is the capital of the United States?")
        print(f"Predicted Answer: {pred.answer}")

        pred_europe = generate_answer(question="What is the capital of France?")
        print("Question: What is the capital of France?")
        print(f"Predicted Answer: {pred_europe.answer}")

    print("DSPy example run complete. Check Arize for traces.")
```
Now that you have tracing set up, all instrumented calls in your DSPy application, including the underlying LLM interactions captured by the LiteLLM (or other) instrumentor, will be streamed to your Arize account.