This release adds full structured output support to the playground, giving users precise control over the fields an LLM must return. Models that implement the OpenAI API schema (including OpenAI, Azure, and compatible custom endpoints) now support structured outputs end-to-end. When saving prompts, the structured output JSON is stored alongside other LLM parameters for seamless reuse. Tooltips have also been added to clearly indicate when a model or provider does not support structured outputs.
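For OpenAI-compatible endpoints, a structured output is typically requested by attaching a JSON Schema via the `response_format` parameter of the chat completions request; this is the JSON that gets saved alongside the prompt's other LLM parameters. A minimal sketch (the model name and schema fields below are illustrative, not required by the feature):

```python
# Sketch of a structured-output request body for an OpenAI-compatible
# chat completions endpoint. "strict": True forces the model to return
# exactly the fields declared in the schema.
payload = {
    "model": "gpt-4o-mini",  # any model that supports structured outputs
    "messages": [{"role": "user", "content": "Summarize this ticket."}],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "ticket_summary",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "summary": {"type": "string"},
                    "severity": {
                        "type": "string",
                        "enum": ["low", "medium", "high"],
                    },
                },
                "required": ["summary", "severity"],
                "additionalProperties": False,
            },
        },
    },
}
```

When a prompt is saved, the `response_format` block above is stored with the rest of the LLM parameters, so reloading the prompt restores the same schema constraints.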
This release introduces Session Annotations, making it easier than ever to capture human insights without disrupting your workflow. You can now add notes directly from the Session Page—no context switching required. Annotations are supported at two levels:
Input/Output Level: Attach insights to specific output messages, automatically linked to the root span of the trace.
Span Level: Dive deeper into a trace and annotate individual spans for precise, context-rich feedback.
Together, these capabilities make it simple to highlight issues, call out successes, and integrate human feedback seamlessly into your debugging and evaluation process.
This release delivers major improvements to how integrations are managed, scoped, and configured. Integrations can now be targeted to specific orgs and spaces, and the UI has been refreshed to clearly separate AI Providers from Monitoring Integrations. A new creation flow supports both simple API-based setups and flexible custom endpoints, including multi-model configurations with defaults or custom names. Users can also add multiple keys for the same provider, enabling more granular control and easier management at scale.
Added easy manual instrumentation with the same decorators, wrappers, and attribute helpers found in the Python openinference-instrumentation package.
Introduced function tracing utilities that automatically create spans for sync/async function execution, including specialized wrappers for chains, agents, and tools.
Added decorator-based method tracing, enabling automatic span creation on class methods via the @observe decorator.
Expanded attribute helper utilities for standardized OpenTelemetry metadata creation, including helpers for inputs/outputs, LLM operations, embeddings, retrievers, and tool definitions.
Overall, tracing workflows, agent behavior, and external tool calls is now significantly simpler and more consistent across languages.
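The decorator-based tracing pattern described above can be sketched in plain Python. The `Span` class, `FINISHED_SPANS` list, and `observe` decorator below are simplified stand-ins for the package's actual OpenTelemetry-backed implementation, shown only to illustrate how a decorator turns a function call into a span with input/output attributes:

```python
import functools
import time

class Span:
    """Minimal stand-in for an OpenTelemetry span (illustrative only)."""
    def __init__(self, name):
        self.name = name
        self.attributes = {}
        self.start_time = None
        self.end_time = None

    def set_attribute(self, key, value):
        self.attributes[key] = value

FINISHED_SPANS = []  # stand-in for a span processor/exporter

def observe(span_kind="CHAIN"):
    """Decorator that wraps a function call in a span, recording its
    input, output, and timing -- the same shape of pattern an @observe
    decorator applies to functions and class methods."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            span = Span(fn.__qualname__)
            span.set_attribute("openinference.span.kind", span_kind)
            span.set_attribute("input.value", repr((args, kwargs)))
            span.start_time = time.time()
            try:
                result = fn(*args, **kwargs)
                span.set_attribute("output.value", repr(result))
                return result
            finally:
                span.end_time = time.time()
                FINISHED_SPANS.append(span)
        return wrapper
    return decorator

@observe(span_kind="TOOL")
def add(a, b):
    return a + b

add(2, 3)
# FINISHED_SPANS now holds one span named "add" with input/output attributes
```

The real package replaces the stand-in `Span` with OpenTelemetry spans and standardized OpenInference attributes, but the control flow (open span, record input, run the function, record output, always end the span) is the same.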