Portkey AI Gateway Tracing & Observability
Portkey is an AI Gateway and Control Panel that provides production-ready features for AI applications including observability, reliability, and cost management. Learn how to instrument the Portkey SDK using the openinference-instrumentation-portkey package for comprehensive LLM tracing and monitoring.
Quick Start: Portkey Python Integration
Installation & Setup
Install the required packages for Portkey AI Gateway tracing:
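A typical install pulls in the instrumentor, the Portkey SDK, and Arize's OpenTelemetry helper (package names assume the current PyPI distributions):

```bash
pip install openinference-instrumentation-portkey portkey-ai arize-otel
```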
Instrumentation Setup
Configure the PortkeyInstrumentor and tracer to send traces to Arize for LLM observability:
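A minimal setup sketch, assuming Arize's arize.otel.register helper and the PortkeyInstrumentor from openinference-instrumentation-portkey; the space ID, API key, and project name are placeholders for your own values:

```python
from arize.otel import register
from openinference.instrumentation.portkey import PortkeyInstrumentor

# Register an OpenTelemetry tracer provider that exports spans to Arize.
# Replace the placeholders with your own Arize credentials.
tracer_provider = register(
    space_id="YOUR_SPACE_ID",
    api_key="YOUR_ARIZE_API_KEY",
    project_name="portkey-tracing",
)

# Instrument the Portkey SDK so each gateway call emits OpenInference spans.
PortkeyInstrumentor().instrument(tracer_provider=tracer_provider)
```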
Example: Basic Portkey AI Gateway Usage
Test your Portkey integration with this example code and observe the resulting traces in Arize:
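A minimal sketch of a traced request, assuming you have a Portkey API key and a virtual key for your provider configured in the Portkey dashboard (both placeholders here):

```python
from portkey_ai import Portkey

# Route a single chat completion through the Portkey gateway; with the
# instrumentor active, this call appears as a span in Arize.
client = Portkey(
    api_key="YOUR_PORTKEY_API_KEY",
    virtual_key="YOUR_OPENAI_VIRTUAL_KEY",  # placeholder virtual key
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What does an AI gateway do?"}],
)
print(response.choices[0].message.content)
```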
What is covered by the Instrumentation
Arize provides comprehensive observability for Portkey’s AI Gateway capabilities, automatically tracing:
Multi-Provider LLM Management
- Multiple Provider Calls: Track requests across different LLM providers (OpenAI, Anthropic, Cohere) through Portkey’s unified interface (see the sketch after this list)
- Provider Switching: Monitor seamless switching between AI providers
- Cost Optimization: Track usage and costs across different LLM providers
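As a sketch of multi-provider routing: the same OpenAI-compatible call shape works against different providers by swapping the virtual key, so each request is traced with its underlying provider. The virtual keys and model names below are hypothetical:

```python
from portkey_ai import Portkey

# One client per provider, all sharing the same Portkey API key; the
# instrumentor records a span per request regardless of the backend.
providers = [
    ("openai-virtual-key", "gpt-4o-mini"),           # hypothetical virtual key
    ("anthropic-virtual-key", "claude-3-5-sonnet"),  # hypothetical virtual key
]

for virtual_key, model in providers:
    client = Portkey(api_key="YOUR_PORTKEY_API_KEY", virtual_key=virtual_key)
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Reply with one short sentence."}],
    )
    print(model, "->", response.choices[0].message.content)
```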
Reliability & Performance Monitoring
- Fallback and Retry Logic: Monitor automatic fallbacks and retry attempts when primary services fail (see the config sketch after this list)
- Load Balancing: Observe how requests are distributed across multiple models or providers
- Latency Tracking: Monitor response times and performance metrics
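A fallback-and-retry config sketch, following the shape of Portkey's gateway config (strategy, retry, targets); the virtual keys are placeholders. Fallback hops and retry attempts made by the gateway surface in the corresponding traces:

```python
from portkey_ai import Portkey

# Try the primary target first; on failure, retry up to 2 times and then
# fall back to the secondary target.
fallback_config = {
    "strategy": {"mode": "fallback"},
    "retry": {"attempts": 2},
    "targets": [
        {"virtual_key": "openai-virtual-key"},     # primary (placeholder)
        {"virtual_key": "anthropic-virtual-key"},  # fallback (placeholder)
    ],
}

client = Portkey(api_key="YOUR_PORTKEY_API_KEY", config=fallback_config)
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Explain fallbacks in one line."}],
)
```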
Intelligent Caching & Optimization
- Semantic Caching: See cache hits and misses for semantic caching to optimize costs (see the caching config after this list)
- Request Deduplication: Track duplicate request handling
- Performance Optimization: Identify bottlenecks and optimization opportunities
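A semantic-cache config sketch in the same gateway-config style; with caching enabled, semantically similar prompts can be served from cache, and the hit or miss outcome is visible on the trace. The max_age value and virtual key are placeholders:

```python
from portkey_ai import Portkey

# Enable semantic caching for one hour; semantically similar prompts may
# return the cached completion instead of a fresh provider call.
cached_config = {
    "cache": {"mode": "semantic", "max_age": 3600},
    "targets": [{"virtual_key": "openai-virtual-key"}],  # placeholder
}

client = Portkey(api_key="YOUR_PORTKEY_API_KEY", config=cached_config)
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is semantic caching?"}],
)
```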