Helicone integrates seamlessly with 100+ AI providers and popular frameworks. Choose the integration method that best fits your use case.
Documentation Index
Fetch the complete documentation index at: https://mintlify.com/helicone/helicone/llms.txt
Use this file to discover all available pages before exploring further.
Integration Methods
AI Gateway
Route requests through Helicone’s AI Gateway for unified access to 100+ models with intelligent routing and automatic fallbacks.
Proxy Integration
Route requests through Helicone’s proxy by changing your base URL. Simple and works with any provider.
Async Logging
Log requests asynchronously without proxying. Zero latency impact using OpenLLMetry.
Custom Headers
Add Helicone headers to existing requests for tracking and custom properties.
Supported Providers
Inference Providers
Helicone supports all major AI providers through the AI Gateway:
OpenAI
GPT-4, GPT-4o, GPT-3.5, and more
Anthropic
Claude 3.5 Sonnet, Claude 3 Opus, and more
Google
Gemini, PaLM, Vertex AI
Azure OpenAI
Azure-hosted OpenAI models
AWS Bedrock
Claude, Llama, and more on AWS
Groq
High-performance inference
Together AI
Open-source model hosting
Anyscale
Scalable AI inference
DeepInfra
Serverless AI inference
Frameworks & Tools
LangChain
Use Helicone with LangChain applications
Vercel AI SDK
Integrate with Vercel AI SDK
LlamaIndex
RAG and data framework integration
LangGraph
Multi-actor application framework
CrewAI
Multi-agent orchestration
PostHog
Export analytics to PostHog
Proxy Integration
The simplest way to integrate Helicone is by updating your base URL. Examples are available for OpenAI, Anthropic, and Python.
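A minimal sketch of the proxy integration, using only the Python standard library so the shape of the change is clear: the request path and body are untouched; only the host moves to Helicone's OpenAI proxy endpoint (assumed here to be https://oai.helicone.ai/v1) and a Helicone-Auth header is added. Both API keys below are placeholders.

```python
import json
import urllib.request

OPENAI_API_KEY = "sk-..."             # your provider key (placeholder)
HELICONE_API_KEY = "sk-helicone-xxx"  # your Helicone key (placeholder)

# Same payload you would send to api.openai.com/v1; only the base URL changes.
req = urllib.request.Request(
    url="https://oai.helicone.ai/v1/chat/completions",
    data=json.dumps({
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "Hello"}],
    }).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {OPENAI_API_KEY}",
        "Helicone-Auth": f"Bearer {HELICONE_API_KEY}",
    },
)
# urllib.request.urlopen(req) would send it; omitted here to avoid a live call.
```

If you use a provider SDK instead, the same two changes apply: point its base URL at the Helicone proxy and attach the Helicone-Auth header to every request.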
Custom Headers
Add Helicone headers to any request for tracking and custom properties.
Available Headers
Helicone-Auth: Your API key (format: Bearer sk-helicone-xxx)
Helicone-Session-Id: Track requests across a session
Helicone-User-Id: Associate requests with a user
Helicone-Property-*: Add custom properties (replace * with the property name)
Helicone-Prompt-Id: Track prompt versions
Helicone-Cache-Enabled: Enable response caching
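The headers above can be composed in one place and merged into any provider request. The helper below is a hypothetical sketch (its name and signature are not part of Helicone), assuming the Bearer auth format and header names listed above.

```python
def helicone_headers(api_key, user_id=None, session_id=None,
                     cache=False, **properties):
    """Build a dict of Helicone headers to merge into any provider request."""
    headers = {"Helicone-Auth": f"Bearer {api_key}"}
    if user_id:
        headers["Helicone-User-Id"] = user_id
    if session_id:
        headers["Helicone-Session-Id"] = session_id
    if cache:
        headers["Helicone-Cache-Enabled"] = "true"
    # Each extra keyword becomes a Helicone-Property-* custom property.
    for name, value in properties.items():
        headers[f"Helicone-Property-{name}"] = str(value)
    return headers

headers = helicone_headers("sk-helicone-xxx", user_id="user-123",
                           session_id="sess-42", cache=True, plan="pro")
```

Merging these into an existing request (e.g. via an SDK's default-headers option) adds tracking without changing the request body.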
Quick Start by Use Case
I want the lowest latency
I want to use multiple providers
I'm using LangChain
I'm using Vercel AI SDK
Next Steps
OpenAI Integration
Complete guide for OpenAI integration
Anthropic Integration
Complete guide for Anthropic integration
AI Gateway
Learn about the AI Gateway
Custom Properties
Track custom metadata
