Overview
Helicone AI Gateway integrates seamlessly with Helicone’s prompt management system, allowing you to:
- Use versioned prompts stored in Helicone
- Dynamically inject prompt variables
- Track prompt usage across providers
- Deploy prompts without code changes
Prompt integration works with all gateway features, including routing, fallbacks, and BYOK/PTB (bring your own key and pass-through billing).
How It Works
When you include prompt fields in your request, the gateway:
- Fetches the prompt template from Helicone
- Injects your input variables
- Expands the template into messages
- Routes to the appropriate provider
Using Prompts with the Gateway
Basic Prompt Usage
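A minimal sketch of a prompt-backed request, assuming a locally running gateway at http://localhost:8080/ai and the prompt fields listed under Prompt Fields below:

```typescript
import OpenAI from "openai";

// Point the OpenAI SDK at the Helicone AI Gateway.
// The base URL assumes a local gateway deployment.
const client = new OpenAI({
  baseURL: "http://localhost:8080/ai",
  apiKey: process.env.HELICONE_API_KEY,
});

const response = await client.chat.completions.create({
  model: "gpt-4o-mini",
  prompt_id: "pirate-bot",      // versioned prompt stored in Helicone
  inputs: { person: "Alice" },  // variables injected into the template
  messages: [],                 // expanded from the prompt template
} as any);                      // prompt fields extend the standard OpenAI params
```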
When this request is sent:
- The gateway fetches the “pirate-bot” prompt template
- Injects `person: "Alice"` into the template
- Expands it to the full messages array
- Routes the request to GPT-4o-mini
Model from Prompt
You can omit the `model` field and use the model defined in the prompt:
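A sketch, reusing the client from the basic example above:

```typescript
// No `model` here: the gateway uses the model saved with the prompt.
const response = await client.chat.completions.create({
  prompt_id: "pirate-bot",
  inputs: { person: "Alice" },
  messages: [],
} as any);
```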
If both `model` and `prompt_id` are provided, the model in the request takes precedence.

Prompt Versioning
Use specific prompt versions:
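For example, pinning a version (the `version_id` field name is an assumption; check the prompt docs for the exact name):

```typescript
// Pin the request to a specific prompt version instead of the latest.
const response = await client.chat.completions.create({
  prompt_id: "pirate-bot",
  version_id: "v2.1.0",  // assumed field name for version pinning
  inputs: { person: "Alice" },
  messages: [],
} as any);
```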
Environment-Specific Prompts
Use different prompt versions per environment:
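A sketch that resolves the prompt version tagged for the current deployment environment:

```typescript
// Resolve the prompt version tagged for this environment.
const environment =
  process.env.NODE_ENV === "production" ? "production" : "staging";

const response = await client.chat.completions.create({
  prompt_id: "pirate-bot",
  environment,                  // production, staging, or development
  inputs: { person: "Alice" },
  messages: [],
} as any);
```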
Prompt Fields
- `prompt_id`: The ID of the prompt to use from Helicone
- `inputs`: Key-value pairs to inject into the prompt template
- `version_id`: Specific version of the prompt to use (default: latest)
- `environment`: Environment to use for prompt resolution: production, staging, or development
- `model`: Override the model defined in the prompt
Creating Prompts
Create and manage prompts in the Helicone dashboard:

Navigate to Prompts
Go to Prompts in the Helicone dashboard
Create New Prompt
Click “New Prompt” and define:
- Prompt ID (e.g., “pirate-bot”)
- Model to use
- Message template with variables
- Version tags
Prompt Format with hpf
Use the Helicone Prompt Format (hpf) helper in your code:
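A minimal sketch, assuming the `hpf` tagged template from the `@helicone/prompts` package and the client from the basic example above; variables are written as `${{ variable }}`:

```typescript
import { hpf } from "@helicone/prompts";

const person = "Alice";

// hpf tags each ${{ }} interpolation with its variable name so
// Helicone can track inputs against the prompt template.
const content = hpf`Write a pirate greeting for ${{ person }}`;

const response = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content }],
});
```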
The `hpf` helper automatically tracks prompt variables and associates them with the prompt ID.

Prompts with Fallbacks
Combine prompts with provider fallbacks:
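An illustrative sketch only: the comma-separated fallback chain in `model` is an assumed syntax here, so confirm the exact form against the gateway's routing and fallback docs:

```typescript
// Fallback chain: try GPT-4o on OpenAI, then Azure, then Claude.
// The comma-separated model list is an assumed syntax.
const response = await client.chat.completions.create({
  model: "gpt-4o/openai,gpt-4o/azure,claude-3-5-sonnet/anthropic",
  prompt_id: "pirate-bot",
  inputs: { person: "Alice" },
  messages: [],
} as any);
```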
With this request, the gateway:
- Fetches the “pirate-bot” prompt
- Tries GPT-4o on OpenAI
- Falls back to Azure if needed
- Falls back to Claude if needed
Using with Helicone-Auth Header
When using the traditional proxy pattern with the Helicone-Auth header:
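A sketch of the proxy setup; in this pattern, prompt tracking is associated via the Helicone-Prompt-Id header:

```typescript
import OpenAI from "openai";

// Traditional proxy pattern: route through the Helicone proxy
// and authenticate with the Helicone-Auth header.
const client = new OpenAI({
  baseURL: "https://oai.helicone.ai/v1",
  apiKey: process.env.OPENAI_API_KEY,
  defaultHeaders: {
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
    "Helicone-Prompt-Id": "pirate-bot", // associates requests with the prompt
  },
});
```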
Prompt Tracking
All requests using prompts are automatically tracked:

View Prompt Usage
Navigate to Prompts in the dashboard
Select Your Prompt
Click on a prompt to see:
- Total requests
- Success rate
- Cost by provider
- Latency metrics
Advanced Patterns
Dynamic Model Selection
Use prompt fields with dynamic models:
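For instance, picking the model at request time and letting it override the prompt's default (client as above):

```typescript
// Choose a model at runtime; it overrides the model saved in the prompt.
function pickModel(taskComplexity: "low" | "high"): string {
  return taskComplexity === "high" ? "gpt-4o" : "gpt-4o-mini";
}

const response = await client.chat.completions.create({
  model: pickModel("high"),
  prompt_id: "pirate-bot",
  inputs: { person: "Alice" },
  messages: [],
} as any);
```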
Conditional Prompt Selection
Select prompts based on context:
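A sketch with hypothetical prompt IDs chosen per user tier:

```typescript
// Example request context (hypothetical).
const user = { isPremium: true };
const userQuestion = "Where is my order?";

// Hypothetical prompt IDs selected per user tier.
const promptId = user.isPremium ? "support-bot-premium" : "support-bot";

const response = await client.chat.completions.create({
  prompt_id: promptId,
  inputs: { question: userQuestion },
  messages: [],
} as any);
```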
A/B Testing Prompts
Test different prompt versions:
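A simple random split between two versions (again assuming a `version_id` field):

```typescript
// 50/50 split between two prompt versions; compare results in the
// dashboard's per-version metrics.
const versionId = Math.random() < 0.5 ? "v1.0.0" : "v1.1.0";

const response = await client.chat.completions.create({
  prompt_id: "pirate-bot",
  version_id: versionId,
  inputs: { person: "Alice" },
  messages: [],
} as any);
```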
Error Handling
Prompt requests can fail in a few ways:
- Prompt Not Found: the `prompt_id` does not exist in Helicone
- Missing Required Inputs: the template references variables not supplied in `inputs`
- Invalid Version: the requested version does not exist for that prompt
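Illustrative handling only, since the gateway's exact error shapes are not shown here:

```typescript
import OpenAI from "openai";

try {
  await client.chat.completions.create({
    prompt_id: "pirate-bot",
    inputs: { person: "Alice" },
    messages: [],
  } as any);
} catch (err) {
  if (err instanceof OpenAI.APIError) {
    // e.g. unknown prompt_id, missing inputs, or a bad version
    console.error(err.status, err.message);
  } else {
    throw err;
  }
}
```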
Best Practices
Version Your Prompts
Use semantic versioning for prompts:
- v1.0.0: Major changes (breaking)
- v1.1.0: Minor improvements
- v1.1.1: Bug fixes
Use Environment Tags
Tag prompt versions for production, staging, and development so each environment resolves the right version.
Track Prompt Performance
Monitor prompt metrics in the dashboard:
- Success rate by version
- Cost per prompt
- Latency trends
- Provider distribution
Test Before Deploying
Test prompt changes thoroughly:
- Deploy to the `development` environment
- Test with various inputs
- Promote to `staging`
- Deploy to `production` after validation
Use Descriptive IDs
Use clear, descriptive prompt IDs, e.g. “customer-support-greeting” rather than “prompt1”.
Complete Example
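A representative end-to-end sketch tying the pieces together, with the gateway URL and prompt fields assumed as in the earlier examples:

```typescript
import OpenAI from "openai";

// Gateway address and prompt fields as assumed above.
const client = new OpenAI({
  baseURL: "http://localhost:8080/ai",
  apiKey: process.env.HELICONE_API_KEY,
});

async function pirateGreeting(person: string): Promise<string | null> {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini",
    prompt_id: "pirate-bot",
    environment:
      process.env.NODE_ENV === "production" ? "production" : "staging",
    inputs: { person },
    messages: [],
  } as any);

  return response.choices[0].message.content;
}

console.log(await pirateGreeting("Alice"));
```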
Next Steps
Create Prompts
Start creating prompts in the dashboard
Routing
Learn about provider routing
Fallbacks
Configure automatic failover
Prompt Docs
Full prompt management documentation
