Documentation Index
Fetch the complete documentation index at: https://mintlify.com/helicone/helicone/llms.txt
Use this file to discover all available pages before exploring further.
What You’ll Learn
These guides provide step-by-step instructions for implementing key observability patterns in your LLM applications. Each guide focuses on practical implementation with real code examples.
Core Monitoring Patterns
Agent Tracing
Track complex agent workflows with tool calls, decision paths, and multi-step reasoning
Cost Tracking
Monitor spending, optimize costs, and understand unit economics across your AI stack
Debugging
Identify errors, diagnose issues, and optimize LLM application performance
Experiments
A/B test prompts and models with production data to improve response quality
Fine-Tuning
Prepare datasets and track fine-tuning workflows with OpenPipe integration
Step-by-Step Tutorials
Complete implementations showing how to integrate Helicone into real applications.
Vercel AI Gateway
Build a multi-model assistant with intelligent routing and cost optimization
RAGAS Evaluations
Implement comprehensive evaluation pipelines with Ragas metrics
Structured Outputs
Use OpenAI function calling and structured outputs with monitoring
Quick Navigation
Getting Started with Monitoring
Start with Agent Tracing to understand session-based monitoring, then move to Cost Tracking to optimize spending.
Improving Quality
Use Experiments to test prompt changes, and Fine-Tuning for specialized model behavior.
Production Optimization
Implement Debugging workflows and use the Vercel AI Gateway tutorial for production patterns.
Integration Patterns
All guides show integration with:
- OpenAI SDK (Python & TypeScript)
- Anthropic Claude
- Vercel AI SDK
- Helicone Manual Logger for custom integrations
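For the OpenAI SDK, the usual integration is to point the client at Helicone's proxy and attach a `Helicone-Auth` header. A minimal sketch, assuming Helicone's quickstart values (`https://oai.helicone.ai/v1` base URL, `Helicone-Auth: Bearer <key>` header); check these against your own setup:

```python
import os

# Base URL that routes OpenAI traffic through Helicone's logging proxy.
HELICONE_BASE_URL = "https://oai.helicone.ai/v1"

def helicone_client_kwargs(openai_key: str, helicone_key: str) -> dict:
    """Keyword arguments for constructing an OpenAI client behind Helicone.

    Only the base URL and one extra header change; application code that
    uses the client stays the same.
    """
    return {
        "api_key": openai_key,
        "base_url": HELICONE_BASE_URL,
        "default_headers": {"Helicone-Auth": f"Bearer {helicone_key}"},
    }

# Usage (requires the openai package):
# from openai import OpenAI
# client = OpenAI(**helicone_client_kwargs(os.environ["OPENAI_API_KEY"],
#                                          os.environ["HELICONE_API_KEY"]))
kwargs = helicone_client_kwargs("sk-test", "hk-test")
```

Because the integration is a base-URL swap rather than an SDK wrapper, the same pattern carries over to the Anthropic and Vercel AI SDK integrations listed above, each with its own Helicone gateway URL.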
Need Help?
Discord Community
Ask questions and share patterns with 5,000+ developers
API Reference
Complete reference for Helicone’s REST API
