## Documentation Index

Fetch the complete documentation index at https://mintlify.com/helicone/helicone/llms.txt and use it to discover all available pages before exploring further.
## Why Self-Host Helicone?

Self-hosting Helicone gives you complete control over your LLM observability platform. Run it on your own infrastructure to meet compliance requirements, keep data private, or customize the deployment to your needs.

- **Full Data Control**: Your request logs, prompts, and analytics stay entirely within your infrastructure.
- **Easy Deployment**: Get started with a single Docker command or deploy to Kubernetes with our Helm chart.
- **Open Source**: Apache 2.0 licensed. Inspect, modify, and contribute to the codebase.
- **Production Ready**: Battle-tested architecture powering thousands of production deployments.
## Deployment Options

### Docker (Recommended for Development)

The fastest way to get Helicone running locally. Perfect for development, testing, or small production deployments.

- **All-in-One Container**: Single container with all services
- **Docker Compose**: Multi-container setup with separate services
- **Minimal Requirements**: 4GB RAM, 2 CPU cores

**Docker Setup**: Get started with Docker in under 5 minutes →
### Kubernetes (Recommended for Production)

Enterprise-grade deployment with auto-scaling, high availability, and advanced monitoring.

- **Helm Chart**: Production-ready Kubernetes deployment
- **Horizontal Scaling**: Scale services independently
- **Enterprise Support**: Available for production deployments

**Kubernetes Setup**: Deploy to Kubernetes with Helm →
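A Helm-based install typically follows the standard add-repo/install pattern. The repository URL and chart name below are assumptions for illustration; use the exact values from the Kubernetes Setup guide:

```shell
# Hypothetical Helm deployment; repo URL and chart name are assumptions,
# not Helicone's official values.
helm repo add helicone https://helm.helicone.ai
helm repo update
helm install helicone helicone/helicone \
  --namespace helicone --create-namespace
```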
## What Gets Deployed?

Helicone consists of six core services:

| Service | Purpose | Technology |
|---|---|---|
| Web | Frontend dashboard and UI | Next.js |
| Jawn | Backend API and request proxy | Express + TypeScript |
| Worker | LLM proxy logging | Cloudflare Workers (Node.js) |
| Database | Application data and auth | PostgreSQL |
| Analytics DB | Request logs and metrics | ClickHouse |
| Object Storage | Request/response bodies | MinIO (S3-compatible) |
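In a Docker Compose deployment, these services map to separate containers roughly as sketched below. This is illustrative only (image names and versions are assumptions, and the workers are omitted for brevity); use Helicone's official docker-compose file:

```yaml
# Illustrative sketch only -- not Helicone's official docker-compose.yml.
services:
  web:                         # Next.js dashboard
    image: helicone/web        # assumed image name
    ports: ["3000:3000"]
  jawn:                        # Express + TypeScript backend API
    image: helicone/jawn       # assumed image name
    ports: ["8585:8585"]
  db:                          # application data and auth
    image: postgres:16
    ports: ["5432:5432"]
  clickhouse:                  # request logs and metrics
    image: clickhouse/clickhouse-server
    ports: ["8123:8123"]
  minio:                       # S3-compatible object storage
    image: minio/minio
    command: server /data --console-address ":9001"
    ports: ["9000:9000", "9001:9001"]
```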
## Architecture Deep Dive

Learn about Helicone's architecture →
## System Requirements

### Minimum (Development)

- 4GB RAM
- 2 CPU cores
- 20GB storage
- Docker 20.10+

### Recommended (Production)

- 16GB RAM
- 4 CPU cores
- 100GB+ storage (scales with usage)
- Kubernetes 1.24+
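These minimums can be checked programmatically before installing. The sketch below uses only the Python standard library; the thresholds are the development minimums above, and the RAM detection via `/proc/meminfo` assumes Linux:

```python
import os
import shutil

# Development minimums from the list above.
MIN_RAM_GB = 4
MIN_CPUS = 2
MIN_DISK_GB = 20

def meets_minimum(ram_gb: float, cpus: int, disk_gb: float) -> bool:
    """Check resource values against the development minimums."""
    return ram_gb >= MIN_RAM_GB and cpus >= MIN_CPUS and disk_gb >= MIN_DISK_GB

def detected_resources(path: str = "/") -> tuple:
    """Best-effort detection: RAM GB (Linux only), CPU count, free disk GB."""
    cpus = os.cpu_count() or 1
    disk_gb = shutil.disk_usage(path).free / 1024**3
    ram_gb = 0.0
    try:
        with open("/proc/meminfo") as f:  # Linux-specific
            for line in f:
                if line.startswith("MemTotal:"):
                    ram_gb = int(line.split()[1]) / 1024**2  # kB -> GB
                    break
    except OSError:
        pass  # non-Linux: RAM detection unavailable
    return ram_gb, cpus, disk_gb

if __name__ == "__main__":
    ram, cpus, disk = detected_resources()
    print("meets development minimum:", meets_minimum(ram, cpus, disk))
```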
## Quick Start

Get Helicone running locally in under 5 minutes:

1. **Access the dashboard**: Open http://localhost:3000 in your browser.
2. **Create your first user account** via the Supabase auth UI at http://localhost:54323/project/default/auth/users
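The "single Docker command" path usually means starting the all-in-one container before opening the dashboard. The image name and port flags below are assumptions for illustration; the official command is in the Docker Setup guide:

```shell
# Hypothetical all-in-one startup; the image name is an assumption,
# not Helicone's official image. Port 3000 = web dashboard, 8585 = Jawn API.
docker run -d --name helicone \
  -p 3000:3000 \
  -p 8585:8585 \
  helicone/helicone-all-in-one:latest
```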
## Available Ports

When running locally, these ports are exposed:

| Port | Service | Description |
|---|---|---|
| 3000 | Web Dashboard | Frontend UI |
| 8585 | Jawn API | Backend API and LLM proxy |
| 8787 | Worker (OpenAI) | OpenAI proxy worker |
| 8788 | Worker (API) | Helicone API worker |
| 9000 | MinIO API | S3-compatible object storage |
| 9001 | MinIO Console | MinIO admin interface |
| 8123 | ClickHouse | Analytics database |
| 5432 | PostgreSQL | Application database |
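After startup, a quick TCP check against each port confirms the services came up. This is an illustrative helper (the port map mirrors the table above), not part of Helicone itself:

```python
import socket

# Local ports from the table above (illustrative mapping, not a Helicone API).
PORTS = {
    3000: "Web Dashboard",
    8585: "Jawn API",
    8787: "Worker (OpenAI)",
    8788: "Worker (API)",
    9000: "MinIO API",
    9001: "MinIO Console",
    8123: "ClickHouse",
    5432: "PostgreSQL",
}

def is_listening(port: int, host: str = "localhost", timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_services() -> dict:
    """Map each service name to whether its port is reachable."""
    return {name: is_listening(port) for port, name in PORTS.items()}

if __name__ == "__main__":
    for name, up in check_services().items():
        print(f"{'OK  ' if up else 'DOWN'} {name}")
```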
## Next Steps

- **Docker Deployment**: Complete Docker setup guide with docker-compose
- **Kubernetes Deployment**: Production deployment with Helm
- **Architecture**: Understand how services work together
- **Configuration**: Environment variables and advanced settings
## Support

### Where can I get help?

- **Community Support**: Join our Discord for community help
- **GitHub Issues**: Report bugs at github.com/helicone/helicone
- **Enterprise Support**: Contact enterprise@helicone.ai for production support and SLAs

### Can I migrate from cloud to self-hosted?

Yes! Export your data using our API or MCP server, then import it into your self-hosted instance.

### What's the difference between the all-in-one image and docker-compose?

- **All-in-one**: Single container with all services running via supervisord. Easiest to deploy but less flexible.
- **Docker Compose**: Separate containers for each service. Better for production, easier to scale and debug.
