
Why Self-Host Helicone?

Self-hosting Helicone gives you complete control over your LLM observability platform. Run it on your own infrastructure to meet compliance requirements, keep data private, or customize the deployment to your needs.

Full Data Control

Your request logs, prompts, and analytics stay entirely within your infrastructure.

Easy Deployment

Get started with a single Docker command or deploy to Kubernetes with our Helm chart.

Open Source

Apache 2.0 licensed. Inspect, modify, and contribute to the codebase.

Production Ready

Battle-tested architecture powering thousands of production deployments.

Deployment Options

Docker is the fastest way to get Helicone running locally and is well suited to development, testing, or small production deployments:
  • All-in-One Container: Single container with all services
  • Docker Compose: Multi-container setup with separate services
  • Minimal Requirements: 4GB RAM, 2 CPU cores
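
For the all-in-one option, startup is typically a single docker run. A minimal sketch, assuming the image is published as helicone/helicone-all-in-one and that the dashboard listens on port 3000 (the image name and port mappings here are assumptions; see the Docker guide for the exact values):

# Image name is an assumption; check the Docker guide before running.
docker run -d --name helicone \
  -p 3000:3000 \
  helicone/helicone-all-in-one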

Docker Setup

Get started with Docker in under 5 minutes →

Kubernetes offers enterprise-grade deployment with auto-scaling, high availability, and advanced monitoring:
  • Helm Chart: Production-ready Kubernetes deployment
  • Horizontal Scaling: Scale services independently
  • Enterprise Support: Available for production deployments
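
A minimal install sketch, assuming the Helm chart ships alongside the main repository and that a values.yaml holds your overrides (the chart path and release name are illustrative; see the Kubernetes guide for the published chart location):

git clone https://github.com/Helicone/helicone.git
# Chart path below is an assumption; check the Kubernetes guide for where the chart lives.
helm install helicone ./helicone/charts/helicone \
  --namespace helicone --create-namespace \
  -f values.yaml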

Kubernetes Setup

Deploy to Kubernetes with Helm →

What Gets Deployed?

Helicone consists of six core services:

| Service | Purpose | Technology |
| --- | --- | --- |
| Web | Frontend dashboard and UI | Next.js |
| Jawn | Backend API and request proxy | Express + TypeScript |
| Worker | LLM proxy and logging | Cloudflare Workers (Node.js) |
| Database | Application data and auth | PostgreSQL |
| Analytics DB | Request logs and metrics | ClickHouse |
| Object Storage | Request/response bodies | MinIO (S3-compatible) |
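
To see how these services fit together locally, you can route an OpenAI-style request through the proxy worker and then view it in the dashboard. A sketch assuming the default local ports listed under Available Ports below and the standard Helicone-Auth header; the exact base URL path for a self-hosted worker is an assumption (see the Architecture and Configuration pages):

# Send a request through the local OpenAI proxy worker (port 8787); the request is
# forwarded to OpenAI and logged, with metrics in ClickHouse and bodies in MinIO.
curl http://localhost:8787/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Helicone-Auth: Bearer $HELICONE_API_KEY" \
  -d '{"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "Hello"}]}'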

Architecture Deep Dive

Learn about Helicone’s architecture →

System Requirements

Minimum (Development)

  • 4GB RAM
  • 2 CPU cores
  • 20GB storage
  • Docker 20.10+

Recommended (Production)

  • 16GB RAM
  • 4 CPU cores
  • 100GB+ storage (scales with usage)
  • Kubernetes 1.24+

Quick Start

Get Helicone running locally in under 5 minutes:
1. Clone the repository

git clone https://github.com/Helicone/helicone.git
cd helicone/docker

2. Configure environment

cp .env.example .env
# Edit .env with your settings (optional for local dev)

3. Start Helicone

./helicone-compose.sh helicone up

4. Access the dashboard

Open http://localhost:3000 in your browser. Create your first user account via the Supabase auth UI at http://localhost:54323/project/default/auth/users.

Available Ports

When running locally, these ports are exposed:
| Port | Service | Description |
| --- | --- | --- |
| 3000 | Web Dashboard | Frontend UI |
| 8585 | Jawn API | Backend API and LLM proxy |
| 8787 | Worker (OpenAI) | OpenAI proxy worker |
| 8788 | Worker (API) | Helicone API worker |
| 9000 | MinIO API | S3-compatible object storage |
| 9001 | MinIO Console | MinIO admin interface |
| 8123 | ClickHouse | Analytics database |
| 5432 | PostgreSQL | Application database |
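
Once the stack is up, a quick way to confirm each service is listening is to hit the endpoints these ports expose. A sketch using the standard health/ping endpoints of ClickHouse, MinIO, and PostgreSQL; the Helicone containers themselves may expose additional health routes not shown here:

curl -I http://localhost:3000                      # Web dashboard responds
curl http://localhost:8123/ping                    # ClickHouse returns "Ok."
curl http://localhost:9000/minio/health/live       # MinIO liveness probe
pg_isready -h localhost -p 5432                    # PostgreSQL accepting connections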

Next Steps

Docker Deployment

Complete Docker setup guide with docker-compose

Kubernetes Deployment

Production deployment with Helm

Architecture

Understand how services work together

Configuration

Environment variables and advanced settings

Support

Can I migrate my data from Helicone Cloud to a self-hosted instance?

Yes! Export your data using our API or MCP server, then import it into your self-hosted instance.

Should I use the all-in-one container or Docker Compose?

  • All-in-one: Single container with all services running via supervisord. Easiest to deploy but less flexible.
  • Docker Compose: Separate containers for each service. Better for production, easier to scale and debug.
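
Because the all-in-one image runs everything under supervisord, you can inspect the individual processes inside that single container. A sketch, assuming the container is named helicone and that supervisorctl is available on the PATH inside the image (both are assumptions):

docker exec helicone supervisorctl status   # list the supervised Helicone processes and their states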