Helicone integrates seamlessly with 100+ AI providers and popular frameworks. Choose the integration method that best fits your use case.

Integration Methods

AI Gateway

Route requests through Helicone’s AI Gateway for unified access to 100+ models with intelligent routing and automatic fallbacks.

Proxy Integration

Route requests through Helicone’s proxy by changing your base URL. Simple and works with any provider.

Async Logging

Log requests asynchronously without proxying. Zero latency impact using OpenLLMetry.

Custom Headers

Add Helicone headers to existing requests for tracking and custom properties.

Supported Providers

Inference Providers

Helicone supports all major AI providers through the AI Gateway:

OpenAI

GPT-4, GPT-4o, GPT-3.5, and more

Anthropic

Claude 3.5 Sonnet, Claude 3 Opus, and more

Google

Gemini, PaLM, Vertex AI

Azure OpenAI

Azure-hosted OpenAI models

AWS Bedrock

Claude, Llama, and more on AWS

Groq

High-performance inference

Together AI

Open-source model hosting

Anyscale

Scalable AI inference

DeepInfra

Serverless AI inference

Frameworks & Tools

LangChain

Use Helicone with LangChain applications

Vercel AI SDK

Integrate with Vercel AI SDK

LlamaIndex

RAG and data framework integration

LangGraph

Multi-actor application framework

CrewAI

Multi-agent orchestration

PostHog

Export analytics to PostHog

Proxy Integration

The simplest way to integrate Helicone is by updating your base URL:
import { OpenAI } from "openai";

const client = new OpenAI({
  baseURL: "https://oai.helicone.ai/v1",
  apiKey: process.env.OPENAI_API_KEY,
  defaultHeaders: {
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
  },
});
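The proxy is transparent: the SDK call above becomes an ordinary HTTPS request to Helicone's base URL, which logs it and forwards it to OpenAI. As a dependency-free sketch, here is the same request built by hand with the standard Fetch API (Node 18+); the fallback key values are placeholders:

```typescript
// The same request the SDK sends, constructed by hand (sketch only).
// Fallback values are placeholders so the snippet runs without env vars.
const OPENAI_API_KEY = process.env.OPENAI_API_KEY ?? "sk-xxx";
const HELICONE_API_KEY = process.env.HELICONE_API_KEY ?? "sk-helicone-xxx";

const req = new Request("https://oai.helicone.ai/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${OPENAI_API_KEY}`,     // provider key, forwarded to OpenAI
    "Helicone-Auth": `Bearer ${HELICONE_API_KEY}`, // Helicone key, read by the proxy
  },
  body: JSON.stringify({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "Hello!" }],
  }),
});
// `await fetch(req)` would send it; omitted here to keep the sketch offline.
```

Because only the URL and the extra `Helicone-Auth` header change, any client that lets you override the base URL can be pointed at the proxy.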

Custom Headers

Add Helicone headers to any request for tracking and custom properties:
const response = await client.chat.completions.create(
  {
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "Hello!" }],
  },
  {
    headers: {
      "Helicone-Session-Id": "session-123",
      "Helicone-User-Id": "user-456",
      "Helicone-Property-Environment": "production",
    },
  }
);

Available Headers

  • Helicone-Auth: Your API key (format: Bearer sk-helicone-xxx)
  • Helicone-Session-Id: Track requests across a session
  • Helicone-User-Id: Associate requests with a user
  • Helicone-Property-*: Add custom properties (replace * with property name)
  • Helicone-Prompt-Id: Track prompt versions
  • Helicone-Cache-Enabled: Enable response caching
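To make the naming pattern concrete — especially Helicone-Property-*, where the suffix becomes the property name — here is a small helper. It is hypothetical (not part of any Helicone SDK), shown only to illustrate how the headers above compose:

```typescript
// Hypothetical helper: builds a Helicone header record from options.
// Not part of the Helicone SDK; it only illustrates the header names above.
function heliconeHeaders(opts: {
  apiKey: string;
  sessionId?: string;
  userId?: string;
  properties?: Record<string, string>;
  cache?: boolean;
}): Record<string, string> {
  const headers: Record<string, string> = {
    "Helicone-Auth": `Bearer ${opts.apiKey}`,
  };
  if (opts.sessionId) headers["Helicone-Session-Id"] = opts.sessionId;
  if (opts.userId) headers["Helicone-User-Id"] = opts.userId;
  if (opts.cache) headers["Helicone-Cache-Enabled"] = "true";
  for (const [name, value] of Object.entries(opts.properties ?? {})) {
    headers[`Helicone-Property-${name}`] = value; // e.g. Helicone-Property-Environment
  }
  return headers;
}
```

Passing `properties: { Environment: "production" }` yields a `Helicone-Property-Environment` header, matching the request example earlier on this page.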

Quick Start by Use Case

Use async logging with @helicone/async to eliminate proxy latency:
import { HeliconeAsyncLogger } from "@helicone/async";
import OpenAI from "openai";

const logger = new HeliconeAsyncLogger({
  apiKey: process.env.HELICONE_API_KEY,
  providers: { openAI: OpenAI },
});
logger.init();

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});
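The zero-latency claim follows from the logging being fire-and-forget: the wrapped SDK call returns as soon as the provider responds, while the log upload happens in the background. A simplified, dependency-free sketch of that pattern (illustrative only, not the internals of @helicone/async):

```typescript
// Fire-and-forget logging: the caller never awaits the log upload.
const pendingLogs: Promise<void>[] = [];

function logAsync(entry: { prompt: string; response: string }): void {
  // The real SDK would POST `entry` to Helicone's logging endpoint;
  // a timer stands in for that network call here.
  pendingLogs.push(new Promise((resolve) => setTimeout(resolve, 10)));
}

async function chatCompletion(prompt: string): Promise<string> {
  const response = `echo: ${prompt}`; // stand-in for the provider call
  logAsync({ prompt, response });     // not awaited: adds no request latency
  return response;
}
```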
See Async Logging for details.

Use the AI Gateway for unified access to 100+ models with a single Helicone API key:
const client = new OpenAI({
  baseURL: "https://ai-gateway.helicone.ai",
  apiKey: process.env.HELICONE_API_KEY,
});

// Use any model from any provider
await client.chat.completions.create({
  model: "claude-3-5-sonnet-20240620/anthropic",
  messages: [{ role: "user", content: "Hello!" }],
});
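The model string in the example above encodes both the model and the provider: appending /provider after the model name directs the request to that provider. A tiny sketch of the convention (an inference from the example above, not an official API):

```typescript
// Compose a gateway model string: "<model>/<provider>" directs the request
// to a specific provider (convention inferred from the example above).
function gatewayModel(model: string, provider?: string): string {
  return provider ? `${model}/${provider}` : model;
}
```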
See AI Gateway for details.

Route LangChain requests through Helicone:
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  modelName: "gpt-4o-mini",
  configuration: {
    baseURL: "https://oai.helicone.ai/v1",
    defaultHeaders: {
      "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
    },
  },
});
See LangChain Integration for details.

Configure the Vercel AI SDK's OpenAI provider to route through Helicone:
import { createOpenAI } from "@ai-sdk/openai";

const openai = createOpenAI({
  baseURL: "https://oai.helicone.ai/v1",
  headers: {
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
  },
});
See Vercel AI SDK Integration for details.

Next Steps

OpenAI Integration

Complete guide for OpenAI integration

Anthropic Integration

Complete guide for Anthropic integration

AI Gateway

Learn about the AI Gateway

Custom Properties

Track custom metadata