Helicone integrates with LangChain to provide comprehensive observability for your LLM applications, agents, and chains.
Integration Methods
- AI Gateway: Use the AI Gateway for unified access to all providers with automatic fallbacks.
- Proxy Integration: Route LangChain requests through Helicone's proxy.
- Async Logging: Zero-latency logging using OpenLLMetry.
- Custom Headers: Add Helicone headers to existing LangChain applications.
Quick Start
TypeScript/JavaScript

Install dependencies:

```bash
npm install @langchain/openai @langchain/anthropic
```

Configure the model with the Helicone proxy:

```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  modelName: "gpt-4o-mini",
  configuration: {
    baseURL: "https://oai.helicone.ai/v1",
    defaultHeaders: {
      "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
    },
  },
});

const response = await model.invoke("What is the capital of France?");
console.log(response.content);
```
Python

Install dependencies:

```bash
pip install langchain langchain-openai langchain-anthropic
```

Configure the model with the Helicone proxy:

```python
from langchain_openai import ChatOpenAI
import os

model = ChatOpenAI(
    model="gpt-4o-mini",
    base_url="https://oai.helicone.ai/v1",
    default_headers={
        "Helicone-Auth": f"Bearer {os.getenv('HELICONE_API_KEY')}"
    },
)

response = model.invoke("What is the capital of France?")
print(response.content)
```
Using the AI Gateway
The AI Gateway provides a unified interface for all providers:
```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  modelName: "claude-3-5-sonnet-20240620/anthropic", // use any provider via a model/provider string
  configuration: {
    baseURL: "https://ai-gateway.helicone.ai",
    apiKey: process.env.HELICONE_API_KEY,
  },
});

const response = await model.invoke("Hello!");
```
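Because routing is driven by the model string, switching providers does not require any other configuration changes. A minimal sketch, assuming the same model/provider naming convention as the example above (the `gatewayModel` helper and the `"gpt-4o-mini/openai"` string are illustrative):

```typescript
import { ChatOpenAI } from "@langchain/openai";

// Illustrative helper: any provider's model can be selected with the same
// gateway configuration by changing only the model string.
const gatewayModel = (model: string) =>
  new ChatOpenAI({
    modelName: model,
    configuration: {
      baseURL: "https://ai-gateway.helicone.ai",
      apiKey: process.env.HELICONE_API_KEY,
    },
  });

const claude = gatewayModel("claude-3-5-sonnet-20240620/anthropic");
const gpt = gatewayModel("gpt-4o-mini/openai"); // illustrative model/provider string

console.log((await claude.invoke("Hello!")).content);
console.log((await gpt.invoke("Hello!")).content);
```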
Learn more about the AI Gateway.
Chains
Track LangChain chains with full visibility:
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const model = new ChatOpenAI({
  modelName: "gpt-4o-mini",
  configuration: {
    baseURL: "https://oai.helicone.ai/v1",
    defaultHeaders: {
      "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
      "Helicone-Session-Id": "chain-session-123",
      "Helicone-Property-Chain": "translation-chain",
    },
  },
});

const prompt = PromptTemplate.fromTemplate(
  "Translate the following text to {language}: {text}"
);

const chain = prompt.pipe(model).pipe(new StringOutputParser());

const result = await chain.invoke({
  language: "French",
  text: "Hello, how are you?",
});

console.log(result);
```
Agents
Monitor LangChain agents and tool usage:
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { createOpenAIFunctionsAgent, AgentExecutor } from "langchain/agents";
import { pull } from "langchain/hub";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { Calculator } from "@langchain/community/tools/calculator";
import { v4 as uuidv4 } from "uuid";

const sessionId = uuidv4();

const model = new ChatOpenAI({
  modelName: "gpt-4o-mini",
  configuration: {
    baseURL: "https://oai.helicone.ai/v1",
    defaultHeaders: {
      "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
      "Helicone-Session-Id": sessionId,
      "Helicone-Session-Name": "Calculator Agent",
      "Helicone-Property-Agent": "calculator",
    },
  },
});

const tools = [new Calculator()];
const prompt = await pull<ChatPromptTemplate>("hwchase17/openai-functions-agent");

const agent = await createOpenAIFunctionsAgent({
  llm: model,
  tools,
  prompt,
});

const agentExecutor = new AgentExecutor({
  agent,
  tools,
});

const result = await agentExecutor.invoke({
  input: "What is 25 * 4 + 10?",
});

console.log(result.output);
```
View the full agent execution trace in your Helicone dashboard, including:
- All LLM calls
- Tool invocations
- Intermediate steps
- Total cost and latency
Retrieval (RAG)
Track RAG pipelines with custom properties:
```typescript
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { RetrievalQAChain } from "langchain/chains";

const embeddings = new OpenAIEmbeddings({
  configuration: {
    baseURL: "https://oai.helicone.ai/v1",
    defaultHeaders: {
      "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
      "Helicone-Property-Type": "embedding",
    },
  },
});

const vectorStore = await MemoryVectorStore.fromTexts(
  [
    "Paris is the capital of France.",
    "Berlin is the capital of Germany.",
    "London is the capital of England.",
  ],
  [{}, {}, {}],
  embeddings
);

const model = new ChatOpenAI({
  modelName: "gpt-4o-mini",
  configuration: {
    baseURL: "https://oai.helicone.ai/v1",
    defaultHeaders: {
      "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
      "Helicone-Property-Type": "rag-query",
    },
  },
});

const chain = RetrievalQAChain.fromLLM(model, vectorStore.asRetriever());

const result = await chain.call({
  query: "What is the capital of France?",
});

console.log(result.text);
```
Session Tracking
Track multi-step LangChain workflows:
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { v4 as uuidv4 } from "uuid";

const sessionId = uuidv4();

const model = new ChatOpenAI({
  modelName: "gpt-4o-mini",
  configuration: {
    baseURL: "https://oai.helicone.ai/v1",
    defaultHeaders: {
      "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
      "Helicone-Session-Id": sessionId,
      "Helicone-Session-Name": "Research Assistant",
    },
  },
});

// Step 1: Research
const research = await model.invoke("Summarize quantum computing", {
  headers: {
    "Helicone-Session-Path": "/research",
  },
});

// Step 2: Write
const article = await model.invoke(
  `Write a blog post based on: ${research.content}`,
  {
    headers: {
      "Helicone-Session-Path": "/write",
    },
  }
);

// Step 3: Review
const review = await model.invoke(`Review and improve: ${article.content}`, {
  headers: {
    "Helicone-Session-Path": "/review",
  },
});
```
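Session paths can also be nested to group sub-steps under a parent step. A minimal sketch, assuming slash-separated paths nest in the session tree and reusing the per-call header pattern shown above (the `/research/sources` step is illustrative):

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { v4 as uuidv4 } from "uuid";

const sessionId = uuidv4();

const model = new ChatOpenAI({
  modelName: "gpt-4o-mini",
  configuration: {
    baseURL: "https://oai.helicone.ai/v1",
    defaultHeaders: {
      "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
      "Helicone-Session-Id": sessionId,
      "Helicone-Session-Name": "Research Assistant",
    },
  },
});

// Parent step
const research = await model.invoke("Summarize quantum computing", {
  headers: { "Helicone-Session-Path": "/research" },
});

// Child step: the extra path segment groups this request under /research
// in the session view (illustrative sub-path).
const sources = await model.invoke(
  `List three sources to check for: ${research.content}`,
  {
    headers: { "Helicone-Session-Path": "/research/sources" },
  }
);
```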
Learn more about Session Tracking.
Custom Properties
Add context to your LangChain requests:
```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  modelName: "gpt-4o-mini",
  configuration: {
    baseURL: "https://oai.helicone.ai/v1",
    defaultHeaders: {
      "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
      "Helicone-User-Id": "user-123",
      "Helicone-Property-Environment": "production",
      "Helicone-Property-Application": "chatbot",
      "Helicone-Property-Version": "v2.1.0",
    },
  },
});
```
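These headers are fixed when the model is constructed, so per-user values such as `Helicone-User-Id` are usually supplied by building the model with the current request's user. A minimal sketch (`modelForUser` and `userId` are illustrative names, not part of the SDK):

```typescript
import { ChatOpenAI } from "@langchain/openai";

// Build a model per request so user-specific headers can vary.
function modelForUser(userId: string) {
  return new ChatOpenAI({
    modelName: "gpt-4o-mini",
    configuration: {
      baseURL: "https://oai.helicone.ai/v1",
      defaultHeaders: {
        "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
        "Helicone-User-Id": userId,
        "Helicone-Property-Environment": "production",
      },
    },
  });
}

const response = await modelForUser("user-123").invoke("Hi there!");
console.log(response.content);
```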
Streaming
Helicone supports LangChain streaming:
```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  modelName: "gpt-4o-mini",
  streaming: true,
  configuration: {
    baseURL: "https://oai.helicone.ai/v1",
    defaultHeaders: {
      "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
    },
  },
});

const stream = await model.stream("Write a poem about AI");

for await (const chunk of stream) {
  process.stdout.write(chunk.content);
}
```
Anthropic with LangChain
Use Anthropic Claude with LangChain:
```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({
  modelName: "claude-3-5-sonnet-20240620",
  anthropicApiKey: process.env.ANTHROPIC_API_KEY,
  clientOptions: {
    baseURL: "https://anthropic.helicone.ai",
    defaultHeaders: {
      "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
    },
  },
});

const response = await model.invoke("Hello!");
console.log(response.content);
```
Zero-Latency Integration
For production applications, use async logging:
```typescript
import { HeliconeAsyncLogger } from "@helicone/async";
import OpenAI from "openai";
import { ChatOpenAI } from "@langchain/openai";

const logger = new HeliconeAsyncLogger({
  apiKey: process.env.HELICONE_API_KEY,
  providers: {
    openAI: OpenAI,
    langchain: {
      chainsModule: await import("langchain/chains"),
      agentsModule: await import("langchain/agents"),
      toolsModule: await import("langchain/tools"),
    },
  },
});

logger.init();

// Use LangChain normally - logging happens asynchronously
const model = new ChatOpenAI({
  modelName: "gpt-4o-mini",
});

const response = await model.invoke("Hello!");
```

Learn more about Async Logging.
Best Practices
Use session IDs for multi-step workflows
Track complete workflows with session IDs:

```typescript
const sessionId = uuidv4();

defaultHeaders: {
  "Helicone-Session-Id": sessionId,
  "Helicone-Session-Name": "Research Pipeline",
}
```
Add custom properties for filtering
Use custom properties to organize requests:

```typescript
defaultHeaders: {
  "Helicone-Property-Chain-Type": "rag",
  "Helicone-Property-Environment": "production",
}
```
Monitor embeddings separately
Track embedding and completion costs separately:

```typescript
// For embeddings
const embeddings = new OpenAIEmbeddings({
  configuration: {
    defaultHeaders: {
      "Helicone-Property-Type": "embedding",
    },
  },
});

// For completions
const model = new ChatOpenAI({
  configuration: {
    defaultHeaders: {
      "Helicone-Property-Type": "completion",
    },
  },
});
```
Python Examples
Basic Usage
```python
from langchain_openai import ChatOpenAI
import os

model = ChatOpenAI(
    model="gpt-4o-mini",
    base_url="https://oai.helicone.ai/v1",
    default_headers={
        "Helicone-Auth": f"Bearer {os.getenv('HELICONE_API_KEY')}",
        "Helicone-User-Id": "user-123",
    },
)

response = model.invoke("What is the capital of France?")
print(response.content)
```
Chains in Python
```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
import os

model = ChatOpenAI(
    model="gpt-4o-mini",
    base_url="https://oai.helicone.ai/v1",
    default_headers={
        "Helicone-Auth": f"Bearer {os.getenv('HELICONE_API_KEY')}",
        "Helicone-Property-Chain": "translation",
    },
)

prompt = ChatPromptTemplate.from_template(
    "Translate the following to {language}: {text}"
)

chain = prompt | model | StrOutputParser()

result = chain.invoke({
    "language": "French",
    "text": "Hello, how are you?",
})

print(result)
```
Troubleshooting
Requests not appearing in dashboard
- Verify your Helicone API key is correct
- Check the Helicone-Auth header format: Bearer sk-helicone-...
- Ensure you're using the correct base URL
- Check that defaultHeaders is in the configuration object (see the sketch below)
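With the JS SDK, a common cause is placing defaultHeaders at the top level of the ChatOpenAI constructor instead of inside configuration, which is what gets passed through to the underlying OpenAI client. A minimal sketch of the correct placement:

```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  modelName: "gpt-4o-mini",
  // Correct: baseURL and defaultHeaders live inside `configuration`,
  // which is forwarded to the underlying OpenAI client.
  configuration: {
    baseURL: "https://oai.helicone.ai/v1",
    defaultHeaders: {
      "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
    },
  },
  // Common mistake: a top-level `defaultHeaders` field here is not a
  // recognized ChatOpenAI option, so the headers never reach Helicone.
});
```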
Session tracking not working
- Ensure you're using the same session ID across requests (see the sketch below)
- Session IDs must be UUIDs or unique strings
- Include Helicone-Session-Id in every request of the session
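A frequent cause of broken grouping is generating a new UUID inside each request handler. Create the ID once and reuse it for every call in the workflow; a minimal sketch (`makeModel` is an illustrative helper, not part of the SDK):

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { v4 as uuidv4 } from "uuid";

// Create the session ID once, outside any per-request code, and reuse it
// for every model instance involved in the workflow.
const sessionId = uuidv4();

const makeModel = () =>
  new ChatOpenAI({
    modelName: "gpt-4o-mini",
    configuration: {
      baseURL: "https://oai.helicone.ai/v1",
      defaultHeaders: {
        "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
        "Helicone-Session-Id": sessionId, // same value for every step
      },
    },
  });

await makeModel().invoke("Step 1: gather background");
await makeModel().invoke("Step 2: draft an answer"); // grouped under the same session
```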
Next Steps
- AI Gateway: Use multiple providers with automatic fallbacks
- Session Tracking: Track complex workflows
- Async Logging: Zero-latency integration
- Custom Properties: Add custom metadata