# Vercel AI SDK

Helicone integrates seamlessly with the Vercel AI SDK, providing observability for streaming AI applications built with Next.js and React.
## Integration Methods

- **Provider Configuration**: Configure AI SDK providers to route through Helicone.
- **AI Gateway**: Use the AI Gateway for unified access to all providers.
- **Custom Headers**: Add Helicone headers to track sessions and custom properties.
- **Async Logging**: Zero-latency logging for production apps.
## Quick Start

**1. Install dependencies**

```bash
npm install ai @ai-sdk/openai @ai-sdk/anthropic
```

**2. Configure a provider with Helicone**
**OpenAI Provider**

```typescript
import { createOpenAI } from "@ai-sdk/openai";

const openai = createOpenAI({
  baseURL: "https://oai.helicone.ai/v1",
  apiKey: process.env.OPENAI_API_KEY,
  headers: {
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
  },
});
```

**Anthropic Provider**

```typescript
import { createAnthropic } from "@ai-sdk/anthropic";

const anthropic = createAnthropic({
  baseURL: "https://anthropic.helicone.ai",
  apiKey: process.env.ANTHROPIC_API_KEY,
  headers: {
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
  },
});
```

**AI Gateway**

```typescript
import { createOpenAI } from "@ai-sdk/openai";

const gateway = createOpenAI({
  baseURL: "https://ai-gateway.helicone.ai",
  apiKey: process.env.HELICONE_API_KEY,
});
```
**3. Use with AI SDK functions**

```typescript
import { generateText } from "ai";

const { text } = await generateText({
  model: openai("gpt-4o-mini"),
  prompt: "What is the capital of France?",
});

console.log(text);
```
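All three provider configurations attach the same `Helicone-Auth` header. A tiny helper (a hypothetical sketch, not part of the SDK or Helicone) avoids repeating the template string:

```typescript
// Hypothetical helper: builds the Helicone auth header used in every
// provider config above.
function heliconeAuth(apiKey: string): Record<string, string> {
  return { "Helicone-Auth": `Bearer ${apiKey}` };
}

// Usage sketch:
// const openai = createOpenAI({
//   baseURL: "https://oai.helicone.ai/v1",
//   apiKey: process.env.OPENAI_API_KEY,
//   headers: heliconeAuth(process.env.HELICONE_API_KEY!),
// });
```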
## Streaming Chat

Integrate Helicone with Next.js API routes:
```typescript
// app/api/chat/route.ts
import { createOpenAI } from "@ai-sdk/openai";
import { streamText } from "ai";

const openai = createOpenAI({
  baseURL: "https://oai.helicone.ai/v1",
  apiKey: process.env.OPENAI_API_KEY!,
  headers: {
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
  },
});

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = await streamText({
    model: openai("gpt-4o-mini"),
    messages,
  });

  return result.toDataStreamResponse();
}
```
Client-side:
```tsx
// app/page.tsx
"use client";

import { useChat } from "ai/react";

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    <div>
      {messages.map((m) => (
        <div key={m.id}>
          {m.role}: {m.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
        <button type="submit">Send</button>
      </form>
    </div>
  );
}
```
## Session Tracking

Track multi-turn conversations with session IDs:
```typescript
import { createOpenAI } from "@ai-sdk/openai";
import { generateText } from "ai";
import { v4 as uuidv4 } from "uuid";

const sessionId = uuidv4();

const openai = createOpenAI({
  baseURL: "https://oai.helicone.ai/v1",
  apiKey: process.env.OPENAI_API_KEY!,
  headers: {
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
  },
});

const { text } = await generateText({
  model: openai("gpt-4o-mini"),
  prompt: "Hello!",
  headers: {
    "Helicone-Session-Id": sessionId,
    "Helicone-Session-Name": "Customer Chat",
    "Helicone-User-Id": "user-123",
  },
});
```
In API routes:
```typescript
// app/api/chat/route.ts
import { createOpenAI } from "@ai-sdk/openai";
import { streamText } from "ai";

const openai = createOpenAI({
  baseURL: "https://oai.helicone.ai/v1",
  apiKey: process.env.OPENAI_API_KEY!,
  headers: {
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
  },
});

export async function POST(req: Request) {
  const { messages, sessionId, userId } = await req.json();

  const result = await streamText({
    model: openai("gpt-4o-mini"),
    messages,
    headers: {
      "Helicone-Session-Id": sessionId,
      "Helicone-User-Id": userId,
      "Helicone-Property-Path": req.url,
    },
  });

  return result.toDataStreamResponse();
}
```
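The route above expects `sessionId` and `userId` in the request body. On the client, one way to supply them is a small merge helper (hypothetical, for illustration) whose result is passed through `useChat`'s `body` option:

```typescript
// Hypothetical helper: merges session fields into a chat request body,
// so the API route can destructure { messages, sessionId, userId }.
type SessionFields = { sessionId: string; userId: string };

function withSession<T extends object>(
  body: T,
  session: SessionFields
): T & SessionFields {
  return { ...body, ...session };
}

// Client-side usage sketch with the AI SDK's useChat hook:
// const { messages } = useChat({
//   body: withSession({}, { sessionId, userId: "user-123" }),
// });
```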
Learn more about Session Tracking.
## Custom Properties

Add custom metadata to track important context:
```typescript
import { generateText } from "ai";

const { text } = await generateText({
  model: openai("gpt-4o-mini"),
  prompt: "Analyze this user query",
  headers: {
    "Helicone-User-Id": "user-123",
    "Helicone-Property-Environment": "production",
    "Helicone-Property-Feature": "query-analysis",
    "Helicone-Property-Version": "v2.1.0",
  },
});
```
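Custom property headers all follow the same `Helicone-Property-<Name>` pattern, so a small helper (hypothetical, for illustration) can build them from a plain object:

```typescript
// Hypothetical helper: turns { environment: "production" } into
// { "Helicone-Property-Environment": "production" }.
function heliconeProps(
  props: Record<string, string>
): Record<string, string> {
  const headers: Record<string, string> = {};
  for (const [key, value] of Object.entries(props)) {
    // Capitalize the first letter to match the header convention above.
    const name = key.charAt(0).toUpperCase() + key.slice(1);
    headers[`Helicone-Property-${name}`] = value;
  }
  return headers;
}
```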
## Tool Calling

Helicone logs tool calls with full visibility into their inputs and outputs:
```typescript
import { generateText, tool } from "ai";
import { z } from "zod";

const { text, toolCalls } = await generateText({
  model: openai("gpt-4o-mini"),
  prompt: "What's the weather in Paris?",
  tools: {
    weather: tool({
      description: "Get the weather for a location",
      parameters: z.object({
        location: z.string().describe("The city name"),
      }),
      execute: async ({ location }) => {
        // Fetch weather data
        return { temperature: 20, condition: "sunny" };
      },
    }),
  },
  headers: {
    "Helicone-Property-Tools": "weather",
  },
});

// Tool calls are logged with inputs and outputs
console.log(toolCalls);
```
## AWS Bedrock

Use the Vercel AI SDK with AWS Bedrock through Helicone:
```typescript
import { createAmazonBedrock } from "@ai-sdk/amazon-bedrock";
import { generateText } from "ai";

const bedrock = createAmazonBedrock({
  region: process.env.AWS_REGION!,
  accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
  baseURL: `https://bedrock.helicone.ai/v1/${process.env.AWS_REGION}`,
  headers: {
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
    "aws-access-key": process.env.AWS_ACCESS_KEY_ID!,
    "aws-secret-key": process.env.AWS_SECRET_ACCESS_KEY!,
    "Accept-Encoding": "identity",
  },
});

const { text } = await generateText({
  model: bedrock("anthropic.claude-3-5-sonnet-20240620-v1:0"),
  prompt: "Hello!",
  headers: {
    "Helicone-Session-Id": "bedrock-session-123",
  },
});
```
## Response Caching

Reduce costs with response caching:
```typescript
import { generateText } from "ai";

const { text } = await generateText({
  model: openai("gpt-4o-mini"),
  prompt: "What is 2+2?",
  headers: {
    "Helicone-Cache-Enabled": "true",
  },
});

// Subsequent identical requests return cached responses
```
Learn more about Response Caching.
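Beyond `Helicone-Cache-Enabled`, Helicone's caching docs describe additional knobs such as a `Cache-Control` max-age and a cache seed; the helper below is a hypothetical sketch that assumes those header names, so verify them against the Response Caching reference before relying on it:

```typescript
// Hypothetical helper: builds Helicone caching headers.
// "Cache-Control" (TTL) and "Helicone-Cache-Seed" are assumed from
// Helicone's caching documentation, not defined in this page.
function cacheHeaders(
  opts: { maxAgeSeconds?: number; seed?: string } = {}
): Record<string, string> {
  const headers: Record<string, string> = {
    "Helicone-Cache-Enabled": "true",
  };
  if (opts.maxAgeSeconds !== undefined) {
    headers["Cache-Control"] = `max-age=${opts.maxAgeSeconds}`;
  }
  if (opts.seed !== undefined) {
    headers["Helicone-Cache-Seed"] = opts.seed;
  }
  return headers;
}
```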
## Object Generation

Track structured object generation:
```typescript
import { generateObject } from "ai";
import { z } from "zod";

const { object } = await generateObject({
  model: openai("gpt-4o-mini"),
  schema: z.object({
    name: z.string(),
    age: z.number(),
    city: z.string(),
  }),
  prompt: "Generate a person's profile",
  headers: {
    "Helicone-Property-Type": "structured-generation",
  },
});

console.log(object);
```
## Streaming Objects

Stream structured data generation:
```typescript
import { streamObject } from "ai";
import { z } from "zod";

const { partialObjectStream } = await streamObject({
  model: openai("gpt-4o-mini"),
  schema: z.object({
    recipe: z.object({
      name: z.string(),
      ingredients: z.array(z.string()),
      steps: z.array(z.string()),
    }),
  }),
  prompt: "Generate a recipe for chocolate cake",
});

for await (const partialObject of partialObjectStream) {
  console.log(partialObject);
}
```
## Multi-Modal (Vision)

Use vision models with Helicone:
```typescript
import { generateText } from "ai";

const { text } = await generateText({
  model: openai("gpt-4o"),
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "What's in this image?" },
        {
          type: "image",
          image: "https://example.com/image.jpg",
        },
      ],
    },
  ],
  headers: {
    "Helicone-Property-Type": "vision",
  },
});
```
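The mixed text-plus-image content array above has a regular shape, so a small builder (hypothetical, for illustration) can assemble it:

```typescript
// Hypothetical helper: builds the user message shape shown above,
// combining a text part and an image-URL part.
type ContentPart =
  | { type: "text"; text: string }
  | { type: "image"; image: string };

function visionMessage(text: string, imageUrl: string) {
  return {
    role: "user" as const,
    content: [
      { type: "text", text },
      { type: "image", image: imageUrl },
    ] as ContentPart[],
  };
}
```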
## Rate Limiting

Control usage per user:
```typescript
import { generateText } from "ai";

const { text } = await generateText({
  model: openai("gpt-4o-mini"),
  prompt: "Hello!",
  headers: {
    "Helicone-RateLimit-Policy": "100;w=60;s=user",
    "Helicone-User-Id": "user-123",
  },
});
```
Learn more about Rate Limiting.
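The policy string above follows the pattern `<quota>;w=<windowSeconds>;s=<segment>`, so `100;w=60;s=user` allows 100 requests per user per 60-second window. A tiny formatter (hypothetical, not part of any SDK) keeps policies consistent:

```typescript
// Hypothetical helper: formats a Helicone rate-limit policy string.
// Pattern assumed from the example above: "<quota>;w=<windowSeconds>;s=<segment>".
function rateLimitPolicy(
  quota: number,
  windowSeconds: number,
  segment?: string
): string {
  const parts = [`${quota}`, `w=${windowSeconds}`];
  if (segment) parts.push(`s=${segment}`);
  return parts.join(";");
}
```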
## Best Practices

**Always include session IDs.** Track conversations with session IDs:

```typescript
headers: {
  "Helicone-Session-Id": sessionId,
  "Helicone-User-Id": userId,
}
```

**Use custom properties for filtering.** Add context for better organization:

```typescript
headers: {
  "Helicone-Property-Environment": "production",
  "Helicone-Property-Feature": "chat",
}
```

**Enable caching for repeated queries.** Reduce costs with caching:

```typescript
headers: {
  "Helicone-Cache-Enabled": "true",
}
```
## Server Actions

Use Helicone with Next.js Server Actions:
```typescript
// app/actions.ts
"use server";

import { createOpenAI } from "@ai-sdk/openai";
import { generateText } from "ai";

const openai = createOpenAI({
  baseURL: "https://oai.helicone.ai/v1",
  apiKey: process.env.OPENAI_API_KEY!,
  headers: {
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
  },
});

export async function generateResponse(prompt: string, userId: string) {
  const { text } = await generateText({
    model: openai("gpt-4o-mini"),
    prompt,
    headers: {
      "Helicone-User-Id": userId,
      "Helicone-Property-Source": "server-action",
    },
  });
  return text;
}
```
## Troubleshooting

**Requests not appearing in dashboard**

- Verify your Helicone API key is correct.
- Check the `Helicone-Auth` header format: `Bearer sk-helicone-...`
- Ensure you're passing the `Helicone-Auth` header in the provider config, not only in individual function calls.
- Check the browser network tab for failed requests.

**Streaming not working**

- Ensure your Next.js API route returns `result.toDataStreamResponse()`.
- Check that streaming is enabled on your deployment platform.
- Verify no middleware is buffering the response.

**Session tracking not working**

- Session IDs must be consistent across requests.
- Pass the session ID from the client to the API route.
- Use UUIDs or other unique strings for session IDs.
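One way to keep session IDs consistent across requests is to generate the ID once per conversation and reuse it. A minimal in-memory registry (a hypothetical sketch using Node's `crypto.randomUUID`) illustrates the idea:

```typescript
import { randomUUID } from "node:crypto";

// Hypothetical sketch: returns the same session ID for a given
// conversation key, creating one on first use.
const sessions = new Map<string, string>();

function getSessionId(conversationKey: string): string {
  let id = sessions.get(conversationKey);
  if (!id) {
    id = randomUUID();
    sessions.set(conversationKey, id);
  }
  return id;
}
```

In a real app the mapping would live in client state or a store rather than a module-level `Map`, but the invariant is the same: one ID per conversation, reused for every request.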
## Next Steps

- **AI Gateway**: Use multiple providers with automatic fallbacks.
- **Session Tracking**: Track multi-turn conversations.
- **Custom Properties**: Add custom metadata.
- **Response Caching**: Reduce costs with caching.