
This endpoint (POST /v1/evals/{requestId}) adds an evaluation score to a specific request. Use it to track quality metrics, performance scores, or custom evaluation criteria for your LLM requests.

Use Cases

  • Add human feedback scores to requests
  • Record automated evaluation results
  • Track custom quality metrics
  • Build evaluation datasets for model improvements
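As a sketch of the call flow, the curl request shown below can be built with only the Python standard library. This is an illustrative helper, not an official SDK; the request ID and API key are placeholders:

```python
import json
import urllib.request

BASE_URL = "https://api.helicone.ai/v1/evals"

def build_eval_request(request_id: str, name: str, score: float,
                       api_key: str) -> urllib.request.Request:
    """Build (but do not send) the POST request that adds an eval score."""
    payload = json.dumps({"name": name, "score": score}).encode("utf-8")
    return urllib.request.Request(
        url=f"{BASE_URL}/{request_id}",
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To actually send it: urllib.request.urlopen(build_eval_request(...))
req = build_eval_request("550e8400-e29b-41d4-a716-446655440000",
                         "accuracy", 0.95, "<YOUR_API_KEY>")
```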

Path Parameters

requestId (string, required)
The unique identifier of the request to evaluate. This is the Helicone request ID returned when logging requests.

Request Body

name (string, required)
Name of the evaluation metric (e.g., "accuracy", "relevance", "quality").

score (number, required)
Numerical score for the evaluation. Can be any number, but scores are typically normalized to a range such as 0-1 or 0-100.
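Because the API accepts any number, it can help to normalize scores before sending them. A minimal sketch of such a helper (the 0-1 clamp is a local convention, not something the API enforces):

```python
def normalize_score(raw: float, lo: float = 0.0, hi: float = 100.0) -> float:
    """Map a raw score from [lo, hi] onto [0, 1], clamping out-of-range values."""
    if hi <= lo:
        raise ValueError("hi must be greater than lo")
    scaled = (raw - lo) / (hi - lo)
    return max(0.0, min(1.0, scaled))
```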

Response

Returns a Result object indicating success or failure.
data (null)
Null on success.

error (string | null)
Error message if the request failed; null on success.
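A small client-side helper for interpreting the Result object might look like the following sketch (field names taken from the response schema above; the helper itself is illustrative):

```python
def unwrap_result(result: dict) -> None:
    """Raise if the Result object reports a failure; return None on success."""
    error = result.get("error")
    if error is not None:
        raise RuntimeError(f"Eval request failed: {error}")
```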

Example Request

curl --request POST \
  --url https://api.helicone.ai/v1/evals/550e8400-e29b-41d4-a716-446655440000 \
  --header 'Authorization: Bearer <YOUR_API_KEY>' \
  --header 'Content-Type: application/json' \
  --data '{
    "name": "accuracy",
    "score": 0.95
  }'

Example Response

{
  "data": null,
  "error": null
}

Error Response

{
  "data": null,
  "error": "Request not found"
}

Best Practices

  • Use consistent evaluation metric names across your requests
  • Normalize scores to a standard range (e.g., 0-1)
  • Add evaluations soon after request completion for accurate tracking
  • Consider using multiple evaluation metrics to capture different quality dimensions
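The practices above can be bundled into a thin validation layer that enforces a fixed metric vocabulary and a normalized 0-1 range before a request is sent (the metric names and helper below are illustrative, not part of Helicone's API):

```python
# Project-local convention: the metric names we allow, kept consistent
# across all requests (an assumption for this sketch, not an API rule).
ALLOWED_METRICS = {"accuracy", "relevance", "quality"}

def validate_eval(name: str, score: float) -> dict:
    """Check a metric name and score against local conventions and
    return the request body to send."""
    if name not in ALLOWED_METRICS:
        raise ValueError(f"Unknown metric {name!r}; use one of {sorted(ALLOWED_METRICS)}")
    if not 0.0 <= score <= 1.0:
        raise ValueError(f"Score {score} outside normalized range [0, 1]")
    return {"name": name, "score": score}
```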