This endpoint allows you to add an evaluation score to a specific request. Use it to track quality metrics, performance scores, or custom evaluation criteria for your LLM requests.
Use Cases
- Add human feedback scores to requests
- Record automated evaluation results
- Track custom quality metrics
- Build evaluation datasets for model improvements
Path Parameters
The unique identifier of the request to evaluate. This is the Helicone request ID returned when logging requests.
Request Body
Name of the evaluation metric (e.g., “accuracy”, “relevance”, “quality”)
Numerical score for the evaluation. Can be any number, but typically normalized to a range like 0-1 or 0-100.
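Based on the two fields described above, a request body might look like the following. The top-level field names `name` and `score` are assumptions inferred from these descriptions, not confirmed API details:

```json
{
  "name": "accuracy",
  "score": 0.92
}
```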
Response
Returns a Result object indicating success or failure.
Null on success.
Error message if the request failed; null on success.
Example Request
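A minimal sketch of sending a score with the Python standard library. The base URL, the `/v1/request/{requestId}/score` path, the Bearer auth header, and the body field names are all illustrative assumptions; check the endpoint reference above for the authoritative values:

```python
import json
import urllib.request

# Illustrative assumptions, not confirmed API details: the base URL,
# the /v1/request/{requestId}/score path, and the Bearer auth header.
HELICONE_API_KEY = "sk-helicone-..."  # your Helicone API key

def build_score_request(request_id: str, name: str, score: float) -> urllib.request.Request:
    """Construct (but do not send) the evaluation-score request."""
    body = json.dumps({"name": name, "score": score}).encode("utf-8")
    return urllib.request.Request(
        url=f"https://api.helicone.ai/v1/request/{request_id}/score",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {HELICONE_API_KEY}",
            "Content-Type": "application/json",
        },
    )

# request_id is the Helicone request ID returned when logging the request.
req = build_score_request("00000000-0000-0000-0000-000000000000", "accuracy", 0.92)
# response = urllib.request.urlopen(req)  # uncomment to actually send
```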
Example Response
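A successful Result consistent with the field descriptions above could look like this (illustrative; the field names `data` and `error` are assumptions):

```json
{
  "data": null,
  "error": null
}
```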
Error Response
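On failure, the `error` field carries a message instead of null. The specific message below is illustrative, not a documented error string:

```json
{
  "data": null,
  "error": "Request not found"
}
```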
Best Practices
- Use consistent evaluation metric names across your requests
- Normalize scores to a standard range (e.g., 0-1)
- Add evaluations soon after request completion for accurate tracking
- Consider using multiple evaluation metrics to capture different quality dimensions
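The first two practices can be sketched as a small client-side helper. The metric-name registry and the 0-1 normalization shown here are hypothetical conventions you would define in your own code, not part of the API:

```python
# Hypothetical client-side conventions: a fixed set of metric names and
# normalization of raw scores onto the standard 0-1 range.
ALLOWED_METRICS = {"accuracy", "relevance", "quality"}  # assumed metric names

def normalize(raw: float, lo: float, hi: float) -> float:
    """Map a raw score from [lo, hi] onto 0-1, clamping out-of-range values."""
    if hi <= lo:
        raise ValueError("hi must be greater than lo")
    return max(0.0, min(1.0, (raw - lo) / (hi - lo)))

def make_score_payload(name: str, raw: float, lo: float = 0.0, hi: float = 100.0) -> dict:
    """Build a request body using a consistent metric name and a normalized score."""
    if name not in ALLOWED_METRICS:
        raise ValueError(f"unknown metric: {name}")
    return {"name": name, "score": normalize(raw, lo, hi)}
```

Centralizing metric names and normalization like this keeps scores comparable across requests, which makes aggregate evaluation datasets easier to analyze later.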
