Reward Kit Documentation
Welcome to the Reward Kit documentation. This guide will help you create, test, and deploy reward functions for evaluating and optimizing LLM responses.
Getting Started
Developer Guide
- Getting Started with Reward Functions: Learn the basics of reward functions
- Reward Function Anatomy: Understand the structure of reward functions (a minimal sketch follows this list)
- Core Data Types: Explore the data models used in reward functions
- Evaluation Workflows: Learn the complete lifecycle from development to deployment
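To make the structure concrete, here is a minimal sketch of what a reward function can look like. It is illustrative only: the function name `clarity_reward`, the message format, and the returned dictionary shape are assumptions made for this example, not the Reward Kit API. The actual data models and decorators are covered in Core Data Types and Reward Function Anatomy.

```python
from typing import Dict, List


def clarity_reward(messages: List[Dict[str, str]]) -> Dict[str, object]:
    """Toy reward function: score the last assistant message for clarity.

    Illustrative sketch only -- the signature and return shape here are
    assumptions, not the Reward Kit API. See "Core Data Types" for the
    real data models.
    """
    # Take the most recent assistant turn as the response to evaluate.
    response = next(
        (m["content"] for m in reversed(messages) if m["role"] == "assistant"),
        "",
    )

    # A deliberately simple heuristic: shorter sentences read as clearer.
    sentences = [s for s in response.split(".") if s.strip()]
    avg_len = (
        sum(len(s.split()) for s in sentences) / len(sentences)
        if sentences
        else 0.0
    )

    if not sentences:
        clarity = 0.0
    elif avg_len <= 25:
        clarity = 1.0
    else:
        clarity = 0.5

    return {
        "score": clarity,                 # overall reward in [0, 1]
        "metrics": {"clarity": clarity},  # per-metric breakdown
        "reason": f"average sentence length: {avg_len:.1f} words",
    }


if __name__ == "__main__":
    result = clarity_reward([
        {"role": "user", "content": "Explain recursion."},
        {"role": "assistant", "content": "Recursion is when a function calls itself. Each call handles a smaller piece of the problem."},
    ])
    print(result["score"], result["reason"])
```

The shape mirrors the pattern described in this guide: a single overall score plus named metrics, so that individual criteria can be inspected or weighted separately during evaluation.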
Examples and Built-in Reward Functions
- Reward Functions Overview: Summary of all built-in reward functions
- Basic Reward Function: A simple example that evaluates response clarity
- Advanced Reward Functions: More complex examples with multiple metrics
Built-in Reward Function Documentation
- Code Execution Evaluation: Evaluate code by running it locally
- Code Execution with E2B: Evaluate code using the E2B cloud sandbox
- Function Calling Evaluation: Evaluate function calls made by AI models
- JSON Schema Validation: Validate JSON outputs against schemas (see the sketch after this list)
- Math Evaluation: Evaluate mathematical answers in responses
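As an illustration of the schema-validation idea, the snippet below sketches how a model's JSON output can be scored against a schema. It uses the third-party `jsonschema` package and a hypothetical `PERSON_SCHEMA`; it is not the built-in reward function itself, which is documented on the JSON Schema Validation page.

```python
import json

import jsonschema

# Hypothetical schema the model's JSON output is expected to match.
PERSON_SCHEMA = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer", "minimum": 0},
    },
    "required": ["name", "age"],
}


def json_schema_score(response_text: str, schema: dict) -> float:
    """Return 1.0 if the response parses as JSON and matches the schema, else 0.0."""
    try:
        payload = json.loads(response_text)
        jsonschema.validate(instance=payload, schema=schema)
        return 1.0
    except (json.JSONDecodeError, jsonschema.ValidationError):
        return 0.0


print(json_schema_score('{"name": "Ada", "age": 36}', PERSON_SCHEMA))  # 1.0
print(json_schema_score('{"name": "Ada"}', PERSON_SCHEMA))             # 0.0 (missing "age")
```

A binary pass/fail score like this is the simplest option; the built-in validator's page describes the metrics it actually reports.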
Tutorials
- Creating Your First Reward Function: Step-by-step guide to creating a reward function
API Reference
- Coming soon
Command Line Interface
- Coming soon
Best Practices
- Coming soon
Community and Support
- GitHub Issues: Report bugs and request features
- Contributing Guide: How to contribute to the Reward Kit project