Reward Kit API Reference

This API reference provides detailed documentation for the key classes, functions, and data models in the Reward Kit.

Core Components

  • Classes and Decorators: Documentation for the RewardFunction class and the reward_function decorator
  • Data Models: Documentation for Message, RewardOutput, MetricRewardOutput, and other data models

Modules

reward_function Module

The reward_function module contains the core functionality for creating and using reward functions. Both the RewardFunction class and the reward_function decorator are demonstrated under Common Patterns below.

from reward_kit.reward_function import RewardFunction, reward_function

evaluation Module

The evaluation module provides functions for previewing and creating evaluations.

from reward_kit.evaluation import preview_evaluation, create_evaluation

Key functions:

  • preview_evaluation: Previews an evaluation with sample data before deployment
  • create_evaluation: Creates and deploys an evaluator to Fireworks
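
For example, a preview-then-deploy flow might look like the following sketch. The parameter names (metric_folders, sample_file, evaluator_id) are assumptions mirroring the CLI flags shown later, not confirmed signatures:

from reward_kit.evaluation import preview_evaluation, create_evaluation

# Preview against local sample data before deploying.
# Parameter names are assumptions based on the CLI flags below.
preview = preview_evaluation(
    metric_folders=["metric=./path/to/metric"],
    sample_file="./samples.jsonl",
)

# Once the preview looks right, deploy the evaluator to Fireworks.
evaluator = create_evaluation(
    evaluator_id="my-evaluator",
    metric_folders=["metric=./path/to/metric"],
)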

models Module

The models module contains data models used throughout the Reward Kit.

from reward_kit.models import RewardOutput, MetricRewardOutput, Message
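
A short example of constructing these models; the constructor arguments follow the message and reward shapes used elsewhere on this page:

from reward_kit.models import Message, RewardOutput, MetricRewardOutput

# A chat message, matching the role/content shape used in the examples below.
msg = Message(role="assistant", content="Machine learning is...")

# A reward with one named metric, as returned by reward functions.
output = RewardOutput(
    score=0.8,
    metrics={
        "clarity": MetricRewardOutput(score=0.8, reason="Clear and concise")
    },
)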

rewards Module

The rewards module contains specialized reward functions for specific use cases.

from reward_kit.rewards.function_calling import match_function_call
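
A minimal usage sketch follows; the expected_call_schema parameter and the message shape carrying the function call are assumptions for illustration, not a documented signature:

from reward_kit.rewards.function_calling import match_function_call

# Conversation in which the assistant emits a function call.
# The function_call field shown here is an assumption for illustration.
messages = [
    {"role": "user", "content": "What is the weather in Paris?"},
    {
        "role": "assistant",
        "content": "",
        "function_call": {
            "name": "get_weather",
            "arguments": '{"location": "Paris"}',
        },
    },
]

# Score how closely the call matches an expected schema.
# The expected_call_schema argument is an assumption.
result = match_function_call(
    messages=messages,
    expected_call_schema={
        "name": "get_weather",
        "arguments": {"location": {"type": "string"}},
    },
)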

server Module

The server module provides functionality for running a reward function as a server.

from reward_kit.server import run_server
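
For example, serving a reward function over HTTP might look like this sketch; the my_rewards module is hypothetical and the host/port parameters are assumptions:

from reward_kit.server import run_server

from my_rewards import my_reward_function  # hypothetical module

# Expose the reward function as an HTTP service.
# The host/port parameters are assumptions.
run_server(my_reward_function, host="0.0.0.0", port=8000)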

auth Module

The auth module handles authentication with Fireworks.

from reward_kit.auth import get_authentication
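
A sketch of resolving credentials before deployment; the returned account ID and API key shape is an assumption:

from reward_kit.auth import get_authentication

# Resolve Fireworks credentials (for example, from the environment).
# The (account_id, api_key) return shape is an assumption.
account_id, api_key = get_authentication()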

Command Line Interface

The Reward Kit provides a command-line interface for common operations:

# Show help
reward-kit --help

# Preview an evaluator
reward-kit preview --metrics-folders "metric=./path" --samples ./samples.jsonl

# Deploy an evaluator
reward-kit deploy --id my-evaluator --metrics-folders "metric=./path" --force

For detailed CLI documentation, see the CLI Reference.

Common Patterns

Creating a Basic Reward Function

from reward_kit import reward_function, RewardOutput, MetricRewardOutput

@reward_function
def my_reward_function(messages, original_messages=None, **kwargs):
    # Your evaluation logic here; this toy length heuristic keeps the
    # example runnable.
    response = messages[-1].get("content", "")
    score = min(len(response) / 100.0, 1.0)

    return RewardOutput(
        score=score,
        metrics={
            "my_metric": MetricRewardOutput(
                score=score,
                reason="Explanation for the score"
            )
        }
    )
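
During development you can exercise the function directly with plain message dicts; this assumes the decorator leaves the function directly callable:

# Quick local check (assumes the decorated function remains callable).
result = my_reward_function(messages=[
    {"role": "user", "content": "Explain overfitting."},
    {"role": "assistant", "content": "Overfitting happens when..."},
])
print(result.score)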

Using a Deployed Reward Function

from reward_kit import RewardFunction

# Create a reference to a deployed reward function
reward_fn = RewardFunction(
    name="my-deployed-evaluator",
    mode="remote"
)

# Call the reward function
result = reward_fn(messages=[
    {"role": "user", "content": "What is machine learning?"},
    {"role": "assistant", "content": "Machine learning is..."}
])

print(f"Score: {result.score}")

Next Steps