What you’ll learn
- How to set up and run a math evaluation using the Eval Protocol SDK
- How to launch an RFT job from the command line
- How to monitor training progress and evaluate accuracy improvements
Prefer a notebook experience? You can also run this tutorial in Google Colab. Note that Colab requires billing enabled on your Google account.
Prerequisites
- Python 3.10+
- A Fireworks API key with permissions to launch RFT jobs (stored in your shell or .env)
- Command-line access (terminal or shell)
1. Install dependencies
Install the latest eval-protocol SDK directly from the main branch and make sure pytest is on your PATH.
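The install commands might look like the following sketch; the Git URL is an assumption — check the Eval Protocol repository for the canonical package source:

```shell
# Install eval-protocol from the main branch (repo URL is an assumption)
pip install "git+https://github.com/eval-protocol/python-sdk.git@main"

# Make sure pytest is available for running the evaluation
pip install pytest
```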
2. Download the evaluator and dataset
Run this Python script to download two files from the Eval Protocol repository into a folder on your machine called gsm8k_artifacts/:
- Test script (test_pytest_math_example.py): defines how to evaluate math answers
- Sample dataset (gsm8k_sample.jsonl): contains example math problems to test on
tutorial/download_gsm8k_assets.py
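The sample dataset is JSONL: one JSON object per line. A minimal loader sketch (the record fields shown in the comment are illustrative assumptions — inspect the downloaded file for the real schema):

```python
import json
from pathlib import Path

def load_jsonl(path):
    """Parse a JSONL file into a list of dicts, one per non-empty line."""
    rows = []
    for line in Path(path).read_text().splitlines():
        if line.strip():
            rows.append(json.loads(line))
    return rows

# Illustrative record shape (an assumption, not the verified schema):
# {"messages": [{"role": "user", "content": "A math word problem..."}],
#  "ground_truth": "72"}
```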
3. Run the evaluation
First, start the local UI server so you can view evaluation results; it serves at http://localhost:8000. Keep this terminal running.
Then, in a new terminal, run the test script to evaluate the model on sample math problems:
The pytest script will also register your evaluator and dataset with Fireworks automatically, so you can use them in the next step for RFT.
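The evaluation run is a plain pytest invocation against the downloaded test file. A sketch (not the tutorial's verbatim command; your flags may differ):

```shell
# From the directory containing gsm8k_artifacts/, run the evaluation verbosely
pytest gsm8k_artifacts/test_pytest_math_example.py -v
```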

4. Start training
First, set your Fireworks API key so the Fireworks CLI can authenticate you, then create the RFT job. This tutorial uses a small base model (qwen3-0p6b) to keep training fast and inexpensive. Because your evaluator and dataset were already registered with Fireworks in the last step, you don't need to specify them again here.
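Setting the key in your shell might look like the following sketch; FIREWORKS_API_KEY is the conventional variable name read by Fireworks tooling, and the value shown is a placeholder:

```shell
# Export the key for the current shell session (placeholder value)
export FIREWORKS_API_KEY="fw_your_api_key_here"

# Or persist it in a .env file instead of exporting it each session
echo 'FIREWORKS_API_KEY=fw_your_api_key_here' > .env
```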

You can also store your API key in a .env file instead of exporting it each session.

Monitor your training progress
Your RFT job is now running. You can monitor progress in the dashboard links provided by the CLI output.

Evaluate accuracy regularly
Re-run the pytest evaluation command to measure your model's performance on new checkpoints. This helps you see how your model's accuracy improves over time and decide when to stop training.
Customize your evaluation
You can adjust the evaluation logic to better fit your needs:
- Modify reward shaping: Edit the scoring logic in test_pytest_math_example.py to match your answer format expectations
- Use your own data: Replace the sample dataset by either editing the JSONL file locally or passing --dataset-jsonl when creating the RFT job
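The scoring idea behind GSM8K-style evaluators is numeric answer checking: pull the final number out of the model's reply and compare it to the ground truth. A minimal sketch of that logic (an illustration, not the SDK's actual implementation):

```python
import re

def extract_final_number(text):
    """Return the last number in the text as a float, or None if there is none."""
    matches = re.findall(r"-?\d+(?:\.\d+)?", text.replace(",", ""))
    return float(matches[-1]) if matches else None

def score(response, ground_truth):
    """Binary reward: 1.0 if the final numbers match, else 0.0."""
    pred = extract_final_number(response)
    truth = extract_final_number(ground_truth)
    if pred is None or truth is None:
        return 0.0
    return 1.0 if pred == truth else 0.0
```

A stricter or more lenient variant (e.g. tolerance for floating-point answers, or requiring a specific "#### answer" format) is where reward shaping happens.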
What’s happening behind the scenes
Understanding the training workflow:
- Evaluation registration: The pytest script evaluates a small GSM8K subset using numeric answer checking, then automatically registers both your evaluator and dataset with Fireworks
- RFT job creation: The create rft command connects your registered evaluator and dataset to a Reinforcement Fine-Tuning job for your chosen base model
- Continuous improvement: As training progresses, evaluation scores on the held-out set reflect improved accuracy, allowing you to iterate quickly before scaling to larger experiments