Fireworks supports warm-starting RFT from models that have already been fine-tuned. Upload a model to Fireworks and use the warm start option to continue training it with RFT (for example, from an SFT LoRA) rather than starting from scratch with a base model.

When to use warm start

Use the --warm-start-from flag when you want to:
  • Start RFT from an SFT model you’ve trained with Fireworks
  • Continue training from an existing fine-tuned LoRA adapter you’ve uploaded to Fireworks

Basic usage

eval-protocol create rft \
  --warm-start-from accounts/your-account/models/<SFT_MODEL_ID> \
  --output-model <RFT_MODEL_ID>
When using --warm-start-from, do NOT include --base-model. The base model is automatically determined from the LoRA adapter.
# Wrong, includes --base-model
eval-protocol create rft \
  --base-model accounts/fireworks/models/llama-v3p1-8b-instruct \
  --warm-start-from accounts/your-account/models/<SFT_MODEL_ID>

SFT to RFT workflow

1. Create or upload SFT model

Get started with supervised fine-tuning on Fireworks:
firectl create sftj \
  --base-model accounts/fireworks/models/<BASE_MODEL_ID> \
  --dataset accounts/your-account/datasets/<DATASET_ID> \
  --output-model <MODEL_ID>
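The SFT job runs asynchronously, so check that it has finished before moving on. A sketch, assuming your firectl version exposes a matching getter for sftj and using the job ID printed by the create command:
# Check the fine-tuning job's state before warm-starting RFT from its output
firectl get sftj <JOB_ID>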
Or if you already have a LoRA adapter, upload it to Fireworks:
firectl create model <MODEL_ID> /path/to/files/ \
  --base-model "accounts/fireworks/models/<BASE_MODEL_ID>"
Learn more about uploading custom LoRA adapters in the Custom Models guide.
2. Start RFT from SFT model

Use the SFT model as the starting point and combine --warm-start-from with standard RFT parameters:
eval-protocol create rft \
  --warm-start-from accounts/your-account/models/<SFT_MODEL_ID> \
  --output-model <RFT_MODEL_ID> \
  --epochs 2 \
  --learning-rate 5e-5 \
  --temperature 0.8
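Once the RFT job completes, the output model should appear alongside your other models (the grep here is just a convenience filter):
# Confirm the trained model is registered in your account
firectl list models --account accounts/your-account | grep <RFT_MODEL_ID>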

Troubleshooting

If the job is rejected because both --base-model and --warm-start-from were specified, remove the --base-model flag; the base model is inferred from the warm start model.

If the warm start model cannot be found, verify that it exists in your account:
firectl list models --account accounts/your-account
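If the model is listed but warm start still fails, inspect the individual record to confirm it is a LoRA adapter and to see which base model it resolves to (output field names vary by firectl version):
firectl get model accounts/your-account/models/<SFT_MODEL_ID>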