The Training API is currently in private preview. Request early access to get started.
What is the Training API?
Fireworks Training API lets you write training logic in plain Python on your local machine while model computation runs on remote GPUs managed by Fireworks. Most users should start from cookbook recipes, the recommended entry point for standard SFT, DPO, and GRPO-style training, as well as async RL loops for agentic RL. Fork a recipe when you want to adapt an existing loop with your own loss, reward, rollout function, data loading, or checkpointing behavior. Use the Direct Training SDK when you need full control over training behavior.

| Mode | Best for | Infrastructure |
|---|---|---|
| Cookbook recipes | Recommended entry point for adapting existing SFT/DPO/GRPO-style loops, including async RL for agentic RL | You configure and implement simple loss, reward, or rollout functions; platform runs GPUs |
| Direct Training SDK | Full control over training behavior | You drive the training flow; platform runs GPUs |
Who does what
| Fireworks handles | Cookbook recipes handle | Direct Training SDK users implement |
|---|---|---|
| GPU provisioning and cluster management | Training loop structure for supported recipes | Training loop logic (forward_backward_custom + optim_step) |
| Service-mode trainer lifecycle (create, health-check, reconnect, delete) | Resource setup, health checks, reconnect, and cleanup | Manager/client wiring when working below recipe utilities |
| Distributed forward pass, backward pass, optimizer execution | Common losses and reward/evaluation plumbing | Loss function and batch construction |
| Checkpoint storage and export | Checkpoint save, resume, promotion, and weight sync helpers | Checkpoint calls (save_weights_for_sampler_ext, DCP snapshots) |
| Inference deployments and hotload | Deployment sampling and serving-integrated evaluation for RL recipes | Custom rollout, sampling, and evaluation logic |
| Preemption recovery and job resume | Resume logic for supported recipe checkpoints | Resume policy and state restoration calls |
| Distributed training (multi-node, sharding, FSDP) | Config surfaces for learning rate, grad accumulation, context length, W&B | Hyperparameter schedules, data pipeline, and experiment tracking |
System architecture
How service-mode training works
Datums
A `Datum` is the unit of training data sent to the remote GPU. It wraps tokenized input and the per-token weights your loss function needs. Token weights tell the loss function which tokens to train on:
- 0.0 = prompt token (don't train on this)
- 1.0 = response token (train on this)
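To make the weight convention concrete, here is a minimal sketch in plain Python. This is not the SDK's actual `Datum` constructor (its exact fields are not shown here); it only illustrates how a weight vector for a prompt/response pair lines up with the token sequence:

```python
def token_weights(prompt_len: int, total_len: int) -> list[float]:
    """Per-token loss weights: 0.0 for prompt tokens, 1.0 for response tokens."""
    return [0.0] * prompt_len + [1.0] * (total_len - prompt_len)

# A 5-token sequence whose first 2 tokens are the prompt:
weights = token_weights(prompt_len=2, total_len=5)
print(weights)  # [0.0, 0.0, 1.0, 1.0, 1.0]
```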
Logprobs and forward_backward_custom
When you call `forward_backward_custom`, the GPU runs a forward pass and returns per-token log-probabilities as PyTorch tensors with `requires_grad=True`. Your loss function computes a scalar loss, the API calls `loss.backward()`, and the gradients are sent back to the GPU for the model backward pass.
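To show the arithmetic such a loss function performs, here is a hedged, framework-free sketch of a weighted negative log-likelihood over per-token log-probabilities and `Datum` weights. Real loss functions operate on PyTorch tensors so gradients can flow back through `loss.backward()`; plain floats are used here only to illustrate the computation:

```python
def weighted_nll(logprobs: list[float], weights: list[float]) -> float:
    """Mean negative log-likelihood over tokens with nonzero weight."""
    total = sum(-lp * w for lp, w in zip(logprobs, weights))
    denom = sum(weights)
    return total / denom if denom else 0.0

# The prompt token (weight 0.0) is ignored; only response tokens contribute:
loss = weighted_nll([-0.1, -2.0, -1.0], [0.0, 1.0, 1.0])
print(loss)  # 1.5
```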
After the backward pass, call `optim_step` to apply the optimizer update.
Futures
All training client API calls return futures. Call `.result()` to block until completion. Without `.result()`, errors are silently swallowed.
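This failure mode is standard future semantics, demonstrable with Python's stdlib `concurrent.futures` (the training client's futures are assumed to behave analogously): an exception raised inside the call is stored on the future and only surfaces when `.result()` is called:

```python
from concurrent.futures import ThreadPoolExecutor

def failing_step():
    raise RuntimeError("training step failed")

with ThreadPoolExecutor(max_workers=1) as ex:
    fut = ex.submit(failing_step)
    # No exception is raised here: the error sits silently inside the future.
    captured = None
    try:
        fut.result()  # .result() re-raises the stored exception
    except RuntimeError as e:
        captured = str(e)

print(captured)  # training step failed
```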
Checkpointing and weight sync
After training, you export checkpoints for serving:
- Base checkpoint: full model weights. Use for the first checkpoint.
- Delta checkpoint: only the diff from the previous base (~10x smaller). Use for subsequent checkpoints.
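The base/delta relationship can be sketched with plain dictionaries. This is purely illustrative: real checkpoints are tensor state dicts, and the SDK manages delta chaining for you via `WeightSyncer`:

```python
def make_delta(base: dict, new: dict) -> dict:
    """Store only the per-weight difference from the base checkpoint."""
    return {k: new[k] - base[k] for k in new}

def apply_delta(base: dict, delta: dict) -> dict:
    """Reconstruct full weights from a base checkpoint plus a delta."""
    return {k: base[k] + delta[k] for k in base}

base = {"w": 1.0, "b": 0.5}
new = {"w": 1.1, "b": 0.5}
delta = make_delta(base, new)          # most entries are zero, so it compresses well
restored = apply_delta(base, delta)
print(restored == new)  # True
```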
Key APIs
| API | Purpose |
|---|---|
| `TrainerJobManager` | Create, resume, reconnect, and delete service-mode trainer jobs |
| `FireworksClient` | Standalone checkpoint operations such as listing checkpoints or promoting a model without a live training instance |
| `FiretitanServiceClient` | Connect to a live trainer endpoint and create a `FiretitanTrainingClient` |
| `FiretitanTrainingClient` | `forward_backward_custom`, `optim_step`, checkpointing methods |
| `DeploymentManager` | Create deployments, weight sync, and warmup |
| `DeploymentSampler` | Client-side tokenized sampling from deployments |
| `WeightSyncer` | Manages checkpoint and weight sync lifecycle with delta chaining |
Renderers
Chat-template formatting, stop-token handling, and loss-weight masking for SFT/DPO datasets are handled by renderers: pluggable per-model classes that turn raw conversations into the trainer's `Datum` shape. Most users never touch a renderer directly; cookbook recipes pick the right one for the `base_model` you set. If you need to author a new renderer or debug parity against HuggingFace, the detailed implementation lives in the cookbook's `skills/renderer/` skill.
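As a toy illustration of what a renderer does (the real renderer classes, their chat templates, and stop-token handling are model-specific and live in the cookbook), here is a sketch that flattens a conversation and masks loss to assistant turns only, using whitespace-split words as stand-ins for tokens:

```python
def render(messages: list[dict]) -> tuple[list[str], list[float]]:
    """Flatten a chat into 'tokens' (words, for illustration) plus loss
    weights: 0.0 for user/system content, 1.0 for assistant content."""
    tokens, weights = [], []
    for msg in messages:
        words = msg["content"].split()
        tokens.extend(words)
        weights.extend([1.0 if msg["role"] == "assistant" else 0.0] * len(words))
    return tokens, weights

tokens, weights = render([
    {"role": "user", "content": "What is 2+2?"},
    {"role": "assistant", "content": "It is 4."},
])
print(weights)  # [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
```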
Next steps
- Quickstart — get a custom training loop running in minutes
- Training and Sampling — end-to-end API walkthrough
- Loss Functions — built-in and custom loss functions
- Vision Inputs — fine-tune vision-language models with image and text data
- The Cookbook — ready-to-run recipes for SFT, DPO, ORPO, GRPO/IGPO, and async RL (experimental)